Gabor Szabo, Diss. ETH No. 16207, 2005
Optimization Problems in
Mobile Communication
presented by
Dipl. Eng. in Comp. Sc. Gabor Vilmos Szabo
born 19.12.1976, citizen of Zalau, Romania
Zusammenfassung
In one way or another, mobile communication has affected many people. Mobile radio networks have made it possible to be reachable even in the most remote places. Recently, much research has been invested in the planning and optimization of mobile radio networks. Network providers and users alike can only profit when algorithm designers deliver efficient solutions for the optimization problems arising in this field.
In this dissertation we analyze three algorithmically interesting problems from mobile telecommunication: base station placement with frequency assignment, OVSF code assignment, and joint base station scheduling.
We present a novel solution technique for the problem of positioning base station transmitters and assigning frequencies to them. This question is a core problem in the design and optimization of GSM networks. Since the most interesting version of this problem is NP-hard, we use a heuristic approach based on the evolutionary paradigm to find good solutions. We study and compare two multi-objective techniques and a new algorithm, the "steady state evolutionary algorithm with Pareto tournaments" (stEAPT). The evolutionary algorithms used here raise an interesting data structure problem. We present a technique based on priority queues that solves the arising "maxima layers" problem. This problem consists of partitioning a set of n points into layers such that no two points within a layer dominate each other. Here, a point in R^d dominates another point if it is no worse than the other in any dimension and better in at least one.
Furthermore, we consider the problem of dynamically allocating OVSF codes in a UMTS network. The combinatorial core of the OVSF code assignment problem is to assign nodes of a complete binary tree of height h (the code tree) to n simultaneous connections such that no two assigned codes lie on a common root-to-leaf path. A connection that requires a fraction 2^(-d) of the available bandwidth needs an arbitrary code at depth d of the tree. This code assignment can change over time; in doing so, we want to minimize the number of code
Acknowledgment
No thesis is ever the product of only one person's efforts, and certainly this one was no exception. It would never have become reality without the help and suggestions of my advisors, colleagues and friends.
First of all I would like to thank my supervisor Prof. Peter Widmayer for his constant guidance, support and for his thoughtful and creative comments.
I am also very grateful to my co-examiners Dr. Thomas Erlebach
and Dr. Riko Jacob for reading my thesis and for working with me
on some of the challenging algorithmic problems presented in this
piece of work. I would also like to express many thanks to my coau-
thors Matus Mihalak, Marc Nunkesser, Karsten Weicker and Nicole
Weicker for the brainstorming hours and also for the fun we had at
conferences. In this regard, I am also indebted to Michael Gatto for sharing the office with me for the past two years, for being such a great colleague, for proofreading my thesis as well as for the discussions on photography and many other topics.
I would also like to take this opportunity to thank many people and
friends with whom I spent time, academically and socially. I thank
Dr. Christoph Stamm for giving me advice on good programming
style and also for being my supervisor during my diploma work. A
special thank you goes to Barbara Heller who always helped me when
I needed advice regarding the academic life at the ETH and life in
Zurich. I am also grateful to my former and current colleagues for
their friendship: Luzi Anderegg, Dr. Mark Cieliebak, Jorg Derungs,
Dr. Stephan Eidenbenz, Dr. Alex Hall, Dr. Tomas Hruz, Zsuzsanna
Liptak, Dr. Sonia P. Mansilla, Dr. Leon Peeters, Dr. Paolo Penna,
Conrad Pomm, Dr. Guido Proietti, Franz Roos, Konrad Schlude, Dr.
David Taylor and Mirjam Wattenhofer. In this respect I would also
like to thank my Hungarian and Romanian friends from the ETH for
organizing the numerous parties and many other social events.
I am also grateful to my parents for their support and encourage-
ment during these five years.
My biggest thank you goes to Veronika Bobkova, who always motivated me when I was losing faith in my work, and who supported me throughout.
Contents
1 Introduction
  1.1 GSM and UMTS System Planning and Optimization
  1.2 Short Theory of Algorithms and Complexity
  1.3 Summary of Results
Chapter 1

Introduction
or a function of the input size) off the optimum solution. This contrasts with the concept of a heuristic, where no such guarantee can be given, but good solutions are found for most instances arising in practice. In practice, heuristics often outperform provably worst-case optimum approximation algorithms.
From a practical point of view, mobile phones affect all of us.
New mobile phone networks are being designed for each new tech-
nology generation. Each generation poses a different set of problems.
When designing new networks, algorithmically challenging and com-
putationally intensive problems without existing theoretical models
have to be solved. In this thesis we elaborate on three of these chal-
lenging problems: coverage and frequency allocation (in Chapter 2),
spreading code allocation (in Chapter 4), and load balancing with cell
synchronization (in Chapter 5). In addition we analyze a data struc-
ture problem (in Chapter 3) that improves the runtime efficiency of a
class of evolutionary multi-objective optimization strategies.
First, we briefly present some aspects of the design and optimization problems raised by 2nd and 3rd generation mobile telecommunication networks. For a technical description of existing and future generation mobile networks, the interested reader is referred to [76, 46] and [60]. Next, we briefly present the algorithmic tools
and theory used for solving and analyzing the problems tackled in
this thesis. At the end of this chapter we present our main results
and in the following chapters we present in detail each of the above
mentioned problems.
Cook [20] and Levin [61] proved that the satisfiability problem is NP-complete. To show NP-completeness of a new problem it is enough to show that it is in NP and to find a polynomial time reduction from one of the problems already known to be NP-complete. Because Karp-type reductions are transitive, the existence of a polynomial time algorithm for one of the NP-complete problems would imply polynomial time algorithms for all problems in NP.
Approximation Algorithms
In the previous section we presented the complexity class NP and the concept of NP-completeness. In real life applications it turns out that most interesting problems are NP-complete. In such cases, one cannot expect polynomial time optimal algorithms unless P = NP. The widely believed assumption in complexity theory is that P ≠ NP, and thus there is little hope of proving the contrary. Hence, it is worth looking for algorithms that run in polynomial time and return, for every input, a feasible solution whose measure is not too far from the optimum.
Given an input instance x for an optimization problem Π, we say that a feasible solution y ∈ S(x) is an approximate solution of problem Π, and that any polynomial time algorithm that returns a feasible solution is an approximation algorithm.
The quality of an approximate solution can be defined in terms of the distance of its measure from the optimum. In this thesis we use only relative performance measures. The following definition is from [7]. There are other definitions of the performance ratio that use values smaller than one for minimization problems. The two definitions are essentially equivalent, and for ease of presentation we use the one from [7].
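As a concrete illustration (our own, not from the thesis), the classic maximal-matching heuristic gives a 2-approximation for minimum vertex cover: every edge picked must be covered by at least one of its endpoints in any optimal cover, so taking both endpoints at most doubles the cost.

```python
# A classic 2-approximation for minimum vertex cover: repeatedly pick an
# uncovered edge and add both of its endpoints to the cover. The result
# is at most twice the size of an optimal vertex cover.
def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))   # both endpoints of a matched edge
    return cover

# Example: a path a-b-c-d; the optimum cover {b, c} has size 2, and the
# approximation returns at most 4 vertices.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)
assert len(cover) <= 4
```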
Heuristics
In the previous section we presented the notion of approximation al-
gorithms that provide good quality solutions (in terms of running time
and solution measure) to N P-complete problems. However, in prac-
tice most applications use algorithms that do not provide such guaran-
tees and still produce good enough solutions in a short time for most
input instances.
In the following, we refer to algorithms that provide no worst-case (or average-case) guarantees on either their execution time or their solution quality as heuristic methods (or heuristics). In practice, heuristics often outperform provably worst-case optimum approximation algorithms.
Evolutionary algorithms are among the state-of-the-art heuristic methods that can efficiently handle multi-objective optimization problems. In Chapter 2 we present an evolutionary algorithm tailored to a specific multi-objective optimization problem. Other local-search-type heuristics include simulated annealing, tabu search, neural networks, and ant colony optimization. A brief description of these methods can be found in [7] and [27].
Online Computation
Traditional optimization techniques assume complete knowledge of
all data of a problem instance. However, in reality decisions may
have to be made before the complete information is available. This
observation has motivated the research on online optimization. An
algorithm is called online if it makes decisions whenever a new piece of data requests an action. The input is given as a request sequence r1, r2, . . . , presented one by one. The requests must be served by an online algorithm ALG at the time they are presented. When serving request ri, the algorithm ALG has no knowledge about the requests rj, j > i. Serving ri incurs a cost, and the overall goal is to minimize the total service cost of the request sequence.
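As a concrete illustration (our own, not from the thesis), the classic ski-rental problem shows the flavor of competitive analysis: renting until the cumulative rent reaches the purchase price yields a 2-competitive online algorithm.

```python
# Ski rental: renting costs 1 per day, buying costs B; the total number
# of ski days is unknown in advance. The break-even strategy rents for
# B - 1 days and buys on day B. Its cost never exceeds twice the cost
# of the clairvoyant offline optimum min(days, B).
def online_cost(days, buy_price):
    if days < buy_price:
        return days                        # rented every day, never bought
    return (buy_price - 1) + buy_price     # rented B - 1 days, then bought

def offline_cost(days, buy_price):
    return min(days, buy_price)            # rent throughout, or buy on day 1

B = 10
assert all(online_cost(d, B) <= 2 * offline_cost(d, B) for d in range(1, 100))
```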
We can formally define online optimization problems as a request-
answer game, in the following way.
networks that use the TDD radio interface together with the CDMA
channel separation technology is the joint base station scheduling
(JBS) problem. In Chapter 5 we present different versions of the JBS
problem. Consider a scenario where radio base stations need to send
data to users with wireless devices. Time is discrete and slotted into
synchronous rounds. Transmitting a data item from a base station to a
user takes one round. A user can receive the data from any of the base
stations. The positions of the base stations and users are modeled as
points in Euclidean space. If base station b transmits to user u in a certain round, no other user within distance at most ‖b − u‖2 from b can receive data in the same round, due to interference phenomena.
The goal is to minimize, given the positions of the base stations and of
the users, the number of rounds until all users have received their data.
We call this problem the Joint Base Station Scheduling Problem (JBS)
and consider it on the line (1D-JBS) and in the plane (2D-JBS). For
1D-JBS, we give an efficient 2-approximation algorithm and poly-
nomial time optimal algorithms for special cases. We also present
an exact algorithm for the general 1D-JBS problem with exponential
running time. We model transmissions from base stations to users as
arrows (intervals with a distinguished endpoint) and show that their
conflict graphs, which we call arrow graphs, are a subclass of perfect
graphs. For 2D-JBS, we prove NP-hardness and show that some natural greedy heuristics do not achieve an approximation ratio better than Ω(log n), where n is the number of users.
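To make the 1D interference rule concrete, here is a small hedged sketch (our own helper function, not from the thesis): a transmission from base station b to user u blocks, in the same round, every other user within distance |u − b| of b.

```python
# 1D-JBS interference check: a transmission (b, u) jams every other user
# w with |w - b| <= |u - b| in the same round. Two transmissions
# conflict if either receiver lies in the other's interference range.
def conflicts(t1, t2):
    b1, u1 = t1
    b2, u2 = t2
    # t2's receiver is jammed by t1, or t1's receiver is jammed by t2
    return abs(u2 - b1) <= abs(u1 - b1) or abs(u1 - b2) <= abs(u2 - b2)

# Base station at 0 sends to a user at 5; a user at 3 served from base
# station 10 is inside the interference range, so the two conflict.
assert conflicts((0, 5), (10, 3))
# A user at 8 served from 10 is outside the range of both transmissions.
assert not conflicts((0, 5), (10, 8))
```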
Chapter 2

BST Placement with Frequency Assignment
2.1 Introduction
The engineering and architecture of large cellular networks is a highly
complicated task with substantial impact on the quality-of-service
perceived by users, the cost incurred by the network providers, and
environmental effects such as radio smog. Because so many different
aspects are involved, the respective optimization problems are proper
objects for multiobjective optimization and may serve as real-world
benchmarks for multiobjective problem solvers.
For all cellular network systems one major design step is selecting
the locations for the base station transmitters [BST Location (BST-L)
problem] and setting up optimal configurations such that coverage of
the desired area with strong enough radio signals is high and deploy-
ment costs are low.
For Frequency Division/Time Division Multiple Access systems
a second design step is to allocate frequency channels to the cells.
For Global System for Mobile communications systems, a fixed fre-
quency spectrum is available. This spectrum is divided into a fixed
the interference.
Figure 2.1.1 gives a simple example of interference: for transmitter A, channels 2 and 4 are noisy from a co-channel interference point of view, since B uses the same channels and A serves demands in the overlapping region. With a frequency gap gAC = 2, channel 1 is noisy from an adjacent-channel interference point of view, because channel 2 is also present at transmitter A and hence violates the frequency gap.
The set of all candidate solutions S is defined as

S = { {t1, . . . , tm} | m ∈ ℕ, ti ∈ T for i ∈ {1, . . . , m} }.
[Figure 2.1.1: transmitters A ({1, 2, 4}), B ({2, 4, 6}), and C ({1, 3, 5}) with overlapping coverage areas.]
[Figure: the evolutionary cycle: initialization, parent selection, recombination, mutation, evaluation, environmental selection, and termination criteria.]
2.3.2 Initialization
To initialize individuals at random, we start with an empty individual,
and we fill it with transmitters by applying the repair function. To
produce another, different, individual, we just reorder the sequence
of demands, and then repeat the first step. The reason for the reorder-
ing lies in the property of the repair function to take the order into
account.
This procedure has the advantage of producing legal individuals
only. A pure random setting of the single values of an individual
would instead lead with a high probability to an illegal individual.
A small disadvantage of this approach is that the maximal population size depends on the number of demand nodes, since the described initialization yields only as many different individuals as there are permutations of the sequence of demand nodes. However, this is not an issue for our real-world problem instances.
2.3.3 Mutation
Just like in most successful real-world applications of evolutionary
algorithms, we need to include problem knowledge in the genetic op-
erators to make the overall process effective and efficient. Since there
are some rules of thumb used by experts to get better solutions, we
introduce several mutation operators that use information produced
by the evaluation function. These mutation operators are able to yield
local changes of a given solution. We call them directed mutations.
Operators with a similar intention have also been used in timetabling
(e.g. [70]).
Using only such operators does not guarantee that all points in the
search space are reachable. Therefore, we also introduce some addi-
tional mutation operators that do not consider problem knowledge
and we call them random mutations.
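As a hedged sketch (names and structure are our own, not the thesis implementation), a directed mutation could use evaluation feedback to adjust the transmitter serving the worst-covered demand, while a random mutation changes an arbitrary transmitter:

```python
import random

# Sketch of directed vs. random mutation on an individual represented
# as a list of transmitters, each a dict with a 'power' setting. The
# per-transmitter 'coverage' feedback is assumed to come from the
# evaluation function (hypothetical interface).
def directed_mutation(individual, coverage):
    # boost the power of the transmitter with the worst coverage
    worst = min(range(len(individual)), key=lambda i: coverage[i])
    individual[worst]['power'] += 1
    return individual

def random_mutation(individual):
    # change an arbitrary transmitter's power up or down
    i = random.randrange(len(individual))
    individual[i]['power'] += random.choice((-1, 1))
    return individual

ind = [{'power': 10}, {'power': 12}]
directed_mutation(ind, coverage=[0.4, 0.9])
assert ind[0]['power'] == 11   # the worst-covered transmitter was boosted
```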
2.3.4 Recombination
In addition to the different mutation operators, we want to use the
possibility of combining genetic material of two individuals. Such a
2.3.5 Selection
The selection method is based on the cost cost(ind) and the interference IR(ind) of the individuals ind in the population.
We have investigated the two standard multiobjective methods
Strength Pareto Evolutionary Algorithm 2 (SPEA2) [92] and the fast
elitist Nondominated Sorting Genetic Algorithm NSGA-II [24].
SPEA2 uses an external archive where the best candidate solutions are stored. Each individual in the archive and the population is assigned a strength value, which denotes how many individuals it dominates. The raw fitness of an individual is computed as the sum of the strength values of all individuals that dominate it.
The final fitness is obtained by adding density information to the
raw fitness which favors individuals with fewer neighbors from a set
of individuals with equal raw fitness. The parental selection is implemented as a tournament according to the final fitness values. The time complexity of the algorithm depends primarily on the density computation needed for the integration phase. If both the archive size and the population size are N, the integration of N newly created individuals using the naive approach presented in [92] takes O(N²) time. If the archive size is N and the population size is 1, the time to integrate N individuals is O(N³).
NSGA-II computes all layers of non-dominated fronts. This results in a rank for each individual. Furthermore, a crowding distance is computed that measures how close the nearest neighbors are. Selection takes place using a partial order in which an individual with lower rank is considered better; if two individuals have equal rank, the one with the larger crowding distance is preferred. The complexity of this method is determined by the expensive computation of the non-dominated fronts. In [24] a naive approach is used to compute the Pareto fronts, which turns out to run in O(M N²) time, where M is the number of objectives and N is the population size.
Case 1: Both sets are empty. That means that new is a new non-
dominated candidate solution and should be inserted into the
population. The individual with worst fitness value worst is
deleted and the rank of individuals in IsDominated(ind) is
decreased by one (light gray area in Figure 2.3.2). The new
individual new is inserted with rank 0.
For the case that all individuals are in the Pareto front, we considered computing a crowding measure online to determine the worst individual and to prevent genetic drift. However, we have not encountered such a situation so far.
[Figure 2.3.2: insertion of the new individual new in objective space (f1, f2). Cases 1 and 2 show the regions dominated by and dominating new, together with the individual worst.]
2.3.6 Algorithm
The main loop of stEAPT is sketched in Algorithm 2.1; it follows the usual steady state scheme. For the variation of a selected individual, only one of the operators (directed mutation, random mutation, or recombination) is applied. The probability of applying these different kinds of operators is set by the parameter pDM for directed mutation and pRM for random mutation. pDM + pRM < 1 is required, and the probability of applying recombination follows as pRC = 1 − pDM − pRM. The repair function is applied to each newly created illegal individual.
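A hedged sketch of such a steady-state loop (all function names are placeholders for the thesis's operators, not its actual implementation) could look as follows:

```python
import random

# Sketch of a steady-state evolutionary loop in the style of stEAPT:
# one child per iteration, operator chosen with probabilities
# pDM / pRM / pRC = 1 - pDM - pRM, repair applied to the child.
def steady_state_loop(population, evaluate, select_parent, replace_worst,
                      directed_mutation, random_mutation, recombine, repair,
                      p_dm=0.4, p_rm=0.3, generations=1000):
    assert p_dm + p_rm < 1          # remainder is recombination probability
    for _ in range(generations):
        r = random.random()
        if r < p_dm:
            child = directed_mutation(select_parent(population))
        elif r < p_dm + p_rm:
            child = random_mutation(select_parent(population))
        else:
            child = recombine(select_parent(population),
                              select_parent(population))
        child = repair(child)       # newly created individuals may be illegal
        evaluate(child)
        replace_worst(population, child)  # steady state: one in, one out
    return population
```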
[Figure: case 3 of the insertion in objective space (f1, f2): the new individual new is dominated.]
2.4 Experiments
2.4.1 Experimental Setup
The three described multiobjective methods are applied to realistic demand distributions on a 9000 × 9000 m² area in the city of Zurich. We use two different resolution grids on top of the service area. The demand nodes are distributed on a 500 m resolution grid; for the transmitter positions we use a finer resolution of 100 m.
The number of calls at a certain demand node is computed according to the formula described in [79]. This formula is based on information related to the call attempt rate (considered the same for every mobile phone), the number of mobile units per square kilometer, the mean call duration, and the size of the area represented by the demand node. All these quantities relate factors like land usage, population density, and vehicular traffic to the calling behavior of the mobile units.
For the studied service area we have |D| = 288 demand nodes
with a total number of 505 calls. Figure 2.4.1 shows the demand dis-
tribution: each disk represents a demand with its radius proportional
to the number of carried calls. The empty area is the lake of Zurich.
|F| = 64 in our case. We use a simple isotropic wave propagation model, where propagation depends only on the transmitting power (in dBmW). The cell radius of a transmitter is computed as wp(ti) := 25 · pow_i. We have a discrete set of power values that can be set for the transmitters: MinPow = 10 dBmW, MaxPow = 130 dBmW, with increments of 1 dBmW. The transmitter positions are chosen from a discrete set of positions, given by the terrain resolution and the service area. The deployment cost of each transmitter is cost(ti) := 10 · pow_i + cap_i, for ti = (pow_i, cap_i, pos_i, Fi).
The three multiobjective methods have been applied to the given problem using various settings for population size, tournament size, number of objective evaluations, and probability distribution for the different types of operators (pDM, pRM, pRC). For each algorithm with a specific parameter setting, 16 independent, consecutive runs have been executed.
[Figures: Pareto fronts (cost versus interference) obtained by NSGA-II, SPEA2, and stEAPT, with one front per run (1st through 16th run). Cost ranges roughly from 7500 to 11500, interference from 0 to 0.8.]
Chapter 3

Maxima Peeling in d-Dimensional Space
3.1 Introduction
Most real-life optimization problems involve more than one optimization criterion, and usually these objectives conflict with each other. Hence, there is no single best solution to such problems but rather a set of good solutions. One state-of-the-art meta-heuristic approach to solving complicated, real-life multiobjective optimization problems is to use evolutionary algorithms.
Most multiobjective evolutionary algorithms (MOEA) use Pareto domination to guide the search in finding a set of "best" solutions. In a d-dimensional space, a solution dominates another solution if and only if it is at least as good as the other solution in all objectives and there is at least one dimension in which it is strictly better. Here, we focus on the case where all objectives are to be maximized. For an example, consider the solution represented by the point p in Figure 3.1.1 for the two-dimensional space. Every point inside the gray area is dominated by the point p. A solution (point) is nondominated if it is not dominated by any other solution. Recent MOEAs return the Pareto front, the solutions not dominated by any other solution in the search space.
Figure 3.1.1: The point p dominates all points inside the gray rectangle.

Even though most MOEAs nowadays work with several objectives simultaneously, they still transform these objectives into one single fitness value. This is necessary because what makes evolutionary
algorithms advance in the search space is the selection of highly fit
individuals (solutions) over less fit ones. Two of the most successful MOEAs are Zitzler and Thiele's strength Pareto evolutionary algorithm (SPEA [94]) and Deb et al.'s nondominated sorting genetic algorithm (NSGA-II [25]). As fitness value for an individual, they both use some combination of the number of individuals it dominates and the number of individuals it is dominated by. The computationally most expensive task of these algorithms is the fitness computation. Hence it makes sense to find algorithms that solve this task as fast as possible.
We focus our attention on the NSGA-II technique, where the fit-
ness value is given by the layer-of-maxima to which the individual
belongs. The set of maxima is the subset of the original points that are
nondominated. The layers-of-maxima problem asks for a partition of
the points into layers. The first layer is formed by the points in the set
of maxima. Layer i (i > 1) is formed by points that are dominated by at least one point from layer i − 1 and are not dominated by any point from any layer j ≥ i. The NSGA-II approach, as all evolutionary algorithms, maintains a population of individuals (solutions) from which
new individuals are created. The new individuals enter the population
only after a whole new generation of the same size as the old population
All the points belonging to the same layer are incomparable among themselves. An example of the correct labeling of a point p in two-dimensional space can be seen in Figure 3.1.2. The model of computation used in this chapter (unless stated otherwise) is the pointer machine model (SMM); for a definition of this model, the interested reader is referred to the chapter on machine models in [82]. We relax the pointer machine model by allowing reals to be stored and compared instead of integers.
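To make the layer definition concrete, here is a naive peeling computation (our own illustration with quadratic running time per layer, not the faster algorithms developed in this chapter):

```python
# Naive layers-of-maxima ("peeling"): repeatedly extract the set of
# nondominated points. All objectives are to be maximized.
def dominates(p, q):
    # p dominates q: no worse in every dimension, strictly better in one
    return (all(a >= b for a, b in zip(p, q))
            and any(a > b for a, b in zip(p, q)))

def maxima_layers(points):
    layers, remaining = [], list(points)
    while remaining:
        layer = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        layers.append(layer)
        remaining = [p for p in remaining if p not in layer]
    return layers

pts = [(1, 5), (4, 4), (5, 1), (3, 3), (2, 2)]
layers = maxima_layers(pts)
assert layers[0] == [(1, 5), (4, 4), (5, 1)]   # the set of maxima
assert layers[1] == [(3, 3)]
assert layers[2] == [(2, 2)]
```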
[Figure 3.1.2: Labeling the point p. The points inside the gray rectangle dominate p; the maximum label among them is 2, so l(p) = 1 + 2.]
upwards to the previous level and again the maximum label is chosen.
The propagation stops at the 1st level tree. The correct label of the
point q is the maximum of the labels returned from the active nodes
of the 1st-level tree plus one. Figure 3.2.1 visualizes the way a query
is performed in the multilevel balanced binary search tree. The path
shown on the figure is the search path and the nodes shown are the
active nodes.
Insert: Inserting the point p (after its label was computed) is done
by inserting the point into the 1st-level tree and then into the lower
level trees of the nodes on the search path. The insertion is carried
out recursively until the last level trees are reached. Here, besides
inserting the point p, the maximum labels of the nodes on the search
path are updated if smaller than the label of p.
If we use a balanced binary search tree for all d − 1 dimensions, then the sweep-hyperplane algorithm has a running time of O(n log^(d-1)(n)), which is not better than the algorithm presented in [51]. To improve the running time we consider a different data structure for the one-dimensional case (the (d−1)th-level trees). For this case we need a data structure that is able to answer layer-of-maxima queries and can be updated faster than in O(log(n)) time.
The problem reduced to the one-dimensional space is the follow-
ing. We are given the projection onto the line of a subset S of the
original points in S together with their correct layer-of-maxima label.
The layer-of-maxima labels reflect the layers in the d-dimensional
space and not on the line. We want to compute the intermediate
label for a query point q by looking at its projection onto the line
(considering only its q1 coordinate). We say intermediate label because this is the label computed at one of the (d−1)th-level trees in the multilevel layer-of-maxima tree. An example instance of the reduced one-dimensional layer-of-maxima query problem is shown in
Figure 3.2.2. To find the intermediate label of q one has to find the
point with the maximum label that dominates it in the reduced one-
dimensional space. For the instance in Figure 3.2.2 the intermediate
label is 2. We also want to be able to insert a point p that is already
labeled. The labels of the points partition the line into layer regions.
A layer region is an interval (a, b] with the property that all the query
points falling into this interval get the same intermediate label l. The
intervals are defined by the position and label of the point at their
[Figure 3.2.1: a query in the multilevel layer-of-maxima tree. The search path descends from the 1st-level tree (coordinate q_{d-1}) through the 2nd-level trees down to the (d−1)th-level trees (coordinate q_1); the active nodes return their maximum labels l_max,j, and the final label is 1 + max_j {l_max,j}.]

[Figures 3.2.2 and 3.2.3: an instance of the reduced one-dimensional problem with labeled points on the x1 axis and its layer regions, whose labels (5, 4, 2, 1, 0) decrease from left to right. Figure 3.2.4: the effect of insert(p, 5).]
right boundary b. In Figure 3.2.3 one can see the layer regions for the
instance presented in Figure 3.2.2. An important property of the layer
region partition is that the labels of the regions are decreasing from
left to right.
Answering a query is now reduced to the problem of finding the
upper bound b of the layer region to which the query point belongs
and returning its label.
An insertion of a labeled point p = (p1, p2, . . . , pd) can be carried out by finding the layer region to which it belongs based on the p1 coordinate. If the label of the point p is less than or equal to the label of the region, then the point is discarded. Otherwise, the region is split into two at the position p1, and the new point p defines the label of the newly created region on its left side. The predecessor regions of p with smaller labels than p have to be merged into one single region defined by p. An example of inserting the point p with label 5 is shown in Figure 3.2.4. After inserting the point p, the region with label 2 is split and the newly created region with label 5 is merged with the predecessor regions having labels 4 and 5.
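A hedged sketch of the layer-region structure (using a plain sorted list, so updates cost O(n) here rather than the O(log log n) the van Emde Boas queue achieves; the class and method names are our own):

```python
import bisect

# Layer regions on a line, kept as parallel sorted lists of right
# boundaries and labels; labels decrease from left to right.
class LayerRegions:
    def __init__(self):
        self.bounds = []   # right boundary of each region (sorted)
        self.labels = []   # label of each region

    def query(self, x):
        # label of the region containing x: first boundary >= x
        i = bisect.bisect_left(self.bounds, x)
        return self.labels[i] if i < len(self.bounds) else 0

    def insert(self, x, label):
        if label <= self.query(x):
            return                      # dominated: discard the point
        i = bisect.bisect_left(self.bounds, x)
        # merge predecessor regions whose label is not larger
        j = i
        while j > 0 and self.labels[j - 1] <= label:
            j -= 1
        self.bounds[j:i] = [x]
        self.labels[j:i] = [label]

regions = LayerRegions()
for x, l in [(8, 1), (6, 2), (4, 4), (2, 5)]:
    regions.insert(x, l)
assert regions.query(3) == 4
assert regions.query(7) == 1
assert regions.query(9) == 0   # nothing dominates a point beyond 8
```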
The operations needed to maintain the layer regions under inser-
tions and layer queries define a data structure that supports the fol-
lowing operations: predecessor, successor, insert, delete. One effi-
totic space usage nor the asymptotic runtime, and make our algorithm work in the comparison-based pointer machine model. Setting the flag doesn't change the asymptotic running time of the algorithm, and the preprocessing step can be done in O(n log^(d-2)(n) log log(n)) time.
The space usage is determined by the multilevel layer-of-maxima tree. The tree at the 1st level induces a factor O(n) storage, and every lower level induces a factor O(log(n)), except the last level (van Emde Boas priority queues), which induces a factor O(log(n) log log(n)) storage overhead. Altogether, the storage overhead of the sweep-hyperplane algorithm is O(n log^(d-2)(n) log log(n)). We conclude our analysis with the following theorem.

Theorem 3.3 The running time and space usage of Algorithm 3.2 is O(n log^(d-2)(n) log log(n)).
3.3.1 Algorithm
Every divide-and-conquer algorithm works in two steps: problem de-
composition (divide step) and solution merging (conquer step).
Divide step: We divide the original set S into two subsets A and B
such that the number of points in A and B is approximately the same
and in addition all the points in B have higher 1st-coordinates than
the points in A. This division can be done in linear time by finding
the median point pm (see [11]) according to the 1st coordinate. Fig-
ure 3.3.1 shows an example division using the vertical line L defined
by the 1st coordinate of the median point. One important property
[Figure 3.3.1: division of the point set into A (left of the vertical line L through the median point pm) and B (right of L).]
Invariant 3.4 The labels of the points in A are bounded from below
by the labels of the points in B that dominate them.
Merge step: In this step we have to update the labels of the points in the set A by taking into consideration the points from B that dominate them. Before the merging is started, all the points in B already have their correct labels. It is enough to consider only the projection of the two point sets A and B onto the division line L, because all the points in B have higher 1st coordinates than the points in A. If the points in both sets A and B are presorted according to their 2nd coordinates, then a simple parallel scan of the sets A and B (in decreasing 2nd coordinate order) is enough to initialize the label of
Figure 3.3.3: Sweep line approach to solve the base case contribution
problem in the plane.
the sweep line meets a point from the set A and a data structure up-
date is issued when a point from B is met. A problem instance in
two-dimensional space, together with the current sweep line s5 and
previous sweep line positions and the layer regions, is shown in Fig-
ure 3.3.3. The points of set A are represented with crosses and the
points from B are represented with circles. The figure also shows how
the labels change for the points in A that were already visited. In the
label notation l1 l2 for the visited points from set A, l1 represents
the old label of the point and l2 represents the new label after consid-
ering the points from B that dominate it. Hence, by using the sweep
line together with a van Emde Boas priority queue [81] the two-
dimensional contribution problem can be solved in O(n log log(n))
time and space.
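The parallel scan for the two-dimensional base case can be sketched as follows. This is only the simple initialization scan described above, under the assumption that B's labels are final and that a B-point dominates an A-point exactly when its 2nd coordinate is at least as large (the 1st coordinates are already ordered by the division); the van Emde Boas machinery that yields the O(n log log(n)) bound is not reproduced here, and all names are ours.

```python
def contribute(A, labels_A, B, labels_B):
    """Fold B's final labels into A's labels for d = 2.  A and B are
    lists of (x, y) sorted by decreasing y, every B-point has larger x
    than every A-point, and labels_A / labels_B are the matching labels.
    New label of an A-point: max(old, 1 + best label of dominating B-points).
    Propagation within A is left to the enclosing recursion."""
    new_labels = list(labels_A)
    best = 0                  # largest label among B-points scanned so far
    j = 0
    for i, (_, ay) in enumerate(A):
        while j < len(B) and B[j][1] >= ay:   # B-points dominating A[i]
            best = max(best, labels_B[j])
            j += 1
        if best:
            new_labels[i] = max(labels_A[i], 1 + best)
    return new_labels
```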
When n1, n2 are not constants and d > 2, after a split of the sets A and B we have three recursive calls (see lines 9-14 in Algorithm 3.6). The splitting can be done in linear time. In case n1 or n2 is a constant we can do exhaustive comparison to update the labels in A. When d = 2 we run the base case algorithm from Section 3.3.1. We can simplify this recursive function by considering n1 + n2 = n as the only input parameter representing the size of the set A ∪ B. After this simplification we get:

    Tc(n, d) = 2 Tc(n/2, d) + Tc(n, d-1) + n    if n > 2 and d > 2
    Tc(n, d) = 2d                               if n = 2
    Tc(n, d) = n log log(n)                     if d = 2
It is easy to see that the right-hand side of the last inequality is bounded from above by 1 + d + p^{d-2} log(p) if 3 < d < p, where p is considered to be a natural number. For d = 3 we can get the exact canonical form for B(p, 3) by unfolding the recursion along p and knowing that B(p, 2) = log(p) + 1:

    B(p, 3) = B(1, 3) + Σ_{k=2}^{p} (1 + log(k))
            = 4 + (p - 1) + Σ_{k=2}^{p} log(k).
Figure 3.4.1: Search path with its left and right sibling nodes.
Insertion The insertion follows the search path starting at the first level tree and continuing recursively on the second level trees of the nodes on this path. The recursion stops at the (d-2)th level trees, where the new point is inserted and the (min, max) values on the search path are updated if needed. On the way back from the recursion the point is inserted also in the higher level trees. Rebalancing is done as presented in [62].
Space and Running Time Analysis The first two steps of the update clearly take worst case O(log^{d-1}(n)) time and the third step takes amortized O(log^{d-1}(n)) time. The storage overhead of the data structure is O(n log^{d-2}(n)).
4.1 Introduction
In this chapter we focus on a specific aspect of the air interface Wide-
band Code Division Multiple Access (W-CDMA) of UMTS networks
that turns out to be algorithmically interesting. More precisely, we
focus on its multiple access method Direct Sequence Code Division
Multiple Access (DS-CDMA). The purpose of this access method is
to enable all users in one cell to share the common resource, i.e. the
bandwidth. In DS-CDMA this is accomplished by a spreading and
scrambling operation. Here we are interested in the spreading op-
eration that spreads the signal and separates the transmissions from
the base-station to the different users. More precisely, we consider
spreading by Orthogonal Variable Spreading Factor (OVSF-) codes
[2, 47], which are used on the downlink (from the base station to the
user) and the dedicated channel (used for special signaling) of the up-
link (from user to base station). These codes are derived from a code
tree. The OVSF-code tree is a complete binary tree of height h that is
constructed in the following way: The root is labeled with the vector (1), the left child of a node labeled a is labeled with (a, a), and the right child with (a, -a). Each user in one cell is assigned a different OVSF-code. The key property that separates the signals sent to the users is the mutual orthogonality of the users' codes. All assigned codes are mutually orthogonal if and only if there is at most one assigned code on each leaf-to-root path. In DS-CDMA users request
different data rates and get OVSF-codes of different levels. The data
rate is inversely proportional to the length of the code. In particular, it
is irrelevant which code on a level a user gets, as long as all assigned
codes are mutually orthogonal. We say that an assigned code in any
node in the tree blocks all codes in the subtree rooted at that node and
all codes on the path to the root, see Figure 4.1.1 for an illustration.
(Figure 4.1.1: OVSF code tree of height h with N leaves; levels 0, 1, 2, 3 carry bandwidths 1, 2, 4, 8, and an assigned code blocks all codes in its subtree and on its path to the root.)
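The recursive construction of the code tree is easy to state in code. The sketch below (our own illustration; function names are assumptions) generates the code words of one level and can be used to confirm their mutual orthogonality.

```python
def ovsf_children(a):
    """The two children of a code word a in the OVSF tree: (a, a) and (a, -a)."""
    return a + a, a + [-c for c in a]

def ovsf_level(k):
    """All 2^k code words on level k below the root (1)."""
    level = [[1]]
    for _ in range(k):
        level = [child for a in level for child in ovsf_children(a)]
    return level
```

For instance, the four code words two levels below the root are (1,1,1,1), (1,1,-1,-1), (1,-1,1,-1) and (1,-1,-1,1); any two distinct code words on the same level have inner product zero.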
4.2.1 Feasibility
One might ask whether a feasible code assignment always exists for a new code request. To answer this question we present here the necessary conditions for a feasible code assignment. In the later sections we always consider only cases where a feasible assignment is possible.
Given an assignment F of n codes in an OVSF code tree T according to the request vector r = (r0, . . . , rh) and a new code request on level li, we examine the existence of a code assignment F' for the request vector r' = (r0, . . . , r_{li} + 1, . . . , rh). Every assigned code on level l has its unique path from the root to a node, of length h - l. The path can be encoded by a word w ∈ {0, 1}^{h-l}. The bits in the encoding determine whether the path follows the left or right child at a certain level in the tree (left encoded with 0 and right with 1). The orthogonality properties required by the code assignment make the path/node identifiers form a binary prefix-free code set. On the other hand, given a prefix-free code set with code lengths {h - l1, . . . , h - l_{n+1}} (where li is the level of code i ∈ {1, . . . , n + 1}), we can clearly assign codes on levels li by following the paths described by the code words (see Figure 4.2.1). This shows that a code assignment F' for codes on levels l1, . . . , l_{n+1} exists if and only if there exists a binary prefix-free code set with code lengths {h - l1, . . . , h - l_{n+1}}.
We use the Kraft-McMillan inequality to check the existence of a prefix-free code set with given code lengths.

Theorem 4.1 [1] A binary prefix free code set having code lengths a1, . . . , am exists if and only if

    Σ_{i=1}^{m} 2^{-a_i} ≤ 1.    (4.2.1)
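Theorem 4.1 gives an immediate feasibility test: with code lengths a_i = h - l_i, a set of requests on levels l_1, . . . , l_m fits into a tree of height h iff the Kraft sum is at most 1, equivalently iff the requested bandwidths sum to at most 2^h. A small sketch (the function name is ours; exact rationals avoid floating-point issues):

```python
from fractions import Fraction

def assignment_exists(levels, h):
    """Kraft-McMillan feasibility test: codes requested on the given
    tree levels fit into an OVSF tree of height h iff the code lengths
    a_i = h - l_i satisfy  sum 2^(-a_i) <= 1."""
    return sum(Fraction(1, 2 ** (h - l)) for l in levels) <= 1
```

For the example of Figure 4.2.1 (levels {0, 1, 1, 1, 2} in a tree of height 4, i.e. lengths {4, 3, 3, 3, 2}) the sum is 11/16, so a feasible assignment exists.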
Figure 4.2.1: Correspondence of code assignments in a tree of height 4 with codes on levels {0, 1, 1, 1, 2} and prefix-free codes of lengths {4, 3, 3, 3, 2}.
Equivalently, in terms of the request vector r and the number of leaves N = 2^h:

    Σ_{i=0}^{h} r_i 2^i ≤ N.
Proof. The construction goes from the top levels towards the leaves: for the current level l we insert codes in all positions that are not blocked by codes on the levels above. Then all codes on level l that are not in F' are deleted, and the construction proceeds recursively to the lower levels until the assignment F' is obtained. The number of insertions used in this process is bounded from above by the total number of nodes of a binary tree (not necessarily complete) that has as leaves the code positions in F'. Since the size of F' is n, we know that the number of nodes in such a binary tree is at most 2n - 1. The number of deletions is the number of insertions minus the size of F'. In conclusion, the total number of insertions and deletions used to force algorithm A to produce the code assignment F' is bounded from above by 3n.
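The counting in this proof can be replayed mechanically: the insertions are bounded by the nodes of the binary tree spanned by the root-to-code paths of F', and the deletions are the insertions minus |F'|. A sketch of this accounting (names are ours; code positions are given as 0/1 path words, 0 = left, 1 = right):

```python
def forcing_cost(codewords):
    """Upper bound on the insert/delete operations used to force the
    assignment F', given as the set of root-to-code path words.
    The prefixes of the words are exactly the nodes of the binary tree
    that has the code positions of F' as leaves."""
    tree_nodes = {w[:i] for w in codewords for i in range(len(w) + 1)}
    insertions = len(tree_nodes)            # at most 2n - 1
    deletions = insertions - len(codewords)
    return insertions + deletions
```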
4.3.2 NP-Hardness
In this part we prove that the decision variant of the one-step offline CA problem is NP-complete. The decision variant of the one-step offline CA problem asks whether a new code request can be satisfied with at most cmax reassignments. The problem is obviously in NP, since one can guess an optimal solution and it can be checked in polynomial time.
Figure 4.3.2: Initial code tree configuration for the reduction from a
3DM instance.
together with their cost for Tv. The cost of a possible signature s for Tv (usually s is different from the original signature sv of Tv) is defined as the minimum number of codes in Tv that have to move away from their old position in order to attain a tree T'v with signature s. To attain T'v it might be necessary to move codes into Tv from other subtrees, but we do not count these movements in the cost of s for Tv.

Given a code tree T with all these tables computed, one can compute the cost of any single code insertion on level l from the table at the root node r: Let sr = (sr0, . . . , srh) be the signature of the whole code tree before insertion; then the cost of the insertion is the cost of the signature (sr0, . . . , srl + 1, . . . , srh) in this table plus one (for the original assignment).

The computation of the tables starts at the leaf level, where the cost of the one-dimensional signatures is trivially defined. At any node v of level l(v) the cost c(v, s) of signature s for Tv is computed from the cost incurred in the node's left subtree Tl plus the cost incurred in its right subtree Tr plus the cost at v. The costs c(l, s') and c(r, s'') in the subtrees come from two feasible signatures with the property s = (s'0 + s''0, . . . , s'_{l(v)-1} + s''_{l(v)-1}, s_{l(v)}). We use the notation s = (s' + s'', s_{l(v)}) to represent the composition of signature s from the lower-level signatures s' and s''. Any pair s', s'' of such signatures corresponds to a possible configuration after the code insertion. The best pair for node v gives c(v, s). Let sv = (sv0, . . . , sv_{l(v)}) be the signature of Tv and let s', s'' be chosen such that s = (s' + s'', 0),
Definition 4.12 Let Topt be the set of the opt-trees for a code insertion c0 and let T^t (together with its code assignment F^t) be the code tree after t steps of the algorithm Agreedy. An α-mapping at time t is a mapping αt : Mt → V(Topt) for some Mt ⊆ F^t, such that ∀v ∈ Mt : l(v) = l(αt(v)) and αt(Mt) ∪ (F^t \ Mt) is a code assignment.
Lemma 4.15 For every set Ct and code tree T^t in algorithm Agreedy there exists an independent mapping (αt, βt).
4.4 Online CA
A practically more relevant version of the code assignment problem is
where several code insertions and deletions have to be satisfied in an
online fashion. In this section we give a lower bound on the competi-
tive ratio of any deterministic online algorithm, analyze several online
strategies and present a resource augmented algorithm with constant
competitive ratio.
Proof. Let A be any deterministic algorithm for the problem. First the adversary asks for N leaf insertions. The adversary Aopt makes the insertions in such a way that later on it needs no additional reassignments. In the next step the adversary deletes N/2 codes (every second one from the assignment produced by A) to get the situation in Figure 4.4.1. Then a code request on level h - 1 causes algorithm A to move N/4 codes. We can proceed with the left subtree of fully assigned leaf-level codes recursively and repeat this process h - 1 times. Thus, Aopt needs N + h - 1 code assignments. Algorithm A needs N + T(N) code assignments, where T(N) = 1 + N/4 + T(N/2) and T(2) = 0. After unfolding the recursive formula we get the following canonical form: T(N) = h - 1 + (N/2)(1 - 2/N). For costA ≤ c · costOPT we must have

    c ≥ (3N/2 + h - 2) / (N + h - 1) → 3/2  (as N → ∞).
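The recurrence and its canonical form can be checked mechanically; a small sketch (the function name is ours):

```python
def T(N):
    """Reassignment cost of algorithm A against the adversary sequence:
    T(2) = 0 and T(N) = 1 + N/4 + T(N/2), for N a power of two."""
    return 0 if N == 2 else 1 + N // 4 + T(N // 2)

# Canonical form from the text, with h = log2(N):
#   T(N) = h - 1 + (N/2)(1 - 2/N) = h + N/2 - 2,
# so costA = N + T(N) = 3N/2 + h - 2 and the ratio against
# costOPT = N + h - 1 tends to 3/2 as N grows.
```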
Figure 4.4.4 shows an initial code assignment that does not satisfy Invariant 4.24. In this example the invariant can be established by moving one code.
We prove that this algorithm is equivalent to the algorithm that
minimizes the number of gap trees on every level.
Figure 4.4.5 highlights the gap trees for an example code assignment.
The implication of Invariant 4.24 is that there can be at most one gap
tree on every level.
(Figure 4.4.5: the gap trees of an example code assignment, with gap vector q = (1, 2, 1, 0, 0).)
Proof. First we prove that it is necessary to have at most one gap tree on every level to minimize the number of blocked codes. If there are two gap trees Tu, Tv on a level l, then by moving the codes from the sibling tree Tu' of Tu into the gap tree Tv we create a gap tree at level l + 1 rooted at the parent of u, and hence we reduce the number of blocked codes by one. This shows that having at most one gap tree per level is a necessary condition to minimize the number of blocked codes.
Next we prove the sufficiency of the condition. Suppose that F leaves at most one gap tree on every level of T. The free bandwidth capacity of T can be expressed as

    cap = Σ_{i=0}^{h} q_i 2^i.
Since qi ∈ {0, 1}, the gap vector is the binary representation of the number cap. Therefore the gap vector q is the same unique vector for all code assignments having at most one gap tree at every level and serving the same requests as F. The gap vector also determines the number of blocked codes:

    # blocked codes = (2^{h+1} - 1) - Σ_{i=0}^{h} q_i (2^{i+1} - 1).
This shows that any code assignment having at most one gap tree per
level and serving the same requests as F has the same number of
blocked codes as F .
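The two formulas combine into a direct computation of the number of blocked codes from the free capacity; a sketch (the function name is ours):

```python
def blocked_codes(cap, h):
    """Blocked codes in a height-h tree whose assignment leaves at most
    one gap tree per level and free capacity cap: the gap vector q is
    the binary representation of cap, and each level-i gap tree keeps
    2^(i+1) - 1 codes unblocked out of 2^(h+1) - 1 in total."""
    q = [(cap >> i) & 1 for i in range(h + 1)]
    free = sum(qi * (2 ** (i + 1) - 1) for i, qi in enumerate(q))
    return (2 ** (h + 1) - 1) - free
```

For example, in a tree of height 2 with a single leaf code assigned (cap = 3, q = (1, 1, 0)), the assigned leaf, its parent and the root are blocked, so 3 of the 7 codes are blocked.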
Lemma 4.28 The algorithm Agap always has a gap tree of sufficient
height to assign a code on level l and satisfies the Invariant 4.24.
Proof. First we show that the algorithm Agap satisfies Invariant 4.24 by showing that the tree T after the algorithm returns has at most one gap tree on every level. Consider an insertion into the smallest gap tree, of level l', where the code fits. New gap trees can occur only on levels j, l ≤ j < l', and only within the gap tree on level l'. At most one new gap tree can occur on every level. Suppose that after creating a gap tree on level j we have more than one gap tree on this level. Then, since j < l', we would have assigned the code into this smaller gap tree, a contradiction (Figure 4.4.6). Therefore, after
Figure 4.4.6: Two gap trees on a level lower than l' violate the minimality of the chosen gap tree.
Algorithm Agap does not need any extra code movements after insertions and hence is optimal when only code insertions are considered. However, when deletions are also allowed the algorithm is Θ(h)-competitive.

Proof. The request sequence from the proof of Theorem 4.23 shows the Ω(h)-competitiveness of algorithm Agap.
Can the gap between the lower bound of 1.5 and the upper
bound of O(h) for the competitive ratio of the online CA be
closed?
Since then these two questions have been analyzed in [78], leading to the following results: the general offline CA problem is NP-complete, and there exist instances of code trees for which any optimal offline greedy algorithm needs to reassign more than one code per insertion/deletion request. Whether there are instances where more than an amortized constant number of reassignments is necessary remains open.
Chapter 5
5.1 Introduction
We consider different combinatorial aspects of problems arising in the
context of load balancing in time division networks. These problems
turn out to be related to interval scheduling problems and interval
graphs.
The general setting is that mobile phone users are served by a set of base stations. For our considerations, a user is identified with its mobile device. In each time slot (round) of the time division multiplexing, each base station serves at most one user. Traditionally, each user is assigned to a single base station that serves the user until it leaves the cell of the base station or until its demand is satisfied. The amount of data that a user receives depends on the strength of the signal it receives
power it receives from other base stations. In [21], Das et al. propose
a novel approach. In their approach clusters of base stations jointly
decide which users to be served in which round in order to increase
network performance. Intuitively, if in each round neighboring base
stations try to serve pairs of users such that the mutual interference
is low, this approach increases the throughput. We turn this approach
into a discrete scheduling problem in one and two dimensions (see
lems (see [32, 73] and the references given therein for an overview).
Our problem is more similar to a setting with several machines where
one wants to minimize the number of machines required to schedule
all intervals. A version of this problem where intervals have to be
scheduled within given time windows is studied in [19]. Inapprox-
imability results for the variant with a discrete set of starting times
for each interval are presented in [18].
user. At the same time, we have to decide in which round each se-
lected arrow is scheduled under the side constraint that all arrows in
each round must be compatible. To distinguish between the different
rounds we have to label the arrows scheduled in these rounds with
different labels. The arrows scheduled in the same round should be
labeled identically. For labeling the arrows we use colors that repre-
sent the rounds.
For the two-dimensional JBS problem we have positions in R^2 and interference disks d(bi, uj) with center bi and radius ||bi - uj||_2 instead of arrows. We denote by D(P) the set of interference disks for the user/base-station pairs from a set P. Two interference disks are in conflict if the user that is served by one disk is contained in the other disk; otherwise, they are compatible. The problems can now be stated as follows:
1D-JBS
2D-JBS
The two problems mentioned above are the same, except that the input for 1D-JBS is given on a line and that for 2D-JBS in the plane. The arrows can be considered as one-dimensional disks.
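The conflict relation on interference disks reduces to two distance comparisons; a sketch (names are ours, and we read "contained" as strictly inside, which is one possible interpretation of the definition):

```python
import math

def in_conflict(pair1, pair2):
    """Conflict test for two user/base-station pairs in 2D-JBS.
    Each pair is (b, u) with b, u points in the plane; the interference
    disk d(b, u) has center b and radius |b - u|.  Two disks conflict
    iff the user served by one lies strictly inside the other disk."""
    (b1, u1), (b2, u2) = pair1, pair2
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    return dist(b1, u2) < dist(b1, u1) or dist(b2, u1) < dist(b2, u2)
```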
5.2 1D-JBS
As mentioned earlier, solving the 1D-JBS problem requires selecting
an arrow for each user and coloring the resulting arrow graph with
as few colors as possible. To understand when a selection of arrows
leads to an arrow graph with small chromatic number, we first study
the properties of arrow graphs in relation to existing graph classes.
Next we analyze special cases of 1D-JBS that are solvable in polynomial time. At the end of this section we present a dynamic program that solves the decision version of the 1D-JBS problem in time n^{O(k)}, where k is the number of rounds, and we show a 2-approximation algorithm. The big open problem remains the complexity of the general 1D-JBS problem: is it NP-complete or is it polynomially solvable?
graphs, trapezoid graphs, co-comparability graphs, AT-free graphs, and weakly chordal graphs were observed by E. Köhler, J. Spinrad, R. McConnell, and R. Sritharan at the seminar "Robust and Approximative Algorithms on Particular Graph Classes", held in Dagstuhl Castle during May 24-28, 2004.
One can also show that arrow graphs are AT-free (i.e., do not
contain an asteroidal triple) and weakly chordal. For more details
on these graph classes the interested reader is referred to the text-
book [12].
Lemma 5.3 For instances of 1D-JBS with evenly spaced base sta-
tions, there is always an optimal solution that is non-crossing.
which means they block fewer users, and even though the head of al has moved left and the head of ar has moved right, there cannot be any arrow that would block them. In t1 the arrow ar cannot be blocked, since there are no arrows coming from base stations positioned to the right of br (a property of the equidistant base stations), and arrows coming from the left side of br cannot block the new ar because they would have blocked the old arrow ar (a contradiction to the feasibility of s). For t2 the reasoning is symmetric.
Dynamic Programming
We present a dynamic programming approach that solves the special case of the 1D-JBS problem with evenly spaced base stations in time O(m n^4 log n). The idea is to consider the base stations in left-to-right order. For every base station, we compute a cost function for all possible left and right division points in the intervals to the left respectively to the right of the base station. The cost function that we store in the table of base station bi, denoted by Γi(d_{i-1}, d_i), is the minimum number of rounds needed to schedule the users from u1 up to u_{d_i} using the base stations b1, b2, . . . , bi. In the case of evenly spaced base stations this cost depends only on the division point positions d_{i-1} and d_i that define the users served by the base station bi. An example for division points is given in Figure 5.2.2.

By Δ(vi) we denote the set of potential division points for interval vi, i.e., the set of the indices of users in vi and of the rightmost user to the left of vi (or 0 if no such user exists). The table entries Γ1(d0, d1) for b1, where d0 is set to zero and d1 ∈ Δ(v1), are computed using the greedy coloring algorithm. For i ≥ 1, we compute the values Γ_{i+1}(d_i, d_{i+1}) for d_i ∈ Δ(v_i), d_{i+1} ∈ Δ(v_{i+1}) from the values Γi(·, d_i). The minimum coloring for fixed division points d_i
(Figure 5.2.2: division points d0, d1, . . . , d_{i-1}, d_i, d_{i+1}, . . . , dm along the line.)
and d_{i+1} at base station b_{i+1} is the maximum of the number of colors needed at bi, using the same d_i and choosing d_{i-1} such that Γi(d_{i-1}, d_i) is minimized, and the coloring ci of the arrows intersecting the interval vi. In the case of evenly spaced base stations with no far out users this cost ci depends only on the division points d_{i-1}, d_i and d_{i+1}, and we denote it by ci(d_{i-1}, d_i, d_{i+1}). The coloring Γi(d_{i-1}, d_i) is compatible with the coloring ci(d_{i-1}, d_i, d_{i+1}) up to a renaming of colors. The algorithm chooses the best division point d_{i-1} to get:
Theorem 5.4 The base station scheduling problem for evenly spaced base stations without far out users can be solved in O(m n^4 log n) time by dynamic programming.
Theorem 5.5 For the 1D-JBS problem with 3 base stations and 3k users, among them k far out users, deciding whether a k-schedule exists can be done in O(n log n) time.

Figure 5.2.3: Far out users u10, u11 and u12 are served by b3 in rounds 1, 2 and 3, respectively. The arrows represent the Type 1 rounds. Users u1, u8 and u9 will be scheduled in a round of Type 2 (not shown).
Proof. The proof can be found in [30] and will appear in the thesis of M. Mihalák. The proof shows that the greedy scheduling strategy finds such a k-schedule in time O(n log n), if one exists.
(Lemma 5.1). If we consider one of them, the other one is the alternative user ar-
row.
The overall running time of the algorithm is O(m n^{2k+1} log n). There is a solution to k-1D-JBS if and only if the algorithm finds such a set of compatible neighbors.
increases the clique size at p, then no type 3 arrow can be in the maxi-
mum clique at p (observe that arrows of type 3 and 4 are compatible).
Therefore, the clique at p cannot be bigger than the clique at bi+1 , a
contradiction.
The analysis of algorithm Ak-JBS and Lemma 5.6 lead to the following theorem:
    s.t.  Σ_{li ∈ C} li + Σ_{ri ∈ C} ri ≤ k,   for all cliques C in G(A)   (5.2.2)
          li + ri = 1,   i ∈ {1, . . . , |U|}                              (5.2.3)
          li, ri ∈ {0, 1},   i ∈ {1, . . . , |U|}                          (5.2.4)
          k ∈ N                                                            (5.2.5)
base station using the left or right user arrow. With constraints (5.2.2) we bound from above the size of the cliques in the arrow graph of the selected arrows (those that have their indicator variable set to 1). The constraints (5.2.3) ensure that every user is served by exactly one user arrow. The objective is to minimize the maximum clique size k, where k ∈ N.

We obtain the LP relaxation by allowing li, ri ∈ [0, 1] and k ≥ 0. An optimal solution to the LP relaxation can be rounded to an integer solution using the following rounding strategy: li := ⌊li + 0.5⌋, ri := 1 - li. This rounding gives an integer solution that is not worse than twice the optimal fractional solution. This is true because in the worst case all arrows in a maximum clique can get their weights (the values of their indicator variables) doubled. We know that the optimal integer solution is a feasible fractional solution. Hence, the optimal integer solution cannot be better than the optimal fractional solution. Thus, we have shown that the cost of the rounded solution is at most twice the cost of the optimal integer solution.
One might think that a more clever analysis could prove a better upper bound for the rounded integer solution. Unfortunately this is not the case, because the rounded integer solution can indeed get as bad as twice the optimal fractional solution. Figure 5.2.5 gives an example where the cost of an optimal fractional solution is smaller than the cost of an optimal integral solution by a factor arbitrarily close to 2. In this example the basic construction I1 contains two base stations bl and br and one user u in between. Both the solution of the ILP and the solution of the LP relaxation have cost 1. I2 is constructed recursively by adding to I1 two (scaled) copies of I1 in the tail positions of the arrows. In this case the cost of the relaxed LP is 1.5 and the integral cost is 2. The construction In, after n recursive steps, is shown at the bottom of Figure 5.2.5. This construction is obtained by taking I1 and putting two scaled In-1 settings in the tails of the arrows of I1. The cost of the LP relaxation for In is (n + 1)/2, whereas the cost of the ILP is n.
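The two cost sequences of the construction can be generated by replaying the recursion; this sketch only reproduces the numbers stated in the text (cost 1 for both optima of I1, then +1/2 for the fractional and +1 for the integral optimum per nesting step), not the geometry, and the function name is ours:

```python
from fractions import Fraction

def construction_costs(n):
    """Fractional (LP) and integral (ILP) optima of the recursive
    construction I_n: LP cost (n + 1)/2, ILP cost n."""
    lp, ilp = Fraction(1), 1          # I_1: both optima equal 1
    for _ in range(n - 1):
        lp += Fraction(1, 2)          # two scaled copies of I_(n-1) in the tails
        ilp += 1
    return lp, ilp
```

The ratio ILP/LP = 2n/(n + 1) tends to 2, matching the claimed tightness of the rounding analysis.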
(Figure 5.2.5: the constructions I1, I2 and In. In I1 a single user u lies between the base stations bl and br with lu = 0.5 and ru = 0.5; In is built from I1 by placing two scaled copies of In-1 in the tails of its arrows.)
porting these selected arrows has two base stations in the middle and n/2 users on each side. Fortunately, we can still solve such an LP in polynomial time with the ellipsoid method of Khachiyan [58], applied in a setting similar to [45]. This method only requires a separation oracle that provides us with a violated constraint for any values of li, ri, if one exists. It is easy to check for a violation of constraints (5.2.3) and (5.2.4). For constraints (5.2.2), we need to check whether for given values of li, ri the maximum weighted clique in G(A) is at most k. By Theorem 5.2 this can be done in time O(n log n). Summarizing, we get the following theorem:
The proof is similar to the proof from Section 5.2.5, except that in-
stead of arrow graphs we have another special case of trapezoid graphs.
5.3 2D-JBS
After analyzing several versions of the 1D-JBS problem we get closer
to the real application from mobile communication and have a closer
look at the situation in the plane. In this section we analyze the
two-dimensional version of the base station scheduling problem (2D-JBS). The decision variant k-2D-JBS of the problem asks whether the users can be served in at most k rounds for a given k and a given 2D-JBS instance. We show that k-2D-JBS is NP-complete for any k ≥ 3. For the case k = 1, we give a polynomial time algorithm.
Then we present a constant-factor approximation algorithm for a con-
strained version of 2D-JBS. At the end of the section, we show log-
arithmic lower bounds on the approximation ratio of several natural
greedy approaches.
Figure 5.3.2: Forcing base stations to serve users only inside their
outer disk.
Property 5.11 A JBS instance for a Wkl has the following properties:
The last property holds since the interference that the outer users
(v0 , . . . , vl ) produce forces every valid k-schedule to schedule them
in the same round.
Property 5.13 A JBS instance for a Kkl has the following properties:
Proof. By the same argument as for the Wkl we can prove the second property. From the properties of a k-wire we know that every valid k-schedule schedules the outer users v0, . . . , vl in the same round. User vl cannot be served by the right-most base station, because then the base station to its left would have to stay idle in that round. Thus a valid k-schedule will assign to every base station only its inner and outer users. The user vl is in the interference region of the user v_{l+1}, hence it forces every k-schedule to schedule them in different rounds.
By using k-wires together with k-chains we can transform the edges
of a graph into a 2D-JBS instance. The following auxiliary graph
structure is used to transform a high degree node into an independent
set of vertices (output nodes) that copy the color of the original vertex
and connect to the neighbors of the original vertex.
Property 5.15 A JBS instance for a Ckl of size s has the following
properties:
Proof. The proof follows arguments similar to those presented for the k-wire.
To avoid crossing edges when embedding the general graph into the
plane we need the following special graph structure.
Figure 5.3.5: Helper gadgets for realizing the k-crossing with JBS
instances.
Property 5.17 The JBS instance for a k-crossing Hk has the follow-
ing properties:
Proof. The JBS instance for Hk is constructed in such a way that all base stations are fully loaded with k users. Hence it is obvious that it cannot be scheduled in fewer than k rounds. Similarly to the proof for the k-wire, one can show that the only valid k-schedule serves with every base station only its own inner and outer users. Since this JBS instance is a realization of the original k-crossing auxiliary graph, we know that every k-coloring satisfies the last two properties. For ease of presentation we omitted the auxiliary base stations. One might question whether this construction really allows placing, for every base station, also the corresponding auxiliary base station. We do not provide a formal proof of this, but intuitively, and as Figure 5.3.5(a) shows, there is free space next to the outer disk of every base station to place the auxiliary base stations.
In what follows we give the main idea on how to embed a general
graph G into the plane using the auxiliary graph structures presented
above.
High degree vertices vi are broken into an independent set of vertices vi,1, . . . , vi,d (where d ≥ 2 is the degree of vi). Each such vertex vi,j is then connected to one of the neighbors of vi. The vertices are placed on a line such that the modified vertices coming from the same original vertex are placed next to one another. The original edges are replaced by horizontal and vertical lines connecting the corresponding vertices (or modified vertices). The properties that must hold in order to do the replacements without conflicting with each other are:
[Figure: an example graph G on vertices v1, ..., v5 and one possible embedding of G.]
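As an illustration, the degree-splitting step can be sketched as follows (a hypothetical Python sketch of our own; the adjacency-list representation and the copy-numbering scheme are assumptions, not taken from the thesis):

```python
def split_high_degree(adj):
    """Replace every vertex v of degree d >= 2 by an independent set of
    copies (v, 0), ..., (v, d-1), one copy per incident edge, so that
    every copy ends up with degree exactly one.
    adj maps each vertex to the list of its neighbours."""
    # The position of neighbour w in v's list selects which copy of v
    # is dedicated to the edge {v, w}.
    index = {v: {w: i for i, w in enumerate(ws)} for v, ws in adj.items()}
    edges = set()
    for v, ws in adj.items():
        for w in ws:
            a = (v, index[v][w])   # copy of v dedicated to edge {v, w}
            b = (w, index[w][v])   # copy of w dedicated to edge {v, w}
            edges.add(tuple(sorted((a, b))))
    return edges
```

Each original edge {v, w} survives as exactly one edge between the two dedicated copies, so every copy has degree one before the horizontal and vertical connection lines of the embedding are added.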
The proof is similar to the one given in [44] and is based on the properties of the auxiliary graph structures.
If G is k-colorable then, using the embedding, we obtain compatible colors for the output vertices of the auxiliary graph structures. Since the auxiliary graph structures are k-colorable on their own, and since the output vertices with which they connect to other auxiliary graph structures receive compatible colors, the embedding G' is also k-colorable.
Conversely, knowing that G' is k-colorable, we identify in constant time the original vertices of G to which the output vertices of G' correspond and color them correspondingly. This coloring is a valid k-coloring of G, since G' is a valid embedding of G.
Proof. In the reduction presented for the k-2D-JBS problem, the se-
lection of serving base stations for every user in the auxiliary graphs
is uniquely determined by the construction. Hence the same reduc-
tion also works for just the coloring part of the k-2D-JBS problem.
For the 1-2D-JBS problem this observation implies that every op-
timal assignment (if one exists) must be a set of empty disks.
We now present the algorithm A1-round that solves the 1-2D-JBS problem optimally. In a preprocessing step we find all empty disks formed by base stations from B and users from U. If there is some user without an empty disk, then the algorithm stops, because there is no feasible assignment that would schedule the users from U in one round. For every user the algorithm picks an arbitrary empty disk (we know that every base station produces at most one empty disk), and we know from Observation 5.20 that this empty disk cannot conflict with the empty disks selected for the other users. When all users have their empty disks, the algorithm stops and outputs the assignment.
Algorithm A1-round has polynomial running time. Using standard techniques from computational geometry, e.g. [23], one can compute a Voronoi diagram of the user points in O(n log n) time and then build a point-location data structure so that the closest user of a base station (the preprocessing step of A1-round) can be determined in O(log n) time. Thus the running time of A1-round is O((m + n) log n).
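A minimal Python sketch of A1-round, assuming point coordinates as tuples and using a brute-force closest-user search in place of the Voronoi-based preprocessing (so this sketch runs in O(mn) rather than O((m + n) log n)):

```python
from math import hypot

def a1_round(base_stations, users):
    """Each base station b yields at most one empty disk: the disk centred
    at b whose radius is the distance from b to its closest user.  A
    one-round schedule exists iff every user is the closest user of some
    base station; any such empty disk then serves it without conflicts."""
    assignment = {}
    for b in base_stations:
        # closest user of b defines b's unique empty disk
        u = min(users, key=lambda v: hypot(v[0] - b[0], v[1] - b[1]))
        assignment.setdefault(u, b)   # pick an arbitrary empty disk for u
    if len(assignment) < len(users):
        return None                   # some user has no empty disk
    return assignment
```

The function returns a user-to-base-station assignment, or None when no one-round schedule exists.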
Hence the optimal solution needs at least k/|B| rounds. This yields the following theorem.
General 2D-JBS
When not much is known about the structure of a problem, one approach is to use greedy strategies and analyze their worst-case behavior. For our 2D-JBS problem we first looked at the greedy algorithm that selects for every unserved user the smallest disk that can serve it. The intuition behind this greedy choice is that by using smaller disks fewer users are blocked, so more users can be served in the same round, which hopefully leads to fewer rounds. Even though we could not prove a tight bound on the approximation ratio achieved by this algorithm, we found a lower bound construction. This construction is presented here and is interesting because two other greedy algorithms are fooled by it as well.
The greedy algorithms that we present proceed round by round
and in each round decide the subset of disks to be used based on
certain greedy rules.
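To make the round-by-round structure concrete, here is a hypothetical Python sketch of the smallest-disk greedy. The interference model is our simplifying assumption, not the thesis's formal definition: a transmission from b to u occupies the disk around b of radius |bu|, and two same-round transmissions conflict if either user lies inside the other's disk or they share a base station.

```python
from math import hypot

def dist(p, q):
    return hypot(p[0] - q[0], p[1] - q[1])

def greedy_smallest_disk(base_stations, users):
    """Serve users round by round; within a round, repeatedly accept the
    smallest candidate disk that does not conflict with the disks already
    selected for this round.  Returns the number of rounds used."""
    unserved = set(users)
    rounds = 0
    while unserved:
        rounds += 1
        candidates = sorted(
            (dist(b, u), b, u) for b in base_stations for u in unserved)
        used_bs, chosen = set(), []
        for r, b, u in candidates:
            if b in used_bs or u not in unserved:
                continue
            # conflict test: no selected disk may cover u, and the new
            # disk may not cover an already selected user
            if all(dist(b2, u) >= r2 and dist(b, u2) >= r
                   for r2, b2, u2 in chosen):
                chosen.append((r, b, u))
                used_bs.add(b)
                unserved.discard(u)
    return rounds
```

Every round serves at least one user (the globally smallest candidate disk is always accepted), so the algorithm terminates.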
[Figure: lower bound constructions for the greedy algorithms — an instance with base stations b0, b1, ..., bn and users u1, ..., un, and the tree instance Td on d levels, built recursively from two copies of Td-1, together with the instance Wd.]
The optimal solution, in contrast, can serve all users of some level l in one round by using the base stations from below, at the expense of blocking all users on level l + 1. Thus, it can serve all users in two rounds by serving the odd levels in one round and the even levels in the other. Consequently, the approximation ratio of the greedy algorithms is Omega(d). The number n_d of users in the tree Td can be calculated by solving the simple recursion n_d = 2n_{d-1} + 1 (which follows from the recursive construction of the tree), where n_1 = 1. This gives n = n_d = 2^d - 1 users and thus d = log(n + 1).
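The closed form stated here follows by telescoping the shifted recursion:

```latex
n_d = 2\,n_{d-1} + 1,\quad n_1 = 1
\;\Longrightarrow\; n_d + 1 = 2\,(n_{d-1} + 1) = 2^{d-1}\,(n_1 + 1) = 2^d
\;\Longrightarrow\; n_d = 2^d - 1,\qquad d = \log_2(n + 1).
```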
We formulate the lower bound for these greedy algorithms in the
following theorem (the proof can be found in [30]).