TEL280 FundamentosCommInalambricas PDF
Chapters 1-3: Wireless
Communications Fundamentals
The above description is more appropriate for a static/fixed position transmitter and receiver.
Subsequent chapters will describe the complexities added by mobility.
The entropy of a discrete source X is defined as H(X) = − Σx p(x) log2 p(x), where p(x) is the probability of an outcome of the source X, and by convention 0 log 0 = 0. Note that the entropy does not depend on the actual values of the outcomes, but only on their probability distribution.
Entropy measures uncertainty of a random variable. For example, if all outcomes are equally
likely, the entropy of the source is equal to the logarithm of the number of possible outcomes.
The larger the number of possible outcomes, the larger the entropy. Alternatively, let’s
consider a random variable with only two possible outcomes A and B. Let’s say that outcome
A has a probability p and outcome B has a probability (1-p). Then, the entropy of that random
variable is maximized when p=0.5 (i.e., both outcomes are equally likely). In that case, the
entropy is equal to 1. The entropy is minimal (equal to 0) if p=0 or p=1, that is, in the cases where either outcome A or B is certain. In general, the more biased the random variable is towards one of the outcomes, either A or B (less uncertainty), the lower its entropy.
Shannon proved that (Source coding theorem) if a source X is encoded at rate R, error free
decoding is possible if and only if R ≥ H(X). That is, entropy captures the amount of
information generated by a source.
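These properties are easy to verify numerically. Below is a minimal sketch (the helper name is my own, not from the text) that computes the entropy of a source directly from its probability distribution:

```python
import math

def entropy(probs):
    """Shannon entropy in bits; by convention 0*log(0) = 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))          # 1.0: two equally likely outcomes
print(entropy([0.9, 0.1]))          # ~0.469: biased source, lower entropy
print(entropy([1.0, 0.0]) == 0.0)   # True: a certain outcome carries no information
print(entropy([0.25] * 4))          # 2.0: uniform over M outcomes gives log2(M)
```

Note how the biased source [0.9, 0.1] carries less than half a bit per symbol, matching the discussion above.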
Now, let’s consider two random variables X and Y. The conditional entropy H(Y|X) is defined as:

H(Y|X) = Σx p(x) H(Y|X=x)
       = − Σx p(x) Σy p(y|x) log p(y|x)
       = − Σx Σy p(x,y) log p(y|x)
measures the uncertainty left in Y when X is known. Let’s consider the joint entropy of two variables X and Y (that is, the entropy of the variable Z=(X,Y)). It can be shown that H(Z) = H(X,Y) = H(X) + H(Y|X) = H(Y) + H(X|Y). Thus, H(Y|X) = H(X,Y) – H(X): the uncertainty left in the pair once X is known.
The above motivates the definition of mutual information I(X; Y) between two random
variables:
I(X;Y) = H(X) − H(X|Y) = H(Y) − H(Y|X)
which is a measure of the amount of information that one random variable contains about another. For example, I(X;Y) measures the reduction in X’s uncertainty thanks to the knowledge of Y, that is, how much information about X is gained by knowing (observing)
Y. A case of particular interest is when X is a random variable representing the signal being
transmitted over a channel and Y is the received (measured) signal. Thus, I(X;Y) measures
how much information we can get about X by measuring Y.
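As a concrete example, consider a binary symmetric channel (BSC) with crossover probability eps and a uniform input; the sketch below (helper names are my own) computes I(X;Y) = H(Y) − H(Y|X), where H(Y|X) = h2(eps) for this channel:

```python
import math

def h2(p):
    """Binary entropy function, in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_info_bsc(eps, px1=0.5):
    """I(X;Y) for a binary symmetric channel with crossover probability eps."""
    py1 = px1 * (1 - eps) + (1 - px1) * eps   # P(Y=1) at the channel output
    return h2(py1) - h2(eps)                  # H(Y) - H(Y|X)

print(mutual_info_bsc(0.0))   # 1.0: noiseless channel, Y reveals X exactly
print(mutual_info_bsc(0.5))   # 0.0: output independent of input, nothing learned
print(mutual_info_bsc(0.1))   # ~0.531 bits learned about X per observation of Y
```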
Since a channel is represented by fY|X (y|x), the conditional probability of the output (y)
conditioned on the input (x), then the only undetermined quantity is f(x), the pdf of the
channel input (the message). That is, once f(x) is chosen, f(x,y), and I(X;Y) are completely
determined. Thus, it makes sense to choose the f(x) (i.e., the encoding) that maximizes the information that can be obtained about the message by observing the channel output y.
This motivates the definition of channel capacity C as:
C = max_{f(x)} I(X;Y)
Note that f(x) in the maximization expression corresponds to the pdf of the channel input, not the pdf of the source. A source S’s message is first mapped to a set of binary codewords (“source coding”), which the channel encoder then maps onto channel input symbols distributed according to f(x).
Shannon proved (channel coding theorem) that transmission at a rate R on a channel defined by the conditional probability fY|X can be accomplished error free if and only if R ≤ C. Therefore C is the theoretical channel capacity. Shannon’s proof is based on the generation of random codes (impractical due to large memory requirements) and exploits the fact that, for large codewords, thanks to the law of large numbers, most of the probability mass of the codeword distribution is concentrated among “typical sequences”. A random codeword of length n for a message (channel coding) is generated by drawing n consecutive i.i.d. values from the pdf f(x). Most codewords generated this way will be “typical sequences”, which Shannon exploits in his proof.
As an example, let’s calculate the capacity of a Gaussian channel (Y = X + η), where the
channel output Y is equal to the sum of the channel input X plus a noise component η that is
Gaussian-distributed with zero mean and variance (‘energy’) equal to N. This is a very
important case, since Gaussian noise appears very frequently in nature (thermal noise) and
because Gaussian noise is the most challenging (for a given energy constraint, it has the highest entropy, or uncertainty).
Let’s assume that the sender is power limited: E{X²} ≤ P (that is, the long-term average power is below a value P). Recall that I(X;Y) = H(Y) − H(Y|X) = H(Y) − H(η) = H(Y) − 0.5 log2(2πeN), where the last equality follows from the entropy of a Gaussian variable (η). Thus, maximization of I(X;Y) reduces to maximizing the entropy H(Y). Now, E{Y²} = E{X²} + 2E{X}E{η} + E{η²} = E{X²} + N ≤ P + N, where we have used the fact that X and η are independent and η is zero-mean. Recalling that the entropy of a power-limited random variable is maximized by the Gaussian distribution, and that the sum of two Gaussian variables is also Gaussian, we conclude that H(Y) is maximized when Y is Gaussian with variance (P+N), which happens when X is Gaussian. In that case, H(Y) = 0.5 log2[2πe(P+N)] and I(X;Y) = 0.5 log2[2πe(P+N)] − 0.5 log2[2πeN] = 0.5 log2(1+P/N). Then, the capacity of a power-limited Gaussian channel is C = 0.5 log2(1+P/N) bits per channel use.
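This formula is easy to explore numerically; the sketch below (the function name is illustrative) shows the logarithmic growth of capacity with the SNR P/N:

```python
import math

def awgn_capacity(snr):
    """C = 0.5*log2(1 + P/N) bits per channel use for the Gaussian channel."""
    return 0.5 * math.log2(1 + snr)

print(awgn_capacity(1))     # 0.5 bits/use when P = N
print(awgn_capacity(100))   # ~3.33 bits/use
print(awgn_capacity(200))   # ~3.82 bits/use: doubling power buys ~0.5 bit
```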
Now, let’s compute the channel capacity when the average power is bounded by P and the sender is limited to transmit signals within a bandwidth W. This is a fairly common case, where a transmitter is assigned only a limited fraction of the electromagnetic spectrum. With noise power N0 per unit of bandwidth (total noise power WN0), the resulting capacity is CAWGN = W log2[1 + P/(WN0)] bps. Two regimes can be distinguished:
• Bandwidth limited regime: P >> WN0 (SNR >> 1). Capacity grows logarithmically with power; that is, this regime is power inefficient, as the power must increase exponentially to obtain a linear increase in the bitrate. On the other hand, the regime is bandwidth efficient, as the bitrate per unit of bandwidth (bps/Hz) that can be achieved is greater than 1. Note that as power increases towards infinity, so does the capacity, although much more slowly.
• Power limited regime: P << WN0 (SNR << 1). Capacity grows almost linearly with respect to power. Figure 1 shows, for a fixed received power, the relationship between capacity (C) and bandwidth (W). As W tends to infinity, ∆ = P/(WN0) tends to 0, and since for very small ∆, log2(1+∆) = (log2 e) ln(1+∆) ≈ (log2 e)∆, we have CAWGN ≈ W (log2 e) P/(WN0) = (log2 e / N0) P. Thus, even with infinite bandwidth, the capacity is bounded, limited by the available power. Thus, in the power-limited regime adding additional bandwidth is of little use.
Figure 1 illustrates these two regions for a fixed received power. It can be seen that for small
values of W, the capacity increases rapidly (almost linearly) with available bandwidth. Thus,
in this regime bandwidth is the main limit to capacity. But as bandwidth increases, we get to
a point of diminishing returns where adding more bandwidth is of little use. In this case, it is
the power that limits the capacity on the channel.
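This behavior can be checked numerically. The sketch below (P and N0 are illustrative values, not taken from the text) evaluates C = W log2[1 + P/(WN0)] for increasing W and compares it against the wideband limit (log2 e)·P/N0:

```python
import math

def capacity_bps(W, P, N0):
    """AWGN capacity (bps): bandwidth W in Hz, power P in W, noise PSD N0 in W/Hz."""
    return W * math.log2(1 + P / (W * N0))

P, N0 = 1e-6, 1e-12                     # illustrative received power and noise PSD
limit = math.log2(math.e) * P / N0      # power-limited (infinite-bandwidth) limit
for W in (1e3, 1e5, 1e7, 1e9):
    print(W, capacity_bps(W, P, N0))    # grows quickly at first, then saturates
print(limit)                            # ~1.44e6 bps: bound set by power alone
```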
For a multi-access channel shared by N users, where user k transmits with power Pk and achieves rate Rk, the capacity region is defined by the inequalities:

Rk ≤ C(Pk), ∀k ∈ {1,..,N}
Rk + Rj ≤ C(Pk + Pj), ∀k,j ∈ {1,..,N}
Σk∈S Rk ≤ C( Σk∈S Pk ), ∀S ⊂ {1,..,N}

where S is any set of indexes (subset of {1,..,N}), and the function C(x) = W log2[1 + x/(WN0)].
The above set of inequalities has a straightforward interpretation: for any subset of users, the
sum of their rates should be less or equal to the rate achieved by a single transmitter whose
power is the sum of the users powers. That is, the rate that would be achieved if we combined
the powers of the subset of users.
The capacity region is then the convex intersection of half-spaces in N-dimensional space. The boundary of this capacity region is the limit that different multi-access techniques can approach. The closer a technique gets to the capacity region’s boundary, the better the technique.
Let’s take a look at the family of TDMA/FDMA techniques. These techniques split the
channel in N orthogonal sub-channels, either in time (TDMA) or frequency (FDMA). The
split need not be even; any set of weights λ1, λ2, ..., λN such that Σ λk = 1 may be used.
For TDMA, user k will transmit a fraction λk of the time, and therefore it can use a power
Pk/λk such that its average (over time) power is equal to Pk. Therefore, the capacity achieved
during the transmission intervals will be equal to C(Pk/λk). Since that user can only transmit a
fraction λk of time, the average throughput it will achieve is λkC(Pk/λk).
For FDMA, user k is assigned a fraction λk of the bandwidth W and achieves a rate λkW log2[1 + Pk/(λkWN0)] = λkC(Pk/λk). Thus, both FDMA and TDMA can achieve the same transmission rates. But how do these rates compare against the optimal (capacity region) boundary? It is easy to verify that T/FDMA only achieves (intersects) the optimal boundary when λk = Pk / Σi Pi, that is, when the subchannel assignments are proportional to the users’ powers. Thus, FDMA and TDMA are
optimal only in a special case, which may not match the users’ traffic requirements. For
arbitrary powers and traffic requirements alternatives such as CDMA should be employed.
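The TDMA rates above can be checked numerically. The sketch below (W, N0 and the user powers are illustrative values) computes λkC(Pk/λk) for two users and verifies that the sum rate reaches the boundary C(P1 + P2) only when the time shares are proportional to the powers:

```python
import math

W, N0 = 1e6, 1e-9   # illustrative bandwidth (Hz) and noise PSD (W/Hz)

def C(x):
    return W * math.log2(1 + x / (W * N0))

def tdma_rates(powers, shares):
    """User k is active a fraction shares[k] of the time at boosted power P/share."""
    return [s * C(P / s) for P, s in zip(powers, shares)]

powers = [2e-3, 1e-3]
prop = [p / sum(powers) for p in powers]          # shares proportional to powers
print(sum(tdma_rates(powers, prop)), C(sum(powers)))        # equal: on the boundary
print(sum(tdma_rates(powers, [0.5, 0.5])), C(sum(powers)))  # even split: strictly inside
```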
For the two-user case (with total noise power N), the capacity region is defined by:

R1 ≤ C(P1) = W log2(1 + P1/N)
R2 ≤ C(P2) = W log2(1 + P2/N)
R1 + R2 ≤ C(P1 + P2) = W log2(1 + (P1 + P2)/N)
This capacity region is illustrated in Figure 2 (left), where the boundary defined by the three inequalities is labeled “MUD”, for MultiUser Detection, that is, the ability of a receiver to simultaneously decode two (or more) packets. In contrast, the curve labeled SUD
shows the capacity achieved by a radio that can only decode one packet at a time (Single
User Detection), as is the case of T/FDMA. The difference between SUD and MUD capacity
regions is shown in yellow.
Corner point 1: R2 is equal to its maximum value. Then, by the third inequality:
R1 ≤ W log2(1 + (P1 + P2)/N) − W log2(1 + P2/N) = W log2(1 + P1/(N + P2))
Corner point 2: R1 is equal to its maximum value, then by the third inequality:
R2 ≤ W log2(1 + (P1 + P2)/N) − W log2(1 + P1/N) = W log2(1 + P2/(N + P1))
These corner points have an intuitive interpretation. Let’s consider the corner point 1. The
rate achieved by user 1 is equal to the capacity of an equivalent single-user channel where the
signal from user 2 is regarded as noise (interference). As long as the rate of user 1 is below
the Shannon capacity of this channel (noise plus interference), it can be perfectly
decoded/recovered. Once the signal from user 1 has been recovered, it can be subtracted from
the received signal, leaving only the signal of user 2 plus the channel’s noise (i.e., remove all
interference). Therefore, user 2 can achieve its maximum rate, as if it was alone.
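This successive-decoding interpretation can be checked directly against the rate formulas (the numeric values below are illustrative): decoding user 1 with user 2 treated as noise, then user 2 interference-free, yields rates that sum exactly to the boundary value C(P1 + P2):

```python
import math

W, N = 1e6, 1e-3          # illustrative bandwidth (Hz) and noise power (W)
P1, P2 = 2e-3, 1e-3       # illustrative user powers

def C(p, noise):
    return W * math.log2(1 + p / noise)

# Corner point 1: user 1 decoded first, seeing user 2's signal as extra noise;
# user 2 is then decoded on the cleaned-up (interference-free) channel.
R1 = C(P1, N + P2)
R2 = C(P2, N)
print(R1, R2)
print(R1 + R2, C(P1 + P2, N))   # the two rates sum to the boundary value
```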
A simple interpretation of the corner points may lead to an overestimation of the difference between the optimal boundary values and the capacity achieved by T/FDMA. It should be considered, however, that achieving the capacity region is not trivial, and may require sophisticated radios and multiple-access schemes with coordination/synchronization between users. Thus, a feature that may not increase the capacity by much (e.g., MUD) may make achieving this capacity much easier, and in practice add a lot of value. Furthermore, it
may have a strong impact on the higher layers. For example, consider the results in Figure 2
(right), where the end-to-end throughput for a multi-hop network running a CSMA/CA MAC
is shown. Under SUD, lack of coordination causes packet collisions as load increases, exhibiting unstable ALOHA-like behavior. However, under MUD many of these collisions are recoverable, even without coordination among the nodes, and therefore the
throughput stabilizes at high loads. This results in a significant improvement of end-to-end throughput under high load, and in a less stringent requirement for the flow control/rate adaptation layer (e.g. TCP).
[Figure 2 shows (left) the two-user capacity region in the (R1, R2) plane, with corner points C1 and C2, comparing the MUD boundary (e.g. CDMA) against SUD (e.g. FDMA); and (right) measured end-to-end throughput (Mbps) versus offered load (0–10 Mbps) for MUD and SUD.]
Figure 2. Comparison of Single user (FDMA, TDMA) versus Multi-User Detection systems. (a) Capacity Region. (b)
Actual end-to-end throughput for an ad hoc network running 802.11 MAC.
2 Wireless Propagation
So far, we have discussed a simple “attenuation plus Gaussian noise channel”, which is
adequate for wire-based or even fixed wireless local loop communication, but fails to capture
the complexities introduced by a mobile wireless channel.
In this chapter we describe the properties of wireless signal propagation, first in a static setting and then in a mobile environment.
Reflection occurs when the transmitted wave encounters an object of large dimensions compared to its wavelength, such as large walls, metal cabinets, ceilings and furniture. Some of
the transmitted signal will be absorbed through this medium and the remaining will be
reflected off of the medium’s surface. The energy of the transmitted and reflected waves is a
function of the geometry and material properties of the obstruction and the amplitude, phase,
and polarization of the incident wave.
Scattering occurs when the transmitted wave encounters a large quantity of small dimension
objects such as metal cabinets, lamp posts, bushes, and trees. The reflected energy in
a scattering situation is spread in all directions before reaching the receiver.
Diffraction occurs when the surface of the obstruction has sharp edges producing secondary
waves that in effect bend around the obstruction. Like reflection, diffraction is affected by the
physical properties of the obstruction and the incident wave characteristics. In situations where the receiver is heavily obstructed, the diffracted waves may have sufficient strength to produce a useful signal.
The large-scale path loss can be modeled with the log-distance model: PL(d) [dB] = UL + 10 n log10(d) + F, where PL = path loss, UL = power loss (dB) at 1 m distance, n = path loss exponent, usually between 2 and 4, d = distance between transmitter and receiver in meters, and F is a zero-mean Gaussian random variable with variance σ² modeling path loss variations due to shadowing, a term that encompasses signal strength variations due to artifacts in the environment (i.e., occlusions, reflections, etc.). From practical measurements, σ is between 4 and 12 dB, with 8 dB being a “typical” value. Accordingly, received signal strengths (in dB) at locations that are at equal distance from the transmitter are modeled as i.i.d. normal random variables.
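A minimal simulation sketch of this log-distance shadowing model follows; the reference loss UL and the default parameter values are illustrative choices, not measurements from the text:

```python
import math
import random

def pathloss_db(d, ul=40.0, n=3.0, sigma=8.0, rng=random):
    """Log-distance path loss with log-normal shadowing.

    ul    : loss (dB) at the 1 m reference distance (illustrative value)
    n     : path loss exponent, typically between 2 and 4
    sigma : shadowing standard deviation in dB (4-12 dB, 8 dB "typical")
    """
    return ul + 10 * n * math.log10(d) + rng.gauss(0.0, sigma)

rng = random.Random(1)
# Two receivers at the same distance see i.i.d. shadowing realizations:
print(pathloss_db(100, rng=rng), pathloss_db(100, rng=rng))
# The mean loss grows by 10*n dB per decade of distance (30 dB here):
print(40.0 + 10 * 3.0 * math.log10(1000))   # 130.0 dB mean loss at 1 km
```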
While the above pathloss expression accounts for signal variations over large scales, the
received signal strength can vary considerably over small distances (in the order of λ) and
small time scales, due to multipath fading (see below). As a result, path loss can exhibit wide variations even when the distance (d) changes by as little as a few centimeters.
Multipath propagation is a fact of life in any terrestrial radio scenario. While the direct or line
of sight path is normally the main wanted signal, a radio receiver will receive many signals
resulting from the signal taking a large number of different paths. These paths may be the
result of reflections from buildings, mountains or other reflective surfaces including water,
etc. that may be adjacent to the main path. Additionally other effects such as ionospheric
reflections give rise to multipath propagation as does tropospheric ducting.
The multipath propagation resulting from the variety of signal paths that may exist between
the transmitter and receiver can give rise to interference in a variety of ways including
distortion of the signal, loss of data and multipath fading.
At other times, the variety of signal paths arising from the multipath propagation can be used
to advantage. Schemes such as MIMO use multipath propagation to increase the capacity of
the channels they use. With increasing requirements for spectrum efficiency, the use of multipath propagation by technologies such as MIMO can provide significant, much-needed improvements in channel capacity.
At times there will be changes in the relative path lengths. This could result from either the
transmitter or receiver moving, or any of the objects that provides a reflective surface
moving. This will result in the phases of the signals arriving at the receiver changing, and in
turn this will result in the signal strength varying. It is this that causes the fading that is
present on many signals.
It can also be found that the interference may be flat, i.e. applied to all frequencies equally across a given channel, or it may be selective, i.e. applying more to some frequencies across a channel than others.
One effect of multipath is particularly obvious when driving in a car and listening to an FM radio: at certain points the signal will become distorted and appear to break up. This arises from the fact that the signal is frequency modulated and, at any given time, the frequency of the received signal provides the instantaneous voltage for the audio output. If multipath
propagation occurs, then two or more signals will appear at the receiver. One is the direct or
line of sight signal, and another is a reflected signal. As these will arrive at different times
because of the different path lengths, they will have different frequencies, caused by the fact
that the two signals have been transmitted by the transmitter at slightly different times.
Accordingly when the two signals are received together, distortion can arise if they have
similar signal strength levels.
Another form of multipath propagation interference that arises in digital transmissions is known as Inter-Symbol Interference, ISI. It arises from the delay caused by the extended path length of the reflected signal. If the delay is a significant proportion of a symbol, then the receiver may receive the direct signal indicating one logical state and a delayed signal indicating another. If this occurs, the data can be corrupted.
One way of overcoming this is to transmit the data at a rate low enough that the signal is sampled only after all the reflections have arrived and the data is stable. This naturally limits the rate at which data can be transmitted, but ensures that the data is not corrupted and the bit error rate is minimized. To do this, the required guard time must be calculated using estimates of the maximum delays that are likely to be encountered from reflections.
Using the latest signal processing techniques, a variety of methods can be used to overcome
the problems with multipath propagation and the possibilities of interference.
Multipath fading occurs in any environment where there is multipath propagation and there is
some movement of elements within the radio communications system. This may include movement of the radio transmitter or receiver, or of the elements that give rise to the reflections. The
multipath fading can often be relatively deep, i.e. the signals fade completely away, whereas
at other times the fading may not cause the signal to fall below a useable strength.
Multipath fading may also cause distortion to the radio signal. As the various paths that can
be taken by the signals vary in length, the signal transmitted at a particular instance will
arrive at the receiver over a spread of times. This can cause problems with phase distortion
and intersymbol interference when data transmissions are made. As a result, it may be necessary to incorporate features within the radio communications system that enable the effects of these problems to be minimized.
The overall signal at the radio receiver is a summation of the variety of signals being
received. As they all have different path lengths, the signals will add and subtract from the
total dependent upon their relative phases.
As noted earlier, movement of the transmitter, the receiver, or any object providing a reflective surface changes the relative path lengths, and hence the phases with which the arriving signals sum, causing the received signal strength to vary. It is this that causes the fading present on many signals.
Often the multipath fading that affects cellular phones is known as fast fading because it
occurs over a relatively short distance. Slow fading occurs as a cell phone moves behind an
obstruction and the signal slowly fades out.
To move from a signal being in phase to a signal being out of phase is equivalent to increasing the path length by half a wavelength; at 2 GHz (λ = 0.15 m) this is only 0.075 m, or 7.5 cm. This is a very simplified example; in reality the situation is far more complicated, with signals being received via many paths. However, it does give an indication of the distances involved in changing from an in-phase to an out-of-phase situation.
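The distance involved is straightforward to compute for any carrier; the sketch below reproduces the 7.5 cm half-wavelength figure, which corresponds to a 2 GHz carrier:

```python
C_LIGHT = 3e8  # speed of light, m/s

def half_wavelength_m(freq_hz):
    """Path-length change that flips a signal from in phase to out of phase."""
    return (C_LIGHT / freq_hz) / 2

print(half_wavelength_m(2e9))    # 0.075 m: 7.5 cm at 2 GHz
print(half_wavelength_m(900e6))  # ~0.167 m at 900 MHz: lower carrier, larger distance
```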
When signals are propagated via the ionosphere it is possible for the energy to be propagated
from the transmitter to the receiver via very many different paths. Simple diagrams show a
single ray or path that the signal takes. In reality the profile of the electron density of the
ionosphere (it is the electron density profile that causes the signals to be refracted) is not
smooth and as a result any signals entering the ionosphere will be scattered and will take a
variety of paths to reach a particular receiver. With changes in the ionosphere causing the
path lengths to change, this will result in the phases changing and the overall summation at
the receiver changing.
The changes in the ionosphere arise from a number of factors. One is that the levels of ionization vary; these changes normally occur relatively slowly, but nevertheless have an effect. In addition to this there are winds or air movements in the ionosphere. As the
levels of ionization are not constant, any air movement will cause changes in the profile of
the electron density in the ionosphere. In turn this will affect the path lengths.
When signals are ducted in the troposphere, they will be subject to multipath fading. Here, heat
rising from the Earth's surface will ensure that the path is always changing and signals will
vary in strength. Typically these changes may be relatively slow with signals falling and
rising in strength over a period of a number of minutes.
• Flat fading: This form of multipath fading affects all the frequencies across a given
channel either equally or almost equally. When flat multipath fading is experienced, the
signal will just change in amplitude, rising and falling over a period of time, or with
movement from one position to another.
• Selective fading: Selective fading occurs when the multipath fading affects different
frequencies across the channel to different degrees. It will mean that the phases and
amplitudes of the signal will vary across the channel. Sometimes relatively deep nulls
may be experienced, and this can give rise to some reception problems. Simply
maintaining the overall amplitude of the received signal will not overcome the effects of
selective fading, and some form of equalization may be needed. Some digital signal formats, e.g. OFDM, are able to spread the data over a wide channel so that only a portion of the data is lost to any nulls. This portion can be reconstituted using forward error correction techniques, mitigating the effects of selective multipath fading.
Selective fading occurs because even though the path length changes by the same physical
length (e.g. the same number of meters, yards, miles, etc.), this represents – for different
frequencies -- a different proportion of a wavelength. Thus, the phase will change across the
bandwidth used.
Selective fading can occur over many frequencies. It can often be noticed when medium
wave broadcast stations are received in the evening via ground wave and skywave. The
phases of the signals received via the two means of propagation change with time and this
causes the overall received signal to change. As multipath fading is very dependent on path length, even the frequencies within the bandwidth of a single AM broadcast signal are affected differently, and distortion results.
The coherence bandwidth Δfc of the channel is the frequency separation beyond which two sinusoids are affected differently by the channel. If Δfc is large compared to the bandwidth of the transmitted signal, the channel is said to be narrowband, and it will experience flat fading.
But, if Δfc is small with respect to the bandwidth of the signal, the channel is said to be
wideband and it will be frequency selective. In this case the signal is severely affected by the
channel. Note that with high data rate signals becoming commonplace, wider bandwidths are
needed and wideband channels are becoming the norm. As a result several nulls and peaks
may occur across the bandwidth of a single signal. To address this issue, modern modulation
techniques such as OFDM (see Subsection 2.6.2.1) have been developed.
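A simple classifier for the narrowband/wideband distinction can be sketched as below. It assumes the common rule of thumb Δfc ≈ 1/(5·τrms), where τrms is the RMS delay spread; this approximation is my assumption, not something stated in the text:

```python
def coherence_bandwidth_hz(rms_delay_spread_s):
    """Rule-of-thumb estimate (assumed): dfc ~ 1 / (5 * RMS delay spread)."""
    return 1.0 / (5.0 * rms_delay_spread_s)

def channel_type(signal_bw_hz, rms_delay_spread_s):
    dfc = coherence_bandwidth_hz(rms_delay_spread_s)
    return "flat (narrowband)" if dfc > signal_bw_hz else "selective (wideband)"

# An indoor-like RMS delay spread of 50 ns gives dfc = 4 MHz:
print(channel_type(1e6, 50e-9))    # 1 MHz signal: flat fading
print(channel_type(20e6, 50e-9))   # 20 MHz signal: frequency selective
```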
When the signals reach the receiver, the overall signal is a combination of all the signals that
have reached the receiver via the multitude of different paths that are available. These signals
will all sum together, the phase of the signal being important. Dependent upon the way in
which these signals sum together, the signal will vary in strength. If they were all in phase
with each other they would all add together. However this is not normally the case, as some
will be in phase and others out of phase, depending upon the various path lengths, and
therefore some will tend to add to the overall signal, whereas others will subtract.
As there is often movement of the transmitter or the receiver this can cause the path lengths
to change and accordingly the signal level will vary. Additionally if any of the objects being
used for reflection or refraction of any part of the signal moves, then this too will cause
variation. This occurs because some of the path lengths will change and in turn this will mean
their relative phases will change, giving rise to a change in the summation of all the received
signals.
The Rayleigh fading model can be used to analyze radio signal propagation on a statistical
basis. It operates best under conditions when there is no dominant signal (e.g. direct line of
sight signal), and in many instances cellular telephones being used in a dense urban
environment fall into this category. Other examples where no dominant path generally exists
are for ionospheric propagation where the signal reaches the receiver via a huge number of
individual paths. Propagation using tropospheric ducting also exhibits the same patterns.
Accordingly all these examples are ideal for the use of the Rayleigh fading or propagation
model.
Let X be a random variable denoting the amplitude of the received signal. X is the result of combining the in-phase (NI) and quadrature (NQ) components: X = √(NI² + NQ²). Under the assumption that the received signal is the sum of many reflected signals of similar amplitude, by the central limit theorem both NI and NQ are Gaussian zero-mean random variables with variance σ². In that case, X is Rayleigh distributed, that is, it has a probability density function (pdf) equal to:

fX(x) = (x/σ²) e^(−x²/(2σ²)), x ≥ 0
Y = X²/2, the power of the received signal, has an exponential distribution with mean σ², that is:

fY(y) = (1/σ²) e^(−y/σ²), y ≥ 0
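A quick simulation of this construction (sampling NI and NQ as zero-mean Gaussians and forming the amplitude) confirms that the mean received power E{Y} = E{X²/2} matches σ²:

```python
import math
import random

def rayleigh_sample(sigma, rng):
    """Amplitude X = sqrt(NI^2 + NQ^2) of Gaussian in-phase/quadrature components."""
    ni = rng.gauss(0.0, sigma)
    nq = rng.gauss(0.0, sigma)
    return math.hypot(ni, nq)

rng = random.Random(42)
sigma = 1.0
xs = [rayleigh_sample(sigma, rng) for _ in range(100_000)]
mean_power = sum(x * x / 2 for x in xs) / len(xs)
print(mean_power)   # close to sigma^2 = 1.0: Y = X^2/2 is exponential with mean sigma^2
```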
2.4.2 Rician fading
Rician fading is a stochastic model for radio propagation anomaly caused by partial
cancellation of a radio signal by itself — the signal arrives at the receiver by several different
paths (hence exhibiting multipath interference), and at least one of the paths is changing
(lengthening or shortening). Rician fading occurs when one of the paths, typically a line-of-sight signal, is much stronger than the others. Rayleigh fading is the specialized model for stochastic fading when there is no line-of-sight signal, and is sometimes considered a special case of the more general Rician fading.
Channel characterization
In the Rician case, NI = m1 + η1 and NQ = m2 + η2, where η1 and η2 are independent zero-mean Gaussian random variables, each with variance σ².
A Rician fading channel can be described by two parameters: K and Ω. K is the ratio between the power in the direct path (the specular component, ν² = m1² + m2²) and the power in the other, scattered, paths (2σ²). Ω is the total power from both paths (Ω = ν² + 2σ²), and acts as a scaling factor to the distribution.
The received signal amplitude (not the received signal power) R is then Rice distributed with parameters ν² = K/(K+1)·Ω and σ² = Ω/[2(K+1)]. The resulting pdf is:

f(x) = [2(K+1)x/Ω] · exp(−K − (K+1)x²/Ω) · I0( 2x·√(K(K+1)/Ω) ), x ≥ 0
where I0(.) is the 0-th order modified Bessel function of the first kind. Note that for K=0 (no
direct path), the resulting PDF becomes the Rayleigh PDF.
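The same construction extends to the Rician case by adding the specular component: the sketch below (K and Ω are chosen purely for illustration, with the specular power placed on the in-phase branch) derives ν and σ from K and Ω and checks that the simulated mean power equals Ω = ν² + 2σ²:

```python
import math
import random

def rician_sample(nu, sigma, rng):
    """Amplitude with a deterministic specular component nu plus Gaussian scatter."""
    ni = nu + rng.gauss(0.0, sigma)   # take m1 = nu, m2 = 0 without loss of generality
    nq = rng.gauss(0.0, sigma)
    return math.hypot(ni, nq)

K, omega = 4.0, 2.0                        # illustrative Rician parameters
nu = math.sqrt(K / (K + 1) * omega)        # specular power nu^2 = K/(K+1) * omega
sigma = math.sqrt(omega / (2 * (K + 1)))   # scattered power 2*sigma^2 = omega/(K+1)

rng = random.Random(0)
rs = [rician_sample(nu, sigma, rng) for _ in range(100_000)]
mean_power = sum(r * r for r in rs) / len(rs)
print(mean_power)   # close to omega: total power nu^2 + 2*sigma^2
```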
Nakagami fading occurs for multipath scattering with relatively large delay-time spreads,
with different clusters of reflected waves. Within any one cluster, the phases of individual
reflected waves are random, but the delay times are approximately equal for all waves. As a
result the envelope of each cumulated cluster signal is Rayleigh distributed. The average time
delay is assumed to differ significantly between clusters. If the delay times also significantly
exceed the bit time of a digital link, the different clusters produce serious intersymbol
interference, so the multipath self-interference then approximates the case of co-channel
interference by multiple incoherent Rayleigh-fading signals.
Let X be a random variable representing the amplitude of the received signal. If we assume that the received signal is the result of adding multiple independent and identically distributed (i.i.d.) Rayleigh-fading signals, then X is Nakagami distributed, as follows:
f(x) = [2 m^m / (Γ(m) Ω^m)] · x^(2m−1) · e^(−m x²/Ω), x ≥ 0

where Ω = E(X²), and m is defined as the ratio of moments, called the fading figure: m = Ω² / E[(X² − Ω)²].
When the source of the waves is moving toward the observer, each successive wave crest is
emitted from a position closer to the observer than the previous wave. Therefore, each wave
takes slightly less time to reach the observer than the previous wave. Hence, the time between
the arrivals of successive wave crests at the observer is reduced, causing an increase in the
frequency. While they are travelling, the distance between successive wave fronts is reduced,
so the waves "bunch together". Conversely, if the source of waves is moving away from the
observer, each wave is emitted from a position farther from the observer than the previous
wave, so the arrival time between successive waves is increased, reducing the frequency. The
distance between successive wave fronts is then increased, so the waves "spread out".
Since electromagnetic waves do not require a medium to propagate, only the relative
difference in velocity (v) between the observer and the source needs to be considered to
calculate the doppler shift Δf. Let ϕ be the angle between the line connecting the mobile and
sender and the direction of the velocity vector v. Then Δf = (v cos ϕ) f/c. For example, for f = 1.8 GHz, v = 4 mph (pedestrian speed), and ϕ = 0, the Doppler shift is approximately Δf ≈ 10 Hz, while for v = 40 mph (vehicular speed) it is approximately Δf ≈ 100 Hz.
Doppler shifts require synchronization circuitry (usually a Phase-Locked Loop) at the receiver to recover the signal frequency.
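The shift formula and the two examples above can be reproduced directly (the values come out near 10 Hz and 100 Hz because 4 mph ≈ 1.79 m/s):

```python
import math

C_LIGHT = 3e8        # speed of light, m/s
MPH_TO_MPS = 0.44704

def doppler_shift_hz(speed_mps, carrier_hz, phi_rad=0.0):
    """Doppler shift df = (v * cos(phi)) * f / c."""
    return speed_mps * math.cos(phi_rad) * carrier_hz / C_LIGHT

print(doppler_shift_hz(4 * MPH_TO_MPS, 1.8e9))    # ~10.7 Hz, pedestrian speed
print(doppler_shift_hz(40 * MPH_TO_MPS, 1.8e9))   # ~107 Hz, vehicular speed
print(doppler_shift_hz(40 * MPH_TO_MPS, 1.8e9, math.pi / 2))  # ~0: motion perpendicular
```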
Doppler spread
Slow fading arises when the coherence time of the channel is large relative to the delay
constraint of the channel. In this regime, the amplitude and phase change imposed by the
channel can be considered roughly constant over the period of use. Slow fading can be caused
by events such as shadowing, where a large obstruction such as a hill or large building
obscures the main signal path between the transmitter and the receiver. The received power
change caused by shadowing is often modeled using a log-normal distribution with a standard
deviation according to the log-distance path loss model, as discussed in Subsection 2.1.
Fast fading occurs when the coherence time of the channel is small relative to the delay
constraint of the channel. In this case, the amplitude and phase change imposed by the
channel varies considerably over the period of use. Rayleigh fading is typically fast.
The coherence time of the channel is related to the Doppler spread of the channel, that is, the
difference in Doppler shifts between different signal components contributing to a single
fading channel tap (see previous subsection and Figure 4). Channels with a large Doppler
spread have signal components that are each changing independently in phase over time.
Since fading depends on whether signal components add constructively or destructively, such
channels have a very short coherence time. In general, coherence time is inversely related to the Doppler spread.
Figure 5 shows two instantiations of Fast Rayleigh fading, as experienced by two receivers
moving at different speeds: 6Km/h (walking speed) and 60Km/h (vehicle driving speed). The
figure shows the received power (dB with respect to the average RMS) versus time, for 1
second.
Figure 5. One second of (Fast) Rayleigh, with a maximum Doppler Shift of (a)10Hz and (b)100Hz.
At pedestrian speed (Figure 5 left) the observed Doppler spread was 10Hz. Observing the
channel received power over the one-second interval, we notice 10 instances of deep fading
(fading of 10 dB below the average). Thus, while the coherence time is not constant (the
observed signal is not perfectly periodic), we observe that its average value is highly
correlated to the inverse of the Doppler spread. At vehicular speed (Figure 5 right) the observed Doppler spread was 100Hz. Observing the channel received power over the one-second interval, we again see a high correlation between the channel coherence time (in this case on the order of 1/100Hz = 10msec) and the inverse of the Doppler spread. While not perfectly periodic, the deep-fade instants exhibit a high degree of time regularity.
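The regularity of deep fades can be reproduced with a simple sum-of-sinusoids (Clarke-style) channel simulator. This is an illustrative sketch, not the simulator used to generate Figure 5; the number of paths, sampling rate, and seed are arbitrary choices:

```python
import numpy as np

def rayleigh_fading(f_d, duration, fs, n_paths=64, seed=0):
    """Clarke-style sum-of-sinusoids Rayleigh fading envelope (unit mean power).
    Each path arrives at a random angle, contributing a Doppler shift
    of f_d * cos(angle)."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, 1.0 / fs)
    theta = rng.uniform(0, 2 * np.pi, n_paths)  # angles of arrival
    phi = rng.uniform(0, 2 * np.pi, n_paths)    # initial phases
    phases = 2 * np.pi * f_d * np.cos(theta)[:, None] * t[None, :] + phi[:, None]
    h = np.exp(1j * phases).sum(axis=0) / np.sqrt(n_paths)
    return t, np.abs(h)

t, env = rayleigh_fading(f_d=10.0, duration=1.0, fs=1000.0)
power_db = 20 * np.log10(env)
# count downward crossings of the -10 dB deep-fade threshold over 1 second;
# with a 10 Hz Doppler spread this lands near 10 fades, as in Figure 5 (left)
deep_fades = int(np.sum((power_db[1:] < -10) & (power_db[:-1] >= -10)))
```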
Left untreated, fast fading can have a devastating effect on communication performance. For example, Figure 6 shows the bit error probability pe of an uncoded BPSK signal under white noise, with and without fading. If there is no fading (AWGN channel, solid line), the error probability decreases very rapidly (exponentially) with SNR: error probabilities on the order of 10^-12 can be achieved with an SNR of around 15dB. However, if the channel experiences fading (Rayleigh channel, dashed and dotted lines), the error probability decreases much more slowly (linearly) with respect to SNR. An SNR of about 40dB is needed to achieve an error probability of 10^-4 (unacceptably high), and to obtain an error rate on the order of 10^-12, we would need an SNR of around 120dB!
The reason such a high SNR is needed is that under fast fading the error probability is dominated by the occurrence of deep fades. That is, for high SNR, bit errors mostly occur during periods of deep fade, so the error rate is approximately equal to the probability of a deep fade. In order to reduce the error probability, we need to increase the signal power to the point that the signal survives a deep fade. Given that the received power is exponentially distributed (Rayleigh fading), the probability of a deep fade, and hence the error probability, decreases only inversely with the average SNR.
Figure 6. Error probability for simple uncoded BPSK modulation under AWGN with and without fading.
Fortunately, most of the losses due to fading can be recovered thanks to diversity coding, as discussed in Section 2.6.2.
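The AWGN-versus-Rayleigh contrast of Figure 6 follows from the standard closed-form BER expressions for uncoded BPSK. The sketch below uses these textbook formulas (they are not taken from Figure 6 itself):

```python
import math

def ber_bpsk_awgn(snr_db):
    """Exact BER of uncoded BPSK in AWGN: Q(sqrt(2*snr)) = 0.5*erfc(sqrt(snr)).
    Decays exponentially with SNR."""
    snr = 10 ** (snr_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(snr))

def ber_bpsk_rayleigh(snr_db):
    """Average BER of uncoded BPSK over Rayleigh fading; for large
    average SNR this behaves like 1/(4*snr), i.e. only linearly in dB."""
    snr = 10 ** (snr_db / 10.0)
    return 0.5 * (1.0 - math.sqrt(snr / (1.0 + snr)))
```

At 15 dB the AWGN expression is already far below 10^-12, while the Rayleigh expression at 40 dB is still in the 10^-5 range, matching the qualitative behavior described above.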
In a fast-fading channel, the transmitter may take advantage of the variations in the channel
conditions using time diversity to help increase robustness of the communication to a
temporary deep fade. Although a deep fade may temporarily erase some of the information
transmitted, use of an error-correcting code coupled with successfully transmitted bits during
other time instances (interleaving) can allow for the erased bits to be recovered. In a slow-
fading channel, it is not possible to use time diversity because the transmitter sees only a
single realization of the channel within its delay constraint. A deep fade therefore lasts the
entire duration of transmission and cannot be mitigated using coding.
• Time diversity: the simplest form of time diversity is to retransmit the affected packet, a case of repetition coding. A more sophisticated technique involves the use of (k, n) coding, where a message is coded into n blocks such that any set of k blocks (k < n) is sufficient to recover the message. Then, as long as the number of blocks experiencing fading is at most (n-k), the message will be recovered. A similar (and older) technique is bit interleaving in conjunction with bit-level error-correcting codes.
• Frequency diversity: using modulation schemes such as OFDM that allow treating the channel as a set of n subchannels, where each subchannel experiences a different fading level (selective fading). Using (k, n) coding, a message can be coded into n blocks, each transmitted via a different subchannel. As long as the number of subchannels experiencing fading is at most (n-k), the message will be recovered.
• Space diversity: use of two or more antennas with enough separation (on the order of a wavelength) to experience independent fading. The likelihood that the signals received on all the antennas simultaneously experience a deep fade decreases exponentially with the number of antennas.
Note that combinations of the above techniques are also possible. For example, space-time diversity consists of mapping a message into n blocks and then sequentially transmitting the blocks, alternating the antenna used in each transmission.
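The deep-fade and erasure arguments above reduce to short probability calculations. The sketch below is illustrative (the fade probabilities are assumed values, not from the text); it computes the chance that all diversity branches fade at once, and the message-loss probability under (k, n) block coding:

```python
from math import comb

def deep_fade_prob(p_single, n_branches):
    """Probability that all n independent diversity branches are in a
    deep fade at the same time: p^n, i.e. exponential decrease in n."""
    return p_single ** n_branches

def message_loss_prob(p_block_fade, n, k):
    """(k, n) coding: the message is lost only if more than n - k of the
    n blocks are wiped out by fading (binomial tail probability)."""
    return sum(comb(n, i) * p_block_fade ** i * (1 - p_block_fade) ** (n - i)
               for i in range(n - k + 1, n + 1))
```

For example, with a 10% per-block fade probability, a (3, 5) code reduces the message-loss probability from 0.1 to below 0.01.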
2.6.2.1 OFDM
In order to transmit large amounts of data over a radio channel, it is necessary to choose the most appropriate signal bearer format. One form of signal that lends itself to radio data transmission in environments where reflections may be present is Orthogonal Frequency Division Multiplexing (OFDM). An OFDM signal comprises a large number of carriers, each of which is modulated with a low-bit-rate data stream. In this way the two conflicting requirements, a high aggregate data rate to meet capacity requirements and a low per-carrier bit rate to meet intersymbol interference1 requirements, can both be met.
OFDM is the modulation format used in many of today's data transmission systems. Applications include 802.11n Wi-Fi, LTE (Long Term Evolution), LTE Advanced (4G), WiMAX, and many more. The fact that OFDM is so widely used demonstrates how well it overcomes multipath propagation problems. Although OFDM is more complicated than earlier signal formats, it provides distinct advantages for data transmission, especially where high data rates are needed along with relatively wide bandwidths.
Unlike classic FDM, which requires guard bands between carriers, OFDM allows the sidebands from each carrier to overlap. The carriers can still be received without the interference that might be expected because they are orthogonal to one another. This is achieved by making the carrier spacing equal to the reciprocal of the symbol period.
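The orthogonality condition (carrier spacing equal to the reciprocal of the symbol period) can be verified numerically. This is an illustrative sketch; the symbol period and sample count are arbitrary choices:

```python
import numpy as np

T = 1.0            # symbol period (arbitrary units)
n_samples = 64
t = np.arange(n_samples) / n_samples * T

def subcarrier(k):
    # complex exponential at frequency k/T: adjacent carriers are spaced 1/T
    return np.exp(2j * np.pi * k * t / T)

def inner(a, b):
    # normalized inner product over one symbol period
    return np.vdot(a, b) / n_samples

same = inner(subcarrier(3), subcarrier(3))   # -> 1: unit energy
cross = inner(subcarrier(3), subcarrier(4))  # -> 0: overlapping but orthogonal
```

Any pair of distinct carriers integrates to zero over the symbol period, which is why the overlapping sidebands do not interfere.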
1
Multipath propagation causes the reception of multiple delayed copies of the same symbol. If the delay spread is greater than the symbol duration, a delayed copy of a symbol will interfere with the signal of the next symbol. This is referred to as Intersymbol Interference (ISI), which is computationally intensive to address at high data rates. Note also that a long symbol duration, besides limiting the impact of ISI, results in a subchannel narrow enough to experience flat fading.
One requirement of the OFDM transmitting and receiving systems is that they must be linear.
Any non-linearity will cause interference between the carriers as a result of inter-modulation
distortion. This will introduce unwanted signals that would cause interference and impair the
orthogonality of the transmission.
In terms of equipment, the high peak-to-average power ratio of multi-carrier systems such as OFDM requires the RF final amplifier at the transmitter output to handle the peaks while the average power is much lower, and this leads to inefficiency. In some systems the peaks are limited. Although this clipping introduces distortion that results in a higher level of data errors, the system can rely on error correction to remove them.
Data on OFDM
The data transmitted on an OFDM signal is spread across the carriers, each carrier taking part of the payload. This reduces the data rate carried by each carrier, with the advantage that interference from reflections is much less critical. Robustness is further improved by adding a guard time, or guard interval, to each symbol. This ensures that the data is only sampled when the signal is stable and no newly arriving delayed copies would alter the timing and phase of the signal.
OFDM advantages
OFDM has been used in many high data rate wireless systems because of the many
advantages it provides.
• Immunity to selective fading: One of the main advantages of OFDM is that it is more resistant to frequency-selective fading than single-carrier systems, because it divides the overall channel into multiple narrowband signals that are affected individually as flat-fading sub-channels.
• Resilience to interference: Interference appearing on a channel may be bandwidth
limited and in this way will not affect all the sub-channels. This means that not all the
data is lost.
• Spectrum efficiency: Using close-spaced overlapping sub-carriers, a significant
OFDM advantage is that it makes efficient use of the available spectrum.
• Resilient to ISI: Another advantage of OFDM is that it is very resilient to inter-
symbol and inter-frame interference. This results from the low data rate on each of the
sub-channels.
• Resilient to narrow-band effects: Using adequate channel coding and interleaving it
is possible to recover symbols lost due to the frequency selectivity of the channel and
narrow band interference. Not all the data is lost.
• Simpler channel equalization: One of the issues with CDMA systems was the
complexity of the channel equalization which had to be applied across the whole
channel. An advantage of OFDM is that using multiple sub-channels, the channel
equalization becomes much simpler.
OFDM disadvantages
Whilst OFDM has been widely used, there are still a few disadvantages to its use which need
to be addressed when considering its use.
• High peak to average power ratio: An OFDM signal has a noise-like amplitude variation and a relatively large dynamic range, or peak-to-average power ratio2. This impacts RF amplifier efficiency: the amplifier must be linear and accommodate the large amplitude variations, and these factors mean it cannot operate at a high efficiency level.
2
To visualize why this is the case, consider the case where every subchannel uses BPSK modulation (1 bit) and transmits the same symbol, a 1. The IFFT of the all-1's sequence is an impulse: a single sample carries all the energy, producing a very high peak.
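The footnote's all-1's example can be made concrete: the IFFT of a constant sequence is an impulse, so the peak-to-average power ratio (PAPR) equals the number of subcarriers, while a random symbol sequence gives a much lower PAPR. (Illustrative sketch; N and the random seed are arbitrary choices.)

```python
import numpy as np

N = 64  # number of subcarriers

def papr(symbols):
    """Peak-to-average power ratio of the time-domain OFDM symbol."""
    x = np.fft.ifft(symbols)
    p = np.abs(x) ** 2
    return p.max() / p.mean()

worst = papr(np.ones(N))  # all subcarriers send '1' -> impulse, PAPR = N
rng = np.random.default_rng(1)
typical = papr(rng.choice([-1.0, 1.0], N))  # random BPSK symbols: far lower PAPR
```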
OFDM variants
There are several other variants of OFDM whose initials appear in the technical literature. These follow the basic OFDM format but have additional attributes or variations.
2.6.2.2 MIMO
While multipath propagation creates interference for many radio communications systems, it can also be used to advantage to provide additional capacity on a given channel. Using a scheme known as MIMO (multiple input, multiple output), it is possible to multiply the data capacity of a given channel several times by exploiting the multipath propagation that exists.
In view of the advantages that MIMO offers, many current wireless and radio
communications schemes are using it to make far more efficient use of the available
spectrum. The disadvantage of MIMO is that it requires multiple antennas, and with modern portable equipment such as cell phones being increasingly small, it can be difficult to place two sufficiently spaced antennas on them.
Figure 9 shows an M×N MIMO system. Note that hij represents the channel gain from the sender's antenna i to the receiver's antenna j. For the case of fast Rayleigh fading, and when the separation of the antennas is on the order of the carrier wavelength, we can assume that the set of hij are i.i.d. Rayleigh-distributed random variables. Let's start by considering the case where the transmitter only uses antenna 1. At the receiver, the signal received at antenna k is equal to h1k s(t) + noise. If we just add these N signals, the expected value of the combined power is equal to (Σk |h1k|²) E[|s(t)|²]. Approximating Σk |h1k|² ≈ maxk |h1k|², which holds when the channel gains are fairly diverse and the maximum gain therefore dominates the received power, we get that the probability of a deep fade is equal to the probability of all channels experiencing a deep fade at the same instant. Thus, if pf is the probability that a single (source, destination) antenna pair experiences a deep fade, then the probability that the combined received signal experiences a deep fade is pf^N; that is, the probability of deep fade decreases exponentially with the number of antennas.
Note that the above analysis is very conservative. MIMO receivers do not simply add up the
signals they receive on their antennas, they employ techniques such as Maximal Ratio
Combining (MRC) to “tune” to the sender. MRC is a method of diversity combining in
which:
• the signals from each channel are added together,
• the gain of each channel is made proportional to the rms signal level and inversely
proportional to the mean square noise level in that channel.
• different proportionality constants are used for each channel.
Note that MRC requires neither coordination between sender and receiver nor channel state information at the transmitter; it only requires information locally available at the receiver. The main insight behind MRC is to assign each received signal a weight proportional to the SNR (or confidence level) of that signal. This way, the signal with the highest SNR (the most reliable signal) is given preference.
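The exponential decrease in deep-fade probability with receive antennas can be illustrated with a small Monte-Carlo sketch. The combining below uses the MRC property that the output SNR is the sum of the branch SNRs; the average SNR, fade margin, and trial count are assumptions, not values from the text:

```python
import numpy as np

def outage_prob(n_rx, snr_avg_db=10.0, fade_margin_db=10.0,
                trials=200_000, seed=0):
    """Monte-Carlo estimate of the probability that the post-MRC SNR falls
    fade_margin_db below the average single-branch SNR.
    With Rayleigh fading, each branch SNR is exponentially distributed."""
    rng = np.random.default_rng(seed)
    snr_avg = 10 ** (snr_avg_db / 10)
    branch_snr = rng.exponential(snr_avg, size=(trials, n_rx))
    mrc_snr = branch_snr.sum(axis=1)  # MRC output SNR = sum of branch SNRs
    threshold = snr_avg * 10 ** (-fade_margin_db / 10)
    return float(np.mean(mrc_snr < threshold))

p1, p2, p4 = outage_prob(1), outage_prob(2), outage_prob(4)
# outage drops by roughly an order of magnitude per added antenna
```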
3 Multiple Access
3.1 Duplexing (FDD and TDD)
In half duplex, the two communicating parties take turns transmitting over a shared channel.
Two-way radios work this way. As one party talks, the other listens. Speaking parties often
say “Over” to indicate that they’re finished and it’s time for the other party to speak. In
networking, a single cable is shared as the two computers communicating take turns sending
and receiving data.
In a cell phone using frequency-division duplexing (FDD), where the transmitter and receiver operate simultaneously in such close proximity, the receiver must filter out as much of the transmitter's signal as possible. The greater the spectrum separation, the more effective the filters.
FDD uses lots of frequency spectrum, though, generally at least twice the spectrum needed by
TDD. In addition, there must be adequate spectrum separation between the transmit and
receive channels. These so-called guard bands aren’t useable, so they’re wasteful. Given the
scarcity and expense of spectrum, these are real disadvantages.
However, FDD is very widely used in cellular telephone systems such as GSM. In some systems the 25-MHz band from 869 to 894 MHz is used as the downlink (DL) spectrum from the cell-site tower to the handset, and the 25-MHz band from 824 to 849 MHz is used as the uplink (UL) spectrum from the handset to the cell site.
Another disadvantage with FDD is the difficulty of using special antenna techniques like
multiple-input multiple-output (MIMO) and beamforming. These technologies are a core part
of the new Long-Term Evolution (LTE) 4G cell phone strategies for increasing data rates. It
is difficult to make antenna bandwidths broad enough to cover both sets of spectrum. More
complex dynamic tuning circuitry is required.
FDD also works on a cable where transmit and receive channels are given different parts of
the cable spectrum, as in cable TV systems. Again, filters are used to keep the channels
separate.
Uplink and downlink sub-bands are said to be separated by the frequency offset. Frequency-
division duplexing can be efficient in the case of symmetric traffic. In this case time-division
duplexing tends to waste bandwidth during the switch-over from transmitting to receiving,
has greater inherent latency, and may require more complex circuitry.
Another advantage of frequency-division duplexing is that it makes radio planning easier and
more efficient, since base stations do not "hear" each other (as they transmit and receive in
different sub-bands) and therefore will normally not interfere with each other. On the
converse, with time-division duplexing systems, care must be taken to keep guard times between neighboring base stations (which decreases spectral efficiency) or to synchronize base stations so that they transmit and receive at the same time (which increases network complexity and cost).
Figure 11. TDD quickly alternates the transmission and reception of data over time.
Because of the high-speed nature of the data, the communicating parties cannot tell that the
transmissions are intermittent. The transmissions are concurrent rather than simultaneous. For
digital voice converted back to analog, no one can tell it isn’t full duplex.
In some TDD systems, the alternating time slots are of the same duration or have equal
downlink (DL) and uplink (UL) times. However, the system doesn’t have to be 50/50
symmetrical. The system can be asymmetrical as required.
For instance, in Internet access, download times are usually much longer than upload times so
more or fewer frame time slots are assigned as needed. Some TDD formats offer dynamic
bandwidth allocation where time-slot numbers or durations are changed on the fly as
required.
The real advantage of TDD is that it only needs a single channel of frequency spectrum.
Furthermore, no spectrum-wasteful guard bands or channel separations are needed. The
downside is that successful implementation of TDD needs a very precise timing and
synchronization system at both the transmitter and receiver to make sure time slots don’t
overlap or otherwise interfere with one another.
3.1.3 Comparison
Most cell-phone systems use FDD. The newer LTE and 4G systems use FDD. Cable TV
systems are fully FDD.
Most wireless data transmissions are TDD. WiMAX and Wi-Fi use TDD. So does Bluetooth
when piconets are deployed. ZigBee is TDD. Most digital cordless telephones use TDD.
Because of the spectrum shortage and expense, TDD is also being adopted in some cellular
systems, such as China’s TD-SCDMA and TD-LTE systems. Other TD-LTE cellular systems
are expected to be deployed where spectrum shortages occur.
Next generation architectures were based on either Code Division Multiple Access (CDMA)
or Time Division Multiple Access (TDMA).
In CDMA systems more than one user can use the same carrier frequency and transmit
simultaneously. The mobile user's narrowband message signal is multiplied by a very large
bandwidth signal called the spreading signal. This spreading signal is a pseudo-noise code
sequence that has a chip rate which is orders of magnitude greater than the data rate of the
message. Each user has its own codeword that is approximately orthogonal to all other
codewords. The receiver performs a time correlation operation to detect only the specific
desired codeword. All other codewords appear as noise due to decorrelation. There is no
transmission coordination among the users which operate independently. For detecting a
specific user's message signal the receiver needs to have knowledge of the codeword used by
this user's transmitter; all the other messages received (generated with different codewords)
form the noise floor, which must be kept low. Therefore, a power regulation scheme is
employed so that all the messages are received at the AP with approximately equal power.
Unfortunately, there is no way to control the power level of mobiles associated with other
APs (i.e. mobile users located in neighboring cells that also contribute to the noise floor).
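Spreading and despreading with orthogonal codes can be sketched in a few lines. This is an illustrative toy (length-4 Walsh codes, two users, no noise or power imbalance), not a model of a deployed CDMA system:

```python
import numpy as np

# Walsh codes: rows of a 4x4 Hadamard matrix are mutually orthogonal
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])

def spread(bits, code):
    """Multiply each +/-1 data bit by the full chip sequence."""
    return np.kron(bits, code)

def despread(chips, code):
    """Correlate the received chips with the user's code over each bit
    period; other users' codes correlate to zero."""
    n = len(code)
    return np.array([np.sign(chips[i:i + n] @ code)
                     for i in range(0, len(chips), n)])

bits_a = np.array([1, -1, 1])
bits_b = np.array([-1, -1, 1])
# both users transmit simultaneously on the same carrier
channel = spread(bits_a, H[1]) + spread(bits_b, H[2])
```

Despreading `channel` with H[1] recovers user A's bits exactly, and with H[2] user B's, because the cross-correlation of distinct Walsh codes is zero.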
Some CDMA characteristics are:
• Multipath fading may be substantially reduced because the signal is spread over a large
spectrum. If the spread spectrum bandwidth is greater than the coherence bandwidth of
the channel, the inherent frequency diversity will mitigate the effects of small-scale
fading.
• CDMA has potential for soft handoff (change of associated AP), since a mobile user can be received simultaneously by more than one AP, making it possible to choose at any point in time the best AP without switching frequencies.
• Since increasing the number of users raises the noise floor gradually degrading the
performance for all users, CDMA has a soft capacity limit.
• Current and future advances in digital modulation and coding promise a greater tolerance to low signal-to-noise ratios, therefore increasing the capacity of a CDMA system. The theoretical maximum capacity of a CDMA system is greater than that of an equivalent TDMA system.
It should be noted that mixed approaches (FDMA, TDMA, CDMA) were also implemented. For example, GSM is an FDMA/TDMA system that divides the available bandwidth into 200-kHz-wide channels using FDMA, while each of these channels serves as many as 8 users using TDMA.
More recently, more sophisticated multiple access techniques such as OFDMA and SC-FDMA have been developed to address the requirements of higher data rates (wideband) as well as low intersymbol interference. The remainder of this section discusses these multiple access techniques in more detail.
3.3 FDMA
FDMA is the process of dividing one channel or bandwidth into multiple individual bands, each for use by a single user. Each individual band or channel is wide enough to accommodate the signal spectra of the transmissions to be propagated. The data to be transmitted is modulated onto each subcarrier, and all of them are linearly mixed together.
The best example of this is the cable television system. The medium is a single coax cable
that is used to broadcast hundreds of channels of video/audio programming to homes. The
coax cable has a useful bandwidth from about 4 MHz to 1 GHz. This bandwidth is divided up
into 6-MHz wide channels. Initially, one TV station or channel used a single 6-MHz band.
But with digital techniques, multiple TV channels may share a single band today thanks to
compression and multiplexing techniques used in each channel.
This technique is also used in fiber optic communications systems. A single fiber optic cable
has enormous bandwidth that can be subdivided to provide FDMA. Different data or
information sources are each assigned a different light frequency for transmission. Light
generally isn’t referred to by frequency but by its wavelength (λ). As a result, fiber optic
FDMA is called wavelength division multiple access (WDMA) or just wavelength division
multiplexing (WDM).
One of the older FDMA systems is the original analog telephone system, which used a hierarchy of frequency-multiplexing techniques to put multiple telephone calls on a single line. The analog 300-Hz to 3400-Hz voice signals were used to modulate subcarriers in 12 channels from 60 kHz to 108 kHz. Modulator/mixers created single-sideband (SSB) signals, which were stacked together to form the composite line signal.
Frequency Division Multiple Access or FDMA is a channel access method used in multiple-
access protocols as a channelization protocol. FDMA gives users an individual allocation of
one or several frequency bands, or channels. It is particularly commonplace in satellite
communication.
Some FDMA characteristics are:
• In FDMA all users share the satellite transponder or frequency channel simultaneously, but each user transmits at a single frequency.
• FDMA can be used with both analog and digital signals.
• FDMA requires high-performing filters in the radio hardware, in contrast to TDMA
and CDMA.
• FDMA is not vulnerable to the timing problems that TDMA has. Since a
predetermined frequency band is available for the entire period of communication,
stream data (a continuous flow of data that may not be packetized) can easily be used
with FDMA.
• Due to the frequency filtering, FDMA is not sensitive to near-far problem which is
pronounced for CDMA.
• Each user transmits and receives at different frequencies, as each user gets a unique frequency slot.
FDMA is distinct from frequency division duplexing (FDD). While FDMA allows multiple
users simultaneous access to a transmission system, FDD refers to how the radio channel is
shared between the uplink and downlink (for instance, the traffic going back and forth
between a mobile-phone and a mobile phone base station). Frequency-division multiplexing
(FDM) is also distinct from FDMA. FDM is a physical layer technique that combines and
transmits low-bandwidth channels through a high-bandwidth channel. FDMA, on the other
hand, is an access method in the data link layer.
3.4 TDMA
Time division multiple access (TDMA) is a channel access method for shared medium
networks. It allows several users to share the same frequency channel by dividing the signal
into different time slots. The users transmit in rapid succession, one after the other, each
using its own time slot. This allows multiple stations to share the same transmission medium
(e.g. radio frequency channel) while using only a part of its channel capacity. TDMA is used
in the digital 2G cellular systems such as Global System for Mobile Communications (GSM),
IS-136, Personal Digital Cellular (PDC) and iDEN, and in the Digital Enhanced Cordless
Telecommunications (DECT) standard for portable phones. It is also used extensively in
satellite systems, combat-net radio systems, and PON networks for upstream traffic from
premises to the operator. For usage of Dynamic TDMA packet mode communication, see
below.
TDMA is a type of time-division multiplexing, with the special point that instead of having one transmitter connected to one receiver, there are multiple transmitters. In the case of the uplink from a mobile phone to a base station this becomes particularly difficult because the mobile phone can move around, varying the timing advance required to make its transmission match the gap in transmission from its peers.
TDMA characteristics
• Shares single carrier frequency with multiple users
• Non-continuous transmission makes handoff simpler
• Slots can be assigned on demand in dynamic TDMA
• Less stringent power control than CDMA due to reduced intra cell interference
• Higher synchronization overhead than CDMA
• Advanced equalization may be necessary for high data rates if the channel is
"frequency selective" and creates Intersymbol interference
• Cell breathing (borrowing resources from adjacent cells) is more complicated than in
CDMA
• Frequency/slot allocation complexity
• Pulsating power envelope: Interference with other devices
In the GSM system, the synchronization of the mobile phones is achieved by sending timing-advance commands from the base station, which instruct the mobile phone to transmit earlier and by how much. This compensates for the propagation delay resulting from the finite speed of light.
Initial synchronization of a phone requires even more care. Before a mobile transmits there is
no way to actually know the offset required. For this reason, an entire time slot has to be
dedicated to mobiles attempting to contact the network; this is known as the random-access
channel (RACH) in GSM. The mobile attempts to broadcast at the beginning of the time slot,
as received from the network. If the mobile is located next to the base station, there will be no
time delay and this will succeed. If, however, the mobile phone is at just less than 35 km
from the base station, the time delay will mean the mobile's broadcast arrives at the very end
of the time slot. In that case, the mobile will be instructed to broadcast its messages starting
nearly a whole time slot earlier than would be expected otherwise. Finally, if the mobile is
beyond the 35 km cell range in GSM, then the RACH will arrive in a neighbouring time slot
and be ignored. It is this feature, rather than limitations of power, that limits the range of a
GSM cell to 35 km when no special extension techniques are used. By changing the
synchronization between the uplink and downlink at the base station, however, this limitation
can be overcome.
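The 35-km figure follows from GSM's timing-advance arithmetic: the advance is signalled in units of one bit period (about 3.69 µs at GSM's gross bit rate of 270.833 kbit/s) in a 6-bit field (values 0-63). The short computation below checks the resulting maximum range:

```python
C = 3.0e8                    # speed of light, m/s
bit_rate = 270833.0          # GSM gross bit rate, bits/s
bit_period = 1.0 / bit_rate  # one bit period, ~3.69 microseconds
max_ta = 63                  # largest value of the 6-bit timing-advance field

max_round_trip = max_ta * bit_period           # largest compensable delay, s
max_range_km = max_round_trip * C / 2 / 1000   # one-way distance
# max_range_km comes out just under 35 km, matching the stated cell-range limit
```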
3G systems
Although most major 3G systems are primarily based upon CDMA, time division duplexing
(TDD), packet scheduling (dynamic TDMA) and packet oriented multiple access schemes are
available in 3G form, combined with CDMA to take advantage of the benefits of both
technologies.
While the most popular form of the UMTS 3G system uses CDMA and frequency-division duplexing (FDD) instead of TDMA, TDMA is combined with CDMA and time-division duplexing in the two standard UMTS UTRA TDD modes, TD-CDMA and TD-SCDMA.
A major advantage of TDMA is that the radio part of the mobile only needs to listen and
broadcast for its own time slot. For the rest of the time, the mobile can carry out
measurements on the network, detecting surrounding transmitters on different frequencies.
This allows safe inter frequency handovers, something which is difficult in CDMA systems,
not supported at all in IS-95 and supported through complex system additions in Universal
Mobile Telecommunications System (UMTS). This in turn allows for co-existence of
microcell layers with macrocell layers.
Dynamic TDMA
In dynamic time division multiple access, a scheduling algorithm dynamically reserves a
variable number of time slots in each frame to variable bit-rate data streams, based on the
traffic demand of each data stream. For example, Figure 13 shows the dynamic frame
structure proposed by the European Union's Magic WAND project (called MASCARA). In that frame, we can identify 4 clearly defined intervals: (i) frame header (for synchronization and broadcasting of this frame's uplink schedule), (ii) downlink period, (iii) uplink period, and finally (iv) contention period for new sessions joining the network.
Figure 13. MASCARA: Dynamic TDMA frame proposed by the Magic WAND project.
3.5 CDMA
CDMA is an example of multiple access, which is where several transmitters can send
information simultaneously over a single communication channel. This allows several users
to share a band of frequencies. To permit this without undue interference
between the users, CDMA employs spread-spectrum technology and a special coding scheme
(where each transmitter is assigned a code).
CDMA is used as the access method in many mobile phone standards such as cdmaOne,
CDMA2000 (the 3G evolution of cdmaOne), and WCDMA (the 3G standard used by GSM
carriers), which are often referred to as simply CDMA.
Uses:
• The Qualcomm standard IS-95, marketed as cdmaOne.
• The Qualcomm standard IS-2000, known as CDMA2000, is used by several mobile
phone companies, including the Globalstar satellite phone network.
• The UMTS 3G mobile phone standard, which uses W-CDMA.
3.6.1 OFDMA
As can be seen in Figure 14 (bottom), an OFDMA signal is formed by assigning N out of M (N <= M) subcarriers to a user. A single user maps its N symbols onto its assigned subcarriers. At the receiver end, the receiver gets the sum of all the users' signals and applies an FFT to recover the symbols. It then applies the reverse mapping to obtain the message from each user.
Note that since the subcarriers are orthogonal, they do not interfere with each other and there
is no need for coordination among users.
Once the symbols are transformed via an FFT, each of the M resulting values is mapped to one subcarrier (out of N). There are different subcarrier mappings:
• Localized mapping: the FFT outputs (M values) are mapped to a subset of consecutive subcarriers, thereby confining them to only a fraction of the system bandwidth.
• Distributed mapping: the FFT outputs of the input data are assigned to subcarriers over
the entire bandwidth non-continuously, resulting in zero amplitude for the remaining
subcarriers.
• Interleaved mapping: a special case of distributed mapping, called interleaved SC-FDMA (IFDMA), in which the occupied subcarriers are equally spaced over the entire bandwidth.
Figure 15 shows an example of distributed (left) and localized (right) subcarrier mappings for
a network with 3 users. Note that the distributed mapping shown in Figure 15 (left) is also an
interleaved mapping, since all subcarriers associated with the same user’s data are equally
spaced.
Figure 16 shows the transmitted signal in frequency for both the distributed/interleaved and the localized subcarrier mappings. Figure 17 shows the same signals in the time domain. For the interleaved mapping, the signal in the frequency domain is the same as the original one after an expansion in frequency (Figure 16, top). Since an expansion in frequency corresponds to a contraction in time, the resulting time signal is a time-contracted copy of the original signal, repeated several times due to the periodicity of the FFT (see Figure 17, top). Thus, since {xn} is well behaved, so is the time output to the channel {xm}.
Now, if localized mapping is used, the signal in the frequency domain is the same as the original one after a contraction in frequency (Figure 16, bottom). Since a contraction in frequency corresponds to an expansion in time, the highlighted values of Figure 17 (bottom) are a time-expanded version of the original signal; the other values correspond to aliasing effects. The resulting time signal is also well behaved.
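The interleaved-mapping property described above (a time-contracted copy of the signal, repeated several times) can be verified directly with a small FFT experiment. This is an illustrative sketch; the symbol values and sizes are arbitrary choices:

```python
import numpy as np

N, L = 4, 3                # N data symbols, repetition factor L
M = N * L                  # total number of subcarriers
x = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])  # arbitrary QPSK symbols

X = np.fft.fft(x)          # N-point FFT (DFT precoding)
mapped = np.zeros(M, dtype=complex)
mapped[::L] = X            # interleaved mapping: every L-th subcarrier occupied
x_time = np.fft.ifft(mapped) * L  # time-domain channel input (rescaled by L)
# x_time equals the original symbol sequence repeated L times
```

Because only every L-th subcarrier is occupied, the M-point IFFT reproduces the original N-sample sequence L times (scaled by 1/L), exactly the periodic, well-behaved time signal described for Figure 17 (top).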