
TEL280:

Wireless Engineering (Ingeniería Inalámbrica)

Chapters 1-3: Wireless
Communications Fundamentals

Compiled by: César A. Santiváñez, Ph.D.


Based on publicly available (on-line) information
March 2016

TEL280 – Class Handout


1 CHANNEL CAPACITY
1.1 SHANNON CAPACITY
1.2 CAPACITY OF THE ADDITIVE WHITE GAUSSIAN NOISE (AWGN) CHANNEL
1.3 MULTIPLE ACCESS CHANNEL
2 WIRELESS PROPAGATION
2.1 PATHLOSS (REFLECTION, DIFFRACTION, SCATTERING)
2.2 MULTIPATH PROPAGATION
2.3 MULTIPATH FADING
2.3.1 Frequency selective vs flat fading
2.4 MULTIPATH FADING MODELS
2.4.1 Rayleigh Fading
2.4.2 Rician fading
2.4.3 Nakagami-m fading
2.5 MOBILITY IMPACT
2.5.1 Doppler effect
2.5.2 Fast versus Slow fading
2.6 OVERCOMING MULTIPATH PROPAGATION ISSUES
2.6.1 Slow fading
2.6.2 Fast fading
3 MULTIPLE ACCESS
3.1 DUPLEXING (FDD AND TDD)
3.1.1 Frequency Division Duplex (FDD)
3.1.2 Time Division Duplex (TDD)
3.1.3 Comparison
3.2 MULTIPLE ACCESS TECHNIQUES
3.3 FDMA
3.4 TDMA
3.5 CDMA
3.6 OFDMA AND SC-FDMA
3.6.1 OFDMA
3.6.2 SC-FDMA



1 Channel Capacity
This chapter describes the mathematical tools used to measure the capacity of a communication
channel. It then describes the capacity of a simple wireless channel with Gaussian noise, that
is, a channel that attenuates the signal by a constant factor and adds a Gaussian noise
component. Based on this channel, a theoretical comparison of single-user multiple access
techniques (FDMA, TDMA) versus multi-user detection multiple access techniques (e.g.,
CDMA) is presented.

The above description is more appropriate for a static/fixed position transmitter and receiver.
Subsequent chapters will describe the complexities added by mobility.

1.1 Shannon Capacity


Shannon was the first to provide a mathematical formulation for the capacity of a
communication channel, that is, the maximum rate of information that can be transmitted
error-free through the medium. Shannon's formulation places no restriction on the codeword
length, and therefore may not be applicable to situations with demanding latency
constraints, small amounts of information to transmit, or limited memory.

In Shannon's formulation, a channel is characterized (modeled) by the conditional probability
of its output (y) given its input (x): fY|X. Typically, x and y are vectors of discrete-time signals
("samples"). However, this does not limit the applicability of the theory to continuous-time
signals/channels. Indeed, for a band-limited channel, any signal x(t) of duration T seconds
and bandwidth W can be represented as a weighted sum of 2WT basis signals. That is, a
band-limited signal of duration T has a dimensionality of 2WT, as anticipated by the
"sampling theorem", which states that any such signal can be recovered by sampling it at a
rate of 2W samples per second, or equivalently, from 2WT samples. Therefore, when dealing
with continuous-time signals, we can work instead with the vectors representing these signals
in their respective basis.
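This dimensionality argument can be checked numerically. The sketch below (Python; the bandwidth, tone frequencies, and observation interval are arbitrary illustrative choices, not values from the text) reconstructs a band-limited signal from its 2WT samples via Whittaker-Shannon (sinc) interpolation:

```python
import numpy as np

W = 4.0       # signal bandwidth (Hz); an arbitrary choice
fs = 2 * W    # Nyquist rate: 2W samples per second
T = 8.0       # observation interval (s), so 2WT = 64 samples

def x(t):
    """A band-limited test signal: two tones at 1 Hz and 3 Hz (both < W)."""
    return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

n = np.arange(int(fs * T))   # the 2WT sample indices
samples = x(n / fs)

def reconstruct(t, samples, fs):
    """Whittaker-Shannon interpolation: x(t) ~ sum_n x(n/fs) sinc(fs*t - n).
    Essentially exact at the sample instants; approximate elsewhere because
    the (ideally infinite) sinc sum is truncated to the 2WT known samples."""
    k = np.arange(len(samples))
    return float(np.sum(samples * np.sinc(fs * t - k)))

print(abs(reconstruct(10 / fs, samples, fs) - samples[10]))  # ~0 (sample instant)
print(abs(reconstruct(4.13, samples, fs) - x(4.13)))         # small off-grid error
```

With only finitely many samples, the reconstruction away from the sample instants carries a small truncation error, which shrinks as the interval (and hence 2WT) grows.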

The amount of information a source X generates is characterized by its entropy, defined as

H(X) = − Σx∈X p(x) log2 p(x)   bits

where p(x) is the probability of an outcome of the source X, and by convention 0 log 0 = 0.
Note that the entropy does not depend on the actual values of the outcomes, but only on their
probability distribution.

Entropy measures uncertainty of a random variable. For example, if all outcomes are equally
likely, the entropy of the source is equal to the logarithm of the number of possible outcomes.
The larger the number of possible outcomes, the larger the entropy. Alternatively, let’s
consider a random variable with only two possible outcomes A and B. Let’s say that outcome
A has a probability p and outcome B has a probability (1-p). Then, the entropy of that random
variable is maximized when p=0.5 (i.e., both outcomes are equally likely). In that case, the
entropy is equal to 1. The entropy is minimal (equal to 0) if p=0 or p=1, that is, in the cases
where either outcome A or B is certain. In general, the more biased the random variable is
towards one of the outcomes, either A or B (less uncertainty), the lower its entropy.
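These properties are easy to verify numerically. A minimal sketch (the probability values are arbitrary examples):

```python
import math

def entropy(probs):
    """Shannon entropy in bits; by convention 0*log2(0) = 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uniform source: entropy equals the log of the number of outcomes.
print(entropy([0.25] * 4))        # 2.0 bits for 4 equally likely outcomes

# Two-outcome source: maximal (1 bit) at p = 0.5, zero when an outcome is certain.
print(entropy([0.5, 0.5]))        # 1.0
print(entropy([1.0, 0.0]))        # 0.0
print(entropy([0.9, 0.1]))        # biased -> strictly less than 1 bit
```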



The above expression defines entropy for discrete-valued random variables. The differential
entropy is similarly defined for continuous-valued random variables by means of their
probability density function f(x): h(X) = − ∫ f(x) log2 f(x) dx. For example, if X is a
continuous random variable with a Gaussian probability density function with mean µ and
variance σ2, its differential entropy is h(X) = 0.5 log2(2πeσ2).

Shannon proved that (Source coding theorem) if a source X is encoded at rate R, error free
decoding is possible if and only if R ≥ H(X). That is, entropy captures the amount of
information generated by a source.

Now, let’s consider two random variables X and Y. The conditional entropy H(Y|X), defined
as:

H(Y|X) = Σx p(x) H(Y|X = x)
       = − Σx Σy p(x) p(y|x) log2 p(y|x)
       = − Σx,y p(x, y) log2 p(y|x)

measures the uncertainty left in Y when X is known. Let’s consider the joint entropy of two
variables X and Y (that is, the entropy of a variable Z=(X,Y) ). It can be shown that H(Z) is
equal to H(Z) = H(X,Y) = H(X) + H(Y|X) = H(Y) + H(X|Y). Thus, H(Y|X) = H(X,Y) –
H(X). That is, the uncertainty left once X is known.

The above motivates the definition of mutual information I(X; Y) between two random
variables:

I(X;Y) = H(X) − H(X|Y)
       = H(Y) − H(Y|X)
which is a measure of the amount of information that one random variable contains about
another. For example, I(X;Y) measures the reduction in X’s uncertainty thanks to the
knowledge of Y. That is, how much information about X is gained by knowing (observing)
Y. A case of particular interest is when X is a random variable representing the signal being
transmitted over a channel and Y is the received (measured) signal. Thus, I(X;Y) measures
how much information we can get about X by measuring Y.
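As a quick numerical check of these identities, the sketch below computes I(X;Y) both ways from a small joint pmf (the probability values are arbitrary, for illustration only):

```python
import math

def H(probs):
    """Entropy (bits) of a pmf given as an iterable of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A small illustrative joint pmf p(x, y) for X, Y in {0, 1}.
pxy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

px = {x: sum(p for (xx, _), p in pxy.items() if xx == x) for x in (0, 1)}
py = {y: sum(p for (_, yy), p in pxy.items() if yy == y) for y in (0, 1)}

Hx, Hy, Hxy = H(px.values()), H(py.values()), H(pxy.values())
I1 = Hx - (Hxy - Hy)   # H(X) - H(X|Y), using H(X|Y) = H(X,Y) - H(Y)
I2 = Hy - (Hxy - Hx)   # H(Y) - H(Y|X)
print(I1, I2)          # both forms give the same mutual information
```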

Since a channel is represented by fY|X (y|x), the conditional probability of the output (y)
conditioned on the input (x), then the only undetermined quantity is f(x), the pdf of the
channel input (the message). That is, once f(x) is chosen, f(x,y), and I(X;Y) are completely
determined. Thus, it makes sense to choose the f(x) (i.e., encoding) that maximizes the
information that can be obtained about the message based on observing the channel output y.
This motivates the definition of channel capacity C as:

C = max_{f(x)} I(X;Y)

Note that f(x) in the maximization expression corresponds to the pdf of the channel input, not
the pdf of the source. A source S’s message is mapped to a set of binary codewords (“source
coding”) with rate R, where R is greater than or equal to the source’s entropy H(S). Then, the
resulting coded binary sequence is input to the “channel coding” stage, which maps a
sequence of bits (say nR) into a sequence of n symbols to be transmitted. Thus, in the above
expression X is a random variable representing the symbol that is sent through the channel.
Its pdf depends on the channel coding used (the sequence at the output of a good source
coder is typically uniformly distributed), which is under the designer’s control.

Shannon proved (channel coding theorem) that transmission at a rate R on a channel defined
by the conditional probability fY|X can be accomplished error-free if and only if R ≤ C.
Therefore C is the theoretical channel capacity. Shannon's proof is based on the generation of
random codes (impractical due to large memory requirements) and exploits the fact that
for large codewords -- thanks to the law of large numbers -- most of the probability mass is
concentrated among 'typical sequences'. A random codeword of length n for a
message (channel coding) is generated by drawing n consecutive values (i.i.d.) from the pdf
f(x). Most codewords generated this way will be “typical sequences”, which Shannon
exploits in his proof.

As an example, let’s calculate the capacity of a Gaussian channel (Y = X + η), where the
channel output Y is equal to the sum of the channel input X plus a noise component η that is
Gaussian-distributed with zero mean and variance (‘energy’) equal to N. This is a very
important case, since Gaussian noise appears very frequently in nature (thermal noise) and
because Gaussian noise is the most challenging (for a given energy constraint, it has the
highest entropy, or uncertainty).
Let’s assume that the sender is power limited: E{X2} ≤ P (that is, the long-term average
power is below a value P). Recall that I(X;Y) = H(Y) − H(Y|X) = H(Y) − H(η) = H(Y) − 0.5
log2(2πeN), where the last equality follows from the entropy of a Gaussian variable (η).
Thus, maximizing I(X;Y) reduces to maximizing the entropy H(Y). Now, E{Y2}
= E{X2} + 2E{X}E{η} + E{η2} = E{X2} + N ≤ P + N, where we have used the fact that X
and η are independent and η is zero-mean. Recalling that the entropy of a power-limited
random variable is maximized by the Gaussian distribution, and that the sum of two
Gaussian variables is also Gaussian, we conclude that H(Y) is maximized when Y is
Gaussian with variance (P+N), which happens when X is Gaussian. In that case, H(Y) = 0.5
log2[2πe(P+N)], and I(X;Y) = 0.5 log2[2πe(P+N)] − 0.5 log2[2πeN] = 0.5 log2(1+P/N). Then,
the capacity of a power-limited Gaussian channel is C = 0.5 log2(1+P/N).
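This formula can be explored with a few lines of code (the power and noise values below are arbitrary illustrations):

```python
import math

def gaussian_capacity(P, N):
    """Capacity (bits per channel use) of Y = X + eta with eta ~ N(0, N)
    and input power constraint E{X^2} <= P."""
    return 0.5 * math.log2(1 + P / N)

print(gaussian_capacity(1.0, 1.0))    # SNR = 1 (0 dB): 0.5 bit per use
print(gaussian_capacity(15.0, 1.0))   # SNR = 15: 2.0 bits per use
# At high SNR, doubling the power buys only about half a bit per use:
print(gaussian_capacity(200.0, 1.0) - gaussian_capacity(100.0, 1.0))
```

The last line previews the logarithmic (power-inefficient) growth discussed in the next section.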

1.2 Capacity of the Additive White Gaussian Noise (AWGN) Channel


One of the most important channel models is the “Additive White Gaussian Noise” (AWGN)
channel. In this channel, the output y(t) is equal to the sum of the channel’s input x(t) plus a
noise component η(t) that is a stationary white noise process with power spectral density
(psd) equal to N0/2. The noise is white, meaning that its psd is constant at all frequencies, that
is, it contains all frequency components – as white light does. White noise has no
memory: the autocorrelation of the noise at two distinct times t1 ≠ t2 is equal to zero (i.e., E{
η(t1) η*(t2) } = 0). The noise is Gaussian, meaning that at any time t, the random variable η(t)
has a Gaussian pdf.

Now, let’s compute the channel capacity when the average power is bounded by P and the
sender is limited to transmit signals within a bandwidth W. This is a fairly common case,
where a transmitter is assigned only a limited fraction of the electromagnetic spectrum.



Let’s consider transmissions over a (long) time interval T. As we mentioned before, the set of
band-limited signals of duration T has a dimension equal to L=2WT. That is, any such signal
can be uniquely represented by a linear combination of 2WT base signals (which are also
orthogonal).
Let x = (x1, x2, …, xL) and y = (y1, y2, …, yL) be the representations of the channel input x(t)
and the channel output y(t) in the basis signals, respectively. Since the channel is linear and
the noise is white, it can be shown that yk = xk + ηk, where ηk is Gaussian with variance
WN0/L. Thus, we can treat each of the (xk, yk) pairs as an independent Gaussian sub-channel
and split the transmitter power among these L sub-channels. The optimal power strategy is to
evenly split the power P among the L sub-channels (due to the concave nature of the Shannon
capacity). Thus, the capacity of the k-th sub-channel Ck is equal to Ck = 0.5 log2[1 +
(P/L)/(WN0/L)] = 0.5 log2[1 + P/(WN0)]. Adding over all L sub-channels, it can be found that
over an interval T, a total of WT log2[1 + P/(WN0)] bits can be transmitted over the AWGN
channel. Thus, the channel capacity is equal to

CAWGN(P, W) = W log2[1 + P/(WN0)]

From the above expression, we can identify two operating regimes:

• Bandwidth-limited regime: P >> WN0 (SNR >> 1). Capacity grows logarithmically
with power; that is, this regime is power inefficient, as the power must increase
exponentially to obtain a linear increase in bitrate. On the other hand, the regime is
bandwidth efficient, as the bitrate per unit of bandwidth (bps/Hz) that can be achieved is
greater than 1. Note that as power increases towards infinity, so does the capacity –
although much more slowly.

• Power-limited regime: P << WN0 (SNR << 1). Capacity grows almost linearly with
respect to power. Figure 1 shows, for a fixed received power, the relationship between
capacity (C) and bandwidth (W). As W tends to infinity, ∆ = P/(WN0) tends to 0, and since
for very small ∆, log2(1+∆) = log2(e) ln(1+∆) ≈ log2(e) ∆, we have CAWGN ≈ W (log2
e) P/(WN0) = (log2(e)/N0) P. Thus, even with infinite bandwidth, the capacity is bounded,
limited by the available power. Hence, in the power-limited regime, adding additional
bandwidth is of little use.

Figure 1 illustrates these two regions for a fixed received power. It can be seen that for small
values of W, the capacity increases rapidly (almost linearly) with available bandwidth. Thus,
in this regime bandwidth is the main limit to capacity. But as bandwidth increases, we get to
a point of diminishing returns where adding more bandwidth is of little use. In this case, it is
the power that limits the capacity on the channel.
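The saturation toward the ceiling (log2 e)P/N0 can be seen numerically. In the sketch below, the power and noise-density values are arbitrary placeholders; the point is the fraction of the ceiling achieved as W grows:

```python
import math

P = 1e-3       # received power (W), illustrative
N0 = 1e-9      # noise power spectral density (W/Hz), illustrative

def c_awgn(P, W, N0):
    """Shannon capacity of the band-limited AWGN channel (bits/s)."""
    return W * math.log2(1 + P / (W * N0))

limit = math.log2(math.e) * P / N0   # capacity ceiling as W -> infinity

for W in (1e4, 1e5, 1e6, 1e7, 1e8):
    print(f"W = {W:>12.0f} Hz  ->  C/limit = {c_awgn(P, W, N0) / limit:.3f}")
```

At small W the capacity grows almost linearly with bandwidth; by W = 1e8 Hz the channel delivers over 99% of the infinite-bandwidth ceiling.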



Figure 1. Capacity increase with respect to bandwidth (W)

1.3 Multiple Access Channel


We now focus our attention on the capacity region of an AWGN channel of bandwidth W shared by
N users (N transmitters and one receiver). Let Pk and Rk be the available power and transmit
rate of user k, respectively. A vector of transmit rates (R1, R2, …, RN) is achievable if and
only if the following set of inequalities is satisfied:

Rk ≤ C(Pk),  ∀ k ∈ {1, .., N}
Rk + Rj ≤ C(Pk + Pj),  ∀ k, j ∈ {1, .., N}
Σk∈S Rk ≤ C(Σk∈S Pk),  ∀ S ⊂ {1, .., N}

where S is any set of indexes (subset of {1, .., N}), and the function C(x) = W log2[1 +
x/(WN0)].

The above set of inequalities has a straightforward interpretation: for any subset of users, the
sum of their rates should be less than or equal to the rate achieved by a single transmitter whose
power is the sum of the users' powers. That is, the rate that would be achieved if we combined
the powers of the subset of users.

The capacity region is then the convex intersection of half-spaces in N-dimensional space. The
boundary of this capacity region is the limit that different multiple-access techniques strive
to achieve: the closer a technique gets to the boundary, the better.

Let’s take a look at the family of TDMA/FDMA techniques. These techniques split the
channel into N orthogonal sub-channels, either in time (TDMA) or frequency (FDMA). The
split need not be even; any set of weights λ1, λ2, …, λN such that Σk λk = 1 can be used.

For TDMA, user k will transmit a fraction λk of the time, and therefore it can use a power
Pk/λk such that its average (over time) power is equal to Pk. Therefore, the capacity achieved
during the transmission intervals will be equal to C(Pk/λk). Since that user can only transmit a
fraction λk of time, the average throughput it will achieve is λkC(Pk/λk).



For FDMA, user k will be assigned a bandwidth λk W, where W is the total channel
bandwidth. Replacing W by λk W in the Shannon capacity formula results in Rk ≤ λk
C(Pk/λk), the same as in the TDMA case.

Thus, both FDMA and TDMA can achieve the same transmission rates. But how do these
rates compare against the optimal (capacity region) boundary? It is easy to verify that
T/FDMA achieves (intersects) the optimal boundary only when λk = Pk / Σi Pi, that is, when the
subchannel assignments are proportional to the users’ powers. Thus, FDMA and TDMA are
optimal only in a special case, which may not match the users’ traffic requirements. For
arbitrary powers and traffic requirements, alternatives such as CDMA should be employed.
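A quick numerical check of this claim (the user powers are arbitrary, and W = N0 = 1 for simplicity):

```python
import math

W, N0 = 1.0, 1.0

def C(x):
    """Single-user AWGN capacity for power x."""
    return W * math.log2(1 + x / (W * N0))

P = [1.0, 3.0]                     # two users' powers (illustrative)
lam = [p / sum(P) for p in P]      # power-proportional channel split

tfdma_sum = sum(l * C(p / l) for p, l in zip(P, lam))
print(tfdma_sum, C(sum(P)))        # equal: this split reaches the boundary

# Any other split falls strictly inside the capacity region:
lam2 = [0.5, 0.5]
print(sum(l * C(p / l) for p, l in zip(P, lam2)) < C(sum(P)))
```

The strict inequality for the even split follows from the strict concavity of the log.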

Let’s look deeper into this for the 2-user case.


2-user example
The 2-user capacity region is determined by the inequalities:

R1 ≤ C(P1) = W log2(1 + P1/N)
R2 ≤ C(P2) = W log2(1 + P2/N)
R1 + R2 ≤ C(P1 + P2) = W log2(1 + (P1 + P2)/N)

where N = WN0 is the total noise power in the band.

This capacity region is illustrated in Figure 2 (left), where the boundary defined by the
three inequalities is labeled “MUD”, for MultiUser Detection, that is, the ability of a
receiver to simultaneously decode two (or more) packets. In contrast, the curve labeled SUD
shows the capacity achieved by a radio that can only decode one packet at a time (Single
User Detection), as is the case for T/FDMA. The difference between the SUD and MUD capacity
regions is shown in yellow.

It is of interest to analyze the corner points in the MUD region’s boundary:

Corner point 1: R2 is equal to its maximum value. Then, by the third inequality:

R1 ≤ W log2(1 + (P1 + P2)/N) − W log2(1 + P2/N) = W log2(1 + P1/(N + P2))

Corner point 2: R1 is equal to its maximum value. Then, by the third inequality:

R2 ≤ W log2(1 + (P1 + P2)/N) − W log2(1 + P1/N) = W log2(1 + P2/(N + P1))

These corner points have an intuitive interpretation. Let’s consider the corner point 1. The
rate achieved by user 1 is equal to the capacity of an equivalent single-user channel where the
signal from user 2 is regarded as noise (interference). As long as the rate of user 1 is below
the Shannon capacity of this channel (noise plus interference), it can be perfectly
decoded/recovered. Once the signal from user 1 has been recovered, it can be subtracted from
the received signal, leaving only the signal of user 2 plus the channel’s noise (i.e., remove all
interference). Therefore, user 2 can achieve its maximum rate, as if it were alone.
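This successive-cancellation argument can be verified numerically; in the sketch below the powers and noise level are arbitrary illustrative values:

```python
import math

W, N = 1.0, 1.0        # bandwidth and total noise power (illustrative units)
P1, P2 = 2.0, 5.0      # the two users' powers (arbitrary)

def C(x):
    return W * math.log2(1 + x / N)

# Corner point 1: decode user 1 treating user 2's signal as noise, subtract
# it, then decode user 2 interference-free (successive cancellation).
R1 = W * math.log2(1 + P1 / (N + P2))
R2 = C(P2)

print(R1 + R2, C(P1 + P2))   # the pair lies exactly on the sum-rate boundary
```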

A naive interpretation of the corner points may lead to an overestimation of the difference
between the optimal boundary values and the capacity achieved by T/FDMA. In particular,
for corner point 1, one may say that while under T/FDMA only one user (user 2) can transmit
(at maximum rate), under multiuser detection an additional rate of R1 = W log2(1 + P1/(N + P2))
for user 1 is possible. However, the difference between the two techniques is not that large, as
can be observed in Figure 2 (left): there are points on the SUD curve closer to corner point 1
than the point on the vertical axis (R1 = 0, R2 = C(P2)). Indeed, the gap between the two
curves is not that big.

It should be considered, however, that achieving the capacity region is not trivial, and may
require sophisticated radios and multiple access schemes with coordination/synchronization
between users. Thus, a feature that may not increase capacity by much (e.g., MUD) may
make achieving this capacity much easier, and in practice add a lot of value. Furthermore, it
may have a strong impact on the higher layers. For example, consider the results in Figure 2
(right), where the end-to-end throughput for a multi-hop network running a CSMA/CA MAC
is shown. Under SUD, lack of coordination causes packet collisions as load increases,
exhibiting an unstable ALOHA-like behavior. Under MUD, however, many of these
collisions are recoverable – even without coordination among the nodes – and therefore the
throughput stabilizes at high loads. This results in a significant improvement of end-to-end
throughput under high load, and in a less stringent requirement for the flow control/rate
adaptation layer (e.g., TCP).

Figure 2. Comparison of Single user (FDMA, TDMA) versus Multi-User Detection systems. (a) Capacity Region. (b)
Actual end-to-end throughput for an ad hoc network running 802.11 MAC.

2 Wireless Propagation
So far, we have discussed a simple “attenuation plus Gaussian noise channel”, which is
adequate for wire-based or even fixed wireless local loop communication, but fails to capture
the complexities introduced by a mobile wireless channel.

In this chapter we describe the properties of wireless signal propagation, first in a static
setting, and then in a mobile environment.

2.1 Pathloss (Reflection, Diffraction, Scattering)



When a radio frequency signal travels from source to destination, it
suffers several modifications/attenuations due to reflection, scattering and diffraction, as
shown in Figure 3. The reduction in power from transmitter to receiver is referred to as
“pathloss” (loss along the path).

Figure 3. Wireless signal propagation.

Reflection occurs when the transmitted wave encounters an object of large dimensions
compared to its wavelength, such as large walls, metal cabinets, ceilings, and furniture. Some of
the transmitted signal will be absorbed by the medium and the rest will be
reflected off the medium's surface. The energy of the transmitted and reflected waves is a
function of the geometry and material properties of the obstruction and the amplitude, phase,
and polarization of the incident wave.

Scattering occurs when the transmitted wave encounters a large number of small
objects such as lamp posts, bushes, and trees. The reflected energy in
a scattering situation is spread in all directions before reaching the receiver.

Diffraction occurs when the surface of the obstruction has sharp edges producing secondary
waves that in effect bend around the obstruction. Like reflection, diffraction is affected by the
physical properties of the obstruction and the incident wave characteristics. In situations
where the receiver is heavily obstructed, the diffracted waves may have sufficient strength to
produce a useful signal.

A pathloss prediction model is basically an empirical mathematical formulation for the
characterization of radio wave propagation. Such models are usually developed to predict
how the signal propagates in different environments and places. The Log-Distance Pathloss
Model is an indoor propagation model that predicts the signal loss inside a building.
However, this model is also suitable for outdoor propagation over short distances. The
mathematical formulation of the Log-Distance Pathloss Model is:

PL = UL + 10 n log10(d) + F

where PL is the path loss (dB), UL is the power loss (dB) at 1 m distance, n is the path loss
exponent (usually between 2 and 4), d is the distance between transmitter and receiver in
meters, and F is a zero-mean Gaussian random variable with variance σ2 modeling pathloss
variations due to shadowing, a term that encompasses signal strength variations due to
artifacts in the environment (i.e., occlusions, reflections, etc.). From practical measurements,
σ is between 4 and 12 dB, with 8 dB being a “typical” value. Accordingly, received signal
strengths (in dB) at locations that are at equal distance from the transmitter are modeled as
i.i.d. normal random variables.

While the above pathloss expression accounts for signal variations over large scales, the
received signal strength can vary considerably over small distances (on the order of λ) and
small time scales due to multipath fading (see below). As a result, pathloss can exhibit wide
variations even when the distance (d) changes by as little as a few centimeters.
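A minimal simulation of the log-distance model above (the UL, n, and σ values are illustrative placeholders within the ranges quoted):

```python
import math
import random

def pathloss_db(d, UL=40.0, n=3.0, sigma=8.0, rng=random):
    """Log-distance pathloss with log-normal shadowing (all defaults are
    illustrative placeholders, not measured values)."""
    return UL + 10 * n * math.log10(d) + rng.gauss(0.0, sigma)

rng = random.Random(0)                      # fixed seed for repeatability
samples = [pathloss_db(100.0, rng=rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
print(mean)   # close to the deterministic part: 40 + 30*log10(100) = 100 dB
```

Individual draws scatter widely (±σ and beyond) around the 100 dB mean, matching the i.i.d.-normal interpretation in the text.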

2.2 Multipath Propagation

Multipath propagation is a fact of life in any terrestrial radio scenario. While the direct or line-
of-sight path is normally the main wanted signal, a radio receiver will also receive many
signals resulting from the transmission taking a large number of different paths. These paths
may be the result of reflections from buildings, mountains, or other reflective surfaces
(including water) that may be adjacent to the main path. Additionally, other effects such as
ionospheric reflection and tropospheric ducting also give rise to multipath propagation.

The multipath propagation resulting from the variety of signal paths that may exist between
the transmitter and receiver can give rise to interference in a variety of ways including
distortion of the signal, loss of data and multipath fading.

At other times, the variety of signal paths arising from multipath propagation can be used
to advantage. Schemes such as MIMO use multipath propagation to increase the capacity of
the channels they use. With increasing requirements for spectrum efficiency, technologies
such as MIMO, which exploit multipath propagation, are able to provide much-needed
improvements in channel capacity.

Multipath propagation basics


Multipath radio signal propagation occurs on all terrestrial radio links. The radio signals do
not only travel by the direct line-of-sight path: the transmitted signal does not leave the
transmitting antenna in only the direction of the receiver, but over a range of angles, even
when a directive antenna is used. As a result, the transmitted signals spread out from the
transmitter and will reach other objects: hills, buildings, and reflective surfaces such as the
ground, water, etc. The signals may reflect off a variety of surfaces and reach the receiving
antenna via paths other than the direct line-of-sight path.



Multipath fading
In a terrestrial environment, i.e. where reflections are present, signals arrive at the receiver
from the transmitter via a variety of paths. The overall signal received is the sum of all the
signals appearing at the antenna. Sometimes these will be in phase with the main signal and
will add to it, increasing its strength. At other times they will be out of phase and interfere
destructively, resulting in the overall signal strength being reduced.

At times there will be changes in the relative path lengths. This could result from either the
transmitter or receiver moving, or from the movement of any of the objects that provide a
reflective surface. This will cause the phases of the signals arriving at the receiver to change,
and in turn the signal strength to vary. It is this that causes the fading present on many
signals.
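A simple two-ray sketch illustrates how sub-wavelength movement changes the relative phase and hence the received strength; the carrier frequency, path-length difference, and reflected-ray amplitude below are arbitrary assumptions, not values from the text:

```python
import cmath
import math

fc = 2.4e9               # carrier frequency (Hz), an arbitrary choice
lam = 3e8 / fc           # wavelength: 12.5 cm
delta = 7.3              # reflected-minus-direct path difference (m), arbitrary

def rx_amplitude(dx):
    """Magnitude of direct + reflected rays as the receiver moves dx metres
    toward the transmitter, so the direct path shortens by dx while the
    reflected path lengthens by dx (a deliberately simplified geometry)."""
    direct = cmath.exp(2j * math.pi * dx / lam)
    reflected = 0.8 * cmath.exp(-2j * math.pi * (delta + dx) / lam)
    return abs(direct + reflected)

# Sample the amplitude every lam/16 (about 8 mm) over one wavelength of motion.
amps = [rx_amplitude(k * lam / 16) for k in range(17)]
print(min(amps), max(amps))   # deep fades and strong peaks within centimetres
```

With equal-phase rays the amplitude peaks near 1 + 0.8; with opposed phases it collapses toward 1 − 0.8, all within a few centimetres of motion.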

It can also be found that the fading may be flat, i.e. applying to all frequencies equally
across a given channel, or selective, i.e. applying more to some frequencies across the
channel than to others.

Interference caused by multipath propagation (Intersymbol Interference)


Multipath propagation can give rise to interference that reduces the signal-to-noise ratio
and increases the bit error rate of digital signals. One cause of degradation of the signal quality
is the multipath fading already described. However, there are other ways in which multipath
propagation can degrade the signal and affect its integrity.

One of these ways is particularly obvious when driving in a car and listening to an FM
radio: at certain points the signal will become distorted and appear to break up. This arises
from the fact that the signal is frequency modulated, and at any given time the frequency of
the received signal provides the instantaneous voltage for the audio output. If multipath
propagation occurs, then two or more signals will appear at the receiver: one is the direct or
line-of-sight signal, and another is a reflected signal. As these arrive at different times
because of the different path lengths, they will have different instantaneous frequencies,
since they correspond to portions of the signal transmitted at slightly different times.
Accordingly, when the two signals are received together, distortion can arise if they have
similar signal strength levels.

Another form of multipath interference, which arises when digital transmissions are
used, is known as Inter-Symbol Interference (ISI). It arises when the delay caused by the
extended path length of the reflected signal is a significant proportion of a symbol. The
receiver may then receive the direct signal, which indicates one symbol or logical state,
together with a delayed signal indicating another logical state. If this occurs, the data
can be corrupted.

One way of overcoming this is to transmit the data at a rate such that the signal is sampled
only when all the reflections have arrived and the data is stable. This naturally limits the rate
at which data can be transmitted, but ensures that the data is not corrupted and the bit error
rate is minimized. To apply this, the delay time needs to be calculated using estimates of the
maximum delays that are likely to be encountered from reflections.
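A back-of-the-envelope version of this calculation (all numbers are illustrative assumptions, including the 10% delay-spread budget):

```python
# Rough symbol-rate budget from the delay spread: sample each symbol only
# after all significant echoes have arrived. All numbers are illustrative.
c = 3e8                          # propagation speed (m/s)
extra_path = 600.0               # longest reflected path minus direct path (m)
delay_spread = extra_path / c    # 2 microseconds of echo spread

# Keep the delay spread to at most ~10% of a symbol period (an assumed margin):
symbol_period = 10 * delay_spread
print(1 / symbol_period)         # maximum symbol rate: about 50 kbaud
```

This is exactly the trade-off the text describes: longer echoes force a slower symbol rate unless equalization or other countermeasures are used.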

Using the latest signal processing techniques, a variety of methods can be used to overcome
the problems with multipath propagation and the possibilities of interference.

12 TEL280 – Class Handout


2.3 Multipath Fading
Multipath fading is a feature of many radio communications links. It occurs as a result of the
many signal paths that exist on all terrestrial radio communications links, whether they are
used for applications such as cellular telecommunications, mobile radio, or for HF or VHF
radio communications.

Multipath fading occurs in any environment where there is multipath propagation and some
movement of elements within the radio communications system. This may include movement
of the radio transmitter or receiver, or of the elements that give rise to the reflections. The
multipath fading can often be relatively deep, i.e. the signals fade completely away, whereas
at other times the fading may not cause the signal to fall below a usable strength.

Multipath fading may also cause distortion to the radio signal. As the various paths that can
be taken by the signals vary in length, the signal transmitted at a particular instance will
arrive at the receiver over a spread of times. This can cause problems with phase distortion
and intersymbol interference when data transmissions are made. As a result, it may be
necessary to incorporate features within the radio communications system that enable the
effects of these problems to be minimized.

Multipath fading basics


Multipath fading is a feature that needs to be taken into account when designing or
developing a radio communications system. In any terrestrial radio communications system,
the signal will reach the receiver not only via the direct path, but also as a result of reflections
from objects such as buildings, hills, ground, water, etc. that are adjacent to the main path.

The overall signal at the radio receiver is a summation of the variety of signals being
received. As they all have different path lengths, the signals will add and subtract from the
total dependent upon their relative phases.

At times there will be changes in the relative path lengths. This could result from either the
radio transmitter or receiver moving, or from any of the objects that provide a reflective
surface moving. The phases of the signals arriving at the receiver will then change, and in
turn the signal strength will vary as a result of the different way in which the signals sum
together. It is this that causes the fading present on many signals.

Cellular multipath fading


Cellular telecommunications is subject to multipath fading. There are a variety of reasons for
this. The first is that the mobile station or user is likely to be moving, and as a result the path
lengths of all the signals being received are changing. The second is that many objects around
may also be moving. Automobiles and even people will cause reflections that will have a
significant effect on the received signal. Accordingly multipath fading has a major bearing on
cellular telecommunications.

Often the multipath fading that affects cellular phones is known as fast fading because it
occurs over a relatively short distance. Slow fading occurs as a cell phone moves behind an
obstruction and the signal slowly fades out.

The fast signal variations caused by multipath fading can be detected even over a short
distance. Assume a frequency of 2 GHz (a typical approximate value for many 3G phones).
The wavelength can be calculated as: λ = c/f = 3×10⁸ / 2×10⁹ = 0.15 meters
(where c = speed of light in meters per second and f = frequency in Hertz).

To move from a signal being in phase to a signal being out of phase is equivalent to
increasing the path length by half a wavelength, i.e. 0.075 m or 7.5 cm. This is a very
simplified example; in reality the situation is far more complicated, with signals being
received via many paths. However, it does give an indication of the distances involved to
change from an in-phase to an out-of-phase situation.
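As a quick sanity check, the arithmetic above can be reproduced in a few lines of Python (the constants are the ones used in the example):

```python
C = 3e8  # speed of light in m/s, the value used in the text

def wavelength(freq_hz):
    """Wavelength in meters for a given carrier frequency in Hz."""
    return C / freq_hz

lam = wavelength(2e9)    # 2 GHz carrier, as in the example
fade_distance = lam / 2  # path-length change from in-phase to out-of-phase
print(lam)               # 0.15 (meters)
print(fade_distance)     # 0.075 (meters), i.e. 7.5 cm
```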

Ionospheric multipath fading


Short wave radio communications is renowned for its fading. Signals that are reflected via the
ionosphere vary considerably in signal strength. These variations in strength are primarily
caused by multipath fading.

When signals are propagated via the ionosphere it is possible for the energy to be propagated
from the transmitter to the receiver via very many different paths. Simple diagrams show a
single ray or path that the signal takes. In reality the profile of the electron density of the
ionosphere (it is the electron density profile that causes the signals to be refracted) is not
smooth and as a result any signals entering the ionosphere will be scattered and will take a
variety of paths to reach a particular receiver. With changes in the ionosphere causing the
path lengths to change, this will result in the phases changing and the overall summation at
the receiver changing.

The changes in the ionosphere arise from a number of factors. One is that the levels of
ionization vary; these changes normally occur relatively slowly, but nevertheless have an
effect. In addition, there are winds or air movements in the ionosphere. As the levels of
ionization are not constant, any air movement will cause changes in the profile of the
electron density in the ionosphere. In turn this will affect the path lengths.

Tropospheric multipath fading


Many signals using frequencies at VHF and above are affected by the troposphere. The signal
is refracted as a result of changes in refractive index, especially within the first few
kilometers above the ground. This can cause signals to travel beyond the line of sight. In fact,
for broadcast applications a figure of 4/3 of the visual line of sight is used for the radio
horizon. However, under some circumstances relatively abrupt changes in refractive index
occurring as a result of weather conditions can cause the distances over which signals travel
to be increased. Signals may then be "ducted" by the troposphere over distances of up to a
few hundred kilometers.

When signals are ducted in this way, they will be subject to multipath fading. Here, heat
rising from the Earth's surface will ensure that the path is always changing and signals will
vary in strength. Typically these changes may be relatively slow with signals falling and
rising in strength over a period of a number of minutes.

2.3.1 Frequency selective vs flat fading
Multipath fading can affect radio communications channels in two main ways:

• Flat fading: This form of multipath fading affects all the frequencies across a given
channel either equally or almost equally. When flat multipath fading is experienced, the
signal will just change in amplitude, rising and falling over a period of time, or with
movement from one position to another.

• Selective fading: Selective fading occurs when the multipath fading affects different
frequencies across the channel to different degrees. It will mean that the phases and
amplitudes of the signal will vary across the channel. Sometimes relatively deep nulls
may be experienced, and this can give rise to some reception problems. Simply
maintaining the overall amplitude of the received signal will not overcome the effects of
selective fading, and some form of equalization may be needed. Some digital signal
formats, e.g. OFDM, are able to spread the data over a wide channel so that only a portion
of the data is lost to any nulls. The lost portion can be reconstituted using forward error
correction techniques, which mitigates the effects of selective multipath fading.

Selective fading occurs because, even though the path length changes by the same physical
amount (e.g. the same number of meters, yards, miles, etc.), that change represents a different
proportion of a wavelength at different frequencies. Thus, the phase change will differ across
the bandwidth used.

Selective fading can occur over many frequencies. It can often be noticed when medium
wave broadcast stations are received in the evening via both ground wave and skywave. The
phases of the signals received via the two means of propagation change with time, and this
causes the overall received signal to change. As the multipath fading is very dependent on
path length, even the frequencies within the bandwidth of a single AM broadcast signal are
affected differently, and distortion results.

The coherence bandwidth Δfc of the channel is the frequency separation beyond which two
sinusoids are affected differently by the channel. If Δfc is large compared to the bandwidth of
the transmitted signal, the channel is said to be narrowband, and the signal will experience
flat fading. But if Δfc is small with respect to the bandwidth of the signal, the channel is said
to be wideband and will be frequency selective. In this case the signal is severely affected by
the channel. Note that with high data rate signals becoming commonplace, wider bandwidths
are needed and wideband channels are becoming the norm. As a result several nulls and
peaks may occur across the bandwidth of a single signal. To address this issue, modern
modulation techniques such as OFDM (see Subsection 2.6.2.1) have been developed.
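The narrowband/wideband distinction can be sketched in code. The rule of thumb relating coherence bandwidth to RMS delay spread, Δfc ≈ 1/(5·τrms), is a common engineering approximation that is not derived here, and the numeric values below are illustrative only:

```python
def coherence_bandwidth(rms_delay_spread_s):
    """Rule-of-thumb coherence bandwidth (Hz) from the RMS delay spread (s).
    The 1/(5*tau) factor is a common approximation, not a result from the text."""
    return 1.0 / (5.0 * rms_delay_spread_s)

def fading_type(signal_bw_hz, rms_delay_spread_s):
    """Classify the channel seen by a signal as flat or frequency selective."""
    dfc = coherence_bandwidth(rms_delay_spread_s)
    return "flat" if signal_bw_hz < dfc else "selective"

tau = 1e-6  # ~1 microsecond RMS delay spread, illustrative of an urban channel
print(coherence_bandwidth(tau))  # 200000.0 Hz, i.e. ~200 kHz
print(fading_type(30e3, tau))    # 30 kHz voice channel -> flat
print(fading_type(20e6, tau))    # 20 MHz Wi-Fi channel -> selective
```

A narrowband voice channel fits comfortably inside the coherence bandwidth, while a wide Wi-Fi channel does not, matching the flat versus selective distinction above.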

2.4 Multipath fading models


There are several statistical models that have been derived to describe multipath fading, some
based on analysis of the physical phenomena, others based on empirical measurements. In
this section we describe some of the most widely-used models.
2.4.1 Rayleigh Fading
Rayleigh fading is the name given to the form of fading that is often experienced in an
environment where there is a large number of reflections present. The Rayleigh fading model
uses a statistical approach to analyze the propagation, and can be
used in a number of environments.

When the signals reach the receiver, the overall signal is a combination of all the signals that
have reached the receiver via the multitude of different paths that are available. These signals
will all sum together, the phase of the signal being important. Dependent upon the way in
which these signals sum together, the signal will vary in strength. If they were all in phase
with each other they would all add together. However this is not normally the case, as some
will be in phase and others out of phase, depending upon the various path lengths, and
therefore some will tend to add to the overall signal, whereas others will subtract.

As there is often movement of the transmitter or the receiver this can cause the path lengths
to change and accordingly the signal level will vary. Additionally if any of the objects being
used for reflection or refraction of any part of the signal moves, then this too will cause
variation. This occurs because some of the path lengths will change and in turn this will mean
their relative phases will change, giving rise to a change in the summation of all the received
signals.

The Rayleigh fading model can be used to analyze radio signal propagation on a statistical
basis. It operates best under conditions when there is no dominant signal (e.g. direct line of
sight signal), and in many instances cellular telephones being used in a dense urban
environment fall into this category. Other examples where no dominant path generally exists
are for ionospheric propagation where the signal reaches the receiver via a huge number of
individual paths. Propagation using tropospheric ducting also exhibits the same patterns.
Accordingly all these examples are ideal for the use of the Rayleigh fading or propagation
model.

Let X be a random variable denoting the amplitude of the received signal. X is the result of
combining the in-phase (NI) and quadrature (NQ) components: X = √(NI² + NQ²). Under the
assumption that X is the result of adding many reflected signals of similar amplitude, by the
central limit theorem both NI and NQ are zero-mean Gaussian random variables with
variance σ². In that case, X is Rayleigh distributed, that is, it has a probability density
function (pdf) equal to:

fX(x) = (x/σ²) · e^(−x²/(2σ²)),  x ≥ 0
Y = X²/2, the power of the received signal, has an exponential distribution with mean σ²,
that is:

fY(y) = (1/σ²) · e^(−y/σ²),  y ≥ 0
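The relationship between the Rayleigh amplitude and the exponential power distribution can be checked numerically. The sketch below, which assumes σ = 1 for simplicity, builds X from zero-mean Gaussian I/Q components and verifies that the average power Y = X²/2 is close to σ²:

```python
import math
import random

random.seed(1)

def rayleigh_sample(sigma):
    """Amplitude X = sqrt(NI^2 + NQ^2), with NI, NQ ~ N(0, sigma^2)."""
    ni = random.gauss(0.0, sigma)
    nq = random.gauss(0.0, sigma)
    return math.hypot(ni, nq)

sigma = 1.0
samples = [rayleigh_sample(sigma) for _ in range(200_000)]

# The power Y = X^2/2 should be exponential with mean sigma^2
mean_power = sum(x * x / 2 for x in samples) / len(samples)
print(round(mean_power, 2))  # close to 1.0, i.e. sigma^2
```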
2.4.2 Rician fading
Rician fading is a stochastic model for the radio propagation anomaly caused by partial
cancellation of a radio signal by itself: the signal arrives at the receiver by several different
paths (hence exhibiting multipath interference), and at least one of the paths is changing
(lengthening or shortening). Rician fading occurs when one of the paths, typically a
line-of-sight signal, is much stronger than the others. In Rician fading, the
amplitude gain is characterized by a Rician distribution.

Rayleigh fading is the specialized model for stochastic fading when there is no line of sight
signal, and is sometimes considered as a special case of the more generalized concept of
Rician fading.

Channel characterization

NI = m1 + η1, NQ = m2 + η2, where η1 and η2 are independent zero-mean Gaussian random
variables, each with variance σ².

A Rician fading channel can be described by two parameters: K and Ω. K is the ratio between
the power in the direct path (specular component, ν² = m1² + m2²) and the power in the other,
scattered, paths (2σ²). Ω is the total power from both paths (Ω = ν² + 2σ²), and acts as a
scaling factor to the distribution.

The received signal amplitude (not the received signal power) R is then Rice distributed with
parameters ν² = KΩ/(K+1) and σ² = Ω/[2(K+1)]. The resulting PDF is:

f(x) = [2(K+1)x/Ω] · e^(−K − (K+1)x²/Ω) · I₀(2·√(K(K+1)/Ω)·x),  x ≥ 0

where I0(.) is the 0-th order modified Bessel function of the first kind. Note that for K=0 (no
direct path), the resulting PDF becomes the Rayleigh PDF.
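A numerical check similar to the Rayleigh one can be done for the Rician model. The sketch below (with illustrative values K = 3 and Ω = 2, and with the whole specular component placed in-phase, one valid choice of m1, m2) verifies that the mean square amplitude equals Ω:

```python
import math
import random

random.seed(7)

def rician_sample(K, omega):
    """Amplitude with specular power nu^2 = K*Omega/(K+1)
    and scattered power 2*sigma^2 = Omega/(K+1)."""
    nu = math.sqrt(K * omega / (K + 1))
    sigma = math.sqrt(omega / (2 * (K + 1)))
    ni = nu + random.gauss(0.0, sigma)  # whole specular part in-phase (m1=nu, m2=0)
    nq = random.gauss(0.0, sigma)
    return math.hypot(ni, nq)

K, omega = 3.0, 2.0
samples = [rician_sample(K, omega) for _ in range(200_000)]
mean_sq = sum(r * r for r in samples) / len(samples)
print(round(mean_sq, 2))  # E[R^2] = nu^2 + 2*sigma^2 = Omega, so close to 2.0
```

Setting K = 0 in `rician_sample` removes the specular component and reduces it to a Rayleigh amplitude, mirroring the K = 0 remark above.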

2.4.3 Nakagami-m fading


The Nakagami fading model was initially proposed because it matched empirical results for
short wave ionospheric propagation. In current wireless communication it is used to model
urban radio multipath channels.

Nakagami fading occurs for multipath scattering with relatively large delay-time spreads,
with different clusters of reflected waves. Within any one cluster, the phases of individual
reflected waves are random, but the delay times are approximately equal for all waves. As a
result the envelope of each cumulated cluster signal is Rayleigh distributed. The average time
delay is assumed to differ significantly between clusters. If the delay times also significantly
exceed the bit time of a digital link, the different clusters produce serious intersymbol
interference, so the multipath self-interference then approximates the case of co-channel
interference by multiple incoherent Rayleigh-fading signals.

Let X be a random variable representing the amplitude of the received signal. If we assume
that the received signal is the result of adding multiple independent and identically distributed
(i.i.d.) Rayleigh-fading signals, then X is Nakagami distributed, with pdf:

f(x) = [2·m^m/(Γ(m)·Ω^m)] · x^(2m−1) · e^(−m·x²/Ω),  x ≥ 0

where Ω = E(X²), and m is defined as the following ratio of moments, called the fading figure:

m = Ω² / E[(X² − Ω)²]

Note that if X is Nakagami distributed, the corresponding instantaneous power (Y = X²/2) is
gamma distributed.
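This gamma relationship gives a simple way to sample Nakagami-m amplitudes. The sketch below uses the parameterization X² ~ Gamma(shape = m, scale = Ω/m), which follows from the pdf above by the change of variable y = x², and checks that E[X²] = Ω:

```python
import math
import random

random.seed(42)

def nakagami_sample(m, omega):
    """Nakagami-m amplitude via the gamma relationship:
    X^2 ~ Gamma(shape=m, scale=omega/m), so E[X^2] = omega."""
    y = random.gammavariate(m, omega / m)  # instantaneous squared amplitude
    return math.sqrt(y)

m, omega = 2.0, 1.0  # illustrative values
samples = [nakagami_sample(m, omega) for _ in range(200_000)]
mean_sq = sum(x * x for x in samples) / len(samples)
print(round(mean_sq, 2))  # close to 1.0, i.e. Omega
```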

Other uses of the Nakagami distribution:

• It describes the amplitude of the received signal after maximum ratio diversity
combining.
• The Rician and the Nakagami model behave approximately equivalently near their
mean value. This observation has been used in many recent papers to advocate the
Nakagami model as an approximation for situations where a Rician model would be
more appropriate.

2.5 Mobility Impact


So far we have described wireless propagation in a more-or-less static (instantaneous)
setting. In this section we describe the impact of mobility on the multipath fading.

2.5.1 Doppler effect


The Doppler effect (or Doppler shift) is the change in frequency of a wave (or other
periodic event) for an observer moving relative to its source. It is commonly heard when a
vehicle sounding a siren or horn approaches, passes, and recedes from an observer. Compared
to the emitted frequency, the received frequency is higher during the approach, identical at
the instant of passing by, and lower during the recession.

When the source of the waves is moving toward the observer, each successive wave crest is
emitted from a position closer to the observer than the previous wave. Therefore, each wave
takes slightly less time to reach the observer than the previous wave. Hence, the time between
the arrivals of successive wave crests at the observer is reduced, causing an increase in the
frequency. While they are travelling, the distance between successive wave fronts is reduced,
so the waves "bunch together". Conversely, if the source of waves is moving away from the
observer, each wave is emitted from a position farther from the observer than the previous
wave, so the arrival time between successive waves is increased, reducing the frequency. The
distance between successive wave fronts is then increased, so the waves "spread out".

Since electromagnetic waves do not require a medium to propagate, only the relative
difference in velocity (v) between the observer and the source needs to be considered to
calculate the Doppler shift Δf. Let ϕ be the angle between the line connecting the mobile and
sender and the direction of the velocity vector v. Then Δf = v·cos(ϕ)·f/c. For example, for
f = 1.8 GHz, v = 4 mph (pedestrian speed), and ϕ = 0, the Doppler shift is approximately
Δf ≈ 10 Hz, while for v = 40 mph (vehicular speed) it is approximately Δf ≈ 100 Hz.
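These figures can be reproduced directly from Δf = v·cos(ϕ)·f/c, using c = 3×10⁸ m/s as elsewhere in the text (the exact values come out slightly above the rounded 10 Hz and 100 Hz):

```python
import math

MPH_TO_MPS = 0.44704  # miles per hour to meters per second
C = 3e8               # speed of light in m/s, the value used in the text

def doppler_shift(f_hz, v_mps, phi_rad=0.0):
    """Doppler shift: df = v * cos(phi) * f / c."""
    return v_mps * math.cos(phi_rad) * f_hz / C

f = 1.8e9  # carrier frequency from the example
print(round(doppler_shift(f, 4 * MPH_TO_MPS), 1))   # pedestrian: ~10.7 Hz
print(round(doppler_shift(f, 40 * MPH_TO_MPS), 1))  # vehicular: ~107.3 Hz
```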

Doppler shifts require synchronization circuitry (usually a Phase-Locked Loop) to recover
the signal frequency.

Doppler spread

Some fading also arises when the Doppler shifts of two signals differ
due to multipath propagation. The relative speed between source and receiver is different
along paths that bounce off objects located at different places. For example, in Figure 4 the
receiver is moving with speed v towards the wall (away from the sender). It receives two
signals, one directly from the sender and one bounced off the wall. The frequency shifts of
the two signals are different: the signal from the sender has a negative Doppler shift (i.e., its
frequency is reduced), while the signal bounced off the wall has a positive Doppler shift (i.e.,
its frequency is increased). The difference between the Doppler shifts of signals coming from
the same transmitter via two different paths is called the Doppler spread. In a way similar to
phase-shifted signals cancelling each other by destructive interference, the frequency-shifted
signals interfere and create fading.

Figure 4. Doppler spread due to reflections.

2.5.2 Fast versus Slow fading


The terms slow and fast fading refer to the rate (time) at which the magnitude and phase
change imposed by the channel on the signal changes. The coherence time is a measure of the
minimum time required for the magnitude change or phase change of the channel to become
uncorrelated from its previous value.

Slow fading arises when the coherence time of the channel is large relative to the delay
constraint of the channel. In this regime, the amplitude and phase change imposed by the
channel can be considered roughly constant over the period of use. Slow fading can be caused
by events such as shadowing, where a large obstruction such as a hill or large building
obscures the main signal path between the transmitter and the receiver. The received power
change caused by shadowing is often modeled using a log-normal distribution with a standard
deviation according to the log-distance path loss model, as discussed in Subsection 2.1.

Fast fading occurs when the coherence time of the channel is small relative to the delay
constraint of the channel. In this case, the amplitude and phase change imposed by the
channel varies considerably over the period of use. Rayleigh fading is typically fast.

The coherence time of the channel is related to the Doppler spread of the channel, that is, the
difference in Doppler shifts between different signal components contributing to a single
fading channel tap (see previous subsection and Figure 4). Channels with a large Doppler
spread have signal components that are each changing independently in phase over time.
Since fading depends on whether signal components add constructively or destructively, such
channels have a very short coherence time. In general, coherence time is inversely related to
Doppler spread, typically approximated as Tc ≈ 1/Ds, where Tc is the coherence time and
Ds is the Doppler spread.

Figure 5 shows two realizations of fast Rayleigh fading, as experienced by two receivers
moving at different speeds: 6 km/h (walking speed) and 60 km/h (driving speed). The
figure shows the received power (dB with respect to the average RMS) versus time, over 1
second.

Figure 5. One second of (Fast) Rayleigh, with a maximum Doppler Shift of (a)10Hz and (b)100Hz.

At pedestrian speed (Figure 5, left) the observed Doppler spread was 10 Hz. Observing the
channel received power over the one-second interval, we notice 10 instances of deep fading
(fades of 10 dB below the average). Thus, while the coherence time is not constant (the
observed signal is not perfectly periodic), its average value is highly correlated with the
inverse of the Doppler spread. At vehicular speed (Figure 5, right) the observed Doppler
spread was 100 Hz. Observing the channel received power over the one-second interval, once
again we see a high correlation between the channel coherence time and the inverse of the
Doppler spread (in this case on the order of 1/100 Hz = 10 ms). While not perfectly periodic,
the deep-fade instants exhibit a high degree of time regularity.
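The Tc ≈ 1/Ds approximation applied to the two cases in Figure 5:

```python
def coherence_time(doppler_spread_hz):
    """Rule-of-thumb coherence time: Tc ~ 1/Ds."""
    return 1.0 / doppler_spread_hz

# Pedestrian (10 Hz) and vehicular (100 Hz) Doppler spreads, as in Figure 5
print(coherence_time(10.0))   # 0.1 s: roughly 10 deep fades per second
print(coherence_time(100.0))  # 0.01 s = 10 ms
```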

Left untreated, fast fading can have a devastating effect on communication performance. For
example, Figure 6 shows the bit error probability pe of an uncoded BPSK signal under white
noise, with and without fading. It can be seen that if there is no fading (AWGN channel,
solid line) the error probability decreases very rapidly (exponentially) with SNR. Indeed,
error probabilities on the order of 10⁻¹² can be achieved with an SNR of around 15 dB.
However, if the channel experiences fading (Rayleigh channel, dashed and dotted lines) the
error probability decreases much more slowly (only inversely with SNR). An SNR of
about 40 dB is needed to achieve an error probability of 10⁻⁴ (unacceptably high), and to
obtain an error rate on the order of 10⁻¹² we would need an SNR of around 120 dB!
The reason such a high SNR is needed is that under fast fading, the error probability is
dominated by the occurrences of deep fades. That is, for high SNR, bit errors mostly occur
during periods of deep fade, so the error rate is approximately equal to the probability of a
deep fade. In order to reduce the error probability, we need to increase the signal power to the
point that it survives a deep fade. Given that the received power is exponentially distributed
(Rayleigh fading), the probability that the received signal falls below the sensitivity threshold
(a deep fade) is proportional to 1/γ, where γ is the average received power.

Figure 6. Error probability for simple uncoded BPSK modulation under AWGN with and without fading.
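The contrast in Figure 6 can be reproduced from the standard closed-form expressions for uncoded BPSK (textbook results, not formulas given in this handout): Pe = Q(√(2γ)) for AWGN, and Pe = ½(1 − √(γ/(1+γ))) ≈ 1/(4γ) for Rayleigh fading with average SNR γ:

```python
import math

def q(x):
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_bpsk_awgn(snr_db):
    """Uncoded BPSK over AWGN: Pe = Q(sqrt(2*SNR))."""
    g = 10 ** (snr_db / 10.0)
    return q(math.sqrt(2.0 * g))

def pe_bpsk_rayleigh(snr_db):
    """Uncoded BPSK over Rayleigh fading (average SNR g):
    Pe = 0.5*(1 - sqrt(g/(1+g))) ~ 1/(4g) for large g."""
    g = 10 ** (snr_db / 10.0)
    return 0.5 * (1.0 - math.sqrt(g / (1.0 + g)))

for snr in (10, 20, 30, 40):
    # AWGN falls off exponentially; Rayleigh only inversely with SNR
    print(snr, pe_bpsk_awgn(snr), pe_bpsk_rayleigh(snr))
```

Running this shows the AWGN curve plunging while the Rayleigh curve loses only one decade of error probability per 10 dB, which is the behavior discussed above.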

Fortunately, most of the losses due to fading can be recovered again thanks to diversity
coding, as discussed in Section 2.6.2.

2.6 Overcoming Multipath Propagation Issues


Multipath propagation is an issue for any radio communications system, ranging from
short-range wireless communications such as Wi-Fi, through cellular and longer-range data
schemes such as WiMAX, to VHF links where tropospheric propagation may affect the
signal path, and HF systems using the ionosphere for reflections. In all of these
systems the effects of multipath propagation can be seen and experienced. Any form of
communications therefore has to be able to accommodate the effects of multipath
propagation in one way or another.

In a fast-fading channel, the transmitter may take advantage of the variations in the channel
conditions using time diversity to help increase robustness of the communication to a
temporary deep fade. Although a deep fade may temporarily erase some of the information
transmitted, use of an error-correcting code coupled with successfully transmitted bits during
other time instances (interleaving) can allow for the erased bits to be recovered. In a slow-
fading channel, it is not possible to use time diversity because the transmitter sees only a
single realization of the channel within its delay constraint. A deep fade therefore lasts the
entire duration of transmission and cannot be mitigated using coding.

This section describes techniques used to mitigate the impact of fading.

2.6.1 Slow fading


Since slow fading, by definition, lasts several transmission intervals, time-based techniques
(repetition of packets, interleaving and coding, etc.) are ineffective. In that case, the following
techniques may be used:

• Channel estimation/equalization, and then use of this
information to optimize the transmission parameters and even the signal waveform. This
is most effective for Wireless Local Loops, or situations with little mobility, where the
cost of training sequences for channel estimation is amortized among several
transmissions (packets). Some level of adaptation can also be obtained in power-
controlled WLANs, where the transmit power and bitrate of the channel can be adjusted
in response to channel variations (low to moderate fading). Also, modulation techniques
like OFDM (see Subsection 2.6.2.1), which allow adjusting the transmit signal's power
spectral density, can be used to avoid frequencies experiencing deep fade and thus
mitigate the impact of frequency selective fading.
• Cooperative diversity. In a multihop network, two or more senders can coordinate a
simultaneous transmission, acting as a multiple-antenna radio (space diversity). This is
referred to as cooperative diversity. If the distance between the transmitting nodes is large
enough, they may be able to overcome obstacles.
• Opportunistic forwarding. In this technique, the next node to forward a packet towards its
destination (in a multihop network) is not fixed by the transmitter, but is dynamically
chosen among the nodes that received the packet (sent as a broadcast). This way, if an
intermediate node is under a deep fade, an alternate node may receive and forward the
packet.
• Fast re-routing or other higher layer techniques. That is, let the higher layers choose an
alternate path, if available. Care needs to be taken to be sufficiently responsive to the
channel changes while avoiding saturating the network with excessive control messages.

2.6.2 Fast fading


Under fast fading, the channel state at the current time is difficult to predict from past history,
and therefore channel estimation/equalization techniques are not effective. In this case, fading
can be overcome with the help of diversity techniques:

• Time diversity: the simplest form of time diversity is to retransmit the affected packet,
which is a case of repetition coding. A more sophisticated technique involves the use of
(k, n) coding, where a message is coded into n blocks such that any set of k blocks
(k < n) is sufficient to recover the message. Then, as long as the number of blocks
experiencing fading is at most (n−k), the message will be recovered. A similar (and older)
technique is bit interleaving in conjunction with bit-level error correcting codes.
• Frequency diversity: using modulation schemes such as OFDM that allow treating the
channel as a set of n subchannels, where each subchannel experiences a different fading
level (selective fading). Using (k, n) coding, a message can be coded into n blocks, each
transmitted via a different subchannel. As long as the number of subchannels
experiencing fading is at most (n−k), the message will be recovered.
• Space diversity: use of two or more antennas with enough separation (on the order of a
wavelength) to experience independent fading. The likelihood that the signals received
on all the antennas simultaneously experience a deep fade decreases exponentially with
the number of antennas.
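The (k, n) recovery condition used in the first two bullets can be quantified with a short sketch, assuming independent block erasures with probability p (an illustrative assumption; real fades are often correlated in time or frequency):

```python
from math import comb

def recovery_probability(n, k, p_block_erased):
    """(k, n) erasure code: the message is recovered iff at most n-k blocks
    are lost, assuming independent block erasures with probability p."""
    p = p_block_erased
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n - k + 1))

p = 0.1  # illustrative per-block erasure probability
print(round(recovery_probability(1, 1, p), 4))  # no coding: 0.9
print(round(recovery_probability(7, 5, p), 4))  # 5-of-7 code tolerates 2 losses
```

Even two redundant blocks raise the recovery probability well above the uncoded case, which is the essence of the diversity argument above.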

Note that combinations of the above techniques are also possible. For example, space-time
diversity consists of mapping a message into n blocks and then sequentially transmitting the
blocks, alternating the antenna used for each transmission.

In the remainder of this section we will discuss two important
techniques used, among other things, to overcome fading: OFDM modulation and MIMO
antennas.

2.6.2.1 OFDM
In order to meet the requirements to transmit large amounts of data over a radio channel, it is
necessary to choose the most appropriate signal bearer format. One form of signal that lends
itself to radio data transmission in an environment where reflections may be present is
Orthogonal Frequency Division Multiplexing, OFDM. An OFDM signal comprises a large
number of carriers, each of which is modulated with a low bit rate data stream. In this way
the two conflicting requirements, high data rate transmission to meet the capacity
requirements, and low symbol rate to meet the intersymbol interference¹ requirements, can be met.

OFDM is the modulation format that is used for many of today's data transmission formats.
The applications include 802.11n Wi-Fi, LTE (Long Term Evolution for 3G cellular
telecommunications), LTE Advanced (4G), WiMAX and many more. The fact that OFDM is
being widely used demonstrates that it is an ideal format to overcome multipath propagation
problems. Although OFDM is more complicated than earlier forms of signal format, it
provides some distinct advantages in terms of data transmission, especially where high data
rates are needed along with relatively wide bandwidths.

What is OFDM? - The concept


OFDM is a form of multicarrier modulation. An OFDM signal consists of a number of
closely spaced modulated carriers. When traditional frequency division multiplexing (FDM)
is applied to a carrier, the sidebands spread out on either side. It is necessary for a receiver to
be able to receive the whole signal to be able to successfully demodulate the data. As a result,
when signals are transmitted close to one another they must be spaced so that the receiver can
separate them using a filter, and there must be a guard band between them, as shown in
Figure 7.

Figure 7. Traditional FDM signals.

This is not the case with OFDM. Although the sidebands from each carrier overlap, they can
still be received without the interference that might be expected, because they are orthogonal
to one another. This is achieved by making the carrier spacing equal to the reciprocal of the
symbol period.

To see how OFDM works, it is necessary to look at the receiver.

¹ Multipath propagation causes the reception of multiple delayed copies of the same symbol.
If the delay spread is greater than the symbol duration, a delayed copy of a symbol will
interfere with the signal of the next symbol. This is referred to as Intersymbol Interference
(ISI), which is computationally intensive to address at high data rates. Note also that a long
symbol duration, besides limiting the impact of ISI, also results in a subchannel width narrow
enough for the subchannel to experience flat fading.

This acts as a bank of demodulators, translating each carrier down to
DC. The resulting signal is integrated over the symbol period to regenerate the data from that
carrier. The same demodulator also sees the other carriers; however, since the carrier spacing
is equal to the reciprocal of the symbol period, each of the other carriers has a whole number
of cycles in the symbol period, and their contributions integrate to zero. In other words, there
is no interference contribution (see Figure 8).
Figure 8. Spectrum of an OFDM signal.
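
The orthogonality condition can be checked numerically. The sketch below (the subcarrier indices, sample count, and normalized symbol period are arbitrary choices, not values from this handout) integrates the product of two complex carriers spaced by exactly the reciprocal of the symbol period:

```python
import numpy as np

# Two OFDM subcarriers spaced by exactly 1/T (the reciprocal of the
# symbol period) complete a whole number of cycles per symbol, so their
# product integrates to zero over one symbol period.
T = 1.0                       # symbol period (normalized)
fs = 1000                     # samples per symbol
t = np.arange(fs) / fs * T

f1, f2 = 5 / T, 6 / T         # two subcarriers spaced by 1/T
c1 = np.exp(2j * np.pi * f1 * t)
c2 = np.exp(2j * np.pi * f2 * t)

# Discrete approximation of the integral over the symbol period
cross = np.sum(c1 * np.conj(c2)) / fs   # different carriers
same = np.sum(c1 * np.conj(c1)) / fs    # same carrier

print(abs(cross))   # ~0: orthogonal, no interference contribution
print(abs(same))    # ~1: the desired carrier survives integration
```

Shifting f2 slightly off the 1/T grid makes the cross term nonzero, which is precisely the inter-carrier interference that the orthogonal spacing avoids.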

More specifically, the OFDM modulation process is the following:


• Sender takes a serial stream of bits and maps them into a parallel array of N complex
values. Each value is a symbol to be transmitted, corresponding to a given modulation
scheme (e.g., QPSK, QAM, etc.). Different subchannels may have different modulation
schemes.
• Sender computes the Inverse Fast Fourier Transform (IFFT) of the block of N values, and
transmits the resulting time series (after adding a cyclic prefix equal to the last values of
the time sequence, to avoid boundary effects). Thus, we can say that each of the N
symbols is modulating one of the N sub-carriers (FFT components).
• OFDM provides frequency diversity as each sub-carrier can be independently
coded/modulated
– If the channel is good in the subcarrier region, then a high bit rate code is used.
– If the channel suffers from frequency selective fading in the subcarrier region, a low
bit rate is used (high coding gain), or the subcarrier can even be left unused.
– However, this requires knowledge of the channel state, which may be difficult to
obtain in a fast fading environment.
• Note that subcarriers are overlapping (see Figure 8) but orthogonal (recall that the IFFT is
a reversible transformation).
• Once the receiver gets N samples of the signal, it applies the FFT to the block of N
samples. Each of the N outputs of the FFT corresponds to one transmitted symbol, plus
noise.
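
The steps above can be sketched end to end. In this minimal example the subcarrier count, cyclic-prefix length, and QPSK mapping are illustrative assumptions, and the channel is taken to be ideal (no noise or multipath):

```python
import numpy as np

rng = np.random.default_rng(0)
N, cp_len = 64, 16           # subcarriers and cyclic-prefix length (illustrative)

# Sender: map a serial bit stream to QPSK symbols, one per subcarrier
bits = rng.integers(0, 2, size=2 * N)
symbols = (1 - 2.0 * bits[0::2]) + 1j * (1 - 2.0 * bits[1::2])
symbols /= np.sqrt(2)        # unit-power QPSK

# The IFFT turns the N frequency-domain symbols into a time-domain block;
# the last cp_len samples are prepended as the cyclic prefix
time_block = np.fft.ifft(symbols)
tx = np.concatenate([time_block[-cp_len:], time_block])

# Receiver: drop the cyclic prefix, then apply the FFT to the N samples
rx = tx[cp_len:]
recovered = np.fft.fft(rx)

print(np.allclose(recovered, symbols))  # True on an ideal channel
```

On a real channel each FFT output would be the transmitted symbol scaled by that subcarrier's (flat) channel gain, plus noise, which is why per-subcarrier equalization reduces to a single complex division.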

One requirement of the OFDM transmitting and receiving systems is that they must be linear.
Any non-linearity will cause interference between the carriers as a result of inter-modulation
distortion. This will introduce unwanted signals that would cause interference and impair the
orthogonality of the transmission.

In terms of equipment, the high peak-to-average ratio of multi-carrier systems such as OFDM
requires the RF final amplifier at the output of the transmitter to handle the peaks even
though the average power is much lower, and this leads to inefficiency. In some systems the
peaks are limited. Although this introduces distortion that results in a higher level of data
errors, the system can rely on the error correction to remove them.

Data on OFDM
The data to be transmitted on an OFDM signal is spread across the carriers of the signal, each
carrier taking part of the payload. This reduces the data rate taken by each carrier. The lower
data rate has the advantage that interference from reflections is much less critical. This is
achieved by adding a guard time, or guard interval, into the system, which ensures that the
data is only sampled when the signal is stable and no new delayed signals arrive that would
alter the timing and phase of the signal.
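
As a rough illustration of guard-interval sizing (the 3.2 µs symbol and 0.8 µs guard below are assumed, 802.11a-like values, not figures from this handout), the guard interval must exceed the channel's delay spread, and it costs a fixed fraction of throughput:

```python
# Guard-interval sizing sketch: the guard time must be longer than the
# delay spread of the channel so that delayed copies of the previous
# symbol die out before the receiver samples the current one.
c = 3e8                  # propagation speed of radio waves, m/s
symbol_us = 3.2          # useful OFDM symbol duration, microseconds (assumed)
guard_us = 0.8           # guard interval / cyclic prefix, microseconds (assumed)

# Longest excess path length the guard interval can absorb
max_excess_path_m = guard_us * 1e-6 * c
print(max_excess_path_m)     # ~240 m of extra path length tolerated

# The guard interval is pure overhead in time
overhead = guard_us / (symbol_us + guard_us)
print(overhead)              # 0.2: a 20% throughput cost
```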

The distribution of the data across a large number of carriers in the OFDM signal has some
further advantages. Nulls caused by multi-path effects or interference on a given frequency
affect only a small number of the carriers; the remaining ones are received correctly.
Error-coding techniques, which add further data to the transmitted signal, enable many or all
of the corrupted data to be reconstructed within the receiver. This is possible because the
error correction code is transmitted in a different part of the signal.

OFDM advantages
OFDM has been used in many high data rate wireless systems because of the many
advantages it provides.

• Immunity to selective fading: One of the main advantages of OFDM is that it is more
resistant to frequency selective fading than single carrier systems, because it divides
the overall channel into multiple narrowband signals that are affected individually as
flat fading sub-channels.
• Resilience to interference: Interference appearing on a channel may be bandwidth
limited and in this way will not affect all the sub-channels. This means that not all the
data is lost.
• Spectrum efficiency: By using closely spaced, overlapping sub-carriers, OFDM
makes efficient use of the available spectrum.
• Resilient to ISI: Another advantage of OFDM is that it is very resilient to inter-
symbol and inter-frame interference. This results from the low data rate on each of the
sub-channels.
• Resilient to narrow-band effects: Using adequate channel coding and interleaving it
is possible to recover symbols lost due to the frequency selectivity of the channel and
narrow band interference. Not all the data is lost.
• Simpler channel equalization: One of the issues with CDMA systems was the
complexity of the channel equalization, which had to be applied across the whole
channel. An advantage of OFDM is that, by using multiple sub-channels, the channel
equalization becomes much simpler.

OFDM disadvantages
Whilst OFDM has been widely used, there are still a few disadvantages to its use which need
to be addressed when considering its use.

• High peak to average power ratio: An OFDM signal has a noise-like amplitude
variation and a relatively large dynamic range, or peak-to-average power
ratio². This impacts RF amplifier efficiency: the amplifiers need to be linear and
accommodate the large amplitude variations, and these factors mean the amplifier
cannot operate with a high efficiency level.

² To visualize why this is the case, consider for example the case where each subchannel
uses BPSK modulation (1 bit) and transmits the same symbol, so the sequence to be
transmitted is all 1's. The IFFT of the all-1's sequence is an impulse; that is, all the
energy is concentrated in a single high pulse.
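
This worst case is easy to reproduce numerically (the block size N = 64 is an arbitrary choice):

```python
import numpy as np

N = 64
# If every subcarrier carries the same BPSK symbol (all 1's), the IFFT
# output is an impulse: all the energy lands in a single sample
x = np.fft.ifft(np.ones(N))
print(np.round(np.abs(x), 6)[:4])    # [1. 0. 0. 0.]

# Peak-to-average power ratio of that block
papr = np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2)
print(round(papr, 3))                # 64.0: equal to N, the worst case
```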

• Sensitive to carrier offset and drift: Another disadvantage of OFDM is that it is
sensitive to carrier frequency offset and drift. Single carrier systems are less
sensitive.

OFDM variants
There are several other variants of OFDM for which the initials are seen in the technical
literature. These follow the basic format for OFDM, but have additional attributes or
variations:

• COFDM: Coded Orthogonal frequency division multiplexing. A form of OFDM
where error correction coding is incorporated into the signal.
• Flash OFDM: This is a variant of OFDM that was developed by Flarion and it is a
fast hopped form of OFDM. It uses multiple tones and fast hopping to spread signals
over a given spectrum band.
• OFDMA: Orthogonal frequency division multiple access. A scheme used to provide
a multiple access capability for applications such as cellular telecommunications
when using OFDM technologies.
• VOFDM: Vector OFDM. This form of OFDM uses the concept of MIMO
technology. It is being developed by CISCO Systems. MIMO stands for Multiple
Input Multiple output and it uses multiple antennas to transmit and receive the signals
so that multi-path effects can be utilized to enhance the signal reception and improve
the transmission speeds that can be supported.
• WOFDM: Wideband OFDM. The concept of this form of OFDM is that it uses a
degree of spacing between the channels that is large enough that any frequency errors
between transmitter and receiver do not affect the performance. It is particularly
applicable to Wi-Fi systems.

Each of these forms of OFDM utilizes the same basic concept of using close-spaced
orthogonal carriers, each carrying low data rate signals. During the demodulation phase the
data is then combined to provide the complete signal.

2.6.2.2 MIMO
While multipath propagation creates interference for many radio communications systems, it
can also be used to advantage to provide additional capacity on a given channel. Using a
scheme known as MIMO, multiple input multiple output, it is possible to multiply the data
capacity of a given channel several times by exploiting the multipath propagation that exists.

In view of the advantages that MIMO offers, many current wireless and radio
communications schemes are using it to make far more efficient use of the available
spectrum. The disadvantage of MIMO is that it requires the use of multiple antennas, and
with modern portable equipment such as cell phones being increasingly small, it can be
difficult to place two sufficiently spaced antennas onto them.

MIMO can provide:


• Spatial diversity, providing protection against fading.
• Spatial multiplexing, allowing multiple streams between sender and receiver. An NxN
MIMO system can support up to N simultaneous streams (multiplying the bit rate N
times).

• Multiuser communication, allowing simultaneous reception of two or more
transmissions (null steering/interference cancellation).

In this section we will focus on spatial diversity against fading.

Figure 9 shows an MxN MIMO system. Note that hij represents the channel gain from the
sender's antenna i to the receiver's antenna j. For the case of fast Rayleigh fading, and when
the separation of the antennas is on the order of the carrier wavelength, we can assume that
the set of hij are i.i.d. Rayleigh-distributed random variables. Let's start by considering the
case where the transmitter only uses antenna 1. Thus, the signal received at the receiver's
antenna k is equal to h1k s(t) + noise. If we just add these N signals, the expected value of
the combined power is equal to Σk |h1k|² E[|s(t)|²]. Approximating Σk |h1k|² ≈ maxk |h1k|²,
which holds when the channel gains are fairly diverse and the maximum gain therefore
dominates the received power, we get that the probability of a deep fade is equal to the
probability of all channels experiencing fading at the same instant. Thus, if pf is the
probability that a single (source, destination) antenna pair experiences a deep fade, then the
probability that the combined received signal experiences a deep fade is equal to pf^N; that
is, the probability of a deep fade decreases exponentially with respect to the number of
antennas.
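
A quick Monte Carlo sketch of this argument (the deep-fade threshold and trial count are assumptions made for the illustration) reproduces the pf^N behavior under the max-gain approximation used above:

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 200_000
threshold = 0.1      # received power below this counts as a deep fade (assumed)

# For Rayleigh fading with unit mean power, |h|^2 is exponentially distributed
results = {}
for N in (1, 2, 4):
    h2 = rng.exponential(1.0, size=(trials, N))
    # the approximation in the text: combined power ~ max_k |h_1k|^2,
    # so a deep fade requires every branch to fade at the same instant
    results[N] = np.mean(h2.max(axis=1) < threshold)
    print(N, results[N])

# With p_f = P(one branch fades) = 1 - exp(-0.1) ~ 0.095, the estimates
# track p_f**N: the deep-fade probability drops exponentially in N
```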

Note that the above analysis is very conservative. MIMO receivers do not simply add up the
signals they receive on their antennas; they employ techniques such as Maximal Ratio
Combining (MRC) to “tune” to the sender. MRC is a method of diversity combining in
which:
• the signals from each channel are added together,
• the gain of each channel is made proportional to the rms signal level and inversely
proportional to the mean square noise level in that channel, and
• the same proportionality constant is used for all channels.

MRC is also known as ratio-squared combining and predetection combining. Maximal-ratio
combining is the optimum combiner for independent AWGN channels. MRC can restore a
signal to its original shape.

Note that MRC does not require coordination between sender and receiver, nor channel state
information at the transmitter; it only requires information locally available at the receiver.
The main insight behind MRC is to assign each received signal a weight proportional to the
SNR (or confidence level) of that signal. This way, the signal with the higher SNR (the most
reliable signal) is given preference.
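
A minimal MRC sketch along these lines, assuming the receiver knows only its own (randomly drawn) channel gains and the per-branch noise power; the antenna count and noise level are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4                                    # receive antennas (assumed)
noise_var = 0.1                          # per-branch noise power (assumed)

# Rayleigh-fading channel gains, known at the receiver only
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

s = 1.0 + 0j                             # transmitted symbol
noise = np.sqrt(noise_var / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))
r = h * s + noise                        # signal at each receive antenna

# MRC weights: conjugate of the channel gain divided by the branch noise
# power, so higher-SNR branches contribute proportionally more
w = np.conj(h) / noise_var
estimate = np.sum(w * r) / np.sum(np.abs(h) ** 2 / noise_var)
print(estimate)                          # close to the transmitted symbol

# The combiner's output SNR is the sum of the per-branch SNRs
snr_out = np.sum(np.abs(h) ** 2) / noise_var
```

Note that every quantity the combiner uses (h, noise_var, r) is local to the receiver, matching the observation above that MRC needs no transmitter-side channel knowledge.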

Figure 9. An MxN MIMO system.

3 Multiple Access
3.1 Duplexing (FDD and TDD)

Duplexing is the process of achieving two-way communications over a communications
channel. It takes two forms: half duplex and full duplex.

In half duplex, the two communicating parties take turns transmitting over a shared channel.
Two-way radios work this way. As one party talks, the other listens. Speaking parties often
say “Over” to indicate that they’re finished and it’s time for the other party to speak. In
networking, a single cable is shared as the two computers communicating take turns sending
and receiving data.

Full duplex refers to simultaneous two-way communications. The two communicating
stations can send and receive at the same time. Landline telephones and cell phones work this
way. Some forms of networking permit simultaneous transmit and receive operations to
occur. This is the more desirable form of duplexing, but it is more complex and expensive
than half duplexing. There are two basic forms of full duplexing: frequency division duplex
(FDD) and time division duplex (TDD).

3.1.1 Frequency Division Duplex (FDD)


FDD requires two separate communications channels. Wireless systems need two separate
frequency bands or channels (see Figure 10). A sufficient amount of guard band separates the
two bands so the transmitter and receiver don’t interfere with one another. Good filtering or
duplexers and possibly shielding are a must to ensure the transmitter does not desensitize the
adjacent receiver.

Figure 10. FDD requires two (symmetrical) frequency bands for the uplink and downlink channels.

In a cell phone with a transmitter and receiver operating simultaneously within such close
proximity, the receiver must filter out as much of the transmitter signal as possible. The
greater the spectrum separation, the more effective the filters.

FDD uses lots of frequency spectrum, though, generally at least twice the spectrum needed by
TDD. In addition, there must be adequate spectrum separation between the transmit and
receive channels. These so-called guard bands aren’t useable, so they’re wasteful. Given the
scarcity and expense of spectrum, these are real disadvantages.

However, FDD is very widely used in cellular telephone systems, such as the widely used
GSM system. In some systems the 25-MHz band from 869 to 894 MHz is used as the
downlink (DL) spectrum from the cell site tower to the handset, and the 25-MHz band from
824 to 849 MHz is used as the uplink (UL) spectrum from the handset to cell site.

Another disadvantage with FDD is the difficulty of using special antenna techniques like
multiple-input multiple-output (MIMO) and beamforming. These technologies are a core part
of the new Long-Term Evolution (LTE) 4G cell phone strategies for increasing data rates. It
is difficult to make antenna bandwidths broad enough to cover both sets of spectrum. More
complex dynamic tuning circuitry is required.

FDD also works on a cable where transmit and receive channels are given different parts of
the cable spectrum, as in cable TV systems. Again, filters are used to keep the channels
separate.

Uplink and downlink sub-bands are said to be separated by the frequency offset. Frequency-
division duplexing can be efficient in the case of symmetric traffic. In this case time-division
duplexing tends to waste bandwidth during the switch-over from transmitting to receiving,
has greater inherent latency, and may require more complex circuitry.

Another advantage of frequency-division duplexing is that it makes radio planning easier and
more efficient, since base stations do not "hear" each other (as they transmit and receive in
different sub-bands) and therefore will normally not interfere with each other. Conversely,
with time-division duplexing systems, care must be taken to keep guard times between
neighboring base stations (which decreases spectral efficiency) or to synchronize base
stations so that they transmit and receive at the same time (which increases network
complexity and therefore cost, and reduces bandwidth allocation flexibility, as all base
stations and sectors are forced to use the same uplink/downlink ratio).

Examples of Frequency Division Duplexing systems are:

• ADSL and VDSL


• Most cellular systems, including UMTS/WCDMA (in its FDD mode) and the
cdma2000 system.
• IEEE 802.16 WiMAX also supports a Frequency Division Duplexing mode.

3.1.2 Time Division Duplex (TDD)
TDD uses a single frequency band for both transmit and receive. Then it shares that band by
assigning alternating time slots to transmit and receive operations. The information to be
transmitted—whether it’s voice, video, or computer data—is in serial binary format. Each
time slot may be 1 byte long or could be a frame of multiple bytes.

Figure 11. TDD quickly alternates the transmission and reception of data over time.

Because of the high-speed nature of the data, the communicating parties cannot tell that the
transmissions are intermittent. The transmissions are concurrent rather than simultaneous. For
digital voice converted back to analog, no one can tell it isn’t full duplex.

In some TDD systems, the alternating time slots are of the same duration or have equal
downlink (DL) and uplink (UL) times. However, the system doesn’t have to be 50/50
symmetrical. The system can be asymmetrical as required.

For instance, in Internet access, download times are usually much longer than upload times so
more or fewer frame time slots are assigned as needed. Some TDD formats offer dynamic
bandwidth allocation where time-slot numbers or durations are changed on the fly as
required.
The real advantage of TDD is that it only needs a single channel of frequency spectrum.
Furthermore, no spectrum-wasteful guard bands or channel separations are needed. The
downside is that successful implementation of TDD needs a very precise timing and
synchronization system at both the transmitter and receiver to make sure time slots don’t
overlap or otherwise interfere with one another.

Timing is often synched to precise GPS-derived atomic clock
standards. Guard times are also needed between time slots to prevent overlap. This time is
generally equal to the send-receive turnaround time (transmit-receive switching time) and any
transmission delays (latency) over the communications path.

Examples of time-division duplexing systems are:

• The UMTS 3G supplementary air interface TD-CDMA, for indoor mobile
telecommunications.
• The Chinese TD-LTE 4G and TD-SCDMA 3G mobile communications air
interfaces.
• DECT wireless telephony
• Half-duplex packet switched networks based on carrier sense multiple access, for
example 2-wire or hubbed Ethernet, Wireless local area networks and Bluetooth, can
be considered as Time Division Duplexing systems, albeit not TDMA with fixed
frame-lengths.
• IEEE 802.16 WiMAX
• PACTOR
• ISDN BRI U interface, variants using the Time Compression Multiplex (TCM) line
system
• G.fast, a digital subscriber line (DSL) standard under development by the ITU-T

3.1.3 Comparison

Most cell-phone systems use FDD. The newer LTE and 4G systems use FDD. Cable TV
systems are fully FDD.

Most wireless data transmissions are TDD. WiMAX and Wi-Fi use TDD. So does Bluetooth
when piconets are deployed. ZigBee is TDD. Most digital cordless telephones use TDD.
Because of the spectrum shortage and expense, TDD is also being adopted in some cellular
systems, such as China’s TD-SCDMA and TD-LTE systems. Other TD-LTE cellular systems
are expected to be deployed where spectrum shortages occur.

Figure 12. FDD versus TDD comparison.

TDD appears to be the better overall choice, but FDD is far more widely implemented
because of prior frequency spectrum assignments and earlier technologies. FDD will continue
to dominate the cellular business for now. Yet as spectrum becomes more costly and scarce,
TDD will become more widely adopted as spectrum is reallocated and repurposed.

3.2 Multiple Access Techniques


Earlier multiple access architectures for wireless voice applications (e.g. AMPS
cellular telephony) were based on Frequency Division Multiple Access (FDMA), where each
mobile user is assigned its own frequency channel for transmission. No other mobile user
may use the same frequency channel in the same cell; this makes inefficient use of the
available bandwidth, since the assigned mobile user may not need to utilize it continuously.
In addition, FDMA tends to be rather complex to implement, since it requires the use of
duplexers and tight RF filtering to minimize adjacent channel interference.

Next generation architectures were based on either Code Division Multiple Access (CDMA)
or Time Division Multiple Access (TDMA).

In CDMA systems more than one user can use the same carrier frequency and transmit
simultaneously. The mobile user's narrowband message signal is multiplied by a very large
bandwidth signal called the spreading signal. This spreading signal is a pseudo-noise code
sequence that has a chip rate which is orders of magnitude greater than the data rate of the
message. Each user has its own codeword that is approximately orthogonal to all other
codewords. The receiver performs a time correlation operation to detect only the specific
desired codeword. All other codewords appear as noise due to decorrelation. There is no
transmission coordination among the users which operate independently. For detecting a
specific user's message signal the receiver needs to have knowledge of the codeword used by
this user's transmitter; all the other messages received (generated with different codewords)
form the noise floor, which must be kept low. Therefore, a power regulation scheme is
employed so that all the messages are received at the AP with approximately equal power.
Unfortunately, there is no way to control the power level of mobiles associated with other
APs (i.e. mobile users located in neighboring cells that also contribute to the noise floor).
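
The spreading and correlation steps can be sketched with length-4 Walsh codes, which are exactly orthogonal (real CDMA systems use much longer pseudo-noise sequences that are only approximately orthogonal, as the text notes); the two users and their data symbols are illustrative assumptions:

```python
import numpy as np

# Two users share the channel using orthogonal length-4 Walsh codes;
# the chip rate is 4x the symbol rate ("spreading")
c1 = np.array([1, 1, 1, 1])
c2 = np.array([1, -1, 1, -1])

d1, d2 = 1, -1                    # one BPSK data symbol per user
tx = d1 * c1 + d2 * c2            # both users transmit simultaneously

# The receiver correlates with the desired user's codeword over the
# symbol period; the other user's codeword decorrelates to zero
r1 = np.dot(tx, c1) / len(c1)
r2 = np.dot(tx, c2) / len(c2)
print(r1, r2)                     # 1.0 -1.0
```

With only approximately orthogonal codes the cross-correlation is small but nonzero, which is why each additional user raises the noise floor and why tight power control at the AP matters.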
Some CDMA characteristics are:

• Multipath fading may be substantially reduced because the signal is spread over a large
spectrum. If the spread spectrum bandwidth is greater than the coherence bandwidth of
the channel, the inherent frequency diversity will mitigate the effects of small-scale
fading.
• CDMA has the potential for soft handoff (change of associated AP), since a mobile user
can be received simultaneously by more than one AP, making it possible to choose the
best AP at any point in time without switching frequencies.
• Since increasing the number of users raises the noise floor gradually degrading the
performance for all users, CDMA has a soft capacity limit.
• Current and future advances in digital modulation and coding promise greater tolerance
of low signal-to-noise ratios, therefore increasing the capacity of a CDMA system. The
theoretical maximum capacity of a CDMA system is greater than that of an equivalent
TDMA system.

In TDMA, the radio spectrum is divided into time slots; in each slot
only one user is allowed to either transmit or receive. Unlike FDMA and CDMA, TDMA
data transmission is not continuous; it employs a buffer-and-burst approach, as in a packet
network. The transmissions from the various users are interlaced into a frame structure that
consists of a preamble (for addressing and synchronization purposes), the information
message (several time slots for the users' transmissions), and trail bits. Some of the features
of TDMA are:
• Since the transmission is not continuous, the user's transmitter can be turned off when not
in use (which is most of the time).
• Since the mobile is able to listen to other APs during idle time slots it has the information
required to execute Mobile Assisted HandOff (MAHO) which simplifies the handoff
process (change from one AP to another) and reduces the cost of the AP.
• Duplexers are not required, reducing the complexity of the RF unit.
• Adaptive equalization is usually necessary, as well as high synchronization overhead (on
a slot-by-slot basis).

It should be noted that mixed approaches (FDMA, TDMA, CDMA) were also implemented.
For example, GSM is an FDMA/TDMA system that divides the available bandwidth into
200-kHz-wide channels using FDMA, while each of these channels serves up to 8 users
using TDMA.
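
The GSM numbers above lend themselves to a quick back-of-the-envelope check (the 25-MHz band from the FDD example earlier and the ~270.833 kbit/s gross carrier bit rate are standard GSM figures, assumed here):

```python
# GSM channelization arithmetic: FDMA splits the band into 200-kHz
# carriers; TDMA then gives each carrier 8 time slots.
band_khz = 25_000            # e.g., a 25-MHz uplink allocation (assumed)
carrier_khz = 200
users_per_carrier = 8

carriers = band_khz // carrier_khz
total_users = carriers * users_per_carrier
print(carriers, total_users)          # 125 1000

# Each user's share of the carrier's gross bit rate (~270.833 kbit/s)
gross_kbps = 270.833
per_user_kbps = gross_kbps / users_per_carrier
print(round(per_user_kbps, 1))        # 33.9 kbit/s per slot (before overhead)
```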

More recently, more sophisticated multiple access techniques such as SC-FDMA and
OFDMA have been developed in order to address the requirement of higher data rates
(wideband) as well as low intersymbol interference.

The remainder of this section discusses these multiple access techniques in more detail.

3.3 FDMA

FDMA is the process of dividing one channel or bandwidth into multiple individual bands,
each for use by a single user. Each individual band or channel is wide enough to
accommodate the signal spectra of the transmissions to be propagated. The data to be
transmitted is modulated on to each subcarrier, and all of them are linearly mixed together.

The best example of this is the cable television system. The medium is a single coax cable
that is used to broadcast hundreds of channels of video/audio programming to homes. The
coax cable has a useful bandwidth from about 4 MHz to 1 GHz. This bandwidth is divided up
into 6-MHz wide channels. Initially, one TV station or channel used a single 6-MHz band.
But with digital techniques, multiple TV channels may share a single band today thanks to
compression and multiplexing techniques used in each channel.

This technique is also used in fiber optic communications systems. A single fiber optic cable
has enormous bandwidth that can be subdivided to provide FDMA. Different data or
information sources are each assigned a different light frequency for transmission. Light
generally isn’t referred to by frequency but by its wavelength (λ). As a result, fiber optic
FDMA is called wavelength division multiple access (WDMA) or just wavelength division
multiplexing (WDM).
One of the older FDMA systems is the original analog telephone system, which used a
hierarchy of frequency multiplexing techniques to put multiple telephone calls on a single
line. The analog 300-Hz to 3400-Hz voice signals were used to modulate subcarriers in 12
channels from 60 kHz to 108 kHz. Modulator/mixers created single-sideband (SSB) signals,
using either the upper or the lower sideband. These subcarriers were then further
frequency multiplexed onto subcarriers in the 312-kHz to 552-kHz range using the same
modulation methods. At the receiving end of the system, the signals were sorted out and
recovered with filters and demodulators.

Original aerospace telemetry systems used an FDMA system to accommodate multiple
sensor data on a single radio channel. Early satellite systems shared individual 36-MHz
bandwidth transponders in the 4-GHz to 6-GHz range with multiple voice, video, or data
signals via FDMA. Today, all of these applications use TDMA digital techniques.

Frequency Division Multiple Access or FDMA is a channel access method used in multiple-
access protocols as a channelization protocol. FDMA gives users an individual allocation of
one or several frequency bands, or channels. It is particularly commonplace in satellite
communication.
Some FDMA characteristics are:
• In FDMA all users share the satellite transponder or frequency channel
simultaneously, but each user transmits at a single frequency.
• FDMA can be used with both analog and digital signals.
• FDMA requires high-performing filters in the radio hardware, in contrast to TDMA
and CDMA.
• FDMA is not vulnerable to the timing problems that TDMA has. Since a
predetermined frequency band is available for the entire period of communication,
stream data (a continuous flow of data that may not be packetized) can easily be used
with FDMA.
• Due to the frequency filtering, FDMA is not sensitive to the near-far problem, which
is pronounced in CDMA.
• Each user transmits and receives at a different frequency, as each user gets a unique
frequency slot.

FDMA is distinct from frequency division duplexing (FDD). While FDMA allows multiple
users simultaneous access to a transmission system, FDD refers to how the radio channel is
shared between the uplink and downlink (for instance, the traffic going back and forth
between a mobile-phone and a mobile phone base station). Frequency-division multiplexing
(FDM) is also distinct from FDMA. FDM is a physical layer technique that combines and
transmits low-bandwidth channels through a high-bandwidth channel. FDMA, on the other
hand, is an access method in the data link layer.

FDMA also supports demand assignment in addition to fixed assignment. Demand
assignment allows all users apparently continuous access to the radio spectrum by assigning
carrier frequencies on a temporary basis using a statistical assignment process. The first
FDMA demand-assignment system for satellite was developed by COMSAT for use on the
Intelsat series IVA and V satellites.

There are two main techniques:


Single-channel per-carrier (SCPC): uses a single signal at a given frequency and bandwidth.
An example of SCPC is broadcast satellites: when radio stations are not multiplexed as
subcarriers onto a single video carrier, but instead independently share a transponder.
Another example is SC-FDMA.

Multi-channel per-carrier (MCPC): several subcarriers are
combined or multiplexed into a single bitstream before being modulated onto a carrier
transmitted from a single location to one or more remote sites. This uses time-division
multiplexing (TDM) as well as frequency-division multiplexing. An example is OFDMA.

3.4 TDMA
Time division multiple access (TDMA) is a channel access method for shared medium
networks. It allows several users to share the same frequency channel by dividing the signal
into different time slots. The users transmit in rapid succession, one after the other, each
using its own time slot. This allows multiple stations to share the same transmission medium
(e.g. radio frequency channel) while using only a part of its channel capacity. TDMA is used
in the digital 2G cellular systems such as Global System for Mobile Communications (GSM),
IS-136, Personal Digital Cellular (PDC) and iDEN, and in the Digital Enhanced Cordless
Telecommunications (DECT) standard for portable phones. It is also used extensively in
satellite systems, combat-net radio systems, and PON networks for upstream traffic from
premises to the operator. For usage of Dynamic TDMA packet mode communication, see
below.

TDMA is a type of time-division multiplexing, with the special feature that instead of having
one transmitter connected to one receiver, there are multiple transmitters. In the case of the
uplink from a mobile phone to a base station this becomes particularly difficult because the
mobile phone can move around, varying the timing advance required to make its
transmission match the gap in transmission from its peers.

TDMA characteristics
• Shares single carrier frequency with multiple users
• Non-continuous transmission makes handoff simpler
• Slots can be assigned on demand in dynamic TDMA
• Less stringent power control than CDMA due to reduced intra cell interference
• Higher synchronization overhead than CDMA
• Advanced equalization may be necessary for high data rates if the channel is
"frequency selective" and creates Intersymbol interference
• Cell breathing (borrowing resources from adjacent cells) is more complicated than in
CDMA
• Frequency/slot allocation complexity
• Pulsating power envelope: Interference with other devices

TDMA in mobile phone systems


2G systems
Most 2G cellular systems, with the notable exception of IS-95, are based on TDMA. GSM,
D-AMPS, PDC, iDEN, and PHS are examples of TDMA cellular systems. GSM combines
TDMA with Frequency Hopping and wideband transmission to minimize common types of
interference.

In the GSM system, the synchronization of the mobile phones is achieved by sending timing
advance commands from the base station, which instruct the mobile phone to transmit earlier
and by how much. This compensates for the propagation delay resulting from the finite
propagation speed of radio waves. The mobile phone is not allowed to transmit
for its entire time slot; rather, there is a guard interval at the end of each time slot. As the
transmission moves into the guard period, the mobile network adjusts the timing advance to
synchronize the transmission.

Initial synchronization of a phone requires even more care. Before a mobile transmits there is
no way to actually know the offset required. For this reason, an entire time slot has to be
dedicated to mobiles attempting to contact the network; this is known as the random-access
channel (RACH) in GSM. The mobile attempts to broadcast at the beginning of the time slot,
as received from the network. If the mobile is located next to the base station, there will be no
time delay and this will succeed. If, however, the mobile phone is at just less than 35 km
from the base station, the time delay will mean the mobile's broadcast arrives at the very end
of the time slot. In that case, the mobile will be instructed to broadcast its messages starting
nearly a whole time slot earlier than would be expected otherwise. Finally, if the mobile is
beyond the 35 km cell range in GSM, then the RACH will arrive in a neighbouring time slot
and be ignored. It is this feature, rather than limitations of power, that limits the range of a
GSM cell to 35 km when no special extension techniques are used. By changing the
synchronization between the uplink and downlink at the base station, however, this limitation
can be overcome.
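The 35 km limit quoted above can be sanity-checked with a short calculation. The sketch below (in Python) uses the GSM bit period of 48/13 µs and the timing-advance range of 0 to 63 bit periods, both taken from the GSM specification:

```python
# Sanity check of the ~35 km GSM cell-range limit implied by the
# timing-advance mechanism. Constants are from the GSM specification:
# bit period = 48/13 us, timing advance range 0..63 bit periods.
C = 299_792_458.0        # speed of light, in m/s
BIT_PERIOD = 48e-6 / 13  # GSM bit period, ~3.69 us

def max_range_km(max_timing_advance_bits=63):
    """One-way range covered by the maximum timing advance.

    The advance compensates a round-trip delay, hence the factor 1/2.
    """
    max_delay = max_timing_advance_bits * BIT_PERIOD  # round trip, in s
    return (max_delay / 2) * C / 1000.0               # one way, in km

print(f"{max_range_km():.1f} km")  # prints "34.9 km", i.e. the 35 km limit
```

Extended-range cells work around this by shifting the uplink/downlink synchronization, as noted above, rather than by increasing the timing-advance field.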

3G systems
Although most major 3G systems are primarily based upon CDMA, time division duplexing
(TDD), packet scheduling (dynamic TDMA) and packet oriented multiple access schemes are
available in 3G form, combined with CDMA to take advantage of the benefits of both
technologies.

While the most popular form of the UMTS 3G system uses CDMA and frequency division
duplexing (FDD) instead of TDMA, TDMA is combined with CDMA and time division
duplexing in the standard UMTS UTRA TDD variants.

Comparison with other multiple-access schemes


In radio systems, TDMA is usually used alongside Frequency-division multiple access
(FDMA) and Frequency division duplex (FDD); the combination is referred to as
FDMA/TDMA/FDD. This is the case in both GSM and IS-136 for example. Exceptions to
this include the DECT and PHS micro-cellular systems, UMTS-TDD UMTS variant, and
China's TD-SCDMA, which use Time Division duplexing, where different time slots are
allocated for the base station and handsets on the same frequency.

A major advantage of TDMA is that the radio part of the mobile only needs to listen and
broadcast for its own time slot. For the rest of the time, the mobile can carry out
measurements on the network, detecting surrounding transmitters on different frequencies.
This allows safe inter-frequency handovers, something which is difficult in CDMA systems,
not supported at all in IS-95 and supported through complex system additions in Universal
Mobile Telecommunications System (UMTS). This in turn allows for co-existence of
microcell layers with macrocell layers.

CDMA, by comparison, supports "soft hand-off", which allows a mobile phone to be in
communication with up to 6 base stations simultaneously, a type of "same-frequency
handover". The incoming packets are compared for quality, and the best one is selected.
CDMA's "cell breathing" characteristic, where a terminal on the boundary of two congested
cells will be unable to receive a clear signal, can often negate this advantage during peak
periods.

A disadvantage of TDMA systems is that they create interference at a frequency which is
directly connected to the time slot length. This is the buzz which can sometimes be heard if a
TDMA phone is left next to a radio or speakers. Another disadvantage is that the "dead
time" between time slots limits the potential bandwidth of a TDMA channel. These are
implemented in part because of the difficulty in ensuring that different terminals transmit at
exactly the times required. Handsets that are moving will need to constantly adjust their
timings to ensure their transmission is received at precisely the right time, because as they
move further from the base station, their signal will take longer to arrive. This also means that
the major TDMA systems have hard limits on cell sizes in terms of range, though in practice
the power levels required to receive and transmit over distances greater than the supported
range would be mostly impractical anyway.

Dynamic TDMA
In dynamic time division multiple access, a scheduling algorithm dynamically reserves a
variable number of time slots in each frame to variable bit-rate data streams, based on the
traffic demand of each data stream. For example, Figure 13 shows the dynamic frame
structure proposed by the European Union's Magic WAND project (called MASCARA). In
that frame, we can appreciate four clearly defined intervals: (i) frame header (for
synchronization and broadcasting of this frame's uplink schedule), (ii) downlink period,
(iii) uplink period, and finally (iv) contention period for new sessions joining the network.
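The core idea of demand-based slot assignment can be illustrated with a toy scheduler (this is only an illustration, not the MASCARA algorithm; the function and stream names are made up): each frame, slots are granted in proportion to each stream's queued demand.

```python
# Toy dynamic-TDMA scheduler (illustration only, not MASCARA):
# each frame, slots are granted in proportion to the queued demand
# of each stream. Function and stream names are hypothetical.
def schedule_frame(demands, slots_per_frame):
    """Grant up to slots_per_frame slots, proportionally to demand."""
    total = sum(demands.values())
    if total == 0:
        return {stream: 0 for stream in demands}
    grants = {s: slots_per_frame * d // total for s, d in demands.items()}
    # Integer division may leave a few slots unassigned; hand them to
    # the streams with the largest backlogs.
    leftover = slots_per_frame - sum(grants.values())
    for s in sorted(demands, key=demands.get, reverse=True)[:leftover]:
        grants[s] += 1
    return grants

# Total demand (16 slots) exceeds the 8 slots in this frame:
print(schedule_frame({"voice": 2, "video": 8, "data": 6}, 8))
# prints {'voice': 1, 'video': 4, 'data': 3}
```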

Dynamic TDMA is used in

• Intelbras WISP+ Radios - iPoll(tm) is a TDMA-based protocol


• Ubiquiti airMAX Radios
• HIPERLAN/2 broadband radio access network.
• IEEE 802.16a WiMax
• Bluetooth
• The Packet radio multiple access (PRMA) method for combined circuit switched
voice communication and packet data.
• TD-SCDMA
• ITU-T G.hn



[Figure content: a variable-length time frame with four periods: the FH period (broadcast), the
downlink period (reservation-based traffic, from AP to MT), the uplink period (reservation-based
traffic, from MT to AP), and the contention period (contention-based traffic). The boundaries
between periods are variable, and a radio turn-around gap separates downlink from uplink.]

Figure 13. MASCARA: Dynamic TDMA frame proposed by the Magic WAND project.

3.5 CDMA
CDMA is an example of multiple access, which is where several transmitters can send
information simultaneously over a single communication channel. This allows several users
to share a band of frequencies (see bandwidth). To permit this without undue interference
between the users, CDMA employs spread-spectrum technology and a special coding scheme
(where each transmitter is assigned a code).

CDMA is used as the access method in many mobile phone standards such as cdmaOne,
CDMA2000 (the 3G evolution of cdmaOne), and WCDMA (the 3G standard used by GSM
carriers), which are often referred to as simply CDMA.

Uses:
• The Qualcomm standard IS-95, marketed as cdmaOne.
• The Qualcomm standard IS-2000, known as CDMA2000, is used by several mobile
phone companies, including the Globalstar satellite phone network.
• The UMTS 3G mobile phone standard, which uses W-CDMA.

CDMA is a spread-spectrum multiple access technique. A spread spectrum technique spreads
the bandwidth of the data uniformly for the same transmitted power. A spreading code is a
pseudo-random code that has a narrow ambiguity function, unlike other narrow pulse codes.
In CDMA a locally generated code runs at a much higher rate than the data to be transmitted.
Data for transmission is combined via bitwise XOR (exclusive OR) with the faster code. The
figure shows how a spread spectrum signal is generated. The data signal, with pulse duration
Tb (symbol period), is XOR'ed with the code signal, with pulse duration Tc (chip period).
The bandwidth of the data signal is 1/Tb and the bandwidth of the spread spectrum signal is
1/Tc. The ratio Tb/Tc is called the spreading factor or processing gain and determines to a
certain extent the upper limit of the total number of users supported simultaneously by a base
station.
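The spreading and despreading operations can be sketched in a few lines (the 8-chip code below is made up for illustration; it is not a real pseudo-random sequence):

```python
import numpy as np

# Minimal direct-sequence spreading sketch: each data bit is XOR'ed
# with an 8-chip code, giving a processing gain Tb/Tc = 8.
code = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # example spreading code (chips)
data = np.array([1, 0, 1])                 # data bits

# Spread: hold each bit for a whole symbol period, XOR with the code.
spread = np.bitwise_xor(np.repeat(data, code.size), np.tile(code, data.size))

# Despread: XOR with the same code, then decide once per symbol period.
chips = np.bitwise_xor(spread, np.tile(code, data.size))
recovered = (chips.reshape(data.size, code.size).mean(axis=1) > 0.5).astype(int)
print(recovered)  # prints [1 0 1], the original data bits
```

Note that the spread signal occupies 8 chips per bit, i.e. eight times the bandwidth, which is exactly the Tb/Tc ratio described above.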

3.6 OFDMA and SC-FDMA


Figure 14 shows the block diagram of OFDMA and SC-FDMA, two frequency-based
multiple access techniques, based on OFDM modulation, that are used in LTE cellular
networks. As was the case with OFDM, these techniques are used to deal with selective
fading and high data rates.

OFDMA is the simpler of these two techniques. It is a straightforward extension of OFDM,
and as such it shares its advantages and disadvantages. Most notably, it suffers from a high
Peak-to-Average Power Ratio (PAPR). While this may not be a problem for downlink
communications (as an LTE base station can handle sophisticated waveforms), a high PAPR
is an issue for small mobile devices (uplink communication). That is the motivation for the
development of SC-FDMA: to reduce the PAPR of OFDMA so that it can be used in the
uplink of modern cellular systems (LTE).

Figure 14. SC-FDMA (top) and OFDMA (bottom).

3.6.1 OFDMA
As can be seen in Figure 14 (bottom), an OFDMA signal is formed by assigning N out of M
(N <= M) subcarriers to a user. A single user maps its N symbols into the assigned
subcarriers and sets the remaining subcarriers to zero. It then applies the M-point IFFT to the
resulting block of M symbols. The resulting signal is then transmitted.

At the receiver end, the receiver gets the sum of all the users' signals and applies the FFT to
recover the symbols. It then applies the reverse mapping to obtain the message from each
user.

Note that since the subcarriers are orthogonal, they do not interfere with each other and there
is no need for coordination among users.
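This orthogonality can be checked with a small numerical sketch (the subcarrier counts and index assignments below are illustrative, not taken from any standard):

```python
import numpy as np

# OFDMA sketch: two users place N=4 QPSK symbols each on disjoint
# subsets of M=16 subcarriers, take the M-point IFFT and transmit.
# The receiver adds the two signals (as the channel does), applies
# the FFT and demaps each user's subcarriers.
M = 16
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
rng = np.random.default_rng(0)
sym_a, sym_b = rng.choice(qpsk, 4), rng.choice(qpsk, 4)
idx_a, idx_b = [0, 1, 2, 3], [8, 9, 10, 11]  # disjoint subcarrier sets

def ofdma_tx(symbols, indices):
    X = np.zeros(M, dtype=complex)
    X[indices] = symbols   # map symbols; unused subcarriers stay zero
    return np.fft.ifft(X)  # M-point IFFT gives the time-domain signal

rx = ofdma_tx(sym_a, idx_a) + ofdma_tx(sym_b, idx_b)  # signals add on air
Y = np.fft.fft(rx)                                    # back to subcarriers

# Orthogonality: each user's symbols come back free of the other's.
assert np.allclose(Y[idx_a], sym_a) and np.allclose(Y[idx_b], sym_b)
```

Over an ideal channel the two users do not interfere at all, even though their signals overlap in time; this is the orthogonality property stated above.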

OFDMA also suffers from high PAPR.


3.6.2 SC-FDMA
Single Carrier FDMA was developed to overcome the high PAPR of OFDMA. To this end,
SC-FDMA adds a preprocessing stage to OFDMA, as shown in Figure 14 (top). That stage is
an FFT transform, which results in a smoother signal being transmitted over the air, as will
be explained below. On the other hand, the transmitted signal no longer has each
subcarrier associated to one and only one symbol. Instead, the information of each symbol is
spread over the entire set of subcarriers, which acts as a single channel (therefore the name:
Single Carrier FDMA). As a consequence, fading affecting one single subcarrier may affect
all symbols, not just one. Thus, care should be taken to choose “clean” subcarriers for each
user.

Once the symbol block is transformed via the FFT, each of the M resulting values is mapped
to one subcarrier (out of N). There are different subcarrier mappings:

• Localized mapping: the FFT outputs (M values) are mapped to a subset of consecutive
subcarriers, thereby confining them to only a fraction of the system bandwidth.
• Distributed mapping: the FFT outputs are assigned to subcarriers spread over the entire
bandwidth non-contiguously, with the remaining subcarriers set to zero amplitude.
• Interleaved mapping: a special case of distributed mapping, called interleaved
SC-FDMA (IFDMA), where the occupied subcarriers are equally spaced over the entire
bandwidth.

Figure 15 shows an example of distributed (left) and localized (right) subcarrier mappings for
a network with 3 users. Note that the distributed mapping shown in Figure 15 (left) is also an
interleaved mapping, since all subcarriers associated with the same user’s data are equally
spaced.

Figure 15. SC-FDMA subcarrier mappings.



The key to understanding the lower PAPR values of SC-FDMA is to remember that the input
signal to the SC-FDMA system is a well-behaved signal, with reasonable PAPR. Since the
signal passes through an FFT and an IFFT transform, and given the duality of the FFT and
IFFT, it is expected that the resulting signal (in time) is similar to the input signal.

Figure 16 shows the transmitted signal in frequency for both the distributed/interleaved and
the localized subcarrier mapping. Figure 17 shows the same signal in the time domain. For
example, for the interleaved mapping, the signal in the frequency domain is the same as the
original one after an expansion in frequency (Figure 16 top). Since an expansion in
frequency corresponds to a contraction in time, the resulting time signal is a time-contracted
copy of the original signal, repeated several times due to the periodicity of the FFT (see
Figure 17 top). Thus, since {xn} is well behaved, so is the time output to the channel {xm}.

Now, if localized mapping is used, the signal in the frequency domain is the same as the
original one after a contraction in frequency (Figure 16 bottom). Since a contraction in
frequency corresponds to an expansion in time, the highlighted values of Figure 17 (bottom)
are a time-expanded version of the original signal. The other values correspond to aliasing
effects. The resulting time signal is also well behaved.
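The PAPR behavior described above can be verified numerically. In this sketch (sizes M=4, N=16 are illustrative only), the interleaved mapping yields a time signal that is a scaled, repeated copy of the constant-modulus QPSK input, so its PAPR is essentially 0 dB, while the localized mapping gives a less flat, time-expanded signal:

```python
import numpy as np

# SC-FDMA sketch: an M-point FFT precodes the user's QPSK symbols
# before subcarrier mapping and the N-point IFFT. With interleaved
# mapping the time output repeats the constant-modulus input, so its
# peak power equals its average power (PAPR ~0 dB).
M, N = 4, 16
symbols = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
precoded = np.fft.fft(symbols)  # M-point FFT precoding stage

def map_and_ifft(indices):
    X = np.zeros(N, dtype=complex)
    X[indices] = precoded
    return np.fft.ifft(X)  # N-point IFFT, as in plain OFDMA

localized = map_and_ifft(np.arange(M))               # consecutive subcarriers
interleaved = map_and_ifft(np.arange(0, N, N // M))  # equally spaced (IFDMA)

def papr_db(x):
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

print(f"interleaved PAPR: {papr_db(interleaved):.2f} dB")  # ~0 dB
print(f"localized   PAPR: {papr_db(localized):.2f} dB")    # nonzero
```

Both mappings still have far lower PAPR than plain OFDMA with many independently modulated subcarriers, which is the point of the FFT precoding stage.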

Figure 16. SC-FDMA signal in frequency.



Figure 17. SC-FDMA signal in time.
