
An Overview of Quadrature Amplitude Modulation, Carrierless Amplitude and Phase, Discrete Multi-tone and Coding Efficiency

By Dr. Hadi HMIDA
Senior Research Scientist
R&D, STC

STC - 2002
Signal Processing & Communications By Dr. Hadi HMIDA


1. Quadrature Amplitude Modulation (QAM)

Consider two baseband messages m1(t) and m2(t) to be transmitted; the corresponding QAM signal is defined by (1). Figure (1) shows the QAM modulator scheme:

(1)  S(t) = m1(t) cos(2πfct) + m2(t) sin(2πfct)

Figure (1): QAM modulator. m1(t) multiplies the carrier cos(2πfct), m2(t) multiplies the π/2-shifted carrier sin(2πfct), and the two products are summed (Σ) to form S(t).
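Equation (1) and Figure (1) translate directly into a few lines of code. A minimal sketch; the carrier frequency, sampling rate and constant messages below are illustrative assumptions, not values from the text:

```python
import numpy as np

def qam_modulate(m1, m2, fc, fs):
    """Eq. (1): S(t) = m1(t) cos(2*pi*fc*t) + m2(t) sin(2*pi*fc*t)."""
    t = np.arange(len(m1)) / fs
    return m1 * np.cos(2 * np.pi * fc * t) + m2 * np.sin(2 * np.pi * fc * t)

# Example: two constant messages on a 10 kHz carrier sampled at 100 kHz
fs, fc = 100e3, 10e3
m1 = np.ones(1000)
m2 = 0.5 * np.ones(1000)
s = qam_modulate(m1, m2, fc, fs)
# Over whole carrier cycles the mean power of s is (m1^2 + m2^2)/2 = 0.625
```

The two messages stay separable at the receiver because cos and sin carriers at the same frequency are orthogonal over each carrier period.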

M-ary Modulation
For digital signals we have M-ary QAM, where M is the number of symbols representing the data to be transmitted. For M = 2^m, for example, we have:

m1(t) = ai p(t)
m2(t) = bi p(t)

where p(t) is a baseband pulse.

R&D, STC 2/3/2002



pi(t) = ai p(t) cos(2πfct) + bi p(t) sin(2πfct)

(2)     = ri p(t) cos(2πfct − Φi)

where ri = √(ai² + bi²) and Φi = tan⁻¹(bi/ai), for i = 0, 1, 2, …, M−1.

Thus, every pulse pi(t) represents one of the 2^m symbols, each coded over m bits. For M = 16 (m = 4), the amplitude levels ai and bi may take the values ±A and ±3A, giving the constellation of Figure (2).

Figure (2): 16-QAM constellation. The 16 points lie at (ai, bi) with ai, bi ∈ {−3A, −A, +A, +3A}. X, Y, Z and T mark the four points of the first quadrant (used in the example below); ri and Φi are the amplitude and phase of a symbol, and ei the error vector between a received point and the nearest constellation point.

To optimize the bit error probability for a given symbol error probability, the assignment of bit patterns to symbols in the constellation space should be intelligent. The Gray code is used to ensure that neighboring symbols (those most likely to be detected in error) differ by only one bit, as depicted below.

Figure: Gray code assignment (fragment shown: the neighboring symbols 1110, 1111, 1100, 1101 differ from one another by a single bit).
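The binary-reflected Gray code used for this mapping can be generated with a single XOR. A short sketch verifying the one-bit-difference property for the 16 symbols of 16-QAM:

```python
def gray(n):
    """Binary-reflected Gray code of n: consecutive integers map to
    codewords that differ in exactly one bit."""
    return n ^ (n >> 1)

# The 16 symbols (m = 4 bits each); consecutive codes differ in one bit
codes = [gray(i) for i in range(16)]
for a, b in zip(codes, codes[1:]):
    assert bin(a ^ b).count("1") == 1
```

Mapping these codes along rows and columns of the constellation (with every other row reversed) makes all horizontal and vertical neighbors differ by one bit.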


Gray mapping is an example of modulation coding. Now consider M-ary PSK: the constellation diagram is restricted to a circle. The symbols all have the same amplitude, and only the phase changes from one symbol to another, as depicted below for 16-PSK.

Figure (3): 16-PSK constellation (16 symbols equally spaced on a circle of constant amplitude).

Comparing the constellation of 16-QAM with 16-PSK, we can see that the spacing between symbol states for 16-QAM is greater than that for 16-PSK (assuming the constellations are drawn to scale for equal average symbol power): A ≈ 1.47B.
The larger spacing between symbols means that the detection process is less susceptible to noise. On the other hand, the peak power of QAM is greater than that of PSK, which must be taken into account if the transmission process is peak-power limited.
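The spacing claim can be checked numerically. Normalizing both constellations to equal average symbol power, the minimum 16-QAM distance comes out roughly 1.6 times the 16-PSK distance (the factor 1.47 quoted above depends on the normalization convention assumed by the text):

```python
import numpy as np

# 16-QAM constellation: a, b in {-3A, -A, +A, +3A}
levels = np.array([-3.0, -1.0, 1.0, 3.0])
qam = np.array([complex(a, b) for a in levels for b in levels])
qam /= np.sqrt(np.mean(np.abs(qam) ** 2))   # normalize to unit average power

# 16-PSK constellation on the unit circle (already unit average power)
psk = np.exp(2j * np.pi * np.arange(16) / 16)

def min_distance(points):
    """Smallest distance between any two distinct constellation points."""
    d = np.abs(points[:, None] - points[None, :])
    return d[d > 0].min()

ratio = min_distance(qam) / min_distance(psk)   # about 1.62 at equal average power
```

The squared-distance advantage (ratio²) is what translates into a noise-margin gain in dB.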

Example:
If the maximum vector length in a square 16-QAM constellation is 100 mV, determine the long-term average power delivered into a 50 Ω antenna load if each point in the constellation has an equal probability of transmission.

Answer:
Referring to one quadrant of the 16-QAM constellation in Figure (2), the squared amplitudes of its four symbols are:


X² = T² = (3A)² + A² = 10A²
Y² = (3A)² + (3A)² = 18A²
Z² = A² + A² = 2A²

Average power = (18A² + 2 × 10A² + 2A²) / (4R) = 10A² / R

The maximum vector length is Y = √(18A²) = 100 mV, therefore A = 100 mV / √18 = 23.6 mV.

The average power over all the symbol states is 10A²/R = 10 × (23.6 mV)² / 50 Ω ≈ 111 µW.
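The arithmetic of this example can be verified numerically (note the result is in the microwatt range for millivolt amplitudes):

```python
import numpy as np

A = 0.1 / np.sqrt(18)   # max vector length Y = sqrt(18)*A = 100 mV  ->  A ≈ 23.6 mV
R = 50.0                # antenna load, ohms

# All 16 symbol amplitudes (a, b) with a, b in {-3A, -A, +A, +3A};
# each symbol's power is taken as (a^2 + b^2)/R, as in the worked example.
levels = [-3 * A, -A, A, 3 * A]
powers = [(a**2 + b**2) / R for a in levels for b in levels]
avg_power = sum(powers) / 16    # equiprobable symbols -> 10*A^2/R, about 111 microwatts
```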

2. Carrierless Amplitude and Phase (CAP)

Carrierless AM/PM (CAP) is a bandwidth-efficient, two-dimensional passband transmission scheme (see Figure (4)) closely related to the more familiar quadrature amplitude modulation (QAM). Let Dn denote the data to be sent. The Dn are fed to an encoder that delivers complex symbols An = an + jbn: the encoder maps each data block Dn of m bits into one of k = 2^m different complex symbols An. A CAP line code using k different complex symbols is called k-CAP (e.g. 64-CAP for k = 64, 128-CAP for k = 128). The constellation of the An can be represented as for QAM; see Figure (5). After the encoder, the symbols an and bn are fed to digital shaping filters. The filter outputs are then subtracted and converted to analog by a D/A converter followed by a shaping low-pass filter (LPF):

Figure (4): CAP modulator. The an stream (at symbol period T1, sampled at T2) drives the in-phase filter h(t) = g(t)cos(2πfct) and the bn stream drives the quadrature filter h̃(t) = g(t)sin(2πfct); the filter outputs are combined (Σ) and passed through a D/A converter and an LPF to give S(t). g(t) is a baseband pulse.


(3)  S(t) = Σn=−∞..+∞ { an h(t − nT2) − bn h̃(t − nT2) }

S(t) defined by (3) is the expression of CAP, where h(t) and h̃(t) are Hilbert transforms of each other (their Fourier transforms have the same amplitude characteristic and phase characteristics that differ by π/2), T1 is the symbol period, T2 is the sampling period, and an, bn are discrete multilevel symbols.
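The shaping-filter structure of Figure (4) and Eq. (3) can be sketched numerically. The baseband pulse g(t), the center frequency and the filter span below are illustrative assumptions, not values from the text:

```python
import numpy as np

fs = 16.0          # samples per symbol period T2 (assumed)
fc = 0.25 * fs     # passband center frequency, in cycles per symbol (assumed)
t = np.arange(-2 * fs, 2 * fs + 1) / fs   # filter support: +/- 2 symbol periods

g = np.sinc(t)                            # assumed baseband pulse g(t) (Nyquist)
h_i = g * np.cos(2 * np.pi * fc * t)      # in-phase filter   h(t)  = g(t) cos(2*pi*fc*t)
h_q = g * np.sin(2 * np.pi * fc * t)      # quadrature filter h~(t) = g(t) sin(2*pi*fc*t)

def cap_modulate(a, b):
    """Eq. (3): S(t) = sum_n { a_n h(t - nT2) - b_n h~(t - nT2) }."""
    up_a = np.zeros(len(a) * int(fs)); up_a[:: int(fs)] = a   # impulses at symbol instants
    up_b = np.zeros(len(b) * int(fs)); up_b[:: int(fs)] = b
    return np.convolve(up_a, h_i) - np.convolve(up_b, h_q)

s = cap_modulate(np.array([1, -1, 1, 1]), np.array([1, 1, -1, 1]))
```

No explicit carrier oscillator appears anywhere: the passband shift is baked into the two filters, which is why the scheme is called carrierless.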

Figure (5): Example of a 4-CAP constellation: the four points (an, bn) with an, bn ∈ {−1, +1}.

The CAP receiver may be deduced directly from the previous scheme; see Figure (6).

Figure (6): CAP receiver. The input is converted by an A/D (at period T2) and fed to two adaptive filters (I and II); their outputs go to a decision device followed by a decoder (at period T1), which delivers the data out (â, b̂).

The received analog data (after filtering by a low-pass filter, LPF) is converted to digital at a sampling period T2 = T1/n.


We suppose that filter (I) is an identity filter {FI[s(t)] = s(t)} and filter (II) is a Hilbert filter {FII[s(t)] = −s̃(t)}. On the other hand, due to transmission, the signal S(t) in (3) becomes slightly different, and the outputs of filters (I) and (II) are:

(4)  SI(t)  = Σn=−∞..+∞ { an p(t − nT2) − bn p̃(t − nT2) }
     SII(t) = Σn=−∞..+∞ { bn p(t − nT2) + an p̃(t − nT2) }

We assume that p(t) satisfies the Nyquist criterion, i.e. p(kT) = δ(k) (Dirac distribution), and that its Hilbert transform p̃(kT) ≡ 0.

In this case, if we sample the signals in (4) at t = nT2, we get ân and b̂n at the output of the decision device, as depicted in Figure (6). The original data an, bn are reconstituted at the decoder output.

3. Discrete Multi Tone (DMT)

DMT modulation divides the channel into a number of sub-channels, referred to as tones, each of which has its own central frequency fi. It is a form of FDM; see Figures (7) and (8).

Figure (7): Multi-tone (multi-carrier) modulation. The input data is encoded into symbols x0, x1, …, xn; each xn modulates its own carrier vector pn, and the partial results are summed (Σ) to give Xk on the line.


From the encoded data, i.e. the complex symbols An = an + jbn defined previously, we build the real sequence {xn} composed of the real and imaginary parts

of each complex symbol. If we have N/2 symbols, then the sequence {xn} has length N.

The symbol xn is now modulated by an N-dimensional sampled sinusoidal modulating vector pn, and the partial results are added: Xk = Σ xn pn, where the modulating vector pn = [pn0, pn1, …, pn,N−1] with

pnk = e^(+2jπnk/N)  (up to a 1/N normalization factor),  k, n = 0, 1, …, N−1

We conclude that Xk (DMT) may be computed by an inverse Discrete Fourier Transform (IDFT), and we have:

(5)  Xk = Σn=0..N−1 xn e^(+2jπnk/N) = Σn=0..N−1 xn WN^(+nk)

where WN = exp(2jπ/N). This is processed by the Inverse Fast Fourier Transform (IFFT) algorithm of length N = 8, 16, 32, 64, 128, 512… We can summarize DMT by the diagram below.

Figure (8): a) DMT diagram: the input data feeds a serial-to-parallel converter (N/2 QAM symbols), an IFFT producing x0 … xN−1, a parallel-to-serial converter and a D/A to the line. b) DMT spectrum model (FDM): sub-channels centered at f0, f1, …, fn.
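Equation (5) can be checked against a library IDFT. The length N = 8 and the four QPSK-like symbols below are illustrative; in a real DMT transmitter the N/2 symbols are placed with Hermitian symmetry so the IFFT output is real, a detail omitted in this sketch:

```python
import numpy as np

N = 8                                              # IFFT length (N/2 symbols)
A = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j])   # N/2 encoded symbols A_n

# Build the length-N real sequence {x_n} from real and imaginary parts
x = np.empty(N)
x[0::2], x[1::2] = A.real, A.imag

# Eq. (5): X_k = sum_n x_n e^{+2j*pi*nk/N}  -- an inverse DFT (up to scaling)
X = np.fft.ifft(x) * N          # numpy's ifft includes a 1/N factor; undo it

# The same result by direct summation, confirming Eq. (5)
n = np.arange(N)
X_direct = np.array([np.sum(x * np.exp(2j * np.pi * n * k / N)) for k in range(N)])
assert np.allclose(X, X_direct)
```

Computing (5) via the IFFT reduces the cost from O(N²) multiplications to O(N log N), which is what makes large-N DMT practical.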


The bandwidth of each sub-channel is B = 4.3125 kHz, and the carriers are fn = n × 4.3125 kHz.

Remark:
The functions pn (carriers) form an orthogonal basis. Many other families of orthogonal functions form such a basis, e.g. Haar, Hadamard, cosine and wavelet functions, for which fast algorithms exist. These orthogonal transformations, also called unitary transformations, are widely used in speech coding and image compression. Haar wavelet functions, for instance, are shifted and scaled rectangular pulses.

4. Line coding (Modulation) 2B1Q

In many communication applications, such as xDSL, ISDN, and the physical layer in X.25 and Frame Relay equipment, a four-level modulation is used to represent two simultaneous bits of information per clock period (hence 2B1Q: two bits (2B) mapped to one quaternary symbol (1Q)).

Binary data   Quat
00            −3
01            −1
10            +3
11            +1

Figure (9): 2B1Q line coding rule. Example waveform for the bit stream 11 00 01 10 01 11, i.e. the level sequence +1, −3, −1, +3, −1, +1.
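The mapping rule above is a direct table lookup; a minimal sketch:

```python
# 2B1Q mapping: two bits -> one quaternary level, per the table above
QUAT = {"00": -3, "01": -1, "10": +3, "11": +1}

def encode_2b1q(bits):
    """Encode a bit string (even length) into 2B1Q levels, two bits per symbol."""
    assert len(bits) % 2 == 0
    return [QUAT[bits[i:i + 2]] for i in range(0, len(bits), 2)]

levels = encode_2b1q("110001100111")   # the example stream of Figure (9)
# -> [+1, -3, -1, +3, -1, +1]
```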


5. Forward Error Correction (FEC)

Coding techniques may be divided into three types:

Source coding: the data (source) is altered or coded to make it best suited for transmission (e.g. compression).
Modulation coding: see the Gray coding used in QAM.
Channel coding: extra bits (redundancy bits) are added to the source data in order to provide a means of detecting and/or correcting transmission errors.

FEC is a form of channel coding. There are two main types:

Convolutional coding: the data is processed as a serial bit stream. A frequent representation of convolutional encoding is the trellis diagram; the Viterbi algorithm (decoder) is very effective at performing the path search through the trellis.

Block coding: the data is processed in blocks of length k. That is, to binary data coded over k bits we add m parity-check (redundancy) bits to form a code word of length n = k + m. The code is called an (n,k) code with code rate R = k/n.

Figure (10): Encoding: k data bits enter the encoder, n = k + m coded bits leave it.

The information transfer rate is reduced by a factor k/n. The factor 1 − R is called the redundancy of the code.
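As a concrete instance of an (n,k) block code, here is a minimal sketch of the classical (7,4) Hamming code (k = 4, m = 3, n = 7, R = 4/7, single-error correcting). The systematic G and H matrices below are one standard choice, not taken from the text:

```python
import numpy as np

# Generator and parity-check matrices over GF(2), systematic form G = [I | P]
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data):
    return data @ G % 2            # codeword = data * G (mod 2)

def syndrome(word):
    return H @ word % 2            # nonzero iff an error is detected

cw = encode(np.array([1, 0, 1, 1]))
assert not syndrome(cw).any()      # a valid codeword has zero syndrome

cw[2] ^= 1                         # flip one bit: a single channel error
s = syndrome(cw)
# the syndrome equals column 2 of H, locating (hence correcting) the error
assert (s == H[:, 2]).all()
```

Each nonzero syndrome matches exactly one column of H, which is why t = 1 error per word can always be corrected.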


BER performance
We compare the error probability of coded and uncoded schemes under similar constraints of power and information rate. The improvement in Eb/No performance of the coded vs. the uncoded system, at a specified BER, is termed the coding gain.

Consider a t-error-correcting (n,k) code: k information digits are coded into n digits. We assume that k information digits are transmitted in the same time interval over both systems and that the transmitted power Si is also the same for both, i.e. the coded and uncoded systems have equal energy per information bit. A transmitted bit is distinct from an information bit: when we transmit n bits they contain k information bits, so

kEbI = nEbT, from which EbI = (n/k) EbT and EbI > EbT.

Let Peu and Pec represent the digit error probabilities in the uncoded and coded cases, respectively.
For the uncoded case, a word of k digits is received in error if any one of its k digits is in error. If PEu and PEc represent the word error probabilities of the uncoded and coded systems, respectively, then:

(6)  PEu ≈ k Peu
     PEc ≈ C(n, t+1) (Pec)^(t+1)

where C(n, t+1) is the binomial coefficient. For QAM:

     Peu = 3Q{√(4Eb/5N)}, so we have

(7)  PEu = 3k Q{√(4Eb/5N)}
     PEc ≈ C(n, t+1) ( 3Q{√(4kEb/5nN)} )^(t+1),  Pec ≪ 1


For PSK:

     Peu = Q{√(2Eb/N)}

(8)  PEu = k Q{√(2Eb/N)}
     PEc ≈ C(n, t+1) ( Q{√(2kEb/nN)} )^(t+1),  Pec ≪ 1

Q(x) is related to the error functions by Q(x) = ½ erfc(x/√2) = ½[1 − erf(x/√2)].

The comparison of binary modulation techniques is based on relative BER


performance.

Application

Compare the performance of an AWGN BSC using a single-error-correcting (15,11) code with that of the same system using uncoded transmission, given that Eb/N = 9.12 for the uncoded scheme and that coherent PSK is used to transmit the data.

Answer:
Eb/N = 9.12 ≈ 9.59 dB

PEu = 11 Q(√18.24) = 1.1 × 10⁻⁴

PEc = C(15, 2) ( Q{√(11 × 18.24 / 15)} )² = 105 × (1.37 × 10⁻⁴)² ≈ 1.96 × 10⁻⁶

Note that the word error probability of the coded system is reduced by a factor PEu/PEc ≈ 56. On the other hand, if we wish to achieve the error probability of the coded transmission (1.96 × 10⁻⁶) with the uncoded system, we must increase the transmitted power. If E'b is the new value of Eb needed to achieve PEu = 1.96 × 10⁻⁶, then

PEu = 11 Q{√(2E'b/N)} = 1.96 × 10⁻⁶


This gives E'b/N = 12.95 ≈ 11.12 dB, an increase over the old value of 9.12 by a factor of 1.42, i.e. about 1.5 dB.
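The numbers of this application can be reproduced with Q(x) = ½ erfc(x/√2); small differences from the quoted figures come from the rounding of intermediate Q values in the text:

```python
from math import erfc, sqrt, comb

def Q(x):
    """Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

EbN = 9.12
n, k, t = 15, 11, 1

P_Eu = k * Q(sqrt(2 * EbN))                # uncoded word error, Eq. (8)
P_ec = Q(sqrt(2 * (k / n) * EbN))          # coded digit error (Eb reduced by k/n)
P_Ec = comb(n, t + 1) * P_ec ** (t + 1)    # coded word error, Eq. (6)

# P_Eu ≈ 1.1e-4 and P_Ec ≈ 2e-6: roughly the factor-56 reduction quoted above
```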

[Plot: Pe (log scale, 10⁻¹ down to 10⁻⁷) versus Eb/N in dB (2 to 12) for the coded (PEc) and uncoded (PEu) PSK systems. At Eb/N = 9.59 dB the coded curve reaches the target error probability; the uncoded curve needs about 1.53 dB more Eb/N to reach it.]

For a power-to-noise ratio of 9.12, PEu > PEc: the probability of error is lower when the data is coded than when it is uncoded. In this example the coding introduces a gain of about 1.53 dB. This result depends on the code used; the code efficiency is related to the gain, the code rate, and the code's ability to detect and correct errors.

6. Channel Capacity
The Shannon-Hartley capacity limit for error-free communications is given by:

(9)  C = W log2[1 + S/N]  b/s

The power and bandwidth efficiency


For a system transmitting at maximum capacity C, the average signal power S, measured at the receiver input, can be written as S = Eb C, where Eb is the average received energy per bit. The average noise power N can likewise be written as N = No W, where No is the noise power spectral density (watts/Hz). The Shannon-Hartley theorem can then be written in the form

(10)  C/W = log2[1 + EbC/(NoW)]  b/s/Hz

The ratio C/W represents the bandwidth efficiency of the system. A large ratio implies high bandwidth efficiency; a small ratio means that the bit energy is not sufficient to detect the bits successfully in the presence of the given amount of noise.
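A minimal sketch of Eq. (9); the sub-channel width and SNR below are illustrative values (the 4.3125 kHz width echoes the DMT sub-channel of Section 3):

```python
from math import log2

def capacity(W, snr):
    """Shannon-Hartley limit, Eq. (9): C = W * log2(1 + S/N), in b/s."""
    return W * log2(1 + snr)

# Example: one 4.3125 kHz DMT sub-channel at 30 dB SNR
W = 4312.5
snr = 10 ** (30 / 10)
C = capacity(W, snr)   # about 43 kb/s, i.e. roughly 10 b/s/Hz bandwidth efficiency
```

This per-tone capacity is what DMT bit-loading exploits: each tone carries as many bits as its own SNR allows.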

There are other equations that concern the channel capacity of twisted pairs in the presence of NEXT interference and Gaussian noise:

(11)  C = sup over Ps(f) of  ∫0..∞ log2[ 1 + |Hv(f)|² Ps(f) / ( |Hx(f)|² Ps(f) + No/2 ) ] df  b/s

Hv(f): transfer function of the twisted pair (TP)
Hx(f): NEXT transfer function
Ps(f): power spectral density of the transmitted signal
No: noise power spectral density

You might also like