STC-2002
Signal Processing & Communications By Dr. Hadi HMIDA
[Figure: QAM modulator: m1(t) multiplies the in-phase carrier cos(2πfct); m2(t) multiplies the same carrier shifted by π/2; the two branches are summed (Σ) to give S(t).]
M-ary Modulation
For digital signals we have M-ary QAM. M is the number of symbols representing the data to be transmitted. For M = 2^m, for example, we have:

m1(t) = a_i p(t)
m2(t) = b_i p(t)

where p(t) is a baseband pulse. Thus, every pulse p_i(t) represents one of the 2^m symbols, each coded over m bits. For M = 16 (m = 4), the amplitude levels a_i, b_i may take the values ±A, ±3A, as depicted in the constellation of figure (2).
[Figure (2), 16-QAM constellation: 16 points with coordinates a_i ∈ {−3A, −A, +A, +3A} on the horizontal axis and b_i ∈ {−3A, −A, +A, +3A} on the vertical axis; a received point r_i, at phase Φ_i, deviates from the transmitted point by an error vector e_i.]
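The 16 constellation points and their mean symbol energy can be enumerated in a few lines (a sketch with an illustrative unit amplitude A; the 10A² mean-energy figure reappears in the average-power example):

```python
# 16-QAM: each symbol carries m = 4 bits, coordinates drawn from {±A, ±3A}.
A = 1.0  # illustrative unit amplitude (assumption)
levels = [-3 * A, -A, A, 3 * A]
points = [(a, b) for a in levels for b in levels]

# Mean symbol energy E[a² + b²] over the 16 equiprobable points.
avg_energy = sum(a * a + b * b for a, b in points) / len(points)
print(len(points), avg_energy)  # 16 10.0
```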
To optimize the bit error probability for a given symbol error rate, the assignment of bit patterns to symbols in the constellation space should be intelligent. The Gray code is used to ensure that neighboring symbols (those most likely to be detected in error) differ by only one bit, as depicted below.
[Figure (3), 16-PSK constellation: 16 points equally spaced on a circle, Gray-labelled so that adjacent points such as 1110 and 1111 differ in a single bit.]
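The one-bit-difference property of the Gray labelling can be checked with the standard binary-reflected construction g = i ^ (i >> 1) (a sketch; the lecture's exact 16-PSK labelling may order the codes differently):

```python
# Binary-reflected Gray code for M = 16 symbols (4 bits each).
M = 16
gray = [i ^ (i >> 1) for i in range(M)]

# Adjacent symbols on the PSK circle (including the wrap-around pair)
# must differ in exactly one bit.
for i in range(M):
    diff = gray[i] ^ gray[(i + 1) % M]
    assert bin(diff).count("1") == 1

print([format(g, "04b") for g in gray[:4]])  # ['0000', '0001', '0011', '0010']
```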
Example:
If the maximum vector length in a square 16-QAM constellation is 100 mV, determine the long-term average power delivered into a 50 Ω antenna load if each point in the constellation has an equal probability of transmission.
Answer:
We refer to one quadrant of the 16-QAM constellation in figure (2); the average power developed by its 4 symbols is:

Average power = (18A² + 2×10A² + 2A²) / (4R) = 10A² / R

The maximum vector length Y = 100 mV corresponds to Y² = 18A², therefore:

A = (100 mV) / √18 = 23.6 mV

The average power over all the symbol states is:

10A² / R = 10 × (23.6 mV)² / 50 Ω ≈ 111 µW
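The arithmetic of this answer is easy to verify numerically (same constellation and 50 Ω load as in the example):

```python
import math

R = 50.0               # antenna load, ohms
Y = 100e-3             # maximum vector length, volts (corner point, Y² = 18A²)
A = Y / math.sqrt(18)  # base amplitude, ≈ 23.6 mV
p_avg = 10 * A**2 / R  # average power over the 16 equiprobable symbols
print(A * 1e3, p_avg * 1e6)  # ≈ 23.6 (mV), ≈ 111.1 (µW)
```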
[Figure: CAP transmitter: the encoder maps the input data D_n into symbol pairs (a_n, b_n) at the symbol rate 1/T1; a_n feeds the in-phase filter h(t) = g(t)cos(2πfc t) and b_n feeds the quadrature filter h̃(t) = g(t)sin(2πfc t); the two branches are summed (Σ), converted by the D/A at the rate 1/T2 and low-pass filtered (LPF) to give S(t).]
(3)  S(t) = Σ_{n=−∞}^{+∞} { a_n h(t − nT2) − b_n h̃(t − nT2) }
S(t) defined by (3) is the expression of CAP, where h(t) and h̃(t) are Hilbert transforms of each other (their Fourier transforms have the same amplitude characteristic and phase characteristics that differ by π/2), T1 is the symbol period, T2 is the sampling period, and a_n and b_n are discrete multilevel symbols.
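The reason the in-phase and quadrature branches can share one channel is the orthogonality of the cosine and sine carriers; a one-symbol numerical sketch (illustrative a, b values and carrier settings, not from the lecture):

```python
import math

# One symbol interval sampled at N points, with an integer number of
# carrier cycles (m) so the cross-terms average out exactly.
N, m = 100, 10
a, b = 3.0, -1.0          # illustrative in-phase / quadrature amplitudes
w = 2 * math.pi * m / N
s = [a * math.cos(w * k) - b * math.sin(w * k) for k in range(N)]

# Coherent projection onto each carrier recovers the two amplitudes.
a_hat = 2 / N * sum(s[k] * math.cos(w * k) for k in range(N))
b_hat = -2 / N * sum(s[k] * math.sin(w * k) for k in range(N))
print(round(a_hat, 6), round(b_hat, 6))  # 3.0 -1.0
```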
[Figure: four-point CAP constellation: a_n, b_n ∈ {−1, +1}.]
The CAP receiver may be deduced directly from the previous scheme; see Figure (6).
[Figure (6), CAP receiver: the input is digitized by an A/D converter at the sampling period T2, then split between adaptive filter I (yielding â_n) and adaptive filter II (yielding b̂_n); a decision device operating at the symbol period T1 feeds the decoder, which delivers the data out.]
The received analog signal (after filtering by a low-pass filter, LPF) is converted to digital at a sampling period T2 = T1/n.
We suppose that filter I is the identity filter {F_I[s(t)] = s(t)} and filter II is a Hilbert filter {F_II[s(t)] = −s̃(t)}. On the other hand, due to transmission, the signal S(t) in (3) becomes slightly different, and the outputs of filters I and II are:
(4)  S_I(t)  = Σ_{n=−∞}^{+∞} { a_n p(t − nT2) − b_n p̃(t − nT2) }
     S_II(t) = Σ_{n=−∞}^{+∞} { b_n p(t − nT2) + a_n p̃(t − nT2) }
We assume that p(t) satisfies the Nyquist criterion, i.e. p(kT) = δ(kT) (Dirac distribution), and that its Hilbert transform satisfies p̃(kT) ≡ 0.
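The Nyquist condition can be checked for the classic example p(t) = sinc(t/T), which is one at t = 0 and zero at every other sampling instant (a sketch; the lecture does not fix a particular pulse shape):

```python
import math

def p(t, T=1.0):
    """Normalized sinc pulse: sin(pi t/T) / (pi t/T)."""
    if t == 0:
        return 1.0
    x = math.pi * t / T
    return math.sin(x) / x

# Nyquist criterion: p(0) = 1 and p(kT) = 0 for every other integer k.
samples = [p(k * 1.0) for k in range(-5, 6)]
print([round(v, 12) for v in samples])  # 1.0 at k = 0, ~0 elsewhere
```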
[Figure: multicarrier transmitter: the coder splits the input data into parallel symbols x_0, x_1, …; each x_n multiplies its carrier p_n and the products are summed (Σ) to give the samples X_k sent to the line.]
p_n = (1/N) e^{+2jπnk/N},  k, n = 0, 1, …, N−1

(5)  X_k = Σ_{n=0}^{N−1} x_n e^{+2jπnk/N} = Σ_{n=0}^{N−1} x_n W_N^{+nk}
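Equation (5) is an inverse DFT; the orthogonality of the exponential carriers lets x_n be recovered exactly from the line samples X_k, which a few lines of cmath can verify (arbitrary deterministic test symbols assumed):

```python
import cmath

N = 8
x = [complex(n % 3 - 1, 0.5 * n) for n in range(N)]  # arbitrary test symbols

# Forward synthesis of eq. (5): X_k = sum_n x_n e^{+2j*pi*n*k/N}
X = [sum(x[n] * cmath.exp(2j * cmath.pi * n * k / N) for n in range(N))
     for k in range(N)]

# Orthogonality of the carriers inverts the transform:
# x_n = (1/N) * sum_k X_k e^{-2j*pi*n*k/N}
x_rec = [sum(X[k] * cmath.exp(-2j * cmath.pi * n * k / N) for k in range(N)) / N
         for n in range(N)]
err = max(abs(a - b) for a, b in zip(x, x_rec))
print(err)  # tiny: perfect reconstruction up to rounding
```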
[Figure: (a) DMT/OFDM transmitter: the input data is mapped to N/2 QAM symbols, serial-to-parallel (S/P) converted into x_0 … x_N, passed through an IFFT, parallel-to-serial (P/S) converted and sent through the D/A to the line; (b) the resulting spectrum, with carriers at f_0, f_1, …, f_n.]
Remark:
The functions p_n (carriers) form an orthogonal basis. The exponential functions are orthogonal; many other families of orthogonal functions form orthogonal bases, such as the Haar, Hadamard, cosine and wavelet functions, for which fast algorithms exist. These orthogonal transformations, also called unitary transformations, are widely used in speech coding and image compression. Haar functions, for instance, are built from shifted rectangular pulses.
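As an illustration of one of these bases, the Hadamard (Walsh) matrix can be grown by Sylvester doubling and its rows checked for mutual orthogonality (a sketch, not from the lecture):

```python
# Sylvester construction: H_{2N} = [[H, H], [H, -H]], starting from H_1 = [1].
H = [[1]]
for _ in range(3):  # three doublings give the 8x8 Hadamard matrix
    H = [row + row for row in H] + [row + [-v for v in row] for row in H]

N = len(H)
# Rows are mutually orthogonal: dot product is N on the diagonal, 0 elsewhere.
gram = [[sum(H[i][k] * H[j][k] for k in range(N)) for j in range(N)]
        for i in range(N)]
ok = all(gram[i][j] == (N if i == j else 0) for i in range(N) for j in range(N))
print(N, ok)  # 8 True
```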
[Figure: four-level line signal: successive bit pairs 11 00 01 10 01 11 are mapped to the amplitude levels +3, +1, −1, −3.]
BER performance
We compare the error probability of coded and uncoded schemes under similar constraints of power and information rate. The improvement in Eb/No performance of the coded vs. uncoded system, at a specified BER, is termed the coding gain.
Let P_eu and P_ec represent the digit error probabilities in the uncoded and coded cases, respectively.
For the uncoded case, a word of k digits will be received wrong if any one of the k digits is in error. If P_Eu and P_Ec represent the word error probabilities of the uncoded and coded systems, respectively, then:
(6)  P_Eu ≈ k P_eu
     P_Ec ≈ C_n^{t+1} (P_ec)^{t+1}
For QAM:

P_eu = 3Q{√(4E_b/5N)}, then we have:

(7)  P_Eu = 3k Q{√(4E_b/5N)}
     P_Ec ≈ C_n^{t+1} ( 3Q{√(4k E_b/5nN)} )^{t+1},  P_ec << 1
For PSK:

P_eu = Q{√(2E_b/N)}

(8)  P_Eu = k Q{√(2E_b/N)}
     P_Ec ≈ C_n^{t+1} ( Q{√(2k E_b/nN)} )^{t+1},  P_ec << 1
Application
Answer:
E_b/N = 9.12 (i.e. 9.59 dB)

P_Eu = 11 Q{√18.24} = 1.1 × 10⁻⁴

P_Ec ≈ C_15^{2} ( Q{√(11 × 18.24/15)} )² = 105 × (1.37 × 10⁻⁴)² ≈ 1.96 × 10⁻⁶
Note that the word error probability of the coded system is reduced by a factor P_Eu/P_Ec ≈ 56. On the other hand, if we wish to achieve the error probability of the coded transmission (1.96 × 10⁻⁶) using the uncoded system, we must increase the transmitted power. If E'_b is the new value of E_b needed to achieve P_Eu = 1.96 × 10⁻⁶:

P_Eu = 11 Q{√(2E'_b/N)} = 1.96 × 10⁻⁶
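The numbers of this application can be reproduced with Q(x) = ½ erfc(x/√2); the code parameters n = 15, k = 11, t = 1 are implied by the C_15^2 factor. Small differences from the notes' 1.37 × 10⁻⁴ and 1.96 × 10⁻⁶ come from rounding of the tabulated Q values:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

n, k, t = 15, 11, 1   # (15, 11) single-error-correcting code (implied)
ebn = 9.12            # E_b/N, i.e. 9.59 dB

P_Eu = k * Q(math.sqrt(2 * ebn))                               # uncoded words
P_Ec = math.comb(n, t + 1) * Q(math.sqrt(2 * k * ebn / n)) ** (t + 1)
print(P_Eu, P_Ec, P_Eu / P_Ec)
```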
[Figure: word error probability P_e versus E_b/N (dB) for the coded (P_Ec) and uncoded (P_Eu) systems, plotted from 10⁻¹ down to 10⁻⁷ over 2–12 dB; at P_e = 1.96 × 10⁻⁶ the coded curve sits at E_b/N = 9.59 dB, 1.53 dB to the left of the uncoded curve.]
For the power-to-noise ratio E_b/N = 9.12, P_Eu > P_Ec, meaning the probability of error is lower when the data is coded than when it is uncoded. In this example the coding introduces a gain of 1.53 dB. This result is specific to the code used: the code efficiency is related to the gain, the code rate and the code's ability to detect and correct errors.
6. Channel Capacity
The Shannon–Hartley capacity limit for error-free communications is given by:

(9)   C = W log2[1 + S/N]  b/s

(10)  C/W = log2[1 + E_b C/(N_0 W)]  b/s/Hz
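Equation (9) is easy to exercise numerically; for example (illustrative numbers, not from the lecture), a 1 MHz channel at 30 dB SNR:

```python
import math

W = 1e6                    # channel bandwidth, Hz (assumption)
snr_db = 30.0              # signal-to-noise ratio in dB (assumption)
snr = 10 ** (snr_db / 10)  # linear S/N = 1000

C = W * math.log2(1 + snr)  # Shannon-Hartley limit, eq. (9)
print(C / 1e6)  # ≈ 9.97 Mb/s
```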
There are other equations that concern the channel capacity of twisted pairs, related to NEXT interference and Gaussian noise:

(11)  C = Sup ∫₀^∞ log2[ 1 + |H_ν(f)|² P_s(f) / ( |H_x(f)|² P_s(f) + N_0/2 ) ] df  b/s

H_ν(f): transfer function of the twisted pair (TP)
H_x(f): NEXT transfer function
P_s(f): power spectral density of the transmitted signal
N_0: noise power spectral density
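For a fixed transmit PSD, equation (11) reduces to a numerical integral. The sketch below shows only the mechanics: the line response, NEXT coupling and PSD levels are entirely hypothetical stand-ins, not measured twisted-pair data:

```python
import math

def capacity(f_max=10e6, steps=10000):
    """Trapezoidal evaluation of the integrand of eq. (11).

    All channel models here are illustrative assumptions."""
    No = 1e-14  # noise PSD, W/Hz (assumed)
    Ps = 1e-8   # flat transmit PSD, W/Hz (assumed)

    def integrand(f):
        Hv2 = 1 / (1 + (f / 2e6) ** 2)   # line |H_v(f)|², first-order (assumed)
        Hx2 = 1e-4 * (f / 1e6) ** 1.5    # NEXT |H_x(f)|², ~f^1.5 (assumed)
        return math.log2(1 + Hv2 * Ps / (Hx2 * Ps + No / 2))

    df = f_max / steps
    ys = [integrand(i * df) for i in range(steps + 1)]
    return df * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

C = capacity()
print(C / 1e6, "Mb/s under the assumed models")
```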