Digital Modulation 1: Ingmar Land and Bernard H. Fleury
– Lecture Notes –
Contents
I Basic Concepts
References
[1] J. G. Proakis and M. Salehi, Communication Systems Engineering, 2nd ed. Prentice-Hall, 2002.
Part I
Basic Concepts
Waveform
Channel
Some properties:
Remark:
Commonly used binary symbol alphabets are F2 := {0, 1} and
B := {−1, +1}. In the following, we will use F2.
[Figure: an information source emits U1, U2, …; a source encoder converts this output into a binary sequence, and at the receiver a decoder delivers the estimates to the sink. The cascade of source and source encoder is modeled as a binary symmetric source (BSS) emitting U1, U2, ….]
Some Remarks
(i) There is no loss of generality resulting from these substitutions.
Indeed it can be demonstrated within Shannon’s Information The-
ory that an efficient source encoder converts the output of an in-
formation source into a random binary independent and uniformly
distributed (i.u.d.) sequence. Thus, the output of a perfect source
encoder looks like the output of a BSS.
(ii) Our main concern is the design of communication systems for
reliable transmission of the output symbols of a BSS. We will not
address the various methods for efficient source encoding.
[Figure: a BSS feeds U into a digital transmitter producing the waveform X(t); the digital receiver observes Y(t) and delivers Û to the binary sink.]
• limited power
• limited bandwidth
Design goals:
• “good” waveforms
• low-complexity transmitters
• low-complexity receivers
4PPM Transmitter
[Figure: the bit pair [U1, U2] selects the transmit waveform X(t) via
[00] ↦ s1(t), [01] ↦ s2(t), [10] ↦ s3(t), [11] ↦ s4(t).]
For an arbitrary digital transmitter, we have the following:
[Figure: a digital transmitter maps [U1, U2, …, UK] to X(t); internally it consists of a vector encoder (a waveform look-up table) followed by a waveform modulator.]

[Figure: vector encoder and waveform modulator for 4PPM: the look-up table maps
[00] ↦ [√Es, 0, 0, 0]^T, [01] ↦ [0, √Es, 0, 0]^T,
[10] ↦ [0, 0, √Es, 0]^T, [11] ↦ [0, 0, 0, √Es]^T;
the modulator synthesizes X(t) from the components X1, …, XD and the basis functions ψ1(t), …, ψD(t).]
Remarks:
- The signal energy of each basis function is equal to one: ∫ ψd²(t) dt = 1 (due to their construction).
- The basis functions are orthonormal (due to their construction).
- The value √Es is chosen such that the synthesized signals have the same energy as the original signals, namely Es (compare Example 3).
[Figure: channel input X(t) and output Y(t); the signal spectrum is confined to bands of width 2B around the carrier frequencies ±f0. All transmit waveforms satisfy x(t) ∈ S.]
[Figure: correlation demodulator: y(t) is correlated with each waveform, cm = ∫_0^T y(t) sm(t) dt for m = 1, …, M, followed by estimation of x̂(t) and a look-up table producing [û1, …, ûK].]
m = 1, 2, . . . , M . The vector
Basic Idea
Matched-filter based Demodulator and Vector Decoder
[Figure: Y(t) passes through a bank of matched filters (MF) with impulse responses ψd(T − t), d = 1, …, D, sampled at t = T to give Y1, …, YD; the vector decoder outputs [Û1, …, ÛK].]
Remark:
Since the vector decoder “sees” noisy observations of the vectors
sm, these vectors are also called signal points with respect to Sψ ,
and the set of signal points is called the signal constellation.
which is the “part” of y(t) within the vector space spanned by the
set Sψ of orthonormal functions, i.e.,
y 0(t) ∈ Sψ .
Observations
Mean value
Using the definition of Wd and taking into account that the process W(t) has zero mean, we obtain

    E[Wd] = E[ ∫_0^T W(t) ψd(t) dt ] = ∫_0^T E[W(t)] ψd(t) dt = 0.

Thus, all random variables Wd have zero mean.
Variance
X = [X1, X2, …, XD]^T,
output vector
Y = [Y1, Y2, …, YD]^T,
and the relation
Y = X + W,
where
W = [W1, W2, …, WD]^T
is a vector of D independent random variables distributed as Wd ∼ N(0, N0/2).
[Figure: equivalence of the waveform channel and the vector channel: X is mapped to X(t) using the basis functions ψ1(t), …, ψD(t), transmitted over the AWGN channel, and demodulated by correlators Yd = ∫_0^T Y(t) ψd(t) dt; overall, [X1, …, XD] ↦ [Y1, …, YD] with Y = X + W.]
Using this equivalence, the communication system operating over a
(waveform) AWGN channel can be represented as a communication
system operating over an AWGN vector channel:
[Figure: BSS → vector encoder, U ↦ X.]
    pY|X(y|x) = pW(y − x) = (πN0)^(−D/2) exp( −‖y − x‖² / N0 ).

Symbolically, we may write

    Y | X = sm ∼ N( sm, (N0/2) I_D ).
Remark:
The likelihood of x is the conditional probability density function
of y given x, and it is regarded as a function of x.
[Figure: signal point sm in the three-dimensional signal space spanned by ψ1, ψ2, ψ3.]
Let Esm denote the energy of waveform sm(t), and let Ēs denote the average waveform energy:

    Esm = ∫_0^T sm²(t) dt,   Ēs = (1/M) Σ_{m=1}^M Esm.
Remark:
The energies and signal-to-noise ratios usually denote received val-
ues.
[Figure: BSS → vector encoder, U ↦ X.]
Nomenclature
u = [u1, …, uK] ∈ U,   x = [x1, …, xD] ∈ X,
û = [û1, …, ûK] ∈ U,   y = [y1, …, yD] ∈ R^D,
U = {0, 1}^K,   X = {s1, …, sM}.
Optimality criterion
Block Diagram
[Figure: the vector encoder maps U to X = enc(U); the modulation symbol is transmitted over the AWGN vector channel (Y = X + W, Y ∈ R^D); the vector decoder first applies the decision rule X̂ = dec(Y) and then the encoder inverse Û = enc⁻¹(X̂).]
A decision rule is a function which maps each observation y ∈
RD to a modulation symbol x ∈ X, i.e., to an element of the signal
constellation X.
[Figure: observations y(1), y(2), y(3) ∈ R^D and the signal constellation {s1, s2, …, sM} = X, m = 1, 2, …, M.]

[Figure: partition of R^D into decision regions D1, D2, …, DM, one per signal point.]
The decision regions are disjoint and cover R^D:

    Di ∩ Dj = ∅ for i ≠ j,   D1 ∪ D2 ∪ · · · ∪ DM = R^D.
Remark:
The a-posteriori probability of a decoding error is the probability of the event {dec(y) ≠ X} conditioned on the event {Y = y}:
Proof:
Let dec : RD → X denote any decision rule, and let
decopt : RD → X denote an optimal decision rule. Then we
have
    Pr(dec(Y) ≠ X) = ∫_{R^D} Pr(dec(Y) ≠ X | Y = y) · pY(y) dy
                   = ∫_{R^D} Pr(dec(y) ≠ X | Y = y) · pY(y) dy
              (⋆)
                   ≥ ∫_{R^D} Pr(dec_opt(y) ≠ X | Y = y) · pY(y) dy
                   = ∫_{R^D} Pr(dec_opt(Y) ≠ X | Y = y) · pY(y) dy
                   = Pr(dec_opt(Y) ≠ X).
for all x ∈ X.
dec : y ↦ x̂ = dec(y),

    Pr(x̂ ≠ X | Y = y)

is minimal.
The decision rule is called the MAP decision rule because the selected modulation symbol x̂ maximizes the a-posteriori probability density function pX|Y(x|y). Notice that the two formulations are equivalent.
X ∈ X,   Y ∈ R^D
Thus, the ML decision rule for the AWGN vector channel selects
the vector in the signal constellation X that is the closest one to the
observation y, where the distance measure is the squared Euclidean
distance.
Block diagram of the ML decision rule for the AWGN vector channel:
[Figure: the observation y is correlated with each sm, the bias (1/2)‖sm‖² is subtracted, and the largest result selects x̂.]

Vector correlator:

    ⟨a, b⟩ = Σ_{d=1}^D a_d b_d ∈ R   for a, b ∈ R^D.
Remark:
Some signal constellations X have the property that kxk2 has the
same value for all x ∈ X. In these cases (but only then), the ML
decision rule simplifies further to
    x̂ = dec_ML(y) = argmax_{x ∈ X} ⟨y, x⟩.
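As a small illustration, both forms of the ML rule can be sketched in Python; the 4PPM-style constellation and the observation below are made-up values, and since all points have equal energy, the minimum-distance and correlation decisions agree.

```python
def dec_ml_min_dist(y, constellation):
    """ML decision for the AWGN vector channel: pick the signal point
    closest to y in squared Euclidean distance."""
    return min(constellation,
               key=lambda s: sum((yi - si) ** 2 for yi, si in zip(y, s)))

def dec_ml_correlation(y, constellation):
    """Equivalent correlation form, valid when all points have equal norm:
    pick the point with the largest inner product <y, s>."""
    return max(constellation,
               key=lambda s: sum(yi * si for yi, si in zip(y, s)))

# 4PPM-style constellation: equal-energy, orthogonal signal points (Es = 4).
X = [(2.0, 0.0, 0.0, 0.0), (0.0, 2.0, 0.0, 0.0),
     (0.0, 0.0, 2.0, 0.0), (0.0, 0.0, 0.0, 2.0)]

y = (1.7, 0.3, -0.2, 0.4)          # noisy observation of s1
assert dec_ml_min_dist(y, X) == dec_ml_correlation(y, X) == X[0]
```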
[Figure: decision regions D1, …, DM in R^D for the signal constellation {s1, s2, …, sM} = X.]
Lower bound:
If a symbol is erroneous, there is at least one erroneous bit in the
block of K bits. Therefore
    Pb ≥ (1/K) Ps.
Upper bound:
If a symbol is erroneous, there are at most K erroneous bits in the
block of K bits. Therefore
    Pb ≤ Ps.
Combination:
Combining the lower bound and the upper bound yields
    (1/K) Ps ≤ Pb ≤ Ps.

An efficient encoder should achieve the lower bound for the bit-error probability, i.e.,

    Pb ≈ (1/K) Ps.
Remark:
The bounds can also be obtained by considering the union bound of the events

    {Û1 ≠ U1}, {Û2 ≠ U2}, …, {ÛK ≠ UK},

i.e., by relating the probability

    Pr( {Û1 ≠ U1} ∪ {Û2 ≠ U2} ∪ … ∪ {ÛK ≠ UK} )

to the probabilities

    Pr({Û1 ≠ U1}), Pr({Û2 ≠ U2}), …, Pr({ÛK ≠ UK}).
[Figure: Gray-labeled constellation on the real line: the points s1, s2, s3, s4 carry the labels 00, 01, 11, 10, so neighboring points differ in one bit.]
Motivation
When a decision error occurs, it is very likely that the transmitted
symbol and the detected symbol are neighbors. When the bit
blocks of neighboring symbols differ only in one bit, there will be
only one bit error per symbol error.
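The reflected Gray code used for such labelings can be generated recursively; a minimal sketch (the function name is ours):

```python
def gray_code(K):
    """Generate the length-K reflected Gray code: consecutive labels
    differ in exactly one bit position."""
    if K == 0:
        return ['']
    prev = gray_code(K - 1)
    return ['0' + c for c in prev] + ['1' + c for c in reversed(prev)]

labels = gray_code(2)                  # labels for 4PAM, left to right
assert labels == ['00', '01', '11', '10']
# Neighboring symbols differ in exactly one bit position:
for a, b in zip(labels, labels[1:]):
    assert sum(x != y for x, y in zip(a, b)) == 1
```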
Effect
Gray encoding achieves
    Pb ≈ (1/K) Ps
    Ps = Pr(X̂ ≠ X)
       = Pr([Û1, Û2] ≠ [U1, U2])
       = Pr({Û1 ≠ U1} ∪ {Û2 ≠ U2})
       = Pr(Û1 ≠ U1) + Pr(Û2 ≠ U2) − Pr({Û1 ≠ U1} ∩ {Û2 ≠ U2})   (⋆)

The term (⋆) is small compared to the other two terms if the SNR is high. Hence

    Ps ≈ 2 · Pb.
Part II
Digital Communication
Techniques
3.1 BPAM
PAM with only two waveforms is called binary PAM (BPAM).
[Figure: BPAM waveforms: s1(t) = +A and s2(t) = −A, rectangular pulses of amplitude A on [0, T].]
U 7→ x(t)
with
Remark: In the general case, the basic waveform may also have
another shape.
Gram-Schmidt procedure
    ψ1(t) = (1/√E1) s1(t)

with

    E1 = ∫_0^T s1²(t) dt = ∫_0^T A² dt = A² · T.

Thus

    ψ1(t) = { 1/√T   for t ∈ [0, T],
            { 0      otherwise.
Signal constellation
Vector representation of the two waveforms:

    s1(t) ↔ s1 = +√Es,
    s2(t) ↔ s2 = −√Es.

Thus the signal constellation is X = {−√Es, +√Es}.
[Figure: s2 = −√Es and s1 = +√Es on the ψ1 axis.]
Vector Encoder
The vector encoder is usually defined as

    x = enc(u) = { +√Es  for u = 0,
                 { −√Es  for u = 1.
3.1.3 ML Decoding
[Figure: the threshold at y = 0 separates D2 (around s2 = −√Es) from D1 (around s1 = +√Es).]
BPAM transceiver:
[Figure: BPAM transceiver: the BSS bit U is encoded (0 ↦ +√Es, 1 ↦ −√Es), modulated with ψ1(t), and sent over the AWGN channel with noise W(t); the receiver correlates, Y = (1/√T) ∫_0^T Y(t) dt, and decides (D1 ↦ 0, D2 ↦ 1) before delivering Û to the sink.]
Vector representation:
[Figure: equivalent vector channel: encoder (0 ↦ +√Es, 1 ↦ −√Es), Y = X + W with W ∼ N(0, N0/2), decoder (D1 ↦ 0, D2 ↦ 1).]
    pY|X(y | +√Es) = (1/√(πN0)) exp( −|y − √Es|² / N0 ),
    pY|X(y | −√Es) = (1/√(πN0)) exp( −|y + √Es|² / N0 ).
    Pr(X̂ ≠ X | X = +√Es) = Pr(Y ∈ D2 | X = +√Es)
                          = Pr(Y < 0 | X = +√Es)
                          = ∫_{−∞}^{0} (1/√(πN0)) exp( −|y − √Es|² / N0 ) dy
                          = ∫_{−∞}^{−√(2Es/N0)} (1/√(2π)) exp( −z²/2 ) dz,
The function q(z) denotes a Gaussian pdf with zero mean and vari-
ance 1; i.e., it may be interpreted as the pdf of a random variable
Z ∼ N (0, 1).
Symbol-error probability
The symbol-error probability results from averaging over the con-
ditional symbol-error probabilities:
    Ps = Pr(X̂ ≠ X)
       = Pr(X = +√Es) · Pr(X̂ ≠ X | X = +√Es)
         + Pr(X = −√Es) · Pr(X̂ ≠ X | X = −√Es)
       = Q(√(2Es/N0)) = Q(√(2γs)),

as Pr(X = +√Es) = Pr(X = −√Es) = 1/2.
    Pr(X̂ ≠ X | X = +√Es) = Pr(Y < 0 | X = +√Es)
                          = Pr(√Es + W < 0 | X = +√Es)
                          = Pr(W < −√Es | X = +√Es)
                          = Pr(W < −√Es)
                          = Pr(W′ < −√(2Es/N0))
                          = Q(√(2Es/N0)),

where we substituted W′ = W/√(N0/2); notice that W′ ∼ N(0, 1). A plot of the symbol-error probability can be found in [1, p. 412].
The initial event (here {Y < 0}) is transformed into an equivalent event (here {W′ < −√(2Es/N0)}), whose probability can easily be calculated. This method is frequently applied to determine symbol-error probabilities.
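The resulting expression Ps = Q(√(2γs)) is easy to evaluate numerically using the identity Q(x) = erfc(x/√2)/2; a short sketch (function names are ours):

```python
import math

def Q(x):
    """Gaussian tail function Q(x) = Pr(Z > x) for Z ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ps_bpam(snr_db):
    """BPAM symbol-error probability Ps = Q(sqrt(2 * gamma_s))."""
    gamma_s = 10.0 ** (snr_db / 10.0)
    return Q(math.sqrt(2.0 * gamma_s))

# Sanity checks: Q(0) = 1/2, and Ps decreases with increasing SNR.
assert abs(Q(0.0) - 0.5) < 1e-12
assert ps_bpam(10.0) < ps_bpam(0.0) < 0.5
```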
For the same reason, the transmit energy per symbol is equal
to the transmit energy per bit, and so also the SNR per symbol is
equal to the SNR per bit:
Eb = E s , γs = γb .
3.2 MPAM
In the general case of M -ary pulse amplitude modulation (MPAM)
with M = 2K , input blocks of K bits modulate the amplitude of
a unique pulse.
Waveform encoder:
    u = [u1, …, uK] ↦ x(t)
    U = {0, 1}^K → S = {s1(t), …, sM(t)},

where

    sm(t) = { (2m − M − 1) A g(t)  for t ∈ [0, T],
            { 0                    otherwise,

A ∈ R, and g(t) is a predefined pulse of duration T, g(t) = 0 for t ∉ [0, T].
Gram-Schmidt procedure:
Orthonormal function

    ψ(t) = (1/√E1) s1(t) = (1/√Eg) g(t),

    E1 = ∫_0^T s1²(t) dt,   Eg = ∫_0^T g²(t) dt.
Signal constellation
    X = {s1, s2, …, sM}
      = {−(M−1)d, −(M−3)d, …, −d, d, …, (M−3)d, (M−1)d}.

Example: M = 2, 4, 8
[Figure: PAM constellations on the real line: for M = 2, s1 (0) and s2 (1); for M = 4, s1, …, s4 with Gray labels 00, 01, 11, 10; for M = 8, s1, …, s8.]
Vector Encoder
The vector encoder

    enc : u = [u1, …, uK] ↦ x = enc(u)
    U = {0, 1}^K → X = {−(M−1)d, …, (M−1)d}
Thus,

    Es = ((M² − 1)/3) d²   ⇔   d = √( 3Es / (M² − 1) ).

The SNR per symbol and the SNR per bit are related as

    γb = (1/K) γs = (1/log2 M) γs.
3.2.3 ML Decoding
Decision regions
Example: Decision regions for M = 4.
[Figure: thresholds at −2d, 0, 2d partition the real line into D1, D2, D3, D4 around s1, s2, s3, s4.]
Starting from the special cases M = 2, 4, 8, the decision regions
can be inferred:
D1 = {y ∈ R : y ≤ −(M − 2)d},
D2 = {y ∈ R : −(M − 2)d < y ≤ −(M − 4)d},
···
Dm = {y ∈ R : (2m − M − 2)d < y ≤ (2m − M )d},
···
DM −1 = {y ∈ R : (M − 4)d < y ≤ (M − 2)d},
DM = {y ∈ R : (M − 2)d < y}.
MPAM transceiver:
[Figure: the BSS block U is Gray-encoded (um ↦ sm), modulated with ψ(t), and sent over the AWGN channel with noise W(t); the receiver correlates, Y = (1/√T) ∫_0^T Y(t) dt, and decides (Dm ↦ um) before delivering Û to the sink.]
Vector representation:
[Figure: equivalent vector channel: Gray encoder (um ↦ sm), Y = X + W with W ∼ N(0, N0/2), decoder (Dm ↦ um).]
Case 1: X ∈ {s1, sM }
Assume first X = s1. Then
Case 2: X ∈ {s2, . . . , sM −1 }
Assume X = sm. Then
    Pr(X̂ ≠ X | X = sm) = Pr(Y ∉ Dm | X = sm)
                        = Pr(W < −d or W > d)
                        = Pr(W < −d) + Pr(W > d)
                        = Pr( W/√(N0/2) < −d/√(N0/2) ) + Pr( W/√(N0/2) > d/√(N0/2) )
                        = Q( d/√(N0/2) ) + Q( d/√(N0/2) )
                        = 2 · Q( d/√(N0/2) ).
Symbol-error probability
Averaging over the symbols, one obtains

    Ps = (1/M) Σ_{m=1}^M Pr(X̂ ≠ X | X = sm)
       = (2/M) Q( d/√(N0/2) ) + (2(M−2)/M) Q( d/√(N0/2) )
       = 2 ((M−1)/M) Q( d/√(N0/2) ).
Expressing the symbol-error probability as a function of the SNR per symbol, γs, or the SNR per bit, γb, yields

    Ps = 2 ((M−1)/M) Q( √( 6γs / (M² − 1) ) )
       = 2 ((M−1)/M) Q( √( 6 log2(M) γb / (M² − 1) ) ).
Remark: When comparing BPAM and 4PAM for the same Ps,
the SNRs per bit, γb, differ by 10 log10(5/2) ≈ 4 dB. (Proof?)
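The claimed gap of roughly 4 dB can be checked numerically by solving Ps(γb) = const for both schemes; a sketch using bisection on a log scale (bounds, iteration count, and names are ours):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def gamma_b_required(ps_target, M):
    """SNR per bit (linear) needed so that MPAM reaches Ps = ps_target,
    using Ps = 2(M-1)/M * Q(sqrt(6*log2(M)*gamma_b/(M^2-1)))."""
    K = math.log2(M)
    def ps(gb):
        return 2 * (M - 1) / M * Q(math.sqrt(6 * K * gb / (M * M - 1)))
    lo, hi = 1e-6, 1e6
    for _ in range(200):
        mid = math.sqrt(lo * hi)      # bisect on a log scale
        if ps(mid) > ps_target:       # Ps decreases monotonically in SNR
            lo = mid
        else:
            hi = mid
    return hi

gap_db = 10 * math.log10(gamma_b_required(1e-6, 4) / gamma_b_required(1e-6, 2))
assert 3.5 < gap_db < 4.5             # roughly 10*log10(5/2) ~ 4 dB
```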
4.1 BPPM
PPM with only two waveforms is called binary PPM (BPPM).
[Figure: BPPM waveforms: s1(t) is a rectangular pulse of amplitude A on [0, T/2], and s2(t) is a rectangular pulse of amplitude A on [T/2, T].]
U 7→ x(t)
with
Remark: In the general case, the basic waveform may also have
another shape.
Gram-Schmidt procedure
The waveforms s1(t) and s2(t) are orthogonal and have the same energy

    Es = ∫_0^T s1²(t) dt = ∫_0^T s2²(t) dt = A² · T/2.

Notice that √Es = A √(T/2).
The two orthonormal functions and the corresponding waveform representations are

    ψ1(t) = (1/√Es) s1(t),   s1(t) = √Es · ψ1(t) + 0 · ψ2(t),
    ψ2(t) = (1/√Es) s2(t),   s2(t) = 0 · ψ1(t) + √Es · ψ2(t).
Signal constellation
Vector representation of the two waveforms:

    s1(t) ↔ s1 = (√Es, 0)^T,
    s2(t) ↔ s2 = (0, √Es)^T.

Thus the signal constellation is X = {(√Es, 0)^T, (0, √Es)^T}.
[Figure: s1 and s2 at distance √Es from the origin on the ψ1 and ψ2 axes; their mutual distance is √(2Es).]
Vector Encoder
The vector encoder is defined as

    x = enc(u) = { (√Es, 0)^T  for u = 0,
                 { (0, √Es)^T  for u = 1.
Eb = E s , γs = γb .
4.1.3 ML Decoding
Decision regions

    D1 = {y ∈ R² : y1 ≥ y2},   D2 = {y ∈ R² : y1 < y2}.

[Figure: the decision boundary is the diagonal y1 = y2; D1 contains s1, D2 contains s2.]
[Figure: implementation: Y1 and Y2 are combined into Z, and Û = 0 if z ≥ 0, Û = 1 if z < 0.]
Y2
[Figure: BPPM transceiver: the vector encoder maps 0 ↦ (√Es, 0)^T and 1 ↦ (0, √Es)^T; X1, X2 modulate ψ1(t), ψ2(t); the AWGN channel adds W(t); the receiver computes Y1 = √(2/T) ∫_0^{T/2} Y(t) dt and Y2 = √(2/T) ∫_{T/2}^T Y(t) dt, and the vector decoder maps D1 ↦ 0, D2 ↦ 1.]
Vector representation including the AWGN vector channel:
[Figure: encoder 0 ↦ (√Es, 0)^T, 1 ↦ (0, √Es)^T; Y1 = X1 + W1 and Y2 = X2 + W2 with W1, W2 ∼ N(0, N0/2); vector decoder D1 ↦ 0, D2 ↦ 1.]
[Figure: rotated coordinate system: the axes (ξ1, ξ2) are rotated by α = π/4 into (ξ1′, ξ2′); the received point s1 + w has noise components w1′, w2′ along the rotated axes, and the decision boundary between D1 and D2 coincides with the ξ1′ axis.]
Thus, we obtain

    Pr(X̂ ≠ X | X = s1) = Pr( W2′ > √(Es/2) )
                        = Pr( W2′/√(N0/2) > √(Es/N0) )
                        = Q( √(Es/N0) ) = Q( √γs ).

By symmetry,

    Pr(X̂ ≠ X | X = s2) = Q( √γs ).
Symbol-error probability
Averaging over the conditional symbol-error probabilities, we obtain

    Ps = Pr(X̂ ≠ X) = Q( √γs ).
Remark
Due to their signal constellations, BPPM is called orthogonal sig-
naling and BPAM is called antipodal signaling. Their BEPs are
given by
    Pb,BPPM = Q( √γb ),   Pb,BPAM = Q( √(2γb) ).

When comparing the SNR for a certain BEP, BPPM requires 3 dB more than BPAM. For a plot, see [1, p. 409].
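The 3 dB statement follows from the fact that equal BEP requires γb,BPPM = 2 · γb,BPAM; a numerical check (helper names and bisection bounds are ours):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def gamma_b_for(pb_target, factor):
    """Solve Q(sqrt(factor * gamma_b)) = pb_target by bisection.
    factor = 1 for BPPM (orthogonal), factor = 2 for BPAM (antipodal)."""
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if Q(math.sqrt(factor * mid)) > pb_target:
            lo = mid
        else:
            hi = mid
    return hi

gap_db = 10 * math.log10(gamma_b_for(1e-5, 1) / gamma_b_for(1e-5, 2))
assert abs(gap_db - 3.0) < 0.1       # BPPM needs ~3 dB more than BPAM
```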
4.2 MPPM
In the general case of M -ary pulse position modulation (MPPM)
with M = 2K , input blocks of K bits determine the position of a
unique pulse.
Waveform encoder
    u = [u1, …, uK] ↦ x(t)
    U = {0, 1}^K → S = {s1(t), …, sM(t)},

where

    sm(t) = g( t − (m−1)T/M ),

m = 1, 2, …, M, and g(t) is a predefined pulse of duration T/M, g(t) = 0 for t ∉ [0, T/M].
The waveforms are orthogonal and have the same energy as g(t),

    Es = Eg = ∫_0^T g²(t) dt.
Example: (M = 4)
Consider a rectangular pulse g(t) with amplitude A and duration T/4. Then, we have the set of four waveforms S = {s1(t), s2(t), s3(t), s4(t)}:
[Figure: four rectangular pulses of amplitude A occupying the four consecutive quarters of [0, T].]
Gram-Schmidt procedure
As all waveforms are orthogonal, we have M orthonormal functions:

    ψm(t) = (1/√Es) sm(t),   m = 1, 2, …, M.

The waveforms and the orthonormal functions are related as

    sm(t) = √Es · ψm(t),   m = 1, 2, …, M.
Signal constellation
Waveforms and their vector representations (vectors of length M):

    s1(t) ↔ s1 = (√Es, 0, 0, …, 0)^T
    s2(t) ↔ s2 = (0, √Es, 0, …, 0)^T
    ⋮
    sM(t) ↔ sM = (0, 0, 0, …, √Es)^T

Signal constellation:

    X = {s1, s2, …, sM} ⊂ R^M.
Example: M = 3
The signal constellation

    X = {(√Es, 0, 0)^T, (0, √Es, 0)^T, (0, 0, √Es)^T} ⊂ R³

may be visualized using a cube with side length √Es.
Vector Encoder
    enc : u = [u1, …, uK] ↦ x = enc(u)
    U = {0, 1}^K → X = {s1, s2, …, sM}

The SNR per symbol and the SNR per bit are related as

    γb = (1/K) γs = (1/log2 M) γs.
4.2.3 ML Decoding
    û = enc⁻¹( dec_ML(y) )

Implementation of the ML detector using vector correlators:
[Figure: y is correlated with each sm to obtain ⟨y, s1⟩, …, ⟨y, sM⟩; the maximum selects x̂, and enc⁻¹ yields û.]
so that
    ⟨y, sm⟩ = { Es + √Es · W1  for m = 1,
              { √Es · Wm       for m = 2, …, M.

    W′ = (W1′, …, WM′)^T ∼ N(0, I),
    ∏_{m=2}^M Pr( Wm′ ≤ √(2Es/N0) + v ) = [ 1 − Q( √(2γs) + v ) ]^{M−1}.

Using this result and inserting p_{W1′}(v), we obtain the conditional symbol-error probability
Symbol-error probability
The above expression is independent of s1 and thus valid for all x ∈ X. Therefore, the average symbol-error probability results as

    Ps = 1 − ∫_{−∞}^{+∞} [ 1 − Q( √(2γs) + v ) ]^{M−1} (1/√(2π)) exp(−v²/2) dv.
−∞
Hamming distance
The Hamming distance dH(u, u′) between two length-K vectors u and u′ is the number of positions in which they differ:

    dH(u, u′) = Σ_{k=1}^K δ(uk, u′k),
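In code, the Hamming distance is a one-line count of differing positions; a minimal sketch:

```python
def hamming_distance(u, v):
    """Number of positions in which the length-K vectors u and v differ."""
    assert len(u) == len(v)
    return sum(a != b for a, b in zip(u, v))

assert hamming_distance([0, 0, 1, 1], [0, 1, 1, 0]) == 2
assert hamming_distance([0, 1], [0, 1]) == 0
```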
Example: 4PPM
Let the blocks of source bits and the modulation symbols be asso-
ciated as
Example:
Consider K = 3 and write the binary vectors of length 3 in the columns of a matrix:

                        [ 0 0 0 0 1 1 1 1 ]
    [u1^T, …, u8^T] =   [ 0 0 1 1 0 0 1 1 ]
                        [ 0 1 0 1 0 1 0 1 ].

In every row, half of the entries are ones. The overall number of ones is thus 3 · 2^{3−1}.
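The counting argument generalizes to any K and can be verified directly:

```python
from itertools import product

K = 3
vectors = list(product([0, 1], repeat=K))   # all 2^K binary vectors

# In every bit position, half of the 2^K vectors carry a one ...
for k in range(K):
    assert sum(v[k] for v in vectors) == 2 ** (K - 1)

# ... so the total number of ones is K * 2^(K-1).
assert sum(sum(v) for v in vectors) == K * 2 ** (K - 1)
```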
Remarks
BPSK
for t ∈ [0, T ].
QPSK
for t ∈ [0, T ].
Orthonormal functions:

    ψ1(t) = (1/√Es) s1(t) = √(2/T) cos(2πf0t)

for t ∈ [0, T].
Signal constellation:
[Figure: s2 = −√Es and s1 = +√Es on the ψ1 axis.]
Vector encoder:

    x = enc(u) = { s1  for u = 1,
                 { s2  for u = 0.
QPSK
Orthonormal functions:

    ψ1(t) = (1/√Es) s1(t) = +√(2/T) cos(2πf0t),
    ψ2(t) = (1/√Es) s2(t) = −√(2/T) sin(2πf0t)

for t ∈ [0, T], with Es = P T.
Signal constellation:
[Figure: the four QPSK points s1, s2, s3, s4 on the ψ1/ψ2 axes.]
General Case
where φm = 2π(m−1)/M, m = 1, 2, …, M.
(Notice: cos(x + y) = cos x cos y − sin x sin y.)
Orthonormal functions:

    ψ1(t) = √(2/T) cos(2πf0t),
    ψ2(t) = −√(2/T) sin(2πf0t)

for t ∈ [0, T], m = 1, 2, …, M. Remember: Es = P T.
Signal constellation:

    X = { ( √Es cos(2π(m−1)/M), √Es sin(2π(m−1)/M) )^T : m = 1, 2, …, M }
Example: 8PSK
For M = 8, we obtain φm = π(m−1)/4, m = 1, 2, …, 8. (Figure?)
5.3 ML Decoding
ML symbol decision rule
BPSK
BPSK has the same signal constellation as BPAM. Thus, the ML detectors for these two modulation schemes are the same.
[Figure: the threshold at y = 0 separates D2 around s2 (u = 0) from D1 around s1 (u = 1).]
Implementation of the ML detector:
[Figure: Û = 1 if y ≥ 0, Û = 0 if y < 0.]
QPSK
[Figure: the four quadrant-shaped decision regions D1, …, D4 around s1 (00), s2 (01), s3 (11), s4 (10) in the (y1, y2) plane.]
Implementation:
[Figure: from Y1, Û1 = 0 if z1 > 0 and Û1 = 1 if z1 < 0; from Y2, Û2 = 0 if z2 < 0 and Û2 = 1 if z2 > 0.]
General Case
[Figure: decision region D1 is a wedge of angle 2π/M around s1, bounded by the angles ±π/M, in the (y1, y2) plane; the neighboring points s2 and sM border it.]
[Figure: MPSK transceiver: the vector encoder outputs X1, X2, which modulate √(2/T) cos(2πf0t) and −√(2/T) sin(2πf0t); the channel adds W(t); the receiver correlates Y(t) with the same carriers over [0, T] to obtain Y1, Y2; equivalently, Y1 = X1 + W1 and Y2 = X2 + W2 with W1, W2 ∼ N(0, N0/2).]
Since BPSK has the same signal constellation as BPAM, the two modulation schemes have the same symbol-error probability:

    Ps = Q( √(2Es/N0) ) = Q( √(2γs) ).
QPSK
Assume that X = s1 is transmitted. Then, the conditional symbol-error probability is given by

    Pr(X̂ ≠ X | X = s1) = Pr(Y ∉ D1 | X = s1)
                        = 1 − Pr(Y ∈ D1 | X = s1)
                        = 1 − Pr(s1 + W ∈ D1).
[Figure: rotated view of the QPSK decision region D1: the two decision boundaries pass through the origin at angle π/4 to the signal point s1 = (√Es, 0); the received point s1 + w has components w1′, w2′ along the rotated axes ξ1′, ξ2′, and an error occurs if the point leaves D1.]
    Pr(s1 + W ∈ D1) = Pr( W1′ > −√(Es/2) ) · Pr( W2′ < √(Es/2) )
                    = Pr( W1′/√(N0/2) > −√(Es/N0) ) · Pr( W2′/√(N0/2) < √(Es/N0) )
                    = (1 − Q(√γs)) (1 − Q(√γs))
                    = (1 − Q(√γs))².

Hence,

    Pr(X̂ ≠ X | X = s1) = 1 − (1 − Q(√γs))²
                        = 2 Q(√γs) − Q²(√γs).
General Case
Each of these events may denote the case that the received vector
is within a certain decision region.
QPSK

    Pr(Û ≠ U | X = s1) =
      = (1/2) [ Pr(Û1 ≠ U1 | X = s1) + Pr(Û2 ≠ U2 | X = s1) ]
      = (1/2) [ Pr(Û1 ≠ 0 | X = s1) + Pr(Û2 ≠ 0 | X = s1) ]
      = (1/2) [ Pr(Û1 = 1 | X = s1) + Pr(Û2 = 1 | X = s1) ].

    Û1 = 1 ⇔ Y ∈ D3 ∪ D4,
    Û2 = 1 ⇔ Y ∈ D2 ∪ D3.

Thus,

    Pr(Û ≠ U | X = s1) =
      = (1/2) [ Pr(Y ∈ D3 ∪ D4) + Pr(Y ∈ D2 ∪ D3) ]
      = (1/2) [ Pr( W1′ < −√(Es/2) ) + Pr( W2′ > √(Es/2) ) ]
      = (1/2) [ Pr( W1′/√(N0/2) < −√(Es/N0) ) + Pr( W2′/√(N0/2) > √(Es/N0) ) ]
      = Q( √γs ).
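The result Pb = Q(√γs) can be checked by simulation; the sketch below models each QPSK bit as an independent antipodal component of energy Es/2 (the normalization of the noise to unit variance and the sample size are our choices):

```python
import math
import random

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Each bit is decided from sign(amp + noise) with amp/sigma = sqrt(gamma_s),
# since the per-bit amplitude is sqrt(Es/2) and the noise std is sqrt(N0/2).
random.seed(1)
gamma_s = 4.0
amp = math.sqrt(gamma_s)            # noise standard deviation normalized to 1
trials = 100_000
errors = sum(1 for _ in range(trials) if amp + random.gauss(0.0, 1.0) < 0.0)
pb_sim = errors / trials

assert abs(pb_sim - Q(math.sqrt(gamma_s))) < 3e-3   # Q(2) ~ 0.023
```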
BPSK: T = Tb,
QPSK: T = 2Tb.
General Case
Remark
The ψ1 component of the modulation symbol is called the in-phase
component (I), and the ψ2 component is called the quadrature
component (Q).
6.2 Transceiver
We consider communication over an AWGN channel.
[Figure: OQPSK transceiver: X1 modulates √(2/T) cos(2πf0t) on [0, T], while X2 modulates −√(2/T) sin(2πf0t) on the offset interval [T/2, 3T/2]; the channel adds W(t); the receiver correlates Y(t) with cos over [0, T] and with −sin over [T/2, 3T/2] to obtain Y1, Y2, and the vector decoder outputs [Û1, Û2].]
[Figure: example bit sequences with the resulting I and Q components (±a) and phase changes ∆φ over time, for QPSK (top) and OQPSK (bottom). In QPSK, both components may switch at the same instant, so phase jumps of ±π/2 and π can occur every T. In OQPSK, the I and Q components switch at offset instants, so the phase changes by at most ±π/2 every T/2 = Tb.]
Remarks
    {0, 1} → S,   u ↦ x(t):
    0 ↦ s1(t),
    1 ↦ s2(t).
∆f = |f1 − f2|
Orthonormal functions:

    ψ1(t) = (1/√Es) s1(t) = √(2/T) cos(2πf1t + φ1),
    ψ2(t) = (1/√Es) s2(t) = √(2/T) cos(2πf2t + φ2)

for t ∈ [0, T].
[Figure: BFSK constellation: s1 = (√Es, 0)^T in region D1 and s2 = (0, √Es)^T in region D2; their mutual distance is √(2Es).]
7.1.3 ML Decoding
Decision regions

    D1 = {y ∈ R² : y1 ≥ y2},   D2 = {y ∈ R² : y1 < y2}.

[Figure: the boundary y1 = y2 separates D1 (containing s1, u = 0) from D2 (containing s2, u = 1).]
[Figure: BFSK transceiver: X1, X2 modulate √(2/T) cos(2πf1t + φ1) and √(2/T) cos(2πf2t + φ2); the channel adds W(t); the receiver correlates Y(t) with both carriers over [0, T] to obtain Y1, Y2, and the vector decoder outputs Û.]
m = 1, 2, . . . , M .
m, n = 1, 2, . . . , M .
The phases are assumed to be known to the receiver. Thus, we
consider coherent demodulation.
Orthonormal functions:

    ψm(t) = (1/√Es) sm(t) = √(2/T) cos(2πfmt + φm)

for t ∈ [0, T], m = 1, 2, …, M.
Vector Encoder
    enc : u = [u1, …, uK] ↦ x = enc(u)
    U = {0, 1}^K → X = {s1, s2, …, sM}

with K = log2 M.
[Figure: MFSK transceiver: X1, …, XM modulate the carriers √(2/T) cos(2πfmt + φm); the channel adds W(t); the receiver correlates Y(t) with each carrier over [0, T] to obtain Y1, …, YM, and the vector decoder outputs Û.]

    Wm ∼ N(0, N0/2),   m = 1, 2, …, M.
Remarks
The M waveforms sm(t) ∈ S are orthogonal. The receiver simply
correlates the received waveform y(t) to each of the possibly trans-
mitted waveforms sm(t) ∈ S and selects the one with the highest
correlation as the estimate.
[Figure: demodulator for noncoherent BFSK: Y(t) is correlated over [0, T] with √(2/T) cos(2πf1t), −√(2/T) sin(2πf1t), √(2/T) cos(2πf2t), and −√(2/T) sin(2πf2t) to obtain Y1, Y2, Y3, Y4.]
and

    Y = (Y1^T, Y2^T)^T = (Y1, Y2, Y3, Y4)^T.
    pY|X(y|s1) =
      = ∫_0^{2π} pY|X,Φ1(y|s1, φ1) pΦ1(φ1) dφ1
      = (1/2π) ∫_0^{2π} pY|X,Φ1(y|s1, φ1) dφ1
      = (1/2π) ∫_0^{2π} (1/(πN0))² · exp(−Es/N0) · exp( −(1/N0)(‖y1‖² + ‖y2‖²) )
                        · exp( (2√Es/N0) ‖y1‖ cos(φ1 − α1) ) dφ1
      = (1/(πN0))² · exp(−Es/N0) · exp( −(1/N0)(‖y1‖² + ‖y2‖²) )
                   · (1/2π) ∫_0^{2π} exp( (2√Es/N0) ‖y1‖ cos(φ1 − α1) ) dφ1,

where the remaining integral equals I0( (2√Es/N0) ‖y1‖ ), the zeroth-order modified Bessel function of the first kind.
7.3.5 ML Decoding
    = argmax_{m ∈ {1,2}} ‖ym‖
    = argmax_{m ∈ {1,2}} ‖ym‖².

ML decoder

    û = { 0  for x̂ = s1 ⇔ ‖y1‖ ≥ ‖y2‖,
        { 1  for x̂ = s2 ⇔ ‖y1‖ < ‖y2‖.
[Figure: square-law receiver: four correlators (with cos 2πf1t, −sin 2πf1t, cos 2πf2t, −sin 2πf2t) produce Y1, …, Y4; squaring and adding yields ‖Y1‖² and ‖Y2‖², which are compared.]

Notice:

    ‖Y1‖ = √(Y1² + Y2²),   ‖Y2‖ = √(Y3² + Y4²),   Z = ‖Y1‖² − ‖Y2‖².
Orthonormal functions:

    ψ1(t) = √(2/T) cos(2πf1t),
    ψ2(t) = −√(2/T) sin(2πf1t),
    ⋮
    ψ_{2m−1}(t) = √(2/T) cos(2πfmt),
    ψ_{2m}(t)   = −√(2/T) sin(2πfmt),
    ⋮
    ψ_{2M−1}(t) = √(2/T) cos(2πfMt),
    ψ_{2M}(t)   = −√(2/T) sin(2πfMt)

for t ∈ [0, T].
Signal constellation:

    X = {s1, s2, …, sM}.

    m̂ = argmax_{m ∈ {1,…,M}} ‖ym‖².

The optimal noncoherent receiver for MFSK over the AWGN channel is as follows:
[Figure: noncoherent MFSK receiver: for each frequency fm, Y(t) is correlated with √(2/T) cos(2πfmt) and −√(2/T) sin(2πfmt); the squared magnitudes ‖Ym‖² are computed, and the maximum selects Û.]
Consequently,

    Pr(X̂ = X | X = s1) =
      = Pr( ‖Wm‖² ≤ ‖Y1‖² for all m ≠ 1 | X = s1 )
      = Pr( ‖Wm‖/√(N0/2) ≤ ‖Y1‖/√(N0/2) for all m ≠ 1 | X = s1 )
          (with Rm := ‖Wm‖/√(N0/2) and R1 := ‖Y1‖/√(N0/2))
      = Pr( Rm ≤ R1 for all m ≠ 1 | X = s1 )
      = ∫_0^∞ Pr( Rm ≤ R1 for all m ≠ 1 | X = s1, R1 = r1 ) · pR1(r1) dr1.
Notice that
Remark
The Rayleigh distribution and the Rice distribution are often used to model mobile radio channels.
    Ps = (1/2) e^{−γs/2}.

BFSK
We have γb = γs and Pb = Ps. Therefore, the bit-error probability is given by

    Pb = (1/2) e^{−γb/2}.
MFSK
For M > 2, we have

    (i)  γb = (1/K) γs,
    (ii) Pb = ( 2^{K−1} / (2^K − 1) ) Ps   with K = log2 M (see MPPM).
8.1 Motivation
Consider BFSK. The signal waveforms of BFSK are

    U = 0 ↦ s1(t) = √(2P) cos(2πf1t),
    U = 1 ↦ s2(t) = √(2P) cos(2πf2t),

for t ∈ [0, T]. The minimum frequency separation such that s1(t) and s2(t) are orthogonal (assuming coherent detection) is

    ∆f = |f1 − f2| = 1/(2T).
In the following, we assume minimum frequency separation.
Remark
The BFSK signal x(t) can be generated with the same expression as above, when the phase pulse

    b(τ) = { 0        for τ < 0,
           { τ/(2T)   for 0 ≤ τ < T,
           { 1/2      for T ≤ τ

is used.
[Figure: phase tree φ(t) over t = 0 … 5T: from each node, the phase follows the slope associated with f2 or f1 (labels ±f1, ±f2), passing multiples of π at the symbol boundaries.]

[Figure: phase trellis φ(t) mod 2π: the same transitions folded into [0, 2π), giving a periodic trellis with branch labels ±f1, ±f2.]
t ∈ [0, T ].
Orthonormal functions:

    ψ1(t) = √(2/T) cos(2πf1t),
    ψ2(t) = √(2/T) cos(2πf2t),

t ∈ [0, T], with Es = P T.
Remark:
The modulation symbol at discrete time j is written as
Signal constellation:
[Figure: the four MSK signal points s1, s2, s3, s4 on the ψ1/ψ2 axes.]
Remark:
MSK and QPSK have the same signal constellation. However, the
two modulation schemes are not equivalent because their vector
encoders are different. This is discussed in the following.
Modulator:
The symbol Xj = (Xj,1, Xj,2)^T is transmitted in the time interval t ∈ [jT, (j+1)T), with

    t = jT + τ,   τ ∈ [0, T),
    Xj ↦ Xj(τ).

[Figure: Xj,1 and Xj,2 modulate √(2/T) cos(2πf1τ) and √(2/T) cos(2πf2τ), and the sum gives Xj(τ).]
Remark:
The index j is not necessary for describing the modulator. It is only introduced to have the same notation as is used for the vector encoder.
[Figure: accumulator: Uj enters a modulo-2 adder whose output Vj+1 is delayed by T to give the state Vj, which is fed back:]

    Vj+1 = Uj ⊕ Vj = Uj + Vj mod 2.
(ii) A shift register clocked at times T ("every T seconds"):
[Figure: Vj+1 → delay T → Vj.]
As Vj ∈ {0, 1}, this device can be in two different states. All possible transitions can be depicted in a state transition diagram:
[Figure: two states, 0 and 1; the input u = 0 keeps the state, and the input u = 1 toggles it.]
The values in the circles denote the states v, and the labels of the arrows denote the inputs u.
The state transitions and the associated input values can also be depicted in a trellis diagram. The initial state is assumed to be zero.
[Figure: trellis over depths 0 … 5 with states 0 and 1; branches are labeled with the input bit.]
1/s3 1/s4
0/s2
The transitions are labeled by u/x. The value u denotes the binary
source symbol at the encoder input, and x denotes the modulation
symbol at the encoder output.
Remark
The vector encoder is a finite-state machine as considered in the
previous section.
[Figure: trellis of the MSK vector encoder over depths 0 … 5; the branches along state 0 are labeled 0/s1, and the branches between the states carry the labels 1/s3, 1/s4 (and 0/s2), as in the previous figure.]
General Conventions
The last bit UL−1 is selected such that the encoder is driven back to state 0:

    VL−1 = 0 → UL−1 = 0,
    VL−1 = 1 → UL−1 = 1.
[Figure: terminating trellis section from depth j to j+1, with the transitions into state 0 labeled 0/s1 and 1/s4 (via s3).]
Observations:
[Figure: MSK vector encoder: the accumulator (Vj+1 = Uj ⊕ Vj, delay T) produces the state Vj; the pair (Uj, Vj) addresses a look-up table (labels in the figure: s1 ↔ 01, s2 ↔ 00, s3 ↔ 10, s4 ↔ 11) whose output is Xj = (Xj,1, Xj,2)^T.]
At time index j:
• input symbol Uj,
• state Vj,
Notice:
8.6 Demodulator
The received symbol Yj = (Yj,1, Yj,2)^T is determined in the time interval t ∈ [jT, (j+1)T), with

    t = jT + τ,   τ ∈ [0, T).

[Figure: Y(t) is correlated over each symbol interval, √(2/T) ∫_0^T (.) dτ, with cos(2πf1τ) and cos(2πf2τ) to obtain Yj,1 and Yj,2.]
[Figure: equivalent vector channel: Yj,1 = Xj,1 + Wj,1 and Yj,2 = Xj,2 + Wj,2, i.e., Yj = Xj + Wj.]
Or in mathematical terms:

    ‖yj − xj‖² = ‖yj‖² − 2⟨yj, xj⟩ + ‖xj‖²,

where ‖yj‖² is independent of xj and ‖xj‖² = Es. Using this decomposition and the expression for the likelihood function, the rule for ML sequence estimation (MLSE) can equivalently be formulated as follows:

    x̂[0,L−1] = argmax_{x[0,L−1] ∈ A} exp( −(1/N0) Σ_{j=0}^{L−1} ‖yj − xj‖² )
             = argmin_{x[0,L−1] ∈ A} Σ_{j=0}^{L−1} ‖yj − xj‖²
             = argmax_{x[0,L−1] ∈ A} Σ_{j=0}^{L−1} ⟨yj, xj⟩.
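The equivalence of the argmin and argmax forms (for equal-energy symbols) can be checked by brute force over short sequences; the symbol set and the received values below are made up:

```python
from itertools import product

def sqdist(y, x):
    return sum((a - b) ** 2 for a, b in zip(y, x))

def inner(y, x):
    return sum(a * b for a, b in zip(y, x))

# Equal-energy symbol set (MSK-like, energy 1 per symbol, made-up values):
symbols = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
ys = [(0.9, 0.2), (-0.1, -1.1), (0.4, 0.5)]   # made-up received sequence

# Brute force over all length-3 sequences: the minimizer of the summed
# distance equals the maximizer of the summed correlation.
best_dist = min(product(symbols, repeat=3),
                key=lambda xs: sum(sqdist(y, x) for y, x in zip(ys, xs)))
best_corr = max(product(symbols, repeat=3),
                key=lambda xs: sum(inner(y, x) for y, x in zip(ys, xs)))
assert best_dist == best_corr
```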
8.9 Canonical Decomposition of MSK
[Figure: complete MSK transceiver: the vector encoder (accumulator Vj+1 = Uj ⊕ Vj with delay T and the look-up table s1 ↔ 01, s2 ↔ 00, s3 ↔ 10, s4 ↔ 11) produces Xj,1, Xj,2, which modulate √(2/T) cos(2πf1τ) and √(2/T) cos(2πf2τ) to form X(t); the AWGN channel adds W(t); the receiver correlates Y(t) over each symbol interval, √(2/T) ∫_0^T (.) dτ, to obtain Yj,1, Yj,2, and maximum-likelihood sequence decoding produces Ûj.]
[Figure: two consecutive trellis sections j → j+1 with relabeled branches.]
The new labels are called branch metrics. They may be interpreted as a gain or a profit made when going along this branch.
[Figure: trellis over depths 0 … 5 with states 0 and π; the branches are labeled with the branch metrics ⟨y_n, s1⟩, ⟨y_n, s2⟩, ⟨y_n, s3⟩, ⟨y_n, s4⟩.]
Notice: The branch metric and the path metric are not necessarily
a mathematical metric.
Survivors
The Viterbi algorithm relies on the following property of optimum
paths:
[Figure: trellis with states 0 and π over depths 0 … 5.]
For each state in each depth n, there are several subpaths ending
in this state. The one with the largest metric (“the best path”) is
called the survivor at this state in this depth.
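The survivor recursion can be sketched as a generic maximizing Viterbi algorithm over a two-state trellis; the interface below is ours (a generic branch-metric callback, not the exact MSK labels from the figures):

```python
def viterbi(branch_metric, states, transitions, L, start=0):
    """Viterbi algorithm maximizing the path metric (sum of branch metrics).

    transitions: dict state -> list of (input_bit, next_state);
    branch_metric(j, state, input_bit) is the metric of step j.
    Returns the input sequence of the best path."""
    metric = {s: float('-inf') for s in states}
    metric[start] = 0.0
    path = {s: [] for s in states}
    for j in range(L):
        new_metric = {s: float('-inf') for s in states}
        new_path = {}
        for s in states:
            if metric[s] == float('-inf'):
                continue
            for u, t in transitions[s]:
                m = metric[s] + branch_metric(j, s, u)
                if m > new_metric[t]:          # keep only the survivor
                    new_metric[t] = m
                    new_path[t] = path[s] + [u]
        metric = new_metric
        path = {s: new_path.get(s, []) for s in states}
    best = max(states, key=lambda s: metric[s])
    return path[best]

# Two-state accumulator trellis (V' = U xor V); reward staying in state 0.
trans = {0: [(0, 0), (1, 1)], 1: [(0, 1), (1, 0)]}
bm = lambda j, s, u: 1.0 if (s == 0 and u == 0) else 0.0
assert viterbi(bm, [0, 1], trans, 5) == [0, 0, 0, 0, 0]
```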
Viterbi Algorithm
Remarks
[Figure: trellis with states 0 and π over depths 0 … 5.]
Observation
At depth n = 2, the two survivors (solid line and dashed line) will
look as depicted in (a) or (b), but never as in (c) or (d).
[Figure: four sketches (a)-(d) of survivor pairs over depths 0 … 2; in (a) and (b) the two survivors coincide except for the current state, while in (c) and (d) they differ earlier.]
The general result for the given MSK transmitter follows by induc-
tion:
The two survivors at any depth n are identical for j < n
and differ only in the current state.
Therefore, final decisions on any state V̂n (value of the vector-
encoder state at time n) can already be made at time n.
We now prove that the Viterbi algorithm makes the final decision about state V̂1 (the state at time j = 1) already at time 1.
Proof
By the definition of the inner product and the constellation, we have

    s2 = −s1,   s4 = −s3.

Therefore,
[Figure: trellis over depths 0 … 5.]

• Survivor at state v = 0 (time j = 1):
[Figure: the two candidate subpaths ending in state 0 at depth 2, with metrics built from ±⟨y0, s1⟩, ±⟨y0, s3⟩, ±⟨y1, s1⟩, ±⟨y1, s3⟩.]

• Survivor at state v = 1 ("π") (time j = 1):
[Figure: the two candidate subpaths ending in state π at depth 2, with the corresponding metrics.]

Obviously, either the two upper paths or the two lower paths are the survivors. Therefore, the two survivors go through the same state V̂1.
ML State Sequence
By induction, we obtain the following optimal decision rule for state Vj based on y_{j−1} and y_j:

    V̂j = { 0  if ⟨y_{j−1}, s3⟩ − ⟨y_j, s3⟩ < ⟨y_{j−1}, s1⟩ + ⟨y_j, s1⟩,
         { 1  if ⟨y_{j−1}, s3⟩ − ⟨y_j, s3⟩ > ⟨y_{j−1}, s1⟩ + ⟨y_j, s1⟩,

j = 1, 2, …, L − 1. (Remember: VL = 0 by convention.)
Substituting this in the decision rule and dividing by √Es, we obtain the following simplified (and still optimal) decision rule:

    V̂j = { 0  if y_{j−1,2} − y_{j,2} < y_{j−1,1} + y_{j,1},
         { 1  if y_{j−1,2} − y_{j,2} > y_{j−1,1} + y_{j,1},

j = 1, 2, …, L − 1.
[Figure: implementation: Yj,1, Yj,2 and their delayed versions Yj−1,1, Yj−1,2 (delay T) are combined into Zj; V̂j = 0 if zj > 0, V̂j = 1 if zj < 0.]
Vector-Encoder Inverse
Given the sequence of state estimates {V̂j}, the sequence of bit estimates {Ûj} can be computed according to
The following device implements this operation:
The most likely error-event for high SNR is that the transmitted
path and the estimated path differ in one state. This error-event
leads to two bit errors.
Differential Encoder (DE)
[Figure: bj enters a modulo-2 adder with the delayed output cj (delay T), producing cj+1.]
Operation:

    cj+1 = bj ⊕ cj,

j = 0, 1, 2, …. Initialization: c0 = 0.
Differential Decoder (DD)
[Figure: bj and its delayed copy bj−1 (delay T) enter a modulo-2 adder, producing cj.]
Operation:

    cj = bj ⊕ bj−1,

j = 0, 1, 2, …. Initialization: b−1 = 0.
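Together, DE and DD are inverse operations; a quick sketch:

```python
def diff_encode(bits):
    """Differential encoder: c[j+1] = b[j] xor c[j], with c[0] = 0.
    Returns the encoded sequence c[1..len(bits)]."""
    c, out = 0, []
    for b in bits:
        c ^= b
        out.append(c)
    return out

def diff_decode(bits):
    """Differential decoder: c[j] = b[j] xor b[j-1], with b[-1] = 0."""
    prev, out = 0, []
    for b in bits:
        out.append(b ^ prev)
        prev = b
    return out

msg = [1, 0, 1, 1, 0, 1]
assert diff_decode(diff_encode(msg)) == msg   # DD inverts DE
```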
8.14 Precoded MSK
The DD(!) is used as a precoder and is placed before the vector encoder. The DE(!) is placed behind the vector-encoder inverse.
[Figure: Uj → DD → MSK transmitter → X(t); the AWGN channel adds W(t); Y(t) → MSK receiver / DE → Ûj−1.]
[Figure: precoded MSK encoder: the precoded bit Uj′ and the state Vj = Uj−1 address the look-up table (labels s1 ↔ 01, s2 ↔ 00, s3 ↔ 10, s4 ↔ 11), producing Xj,1, Xj,2; trellis depths 0 … 5 shown below.]
and so

    Y0,1 = √Es + W0,1,   Y0,2 = W0,2,
    Y1,1 = √Es + W1,1,   Y1,2 = W1,2.
Remember:
Motivation
The phase function of MSK is continuous, but not its derivative.
These “edges” in the transmit signal lead to some high-frequency
components. By avoiding these “edges”, i.e., by smoothing the
phase function, the spectrum of the transmit signal can be made
more compact.
Concept
The output signal of the transmitter is generated similarly to that in MSK:

    x(t) = √(2P) cos( 2πf1t + φ(t) ),   φ(t) = Σ_{l=0}^{L−1} u_l · 2π b(t − lT),
(i) The phase pulse may stretch over more than one symbol duration T.
(ii) The phase pulse may be selected such that its derivative is zero for t = 0 and t = JT.
(iii) The phase pulse is the integral over the frequency pulse:

    b(τ) = ∫_0^τ c(τ′) dτ′.
Part III
Appendix
A Geometrical Representation of
Signals
A.1 Sets of Orthonormal Functions
The D real-valued functions

    ψ1(t), ψ2(t), …, ψD(t)

form an orthonormal set if

    ∫_{−∞}^{+∞} ψk(t) ψl(t) dt = { 0  for k ≠ l (orthogonality),
                                 { 1  for k = l (normalization).
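These two conditions are easy to verify numerically for sampled functions; the sketch below uses two Walsh-type functions on [0, T] (the concrete shapes and sampling are our choices):

```python
import math

T, N = 1.0, 1000
dt = T / N
t = [(i + 0.5) * dt for i in range(N)]      # midpoint samples of [0, T]

# Two Walsh-type functions on [0, T]:
psi1 = [1.0 / math.sqrt(T)] * N
psi2 = [(1.0 if ti < T / 2 else -1.0) / math.sqrt(T) for ti in t]

def integral(f, g):
    """Riemann-sum approximation of the integral of f(t)*g(t) over [0, T]."""
    return sum(a * b for a, b in zip(f, g)) * dt

assert abs(integral(psi1, psi1) - 1.0) < 1e-6   # normalization
assert abs(integral(psi2, psi2) - 1.0) < 1e-6
assert abs(integral(psi1, psi2)) < 1e-6         # orthogonality
```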
Example: D = 3
[Figure: three orthonormal functions ψ1(t), ψ2(t), ψ3(t), each piecewise constant with values ±1/√T on [0, T].]
Example: (continued)
The following functions are elements of Sψ:
[Figure: two waveforms s1(t), s2(t), piecewise constant with values ±2/√T on [0, T].]
The set Sψ is a vector space with respect to the following operations:
• Addition

    + : (s1(t), s2(t)) ↦ s1(t) + s2(t)
Example: (continued)
[Figure: the vectors s1 and s2 in the three-dimensional space spanned by ψ1, ψ2, ψ3.]

    s1(t) + s2(t) ↦ s1 + s2,
    a s(t) ↦ a s.
Scalar product in R^D:

    ⟨s1, s2⟩ := Σ_{i=1}^D s1,i s2,i

for s1, s2 ∈ R^D.
Inserting the two sums into the definition of the scalar product, we obtain

    ⟨s1(t), s2(t)⟩ = ∫ ( Σ_{k=1}^D s1,k ψk(t) ) ( Σ_{l=1}^D s2,l ψl(t) ) dt
                   = Σ_{k=1}^D Σ_{l=1}^D s1,k s2,l ∫ ψk(t) ψl(t) dt
                   = Σ_{k=1}^D s1,k s2,k = ⟨s1, s2⟩,

where the integral equals 1 for k = l and 0 otherwise ("orthonormality").
Hence the mapping Sψ → R^D preserves the scalar product, i.e., it is an isometry.
Norms in Sψ:

    ‖s(t)‖² := ⟨s(t), s(t)⟩ = ∫ s²(t) dt = Es

Norms in R^D:

    ‖s‖² := ⟨s, s⟩ = Σ_{i=1}^D s_i²

From the isometry and the definitions of the norms, we obtain the Parseval relation

    ‖s(t)‖² = ‖s‖².
[Figure: synthesis of s(t): each coefficient sk excites a filter with impulse response ψk(t) via a Dirac impulse δ(t), and the filter outputs are summed.]

    s(t) = Σ_{k=1}^D sk δ(t) ∗ ψk(t)   with Dirac function δ(t)
[Figure: correlator: s1(t) and s2(t) are multiplied and integrated, ∫_0^T (.) dt, to give ⟨s1(t), s2(t)⟩.]
We want

    ⟨s1(t), s2(t)⟩ = ∫_0^T s1(τ) s2(τ) dτ ≟ ∫_{−∞}^{+∞} s1(τ) g(t′ − τ) dτ |_{t′ = t0}
Example:
[Figure: s2(t) on [0, T] and its time-reversed, shifted version s2(t0 − t).]
[Figure: matched-filter implementation: s1(t) passes through the matched filter s2(T − t) and is sampled at time T to give ⟨s1(t), s2(t)⟩.]
Correlator-based Implementation
[Figure: s(t) is multiplied by each ψd(t) and integrated, sd = ∫_0^T s(t) ψd(t) dt, d = 1, …, D.]

Matched-filter based Implementation
[Figure: s(t) passes through the matched filters (MF) ψd(T − t), sampled at time T, to give s1, …, sD.]
Example: Orthonormalization
Original signal waveforms:
[Figure: four waveforms s1(t), …, s4(t), piecewise constant on [0, 3] with values in {−1, 0, 1} (and ±1/√2).]
B Orthogonal Transformation of a
White Gaussian Noise Vector
The statistical properties of a white Gaussian noise vector (WGNV) are invariant with respect to orthonormal transformations. This is first shown for a vector with two dimensions, and then for a vector with an arbitrary number of dimensions.
Consider the original coordinate system (ξ1, ξ2) and the rotated coordinate system (ξ1′, ξ2′).
[Figure: the noise vector w with components w1, w2 in the original coordinates and w1′, w2′ in the coordinates rotated by α = π/4.]
Proof
Comment
In the case D = 2, V takes the form

    V = (  cos α   sin α )
        ( −sin α   cos α )
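The invariance can be checked empirically by rotating simulated noise pairs and verifying that the rotated components are again uncorrelated with unit variance (sample size and tolerances are our choices):

```python
import math
import random

random.seed(0)
alpha = math.pi / 4
n = 50_000

# Rotate i.i.d. N(0,1) noise pairs by alpha.
w1p, w2p = [], []
for _ in range(n):
    w1, w2 = random.gauss(0, 1), random.gauss(0, 1)
    w1p.append(math.cos(alpha) * w1 + math.sin(alpha) * w2)
    w2p.append(-math.sin(alpha) * w1 + math.cos(alpha) * w2)

mean = lambda xs: sum(xs) / len(xs)
var1 = mean([x * x for x in w1p])
var2 = mean([x * x for x in w2p])
cov = mean([a * b for a, b in zip(w1p, w2p)])

assert abs(var1 - 1.0) < 0.05 and abs(var2 - 1.0) < 0.05   # unit variance
assert abs(cov) < 0.05                                     # uncorrelated
```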
Pb → 0 is possible if R < CW .
Pb → 0 is impossible if R > CW .
Example: MPAM
The rate is given by R = (1/T) log2 M. The bandwidth required to transmit pulses of duration T is about W = 1/T. The spectral efficiency is thus

    R/W = log2 M   bit/s/Hz.

Error probabilities can be found in [1, p. 412, 435].
Example: MPPM
The rate is given by R = (1/T) log2 M. The bandwidth required to transmit pulses of duration T/M is about W = M/T. The spectral efficiency is thus

    R/W = (log2 M)/M   bit/s/Hz.

Error probabilities can be found in [1, p. 426, 435].