
EE810: Communication Theory I


Signal Constellations, Optimum Receivers and Error Probabilities

Figure: System model. The source emits message mi, the modulator (transmitter) maps it to the signal si(t), the channel adds white Gaussian noise w(t), and the demodulator (receiver) processes r(t) = si(t) + w(t) to produce an estimate of mi.

There are M messages. For example, each message may carry λ = log2 M bits.
If the bit duration is Tb, then the message (or symbol) duration is Ts = λTb.

The messages in two different time slots are statistically independent.

The M signals si(t), i = 1, 2, . . . , M, can be arbitrary but of finite energies Ei. The a priori message probabilities are Pi. Unless stated otherwise, it is assumed that the messages are equally probable.

The noise w(t) is modeled as a stationary, zero-mean, white Gaussian process with power spectral density N0/2.

The receiver observes the received signal r(t) = si(t) + w(t) over one symbol duration and decides which message was transmitted. The criterion for the optimum receiver is to minimize the probability of making an error.

Figure: N-dimensional observation space (r1, r2, . . . , rN) partitioned into decision regions: choose s1(t) or m1, choose s2(t) or m2, . . . , choose sM(t) or mM.

Optimum receivers and their error probabilities shall be considered for various signal constellations.

Figure: Generic receiver front end: r(t) = si(t) + w(t) is integrated over [(k − 1)Ts, kTs] and sampled at t = kTs to produce the observation r1, which is passed to the decision device.


Binary Signalling
Received signal:

           On-Off                 Orthogonal                  Antipodal
H1 (0):    r(t) = w(t)            r(t) = s1(t) + w(t)         r(t) = −s2(t) + w(t)
H2 (1):    r(t) = s2(t) + w(t)    r(t) = s2(t) + w(t)         r(t) = s2(t) + w(t)
                                  ∫0^Tb s1(t)s2(t) dt = 0

Decision space and receiver implementation:


Figure: Decision spaces and receiver implementations for (a) on-off, (b) orthogonal and (c) antipodal signalling. In each case r(t) is correlated with the basis function(s) over [0, Tb]. On-off (φ1(t) = s2(t)/√E): decide 1 if r1 ≥ √E/2, otherwise 0. Orthogonal: decide 1 if r2 ≥ r1, otherwise 0. Antipodal (φ1(t) = s2(t)/√E): decide 1 if r1 ≥ 0, otherwise 0.


Error performance:

                         On-Off           Orthogonal       Antipodal
From decision space:     Q(√(E/2N0))      Q(√(E/N0))       Q(√(2E/N0))
With the same Eb:        Q(√(Eb/N0))      Q(√(Eb/N0))      Q(√(2Eb/N0))

The above shows that, with the same average energy per bit Eb = (E1 + E2)/2, antipodal signalling is 3 dB more efficient than orthogonal signalling, which has the same performance as on-off signalling. This is graphically shown below.
Figure: Pr[error] versus Eb/N0 (dB) for on-off/orthogonal signalling and for antipodal signalling.
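As a quick numerical check, the sketch below (Python, assuming SciPy is available; the function names are mine) evaluates the three error-probability expressions above at the same Eb/N0. The 3 dB advantage of antipodal signalling shows up directly in the Q-function argument.

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    # Gaussian tail function: Q(x) = 0.5*erfc(x/sqrt(2))
    return 0.5 * erfc(x / np.sqrt(2))

def binary_error_probs(EbN0_dB):
    """Pr[error] of on-off, orthogonal and antipodal signalling
    at the same average energy per bit Eb."""
    ebn0 = 10 ** (EbN0_dB / 10)
    p_onoff      = Q(np.sqrt(ebn0))        # Q(sqrt(Eb/N0))
    p_orthogonal = Q(np.sqrt(ebn0))        # Q(sqrt(Eb/N0))
    p_antipodal  = Q(np.sqrt(2 * ebn0))    # Q(sqrt(2Eb/N0))
    return p_onoff, p_orthogonal, p_antipodal

for dB in (4, 8, 12):
    print(dB, binary_error_probs(dB))
```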


Example 1: In passband transmission, the digital information is encoded as a variation of the amplitude, frequency or phase (or combinations thereof) of a sinusoidal carrier signal. The carrier frequency is much higher than the highest frequency of the modulating signals (messages). Binary amplitude-shift keying (BASK), binary phase-shift keying (BPSK) and binary frequency-shift keying (BFSK) are examples of on-off, antipodal and orthogonal signalling, respectively.
Figure: (a) Binary data, (b) modulating signal, (c) BASK signal, (d) BPSK signal, (e) BFSK signal over 0 to 9Tb. BASK: s1(t) = 0, s2(t) = V cos(2πfc t), fc = n/Tb (n an integer). BPSK: s1(t) = −V cos(2πfc t), s2(t) = V cos(2πfc t), fc = n/Tb. BFSK: s1(t) = V cos(2πf1 t), s2(t) = V cos(2πf2 t), with f2 + f1 = n/(2Tb) and f2 − f1 = m/(2Tb) for integers n, m.
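The short sketch below (Python/NumPy; the amplitude V, bit pattern, carrier frequencies and sampling rate are illustrative choices, not values from the notes) generates BASK, BPSK and BFSK waveforms from a bit sequence in the manner described above.

```python
import numpy as np

def binary_waveforms(bits, V=1.0, Tb=1.0, fc=5.0, f1=4.5, f2=5.0, fs=1000):
    """Return (t, bask, bpsk, bfsk) sampled at fs samples/second.
    fc = n/Tb and f2 - f1 = m/(2Tb) are assumed so that the BFSK tones are orthogonal."""
    t = np.arange(0, len(bits) * Tb, 1 / fs)
    bask = np.zeros_like(t)
    bpsk = np.zeros_like(t)
    bfsk = np.zeros_like(t)
    for k, b in enumerate(bits):
        idx = (t >= k * Tb) & (t < (k + 1) * Tb)
        bask[idx] = b * V * np.cos(2 * np.pi * fc * t[idx])              # on-off
        bpsk[idx] = (2 * b - 1) * V * np.cos(2 * np.pi * fc * t[idx])    # antipodal
        bfsk[idx] = V * np.cos(2 * np.pi * (f2 if b else f1) * t[idx])   # orthogonal
    return t, bask, bpsk, bfsk

t, bask, bpsk, bfsk = binary_waveforms([1, 0, 1, 1, 0])
```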


Example 2: Various binary baseband signalling schemes are shown below. The optimum receiver and its error performance follow easily once the two signals s1(t) and s2(t) used in each scheme are identified.
Figure: Binary baseband signalling formats for a given binary data sequence and clock: (a) NRZ code, (b) NRZ-L, (c) RZ code, (d) RZ-L, (e) Bi-Phase, (f) Bi-Phase-L, (g) Miller code, (h) Miller-L.


M-ary Pulse Amplitude Modulation

si(t) = (i − 1)Δ φ1(t),   i = 1, 2, . . . , M                    (1)

Figure: M-ary PAM constellation along φ1(t): s1(t) at 0, s2(t) at Δ, s3(t) at 2Δ, . . . , sk(t) at (k − 1)Δ, . . . , sM(t) at (M − 1)Δ, together with the conditional pdf f(r1|sk(t)) and the decision regions (choose s1(t), . . . , choose sk(t), . . . , choose sM(t)).

Decision rule:

choose sk(t) if (k − 3/2)Δ < r1 < (k − 1/2)Δ,  k = 2, 3, . . . , M − 1
choose s1(t) if r1 < Δ/2, and choose sM(t) if r1 > (M − 3/2)Δ

The optimum receiver computes r1 = ∫_{(k−1)Ts}^{kTs} r(t)φ1(t) dt and determines which signal r1 is closest to:

Figure: Receiver for M-ary PAM: r(t) is multiplied by φ1(t), integrated over [(k − 1)Ts, kTs], sampled at t = kTs to give r1, and passed to the decision device.
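A minimal sketch of this threshold rule (Python; Δ and the noisy observation r1 are function arguments, and signal indices run from 1 to M as in (1)):

```python
def pam_decision(r1, M, delta):
    """Map the correlator output r1 to the index i of the decided
    M-ary PAM signal s_i(t) = (i-1)*delta*phi_1(t)."""
    # Nearest amplitude on the grid 0, delta, ..., (M-1)*delta,
    # which is equivalent to the threshold rule above.
    i = int(round(r1 / delta)) + 1
    return min(max(i, 1), M)   # clip to the end signals s_1 and s_M

# Example: with delta = 2, an observation r1 = 4.7 decides s_3(t)
assert pam_decision(4.7, M=8, delta=2.0) == 3
```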


Probability of error:

Figure: Conditional pdfs f(r1|sk(t)) centred on the signal amplitudes, with the decision regions for s1(t), the inner signals sk(t), and sM(t).

For the M − 2 inner signals:  Pr[error|si(t)] = 2Q(Δ/√(2N0)).
For the two end signals:      Pr[error|si(t)] = Q(Δ/√(2N0)).

Since Pr[si(t)] = 1/M, then

Pr[error] = Σ_{i=1}^{M} Pr[si(t)] Pr[error|si(t)] = (2(M − 1)/M) Q(Δ/√(2N0))

The above is the symbol (or message) error probability. If each message carries λ = log2 M bits, the bit error probability depends on the mapping and is often tedious to compute exactly. If the mapping is a Gray mapping (i.e., the mappings of the nearest signals differ in only one bit), a good approximation is Pr[bit error] ≈ Pr[error]/λ.

A modified signal set to minimize the average transmitted energy (M is even):

Figure: Modified M-ary PAM constellation with signal amplitudes ±Δ/2, ±3Δ/2, . . . , ±(M − 1)Δ/2 along φ1(t), and the corresponding correlation receiver (integrate r(t)φ1(t) over [0, Ts], sample at t = Ts).

If φ1(t) is a sinusoidal carrier, i.e., φ1(t) = √(2/Ts) cos(2πfc t), 0 ≤ t ≤ Ts, with fc = k/Ts for an integer k, the scheme is also known as M-ary amplitude shift keying (M-ASK).

To express the probability in terms of Eb/N0, compute the average transmitted energy per message (or symbol) as follows:

Es = (1/M) Σ_{i=1}^{M} Ei = (Δ²/(4M)) Σ_{i=1}^{M} (2i − 1 − M)²
   = (Δ²/(4M)) · (M(M² − 1)/3) = (M² − 1)Δ²/12                          (2)

Thus the average transmitted energy per bit is

Eb = Es/log2 M = (M² − 1)Δ²/(12 log2 M)                                  (3)

so that Δ² = (12 log2 M)Eb/(M² − 1) and

Pr[error] = (2(M − 1)/M) Q(√((12 log2 M)Eb/(2(M² − 1)N0)))
          = (2(M − 1)/M) Q(√((6 log2 M)Eb/((M² − 1)N0)))                 (4)

Figure: Symbol error probability of M-ary ASK for M = 2, 4, 8, 16 versus Eb/N0 (dB).
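A small sketch of equation (4) (Python/SciPy; the function name is mine); it reproduces the curves in the figure above for the listed values of M.

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

def ask_symbol_error(M, EbN0_dB):
    """Symbol error probability of M-ary ASK/PAM, equation (4)."""
    ebn0 = 10 ** (np.asarray(EbN0_dB) / 10)
    lam = np.log2(M)
    return 2 * (M - 1) / M * Q(np.sqrt(6 * lam * ebn0 / (M**2 - 1)))

for M in (2, 4, 8, 16):
    print(M, ask_symbol_error(M, 10.0))
```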


M-ary Phase Shift Keying (M-PSK)

The messages are distinguished by M phases of a sinusoidal carrier:

si(t) = V cos(2πfc t − (i − 1)2π/M),   0 < t < Ts;  i = 1, 2, . . . , M
      = si1 φ1(t) + si2 φ2(t),  where si1 = √E cos((i − 1)2π/M), si2 = √E sin((i − 1)2π/M)      (5)

where E = V²Ts/2 and the two orthonormal basis functions are:

φ1(t) = V cos(2πfc t)/√E ;   φ2(t) = V sin(2πfc t)/√E                                            (6)

M = 8:

Figure 1: Signal space for 8-PSK with Gray mapping: signal points on a circle of radius √E, with s1(t) ↔ 000 on the φ1(t) axis and, proceeding around the circle, s2(t) ↔ 001, s3(t) ↔ 011, s4(t) ↔ 010, s5(t) ↔ 110, s6(t) ↔ 111, s7(t) ↔ 101, s8(t) ↔ 100.


Decision regions and receiver implementation:


Figure: Decision regions for M-PSK (angular sectors Z1, Z2, . . . of width 2π/M around each signal point) and the receiver: r(t) is correlated with φ1(t) and φ2(t) over [0, Ts] to give (r1, r2); the decision device computes (r1 − si1)² + (r2 − si2)² for i = 1, 2, . . . , M and chooses the smallest.

Probability of error:

Pr[error] = (1/M) Σ_{i=1}^{M} Pr[error|si(t)] = Pr[error|s1(t)]
          = 1 − Pr[(r1, r2) ∈ Z1 | s1(t)]
          = 1 − ∫∫_{(r1,r2)∈Z1} p(r1, r2|s1(t)) dr1 dr2                          (7)

where p(r1, r2|s1(t)) = (1/(πN0)) exp{−[(r1 − √E)² + r2²]/N0}.


Changing the variables V = √(r1² + r2²) and Θ = tan⁻¹(r2/r1) (polar coordinate system), the joint pdf of V and Θ is

p(V, Θ) = (V/(πN0)) exp{−(V² + E − 2√E V cos Θ)/N0}                               (8)

Integration of p(V, Θ) over the range of V yields the pdf of Θ:

p(Θ) = ∫_0^∞ p(V, Θ) dV
     = (1/(2π)) e^{−(E/N0) sin²Θ} ∫_0^∞ V e^{−(V − √(2E/N0) cos Θ)²/2} dV          (9)

With the above pdf of Θ, the error probability can be computed as:

Pr[error] = 1 − Pr[−π/M ≤ Θ ≤ π/M | s1(t)]
          = 1 − ∫_{−π/M}^{π/M} p(Θ) dΘ                                            (10)

In general, the integral of p(Θ) does not reduce to a simple form and must be evaluated numerically, except for M = 2 and M = 4.

An approximation to the error probability for large values of M and for large symbol signal-to-noise ratio (SNR) γs = E/N0 can be obtained as follows. First the error probability is lower bounded by

Pr[error] = Pr[error|s1(t)]
          > Pr[(r1, r2) is closer to s2(t) than to s1(t) | s1(t)]
          = Q(√(2E/N0) sin(π/M))                                                  (11)

The upper bound is obtained by the following union bound:

Pr[error] = Pr[error|s1(t)]
          < Pr[(r1, r2) is closer to s2(t) OR sM(t) than to s1(t) | s1(t)]
          < 2Q(√(2E/N0) sin(π/M))                                                 (12)

Since the lower and upper bounds differ only by a factor of two, they are very tight at high SNR. Thus a good approximation to the error probability of M-PSK is:

Pr[error] ≈ 2Q(√(2E/N0) sin(π/M)) = 2Q(√(2λEb/N0) sin(π/M))                       (13)
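The bounds (11)-(12) and the approximation (13) are straightforward to evaluate; a minimal Python/SciPy sketch (function name mine):

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

def mpsk_error_approx(M, EbN0_dB):
    """Lower bound (11) and approximation/upper bound (12)-(13) for M-PSK."""
    lam = np.log2(M)
    ebn0 = 10 ** (EbN0_dB / 10)
    arg = np.sqrt(2 * lam * ebn0) * np.sin(np.pi / M)
    return Q(arg), 2 * Q(arg)

print(mpsk_error_approx(8, 10.0))
```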


Quadrature Phase Shift Keying (QPSK):

Figure: QPSK signal space with Gray mapping: s1(t) ↔ m1 = 00, s2(t) ↔ m2 = 01, s3(t) ↔ m3 = 11, s4(t) ↔ m4 = 10, and the four quadrant decision regions.

Pr[correct] = Pr[r1 ≥ 0 and r2 ≥ 0 | s1(t)] = [1 − Q(√(E/N0))]²                   (14)

Pr[error] = 1 − Pr[correct] = 1 − [1 − Q(√(E/N0))]²
          = 2Q(√(E/N0)) − Q²(√(E/N0))                                             (15)


The bit error probability of QPSK with Gray mapping can be obtained by considering the conditional message error probabilities:

Pr[m2|m1] = Q(√(E/N0)) [1 − Q(√(E/N0))]                                           (16)
Pr[m3|m1] = Q²(√(E/N0))                                                           (17)
Pr[m4|m1] = Q(√(E/N0)) [1 − Q(√(E/N0))]                                           (18)

The bit error probability is therefore

Pr[bit error] = 0.5 Pr[m2|m1] + 0.5 Pr[m4|m1] + 1.0 Pr[m3|m1]
              = Q(√(E/N0)) = Q(√(2Eb/N0))                                         (19)

where the viewpoint is taken that one of the two bits is chosen at random, i.e., with a probability of 0.5. The above shows that QPSK with Gray mapping has exactly the same bit-error-rate (BER) performance as BPSK, while its bit rate can be doubled in the same bandwidth.

In general, the bit error probability of M-PSK is difficult to obtain for an arbitrary mapping. For Gray mapping, again a good approximation is Pr[bit error] ≈ Pr[symbol error]/λ. The exact calculation of the bit error probability can be found in the following paper:

J. Lassing, E. G. Strom, E. Agrell and T. Ottosson, "Computation of the Exact Bit-Error Rate of Coherent M-ary PSK With Gray Code Bit Mapping," IEEE Trans. on Communications, vol. 51, Nov. 2003, pp. 1758-1760.
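A short Monte-Carlo sketch (Python/NumPy; the symbol count, seed and Eb/N0 value are arbitrary choices) that checks the Gray-mapped QPSK bit error probability in (19) against simulation:

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

def qpsk_ber_sim(EbN0_dB, n_sym=200_000, rng=np.random.default_rng(0)):
    ebn0 = 10 ** (EbN0_dB / 10)
    Eb = 1.0
    N0 = Eb / ebn0
    # Gray-mapped QPSK: each quadrature axis carries one bit with amplitude +/- sqrt(Eb)
    bits = rng.integers(0, 2, size=(n_sym, 2))
    sym = np.sqrt(Eb) * (2 * bits - 1)                      # in-phase, quadrature components
    r = sym + rng.normal(scale=np.sqrt(N0 / 2), size=sym.shape)
    bit_errors = np.count_nonzero((r > 0) != (bits > 0))
    return bit_errors / (2 * n_sym)

EbN0_dB = 6.0
print("simulated:", qpsk_ber_sim(EbN0_dB))
print("theory   :", Q(np.sqrt(2 * 10 ** (EbN0_dB / 10))))
```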


Comparison of M-PSK and BPSK

Pr[error]_M-ary ≈ 2Q(√(2λEb/N0) sin(π/M)),   (M > 4)
Pr[error]_QPSK = 2Q(√(2Eb/N0)) − Q²(√(2Eb/N0))
Pr[error]_BPSK = Q(√(2Eb/N0))

Figure: Symbol error probability of M-PSK for M = 2, 4, 8, 16, 32 versus Eb/N0 (dB), together with the lower and upper bounds (11) and (12).

For the same error probability, the energy penalty of M-PSK relative to BPSK follows from comparing the Q-function arguments and is 1/(λ sin²(π/M)); the bandwidth, on the other hand, is reduced by the factor λ = log2 M:

M     M-ary BW/binary BW     λ sin²(π/M)     M-ary energy/binary energy
8            1/3                 0.44                 3.6 dB
16           1/4                 0.15                 8.2 dB
32           1/5                 0.05                13.0 dB
64           1/6                 0.014               18.4 dB
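A small check of the last column (Python; the function name is mine). Small differences from the table come from rounding λ sin²(π/M) before converting to dB.

```python
import numpy as np

def psk_energy_penalty_dB(M):
    """Extra Eb/N0 (in dB) that M-PSK needs relative to BPSK
    for the same Q-function argument: 10*log10(1/(lambda*sin^2(pi/M)))."""
    lam = np.log2(M)
    return 10 * np.log10(1 / (lam * np.sin(np.pi / M) ** 2))

for M in (8, 16, 32, 64):
    print(M, round(psk_energy_penalty_dB(M), 1))
```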


M-ary Quadrature Amplitude Modulation

In QAM, the messages are encoded into both the amplitudes and phases of the carrier:

si(t) = Vc,i √(2/Ts) cos(2πfc t) + Vs,i √(2/Ts) sin(2πfc t)                        (20)
      = √(Ei) √(2/Ts) cos(2πfc t − θi),   0 ≤ t ≤ Ts;  i = 1, 2, . . . , M         (21)

where Ei = Vc,i² + Vs,i² and θi = tan⁻¹(Vs,i/Vc,i).

Examples of QAM constellations are shown on the next page.

Rectangular QAM constellations, shown below, are the most popular. For rectangular QAM, Vc,i, Vs,i ∈ {(2i − 1 − √M)Δ/2, i = 1, 2, . . . , √M}.

Figure: Examples of QAM constellations.

Another example: The 16-QAM signal constellation shown below is an international standard for telephone-line modems (called V.29). The decision regions of
the minimum distance receiver are also drawn.

Figure: The V.29 16-QAM constellation with its 4-bit labels (axis crossings at ±1, ±3, ±5) and the decision regions of the minimum-distance receiver.

Receiver implementation (same as for M-PSK):

Figure: Receiver: r(t) is correlated with φ1(t) and φ2(t) over [0, Ts] to give (r1, r2); the decision device computes (r1 − si1)² + (r2 − si2)² for i = 1, 2, . . . , M and chooses the smallest.


Error performance of rectangular M-QAM: For M = 2^λ, where λ is even, QAM consists of two ASK signals on quadrature carriers, each having √M = 2^{λ/2} signal points. Thus the probability of symbol error can be computed as

Pr[error] = 1 − Pr[correct] = 1 − (1 − Pr_√M[error])²                              (22)

where Pr_√M[error] is the probability of error of √M-ary ASK:

Pr_√M[error] = 2(1 − 1/√M) Q(√(3Es/((M − 1)N0)))                                   (23)

Note that in the above equation, Es is the average energy per symbol of the QAM constellation.
Figure: Symbol error probability of rectangular M-QAM for M = 4, 16, 64 versus Eb/N0 (dB).
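A minimal Python/SciPy sketch of (22)-(23); Es = λEb is used so the result can be plotted against Eb/N0 as in the figure above (function name mine).

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

def qam_symbol_error(M, EbN0_dB):
    """Symbol error probability of rectangular M-QAM, equations (22)-(23)."""
    lam = np.log2(M)                                   # bits per symbol (assumed even)
    EsN0 = lam * 10 ** (np.asarray(EbN0_dB) / 10)      # Es/N0 = lambda * Eb/N0
    p_ask = 2 * (1 - 1 / np.sqrt(M)) * Q(np.sqrt(3 * EsN0 / (M - 1)))
    return 1 - (1 - p_ask) ** 2

for M in (4, 16, 64):
    print(M, qam_symbol_error(M, 10.0))
```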


Comparison of M-QAM and M-PSK:

Pr[error]_PSK ≈ 2Q(√(2Es/N0) sin(π/M))                                             (24)
Pr_√M-ASK[error] = 2(1 − 1/√M) Q(√(3Es/((M − 1)N0)))                               (25)

Since the error probability is dominated by the argument of the Q-function, one may simply compare the arguments of Q for the two modulation schemes. The ratio of the (squared) arguments is

R_M = (3/(M − 1)) / (2 sin²(π/M))                                                  (26)

M     10 log10 R_M
8        1.65 dB
16       4.20 dB
32       7.02 dB
64       9.95 dB

Thus M-ary QAM yields a better performance than M-ary PSK for the same bit rate (i.e., the same M) and the same transmitted energy (Eb/N0).
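The entries of the table follow directly from (26); a short sketch (Python, function name mine):

```python
import numpy as np

def qam_vs_psk_gain_dB(M):
    """10*log10(R_M) of equation (26): SNR advantage of M-QAM over M-PSK."""
    R_M = (3 / (M - 1)) / (2 * np.sin(np.pi / M) ** 2)
    return 10 * np.log10(R_M)

for M in (8, 16, 32, 64):
    print(M, round(qam_vs_psk_gain_dB(M), 2))   # 1.65, 4.2, 7.02, 9.95 dB
```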


M-ary Orthogonal Signals

si(t) = √E φi(t),   i = 1, 2, . . . , M                                            (27)

Figure: Signal space of orthogonal signalling for (a) M = 2 and (b) M = 3: the signals lie on mutually orthogonal axes, each at distance √E from the origin.

The decision rule of the minimum distance receiver follows easily:

Choose si(t) if ri > rj,  j = 1, 2, 3, . . . , M;  j ≠ i                            (28)

Minimum distance receiver:

Figure: Bank of M correlators: r(t) is correlated with φi(t) = si(t)/√E, i = 1, . . . , M, over [0, Ts] and sampled at t = Ts to give r1, . . . , rM; the decision device chooses the largest. (Note: instead of φi(t) one may use si(t).)


When the M orthonormal functions are chosen to be orthogonal sinusoidal carriers, the signal set is known as M-ary frequency shift keying (M-FSK):

si(t) = V cos(2πfi t),  0 ≤ t ≤ Ts;  i = 1, 2, . . . , M                            (29)

where the frequencies are chosen so that the signals are orthogonal over the interval [0, Ts]:

fi = (k + (i − 1)) · 1/(2Ts),   i = 1, 2, . . . , M,  k an integer                   (30)
Error performance of M orthogonal signals:
To determine the message error probability, consider that message s1(t) was transmitted. Due to the symmetry of the signal space and because the messages are equally likely,

Pr[error] = Pr[error|s1(t)] = 1 − Pr[correct|s1(t)]
          = 1 − Pr[all rj < r1 : j ≠ 1 | s1(t)]
          = 1 − ∫_{R1=−∞}^{∞} { ∏_{j=2}^{M} Pr[rj < R1 | s1(t)] } p(R1|s1(t)) dR1
          = 1 − ∫_{R1=−∞}^{∞} [ ∫_{R2=−∞}^{R1} (πN0)^{−1/2} exp(−R2²/N0) dR2 ]^{M−1}
                × (πN0)^{−1/2} exp(−(R1 − √E)²/N0) dR1                               (31)

The above integral can only be evaluated numerically. It can be normalized so that only two parameters, namely M (the number of messages) and E/N0 (the signal-to-noise ratio), enter into the numerical integration:

Pr[error] = ∫_{−∞}^{∞} { 1 − [ (1/√(2π)) ∫_{−∞}^{y} e^{−x²/2} dx ]^{M−1} }
              × (1/√(2π)) exp{−(1/2)(y − √(2E/N0))²} dy                               (32)
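The normalized integral (32) is easy to evaluate numerically; a sketch using SciPy (the inner bracket is just the standard normal CDF Φ(y); function name mine):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def orthogonal_symbol_error(M, EsN0_dB):
    """Symbol error probability of M orthogonal signals, equation (32)."""
    a = np.sqrt(2 * 10 ** (EsN0_dB / 10))          # sqrt(2E/N0)
    integrand = lambda y: (1 - norm.cdf(y) ** (M - 1)) * norm.pdf(y - a)
    val, _ = quad(integrand, -np.inf, np.inf)
    return val

print(orthogonal_symbol_error(M=8, EsN0_dB=10.0))
```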


The relationship between the probability of bit error and the probability of symbol error for an M-ary orthogonal signal set can be found as follows. For equiprobable orthogonal signals, all symbol errors are equiprobable and occur with probability

Pr[symbol error]/(M − 1) = Pr[symbol error]/(2^λ − 1)

There are (λ choose k) ways in which k bits out of λ may be in error. Hence the average number of bit errors per λ-bit symbol is

Σ_{k=1}^{λ} k (λ choose k) Pr[symbol error]/(2^λ − 1) = λ · 2^{λ−1}/(2^λ − 1) · Pr[symbol error]      (33)

The probability of bit error is the above result divided by λ. Thus

Pr[bit error]/Pr[symbol error] = 2^{λ−1}/(2^λ − 1)                                   (34)

Note that the above ratio approaches 1/2 as λ → ∞.

Figure 2: Probability of bit error for M-orthogonal signals versus γb = Eb/N0.
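Combining (32) with (34) gives the bit error probability plotted in Figure 2. A small helper (Python; it reuses the assumed function orthogonal_symbol_error from the previous sketch):

```python
import numpy as np
# orthogonal_symbol_error(M, EsN0_dB) is taken from the previous sketch.

def orthogonal_bit_error(M, EbN0_dB):
    """Bit error probability of M orthogonal signals via (32) and (34)."""
    lam = np.log2(M)
    EsN0_dB = EbN0_dB + 10 * np.log10(lam)        # Es = lambda * Eb
    p_sym = orthogonal_symbol_error(M, EsN0_dB)
    return p_sym * 2 ** (lam - 1) / (2 ** lam - 1)

print(orthogonal_bit_error(M=8, EbN0_dB=6.0))
```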


Union Bound: To provide insight into the behavior of Pr[error], consider upper bounding the error probability (the union bound) as follows.

Pr[error] = Pr[(r1 < r2) or (r1 < r3) or . . . or (r1 < rM) | s1(t)]
          < Pr[(r1 < r2)|s1(t)] + · · · + Pr[(r1 < rM)|s1(t)]
          = (M − 1)Q(√(E/N0)) < M Q(√(E/N0)) < M e^{−E/(2N0)}                        (35)

where a simple bound on Q(x), namely Q(x) < exp(−x²/2), has been used. Let M = 2^λ and E = λEb; then

Pr[error] < e^{λ ln 2} e^{−λEb/(2N0)} = e^{−λ(Eb/N0 − 2 ln 2)/2}                     (36)

The above equation shows that there is a definite threshold effect with orthogonal signalling. As M → ∞, the probability of error approaches zero exponentially, provided that

Eb/N0 > 2 ln 2 = 1.39 = 1.42 dB                                                      (37)

A different interpretation of the upper bound in (36) can be obtained as follows. Since E = λEb = Ps Ts (Ps is the transmitted power) and the bit rate is rb = λ/Ts, (36) can be rewritten as

Pr[error] < e^{λ ln 2} e^{−Ps Ts/(2N0)} = e^{−Ts[−rb ln 2 + Ps/(2N0)]}               (38)

The above implies that if −rb ln 2 + Ps/(2N0) > 0, i.e., rb < Ps/(2N0 ln 2), the probability of error tends to zero as Ts or M becomes larger and larger. This behaviour of the error probability is quite surprising: provided the bit rate is small enough, the error probability can be made arbitrarily small even though the transmitter power remains finite. The obvious disadvantage of the approach, however, is that the bandwidth requirement increases with M. As M → ∞, the transmitted bandwidth goes to infinity.

The union bound is not very tight at low SNR because the bound Q(x) < exp(−x²/2) on the Q-function is loose there. Using more elaborate bounding techniques it can be shown (see the texts by Proakis and by Wozencraft and Jacobs) that Pr[error] → 0 as λ → ∞, provided that Eb/N0 > ln 2 = 0.693 (−1.6 dB).
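A tiny numerical illustration of the threshold effect in (36) (Python; function name mine): above Eb/N0 = 2 ln 2 the bound decays as λ grows, below it the bound grows.

```python
import numpy as np

def union_bound(lam, EbN0):
    """Union bound (36) on Pr[error] for M = 2**lam orthogonal signals."""
    return np.exp(-lam * (EbN0 - 2 * np.log(2)) / 2)

for EbN0_dB in (0.0, 3.0):                      # below / above the 1.42 dB threshold
    EbN0 = 10 ** (EbN0_dB / 10)
    print(EbN0_dB, [float(union_bound(l, EbN0)) for l in (2, 5, 10, 20)])
```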


Biorthogonal Signals

Figure: Biorthogonal signal constellations in two and three dimensions: each orthogonal signal si(t) is paired with its negative −si(t).

A biorthogonal signal set can be obtained from an orthogonal set of N signals by augmenting it with the negative of each signal. Obviously, for the biorthogonal set M = 2N. Denote the additional signals by −si(t), i = 1, 2, . . . , N, and assume that each signal has energy E.

The received signal r(t) is closer to si(t) than to −si(t) if and only if (iff) ∫_0^Ts [r(t) − √E φi(t)]² dt < ∫_0^Ts [r(t) + √E φi(t)]² dt. This happens iff ri = ∫_0^Ts r(t)φi(t) dt > 0. Similarly, r(t) is closer to si(t) than to sj(t) iff ri > rj, j ≠ i, and r(t) is closer to si(t) than to −sj(t) iff ri > −rj, j ≠ i. It follows that the decision rule of the minimum-distance receiver for biorthogonal signalling can be implemented as

Choose si(t) if |ri| > |rj| and ri > 0,   j ≠ i                                      (39)

The conditional probability of a correct decision for equally likely messages, given that s1(t) is transmitted and that

r1 = √E + w1 = R1 > 0                                                                (40)

is just

Pr[correct|s1(t), R1 > 0] = Pr[−R1 < all rj < R1 : j ≠ 1 | s1(t), R1 > 0]
  = (Pr[−R1 < rj < R1 | s1(t), R1 > 0])^{N−1}
  = [ ∫_{R2=−R1}^{R1} (πN0)^{−1/2} exp(−R2²/N0) dR2 ]^{N−1}                          (41)


Averaging over the pdf of R1 we have

Pr[correct|s1(t)] = ∫_{R1=0}^{∞} [ ∫_{R2=−R1}^{R1} (πN0)^{−1/2} exp(−R2²/N0) dR2 ]^{N−1}
                      × (πN0)^{−1/2} exp(−(R1 − √E)²/N0) dR1                           (42)

Again, by virtue of symmetry and the equal a priori probability of the messages, the above equation is also Pr[correct]. Finally, by noting that N = M/2, we obtain

Pr[error] = 1 − ∫_{R1=0}^{∞} [ ∫_{R2=−R1}^{R1} (πN0)^{−1/2} exp(−R2²/N0) dR2 ]^{M/2−1}
               × (πN0)^{−1/2} exp(−(R1 − √E)²/N0) dR1                                   (43)

The difference in error performance between M biorthogonal and M orthogonal signals is negligible when M and E/N0 are large, but the number of dimensions (i.e., the bandwidth) required is reduced by one half in the biorthogonal case.
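Equation (43) can be evaluated the same way as (32) after normalizing R1 and R2 by √(N0/2); a sketch (Python/SciPy, function name mine), where the inner integral becomes Φ(y) − Φ(−y):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def biorthogonal_symbol_error(M, EsN0_dB):
    """Symbol error probability of M biorthogonal signals, equation (43)."""
    a = np.sqrt(2 * 10 ** (EsN0_dB / 10))             # sqrt(2E/N0)
    inner = lambda y: norm.cdf(y) - norm.cdf(-y)      # Pr[-R1 < r_j < R1] after normalizing
    integrand = lambda y: inner(y) ** (M // 2 - 1) * norm.pdf(y - a)
    p_correct, _ = quad(integrand, 0, np.inf)
    return 1 - p_correct

print(biorthogonal_symbol_error(M=8, EsN0_dB=10.0))
```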


Hypercube Signals (Vertices of a Hypercube)

Figure: Hypercube signal constellations for (a) M = 2, (b) M = 2² = 4 and (c) M = 2³ = 8: the signal points sit on the vertices of a hypercube of side 2√Eb centred at the origin.

Here the M = 2^λ signals are located on the vertices of a λ-dimensional hypercube centered at the origin. This configuration is shown geometrically above for λ = 1, 2, 3. The hypercube signals can be formed as follows:

si(t) = √Eb Σ_{j=1}^{λ} bij φj(t),   bij ∈ {±1},   i = 1, 2, . . . , M = 2^λ           (44)

Thus the components of the signal vector si = [si1, si2, . . . , siλ]ᵀ are ±√Eb.
To evaluate the error probability, assume that the signal

s1 = (−√Eb, −√Eb, . . . , −√Eb)                                                         (45)

is transmitted. First claim that no error is made if the noise components along the φj(t) satisfy

wj < √Eb,   for all j = 1, 2, . . . , λ                                                 (46)

The proof is immediate. When

r = x = s1 + w                                                                           (47)

is received, the jth component of x − si is

xj − sij = wj             if sij = −√Eb
         = wj − 2√Eb      if sij = +√Eb                                                  (48)

Since Equation (46) implies

2√Eb − wj > |wj|  for all j,


it follows that

|x − si|² = Σ_{j=1}^{λ} (xj − sij)² > Σ_{j=1}^{λ} wj² = |x − s1|²                        (49)

for all si ≠ s1 whenever (46) is satisfied.

Next claim that an error is made if, for at least one j,

wj > √Eb                                                                                  (50)

This follows from the fact that x is closer to s̄j than to s1 whenever (50) is satisfied, where s̄j denotes the signal with component +√Eb along the jth direction and −√Eb in all other directions. (Of course, x may be still closer to some signal other than s̄j, but it cannot be closest to s1.)

Equations (49) and (50) together imply that a correct decision is made if and only if (46) is satisfied. The probability of this event, given that m = m1, is therefore

Pr[correct|m1] = Pr[all wj < √Eb; j = 1, 2, . . . , λ]
               = ∏_{j=1}^{λ} Pr[wj < √Eb] = (1 − Pr[wj ≥ √Eb])^λ
               = (1 − p)^λ                                                                 (51)

in which

p = Pr[wj ≥ √Eb] = Q(√(2Eb/N0))                                                            (52)

is again the probability of error for two equally likely signals separated by distance 2√Eb. Finally, from symmetry,

Pr[correct|mi] = Pr[correct|m1] for all i,

hence

Pr[correct] = (1 − p)^λ = [1 − Q(√(2Eb/N0))]^λ                                             (53)

In order to express this result in terms of signal energy, we again recognize that the distance squared from the origin to each signal si is the same. The transmitted energy is therefore independent of i, hence can be designated by E. Clearly

|si|² = Σ_{j=1}^{λ} sij² = λEb = E                                                         (54)


Thus

p = Q(√(2E/(λN0)))                                                                         (55)

The simple form of the result Pr[correct] = (1 − p)^λ suggests that a more immediate derivation may exist. Indeed one does. Note that the jth coordinate of the random signal s is a priori equally likely to be +√Eb or −√Eb, independently of all other coordinates. Moreover, the noise wj disturbing the jth coordinate is independent of the noise in all other coordinates. Hence, by the theory of sufficient statistics, a decision may be made on the jth coordinate without examining any other coordinate. This single-coordinate decision corresponds to the problem of binary signals separated by distance 2√Eb, for which the probability of a correct decision is 1 − p. Since in the original hypercube problem a correct decision is made if and only if a correct decision is made on every coordinate, and since these decisions are independent, it follows immediately that Pr[correct] = (1 − p)^λ.
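A minimal sketch of (52)-(53) (Python/SciPy; function name mine), which also makes the equivalence with λ independent antipodal bits explicit:

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

def hypercube_error(lam, EbN0_dB):
    """Symbol error probability of M = 2**lam hypercube (vertex) signals:
    Pr[error] = 1 - (1 - p)**lam with p = Q(sqrt(2Eb/N0)), equations (52)-(53)."""
    p = Q(np.sqrt(2 * 10 ** (EbN0_dB / 10)))
    return 1 - (1 - p) ** lam

# Each coordinate is an independent antipodal (BPSK) decision,
# so the bit error probability is simply p itself.
print(hypercube_error(lam=3, EbN0_dB=8.0))
```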
