
Optical Communication Theory and Techniques

Part I: Communication Theory and Digital Transmission


January 9, 2013

A binary transmission system uses the signals s(t) and −s(t) to transmit the symbol a0 ∈ {−1, 1}, with
P(a0 = ±1) = 1/2 and

    s(t) = { A,  0 ≤ t ≤ T
           { 0,  otherwise.
The receiver is as sketched in the following figure:

[Figure: r(t) is filtered by h(t) to give x(t); x(t) is sampled at t = t0, producing z, and the
decision is â0 = 1 if z > 0, â0 = −1 if z < 0.]
where the filter impulse response is


    h(t) = e^{−t/T} u(t),

u(t) being the unit step function. The received waveform can be expressed as

    r(t) = a0 s(t) + w(t),   0 ≤ t ≤ T,

where w(t) is a zero-mean Gaussian noise process with power spectral density N0 /2.
Writing the output of the filter as x(t) = a0 g(t) + n(t) and knowing that w(t) is independent of a0 :

1. Determine and sketch the filtered signal component g(t) = s(t) ⊗ h(t) for all t ≥ 0.

2. Compute mean and variance of z = x(t0 ), at the sampling instant t0 such that g(t0 ) is maximum.

3. Determine the probability of error when the sampling instant t0 is as in (2).

4. Suppose now that h(t) is the impulse response of the filter matched to s(t) and that the sampling
instant is still as in (2). Determine the probability of error in this case and compare it with
the value obtained in (3). What is the power penalty (expressed in dB) for the less efficient
configuration?

5. Suppose now that, instead of a single symbol, a binary sequence of symbols a_n ∈ {−1, 1} is
transmitted at a rate of 1/T. State, justifying your answer, whether the probability of error
in both cases (3) and (4) remains the same.
Solution:

1. As s(τ) vanishes for τ < 0 and τ > T, while h(t − τ) vanishes for τ > t, we have

       g(t) = ∫_{−∞}^{∞} s(τ)h(t − τ) dτ = ∫_0^{min(t,T)} s(τ)h(t − τ) dτ

            = { AT(1 − e^{−t/T}),             0 ≤ t ≤ T
              { AT(1 − e^{−1}) e^{−(t−T)/T},  t > T

and a sketch of g(t) is given in the following figure:

[Figure: g(t) rises from 0 to its maximum at t = T, then decays exponentially over t = 2T, 3T, 4T.]

As can be seen, g(t) is maximum at t = T and the maximum value is g(T ) = AT (1 − e−1 ).
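The two branches of g(t) can be checked numerically; the following is a minimal sketch, assuming the illustrative values A = T = 1 (not specified in the problem), that compares a discretized convolution of s(t) with h(t) against the closed form derived above:

```python
import numpy as np

# Illustrative values (assumed): A = T = 1.
A, T = 1.0, 1.0
dt = 1e-3
t = np.arange(0.0, 4.0 * T, dt)

s = np.where(t <= T, A, 0.0)   # rectangular pulse of amplitude A on [0, T]
h = np.exp(-t / T)             # impulse response e^{-t/T} u(t)

# Discretized convolution g = s * h (Riemann-sum approximation).
g_num = np.convolve(s, h)[:len(t)] * dt

# Closed-form branches derived above.
g_cf = np.where(t <= T,
                A * T * (1.0 - np.exp(-t / T)),
                A * T * (1.0 - np.exp(-1.0)) * np.exp(-(t - T) / T))

assert np.max(np.abs(g_num - g_cf)) < 5e-3
print(g_num[int(T / dt)])      # maximum of g(t), close to AT(1 - e^{-1}) ≈ 0.632
```

The maximum of the numerical convolution indeed occurs at t = T, in agreement with the sketch.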

2. For a one-shot transmission, r(t) is applied to the filter at t = 0, such that r(t) = 0 for t < 0. So,
   choosing t0 = T, as

       n(T) = ∫_0^T w(τ)h(T − τ) dτ,

   the mean of z = a0 g(T) + n(T) is

       E{z} = E{a0} g(T) + ∫_0^T E{w(τ)} h(T − τ) dτ = 0

because E{a0} = E{w(t)} = 0. Thus, taking into account that a0 is independent of w(t) and thus
also of n(t), the variance of z turns out to be

    E{z²} = E{(a0 g(T) + n(T))²} = E{a0² g²(T) + 2 a0 g(T) n(T) + n²(T)}
          = E{a0²} g²(T) + 2 E{a0} g(T) E{n(T)} + E{n²(T)}
          = g²(T) + E{n²(T)}

because E{a0²} = 1. As regards E{n²(T)}, we have

    E{n²(T)} = E{ ∫_0^T w(τ1)h(T − τ1) dτ1 · ∫_0^T w(τ2)h(T − τ2) dτ2 }

             = ∫_0^T ∫_0^T E{w(τ1)w(τ2)} h(T − τ1) h(T − τ2) dτ1 dτ2

             = (N0/2) ∫_0^T ( ∫_0^T δ(τ1 − τ2) h(T − τ2) dτ2 ) h(T − τ1) dτ1

             = (N0/2) ∫_0^T h²(T − τ1) dτ1 = (N0/2) ∫_0^T e^{−2(T−τ1)/T} dτ1 = (N0 T/4)(1 − e^{−2}),
and hence

    E{z²} = A²T²(1 − e^{−1})² + (N0 T/4)(1 − e^{−2}).
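The closed form for E{n²(T)} can be sanity-checked by numerically integrating (N0/2)∫h²(T − τ)dτ; a small sketch with assumed values N0 = 2 and T = 1:

```python
import numpy as np

# Assumed illustrative values: N0 = 2, T = 1.
N0, T = 2.0, 1.0
dt = 1e-5
tau = np.arange(0.0, T, dt)

# (N0/2) * integral of h^2(T - tau) over [0, T], with h(t) = e^{-t/T} u(t).
var_num = (N0 / 2.0) * np.sum(np.exp(-2.0 * (T - tau) / T)) * dt

# Closed form derived above: (N0 T / 4)(1 - e^{-2}).
var_cf = (N0 * T / 4.0) * (1.0 - np.exp(-2.0))

print(var_num, var_cf)   # the two values agree to roughly 1e-5
assert abs(var_num - var_cf) < 1e-4
```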
3. Due to the symmetry, letting σn² = E{n²(T)}, the probability of error can be written as

       P(E) = P(E | a0 = −1) = P(z > 0 | a0 = −1) = P(−g(T) + n(T) > 0) = P(n(T) > g(T))

            = Q( g(T)/σn ) = Q( AT(1 − e^{−1}) / √((N0 T/4)(1 − e^{−2})) ) = Q( √( 4Eb(e − 1) / (N0(e + 1)) ) ),

   where Eb = A²T is the average energy per bit.
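This Q-function expression can be cross-checked by a Monte Carlo simulation of the decision variable z = a0 g(T) + n(T), with n(T) Gaussian of variance σn² as computed in (2); the values of A, T and N0 below are assumptions chosen only for illustration:

```python
import numpy as np
from math import erfc, sqrt, e

# Assumed illustrative values.
A, T, N0 = 1.0, 1.0, 0.5
Eb = A**2 * T

gT = A * T * (1.0 - np.exp(-1.0))                  # signal sample g(T)
sigma = sqrt(N0 * T / 4.0 * (1.0 - np.exp(-2.0)))  # noise std at the sampler

Q = lambda x: 0.5 * erfc(x / sqrt(2.0))            # Gaussian tail function
p_theory = Q(sqrt(4.0 * Eb * (e - 1.0) / (N0 * (e + 1.0))))

rng = np.random.default_rng(0)
n_trials = 2_000_000
a0 = rng.choice([-1.0, 1.0], size=n_trials)
z = a0 * gT + sigma * rng.standard_normal(n_trials)  # decision variable
p_mc = np.mean(np.sign(z) != a0)                     # empirical error rate

print(p_theory, p_mc)   # the two estimates agree within Monte Carlo error
assert abs(p_mc - p_theory) / p_theory < 0.05
```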

4. In this case, the receiver is optimum and thus

       P(E) = Q( d / √(2N0) ),

   where d is the distance between s(t) and −s(t). As

       d² = ∫_0^T (2s(t))² dt = 4A²T = 4Eb,

   we get

       P(E) = Q( √(2Eb/N0) ),
as expected, the signals being antipodal. As 2(e − 1)/(e + 1) ≈ 0.92 < 1, the probability of error
is larger when not using the matched filter, and the power penalty for the less efficient
configuration is

    10 log10( (e + 1) / (2(e − 1)) ) ≈ 0.34 dB.
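The penalty arithmetic is easy to verify with the standard library alone:

```python
from math import e, log10

# SNR loss factor of the suboptimal filter relative to the matched filter.
ratio = 2.0 * (e - 1.0) / (e + 1.0)
penalty_db = 10.0 * log10(1.0 / ratio)

print(round(ratio, 2), round(penalty_db, 2))   # 0.92 0.34
```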

5. In this case, the received waveform can be written as

       r(t) = Σ_n a_n s(t − nT) + w(t),

   where a_n = ±1 with equal probability. Thus, sampling at t0 = T, we have

       z = x(T) = Σ_n a_n g(T − nT) + n(T) = a0 g(T) + Σ_{n≠0} a_n g(T − nT) + n(T).

It is easy to see that, if h(t) is the filter matched to s(t), g(t) is as shown in the following figure

[Figure: the matched-filter pulse g(t), a triangle supported on (0, 2T) with its peak at t = T.]

such that g(kT) = 0 for k ≠ 1. Thus, the ISI term vanishes and the probability of error remains
unchanged. This is not true in the other case and, since the ISI term can be seen as additional
noise, the probability of error increases.
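The tail samples of the unmatched-filter pulse make the ISI argument concrete; a sketch assuming A = T = 1 that evaluates g(T − nT), where the nonzero samples at n < 0 are the residual contributions of previously transmitted symbols:

```python
import numpy as np

# Assumed illustrative values: A = T = 1.
A, T = 1.0, 1.0

def g(t):
    """Closed-form pulse at the output of the unmatched filter h(t) = e^{-t/T} u(t)."""
    if t < 0.0:
        return 0.0
    if t <= T:
        return A * T * (1.0 - np.exp(-t / T))
    return A * T * (1.0 - np.exp(-1.0)) * np.exp(-(t - T) / T)

# Samples entering z = x(T): n = 0 is the wanted symbol, n < 0 are past symbols.
samples = {n: g(T - n * T) for n in range(-3, 1)}
print(samples)   # the ISI tails decay as e^{-|n|} but never vanish exactly
```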
