Spring 2016
Consider a simple (2,1) linear, binary repetition code used as an error control code (ECC).
Since the minimum distance is 2, only one of the two 1-bit error patterns can be corrected.
The resulting probability of decoding error is then Pe = ε(1 − ε) + ε^2 = ε, which is no better
than uncoded transmission, and in fact is worse because with the rate-1/2 code the
information transmission rate is 1/2 bit per channel use.
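As a quick numerical check of the analysis above (a sketch added here, not part of the original notes; `eps` plays the role of the BSC crossover probability ε):

```python
# Numeric check of the (2,1) repetition-code analysis: over a BSC with
# crossover probability eps, a decoder that corrects exactly one of the
# two 1-bit error patterns still fails on the other 1-bit pattern and
# on the 2-bit pattern.
def pe_21_hard(eps):
    return eps * (1 - eps) + eps ** 2

# Pe equals eps itself -- no better than uncoded transmission.
for eps in (0.1, 0.01, 0.001):
    assert abs(pe_21_hard(eps) - eps) < 1e-12
```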
Now, consider binary PAM signaling over an additive white Gaussian noise channel using
the same (2,1) block code. At the output of the matched filter/correlator receiver the model
is
y = a + w,

where a = (a_1, a_2)^T with a_i = ±1 is the transmitted signal point, w = (w_1, w_2)^T is
zero-mean Gaussian noise with covariance σ_w^2 I, where σ_w^2 = N_0/2, and
y = (y_1, y_2)^T is the pair of samples from the output of
the matched filter (or correlators). The block diagram is shown below, indicating the
distinction between the separate PAM detection followed by ECC decoding, and the
combined detection and decoding (the soft-decision case).
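The channel model can be sketched in a few lines of code (a hypothetical helper added here, assuming unit-amplitude signal points a = ±(1, 1)^T):

```python
import random

def channel(bit, sigma_w, rng=random):
    # Map the codeword bit to its 2-dimensional signal point a and add
    # white Gaussian noise of standard deviation sigma_w to each sample,
    # producing y = a + w.
    a = (1.0, 1.0) if bit else (-1.0, -1.0)
    return tuple(a_i + rng.gauss(0.0, sigma_w) for a_i in a)

y = channel(1, sigma_w=0.5)   # one noisy 2-dimensional observation
```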
The figures below show the noise clouds about the two different 2-dimensional signal
points a = ±(1, 1)^T. The first figure shows the decoding regions
for hard-decision decoding of each bit. The second shows the decoding regions for the
soft-decision decoding of each 2-bit codeword. In the figure, there appear to be about 6
hard-decision decoding errors and one soft-decision decoding error for the particular
realization of 100 iid additive white Gaussian noise vectors added to each of the two signal
points (with the SNR = 6 dB).
For separate detection, each PAM symbol is hard-detected individually: decide â_i = +1 if
y_i ≥ 0, and â_i = −1 if y_i < 0. Applied to two consecutive PAM symbols, this results in
the (2-dimensional) detection regions shown in Figure 1. From previous work, the binary
PAM bit error probability is Pb = Q(√(E_s/σ_w^2)).
For the hard-decision decoding, the resulting BSC then has crossover probability ε = Pb, and
the ECC decoder then yields bit error probability Pe = ε = Pb. That is, there is no reduction in bit error
rate by using the (2,1) code.
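A minimal Monte Carlo sketch of this conclusion (added here, not from the original; it assumes unit-amplitude symbols so Pb = Q(1/σ_w), and a decoder that breaks ties in favor of the first bit):

```python
import math
import random

def Q(x):
    # Gaussian tail probability Q(x) = P(N(0,1) > x).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def simulate_hard(sigma_w, trials, rng):
    # Send codeword (+1, +1); make per-symbol hard decisions, then
    # decode the (2,1) repetition code.
    errors = 0
    for _ in range(trials):
        b1 = (1.0 + rng.gauss(0.0, sigma_w)) >= 0.0
        b2 = (1.0 + rng.gauss(0.0, sigma_w)) >= 0.0
        if b1 == b2:
            decoded = b1
        else:
            decoded = b1   # tie: correct only errors in the second bit
        if not decoded:
            errors += 1
    return errors / trials

rng = random.Random(1)
ber = simulate_hard(sigma_w=0.5, trials=200_000, rng=rng)
# ber should be close to Pb = Q(2) -- i.e., no coding gain.
```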
For the combined ECC/PAM detection/decoding (the soft-decision decoding), the pair of
received PAM symbols corresponding to the ECC codeword bits are viewed together as a
2-dimensional vector. The ML detector then selects the (2-dimensional) symbol a to
maximize p ( y | a ) . Since there are just two binary codewords in the code, there are just
two 2-dimensional symbol vectors to consider in the maximization. Using the assumption
that the PAM channel noise samples are white and Gaussian, the ML decision rule is:
Choose â = −(1, 1)^T if

  (1/(2πσ_w^2)) exp(−[(y_1 + 1)^2 + (y_2 + 1)^2] / (2σ_w^2)) > (1/(2πσ_w^2)) exp(−[(y_1 − 1)^2 + (y_2 − 1)^2] / (2σ_w^2));

otherwise, choose â = (1, 1)^T. Since the dependence on the symbols is only in the exponent,
the ML decision rule becomes: choose â = −(1, 1)^T if ||y + (1, 1)^T||^2 < ||y − (1, 1)^T||^2;
otherwise choose â = (1, 1)^T.
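The minimum-distance form of the ML rule can be sketched as follows (a hypothetical helper added here, assuming unit-amplitude signal points):

```python
# Soft-decision (ML) decoder for the (2,1) code: choose the codeword
# whose signal point is closest to y in squared Euclidean distance.
def soft_decode(y):
    points = {1: (1.0, 1.0), 0: (-1.0, -1.0)}   # codeword bit -> signal point
    def dist2(a):
        return (y[0] - a[0]) ** 2 + (y[1] - a[1]) ** 2
    return min(points, key=lambda bit: dist2(points[bit]))

# For these two antipodal points, only the sign of y1 + y2 matters.
assert soft_decode((0.4, 0.3)) == 1
assert soft_decode((-0.9, 0.2)) == 0
```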
If the codeword signal point a = (1, 1)^T is transmitted, the decoding error probability is
then Pe|a = P(y ∈ D_{−a} | a) = Q(||a||/σ_w) = Q(√(2E_s/σ_w^2)), where D_{−a} denotes
the decision region of the opposite codeword.
Note that this implies that it takes a factor of two less signal-to-noise ratio for soft-decision
detection/decoding to achieve the same bit error probability as the hard-decision decoding
case.
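This factor-of-two claim can be checked numerically (a sketch added here; with SNR = E_s/σ_w^2, soft decision gives Q(√(2·SNR)) and hard decision gives Q(√SNR)):

```python
import math

def Q(x):
    # Gaussian tail probability Q(x) = P(N(0,1) > x).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Halving the SNR fed to the soft-decision decoder reproduces the
# hard-decision error probability exactly, and at equal SNR the
# soft-decision error probability is strictly smaller.
for snr in (1.0, 2.0, 4.0, 8.0):
    assert Q(math.sqrt(2.0 * snr)) < Q(math.sqrt(snr))
    assert abs(Q(math.sqrt(2.0 * (snr / 2.0))) - Q(math.sqrt(snr))) < 1e-15
```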
In summary, by using soft-decision decoding, the minimum distance between (multi-dimensional) symbols is increased, and hence the probability of ML decoding error is
reduced. This is accomplished at the expense of bandwidth. In the example considered, a
rate-1/2 code is used, so it takes two channel transmissions to send one information bit.
Problem 1. Suppose a (3, 1) repetition code is used with the binary PAM signaling. Find
the bit error probability for hard-decision and soft-decision decoding.
Problem 2. Suppose a (n, 1) repetition code is used with the binary PAM signaling. Find
the bit error probability for hard-decision and soft-decision decoding.
Problem 3. An (8, 4) linear binary code (a shortened Hamming code) with d_min = 4 is used
with binary PAM. Find (approximate) bit error probability expressions for hard-decision
and soft-decision decoding. What is the effective coding gain of the soft-decision decoding
over the hard-decision decoding?