
EE 451

Example of Hard- and Soft-Decision Decoding

Spring 2016

Consider a simple (2,1) linear, binary repetition code used as an error control code (ECC).
Since the minimum distance is 2, only one of the two 1-bit error patterns can be corrected.
The resulting probability of decoding error is then $P_e = \varepsilon(1-\varepsilon) + \varepsilon^2 = \varepsilon$, where $\varepsilon$ is the crossover probability of the equivalent binary symmetric channel (BSC). This is no better than uncoded transmission, and in fact is worse, because with the rate-1/2 code the information transmission rate is 1/2 bit per channel use.
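
This can be checked with a short Monte Carlo simulation. In the sketch below (Python/NumPy), the crossover probability $\varepsilon = 0.1$ and the choice of 10 as the correctable 1-bit error pattern are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.1                                     # assumed BSC crossover probability
n_bits = 100_000

info = rng.integers(0, 2, n_bits)             # information bits
code = np.repeat(info[:, None], 2, axis=1)    # (2,1) repetition encoding: b -> (b, b)
flips = (rng.random(code.shape) < eps).astype(int)
recv = code ^ flips                           # BSC: iid bit flips with probability eps

# Standard-array decoding with coset leader 10 (corrects an error in the first
# bit only); this turns out to be equivalent to trusting the second received bit.
decoded = recv[:, 1]

print(np.mean(decoded != info))               # ~ eps: no better than uncoded
```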
Now, consider binary PAM signaling over an additive white Gaussian noise channel using
the same (2,1) block code. At the output of the matched filter/correlator receiver the model
is
$$\mathbf{y} = \mathbf{a} + \mathbf{w},$$
where $\mathbf{a} = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}$ is the transmitted signal point, $\mathbf{w} = \begin{bmatrix} w_1 \\ w_2 \end{bmatrix}$ is zero-mean Gaussian noise with covariance $\sigma_w^2 I$, where $\sigma_w^2 = N_0/2$, and $\mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$ is the pair of samples from the output of
the matched filter (or correlators). The block diagram is shown below, indicating the
distinction between the separate PAM detection followed by ECC decoding, and the
combined detection and decoding (the soft-decision case).
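
A minimal sketch of this vector channel model may help. Taking the SNR in the figure captions as $E_b/N_0$, note that $E_b = 2\alpha^2$ and $N_0 = 2\sigma_w^2$, so $E_b/N_0 = \alpha^2/\sigma_w^2$; the mapping of bit 1 to $+[\alpha, \alpha]^T$ is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.0
snr_db = 6.0                          # E_b/N_0 in dB, as in the figure captions
sigma_w = alpha / 10**(snr_db / 20)   # since E_b/N_0 = alpha**2 / sigma_w**2 here

def channel(bit):
    """Send one info bit as the 2-D point a = +/-(alpha, alpha) through AWGN."""
    a = (1.0 if bit else -1.0) * np.array([alpha, alpha])   # assumed bit mapping
    w = rng.normal(0.0, sigma_w, size=2)   # zero mean, covariance sigma_w^2 * I
    return a + w                           # y = a + w

print(channel(1))   # a noisy sample near (1, 1)
```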

The figures below show the noise clouds about the two different 2-dimensional signal points $\mathbf{a} = \pm\begin{bmatrix} \alpha \\ \alpha \end{bmatrix}$, with $\alpha = 1$. The first figure shows the decoding regions
for hard-decision decoding of each bit. The second shows the decoding regions for the
soft-decision decoding of each 2-bit codeword. In the figures, there appear to be about six hard-decision decoding errors and one soft-decision decoding error for the particular realization of 100 iid additive white Gaussian noise vectors added to each of the two signal points (with the SNR = 6 dB).

Figure 1. Hard-decision detection regions.

Figure 2. Soft-decision detection/decoding regions (SNR $E_b/N_0 = 6$ dB).


The simulation is repeated with 500 AWGN vectors added to each signal point, and the result is shown below. Note that there are far fewer decoding errors with the soft-decision
decoding, even using this simple code that, for the BSC, offers no reduction in probability
of bit error.

Figure 3. Comparison of hard- and soft-decision detection/decoding regions.
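
The experiment is easy to reproduce. The sketch below, with the same assumptions as before (bit 1 mapped to $+[\alpha,\alpha]^T$, $E_b/N_0 = 6$ dB, coset leader 10 for the hard-decision decoder), counts decoded-bit errors for 500 noise vectors about each signal point.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, n_trials = 1.0, 500
sigma_w = alpha / 10**(6.0 / 20)        # E_b/N_0 = 6 dB, as in the figures

for bit, sign in ((0, -1.0), (1, +1.0)):
    a = sign * np.array([alpha, alpha])
    y = a + rng.normal(0.0, sigma_w, size=(n_trials, 2))

    # Hard decision: detect each PAM sample, then ECC-decode; with the coset
    # leader assumed earlier, the decoded bit is the sign of the second sample.
    hard = (y[:, 1] >= 0).astype(int)

    # Soft decision: nearest 2-D signal point, i.e., the sign of y1 + y2.
    soft = (y.sum(axis=1) >= 0).astype(int)

    print(f"bit {bit}: {np.sum(hard != bit)} hard errors, "
          f"{np.sum(soft != bit)} soft errors")
```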


Analysis
For sample-by-sample (hard-decision) decoding, first each PAM symbol is individually
detected and mapped to a decoded codeword bit; then the error control code (ECC)
decoder forms the decoded information bit. The maximum likelihood (ML) detection rule
for binary PAM is to map the noisy (matched filter output) sample $y$ to the closest (in Euclidean distance) symbol, $\hat{a}$. This rule is simply:
$$\hat{a} = \begin{cases} \alpha, & \text{if } y \ge 0; \\ -\alpha, & \text{if } y < 0. \end{cases}$$
When applied to two consecutive PAM symbols, this results in the (2-dimensional) detection regions shown in Figure 1. From previous work, the binary PAM bit error probability is $P_b = Q\!\left(\sqrt{E_s/\sigma_w^2}\right)$.
For the hard-decision decoding, the BSC then has crossover probability $\varepsilon = P_b$, and the ECC decoder then yields bit error probability $P_e = \varepsilon = P_b$. That is, there is no reduction in bit error rate by using the (2,1) code.
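
Numerically, writing the Q-function via the complementary error function, $Q(x) = \tfrac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$, the hard-decision decoded BER at the operating point of the figures works out as in this sketch (the 6 dB value is taken from the captions):

```python
from math import erfc, sqrt

def Q(x):
    """Gaussian tail probability: Q(x) = P(N(0,1) > x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * erfc(x / sqrt(2))

alpha = 1.0
sigma_w = alpha / 10**(6.0 / 20)     # E_b/N_0 = 6 dB, as in the figures
Es = alpha**2                        # energy per symbol per dimension
print(Q(sqrt(Es / sigma_w**2)))      # Pb ~ 0.023; also the hard-decision decoded BER
```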
For the combined ECC/PAM detection/decoding (the soft-decision decoding), the pair of received PAM symbols corresponding to ECC codeword bits is viewed together, as a 2-dimensional vector. The ML detector then selects the (2-dimensional) symbol $\hat{\mathbf{a}}$ to maximize $p(\mathbf{y} \mid \hat{\mathbf{a}})$. Since there are just two binary codewords in the code, there are just two 2-dimensional symbol vectors to consider in the maximization. Using the assumption that the PAM channel noise samples are white and Gaussian, the ML decision rule is:

Choose $\hat{\mathbf{a}} = \begin{bmatrix} \alpha \\ \alpha \end{bmatrix}$ if
$$\frac{1}{2\pi\sigma_w^2} \exp\!\left(-\frac{(y_1-\alpha)^2 + (y_2-\alpha)^2}{2\sigma_w^2}\right) > \frac{1}{2\pi\sigma_w^2} \exp\!\left(-\frac{(y_1+\alpha)^2 + (y_2+\alpha)^2}{2\sigma_w^2}\right);$$
otherwise, choose $\hat{\mathbf{a}} = -\begin{bmatrix} \alpha \\ \alpha \end{bmatrix}$. Since the dependence on the symbols is only in the exponent, the ML decision rule becomes: choose $\hat{\mathbf{a}} = \mathbf{a}$ if $\|\mathbf{y} - \mathbf{a}\|^2 < \|\mathbf{y} + \mathbf{a}\|^2$; otherwise choose $\hat{\mathbf{a}} = -\mathbf{a}$, where $\mathbf{a} = \begin{bmatrix} \alpha \\ \alpha \end{bmatrix}$. The ML decision regions for the soft-decision detection are shown in Figure 2, and a comparison of the two cases is shown in Figure 3.
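
The equivalence of the likelihood comparison and the squared-distance comparison can be spot-checked directly; in this sketch the noise level and the received pair $\mathbf{y}$ are arbitrary assumed values.

```python
import numpy as np

alpha, sigma_w = 1.0, 0.5            # assumed values for the check
a = np.array([alpha, alpha])
y = np.array([0.3, -0.1])            # an arbitrary received pair

def density(y, s):
    """2-D Gaussian p(y | s) with mean s and covariance sigma_w^2 * I."""
    return np.exp(-np.sum((y - s)**2) / (2 * sigma_w**2)) / (2 * np.pi * sigma_w**2)

ml = density(y, a) > density(y, -a)                 # likelihood comparison
nearest = np.sum((y - a)**2) < np.sum((y + a)**2)   # squared-distance comparison
assert ml == nearest                                # the two tests always agree
print("choose +a" if nearest else "choose -a")
```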


Let $D_{\mathbf{a}}$ and $D_{-\mathbf{a}}$ be the ML decision regions for the two respective (2-dimensional) symbols. The probability of detection error, given transmission of the symbol $\mathbf{a} = \begin{bmatrix} \alpha \\ \alpha \end{bmatrix}$, is then
$$P_{e|\mathbf{a}} = \int_{\mathbf{y} \in D_{-\mathbf{a}}} p(\mathbf{y} \mid \mathbf{a}) \, d\mathbf{y}.$$
Taking advantage of the circular symmetry of the 2-dimensional Gaussian probability density function, this integral is evaluated as
$$P_{e|\mathbf{a}} = Q\!\left(\frac{d_{\min}}{2\sigma_w}\right),$$
where $d_{\min} = 2\sqrt{2}\,\alpha$ is the distance between the two (2-dimensional) symbols. By symmetry, the probability of detection error, conditioned on transmitting the other symbol, is the same. The energy per symbol per dimension is still $E_s = \alpha^2$, so the soft-decision probability of (2-dimensional) symbol error becomes
$$P_e = Q\!\left(\sqrt{\frac{2E_s}{\sigma_w^2}}\right).$$
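
The last expression follows from $P_{e|\mathbf{a}} = Q(d_{\min}/2\sigma_w)$ by direct substitution of $d_{\min} = 2\sqrt{2}\,\alpha$ and $E_s = \alpha^2$:
$$Q\!\left(\frac{d_{\min}}{2\sigma_w}\right) = Q\!\left(\frac{\sqrt{2}\,\alpha}{\sigma_w}\right) = Q\!\left(\sqrt{\frac{2\alpha^2}{\sigma_w^2}}\right) = Q\!\left(\sqrt{\frac{2E_s}{\sigma_w^2}}\right).$$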

Note that this implies that soft-decision detection/decoding requires a factor of two (about 3 dB) less signal-to-noise ratio to achieve the same bit error probability as the hard-decision decoding case.
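
To make the comparison concrete, the following sketch tabulates both expressions over a few SNRs; by construction, the soft-decision error probability at a given $E_s/\sigma_w^2$ equals the hard-decision BER at twice that SNR.

```python
from math import erfc, sqrt

Q = lambda x: 0.5 * erfc(x / sqrt(2))

for snr_db in (0, 3, 6, 9):
    snr = 10**(snr_db / 10)              # E_s / sigma_w^2 (linear scale)
    hard = Q(sqrt(snr))                  # hard-decision decoded BER
    soft = Q(sqrt(2 * snr))              # soft-decision symbol error probability
    hard_2x = Q(sqrt(2 * snr))           # hard-decision BER at twice the SNR
    print(f"{snr_db:2d} dB: hard {hard:.3e}  soft {soft:.3e}  "
          f"hard@2xSNR {hard_2x:.3e}")   # soft always equals hard@2xSNR
```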
In summary, by using soft-decision decoding, the minimum distance between (multidimensional) symbols is increased, and hence the probability of ML decoding error is
reduced. This is accomplished at the expense of bandwidth. In the example considered, a
rate-1/2 code is used, so it takes two channel transmissions to send one information bit.
Problem 1. Suppose a (3, 1) repetition code is used with the binary PAM signaling. Find
the bit error probability for hard-decision and soft-decision decoding.
Problem 2. Suppose an (n, 1) repetition code is used with the binary PAM signaling. Find the bit error probability for hard-decision and soft-decision decoding.
Problem 3. An (8, 4) linear binary code (a shortened Hamming code) with $d_{\min} = 4$ is used with binary PAM. Find (approximate) bit error probability expressions for hard-decision and soft-decision decoding. What is the effective coding gain of the soft-decision decoding over the hard-decision decoding?
