
320 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 40, NO. 2, MARCH 1994

An Efficient Maximum-Likelihood-Decoding
Algorithm for Linear Block Codes with Algebraic
Decoder
Toshimitsu Kaneko, Toshihisa Nishijima, Member, IEEE,
Hiroshige Inazumi, and Shigeichi Hirasawa, Senior Member, IEEE

Abstract- A new soft decoding algorithm for linear block codes is proposed. The decoding algorithm works with any algebraic decoder and its performance is strictly the same as that of maximum-likelihood decoding (MLD). Since our decoding algorithm generates sets of different candidate codewords corresponding to the received sequence, its decoding complexity depends on the received sequence. We compare our decoding algorithm with Chase algorithm 2 and the Tanaka-Kakigahara algorithm, in which a similar method for generating candidate codewords is used. Computer simulation results indicate, for some signal-to-noise ratios (SNR), that our decoding algorithm requires less average complexity than those of the other two algorithms, while the performance of ours is always superior to those of the other two.

Index Terms- Maximum-likelihood decoding, soft decision decoding, generalized minimum distance decoding, linear block codes.

I. INTRODUCTION

Soft decision decoding is a decoding method to improve the reliability of coding systems by using more information than that of hard decision decoding. It is well known that maximum-likelihood decoding (MLD) is the best method in the sense of minimizing the probability of not decoding correctly, under the assumption that all codewords are transmitted with the same probability. However, since the complexity required for performing MLD grows exponentially with the code length n, studies on decoding methods for reducing the complexity of MLD have been developed [1].

On the other hand, studies of decoding methods which trade a slight degradation in performance for a reduction in decoding complexity have been reported [2]-[6]. These decoding methods are approximations of MLD with less complexity. The generalized minimum distance (GMD) decoding proposed by G. D. Forney [2] is in the class of decoding methods in which an algebraic decoder is used to generate a number of candidate codewords and these codewords are compared to the received sequence by the likelihood measure. Chase algorithms [4] are also in this class. GMD decoding and Chase algorithms are asymptotically optimum for high signal-to-noise ratio (SNR), but their performances are inferior to that of MLD for low SNR or practical code lengths. The central problem is to find an efficient technique that can generate a set of codewords that will contain, with high probability, the codeword that is most likely.

In this paper, we propose a new decoding algorithm which is in the class of decoding methods described above and works with any algebraic decoder for linear block codes. The probability that the most likely codeword is contained in the set of candidate codewords generated by our decoding algorithm is strictly equal to 1, so our decoding algorithm is MLD. Our decoding algorithm generates a larger set of candidates when a noisy sequence is received and a smaller set of candidates when a clean sequence is received, and thus reduces the average decoding complexity without loss of performance relative to MLD. The decoding complexity required by this algorithm depends on the properties of the received sequence.

We first assume that binary linear block codes are used. In Section II, we show one of the sufficient conditions that a codeword is most likely. In Section III, extending the results of Section II, we derive a sufficient condition that a set of codewords includes all the codewords that are more likely than a given codeword. We try to find a set with a smaller number of candidate codewords which is enough to perform MLD. In Section IV, we propose the new MLD algorithm. We compare the performance of this algorithm with similar decoding algorithms, though they are not MLD, by computer simulations assuming the additive white Gaussian noise (AWGN) channel. As a result, we conclude that for some values of SNR, less average complexity is required for performing MLD by the proposed algorithm. We also discuss the extension to q-ary (q > 2) linear block codes.

Manuscript received March 16, 1992. Some of the results in this paper were presented at the IEICE Technical Group on IT, Kobe, Japan, July 1991, and some were presented at the Symposium on Information Theory and Its Applications, Ibusuki, Japan, December 1991.
T. Kaneko was with the Department of Industrial Engineering and Management, School of Science and Engineering, Waseda University, Shinjuku, 169 Japan. He is now with the Communication and Information System Laboratory, R&D Center, Toshiba Corporation, Kawasaki, 210 Japan.
T. Nishijima was with the Department of Computer Science and Engineering, Kanagawa Institute of Technology, Atsugi, 243-02 Japan. He is now with the Department of Industrial and System Engineering, College of Engineering, Hosei University, Koganei, 184 Japan.
H. Inazumi was with the Department of Information Science, Shonan Institute of Technology, Fujisawa, 251 Japan. He is now with the Department of Industrial and Systems Engineering, Aoyama-gakuin University, Setagaya, 157 Japan.
S. Hirasawa is with the Department of Industrial Engineering and Management, School of Science and Engineering, Waseda University, Shinjuku, 169 Japan.
IEEE Log Number 9400603.

0018-9448/94$04.00 © 1994 IEEE

Authorized licensed use limited to: Southeast University. Downloaded on April 22,2021 at 12:42:44 UTC from IEEE Xplore. Restrictions apply.
KANEKO et al.: MAXIMUM-LIKELIHOOD-DECODING ALGORITHM FOR LINEAR BLOCK CODES 321

II. DECISION CRITERIA FOR THE MOST LIKELY CODEWORDS

Let C be a binary linear block code of length n, dimension k, and minimum distance d, denoted an (n, k, d) code. The codeword which is the output of the encoder is denoted by x = (x_1, x_2, ..., x_n) ∈ C, where x_i is an element of GF(2). Each x_i is transmitted over the AWGN channel.

At the receiver, the demodulator generates the reliability sequence α = (α_1, α_2, ..., α_n) from the received sequence y = (y_1, y_2, ..., y_n), where y_i is the received signal when x_i is transmitted. The reliability α_i matched to the AWGN channel is the bit log-likelihood ratio

  α_i = K ln [ p(y_i | x_i = 1) / p(y_i | x_i = 0) ]    (1)

where K is an arbitrary positive constant and p(y_i | x_i) is the probability of receiving y_i when x_i is transmitted. Then we can get the hard decision sequence y^H = (y_1^H, y_2^H, ..., y_n^H) of y,

  y_i^H = 1 if α_i > 0, and y_i^H = 0 otherwise,    (2)

where |α_i| indicates the reliability of y_i^H.

We now define the sets of positions U, S_x, and S_x^c. Let U be the set of all positions of a codeword, i.e., U = {1, 2, ..., n}, and divide the set U into S_x and S_x^c by the codeword x. If x_i = y_i^H then the position i belongs to S_x; otherwise the position i belongs to S_x^c, i.e., S_x = {i | x_i = y_i^H, i ∈ U} and S_x^c = {i | x_i ≠ y_i^H, i ∈ U}. Obviously, U = S_x + S_x^c.

Then the maximum-likelihood metric of a codeword x, denoted by L(y, x), can be presented as

  L(y, x) = Σ_{i∈S_x} |α_i| − Σ_{i∈S_x^c} |α_i|
          = Σ_{i∈U} |α_i| − 2 Σ_{i∈S_x^c} |α_i|.    (3)

Since the term Σ_{i∈U} |α_i| is independent of x, L(y, x) depends only on

  l(y, x) = Σ_{i∈S_x^c} |α_i|.    (4)

So our purpose is to find the codeword x from y which minimizes the value of l(y, x). When l(y, x) is given for an arbitrary codeword x, we are interested in the smallest value of l(y, x') over all x' ≠ x. If there exists x such that l(y, x) < min_{x'≠x} l(y, x'), then we can determine that x is most likely, and can terminate the algorithm. We can never know, however, the value of min_{x'≠x} l(y, x'). Minimizing ||S_{x'}^c|| and selecting the smallest |α_i|'s, we derive a lower bound on it, where ||S|| denotes the cardinal number of S.

First, we derive the minimum value of ||S_{x'}^c|| from the minimum distance d of the code. Since we select y^H as the first input sequence of the algebraic decoder for the proposed decoding algorithm described later, assume that we already know both x and x_0, where x_0 is the output of the algebraic decoder when y^H is the input sequence.

Let m = ||S_x^c||, h_1 = ||S_x ∩ S_{x'}^c||, 0 ≤ h_1 ≤ n − m, and h_2 = ||S_x^c ∩ S_{x'}||, 0 ≤ h_2 ≤ m, where x' is an arbitrary codeword such that x' ≠ x. Then from the fact that d_H(x', x) ≥ d, we have

  h_1 + h_2 ≥ d    (5)

where d_H(a, b) denotes the Hamming distance between a and b. Let m_0 = ||S_{x_0}^c||, assuming x' ≠ x_0. Since there are m − h_2 + h_1 positions which satisfy y_i^H ≠ x_i', and d_H(x', x_0) ≥ d, we also have

  m − h_2 + h_1 + m_0 ≥ d.    (6)

From (5) and (6), the following inequality is derived:

  ||S_x ∩ S_{x'}^c|| ≥ d − ⌊(m_0 + m)/2⌋    (7)

where ⌊a⌋ denotes the greatest integer less than or equal to a. Since ||S_{x'}^c|| ≥ ||S_x ∩ S_{x'}^c||, (7) is one of the lower bounds of ||S_{x'}^c||.

Next, we represent the reordered elements s_x^(i) in the set S_x as

  |α_{s_x(1)}| ≤ |α_{s_x(2)}| ≤ ... ≤ |α_{s_x(n−m)}|.    (8)

Then, from the above discussion,

  min_{x'≠x} l(y, x') ≥ Σ_{i=1}^{d−⌊(m_0+m)/2⌋} |α_{s_x(i)}|    (9)

is derived. This leads to the following lemma immediately.

Lemma 1: If the codeword x satisfies

  l(y, x) < Σ_{i=1}^{d−⌊(m_0+m)/2⌋} |α_{s_x(i)}|,    (10)

then there is no codeword which is more likely than x.

Proof: In the case x ≠ x_0, Lemma 1 is valid from the above discussion. So we prove the case x = x_0, m = m_0. From d_H(x', x_0) ≥ d, we have

  ||S_{x'}^c|| + m_0 ≥ d.    (11)

One of the lower bounds of ||S_{x'}^c|| in the case x = x_0 is therefore

  ||S_{x'}^c|| ≥ d − m_0.    (12)

Replacing m by m_0, d − m_0 and d − ⌊(m_0 + m)/2⌋ are the same. Thus, Lemma 1 is also given when x = x_0. □

Lemma 1 gives a decision criterion that a codeword is most likely. We can stop searching codewords when we find a codeword which satisfies (10).

Here, we must consider that we cannot always get the codeword x_0. That is the case in which the algebraic decoder fails to find a codeword x_0 which satisfies d_H(y^H, x_0) ≤ ⌊(d − 1)/2⌋. In this case, however, (10) still serves as a decision criterion as long as we replace m_0 by m.

The criterion in Lemma 1 can be derived from ideas similar to those used to derive the criterion in [6]. The latter is, with the notation used here, given by

  l(y, x) < Σ_{i=1}^{d−m} |α_{s_x(i)}|.    (13)
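To make the quantities of this section concrete, the following sketch (plain Python; the function names are ours, not the paper's) computes the hard decision sequence of (2), the metric l(y, x) of (4), and the right-hand side of the Lemma 1 criterion (10) for a given reliability sequence:

```python
def hard_decision(alpha):
    # Hard decision y^H of (2): bit i is 1 exactly when alpha_i > 0
    # (the sign convention matching Example 1 of Section IV).
    return [1 if a > 0 else 0 for a in alpha]

def metric_l(alpha, x):
    # l(y, x) of (4): sum of |alpha_i| over the disagreement set S_x^c.
    yH = hard_decision(alpha)
    return sum(abs(a) for a, xi, yi in zip(alpha, x, yH) if xi != yi)

def lemma1_bound(alpha, x, d, m0):
    # Right-hand side of (10): the d - floor((m0 + m)/2) smallest
    # reliabilities |alpha_i| taken over the agreement set S_x.
    yH = hard_decision(alpha)
    agree = sorted(abs(a) for a, xi, yi in zip(alpha, x, yH) if xi == yi)
    m = sum(1 for xi, yi in zip(x, yH) if xi != yi)
    return sum(agree[: max(0, d - (m0 + m) // 2)])
```

With the reliability sequence of Example 1 later in the paper, l(y, x_0) = 1.2 for the all-zero codeword while the bound is 0.3, so (10) correctly refuses to declare x_0 most likely.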


From the fact that m ≥ ⌊(m_0 + m)/2⌋, the criterion in Lemma 1 is superior to that in [6]. Moreover, the former works in the region m < 2d − m_0, while the criterion in [6] works in the region m < d, so our criterion also extends the region in which the criterion works. The reason is that in the former the condition d_H(x', x_0) ≥ d is also considered, while in the latter only the condition d_H(x', x) ≥ d is considered. However, when we use the criterion with the codeword x_0, and when we cannot obtain x_0, i.e., m = m_0, both are exactly the same.

III. GENERATING METHOD FOR THE SET OF CANDIDATES

If we generate the set of candidates by an algebraic decoder, which codeword is chosen as an element of the set depends on which sequence is selected as the input of the algebraic decoder. We assume that the input sequence of the algebraic decoder is the sum of the hard decision sequence y^H and an error sequence e = (e_1, e_2, ..., e_n) estimated at the receiver, i.e., y^H + e. In the following, we call the e's estimated error sequences. We now discuss the set of estimated error sequences which is enough to perform MLD.

Let C_x be the set of codewords which are more likely than x. Then C_x is enough to perform MLD; however, we cannot generate C_x exactly without referring to all codewords. So we define, instead, the sets of codewords L_i as follows. We select as the estimated error sequences all sequences which have any combination of 1's located in the i positions with the lowest values of |α_i|, and define the set L_i to be the set of codewords that are outputs of the algebraic decoder when the y^H + e are inputs. We represent the reordered elements u(i) in the set U as

  |α_{u(1)}| ≤ |α_{u(2)}| ≤ ... ≤ |α_{u(n)}|.    (14)

To generate L_i, we select the estimated error sequences e whose components e_{u(1)}, e_{u(2)}, ..., e_{u(i)} are 0 or 1 and e_{u(i+1)}, ..., e_{u(n)} are 0. Thus, there are 2^i patterns of e, and the algebraic decoder is used 2^i times.

From the definition, L_i satisfies

  L_0 ⊆ L_1 ⊆ L_2 ⊆ ... ⊆ L_n    (15)

where L_0 = {x_0}. Denoting the error-correcting capability of the code by t, the fundamental lemma for the proposed algorithm described later is as follows.

Lemma 2: If j ∈ U satisfies

  l(y, x) < Σ_{i=1}^{d−⌊(m_0+m)/2⌋−t−1} |α_{s_x(i)}| + Σ_{i=1}^{t+1} |α_{u(j+i)}|,    (16)

then C_x ⊆ L_j.

Proof: At the beginning, we note that the sets L_i satisfy (15). First, we show that a codeword x' which is an element of L_{j+1} − L_j satisfies x'_{u(j+1)} ≠ y^H_{u(j+1)} and must have t positions which satisfy x'_{u(i)} ≠ y^H_{u(i)} for i > j + 1. If x' is an element of L_{j+1}, there must be at most t positions which satisfy x'_{u(i)} ≠ y^H_{u(i)} for i > j + 1. And if x' is not an element of L_j, there must be at least t + 1 positions which satisfy x'_{u(i)} ≠ y^H_{u(i)} for i > j. These two facts lead to the condition above.

Next, using (7) and a manner similar to Lemma 1, a lower bound on min_{x'∈L_{j+1}−L_j} l(y, x'), where x' satisfies the above condition, is given by

  min_{x'∈L_{j+1}−L_j} l(y, x') ≥ Σ_{i=1}^{d−⌊(m_0+m)/2⌋−t−1} |α_{s_x(i)}| + Σ_{i=1}^{t+1} |α_{u(j+i)}|.    (17)

If x satisfies (16), then x is more likely than the codewords in L_{j+1} − L_j; consequently, (L_{j+1} − L_j) ∩ C_x = ∅. Moreover, since the right-hand side of (17) is monotonically increasing with j, (L_{j+2} − L_{j+1}) ∩ C_x = ∅, (L_{j+3} − L_{j+2}) ∩ C_x = ∅, ..., (L_n − L_{n−1}) ∩ C_x = ∅ also hold. Since L_n = C, we have C_x ⊆ L_j when (16) holds. □

If x is known, Lemma 2 clarifies whether L_j is enough to perform MLD or not. If we want to generate a set with a smaller number of candidates, such a set is given by L_T, where T is the smallest value of j which satisfies (16). Since the right-hand side of (16) is monotonically increasing with j, we can find the value of T by replacing j by 0, 1, 2, ..., n − t − 1, where the first value which satisfies (16) is equal to T.
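The search for T just described can be sketched directly (plain Python; the names are ours). rhs16 evaluates the right-hand side of (16) for a trial depth j, and smallest_T scans j = 0, 1, 2, ... for the first depth at which (16) holds:

```python
def rhs16(alpha, x, d, t, m0, j):
    # RHS of (16) at depth j: the d - floor((m0+m)/2) - (t+1) smallest
    # |alpha_i| over the agreement set S_x, plus |alpha| at the ordered
    # positions u(j+1), ..., u(j+t+1).
    yH = [1 if a > 0 else 0 for a in alpha]
    m = sum(1 for xi, yi in zip(x, yH) if xi != yi)
    agree = sorted(abs(a) for a, xi, yi in zip(alpha, x, yH) if xi == yi)
    ordered = sorted(abs(a) for a in alpha)      # |alpha_u(1)| <= |alpha_u(2)| <= ...
    head = sum(agree[: max(0, d - (m0 + m) // 2 - (t + 1))])
    return head + sum(ordered[j : j + t + 1])

def smallest_T(alpha, x, d, t, m0, l_x):
    # T = the smallest depth j satisfying (16); then C_x is contained
    # in L_T by Lemma 2.  l_x is the metric l(y, x) of the candidate.
    for j in range(len(alpha) - t):
        if l_x < rhs16(alpha, x, d, t, m0, j):
            return j
    return len(alpha)
```

For the (7, 4, 3) data of Example 1 in Section IV, the successive values of rhs16 are 0.3, 0.6, 1.1, 1.7, ..., reproducing the values quoted there, and smallest_T yields T = 3 for x_0 and T = 2 for x_1.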


IV. DECODING ALGORITHM

In this section, we introduce Chase algorithm 2 and the Tanaka-Kakigahara algorithm, in which a method for generating a set of estimated error sequences similar to that of our proposed algorithm is used. After that, we propose the new decoding algorithm.

Chase algorithm 2 [4]: For this algorithm, only error sequences with no more than ⌊(d − 1)/2⌋ errors located outside the set of ⌊d/2⌋ positions with the lowest values of |α_i| are considered. The set of estimated error sequences required by Chase algorithm 2 is given by letting e have any combination of 1's located in the ⌊d/2⌋ positions with the lowest values of |α_i|, i.e., all binary sequences whose components e_{u(1)}, e_{u(2)}, ..., e_{u(⌊d/2⌋)} are 0 or 1 and e_{u(⌊d/2⌋+1)}, ..., e_{u(n)} are 0.

Tanaka-Kakigahara algorithm [5]: The set required by the Tanaka-Kakigahara algorithm is given by letting e have any combination of 1's located in the positions in which the value of |α_i| is smaller than an arbitrary positive threshold θ. This algorithm can be considered an adaptive Chase algorithm 2, and the performance of this decoding algorithm changes with the value of the threshold θ. It should be noted that in this algorithm, for a fixed value of θ, there is no terminating criterion maintaining MLD.

Proposed algorithm: We assume an arbitrary codeword x is known. To find the codeword x' which minimizes l(y, x'), we search only codewords in C_x. Unfortunately, it is impossible to generate the set C_x without examining all codewords, so we search the codewords in L_T, which includes C_x. If we can find a codeword x' which satisfies l(y, x') < l(y, x) in L_T, then we search only candidates in the set L_{T'}, which includes C_{x'}. Note that C_{x'} is a subset of C_x and L_{T'} is a subset of L_T, i.e., T' ≤ T. If we can find a codeword x'' which satisfies l(y, x'') < l(y, x') in L_{T'}, then we search only candidates in the set L_{T''}, which includes C_{x''}. Note again that C_{x''} is a subset of C_{x'} and L_{T''} is a subset of L_{T'}, i.e., T'' ≤ T'. We repeat these procedures; thus, we replace L_T by the subset L_{T'}, next replace L_{T'} by the subset L_{T''}, and so on. We terminate this procedure when there are no remaining codewords to search. Finally, the set of codewords to search becomes L_{T*}, where T* is the smallest value of j which satisfies (16) when x is the most likely codeword. We can also terminate when the candidate is determined to minimize l(y, x) by Lemma 1.

Since the set of codewords to search is replaced by L_T having smaller T, and the L_i hold the relation (15), we decide the order of search as L_0, L_1 − L_0, L_2 − L_1, .... Defining the ordered estimated error sequences e^(i) = (e_1^(i), e_2^(i), ..., e_n^(i)), i = 0, 1, ..., 2^n − 1, satisfying

  Σ_{j=1}^{n} e_{u(j)}^(i) · 2^{j−1} = i,  e_j^(i) ∈ {0, 1},    (18)

the algebraic decoder generates the set L_j by selecting y^H + e^(i), i = 0, 1, ..., 2^j − 1, as inputs. Selecting e^(i) in the order i = 0, 1, 2, ... is equivalent to searching codewords in the order L_0, L_1 − L_0, L_2 − L_1, ....

Thus, the decoding algorithm proposed in this paper is described as follows.

procedure Decoding Algorithm
begin
  i := 0;
  T := n;
  l(y, x̃) := ∞;
  while i ≤ 2^T − 1 do
  begin
    Decode y^H + e^(i) using the algebraic decoder;
    if the algebraic decoder succeeds in finding a codeword x and l(y, x) < l(y, x̃) then
    begin
      x̃ := x;
      if x̃ satisfies (10) then Halt (the estimate of the codeword is x̃);
      Calculate T (= the smallest value of j satisfying (16) with x̃);
    end;
    i := i + 1;
  end;
  Halt (the estimate of the codeword is x̃);
end;

Lemma 1 and Lemma 2 lead to the following theorem without proof.

Theorem 1: The decoding algorithm described above performs strictly MLD.

Example 1: Let the code C be the (7, 4, 3) code, and assume that α = (−0.2, −1.0, 1.2, −0.4, −0.1, −0.7, −1.6) is received. Then we have y^H = (0, 0, 1, 0, 0, 0, 0).

First, we choose the estimated error sequence e^(0) = (0, 0, 0, 0, 0, 0, 0). We decode y^H + e^(0) = y^H using the algebraic decoder and obtain the codeword x_0 = (0, 0, 0, 0, 0, 0, 0). Then l(y, x_0) = 1.2. Since m_0 = m = 1, the right-hand side of (10) is |−0.1| + |−0.2| = 0.3, and (10) does not hold because 1.2 > 0.3. Thus, it cannot be determined that x_0 is most likely. Replacing j by 0, 1, 2, ..., the right-hand side of (16) takes the values 0.3, 0.6, 1.1, 1.7, .... Since the minimum value of j satisfying (16) is 3, T = 3 and e = (*, 0, 0, *, *, 0, 0) is selected as the estimated error sequences, where * is 0 or 1. The ordered estimated error sequences are e^(1) = (0, 0, 0, 0, 1, 0, 0), e^(2) = (1, 0, 0, 0, 0, 0, 0), e^(3) = (1, 0, 0, 0, 1, 0, 0), ..., e^(7) = (1, 0, 0, 1, 1, 0, 0).

Second, we decode y^H + e^(1) using the algebraic decoder and obtain the codeword x_1 = (0, 0, 1, 0, 1, 1, 0). x_1 is more likely than x_0 because l(y, x_1) = 0.8. With m = 2, the right-hand side of (10) is |−0.2| + |−0.4| = 0.6; since 0.8 > 0.6, it cannot be determined that x_1 is most likely. Since T = 2 for the codeword x_1, e = (*, 0, 0, 0, *, 0, 0) is selected as the estimated error sequences. The remaining ordered estimated error sequences are e^(2) and e^(3) only.

Third, we decode y^H + e^(2) using the algebraic decoder and obtain x_2 = (1, 0, 1, 1, 0, 0, 0). Since l(y, x_2) = 0.6, x_2 is more likely than x_1. The right-hand side of (10) is |−0.1| + |−0.7| = 0.8, and since 0.6 < 0.8, we can decide from Lemma 1 that x_2 is the most likely codeword. Thus, we select x_2 as the output and terminate the algorithm.

In this example, Chase algorithm 2 and the Tanaka-Kakigahara algorithm with θ = 0.15 select e = (0, 0, 0, 0, *, 0, 0) as the estimated error sequences and fail to find the most likely codeword x_2. The Tanaka-Kakigahara algorithm with θ = 0.3 selects e = (*, 0, 0, 0, *, 0, 0) and can find x_2, using the algebraic decoder 4 times.

Example 2: Let the code C be the (7, 4, 3) code, and assume that α = (−1.0, −1.1, 0.2, −0.4, −0.1, −0.5, −1.4) is received. Our decoding algorithm decodes y^H + e^(0) first and gets x_0 = (0, 0, 0, 0, 0, 0, 0). Since x_0 satisfies (10) of Lemma 1, we can terminate after the first decoding step, while Chase algorithm 2 and the Tanaka-Kakigahara algorithm with θ = 0.15 select e = (0, 0, 0, 0, *, 0, 0) as estimated error sequences and decode y^H + e using the algebraic decoder twice. The Tanaka-Kakigahara algorithm with θ = 0.3 selects e = (0, 0, *, 0, *, 0, 0) as estimated error sequences and decodes them using the algebraic decoder 4 times.
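The whole procedure can be sketched end to end. The sketch below uses our own naming and accepts any bounded-distance decoder (a function returning a codeword, or None on failure); it follows the loop of the pseudocode, decoding y^H + e^(i) in the order of (18), keeping the best candidate, halting by Lemma 1, and shrinking the search depth T via (16). To keep the sketch self-contained we use majority-vote decoding of the (3, 1, 3) repetition code as a stand-in algebraic decoder:

```python
def mld_decode(alpha, algebraic_decode, d, t):
    # Sketch of the proposed procedure (our naming).  `algebraic_decode`
    # maps a hard binary sequence to a codeword, or None on failure.
    n = len(alpha)
    yH = [1 if a > 0 else 0 for a in alpha]
    order = sorted(range(n), key=lambda i: abs(alpha[i]))   # u(1), u(2), ...
    sorted_abs = sorted(abs(a) for a in alpha)

    def l_metric(x):                                        # eq. (4)
        return sum(abs(a) for a, xi, yi in zip(alpha, x, yH) if xi != yi)

    def head(x, m0, extra):            # first sum of (10) / (16) over S_x
        agree = sorted(abs(a) for a, xi, yi in zip(alpha, x, yH) if xi == yi)
        m = sum(1 for xi, yi in zip(x, yH) if xi != yi)
        return sum(agree[: max(0, d - (m0 + m) // 2 - extra)])

    best, best_l, m0 = None, float("inf"), None
    T, i = n, 0
    while i <= 2 ** T - 1:
        trial = list(yH)
        for j in range(n):             # e^(i): binary digits of i, eq. (18)
            if (i >> j) & 1:
                trial[order[j]] ^= 1
        x = algebraic_decode(trial)
        if i == 0 and x is not None:   # m0 from the decode of y^H itself
            m0 = sum(1 for xi, yi in zip(x, yH) if xi != yi)
        if x is not None and l_metric(x) < best_l:
            best, best_l = x, l_metric(x)
            m_eff = m0 if m0 is not None else sum(
                1 for xi, yi in zip(best, yH) if xi != yi)  # replace m0 by m
            if best_l < head(best, m_eff, 0):               # Lemma 1, eq. (10)
                return best
            for j in range(n - t):                          # new T via (16)
                if best_l < head(best, m_eff, t + 1) + sum(sorted_abs[j:j + t + 1]):
                    T = j
                    break
        i += 1
    return best

def majority(v):
    # Stand-in algebraic decoder: the (3, 1, 3) repetition code, t = 1.
    b = 1 if sum(v) >= 2 else 0
    return [b, b, b]
```

With a true bounded-distance decoder for the (7, 4, 3) code, this loop performs exactly the three decodings of Example 1; the repetition code merely keeps the sketch runnable on its own.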


Fig. 1. Comparisons of performance over the AWGN channel.

V. COMPUTER SIMULATION RESULTS

In this section, we compare by computer simulations the performances of the various decoding algorithms for binary antipodal signals over the AWGN channel, using the (23, 12, 7) Golay code and three BCH codes of code length 15, where the number of samples for these simulations is 100 000. Under this condition, the received signal y_i is given by y_i = √E_s + z_i when x_i = 0 and y_i = −√E_s + z_i when x_i = 1, where E_s is the energy per bit of the channel input and the z_i are identically distributed Gaussian random variables with mean 0 and variance σ² = N_0/2, N_0 being the noise spectral density. The SNR for the channel is γ = E_s/N_0, and the SNR per transmitted information bit is γ_b = γ · n/k.

We compare three decoding algorithms: our decoding algorithm, Chase algorithm 2, and the Tanaka-Kakigahara algorithm. The reason why we compare our decoding algorithm with these algorithms, though they are not MLD, is that these algorithms use similar methods for generating the estimated error sequences. The performance of the Tanaka-Kakigahara algorithm changes with the value of the threshold θ, so we evaluate it with different values of θ.

The performances of these algorithms are shown in Fig. 1. Our algorithm gives the smallest block error probability of the three. This is obvious because our decoding algorithm is MLD and the others are not.

Before we discuss the complexity required by these algorithms, we must define a measure of the complexity. Most of the complexity required by decoding algorithms in the class that uses an algebraic decoder to generate a set of candidate codewords is the computational complexity of the algebraic decoder. We define the decoding complexity to be the number of times the algebraic decoder is used.

Fig. 2 shows the average complexity required by the three decoding algorithms. Our algorithm requires less average complexity than both Chase algorithm 2 and the Tanaka-Kakigahara algorithm above γ_b = 1 to 2 dB. In this range of SNR, our decoding algorithm gives the smallest block error probability and simultaneously requires the least average complexity. These results show that our decoding algorithm can generate small sets of candidate codewords which include the most likely codeword. In the range below γ_b = 1 to 2 dB, our decoding algorithm requires more complexity than Chase algorithm 2 or the Tanaka-Kakigahara algorithm. By increasing the value of θ, the Tanaka-Kakigahara algorithm improves performance while requiring more complexity, without guaranteeing the performance of MLD. Since our decoding algorithm guarantees MLD, as the SNR becomes lower and lower, our decoding algorithm comes closer and closer to conventional MLD.

Next, we consider the performance with limited complexity, that is, the case in which the number of decoding times of the algebraic decoder is truncated at some value. This is of practical meaning because the decoding delay tolerable in the coding system is taken into account. Denote the case in which the algebraic decoder performs at most 2^i times by limit_T = i.


Fig. 2. Comparisons of average decoding complexity over the AWGN channel.
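The channel model of this section can be sketched as follows (plain Python; the helper name and the constant K = 1 are our choices). x_i = 0 is sent as +√E_s and x_i = 1 as −√E_s, the noise has variance N_0/2, and α_i is the resulting log-likelihood ratio, positive exactly when the hard decision is 1:

```python
import math
import random

def awgn_reliabilities(bits, Es, N0, K=1.0, seed=1):
    # BPSK over AWGN as in Section V: x_i = 0 -> +sqrt(Es), x_i = 1 -> -sqrt(Es),
    # Gaussian noise with mean 0 and variance N0/2.  Returns (y, alpha), where
    # alpha_i = K * ln p(y_i | 1) / p(y_i | 0) = -2 K sqrt(Es) y_i / sigma^2.
    rng = random.Random(seed)
    sigma = math.sqrt(N0 / 2.0)
    y = [(1.0 if b == 0 else -1.0) * math.sqrt(Es) + rng.gauss(0.0, sigma)
         for b in bits]
    alpha = [-2.0 * K * math.sqrt(Es) * yi / sigma ** 2 for yi in y]
    return y, alpha
```

The per-information-bit SNR of the text is then γ_b = (E_s/N_0) · n/k for an (n, k, d) code.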

The performance of our decoding algorithm with such limited complexity is shown in Fig. 3. With the limitation limit_T = 0, our decoding algorithm obviously gives the same block error probability as hard decision decoding. And with the limitation limit_T = ⌊d/2⌋, our decoding algorithm gives the same block error probability as Chase algorithm 2. However, our decoding algorithm requires less average complexity than Chase algorithm 2, because our decoding algorithm generates at most 2^⌊d/2⌋ estimated error sequences and uses the algebraic decoder at most 2^⌊d/2⌋ times, while Chase algorithm 2 always generates 2^⌊d/2⌋ estimated error sequences and always uses the algebraic decoder 2^⌊d/2⌋ times.

To evaluate the proposed algorithm, we also ran simulations for longer codes, the (128, 64, 22) extended BCH code and the (256, 139, 32) extended BCH code, where the number of samples for these simulations is 50 000. The results are shown in Tables I and II, where N_ave is the average number of times the algebraic decoder is used.

Discussion: We have discussed only time complexity as decoding complexity, but space complexity must also be considered. Our decoding algorithm always requires, as space complexity, only one codeword to be stored on a list.

Recently, another approach to performing MLD with small average complexity was reported [7]. That decoding algorithm requires few codewords whose metrics must be compared at low SNR, but it requires many codewords to be stored on a list.

In addition, we show in Tables III-V the maximum number of times the algebraic decoders are used to decode y during the computer simulations, where N_max is the maximum number of decoding times of the algebraic decoder. This result is just one example, because the probability that the adaptive decoding algorithms generate all codewords is not 0.


TABLE I
AVERAGE DECODING COMPLEXITY FOR THE (128, 64, 22) CODE

SNR [dB]   5.0      5.5    6.0    6.5    7.0
N_ave      283.13   4.48   1.08   1.00   1.00

TABLE II
AVERAGE DECODING COMPLEXITY FOR THE (256, 139, 32) CODE

SNR [dB]   5.5     6.0    6.5    7.0
N_ave      76.21   1.02   1.00   1.00

TABLE III
MAXIMUM DECODING COMPLEXITY FOR THE (23, 12, 7) GOLAY CODE

SNR [dB]   0.0    1.0    2.0   3.0   4.0
N_max      1024   1024   512   256   256

TABLE IV
MAXIMUM DECODING COMPLEXITY FOR THE (128, 64, 22) CODE

SNR [dB]   5.0       5.5     6.0    6.5   7.0
N_max      2097152   32768   1024   5     1

TABLE V
MAXIMUM DECODING COMPLEXITY FOR THE (256, 139, 32) CODE

SNR [dB]   5.5       6.0    6.5   7.0
N_max      2097152   1024   1     1

Fig. 3. Performance with the limited complexity.

VI. EXTENSION TO q-ARY LINEAR CODES

We represent the reliability sequence α for q-ary codes as

  α = (α_1, α_2, ..., α_n)    (19)

where each

  α_i = (α_{i,0}, α_{i,1}, ..., α_{i,q−1})    (20)

and

  α_{i,j} = K ln [ p(y_i | j) / Σ_{l≠j} p(y_i | l) ],  j ∈ {0, 1, ..., q − 1},    (21)

where q is a prime power and α_{i,j} indicates the reliability of the symbol j at the position i. Denote the greatest component of α_i by α_i^{1st} and the second greatest component by α_i^{2nd}. Then the maximum-likelihood metric L(y, x) for q-ary codes is calculated as

  L(y, x) = Σ_{i∈U} α_{i,x_i} = Σ_{i∈U} α_i^{1st} − Σ_{i∈S_x^c} (α_i^{1st} − α_{i,x_i}).    (22)

So L(y, x) depends, as in the binary case, only on

  l(y, x) = Σ_{i∈S_x^c} (α_i^{1st} − α_{i,x_i}).    (23)

This definition includes the binary case. Indeed, in the binary case,

  α_i^{1st} − α_{i,x_i} = 2|α_i|    (24)

holds when the position i belongs to S_x^c, which corresponds to the definition of the binary case.

We now derive the lower bound of min_{x'≠x} l(y, x'), where x' is an arbitrary codeword such that x' ≠ x, in the same way as in the binary case. For q-ary codes, (5) and (6) are not always true. This is because in the q-ary case the error value is not unique (in the binary case, the error value must be 1). From d_H(x', x_0) ≥ d, we have

  ||S_{x'}^c|| + m_0 ≥ d.    (25)

Then we derive one of the lower bounds on ||S_{x'}^c||:

  ||S_{x'}^c|| ≥ d − m_0.    (26)

Here, we note that

  l(y, x') ≥ Σ_{i∈S_x∩S_{x'}^c} (α_i^{1st} − α_i^{2nd}).    (27)

We represent the reordered elements s_x^(i) in S_x as

  α_{s_x(1)}^{1st} − α_{s_x(1)}^{2nd} ≤ α_{s_x(2)}^{1st} − α_{s_x(2)}^{2nd} ≤ ... ≤ α_{s_x(n−m)}^{1st} − α_{s_x(n−m)}^{2nd}.    (28)

Then one of the lower bounds is derived as

  min_{x'≠x} l(y, x') ≥ Σ_{i=1}^{d−m_0} (α_{s_x(i)}^{1st} − α_{s_x(i)}^{2nd}).    (29)

The lemma for q-ary codes similar to Lemma 1 is as follows.


Lemma 1': If the codeword x satisfies

  l(y, x) < Σ_{i=1}^{d−m_0} (α_{s_x(i)}^{1st} − α_{s_x(i)}^{2nd}),    (30)

then there is no codeword which is more likely than x.

Next, we must define the sets L_i for q-ary codes. We select as the estimated error sequences all sequences e which have any combination of elements of GF(q) located in the i positions with the lowest values of α_i^{1st} − α_i^{2nd}. Then, as in the binary case, define the set L_i to be the set of codewords that are outputs of the algebraic decoder when the y^H + e are inputs. To generate L_i for q-ary codes, we select as the estimated error sequences all sequences e whose elements e_{u(1)}, e_{u(2)}, ..., e_{u(i)} are elements of GF(q) and e_{u(i+1)}, ..., e_{u(n)} are 0. So there are q^i patterns of e; hence, the algebraic decoder is used q^i times.
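The q^i patterns just described can be enumerated directly; the sketch below (our naming) writes the pattern index in base q over the i least-reliable positions, mirroring the binary ordering of (18):

```python
def qary_error_sequences(gap, q, depth):
    # Enumerate the q**depth estimated error sequences: every combination of
    # GF(q) values (represented as integers 0..q-1) on the `depth` positions
    # with the smallest reliability gap alpha_1st - alpha_2nd, zeros elsewhere.
    n = len(gap)
    order = sorted(range(n), key=lambda i: gap[i])   # u(1), u(2), ...
    for idx in range(q ** depth):
        e, v = [0] * n, idx
        for j in range(depth):                       # base-q digits of idx
            e[order[j]] = v % q
            v //= q
        yield e
```

For an actual q-ary decoder, the nonzero symbol values would be added in GF(q) to y^H at those positions before each decoding attempt.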
Lemma 2 for q-ary codes is as follows.

Lemma 2': If j ∈ U satisfies

  l(y, x) < Σ_{i=1}^{d−m_0−t−1} (α_{s_x(i)}^{1st} − α_{s_x(i)}^{2nd}) + Σ_{i=1}^{t+1} (α_{u(j+i)}^{1st} − α_{u(j+i)}^{2nd}),    (31)

then C_x ⊆ L_j.

Thus, the decoding algorithm for q-ary codes is given by replacing (10) and (16) by (30) and (31), respectively. The components of the ordered estimated error sequences e^(i), i = 0, 1, ..., q^n − 1, are given so that

  Σ_{j=1}^{n} e_{u(j)}^(i) · q^{j−1} = i,  e_j^(i) ∈ {0, 1, ..., q − 1}.    (32)

VII. CONCLUSION

The new MLD algorithm proposed in this paper is in the class of decoding methods in which the algebraic decoder is used to generate a number of candidate codewords. This decoding algorithm uses a method for generating candidate codewords similar to those of Chase algorithm 2 and the Tanaka-Kakigahara algorithm. One of the differences between these algorithms is that our decoding algorithm generates a set of candidate codewords in which the most likely codeword must be included; consequently, our decoding algorithm is MLD while the others are not. Another difference is that our decoding algorithm and the Tanaka-Kakigahara algorithm are adaptive while Chase algorithm 2 is not. The complexity of an adaptive decoding algorithm depends on the received sequence.

We show, by computer simulations, that our decoding algorithm requires less average complexity than Chase algorithm 2 and the Tanaka-Kakigahara algorithm for some SNR, while the block error probability of ours is always superior to the others. However, it is unavoidable that the complexity of these decoding algorithms increases exponentially with the code length n. Further investigations for this research include calculating the computational complexity of the proposed algorithm analytically.

REFERENCES

[1] J. K. Wolf, "Efficient maximum likelihood decoding of linear block codes using a trellis," IEEE Trans. Inform. Theory, vol. IT-24, pp. 76-80, Jan. 1978.
[2] G. D. Forney, Jr., "Generalized minimum distance decoding," IEEE Trans. Inform. Theory, vol. IT-12, pp. 125-131, Apr. 1966.
[3] G. D. Forney, Jr., Concatenated Codes, MIT Research Monograph No. 37. Cambridge, MA: MIT Press, 1966.
[4] D. Chase, "A class of algorithms for decoding block codes with channel measurement information," IEEE Trans. Inform. Theory, vol. IT-18, pp. 170-182, Jan. 1972.
[5] H. Tanaka and K. Kakigahara, "Simplified correlation decoding by selecting possible codewords using erasure information," IEEE Trans. Inform. Theory, vol. IT-29, pp. 743-748, Sept. 1983.
[6] D. J. Taipale and M. B. Pursley, "An improvement to generalized-minimum-distance decoding," IEEE Trans. Inform. Theory, vol. 37, pp. 167-172, Jan. 1991.
[7] Y. S. Han and C. R. P. Hartmann, "Designing efficient maximum-likelihood soft-decision decoding algorithms for linear block codes using algorithm A*," private communication, 1992.
