
IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 42, NO. 2/3/4, FEBRUARY/MARCH/APRIL 1994

List Viterbi Decoding Algorithms with Applications

Nambirajan Seshadri, Member, IEEE, and Carl-Erik W. Sundberg, Fellow, IEEE

Abstract—A list Viterbi decoding algorithm (LVA) produces a rank ordered list of the L globally best candidates after a trellis search. Here, we present two such algorithms: (i) a parallel LVA that simultaneously produces the L best candidates, and (ii) a serial LVA that iteratively produces the kth best candidate based on knowledge of the previously found k−1 best paths. The application of the LVA to a concatenated communication system consisting of an inner convolutional code and an outer error detecting code is considered in detail. Analysis as well as simulation results show that significant improvement in error performance is obtained when the inner decoder, which is conventionally based on the Viterbi algorithm (VA), is replaced by the LVA. An improvement of up to 3 dB is obtained for the additive white Gaussian noise (AWGN) channel due to an increase in the minimum Euclidean distance. Even larger gains are obtained for the Rayleigh fading channel due to an increase in the time diversity. It is also shown that a 10% improvement in throughput is obtained, along with a significantly reduced probability of decoding failure, for a hybrid FEC/ARQ scheme with the inner code being a rate compatible punctured convolutional (RCPC) code.

1. INTRODUCTION

The Viterbi algorithm (VA) [1], [2] performs efficient maximum likelihood decoding of finite state signals observed in noise. In many situations, the state space is either too large to search or only a part of the signal is generated by a finite state machine. Two examples of such communication systems are shown in Figure 1. The first one is

[Figure 1: (i) Speech Encoder → Convolutional Encoder → (channel) → Viterbi Decoder → Speech Decoder; (ii) Error Detection Encoder → Convolutional Encoder → (channel) → Viterbi Decoder → Syndrome Decoder.]

Fig. 1. Examples of communication systems with concatenated decoders.

a concatenated coding system consisting of an outer error detecting code and an inner error correcting convolutional code. The second one is an outer speech coder followed by an inner error correcting convolutional code. In the first example, the joint state space can be defined, but it may be too large to search. In the second example, the combined source-channel coder state space is not precisely defined. Conventionally, the inner code is decoded first under the assumption that the input to the inner code is independent and identically distributed data. This is then followed by outer decoding. The inner decoder, which is normally based on the Viterbi algorithm (VA), searches for the best path through a trellis that is defined by the inner code. Considerable improvement in performance is obtained over this conventional decoding approach when knowledge of the L > 1 best paths through this trellis is utilized during subsequent decoding.

Various generalizations of the VA have appeared in the literature. Forney [3] considered a list-of-2 maximum likelihood decoder for the purpose of analyzing sequential decoding algorithms [4]. Yamamoto and Itoh (Y-I) [5] proposed an ARQ algorithm where the decoder requests a frame repeat whenever the best path into every state at some trellis level is "too close" to the second best candidate into that state. However, they do not explicitly make use of the second best candidate as we do. More recently, Hashimoto [6] has proposed a list type reduced-constraint generalization of the Viterbi algorithm which contains the Viterbi and the M-algorithm [7] as special cases. The purpose of this algorithm is to keep the decoding complexity no higher than that of the Viterbi algorithm and to avoid error propagation due to reduced state decoding. Again, no explicit use of the survivors other than the best is made after decoding. The Y-I algorithm has been successfully employed in a concatenated coding scheme by Deng and Costello [8].

Paper approved by Sergio Benedetto, the Editor for Signal Design, Modulation and Detection of the IEEE Communications Society. Manuscript received March 4, 1991; revised March 31, 1992. This paper was presented in part at the Conference on Information Sciences and Systems, Princeton, NJ, March 1990, and the IEEE Global Telecommunications Conference, San Diego, CA, November 1989. AT&T Bell Laboratories, Signal Processing Research Department, 600 Mountain Avenue, Murray Hill, NJ 07974. IEEE Log Number 9400913.
The inner code is a convolutional code decoded with the Y-I algorithm, and the outer decoder is an errors-and-erasures decoder where the symbol erasure information is supplied by the Y-I algorithm. List decoding of block codes for the binary symmetric channel has recently been studied by Elias [9]. That work contains an extended reference list on list decoding of block codes for this channel. A new soft output Viterbi algorithm, which gives analog reliability information associated with each decoded symbol from the VA, has been proposed by Hagenauer and Hoeher [10], [11]. Such algorithms are typically used in the inner decoding stage.

© 1994 IEEE

Here, we present two LVAs¹ that produce a rank ordered list of the L globally best candidates after a trellis search. The two algorithms are (i) a parallel LVA that simultaneously produces the L best candidates, and (ii) a serial LVA that iteratively produces the kth best candidate based on the knowledge of the previously found k−1 candidates. For the parallel algorithm, the task of identifying the L best candidates is achieved in one pass through the trellis, while the serial algorithm achieves the same result by L successive passes through the trellis. We consider in detail the application of this algorithm to a concatenated communication system consisting of an inner forward error correction (FEC) code and an outer (ideal) error detecting code. Here, the inner code is either a fixed rate convolutional code or an incremental redundancy transmission code such as the rate compatible punctured convolutional (RCPC) code [13], and the outer code is assumed to be an ideal error detecting code. Asymptotic error performance and simulation results are presented for the Gaussian and Rayleigh fading channels. For a textbook treatment of hybrid FEC/ARQ schemes, we refer the reader to [12]. Later in the text, we briefly consider the application of this algorithm to other concatenated communication systems.

The parallel and serial LVAs, along with implementation and complexity details, are described in Section 2. Section 3 considers the application of the LVA to a hybrid forward error correction and automatic repeat request (FEC/ARQ) data transmission scheme. Section 4 concludes the work.

¹We have used the term generalized Viterbi algorithm (GVA) instead of list Viterbi algorithm (LVA) in conference versions of this paper.

2. LIST VITERBI DECODING ALGORITHMS

In this section we present the parallel and serial LVAs. We begin by summarizing the Viterbi algorithm, which also establishes the notation to be used subsequently.

2.1. Summary of the Viterbi Algorithm

An N state fully connected trellis is shown in Figure 2. (Some of the connections are non-existent for rates less than log2 N bits/state transition.) The total number of sequences is N^M, where M is the total number of such trellis sections. The incremental cost (metric) associated with moving from state j at time t−1 to state i at time t is given by c_t(j, i), where c_t(j, i) = ∞ if j and i are not connected. Let the minimum cost to reach state j at time t from the known starting state be φ_t(j). Let the history of the best path be stored in the array ξ_t(j); ξ_t(j) is the state occupied at time t−1 by the best path into state j at time t.

Fig. 2. Fully connected trellis with N states.

The problem is to find the best state sequence (with minimum cost) through the trellis, starting from state 1 (at time 0) and ending at state 1 (at time M). The Viterbi algorithm can be summarized as follows:

1. Initialization (t = 1):
   φ_1(i) = c_1(1, i), 1 ≤ i ≤ N.  (1)

2. Recursion (1 < t < M):
   φ_t(i) = min_{1≤j≤N} [φ_{t−1}(j) + c_t(j, i)],  (2)
   ξ_t(i) = arg min_{1≤j≤N} [φ_{t−1}(j) + c_t(j, i)], 1 ≤ i ≤ N.  (3)

3. Termination (t = M):
   φ_M(1) = min_{1≤j≤N} [φ_{M−1}(j) + c_M(j, 1)],  (4)
   ξ_M(1) = arg min_{1≤j≤N} [φ_{M−1}(j) + c_M(j, 1)].  (5)

4. Path Backtracking: The best state sequence is recovered recursively as j_t = ξ_{t+1}(j_{t+1}), t = M−1, M−2, …, 1.
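The four steps above can be sketched in code. This is an illustrative implementation, not the authors' own; the branch metrics are supplied as a function `c(t, j, i)` (a hypothetical interface chosen for the sketch), with states numbered 1..N as in the text.

```python
import math

def viterbi(N, M, c):
    """Minimum-cost state sequence through a fully connected N-state
    trellis of M sections, starting and ending at state 1.
    c(t, j, i) is the branch metric for moving from state j at time
    t-1 to state i at time t (math.inf if j and i are not connected)."""
    # Initialization (t = 1): phi[i] is the best cost into state i.
    phi = {i: c(1, 1, i) for i in range(1, N + 1)}
    back = {}  # back[(t, i)] = predecessor of the best path into i at t
    # Recursion (1 < t < M) and termination (t = M, forced into state 1).
    for t in range(2, M + 1):
        targets = [1] if t == M else range(1, N + 1)
        new_phi = {}
        for i in targets:
            j_best = min(range(1, N + 1), key=lambda j: phi[j] + c(t, j, i))
            new_phi[i] = phi[j_best] + c(t, j_best, i)
            back[(t, i)] = j_best
        phi = new_phi
    # Path backtracking from state 1 at time M to state 1 at time 0.
    path = [1]
    for t in range(M, 1, -1):
        path.append(back[(t, path[-1])])
    path.append(1)
    return phi[1], list(reversed(path))
```

The returned sequence lists the states at times 0 through M; the dictionary `back` plays the role of the array ξ_t(j).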

2.2. Parallel LVA

The parallel LVA finds the L best paths simultaneously by computing the L best paths into each state at every time instant. Let φ_t(i, k), 1 ≤ k ≤ L, be the kth lowest cost to reach state i at time t from state 1 at time 0. Let ξ_t(i, k) be the state (and r_t(i, k) the corresponding ranking) at time t−1 of the kth best path into state i at time t. Below, min(k) denotes the kth smallest value.

1. Initialization (t = 1):
   φ_1(i, 1) = c_1(1, i), φ_1(i, k) = ∞ for 1 < k ≤ L, 1 ≤ i ≤ N.  (6)

2. Recursion (1 < t < M): for 1 ≤ i ≤ N, 1 ≤ k ≤ L,
   φ_t(i, k) = min(k)_{1≤j≤N, 1≤l≤L} [φ_{t−1}(j, l) + c_t(j, i)],
   ξ_t(i, k) = j*, r_t(i, k) = l*,
   where (j*, l*) = arg min(k)_{1≤j≤N, 1≤l≤L} [φ_{t−1}(j, l) + c_t(j, i)].  (7)
   Here j* is the predecessor state for the kth best path into state i, and l* is the corresponding ranking.

3. Termination (t = M): for 1 ≤ k ≤ L,
   φ_M(1, k) = min(k)_{1≤j≤N, 1≤l≤L} [φ_{M−1}(j, l) + c_M(j, 1)].

4. Path Backtracking: The kth best state sequence is recovered as
   j_{M−1} = ξ_M(1, k), l_{M−1} = r_M(1, k),
   j_t = ξ_{t+1}(j_{t+1}, l_{t+1}), l_t = r_{t+1}(j_{t+1}, l_{t+1}).  (8)

The parallel algorithm is illustrated in Figure 3: at each state and at every time, the N L accumulated costs are computed and the L smallest accumulated cost paths, along with their costs, are stored. This algorithm requires maintaining a cost array of N L accumulated costs and a state array of size N L × M which stores the path history for each time instant. A simplified parallel list-of-2 VA can be found in [14].

It is easy to modify the parallel algorithm to operate in a continuous transmission mode rather than in a block mode, by maintaining the L best paths into each state and by releasing, at each instant, the L best symbols (after tracking back D_p symbols, where D_p is the decoding depth) corresponding to the best among the N L survivors, the second best among the N L survivors, etc.

Fig. 3. Dynamic programming for finding the L best paths implemented in a parallel manner.

2.3. Serial LVA

The serial algorithm finds the L most likely candidates, one at a time, beginning with the most likely path. The main benefit of this algorithm is that the kth best candidate is computed only when the previously found k−1 candidates are determined to be in "error". This avoids many of the unwanted computations of the parallel algorithm. We illustrate the serial algorithm for finding the 2nd and 3rd best paths before presenting the general algorithm.

2nd Best Path: We make use of the fact that the globally 2nd best path, after leaving the best path at some instant, merges with the best path at a later instant and never diverges again. This is because any subsequent divergence would result in a higher cost path. Figure 4 shows the admissible and inadmissible topologies for the second best path. The best path is assumed to have been found by the Viterbi algorithm, and it is convenient to retain all the computations performed by the Viterbi algorithm, including the cost associated with each locally best (partial) state sequence. Let i(t) be the state occupied by the best path at time t, and let φ̂_t(i(t), 2) be the cost of the best path that merges with the globally best path by time t. Using the merging fact, the following recursion can be written:

φ̂_t(i(t), 2) = min { φ̂_{t−1}(i(t−1), 2) + c_t(i(t−1), i(t)),
                     min_{1≤l≤N, l≠i(t−1)} [φ_{t−1}(l, 1) + c_t(l, i(t))] }.  (9)

Fig. 4. Admissible and inadmissible topologies for the 2nd best path.
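The parallel recursion can be sketched compactly by keeping, for each state, a list of up to L (cost, path) survivors and merging the N·L extended candidates at every step. This is an illustrative sketch, not the paper's implementation; storing full paths stands in for the N L × M state array, and `c(t, j, i)` is the same hypothetical metric interface used above.

```python
import heapq

def list_viterbi_parallel(N, M, c, L):
    """Parallel LVA sketch (eqs. (6)-(8)): keep the L lowest-cost
    paths into each state at every time instant.  Returns the L
    globally best (cost, state_sequence) pairs from state 1 (time 0)
    to state 1 (time M), best first."""
    # Initialization: one survivor per state (the rest are implicitly inf).
    surv = {i: [(c(1, 1, i), [1, i])] for i in range(1, N + 1)}
    for t in range(2, M + 1):
        targets = [1] if t == M else range(1, N + 1)
        new_surv = {}
        for i in targets:
            # Extend every survivor of every predecessor state into i,
            # then keep the L smallest accumulated costs.
            cand = [(cost + c(t, j, i), path + [i])
                    for j in range(1, N + 1) for cost, path in surv[j]]
            new_surv[i] = heapq.nsmallest(L, cand, key=lambda x: x[0])
        surv = new_surv
    return surv[1]
```

Each step touches N·L candidates per state, matching the text's observation that the parallel algorithm costs roughly L times the VA.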

The first term in the recursion of (9) is the cost of the second best path to reach state i(t), with the constraint that this path should have merged with the globally best path by at least time t−1. The second term is the cost of the second best path to reach state i(t), with the constraint that this path merges with the globally best path at time t and not before. The quantity φ_{t−1}(l, 1) in (9) is available by prior use of the Viterbi algorithm; thus storing these values significantly reduces the computational cost. The recursion is performed until the end of the trellis is reached. We note that, unlike the parallel algorithm, it is not necessary to keep track of 2N accumulated costs and 2N path histories. In order to release the second best path, the time instant at which the final merge happened should be recorded, as well as the corresponding predecessor state. Supposing this time instant is T and the predecessor state is l(T−1), the globally second best state sequence is the best path from state 1 (at time 0) to state l(T−1) (at time T−1); the remainder is the same as the globally best state sequence.

3rd Best Path: Let us first consider a state i(t) occupied by the best path at time t, and assume that this is not the state where the globally 2nd best path finally merges with the globally best path. (A "final merge" at time t means that the path has not merged with the globally best path at time t−1 and remains merged from time t onwards.) If the globally third best candidate were to merge with the best candidate for the final time at time t, then this candidate must be the locally second best path into this state. The locally 2nd best candidate into this state is computed, and its cost φ̂_t(i(t), 2) is added to the cost of the globally best path over the remaining time. The cost of this candidate is compared to that of the surviving candidate, and the candidate with the lower cost remains in contention. Figure 5a shows such candidates.

Now consider the time T at which the globally 2nd best path finally merges with the best path. The candidate for being the 3rd best that finally merges with the best path at time T is the second best to the globally 2nd best path over the time span t = 0 to t = T, and is the globally best candidate over the remaining time span. Figure 5b shows the locally second best candidates to the globally 2nd best path. The cost of this candidate is added to the cost of the globally best path over the remainder of the trellis and is compared to the cost of the surviving candidate. The lower of the two remains in contention.

Fig. 5. Steps involved in the serial list-of-3 VA. (a) Locally 2nd best candidates to the globally best path. (b) Locally 2nd best candidates to the globally 2nd best path.

We now summarize the serial algorithm. We assume that the globally best sequence has been found by the Viterbi algorithm and that the locally best path into each state at every time instant is known, along with its associated cost. Let the best state sequence be (1, i(1), i(2), …, i(M−1), 1), where M is the length of the state sequence.

Serial Algorithm:

1. Initialization:
   a. Initialize the number of candidates found to k = 1.
   b. Set up a path matrix of size L × M to store the state sequences, where L corresponds to the maximum number of best state sequences that are to be found.
   c. Form a "merge" count array of size M, where the jth element c_j is the number of paths, among the previously found globally best paths, that finally merge with the globally best path at time j. We initialize c_j to 1 for all j.

2. Recursion:
   a. Let us assume that the kth best path is to be found. For each time instant j, form the candidate that finally merges with the globally best path at time j: this candidate is the (c_j + 1)th best path from time instant 0 (and state 1) to time instant j (and state i(j)), followed by the globally best path over the remainder of the trellis. The candidate with the minimum total cost over all j remains in contention and is the kth globally best path.
   b. Update the merge count array: c_j = c_j + 1 for that time instant j at which the final merge of the kth best path with the globally best path happens.
   c. Increment k by 1.
   d. Loop back to (a) until k = L.
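Recursion (9) for the second best path can be sketched directly. This is an illustrative sketch rather than the authors' code: `best` is the Viterbi state sequence [i(0)=1, i(1), …, i(M)=1], `c(t, j, i)` is the hypothetical branch-metric interface used earlier, and a fresh forward pass supplies the locally best costs φ_{t}(l, 1) that the text says should be retained from the Viterbi run.

```python
import math

def second_best_cost(N, M, c, best):
    """Serial-LVA sketch of recursion (9): cost of the globally 2nd
    best path, given the best state sequence `best`."""
    # Forward Viterbi costs phi_t(l) for all states l and times t < M.
    phi = {l: c(1, 1, l) for l in range(1, N + 1)}
    hist = {1: dict(phi)}
    for t in range(2, M):
        phi = {i: min(phi[j] + c(t, j, i) for j in range(1, N + 1))
               for i in range(1, N + 1)}
        hist[t] = dict(phi)
    # Recursion (9).  phihat is the best cost among paths that are
    # merged with the best path at the current time; no rival exists
    # at t = 1, since all paths share the starting state.
    phihat = math.inf
    for t in range(2, M + 1):
        stay = phihat + c(t, best[t - 1], best[t])        # merged earlier
        join = min(hist[t - 1][l] + c(t, l, best[t])      # merges at t
                   for l in range(1, N + 1) if l != best[t - 1])
        phihat = min(stay, join)
    return phihat
```

The single sweep costs about N additions and comparisons per time instant, in line with the complexity figures quoted below for the serial algorithm.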

We note that further computational savings can be realized by storing the currently available L best candidates in a stack. The stack is continuously updated in the process of finding the kth best candidate, and it is clear that the kth globally best candidate is at least the kth best candidate in the stack. It can also be verified that the only other possibility is that the kth candidate is the 2nd best path to the (k−1)th globally best path. Thus, in order to find the kth best path, it is sufficient to find the 2nd best path to the (k−1)th globally best path; this candidate is compared to the kth candidate in the stack, and the lower of the two is the globally kth best path. Some additional cost is incurred in inserting a newly found candidate into the stack if its cost is lower than that of any candidate in the stack. A tree-trellis search algorithm similar to this final version of the serial algorithm (i.e., using a stack) has been proposed by Soong and Huang [15]. They use a forward trellis search based on the VA to find the locally best candidate into each state at every instant, followed by a single backward tree search (backwards stack) to find all the remaining L−1 best paths.

2.4. Computational and Storage Requirements

It is clear that the parallel algorithm requires L times more storage and computation than the VA. The serial algorithm finds the kth best only if the (k−1)th best is in "error". If all the intermediate computations are stored, then in order to find the kth best path, about N metric additions and N comparisons are required at each instant (by using eqn. (9)). Thus the total cost to find the kth best path is NM additions and comparisons, which requires lower computational effort than performing a second Viterbi decoding as required in the conventional approach. The storage requirement, to store the accumulated cost into each state at every instant, is about M times higher than for the VA. On the other hand, the parallel LVA has to find the L best all the time, while the serial algorithm computes a new candidate only on demand; hence the average computational cost for the serial algorithm may be smaller than that of the parallel algorithm.

3. APPLICATIONS TO DATA TRANSMISSION

Here, the outer code is an error detecting code and the inner code is a convolutional code. Conventionally, when the VA is used for decoding, the inner decoder releases the best decoded sequence and the outer decoder performs error detection on this decoded sequence. If an error is detected, then the outer decoder may request the transmitter to retransmit. Here we propose instead that the decoder based on the LVA release the L best candidates, and that the outer decoder (assumed ideal in the performance evaluation below) select the correct one from the L best if it exists. Figure 6 shows the decoder operation for this concatenated coding system. The outer decoder performs error detection on the kth best candidate, k = 1, 2, …, L, and requests the inner decoder for the (k+1)th best candidate only if the kth best is found to be in error. In this case the inner decoder has to find the second best path, the third best path, etc. The serial algorithm is ideally suited for this purpose, and its computational cost is much lower than that of the parallel list Viterbi algorithm. A decoding failure is declared if k = L and an error is still detected.

Fig. 6. Block diagram of data transmission system with LVA decoder. (Correct parity: accept data; incorrect parity: request the next candidate.)

We first evaluate the asymptotic error performance of this coding system in the presence of noise and fading. The probability of incorrect decoding, i.e., the probability that the correct candidate is not in the list of the L best candidates, is evaluated for the additive white Gaussian noise channel when the LVA is used for decoding. The results for the Gaussian channel are derived for any L and for any code (including the case of the inner code being a coded modulation), while for the Rayleigh fading channel the results are derived for binary codes and L = 2.

3.1. Asymptotic AWGN Channel SNR Gain with LVA Decoding

Here, we are interested in the probability of the correct candidate not being among the L best. Our analysis makes use of signal space geometry. It provides an indication of the gain that can be achieved by practical codes or modulation schemes at high channel SNRs, and the results show that the worst case asymptotic coding gain is independent of the code, its rate, and the modulation scheme. We will present simulation results with specific codes to show that practical gains are close to the theoretical predictions.

L = 1: When L = 1, the error performance is mainly determined by the pair of signal points that are most likely to be confused with each other. For the AWGN channel, this corresponds to a pair of signal points (codewords) that are at the minimum Euclidean distance. Let a and b be two codewords, and let s(a) and s(b) be the corresponding signals that are actually transmitted on the channel. The minimum Euclidean distance is then given by

D_min = min_{a ≠ b} D(a, b),  (10)

where

D²(a, b) = ‖s(a) − s(b)‖²  (11)

and ‖·‖² denotes the squared Euclidean distance. A pair of codewords at D_min will dominate the error probability at large SNRs.
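The Figure 6 decoder loop pairs naturally with the serial LVA: candidates are checked best-first against the outer error detecting code until one passes or the list is exhausted. The sketch below is illustrative only; it uses CRC-32 as a stand-in for the outer code (the paper assumes an ideal error detecting code), and the frame layout (payload followed by a 4-byte CRC) is a hypothetical choice for the example.

```python
import zlib

def list_decode_with_crc(candidates):
    """Sketch of the Fig. 6 outer-decoder loop: step through the list
    of candidate frames (best first) and accept the first one whose
    trailing CRC-32 checks.  Returns (k, payload) where k is the
    number of candidates tried, or (None, None) on decoding failure
    (the k = L case in the text)."""
    for k, frame in enumerate(candidates, start=1):
        payload, crc = frame[:-4], frame[-4:]
        if zlib.crc32(payload).to_bytes(4, "big") == crc:
            return k, payload          # correct parity: accept data
    return None, None                  # list exhausted: decoding failure
```

With the serial LVA feeding this loop, candidate k+1 is only ever computed after candidate k fails the parity check, which is exactly the on-demand behavior argued for above.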

The probability of error at very large signal-to-noise ratios is given by

P ≈ Q(D_min / √(2 N₀)),  (12)

where

Q(x) = (1/√(2π)) ∫_x^∞ e^{−t²/2} dt  (13)

and N₀ is the one-sided power spectral density of the noise.

L = 2: When L = 2, we are interested in the probability that the correct data sequence is not among the list of the two best decoder outputs. We now have three candidates s(a), s(b), and s(c), corresponding to data sequences a, b, and c, with s(a) transmitted. An error occurs only if both s(b) and s(c) are ranked above s(a). The closest point in the region of error to s(a) is the point O shown in Figure 7, equidistant from s(a), s(b), and s(c); the Euclidean distance from s(a) to O is the equivalent distance D_eq, given by

D_eq = D(b, c) / (2 √(1 − cos² Φ)),  (14)

where cos Φ = (D²(a, b) + D²(a, c) − D²(b, c)) / (2 D(a, b) D(a, c)) and Φ is the angle at s(a). This distance takes on its lowest value when all three points s(a), s(b), and s(c) are at D_min from each other, i.e., when the three signal points form a triangle with edges of length D_min. The worst case coding gain G of the LVA (L = 2) over the VA (L = 1) is then given by

10 log10(G) = 10 log10( (D_eq / (D_min/2))² ) = 10 log10(4/3),  (15)

which is 1.25 dB.

Fig. 7. Signal space geometry for LVA with L = 2.

L = 3: When L = 3, an error occurs whenever the correct candidate is not on the list of the three best decoded messages. The worst case signal space geometry that corresponds to the least gain is a tetrahedral packing, where every signal point is equidistant from the other three; the gain is 10 log10(3/2), which is 1.76 dB.

General Result: We want the most likely event in which the LVA outputs the globally L best candidates and does not have the correct candidate on the list. For any L, the worst case is the tightest packing of L + 1 signal points in L dimensions, which is the simplex [16], [17], where every signal point is equidistant from each of the L others. Using the simplex geometry and the result in [16, pp. 259-261], we get the worst case asymptotic gain for the LVA with L outputs over the VA as

10 log10(G) = 10 log10( 2L / (L + 1) ).  (16)

The gains with L = 4, 8, and 16 are 2.04 dB, 2.50 dB, and 2.75 dB, respectively. For large values of L, the gain approaches 3 dB. The worst case asymptotic gains presented here are somewhat optimistic for intermediate channel signal-to-noise ratios, and the actual gain is often smaller when the number of sets of L nearest neighbors is taken into account. What seems practical to achieve with a list size of L = 2 and 3 are gains of about 1.0 dB and 1.5 dB, respectively, as shown below by simulations.

Simulation Results: We have performed a series of simulations for rate R = 1/2 and R = 8/10 codes with memory ν = 4; L = 1, 2, and 3 have been simulated for the AWGN channel. The generator matrix for the mother code and the puncturing table for the R = 8/10 code are given in [13]. The block error probabilities P_BL are shown in Figures 8 and 9; P_BL is the probability that the correct alternative is not among the L best candidates produced by the LVA. The channel signal-to-noise ratio used in Figures 8 and 9 is Ec/N₀, where Ec is the energy per channel symbol. The energy per information bit Eb is given by R·Eb = Ec, where R is the code rate. The (small) loss due to the rate of the outer error detecting code is not taken into account. Results in Figure 8 indicate that a gain of about 1 dB is obtained for the R = 1/2 code with a block size of 512 information bits when L = 2, and about 1.25 dB when L = 3. Similar results are obtained for the rate R = 8/10 code, shown in Figure 9.
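The worst-case gain formula can be evaluated directly. This is a small illustrative helper, assuming the simplex result G = 2L/(L + 1) reconstructed above; the function name is our own.

```python
import math

def lva_gain_db(L):
    """Worst-case asymptotic AWGN coding gain (in dB) of list-of-L
    decoding over the VA, from the simplex geometry: G = 2L/(L+1)."""
    return 10 * math.log10(2 * L / (L + 1))
```

For L = 2, 3, 4, 8, 16 this yields 1.25, 1.76, 2.04, 2.50, and 2.75 dB, and the limit as L grows is 10·log10(2) ≈ 3.01 dB, matching the values quoted in the text.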

Fig. 8. Block error probability for the L (≤ 3) best path LVA on the Gaussian channel. An inner R = 1/2, ν = 4 code is used. The number of information bits per block is 512. (2-CPSK modulation, soft decisions.)

Fig. 9. Block error probability for the L (≤ 3) best path LVA on the Gaussian channel. An inner R = 8/10, ν = 4 code is used. The number of information bits per block is 512. (2-CPSK modulation, soft decisions.)

3.2. Probability of Incorrect Decoding for the Rayleigh Fading Channel

We consider the decoding of coded data symbols subjected to independent Rayleigh fades from symbol to symbol and corrupted by additive white Gaussian noise. We assume a Rayleigh fading channel that permits coherent demodulation (ideal recovery of the carrier phase) of binary PSK. Interleaving over many symbols (ideally infinite) is assumed, to justify the assumption of independent fading from symbol to symbol. The received symbol r_k at time k is

r_k = a_k x_k + n_k,  (17)

where n_k is a Gaussian variate with mean zero and variance N₀, x_k is the transmitted symbol taking on values ±1, and a_k is Rayleigh distributed with probability density function

p(a) = (a/σ²) e^{−a²/(2σ²)}, a ≥ 0.  (18)

Here 2σ² = E{a_k²} is chosen to be 1 without loss of generality, so that the average SNR per received channel symbol is Ec/N₀. The instantaneous signal-to-noise ratio per received symbol is γ = a_k² · Ec/N₀, which is distributed according to the pdf

p(γ) = (1/γ̄) e^{−γ/γ̄},  (19)

where γ̄ is the mean of the random variable and is given by

γ̄ = E{a_k²} · Ec/N₀.  (20)

For this model, the asymptotic error probability with soft demodulation and Viterbi decoding is well approximated by

P ≈ const · (1/SNR)^D,  (21)

where the time diversity D = D_free (the minimum Hamming distance of the code) and SNR is the average received channel signal-to-noise ratio. The performance of the decoder can be improved if a reliable estimate of the fade values a_k, also called the channel state information (CSI), is incorporated into the metric calculations [13].

However, instead of considering an ideal Rayleigh fading channel, we consider a simplified model: if the channel is in a fade, the received symbol is assumed to be an erasure; otherwise, the received symbol is assumed to be demodulated perfectly. With this model, we can ask how many erasures can be tolerated while keeping the correct candidate on the list produced by the LVA.

L = 1: Provided the fade is independent from symbol to symbol, clearly D_free − 1 erasures can be made (leading to a diversity of D = D_free).

L = 2: To find the worst case in which a transmitted codeword will not be on the list of the best two candidates from the decoder output, we will again consider three codewords s(a), s(b), and s(c). Without loss of generality, let the transmitted codeword s(a) be the all zero codeword. Each of the codewords s(a), s(b), and s(c) differs in at least D_free positions from the other two. Let us assume that they differ in exactly D_free positions and that D_free is an even number. Since the minimum Hamming distance of the code is D_free, the minimum number of positions in which s(b) or s(c) differs from s(a) can easily be seen to be (3/2) D_free. Thus, for both s(b) and s(c) to be selected over s(a), there should be at least (3/2) D_free − 1 erasures.
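The erasure argument above can be condensed into a small helper. This is an illustrative sketch under the text's worst-case analysis (valid for L = 1 and the L = 2 case just derived); the function name is our own.

```python
import math

def effective_diversity(dfree, L=2):
    """Time diversity of a binary code on the fully interleaved
    Rayleigh channel under the erasure model: Viterbi decoding (L=1)
    gives D = dfree, while list-of-2 decoding raises the dominant
    error event to ceil(3 * dfree / 2)."""
    return dfree if L == 1 else math.ceil(3 * dfree / 2)
```

For the ν = 4, D_free = 7 code simulated below, this predicts a diversity of 7 with the VA and about 11 with the list-of-2 LVA, which is the slope behavior reported for the perfect-CSI case.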

Since the minimum Hamming distance of the code is Dfree, with list-of-2 decoding the effective increase in diversity is at least (Dfree + 1)/2 when Dfree is an odd number. To summarize, with list-of-2 decoding the diversity is

D = ⌈3 Dfree / 2⌉.   (22)

Note that an increase of D in (21) quickly yields significant gains in channel signal-to-noise ratio and in error probability. The largest relative gains occur for small values of Dfree. This is confirmed by the simulations below for the Rayleigh fading channel.

List Decoding With Arbitrary Ne Erasures: We obtain an upper bound on the list size in the presence of Ne erasures. Consider the worst case situation of all the codewords being identical except in the Ne erased positions. Let K(N, Dfree) be the maximum number of codewords in any binary code of length N and minimum Hamming distance Dfree. Thus an upper bound on the list size such that the correct candidate is on the list with Ne erasures is K(Ne, Dfree). Using the Hamming bound [18], the number of such codewords is upper bounded as

K(Ne, Dfree) <= 2^Ne / sum_{i=0}^{e} C(Ne, i),  where e = ⌊(Dfree - 1)/2⌋.   (23)

Table 1 shows an upper bound (L**) on the required list size to achieve a certain time diversity (D) for Dfree = 3. Also shown in the table is the list size (L*) as determined by the best known binary codes (from Appendix A of [18]) with Dfree = 3 and code length equal to Ne.

Table 1. List Sizes Needed With Ne Erasures for a Binary Code With Dfree = 3

Ne     4    5    6    7    8    9   10   11
D      5    6    7    8    9   10   11   12
L*     2    4    8   16   20   38   72  142
L**    4    6   10   16   29   52   94  171

Simulation Results: We have simulated the performance of rate R = 1/2 and R = 8/10 convolutional codes with memory v = 4 on a Rayleigh fading channel with BPSK modulation. The fading is assumed to be independent from symbol to symbol, and the demodulation is assumed to be ideally coherent. The metric used for decoding is the correlation metric (soft decision). We have not used any channel state information (CSI) (the fade values) at the decoder, and we refer to [13] for a detailed evaluation of the error performance with and without CSI.

The simulation results in Figure 10 for the R = 1/2 code and in Figure 11 for the R = 8/10 code show a linear relationship between log PBL and the SNR. By calculating the slope, one can evaluate the diversity. It can be seen in Figure 10 that the diversity is about 5 when L = 1 and about 9 when L = 2. Note that the diversity at large SNRs and with perfect CSI is 7 (the value of Dfree) when L = 1 and about 11 when L = 2 (based on the analysis above); these values are naturally lower for small SNRs and without CSI, as the graphs indicate. Note also that the gain with the LVA (L = 3) is almost 4.5 dB in Ec/N0 over the VA (L = 1) case at PBL = 10^-3. Similar results are obtained for the rate R = 8/10 code; a gain in Ec/N0 of about 3 dB is achieved at PBL = 10^-4.

3.2. Hybrid FEC/ARQ

Combinations of forward error correction (FEC) and automatic repeat request (ARQ) systems are often referred to as hybrid ARQ schemes [12]. On very noisy channels like fading mobile radio channels, powerful FEC is needed. The most flexible and robust hybrid ARQ schemes use inner rate compatible punctured convolutional (RCPC) codes [13]. The main advantage of the RCPC codes over other FECs is their incremental redundancy transmission feature using the concept of rate compatible puncturing [13].

The hybrid FEC/ARQ scheme with RCPC codes and Viterbi decoding works as follows. A block of Ni data bits is augmented with Nc cyclic redundancy check (CRC) bits using an outer error detecting block code. For the purpose of analysis and simulations, we will assume that the probability of undetected error for this code is zero. Then v known information bits for terminating the trellis are added to the Ni + Nc bits before encoding by the inner RCPC code. This block of NT = Ni + Nc + v bits is encoded by a family of RCPC codes with memory v, puncturing period p, and rates from p/(p + 1) to 1/n. The information bits are first encoded by the rate 1/n convolutional code; the output of the convolutional code is then punctured according to a rate compatible puncturing rule, which for a given value of A is in the form of a puncturing table a(A). The parameter A is called the level of puncturing. The possible rates are p/(p + A), A = 1, 2, ..., (n - 1)p. In our examples, a subset of all possible A is used, and we will use a value of n = 3. In a typical ARQ scenario, the transmitter starts with the highest code rate possible, and will continue to transmit additional punctured bits corresponding to successively lower rate codes until it receives a positive acknowledgement from the receiver. (We assume an error free feedback channel.) These codes can be decoded using soft Viterbi decoding [13].
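The incremental redundancy accounting above can be made concrete with a short sketch. It computes, for the puncturing levels used later in the simulations (p = 8, A in {1, 2, 4, 8, 16}), the cumulative coded bits at each rate p/(p + A) and the number of new bits each ARQ round must send. The block length NT = 416 and the function name `transmission_schedule` are illustrative choices made here (NT is picked as a multiple of p), not values fixed by the paper.

```python
def transmission_schedule(n_t: int, p: int = 8, levels=(1, 2, 4, 8, 16)):
    """Cumulative and incremental coded bits for RCPC rates p/(p + lam).

    Round k sends only the punctured bits newly released at level levels[k];
    rate compatibility guarantees all earlier transmitted bits are reused.
    n_t is assumed to be a multiple of p so the division is exact.
    """
    cumulative = [n_t * (p + lam) // p for lam in levels]  # n_t / rate
    increments = [cumulative[0]] + [b - a for a, b in zip(cumulative, cumulative[1:])]
    return cumulative, increments

cum, inc = transmission_schedule(416)
print(cum)  # [468, 520, 624, 832, 1248]  -> rates 8/9, 8/10, 8/12, 8/16, 8/24
print(inc)  # [468, 52, 104, 208, 416]   -> first transmission, then retransmission increments
```

The falling rate sequence thus costs very little extra per round at first (52 bits from 8/9 to 8/10), which is what makes starting at the highest rate attractive.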

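The Hamming bound in (23) is easy to evaluate directly. For Dfree = 3 we have e = 1, so the denominator is 1 + Ne. The sketch below reproduces the L** row of Table 1; rounding the bound up to the next integer is an assumption made here because it matches the tabulated values, and the function name is illustrative.

```python
from math import ceil, comb

def list_size_bound(n_e: int, d_free: int) -> int:
    """Hamming-bound estimate (23) of K(Ne, Dfree): the worst-case list size
    needed so that the correct candidate survives Ne erased positions."""
    e = (d_free - 1) // 2
    denom = sum(comb(n_e, i) for i in range(e + 1))
    return ceil(2 ** n_e / denom)

l_double_star = [list_size_bound(n_e, d_free=3) for n_e in range(4, 12)]
print(l_double_star)  # [4, 6, 10, 16, 29, 52, 94, 171], the L** row of Table 1
```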
Two parameters are used to characterize the performance of the hybrid FEC/ARQ system: the throughput S and the probability of a decoding failure PF. The throughput S is the average number of received accepted bits over the average number of transmitted bits. There is a nonzero probability that correct decoding cannot be achieved even at the lowest code rate; this is the probability of failure to decode, PF. This quantity can be reduced by decreasing the code rate 1/n or through the process of code combining [13], but both these methods will reduce the system throughput. We intend to use the LVA instead of the VA to reduce PF and simultaneously increase the throughput (and implicitly decrease the overall delay).

We have simulated the complete hybrid FEC/ARQ system for the v = 4 RCPC codes with the LVA (L = 3). The puncturing period is p = 8. The RCPC codes chosen have A = 1, 2, 4, 8, and 16, giving code rates of 8/9, 8/10, 8/12, 8/16, and 8/24, respectively. As an outer code, we chose a hypothetical block code with Nc = 34; the information block size Ni was chosen to be 382. For reference, we have also simulated the same system using a conventional Viterbi decoder (VA). The results are shown in Figure 12 for the ideally interleaved Rayleigh fading channel with BPSK modulation and soft decision decoding. Ideal channel state information is also used.

The relative improvement of the throughput with the LVA is about 10% over the signal-to-noise ratio range shown, which corresponds to about 1 dB in Ec/N0. Combined with the improvement in throughput comes a significant improvement in PF. We did not explicitly simulate PF for the v = 4, R = 1/3 code, which is used as the final low rate code in the system in Figure 12. However, it can be approximately estimated by comparing PB1 (VA) and PB3 (LVA, L = 3) in Figure 9, where an inner R = 1/2, v = 4 code is used.

Fig. 10. Block error probability for the L (= 3) best path LVA on the fully interleaved Rayleigh fading channel. 2-CPSK modulation, soft decisions, no CSI; R = 1/2 code, v = 4; 512 information bits per block.

Fig. 11. Same as Figure 10 with an R = 8/10, v = 4 code.

Fig. 12. Throughput versus average channel SNR Ec/N0 for a hybrid FEC/ARQ scheme with LVA and VA, respectively. 2-CPSK modulation, soft decisions, CSI; v = 4 codes.

4. DISCUSSION AND CONCLUSIONS

We have presented parallel and serial list Viterbi decoding algorithms (LVAs) that produce an ordered list of the L > 1 globally best candidates after a trellis search. Novel methods for utilizing the LVA for concatenated communication systems are described.
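The diversity values quoted for the simulations (7 at L = 1 and about 11 at L = 2 for the Dfree = 7 code, with perfect CSI) follow from the list-of-2 argument that each extra list entry buys at least (Dfree + 1)/2 additional diversity when Dfree is odd. The sketch below extrapolates that argument to arbitrary L; the general formula and the function name `predicted_diversity` are a hypothesized extension made here, not a result stated in the paper.

```python
def predicted_diversity(d_free: int, list_size: int) -> int:
    """Hypothesized effective time diversity of a list-of-L decoder:
    D(L) = Dfree + (L - 1) * (Dfree + 1) / 2, extrapolating the
    list-of-2 case quoted in the text (valid for odd Dfree)."""
    assert d_free % 2 == 1, "the argument in the text assumes odd Dfree"
    return d_free + (list_size - 1) * (d_free + 1) // 2

# Memory-4, R = 1/2 code of the simulations: Dfree = 7.
print([predicted_diversity(7, L) for L in (1, 2, 3)])  # [7, 11, 15]
```

L = 1 and L = 2 reproduce the quoted diversities of 7 and 11; the L = 3 value is an extrapolation under the same assumption.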

We have analyzed and simulated the performance of the list decoding algorithm with multiple outputs for the Gaussian and the Rayleigh fading channels. With supporting theory, we find that the algorithm for the AWGN channel can yield 1.0-2.0 dB in practice. For the Rayleigh fading channel, list decoding results in an increase in the effective time diversity of the code. This in turn corresponds to an increasing coding gain with increasing SNR, a typical characteristic of increased diversity. Gains of 3-4.5 dB are demonstrated at block error probabilities of about 10^-4. We have also demonstrated that the LVA improves both the throughput and the probability of failure to decode for hybrid FEC/ARQ schemes. The price paid is increased signal processing at the receiver.

It is interesting to note that the use of the LVA is optional: one receiver may operate with the VA while another may use L = 2, and yet another L = 8, etc. It is also not necessary to use the LVA in every stage of decoding in the hybrid FEC/ARQ scheme that uses RCPC codes. If one were interested solely in reducing the probability of failure to decode, then the LVA needs to be used only in the last stage of decoding.

While the LVA has been applied to the decoding of convolutional codes in this paper, we also expect similar performance improvements when it is used for the decoding of combined coding and modulation schemes [19-21, 23]. The analysis we have presented for the AWGN channel is directly applicable to these schemes; more work needs to be done in the presence of Rayleigh fading. Other applications include speech recognition [15-17] and combined speech and channel decoding [14, 24]. List decoding has also been applied to the problem of joint data and channel estimation [22].

REFERENCES

[1] A. J. Viterbi, "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm," IEEE Transactions on Information Theory, Vol. IT-13, pp. 260-269, April 1967.
[2] A. J. Viterbi and J. K. Omura, Principles of Digital Communication and Coding, New York: McGraw-Hill, 1979.
[3] G. D. Forney, Jr., "Convolutional Codes II: Maximum Likelihood Decoding," Information and Control, Vol. 25, pp. 222-266, July 1974.
[4] G. D. Forney, Jr., "Convolutional Codes III: Sequential Decoding," Information and Control, Vol. 25, pp. 267-297, July 1974.
[5] H. Yamamoto and K. Itoh, "Viterbi Decoding Algorithm for Convolutional Codes with Repeat Request," IEEE Transactions on Information Theory, Vol. IT-26, pp. 540-547, September 1980.
[6] T. Hashimoto, "A List-Type Reduced-Constraint Generalization of the Viterbi Algorithm," IEEE Transactions on Information Theory, Vol. IT-33, pp. 866-876, November 1987.
[7] J. B. Anderson and S. Mohan, "Sequential Coding Algorithms: A Survey and Cost Analysis," IEEE Transactions on Communications, Vol. COM-32, pp. 169-176, February 1984.
[8] R. H. Deng and D. J. Costello, Jr., "High Rate Concatenated Coding Systems Using Bandwidth Efficient Trellis Inner Codes," IEEE Transactions on Communications, Vol. 37, pp. 420-427, May 1989.
[9] P. Elias, "Error-Correcting Codes for List Decoding," IEEE Transactions on Information Theory, Vol. 37, pp. 5-12, January 1991.
[10] J. Hagenauer and P. Hoeher, "A Viterbi Algorithm with Soft-Decision Outputs and its Applications," GLOBECOM '89, Dallas, Texas, Conference Record, pp. 1680-1686, November 1989.
[11] P. Hoeher, "TCM on Frequency-Selective Fading Channels: A Comparison of Soft-Output Viterbi-Like Equalizers," GLOBECOM '90, San Diego, Conference Record, December 3-5, 1990.
[12] S. Lin and D. J. Costello, Jr., Error Control Coding: Fundamentals and Applications, Englewood Cliffs, NJ: Prentice-Hall, 1983.
[13] J. Hagenauer, "Rate Compatible Punctured Convolutional Codes (RCPC codes) and their Applications," IEEE Transactions on Communications, Vol. COM-36, pp. 389-400, April 1988.
[14] N. Seshadri and C-E. W. Sundberg, "Generalized Viterbi Algorithms for Error Detection with Convolutional Codes," GLOBECOM '89, Dallas, Texas, Conference Record, pp. 1534-1538, November 1989.
[15] F. K. Soong and E.-F. Huang, "A Tree-Trellis Based Fast Search Algorithm for Finding the N Best Sentence Hypotheses in Continuous Speech Recognition," Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, pp. 705-708, 1991.
[16] J. M. Wozencraft and I. M. Jacobs, Principles of Communication Engineering, New York: John Wiley & Sons, 1965.
[17] C-H. Lee and L. R. Rabiner, "A Network-Based Frame Synchronous Level Building Algorithm for Connected Word Recognition," Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, pp. 410-413, April 1988.
[18] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, Amsterdam: North-Holland Publishing Company, 1978.
[19] G. Ungerboeck, "Channel Coding with Multilevel/Phase Signals," IEEE Transactions on Information Theory, Vol. IT-28, pp. 55-67, January 1982.
[20] J. B. Anderson, T. Aulin and C-E. W. Sundberg, Digital Phase Modulation, New York: Plenum Press, 1986.
[21] N. Seshadri and C-E. W. Sundberg, "Multi-level Codes with Large Time Diversity for the Rayleigh Fading Channel," also in IEEE Transactions on Communications, Vol. 41, pp. 1300-1310, September 1993.
[22] N. Seshadri, "Joint Data and Channel Estimation Using Fast Blind Trellis Search Techniques," GLOBECOM '90, San Diego, Conference Record, pp. 1659-1663, December 3-5, 1990.
[23] C. L. Weber, Elements of Detection and Signal Design, New York: Springer-Verlag.
[24] W. C. Wong, "Estimation of Unreliable Packets in Sub-band Coding of Speech," Proceedings of the IEE, Part I, Vol. 138, pp. 43-49, February 1991.

Nambirajan Seshadri (S'81, M'87) was born in India in 1961. He received the B.E. degree in Electronics and Communication from the University of Madras, India, in 1982, and the M.S. and Ph.D. degrees in Computer and Systems Engineering from the Rensselaer Polytechnic Institute, Troy, NY, in 1984 and 1986, respectively. Since November 1986, he has been a Member of Technical Staff in the Signal Processing Research Department at AT&T Bell Laboratories, Murray Hill, NJ. His research interests are in the general areas of communications and signal processing. He is the coauthor of Blind Deconvolution (Prentice Hall, 1994) and holds two patents in the area of data communications.

Carl-Erik W. Sundberg (S'69, M'75, SM'81, F'90) was born in Karlskrona, Sweden, on July 7, 1943. He received the M.S.E.E. and Dr. Techn. degrees from the Lund Institute of Technology, University of Lund, Sweden, in 1966 and 1975, respectively. Before 1976 he held various teaching and research positions at the University of Lund. During 1976, he was with the European Space Research and Technology Centre (ESTEC), Noordwijk, The Netherlands, as an ESA Research Fellow. From 1977 to 1984 he was a Research Professor (Docent) in the Department of Telecommunication Theory, University of Lund. Currently he is a Distinguished Member of Technical Staff in the Signal Processing Research Department at AT&T Bell Laboratories, Murray Hill, NJ. He has held positions as Consulting Scientist

(New York: Springer-Verlag 1989). digital mobile radio systems. has been involved in studies of error control methods and modulation techniques for the Swedish Defense. fault-tolerant systems. ICCS'90 and ICCS'92 (Singapore). In 1986 he and his coauthor received the IEEE Vehicular Technology Society's Paper of the Year Award and in 1989 he and his coauthors were awarded the Marconi Premium Proc. channel coding. May 1984 and for the 5th Tirrenia International Workshop on Digital Communications. He is a member of SER (Svenska Elektroingenjorening) and the Swedish URSI Committee (Svenska Nationalkommitten for Radiovetenskap). lEE Best Paper Award. He is a member of COMSOC Communication Theory Committee and Data Communications Committee. (New York: Plenum.SESlIADRI AND SUNDBERG: LIST VITERBI DECODING ALGORITHMS WITH APPLICATIONS 323 at LM Ericsson. Sweden. He has also been a member of the Technical Program Committees for the International Symposium on Information Theory. SAAB-SCANIA. He has published over 80 journal papers and contributed over 110 conference papers. Jovite. The Netherlands. Holmdel. SUNCOM. Italy. digital modulation methods. . September 1991. His consulting company. 1986) and Topics in Coding Theory. and optical communications. spread-spectrum systems. and at Bell Laboratories. He has been a member of the International Advisory Committee for ICCS'88. for the International Conference on Communications. ICC'84. St. October 1983. He holds 13 US. He served as Guest Ed~ itor for the IEEE Journal on Selected Areas in Communications in 1988-1989. Canada. Tirrenia. Sundberg has been a member of the IEEE European-African-Middle East Committee (EAMEC) of COMSOC from 1977 to 1984. digital satellite communications systems. His research interests include source coding. a number of private companies and international organizations. He has organized and chaired sessions at a number of international meetings. Swedish and international patents. Amsterdam. Dr. 
He is coauthor of Digital Phase Modulation.
