A Viterbi Algorithm with Soft-Decision Outputs and its Applications
Joachim Hagenauer, Peter Hoeher
German Aerospace Research Establishment (DLR), Institute for Communications Technology, D-8031 Oberpfaffenhofen, West Germany
Tel. ++49/8153/28-893, Fax ++49/8153/28-243

Abstract

The Viterbi algorithm (VA) is modified to deliver not only the most likely path sequence in a finite-state Markov chain, but also either the a posteriori probability for each bit or a reliability value. With this reliability indicator, the modified VA produces soft decisions to be used in the decoding of outer codes. The inner Soft-Output Viterbi Algorithm (SOVA) accepts and delivers soft sample values, and can be regarded as a device for improving the SNR, similar to an FM demodulator. Several applications are investigated to show the gain over the conventional hard-deciding VA, including concatenated convolutional codes, concatenation of convolutional and simple block codes, concatenation of trellis-coded modulation with convolutional FEC codes, and coded Viterbi equalization. For these applications we found additional gains of 1-4 dB as compared to the classical hard-deciding algorithms. For comparison, we also investigated the more complex symbol-by-symbol MAP algorithm, whose optimal a posteriori probabilities can be transformed into soft outputs.

1 Introduction

The Viterbi algorithm [1] has become a standard tool in communication receivers, performing such functions as demodulation, decoding, equalization, etc. An increasing number of applications use two VAs in a concatenated way. Examples are coded modulation systems without bandwidth expansion, such as coded quadrature amplitude modulation (QAM) [2] and continuous phase modulation (CPM) [3]. In these systems, Viterbi receivers replace classical modulation schemes.
An additional outer coding system could use convolutional codes with Viterbi decoding to perform forward error-correction (FEC) decoding. There are normally two drawbacks with such a solution: first, the inner VA for demodulation produces bursts of errors, against which the outer VA is very sensitive; second, the inner VA produces hard decisions, preventing the outer VA from using its capability to accept soft decisions. The first drawback can be eliminated by using some interleaving between the inner and outer VA. To eliminate the second drawback, the inner VA needs to output soft decisions, i.e., reliability information. This should improve the performance of the outer VA considerably.

Another important situation where a similar problem arises is when convolutional codes for FEC are used on channels requiring equalization. This is the case in the future pan-European mobile radio system (GSM) [4]. The normal Viterbi equalizer produces only hard decisions, leading to a reduced performance of the outer VA performing the FEC.

The performance of the above-mentioned systems, and of other systems such as multistage Viterbi schemes, FEC/ARQ schemes, etc., will improve by using a Soft-Output Viterbi Algorithm (SOVA). This is a VA which uses soft (or hard) decisions to calculate its metrics, but which also decides in a soft way by providing reliability information together with the output bits. The reliability information can be the log-likelihood function. We wish to modify the VA as little as possible; the goal is to add a soft-deciding unit to the Viterbi receiver.

In earlier work, Forney considered "augmented outputs" from the VA itself [1], such as the depth at which all paths are merged, the difference in length between the best two paths, and a list of the L best paths. The last topic was generalized in [5]. Yamamoto et al. derived a simple indicator for block errors by introducing labels [6].

CH2682-9/89/0000-1680 $1.00 © 1989 IEEE
This scheme is restricted to requesting a retransmission of the whole block. Schaub et al. [7] took up the ideas of Forney by declaring an erasure output on those bits to be decided for which the metric difference at the point of merging never exceeds a threshold.

2 The Soft-Output Symbol-by-Symbol MAP and Viterbi Algorithm (SOVA)

2.1 Detection with Reliability Information

We assume that a Viterbi detector is used in the receiver chain. This could be a Viterbi equalizer, a Viterbi demodulator (e.g., for CPM or trellis-coded QAM), or the Viterbi decoder of an inner convolutional code. This device is followed by a second-stage detector, which could be a demodulator or decoder after the equalizer, a decoder after the demodulator, an outer decoder after the inner decoder, or a source decoder. We assume that this second device can improve its performance if some reliability information is available in addition to the hard decisions from the first stage.

A straightforward way is shown in Fig. 1. The first-stage detector delivers estimates û'_k of the symbol sequence u' by processing the received sequence y' in the Viterbi detector. We further want the detector to deliver, for each symbol, an estimate of the probability that this symbol has been incorrectly detected:

p'_k = Prob{ û'_k ≠ u'_k | y' }.   (1)

Since the VA of the first stage produces error events, and therefore correlated errors in û' and correlated values p'_k that might degrade the performance of the next stage, we apply sufficient deinterleaving to achieve statistical independence. A matching interleaving device is applied at the transmitter side. In the following notation we drop the primes. At the dashed line A-A in Fig. 1, the first-stage detector delivers symbols û_k with statistically independent error probabilities p̂_k. For the second-stage detector, the channel is now a discrete (binary) memoryless compound channel [8] with output pairs (û_k, p̂_k).
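The interface at line A-A can be pictured with a small sketch. The BPSK/AWGN setup, the SNR value, and all function names below are illustrative assumptions, not taken from the paper; for this memoryless toy channel the "first stage" is just a symbol-by-symbol slicer, so the pairs (û_k, p̂_k) can be computed in closed form:

```python
import math
import random

def first_stage_outputs(n=5, snr_db=3.0, seed=1):
    """Toy first-stage detector: BPSK over AWGN. Delivers the pairs
    (u_hat, p_hat): hard decision and per-symbol error probability."""
    random.seed(seed)
    sigma = math.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))  # noise std deviation
    pairs = []
    for _ in range(n):
        u = random.choice([-1, +1])               # transmitted symbol
        y = u + random.gauss(0.0, sigma)          # received sample
        u_hat = 1 if y >= 0 else -1               # hard decision
        # posterior error probability for BPSK on AWGN, always in (0, 0.5]
        p_hat = 1.0 / (1.0 + math.exp(2 * abs(y) / sigma ** 2))
        pairs.append((u_hat, p_hat))
    return pairs

def soft_value(u_hat, p_hat):
    """Combine a pair into one signed value: the sign carries the decision,
    the magnitude log((1-p)/p) carries the reliability."""
    return u_hat * math.log((1 - p_hat) / p_hat)

for u_hat, p_hat in first_stage_outputs():
    print(u_hat, round(p_hat, 3), round(soft_value(u_hat, p_hat), 2))
```

Note that for this channel the signed value reduces to a scaled version of the received sample itself, which is why a soft-output stage can be viewed as passing channel reliability through to the next detector.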
If the second-stage detector performs maximum-likelihood (ML) detection, an optimum ML metric is [8]

M^(m) = Σ_j x_j^(m) · û_j · log((1 - p̂_j) / p̂_j),   (2)

where x_j^(m) is the j-th symbol of the m-th information sequence, and û_j is the hard (±1) decision of the first Viterbi detector. We will call the first-stage VA a Soft-Output Viterbi Algorithm (SOVA), because it delivers soft decisions

û_k · L_k   (3)

with

L_k = log((1 - p̂_k) / p̂_k)   (4)

to be processed by the next ML detector stage.

2.2 Algorithms

Given a finite-state discrete-time Markov process observed in white noise, the VA is the optimal recursive algorithm: it provides the state sequence with the lowest cost [1]. The VA is optimal in the ML sequence sense. It determines the symbol sequence û = {û_k} that maximizes the log-likelihood function log p(y | û).

2.2.1 The Soft-Output Symbol-by-Symbol MAP Algorithm

There exists an optimum general algorithm providing the a posteriori probabilities (APPs) for each bit to be decided [10], [1, Appendix]. This symbol-by-symbol MAP algorithm was originally developed to minimize the bit-error probability instead of the sequence-error probability. The algorithm seems less attractive than the VA due to its increased complexity. However, we easily obtain the optimal APPs P(u_k | y) for each bit to be decided. A soft decision in the form of a log-likelihood ratio is obtained by

Λ_k = log( P(û_k | y) / P(-û_k | y) ),   (5)

where û_k is the MAP estimate for which P(u_k | y) is maximum. The probability P(u_k | y) can be calculated by a forward and a backward recursion following [10] and [1].

2.2.2 The Soft-Output Viterbi Algorithm (SOVA)

We shall first give a motivation for the algorithm before we state it formally. For the sake of simplicity, we restrict ourselves to trellises with two branches ending in each node. This includes a rate K/N code punctured from a rate 1/N code, because it still uses the trellis of the 1/N code [11], [12]. The number of states is S = 2^ν, where ν is the code memory.
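The forward and backward recursion of Section 2.2.1 can be sketched on a toy two-state trellis before turning to the SOVA itself. The channel model (a two-tap ISI-like mapping), the noise parameter, and all function names below are illustrative assumptions, not taken from the paper:

```python
import math

def gamma(y, s_prev, u, sigma):
    """Branch transition weight: uniform prior (0.5) times the Gaussian
    likelihood of observation y given branch output x = 0.5*(s_prev + u)."""
    x = 0.5 * (s_prev + u)
    return 0.5 * math.exp(-(y - x) ** 2 / (2 * sigma ** 2))

def map_llrs(ys, sigma=0.5):
    """Symbol-by-symbol MAP on a 2-state trellis (state = previous bit).
    Returns log P(u_k = +1 | y) / P(u_k = -1 | y) for every step k."""
    states = (-1, +1)
    n = len(ys)
    # forward recursion (alpha), normalized at each step for stability
    alpha = [{s: 1.0 for s in states}]
    for k in range(n):
        a = {s: sum(alpha[k][sp] * gamma(ys[k], sp, s, sigma) for sp in states)
             for s in states}          # next state equals the current input bit
        z = sum(a.values())
        alpha.append({s: a[s] / z for s in states})
    # backward recursion (beta)
    beta = [dict() for _ in range(n + 1)]
    beta[n] = {s: 1.0 for s in states}
    for k in range(n - 1, -1, -1):
        b = {sp: sum(gamma(ys[k], sp, s, sigma) * beta[k + 1][s] for s in states)
             for sp in states}
        z = sum(b.values())
        beta[k] = {sp: b[sp] / z for sp in states}
    # APPs: combine alpha, branch weight, beta over all branches with input u
    llrs = []
    for k in range(n):
        p = {u: sum(alpha[k][sp] * gamma(ys[k], sp, u, sigma) * beta[k + 1][u]
                    for sp in states) for u in (-1, +1)}
        llrs.append(math.log(p[+1] / p[-1]))
    return llrs
```

Feeding in a run of strongly positive samples yields positive log-likelihood ratios throughout, with magnitudes reflecting the confidence of each bit decision; the recursion generalizes directly to larger state spaces at the cost the paper notes.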
Assume that the classical VA makes a final decision with delay δ, δ being large enough so that all 2^ν survivor paths have merged with sufficiently high probability. As shown in Fig. 2, the VA has to select a survivor for state s_k, 0 ≤ s_k ≤ S-1, at time k. It does so by selecting the path with the larger metric, say path 1 with metric M_1 over path 2 with metric M_2. The probability that this survivor decision is wrong is

p_{s_k} = 1 / (1 + e^Δ),  Δ = M_1 - M_2 ≥ 0.   (8)

p_{s_k} approaches 0.5 if M_1 = M_2, and 0 if M_1 - M_2 ≫ 1. With probability p_{s_k} the VA has made errors in all the e positions where the information bits of path 2 differ from path 1, in other words in the positions where

u_j^(1) ≠ u_j^(2),  j ∈ {k-δ, ..., k}.   (9)

Positions where u_j^(1) = u_j^(2) are not affected by the survivor decision. Let δ_m be the length of those two paths until they merge. Then we have e differing information values and δ_m - e nondiffering values. Assume we have stored the probabilities p̂_j of previous erroneous decisions along path 1. Under the assumption that path 1 has been selected, we can update these probabilities for the e differing decisions on this path according to

p̂_j ← p̂_j (1 - p_{s_k}) + (1 - p̂_j) p_{s_k},  j = k-δ, ..., k (differing positions only),   (10)

with 0 ≤ p̂_j ≤ 0.5. This formula requires statistical independence between the random variables p̂_j and p_{s_k}, which is approximately true for most practical codes. The recursion can be performed directly on the log-likelihood ratio

L̂_j = log((1 - p̂_j) / p̂_j).   (11)

Using (8), (10), and (11) we obtain after some calculation

L̂_j ← f(L̂_j, Δ/a) = log( (1 + e^{L̂_j + Δ/a}) / (e^{L̂_j} + e^{Δ/a}) ),   (12)

for the differing positions j. The function f(L̂_j, Δ) should be tabulated with L̂_j and Δ as input variables and need not be calculated at each step. The factor a prevents overflow with increasing SNR; a proper choice is one that asymptotically achieves

E{Δ/a} = d_free,   (13)

where d_free is the free distance of the code. A good approximation of (12) is

f(L̂_j, Δ) ≈ min(L̂_j, Δ/a),   (14)

which requires no table and no knowledge of the SNR.

The Soft-Output Viterbi Algorithm (SOVA) can now be formulated using the notation of [1]. Only the steps marked by (*) augment the classical Viterbi algorithm.

Storage: (time index k, modulo δ+1)
û(s_k) = {û_{k-δ}(s_k), ..., û_k(s_k)}, 0 ≤ s_k ≤ S-1  (hard decision values, û ∈ {±1})
L̂(s_k) = {L̂_{k-δ}(s_k), ..., L̂_k(s_k)}, 0 ≤ s_k ≤ S-1  (*)
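The reliability update at a merge point can be sketched in a few lines. The sketch below assumes the scaling factor a = 1 and hypothetical function names; it shows both the exact update of eq. (12) and the table-free min-approximation of eq. (14):

```python
import math

def f_exact(L, delta):
    """Exact reliability update, eq. (12) with a = 1: the new
    log-likelihood ratio after a survivor decision with metric
    difference delta against a path that disagrees at this position."""
    return math.log((1 + math.exp(L + delta)) / (math.exp(L) + math.exp(delta)))

def sova_update(rel, surv_bits, comp_bits, delta):
    """Min-approximation, eq. (14) with a = 1: cap the stored reliability
    of every position where the competing path disagrees with the
    survivor by the metric difference delta >= 0; agreeing positions
    are untouched (eq. (9))."""
    assert delta >= 0
    return [min(L, delta) if s != c else L
            for L, s, c in zip(rel, surv_bits, comp_bits)]

# worked example: positions 2 and 3 differ between the paths, so their
# reliabilities are capped at delta; position 1 agrees and keeps its value
print(sova_update([10.0, 8.0, 6.0], [+1, -1, +1], [+1, +1, -1], 2.5))
# → [10.0, 2.5, 2.5]
```

Since f_exact(L, Δ) ≤ min(L, Δ) always holds, the approximation is slightly optimistic but never changes the sign of a decision, which is why it works without a lookup table or SNR knowledge.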
