Table 10.6: Primitive polynomials of degree l. Each polynomial is expressed in octal notation with the lowest-degree terms on the left. [table entries not recoverable]

we have

u_i(Z)g(Z) = Z^i + ψ_i(Z)  (10.47)
u_j(Z)g(Z) = Z^j + ψ_j(Z)

Summing these two equations, we get

Z^i (Z^{j−i} + 1) = [u_i(Z) + u_j(Z)] g(Z) + ψ_i(Z) + ψ_j(Z)  (10.48)

Since j − i < n = 2^l − 1, and g(Z) is primitive, g(Z) cannot divide (Z^{j−i} + 1), and consequently ψ_i(Z) ≠ ψ_j(Z) for i ≠ j. Cyclic Hamming codes can be decoded by using the error-trapping algorithm, as in Example 10.17. A list of primitive polynomials that generate Hamming codes is given in Table 10.6 for different values of l. The polynomials are given in octal notation, with the lowest-degree terms on the left. For example, the first line of the table means

64 = 110100  →  g(Z) = 1 + Z + Z^3

Golay codes

In searching for perfect codes, Golay discovered a (23,12) code that is a cyclic code with generator polynomial

g(Z) = Z^11 + Z^9 + Z^7 + Z^6 + Z^5 + Z + 1  (10.49)

and with minimum distance d_min = 7. Therefore, triple error correction is possible. The important point is that this code is the only possible nontrivial linear binary perfect code with multiple error-correcting capabilities. Besides the Hamming single-error-correcting codes, the repetition codes (with n odd), and the Golay code, no other linear binary perfect codes exist (see MacWilliams and Sloane, 1977, Chapter 6).

Bose–Chaudhuri–Hocquenghem (BCH) codes

This class of cyclic codes is one of the most useful for correcting random errors, mainly because the decoding algorithms can be implemented with an acceptable amount of complexity. For any pair of positive integers m and t, there is a binary BCH code with the following parameters:

n = 2^m − 1,  n − k ≤ mt,  d_min ≥ 2t + 1

This code can correct all combinations of t or fewer errors. The generator polynomial for this code can be constructed from the factors of (Z^{2^m − 1} + 1). Unfortunately, this procedure is not straightforward and is beyond the scope of this book; the interested readers are referred to Chapter 9 of the book by MacWilliams and Sloane (1977). A list of generator polynomials for BCH codes of different parameters is given in Tables 10.7 and 10.8. The polynomials are represented in octal notation, with the highest-degree terms on the left.¹ As an example, the third line of the table means

721 = 111010001  →  g(Z) = Z^8 + Z^7 + Z^6 + Z^4 + 1

Notice that this polynomial can be factored as

g(Z) = (Z^4 + Z + 1)(Z^4 + Z^3 + Z^2 + Z + 1)

It can be verified from Table 10.5 that these two factors are factors of (Z^15 + 1). The BCH codes provide a large class of codes. They are useful not only because of the flexibility in the choice of parameters (block length and code rate), but also because, at block lengths of a few hundred or less, many of these codes are among the best-known codes of the same length and rate.

¹The octal notation in this table is different with respect to that of Tables 10.5 and 10.6, in the sense that the highest-degree term is on the left here.

Table 10.7: Generator polynomials of binary BCH codes (n, k, t), represented in octal notation with the highest-degree terms on the left. [table entries not recoverable]
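The octal shorthand used in these tables is easy to mechanize. The short sketch below (Python; the helper names are ours, not the book's) converts a table entry written with the highest-degree terms on the left into its coefficient vector, and verifies over GF(2) the factorization of 721 given above:

```python
# Helper names are ours (not the book's); a minimal sketch of the octal
# notation of Tables 10.7-10.8, with the highest-degree terms on the left.

def octal_to_poly(octal_str):
    """Octal entry -> GF(2) coefficient list, highest degree first."""
    return [int(b) for b in bin(int(octal_str, 8))[2:]]

def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials (coefficient lists, highest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

g = octal_to_poly("721")          # 111010001 -> Z^8 + Z^7 + Z^6 + Z^4 + 1
f1 = [1, 0, 0, 1, 1]              # Z^4 + Z + 1
f2 = [1, 1, 1, 1, 1]              # Z^4 + Z^3 + Z^2 + Z + 1
assert poly_mul_gf2(f1, f2) == g  # the factorization given in the text
```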
Table 10.8: Generator polynomials of binary BCH codes (continued), represented in octal notation with the highest-degree terms on the left. [table entries not recoverable]

For a description of the decoding algorithms for BCH codes, the reader is referred to Blahut (1983), Chapter 5.

Reed–Solomon codes

These codes are a subclass of BCH codes generalized to the nonbinary case, that is, to code symbols belonging to a set of cardinality q = 2^m. Thus, each symbol can be represented as a binary m-tuple, and the code can be considered as a special type of binary code (see Blahut, 1983, Chapter 7). The parameters of a Reed–Solomon code are the following:

Symbol: m binary digits
Block length: n = (2^m − 1) symbols = m(2^m − 1) binary digits
Parity checks: (n − k) = 2t symbols = 2mt binary digits

These codes are capable of correcting all combinations of t or fewer symbol errors. Alternatively interpreted as binary codes, they are well suited for the correction of bursts of errors (see Section 10.2.10). In fact, one symbol in error means a number of binary digits in error ranging from 1 to m, in adjacent positions within the code word. Perhaps the most important application of these codes is in the concatenated coding scheme described in Chapter 11.
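The symbol/bit bookkeeping for Reed–Solomon codes follows directly from the parameter list above. A minimal sketch, assuming n − k = 2t parity symbols as stated (the function name is ours):

```python
# A quick check of the Reed-Solomon parameter relations listed above,
# assuming n - k = 2t parity symbols; the function name is ours.

def rs_parameters(m, t):
    n = 2**m - 1                  # block length in symbols
    k = n - 2 * t                 # information symbols
    return {"symbol_bits": m,
            "n_symbols": n, "n_bits": m * n,
            "k_symbols": k, "k_bits": m * k,
            "parity_symbols": 2 * t, "parity_bits": 2 * m * t,
            "rate": k / n}

print(rs_parameters(8, 16))       # the popular (255, 223) code, t = 16
```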
Shortened cyclic codes

Since the generator polynomial of a cyclic code must be a divisor of (Z^n + 1), it often happens that its possible degree (n − k) does not cover all combinations of n and k that satisfy practical needs. To avoid this difficulty, cyclic codes are sometimes used in a shortened form. To this purpose, the first i information digits are assumed to be always zero and are not transmitted. In this way, an (n − i, k − i) code is derived whose code words are a subset of the code words of the original code. The code is called a shortened cyclic code, although it may not be cyclic. The new code has at least the same minimum distance as the code from which it is derived. The encoding and syndrome calculation can be accomplished by the same circuits employed in the original code, since the leading string of zeros does not affect the parity-check computations. Error correction can be accomplished by prefixing to each received vector a string of i zeros, or by modifying accordingly the related circuitry. Therefore, these codes share all the implementation advantages of cyclic codes and are also of practical interest.

10.2.9. Maximal-length (pseudonoise) sequences

The code words of the cyclic (2^l − 1, l) simplex (or maximal-length) code of Section 10.2.5 resemble random sequences of zeros and ones. In fact, we shall see that any nonzero code word of these codes has many of the properties that we would expect from a binary sequence obtained by tossing a coin 2^l − 1 times.

Maximal-length codes are the duals of the Hamming codes. Remember that a Hamming code of length 2^l − 1 is generated by a primitive polynomial g(Z) of degree l. The dual code of the same length can be obtained by using the same g(Z) as parity-check polynomial. The dual code can therefore be generated by using a shift-register encoder of the type of Figure 10.13, with feedback connections reflecting the structure of g(Z). For purposes of clarification, we use the following example.

Figure 10.17: Shift-register circuit for encoding the dual code of the Hamming code with generator g(Z) = Z^3 + Z + 1. The circuit generates maximal-length sequences of length 2^3 − 1 = 7.

Example 10.18 The dual code of the (7,4) Hamming code generated by g(Z) = Z^3 + Z + 1 is the (7,3) code with h(Z) = Z^3 + Z + 1 as parity-check polynomial. A shift-register encoder is shown in Figure 10.17. In the following table, the generation of the code word corresponding to the sequence 100 is shown, together with the successive states of the register; the last column of the table is the desired code word. [table of register contents not recoverable]

In the dual code, all the code words, with the exception of the one that is all zeros, are distinct cyclic shifts of a single code word. This property can be verified by considering the operation of the shift register in the encoder of Figure 10.17. When the register is initially loaded and shifted 2^l − 1 times, it passes through all possible nonzero states; then it returns to the initial one. The output sequence, when indefinitely shifted out, is periodic with period 2^l − 1. Since there are only 2^l − 1 possible nonzero states, this period is the largest possible in this register. This explains the name of maximal-length sequence, and why the 2^l − 1 nonzero code words of this cyclic code are distinct cyclic shifts of a single one. □

The example can be generalized to show that the encoder of the maximal-length code can be used to generate sequences of period 2^l − 1. Primitive polynomials (see Table 10.6) are suitable for the generation of these sequences. As already stated, these sequences are also called pseudonoise (PN) sequences. They present the following pseudo-randomness properties.

Property 1. In any segment of length 2^l − 1 of the sequence there are exactly 2^{l−1} ones and 2^{l−1} − 1 zeros. That is, the number of ones and the number of zeros differ by at most 1. This property is an immediate consequence of the results of Section 10.2.5, since the weight of any nonzero code word of the maximal-length code is constant and always equal to 2^{l−1}.

Property 2. If we define a run to be a maximal string of consecutive identical symbols, then in any segment of the PN sequence of length 2^l − 1, one-half of the runs have length 1, one quarter have length 2, one eighth have length 3, and so on. In each case, the number of runs of zeros is equal to the number of runs of ones.
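A maximal-length sequence generator is just the feedback shift register of Figure 10.17. The following minimal sketch (the tap positions and state convention are our assumptions, chosen so that the feedback realizes g(Z) = Z^3 + Z + 1) produces one period and checks Property 1:

```python
# A minimal Fibonacci-LFSR sketch for the maximal-length sequence of
# Example 10.18; tap positions and state convention are our assumptions,
# chosen so that the feedback realizes g(Z) = Z^3 + Z + 1.

def lfsr(taps, state, length):
    out = []
    for _ in range(length):
        out.append(state[-1])            # shift out the last stage
        fb = 0
        for t in taps:                   # feedback = XOR of tapped stages
            fb ^= state[t]
        state = [fb] + state[:-1]        # shift right, insert feedback
    return out

seq = lfsr(taps=[0, 2], state=[1, 0, 0], length=7)   # one period, 2^3 - 1 = 7
assert sum(seq) == 4 and len(seq) - sum(seq) == 3    # Property 1: 4 ones, 3 zeros
```

After 7 shifts the register returns to its initial state, so the output repeats with the maximal period 2^3 − 1 = 7, as the text describes.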
A third relevant property is related to the autocorrelation function of the PN sequence. Let us define the autocorrelation function of an infinite real sequence (a_i) of period n as

r_m = (1/n) Σ_{i=1}^{n} a_i a_{i+m}  (10.50)

Notice that r_m is periodic, of period n. If the sequence (a_i) is binary, formed by "0" and "1", let us replace it by a sequence (b_i) in which we have substituted the 1's with −1's and the 0's with +1's. Thus, from (10.50), we get

r_m = (1/n) Σ_{i=1}^{n} b_i b_{i+m} = (A − D)/n  (10.51)

where A and D are the number of places where the sequence (b_1, b_2, ..., b_n) (one period) and its cyclic shift (b_{m+1}, b_{m+2}, ..., b_{m+n}) agree and disagree, respectively (so A + D = n). Therefore, for a sequence of period n = 2^l − 1, we have

r_0 = 1  (10.52)
r_m = −1/n,  for 1 ≤ m ≤ 2^l − 2  (10.53)

In the sense of minimizing the magnitude of r_m, for m ≠ 0, this is the "best" possible autocorrelation function of any binary sequence of period n.

Figure 10.18: Scrambling and descrambling a binary sequence by adding twice a PN sequence.

PN sequences are very useful in practice, when it is desired to obtain sequences with randomlike properties. To this purpose, the same PN sequence is added modulo-2 to the sequence at hand, both at the transmitter and at the receiver side, as shown in Figure 10.18. This is possible as the PN sequence is deterministic. The only requirement is that in the two additions the two PN sequences be synchronized. The randomizing operation is known as scrambling.
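Scrambling and descrambling as in Figure 10.18 amount to two synchronized modulo-2 additions of the same PN sequence. A sketch, reusing the lfsr() helper from the previous listing:

```python
# Scrambling/descrambling as in Figure 10.18: the same synchronized PN
# sequence is added modulo-2 at both ends, so adding it twice restores
# the data. Reuses the lfsr() helper sketched earlier.

from itertools import cycle

pn = lfsr(taps=[0, 2], state=[1, 0, 0], length=7)    # period-7 PN sequence

def scramble(bits, pn_seq):
    return [b ^ p for b, p in zip(bits, cycle(pn_seq))]

data      = [1, 0, 1, 1, 0, 0, 0, 1, 1, 1]
scrambled = scramble(data, pn)        # at the transmitter
recovered = scramble(scrambled, pn)   # at the receiver, synchronized
assert recovered == data              # x ^ p ^ p = x
```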
10.2.10. Codes for burst-error detection and correction

In this section, we abandon the model of a channel producing random errors (like the AWGN channel, or its hard-demodulated version, the BSC) and assume a channel model in which errors tend to be clustered in bursts. This is a typical situation in certain communication systems employing media like magnetic tapes, magnetic disks, magnetic memories and compact disks. Another situation would be a channel that is basically an AWGN channel occasionally disturbed by long bursts of noise or radio-frequency interference. In general, when burst errors dominate, codes designed for correcting random errors may become inefficient. Nevertheless, cyclic codes again are very useful in this situation.

Let us define a burst of length b as an error pattern in which the errors are confined to b consecutive positions. Therefore, a burst-error pattern of length b can be represented by the polynomial

e(Z) = Z^i e_b(Z)  (10.54)

where Z^i locates the burst in the error sequence of length n, and e_b(Z) is a polynomial of the type

e_b(Z) = Z^{b−1} + ... + 1

The following theorems hold true.

Theorem 10.8
Any cyclic (n, k) code can detect all bursts whose length b is not greater than (n − k).

Proof of Theorem 10.8
The syndrome of such bursts is the remainder of the division of Z^i e_b(Z) by the generator polynomial g(Z). But this syndrome is always different from zero, since neither Z^i nor e_b(Z) is a multiple of g(Z), provided that b ≤ n − k. ∇

Theorem 10.9
Any cyclic (n, k) code that corrects all bursts of length b (or less) must satisfy

n − k ≥ 2b  (10.55)

Proof of Theorem 10.9
To correct all bursts of length b, no burst of length 2b (or less) can be a code word. In fact, the sum of two bursts of length b (or less) is a burst of length 2b (or less). Consider the standard array of the code. If one of the two bursts (the correctable one) is a coset leader and their sum is a code word, the second burst must be in the same coset. Therefore, the second burst cannot be corrected. In conclusion, no burst of length 2b or less can be allowed to be a code word if we want to correct bursts of length b. When this condition is met, consider the sequences whose nonzero components are confined to the first 2b positions. There are 2^{2b} such sequences, and they must lie in different cosets of the standard array: otherwise, the sum of two of them would be a code word (corresponding to a burst of length 2b or less). Since the cosets are 2^{n−k}, the inequality (10.55) follows. QED

As a consequence of Theorem 10.9, the quantity

z = 2b/(n − k)  (10.56)

can be assumed as a measure of the burst-correcting efficiency of the code. Some decoding algorithms for burst-error correction, based on error-trapping techniques, are described in the literature (see Peterson and Weldon, 1972, Chapters 8 and 11).

Table 10.9: Burst-error-correcting cyclic and shortened cyclic codes; the generator polynomial is represented in octal notation with the highest-degree terms on the left (Lin, 1970). [table entries not recoverable]

A list of efficient burst-error-correcting cyclic and shortened cyclic codes is given in Table 10.9. The polynomials are represented in octal notation with the highest-degree terms on the left, as in Table 10.7.

Fire codes

These codes are a versatile class of systematic cyclic codes designed for correcting or detecting a single burst of length b in a block of n digits. Let p(Z) be an irreducible polynomial of degree m ≥ b, and let e be the smallest positive integer such that p(Z) divides (Z^e + 1). Furthermore, assume that e and (2b − 1) are relatively prime integers. Then the polynomial

g(Z) = (Z^{2b−1} + 1) p(Z)  (10.57)

is the generator of a b burst-error-correcting Fire code of length n = LCM(e, 2b − 1), where LCM means least common multiple. Notice that the number of parity-check digits in these codes is (m + 2b − 1). For the limit case of m = b, we obtain a burst-correcting efficiency that cannot exceed 2/3. A proof of the burst-error-correcting capabilities of Fire codes, together with a description of error-trapping decoders, can be found in Chapter 9 of Lin and Costello (1983).

Under the same conditions as before, given two integers b and d, we can generate a Fire code capable of correcting any burst of length b (or less) and simultaneously detecting any burst of length up to d ≥ b, by using the generator polynomial

g(Z) = (Z^c + 1) p(Z)  (10.58)

with c satisfying the condition c ≥ b + d − 1 (see Peterson and Weldon, 1972, Chapter 11).

Example 10.19 We want to design a Fire code to correct all bursts of length b ≤ 7 and to detect all bursts of length up to d = 10. We get c ≥ 16 and m ≥ 7. Choosing the primitive polynomial of degree 7 in Table 10.6, we obtain the following generator:

g(Z) = (Z^16 + 1)(Z^7 + Z^3 + 1)

Since p(Z) is primitive, we have e = 2^7 − 1 = 127, and the length of the code is n = 16 × 127 = 2032. Thus, the code is a (2032, 2009) Fire code with a high rate (R_c ≈ 0.99) and a burst-correcting efficiency z ≈ 0.6. Notice that the low value of (n − k) makes it easy to implement the encoder. On the other hand, these codes usually have a large length, even for a modest burst-correcting capability. This is a disadvantage, since only one burst per block length is correctable or detectable. Therefore, a very long guard space between successive bursts is required. □

Interleaved codes

A practical technique to cope with burst errors is that of using random-error-correcting codes in connection with a suitable interleaver/deinterleaver pair. An interleaver is a device that rearranges the ordering of a sequence of symbols in a deterministic manner. The deinterleaver applies the inverse operation to restore the sequence to its original ordering.
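Before the worked example, here is a minimal row-write/column-read sketch of such an interleaver/deinterleaver pair, together with a check that a burst of length 15 is dispersed into at most t = 3 errors per code word (the numbers anticipate Example 10.20 below; function names are ours):

```python
# A row-write/column-read interleaver of degree i = rows, with a check that
# a burst of length 15 lands as at most t = 3 errors per row (the numbers
# anticipate Example 10.20 below; function names are ours).

def interleave(symbols, rows, cols):
    """Write 'rows' code words of length 'cols' row-wise, read column-wise."""
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

block = list(range(5 * 15))                       # five length-15 code words
tx = interleave(block, rows=5, cols=15)
assert deinterleave(tx, rows=5, cols=15) == block

# Burst hitting stream positions 18..32: count errors per original row.
per_row = [sum(1 for p in range(18, 33) if tx[p] // 15 == r) for r in range(5)]
assert max(per_row) <= 3                          # correctable row by row
```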
Figure 10.19: Block diagram for the application of the interleaver/deinterleaver pair.

Given an (n, k) cyclic code, an (in, ik) interleaved code can be obtained by arranging i code words of the original code into i rows of a rectangular array and transmitting them by columns. The parameter i is called the interleaving degree of the code. If the original code corrects up to t random errors, the interleaved code will have the same random-error correction ability but, in addition, it will be able to correct all bursts of length it (or less). The use of this technique is shown in Figure 10.19 and in the following example.

Example 10.20 Consider a (15,5) BCH code, with t = 3, whose generator polynomial is, from Table 10.7,

g(Z) = Z^10 + Z^8 + Z^5 + Z^4 + Z^2 + Z + 1

The interleaved code, with interleaving degree i = 5, works as follows. A sequence of 25 digits is divided into five 5-digit message blocks, and five code words of length 15 are generated using g(Z). These code words are arranged as the five rows of the 5 × 15 matrix shown in Figure 10.20. The rows of the matrix are then transmitted by columns, one column at a time, so that the interleaved block is a single word of length 75. Each burst of length 15 (or less) produces no more than three errors in each row. As an example, a burst from position 18 to position 32 is shown by the dashed squares in the figure. The deinterleaver restores the rows, and the decoder corrects the errors by operating on each row. The interleaving has in fact dispersed the burst into isolated errors, and all error patterns producing no more than three errors in each of the individual code words are correctable. □

Figure 10.20: Scheme for the interpretation of a (75,25) interleaved code derived from a (15,5) BCH code. A burst of length b = 15 is spread into t = 3 error patterns in each of the five code words of the interleaved code.

10.3. Performance evaluation of block codes

In Chapter 5, different modulation schemes were compared on the basis of their bit error probability P(e). The scope of this section is to provide useful tools for extending those comparisons to coded transmission.

For transmission systems employing block codes, two error probabilities can be defined:

• The word error probability P_w(e), defined as the probability that the decoder output is a wrong code word, i.e., a code word different from that transmitted.

• The bit error probability P_b(e) (or symbol error probability for nonbinary codes), defined as the probability that an information bit (symbol) is in error after decoding.

Which of the two probabilities better describes the system performance in a particular situation depends on the system. The significance of the bit error probability comes from the fact that some of the information bits may be correct even if the decoder outputs a wrong code word.

The computation of the word and bit error probabilities depends on the decoding strategy chosen by the system. As an example, when the system employs an ARQ strategy, the decoder will output a wrong code word if and only if the received n-tuple is one of the 2^k − 1 code words different from the transmitted one. This, for linear codes, requires that the channel error vector coincide with one of the nonzero code words. The situation is completely different when an FEC strategy is adopted.

Different decoding strategies are better understood with reference to the standard array of a linear code introduced in Section 10.2.2. We recall that the standard array is an array with 2^k columns and 2^{n−k} rows that groups all 2^n n-tuples representing the received words.
Each row (a coset) is labeled by a syndrome, and contains all the n-tuples that give that syndrome. The first n-tuple of each row (the coset leader) is the lowest-weight word in the row.

Figure 10.21: Standard array of a block code, with the cosets ordered according to the increasing weight of the coset leaders: the top part (coset leaders of weight ≤ t) corresponds to correctable errors, the bottom part (coset leaders of weight > t) to detectable errors.

Arrange the cosets in order of increasing weight (i.e., decreasing probability on a BSC) of the coset leader, obtaining the situation of Figure 10.21, and assume that the code has a correction capability of t errors. If the code were a perfect code, such as a Hamming code for t = 1, the cosets with a leader of weight up to t would include all n-tuples. In general, however, this will not be the case, and we will find cosets with leaders having weights beyond t. When this happens, we face different decoding strategies:

1. Complete decoding. With this strategy, the decoder always outputs a decoded code word. It fully exploits the standard array, so that the top part of it (see Figure 10.21) includes all the received words. Given a received word in the standard array, the decoder assumes that an error vector corresponding to the leader of the coset containing the received word has been added to the transmitted code word on the channel, and decodes accordingly. In doing this, the decoder goes beyond the correction capabilities of the code, so that some error vectors with weight larger than t can lead to wrong decoding.

2. Bounded-distance decoding. This strategy corresponds to a partial use of the standard array of Figure 10.21. If the received word y lies in the top part of the array, its maximum-likelihood decision is the code word found at the top of its column. If it lies in the bottom part of the standard array, the decoder just detects that more than t errors have occurred. As a consequence, we have an incomplete decoding, or a mixture of error correction and error detection.

3. Error detection. This strategy does not attempt to correct errors; rather, it flags an error whenever the received word does not belong to the code. The upper part of the standard array of Figure 10.21 disappears, and the second part includes all received words except code words.

In the following, we shall analyze all decoding strategies. The analysis is simple when the code is linear. In this case, in fact, if we consider transmission over the BSC, the uniform error property holds: the error probability conditioned on a given transmitted code word does not depend on that code word. This important property stems from the properties of linear codes (see Problem 10.26). Thus, to compute the average error probability (word or bit), we can choose any transmitted code word; it is customary to choose the all-zero code word, denoted x_1. Code linearity and transmission of x_1 will always be assumed in the following. We start our analysis with the simplest case of hard decoding and error detection (ARQ) systems.
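Before turning to the error probabilities, the complete-decoding strategy just described can be sketched in a few lines: enumerate error patterns by increasing weight and record the first pattern seen for each syndrome, i.e., the coset leaders of the standard array. Brute force, feasible only for small n; the parity-check matrix below is one valid choice for the (7,4) Hamming code, not necessarily the one used in the book's tables.

```python
# Complete decoding via coset leaders (brute-force sketch). H below is one
# valid (7,4) Hamming parity-check matrix: its columns are the seven
# distinct nonzero binary triples.

from itertools import combinations

H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def syndrome(v):
    return tuple(sum(h * x for h, x in zip(row, v)) % 2 for row in H)

leaders = {}                                   # syndrome -> coset leader
for wt in range(8):                            # increasing weight = decreasing
    for pos in combinations(range(7), wt):     # probability on a BSC
        e = [1 if i in pos else 0 for i in range(7)]
        leaders.setdefault(syndrome(e), e)     # first hit = lowest weight

def decode(y):
    e = leaders[syndrome(y)]                   # assumed channel error pattern
    return [(a + b) % 2 for a, b in zip(y, e)]
```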
10.3.1. Performance of error detection systems

In ARQ systems, the decoder's function is to answer the binary question of whether a received word is a code word or not. In the negative case, the system asks for the retransmission of the message until a positive answer is obtained from the decoder. This technique has been used for years in computer systems and other applications where a feedback channel is available and retransmission is made possible by the system resources and constraints.

We assume a BSC, and define as P(e), P(c), P(d) the probabilities of decoding error, correct detection, and error detection in a single transmission. A decoding error occurs when the error vector on the channel coincides with a code word different from x_1. We have correct detection when no errors occur over the channel and, finally, error detection when the error vector is not a code word. These are the only possible events in a single transmission, so that

P(e) + P(c) + P(d) = 1  (10.59)

Moreover, the probability of error coincides with the probability that the channel error vector is one of the nonzero code words, given by

P(e) = Σ_{w=d_min}^{n} A_w p^w (1 − p)^{n−w} = (1 − p)^n [A(p/(1 − p)) − 1]  (10.60)

where A_w is the number of code words of Hamming weight w and A(D) is the weight enumerating function defined in (10.18). The probability of correct detection is the probability that no errors occur on the channel, that is,

P(c) = (1 − p)^n  (10.61)

Noticing then that A_0 = 1, we obtain from (10.59), (10.60) and (10.61)

P(d) = 1 − (1 − p)^n A(p/(1 − p))  (10.62)

When the system keeps on retransmitting a message until the received word gets accepted by the decoder, we can have a word error at the nth transmission if and only if the previous n − 1 transmissions have led to an error detection, and the nth to an accepted erroneous code word, so that

P_n(e) = P(e) [P(d)]^{n−1}  (10.63)

The word error probability of this ARQ system is thus given by

P_w(e) = Σ_{n=1}^{∞} P_n(e) = P(e)/(1 − P(d))  (10.64)
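Equations (10.60)–(10.64) are straightforward to evaluate once A(D) is known. A sketch for the (7,4) Hamming code, whose weight enumerating function A(D) = 1 + 7D^3 + 7D^4 + D^7 is recalled in Example 10.21 below:

```python
# Evaluating (10.59)-(10.64) for the (7,4) Hamming code,
# A(D) = 1 + 7D^3 + 7D^4 + D^7.

def A(D):
    return 1 + 7 * D**3 + 7 * D**4 + D**7

def arq_probabilities(p, n=7):
    Pc = (1 - p)**n                              # (10.61)
    Pe = (1 - p)**n * (A(p / (1 - p)) - 1)       # (10.60)
    Pd = 1 - (1 - p)**n * A(p / (1 - p))         # (10.62)
    Pw = Pe / (1 - Pd)                           # (10.64), geometric series
    return Pe, Pc, Pd, Pw

print(arq_probabilities(1e-2))
```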
10.3.2. Performance of error correction systems: word error probability

Hard decoding

Denote with P(y_j | x_i) the probability of receiving the n-tuple y_j when the code word x_i of an (n, k, t) code is transmitted over a BSC with transition probability p. Then the word error probability for complete decoding is given by

P_w(e) = 1 − (1/M) Σ_{i=1}^{M} Σ_{j∈S_i} P(y_j | x_i)  (10.65)

where M = 2^k indicates the number of code words, assumed to be equally likely, and S_i is the set of subscripts j of received words y_j that are decoded into the code word x_i.

For linear codes and maximum-likelihood decoding, the set S_i identifies the received words lying in the same column of the standard array as x_i. Owing to the uniform error property, we can avoid the average over the transmitted code words in (10.65), obtaining

P_w(e) = 1 − Σ_{j∈S_1} P(y_j | x_1)  (10.66)

where x_1 is the all-zero code word. Unfortunately, the evaluation of this apparently simple expression is hard, because extensive computations are needed as n increases. Therefore, upper bounds to (10.66) have been sought.

In general, for both complete and bounded-distance decoding, the word error probability is always less than or equal to the probability that more than t errors have occurred on the channel. We obtain then the following upper bound (C(n, i) denotes the binomial coefficient):

P_w(e) ≤ Σ_{i=t+1}^{n} C(n, i) p^i (1 − p)^{n−i} = 1 − Σ_{i=0}^{t} C(n, i) p^i (1 − p)^{n−i}  (10.67)

Notice that the equal sign in (10.67) holds only for perfect codes. When np ≪ 1, (10.67) can be approximated by its largest term:

P_w(e) ≈ C(n, t+1) p^{t+1} (1 − p)^{n−t−1}  (10.68)

A different approach to obtaining a general upper bound to the word error probability stems directly from the union bound explained in Section 4.3. The derivation here will assume the uniform error property. Let us define the pairwise error event (x_1 → x_2) as the set of received words y_j such that, when the transmitted code word is x_1, the received word y_j is closer (in the ML sense) to x_2 than to x_1. Therefore, for maximum-likelihood decoding,

(x_1 → x_2) = {y_j : P(y_j | x_2) > P(y_j | x_1)}  (10.69)

Denoting by S_12 the set of subscripts j for which (x_1 → x_2) occurs, we have, for the pairwise error probability,

P(x_1 → x_2) = Σ_{j∈S_12} P(y_j | x_1)  (10.70)

and the union bound (4.50) gives

P_w(e) ≤ Σ_{ℓ=2}^{M} P(x_1 → x_ℓ)  (10.71)

Now we derive an upper bound for P(x_1 → x_ℓ) of (10.70). This result, introduced into (10.71), will give us the final answer. Defining the function

f_ℓ(y_j) = 1 for j ∈ S_1ℓ,  f_ℓ(y_j) = 0 for j ∉ S_1ℓ  (10.72)

we can rewrite (10.70) as

P(x_1 → x_ℓ) = Σ_j f_ℓ(y_j) P(y_j | x_1)  (10.73)

where the summation has been extended over the whole set of received sequences y_j. Now, we can easily bound f_ℓ(y_j) by

f_ℓ(y_j) ≤ √[P(y_j | x_ℓ)/P(y_j | x_1)]  (10.74)

Owing to the definition (10.69) of (x_1 → x_ℓ), the bound (10.74) is verified for j ∈ S_1ℓ, whereas for j ∉ S_1ℓ it is trivial. Introducing (10.74) into (10.73), we finally get

P(x_1 → x_ℓ) ≤ Σ_j √[P(y_j | x_1) P(y_j | x_ℓ)]  (10.75)

This expression is called the Bhattacharyya bound. Using the memoryless property of the BSC, (10.75) leads to the result (see Problem 10.27)

P(x_1 → x_ℓ) ≤ [Σ_{y∈Y} √(P(y | x = 0) P(y | x = 1))]^{w_ℓ}  (10.76)

where Y = {0, 1} and w_ℓ is the weight of the code word x_ℓ. Using the transition probability p of the BSC and introducing (10.76) into (10.71), we get

P_w(e) ≤ Σ_{ℓ=2}^{M} [√(4p(1 − p))]^{w_ℓ}  (10.77)

The summation in (10.77) is performed over all M − 1 code words different from the all-zero code word. Recalling the meaning of the weight enumerating function A(D) and its expression (10.18), we can transform (10.77) into

P_w(e) ≤ Σ_{d=d_min}^{n} A_d [√(4p(1 − p))]^d = [A(D) − 1]_{D = √(4p(1−p))}  (10.78)

The bound (10.78) requires the knowledge of the weight enumerating function of the code. A simpler, but weaker, bound is obtained if we replace w_ℓ with d_min in (10.77):

P_w(e) ≤ (M − 1) [√(4p(1 − p))]^{d_min}  (10.79)

Unquantized soft-decision decoding

As described in Section 10.1, unquantized soft-decision decoding entails no quantization of the channel output. In principle, ML decoding for transmission over an AWGN channel could be performed with the techniques explained in Chapter 4. For an (n, k) code, each code word x_i is mapped by the modulator into a waveform x_i(t). The functions of the demodulator and decoder are integrated within the receiver, which is formed by a bank of M = 2^k parallel filters matched to the waveforms x_i(t). The sampled output of the ith filter yields the correlation between the received signal and the ith modulator signal. The M outputs from the matched filters enter a processor that chooses the largest, thus performing an ML decision. This optimum receiver becomes unrealizable in practice for large values of k. Some simplifications are possible, aiming at reducing the number of matched filters, for particular choices of the modulation scheme. Let us assume, as an example, that the bits of the code word are transmitted using binary antipodal modulation. Each binary waveform is demodulated by the optimum soft demodulator (a single matched filter followed by a sampler), and a code word is represented by a sequence of n random variables. Let E denote the energy of each modulator waveform. Then, dropping an irrelevant constant, each of the n binary decision variables can be written as

r_j = √E + ν_j  if the jth digit is 1
r_j = −√E + ν_j  if the jth digit is 0  (10.80)

with j = 1, 2, ..., n. The random variables ν_j are samples of the Gaussian noise, with zero mean and variance N_0/2. From the knowledge of the M = 2^k code words, and upon reception of the sequence (r_j) from the demodulator, the decoder forms M decision variables as follows:

λ_i = Σ_{j=1}^{n} r_j (2x_{ij} − 1),  i = 1, ..., M  (10.81)

where x_{ij} denotes the digit in the jth position of the ith code word. In this way, the decision variable corresponding to the actual transmitted code word will have mean value n√E, while the other (M − 1) ones will have smaller mean values. Maximum-likelihood decoding is achieved by selecting the largest among the λ_i of (10.81).
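A direct, brute-force implementation of (10.80)–(10.81) takes only a few lines; it also makes the exponential cost in k evident. A sketch:

```python
# Brute-force soft-decision ML decoding per (10.80)-(10.81): correlate the
# received samples against every antipodal code word and pick the largest.
# 'codewords' is any list of the M binary n-tuples, so the cost grows
# exponentially with k.

import math, random

def ml_decode(r, codewords):
    best, best_metric = None, -math.inf
    for x in codewords:
        metric = sum(rj * (2 * xj - 1) for rj, xj in zip(r, x))   # (10.81)
        if metric > best_metric:
            best, best_metric = x, metric
    return best

# Transmit one code word with sqrt(E) = 1 and noise variance N0/2 = 0.25.
x = [1, 0, 1, 1, 0, 0, 1]
r = [(2 * xj - 1) + random.gauss(0, 0.5) for xj in x]             # (10.80)
```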
Although the computations involved in the previous decoding process are very simple, it may soon become impractical to implement this algorithm because of the exponential growth of the number M of decision variables with k. Several different types of soft-decision decoding algorithms have been invented to circumvent this difficulty; some references are given in the Bibliographical Notes.

The derivation of the exact error probability of the decoding process is not straightforward, as it is complicated by the correlations between the decision variables. Therefore, we resort to a union bound similar to the one employed for the case of hard decoding. Recalling the uniform error property, we can assume that the all-zero code word is transmitted. Let us define the pairwise error probability P(x_1 → x_m) as

P(x_1 → x_m) ≜ P{λ_m > λ_1 | x_1}  (10.82)

It is the probability that the likelihood of the mth code word is higher than that of the transmitted all-zero code word. The union bound (4.50) then gives

P_w(e) ≤ Σ_{m=2}^{M} P(x_1 → x_m)  (10.83)

The probability P(x_1 → x_m) only depends on the Euclidean distance between the two signals. If the code word x_m has weight w_m, it differs from the all-zero code word in w_m positions. Therefore,

d_{1m}^2 = 4 w_m E  (10.84)

Introducing this value into (4.52), we obtain P(x_1 → x_m) and, from (10.83), recalling that E = R_c E_b,

P_w(e) ≤ Σ_{m=2}^{M} (1/2) erfc(√(w_m R_c E_b/N_0))  (10.85)

A simpler but weaker bound, analogous to (10.79), is obtained by replacing w_m with d_min:

P_w(e) ≤ ((M − 1)/2) erfc(√(d_min R_c E_b/N_0))  (10.86)

Grouping together the A_d code words with the same weight d, we can rewrite (10.85) as

P_w(e) ≤ Σ_{d=d_min}^{n} (A_d/2) erfc(√(d R_c E_b/N_0))  (10.87)

Using now the inequality (A.5) (see Appendix A), we can transform (10.87) into

P_w(e) ≤ (1/2) Σ_{d=d_min}^{n} A_d e^{−d R_c E_b/N_0} = (1/2) [A(D) − 1]_{D = exp(−R_c E_b/N_0)}  (10.88)

To estimate the coding gain, we can compare these results with the error probability of uncoded binary antipodal transmission. Making use of the exponential approximation (A.5) for the function erfc, the coding gain G_c can be defined as the ratio between the values of E_b/N_0 required by the uncoded and by the coded system to achieve the same error probability. Notice that the coding gain G_c depends on both the code parameters and the signal-to-noise ratio; its asymptotic value in dB is 10 log_10(R_c d_min). The results of the last two sections will now be illustrated through an example.

Example 10.21 Consider the (7,4) Hamming code that has been used in the examples throughout the chapter. The aim of this example is to assess the bounds introduced in the last two sections. We start with hard decision. To compute the error probability p of the BSC, we assume binary antipodal transmission, so that

p = (1/2) erfc(√(R_c E_b/N_0))

Since Hamming codes are perfect codes, the expression (10.67) gives the exact value of the word error probability:

P_w(e) = 1 − Σ_{i=0}^{1} C(7, i) p^i (1 − p)^{7−i} = 1 − (1 − p)^7 − 7p(1 − p)^6

The exact result for P_w(e) is plotted in Figure 10.22, together with the two bounds (10.78) and (10.79). Recalling the weight enumerating function of Hamming codes (10.19), the bound (10.78) yields

P_w(e) ≤ [7D^3 + 7D^4 + D^7]_{D = √(4p(1−p))}

Similarly, being d_min = 3, the bound (10.79) becomes

P_w(e) ≤ 15 D^3 |_{D = √(4p(1−p))}

From the curves of Figure 10.22, we note that the tighter of the two bounds differs by slightly less than 2 dB from the exact error probability at P_w(e) = 10^−5, and that the looser bound is worse by a fraction of a dB. In the case of soft decision, we apply the bound (10.87) and the simpler (10.86), obtaining

P_w(e) ≤ (7/2) erfc(√(3 R_c E_b/N_0)) + (7/2) erfc(√(4 R_c E_b/N_0)) + (1/2) erfc(√(7 R_c E_b/N_0))

and

P_w(e) ≤ (15/2) erfc(√(3 R_c E_b/N_0))

The figure also contains these results pertaining to soft decision. □
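The curves of Figure 10.22 can be reproduced numerically from the expressions of Example 10.21. A sketch evaluating all five quantities at a given E_b/N_0:

```python
# Numeric evaluation of the quantities of Example 10.21 at a given Eb/N0
# (binary antipodal transmission, Rc = 4/7, dmin = 3).

from math import erfc, sqrt

def hamming74_bounds(ebn0_db):
    ebn0 = 10 ** (ebn0_db / 10)
    Rc = 4 / 7
    p = 0.5 * erfc(sqrt(Rc * ebn0))                     # BSC probability
    exact_hard = 1 - (1 - p)**7 - 7 * p * (1 - p)**6    # exact, perfect code
    D = sqrt(4 * p * (1 - p))
    union_hard = 7 * D**3 + 7 * D**4 + D**7             # (10.78)
    loose_hard = 15 * D**3                              # (10.79)
    union_soft = 3.5 * erfc(sqrt(3 * Rc * ebn0)) \
               + 3.5 * erfc(sqrt(4 * Rc * ebn0)) \
               + 0.5 * erfc(sqrt(7 * Rc * ebn0))        # (10.87)
    loose_soft = 7.5 * erfc(sqrt(3 * Rc * ebn0))        # (10.86)
    return exact_hard, union_hard, loose_hard, union_soft, loose_soft

print(hamming74_bounds(7.0))
```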
Figure 10.22: Word error probability for the (7,4) Hamming code: exact value and two upper bounds. Hard-decision and soft-decision curves. Binary antipodal transmission.

10.3.3. Performance of error correction systems: bit error probability

The expression of the bit error probability is somewhat more complicated. In this case, in fact, we must enumerate the actual error events and weight the probability of each event by the number of bit errors that it causes. Invoking the uniform error property, a formal expression for the bit error probability is the following:

P_b(e) = (1/k) Σ_{i=2}^{M} w(u_i) P(x_i | x_1)  (10.91)

where P(x_i | x_1) is the probability of decoding the code word x_i different from the transmitted all-zero code word x_1, and w(u_i) is the Hamming weight of the information word u_i that generates the code word x_i.

Computing (10.91) is a very difficult task, in general. In the following, we will generalize the concept of weight enumerating function so as to obtain a union bound to the bit error probability close to the one already obtained for the word error probability.

The weight enumerating function describes the weight distribution of the code words. It does not provide information on the data words that generate the code words; in other words, it does not provide information about the encoder. Let us generalize it by introducing the definition of the input–output weight enumerating function (IOWEF)

B(W, D) ≜ Σ_{w,d} B_{w,d} W^w D^d  (10.92)

where B_{w,d} represents the number of code words with weight d generated by data words of weight w. Recalling the definition (10.18) of the weight enumerating function A(D), it is straightforward to obtain the relations

A(D) = B(W, D)|_{W=1},  A_d = Σ_w B_{w,d}  (10.93)

As an example, consider the Hamming code (7,4). From the table defining its encoder (see Example 10.3), we obtain the IOWEF

B(W, D) = 1 + W(3D^3 + D^4) + W^2(3D^3 + 3D^4) + W^3(D^3 + 3D^4) + W^4 D^7  (10.94)
        ≜ 1 + W B_1(D) + W^2 B_2(D) + W^3 B_3(D) + W^4 B_4(D)

where we have defined the conditional weight enumerating function (CWEF) B_w(D) as the weight enumerating function of the code words generated by data words of weight w. In general, it is obtained through

B_w(D) = Σ_d B_{w,d} D^d = (1/w!) ∂^w B(W, D)/∂W^w |_{W=0}  (10.95)

A union bound on the bit error probability can be obtained following the derivations of (10.78) and (10.88), and defining a conditional pairwise error event (x_1 → x_{w,d}): the event that the likelihood of the transmitted all-zero code word is less than that of a code word with weight d generated by an information word of weight w. The results (see Problem 10.28) are, respectively,

P_b(e) ≤ Σ_{w=1}^{k} (w/k) B_w(D)|_{D = √(4p(1−p))}  (10.96)

for hard decisions, and

P_b(e) ≤ Σ_{w=1}^{k} (w/k) Σ_d B_{w,d} (1/2) erfc(√(d R_c E_b/N_0))  (10.97)

for soft decisions. Making use of the inequality (A.5), (10.97) becomes

P_b(e) ≤ (1/2) Σ_{w=1}^{k} (w/k) B_w(D)|_{D = exp(−R_c E_b/N_0)}  (10.98)

By exchanging the order of summation, (10.96) (and, similarly, (10.97)) can also be written in the form

P_b(e) ≤ Σ_d [Σ_w (w/k) B_{w,d}] [√(4p(1 − p))]^d  (10.99)
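For a small encoder, the IOWEF of (10.92) can be built by direct enumeration and the bound (10.99) evaluated from it. A sketch for a (7,4) Hamming encoder; the systematic parity rules below are an assumption consistent with the parity-check matrix of the earlier decoding sketch, not the encoder table of Example 10.3 itself:

```python
# Building the IOWEF (10.92) of a small code by direct enumeration, then
# evaluating the hard-decision bit-error bound (10.99). The parity rules
# are an assumption (one valid (7,4) Hamming encoder).

from itertools import product

def encode(u):
    p1 = (u[0] + u[1] + u[3]) % 2
    p2 = (u[0] + u[2] + u[3]) % 2
    p3 = (u[1] + u[2] + u[3]) % 2
    return list(u) + [p1, p2, p3]

B = {}                                   # B[(w, d)] = number of code words
for u in product([0, 1], repeat=4):
    w, d = sum(u), sum(encode(u))
    B[(w, d)] = B.get((w, d), 0) + 1     # reproduces the terms of (10.94)

def bit_error_bound(p, k=4):             # (10.99)
    D = (4 * p * (1 - p)) ** 0.5
    return sum((w / k) * n * D**d for (w, d), n in B.items() if d > 0)
```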
Therefore, the maximum value of k should lie between 83 and 99. From Table 10.7, we observe that a (127,92) BCH code exists that provides a satisfactory answer to our problem. □

10.4.2. Bounds on code performance

Bounds on code performance are obtained using random coding techniques, that is, by evaluating the average performance of an ensemble of codes. This implies the existence of specific codes that behave better than the average. The most important result of this approach was already mentioned in Chapter 3 as the channel coding theorem. This theorem states that the word error probability of a coded system can be reduced to any desired value by simply increasing the code word length n, provided only that the code rate does not exceed the channel capacity.

Given the ensemble of binary block codes of length n and rate R_c, the minimum attainable error probability over any discrete memoryless channel is bounded by

P_w(e) ≤ 2^{−nE(R_c)},  R_c ≤ C  (10.106)

Figure 10.26: Typical behavior of the reliability function E(R_c) on the AWGN channel.

The achievable performance is determined by the reliability function of the channel, E(R_c), whose typical behavior for a discrete memoryless channel is given in Figure 10.26. The tangent to E(R_c) with slope −1 intercepts the horizontal axis at a value of R_c that we call the cutoff rate R_0 of the channel. Therefore, we can write a simpler upper bound in the form

P_w(e) ≤ 2^{−n(R_0 − R_c)},  R_c ≤ R_0  (10.107)

The parameter R_0 plays an important role in coding theory. Since it is a characteristic of the channel, it allows comparisons of different channels with respect to an ensemble of codes.
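The cutoff rate is easy to evaluate numerically. The closed forms in the sketch below are the standard end results for the two channels considered in this chapter, stated here as known results since the derivation that follows in the text develops only the antipodal-AWGN case and continues beyond this excerpt:

```python
# Numerical evaluation of the cutoff rate R0 (standard closed forms,
# stated as known results rather than derived here).

from math import exp, log2, sqrt

def r0_awgn_antipodal(es_n0):
    """R0 for binary antipodal signaling on the AWGN channel, soft decisions."""
    return 1 - log2(1 + exp(-es_n0))

def r0_bsc(p):
    """R0 for the hard-decision BSC with transition probability p."""
    return 1 - log2(1 + sqrt(4 * p * (1 - p)))

print(r0_awgn_antipodal(1.0), r0_bsc(0.05))
```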
We have then ‘enews ew S Pou x5 |) EP = MPC [dy =m) (0.113) ‘The probability that two code words of length n selec Bere ith n selected at random differ in A Plays =) = () Pd (10.114) Furthecmore, trom Chaper 4 (4,29) and using the inequality (A.S), we get ee te PERE are Pox 1 y= 8) = hte (FRB 3 2, Obtain the generator matrix of the equivalent systematic code oe om y and lis the 10 Generalize the examples of Problems 10.3 and 10.410 show that there is always a systematic code equivalent tothe ane generated by a given parity-check matrix, 106 A systematic (10, 3) linear block code is defined by the following parity-check equations tetatee0, mtat=0, 0, mete =0, wetntn=0, rotate ntmtn=0 10.121) Find the percentage of eror pattems with 1,2,3,...,9,10 errors Edn rc 2,8, ... 9,10 errors that can be de> 528 10. Improving the transmission reliability: Block codes 40.7 (2) linear block code is defined by the following table: apo 0 0 hipaa 1 ojr 0 1 at ren o 1. Find the generator matrix andthe parity-check matrix of the code. 2, Build the standard array andthe decoing table to be used on a BSC. 43. Whatisthe probability of making erors in decoding code wor assuming an eror detection strategy ofthe decoder? 108 Assume that an (n,) code tas minimum distance d 1. Prove tha evey set of (d ~ 1) fewer columns ofthe parity check matrix His linearly independent 2, Prove that there exist at least one set of d columns of HT that is linearly dependent. 109 Show thatthe dul ofthe (1) repetition code isan (n8—1) code with dain = 2 and with code words always having even weight 10.10 Given an (nk) code, it can be shortened to obtain an (n ~ 1,4 ~ 1) cove by ‘imply taking only the code words that havea in the ist postion, and deleting this 0. Show thatthe maximal-length (simplex) code (2 — 1,7) is obtained by shortening the first-order (2, m +1) Reed-Muller code. 10.11 Assume that an (7,4) block code with minimum distance di used on the binary ‘Gasure channel of Example 3.14. Show that itis always possible to corectly ‘ecode the received sequence provided dat no more chan (d ~ 1) erasures have occurred. 10.12 Consider the following generator matrix ofan (8,5) linea block code 1oooor1d o10c0100 e=[o0100010 ooo10001 ooooriid 1. Show shat the code is eyelic, and find both the generator polynomial (2) + and the parity-check polynomial A(Z). 10.6. Problems os 2, Obtain the parity-check matrix HL 10.13 Consider the generator polynomial 2) 1. Show that it generates acyclic code of any Ieagth. 2, Obtain the party-check polynomial h(Z), the parity-check matrix HY, and the generator matrix G. 3, What kind of code is obtained? +1 10.14 Given the (7, 4) Hamming code generated by the polynomial (2) = 28+ 241 obtain the (7.3) code generated by 92) =(2+1)(2* +2 +1) 1. How is it elated to the orginal (7,4) code? 2. Whats its minimum distance? 3. Show tht the ew coe can cot ll single ec and ima detect all double errors. ae sansa 4. Describe an algorithm for cometion and detection asin pat (3 10.15 Iusrate the behavior ofthe encoder of Fi enc igure 10.13 by enumerating the register contents during the encoding ofthe data word u(Z) = 2? + 2? 41 10.16 Show thatthe (n,n ~ k) coe generated by the parity-check polynomial 2) is sito of cote peated bye rea Plow 2) ie, show that leo he code generated by 9 Z) is generated eae y 92) is ener by 10.17 A cyclic code is generated by 92) = 2+ 2+ 28+ Zt 1. 
1. Find the length n of the code.
2. Sketch the encoding circuits with a k-stage or an (n − k)-stage shift register.

10.18 Discuss the synthesis of a code capable of correcting single errors and adjacent double errors. Develop an example and compare the numbers n and k with those required for the correction of all double and single errors. Hint: Count the required syndromes and construct a suitable parity-check matrix.

10.19 It is desired to build a single-error-correcting (8,4) linear block code.
1. Define the code by shortening a cyclic code.
2. List the code words and find the minimum distance.
3. Sketch the encoding circuit and verify its behavior with an example.

10.20 Show that the binary cyclic code of length n generated by g(Z) has minimum distance at least 3, provided that n is the smallest integer for which g(Z) divides (Z^n + 1).

10.21 Consider a cyclic code generated by a polynomial g(Z) that does not contain (Z + 1) as a factor. Show that the vector of all ones is a code word.

10.22 Show that the (7,4) code generated by g_1(Z) = Z^3 + Z + 1 is the dual of the (7,3) code generated by g_2(Z) = Z^4 + Z^3 + Z^2 + 1.

10.23 Repeat the computations of Example 10.23 for the (15,11) Hamming code.

10.24 Prove the Singleton bound (10.102) and the Hamming bound (10.103).

10.25 Consider a transmission system that performs error detection over a BSC with transition probability p. Using the weight enumerating function A(D), find an exact expression for the probability of undetected errors for the following codes:
1. Hamming codes.
2. Extended Hamming codes.
3. Maximal-length codes.
4. (n,1) repetition codes.
5. (n, n − 1) parity-check codes.

10.26 Show that for a linear block code the set of Hamming distances from a given code word to the other (M − 1) code words is the same for all code words. Hint: Use Property 2 of Section 10.2. Then prove that for any linear code used on a binary-input symmetric channel with ML decoding the uniform error property holds, i.e.,

P_w(e) = P_w(e | x_i),  for all i

Hint: Write

P_w(e | x_i) = 1 − Σ_{j∈S_i} P(y_j | x_i)

where S_i is the set of subscripts j of received sequences y_j that are decoded into x_i, and

P(y_j | x_i) = Π_{l=1}^{n} P(y_{jl} | x_{il})

then use the symmetry of the channel, i.e., P(y = 1 | x = 0) = P(y = 0 | x = 1).

10.27 Using the memoryless property of the BSC, that is,

P(y_j | x_i) = Π_{l=1}^{n} P(y_{jl} | x_{il})

derive (10.76) from (10.75).

10.28 Following the same steps that led to (10.78), derive the union bounds to the bit error probability (10.96) and (10.97).

11
Convolutional and concatenated codes

With block codes, the information sequence is segmented into blocks that are encoded independently to form the coded sequence as a succession of fixed-length independent code words. Convolutional codes behave differently. The n_0 bits that the convolutional encoder generates in correspondence of the k_0 information bits depend on these k_0 data bits and also on some previous data frames (see Section 10.1): the encoder has memory.

Convolutional codes differ deeply from block codes in terms of their structure, analysis and design tools. Algebraic properties are of great importance in constructing good block codes and in developing efficient decoding algorithms. Good convolutional codes, instead, have been almost invariably found by exhaustive computer search, and the most efficient decoding algorithms (like the Viterbi maximum-likelihood algorithm and the sequential algorithm) stem directly from the sequential-state machine nature of convolutional encoders rather than from the algebraic properties of the code.
In this chapter, we will start by establishing the connection of binary convolutional codes with linear block codes, and then widen the horizon by assuming a completely different point of view that looks at a convolutional encoder as a finite-state machine and introduces the code trellis as the graphic tool describing all possible code sequences.

We will show how to evaluate the distance properties of the code and the error probability performance, and describe in detail the application of the Viterbi algorithm to its decoding. A brief introduction to sequential and threshold decoding will also be given.

The second part of the chapter is devoted to concatenated codes, a concept first introduced by Forney (1966) that has since found a wide range of applications. After describing the classical concatenation schemes, we will also devote some space to the recently introduced "turbo" codes, a very promising new class of concatenated codes that approach the capacity coding gains at medium-to-low bit error probabilities.
The no entries of the first row of G, describe the ‘connections of the first cell of the i-th input register segment with the ng cells. ‘of the output register. A “I” in G, means a connection, while a "0" means no ‘connection, We can now define the generator matrix of the convolutional code GG... Gy GG... Gy Gq G Gy au GG Gy All other entries in Gq, are zero. This matrix has the same properties as for block codes, except that it is semi-infinite (it extends indefinitely downveard and tothe right). Therefore, given a semi-infinite message vector u, the corresponding coded vector is x=uG. (112) ‘This equation is formally identical to (10.4). A convolutional encoder is said to be systematic if, in each segment of ro digits that it generates, the frst ko are @ replica of the corresponding message digts. It can be verified that this coneition ‘Tie reader should be warned tat here is 80 unique defiton of constraint length inthe convoluona code iterate ILL Convolutional codes 535 @ © Figure 11.2: To equivalent schemes forthe comolutional encoder oft pease ofthe (3.13) cole {is equivalent to have the following ky x np submatrices: et: toe wefo ot Sle, ay amon wi pees O38 00D (4) Gas | tr 3-1. All these concepts are better clarified with two examples. Example 11.1 Consider a (3.13) conolutonal code, Two equivalent schemes fo the coder ae shown n Fig 11.2. The fist ses a register with thee cells, wheres the Second uses two cells, each introducing unitary delay. The output register replaced bys commutator hat eds seen the ouput ofthe tee ae. The encoder specifica by the following ther submatices actually, three row vectors, since y= 1). @ = p19 G = pit G = 004) 536 M. Convolutional and concatenated codes oft tay Figure 11,3: Comolutional encoder forthe (32,2) code of Example 11-1. “The generator mati, from (L1.1), becomes 111 O11 001 000, (000 111 O11 001 000 Ge =} qo 00 111 O11 O01 000 ‘team be verified, from (11.2), thatthe information sequence = (11011...) sencoded Into the sequence x = (111100010110100...). The encoders systematic, Notice that the code sequence can be obtained by summing madalo-2 the rows of Guy corespond, {ng to the “I” inthe information Sequence, as for block codes. o 1.3, The code ‘Example 112 Consider a (32.2) code. The encoder i shown in Fig ‘is now defied by the two submatrices [ato] on[set ‘pect ema (13 (1 ae i Teer ei now given by | 101 001 000 | Go 010 001 000 (000 101 001 000 Geo = | 000 010 001 000 (900 000 101 61 000 (900 000 016 001 000 1191101...) is ened in the cove sequence = “The information sequence (c1010100110...). ILL Convolutional codes eT Figure 114: Paralel implementation ofthe same comolutional encoder of Fig. 11.3. sent ranacces + ro [S pxrasie soma Figure 11.5: General block diagram ofa convolutional encoder in paralel form for an (no, hy,N) code: ‘The encoder of Fig. 1.3 requites a serial input. The ky = 2 input digits can also ‘be presented in parallel, and the corresponding encoder is given in Fig. 11.4. "The parallel representation of the encoder, shown for a general (no, ka) en= ‘coder in Fig. 11.5. is more flexible than the serial one of Fig. 11.1, as it allows allocation of a different number of register cells in each parallel section. When N, = N, Wi, we can define the constraint length NV asin the case of the serial representation. 
When the N's are different, we define the constraint length 1” as the largest among the N's, ie, N © max, Ne, @ = 1,...,ka- The encoder memory is in this case v = -S¥89(N;- 1) If we look for a physical meaning of the constraint length, N’ — 1 represents the maximum aumber of tells steps (see Section 11.1.1) that are required to ‘etum from any state of the encoder tothe zero state. This “remerge” operation, called relis termination, is required to transform K sections of the trellis of 2 convolutional code into a black code with parameters k = ky K, n = ng: (K+ LN ~ 1), Sometimes, system constraints impose a frame (or “burst’) structure ‘on the information stream. For short bursts, terminated convolutional codes are used, and each burst is decoded using the Viterbi algorithm without truncation
