The Viterbi Algorithm

- Application of dynamic programming: the principle of optimality
- Search of Citation Index: 213 references since 1998
- Applications
  - Telecommunications
    - Convolutional codes, trellis codes
    - Inter-symbol interference in digital transmission
    - Continuous phase transmission
    - Magnetic recording (partial response signaling)
    - Diverse others
  - Image restoration
  - Rainfall prediction
  - Gene sequencing
  - Character recognition

Milestones
- Viterbi (1967): decoding convolutional codes
- Omura (1968): VA shown optimal
- Kobayashi (1971): magnetic recording
- Forney (1973): classic survey recognizing the generality of the VA
- Rabiner (1989): influential survey paper on hidden Markov chains

Example: the Principle of Optimality

Professor X chooses an optimum path on his trip to lunch from the EE Building
to the Faculty Club, crossing N bridges ("publish" or "perish" at each stage).
Find the optimal path to each bridge, stage by stage:

- Brute force: (N-1) 2^N adds
- Optimal (dynamic programming): 4(N+1) adds

(Figure: staged graph with branch costs; in the small example shown, brute
force takes 8 adds, the optimal procedure 6.)
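The stage-by-stage idea can be sketched in a few lines of Python. This is an illustrative sketch, not the slides' own material: the two nodes per stage ('N'orth and 'S'outh) and all costs below are hypothetical stand-ins for the figure's bridge costs.

```python
def best_cost(first, stages, last):
    """Principle of optimality on a staged graph with two nodes per stage.
    first: {node: cost from start}; stages: list of {(prev, cur): cost};
    last: {node: cost to finish}. All values here are hypothetical."""
    acc = dict(first)                      # best accumulated cost to each node
    for costs in stages:
        # keep only the best path to each node -- never re-enumerate 2^N routes
        acc = {cur: min(acc[prev] + costs[(prev, cur)] for prev in acc)
               for cur in ('N', 'S')}
    return min(acc[node] + last[node] for node in acc)
```

Each stage costs a fixed number of adds per node, which is where the linear 4(N+1) count comes from.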

Digital Transmission with Convolutional Codes

Information source (a1 ... aN) → Convolutional Encoder (c1 ... cN) → BSC(p) →
(b1 ... bN) → Viterbi Algorithm (â1 ... âN) → Information sink

Maximum a Posteriori (MAP) Estimate

Choose a1 ... aN to maximize P(a1 ... aN | b1 ... bN); for equally likely
messages this is max P(b1 ... bN | a1 ... aN), with p = bit error probability.

Define D(A^N, B^N) = Hamming distance between the code sequence A^N and the
received sequence B^N. For the BSC,

  P(B^N | A^N) = p^D(A^N, B^N) (1 - p)^(N - D(A^N, B^N))

Since log(p / (1 - p)) < 0 for p < 1/2, maximizing this is equivalent to

  min over a1 ... aN of D(A^N, B^N)

Brute force: exponential growth with N.

Convolutional Codes: Encoding a Sequence

Example: (3,1) code (output, input); efficiency = input/output = 1/3.
Shift register state (s1, s2); initial state s1 = s2 = 0.

Output bits per input bit i:
  o1 = i
  o2 = i ⊕ s1
  o3 = i ⊕ s1 ⊕ s2

Input: 110100  →  Output: 111 100 010 110 011 001 000
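A minimal Python sketch of this encoder, following the slide's taps (o1 = i, o2 = i ⊕ s1, o3 = i ⊕ s1 ⊕ s2) and shift-register update:

```python
def conv_encode(bits):
    """Rate-1/3 convolutional encoder from the slide: for each input bit i,
    emit (i, i ^ s1, i ^ s1 ^ s2), then shift the register (s1, s2)."""
    s1 = s2 = 0                      # initial state s1 = s2 = 0
    out = []
    for i in bits:
        out.append((i, i ^ s1, i ^ s1 ^ s2))
        s1, s2 = i, s1               # shift register advances
    return out
```

Encoding 110100 reproduces the slide's triples 111 100 010 110 011 001 (the trailing 000 on the slide corresponds to a flush bit returning the register to the all-zero state).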

Markov Chain for the Convolutional Code

State diagram (Fig. 2.14); branches labeled input/output:

  00: 0/000 → 00,  1/111 → 10
  01: 0/001 → 00,  1/110 → 10
  10: 0/011 → 01,  1/100 → 11
  11: 0/010 → 01,  1/101 → 11

Trellis Representation

  State s1s2 | input 0: output / next state | input 1: output / next state
      00     |          000 / 00            |          111 / 10
      01     |          001 / 00            |          110 / 10
      10     |          011 / 01            |          100 / 11
      11     |          010 / 01            |          101 / 11

Iteration for Optimization

s1, ..., sN: shift register contents (states). For the BSC the distance is
additive (memorylessness):

  D(A^N, B^N) = Σ_{i=1}^N d(a_i, b_i)

so the minimization splits stage by stage:

  min_{s1,...,sN} D(A^N, B^N)
    = min_{sN} min_{s(N-1)/sN} [ d(aN, bN) + min_{s1,...,s(N-2)/s(N-1)} D(A^(N-1), B^(N-1)) ]

Key step: the same decomposition applies again to the inner term,

  min_{s1,...,s(N-2)/s(N-1)} D(A^(N-1), B^(N-1))
    = min_{s(N-2)/s(N-1)} [ d(a(N-1), b(N-1)) + min_{s1,...,s(N-3)/s(N-2)} D(A^(N-2), B^(N-2)) ]

with incremental distance d(a(N-1), b(N-1)) added to the accumulated distance
D(A^(N-2), B^(N-2)). Computation grows linearly in N, not exponentially.
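The linear-in-N recursion amounts to keeping one survivor (accumulated distance plus decoded bits) per state. A sketch for the (3,1) code of the earlier slide, with the Hamming metric of the BSC; this is an illustrative implementation, not the slides' own code:

```python
def viterbi_decode(received):
    """Minimal Viterbi decoder for the slide's (3,1) convolutional code on a
    BSC (Hamming-distance metric). received: list of 3-bit tuples."""
    def encode_step(i, s1, s2):
        # branch label of the trellis: (i, i XOR s1, i XOR s1 XOR s2)
        return (i, i ^ s1, i ^ s1 ^ s2)

    def hamming(x, y):
        return sum(a != b for a, b in zip(x, y))

    # survivor per state (s1, s2): (accumulated distance, decoded bits)
    surv = {(0, 0): (0, [])}                 # initial state s1 = s2 = 0
    for r in received:
        nxt = {}
        for (s1, s2), (dist, bits) in surv.items():
            for i in (0, 1):                 # extend by both possible inputs
                d = dist + hamming(encode_step(i, s1, s2), r)
                state = (i, s1)              # shift register advances
                if state not in nxt or d < nxt[state][0]:
                    nxt[state] = (d, bits + [i])
        surv = nxt
    return min(surv.values())[1]             # best survivor's input sequence
```

At every step each state retains only its best incoming path, so the work per received triple is fixed (number of states × inputs) regardless of N.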

Deciding the Previous State

  min_{s1,...,si} D(A^i, B^i)
    = min_{si/s(i+1)} [ d(a_i, b_i) + min_{s1,...,s(i-1)/si} D(A^(i-1), B^(i-1)) ]

Search the previous states: for each state at step i, add the incremental
distance d(a_i, b_i) to each survivor's accumulated distance at step i-1 and
keep the minimum. (Figure: example with received b_i = 010.)

Viterbi Algorithm: Shortest Path to Detect the Sequence

- First step: from the initial state, extend paths to the states s0, s1, s2, s3
- Trace through successive states, keeping the shortest path to each
- Metric: Hamming distance for convolutional codes, Euclidean distance for
  trellis codes
- Result: optimum sequence detection

Inter-symbol Interference

Transmitter → Channel → Equalizer → Decisions

Transmitted signal: z(t) = Σ_{i=1}^N a_i p(t - iT)

Received signal: z(t) = Σ_{i=1}^N a_i h(t - iT) + n(t)

Pulse correlations: r_{i-j} = ∫ h(t - iT) h(t - jT) dt

r_{i-j} = 0 for |i - j| > m: finite memory channel.

AWGN Channel: MAP Estimate

  min over a1,...,aN of ∫ [ z(t) - Σ_{i=1}^N a_i h(t - iT) ]² dt

i.e., the Euclidean distance between the received and each possible signal.
Simplification (dropping terms independent of the a_i):

  min over a1,...,aN of [ -2 Σ_{i=1}^N a_i Z_i + Σ_{i=1}^N Σ_{j=1}^N a_i a_j r_{i-j} ]

where Z_i = ∫ z(t) h(t - iT) dt is the output of a matched filter.

Viterbi Algorithm for ISI

Define states s_k = {a_{k-m+1}, ..., a_k}: memory m, state = the m most recent
symbols.

Accumulated distance:

  D(Z_1,...,Z_k; s_1,...,s_k) = -2 Σ_{i=1}^k a_i Z_i + Σ_{i=1}^k Σ_{j=1}^k a_i a_j r_{i-j}

Incremental distance:

  d(Z_k, s_{k-1}, s_k) = -2 a_k Z_k + 2 a_k Σ_{i=k-m}^{k-1} a_i r_{k-i} + a_k² r_0

Recursion:

  min_{s1,...,s(k-1)/sk} D(Z_1,...,Z_k; s_1,...,s_k)
    = min_{s(k-1)/sk} [ d(Z_k, s_{k-1}, s_k)
        + min_{s(k-2)/s(k-1)} D(Z_1,...,Z_{k-1}; s_1,...,s_{k-1}) ]
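The same survivor bookkeeping as for convolutional codes applies, with the incremental ISI metric above. A sketch under stated assumptions: `viterbi_isi` is a hypothetical helper (not from the slides), `r[0..m]` holds the pulse correlations r_0..r_m, and the start-up symbols before k = 1 are treated as unknown (every initial state starts at cost zero).

```python
from itertools import product

def viterbi_isi(Z, r, m, alphabet=(-1, 1)):
    """VA for ISI: state = the m most recent symbols; metric per step is
    -2*a_k*Z_k + 2*a_k*sum_i a_{k-i}*r[i] + a_k**2 * r[0]."""
    surv = {s: (0.0, []) for s in product(alphabet, repeat=m)}
    for Zk in Z:
        nxt = {}
        for s, (dist, seq) in surv.items():
            for a in alphabet:
                # incremental distance from the slide's metric
                d = (-2 * a * Zk
                     + 2 * a * sum(s[-i] * r[i] for i in range(1, m + 1))
                     + a * a * r[0])
                new = s[1:] + (a,)           # shift the symbol memory
                if new not in nxt or dist + d < nxt[new][0]:
                    nxt[new] = (dist + d, seq + [a])
        surv = nxt
    return min(surv.values())[1]
```

For a noiseless matched-filter output with memory m = 1 this recovers the transmitted ±1 sequence exactly.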

Magnetic Recording

- Magnetization pattern: piecewise constant ±1, m(t) = 2a_k - 1 for
  kT ≤ t < (k+1)T
- Magnetic flux passes over the read heads; the output differentiates the
  pulses:

    e(t) = [dm(t)/dt] * h(t) = 2 Σ_{k=0}^∞ x_k h(t - kT),  x_k = a_k - a_{k-1}

- Controlled ISI at the output; sample with a Nyquist pulse
- The same model applies to partial response signaling

Continuous Phase FSK

Digital input sequence a1, a2, ..., aN. Transmitted signal:

  y_k(t) = cos(ω(a_k) t + x_k),  kT ≤ t < (k+1)T

Continuous-phase constraint at each symbol boundary t = kT:

  ω(a_{k-1}) t + x_{k-1} ≡ ω(a_k) t + x_k  (mod 2π)

Example: binary signaling, with one frequency fitting a whole number of cycles
per interval and the other an odd number of half cycles; the phase x_k then
alternates between its two values according to the parity (even or odd) of the
number of ones transmitted so far.

(Figure: waveform over one signaling interval.)

Merges and State Reduction

- Optimal paths through the trellis all merge eventually
- Force merges to reduce complexity
- Computation is of order (number of states)²
- Carry only the high-probability states

Effect of Blurring: Input Pixel

Optical output signal:

  s(i, j) = Σ_{l=-L}^{L} Σ_{m=-L}^{L} h(l, m) a(i - l, j - m) + n(i, j)

Blurring is analogous to ISI; AWGN optical channel, where L = optical blur
width.
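The blur sum above is a direct 2-D analogue of the ISI convolution. A minimal sketch (noise term omitted; the kernel layout `h[l+L][m+L]` is an assumption of this sketch, not from the slides):

```python
def blurred_pixel(a, h, L, i, j):
    """s(i, j) = sum over l, m in [-L, L] of h(l, m) * a(i - l, j - m),
    noiseless. h is a (2L+1) x (2L+1) kernel indexed h[l + L][m + L]."""
    return sum(h[l + L][m + L] * a[i - l][j - m]
               for l in range(-L, L + 1) for m in range(-L, L + 1))
```
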

Row Scan

- VA for the optimal row sequence
- Known state transitions and decision feedback utilized for state reduction

Hidden Markov Chain

- Data suggests Markovian structure
- Estimate initial state probabilities
- Estimate transition probabilities
- VA used for estimation of probabilities
- Iteration

Rainfall Prediction

Hidden states: Rainy wet, Rainy dry, Showery wet, Showery dry, No rain.
Observations: rainfall measurements.
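For a hidden Markov chain like this, the VA finds the most likely hidden state sequence given the observations. A sketch with a deliberately simplified model: the two states, the wet/dry observations, and every probability below are hypothetical illustrations, not the slides' estimated values.

```python
def viterbi_hmm(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence given observations
    (standard Viterbi recursion over path probabilities)."""
    # V[s] = (probability of the best path ending in state s, that path)
    V = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        V = {s: max((V[r][0] * trans_p[r][s] * emit_p[s][o], V[r][1] + [s])
                    for r in states)
             for s in states}
    return max(V.values())[1]
```

In practice the products are computed as sums of log-probabilities to avoid underflow on long observation sequences.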

DNA Sequencing

- DNA: double helix
  - Sequences of four nucleotides: A, T, C, G
  - Pairing between strands; bonding A–T and C–G
- Genes
  - Made up of codons: triplets of adjacent nucleotides
  - Overlapping of genes

(Figure: a nucleotide sequence in which one codon is shared by Gene 1,
Gene 2, and Gene 3.)

Hidden Markov Chain for Tracking Genes

States (initial and transition probabilities known):

- S: start, first codon of a gene
- P1–P4: positions +1, ..., +4 from the start of a gene
- E: stop
- H: gap
- M1–M4: positions -1, ..., -4 from the start

Recognizing Handwritten Chinese Characters

- Text-line images
- Estimate stroke width
- Set up an m × n grid
- Estimate initial and transition probabilities
- Detect possible segmentation paths by the VA

Example: Segmenting Handwritten Characters

- All possible segmentation paths
- Eliminating redundant paths: removal of overlapping paths, discarding
  near-duplicate paths
