
**Convolutional Code Structure**

[Figure: general convolutional encoder. Information bits are shifted in k at a time into a shift register of K (k-bit) stages; n modulo-2 adders form the n output bits, which are serialized by an output multiplexer.]

**Convolutional Codes**

- k = number of bits shifted into the encoder at one time (k = 1 is usually used)
- n = number of encoder output bits corresponding to the k information bits
- r = k/n = code rate
- K = constraint length (encoder memory)

Each encoded bit is a function of the present input bits and their past ones.

**Generator Sequence**

[Figure: shift registers with stages r0, r1, r2, r3 (and r0 through r4), with tap connections to a modulo-2 adder.] Reading the taps gives the generator sequences: g0(1) = 1, g1(1) = 0, g2(1) = 1, g3(1) = 1, i.e. g(1) = (1 0 1 1); and g0(2) = 1, g1(2) = 1, g2(2) = 1, g3(2) = 0, g4(2) = 1, i.e. g(2) = (1 1 1 0 1).

**Convolutional Codes: An Example (rate 1/2, K = 2)**

G1(x) = 1 + x², G2(x) = 1 + x + x²

With the state defined as the two previous input bits (x1 x2), the state table is:

| Present | Input | Next | Output |
|---|---|---|---|
| 00 | 0 | 00 | 00 |
| 00 | 1 | 10 | 11 |
| 01 | 0 | 00 | 11 |
| 01 | 1 | 10 | 00 |
| 10 | 0 | 01 | 01 |
| 10 | 1 | 11 | 10 |
| 11 | 0 | 01 | 10 |
| 11 | 1 | 11 | 01 |

[Figure: state diagram with branches labeled input(output), e.g. 0(00), 1(11).]
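As a sketch (function names are illustrative, not from the slides), the state table above can be generated directly from the two generator polynomials:

```python
# Derive the state table of the rate-1/2 code with G1(x) = 1 + x^2 and
# G2(x) = 1 + x + x^2.  The state is (x1, x2), the two most recent input bits.

def step(state, u):
    """Return (next_state, (v1, v2)) for one input bit u."""
    x1, x2 = state
    v1 = u ^ x2        # taps of G1: current input and x2
    v2 = u ^ x1 ^ x2   # taps of G2: current input, x1 and x2
    return (u, x1), (v1, v2)

for x1 in (0, 1):
    for x2 in (0, 1):
        for u in (0, 1):
            nxt, out = step((x1, x2), u)
            print(f"{x1}{x2} --{u}({out[0]}{out[1]})--> {nxt[0]}{nxt[1]}")
```

Each printed line is one row of the state table: present state, input(output), next state.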

**Trellis Diagram Representation**

[Figure: trellis for the rate-1/2 code with states 00, 01, 10, 11 and branches labeled input(output).]

Trellis termination: K tail bits with value 0 are usually added to the end of the code.

**Encoding Process**

Input:  1  0  1  1  1  0  0
Output: 11 01 00 10 01 10 11

[Figure: the encoding path traced through the trellis, starting and ending in state 00.]
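The same input/output walk can be checked with a minimal encoder sketch (the function name is illustrative):

```python
def encode(bits):
    """Encode with the rate-1/2, K=2 code G1 = 1+x^2, G2 = 1+x+x^2 (state starts at 00)."""
    x1 = x2 = 0
    out = []
    for u in bits:
        out.append((u ^ x2, u ^ x1 ^ x2))  # (v1, v2) for this input bit
        x1, x2 = u, x1                     # shift the register
    return out

pairs = encode([1, 0, 1, 1, 1, 0, 0])
print(" ".join(f"{a}{b}" for a, b in pairs))  # → 11 01 00 10 01 10 11
```

The last two input bits are the all-zero tail that returns the encoder to state 00.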

**Viterbi Decoding Algorithm**

Maximum likelihood (ML) decoding rule: given the received sequence r, choose the detected sequence d that minimizes d(r, d).

The Viterbi decoding algorithm is an efficient search algorithm that performs the ML decoding rule while reducing the computational complexity.

**Viterbi Decoding Algorithm**

Basic concept:
- Generate the code trellis at the decoder.
- The decoder penetrates through the code trellis level by level in search of the transmitted code sequence.
- At each level of the trellis, the decoder computes and compares the metrics of all the partial paths entering a node.
- The decoder stores the partial path with the best metric and eliminates all the other partial paths. The stored partial path is called the survivor.

**Viterbi Decoding Process**

Receive: 11 11 00 10 01 11 11 (the transmitted code word 11 01 00 10 01 10 11 with two bit errors)

[Figure sequence: the decoder works through the trellis one received pair at a time, keeping at each state the survivor with the smallest accumulated Hamming distance.]

Survivor metrics after each received pair:

| Stage (received) | S00 | S01 | S10 | S11 |
|---|---|---|---|---|
| 1 (11) | 2 | – | 0 | – |
| 2 (11) | 4 | 1 | 2 | 1 |
| 3 (00) | 3 | 2 | 1 | 2 |
| 4 (10) | 3 | 2 | 3 | 1 |
| 5 (01) | 3 | 3 | 3 | 1 |
| 6 (11) | 3 | 2 | – | – |
| 7 (11) | 2 | – | – | – |

Decision: the surviving path ends in state 00 with metric 2 (the number of channel errors) and corresponds to the decoded information sequence

Output: 1011100
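The decoding steps can be reproduced with a short hard-decision Viterbi sketch (illustrative code, not from the original slides):

```python
def viterbi(received):
    """Hard-decision Viterbi decoding for the rate-1/2, K=2 code
    G1 = 1+x^2, G2 = 1+x+x^2, assuming the trellis starts and ends in state 00."""
    # metric[state] = (accumulated Hamming distance of survivor, decoded bits so far)
    metric = {(0, 0): (0, [])}
    for r1, r2 in received:
        new = {}
        for (x1, x2), (m, bits) in metric.items():
            for u in (0, 1):
                v1, v2 = u ^ x2, u ^ x1 ^ x2          # branch output
                d = m + (v1 != r1) + (v2 != r2)       # branch metric
                nxt = (u, x1)
                if nxt not in new or d < new[nxt][0]:  # keep the survivor
                    new[nxt] = (d, bits + [u])
        metric = new
    return metric[(0, 0)]  # path ending in the all-zero state after the tail

dist, bits = viterbi([(1, 1), (1, 1), (0, 0), (1, 0), (0, 1), (1, 1), (1, 1)])
print(dist, bits)  # → 2 [1, 0, 1, 1, 1, 0, 0]
```

The final metric 2 equals the number of channel errors, and the decoded bits match the encoder input, including the two zero tail bits.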

**Convolutional Codes**

Convolutional codes were first introduced by Elias in 1955 as an alternative to block codes. Convolutional codes differ from block codes in that the encoder contains memory and the n encoder outputs at any time unit depend not only on the k inputs but also on m previous input blocks. An (n, k, m) convolutional code can be implemented with a k-input, n-output linear sequential circuit with input memory m. Typically, n and k are small integers with k < n, but the memory order m must be made large to achieve low error probabilities. In the important special case when k = 1, the information sequence is not divided into blocks and can be processed continuously.

Shortly thereafter, Wozencraft proposed sequential decoding as an efficient decoding scheme for convolutional codes, and experimental studies soon began to appear. In 1963, Massey proposed a less efficient but simpler-to-implement decoding method called threshold decoding. Then in 1967, Viterbi proposed a maximum likelihood decoding scheme that was relatively easy to implement for codes with small memory orders. This scheme, called Viterbi decoding, together with improved versions of sequential decoding, led to the application of convolutional codes to deep-space and satellite communication in the early 1970s.

**Convolutional Code**

A convolutional code is generated by passing the information sequence to be transmitted through a linear finite-state shift register. In general, the shift register consists of K (k-bit) stages and n linear algebraic function generators.

- k = number of bits shifted into the encoder at one time (k = 1 is usually used)
- n = number of encoder output bits corresponding to the k information bits
- Rc = k/n = code rate
- K = constraint length (encoder memory)

Each encoded bit is a function of the present input bits and their past ones.

**Encoding of Convolutional Code**

Example 1: Consider the binary convolutional encoder with constraint length K = 3, k = 1, and n = 3. The generators are g1 = [1 0 0], g2 = [1 0 1], and g3 = [1 1 1]. The generators are more conveniently given in octal form as (4, 5, 7).

Example 2: Consider a rate 2/3 convolutional encoder. The generators are g1 = [1 0 1 1], g2 = [1 1 0 1], and g3 = [1 0 1 0]. In octal form, these generators are (13, 15, 12).

**Representations of Convolutional Code**

There are three alternative methods that are often used to describe a convolutional code:
- Tree diagram
- Trellis diagram
- State diagram

Tree diagram: Note that the tree diagram repeats itself after the third stage. This is consistent with the fact that the constraint length is K = 3: the output sequence at each stage is determined by the input bit and the two previous input bits. In other words, we may say that the 3-bit output sequence for each input bit is determined by the input bit and the four possible states of the shift register, denoted a = 00, b = 01, c = 10, and d = 11. [Figure: tree diagram for the rate 1/3, K = 3 convolutional code.]

Trellis diagram: [Figure: trellis diagram for the rate 1/3, K = 3 convolutional code.]

State diagram: the transitions, labeled by the input bit, are

a →(0) a, a →(1) c;  b →(0) a, b →(1) c;  c →(0) b, c →(1) d;  d →(0) b, d →(1) d.

[Figure: state diagram for the rate 1/3, K = 3 convolutional code.]

Example: K = 2, k = 2, n = 3 convolutional code. [Figure: tree diagram.]

Example: K = 2, k = 2, n = 3 convolutional code. [Figure: trellis diagram.]

Example: K = 2, k = 2, n = 3 convolutional code. [Figure: state diagram.]

In general, we state that a rate k/n, constraint length K convolutional code is characterized by 2^k branches emanating from each node of the tree diagram. The trellis and the state diagrams each have 2^{k(K-1)} possible states. There are 2^k branches entering each state and 2^k branches leaving each state.

**Encoding of Convolutional Codes**

Example: A (2, 1, 3) binary convolutional code: the encoder consists of an m = 3-stage shift register together with n = 2 modulo-2 adders and a multiplexer for serializing the encoder outputs. The mod-2 adders can be implemented as EXCLUSIVE-OR gates. Since mod-2 addition is a linear operation, the encoder is a linear feedforward shift register. All convolutional encoders can be implemented using a linear feedforward shift register of this type.

The information sequence u = (u0, u1, u2, …) enters the encoder one bit at a time. Since the encoder is a linear system, the two encoder output sequences $v^{(1)} = (v_0^{(1)}, v_1^{(1)}, v_2^{(1)}, \ldots)$ and $v^{(2)} = (v_0^{(2)}, v_1^{(2)}, v_2^{(2)}, \ldots)$ can be obtained as the convolution of the input sequence u with the two encoder "impulse responses." The impulse responses are obtained by letting u = (1 0 0 …) and observing the two output sequences. Since the encoder has an m-time-unit memory, the impulse responses can last at most m + 1 time units, and are written as

$$\mathbf{g}^{(1)} = (g_0^{(1)}, g_1^{(1)}, \ldots, g_m^{(1)}), \qquad \mathbf{g}^{(2)} = (g_0^{(2)}, g_1^{(2)}, \ldots, g_m^{(2)}).$$

The encoder of the binary (2, 1, 3) code has

$$\mathbf{g}^{(1)} = (1\ 0\ 1\ 1), \qquad \mathbf{g}^{(2)} = (1\ 1\ 1\ 1).$$

The impulse responses g(1) and g(2) are called the generator sequences of the code. The encoding equations can now be written as

$$\mathbf{v}^{(1)} = \mathbf{u} * \mathbf{g}^{(1)}, \qquad \mathbf{v}^{(2)} = \mathbf{u} * \mathbf{g}^{(2)},$$

where * denotes discrete convolution and all operations are mod-2. The convolution operation implies that for all l ≥ 0,

$$v_l^{(j)} = \sum_{i=0}^{m} u_{l-i}\, g_i^{(j)} = u_l g_0^{(j)} + u_{l-1} g_1^{(j)} + \cdots + u_{l-m} g_m^{(j)}, \qquad j = 1, 2,$$

where $u_{l-i} = 0$ for all $l < i$.

Hence, for the encoder of the binary (2, 1, 3) code,

$$v_l^{(1)} = u_l + u_{l-2} + u_{l-3}, \qquad v_l^{(2)} = u_l + u_{l-1} + u_{l-2} + u_{l-3},$$

as can easily be verified by direct inspection of the encoding circuit. After encoding, the two output sequences are multiplexed into a single sequence, called the code word, for transmission over the channel. The code word is given by

$$\mathbf{v} = (v_0^{(1)} v_0^{(2)},\ v_1^{(1)} v_1^{(2)},\ v_2^{(1)} v_2^{(2)},\ \ldots).$$

Example 10.1: Let the information sequence u = (1 0 1 1 1). Then the output sequences are

$$\mathbf{v}^{(1)} = (1\ 0\ 1\ 1\ 1) * (1\ 0\ 1\ 1) = (1\ 0\ 0\ 0\ 0\ 0\ 0\ 1),$$
$$\mathbf{v}^{(2)} = (1\ 0\ 1\ 1\ 1) * (1\ 1\ 1\ 1) = (1\ 1\ 0\ 1\ 1\ 1\ 0\ 1),$$

and the code word is

$$\mathbf{v} = (1 1,\ 0 1,\ 0 0,\ 0 1,\ 0 1,\ 0 1,\ 0 0,\ 1 1).$$
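The discrete convolutions of Example 10.1 can be checked with a small sketch (helper name `conv` is illustrative):

```python
def conv(u, g):
    """Binary convolution v_l = sum_i u_{l-i} g_i (mod 2); length len(u)+len(g)-1."""
    v = [0] * (len(u) + len(g) - 1)
    for l, ub in enumerate(u):
        for i, gb in enumerate(g):
            v[l + i] ^= ub & gb
    return v

u = [1, 0, 1, 1, 1]
v1 = conv(u, [1, 0, 1, 1])  # g(1)
v2 = conv(u, [1, 1, 1, 1])  # g(2)
print(v1)  # → [1, 0, 0, 0, 0, 0, 0, 1]
print(v2)  # → [1, 1, 0, 1, 1, 1, 0, 1]
word = [f"{a}{b}" for a, b in zip(v1, v2)]
print(word)  # → ['11', '01', '00', '01', '01', '01', '00', '11']
```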

If the generator sequences g(1) and g(2) are interlaced and then arranged in the matrix

$$G = \begin{bmatrix} g_0^{(1)} g_0^{(2)} & g_1^{(1)} g_1^{(2)} & g_2^{(1)} g_2^{(2)} & \cdots & g_m^{(1)} g_m^{(2)} & & \\ & g_0^{(1)} g_0^{(2)} & g_1^{(1)} g_1^{(2)} & \cdots & g_{m-1}^{(1)} g_{m-1}^{(2)} & g_m^{(1)} g_m^{(2)} & \\ & & g_0^{(1)} g_0^{(2)} & \cdots & g_{m-2}^{(1)} g_{m-2}^{(2)} & g_{m-1}^{(1)} g_{m-1}^{(2)} & g_m^{(1)} g_m^{(2)} \\ & & & & \ddots & & \end{bmatrix},$$

where the blank areas are all zeros, the encoding equations can be rewritten in matrix form as v = uG. G is called the generator matrix of the code. Note that each row of G is identical to the preceding row but shifted n = 2 places to the right, and that G is a semi-infinite matrix, corresponding to the fact that the information sequence u is of arbitrary length.

If u has finite length L, then G has L rows and 2(m + L) columns, and v has length 2(m + L).

Example 10.2: If u = (1 0 1 1 1), then

$$\mathbf{v} = \mathbf{u}G = (1\ 0\ 1\ 1\ 1) \begin{bmatrix} 11 & 01 & 11 & 11 & & & & \\ & 11 & 01 & 11 & 11 & & & \\ & & 11 & 01 & 11 & 11 & & \\ & & & 11 & 01 & 11 & 11 & \\ & & & & 11 & 01 & 11 & 11 \end{bmatrix} = (11,\ 01,\ 00,\ 01,\ 01,\ 01,\ 00,\ 11),$$

which agrees with our previous calculation using discrete convolution.
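A sketch of the matrix form (the helper name `gen_matrix` is illustrative): build the truncated generator matrix by interlacing g(1) and g(2) and shifting each row n = 2 places, then form v = uG over GF(2).

```python
def gen_matrix(g1, g2, L):
    """L x 2(L+m) generator matrix of a (2,1,m) code for length-L inputs."""
    row0 = []
    for a, b in zip(g1, g2):   # interlace the two generator sequences
        row0 += [a, b]
    return [[0] * (2 * i) + row0 + [0] * (2 * (L - 1 - i)) for i in range(L)]

u = [1, 0, 1, 1, 1]
G = gen_matrix([1, 0, 1, 1], [1, 1, 1, 1], len(u))
v = [0] * len(G[0])
for ub, row in zip(u, G):      # v = uG over GF(2): XOR the selected rows
    if ub:
        v = [a ^ b for a, b in zip(v, row)]
print(v)  # → [1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1]
```

The result is the interleaved code word (11, 01, 00, 01, 01, 01, 00, 11), matching Example 10.2.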

Consider a (3, 2, 1) convolutional code. Since k = 2, the encoder consists of two m = 1-stage shift registers together with n = 3 mod-2 adders and two multiplexers.

The information sequence enters the encoder k = 2 bits at a time, and can be written as

$$\mathbf{u} = (u_0^{(1)} u_0^{(2)},\ u_1^{(1)} u_1^{(2)},\ u_2^{(1)} u_2^{(2)},\ \ldots)$$

or as the two input sequences

$$\mathbf{u}^{(1)} = (u_0^{(1)}, u_1^{(1)}, u_2^{(1)}, \ldots), \qquad \mathbf{u}^{(2)} = (u_0^{(2)}, u_1^{(2)}, u_2^{(2)}, \ldots).$$

There are three generator sequences corresponding to each input sequence. Let $\mathbf{g}_i^{(j)} = (g_{i,0}^{(j)}, g_{i,1}^{(j)}, \ldots, g_{i,m}^{(j)})$ represent the generator sequence corresponding to input i and output j. The generator sequences of the (3, 2, 1) convolutional code are

$$\mathbf{g}_1^{(1)} = (1\ 1), \quad \mathbf{g}_1^{(2)} = (0\ 1), \quad \mathbf{g}_1^{(3)} = (1\ 1),$$
$$\mathbf{g}_2^{(1)} = (0\ 1), \quad \mathbf{g}_2^{(2)} = (1\ 0), \quad \mathbf{g}_2^{(3)} = (1\ 0).$$

The encoding equations can be written as

$$\mathbf{v}^{(1)} = \mathbf{u}^{(1)} * \mathbf{g}_1^{(1)} + \mathbf{u}^{(2)} * \mathbf{g}_2^{(1)}, \qquad \mathbf{v}^{(2)} = \mathbf{u}^{(1)} * \mathbf{g}_1^{(2)} + \mathbf{u}^{(2)} * \mathbf{g}_2^{(2)}, \qquad \mathbf{v}^{(3)} = \mathbf{u}^{(1)} * \mathbf{g}_1^{(3)} + \mathbf{u}^{(2)} * \mathbf{g}_2^{(3)}.$$

The convolution operation implies that

$$v_l^{(1)} = u_l^{(1)} + u_{l-1}^{(1)} + u_{l-1}^{(2)}, \qquad v_l^{(2)} = u_l^{(2)} + u_{l-1}^{(1)}, \qquad v_l^{(3)} = u_l^{(1)} + u_l^{(2)} + u_{l-1}^{(1)}.$$

After multiplexing, the code word is given by

$$\mathbf{v} = (v_0^{(1)} v_0^{(2)} v_0^{(3)},\ v_1^{(1)} v_1^{(2)} v_1^{(3)},\ v_2^{(1)} v_2^{(2)} v_2^{(3)},\ \ldots).$$

Example 10.3: If u(1) = (1 0 1) and u(2) = (1 1 0), then

$$\mathbf{v}^{(1)} = (1\ 0\ 1) * (1\ 1) + (1\ 1\ 0) * (0\ 1) = (1\ 0\ 0\ 1),$$
$$\mathbf{v}^{(2)} = (1\ 0\ 1) * (0\ 1) + (1\ 1\ 0) * (1\ 0) = (1\ 0\ 0\ 1),$$
$$\mathbf{v}^{(3)} = (1\ 0\ 1) * (1\ 1) + (1\ 1\ 0) * (1\ 0) = (0\ 0\ 1\ 1),$$

and

$$\mathbf{v} = (1 1 0,\ 0 0 0,\ 0 0 1,\ 1 1 1).$$
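The two-input convolutions of Example 10.3 can be checked with the same convolution sketch (helper names are illustrative):

```python
def conv(u, g):
    v = [0] * (len(u) + len(g) - 1)
    for l, ub in enumerate(u):
        for i, gb in enumerate(g):
            v[l + i] ^= ub & gb
    return v

def add(a, b):
    """Mod-2 sum of two equal-length sequences."""
    return [x ^ y for x, y in zip(a, b)]

u1, u2 = [1, 0, 1], [1, 1, 0]
v1 = add(conv(u1, [1, 1]), conv(u2, [0, 1]))  # g1(1), g2(1)
v2 = add(conv(u1, [0, 1]), conv(u2, [1, 0]))  # g1(2), g2(2)
v3 = add(conv(u1, [1, 1]), conv(u2, [1, 0]))  # g1(3), g2(3)
print(v1, v2, v3)  # → [1, 0, 0, 1] [1, 0, 0, 1] [0, 0, 1, 1]
word = [f"{a}{b}{c}" for a, b, c in zip(v1, v2, v3)]
print(word)  # → ['110', '000', '001', '111']
```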

The generator matrix of a (3, 2, m) code is

$$G = \begin{bmatrix} g_{1,0}^{(1)} g_{1,0}^{(2)} g_{1,0}^{(3)} & g_{1,1}^{(1)} g_{1,1}^{(2)} g_{1,1}^{(3)} & \cdots & g_{1,m}^{(1)} g_{1,m}^{(2)} g_{1,m}^{(3)} & & \\ g_{2,0}^{(1)} g_{2,0}^{(2)} g_{2,0}^{(3)} & g_{2,1}^{(1)} g_{2,1}^{(2)} g_{2,1}^{(3)} & \cdots & g_{2,m}^{(1)} g_{2,m}^{(2)} g_{2,m}^{(3)} & & \\ & g_{1,0}^{(1)} g_{1,0}^{(2)} g_{1,0}^{(3)} & \cdots & g_{1,m-1}^{(1)} g_{1,m-1}^{(2)} g_{1,m-1}^{(3)} & g_{1,m}^{(1)} g_{1,m}^{(2)} g_{1,m}^{(3)} & \\ & g_{2,0}^{(1)} g_{2,0}^{(2)} g_{2,0}^{(3)} & \cdots & g_{2,m-1}^{(1)} g_{2,m-1}^{(2)} g_{2,m-1}^{(3)} & g_{2,m}^{(1)} g_{2,m}^{(2)} g_{2,m}^{(3)} & \\ & & & \ddots & & \end{bmatrix},$$

and the encoding equations in matrix form are again given by v = uG. Note that each set of k = 2 rows of G is identical to the preceding set of rows but shifted n = 3 places to the right.

Example 10.4: If u(1) = (1 0 1) and u(2) = (1 1 0), then u = (1 1, 0 1, 1 0) and

$$\mathbf{v} = \mathbf{u}G = (1\ 1,\ 0\ 1,\ 1\ 0) \begin{bmatrix} 101 & 111 & & \\ 011 & 100 & & \\ & 101 & 111 & \\ & 011 & 100 & \\ & & 101 & 111 \\ & & 011 & 100 \end{bmatrix} = (1 1 0,\ 0 0 0,\ 0 0 1,\ 1 1 1),$$

which agrees with our previous calculation using discrete convolution.

In general, the encoder contains k shift registers, not all of which must have the same length. If Ki is the length of the ith shift register, then the encoder memory order m is defined as

$$m = \max_{1 \le i \le k} K_i.$$

[Figure: an example of a (4, 3, 2) convolutional encoder in which the shift register lengths are 0, 1, and 2.]

The constraint length is defined as $n_A \triangleq n(m+1)$. Since each information bit remains in the encoder for up to m + 1 time units, and during each time unit can affect any of the n encoder outputs, $n_A$ can be interpreted as the maximum number of encoder outputs that can be affected by a single information bit. For example, the constraint lengths of the (2, 1, 3), (3, 2, 1), and (4, 3, 2) convolutional codes are 8, 6, and 12, respectively.

In the general case of an (n, k, m) code, the generator matrix is

$$G = \begin{bmatrix} G_0 & G_1 & G_2 & \cdots & G_m & & \\ & G_0 & G_1 & \cdots & G_{m-1} & G_m & \\ & & G_0 & \cdots & G_{m-2} & G_{m-1} & G_m \\ & & & \ddots & & & \end{bmatrix},$$

where each $G_l$ is a k × n submatrix whose entries are

$$G_l = \begin{bmatrix} g_{1,l}^{(1)} & g_{1,l}^{(2)} & \cdots & g_{1,l}^{(n)} \\ g_{2,l}^{(1)} & g_{2,l}^{(2)} & \cdots & g_{2,l}^{(n)} \\ \vdots & \vdots & & \vdots \\ g_{k,l}^{(1)} & g_{k,l}^{(2)} & \cdots & g_{k,l}^{(n)} \end{bmatrix}.$$

Note that each set of k rows of G is identical to the previous set of rows but shifted n places to the right.

For an information sequence

$$\mathbf{u} = (\mathbf{u}_0, \mathbf{u}_1, \ldots) = (u_0^{(1)} u_0^{(2)} \cdots u_0^{(k)},\ u_1^{(1)} u_1^{(2)} \cdots u_1^{(k)},\ \ldots),$$

the code word

$$\mathbf{v} = (\mathbf{v}_0, \mathbf{v}_1, \ldots) = (v_0^{(1)} v_0^{(2)} \cdots v_0^{(n)},\ v_1^{(1)} v_1^{(2)} \cdots v_1^{(n)},\ \ldots)$$

is given by v = uG. Since the code word v is a linear combination of rows of the generator matrix G, an (n, k, m) convolutional code is a linear code.

A convolutional encoder generates n encoded bits for each k information bits, and R = k/n is called the code rate. For a finite-length information sequence of k·L bits, the corresponding code word has length n(L + m), where the final n·m outputs are generated after the last nonzero information block has entered the encoder. Viewing a convolutional code as a linear block code with generator matrix G, the block code rate is given by kL/n(L + m), the ratio of the number of information bits to the length of the code word. If L ≫ m, then L/(L + m) ≈ 1, and the block code rate and the convolutional code rate are approximately equal.

However.3%.Encoding of Convolutional Codes If L were small. however.1. Example 10. if the length of the information sequence is L=1000. which is the effective rate of information transmission. would be reduced below the code rate by a fractional amount k n − kL n( L + m) m = k n L+m called the fractional rate loss.5 For a (2. L is always assumed to be much larger than m.5%. 51 . To keep the fractional rate loss small. the ratio kL/n(L + m).3) convolutional codes. L=5 and the fractional rate loss is 3/8=37. the fractional rate loss is only 3/1003=0.

In a linear system, time-domain operations involving convolution can be replaced by more convenient transform-domain operations involving polynomial multiplication. Since a convolutional encoder is a linear system, each sequence in the encoding equations can be replaced by a corresponding polynomial, and the convolution operation replaced by polynomial multiplication. In the polynomial representation of a binary sequence, the sequence itself is represented by the coefficients of the polynomial. For example, for a (2, 1, m) code, the encoding equations become

$$v^{(1)}(D) = u(D)\, g^{(1)}(D), \qquad v^{(2)}(D) = u(D)\, g^{(2)}(D),$$

where $u(D) = u_0 + u_1 D + u_2 D^2 + \cdots$ is the information sequence.

The encoded sequences are

$$v^{(1)}(D) = v_0^{(1)} + v_1^{(1)} D + v_2^{(1)} D^2 + \cdots, \qquad v^{(2)}(D) = v_0^{(2)} + v_1^{(2)} D + v_2^{(2)} D^2 + \cdots.$$

The generator polynomials of the code are

$$g^{(1)}(D) = g_0^{(1)} + g_1^{(1)} D + \cdots + g_m^{(1)} D^m, \qquad g^{(2)}(D) = g_0^{(2)} + g_1^{(2)} D + \cdots + g_m^{(2)} D^m,$$

and all operations are modulo-2. After multiplexing, the code word becomes

$$v(D) = v^{(1)}(D^2) + D\, v^{(2)}(D^2).$$

The indeterminate D can be interpreted as a delay operator, the power of D denoting the number of time units a bit is delayed with respect to the initial bit.

Example 10.6: For the previous (2, 1, 3) convolutional code, the generator polynomials are $g^{(1)}(D) = 1 + D^2 + D^3$ and $g^{(2)}(D) = 1 + D + D^2 + D^3$. For the information sequence $u(D) = 1 + D^2 + D^3 + D^4$, the encoding equations are

$$v^{(1)}(D) = (1 + D^2 + D^3 + D^4)(1 + D^2 + D^3) = 1 + D^7,$$
$$v^{(2)}(D) = (1 + D^2 + D^3 + D^4)(1 + D + D^2 + D^3) = 1 + D + D^3 + D^4 + D^5 + D^7,$$

and the code word is

$$v(D) = v^{(1)}(D^2) + D\, v^{(2)}(D^2) = 1 + D + D^3 + D^7 + D^9 + D^{11} + D^{14} + D^{15}.$$

Note that the result is the same as previously computed using convolution and matrix multiplication.
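A sketch of the polynomial view (the helper name `pmul` is illustrative): multiply coefficient lists over GF(2) and interleave to reproduce Example 10.6.

```python
def pmul(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists (a[i] is the D^i term)."""
    v = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            v[i + j] ^= ai & bj
    return v

u  = [1, 0, 1, 1, 1]  # u(D) = 1 + D^2 + D^3 + D^4
g1 = [1, 0, 1, 1]     # g(1)(D) = 1 + D^2 + D^3
g2 = [1, 1, 1, 1]     # g(2)(D) = 1 + D + D^2 + D^3
v1, v2 = pmul(u, g1), pmul(u, g2)
print(v1)  # → [1, 0, 0, 0, 0, 0, 0, 1]   i.e. 1 + D^7
print(v2)  # → [1, 1, 0, 1, 1, 1, 0, 1]   i.e. 1 + D + D^3 + D^4 + D^5 + D^7
# Interleave: v(D) = v(1)(D^2) + D v(2)(D^2)
v = [0] * 16
for i, c in enumerate(v1): v[2 * i] ^= c
for i, c in enumerate(v2): v[2 * i + 1] ^= c
print([i for i, c in enumerate(v) if c])  # → [0, 1, 3, 7, 9, 11, 14, 15]
```

The final list of exponents is exactly the code word $1 + D + D^3 + D^7 + D^9 + D^{11} + D^{14} + D^{15}$.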

The generator polynomials of an encoder can be determined directly from its circuit diagram. Since each shift register stage represents a one-time-unit delay, the sequence of connections (a 1 representing a connection and a 0 no connection) from a shift register to an output is the sequence of coefficients in the corresponding generator polynomial. Since the last stage of the shift register in an (n, 1) code must be connected to at least one output, at least one of the generator polynomials must have degree equal to the shift register length m, that is,

$$m = \max_{1 \le j \le n} \deg g^{(j)}(D).$$

In an (n, k) code where k > 1, there are n generator polynomials for each of the k inputs, where $g_i^{(j)}(D)$ is the generator polynomial relating the ith input to the jth output. Each set of n generators represents the connections from one of the shift registers to the n outputs. The length Ki of the ith shift register is given by

$$K_i = \max_{1 \le j \le n} \deg g_i^{(j)}(D), \qquad 1 \le i \le k,$$

and the encoder memory order m is

$$m = \max_{1 \le i \le k} K_i = \max_{\substack{1 \le i \le k \\ 1 \le j \le n}} \deg g_i^{(j)}(D).$$

Since the encoder is a linear system, the generator polynomial $g_i^{(j)}(D)$ can be interpreted as the encoder transfer function relating input i to output j, where $u^{(i)}(D)$ is the ith input sequence and $v^{(j)}(D)$ is the jth output sequence. As with any k-input, n-output linear system, there are a total of k·n transfer functions. These can be represented by the k × n transfer function matrix

$$G(D) = \begin{bmatrix} g_1^{(1)}(D) & g_1^{(2)}(D) & \cdots & g_1^{(n)}(D) \\ g_2^{(1)}(D) & g_2^{(2)}(D) & \cdots & g_2^{(n)}(D) \\ \vdots & \vdots & & \vdots \\ g_k^{(1)}(D) & g_k^{(2)}(D) & \cdots & g_k^{(n)}(D) \end{bmatrix}.$$

Using the transfer function matrix, the encoding equations for an (n, k, m) code can be expressed as

$$\mathbf{V}(D) = \mathbf{U}(D)\, G(D),$$

where $\mathbf{U}(D) \triangleq [u^{(1)}(D), u^{(2)}(D), \ldots, u^{(k)}(D)]$ is the k-tuple of input sequences and $\mathbf{V}(D) \triangleq [v^{(1)}(D), v^{(2)}(D), \ldots, v^{(n)}(D)]$ is the n-tuple of output sequences. After multiplexing, the code word becomes

$$v(D) = v^{(1)}(D^n) + D\, v^{(2)}(D^n) + \cdots + D^{n-1}\, v^{(n)}(D^n).$$

Example 10.7: For the previous (3, 2, 1) convolutional code,

$$G(D) = \begin{bmatrix} 1 + D & D & 1 + D \\ D & 1 & 1 \end{bmatrix}.$$

For the input sequences $u^{(1)}(D) = 1 + D^2$ and $u^{(2)}(D) = 1 + D$, the encoding equations are

$$\mathbf{V}(D) = [v^{(1)}(D), v^{(2)}(D), v^{(3)}(D)] = [1 + D^2,\ 1 + D] \begin{bmatrix} 1 + D & D & 1 + D \\ D & 1 & 1 \end{bmatrix} = [1 + D^3,\ 1 + D^3,\ D^2 + D^3],$$

and the code word is

$$v(D) = (1 + D^9) + (1 + D^9) D + (D^6 + D^9) D^2 = 1 + D + D^8 + D^9 + D^{10} + D^{11}.$$

We can also find a means of representing the code word v(D) directly in terms of the input sequences. A little algebraic manipulation yields

$$v(D) = \sum_{i=1}^{k} u^{(i)}(D^n)\, \mathbf{g}_i(D),$$

where

$$\mathbf{g}_i(D) \triangleq g_i^{(1)}(D^n) + D\, g_i^{(2)}(D^n) + \cdots + D^{n-1}\, g_i^{(n)}(D^n), \qquad 1 \le i \le k,$$

is a composite generator polynomial relating the ith input sequence to v(D).

Example 10.8: For the previous (2, 1, 3) convolutional code, the composite generator polynomial is

$$g(D) = g^{(1)}(D^2) + D\, g^{(2)}(D^2) = 1 + D + D^3 + D^4 + D^5 + D^6 + D^7,$$

and for $u(D) = 1 + D^2 + D^3 + D^4$, the code word is

$$v(D) = u(D^2)\, g(D) = (1 + D^4 + D^6 + D^8)(1 + D + D^3 + D^4 + D^5 + D^6 + D^7) = 1 + D + D^3 + D^7 + D^9 + D^{11} + D^{14} + D^{15},$$

again agreeing with previous calculations.
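The composite-generator computation of Example 10.8 can be checked with a few GF(2) polynomial helpers (names are illustrative):

```python
def pmul(a, b):
    """GF(2) polynomial product of coefficient lists."""
    v = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            v[i + j] ^= ai & bj
    return v

def expand(p, n):
    """Substitute D -> D^n in a coefficient list."""
    v = [0] * ((len(p) - 1) * n + 1)
    for i, c in enumerate(p):
        v[i * n] = c
    return v

def padd(a, b):
    """GF(2) polynomial sum (XOR of coefficients)."""
    L = max(len(a), len(b))
    a, b = a + [0] * (L - len(a)), b + [0] * (L - len(b))
    return [x ^ y for x, y in zip(a, b)]

g1, g2 = [1, 0, 1, 1], [1, 1, 1, 1]
g = padd(expand(g1, 2), [0] + expand(g2, 2))  # g(D) = g(1)(D^2) + D g(2)(D^2)
print([i for i, c in enumerate(g) if c])       # → [0, 1, 3, 4, 5, 6, 7]
u = [1, 0, 1, 1, 1]                            # u(D) = 1 + D^2 + D^3 + D^4
v = pmul(expand(u, 2), g)                      # v(D) = u(D^2) g(D)
print([i for i, c in enumerate(v) if c])       # → [0, 1, 3, 7, 9, 11, 14, 15]
```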

**Structural Properties of Convolutional Codes**

Since a convolutional encoder is a sequential circuit, its operation can be described by a state diagram. The state of the encoder is defined as its shift register contents. For an (n, k, m) code with k > 1, the ith shift register contains Ki previous information bits. Define $K \triangleq \sum_{i=1}^{k} K_i$ as the total encoder memory. The encoder state at time unit l is the binary K-tuple of inputs

$$(u_{l-1}^{(1)} u_{l-2}^{(1)} \cdots u_{l-K_1}^{(1)},\ u_{l-1}^{(2)} u_{l-2}^{(2)} \cdots u_{l-K_2}^{(2)},\ \ldots,\ u_{l-1}^{(k)} u_{l-2}^{(k)} \cdots u_{l-K_k}^{(k)}),$$

and there are a total of $2^K$ different possible states.

For an (n, 1, m) code, K = K1 = m and the encoder state at time unit l is simply $(u_{l-1} u_{l-2} \cdots u_{l-m})$. Each new block of k inputs causes a transition to a new state. There are $2^k$ branches leaving each state, one corresponding to each different input block. Note that for an (n, 1, m) code, there are only two branches leaving each state. Each branch is labeled with the k inputs causing the transition $(u_l^{(1)} \cdots u_l^{(k)})$ and the n corresponding outputs $(v_l^{(1)} \cdots v_l^{(n)})$. The states are labeled $S_0, S_1, \ldots, S_{2^K - 1}$, where by convention $S_i$ represents the state whose binary K-tuple representation $b_0, b_1, \ldots, b_{K-1}$ is equivalent to the integer

$$i = b_0 2^0 + b_1 2^1 + \cdots + b_{K-1} 2^{K-1}.$$

Assuming that the encoder is initially in state S0 (the all-zero state), the code word corresponding to any given information sequence can be obtained by following the path through the state diagram and noting the corresponding outputs on the branch labels. Following the last nonzero information block, the encoder is returned to state S0 by a sequence of m all-zero blocks appended to the information sequence.

[Figure: encoder state diagram of the (2, 1, 3) code.] If u = (1 1 1 0 1), the code word v = (11, 10, 01, 01, 11, 10, 11, 11).

[Figure: encoder state diagram of the (3, 2, 1) code.]

The state diagram can be modified to provide a complete description of the Hamming weights of all nonzero code words (i.e., a weight distribution function for the code). State S0 is split into an initial state and a final state, the self-loop around state S0 is deleted, and each branch is labeled with a branch gain $X^i$, where i is the weight of the n encoded bits on that branch. Each path connecting the initial state to the final state represents a nonzero code word that diverges from and remerges with state S0 exactly once. The path gain is the product of the branch gains along a path, and the weight of the associated code word is the power of X in the path gain.

[Figure: modified encoder state diagram of the (2, 1, 3) code.] The path representing the state sequence S0 S1 S3 S7 S6 S5 S2 S4 S0 has path gain X²·X¹·X¹·X¹·X²·X¹·X²·X² = X¹².

[Figure: modified encoder state diagram of the (3, 2, 1) code.] The path representing the state sequence S0 S1 S3 S2 S0 has path gain X²·X¹·X⁰·X¹ = X⁴.

The weight distribution function of a code can be determined by considering the modified state diagram as a signal flow graph and applying Mason's gain formula to compute its "generating function"

$$T(X) = \sum_i A_i X^i,$$

where $A_i$ is the number of code words of weight i. In a signal flow graph, a path connecting the initial state to the final state which does not go through any state twice is called a forward path. A closed path starting at any state and returning to that state without going through any other state twice is called a loop.

Let $C_i$ be the gain of the ith loop. A set of loops is nontouching if no state belongs to more than one loop in the set. Let {i} be the set of all loops, {i′, j′} the set of all pairs of nontouching loops, {i″, j″, l″} the set of all triples of nontouching loops, and so on. Define

$$\Delta = 1 - \sum_i C_i + \sum_{i', j'} C_{i'} C_{j'} - \sum_{i'', j'', l''} C_{i''} C_{j''} C_{l''} + \cdots,$$

where $\sum_i C_i$ is the sum of the loop gains, $\sum_{i',j'} C_{i'} C_{j'}$ is the product of the loop gains of two nontouching loops summed over all pairs of nontouching loops, $\sum_{i'',j'',l''} C_{i''} C_{j''} C_{l''}$ is the product of the loop gains of three nontouching loops summed over all triples of nontouching loops, and so on.

And $\Delta_i$ is defined exactly like Δ, but only for that portion of the graph not touching the ith forward path; that is, all states along the ith forward path, together with all branches connected to these states, are removed from the graph when computing $\Delta_i$. Mason's formula for computing the generating function T(X) of a graph can now be stated as

$$T(X) = \frac{\sum_i F_i \Delta_i}{\Delta},$$

where the sum in the numerator is over all forward paths and $F_i$ is the gain of the ith forward path.

Example (2, 1, 3) code: There are 11 loops in the modified encoder state diagram:

Loop 1: S1 S3 S7 S6 S5 S2 S4 S1  (C1 = X⁸)
Loop 2: S1 S3 S7 S6 S4 S1  (C2 = X³)
Loop 3: S1 S3 S6 S5 S2 S4 S1  (C3 = X⁷)
Loop 4: S1 S3 S6 S4 S1  (C4 = X²)
Loop 5: S1 S2 S5 S3 S7 S6 S4 S1  (C5 = X⁴)
Loop 6: S1 S2 S5 S3 S6 S4 S1  (C6 = X³)
Loop 7: S1 S2 S4 S1  (C7 = X³)
Loop 8: S2 S5 S2  (C8 = X)
Loop 9: S3 S7 S6 S5 S3  (C9 = X⁵)
Loop 10: S3 S6 S5 S3  (C10 = X⁴)
Loop 11: S7 S7  (C11 = X)

Example (2, 1, 3) code (cont.): There are 10 pairs of nontouching loops:

Loop pair 1: (loop 2, loop 8)  (C2 C8 = X⁴)
Loop pair 2: (loop 3, loop 11)  (C3 C11 = X⁸)
Loop pair 3: (loop 4, loop 8)  (C4 C8 = X³)
Loop pair 4: (loop 4, loop 11)  (C4 C11 = X³)
Loop pair 5: (loop 6, loop 11)  (C6 C11 = X⁴)
Loop pair 6: (loop 7, loop 9)  (C7 C9 = X⁸)
Loop pair 7: (loop 7, loop 10)  (C7 C10 = X⁷)
Loop pair 8: (loop 7, loop 11)  (C7 C11 = X⁴)
Loop pair 9: (loop 8, loop 11)  (C8 C11 = X²)
Loop pair 10: (loop 10, loop 11)  (C10 C11 = X⁵)

Example (2, 1, 3) code (cont.): There are two triples of nontouching loops:

Loop triple 1: (loop 4, loop 8, loop 11)  (C4 C8 C11 = X⁴)
Loop triple 2: (loop 7, loop 10, loop 11)  (C7 C10 C11 = X⁸)

There are no other sets of nontouching loops. Therefore,

$$\Delta = 1 - (X^8 + X^3 + X^7 + X^2 + X^4 + X^3 + X^3 + X + X^5 + X^4 + X) + (X^4 + X^8 + X^3 + X^3 + X^4 + X^8 + X^7 + X^4 + X^2 + X^5) - (X^4 + X^8) = 1 - 2X - X^3.$$

Example (2, 1, 3) code (cont.): There are seven forward paths in this state diagram:

Forward path 1: S0 S1 S3 S7 S6 S5 S2 S4 S0  (F1 = X¹²)
Forward path 2: S0 S1 S3 S7 S6 S4 S0  (F2 = X⁷)
Forward path 3: S0 S1 S3 S6 S5 S2 S4 S0  (F3 = X¹¹)
Forward path 4: S0 S1 S3 S6 S4 S0  (F4 = X⁶)
Forward path 5: S0 S1 S2 S5 S3 S7 S6 S4 S0  (F5 = X⁸)
Forward path 6: S0 S1 S2 S5 S3 S6 S4 S0  (F6 = X⁷)
Forward path 7: S0 S1 S2 S4 S0  (F7 = X⁷)

**Structural Properties of Convolutional Codes**

Example (2,1,3) Code (cont.): Forward paths 1 and 5 touch all states in the graph, and hence the subgraph not touching these paths contains no states. Therefore,

∆1 = ∆5 = 1.

The subgraph not touching forward paths 3 and 6 contains only the self-loop around S7, so

∆3 = ∆6 = 1 − X.

**Structural Properties of Convolutional Codes**

Example (2,1,3) Code (cont.): The subgraph not touching forward path 2:

∆2 = 1 − X

The subgraph not touching forward path 4:

∆4 = 1 − (X + X) + (X^2) = 1 − 2X + X^2

**Structural Properties of Convolutional Codes**

Example (2,1,3) Code (cont.): The subgraph not touching forward path 7:

∆7 = 1 − (X + X^4 + X^5) + (X^5) = 1 − X − X^4

**Structural Properties of Convolutional Codes**

Example (2,1,3) Code (cont.): The generating function for this graph is then given by

T(X) = [X^12·1 + X^7(1 − X) + X^11(1 − X) + X^6(1 − 2X + X^2) + X^8·1 + X^7(1 − X) + X^7(1 − X − X^4)] / (1 − 2X − X^3)
     = (X^6 + X^7 − X^8) / (1 − 2X − X^3)
     = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^10 + ···

T(X) provides a complete description of the weight distribution of all nonzero code words that diverge from and remerge with state S0 exactly once. In this case, there is one such code word of weight 6, three of weight 7, five of weight 8, and so on.
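As a sanity check, the series coefficients of T(X) can be reproduced mechanically from its rational form: matching coefficients in (1 − 2X − X^3)T(X) = X^6 + X^7 − X^8 gives the recurrence a_d = 2a_{d−1} + a_{d−3} plus the numerator coefficient of X^d. A minimal sketch:

```python
# Expand T(X) = (X^6 + X^7 - X^8) / (1 - 2X - X^3) as a power series using
# the coefficient recurrence a_d = 2*a_{d-1} + a_{d-3} + (numerator coeff of X^d).
num = {6: 1, 7: 1, 8: -1}   # numerator coefficients
a = {}                      # series coefficients a_d
for d in range(0, 11):
    a[d] = num.get(d, 0) + 2 * a.get(d - 1, 0) + a.get(d - 3, 0)

print([a[d] for d in range(6, 11)])   # → [1, 3, 5, 11, 25]
```

The output reproduces the weight spectrum quoted above: one code word of weight 6, three of weight 7, five of weight 8, eleven of weight 9, twenty-five of weight 10.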

**Structural Properties of Convolutional Codes**

Example (3,2,1) Code (cont.): There are eight loops, six pairs of nontouching loops, and one triple of nontouching loops in the graph of the modified encoder state diagram of the (3, 2, 1) code, and

∆ = 1 − (X^2 + X^4 + X^3 + X + X^2 + X + X^2 + X^3)
      + (X^2 + X^3 + X^4 + X^4 + X^5 + X^6)
      − X^6
  = 1 − 2X − 2X^2 − X^3 + X^4 + X^5.

**Structural Properties of Convolutional Codes**

Example (3,2,1) Code (cont.): There are 15 forward paths in this graph, and

Σi Fi∆i = X^5(1 − X − X^2 − X^3 + X^5) + X^4(1 − X^2) + X^6·1 + X^5(1 − X^3)
        + X^4·1 + X^3(1 − X − X^2) + X^6(1 − X^2) + X^6·1 + X^5(1 − X)
        + X^8·1 + X^4(1 − X − X^2 − X^3 + X^4) + X^7(1 − X^3) + X^6·1
        + X^3(1 − X) + X^6·1
      = 2X^3 + X^4 + X^5 + X^6 − X^7 − X^8.

Hence, the generating function is

T(X) = (2X^3 + X^4 + X^5 + X^6 − X^7 − X^8) / (1 − 2X − 2X^2 − X^3 + X^4 + X^5)
     = 2X^3 + 5X^4 + 15X^5 + ···

This code contains two nonzero code words of weight 3, five of weight 4, fifteen of weight 5, and so on.

**Structural Properties of Convolutional Codes**

Additional information about the structure of a code can be obtained using the same procedure. If the modified state diagram is augmented by labeling each branch corresponding to a nonzero information block with Y^j, where j is the weight of the k information bits on the branch, and by labeling every branch with Z, the generating function is given by

T(X, Y, Z) = Σ_{i,j,l} A_{i,j,l} X^i Y^j Z^l.

The coefficient A_{i,j,l} denotes the number of code words with weight i whose associated information sequence has weight j and whose length is l branches.

**Structural Properties of Convolutional Codes**

The augmented state diagram for the (2, 1, 3) code.

**Structural Properties of Convolutional Codes**

Example (2,1,3) Code: For the graph of the augmented state diagram for the (2, 1, 3) code, we have:

∆ = 1 − (X^8Y^4Z^7 + X^3Y^3Z^5 + X^7Y^3Z^6 + X^2Y^2Z^4 + X^4Y^4Z^7 + X^3Y^3Z^6 + X^3YZ^3 + XYZ^2 + X^5Y^3Z^4 + X^4Y^2Z^3 + XYZ)
      + (X^4Y^4Z^7 + X^8Y^4Z^7 + X^3Y^3Z^6 + X^3Y^3Z^5 + X^4Y^4Z^7 + X^8Y^4Z^7 + X^7Y^3Z^6 + X^4Y^2Z^4 + X^2Y^2Z^3 + X^5Y^3Z^4)
      − (X^4Y^4Z^7 + X^8Y^4Z^7)
  = 1 − XY(Z + Z^2) − X^2Y^2(Z^4 − Z^3) − X^3YZ^3 − X^4Y^2(Z^3 − Z^4)

Note that setting Y = Z = 1 recovers ∆ = 1 − 2X − X^3.

**Structural Properties of Convolutional Codes**

Example (2,1,3) Code (cont.):

Σi Fi∆i = X^12Y^4Z^8·1 + X^7Y^3Z^6(1 − XYZ^2) + X^11Y^3Z^7(1 − XYZ)
        + X^6Y^2Z^5(1 − XY(Z + Z^2) + X^2Y^2Z^3) + X^8Y^4Z^8·1
        + X^7Y^3Z^7(1 − XYZ) + X^7YZ^4(1 − XYZ − X^4Y^2Z^3)
      = X^6Y^2Z^5 + X^7YZ^4 − X^8Y^2Z^5

Hence, the generating function is

T(X, Y, Z) = (X^6Y^2Z^5 + X^7YZ^4 − X^8Y^2Z^5) / ∆
           = X^6Y^2Z^5 + X^7(YZ^4 + Y^3Z^6 + Y^3Z^7) + X^8(Y^2Z^6 + Y^4Z^7 + Y^4Z^8 + 2Y^4Z^9) + ···

**Structural Properties of Convolutional Codes**

Example (2,1,3) Code (cont.): This implies that the code word of weight 6 has length 5 branches and an information sequence of weight 2. One code word of weight 7 has length 4 branches and information sequence weight 1, another has length 6 branches and information sequence weight 3, and the third has length 7 branches and information sequence weight 3; and so on.

**The Transfer Function of a Convolutional Code**

The state diagram can be used to obtain the distance properties of a convolutional code. Without loss of generality, we assume that the all-zero code sequence is the input to the encoder.

**The Transfer Function of a Convolutional Code**

First, we label the branches of the state diagram as either D^0 = 1, D^1, D^2, or D^3, where the exponent of D denotes the Hamming distance between the sequence of output bits corresponding to each branch and the sequence of output bits corresponding to the all-zero branch. The self-loop at node a can be eliminated, since it contributes nothing to the distance properties of a code sequence relative to the all-zero code sequence. Furthermore, node a is split into two nodes, one of which represents the input and the other the output of the state diagram.

**The Transfer Function of a Convolutional Code**

Using the modified state diagram, we can obtain four state equations:

Xc = D^3 Xa + D Xb
Xb = D Xc + D Xd
Xd = D^2 Xc + D^2 Xd
Xe = D^2 Xb

The transfer function for the code is defined as T(D) = Xe/Xa. By solving the state equations, we obtain:

T(D) = D^6 / (1 − 2D^2) = D^6 + 2D^8 + 4D^10 + 8D^12 + ··· = Σ_{d=6}^∞ a_d D^d

where a_d = 2^((d−6)/2) for even d, and a_d = 0 for odd d.
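The closed form a_d = 2^((d−6)/2) can be checked the same way: matching coefficients in (1 − 2D^2)T(D) = D^6 gives the recurrence a_d = 2a_{d−2}, seeded by the numerator term D^6. A small sketch:

```python
# Check T(D) = D^6 / (1 - 2D^2): the coefficient recurrence is a_d = 2*a_{d-2},
# plus 1 at d = 6 from the numerator, so a_d = 2^((d-6)/2) for even d >= 6.
a = {}
for d in range(0, 14):
    a[d] = (1 if d == 6 else 0) + 2 * a.get(d - 2, 0)

for d in range(6, 14, 2):
    assert a[d] == 2 ** ((d - 6) // 2)   # matches the stated closed form
print([a[d] for d in range(6, 14)])      # → [1, 0, 2, 0, 4, 0, 8, 0]
```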

**The Transfer Function of a Convolutional Code**

The transfer function can be used to provide more detailed information than just the distance of the various paths. Suppose we introduce a factor N into all branch transitions caused by the input bit 1. Furthermore, we introduce a factor of J into each branch of the state diagram, so that the exponent of J serves as a counting variable indicating the number of branches in any given path from node a to node e.

**The Transfer Function of a Convolutional Code**

The state equations for the state diagram are:

Xc = JND^3 Xa + JND Xb
Xb = JD Xc + JD Xd
Xd = JND^2 Xc + JND^2 Xd
Xe = JD^2 Xb

Upon solving these equations for the ratio Xe/Xa, we obtain the transfer function:

T(D, N, J) = J^3ND^6 / (1 − JND^2(1 + J))
           = J^3ND^6 + J^4N^2D^8 + J^5N^2D^8 + J^5N^3D^10 + 2J^6N^3D^10 + J^7N^3D^10 + ···
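The expansion of T(D, N, J) is a geometric series in JND^2(1 + J); it can be verified by multiplying out exponent triples (d, n, j). The representation below is an illustrative sketch, not library code:

```python
# Verify T(D, N, J) = J^3 N D^6 / (1 - J N D^2 (1 + J)) term by term.
# Polynomials are Counters keyed by exponent triples (d, n, j).
from collections import Counter

def poly_mul(p, q):
    out = Counter()
    for (d1, n1, j1), c1 in p.items():
        for (d2, n2, j2), c2 in q.items():
            out[(d1 + d2, n1 + n2, j1 + j2)] += c1 * c2
    return out

x = Counter({(2, 1, 1): 1, (2, 1, 2): 1})   # J N D^2 (1 + J)
term = Counter({(6, 1, 3): 1})              # leading factor J^3 N D^6
series = Counter()
for _ in range(3):                          # k = 0, 1, 2 covers all terms up to D^10
    series = series + term
    term = poly_mul(term, x)

# e.g. the entry (10, 3, 6): 2 is the 2 J^6 N^3 D^10 term of the expansion
print(sorted(series.items()))
```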

**The Transfer Function of a Convolutional Code**

The exponent of the factor J indicates the length of the path that merges with the all-zero path for the first time. The exponent of the factor N indicates the number of 1s in the information sequence for that path. The exponent of D indicates the distance of the sequence of encoded bits for that path from the all-zero sequence.

Reference: John G. Proakis, "Digital Communications," Fourth Edition, McGraw-Hill, 2001, pp. 477-482.

**Structural Properties of Convolutional Codes**

An important subclass of convolutional codes is the class of systematic codes. In a systematic code, the first k output sequences are exact replicas of the k input sequences, i.e.,

v^(i) = u^(i),  i = 1, 2, ..., k,

and the generator sequences satisfy

g_i^(j) = 1 if j = i, and g_i^(j) = 0 if j ≠ i, for i = 1, 2, ..., k.

**Structural Properties of Convolutional Codes**

The generator matrix is given by

    [ I P0   0 P1   0 P2   ...   0 Pm                   ]
G = [        I P0   0 P1   ...   0 Pm−1   0 Pm          ]
    [               I P0   ...   0 Pm−2   0 Pm−1   0 Pm ]
    [                      ...                          ]

where I is the k × k identity matrix, 0 is the k × k all-zero matrix, and Pl is the k × (n − k) matrix

     [ g_{1,l}^(k+1)   g_{1,l}^(k+2)   ...   g_{1,l}^(n) ]
Pl = [ g_{2,l}^(k+1)   g_{2,l}^(k+2)   ...   g_{2,l}^(n) ]
     [      ...                                          ]
     [ g_{k,l}^(k+1)   g_{k,l}^(k+2)   ...   g_{k,l}^(n) ]

**Structural Properties of Convolutional Codes**

And the transfer function matrix becomes

       [ 1 0 ... 0   g_1^(k+1)(D) ... g_1^(n)(D) ]
G(D) = [ 0 1 ... 0   g_2^(k+1)(D) ... g_2^(n)(D) ]
       [     ...                                 ]
       [ 0 0 ... 1   g_k^(k+1)(D) ... g_k^(n)(D) ]

Since the first k output sequences equal the input sequences, they are called information sequences, and the last n − k output sequences are called parity sequences. Note that whereas in general k·n generator sequences must be specified to define an (n, k, m) convolutional code, only k·(n − k) sequences must be specified to define a systematic code. Systematic codes represent a subclass of the set of all possible codes.

**Structural Properties of Convolutional Codes**

Example (2,1,3) Code: Consider the (2, 1, 3) systematic code. The generator sequences are g(1) = (1 0 0 0) and g(2) = (1 1 0 1). The generator matrix is

    [ 1 1   0 1   0 0   0 1                 ]
G = [       1 1   0 1   0 0   0 1           ]
    [             1 1   0 1   0 0   0 1     ]
    [                   ...                 ]

**Structural Properties of Convolutional Codes**

Example (2,1,3) Code (cont.): The transfer function matrix is

G(D) = [1   1 + D + D^3].

For the input sequence u(D) = 1 + D^2 + D^3, the information sequence is

v^(1)(D) = u(D)g^(1)(D) = (1 + D^2 + D^3)·1 = 1 + D^2 + D^3

and the parity sequence is

v^(2)(D) = u(D)g^(2)(D) = (1 + D^2 + D^3)(1 + D + D^3) = 1 + D + D^2 + D^3 + D^4 + D^5 + D^6
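The parity computation above is an ordinary GF(2) polynomial product, which is easy to verify in a few lines; representing a polynomial as the set of exponents with nonzero coefficients is an illustrative choice:

```python
def gf2_mul(p, q):
    """Multiply two GF(2) polynomials, each given as the set of exponents
    whose coefficient is 1; XOR implements the mod-2 addition of coefficients."""
    out = set()
    for i in p:
        for j in q:
            out ^= {i + j}
    return out

u = {0, 2, 3}           # u(D)    = 1 + D^2 + D^3
g2 = {0, 1, 3}          # g(2)(D) = 1 + D + D^3
print(sorted(gf2_mul(u, g2)))   # → [0, 1, 2, 3, 4, 5, 6]
```

The result is 1 + D + D^2 + D^3 + D^4 + D^5 + D^6, matching the parity sequence above.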

**Structural Properties of Convolutional Codes**

One advantage of systematic codes is that encoding is somewhat simpler than for nonsystematic codes because less hardware is required. The (2, 1, 3) systematic code requires only one modulo-2 adder with three inputs. For an (n, k, m) systematic code with k > n − k, there exists a modified encoding circuit which normally requires fewer than K shift register stages.

**Structural Properties of Convolutional Codes**

Example (3,2,2) Systematic Code: Consider the (3, 2, 2) systematic code with transfer function matrix

G(D) = [ 1   0   1 + D + D^2 ]
       [ 0   1   1 + D^2     ]

The straightforward realization of the encoder requires a total of K = K1 + K2 = 4 shift register stages.

**Structural Properties of Convolutional Codes**

Example (3,2,2) Systematic Code (cont.): Since the information sequences are given by v^(1)(D) = u^(1)(D) and v^(2)(D) = u^(2)(D), and the parity sequence is given by

v^(3)(D) = u^(1)(D)g_1^(3)(D) + u^(2)(D)g_2^(3)(D),

the (3, 2, 2) systematic encoder can be realized with only two stages of encoder memory rather than 4.

**Structural Properties of Convolutional Codes**

In most cases the straightforward realization requiring K stages of shift register memory is most efficient. In the case of an (n, k, m) systematic code with k > n − k, however, a simpler realization usually exists. A complete discussion of the minimal encoder memory required to realize a convolutional code is given by Forney. Another advantage of systematic codes is that no inverting circuit is needed for recovering the information sequence from the code word.

**Structural Properties of Convolutional Codes**

Nonsystematic codes, on the other hand, require an inverter to recover the information sequence; that is, an n × k matrix G^(−1)(D) must exist such that

G(D)G^(−1)(D) = I D^l

for some l ≥ 0, where I is the k × k identity matrix. Since V(D) = U(D)G(D), we can obtain

V(D)G^(−1)(D) = U(D)G(D)G^(−1)(D) = U(D)D^l,

and the information sequence can be recovered with an l-time-unit delay from the code word by letting V(D) be the input to the n-input, k-output linear sequential circuit whose transfer function matrix is G^(−1)(D).

**Structural Properties of Convolutional Codes**

For an (n, 1, m) code, a transfer function matrix G(D) has a feedforward inverse G^(−1)(D) of delay l if and only if

GCD[g^(1)(D), g^(2)(D), ..., g^(n)(D)] = D^l

for some l ≥ 0, where GCD denotes the greatest common divisor. For an (n, k, m) code with k > 1, let ∆i(D), i = 1, 2, ..., (n choose k), be the determinants of the (n choose k) distinct k × k submatrices of the transfer function matrix G(D). A feedforward inverse of delay l exists if and only if

GCD[∆i(D) : i = 1, 2, ..., (n choose k)] = D^l

for some l ≥ 0.

**Structural Properties of Convolutional Codes**

Example (2,1,3) Code: For the (2, 1, 3) code,

GCD[1 + D^2 + D^3, 1 + D + D^2 + D^3] = 1 = D^0,

and the transfer function matrix

G^(−1)(D) = [ 1 + D + D^2 ]
            [ D + D^2     ]

provides the required inverse of delay 0 [i.e., G(D)G^(−1)(D) = 1]. The implementation of the inverse is shown below.
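The inverse can be confirmed by checking g^(1)(D)(1 + D + D^2) + g^(2)(D)(D + D^2) = 1 over GF(2). A short sketch, again representing polynomials as sets of exponents:

```python
def gf2_mul(p, q):
    """GF(2) polynomial product; polynomials are sets of exponents."""
    out = set()
    for i in p:
        for j in q:
            out ^= {i + j}   # mod-2 coefficient addition
    return out

g1, g2 = {0, 2, 3}, {0, 1, 2, 3}   # 1 + D^2 + D^3 and 1 + D + D^2 + D^3
inv1, inv2 = {0, 1, 2}, {1, 2}     # the two entries of G^-1(D)
# GF(2) addition of the two products is the symmetric difference of the sets:
check = gf2_mul(g1, inv1) ^ gf2_mul(g2, inv2)
print(sorted(check))               # → [0], confirming G(D) G^-1(D) = 1 = D^0
```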

**Structural Properties of Convolutional Codes**

Example (3,2,1) Code: For the (3, 2, 1) code, the 2 × 2 submatrices of G(D) yield the determinants 1 + D + D^2, 1 + D^2, and 1. Since

GCD[1 + D + D^2, 1 + D^2, 1] = 1,

there exists a feedforward inverse of delay 0. The required transfer function matrix is given by:

G^(−1)(D) = [ 0   0     ]
            [ 1   1 + D ]
            [ 1   D     ]

**Structural Properties of Convolutional Codes**

To understand what happens when a feedforward inverse does not exist, it is best to consider an example. For the (2, 1, 2) code with g^(1)(D) = 1 + D and g^(2)(D) = 1 + D^2,

GCD[1 + D, 1 + D^2] = 1 + D,

and a feedforward inverse does not exist. If the information sequence is u(D) = 1/(1 + D) = 1 + D + D^2 + ···, the output sequences are v^(1)(D) = 1 and v^(2)(D) = 1 + D; that is, the code word contains only three nonzero bits even though the information sequence has infinite weight. If this code word is transmitted over a BSC, and the three nonzero bits are changed to zeros by the channel noise, the received sequence will be all zeros.
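The finite-weight output of this catastrophic encoder is easy to demonstrate by direct convolution: however long the (truncated) all-ones input is, the output weight never grows. A sketch; the helper below is hypothetical, not from the text:

```python
def conv_encode_bit(u, taps):
    """Convolve a binary sequence u with a GF(2) generator given by tap delays."""
    v = [0] * (len(u) + max(taps))
    for t, bit in enumerate(u):
        if bit:
            for d in taps:
                v[t + d] ^= 1
    return v

for L in (10, 100, 1000):
    u = [1] * L                      # truncation of u(D) = 1/(1+D) = 1 + D + D^2 + ...
    v1 = conv_encode_bit(u, [0, 1])  # g(1)(D) = 1 + D
    v2 = conv_encode_bit(u, [0, 2])  # g(2)(D) = 1 + D^2
    print(L, sum(v1) + sum(v2))      # total output weight stays 6 for every L
```

The input weight grows without bound while the output weight is bounded, which is exactly the catastrophic behavior described above (the extra nonzero bits beyond three come only from the truncation edge).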


**Structural Properties of Convolutional Codes**

An MLD will then produce the all-zero code word as its estimate, since this is a valid code word and it agrees exactly with the received sequence. The estimated information sequence will be û(D) = 0, implying an infinite number of decoding errors caused by a finite number (only three in this case) of channel errors. Clearly, this is a very undesirable circumstance; the code is said to be subject to catastrophic error propagation and is called a catastrophic code.

GCD[g^(1)(D), g^(2)(D), ..., g^(n)(D)] = D^l and GCD[∆i(D) : i = 1, 2, ..., (n choose k)] = D^l are necessary and sufficient conditions for a code to be noncatastrophic.



**Structural Properties of Convolutional Codes**

Any code for which a feedforward inverse exists is noncatastrophic. Another advantage of systematic codes is that they are always noncatastrophic. A code is catastrophic if and only if the state diagram contains a loop of zero weight other than the self-loop around the state S0. In the diagram below, note that the self-loop around the state S3 has zero weight.

State diagram of a (2, 1, 2) catastrophic code.


**Structural Properties of Convolutional Codes**

In choosing nonsystematic codes for use in a communication system, it is important to avoid the selection of catastrophic codes. Only a fraction 1/(2^n − 1) of (n, 1, m) nonsystematic codes are catastrophic. A similar result for (n, k, m) codes with k > 1 is still lacking.

Convolutional Decoder and Its Applications

that is. Viterbi introduced a decoding algorithm for convolutional codes which has since become known as Viterbi algorithm. 112 .Introduction In 1967. the decoder output selected is always the code word that gives the largest value of the log-likelihood function. Forney recognized that it was in fact a maximum likelihood decoding algorithm for convolutional codes. Later. Omura showed that the Viterbi algorithm was equivalent to finding the shortest path through a weighted graph.

**The Viterbi Algorithm**

In order to understand Viterbi's decoding algorithm, it is convenient to expand the state diagram of the encoder in time (i.e., to represent each time unit with a separate state diagram). The resulting structure is called a trellis diagram, and is shown in Figure (a) for the (3, 1, 2) code with

G(D) = [1 + D,  1 + D^2,  1 + D + D^2]

and an information sequence of length L = 5. The trellis diagram contains L + m + 1 time units or levels, and these are labeled from 0 to L + m in Figure (a). Assuming that the encoder always starts in state S0 and returns to state S0, the first m time units correspond to the encoder's departure from state S0, and the last m time units correspond to the encoder's return to state S0.

**The Viterbi Algorithm**

Figure (a): Trellis diagram for a (3, 1, 2) code with L = 5.

**The Viterbi Algorithm**

Not all states can be reached in the first m or the last m time units. However, in the center portion of the trellis, all states are possible, and each time unit contains a replica of the state diagram. There are two branches leaving and entering each state. The upper branch leaving each state at time unit i represents the input ui = 1, while the lower branch represents ui = 0. Each branch is labeled with the n corresponding outputs vi, and each of the 2^L code words of length N = n(L + m) is represented by a unique path through the trellis. For example, the code word corresponding to the information sequence u = (1 1 1 0 1) is shown highlighted in Figure (a).

**The Viterbi Algorithm**

In the general case of an (n, k, m) code and an information sequence of length kL, there are 2^k branches leaving and entering each state, and 2^(kL) distinct paths through the trellis corresponding to the 2^(kL) code words. Now assume that an information sequence u = (u0, ..., u_{L−1}) of length kL is encoded into a code word v = (v0, v1, ..., v_{L+m−1}) of length N = n(L + m), and that a sequence r = (r0, r1, ..., r_{L+m−1}) is received over a discrete memoryless channel (DMC). Alternatively, these sequences can be written as u = (u0, ..., u_{kL−1}), v = (v0, v1, ..., v_{N−1}), and r = (r0, r1, ..., r_{N−1}), where the subscripts now simply represent the ordering of the symbols in each sequence.

**The Viterbi Algorithm**

As a general rule of detection, the decoder must produce an estimate v̂ of the code word v based on the received sequence r. A maximum likelihood decoder (MLD) for a DMC chooses v̂ as the code word v which maximizes the log-likelihood function log P(r | v). Since for a DMC

P(r | v) = Π_{i=0}^{L+m−1} P(ri | vi) = Π_{i=0}^{N−1} P(ri | vi),

it follows that

log P(r | v) = Σ_{i=0}^{L+m−1} log P(ri | vi) = Σ_{i=0}^{N−1} log P(ri | vi)    -----(A)

where P(ri | vi) is a channel transition probability. This is a minimum error probability decoding rule when all code words are equally likely.

**The Viterbi Algorithm**

The log-likelihood function log P(r | v) is called the metric associated with the path v, and is denoted M(r | v). The terms log P(ri | vi) in the sum of Equation (A) are called branch metrics, and are denoted M(ri | vi); the corresponding terms for individual received bits are called bit metrics. The path metric M(r | v) can be written as

M(r | v) = Σ_{i=0}^{L+m−1} M(ri | vi) = Σ_{i=0}^{N−1} M(ri | vi).

A decision based on the log-likelihood function is called a soft decision. If the channel is an AWGN channel, soft-decision decoding leads to finding the path with minimum Euclidean distance.

**The Viterbi Algorithm**

A partial path metric for the first j branches of a path can now be expressed as

M([r | v]_j) = Σ_{i=0}^{j−1} M(ri | vi).

The following algorithm, when applied to the received sequence r from a DMC, finds the path through the trellis with the largest metric (i.e., the maximum likelihood path). The algorithm processes r in an iterative manner. At each step, it compares the metrics of all paths entering each state, and stores the path with the largest metric, called the survivor, together with its metric.

**The Viterbi Algorithm**

The Viterbi Algorithm:

Step 1. Beginning at time unit j = m, compute the partial metric for the single path entering each state. Store the path (the survivor) and its metric for each state.

Step 2. Increase j by 1. Compute the partial metric for all the paths entering a state by adding the branch metric entering that state to the metric of the connecting survivor at the preceding time unit. For each state, store the path with the largest metric (the survivor), together with its metric, and eliminate all other paths.

Step 3. If j < L + m, repeat Step 2. Otherwise, stop.

**The Viterbi Algorithm**

There are 2^K survivors from time unit m through time unit L, one for each of the 2^K states (K is the total number of registers). After time unit L there are fewer survivors, since there are fewer states while the encoder is returning to the all-zero state. Finally, at time unit L + m, there is only one state, the all-zero state, and hence only one survivor, and the algorithm terminates.

Theorem: The final survivor v̂ in the Viterbi algorithm is the maximum likelihood path; that is,

M(r | v̂) ≥ M(r | v),  for all v ≠ v̂.

**The Viterbi Algorithm**

Proof. Assume that the maximum likelihood path is eliminated by the algorithm at time unit j, as illustrated in the figure. This implies that the partial path metric of the survivor exceeds that of the maximum likelihood path at this point. Now, if the remaining portion of the maximum likelihood path is appended onto the survivor at time unit j, the total metric of this path will exceed the total metric of the maximum likelihood path.

**The Viterbi Algorithm**

But this contradicts the definition of the maximum likelihood path as the path with the largest metric. Hence, the maximum likelihood path cannot be eliminated by the algorithm, and it must be the final survivor. Therefore, the Viterbi algorithm is optimum in the sense that it always finds the maximum likelihood path through the trellis.

In the special case of a binary symmetric channel (BSC) with transition probability p < 1/2, the received sequence r is binary (Q = 2) and the log-likelihood function becomes (Eq 1.11):

log P(r | v) = d(r, v) log[p/(1 − p)] + N log(1 − p)

where d(r, v) is the Hamming distance between r and v.

**The Viterbi Algorithm**

Since log[p/(1 − p)] < 0 and N log(1 − p) is a constant for all v, an MLD for the BSC chooses v̂ as the code word v that minimizes the Hamming distance

d(r, v) = Σ_{i=0}^{L+m−1} d(ri, vi) = Σ_{i=0}^{N−1} d(ri, vi).

In applying the Viterbi algorithm to the BSC, d(ri, vi) becomes the branch metric (and, for individual bits, the bit metric), and the algorithm must find the path through the trellis with the smallest metric (i.e., the path closest to r in Hamming distance). The details of the algorithm are exactly the same, except that the Hamming distance replaces the log-likelihood function as the metric and the survivor at each state is the path with the smallest metric. This kind of decoding is called hard decision.
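The equivalence between maximizing the log-likelihood and minimizing Hamming distance on a BSC can be checked numerically; p and the test words below are arbitrary illustrative choices:

```python
import math

def log_likelihood(r, v, p):
    """log P(r | v) on a BSC with crossover probability p (hard decisions)."""
    d = sum(ri != vi for ri, vi in zip(r, v))   # Hamming distance d(r, v)
    return d * math.log(p / (1 - p)) + len(r) * math.log(1 - p)

p = 0.1
r = [1, 1, 0, 1, 0, 1]
v_near = [1, 1, 0, 0, 0, 1]   # Hamming distance 1 from r
v_far = [0, 1, 1, 0, 0, 1]    # Hamming distance 3 from r
print(log_likelihood(r, v_near, p) > log_likelihood(r, v_far, p))   # → True
```

Because log[p/(1 − p)] is negative for p < 1/2, the closer word always has the larger log-likelihood.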

**The Viterbi Algorithm**

Example 11.2: The application of the Viterbi algorithm to a BSC is shown in the following figure. Assume that a code word from the trellis diagram of the (3, 1, 2) code of Figure (a) is transmitted over a BSC and that the received sequence is

r = (1 1 0,  1 1 0,  1 1 0,  1 1 1,  0 1 0,  1 0 1,  1 0 1).

**The Viterbi Algorithm**

Example 11.2 (cont.):

The final survivor

v = (1 1 1, 0 1 0, 1 1 0, 0 1 1, 1 1 1, 1 0 1, 0 1 1)

is shown as the highlighted path in the figure, and the decoded information sequence is u = (1 1 0 0 1) . The fact that the final survivor has a metric of 7 means that no other path through the trellis differs from r in fewer than seven positions. Note that at some states neither path is crossed out. This indicates a tie in the metric values of the two paths entering that state. If the final survivor goes through any of these states, there is more than one maximum likelihood path (i.e., more than one path whose distance from r is minimum).
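A compact hard-decision Viterbi decoder for this (3, 1, 2) code reproduces the final metric of the example. This is a sketch under stated assumptions: the state is the last two input bits, and ties are broken arbitrarily, so at tied states a different minimum-distance path could be returned:

```python
# Hard-decision Viterbi decoding of the (3,1,2) code with
# G(D) = [1+D, 1+D^2, 1+D+D^2]; the metric is Hamming distance.
def encode_branch(u, s1, s2):
    # s1 = previous input bit, s2 = the one before it
    return (u ^ s1, u ^ s2, u ^ s1 ^ s2)

def viterbi_bsc(r, L):
    surv = {(0, 0): (0, [])}   # state -> (metric, decoded bits so far)
    for t, rt in enumerate(r):
        inputs = (0, 1) if t < L else (0,)   # m tail zeros drive the encoder to S0
        new = {}
        for (s1, s2), (metric, path) in surv.items():
            for u in inputs:
                branch = encode_branch(u, s1, s2)
                cand = metric + sum(a != b for a, b in zip(rt, branch))
                ns = (u, s1)                  # shift-register update
                if ns not in new or cand < new[ns][0]:
                    new[ns] = (cand, path + [u])
        surv = new
    metric, path = surv[(0, 0)]
    return metric, path[:L]                   # drop the m = 2 tail bits

r = [(1, 1, 0), (1, 1, 0), (1, 1, 0), (1, 1, 1), (0, 1, 0), (1, 0, 1), (1, 0, 1)]
metric, u_hat = viterbi_bsc(r, L=5)
print(metric, u_hat)   # final survivor metric is 7, as in the example
```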


**The Viterbi Algorithm**

Example 11.2 (cont.): From an implementation point of view, whenever a tie in metric values occurs, one path is arbitrarily selected as the survivor, because of the impracticality of storing a variable number of paths. This arbitrary resolution of ties has no effect on the decoding error probability.


**Punctured Convolutional Codes**

In some practical applications, there is a need to employ high-rate convolutional codes, e.g., rates of (n−1)/n. The trellis for such high-rate codes has 2^(n−1) branches entering each state. Consequently, there are 2^(n−1) metric computations per state that must be performed in implementing the Viterbi algorithm, and as many comparisons of the updated metrics in order to select the best path at each state. The computational complexity inherent in the implementation of the decoder of a high-rate convolutional code can be avoided by designing the high-rate code from a low-rate code in which some of the coded bits are deleted (punctured) from transmission. Puncturing a code, however, reduces the free distance.


**Punctured Convolutional Code**

The puncturing process may be described as periodically deleting selected bits from the output of the encoder, thus creating a periodically time-varying trellis code. We begin with a rate 1/n parent code and define a puncturing period P, corresponding to P input information bits to the encoder. In one period, the encoder outputs nP coded bits. Associated with the nP encoded bits is a puncturing matrix P of the form:

    [ p11  p12  ...  p1P ]
P = [ p21  p22  ...  p2P ]
    [ ...                ]
    [ pn1  pn2  ...  pnP ]

Each column of P corresponds to the n possible output bits from the encoder for each input bit, and each element of P is either 0 or 1. When pij = 1, the corresponding output bit from the encoder is transmitted; when pij = 0, it is deleted. The code rate is determined by the period P and the number of bits deleted. If we delete N bits out of nP, the code rate is P/(nP − N), where N may take any integer value in the range 0 to (n − 1)P − 1. The achievable code rates are:

Rc = P/(P + M),  M = 1, 2, ..., (n − 1)P.
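The bookkeeping can be sketched in a few lines: apply a puncturing matrix to a flat stream of rate-1/n encoder output bits and compute the resulting rate. The matrix below is an arbitrary illustrative choice with N = 5 zeros, not one of the optimized matrices from the literature:

```python
# Rate bookkeeping for puncturing a rate-1/3 (n = 3) parent code with period P = 3.
P_matrix = [      # rows: the n output streams; columns: position within the period
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 1],
]
n, P = len(P_matrix), len(P_matrix[0])
N = sum(row.count(0) for row in P_matrix)   # number of deleted bits per period
rate = P / (n * P - N)
print(rate)                                  # → 0.75, i.e., rate 3/4

def puncture(coded, pm):
    """Keep the bits of a flat encoder output stream (n bits per input bit)
    whose puncturing-matrix entry is 1."""
    n, P = len(pm), len(pm[0])
    return [b for k, b in enumerate(coded) if pm[k % n][(k // n) % P]]

# Feeding in bit positions 0..17 shows which positions survive puncturing:
print(puncture(list(range(18)), P_matrix))   # → [0, 1, 3, 8, 9, 10, 12, 17]
```

Six input bits produce 18 coded bits, of which 8 are transmitted, confirming the 6/8 = 3/4 rate.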

**Punctured Convolutional Code**

Example: Constructing a rate 3/4 code by puncturing the output of the rate 1/3, K = 3 encoder shown in the following figure. Out of every nP = 9 output bits, we delete N = 5 bits. There are many choices of P and M that achieve the desired rate; we may take the smallest value of P, namely P = 3.

Original circuit.

**Punctured Convolutional Code**

Example: Trellis of a rate 3/4 convolutional code obtained by puncturing the output of the rate 1/3, K = 3 encoder.

**Punctured Convolutional Code**

Some puncturing matrices are better than others in that the resulting trellis paths have better Hamming distance properties. A computer search is usually employed to find good puncturing matrices. Generally, high-rate codes with good distance properties are obtained by puncturing rate 1/2 maximum free distance codes. In general, the high-rate punctured convolutional codes generated in this manner have a free distance that is either equal to, or 1 less than, that of the best convolutional code of the same high rate obtained directly without puncturing.


**Punctured Convolutional Code**

The decoding of punctured convolutional codes is performed in the same manner as the decoding of the low-rate 1/n parent code, using the trellis of the 1/n code. When one or more bits in a branch are punctured, the corresponding branch metric increment is computed based on the non-punctured bits, thus, the punctured bits do not contribute to the branch metrics. Error events in a punctured code are generally longer than error events in the low rate 1/n parent code. Consequently, the decoder must wait longer than five constraint lengths before making final decisions on the received bits.
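The branch-metric rule for punctured positions can be sketched as follows (the helper and the placeholder convention are hypothetical, for illustration only):

```python
def branch_metric(received, branch_bits, mask):
    """Hamming branch metric for a punctured code: positions with mask == 0
    were never transmitted, so they are skipped (contribute nothing)."""
    return sum(r != b for r, b, m in zip(received, branch_bits, mask) if m)

# Trellis branch labeled (1, 0, 1); middle bit punctured; the depunctured
# received branch carries a placeholder (None) in the deleted position.
print(branch_metric((1, None, 0), (1, 0, 1), (1, 0, 1)))   # → 1
```

Only the first and third positions are compared; the punctured middle bit is ignored, exactly as described above.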


**Rate-compatible Punctured Convolutional Codes**

In the transmission of compressed digital speech signals and in some other applications, there is a need to transmit some groups of information bits with more redundancy than others. In other words, the different groups of information bits require unequal error protection to be provided in the transmission of the information sequence, where the more important bits are transmitted with more redundancy. Instead of using separate codes to encode the different groups of bits, it is desirable to use a single code that has variable redundancy. This can be accomplished by puncturing the same low rate 1/n convolutional code by different amounts as described by Hagenauer (1988).


The puncturing matrices are selected to satisfy a rate-compatibility criterion, where the basic requirement is that lower-rate codes (higher redundancy) should transmit the same coded bits as all higher-rate codes plus additional bits. The resulting codes obtained from a single rate 1/n convolutional code are called rate-compatible punctured convolutional (RCPC) codes. In applying RCPC codes to systems that require unequal error protection of the information sequence, we may format the groups of bits into a frame structure, and each frame is terminated after the last group of information bits by K − 1 zeros.


**Rate-compatible Punctured Convolutional Codes**

Example: Constructing an RCPC code from a rate 1/3, K = 4 maximum free distance convolutional code. Note that when a 1 appears in a puncturing matrix of a high-rate code, a 1 also appears in the same position for all lower-rate codes.

**Practical Considerations**

The minimum free distance dfree can be increased either by decreasing the code rate or by increasing the constraint length, or both. If hard-decision decoding is used, the performance (coding gain) is reduced by approximately 2 dB for the AWGN channel when compared with soft-decision decoding.

**Practical Considerations**

Two important issues in the implementation of Viterbi decoding are: (1) the effect of path memory truncation, and (2) the degree of quantization of the input signal to the Viterbi decoder. Truncating the path memory to about five constraint lengths has been found to result in negligible performance loss; this is a desirable feature, as it ensures a fixed decoding delay.

**Practical Considerations**

Bit error probability for rate 1/2 Viterbi decoding with eight-level quantized inputs to the decoder and 32-bit path memory.

**Practical Considerations**

Performance of rate 1/2 codes with hard-decision Viterbi decoding and 32-bit path memory truncation.

**Practical Considerations**

Performance of a rate 1/2, K = 5 code with eight-, four-, and two-level quantization at the input to the Viterbi decoder. Path truncation length = 32 bits.

**Practical Considerations**

Performance of a rate 1/2, K = 5 code with 32-, 16-, and 8-bit path memory truncation and eight- and two-level quantization.
