
Convolutional codes

So far we have considered block codes, in which the symbols belonging to different blocks are not related to one another. For a systematic binary block (n, k)-code, the sequence of symbols from the information source is divided into blocks of k bits; the encoder appends r = n − k check symbols to each block, and blocks of n symbols are then transmitted over the channel. Blocks are likewise decoded independently of one another. There is, however, another principle: continuous coding and decoding. Here the encoder receives a continuous sequence of information symbols from the source and emits a continuous sequence of code symbols, which is a function of the input symbols and of the encoder structure. The decoder of this type receives from the communication channel a continuous sequence of symbols (possibly corrupted by errors) and recovers the sequence of information symbols (possibly with errors, but usually fewer than in the channel). The most common class of continuous codes is the convolutional codes, for which the operation forming the output sequence from a given input sequence is linear. Convolutional codes were discovered by L. Fink and P. Elias; soon afterwards Wozencraft developed a method of sequential decoding of convolutional codes.

Pic. 10.7. Structures of convolutional encoders: with rate R = k/n (a), with rate R = 1/2 (b), and the trellis of the encoder with rate R = 1/2 (c)

In 1967 Viterbi discovered an algorithm for optimal decoding of convolutional codes.

The structure of a binary convolutional code with rate R = k/n is shown in Pic. 10.7a. The encoder consists of v memory units (a shift register) and n mod-2 adders; the inputs of each adder are connected to certain register cells, as determined by coefficients h_ij ∈ {0, 1}. The adder outputs are read out through the switch K and fed into the communication channel. Thus, at each step the next block of k information symbols enters the shift register, while the register releases the k symbols held in its rightmost memory cells; at the same time n output symbols are produced and read sequentially into the channel. So, if v_i is the symbol rate at the encoder input, then, to avoid a growing delay, the symbol rate over the communication channel must be no less than v_k = (n/k)·v_i, from which it follows that the ratio k/n indeed defines the rate of the convolutional code. The size v (the register length) is usually called the code constraint length; in some works, the constraint length is defined as the number of memory units of the shift register minus one. A more general picture of a convolutional encoder is possible: a scheme with k shift registers of constraint lengths v_i (i = 1, 2, ..., k), each receiving one information symbol per clock cycle. Pic. 10.7b shows a particular convolutional code with rate R = 1/2 and constraint length v = 3. For the all-zero information sequence, the output code sequence is also all-zero. As an example, consider the formation of the output sequence for the encoder shown in Pic. 10.7b:

Input:  0  1  0  0  0  1  1  0  0  0
Output: 00 11 10 11 00 11 01 01 11 00
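As an illustration, the encoder of Pic. 10.7b can be sketched in a few lines of Python (the function name and bit-packing convention are our own; the generator taps, octal 7 and 5, come from the text):

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder of Pic. 10.7b.

    g1 = 1 + D + D^2 (octal 7) feeds the upper mod-2 adder,
    g2 = 1 + D^2     (octal 5) feeds the lower one.
    """
    state = 0                                     # two memory cells, initially zero
    out = []
    for u in bits:
        reg = (u << 2) | state                    # current bit plus two stored bits
        out.append(bin(reg & g1).count("1") % 2)  # upper adder output
        out.append(bin(reg & g2).count("1") % 2)  # lower adder output
        state = reg >> 1                          # shift the register one position
    return out
```

For the input 0 1 0 0 0 1 1 0 0 0 this reproduces the output pairs 00 11 10 11 00 11 01 01 11 00 listed above.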

The encoder output sequence can be represented as the digital convolution of the input information sequence with the impulse response of the encoder (hence the name of the codes: convolutional). A convolutional code is characterized by the following parameters: the code rate R = k/n and the redundancy 1 − R, where k and n are the numbers of information and code symbols corresponding to one step of the encoder (for the encoder in Pic. 10.7b, R = 1/2); the constraint length v (the length of the encoder register); and the generator polynomials, whose coefficients describe the connections of the adders to the register cells (for the upper adder g(1) = 1 + D + D^2, for the lower adder g(2) = 1 + D^2). The polynomials are usually written compactly, with every three binary coefficients recorded as one octal digit [the code in Pic. 10.7b is denoted (7, 5)]. Besides these parameters, a convolutional code is characterized by its free distance d_f, by which is meant the Hamming distance between two half-infinite code sequences. If two identical information sequences are encoded by the encoder shown in Pic. 10.7b, the corresponding code sequences coincide. If at some point one information sequence contains the symbol 0 and the other the symbol 1, then from that point on the code sequences differ from one another regardless of the subsequent content of the information sequences.

The minimum Hamming distance between any two half-infinite code sequences whose information sequences first differ at some point is called the free distance d_f of the convolutional code. The free distance characterizes the noise immunity of convolutional codes, in the same way as the minimum distance d characterizes the noise immunity of block codes: it shows the least number of errors that must occur in the channel for one code sequence to pass into another without the errors being detected. For the code of our example, the free distance is d_f = 5. The search for good convolutional codes (those with the largest d_f for given R and v) is usually carried out by exhaustive computer search over the generator polynomials.

Convolutional codes are a special case (a linear realization) of trellis codes. One may also regard the trellis simply as another (often more convenient) way of representing ordinary convolutional codes. A trellis is an oriented graph with a periodically repeating structure: each section contains the same number of vertices (nodes), connected by edges. Between the encoding procedure of a convolutional code and the trellis there is a one-to-one correspondence, established by the following rules: each vertex (node) corresponds to an internal state of the encoder; each edge leaving a vertex corresponds to one possible source symbol (for a binary source, two edges leave each vertex: the upper for 0 and the lower for 1); above every edge are written the symbols transmitted into the channel when the encoder is in the state corresponding to the starting vertex of that edge and the source issues the symbol corresponding to that edge; a sequence of edges (a path through the trellis) corresponds to a sequence of symbols issued by the source. Thus, if the encoder state is understood as the contents of the last two memory units (2, 3) of the shift register in Pic. 10.7b, the four-state trellis required for this encoder has the form shown in Pic. 10.7c (a trellis can also represent a nonlinear encoder, in which the output symbols are not a linear function of the inputs). Like block codes, convolutional codes admit a representation by half-infinite check matrices, but the trellis representation makes decoding algorithms easier to describe.

Convolutional codes have the following main advantages over block codes when used for error correction.
1. They do not require block synchronization; only the switches K (at the transmitter and the receiver) need to be synchronized.
2. If the constraint length v equals the block length of a block code, the correction capability of the convolutional code exceeds that of the block code (comparing the best codes of both classes).
3. The decoding algorithms of convolutional codes admit a simple generalization to soft decoding, which provides an additional energy gain.
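The free distance can be found mechanically by a shortest-path search over the encoder trellis: force the path to diverge from the all-zero state and find the minimum-weight way back to it. A sketch for the (7, 5) code (the function name and state encoding are our own):

```python
from heapq import heappush, heappop

def free_distance(g1=0b111, g2=0b101, nu=2):
    """Minimum Hamming weight over all trellis paths that leave the
    all-zero state and later return to it for the first time
    (Dijkstra shortest-path search; edge cost = branch output weight)."""
    def branch(state, u):
        reg = (u << nu) | state
        w = bin(reg & g1).count("1") % 2 + bin(reg & g2).count("1") % 2
        return reg >> 1, w
    start, w0 = branch(0, 1)          # force the diverging edge (input 1)
    best = {start: w0}
    heap = [(w0, start)]
    while heap:
        w, s = heappop(heap)
        if s == 0:                    # first remerge with the zero state
            return w
        if w > best.get(s, float("inf")):
            continue                  # stale heap entry
        for u in (0, 1):
            ns, dw = branch(s, u)
            if w + dw < best.get(ns, float("inf")):
                best[ns] = w + dw
                heappush(heap, (w + dw, ns))
```

For the (7, 5) code this search returns 5, matching the value d_f = 5 quoted in the text.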

4. Convolutional codes allow a simple combination of coding and modulation (so-called coded modulation, or signal-code constructions), which is especially important in building energy-efficient communication systems for band-limited channels.

For optimal decoding of convolutional codes in memoryless channels, the recursive Viterbi decoding algorithm (VA) is most often used. Consider it for the example of soft decoding in a channel with additive white Gaussian noise. Since the signal received on the k-th clock interval is known, we can calculate the Euclidean distance between the received signal and every possible transmitted signal:

μ_k^(i) = ∫ [z_k(t) − s_i(t)]^2 dt,

where the integration is over the k-th clock interval, s_i(t) is the expected received signal corresponding to the i-th symbol (for binary signals i = 0, 1), and z_k(t) is the signal received on the k-th clock interval. Each edge of the trellis at step k can now be assigned its metric μ_k. The maximum-likelihood decoding rule is to choose the path through the trellis (i.e., the sequence of consecutively connected edges) for which the total metric Σ μ_k is minimal. It might seem that for a trellis of length n (i.e., a transmitted sequence of length n) about 2^n variants must be examined, but this is not so. The key point of the VA is that at each step, for each vertex, among the metrics of the edges entering that vertex only one incoming edge need be retained: the one that minimizes the sum of the metrics over all previous steps. The algorithm is most simply explained by an example. Let the trellis have only two states, with the structure shown in Pic. 10.8a, where the metrics are written on the edges. Assume the first information symbol is 0. The paths that remain (i.e., that "survive") at successive steps are shown in Pic. 10.8b. We see that at the 4th step a single surviving path is obtained, which in our notation (an edge oriented downward corresponds to 1, upward to 0) corresponds to the information sequence 0100.
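A minimal hard-decision version of the Viterbi algorithm for the (7, 5) code of Pic. 10.7b can be sketched as follows (the text describes soft decoding with Euclidean metrics; here the Hamming distance is used as the branch metric for simplicity, and all names are our own):

```python
def viterbi_decode(received, g1=0b111, g2=0b101, nu=2):
    """Hard-decision Viterbi decoding of the rate-1/2 (7, 5) code.

    `received` is a flat bit list (two channel bits per information bit);
    the Hamming distance serves as the branch metric."""
    n_states = 1 << nu
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)     # encoder starts in the zero state
    paths = [[] for _ in range(n_states)]     # survivor path for each state
    for k in range(0, len(received), 2):
        r1, r2 = received[k], received[k + 1]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for u in (0, 1):                  # try both input symbols
                reg = (u << nu) | s
                c1 = bin(reg & g1).count("1") % 2
                c2 = bin(reg & g2).count("1") % 2
                m = metric[s] + (c1 != r1) + (c2 != r2)
                ns = reg >> 1
                if m < new_metric[ns]:        # keep only the best incoming edge
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]
```

Since d_f = 5, the decoder corrects, for example, a single channel error anywhere in the received sequence.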


Pic. 10.8. Two-state trellis (a) and construction of the paths that "survive" in the Viterbi algorithm (b)

The complexity of the VA is determined by the number of metric comparisons at each step over the edges entering all vertices, and it is bounded by M^2, where M is the number of trellis states. Since from the encoder scheme we get M = 2^(v−1), where v is the number of memory units of the encoder shift register, it can be seen that the complexity of the VA depends exponentially on the constraint length, but only linearly on the length of the transmitted sequence. Therefore, when the VA is used as the decoding algorithm, the constraint length v is usually chosen no greater than 10...15, which is nevertheless quite enough to obtain a large energy gain. The VA requires processing the entire sequence of signals before optimally decoding even the first information symbol; this demands a large receiver memory and introduces a decoding delay for the message elements. To remove these deficiencies, the VA is modified into a truncated algorithm, in which the decision on the information symbol of the i-th clock interval is taken from the results of processing the sequence of symbols over the i-th and the next L clock intervals. Theory and experiment show that if L is chosen equal to a few constraint lengths, the energy loss of this modification is small.
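The truncated (sliding-window) modification can be sketched as follows. For clarity this toy version keeps full survivor paths, whereas a real decoder stores only the last L steps; all names are our own, and the Hamming branch metric again replaces the soft metric of the text:

```python
def truncated_viterbi(received, L, g1=0b111, g2=0b101, nu=2):
    """Truncated Viterbi sketch for the rate-1/2 (7, 5) code: the symbol of
    clock interval i is decided after only L further intervals, by reading
    the survivor path of the currently best state."""
    n_states = 1 << nu
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)
    paths = [[] for _ in range(n_states)]
    decided = []
    for k in range(0, len(received), 2):
        r1, r2 = received[k], received[k + 1]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for u in (0, 1):
                reg = (u << nu) | s
                c1 = bin(reg & g1).count("1") % 2
                c2 = bin(reg & g2).count("1") % 2
                m = metric[s] + (c1 != r1) + (c2 != r2)
                ns = reg >> 1
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [u]
        metric, paths = new_metric, new_paths
        if k // 2 + 1 > L:                    # decide the bit now L intervals old
            best = min(range(n_states), key=lambda s: metric[s])
            decided.append(paths[best][len(decided)])
    best = min(range(n_states), key=lambda s: metric[s])
    return decided + paths[best][len(decided):]   # flush the remaining tail
```

The memory and delay now grow with L rather than with the full message length, at the cost of a small energy loss when L spans only a few constraint lengths.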