
Lecture 4: Error detection and correction schemes

word is rotated left or right by any number of bit positions, the resulting string is still a word in the code space. This property makes encoding and decoding very easy and efficient to implement using simple shift registers. The main drawback of CRC codes is that they have only error-detecting capability: they cannot correct errors once detected at the destination, and the data must be retransmitted. For this reason, CRC codes are usually used in conjunction with another code that provides error correction.

Convolutional Codes
These codes are generally more complicated than LBCs, more difficult to implement, and have lower code rates (usually below 0.90), but they have powerful error-correcting capabilities. They are popular in satellite and deep-space communications, where bandwidth is essentially unlimited but the BER is much higher and retransmissions are infeasible. They are difficult to decode because they are encoded using finite state machines that have branching paths for encoding each bit in the data sequence. A well-known process for decoding convolutional codes quickly is the Viterbi algorithm. The Viterbi algorithm is a maximum-likelihood decoder, meaning that the code word output from decoding a transmission is always the one with the highest probability of being the correct word transmitted from the source.

4.4 Hamming codes

4.4.1 Basic definitions
A Hamming code is an LBC. Because of their simplicity, Hamming codes are widely used in computer memory. These codes are used for single-error correction and double-error detection.

Consider an (n,k) block code, where k is the number of information bits and n is the number of output bits after coding, e.g. ASCII is an (8,7) code.

Code rate = k/n.
Hamming weight: the number of ones in a code word (e.g. for A = 10010011, the Hamming weight is 4).
Hamming distance, d: the number of bit positions in which two code words differ, e.g. if A = 1010 and B = 1101, then d = 3.
Minimum Hamming distance: the minimum number of bits in which any two valid code words differ. This distance provides an important measure of the error-combating properties of the error control code used.

4.4.2 Encoding
A systematic linear block code vector C is obtained by multiplying the information vector X by a generator matrix G:

X . G = C,  with matrix orders (1xk) . (kxn) = (1xn)
[Figure: message vector X -> linear block code generator G (hardware/software) -> code vector C]
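As a minimal sketch, the encoding step X . G (mod 2) can be written directly. The generator matrix below is the (7,4) systematic Hamming generator used later in these notes (G = [I4 | P], built from the parity equations c5 = x1 XOR x2 XOR x3, c6 = x2 XOR x3 XOR x4, c7 = x1 XOR x2 XOR x4); the function name is illustrative.

```python
# Sketch of systematic LBC encoding C = X . G (mod 2), assuming the (7,4)
# Hamming generator matrix derived from the parity equations in these notes.

G = [  # G = [I4 | P], one row per message bit
    [1, 0, 0, 0, 1, 0, 1],
    [0, 1, 0, 0, 1, 1, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1, 1],
]

def encode(x):
    """Multiply message vector x (1xk) by G (kxn), with mod-2 arithmetic."""
    return [sum(x[i] * G[i][j] for i in range(len(x))) % 2
            for j in range(len(G[0]))]

print(encode([1, 0, 0, 1]))  # -> [1, 0, 0, 1, 1, 1, 0]
```

The first four output bits reproduce the message (the systematic part); the last three are the parity bits.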

4.1 Introduction

Error detection and correction, or error control, are techniques that enable reliable delivery of data over unreliable communication channels. Error detection techniques allow such errors to be detected, while error correction enables reconstruction of the original data. The error control scheme used depends on the type of errors (random or burst) and on the particular communication application. In designing error control schemes, consideration is given to efficiency, reliability and cost.

Purposes of coding schemes:
- faster transmission
- error detection
- error correction

4.2 Implementation

Error correction may generally be realized in two different ways:

Automatic repeat request (ARQ) (backward error correction): an error control technique whereby an error detection scheme is combined with requests for retransmission of erroneous data. Every block of data received is checked using the error detection code, and if the check fails, retransmission of the data is requested; this may be done repeatedly, until the data can be verified. ARQ is appropriate if the communication channel has varying or unknown capacity, such as is the case on the Internet.

Forward error correction (FEC): the sender encodes the data using an error-correcting code (ECC) prior to transmission. The additional information (redundancy) added by the code is used by the receiver to recover the original data. In general, the reconstructed data is what is deemed the "most likely" original data. FEC is used in broadcasting, file storage, etc.

ARQ and FEC may be combined, such that minor errors are corrected without retransmission, and major errors are corrected via a request for retransmission: this is called hybrid automatic repeat request (HARQ).
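The ARQ loop described above can be sketched in a few lines, assuming a simple even-parity bit as the error-detecting code and a mock channel; all names here are illustrative, not a real networking API.

```python
# Minimal sketch of stop-and-wait ARQ with an even-parity check.
# The "channel" is a mock that returns pre-scripted received frames.

def add_parity(bits):
    return bits + [sum(bits) % 2]      # append even-parity bit

def check_parity(frame):
    return sum(frame) % 2 == 0         # valid iff overall parity is even

def send_with_arq(bits, channel, max_tries=5):
    frame = add_parity(bits)
    for _ in range(max_tries):
        received = channel(frame)
        if check_parity(received):     # check passed: deliver data bits
            return received[:-1]
        # check failed: request retransmission (loop again)
    raise RuntimeError("retransmission limit reached")

# Mock channel: corrupts the first attempt, then delivers cleanly.
attempts = [[1, 0, 0, 1, 1], [1, 0, 0, 1, 0]]
channel = lambda frame: attempts.pop(0)
print(send_with_arq([1, 0, 0, 1], channel))  # -> [1, 0, 0, 1]
```

A single parity bit only detects odd numbers of bit errors, which is why practical ARQ systems use stronger detection codes such as CRCs.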


4.3 Coding schemes

Linear Block Codes
LBCs are so named because each code word in the set is a linear combination of a set of generator code words. If the messages are k bits long and the code words are n bits long (where n > k), there are k linearly independent code words of length n that form a generator matrix. LBCs have very high code rates, usually above 0.95. To encode any message of k bits, simply multiply the message vector x by the generator matrix G to produce a code word vector v that is n bits long. Linear block codes are very easy to implement in hardware, and since they are algebraically determined, they can be decoded in constant time. They have low coding overhead but limited error correction capabilities. They are very useful in situations where the BER of the channel is relatively low, bandwidth availability is limited, and it is easy to retransmit data.

CRC Codes
Cyclic Redundancy Check (CRC) codes are a special subset of linear block codes that are very popular in digital communications. CRC codes have the cyclic shift property: when any code

ET412 Transmission and switching




c7 = x1 XOR x2 XOR x4

We need to find the generator matrix, G = [Ik | P], where Ik is the k x k identity matrix:

     [ 1 0 0 ... 0 ]
Ik = [ 0 1 0 ... 0 ]
     [     ...     ]
     [ 0 0 0 ... 1 ]

and P is the k x (n-k) parity submatrix:

    [ P11  P12  ...  P1,n-k ]
P = [ P21  P22  ...  P2,n-k ]
    [          ...          ]
    [ Pk1  Pk2  ...  Pk,n-k ]

[Figure: encoder implementation - message bits x4 x3 x2 x1 held in a shift register, with modulo-2 adders producing the parity bits c5, c6, c7]

The P submatrix is the only unknown part, which we need to find to complete the generator matrix of a systematic LBC. If errors are introduced during transmission, they will likely be detected during the decoding process at the destination, because the code word would be transformed into an invalid bit string. Given a data string to be transmitted that is k bits long, there are 2^k possible bit strings that the data can be. The number of parity (error check) bits required, p, is given by the Hamming rule, and is a function of the number of bits of information transmitted. The Hamming rule is expressed by the following inequality:

k + p + 1 <= 2^p

where k is the number of data bits and p is the number of parity bits. The result of appending the computed parity bits to the data bits is called the Hamming code word. The size of the code word is n = k + p, and a Hamming code word is described by the ordered set (n,k).

Example 4.1: If k = 4, find n.
Solution 4.1: The smallest p satisfying 4 + p + 1 <= 2^p is p = 3 (since 4 + 3 + 1 = 8 <= 2^3). Hence n = k + p = 7, giving the (7,4) Hamming code.
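The Hamming rule can be checked mechanically. A minimal sketch that searches for the smallest p satisfying k + p + 1 <= 2^p (the function name is illustrative):

```python
# Find the minimum number of parity bits p for k data bits,
# using the Hamming rule k + p + 1 <= 2**p.

def parity_bits(k):
    p = 1
    while k + p + 1 > 2 ** p:
        p += 1
    return p

k = 4
p = parity_bits(k)
print(p, k + p)  # p = 3, so n = k + p = 7: the (7,4) Hamming code
```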

This is implemented using a k-bit shift register and n-k modulo-2 adders tied to the appropriate stages of the shift register.

4.4.3 Decoding
Decoding of a systematic linear block code is achieved by using the parity check information from the code's P submatrix. The received signal is given by:

Y = C XOR E, where E is the error vector introduced by the channel.
Error detection is based on the syndrome, S:

S = Y . H^T

[Figure: received vector Y -> parity check matrix H (hardware/software) -> syndrome vector S]

The generator matrix for a (7,4) systematic Hamming code is given by:

    [ 1 0 0 0 1 0 1 ]
G = [ 0 1 0 0 1 1 1 ]
    [ 0 0 1 0 1 1 0 ]
    [ 0 0 0 1 0 1 1 ]

The parity check matrix is H = [P^T | I(n-k)].
Syndrome            Errors detected
S = (00...0)        No
S != (00...0)       Yes

C = [x1 x2 x3 x4 c5 c6 c7], where
c5 = x1 XOR x2 XOR x3
c6 = x2 XOR x3 XOR x4
c7 = x1 XOR x2 XOR x4

Example 4.2:
a) Encode the data [1001].
b) What was the bit that was corrupted if the received data was:




i. [1001110]
ii. [1011110]
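A sketch of syndrome decoding for the (7,4) code of this section, assuming H = [P^T | I3] built from the parity equations c5 = x1 XOR x2 XOR x3, c6 = x2 XOR x3 XOR x4, c7 = x1 XOR x2 XOR x4. For a single-bit error, the syndrome equals the column of H at the error position, which is what the look-up below exploits (function names are illustrative):

```python
# Syndrome decoding for the (7,4) systematic Hamming code.

H = [
    [1, 1, 1, 0, 1, 0, 0],  # checks c5 = x1 ^ x2 ^ x3
    [0, 1, 1, 1, 0, 1, 0],  # checks c6 = x2 ^ x3 ^ x4
    [1, 1, 0, 1, 0, 0, 1],  # checks c7 = x1 ^ x2 ^ x4
]

def syndrome(y):
    """S = Y . H^T (mod 2)."""
    return [sum(h[j] * y[j] for j in range(7)) % 2 for h in H]

def correct(y):
    """Correct a single-bit error by matching S against the columns of H."""
    s = syndrome(y)
    if s == [0, 0, 0]:
        return y[:]                  # no error detected
    cols = [[H[0][j], H[1][j], H[2][j]] for j in range(7)]
    fixed = y[:]
    fixed[cols.index(s)] ^= 1        # flip the erroneous bit
    return fixed

print(syndrome([1, 0, 0, 1, 1, 1, 0]))  # [0, 0, 0]: valid code word
print(correct([1, 0, 1, 1, 1, 1, 0]))   # bit 3 corrupted -> [1, 0, 0, 1, 1, 1, 0]
```

For the received vector [1011110] of part (b)(ii), the syndrome is [1, 1, 0], which matches column 3 of H, so bit 3 was corrupted.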

Example 4.4: Encode X = [1011].

4.5.2 Decoding
As with general block codes, the syndrome calculated at the decoder, S, depends on the error pattern E and not on the transmitted code vector C. However, the syndrome is calculated from the received vector Y. The syndrome is used for error correction, based on maximum likelihood decoding.

4.5 Cyclic codes

The code is represented in terms of polynomials, NOT as 0s and 1s. Cyclic codes have the following properties:
- A cyclic shift of a code vector produces another code vector.
- They have an inherent algebraic structure, which can be exploited in encoding/decoding methods.

Example 4.3: The data 101011 is represented as p^5 + p^3 + p + 1.

4.5.1 Encoding
A cyclic code vector c(p) can be obtained by multiplying the information polynomial x(p) by a generator polynomial g(p):

x(p) . g(p) = c(p)

a) Divide the received polynomial by the generator polynomial:

y(p)/g(p) = Q(p) + s(p)/g(p)

b) Examine the syndrome s(p):

Syndrome                                           Errors detected
s(p) = 0    g(p) is a factor of y(p)               No
s(p) != 0   y(p) is not a valid code polynomial    Yes

c) Use a look-up table to determine the error bit. For the (7,4) code, 7 error patterns must be calculated, for the error polynomials p^6, p^5, p^4, p^3, p^2, p^1, p^0.

[Figure: message vector x(p) -> linear block code generator g(p) (hardware/software) -> code vector c(p)]

For example, with g(p) = p^3 + p + 1, for the error p^6 the syndrome is:

s(p) = Rem[ p^6 / g(p) ] = p^2 + 1 = 101
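The syndrome remainder can be computed with mod-2 (XOR) long division on bit lists (MSB first). A minimal sketch reproducing s(p) for the error p^6 with g(p) = p^3 + p + 1 (the function name is illustrative):

```python
# GF(2) polynomial remainder via XOR long division; bits listed MSB first.

def poly_rem(dividend, divisor):
    """Remainder of dividend/divisor over GF(2)."""
    rem = dividend[:]
    for i in range(len(dividend) - len(divisor) + 1):
        if rem[i]:                        # leading bit set: subtract (XOR)
            for j, d in enumerate(divisor):
                rem[i + j] ^= d
    return rem[-(len(divisor) - 1):]      # last n-k bits are the remainder

# Error p^6 (1000000) divided by g(p) = p^3 + p + 1 (1011):
print(poly_rem([1, 0, 0, 0, 0, 0, 0], [1, 0, 1, 1]))  # -> [1, 0, 1] = p^2 + 1
```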

Factors of order n-k of the polynomial (p^n + 1) can be used to generate (n,k) cyclic codes, i.e. g(p) is a factor of (p^n + 1). E.g. for a (7,4) code: (p^7 + 1) = (p + 1)(p^3 + p^2 + 1)(p^3 + p + 1). For (n,k) cyclic codes, g(p) is a factor of all valid code vectors, and this is used at the receiver to detect errors. The transmitter and receiver agree on the g(p) to use. For example, for CRC-16, g(p) = p^16 + p^12 + p^5 + 1.

Polynomials are added in a similar manner to vector addition:
- The sum is obtained by mod-2 addition of the polynomial coefficients.
- Subtraction is the same as mod-2 addition.

Note: A cyclic code can be generated by c(p) = x(p) . g(p); however, this code is non-systematic, i.e. the information and check bits are mixed within the transmitted code vector. The systematic procedure is:

a) Shift the message by n-k bits: xs = p^(n-k) x(p)
b) Divide by the generator polynomial:

xs/g(p) = Q(p) + r(p)/g(p)

where Q(p) is the quotient and r(p) is the remainder, which forms the check bits.

c) Combine the message with the check bits: c(p) = xs + r(p)


