
MODULE 4

Error Control Coding: Introduction, Examples of Error Control Coding, Methods of Controlling Errors, Types of Errors, Types of Codes. Linear Block Codes: Matrix Description of Linear Block Codes, Error Detection and Error Correction Capabilities of Linear Block Codes, Single Error Correcting Hamming Codes, Table Lookup Decoding Using the Standard Array.

Binary Cyclic Codes: Algebraic Structure of Cyclic Codes, Encoding using an (n-k) Bit Shift
register, Syndrome Calculation, Error Detection and Correction.

Text Book:
1. Digital and Analog Communication Systems, K. Sam Shanmugam, John Wiley India Pvt. Ltd, 1996. (Refer sections 9.1, 9.2, 9.3, 9.3.1, 9.3.2, 9.3.3)
2. Information Theory and Coding, K. Giridhar
Information Theory and Coding [17EC54]
Dept. of ECE, AJIET
ERROR CONTROL CODING

We know that the probability of error for a particular signaling scheme is a function of the signal-to-noise ratio at the receiver input and the data rate. The only practical alternative for reducing the probability of error is the use of error control coding, also known as channel coding. Error control coding is the calculated use of redundancy. It is concerned with techniques for delivering information from a source to a destination with a minimum number of errors. The functional blocks that accomplish error control coding are the channel encoder and the channel decoder. The channel encoder systematically adds digits to the transmitted message digits. These additional digits, while conveying no new information themselves, make it possible for the channel decoder to detect and correct errors in the information-bearing digits. Error detection and/or correction lowers the overall probability of error.
The problem of error control coding can be formulated as follows. With reference to Figure 4.1, we have a digital communication system for transmitting the binary output {bk} of a source encoder over a noisy channel at a rate of rb bits/sec. Due to channel noise, the bit stream {b̂k} recovered by the receiver differs occasionally from the transmitted sequence {bk}. It is desired that the probability of error P(b̂k ≠ bk) be less than some prescribed value. We need to design an error control coding scheme so that the overall probability of error is acceptable. The channel encoder and the decoder are the functional blocks in the system that, acting together, reduce the overall probability of error. The encoder divides the input message bits into blocks of k message bits and replaces each k-bit message block D with an n-bit codeword Cw by adding n−k check bits to each message block. The decoder looks at the received version of the codeword Cw, which may occasionally contain errors, and attempts to decode the k message bits. While the check bits convey no new information to the receiver, they enable the decoder to detect and correct transmission errors and thereby lower the probability of error.

Figure 4.1 Error control coding. The channel bit error probability is qc = P{d̂k ≠ dk}, and the message bit error probability is Pe = P{b̂k ≠ bk}.


The design of the encoder and decoder consists of selecting rules for generating
codewords from message blocks and for extracting message blocks from the received version of
the codewords. The mapping rules for coding and decoding are to be chosen such that error
control coding lowers the overall probability of error.

4.1 Example of Error Control Coding


Suppose that we want to transmit data over a telephone link that has a usable bandwidth of 3000 Hz and a maximum output S/N of 13 dB, at a rate of 1200 bits/sec with a probability of error less than 10^-4. We are given a DPSK modem that can operate at speeds of 1200, 2400 or 3600 bits/sec with error probabilities 2×10^-4, 4×10^-4 and 8×10^-4 respectively. We need to design an error control coding scheme that would yield a probability of error less than 10^-4.
Consider an error control coding scheme for this problem wherein the triplets 000 and 111 are transmitted when bk = 0 and bk = 1 respectively. These triplets 000 and 111 are called codewords. The triplets of 0's and 1's are certainly redundant, and two of the three bits in each triplet carry no new information. Data comes out of the channel encoder at a rate of 3600 bits/sec, and at this data rate the modem has an error probability of 8×10^-4. The decoder at the receiver looks at the received triplets and extracts the information-bearing digit using a majority-logic decoding scheme, as shown below.

Received triplet:       000 001 010 100 011 101 110 111
Output message bit b̂k:   0   0   0   0   1   1   1   1

Notice that 000 and 111 are the only valid codewords.
Pe = P(b̂k ≠ bk)
   = P(two or more bits in a triplet are in error)

Let X be a random variable representing the number of errors in a triplet. Then X is a binomial random variable with n = 3 and p = probability that a bit is in error = qc. Hence

Pe = P(X ≥ 2) = Σ(x=2 to 3) (3Cx) p^x (1−p)^(3−x)
   = 3qc^2 (1 − qc) + qc^3
   = 3qc^2 − 2qc^3
   = qc^2 (3 − 2qc)
   ≈ 0.02 × 10^-4, which is much less than 10^-4.
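As a quick check of this arithmetic, here is a small Python sketch of the majority-logic decoder and the block error probability; the function names are illustrative only, not from the text:

```python
def majority_decode(triplet):
    # Output 1 when two or more of the three received bits are 1.
    return 1 if sum(triplet) >= 2 else 0

def block_error_prob(qc):
    # P(two or more of three bits in error) = 3qc^2(1-qc) + qc^3 = 3qc^2 - 2qc^3.
    return 3 * qc**2 * (1 - qc) + qc**3

qc = 8e-4                      # modem bit error rate at 3600 bits/sec
pe = block_error_prob(qc)
print(majority_decode((0, 1, 0)), pe)    # 0, about 1.92e-6
```

Running this confirms that the coded probability of error is roughly 0.02 × 10^-4, well inside the 10^-4 requirement.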
The preceding example illustrates the following important aspects of error control coding:

1. It is possible to detect and correct errors by adding extra bits, called check bits, to the message stream. Because of the additional bits, not all bit sequences constitute bona fide messages.
2. It is not possible to detect and correct all errors.
3. Addition of check bits reduces the effective data rate through the channel. Quantitatively, the rate efficiency of a coding scheme is defined as rb/rt, the ratio of the incoming message bit rate rb to the transmitted bit rate rt (1200/3600 = 1/3 in the preceding example).

4.2 Methods of controlling errors:


The method of controlling errors at the receiver through attempts to correct noise-induced errors is called the forward acting error correction method.

In the second method, the decoder, upon examining the demodulated output, accepts the received sequence if it matches a valid message sequence. If not, the decoder discards the received sequence and notifies the transmitter that errors have occurred and that the received message must be retransmitted. Thus the decoder attempts to detect errors but does not attempt to correct them. This method of error control is called error detection. Error detection schemes yield a lower overall probability of error than error correction schemes.

Error detection and retransmission of a message requires a reverse channel that may not be
available in some applications.

Error detection schemes slow down the effective rate of data transmission since the transmitter
has to wait for an acknowledgement from the receiver before transmitting the next message.

4.3 Types of Errors

Transmission errors in a digital communication system are caused by noise in the communication channel. Generally, two kinds of noise can be distinguished in communication channels.

1. Gaussian noise: Gaussian noise sources include thermal and shot noise in the transmitting and receiving equipment, thermal noise in the channel, and radiation picked up by the receiving antenna. The transmission errors introduced by white Gaussian noise are such that the occurrence of an error during a particular signaling interval does not affect the performance of the system during subsequent signaling intervals. The discrete channel in this case can be modeled by a binary symmetric channel.

2. Impulse noise: Impulse noise is characterized by long quiet intervals followed by high-amplitude noise bursts. This type of noise results from many natural and man-made causes such as lightning and switching transients. When a noise burst occurs, it affects more than one symbol or bit, and there is usually a dependence of errors in successive transmitted symbols. Thus the errors occur in bursts.

Corresponding to these two kinds of noise, two types of errors occur:

1. Random errors: The transmission errors that occur due to the presence of white Gaussian noise are referred to as random errors.
2. Burst errors: When a noise burst occurs, it affects more than one symbol or bit, and the errors so caused are called burst errors.

Error control schemes for dealing with random errors are called random error correcting codes
and coding schemes designed to correct burst errors are called burst error correcting codes.

4.4 Types of codes

Error control codes are divided into two broad categories:

1. Block codes: In block codes, a block of k information bits is followed by a group of n−k check bits that are derived from the block of information bits. At the receiver, the check bits are used to verify the information bits in the information block preceding the check bits.
2. Convolutional codes: In convolutional codes, check bits are continuously interleaved with information bits; the check bits verify the information bits not only in the block immediately preceding them but in other blocks as well.

4.5 Linear Block Codes

4
Information Theory and Coding [17EC54]
Dept. of ECE, AJIET

The n-bit block of the channel encoder output is called a codeword, and codes (or coding schemes) in which the message bits appear unaltered at the beginning of a codeword are called systematic codes. If each of the 2^k codewords can be expressed as a linear combination of k linearly independent code vectors, the code is called a linear block code; a code with both properties is a systematic linear block code.

4.5.1 Matrix Description of Linear Block Codes

The encoding operation in a linear block encoding scheme consists of two basic steps:

1. The information sequence is segmented into message blocks, each block consisting of k
successive information bits
2. The encoder transforms each message block into a larger block of n bits according to
some predetermined set of rules.

We denote the message block as a row vector or k-tuple D = (d1, d2, …, dk), where each message bit can be a 0 or 1. Thus we have 2^k distinct message blocks. Each block is transformed into a codeword C of length n bits, C = (c1, c2, …, cn), by the encoder, and there are 2^k distinct codewords, one unique codeword for each distinct message block. This set of 2^k codewords, also called code vectors, is called an (n, k) block code. The rate efficiency of this code is k/n.

In a systematic linear block code the first k bits of the codeword are the message bits,

i.e., ci=di, i=1,2,……,k …….(1)

the last n−k bits in the codeword are check bits generated from the k message bits according to some predetermined rule:

c(k+j) = p1j d1 + p2j d2 + … + pkj dk,   j = 1, 2, …, n−k   ……(2)

The coefficients pij in the above equation are 0's and 1's, and the addition operations in the above equations are performed in modulo-2 arithmetic.

We can combine equations (1) and (2) and write them in matrix form as

C = D G   ……(3)

where G is a k × n matrix called the generator matrix, which has the form

G = [Ik | P]

where Ik is the identity matrix of order k and P is an arbitrary k × (n−k) matrix.

Associated with each (n, k) block code generated by G = [Ik | P] there is a parity check matrix H, which is defined as

H = [Pᵀ | I(n−k)]

where Pᵀ is the transpose of P and I(n−k) is the identity matrix of order n−k. The parity check matrix can be used to verify whether a codeword C is generated by the matrix G = [Ik | P]: C is a codeword in the (n, k) block code generated by G if and only if

C Hᵀ = 0

While the generator matrix is used in the encoding operation, the parity check matrix is used in the decoding operation.

Consider a linear (n,k) block code with a generator matrix G=[Ik | P] and a parity check matrix
H=[PT | In-k]. Let C be a code vector that was transmitted over a noisy channel and let R be the
noise corrupted vector that was received. The vector R is the sum of the original code vector C
and error vector E that is,

R=C+E

The receiver does not know C and E; its function is to decode C from R, and the message block D from C. The receiver begins the decoding operation by determining an (n−k)-component vector S defined as

S = R Hᵀ

The vector S is called the error syndrome of R.

S = [C + E] Hᵀ = C Hᵀ + E Hᵀ = E Hᵀ   (since C Hᵀ = 0)

Thus the syndrome of a received vector is zero if R is a valid code vector.
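The matrix relations above can be sketched in a few lines of Python. The (6,3) code below, with check bits c4 = d1+d2, c5 = d1+d3, c6 = d2+d3, is borrowed from question 36 of the question bank; the helper names are illustrative:

```python
K, N = 3, 6
P = [[1, 1, 0],                # row i lists the check bits fed by message bit d_{i+1}
     [1, 0, 1],
     [0, 1, 1]]

# G = [I_k | P], H = [P^T | I_{n-k}]
G = [[int(i == j) for j in range(K)] + P[i] for i in range(K)]
H = [[P[i][j] for i in range(K)] + [int(j == l) for l in range(N - K)]
     for j in range(N - K)]

def encode(d):                 # C = D G  (mod 2)
    return [sum(d[i] * G[i][j] for i in range(K)) % 2 for j in range(N)]

def syndrome(r):               # S = R H^T  (mod 2)
    return [sum(r[i] * row[i] for i in range(N)) % 2 for row in H]

c = encode([1, 0, 1])
print(c, syndrome(c))          # [1, 0, 1, 1, 0, 1] [0, 0, 0]
```

A valid codeword yields the all-zero syndrome; flipping any bit of c makes the syndrome nonzero, signaling an error.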

4.5.2 Error Detection and Error Correction Capabilities of Linear Block Codes

Some of the basic terminologies used in defining the error control capabilities of a linear block
code are:

1. Hamming weight: The Hamming weight of a code vector C is defined as the number of nonzero components of C.
2. Hamming distance: The Hamming distance between two vectors C1 and C2 is defined as the number of components in which they differ.
3. Minimum distance (dmin): The minimum distance of a block code is the smallest distance between any pair of codewords in the code.

Theorem 1: The minimum distance of a linear block code is equal to the minimum weight of any
nonzero word in the code.

Proof: The minimum distance cannot be greater than the weight of the minimum-weight nonzero codeword, since the all-zero n-tuple is a codeword and its distance to that codeword equals that weight. Nor can it be less: the sum of the two closest codewords is itself a codeword whose weight equals their distance, so a smaller distance would imply a nonzero codeword of weight lower than the minimum weight, which is impossible. Thus, for linear block codes, minimum weight equals minimum distance.
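Theorem 1 can be verified exhaustively for a small code. The sketch below assumes the (6,3) code with check bits c4 = d1+d2, c5 = d1+d3, c6 = d2+d3 (question 36 of the question bank):

```python
from itertools import product

def encode(d):
    # (6,3) systematic code: check bits c4=d1+d2, c5=d1+d3, c6=d2+d3 (mod 2).
    d1, d2, d3 = d
    return [d1, d2, d3, (d1 + d2) % 2, (d1 + d3) % 2, (d2 + d3) % 2]

codewords = [encode(d) for d in product([0, 1], repeat=3)]

# Minimum weight over nonzero codewords...
d_min = min(sum(c) for c in codewords if any(c))

# ...equals the minimum pairwise Hamming distance (Theorem 1).
pair_min = min(sum(a != b for a, b in zip(u, v))
               for u in codewords for v in codewords if u != v)

print(d_min, pair_min)    # 3 3
```

For this code both quantities come out to 3, so by Theorem 2 below it can correct one error per codeword.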

Theorem 2: A linear block code with a minimum distance dmin can correct up to ⌊(dmin − 1)/2⌋ errors and detect up to dmin − 1 errors in each codeword.

Proof: Let R be the received vector, let C be the transmitted codeword, and let C' be any other codeword. The Hamming distance between C and C', d(C, C'), and the distances d(C, R) and d(C', R) satisfy the triangle inequality

d(C, R) + d(C', R) ≥ d(C, C')

since d(U, V) = weight of U + V, where the addition is bit by bit in modulo-2 arithmetic with no carry. Suppose an error pattern of t' errors occurs, so that the Hamming distance between the transmitted vector C and the received vector R is d(C, R) = t'. Since the code is assumed to have a minimum distance dmin, d(C, C') ≥ dmin. Thus we have

d(C', R) ≥ dmin − t'

The decoder will correctly identify C as the transmitted vector if d(C, R) is less than d(C', R), where C' is any other codeword of the code. For d(C, R) < d(C', R), the number of errors t' should satisfy

t' < dmin − t',   i.e.,   2t' < dmin

Thus we have shown that a linear block code with a minimum distance dmin can correct up to t errors if t ≤ ⌊(dmin − 1)/2⌋.

4.5.3 Single Error Correcting Hamming Codes

When a single error occurs, say in the ith bit of the codeword, the syndrome of the received vector is equal to the ith row of Hᵀ. Hence if we choose the n rows of the n × (n−k) matrix Hᵀ to be distinct, then the syndromes of all single errors will be distinct and we can correct these errors.

The following points must be considered while choosing Hᵀ:

1. We must not use a row of all 0's, since a zero syndrome corresponds to no error.
2. The last n−k rows of Hᵀ must be chosen so that we have an identity matrix in H.

Once we choose H, the generator matrix can be obtained. Each row in Hᵀ has n−k entries, each of which can be a 0 or 1. Hence there are 2^(n−k) distinct rows of n−k entries, out of which we can select 2^(n−k) − 1 distinct nonzero rows for Hᵀ. Since Hᵀ has n rows, for all of them to be distinct we need

2^(n−k) − 1 ≥ n

or, equivalently, the number of parity bits in this (n, k) code satisfies the inequality

n − k ≥ log2(n + 1)

So, given a message block size k, we can determine the minimum size n for the codeword from this inequality.
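The inequality 2^(n−k) − 1 ≥ n turns directly into a small search for the minimum codeword length; a Python sketch (the function name is illustrative):

```python
def min_codeword_length(k):
    # Smallest n with 2^(n-k) - 1 >= n: enough distinct nonzero syndromes
    # to label every single-error position.
    n = k + 1
    while 2 ** (n - k) - 1 < n:
        n += 1
    return n

print(min_codeword_length(4), min_codeword_length(11))   # 7 15
```

The results match the familiar (7,4) and (15,11) single-error-correcting Hamming codes.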


4.5.4 Table Lookup Decoding Using the Standard Array

Suppose an (n, k) linear code C is used for error correction. Let C1, C2, …, C2^k be the code vectors of C and let R be the received vector. R can be any one of the 2^n n-tuples, and the decoder has the task of associating R with one of the 2^k n-tuples that are valid codewords. The decoder can perform this task by partitioning the 2^n n-tuples into 2^k disjoint subsets T1, T2, …, T2^k such that each subset Ti contains exactly one code vector Ci. Then, if R ∈ Ti, the decoder identifies Ci as the transmitted code vector. Correct decoding results if the error pattern E is such that Ci + E ∈ Ti, and incorrect decoding results if Ci + E ∉ Ti.

One way of partitioning the set of 2^n n-tuples is the standard array: the first row consists of the code vectors themselves, beginning with the all-zero vector; each subsequent row is formed by choosing a minimum-weight n-tuple E not yet appearing in the array (the coset leader) and listing E + C1, E + C2, …, E + C2^k. Column i of the array is then the subset Ti.

The standard array has the following properties:

1. Each element in the standard array is distinct, and hence the columns Ti of the standard array are disjoint.
2. If the error pattern caused by the channel coincides with a coset leader, the error is corrected; if the pattern is not a coset leader, an incorrect decoding will result. This can be easily verified by noting that the received vector corresponding to a code vector Ci will be in Ti if the error pattern caused by the channel is a coset leader. Thus the coset leaders are called correctable error patterns.

Theorem 3: All the 2^k n-tuples of a coset have the same syndrome, and the syndromes of different cosets are different.
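Theorem 3 suggests a practical decoder: compute each syndrome's coset leader once, then decode by table lookup. A Python sketch, again using the (6,3) code from question 36 of the question bank (c4 = d1+d2, c5 = d1+d3, c6 = d2+d3):

```python
from itertools import product

N, K = 6, 3
H = [[1, 1, 0, 1, 0, 0],
     [1, 0, 1, 0, 1, 0],
     [0, 1, 1, 0, 0, 1]]                  # H = [P^T | I_3]

def syndrome(r):
    return tuple(sum(r[i] * row[i] for i in range(N)) % 2 for row in H)

# One coset leader (minimum-weight error pattern) per syndrome.
table = {}
for e in sorted(product([0, 1], repeat=N), key=sum):
    table.setdefault(syndrome(e), e)

def decode(r):
    e = table[syndrome(r)]                # look up the correctable error pattern
    return [(ri + ei) % 2 for ri, ei in zip(r, e)]

c = [1, 0, 1, 1, 0, 1]                    # a valid codeword of this code
r = c[:]; r[2] ^= 1                       # single-bit error
print(decode(r))                          # [1, 0, 1, 1, 0, 1]
```

Because dmin = 3 for this code, every single-bit error pattern is a coset leader, so all single errors are corrected.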

4.6 Binary Cyclic Codes

Binary cyclic codes form a subclass of linear block codes. With binary cyclic codes, encoding and syndrome calculation can be implemented easily using simple shift registers with feedback connections, and these codes have a fair amount of mathematical structure that makes it possible to design codes with useful error correcting properties.

4.6.1 Algebraic Structure of Cyclic Codes

An (n, k) linear block code C is called a cyclic code if it has the following property: if an n-tuple V = (v0, v1, …, v(n−1)) is a code vector of C, then the n-tuple V(1) = (v(n−1), v0, v1, …, v(n−2)) obtained by shifting V cyclically one place to the right is also a code vector of C.

The codeword V can be represented by a code polynomial

V(x) = v0 + v1 x + v2 x^2 + … + v(n−1) x^(n−1)

The coefficients of the polynomial are 0's and 1's and belong to a binary field with the following rules of addition and multiplication: 0+0 = 0, 0+1 = 1+0 = 1, 1+1 = 0; 0·0 = 0·1 = 0, 1·1 = 1.

The variable x is defined on the real line, with x^2 = x·x, x^3 = (x^2)·x = x·x·x, and so on. In this notation, cyclically shifting a code vector corresponds to multiplying its code polynomial by x and reducing the result modulo x^n + 1. With the polynomial representation V(x) for the code vector V, we obtain the code polynomial V(i)(x) for the code vector V(i) as

V(i)(x) = x^i V(x) mod (x^n + 1)


Theorem 4: If g(x) is a polynomial of degree n−k and is a factor of x^n + 1, then g(x) generates an (n, k) cyclic code in which the code polynomial V(x) for a data vector D = (d0, d1, d2, …, d(k−1)) is given by V(x) = D(x) g(x).

Proof: Consider the k polynomials g(x), x g(x), x^2 g(x), …, x^(k−1) g(x), which all have degree n−1 or less. Any linear combination of these polynomials of the form

V(x) = d0 g(x) + d1 x g(x) + d2 x^2 g(x) + … + d(k−1) x^(k−1) g(x)
     = D(x) g(x)

is a polynomial of degree n−1 or less and is a multiple of g(x). There are a total of 2^k such polynomials, corresponding to the 2^k data vectors, and the code vectors corresponding to these polynomials form a linear (n, k) code.

To prove that this code is cyclic, let V(x) = v0 + v1 x + … + v(n−1) x^(n−1) be a code polynomial in this code, and consider

x V(x) = v0 x + v1 x^2 + … + v(n−1) x^n
       = v(n−1)(x^n + 1) + (v(n−1) + v0 x + … + v(n−2) x^(n−1))
       = v(n−1)(x^n + 1) + V(1)(x)

where V(1)(x) is a cyclic shift of V(x). Since x V(x) and x^n + 1 are both divisible by g(x), V(1)(x) must be divisible by g(x). Thus V(1)(x) is a multiple of g(x) and can be expressed as a linear combination of g(x), x g(x), x^2 g(x), …, x^(k−1) g(x); that is, V(1)(x) is also a code polynomial.

Hence, from the definition of cyclic codes, it follows that the linear code generated by g(x), x g(x), x^2 g(x), …, x^(k−1) g(x) is an (n, k) cyclic code.

The polynomial g(x) is called the generator polynomial of the cyclic code. Given the generator polynomial g(x) of a cyclic code, the code can be put into the systematic form

V = [r0, r1, r2, …, r(n−k−1), d0, d1, …, d(k−1)]

where r(x) = r0 + r1 x + r2 x^2 + … + r(n−k−1) x^(n−k−1) is the parity check polynomial, the remainder from dividing x^(n−k) D(x) by g(x):

x^(n−k) D(x) = q(x) g(x) + r(x)

where q(x) and r(x) are the quotient and remainder, respectively. The code polynomial V(x) is given by

V(x) = r(x) + x^(n−k) D(x)
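The systematic encoding rule above can be sketched as polynomial division over GF(2). The (7,4) code with g(x) = 1 + x + x^3, which appears in several questions below, is used for illustration; the helper names are not from the text:

```python
def poly_mod(dividend, divisor):
    # GF(2) polynomial division; polynomials are bit lists, index = power of x.
    r = dividend[:]
    for i in range(len(r) - 1, len(divisor) - 2, -1):
        if r[i]:
            for j, gbit in enumerate(divisor):
                r[i - len(divisor) + 1 + j] ^= gbit
    return r[:len(divisor) - 1]          # remainder, degree < deg(divisor)

def encode_cyclic(d, g, n):
    k = len(d)
    shifted = [0] * (n - k) + d          # x^(n-k) D(x)
    r = poly_mod(shifted, g)             # r(x) = x^(n-k) D(x) mod g(x)
    return r + d                         # V = (r0..r_{n-k-1}, d0..d_{k-1})

g = [1, 1, 0, 1]                         # g(x) = 1 + x + x^3
v = encode_cyclic([1, 0, 1, 1], g, 7)    # message D(x) = 1 + x^2 + x^3
print(v)                                 # [1, 0, 0, 1, 0, 1, 1]
```

Note that the resulting codeword, viewed as a polynomial, divides evenly by g(x), as Theorem 4 requires.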

Note:

Galois field: Consider the set {0, 1} together with modulo-2 addition and multiplication. Such a field is called a binary field and is denoted GF(2).

4.6.2 Encoding Using an (n−k) Bit Shift Register

The encoding operation described above in systematic form involves the division of x^(n−k) D(x) by g(x) to calculate r(x). This division can be accomplished using a dividing circuit consisting of a feedback shift register, as shown in Figure 4.2.

Figure 4.2: An (n-k) stage encoder for an (n, k) binary cyclic code generated by g(x).

The encoding operation consists of two steps:

1. With the gate turned on and the switch in position 1, the information digits
(d0,d1,d2…..dk-1) are shifted into the register and simultaneously into the communication
channel. As soon as the k information digits have been shifted into the register, the
register contains the parity check bits (r0, r1,….., rn-k-1).

2. With the gate turned off and the switch in position 2, the contents of the shift register are
shifted into the channel. Thus the code word (r0, r1,….., rn-k-1, d0,d1,d2…..dk-1) is
generated and sent over the channel.
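The two-step encoder of Figure 4.2 can be simulated bit by bit. The sketch below is a behavioral model only (it omits the gate and switch timing), assuming g(x) = 1 + x + x^3 for a (7,4) code:

```python
def shift_register_encode(d, g_taps, n_k):
    # g_taps = (g_0, ..., g_{n-k-1}); the leading coefficient g_{n-k} is 1.
    reg = [0] * n_k
    for bit in reversed(d):              # d_{k-1} enters first, as in step 1
        feedback = bit ^ reg[-1]
        reg = [feedback & g_taps[0]] + \
              [reg[i - 1] ^ (feedback & g_taps[i]) for i in range(1, n_k)]
    return reg                           # check bits (r0, ..., r_{n-k-1})

# (7,4) code, g(x) = 1 + x + x^3  ->  taps (1, 1, 0)
checks = shift_register_encode([1, 0, 1, 1], (1, 1, 0), 3)
print(checks + [1, 0, 1, 1])             # [1, 0, 0, 1, 0, 1, 1]
```

After the k message bits have been clocked in, the register holds the same remainder r(x) that long division of x^(n−k) D(x) by g(x) produces.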

4.6.3 Syndrome Calculation, Error Detection and Error Correction

A nonzero syndrome indicates that transmission errors have occurred. The syndrome S(x) of the received vector R(x) is the remainder resulting from dividing R(x) by g(x), that is,

R(x) = P(x) g(x) + S(x)

where P(x) is the quotient of the division. The syndrome S(x) is a polynomial of degree n−k−1 or less. If E(x) is the error pattern caused by the channel, then

R(x) = V(x) + E(x)

Figure 4.3 Syndrome Calculation Circuit

Now V(x) = D(x) g(x), where D(x) is the message polynomial. Hence we obtain

E(x) = [P(x) + D(x)] g(x) + S(x)

Hence the syndrome of R(x) is equal to the remainder resulting from dividing the error pattern by the generator polynomial, and the syndrome contains information about the error pattern that can be used for error correction.
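The relation between the syndrome and the error pattern can be checked numerically. The sketch below assumes the (7,4) code with g(x) = 1 + x + x^3; `poly_rem` is an illustrative helper that performs GF(2) polynomial division and returns the remainder:

```python
def poly_rem(r, g):
    # Remainder of GF(2) polynomial division; bit lists, index = power of x.
    r = r[:]
    for i in range(len(r) - 1, len(g) - 2, -1):
        if r[i]:
            for j, gb in enumerate(g):
                r[i - len(g) + 1 + j] ^= gb
    return r[:len(g) - 1]

g = [1, 1, 0, 1]                          # g(x) = 1 + x + x^3
v = [1, 0, 0, 1, 0, 1, 1]                 # a codeword (multiple of g(x))
e = [0, 0, 0, 0, 1, 0, 0]                 # error pattern E(x) = x^4
r = [vi ^ ei for vi, ei in zip(v, e)]     # R(x) = V(x) + E(x)

print(poly_rem(v, g))                     # [0, 0, 0]  (valid codeword)
print(poly_rem(r, g) == poly_rem(e, g))   # True: S(x) = E(x) mod g(x)
```

The codeword itself leaves a zero remainder, while the corrupted vector's syndrome depends only on the error pattern, exactly as derived above.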

The syndrome calculations are carried out as follows:

13
Information Theory and Coding [17EC54]
Dept. of ECE, AJIET
1. The register is first initialized. Then, with gate 2 turned on and gate 1 turned off, the received vector R(x) is entered into the shift register.
2. After the entire received vector is shifted into the register, the contents of the register will be the syndrome. Now gate 2 is turned off, gate 1 is turned on, and the syndrome vector is shifted out of the register. The circuit is then ready for processing the next received vector.

Cyclic codes are extremely well suited for error detection. Error detection can be implemented by simply adding an additional flip-flop to the syndrome calculator. If the syndrome is nonzero, the flip-flop sets and an indication of error is provided.

The error correction procedure consists of the following steps

1. The received vector is shifted into the buffer register and the syndrome register.
2. After the syndrome for the received vector is calculated and placed in the syndrome register, the contents of the syndrome register are read into the detector. The detector is a combinational logic circuit designed to output a 1 if and only if the syndrome in the syndrome register corresponds to a correctable error pattern with an error at the highest-order position x^(n−1). That is, if the detector output is 1, the received digit at the rightmost stage of the buffer register is assumed to be erroneous and is corrected; if the detector output is 0, that digit is assumed to be correct. Thus the detector output is the estimated error value for the digit coming out of the buffer register.


3. The first received digit is shifted out of the buffer, and at the same time the syndrome register is shifted right once. If the first received digit is in error, the detector output will be 1, which is used to correct it. The output of the detector is also fed back to the syndrome register to modify the syndrome. This results in a new syndrome corresponding to the altered received vector shifted to the right by one place.
4. The new syndrome is now used to check whether or not the second received digit, now at the rightmost stage of the buffer register, is erroneous. If so, it is corrected, a new syndrome is calculated as in step 3, and the procedure is repeated.
5. The detector operates on the received vector digit by digit until the entire vector is shifted out of the buffer.
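For a single-error-correcting cyclic code, the procedure above can be collapsed into a syndrome lookup (the serial shift-register decoder computes the same result one digit at a time). A sketch for the (7,4) code with g(x) = 1 + x + x^3; helper names are illustrative:

```python
def poly_rem(r, g):
    # Remainder of GF(2) polynomial division; bit lists, index = power of x.
    r = r[:]
    for i in range(len(r) - 1, len(g) - 2, -1):
        if r[i]:
            for j, gb in enumerate(g):
                r[i - len(g) + 1 + j] ^= gb
    return r[:len(g) - 1]

n, g = 7, [1, 1, 0, 1]                     # g(x) = 1 + x + x^3

# Syndrome of each single-bit error pattern -> error position.
syn_of = {tuple(poly_rem([int(i == p) for i in range(n)], g)): p
          for p in range(n)}

def correct_single(r):
    s = tuple(poly_rem(r, g))
    if any(s):
        r = r[:]
        r[syn_of[s]] ^= 1                  # flip the bit the syndrome points at
    return r

v = [1, 0, 0, 1, 0, 1, 1]                  # systematic encoding of message 1011
r = v[:]; r[6] ^= 1                        # single error in the last bit
print(correct_single(r) == v)              # True
```

Because this is a Hamming code, all seven single-error syndromes are distinct and nonzero, so the table resolves every single error unambiguously.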

******************************************************************

QUESTION BANK
1. For a systematic (6,3) linear block code generated by C4 = d1 ⊕ d3, C5 = d2 ⊕ d3, C6 = d1 ⊕ d2:
(i) Find all possible code vectors
(ii) Draw encoder circuit and syndrome circuit
(iii) Detect and correct the code word if the received code word is 110010
(iv) Hamming weight for all code vectors, min. hamming distance, Error detecting
and correcting capability.
2. Define the following: (i) block code and convolutional code (ii) systematic and non-systematic codes

3. A linear (7,4) Hamming code is described by the generator polynomial g(x) = 1 + x + x^3. Determine the generator matrix and parity check matrix.
4. The generator polynomial for a (15,7) cyclic code is g(x) = 1 + x^4 + x^6 + x^7 + x^8.
(i) Find the code vector for the message D(x) = x^2 + x^3 + x^4 using the cyclic encoder circuit.
(ii) Draw the syndrome calculation circuit and find the syndrome of the received polynomial Z(x) = 1 + x + x^3 + x^6 + x^8 + x^9 + x^11 + x^14.
5. Define Hamming weight, Hamming Distance, minimum distance of linear block code
and systematic linear block code
6. For a linear block code the syndrome is given by
S1 = r1 + r2 + r3 + r5, S2 = r1 + r2 + r4 + r6, S3 = r1 + r3 + r4 + r7
(i) Find the generator matrix
(ii) Find the parity check matrix
(iii) Draw the encoder and decoder circuits
(iv) How many errors can be detected and corrected?
7. A (15,11) cyclic code is generated using g(x) = 1 + x + x^4. Design a feedback shift register encoder and illustrate the encoding procedure with the message vector [11001101011] by listing the states of the register, assuming the rightmost bit is the earliest bit.
8. A (7,4) cyclic code has the generator polynomial g(x) = 1 + x + x^3. Draw the syndrome calculation circuit and verify the circuit for the message polynomial d(x) = 1 + x^3.
9. The parity check bits of a (7,4) Hamming code are generated by,
C5= d1+d3+d4
C6=d1+d2+d3
C7=d2+ d3+d4
Where d1,d2,d3 and d4 are the message bits.
(i) Find the generator matrix [G] and parity check matrix [H] for this code.
(ii) Prove that GHT =0.
(iii) The (n,k) linear block code so obtained has a dual code. This dual is an (n, n−k) code having generator matrix H and parity check matrix G. Determine the eight code vectors of the dual code for the (7,4) Hamming code described above.
(iv) Find the minimum distance of the dual code determined in part (iii).
10. Design an (n,k) Hamming code with a minimum distance of dmin = 3 and a message length of 4 bits. If the received code vector is R = [1111001], detect and correct the single error that has occurred due to noise.
11. The expurgated (n, k−1) Hamming code is obtained from the original (n,k) Hamming code by discarding some of the code vectors. Let g(x) denote the generator polynomial of the original Hamming code. The most common expurgated Hamming code is the one generated by g1(x) = (1 + x) g(x), where (1 + x) is a factor of 1 + x^n. Consider the (7,4) Hamming code generated by g(x) = 1 + x^2 + x^3.
(i) Construct the eight code vectors of the expurgated (7,3) Hamming code, assuming a systematic format. Hence show that the minimum distance of the code is 4.
(ii) Determine the generator matrix G1 and parity check matrix H1 of the expurgated Hamming code.

(iii) Devise the encoder for the expurgated Hamming code and list the shift register contents in tabular fashion for the message 011. Verify the code vector so obtained using [V] = [D][G1].
(iv) Devise the syndrome calculator for the expurgated Hamming code. Hence determine the syndrome for the received vector 0111110. Also correct the error, if any, in that received vector.
12. The generator polynomial for a (15,7) cyclic code is g(x) = 1 + x^4 + x^6 + x^7 + x^8.
(i) Find the code vector V(x) in systematic form for the message D(x) = x^2 + x^3 + x^4.
(ii) Assuming the code vector suffers transmission errors, find the syndrome of V(x).
13. The parity check bits of (8,4) block code are generated by,
C5= d1+d2+d4
C6=d1+d2+d3
C7=d1+ d3+d4
C8=d2+ d3+d4
Where d1,d2,d3 and d4 are the message bits.
(i) Find G and H
(ii) Find minimum weight of the code
(iii) Find error detecting capacity
(iv) Show through two examples that this code can detect and correct errors.
14. Design a single error correcting code with a message block size of 11 bits and show by
an example that the code can correct single errors.
15. The generator polynomial of a (7,4) cyclic code is g(x) = 1 + x + x^3. Find the code words for the messages 1010, 0110 and 1101 in both systematic and non-systematic form.
16. For a (15,5) binary cyclic code, the generator polynomial is g(x) = 1 + x + x^2 + x^4 + x^5 + x^8 + x^10. Draw the encoder diagram and find the encoded output for the message D(x) = 1 + x^2 + x^4.
17. What are the types of errors and types of codes in error control coding?

18. Consider a linear code whose generator matrix is G=[ ]

(i) Find all code vectors.


(ii) Find all the hamming weights
(iii) Find minimum weight parity check matrix
(iv) Draw the encoder circuit for the above codes.
19. Define Binary cyclic codes. Explain the properties of cyclic codes.
20. For a (15,5) binary cyclic code, the generator polynomial is g(x) = 1 + x + x^2 + x^4 + x^5 + x^8 + x^10.
(i) Draw the block diagram of an encoder for this code.
(ii) Find the code vector for the message polynomial D(x) = 1 + x^2 + x^4 in systematic form.
(iii) Is V(x) = 1 + x^4 + x^6 + x^8 + x^14 a code polynomial? If not, find the syndrome of V(x).
21. For a linear block code the syndrome is given by,
S1=r1+r2+r3+r5
S2=r1+r2+r4+r6
S3=r1+r3+r4+r7
(i) Find the generator matrix.
(ii) Draw the encoder and decoder circuit

(iii) How many errors can it detect and correct?
22. The parity check bits of a (7,4) Hamming code are generated by,
C5= d1+d3+d4
C6=d1+d2+d3
C7=d2+ d3+d4
Where d1,d2,d3 and d4 are the message bits.
(v) Find the generator matrix [G] and parity check matrix [H] for this code.
(vi) Prove that GHT =0.
(vii) Find the minimum weight of this code.
(viii) Find error detecting and correcting capability
(ix) Draw encoder circuit and syndrome circuit for the same.
23. Compare fixed length code and variable length code.
24. For the given generator polynomial g(x) = 1 + x + x^2 + x^4, find the generator matrix and parity check matrix, find the code words for the (7,3) Hamming code, and find their Hamming weights.

25. For a systematic (6,3) linear block code the parity matrix, P=[ ]

(i) Find all possible code vectors


(ii) Find the minimum weight of the code
(iii) Find the parity check matrix
(iv) For a received code vector R = 110010, detect and correct the error that has occurred due to noise.
26. Define the terms :
(i) Burst error
(ii) Systematic linear block code
(iii) Galois field
(iv) Hamming weight
27. What are the different methods of controlling errors? Explain.
28. For the (7,4) single error correcting cyclic code, D(x) = d0 + d1 x + d2 x^2 + d3 x^3 and x^n + 1 = x^7 + 1 = (1 + x + x^3)(1 + x + x^2 + x^4). Using the generator polynomial g(x) = 1 + x + x^3, find all 16 code vectors of the cyclic code in both non-systematic and systematic form.
29. What is a binary cyclic code? Describe the features of the encoder and decoder used for cyclic codes using an (n−k) bit shift register.

30. For a systematic (6,3) linear block code, the parity matrix is P=[ ] find all

possible code vectors and parity check matrix.


31. For a (7,4) cyclic code, the received vector is Z(x) = 0100101 and the generator polynomial is g(x) = 1 + x + x^3. Draw the syndrome calculation circuit and correct the single error in the received vector. Also explain the operation of the circuit.
32. For a (7,3) expurgated Hamming code, write the code vector table and draw the encoder circuit if g(x) = 1 + x + x^3.

33. For a systematic (7,4) linear code, the parity matrix is given by P = [ ]

(i) Find all possible valid code vectors
(ii) Draw the corresponding encoding circuit.
(iii) A single error has occurred in each of these vectors. Detect and correct these
errors:
YA=[0111110], YB=[1011100], YC=[1010000]
(iv) Draw the syndrome calculation circuit.
34. If C is a valid code vector, then prove that C Hᵀ = 0, where Hᵀ is the transpose of the parity check matrix H.
35. For a linear block code the syndrome is given by,
S1=r1+r2+r3+r5
S2=r1+r2+r4+r6
S3=r1+r3+r4+r7
(i) Find the code-word of all the messages
(ii) Write the standard array.
(iii) Find the syndrome for the received data 1011011.
36. For a systematic (6,3) linear block code generated by C4 = d1 + d2, C5 = d1 + d3, C6 = d2 + d3:
(i) Write G and H matrices
(ii) Construct encoder and error correcting circuits
(iii) Construct standard array

37. Find the generator matrix G and the parity check matrix H for a linear block code with dmin = 3 and a message block size of 8 bits.
38. Design an encoder for the (7,4) binary cyclic code generated by g(x) = 1 + x + x^3 and verify its operation using the message vectors (1001) and (1011). Also verify the codes obtained using polynomial arithmetic.

