
PAPER NO: 16-7204/EXAM/S2/08

SHEFFIELD HALLAM UNIVERSITY

Faculty of Arts, Computing, Engineering and Sciences

SEMESTER TWO EXAMINATIONS - MAY 2008

Module Title: Communication Engineering          Module No: 16-7204

Module Leader(s): Dr. Graham Swift               Time Allowed: 2 hours plus 10 minutes reading time

Stationery requirements (per student):


• 16 Page Anonymous Answer Booklet
• Supplementary Answer Sheets (Available on Request)
• Formulae and Information Sheet (Attached at the back of this paper)

INSTRUCTIONS TO CANDIDATES:

1. The University Regulations on academic conduct, including cheating and plagiarism, apply to all examinations.

2. The normal examination regulations of the University apply (see script answer
book). Please do NOT start writing until told to do so by the Invigilator.

3. Candidates must NOT use red ink on the script answer book.

4. The memory of any programmable/graphical calculator used during this examination must be cleared before the start of the paper.

5. Answer any THREE questions from the four.

THIS PAPER CONTAINS 5 PAGES INCLUDING THIS SHEET


Answer any THREE questions.

1. A number of source coding techniques are used to code discrete alphabets.

(a) A discrete source alphabet is represented as $x_0, x_1, x_2, x_3, \ldots, x_{M-1}$. The symbols have associated probabilities $P(x_0), P(x_1), P(x_2), P(x_3), \ldots, P(x_{M-1})$.

(i) Explain, using equations and diagrams as appropriate, how arithmetic coding can efficiently code this alphabet.
(12 Marks)
(ii) Demonstrate how the sequence A,C,B,D can be arithmetically encoded and decoded given the following probabilities:

Probability of occurrence of A is 0.4.
Probability of occurrence of B is 0.3.
Probability of occurrence of C is 0.2.
Probability of occurrence of D is 0.1.

Note: use decimal for the coding.
(11 Marks)
(iii) Describe, with reference to a specific example, how arithmetic encoders deal with limited precision.
(10 Marks)
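The following Python sketch (added for illustration; it is not part of the original paper, and the function names are ours) shows the interval narrowing behind part (ii), using the probabilities given in the question. A practical coder renormalises the interval and emits digits incrementally to cope with the limited precision raised in part (iii).

```python
# Decimal arithmetic coder for the Question 1 alphabet (illustrative sketch).
# Cumulative intervals: A [0.0, 0.4), B [0.4, 0.7), C [0.7, 0.9), D [0.9, 1.0).
PROBS = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}

def cumulative(probs):
    """Map each symbol to its [low, high) sub-interval of [0, 1)."""
    table, low = {}, 0.0
    for sym, p in probs.items():
        table[sym] = (low, low + p)
        low += p
    return table

def encode(sequence, probs):
    """Narrow [low, high) once per symbol; any value inside the final interval codes the sequence."""
    table = cumulative(probs)
    low, high = 0.0, 1.0
    for sym in sequence:
        width = high - low
        s_low, s_high = table[sym]
        low, high = low + width * s_low, low + width * s_high
    return low, high

def decode(value, n_symbols, probs):
    """Invert the narrowing: locate value's sub-interval, emit the symbol, rescale."""
    table = cumulative(probs)
    out = []
    for _ in range(n_symbols):
        for sym, (s_low, s_high) in table.items():
            if s_low <= value < s_high:
                out.append(sym)
                value = (value - s_low) / (s_high - s_low)
                break
    return out

low, high = encode("ACBD", PROBS)
print(low, high)                 # final interval is [0.3336, 0.336)
print(decode(0.334, 4, PROBS))   # ['A', 'C', 'B', 'D']
```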

2. Information theory provides a fundamental basis for the encoding of data.

(a) Explain why the expression $\log_2\!\left(\frac{1}{p(x_m)}\right)$ is appropriate for describing the information content of a discrete alphabet.
(5 Marks)
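As a brief worked illustration (added here; the numbers are not from the paper): a certain symbol conveys nothing, rarer symbols convey more, and the information of independent symbols adds:

$$\log_2\frac{1}{1} = 0, \qquad \log_2\frac{1}{0.5} = 1 \text{ bit}, \qquad \log_2\frac{1}{0.1} \approx 3.32 \text{ bits}, \qquad \log_2\frac{1}{pq} = \log_2\frac{1}{p} + \log_2\frac{1}{q}.$$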
(b) Explain the principle behind source coding with reference to
lossless coding.
(6 Marks)
(c) A source alphabet generates the symbols A,B,C,D and E. The
probabilities that these symbols will be generated are 0.5, 0.2,
0.1, 0.1 and 0.1 respectively.

(i) Explain how Huffman encoding can be used to encode the source and hence determine the code words.
(10 Marks)
(ii) Compare the coding efficiency of this method with fixed-length (block-to-block) encoding and discuss the relative merits of the Huffman coding method.
(12 Marks)
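A compact Huffman construction in Python for the part (c) alphabet (an added sketch using the standard library heapq; tie-breaking order, and hence the exact code words, can differ from a hand construction, but the code lengths and average length agree):

```python
import heapq

# Question 2(c) alphabet and probabilities.
PROBS = {"A": 0.5, "B": 0.2, "C": 0.1, "D": 0.1, "E": 0.1}

def huffman(probs):
    """Repeatedly merge the two least probable subtrees, prefixing 0/1."""
    # Heap entries: (probability, unique tie-breaker, {symbol: code-so-far}).
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        p0, _, left = heapq.heappop(heap)
        p1, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (p0 + p1, nxt, merged))
        nxt += 1
    return heap[0][2]

codes = huffman(PROBS)
avg_len = sum(PROBS[s] * len(c) for s, c in codes.items())
print(codes)    # {'A': '0', 'C': '100', 'D': '101', 'E': '110', 'B': '111'}
print(avg_len)  # 2.0 bits/symbol, vs. 3 bits/symbol for a fixed-length code of 5 symbols
                # (source entropy is about 1.96 bits, so efficiency is roughly 98%)
```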


3. Linear block codes are a popular method for error detection and
correction.

(a) The generator matrix for a linear block code is given as:

$$G = \begin{pmatrix} 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 1 \end{pmatrix}$$

(i) With the aid of an appropriate sketch, show how this can be implemented in hardware.
(10 Marks)

(ii) Demonstrate how an error can be corrected using this generator matrix.
(11 Marks)
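An added Python/numpy sketch of the procedure in part (a)(ii) (the message vector and error position below are arbitrary choices, not from the paper): since $G = (P \mid I_4)$, the parity-check matrix is $H = (I_3 \mid P^T)$, and a single-bit error produces a syndrome equal to the corresponding column of $H$.

```python
import numpy as np

# Generator matrix from the question: G = (P | I4), so H = (I3 | P^T).
G = np.array([[1, 1, 0, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 0, 0],
              [1, 1, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 0, 0, 1]])
P = G[:, :3]
H = np.hstack([np.eye(3, dtype=int), P.T])

m = np.array([1, 0, 1, 1])       # arbitrary 4-bit message
c = m @ G % 2                    # codeword c = mG (mod 2)

r = c.copy()
r[5] ^= 1                        # inject a single-bit error in position 5

s = r @ H.T % 2                  # syndrome s = rH^T depends only on the error
# A single error in position i yields a syndrome equal to column i of H.
pos = next(i for i in range(7) if np.array_equal(H[:, i], s))
r[pos] ^= 1                      # flip the identified bit back
assert np.array_equal(r, c)
print("corrected position", pos)  # corrected position 5
```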

(b) Explain the principle of code modification and discuss the implications of modifying codes.
(12 Marks)

4. Convolutional codes are used in many applications.

(a) (i) Explain, with reference to a trellis diagram, how convolutional codes are effectively decoded using the Viterbi algorithm.
(12 Marks)

(ii) A ½-rate convolutional encoder uses the generator polynomials $101_2$ and $110_2$. With reference to a suitable state machine, demonstrate the encoding process.
(12 Marks)
(iii) Discuss the features of Viterbi-based convolutional coders with reference to relevant parameters, and describe an appropriate method to evaluate such a coder. Justify your choice of method.
(9 Marks)
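An added Python sketch covering both halves of part (a): the ½-rate encoder for generators $101_2$ and $110_2$, plus a minimal hard-decision Viterbi decoder over its four-state trellis. The tap convention is an assumption (the paper does not fix whether the most significant generator bit multiplies the newest or oldest register stage); here it weights the newest.

```python
# Rate-1/2 convolutional encoder for generators 101 and 110 (binary), plus a
# minimal hard-decision Viterbi decoder over the four-state trellis.
# Assumed tap convention: generator bits weight [u(t), u(t-1), u(t-2)].
G1, G2 = 0b101, 0b110
N_STATES = 4                                # two delay elements -> 2^2 states

def step(state, bit):
    """One trellis branch: (next_state, (v1, v2)) for an input bit."""
    reg = (bit << 2) | state                # reg = [u(t), u(t-1), u(t-2)]
    v1 = bin(reg & G1).count("1") % 2       # XOR of the G1-tapped stages
    v2 = bin(reg & G2).count("1") % 2       # XOR of the G2-tapped stages
    return reg >> 1, (v1, v2)

def encode(bits):
    state, out = 0, []
    for b in bits:
        state, (v1, v2) = step(state, b)
        out += [v1, v2]
    return out

def viterbi(received):
    """Keep one minimum-Hamming-distance survivor path per state."""
    INF = float("inf")
    metric = [0.0] + [INF] * (N_STATES - 1)  # encoder starts in state 0
    paths = [[] for _ in range(N_STATES)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for s in range(N_STATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                ns, v = step(s, b)
                d = metric[s] + (v[0] != r[0]) + (v[1] != r[1])
                if d < new_metric[ns]:
                    new_metric[ns] = d
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(N_STATES), key=lambda s: metric[s])
    return paths[best]

message = [1, 0, 1, 1]                  # arbitrary test input
code = encode(message)                  # [1, 1, 0, 1, 0, 1, 1, 0]
code[2] ^= 1                            # inject one channel bit error
print(viterbi(code))                    # recovers [1, 0, 1, 1]
```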


Formulae and information sheet.

Information theory.
$$E[x_m] = \sum_{m=0}^{M-1} p(x_m)\, I_m$$

$$L[X(x_m)] = \sum_{m=0}^{M-1} P(x_m)\, C_m$$

$$\text{Coding efficiency} = \frac{H[x_m]}{L[x_m]} \times 100\%$$

$$H(Y \mid X) = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} p(x_m)\, p(y_n \mid x_m) \log_2\!\left(\frac{1}{p(y_n \mid x_m)}\right)$$

$$H(X \mid Y) = \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} p(y_n)\, p(x_m \mid y_n) \log_2\!\left(\frac{1}{p(x_m \mid y_n)}\right)$$

I(X;Y) = H(X) – H(X|Y),

I(Y;X) = H(Y) – H(Y|X).

Source coding.

Theorem.

$$H(X) \le B(X) < H(X) + 1$$

Arithmetic coding

$$I_m = \left[\, y + V_0 \sum_{k=0}^{m-1} P_k,\;\; y + V_0 \sum_{k=0}^{m} P_k \right)$$

$$I_{M-1} = \left[\, y + V_0 \sum_{k=0}^{M-2} P_k,\;\; y + V_0 \sum_{k=0}^{M-1} P_k \right)$$

$$I_{M-1} = \left[\, y + V_0 \sum_{k=0}^{M-2} P_k,\;\; y + V_0 \right)$$

$$I_m^{(j)} = \left[\, y + V_{j-1} \sum_{k=0}^{m-1} P_k,\;\; y + V_{j-1} \sum_{k=0}^{m} P_k \right)$$

$$C = -\log_2\!\left(V_0 \prod_{m=0}^{M-1} P_m^{P_m J}\right), \qquad V_J = V_0 \prod_{m=0}^{M-1} P_m^{P_m J}$$


$$C = -\sum_{m=0}^{M-1} P_m J \log_2 P_m$$

$$V_J = V_0 \prod_{m=0}^{M-1} P_m^{P_m J}$$
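A short worked check (added here, using the Question 1 probabilities rather than anything from the original sheet): encoding the four-symbol sequence A, C, B, D with $V_0 = 1$ gives a final interval width

$$V_4 = 0.4 \times 0.2 \times 0.3 \times 0.1 = 0.0024, \qquad C = -\log_2 0.0024 \approx 8.7,$$

so about 9 binary digits suffice to identify the final interval.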

Channel coding.
Theorem.

$$\frac{H(X^M)}{T_s} \le \frac{C}{T_c}$$

Linear block codes.

G = ( P I k ×k )

c = m⋅ G

H = (I(n-k,n-k) | PT)

s = e⋅ HT

Typical trellis diagram for a (2,1,3) encoder.

[Trellis diagram: the four encoder states (A, B, C, D) repeated over successive symbol intervals, with branches labelled by the corresponding 2-bit encoder outputs (00, 11, 10, 01).]