

[3963] 286

T.E. (Electronics and Telecommunication) (Semester II) Examination, 2011
SIGNAL CODING AND ESTIMATION THEORY (New) (2008 Pattern)
Time : 3 Hours Max. Marks : 100

Instructions : i) Answer three questions from Section I and three questions from Section II.
ii) Answers to the two Sections should be written in separate answer books.
iii) Neat diagrams must be drawn wherever necessary.
iv) Assume suitable data if necessary.
v) Use of electronic pocket calculator is allowed.
vi) Figures to the right indicate full marks.

SECTION I

1. a) Describe the LZW (Lempel-Ziv-Welch) algorithm for encoding byte streams.
b) A zero-memory source emits six messages (m1, m2, m3, m4, m5, m6) with probabilities (0.30, 0.25, 0.15, 0.12, 0.10, 0.08) respectively. Find :
i) The Huffman code.
ii) Its average word length.
iii) The entropy of the source.
iv) Its efficiency and redundancy.
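The Huffman construction asked for in Q.1(b) can be sketched in a few lines of Python. The variable names and tie-breaking below are illustrative choices, not part of the question; for these probabilities the codeword lengths should come out as {2, 2, 3, 3, 3, 3}, from which the average length, efficiency and redundancy follow.

import heapq
from math import log2

# Illustrative sketch for Q.1(b): build a Huffman code for the six-message source,
# then compute average codeword length, entropy, efficiency and redundancy.
probs = {"m1": 0.30, "m2": 0.25, "m3": 0.15, "m4": 0.12, "m5": 0.10, "m6": 0.08}

# Each heap entry: (probability, tie-breaker, {symbol: partial codeword}).
heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
counter = len(heap)
while len(heap) > 1:
    p0, _, c0 = heapq.heappop(heap)              # two least-probable nodes
    p1, _, c1 = heapq.heappop(heap)
    merged = {s: "0" + w for s, w in c0.items()}
    merged.update({s: "1" + w for s, w in c1.items()})
    heapq.heappush(heap, (p0 + p1, counter, merged))
    counter += 1

code = heap[0][2]
L = sum(probs[s] * len(w) for s, w in code.items())   # average word length
H = -sum(p * log2(p) for p in probs.values())         # source entropy
print(code)
print("L =", round(L, 2), "bits; H =", round(H, 3), "bits;",
      "efficiency =", round(H / L, 3), "; redundancy =", round(1 - H / L, 3))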


OR

2. a) Explain mutual information.
b) A zero-memory source emits six messages (N, I, R, K, A, T) with probabilities (0.30, 0.10, 0.02, 0.15, 0.40, 0.03) respectively. Given that A is coded as 0, find :
i) The entropy of the source.
ii) The Shannon-Fano code; tabulate it.
iii) The original symbol sequence of the Shannon-Fano coded signal (110011110111111110100).

3. a) Explain the JPEG algorithm with the help of a neat diagram.
b) For a Gaussian channel, C = B log2(1 + (Eb/N0)(C/B)).
i) Find the Shannon limit.
ii) Draw the bandwidth-efficiency diagram with (Eb/N0) in dB on the horizontal axis and (Rb/B) on the vertical axis. Mark the different regions and the Shannon limit on the graph.

OR

4. a) A voice-grade channel of the telephone network has a bandwidth of 3.4 kHz. Calculate the information capacity of the telephone channel for an SNR of 30 dB.
b) Find a generator polynomial g(x) for a systematic (7, 4) cyclic code and find the code vectors for the following data vectors : 1010, 1111, 0001 and 1000. Given that x^7 + 1 = (x + 1)(x^3 + x + 1)(x^3 + x^2 + 1).
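For Q.4, the Python sketch below first evaluates the Shannon capacity C = B log2(1 + SNR) for B = 3.4 kHz and an SNR of 30 dB (a power ratio of 1000), and then encodes the four data vectors systematically, assuming g(x) = x^3 + x + 1 is the chosen factor (x^3 + x^2 + 1 would serve equally well). The helper names and bit ordering are illustrative.

from math import log2

# Q.4(a): Shannon capacity of the voice-grade channel.
B, snr = 3400.0, 10 ** (30 / 10)                  # 30 dB -> power ratio of 1000
print("C =", round(B * log2(1 + snr)), "bit/s")   # about 33.9 kbit/s

# Q.4(b): systematic (7, 4) cyclic encoding, assuming g(x) = x^3 + x + 1 (0b1011).
# Polynomials are stored as integers with the MSB holding the highest power of x.
def gf2_mod(dividend, divisor):
    # Remainder of binary-polynomial (mod-2) division.
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def encode_74(data_bits, g=0b1011):
    # Codeword = data bits followed by the 3 parity bits rem[x^3 * d(x) / g(x)].
    d = int(data_bits, 2)
    parity = gf2_mod(d << 3, g)
    return data_bits + format(parity, "03b")

for d in ("1010", "1111", "0001", "1000"):
    print(d, "->", encode_74(d))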


5. For a systematic rate-1/2 convolutional code with n = 2, k = 1 and constraint length K = 2, the parity bit is generated as the mod-2 sum of the shift-register contents, Pn = Xn ⊕ Xn−1, that is, g = (1 1).
1) Draw the convolutional encoder and decoder.
2) Find the output for the message string {10110...}.
3) Draw the state diagram.
4) Explain the Viterbi algorithm for decoding.

OR

6. A rate-1/2 convolutional encoder with constraint length K = 3 uses two paths to generate the multiplexed output. It consists of two mod-2 adders and two shift-register stages. Path 1 has g1(D) = 1 + D^2 and path 2 has g2(D) = 1 + D + D^2.
1) Draw the encoder diagram.
2) Draw the state diagram.
3) Find the output for the message input (1 0 0 1 1).
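A compact Python sketch of the rate-1/2 encoder of Q.6 follows (changing the tap vectors adapts it to Q.5). The function name, tap ordering and output format are illustrative assumptions, and no terminating flush bits are appended.

# Illustrative rate-1/2 convolutional encoder: K = 3, g1 = 1 + D^2, g2 = 1 + D + D^2.
def conv_encode(msg_bits, g1=(1, 0, 1), g2=(1, 1, 1)):
    # Tap vectors list the coefficients of D^0, D^1, D^2; 'state' holds the two previous bits.
    state = [0, 0]
    out = []
    for b in msg_bits:
        window = [b] + state                          # current bit, then one and two delays
        v1 = sum(t * x for t, x in zip(g1, window)) % 2
        v2 = sum(t * x for t, x in zip(g2, window)) % 2
        out.append((v1, v2))                          # multiplexed output pair per input bit
        state = [b, state[0]]                         # shift the register
    return out

print(conv_encode([1, 0, 0, 1, 1]))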
SECTION II

7. Consider the decoding of the (15, 5) error-correcting BCH code with generator polynomial g(x) having α, α^2, α^3, α^4, α^5, α^6 as roots. The roots α, α^2 and α^4 have the same minimum polynomial,
φ1(X) = φ2(X) = φ4(X) = 1 + X + X^4.
The roots α^3 and α^6 have the same minimum polynomial,
φ3(X) = φ6(X) = 1 + X + X^2 + X^3 + X^4.


The minimum polynomial of α^5 is φ5(X) = 1 + X + X^2.
i) Find g(x) as LCM{φ1(X), φ3(X), φ5(X)}.
ii) Let the received word be (0 0 0 1 0 1 0 0 0 0 0 0 1 0 0), that is, r(x) = x^3 + x^5 + x^12. Find the syndrome components, given that 1 + α^6 + α^9 = α^10 and 1 + α^12 + α^18 = α^5.

iii) If, through the iterative procedure, the error-location polynomial is found to be σ(x) = σ^(6)(x) = 1 + x + α^5 x^3, having roots α^3, α^10 and α^12, what are the error-location numbers and the error pattern e(x)?
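As a numerical aid for Q.7(ii), the Python sketch below builds GF(2^4) using the primitive polynomial 1 + X + X^4 (the field implied by φ1) and evaluates r(x) = x^3 + x^5 + x^12 at powers of α to obtain the syndrome components. The table construction and helper names are illustrative.

# GF(2^4) generated by alpha with alpha^4 = alpha + 1; elements are 4-bit integers.
exp_tab = [0] * 15
x = 1
for i in range(15):
    exp_tab[i] = x
    x <<= 1
    if x & 0b10000:                       # reduce modulo alpha^4 + alpha + 1
        x ^= 0b10011
log_tab = {v: i for i, v in enumerate(exp_tab)}

def alpha_pow(i):
    return exp_tab[i % 15]

def syndrome(positions, i):
    # S_i = r(alpha^i) = XOR of alpha^(i*j) over the nonzero positions j of r(x).
    s = 0
    for j in positions:
        s ^= alpha_pow(i * j)
    return s

r_positions = [3, 5, 12]                  # r(x) = x^3 + x^5 + x^12
for i in range(1, 7):
    s = syndrome(r_positions, i)
    print("S%d =" % i, "alpha^%d" % log_tab[s] if s else "0")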

OR

8. a) Write the RSA algorithm for generating the public key and the private key for encryption and decryption of plain text.
b) Plain text was encrypted using the RSA key (Kp = 33, 3). The English alphabets (A, B, ... up to Z) are numbered (1, 2, ... up to 26) respectively. The encrypted ciphertext (C) is transmitted as (28, 21, 20, 1, 5, 5, 26). The received signals are decrypted using the key (Ks = 33, 7). Find the symbols, i.e. the alphabets, after decryption. The following algorithm is given to avoid the exponentiation operation (E is the exponent) :
C := 1;
begin
  for i = 1 to E do
    C := MOD(C · P, N);
end.
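The decryption in Q.8(b) can be checked with the short Python sketch below, which applies the given multiply-and-reduce loop (C := MOD(C · P, N), repeated E times) to each received number and maps 1 to 26 back onto A to Z as stated in the question. The function name is illustrative.

# Q.8(b): decrypt each ciphertext number with the private key (N, E) = (33, 7).
def mod_exp_iterative(P, E, N):
    C = 1
    for _ in range(E):                    # the loop given in the question
        C = (C * P) % N
    return C

cipher = [28, 21, 20, 1, 5, 5, 26]
N, E = 33, 7
plain = [mod_exp_iterative(c, E, N) for c in cipher]
letters = "".join(chr(ord("A") + m - 1) for m in plain)   # 1 -> A, ..., 26 -> Z
print(plain, letters)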


9. a) Let Y1, Y2, ..., YK be the observed random variables, such that
Yk = a + b·xk + Zk, k = 1, 2, ..., K.
The constants xk, k = 1, 2, ..., K, are known, while the constants a and b are not known. The random variables Zk, k = 1, 2, ..., K, are statistically independent, each with zero mean and known variance σ^2. Obtain the ML estimate of (a, b), given that the likelihood function is
L(a, b) = Π_{k=1}^{K} (1/√(2πσ^2)) exp[−(yk − (a + b·xk))^2 / (2σ^2)].
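For orientation only (a standard least-squares result, stated as a hedged reminder rather than the prescribed solution): maximizing L(a, b) is the same as minimizing the sum of squared residuals, which leads to the usual normal equations.

Maximizing L(a, b) ⇔ minimizing J(a, b) = Σ_{k=1}^{K} (yk − a − b·xk)^2.
Setting ∂J/∂a = 0 and ∂J/∂b = 0 gives
b̂ = Σ_k (xk − x̄)(yk − ȳ) / Σ_k (xk − x̄)^2,   â = ȳ − b̂·x̄,
where x̄ and ȳ denote the sample means of the xk and yk.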

b) Let Y1 and Y2 be two statistically independent Gaussian random variables, such that E[Y1] = m, E[Y2] = 3m, and var[Y1] = var[Y2] = 1; m is unknown. Obtain the ML estimate of m.

OR

10. a) Consider the problem where the observation is given by Y = ln X + N, where X is the parameter to be estimated. X is uniformly distributed over the interval [0, 1] and N has an exponential distribution given by
fN(n) = e^(−n), n ≥ 0,
      = 0, otherwise.

Obtain the mean-square estimate x̂_ms.

b) Let Y1, Y2, ..., YK be K independent random variables with P(Yk = 1) = p and P(Yk = 0) = 1 − p, where p, 0 < p < 1, is unknown. Determine the lower bound on the variance of the estimator of p, assuming that the estimator is unbiased. Given that :


L = f(y | p) = Π_{k=1}^{K} f(yk | p),
where f(yk | p) = p^(yk) (1 − p)^(1 − yk) for yk = 0, 1 and k = 1, 2, ..., K, and 0 otherwise,
so that L = p^(KȲ) (1 − p)^(K − KȲ), since the Yk's are i.i.d. (Ȳ is the sample mean of the yk).
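A hedged sketch of the bound this leads to (the standard Cramér-Rao result for a Bernoulli parameter), for orientation:

ln L = KȲ ln p + (K − KȲ) ln(1 − p),
∂ ln L / ∂p = (KȲ − Kp) / [p(1 − p)],
so the Fisher information is I(p) = E[(∂ ln L / ∂p)^2] = K / [p(1 − p)],
and for any unbiased estimator p̂, var(p̂) ≥ 1/I(p) = p(1 − p)/K.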


11. a) A rectangular pulse of known amplitude A is transmitted starting at time instant t0 with probability 1/2. The duration T of the pulse is a random variable uniformly distributed over the interval [T1, T2]. The additive noise to the pulse is white Gaussian with zero mean and variance N0/2. Determine the likelihood ratio.
b) In a binary detection problem, the transmitted signal under hypothesis H1 is either s1(t) or s2(t), with respective probabilities P1 and P2. Assume P1 = P2 = 1/2, and that s1(t) and s2(t) are orthogonal over the observation time t ∈ [0, T]. No signal is transmitted under hypothesis H0. The additive noise is white Gaussian with zero mean and power spectral density N0/2. Obtain the optimum decision rule, assuming the minimum-probability-of-error criterion and P(H0) = P(H1) = 1/2.
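As a hedged reminder relevant to both parts of Q.11 (not the specific answer to either): the decision is based on the likelihood ratio

Λ(y) = f(y | H1) / f(y | H0),

compared with a threshold η; for the minimum-probability-of-error criterion with equal priors, as in part (b), η = P(H0)/P(H1) = 1. Where the signal under a hypothesis involves random quantities, the conditional density is first averaged over them, e.g. f(y | H1) = ∫ f(y | H1, T) fT(T) dT for the random pulse duration in part (a), and f(y | H1) = P1·f(y | s1) + P2·f(y | s2) for the two possible signals in part (b).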


OR

12. In a simple binary communication system, during each interval of T seconds, one of two possible signals s0(t) and s1(t) is transmitted. Our two hypotheses are
H0 : s0(t) was transmitted,
H1 : s1(t) was transmitted.
We assume that s0(t) = 0 and s1(t) = 1, 0 < t < T. The communication channel adds noise n(t), which is a zero-mean normal random process with variance 1. Let x(t) represent the received signal :
x(t) = si(t) + n(t), i = 0, 1.
We observe the received signal x(t) at some instant during each signaling interval. Suppose that we receive the observation X = 0.6.
a) Using the maximum likelihood test, determine which signal was transmitted. The pdf of x under each hypothesis is given by
f(x | H0) = (1/√(2π)) e^(−x^2/2),
f(x | H1) = (1/√(2π)) e^(−(x − 1)^2/2).

b) Derive the Neyman-Pearson test.
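A hedged check of Q.12(a): since both conditional densities are unit-variance Gaussians differing only in mean, the ML test reduces to comparing the observation with the midpoint of the two means.

f(x | H1) > f(x | H0)  ⇔  (x − 1)^2 < x^2  ⇔  x > 1/2.
With the observation X = 0.6 > 0.5, the ML test decides H1, i.e. s1(t) was transmitted.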

