
Solved Problems (Chapter 5)

1. A source with bandwidth 4000 Hz is sampled at the Nyquist rate. Assuming that the resulting sequence
can be approximately modeled by a DMS with alphabet A = {−2, −1, 0, 1, 2} and with corresponding
probabilities {1⁄2, 1⁄4, 1⁄8, 1⁄16, 1⁄16}, determine the rate of the source in bits/sec.

Solution:

We have

H(X) = 1⁄2 · log2 2 + 1⁄4 · log2 4 + 1⁄8 · log2 8 + 1⁄16 · log2 16 + 1⁄16 · log2 16
     = 1⁄2 + 1⁄2 + 3⁄8 + 1⁄4 + 1⁄4 = 15⁄8 = 1.875 bits/sample

and since we have 8000 samples/sec, the source produces information at a rate of 8000 · 1.875 = 15,000 bits/sec.

Note that the Nyquist rate is twice the bandwidth.
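
As a quick numerical check of this result (an illustrative Python sketch, not part of the original solution; the variable names are ours):

```python
from math import log2

# Symbol probabilities of the DMS in Problem 1
probs = [1/2, 1/4, 1/8, 1/16, 1/16]

# Entropy in bits per sample: H(X) = -sum(p * log2(p))
H = -sum(p * log2(p) for p in probs)

fs = 2 * 4000  # Nyquist rate for a 4000 Hz source, in samples/sec
print(H, fs * H)  # 1.875 bits/sample, 15000.0 bits/sec
```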

2. An information source can be modeled as a bandlimited process with a bandwidth of 6000 Hz. This
process is sampled at a rate higher than the Nyquist rate to provide a guard band of 2000 Hz. It is
observed that the resulting samples take values in the set A = {−4, −3, −1, 2, 4, 7} with probabilities
{0.2, 0.1, 0.15, 0.05, 0.3, 0.2}, respectively. What is the entropy of the discrete-time source in bits/output (sample)?
What is the entropy in bits/sec?
Solution:
The entropy of the source is

H(X) = −(0.2 log2 0.2 + 0.1 log2 0.1 + 0.15 log2 0.15 + 0.05 log2 0.05 + 0.3 log2 0.3 + 0.2 log2 0.2) ≈ 2.4087 bits/sample
The sampling rate is


fs = 2000 + 2 · 6000 = 14000 Hz
This means that 14000 samples are taken per second. Hence, the entropy of the source in bits
per second is given by

H = 14000 · 2.4087 ≈ 33,722 bits/sec
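
The same kind of check applies here (again an illustrative sketch only):

```python
from math import log2

# Symbol probabilities of the sampled source in Problem 2
probs = [0.2, 0.1, 0.15, 0.05, 0.3, 0.2]

H = -sum(p * log2(p) for p in probs)  # entropy in bits/sample
fs = 2 * 6000 + 2000                  # sampling rate with a 2000 Hz guard band

print(f"{H:.4f}", round(fs * H))      # 2.4087 bits/sample, 33722 bits/sec
```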

3. A memoryless source has the alphabet A = {−5, −3, −1, 0, 1, 3, 5} with corresponding probabilities
{0.05, 0.1, 0.1, 0.15, 0.05, 0.25, 0.3}.
a. Find the entropy of the source.
b. Assume that the source is quantized according to the quantization rule
q(−5) = q(−3) = −4,
q(−1) = q(0) = q(1) = 0,
q(3) = q(5) = 4.
Find the entropy of the quantized source.

Solution:

a. The entropy of the source is

H(X) = −(0.05 log2 0.05 + 0.1 log2 0.1 + 0.1 log2 0.1 + 0.15 log2 0.15 + 0.05 log2 0.05 + 0.25 log2 0.25 + 0.3 log2 0.3) ≈ 2.5282 bits/sample
b. After quantization, the new alphabet is B = {−4, 0, 4} and the corresponding symbol
probabilities are given by

p(−4) = 0.05 + 0.1 = 0.15,
p(0) = 0.1 + 0.15 + 0.05 = 0.3,
p(4) = 0.25 + 0.3 = 0.55.

Hence, H(Q(X)) = 1.4060 bits/sample. As observed, quantization decreases the entropy of the source.
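
Both entropies can be verified numerically (an illustrative sketch; the quantized probabilities follow the grouping above):

```python
from math import log2

def entropy(probs):
    """Entropy in bits of a probability distribution."""
    return -sum(p * log2(p) for p in probs)

# Original source of Problem 3
probs = [0.05, 0.1, 0.1, 0.15, 0.05, 0.25, 0.3]

# After quantization: p(-4) = 0.05 + 0.1, p(0) = 0.1 + 0.15 + 0.05, p(4) = 0.25 + 0.3
q_probs = [0.15, 0.3, 0.55]

print(f"H(X)    = {entropy(probs):.4f}")    # 2.5282 bits/sample
print(f"H(Q(X)) = {entropy(q_probs):.4f}")  # 1.4060 bits/sample
```
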
4. Design a Huffman code for the source in Problem 1 above.
Solution:
Huffman Encoding Algorithm.
a. Sort source outputs in decreasing order of their probabilities.
b. Merge the two least-probable outputs into a single output whose probability is the sum of
the corresponding probabilities.
c. If the number of remaining outputs is 2, then go to the next step; otherwise, go back to step a.
d. Arbitrarily assign 0 and 1 as code words for the two remaining outputs.
e. If an output is the result of the merger of two outputs in a preceding step, append a 0 and a 1 to its
current code word to obtain the code words for the two outputs that were merged, and then repeat
step e. If no output is preceded by another output in a preceding step, then stop. Applying these
steps to the source of Problem 1 gives the code sketched below.
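
The sketch below is our own illustrative Python implementation of these steps (the function name huffman and the 0/1 labeling convention are ours; it assigns the 0/1 labels as it merges, which yields the same code lengths), applied to the source of Problem 1:

```python
import heapq
from itertools import count

def huffman(probs):
    """Return a prefix-free code (symbol index -> codeword) for the given probabilities."""
    tie = count()  # tie-breaker so the heap never has to compare the symbol lists
    heap = [(p, next(tie), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    codes = {i: "" for i in range(len(probs))}
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)  # the two least probable remaining groups
        p2, _, s2 = heapq.heappop(heap)
        for i in s1:                     # extend the code word of every symbol in each merged group
            codes[i] = "0" + codes[i]
        for i in s2:
            codes[i] = "1" + codes[i]
        heapq.heappush(heap, (p1 + p2, next(tie), s1 + s2))
    return codes

probs = [1/2, 1/4, 1/8, 1/16, 1/16]       # source of Problem 1
codes = huffman(probs)
avg_len = sum(p * len(codes[i]) for i, p in enumerate(probs))
print(codes)    # one valid assignment: {0: '0', 1: '10', 2: '110', 3: '1110', 4: '1111'}
print(avg_len)  # 1.875 bits/symbol, equal to the entropy found in Problem 1
```

Because the probabilities are dyadic, the codeword lengths 1, 2, 3, 4, 4 match −log2 of each probability, so the average length equals the source entropy; other bit labelings give different codewords but the same lengths.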

5. A source has an alphabet {a1, a2, a3, a4} with corresponding probabilities {0.1, 0.2, 0.3, 0.4}.
a. Find the entropy of the source.
b. What is the minimum required average code word length to represent this source for
error-free reconstruction?
c. Design a Huffman code for the source and compare the average length of the Huffman
code with the entropy of the source.

Solution:

a. The entropy of the source is

H(X) = −(0.1 log2 0.1 + 0.2 log2 0.2 + 0.3 log2 0.3 + 0.4 log2 0.4) ≈ 1.8464 bits/symbol
b. The average codeword length is lower bounded by the entropy of the source for error-free
reconstruction. Hence, the minimum possible average codeword length is H(X) = 1.8464 bits/symbol.
c. The following figure depicts the Huffman coding scheme of the source (based on the above
procedure). One valid Huffman code assigns a4 → 0, a3 → 10, a2 → 110, a1 → 111, so the average codeword length is

R = 0.4 · 1 + 0.3 · 2 + 0.2 · 3 + 0.1 · 3 = 1.9 bits/symbol,

which is slightly larger than the entropy of the source.
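
As a numerical cross-check of parts (a) and (c) (a sketch; the codeword lengths 1, 2, 3, 3 are those produced by the Huffman procedure of Problem 4):

```python
from math import log2

probs = [0.1, 0.2, 0.3, 0.4]  # a1, a2, a3, a4
lengths = [3, 3, 2, 1]        # Huffman codeword lengths for a1..a4

H = -sum(p * log2(p) for p in probs)
avg_len = sum(p * l for p, l in zip(probs, lengths))

print(f"{H:.4f}")  # 1.8464 bits/symbol (the lower bound from part b)
print(avg_len)     # 1.9 bits/symbol > H(X)
```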

6. Design a Huffman code for a source with n output letters and corresponding probabilities
{1/2, 1/4, 1/8, . . . , 1/2^(n−1), 1/2^(n−1)}. Show that the average code word length for such a source is
equal to the source entropy.
Solution:
The following figure shows the design of the Huffman code. Note that at each step of the
algorithm the branches with the lowest probabilities (that merge together) are those at the
bottom of the tree.

In the way that the code is constructed, the first codeword (0) has length one, the second
codeword (10) has length two, and so on, until the last two codewords (111...10, 111...11), which
have length n − 1. Thus, the average codeword length is

R = 1 · (1/2) + 2 · (1/4) + · · · + (n − 2) · (1/2^(n−2)) + (n − 1) · (1/2^(n−1)) + (n − 1) · (1/2^(n−1))

Since every codeword length is exactly −log2 of the probability of the corresponding letter, each term
equals p_i log2(1/p_i), and the sum is the source entropy. Hence, the average codeword length equals H(X).
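
This equality can be spot-checked for a particular n (an illustrative sketch with n = 6; any n ≥ 2 behaves the same way):

```python
from math import log2

n = 6

# Probabilities 1/2, 1/4, ..., 1/2^(n-1), 1/2^(n-1)
probs = [1 / 2**k for k in range(1, n)] + [1 / 2**(n - 1)]

# Huffman codeword lengths: 1, 2, ..., n-2, n-1, n-1
lengths = list(range(1, n - 1)) + [n - 1, n - 1]

H = -sum(p * log2(p) for p in probs)
avg_len = sum(p * l for p, l in zip(probs, lengths))
print(H, avg_len)  # both print 1.9375 for n = 6, since each length equals -log2(p)
```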
