4 CREDITS



COURSE OUTCOMES
1. Apply the basics of information theory to source coding
   techniques.
2. Apply the knowledge of Shannon-Fano coding and the channel
   coding theorem for designing an efficient and error-free
   communication link.
3. Analyze various coding schemes for error detection and
   correction.
4. Design an optimum decoder for various coding schemes.


TEXT BOOKS
• Simon Haykin, Digital Communication Systems, Wiley India, 2013.
• P. S. Sathya Narayana, Concepts of Information Theory & Coding,
  Dynaram Publications, 2005.


Communication System

[Figure: block diagram of a communication system.]


Communication Systems
• Source encoding converts the source output to bits.
  The source input can be voice, video, text, sensor output, …
• Channel coding adds extra bits (redundancy) to the data that is
  to be transmitted over the channel.
  This redundancy helps reduce the errors introduced in the
  transmitted bits by channel noise.


MODULE 1
• Introduction to Information Theory: concept of information,
  units, entropy; marginal, conditional and joint entropies;
  relations among entropies; mutual information; information rate.
• Source coding: instantaneous codes, construction of
  instantaneous codes, Kraft's inequality, coding efficiency and
  redundancy.


CONCEPT OF INFORMATION
• Let S be a discrete random variable taking values from the
  source alphabet S = {s0, s1, …, sK-1}.
• The probability of each symbol is given by
  P(S = sk) = pk,  k = 0, 1, …, K-1.
• These probabilities must also satisfy the condition
  p0 + p1 + … + pK-1 = 1.
• Then the amount of information, or self-information, gained
  after observing the event S = sk is
  I(sk) = log2(1/pk) = -log2(pk)                          (4)
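As a quick illustration (not part of the original slides), the minimal Python sketch below evaluates equation (4) for a few probabilities; the helper name self_information is ours.

```python
# Minimal sketch of equation (4): I(sk) = log2(1/pk), in bits.
from math import log2

def self_information(p):
    """Information gained by observing a symbol of probability p (0 < p <= 1)."""
    return log2(1 / p)

print(self_information(1.0))   # 0.0 bits: a certain event conveys no information
print(self_information(0.5))   # 1.0 bit
print(self_information(0.25))  # 2.0 bits: rarer events convey more information
```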


CONCEPT OF INFORMATION
Properties of the amount of information:

• 1. I(sk) = 0 for pk = 1: a certain event conveys no information.

• 2. I(sk) ≥ 0 for 0 ≤ pk ≤ 1: observing an event never causes a
     loss of information.

• 3. I(sk) > I(sl) for pk < pl: the less probable an event is, the
     more information we gain when it occurs.

• 4. I(sk sl) = I(sk) + I(sl)
     if sk and sl are statistically independent.
UNITS OF INFORMATION
• The base of the logarithm in equation (4) is quite arbitrary.
• If the base is 3, the unit of information is the triple.
• If the base is 4, the unit of information is the quadruple.
• If the base is 10, the unit is the Hartley.
• If the base is e (natural logarithm), the unit is the nat.
• In the context of digital communication, the base is taken as 2.
• If the base is 2, the unit is the BIT.
• One bit is the amount of information that we gain when one of
  two possible and equally likely (equiprobable) events occurs.
UNITS OF INFORMATION
• 1 Hartley = 3.322 bits.
• 1 nat = 1.443 bits.
• 1 triple = 1.585 bits.
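These conversion factors follow from the change-of-base rule log_b(x) = log2(x)/log2(b); the short Python check below (ours, for illustration) reproduces them.

```python
# Each conversion factor is just log2 of the corresponding base.
from math import log2, e

print(log2(10))  # ~3.322 -> 1 Hartley (base 10) = 3.322 bits
print(log2(e))   # ~1.443 -> 1 nat     (base e)  = 1.443 bits
print(log2(3))   # ~1.585 -> 1 triple  (base 3)  = 1.585 bits
```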


ENTROPY
• I(sk) is a discrete random variable. The mean of I(sk) over the
  source alphabet S is called the entropy of the discrete
  memoryless source:

  H(S) = E[I(sk)] = Σk pk log2(1/pk),  k = 0, 1, …, K-1     (9)

• Entropy is a measure of the average information content per
  source symbol. It is the average self-information.
Question
• A source emits three symbols s0, s1, s2 with probabilities
  1/4, 1/4 and 1/2. Calculate the entropy of the source.
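A minimal Python sketch of the calculation (the entropy helper is ours); it gives H(S) = 1.5 bits/symbol for the probabilities above.

```python
# H(S) = sum_k pk * log2(1/pk), in bits per symbol.
from math import log2

def entropy(probs):
    return sum(p * log2(1 / p) for p in probs if p > 0)

print(entropy([0.25, 0.25, 0.5]))  # 1.5 bits/symbol
```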


INFORMATION RATE
• Information rate, R = r × H,
  where ‘r’ is the message rate (symbols per second) and ‘H’ is
  the entropy of the source (bits per symbol), so R is in bits
  per second.


Properties of Entropy
1. H(S) = 0 if and only if the probability pk = 1 for some k, and
   the remaining probabilities in the set are all zero; this lower
   bound on entropy corresponds to no uncertainty.

2. H(S) = log2 K if and only if pk = 1/K for all k (i.e., all the
   symbols in the alphabet are equiprobable); this upper bound on
   entropy corresponds to maximum uncertainty.

So entropy is bounded by:  0 ≤ H(S) ≤ log2 K




Entropy of Binary Memory-less Source
• This is an example of a discrete memoryless source.
• The discrete memoryless source (DMS) has the property that its
  output at a certain time does not depend on its output at any
  earlier time.
• Symbol ‘0’ occurs with probability p0 and symbol ‘1’ with
  probability p1 = 1 - p0.
• Then the entropy is given by:
  H(S) = -p0 log2 p0 - p1 log2 p1
       = -p0 log2 p0 - (1 - p0) log2(1 - p0)  bits
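A small sketch (ours) of the binary entropy function; it confirms the behaviour shown in the plot on the next slide: H is 0 at p0 = 0 or 1 and peaks at 1 bit when p0 = 0.5.

```python
# Binary entropy function H(p0) = -p0*log2(p0) - (1-p0)*log2(1-p0).
from math import log2

def binary_entropy(p0):
    if p0 in (0.0, 1.0):          # p*log2(1/p) -> 0 as p -> 0
        return 0.0
    return -p0 * log2(p0) - (1 - p0) * log2(1 - p0)

for p0 in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(p0, round(binary_entropy(p0), 3))   # 0.0, 0.469, 1.0, 0.469, 0.0
```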


Entropy as a function of probability
for a Binary Memoryless Source

[Plot: H(p0) versus symbol probability p0. H is 0 at p0 = 0 and
p0 = 1, and reaches its maximum of 1 bit at p0 = 1/2.]


Extension of Discrete Memoryless Source
• ‘n’ successive symbols constitute a block.
• The extended source S^n has a source alphabet with K^n distinct
  blocks.
• Entropy of the extended source:
  H(S^n) = n H(S)                                          (11)
• Q) Calculate the entropy of the extended source (2nd-order
  extension) for the previous problem.


• Solution:
  From the previous problem, H(S) = 1.5 bits/symbol, so
  H(S^2) = 2 H(S) = 2 × 1.5 = 3 bits/symbol.
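The same result can be checked by enumerating the K^2 = 9 blocks of the second-order extension; the sketch below (ours) assumes successive symbols are independent, as for a DMS.

```python
# H(S^2) for the source {1/4, 1/4, 1/2}: enumerate all 9 two-symbol blocks.
from math import log2
from itertools import product

probs = [0.25, 0.25, 0.5]

def entropy(ps):
    return sum(p * log2(1 / p) for p in ps if p > 0)

block_probs = [p * q for p, q in product(probs, probs)]  # symbols independent (DMS)
print(entropy(probs))        # 1.5 bits/symbol
print(entropy(block_probs))  # 3.0 bits/block = 2 * H(S)
```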


MARGINAL ENTROPY
• The marginal entropies of the channel input X and the channel
  output Y are:
  H(X) = Σj p(xj) log2(1/p(xj)),  where p(xj) = Σk p(xj, yk)
  H(Y) = Σk p(yk) log2(1/p(yk)),  where p(yk) = Σj p(xj, yk)


CONDITIONAL ENTROPY
• The conditional entropy of the source alphabet X, given that the
  output symbol Y = yk is observed, is:
  H(X/Y = yk) = Σj p(xj/yk) log2(1/p(xj/yk))
• The mean of this entropy over the output alphabet Y is
  H(X/Y) = Σk H(X/Y = yk) p(yk)
         = Σj Σk p(xj, yk) log2(1/p(xj/yk))                (1)
CONDITIONAL ENTROPY
• The quantity H(X/Y) is called a conditional entropy. It
  represents the amount of uncertainty remaining about the channel
  input after the channel output has been observed.
• Similarly, for H(Y/X):
  H(Y/X) = Σj Σk p(xj, yk) log2(1/p(yk/xj))
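As an illustration (the 2×2 joint pmf below is an assumed example, not from the slides), H(X/Y) can be computed directly from p(xj, yk):

```python
# H(X|Y) = sum_j sum_k p(xj, yk) * log2(1/p(xj|yk)), with p(xj|yk) = p(xj, yk)/p(yk).
from math import log2

p_xy = [[0.25, 0.25],     # p(xj, yk): rows index X, columns index Y
        [0.10, 0.40]]

p_y = [sum(row[k] for row in p_xy) for k in range(2)]   # marginal p(yk)

H_X_given_Y = sum(p_xy[j][k] * log2(p_y[k] / p_xy[j][k])
                  for j in range(2) for k in range(2) if p_xy[j][k] > 0)
print(H_X_given_Y)   # uncertainty about X that remains after Y is observed
```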


JOINT ENTROPY
H(X,Y) = H(X) + H(Y/X)
       = H(Y) + H(X/Y)
After substituting the corresponding formulae, we get
  H(X,Y) = Σj Σk p(xj, yk) log2(1/p(xj, yk))


RELATION AMONG ENTROPIES
• CHAIN RULE

  H(X,Y) = H(X) + H(Y/X)
         = H(Y) + H(X/Y)

• If X and Y are independent of each other, then
  H(Y/X) = H(Y), and H(X/Y) = H(X).
• Then, H(X,Y) = H(X) + H(Y).


Mutual Information
• H(X/Y) represents the amount of uncertainty remaining about the
  channel input after the channel output has been observed.
• The quantity H(X) represents the uncertainty about the channel
  input before observing the channel output.
• The difference H(X) - H(X/Y) must therefore represent the
  uncertainty about the channel input that is resolved by
  observing the channel output. This important quantity is called
  the mutual information of the channel, denoted by
  I(X;Y) = H(X) - H(X/Y)

Similarly,
  I(Y;X) = H(Y) - H(Y/X)


Properties of Mutual Information
1. The mutual information of a channel is symmetric:
   I(X;Y) = I(Y;X)

2. The mutual information is always nonnegative:
   I(X;Y) ≥ 0

3. The mutual information of a channel is related to the joint
   entropy of the channel input and channel output by
   I(X;Y) = H(X) + H(Y) - H(X,Y)
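These three properties can be checked numerically; the sketch below (ours) reuses the same assumed 2×2 joint pmf as in the conditional-entropy example.

```python
# Check I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X) = H(X) + H(Y) - H(X,Y) >= 0.
from math import log2

p_xy = [[0.25, 0.25],
        [0.10, 0.40]]
p_x = [sum(row) for row in p_xy]                          # p(xj)
p_y = [sum(row[k] for row in p_xy) for k in range(2)]     # p(yk)

def H(ps):
    return sum(p * log2(1 / p) for p in ps if p > 0)

H_XY = H([p for row in p_xy for p in row])                # joint entropy
H_X_given_Y = sum(p_xy[j][k] * log2(p_y[k] / p_xy[j][k])
                  for j in range(2) for k in range(2))
H_Y_given_X = sum(p_xy[j][k] * log2(p_x[j] / p_xy[j][k])
                  for j in range(2) for k in range(2))

print(H(p_x) - H_X_given_Y)      # I(X;Y)
print(H(p_y) - H_Y_given_X)      # same value (symmetry)
print(H(p_x) + H(p_y) - H_XY)    # same value (relation to joint entropy), >= 0
```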


1. I(X;Y) = I(Y;X)
Proof:
Consider
  H(X) = Σj p(xj) log2(1/p(xj))
       = Σj Σk p(xj, yk) log2(1/p(xj))                     (2)
(2) – (1), we get


  I(X;Y) = H(X) - H(X/Y)
         = Σj Σk p(xj, yk) log2( p(xj/yk) / p(xj) )        (3)
• From Bayes' rule for conditional probabilities:
  p(xj/yk) / p(xj) = p(yk/xj) / p(yk)
Substituting in (3):
  I(X;Y) = Σj Σk p(xj, yk) log2( p(yk/xj) / p(yk) ) = I(Y;X)
Hence the proof.


 2.
Proof:
Baye’s Rule
Sub in (3) we get

KTUStudents.in
Fundamental Inequality:

Hence the proof.

For more study materials: WWW.KTUSTUDENTS.IN


3. I(X;Y) = H(X) + H(Y) - H(X,Y)
Proof:
Joint entropy:
  H(X,Y) = Σj Σk p(xj, yk) log2(1/p(xj, yk))               (4)
• Multiply both the numerator and the denominator inside the
  logarithm by p(xj) p(yk) and rewrite (4) as follows:
  H(X,Y) = Σj Σk p(xj, yk) log2( p(xj) p(yk) / p(xj, yk) )
         + Σj Σk p(xj, yk) log2( 1 / (p(xj) p(yk)) )       (5)


• The second summation term can be written as
  Σj Σk p(xj, yk) log2( 1 / (p(xj) p(yk)) )
   = Σj p(xj) log2(1/p(xj)) + Σk p(yk) log2(1/p(yk))
   = H(X) + H(Y)


The first summation term is, from (3), equal to -I(X;Y).
So (5) can be written as
  H(X,Y) = -I(X;Y) + H(X) + H(Y)
Therefore
  I(X;Y) = H(X) + H(Y) - H(X,Y)
Hence the proof.


RELATION AMONG ENTROPIES AND MI: VENN DIAGRAM

[Venn diagram: H(X) and H(Y) drawn as two overlapping circles. The
overlap is I(X;Y), the parts outside the overlap are H(X/Y) and
H(Y/X), and the union of the two circles is H(X,Y).]


SOURCE CODING
The source encoder has to do the following:
• Short code words are used for frequent source symbols.
• Long code words are used for rare source symbols.
• The code words produced by the encoder should be in binary form.
• High compression with preserved quality is the main concern.
• The source code should be uniquely decodable, so that the
  original source sequence can be reconstructed perfectly from the
  encoded binary sequence.


Prefix Code or Instantaneous Code
• A prefix code must satisfy the prefix property: no code word
  should be a prefix of any other code word in the given code.

  Source    Probability of    Code I    Code II    Code III
  Symbol    Occurrence
  s0        0.5               0         0          0
  s1        0.25              1         10         01
  s2        0.125             00        110        011
  s3        0.125             11        111        0111

• Only Code II satisfies the prefix property.
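A direct way to see which code has the prefix property is to test every pair of code words; a small sketch (ours):

```python
# A code is a prefix code if no code word is a prefix of another code word.
def is_prefix_code(codewords):
    return not any(a != b and b.startswith(a)
                   for a in codewords for b in codewords)

codes = {
    "Code I":   ["0", "1", "00", "11"],
    "Code II":  ["0", "10", "110", "111"],
    "Code III": ["0", "01", "011", "0111"],
}
for name, words in codes.items():
    print(name, is_prefix_code(words))
# Code I: False ("0" is a prefix of "00"), Code II: True,
# Code III: False ("0" is a prefix of "01").
```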


• Decision tree for Code II:

[Decision tree: starting from the root, the branch labelled 0 ends
at s0; 10 ends at s1; 110 ends at s2; and 111 ends at s3, so every
code word can be decoded as soon as its last bit is received.]


Kraft–McMillan Inequality
• In coding theory, the Kraft–McMillan inequality gives a
  necessary and sufficient condition for the existence of a
  prefix code for a given set of code word lengths.
• Kraft–McMillan inequality:
  Σk 2^(-lk) ≤ 1,  k = 0, 1, 2, …, K-1
• where lk is the length of the code word for symbol sk.


Prefix Code or Instantaneous Code
• In the previous example table, Code I does not satisfy the
  Kraft inequality (its Kraft sum exceeds 1).
• Code II and Code III satisfy the Kraft inequality.
• Code II also satisfies the prefix property, so Code II is a
  prefix code, or instantaneous code; Code III satisfies the Kraft
  inequality but is not a prefix code.
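The Kraft sums can be verified directly from the code word lengths; the sketch below (ours) reproduces the statements above.

```python
# Kraft sum: sum_k 2^(-lk) over the code word lengths lk.
codes = {
    "Code I":   ["0", "1", "00", "11"],
    "Code II":  ["0", "10", "110", "111"],
    "Code III": ["0", "01", "011", "0111"],
}
for name, words in codes.items():
    kraft_sum = sum(2 ** -len(w) for w in words)
    print(name, kraft_sum, kraft_sum <= 1)
# Code I: 1.5 (violates the inequality), Code II: 1.0 (satisfies it with
# equality), Code III: 0.9375 (satisfies it, though it is not a prefix code).
```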


Summary : Instantaneous Code
• An instantaneous code must satisfy the prefix property: no code
  word is a prefix of any other code word.
• An instantaneous code satisfies the Kraft–McMillan inequality.
• An instantaneous code is uniquely decodable.


CODING EFFICIENCY
• Average code word length, or average number of bits per symbol:
  L = Σk pk lk
• Code efficiency:
  η = H(S) / L
  where H(S) is the source entropy; by the source coding theorem,
  H(S) is the minimum achievable average length, so η ≤ 1.


REDUNDANCY
• Redundancy, R = 1 - η
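A worked sketch (ours) for Code II and the source probabilities in the earlier table, using the formulas for the average code word length L, the efficiency η and the redundancy above:

```python
# Average length, efficiency and redundancy of Code II for probabilities
# {0.5, 0.25, 0.125, 0.125}; code word lengths are 1, 2, 3, 3.
from math import log2

probs   = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]

L_bar = sum(p * l for p, l in zip(probs, lengths))   # average code word length
H_S   = sum(p * log2(1 / p) for p in probs)          # source entropy
eta   = H_S / L_bar                                  # code efficiency

print(L_bar, H_S, eta, 1 - eta)   # 1.75, 1.75, 1.0, 0.0 -> Code II is optimal here
```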


QUESTIONS
1. Define amount of information and entropy.
2. State and prove Kraft's inequality.
3. Define information with its different units. Compare the
   different units of information.
4. Derive the relations among the entropies for transmitting a
   message X and receiving it as Y.
5. Write the properties of an instantaneous code, with examples.
6. Derive a relation for entropy. Find the maximum entropy of a
   binary source.
7. Show that mutual information is always nonnegative.


QUESTIONS
8. Define the term information. Justify the reason for choosing a
   logarithmic function for defining information.
9. Show that entropy is maximum when the symbols are equally
   likely.
10. Prove the properties of mutual information.
11. Obtain the relation for H(Y/X) of a communication channel.
12. State and explain the necessary and sufficient condition for
    a code to be instantaneous.