Information Theory and Coding Techniques
Prof. Ja-Ling Wu
Department of Computer Science
and Information Engineering
National Taiwan University
[Figure: Block diagram of a communication system: Source → Source Encoder (source coding, producing the source codeword) → Channel Encoder (channel coding, producing the channel codeword) → Modulator (modulation) → Channel (with Noise) → Demodulator → Channel Decoder (operating on the received codeword) → Source Decoder → Destination.]
Information Theory
Source Coding
Based on the characteristics/features of a source, the Source Encoder-Decoder pair is designed to reduce the source output to a minimal representation. [Shannon 1948]
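As a sketch of what "minimal representation" means, the following snippet (an illustration for this text, not code from the course) encodes a biased four-symbol source with a prefix-free variable-length code and compares the average code length with the 2 bits/symbol a fixed-length code would need; the symbols, probabilities, and codewords are all assumed for the example.

```python
# Illustrative source-coding sketch: a prefix-free variable-length code
# for a biased four-symbol source (example values, not from the slides).
probs = {'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}
code = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}  # prefix-free

# Average code length in bits per symbol.
avg_len = sum(p * len(code[s]) for s, p in probs.items())
print(avg_len)  # 1.75 bits/symbol, versus 2 bits/symbol fixed-length
```

Frequent symbols get short codewords and rare symbols get long ones, which is how the encoder exploits the source statistics.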
Channel Coding
Channel coding introduces redundancy at the channel encoder and uses this redundancy at the decoder to reconstitute the input sequence as accurately as possible; i.e., channel coding is designed to minimize the effect of channel noise.
[Figure: Channel Encoder → noisy Channel → Channel Decoder, with the three together behaving as a Noiseless Channel.]
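To make the redundancy idea concrete, here is a minimal sketch (chosen for this text; the slides do not specify a code) using a 3-fold repetition code: the encoder triples each bit and the decoder takes a majority vote, correcting any single bit flip within a triple.

```python
# Minimal channel-coding sketch: 3-fold repetition code (illustrative).
def channel_encode(bits):
    # Add redundancy: repeat each bit three times.
    return [b for b in bits for _ in range(3)]

def channel_decode(received):
    # Use the redundancy: majority vote over each triple of bits.
    return [int(sum(received[i:i+3]) >= 2) for i in range(0, len(received), 3)]

codeword = channel_encode([1, 0, 1])   # [1,1,1, 0,0,0, 1,1,1]
codeword[1] ^= 1                       # channel noise flips one bit
print(channel_decode(codeword))        # [1, 0, 1], the error is corrected
```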
Modulation
Physical channels may require electrical signals, radio signals, or optical signals. The modulator takes the channel-encoder/source-encoder output and produces waveforms that suit the physical nature of the channel and that are chosen to yield either system simplicity or optimal detection performance.
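As one hypothetical illustration (the slides do not fix a modulation scheme), binary phase-shift keying maps each bit to an antipodal sample and the demodulator decides by sign:

```python
# Illustrative BPSK modulator/demodulator sketch (scheme assumed here
# for simplicity; any waveform mapping would serve the same role).
def modulate(bits):
    # Map bit 0 -> +1.0, bit 1 -> -1.0 (one sample per bit).
    return [1.0 - 2.0 * b for b in bits]

def demodulate(samples):
    # Decide by sign: positive -> 0, negative -> 1.
    return [0 if s > 0 else 1 for s in samples]

print(demodulate(modulate([0, 1, 1, 0])))  # [0, 1, 1, 0]
```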
What is information?
An information source emits symbols from a source alphabet $\{s_1, s_2, \ldots, s_q\}$ with probabilities $\{P_1, P_2, \ldots, P_q\}$:

$$S = \begin{pmatrix} s_1 & s_2 & \cdots & s_q \\ P_1 & P_2 & \cdots & P_q \end{pmatrix}$$
Facts:
1) The information content (surprise) of a symbol is inversely related to its probability of occurrence.
2) The information content of two independent symbols is the sum of the information content of each separately. Since the probabilities of two independent choices are multiplied together to get the probability of the compound event, it is natural to define the amount of information as
$$I(s_i) = \log \frac{1}{P_i} = -\log P_i$$

As a result, we have

$$I(s_1) + I(s_2) = \log \frac{1}{P_1 P_2} = I(s_1, s_2)$$
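Both defining properties are easy to check numerically; a small sketch (symbol probabilities assumed for the example):

```python
import math

def self_info(p):
    # I(s) = log2(1/p): information, in bits, of a symbol of probability p.
    return math.log2(1.0 / p)

p1, p2 = 0.5, 0.25
# The rarer symbol carries more information (more surprise).
print(self_info(p1), self_info(p2))   # 1.0 2.0
# Additivity for independent symbols: I(s1) + I(s2) = I(s1, s2).
print(self_info(p1) + self_info(p2))  # 3.0
print(self_info(p1 * p2))             # 3.0
```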
Information Unit:
$\log_2$ : bit
$\log_e$ : nat
$\log_{10}$ : Hartley

Base conversions:
$\log_{10} 2 = 0.30103$, $\log_2 10 = 3.3219$
$\log_{10} e = 0.43429$, $\log_e 10 = 2.30259$
$\log_e 2 = 0.69315$, $\log_2 e = 1.44270$
$$\log_a X = \frac{\log_b X}{\log_b a} = (\log_a b)\,\log_b X$$
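The change-of-base identity is all that is needed to convert between the three units; a minimal sketch (the helper name `convert_info` is chosen here for illustration):

```python
import math

def convert_info(x, base_from, base_to):
    # log_a X = log_b X / log_b a: rescale a quantity measured in units
    # of log base `base_from` into units of log base `base_to`.
    return x * math.log(base_from) / math.log(base_to)

print(convert_info(1.0, 2, math.e))  # 1 bit  = 0.69315 nats
print(convert_info(1.0, 2, 10))      # 1 bit  = 0.30103 Hartleys
print(convert_info(1.0, math.e, 2))  # 1 nat  = 1.44270 bits
```

Note that the printed values are exactly the conversion constants tabulated above.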
The (base-$r$) entropy of the source

$$S = \begin{pmatrix} s_1 & s_2 & \cdots & s_q \\ P_1 & P_2 & \cdots & P_q \end{pmatrix}$$

is defined as

$$H_r(S) = \sum_{i=1}^{q} P_i \log_r \frac{1}{P_i} = -\sum_{i=1}^{q} P_i \log_r P_i$$

with

$$H_r(S) = H_2(S)\,(\log_r 2)$$
Note that the entropy of a source has no meaning unless a model of the source is included. For a sequence of numbers, if we cannot recognize that they are pseudo-random numbers, then we would probably compute the entropy based on the frequency of occurrence of the individual numbers.
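The definition translates directly into code; the sketch below (source probabilities assumed for the example) computes $H_r(S)$ and checks the change-of-base relation $H_r(S) = H_2(S)\,\log_r 2$:

```python
import math

def entropy(probs, r=2):
    # H_r(S) = sum_i P_i * log_r(1/P_i); zero-probability symbols
    # contribute nothing and are skipped.
    return sum(p * math.log(1.0 / p, r) for p in probs if p > 0)

probs = [0.5, 0.25, 0.125, 0.125]
h2 = entropy(probs, r=2)
he = entropy(probs, r=math.e)
print(h2)                        # 1.75 bits/symbol
print(he)                        # the same quantity in nats
# Change-of-base check: H_r(S) = H_2(S) * log_r(2).
print(h2 * math.log(2, math.e))  # equals he
```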
$H(X) = 0$ if and only if every $P(a_i)$ is either 0 or 1, i.e., the source output is certain.
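A quick worked check of this boundary case, using the standard convention $0 \cdot \log\frac{1}{0} = 0$:

$$H(X) = 1 \cdot \log_2\frac{1}{1} + 0 \cdot \log_2\frac{1}{0} = 0 + 0 = 0$$

whereas a fair coin with $P(a_1) = P(a_2) = \tfrac{1}{2}$ gives $H(X) = \tfrac{1}{2}\log_2 2 + \tfrac{1}{2}\log_2 2 = 1$ bit.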
Course contents
Coding Techniques