
EIE 441 Advanced Digital Communication

Lecturer: Dr. W.Y.Tam


Room no.: DE604
Phone no.: 27666265
e-mail: enwytam@polyu.edu.hk

web: www.en.polyu.edu.hk/~em/mypage.htm
Normal office hours: 8:15 am – 5:30 pm

A.1

References
Textbook:
S. Haykin, Communication Systems, John Wiley, 2001.

References:
J. G. Proakis, Digital Communications, McGraw Hill, 2001.

F. G. Stremler, Introduction to Communication Systems, Addison-Wesley, 1990.

A.2

Assessment
Assessment:
Examination 60%
Continuous work (40%):
– Test 15%
– Practical & Assignments 25%

A.3

Syllabus
Week 1-2  Information Theory
Week 1    Introduction
          Information Contents
          Entropy
Week 2    Mutual Information
          Channel Capacity

A.4

Syllabus
Week 3-5  Source Coding and Channel Coding
Week 3    Huffman Coding
          Lempel-Ziv Coding
Week 4    Linear Block Codes
Week 5    Cyclic Codes
          Convolutional Codes

A.5

Syllabus
Week 6-10  Baseband Transmission
Week 6     Matched filter
           Error rate of PAM systems
Week 7     Chung Yeung Festival
Week 8     Intersymbol Interference
           Nyquist’s Criterion
Week 9     Correlative-level Coding
           M-ary schemes
A.6

Syllabus
Week 10 Eye Patterns
Digital Subscriber Lines
Channel Equalization

A.7

Syllabus
Week 11-14  Passband Transmission
Week 11     Coherent PSK
Week 12     Hybrid Amplitude/Phase Modulations
            Coherent FSK
Week 13     Noncoherent FSK
            Differential PSK
Week 14     Comparison of Digital Modulation Schemes

A.8

Other arrangements
Tutorial
– follows each topic
– weeks 2, 3, 5, 9 and 14

Laboratory
– weeks 9 to 14

Test
– week 8

Assignment
– after each lecture
A.9

General communication system


[Block diagram: Information source → (message) → Transmitter → (signal) → Channel → Receiver → (message) → Destination, with noise added in the channel.]

The fundamental problem of communication is that of reproducing at one point (the destination), either exactly or approximately, a message selected at another point (the source).

The actual message is one selected from a set of possible messages. (Example: select “1” or “0” in a binary system.)
A.10

Components
Information source
produces a message (or a sequence of messages) to be
transmitted to the destination

Transmitter
operates on the message to produce a signal suitable for
transmission over the channel

Channel
medium used to transmit the signal from the transmitter to
the receiver. Noise may be added.
A.11

Components
Receiver
performs the reverse function of the transmitter

Destination
the person or device for which the message is intended

A.12

A. INFORMATION THEORY
Introduction
The purpose of a communication system is to carry
information-bearing baseband signals from one place to
another over a communication channel.

Information Theory deals with mathematical modeling and analysis of a communication system, rather than with physical sources and physical channels.

A.13

Introduction
Examples:
Baseband signals: Audio signals and video signals
Channel: Optical fiber and free space

Questions:
Information-bearing baseband signals: ?
Signals containing no information: ?

A.14

Information Measurement
The amount of information about an event is closely related
to its probability of occurrence.

Messages about events with a high probability of occurrence convey relatively little information.

Messages about events with a low probability of occurrence convey a relatively large amount of information.

P ↑ ⇒ I ↓        P ↓ ⇒ I ↑
A.15

Information Measurement

An information source produces symbols from a source alphabet of n symbols:

{S1, S2, S3, …, Sn}

Let the probability of producing Si be

P(Si) = Pi,  where Pi ≥ 0 and ∑i Pi = 1

A.16

Example 1-2
Question
If a receiver receives the symbol S1, how much information is received?

Case I: Consider n = 1 (only 1 symbol), P1 = 1.

Answer: S1 is transmitted for sure. Therefore, no information is received.

Case II: Consider n > 1, P1 = 1 and P2 = P3 = … = Pn = 0.

Answer: ?

A.17

Example 1-2
Conclusion
Information is related to the probability of transmission of a
symbol.
Probability = 1 ⇒ No information

A.18

Example 3
Information from 2 independent sources is the sum of the information from each source separately.

Example:
Consider n ≥ 2. If a receiver receives a symbol S1 and then a symbol S2, the received information is

I(S1 S2) = I(S1) + I(S2)

A.19

Example 3
Conclusion
Information of a message of M symbols =
sum of information of each of the M symbols.

Question
Which function satisfies the above criteria (A.20 & A.18)?
a. first-order polynomial: ax + b
b. sinusoidal function: cos or sin
c. logarithmic function: log
d. exponential function: e^x

A.20

Definition of Information
Consider a set of M symbols (messages) m1, m2, …, mM with probabilities of occurrence P1, P2, …, PM, respectively, where P1 + P2 + … + PM = 1.

The amount of information, or information content, in the k-th symbol is

I(mk) = −log2(Pk)   Unit: bit

Example: I(mk) → 0 if Pk → 1 (most likely event)
         I(mk) → ∞ if Pk → 0 (highly unlikely event)

A.21
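To make the definition concrete, here is a minimal Python sketch (the function name self_information is my own, not from the notes) evaluating I(mk) for a few probabilities:

```python
# Minimal sketch: information content I(mk) = -log2(Pk) in bits.
import math

def self_information(p: float) -> float:
    """Information content (in bits) of a symbol that occurs with probability p."""
    return -math.log2(p)

print(self_information(1.0))     # 0.0  -> a certain symbol carries no information
print(self_information(0.5))     # 1.0 bit
print(self_information(0.25))    # 2.0 bits
print(self_information(1 / 26))  # ~4.70 bits (26 equally likely letters)
```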

Sum of information
Information of symbol m1 + information of symbol m2 + information of symbol m3 + …

= I(m1) + I(m2) + … + I(mM)
= −log(P1) − log(P2) − … − log(PM)
= log(1/P1) + log(1/P2) + … + log(1/PM)
= log( 1 / (P1·P2 ⋯ PM) )
= I(m1 m2 ⋯ mM)

Note: A·log(B) = log(B^A)
A.22
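As a quick numerical check of this additivity (the probabilities below are chosen arbitrarily for illustration):

```python
# Sketch: for independent symbols, I(m1 m2 ... mM) = I(m1) + ... + I(mM),
# since P(m1 m2 ... mM) = P1 * P2 * ... * PM.
import math

probs = [0.5, 0.25, 0.125]                            # P1, P2, P3 (illustrative)
sum_of_informations = sum(-math.log2(p) for p in probs)
information_of_message = -math.log2(math.prod(probs))
print(sum_of_informations, information_of_message)    # both 6.0 bits
```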

Sum of Information
Therefore, the information in M one-symbol messages is equal to the information in one M-symbol message.

I(mk) is also called:
– the self-information associated with symbol mk
– the amount of information associated with symbol mk
– the information content associated with symbol mk

A.23

Example
A binary source with symbols m1 & m2; P(m1) = 1/2, P(m2) = 1/2.

I(m1) = I(m2) = −log2(1/2) = 1 bit

Note: log2(B) = log10(B) / log10(2)

A.24

Example 5-6
A binary source with symbols m1 & m2; P(m1) = 1/4, P(m2) = 3/4.

I(m1) = −log2(1/4) = 2 bits
I(m2) = −log2(3/4) = 0.415 bits

Source alphabet = {A, B, C, …, Y, Z}

If each symbol is produced independently and with equal probability, the information content of each symbol is

−log2(1/26) = 4.7 bits
A.25

Average Information Content (Entropy)


In practice, a source may not produce symbols with equal probabilities; each symbol then carries a different amount of information.

We need to know: on average, how much information is transmitted per symbol in an M-symbol message? In other words, what is the average information content of the symbols in a long message?

Entropy
A.26

Entropy
Consider a source that emits one of M possible symbols S1, S2, …, SM in a statistically independent manner, with probabilities P1, P2, …, PM, respectively (i.e., the occurrence of a symbol at any given time does not influence the symbols emitted at any other time).

Assume that a total of N symbols are emitted. The self-information of the i-th possible symbol is

I(Si) = −log2(Pi)   bits

A.27

Entropy
Si will occur, on average, Pi·N times as N → ∞.

Therefore, the total information of the N-symbol message is

It = −∑i N·Pi·log2(Pi)   bits   (sum over i = 1, …, M)

The average information per symbol is It / N = H, and

H = −∑i Pi·log2(Pi)   bits/symbol

H is called the source entropy (or the entropy of the source).

A.28
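A minimal Python sketch of this formula (the function name entropy is my own); terms with Pi = 0 are skipped, following the usual convention 0·log 0 = 0:

```python
# Sketch: source entropy H = -sum_i Pi*log2(Pi), in bits/symbol.
import math

def entropy(probs):
    """Entropy of a source whose symbol probabilities sum to 1."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))      # 1.0 bit/symbol
print(entropy([0.25, 0.75]))    # ~0.811 bits/symbol
```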

Maximum Entropy
The entropy is maximum when P1 = P2 = … = PM, i.e., it is equally likely for the next emitted symbol to be any one of {Si}.

The ratio H / Hmax is called the source efficiency (or relative entropy).

A.29

Example
A source with 2 symbols, with probabilities P1 = 6/8 and P2 = 2/8, emits an N = 8 symbol message.

Total information: It = ∑i N·Pi·I(Si)   (sum over i = 1, 2)

= 8·(6/8)·log2(8/6) + 8·(2/8)·log2(8/2)
= 2.49 + 4
= 6.49 bits

H = It / N = 0.811 bits/symbol
A.30
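The same numbers can be reproduced with a short sketch (the variable names are mine):

```python
# Sketch: total information and average information per symbol for the
# N = 8 message above, with P1 = 6/8 and P2 = 2/8.
import math

N = 8
probs = [6 / 8, 2 / 8]
I_total = sum(N * p * -math.log2(p) for p in probs)
print(round(I_total, 2))       # 6.49 bits
print(round(I_total / N, 3))   # 0.811 bits/symbol
```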

Example
A source with symbols '0' and '1' and probabilities P1 = 1/4, P2 = 3/4.

H = ∑i Pi·I(Si)   (sum over i = 1, 2)
  = −(1/4)·log2(1/4) − (3/4)·log2(3/4)
  = 0.811 bits/symbol

Note: If P1 = P2 = 1/2, H = 1 bit/symbol (= Hmax).

A.31

Example
A source produces symbols from the alphabet {S1, S2, S3, S4} with the following probabilities:

P1 = 1/8,  P2 = 3/8,  P3 = 3/8,  P4 = 1/8

H = −∑i Pi·log2(Pi)   (sum over i = 1, …, 4)
  = 1.8 bits/symbol

Note that if P1 = P2 = P3 = P4 = 1/4,

H = Hmax = ∑i (1/4)·log2(4) = 2 bits/symbol
A.32

Example
Consider the source alphabet {0, 1}:
Probability of producing symbol ‘0’ = p,  0 < p < 1
Probability of producing symbol ‘1’ = 1 − p

H = −P(0)·log2(P(0)) − P(1)·log2(P(1))
  = p·log2(1/p) + (1 − p)·log2(1/(1 − p))

A.33

Example
[Plot: H against p — H rises from 0 at p = 0 to a maximum of 1 at p = 1/2, then falls back to 0 at p = 1.]

H reaches its maximum when dH/dp = 0, which gives p = 1/2 and Hmax = 1 bit/symbol.

A.34
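For completeness, the derivative step (not spelled out on the slide) can be written out as follows, using d(log2 x)/dx = 1/(x ln 2):

```latex
\[
H(p) = -p\log_2 p - (1-p)\log_2 (1-p)
\]
\[
\frac{dH}{dp} = -\log_2 p - \frac{1}{\ln 2} + \log_2 (1-p) + \frac{1}{\ln 2}
              = \log_2\!\left(\frac{1-p}{p}\right)
\]
```

Setting dH/dp = 0 gives (1 − p)/p = 1, i.e. p = 1/2, and H(1/2) = 1 bit/symbol, as stated above.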

Example
In general, when there are M symbols in the source alphabet,
maximum entropy occurs when all symbols are equally
probable, i.e. P1 = P2 = ... = PM

As P1 + P2 + … + PM = 1  ⇒  P1 = P2 = … = PM = 1/M

Hmax = ∑i Pi·log2(1/Pi)
     = ∑i (1/M)·log2(M)
     = log2(M)   bits/symbol
A.35
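A quick numerical check of this result (the alphabet size M = 26 is chosen arbitrarily for illustration):

```python
# Sketch: a uniform M-symbol source attains the maximum entropy log2(M).
import math

M = 26
H_uniform = -sum((1 / M) * math.log2(1 / M) for _ in range(M))
print(round(H_uniform, 3), round(math.log2(M), 3))   # both ~4.7 bits/symbol
```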

Average information rate


When the symbols are emitted at a rate of rs symbols/sec,
the average source information rate is

R = rs H bits/sec.

A.36

Example
A source produces one symbol out of 256 at a rate of 8000 symbols/sec. Determine the information rate, assuming all 256 symbols are equally likely.

H = ∑i Pi·log2(1/Pi)
  = ∑i (1/256)·log2(256)
  = log2(256)
  = 8 bits/symbol

R = rs H = 8 × 8000 = 64 kbit/s (kbps)

A.37
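The same calculation as a short sketch (the variable names are mine):

```python
# Sketch: information rate R = rs * H for 256 equally likely symbols
# emitted at 8000 symbols/sec.
import math

M = 256
r_s = 8000            # symbols per second
H = math.log2(M)      # 8 bits/symbol when all symbols are equally likely
R = r_s * H
print(H, R)           # 8.0 bits/symbol, 64000.0 bits/s = 64 kbit/s
```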

Example
An analog signal is bandlimited to B Hz, sampled at the Nyquist rate, and quantized into 4 levels. The quantization levels Q1, Q2, Q3, Q4 are independent and occur with probabilities P1 = 1/8, P2 = 3/8, P3 = 3/8, P4 = 1/8.

H = ∑i Pi·log2(1/Pi) = 1.8 bits/symbol

rs = 2B symbols/sec (Nyquist rate)

R = rs H = 3.6B bits/sec

If all quantized samples are equally probable, H = 2 bits/symbol and R = rs H = 4B bits/sec.
A.38

Summary

Information content of a message:

I(mk) = −log2(Pk)   bits

Entropy of a source:

H = −∑i Pi·log2(Pi)   bits/symbol   (sum over i = 1, …, M)

Information rate of a source:

R = rs H   bits/second

A.39

Application

If there is a source with M symbols, the simplest coding method uses a constant codeword length L, where

log2(M) ≤ L ≤ 1 + log2(M)

Example: M = 9
log2(9) = 3.2, so choose L = 4:
S0: 0000,  S1: 0001,  …,  S8: 1000

However, if the P(Si) are not all equal, the entropy H < 3.2. We might then find a coding method with average codeword length L̄ where

L̄ ≈ H
A.40
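The gap between the fixed codeword length and the entropy can be illustrated with a short sketch; the unequal probabilities below are made up purely for illustration:

```python
# Sketch: fixed codeword length L = ceil(log2(M)) versus the source entropy H
# for an M = 9 symbol source with hypothetical unequal probabilities.
import math

M = 9
L_fixed = math.ceil(math.log2(M))   # 4 bits per symbol

probs = [0.3, 0.2, 0.15, 0.1, 0.08, 0.07, 0.05, 0.03, 0.02]   # illustrative only
H = -sum(p * math.log2(p) for p in probs)
print(L_fixed, round(H, 2))         # 4 vs ~2.77 bits/symbol
```

A variable-length code such as Huffman coding (week 3 of the syllabus) can bring the average codeword length close to H.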
