
# A Mathematical Theory of Communication by Claude Shannon

Gowshigan.S
Asian Institute of Technology, Thailand
Information Theory and Coding 20000471

07.04.2014

Gowshigan.S (AIT)

Presentation

07.04.2014

1 / 20

## Overview

1. Introduction
   - Communication system
2. Part 1: Discrete Noiseless Systems
   - The discrete noiseless channel
   - The discrete source of information
   - Graphical representation of Markov processes
   - Ergodic and mixed sources
   - Choice, uncertainty, and entropy
   - The entropy of an information source
3. Part 2: The Discrete Channel with Noise


## Introduction

Claude Elwood Shannon was an American mathematician, electronic engineer, and cryptographer known as the father of information theory.
Shannon founded information theory with a landmark paper published in 1948.


## Communication System


## Information source: produces messages, or sequences of messages, to be communicated to the receiving terminal

- A sequence of letters, as in a telegraph or teletype system
- A single function of time f(t), as in radio or telephony
- A function of time and other variables, as in black-and-white television
- Two or more functions of time, say f(t), g(t), h(t)
- Several functions of several variables
- Various combinations also occur, for example television with an associated audio channel

## Transmitter: operates on the message in some way to produce a signal suitable for transmission over the channel

Examples: vocoder systems, television, frequency modulation.


## Receiver: ordinarily performs the inverse of the operation done by the transmitter, reconstructing the message from the signal

## Destination: the person or entity for whom or which the message is intended


## The logarithm of the number of possible signals in a discrete channel increases linearly with time

The capacity to transmit information can be specified by giving this rate of increase.
How can an information source be described mathematically?
How much information, in bits per second, does a given source produce?
Frequent letters: shortest sequences
Infrequent letters: longest sequences
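In the paper's notation, this rate of increase defines the capacity C of the discrete noiseless channel:

```latex
C = \lim_{T \to \infty} \frac{\log N(T)}{T}
```

where N(T) is the number of allowed signals of duration T.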


## The Discrete Source of Information - Continued


## A physical system, or a mathematical model of a system, which produces a sequence of symbols according to certain probabilities is known as a stochastic process

## Conversely, any stochastic process which produces a discrete sequence of symbols chosen from a finite set may be considered a discrete source
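Such a symbol-producing stochastic process can be sketched in a few lines of Python. The states, symbols, and transition probabilities below are illustrative assumptions, not taken from the slides:

```python
import random

# Hypothetical transition probabilities: from each state, the next
# symbol (which is also the next state) and its probability.
TRANSITIONS = {
    "A": [("A", 0.5), ("B", 0.5)],
    "B": [("A", 0.8), ("C", 0.2)],
    "C": [("A", 1.0)],
}

def generate(length, state="A", seed=0):
    """Produce a symbol sequence according to the transition probabilities."""
    rng = random.Random(seed)
    out = []
    for _ in range(length):
        symbols, weights = zip(*TRANSITIONS[state])
        state = rng.choices(symbols, weights=weights)[0]
        out.append(state)
    return "".join(out)

print(generate(20))
```

Viewed the other way around, this generator is exactly a discrete source: a finite alphabet {A, B, C} emitted one symbol at a time.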


## Markov Process and Graphical Representation

Consider Example (B): using the same five letters, let the probabilities be .4, .1, .2, .2, .1, respectively, with successive choices independent. A typical message from this source is then:

A A A C D C B D C E A A D A D A C E D A E A D C A B E D A D D C E C A A A A A D
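A message like the one above can be drawn directly from the stated probabilities; a minimal sketch (the seed is arbitrary, so the exact letters will differ from the slide's example):

```python
import random

LETTERS = "ABCDE"
PROBS = [0.4, 0.1, 0.2, 0.2, 0.1]  # probabilities from Example (B)

def typical_message(n, seed=0):
    """Draw n letters independently with the given probabilities."""
    rng = random.Random(seed)
    return "".join(rng.choices(LETTERS, weights=PROBS, k=n))

print(typical_message(40))
```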

## Corresponding Markov Process


## Markov Process and Graphical Representation - Continued


## Simple Introduction to Ergodic Processes

Among the possible discrete Markov processes there is a group with special properties of significance in communication theory. This special class is called ergodic processes, and the corresponding sources are called ergodic sources.
In an ergodic process, every sequence produced by the process is the same in its statistical properties; this is what distinguishes it from other processes.
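This statistical regularity can be checked numerically: for an ergodic source, letter frequencies measured over one long sequence approach the underlying probabilities. A sketch, using the independent-choice source of Example (B) as a stand-in (an independent source is trivially ergodic):

```python
import random
from collections import Counter

LETTERS = "ABCDE"
PROBS = [0.4, 0.1, 0.2, 0.2, 0.1]

rng = random.Random(0)
sample = rng.choices(LETTERS, weights=PROBS, k=100_000)
freqs = {s: c / len(sample) for s, c in Counter(sample).items()}

# Empirical frequencies converge on the choice probabilities.
for letter, p in zip(LETTERS, PROBS):
    print(f"{letter}: {freqs[letter]:.3f} (expected {p})")
```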


## Choice, Uncertainty and Entropy

Let us define a quantity which will measure how much information is produced by a process.
Suppose we have a set of possible events whose probabilities of occurrence are p1, p2, ..., pn.
If there is such a measure, say H(p1, p2, ..., pn), it should have the following properties:
- H should be continuous in the pi.
- If all the pi are equal, pi = 1/n, then H should be a monotonic increasing function of n. With equally likely events there is more choice, or uncertainty, when there are more possible events.


## If a choice is broken down into two successive choices, the original H should be the weighted sum of the individual values of H

The meaning of this is illustrated in Fig. 6 of the paper. At the left we have three possibilities p1 = 1/2, p2 = 1/3, p3 = 1/6. At the right we first choose between two possibilities, each with probability 1/2, and if the second occurs we make another choice with probabilities 2/3, 1/3. The final results have the same probabilities as before. In this special case we require H(1/2, 1/3, 1/6) = H(1/2, 1/2) + (1/2) H(2/3, 1/3).
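The Fig. 6 decomposition, H(1/2, 1/3, 1/6) = H(1/2, 1/2) + (1/2) H(2/3, 1/3), is easy to verify numerically:

```python
from math import log2

def H(*probs):
    """Shannon entropy in bits of a finite probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

direct = H(1/2, 1/3, 1/6)
decomposed = H(1/2, 1/2) + 0.5 * H(2/3, 1/3)
print(direct, decomposed)  # the two values agree
```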


## Choice, Uncertainty and Entropy

The only H satisfying the three above assumptions is of the form

H = -K \sum_{i=1}^{n} p_i \log p_i

where K is a positive constant; the choice of K merely amounts to a choice of the unit of measure.
The entropy in the case of two probabilities p and q = 1 - p, namely

H = -(p \log p + q \log q),

is plotted as a function of p.
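The plotted binary entropy function can be computed directly; it vanishes at the endpoints and peaks at 1 bit when p = 1/2:

```python
from math import log2

def binary_entropy(p):
    """H(p) = -(p log p + q log q) with q = 1 - p, in bits."""
    if p in (0.0, 1.0):
        return 0.0  # limit value: certainty carries no uncertainty
    q = 1.0 - p
    return -(p * log2(p) + q * log2(q))

for p in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(p, binary_entropy(p))
```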


## Choice, Uncertainty and Entropy - Continued

The previous graph has some interesting properties:

1. H = 0 if and only if all the pi but one are zero, this one having the value unity. Thus only when we are certain of the outcome does H vanish. Otherwise H is positive.
2. For a given n, H is a maximum and equal to log n when all the pi are equal (i.e., 1/n). This is also intuitively the most uncertain situation.
3. Suppose there are two events, x and y, in question, with m possibilities for the first and n for the second. Let p(i, j) be the probability of the joint occurrence of i for the first and j for the second. The entropy of the joint event is

   H(x, y) = -\sum_{i,j} p(i, j) \log p(i, j)

   and it is easily shown that H(x, y) \le H(x) + H(y).
4. Any change toward equalization of the probabilities p1, p2, ..., pn increases H.
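The inequality H(x, y) ≤ H(x) + H(y) can be checked on a small example. The joint distribution below is an illustrative assumption (any valid p(i, j) would do):

```python
from math import log2

# Hypothetical joint distribution p(i, j) for two binary events x and y.
P = [[0.30, 0.10],
     [0.05, 0.55]]

def H(probs):
    """Entropy in bits of a list of probabilities."""
    return -sum(p * log2(p) for p in probs if p > 0)

Hxy = H([P[i][j] for i in range(2) for j in range(2)])
Hx = H([sum(row) for row in P])                              # marginal of x
Hy = H([sum(P[i][j] for i in range(2)) for j in range(2)])   # marginal of y

print(Hxy, Hx + Hy)  # joint entropy never exceeds the sum of the marginals
```

Equality holds only when x and y are independent; this distribution is not independent, so the inequality is strict here.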


## The Entropy of an Information Source

Consider a discrete source of the finite state type considered above. For each possible state i there will be a set of probabilities pi(j) of producing the various possible symbols j.
Thus there is an entropy Hi for each state.
The entropy of the source is defined as the average of these Hi, weighted in accordance with the probability of occurrence of the states in question:

H = \sum_i P_i H_i = -\sum_{i,j} P_i \, p_i(j) \log p_i(j)

This is the entropy of the source per symbol of text.
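The weighted-average formula can be evaluated directly. The state probabilities P_i and per-state symbol probabilities p_i(j) below are made up for illustration:

```python
from math import log2

# Hypothetical two-state source: stationary state probabilities P_i
# and, for each state, the probabilities p_i(j) of emitting each symbol.
P = [0.6, 0.4]
p = [
    {"A": 0.5, "B": 0.5},   # symbols emitted from state 0
    {"A": 0.1, "B": 0.9},   # symbols emitted from state 1
]

# H = sum_i P_i H_i = -sum_{i,j} P_i p_i(j) log p_i(j)
H = -sum(P[i] * pj * log2(pj) for i in range(2) for pj in p[i].values())
print(H)  # entropy of the source in bits per symbol
```

State 0 alone would contribute 1 bit per symbol and state 1 far less, so the source entropy lands between the two, weighted by how often each state occurs.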


## Discussion

As a simple example of some of these results, consider a source which produces a sequence of letters chosen from among A, B, C, D with probabilities 1/2, 1/4, 1/8, 1/8, successive symbols being chosen independently. We have

H = -(1/2 \log 1/2 + 1/4 \log 1/4 + 2/8 \log 1/8) = 7/4 bits per symbol.
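The arithmetic checks out numerically:

```python
from math import log2

probs = [1/2, 1/4, 1/8, 1/8]
H = -sum(p * log2(p) for p in probs)
print(H)  # 1.75 bits per symbol
```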


## The Discrete Channel with Noise

E = f(S, N): the received signal E can be regarded as a function of the transmitted signal S and the noise N.
The noisy channel is described by a set of transition probabilities p_i(j): the probability of receiving symbol j when symbol i is transmitted.
The capacity of a noisy channel is defined as

C = \max \big( H(x) - H_y(x) \big)

where H_y(x) is the equivocation, and the maximum is taken over all possible information sources used as input to the channel.
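The maximization is easy to carry out for the binary symmetric channel, a standard special case not worked on this slide: with crossover probability p, the capacity comes out to C = 1 - H(p) bits per symbol.

```python
from math import log2

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    if p in (0.0, 1.0):
        return 1.0  # a deterministic channel loses nothing
    Hp = -(p * log2(p) + (1 - p) * log2(1 - p))
    return 1.0 - Hp

print(bsc_capacity(0.0))  # noiseless: 1 bit per symbol
print(bsc_capacity(0.5))  # pure noise: capacity 0
```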


The End
