Chapter 7 outline
Communication system
[Figure: Message → Source → Encoder → Channel (+ Noise) → Decoder → Destination → Estimate of message. A second diagram splits the Encoder into a Source coder followed by a Channel coder, and the Decoder into a Channel decoder followed by a Source decoder.]
" #
n
!
n ofi capacity ni
Mathematical
description
f (1 f )
Pe =
i
i=m+1
Information channel capacity:
C = max I(X; Y )
p(x)
1
C = log2 (1 + |h|2 P/PN )
2
Highest rate (bits/channel use) that can
1communicate at2 reliably
2 log2 (1 + |h| P/PN )
C = theorem' says: information capacity = operational
( capacity
Channel coding
1
2
Eh 2 log2 (1 + |h| P/PN )
)
)
)
1
)
maxQ:T r(Q)=P 2 log2 IMR + HQH
Operational channel capacity:
C=
max
r(Q)=P
ChannelQ:T
capacity
EH
)
)(
)
)
log2 IMR + HQH
2
'1
Y =Channel:
HX +p(y|x)
N
X
Y
1
X=H U+N
1I(X; Y )
Capacity
C H(H
= max
Y=
p(x) U) + N
=U+N
bits/channel use
= ,
1
C = log2B(1= +
B1 P/N)
+ B2
2
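The maximization C = max_{p(x)} I(X; Y) can be carried out numerically for any discrete memoryless channel. As an illustration (not part of the slides), a minimal sketch of the Blahut-Arimoto iteration, checked against the binary symmetric channel:

```python
import numpy as np

def blahut_arimoto(W, iters=200):
    """Numerically compute C = max_{p(x)} I(X; Y) in bits for a discrete
    memoryless channel with transition matrix W[x, y] = p(y|x).
    Minimal sketch of the Blahut-Arimoto iteration."""
    def kl_bits(w, q):
        m = w > 0
        return float(np.sum(w[m] * np.log2(w[m] / q[m])))

    nx = W.shape[0]
    p = np.full(nx, 1.0 / nx)          # start from the uniform input law
    for _ in range(iters):
        q = p @ W                      # induced output distribution p(y)
        d = np.array([kl_bits(W[x], q) for x in range(nx)])
        p = p * 2.0 ** d               # reweight inputs toward larger D(W(.|x) || q)
        p /= p.sum()
    q = p @ W
    return sum(p[x] * kl_bits(W[x], q) for x in range(nx))

# Binary symmetric channel with crossover f = 0.1:
f = 0.1
W = np.array([[1 - f, f], [f, 1 - f]])
print(blahut_arimoto(W))   # approx 1 - H(0.1), about 0.531 bits/channel use
```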
Noiseless channel
Capacity? 1 bit/channel use
C = max_{p(x)} I(X; Y) = max_{p(x)} [H(X) - 0] = 1
Non-overlapping outputs
[Figure: each input leads, with probability 1/2 each, to outputs that no other input can produce]
Capacity? 1 bit/channel use
C = max_{p(x)} I(X; Y) = max_{p(x)} [H(X) - 0] = 1
Noisy typewriter
[Figure: typewriter wheel of letters (…, D, E, …, I, J, …, M, N, O, …, S, T, …, Z); each input letter is received either unchanged or as a neighboring letter]
Capacity?
C = max_{p(x)} I(X; Y) = max_{p(x)} [H(X) - H(X|Y)] = log2(3) bits/channel use
Binary symmetric channel
[Figure: 0 → 0 and 1 → 1 with probability 1-f; 0 → 1 and 1 → 0 with probability f]
Capacity? 1 - H(f) bits/channel use
Conditional distributions:
p(y=0|x=0) = p(y=1|x=1) = 1-f
p(y=1|x=0) = p(y=0|x=1) = f
[Cover+Thomas pg.187]
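The claim C = 1 - H(f) can be checked directly from these conditional distributions by sweeping the input distribution; a small sketch (illustrative, not from the slides):

```python
import numpy as np

def h2(p):
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

def bsc_mutual_info(pi, f):
    """I(X; Y) for a BSC with P(X=1) = pi and crossover probability f:
    I = H(Y) - H(Y|X) = H2(pi(1-f) + (1-pi)f) - H2(f)."""
    return h2(pi * (1 - f) + (1 - pi) * f) - h2(f)

f = 0.1
grid = np.linspace(0, 1, 1001)
C = max(bsc_mutual_info(pi, f) for pi in grid)
print(C)   # maximum at pi = 1/2: C = 1 - H(0.1), about 0.531 bits
```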
Binary channels
Binary symmetric channel: X = {0, 1} and Y = {0, 1}

    p(y|x) = [ 1-f    f  ]
             [  f    1-f ]

Binary erasure channel: X = {0, 1} and Y = {0, ?, 1}

    p(y|x) = [ 1-f   f    0  ]
             [  0    f   1-f ]

Z channel: X = {0, 1} and Y = {0, 1}

    p(y|x) = [ 1     0  ]
             [ f    1-f ]
B. Smida (ES250), Channel Capacity, Fall 2008-09
Symmetric channels

For the binary erasure channel with erasure probability f and P(X=1) = π:
C = max_π [(1 - f)H(π) + H(f) - H(f)] = max_π (1 - f)H(π) = 1 - f    (7.14)

Here the entry in the xth row and the yth column denotes the conditional probability p(y|x) that y is received when x is sent. In this channel, all the rows of the probability transition matrix are permutations of each other and so are the columns. Such a channel is said to be symmetric. Another example of a symmetric channel is one of the form

Y = X + Z (mod c),    (7.18)
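For any symmetric channel, the uniform input achieves capacity and C = log2|Y| - H(r), where r is any row of the transition matrix; for Y = X + Z (mod c) the rows are cyclic shifts of the noise distribution. A hedged sketch (the noise distribution below is a made-up example):

```python
import numpy as np

def symmetric_channel_capacity(row):
    """C = log2(|Y|) - H(row) in bits for a symmetric channel whose
    transition-matrix rows are all permutations of `row`."""
    row = np.asarray(row, dtype=float)
    nz = row[row > 0]
    # log2|Y| - H(row), since sum(nz * log2(nz)) = -H(row)
    return float(np.log2(row.size) + np.sum(nz * np.log2(nz)))

# Y = X + Z (mod 3) with noise Z ~ (0.7, 0.2, 0.1): each row of p(y|x)
# is a cyclic shift of the noise distribution, so the channel is symmetric.
print(symmetric_channel_capacity([0.7, 0.2, 0.1]))   # log2(3) - H(Z)
```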
" #
n
!
n i
f (1 f )ni
Pe =
Properties of the channel
i capacity
i=m+1
C = max I(X; Y )
p(x)
C=
C=
C=
1
2
1
log2 (1 + |h|2 P/PN )
2
log2 (1 + |h|2 P/PN )
(
Eh 2 log2 (1 + |h| P/PN )
)
)
)
1
)
maxQ:T r(Q)=P 2 log2 IMR + HQH
'1
maxQ:T r(Q)=P EH
)
)(
'1
Y = HX + N
X = H1 U + N
Y = H(H1 U) + N
=U+N
C=
1
log2 (1 + P/N)
2
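The AWGN formula is easy to evaluate; a small sketch (illustrative, not from the slides) showing how capacity grows with SNR = P/N:

```python
import numpy as np

def awgn_capacity(P, N):
    """AWGN capacity C = (1/2) * log2(1 + P/N) bits per real channel use."""
    return 0.5 * np.log2(1 + P / N)

for snr in [1, 3, 7, 15]:
    print(snr, awgn_capacity(snr, 1))
# each doubling of (1 + SNR) buys exactly 1/2 bit per channel use
```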
Preview of the channel coding theorem
Definitions
[Figure: Message → Source → Encoder → Channel → Decoder → Destination → Estimate of message]
" #
n
!
n ofi capacity ni
Mathematical
description
f (1 f )
Pe =
i
i=m+1
Information channel capacity:
C = max I(X; Y )
p(x)
1
C = log2 (1 + |h|2 P/PN )
2
Highest rate (bits/channel use) that can
1communicate at2 reliably
2 log2 (1 + |h| P/PN )
C = theorem' says: information capacity = operational
( capacity
Channel coding
1
2
Eh 2 log2 (1 + |h| P/PN )
)
)
1
maxQ:T r(Q)=P 2 log2 )IMR + HQH )
Operational channel capacity:
Proof of Achievability
Proof of Converse
Random codes
Random codes
[Figure: Message → Source → Encoder → Channel → Decoder → Destination → Estimate of message; evaluate the transmission and its probability of error]
Random codes?
Analogy....
[Figure: typical-set picture. There are about 2^{NH(Y)} typical output sequences; each input sequence is mapped to about 2^{NH(Y|X)} of them, and each output is confusable with about 2^{NH(X|Y)} inputs. At most 2^{N(H(Y)-H(Y|X))} = 2^{NI(X;Y)} inputs can therefore be distinguished reliably.]
Strong converse: for rates above capacity, the probability of error tends to 1 exponentially fast in the block length.
Rate?
[Figure: Source → Source coder → Channel coder → Channel → Channel decoder → Source decoder → Destination]
R = # source bits / # coded bits
Majority vote
Repetition code with n = 2m+1: decode by majority vote.
Probability of error?
Pe = sum_{i=m+1}^{n} (n choose i) f^i (1-f)^{n-i}
[Figure: rate vs. error probability for repetition codes; reliable communication requires Pe → 0. With permission from David J.C. MacKay]
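The error-probability sum above is straightforward to evaluate; a quick sketch (illustrative, not from the slides) showing how Pe shrinks while the rate R = 1/n shrinks with it:

```python
from math import comb

def repetition_error_prob(n, f):
    """Majority-vote failure probability for an n = 2m+1 repetition code
    on a BSC with crossover f: Pe = sum_{i=m+1}^{n} C(n, i) f^i (1-f)^(n-i)."""
    m = (n - 1) // 2
    return sum(comb(n, i) * f**i * (1 - f)**(n - i)
               for i in range(m + 1, n + 1))

f = 0.1
for n in [1, 3, 5, 7]:
    print(n, 1 / n, repetition_error_prob(n, f))
# Pe -> 0 only as the rate R = 1/n -> 0, which motivates better codes
```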
Is the capacity R = 0?
C = max_{p(x)} I(X; Y)
[Figure: Source → Source coder → Channel coder → Channel → Channel decoder → Source decoder → Destination]
Examples

        0 0 0 1 1 1 1
    H = 0 1 1 0 0 1 1 .    (7.117)
        1 0 1 0 1 0 1

These 2^4 codewords are

    0000000  0100101  1000011  1100110
    0001111  0101010  1001100  1101001
    0010110  0110011  1010101  1110000
    0011001  0111100  1011010  1111111

Since the set of codewords is the null space of a matrix, it is linear in the sense that the sum of any two codewords is also a codeword. The set of codewords therefore forms a linear subspace of dimension 4 in the vector space of dimension 7. The first k bits represent the message, and the last n - k bits are parity check bits. Such a code is called a systematic code. The code is often identified by its block length n, the number of information bits k, and the minimum distance d. For example, the above code is called a (7,4,3) Hamming code.

FIGURE 7.11. Venn diagram with information bits and parity bits with even parity for each circle.
FIGURE 7.12. Venn diagram with one of the information bits changed.
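The linearity and minimum-distance claims can be checked mechanically from H; a short sketch (illustrative, mod-2 arithmetic via NumPy):

```python
import itertools
import numpy as np

# Parity check matrix H of (7.117): columns are all nonzero 3-bit vectors.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

# Codewords are the length-7 binary vectors in the null space of H (mod 2).
codewords = [np.array(v) for v in itertools.product([0, 1], repeat=7)
             if not (H @ np.array(v) % 2).any()]
print(len(codewords))   # 2^4 = 16, a subspace of dimension 4

# Linearity: the mod-2 sum of any two codewords is again a codeword.
cw = {tuple(c) for c in codewords}
assert all(tuple((a + b) % 2) in cw for a in codewords for b in codewords)

# Minimum distance = minimum nonzero weight = 3, hence "(7,4,3)".
print(min(int(c.sum()) for c in codewords if c.any()))
```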
Hamming codes are the simplest examples of linear parity check codes. They demonstrate the principle that underlies the construction of other linear codes. But with large block lengths it is likely that there will be more than one error in the block; in the early 1950s, Reed and Solomon developed codes that can correct multiple errors. The idea behind parity check codes is to use more than one parity check bit and to allow the parity checks to depend on various subsets of the information bits. The Hamming code that we describe below is an example of a parity check code. We describe it using some simple ideas from linear algebra.
To illustrate the principles of Hamming codes, we consider a binary code of block length 7. All operations will be done modulo 2. Consider the set of all nonzero binary vectors of length 3. Arrange them in columns to form a matrix:
        0 0 0 1 1 1 1
    H = 0 1 1 0 0 1 1 .    (7.117)
        1 0 1 0 1 0 1

Consider the set of vectors of length 7 in the null space of H (the vectors which when multiplied by H give 000). From the theory of linear spaces, since H has rank 3, we expect the null space of H to have dimension 4. These 2^4 codewords are

    0000000  0100101  1000011  1100110
    0001111  0101010  1001100  1101001
    0010110  0110011  1010101  1110000
    0011001  0111100  1011010  1111111

Achieving capacity
Linear block codes: not good enough...
Turbo codes: in 1993, Berrou et al. considered two interleaved convolutional codes with parallel cooperative decoders. Achieved close to capacity!

Low-density parity check codes: introduced by Gallager in his thesis.
Feedback capacity

Channel without feedback:
[Figure: Message → Source → Encoder → Channel → Decoder → Destination → Estimate of message]

Channel with feedback:
[Figure: the same diagram, with the channel output also fed back to the encoder]
Source-channel separation

When are we allowed to design the source and channel coder separately AND remain optimal from an end-to-end perspective?

[Figure: joint design — Message → Source → Encoder → Channel (+ Noise) → Decoder → Destination — versus separate design — Source → Source coder → Channel coder → Channel (+ Noise) → Channel decoder → Source decoder → Destination]