CHANNEL CAPACITY & CODING

LEARNING OBJECTIVES
Channel capacity
Block codes
Convolutional codes
Shannon-Hartley theorem
Error control techniques
Automatic Repeat reQuest (ARQ)
◦ Full-duplex connection, error detection codes
◦ The receiver sends feedback to the transmitter indicating whether an
error was detected in the received packet or not (Not-Acknowledgement
(NACK) and Acknowledgement (ACK), respectively).
◦ The transmitter retransmits the previously sent packet if it receives a
NACK.
Forward Error Correction (FEC)
◦ Simplex connection, error correction codes
◦ The receiver tries to correct some errors
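The ARQ exchange above can be sketched in code. This is a minimal sketch, not a real protocol: the single-parity-bit detection code, the packet format, and the channel model (which corrupts only the first attempt) are all invented for illustration.

```python
# Minimal stop-and-wait ARQ sketch with even-parity error detection.
# The detection code and channel model are illustrative assumptions.

def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(packet):
    """Return True (send ACK) if parity holds, False (send NACK)."""
    return sum(packet) % 2 == 0

def arq_transmit(bits, channel):
    """Retransmit through `channel` until the receiver returns ACK."""
    packet = add_parity(bits)
    attempts = 0
    while True:
        attempts += 1
        received = channel(packet, attempts)
        if check_parity(received):      # receiver sends ACK
            return received[:-1], attempts
        # receiver sends NACK -> transmitter repeats the packet

def channel(packet, attempt):
    """Toy channel: flips one bit on the first attempt only."""
    out = packet[:]
    if attempt == 1:
        out[0] ^= 1                     # single-bit error
    return out

data, attempts = arq_transmit([1, 0, 1, 1], channel)
print(data, attempts)                   # [1, 0, 1, 1] 2
```

Real ARQ schemes use stronger detection codes (e.g. CRCs) plus sequence numbers; the loop structure, however, is exactly the NACK-triggered retransmission described above.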
Coding gain
For a given bit-error probability, the coding gain G is the reduction in Eb/N0
that can be realized through the use of the code:
G [dB] = (Eb/N0)u [dB] - (Eb/N0)c [dB]
(Figure: bit-error probability versus Eb/N0 (dB) for the uncoded and coded systems.)
Channel models
Discrete memoryless channels
◦ Discrete input, discrete output
Gaussian channels
◦ Discrete input, continuous output
Vector Space
The set of all binary n-tuples, Vn, is called a
vector space over the binary field of two
elements 0 and 1.
The binary field has two operations: modulo-2 addition and modulo-2
multiplication.
Vector Space
Binary field :
◦ The set {0,1}, under modulo 2 binary addition and multiplication forms a field.
Addition            Multiplication
0 + 0 = 0           0 · 0 = 0
0 + 1 = 1           0 · 1 = 0
1 + 0 = 1           1 · 0 = 0
1 + 1 = 0           1 · 1 = 1
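The two tables are exactly the bitwise XOR and AND operations, which is how binary-field arithmetic is usually implemented; the helper names below are illustrative.

```python
# Modulo-2 addition is XOR; modulo-2 multiplication is AND.
def gf2_add(a, b):
    return a ^ b

def gf2_mul(a, b):
    return a & b

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} = {gf2_add(a, b)}   {a} * {b} = {gf2_mul(a, b)}")
```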
Vector Space
◦ Examples of vector spaces
◦ The set of binary n-tuples, denoted by Vn
V4 = {(0000), (0001), (0010), (0011), (0100), (0101), (0110), (0111),
(1000), (1001), (1010), (1011), (1100), (1101), (1110), (1111)}
Vector subspace of Vn
◦ A subset S of the vector space Vn is called a subspace if:
◦ The all-zero vector is in S.
◦ The sum of any two vectors in S is also in S.
◦ Example:
{(0000), (0101), (1010), (1111 )} is a subspace of V4 .
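The two subspace conditions can be checked mechanically; a short sketch (helper name is illustrative) verifies them for the example subset of V4:

```python
# Subspace candidate from the slide.
S = {(0,0,0,0), (0,1,0,1), (1,0,1,0), (1,1,1,1)}

def is_subspace(S):
    """Check the two conditions: contains zero, closed under modulo-2 addition."""
    if (0,) * 4 not in S:
        return False
    return all(tuple(a ^ b for a, b in zip(u, v)) in S
               for u in S for v in S)

print(is_subspace(S))                   # True
```

Dropping any non-zero element breaks closure, so the check fails on proper subsets such as {(0000), (0101), (1010)}.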
Linear block codes
The information bit stream is chopped into blocks of k bits.
Each block is encoded to a larger block of n bits.
The coded bits are modulated and sent over channel.
The reverse procedure is done at the receiver.
Data block (k bits) -> Channel encoder -> Codeword (n bits)
Linear block codes
Linear block code (n,k)
◦ A set C is called a linear block code if, and only if, it is a subspace of the
vector space Vn of all n-tuples.
◦ Members of C are called code-words.
◦ The all-zero codeword is a codeword.
◦ Any linear combination of code-words is a codeword.
Linear block codes – cont’d
The encoding is a mapping from Vk to the subspace C of Vn.
Bases of C
◦ A generator matrix G is constructed by taking as its rows the basis
vectors {V1, V2, ..., Vk}.
Linear block codes – cont’d
Encoding in (n,k) block code
U = mG
(u1, u2, ..., un) = (m1, m2, ..., mk) · [V1; V2; ...; Vk]
(u1, u2, ..., un) = m1 V1 + m2 V2 + ... + mk Vk
Linear block codes – cont’d
Example: Block code (6,3)

    V1   1 1 0 1 0 0
G = V2 = 0 1 1 0 1 0
    V3   1 0 1 0 0 1

Message   Codeword
000       000000
100       110100
010       011010
110       101110
001       101001
101       011101
011       110011
111       000111
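Each entry in the table is U = mG with modulo-2 arithmetic; the sketch below reproduces the table from the generator matrix above (function name is illustrative).

```python
from itertools import product

# Generator matrix from the slide: rows V1, V2, V3.
G = [[1,1,0,1,0,0],
     [0,1,1,0,1,0],
     [1,0,1,0,0,1]]

def encode(m, G):
    """Codeword U = mG, with all arithmetic modulo 2."""
    k, n = len(G), len(G[0])
    return tuple(sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n))

# Reproduce the message -> codeword table.
for m in product((0, 1), repeat=3):
    print("".join(map(str, m)), "->", "".join(map(str, encode(m, G))))
```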
Linear block codes – cont’d
Systematic block code (n,k)
◦ For a systematic code, the first (or last) k elements in the codeword are information
bits.
G = [P | Ik]
Ik = k × k identity matrix
P = k × (n - k) matrix
Linear block codes – cont’d
The Hamming weight of vector U, denoted by w(U),
is the number of non-zero elements in U.
The Hamming distance between two vectors U and
V, is the number of elements in which they differ.
d(U, V) = w(U + V)
The minimum distance of a block code is the smallest Hamming distance between
any pair of distinct codewords; for a linear code it equals the minimum weight
over the non-zero codewords:
d_min = min { w(U) : U in C, U ≠ 0 }
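These definitions can be computed directly; the sketch below finds the weight, distance, and minimum distance for the (6,3) example code (helper names are illustrative).

```python
from itertools import product

# Generator matrix of the (6,3) example code.
G = [[1,1,0,1,0,0],
     [0,1,1,0,1,0],
     [1,0,1,0,0,1]]

def encode(m):
    """U = mG over GF(2)."""
    return tuple(sum(m[i] * G[i][j] for i in range(3)) % 2 for j in range(6))

def weight(u):
    """Hamming weight w(U): number of non-zero elements."""
    return sum(u)

def dist(u, v):
    """Hamming distance d(U, V) = w(U + V)."""
    return sum(a ^ b for a, b in zip(u, v))

codewords = [encode(m) for m in product((0, 1), repeat=3)]
# For a linear code, d_min is the minimum weight over non-zero codewords.
d_min = min(weight(c) for c in codewords if any(c))
print(d_min)    # 3
```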
Linear block codes – cont’d
Error detection capability is given by
e = d_min - 1
and the error correction capability by
t = floor((d_min - 1)/2)
Linear block codes – cont’d
For memoryless channels, the probability that the decoder commits an
erroneous decoding is approximately
P_B ≈ (1/n) · Σ (from j = t+1 to n) j · C(n, j) · p^j · (1 - p)^(n-j)
where p is the channel transition probability and t is the error correction
capability of the code.
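The bound is easy to evaluate numerically; the sketch below computes it for an n = 7, t = 1 code (the (7,4) Hamming code introduced later) at an assumed transition probability p = 1e-2. The function name is illustrative.

```python
from math import comb

def block_error_bound(n, t, p):
    """P_B ~ (1/n) * sum_{j=t+1}^{n} j * C(n,j) * p**j * (1-p)**(n-j)."""
    return sum(j * comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1)) / n

# n = 7, t = 1 code on a channel with transition probability p = 1e-2.
print(block_error_bound(7, 1, 1e-2))
```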
Linear block codes – cont’d
For any linear code we can find a matrix H of size (n - k) × n whose rows are
orthogonal to the rows of G:
GH^T = 0
H is called the parity check matrix and its rows are linearly independent.
For systematic linear block codes:
H = [I_(n-k) | P^T]
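For the (6,3) example code, which is systematic with G = [P | I3], the orthogonality GH^T = 0 can be checked directly:

```python
# Build G = [P | I3] and H = [I3 | P^T] for the (6,3) example code,
# then verify GH^T = 0 over GF(2).
P = [[1,1,0],
     [0,1,1],
     [1,0,1]]
I3 = [[int(i == j) for j in range(3)] for i in range(3)]

G = [P[i] + I3[i] for i in range(3)]
H = [I3[i] + [P[j][i] for j in range(3)] for i in range(3)]  # append column i of P

GHt = [[sum(g * h for g, h in zip(grow, hrow)) % 2 for hrow in H] for grow in G]
print(GHt)              # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```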
Linear block codes – cont’d
Data source -> Format -> Channel encoding (m -> U) -> Modulation -> Channel
-> Demodulation/Detection (r) -> Channel decoding (r -> m̂) -> Format -> Data sink

r = U + e
r = (r1, r2, ..., rn)   received codeword or vector
e = (e1, e2, ..., en)   error pattern or vector

Syndrome testing:
S = rH^T = eH^T
◦ S is the syndrome of r, corresponding to the error pattern e.
Linear block codes – cont’d
Standard array
1. For row i = 2, 3, ..., 2^(n-k), find a vector in Vn of minimum weight
which is not already listed in the array.
2. Call this pattern e_i and form the i-th row as the corresponding coset.

U1 (all-zero codeword)   U2                ...   U_(2^k)
e2                       e2 + U2           ...   e2 + U_(2^k)
...
e_(2^(n-k))              e_(2^(n-k)) + U2  ...   e_(2^(n-k)) + U_(2^k)

The first column contains the coset leaders.
Linear block codes – cont’d
Standard array and syndrome table decoding
1. Calculate S = rH^T
2. Find the coset leader, ê = e_i, corresponding to S
3. Calculate Û = r + ê
◦ Note that Û = r + ê = (U + e) + ê = U + (e + ê)
◦ If ê = e, the error is corrected.
◦ If ê ≠ e, an undetectable decoding error occurs.
Linear block codes – cont’d
Example: Standard array for the (6,3) code
(Table: the eight codewords in the first row; the coset leaders in the first column.)
Linear block codes – cont’d

Error pattern   Syndrome
000000          000
000001          101
000010          011
000100          110
001000          001
010000          010
100000          100
010001          111

U = (101110) transmitted, r = (001110) received.
The syndrome of r is computed:
S = rH^T = (001110)H^T = (100)
The error pattern corresponding to this syndrome is
ê = (100000)
The corrected vector is estimated as
Û = r + ê = (001110) + (100000) = (101110)
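The whole decoding procedure above can be sketched in a few lines: compute the syndrome, look up the coset leader, and add it to r. The syndrome table is the one from this slide; helper names are illustrative.

```python
# Syndrome-table decoding for the (6,3) code, H = [I3 | P^T].
H = [[1,0,0,1,0,1],
     [0,1,0,1,1,0],
     [0,0,1,0,1,1]]

def syndrome(v):
    """S = vH^T over GF(2)."""
    return tuple(sum(a * b for a, b in zip(v, row)) % 2 for row in H)

# Syndrome -> coset leader, from the table on the slide.
coset_leaders = {
    (0,0,0): (0,0,0,0,0,0),
    (1,0,1): (0,0,0,0,0,1),
    (0,1,1): (0,0,0,0,1,0),
    (1,1,0): (0,0,0,1,0,0),
    (0,0,1): (0,0,1,0,0,0),
    (0,1,0): (0,1,0,0,0,0),
    (1,0,0): (1,0,0,0,0,0),
    (1,1,1): (0,1,0,0,0,1),
}

def decode(r):
    """U_hat = r + e_hat, where e_hat is the coset leader for S = rH^T."""
    e_hat = coset_leaders[syndrome(r)]
    return tuple(a ^ b for a, b in zip(r, e_hat))

r = (0, 0, 1, 1, 1, 0)
print(syndrome(r), decode(r))   # (1, 0, 0) (1, 0, 1, 1, 1, 0)
```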
Hamming codes
Hamming codes
◦ Hamming codes are a subclass of linear block codes and belong to the category of perfect
codes.
◦ Hamming codes are expressed as a function of a single integer m ≥ 2.
Code length:                  n = 2^m - 1
Number of information bits:   k = 2^m - m - 1
Number of parity bits:        n - k = m
Error correction capability:  t = 1
◦ The columns of the parity-check matrix, H, consist of all non-zero binary m-tuples.
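The parameter formulas above can be tabulated for small m (function name is illustrative):

```python
# Hamming-code parameters as a function of the integer m >= 2.
def hamming_params(m):
    n = 2**m - 1          # code length
    k = 2**m - m - 1      # information bits
    return n, k

for m in range(2, 6):
    n, k = hamming_params(m)
    print(f"m={m}: ({n},{k}) code, {n - k} parity bits")
```

For m = 3 this gives the (7,4) code used in the next example.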
Hamming codes
Example: Systematic Hamming code (7,4)

    1 0 0 0 1 1 1
H = 0 1 0 1 0 1 1  = [I_(3×3) | P^T]
    0 0 1 1 1 0 1

    0 1 1 1 0 0 0
G = 1 0 1 0 1 0 0  = [P | I_(4×4)]
    1 1 0 0 0 1 0
    1 1 1 0 0 0 1
Cyclic block codes
Cyclic codes are a subclass of linear block codes.
Encoding and syndrome calculation are easily performed using feedback shift-
registers.
◦ Hence, relatively long block codes can be implemented with a reasonable
complexity.
Cyclic block codes
A linear (n,k) code is called a Cyclic code if all cyclic shifts of a codeword are
also a codeword.
◦ Example:
U = (1101)
U^(1) = (1110), U^(2) = (0111), U^(3) = (1011), U^(4) = (1101) = U
Cyclic block codes
Systematic encoding algorithm for an (n,k) Cyclic code:
1. Multiply the message polynomial m(X) by X^(n-k).
2. Divide X^(n-k) m(X) by the generator polynomial g(X) to obtain the
remainder p(X).
3. Add p(X) to X^(n-k) m(X) to form the codeword
U(X) = p(X) + X^(n-k) m(X)
Cyclic block codes
Example: For the systematic (7,4) Cyclic code with
generator polynomial g(X) = 1 + X + X^3
n = 7, k = 4, n - k = 3
m = (1011)  ->  m(X) = 1 + X^2 + X^3
X^(n-k) m(X) = X^3 m(X) = X^3 (1 + X^2 + X^3) = X^3 + X^5 + X^6
Divide X^(n-k) m(X) by g(X):
X^3 + X^5 + X^6 = (1 + X + X^2 + X^3)(1 + X + X^3) + 1
                  quotient q(X)      generator g(X)   remainder p(X)
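The division step is ordinary polynomial long division with modulo-2 coefficients. The sketch below (helper name is illustrative; polynomials are bit lists, index = power of X) reproduces the quotient and remainder of this example:

```python
# GF(2) polynomial division: divide X^3 m(X) = X^3 + X^5 + X^6
# by g(X) = 1 + X + X^3.

def gf2_divmod(a, g):
    """Return (quotient, remainder) of a(X)/g(X) over GF(2)."""
    a = a[:]
    dg = len(g) - 1
    q = [0] * max(len(a) - dg, 1)
    for i in range(len(a) - 1, dg - 1, -1):
        if a[i]:                        # cancel the X^i term
            q[i - dg] = 1
            for j, gj in enumerate(g):
                a[i - dg + j] ^= gj
    return q, a[:dg]

g = [1, 1, 0, 1]                 # 1 + X + X^3
x3m = [0, 0, 0, 1, 0, 1, 1]      # X^3 + X^5 + X^6
q, p = gf2_divmod(x3m, g)
print(q, p)                      # [1, 1, 1, 1] [1, 0, 0]: q = 1+X+X^2+X^3, p(X) = 1
U = p + x3m[3:]                  # codeword U(X) = p(X) + X^3 m(X)
print(U)                         # [1, 0, 0, 1, 0, 1, 1]
```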
Cyclic block codes
2. Find the generator and parity check matrices, G and H, respectively.
g(X) = 1 + 1·X + 0·X^2 + 1·X^3  ->  (g0, g1, g2, g3) = (1101)

    1 1 0 1 0 0 0   Not in systematic form.
G = 0 1 1 0 1 0 0   We do the following:
    0 0 1 1 0 1 0   row(1) + row(3) -> row(3)
    0 0 0 1 1 0 1   row(1) + row(2) + row(4) -> row(4)

    1 1 0 1 0 0 0            1 0 0 1 0 1 1
G = 0 1 1 0 1 0 0        H = 0 1 0 1 1 1 0
    1 1 1 0 0 1 0            0 0 1 0 1 1 1
    1 0 1 0 0 0 1
    [P | I_(4×4)]            [I_(3×3) | P^T]
Example of the block codes
(Figure: bit-error probability P_B versus Eb/N0 [dB] for 8PSK and QPSK.)
Convolutional codes
Convolutional codes offer an approach to error control coding
substantially different from that of block codes.
◦ A convolutional encoder:
◦ encodes the entire data stream into a single codeword.
◦ does not need to segment the data stream into blocks of fixed size.
◦ is a machine with memory.
Convolutional codes-cont’d
A Convolutional code is specified by three parameters (n, k, K)
or (k/n, K), where
◦ k/n is the code rate (k input bits produce n output bits)
◦ K is the constraint length of the encoder
A Rate ½ Convolutional encoder
Convolutional encoder (rate ½, K=3)
◦ 3 shift-register stages, where the first one takes the incoming data
bit and the rest form the memory of the encoder.
A Rate ½ Convolutional encoder
Message sequence: m = (101)

Time   Register contents   Output (branch word u1 u2)
t1     1 0 0               1 1
t2     0 1 0               1 0
t3     1 0 1               0 0
t4     0 1 0               1 0
t5     0 0 1               1 1
t6     0 0 0               0 0
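The step-by-step table above can be sketched as an encoder loop: shift each input bit (plus a flushing zero tail) into the register and read the two modulo-2 adder outputs, here hard-coded for g1 = (111), g2 = (101). The function name is illustrative.

```python
# Rate-1/2, K = 3 convolutional encoder with g1 = (111), g2 = (101).

def conv_encode(m, K=3):
    """Encode bit list m, appending a zero tail to flush the memory."""
    state = [0] * (K - 1)               # memory, initially all-zero
    out = []
    for bit in m + [0] * (K - 1):
        reg = [bit] + state             # full register contents
        u1 = (reg[0] + reg[1] + reg[2]) % 2   # adder with g1 = (1,1,1)
        u2 = (reg[0] + reg[2]) % 2            # adder with g2 = (1,0,1)
        out.append((u1, u2))
        state = reg[:-1]                # shift
    return out

print(conv_encode([1, 0, 1]))   # [(1, 1), (1, 0), (0, 0), (1, 0), (1, 1)]
```

For m = (101) this reproduces the branch-word sequence 11 10 00 10 11 from the table.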
Effective code rate
Initialize the memory before encoding the first bit (all-zero).
Clear out the memory after encoding the last bit (all-zero).
◦ Hence, a tail of zero-bits is appended to the data bits.
For L data bits:
R_eff = L / (n(L + K - 1)) ≤ Rc
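Plugging in the numbers from the rate-½ example (L = 3 message bits, n = 2, K = 3):

```python
# Effective rate of the rate-1/2, K = 3 encoder for L = 3 data bits.
L, n, K = 3, 2, 3
R_eff = L / (n * (L + K - 1))
print(R_eff)    # 0.3, below the nominal rate Rc = 1/2
```

As L grows, the tail overhead vanishes and R_eff approaches Rc.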
Encoder representation
Vector representation:
◦ We define n binary vectors with K elements (one vector for
each modulo-2 adder). The i-th element in each vector is “1” if
the i-th stage in the shift register is connected to the
corresponding modulo-2 adder, and “0” otherwise.
◦ Example (rate ½, K = 3 encoder of the previous slides):
g1 = (111)
g2 = (101)
Encoder representation – cont’d
Impulse response representation:
◦ The impulse response is the encoder output for a single “1” input; since
the encoder is linear, the response to any message is the superposition of
shifted impulse responses.
Encoder representation – cont’d
Polynomial representation:
◦ We define n generator polynomials, one for each modulo-2
adder. Each polynomial is of degree K-1 or less and describes
the connection of the shift registers to the corresponding
modulo-2 adder.
◦ Example:
g1(X) = g0^(1) + g1^(1) X + g2^(1) X^2 = 1 + X + X^2
g2(X) = g0^(2) + g1^(2) X + g2^(2) X^2 = 1 + X^2
Encoder representation –cont’d
In more detail:
m(X) g1(X) = (1 + X^2)(1 + X + X^2) = 1 + X + X^3 + X^4
m(X) g2(X) = (1 + X^2)(1 + X^2) = 1 + X^4
m(X) g1(X) = 1 + X + 0·X^2 + X^3 + X^4
m(X) g2(X) = 1 + 0·X + 0·X^2 + 0·X^3 + X^4
U(X) = (1,1) + (1,0) X + (0,0) X^2 + (1,0) X^3 + (1,1) X^4
U = 11 10 00 10 11
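The polynomial products above are GF(2) convolutions; a short sketch (helper name is illustrative; polynomials are bit lists, index = power of X) reproduces them and the interleaved output U:

```python
# Encoding via generator polynomials: u_i(X) = m(X) g_i(X) over GF(2).

def gf2_polymul(a, b):
    """Multiply polynomials (bit lists, index = power) over GF(2)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

m  = [1, 0, 1]       # m(X)  = 1 + X^2
g1 = [1, 1, 1]       # g1(X) = 1 + X + X^2
g2 = [1, 0, 1]       # g2(X) = 1 + X^2

u1 = gf2_polymul(m, g1)      # 1 + X + X^3 + X^4
u2 = gf2_polymul(m, g2)      # 1 + X^4
print(u1, u2)                # [1, 1, 0, 1, 1] [1, 0, 0, 0, 1]

# Interleave coefficient pairs -> U = 11 10 00 10 11
U = [b for pair in zip(u1, u2) for b in pair]
print(U)                     # [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]
```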
State diagram
A finite-state machine only encounters a finite number of states.
State of a machine: the smallest amount of information that, together with a
current input to the machine, can predict the output of the machine.
In a Convolutional encoder, the state is represented by the content of the
memory.
Hence, there are 2^(K-1) states.
State diagram – cont’d
A state diagram is a way to represent the encoder.
A state diagram contains all the states and all possible transitions between
them.
Only two transitions initiate from each state.
Only two transitions end up in each state.
State diagram – cont’d
(Diagram: states S0–S3 with branches labeled input/output (branch word).)

Current state   Input   Next state   Output
S0 = 00         0       S0           00
S0 = 00         1       S2           11
S1 = 01         0       S0           11
S1 = 01         1       S2           00
S2 = 10         0       S1           10
S2 = 10         1       S3           01
S3 = 11         0       S1           01
S3 = 11         1       S3           10
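The transition table can be generated mechanically from the encoder connections: a state is the two memory stages, and stepping the machine yields the next state and branch word. The helper name is illustrative; g1 = (111) and g2 = (101) as before.

```python
# State-transition table of the K = 3, rate-1/2 encoder
# (state = contents of the two memory stages).

def step(state, bit):
    """Return (next_state, branch_word) for one input bit."""
    reg = (bit,) + state
    u1 = (reg[0] + reg[1] + reg[2]) % 2   # g1 = (1,1,1)
    u2 = (reg[0] + reg[2]) % 2            # g2 = (1,0,1)
    return reg[:2], (u1, u2)

for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for bit in (0, 1):
        nxt, out = step(state, bit)
        print(f"state {state}, input {bit} -> next {nxt}, output {out}")
```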
Trellis – cont’d
The trellis diagram is an extension of the state diagram
that shows the passage of time.
◦ Example of a section of trellis for the rate ½ code
(Diagram: one trellis section from time ti to ti+1; states S0 = 00, S1 = 01,
S2 = 10, S3 = 11; each branch is labeled input/output, e.g. S0 -> S0 on 0/00
and S0 -> S2 on 1/11.)