
CHANNEL CAPACITY & CODING

Channel Capacity
LEARNING OBJECTIVES
 Channel capacity

 Shannon Hartley theorem

 Channel coding or FEC

 Block codes

 Convolutional codes
Shannon-Hartley Theorem

Introduction to Channel Coding


What is channel coding?
Transforming signals to improve communications performance by
increasing the robustness against channel impairments (noise,
interference, fading)
Waveform coding: Transforming waveforms to better waveforms
Structured sequences: Transforming data sequences into better
sequences, having structured redundancy.
◦ “Better” in the sense of reducing probability of errors.

Error control techniques
Automatic Repeat reQuest (ARQ)
◦ Full-duplex connection, error detection codes
◦ The receiver sends feedback to the transmitter indicating
whether an error was detected in the received packet
(Negative Acknowledgement, NACK) or not (Acknowledgement, ACK).
◦ The transmitter retransmits the previously sent packet if it
receives a NACK.
Forward Error Correction (FEC)
◦ Simplex connection, error correction codes
◦ The receiver tries to correct some errors

Hybrid ARQ (ARQ+FEC)


◦ Full-duplex, error detection and correction codes
Why use error correction coding?
◦ Error performance vs. bandwidth
◦ Power vs. bandwidth
◦ Data rate vs. bandwidth
◦ Capacity vs. bandwidth

[Figure: PB versus Eb/N0 (dB), showing a coded and an uncoded curve
with operating points A-F.]

Coding gain:
For a given bit-error probability, the reduction in the Eb/N0 that can
be realized through the use of the code:

G [dB] = (Eb/N0)_u [dB] − (Eb/N0)_c [dB]
Channel models
Discrete memoryless channels
◦ Discrete input, discrete output

Binary Symmetric channels


◦ Binary input, binary output

Gaussian channels
◦ Discrete input, continuous output

Vector Space
The set of all binary n-tuples, Vn, is called a
vector space over the binary field of two
elements 0 and 1.
The binary field has two operations: modulo-2 addition and
modulo-2 multiplication.
The result of every operation stays in the same set of
two elements.

Vector Space
Binary field :
◦ The set {0,1}, under modulo 2 binary addition and multiplication forms a field.

Addition            Multiplication
0 + 0 = 0           0 · 0 = 0
0 + 1 = 1           0 · 1 = 0
1 + 0 = 1           1 · 0 = 0
1 + 1 = 0           1 · 1 = 1

◦ Binary field is also called Galois field, GF(2).
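In GF(2) these two tables are just XOR and AND on single bits; a minimal Python sketch (the function names are illustrative, not from the slides):

```python
# GF(2): addition is XOR, multiplication is AND on single bits.
def gf2_add(a, b):
    return a ^ b   # modulo-2 addition

def gf2_mul(a, b):
    return a & b   # modulo-2 multiplication

# Reproduce the two tables above.
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} = {gf2_add(a, b)}   {a} * {b} = {gf2_mul(a, b)}")
```

Note that both results always stay in {0, 1}, which is what makes the set closed under the two operations.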

Vector Space
◦ Examples of vector spaces
◦ The set of binary n-tuples, denoted by Vn
V4 = {(0000), (0001), (0010), (0011), (0100), (0101), (0110), (0111),
(1000), (1001), (1010), (1011), (1100), (1101), (1110), (1111)}

Vector subspace of Vn
◦ A subset S of the vector space is called a subspace if:
◦ The all-zero vector is in S.
◦ The sum of any two vectors in S is also in S.
◦ Example:
{(0000), (0101), (1010), (1111 )} is a subspace of V4 .

Linear block codes
The information bit stream is chopped into blocks of k bits.
Each block is encoded to a larger block of n bits.
The coded bits are modulated and sent over channel.
The reverse procedure is done at the receiver.

Data block (k bits) → Channel encoder → Codeword (n bits)

n − k redundant bits

Code rate: Rc = k / n

Linear block codes
Linear block code (n,k)
◦ A set C is called a linear block code if, and only if, it is a subspace of the vector
space Vn of all n-tuples.
◦ Members of C are called code-words.
◦ The all-zero codeword is a codeword.
◦ Any linear combination of code-words is a codeword.

Linear block codes –cont’d
Mapping (by the encoder): Vk → C, where C is a subspace of Vn.

Bases of C
◦ A generator matrix G is constructed by taking as its rows the basis
vectors {V1, V2, …, Vk}:

        [ V1 ]   [ v11 v12 … v1n ]
G   =   [ V2 ] = [ v21 v22 … v2n ]
        [ ⋮  ]   [  ⋮   ⋮      ⋮ ]
        [ Vk ]   [ vk1 vk2 … vkn ]
Linear block codes – cont’d
Encoding in (n,k) block code

U  mG  V1 
V 
(u1 , u2 ,  , u n )  (m1 , m2 , , mk )   2 
 
 
Vk 
(u1 , u2 ,  , u n )  m1  V1  m2  V2    m2  Vk

◦ The rows of G, are linearly independent.

Linear block codes – cont’d
Example: Block code (6,3)

        [ V1 ]   [ 1 1 0 1 0 0 ]
G   =   [ V2 ] = [ 0 1 1 0 1 0 ]
        [ V3 ]   [ 1 0 1 0 0 1 ]

Message vector   Codeword
000              000000
100              110100
010              011010
110              101110
001              101001
101              011101
011              110011
111              000111
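The codeword table above can be reproduced by computing U = mG over GF(2); a short Python sketch (the `encode` helper is illustrative, not from the slides):

```python
# Encode every 3-bit message of the (6,3) example as U = mG over GF(2).
from itertools import product

G = [[1, 1, 0, 1, 0, 0],   # V1
     [0, 1, 1, 0, 1, 0],   # V2
     [1, 0, 1, 0, 0, 1]]   # V3

def encode(m, G):
    # u_j = sum_i m_i * g_ij (mod 2): message row vector times G.
    return [sum(mi * g for mi, g in zip(m, col)) % 2 for col in zip(*G)]

for m in product([0, 1], repeat=3):
    print(m, encode(m, G))
```

Each printed row matches one line of the message/codeword table, e.g. message 110 maps to 101110 = V1 + V2.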

Linear block codes – cont’d
Systematic block code (n,k)
◦ For a systematic code, the first (or last) k elements in the codeword are information
bits.

G  [P I k ]
I k  k  k identity matrix
Pk  k  (n  k ) matrix

U  (u1 , u2 ,..., un )  ( p1 , p2 ,..., pn  k , m1 , m2 ,..., mk )


          
parity bits message bits

Linear block codes – cont’d
The Hamming weight of vector U, denoted by w(U),
is the number of non-zero elements in U.
The Hamming distance between two vectors U and
V, is the number of elements in which they differ.
d(U, V) = w(U + V)

The minimum distance of a block code is

d_min = min_(i≠j) d(Ui, Uj) = min_(Ui≠0) w(Ui)

Linear block codes – cont’d
Error detection capability is given by

e = d_min − 1

Error-correcting capability t of a code, defined as the maximum
number of guaranteed correctable errors per codeword, is

t = ⌊(d_min − 1) / 2⌋

Linear block codes – cont’d
For memoryless channels, the probability that the decoder commits an
erroneous decoding is bounded by

P_M ≤ Σ_(j=t+1)^(n) C(n, j) p^j (1 − p)^(n−j)

where p is the transition probability (bit error probability) of the
channel.

The decoded bit error probability is approximately

P_B ≈ (1/n) Σ_(j=t+1)^(n) j · C(n, j) p^j (1 − p)^(n−j)
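The two sums are straightforward to evaluate numerically; a Python sketch (the chosen values n = 6, t = 1, p = 0.01 are assumed for illustration):

```python
# Evaluate the two sums above; n, t, p values are assumed examples.
from math import comb

def P_M(n, t, p):
    # Bound on the message (block) error probability after correcting t errors.
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1))

def P_B(n, t, p):
    # Approximate decoded bit error probability.
    return sum(j * comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1)) / n

print(P_M(6, 1, 0.01))   # e.g. a (6,3) code with t = 1, p = 0.01
print(P_B(6, 1, 0.01))
```

As expected, P_B never exceeds P_M, since each term is scaled by j/n ≤ 1.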

Linear block codes – cont’d
For any linear code we can find a matrix H, whose rows are
orthogonal to the rows of G.

H is (n − k) × n and is called the parity check matrix; its rows are
linearly independent.
For systematic linear block codes:

G H^T = 0

H = [ I_(n−k) | P^T ]
Linear block codes – cont’d
[Block diagram: Data source → Format → Channel encoding (m → U) →
Modulation → channel → Demodulation/Detection (r) → Channel decoding
(m̂) → Format → Data sink]

r = U + e
r = (r1, r2, …, rn)  received codeword or vector
e = (e1, e2, …, en)  error pattern or vector

Syndrome testing: S = r H^T = e H^T
◦ S is the syndrome of r, corresponding to the error pattern e.

Linear block codes – cont’d
Standard array
1. For row i = 2, 3, …, 2^(n−k), find a vector in Vn of minimum weight
which is not already listed in the array.
2. Call this pattern e_i and form the i-th row as the corresponding
coset:

U1 (= 0)       U2               …   U_(2^k)
e_2            e_2 + U2         …   e_2 + U_(2^k)
⋮              ⋮                    ⋮
e_(2^(n−k))    e_(2^(n−k)) + U2 …   e_(2^(n−k)) + U_(2^k)

The entries of the first column are the coset leaders; the first row
contains the codewords, led by the all-zero codeword.

Linear block codes – cont’d
Standard array and syndrome table decoding
1. Calculate S = r H^T
2. Find the coset leader, ê = e_i, corresponding to S
3. Calculate Û = r + ê and the corresponding m̂.

Û = r + ê = (U + e) + ê = U + (e + ê)

◦ If ê = e, the error is corrected.
◦ If ê ≠ e, an undetectable decoding error occurs.

Linear block codes – cont’d
Example: Standard array for the (6,3) code

Codewords (first row); coset leaders in the first column:

000000 110100 011010 101110 101001 011101 110011 000111
000001 110101 011011 101111 101000 011100 110010 000110
000010 110110 011000 101100 101011 011111 110001 000101
000100 110000 011110 101010 101101 011001 110111 000011
001000 111100 010010 100110 100001 010101 111011 001111
010000 100100 001010 111110 111001 001101 100011 010111
100000 010100 111010 001110 001001 111101 010011 100111
010001 100101 001011 111111 111000 001100 100010 010110
Linear block codes – cont’d
Error pattern   Syndrome
000000          000
000001          101
000010          011
000100          110
001000          001
010000          010
100000          100
010001          111

Example: U = (101110) is transmitted and r = (001110) is received.
The syndrome of r is computed:
S = r H^T = (001110) H^T = (100)
The error pattern corresponding to this syndrome is ê = (100000),
so the corrected vector is estimated as
Û = r + ê = (001110) + (100000) = (101110)
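The whole decoding procedure can be sketched in Python. The matrix H below is assumed to be [I3 | P^T] for the (6,3) code (H itself is not shown on the slide, but this choice is consistent with the syndrome table above); the coset-leader dictionary is taken from that table:

```python
# Syndrome-table decoding for the (6,3) code; H = [I3 | P^T] is an
# assumption consistent with the syndrome table above.
H = [[1, 0, 0, 1, 0, 1],
     [0, 1, 0, 1, 1, 0],
     [0, 0, 1, 0, 1, 1]]

def syndrome(r, H):
    # S = r H^T: dot product of r with each row of H, mod 2.
    return tuple(sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H)

# Coset leaders keyed by syndrome, from the table above.
coset_leaders = {
    (0, 0, 0): (0, 0, 0, 0, 0, 0),
    (1, 0, 1): (0, 0, 0, 0, 0, 1),
    (0, 1, 1): (0, 0, 0, 0, 1, 0),
    (1, 1, 0): (0, 0, 0, 1, 0, 0),
    (0, 0, 1): (0, 0, 1, 0, 0, 0),
    (0, 1, 0): (0, 1, 0, 0, 0, 0),
    (1, 0, 0): (1, 0, 0, 0, 0, 0),
    (1, 1, 1): (0, 1, 0, 0, 0, 1),
}

def decode(r):
    e_hat = coset_leaders[syndrome(r, H)]        # estimated error pattern
    return tuple((ri + ei) % 2 for ri, ei in zip(r, e_hat))

print(decode((0, 0, 1, 1, 1, 0)))   # → (1, 0, 1, 1, 1, 0), the sent codeword
```

Running the slide's example recovers U = (101110) from r = (001110), since the single-bit error in position 1 has syndrome (100).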

Hamming codes

◦ Hamming codes are a subclass of linear block codes and belong to the category of perfect
codes.
◦ Hamming codes are expressed as a function of a single integer m ≥ 2:

Code length:                  n = 2^m − 1
Number of information bits:   k = 2^m − m − 1
Number of parity bits:        n − k = m
Error correction capability:  t = 1

◦ The columns of the parity-check matrix, H, consist of all non-zero binary m-tuples.

Hamming codes
Example: Systematic Hamming code (7,4)

1 0 0 0 1 1 1
H  0 1 0 1 0 1 1  [I 33 PT ]
0 0 1 1 1 0 1
0 1 1 1 0 0 0
1 0 1 0 1 0 0
G   [P I 44 ]
1 1 0 0 0 1 0
 
1 1 1 0 0 0 1 

Cyclic block codes
Cyclic codes are a subclass of linear block codes.
Encoding and syndrome calculation are easily performed using feedback shift-
registers.
◦ Hence, relatively long block codes can be implemented with a reasonable
complexity.

BCH and Reed-Solomon codes are cyclic codes.

Cyclic block codes
A linear (n,k) code is called a Cyclic code if every cyclic shift of a
codeword is also a codeword.

U = (u0, u1, u2, …, u_(n−1))

After “i” cyclic shifts of U:

U^(i) = (u_(n−i), u_(n−i+1), …, u_(n−1), u0, u1, u2, …, u_(n−i−1))

◦ Example:
U = (1101)
U^(1) = (1110)   U^(2) = (0111)   U^(3) = (1011)   U^(4) = (1101) = U
Cyclic block codes
Systematic encoding algorithm for an (n,k) Cyclic code:

1. Multiply the message polynomial m(X) by X^(n−k).
2. Divide the result of Step 1 by the generator polynomial g(X).
Let p(X) be the remainder.
3. Add p(X) to X^(n−k) m(X) to form the codeword U(X).
Cyclic block codes
Example: For the systematic (7,4) Cyclic code with
generator polynomial g(X) = 1 + X + X^3

1. Find the codeword for the message m = (1011)

n = 7, k = 4, n − k = 3
m = (1011)  →  m(X) = 1 + X^2 + X^3
X^(n−k) m(X) = X^3 m(X) = X^3 (1 + X^2 + X^3) = X^3 + X^5 + X^6

Divide X^(n−k) m(X) by g(X):

X^3 + X^5 + X^6 = (1 + X + X^2 + X^3)(1 + X + X^3) + 1
                  quotient q(X)      generator g(X)   remainder p(X)

Form the codeword polynomial:

U(X) = p(X) + X^3 m(X) = 1 + X^3 + X^5 + X^6
U = (1 0 0 1 0 1 1)
    parity bits: 1 0 0; message bits: 1 0 1 1
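The three encoding steps can be sketched with polynomials represented as integer bitmasks, where bit i holds the coefficient of X^i (the helper names are illustrative):

```python
# Polynomials over GF(2) as integer bitmasks: bit i = coefficient of X^i.
def gf2_mod(a, g):
    # Remainder of a(X) divided by g(X) over GF(2).
    dg = g.bit_length() - 1
    while a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

def cyclic_encode(m, g, n, k):
    shifted = m << (n - k)     # Step 1: X^(n-k) m(X)
    p = gf2_mod(shifted, g)    # Step 2: remainder p(X)
    return shifted | p         # Step 3: U(X) = p(X) + X^(n-k) m(X)

# Slide example: g(X) = 1 + X + X^3 -> 0b1011, m(X) = 1 + X^2 + X^3 -> 0b1101.
U = cyclic_encode(0b1101, 0b1011, 7, 4)
print(''.join(str((U >> i) & 1) for i in range(7)))   # → 1001011
```

The printed bits are (u0, u1, …, u6), matching the codeword U = (1001011) derived above.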

Cyclic block codes
2. Find the generator and parity check matrices, G and H,
respectively.

g(X) = 1 + 1·X + 0·X^2 + 1·X^3  →  (g0, g1, g2, g3) = (1101)

        [ 1 1 0 1 0 0 0 ]    Not in systematic form.
        [ 0 1 1 0 1 0 0 ]    We do the following:
G   =   [ 0 0 1 1 0 1 0 ]    row(1) + row(3) → row(3)
        [ 0 0 0 1 1 0 1 ]    row(1) + row(2) + row(4) → row(4)

        [ 1 1 0 1 0 0 0 ]              [ 1 0 0 1 0 1 1 ]
G   =   [ 0 1 1 0 1 0 0 ]      H   =   [ 0 1 0 1 1 1 0 ]
        [ 1 1 1 0 0 1 0 ]              [ 0 0 1 0 1 1 1 ]
        [ 1 0 1 0 0 0 1 ]
        = [ P | I_(4×4) ]              = [ I_(3×3) | P^T ]
Example of the block codes

[Figure: PB versus Eb/N0 [dB] curves for 8PSK and QPSK.]
Convolutional codes
Convolutional codes offer an approach to error control coding
substantially different from that of block codes.
◦ A convolutional encoder:
◦ encodes the entire data stream, into a single codeword.
◦ does not need to segment the data stream into blocks of fixed size
◦ is a machine with memory.

This fundamental difference in approach imparts a different nature


to the design and evaluation of the code.
◦ Block codes are based on algebraic/combinatorial techniques.

Convolutional codes are based on construction techniques.

Convolutional codes-cont’d
A Convolutional code is specified by three parameters (n, k, K)
or (k/n, K), where

◦ Rc = k/n is the coding rate, determining the number of data
bits per coded bit.
◦ In practice, k = 1 is usually chosen, and we assume that from
now on.
◦ K is the constraint length of the encoder, where the encoder
has K − 1 memory elements.

A Rate ½ Convolutional encoder
Convolutional encoder (rate ½, K = 3)
◦ 3 shift-register stages, where the first one takes the incoming data
bit and the rest form the memory of the encoder.

[Diagram: input data bits m enter a 3-stage shift register; one
modulo-2 adder produces the first coded bit u1 and another produces
the second coded bit u2; together (u1, u2) form the output branch word.]
A Rate ½ Convolutional encoder
Message sequence: m = (101)

Time  Register contents  Output (branch word u1 u2)
t1    100                11
t2    010                10
t3    101                00
t4    010                10
A Rate ½ Convolutional encoder
Time  Register contents  Output (branch word u1 u2)
t5    001                11
t6    000                00

m = (101)  →  Encoder  →  U = (11 10 00 10 11)
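The whole encoding run can be sketched as a small Python function (illustrative, with the connection vectors g1 = (111) and g2 = (101) of this encoder):

```python
# Rate-1/2, K = 3 convolutional encoder with g1 = (111), g2 = (101);
# K - 1 tail zeros flush the memory (function name is illustrative).
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1), K=3):
    state = [0] * K
    out = []
    for b in list(bits) + [0] * (K - 1):   # data bits followed by tail
        state = [b] + state[:-1]           # shift the new bit into the register
        out.append(sum(s * g for s, g in zip(state, g1)) % 2)  # u1
        out.append(sum(s * g for s, g in zip(state, g2)) % 2)  # u2
    return out

print(conv_encode([1, 0, 1]))   # → [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]
```

The output reproduces the branch words 11 10 00 10 11 computed step by step in the tables above.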

Effective code rate
Initialize the memory before encoding the first bit (all-zero)
Clear out the memory after encoding the last bit (all-zero)
◦ Hence, a tail of zero-bits is appended to data bits.

[data | tail] → Encoder → codeword

Effective code rate (L is the number of data bits; k = 1 is assumed):

Reff = L / (n (L + K − 1))  <  Rc
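A quick numerical check of the formula (the value L = 53 is an assumed example):

```python
# Effective code rate with K - 1 tail bits appended (k = 1 assumed);
# L = 53 is an assumed example value, not from the slides.
def effective_rate(L, n, K):
    return L / (n * (L + K - 1))

print(effective_rate(53, 2, 3))   # slightly below Rc = 1/2
```

As L grows, the tail overhead becomes negligible and Reff approaches Rc = 1/n.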

Encoder representation
Vector representation:
◦ We define n binary vectors, each with K elements (one vector for
each modulo-2 adder). The i-th element in each vector is “1” if
the i-th stage in the shift register is connected to the
corresponding modulo-2 adder, and “0” otherwise.
◦ Example (the rate ½, K = 3 encoder above):

g1 = (111)
g2 = (101)

Encoder representation – cont’d
Impulse response representation:
◦ The response of the encoder to a single “one” bit that goes
through it.
◦ Example:

Register contents   Branch word (u1 u2)
100                 1 1
010                 1 0
001                 1 1

Input sequence:  1 0 0
Output sequence: 11 10 11

For m = (101), superpose the shifted impulse responses:

Input m   Output
1         11 10 11
0            00 00 00
1               11 10 11
Modulo-2 sum: 11 10 00 10 11
Encoder representation – cont’d
Polynomial representation:
◦ We define n generator polynomials, one for each modulo-2
adder. Each polynomial is of degree K-1 or less and describes
the connection of the shift registers to the corresponding
modulo-2 adder.
◦ Example:
g1(X) = g0^(1) + g1^(1)·X + g2^(1)·X^2 = 1 + X + X^2
g2(X) = g0^(2) + g1^(2)·X + g2^(2)·X^2 = 1 + X^2

The output sequence is found as follows:

U(X) = m(X) g1(X) interlaced with m(X) g2(X)

Encoder representation –cont’d
In more detail, for m = (101), i.e. m(X) = 1 + X^2:

m(X) g1(X) = (1 + X^2)(1 + X + X^2) = 1 + X + X^3 + X^4
m(X) g2(X) = (1 + X^2)(1 + X^2)     = 1 + X^4

m(X) g1(X) = 1 + X + 0·X^2 + X^3 + X^4
m(X) g2(X) = 1 + 0·X + 0·X^2 + 0·X^3 + X^4

U(X) = (1,1) + (1,0) X + (0,0) X^2 + (1,0) X^3 + (1,1) X^4
U = 11 10 00 10 11
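The interlacing can be sketched in Python with polynomials as GF(2) bitmasks (helper names are illustrative):

```python
# Multiply m(X) by g1(X) and g2(X) over GF(2), then interlace the
# coefficients (polynomials as bitmasks, bit i = coefficient of X^i).
def gf2_polymul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a   # add a shifted copy of a(X), mod 2
        a <<= 1
        b >>= 1
    return r

m, g1, g2 = 0b101, 0b111, 0b101   # m(X) = 1 + X^2, g1 = 1 + X + X^2, g2 = 1 + X^2
v1 = gf2_polymul(m, g1)           # 1 + X + X^3 + X^4
v2 = gf2_polymul(m, g2)           # 1 + X^4
U = [((v1 >> i) & 1, (v2 >> i) & 1) for i in range(5)]
print(U)   # → [(1, 1), (1, 0), (0, 0), (1, 0), (1, 1)]
```

Reading the pairs in order gives U = 11 10 00 10 11, the same sequence obtained from the impulse response and shift-register views.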

State diagram
A finite-state machine only encounters a finite number of states.
State of a machine: the smallest amount of information that, together with a
current input to the machine, can predict the output of the machine.
In a Convolutional encoder, the state is represented by the content of the
memory. K 1
Hence, there are
2 states.

State diagram – cont’d
A state diagram is a way to represent the encoder.
A state diagram contains all the states and all possible transitions between
them.
Only two transitions initiating from a state
Only two transitions ending up in a state

State diagram – cont’d
[State diagram for the rate ½, K = 3 encoder; arcs are labeled
input/output (branch word).]

Current state  Input  Next state  Output
S0 = 00        0      S0          00
S0 = 00        1      S2          11
S1 = 01        0      S0          11
S1 = 01        1      S2          00
S2 = 10        0      S1          10
S2 = 10        1      S3          01
S3 = 11        0      S1          01
S3 = 11        1      S3          10
Trellis – cont’d
The trellis diagram is an extension of the state diagram
that shows the passage of time.
◦ Example of a section of trellis for the rate ½ code:

[Trellis section between times ti and ti+1, with states S0 = 00,
S2 = 10, S1 = 01, S3 = 11; each branch is labeled input/output:
S0 → S0 (0/00), S0 → S2 (1/11), S1 → S0 (0/11), S1 → S2 (1/00),
S2 → S1 (0/10), S2 → S3 (1/01), S3 → S1 (0/01), S3 → S3 (1/10).]
