
ECE4007 Information Theory and Coding    L T P J C: 3 0 0 4 4
Pre-requisite: ECE4001 / ECE1018 / ECE2030    Syllabus version: 1.1
Course Objectives:
1. To acquaint students with the basics of probability, information and its properties
2. To familiarize students with different channel models and their capacity
3. To teach different types of source coding techniques
4. To explain various types of channel coding techniques

Course Outcomes:
1. Comprehend and analyze the basics of probability, information and its properties
2. Examine different types of channels and determine their capacity
3. Understand the binary and non-binary source coding schemes
4. Analyze dictionary-based coding schemes for image compression
5. Understand the fundamentals of error control coding schemes
6. Construct, comprehend and analyze the advanced error control coding schemes
7. Evaluate the performance of source and channel coding techniques in image processing and wireless applications
Student Learning Outcomes (SLO): 1, 2, 18
Module:1 Introduction 4 hours
Review of Probability Theory, Introduction to information theory
Module:2 Entropy 6 hours
Uncertainty, self-information, average information, mutual information and their properties -
Entropy and information rate of Markov sources - Information measures of continuous random
variables.
Module:3 Channel Models and Capacity 5 hours
Importance and types of various channel models - Channel capacity calculation – Binary
symmetric channel, binary erasure channel - Shannon’s channel capacity and channel coding
theorem - Shannon’s limit.
Module:4 Source Coding I 6 hours
Source coding theorem - Huffman coding - Non-binary Huffman codes - Adaptive Huffman
coding - Shannon-Fano-Elias coding - Non-binary Shannon-Fano codes
Module:5 Source Coding II 6 hours
Arithmetic coding - Lempel-Ziv coding - Run-length encoding and rate distortion function -
Overview of transform coding.
Module:6 Channel Coding I 8 hours
Introduction to error control codes - Block codes, linear block codes, cyclic codes and their
properties, encoder and decoder design - serial and parallel concatenated block codes - Convolutional
codes: properties, encoder, tree diagram, trellis diagram, state diagram, transfer function of
convolutional codes, Viterbi decoding, trellis coding, Reed-Solomon codes.
Module:7 Channel Coding II 8 hours
Serial and parallel concatenated convolutional codes, Block and convolutional interleaver, Turbo
coder, Iterative Turbo decoder, Trellis coded modulation-set partitioning - LDPC Codes.
Module:8 Contemporary Issues 2 hours

Total lecture hours: 45 hours


Text Book(s)
1. Simon Haykin, “Communication Systems”, 2012, 4th Edition, Wiley India Pvt Ltd, India.
2. Ranjan Bose, “Information Theory, Coding and Cryptography”, 2015, 1st Edition, McGraw
Hill Education (India) Pvt. Ltd., India.
Reference Books
1. John G. Proakis, “Digital Communications”, 2014, 5th Edition, McGraw-Hill, McGraw Hill
Education (India) Pvt. Ltd., India.
2. Bernard Sklar and Pabitra Kumar Ray “Digital Communications: Fundamentals and
Applications”, 2012, 1st Edition, Pearson Education, India.
3. Khalid Sayood, “Introduction to Data Compression”, Reprint 2015, 4th Edition, Elsevier,
India.
Mode of Evaluation: Internal Assessment (CAT, Quizzes, Digital Assignments) & Final
Assessment Test (FAT)
Typical Projects
1. Efficient image compression using a modified SPIHT algorithm
2. Develop compression algorithms using the Discrete Wavelet Transform
3. Compress and decompress an image using modified Huffman coding
4. Apply run-length coding and Huffman encoding to compress an image
5. Adaptive Huffman coding of 2D DCT coefficients for image compression
6. Compression of an image using a chaotic map and arithmetic coding
7. Region-of-interest based lossless medical image compression
8. Write a code to build the (3, 1, 3) repetition encoder. Map the encoder output to BPSK symbols.
Transmit the symbols through an AWGN channel. Investigate the error correction capability of the
(3, 1, 3) repetition code by comparing its BER performance against uncoded transmission (a
simulation sketch follows this list).
9. Write a code to compare the BER performance and error correction capability of (3, 1, 3) and
(5, 1, 5) repetition codes. Assume BPSK modulation and AWGN channel. Also compare the
simulated results with the theoretical results.
10. Write a code to compare the performance of hard decision and soft decision Viterbi decoding
algorithms. Assume BPSK modulation and AWGN channel.
11. Write a code to build (8, 4, 3) block encoder and decoder. Compare the BER performance of
(8, 4, 3) block coder with (3,1,3) repetition codes. Assume BPSK modulation and AWGN
channel.
12. Consider the following Extended Vehicular A channel power delay profile. Write a code to
model the given profile. Also measure the channel capacity. Compare the obtained capacity with
that of a channel without fading.
Delay (ns) Power (dB)
0 0
30 -1.5
150 -1.4
310 -3.6
370 -0.6
710 -9.1
1090 -7
1730 -12
2510 -16.9

13. Performance analysis of various channels (BSC, BEC, noiseless, lossless) under AWGN.
14. FPGA implementation of linear block coding and syndrome decoding.
15. Performance of linear block codes under single errors and burst errors.
16. Performance analysis of convolutional codes under single errors and burst errors.
17. Implementation of Viterbi decoding on FPGA.
18. Efficiency comparison of different interleavers for turbo encoders.
19. Implementation of a trellis coded modulator on FPGA.
20. Developing compression algorithms for wireless multimedia sensor networks.
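For project 8, the following is a minimal Python/NumPy simulation sketch under assumed conventions (BPSK mapping 0 → +1, 1 → −1; majority-logic decoding; Eb/N0 defined per information bit); the function and variable names are illustrative, not prescribed by the syllabus:

```python
import numpy as np

rng = np.random.default_rng(1)

def repetition_ber(n_rep, ebno_db, n_bits=100_000):
    """BER of an (n, 1, n) repetition code with BPSK over AWGN.

    Majority-logic decoding; Eb/N0 is per information bit, so the
    noise variance is scaled by the code rate 1/n_rep."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * np.repeat(bits, n_rep)      # BPSK: 0 -> +1, 1 -> -1
    ebno = 10 ** (ebno_db / 10)
    sigma = np.sqrt(n_rep / (2 * ebno))           # per-symbol noise std
    received = symbols + sigma * rng.standard_normal(symbols.size)
    votes = (received < 0).reshape(-1, n_rep).sum(axis=1)
    decoded = (votes > n_rep // 2).astype(int)    # majority vote
    return np.mean(decoded != bits)

# Uncoded (n_rep = 1) vs. (3, 1, 3) repetition code:
for ebno_db in range(0, 9, 2):
    print(ebno_db, repetition_ber(1, ebno_db), repetition_ber(3, ebno_db))
```

The same function with n_rep = 5 covers the comparison in project 9; theoretical BPSK curves for comparison can be computed from the Q-function.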

Mode of evaluation: Review I, Review II and Review III


Recommended by Board of Studies
Approved by Academic Council No. 49 Date 15/03/2018
Titles for Capstone Projects

Text Encryption using Huffman Coding

Machine Learning-Based 5G-and-Beyond Channel Estimation for MIMO-OFDM Communication Systems

Implementation of Sudoku based Reversible data hiding scheme on reference image

Image Encoding using Huffman Coding Algorithm

Text Encryption using Huffman Compression

Machine Learning-Based 5G-and-Beyond Channel Estimation for MIMO-OFDM Communication Systems

Text Encryption using Huffman Coding

Encoding and Decoding an image using CNN

STEREOSCOPIC IMAGE COMPRESSION AND COMPARISON USING HUFFMAN AND ARITHMETIC ALGORITHMS

TEXT TO BRAILLE CONVERSION

Data compression and decompression software using LZ77 algorithm

Image Compression using Shannon Fano Elias Coding and run length encoding techniques

Comparative study between Huffman and Digital Watermarking Techniques for encoding and encryption of images

Achieving Secured Communication through Encoding and Decoding of Audio into Image and vice versa using Matlab

Encoding Image Using Huffman Coding

DATA HIDING IN IMAGE USING DIGITAL WATERMARKING

Convolution Encoder and Viterbi Decoder

Text Compression using Huffman and LZW Coding Techniques

Data hiding in audio

BER PERFORMANCE ANALYSIS OF LINEAR BLOCK CODES

Analyzing an EEG signal for diagnosis of epileptic seizures using MATLAB

Encryption and Encoding in images using Huffman and Digital Watermarking Technique

Image Steganography using Advanced Encryption Standard

Image Steganography based on Block-DCT

Autoencoders for fashion class noise removal

Data compression using LZSS coding algorithm.

Entry Based On Face Mask Detection using Python

Comparative study on image compression techniques

Implementing Sudoku based reversible Data Hiding scheme (on reference image)

Image Compression Using Wavelets

Covid-19 path finder using Machine learning

Data Hiding using Watermarking

Image Steganography using Generative Adversarial Networks (GANs)

VIDEO COMPRESSION AND ENCODING USING LOSSLESS HUFFMAN TECHNIQUES

Data Encryption and Decryption using Image Format

Audio and Image Encryption using AES

Encoding and Decoding by LDPC


TEXT ENCRYPTION USING HUFFMAN COMPRESSION

RSA (Rivest–Shamir–Adleman) Data Encryption and Decryption

Reversible steganography method using a combination of DL techniques (U-Net structure)

Reversible data hiding based on histogram shifting technique

Secure Banking Transaction using AES and RSA Algorithm

Encoded Modulation approach in satellite communications

Handwritten Character Recognition with Neural Networks

AUDIO COMPRESSION USING DCT

STEREOSCOPIC IMAGE COMPRESSION AND COMPARISON USING HUFFMAN AND ARITHMETIC ALGORITHMS

JPEG COMPRESSOR WITH DCT TRANSFORMATION

Movie-Recommendation-System-Using-BERT

Huffman Encoder using MATLAB


AN INTRODUCTION TO ERROR
CORRECTING CODES
Part 3
Jack Keil Wolf

ECE 154 C
Spring 2010
Introduction to LDPC Codes
• These codes were invented by Gallager in his Ph.D. dissertation at M.I.T. in 1960.
• They were ignored for many years since they were thought to be impractical.
• But with present day technology they are very practical.
• Their performance is similar to turbo codes, but they may have some implementation advantages.
Outline: Some Questions
• What is a parity check code?
• What is an LDPC code?
• What is a message passing decoder for LDPC codes?
• What is the performance of these codes?

What is a Parity Check Code?
• A binary parity check code is a block code: i.e., a collection of binary vectors of fixed length n.
• The symbols in the code satisfy r parity check equations of the form:

xa ⊕ xb ⊕ xc ⊕ … ⊕ xz = 0

where ⊕ means modulo-2 addition and xa, xb, xc, …, xz are the code symbols in the equation.
• Each codeword of length n can contain (n − r) = k information digits and r check digits.
What is a Parity Check Matrix?
• A parity check matrix is an r-row by n-column binary matrix. Remember k = n − r.
• The rows represent the equations and the columns represent the digits in the code word.
• There is a 1 in the i-th row and j-th column if and only if the j-th code digit is contained in the i-th equation.
Example: Hamming Code with n=7, k=4, and r=3
• For a code word of the form c1, c2, c3, c4, c5, c6, c7, the equations are:

c1 ⊕ c2 ⊕ c3 ⊕ c5 = 0
c1 ⊕ c2 ⊕ c4 ⊕ c6 = 0
c1 ⊕ c3 ⊕ c4 ⊕ c7 = 0.

• The parity check matrix for this code is then:

1 1 1 0 1 0 0
1 1 0 1 0 1 0
1 0 1 1 0 0 1

• Note that c1 is contained in all three equations while c2 is contained in only the first two equations.
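As a quick check of these equations, here is a minimal Python/NumPy sketch (an addition, not part of the original slides) that builds this parity check matrix and verifies a codeword via its syndrome:

```python
import numpy as np

# Parity check matrix of the (7, 4) Hamming code above:
# row i has a 1 in column j iff digit c_{j+1} appears in equation i+1.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

def syndrome(c):
    """Modulo-2 syndrome H c^T; all zeros means c is a codeword."""
    return H @ c % 2

c = np.array([1, 0, 1, 1, 0, 0, 1])   # satisfies all three equations
print(syndrome(c))                     # [0 0 0]
c[4] ^= 1                              # flip c5
print(syndrome(c))                     # [1 0 0]: only equation 1 contains c5
```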
What is an LDPC Code?
• The percentage of 1’s in the parity check matrix for an LDPC code is low.
• A regular LDPC code has the property that:
– every code digit is contained in the same number of equations,
– each equation contains the same number of code symbols.
• An irregular LDPC code relaxes these conditions.
The Equations for A Simple LDPC Code with n=12

c3 ⊕ c6 ⊕ c7 ⊕ c8 = 0
c1 ⊕ c2 ⊕ c5 ⊕ c12 = 0
c4 ⊕ c9 ⊕ c10 ⊕ c11 = 0
c2 ⊕ c6 ⊕ c7 ⊕ c10 = 0
c1 ⊕ c3 ⊕ c8 ⊕ c11 = 0
c4 ⊕ c5 ⊕ c9 ⊕ c12 = 0
c1 ⊕ c4 ⊕ c5 ⊕ c7 = 0
c6 ⊕ c8 ⊕ c11 ⊕ c12 = 0
c2 ⊕ c3 ⊕ c9 ⊕ c10 = 0.

• There are actually only 7 independent equations, so there are 7 parity digits.
The Parity Check Matrix for the Simple LDPC Code

c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11 c12

0 0 1 0 0 1 1 1 0 0 0 0   c3 ⊕ c6 ⊕ c7 ⊕ c8 = 0
1 1 0 0 1 0 0 0 0 0 0 1   c1 ⊕ c2 ⊕ c5 ⊕ c12 = 0
0 0 0 1 0 0 0 0 1 1 1 0   c4 ⊕ c9 ⊕ c10 ⊕ c11 = 0
0 1 0 0 0 1 1 0 0 1 0 0   c2 ⊕ c6 ⊕ c7 ⊕ c10 = 0
1 0 1 0 0 0 0 1 0 0 1 0   c1 ⊕ c3 ⊕ c8 ⊕ c11 = 0
0 0 0 1 1 0 0 0 1 0 0 1   c4 ⊕ c5 ⊕ c9 ⊕ c12 = 0
1 0 0 1 1 0 1 0 0 0 0 0   c1 ⊕ c4 ⊕ c5 ⊕ c7 = 0
0 0 0 0 0 1 0 1 0 0 1 1   c6 ⊕ c8 ⊕ c11 ⊕ c12 = 0
0 1 1 0 0 0 0 0 1 1 0 0   c2 ⊕ c3 ⊕ c9 ⊕ c10 = 0
The Parity Check Matrix for the Simple LDPC Code

0 0 1 0 0 1 1 1 0 0 0 0
1 1 0 0 1 0 0 0 0 0 0 1
0 0 0 1 0 0 0 0 1 1 1 0
0 1 0 0 0 1 1 0 0 1 0 0
1 0 1 0 0 0 0 1 0 0 1 0
0 0 0 1 1 0 0 0 1 0 0 1
1 0 0 1 1 0 1 0 0 0 0 0
0 0 0 0 0 1 0 1 0 0 1 1
0 1 1 0 0 0 0 0 1 1 0 0

• Note that each code symbol is contained in 3 equations and each equation involves 4 code symbols.
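A small Python/NumPy sketch (added here, not from the slides) that encodes this matrix and checks both the regularity claim and the earlier claim that only 7 of the 9 equations are independent:

```python
import numpy as np

# Parity check matrix of the simple n = 12 LDPC code
# (rows = parity equations, columns = code symbols c1..c12).
rows = ["001001110000", "110010000001", "000100001110",
        "010001100100", "101000010010", "000110001001",
        "100110100000", "000001010011", "011000001100"]
H = np.array([[int(b) for b in r] for r in rows])

print(H.sum(axis=0))  # column weights: every symbol is in 3 equations
print(H.sum(axis=1))  # row weights: every equation checks 4 symbols

def gf2_rank(m):
    """Row-reduce over GF(2) to count independent parity equations."""
    m = m.copy() % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]     # move pivot row up
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]                 # eliminate column below/above
        rank += 1
    return rank

print(gf2_rank(H))  # 7 -> only 7 independent equations, so 7 parity digits
```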
A Graphical Description of LDPC Codes
• Decoding of LDPC codes is best understood by a graphical description.
• The graph has two types of nodes: bit nodes and parity nodes.
• Each bit node represents a code symbol and each parity node represents a parity equation.
• There is a line drawn between a bit node and a parity node if and only if that bit is involved in that parity equation.
The Graph for the Simple LDPC Code

0 0 1 0 0 1 1 1 0 0 0 0
1 1 0 0 1 0 0 0 0 0 0 1
0 0 0 1 0 0 0 0 1 1 1 0
0 1 0 0 0 1 1 0 0 1 0 0
1 0 1 0 0 0 0 1 0 0 1 0
0 0 0 1 1 0 0 0 1 0 0 1
1 0 0 1 1 0 1 0 0 0 0 0
0 0 0 0 0 1 0 1 0 0 1 1
0 1 1 0 0 0 0 0 1 1 0 0

Squares represent parity equations. Circles represent code symbols.
Only the lines corresponding to the 1st row and 1st column are shown.
Entire Graph for the Simple LDPC Code
• Note that each bit node has 3 lines connecting it to parity nodes and each parity node has 4 lines connecting it to bit nodes.
Decoding of LDPC Codes by Message Passing on the Graph
• Decoding is accomplished by passing messages along the lines of the graph.
• The messages on the lines that connect to the i-th bit node, ci, are estimates of Pr[ci = 1] (or some equivalent information).
• At the nodes the various estimates are combined in a particular way.
Decoding of LDPC Codes by Message Passing on the Graph
• Each bit node is furnished an initial estimate of the probability it is a 1 from the soft output of the channel.
• The bit node broadcasts this initial estimate to the parity nodes on the lines connected to that bit node.
• But each parity node must make new estimates for the bits involved in that parity equation and send these new estimates (on the lines) back to the bit nodes.
Estimation of Probabilities by Parity Nodes
• Each parity node knows that there are an even number of 1’s in the bits connected to that node.
• But the parity node has received estimates of the probability that each bit node connected to it is a 1.
• The parity node sends a new estimate to the i-th bit node based upon all the other probabilities furnished to it.
Estimation of Probabilities by Parity Nodes
• For example, consider the parity node corresponding to the equation c3 ⊕ c6 ⊕ c7 ⊕ c8 = 0.
• This parity node has the estimates p3, p6, p7, and p8 corresponding to the bit nodes c3, c6, c7, and c8, where pi is an estimate for Pr[ci = 1].
• The new estimate for the bit node c3 is:

p'3 = p6(1−p7)(1−p8) + p7(1−p6)(1−p8) + p8(1−p6)(1−p7) + p6 p7 p8

and for the other nodes:

p'6 = p3(1−p7)(1−p8) + p7(1−p3)(1−p8) + p8(1−p3)(1−p7) + p3 p7 p8
p'7 = p6(1−p3)(1−p8) + p3(1−p6)(1−p8) + p8(1−p3)(1−p6) + p3 p6 p8
p'8 = p6(1−p7)(1−p3) + p7(1−p6)(1−p3) + p3(1−p6)(1−p7) + p3 p6 p7
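In code, this parity-node update can be written compactly via the equivalent product form (1 − ∏(1 − 2p))/2, which expands to exactly the sums above; a minimal Python sketch:

```python
def parity_to_bit(others):
    """New estimate of Pr[bit = 1] from a parity node: the probability
    that an odd number of the *other* bits in the equation are 1.
    (1 - prod(1 - 2p)) / 2 expands to the p'_3 formula above."""
    prod = 1.0
    for p in others:
        prod *= 1.0 - 2.0 * p
    return (1.0 - prod) / 2.0

# Channel values used in the worked example later: p6 = p7 = p8 = 0.9.
print(parity_to_bit([0.9, 0.9, 0.9]))  # new estimate p'_3 = 0.756
```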
Estimation of Probabilities by Bit Nodes
• But the bit nodes are provided different estimates of Pr[c=1] by the channel and by each of the parity nodes connected to it.
• It no longer broadcasts a single estimate but sends different estimates to each parity equation.
• The new estimate sent to each parity node is obtained by combining all other current estimates.
• That is, in determining the new estimate sent to a parity node, it ignores the estimate received from that parity node.
Estimation of Probabilities by Bit Nodes
• The new estimate sent to each parity node is equal to the normalized product of the other estimates.
• The proper normalization is a detail which will be discussed later.
• If instead of passing estimates of Pr[c=1] we pass estimates of log {Pr[c=1]/Pr[c=0]}, where Pr[c=0] = 1 − Pr[c=1], we merely need to add the appropriate terms.
• The channel estimate is always used in all estimates passed to the parity node.
Estimation of Probabilities by Bit Nodes
• The following table illustrates how estimates are combined by a bit node involved in 3 parity equations A, B, and C.

Estimate received from channel: pch
Estimate received from parity node A: pA
Estimate received from parity node B: pB
Estimate received from parity node C: pC

New estimate sent to parity node A: K pch pB pC
New estimate sent to parity node B: K pch pA pC
New estimate sent to parity node C: K pch pA pB
The Rest of the Decoding Algorithm
• The process now repeats: parity nodes passing messages to bit nodes and bit nodes passing messages to parity nodes.
• At the last step, a final estimate is computed at each bit node by computing the normalized product of all of its estimates.
• Then a hard decision is made on each bit by comparing the final estimate with the threshold 0.5.
Final Estimate Made by Bit Nodes
• The following table illustrates how the final estimate is made by a bit node involved in 3 parity equations A, B, and C.

Estimate received from channel: pch
Estimate received from parity node A: pA
Estimate received from parity node B: pB
Estimate received from parity node C: pC

FINAL ESTIMATE: K pch pA pB pC


Decoding of Simple Example
• Suppose the following Pr[Ci=1], i = 1, 2, …, 12 are obtained from the channel:

0.9 0.5 0.4 0.3 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9

• We now watch the decoder decode.

Decoding of Simple Example: First 4 Bit Nodes Only
• Initial broadcast from the first 4 bit nodes (each bit node sends its channel value on all three of its lines):

C1: 0.9   C2: 0.5   C3: 0.4   C4: 0.3

• Transmission from the parity nodes to these 4 bit nodes:

C1: 0.500, 0.436, 0.372   C2: 0.756, 0.756, 0.436
C3: 0.756, 0.756, 0.500   C4: 0.756, 0.756, 0.756

• Next transmission from the first 4 bit nodes:

C1: 0.805, 0.842, 0.874   C2: 0.705, 0.705, 0.906
C3: 0.674, 0.674, 0.865   C4: 0.804, 0.804, 0.804
Message Passing for First 4 Bit Nodes for More Iterations

[Chart: Pr[C=1] for bit nodes C1–C4 over successive up/down message-passing steps; the plotted values are tabulated below.]

      Up    Down  Down  Down  Up    Up    Up    Down  Down  Down  Up    Up    Up
C1   0.900 0.500 0.436 0.372 0.805 0.842 0.874 0.594 0.640 0.656 0.968 0.962 0.959
C2   0.500 0.756 0.756 0.436 0.705 0.705 0.906 0.640 0.690 0.630 0.791 0.751 0.798
C3   0.400 0.756 0.756 0.500 0.674 0.674 0.865 0.790 0.776 0.644 0.807 0.820 0.897
C4   0.300 0.756 0.756 0.756 0.804 0.804 0.804 0.749 0.718 0.692 0.710 0.742 0.765
Messages Passed To and From All 12 Bit Nodes

      Up    Down  Down  Down  Up    Up    Up    Down  Down  Down  Up    Up    Up
C1   0.900 0.500 0.436 0.372 0.805 0.842 0.874 0.594 0.640 0.656 0.968 0.962 0.959
C2   0.500 0.756 0.756 0.436 0.705 0.705 0.906 0.640 0.690 0.630 0.791 0.751 0.798
C3   0.400 0.756 0.756 0.500 0.674 0.674 0.865 0.790 0.776 0.644 0.807 0.820 0.897
C4   0.300 0.756 0.756 0.756 0.804 0.804 0.804 0.749 0.718 0.692 0.710 0.742 0.765
C5   0.900 0.500 0.372 0.372 0.759 0.842 0.842 0.611 0.694 0.671 0.976 0.966 0.970
C6   0.900 0.436 0.500 0.756 0.965 0.956 0.874 0.608 0.586 0.643 0.958 0.962 0.952
C7   0.900 0.436 0.500 0.372 0.842 0.805 0.874 0.647 0.628 0.656 0.967 0.969 0.965
C8   0.900 0.436 0.436 0.756 0.956 0.956 0.843 0.611 0.605 0.656 0.963 0.964 0.956
C9   0.900 0.372 0.372 0.500 0.842 0.842 0.759 0.722 0.694 0.703 0.980 0.982 0.981
C10  0.900 0.372 0.500 0.500 0.900 0.842 0.842 0.690 0.614 0.654 0.964 0.974 0.970
C11  0.900 0.372 0.436 0.756 0.956 0.943 0.805 0.667 0.608 0.676 0.967 0.974 0.965
C12  0.900 0.500 0.372 0.756 0.943 0.965 0.842 0.565 0.642 0.657 0.969 0.957 0.955
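Below is a compact Python/NumPy sketch of the whole decoder with a flooding schedule (an addition to the slides, not the original author's code). With the channel values of this example it reproduces messages like those in the table, e.g., the first parity-to-bit messages 0.500, 0.436, 0.372 for C1:

```python
import numpy as np

# Rows of the 9 x 12 parity check matrix from the earlier slide.
rows = ["001001110000", "110010000001", "000100001110",
        "010001100100", "101000010010", "000110001001",
        "100110100000", "000001010011", "011000001100"]
H = np.array([[int(b) for b in r] for r in rows])

p_ch = np.array([0.9, 0.5, 0.4, 0.3] + [0.9] * 8)   # channel Pr[c_i = 1]

checks = [np.flatnonzero(r) for r in H]              # bit indices per equation
edges = [(i, j) for i, c in enumerate(checks) for j in c]
up = {e: p_ch[e[1]] for e in edges}                  # initial broadcast

for _ in range(3):
    # Down: parity node i tells bit j the probability that an odd
    # number of the other bits in equation i are 1.
    down = {}
    for i, c in enumerate(checks):
        for j in c:
            prod = np.prod([1 - 2 * up[(i, k)] for k in c if k != j])
            down[(i, j)] = (1 - prod) / 2
    # Up: bit j sends each parity node the normalized product of the
    # channel estimate and the estimates from all *other* parity nodes.
    for (i, j) in edges:
        num, den = p_ch[j], 1 - p_ch[j]
        for (i2, j2) in edges:
            if j2 == j and i2 != i:
                num *= down[(i2, j)]
                den *= 1 - down[(i2, j)]
        up[(i, j)] = num / (num + den)

# Final estimate: normalized product of the channel value and all
# incoming parity-node estimates, then a hard decision at 0.5.
for j in range(12):
    num, den = p_ch[j], 1 - p_ch[j]
    for (i, j2) in edges:
        if j2 == j:
            num *= down[(i, j)]
            den *= 1 - down[(i, j)]
    p = num / (num + den)
    print(f"c{j+1}: Pr[c=1] = {p:.3f} -> {int(p > 0.5)}")
```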
Messages Passed To and From All 12 Bit Nodes

[Chart: the table above plotted as Pr[C=1] trajectories for all 12 bit nodes.]
More Iterations

[Chart: Pr[C=1] for all 12 bit nodes over additional up/down message-passing iterations.]
More Interesting Example

[Chart: Pr[C=1] trajectories for all 12 bit nodes in a more interesting example.]
Computation at Bit Nodes
• If estimates of probabilities are statistically independent you should multiply them.
• But you need to normalize the product. Otherwise the product is smaller than every single estimate.
• For example, with three independent estimates all equal to 0.9, the unnormalized product is:

(0.9)^3 = 0.729

where the correct normalized product is:

(0.9)^3 / [(0.1)^3 + (0.9)^3] = 0.9986
Derivation of Correct Normalization
• Assume we have 3 independent estimates, pa, pb, and pc, from which we compute the new estimate p' from the formula:

p' = K pa pb pc.

• But the same normalization must hold for (1 − p'):

(1 − p') = K (1 − pa)(1 − pb)(1 − pc)

• From the first equation, (1 − p') = 1 − K pa pb pc.
• Setting (1 − K pa pb pc) equal to K (1 − pa)(1 − pb)(1 − pc) and solving for K we obtain:

K = 1 / [(1 − pa)(1 − pb)(1 − pc) + pa pb pc]
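The same combining rule in Python (a small sketch added here, with the 0.9986 example from the previous slide as the check):

```python
def bit_combine(estimates):
    """Normalized product of independent estimates of Pr[c = 1]:
    K * prod(p), with K = 1 / [prod(1 - p) + prod(p)]."""
    num = den = 1.0
    for p in estimates:
        num *= p
        den *= 1.0 - p
    return num / (num + den)

print(bit_combine([0.9, 0.9, 0.9]))   # 0.9986, as on the previous slide
```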


Assumption of Independence
• Note that in our example, parts of the graph look like:

[Figure: a cycle of length 4 in the graph.]

• This is called a cycle of length 4.
• Cycles cause estimates to be dependent and our combining formulas are incorrect.
• As a result, short cycles should be avoided in the design of codes.
Computation at Parity Nodes
• When a parity equation involves many bits, an alternative formula is used.
• Details are omitted here but can be found in the literature.
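The slides omit the alternative; one standard choice from the literature (an assumption here, not necessarily the formula the author intended) is the log-domain "tanh rule", which avoids multiplying many small probabilities:

```python
import math

def parity_to_bit_llr(other_llrs):
    """Log-domain parity-node update ("tanh rule"), with LLRs defined as
    L = log(Pr[c=0] / Pr[c=1]): tanh(L'/2) = product of tanh(L_j/2) over
    the other bits.  Equivalent to (1 - prod(1 - 2p)) / 2 in the
    probability domain, but numerically safer for long equations."""
    t = 1.0
    for L in other_llrs:
        t *= math.tanh(L / 2.0)
    return 2.0 * math.atanh(t)

# Same example as before: Pr[c=1] = 0.9 for the three other bits.
Ls = [math.log(0.1 / 0.9)] * 3
L_new = parity_to_bit_llr(Ls)
print(1.0 / (1.0 + math.exp(L_new)))   # back to Pr[c=1] = 0.756
```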
Rate of a Regular LDPC Code
• Assume an LDPC code is designed where:
(1) every bit is in J parity checks, and
(2) every parity check checks K bits.
• Since the number of 1’s in a parity check matrix is the same whether we count by rows or columns, we have

J (# of columns) = K (# of rows)
or J n = K (n − k).

• Solving for k/n, we have k/n = 1 − J/K, the rate of the code.
• Higher rate codes can be obtained by puncturing lower rate codes.
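A quick check of the rate formula in Python; note that dependent parity rows, as in the n = 12 example earlier, make the true rate higher than this design rate:

```python
def design_rate(J, K):
    """Design rate k/n = 1 - J/K of a (J, K)-regular LDPC code,
    assuming all J*n/K parity equations are independent."""
    return 1 - J / K

print(design_rate(3, 4))     # 0.25
# The simple n = 12 example has only 7 independent equations,
# so its true rate is (12 - 7) / 12, above the design rate.
print((12 - 7) / 12)         # ~0.417
```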
Design of a Parity Matrix for a Regular LDPC Code
• The following procedure was suggested by Gallager. We illustrate it for a code with J = 3 and K = 4.

1. Construct the first n/4 rows as follows (an (n/4)-row by n-column band with four 1’s per row):

1 1 1 1 0 0 0 0 . . . . 0 0 0 0
0 0 0 0 1 1 1 1 . . . . 0 0 0 0
. . . . . . . . . . . . . . . .
0 0 0 0 0 0 0 0 . . . . 1 1 1 1

2. Construct the next n/4 rows by permuting the columns of the first n/4 rows.
3. Repeat 2 using another permutation of the columns.
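A sketch of this construction in Python/NumPy, using uniformly random column permutations for steps 2 and 3 (an assumption; Gallager constrained his permutations more carefully, e.g., to avoid short cycles):

```python
import numpy as np

rng = np.random.default_rng(0)

def gallager_ldpc(n, J=3, K=4):
    """Gallager's construction: a band of n/K rows with K consecutive 1's,
    stacked J times with permuted columns."""
    assert n % K == 0
    base = np.zeros((n // K, n), dtype=int)
    for r in range(n // K):
        base[r, r * K:(r + 1) * K] = 1          # step 1: band of 1's
    blocks = [base] + [base[:, rng.permutation(n)]   # steps 2 and 3
                       for _ in range(J - 1)]
    return np.vstack(blocks)

H = gallager_ldpc(12)
print(H.sum(axis=0), H.sum(axis=1))   # column weight J = 3, row weight K = 4
```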
Irregular LDPC Codes
• Irregular LDPC codes have a variable number of 1’s in the rows and in the columns.
• The optimal distributions for the rows and the columns are found by a technique called density evolution.
• Irregular LDPC codes perform better than regular LDPC codes.
• The basic idea is to give greater protection to some digits and to have some of the parity equations give more reliable information to give the decoding a jump start.
Paper on Irregular LDPC Codes: Luby et al. (ISIT 1998)
• Code rate 1/2
• Left degrees:
λ3 = 0.44506, λ5 = 0.26704, λ9 = 0.14835, λ17 = 0.07854, λ33 = 0.04046, λ65 = 0.02055
• Right degrees:
ρ7 = 0.38282, ρ8 = 0.29548, ρ19 = 0.10225, ρ20 = 0.18321, ρ84 = 0.04179, ρ85 = 0.02445
From MacKay’s Website
• The figure shows the performance of various codes with rate 1/4 over the Gaussian channel. From left to right:
• Irregular low density parity check code over GF(8), blocklength 48000 bits (Davey and MacKay, 1999);
• JPL turbo code (JPL, 1996), blocklength 65536;
• Regular LDPC over GF(16), blocklength 24448 bits (Davey and MacKay, 1998);
• Irregular binary LDPC, blocklength 16000 bits (Davey, 1999);
• M.G. Luby, M. Mitzenmacher, M.A. Shokrollahi and D.A. Spielman’s (1998) irregular binary LDPC, blocklength 64000 bits;
• JPL’s code for Galileo: a concatenated code based on a constraint length 15, rate 1/4 convolutional code (in 1992, this was the best known code of rate 1/4); blocklength about 64,000 bits;
• Regular binary LDPC, blocklength 40000 bits (MacKay, 1999).
Conclusions
• The inherent parallelism in decoding LDPC codes suggests their use in high data rate systems.
• A comparison of LDPC codes and turbo codes is complicated and depends on many issues: e.g., block length, channel model, etc.
• LDPC codes are well worth investigating. Some issues to be resolved are:
– Performance for channel models of interest.
– Optimization of irregular LDPC codes (for channels of interest).
– Implementation in VLSI.
– Patent issues.
