3 0 0 4 4
Pre-requisite: ECE4001 / ECE1018 / ECE2030
Syllabus version: 1.1
Course Objectives:
1. To acquaint students with the basics of probability, information and its properties
2. To familiarize students with different channel models and their capacity
3. To teach different types of source coding techniques
4. To explain various types of channel coding techniques
Course Outcomes:
1. Comprehend and analyze the basics of probability, information and its properties
2. Examine different types of channels and determine their capacity
3. Understand the binary and non-binary source coding schemes
4. Analyze the dictionary-based coding schemes for image compression techniques
5. Understand the fundamentals of error control coding schemes
6. Construct, comprehend and analyze the advanced error control coding schemes
7. Evaluate the performance of source coding, channel coding techniques in image processing
and wireless applications
Student Learning Outcomes (SLO): 1, 2, 18
Module:1 Introduction 4 hours
Review of Probability Theory, Introduction to information theory
Module:2 Entropy 6 hours
Uncertainty, self-information, average information, mutual information and their properties -
Entropy and information rate of Markov sources - Information measures of continuous random
variables.
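As a quick illustration of the information measures in this module (not part of the original syllabus; the function names are my own), entropy and mutual information can be computed directly from a probability mass function:

```python
import math

def entropy(p):
    """Shannon entropy H(X) in bits of a probability distribution p."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint pmf given as a 2-D list."""
    px = [sum(row) for row in joint]                 # marginal of X
    py = [sum(col) for col in zip(*joint)]           # marginal of Y
    return entropy(px) + entropy(py) - entropy([p for row in joint for p in row])

print(entropy([0.5, 0.5]))               # 1.0 (fair coin)
print(entropy([0.9, 0.1]))               # ~0.469 (a biased coin carries less information)
print(mutual_information([[0.5, 0.0],
                          [0.0, 0.5]]))  # 1.0 (Y fully determines X)
```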
Module:3 Channel Models and Capacity 5 hours
Importance and types of various channel models - Channel capacity calculation – Binary
symmetric channel, binary erasure channel - Shannon’s channel capacity and channel coding
theorem - Shannon’s limit.
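The two channel capacities named in this module have closed forms: C = 1 - H2(p) for the binary symmetric channel and C = 1 - e for the binary erasure channel. A minimal sketch (function names are my own):

```python
import math

def h2(p):
    """Binary entropy function H2(p) in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - h2(p)

def bec_capacity(e):
    """Capacity of a binary erasure channel with erasure probability e."""
    return 1.0 - e

print(bsc_capacity(0.11))  # ~0.5 bit per channel use
print(bsc_capacity(0.5))   # 0.0 (the channel conveys nothing)
print(bec_capacity(0.25))  # 0.75
```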
Module:4 Source Coding I 6 hours
Source coding theorem - Huffman coding - Non-binary Huffman codes - Adaptive Huffman
coding - Shannon-Fano-Elias coding - Non-binary Shannon-Fano codes.
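A textbook-style sketch of binary Huffman coding (not taken from the syllabus; ties in the heap are broken arbitrarily, so the exact codewords may vary while the lengths do not):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a binary Huffman code from symbol frequencies."""
    heap = [[freq, [sym, ""]] for sym, freq in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # two least-probable subtrees
        hi = heapq.heappop(heap)
        for pair in lo[1:]:        # prefix 0 onto the low subtree's codewords
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:        # prefix 1 onto the high subtree's codewords
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

print(huffman_codes("aaaabbc"))  # e.g. {'c': '00', 'b': '01', 'a': '1'}
```

The most frequent symbol gets the shortest codeword, and no codeword is a prefix of another.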
Module:5 Source Coding II 6 hours
Arithmetic coding - Lempel-Ziv coding - Run-length encoding and rate distortion function -
Overview of transform coding.
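Run-length encoding, the simplest scheme in this module, can be sketched in a few lines (an illustration, not part of the syllabus):

```python
from itertools import groupby

def rle_encode(data):
    """Run-length encode a sequence into (symbol, run_length) pairs."""
    return [(sym, len(list(run))) for sym, run in groupby(data)]

def rle_decode(pairs):
    """Expand (symbol, run_length) pairs back into the original string."""
    return "".join(sym * n for sym, n in pairs)

encoded = rle_encode("AAAABBBCCD")
print(encoded)                        # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
assert rle_decode(encoded) == "AAAABBBCCD"
```

RLE only compresses when runs are long, which is why it is typically paired with a transform that produces long runs (as in image coding).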
Module:6 Channel Coding I 8 hours
Introduction to error control codes - Block codes, linear block codes, cyclic codes and their
properties - Encoder and decoder design - Serial and parallel concatenated block codes -
Convolutional codes: properties, encoder, tree diagram, trellis diagram, state diagram,
transfer function of convolutional codes - Viterbi decoding - Trellis coding - Reed-Solomon codes.
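A convolutional encoder of the kind this module describes can be sketched compactly. The rate-1/2, constraint-length-3 code with generators (7, 5) in octal is a common textbook choice, assumed here for illustration:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3, generators (7,5) octal."""
    state = 0                                      # two memory elements
    out = []
    for b in bits:
        reg = (b << 2) | state                     # [current bit, two previous bits]
        out.append(bin(reg & g1).count("1") % 2)   # parity over g1's taps
        out.append(bin(reg & g2).count("1") % 2)   # parity over g2's taps
        state = reg >> 1                           # shift register forward
    return out

# the impulse response reads off the two generators, interleaved
print(conv_encode([1, 0, 0, 0]))  # [1, 1, 1, 0, 1, 1, 0, 0]
```

Each input bit produces two output bits, so the memory (state) of the encoder is exactly what the trellis and state diagrams of this module draw.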
Module:7 Channel Coding II 8 hours
Serial and parallel concatenated convolutional codes - Block and convolutional interleavers -
Turbo encoder - Iterative turbo decoder - Trellis-coded modulation and set partitioning - LDPC codes.
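The block interleaver mentioned above writes symbols into an array row-wise and reads them out column-wise, spreading burst errors apart. A minimal sketch (function names are my own):

```python
def block_interleave(symbols, rows, cols):
    """Write symbols row-wise into a rows x cols array, read them column-wise."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(symbols, rows, cols):
    """Undo the interleaver: column-wise reading is inverted by swapping roles."""
    return block_interleave(symbols, cols, rows)

data = list("ABCDEFGHIJKL")            # a 3 x 4 block
tx = block_interleave(data, 3, 4)
print("".join(tx))                     # AEIBFJCGKDHL
assert block_deinterleave(tx, 3, 4) == data
```

After deinterleaving, a burst of consecutive channel errors is split into isolated single errors that a block code can correct.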
Module:8 Contemporary Issues 2 hours
13. Performance analysis of various channels (BSC, BEC, noiseless, lossless) under AWGN.
14. FPGA implementation of linear block coding and syndrome decoding.
15. Performance of linear block codes under single errors and burst errors.
16. Performance analysis of convolutional codes under single errors and burst errors.
17. Implementation of Viterbi decoding on FPGA.
18. Efficiency comparison of different interleavers for a turbo encoder.
19. Implementation of a trellis-coded modulator on FPGA.
20. Developing compression algorithms for wireless multimedia sensor networks.
Stereoscopic image compression and comparison using Huffman and arithmetic algorithms
Image compression using Shannon-Fano-Elias coding and run-length encoding techniques
Comparative study between Huffman and digital watermarking techniques for encoding and encryption of images
Achieving secured communication through encoding and decoding of audio into image and vice versa using Matlab
Encryption and encoding in images using Huffman and digital watermarking techniques
Text-to-Braille conversion
Implementing a Sudoku-based reversible data hiding scheme (on a reference image)
Movie recommendation system using BERT
ECE 154C, Spring 2010
Introduction to LDPC Codes
• These codes were invented by Gallager in his Ph.D.
dissertation at M.I.T. in 1960.
c1 ⊕ c2 ⊕ c3 ⊕ c5 = 0
c1 ⊕ c2 ⊕ c4 ⊕ c6 = 0
c1 ⊕ c3 ⊕ c4 ⊕ c7 = 0

1 1 1 0 1 0 0
1 1 0 1 0 1 0
1 0 1 1 0 0 1
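A codeword satisfies all three parity checks exactly when its syndrome H·c is the all-zero vector (arithmetic mod 2). A small sketch; the example codeword is my own, chosen so that each equation sums to zero:

```python
# parity check matrix of the three equations above
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]

def syndrome(H, c):
    """Mod-2 syndrome H*c; all-zero iff c satisfies every parity check."""
    return [sum(h * b for h, b in zip(row, c)) % 2 for row in H]

c = [1, 0, 1, 1, 0, 0, 1]   # c5, c6, c7 set so each check sums to 0
print(syndrome(H, c))        # [0, 0, 0]

c[0] ^= 1                    # flip one bit
print(syndrome(H, c))        # [1, 1, 1] -- every check containing c1 fails
```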
0 0 1 0 0 1 1 1 0 0 0 0    c3 ⊕ c6 ⊕ c7 ⊕ c8 = 0
1 1 0 0 1 0 0 0 0 0 0 1    c1 ⊕ c2 ⊕ c5 ⊕ c12 = 0
0 0 0 1 0 0 0 0 1 1 1 0    c4 ⊕ c9 ⊕ c10 ⊕ c11 = 0
0 1 0 0 0 1 1 0 0 1 0 0    c2 ⊕ c6 ⊕ c7 ⊕ c10 = 0
1 0 1 0 0 0 0 1 0 0 1 0    c1 ⊕ c3 ⊕ c8 ⊕ c11 = 0
0 0 0 1 1 0 0 0 1 0 0 1    c4 ⊕ c5 ⊕ c9 ⊕ c12 = 0
1 0 0 1 1 0 1 0 0 0 0 0    c1 ⊕ c4 ⊕ c5 ⊕ c7 = 0
0 0 0 0 0 1 0 1 0 0 1 1    c6 ⊕ c8 ⊕ c11 ⊕ c12 = 0
0 1 1 0 0 0 0 0 1 1 0 0    c2 ⊕ c3 ⊕ c9 ⊕ c10 = 0
The Parity Check Matrix for the Simple LDPC Code
0 0 1 0 0 1 1 1 0 0 0 0
1 1 0 0 1 0 0 0 0 0 0 1
0 0 0 1 0 0 0 0 1 1 1 0
0 1 0 0 0 1 1 0 0 1 0 0
1 0 1 0 0 0 0 1 0 0 1 0
0 0 0 1 1 0 0 0 1 0 0 1
1 0 0 1 1 0 1 0 0 0 0 0
0 0 0 0 0 1 0 1 0 0 1 1
0 1 1 0 0 0 0 0 1 1 0 0
[Tanner graph figure; only the lines corresponding to the 1st row and 1st column are shown.]
Entire Graph for the Simple LDPC Code
• Note that each bit node has 3 lines connecting it to
parity nodes and each parity node has 4 lines
connecting it to bit nodes.
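This (3, 4)-regularity can be checked directly; a small sketch with H transcribed from the matrix above:

```python
# the 9 x 12 parity check matrix of the simple LDPC code
H = [
    [0,0,1,0,0,1,1,1,0,0,0,0],
    [1,1,0,0,1,0,0,0,0,0,0,1],
    [0,0,0,1,0,0,0,0,1,1,1,0],
    [0,1,0,0,0,1,1,0,0,1,0,0],
    [1,0,1,0,0,0,0,1,0,0,1,0],
    [0,0,0,1,1,0,0,0,1,0,0,1],
    [1,0,0,1,1,0,1,0,0,0,0,0],
    [0,0,0,0,0,1,0,1,0,0,1,1],
    [0,1,1,0,0,0,0,0,1,1,0,0],
]

# each parity node (row) connects to 4 bit nodes,
# each bit node (column) connects to 3 parity nodes
assert all(sum(row) == 4 for row in H)
assert all(sum(col) == 3 for col in zip(*H))
print("H is (3,4)-regular")
```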
Decoding of LDPC Codes by Message Passing on the Graph
• Decoding is accomplished by passing messages
along the lines of the graph.
• This parity node has the estimates p3, p6, p7, and p8
corresponding to the bit nodes c3, c6, c7, and c8, where pi is an
estimate for Pr[ci=1].
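The message such a parity node sends back to one of its bit nodes can be sketched as follows. For even parity, ci must equal the mod-2 sum of the other bits, so the node needs the probability that an odd number of them are 1; Gallager's identity gives this as (1 - Π(1 - 2pj)) / 2. The function name is my own:

```python
def check_node_message(p_others):
    """Extrinsic Pr[ci = 1] from a parity node, given estimates for the
    other bits on the check: Pr[odd number of ones] via Gallager's identity."""
    prod = 1.0
    for p in p_others:
        prod *= (1.0 - 2.0 * p)
    return (1.0 - prod) / 2.0

print(check_node_message([0.0, 0.0, 0.0]))  # 0.0 -- the others are surely 0, so ci must be 0
print(check_node_message([1.0, 1.0, 1.0]))  # 1.0 -- odd number of ones needed, ci must be 1
print(check_node_message([0.5, 0.9, 0.1]))  # 0.5 -- one unknown bit makes ci unknown
```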
Message Passing for First 4 Bit Nodes for More Iterations

[Chart: Prob[C = 1] for bit nodes C1-C4 over successive up/down message-passing iterations; the data are tabulated below.]

     Up    Down  Down  Down  Up    Up    Up    Down  Down  Down  Up    Up    Up
C1   0.900 0.500 0.436 0.372 0.805 0.842 0.874 0.594 0.640 0.656 0.968 0.962 0.959
C2   0.500 0.756 0.756 0.436 0.705 0.705 0.906 0.640 0.690 0.630 0.791 0.751 0.798
C3   0.400 0.756 0.756 0.500 0.674 0.674 0.865 0.790 0.776 0.644 0.807 0.820 0.897
C4   0.300 0.756 0.756 0.756 0.804 0.804 0.804 0.749 0.718 0.692 0.710 0.742 0.765
Messages Passed To and From All 12 Bit Nodes

     Up    Down  Down  Down  Up    Up    Up    Down  Down  Down  Up    Up    Up
C1   0.900 0.500 0.436 0.372 0.805 0.842 0.874 0.594 0.640 0.656 0.968 0.962 0.959
C2   0.500 0.756 0.756 0.436 0.705 0.705 0.906 0.640 0.690 0.630 0.791 0.751 0.798
C3   0.400 0.756 0.756 0.500 0.674 0.674 0.865 0.790 0.776 0.644 0.807 0.820 0.897
C4   0.300 0.756 0.756 0.756 0.804 0.804 0.804 0.749 0.718 0.692 0.710 0.742 0.765
C5   0.900 0.500 0.372 0.372 0.759 0.842 0.842 0.611 0.694 0.671 0.976 0.966 0.970
C6   0.900 0.436 0.500 0.756 0.965 0.956 0.874 0.608 0.586 0.643 0.958 0.962 0.952
C7   0.900 0.436 0.500 0.372 0.842 0.805 0.874 0.647 0.628 0.656 0.967 0.969 0.965
C8   0.900 0.436 0.436 0.756 0.956 0.956 0.843 0.611 0.605 0.656 0.963 0.964 0.956
C9   0.900 0.372 0.372 0.500 0.842 0.842 0.759 0.722 0.694 0.703 0.980 0.982 0.981
C10  0.900 0.372 0.500 0.500 0.900 0.842 0.842 0.690 0.614 0.654 0.964 0.974 0.970
C11  0.900 0.372 0.436 0.756 0.956 0.943 0.805 0.667 0.608 0.676 0.967 0.974 0.965
C12  0.900 0.500 0.372 0.756 0.943 0.965 0.842 0.565 0.642 0.657 0.969 0.957 0.955
Messages Passed To and From All 12 Bit Nodes

[Chart: Prob[C = 1] for bit nodes C1-C12 over the up/down message-passing iterations tabulated above.]
More Iterations: All 12 Bit Nodes

[Chart: Prob[C = 1] for bit nodes C1-C12 over further rounds of up/down iterations.]
More Interesting Example: All 12 Bit Nodes

[Chart: Prob[C = 1] for bit nodes C1-C12 over many up/down iterations.]
Computation at Bit Nodes
• If the incoming estimates of Pr[c = 1] are statistically independent, they are combined
by multiplying them:
p' = K pa pb pc.
• K is a normalizing constant chosen so that the probabilities of c = 1 and c = 0 sum
to one. Setting (1 - K pa pb pc) equal to K(1 - pa)(1 - pb)(1 - pc) and solving for K
we obtain:
K = 1 / [pa pb pc + (1 - pa)(1 - pb)(1 - pc)].
• For a regular LDPC code, counting the ones in the parity check matrix by columns and
by rows gives
J (# of columns) = K (# of rows), i.e. J(n) = K(n - k),
where J is the number of ones in each column and K is the number of ones in each row.
• Solving for k/n, we have k/n = 1 - J/K, the rate of the code.
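This rate formula is easy to check against the simple example code, which has column weight 3 and row weight 4 (a one-line sketch; the function name is my own):

```python
def regular_ldpc_rate(J, K):
    """Design rate of a regular LDPC code with column weight J and row weight K:
    counting the ones two ways gives J*n = K*(n - k), so k/n = 1 - J/K."""
    return 1.0 - J / K

# the simple (3,4)-regular code: n = 12 bit nodes, 9 parity checks, rate 1/4
print(regular_ldpc_rate(3, 4))  # 0.25
```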
• The optimal distributions for the rows and the columns are
found by a technique called density evolution.
• Left (bit node) degrees:
λ3 = 0.44506, λ5 = 0.26704, λ9 = 0.14835, λ17 = 0.07854, λ33 = 0.04046, λ65 = 0.02055
• Right (check node) degrees:
ρ7 = 0.38282, ρ8 = 0.29548, ρ19 = 0.10225, ρ20 = 0.18321, ρ84 = 0.04179, ρ85 = 0.02445
From MacKay’s Website
• The figure shows the performance of various codes with rate 1/4
over the Gaussian Channel. From left to right:
• Irregular low density parity check code over GF(8), blocklength
48000 bits (Davey and MacKay, 1999);
• JPL turbo code (JPL, 1996) blocklength 65536;
• Regular LDPC over GF(16), blocklength 24448 bits (Davey and
MacKay, 1998);
• Irregular binary LDPC, blocklength 16000 bits (Davey, 1999);
• M.G. Luby, M. Mitzenmacher, M.A. Shokrollahi and D.A. Spielman's
(1998) irregular binary LDPC, blocklength 64000 bits;
• JPL's code for Galileo: a concatenated code based on constraint
length 15, rate 1/4 convolutional code (in 1992, this was the best
known code of rate 1/4); blocklength about 64,000 bits;
• Regular binary LDPC: blocklength 40000 bits (MacKay, 1999).
Conclusions
• The inherent parallelism in decoding LDPC codes suggests
their use in high data rate systems.