Ir-Sb-Xr-Ee-Kt 2009-008
IQBAL HUSSAIN

Acknowledgments

First of all, I would like to thank my supervising tutor, Prof. Lars Kildehöj Rasmussen at KTH, for his abundant help and prolific suggestions. He made himself readily available, had a patient ear, and always took the time to answer my questions. I would also like to thank my second supervisor for his support, patience and guidance. Both of my supervisors reviewed the whole thesis report very carefully, down to the most delicate specifics, for which I am grateful. I also have to mention the nice times out and discussions with the communication theory department staff. I would also like to thank Johannes Karlsson for his devotion to the simulator computers, which made the simulation work possible. Last but not least, I would like to direct my warmest thanks to the programme staff.
Abstract
The block fading channel is a good model for OFDM-based wireless communication systems in which the fading occurs in a block-wise manner. Raptor codes are a new, emerging class of rateless codes. In practice, interleaving is limited by the packet size; this non-ideal interleaving affects the maximum diversity achievable from the channel. We investigated the effect of correlation between fading blocks on the performance of the Raptor code over correlated slowly fading channels, and compared it with the half-rate standard (3,6) regular LDPC code and with ARQ systems using punctured LDPC codes for short block lengths. We also compared the performance of the Raptor code to the standard (3,6) regular LDPC code over the binary erasure channel, the additive white Gaussian noise channel, and the fast-fading Rayleigh channel for short block lengths. Extensive simulations were performed to gain insight for future research.
Contents

Acknowledgments
Abstract
Contents
List of Figures
Acronyms
1 Introduction
1.1.1 Overview
2 System Model
2.1 Introduction
2.3.2 OFDM
3.1 Introduction
4.1 Introduction
4.2 LT Coding
4.2.2 LT Decoder
5 Simulation Results
5.1 Introduction
6 Conclusions
Bibliography
List of Figures

Acronyms
Chapter 1
Introduction
Digital communication has had a profound impact on our lives over the last few decades. Its practical applications include satellite, military, internet, sea and space communications, digital audio and video broadcasting, and mobile communications. Reliable transmission of information over noisy channels is one of the basic requirements of digital information systems, and communication over a noisy channel has been the subject of much research for many years. Channel coding expands the signal space; as the distance between constellation points increases, bit error detection and correction are enhanced. Channel coding can be classified into two categories: automatic repeat request (ARQ) and forward error correction (FEC). ARQ ensures that data is delivered accurately despite the occurrence of errors during transmission. On the other hand, FEC tries to correct errors at the receiver end. ARQ
schemes require sending a small amount of redundant information along with the user information, which is used to detect errors during transmission, while FEC schemes add more redundancy in order to also correct errors. To limit the number of redundant information bits while achieving high error-correction capability, one can use powerful error-correcting codes. LDPC codes were
introduced by Gallager [7], which have shown better performance over a variety of
channels. Finite-length LDPC codes have also been shown to outperform turbo
codes. Rateless codes such as LT codes [11] and Raptor codes [14] are a class of digital fountain codes [10] with a capacity-achieving property. This chapter introduces the basic background and reviews the literature related to the thesis. Furthermore, the main objectives of the thesis are also presented.
1.1 Background Discussion

The Shannon channel coding theorem [20] states the maximum rate at which information can be transmitted reliably over a noisy channel of specified bandwidth. This maximum rate is called the channel capacity. The Shannon theorem shows that if information is transmitted at a rate equal to or less than capacity, then there exists a coding technique which allows the probability of error at the receiver to be made arbitrarily small. The converse is equally important: if the information rate is greater than the capacity of the channel, then there exists no coding technique which makes the probability of error close to zero. However, the Shannon theorem gives no clue about the construction of such coding schemes.
FEC corrects errors at the receiver, and hence no feedback channel is required; FEC also offers a constant bit rate. FEC codes reduce the required transmit power for a given BER at the expense of bandwidth. In ARQ, a retransmission takes place if and only if errors are detected at the receiver end, so a feedback channel is needed.
There are two main types of conventional FEC codes, namely block codes and convolutional codes. Block codes accept k information bits and produce n encoded bits by adding (n − k) redundant bits. On the other hand, convolutional codes transform k information bits into n encoded bits in a serial manner, where the memory of this transformation is the constraint length of the code. Trellis codes combine channel code design and modulation.
Recent advancements in coding technology, such as LDPC codes [7] and rateless codes [11, 14], offer performance that approaches the channel capacity of AWGN channels. OFDM is a multicarrier modulation technique which is more immune to frequency-selective fading than single
carrier systems. In OFDM, the data stream is distributed over a number of lower-rate streams, and these streams are modulated onto different carriers. The lower data rate of each stream in an OFDM system, compared to a single-carrier system, increases the symbol duration and hence reduces the effects of multipath propagation. Inter-symbol interference can be removed by using a cyclic prefix, which copies the last part of a symbol to its start. Using orthogonal subcarriers and a cyclic prefix at least equal to the maximum delay spread avoids not only inter-symbol interference but also inter-carrier interference. Channel equalization in OFDM is simpler than the adaptive equalization used in single-carrier systems. A drawback of the OFDM system compared to the single-carrier system is its sensitivity to frequency offset and phase noise. In addition, only limited interleaving may be possible among the subcarriers of an OFDM system, which can severely affect the performance.
In ARQ, the receiver sends feedback messages for lost or erroneous data frames. ARQ works well for many one-to-one protocols such as TCP/IP, but its performance seriously degrades when there are many receivers, since data frames may be retransmitted even if they have already been received by many receivers. In these cases, some receivers may ask for retransmission of data which has already been received by other receivers. Consequently, the data source needs to retransmit most of the data, and hence uses valuable bandwidth inefficiently. Furthermore, in ARQ the source will be idle most of the time if the distance between source and destination is very large. In hybrid ARQ, the received frame is first passed through the FEC decoder and then checked for errors. If errors are detected, a retransmission request for the corresponding data frame is sent back to the transmitter.
An adaptive code rate is always desirable, especially in varying channel environments. By puncturing or extending forward error-correction coding schemes, we can achieve flexible code rates. Punctured codes are obtained by deleting parity bits, giving a wide range of rates higher than that of the mother code, and hence higher bandwidth efficiency. Extended codes are obtained by appending more parity bits to the mother code; by extending a code we can only get rates lower than that of the mother code. Extending leads to codes of increased minimum distance and improved performance.
Rate-compatible codes are useful in time-varying channels. The limitation of a rate-compatible code is that the coded bits of a high-rate punctured code are also used by the lower-rate codes; in other words, the high-rate codes are embedded into the lower-rate codes of the family. If the higher-rate code is not sufficiently powerful to correct the channel errors, only a small number of extra bits, which were previously punctured (deleted), have to be transmitted.
A block code maps every k information bits to n coded bits; the number of redundant bits added to every k information bits is (n − k). A code is linear if the addition of any two valid codewords results in another valid codeword. A systematic block code is specified by its generator matrix, which determines the (n − k) parity-check bits; it can also be described by a parity-check matrix involving the transpose of the matrix P. Obviously, n is larger than k in general, and the code efficiency is usually evaluated by k/n, which is called the code rate. Each row of the parity-check matrix defines one of the parity-check equations.
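The systematic construction above can be sketched in code. The following is a minimal illustration, assuming a standard (7,4) parity sub-matrix P (a textbook choice, not one taken from this thesis); all function names are our own.

```python
# Systematic (7,4) linear block code sketch: G = [I | P], H = [P^T | I].
k, n = 4, 7

# Parity sub-matrix P (k x (n-k)); row i gives the parity contribution of bit i.
P = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
]

def encode(msg):
    """Systematic encoding: codeword = [message | parity], arithmetic mod 2."""
    parity = [sum(msg[i] * P[i][j] for i in range(k)) % 2 for j in range(n - k)]
    return msg + parity

def syndrome(word):
    """H * word^T with H = [P^T | I]; an all-zero syndrome means a valid codeword."""
    return [(sum(P[i][j] * word[i] for i in range(k)) + word[k + j]) % 2
            for j in range(n - k)]

cw = encode([1, 0, 1, 1])
assert syndrome(cw) == [0, 0, 0]                 # valid codeword
# Linearity: the mod-2 sum of two codewords is again a codeword.
cw2 = encode([0, 1, 1, 0])
both = [(a + b) % 2 for a, b in zip(cw, cw2)]
assert syndrome(both) == [0, 0, 0]
```

The code rate of this sketch is k/n = 4/7, matching the definition above.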
1.2 Literature Survey
The main purpose of coding theory has been to design codes which can achieve the Shannon capacity [20]. Convolutional codes were designed to approach the Shannon limit within a gap of a few decibels with reasonable decoding complexity. However, reducing this gap required impractical complexity until the discovery of iteratively decodable codes. With iterative decoding, low-density parity-check and turbo codes have provided excellent performance and a small gap to the Shannon limit with a practical decoding complexity.
The success of these codes drew attention to this field of study, which soon extended to a more general class of codes defined on graphs. Graphs are used to visualize the constraints of the codes; the advantages of this representation are discussed in [4]. Other communication components can also be modelled in the graph of a code. Since the rediscovery of LDPC codes, there has been a lot of research activity and improvement in the area of codes defined on graphs [3]. Research on LDPC codes has played a major role in this field, as many of the new classes of codes defined on graphs are influenced by the structure of LDPC codes. LDPC codes, which are based on bipartite graphs, were first proposed by Gallager [7] in the early 1960s, but did not receive proper attention until years later.
LDPC codes are also called Gallager codes, in honor of Robert G. Gallager, who proposed the concept of LDPC codes in his PhD thesis. The bipartite graph contains two sets of nodes, called variable nodes and check nodes. The bipartite graph is built in such a way that for each check node, the modulo-2 sum of the values of its incident variable nodes is equal to zero. There are various LDPC code constructions whose encoding algorithms run in linear time. The most efficient decoding algorithm for LDPC codes is the belief-propagation algorithm, which in each round updates the probability that a variable node is zero based on the information obtained from the check nodes in the previous round. The time complexity of the decoding is proportional to the number of edges in the bipartite graph. It has been proved that the performance obtained from well-designed LDPC code ensembles is very close to the Shannon bounds, and LDPC codes are widely used in many applications. The performance of LDPC codes on the binary erasure channel has been studied in [5]. Similarly, based on [8], results for LDPC codes on the additive white Gaussian noise channel and on correlated and uncorrelated fast Rayleigh fading channels have been demonstrated in [6].
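On the erasure channel, belief propagation for an LDPC code reduces to a simple peeling procedure: repeatedly find a check equation with exactly one erased neighbour and solve for that bit. The following is a minimal sketch under that interpretation; the tiny parity-check matrix H is illustrative only (it is not a (3,6) code), and all names are our own.

```python
# Peeling (erasure) decoding of an LDPC code on the BEC.
# Each row of H is one check: the mod-2 sum of its incident bits must be zero.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
]

def peel(received):
    """received: list of bits with None marking erasures.
    Returns the decoded list (may still contain None if decoding stalls)."""
    word = list(received)
    progress = True
    while progress:
        progress = False
        for row in H:
            unknown = [i for i, h in enumerate(row) if h and word[i] is None]
            if len(unknown) == 1:                      # check pins down one bit
                i = unknown[0]
                word[i] = sum(word[j] for j, h in enumerate(row)
                              if h and j != i) % 2     # parity must sum to 0
                progress = True
    return word

codeword = [1, 1, 0, 0, 1, 0]                          # satisfies all checks of H
erased = [1, None, 0, None, 1, 0]                      # two bits erased by the BEC
assert peel(erased) == codeword
```

The decoder's work is proportional to the number of edges it visits, in line with the complexity statement above.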
Traditional block codes have several problems in real communication systems. For example, in systems where the channel error rate is unknown
by the encoder or the decoder before coding, traditional codes, which have fixed code rates, have to estimate the loss rate and choose the closest code rate to adapt to the channel conditions. However, such estimates are often unreliable, so protocols for such channels usually employ feedback from the receiving end to the transmitter. These feedback messages are either acknowledgments for every received packet or requests for missing packets: in one case the transmitter retransmits the missing packets, while in the other the transmitter retransmits packets upon a negative acknowledgment. However, from Shannon theory [2, 20] it can easily be concluded that these protocols are inefficient and wasteful of bandwidth, because if the channel has erasure probability ε, then its capacity is (1 − ε) whether or not feedback is used. The situation is worse in one-to-many settings, which consist of many channels with different loss rates. When using fixed-rate codes, the sender is forced to generate codewords based on the worst-case code rate to ensure reliable transmission. Thus, a new family of FEC codes which is robust to unknown channel conditions is desirable.
M. Luby in [11] presented the first practical realization of rateless codes, the LT code. Fountain codes are also called universal erasure codes because they can be used regardless of the channel loss rate. The LT code performs well on the binary erasure channel [11], but exhibits an error floor in fading channels and has impractical decoding complexity for long block lengths [12]. A Raptor code [14] is a concatenation of an outer code with an LT code, designed to combat the error-floor problem in fading channels and to provide linear-time encoding and decoding complexity. The Raptor code has demonstrated a capacity-approaching property for the binary erasure channel [14]. It has not only outperformed the LT code but has also shown near-optimal performance on a wide variety of channels, including excellent performance on the AWGN channel [12, 15] and on Rayleigh fading channels.
Unlike traditional block codes such as LDPC and RS codes, a digital fountain code does not have a fixed code rate; the rate is determined by the number of transmitted codeword symbols required before the decoder is able to decode. The encoder can generate as many codeword symbols as needed to recover all the message bits, regardless of the channel conditions. The first practical realization of rateless codes was presented by M. Luby in [11] and was further improved in [14].
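The LT idea described above can be sketched in a few lines: each output symbol is the XOR of a random subset of message bits, and the decoder peels degree-one symbols. For brevity this sketch draws degrees uniformly from {1, 2, 3} instead of the robust soliton distribution used in practice [11]; all names are our own.

```python
import random

def lt_encode(message, n_symbols, rng):
    """Rateless encoder sketch: emit (neighbour set, XOR value) pairs forever."""
    k = len(message)
    out = []
    for _ in range(n_symbols):
        d = rng.choice([1, 2, 3])                 # toy degree distribution
        nb = frozenset(rng.sample(range(k), min(d, k)))
        v = 0
        for i in nb:
            v ^= message[i]
        out.append((nb, v))
    return out

def lt_decode(received, k):
    """Peeling decoder: release degree-1 symbols, XOR known bits out of the rest."""
    symbols = [[set(nb), v] for nb, v in received]
    decoded = [None] * k
    progress = True
    while progress:
        progress = False
        for nb, v in symbols:
            if len(nb) == 1:
                i = next(iter(nb))
                if decoded[i] is None:
                    decoded[i] = v
                    progress = True
        for sym in symbols:                       # subtract resolved bits
            for i in list(sym[0]):
                if decoded[i] is not None:
                    sym[0].discard(i)
                    sym[1] ^= decoded[i]
    return decoded

# Hand-picked symbols, as if an encoder's output had survived a BEC:
message = [1, 0, 1, 1]
received = [(frozenset({2}), 1), (frozenset({0, 2}), 0),
            (frozenset({1, 3}), 1), (frozenset({3}), 1)]
assert lt_decode(received, 4) == message
```

With the random encoder, decoding succeeds with high probability once slightly more than k symbols survive the erasures, which is exactly the rateless property discussed above.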
Digital fountain codes are also called universal erasure codes because they can be used independently of the channel loss rate while maintaining good performance over a wide range of channels. The rateless property is attractive due to its advantages under different requirements. In fixed-rate codes the rate is fixed
by a trade-off between efficiency and reliability. Fixed-rate codes achieve high-rate transmission at the expense of reliability and vice versa, while in rateless codes the codeword length is not fixed in advance, which gives greater flexibility compared to fixed-rate codes. To achieve the best possible code rate, fixed-rate codes require channel-state feedback to the transmitter. For large relay networks, this feedback overhead of fixed-rate codes becomes significant.
Rateless codes have become a natural candidate for hybrid ARQ. In contrast to incremental-redundancy HARQ (IR-HARQ), in which the transmitter takes into account the channel information of the past transmission(s), a rateless transmitter keeps sending codeword symbols to the receiver until it gets a feedback message from the receiver reporting successful decoding. Both rateless codes and codes based on IR-HARQ adapt the transmitted redundancy to the channel. Encoding and decoding complexities of rateless schemes are higher than those of IR-HARQ schemes. However, when Raptor codes are used, we only need to encode as many symbols as are actually transmitted, whereas in IR-HARQ schemes we need to encode all parity bits, even though we may need to send only a few of them. Moreover, IR-HARQ requires frequent and significant feedback messages, while a rateless scheme requires only a one-time, single-bit feedback message for a given number of information bits.
1.3 Thesis Motivation

Systems that do not adapt to the fading conditions must still achieve acceptable reliability when the channel quality is poor; thus, such fixed-rate systems are effectively designed for the worst-case channel conditions, and adapting to the channel fading can increase the throughput. OFDM has been widely adopted for wireless networking and broadband internet access. Wireless local area networks use
OFDM, as does the high-speed wireless local area network standard set by the Association of Radio Industries and Businesses (ARIB), Japan; all of these have similar physical-layer specifications based on OFDM. OFDM is also a strong candidate for the IEEE Wireless Personal Area Network (WPAN) standard and for fourth-generation (4G) cellular systems.
Real-time applications actually need short block lengths, below a few thousand bits. This limitation may be due to memory and buffer sizes and delay constraints. In GSM, the data rate on the radio-link interface is 270.83 kbps, which corresponds to a frame length of 1250 bits. The standard for the multimedia broadcast and multicast services of 3G wireless networks works with small codeword lengths of up to 500 bits [25]. WiMAX has a quite flexible physical layer with varying frame duration; with a bandwidth of 1.25 MHz and QPSK modulation, an aggregate uplink data rate of 154 kbps with a 5 ms frame length and 128 OFDM symbols corresponds to a 770-bit frame length. The basic transmission element in the FDD physical layer of UMTS is a radio frame with a duration of 10 ms, broken down into 15 time slots of 0.667 ms each; each time slot contains 2,560 chips. Typical single-channel data rates in Digital Audio Broadcasting are between 32 and 192 kbps, in which case Digital Audio Broadcasting has a frame length between 768 and 4068 bits. So codes with small frame lengths have a significant role in present and future wireless communication technologies.
Rateless codes are typically used at the transport layer or network layer, but many existing systems use fairly small block lengths at the physical layer. Our motivation is therefore to investigate the performance of the Raptor code for shorter block lengths, because for large block lengths rateless codes have already outperformed other codes. Only limited interleaving is possible in such systems, so we want to investigate the use of rateless codes [11, 14], a new code family ideally suited to the multi-channel data transmission environment [22], over such channels. We decided to investigate the effect of correlation between fading blocks, which relates to the limited interleaving possible.
Interleaving is used to mitigate the burst-error effect of the channel in communication over a fading channel. For a large coherence time, or equivalently a low Doppler spread of the fading, deep interleaving is required to break the memory of the channel. In applications like GPRS, there is a constraint on the interleaving depth due to the allowable delay.

Thesis Objective:
The objective is to investigate the performance of the Raptor code with short block lengths over uncorrelated and correlated slowly fading channels, and to compare it with the fixed-rate standard (3,6) regular LDPC code.
In order to be able to reach the proposed objective, we used the following procedure:

• We investigated the (3,6) rate-1/2 regular LDPC code over the binary erasure channel (BEC).

• We analyzed the (3,6) rate-1/2 regular LDPC code over the additive white Gaussian noise (AWGN) channel.

• We compared the performance of the (3,6) rate-1/2 regular LDPC code over the Rayleigh block-fading channel with additive white Gaussian noise (BF-AWGN).

• We thoroughly investigated the Luby Transform (LT) codes over the BEC.

• Finally, we investigated Raptor codes over the AWGN and BF-AWGN channels.
1.4 Thesis Contributions

This thesis has contributed to the field of rateless coding in several ways. Along with LDPC codes, rateless codes are natural candidates for use in HARQ schemes in applications with fluctuating channel conditions. The Raptor code is a class of fountain codes designed for reliable transmission of data over an erasure
channel with unknown erasure probability. The main contributions of this master's thesis are:

• We compared the performance of the Raptor code and the half-rate standard (3,6) regular LDPC code over the binary erasure channel for short block lengths.

• We compared the performance of the Raptor code and the half-rate standard (3,6) LDPC code over the AWGN channel for short block lengths.

• We compared the performance of the Raptor code and the half-rate standard (3,6) LDPC code over the fast-fading Rayleigh channel and over uncorrelated and correlated block-fading channels.

• We compared the performance of the Raptor code to the half-rate standard (3,6) regular LDPC code in the correlated block-fading channel, taking the degree distribution into account, and to ARQ systems using punctured LDPC codes.
1.5 Thesis Organization
In chapter 2 we present the channel models, which are used in our project for the performance comparison of the Raptor code and the half-rate standard (3,6) LDPC code. Chapter 3 presents LDPC codes and their decoding algorithms; simulation results are provided at the end for the LDPC code over the binary erasure, AWGN and Rayleigh fading channels for small codeword block lengths. In chapter 4, the concept of digital fountain codes [10] is introduced; moreover, the performance of the LT and Raptor codes is demonstrated for short block lengths over the binary erasure, AWGN, fast-fading Rayleigh and uncorrelated block-fading channels. Chapter 5 compares the performance of the Raptor code and the half-rate regular (3,6) LDPC code over the binary erasure, AWGN, fast-fading and uncorrelated block-fading channels. Chapter 5 also discusses the performance of the Raptor code in the correlated block-fading channel, comparing it with the half-rate standard (3,6) LDPC code for short block lengths, and with the punctured LDPC code in the correlated block-fading channel for short block lengths.
Chapter 2
System Model
2.1 Introduction
Shannon showed that, if the information or entropy rate is below the capacity of the channel, then proper methods are available to encode information messages such that they can be received without errors even if the channel distorts the message during transmission. Modern codes have performance very close to the channel capacity, and the utilization of error-control coding has become an integral part of the modern communication system. A general digital communication system is shown in Figure 2.1.1. This model is suitable from both coding-theory and signal-processing perspectives. The source messages are converted to digital signals by the source coder, which are suitable for a digital communication system.
Figure 2.1.1: Block diagram of a digital communication system: Source, Encoder (n > k), Modulator, Channel, Demodulator, Decoder, Sink.
The modulator transforms the message into a signal suitable for transmission over the channel. The channel is the physical medium of the communication system, for example telephone lines in a wired system, or the environment between the transmitter and receiver in a wireless system. The channel models used in this thesis are discussed later in this chapter. Errors may arise from the channel noise, so the encoder and decoder blocks must be designed to minimize the errors introduced by the channel. The messages from the source are binary sequences of length k bits. The encoder maps the messages to encoded words, so the codewords are binary sequences of length n bits, but not all combinations of n bits are codewords: there are 2^k codewords among the 2^n possible binary sequences.
After the signal has passed through the channel, it is distorted by channel noise, so the decoder must be designed to minimize the error between the received codeword and the transmitted codeword. The design parameters of the decoder are complexity, delay and rate. In the absence of a decoder, the receiver takes decisions independently on each bit, but with the introduction of a decoder, the receiver takes decisions on n bits, so it needs a more complex system; in today's VLSI technology, however, complexity is not a major issue. Also, to
wait for n bits introduces delay into the system, which can be very critical. As n is larger than k, the rate must also be taken into consideration, because the coded system uses the channel n times to communicate k bits, whereas the un-coded system communicates k bits in k channel uses. With the introduction of coding, however, we can reduce the transmit power required for the same error rate compared to the un-coded system, or equivalently we can reduce the bit error rate for the same transmit power. Since high-power radio-frequency devices are more expensive than low-power radio-frequency devices, this further enhances the importance of channel coding.
2.2 Channel Modelling

The goal of wireless channel modeling is to find useful analytical models for the variations in the channel. The most prominent drawback of wireless communication is that impairments such as terminal mobility and user interference result in a channel with time-varying parameters. Fading of the wireless channel can be classified into large-scale and small-scale fading. Large-scale fading involves the variation of the mean of the received signal power over distances large relative to the signal wavelength. On the other hand, small-scale fading involves the fluctuations of the received signal power over distances comparable with the wavelength. Models for the large-scale variations are useful in radio resource management tasks such as handoff, admission control, and power control. Models for the small-scale variations are more useful in the design of digital
transmission schemes; we hence focus on the small-scale variations here. Reflection, diffraction and scattering produce multiple delayed copies of the transmitted
signal. The reflected signals arrive with different delays, which cause random amplitude and phase in the received signal. This phenomenon is called multipath fading. If the product of the root-mean-square (RMS) delay spread, which is the standard deviation of the delay spread, and the signal bandwidth is much less than unity, the channel is said to suffer from flat fading. The relative motion between the transmitter and the receiver causes the frequency of the received signal to be shifted relative to that of the transmitted signal. The frequency shift, called the Doppler shift, depends on the relative velocity and the frequency of the transmitted signal. A signal undergoes slow fading when the bandwidth of the signal is much larger than the Doppler spread (defined as the range of Doppler frequency shifts). The combination of multipath fading with its time variations can cause the received signal to degrade severely. This degradation of the quality of the received signal is combated with diversity techniques and channel coding. In the forthcoming subsections we will briefly discuss a few channel models.
A discrete memoryless channel (DMC) is a class of channel for which both the input and output letters belong to finite alphabets. The simplest and most popular example of the DMC channel models is the binary erasure channel (BEC), shown in Figure 2.2.1.
Figure 2.2.1: The binary erasure channel: each input (0 or 1) is received correctly with probability 1 − ε or mapped to an erasure with probability ε.
In a BEC, the input is binary, either 0 or 1. The output consists of 0, 1 and an additional element called an erasure. Each bit is either transmitted correctly or erased with probability ε; the capacity of this channel is given by (1 − ε) [2]. The binary erasure channel can be used to model the internet, where packets are either forwarded accurately or dropped due to congestion.
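The BEC is trivial to simulate, and the simulation makes the capacity statement concrete: the fraction of bits that survive approaches 1 − ε. The following sketch uses our own names and an illustrative ε.

```python
import random

def bec(bits, eps, rng):
    """Binary erasure channel: each bit is erased (None) with probability eps."""
    return [None if rng.random() < eps else b for b in bits]

rng = random.Random(1)
eps = 0.3
n = 100000
out = bec([0] * n, eps, rng)
survived = sum(1 for b in out if b is not None) / n
# The empirical unerased fraction approaches 1 - eps, the capacity of the BEC.
assert abs(survived - (1 - eps)) < 0.01
```

A rateless encoder facing this channel simply keeps emitting symbols until slightly more than k of them survive, with no need to know ε in advance.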
Another common DMC is the binary symmetric channel (BSC), shown in Figure 2.2.2. In the BSC, both the input and output are binary signals. Each bit is either transmitted correctly with probability (1 − p) or flipped with crossover probability p.
In contrast to the binary symmetric channel, which has discrete input and output symbols taken from binary alphabets, the so-called AWGN channel is defined on the basis of continuous-valued random variables. Any wireless system is affected by additive white Gaussian noise with known variance, where x and y are the input and output of the channel, respectively.
Figure 2.2.3: The AWGN channel model: the matched-filter output R is the input signal vector X plus the noise vector Z.
White noise has constant spectral density. The AWGN model does not account for fading or frequency selectivity. Sources of Gaussian noise are thermal vibrations of atoms in antennas, shot noise, black-body radiation, etc. The AWGN channel is a good model for many satellite and deep-space communication links. The model for the AWGN channel is illustrated in Figure 2.2.3 and described by the additive white Gaussian noise term Z. The capacity of the AWGN channel is given by

C = B log2(1 + S/N)

where B is the bandwidth of the system, S is the signal power of the input signal X, and N is the power of the noise signal Z, which are expressed by the following relations:
S = E[X^2]

N = E[Z^2]

So the channel capacity depends on the signal-to-noise ratio. The signal-to-noise ratio can be written as

S/N = Eb/(N0/2)

with bit energy Eb and noise power spectral density N0. In [23], a comparison of the capacities of the AWGN channel and the BSC has been carried out; in order to achieve the same channel capacity, the binary symmetric channel requires more signal power.
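The capacity formula and the SNR relation above can be checked numerically. The sketch below uses illustrative values and our own function names.

```python
import math

def awgn_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity C = B * log2(1 + S/N) in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def snr_from_ebn0(eb, n0):
    """The S/N = Eb / (N0 / 2) relation quoted in the text."""
    return eb / (n0 / 2)

# Per unit bandwidth: SNR = 1 (0 dB) gives 1 bit/s/Hz, SNR = 3 gives 2 bit/s/Hz.
assert abs(awgn_capacity(1.0, 1.0) - 1.0) < 1e-9
assert abs(awgn_capacity(1.0, 3.0) - 2.0) < 1e-9
assert snr_from_ebn0(1.0, 2.0) == 1.0
```

Note the diminishing returns: quadrupling the SNR term (1 + S/N) only doubles the capacity, which is why coding gain is so valuable at low SNR.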
For most practical channels, where signal propagation takes place in the atmosphere and near the ground, the free-space propagation model is inadequate to describe the channel and predict system performance. When there is no line-of-sight between transmitter and receiver, the multipath is produced only by reflections from the objects present in the environment. This form of propagation gives rise to many paths, with no one path dominating the others in strength. When the channel gain follows a Rayleigh distribution, the channel is called a Rayleigh fading channel. Here “α” is the normalized Rayleigh fading factor, related to the fading coefficient of the channel ht through α = |ht|, where the real and imaginary components of
ht are independent zero-mean Gaussian random variables. Similarly, “n” is a Gaussian random variable with zero mean and variance σ^2, and the phase of the fading coefficient is an independent random variable, uniform on [0, 2π]. Rayleigh fading is a common model for wireless channels with no dominant line-of-sight path.
The block-fading model is a more common representation of slowly varying fading channels. In a block-fading channel, the random channel gain, or normalized Rayleigh fading factor, remains constant over a certain block of the symbols transmitted through the channel. The codeword representation of the block-fading channel is shown in Figure 2.2.4. A code designed to work well on the fast-fading channel may therefore not behave well on the block-fading channel. As the block-fading channel is a nonergodic channel [17], we cannot use the channel capacity as a rate limit; we rather use the outage probability instead of the capacity [18].
Let the codeword length be N and the number of fading blocks be m; the number of symbols affected by each fading block is then l = N/m. The received symbol yi is
given by
yi = αj · ti + ni

where i = 1, . . . , N and j = 1 + ⌊(i − 1)/l⌋, with ⌊x⌋ representing the integer part of a real number x. The gain αj is defined as for the Rayleigh fading channel for fading block j, where j = 1, . . . , m, and ni is an i.i.d. AWGN sample with zero mean and variance N0/2.
For our simulations we will use a 2-block fading channel: we divide the codeword length into two blocks, and each block has an independent fading factor:

y1 = α1 te1 + n1
y2 = α2 te2 + n2

where α1 and α2 are independent normalized Rayleigh fading factors; α1 applies to the first half of the block-length bits te1, and α2 to the second half te2, independently of each other. The terms n1 and n2 are additive Gaussian random noise with zero mean and variance N0/2.
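The 2-block model above can be simulated in a few lines. The sketch below draws each gain as the magnitude of a complex Gaussian with unit mean power (a standard normalization, assumed here); all names and values are illustrative.

```python
import math, random

def rayleigh_gain(rng, sigma=math.sqrt(0.5)):
    """alpha = |h| with Re(h), Im(h) ~ N(0, 1/2), so E[alpha^2] = 1."""
    return math.hypot(rng.gauss(0, sigma), rng.gauss(0, sigma))

def two_block_channel(symbols, n0, rng):
    """y_i = alpha_j * t_i + n_i, with j = 1 for the first half, 2 for the second."""
    half = len(symbols) // 2
    a1, a2 = rayleigh_gain(rng), rayleigh_gain(rng)
    out = []
    for i, t in enumerate(symbols):
        a = a1 if i < half else a2
        out.append(a * t + rng.gauss(0, math.sqrt(n0 / 2)))
    return out, (a1, a2)

rng = random.Random(7)
y, (a1, a2) = two_block_channel([1.0, -1.0, 1.0, -1.0], n0=0.0, rng=rng)
# With the noise switched off, each half is simply scaled by its block gain.
assert abs(y[0] - a1) < 1e-12 and abs(y[3] + a2) < 1e-12
```

Because a deep fade on one block wipes out half the codeword, a code for this channel must spread information across both blocks, which is the diversity issue studied in this thesis.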
In the BF model, the transmitted sequence is divided into blocks, and all the symbols belonging to the same block experience the same fading. In some cases, the fading blocks are assumed to be independent of each other.
Figure 2.2.5: Coded transmission over the correlated block-fading channel: information bits u1 . . . uk are encoded to c1 . . . cn, interleaved and modulated to x1 . . . xL, scaled by per-block fading gains α with additive noise n, then demodulated, de-interleaved and decoded to û1 . . . ûk.
In some applications, however, such as OFDM systems, there is considerable correlation among the fading blocks, due to constraints such as the maximum allowable packet size and the processing delay requirement. The coded system for the correlated block-fading channel is shown in Figure 2.2.5. The k information bits are encoded to a codeword of n bits, and the coded bits are interleaved before transmission. The correlation among the fading gains is described by the covariance matrix

Cα = E[αα^H], α = (α1, . . . , αL)^T
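A simple way to generate a pair of fading gains with a prescribed correlation coefficient ρ (so that the off-diagonal entry of Cα is ρ) is to mix two independent complex Gaussians; this is a standard construction, assumed here as an illustration with our own names.

```python
import math, random

def correlated_pair(rho, rng):
    """Two unit-power complex Gaussian gains with E[h1 * conj(h2)] = rho."""
    def cg():
        return complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
    h1 = cg()
    h2 = rho * h1 + math.sqrt(1 - rho * rho) * cg()   # mix in fresh randomness
    return h1, h2

# Empirical check of the designed cross-correlation entry of C_alpha:
rng = random.Random(3)
rho, n = 0.8, 20000
acc = sum((h1 * h2.conjugate()).real
          for h1, h2 in (correlated_pair(rho, rng) for _ in range(n)))
assert abs(acc / n - rho) < 0.05
```

The Rayleigh gains of the two blocks are then |h1| and |h2|; as ρ approaches 1, the two blocks fade together and the diversity benefit of the second block disappears.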
2.3 OFDM-based Wireless Communication Systems

A single-carrier modulation scheme is limited by the delay spread of the channel. The idea of multicarrier modulation is to split the data stream into a number of sub-streams of lower data rate and transmit them in parallel. In single-carrier modulation, data is sent serially over the channel by modulating a single carrier; when the channel delay spread is comparable to the symbol period, inter-symbol interference (ISI) results. In such situations the available spectrum is divided into a number of narrow bands, called subcarriers. The data is divided into several parallel data streams or channels, one for each subcarrier. Each subcarrier is modulated with a conventional modulation scheme at a low symbol rate, maintaining the total data rate. The symbol duration can be made greater than the channel's maximum delay by selecting more subcarriers. In addition, the bandwidth of each subcarrier can be made small enough that each subcarrier experiences flat fading, and hence simple equalization suffices. Multicarrier modulation thus increases the symbol time, which significantly reduces ISI and simplifies equalization. However, the performance of long-symbol-time signals is degraded in time-variant
Figure 2.3.1: OFDM system implementation: information bits u1 . . . uk are encoded and mapped to L vectors x, transformed by the IFFT, prepended with a cyclic prefix (CP), sent over the multipath channel with noise nk,t, stripped of the CP, transformed by the FFT into received samples rk,t, and decoded to û1 . . . ûk.
channels. If the symbol time is greater than the coherence time of the channel, then the channel varies during a single symbol transmission and the performance is heavily affected.
2.3.2 OFDM
The subcarriers must have overlapping transmit spectra, but at the same time they need to be orthogonal to avoid complex separation and processing at the receiving end. Multicarrier modulation schemes that fulfil the above-mentioned conditions are called OFDM. Instead of a bank of modulators and a bank of matched filters, the Inverse Fast Fourier Transform (IFFT) and Fast Fourier Transform (FFT) provide an efficient implementation of the OFDM system, as shown in Figure 2.3.1, because they are cheap and do not suffer from component mismatch.
Inter-symbol interference occurs when the signal passes through the time-
of the sub-carriers may be lost, resulting in inter-carrier interference. The OFDM
system uses a cyclic prefix (CP) to overcome these problems. A cyclic prefix is
a copy of the last part of the OFDM symbol, prepended to the transmitted
symbol and removed at the receiver before demodulation. The cyclic prefix should
be at least as long as the channel impulse response. The prefix has
two advantages: it serves as a guard space between successive symbols
to avoid ISI, and it converts the linear convolution with the channel impulse response into
ICI, in addition to memory and time savings in measurement. In Figure 2.3.1, L
coded vectors x_i are generated by proper coding, interleaving, and mapping. After
adding the cyclic prefix, the OFDM signal is passed through the multipath channel. At the
receiver the cyclic prefix is removed and the received signal is passed through the FFT
block to obtain L received vectors y_i, where v_{k,t} is zero-mean Gaussian noise with
variance N0/2 for the k-th sample of the t-th OFDM symbol. N0 is the noise power,
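The cyclic-prefix argument can be checked numerically: with a CP at least as long as the channel memory, the linear channel convolution becomes circular, so each subcarrier sees a flat gain. A minimal sketch (the tap values are hypothetical):

```python
import numpy as np

def ofdm_one_symbol(X, h, cp_len):
    """Pass one frequency-domain OFDM symbol X through a multipath channel h.

    Illustrative sketch: with a cyclic prefix at least as long as the channel
    impulse response, each subcarrier k sees a flat gain H[k] = FFT(h)[k].
    """
    N = len(X)
    x = np.fft.ifft(X)                      # IFFT modulator
    tx = np.concatenate([x[-cp_len:], x])   # prepend the cyclic prefix
    rx = np.convolve(tx, h)                 # linear convolution with the channel
    y = rx[cp_len : cp_len + N]             # strip the cyclic prefix
    return np.fft.fft(y)                    # FFT demodulator

# Per-subcarrier model: Y[k] = H[k] X[k], with H the N-point FFT of h.
```

Comparing the output against `np.fft.fft(h, N) * X` confirms the flat-fading per-subcarrier model exactly, provided `cp_len >= len(h) - 1`.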
be achieved when multiple independent fading channels exist between the
transmitter and receiver. From a channel coding point of view, closely related
independent fading of the channel. Bits are closely related in a block code if they are part
of the same codeword, while in a convolutional code they are closely related if not too many
constraint lengths separate them. In time interleaving, bits must be separated in time, while
interleaving introduces decoding delay because the receiver has to wait until all
closely related bits are received before decoding. In time interleaving, closely related
bits of the code experience independent fading when the time separation between
the bits is greater than the coherence time of the channel. Similarly, the bandwidth
of the system should be greater than the coherence bandwidth of the channel to get
takes a block of symbols and then randomly permutes the order of the symbols in
the block. At the receiver the permutation is reversed to recover the original
are the best choice for frequency interleavers because a block of sub-carriers is processed,
while convolutional interleavers are suited to time interleaving due to their excellent
decoding properties. In certain situations, time interleaving does not give
tion.
based systems. From [24], for DAB operating at 225 MHz, a vehicle speed of
48 km/h corresponds to a Doppler frequency of 10 Hz, and using time interleaving results
in a decoding delay of several seconds, which is not suitable for a practical system.
OFDM has the ability to combine time and frequency interleaving to achieve the
best interleaving.
tem such as OFDM, correlation exists among the sub-carriers due to non-ideal
interleaving [24]. One of the many reasons for the loss of orthogonality between
the sub-carriers of an OFDM system is a high Doppler frequency. From
fading channel, but due to limited interleaving, correlation exists among the sub-carriers. Similarly, in a block fading channel, the channel gain remains constant over
achieve the performance analysis of the OFDM system we can use the generalized
model of the correlated block fading channel [26]. So the whole chain, from
IFFT to FFT in Figure 2.3.1, can be viewed as the correlated block fading channel of
Figure 2.2.5.
effectively limits the maximum achievable diversity from the channel. In such a
channel, the fading gain occurs in a block-wise fashion, i.e., all transmitted symbols
of the same block experience the same fading. Also, there is considerable correlation
among the fading blocks. We call such a channel a Correlated Block Fading
(CBF) channel. In some situations, the block fading channel can be assumed
Chapter 3

Low Density Parity Check Codes
3.1 Introduction
Parity Check) LDPC code. The LDPC code was first proposed by Gallager [7] in the
early 1960s along with its elegant iterative decoding procedure, but it did not get
proper attention until years later. Due to their heavy computational demands, LDPC codes were
ignored, but they have made an amazing comeback in the last few years. The name LDPC
refers to the parity-check matrix, which has a low density
of 1s compared to the number of 0s. In contrast to other coding schemes, LDPC
codes offer better performance with low decoding complexity. LDPC codes are
check matrix. The elements of the parity-check matrix are 0s and 1s, and all
the XOR of the bits in c corresponding to the 1s in the row of H. The number
of 1s in each row of the parity-check matrix is called the weight of that row
of H, while the number of 1s in each column is the weight of that column of the
parity-check matrix. If the length of the block code is n, then the number of rows
and where wc represents the column weight and wr the row weight of H. The
rate of the LDPC code is R ≥ 1 − wc/wr.
An LDPC code is regular if the number of 1s per column, wc, and the number of 1s
per row, wr, are constant for a given parity-check matrix. The LDPC codes proposed by
Gallager in [7] are regular, and their parity-check matrix can be defined as

    H = [ H_1 ; H_2 ; . . . ; H_wc ]   (the submatrices stacked vertically)

where the submatrices H_s have a special structure. For any integer β and wr greater
than 1, each submatrix H_s has row weight wr, column weight 1,
and size β × βwr. The submatrix H_1 has the special form that, for i = 1, . . . , β, the
i-th row contains its wr 1s in columns (i − 1)wr + 1 to iwr. The other submatrices
are just column permutations of H_1. Gallager [7] showed that the ensemble of
such codes has excellent distance properties provided that wc ≥ 1 and wr > wc.
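Gallager's construction can be sketched directly; random column permutations are one common choice for the unspecified permutations of H_1.

```python
import numpy as np

def gallager_h(beta, w_r, w_c, rng=None):
    """Build a Gallager-style regular parity-check matrix (illustrative sketch).

    H stacks w_c submatrices of size beta x (beta * w_r).  H1 places w_r
    consecutive 1s per row; the others are random column permutations of H1.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = beta * w_r
    H1 = np.zeros((beta, n), dtype=int)
    for i in range(beta):
        H1[i, i * w_r : (i + 1) * w_r] = 1   # row i: 1s in one band of w_r columns
    blocks = [H1] + [H1[:, rng.permutation(n)] for _ in range(w_c - 1)]
    return np.vstack(blocks)
```

Each submatrix contributes column weight 1, so the stacked matrix is (w_c, w_r)-regular by construction.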
parity-check matrix H.
An LDPC code is irregular if the number of 1s in the columns and rows is not
constant for a given parity-check matrix. Irregular LDPC codes can be parameterized by the degree polynomials λ(x) and ρ(x), defined as

    λ(x) = Σ_{i=2}^{dl} λ_i x^{i−1}    and    ρ(x) = Σ_{i=2}^{dr} ρ_i x^{i−1}

where λ_i and ρ_i are the fractions of edges belonging to degree-i variable
and check nodes, and dl and dr are the maximum variable and check node degrees
In coding theory, codes have been connected with graphs in a variety of
ways. The Tanner graph is the best way to represent LDPC codes, as this graph
not only gives a good visualization but also describes the decoding algorithm as
well. Tanner graphs of LDPC codes are bipartite graphs with variable (or bit)
nodes on one side and constraint (or check) nodes on the other. Each variable node
the bits of the codeword. Edges connect the corresponding variable
and check nodes. The Tanner graph representation of LDPC codes is closely
Such a parity-check matrix can easily be generated [7, 21]; it has as many check
nodes as parity bits and as many variable nodes as
bits in a codeword. The entry (i, j) is 1 if and only if the i-th check node is
[Figure 3.1.1: Tanner graph with check nodes c1–c5 and variable nodes v1–v10.]
connected to the j-th variable node in the graph. LDPC codes are defined by the
graph: they consist of the set of vectors c of block length n such that Hc^T = 0^T.
However, not every binary linear code has a representation by a sparse bipartite
graph; if it does, then the code is called a low-density parity-check (LDPC) code
[2]. The graphical representation of the half-rate regular (3, 6) LDPC code with
codeword length 10 for the following parity-check matrix is shown in Figure 3.1.1.
         1 1 1 1 0 1 1 0 0 0
         0 0 1 1 1 1 1 1 0 0
    H =  0 1 0 1 0 1 0 1 1 1
         1 0 1 0 1 0 0 1 1 1
         1 1 0 0 1 0 1 0 1 1
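The defining condition Hc^T = 0^T can be checked directly for this matrix; a minimal sketch:

```python
import numpy as np

# Parity-check matrix of the (3,6) regular LDPC code from Figure 3.1.1.
H = np.array([
    [1, 1, 1, 1, 0, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
    [0, 1, 0, 1, 0, 1, 0, 1, 1, 1],
    [1, 0, 1, 0, 1, 0, 0, 1, 1, 1],
    [1, 1, 0, 0, 1, 0, 1, 0, 1, 1],
])

def is_codeword(c):
    """A vector c is a codeword iff H c^T = 0 over GF(2)."""
    return not np.any(H @ np.asarray(c) % 2)
```

The row and column sums of `H` confirm the (3, 6) regularity, and the all-zero vector is trivially a codeword.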
Gaussian noise channel and the Rayleigh fading channel [6]. It is evident from simulation results that LDPC codes perform near the Shannon limit for larger block
lengths. For example, LDPC codes perform within 0.04 dB of the Shannon
limit at a bit error rate of 10^−6 with a block length of 10^7 [1].
produced coding systems that outperform turbo codes with lower complexity.
LDPC codes have the advantage of controlled sparseness, which
ensures a specified and small number of 1s in each row and column. The small
number of 1s in each column not only reduces the number of equations to be
solved by the decoder, but also makes practical decoding feasible by means
of iterative decoding.
The Tanner graph exploits the dependency structure of the various bits received from the channel. The iterative, or message passing, algorithm was discussed
in [7] and can be easily explained on a Tanner graph. As the name suggests,
in each round messages are passed from the variable nodes to the check nodes and
from the check nodes to the variable nodes. The message passed from a variable node
depends on the bit value received from the channel and the values received from
its neighboring check nodes, except the check node to which it will send the message.
Similarly, the message sent from a check node depends on its neighboring variable nodes, except the variable node to which it will send the message. The most
popular iterative algorithm is the belief propagation algorithm, in which the messages passed from the variable nodes to the check nodes and vice versa are not the
It is also assumed that the messages passed between variable and check nodes are
If the messages exchanged between the variable and check
nodes are independent, then the corresponding node can accurately calculate
The binary erasure channel (BEC) is perhaps the simplest channel model, as
described in Section 2.2.1. The message passing algorithm for LDPC codes has been
generalized in [5] using a hard decision rule for the binary erasure channel. Variable
nodes receive bit information from the channel, and some of the bits are erased
according to the channel erasure probability. Then, based on the received bit
information, the variable nodes send an erasure, 0, or 1 to the check nodes. The
messages sent by the variable nodes are received by the check nodes, which then
calculate the messages to be sent back to the corresponding variable nodes. From a check
node's perspective, if any incoming message, excluding the variable node to which it
will send the outgoing message, is an erasure, then the outgoing message is an erasure.
Otherwise, if the incoming messages, excluding the corresponding variable node, are
all 0s or 1s, then the outgoing message is the mod-2 sum of the incoming messages.
As soon as this information is received by the variable nodes, they send an erasure
to a check node if all incoming messages, except from the corresponding check node, are
erasures; otherwise all the non-erasure messages must agree on either 0 or 1, as
the channel introduces no errors. At that point the log-likelihood ratios for the
corresponding bits are calculated. The iterations can be halted as soon
as a valid codeword is obtained, i.e., Hc^T = 0, or one should explicitly impose the
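The erasure-message rules above reduce to a simple peeling procedure; a sketch, with erasures marked by a sentinel value:

```python
import numpy as np

ERASED = -1  # sentinel marking an erased bit

def decode_bec(H, y):
    """Peeling decoder for the BEC (illustrative sketch).

    y holds the received bits, with erased positions marked ERASED.  A check
    node whose neighbors contain exactly one erasure resolves it as the mod-2
    sum of the known neighbors; iterate until no check can make progress.
    """
    c = np.array(y)
    progress = True
    while progress and ERASED in c:
        progress = False
        for row in H:
            nbrs = np.flatnonzero(row)
            erased = [i for i in nbrs if c[i] == ERASED]
            if len(erased) == 1:
                known = [i for i in nbrs if c[i] != ERASED]
                c[erased[0]] = sum(int(c[i]) for i in known) % 2
                progress = True
    return c  # any remaining ERASED entries mean decoding failed
```

This is equivalent to the message passing description: a check sends an erasure unless all but one of its neighbors are known, in which case the missing bit is the mod-2 sum.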
In Figure 3.3.1, we present the results for a random, regular (3, 6) LDPC code
with rate one-half and codeword block length 512 bits over the binary erasure channel
using the hard decision rule. For simplicity, we assume that the all-zero codeword is
transmitted. The main focus of these simulation results is to show the enhanced
performance of LDPC decoding and to compare the behavior of different codeword
lengths. The simulation compares the erasure probability
with LDPC decoding to the erasure probability without LDPC decoding. From the
results, it is obvious that iterative LDPC decoding enhances
our simulation results, for example, it is 0.33, the value of the erasure probability
of iterative LDPC decoding is zero, and after that it increases almost linearly with
amount of the variable bits erased. But when the channel erasure probability
increases beyond 0.5, both erasure probabilities, with and without
probability with iterative LDPC decoding outperforms the uncoded system, but
beyond that threshold value both behave in the same manner.
length codes, we simulated 256 and 512 codeword block lengths on the BEC.
The LDPC code with block length 512 outperforms the one with block length 256, but
again this better performance holds only up to some threshold value, after which
codes with different block lengths behave in the same fashion. Similarly,
if we increase the block length, then beyond a certain limit a further increase
in the codeword length does not result in better performance. In our simulation
results this threshold is around a channel erasure probability of 0.445, as shown in
Figure 3.3.2.
channel the message passing decoder consists of a number of rounds. The incoming
messages at a check node, excluding the corresponding variable node, are processed, and after
the calculation the check node forwards the outgoing message to the corresponding
variable node. In a similar fashion, the incoming messages at a variable node,
excluding the corresponding check node, are processed, and the outgoing
message is forwarded to the check node. Here, too, the independent message assumption is
made.
hard decision rule, a variable node sends the information it received from the channel
to the check node. After round 0, if all the incoming messages, except from the
concerned check node, have the same value, then the variable node sends that value
to the check node; otherwise, it sends its received channel value to the check node.
On the other hand, a check node sends the modulo-2 addition of all incoming messages,
except the message from the corresponding variable node to which it will
flexible than the former. In this algorithm, for a given edge degree and a
particular round, there is a threshold such that if the variable node receives at least
that threshold number of identical messages, excluding the corresponding check
node, then it sends that value to the check node; otherwise, it sends its
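The threshold rule for hard-decision decoding can be sketched as a single message-update function (the function name and list-based interface are illustrative):

```python
def variable_message(received_bit, incoming, threshold):
    """Gallager-B style variable-to-check message rule (sketch).

    incoming: check-to-variable messages, excluding the destination check.
    If at least `threshold` of them agree on a value different from the
    received channel bit, send that value; otherwise repeat the channel
    value.  With threshold = len(incoming), this reduces to the "all agree"
    rule of the simpler hard-decision algorithm.
    """
    for b in (0, 1):
        if b != received_bit and incoming.count(b) >= threshold:
            return b
    return received_bit
```

Lowering the threshold lets the decoder overrule a noisy channel bit with less evidence, which is the extra flexibility mentioned in the text.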
This decoder is similar to the hard decision decoder. However, the update
messages exchanged between the two sets of nodes are not hard decisions but
At iteration zero, each variable node calculates the prior probability that the
received bit is 1. Let Pi be the conditional probability that the received code

    Pi = pr(ci = 1 | yi)

The message from variable node i to check node j that the received
delivered from variable node i to check node j that the given bit
received by variable node i is zero is qij(0) = 1 − Pi. The check nodes update
The message from check node j to variable node i, given that this
node believes that the received bit is zero, based on the incoming messages excluding
the corresponding variable node i to which this message is to be sent, is given
by the following equation:

    rji(0) = 1/2 + (1/2) ∏_{i′ ∈ Vj \ i} ( q_{i′j}(0) − q_{i′j}(1) )

where Vj represents the neighborhood of check node j, defined as the set of
variable nodes connected to it. This is the probability
that the whole sequence contains an even number of 1s, and hence rji(0) is the
is zero. Similarly, the probability that the message sent from check node j to
The variable nodes calculate their update messages from the incoming messages
of the check nodes, excluding the corresponding check node, as follows [1, 7]:
    qij(0) = K0 ∏_{j′ ∈ Ci \ j} r_{j′i}(0)
node i. The constants Kij, K0, and K1 should be selected in such a way that the
probabilities in the update equations sum to 1. The method of update
This completes one iteration, after which the variable nodes take their decision
    Qi(0) = K00 ∏_{j ∈ Ci} rji(0)    and    Qi(1) = K11 ∏_{j ∈ Ci} rji(1)
Again, K00, K11, and Ki should be selected in such a way that the sum of the
[Figure 3.4.1: General message update (a) from check node to variable node, via rji; (b) from variable node to check node, via qij.]
The maximum number of iterations is either explicitly specified, or the iterations are
continued until the codeword satisfies the parity-check matrix.
stability problem: intermediate results may become zero and give
an erroneous final result, so the best approach is to convert this algorithm into the
log domain.
MAP decoder. In other words, the soft-decision message passing decoder approximately computes the probability that a given bit is zero or one given
the entire received vector. Let x = (x1, x2, · · · , xn) be the message bits; the
BPSK modulator maps the signal from {0, 1} to ±1, where “+1” represents 1 and
Gaussian noise is added to the transmitted signal, and the received analogue signal
vector r = (r1, r2, . . . , rn) will have values around “−1” and “+1”. In a hard-
decision decoder, the BPSK demodulator estimates the received binary vector
Let us consider the probability that the received bit is zero given the received
vector, pr(ci = 0 | r). The likelihood ratio is

    li = pr(ci = 0 | r) / pr(ci = 1 | r)

and the log-likelihood ratio can then be represented as LLR = log_e li.
rithm consisting of iterations, each with two steps. In the
first step, messages are passed from the variable nodes to the check nodes, and in the
second step, messages are delivered from the check nodes to the variable nodes.
If the bits are equally likely, then the channel likelihood ratio for each bit is
given by

    li = pr(ci = 0 | ri) / pr(ci = 1 | ri) = e^{(2/σ²) ri}

and

    yi = log_e(li) = (2/σ²) ri

where σ² is the noise variance. The input to the decoder is now yi, which is just a
scaling of ri; that is, yi is the log_e of the ratio of the two probabilities. The channel
log-likelihood ratio yi is assigned to the i-th variable node, so the detection here is
executed bit-wise. In every iteration, each node sends its best estimate of the
log-likelihood ratio (LLR) to its neighboring nodes. The messages exchanged
between variable nodes and check nodes are assumed to be independent.
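The channel LLR computation is a one-line scaling of the received samples; a sketch (the sign convention is an assumption, as noted in the comment):

```python
import numpy as np

def channel_llrs(r, sigma2):
    """Channel LLRs y_i = log p(c_i=0|r_i)/p(c_i=1|r_i) = (2/sigma^2) r_i.

    Assumes the sign convention in which a positive received sample favors
    bit 0; flip the sign for the opposite BPSK mapping.
    """
    return 2.0 * np.asarray(r, dtype=float) / sigma2
```

These values initialize the variable nodes before the first iteration of message passing.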
In the first iteration, each variable node of degree d sends its LLR (yi) to its d
neighbors, and a check node of degree e receives LLRs
from its e neighboring variable nodes, as shown in Figure 3.5.1. Next, each check
node sums up the incoming LLRs, excluding the corresponding variable node
[Figure 3.5.1: First iteration. Step (a): each variable node of degree d sends yi to its neighbors; a check node of degree e receives LLRs yi1, . . . , yie. Step (b): the check node of degree e sends back messages Ci1, . . . , Cie.]
other check nodes calculate LLRs based on their corresponding neighboring variable
positive or negative, so we cannot take the logarithm of this function directly. For
this reason we have to keep track of the sign and take the logarithm of the magnitude only.

    log(tanh(|y_x|/2)) = Σ_{i=1}^{n} log(tanh(|y_i|/2))

So let us define

    f(x) = log(tanh(|x|/2))

so that, for x > 0, |x| = −log(tanh(|f(x)|/2)); up to sign, f is its own inverse.
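Using the self-inverse function φ(x) = −log tanh(x/2) (the negative of f above, which keeps magnitudes positive), the check-node update can be sketched and checked against the exact tanh rule:

```python
import numpy as np

def phi(x):
    # phi(x) = -log(tanh(x/2)) for x > 0; phi is its own inverse
    return -np.log(np.tanh(x / 2.0))

def check_node_llr(incoming):
    """Check-to-variable LLR from the incoming LLRs (destination excluded).

    Log-domain tanh rule: track the overall sign separately, transform the
    magnitudes with phi, sum them, and transform back with phi again.
    """
    incoming = np.asarray(incoming, dtype=float)
    sign = np.prod(np.sign(incoming))
    return sign * phi(np.sum(phi(np.abs(incoming))))
```

The result agrees with the direct product form 2·atanh(∏ tanh(y_i/2)), but the sum of transformed magnitudes avoids multiplying many small numbers.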
2
Now we can easily generalize these equations for the soft-message passing
decoder. In step (b) of first iteration the corresponding messages passed to the
Where V is the log likelihood ration calculated by check node, “e” is the degree
[Figure: message updates at iteration l — in step (a), a variable node of degree d combines the channel LLR yi with the incoming messages v1(l−1), . . . , vd(l−1) from step (b) of iteration l−1 and sends u1(l), . . . , ud(l).]
of the check node, the superscript “1” indicates iteration 1, and the subscript “1” indicates
the check node number. Similarly, the other check node messages in iteration 1 can be
calculated as
From now on, for any iteration l, the messages passed from the variable node
Iteration l, step (a):
where U represents the log-likelihood ratio calculated by the variable node, d represents the degree of the variable node, subscripts indicate the number of the node, and
and so on.
For the second step of iteration l, the messages passed from the check node
and so on.
[Figure: coding chain — message (k bits) → codeword (n bits) → noise → received signal → decoded message (k bits).]
The final codeword can be represented as ĉ = [ĉ1, ĉ2, . . . , ĉn]. The iterations can
In our simulations we not only compared iterative LDPC decoding with the
uncoded system, but also investigated bit error rates for various codeword block
lengths. BPSK modulation is used in such a way that “+1” represents 1
and “−1” represents 0. The all-zero codeword is transmitted to avoid the encoding
[Figure: bit error rate versus Eb/N0 (dB).]
modulated transmitted signal and n is zero-mean additive noise with variance σ²
signal. From [8], the AWGN channel log-likelihood ratio can be calculated as

    yi = (2/σ²) ri

where ri is the received value after the AWGN and yi is the received log-
From the simulation results in Figure 3.5.4, it is obvious that the coded system
outperforms the uncoded system, confirming the popular statement that a larger
codeword block length improves the bit error rate, as an increase in codeword
length comprehensively enhances the performance of the system. Below 1.5
the codeword length 1008 outperforms all other systems by a huge margin.
avoid the encoding complexity of the LDPC code, we transmitted the all-zero codeword and
the receiver; then the intrinsic information at the receiver variable node is given
by [6, 8]

    y0 = (2/σ²) y α

This intrinsic information is sent to the check nodes by the variable nodes in the first
iteration, and the messages are updated based on this information at the check
be started.
For the Rayleigh fading channel we simulated codeword lengths 204 and 504 to
that again the larger codeword block length gives better performance. It
should also be kept in mind that this channel performs comparatively worse
than the AWGN channel due to the additional distortion of the normalized
Rayleigh fading factor. Up to an SNR of 6 dB both codeword lengths have
almost the same bit error rate, but after that there is a visible deviation in performance, and the code with block length 504 outperforms the code with block length 204
[Figure: bit error probability versus SNR — LDPC decoding with N = 204 and N = 504.]
comprehensively.
The intrinsic information can be calculated as in the Rayleigh fading
case, but now for each half of the codeword block with its corresponding fading gain:

    y01 = (2/σ²) y1 α1
    y02 = (2/σ²) y2 α2
As expected from the simulation results shown in Figure 3.6.2, the performance over
the block fading channel is degraded compared with the fast Rayleigh fading
channel. The degradation in the block fading case arises because, if the

[Figure: bit error probability versus SNR.]

value of the fading factor is bad, the whole half-block of the codeword will be
in error, while in the case of fast Rayleigh fading only a single bit will be distorted.
codeword block lengths 504 and 204 for the given range of SNR values, as shown in
Chapter 4

Digital Fountain Codes
4.1 Introduction
Digital Fountain Codes have promising performance on the erasure channel, which
is a suitable model for packet-switching networks. The first practical realization of
Fountain codes was introduced by M. Luby in [11] and was further improved
in [14]. The Raptor code [14] is a class of Digital Fountain Codes; it can be used independently of the loss rate of the erasure channel and achieves near-optimal performance
on every erasure channel [11]. Digital Fountain Codes are considered “rateless”,
which means that, unlike traditional block codes such as LDPC codes and
RS codes, Digital Fountain Codes do not have a fixed code rate, and the rate
the decoder is able to decode. The rate is then not known a priori, as it is in
as needed to recover all the message bits regardless of the channel conditions.
Existing rateless codes have the ability to adapt to the channel conditions
without channel knowledge at the transmitter. This property
of rateless codes makes them a very good candidate for relay networks [13] due
4.2 LT Coding
The Luby Transform code introduced by M. Luby [11] is the first practical
needed depending upon the quality of the channel. The decoder can retrieve the
original data from any set of transmitted codeword symbols that is only
slightly larger than the original data [11]. Regardless of the statistics of the erasure
events on the channel, we can send encoded symbols until the decoder
becomes able to recover the original data, so the LT code is
optimal for any erasure channel. The encoding and decoding time is a function of
the original data. It can recover the k symbols from any k + O(√k · ln(k/δ)) encoding
symbols with probability 1 − δ, using on average k + O(√k · ln(k/δ)) symbol operations
[11]. Because of the high complexity of LT codes, the Raptor code was proposed
appropriate pre-coding methods, but the Raptor code also requires extra memory
to store the pre-code output. The family of Digital Fountain Codes has received
much attention from designers and has been used in many applications at the transport
layer.
as follows:
3. Bitwise sum, modulo 2, these dn bits to get the value of one output bit cn.
The encoding process defines a bipartite graph connecting the codeword bits
the mean degree d is significantly smaller than the message length k, this graph
similar to LDPC codes and can be written in terms of vectors and matrices,
i.e.

    c^T = G s^T

where s and c are the original message bit sequence and the codeword bit sequence, respectively.
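The encoding steps above can be sketched for one output bit (the function name and interface are illustrative):

```python
import numpy as np

def lt_encode_bit(s, degree_dist, rng):
    """Generate one LT output bit from the message bits s (sketch).

    degree_dist: probabilities over degrees 1..k.  Sample a degree d, pick d
    distinct message positions uniformly at random, and XOR those bits.
    Returns the output bit together with its chosen neighbor set.
    """
    k = len(s)
    d = rng.choice(np.arange(1, k + 1), p=degree_dist)   # step 1: sample degree
    nbrs = rng.choice(k, size=d, replace=False)          # step 2: pick d neighbors
    return int(np.bitwise_xor.reduce(np.asarray(s)[nbrs])), nbrs  # step 3: XOR
```

Each call produces one column of the fountain; the neighbor sets define the rows of the generator matrix G.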
[Figure: bipartite graph of the LT code example — input symbols/bits s0, . . . , s3 (variable nodes) connected to output symbols/bits c0, . . . , c4 (check nodes).]

    [c0]   [0 0 1 0]
    [c1]   [1 0 0 1] [s0]
    [c2] = [0 1 0 1] [s1]     i.e.  c^T = G s^T
    [c3]   [1 1 1 0] [s2]
    [c4]   [0 1 1 0] [s3]
symbol is called the degree of this symbol and corresponds to the 1s appearing
in its row of G; here c0 has degree 1, for example. The total amount of
For the decoder to recover the original data from the transmitted bits, it must
know the degree and the set of neighbors of each encoding symbol. By
upon the application. One way may be to deliver the degree and list of neighbors
4.2.2 LT Decoder
The main purpose of the decoder is to recover s from c^T = G s^T. When the generator matrix G is large, the decoding cost is much higher. However, decoding
a sparse-graph code over an erasure channel is an easy task using the special
belief-propagation (BP) method, in which a codeword bit has only two states, either
plained as follows:
If no such single-degree message is available, then the decoder is unable to recover
further messages.
3. Find all the codewords that are connected to sk and update these codewords by:
5. Repeat steps 1 to 4 until all the message symbols are recovered or no more
So initially the encoding symbols of degree one are released to cover their unique
neighbor. The covered input symbols which have not yet been processed are called the
ripple [11], so at this stage all input symbols are in the ripple. At each step one
input symbol is processed and removed as a neighbor from the encoding symbols to
which it is connected. Each time, this process may or may not cause growth of the

[Figure 4.2.2: Tanner graphs of (a) an LDPC code and (b) an LT code, with variable nodes, check nodes, and output encoded bits.]
The Tanner graph of LT codes is similar to that of LDPC codes; however, there
is one difference. The Tanner graph of an LDPC code has only one set of variable
nodes, while that of an LT code contains two types of variable nodes: information
bit variable nodes and encoded bit variable nodes. The information bits are not
transmitted over the channel, while the encoded bits are. At the
receiver end, the information bits can be recovered from the encoded
bits. The Tanner graphs of both LDPC and LT codes are shown in Figure 4.2.2.
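The peeling steps listed in Section 4.2.2 can be sketched as follows (the packet representation is illustrative):

```python
def lt_decode(k, packets):
    """Peeling decoder for LT codes (sketch).

    packets: list of (value, set_of_neighbors).  Repeatedly take a packet of
    degree one, recover its message bit, and XOR that bit out of every other
    packet that references it; stop when no degree-one packet remains.
    """
    s = [None] * k
    packets = [(v, set(n)) for v, n in packets]
    done = False
    while not done:
        done = True
        for v, nbrs in packets:
            if len(nbrs) == 1:
                i = next(iter(nbrs))
                if s[i] is None:
                    s[i] = v                    # release: recover message bit i
                for j, (vj, nj) in enumerate(packets):
                    if i in nj:                 # remove i from every packet
                        packets[j] = (vj ^ s[i], nj - {i})
                done = False
                break
    return s  # entries left as None were not recovered
```

If the ripple empties before all bits are recovered, the remaining entries stay `None`, which is exactly the decoding-failure event the degree distribution is designed to avoid.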
The degree distribution ρ(d) is the crucial part of LT code design. The guide-
• As the encoding and decoding complexity both increase with the
number of edges in the graph, the critical quantity is the average
Every input symbol must have at least one edge for successful decoding. The
encoder selects the edges between encoding symbols and input symbols randomly;
the number of edges must be at least k · ln(k) [10], where k is the number
decoding process is possible, then the average degree of each encoding symbol is
ln(k) [10].
Ideally, the decoding graph should run in such a way that just one encoding
symbol has degree 1 at each step. At each step, when this encoding symbol is
processed, exactly one new degree-1 encoding symbol should appear. This goal can be

    ρ(d) = 1/k              for d = 1
    ρ(d) = 1/(d(d − 1))     for d = 2, 3, · · · , k
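The ideal soliton distribution can be written down directly; it sums to 1 because the terms 1/(d(d−1)) telescope:

```python
import numpy as np

def ideal_soliton(k):
    """Ideal soliton distribution rho(d) over degrees d = 1..k."""
    rho = np.zeros(k + 1)
    rho[1] = 1.0 / k
    d = np.arange(2, k + 1)
    rho[d] = 1.0 / (d * (d - 1))
    return rho[1:]  # index 0 of the result corresponds to degree 1
```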
However, just like other idealized designs, this distribution works poorly in practice. The degree-1 encoding symbol is very likely to disappear at some point
of the decoding process due to fluctuations around the expected value. Vanishing of the ripple can occur even at small variance. The intuition for solving this
problem is to increase the number of output symbols of degree 1, which needs
The ideal soliton distribution works poorly in practice because the expected ripple size
is 1, which is too small. The ripple will disappear even at small variance around the
expected ripple size, causing the decoding process to fail.
The robust distribution increases the size of the ripple so that even at higher
variance the ripple still exists. At the same time, to keep the number of encoding symbols
small, the ripple size should be kept small to avoid redundancy in recovering the
input symbols.
The robust soliton distribution uses two extra parameters, c and δ, and the design
of the distribution is such that throughout the process the degree-1 encoding
instead of one. The parameter δ is the probability that the decoding fails after a
certain number K′ of encoding symbols has been received, and the parameter c is a constant
of order 1, though values smaller than 1 can give better results. This design ensures that
the probability that the ripple size deviates from the value ln(k/δ) · √k is at most δ, which
can be achieved with K′ = k + O(ln(k/δ) · √k) encoding symbols [11]. A positive
function is defined as
    τ(d) = S/(kd)             for d = 1, 2, · · · , (k/S) − 1
    τ(d) = (S/k) · ln(S/δ)    for d = k/S
    τ(d) = 0                  for d > k/S

Then the ideal soliton distribution ρ(·) is added to τ(·) and normalized to obtain
the robust soliton distribution

    µ(d) = (ρ(d) + τ(d)) / Z

to ensure successful decoding with probability at least 1 − δ, is K′ = kZ [10].
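A sketch of the robust soliton distribution following the definitions above (rounding k/S to an integer spike position is an implementation choice):

```python
import numpy as np

def robust_soliton(k, c, delta):
    """Robust soliton distribution mu(d) over degrees d = 1..k (sketch).

    S = c * ln(k/delta) * sqrt(k); tau adds probability mass at low degrees
    and a spike at d = k/S, and the sum rho + tau is normalized by Z.
    """
    S = c * np.log(k / delta) * np.sqrt(k)
    rho = np.zeros(k + 1)
    rho[1] = 1.0 / k
    d = np.arange(2, k + 1)
    rho[d] = 1.0 / (d * (d - 1))
    tau = np.zeros(k + 1)
    spike = int(round(k / S))                   # rounding is an implementation choice
    for dd in range(1, min(spike, k + 1)):
        tau[dd] = S / (k * dd)
    if 1 <= spike <= k:
        tau[spike] = (S / k) * np.log(S / delta)
    Z = rho.sum() + tau.sum()
    return (rho + tau)[1:] / Z                  # index 0 corresponds to degree 1
```

Compared with the ideal soliton, the degree-1 mass is boosted from 1/k by the extra τ(1) = S/k term, which is what keeps the ripple from vanishing.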
The distribution τ(·) plays an important role in successful decoding. At the
beginning of the decoding process, τ(1) ensures a sufficient ripple size.
Now consider the decoding process at an intermediate stage, and suppose at this
stage there are M input symbols unprocessed. Since in each step, when an input
symbol is processed, the size of the ripple decreases by one, an increase in
size is needed as compensation. If the ripple size is S, then the chance
that a released encoding symbol adds one to the ripple is (M − S)/M. For an average
increment of one in the ripple size at this stage, M/(M − S) encoding symbols are required.
We can verify from [11] that when M input symbols remain unprocessed,
the overall release rate contains a constant portion, and that constant portion
the ripple size is to be maintained around S, then degree i should be proportional to the
Luby in [11] explained how the spike in τ(d) at d = k/S is included to ensure that every input symbol is connected to an encoding symbol at least once; the high-degree encoding symbols drawn from this spike cover essentially all of the input symbols. As this happens only once, we do not pay much penalty in terms of the average degree.
4.4 Simulations and Conclusions of LT Code on BEC

In this section, we present the results of the LT code on the binary erasure channel model explained in section 2.2.1. We use blocks of 250 and 500 information bits and study the behavior of the LT decoder over various erasure probabilities ε of the channel. The distribution used in our analysis is the robust soliton distribution. The decoder runs a belief propagation algorithm over the Tanner graph, similar to LDPC codes. However, in this case the encoder sends a fountain of encoded bits and the decoder runs the belief propagation algorithm over a certain number of received encoded bits. If the original data is not fully recovered, the decoder processes one more encoding symbol, repeating until it recovers the original data.
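Over the BEC, belief propagation reduces to a peeling process: repeatedly use degree-one encoding symbols (the ripple) to resolve input bits and subtract them from the remaining symbols. The following self-contained sketch (our own illustration, using the ideal soliton distribution for brevity rather than the robust one) shows such a decoder collecting fountain symbols through an erasure channel until the data is recovered:

```python
import random

def lt_encode_symbol(data, dist, rng):
    """One fountain symbol: draw a degree from `dist` (index = degree),
    pick that many distinct input positions, and XOR their bits."""
    k = len(data)
    degree = rng.choices(range(len(dist)), weights=dist)[0]
    neighbors = set(rng.sample(range(k), min(degree, k)))
    value = 0
    for i in neighbors:
        value ^= data[i]
    return neighbors, value

def lt_peel(received, k):
    """Peeling decoder: process degree-one symbols until none remain."""
    decoded = [None] * k
    symbols = [(set(nbrs), val) for nbrs, val in received]
    progress = True
    while progress:
        progress = False
        for idx, (nbrs, val) in enumerate(symbols):
            # subtract already-decoded input bits from this symbol
            known = {i for i in nbrs if decoded[i] is not None}
            for i in known:
                val ^= decoded[i]
            nbrs = nbrs - known
            symbols[idx] = (nbrs, val)
            if len(nbrs) == 1:                # a ripple symbol
                (i,) = nbrs
                if decoded[i] is None:
                    decoded[i] = val
                    progress = True
    return decoded

rng = random.Random(1)
k = 50
data = [rng.randint(0, 1) for _ in range(k)]
# ideal soliton distribution (the thesis simulations use the robust one)
dist = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
received, decoded = [], [None] * k
while None in decoded:                        # keep pulling fountain symbols
    sym = lt_encode_symbol(data, dist, rng)
    if rng.random() > 0.2:                    # erasure probability 0.2
        received.append(sym)
        decoded = lt_peel(received, k)
```

The number of symbols collected before the loop exits, divided into k, gives exactly the "rate of the LT code" plotted in Figure 4.4.1.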
From the simulation results shown in Figure 4.4.1, it is clear that as the erasure probability increases, the decoder requires more and more encoded symbols to recover the original data, because the number of erased bits also increases. In Figure 4.4.1, our focus is on the variation of the rate of the LT code with increasing information block length. We compare the performance of 250-bit and 500-bit block lengths. It is evident that in the
[Figure 4.4.1: Performance of LT Code — Rate of LT Code vs. channel erasure probability]
LT decoder, we can recover the original data at a higher code rate as the block length of the information bits increases. In other words, the capacity gap reduces with larger block length. So we can conclude that with larger codeword lengths we get better performance from the LT decoder; the LT code almost recovers the capacity of the erasure channel. The LT code over noisy channels such as the Binary Symmetric Channel (BSC) and the AWGN channel was studied in [12]. The simulation results there, for the robust soliton distribution as well as for distributions optimized for the Binary Erasure Channel (BEC) in [14], show the same behavior and are not so impressive, due to high overhead and high word error rate. For both distributions the WER and BER curves show a noticeable error floor, and this error
floor exists because there are certain input symbols which have connectivity with only a few output symbols. From our simulation analysis and from [11, 12], we come to the conclusion that the LT code performs very well on the BEC. But as the complexity grows as O(k ln(k)) [11], where k is the length of the information bits, LT codes are not practical at larger block lengths, and the error floor over noisy channels is a further drawback. The complexity can be reduced to a linear function of the information bits, i.e. O(k), in the Raptor codes discussed next.
4.5 Raptor Codes

The encoding and decoding complexity and the error floor exhibited by the LT code have been overcome by an extension of the LT code. The extended codes are called Raptor codes, which have constant cost of encoding and decoding as well as improved performance over noisy channels [12, 14]. Encoding and decoding complexity in the LT code is of order O(k ln(k)) because the average degree of the packets in the sparse graph is ln(k); successful decoding requires O(k ln(k)) edges in order to recover all the input symbols with high probability. However, the Raptor code uses an LT code as an inner code with a much lower average degree. The consequence of this lower average degree of edges is that a fraction of the input symbols will not be connected to the graph and hence cannot be recovered by the LT part; these are instead handled by a traditional erasure correcting code used as an outer code. The LT code and the traditional erasure correcting code must be designed properly such that the outer erasure
correcting code can successfully recover the input symbols left by the LT code.
A landmark paper [15] presented the analysis of Raptor codes over noisy channels. In this paper the author derived the properties of capacity-achieving Raptor codes over noisy channels, and a code construction is presented for such channels. As shown in [15], there exist no universal Raptor codes for channels other than the BEC; the construction there uses an LDPC code as an outer code. The belief propagation algorithm runs over the Tanner graph for both the LT and LDPC parts of the Raptor decoder. The most critical part of Raptor code design is the degree distribution of the output symbols over noisy channels such as the Binary Input Additive White Gaussian Noise channel. A Raptor code can be characterized by (k, C, µ), where µ is the output symbol degree distribution and C is the pre-code used as the outer code. First, k information bits are mapped to an n-bit codeword of the pre-code C, where n is slightly greater than k; these bits are called the input symbols of the LT code, or intermediate symbols, and from them the information bits are obtained at the receiving end after successful decoding. The Tanner graph of the Raptor code is sketched in Figure 4.5.1.

[Figure 4.5.1: Raptor encoding — Information Bits → Pre-coding → Input Bits → LT coding]

In contrast to the LT code, the Raptor code requires storage for the intermediate symbols. The space required for the Raptor code is 1/R, where R is the rate of the pre-code [14]. The encoding and decoding cost of the Raptor code is different from that of the LT code due to the presence of the pre-code: the decoding cost of the Raptor code is the sum of the decoding costs of the LT part and the pre-code.
The LT code can be seen as a Raptor code with no pre-coding that instead uses a very sophisticated output degree distribution to recover all the input symbols. At the other extreme, there is the Raptor code with the simplest possible degree distribution but a powerful pre-code. The LT part of the Raptor code uses a degree distribution with a lower average degree in the encoding and decoding process. As a result, some of the input symbols cannot be connected to the output symbols and hence cannot be recovered. If the average
degree of the output symbols is d̄, the expected fraction of unrecovered input symbols is f̃ = e^(−d̄) [10].

[Figure 4.5.2: Example Raptor code — 10 information bits, 13 intermediate bits (two unattached), 11 received encoded bits]

In the Raptor code [14],
k information bits are first encoded into n = k/(1 − f̃) packets with an outer code which can correct erasures if the erasure probability of the channel is f̃.
Then the weak LT code is used to transmit the n bits, and as soon as slightly more than n output symbols are received, the decoder can recover the n intermediate symbols, from which the outer code recovers the original k information bits. The Raptor code can easily be explained by the schematic diagram shown in Figure 4.5.2. For our example we use k = 10 information bits, and then by using the
pre-code these bits are encoded into n = 13 intermediate bits. The degree distribution used in this example has an average degree of around 3. This simple degree distribution fails to connect two of the intermediate bits to the output bits. The LT decoder can recover the 11 connected intermediate bits from the transmitted encoded bits; the outer traditional erasure code then recovers the original 10 information bits.
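As a quick numeric check of the relation n = k/(1 − f̃) (a back-of-the-envelope sketch of our own; the average degree value 2.94 below is our inference, not a number stated in the thesis), an average output degree near 3 is consistent with the k = 504 → n = 532 pre-code used in the later simulations:

```python
import math

def precode_size(k, avg_degree):
    """A weak LT code with average output degree d leaves a fraction
    f = exp(-d) of the input symbols unconnected in expectation; the
    pre-code must therefore expand k information bits to roughly
    n = k / (1 - f) intermediate bits."""
    f = math.exp(-avg_degree)
    return f, round(k / (1.0 - f))

f, n = precode_size(504, 2.94)   # n comes out near 532
```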
4.6 BP Algorithm of Raptor Code over Noisy Channels

The performance of the Raptor code on any channel is measured by the number of bits required for successful decoding and by the number of decoding attempts made by the receiver. The decoder will try to decode each time it receives a noisy bit from the channel. Over the Binary Erasure Channel the decoder stops decoding on a stopping set [14] and, as soon as it receives a new bit, resumes decoding from the stopping set; but this is not the case over noisy channels, where the receiver waits for a finite number of symbols from the limitless stream sent by the transmitter and then starts decoding. Each time the rate decreases, a new decoding attempt is made.
The LLRs for the AWGN and Rayleigh fading channels can be calculated according to [8]. First, messages are exchanged between the input and output symbols of the LT part of the Raptor decoder; upon convergence, we then run belief propagation in the LDPC pre-code part of the Raptor code. In round "0" of the BP algorithm the input symbols of the LT part send 0 to the output symbols, and thereafter at each iteration the messages are passed from the output symbols to
the input symbols and vice versa. At any iteration l, the message passed from the output symbol "j" to the input symbol "i" is denoted L^(l)_{j→i}, and the message passed from the input symbol "i" to the output symbol "j" is denoted L^(l)_{i→j}; the set of neighbors of a node v, i.e. the nodes connected to it, is denoted N(v). The value of L^(l)_{j→i} can be expressed as [15, 16]

    L^(l)_{j→i} = 2 tanh⁻¹ ( tanh(L_{0,j}/2) · ∏_{i′ ∈ N(j)\{i}} tanh(L^(l−1)_{i′→j}/2) )
The above equation means that output symbol "j" sends its message to input symbol "i" taking into account the incoming LLRs of all neighbors of "j" except "i" itself. Similarly, the message passed from input symbol "i" to the output symbol "j" is

    L^(l)_{i→j} = Σ_{j′ ∈ N(i)\{j}} L^(l)_{j′→i}
Again, the above equation means that the message from input symbol "i" to the output symbol "j" takes into account all neighbors of "i" except "j". The iterations are stopped when some maximum number of iterations is reached, and at the end the LLR of the input symbol "i" is calculated as

    L_i = Σ_{j ∈ N(i)} L^(l)_{j→i}

which means that the LLR of input symbol "i" is computed taking all the neighbors of "i" into account. After computing the LLRs for every input symbol, these then act as prior LLRs of the input symbols of the pre-code decoder. The
BP algorithm then runs on the input and output symbols of the pre-code decoder. After convergence over the pre-code, the receiver takes a decision about each individual symbol/bit. If the required bit error rate has not been achieved, the decoder takes more output symbols and runs the decoding process again until the required level of reliability is reached.
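A minimal sketch of these two message updates (our own illustration; the function and variable names are ours, not from the thesis) is:

```python
import math

def output_to_input(L0_j, incoming):
    """Tanh-rule update: L_{j->i} = 2 atanh( tanh(L0_j/2) *
    product over i' in N(j) \\ {i} of tanh(L_{i'->j}/2) ).
    `incoming` holds the previous-iteration messages from all
    neighbors of output symbol j except input symbol i."""
    prod = math.tanh(L0_j / 2.0)
    for L in incoming:
        prod *= math.tanh(L / 2.0)
    prod = max(min(prod, 1.0 - 1e-12), -1.0 + 1e-12)   # numerical guard
    return 2.0 * math.atanh(prod)

def input_to_output(incoming):
    """L_{i->j}: sum of messages from all neighbors of i except j."""
    return sum(incoming)

# round 0: inputs send 0, so any output symbol of degree > 1 sends 0 back,
# while a degree-one output symbol forwards its own channel LLR
msg = output_to_input(1.2, [])          # equals 1.2 (up to rounding)
```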
The performance of the Raptor code over AWGN was studied in [12] for the degree distribution optimized for the BEC [14]. From the results of [12, 15], we can conclude that the Raptor decoder requires slightly more output symbols than input symbols to recover the original data, and as the noise increases the Raptor code requires more overhead to achieve the same performance. The comparison of the LT and Raptor codes in [12, 15] shows that the Raptor code outperforms the LT code, since an error floor exists in the LT code. The degree distribution optimized for the BEC performs well on the AWGN channel, but a degree distribution designed specifically for these noisy channels enhances performance further, as thoroughly studied in [15] for large block length codes.
4.7 Capacity Achieving Raptor Code for Noisy Channels

From [15], for a BIMSC C, we denote the capacity of the channel C by Cap(C) and define E(C) as the expectation E(tanh(L_{0,j}/2)), where L_{0,j} is the channel LLR of output symbol j. Their ratio is defined as

    Π(C) := Cap(C) / E(C)
A Raptor code for noisy channels can be parameterized as (k, C, Ω_k(x)). The Raptor code is said to be capacity achieving over a given noisy channel C if, as k approaches infinity, the bit error rate tends to zero. Important parameters for a capacity-achieving Raptor code over a BIMSC are the fractions of output symbols with degree one and two, i.e. Ω₁ and Ω₂. From [15], for any BIMSC C, Ω₁ and Ω₂ must satisfy

    Ω₁^(k) > 0 and lim_{k→∞} Ω₁^(k) = 0,

and

    Ω₂^(k) > Π(C)/2 and lim_{k→∞} Ω₂^(k) = Π(C)/2.
The above conditions are analogous to the stability condition of LDPC codes. The stability condition of LDPC codes deals with an upper bound on the fraction of variable nodes of degree 2, such that if the fraction of degree-2 variable nodes is lower than this bound, belief propagation can converge. Conversely, the stability condition of the Raptor code in [15] states that if the fraction of degree-2 output symbols is greater than the lower bound Ω₂(C) = Π(C)/2, then the decoding process can be successfully started. The lower bound in the stability condition
of the Raptor code, Ω₂(C), depends on channel parameters such as the noise variance.
The author of [15] outlines the construction of Raptor codes for the BIAWGN channel based on two assumptions. The first assumption is that the factor graph of the Raptor code is cycle free, so that the messages exchanged between input and output symbols are independent; this assumption is justified when the Tanner graph of the Raptor code is large. The second assumption is that the probability density of the messages sent from the input symbols is a mixture of symmetric Gaussian densities, with parameters depending on the degrees of the input symbols. The selection of the signal-to-noise ratio (SNR) is also very critical in the design of the Raptor code. From the results in [16], when designing a Raptor code for a noisy channel the SNR has two bounds, SNR*_low and SNR*_high, such that if the channel SNR lies outside the interval [SNR*_low, SNR*_high], then a Raptor code either does not exist or, if it exists, cannot achieve the capacity of the channel. It is also evident from the simulation results in [16] that if the SNR is close to SNR*_low within the interval, the designed Raptor code performs very close to capacity, whereas if the SNR is close to SNR*_high within the interval, the gap between the curve produced by the designed Raptor code and capacity becomes predominant.
4.8 Simulations of Raptor Code

The first channel we consider is the AWGN channel, described in chapter 2.2.3.

[Figure 4.8.1: System block diagram — Source → Pre-code Encoder → LT Encoder → Modulator → Channel → LT Decoder (messages m_{i,o}) → Pre-code Decoder (messages m_{c,v}) → Hard Decision → Destination]

The encoder part of the Raptor code consists of two parts, i.e. the pre-code encoder and the LT encoder. Here we use the
irregular LDPC code of rate .95 as the pre-code. Similarly, the decoder consists of the LT decoder and the LDPC decoder. The message LLRs are first converged by the LT decoder and, upon convergence, the LDPC decoder runs the BP algorithm to retrieve the original information bits. We get 532 encoded bits, also called intermediate bits, from the .95 rate LDPC encoder, corresponding to 504 information bits. We use binary phase shift keying (BPSK), which maps each encoded bit to a signal point. If we have ideal channel state information, then the channel log likelihood ratio is
    L₀ = ln [ P(x = 0|y) / P(x = 1|y) ] = (2/σ²) y
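For example (an illustrative sketch of our own, not the thesis simulator), the BPSK mapping and this LLR computation look as follows:

```python
import math
import random

def bpsk_awgn_llr(bits, sigma, rng):
    """Map bit 0 -> +1 and bit 1 -> -1, add N(0, sigma^2) noise,
    and return the channel LLRs L0 = 2*y / sigma^2."""
    llrs = []
    for b in bits:
        t = 1.0 if b == 0 else -1.0          # BPSK signal point
        y = t + rng.gauss(0.0, sigma)        # AWGN
        llrs.append(2.0 * y / sigma ** 2)
    return llrs

rng = random.Random(0)
llrs = bpsk_awgn_llr([0, 1, 0, 1], sigma=0.5, rng=rng)
hard = [0 if L > 0 else 1 for L in llrs]     # sign of the LLR = hard decision
```

These channel LLRs are exactly the L_{0,j} terms fed to the tanh-rule update of the LT decoder.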
[Figure 4.8.2: Bit Error Probability vs. 1/R (Delay) for Eb/N0 = 1, 2, 2.5 dB]
In our simulation the Raptor decoder waits for a finite number of output symbols, in our case 700 symbols, and then calculates the channel LLR for each bit/symbol. The LT decoder part of the Raptor decoder runs the BP algorithm on these output symbols and, after running through 40 iterations, gives the LLRs of the input symbols. These LLRs of the input/intermediate symbols act as prior LLRs for the LDPC decoder, which in turn runs for 30 iterations; finally, we take a decision at the end of the BP algorithm. In the next decoding attempt the decoder collects 100 more output symbols and repeats the whole decoding process. The degree distribution used in the LT part is optimized for σ = .977 and given in [15]. Our simulation results are shown in Figure 4.8.2. The graph plots bit error probability versus the inverse of the rate R, which in some sense can be called delay: the decoder waits until it receives a number of output symbols equal to (1/R) times the number of information bits, and then starts a decoding attempt. Up to around rate one half, the slope of the BER curve is steep, as in the case of the LT code. But after that the BER
reduces with a somewhat lower slope. This slight change in slope occurs because the input bits obtained from the LT part are not reliable enough for the outer code to push the BER curve down sharply. When the code rate is lowered further, the input bits of the LT part are connected to more output bits and hence become reliable enough for the outer code to enhance performance. It is also evident that increasing Eb/N0 improves performance, so the same bit error rate can be achieved at a higher rate for higher values of Eb/N0.
For the Rayleigh fading channel, the system block diagram is similar to that for the AWGN channel, as shown in Figure 4.8.1. An irregular LDPC code with rate .95 is again used as the pre-code, with the degree distribution taken from [15]. Again 504 information bits are used for the LDPC code, which converts them into 532 intermediate bits. These intermediate bits are then encoded by the LT encoder, and BPSK modulation is used to map the output bits of the LT encoder to signal points. With ideal channel state information, the channel LLR is

    L₀ = ln [ P(x = 0|y) / P(x = 1|y) ] = (2/σ²) y α,

where α is the Rayleigh fading amplitude.
The focus of our simulation is to investigate the error rate performance of the Raptor code over various values of the noise variance in an uncorrelated Rayleigh fading channel. The simulation results in Figure 4.8.3 demonstrate that increasing the noise variance leads to a degradation in the performance of the Raptor code.

[Figure 4.8.3: Bit Error Probability vs. 1/R (Delay) for Eb/N0 = 1, 2, 3, 4 dB]

The reason for the slope of the BER curve is the same as for the AWGN channel, i.e. the reliability of the input bits delivered by the LT part. It is also evident from the simulation results in Figure 4.8.3 that increasing the value of Eb/N0 causes
improvement in the bit error rate. It should also be noted that with a lower rate of the Raptor code we get better error rate performance, i.e. lowering the efficiency of the Raptor code increases its reliability.
We assumed perfect channel knowledge at the receiver and a two-block fading channel; BPSK modulation again maps the encoded bits to signal points.

[Figure: Bit Error Probability vs. 1/R (Delay) for Eb/N0 = 5, 7, 9, 11, 13 dB]
Figure 4.8.4. Performance of Raptor code for K=504 in 2 Block Fading Channel

The two blocks have i.i.d. normalized Rayleigh factors α₁ and α₂, such that the first half of the transmitted symbols experience channel gain α₁ and the last half α₂, respectively. The received symbol model is given by

    yᵢ = αⱼ · tᵢ + nᵢ,

where j = 1, 2 indexes the block and i = 1, 2, ... indexes the symbols; transmission continues until the original k information symbols are recovered. Again we consider 504 information bits, which are converted into 532 encoded bits, also called intermediate bits, by the LDPC code with rate .95 used as an outer code.
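A sketch of this received-symbol model (our own illustration; the thesis does not give code), with two independent normalized Rayleigh block gains and the corresponding perfect-CSI LLRs:

```python
import math
import random

def two_block_fading(symbols, sigma, rng):
    """y_i = alpha_j * t_i + n_i: alpha_1 applies to the first half of the
    BPSK symbols, alpha_2 to the second half, with additive N(0, sigma^2)
    noise.  Gains are normalized Rayleigh, E[alpha^2] = 1."""
    s = math.sqrt(0.5)                       # per-component std of the gain
    gains = [math.hypot(rng.gauss(0, s), rng.gauss(0, s)) for _ in range(2)]
    n = len(symbols)
    received = [gains[0 if i < n // 2 else 1] * t + rng.gauss(0.0, sigma)
                for i, t in enumerate(symbols)]
    return received, gains

def fading_llr(y, alpha, sigma):
    """Channel LLR with perfect CSI: L0 = 2 * y * alpha / sigma^2."""
    return 2.0 * y * alpha / sigma ** 2

rng = random.Random(0)
tx = [1.0, 1.0, -1.0, -1.0]                  # BPSK: bit 0 -> +1, bit 1 -> -1
rx, (a1, a2) = two_block_fading(tx, sigma=0.3, rng=rng)
llrs = [fading_llr(y, a1 if i < 2 else a2, 0.3) for i, y in enumerate(rx)]
```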
Initially we send slightly more encoded symbols than information bits; if the required BER is not achieved, we send 100 extra symbols and repeat this process. The graph of the simulation results is shown in Figure 4.8.4. Just as for the AWGN and fast Rayleigh fading channels, the decoder can collect more and more bits and run the decoding algorithm until the required performance is reached. At higher rates, the slope of the decrease of the BER curve is not so sharp, due to the error
floor in the LT part. But as the rate is lowered further, the BER curve improves markedly, owing to the increased reliability of the input bits, i.e. the input bits are now connected to more output bits. It should be kept in mind that we can get even better performance from the Raptor code for larger codeword lengths, as in [12, 15]. It should also be noted that with an increase in the value of Eb/N0, more and more coding gain is achieved. We could get even better results by optimizing the degree distribution for each Eb/N0 value and specifically for block fading channels.
Chapter 5
Simulations Results
5.1 Introduction
A code designed to work well in a fast fading channel may not behave well in a block fading channel. As the block fading channel is a nonergodic channel [17], we cannot use the channel capacity as we did in the case of the fast fading channel; as the information-theoretic rate limit we rather use the outage probability, for which an analysis over block fading channels has been given. The results of [19] demonstrate the efficiency, reliability and robustness of Raptor coding over slow fading channels. There, the Raptor code was studied in the block Rayleigh fading channel, but the blocks were assumed to take independent values and hence no correlation was present. In this chapter, besides the comparison of the half rate regular (3,6) LDPC code and the Raptor code over various noisy channels, we investigate the Raptor code over uncorrelated and correlated block Rayleigh fading channels for short block lengths. The main emphasis of this chapter is to compare the error rate performance of the Raptor code with a fixed rate standard (3,6) regular LDPC code and with HARQ systems using punctured LDPC codes
over uncorrelated and correlated slowly fading channels, which can be modeled as block fading channels.
We present simulation results for the Raptor codes using the system setup shown in Figure 4.8.1. For all simulations except the binary erasure channel, an irregular LDPC code with rate 0.95 is used as precode, which converts 504 information bits into 532 intermediate bits. These intermediate bits are then encoded by the LT encoder, and the BPSK modulation scheme is used to transmit the output symbols at different energy per bit-to-noise ratios, i.e. Eb/N0 values, where Eb is the energy per encoded bit. We decode each codeword by the belief propagation algorithm, after 40 iterations for the LT decoder and 30 iterations for the LDPC precode, and then count the number of bits in which the estimated codeword differs from the sent one.

Our simulations for fading channels used two degree distributions, optimized for the binary erasure channel and for the AWGN channel [15], respectively.
5.2 Simulations of Raptor Code vs LDPC in BEC

The system block diagram used for our simulation is shown in Figure 4.8.1. We compare the Raptor code and the half rate standard (3, 6) regular LDPC code over the same binary erasure channel as described in chapter 2.2.1. For simplicity we use the BPSK modulation scheme. A left-regular LDPC code with rate .90 is used as pre-code, instead of the .95 rate LDPC code that is normally used for larger block length codes. The reason for this lower rate pre-code is that the fraction of input symbols not connected in the LT part is around e^(−d̄) [10] for large codeword lengths, where d̄ is the average degree of the distribution. The degree distributions for the LT part could be taken from [14]; however, all the degree distributions available there have an average degree around 5.90, which results in higher decoder complexity. We are simulating a code with 256 information bits, or more precisely, 285 intermediate bits after encoding through the pre-code; the expected number of unconnected intermediate bits for the distribution with average degree 5.90 is around e^(−5.90) · 285 = .78, which is not
[Figure: Throughput (Rate of Code) vs. Erasure Probability of Channel; curves: Raptor Code, LDPC Code with Rate half]
Figure 5.2.1. Performance Comparison of Raptor and (3,6), Half rate LDPC
code for 256 information bits in BEC
the best choice for our codeword: the average degree is so high that almost all of the bits would be recovered by the LT code alone, leaving no need for the pre-code. So we optimized the degree distribution, using [14] and the technique available in [10], for our specific block length, such that a small fraction of the intermediate bits is not recovered by the LT code and the decoder complexity is reduced significantly.

Figure 5.2.1 plots throughput versus the erasure probability of the channel. Throughput here means the average number of bits per channel realization, i.e. the rate of the code when all the information bits are successfully decoded by the decoder. As we know, the half rate standard (3,6) regular LDPC code has a constant code rate and hence a constant throughput. In its case a threshold is determined, based on the channel erasure probability,
such that beyond this threshold value of the channel erasure probability, the half rate regular (3,6) LDPC code of this particular codeword length cannot recover all information bits with high probability. In the case of the Raptor code, however, we can recover the information bits at every channel erasure probability; at higher channel erasure probabilities we need more encoded bits to recover the original information bits, and hence the throughput and code rate are lower. We can get an even better throughput by optimizing the degree distribution for short block lengths, and for larger codeword lengths the Raptor code performs much better still.

[Figure: Bit Error Probability vs. Eb/N0]
Figure 5.3.1. Performance Comparison of Raptor and (3,6), Half rate LDPC code for 504 information bits in AWGN
5.3 Raptor Code vs LDPC Code in AWGN Channel

In order to gauge the relative performance of the Raptor code and the half-rate standard (3, 6) regular LDPC code, simulation results are drawn in Figure 5.3.1. As long as the rate of the Raptor code is higher than that of the LDPC code, the LDPC code outperforms it over the whole Eb/N0 range; but once the rate of the Raptor code falls below that of the LDPC code, the Raptor code outperforms the LDPC code at lower Eb/N0. Here Eb is the energy of the encoded bit, for a fair comparison between the Raptor code and the LDPC code in ARQ schemes. One possible reason for this poor performance of the Raptor code in the AWGN channel is that the degree distribution used in our simulations was optimized for the binary erasure channel. The degree distribution optimized for the AWGN channel does not perform well in AWGN for short block lengths (result not shown), because the derivation of that degree distribution is based on the exchange of independent messages between the nodes, which is not satisfied for small codeword lengths. The performance of the Raptor code could be improved by matching the degree distribution to the operating noise level, while in our simulations we used the degree distribution optimized for noise variance .977 in [15]. So there is a need for degree optimization for short block lengths in the AWGN channel. But even with this degree distribution, we get a Raptor code with a dynamic range of rates, which is very useful over fluctuating channels and in Automatic Repeat Request schemes.
5.4 Performance Comparison of Raptor Code and LDPC Code in Fast-Fading Rayleigh Channel

[Figure: Bit Error Probability vs. Eb/N0]
Figure 5.4.1. Raptor code vs LDPC in Rayleigh Fading Channel for K=504
The system block diagram shown in Figure 4.8.1 is used as the simulation setup. The comparison of the Raptor code and the standard (3, 6) regular LDPC code is shown in Figure 5.4.1 for different values of Eb/N0 in the fast fading Rayleigh channel described in 2.2.4. The performance over the fast fading Rayleigh channel based on the degree distribution optimized for AWGN is not so impressive. The reason for this is that while deriving the optimized degree distribution for the AWGN channel in [15], it was assumed that all incoming messages arriving at a given node in the factor graph of the Raptor code were independent, but this condition is not fulfilled for short block lengths.

[Figure: Bit Error Probability vs. Eb/N0 (dB); curves: 1/2 rate LDPC, Raptor coding with rates .63, .50, .38]
Figure 5.5.1. Raptor code vs Regular (3,6), Half rate LDPC code for K=504 in 2 Block Fading Channel using AWGN degree distribution

From Figure 5.4.1, it is clear that the
Raptor code having a rate greater than that of the half rate LDPC code has degraded performance, while the Raptor code with a lower rate than the half rate LDPC code performs better. It should also be noted that when the Raptor code is used as a fixed half rate code, it suffers a little degradation in performance due to the short block length. However, the Raptor code offers a dynamic and flexible range of rates compared to the fixed half rate standard (3, 6) regular LDPC code, so we can conclude that over a fluctuating fast fading Rayleigh channel the Raptor code is a good choice compared to the fixed rate standard (3, 6) regular LDPC code.
5.5 Simulations and Conclusions of Raptor Code for 2-Block Fading Channel

[Figure: Bit Error Probability vs. Eb/N0]
Figure 5.5.2. Raptor code vs Regular (3,6), Half rate LDPC code for K=504
in 2 Block Fading Channel using BEC degree distribution
The simulation of the Raptor code for the 2-block fading channel shown in Figure 5.5.1 is based on the degree distribution optimized for the AWGN channel. Figure 5.5.2 shows the simulation result of the Raptor code over the same channel with the degree distribution optimized for the BEC. From a comparison of the two figures, we can conclude that the degree distribution optimized for the BEC performs well compared to the degree distribution optimized for the AWGN channel. Our goal is to elaborate the difference in performance of the half rate standard (3, 6) regular LDPC code and the Raptor code. The Raptor code based on equation 5.1.1
and having a rate greater than half does not perform as well as the LDPC code, but with a rate less than half the Raptor code outperforms the LDPC code, which is quite straightforward. At half rate the LDPC code performs better than the Raptor code, but the performance of the Raptor code can be brought closer to that of the LDPC code by designing a degree distribution for every Eb/N0 value, specifically for the block fading channel and for short block lengths. It is also clear from Figure 5.5.2 that the BER decreases with increasing Eb/N0. Moreover, the half rate standard (3,6) LDPC code exhibits an error floor in the block fading channel [17], so over a varying block fading channel the Raptor code is a good option compared to the fixed half rate LDPC code. The difference in performance between the two degree distributions shows that the choice of degree distribution matters significantly in the block fading channel.
5.6 Simulations and Conclusions of Raptor Code for Correlated Block Fading Channel

The system block diagram and simulation conditions remain the same as for the 2-block fading channel; the one main difference is that the normalized block Rayleigh fading factors are no longer independent but are correlated with some value. This correlated block fading channel is similar to the channel discussed in 2.2.6. The simulation results drawn in Figure 5.6.1 show the bit error probability versus the inverse of the rate R of the Raptor code, i.e. the delay; the curves are drawn for Eb/N0 = 8 dB.

[Figure: Bit Error Probability (BER) vs. 1/R (Delay) for correlation values 0, .25, .50, .75, 1]
Figure 5.6.1. Performance of Raptor code for K=504 in 2 Block Fading Correlated Channel

It is clear that as the correlation between the two fading blocks of the channel increases, the BER increases. For zero correlation the two blocks are independent, and hence if the information bits in one half of the codeword are degraded by one block of the channel
then they can be recovered from the other half of the codeword, because the output bits select the input bits at random; the system behaves like a 2-diversity system. But when the correlation increases, information bits degraded by one block of the channel in one half of the codeword also affect the information bits in the other half, and hence we get a higher bit error rate than with zero correlation. If the correlation becomes 1, both halves of the codeword experience the same block channel behavior and the system works as a single-diversity system. For correlation value 1 the bit error probability is worse than for all the other values, as confirmed by our simulation results in Figure 5.6.1. It should also be noted that for small values of correlation the difference in performance is larger; after a certain value, the correlation effect is so strong that it considerably degrades the
performance, and a further increase in correlation degrades the performance further, though not at the rate seen for lower correlation values. Another point to note is that at lower rates the correlation has a predominant effect. This is because the diversity achieved from the block fading channel depends on the rate of the code: if the code transmits at a rate equal to or less than half, the diversity achieved from the block fading channel is 2; otherwise the diversity obtained from the channel is one. Also, the coding gain is higher for a low rate code. So the Raptor code with rate less than half has better performance in the block fading channel but degrades significantly with correlation in the channel, while this is not the case for the higher rate Raptor code. Hence we can conclude that the Raptor code with rate greater than half suffers less performance degradation than the Raptor code with rate less than half.
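One simple way to generate such correlated normalized Rayleigh block gains (a sketch of our own; the thesis does not specify its generation method) is to correlate the underlying Gaussian components of the two blocks:

```python
import math
import random

def correlated_rayleigh_pair(rho, rng):
    """Two Rayleigh block gains whose underlying Gaussian components have
    correlation coefficient rho: rho = 0 gives independent blocks
    (2-diversity), rho = 1 gives identical blocks (single diversity)."""
    s = math.sqrt(0.5)                       # per component, so E[alpha^2] = 1
    x1, y1 = rng.gauss(0, s), rng.gauss(0, s)
    w = math.sqrt(1.0 - rho ** 2)
    x2 = rho * x1 + w * rng.gauss(0, s)      # correlated with block 1
    y2 = rho * y1 + w * rng.gauss(0, s)
    return math.hypot(x1, y1), math.hypot(x2, y2)

rng = random.Random(0)
a1, a2 = correlated_rayleigh_pair(rho=0.75, rng=rng)   # partially correlated
b1, b2 = correlated_rayleigh_pair(rho=1.0, rng=rng)    # fully correlated
```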
[Figure 5.6.3: Bit Error Probability (BER) vs. Eb/N0; curves: half rate LDPC and Raptor codes (rates .63 and .36) with correlation ρ = 0 and ρ = .75]
We compared the Raptor code, using the degree distribution given in equation 5.1.1, with the half rate standard (3,6) regular LDPC code in Figure 5.6.3. From the results of the previous section we know that correlation in the channel degrades the performance of the Raptor code at short block lengths, the degradation being severe for the low rate Raptor code. It is clear from the simulation results shown in Figure 5.6.3 that the Raptor code with rate less than that of the half rate (3,6) regular LDPC code outperforms the LDPC code in the uncorrelated block fading channel, but once correlation is introduced between the fading blocks, the LDPC code outperforms the lower rate Raptor code. The correlation between the fading blocks of the block fading channel thus severely affects the performance of the Raptor code with rate less than that of the half rate (3,6) regular LDPC code, while the degradation in performance of the Raptor code with rate greater than the
[Figure: Bit error probability versus 1/R (delay) for the punctured LDPC code and the Raptor code, both of rate 0.56, with ρ = 0, 0.5 and 1.]
half rate (3,6) regular LDPC code is comparatively small. The half rate (3,6) regular LDPC code exhibited an error floor at high Eb/N0 values in the block fading channel, whereas the Raptor code has shown no error floor in the AWGN channel [12] and in fading channels [16] for large block lengths, so it is expected to exhibit no error floor at short block lengths as well. In that situation the Raptor code is the best alternative in the correlated block fading channel. The performance of the Raptor code could be further enhanced by optimizing a degree distribution for correlated block fading channels at short block lengths. The Raptor code also provides a dynamic range of rates. The comparison of the Raptor code with the punctured LDPC code based on the heuristic search algorithm of [28] in the correlated block fading channel is shown in Figure 5.6.4. Again we note that the correlation between the blocks of the block fading channel affects the Raptor code more than the punctured LDPC code, especially at low correlation values for
short block lengths. But, as we know, puncturing has a larger adverse impact when the mother code is of low rate than when the mother code is of high rate, for punctured codes of the same rate, so a very flexible range of rates cannot be achieved with punctured LDPC codes. Punctured LDPC codes also exhibit an error floor in the high Eb/N0 region in the block fading channel. Therefore, the Raptor code provides a much larger dynamic range of rates for HARQ than punctured LDPC codes, and in this sense is more robust. With a Raptor code we need only encode as many parity bits as we need to send, whereas with punctured LDPC codes we must encode all parity bits even if only a small fraction of them is transmitted.
Chapter 6
Conclusions
In this chapter we first briefly review the main contributions of the thesis and then point out some directions for future work.
6.1 Conclusions
This thesis has contributed to the field of rateless coding in a number of ways. In particular, we have produced results applicable to short block length codes; these results apply to Raptor codes that are decoded using the iterative belief propagation algorithm.
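As a concrete illustration of the codes these results concern, the following sketch generates a single LT-style output symbol from a degree distribution. The distribution and parameters here are illustrative assumptions, not the distributions evaluated in the thesis.

```python
import random

def lt_encode_symbol(source_bits, degree_dist, rng):
    """Draw a degree from degree_dist (list of (degree, probability) pairs),
    pick that many distinct source bits uniformly at random, and XOR them.
    Returns the chosen neighbor indices and the output bit."""
    r, acc = rng.random(), 0.0
    degree = degree_dist[-1][0]  # fallback guards against rounding
    for d, p in degree_dist:
        acc += p
        if r <= acc:
            degree = d
            break
    neighbors = rng.sample(range(len(source_bits)), degree)
    bit = 0
    for i in neighbors:
        bit ^= source_bits[i]
    return neighbors, bit

rng = random.Random(1)
source = [rng.randint(0, 1) for _ in range(32)]
dist = [(1, 0.05), (2, 0.50), (3, 0.25), (4, 0.20)]  # illustrative only
neighbors, bit = lt_encode_symbol(source, dist, rng)
```

In a full Raptor code these LT symbols are generated from precoded (e.g. LDPC-encoded) input symbols, and the belief propagation decoder operates on the resulting bipartite graph of output symbols and their neighbors.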
We simulated the Raptor code over fading channels using two different degree distributions. Based on the simulation results, we proposed that for short block lengths the degree distribution optimized for the binary erasure channel works better over fading channels than the degree distribution optimized for the additive white Gaussian noise channel. The reason for the poor performance of the AWGN-optimized degree distribution in fading channels lies in the assumptions made during the derivation of that distribution. The first assumption was that the
messages exchanged between the nodes of the Raptor decoder follow symmetric Gaussian distributions; for short block length codes these conditions do not hold.
We compared the performance of the Raptor code with the half rate regular (3,6) LDPC code over the binary erasure channel, and from simulations concluded that even at short block lengths the Raptor code outperforms the half rate standard (3,6) regular LDPC code. For the AWGN and fast Rayleigh fading channels, the Raptor code with rate greater than one half is outperformed by the LDPC code, while the Raptor code with rate less than one half has better performance than the half rate (3,6) regular LDPC code. From the comparison of the Raptor code with the half rate LDPC code in the block fading channel, we can conclude that a code with rate less than one half has better performance than the regular half rate (3,6) LDPC code. So in such applications the Raptor code is more efficient and reliable than fixed-rate codes, even with the limited interleaving imposed by constraints on processing delay and maximum packet size; this limited interleaving reduces the diversity achievable from the channel. In this thesis we have also investigated the performance over the correlated block fading channel and concluded that correlation in the block fading channel causes degradation in performance. This degradation is less severe for a higher rate Raptor code than for a low rate one at short block lengths, because the diversity achieved from the block fading channel depends on the code rate.
We also compared the performance of the Raptor code with the fixed rate standard (3,6) regular and punctured LDPC codes at short block lengths in the correlated block fading channel. The punctured LDPC code used the half rate regular (3,6) LDPC code as its mother code, and at rate 0.56 the punctured LDPC code outperforms the Raptor code of the same rate in the correlated block fading channel. But puncturing has a more severe impact when the mother code is of low rate than when it is of high rate, so the punctured LDPC code offers only a limited range of flexible rates above the rate of its mother code. The low, or even absent, feedback in the Raptor code makes it a good candidate for large frame relay networks compared with the punctured LDPC code. Raptor codes therefore provide a wider dynamic range of rates, and in systems that involve heavy feedback messaging the Raptor code will perform better. With a Raptor code we need only encode as many parity bits as we need to send, whereas with punctured LDPC codes we must encode all parity bits even if only a small fraction of them is transmitted, depending on the channel conditions. Hence the Raptor code is preferable to the fixed rate and punctured LDPC codes in the correlated fading channel.
6.2 Future work

In continuation of this work, there are a number of problems that could be the subject of future research. Some possible directions are given below.
In this thesis we have demonstrated that, for the block fading channel and short block lengths, the Raptor code works better with the degree distribution optimized for the binary erasure channel than with the degree distribution optimized for the AWGN channel; an interesting direction is therefore to optimize a degree distribution specifically for the block fading channel at short block lengths. We also demonstrated that correlation among the fading blocks of the correlated block fading channel causes performance degradation; the degree distribution could be optimized at the encoder to compensate for this degradation. Encoding and decoding complexities are higher for Raptor codes, so in addition to [27], there is still a need to develop reduced complexity decoding algorithms.
Appendix A
Probability Calculations
Let $X$, $Y$ and $Z$ be binary random variables related by the equation

$$X = Y + Z \pmod{2}$$

with

$$\mathrm{pr}(Y = 0) = P_{y0}, \qquad \mathrm{pr}(Z = 0) = P_{z0}.$$

Now $X$ is zero if $Y$ and $Z$ are both 0 or both 1, and this gives

$$P_{x0} = P_{y0} P_{z0} + P_{y1} P_{z1},$$
where $P_{x1}$ is the probability that the binary random variable $X$ takes the value 1. More generally, let $X = Y_1 + Y_2 + \cdots + Y_n$ (modulo 2) with

$$\mathrm{pr}(Y_i = 0) = P_{i0}.$$

Then

$$\frac{P_{x0} - P_{x1}}{P_{x0} + P_{x1}} = \prod_{i=1}^{n} \frac{P_{i0} - P_{i1}}{P_{i0} + P_{i1}}$$

Dividing the numerators and denominators by $P_{x1}$ and $P_{i1}$ respectively,

$$\frac{P_{x0}/P_{x1} - 1}{P_{x0}/P_{x1} + 1} = \prod_{i=1}^{n} \frac{P_{i0}/P_{i1} - 1}{P_{i0}/P_{i1} + 1}$$

But from the definition of the likelihood ratio, $l_x = P_{x0}/P_{x1} = e^{y_x}$ and $l_i = P_{i0}/P_{i1} = e^{y_i}$, where $y_x$ and $y_i$ are log-likelihood ratios, so

$$\frac{l_x - 1}{l_x + 1} = \prod_{i=1}^{n} \frac{l_i - 1}{l_i + 1}$$

$$\frac{e^{y_x} - 1}{e^{y_x} + 1} = \prod_{i=1}^{n} \frac{e^{y_i} - 1}{e^{y_i} + 1}$$

Multiplying the numerator and denominator of each factor by $e^{-y_x/2}$ and $e^{-y_i/2}$ respectively,

$$\frac{e^{y_x/2} - e^{-y_x/2}}{e^{y_x/2} + e^{-y_x/2}} = \prod_{i=1}^{n} \frac{e^{y_i/2} - e^{-y_i/2}}{e^{y_i/2} + e^{-y_i/2}}$$

$$\tanh\!\left(\frac{y_x}{2}\right) = \prod_{i=1}^{n} \tanh\!\left(\frac{y_i}{2}\right)$$
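As a sanity check on the tanh rule above, the following sketch (function and variable names are our own) compares a brute-force computation of $\mathrm{pr}(X = 0)$ for the modulo-2 sum with the value recovered from the combined log-likelihood ratio:

```python
import itertools
import math

def xor_prob_zero(p0s):
    """pr(X = 0) for X = Y1 + ... + Yn (mod 2), by summing the probability
    of every outcome with an even number of ones."""
    total = 0.0
    for bits in itertools.product((0, 1), repeat=len(p0s)):
        if sum(bits) % 2 == 0:
            prob = 1.0
            for b, p0 in zip(bits, p0s):
                prob *= p0 if b == 0 else 1.0 - p0
            total += prob
    return total

def combine_llrs(llrs):
    """tanh rule: tanh(y_x / 2) = prod_i tanh(y_i / 2)."""
    t = 1.0
    for y in llrs:
        t *= math.tanh(y / 2.0)
    return 2.0 * math.atanh(t)

p0s = [0.9, 0.7, 0.6]
llrs = [math.log(p0 / (1.0 - p0)) for p0 in p0s]  # y_i = ln(P_i0 / P_i1)
y_x = combine_llrs(llrs)
p_x0 = 1.0 / (1.0 + math.exp(-y_x))  # invert y_x = ln(P_x0 / P_x1)
print(p_x0, xor_prob_zero(p0s))  # the two values agree
```

Note that $\tanh(y_i/2) = P_{i0} - P_{i1}$, so the product form here is exactly the identity derived above.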
Bibliography
Magazine, August 2003.
[6] Nauman F. Kiyani and Jos H.Weber, “Analysis of Random Regular LDPC
Press, 1963.
[8] J. Hou, P. H. Siegel, and L. B. Milstein, “Performance analysis and code op-
pp. 924-934.
[10] D. J. C. Mackay, "Fountain Codes", IEE Proc.-Commun., vol. 152, no. 6, December 2005.
[12] R. Palanki and J. S. Yedidia, "Rateless coding on noisy channels", in Proc. IEEE Int. Symp. Inform. Theory, p. 37, 2004.
[13] J. Castura and Y. Mao, "Rateless coding and relay networks", IEEE Signal
[16] Z. Cheng, J. Castura and Y. Mao, "On the Design of Raptor Codes for
2008.
erations for cellular mobile radio ” , IEEE Trans. Vehicular Tech., vol. 43,
[19] Jeff Castura and Yongyi Mao, "Rateless Coding over Fading Channels",
Technical Journal, vol. 27, pp. 397-423 and 623-656, July/Oct, 1948.
[21] D.J.C Mackay, “Good Error-Correcting Codes Based on Very Sparse Ma-
March, 1999.
[23] André Neubauer, Jürgen Freudenberger, Volker Kühn, "Coding Theory: Algorithms, Architectures, and Applications", John Wiley and Sons Ltd, 2007.
[24] Henrik Schulze, Christian Lüders, “Theory and Applications of OFDM and
Tokyo, pp. 805-810, 2000.
[27] Ketai Hu, Jeff Castura and Yongyi Mao, "Reduced-Complexity Decoding" Conference, 2006, GLOBECOM '06, IEEE.
[28] S. F. Zaheer, S. A. Zummo, M. A. Landolsi and M. A. Kousa, "Improved regular