
Rateless Coding for OFDM-Based

Wireless Communications Systems

IQBAL HUSSAIN

Master’s Degree Project


Stockholm, Sweden 2009
To my late brother
Nawab Khan.
Acknowledgments

First of all I would like to thank my supervising tutor Prof. Lars Kildehöj

Rasmussen at KTH for his abundant help and prolific suggestions. He made

himself readily available, had a patient ear, and always took the time to answer my questions thoroughly, sometimes even on matters not related to the thesis. My deep appreciation and

heartfelt gratitude also goes to my thesis supervisor at LTH, Fredrik Tufvesson

for his support, patience and guidance. Both of my supervisors also reviewed the

whole thesis report very carefully for even the delicate specifics for which I am

very thankful to them. My special thanks go to Professor Mikael Skoglund for allowing me to do my master's thesis in the Communication Theory Department at KTH.

The atmosphere has always been a perfect source of motivation. My stay at the Communication Theory Department was so pleasant that I have included the possibility of working there in my future plans. I must also mention the enjoyable times and discussions with the department staff. I would also like

to thank Johannes Karlsson for his devotion to simulator computers which made

the simulation so flexible and extendible for me.

Last but not least, I would like to direct my warmest thanks to programme

secretary Pia Bruhn of master in wireless communications at Lund University

who was so generous in her support at various stages of my study in Sweden.

Abstract

Performance of broadband wireless communication networks is limited by

available resources such as frequency bandwidth and transmission power. Also,

the time-varying features of wireless communication channels adversely affect

performance. Transmission schemes adapting to instantaneous channel characteristics can significantly improve performance. The block-fading channel is a good model for OFDM-based wireless communication systems, in which fading occurs in a block-wise manner. The Raptor code is a newly emerging rateless code which has shown excellent performance over a variety of channels. There is a constraint

on the interleaving depth of OFDM-based system due to delay and maximum

packet size. This non-ideal interleaving affects the maximum achievable diversity

from the channel. We investigated the effect of correlation between fading blocks,

which relates to the limited interleaving possible between carriers in an OFDM

system, using the Raptor code. We investigated the performance of the Rap-

tor code over correlated slowly fading channels and compared it with the half-rate standard (3,6) regular LDPC code and with ARQ systems using punctured

LDPC codes for short block length. We also compared the performance of the

Raptor code to the standard (3,6) regular LDPC code over the Binary Erasure Channel, the Additive White Gaussian Noise channel, and the fast-fading Rayleigh channel for


short block length. Extensive simulations were performed to gain insight for future research.

Contents

Acknowledgments iii

Abstract v

Contents vii

List of Figures x

Acronyms xiii

1 Introduction 3

1.1 Background Discussion . . . . . . . . . . . . . . . . . . . . . . . . 3

1.1.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.1.2 Channel Coding . . . . . . . . . . . . . . . . . . . . . . . . 4

1.1.3 OFDM Systems . . . . . . . . . . . . . . . . . . . . . . . . 6

1.1.4 Automatic Repeat Request (ARQ) . . . . . . . . . . . . . 6

1.1.5 Rate-Compatible (RC) Codes . . . . . . . . . . . . . . . . 7

1.2 Literature Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

1.2.1 Linear Block Codes . . . . . . . . . . . . . . . . . . . . . . 8

1.2.2 Codes on Graph . . . . . . . . . . . . . . . . . . . . . . . . 9

1.2.3 LDPC Codes . . . . . . . . . . . . . . . . . . . . . . . . . 9

1.2.4 Rateless Codes . . . . . . . . . . . . . . . . . . . . . . . . 10


1.2.5 Rateless code vs Fixed-rate code . . . . . . . . . . . . . . 12

1.2.6 Rateless code vs Automatic Repeat Request (ARQ) . . . . 13

1.3 Thesis Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

1.4 Thesis Contributions . . . . . . . . . . . . . . . . . . . . . . . . . 17

1.5 Thesis Organization . . . . . . . . . . . . . . . . . . . . . . . . . . 19

2 System Model 21

2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

2.2 Channel Modelling . . . . . . . . . . . . . . . . . . . . . . . . . . 23

2.2.1 Binary Erasure Channel . . . . . . . . . . . . . . . . . . . 24

2.2.2 Binary Symmetrical Channel . . . . . . . . . . . . . . . . 26

2.2.3 AWGN Channel . . . . . . . . . . . . . . . . . . . . . . . 26

2.2.4 Fast Rayleigh Fading Channel . . . . . . . . . . . . . . . . 28

2.2.5 Block Rayleigh Fading Channel Model . . . . . . . . . . . 29

2.2.6 Correlated Block Rayleigh Fading Channel Model . . . . . 30

2.3 OFDM-based Wireless Communication systems . . . . . . . . . . 32

2.3.1 Multicarrier Modulation . . . . . . . . . . . . . . . . . . . 32

2.3.2 OFDM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

2.3.3 Interleaving in OFDM System . . . . . . . . . . . . . . . 34

2.4 Interleaved-OFDM system as Block Fading Channel . . . . . . . . 36

3 Low Density Parity Check Codes 39

3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

3.1.1 Regular LDPC Codes . . . . . . . . . . . . . . . . . . . . . 40

3.1.2 Irregular LDPC Codes . . . . . . . . . . . . . . . . . . . . 41

3.1.3 Graphical Representation of LDPC Codes . . . . . . . . . 41


3.1.4 Decoding Complexity . . . . . . . . . . . . . . . . . . . . . 43

3.2 Iterative Decoding of LDPC Codes . . . . . . . . . . . . . . . . . 43

3.3 LDPC Decoding for BEC . . . . . . . . . . . . . . . . . . . . . . 44

3.3.1 Simulation Results and Conclusions . . . . . . . . . . . . . 45

3.4 LDPC Decoding for Binary Symmetrical Channel . . . . . . . . . 47

3.4.1 Soft Decision Message Passing Decoder for Binary Sym-

metrical Channel . . . . . . . . . . . . . . . . . . . . . . . 49

3.5 Soft Decision Iterative Decoder for BPSK AWGN Channel . . . . 52

3.5.1 Simulation Results and Conclusions for AWGN Channel . 58

3.6 Simulation Results and Conclusions for Rayleigh Fading Channel 60

3.6.1 Simulation Results and Conclusions for Block Fading Rayleigh

Channel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

4 Digital Fountain Codes 63

4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

4.2 LT Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

4.2.1 LT Encoding Process . . . . . . . . . . . . . . . . . . . . . 65

4.2.2 LT Decoder . . . . . . . . . . . . . . . . . . . . . . . . . . 67

4.2.3 Tanner Graph for LT Code . . . . . . . . . . . . . . . . . 68

4.3 Design of LT Degree Distribution . . . . . . . . . . . . . . . . . . 69

4.3.1 The Robust Soliton Distribution . . . . . . . . . . . . . . . 70

4.4 Simulations and Conclusions of LT code on BEC . . . . . . . . . 72

4.5 Raptor Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

4.5.1 Tanner Graph and Construction of Raptor Code . . . . . . 75

4.6 BP Algorithm of Raptor code over Noisy Channels . . . . . . . . 78


4.7 Capacity achieving Raptor Code for Noisy Channels . . . . . . . . 80

4.8 Simulations of Raptor code . . . . . . . . . . . . . . . . . . . . . 82

4.8.1 Simulations for AWGN Channel . . . . . . . . . . . . . . . 82

4.8.2 Simulations for Rayleigh Fading Channel . . . . . . . . . . 85

4.8.3 Simulations for 2-Block Fading Channel . . . . . . . . . . 86

5 Simulations Results 89

5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

5.2 Simulations of Raptor Code vs LDPC in BEC . . . . . . . . . . . 91

5.3 Raptor code vs LDPC code in AWGN Channel . . . . . . . . . . 94

5.4 Performance comparison of Raptor code and LDPC code in fast-

fading Rayleigh Channel . . . . . . . . . . . . . . . . . . . . . . . 95

5.5 Simulations and Conclusions of Raptor code for 2-Block Fading

Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

5.6 Simulations and Conclusions of Raptor code for correlated Block

Fading Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

5.6.1 Raptor code vs LDPC in correlated Block Fading Channel 101

5.6.2 Raptor code vs punctured LDPC in correlated Block Fading

Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

6 Conclusions 105

6.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

6.2 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

A Probability Calculations 109

Bibliography 111

List of Figures

2.1.1 Communication System Model . . . . . . . . . . . . . . . . . . . . 22

2.2.1 The Binary Erasure Channel . . . . . . . . . . . . . . . . . . . . 25

2.2.2 The Binary Symmetric Channel . . . . . . . . . . . . . . . . . . . . 26

2.2.3 The Additive White Gaussian Noise . . . . . . . . . . . . . . . . 27

2.2.4 Block Fading Channel code word representation . . . . . . . . . . 29

2.2.5 Model of Correlated Block Fading Channel . . . . . . . . . . . . . 31

2.3.1 Model of OFDM System . . . . . . . . . . . . . . . . . . . . . . . 33

3.1.1 Tanner Graph of LDPC Code . . . . . . . . . . . . . . . . . . . . 42

3.3.1 LDPC vs No LDPC in BEC . . . . . . . . . . . . . . . . . . . . . 46

3.3.2 Code word length comparison in BEC . . . . . . . . . . . . . . . . 47

3.4.1 General Message Updates . . . . . . . . . . . . . . . . . . . . . . 51

3.5.1 Message updates during first iteration. . . . . . . . . . . . . . . . 54

3.5.2 Update message for variable nodes . . . . . . . . . . . . . . . . . 56

3.5.3 System Block diagram for simulation. . . . . . . . . . . . . . . . . 58

3.5.4 LDPC in AWGN with different code word length . . . . . . . . . 59

3.6.1 LDPC in Rayleigh Fading Channel . . . . . . . . . . . . . . . . . 61

3.6.2 LDPC in Block Fading Channel . . . . . . . . . . . . . . . . . . . 62


4.2.1 Bipartite Graph of LT code . . . . . . . . . . . . . . . . . . . . . 66

4.2.2 Tanner Graph of (a) LDPC code(b) LT code . . . . . . . . . . . 68

4.4.1 LT code in BEC channel . . . . . . . . . . . . . . . . . . . . . . . 73

4.5.1 Tanner Graph of Raptor Code . . . . . . . . . . . . . . . . . . . 76

4.5.2 Graphical representation of Raptor code . . . . . . . . . . . . . . 77

4.8.1 System Block Diagram of Raptor code . . . . . . . . . . . . . . . 83

4.8.2 Raptor code in AWGN . . . . . . . . . . . . . . . . . . . . . . . . 84

4.8.3 Raptor code in Rayleigh Fading Channel . . . . . . . . . . . . . . 86

4.8.4 Raptor code in Block Fading Channel . . . . . . . . . . . . . . . . 87

5.2.1 Raptor code vs LDPC in BEC . . . . . . . . . . . . . . . . . . . . 92

5.3.1 Raptor code vs LDPC in AWGN Channel . . . . . . . . . . . . . 93

5.4.1 Raptor code vs LDPC in Rayleigh Fading Channel . . . . . . . . 95

5.5.1 Raptor code vs LDPC in BFC using AWGN deg. dest. . . . . . . 96

5.5.2 Raptor code vs LDPC in BFC using BEC Deg. Dist. . . . . . . . 97

5.6.1 Raptor code in correlated block Fading Channel . . . . . . . . . . 99

5.6.2 Effect of correlation on rate of Raptor code . . . . . . . . . . . . . 100

5.6.3 Raptor code vs LDPC in correlated BF Channel . . . . . . . . . . 101

5.6.4 Raptor code vs Punctured LDPC code . . . . . . . . . . . . . . . 102

Acronyms

DMC Discrete Memoryless Channel


ISI Inter Symbol Interference
BEC Binary Erasure Channel
BSC Binary Symmetrical Channel
LDPC Low Density Parity Check
FEC Forward Error Correction
BPSK Binary Phase Shift Keying
AWGN Additive White Gaussian Noise
LLR Log-Likelihood Ratio
LT Luby Transform
BIAWGN Binary Input Additive White Gaussian Noise
BIMSC Binary Input Memoryless Symmetrical Channel
SNR Signal to Noise Ratio
BER Bit Error Rate
i.i.d. Independent and Identically Distributed
ARQ Automatic Repeat Request
RMS Root Mean Square
HARQ Hybrid ARQ
RC Rate Compatible
DVB Digital Video Broadcasting
OFDM Orthogonal Frequency Division Multiplexing
WLAN Wireless Local Area Network

Chapter 1

Introduction

1.1 Background Discussion


1.1.1 Overview

Digital communication has had a profound impact on our lives over the last few decades. Its practical applications in satellite, military, internet, maritime and space communications, digital audio and video broadcasting, and mobile communications have brought about a revolution in our society. The reliable transmission of

information over noisy channels is one of the basic requirements of digital infor-

mation and communication systems. Reliable transmission over communication

channel has been the subject of much research for many years. Channel coding is a viable method of ensuring reliable communication by introducing redundancy into the

information to be transmitted. Channel coding transforms signal constellation

points to a higher dimensional signalling space. Due to this higher dimensional

space, the distance between constellation points increases and hence enhances bit

error detection and correction. Channel coding can be classified in two categories

as automatic repeat request (ARQ) scheme and forward-error correction (FEC)

scheme. ARQ combines error detection and retransmission strategies to ensure


that data is delivered accurately despite the occurrence of errors during transmis-

sion. On the other hand, FEC tries to correct errors at the receiver end. ARQ

schemes require to send a small number of redundant information along with user

information, which can be used to detect errors during transmission, while FEC

schemes require redundant information to detect and correct errors. To minimize the number of redundant information bits and achieve high error-correction capability, one can use powerful error-correcting codes. LDPC codes were

introduced by Gallager [7], which have shown better performance over a variety of

channels. Finite-length LDPC codes have also been shown to outperform turbo

codes. Rateless codes such as LT codes [11] and Raptor codes [14] are a class of digital fountain codes [10] with capacity-achieving properties. This chapter

introduces the basic background and the literature review on the works related

to the thesis. Furthermore, the main objectives of the thesis are also presented.

1.1.2 Channel Coding

The Shannon channel coding theorem [20] states the maximum rate at which information can be transmitted reliably over a given communication channel with a specified bandwidth. This maximum rate is called the channel capacity. The capacity of a bandlimited AWGN channel with bandwidth B is given by

C = B log2(1 + Es/N0) bits per second (bps),

where Es/N0 is the signal-to-noise ratio.

The Shannon theorem shows that if information is transmitted at a rate equal to or less than the capacity, then there exists a coding technique which allows the probability of error at the receiver to be made arbitrarily small. The converse is equally important: if the information rate is greater than the channel capacity, then no coding technique can make the error probability arbitrarily small. However, the Shannon theorem gave no clue about the construction of such coding schemes.
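As a quick numerical illustration (not part of the thesis; the bandwidth and SNR values below are arbitrary), the capacity formula can be evaluated directly:

```python
import math

def awgn_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity C = B * log2(1 + SNR) of a bandlimited AWGN channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 1 MHz channel at a linear SNR of 15 (about 11.8 dB):
print(awgn_capacity(1e6, 15))  # 4000000.0 bits per second, since log2(16) = 4
```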

Forward Error Correction code uses redundancy to correct transmission errors

at the receiver and hence no feedback channel is required. FEC offers constant bit

throughput and varying reliability depending on channel condition. Conventional

FEC codes reduce the required transmit power for a given BER at the expense

of increased signal bandwidth or a reduced data rate. In ARQ schemes a small number of redundant bits is added to detect transmission errors. Retransmission takes place if and only if errors are detected at the receiver end, and hence a feedback channel is needed. In contrast to FEC, ARQ offers constant reliability and varying bit throughput.

There are two main types of conventional FEC codes, namely block codes and convolutional codes. Block codes accept k information bits and produce

n encoded bits by adding (n-k ) redundant bits. On the other hand, convolutional

codes transform k information bits to n encoded bits in serial manner and this

transformation depends on the current as well as L last information bits, where L

is the constraint length of the code. Trellis codes combine channel code design and

modulation to reduce the BER without bandwidth expansion or rate reduction.

Recent advancements in coding technology, such as LDPC codes [7] and rateless codes [11, 14], offer performance that approaches the channel capacity of AWGN and fading channels for large block lengths.


1.1.3 OFDM Systems

Orthogonal Frequency Division Multiplexing (OFDM) is a multi-carrier modulation technique which is more robust to frequency-selective fading than single-carrier systems. In OFDM, the data stream is distributed over a number of lower

rate streams and these streams are modulated over different carriers. Lower data

rate of each stream in OFDM system as compared to the single carrier system

increases the symbol duration and hence reduces the effects of multipath propa-

gation. Inter symbol interference can be removed by using cyclic prefix which is

copying the last part of a symbol at the start of a symbol. By using orthogonal

transmit signal and cyclic prefix of at least equal to maximum delay spread avoids

not only inter symbol interference but also inter-carrier interference. As channel

is divided in parallel subchannels in OFDM system, so channel equalization is

simpler than the adaptive equalization used in single carrier system. Drawbacks

of OFDM system as compared to the single carrier system is its sensitivity to fre-

quency offset and phase noise. Also there is possible limited interleaving present

among the subcarrier of OFDM system which can severely affect the performance.
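The cyclic-prefix argument can be verified numerically. The sketch below is illustrative only (the FFT size, tap count, and random channel are arbitrary choices): with a cyclic prefix at least as long as the channel delay spread, each subcarrier sees a single flat complex gain, so equalization reduces to one division per carrier.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 8, 3                                  # subcarriers, channel taps
h = rng.normal(size=L) + 1j * rng.normal(size=L)   # multipath channel
X = rng.choice([-1.0, 1.0], size=N) + 0j           # BPSK symbol per subcarrier

x = np.fft.ifft(X)                           # OFDM modulation (IFFT)
tx = np.concatenate([x[-(L - 1):], x])       # cyclic prefix >= max delay spread
rx = np.convolve(tx, h)[L - 1 : L - 1 + N]   # channel, then CP removal
Y = np.fft.fft(rx)                           # demodulation (FFT)

# The CP turns linear convolution into circular convolution, so the channel
# is diagonalized by the FFT: Y[k] = H[k] * X[k] on every subcarrier.
H = np.fft.fft(h, N)
print(np.allclose(Y, H * X))                 # True
print(np.allclose(X, Y / H))                 # True: one-tap equalizer per carrier
```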

1.1.4 Automatic Repeat Request (ARQ)

The Automatic Repeat Request (ARQ) is a control mechanism which com-

bines error detection and retransmission schemes to ensure reliable transmission.

A back channel is required for acknowledgement and negative-acknowledgement messages concerning lost or corrupted data frames at the receiver. ARQ works well for many one-to-one protocols such as TCP/IP, but its performance degrades seriously in broadcast applications, where data frames may be retransmitted even though they have already been received by many receivers. In such cases, some receivers may request retransmission of data that other receivers have already received. Consequently, the data source needs to retransmit most of the data

and hence uses valuable bandwidth inefficiently. Furthermore, in ARQ the source will be idle most of the time if the distance between source and destination is long.

In Hybrid ARQ, ARQ and error-correction schemes are combined to reduce

retransmissions. When a packet arrives at the receiver, it is first decoded by

the FEC decoder and then checked for errors. If errors are detected, a retrans-

mission request for the corresponding data frame is sent back to the transmitter.

In applications with fluctuating channel conditions such as satellite packet trans-

mission and mobile communication, incremental redundancy (IR) HARQ schemes

demonstrate higher throughput efficiency by adapting their error-correcting-code redundancy to the different channel conditions.
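The incremental-redundancy idea can be sketched with a toy simulation. This is entirely illustrative: the decoder is an idealized erasure-counting rule (success once k unerased symbols have accumulated), and all parameter values are made up.

```python
import random

def ir_harq(k=100, increment=20, erasure_p=0.3, seed=1):
    """Toy IR-HARQ over an erasure channel: the first transmission carries k
    symbols; each NACK triggers `increment` further redundancy symbols.
    Decoding is idealized as succeeding once k unerased symbols arrive."""
    rng = random.Random(seed)
    received = sent = rounds = 0
    while received < k:
        batch = k if rounds == 0 else increment   # initial block, then increments
        rounds += 1
        sent += batch
        received += sum(rng.random() > erasure_p for _ in range(batch))
    return rounds, sent

rounds, sent = ir_harq()
print(rounds, sent)   # number of transmission rounds and total symbols sent
```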

1.1.5 Rate-Compatible (RC) Codes

An adaptive code rate is always desirable, especially in time-varying channel environments. By applying puncturing and extending to conventional forward error-correction coding schemes, we can achieve flexible code rates. Punctured codes are obtained by deleting parity bits, yielding a wide range of rates higher than that of the mother code. Higher bandwidth efficiency can be

achieved in punctured codes at the expense of degraded performance. Extended

codes are obtained by appending more parity bits to the mother code; extending can only yield code rates lower than that of the mother code. Extending leads to codes with increased minimum distance and improved performance. The addition of the rate-compatibility property to puncturing
proved performance.The addition of rate-compatibility property to puncturing

and extending further enhances the performance of Rate-compatible codes in

time varying channels. The limitation in rate-compatible code is that the coded

bits of a high-rate punctured code are also used by the lower-rate codes. In other

words, the high-rate codes are embedded into the lower-rate codes of the family.

If the higher rate codes are not sufficiently powerful to decode channel errors,

only small amount of extra bits which were previously punctured (deleted) have

to be transmitted in order to improve the code performance.

1.2 Literature Survey


1.2.1 Linear Block Codes

A binary block code generates a block of n coded bits from k information

bits. The number of redundant bits added to every k information bits is (n − k).

A code is linear if the addition of any two valid codewords results in another

valid codeword. A systematic block code is also specified by its generator matrix

G = [Ik P], where Ik is the (k × k) identity matrix and P is a k × (n − k) matrix that determines the (n − k) parity-check bits. The systematic block code can also be specified by a parity-check matrix H of the form H = [P^T In−k], where P^T is the transpose of the matrix P. Since n is larger than k, the code efficiency is usually evaluated by k/n, which is called the code rate. An (n, k) parity-check code is a linear block code whose codewords satisfy a set of (n − k) linear parity-check equations; it is traditionally defined by its (n − k) × n parity-check matrix H, whose (n − k) rows specify those equations.
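These definitions can be checked with a small numerical example; the matrix P below is the classic Hamming (7,4) choice, used here purely as an illustration:

```python
import numpy as np

# Systematic (7,4) code: G = [I_k | P], H = [P^T | I_{n-k}], arithmetic mod 2.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

u = np.array([1, 0, 1, 1])        # k = 4 information bits
c = u @ G % 2                     # n = 7 coded bits (systematic: u appears first)
assert not (H @ c % 2).any()      # every codeword satisfies H c^T = 0 mod 2
print(c)                          # [1 0 1 1 0 1 0]
```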


1.2.2 Codes on Graph

The main purpose of coding theory has been to design codes which can achieve the Shannon capacity [20]. Convolutional codes were designed to approach the Shannon limit within a gap of a few decibels with reasonable decoding complexity.

However, reducing this gap required impractical complexity until the discovery

of the iterative message-passing algorithms. Using an iterative message-passing

decoder, low density parity check and turbo codes have provided excellent perfor-

mance and a small gap to the Shannon limit with a practical decoding complexity.

This better performance of message-passing algorithms based codes drew a lot of

attention to this field of study, which soon extended to a more general class of

codes called codes defined on graphs.

Graphs are used to visualize the constraints of the codes. The advantage

of codes defined on graphs is that they can be decoded using message-passing

algorithms. Degree of the vertices determine decoding complexity while girth

and diameter provides the qualitative analysis of the message-passing algorithms

[4]. Other communication components can also be modelled in the graph of codes.

Since the rediscovery of LDPC codes, there has been a lot of research activities

and improvements in the area of codes defined on graphs [3]. Research on LDPC

codes has played a major role in this field, as many of the new classes of codes

which are defined on graphs are influenced by the structure of LDPC codes.

1.2.3 LDPC Codes

The LDPC code, which is based on a bipartite graph, was first proposed by Gallager [7] in the early 1960s, but it did not receive proper attention until years later.


LDPC codes are also called Gallager codes, in honor of Robert G. Gallager, who proposed the concept of LDPC codes in his PhD thesis. The bipartite graph contains two sets of nodes, called variable nodes and check nodes, and is built in such a way that for each check node, the modulo-2 sum of the values of its incident variable nodes is equal to zero. There are

various LDPC code constructions whose encoding algorithms run in linear time. The most efficient decoding algorithm for LDPC codes is the belief-

propagation (BP) algorithm. During each iteration, the BP algorithm updates the

probability that a variable node is zero based on the information obtained from

the check nodes in the previous round. The time complexity of the decoding is

proportional to the number of edges in the bipartite graph. LDPC code requires

a small reception overhead at the receivers to reconstruct the message symbols.

It has been proved that the performance of LDPC codes obtained from appropriately chosen, highly irregular bipartite graphs, rather than regular graphs, is very close to the Shannon bounds. LDPC codes are widely used in many

applications, e.g. in the DVB technology. The message-passing algorithm for

LDPC codes in the Binary Erasure Channel has been studied in [5]. Similarly, based on [8], results for LDPC codes in Additive White Gaussian Noise and in correlated and uncorrelated fast Rayleigh fading channels have been demonstrated in [6].

LDPC codes are used in DVB-S2, WiMax and 10GBase-T Ethernet.
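For the special case of the erasure channel, belief propagation reduces to a simple peeling rule: any check with exactly one erased neighbour recovers it by parity. A minimal sketch follows (the tiny parity structure is made up for illustration and is not a real LDPC design):

```python
def peel_decode(checks, received):
    """Peeling decoder over the BEC. `checks` lists, for each parity check,
    the indices of its variable nodes; received[i] is 0/1, or None if the
    bit was erased by the channel."""
    bits = list(received)
    progress = True
    while progress:
        progress = False
        for check in checks:
            erased = [v for v in check if bits[v] is None]
            if len(erased) == 1:            # check node can solve its one unknown
                v = erased[0]
                bits[v] = sum(bits[u] for u in check if u != v) % 2
                progress = True
    return bits

# Toy parity structure on 6 bits (illustrative only).
checks = [[0, 1, 2], [2, 3, 4], [4, 5, 0]]
codeword = [1, 1, 0, 0, 0, 1]               # satisfies all three checks mod 2
print(peel_decode(checks, [1, 1, None, 0, None, 1]))  # recovers [1, 1, 0, 0, 0, 1]
```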

1.2.4 Rateless Codes

Traditional block codes have several problems in real communication systems. For example, in systems where the channel error rate is unknown to the encoder or the decoder before coding, traditional codes, which have fixed code rates, have to estimate the loss rate and choose the closest code rate to adapt to the channel conditions. However, these coding algorithms are nearly useless if the loss rate cannot be estimated correctly or if it changes during the transmission. Communication methods used over

such channels usually employ feedback from the receiving end to the transmitter. These feedback messages are either acknowledgments for every packet or for missing packets only. In the latter case, the transmitter retransmits only the missing packets, while in the former case it retransmits the packets that receive a negative acknowledgment. However, from Shannon theory [2, 20] it can easily be concluded that these protocols are inefficient and wasteful of bandwidth, because if the channel has erasure probability ε, then its capacity is (1 − ε) whether feedback is used or not. This inefficient utilization of bandwidth becomes

more pronounced when a data source broadcasts over a network consisting of many channels with different loss rates. When using fixed-rate codes, the sender is forced to generate codewords based on the worst-case code rate to ensure reliable transmission. Thus, a new, robust family of FEC codes was proposed to address these problems.

M. Luby in [11] presented the first practical realization of a rateless code, and its performance was further improved in [14] by Amin Shokrollahi. Digital fountain codes are also called universal erasure codes because they can be used independently of the channel loss rate and show impressive performance over every erasure channel [11]. The LT code has excellent performance in the Binary Erasure Channel [11], but it exhibits an error floor in fading channels and impractical decoding complexity for long block lengths [12]. The Raptor code [14] is a concatenation

of an outer code with an LT code, designed to combat the error-floor problem in fading channels and to provide linear-time encoding and decoding complexity. The Raptor code has demonstrated the capacity-approaching property for the binary erasure channel [14]. It has not only outperformed the LT code but has also shown near-optimal performance over a wide variety of channels, including excellent performance in the AWGN channel [12, 15] and in Rayleigh fading channels [19] for large block lengths.
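The LT encoding rule at the heart of these codes is simple enough to sketch here: each output symbol is the XOR of d randomly chosen source symbols, with d drawn from a degree distribution. The fragment below is illustrative only; it uses the ideal soliton distribution for brevity, whereas practical LT codes use Luby's robust soliton distribution.

```python
import random

def lt_encode_symbol(source, degree_dist, rng):
    """Generate one LT output symbol: draw a degree d from the degree
    distribution, then XOR d distinct, uniformly chosen source symbols.
    The neighbour list is what the decoder's Tanner graph needs."""
    degrees, probs = zip(*sorted(degree_dist.items()))
    d = rng.choices(degrees, weights=probs)[0]
    neighbours = rng.sample(range(len(source)), d)
    value = 0
    for i in neighbours:
        value ^= source[i]
    return neighbours, value

# Ideal soliton distribution for k source symbols: rho(1) = 1/k,
# rho(d) = 1/(d(d-1)) for d = 2..k (sums to 1).
k = 8
dist = {1: 1 / k, **{d: 1 / (d * (d - 1)) for d in range(2, k + 1)}}
rng = random.Random(0)
neighbours, value = lt_encode_symbol(list(range(k)), dist, rng)
print(neighbours, value)
```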

1.2.5 Rateless code vs Fixed-rate code

A digital fountain code is considered "rateless", meaning that, unlike traditional block codes such as LDPC codes and RS codes, it does not have a fixed code rate; the rate is determined by the number of transmitted codeword symbols required before the decoder is able to decode. The rate is thus not known a priori, as it is in traditional fixed-rate block codes. It

can generate as many codeword symbols as needed to recover all the message

bits regardless of the channel conditions.

Our motivation for using rateless codes rather than fixed-rate codes stems from their advantages under different requirements. In fixed-rate codes the rate is fixed

and independent of realizations of the channel, so there is a tradeoff between


efficiency and reliability. Fixed-rate codes achieve high rate transmission at the

expense of reliability and vice versa, while in rateless codes, codeword length is

determined by the channel realization offering higher efficiency and reliability as

compared to fixed-rate codes. To achieve the best possible code rate, fixed-rate codes require channel information at the transmitter, which is sometimes difficult to obtain due to the limited bandwidth of the feedback channel, while rateless codes achieve efficient transmission irrespective of the channel information at the transmitter. In large relay networks, the feedback overhead of fixed-rate codes is significant compared to that of rateless codes, making rateless codes good candidates for relay networks.

1.2.6 Rateless code vs Automatic Repeat Request (ARQ)

Rateless codes have become natural candidates for hybrid ARQ. In the conventional incremental redundancy (IR-HARQ) protocol, the transmitter initially

sends only as many codeword symbols as necessary to ensure a high probability

of successful transmission. If the decoding fails, the receiver sends a negative

acknowledgement and the channel information to the transmitter. Taking into

account the channel information of the past transmission(s), the transmitter sends

only as many additional codeword symbols as necessary to ensure a high probability of successful transmission. Similarly, in rateless coding the transmitter sends codeword symbols to the receiver until it gets a feedback message from the receiver indicating successful decoding. Thus both rateless codes and IR-HARQ schemes adjust their transmission according to the channel conditions.

The encoding and decoding complexities of rateless schemes are higher than those of IR-HARQ schemes. However, when Raptor codes are used, we only need to encode as many parity bits as we need to send in the initial transmission or in subsequent retransmission(s). On the other hand, for punctured codes based on IR-HARQ schemes, we need to encode all parity bits, even though we may need to send only a small portion of them. Conventional IR-HARQ schemes require a feedback channel for frequent and substantial feedback messages, while rateless codes need only a one-time, single-bit feedback message for a given number of information bits.

1.3 Thesis Motivation

Adaptive modulation and coding enables robust and bandwidth-efficient trans-

mission over time-varying channels. Modulation and coding techniques that do

not adapt to fading conditions must still provide acceptable reliability when the channel quality is poor. Thus, these fixed-rate systems are effectively designed

for the worst-case channel conditions. Adapting to the channel fading can in-

crease average throughput, reduce required transmit power, or reduce average

probability of bit error by taking advantage of good quality channel conditions.

In rateless coding techniques transmission of information will take place at high

efficiency and reliability as compared to fixed-rate coding.

OFDM is used in practical applications such as Digital Audio Broadcasting (DAB) and Terrestrial Digital Video Broadcasting (DVB-T) in Europe, wireless networking, and broadband internet access. Wireless local area networks use OFDM as their physical-layer transmission technique: the European standard is ETSI HiperLAN/2, the American standard is IEEE 802.11a/g, and HiSWAN is the high-speed wireless LAN standard set by the Association of Radio Industries and Businesses in Japan; all of these have similar OFDM-based physical-layer specifications. OFDM is also a strong candidate for the IEEE Wireless Personal Area Network (WPAN) standard and for fourth-generation (4G) cellular systems.

Real-time applications need short block lengths, typically less than a few thousand bits. This limitation may stem from memory and buffer sizes and from transmission delay. The GSM frame duration is 4.615 ms, carrying the data of 8 users; with a radio-link data rate of 270.83 kbps, this corresponds to a frame length of 1250 bits. The multimedia broadcast and multiuser services standard of 3G wireless networks works with small codeword lengths of up to 500 bits [25]. WiMAX has a quite flexible physical layer: the frame duration varies from 2 ms to 20 ms, although 5 ms is most commonly used. Using a channel bandwidth of 1.25 MHz and QPSK modulation, WiMAX has an aggregate uplink data rate of 154 kbps, which with a 5 ms frame and 128 OFDM symbols corresponds to a frame length of 770 bits. The basic transmission element in the FDD physical layer of UMTS is the radio frame. A radio frame has a duration of 10 ms and is broken down into 15 time slots of 0.667 ms each. Each time slot contains 2,560 chips, which corresponds to 10 to 640 bits depending on the spreading factor. The frame length of Digital Audio Broadcasting based on MPEG-1 is 24 ms, with data rates between 32 and 192 kbps for a single channel; this gives Digital Audio Broadcasting a frame length between 768 and 4608 bits. So codes with small frame lengths play a significant role in present and future wireless communication technologies.
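All of these frame lengths follow from frame duration × data rate, which is easy to verify (the upper DAB figure uses the 192 kbps rate):

```python
def frame_bits(rate_bps, duration_s):
    """Frame length in bits = data rate x frame duration."""
    return rate_bps * duration_s

print(round(frame_bits(270.83e3, 4.615e-3)))   # GSM: 1250 bits
print(round(frame_bits(154e3, 5e-3)))          # WiMAX uplink: 770 bits
print(round(frame_bits(32e3, 24e-3)),
      round(frame_bits(192e3, 24e-3)))         # DAB: 768 and 4608 bits
```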

Rateless codes are typically used at the transport or network layer, but many existing systems use fairly small block lengths at the physical layer. Our motivation is therefore to investigate the performance of Raptor codes for short block lengths, since for large block lengths rateless codes have already been shown to outperform various other codes over a variety of channels.

Performance of broadband wireless communication networks is limited by available resources such as frequency bandwidth and transmission power. Various phenomena, such as multipath propagation, terminal mobility and user interference, result in channels with time-varying parameters. These time-varying features of wireless communication channels severely affect performance, so there is a need to develop practical adaptive transmission schemes for OFDM-based wireless communications systems. The block-fading channel is a good model for such systems, so we want to investigate the use of rateless codes [11, 14], a new code family ideally suited to the multi-channel data transmission environment [22], over such channels. We also decided to investigate the effect of correlation between fading blocks, which relates to the limited interleaving possible between carriers in an OFDM system.

Interleaving is used to mitigate the burst-error effect of the channel in communication over a fading channel. For a large coherence time, or equivalently a low Doppler spread of the fading, deep interleaving is required to break the memory of the channel, which some communication systems cannot afford. In the OFDM system recommended by IEEE 802.11, or in other applications such as GPRS, the interleaving depth is constrained by the maximum allowable packet size or the processing delay.

Thesis Objective:

The overall objective is to investigate the error-rate performance of a Raptor code with short block length over uncorrelated and correlated slowly fading channels, and to compare it with a fixed-rate standard (3,6) regular LDPC code. In order to reach this objective, we used the following procedure:

• We investigated the (3,6) rate-1/2 regular LDPC code over the binary erasure channel (BEC).

• We analyzed the (3,6) rate-1/2 regular LDPC code over the additive white Gaussian noise (AWGN) channel.

• We compared the performance of the (3,6) rate-1/2 regular LDPC code over the Rayleigh block-fading channel with additive white Gaussian noise (BF-AWGN) and over the AWGN-only channel.

• We thoroughly investigated Luby Transform (LT) codes over the BEC.

• By simulations, we investigated Raptor codes over the BEC.

• Finally, we investigated Raptor codes over the AWGN and BF-AWGN channels.

1.4 Thesis Contributions

This thesis contributes to the field of rateless coding in several ways. Along with LDPC codes, rateless codes are natural candidates for HARQ schemes in applications with fluctuating channel conditions. Raptor codes are a class of fountain codes, designed for reliable transmission of data over an erasure channel with unknown erasure probability. The main contributions of this master thesis can be summarized as follows:

• We compared the performance of a Raptor code and a half-rate standard (3,6) regular LDPC code over the binary erasure channel for short block lengths and showed by simulations that the Raptor code outperforms the corresponding LDPC code.

• We showed by simulations that a degree distribution optimized for the binary erasure channel outperforms a degree distribution optimized for the AWGN channel over fast-fading Rayleigh, uncorrelated and correlated block-fading channels for short block lengths.

• We compared the Raptor code and the half-rate standard (3,6) LDPC code over the AWGN channel for short block lengths.

• We compared the performance of the Raptor code and the half-rate standard (3,6) LDPC code over the fast-fading Rayleigh channel and the block-fading channel.

• We presented the performance of the Raptor code in the correlated block-fading channel and showed by simulations that correlation degrades performance, and that the degradation is more severe at lower rates.

• We compared the performance of the Raptor code to the half-rate standard (3,6) regular LDPC code in the correlated block-fading channel, based on the degree distribution optimized for the BEC, and showed by simulations that the Raptor code experiences more degradation, especially at low correlation values.

• We presented a performance comparison of the Raptor code and a punctured LDPC code in the correlated block-fading channel for short block lengths.

1.5 Thesis Organization

The rest of this thesis is organized as follows.

Chapter 2 presents the channel models used in our project for the performance comparison of the Raptor code and the half-rate standard (3,6) LDPC code.

Chapter 3 discusses the necessary background and the structure of LDPC codes and their decoding algorithms. Simulation results for LDPC codes over the binary erasure, AWGN and Rayleigh fading channels for small codeword block lengths are provided at the end of the chapter.

In Chapter 4, the concept of digital fountain codes [10] is introduced. Moreover, the performance of LT and Raptor codes is demonstrated for short-block-length codes over the binary erasure, AWGN, fast-fading Rayleigh and uncorrelated block-fading channels.

Chapter 5 presents simulation results that gauge the difference in performance between the Raptor code and the half-rate regular (3,6) LDPC code over the binary erasure, AWGN, fast-fading and uncorrelated block-fading channels. Chapter 5 also discusses the performance of the Raptor code in the correlated block-fading channel and compares it with the half-rate standard (3,6) LDPC code for short block lengths, and it introduces a performance comparison of the Raptor code and a punctured LDPC code in the correlated block-fading channel for short block lengths.


Finally, in Chapter 6 we present a summary of our work, conclusions, and

future directions for the continuation of this thesis.

Chapter 2

System Model

2.1 Introduction

Shannon's ground-breaking work in his landmark paper [20] demonstrated that, if the information (entropy) rate is below the capacity of the channel, then encoding methods exist such that information messages can be received without errors even if the channel distorts them during transmission. Recent developments in coding theory have produced codes whose performance is very close to the channel capacity, and error control coding has become an integral part of modern communication systems. A typical digital communication model is represented by the block diagram in Figure 2.1.1; this model is suitable from a coding theory and signal processing point of view. Information is generated by a source, which may be human speech, a data source, video, or a computer. This information is then transformed by the source coder into electrical signals suitable for a digital communication system. To ensure reliable transmission over the communication channel, the encoder introduces redundancy into the user information. The modulator is the system component that


Figure 2.1.1. Communication System Model (source → encoder (n > k) → modulator → channel → demodulator → decoder → sink; k-bit message, n-bit codeword, n-bit received word)

transforms the message into a signal suitable for transmission over the channel.

The physical medium through which information is transmitted is called the communication channel: for example, telephone lines in a wired system, or the environment between transmitter and receiver in a wireless system. The channel model is a mathematical model of the channel characteristics. During transmission, information travels from source to sink through the communication channel. Errors may arise from channel noise, so the encoder and decoder blocks must be designed to minimize the errors introduced by the channel. The messages from the source are binary sequences of length k bits. The encoder maps the messages to codewords, which are binary sequences of length n bits, but not all combinations of n bits are codewords: there are 2^k codewords out of the 2^n possible binary sequences.

After the signal has passed through the channel it is distorted by channel noise, so the decoder must be designed to minimize the discrepancy between the received word and the transmitted codeword. The design parameters of the decoder are complexity, delay and rate. In the absence of a decoder the receiver decides on each bit independently, whereas with a decoder it must decide on n bits jointly, which requires a more complex system; with today's VLSI technology, however, complexity is not a major issue. Waiting for all n bits also introduces delay into the system, which can be critical. Since n is greater than k, the redundancy of the coded system must be taken into consideration: the system now uses the channel n times to communicate k bits, whereas an uncoded system communicates k bits in k channel uses. In return, coding allows us to reduce the transmit power for the same error rate as the uncoded system, or equivalently to reduce the bit error rate for the same transmit power. Since high-power radio-frequency devices are more expensive than low-power ones, this further enhances the importance of coding techniques.

2.2 Channel Modelling

The goal of wireless channel modeling is to find useful analytical models for the variations of the channel. The most prominent drawback of wireless communications is channel fading. Various phenomena, such as multipath propagation, terminal mobility and user interference, result in channels with time-varying parameters. Fading of the wireless channel can be classified into large-scale and small-scale fading. Large-scale fading involves the variation of the mean received signal power over distances that are large relative to the signal wavelength. Small-scale fading, on the other hand, involves fluctuations of the received signal power over distances comparable to the wavelength. Models for the large-scale variations are useful in cellular capacity-coverage optimization and analysis, and in radio resource management tasks such as handoff, admission control and power control. Models for the small-scale variations are more useful in the design of digital modulation and demodulation schemes that are robust to these variations; we hence focus on the small-scale variations here. Reflection, diffraction and scattering in the communication channel cause rapid variations in the received signal. The reflected signals arrive with different delays, which results in random amplitudes and phases of the received signal. This phenomenon is called multipath fading. If the product of the root-mean-square (RMS) delay spread, i.e. the standard deviation of the delay, and the signal bandwidth is much less than unity, the channel is said to experience flat fading. The relative motion between the transmitter and the receiver causes the frequency of the received signal to be shifted relative to that of the transmitted signal. This frequency shift, or Doppler frequency, is proportional to the velocity of the receiver and the frequency of the transmitted signal. A signal undergoes slow fading when the bandwidth of the signal is much larger than the Doppler spread (a measure of the spectral broadening caused by the Doppler frequency). The combination of multipath fading and its time variations can degrade the received signal severely, and this degradation needs to be compensated by techniques such as diversity and channel coding. In the forthcoming subsections we briefly discuss a few standard channel models that we will frequently use in our simulations.

2.2.1 Binary Erasure Channel

A discrete memoryless channel (DMC) is a class of channel for which both the input and output letters belong to finite alphabets. The simplest and most popular example of a DMC is the binary erasure channel (BEC), which


Figure 2.2.1. The Binary Erasure Channel (inputs 0 and 1 are passed unchanged with probability 1 − ε or mapped to the erasure symbol with probability ε)

is characterized by the transition probability ε, as shown in Figure 2.2.1. The BEC has two inputs, 0 and 1. The output alphabet consists of 0, 1 and an additional element called the erasure. Each bit is either transmitted correctly with probability (1 − ε) or erased with probability ε. The capacity of this channel is (1 − ε) [2]. The binary erasure channel can be used to model the internet, where packets are either forwarded correctly or dropped due to congestion or other disturbances in the network.
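The BEC is simple to simulate; the sketch below marks erasures with None and checks the empirical erasure rate against ε (capacity 1 − ε):

```python
import random

def bec(bits, eps, seed=0):
    """Binary erasure channel: each bit survives with probability 1 - eps,
    otherwise it is replaced by the erasure symbol (None here)."""
    rng = random.Random(seed)
    return [b if rng.random() > eps else None for b in bits]

def bec_capacity(eps):
    return 1.0 - eps          # bits per channel use

out = bec([0, 1] * 50_000, eps=0.3)
erasure_rate = out.count(None) / len(out)
print(abs(erasure_rate - 0.3) < 0.01, bec_capacity(0.3))
```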


Figure 2.2.2. The Binary Symmetric Channel (each input bit is received correctly with probability 1 − ε or flipped with probability ε)

2.2.2 Binary Symmetric Channel

Another important example of a memoryless channel is the binary symmetric channel (BSC), shown in Figure 2.2.2. In the BSC both input and output are binary. Each bit is either transmitted correctly with probability (1 − ε), or flipped with probability ε. The capacity of the channel is 1 + ε log2(ε) + (1 − ε) log2(1 − ε) [2].
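The BSC capacity is one minus the binary entropy of ε; a quick numerical check:

```python
from math import log2

def bsc_capacity(eps):
    """C = 1 + eps*log2(eps) + (1 - eps)*log2(1 - eps), i.e. 1 - H2(eps)."""
    if eps in (0.0, 1.0):
        return 1.0            # limit value: the channel is deterministic
    return 1.0 + eps * log2(eps) + (1.0 - eps) * log2(1.0 - eps)

print(bsc_capacity(0.0))      # 1.0: noiseless channel
print(bsc_capacity(0.5))      # 0.0: output is independent of the input
```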

2.2.3 AWGN Channel

In contrast to the binary symmetric channel, which has discrete input and output symbols taken from binary alphabets, the so-called AWGN channel is defined on the basis of continuous-valued random variables. Transmission over an AWGN channel can be expressed as y = x + n, where n is additive white Gaussian noise with known variance, and x and y are the input and output signals, respectively. The distribution of the noise is assumed to be Gaussian


Figure 2.2.3. The Additive White Gaussian Noise Channel (the matched-filter output R is the sum of the input signal vector X and the noise vector Z)

with constant spectral density. The AWGN model does not account for fading, interference, or dispersion in time and frequency. The sources of Gaussian noise are thermal vibrations of atoms in antennas, shot noise, black-body radiation, etc. The AWGN channel is a good model for many satellite and deep-space communication links. The AWGN channel model is illustrated in Figure 2.2.3 and described by the additive white Gaussian noise term Z. In particular, the capacity of a bandlimited AWGN channel with bandwidth B is given by

C = B log2(1 + S/N) bits per second (bps),

where B is the bandwidth of the system, S is the power of the input signal X and N is the power of the noise signal Z, expressed by the following relations:

S = E[X²],  N = E[Z²].
So the channel capacity depends on the signal-to-noise ratio. The signal-to-noise ratio of the real-valued output R of the matched filter is then given by

S/N = Eb/(N0/2),

with bit energy Eb and noise power spectral density N0. In [23], the capacities of the AWGN channel and the BSC are compared: to achieve the same channel capacity, the binary symmetric channel requires a higher signal-to-noise ratio than the AWGN channel. This gain translates into the coding gain achievable by soft-decision decoding compared to hard-decision decoding of channel codes.
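Both the capacity formula and the channel model y = x + n are easy to reproduce numerically; the bandwidth and SNR values below are arbitrary illustrations:

```python
import math
import random

def awgn_capacity(bandwidth_hz, snr_linear):
    """C = B * log2(1 + S/N) for a bandlimited AWGN channel, in bits/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

def awgn(x, noise_std, seed=0):
    """y = x + n with i.i.d. n ~ N(0, noise_std^2)."""
    rng = random.Random(seed)
    return [xi + rng.gauss(0.0, noise_std) for xi in x]

print(awgn_capacity(1.0, 3.0))    # unit bandwidth, SNR = 3 -> 2.0 bits/s
print(len(awgn([1.0, -1.0, 1.0], noise_std=0.1)))
```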

2.2.4 Fast Rayleigh Fading Channel

For most practical channels, where signal propagation takes place in the atmosphere and near the ground, the free-space propagation model is inadequate to describe the channel and predict system performance. When there is no line-of-sight path between transmitter and receiver, the multipath arises only from reflections off objects in the environment. This form of scattering is purely diffuse and can be assumed to form a continuum of paths, with no one path dominating the others in strength. When the channel gain follows a Rayleigh probability density function (pdf), the channel is said to be a Rayleigh fading channel. The received signal can be modeled as y = α · te + n. Here α is the normalized Rayleigh fading factor, related to the fading coefficient ht of the channel through α = |ht|, where the real and imaginary components of


Figure 2.2.4. Block-fading channel codeword representation (N symbols divided into m blocks of N/m symbols, with fading factors α1, α2, ..., αm)

ht are Gaussian random variables. If sufficient channel interleaving is introduced, the fading coefficients ht are independent. The probability density function of the normalized Rayleigh fading factor is p(α) = 2α e^(−α²); n is a Gaussian random variable with zero mean and variance σ², and the phase is an independent random variable, uniform on [0, 2π]. Rayleigh fading is viewed as a reasonable model for the effect of urban environments on radio signals.
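The normalized Rayleigh factor is the magnitude of a complex Gaussian whose real and imaginary parts each have variance 1/2, which gives E[α²] = 1; a quick sketch checking the normalization:

```python
import math
import random

def rayleigh_factor(rng):
    """alpha = |h| with h = hr + j*hi and hr, hi ~ N(0, 1/2), so that
    the fading is normalized: E[alpha^2] = 1."""
    hr = rng.gauss(0.0, math.sqrt(0.5))
    hi = rng.gauss(0.0, math.sqrt(0.5))
    return math.hypot(hr, hi)

rng = random.Random(42)
mean_sq = sum(rayleigh_factor(rng) ** 2 for _ in range(200_000)) / 200_000
print(abs(mean_sq - 1.0) < 0.02)   # consistent with p(a) = 2a*exp(-a^2)
```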

2.2.5 Block Rayleigh Fading Channel Model

The block-fading model is a common representation of slowly varying fading channels. In a block-fading channel, the random channel gain, or normalized Rayleigh fading factor, remains constant over a certain block of the symbols transmitted through the channel. The codeword representation of the block-fading channel is shown in Figure 2.2.4. A code designed to work well on the fast-fading channel may therefore not behave well on the block-fading channel. Since the block-fading channel is nonergodic [17], we cannot use the channel capacity as we did for the fast-fading channel; as an information-theoretic rate limit we instead use the outage probability [18].

Suppose we want to transmit N symbols over an m-block fading channel, where m is the


number of independent normalized fading factors. The number of symbols affected by each fading block is l ≜ N/m. The received symbol yi is given by

yi = αj · ti + ni,

where i = 1, ..., N and j = 1 + [(i − 1)/l], with [x] denoting the integer part of the real number x. Here αj is defined as for the Rayleigh fading channel for fading block j, j = 1, ..., m, and ni is an i.i.d. AWGN sample with zero mean and variance σ² = N0/2, where N0/2 is the two-sided noise power spectral density.

For our simulations we use a 2-block fading channel: we divide the codeword into two blocks, each with an independent fading factor. The simulated model is

y1 = α1 te1 + n1

y2 = α2 te2 + n2

where α1 and α2 are independent normalized Rayleigh fading factors; α1 applies to the first half of the codeword bits te1 and α2 to the second half te2, independently of each other, and n1 and n2 are additive Gaussian noise terms with zero mean and variance σ².
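The 2-block model can be sketched directly; BPSK symbols stand in for the coded bits, and each half of the word is scaled by its own Rayleigh factor before noise is added (an illustrative sketch, not the thesis simulator):

```python
import math
import random

def two_block_fading(symbols, noise_std, seed=0):
    """Each half of the codeword sees one constant Rayleigh fading factor
    alpha_j = |h_j|, plus an independent AWGN sample per symbol."""
    rng = random.Random(seed)
    half = len(symbols) // 2
    received = []
    for block in (symbols[:half], symbols[half:]):
        h = complex(rng.gauss(0.0, math.sqrt(0.5)),
                    rng.gauss(0.0, math.sqrt(0.5)))
        alpha = abs(h)                        # constant over the whole block
        received += [alpha * s + rng.gauss(0.0, noise_std) for s in block]
    return received

bpsk = [1.0, -1.0, 1.0, 1.0, -1.0, -1.0]
y = two_block_fading(bpsk, noise_std=0.1)
print(len(y))   # one received sample per transmitted symbol
```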

2.2.6 Correlated Block Rayleigh Fading Channel Model

In the BF model, the transmitted sequence is divided into blocks and all symbols belonging to the same block experience the same fading. In some cases the fading blocks are assumed to be independent of each other, but in some applications,

Figure 2.2.5. Model of the Correlated Block Fading Channel (information bits u1, ..., uk are encoded to c1, ..., cn, interleaved and modulated onto L symbol vectors x1, ..., xL; each xi passes through a subchannel with fading gain αi and noise ni; the receiver de-interleaves, demodulates and decodes to û1, ..., ûk)

such as OFDM systems, considerable correlation exists among the fading blocks due to imperfect interleaving. The non-ideal interleaving is caused by constraints such as the maximum allowable packet size and processing-delay requirements.

The coded system over the correlated block-fading channel is shown in Figure 2.2.5. The k information bits are encoded into a codeword of n bits. The coded bits are interleaved and mapped to L symbol vectors xi. Each symbol vector xi is transmitted through one subchannel of the block-fading channel, each having normalized Rayleigh fading factor αi. The fading coefficients are correlated according to the correlation matrix

Cα = E[α α^H],  α = (α1, ..., αL)^T,

where L is the number of fading blocks.
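Correlated fading gains can be generated by coloring i.i.d. complex Gaussian block coefficients with a Cholesky factor of the desired correlation matrix and then taking magnitudes; correlating the underlying Gaussians (rather than the magnitudes directly) is one common modeling convention, and the 2-block case with correlation 0.5 below is only an illustrative choice:

```python
import math
import random

def correlated_block_gains(rho, rng):
    """Two Rayleigh block gains whose underlying CN(0,1) coefficients have
    correlation rho, imposed via the 2x2 Cholesky factor
    [[1, 0], [rho, sqrt(1 - rho^2)]]."""
    def cn01():
        return complex(rng.gauss(0.0, math.sqrt(0.5)),
                       rng.gauss(0.0, math.sqrt(0.5)))
    w1, w2 = cn01(), cn01()                  # independent CN(0, 1)
    h1 = w1
    h2 = rho * w1 + math.sqrt(1.0 - rho * rho) * w2
    return abs(h1), abs(h2)                  # alpha_1, alpha_2

rng = random.Random(7)
pairs = [correlated_block_gains(0.5, rng) for _ in range(100_000)]
mean_sq2 = sum(b * b for _, b in pairs) / len(pairs)
print(abs(mean_sq2 - 1.0) < 0.02)   # h2 is still CN(0, 1): E[alpha_2^2] = 1
```

With rho = 1 the two blocks see identical gains, recovering a single-block fading channel; with rho = 0 the model reduces to the independent 2-block channel of the previous subsection.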


2.3 OFDM-based Wireless Communication Systems

2.3.1 Multicarrier Modulation

A single-carrier modulation scheme is limited by the delay spread of the channel. The simple idea of multicarrier transmission is to overcome this limitation by splitting the data stream into a number of sub-streams of lower data rate and transmitting these sub-streams on adjacent subcarriers. In single-carrier modulation, data is sent serially over the channel by modulating a single carrier. In a multipath fading channel, the time dispersion can be significant compared to the symbol period, which results in intersymbol interference (ISI); in that situation a more complex equalizer is required to combat the channel distortion. In multicarrier modulation, the available bandwidth is divided into a number of sub-bands, called subcarriers. The data is divided into several parallel data streams or channels, one for each subcarrier. Each subcarrier is modulated with a conventional modulation scheme at a low symbol rate, maintaining a total data rate similar to conventional single-carrier modulation schemes in the same bandwidth. The symbol duration can be made greater than the maximum delay of the channel by selecting more subcarriers. At the same time, the bandwidth of each subcarrier must be kept small compared to the coherence bandwidth of the channel, so that each subcarrier experiences flat fading and only simple equalization is required at the receiver. Increasing the number of subcarriers increases the symbol time, which significantly reduces ISI and hence simplifies equalization. However, the performance of signals with long symbol times is degraded in time-variant


Figure 2.3.1. Model of an OFDM System (encoder → IFFT → cyclic prefix (CP) insertion → multipath channel with noise nk,t → CP removal → FFT → decoder)

channels. If the symbol time is greater than the coherence time of the channel, the transmission is heavily affected within a single symbol and the overall performance is degraded.
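The two design constraints above (per-subcarrier bandwidth well below the coherence bandwidth, symbol time above the maximum channel delay) are easy to check numerically; the 20 MHz / 64-subcarrier numbers below are illustrative assumptions only, not system parameters from this thesis:

```python
def multicarrier_ok(bandwidth_hz, n_subcarriers, max_delay_s, coherence_bw_hz):
    """Each subcarrier occupies B/N Hz and the symbol time is roughly N/B s.
    Flat fading per subcarrier needs B/N below the coherence bandwidth;
    ISI robustness needs the symbol time to exceed the maximum delay."""
    subcarrier_bw = bandwidth_hz / n_subcarriers
    symbol_time = n_subcarriers / bandwidth_hz
    return subcarrier_bw < coherence_bw_hz and symbol_time > max_delay_s

# 20 MHz split into 64 subcarriers, 0.8 us delay spread, 1 MHz coherence bw
print(multicarrier_ok(20e6, 64, 0.8e-6, 1e6))   # True
print(multicarrier_ok(20e6, 1, 0.8e-6, 1e6))    # False: a single carrier fails both
```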

2.3.2 OFDM

To achieve higher spectral efficiency in a multicarrier system, the subcarriers must have overlapping transmit spectra, but at the same time they need to be orthogonal to avoid complex separation and processing at the receiving end. Multicarrier modulation schemes that fulfil these conditions are called orthogonal frequency division multiplexing (OFDM) systems. Instead of a baseband modulator and a bank of matched filters, the Inverse Fast Fourier Transform (IFFT) and Fast Fourier Transform (FFT) provide an efficient implementation of an OFDM system, as shown in Figure 2.3.1, because they are cheap and do not suffer from the inaccuracies of analogue oscillators.

Intersymbol interference occurs when the signal passes through a time-dispersive channel. In an OFDM system it is also possible that the orthogonality

of the subcarriers may be lost, resulting in intercarrier interference (ICI). An OFDM system uses a cyclic prefix (CP) to overcome these problems. The cyclic prefix is a copy of the last part of the OFDM symbol prepended to the transmitted symbol, and it is removed at the receiver before demodulation. The cyclic prefix should be at least as long as the channel impulse response. The prefix has two advantages: it serves as a guard space between successive symbols to avoid ISI, and it converts the linear convolution with the channel impulse response into a circular convolution. Since circular convolution in the time domain translates into scalar multiplication in the frequency domain, the subcarriers remain orthogonal and there is no ICI, in addition to savings in memory and measurement time. In Figure 2.3.1, L coded vectors xi are generated by proper coding, interleaving and mapping. After adding the cyclic prefix, the OFDM signal is passed through the multipath channel. At the receiver the cyclic prefix is removed and the received signal is passed through the FFT block to obtain L received vectors yi, where vk,t is zero-mean Gaussian noise with variance N0/2 on the k-th sample of the t-th OFDM symbol. Here N0 is the noise power spectral density, k = 0, 1, ..., NFFT − 1 and t = 1, 2, ..., M, where M is the number of OFDM symbols and NFFT is the size of the FFT.
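The cyclic-prefix argument can be verified numerically: after IFFT, CP insertion, linear convolution with a short channel, CP removal and FFT, every subcarrier sees only a flat scalar gain H[k]. A self-contained sketch with a naive DFT and an arbitrary 2-tap channel (not a channel from the thesis):

```python
import cmath

def dft(x, inverse=False):
    """Naive DFT/IDFT; O(n^2) but enough to illustrate the principle."""
    n, sign = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

def ofdm_one_symbol(data, channel):
    """IFFT -> prepend cyclic prefix -> linear convolution with the channel
    -> drop the prefix -> FFT back to the frequency domain."""
    n, cp = len(data), len(channel) - 1
    tx = dft(data, inverse=True)
    tx_cp = tx[-cp:] + tx                        # cyclic prefix
    rx = [sum(channel[j] * tx_cp[i - j]
              for j in range(len(channel)) if i - j >= 0)
          for i in range(len(tx_cp))]
    return dft(rx[cp:cp + n])

data = [1, -1, 1, 1, -1, 1, -1, -1]              # BPSK on 8 subcarriers
h = [0.9, 0.3]                                   # 2-tap multipath channel
rx_freq = ofdm_one_symbol(data, h)
H = dft(h + [0] * (len(data) - len(h)))          # channel frequency response
# each subcarrier sees only a scalar gain: rx_freq[k] == H[k] * data[k]
err = max(abs(rx_freq[k] - H[k] * data[k]) for k in range(len(data)))
print(err < 1e-9)
```

Removing the prefix turns the linear convolution into a circular one, so the residual error is only floating-point noise; a one-tap equalizer per subcarrier (dividing by H[k]) then recovers the data.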

2.3.3 Interleaving in OFDM System

Diversity is a technique used to mitigate multipath fading. Diversity can be achieved when multiple independently fading channels exist between transmitter and receiver. From a channel coding point of view, closely related transmitted coded bits must be separated in time or frequency so that they experience independent fading. In a block code, bits are closely related if they are part of the same codeword, while in a convolutional code they are closely related if only a few constraint lengths lie between them. Time interleaving separates bits in time, while frequency interleaving physically separates bits in the frequency domain. Time interleaving introduces decoding delay, because the receiver has to wait until all closely related bits have been received before decoding. With time interleaving, closely related bits of the code experience independent fading when their time separation is greater than the coherence time of the channel. Similarly, the bandwidth of the system should be greater than the coherence bandwidth of the channel to obtain efficient frequency interleaving.

Different techniques are available to implement interleaving. A block interleaver takes a block of symbols and permutes the order of the symbols within the block; at the receiver the permutation is reversed to restore the original order. Convolutional interleavers operate in a sequential manner. Block interleavers are the best choice for frequency interleaving, since a block of subcarriers is processed at a time, while convolutional interleavers are suited to time interleaving due to their excellent decoding properties. In certain situations time interleaving does not give sufficient diversity, so time and frequency interleaving may be used in combination.
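A block interleaver and its inverse take only a few lines; here a seeded pseudo-random permutation stands in for whatever permutation pattern a real standard prescribes:

```python
import random

def make_permutation(n, seed=0):
    perm = list(range(n))
    random.Random(seed).shuffle(perm)
    return perm

def interleave(block, perm):
    """Reorder the block according to the permutation."""
    return [block[p] for p in perm]

def deinterleave(block, perm):
    """Invert the permutation to restore the original order."""
    out = [None] * len(block)
    for i, p in enumerate(perm):
        out[p] = block[i]
    return out

bits = [0, 1, 1, 0, 1, 0, 0, 1]
perm = make_permutation(len(bits), seed=3)
print(deinterleave(interleave(bits, perm), perm) == bits)   # True
```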

OFDM systems use interleaving (either time or frequency) depending on the application. Digital Audio Broadcasting (DAB) is typically an OFDM-based system. From [24], for DAB operating at 225 MHz, a vehicle speed of 48 km/h corresponds to a Doppler frequency of 10 Hz, and time interleaving then results in a decoding delay of several seconds, which is not suitable for a practical system. OFDM has the ability to combine time and frequency interleaving to achieve the best interleaving.

2.4 Interleaved-OFDM System as a Block Fading Channel

In a practical multicarrier system such as OFDM, correlation exists among the subcarriers due to non-ideal interleaving [24]. One of the reasons for the loss of orthogonality between the subcarriers of an OFDM system is a high Doppler frequency. From a practical point of view, this correlation must be taken into consideration.

In an OFDM system, information bits are distributed over a number of subcarriers and transmitted over parallel channels. Each subcarrier experiences a flat-fading channel, but due to limited interleaving, correlation exists among the subcarriers. Similarly, in a block-fading channel the channel gain remains constant over a certain block of transmitted symbols. For the performance analysis of an OFDM system we can therefore use the generalized model of the correlated block-fading channel [26]: the whole chain from the IFFT to the FFT in Figure 2.3.1 can be viewed as the correlated block-fading channel of Figure 2.2.5.

As an example, the interleaving depth in the OFDM system recommended by IEEE 802.11 is limited by the maximum allowable packet size and the processing-delay requirements. This results in non-ideal interleaving, which effectively limits the maximum diversity achievable from the channel. In such a channel the fading gain occurs in a block-wise fashion, i.e., all transmitted symbols of the same block experience the same fading, and there is considerable correlation among the fading blocks. We call such a channel a correlated block-fading (CBF) channel. In some situations the fading blocks can be assumed independent, but in OFDM-based systems some correlation exists. A limited-interleaving OFDM-based system can therefore be properly modelled by the correlated block-fading channel.

Chapter 3

Low Density Parity Check Codes

3.1 Introduction

A linear code with a sparse parity-check matrix is called a low-density parity-check (LDPC) code. LDPC codes were first proposed by Gallager [7] in the early 1960s, along with their elegant iterative decoding, but did not get proper attention until years later: due to their heavy computational demands, LDPC codes were long ignored, and they made a remarkable comeback only in the last few years. The name refers to the parity-check matrix, which has a low density of 1s compared to the number of 0s. In contrast to other coding schemes, LDPC codes offer better performance with low decoding complexity, and they are often called capacity-approaching codes.

In the parity-check matrix representation of an LDPC code, a code word c = (c1, . . . , cn) satisfies the simple linear-algebraic equation HcT = 0T, where H is the parity-check matrix. The elements of the parity-check matrix are 0s and 1s, and all arithmetic is modulo 2; that is, multiplying c by a row of H means taking the XOR of the bits of c corresponding to the 1s in that row of H. The number of 1s in each row of the parity-check matrix is called the row weight wr, and the number of 1s in each column is the column weight wc. If the block length of the code is n, then the number of rows of the parity-check matrix is n × wc/wr, which has to be an integer. The rate of the LDPC code satisfies

R ≥ 1 − wc/wr
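As a quick numerical check of these two relations (an illustrative sketch, not from the thesis; the helper name is hypothetical):

```python
def ldpc_dimensions(n, wc, wr):
    """Number of parity-check rows m = n*wc/wr and the rate bound
    R >= 1 - wc/wr for a regular LDPC code with column weight wc
    and row weight wr."""
    assert (n * wc) % wr == 0, "n*wc/wr must be an integer"
    m = n * wc // wr
    return m, 1 - wc / wr

print(ldpc_dimensions(512, 3, 6))  # (256, 0.5)
```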

3.1.1 Regular LDPC Codes

An LDPC code is regular if the number of 1s per column, wc, and the number of 1s per row, wr, are constant over the parity-check matrix. The LDPC codes proposed by Gallager in [7] are regular, with a parity-check matrix of the form

H = [ H1  ]
    [ H2  ]
    [  ⋮  ]
    [ Hwc ]

where the submatrices Hs have a special structure. For any integers β and wr greater than 1, each submatrix Hs has row weight wr, column weight 1, and size β × βwr. The submatrix H1 has the special form that, for i = 1, . . . , β, the ith row contains its wr 1s in columns (i − 1)wr + 1 to iwr. The other submatrices are column permutations of H1. Gallager [7] showed that the ensemble of such codes has excellent distance properties provided that wc ≥ 1 and wr > wc. MacKay [21] independently proposed different methods for generating sparse parity-check matrices H.


3.1.2 Irregular LDPC Codes

An LDPC code is irregular if the number of 1s per column and per row is not constant over the parity-check matrix. Irregular LDPC codes can be parameterized by the degree polynomials λ(x) and ρ(x), defined as

λ(x) = Σ_{i=2}^{dl} λi x^(i−1)   and   ρ(x) = Σ_{i=2}^{dr} ρi x^(i−1)

where λi and ρi are the fractions of edges belonging to degree-i variable and check nodes, respectively, and dl and dr are the maximum variable and check node degrees. The polynomials λ(x) and ρ(x) are optimized by a combination of density evolution and an optimization algorithm.
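For a concrete feel of these polynomials, the design rate of an ensemble follows from the edge-perspective distributions as R = 1 − (Σi ρi/i)/(Σi λi/i). The sketch below (illustrative, with a hypothetical helper name) evaluates this:

```python
def design_rate(lam, rho):
    """Design rate of an LDPC ensemble from edge-perspective degree
    distributions: lam[d] / rho[d] is the fraction of edges attached to
    degree-d variable / check nodes (dicts mapping degree -> fraction)."""
    int_lam = sum(f / d for d, f in lam.items())  # integral of lambda from 0 to 1
    int_rho = sum(f / d for d, f in rho.items())  # integral of rho from 0 to 1
    return 1.0 - int_rho / int_lam

# Regular (3, 6) ensemble: every edge sees a degree-3 variable node
# and a degree-6 check node.
print(design_rate({3: 1.0}, {6: 1.0}))  # 0.5
```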

3.1.3 Graphical Representation of LDPC Codes

In coding theory, codes have been associated with graphs in a variety of ways. The Tanner graph is the best way to represent an LDPC code, as it not only gives a good visualization but also describes the decoding algorithm. Tanner graphs of LDPC codes are bipartite graphs with variable (or bit) nodes on one side and constraint (or check) nodes on the other. Each variable node corresponds to a bit, and each parity-check node corresponds to a constraint on the bits of the code word. Edges connect the corresponding variable and check nodes. The Tanner graph representation of an LDPC code is closely analogous to the more standard parity-check matrix representation. Such a parity-check matrix is easily generated [7, 21]; it has as many rows (check nodes) as parity checks and as many columns (variable nodes) as bits in a code word. The entry (i, j) of H is 1 if and only if the ith check node is


Figure 3.1.1. Tanner graph of the LDPC code: check nodes c1–c5 on one side, variable nodes v1–v10 on the other.

connected to the jth variable node in the graph. An LDPC code is defined by its graph and consists of the set of vectors c of block length n such that HcT = 0T. However, not every binary linear code has a representation by a sparse bipartite graph; if it does, the code is called a low-density parity-check (LDPC) code [2]. The graphical representation of the rate-1/2 regular (3, 6) LDPC code of code word length 10 defined by the following parity-check matrix is shown in Figure 3.1.1.
 
 1 1 1 1 0 1 1 0 0 0 
 
 
 0 0 1 1 1 1 1 1 0 0 
 
 
H=  0 1 0 1 0 1 0 1 1 1  
 
 
 1 0 1 0 1 0 0 1 1 1 
 
 
1 1 0 0 1 0 1 0 1 1
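These properties are easy to verify numerically; the sketch below (illustrative, with hypothetical helper names) checks the (3, 6)-regularity of this matrix and the parity-check condition HcT = 0T:

```python
# Parity-check matrix of the (3, 6)-regular length-10 code above (rows of H).
H = [[1, 1, 1, 1, 0, 1, 1, 0, 0, 0],
     [0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
     [0, 1, 0, 1, 0, 1, 0, 1, 1, 1],
     [1, 0, 1, 0, 1, 0, 0, 1, 1, 1],
     [1, 1, 0, 0, 1, 0, 1, 0, 1, 1]]

# Regularity: every row has weight wr = 6 and every column weight wc = 3.
assert all(sum(row) == 6 for row in H)
assert all(sum(col) == 3 for col in zip(*H))

def is_codeword(H, c):
    """True iff H c^T = 0 (mod 2), i.e. every parity check is satisfied."""
    return all(sum(h * b for h, b in zip(row, c)) % 2 == 0 for row in H)

print(is_codeword(H, [0] * 10))       # True: the all-zero word is always valid
print(is_codeword(H, [1] + [0] * 9))  # False: a single flipped bit violates checks
```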

42
Iterative Decoding of LDPC Codes 3.2

3.1.4 Decoding Complexity

LDPC codes have achieved outstanding performance over the additive white Gaussian noise channel and the Rayleigh fading channel [6]. Simulation results show that LDPC codes perform close to the Shannon limit for large block lengths; for example, an LDPC code with block length 10^7 performs within 0.04 dB of the Shannon limit at a bit error rate of 10^−6 [1].

The simplicity of LDPC codes has been exploited by recent design improvements, advances in computational technology, and hardware realizations, producing coding systems that outperform turbo codes at lower complexity. LDPC codes have the advantage of controlled sparseness, i.e., a specified, small number of 1s in each row and column. The small number of 1s in each column not only reduces the number of equations to be solved by the decoder, but also makes practical decoding by iterative methods feasible.

3.2 Iterative Decoding of LDPC Codes

The Tanner graph exposes the dependency structure of the bits received from the channel. The iterative, or message-passing, algorithm discussed in [7] is easily explained on a Tanner graph. As the name indicates, in each round messages are passed from the variable nodes to the check nodes and from the check nodes back to the variable nodes. The message sent by a variable node depends on the value of the bit received from the channel and on the values received from its neighboring check nodes, except the check node to which the message will be sent. Similarly, the message sent by a check node depends on its neighboring variable nodes, except the variable node to which the message will be sent. The most popular iterative algorithm is belief propagation, in which the messages passed between variable and check nodes are not hard decisions but probabilities, conditional likelihoods, or log-likelihoods. It is also assumed that the messages passed between variable and check nodes are independent of each other.

If the messages exchanged between the variable and check nodes are independent, then each node can accurately calculate its log-likelihood based on the received beliefs. This independence assumption is not fulfilled in practice, but on a bipartite graph it holds for the first l iterations if the neighborhood of a variable node up to depth l is a tree [2].

3.3 LDPC Decoding for BEC

The binary erasure channel (BEC) is perhaps the simplest channel model, as described in Section 2.2.1. The message-passing algorithm for LDPC codes has been specialized in [5] using a hard-decision rule for the binary erasure channel. The variable nodes receive bit information from the channel, some of which is erased according to the channel erasure probability. Based on the received information, each variable node sends an erasure, a 0, or a 1 to its check nodes. The check nodes collect these messages and compute the messages to be sent back to the corresponding variable nodes. From the check node's perspective, if any incoming message, excluding the one from the variable node to which the outgoing message is addressed, is an erasure, then the outgoing message is an erasure; if those incoming messages are all 0s and 1s, then the outgoing message is their mod-2 sum. When this information is received by the variable nodes, a variable node sends an erasure to a check node if all incoming messages except the one from that check node are erasures; otherwise, all the non-erasure messages must agree, on either 0 or 1, since the channel introduces no errors. At that point the log-likelihood ratios of the corresponding bits are determined. The iterations can be halted as soon as a valid code word is obtained, i.e., HcT = 0, or a maximum number of iterations should be imposed explicitly.
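The erasure-recovery rule above can be sketched as a minimal peeling decoder (an illustrative sketch, not the thesis simulator; the function name is hypothetical and `None` marks an erased bit):

```python
def decode_bec(H, y, max_iter=100):
    """Peeling decoder for the BEC: y is the received word with None for
    erasures. A check whose neighborhood contains exactly one erased bit
    resolves it as the mod-2 sum of the known neighbors."""
    c = list(y)
    for _ in range(max_iter):
        progress = False
        for row in H:
            erased = [i for i, h in enumerate(row) if h and c[i] is None]
            if len(erased) == 1:  # exactly one unknown bit in this check
                known = sum(c[i] for i, h in enumerate(row) if h and c[i] is not None)
                c[erased[0]] = known % 2  # the check's parity must be even
                progress = True
        if not progress:  # no degree-one check left: decoder stalls
            break
    return c

H = [[1, 1, 0, 1, 0],
     [0, 1, 1, 0, 1]]
print(decode_bec(H, [0, None, 1, 0, None]))  # [0, 0, 1, 0, 1]
```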

3.3.1 Simulation Results and Conclusions

In Figure 3.3.1 we present results for a random, regular (3, 6) LDPC code with rate one-half and code word length 512 bits over the binary erasure channel, using the hard-decision rule. For simplicity, we assume that the all-zero code word is transmitted. The main purpose of these simulations is to show the performance gain of LDPC decoding and to compare the behavior of different code word lengths. The simulation compares the residual erasure probability with LDPC decoding against the erasure probability without it, and it is clear that iterative LDPC decoding enhances the performance of the system. Up to a threshold channel erasure probability, which in our simulations is about 0.33, the residual erasure probability with iterative LDPC decoding is zero; beyond it, the residual erasure probability grows almost linearly with the channel erasure probability, because at this stage a considerable fraction of the variable bits remains erased. But when the channel erasure probability


Figure 3.3.1. Performance comparison of LDPC vs. no LDPC decoding in the BEC for code word length 512.

increases beyond 0.5, the erasure probabilities with and without iterative LDPC decoding both become linear. Thus, up to the threshold erasure probability, iterative LDPC decoding outperforms the uncoded system, while beyond that threshold both behave in the same manner.

In order to expose the performance difference between codes of different code word block lengths, we simulated code word lengths 256 and 512 over the BEC. The LDPC code with block length 512 outperforms the length-256 code, but again this advantage holds only up to a threshold, after which codes of different block lengths behave in the same fashion. Similarly,


Figure 3.3.2. Performance comparison of different code word lengths.

if we increase the block length further, beyond a certain limit an increase in code word length no longer yields better performance. In our simulation results this threshold is around a channel erasure probability of 0.445, as shown in Figure 3.3.2.

3.4 LDPC Decoding for the Binary Symmetric Channel

The message-passing, or belief-propagation, decoding algorithm for the binary symmetric channel is extensively discussed in [7]. As for the binary erasure channel, the message-passing decoder operates in a number of rounds. The incoming messages at a check node, excluding the one from the corresponding variable node, are combined at the check node, and the resulting outgoing message is forwarded to that variable node. In a similar fashion, the incoming messages at a variable node, excluding the one from the corresponding check node, are processed, and the outgoing message is forwarded to that check node. Here, too, the independent-message assumption is made.

In round 0 of Gallager's algorithm A [7], using a message-passing decoder and a hard-decision rule, each variable node sends the value it received from the channel to its check nodes. After round 0, if all incoming messages except the one from the concerned check node have the same value, the variable node sends that value to the check node; otherwise, it sends its received channel value. A check node, in turn, sends the modulo-2 sum of all incoming messages except the message from the variable node to which the outgoing message is addressed.

Gallager's algorithm B [7] is similar to algorithm A but more flexible. In this algorithm, for a given edge degree and round there is a threshold such that if the variable node receives at least that many identical messages, excluding the corresponding check node, it sends that value to the check node; otherwise, it sends the bit value it received from the channel.
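One round of these two rules might be sketched as follows (an illustrative sketch only; the function names are hypothetical and a node's neighborhood is passed as a plain list of messages):

```python
def check_to_var(var_msgs):
    """Check-node rule of Gallager's algorithm A: to each neighbor, send the
    mod-2 sum of the messages from all the other variable nodes."""
    total = sum(var_msgs) % 2
    return [(total - m) % 2 for m in var_msgs]  # leave-one-out parity

def var_to_check(channel_bit, check_msgs):
    """Variable-node rule (after round 0): if all incoming messages except
    the destination's agree, send that value; otherwise send the channel bit."""
    out = []
    for j in range(len(check_msgs)):
        others = check_msgs[:j] + check_msgs[j + 1:]
        out.append(others[0] if len(set(others)) == 1 else channel_bit)
    return out

print(check_to_var([1, 0, 1]))     # [1, 0, 1]
print(var_to_check(1, [0, 0, 1]))  # [1, 1, 0]
```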


3.4.1 Soft-Decision Message-Passing Decoder for the Binary Symmetric Channel

This decoder is similar to the hard-decision decoder; however, the messages exchanged between the two sets of nodes are not hard decisions but beliefs calculated from the incoming messages.

At iteration zero, each variable node calculates the prior probability that its received bit is 1. Let Pi be the conditional probability that code bit i is 1 given the channel observation,

Pi = Pr(ci = 1 | yi)

The message from variable node i to check node j stating that the bit received by variable node i is 1 is given by qij(1) = Pi; equivalently, the message stating that the received bit is 0 is qij(0) = 1 − Pi. The check nodes then update their messages according to [1, 7] as follows.

The message from check node j to variable node i, expressing the belief that the bit in question is 0 given the incoming messages from all neighbors excluding variable node i itself, is given by

rji(0) = 1/2 + 1/2 ∏_{i′ ∈ Vj\i} (qi′j(0) − qi′j(1))

where Vj denotes the neighbors of check node j, i.e., the set of variable nodes connected to it. This is the probability that the incoming sequence contains an even number of 1s, and hence rji(0) is the


probability that the message sent by check node j to variable node i is 0. Similarly, the probability that the message sent from check node j to variable node i is 1 is

rji(1) = 1 − rji(0)

The variable nodes calculate their updated messages from the incoming check node messages, excluding the corresponding check node, as follows [1, 7]:

qij(0) = K0 ∏_{j′ ∈ Ci\j} rj′i(0)

and, likewise,

qij(1) = K1 ∏_{j′ ∈ Ci\j} rj′i(1)

where K0 = Kij(1 − Pi), K1 = Kij Pi, and Ci denotes the neighbors of variable node i. The constant Kij should be selected such that qij(0) + qij(1) = 1. The calculation of the update messages is illustrated in Figure 3.4.1.

This completes one iteration, and the variable nodes then make decisions on the received bits as follows [7]:

Qi(0) = K00 ∏_{j ∈ Ci} rji(0)

and

Qi(1) = K11 ∏_{j ∈ Ci} rji(1)

where K00 = Ki(1 − Pi) and K11 = Ki Pi.

Again, K00, K11, and Ki should be selected such that the sum of the


Figure 3.4.1. General message update: (a) from check node to variable node; (b) from variable node to check node.

calculated probabilities equals 1. The decision rule is then

ĉi = 0 if Qi(0) > Qi(1), and ĉi = 1 otherwise.

The maximum number of iterations is either specified explicitly, or the iterations are continued until the code word satisfies the parity-check equation HcT = 0. Because of the large number of multiplications there is a stability problem: intermediate results may become zero and lead to erroneous final results, so it is best to convert this algorithm into the log domain.
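The probability-domain check-node update above can be sketched as follows (illustrative only; the function name is hypothetical and `q_pairs` holds the (q(0), q(1)) messages from the neighbors other than the destination node):

```python
def check_update_p0(q_pairs):
    """Probability that check node j believes the target bit is 0:
    r(0) = 1/2 + 1/2 * prod over the other variable nodes of (q(0) - q(1))."""
    prod = 1.0
    for q0, q1 in q_pairs:
        prod *= (q0 - q1)
    return 0.5 + 0.5 * prod

# Two fairly reliable "bit is 0" beliefs from the other neighbors.
print(round(check_update_p0([(0.9, 0.1), (0.8, 0.2)]), 2))  # 0.74
```

The repeated products of numbers below 1 illustrate the underflow risk mentioned above, which is why the log-domain form of the next section is preferred in practice.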


3.5 Soft Decision Iterative Decoder for BPSK AWGN Channel

The soft-decision message-passing decoder is an approximation of the bitwise MAP decoder; in other words, it approximately computes the probability that a given bit is zero or one given the entire received vector. Let x = (x1, x2, · · · , xn) be the message bits. BPSK maps the signal from {0, 1} to ±1, where +1 represents 1 and −1 represents 0. After transmission over an AWGN channel, white Gaussian noise is added to the signal, and the received analogue vector r = (r1, r2, . . . , rn) takes values around −1 and +1. In a hard-decision decoder, the BPSK demodulator estimates the received binary vector y = (y1, y2, . . . , yn) from the signs of the elements of r:

yi = 1 if ri > 0, and yi = 0 otherwise.

Consider the probability that a received bit is zero given the entire received vector, Pr(ci = 0 | r). The decoder actually calculates the likelihood ratio

li = Pr(ci = 0 | r) / Pr(ci = 1 | r)

and the log-likelihood ratio can then be represented as LLR = loge li.


This soft-decision message-passing decoder is similar to the Gallager A algorithm in that it proceeds in iterations of two steps each: in the first step messages are passed from the variable nodes to the check nodes, and in the second step from the check nodes to the variable nodes.

If the bits are equally likely, the channel likelihood ratio for each bit is

li = Pr(ci = 0 | ri) / Pr(ci = 1 | ri) = e^((2/σ²) ri)

so the channel log-likelihood ratio is

yi = loge(li) = (2/σ²) ri

where σ² is the noise variance. The input to the decoder is thus yi, which is just a scaling of ri: yi is the natural logarithm of the ratio of the two probabilities. The channel log-likelihood ratio yi is assigned to the ith variable node as its initial value. It should be remembered that detection is performed bit-wise here. In every iteration, each node sends its best estimate of the log-likelihood ratio (LLR) to its neighboring nodes, and the messages exchanged between variable nodes and check nodes are assumed to be independent.

During the first iteration, each variable node of degree d sends its LLR yi to its d neighbors, and a check node of degree e receives LLRs from its e neighboring variable nodes, as shown in Figure 3.5.1. Next, each check node sums the incoming LLRs, excluding the one from the variable node


Figure 3.5.1. Message updates during the first iteration: (a) a variable node of degree d sends yi to its d neighboring check nodes; (b) a check node of degree e returns messages Ci1, . . . , Cie.

to which it will send the message, e.g.,

Ci1 = yi2 + yi3 + · · · + yie
Ci2 = yi1 + yi3 + · · · + yie

It should be kept in mind that the summation here represents the modulo-2 combination of the underlying bits, which for LLRs is evaluated with the tanh rule below. Similarly, the other check nodes calculate LLRs based on their corresponding neighboring variable nodes, as shown in Figure 3.5.1. These calculations can be simplified for implementation using algebraic rules. From Appendix A,

tanh(yx/2) = ∏_{i=1}^{n} tanh(yi/2)

Calculating yx directly would require many multiplications, but by taking


the logarithm we can convert multiplication into addition. However, since tanh can be both positive and negative, we cannot take the logarithm of this function directly; we must keep track of the sign and take the logarithm of the magnitude only:

log(tanh(|yx|/2)) = Σ_{i=1}^{n} log(tanh(|yi|/2))

So let us define

f(x) = log(tanh(|x|/2))

A plot of f(x) shows that it is its own inverse, so swapping the variables leaves the equation unchanged:

x = log(tanh(|f(x)|/2))

The magnitude of yx is therefore

|yx| = f( Σ_{i=1}^{n} f(yi) )

and the sign can be calculated as

sign(yx) = ∏_{i=1}^{n} sign(yi)
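This magnitude/sign rule can be sketched in code as follows (an illustrative sketch with hypothetical function names; note that |x| must be clamped in practice so the logarithm stays finite):

```python
import math

def f(x):
    """f(x) = log(tanh(|x|/2)), clamped so the logarithm stays finite."""
    x = min(max(abs(x), 1e-12), 30.0)
    return math.log(math.tanh(x / 2.0))

def check_node_llr(llrs):
    """Combine incoming LLRs at a check node with the tanh rule, split into
    a magnitude part (via f) and a sign part (product of the signs)."""
    mag = abs(f(sum(f(y) for y in llrs)))
    sgn = 1
    for y in llrs:
        sgn = -sgn if y < 0 else sgn
    return sgn * mag

# The combined reliability is below the weakest input; the sign is the
# product of the input signs.
print(round(check_node_llr([2.0, -3.0]), 3))  # -1.693
```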

These equations generalize directly to the soft message-passing decoder. In step (b) of the first iteration, the messages passed to the variable nodes are

mag(V1^1) = f(f(yi2) + f(yi3) + · · · + f(yie))
sign(V1^1) = sign(yi2) × sign(yi3) × · · · × sign(yie)

where V is the log-likelihood ratio calculated by a check node, e is the degree


Figure 3.5.2. Calculation of update messages at the variable nodes in iteration l: step (a) combines yi with the check messages V1(l−1), . . . , Vd(l−1) from step (b) of iteration l − 1.

of the check node, the superscript 1 indicates iteration 1, and the subscript 1 indicates the check node number. Similarly, the other check node messages in iteration 1 are calculated as

mag(V2^1) = f(f(yi1) + f(yi3) + · · · + f(yie))
sign(V2^1) = sign(yi1) × sign(yi3) × · · · × sign(yie)

and so on, up to the eth check node:

mag(Ve^1) = f(f(yi1) + f(yi2) + · · · + f(yi(e−1)))
sign(Ve^1) = sign(yi1) × sign(yi2) × · · · × sign(yi(e−1))

From now on, for any iteration l, the messages passed from the variable nodes to the check nodes are calculated as follows, as shown in Figure 3.5.2.


Iteration l, step (a):

U1^l = yi + V2^(l−1) + V3^(l−1) + · · · + Vd^(l−1)

where U denotes the log-likelihood ratio calculated by a variable node, d is the degree of the variable node, the subscript indicates the node number, and the superscript (l − 1) refers to the previous iteration. Similarly,

U2^l = yi + V1^(l−1) + V3^(l−1) + · · · + Vd^(l−1)

and so on, up to

Ud^l = yi + V1^(l−1) + V2^(l−1) + · · · + V(d−1)^(l−1)

For the second step of iteration l, the messages passed from the check nodes are calculated as

mag(V1^l) = f(f(U2^l) + f(U3^l) + · · · + f(Ue^l))
sign(V1^l) = sign(U2^l) × sign(U3^l) × · · · × sign(Ue^l)

and for the next check node

mag(V2^l) = f(f(U1^l) + f(U3^l) + · · · + f(Ue^l))
sign(V2^l) = sign(U1^l) × sign(U3^l) × · · · × sign(Ue^l)

and so on, up to

mag(Ve^l) = f(f(U1^l) + f(U2^l) + · · · + f(U(e−1)^l))
sign(Ve^l) = sign(U1^l) × sign(U2^l) × · · · × sign(U(e−1)^l)


Figure 3.5.3. System block diagram for the simulations: source (k-bit message) → encoder (n-bit code word) → BPSK modulator → channel (noise) → BPSK demodulator → decoder → sink (decoded k-bit message).

The final decision after iteration l is made at the variable node as

output(LLR) = yi + V1^l + V2^l + · · · + Vd^l

ĉi = 0 if output(LLR) > 0, and ĉi = 1 otherwise.

The final code word is ĉ = [ĉ1, ĉ2, . . . , ĉn]. The iterations are stopped either explicitly or by the condition HĉT = 0.
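The complete iteration loop described above might be sketched as follows (an illustrative sketch, not the thesis simulator; function names are hypothetical, the graph is given by the parity-check matrix H, and positive LLRs favor bit 0):

```python
import math

def f(x):
    # f(x) = log(tanh(|x|/2)), clamped so the result stays finite.
    x = min(max(abs(x), 1e-12), 30.0)
    return math.log(math.tanh(x / 2.0))

def decode_bp(H, y, max_iter=50):
    """Sum-product decoding of channel LLRs y on the Tanner graph of H;
    returns a hard-decision code word estimate."""
    m, n = len(H), len(y)
    Nv = [[j for j in range(m) if H[j][i]] for i in range(n)]  # checks of var i
    Nc = [[i for i in range(n) if H[j][i]] for j in range(m)]  # vars of check j
    V = [[0.0] * n for _ in range(m)]  # check-to-variable messages
    c = [0 if y[i] > 0 else 1 for i in range(n)]
    for _ in range(max_iter):
        # Step (a): variable -> check, U = y_i plus all other check messages.
        U = [[0.0] * n for _ in range(m)]
        for i in range(n):
            for j in Nv[i]:
                U[j][i] = y[i] + sum(V[j2][i] for j2 in Nv[i] if j2 != j)
        # Step (b): check -> variable via the magnitude/sign (tanh) rule.
        for j in range(m):
            for i in Nc[j]:
                others = [U[j][i2] for i2 in Nc[j] if i2 != i]
                sgn = 1 - 2 * (sum(u < 0 for u in others) % 2)
                V[j][i] = sgn * abs(f(sum(f(u) for u in others)))
        # Hard decision; stop early once every parity check is satisfied.
        c = [0 if y[i] + sum(V[j][i] for j in Nv[i]) > 0 else 1 for i in range(n)]
        if all(sum(c[i] for i in Nc[j]) % 2 == 0 for j in range(m)):
            break
    return c

H = [[1, 1, 0, 1, 0],
     [0, 1, 1, 0, 1]]
# All-zero word sent over BPSK/AWGN; one unreliable channel LLR.
print(decode_bp(H, [4.0, -0.5, 4.0, 4.0, 4.0]))  # [0, 0, 0, 0, 0]
```

A practical implementation would additionally clamp the LLRs and often use the min-sum approximation of the check-node rule for speed.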

3.5.1 Simulation Results and Conclusions for AWGN Channel

The block diagram of the communication system used for simulation, consisting of a BPSK modulator, an AWGN channel, and a BPSK demodulator, is shown in Figure 3.5.3. In our simulations we not only compared iterative LDPC decoding with the uncoded system, but also investigated the bit error rate for various code word block lengths. BPSK modulation is used such that +1 represents 1 and −1 represents 0. The all-zero code word is transmitted to avoid the encoding complexity. The received signal can be modeled as y = te + n, where te is the

Figure 3.5.4. Performance of different code word lengths in the AWGN channel (LDPC decoding with N = 204, 504, 1008, and uncoded BPSK); bit error probability vs. Eb/N0 (dB).

modulated transmitted signal and n is additive white Gaussian noise with zero mean and variance σ². The log-likelihood technique is applied to decode the received signal. From [8], the AWGN channel log-likelihood ratio can be calculated as

yi = (2/σ²) ri

where ri is the received value after the AWGN channel and yi is the log-likelihood ratio given as input to the decoder.

The simulation results in Figure 3.5.4 make it clear that the coded system outperforms the uncoded system, and they support the well-known statement that the bit error rate improves with larger code word block length, since an increase in code word length comprehensively enhances the performance of the system. Below 1.5 dB there is no appreciable difference in performance, but beyond that the code with code word length 1008 outperforms all the other systems by a large margin.

3.6 Simulation Results and Conclusions for Rayleigh Fading Channel

The uncorrelated Rayleigh fading channel described in Section 2.2.4 is considered for simulation. The received signal can be modeled as y = αte + n. Again, to avoid the encoding complexity of the LDPC code, we transmitted the all-zero code word and used BPSK modulation for simplicity. With complete channel knowledge at the receiver, the intrinsic information at a receiver variable node is given by [6, 8]

y0 = (2/σ²) y α

This intrinsic information is sent to the check nodes by the variable nodes in the first iteration, the messages are updated at the check nodes based on it, and a sequence of exchanges of these update messages then begins.

For the Rayleigh fading channel we simulated code word lengths 204 and 504 to illustrate the difference in performance. The simulation results in Figure 3.6.1 show that, again, the larger code word block length performs better. It should also be kept in mind that this channel performs worse than the AWGN channel, due to the additional distortion of the normalized Rayleigh fading factor. Up to an SNR of 6 dB both code word lengths have almost the same bit error rate, but beyond that there is a visible deviation in performance, and the code with block length 504 outperforms the code with block length 204


Figure 3.6.1. Performance of different code word lengths (LDPC decoding with N = 204 and N = 504) in the Rayleigh fading channel; bit error probability vs. Eb/N0 (dB).

comprehensively.

3.6.1 Simulation Results and Conclusions for Block Fading Rayleigh Channel

The intrinsic information is calculated as for the Rayleigh fading channel, but now each half of the code word block has its own normalized Rayleigh fading factor, i.e.,

y01 = (2/σ²) y1 α1
y02 = (2/σ²) y2 α2

As expected, the simulation results shown in Figure 3.6.2 indicate that the performance over the block-fading channel is degraded compared to the fast Rayleigh fading channel. This degradation arises because, if the


Figure 3.6.2. Performance of different code word lengths (N = 204 and N = 504) in the 2-block Rayleigh fading channel, compared with fast Rayleigh fading; bit error probability vs. Eb/N0 (dB).

value of the fading factor is bad, the whole half-block of the code word is in error, whereas in fast Rayleigh fading only a single bit is distorted. Also, for the block-fading channel there is no appreciable difference between the performance of code word block lengths 504 and 204 over the given range of SNR values, as shown by the simulation results in Figure 3.6.2.

Chapter 4

Digital Fountain Codes

4.1 Introduction

Digital fountain codes have promising performance over erasure channels, which are a suitable model for packet-switched networks. The first practical realization of fountain codes was introduced by M. Luby in [11] and was further improved in [14]. The Raptor code [14] is a class of digital fountain codes that can be used independently of the loss rate of the erasure channel and has near-optimal performance for every erasure channel [11]. Digital fountain codes are considered "rateless": unlike traditional block codes such as LDPC and RS codes, they do not have a fixed code rate; the rate is determined by the number of transmitted code word symbols required before the decoder is able to decode, and is thus not known a priori as it is for traditional fixed-rate block codes. A fountain encoder can generate as many code word symbols as needed to recover all the message bits, regardless of the channel conditions. Rateless codes can thus adapt to the channel conditions without channel knowledge at the transmitter. This property makes them very good candidates for relay networks [13], owing to their significantly low feedback overhead.

4.2 LT Coding

The Luby Transform (LT) code, introduced by M. Luby [11], is the first practical realization of the digital fountain concept. The number of code word symbols is arbitrary, i.e., encoding symbols can be generated on the fly, as few or as many as needed depending on the quality of the channel. The decoder can retrieve the original data from any set of received code word symbols that is only slightly larger than the original data [11]. Regardless of the statistics of the erasure events on the channel, we can keep sending encoded symbols until the decoder is able to recover the original data from them, so the LT code is optimal for any erasure channel. The encoding and decoding times are functions of the data length: the decoder can recover k symbols from any k + O(√k · ln²(k/δ)) encoding symbols with probability 1 − δ, using on average O(k · ln(k/δ)) symbol operations [11]. Because of the high decoding complexity of LT codes, the Raptor code was proposed as an extension of the LT code to achieve linearly growing complexity by using an appropriate pre-coding method, although the Raptor code also requires extra memory to store the pre-code output. The family of digital fountain codes has received much attention from designers and has been used in many applications at the transport layer.


4.2.1 LT Encoding Process

Each code word bit cn is generated from the message bit sequence s1, s2, s3, · · · , sk as follows:

1. Randomly choose a degree dn from a degree distribution ρ(d), where d = 1, 2, · · · , k and k is the total number of input bits.

2. Choose dn message bits uniformly at random from the k input bits.

3. Sum these dn bits bitwise, modulo 2, to obtain the value of one output bit cn.

The encoding process defines a bipartite check graph connecting the code word bits with the original message bits; a simple example is shown in Figure 4.2.1. Because the mean degree d is significantly smaller than the message length k, this graph is sparse.
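The three steps above can be sketched as follows (an illustrative sketch with a hypothetical function name and a toy degree distribution, not Luby's robust soliton distribution):

```python
import random

def lt_encode_symbol(s, degrees, weights):
    """Generate one LT code word bit from message bits s: draw a degree
    from the distribution (degrees with the given weights), pick that many
    distinct message positions at random, and XOR them.
    Returns (bit, chosen_positions)."""
    d = random.choices(degrees, weights=weights)[0]   # step 1
    idx = random.sample(range(len(s)), d)             # step 2
    bit = 0
    for i in idx:                                     # step 3: mod-2 sum
        bit ^= s[i]
    return bit, idx

random.seed(1)
s = [1, 0, 1, 1, 0]
# Toy distribution: degree 1 w.p. 0.2, degree 2 w.p. 0.5, degree 3 w.p. 0.3.
print(lt_encode_symbol(s, [1, 2, 3], [0.2, 0.5, 0.3]))
```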

The resulting code is an irregular low-density generator-matrix code, similar to LDPC codes, and can be written in terms of vectors and matrices, i.e.,

cT = G sT

where G is the generator matrix corresponding to the bipartite check graph, and s and c are the original message bit sequence and the code word bit sequence, respectively. The following matrix illustrates the LT encoding process corresponding to the bipartite graph in Figure 4.2.1.


Figure 4.2.1. Bipartite graph of input and output bits/symbols of the LT code:
input symbols S0–S3 (variable nodes) connected to output symbols C0–C4 (check
nodes).

    
cT = [c0 , c1 , c2 , c3 , c4 ]T = GsT , with

        | 0 0 1 0 |
        | 1 0 0 1 |
    G = | 0 1 0 1 | ,   s = [s0 , s1 , s2 , s3 ]T
        | 1 1 1 0 |
        | 0 1 1 0 |

Throughout, we consider BPSK modulation, so we use the words bits and
symbols interchangeably. The number of message bits connected to a single
codeword symbol is called the degree of that symbol and corresponds to the 1's
appearing in the corresponding row of G; here c0 has degree 1, for example. The
total number of edges equals the number of operations in the LT decoding process.


For the decoder to recover the original data from the transmitted bits, it must
know the degree distribution and the set of neighbors of each encoding symbol.
This information can be delivered to the decoder in many ways, depending on
the application; one way is to deliver the degree and the list of neighbor indices
explicitly to the decoder for each encoding symbol [11].

4.2.2 LT Decoder

The main purpose of the decoder is to recover s from cT = GsT . When the
generator matrix G is large, the decoding cost is much higher. However, decoding
a sparse-graph code over an erasure channel is an easy task using a special
belief-propagation (BP) method, in which a codeword bit has only two states:
either completely erased or completely correct. The LT decoding process can be
explained as follows:

1. Find a codeword bit cn connected to only one remaining message symbol sk .
If no such degree-one codeword bit is available, the decoder cannot recover any
further message symbols.

2. Set sk = cn .

3. Find all the codeword bits cn′ that are connected to sk and update them as
cn′ = cn′ ⊕ sk , where ⊕ is modulo-2 summation.

4. Remove the connections of sk from the generator matrix G.

5. Repeat steps 1 to 4 until all the message symbols are recovered or no
codeword bit can be found in step 1.

So initially the encoding symbols of degree one are released to cover their unique
neighbors. The covered input symbols which have not yet been processed are
called the ripple [11], so at this stage all covered input symbols are in the ripple.
At each step one input symbol is processed and removed as a neighbor from the
encoding symbols to which it is connected. Each such step may or may not cause
the ripple to grow, depending on whether the neighbors of the released encoding
symbols have been processed before. The process terminates when the ripple
becomes empty.

Figure 4.2.2. Tanner graphs of (a) an LDPC code and (b) an LT code.
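The peeling process described above can be sketched as follows. The toy graph and message bits are hypothetical, chosen only so that the example decodes completely; the function names are ours.

```python
def lt_decode(codewords, neighbor_lists, k):
    """Peeling decoder (BP over the BEC): codewords[n] is a received
    bit, neighbor_lists[n] the message-bit indices XORed into it."""
    vals = list(codewords)                   # mutable copy of received bits
    nbrs = [set(ns) for ns in neighbor_lists]
    message = [None] * k
    ripple = [n for n, ns in enumerate(nbrs) if len(ns) == 1]
    while ripple:
        n = ripple.pop()                     # step 1: a degree-1 codeword bit
        if not nbrs[n]:
            continue                         # all its edges already removed
        i = next(iter(nbrs[n]))
        message[i] = vals[n]                 # step 2: s_i = c_n
        for m, ns in enumerate(nbrs):        # steps 3-4: update and cut edges
            if i in ns:
                vals[m] ^= message[i]
                ns.discard(i)
                if len(ns) == 1:
                    ripple.append(m)         # a newly released codeword bit
    return message

# Hypothetical toy instance: 4 message bits, 5 received codeword bits
msg = [1, 0, 1, 1]
graph = [[2], [0, 3], [1, 3], [0, 1, 2], [1, 2]]
cw = []
for ns in graph:
    b = 0
    for i in ns:
        b ^= msg[i]
    cw.append(b)
decoded = lt_decode(cw, graph, len(msg))
```

If the ripple empties before all message symbols are set, the remaining entries of `message` stay `None`, corresponding to a decoding failure in step 1.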

4.2.3 Tanner Graph for LT Code

The Tanner graph of an LT code is similar to that of an LDPC code; however,
there is one difference. The Tanner graph of an LDPC code has only one set
of variable nodes, while that of an LT code contains two types of variable nodes:
information-bit variable nodes and encoded-bit variable nodes. The information
bits are not transmitted over the channel, while the encoded bits are. At the
receiver end, the information bits can be recovered from the encoded bits.
Tanner graphs of both LDPC and LT codes are shown in Figure 4.2.2.


4.3 Design of LT Degree Distribution

The degree distribution ρ(d) is the crucial part of LT code design. The
guidelines for the distribution design are:

• Since the encoding and decoding complexity both increase with the number
of edges in the graph, the critical quantity is the average degree of the
encoding symbols. The average degree of the encoding symbols should
therefore be kept as small as possible, since it corresponds to the number
of operations in the decoding process.

• As few codewords as possible should be required to recover the message
symbols, to obtain high spectral efficiency.

Every input symbol must have at least one edge for successful decoding. The
encoder selects the edges between encoding symbols and input symbols randomly;
the number of edges must be at least k ln(k) [10], where k is the number of input
symbols. If the number of encoding symbols received is close to k and decoding
is possible, then the average degree of each encoding symbol is ln(k) [10].

Ideally, the decoding process should run in such a way that exactly one encoding
symbol has degree 1 at each step: whenever this encoding symbol is processed,
exactly one new degree-1 encoding symbol should appear. This goal is achieved
in expectation by the ideal soliton distribution [11]:

ρ(d) = 1/k                 for d = 1
ρ(d) = 1/(d(d − 1))        for d = 2, 3, · · · , k


where k is the number of the input symbols.

However, just like other idealized constructions, this distribution works poorly
in practice. The degree-1 encoding symbols are very likely to disappear at some
point of the decoding process due to fluctuations around the expected value;
the ripple can vanish even at small variance. The intuition to solve this
problem is to increase the number of output symbols of degree 1, which requires
a slight modification of the ideal distribution.

4.3.1 The Robust Soliton Distribution

The ideal soliton distribution works poorly in practice because the expected
ripple size is 1, which is too small: the ripple can disappear at even a small
variance around its expected size and cause the decoding process to fail. The
robust soliton distribution increases the size of the ripple so that it survives
a larger variance. At the same time, to keep the number of encoding symbols
small, the ripple size should not be made too large, to avoid redundancy in
recovering the input symbols.

The robust soliton distribution uses two extra parameters, c and δ, and the
distribution is designed such that throughout the process the expected number
of degree-1 encoding symbols is

S = c · ln(k/δ) · √k

instead of one. The parameter δ is the probability that decoding fails after a
certain number K′ of encoding symbols has been received, and the parameter c
is a constant of order 1, although values smaller than 1 can give better results.
This design ensures that the probability that the ripple size deviates from its
expected value by more than ln(k/δ) · √k is at most δ, which can be achieved
with K′ = k + O(√k · ln²(k/δ)) encoding symbols [11]. A positive function τ (d)
is defined as

τ (d) = S/(kd)               for d = 1, 2, · · · , (k/S) − 1
τ (d) = (S/k) · ln(S/δ)      for d = k/S
τ (d) = 0                    for d > k/S

Then add the ideal soliton distribution ρ(·) to τ (·) and normalize to obtain the
robust soliton distribution µ(·):

µ(d) = (ρ(d) + τ (d)) / Z,

where Z = Σd (ρ(d) + τ (d)). The number of encoded symbols required at the
receiver to ensure successful decoding with probability at least 1 − δ is
K′ = kZ [10].
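A direct transcription of the construction of µ(·) might look as follows; the parameter values c = 0.1 and δ = 0.5 are illustrative choices of ours, not values used in the thesis simulations.

```python
from math import log, sqrt

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton distribution mu(d), d = 1..k, following Luby [11].
    Returns (mu, Z); mu is indexed so that mu[d] = mu(d) and mu[0] = 0."""
    S = c * log(k / delta) * sqrt(k)         # target ripple size
    rho = [0.0] * (k + 1)                    # ideal soliton distribution
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    tau = [0.0] * (k + 1)
    pivot = int(round(k / S))                # the spike position d = k/S
    for d in range(1, min(pivot, k + 1)):
        tau[d] = S / (k * d)
    if 1 <= pivot <= k:
        tau[pivot] = (S / k) * log(S / delta)
    Z = sum(rho) + sum(tau)                  # normalization constant
    mu = [(rho[d] + tau[d]) / Z for d in range(k + 1)]
    return mu, Z

mu, Z = robust_soliton(500)
# K' = k * Z encoding symbols suffice with probability at least 1 - delta
```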

The distribution τ (·) plays an important role in successful decoding. At the
beginning of the decoding process, τ (1) ensures a sufficient ripple size. Now
consider the decoding process at an intermediate stage and suppose that M input
symbols remain unprocessed. Since the ripple size decreases by one each time an
input symbol is processed, this loss needs to be compensated by a corresponding
increase. If the ripple size is S, then the chance that a released encoding symbol
adds one to the ripple is (M − S)/M, so an average increment of one in the ripple
size at this stage requires M/(M − S) encoding symbols. It can be verified from
[11] that when M input symbols remain unprocessed, the overall release rate is
dominated by the release rate of encoding symbols of degree i, where i = k/M.
Similarly, if the ripple size is to be maintained around S, then the fraction of
degree-i encoding symbols should be proportional to
ρ(i) + τ (i) [11], for i = 2, · · · , (k/S) − 1.

Luby in [11] explained how the spike in τ (d) at d = k/S ensures that every input
symbol is connected to an encoding symbol at least once. This corresponds to
releasing S ln(S/δ) encoding symbols to cover the last S unprocessed input
symbols. As this happens only once, the penalty in the number of encoding
symbols, which we want to keep as small as possible, is not large.

4.4 Simulations and Conclusions of LT Code on BEC

In this section, we present results for the LT code on the binary erasure channel
model explained in Section 2.2.1. We use blocks of 250 and 500 information bits
and study the behavior of the LT decoder for various erasure probabilities ε of
the channel. The distribution used in our analysis is the robust soliton
distribution. The decoder runs a belief propagation algorithm over the Tanner
graph, similar to LDPC codes. In this case, however, the encoder sends a fountain
of encoded bits and the decoder runs the belief propagation algorithm over a
certain number of them; if the original data is not fully recovered, the decoder
processes one more encoding symbol, continuing until it recovers the original
data.

From the simulation results shown in Figure 4.4.1, it is clear that as the erasure
probability increases, the decoder requires more and more encoded symbols to
recover the original data, because the number of erased bits also increases.

Figure 4.4.1. LT code in the binary erasure channel: rate of the LT code versus
erasure probability for information block lengths K = 250 and K = 500.

Figure 4.4.1 also shows how the rate of the LT code varies with the information
block length, comparing block lengths of 250 and 500 bits. Evidently, the LT
decoder can recover the original data at a higher code rate as the block length
of the information bits increases; in other words, the capacity gap shrinks with
larger block lengths. We can therefore conclude that larger codeword lengths give
better LT decoder performance. Moreover, the LT code recovers the original
information almost completely regardless of the channel quality.

LT codes over noisy channels, such as the binary symmetric channel (BSC) and
the additive white Gaussian noise (AWGN) channel, are comprehensively studied
in [12]. The simulation results of [12], both for the robust soliton distribution
and for distributions optimized for the binary erasure channel (BEC) in [14],
show the same behavior and are not impressive, due to high overhead and a high
word error rate. For both distributions, the WER and BER curves exhibit a
noticeable error floor; this error floor exists because certain input symbols are
connected to only a small number of encoding symbols and hence are always
unreliable.

From our simulation analysis and from [11, 12], we conclude that the LT code
performs very well on the BEC. But since the complexity grows as O(k ln(k))
[11], where k is the length of the information block, LT codes are not practical
for large block lengths. The error floor problem on noisy channels also makes
them an unfeasible choice. However, the complexity can be reduced to a linear
function of the number of information bits, i.e., O(k), in slightly suboptimal
rateless codes called Raptor codes [14].

4.5 Raptor Codes

The encoding and decoding complexity and the error floor exhibited by the LT
code have been addressed in [14] by concatenating a weakened LT code with an
outer code. The resulting codes are called Raptor codes; they have constant
per-symbol encoding and decoding cost as well as improved performance over
noisy channels [12, 14]. The encoding and decoding complexity of the LT code
is of order O(k ln(k)) because the average degree of the packets in the sparse
graph is ln(k); successful decoding requires k ln(k) edges in order to recover
all input symbols with high probability. The Raptor code instead uses, as its
inner code, an LT code with a much lower average degree. The consequence of
this lower average degree is that a fraction of the input symbols will not be
connected to the graph and hence cannot be recovered by the LT code. The
unrecovered input symbols can then be recovered by a traditional
erasure-correcting code used as an outer code. The LT code and the
erasure-correcting code must be designed jointly such that the outer code can
successfully recover the input symbols left by the LT code.

A landmark paper in the analysis of Raptor codes over noisy channels is [15].
In this paper, the author derives properties of capacity-achieving Raptor codes
over noisy channels and presents a code construction for such channels; as shown
in [15], there exist no universal Raptor codes for channels other than the BEC.
The Raptor code is an extension of the LT code, with an LDPC code concatenated
as an outer code, and the belief propagation algorithm runs over the Tanner
graph for both the LT and LDPC parts of the Raptor decoder. The most critical
part of Raptor code design is the degree distribution of the output symbols over
noisy channels such as the binary-input additive white Gaussian noise (BIAWGN)
channel and Rayleigh fading channels.

4.5.1 Tanner Graph and Construction of Raptor Code

A Raptor code is a concatenation of an outer code and an LT code, and can be
characterized by (k, C, µ), where µ is the output-symbol degree distribution and
C is the pre-code used as an outer code. First, k information bits are mapped
to an n-bit codeword of the pre-code C, where n is slightly greater than k;
these n bits are called the input symbols of the LT code, or the intermediate
symbols of the Raptor code. The LT code then uses the degree distribution µ to
generate a limitless stream of output symbols from the input/intermediate
symbols. The generation of output symbols is terminated by a feedback message
from the receiving end after successful decoding. The Tanner graph of the Raptor
code is shown in Figure 4.5.1.

Figure 4.5.1. Tanner graph of the Raptor code: information bits are first
pre-coded into input bits, which are then LT-coded into a stream of output bits.

In contrast to the LT code, the Raptor code requires storage for the intermediate
symbols; the space required is 1/R, where R is the rate of the pre-code [14].
The encoding and decoding cost of the Raptor code differs from that of the LT
code due to the presence of the pre-code. The decoding cost of the Raptor code
is the expected number of arithmetic operations required to recover the k input
symbols.

The LT code is a Raptor code with no pre-coding but a very sophisticated output
degree distribution designed to recover all the input symbols. At the other
extreme is the Raptor code with the simplest degree distribution but a
complicated pre-code, called the pre-code-only (PCO) Raptor code, whose
performance depends on the pre-code [14].
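The pre-code/LT concatenation can be sketched end to end. The single-parity pre-code below is a toy stand-in for the high-rate LDPC pre-code, and the uniform degree choice again stands in for a designed distribution; all names are ours.

```python
import random

def precode(bits):
    """Toy systematic pre-code: append one overall parity bit.
    A real Raptor code would use a high-rate LDPC code here."""
    parity = 0
    for b in bits:
        parity ^= b
    return bits + [parity]

def raptor_encode(info_bits, num_output, rng):
    """Pre-code the information bits, then LT-encode the result."""
    inter = precode(info_bits)               # k -> n intermediate symbols
    n = len(inter)
    out = []
    for _ in range(num_output):
        d = rng.randint(1, n)                # placeholder degree choice
        nbrs = rng.sample(range(n), d)
        bit = 0
        for i in nbrs:
            bit ^= inter[i]
        out.append((bit, nbrs))
    return out

rng = random.Random(7)
stream = raptor_encode([1, 0, 1, 1, 0], 12, rng)
```

The decoder would first run LT decoding on the received stream to recover most intermediate symbols, then use the pre-code to fill in the rest.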

Graphical Representation of Raptor Code

The LT part of the Raptor code uses a degree distribution with a lower average
degree than a pure LT code, in order to reduce and maintain a constant cost of
the encoding and decoding process. As a result, some of the input symbols will
not be connected to the output symbols and hence cannot be recovered by the LT
part. If the average degree of the distribution of the LT part in the Raptor code
is d, then the expected fraction of unrecovered input symbols is f̃ = e^(−d) [10].
In the Raptor code [14], the k information bits are first encoded into
n = k/(1 − f̃) packets with an outer code that can correct erasures if the erasure
probability of the channel is f̃. The weak LT code is then used to transmit the n
bits; as soon as slightly more than n output symbols are received, the decoder
can recover the n intermediate symbols, and the outer code is then used to
recover the original k information symbols.

Figure 4.5.2. Graphical representation of a Raptor code: 10 information bits are
pre-coded into 13 intermediate bits, two of which remain unattached; 11 encoded
bits are received.

The Raptor code can easily be explained by the schematic diagram in
Figure 4.5.2. In this example we use k = 10 information bits and a pre-code
that encodes them into n = 13 intermediate bits. The degree distribution used
in this example has an average degree of around 3. This simple degree
distribution fails to connect two of the intermediate bits to the output bits.
The LT decoder can recover the 11 connected intermediate bits from the
transmitted encoded bits, and the outer traditional erasure code then recovers
the original 10 information bits.


4.6 BP Algorithm of Raptor Code over Noisy Channels

The performance of the Raptor code on any channel is determined by the number
of bits required for successful decoding and the number of decoding attempts
made by the receiver. A decoder that attempts to decode each time it receives
a new bit from the channel is feasible on the BEC: there, the decoder stops on
a stopping set [14] and, as soon as it receives a new bit, resumes decoding
from the stopping set. This is not possible on noisy channels due to the higher
complexity of the decoder. On noisy channels, the receiver instead waits for a
finite number of symbols from the limitless stream sent by the transmitter and
then starts decoding. As the rate decreases, the decoding complexity increases
but the bit error probability is reduced.

Let yn = (y1 , y2 , · · · , yn ) be the message received over a noisy channel up to
time n, where yj is the output for transmitted bit xj . The channel log-likelihood
ratio (LLR) of bit xj is defined as

L0,j = ln [ P(xj = 0|yj ) / P(xj = 1|yj ) ]

The LLRs for the AWGN and Rayleigh fading channels can be calculated
according to [8]. Messages are first exchanged between the input and output
symbols of the LT part of the Raptor decoder and, upon convergence, belief
propagation is run on the LDPC pre-code part of the Raptor code. In round "0"
of the BP algorithm, the input symbols of the LT part send 0 to the output
symbols; thereafter, in each iteration, messages are passed from the output
symbols to the input symbols and vice versa. At iteration l, the message passed
from output symbol "j" to input symbol "i" is denoted L^(l)_{j→i}, the message
passed from input symbol "i" to output symbol "j" is denoted L^(l)_{i→j}, and
N(v) denotes the set of neighbors of a node v. The value of L^(l)_{j→i} can be
expressed as [15, 16]

L^(l)_{j→i} = 2 tanh⁻¹ [ tanh(L0,j /2) · Π_{i′∈N(j)\{i}} tanh( L^(l−1)_{i′→j} /2 ) ]

This equation means that output symbol "j" sends its message to input symbol
"i" by taking into account the channel LLR of "j" and the incoming messages
from all neighbors of "j" except the input symbol "i" to which the message is
sent.

Similarly, the message passed from input symbol "i" to output symbol "j" at
iteration l can be expressed as [15, 16]

L^(l)_{i→j} = Σ_{j′∈N(i)\{j}} L^(l)_{j′→i}

Again, this means that the message from input symbol "i" to output symbol "j"
is calculated from the messages of all neighbors of "i" except the output
symbol "j" to which the message is sent.

The iterations are stopped when a pre-defined maximum number is reached, and
at the end the LLR of input symbol "i" is calculated as

Li = Σ_{j∈N(i)} L^(l)_{j→i}

that is, the LLR of input symbol "i" is calculated by taking all the neighbors
of "i" into account. The LLRs computed for the input symbols then act as prior
LLRs for the input symbols of the pre-code decoder. The BP algorithm then runs
on the input and output symbols of the pre-code decoder for a pre-defined
maximum number of iterations. At the end of the decoding process over the
pre-code, the receiver makes a decision about each individual symbol/bit. If the
required bit error rate has not been achieved, the decoder collects more output
symbols and runs the decoding process again until the required bit error rate
is reached.
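The two message-update rules and the final LLR combination can be sketched for one iteration as follows; the node layout and numeric LLR values in the toy run are hypothetical.

```python
from math import tanh, atanh

def output_to_input(L0_j, incoming):
    """L_{j->i}: tanh rule at output symbol j, combining the channel
    LLR of j with messages L_{i'->j} from all neighbors except i."""
    prod = tanh(L0_j / 2.0)
    for L in incoming:
        prod *= tanh(L / 2.0)
    return 2.0 * atanh(prod)

def input_to_output(incoming):
    """L_{i->j}: plain sum at input symbol i over all neighbors
    except the destination output symbol j."""
    return sum(incoming)

def final_llr(incoming):
    """Decision LLR of input symbol i: sum over all its neighbors."""
    return sum(incoming)

# One hypothetical update: output symbol j with channel LLR 1.2 and one
# other incoming message 0.8, sent towards input symbol i
m = output_to_input(1.2, [0.8])
```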

4.7 Capacity Achieving Raptor Code for Noisy Channels

The performance of Raptor codes over binary-input memoryless symmetric
channels (BIMSCs) such as the AWGN channel was studied in [12] for the degree
distribution optimized for the BEC [14]. From the results of [12, 15], we can
conclude that the Raptor decoder requires slightly more output symbols than
input symbols to recover the original input symbols by the BP algorithm. It
should also be noted that as the noise variance increases, the Raptor code
requires more overhead to achieve the same performance. The comparison of LT
and Raptor codes in [12, 15] shows that the Raptor code outperforms the LT code,
since the LT code exhibits an error floor. The degree distribution optimized
for the BEC performs well on AWGN channels, but a degree distribution
specifically designed for these noisy channels enhances performance further;
this is thoroughly studied in [15] for large block lengths.

From [15], for a BIMSC C, we denote the capacity of the channel C by Cap(C)
and define E(C) as the expectation E(tanh(L0,j /2)), where the channel LLR L0,j
is regarded as a random variable. An important parameter Π(C) is then defined
as

Π(C) := Cap(C) / E(C)

A Raptor code for a noisy channel can be parameterized as (k, C, Ωk (x)). The
Raptor code is said to be capacity-achieving over a given noisy channel C if,
when the BP algorithm is run over k/Cap(C) + o(k) output symbols, the bit error
rate tends to zero as k approaches infinity. Important parameters for a
capacity-achieving Raptor code over a BIMSC are the fractions of output symbols
with degree one and two, i.e., Ω1 and Ω2 . From [15], for any BIMSC C, Ω1 and
Ω2 should fulfill the following conditions:

Ω1^(k) > 0 and lim_{k→∞} Ω1^(k) = 0,

and

Ω2^(k) > Π(C)/2 and lim_{k→∞} Ω2^(k) = Π(C)/2,

such that E(C) ≠ 0 and Cap(C) ≠ 0.
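As a concrete illustration (our own worked example, not from [15]), Π(C) can be evaluated in closed form for the binary symmetric channel: with crossover probability p, the channel LLR takes the values ±ln((1−p)/p), so E(C) = E[tanh(L0 /2)] = (1 − 2p)² and Cap(C) = 1 − h(p), where h is the binary entropy function.

```python
from math import log

def h2(p):
    """Binary entropy function in bits."""
    return -p * log(p, 2) - (1 - p) * log(1 - p, 2)

def pi_bsc(p):
    """Pi(C) = Cap(C) / E(C) for a BSC with crossover p, 0 < p < 1/2.
    E(C) = E[tanh(L0/2)] = (1 - 2p)^2 in closed form."""
    cap = 1.0 - h2(p)                 # capacity of the BSC
    e = (1.0 - 2.0 * p) ** 2          # expectation E(C)
    return cap / e

p = 0.11
pi = pi_bsc(p)
omega2_lower = pi / 2.0   # stability lower bound on the degree-2 fraction
```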

The above conditions are analogous to the stability condition of LDPC codes.
The stability condition of LDPC codes gives an upper bound on the fraction of
degree-2 variable nodes, such that if the fraction of degree-2 variable nodes
is below this bound, the BP algorithm can be completed successfully. The
stability condition of the Raptor code in [15], in contrast, states that if the
fraction of degree-2 output symbols is greater than the lower bound
Ω2 (C) = Π(C)/2, then the decoding process can be started successfully. The
lower bound Ω2 (C) in the stability condition of the Raptor code depends on
channel parameters such as the noise variance in the case of noisy channels
like the AWGN channel.

The author of [15] outlines the construction of Raptor codes for the BIAWGN
channel based on two assumptions. The first is that the factor graph of the
Raptor code is cycle-free, so the messages exchanged between input and output
symbols are independent; this assumption is justified when the Tanner graph of
the Raptor code is large. The second is that the probability density of the
messages sent from the input symbols is a mixture of symmetric Gaussian
distributions; this assumption is valid due to the irregular degree distribution
of the input symbols. The selection of the signal-to-noise ratio (SNR) is also
very critical in the design of the Raptor code. From the results in [16], when
designing a Raptor code for a noisy channel, the SNR has two bounds, SNR∗low
and SNR∗high , such that if the channel SNR is outside the interval
[SNR∗low , SNR∗high ], then a Raptor code either does not exist or, if it exists,
cannot achieve the capacity of the channel. It is also evident from the
simulation results in [16] that if the SNR is close to SNR∗low within the
interval, the designed Raptor code performs very close to capacity, whereas if
the SNR is close to SNR∗high within the interval, the gap between the curve
produced by the designed Raptor code and capacity becomes significant.

4.8 Simulations of Raptor code


4.8.1 Simulations for AWGN Channel

The system block diagram is shown in Figure 4.8.1. The channel under
consideration is the AWGN channel described in Section 2.2.3.

Figure 4.8.1. System block diagram of the Raptor code: source → pre-code
encoder → LT encoder → modulator → channel → demodulator; the LT decoder
iterates between encoded-bit and intermediate-bit message updates, the pre-code
decoder between variable-node and check-node updates, followed by a hard
decision at the destination.

The encoder part of the Raptor code consists of two parts, a pre-code encoder
and an LT encoder. Here we use an irregular rate-0.95 LDPC code as the
pre-code. Similarly, the decoder consists of an LT decoder and an LDPC decoder.
The message LLRs are first iterated to convergence by the LT decoder; upon
convergence, the LDPC decoder runs the BP algorithm to retrieve the original
information bits. From the rate-0.95 LDPC encoder we get 532 encoded bits, also
called intermediate bits, corresponding to 504 information bits. We use binary
phase shift keying (BPSK), which maps each encoded bit to a signal point. With
ideal channel state information, the channel log-likelihood ratio can be
expressed as [6, 8]:

L0 = ln [ P(x = 0|y) / P(x = 1|y) ] = (2/σ²) y
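A minimal sketch of computing these channel LLRs for BPSK over AWGN (mapping bit 0 to +1 and bit 1 to −1, with noise standard deviation σ) might look like the following; the function name is ours.

```python
import random

def awgn_llrs(code_bits, sigma, rng):
    """BPSK over AWGN: bit 0 -> +1, bit 1 -> -1, then compute
    L0 = (2 / sigma^2) * y for each received sample y."""
    llrs = []
    for b in code_bits:
        x = 1.0 if b == 0 else -1.0          # BPSK mapping
        y = x + rng.gauss(0.0, sigma)        # received sample
        llrs.append(2.0 * y / sigma ** 2)
    return llrs

rng = random.Random(3)
L = awgn_llrs([0, 1, 0, 0, 1], sigma=0.977, rng=rng)
```

With this sign convention, a positive LLR favors the decision x = 0, matching the definition of L0 above.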


Figure 4.8.2. Performance of the Raptor code in AWGN: bit error probability
versus 1/R (delay) for Eb /N0 = 1, 2 and 2.5 dB.

In our simulation, the Raptor decoder waits for a finite number of output
symbols, in our case 700, and then calculates the channel LLR for each
bit/symbol. The LT part of the Raptor decoder runs the BP algorithm on these
output symbols and, after 40 iterations, produces the LLRs of the input symbols.
These LLRs of the input/intermediate symbols act as prior LLRs for the LDPC
decoder, which in turn runs for 30 iterations; finally, we make a decision at
the end of the BP algorithm. In the next decoding attempt, the decoder collects
100 more output symbols and repeats the whole decoding process. The degree
distribution used in the LT part is optimized for σ = 0.977 and given in [15].
The simulation results are shown in Figure 4.8.2. The graph plots bit error
probability versus the inverse of the rate (1/R), which in some sense can be
interpreted as delay: the decoder waits until it has received output symbols
equal to 1/R times the number of information bits, and then starts a decoding
attempt. Down to around rate 1/2, the BER curve falls steeply, as in the case
of the LT code. After that the BER keeps decreasing, but with a somewhat lower
slope. This change in slope occurs because the input bits obtained from the LT
part are not yet reliable enough for the outer code to reduce the BER sharply.
When the code rate is lowered further, the input bits of the LT part are
connected to more output bits and hence become reliable enough for the outer
code to enhance performance. It is also straightforward that an increase in
Eb /N0 enhances performance, so the same bit error rate can be achieved at a
higher rate for a higher value of Eb /N0 .

4.8.2 Simulations for Rayleigh Fading Channel

For the Rayleigh fading channel, the system block diagram is the same as for
the AWGN channel, shown in Figure 4.8.1. An irregular LDPC code with rate 0.95
is used as pre-code, concatenated with an LT code whose degree distribution is
taken from [15]. Again, 504 information bits are fed to the LDPC code, which
converts them into 532 intermediate bits. These intermediate bits are then
encoded by the LT encoder, and BPSK modulation is used to map the output bits
of the LT encoder to output symbols. The channel intrinsic log-likelihood
information, when channel state information is available at the receiver, is
given as [8]

L0 = ln [ P(x = 0|y) / P(x = 1|y) ] = (2/σ²) y α.

The focus of our simulation is to investigate the error rate performance of the
Raptor code for various values of the noise variance in an uncorrelated Rayleigh
fading channel. The simulation results in Figure 4.8.3 demonstrate that
increasing the noise variance degrades the performance of the Raptor code.

Figure 4.8.3. Performance of the Raptor code in a Rayleigh fading channel for
K = 504: bit error probability versus 1/R (delay) for Eb /N0 = 1, 2, 3 and 4 dB.

The reason for the slope of the BER curve is the same as for the AWGN channel:
for better performance the reliability of the input symbols/bits must be higher,
or in other words they should be connected to more output symbols/bits. It is
also clear from the simulation results in Figure 4.8.3 that increasing Eb /N0
improves the bit error rate. It should also be noted that at lower rates the
Raptor code gives better error rate performance, i.e., lowering the efficiency
of the Raptor code increases its reliability.

4.8.3 Simulations for 2-Block Fading Channel

We assume perfect channel knowledge at the receiver and a two-block fading
channel as discussed in Section 2.2.5. BPSK modulation is used to map the
transmitted bits to signals. The two blocks have i.i.d. normalized Rayleigh
gains α1 and α2 , such that the first half of the transmitted symbols
experiences channel gain α1 and the second half α2 , respectively. The received
symbol model is given by

yi = αj · ti + ni ,   j = 1, 2,

where transmission continues for i = 1, 2, . . . until the original k
information symbols are recovered.

Figure 4.8.4. Performance of the Raptor code for K = 504 in a 2-block fading
channel: bit error probability versus 1/R (delay) for Eb /N0 = 5, 7, 9, 11 and
13 dB.

Again we consider 504 information bits, which are converted into 532 encoded
bits, also called intermediate bits, by an LDPC code with rate 0.95 used as an
outer code. Initially we send slightly more encoded symbols than information
bits; if the required BER is not reached, we send 100 more and repeat this
process until the BER is close to the pre-defined value.
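The received-symbol model can be sketched as follows; the Rayleigh block gains are generated under the normalization E[α²] = 1 (so α² is exponential with mean 1), and the block length and σ are arbitrary illustrative values of ours.

```python
import random
from math import sqrt

def two_block_fading(tx, sigma, rng):
    """y_i = alpha_j * t_i + n_i, with j = 1 for the first half of the
    block and j = 2 for the second half; alpha_j are i.i.d. Rayleigh
    gains normalized so that E[alpha^2] = 1."""
    alpha = [sqrt(rng.expovariate(1.0)) for _ in range(2)]
    half = len(tx) // 2
    y = []
    for i, t in enumerate(tx):
        a = alpha[0] if i < half else alpha[1]   # block gain
        y.append(a * t + rng.gauss(0.0, sigma))  # add AWGN
    return y, alpha

rng = random.Random(11)
tx = [1.0, -1.0, 1.0, 1.0, -1.0, -1.0]     # BPSK symbols
y, (a1, a2) = two_block_fading(tx, sigma=0.5, rng=rng)
```

With perfect channel knowledge, the receiver would use the realized gains a1 and a2 when computing the channel LLRs for each half of the block.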

The simulation results are shown in Figure 4.8.4. Just as for the AWGN and
fast Rayleigh fading channels, as the decoder collects more and more bits and
runs the BP algorithm, the BER decreases progressively. At some intermediate
rates the slope of the BER curve is not very sharp, due to the error floor in
the LT part. But as the rate is lowered further, the BER curve improves
considerably due to the increased reliability of the input bits, i.e., the
input bits are now connected to more output bits. It should be kept in mind
that we could achieve even better Raptor code performance for larger codeword
lengths, as in [12, 15]. It should also be noted that as Eb /N0 increases, more
and more coding gain is achieved. We could obtain even better results by
optimizing the degree distribution for each Eb /N0 value and specifically for
block fading channels.

Chapter 5

Simulations Results

5.1 Introduction

A code designed to work well in a fast-fading channel may not behave well in a block-fading channel. Since the block-fading channel is nonergodic [17], we cannot use the channel capacity as we did in the case of the fast-fading channel; as an information-theoretic rate limit we instead use the outage probability [18]. In [19], a framework of rateless coding for block-fading channels is given. The results of [19] demonstrate the efficiency, reliability and robustness of Raptor coding over slow fading channels when no channel state information is available at the transmitter. In [19] the Raptor code was studied in a block Rayleigh fading channel, but the blocks were assumed to take independent values, so no correlation was present. In this chapter, besides comparing the half-rate regular (3,6) LDPC code and the Raptor code over various noisy channels, we investigate the Raptor code over uncorrelated and correlated block Rayleigh fading channels for short block lengths. The main emphasis of this chapter is to compare the error-rate performance of the Raptor code with a fixed-rate standard (3,6) regular LDPC code and with HARQ systems using punctured LDPC codes over uncorrelated and correlated slowly fading channels, which can be modeled with the block-fading channel model.
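Since the block-fading channel is nonergodic, the outage probability replaces capacity as the rate limit. A Monte Carlo sketch for the 2-block case, assuming normalized Rayleigh blocks and Gaussian-input capacity per block (the exact limit used in [18, 19] may be defined differently):

```python
import numpy as np

def outage_prob_2block(rate, snr_db, trials=200_000, seed=1):
    """Estimate Pr[ (1/2) * sum_j log2(1 + |a_j|^2 * SNR) < R ] for two
    independent, unit-power Rayleigh fading blocks a_1, a_2."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    # |a|^2 is exponentially distributed with mean 1 for normalized Rayleigh
    g1 = rng.exponential(1.0, trials)
    g2 = rng.exponential(1.0, trials)
    inst_rate = 0.5 * (np.log2(1 + g1 * snr) + np.log2(1 + g2 * snr))
    return float(np.mean(inst_rate < rate))

# A lower target rate means a lower outage probability at the same SNR,
# which is one reason low-rate codes do well in this channel.
p_low = outage_prob_2block(rate=0.5, snr_db=8.0)
p_high = outage_prob_2block(rate=2.0, snr_db=8.0)
```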

We present some simulation results for the Raptor codes, using the system setup shown in Figure 4.8.1. For all simulations except the binary erasure channel, an irregular LDPC code of rate 0.95 is used as precode, which converts 504 information bits to 532 intermediate bits. These intermediate bits are then encoded by the LT encoder, and BPSK modulation is used to transmit the encoded bits to the receiver over the various channels described in Chapter 2, at different energy-per-bit-to-noise ratios, i.e., Eb/N0 values, where Eb is the energy of an encoded bit. The LT code is decoded first, followed by the precode. We decode each codeword with the belief propagation algorithm, using 40 iterations for the LT decoder and 30 iterations for the LDPC precode. We then count the number of bits in which the estimated codeword differs from the transmitted one.

Our simulations for fading channels use the following two degree distributions, optimized for the binary erasure and AWGN channels, respectively. The degree distribution optimized for the BEC, from [14], is

Ω_BEC(x) = 0.008x + 0.494x^2 + 0.166x^3 + 0.073x^4 + 0.083x^5 + 0.056x^8 + 0.037x^9 + 0.056x^19 + 0.025x^65 + 0.003x^66   (5.1.1)

and the degree distribution optimized for the AWGN channel, from [15], is

Ω_AWGN(x) = 0.006x + 0.492x^2 + 0.0339x^3 + 0.2403x^4 + 0.006x^5 + 0.096x^8 + 0.049x^14 + 0.018x^30 + 0.0356x^33 + 0.033x^200   (5.1.2)
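Each coefficient in Ω(x) is the probability of drawing the corresponding output degree. A sketch of how an LT encoder would sample from the BEC-optimized distribution in Equation 5.1.1 and form one output bit (illustrative only; the real encoder/decoder chain is more involved):

```python
import numpy as np

# Degrees and probabilities of Omega_BEC(x) from Equation (5.1.1)
degrees = np.array([1, 2, 3, 4, 5, 8, 9, 19, 65, 66])
probs = np.array([0.008, 0.494, 0.166, 0.073, 0.083,
                  0.056, 0.037, 0.056, 0.025, 0.003])
probs = probs / probs.sum()            # renormalize the rounded coefficients

rng = np.random.default_rng(2)

def lt_output_bit(intermediate):
    """Draw a degree d ~ Omega, choose d distinct intermediate bits
    uniformly at random, and XOR them into one output bit."""
    d = rng.choice(degrees, p=probs)
    chosen = rng.choice(len(intermediate), size=d, replace=False)
    return int(np.bitwise_xor.reduce(intermediate[chosen]))

intermediate = rng.integers(0, 2, 532)  # 532 intermediate bits, as in the text
out = [lt_output_bit(intermediate) for _ in range(100)]
avg_degree = float(np.dot(degrees, probs))
```

The average degree of this distribution, Σ d·Ω_d ≈ 5.87, matches the value of about 5.90 quoted later for the distributions of [14].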

5.2 Simulations of Raptor Code vs LDPC in BEC

The system block diagram used for our simulation is shown in Figure 4.8.1. We focus our simulations on comparing the error-rate performance of the Raptor code and the half-rate standard (3,6) regular LDPC code over the same binary erasure channel described in Section 2.2.1. For simplicity we use BPSK modulation. A left-regular LDPC code of rate 0.90 is used as precode instead of the rate-0.95 LDPC code normally used for codes of larger block length. The reason for this lower-rate precode is that the fraction of input symbols not connected in the LT part is approximately e^(−d̄) [10] for large codeword lengths, where d̄ is the average degree of the distribution. For short codeword lengths, however, the fraction of unconnected input symbols fluctuates around e^(−d̄). In our simulations, an LT code with a degree distribution of average degree 3 is used. A degree distribution for the LT part could be taken from [14]; however, all the degree distributions available there have average degree around 5.90, which results in higher decoder complexity. We are simulating a code with 256 information bits, or more precisely, 285 intermediate bits/symbols after encoding with the precode. The expected number of unconnected input bits in the LT part when using a distribution of average degree 5.90 is about 285 · e^(−5.90) ≈ 0.78, which is not


[Figure 5.2.1. Performance comparison of Raptor and (3,6) half-rate LDPC code for 256 information bits in the BEC: throughput (code rate) versus channel erasure probability.]

a good choice for our codeword, since the average degree is so high that almost all of the bits would be recovered by the LT code alone, leaving no role for the precode. We therefore optimize the degree distribution, using [14] and the technique available in [10], for our specific simulations to an average degree of about 3, so that around 285 · e^(−3) ≈ 14 symbols are not recovered by the LT code and the decoder complexity is reduced significantly. By optimizing a good degree distribution we could achieve even better performance with lower decoding complexity.
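The expected number of unconnected input symbols quoted above follows from n·e^(−d̄); a two-line check of both figures, assuming, as in the text, n = 285 intermediate symbols:

```python
import math

n = 285                                   # intermediate symbols after the precode

# Expected unconnected inputs is roughly n * exp(-d_bar),
# where d_bar is the average output degree of the LT distribution
high_degree = n * math.exp(-5.90)         # distributions from [14]
low_degree = n * math.exp(-3.0)           # average degree 3, used here

print(round(high_degree, 2))              # ~0.78: almost nothing left for the precode
print(round(low_degree))                  # ~14: the precode has work to do
```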

In our simulation results, shown in Figure 5.2.1, we plot throughput versus the erasure probability of the channel. Throughput here means the average number of bits per channel realization, i.e., the rate of the code when all the information bits are successfully decoded. The half-rate standard (3,6) regular LDPC code has a constant code rate and hence a constant throughput. In this case a threshold is determined by the channel erasure probability,


[Figure 5.3.1. Performance comparison of Raptor and (3,6) half-rate LDPC code for 504 information bits in AWGN: bit error probability versus Eb/N0 (dB), for LDPC rate 0.50 and Raptor rates 0.63, 0.50 and 0.36.]

such that beyond this threshold erasure probability, the half-rate regular (3,6) LDPC code of this particular codeword length cannot recover all information bits with high probability. The Raptor code, in contrast, can recover the information bits at any channel erasure probability: at higher erasure probabilities we need more encoded bits to recover the original information bits, which lowers the throughput and code rate. We could obtain even better Raptor-code performance by proper optimization of the degree distribution for short block lengths. For larger codeword lengths, the Raptor code performs much better than in Figure 5.2.1 and hence comprehensively outperforms the LDPC code on the binary erasure channel.
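The shape of the rateless throughput curve in Figure 5.2.1 can be reproduced with an idealized model: keep sending encoded symbols until k of them survive erasure, giving throughput ≈ k/N. This sketch ignores the small reception overhead of a real LT/Raptor decoder, so it is an upper bound on the simulated curve:

```python
import numpy as np

def ideal_rateless_throughput(k, eps, trials=1000, seed=3):
    """Average of k / N, where N is the number of symbols sent until
    k non-erased symbols arrive over a BEC with erasure prob. eps."""
    rng = np.random.default_rng(seed)
    rates = []
    for _ in range(trials):
        received = sent = 0
        while received < k:
            sent += 1
            if rng.random() > eps:        # this symbol survives erasure
                received += 1
        rates.append(k / sent)
    return float(np.mean(rates))

# Throughput degrades gracefully, roughly as (1 - eps); a fixed
# half-rate code instead drops to zero past its erasure threshold.
r_mild = ideal_rateless_throughput(256, 0.10)
r_harsh = ideal_rateless_throughput(256, 0.40)
```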


5.3 Raptor code vs LDPC code in AWGN Channel

To gauge the relative performance of the Raptor code and the half-rate standard (3,6) regular LDPC code, simulation results are shown in Figure 5.3.1. When the rate of the Raptor code is higher than that of the LDPC code, the LDPC code performs better over the whole Eb/N0 range; but when the rate of the Raptor code drops below that of the LDPC code, the Raptor code performs better at lower Eb/N0. Here Eb is the energy of an encoded bit, for a fair comparison between the Raptor code and the LDPC code in ARQ schemes. One possible reason for this poor performance of the Raptor code in the AWGN channel is that the degree distribution used in our simulations was optimized for the binary erasure channel. The degree distribution optimized for the AWGN channel does not perform well in AWGN at short block lengths either (result not shown), because its derivation is based on the exchange of independent messages between nodes, an assumption not satisfied for small codeword lengths. The performance of the Raptor code could be further enhanced by designing a degree distribution for every value of Eb/N0, whereas our simulations used the degree distribution optimized for noise variance 0.977 in [15]. There is thus a need for degree optimization for short block lengths in the AWGN channel. Even with this degree distribution, however, the Raptor code offers a dynamic range of rates, which is very useful under fluctuating channel conditions compared to fixed-rate codes and schemes based on Automatic Repeat Request.

[Figure 5.4.1. Raptor code vs LDPC in Rayleigh fading channel for K=504: bit error probability versus Eb/N0 (dB), for LDPC rate 0.50 and Raptor rates 0.63, 0.50 and 0.36.]

5.4 Performance comparison of Raptor code and LDPC code in fast-fading Rayleigh Channel

The system block diagram shown in Figure 4.8.1 is used for the simulation setup. A comparison of the Raptor code and the standard (3,6) regular LDPC code, based on the degree distribution in Equation 5.1.1, is shown in Figure 5.4.1 for different values of Eb/N0 in the fast Rayleigh fading channel described in Section 2.2.4. The performance based on the degree distribution optimized for AWGN is not impressive in the fast Rayleigh fading channel. The reason is that the derivation of the optimized degree distribution for the AWGN channel in [15] assumed that all incoming messages arriving at a given node in the factor graph of the Raptor code are independent, but this condition is not fulfilled for short block lengths. From Figure 5.4.1 it is clear that a Raptor code with rate greater than that of the half-rate LDPC code has degraded performance, while a Raptor code with lower rate performs better. Note also that when the Raptor code is used as a fixed half-rate code, it suffers a slight performance degradation due to the short block length. However, the Raptor code offers a dynamic and flexible range of rates compared to the fixed half-rate standard (3,6) regular LDPC code, so we can conclude that in a fluctuating fast Rayleigh fading channel the Raptor code is a good choice compared to the fixed-rate standard (3,6) regular LDPC code.

[Figure 5.5.1. Raptor code vs regular (3,6) half-rate LDPC code for K=504 in 2-block fading channel, using the AWGN degree distribution: bit error probability versus Eb/N0 (dB), for LDPC rate 0.50 and Raptor rates 0.63, 0.50 and 0.38.]

[Figure 5.5.2. Raptor code vs regular (3,6) half-rate LDPC code for K=504 in 2-block fading channel, using the BEC degree distribution: bit error probability versus Eb/N0 (dB), for LDPC rate 0.50 and Raptor rates 0.63, 0.50, 0.42 and 0.36.]

5.5 Simulations and Conclusions of Raptor code for 2-Block Fading Channel

The simulation of the Raptor code for the 2-block fading channel, based on the degree distribution optimized for the AWGN channel, is shown in Figure 5.5.1. Figure 5.5.2 shows the simulation result for the Raptor code over the same channel with the degree distribution optimized for the BEC. Comparing the two figures, we can conclude that the degree distribution optimized for the BEC performs better than the one optimized for the AWGN channel; the reason for this is the short block length.

Our goal is to elaborate the difference in performance between the half-rate standard (3,6) regular LDPC code and the Raptor code. The Raptor code based on Equation 5.1.1


with rate greater than one half does not perform as well as the LDPC code, but with rate less than one half the Raptor code outperforms the LDPC code, which is quite straightforward. At rate one half the LDPC code performs better than the Raptor code, but the Raptor code's performance could be brought closer by designing a degree distribution for every Eb/N0 value, specifically for the block-fading channel and for short block lengths. It is also clear from Figure 5.5.2 that the BER decreases as Eb/N0 increases. Moreover, the half-rate standard (3,6) LDPC code exhibits an error floor in the block-fading channel [17], so in a varying block-fading channel the Raptor code is a good option compared to the fixed half-rate LDPC code. The difference in performance between the two degree distributions shows that the choice of degree distribution plays a critical role in the performance of the Raptor code.

5.6 Simulations and Conclusions of Raptor code for correlated Block Fading Channel

The system block diagram and simulation conditions remain the same as for the 2-block fading channel; the main difference is that the normalized block Rayleigh fading factors are no longer independent but correlated with some value. This correlated block-fading channel is similar to the channel discussed in Section 2.2.6. The simulation result in Figure 5.6.1 shows bit error probability against the inverse rate 1/R of the Raptor code, i.e., the delay. These curves are drawn for Eb/N0 = 8 dB. Clearly, as the correlation between the two fading blocks increases, the BER increases. For zero correlation the two blocks are independent, so if the information bits in one half of the codeword length are degraded by one block of the channel

[Figure 5.6.1. Performance of Raptor code for K=504 in 2-block correlated fading channel at Eb/N0 = 8 dB: bit error probability versus 1/R (delay), for correlation values 0, 0.25, 0.50, 0.75 and 1.]

they can be recovered from the other half of the codeword, because the output bits select the input bits at random; the system then behaves like a 2-diversity system. As the correlation increases, information bits degraded by one channel block in one half of the codeword also affect the information bits in the other half, and hence we get a higher bit error rate than with zero correlation. If the correlation becomes 1, both halves of the codeword experience the same block channel behavior and the system works as a single-diversity system. For correlation value 1 the bit error probability is the worst of all, which is confirmed by our simulation results in Figure 5.6.1. Note also that for small correlation values the difference in performance is more pronounced than at higher correlation values. This is because beyond a certain value the correlation effect is already so strong that it considerably degrades the

[Figure 5.6.2. Effect of correlation on fixed-rate Raptor codes: bit error probability versus Eb/N0 (dB), for rates 0.63 and 0.36 with ρ = 0.25 and 0.75.]

performance; further increases in correlation degrade the performance further, but not at the rate seen at lower correlation values. Another point to note is that at lower code rates the correlation has a more pronounced effect on performance than at higher rates, as shown in Figure 5.6.2. The diversity extracted from the block-fading channel depends on the rate of the code: a code transmitting at rate equal to or less than one half achieves diversity 2 from the block-fading channel, otherwise the diversity obtained is 1. The coding gain is also high for low-rate codes. Hence the Raptor code with rate less than one half performs better in the block-fading channel but degrades significantly with channel correlation, while this is not the case for the higher-rate Raptor code. We can therefore conclude that a Raptor code with rate greater than one half suffers less performance degradation from correlation than one with rate less than one half.
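Two normalized Rayleigh coefficients with a prescribed correlation ρ can be generated by mixing the underlying complex Gaussians; a sketch of one standard construction (the thesis does not specify its exact generation method, so this is an assumption):

```python
import numpy as np

def correlated_rayleigh_pair(rho, n, seed=4):
    """Return two length-n vectors of unit-power Rayleigh amplitudes whose
    underlying complex Gaussians have correlation coefficient rho."""
    rng = np.random.default_rng(seed)
    # Independent, unit-power complex Gaussians
    g1 = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    g2 = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    # Mixing h2 = rho*g1 + sqrt(1 - rho^2)*g2 preserves unit power
    h1 = g1
    h2 = rho * g1 + np.sqrt(1.0 - rho ** 2) * g2
    return np.abs(h1), np.abs(h2)

# rho = 0 gives independent blocks (2-diversity behavior);
# rho = 1 makes both halves see the same fade (single diversity).
a1, a2 = correlated_rayleigh_pair(rho=0.75, n=100_000)
```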

[Figure 5.6.3. Raptor code vs LDPC in 2-block correlated fading channel: bit error probability versus Eb/N0 (dB), for half-rate LDPC and Raptor rates 0.63 and 0.36 with ρ = 0 and 0.75.]

5.6.1 Raptor code vs LDPC in correlated Block Fading Channel

We compare the performance of the Raptor code, based on the degree distribution in Equation 5.1.1, with the half-rate standard (3,6) regular LDPC code in Figure 5.6.3. From the results of the previous section we know that channel correlation degrades the performance of the Raptor code at short block lengths, the degradation being severe for low-rate Raptor codes. It is clear from the simulation results in Figure 5.6.3 that the Raptor code with rate less than that of the half-rate (3,6) regular LDPC code outperforms the LDPC code in the uncorrelated 2-block fading channel, but once correlation is introduced between the fading blocks, the LDPC code outperforms the lower-rate Raptor code. Correlation between the fading blocks thus severely affects the performance of the Raptor code with rate less than that of the half-rate (3,6) regular LDPC code, while the degradation of the Raptor code with rate greater than the

[Figure 5.6.4. Performance comparison of Raptor code and punctured LDPC code (both rate 0.56) in 2-block correlated fading channel: bit error probability versus Eb/N0 (dB), for ρ = 0, 0.50 and 1.]

half-rate (3,6) regular LDPC code is comparatively small. The half-rate (3,6) regular LDPC code exhibits an error floor at higher Eb/N0 values in the block-fading channel, whereas the Raptor code has shown no error floor in the AWGN channel [12] or in fading channels [16] for larger block lengths, so it is expected to exhibit no error floor at short block lengths as well. In that situation the Raptor code is the best alternative in the correlated block-fading channel. Its performance could be further enhanced by optimizing a degree distribution for correlated block-fading channels at short block lengths. The Raptor code also provides a dynamic range of rates, unlike the fixed-rate LDPC code.

5.6.2 Raptor code vs punctured LDPC in correlated Block Fading Channel

A performance comparison of the fixed-rate Raptor code based on Equation 5.1.1 and a punctured LDPC code based on the heuristic search algorithm of [28], in the correlated block-fading channel, is shown in Figure 5.6.4. Again we note that correlation between the blocks of the block-fading channel affects the Raptor code more than the punctured LDPC code, especially at low correlation values and short block lengths. However, puncturing has a larger adverse impact when the mother code is of low rate than when it is of high rate, for the same punctured code rate, so we cannot achieve a very flexible range of rates with punctured LDPC codes. Punctured LDPC codes also exhibit an error floor in the high-Eb/N0 region in the block-fading channel. Therefore, the Raptor code provides a much larger dynamic range of rates for HARQ than punctured LDPC codes, and in this sense it is more robust. With a Raptor code we only need to encode as many parity bits as we need to send, whereas with punctured LDPC codes we must encode all parity bits even if only a small fraction of them is transmitted, depending on channel conditions.

Chapter 6

Conclusions

In this chapter we first briefly review the main contributions of the thesis and

then propose some directions for future research.

6.1 Conclusions

This thesis has contributed to the field of rateless coding in a number of ways. In particular, we have produced results applicable to short-block-length codes and their applications in wireless communications at the physical layer. The results apply to Raptor codes decoded with the iterative belief-propagation algorithm.

We simulated the Raptor code over fading channels using two different degree distributions. Based on the simulation results, we propose that for short block lengths the degree distribution optimized for the binary erasure channel works better for fading channels than the degree distribution optimized for the additive white Gaussian noise channel. The reason for the poor performance of the AWGN-optimized degree distribution in fading channels lies in the assumptions made when deriving that distribution. The first assumption was that the


messages exchanged between the nodes of the Raptor decoder are statistically independent. The second assumption was that the probability density of the message passed from an input symbol to an output symbol is a mixture of symmetric Gaussian distributions. For short-block-length codes these conditions cannot be justified.

We compared the performance of the Raptor code with the half-rate regular (3,6) LDPC code in the binary erasure channel and concluded from the simulations that even at short block lengths the Raptor code outperforms the half-rate standard (3,6) regular LDPC code. For the AWGN and fast Rayleigh fading channels, the Raptor code with rate greater than one half is outperformed by the LDPC code, while the Raptor code with rate less than one half performs better than the half-rate (3,6) regular LDPC code. From the comparison of the Raptor code with the half-rate LDPC code in the block-fading channel, we can conclude that the Raptor code with rate less than one half performs better than the regular half-rate (3,6) LDPC code. So in applications with fluctuating channel conditions, such as mobile and satellite data transmission, the Raptor code is more efficient and reliable than fixed-rate codes, even with a slight performance degradation at the same rate.

Transmission based on OFDM has constraints on interleaving depth due to processing delay and maximum packet size. The resulting limited interleaving between the sub-carriers of an OFDM system can be modelled as a correlated block-fading channel. In this thesis we have also investigated the performance over the correlated block-fading channel and concluded that correlation in the block-fading channel causes performance degradation. This degradation is less severe for higher-rate Raptor codes than for low-rate ones at short block lengths, because the diversity


offered by the block-fading channel depends on the rate of the code.

We also compared the performance of the Raptor code with the fixed-rate standard (3,6) regular LDPC code and with a punctured LDPC code at short block lengths in the correlated block-fading channel. The punctured LDPC code used the half-rate regular (3,6) LDPC code as mother code, and at rate 0.56 the punctured LDPC code outperforms the Raptor code of the same rate in the correlated block-fading channel. However, puncturing has a more severe impact when the mother code is of low rate than when it is of high rate; therefore, the punctured LDPC code has only a limited range of flexible rates above the rate of the mother code. The low or even absent feedback of the Raptor code makes it a good candidate for large frame-relay networks compared to the punctured LDPC code. Raptor codes therefore provide a wider dynamic range of rates than punctured LDPC codes for HARQ. In broadcast applications, where HARQ would require extensive feedback, the Raptor code will perform better than punctured LDPC codes. With a Raptor code we only need to encode as many parity bits as we need to send, whereas with punctured LDPC codes we must encode all parity bits even if only a small fraction of them is transmitted, depending on channel conditions. So the rateless code is a good candidate for HARQ schemes compared to fixed-rate and punctured LDPC codes in correlated fading channels, despite some performance degradation at short block lengths.

6.2 Future work

In continuation of this work, there are a number of problems that can be the subject of future research. Some possible directions are given below.

In this thesis we demonstrated that, for block-fading channels, the Raptor code works better with the degree distribution optimized for the binary erasure channel than with the one optimized for the AWGN channel. There is therefore a need to design a degree distribution optimized specifically for the block-fading channel and for short block lengths. We showed that correlation among the fading blocks in the correlated block-fading channel degrades performance; an analytical relationship is required to quantify the effect of this degradation. Since correlation causes degradation, modifications at the encoder could be proposed to compensate for it. Encoding and decoding complexities are high for Raptor codes, so in addition to [27] there is still a need to design a reduced-complexity decoder for Raptor codes.

Appendix A

Probability Calculations

Let X, Y and Z be binary random variables related by

X = Y + Z,

where + denotes modulo-2 addition. Define the probabilities that Y and Z are zero:

P_r(Y = 0) = P_y0,    P_r(Z = 0) = P_z0.

X is zero when Y and Z are both 0 or both 1, so this probability is

P_x0 = P_y0 P_z0 + (1 − P_y0)(1 − P_z0).

Similarly, the probability that X is one is

P_x1 = P_y0 (1 − P_z0) + (1 − P_y0) P_z0.


so the difference of probabilities becomes

(P_x0 − P_x1) = (P_y0 − P_y1)(P_z0 − P_z1),

where P_x1 is the probability that the binary random variable X takes the value 1.

If X is the sum of n terms, X = Y_1 + Y_2 + · · · + Y_n, let the probability that Y_i is zero be

P_r(Y_i = 0) = P_i0.

The difference of probabilities can then be calculated as

P_x0 − P_x1 = ∏_{i=1}^{n} (P_i0 − P_i1)

(P_x0 − P_x1) / (P_x0 + P_x1) = ∏_{i=1}^{n} (P_i0 − P_i1) / (P_i0 + P_i1)

(P_x0/P_x1 − 1) / (P_x0/P_x1 + 1) = ∏_{i=1}^{n} (P_i0/P_i1 − 1) / (P_i0/P_i1 + 1).

But by the definition of the likelihood ratios, l_x = P_x0/P_x1 = e^{y_x} and l_i = P_i0/P_i1 = e^{y_i}, so

(l_x − 1) / (l_x + 1) = ∏_{i=1}^{n} (l_i − 1) / (l_i + 1)

(e^{y_x} − 1) / (e^{y_x} + 1) = ∏_{i=1}^{n} (e^{y_i} − 1) / (e^{y_i} + 1)

(e^{y_x/2} − e^{−y_x/2}) / (e^{y_x/2} + e^{−y_x/2}) = ∏_{i=1}^{n} (e^{y_i/2} − e^{−y_i/2}) / (e^{y_i/2} + e^{−y_i/2})

tanh(y_x/2) = ∏_{i=1}^{n} tanh(y_i/2).
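The tanh rule derived above can be checked numerically for random likelihood values, using l_i = e^{y_i} so that P_{i0} − P_{i1} = (l_i − 1)/(l_i + 1):

```python
import numpy as np

rng = np.random.default_rng(6)
y = rng.normal(0.0, 2.0, 5)           # log-likelihood values y_i

# Probability-domain side: product of (P_i0 - P_i1) = (l_i - 1)/(l_i + 1)
l = np.exp(y)
prob_side = float(np.prod((l - 1.0) / (l + 1.0)))

# tanh-domain side of the identity
tanh_side = float(np.prod(np.tanh(y / 2.0)))

assert abs(prob_side - tanh_side) < 1e-12
```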
2

Bibliography

[1] B. M. J. Leiner, "LDPC Codes – a Brief Tutorial", April 2005.

[2] A. Shokrollahi, "LDPC Codes: An Introduction", Digital Fountain Inc., 39141 Civic Center Drive, Fremont, CA 94538, April 2, 2003.

[3] T. Richardson and R. Urbanke, "The Renaissance of Gallager's Low-Density Parity-Check Codes", IEEE Communications Magazine, August 2003.

[4] F. R. Kschischang, "Codes Defined on Graphs", IEEE Communications Magazine, August 2003.

[5] T. Richardson and R. Urbanke, "Modern Coding Theory", Cambridge University Press, 2006.

[6] N. F. Kiyani and J. H. Weber, "Analysis of Random Regular LDPC Codes on Rayleigh Fading Channels".

[7] R. G. Gallager, "Low-Density Parity-Check Codes", Cambridge, MA: MIT Press, 1963.

[8] J. Hou, P. H. Siegel, and L. B. Milstein, "Performance Analysis and Code Optimization of Low-Density Parity-Check Codes on Rayleigh Fading Channels", IEEE Journal on Selected Areas in Communications, no. 15, pp. 924–934, May 2001.

[9] T. J. Richardson and R. L. Urbanke, "The Capacity of Low-Density Parity-Check Codes under Message-Passing Decoding", IEEE Transactions on Information Theory, no. 47, pp. 599–618, February 2001.

[10] D. J. C. MacKay, "Fountain Codes", IEE Proceedings – Communications, vol. 152, no. 6, December 2005.

[11] M. Luby, "LT Codes", Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science, pp. 271–280, 2002.

[12] R. Palanki and J. S. Yedidia, "Rateless Coding on Noisy Channels", in Proc. IEEE Int. Symp. Inform. Theory, p. 37, 2004.

[13] J. Castura and Y. Mao, "Rateless Coding and Relay Networks", IEEE Signal Processing Magazine, pp. 27–35, September 2007.

[14] A. Shokrollahi, "Raptor Codes", IEEE Transactions on Information Theory, vol. 52, no. 6, June 2006.

[15] O. Etesami and A. Shokrollahi, "Raptor Codes on Binary Memoryless Symmetric Channels", IEEE Transactions on Information Theory, vol. 52, no. 6, June 2006.

[16] Z. Cheng, J. Castura and Y. Mao, "On the Design of Raptor Codes for Binary-Input Gaussian Channels", August 15, 2008.

[17] J. J. Boutros, A. Guillén, E. Biglieri, and G. Zémor, "Low-Density Parity-Check Codes for Nonergodic Block-Fading Channels", Draft, February 2, 2008.

[18] L. H. Ozarow, S. Shamai, and A. D. Wyner, "Information Theoretic Considerations for Cellular Mobile Radio", IEEE Transactions on Vehicular Technology, vol. 43, no. 2, pp. 359–378, May 1994.

[19] J. Castura and Y. Mao, "Rateless Coding over Fading Channels", IEEE Communications Letters, vol. 10, no. 1, January 2006.

[20] C. E. Shannon, "A Mathematical Theory of Communication", Bell System Technical Journal, vol. 27, pp. 379–423 and 623–656, July/October 1948.

[21] D. J. C. MacKay, "Good Error-Correcting Codes Based on Very Sparse Matrices", IEEE Transactions on Information Theory, vol. 45, pp. 399–431, March 1999.

[22] J. Byers, M. Luby, M. Mitzenmacher, and A. Rege, "A Digital Fountain Approach to Reliable Distribution of Bulk Data", in Proceedings of ACM SIGCOMM '98, 1998.

[23] A. Neubauer, J. Freudenberger, and V. Kühn, "Coding Theory: Algorithms, Architectures, and Applications", John Wiley and Sons Ltd, 2007.

[24] H. Schulze and C. Lüders, "Theory and Applications of OFDM and CDMA", John Wiley and Sons Ltd, 2005.

[25] "Technical Specification Group Services and System Aspects; Multimedia Broadcast/Multicast Services (MBMS); Protocols and Codecs (Release 6)", 3rd Generation Partnership Project (3GPP), Tech. Rep. 3GPP TS 26.346 V6.3.0, 2005.

[26] R. Hoshyar, S. H. Jamali, and A. R. Bahai, "Turbo Coding Performance in OFDM Packet Transmission", 51st IEEE Vehicular Technology Conference Proceedings, VTC 2000-Spring Tokyo, pp. 805–810, 2000.

[27] K. Hu, J. Castura and Y. Mao, "Reduced-Complexity Decoding of Raptor Codes over Fading Channels", IEEE Global Telecommunications Conference, GLOBECOM '06, 2006.

[28] S. F. Zaheer, S. A. Zummo, M. A. Landolsi and M. A. Kousa, "Improved Regular and Semi-Random Rate-Compatible Low-Density Parity-Check Codes with Short Block Lengths", IET Communications, vol. 2, no. 7, pp. 960–971, 2008.
