# Internship 1

The Viterbi Algorithm for AWGN and Poisson Channels

Submitted to

dr.ir. Frans M.J. Willems
Technische Universiteit Eindhoven
Department of Electrical Engineering

by

Daniel G. Taddesse
Broadband Telecommunication Technologies
Department of Electrical Engineering

November 2008

Abstract

The Viterbi algorithm is a powerful tool for improving bit-error-rate (BER) performance. While Viterbi decoding of data bits transmitted over an AWGN channel has been considered extensively in many works, Viterbi decoding of data bits transmitted over a Poisson channel has received little attention, even though it is essential for reliable communication over an optical channel. In this work, a Viterbi algorithm is implemented in MATLAB for both Poisson and AWGN channels. A rate one-half (5,7) convolutional encoder is used to encode a random data sequence. The encoded sequence is modulated and transmitted over the respective channel, and the Viterbi algorithm, which is a maximum-likelihood decoder, decodes the message sequence. The bit-error rate is then computed.

Contents

Abstract  i
List of Figures  iv

1 Introduction  1

2 Convolutional Codes, Maximum Likelihood Decoding and Viterbi Algorithm  2
  2.1 Encoder Structure  3
  2.2 Terms Related to Convolutional Codes  3
  2.3 Encoder Representations  4
    2.3.1 Generator Representation  4
    2.3.2 Tree Diagram Representation  4
    2.3.3 State Diagram Representation  5
    2.3.4 Trellis Diagram Representation  5
  2.4 Maximum Likelihood Decoding and Metrics  6
    2.4.1 Maximum Likelihood Decoding and Metrics for AWGN Channel  6
    2.4.2 Maximum Likelihood Decoding and Metrics for Poisson Channel  7
  2.5 The Viterbi Algorithm  8

3 Simulation Model  10
  3.1 Data Generator  10
  3.2 Convolutional Encoder  10
  3.3 Modulation  11
    3.3.1 Modulation for AWGN Channel  11
    3.3.2 Modulation for Poisson Channel  12
  3.4 Channel  12
    3.4.1 AWGN Channel  12
    3.4.2 Poisson Channel  13
      3.4.2.1 Generating a Poisson Random Variable  13
  3.5 Decoder  14
  3.6 The Performance Estimation Block  15


4 Result and Discussion  17
  4.1 Results for Poisson Channel  17
  4.2 Results for AWGN Channel  18

5 Summary  19

A MATLAB Functions  20
B MATLAB Script for Simulation of AWGN Channel  25
C MATLAB Script for Simulation of Poisson Channel  27

Bibliography  29
List of Figures

2.1 A simple convolutional encoding/Viterbi decoding system  2
2.2 A simple convolutional encoder  3
2.3 Tree diagram representation  5
2.4 State diagram representation  6
2.5 Trellis diagram representation  7
3.1 Block diagram for simulation  10
3.2 A (5,7) convolutional encoder  11
3.3 The AWGN Channel  12
3.4 Histogram of a set of Poisson random variables with parameter λ = 5  14
3.5 Flow chart for the Viterbi decoder  16
4.1 Simulation result for Viterbi decoding over Poisson channel  17
4.2 Simulation result for Viterbi decoding over AWGN channel  18

Chapter 1

Introduction

The Viterbi algorithm is an algorithm for decoding a convolutionally encoded data stream transmitted over a certain channel. The algorithm is based on the trellis structure of the code. It was developed by Andrew Viterbi in the 1960s as an error-correction scheme for noisy digital communication links [2]. In this study, convolutional encoding and the Viterbi decoding algorithm for transmission over a Poisson channel are implemented in MATLAB; the AWGN channel is also treated for comparison. In this report, I will first explain convolutional codes and the Viterbi algorithm in general. Next, the models for the Poisson and AWGN channels will be presented. Then plots of the simulation results are given and the results are discussed.

Chapter 2

Convolutional Codes, Maximum Likelihood Decoding and Viterbi Algorithm

In this chapter, I will provide the necessary background for convolutional codes and the Viterbi decoding algorithm. Convolutional codes were first introduced by Elias in 1955 and offer an alternative to block codes for transmission over a noisy channel [2]. Figure 2.1 shows a simple convolutional encoding/Viterbi decoding system.

Figure 2.1: A simple convolutional encoding/Viterbi decoding system

A convolutional encoder has "memory", in the sense that the output symbols depend not only on the current input symbols but also on previous inputs and/or outputs. In other words, the encoder is a finite state machine. Convolutional coding can be applied to a continuous input stream (which cannot be done with block codes), as well as to blocks of data. The encoder generates a coded output data stream from an input data stream.

2.1 Encoder Structure

A simple rate one-half (5,7) convolutional encoder is shown in Figure 2.2. It is composed of shift registers and a network of XOR (exclusive-OR) gates. The information bits are input into the shift registers, and the output encoded bits are obtained by modulo-2 addition of the input information bits and the contents of the shift registers.

Figure 2.2: A simple convolutional encoder

2.2 Terms Related to Convolutional Codes

Code rate (r): The code rate r for a convolutional code is defined as

r = k/n    (2.1)

where k is the number of parallel input information bits and n is the number of parallel output encoded bits at one time interval.

Constraint length (K): The constraint length K for a convolutional code is defined as

K = m + 1    (2.2)

where m is the maximum number of stages (memory size) in any shift register.

State: The state of the encoder is defined as the contents of the memory. The shift registers store the state information of the convolutional encoder, and the constraint length relates the number of bits upon which the output depends. For example, for the convolutional encoder shown in Figure 2.2, the code rate is r = 1/2, the maximum memory size is m = 2, and the constraint length is K = 3.

2.3 Encoder Representations

The encoder can be represented in several different but equivalent ways [1]. They are:

1. Generator Representation
2. Tree Diagram Representation
3. State Diagram Representation
4. Trellis Diagram Representation

2.3.1 Generator Representation

The generator representation shows the hardware connection of the shift register taps to the modulo-2 adders. A generator vector represents the positions of the taps for an output: a "1" represents a connection and a "0" represents no connection. For example, the two generator vectors for the encoder in Figure 2.2 are g1 = [101] and g2 = [111], where the subscripts 1 and 2 denote the corresponding output terminals.

2.3.2 Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder. Figure 2.3 shows the tree diagram for the encoder in Figure 2.2 for four input bit intervals. In the tree diagram, a solid line represents input information bit 0 and a dashed line represents input information bit 1. The corresponding output encoded bits are shown on the branches of the tree. An input information sequence defines a specific path through the tree diagram from left to right.

Figure 2.3: Tree diagram representation

2.3.3 State Diagram Representation

The state diagram shows the state information of a convolutional encoder, which is stored in the shift registers; in the state diagram, the state information of the encoder is shown in the circles. Each new input information bit causes a transition from one state to another. The path information between the states, denoted as x/c, represents input information bit x and output encoded bits c. It is customary to begin convolutional encoding from the all-zero state. Figure 2.4 shows the state diagram of the encoder in Figure 2.2.

Figure 2.4: State diagram representation

2.3.4 Trellis Diagram Representation

The trellis diagram is basically a redrawing of the state diagram. It shows all possible state transitions at each time step. Figure 2.5 shows the trellis diagram for the encoder in Figure 2.2.

Figure 2.5: Trellis diagram representation

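The representations above all describe the same finite state machine, which can be made concrete in code. The following Python sketch is an illustrative re-implementation (not the report's MATLAB code) of the (5,7) encoder using the generator vectors g1 = [101] and g2 = [111]:

```python
# Rate one-half (5,7) convolutional encoder, K = 3, m = 2.
# g1 = (1,0,1) (octal 5) and g2 = (1,1,1) (octal 7) give the tap connections.

def conv_encode(bits, g1=(1, 0, 1), g2=(1, 1, 1)):
    """Encode bits with a rate-1/2 convolutional code; registers start at zero."""
    m1, m2 = 0, 0  # shift register contents (the encoder state)
    out = []
    for u in bits:
        window = (u, m1, m2)  # current input followed by the register contents
        c1 = sum(w * g for w, g in zip(window, g1)) % 2  # modulo-2 addition
        c2 = sum(w * g for w, g in zip(window, g2)) % 2
        out += [c1, c2]
        m1, m2 = u, m1  # shift the new input bit in
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 0, 1, 0, 0, 1, 0]
```

For the input 1, 0, 1, 1 this produces the encoded sequence 11 01 00 10, which can be checked by walking the state diagram from the all-zero state.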
2.4 Maximum Likelihood Decoding and Metrics

The likelihood of a received sequence \bar{r} after transmission over a certain channel, given that a coded sequence \bar{c} is sent, is given by the conditional probability [2], [5]

p(\bar{r}|\bar{c}) = \prod_{i=1}^{N} p(r_i|c_i).    (2.3)

A maximum-likelihood decoder (MLD) selects the coded sequence that maximizes (2.3).

2.4.1 Maximum Likelihood Decoding and Metrics for AWGN Channel

For an AWGN channel with binary modulation, which is defined as a one-to-one mapping between the bits (0, 1) and the real numbers (−A, A), the likelihood is given by

p(\bar{r}|\bar{s}) = \prod_{i=1}^{N} \frac{1}{\sqrt{\pi N_0}} \exp\left(-\frac{1}{N_0}(r_i - s_i)^2\right)    (2.4)

where s_i denotes the binary modulated signal.

By taking the logarithm of Equation (2.4), the log-likelihood function is obtained:

\log p(\bar{r}|\bar{s}) = -\frac{1}{2} N \log(\pi N_0) - \frac{1}{N_0} \sum_{i=1}^{N} (r_i - s_i)^2.    (2.5)

Maximizing (2.3) is therefore, for the AWGN channel, equivalent to minimizing the sum of the squared Euclidean distances \sum_{i=1}^{N} (r_i - s_i)^2. Hence, for the AWGN channel, the metric that is minimized by the coded sequence selected by the MLD is the squared Euclidean distance

d = \sum_{i=1}^{N} (r_i - s_i)^2.    (2.6)

2.4.2 Maximum Likelihood Decoding and Metrics for Poisson Channel

Let the sequence of Poisson parameters λ_i represent the mean of the Poisson process, and let k_i be the channel count per unit time. The likelihood is given by

p(\bar{k}|\bar{\lambda}) = \prod_{i=1}^{N} \frac{e^{-\lambda_i} \lambda_i^{k_i}}{k_i!}.    (2.7)

The log-likelihood function is found by taking the logarithm of the likelihood function:

\log p(\bar{k}|\bar{\lambda}) = \log \prod_{i=1}^{N} \frac{e^{-\lambda_i} \lambda_i^{k_i}}{k_i!}.    (2.8)

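The equivalence between maximizing the AWGN log-likelihood (2.5) and minimizing the squared Euclidean distance (2.6) can be checked numerically. A small Python sketch (the received values and candidate sequences are arbitrary illustrative choices):

```python
import math

def log_likelihood(r, s, N0=1.0):
    """AWGN log-likelihood of received r given modulated s, as in Eq. (2.5)."""
    N = len(r)
    return (-0.5 * N * math.log(math.pi * N0)
            - (1.0 / N0) * sum((ri - si) ** 2 for ri, si in zip(r, s)))

def sq_euclidean(r, s):
    """Squared Euclidean distance metric, as in Eq. (2.6)."""
    return sum((ri - si) ** 2 for ri, si in zip(r, s))

r = [0.9, -1.2, 1.1]  # an arbitrary received sequence
candidates = [[1, -1, 1], [1, 1, -1], [-1, -1, -1]]  # BPSK candidates, A = 1
best_ml = max(candidates, key=lambda s: log_likelihood(r, s))
best_dist = min(candidates, key=lambda s: sq_euclidean(r, s))
assert best_ml == best_dist  # the two selection rules agree
print(best_ml)
```

Because the log is monotone and the first term of (2.5) does not depend on the candidate, the maximum-likelihood choice and the minimum-distance choice always coincide.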
Then, for the Poisson channel,

\log p(\bar{k}|\bar{\lambda}) = \sum_{i=1}^{N} \log \frac{e^{-\lambda_i} \lambda_i^{k_i}}{k_i!}    (2.9)

= \sum_{i=1}^{N} \left(-\lambda_i + k_i \log \lambda_i - \log(k_i!)\right).    (2.10)

A maximum-likelihood decoder (MLD) selects the coded sequence that maximizes (2.10). Since \log(k_i!) is the same for the branches entering a node, maximizing (2.10) is equivalent to maximizing \sum_{i=1}^{N} (-\lambda_i + k_i \log \lambda_i). Hence, for the Poisson channel, the metric that is maximized by the coded sequence selected by the MLD is

d = \sum_{i=1}^{N} (-\lambda_i + k_i \log \lambda_i).    (2.11)

2.5 The Viterbi Algorithm

The search through the trellis of a convolutional code for the maximum-likelihood decoding of the received sequence is a dynamic programming problem, and the standard approach to solving it in coding applications is known as the Viterbi algorithm [3]. The Viterbi algorithm operates by computing a metric, i.e. a measure of discrepancy, for every possible path in the trellis; in this study, the likelihood is used as the metric for both channel types. For each node in the trellis of the encoder, the algorithm compares the metrics of the two paths entering that particular node: the path with the higher metric is retained, and the other path is discarded. This computation is repeated for every level of the trellis. The paths that are retained by the algorithm are called survivor paths. The basic decoding steps of the Viterbi algorithm are:

1. Initialization: Set i = 0. Initialize the path metrics, survivor states and survivor bits to matrices of zeros.
2. Branch metric computation: At stage i, compute the branch metrics entering each node.
3. Add, compare and select (ACS): Add the branch metrics to the corresponding path metrics and, for each node, select the path with the larger metric.
4. Updating: Update the path metrics, survivor states and survivor bits.
5. Decoding: Decode the symbols.
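The Poisson metric (2.11) can be exercised numerically. A small Python sketch (the intensity values are arbitrary illustrative choices) showing that the per-symbol term −λ + k log λ favors the intensity closest to the observed photon count:

```python
import math

def poisson_metric(k, lam):
    """Per-symbol Poisson path-metric term -lambda + k*log(lambda), as in Eq. (2.11)."""
    return -lam + k * math.log(lam)

lam0, lam1 = 5.0, 10.0   # example intensities for bits 0 and 1
for k in (3, 12):        # two observed photon counts
    best = 0 if poisson_metric(k, lam0) > poisson_metric(k, lam1) else 1
    print(k, best)       # a low count favors lam0, a high count favors lam1
```

A count of 3 selects the bit mapped to λ0 = 5, while a count of 12 selects the bit mapped to λ1 = 10, as intuition suggests.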

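The five decoding steps can be sketched end to end in Python. This is an illustrative re-implementation, not the report's MATLAB code: it uses the (5,7) trellis tables and, for the AWGN case, accumulates squared-Euclidean branch metrics to be minimized (equivalent to maximizing the likelihood):

```python
# Trellis tables for the (5,7) encoder: state s = 2*m1 + m2, input u in {0, 1}.
NEXT = [[0, 2], [0, 2], [1, 3], [1, 3]]  # next state for (state, input)
OUT = [[(0, 0), (1, 1)], [(1, 1), (0, 0)],
       [(0, 1), (1, 0)], [(1, 0), (0, 1)]]  # branch outputs (c1, c2)

def viterbi(received, n_states=4):
    """Minimum-distance Viterbi decoding of BPSK symbols (A = 1) on the trellis."""
    INF = float("inf")
    pm = [0.0] + [INF] * (n_states - 1)  # 1. initialization: start in state 0
    survivors = []
    for t in range(0, len(received), 2):
        r1, r2 = received[t], received[t + 1]
        new_pm = [INF] * n_states
        new_sv = [None] * n_states
        for s in range(n_states):
            if pm[s] == INF:
                continue
            for u in (0, 1):
                c1, c2 = OUT[s][u]
                # 2. branch metric: squared distance to the branch symbols
                bm = (r1 - (2 * c1 - 1)) ** 2 + (r2 - (2 * c2 - 1)) ** 2
                ns = NEXT[s][u]
                # 3./4. add-compare-select, then update metrics and survivors
                if pm[s] + bm < new_pm[ns]:
                    new_pm[ns] = pm[s] + bm
                    new_sv[ns] = (s, u)
        pm, survivors = new_pm, survivors + [new_sv]
    # 5. decoding: trace back from the best final state
    state = min(range(n_states), key=lambda s: pm[s])
    bits = []
    for sv in reversed(survivors):
        prev, u = sv[state]
        bits.append(u)
        state = prev
    return bits[::-1]

# Noiseless check: the message 1,0,1,1 encodes to 11 01 00 10; map bits to +/-1.
coded = [1, 1, 0, 1, 0, 0, 1, 0]
print(viterbi([2 * b - 1 for b in coded]))  # -> [1, 0, 1, 1]
```

In the noiseless case the decoder recovers the original message exactly, since the all-correct path accumulates zero distance.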
Chapter 3

Simulation Model

In order to evaluate the performance of the Viterbi decoder, a simulation system had to be designed. MATLAB was selected for the implementation because of its ease of programming and also owing to my background. The simulation system layout was constructed as shown in Figure 3.1.

Figure 3.1: Block diagram for simulation

3.1 Data Generator

In this block, the random binary data packages were generated. The data length of each package was also set here, which decided the trellis length in the Viterbi decoder.

3.2 Convolutional Encoder

The convolutional encoder I used is shown in Figure 3.2. It is a rate one-half (5,7) convolutional encoder with constraint length K = 3. The trellis structure of this convolutional encoder is put in table form as:

nextstate = [0 2; 0 2; 1 3; 1 3];
c1 = [0 1; 1 0; 0 1; 1 0];
c2 = [0 1; 1 0; 1 0; 0 1];

Figure 3.2: A (5,7) convolutional encoder

The nextstate table shows the next states for the respective inputs, and c1 and c2 show the outputs of the trellis corresponding to those transitions. For example, the first row in nextstate tells that, for state 0, the next state is 0 for input 0 and 2 for input 1. In the same way, for state 0, if the input bit is 0 the outputs are c1 = 0 and c2 = 0, and if the input bit is 1 the outputs are c1 = 1 and c2 = 1. A MATLAB function then converts a message sequence into an encoded sequence based on these tables.

3.3 Modulation

Modulation was implemented for both the AWGN channel and the Poisson channel.

3.3.1 Modulation for AWGN Channel

For the AWGN channel, BPSK is used, which uses two phases separated by 180 degrees. So I used the mapping:

s(t) = −A for bit 0,
s(t) = +A for bit 1.

3.3.2 Modulation for Poisson Channel

For the Poisson channel, the bits are modulated to two different intensities, λ0 and λ1, using the mapping:

s(t) = λ0 for bit 0,
s(t) = λ1 for bit 1.

3.4 Channel

3.4.1 AWGN Channel

Figure 3.3: The AWGN Channel

The transmitted signal s(t) is disturbed by a simple additive white Gaussian noise process n(t), and the received signal r(t) is given by

r(t) = s(t) + n(t).    (3.1)

The signal-to-noise ratio is given by

SNR = A^2 / \sigma^2.    (3.2)

Assuming the noise variance to be one for the AWGN channel,

SNR = A^2.    (3.3)

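The BPSK mapping and the AWGN channel model above can be sketched in a few lines of Python (illustrative only; the amplitude A = 2 is an arbitrary example, and the report's own implementation is the MATLAB code in Appendix A):

```python
import math
import random

def modulate_bpsk(bits, A=1.0):
    """Map bit 0 -> -A and bit 1 -> +A, as in Section 3.3.1."""
    return [A if b == 1 else -A for b in bits]

def awgn_channel(symbols, sigma=1.0, rng=random):
    """r(t) = s(t) + n(t): add zero-mean Gaussian noise of std deviation sigma."""
    return [s + rng.gauss(0.0, sigma) for s in symbols]

A = 2.0
snr_db = 10 * math.log10(A ** 2)  # SNR in dB, assuming unit noise variance
received = awgn_channel(modulate_bpsk([1, 0, 1, 1], A))
print(round(snr_db, 2))  # -> 6.02
```

With unit noise variance, the SNR in decibels depends only on the signal amplitude, which is why the simulation sweeps SNR by varying A.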
The signal-to-noise ratio in decibels is then

SNR_{dB} = 10 \log_{10}(A^2).    (3.4)

3.4.2 Poisson Channel

For a Poisson channel, a certain photon count K is received for every transmitted intensity λ, and the photon counts received have a Poisson distribution with mean λ. So a Poisson random variable K has to be generated, which represents the received value at the input of the decoder.

3.4.2.1 Generating a Poisson Random Variable

The random variable X is Poisson with mean λ if

p_i = P(X = i) = e^{-\lambda} \frac{\lambda^i}{i!}, \quad i = 0, 1, \ldots    (3.5)

To generate random variables from the Poisson distribution, we use the following recursive relationship between successive Poisson probabilities [4]:

p_{i+1} = \frac{\lambda}{i+1} p_i, \quad i \geq 0.    (3.6)

The generation algorithm is:

1. Initialize the quantities: i = 0, p_0 = e^{-\lambda}, F_0 = p_0.
2. Generate a random number U.
3. If U \leq F_i, deliver X = i.
4. If not, increment: p_{i+1} = \frac{\lambda}{i+1} p_i, F_{i+1} = F_i + p_{i+1} (where the right-hand side is the new value of F), set i = i + 1, and go to Step 3.

What the above algorithm does is first generate a random number U and check whether or not U < e^{-\lambda} = p_0. If so, it sets X = 0. If not, it computes p_1 (in Step 4) by using the recursion (3.6) and checks whether U < p_0 + p_1, in which case it sets X = 1, and so on. Using this algorithm, a set of Poisson random variables with parameter λ = 5 was generated; a histogram of the data is shown in Figure 3.4.

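The four steps above amount to inverse-transform sampling via the recursion (3.6). A Python sketch mirroring the report's MATLAB channelpoisson.m (illustrative; the sample size is arbitrary):

```python
import math
import random

def poisson_variate(lam, rng=random):
    """Generate X ~ Poisson(lam) by inverse transform, p_{i+1} = lam/(i+1) * p_i."""
    u = rng.random()           # step 2: uniform random number U
    i = 0
    p = math.exp(-lam)         # step 1: p0 = e^{-lam}
    F = p                      # F0 = p0
    while u > F:               # step 3: deliver X = i once U <= F_i
        p = lam * p / (i + 1)  # step 4: recursion (3.6)
        F = F + p
        i += 1
    return i

random.seed(1)
samples = [poisson_variate(5.0) for _ in range(20000)]
print(round(sum(samples) / len(samples), 1))  # sample mean should be close to 5
```

Plotting a histogram of such samples reproduces the shape shown in Figure 3.4.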
As an example, for λ0 = 5 and λ1 = 10, a modulated sequence and the corresponding received sequence (which is actually the output of the Poisson channel) look as follows:

>> modulated'
ans =
    10     5    10     5     5    10     5     5     5    10
    10    10    10     5     5    10    10     5    10    10

>> k'
ans =
     5     2    11     3     4    11     4     3     3    15
     7    14    16     5     9     7    14     4    16     8

Figure 3.4: Histogram of a set of Poisson random variables with parameter λ = 5

3.5 Decoder

The flow chart for the Viterbi decoder is shown in Figure 3.5. The Viterbi decoding block works as follows:

• It first computes the two branch metrics for the respective channel entering each node, for each state.
• It then adds the branch metrics to the corresponding path metrics and selects, for each node, the path with the larger metric, updating the path metrics with the selected values. At the same time, the ACS block records the survivor states and survivor bits.
• When all the data bits have been received and the trellis has been completed, the last stage of the trellis is used as the starting point for tracing back. The trace-back and output-decoding block traces back along the maximum-likelihood path from the last state to the first state in the survivor-path memory and, as it does so, outputs the decoded data, generating the decoded output sequence.

Figure 3.5: Flow chart for the Viterbi decoder

3.6 The Performance Estimation Block

This block receives the decoded data and compares it with the source data. It then computes the performance, namely the bit-error rate (BER), for varying SNR values in the case of the AWGN channel, and for varying λ1 with fixed λ0 in the case of the Poisson channel.

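The performance estimation reduces to an XOR count over the two bit sequences. A minimal Python sketch (illustrative, with arbitrary example sequences):

```python
def bit_error_rate(source, decoded):
    """BER = (positions where decoded differs from source) / (total bits)."""
    errors = sum(s ^ d for s, d in zip(source, decoded))
    return errors / len(source)

print(bit_error_rate([1, 0, 1, 1, 0, 0, 1, 0],
                     [1, 1, 1, 1, 0, 0, 0, 0]))  # -> 0.25
```

Averaging this quantity over many simulated packages gives one point of the BER curves in Chapter 4.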
Chapter 4

Result and Discussion

4.1 Results for Poisson Channel

For the Poisson channel, λ0 is fixed and the bit-error rate (BER) is plotted for varying λ1. Figure 4.1 shows the simulation result.

Figure 4.1: Simulation result for Viterbi decoding over Poisson channel

As shown in Figure 4.1, the maximum bit-error rate of about 0.5 occurs at λ1 = 5 = λ0. This is expected because, if the two bits are both mapped to the same intensity, their corresponding received photon counts K will have the same distribution and hence the metrics for the two bits will be equal.

So the highest BER occurs when the two intensity levels are equal, in which case each bit has a probability of 0.5 of being detected.

4.2 Results for AWGN Channel

For the AWGN channel, the bit-error rate was plotted for varying SNR, and the result is shown in Figure 4.2. From the results, it can be seen that the BER decreases as the SNR increases.

Figure 4.2: Simulation result for Viterbi decoding over AWGN channel

This is because the higher the signal-to-noise ratio, the easier it is to detect which bit was transmitted over the noisy channel.

Chapter 5

Summary

Convolutional encoding and Viterbi decoding are error-correction techniques widely used in communication systems to improve the bit-error-rate (BER) performance. For a given convolutional code, decoding is done by a standard approach known as the Viterbi algorithm. The Viterbi algorithm operates by computing a metric for every possible path in the trellis of the convolutional code. For each node in the trellis, the algorithm compares the two paths entering that particular node; the path with the larger metric is retained, and the other path is discarded. This computation is repeated for every level of the trellis. The paths that are retained by the algorithm are called survivor paths.

For a Poisson channel, the maximum bit-error rate occurs when the two intensity levels to which the two bits are mapped are equal, and the BER decreases when the intensity levels are different. This is because, if the two bits are both mapped to the same intensity, their corresponding received photon counts K will have the same distribution and hence the metrics for the two bits will be equal, in which case each bit has a probability of 0.5 of being detected. For the AWGN channel, the BER decreases as the SNR increases: the higher the signal-to-noise ratio, the easier it is to detect which bit was transmitted over the noisy channel.

Appendix A

MATLAB Functions

datagen.m

```matlab
function [data] = datagen(flength)
%random data generator
data = floor(rand(flength,1) + 0.5);
end
```

convencoder.m

```matlab
function [encoded, stopstate] = convencoder(startstate, message)
%converts a binary input into a convolutional code
nextstate = [0 2; 0 2; 1 3; 1 3];
c1 = [0 1; 1 0; 0 1; 1 0];
c2 = [0 1; 1 0; 1 0; 0 1];
flength = length(message);
encoded = zeros(flength,2);
currentstate = startstate;
for i = 1:flength
    inputbit = message(i);
    encoded(i,1) = c1(currentstate+1, inputbit+1);
    encoded(i,2) = c2(currentstate+1, inputbit+1);
    currentstate = nextstate(currentstate+1, inputbit+1);
end
stopstate = currentstate;
end
```

modulateAWGN.m

```matlab
function [modulated] = modulateAWGN(encoded, A)
%modulator for AWGN channel
modulated = zeros(length(encoded), 2);
for t = 1:length(encoded)
    for j = 0:1
        if encoded(t,j+1) == 0
            modulated(t,j+1) = (-1)*A;
        else
            modulated(t,j+1) = A;
        end
    end
end
end
```

modulatepoisson.m

```matlab
function [modulated] = modulatepoisson(encoded, lambda0, lambda1, flength)
%modulator for Poisson channel
modulated = zeros(flength, 2);
for t = 1:flength
    for i = 0:1
        if encoded(t,i+1) == 0
            modulated(t,i+1) = lambda0;
        else
            modulated(t,i+1) = lambda1;
        end
    end
end
end
```

channelAWGN.m

```matlab
function received = channelAWGN(modulated, snr)
%adds white Gaussian noise
[r, c] = size(modulated);
noise = randn(r, c);
received = sqrt(snr)*(2*modulated - 1) + noise;
end
```

channelpoisson.m

```matlab
function k = channelpoisson(modulated, flength)
%Poisson channel - generates a Poisson random variable K for each intensity
%level transmitted
k = zeros(flength, 2);
for i = 1:length(modulated)
    for b = 0:1
        % initialize quantities
        u = rand;
        io = 0;
        p = exp(-modulated(i,b+1));
        F = p;
        flag = 1;
        while flag % generate the variate needed
            if u <= F % then accept
                flag = 0;
                k(i,b+1) = io;
            else % move to next probability
                p = modulated(i,b+1)*p/(io+1);
                F = F + p;
                io = io + 1;
            end
        end
    end
end
end
```

metriccompAWGN.m

```matlab
function [metrics1, metrics2] = metriccompAWGN(received, snr, flength)
%computes the metrics (likelihood) for AWGN channel
for t = 1:flength
    for i = 0:1
        ampl = sqrt(10^(snr/10))*(2*i - 1);
        metrics1(t,i+1) = exp(-(received(t,1) - ampl)^2);
        metrics2(t,i+1) = exp(-(received(t,2) - ampl)^2);
    end
end
end
```

metriccomppoisson.m

```matlab
function [metrics1, metrics2] = metriccomppoisson(k, flength, lambda0, lambda1)
%computes the metrics (likelihood) for Poisson channel
for t = 1:flength
    for i = 0:1
        if i == 0
            ampl = lambda0;
        else
            ampl = lambda1;
        end
        metrics1(t,i+1) = exp(-ampl)*ampl^k(t,1)/factorial(k(t,1));
        metrics2(t,i+1) = exp(-ampl)*ampl^k(t,2)/factorial(k(t,2));
    end
end
end
```

vitdecoder.m

```matlab
function [decoded] = vitdecoder(stopstate, metrics1, metrics2, flength)
%Viterbi decoder - decodes a convolutional code based on the trellis of the
%code
nextstate = [0 2; 0 2; 1 3; 1 3];
c1 = [0 1; 1 0; 0 1; 1 0];
c2 = [0 1; 1 0; 1 0; 0 1];
maxstate = 3;
decoded = zeros(flength, 1);
pathmetric = zeros(flength+1, maxstate+1);
survivorstate = zeros(flength+1, maxstate+1);
survivorbits = zeros(flength+1, maxstate+1);
pathmetric(1,1) = 1;
for t = 1:flength
    for s = 0:maxstate
        pathm = pathmetric(t, s+1);
        for i = 0:1
            code1 = c1(s+1, i+1);
            code2 = c2(s+1, i+1);
            nexts = nextstate(s+1, i+1);
            candm = pathm*metrics1(t, code1+1)*metrics2(t, code2+1);
            if candm > pathmetric(t+1, nexts+1)
                pathmetric(t+1, nexts+1) = candm;
                survivorstate(t+1, nexts+1) = s;
                survivorbits(t+1, nexts+1) = i;
            end
        end
    end
end
num = stopstate;
for t = flength:-1:1
    decoded(t) = survivorbits(t+1, num+1);
    num = survivorstate(t+1, num+1);
end
end
```

Appendix B

MATLAB Script for Simulation of AWGN Channel

```matlab
% Simulation of Viterbi decoding of a (5,7) convolutional
% code over an AWGN channel
clear all
nsim = 5000;
snr = -1:1:6;
A = zeros(1, length(snr));
ber = [];
startstate = 0;
for i = 1:length(snr)
    error = 0;
    A(i) = sqrt(10^(snr(i)/10));
    for Ns = 1:nsim
        message = datagen(100); % random bits
        flength = length(message);
        % Encode the message
        [encoded, stopstate] = convencoder(startstate, message);
        % modulation
        modulated = modulateAWGN(encoded, A(i));
        % AWGN
        received = modulated + randn(length(message), 2);
        % Viterbi decoding
        [metrics1, metrics2] = metriccompAWGN(received, snr(i), flength);
        decoded = vitdecoder(stopstate, metrics1, metrics2, flength);
        % Enumerate decoding information errors and add them
        error = error + sum(xor(message(1:flength), decoded(1:flength)));
    end % for Ns
    ber0 = error/(nsim*length(message));
    ber = [ber ber0];
end % for A
% and plot results
semilogy(snr, ber, '-b*')
xlabel('SNR'); ylabel('BER')
```

Appendix C

MATLAB Script for Simulation of Poisson Channel

```matlab
% Simulation of Viterbi decoding of a (5,7) convolutional
% code over a Poisson channel
clear all
nsim = 1000; % Number of simulations
ber = [];
startstate = 0;
for lambda1 = 1:11
    lambda0 = 5;
    error = 0;
    for Ns = 1:nsim
        % Encoding
        message = datagen(100); % random bits
        flength = length(message);
        [encoded, stopstate] = convencoder(startstate, message);
        modulated = modulatepoisson(encoded, lambda0, lambda1, flength);
        k = channelpoisson(modulated, flength);
        [metrics1, metrics2] = metriccomppoisson(k, flength, lambda0, lambda1);
        decoded = vitdecoder(stopstate, metrics1, metrics2, flength);
        % Enumerate decoding information errors and add them
        error = error + sum(xor(message(1:flength), decoded(1:flength)));
    end % for Ns
    ber0 = error/(nsim*length(message));
    ber = [ber ber0];
end % for lambda
% and plot results
lambda1 = 1:11;
semilogy(lambda1, ber, '-b*')
xlabel('lambda1'); ylabel('BER'); legend('Lambda0=5')
```

Bibliography

[1] Robert H. Morelos-Zaragoza. The Art of Error Correcting Coding. John Wiley and Sons, Second Edition, 2006. ISBN 0470015586.

[2] Robert J. McEliece. The Theory of Information and Coding. Encyclopedia of Mathematics and its Applications, Vol. 3, Gian-Carlo Rota (ed.), 1977.

[3] Simon Haykin and Michael Moher. Modern Wireless Communications. Prentice Hall, 2005. ISBN-10 0130224723.

[4] Sheldon Ross. Simulation. Second Edition. New York: Academic Press, 1997.

[5] W. Cary Huffman and Vera Pless. Fundamentals of Error Correcting Codes. Cambridge University Press, 2003. ISBN 0521782805.