Recurrent Neural Network Based Turbo Decoding Algorithms for
Different Code Rates

Shridhar B. Devamane1, Rajeshwari L. Itagi2

1 Assistant Professor, Department of ECE, APS College of Engineering, Bangalore, India, devamaneshridhar@gmail.com
2 Professor, Department of ECE, KLE Institute of Technology, Hubballi, India, rajeshwariitagi@kleit.ac.in

Abstract
The application of deep learning to error control coding is gaining special attention, and neural network architectures for decoding are being compared with conventional ones. Turbo codes conventionally use the BCJR algorithm for decoding. In this paper, the performance of a neural Turbo decoder and a deep learning-based Turbo decoder is examined. A category of sequential codes is used to construct the RSC (Recursive Systematic Convolutional) codes that form the basic elements of the Turbo encoder. Sequential codes suit the memory element present in convolutional codes, which act as components of the Turbo encoder. Turbo decoders are constructed in two ways: as a neural Turbo decoder and as a deep learning Turbo decoder. Both structures are based on recurrent neural network (RNN) architectures, which are preferred because memory is an inherent feature. The BER performance of both is compared with that of a convolutional Viterbi decoder over the AWGN channel, and both structures are studied for different input data lengths and code rates.

Keywords: Block error rate, signal to noise ratio, additive white Gaussian noise, recurrent neural network, deep learning, Turbo code.

1 Introduction

Error control coding enables reliable data communication over both wired (DSL modems, cable, Ethernet) and wireless (deep space, satellite, cellular) media. Computationally efficient and robust encoding and decoding techniques are essential to ensure data integrity under noisy conditions. This has been the discipline of coding theory, and significant progress in optimal code design has been witnessed over the past seven decades. Low density parity check (LDPC) codes, turbo codes and polar codes are some of the landmark codes, and the universal cellular standards, from the second generation up to the fifth generation, incorporate these codes. Research over this period has focused on capacity-approaching error correcting codes [C. Shannon, 1948]. Turbo codes, a class of high-performance forward error correction (FEC) codes, have gained considerable attention [Claude Berrou, Alain Glavieux and P. Thitimajshima, 1993].

Efficient optimal decoding of turbo codes is implemented with the BCJR algorithm [L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, 1974]. The application of artificial neural networks to decoder design was then proposed for the Viterbi decoder [X. A. Wang, S. B. Wicker, 1996]. Simulations of turbo BCJR decoding using log-MAP (maximum a posteriori probability) and max-log-MAP for the mobile WiMAX environment were verified by different researchers [Jagdish D. Kene, Kishor D. Kulat, 2012]. Simulations of turbo BCJR decoding using log-MAP and max-log-MAP for a mobile channel were verified on DSP hardware [Shridhar B. Devamane, Rajeshwari L. Itagi, 2018]. With the widespread use of artificial intelligence, researchers took up decoding algorithms implemented with machine learning [T. J. O'Shea, J. Hoydis, 2017]. The application of deep learning to the decoding of linear codes was proposed by [E. Nachmani, Y. Beery, D. Burshtein, 2016].

A deep learning simulation model of turbo encoding and decoding using supervised learning of an RNN architecture, with performance analysis, is carried out by [Raja Sattiraju, Andreas Weinand and Hans D. Schotten, 2018]. A deep learning based Turbo autoencoder for moderate block lengths and the low-SNR range, with performance analysis at code rate 1/3, is presented by [Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan, Sewoong Oh, Pramod Viswanath, 2019]. The BCJR algorithm was successfully implemented using a neural network [M. H. Sazli, C. Icsik, 2007]. The design of computationally efficient new codes and the design of new codes for multi-terminal (i.e., beyond point-to-point) settings are the two long-term goals of coding theory. Turbo codes perform close to the theoretical Shannon limit on the AWGN channel. Deep learning has been outstandingly successful in a vast variety of human endeavours and is rapidly becoming capable of learning advanced algorithms from observed information such as input, action and output. Prompted by these successes, it is projected that the aforementioned twin goals of coding theory can be addressed through deep learning techniques. Despite the availability of nearly unlimited training data and a capable learning framework, there are two primary challenges. The first is that the space of possible codes is inconceivably large. The second is efficient operation over a broad range of data rates, block lengths and channel signal to noise ratios (SNRs). In other words, a family of codes must be designed and evaluated over a range of channel SNRs, wherein the codes are characterized by the number of information bits and the data rate.
Table 1 List of acronyms used

RNN - Recurrent neural network
AWGN - Additive white Gaussian noise
BCJR - Bahl Cocke Jelinek Raviv
LDPC - Low density parity check
SNR - Signal to noise ratio
BER - Bit error rate
BLER - Block error rate
MAP - Maximum a posteriori
LTE - Long Term Evolution
RSC - Recursive systematic convolutional
DSL - Digital subscriber line
HMM - Hidden Markov model
In this paper, the application of deep learning to turbo decoding using the recurrent neural network (RNN) architecture is implemented, and its performance for various input data lengths and code rates is examined.
Until recently, the study of decoding algorithms focused on conventional algorithm implementations. In view of 5G/6G communication technology, there is a need to test, verify and compare advanced deep learning techniques for implementing decoding algorithms. Therefore, an attempt is made here to test Turbo decoding using deep learning techniques.
In the proposed work, the study is conducted in two ways:
(i) A neural Turbo decoder based on RNN (recurrent neural network) architectures is used for decoding over the AWGN channel. The RNN emulates BCJR decoders in principle, and the RNN decoder is trained at a specific SNR over information bit lengths of 500 to 2,000 bits.
(ii) Decoding of turbo codes is performed by a deep learning-based algorithm which is a modified form of (i).
The results obtained demonstrate the feasibility of neural network decoding through the training methodology and a careful construction of RNN architectures.
2. Methodology
The proposed methodology deploys an RNN architecture to construct both the neural Turbo decoder and the deep learning-based Turbo decoder algorithms.

Fig.1 Methodology
The encoder output is sent to the decoder, where the RNN architecture produces intermediate BER values known as the fitting values. Based on these values, a maximum likelihood (ML) table is generated. Using the ML table values, the best BER fitness values are selected, and the decoding operation is repeated for a fixed number of iterations; the updated maximum likelihood table values form the final solution.
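To make this loop concrete, the following is a minimal structural sketch in Python (for illustration only; the helpers `rnn_decode_pass` and `update_ml_table` are placeholders standing in for the trained RNN pass and the ML-table update, not the actual implementation):

```python
import numpy as np

def rnn_decode_pass(received, ml_table):
    """Placeholder for one RNN decoding pass: returns decoded bits and a fitting (intermediate BER) value."""
    bits = (np.asarray(received) > 0).astype(int)            # stand-in hard decision
    fitting_ber = float(np.random.uniform(0.0016, 0.0023))   # stand-in fitting value
    return bits, fitting_ber

def update_ml_table(ml_table, bits, fitting_ber):
    """Placeholder maximum-likelihood (ML) table update: store candidates keyed by their fitting value."""
    ml_table[fitting_ber] = bits
    return ml_table

def iterative_decode(received, num_iterations=10):
    """Iterative loop of Fig. 1: keep the best BER fitness value over a fixed number of iterations."""
    ml_table = {}
    for _ in range(num_iterations):
        bits, fitting_ber = rnn_decode_pass(received, ml_table)
        ml_table = update_ml_table(ml_table, bits, fitting_ber)
    best_ber = min(ml_table)                                  # best (lowest) BER fitness in the ML table
    return ml_table[best_ber], best_ber
```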
3. Sequential codes and RNN architecture
Sequential coding schemes are distinctively appealing among the various families of coding schemes in the literature. Three major characteristics of sequential coding schemes are: (i) their recurrent structure aligns naturally with recurrent neural networks (RNNs); the basic sequential encoder, which acts as a recurrent network, is shown in Fig. 2, and sequential codes are used to construct the rate-1/2 RSC (Recursive Systematic Convolutional) code shown in Fig. 3; (ii) sequential coding schemes have the potential to attain performance near the information theoretic limit; (iii) sequential coding schemes are widely used in cellular standards including 3G, 4G and LTE, and in satellite communications.
A recurrent neural network is a generalization of a feed forward neural network that has an internal memory. An RNN is recurrent in nature because it performs the same function for every data input while the output for the current input depends on the previous computation. Since a convolutional code has memory, a sequential code can be treated as equivalent to a convolutional code. To implement the RSC encoder, a suitable architecture is therefore the RNN, as it has memory in its structure, whereas a plain feed forward network trained by back propagation has no such memory. The RNN is thus preferred for turbo decoding.
The rate-1/2 Recursive Systematic Convolutional (RSC) code shown in Fig. 3 is constructed using the sequential encoder of Fig. 2. The encoder performs a forward pass on a recurrent network over the binary message bits b = (b_1, ..., b_K), producing the codeword c = (c_1, ..., c_K) and the binary state sequence s = (s_1, ..., s_K).
At time k, with binary input b_k ∈ {0, 1} and two-dimensional binary state s_k = (s_{k1}, s_{k2}), the output is the two-dimensional vector c_k = (c_{k1}, c_{k2}) = (2b_k − 1, 2(b_k ⊕ s_{k1}) − 1) ∈ {−1, 1}^2, where x ⊕ y = |x − y|.
The initial state is presumed to be zero, i.e., s_1 = (0, 0), and the next cell's state is updated as s_{k+1} = (b_k ⊕ s_{k1} ⊕ s_{k2}, s_{k1}).
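A minimal numerical sketch of this forward pass is given below (Python is used for illustration only; the state update follows the reconstruction above and should be treated as an assumption):

```python
import numpy as np

def rsc_encode(bits):
    """Forward pass of the rate-1/2 RSC cell in Fig. 3: returns codewords c_k in {-1, +1}^2."""
    s1, s2 = 0, 0                      # initial state s_1 = (0, 0)
    codewords = []
    for b in bits:
        b = int(b)
        c1 = 2 * b - 1                 # systematic output 2*b_k - 1
        c2 = 2 * (b ^ s1) - 1          # parity output 2*(b_k XOR s_k1) - 1
        codewords.append((c1, c2))
        s1, s2 = b ^ s1 ^ s2, s1       # assumed state update (b_k XOR s_k1 XOR s_k2, s_k1)
    return np.array(codewords)

# Example: encode a short message
print(rsc_encode([1, 0, 1, 1, 0]))
```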

Fig.2 Sequential encoder as a recurrent network

Fig.3 One cell for a rate 1/2 RSC code

The canonical channel considered is the AWGN channel, over which the output bits are sent. The received bits are y_{ki} = c_{ki} + z_{ki} for all i ∈ {1, 2} and k ∈ [K], where the z_{ki} are Gaussian with zero mean and variance σ^2, and y = (y_1, ..., y_K) is the received sequence. Decoding refers to estimating the message bits from the received signal y using the MAP rule [14]: for all k = 1, ..., K, the optimal MAP estimator is b̂_k = arg max_{b_k} Pr(b_k | y), and the fraction of wrongly estimated bits is counted by the BER. The optimal MAP estimate can be found in time linear in the block length K by dynamic programming, known as the Viterbi algorithm; this linear-time optimal decoder critically depends on the encoder's recurrent structure.
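The AWGN transmission and the BER count described here can be sketched as follows (the mapping from Eb/No to the noise variance and the hard-decision stand-in for the MAP/Viterbi decoder are illustrative assumptions):

```python
import numpy as np

def awgn_channel(codewords, ebno_db, rate=0.5):
    """Add Gaussian noise z_ki with zero mean and variance sigma^2 to the transmitted symbols c_ki."""
    ebno = 10 ** (ebno_db / 10.0)
    sigma = np.sqrt(1.0 / (2.0 * rate * ebno))   # assumed mapping from Eb/No to noise variance
    return codewords + sigma * np.random.randn(*codewords.shape)

def bit_error_rate(true_bits, estimated_bits):
    """BER: fraction of wrongly decoded bits."""
    return float(np.mean(np.asarray(true_bits) != np.asarray(estimated_bits)))

# Example: transmit BPSK symbols and decode with a naive hard decision (a stand-in for MAP/Viterbi)
bits = np.random.randint(0, 2, 1000)
c = np.stack([2 * bits - 1, 2 * bits - 1], axis=1)   # toy two-stream codeword for illustration only
y = awgn_channel(c, ebno_db=3.0)
estimate = (y[:, 0] > 0).astype(int)
print("BER:", bit_error_rate(bits, estimate))
```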
Fig.5 Neural turbo decoder
4. System design and implementation
Construction of neural Turbo encoder and decoder are discussed in this section.
A. Neural Turbo encoder

Fig.4 Rate 1/3 Turbo encoder


Fig. 4 illustrates a rate-1/3 Turbo encoder. Two identical rate-1/2 RSC encoders, namely encoder 1 and encoder 2, are employed. Encoder 1 takes the original sequence b as input, while encoder 2 takes a randomly permuted version of b; the random permutation is executed by the interleaver. The systematic output c1(2) of encoder 2 carries the same information as encoder 1's systematic output c1(1) (it is merely a permuted copy) and is therefore redundant. The sequence c1(2) is discarded, and the remaining sequences (c1(1), c2(1), c2(2)) are transmitted. Consequently, the rate is 1/3.
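A structural sketch of this encoder follows (it reuses a rate-1/2 RSC encoding function such as the `rsc_encode` sketch above; the return format and the seeded random interleaver are illustrative choices):

```python
import numpy as np

def turbo_encode(bits, rsc_encode, seed=0):
    """Rate-1/3 Turbo encoder of Fig. 4: systematic stream, parity 1, and interleaved parity 2."""
    bits = np.asarray(bits)
    perm = np.random.default_rng(seed).permutation(len(bits))   # interleaver Pi
    c1 = rsc_encode(bits)           # encoder 1 on the original sequence b
    c2 = rsc_encode(bits[perm])     # encoder 2 on the permuted sequence Pi(b)
    # The systematic stream of encoder 2 is a permuted copy of c1(1) and is discarded (punctured).
    return c1[:, 0], c1[:, 1], c2[:, 1], perm    # c1(1), c2(1), c2(2) and the permutation

# Example (reusing the rsc_encode sketch defined earlier):
# sys_stream, parity1, parity2, perm = turbo_encode(np.random.randint(0, 2, 500), rsc_encode)
```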
B. Neural Turbo Decoder
The neural Turbo decoder consists of multiple layers of the neural decoder, as in Fig. 5. The coded sequence (c1(1), c2(1), c2(2)) is transmitted over the AWGN channel, and the noisy received sequences are y1(1), y2(1), y2(2). The iterative decoder, termed the turbo decoder, uses the RSC MAP decoder (BCJR algorithm) as its building block: it maps the prior probabilities {Pr(b_k)}_{k=1}^{K} and the received sequence y to the posterior probabilities {Pr(b_k | y)}_{k=1}^{K} of the message bits. At the first iteration, the standard BCJR estimates the posterior Pr(b_k | (y1(1), y2(1))) with a uniform prior on b_k for all k ∈ [K]. A further BCJR block then estimates Pr(b_k | (Π(y1(1)), y2(2))) from the interleaved sequence Π(y1(1)). The output of the first layer is taken as the prior for the next iteration, and the estimate for each bit is refined by repeating the process. Training is carried out over an SNR (signal to noise ratio) range of 0 to 30 dB. Ten layers of BCJR decoder are stacked between random interleavers. The intermediate layers of the turbo decoder are used to generate self-training that smooths the final output. The architecture is trained with different input code lengths and code rates; the neural decoder's last layer is trained to accept input lengths of 500, 1000, 1500 and 2000 data bits and different code rates.
The N-BCJR architecture is a new approach to decoding for the N-RSC. The proposed N-RSC and N-BCJR, as in Figs. 4 and 5, are implemented, and the results are discussed in section 5.
N-Turbo, referred to in Fig. 5, is the neural decoder. A challenge is to facilitate end-to-end training through instances of the message sequences b(i) and the input sequences y(i) of the recurrent architecture's deep layers.
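The stacked structure of Fig. 5 can be sketched as follows (a structural sketch only: `neural_bcjr` is a placeholder for one trained RNN/BCJR layer, and the alternation between natural and interleaved order follows the description above):

```python
import numpy as np

def neural_turbo_decode(y_sys, y_par1, y_par2, perm, neural_bcjr, num_layers=10):
    """Stack of neural BCJR layers alternating between natural and interleaved order (Fig. 5)."""
    y_sys = np.asarray(y_sys)
    inv_perm = np.argsort(perm)
    prior = np.full(len(y_sys), 0.5)                            # uniform prior on b_k for the first layer
    for _ in range(num_layers // 2):
        post1 = neural_bcjr(y_sys, y_par1, prior)               # posterior from (y1(1), y2(1))
        post2 = neural_bcjr(y_sys[perm], y_par2, post1[perm])   # posterior from (Pi(y1(1)), y2(2))
        prior = post2[inv_perm]                                 # one layer's output is the next layer's prior
    return (prior > 0.5).astype(int)                            # hard decision on the final posteriors
```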

Fig.6 Turbo decoder

Fig. 6 shows the implementation block for the Turbo decoder, which differs slightly from Fig. 5 in that the received information from the interleaved data is applied as input to the decision of the next-stage BCJR. The results are discussed in section 5. The results from the implementation of the Turbo decoder as in Fig. 6 are expected to be better than those of the neural Turbo decoder as in Fig. 5.
5 Results and Discussion
Implementation of the neural Turbo decoder and the modified neural Turbo decoder, as in Figs. 5 and 6 respectively, is performed for the data encoded by the Turbo encoder of Fig. 4. Implementation is conducted in Matlab 2014®. Performance is assessed with the bit error rate (BER) and the block error rate (BLER). The channel is assumed to be Gaussian, with AWGN.

(Figures 7-30 plot BLER or BER against Eb/No in dB from 0 to 30 dB; each figure shows three curves: Viterbi decoder, neural Turbo decoder, and deep learning Turbo decoder.)

Fig. 7 BLER v/s SNR for input length=500, code rate=1/2
Fig. 8 BER v/s SNR for input length=500, code rate=1/2
Fig. 9 BER v/s SNR for input length=1000, code rate=1/2
Fig. 10 BLER v/s SNR for input length=1000, code rate=1/2
Fig. 11 BER v/s SNR for input length=1500, code rate=1/2
Fig. 12 BLER v/s SNR for input length=1500, code rate=1/2
Fig. 13 BER v/s SNR for input length=2000, code rate=1/2
Fig. 14 BLER v/s SNR for input length=2000, code rate=1/2
Fig. 15 BER v/s SNR for input length=500, code rate=1/3
Fig. 16 BLER v/s SNR for input length=500, code rate=1/3
Fig. 17 BER v/s SNR for input length=1000, code rate=1/3
Fig. 18 BLER v/s SNR for input length=1000, code rate=1/3
Fig. 19 BER v/s SNR for input length=1500, code rate=1/3
Fig. 20 BLER v/s SNR for input length=1500, code rate=1/3
Fig. 21 BER v/s SNR for input length=2000, code rate=1/3
Fig. 22 BLER v/s SNR for input length=2000, code rate=1/3
Fig. 23 BER v/s SNR for input length=500, code rate=1/4
Fig. 24 BLER v/s SNR for input length=500, code rate=1/4
Fig. 25 BER v/s SNR for input length=1000, code rate=1/4
Fig. 26 BLER v/s SNR for input length=1000, code rate=1/4
Fig. 27 BER v/s SNR for input length=1500, code rate=1/4
Fig. 28 BLER v/s SNR for input length=1500, code rate=1/4
Fig. 29 BER v/s SNR for input length=2000, code rate=1/4
Fig. 30 BLER v/s SNR for input length=2000, code rate=1/4

Fig. 31 Communication outputs of the signal transmission

The parameters used to train the Turbo decoders with the deep learning neural network are the input code length and SNR, and the activation function used is the tangent sigmoid. The received signals y1(1), y2(1) and y2(2) act as the input to the decoder. Fig. 31 depicts the amplitude values and the associated BER computed at the intermediate level; these are called the fitting values. BER values are continuously computed and displayed until the final BER plot is produced. Figs. 7 to 14 display the results obtained with code rate 1/2 for code lengths 500, 1000, 1500 and 2000, for the two types of decoders discussed with Figs. 5 and 6; the third curve in these figures is the conventional Viterbi decoder, plotted for comparison with the deep learning based decoders. Figs. 15 to 22 display the results obtained with code rate 1/3, and Figs. 23 to 30 display the results obtained with code rate 1/4. Fig. 32 gives the comparison for input length 500. The deep learning-based Turbo decoder is seen to perform better than the neural Turbo decoder, and both the neural and deep learning Turbo decoders perform better than the conventional Viterbi decoder. Table 2 shows the fitting values obtained inside the network before arriving at the final BER values; only the first ten values, for SNR 1 to 10 dB, are shown. Table 3 shows the minimum and maximum BER values for each algorithm for code rates 1/2, 1/3 and 1/4 for input length = 500.
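For reference, the two performance measures reported in the figures and tables can be computed as sketched below (the block size and the simulated error probability in the example are arbitrary illustrations):

```python
import numpy as np

def ber_and_bler(true_blocks, decoded_blocks):
    """BER: fraction of wrong bits; BLER: fraction of blocks containing at least one wrong bit."""
    errors = np.asarray(true_blocks) != np.asarray(decoded_blocks)
    return float(errors.mean()), float(errors.any(axis=1).mean())

# Example: 200 blocks of 500 bits with a 0.2% simulated bit-flip probability
tx = np.random.randint(0, 2, (200, 500))
rx = tx ^ (np.random.rand(200, 500) < 0.002)
print(ber_and_bler(tx, rx))
```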

Table 2 BER Values for all the Decoders for Input length =500 and different code rate, number of iterations=10

Code rate=1/2 | Code rate=1/3 | Code rate=1/4
(Rows: SNR 1 to 10 dB; columns within each code rate: Neural Turbo Decoder, Deep Learning Turbo Decoder, Viterbi Decoder)
0.0018387 0.0018463 0.002043 0.0018997 0.0018463 0.0021108 0.0018921 0.0018311 0.0021023
0.0018539 0.0018768 0.0020599 0.0019379 0.0019226 0.0021532 0.0019302 0.0019073 0.0021447
0.0017242 0.0016174 0.0019158 0.0019455 0.0019379 0.0021617 0.0018463 0.0017242 0.0020515
0.0020294 0.0020599 0.0022549 0.0018921 0.0018158 0.0021023 0.0020065 0.0020142 0.0022295
0.0020294 0.0020599 0.0022549 0.0019226 0.0018463 0.0021362 0.0020065 0.0020142 0.0022295
0.0018845 0.0018158 0.0020938 0.0020142 0.0020294 0.0022380 0.0018845 0.0017853 0.0020938
0.0018234 0.0016937 0.0020260 0.0018616 0.0017853 0.0020684 0.0019608 0.0019226 0.0021786
0.0018997 0.0018768 0.0021108 0.0019379 0.0019379 0.0021532 0.0018768 0.0017853 0.0020854
0.0018921 0.0018005 0.0021023 0.0019379 0.0018921 0.0021532 0.0018234 0.0016479 0.0020260
0.0019836 0.0019989 0.0022040 0.0019378 0.0018920 0.0021522 0.0018234 0.0016632 0.0020260

Table 3 Maximum and minimum BER values for input length = 500 for different code rates

Code rate=1/2 | Code rate=1/3 | Code rate=1/4
(Columns within each code rate: Neural Turbo Decoder, Deep Learning Turbo Decoder, Viterbi Decoder)
Maximum BER: 0.0020294 0.0020599 0.0022549 | 0.0020142 0.0020294 0.0021617 | 0.0020065 0.0020142 0.0022295
Minimum BER: 0.0017242 0.0016174 0.0019158 | 0.0018616 0.0017853 0.0021023 | 0.0018234 0.0016479 0.002026
Fig. 32 Minimum BER comparison for different code rates for all the decoding algorithms for input length=500 (bar chart: Viterbi, neural Turbo and deep learning Turbo decoders at code rates 1/2, 1/3 and 1/4)

Table 4 BER Values for all the Decoders for Input length=1000 and different code rate, number of iterations=10
Code rate=1/2 | Code rate=1/3 | Code rate=1/4
(Rows: SNR 1 to 10 dB; columns within each code rate: Neural Turbo Decoder, Deep Learning Turbo Decoder, Viterbi Decoder)
0.0018310 0.0016785 0.0020345 0.0019684 0.0019379 0.0021871 0.001915 0.0018311 0.0021278
0.0017853 0.0016479 0.0019836 0.0019302 0.0018616 0.0021447 0.0018692 0.0017395 0.0020769
0.0018921 0.0018311 0.0021023 0.0019226 0.0019073 0.0021362 0.0018387 0.0016785 0.0020430
0.001976 0.0019989 0.0021956 0.0019302 0.0018768 0.0021447 0.0018616 0.0018005 0.0020684
0.0019913 0.0020142 0.0022125 0.0018463 0.0017395 0.0020515 0.0019150 0.0018768 0.0021278
0.0019150 0.0018616 0.0021278 0.0019531 0.0019531 0.0021701 0.0019379 0.0019226 0.0021532
0.0018616 0.0017242 0.0020684 0.0019836 0.0020142 0.0022040 0.0018921 0.0017853 0.0021023
0.0018616 0.0017242 0.0020684 0.0018845 0.0018005 0.0020938 0.0018921 0.0017853 0.0021023
0.0019226 0.0018463 0.0021362 0.0018845 0.0018005 0.0020938 0.0019302 0.0018616 0.0021447
0.0019226 0.0018463 0.0021362 0.0020905 0.0021973 0.0023227 0.0019301 0.0018614 0.0021445

Table 5 Maximum and minimum BER values for input length = 1000 for different code rates

Code rate=1/2 | Code rate=1/3 | Code rate=1/4
(Columns within each code rate: Neural Turbo Decoder, Deep Learning Turbo Decoder, Viterbi Decoder)
Maximum BER: 0.0019226 0.0020142 0.0022125 | 0.0020905 0.0021973 0.0023227 | 0.0019302 0.0019226 0.0021445
Minimum BER: 0.0017853 0.0016479 0.0019836 | 0.0018463 0.0017395 0.0020515 | 0.0018387 0.0016785 0.0020684
Fig. 33 Minimum BER comparison for different code rates for all the decoding algorithms for input length=1000 (bar chart: Viterbi, neural Turbo and deep learning Turbo decoders at code rates 1/2, 1/3 and 1/4)

Table 6 BER Values for all the Decoders for Input length=1500 and different code rate, number of iterations=10

Code rate=1/2 | Code rate=1/3 | Code rate=1/4
(Rows: SNR 1 to 10 dB; columns within each code rate: Neural Turbo Decoder, Deep Learning Turbo Decoder, Viterbi Decoder)
0.0020142 0.0020752 0.002238 0.0020447 0.0020905 0.0022719 0.0019073 0.0018158 0.0021193
0.0019913 0.0020294 0.0022125 0.0017929 0.0016632 0.0019921 0.0018692 0.0017853 0.0020769
0.0019608 0.0019531 0.0021786 0.0019226 0.0018463 0.0021362 0.0018997 0.0018463 0.0021108
0.0019379 0.0018921 0.0021532 0.0018845 0.0017700 0.0020938 0.0018768 0.0018158 0.0020854
0.0019989 0.0019989 0.002221 0.0019379 0.0019226 0.0021532 0.0018692 0.0018768 0.0020769
0.0020065 0.0020294 0.0022295 0.0019836 0.0019836 0.002204 0.0018692 0.0018768 0.0020769
0.001915 0.0018463 0.0021278 0.0019302 0.0018768 0.0021447 0.0018616 0.0018616 0.0020684
0.0019226 0.0019073 0.0021362 0.0018692 0.0017395 0.0020769 0.0018387 0.0016937 0.0020430
0.0018005 0.0016327 0.0020006 0.0020065 0.0020141 0.0022295 0.0018539 0.0017242 0.0020599
0.0018997 0.0018616 0.0021108 0.0019608 0.0019531 0.0021786 0.0018539 0.0017242 0.0020599

Table 7 Maximum and minimum BER values for input length = 1500 for different code rates

Code rate=1/2 | Code rate=1/3 | Code rate=1/4
(Columns within each code rate: Neural Turbo Decoder, Deep Learning Turbo Decoder, Viterbi Decoder)
Maximum BER: 0.0020142 0.0020752 0.002238 | 0.0020447 0.0020905 0.0022719 | 0.0019073 0.0018768 0.0021193
Minimum BER: 0.0018005 0.0016327 0.0020006 | 0.0017929 0.0016632 0.0019921 | 0.0018387 0.0016937 0.002043

Fig. 34 Minimum BER comparison for different code rates for all the decoding algorithms for input length=1500 (bar chart: Viterbi, neural Turbo and deep learning Turbo decoders at code rates 1/2, 1/3 and 1/4)

Fig. 33 gives the comparison for input length 1000. The deep learning-based Turbo decoder is seen to perform better than the neural Turbo decoder, and both the neural and deep learning Turbo decoders perform better than the conventional Viterbi decoder. Table 4 shows the fitting values obtained inside the network before arriving at the final BER values; only the first ten values, for SNR 1 to 10 dB, are shown. Table 5 shows the minimum and maximum BER values for each algorithm for code rates 1/2, 1/3 and 1/4.
Fig. 34 gives the comparison for input length 1500. Again, the deep learning-based Turbo decoder performs better than the neural Turbo decoder, and both perform better than the conventional Viterbi decoder. Table 6 shows the fitting values obtained inside the network before arriving at the final BER values; only the first ten values, for SNR 1 to 10 dB, are shown. Table 7 shows the minimum and maximum BER values for each algorithm for code rates 1/2, 1/3 and 1/4.

Table 8 BER Values for all the Decoders for Input length=2000 and different code rate, number of iterations=10

Code rate=1/2 | Code rate=1/3 | Code rate=1/4
(Rows: SNR 1 to 10 dB; columns within each code rate: Neural Turbo Decoder, Deep Learning Turbo Decoder, Viterbi Decoder)
0.0019989 0.0019989 0.0022210 0.0018539 0.0017395 0.0020599 0.0019913 0.0019989 0.0022125
0.0020218 0.0020447 0.0022464 0.0019913 0.0019989 0.0022125 0.0018463 0.0019987 0.0020515
0.0019455 0.0018921 0.0021617 0.0019913 0.0019989 0.0022125 0.0018845 0.0018311 0.0020938
0.0019531 0.0019226 0.0021701 0.0018997 0.0018158 0.0021108 0.0017929 0.0019531 0.0019921
0.0018845 0.0017853 0.0020938 0.0019684 0.0019531 0.0021871 0.0019379 0.0019379 0.0021532
0.0019531 0.0017700 0.0021701 0.0019379 0.0019073 0.0021532 0.0019073 0.0018616 0.0021193
0.0018921 0.0017853 0.0021023 0.0019379 0.0018768 0.0021532 0.001915 0.0017890 0.0021278
0.0019760 0.0016479 0.0021956 0.0019684 0.0020142 0.0021871 0.0018997 0.0018463 0.0021108
0.0019836 0.0019836 0.0022040 0.0019150 0.0019226 0.0021278 0.0019073 0.0018311 0.0021193
0.0018921 0.0018768 0.0021023 0.0019073 0.0018616 0.0021193 0.0019150 0.0018616 0.0021278

Table 9 Maximum and minimum BER values for input length = 2000 for different code rates

Code rate=1/2 | Code rate=1/3 | Code rate=1/4
(Columns within each code rate: Neural Turbo Decoder, Deep Learning Turbo Decoder, Viterbi Decoder)
Maximum BER: 0.0020218 0.0020447 0.0022464 | 0.0019913 0.0020142 0.0022125 | 0.0019913 0.0019989 0.0022125
Minimum BER: 0.0018845 0.0016479 0.0020938 | 0.0018539 0.0017395 0.0020599 | 0.0017929 0.0017890 0.0019921

Fig. 35 Minimum BER comparison for different code rates for all the decoding algorithms for input length=2000 (bar chart: Viterbi, neural Turbo and deep learning Turbo decoders at code rates 1/2, 1/3 and 1/4)

Fig. 35 gives the comparison for input length 2000. The deep learning-based Turbo decoder is seen to perform better than the neural Turbo decoder, and both the neural and deep learning Turbo decoders perform better than the conventional Viterbi decoder. Table 8 shows the fitting values obtained inside the network before arriving at the final BER values; only the first ten values, for SNR 1 to 10 dB, are shown. Table 9 shows the minimum and maximum BER values for each algorithm for code rates 1/2, 1/3 and 1/4.

6 Conclusion

In this paper, an approach that applies the RNN architecture to neural Turbo decoding and deep learning Turbo decoding is presented. Binary cross entropy loss is utilized as the cost function: it measures the mismatch between the transmitted message bits and the decoder output, and training drives this loss as low as possible. In both decoding schemes the overall BER is reduced by minimizing the cross entropy. From the results, it is concluded that both the neural and deep learning Turbo decoders perform better than the conventional Viterbi decoder, and the performance is observed to be uniform across different code lengths. Performance better than the convolutional Viterbi decoder is thus attained by training a neural decoder. Further improvements in BER performance may be obtained with improved RNN architectures and by taking the latency of the decoder into consideration.
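For completeness, the binary cross entropy loss referred to here is L = −(1/K) Σ_k [ b_k log p_k + (1 − b_k) log(1 − p_k) ], where p_k is the decoder's posterior estimate of bit b_k; a minimal sketch (variable names are illustrative):

```python
import numpy as np

def binary_cross_entropy(true_bits, predicted_probs, eps=1e-12):
    """Binary cross entropy between transmitted bits and decoder posteriors (lower is better)."""
    b = np.asarray(true_bits, dtype=float)
    p = np.clip(np.asarray(predicted_probs, dtype=float), eps, 1 - eps)
    return float(-np.mean(b * np.log(p) + (1 - b) * np.log(1 - p)))

print(binary_cross_entropy([1, 0, 1], [0.9, 0.2, 0.8]))   # small loss when posteriors match the bits
```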
Yihan Jiang et al. (2019) implemented a Turbo autoencoder for code rate 1/3 and analysed its performance using deep learning in the low-SNR range, reporting bit error performance of the order of 10^-5 to 10^-6. The work presented in this paper achieves a bit error rate of about 0.0016632 for a data size of 1500 at code rate 1/3. Performance for larger data sizes can be tested, and further improvement can be explored by modifying the decoder architecture.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have
appeared to influence the work reported in this paper.
References
C. E. Shannon, 1948. “A mathematical theory of communication, PartI, part II”, Bell Syst. Tech. Journal, Vol. 27, pp. 623–656.
C. Berrou, A. Glavieux, and P.Thitimajshima, 1993. “Near Shannon limit error-correcting coding and decoding: Turbo-codes”,Proceedings of
ICC 93, IEEE International Conference on Communications, GenevaVol. 2., pp. 1064-1070.
L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, 1974. “Optimal decoding of linear codes for minimizing symbol error rate”, IEEE Transactions on
information theory, Vol. 20, No. 2, pp. 284–287.
X.A. Wang and S. B. Wicker, 1996. “An artificial neural net Viterbi decoder”, IEEE Transactions on Communications, Vol. 44, No. 2, pp. 165–
171.
Jagdish D. Kene, Kishor Kulat, 2012.“Soft Output Decoding Algorithm for Turbo Codes Implementation in Mobile Wi-Max Environment”,
Procedia Technology, Vol. 6, pp 666-673.
Shridhar B.Devamane and Rajeshwari L.Itagi, 2018. “Turbo Code Implementation on TMS320C6711”,3rdIEEE International Conference on
Electrical, Electronics, Communication, Computer Technologies and Optimization Techniques, ICEECCOT, GSSS Mysore, Karnataka, India.
T. Gruber, S. Cammerer, J. Hoydis, and S. ten Brink, 2017. “On deep learning-based channel decoding”,51st IEEE Annual Conference on
Information Sciences and Systems (CISS).
M. H. Sazli and C. Icsik, 2007. “Neural network implementation of the BCJR algorithm”, Digital Signal Processing, Vol. 17, No. 1, pp. 353 –
359.
T. Richardson and R. Urbanke, 2008. “Modern coding theory”, Cambridge University Press.
T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, 2013. “Distributed representations of words and phrases and their
compositionality”, in Advances in neural information processing systems, pp. 3111–3119.
O. Russakovsky, J. Deng, H. Su, J. Krause, S. Sateesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, 2015. “Image net large-scale
visual recognition challenge”, Int. J. of Computer Vision, Vol. 115, No. 3, pp. 211–252.
S. Benedetto, G. Montorsi,1995. “Performance evaluation of turbo codes”, Electron. Lett. Vol. 31, No. 3,pp.163–165.
D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot
et al., 2016. “Mastering the game of go with deep neural networks and tree search”, Nature, Vol. 529, No. 7587, pp. 484–489.
S. Cammerer, T. Gruber, J. Hoydis, and S. ten Brink, 2017. “Scaling deep learning-based decoding of polar codes via partitioning”, in
GLOBECOM. IEEE, pp. 1–6
S. Dörner, S. Cammerer, J. Hoydis, and S. t. Brink, 2017. “Deep learning-based communication over the air”, IEEE Journal of selected topics
in signal processing.
T. J. O’Shea and J. Hoydis, 2017. “An introduction to machine learning communications systems”, IEEE Transactions on Cognitive
Communications and Networking, Vol. 3, Issue 4.
E. Nachmani, Y. Beery, and D. Burshtein, 2016. “Learning to decode linear codes using deep learning”, 54th Annual Allerton Conference on
Communication, Control, and Computing, Monticello, IL, pp. 341–346.
W. Xu, Z. Wu, Y. L. Ueng, X. You, and C. Zhang, 2017.“Improved polar decoder based on deep learning”, 2017 IEEE International Workshop
on Signal Processing Systems (SiPS), pp. 1–6.
A. Viterbi, 1967. “Error bounds for convolutional codes and an asymptotically optimum decoding algorithm”, IEEE transactions on Information
Theory, Vol. 13, No. 2, pp. 260–269.
X A Wang and S B Wicker, 1996. “An artificial neural networkViterbi Decoder”, IEEE Transactions on Communications, Vol. 44, No.2, pp.
165–171.
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Good fellow, and R. Fergus,2014. “Intriguing properties of neural networks”, Int.
conference on Learning Representation.
M. T. Ribeiro, S. Singh, and C. Guestrin, 2016. “Why should I trust You?Explaining the predictions of any classier”, Proceedings of the 22nd
ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, USA, pp. 1135–1144.
J. Li, X. Wu, and R. Laroia, 2013. “OFDMA mobile broadband Communications: A systems approach”, Cambridge University Press.
G. Hinton, O. Vinyals and J. Dean, 2015.“Distilling the knowledge in a neural network”, NIPS Deep Learning and Representation Learning
Workshop. http://arxiv.org/abs/1503.02531
I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, 2016. “Binarized neural networks”,Proc. of the 30th Int. conference on
Neural Information Processing, Barcelona, Spain.
A. Lapidoth, 1996. “Nearest neighbor decoding for additive non-gaussian noise channels,” IEEE Transactions on Information Theory, Vol. 42,
No. 5, pp. 1520–1529.
F. H. Sanders, J. E. Carroll, G. A. Sanders, and R. L. Sole, 2013. “Effects of radar interference on LTE base station receiver Performance”,
NTIA, US Dept. of Commerce.
G. A. Sanders, 2014. “Effects of radar interference on LTE (FDD) eNodeB and UE receiver performance in the 3.5 GHz band”, US Department
of Commerce, National Telecommunications and Information Administration.
H.A. Safavi-Naeini, C. Ghosh, E. Visotsky, R. Ratasuk, and S. Roy, 2015. “Impact and mitigation of narrow-band radar interference in down-
link LTE”, in Communications (ICC), 2015 IEEE International Conference on. IEEE, pp. 2644–2649.
H.A. Safavi-Naeini, C. Ghosh, E. Visotsky, R. Ratasuk, and S. Roy, 2015. “Impact and mitigation of narrow-band radar interference in down-
link LTE,”IEEE International Conference on Communications (ICC),. IEEE pp.2644–2649.
Haesik Kim,2015. “Coding and Modulation Techniques for High Spectral Efficiency Transmission in 5G and Satcom”, IEEE 23rd European
Signal Processing Conference.
Heshani Niyagama Gamage,2017. “Waveforms and Channel Codingfor 5G”, Department of Communications Engineering, Master’s thesis,
University of Oulu, Finland.
Kai Niu,2017. “Learning to Decode”, Key Laboratory of Universal Wireless Communication, Ministry of Education BeijingUniversity of Posts
and Telecommunications.
Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan, Sewoong Oh, Pramod Viswanath, 2019. “Turbo Autoencoder: Deep learning
based channel codes for point-to-point communication channels”, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019),
Vancouver, Canada.
Raja Sattiraju, Andreas Weinand and Hans D. Schotten, 2018. “Performance Analysis of Deep Learning based on Recurrent Neural Networks
for Channel Coding”, 12th International Conference on Advanced Networks and Telecommunications Systems Conference IEEE ANTS 2018,
Indore, India.