
Bit Level Implementation of the PTA Algorithm for

Reed-Solomon Codes
Yuval Genga∗, Olutayo Oyerinde†, Jaco Versfeld
∗†School of Electrical & Information Engineering, University of the Witwatersrand,
Private Bag 3, 2050, Johannesburg, South Africa
Email: ∗Yuval.Genga@students.wits.ac.za, †Olutayo.Oyerinde@wits.ac.za
Department of Electrical and Electronic Engineering, Stellenbosch University,
Private Bag X1, 7602, Matieland, South Africa
Email: djjversfeld@sun.ac.za

Abstract—The Parity check Transformation Algorithm (PTA) is a symbol level decoder that was recently developed for Reed-Solomon (RS) codes. The algorithm is a soft decision iterative decoder that uses the reliability information from the channel to decode by transforming the parity check matrix based on the same soft information. The PTA algorithm has been shown in the literature to outperform widely used RS decoders such as the Berlekamp-Massey algorithm and the Koetter-Vardy algorithm. The work done in this paper investigates the effect of the PTA algorithm working on a sparse matrix by implementing it on a bit level. Modifications are made to the algorithm, to suit the bit level implementation, that significantly improve the overall performance. Simulations show that the algorithm compares favorably to the high performance Adaptive Belief Propagation (ABP) decoder, which also works on a bit level, while providing a significant gain when compared to the symbol level implementation of the PTA.

Index Terms—Reed-Solomon codes, Soft decision decoding, Iterative decoding

I. INTRODUCTION

Reed-Solomon (RS) codes [1] are a class of high performance error correction codes due to their favorable algebraic properties. As a result of their good performance, these codes have a wide range of applications in telecommunications and storage devices. Much of the research on this popular class of codes has been applied to the development of high performance decoding schemes.

In the literature, decoding of RS codes can be done on two levels; it can either be done on a symbol level or on a bit level. Most well known decoders for RS codes work on a symbol level. Such decoders include hard decision decoding algorithms, like the Berlekamp-Massey algorithm [2][3][4] and the Euclidean algorithm [5]. The Koetter-Vardy (KV) [6] algorithm is an example of a widely used soft decision symbol level decoder for RS codes. Work done in [7][8][9] presents some examples of bit-level decoding algorithms based on RS soft decision techniques. The most famous example of a soft decision RS decoder working on a bit level is the Adaptive Belief Propagation (ABP) algorithm [7]. This is largely due to its impressive error correcting capabilities.

The ABP algorithm is based on the Belief Propagation algorithm for Low Density Parity Check (LDPC) codes [10]. The ABP algorithm was designed to apply Belief Propagation to linear block codes defined by a parity check matrix, H, with a dense structure. The application of Belief Propagation is made possible by performing a binary image expansion on the dense H matrix to make it sparse by converting it into its binary form [7], hence working on a bit level. The ABP algorithm works by adapting the bit level H matrix based on the received LLR bit reliabilities. This adaptation of the H matrix is done iteratively so as to prevent the decoder from getting stuck at pseudo-equilibrium points [7]. The ABP was shown in [7] to significantly outperform other widely used RS decoders, including the KV algorithm. This is largely attributed to the fact that the algorithm works with a sparse binary H matrix obtained from working at a bit level.

The Parity check Transformation Algorithm (PTA) is an iterative soft decision symbol level decoder for RS codes that was recently developed in [11]. The PTA algorithm works by transforming the H matrix iteratively based on the values of the soft reliability information that is output from the channel. The reliability values change for every iteration due to corrections made based on values obtained from the syndrome check. The PTA algorithm has also been shown to outperform the widely used KV algorithm [11], and shows a lot of promise as an alternative decoder for RS codes.

The work presented in this paper focuses on the implementation of the PTA algorithm on a bit level. The PTA algorithm does not utilize any of the algebraic properties possessed by RS codes, hence the expectation that a bit level implementation is possible, just as in the case of the ABP algorithm. This work is done so as to determine whether the PTA algorithm can take advantage of the sparse structure of the H matrix exhibited at the bit level to yield an improved error correction performance. This performance will be determined through a Symbol Error Rate (SER) comparison between a bit level implementation and a symbol level implementation of the PTA algorithm. A performance comparison will also be done between the bit level PTA and the ABP algorithm acting as a benchmark, so as to give a fair comparison with bit level decoders.

The rest of the paper is structured as follows. Section II gives a summary of how the PTA algorithm works. The implementation of the PTA algorithm on a bit level is explained in Section III. Results obtained from simulations are discussed in Section IV and Section V concludes

the work done in this paper.

II. THE PARITY CHECK TRANSFORMATION ALGORITHM

The PTA algorithm is a symbol-wise soft decision decoder that utilizes the reliability information from the channel to transform the H matrix by performing row operations. The PTA decoder works iteratively by repeatedly correcting the reliability values to obtain the decoded message.

To better comprehend how the PTA works we shall first establish notation. Consider an (n, k) RS codeword c. The symbols of c are transmitted over a noisy channel based on a selected modulation scheme. At the output of the channel the vector r is received. This vector r is then used to obtain the reliability matrix, β, as shown in [11]. The PTA algorithm then decodes the received message by working iteratively on the matrix β. For each iteration, the PTA algorithm carries out the following steps (a sketch of one such iteration is given after the list).

• The sorting reliability step: A reliability vector V is obtained as shown in (1),

    V = argmax(β_j),    (1)

  where 1 ≤ j ≤ n and β_j represents each column of the matrix β. The values of V are then sorted from the least to the highest, while still noting the original positions (indices) of the reliabilities during the switch. These rearranged indices of the sorted reliabilities are recorded in the vector l and are noted as

    l = [l_1, l_2, ..., l_n].    (2)

  The k highest reliabilities are considered to be the most reliable and the matching indices are put in the vector R. The remaining indices are assumed to match the (n − k) least reliable symbols, and are stored in the vector U. These vectors can be expressed as

    U = [l_1, l_2, ..., l_(n−k)],
    R = [l_((n−k)+1), ..., l_n].    (3)

• The H transformation step: This step involves row reduction operations being used to transform the H matrix into a partially systematic structure. The transformation is considered to be partially systematic because the row reduction operations are done in such a way that the (n − k) indices of U match the identity submatrix and the k indices of R match the parity submatrix, as shown in [11]. The row reduction operation can be done in a computationally efficient way using the approach shown in [12].

• The syndrome check step: In this step a syndrome check is done by finding the dot product of the hard decision received vector (r), obtained from β, and each of the (n − k) rows of the transformed parity check matrix (H′). This computation can be presented as

    S_i = r · H′_i,    (4)

  where 1 ≤ i ≤ (n − k) indexes each row of H′ and each value of the syndrome vector, S.

• The correction step: This step involves performing corrections on V based on each of the values, S_i, of the syndrome vector. These corrections on the reliabilities can be considered to work in the form of a reward system. The PTA 'rewards' the participating reliabilities indexed by U and R by adding a predetermined correction factor, δ and δ/2 respectively, to their values whenever S_i = 0. This reinforces the hard decision selection of these values for the next iteration. If S_i ≠ 0, the participating reliabilities indexed by U and R are 'penalized' by subtracting δ and δ/2 respectively from their values. This is done to reduce the chances of these indices being selected during the hard decision step for the next iteration. These corrections are done for each syndrome check and can be summarized as

          ⎧ V_{U_i} + δ,  V_R + δ/2,   if S_i = 0,
    V_j = ⎨
          ⎩ V_{U_i} − δ,  V_R − δ/2,   if S_i ≠ 0.    (5)

  The new values of the reliabilities in V indexed by U and R are then updated to their original positions in the β matrix.

• The stopping condition: The above steps are repeated iteratively until all of the S_i values are equal to 0 or until a negative value is updated to the β matrix.
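To make the flow of these steps concrete, the following Python/NumPy sketch runs the sorting, syndrome check and correction steps for a single iteration. It is illustrative only: the arithmetic is carried out over GF(2) for brevity (the decoder in [11] operates on RS symbols), the transformed matrix H_t is taken as given, and the pairing of U[i] with the i-th identity column is an assumption of the sketch rather than a detail taken from [11].

```python
import numpy as np

def pta_correction_pass(beta, H_t, delta):
    """Illustrative sketch of one PTA pass over the checks (not the decoder of [11]).

    beta : reliability matrix with one column per code position.
    H_t  : transformed parity check matrix, assumed already row reduced so that
           the U positions carry the identity submatrix.
    """
    n_minus_k, n = H_t.shape

    # Sorting reliability step, eqs. (1)-(3).
    V = beta.max(axis=0)                      # best reliability per position, eq. (1)
    hard = beta.argmax(axis=0)                # hard decision estimate per position
    l = np.argsort(V)                         # least to most reliable, eq. (2)
    U, R = l[:n_minus_k], l[n_minus_k:]       # eq. (3)

    # Syndrome check and correction steps, eqs. (4)-(5).
    for i in range(n_minus_k):
        S_i = int(np.dot(H_t[i], hard) % 2)   # eq. (4), binary arithmetic for brevity
        sign = 1 if S_i == 0 else -1          # reward when the check passes, else penalise
        V[U[i]] += sign * delta               # U position tied to the i-th identity column
        V[R] += sign * (delta / 2)            # R positions taking part in the check
    return V, hard
```

The updated reliabilities in V would then be written back into β and the pass repeated until every S_i equals zero or a reliability turns negative, as stated in the stopping condition.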
III. BIT LEVEL IMPLEMENTATION OF THE PTA ALGORITHM

In this section, the bit level implementation of the PTA is discussed with regard to the modifications made in the H transformation step and to the size of δ used during the correction step. For purposes of notation, the PTA algorithm implemented at the bit level is denoted as PTAbl.

It is also important to note that a symbol level H matrix with dimensions m × n will have dimensions M × N (where M = mb, N = nb and b represents the number of bits per field symbol) in its binary state after a binary image expansion has been done.
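The binary image expansion referred to above is standard: each GF(2^b) entry of H is replaced by the b × b binary matrix that represents multiplication by that element, giving the M × N binary matrix. The sketch below shows one common way of doing this with companion matrix powers; the exponent representation of the entries and the function names are assumptions made for the illustration and are not taken from [7].

```python
import numpy as np

def gf2_companion(prim_poly_low_coeffs):
    """Companion matrix of the field's primitive polynomial over GF(2);
    multiplying a coefficient column vector by it multiplies the element by alpha."""
    b = len(prim_poly_low_coeffs)
    A = np.zeros((b, b), dtype=np.uint8)
    A[1:, :-1] = np.eye(b - 1, dtype=np.uint8)   # shift: alpha^i -> alpha^(i+1)
    A[:, -1] = prim_poly_low_coeffs              # reduction of alpha^b
    return A

def binary_image_expansion(H_exp, prim_poly_low_coeffs):
    """Expand an m x n symbol level H over GF(2^b) into its (mb) x (nb) binary image.

    H_exp holds exponents: an entry e >= 0 stands for alpha^e, a negative entry for 0
    (an assumed representation chosen to keep the sketch self-contained).
    """
    A = gf2_companion(prim_poly_low_coeffs)
    b = A.shape[0]
    m, n = H_exp.shape
    Hb = np.zeros((m * b, n * b), dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            e = H_exp[i, j]
            if e < 0:
                continue                         # zero element -> all-zero b x b block
            block = np.eye(b, dtype=np.uint8)
            for _ in range(e):                   # alpha^e as the e-th power of A (mod 2)
                block = (block @ A) % 2
            Hb[i*b:(i+1)*b, j*b:(j+1)*b] = block
    return Hb

# Example for GF(16) with primitive polynomial x^4 + x + 1 (low-order coefficients [1, 1, 0, 0]):
# Hb = binary_image_expansion(H_exp, np.array([1, 1, 0, 0], dtype=np.uint8))
```

For instance, a (15, 7) RS code over GF(16) has an 8 × 15 symbol level H matrix, so with b = 4 its binary image has dimensions 32 × 60.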

A. Modifications in the H transformation step

As seen in Section II, the PTA decodes by performing corrections to the reliabilities in the matrix β. Therefore, the algorithm does not directly utilize the algebraic properties of the RS code. However, it does take advantage of the structural properties of the H matrix. This is because the H matrix of an (n, k) RS code has a set of (n − k) independent columns regardless of the order of the columns selected [12][13]. This helps during the transformation step of the PTA decoding process. After a binary image expansion is done, the binary H matrix loses this property. However, the matrix still possesses full row rank. This means that for the binary H matrix, with dimensions M × N, there exists a total of M independent columns which can be obtained through row reduction. However, there is no guarantee that these M columns will run consecutively to form an exact identity matrix after the row reduction is done. In this regard, the modifications to the H transformation step can be summarized as follows:

• The received vector is first converted into its bit form based on the selected modulation scheme. The β matrix is now constructed with only 2 rows, representing either a 0 or a 1 for the hard decision detection. To ensure that each column in β adds up to one, as in the case of [11], we scale the value of the reliabilities as shown in (6),

    β_{q,j(scaled)} = β_{q,j} / β_j,    (6)

  where 1 ≤ q ≤ 2, 1 ≤ j ≤ N and β_{q,j} denotes the entry in the q-th row and the j-th column of β. The vector V is obtained in the same way shown in (1). V is then sorted in ascending order. The rearranged indices are noted as well and stored in the vector l.

• The binary H matrix is then transformed based on the rearranged indices in the vector l. The independent columns in the binary H matrix will be represented by the columns that match the identity submatrix after the transformation. These columns are now used to represent the U indices and the remaining indices are used to represent the R indices (see the sketch after this list).
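A minimal sketch of this modified transformation is given below, assuming the notation of the earlier sketches: the column scaling of (6) followed by a GF(2) row reduction that visits the columns in the order given by l, so that the pivot (identity) columns are drawn from the least reliable bit positions wherever possible. The efficient symbol level method of [12] is not reproduced here.

```python
import numpy as np

def scale_beta(beta):
    """Scale each column of the 2 x N bit level reliability matrix so that it
    sums to one, in the spirit of eq. (6)."""
    return beta / beta.sum(axis=0, keepdims=True)

def transform_binary_H(Hb, l):
    """Row reduce the binary H matrix over GF(2), trying columns in the order of l
    (least reliable first) when choosing pivots. The pivot columns form the
    identity submatrix and act as the U indices; the remaining columns act as R."""
    H = (Hb % 2).astype(np.uint8)
    M = H.shape[0]
    U = []
    row = 0
    for col in l:
        if row == M:
            break
        candidates = np.nonzero(H[row:, col])[0]
        if candidates.size == 0:
            continue                             # column depends on the pivots found so far
        pivot = row + candidates[0]
        H[[row, pivot]] = H[[pivot, row]]        # swap the pivot row into place
        for r in range(M):                       # clear this column in all other rows
            if r != row and H[r, col]:
                H[r] ^= H[row]
        U.append(col)
        row += 1
    R = np.array([col for col in l if col not in U])
    return H, np.array(U), R
```

Decoding then proceeds exactly as in Section II, with V, U and R now defined over the N bit positions.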
B. Modifications to the correction step

Working at a bit level means working with a binary parity check matrix that is more sparse than the regular H matrix of an RS code. The symbol level PTA algorithm addresses the issue of working with an H matrix that has a very sparse identity submatrix and a completely dense parity submatrix, when it is in the partially systematic transformed state, by using different values of δ in the correction step for each submatrix, as shown in (5). For the case of working at a bit level, the binary H matrix can be considered to be sparse in both the identity and parity submatrices in its transformed state.

In this regard, the PTAbl is implemented using a single δ value to correct the indices of the reliabilities that match both the sparse identity and the sparse parity submatrices during each iteration of the decoding process. The correction step is now defined as

          ⎧ V_{U_i} + δ,  V_{R_t} + δ,   if S_i = 0,
    V_j = ⎨
          ⎩ V_{U_i} − δ,  V_{R_t} − δ,   if S_i ≠ 0,    (7)

where t represents the participating bits indexed by R in the i-th (for 1 ≤ i ≤ M) syndrome check and can be denoted as

    t = {N : H_{M,N} = 1},    (8)

where H_{M,N} represents the entry in the M-th row and the N-th column of the binary parity check matrix.
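Under the same illustrative notation as the earlier sketches, the single δ correction of (7) and (8) can be written as follows. Only the R positions with a 1 in the current check row are touched; the pairing of U[i] with row i is again an assumption of the sketch.

```python
import numpy as np

def pta_bl_correction(V, hard_bits, H_t, U, R, delta):
    """Bit level correction step with a single delta, following eqs. (7) and (8)."""
    M, N = H_t.shape
    in_R = np.zeros(N, dtype=bool)
    in_R[R] = True
    for i in range(M):
        S_i = int(np.dot(H_t[i], hard_bits) % 2)    # binary syndrome of check i
        sign = 1 if S_i == 0 else -1                # reward or penalise
        t = np.nonzero((H_t[i] == 1) & in_R)[0]     # participating R bits, eq. (8)
        V[U[i]] += sign * delta
        V[t] += sign * delta
    return V
```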
IV. RESULTS AND ANALYSIS

In this section, simulation results are presented for the PTAbl algorithm. These simulations are done to obtain the optimum performance conditions of the algorithm. The performance of the PTAbl is also tested against the high performance ABP algorithm, which serves as the benchmark algorithm for this work.

A. Algorithm Analysis

We first investigate the performance of the PTAbl algorithm based on the new implementation using a single δ to correct the reliabilities. The performance of this implementation is compared to the original case where different δ values are used to correct the reliabilities matching the participating U and R indices in each syndrome check. A (15, 7) RS code is used in this simulation, with the symbols being transmitted through an Additive White Gaussian Noise (AWGN) channel using a 16QAM modulation scheme. A value of δ = 0.001 is used with both PTAbl implementations. Results for this simulation are shown in Fig. 1.

From Fig. 1 it can be seen that the single δ implementation performs better than the original implementation that requires different δ values for correcting the U and R indices. This result supports the hypothesis that, to obtain optimum performance, a single δ value is required to correct the reliabilities due to the sparse nature of the binary H matrix.

A further investigation of the performance of the PTAbl based on the value of δ is conducted. A similar test is done in [11] for the symbol level PTA. This is done to study the effect that the size of δ has in the case of the sparse binary H matrix. A (15, 7) RS code is again transmitted through an AWGN channel using a 16QAM modulation scheme. These results are seen in Fig. 2.

[Fig. 1: Performance comparison of the PTAbl based on the different δ implementations. SER versus Es/No (dB) for the PTAbl with different δ values (δ = 0.001) and with a single δ = 0.001.]

[Fig. 2: Performance comparison of the PTAbl based on the value of δ used. SER versus Es/No (dB) for the PTAbl with single δ values of 0.001, 0.0025, 0.005, 0.0075, 0.01, 0.025, 0.05 and 0.075.]

[Fig. 3: Performance of the PTAbl compared to the PTA and the ABP for a half rate code. SER versus Es/No (dB) for the ABP with α = 0.5, 0.12 and 0.05, the symbol level PTA with δ = 0.001, and the PTAbl with a single δ = 0.01.]

Fig. 2 shows that values of δ less than 0.01 all yield a very similar performance. This is a different result from the symbol level implementation in [11], where smaller values of δ outperform the larger values. The reason why larger values of δ, in the bit level case, work as well as the optimum value of 0.001 shown in [11] is once again the sparse structure of the binary H matrix. For the symbol level PTA, all the indices matching R participate in every check, due to the completely dense structure of the parity submatrix. This means that the R indices are corrected, regardless of whether they are right or wrong, depending on the value of the syndrome check for every row in every iteration. Therefore, a smaller value of δ is required for the correction step to give a more accurate result. This is not the case for a binary H matrix, as its sparse structure ensures that not all the indices matching R take part in every syndrome check calculation. This means that only the participating R indices are corrected during each syndrome check. Thus a larger value of δ, not exceeding 0.01, can be used for the PTAbl. This proves to be advantageous as larger values of δ enable the PTA to decode with fewer iterations [11].
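This argument can be checked directly on the matrices from the earlier sketches. The small helper below, which assumes that notation, counts how many R-indexed positions actually carry a 1 in each check row: every R index participates in the dense symbol level case, while only a handful do in the sparse binary case.

```python
import numpy as np

def mean_participating_R(H_t, R):
    """Average number of R-indexed positions with a nonzero entry per check row."""
    in_R = np.zeros(H_t.shape[1], dtype=bool)
    in_R[R] = True
    return float((H_t[:, in_R] != 0).sum(axis=1).mean())
```

A smaller count per check means each R reliability is adjusted less often per iteration, which is consistent with a larger single δ remaining stable for the PTAbl.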
B. Performance Analysis

Half Rate codes

A performance comparison is now done for the PTA algorithm when implemented on a bit level and on a symbol level. In the same test, the PTAbl is also benchmarked against the ABP algorithm to give a fair comparison with decoders that work on a bit level. The same simulation parameters are used as in the case of Fig. 1 and Fig. 2. The PTAbl is run with a correction factor of δ = 0.01 while the symbol level PTA is implemented with a correction factor of δ = 0.001 as used in [11]. The ABP algorithm is implemented using a damping factor, α, of 0.05, 0.12 and 0.5 as done in [7]. The maximum number of iterations of the ABP algorithm is set to 50. The results of the simulation are shown in Fig. 3.

From Fig. 3, it can be seen that a significant gain is obtained when the PTA is implemented on a bit level compared to the symbol level implementation. The PTAbl algorithm outperforms the symbol level PTA by well over 4 dB at an SER of 10^−4. Working at a bit level means working with a larger H matrix than in the symbol level case. This, therefore, increases the complexity of the PTA algorithm at a bit level compared to that at a symbol level. However, the significant performance gain more than justifies the use of the PTAbl compared to the symbol level PTA whenever a tradeoff is required.

Fig. 3 also shows that the PTAbl algorithm yields a slight gain, within the range of 0.5 dB at an SER of 10^−4, when compared to the ABP algorithm with α = 0.12 and 0.05. A more significant gain of over 1.5 dB is seen in the comparison with the ABP with α = 0.5 at the same SER value.

High Rate codes

An additional simulation is done to test the performance of the PTAbl algorithm under high rate conditions. For this simulation, a (15, 11) RS code is once again transmitted using a rectangular 16QAM modulation scheme through an AWGN channel. The PTAbl is still corrected using a single value of δ = 0.01, while being benchmarked against the ABP running under the same conditions as those used for the half rate codes. The results for these simulations are presented in Fig. 4.

Fig. 4 suggests that the PTAbl algorithm yields a similar performance to that of the ABP algorithm with α = 0.12 and 0.05 for high rate RS codes. The PTAbl algorithm still only slightly outperforms the ABP with α = 0.5. This indicates that the performance gain attained by the PTAbl compared to the ABP algorithm diminishes as the rate of the code increases.

[Fig. 4: Performance of the PTAbl compared to the PTA and the ABP for a high rate code. SER versus Es/No (dB) for the ABP with α = 0.5, 0.12 and 0.05 and the PTAbl with δ = 0.01.]

V. CONCLUSION
In this paper, the bit level implementation of the PTA
algorithm is presented. The algorithm compares favorably
against the high performance ABP algorithm for both cases
of half rate and high rate codes. The bit level implementation
also significantly outperforms the symbol level PTA algorithm.
This comes at the cost of an increased complexity due to the
PTAbl working with a larger H matrix as a result of the binary
image expansion. However, the large performance gain of the
PTAbl over the symbol level PTA justifies its use in a case
where a performance-complexity tradeoff is required.
ACKNOWLEDGMENTS

The financial assistance of the National Research Foundation (NRF) of South Africa towards this research is hereby acknowledged. Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF. The financial support of the Centre for Telecommunication Access and Services (CeTAS), the University of the Witwatersrand, Johannesburg, South Africa is also acknowledged.
REFERENCES

[1] I. S. Reed and G. Solomon, "Polynomial Codes over Certain Finite Fields," J. Soc. Ind. Appl. Math., 1960.
[2] J. Massey, "Shift-register synthesis and BCH decoding," IEEE Transactions on Information Theory, vol. 15, no. 1, pp. 122–127, Jan. 1969.
[3] J. G. Proakis and M. Salehi, Digital Communications, 5th ed. McGraw-Hill Higher Education, Nov. 2007.
[4] E. R. Berlekamp, Algebraic Coding Theory. New York: McGraw-Hill, Inc., 1968.
[5] Y. Sugiyama, M. Kasahara, S. Hirasawa, and T. Namekawa, "A method for solving key equation for decoding Goppa codes," Information and Control, 1975.
[6] R. Koetter and A. Vardy, "Algebraic Soft-Decision Decoding of Reed-Solomon Codes," IEEE Transactions on Information Theory, 2003.
[7] J. Jiang and K. Narayanan, "Iterative Soft-Input Soft-Output Decoding of Reed-Solomon Codes by Adapting the Parity-Check Matrix," IEEE Transactions on Information Theory, 2006.
[8] A. Vardy and Y. Be'ery, "Bit-level soft-decision decoding of Reed-Solomon codes," IEEE Transactions on Communications, vol. 39, no. 3, pp. 440–444, Mar. 1991.
[9] O. Ur-Rehman and N. Zivic, "Soft decision iterative error and erasure decoder for Reed-Solomon codes," IET Communications, vol. 8, no. 16, pp. 2863–2870, 2014.
[10] D. J. C. MacKay and R. M. Neal, "Near Shannon limit performance of low density parity check codes," Electronics Letters, 1996.
[11] O. O. Ogundile, Y. O. Genga and D. J. J. Versfeld, "Symbol level iterative soft decision decoder for Reed-Solomon codes based on parity-check equations," Electronics Letters, vol. 51, no. 17, pp. 1332–1333, Aug. 2015.
[12] D. J. J. Versfeld, J. N. Ridley, H. C. Ferreira and A. S. J. Helberg, "On Systematic Generator Matrices for Reed-Solomon Codes," IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2549–2550, June 2010.
[13] D. J. J. Versfeld, H. C. Ferreira, and A. S. J. Helberg, "Efficient packet erasure decoding by transforming the systematic generator matrix of an RS code," in 2008 IEEE Information Theory Workshop, May 2008, pp. 401–405.
