
QPSK Communications Using Short 4-ary Codes

Laurie L. Joiner
Electrical and Computer Engineering
University of Alabama in Huntsville
Huntsville, Alabama 35899
e-mail: ljoiner@ece.uah.edu

John J. Komo
Electrical and Computer Engineering
Clemson University
Clemson, South Carolina 29634
e-mail: john.komo@ces.clemson.edu
ABSTRACT

Error control coding matched to the number of symbols of quaternary phase shift keying (QPSK) is considered in this paper. Short nonbinary codes with a set of 4 symbols are used in conjunction with QPSK for improved error performance. These 4-ary symbols match the number of phases used in QPSK communication. The improved performance relative to uncoded QPSK is demonstrated in terms of equivalent bit error probability. Hard and soft decision decoding are considered.

INTRODUCTION

Binary codes have been used extensively for binary signaling, and nonbinary codes have been used for M-ary orthogonal signaling, which yields a uniform discrete symmetric channel. Nonbinary codes have also been used with groupings of binary signals. Here a match of code symbols and channel symbols is used with quaternary phase shift keying (QPSK), which does not yield a uniform discrete symmetric channel. The objective is to further separate the codewords using the nonuniform separation of the QPSK signals.

Let the four QPSK signals, s_0 with phase π/4, s_1 with phase 3π/4, s_2 with phase 5π/4, and s_3 with phase 7π/4, correspond to the four code symbols 0, 1, β², and β, respectively. With these code symbols, which are elements of GF(4) (the Galois field of 4 elements), represented as the corresponding two-dimensional vectors 00, 10, 11, and 01, a natural Gray coding exists for the bits of the QPSK signals. For an AWGN (additive white Gaussian noise) channel, both the inphase and quadrature portions of the processed QPSK signal have an additive Gaussian noise component. Using the phases given, the decision boundaries for the signals s_0, s_1, s_2, and s_3 are the quadrants 1, 2, 3, and 4, respectively.
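As a check on the mapping just described, the short sketch below (an illustration, not taken from the paper) writes out the assumed bit-pair labels and phases and verifies that signals in adjacent quadrants differ in exactly one bit, i.e., that the labeling is indeed Gray.

# GF(4) code symbols mapped to QPSK signals as described above:
# symbol -> (bit pair, phase), with bit pairs 00, 10, 11, 01 labeling
# s0, s1, s2, s3 at phases pi/4, 3*pi/4, 5*pi/4, 7*pi/4.
from math import pi

mapping = {
    "0":  ((0, 0), 1 * pi / 4),   # s0
    "1":  ((1, 0), 3 * pi / 4),   # s1
    "b2": ((1, 1), 5 * pi / 4),   # s2, "b2" denotes beta^2
    "b":  ((0, 1), 7 * pi / 4),   # s3, "b" denotes beta
}

# Gray check: going around the circle, consecutive labels differ in one bit.
order = ["0", "1", "b2", "b"]     # s0, s1, s2, s3
for x, y in zip(order, order[1:] + order[:1]):
    bits_x, bits_y = mapping[x][0], mapping[y][0]
    assert sum(a != b for a, b in zip(bits_x, bits_y)) == 1
print("Gray labeling verified")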

The probability of error in one component (inphase or quadrature) for QPSK is given as

Q(\sqrt{E_s/N_0})   (1)

where E_s is the energy per symbol and N_0/2 is the two-sided noise spectral density. Now for an (n,k) 4-ary code, the probability of deciding s_1 given s_0 was sent is the probability of the inphase component being in error and the quadrature component being correct. This can be expressed as

P_1 = P(decide s_1 | s_0) = P(decide s_3 | s_0) = Q(\sqrt{E_s/N_0}) [1 - Q(\sqrt{E_s/N_0})]   (2)

where E_s = mE_b, with m the number of bits per symbol (m = 2 for QPSK), and E_b is the energy per bit. In a similar manner, the probability of deciding s_2 given s_0 was sent is the probability of both the inphase and quadrature components being in error and is expressed as

P_2 = P(decide s_2 | s_0) = Q^2(\sqrt{E_s/N_0}).   (3)

Finally, the probability of deciding s_0 given s_0 was sent is the probability of both the inphase and quadrature components being correct and is given as

P_0 = P(decide s_0 | s_0) = 1 - 2P_1 - P_2 = [1 - Q(\sqrt{E_s/N_0})]^2.   (4)
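These transition probabilities are easy to tabulate numerically. The sketch below is a minimal illustration (the function names and the 6 dB example value are mine, not the paper's); it evaluates (1)-(4) for a given E_s/N_0 and checks that the probabilities sum to one.

from math import erfc, sqrt

def qfunc(x):
    # Gaussian tail probability Q(x).
    return 0.5 * erfc(x / sqrt(2.0))

def qpsk_transition_probs(es_over_n0):
    # Equations (1)-(4): per-symbol decision probabilities given s0 was sent,
    # with es_over_n0 the linear (not dB) ratio E_s/N_0.
    q = qfunc(sqrt(es_over_n0))      # component error probability, (1)
    p1 = q * (1.0 - q)               # adjacent-phase decision s1 (or s3), (2)
    p2 = q * q                       # opposite-phase decision s2, (3)
    p0 = (1.0 - q) ** 2              # correct decision s0, (4)
    return p0, p1, p2

# Example: E_b/N_0 = 6 dB and E_s = 2*E_b for QPSK.
eb_over_n0 = 10 ** (6.0 / 10.0)
p0, p1, p2 = qpsk_transition_probs(2.0 * eb_over_n0)
print(p0, p1, p2, p0 + 2 * p1 + p2)  # the last value should equal 1.0
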
4-ARY RS CODES

For a (4,2) extended Reed-Solomon (RS) code a generator polynomial is given by g(x) = x + β and the corresponding generator matrix by
(5)
The weight distribution for this (4,2) code is A_0 = 1, A_3 = 12, and A_4 = 3, where A_i denotes the number of codewords of weight i. Thus, the (4,2) RS code has minimum Hamming distance d_min = 3 and is a single error correcting code. In addition to the weight distribution, the subweight distribution, or distribution of code symbols for each weight of codeword, is important for QPSK signaling. Now, let A_ij be the number of codewords with weight i and jth set of symbols. The subweight distribution of the (4,2) RS code is given as A_31 = 12 codewords with nonzero code symbols 1ββ², A_41 = 2 codewords of 1111 and ββββ, and A_42 = 1 of β²β²β²β².
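These distributions can be confirmed by brute-force enumeration. The sketch below is an illustration only: the paper's generator matrix (5) is not reproduced here, so the matrix used is an assumed one built from g(x) = x + β (rows g(x) and x·g(x), each extended by an overall parity symbol), with GF(4) encoded as {0, 1, 2, 3}, where 2 stands for β and 3 for β².

from collections import Counter

# GF(4) arithmetic: addition is XOR; multiplication follows from beta^3 = 1.
B, B2 = 2, 3
MUL = {(0, x): 0 for x in range(4)}
MUL.update({(x, 0): 0 for x in range(4)})
for x in range(1, 4):
    MUL[(1, x)] = MUL[(x, 1)] = x
MUL[(B, B)] = B2                    # beta * beta = beta^2
MUL[(B, B2)] = MUL[(B2, B)] = 1     # beta * beta^2 = beta^3 = 1
MUL[(B2, B2)] = B                   # beta^2 * beta^2 = beta^4 = beta

def gf_add(a, b): return a ^ b
def gf_mul(a, b): return MUL[(a, b)]

# Assumed (4,2) extended RS generator matrix consistent with g(x) = x + beta.
G = [[B, 1, 0, B2],
     [0, B, 1, B2]]

weights, subweights = Counter(), Counter()
for a in range(4):
    for b in range(4):
        cw = [gf_add(gf_mul(a, G[0][i]), gf_mul(b, G[1][i])) for i in range(4)]
        nonzero = sorted(s for s in cw if s != 0)
        weights[len(nonzero)] += 1
        if nonzero:
            subweights[tuple(nonzero)] += 1

print(dict(weights))     # expect {0: 1, 3: 12, 4: 3}
print(dict(subweights))  # expect 12 codewords with symbols {1, beta, beta^2}
                         # plus the three all-same-symbol codewords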

Since this (4,2) code is a linear code, the probability of error can be obtained by assuming the all-zero codeword was sent. Similar to [1], the probability of decoder error for a bounded distance decoder is

P(E) = \sum_{j=d_min}^{n} A_j \sum_{\ell=0}^{t} P_j^{(\ell)}   (6)

where t is the number of symbol errors corrected by the code and P_j^{(\ell)} is the probability that a received word is exactly Hamming distance \ell from a weight-j codeword. Let B_i be the total Hamming weight of the information bits associated with the codewords of weight i and, correspondingly, B_ij be the total weight of the information bits associated with the codewords of weight i and jth set of symbols. The coded bit error probability can be expressed as

P_bc = (1/mk) \sum_{j=d_min}^{n} B_j \sum_{\ell=0}^{t} P_j^{(\ell)}   (7)

where mk equals the number of information bits per codeword. For this (4,2) code, (7) can be expressed as
P_bc = [B_31 {P[d(r, c(wt 3)) = 0] + P[d(r, c(wt 3)) = 1]}
        + B_41 {P[d(r, (1111)) = 0] + P[d(r, (1111)) = 1]}
        + B_42 {P[d(r, (β²β²β²β²)) = 0] + P[d(r, (β²β²β²β²)) = 1]}] / 4   (8)

where r is the received word, d(·,·) denotes Hamming distance, and c(wt i) is a codeword of weight i. Since B_31 = 24, B_41 = 4, and B_42 = 4 for this (4,2) code, and writing \bar{P}_i = 1 - P_i, (8) becomes

P_bc = 6(P_1^2 P_2 P_0 + 2 P_1 P_2 P_0 \bar{P}_1 + P_1^2 P_0 \bar{P}_2 + P_1^2 P_2 \bar{P}_0)
       + (P_1^4 + 4 P_1^3 \bar{P}_1) + (P_2^4 + 4 P_2^3 \bar{P}_2).   (9)

Also, from (2) and (3) it can be seen that P_2 is on the order of P_1^2, and the term 6 P_1^2 P_0 \bar{P}_2 dominates the calculation of P_bc. The corresponding uncoded probability of bit error is given as

P_bu = P_1 + P_2

which reduces to

P_bu = Q(\sqrt{2E_b/N_0}).   (10)

The bit error probability for hard decision decoding of the (4,2) code, along with the uncoded bit error probability, is shown in Figure 1. As can be seen, the coding gain is negligible, and it appears only above a signal-to-noise ratio of about 6 dB.
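The hard decision comparison of Figure 1 can be sketched numerically from (9) and (10). The code below is an illustration, not the authors' program; it assumes the coded transition probabilities of (2)-(4) are evaluated at the reduced per-symbol energy 2RE_b (so that the energy per information bit stays at E_b), consistent with the rate dependence of P_1 noted below for the (5,3) code.

from math import erfc, sqrt

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

def transition_probs(es_over_n0):
    # P0, P1, P2 of (2)-(4) for a given E_s/N_0 (linear ratio).
    q = qfunc(sqrt(es_over_n0))
    return (1 - q) ** 2, q * (1 - q), q * q

def pbc_42(eb_over_n0, rate=0.5):
    # Coded bit error probability of the (4,2) code, equation (9).
    p0, p1, p2 = transition_probs(2.0 * rate * eb_over_n0)
    q0, q1, q2 = 1 - p0, 1 - p1, 1 - p2          # the barred probabilities
    return (6 * (p1**2 * p2 * p0 + 2 * p1 * p2 * p0 * q1
                 + p1**2 * p0 * q2 + p1**2 * p2 * q0)
            + (p1**4 + 4 * p1**3 * q1)
            + (p2**4 + 4 * p2**3 * q2))

def pbu(eb_over_n0):
    # Uncoded QPSK bit error probability, equation (10).
    return qfunc(sqrt(2.0 * eb_over_n0))

for snr_db in range(0, 13, 2):
    snr = 10 ** (snr_db / 10.0)
    print(snr_db, "uncoded:", pbu(snr), "(4,2) hard:", pbc_42(snr))
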
Consider another short 4-ary code with a larger code rate. The (5,3) doubly extended RS code will now be analyzed. A generator matrix for this (5,3) code is given as

(11)

which has weight distribution A_0 = 1, A_3 = 30, A_4 = 15, and A_5 = 18. This (5,3) code is also single error correcting with d_min = 3. The subweight distribution is given as A_31 = 8 with 2 each of the nonzero code symbols 111, 11β, 1ββ, and βββ; A_32 = 16 with 2 of 11β², 12 of 1ββ², and 2 of βββ²; A_33 = 4 with 2 each of 1β²β² and ββ²β²; A_34 = 2 with 2 of β²β²β²; A_41 = 2 with 1 each of 1111 and ββββ; A_42 = 8 with one β² among the four nonzero symbols; A_43 = 4 with two β²; A_44 = 1 of β²β²β²β²; and A_51 = 4, A_52 = 2, A_53 = 8, and A_54 = 4 weight-5 codewords containing zero, one, two, and three β² symbols, respectively.

For this (5,3) code, (7) can be expressed in terms of the subweight distribution in the same manner as (8) and expanded to give

P_bc = 2(P_1^3 P_0^2 + 3 P_1^2 P_0^2 \bar{P}_1 + 2 P_1^3 P_0 \bar{P}_0)
       + 7(P_1^2 P_2 P_0^2 + 2 P_1 P_2 P_0^2 \bar{P}_1 + P_1^2 P_0^2 \bar{P}_2 + 2 P_1^2 P_2 P_0 \bar{P}_0)
       + 2(P_1 P_2^2 P_0^2 + P_2^2 P_0^2 \bar{P}_1 + 2 P_1 P_2 P_0^2 \bar{P}_2 + 2 P_1 P_2^2 P_0 \bar{P}_0)
       + (P_2^3 P_0^2 + 3 P_2^2 P_0^2 \bar{P}_2 + 2 P_2^3 P_0 \bar{P}_0)
       + (P_1^4 P_0 + 4 P_1^3 P_0 \bar{P}_1 + P_1^4 \bar{P}_0)
       + 4(P_1^3 P_2 P_0 + 3 P_1^2 P_2 P_0 \bar{P}_1 + P_1^3 P_0 \bar{P}_2 + P_1^3 P_2 \bar{P}_0)
       + 2(P_1^2 P_2^2 P_0 + 2 P_1 P_2^2 P_0 \bar{P}_1 + 2 P_1^2 P_2 P_0 \bar{P}_2 + P_1^2 P_2^2 \bar{P}_0)
       + (P_2^4 P_0 + 4 P_2^3 P_0 \bar{P}_2 + P_2^4 \bar{P}_0)
       + 2(P_1^5 + 5 P_1^4 \bar{P}_1)
       + (P_1^4 P_2 + 4 P_1^3 P_2 \bar{P}_1 + P_1^4 \bar{P}_2)
       + 6(P_1^3 P_2^2 + 3 P_1^2 P_2^2 \bar{P}_1 + 2 P_1^3 P_2 \bar{P}_2)
       + 3(P_1^2 P_2^3 + 2 P_1 P_2^3 \bar{P}_1 + 3 P_1^2 P_2^2 \bar{P}_2).   (13)

The terms that dominate (13) are 6 P_1^2 P_0^2 \bar{P}_1 + 7 P_1^2 P_0^2 \bar{P}_2, which again vary as P_1^2, as did the dominant term for the (4,2) code. The (5,3) code should have smaller P_bc than the (4,2) code since the rate R = k/n is larger, which corresponds to a smaller P_1. This is also shown in Figure 1 and illustrates the hard decision coding gain of the (5,3) code. Coding gain is achieved with the (5,3) code for E_b/N_0 above 6 dB and is significantly larger than the gain of the (4,2) code.
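Equation (13) is tedious to expand by hand but compact to evaluate by subweight class. The sketch below is an illustration under the same reduced-energy assumption as the previous sketch; each entry of the class list gives the multiplier B_ij/(mk) together with the numbers of nonzero code symbols matched with probability P_1 (symbols 1 or β), matched with probability P_2 (symbol β²), and of zero positions.

from math import erfc, sqrt

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

def class_term(coeff, n1, n2, n0, p1, p2, p0):
    # coeff * (P[received word equals the class codeword] + P[it differs in
    # exactly one position]), given the all-zero codeword was sent. n1, n2, n0
    # count positions whose match probabilities are p1, p2, p0 respectively.
    match = p1**n1 * p2**n2 * p0**n0
    one_off = 0.0
    if n1:
        one_off += n1 * (1 - p1) * p1**(n1 - 1) * p2**n2 * p0**n0
    if n2:
        one_off += n2 * (1 - p2) * p1**n1 * p2**(n2 - 1) * p0**n0
    if n0:
        one_off += n0 * (1 - p0) * p1**n1 * p2**n2 * p0**(n0 - 1)
    return coeff * (match + one_off)

# (5,3) subweight classes: (B_ij/6, #P1-symbols, #P2-symbols, #zeros).
classes_53 = [
    (2, 3, 0, 2), (7, 2, 1, 2), (2, 1, 2, 2), (1, 0, 3, 2),   # weight 3
    (1, 4, 0, 1), (4, 3, 1, 1), (2, 2, 2, 1), (1, 0, 4, 1),   # weight 4
    (2, 5, 0, 0), (1, 4, 1, 0), (6, 3, 2, 0), (3, 2, 3, 0),   # weight 5
]

def pbc_53(eb_over_n0, rate=0.6):
    q = qfunc(sqrt(2.0 * rate * eb_over_n0))
    p1, p2, p0 = q * (1 - q), q * q, (1 - q) ** 2
    return sum(class_term(c, n1, n2, n0, p1, p2, p0)
               for c, n1, n2, n0 in classes_53)

print(pbc_53(10 ** (8.0 / 10.0)))   # equation (13) at E_b/N_0 = 8 dB
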
Further coding gain can be achieved by considering soft decision decoding as opposed to hard decision decoding. For a q-ary (n,k) code the soft decision word error probability can be expressed as [2]

P[E] = P[\bigcup_{j=1}^{q^k - 1} (r closer to c_j than to c_0) | c_0 = 0]
     \le \sum_{j=1}^{q^k - 1} P(r closer to c_j than to c_0 | c_0 = 0).   (14)

The four QPSK signals can be represented as

s_i(t) = \sqrt{2} A cos(ω_0 t + iπ/2 + π/4),  0 ≤ t ≤ T,  i = 0, 1, 2, 3.   (15)

Using standard correlators for the inphase and quadrature components, consisting of multiplication by cos(ω_0 t) and −sin(ω_0 t), respectively, followed by integration, the correlator outputs r_I and r_Q for the AWGN channel are Gaussian random variables with mean ±AT/2 and variance N_0 T/4. The soft decision or maximum likelihood decision rule determines the codeword that is closest to the received word in Euclidean distance. From the exponents of the Gaussian density functions, the probability of r being closer to c_j than to c_0 is expressed as

P[r closer to c_j than to c_0 | c_0 = 0] = P[\sum_{i=0}^{n-1} (r_Ii (s_I0i − s_Iji) + r_Qi (s_Q0i − s_Qji)) < 0].   (16)

Now, both s_I0i − s_Iji and s_Q0i − s_Qji equal 0 when the corresponding components are equal, and both equal AT when the corresponding components are not equal. Using (16),

P[r closer to c_j than to c_0 | c_0 = 0] = P[r^* = \sum_{i: s_0i ≠ s_ji} r_i' < 0]   (17)

where r_i', s_0i, and s_ji correspond to both inphase and quadrature components. Let d_i^2 be the number of components in which the vector signals s_0 and s_j of weight i differ. Thus, d_i^2 represents the normalized Euclidean distance between s_0 and s_j and equals the number of terms in the summation of (17). Furthermore, r^* is Gaussian with mean AT d_i^2/2 and variance N_0 T d_i^2/4, and (17) can be expressed as

P[r closer to c_j than to c_0 | c_0 = 0] = Q(\sqrt{A^2 T d_i^2 / N_0}) = Q(\sqrt{R E_s d_i^2 / N_0})   (18)

since the coded energy per symbol is E_sc = R E_s = A^2 T. The soft decision word error probability of (14) is then upper bounded as

P(E) \le \sum_{i=d_min}^{n} \sum_j A_ij Q(\sqrt{R E_s d_ij^2 / N_0})   (19)

where now A_ij is the number of codewords of weight i and normalized Euclidean distance d_ij^2. The corresponding soft decision coded bit error probability is upper bounded as

P_bs \le \sum_{i=d_min}^{n} \sum_j (B_ij / mk) Q(\sqrt{R E_s d_ij^2 / N_0})   (20)

with B_ij now the total number of nonzero information bits associated with all codewords of weight i and normalized Euclidean distance d_ij^2.

For the (4,2) code, with i the Hamming weight, the nonzero weight distribution and normalized Euclidean distance are given as

  i   A_ij   B_ij   d_ij^2
  3    12     24      4
  4     2      4      4
  4     1      4      8

Since q = 4, m = 2, E_s = 2E_b, and R = 1/2, the soft decision coded bit error probability is upper bounded as

P_bs \le 7 Q(\sqrt{4E_b/N_0}) + Q(\sqrt{8E_b/N_0}).   (21)

The term Q(\sqrt{4E_b/N_0}) dominates the calculation of P_bs. This coded bit error probability is shown in Figure 2 along with the uncoded bit error probability. For soft decision decoding of the (4,2) code, a gain is achieved for E_b/N_0 above 2 dB. This coding gain is much larger than the gain for hard decision decoding. Below 2 dB, soft decision decoding is actually no worse than the uncoded system; the apparent degradation is due to the looseness of the bound of (21).
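The bound (21) follows directly from (20) and the table above; a small numerical sketch (again an illustration, with my own helper names and table encoding) is:

from math import erfc, sqrt

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

def soft_union_bound(table, rate, mk, eb_over_n0):
    # Equation (20): sum over classes of (B_ij/mk) * Q(sqrt(R*E_s*d_ij^2/N0)),
    # with E_s = 2*E_b for QPSK. Each table row is (B_ij, d_ij^2).
    return sum((b_ij / mk) * qfunc(sqrt(rate * 2.0 * eb_over_n0 * d2))
               for b_ij, d2 in table)

# (4,2) extended RS code: rows (B_ij, d_ij^2) from the table above; mk = 4.
table_42 = [(24, 4), (4, 4), (4, 8)]

for snr_db in (2, 4, 6, 8):
    snr = 10 ** (snr_db / 10.0)
    print(snr_db,
          soft_union_bound(table_42, rate=0.5, mk=4, eb_over_n0=snr),  # (21)
          qfunc(sqrt(2.0 * snr)))                                      # uncoded
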
For the (5,3) code the nonzero weight distribution and normalized Euclidean distance are given as

  i   A_ij   B_ij   d_ij^2
  3     8     12      3
  3    16     42      4
  3     4     12      5
  3     2      6      6
  4     2      6      4
  4     8     24      5
  4     4     12      6
  4     1      6      8
  5     4     12      5
  5     2      6      6
  5     8     36      7
  5     4     18      8

Since q = 4, m = 2, E_s = 2E_b, and R = 3/5, the soft decision coded bit error probability is upper bounded as

P_bs \le 2 Q(\sqrt{18E_b/(5N_0)}) + 8 Q(\sqrt{24E_b/(5N_0)}) + 8 Q(\sqrt{30E_b/(5N_0)})
         + 4 Q(\sqrt{36E_b/(5N_0)}) + 6 Q(\sqrt{42E_b/(5N_0)}) + 4 Q(\sqrt{48E_b/(5N_0)}).   (22)

The term Q(\sqrt{18E_b/(5N_0)}) dominates the calculation of P_bs and is larger than the dominating term in the (4,2) code. Thus, the (4,2) code has smaller P_bs than the (5,3) code for large E_b/N_0, as can be seen in Figure 2. That this is the case can also be seen from the fact that d_min^2 = 4 for the (4,2) code while d_min^2 = 3 for the (5,3) code. This increase in d_min^2 more than compensates for the rate increase from R = 1/2 for the (4,2) code to R = 3/5 for the (5,3) code. Thus, even though the (5,3) code performed better using hard decision decoding, the (4,2) code performed better for soft decision decoding for large E_b/N_0. To increase performance for soft decision decoding, the minimum normalized Euclidean distance should be maximized.
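Applying the same union bound (20) to the (5,3) table reproduces (22). The sketch below (an illustration; the helper is the same assumed one used for the (4,2) code) evaluates both bounds so that the dominant-term comparison above can be checked numerically.

from math import erfc, sqrt

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

def soft_union_bound(table, rate, mk, eb_over_n0):
    # Equation (20) with E_s = 2*E_b; table rows are (B_ij, d_ij^2).
    return sum((b_ij / mk) * qfunc(sqrt(rate * 2.0 * eb_over_n0 * d2))
               for b_ij, d2 in table)

table_42 = [(24, 4), (4, 4), (4, 8)]                  # mk = 4, R = 1/2
table_53 = [(12, 3), (42, 4), (12, 5), (6, 6),        # mk = 6, R = 3/5
            (6, 4), (24, 5), (12, 6), (6, 8),
            (12, 5), (6, 6), (36, 7), (18, 8)]

for snr_db in (2, 4, 6, 8, 10):
    snr = 10 ** (snr_db / 10.0)
    b42 = soft_union_bound(table_42, 0.5, 4, snr)     # bound (21)
    b53 = soft_union_bound(table_53, 0.6, 6, snr)     # bound (22)
    print(snr_db, b42, b53)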

CONCLUSIONS

The matching of the code symbol alphabet to the number of phases in QPSK has been demonstrated here through the use of short 4-ary codes. Small coding gains have been demonstrated for hard decision decoding of the (4,2) extended RS code and the (5,3) doubly extended RS code. Larger coding gains for soft decision decoding of these codes have also been demonstrated. While the (5,3) code performed better for hard decision decoding, the (4,2) code performed better for soft decision decoding for large E_b/N_0.

This research was supported by the Army Research Office under grant number DAAH04-95-1-0247.

REFERENCES

[1] S. B. Wicker, Error Control Systems for Digital Communications and Storage, Prentice-Hall, Englewood Cliffs, NJ, 1995.

[2] W. I. Reid III, L. L. Joiner, and J. J. Komo, "Soft Decision Decoding of BCH Codes Using Error Magnitudes," Proc. 1997 International Symposium on Information Theory, p. 303.


Figure 1. Probability of coded bit error using hard decisions.



Figure 2. Probability of coded bit error using soft decisions.
