
The Chernoff Bounding Parameter for a Multilevel

Modulation Scheme Using PSK-signaling 1

Karin Engdahl and Kamil Sh. Zigangirov


Department of Information Technology
Telecommunication Theory Group
Lund University, Box 118, S-221 00 Lund, Sweden
Phone +46 46 222 3450, Fax +46 46 222 4714, e-mail: karin@it.lth.se

Abstract: We analyze a PSK version of the multilevel modulation scheme with multi-
stage decoding introduced by Imai and Hirakawa [5], when transmission takes place
over a Gaussian channel. Each stage in the decoder uses a suboptimal metric. The
results of our analysis can be used not only for the case of multilevel PSK, but for
any binary modulation using PSK-signaling and multiple representation of bits. We
argue that the conventional approximation of the Chernoff bounding parameter Z
(used as the argument of the code generating function when calculating error probabil-
ities) is not adequate, and we give new values of Z that tighten the error bounds
without causing them to lose their validity. The capacity of this scheme is also
calculated, and the result is that the use of the suboptimal metric in multi-
stage decoding causes very little degradation in capacity compared to when the optimal
metric is used in each stage of the multistage decoder.

Keywords: Multilevel modulation, multistage decoding, PSK-signaling, set partitioning.

1 This work was supported in part by the Swedish Research Council for Engineering Sciences under
Grant 95-164.

I. INTRODUCTION
The principle of trellis-coded modulation was described by Ungerboeck in 1982
[7], and the concept of multilevel modulation was introduced by Imai and Hirakawa
[5]. The multilevel scheme enables the use of a multistage decoder [5], which per-
forms almost as well as an optimum joint maximum likelihood sequence estimator
over all levels [4] but is much less complex.
In this paper we will study a multilevel modulation scheme using PSK-signaling,
transmitting over a Gaussian channel, and employing a multistage decoder in the
receiver. To further reduce complexity we use a suboptimal metric in each decoding
stage. This is shown to cause only a very slight performance loss in terms of capacity.
In Section II we present the PSK multilevel modulation scheme. The channel
characteristics and the decoding procedure are also given. Upper bounds on block
and burst error probabilities (for block and convolutional codes respectively) are
given in Section III. The upper bounding of these error probabilities reduces to the
calculation of code generating functions with the Chernoff bounding parameter Z as
argument. This parameter is a function of the intra-set squared Euclidean distance
δ_k² on level k, the noise variance σ², and the number of signal points in the PSK
signal set on the corresponding level of set partitioning. In [2] and [3], Z was
calculated for the different levels of a multilevel modulation scheme using
QAM-signaling. For 2-PSK transmission (the last level of the multilevel PSK scheme)
Z = exp{-δ_k²/(8σ²)}. This value is a lower bound on Z for the other levels of the
scheme, and it is often used as an approximation of Z, although the approximation
causes the bounds on error probability to lose their validity. The inaccuracy
increases with the ratio δ_k²/σ². An accurate upper bound on Z for any level of the
PSK scheme is Z = 2 exp{-δ_k²/(8σ²)}. This is a consequence of the "nearest neighbor
error events principle" [4], [6], and of the fact that each signal point of an M-PSK
set (M >= 4) has 2 nearest neighbors. This value of Z yields, especially for small
values of δ_k²/σ², loose bounds on the error probabilities.
In Section IV we calculate a better upper bound on Z that tightens the error
bounds without causing them to lose their rigor. This is done using the Chernoff
bounding method, which gives exponentially tight bounds [10]. Finally, in Section
V we use the distribution of the metric described in Section IV to calculate the
capacity of this multilevel modulation scheme using PSK-signaling, with this type
of suboptimal decoder, and compare to the capacity for the same scheme using
optimal metric in each decoding stage.

II. SYSTEM DESCRIPTION


The transmitter and receiver described in Figure 1 are closely related to [5] and
form the PSK version of the scheme presented in [2]. A binary information sequence u
is partitioned into K binary subsequences u^(1), u^(2), ..., u^(K), and each subsequence
is encoded by an independent binary component code C_k (block or convolutional).
A set of K bits, {v^(1)(n), v^(2)(n), ..., v^(K)(n)}, one bit from each of the code
sequences v^(1), v^(2), ..., v^(K), is synchronously mapped onto one of the 2^K-PSK
signal points, s(n). The mapping is illustrated in Figure 2 for K = 3. The map-
ping procedure implies that on each level, except the last one, we have multiple
representations of the transmitted bits.
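For concreteness, the bit-to-point mapping can be sketched as below. This is our own illustration in Python; the particular labeling (point index m = v1 + 2 v2 + 4 v3, so that v^(1) selects the coarsest subset) is an assumption consistent with the set partitioning of Figure 2, not necessarily the authors' exact convention.

```python
import cmath
import itertools
import math

def psk_point(bits, rho=1.0):
    """Map K partition bits (v1, ..., vK) onto a 2^K-PSK point.

    Assumed labeling (hypothetical): m = v1 + 2*v2 + 4*v3 + ..., so that
    flipping v1 moves the point into the interleaved subset, while the
    higher-numbered bits select points within progressively finer subsets.
    """
    K = len(bits)
    m = sum(b << k for k, b in enumerate(bits))
    theta = 2 * math.pi * m / 2 ** K
    return rho * cmath.exp(1j * theta)

# The subset with v1 = 0 in an 8-PSK set (K = 3) is a QPSK set: its
# minimal intra-set distance 2*rho*sin(pi/4) exceeds the 8-PSK minimum.
points_v1_0 = [psk_point((0, b2, b3)) for b2 in (0, 1) for b3 in (0, 1)]
```

This illustrates the multiple-representation property: each value of v^(1) is represented by four different 8-PSK points.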

When passed through the discrete memoryless Gaussian channel, the complex
sequence s = s(1), s(2), ..., s(n), ... (where s(n) = a(n) + jb(n) is the chan-
nel input at the nth moment of time, a(n) = sin θ(n), b(n) = cos θ(n), and
θ(n) ∈ {2πm/2^K : m = 0, 1, ..., 2^K - 1}) is corrupted by the error sequence e =
e(1), e(2), ..., e(n), ... such that the complex received sequence is r = s + e. Here
e(n) = e^(I)(n) + je^(Q)(n), where e^(I)(n) and e^(Q)(n) are independent Gaussian
random variables with zero mean and variance σ².
The multistage decoder consists of a set of suboptimal decoders matched to the
codes used on the corresponding levels of encoding. Each decoding stage consists of
calculation of distances (metrics) to the received sequence r from all possible code
words on the corresponding level of set partitioning. The side information from
the previous decoding stages determines, according to the set partitioning structure
(illustrated in Figure 2), the signal set upon which the metrics are calculated.
When the decoder calculates the metrics, it uses the following suboptimal prin-
ciple. Let us suppose that a binary block code (the extension to convolutional
codes is straightforward) of length N is used on the kth level of the encoding, and
that the decoding in the previous (k - 1) decoding stages determines the subsets
S^(k-1)(1), S^(k-1)(2), ..., S^(k-1)(N) to which the transmitted symbols of the binary
codeword v^(k) = v^(k)(1), v^(k)(2), ..., v^(k)(N) belong.
Let S_0^(k-1)(n) and S_1^(k-1)(n) be the subsets of S^(k-1)(n) corresponding to transmis-
sion of v^(k)(n) = 0 and v^(k)(n) = 1 respectively. Let s^(k)(n) ∈ S^(k-1)(n), n =
1, 2, ..., N, and s^(k) = s^(k)(1), s^(k)(2), ..., s^(k)(N). Finally, let
S^(k-1)_{v^(k)} = S^(k-1)_{v^(k)(1)}(1), S^(k-1)_{v^(k)(2)}(2), ..., S^(k-1)_{v^(k)(N)}(N)
be the sequence of subsets corresponding to transmission
of the codeword v^(k). Then the distance (metric) between the received sequence
r = r(1), r(2), ..., r(N) and the codeword v^(k) is determined as

    μ(r, v^(k)) = min_{s^(k) ∈ S^(k-1)_{v^(k)}} d_E(r, s^(k)),    (1)

where d_E(x, y) is the squared Euclidean distance between the N-dimensional vectors
x and y. The decoding consists of choosing the codeword v^(k) for which the metric
μ(r, v^(k)) above is minimal.
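A minimal sketch of the per-block metric (1); the function name and container types are ours, and we assume the per-position subsets have already been fixed by the decisions of the previous decoding stages.

```python
def submetric(r, subsets):
    """Suboptimal metric of Eq. (1): for each position n, take the squared
    Euclidean distance from r[n] to the nearest point of the subset selected
    by the hypothesized code bit, then sum over the block.

    r       -- list of received complex samples, length N
    subsets -- list of length N; subsets[n] contains the complex signal
               points consistent with the hypothesized bit v(n) and with
               the side information from earlier stages
    """
    return sum(min(abs(rn - s) ** 2 for s in S) for rn, S in zip(r, subsets))
```

The decoder evaluates this quantity for every candidate codeword on the level and picks the codeword with the smallest value.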
III. ERROR PERFORMANCE
The performance of a multilevel coded modulation system, which employs a
multistage decoder, is commonly estimated by average bit error probability of each
component code or by block error probability and burst error probability for block
and convolutional coding respectively [1], [4].
When a linear block code is used on the kth signaling level, an upper bound on
the probability of decoding error P(ε^(k)) for that level is [2], [9]

    P(ε^(k)) < G^(k)(D) |_{D=Z^(k)},    (2)

and when a convolutional code is used, the upper bounds on the burst error proba-
bility P(ε^(k)) and the bit error probability P_b^(k) are [8], [11]

    P(ε^(k)) < T^(k)(D) |_{D=Z^(k)}    (3)

and

    P_b^(k) < (∂/∂I) T^(k)(D, L, I) |_{D=Z^(k), L=I=1},    (4)

where G^(k)(D) and T^(k)(D) are the generating functions of the codes, and T^(k)(D, L, I)
is the refined path enumerator [11].
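As an illustration of how a bound such as (3) is evaluated, the sketch below substitutes D = Z^(k) into a closed-form transfer function. The code choice is our own assumption for the example: T(D) = D^5/(1 - 2D) is the well-known transfer function of the memory-2, rate-1/2 convolutional code, not necessarily a code used in the paper.

```python
def burst_error_bound(Z):
    """Evaluate the union bound (3), P < T(D)|_{D=Z}, for the standard
    memory-2, rate-1/2 convolutional code with T(D) = D^5 / (1 - 2D).
    The geometric series behind T(D) only converges for D < 1/2."""
    assert 0 <= Z < 0.5, "T(D) = D^5/(1-2D) requires D < 1/2"
    return Z ** 5 / (1 - 2 * Z)
```

With Z = 0.1 the bound evaluates to 1.25e-5; a smaller Z^(k) directly translates into a tighter bound, which is why the rest of the paper concentrates on computing the smallest valid Z^(k).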
Hence, the commonly used upper bounds for block, burst and bit error probabili-
ties in the block and convolutional coding cases are all functions of the parameter
Z^(k). Calculation of the minimal possible Z^(k) yields tight upper bounds on the
decoding error probability at each decoding level. As we mentioned earlier,

    exp{-δ_k²/(8σ²)} <= Z^(k) < 2 exp{-δ_k²/(8σ²)}    (5)

in the PSK case. The bounds (2), (3) and (4) are often abused through the use of
the lower bound exp{-δ_k²/(8σ²)} as an approximation to Z^(k) [1]. Thereby the upper
bounds (2), (3) and (4) lose their validity. An example of this is given in [4],
where it can be seen that the actual bit error probability exceeds the bound (4)
when Z^(k) = exp{-δ_k²/(8σ²)} is used. This is equivalent to using the minimum squared
Euclidean distance between codewords as a design rule, which is not appropriate
when multilevel systems are considered. On the other hand, the estimate Z^(k) =
2 exp{-δ_k²/(8σ²)} sometimes gives bounds that are loose. In the following section we
calculate the values Z^(k) that give exponentially tight union bounds, so that (2), (3)
and (4) keep their validity. This is done by using the Chernoff bounding method.
The upper bounds (2), (3) and (4) ignore the fact that errors can propagate in the
multistage decoder, and hence for multilevel modulation they are only rigorous for
the first stage of the decoder. The problem of error propagation was addressed in
[6], and the bounds presented there will also be improved if our value of Z^(k) is used
instead of 2 exp{-δ_k²/(8σ²)}. If non-multilevel binary modulation using PSK-signaling
and multiple representation of bits is considered, the bounds given here are of course
also useful.

IV. DERIVATION OF THE CHERNOFF BOUNDING PARAMETER
Without loss of generality we suppose that v_0^(k) (the all-zero codeword) is trans-
mitted. That is, at each time instant one of the signal points from the reference
set is transmitted. We suppose in addition that v_l^(k) is a codeword of Hamming
weight w_l^(k). To simplify the analysis, we also change the order of the transmitted
symbols, such that the first w_l^(k) symbols of v_l^(k) are ones. Comparing μ(r, v_0^(k)) and
μ(r, v_l^(k)) to decide in favor of one of the codewords, using the Chernoff bounding
technique and the union bound, yields [2]

    Z^(k) = min_{s>=0} φ^(k)(s),    (6)

where φ^(k)(s) is the generating function of the suboptimal metric

    λ_l^(k)(n) = d_E(r(n), v_0^(k)(n)) - d_E(r(n), v_l^(k)(n))    (7)

and where

    d_E(r(n), v_l^(k)(n)) = min_{s^(k)(n) ∈ S^(k-1)_{v_l^(k)(n)}(n)} d_E(r(n), s^(k)(n)).    (8)

On the last decoding level the subsets S_0^(K-1) and S_1^(K-1) consist of one point
each, and the squared Euclidean distance between the subsets is δ_K². It is well known
that in this case Z^(K) = Z_2 = exp{-δ_K²/(8σ²)}, i.e., it coincides with the lower bound
in (5). The parameter Z^(K) is a function of ρ = δ_K/(2σ), and since all the expressions
of Z^(k) in the further evaluation of the error probability on the kth level will also
be functions of ρ, we will keep this notation. Thus, without loss of generality, we
consider, on the kth level of set partitioning, the signal set with energy per
transmitted symbol ρ² and additive Gaussian noise with variance σ² equal to 1.
Numerical values of the Chernoff bounding parameter Z_2 (Z for 2-PSK) are shown
in Table 1 for a number of different ρ.
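With σ = 1 and δ_K = 2ρ, the 2-PSK column of Table 1 follows directly from Z_2 = exp{-ρ²/2}; a one-line check (function name ours):

```python
import math

def Z2(rho):
    """Chernoff parameter for 2-PSK: Z_2 = exp(-delta_K^2 / (8 sigma^2))
    with sigma = 1 and delta_K = 2*rho, i.e. Z_2 = exp(-rho^2 / 2)."""
    return math.exp(-rho ** 2 / 2)
```

For ρ = 1, 2, 3 this reproduces the tabulated values 0.6065, 0.1353 and 0.0111.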
Consider now the decoding on the kth level, when the subsets S_0^(k-1) and S_1^(k-1)
are 2^(K-k)-PSK signal sets (Figure 3). On each level, the relation between the symbol
energy ρ² and the normalized inter-set Euclidean distance δ_k/σ is

    δ_k = 2ρ sin(ψ/2),    (9)

where ψ = π/M and M = 2^(K-k). To study the distribution of λ^(k)(n), we introduce
the system of coordinates (x, y) shown in Figure 3. The 2M sectors define regions,
in each of which a received point has the same nearest point from each of the sets
S_0^(k-1) and S_1^(k-1). The conditional probability that the received point is in the
same region as in Figure 3, given that v_0^(k) was transmitted, is equal to 1/(2M).
In this region the squared Euclidean distance from the received point to the nearest
point of S_0^(k-1) is

    d_E(r(n), v_0^(k)(n) = 0) = (x(n) - ρ)² + (y(n))²    (10)

and the squared Euclidean distance from the received point to the nearest point of
S_1^(k-1) is

    d_E(r(n), v_l^(k)(n) = 1) = (x(n) - ρ cos ψ)² + (y(n) - ρ sin ψ)².    (11)

So the difference between the squared Euclidean distances becomes

    λ^(k)(n) = d_E(r(n), v_0^(k)(n) = 0) - d_E(r(n), v_l^(k)(n) = 1)
             = 2ρ(-x(n) + x(n) cos ψ + y(n) sin ψ)
             = 4ρ sin(ψ/2) (y(n) cos(ψ/2) - x(n) sin(ψ/2)).    (12)
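The last equality in (12) is a trigonometric identity (cos ψ - 1 = -2 sin²(ψ/2) and sin ψ = 2 sin(ψ/2) cos(ψ/2)); a quick numerical check with hypothetical sample values (function names ours):

```python
import math

def lam_raw(x, y, rho, psi):
    # Metric difference of (12) in raw form: 2*rho*(-x + x*cos(psi) + y*sin(psi))
    return 2 * rho * (-x + x * math.cos(psi) + y * math.sin(psi))

def lam_factored(x, y, rho, psi):
    # Same quantity after factoring out 4*rho*sin(psi/2)
    return 4 * rho * math.sin(psi / 2) * (y * math.cos(psi / 2) - x * math.sin(psi / 2))
```

The two forms agree for all (x, y), which is what makes the rotated coordinate interpretation of λ^(k) possible.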

Since the distribution of λ^(k)(n) does not depend on n, for n = 1, 2, ..., w_l^(k),
we will drop the argument n. The random variables X and Y are independent. The con-
ditional joint probability density function of (X, Y), given that v_0^(k) was transmitted
and given that the received signal is in the sector between the point in the reference
set having the smallest phase angle and the point from the opposite set with the
smallest phase angle (Figure 3), is

    f^(0)_{X,Y}(x, y) = (1/π) Σ_{n=0}^{M-1} exp{-(1/2)(x - ρ cos 2ψn)² - (1/2)(y - ρ sin 2ψn)²}.    (13)

Keeping in mind the multiple representation of v_0^(k), the explanation of this expres-
sion is as follows. First take the sum of the two-dimensional Gaussian distribu-
tions centered around each of the signal points of the reference set. All points of
the reference set are assumed to be equally likely. Then condition this sum on the
event that the received signal is in the above-mentioned sector; this gives a scale
factor of 2M times the original sum. If the received signal is in any other region,
we get formulas for the representation of λ^(k) analogous to (12) and for the density
function of (X, Y) analogous to (13). The generating function of λ^(k) is
    φ^(k)(s) = ∫_{x=0}^{∞} ∫_{y=0}^{x tan ψ} exp{s λ^(k)} f^(0)_{X,Y}(x, y) dy dx.    (14)

In polar coordinates, x = r cos θ and y = r sin θ, this becomes

    φ^(k)(s) = ∫_{θ=0}^{ψ} ∫_{r=0}^{∞} exp{s λ^(k)} f^(0)_{R,Θ}(r, θ) dr dθ,    (15)

where

    λ^(k) = 2ρr(-cos θ + cos(ψ - θ))    (16)

and

    f^(0)_{R,Θ}(r, θ) = (1/π) Σ_{n=0}^{M-1} r exp{-(1/2)(r² + ρ² - 2ρr cos(2ψn - θ))}.    (17)
Thus, since

    exp{s λ^(k)} f^(0)_{R,Θ}(r, θ)
    = (1/π) Σ_{n=0}^{M-1} r exp{-(1/2)(r² - 2r(ρ cos(2ψn - θ) + 2ρs cos(ψ - θ) - 2ρs cos θ) + ρ²)}
    = (1/π) Σ_{n=0}^{M-1} r exp{-(1/2)(ρ² - (A(θ))²)} exp{-(1/2)(r - A(θ))²},    (18)

where

    A(θ) = ρ cos(2ψn - θ) + 2ρs cos(ψ - θ) - 2ρs cos θ,    (19)

and since

    ∫_{r=0}^{∞} r exp{-(1/2)(r - A(θ))²} dr = exp{-A²(θ)/2} + √(2π) A(θ) Q(-A(θ)),    (20)

where Q(x) = (1/√(2π)) ∫_{x}^{∞} exp{-t²/2} dt, the generating function of λ^(k) becomes

    φ^(k)(s)
    = (1/π) Σ_{n=0}^{M-1} ∫_{θ=0}^{ψ} (exp{-ρ²/2} + √(2π) A(θ) Q(-A(θ)) exp{-(1/2)(ρ² - A²(θ))}) dθ
    = exp{-ρ²/2} (1 + √(2/π) Σ_{n=0}^{M-1} ∫_{θ=0}^{ψ} A(θ) Q(-A(θ)) exp{A²(θ)/2} dθ).    (21)
This completes the proof of the following theorem.

Theorem 1 For any level of suboptimal decoding of a multilevel coded PSK signal
set, the Chernoff bounding parameter has the value

    Z^(k) = Z_M = min_{s>=0} φ^(k)(s),    (22)

where

    φ^(k)(s) = exp{-ρ²/2} (1 + √(2/π) Σ_{n=0}^{M-1} ∫_{θ=0}^{ψ} A(θ) Q(-A(θ)) exp{A²(θ)/2} dθ)    (23)

and

    A(θ) = ρ cos(2ψn - θ) + 2ρs cos(ψ - θ) - 2ρs cos θ.    (24)

Numerical values of Z_M (Z for M-PSK), M = 4, 8, 16, 32, 64, are shown in Table
1 for a number of different ρ. In Table 2, Z_M is compared to its lower bound Z_2
(often used as an approximation of Z_M) for different values of δ, where δ = 2ρ sin(π/M)
is the normalized minimal Euclidean distance in the M-PSK set, shown in Figure
3. It can be seen that the lower bound is rather close to Z_M at small δ. As δ
increases, Z_M approaches its upper bound 2Z_2, in accordance with the "nearest
neighbor error events principle". In Figure 4 we show by two examples how the bounds
on error probability are improved by the use of Z_M instead of 2Z_2.
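The expression in Theorem 1 can be evaluated numerically. The sketch below is our own illustration (the function names, the trapezoidal quadrature, and the coarse grid search over s >= 0 are assumptions, not the authors' implementation). Here M denotes the number of points in each subset S_0^(k-1), ψ = π/M, and σ = 1, so the Table 1 entry Z_4 corresponds to M = 2.

```python
import math

def Q(x):
    # Gaussian tail function Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

def gen_fn(s, rho, M, steps=400):
    """phi^(k)(s) of Eq. (23); M is the number of points per subset and
    psi = pi / M. Trapezoidal integration over theta in [0, psi]."""
    psi = math.pi / M
    h = psi / steps
    total = 0.0
    for n in range(M):
        for i in range(steps + 1):
            theta = i * h
            A = (rho * math.cos(2 * psi * n - theta)
                 + 2 * rho * s * math.cos(psi - theta)
                 - 2 * rho * s * math.cos(theta))
            w = 0.5 if i in (0, steps) else 1.0
            total += w * h * A * Q(-A) * math.exp(A * A / 2)
    return math.exp(-rho ** 2 / 2) * (1 + math.sqrt(2 / math.pi) * total)

def chernoff_Z(rho, M):
    # Eq. (22): minimize over s >= 0 (coarse grid, adequate for a sketch)
    return min(gen_fn(s / 200, rho, M) for s in range(201))
```

For M = 1 (2-PSK) the minimum lies at s = 1/4, where A(θ) vanishes identically and the expression collapses to exp{-ρ²/2} = Z_2; for M = 2 the computed values stay between the bounds of (5) and track the Z_4 column of Table 1.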

V. CAPACITY OF MULTISTAGE DECODING USING SUBOPTIMAL METRIC
Analogously to [4], we consider the transmission on each level of modulation as
transmission over an individual channel. But in contrast to [4], where the capa-
cities of these individual channels were calculated for optimal decoding on each
level of modulation, we calculate the capacities when the suboptimal decoding
described in Section II is used. As output of the channel we consider the sequence
of statistics λ^(k)(n). According to the definition of capacity, the error probability
can be made arbitrarily small on each decoding stage if the code length approaches
infinity [4]. In this section we study the asymptotic behavior of the system, and
therefore we do not consider the problem of error propagation due to erroneous
decisions in earlier decoding stages. The capacity for the kth level of the given
multilevel PSK system
using multistage decoding with the suboptimal metric (8) is

    C^(k) = -H_aposteriori + H_apriori
          = (1/2) ∫_{-∞}^{∞} f^(0)_{λ^(k)} log f^(0)_{λ^(k)} dλ^(k)
            + (1/2) ∫_{-∞}^{∞} f^(1)_{λ^(k)} log f^(1)_{λ^(k)} dλ^(k)
            - ∫_{-∞}^{∞} ((1/2) f^(0)_{λ^(k)} + (1/2) f^(1)_{λ^(k)}) log((1/2) f^(0)_{λ^(k)} + (1/2) f^(1)_{λ^(k)}) dλ^(k),    (25)

where by symmetry f^(1)_{λ^(k)}(λ) = f^(0)_{λ^(k)}(-λ). All signal points are assumed to be
equally likely.
The capacity and the computational cutoff rate R_c = 1 - log2(1 + Z_M) of a level
using M-PSK are shown in Figure 5 for M = 2, 4, 8, 16, 32, 64. In this figure we
also show the capacity for the same system using the optimal metric at each decoding
stage, calculated in [4] (M = 2, 4, 8), and conclude that the capacity of our system
using the suboptimal metric and multistage decoding is close to the capacity of a
system using the optimal metric and multistage decoding. This indicates that it is
not necessary to use the optimal metric in multistage decoding of multilevel PSK.
That is, a complexity reduction by use of the suboptimal metric can be achieved at
very small loss in capacity. In Figure 6 we show the total capacity of a multilevel
modulation scheme using PSK-signaling with K = 6 levels.
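The cutoff rate used in Figure 5 is immediate to compute from the tabulated Z values; a small sketch (function name ours):

```python
import math

def cutoff_rate(Z):
    """Computational cutoff rate of a binary-input level: R_c = 1 - log2(1 + Z)."""
    return 1 - math.log2(1 + Z)
```

For example, Z_2 = 0.6065 at ρ = 1 (Table 1) gives R_c of roughly 0.316 bits per level and channel use; as Z falls toward 0 the cutoff rate approaches 1 bit.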

VI. CONCLUSIONS
The decoding error probability for multilevel PSK with suboptimal multistage
decoding has been analyzed. Rigorous union bounds for the error probabilities of the
component codes have been presented, and a formula for the parameter Z has been
derived using the Chernoff bounding method. This formula yields a Z that gives
tighter bounds than the commonly used "nearest neighbor error events principle",
and the bounds do not lose their validity, as is the case when the often used
approximation Z = exp{-δ²/(8σ²)} is applied. Comparison to traditional bounding
techniques for the probability of decoding error has been made for M-PSK with
M = 2, 4, 8, 16, 32, 64.
Calculation of capacity was made, and from this it was concluded that the capac-
ity for suboptimal decoding of M-PSK is close to the capacity for optimal decoding;
hence, in terms of capacity, the need for using the optimal metric is small when
multilevel PSK with multistage decoding is considered.

References
[1] E. Biglieri, D. Divsalar, P. J. McLane and M. K. Simon, Introduction to Trellis-
Coded Modulation with Applications, Macmillan, 1991.

[2] K. Engdahl and K. Sh. Zigangirov, \On the Calculation of the


Error Probability for a Multilevel Modulation Scheme Using QAM-
signalling," submitted to IEEE Trans. Information Theory, (available from
http://www.it.lth.se/~karin/artiklar.html).

[3] K. Engdahl and K. Sh. Zigangirov, \On the Calculation of the Error Probability
for a Multilevel Modulation Scheme Using QAM-signaling," Proceedings ISIT '97,
p. 390.

[4] J. Huber, \Multilevel Codes: Distance Profiles and Channel Capacity," in ITG-
Fachbericht 130, pp. 305-319, Oct. 1994. Conference Record.

[5] H. Imai and S. Hirakawa, \A New Multilevel Coding Method Using Error-
Correcting Codes," IEEE Trans. Information Theory, vol. IT-23, pp. 371-377,
May 1977.
[6] Y. Kofman, E. Zehavi and S. Shamai, \Performance Analysis of a Multilevel
Coded Modulation System," IEEE Trans. Communications, vol. COM-42, pp.
299-312, Feb./Mar./Apr. 1994.

[7] G. Ungerboeck, \Channel Coding with Multilevel/Phase Signals," IEEE Trans.
Information Theory, vol. IT-28, pp. 55-67, Jan. 1982.

[8] A. J. Viterbi and J. K. Omura, Principles of Digital Communication and Cod-
ing, McGraw-Hill, 1979.

[9] S. G. Wilson, Digital Modulation and Coding, Prentice Hall, 1996.

[10] J. M. Wozencraft and I. M. Jacobs, Principles of Communication Engineering,


Wiley, 1965.

[11] K. Sh. Zigangirov and R. Johannesson, Fundamentals of Convolutional Codes,


to be published by IEEE Press 1998.

LIST OF FIGURE CAPTIONS

Figure 1: System description of the K-level modulation scheme using PSK-signaling.


Figure 2: Set partitioning of PSK signals when K = 3.
Figure 3: The signal constellation on the first level for K = 3.
Table 1: Numerical values of Z for different values of ρ.
Table 2: Comparison of Z to the conventional approximation. The normalized
intra-set Euclidean distance is δ = 2ρ sin(π/M).

Figure 4: Two examples of the error bound (3) for the first level of a system with
K = 2. The figure shows the bound when the lower and upper bounds in (5),
D = Z_2 (dashed) and D = 2Z_2 (dash-dotted), are used, and when D = Z_4
(solid) is applied. The relation between ρ and the energy per channel use over
N_0 is E_s/N_0 = ρ²/2. (a) T(D) = D^5/(1 - 2D),
(b) T(D) = D^10 (D^8 - 4D^6 + 5D^4 - 4D^2 + 3) / (D^12 + 4D^10 - 4D^8 + 2D^6 - 2D^4 - 2D^2 + 1).

Figure 5: Capacities (solid lines) for M-PSK using suboptimal decoding (M =
2, 4, 8, 16, 32, 64). The capacities for M-PSK with M = 2, 4, 8 using optimal
decoding [4] are also shown (stars). Also plotted are the computational cutoff
rates (dashed lines).

Figure 6: Total capacity (solid line) for a multilevel modulation scheme using 64-
PSK (K = 6 levels) and the capacities of each level (dashed lines).

Figure 1: [Block diagram: the information sequence u is partitioned into u^(1), ..., u^(K);
each subsequence is encoded by a component code C_k into v^(k); the code bits are mapped
onto 2^K-PSK signal points s, sent over the AWGN channel, and the received sequence r is
fed to the suboptimal multistage decoder, which outputs û.]

Figure 2: [Set partitioning tree for K = 3: the 8-PSK set S^(0) is split by v^(1) = 0/1
into two 4-PSK subsets on level S^(1), each of which is split by v^(2) = 0/1 into an
antipodal pair on level S^(2).]

Figure 3: [The coordinate system (x, y) for the first-level constellation: a received
point with the distances d_E(r(n), v_0^(k)(n) = 0) and d_E(r(n), v_l^(k)(n) = 1) to the
nearest points of the subsets; filled points belong to S_0^(k-1), open points to S_1^(k-1).]

ρ      Z2      Z4      Z8      Z16     Z32     Z64
1 0.6065 0.9334 0.9994 1.0000 1.0000 1.0000
2 0.1353 0.5650 0.9639 1.0000 1.0000 1.0000
3 0.0111 0.1889 0.8010 0.9976 1.0000 1.0000
4 0.0003 0.0353 0.5417 0.9766 1.0000 1.0000
5 4e-05 0.0038 0.3005 0.9140 0.9999 1.0000
6 1e-08 0.0002 0.1392 0.8077 0.9988 1.0000
7 2e-11 1e-05 0.0547 0.6736 0.9938 1.0000
8 1e-14 2e-07 0.0184 0.5317 0.9804 1.0000
9 3e-18 3e-09 0.0053 0.3986 0.9553 1.0000
10 2e-22 3e-11 0.0013 0.2846 0.9176 1.0000
11 5e-27 1e-13 0.0003 0.1941 0.8682 0.9998
12 0.0001 0.1267 0.8091 0.9991
13 0.0793 0.7430 0.9975
14 0.0477 0.6725 0.9944
15 0.0275 0.6004 0.9892
16 0.0153 0.5288 0.9813
17 0.0082 0.4597 0.9704
18 0.0042 0.3946 0.9564
19 0.0021 0.3347 0.9390
20 0.0010 0.2805 0.9185
21 0.0005 0.2324 0.8951
22 0.0002 0.1904 0.8689
23 0.0001 0.1542 0.8402
24 0.1236 0.8095
25 0.0981 0.7770
26 0.0770 0.7430
27 0.0598 0.7080
28 0.0460 0.6723
29 0.0350 0.6361
30 0.0264 0.5998

Table 1:

δ      Z4/Z2   Z8/Z2   Z16/Z2  Z32/Z2  Z64/Z2
0.5 1.03 1.03 1.03 1.03 1.03
1.0 1.11 1.13 1.13 1.13 1.13
1.5 1.22 1.28 1.30 1.31 1.31
2.0 1.34 1.45 1.49 1.50 1.50
2.5 1.46 1.61 1.65 1.66 1.66
3.0 1.57 1.73 1.77 1.78 1.78
3.5 1.67 1.83 1.86 1.87 1.87
4.0 1.76 1.89 1.92 1.92 1.92
4.5 1.82 1.94 1.95 1.96 1.96
5.0 1.88 1.96 1.98 1.98 1.98
5.5 1.92 1.98 1.99 1.99 1.99
6.0 1.94 1.99 1.99 1.99 1.99
6.5 1.96 1.99 1.996 1.996 1.996
7.0 1.98 1.996 1.997 1.997 1.997
7.5 1.99 1.997 1.998 1.998 1.998
8.0 1.99 1.998 1.998 1.998 1.998
8.5 1.99 1.998 1.998 1.998 1.998
9.0 1.996 1.998 1.998 1.998 1.998
9.5 1.997 1.998 1.998 1.998 1.998
10.0 1.997 1.998 1.998 1.998 1.998

Table 2:

Figure 4: [Two panels of error bounds plotted on logarithmic vertical axes (10^-2
down to 10^-14) against ρ ranging from 3 to 6.]

Figure 5: [Plot of bits per level and channel use (0 to 0.8) against a logarithmic
horizontal axis from 10^-1 to 10^2.]

Figure 6: [Plot of bits per channel use (up to 6) against a logarithmic horizontal
axis from 10^-1 to 10^2.]

