

4.2.4 Orthogonal, Biorthogonal and Simplex Signals

In PAM, QAM and PSK, we had only one basis function. For orthogonal, biorthogonal and simplex signals, however, we use more than one orthogonal basis function, so the signal space is N-dimensional. Examples: the Fourier basis; time-translated pulses; the Walsh-Hadamard basis.

o We've become used to SER getting worse quickly as we add bits to the symbol, but with these orthogonal signals it actually gets better.

o The drawback is bandwidth occupancy; the number of dimensions in bandwidth W and symbol time Ts is
$N \approx 2WT_s$

o So we use these sets when the power budget is tight, but there's plenty of bandwidth.
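As a concrete illustration of one such basis, here is a quick Python sketch of the Walsh-Hadamard construction (N = 8 is an arbitrary choice; assumes SciPy is available):

```python
# A quick sketch of the Walsh-Hadamard basis mentioned above, using
# scipy.linalg.hadamard (defined for N a power of 2). N = 8 is arbitrary.
import numpy as np
from scipy.linalg import hadamard

N = 8
H = hadamard(N)                              # N x N matrix of +/-1 entries; rows are the basis
print(np.allclose(H @ H.T, N * np.eye(N)))   # rows mutually orthogonal: True
```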

Orthogonal Signals

With orthogonal signals, we select only one of the orthogonal basis functions for transmission:

$\mathbf{s}_m = \sqrt{E_s}\,\boldsymbol{\phi}_m, \qquad m = 1, \ldots, M$

The number of signals M equals the number of dimensions N.


Examples of orthogonal signals are frequency-shift keying (FSK), pulse position modulation (PPM), and a choice among Walsh-Hadamard functions (note that with the Fourier basis, it's FSK, not OFDM).

Energy and distance:

o The signals are equidistant, as can be seen from the sketch or from

$\mathbf{s}_i = \sqrt{E_s}\,\mathbf{e}_i, \qquad \mathbf{s}_j = \sqrt{E_s}\,\mathbf{e}_j$

(a single nonzero entry, $\sqrt{E_s}$, in location $i$ or location $j$), so

$\|\mathbf{s}_i - \mathbf{s}_j\| = \sqrt{2E_s} = \sqrt{2\log_2(M)\,E_b} = \sqrt{2kE_b} = d_{\min} \qquad \forall\, i \ne j$
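A quick numeric check of the equidistance claim (a Python sketch; M and Es are arbitrary illustrative values):

```python
# Numeric check: with s_m = sqrt(Es) * e_m, every pair of signals
# is separated by sqrt(2*Es). M = 8 and Es = 1 are arbitrary.
import numpy as np

M, Es = 8, 1.0
S = np.sqrt(Es) * np.eye(M)          # row m is the signal vector s_m
for i in range(M):
    for j in range(i + 1, M):
        assert np.isclose(np.linalg.norm(S[i] - S[j]), np.sqrt(2 * Es))
print("all pairs at distance sqrt(2*Es)")
```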


Effect of adding bits:

o For a fixed energy per bit, adding more bits increases the minimum distance, in strong contrast to PAM, QAM and PSK.

o But each added bit doubles the number of signals M, which equals the number of dimensions N, and that doubles the bandwidth!

Error analysis for orthogonal signals:

o Equal energy, equiprobable signals, so the receiver picks the signal with the largest correlation.

o The error probability is the same for all signals. If $\mathbf{s}_1$ was sent, then the correlation vector is

$\mathbf{c} = \begin{bmatrix} \sqrt{E_s} + n_1 \\ n_2 \\ \vdots \\ n_M \end{bmatrix}$

with real components.


o Assume $\mathbf{s}_1$ was sent. The probability of a correct symbol decision, conditioned on the value of the received $c_1$, is

$P_{cs}(c_1) = P\big[(n_2 < c_1) \cap (n_3 < c_1) \cap \cdots \cap (n_M < c_1)\big] = \left[1 - Q\!\left(\frac{c_1}{\sqrt{N_0/2}}\right)\right]^{M-1}$

o Then the unconditional probability of correct symbol detection is

$P_{cs} = \int_{-\infty}^{\infty} \left[1 - Q\!\left(\frac{c_1}{\sqrt{N_0/2}}\right)\right]^{M-1} p_{c_1}(c_1)\, dc_1$

$= \int_{-\infty}^{\infty} \left[1 - Q\!\left(\frac{c_1}{\sqrt{N_0/2}}\right)\right]^{M-1} \frac{1}{\sqrt{2\pi\tfrac{N_0}{2}}} \exp\!\left(-\frac{\big(c_1 - \sqrt{E_s}\big)^2}{2\tfrac{N_0}{2}}\right) dc_1$

With the substitution $u = c_1/\sqrt{N_0/2}$, this becomes

$P_{cs} = \int_{-\infty}^{\infty} \big(1 - Q(u)\big)^{M-1} \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{\big(u - \sqrt{2\gamma_s}\big)^2}{2}\right) du$

where $\gamma_s = E_s/N_0$. This needs numerical evaluation.
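A minimal sketch of that numerical evaluation in Python with SciPy; the helper name ser_orthogonal and the 8 dB sample SNR are illustrative choices, not from the notes:

```python
# Evaluate P_cs by integrating (1 - Q(u))^(M-1) against the Gaussian
# density centred at sqrt(2*gamma_s), then return P_es = 1 - P_cs.
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def Q(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / np.sqrt(2.0))

def ser_orthogonal(M, gamma_b_dB):
    k = np.log2(M)
    gamma_s = k * 10.0 ** (gamma_b_dB / 10.0)        # SNR per symbol
    mean = np.sqrt(2.0 * gamma_s)
    integrand = lambda u: ((1.0 - Q(u)) ** (M - 1)
                           * np.exp(-0.5 * (u - mean) ** 2) / np.sqrt(2.0 * np.pi))
    p_correct, _ = quad(integrand, mean - 10.0, mean + 10.0)
    return 1.0 - p_correct

for M in (2, 4, 8, 16):
    print(M, ser_orthogonal(M, gamma_b_dB=8.0))      # SER falls as M grows
```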

o The unconditional probability of symbol error (SER) is

$P_{es} = 1 - P_{cs}$


Now for the bit error rate.

o All symbol errors are equally likely, each at probability

$\frac{P_{es}}{M-1} = \frac{P_{es}}{2^k - 1}$

There is no point in Gray coding.

o We can get $i$ bit errors in $\binom{k}{i}$ equiprobable ways, so, using $\sum_{i=1}^{k} i\binom{k}{i} = k\,2^{k-1}$,

$P_{eb} = \frac{1}{k}\sum_{i=1}^{k} i\,\binom{k}{i}\,\frac{P_{es}}{2^k - 1} = \frac{P_{es}}{k\,(2^k - 1)}\; k\,2^{k-1} = \frac{2^{k-1}}{2^k - 1}\,P_{es} \approx \frac{P_{es}}{2}$
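A quick sanity check of the combinatorial step; k and Pes here are arbitrary sample values:

```python
# Verify that the weighted sum collapses to 2^(k-1)/(2^k - 1) * P_es,
# via the identity sum_i i*C(k,i) = k * 2^(k-1).
from math import comb

k, Pes = 4, 1e-3
direct = sum(i * comb(k, i) for i in range(1, k + 1)) * Pes / (k * (2**k - 1))
closed = 2 ** (k - 1) / (2 ** k - 1) * Pes
print(direct, closed)     # identical
```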

About half the bits are in error in an average symbol error. The SER curve for orthogonal signals is striking:
[Figure: Symbol Error Probability, Orthogonal Signals. Probability of symbol error vs. SNR per bit $\gamma_b$ (dB) for M = 2, 4, 8, 16. The SER improves as we add bits! But the bandwidth doubles with each additional bit.]


Why does SER drop as M increases?


o All points are separated by the same $d = \sqrt{2\log_2(M)\,E_b}$. Since $d^2$ increases linearly with $k = \log_2(M)$, the pairwise error probability (between two specific points) decreases exponentially with k, the number of bits per symbol.

o Offsetting this is the number M-1 of neighbours of any point: there are M-1 ways of getting it wrong, and M increases exponentially with k.

o Which effect wins? We can't get much analytical traction from the integral expression above, so we fall back on a union bound.

Union bound analysis:

o Without loss of generality, assume $\mathbf{s}_1$ was transmitted.

o Define $E_m,\ m = 2, \ldots, M$ as the event that $\mathbf{r}$ is closer to $\mathbf{s}_m$ than to $\mathbf{s}_1$.


o Note that $E_m$ is a pairwise error event. It does not imply that the receiver's decision is m. In fact, we have events $E_2, \ldots, E_M$, and they are not mutually exclusive, since $\mathbf{r}$ could lie closer to two or more points than to $\mathbf{s}_1$:

o The overall error event is $E = E_2 \cup E_3 \cup \cdots \cup E_M$, so the SER is bounded by the sum of the pairwise event probabilities¹:

$P_{es} = P[E] = P[E_2 \cup E_3 \cup \cdots \cup E_M] \le \sum_{m=2}^{M} P[E_m]$

$= (M-1)\,Q\!\left(\sqrt{\frac{d^2}{2N_0}}\right) = (M-1)\,Q\!\left(\sqrt{\log_2(M)\,\gamma_b}\right) < \frac{M-1}{2}\,e^{-k\gamma_b/2} < e^{-k\left(\gamma_b - 2\ln(2)\right)/2}$

¹ A general expansion is

$P[A_1 \cup A_2 \cup \cdots \cup A_M] = \sum_{i=1}^{M} P[A_i] - \sum_{i<j} P[A_i \cap A_j] + \sum_{i<j<k} P[A_i \cap A_j \cap A_k] - \text{etc.}$


o From the bound, if $\gamma_b > 2\ln(2) = 1.386$, then loading more bits onto a symbol causes the upper bound on SER to drop exponentially to zero.

o Conversely, if $\gamma_b < 2\ln(2)$, increasing k causes the upper bound on SER to rise exponentially (the SER itself saturates at 1).

o Too bad the dimensionality, the number of correlators and the bandwidth also increase exponentially with k.

o This scheme is called block orthogonal coding. Thresholds are characteristic of coded systems.
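As a quick numeric illustration of the threshold (a sketch; the sample $\gamma_b$ values bracket $2\ln(2)$ and are otherwise arbitrary):

```python
# Evaluate the bound exp(-k*(gamma_b - 2*ln2)/2) for gamma_b below and
# above 2*ln(2) = 1.386: below, it grows with k; above, it shrinks.
import numpy as np

for gamma_b in (1.0, 2.0):
    bound = [np.exp(-k * (gamma_b - 2 * np.log(2)) / 2) for k in (2, 4, 8, 16)]
    print(gamma_b, np.round(bound, 4))
```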
Remember that this analysis is based on bounds:

[Figure: SER and Union Bound, Orthogonal Signals. Probability of symbol error vs. SNR per bit $\gamma_b$ (dB) for M = 2, 4, 8, 16; true SERs are solid lines, bounds are dashed. The upper bound is useful to show the decreasing SER, but is too loose for a good approximation.]
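A sketch of that comparison, re-defining the numerical-integration helper sketched earlier (names and sample values are illustrative):

```python
# Compare the union bound (M-1)*Q(sqrt(k*gamma_b)) with the SER obtained
# from the numerical integral, at a few sample SNRs.
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def ser_orthogonal(M, gamma_b_dB):
    k = np.log2(M)
    mean = np.sqrt(2.0 * k * 10.0 ** (gamma_b_dB / 10.0))
    f = lambda u: ((1.0 - Q(u)) ** (M - 1)
                   * np.exp(-0.5 * (u - mean) ** 2) / np.sqrt(2.0 * np.pi))
    return 1.0 - quad(f, mean - 10.0, mean + 10.0)[0]

M, k = 16, 4
for gdB in (4.0, 8.0, 12.0):
    g = 10.0 ** (gdB / 10.0)
    print(gdB, ser_orthogonal(M, gdB), (M - 1) * Q(np.sqrt(k * g)))
```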


Biorthogonal Signals

If you have coherent detection, why waste the other side of the axis? Double the number of signal points (add one bit) by including the negatives $\pm\mathbf{s}_m$, each still with energy $E_s$. Now $M = 2N$, with no increase in bandwidth. Or keep the same M and cut the bandwidth in half.

Energy and distance:

All signal points are equidistant from $\mathbf{s}_i$,

$\|\mathbf{s}_i - \mathbf{s}_k\| = \sqrt{2E_s} = d_{\min}$ (the same as orthogonal signals)

except one, the reflection through the origin, which is farther away:

$\|\mathbf{s}_i - (-\mathbf{s}_i)\| = 2\sqrt{E_s}$


Symbol error rate:

The exact probability of error is messy, but the union bound is easy. A signal is equidistant from all other signals except its own complement.

o So the union bound on SER is

$P_s \le (M-2)\,Q\!\left(\sqrt{\gamma_s}\right) + Q\!\left(\sqrt{2\gamma_s}\right) \approx (M-2)\,Q\!\left(\sqrt{\gamma_s}\right) \qquad \text{(second term redundant; Sec'n 4.5.4)}$

$= (M-2)\,Q\!\left(\sqrt{\log_2(M)\,\gamma_b}\right)$

which is slightly less than orthogonal signaling, for the same M and the same $\gamma_b$. The major benefit is that biorthogonal needs only half the bandwidth of orthogonal, since it has half the number of dimensions.
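A small sketch comparing the two bounds at the same M and $\gamma_b$ (the sample values are arbitrary):

```python
# Union bounds at equal M and gamma_b: biorthogonal has one fewer
# nearest neighbour, so its bound is slightly smaller.
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

M = 16
k = np.log2(M)
gamma_b = 10.0 ** (8.0 / 10.0)        # 8 dB per bit
print("orthogonal  :", (M - 1) * Q(np.sqrt(k * gamma_b)))
print("biorthogonal:", (M - 2) * Q(np.sqrt(k * gamma_b)))
```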

o And the bit error probability is approximately

$P_b \approx \frac{P_s}{2}$

since roughly half of the $\log_2(M)$ bits in a symbol are wrong in an average symbol error.


Simplex Signals

The orthogonal signals can be seen as a mean value, shared by all, plus signal-dependent increments:

o The mean signal (the centroid) is $\bar{\mathbf{s}} = \frac{1}{M}\sum_{m=1}^{M} \mathbf{s}_m$.

o The mean does not contribute to the distinguishability of signals, so why not subtract it?

$\mathbf{s}'_m = \mathbf{s}_m - \bar{\mathbf{s}} = \sqrt{E_s}\begin{bmatrix} -\frac{1}{M} \\ \vdots \\ 1 - \frac{1}{M} \\ \vdots \\ -\frac{1}{M} \end{bmatrix} \leftarrow \text{location } m$

where $E_s$ is the original signal energy.


o Removing the mean lowers the dimensionality by one. After rotation into tidier coordinates, they still have equal energy (though they are no longer orthogonal). That energy is

$E'_s = \|\mathbf{s}'_m\|^2 = E_s\left[(M-1)\frac{1}{M^2} + \left(1 - \frac{1}{M}\right)^2\right] = E_s\left(1 - \frac{1}{M}\right)$

The energy saving is small for larger M.


The correlation among signals is

$\mathrm{Re}[\rho_{mn}] = \frac{(\mathbf{s}'_m, \mathbf{s}'_n)}{\|\mathbf{s}'_m\|\,\|\mathbf{s}'_n\|} = \frac{-\frac{1}{M}}{1 - \frac{1}{M}} = -\frac{1}{M-1} \qquad \text{for } m \ne n$

a uniform negative correlation.

The SER is easy. Translation doesn't change the error rate, so the SER is that of orthogonal signals with an $\frac{M}{M-1}$ SNR boost.
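A quick numeric confirmation of the simplex construction and the two results above (M and Es are arbitrary):

```python
# Build the simplex set by subtracting the centroid from an orthogonal set,
# then verify the energy Es*(1 - 1/M) and the correlation -1/(M-1).
import numpy as np

M, Es = 8, 1.0
S = np.sqrt(Es) * np.eye(M)           # orthogonal set, one signal per row
Sp = S - S.mean(axis=0)               # subtract the centroid s-bar

E_prime = np.dot(Sp[0], Sp[0])
print(np.isclose(E_prime, Es * (1 - 1 / M)))   # True

rho = np.dot(Sp[0], Sp[1]) / E_prime
print(np.isclose(rho, -1 / (M - 1)))           # True
```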

4.2.5 Vertices of a Hypercube and Generalizations

More signals defined on a multidimensional space. Unlike orthogonal signaling, we will now allow more than one dimension to be used in a symbol. Vertices of a hypercube is the straightforward case: two-level PAM (binary antipodal) on each of the N dimensions:

o With the Fourier basis, it is BPSK on each frequency at once: a simple OFDM.

o With the time-translate basis, it is a classical NRZ transmission:


o Signal vectors:

$\mathbf{s}_m = \begin{bmatrix} s_{m1} \\ s_{m2} \\ \vdots \\ s_{mN} \end{bmatrix} \qquad \text{with } s_{mn} = \pm\sqrt{\frac{E_s}{N}}$

for $m = 1, \ldots, M$ and $M = 2^N$.
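A small sketch enumerating the vertex signals and checking the equal-energy property (N and Es are arbitrary sample values):

```python
# Enumerate the 2^N vertex signals with itertools.product.
import itertools
import numpy as np

N, Es = 3, 1.0
amp = np.sqrt(Es / N)
signals = np.array(list(itertools.product((-amp, +amp), repeat=N)))
print(signals.shape)                                 # (8, 3): M = 2^N signals
print(np.allclose((signals ** 2).sum(axis=1), Es))   # all energies Es: True
```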

o In space, it looks like this: [sketch]

o Every point has the same distance from the origin, hence the same energy:

$\|\mathbf{s}_m\|^2 = E_s = \log_2(M)\,E_b = N\,E_b$


o Minimum distance occurs for differences in a single coordinate:

$\mathbf{s}_m = \sqrt{\frac{E_s}{N}}\begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} \qquad \mathbf{s}_n = \sqrt{\frac{E_s}{N}}\begin{bmatrix} -1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}$

$d_{\min} = \|\mathbf{s}_m - \mathbf{s}_n\| = 2\sqrt{\frac{E_s}{N}} = 2\sqrt{E_b}$

More generally, for $d_H$ disagreements (Hamming distance),

$\|\mathbf{s}_m - \mathbf{s}_n\| = 2\sqrt{d_H\,\frac{E_s}{N}} = 2\sqrt{d_H\,E_b}$

o What are the effects on $d_{\min}$ and bandwidth if we increase the number of bits, keeping $E_b$ fixed?

It's easy to generalize from binary to PAM, PSK, QAM, etc. on each of the dimensions:

o With the Fourier basis, it is OFDM.

o With time translates, it is the usual serial transmission.


Error analysis:

o All points have the same energy, $E_s = N\,E_b$.

o The Euclidean distance for Hamming distance h is $d = 2\sqrt{h\,E_b}$.

o If labeling is done independently on different dimensions (see sketch), then the independence of the noise causes the detection to decompose into N independent binary detectors. So the structure

$P_s \le N\,Q\!\left(\sqrt{2\gamma_b}\right) = \log_2(M)\,Q\!\left(\sqrt{2\gamma_b}\right), \qquad P_b \text{ depends on labels}$

becomes

$P_b = Q\!\left(\sqrt{2\gamma_b}\right)$, just binary antipodal ($P_s$ is meaningless).
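A Monte Carlo sketch of this decomposition (parameters are arbitrary sample values): each dimension carries an independent antipodal bit, and the simulated BER should match $Q(\sqrt{2\gamma_b})$:

```python
# N independent antipodal bits per symbol; per-bit error rate Q(sqrt(2*gamma_b)).
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)
N, n_sym = 8, 200_000
gamma_b = 10.0 ** (6.0 / 10.0)                  # 6 dB per bit
Eb, N0 = 1.0, 1.0 / gamma_b                     # gamma_b = Eb / N0

bits = rng.integers(0, 2, size=(n_sym, N))
tx = np.sqrt(Eb) * (2 * bits - 1)               # binary antipodal per dimension
rx = tx + rng.normal(0.0, np.sqrt(N0 / 2), size=tx.shape)
ber = np.mean((rx > 0).astype(int) != bits)

print(ber, 0.5 * erfc(np.sqrt(gamma_b)))        # simulated vs Q(sqrt(2*gamma_b))
```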


Generalization: use multilevel signals (PAM) in each dimension. Again, if labeling is independent by dimension, then it's just independent and parallel use, like QAM. Not too exciting. Yet. These signals form a finite lattice.

Generalization: use a subset of the cube vertices, with points selected for greater minimum distance. This is a binary block code.

o Advantage: greater Euclidean distance

o Disadvantage: lower data rate
