Communication Theory
Spring 2003
Instructor: Matthew Valenti
Date: Jan. 15, 2003
Lecture #2
Probability and Random Variables
Review/Preview
Last time:
Course policies (syllabus).
Block diagram of communication system.
This time:
Review of probability and random variables.
Random Events
When we conduct a random experiment, we can
use set notation to describe possible outcomes.
Example: Roll a six-sided die.
Possible outcomes: S={1,2,3,4,5,6}
An event is any subset of possible outcomes: A = {1,2}
The complement of the event is: Ā = S − A = {3,4,5,6}
S is the certain event (the set of all outcomes).
∅ is the null event (empty set).
Another example:
Transmit a data bit.
Two complementary outcomes are:
• {Received correctly, received in error}
More Set Theory
The union (or sum) of two events contains all
sample points in either event.
Let A = {1,2}, B = {5,6}, and C={1,3,5}
Then find A ∪B and A ∪C
The intersection of two events contains only
those points that are common to both sets.
Find A∩B and A∩C
If A∩B = ∅, then A and B are mutually exclusive.
Probability
The probability P(A) is a number which
measures the likelihood of event A.
Axioms of probability:
P(A)≥ 0.
P(A)≤ 1 and P(A)=1 only if A=S (the certain event).
If A and B are two events such that A∩B=∅,
then P(A∪B) = P(A)+P(B).
• A and B are mutually exclusive, i.e. A and B don't overlap.
[Venn diagram: disjoint sets A and B inside S.]
Joint and Conditional Probability
Joint probability is the probability that both
A and B occur:
P(A,B) = P(A∩B).
Conditional probability is the probability that
A will occur given that B has occurred:
Bayes’ theorem:
P(A,B) = P(A)P(B|A) = P(B)P(A|B)
P(A|B) = P(A,B)/P(B) and P(B|A) = P(A,B)/P(A)
P(B|A) = P(A|B)P(B)/P(A) and P(A|B) = P(B|A)P(A)/P(B)
Statistical Independence
Events A and B are statistically independent
if P(A,B) = P(A)P(B).
If A and B are independent, then:
• P(A|B) = P(A) and P(B|A) = P(B).
Example:
• Flip a coin; call the result A ∈ {heads, tails}.
• Flip it again; call the result B ∈ {heads, tails}.
• P{A=heads, B=tails} = 0.25.
• P{A=heads}P{B=tails} = (0.5)(0.5) = 0.25
Random Variables
A random variable X(s) is a real-valued
function of the underlying event space s∈S.
Typically, we just denote it as X.
• i.e. we suppress the dependence on s (it is assumed).
Random variables (R.V.’s) can be either
discrete or continuous:
A discrete R.V. can only take on a countable
number of values.
• Example: The number of students in a class.
A continuous R.V. can take on a continuous
range of values.
• Example: The voltage across a resistor.
Cumulative Distribution Function
Abbreviated CDF.
Also called Probability Distribution Function.
Definition: F_X(x) = P[ X ≤ x ]
Properties:
F(x) is monotonically nondecreasing.
F(−∞) = 0
F(∞) = 1
P[ a < X ≤ b ] = F(b) − F(a)
The CDF completely defines the random
variable, but is cumbersome to work with.
Instead, we will use the pdf …
Probability Density Function
Abbreviated pdf.
Definition:
p_X(x) = (d/dx) F_X(x)
Properties:
p_X(x) ≥ 0
∫_{−∞}^{∞} p_X(x) dx = 1
∫_a^b p_X(x) dx = P[ a < X ≤ b ] = F_X(b) − F_X(a)
Interpretation:
Measures how fast the CDF is increasing.
Measures how likely a RV is to lie at a particular value or within a
range of values.
Example
Consider a fair die:
P[X=1] = P[X=2] = … = P[X=6] = 1/6.
The CDF is:
F_X(x) = (1/6) Σ_{i=1}^{6} u(x − i), where u(·) is the unit step function.
The pdf is:
p_X(x) = (1/6) Σ_{i=1}^{6} δ(x − i), where δ(·) is the Dirac delta function.
[Plots of F_X(x) and p_X(x) over 0 ≤ x ≤ 6.]
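As a quick numerical sanity check (not part of the slides), the die's CDF can be evaluated directly from the unit-step sum:

```python
# Evaluate F_X(x) = (1/6) * sum_{i=1}^{6} u(x - i) for a fair die.

def die_cdf(x):
    # u(x - i) = 1 when x >= i, so count the outcomes at or below x
    return sum(1.0 / 6.0 for i in range(1, 7) if x >= i)

print(die_cdf(0.5))  # below all outcomes: 0.0
print(die_cdf(3))    # outcomes 1, 2, 3: about 0.5
print(die_cdf(6))    # all outcomes: about 1.0
```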
Expected Values
Sometimes the pdf is unknown or cumbersome to
specify.
Expected values are a shorthand way of describing
a random variable.
The most important examples are:
Mean:
m_X = E[X] = ∫_{−∞}^{∞} x p_X(x) dx
Variance:
σ_X² = E[(X − m_X)²] = ∫_{−∞}^{∞} (x − m_X)² p_X(x) dx = E[X²] − m_X²
The expectation operator works with any function Y = g(X):
E[Y] = E[g(X)] = ∫_{−∞}^{∞} g(x) p_X(x) dx
Uniform Random Variables
The uniform random variable is the most
basic type of continuous R.V.
The pdf of a uniform R.V. is constant over a
finite range and zero elsewhere:
p_X(x) = 1/A for m − A/2 ≤ x ≤ m + A/2, and 0 elsewhere.
[Plot: rectangle of height 1/A and width A, centered at the mean m.]
Example
Consider a uniform random variable with pdf:
p(x) = 1/10 for 0 ≤ x ≤ 10, and 0 elsewhere.
Compute the mean and variance.
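A hypothetical check of this exercise (not the official solution): compute the closed-form mean and variance of the uniform density and confirm them by Monte Carlo sampling.

```python
import random

# Uniform pdf p(x) = 1/10 on [0, 10]: mean = (a+b)/2, variance = (b-a)^2/12
a, b = 0.0, 10.0
mean = (a + b) / 2           # 5.0
var = (b - a) ** 2 / 12      # 100/12, about 8.33

# Monte Carlo confirmation
random.seed(0)
samples = [random.uniform(a, b) for _ in range(200_000)]
m_hat = sum(samples) / len(samples)
v_hat = sum((x - m_hat) ** 2 for x in samples) / len(samples)
print(mean, round(var, 2))
print(round(m_hat, 1), round(v_hat, 1))
```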
Probability Mass Function
The pdf of discrete RV’s consists of a set of
weighted dirac delta functions.
Delta functions can be cumbersome to work with.
Instead, we can define the probability mass
function (pmf) for discrete random variables:
p[x] = P[X=x]
Properties of pmf:
p[x] ≥ 0
Σ_{x=−∞}^{∞} p[x] = 1
Σ_{x=a}^{b} p[x] = P[ a ≤ X ≤ b ]
For the die-roll example:
p[x] = 1/6 for 1 ≤ x ≤ 6
Binary Distribution
A binary or Bernoulli random variable has
the following pmf:
Used to model binary data (p=1/2).
Used to model the probability of bit error.
p[x] = 1 − p for x = 0, and p[x] = p for x = 1.
Mean: p
Variance: p(1 − p)
Binomial Distribution
Let Y = Σ_{i=1}^{n} X_i, where {X_i, i=1,…,n} are i.i.d.
Bernoulli random variables.
Then:
p_Y[k] = C(n,k) p^k (1 − p)^{n−k}
where:
C(n,k) = n! / ( k! (n − k)! )
Mean: m_Y = np
Variance: σ_Y² = np(1 − p)
Example
Suppose we transmit a 31-bit sequence (code word).
We use an error correcting code capable of correcting 3 errors.
The probability that any individual bit in the code word is received in error is p = 0.001.
What is the probability that the code word is incorrectly decoded?
i.e. the probability that more than 3 bits are in error.
Example
Parameters: n=31, p=0.001, and t=3
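The binomial sum for this example can be evaluated directly; this sketch computes the decoding-failure probability P(more than t errors):

```python
from math import comb

# P(codeword decoding error) = P(more than t = 3 of n = 31 bits in error),
# with bit error probability p = 0.001.
n, p, t = 31, 0.001, 3
p_correct = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))
p_error = 1 - p_correct
print(f"{p_error:.3e}")   # on the order of 3e-8
```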
Pairs of Random Variables
We often need to consider a pair (X,Y) of RVs
joint CDF: F_{X,Y}(x,y) = P[X ≤ x, Y ≤ y] = P[ {X ≤ x} ∩ {Y ≤ y} ]
joint pdf: p_{X,Y}(x,y) = ∂² F_{X,Y}(x,y) / (∂x ∂y)
marginal pdf: p_X(x) = ∫_{−∞}^{∞} p_{X,Y}(x,y) dy
conditional pdf: p_X(x|y) = p_{X,Y}(x,y) / p_Y(y) and p_Y(y|x) = p_{X,Y}(x,y) / p_X(x)
Bayes rule: p_X(x|y) = p_Y(y|x) p_X(x) / p_Y(y) and p_Y(y|x) = p_X(x|y) p_Y(y) / p_X(x)
Independence and Joint Moments
X and Y are independent if: p_{X,Y}(x,y) = p_X(x) p_Y(y)
Correlation: E[XY] = ∫∫ xy p_{X,Y}(x,y) dx dy
If E[XY] = 0 then X,Y are orthogonal.
Covariance: µ_{XY} = E[(X − m_X)(Y − m_Y)] = ∫∫ (x − m_X)(y − m_Y) p_{X,Y}(x,y) dx dy
If µ_{XY} = 0 then X,Y are uncorrelated.
If X,Y are independent, then they are uncorrelated.
Random Vectors
Random vectors are an n-dimensional generalization
of pairs of random variables:
X = [X_1, X_2, …, X_n]'
Joint CDF & pdf are possible, but cumbersome.
Marginalize by integrating out unwanted variables.
Mean is specified by a vector m_X = [m_1, m_2, …, m_n]'
Correlation and covariance specified by matrices:
Covariance matrix:
• M = [µ_{i,j}], i.e. a positive-definite matrix with (i,j)th element µ_{i,j}
• where µ_{i,j} = E[(X_i − m_i)(X_j − m_j)]
• If M is diagonal, then X is uncorrelated.
Linear Transformation: Y = AX
• m_Y = A m_X
• M_Y = A M_X A'
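The linear-transformation rules m_Y = A m_X and M_Y = A M_X A' can be sketched with plain Python lists (a 2×2 example with made-up numbers, for illustration only):

```python
# Propagate a mean vector and covariance matrix through Y = AX.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

A = [[1.0, 1.0],
     [1.0, -1.0]]
m_X = [[2.0], [3.0]]                  # mean vector (column)
M_X = [[1.0, 0.5], [0.5, 2.0]]        # covariance matrix

m_Y = matmul(A, m_X)                  # A * m_X
M_Y = matmul(matmul(A, M_X), transpose(A))   # A * M_X * A'
print(m_Y, M_Y)
```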
Central Limit Theorem
Let [X_1, X_2, …, X_n] be a vector of n independent and
identically distributed (i.i.d.) random variables, and
let:
Y = Σ_{i=1}^{n} X_i
Then as n→∞, Y will have a Gaussian distribution.
This is the Central Limit Theorem.
This theorem holds for (almost) any distribution of the X_i's.
Importance of Central Limit Theorem:
Thermal noise results from the random movement of many
electrons, and is modeled very well with a Gaussian distribution.
Interference from many equal-power (identically distributed)
interferers in a CDMA system tends toward a Gaussian
distribution.
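A small illustration of the theorem (a sketch, not from the slides): sums of n i.i.d. uniform(0,1) variables have mean n/2 and variance n/12, and their histogram becomes bell-shaped as n grows.

```python
import random
import statistics

# Sum n = 30 uniform(0,1) variables, many times, and check the first two
# moments against the CLT's Gaussian limit: mean n/2 = 15, variance n/12 = 2.5.
random.seed(1)
n, trials = 30, 50_000
sums = [sum(random.random() for _ in range(n)) for _ in range(trials)]

mean_s = statistics.mean(sums)
var_s = statistics.pvariance(sums)
print(round(mean_s, 2), round(var_s, 2))
```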
Gaussian Random Variables
The pdf of a Gaussian random variable is:
where m is the mean and σ² is the variance.
Properties of Gaussian random variables:
A Gaussian R.V. is completely described by its mean and
variance.
• Gaussian vector specified by mean and covariance matrix.
The sum of Gaussian R.V.’s is also Gaussian.
• Linear transformation of Gaussian vector is Gaussian.
If two Gaussian R.V.’s are uncorrelated, then they are also
independent.
• Uncorrelated Gaussian vector is independent.
p_X(x) = (1/√(2πσ²)) exp( −(x − m)² / (2σ²) )
The Q Function
The Q function can be used to find the
probability that the value of a Gaussian R.V.
lies in a certain range.
The Q function is defined by:
Q(z) = 1 − F_X(z)
where X is a Gaussian R.V. with zero mean and
unit variance (i.e. σ² = 1).
Can also be defined as:
Q(z) = ∫_z^∞ (1/√(2π)) e^{−λ²/2} dλ
Using the Q Function
If X is a Gaussian R.V. with mean m and variance σ²,
then the CDF of X is:
F_X(a) = P[ X ≤ a ] = Q( (m − a)/σ )
Approximation for large z:
Most Q function tables only go up to z = 4 or z = 6.
For z > 4 a good approximation is:
Q(z) ≈ (1/(z√(2π))) e^{−z²/2}
[Plot: the Q function and this overbound on a log scale (10⁻⁶ to 10⁰) for 0 ≤ z ≤ 4.5.]
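In code, Q(z) = ½ erfc(z/√2) is an exact identity, so the large-z approximation above can be compared against it directly (a sketch, not from the slides):

```python
from math import erfc, sqrt, exp, pi

def Q(z):
    # exact: Q(z) = 0.5 * erfc(z / sqrt(2))
    return 0.5 * erfc(z / sqrt(2))

def Q_approx(z):
    # the overbound Q(z) ~ exp(-z^2/2) / (z * sqrt(2*pi)), good for z > 4
    return exp(-z * z / 2) / (z * sqrt(2 * pi))

for z in (1.0, 2.0, 4.0, 6.0):
    print(z, Q(z), Q_approx(z))
```

At z = 4 the approximation is already within a few percent of the exact value, and it always lies above it, which is why the plot labels it an overbound.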
EE 561
Communication Theory
Spring 2003
Instructor: Matthew Valenti
Date: Jan. 17, 2003
Lecture #3
Random Processes
Review/Preview
Last time:
Review of probability and random variables.
• Random variables, CDF, pdf, expectation.
• Pairs of RVs, random vectors, autocorrelation, covariance.
• Uniform, Gaussian, Bernoulli, and binomial RVs.
This time:
Random processes.
Upcoming assignments:
HW #1 is due in 1 week.
Computer Assignment #1 will be posted soon.
Random Variables
vs. Random Processes
Random variables model unknown
values.
Random variables are numbers.
Random processes model unknown
signals.
Random processes are functions of time.
One Interpretation: A random process is
just a collection of random variables.
A random process evaluated at a specific time
t is a random variable.
If X(t) is a random process then X(1), X(1.5),
and X(37.5) are all random variables.
Random Variables
Random variables map the outcome of a
random experiment to a number.
[Figure: the outcomes heads and tails in S map to the numbers 0 and 1 on the X axis.]
Random Processes
Random Processes map
the outcome of a random
experiment to a signal
(function of time).
[Figure: each outcome (heads, tails) in S maps to a signal, the sample
function; the set of all sample functions is the ensemble. A random
process evaluated at a particular time is a random variable.]
Random Process Terminology
The expected value, ensemble average or mean of
a random process is:
The autocorrelation function (ACF) is:
Autocorrelation is a measure of how alike the random
process is from one time instant to another.
Autocovariance:
mean: m_x(t) = E{x(t)} = ∫_{−∞}^{∞} x p_{x_t}(x) dx
ACF: φ(t_1, t_2) = E{x(t_1) x(t_2)} = ∫∫ x_1 x_2 p_{x_{t_1} x_{t_2}}(x_1, x_2) dx_1 dx_2
autocovariance: µ(t_1, t_2) = φ(t_1, t_2) − m(t_1) m(t_2)
Mean and Autocorrelation
Finding the mean and autocorrelation is not
as hard as it might appear!
Why: because oftentimes a random process can
be expressed as a function of a random variable.
We already know how to work with functions of
random variables.
Example:
x(t) = sin(2πt + θ), where θ is a random variable.
This is just a function g(θ) of θ:
g(θ) = sin(2πt + θ)
We know how to find the expected value of a
function of a random variable:
E{x(t)} = E{g(θ)} = E{sin(2πt + θ)}
• To find this you need to know the pdf of θ.
An Example
If θ is uniform between 0 and π, then:
m_x(t) = E{sin(2πt + θ)}
= ∫_{−∞}^{∞} sin(2πt + θ) p_θ(θ) dθ
= ∫_0^π sin(2πt + θ) (1/π) dθ
= (2/π) cos(2πt)
φ(t_1, t_2) = E{sin(2πt_1 + θ) sin(2πt_2 + θ)}
= ∫_{−∞}^{∞} sin(2πt_1 + θ) sin(2πt_2 + θ) p_θ(θ) dθ
= ∫_0^π sin(2πt_1 + θ) sin(2πt_2 + θ) (1/π) dθ
= (1/2) cos( 2π(t_2 − t_1) )
φ(τ) = (1/2) cos(2πτ), where τ = t_2 − t_1
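The mean result above is easy to verify by Monte Carlo (an illustrative check, not from the slides): draw θ uniform on (0, π) and average sin(2πt + θ) at a fixed t.

```python
import random
from math import sin, cos, pi

# Check m_x(t) = (2/pi) * cos(2*pi*t) for x(t) = sin(2*pi*t + theta),
# theta uniform on (0, pi), at one time instant t.
random.seed(2)
t = 0.3
thetas = [random.uniform(0, pi) for _ in range(200_000)]
m_hat = sum(sin(2 * pi * t + th) for th in thetas) / len(thetas)
m_theory = (2 / pi) * cos(2 * pi * t)
print(round(m_hat, 3), round(m_theory, 3))
```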
Stationarity
A process is strict-sense stationary (SSS) if all its joint
densities are invariant to a time shift:
p_x( x(t) ) = p_x( x(t + t_0) )
p_x( x(t_1), x(t_2) ) = p_x( x(t_1 + t_0), x(t_2 + t_0) )
p_x( x(t_1), x(t_2), …, x(t_N) ) = p_x( x(t_1 + t_0), x(t_2 + t_0), …, x(t_N + t_0) )
In general, it is difficult to prove that a random process is strict-sense stationary.
A process is wide-sense stationary (WSS) if:
The mean is a constant: m_x(t) = m_x
The autocorrelation is a function of time difference only:
φ(t_1, t_2) = φ(τ), where τ = t_2 − t_1
If a process is strict-sense stationary, then it is also
wide-sense stationary.
Properties of the Autocorrelation
Function
If x(t) is Wide Sense Stationary, then its
autocorrelation function has the following
properties:
φ(0) = E{x²(t)}   (this is the second moment)
φ(τ) = φ(−τ)   (even symmetry)
φ(0) ≥ φ(τ)
Examples:
Which of the following are valid ACF's?
Power Spectral Density
Power Spectral Density (PSD) is a measure of a random
process’ power content per unit frequency.
Denoted Φ (f).
Units of W/Hz.
Φ(f) is a nonnegative function.
For real-valued processes, Φ(f) is an even function.
The total power of the process is found by:
P = ∫_{−∞}^{∞} Φ(f) df
The power within bandwidth B is found by:
P = ∫_B Φ(f) df
WienerKhintchine Theorem
We can easily find the PSD of a WSS random
processes.
WienerKhintchine theorem:
If x(t) is a wide sense stationary random process,
then:
i.e. the PSD is the Fourier Transform of the ACF.
Example:
Find the PSD of a WSS R.P with autocorrelation:
Φ(f) = F{φ(τ)} = ∫_{−∞}^{∞} φ(τ) e^{−j2πfτ} dτ
φ(τ) = Λ(τ/T) = 1 − |τ|/T for |τ| ≤ T, and 0 for |τ| > T
[Plot: triangular ACF from −T to T.]
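The closed-form answer for this example is the standard Fourier pair Φ(f) = T sinc²(fT) (not stated on the slide); a numerical integration of the ACF confirms it:

```python
from math import sin, cos, pi

T = 1.0

def phi(tau):
    # triangular ACF: 1 - |tau|/T inside [-T, T], zero outside
    return 1.0 - abs(tau) / T if abs(tau) <= T else 0.0

def psd_numeric(f, steps=20000):
    # midpoint-rule Fourier integral of the (real, even) ACF:
    # Phi(f) = integral of phi(tau) * cos(2*pi*f*tau) d tau
    d = 2 * T / steps
    return sum(phi(-T + (k + 0.5) * d) * cos(2 * pi * f * (-T + (k + 0.5) * d))
               for k in range(steps)) * d

def psd_closed(f):
    # Phi(f) = T * sinc^2(f*T), sinc(x) = sin(pi*x)/(pi*x)
    if f == 0.0:
        return T
    s = sin(pi * f * T) / (pi * f * T)
    return T * s * s

for f in (0.0, 0.5, 1.5):
    print(f, round(psd_numeric(f), 5), round(psd_closed(f), 5))
```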
White Gaussian Noise
A process is Gaussian if any n samples placed into a
vector form a Gaussian vector.
If a Gaussian process is WSS then it is SSS.
A process is white if the following hold:
WSS.
zero-mean, i.e. m_x(t) = 0.
Flat PSD, i.e. Φ(f) = constant.
A white Gaussian noise process:
Is Gaussian.
Is white.
• The PSD is Φ(f) = N_0/2.
• N_0/2 is called the two-sided noise spectral density.
Since it is WSS+Gaussian, then it is also SSS.
Linear Systems
The output of a linear time invariant (LTI) system is
found by convolution.
However, if the input to the system is a random
process, we can’t find X(f).
Solution: use power spectral densities:
This implies that the output of a LTI system is WSS if the
input is WSS.
[Diagram: x(t) → h(t) → y(t)]
y(t) = x(t) ∗ h(t)
Y(f) = X(f) H(f)
Φ_y(f) = |H(f)|² Φ_x(f)
Example
A white Gaussian noise process with PSD of Φ(f) = N_0/2 = 10⁻⁵ W/Hz is
passed through an ideal lowpass filter with cutoff at 1 kHz.
Compute the noise power at the filter output.
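The computation is one line once you note that the flat two-sided PSD passes over |f| ≤ B, so P = (N_0/2) · 2B (a sketch of this example):

```python
# Noise power at the output of an ideal lowpass filter.
N0_over_2 = 1e-5   # W/Hz, two-sided noise spectral density
B = 1000.0         # Hz, filter cutoff
P = N0_over_2 * 2 * B
print(P)           # 0.02 W
```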
Ergodicity
A random process is said to be ergodic if it is ergodic
in the mean and ergodic in correlation:
Ergodic in the mean:
Ergodic in the correlation:
In order for a random process to be ergodic, it must
first be Wide Sense Stationary.
If a R.P. is ergodic, then we can compute power three
different ways:
From any sample function:
From the autocorrelation:
From the Power Spectral Density:
m_x = E{x(t)} = ⟨x(t)⟩
φ_x(τ) = E{x(t) x(t + τ)} = ⟨x(t) x(t + τ)⟩
P_x = ⟨|x(t)|²⟩ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt
P_x = φ_x(0)
P_x = ∫_{−∞}^{∞} Φ_x(f) df
time average operator: ⟨g(t)⟩ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} g(t) dt
Cross-correlation
If we have two random processes x(t) and y(t) we can
define a cross-correlation function:
φ_xy(t_1, t_2) = E{x(t_1) y(t_2)} = ∫∫ x y p_{x_{t_1} y_{t_2}}(x, y) dx dy
If x(t) and y(t) are jointly stationary, then the
cross-correlation becomes:
φ_xy(τ) = E{x(t) y(t + τ)}
If x(t) and y(t) are uncorrelated, then:
φ_xy(τ) = m_x m_y
If x(t) and y(t) are independent, then they are also
uncorrelated, and thus:
E{x(t) y(t)} = E{x(t)} E{y(t)}
Summary of Random Processes
A random process is a random function of time.
Or conversely, an indexed set of random variables.
A particular realization of a random process is
called a sample function.
[Figure: three sample functions x(t, s_1), x(t, s_2), x(t, s_3) plotted versus t.]
Furthermore, a Random Process evaluated at a particular point in time is
a Random Variable.
A random process is ergodic in the mean if the time average of every
sample function is the same as the expected value of the random process
at any time.
EE 561
Communication Theory
Spring 2003
Instructor: Matthew Valenti
Date: Jan. 31, 2003
Lecture #8
Advanced Coding Techniques
Review
Earlier this week:
Continued our discussion of quantization.
• Quantization = source coding for continuous sources.
Lloyd-Max algorithm.
• Optimal quantizer design if source pdf is known.
• Analogous to Huffman coding.
Scalar vs. vector quantization.
• Performance can be improved by jointly encoding
multiple samples.
As the number of samples → ∞, then R → R(D).
Vector quantization can take advantage of
correlation in the source.
Even if the source is uncorrelated, vector
quantization achieves a shaping gain.
• We computed the distortion of a vector quantizer.
Preview
K-means algorithm.
• Optimal quantizer design when source pdf is unknown.
• Analogous to Lempel-Ziv algorithm.
This time:
Practical source coding for speech.
• Differential pulse code modulation.
• Vocoding.
Reading: Proakis section 3.5
Coding Techniques for Speech
All speech coding techniques employ
quantization.
Many techniques also use additional
strategies to exploit the characteristics of
human speech.
Companding: Pass a nonuniform sample
through a nonlinearity to make it more uniform.
Then sample with a uniform quantizer.
• µ-law and A-law.
DPCM.
Vocoding.
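The µ-law compander mentioned above can be sketched in a few lines. The compressor curve below is the standard one (µ = 255 in North American telephony), but the code itself is only an illustration, not part of the lecture:

```python
from math import log, copysign

MU = 255.0

def mu_law_compress(x):
    # x in [-1, 1]: logarithmic compression expands small amplitudes
    return copysign(log(1 + MU * abs(x)) / log(1 + MU), x)

def mu_law_expand(y):
    # inverse of the compressor, used after the uniform quantizer
    return copysign(((1 + MU) ** abs(y) - 1) / MU, y)

x = 0.05
y = mu_law_compress(x)
print(round(y, 3), round(mu_law_expand(y), 3))  # small input maps near 0.47
```

A small input amplitude (0.05) maps to a large compressed value, so a uniform quantizer after the nonlinearity spends more of its levels on quiet speech.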
Differential PCM
Speech is highly correlated.
Given several past samples of a speech signal it is
possible to predict the next sample to a high degree of
accuracy by using a linear prediction filter.
The error of the prediction filter is much smaller than
the actual signal itself.
In differential pulse-code modulation (DPCM),
the error at the output of a prediction filter is
quantized, rather than the voice signal itself.
DPCM can produce "toll-quality" speech at half the
normal bit rate (i.e. 32 kbps).
DPCM Block Diagram
[Block diagram: Analog Input Signal → Sample → Σ (subtract the Prediction
Filter output) → Quantizer → Digital Communications Channel → Decoder →
Σ (add the Prediction Filter output) → DAC → Analog Output Signal, with a
Prediction Filter in the feedback path on both the encoder and decoder sides.]
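The encoder/decoder loop in the block diagram can be sketched with a trivial previous-sample predictor (real codecs use longer adaptive FIR predictors; this is only an illustration of the structure, with a made-up step size):

```python
# Minimal DPCM: quantize the prediction error, not the sample itself.
# The encoder predicts from its own reconstructed output so that the
# encoder and decoder predictors stay in lockstep.

def dpcm_encode(samples, step=0.1):
    codes, pred = [], 0.0
    for x in samples:
        err = x - pred               # prediction error (small if correlated)
        q = round(err / step)        # uniform quantizer on the error
        codes.append(q)
        pred = pred + q * step       # decoder-side reconstruction
    return codes

def dpcm_decode(codes, step=0.1):
    out, pred = [], 0.0
    for q in codes:
        pred = pred + q * step
        out.append(pred)
    return out

sig = [0.0, 0.1, 0.25, 0.4, 0.5, 0.55]
rec = dpcm_decode(dpcm_encode(sig))
print([round(v, 2) for v in rec])
```

Because the quantizer sits inside the feedback loop, the reconstruction error never exceeds half a quantizer step per sample and does not accumulate.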
DPCM Issues
The linear prediction filter is usually just a
feedforward (FIR) filter.
The filter coefficients must be periodically transmitted.
In adaptive differential pulse-code modulation
(ADPCM), the quantization levels can be changed on
the fly.
Helpful if the input pdf changes over time (non-stationary).
Used in DECT (Digital European Cordless Telephone).
Delta modulation is a special case of DPCM where
there are only two quantization levels.
Only need to know the zero-crossings of the signal.
While DPCM works well on speech, it does not work
well for modem signals.
Modem signals are uncorrelated.
[Figure: Mean Opinion Score (MOS), from Unsatisfactory (1) to Excellent (5),
versus bit rate (1.2 to 64 kbps); waveform coders achieve toll quality at
high rates, while vocoders give communications quality at low rates.]
Tradeoff: Voice Quality versus Bit
Rate
The bit rate
produced by the
voice coder can be
reduced at a price.
Increased
hardware
complexity.
Reduced
perceived speech
quality.
Waveform Coding and Vocoding
For high bit rates (16-64 kbps) it is sufficient to just
sample and quantize the time domain voice
waveform.
This is called waveform coding.
DPCM is a type of waveform coding.
For low bit rate voice encoding it is necessary to
mathematically model the voice and transmit the
parameters associated with the model.
Process of analysis and synthesis.
Called vocoding.
Most vocoding techniques are based on linear predictive
coding (LPC).
Linear Predictive Coding
Linear predictive coding is similar to DPCM with the
following exceptions:
The prediction filter is more complex
• more taps in the FIR filter.
The filter coefficients are transmitted more frequently
• once every 20 milliseconds.
• The filter coefficients are quantized with a vector quantizer.
The error signal is not transmitted directly
• The error signal can be thought of as a type of noise.
• Instead the statistics of the “noise” are transmitted
Power level
Whether voiced (vowels) or unvoiced (consonants)
• This is where the big savings (in terms of bit rate) comes from.
Vocoder standards
Vocoding is the single most important technology
enabling digital cell phones.
RPE-LTP
Regular Pulse Excited Long Term Prediction.
Used in GSM (European Digital Cellular)
13 kbps.
VSELP
Vector Sum Excited Linear Predictive Coder.
Used in USDC, IS-136 (US Digital Cellular).
8 kbps.
QCELP
Qualcomm Code Excited Linear Predictive Coder.
Used in IS-95 (US Spread Spectrum Cellular).
Variable bit rate (full, half, quarter, eighth)
Original full rate was 9.6 kbps.
Revised standard (QCELP-13) uses 14.4 kbps.
Preview of Next Week
[Block diagram of the full system: Analog Input signal → Sample → Quantize →
Source Encode → Encryption → Channel Encoder → Modulator → Channel →
Demodulator → Equalizer → Channel Decoder → Decryption → Source Decoder →
D/A Conversion → Analog Output signal; Direct Digital Input and Digital
Output taps are also shown.]
We have been looking at the left-hand part of the communication system
(Part 1 of 4). Now, we will start looking at the modulator part of the
communication system (Part 2 of 4).
Modulation Principles
Almost all communication systems transmit
data using a sinusoidal carrier waveform.
Electromagnetic signals propagate well.
Choice of carrier frequency allows placement of
signal in arbitrary part of spectrum.
Modulation is implemented in practice by:
Processing digital information at baseband.
Pulse shaping and filtering of digital waveform.
Baseband signal is mixed with signal from oscillator
to bring up to RF.
Radio frequency (RF) signal is filtered, amplified, and
coupled with the antenna.
Modulator:
Simplified Block Diagram
[Block diagram: data bits (data rate) → Baseband Processing (Source Coding,
Channel Coding, etc.) → code bits/symbols (symbol rate) → Pulse-shaping
Filter (oversampled ~10X symbol rate) → Digital/Analog Converter → mixer
with cos(2πf_c t) → Filter and Amplify → antenna; the baseband section is on
the left and the RF section on the right.]
Modulation
Modulation shifts the spectrum of a baseband
signal so that it becomes a bandpass signal.
A bandpass signal has nonnegligible spectrum
only about some carrier frequency f_c >> 0.
Note: the bandwidth of a bandpass signal is the range
of positive frequencies for which the spectrum is
non-negligible.
Unless otherwise specified, the bandwidth of a
bandpass signal is twice the bandwidth of the baseband
signal used to create it.
[Figure: a baseband spectrum of bandwidth B becomes a bandpass spectrum of bandwidth 2B.]
Modulation
Common digital modulation techniques use the
data value to modify the amplitude, phase, or
frequency of the carrier.
Amplitude: On-off keying (OOK)
• 1 ⇒ A cos(2πf_c t)
• 0 ⇒ 0
More generally, this is called amplitude shift keying (ASK).
Phase: Phase shift keying (PSK)
• 1 ⇒ A cos(2πf_c t)
• 0 ⇒ A cos(2πf_c t + π) = −A cos(2πf_c t)
Frequency: Frequency shift keying (FSK)
• 1 ⇒ A cos(2πf_1 t)
• 0 ⇒ A cos(2πf_2 t)
copyright 2003
EE 561
Communication Theory
Spring 2003
Instructor: Matthew Valenti
Date: Feb 5, 2003
Lecture #10
Representation of Bandpass Signals
Announcements
Homework #2 is due today.
I’ll post solutions over the weekend.
Including solutions to all problems from chapter 3.
Computer assignment #1 is due next week.
See webpage for details.
Today: Vector representation of signals.
Reading: Sections 4.1-4.2.
Review/Preview
[Same system block diagram as before: we have been looking at the
source-coding part of the system; now we will start looking at the
modulator part of the communication system.]
Representation of Bandpass Signals
(EE 461 Review)
A bandpass signal is a signal that has a bandwidth
that is much smaller than the carrier frequency.
i.e. most of the spectral content is not at DC.
Otherwise, it is called baseband or lowpass.
Bandpass signals can be represented in any of three
standard formats:
Quadrature notation.
Complex envelope notation.
Magnitude and phase notation.
Standard Notations
for Bandpass Signals
Quadrature notation:
s(t) = x(t) cos(2πf_c t) − y(t) sin(2πf_c t)
x(t) and y(t) are real-valued lowpass signals called the in-phase
and quadrature components of s(t).
Complex envelope notation:
s(t) = Re{ [x(t) + j y(t)] e^{j2πf_c t} } = Re{ s_l(t) e^{j2πf_c t} }
s_l(t) is the complex envelope of s(t).
s_l(t) is a complex-valued lowpass signal.
More Notation for
Bandpass Signals
Magnitude and phase notation:
s(t) = a(t) cos( 2πf_c t + θ(t) )
where a(t) is the magnitude and θ(t) is the phase of s(t).
a(t) and θ(t) are both real-valued lowpass signals.
Relationship among notations:
a(t) = √( x²(t) + y²(t) )
θ(t) = tan⁻¹( y(t) / x(t) )
x(t) = a(t) cos θ(t)
y(t) = a(t) sin θ(t)
s_l(t) = x(t) + j y(t)
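These relationships are easy to check at a single time instant (a sketch; `atan2` is used instead of a plain arctangent so the phase lands in the correct quadrant):

```python
from math import atan2, cos, sin, sqrt

# From in-phase/quadrature components (x, y), form the magnitude a and
# phase theta, then recover x and y from them.
x, y = 3.0, 4.0
a = sqrt(x * x + y * y)      # magnitude: 5.0
theta = atan2(y, x)          # phase, quadrant-correct
print(a, round(a * cos(theta), 6), round(a * sin(theta), 6))
```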
Key Points
With these alternative representations, we can
consider bandpass signals independently from their
carrier frequency.
The idea of quadrature notation sets up a coordinate
system for looking at common modulation types.
Idea: plot in 2-dimensional space.
• x axis is the in-phase component.
• y axis is the quadrature component.
Called a signal constellation diagram.
Example Signal Constellation
Diagram: BPSK
x(t) ∈ {−1, +1},  y(t) = 0
Example Signal Constellation
Diagram: QPSK
x(t) ∈ {−1, +1},  y(t) ∈ {−1, +1}
QPSK:
Quadriphase shift keying
Example Signal Constellation
Diagram: QAM
QAM:
Quadrature Amplitude Modulation
x(t) ∈ {−3, −1, +1, +3},  y(t) ∈ {−3, −1, +1, +3}
Interpretation of Signal
Constellation Diagrams
Axes are labeled with x(t) and y(t).
Possible signals are plotted as points.
Signal power is proportional to distance from origin.
Probability of mistaking one signal for another is
related to the distance between signal points.
The received signal will be corrupted by noise.
The receiver selects the signal point closest to the
received signal.
Example: A Received QAM
Transmission
[Constellation plot: the 16 QAM points with x(t), y(t) ∈ {−3, −1, +1, +3},
plus a received signal point offset from the nearest constellation point by noise.]
A New Way of Viewing Modulation
The quadrature way of viewing modulation is very
convenient for some modulation types.
QAM and MPSK.
We will examine an even more general way of looking at
modulation by using signal spaces.
We can study any modulation type.
By choosing an appropriate set of axes for our signal
constellation, we will be able to:
Design modulation types which have desirable properties.
Construct optimal receivers for a given modulation type.
Analyze the performance of modulation types using very general
techniques.
First, we must review vector spaces …
Vector Spaces
An n-dimensional vector v = (v_1, v_2, …, v_n) consists of
n scalar components.
The norm (length) of a vector v is given by:
‖v‖ = √( Σ_{i=1}^{n} v_i² )
The inner product of two vectors v_1 = (v_{1,1}, …, v_{1,n})
and v_2 = (v_{2,1}, …, v_{2,n}) is given by:
v_1 · v_2 = Σ_{i=1}^{n} v_{1,i} v_{2,i}
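The two definitions above translate directly into code (a sketch with example 3-dimensional vectors):

```python
from math import sqrt

def norm(v):
    # ||v|| = sqrt(sum of squared components)
    return sqrt(sum(vi * vi for vi in v))

def inner(u, v):
    # u . v = sum of componentwise products
    return sum(ui * vi for ui, vi in zip(u, v))

v1 = [1.0, 2.0, 2.0]
v2 = [2.0, 0.0, -1.0]
print(norm(v1))        # 3.0
print(inner(v1, v2))   # 0.0, so v1 and v2 are orthogonal
```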
Basis Vectors
A vector v may be expressed as a linear combination
of its basis vectors {e_1, e_2, …, e_n}:
v = Σ_{i=1}^{n} v_i e_i
e_i is normalized if it has unit length: ‖e_i‖ = 1.
If e_i is normalized, then v_i is the projection of v onto e_i:
v_i = e_i · v
Think of the basis vectors as a coordinate system
(x,y,z,… axes) for describing the vector v.
What makes a good choice of coordinate system?
Complete Basis
The set of basis vectors {e_1, e_2, …, e_n} is complete or
spans the vector space ℜⁿ if any vector v can be
represented as a linear combination of basis vectors:
v = Σ_{i=1}^{n} v_i e_i
The set of basis vectors is linearly independent if no
one basis vector can be represented as a linear
combination of the remaining vectors.
The n vectors must be linearly independent in order to span ℜⁿ.
Example: Complete Basis
Given the following vector space:
v_1 = [0, 0]',  v_2 = [0, 1]',  v_3 = [1, 0]',  v_4 = [1, 1]'
Which of the following is a complete basis?
(a) e_1 = [1, 0]',  e_2 = [−1, 0]'
(b) e_1 = [1, 0]',  e_2 = [0, 1]'
(c) e_1 = [1, 1]',  e_2 = [1, −1]'
Orthonormal Basis
Two vectors v_i and v_j are orthogonal if v_i · v_j = 0.
A basis is orthonormal if:
All basis vectors are orthogonal to one another.
All basis vectors are normalized.
Example: Complete Orthonormal Basis
Which of the following is a complete orthonormal basis?
(a) e_1 = [1, 0]',  e_2 = [0, 1]'
(b) e_1 = [1, 1]',  e_2 = [1, −1]'
(c) e_1 = [1/√2, 1/√2]',  e_2 = [1/√2, −1/√2]'
(d) e_1 = [1, 1]',  e_2 = [−1, 0]'
EE 561
Communication Theory
Spring 2003
Instructor: Matthew Valenti
Date: Feb 21, 2003
Lecture #16
Announcements
HW #3 is due on Monday.
“sigspace.m” is on webpage.
Review
[Same system block diagram as before. We have been looking at the
modulator part of the system.]
Digital Signaling over
AWGN Channel
System model:
s(t) ∈ { s_1(t), s_2(t), …, s_M(t) }
r(t) = s(t) + n(t)
where n(t) is Gaussian with φ_nn(τ) = (N_0/2) δ(τ).
Signal space representation:
r(t) = Σ_{k=1}^{K} r_k f_k(t) + n'(t)
where the f_k(t) are basis functions for s(t), and n'(t) is orthogonal
noise that can be disregarded.
r_k = ∫_0^T r(t) f_k(t) dt = ∫_0^T s(t) f_k(t) dt + ∫_0^T n(t) f_k(t) dt = s_{m,k} + n_k
Receiver Overview
[Diagram: r(t) → Front end → r → Back end → ŝ.]
Goal: obtain the vector
of sufficient statistics r
from the received signal r(t).
Implementation: either analog
electronics, or digital electronics
working at several (3-10) samples
per symbol period T.
Options:
Correlation receiver
Matchedfilter receiver
Goal: obtain an estimate of the
transmitted signal given the vector r.
Implementation: Digital signal
processing operating at the symbol
period. One vector sample per
symbol period.
Options:
MAP receiver
ML receiver
Front End Design #1:
Bank of Correlators
[Diagram: r(t) feeds K correlators; the kth branch multiplies by f_k(t) and
integrates over [0, T] to produce r_k.]
r_k = ∫_0^T r(t) f_k(t) dt,  r = [r_1, …, r_K]'
Front End Design #2:
Bank of Matched Filters
[Diagram: r(t) feeds K filters h_k(t) = f_k(T − t), each sampled at t = T to
produce r_k.]
h_k(t) = f_k(T − t),  r = [r_1, …, r_K]'
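The correlator front end can be sketched numerically by discretizing the integrals. The basis functions below are a hypothetical choice (√2 cos and √2 sin are orthonormal on [0, 1]), not the ones used later in the lecture:

```python
from math import sin, cos, pi, sqrt

T, N = 1.0, 20_000
dt = T / N
ts = [(i + 0.5) * dt for i in range(N)]

def f1(t): return sqrt(2.0) * cos(2 * pi * t)   # orthonormal on [0, 1]
def f2(t): return sqrt(2.0) * sin(2 * pi * t)

# transmitted signal with signal vector [1, -2] in this basis
def s(t): return 1.0 * f1(t) - 2.0 * f2(t)

# r_k = integral of r(t) * f_k(t) over [0, T], midpoint-rule approximation
r1 = sum(s(t) * f1(t) for t in ts) * dt
r2 = sum(s(t) * f2(t) for t in ts) * dt
print(round(r1, 3), round(r2, 3))   # recovers the signal vector [1, -2]
```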
MAP Decision Rule
ŝ = argmax_{s_m ∈ S} { p_m p(r|s_m) }
substitute the conditional pdf of r given s_m (vector Gaussian):
ŝ = argmax_{s_m} { p_m (πN_0)^{−K/2} exp( −Σ_{k=1}^{K} (r_k − s_{m,k})² / N_0 ) }
take the natural log:
ŝ = argmax_{s_m} { ln[ p_m (πN_0)^{−K/2} exp( −Σ_{k=1}^{K} (r_k − s_{m,k})² / N_0 ) ] }
use ln(xy) = ln(x) + ln(y), ln(x^y) = y ln(x), and ln(exp(x)) = x,
then pull 1/N_0 out of the summation:
ŝ = argmax_{s_m} { ln(p_m) − (K/2) ln(πN_0) − (1/N_0) Σ_{k=1}^{K} (r_k − s_{m,k})² }
MAP Decision Rule
(Continued)
square the term in the summation:
ŝ = argmax_{s_m} { ln(p_m) − (K/2) ln(πN_0) − (1/N_0) Σ_k ( r_k² − 2 r_k s_{m,k} + s_{m,k}² ) }
eliminate terms that are common to all s_m (the Σ r_k² sum and the ln(πN_0) term):
ŝ = argmax_{s_m} { ln(p_m) − (1/N_0) Σ_k ( s_{m,k}² − 2 r_k s_{m,k} ) }
break up the one summation into two summations:
ŝ = argmax_{s_m} { ln(p_m) + (2/N_0) Σ_k s_{m,k} r_k − (1/N_0) Σ_k s_{m,k}² }
use the definition of signal energy, E_m = ∫_0^T s_m²(t) dt = Σ_{k=1}^{K} s_{m,k}²:
ŝ = argmax_{s_m} { ln(p_m) + (2/N_0) Σ_k s_{m,k} r_k − E_m/N_0 }
multiply by N_0/2:
ŝ = argmax_{s_m} { (N_0/2) ln(p_m) + Σ_{k=1}^{K} s_{m,k} r_k − E_m/2 }
We use this equation to design the optimal MAP receiver!
Back End Design #1:
MAP Decision Rule
[Diagram: r → matrix multiply z = S r → add the bias (N_0/2) ln(p_m) − E_m/2
to each z_m → choose largest → ŝ.]
S = matrix whose rows are the signal vectors: row m is [ s_{m,1}, …, s_{m,K} ]
z_m = Σ_{k=1}^{K} s_{m,k} r_k
ML Decision Rule
ML is simply MAP with p_m = 1/M:
ŝ = argmax_{s_m ∈ S} { p(r|s_m) }
ŝ = argmax_m { Σ_{k=1}^K s_{m,k} r_k - E_m/2 }
ŝ = argmax_m { z_m - E_m/2 }
Back End Design #2:
ML Decision Rule
If the p_m's are unknown or all equal, then use the ML (maximum likelihood) decision rule: form z = S r with the same M×K signal matrix S, subtract E_m/2 from each z_m, and choose the largest to produce ŝ.
Example
Start with the following signal set:
What kind of modulation is this?
What is the energy of each signal?
[Plots of s_1(t), s_2(t), s_3(t), and s_4(t), each defined on 0 ≤ t ≤ 2]
Concept of the
Correlation Receiver
Concept
Correlate the received signal against all 4 possible transmitted signals.
Pick the most likely after accounting for p_m.
[Block diagram: r(t) is correlated with each of s_1(t), …, s_4(t) over [0, T]; the bias (N_o/2) ln(p_m) is added to each output z_1, …, z_M; the largest determines ŝ]
Signal Space Representation
Note: the previous receiver is not an efficient implementation.
4 correlators were used.
Could we use fewer correlators?
• We can answer this by using the concept of signal space!
Using the following basis functions:
[Plots of f_1(t) and f_2(t), each defined on 0 ≤ t ≤ 2]
Find the signal vectors and signal space diagram.
A More Efficient MAP Receiver
[Block diagram: r(t) is correlated with only the two basis functions f_1(t) and f_2(t) over [0, T], producing r_1 and r_2; a matrix multiply z = S r then forms]
z_1 = r_1 + r_2
z_2 = r_1 - r_2
z_3 = -r_1 + r_2
z_4 = -r_1 - r_2
[the bias (N_o/2) ln(p_m) is added to each z_m, and the largest determines ŝ; the -E_m/2 terms can be dropped here because all four signals have equal energy]
where
S = [ s_1^T ; s_2^T ; s_3^T ; s_4^T ] = [ 1 1 ; 1 -1 ; -1 1 ; -1 -1 ]
The ML Receiver
[Block diagram: identical to the MAP receiver, except the (N_o/2) ln(p_m) bias terms are dropped; r(t) is correlated with f_1(t) and f_2(t), and the matrix multiply z = S r forms]
z_1 = r_1 + r_2, z_2 = r_1 - r_2, z_3 = -r_1 + r_2, z_4 = -r_1 - r_2
[the largest determines ŝ, with S = [ 1 1 ; 1 -1 ; -1 1 ; -1 -1 ] as before]
Decision Regions
The decision regions can be shown on the signal space diagram.
Example: Assume p_m = ¼ for m = {1, 2, 3, 4}
Thus the MAP and ML rules are the same.
Average Energy Per Bit
The energy of the m-th signal (symbol) is:
E_m = Σ_{k=1}^K s_{m,k}²
The average energy per symbol is:
E_s = E[E_m] = Σ_{m=1}^M p_m E_m
log₂(M) is the number of bits per symbol. Thus the average energy per bit is:
E_b = E_s / log₂(M)
E_b allows for a fair comparison of the energy efficiencies of different signal sets.
We use E_b/N_o for comparison.
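These definitions can be checked in a few lines. The Python sketch below is my own illustration (the four-point ±1 signal set is the example used earlier in this lecture); it computes E_b for an arbitrary set of signal vectors and priors:

```python
import math

def avg_energy_per_bit(signals, priors):
    """E_b = E_s / log2(M), where E_s = sum_m p_m E_m and E_m = sum_k s_{m,k}^2."""
    Es = sum(p * sum(sk * sk for sk in s) for s, p in zip(signals, priors))
    return Es / math.log2(len(signals))

# Four equiprobable signals, each with energy E_m = 2, so E_s = 2 and
# E_b = 2 / log2(4) = 1.
S = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
print(avg_energy_per_bit(S, [0.25] * 4))  # → 1.0
```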
Visualizing Signal Spaces
A MATLAB function has been posted on the web page that allows you to visualize two-dimensional signal spaces and the associated decision regions.
Usage:
sigspace( [x1 y1 p1; x2 y2 p2; …; xM yM pM], EbNodB )
where:
• (xm, ym) is the coordinate of the m-th signal point
• pm is the probability of the m-th signal (can omit this to get the ML receiver)
• EbNodB is E_b/N_o in decibels.
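sigspace itself is MATLAB, but the rule it visualizes is easy to reproduce. The Python sketch below is a stand-in of my own (not the posted function): it classifies a point with the MAP metric z_m + (N_o/2)ln(p_m) - E_m/2, and coloring a grid of such points would trace out the decision regions:

```python
import math

def map_region(point, signals, priors, No):
    """Index of the decision region containing `point` under the MAP rule."""
    def metric(m):
        s, p = signals[m], priors[m]
        z = sum(sk * xk for sk, xk in zip(s, point))
        Em = sum(sk * sk for sk in s)
        return 0.5 * No * math.log(p) + z - 0.5 * Em
    return max(range(len(signals)), key=metric)

# QPSK points from the first sigspace example, equal priors.
S = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
# Sample a coarse 5x5 grid; each entry is the region index of that point.
grid = [[map_region((x / 2, y / 2), S, [0.25] * 4, 1.0)
         for x in range(-2, 3)] for y in range(-2, 3)]
print(grid[4][4])  # the corner point (1, 1) falls in region 0
```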
Example: QPSK with
ML Decision Rule
sigspace( [1 1; 1 -1; -1 1; -1 -1], 10 )
[Plot: "Signal Space and Decision Regions" for the four QPSK points, X and Y axes from -1 to 1]
Example: QPSK with
Unequal Probabilities
sigspace( [1 1 .3; 1 -1 .3; -1 1 .3; -1 -1 .1], 2 )
Example: Extreme Case of
Unequal Probabilities
sigspace( [.5 .5 .3; .5 -.5 .3; -.5 .5 .3; -.5 -.5 .1], 6 )
[Plots: "Signal Space and Decision Regions" for each case]
Example: Unequal Signal Energy
sigspace( [1 1; 2 2; 3 3; 4 4], 10 )
[Plot: "Signal Space and Decision Regions"]
Example: 16-QAM
sigspace( [0.5 0.5; 1.5 0.5; 0.5 1.5; 1.5 1.5; ...
-0.5 0.5; -1.5 0.5; -0.5 1.5; -1.5 1.5; ...
0.5 -0.5; 1.5 -0.5; 0.5 -1.5; 1.5 -1.5; ...
-0.5 -0.5; -1.5 -0.5; -0.5 -1.5; -1.5 -1.5], 10 )
copyright 2003
EE 561
Communication Theory
Spring 2003
Instructor: Matthew Valenti
Date: Feb. 26, 2003
Lecture #18
QPSK and the Union Bound
Assignments
HW #4 is posted.
Due on Monday March 10.
HW #3 solutions posted.
Full chapter 4 solutions included.
Computer Assignment #2
Will be posted later this week.
Will be due on Monday March 24.
I encourage you to finish before the exam.
Midterm exam discussion.
Scheduled for March 14.
We might have a faculty candidate that day and thus may need
to reschedule.
Options: Thursday evening 5-7 PM or short-fuse take-home?
Midterm Exam
Exam Guidelines:
Timed exam. Limited to 2 hours.
• Option (1): Thursday 5-7 PM.
• Option (2): Handed out Wed. 3 PM. Due Fri. 3 PM.
But you are not to work more than 2 hours on it.
• Same exam either way.
Open book and notes.
No help from classmates or from me.
• Must sign pledge and not discuss it.
• If anyone asks you about the exam, you should tell me who.
Covers chapters 1-5, and HW 1-4.
• Computer assignment 1 & 2 are also relevant.
Two sample exams posted on the webpage.
• From 2000 and 2001.
• Should be able to do everything except #1 on 2001 exam.
Review: Receiver Overview
[Block diagram: r(t) → front end → r → back end → ŝ]
The front end projects r(t) onto the basis functions:
r_k = ∫_0^T r(t) f_k(t) dt = ∫_0^T s_m(t) f_k(t) dt + ∫_0^T n(t) f_k(t) dt = s_{m,k} + n_k
MAP rule:
ŝ = argmax_m { (N_o/2) ln(p_m) + Σ_{k=1}^K s_{m,k} r_k - E_m/2 }
ML rule:
ŝ = argmax_m { Σ_{k=1}^K s_{m,k} r_k - E_m/2 }
Error Probability
Symbol error probability:
P_s = Pr[ŝ ≠ s] = Σ_{m=1}^M Pr[s_m] Pr[ŝ ≠ s_m | s_m]
This follows from the "total probability theorem": Pr[s_m] = p_m is the probability that s_m was sent, and Pr[ŝ ≠ s_m | s_m] is the probability of error given that s_m was sent:
Pr[ŝ ≠ s_m | s_m] = Pr[r ∉ R_m | s_m] = ∫_{R̄_m} p(r|s_m) dr = 1 - ∫_{R_m} p(r|s_m) dr
i.e. the probability that the received vector is not in the decision region R_m, given that s_m was sent.
Comments on Error Calculation
For K>1, the integral is multidimensional.
Recall that difficult multidimensional integrals can be
simplified by rotation or translation of coordinates.
This is similar to change of variables in 1D integrals.
Error probability depends on the distance between
signals.
Error probability does not depend on the choice of
coordinates.
Therefore, any translation, rotation, or reflection
operation on the coordinates that does not change
the distance between the signals will not affect the
error probabilities.
QPSK: Definition
Now consider quaternary phase shift keying.
Signals (each defined for 0 < t < T):
s_1(t) = √(2P) cos(2π f_c t)
s_2(t) = √(2P) sin(2π f_c t)
s_3(t) = -√(2P) cos(2π f_c t)
s_4(t) = -√(2P) sin(2π f_c t)
Using Gram-Schmidt orthonormalization we find two basis functions (for 0 < t < T):
f_1(t) = √(2/T) cos(2π f_c t)
f_2(t) = √(2/T) sin(2π f_c t)
Now:
E_s = ∫_0^T s_1²(t) dt = PT,  E_b = E_s/2 = PT/2
QPSK: Signal Space Representation
Signal vectors:
s_1 = [ √E_s, 0 ]^T,  s_2 = [ 0, √E_s ]^T,  s_3 = [ -√E_s, 0 ]^T,  s_4 = [ 0, -√E_s ]^T
Signal space diagram:
[The four points lie on the axes at distance √E_s from the origin, with decision regions R_1, …, R_4 the four wedges around s_1, …, s_4]
QPSK: Coordinate Rotation
The analysis is easier if we rotate coordinates by 45°:
s_1 = [ √(E_s/2), √(E_s/2) ]^T
s_2 = [ -√(E_s/2), √(E_s/2) ]^T
s_3 = [ -√(E_s/2), -√(E_s/2) ]^T
s_4 = [ √(E_s/2), -√(E_s/2) ]^T
[After the rotation, the decision regions R_1, …, R_4 are simply the four quadrants]
QPSK: Conditional Error Probability
Pr[ŝ ≠ s_1 | s_1] = 1 - ∫_{R_1} p(r|s_1) dr
 = 1 - ∫_0^∞ ∫_0^∞ p(r_1, r_2 | s_1) dr_1 dr_2
 = 1 - ∫_0^∞ ∫_0^∞ (1/(π N_o)) exp[ -(r_1 - √(E_s/2))²/N_o ] exp[ -(r_2 - √(E_s/2))²/N_o ] dr_1 dr_2
 = 1 - [ ∫_0^∞ (1/√(π N_o)) exp( -(r_1 - √(E_s/2))²/N_o ) dr_1 ]²
 = 1 - [ 1 - Q(√(E_s/N_o)) ]²
 = 2 Q(√(E_s/N_o)) - Q²(√(E_s/N_o))
QPSK: Symbol Error Probability
From symmetry:
Pr[ŝ ≠ s_m | s_m] = 2 Q(√(E_s/N_o)) - Q²(√(E_s/N_o))  for every m
Thus:
P_s = Σ_{m=1}^M Pr[s_m] Pr[ŝ ≠ s_m | s_m]
 = 2 Q(√(E_s/N_o)) - Q²(√(E_s/N_o))
 = 2 Q(√(2E_b/N_o)) - Q²(√(2E_b/N_o))
since E_s = 2E_b.
QPSK: Bit Error Probability
Assume Gray mapping: label the points around the circle 00, 10, 11, 01, so that adjacent points differ in only one bit.
If a neighbor is mistakenly chosen, then only 1 bit will be wrong, i.e. P_b = P_s/2 for that event.
If the opposite signal is chosen, then both bits are incorrect, i.e. P_b = P_s for that event.
Thus:
P_b = Q(√(2E_b/N_o)) - (1/2) Q²(√(2E_b/N_o)) ≈ Q(√(2E_b/N_o))
since Q²(z) << Q(z).
Same BER as BPSK.
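The closed-form expressions above are easy to evaluate with the standard identity Q(x) = ½ erfc(x/√2). A short Python check of my own:

```python
import math

def qfunc(x):
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def qpsk_Ps(EbNo_dB):
    """Exact QPSK symbol error probability: 2Q(sqrt(2Eb/No)) - Q^2(sqrt(2Eb/No))."""
    ebno = 10.0 ** (EbNo_dB / 10.0)
    q = qfunc(math.sqrt(2.0 * ebno))
    return 2.0 * q - q * q

def qpsk_Pb(EbNo_dB):
    """Gray-mapped QPSK bit error probability: Q(sqrt(2Eb/No)), same as BPSK."""
    ebno = 10.0 ** (EbNo_dB / 10.0)
    return qfunc(math.sqrt(2.0 * ebno))

for snr in (0, 4, 8):
    print(snr, qpsk_Ps(snr), qpsk_Pb(snr))
```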
Error Probability for Large M
In theory, we can compute any symbol error probability using the appropriate integrals.
In practice, this becomes tedious for large constellations (M > 4).
The decision region R_j has a complicated shape.
Pr[ŝ ≠ s_i | s_i] = Pr[r ∉ R_i | s_i] = Σ_{j=1, j≠i}^M Pr[r ∈ R_j | s_i] = Σ_{j=1, j≠i}^M ∫_{R_j} p(r|s_i) dr
Conditional Error Probabilities
Consider the following term for QPSK:
Pr[r ∈ R_4 | s_1] = ∫_{R_4} p(r|s_1) dr
We must integrate over R_4. This is tricky because R_4 has two boundaries.
[Signal space diagram showing s_1, …, s_4 and the decision regions R_1, …, R_4]
A Bound on Probability
We can bound this probability by only integrating over a region with just one boundary: ignore the presence of s_2 and s_3. Then the receiver picks s_4 over s_1 whenever z_4 ≥ z_1:
Pr[r ∈ R_4 | s_1] ≤ Pr[z_4 ≥ z_1 | s_1]
The right-hand side is the pairwise error probability: if we may only choose between s_1 and s_4, it is the probability of picking s_4 over s_1. Now this is easier to evaluate.
Calculation of Pairwise Error Prob.
By appropriate rotation & translation, we can express the pairwise decision problem as two points s_i and s_j at ±d_{i,j}/2 on a single axis, with the boundary between R_i and R_j at the origin.
This is just like BPSK!
Pr[z_j ≥ z_i | s_i] = Q( d_{i,j} / √(2 N_o) )
The Union Bound
Putting it all together:
Pr[ŝ ≠ s_i | s_i] = Σ_{j≠i} ∫_{R_j} p(r|s_i) dr ≤ Σ_{j≠i} Pr[z_j ≥ z_i | s_i] ≤ Σ_{j≠i} Q( d_{i,j} / √(2 N_o) )
And the symbol error probability becomes:
P_s = Σ_{i=1}^M p_i Pr[ŝ ≠ s_i | s_i] ≤ Σ_{i=1}^M p_i Σ_{j≠i} Q( d_{i,j} / √(2 N_o) )
This is called the Union bound.
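The Union bound is simple to evaluate for any constellation. A minimal Python sketch of my own, assuming equiprobable signals:

```python
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_Ps(signals, No):
    """Union bound on symbol error probability for equiprobable signals:
    Ps <= (1/M) * sum_i sum_{j != i} Q(d_ij / sqrt(2 No))."""
    M = len(signals)
    total = 0.0
    for i in range(M):
        for j in range(M):
            if i == j:
                continue
            d = math.sqrt(sum((a - b) ** 2
                              for a, b in zip(signals[i], signals[j])))
            total += qfunc(d / math.sqrt(2.0 * No))
    return total / M

# QPSK with the rotated points at (+/-1, +/-1), i.e. Es = 2:
S = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
print(union_bound_Ps(S, 0.5))
```

Each point contributes two nearest-neighbor terms and one diagonal term, so the result equals 2Q(2) + Q(2√2) for this choice of N_o.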
Example: QPSK
Find the Union bound on P_s for QPSK:
Consider the exact calculation:
Pr[ŝ ≠ s_1 | s_1] = Σ_{j=2}^4 ∫_{R_j} p(r|s_1) dr
Now consider the Union bound:
Pr[ŝ ≠ s_1 | s_1] ≤ Σ_{j=2}^4 Q( d_{1,j} / √(2 N_o) )
[Diagrams: the pairwise half-planes used by the bound overlap; the shaded overlap area has been accounted for already, so there is no need to include it again]
Improved Union Bound
Let A_i be the set of signals with decision regions directly adjacent to R_i (i.e. they share a common boundary).
Then:
P_s ≤ Σ_{i=1}^M p_i Σ_{j ∈ A_i} Q( d_{i,j} / √(2 N_o) )
Example: QPSK
[For s_1, only s_2 and s_4 are adjacent, so the term for the opposite signal s_3 is dropped]
Comparison
[Plot: "Performance of QPSK", BER vs E_b/N_o in dB (0 to 6), showing the Union bound, the Improved Union bound, and the exact result]
Mid-Semester!
Right now, we are at the midpoint of this class.
copyright 2003
EE 561
Communication Theory
Spring 2003
Instructor: Matthew Valenti
Date: Feb. 28, 2003
Lecture #19
PAM, PSK, and QAM
Review: Union Bound
The symbol error rate of any digital modulation over an AWGN channel can be found using the Union bound:
P_s ≤ Σ_{i=1}^M Pr[s_i] Σ_{j=1, j≠i}^M Q( d_{i,j} / √(2 N_o) )
where the distance between signals is
d_{i,j} = |s_i - s_j| = √( Σ_{k=1}^K (s_{i,k} - s_{j,k})² )
A better bound (tighter and easier to compute) can be found using:
P_s ≤ Σ_{i=1}^M Pr[s_i] Σ_{j ∈ A_i} Q( d_{i,j} / √(2 N_o) ),  where A_i = { j : R_j borders R_i }
Categories of Digital Modulation
Digital Modulation can be classified as:
PAM: pulse amplitude modulation
PSK: phase shift keying
QAM: quadrature amplitude modulation
Orthogonal signaling
Biorthogonal signaling
Simplex signaling
We have already defined and analyzed some of
these.
For completeness, we will now define and analyze all
of these formats.
Note: the definition & performance depend only on the geometry of the signal constellation, not on the choice of basis functions!
Bandwidth of Digital Modulation
The bandwidth W of a digitally modulated signal must satisfy:
W ≥ K R_s / 2 = K R_b / (2 log₂(M))
The actual bandwidth depends on the choice of basis functions ("pulse shaping").
For sinc pulses this is an equality.
However, if the basis functions are confined to time [0, T], then:
W ≥ K R_s = K R_b / log₂(M)
PAM
Pulse Amplitude Modulation
K = 1 dimension.
Usually, the M signals are equally spaced along the line:
d_{i,j} = d_min (a constant) for |j - i| = 1
Usually, the M signals are symmetric about the origin:
s_i = [ (2i - 1 - M) d_min/2 ]
Example: 8-PAM, with s_1, …, s_8 spaced d_min apart and centered about 0.
Performance of PAM
Using the Improved Union bound, assuming equiprobable signaling:
P_s ≤ Σ_{i=1}^M Pr[s_i] Σ_{j ∈ A_i} Q( d_{i,j} / √(2 N_o) ) = (1/M) Σ_{i=1}^M Σ_{j ∈ A_i} Q( d_min / √(2 N_o) )
There are two cases to consider:
outer points have only 1 neighbor (there are 2 of these)
inner points have 2 neighbors (there are M - 2 of these)
Performance of PAM
For the "outer" points (one neighbor):
Pr[ŝ ≠ s_i | s_i] ≤ Q( d_min / √(2 N_o) )  for i = 1 and i = M
For the "inner" points (two neighbors):
Pr[ŝ ≠ s_i | s_i] ≤ 2 Q( d_min / √(2 N_o) )  for 2 ≤ i ≤ M - 1
Therefore,
P_s = Σ_{i=1}^M p_i Pr[ŝ ≠ s_i | s_i]
 ≤ (1/M) [ 2 Q( d_min / √(2 N_o) ) + 2 (M - 2) Q( d_min / √(2 N_o) ) ]
 = ( 2(M - 1)/M ) Q( d_min / √(2 N_o) )
Performance of PAM
Using the Improved Union bound:
P_s ≤ ( 2(M - 1)/M ) Q( d_min / √(2 N_o) )
We would like an expression in terms of E_b. However, unlike other formats we have considered, the energy of the M signals is not the same:
E_i = [ (2i - 1 - M) d_min/2 ]²
PAM
Average Energy per Symbol
Assuming equiprobable signaling:
E_s = Σ_{i=1}^M p_i E_i = (1/M) Σ_{i=1}^M [ (2i - 1 - M) d_min/2 ]²
From symmetry, the sum over all M points is twice the sum over the M/2 points on one side of the origin:
E_s = (2/M) (d_min/2)² Σ_{n=1}^{M/2} (2n - 1)²
Using the arithmetic series Σ_{n=1}^N (2n - 1)² = (1/3) N (4N² - 1), with N = M/2:
E_s = (2/M) (d_min²/4) (1/3)(M/2)(M² - 1) = ( (M² - 1)/12 ) d_min²
Performance of PAM
Solve for d_min:
d_min = √( 12 E_s / (M² - 1) )
Substitute into the Union bound expression:
P_s ≤ ( 2(M - 1)/M ) Q( d_min / √(2 N_o) )
 = ( 2(M - 1)/M ) Q( √( 6 E_s / ((M² - 1) N_o) ) )
 = ( 2(M - 1)/M ) Q( √( 6 log₂(M) E_b / ((M² - 1) N_o) ) )
Bit error probability: if Gray coding is used, only 1 bit error will normally occur when there is a symbol error, so
P_b ≈ ( 2(M - 1) / (M log₂(M)) ) Q( √( 6 log₂(M) E_b / ((M² - 1) N_o) ) )
Eq. (5.246)
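The PAM bound can be coded directly. In the sketch below (my own check, not from the notes), note that for M = 2 the bound reduces to the BPSK result Q(√(2E_b/N_o)):

```python
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pam_Ps_bound(M, EbNo_dB):
    """Improved union bound for M-PAM:
    Ps <= 2(M-1)/M * Q( sqrt(6 log2(M) Eb / ((M^2 - 1) No)) )."""
    ebno = 10.0 ** (EbNo_dB / 10.0)
    arg = math.sqrt(6.0 * math.log2(M) * ebno / (M * M - 1.0))
    return 2.0 * (M - 1.0) / M * qfunc(arg)

# For M = 2 the bound is Q(sqrt(2 Eb/No)), i.e. exactly BPSK.
print(pam_Ps_bound(2, 6), pam_Ps_bound(8, 6))
```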
Performance Comparison: PAM
[Plot: BER vs Eb/No (dB) for 2-PAM, 4-PAM, 8-PAM, 16-PAM, and 32-PAM]
Performance gets worse as M increases.
MPSK
M-ary Phase Shift Keying
K = 2 dimensions.
Signals are equally spaced along a circle of radius √E_s:
s_i = [ √E_s cos( 2π(i - 1)/M ), √E_s sin( 2π(i - 1)/M ) ]^T
Example: 8-PSK
MPSK: Distances
Distance between two adjacent points: the angle between them is 2π/M radians.
Use the law of cosines, c² = a² + b² - 2ab cos(C):
d_{ij}² = E_s + E_s - 2 E_s cos(2π/M) = 2 E_s [ 1 - cos(2π/M) ] = 4 E_s sin²(π/M)
using the identity 1 - cos(2θ) = 2 sin²(θ). Thus, for adjacent points:
d_{ij} = 2 √E_s sin(π/M)
Performance of MPSK
Symbol error probability (M > 2):
P_s ≤ 2 Q( √(2 E_s/N_o) sin(π/M) ) = 2 Q( √(2 log₂(M) E_b/N_o) sin(π/M) )
Bit error probability (M > 2):
P_b ≈ ( 2/log₂(M) ) Q( √(2 log₂(M) E_b/N_o) sin(π/M) )
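A corresponding Python sketch for the MPSK symbol error bound (my own illustration):

```python
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mpsk_Ps_bound(M, EbNo_dB):
    """Union-bound estimate for M-PSK (M > 2):
    Ps <= 2 Q( sqrt(2 log2(M) Eb/No) * sin(pi/M) )."""
    ebno = 10.0 ** (EbNo_dB / 10.0)
    return 2.0 * qfunc(math.sqrt(2.0 * math.log2(M) * ebno)
                       * math.sin(math.pi / M))

print(mpsk_Ps_bound(4, 6), mpsk_Ps_bound(16, 6))
```

Evaluating it for increasing M at a fixed E_b/N_o reproduces the trend in the plot below: the Q-function argument shrinks with sin(π/M), so the bound grows.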
Performance Comparison: MPSK
[Plot: BER vs Eb/No in dB for 4-PSK, 8-PSK, 16-PSK, 32-PSK, and 64-PSK]
Performance gets worse as M increases.
QAM
Quadrature Amplitude Modulation
K = 2 dimensions
Points can be placed anywhere on the plane.
The actual P_s depends on the geometry.
Example: 16-QAM with minimum distance d_min:
Corner points have 2 neighbors (there are 4 of these)
Edge points have 3 neighbors (there are 8 of these)
Interior points have 4 neighbors (there are 4 of these)
Performance of Example
QAM Constellation
Improved Union Bound:
P_s = Σ_{i=1}^M p_i Pr[ŝ ≠ s_i | s_i]
 ≤ (1/16) [ 4 · 2 Q( d_min/√(2 N_o) ) + 8 · 3 Q( d_min/√(2 N_o) ) + 4 · 4 Q( d_min/√(2 N_o) ) ]
 = (48/16) Q( d_min/√(2 N_o) )
 = 3 Q( d_min/√(2 N_o) )
Performance of Example
QAM Constellation
We would like to express P_s in terms of E_b. However, as with PAM, the energy of the M signals is not the same:
Corner points: E_i = (3 d_min/2)² + (3 d_min/2)² = (9/2) d_min²
Edge points: E_i = (3 d_min/2)² + (d_min/2)² = (5/2) d_min²
Interior points: E_i = (d_min/2)² + (d_min/2)² = (1/2) d_min²
QAM
Average Energy per Symbol
Average energy per symbol, assuming equiprobable signaling:
E_s = Σ_i p_i E_i = (1/16) [ 4 · (9/2) d_min² + 8 · (5/2) d_min² + 4 · (1/2) d_min² ] = (40/16) d_min² = (5/2) d_min²
Solving for d_min:
d_min = √( 2 E_s / 5 )
Performance of Example
QAM Constellation
Substitute into the Union bound expression:
P_s ≤ 3 Q( d_min/√(2 N_o) ) = 3 Q( √( E_s/(5 N_o) ) ) = 3 Q( √( 4 E_b/(5 N_o) ) )
For other values of M, you must compute P_s in this manner.
The relationship between P_s and P_b depends on how bits are mapped to symbols.
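It is worth confirming numerically that d_min = √(2E_s/5) is consistent with averaging the symbol energies over the 16-QAM grid. A Python sketch under the slide's assumptions (equiprobable points at all combinations of ±d_min/2 and ±3d_min/2):

```python
import math

def qam16_points(dmin):
    """The 16-QAM grid with spacing dmin: coordinates at +/-dmin/2, +/-3*dmin/2."""
    levels = [-1.5 * dmin, -0.5 * dmin, 0.5 * dmin, 1.5 * dmin]
    return [(x, y) for x in levels for y in levels]

def avg_symbol_energy(points):
    """E_s = average of |s_i|^2 over equiprobable points."""
    return sum(x * x + y * y for (x, y) in points) / len(points)

Es = 4.0                          # pick any target symbol energy
dmin = math.sqrt(2.0 * Es / 5.0)  # slide result: dmin = sqrt(2 Es / 5)
print(avg_symbol_energy(qam16_points(dmin)))  # recovers Es (up to rounding)
```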
Comparison:
QAM vs. MPSK
[Plot: symbol error probability vs Eb/No in dB for 16-QAM, 16-PSK, and 16-PAM]
Performance of QAM is better than MPSK.
copyright 2003
EE 561
Communication Theory
Spring 2003
Instructor: Matthew Valenti
Date: Mar. 3, 2003
Lecture #20
Orthogonal, biorthogonal,
and simplex modulation
Review: Performance
of Digital Modulation
We have been finding performance bounds for several modulation types defined in class.
The Union bound:
P_s ≤ Σ_{i=1}^M Pr[s_i] Σ_{j=1, j≠i}^M Q( d_{i,j} / √(2 N_o) )
The Improved Union bound:
P_s ≤ Σ_{i=1}^M Pr[s_i] Σ_{j ∈ A_i} Q( d_{i,j} / √(2 N_o) ),  where A_i = { j : R_j borders R_i }
PAM
Pulse Amplitude Modulation
Definition:
K = 1 dimension.
Signals are equally spaced and symmetric about the origin:
s_i = [ (2i - 1 - M) d_min/2 ]
Performance:
P_s ≤ ( 2(M - 1)/M ) Q( √( 6 log₂(M) E_b / ((M² - 1) N_o) ) )
P_b ≈ ( 2(M - 1) / (M log₂(M)) ) Q( √( 6 log₂(M) E_b / ((M² - 1) N_o) ) )
Example: 8-PAM
MPSK
M-ary Phase Shift Keying
Definition:
K = 2 dimensions.
Signals equally spaced along a circle of radius √E_s:
s_i = [ √E_s cos( 2π(i - 1)/M ), √E_s sin( 2π(i - 1)/M ) ]^T
Performance:
P_s ≤ 2 Q( √(2 log₂(M) E_b/N_o) sin(π/M) )
P_b ≈ ( 2/log₂(M) ) Q( √(2 log₂(M) E_b/N_o) sin(π/M) )
Example: 8-PSK
QAM
Quadrature Amplitude Modulation
Definition:
K = 2 dimensions
Points can be placed anywhere on the plane.
Neighboring points are normally distance d_min apart.
The constellation normally takes on a "box" or "cross" shape.
Performance:
Depends on geometry. In general, when p_i = 1/M:
P_s ≤ (1/M) [ N₂ · 2 Q( d_min/√(2 N_o) ) + N₃ · 3 Q( d_min/√(2 N_o) ) + N₄ · 4 Q( d_min/√(2 N_o) ) ]
where N₂, N₃, and N₄ are the number of points with 2, 3, and 4 neighbors, respectively.
Example: 16-QAM
QAM: Continued
Need to relate d_min to E_s and E_b.
Because QAM signals don't have constant energy:
E_s = Σ_{i=1}^M p_i E_i = Σ_{i=1}^M p_i |s_i|² = (1/M) Σ_{i=1}^M |s_i|²
Solve the above to get d_min = f(E_s) and plug into the expression for P_s.
Bit error probability is difficult to determine, because the exact mapping of bits to symbols must be taken into account.
Orthogonal Signaling
K = M dimensions.
Signal space representation:
s_1 = [ √E_s, 0, …, 0 ]^T,  s_2 = [ 0, √E_s, 0, …, 0 ]^T,  …,  s_M = [ 0, …, 0, √E_s ]^T
Example: 3-FSK, with one point at distance √E_s along each of the three axes.
Performance of
Orthogonal Signaling
Distances: the signal points are equally distant,
d_{ij} = √(2 E_s)  for all i ≠ j
Using the Union bound:
P_s ≤ (M - 1) Q( √(E_s/N_o) ) = (M - 1) Q( √( log₂(M) E_b/N_o ) )
Bit error probability (see Eq. (5.224) for details):
P_b ≤ ( 2^(log₂(M) - 1) / (2^(log₂(M)) - 1) ) (M - 1) Q( √( log₂(M) E_b/N_o ) )
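A Python sketch of the orthogonal-signaling symbol error bound (my own illustration); evaluating it shows the trend in the plot that follows, namely that the bound improves with M at a fixed E_b/N_o:

```python
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def orth_Ps_bound(M, EbNo_dB):
    """Union bound for M-ary orthogonal signaling:
    Ps <= (M - 1) Q( sqrt(log2(M) * Eb/No) )."""
    ebno = 10.0 ** (EbNo_dB / 10.0)
    return (M - 1.0) * qfunc(math.sqrt(math.log2(M) * ebno))

# Unlike PAM and PSK, at high Eb/No larger M gives a smaller bound:
print(orth_Ps_bound(2, 10), orth_Ps_bound(64, 10))
```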
Performance Comparison:
FSK (Orthogonal)
[Plot: BER vs Eb/No in dB for 2-FSK, 4-FSK, 8-FSK, 16-FSK, 32-FSK, and 64-FSK]
Performance gets better as M increases.
Limits of FSK
As M → ∞, then P_s → 0 provided that (Eq. (5.230)):
E_b/N_o > ln 2 ≈ -1.59 dB
Although FSK is energy efficient, it is not bandwidth efficient (Eq. (5.286)):
W ≥ K R_s / 2 = M R_b / (2 log₂(M))
BW efficiency can be improved by using:
Biorthogonal signaling.
Simplex signaling.
Biorthogonal Signaling
K = M/2 dimensions (M is even).
The first M/2 signals are orthogonal:
s_i(t) = f_i(t)  for 1 ≤ i ≤ M/2
The remaining M/2 signals are the negatives:
s_i(t) = -f_{i-M/2}(t)  for M/2 + 1 ≤ i ≤ M
Since there are half as many dimensions as orthogonal signaling, the bandwidth requirement is halved:
W ≥ K R_s / 2 = M R_b / (4 log₂(M))
2
0
0
3
Example Biorthogonal Signal Set
Biorthogonal signal set for M=6.
E
s
E
s
E
s
− E
s
− E
s
− E
s
Performance of Biorthogonal Signals
Compute the distances:
d_{i,j} = 0 for i = j
d_{i,j} = 2 √E_s for |i - j| = M/2 (antipodal pairs)
d_{i,j} = √(2 E_s) otherwise
Union Bound:
P_s ≤ (M - 2) Q( √(E_s/N_o) ) + Q( √(2 E_s/N_o) )
Improved Union Bound:
P_s ≤ (M - 2) Q( √(E_s/N_o) ) = (M - 2) Q( √( log₂(M) E_b/N_o ) )
Performance of biorthogonal is actually slightly better than orthogonal.
Simplex Signaling
Consider our 3-FSK example:
» The 3 points form a plane, so all 3 points can be placed on a 2-dimensional plot.
» By changing coordinates, we can reduce the number of dimensions from 3 to 2.
Make a new constellation:
» 2-dimensional instead of 3.
» The origin of the new constellation is the mean of the old constellation.
» The distances between signals (d_min) are the same.
Simplex Signaling
To create a simplex signal set:
Start with an orthogonal signal set.
Compute the centroid of the set:
s̄ = Σ_{i=1}^M p_i s_i = (1/M) Σ_{i=1}^M s_i = [ √E_s/M, …, √E_s/M ]^T
Shift the signals so that they are centered about the centroid:
s_i' = s_i - s̄
Simplex Signaling
Compute a new set of basis functions.
• The new set of signals has dimensionality K = M - 1.
• Therefore the bandwidth is
W ≥ K R_s / 2 = (M - 1) R_b / (2 log₂(M))
The average energy of the simplex constellation is now less than that of the original orthogonal set:
E_s' = |s_i - s̄|² = ( 1 - 1/M ) E_s
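The construction can be verified numerically: after the centroid shift, every signal's energy drops to (1 - 1/M)E_s while the pairwise distances are unchanged. A Python sketch of my own, assuming an equiprobable orthogonal starting set:

```python
import math

def simplex_from_orthogonal(M, Es):
    """Build a simplex set by shifting an M-ary orthogonal set by its centroid."""
    a = math.sqrt(Es)
    orth = [[a if k == i else 0.0 for k in range(M)] for i in range(M)]
    centroid = [sum(s[k] for s in orth) / M for k in range(M)]
    return [[s[k] - centroid[k] for k in range(M)] for s in orth]

M, Es = 3, 2.0
simp = simplex_from_orthogonal(M, Es)
energies = [sum(x * x for x in s) for s in simp]
print(energies)  # each approximately (1 - 1/M) * Es = 4/3

# Pairwise distances are unchanged: still sqrt(2 Es) between any two signals.
d01 = math.sqrt(sum((a - b) ** 2 for a, b in zip(simp[0], simp[1])))
print(d01)
```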
Performance of Simplex Signals
The distances between signals remain the same as with orthogonal signaling. Therefore, the symbol error probability is:
P_s ≤ (M - 1) Q( √(E_s/N_o) )
But this is in terms of the old E_s; we want it in terms of the new one. Since
E_s' = ( 1 - 1/M ) E_s,  i.e.  E_s = ( M/(M - 1) ) E_s'
we have:
P_s ≤ (M - 1) Q( √( M E_s' / ((M - 1) N_o) ) ) = (M - 1) Q( √( M log₂(M) E_b / ((M - 1) N_o) ) )
BPSK and QPSK
Categorize the following:
BPSK
MPSK? QAM? Orthogonal? Biorthogonal? Simplex?
QPSK
MPSK? QAM? Orthogonal? Biorthogonal? Simplex?