Digital Signal Processing in a
Nutshell (Volume I)
Jan P. Allebach and Krithika Chandrasekar
Contents

1 Signals
  1.1 Chapter Outline
  1.2 Signal Types
  1.3 Signal Characteristics
  1.4 Signal Transformations
  1.5 Special Signals
  1.6 Complex Variables
  1.7 Singularity functions
  1.8 Comb and Replication Operations

2 Systems
  2.1 Chapter Outline
  2.2 Systems
  2.3 System Properties
  2.4 Convolution
  2.5 Frequency Response

3 Fourier Analysis
  3.1 Chapter Outline
  3.2 Continuous-Time Fourier Series (CTFS)
  3.3 Continuous-Time Fourier Transform
  3.4 Discrete-Time Fourier Transform

4 Sampling
  4.1 Chapter Outline
  4.2 Analysis of Sampling
  4.3 Relation between CTFT and DTFT
  4.4 Sampling Rate Conversion

5 Z Transform (ZT)
  5.1 Chapter Outline
  5.2 Derivation of the Z Transform
  5.3 Convergence of the ZT
  5.4 ZT Properties and Pairs
  5.5 ZT, Linear Constant Coefficient Difference Equations
  5.6 Inverse Z Transform
  5.7 General Form of Response of LTI Systems

6 Discrete Fourier Transform (DFT)
  6.1 Chapter Outline
  6.2 Derivation of the DFT
  6.3 DFT Properties and Pairs
  6.4 Spectral Analysis Via the DFT
  6.5 Fast Fourier Transform (FFT) Algorithm
  6.6 Periodic Convolution

7 Digital Filter Design
  7.1 Chapter Outline
  7.2 Overview
  7.3 Finite Impulse Response Filter Design
  7.4 Infinite Impulse Response Filter Design

8 Random Signals
  8.1 Chapter Outline
  8.2 Single Random Variable
  8.3 Two Random Variables
  8.4 Sequences of Random Variables
  8.5 Estimating Distributions
  8.6 Filtering of Random Sequences
  8.7 Estimating Correlation
Chapter 1
Signals
1.1 Chapter Outline
In this chapter, we will discuss:
1. Signal Types
2. Signal Characteristics
3. Signal Transformations
4. Special Signals
5. Complex Variables
6. Singularity Functions
7. Comb and Replication Operators
1.2 Signal Types
There are three diﬀerent types of signals:
1. Continuous-time (CT)
2. Discrete-time (DT)
3. Digital (discrete-time and discrete-amplitude)
The diﬀerent types of signals are shown in Figure 1.2(a) below.
Figure 1.2(a) Diﬀerent types of signals.
Comments
• The CT signal need not be continuous in amplitude
• The DT signal is undefined between sampling instants
• Levels need not be uniformly spaced in a digital signal
• We make no distinction between discrete-time and digital signals in
theory
• The independent variable need not be time
• x(n) may or may not be sampled from an analog waveform, i.e.
   x_d(n) = x_a(nT),   T = sampling interval
  Throughout this book, we will use the subscripts d and a only
  when necessary for clarity.
Equivalent Representations for DT signals
Figure 1.2(b) Equivalent representations for DT signals.
1.3 Signal Characteristics
Signals can be:
1. Deterministic or random
2. Periodic or aperiodic
   For periodic signals, x(t) = x(t + nT) for some fixed T, all
   integers n, and all times t.
3. Right-sided, left-sided, or two-sided
   For right-sided signals, x(t) ≠ 0 only when t ≥ t_0.
   If t_0 ≥ 0, the signal is causal.
   For left-sided signals, x(t) ≠ 0 only when t ≤ t_0.
   If t_0 ≤ 0, the signal is anticausal.
   For two-sided (mixed causal) signals, x(t) ≠ 0 for some t < 0 and
   x(t) ≠ 0 for some t > 0.
4. Finite or infinite duration
   For finite-duration signals, x(t) ≠ 0 only for −∞ < t_1 ≤ t ≤ t_2 < ∞.
Metrics

Metric          CT signal                                          DT signal
Energy          E_x = ∫ |x(t)|^2 dt                                E_x = Σ_n |x(n)|^2
Power           P_x = lim_{T→∞} (1/2T) ∫_{−T}^{T} |x(t)|^2 dt      P_x = lim_{N→∞} 1/(2N+1) Σ_{n=−N}^{N} |x(n)|^2
Magnitude       M_x = max_t |x(t)|                                 M_x = max_n |x(n)|
Area            A_x = ∫ x(t) dt                                    A_x = Σ_n x(n)
Average value   x_avg = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) dt        x_avg = lim_{N→∞} 1/(2N+1) Σ_{n=−N}^{N} x(n)
Comments
• If the signal is periodic, we need only average over one period, e.g. if
  x(n) = x(n + N) for all n,
     P_x = (1/N) Σ_{n=n_0}^{n_0+N−1} |x(n)|^2,  for any n_0
• root-mean-square (rms) value
     x_rms = √P_x
Examples
Example 1
Figure 1.3(a) Signal for Example 1.
   x(t) = { e^{−(t+1)/2},  t > −1
          { 0,             t < −1
The signal is:
1. deterministic
2. aperiodic
3. right-sided
4. mixed causal
Metrics
1. E_x = ∫_{−1}^{∞} |e^{−(t+1)/2}|^2 dt;  let s = t + 1
   E_x = ∫_0^{∞} e^{−s} ds = −e^{−s} |_0^{∞} = 1
2. P_x = 0
3. M_x = 1
4. A_x = 2
5. x_avg = 0
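These values are easy to sanity-check numerically. A rough sketch using numpy (the truncation point t = 60 and the step size are arbitrary choices; the integrals are approximated by Riemann sums):

```python
import numpy as np

# x(t) = exp(-(t+1)/2) for t > -1; truncate the integrals at t = 60,
# where the tail is negligible.
dt = 1e-4
t = np.arange(-1.0, 60.0, dt)
x = np.exp(-(t + 1) / 2)

E_x = np.sum(np.abs(x)**2) * dt   # energy, analytically 1
A_x = np.sum(x) * dt              # area, analytically 2

print(round(E_x, 3), round(A_x, 3))  # close to 1.0 and 2.0
```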
Example 2
Figure 1.3(b) Signal for Example 2.
   x(n) = A cos(πn/4)
The signal is:
1. deterministic
2. periodic (N = 8)
3. two-sided
4. mixed causal
Metrics
1. E_x = ∞
2. P_x = (1/8) Σ_{n=0}^{7} |A cos(πn/4)|^2
   P_x = (A^2/16) Σ_{n=0}^{7} [1 + cos(πn/2)]
   P_x = A^2/2
3. x_rms = A/√2
4. M_x = A
5. A_x is undefined
6. x_avg = 0
1.4 Signal Transformations
Continuous-Time Case
Consider
1. Reﬂection
y(t) = x(−t)
2. Shifting
   y(t) = x(t − t_0)
3. Scaling
   y(t) = x(t/a)
4. Scaling and Shifting
Example 1
   y(t) = x(t/a − t_0)
What does this signal look like?
   center of pulse:     t/a − t_0 = 0    ⇒ t = a t_0
   left edge of pulse:  t/a − t_0 = −1/2 ⇒ t = a(t_0 − 1/2)
   right edge of pulse: t/a − t_0 = 1/2  ⇒ t = a(t_0 + 1/2)
Example 2
   y(t) = x((t − t_0)/a)
What does this signal look like?
   center:              (t − t_0)/a = 0    ⇒ t = t_0
   left edge of pulse:  (t − t_0)/a = −1/2 ⇒ t = t_0 − a/2
   right edge of pulse: (t − t_0)/a = 1/2  ⇒ t = t_0 + a/2
Discrete-Time Case
Consider
1. Reflection
   y(n) = x(−n)
2. Shifting
   y(n) = x(n − n_0)
   Note that n_0 must be an integer. Non-integer delays can only be
   defined in the context of a CT signal, and are implemented by
   interpolation.
3. Scaling
   a. downsampler
      y(n) = x(Dn),  D ∈ Z
   b. upsampler
      y(n) = { x(n/D),  n/D ∈ Z, D ∈ Z
             { 0,       else
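Both scaling operations reduce to simple indexing on finite sequences. A minimal numpy sketch (the test signal and the factor D are arbitrary):

```python
import numpy as np

def downsample(x, D):
    # y(n) = x(Dn): keep every D-th sample
    return x[::D]

def upsample(x, D):
    # y(n) = x(n/D) when n is a multiple of D, else 0
    y = np.zeros(len(x) * D, dtype=x.dtype)
    y[::D] = x
    return y

x = np.array([1, 2, 3, 4, 5, 6])
print(downsample(x, 2))                  # [1 3 5]
print(upsample(np.array([1, 2, 3]), 2))  # [1 0 2 0 3 0]
```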
1.5 Special Signals
Unit step
a. CT case
   u(t) = { 1, t > 0
          { 0, t < 0
b. DT case
   u(n) = { 1, n ≥ 0
          { 0, else
Rectangle
   rect(t) = { 1, |t| < 1/2
             { 0, |t| > 1/2
Triangle
   Λ(t) = { 1 − |t|,  |t| ≤ 1
          { 0,        else
Sinusoids
a. CT
   x_a(t) = A sin(ω_a t + θ)
   where A is the amplitude
         ω_a is the frequency
         θ is the phase
   Analog frequency
      ω_a [radians/sec] = 2π f_a [cycles/sec]
   Period
      T [sec/cycle] = 1/f_a
b. DT
   x_d(n) = A sin(ω_d n + θ)
   Digital frequency
      ω_d [radians/sample] = 2π µ [cycles/sample]
Comments
1. Depending on ω, x_d(n) may not look like a sinusoid
2. Digital frequencies ω_1 = ω_0 and ω_2 = ω_0 + 2πk are equivalent:
      x_2(n) = A sin(ω_2 n + θ)
             = A sin[(ω_0 + 2πk)n + θ]
             = x_1(n), for all n
3. x_d(n) is periodic if and only if ω_d = 2π(p/q) where p and q are
   integers. In this case (with p/q in lowest terms), the period is q.
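The periodicity condition can be verified by brute force (a sketch; the search bound, tolerance, and test frequencies are arbitrary choices):

```python
import numpy as np

def dt_period(omega, n_max=1000, tol=1e-9):
    """Smallest q with sin(omega*(n+q)) = sin(omega*n) for all n, else None."""
    n = np.arange(200)
    x = np.sin(omega * n)
    for q in range(1, n_max):
        if np.allclose(np.sin(omega * (n + q)), x, atol=tol):
            return q
    return None

print(dt_period(np.pi / 4))   # omega = 2*pi*(1/8): period 8
print(dt_period(1.0))         # omega/(2*pi) irrational: aperiodic, None
```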
Sinc
   sinc(t) = sin(πt)/(πt)
1.6 Complex Variables
Definition
Cartesian coordinates
   z = (x, y)
Polar coordinates
   z = R∠θ
   R = √(x^2 + y^2)      x = R cos(θ)
   θ = arctan(y/x)       y = R sin(θ)
Alternate representation
   define j = (0, 1) = 1∠(π/2)
   then z = x + jy
Additional notation
   Re{z} = x
   Im{z} = y
   |z| = R
Algebraic Operations
Addition (Cartesian Coordinates)
   z_1 = x_1 + j y_1
   z_2 = x_2 + j y_2
   z_1 + z_2 = (x_1 + x_2) + j(y_1 + y_2)
Multiplication (Polar Coordinates)
   z_1 = R_1 ∠θ_1
   z_2 = R_2 ∠θ_2
   z_1 z_2 = R_1 R_2 ∠(θ_1 + θ_2)
Special Examples
1. 1 = (1, 0) = 1∠0
2. −1 = (−1, 0) = 1∠180°
3. j^2 = (1∠90°)^2 = 1∠180° = −1
Complex Conjugate
   z* = x − jy = R∠(−θ)
Some useful identities
   Re{z} = (1/2)[z + z*]
   Im{z} = (1/j2)[z − z*]
   |z| = √(z z*)
Complex Exponential
Taylor series for the exponential function of a real variable x:
   e^x = Σ_{k=0}^{∞} x^k / k!
Replace x by jθ:
   e^{jθ} = 1 + jθ + (jθ)^2/2! + (jθ)^3/3! + (jθ)^4/4! + (jθ)^5/5! + ...
          = (1 − θ^2/2! + θ^4/4! − ...) + j(θ − θ^3/3! + θ^5/5! − ...)
          = cos θ + j sin θ
   |e^{jθ}| = √(cos^2 θ + sin^2 θ) = 1
   ∠e^{jθ} = arctan(sin θ / cos θ) = θ
Alternate form for the polar coordinate representation of a complex number:
   z = R∠θ = R e^{jθ}
Multiplication of two complex numbers can be done using the rules for
multiplication of exponentials:
   z_1 z_2 = (R_1 e^{jθ_1})(R_2 e^{jθ_2}) = R_1 R_2 e^{j(θ_1 + θ_2)}
Complex exponential signal
CT:  x(t) = A e^{jψ(t)}
     ψ(t) = ω_a t + θ   instantaneous phase
     dψ(t)/dt = ω_a [rad/sec]   instantaneous phasor velocity
DT:  x(n) = A e^{j(ω_d n + θ)}
     ω_d is the phasor velocity in [radians/sample].
Example
     ω_d = π/3,  θ = π/6
Aliasing
1. It refers to the ability of one frequency to mimic another
2. In the CT case, e^{jω_1 t} = e^{jω_2 t} for all t ⇔ ω_1 = ω_2
3. In DT, e^{jω_1 n} = e^{jω_2 n} if ω_2 = ω_1 + 2πk
Consider
   ω_1 = 7π/6    ω_2 = −5π/6
   x_1(n) = e^{jω_1 n}    x_2(n) = e^{jω_2 n}
In general, if ω_1 = π + ∆ and ω_2 = −(π − ∆), then
   x_1(n) = e^{jω_1 n}
          = e^{j(π+∆)n}
          = e^{j[(π+∆)n − 2πn]}
          = e^{j(−π+∆)n}
          = e^{−j(π−∆)n}
          = x_2(n)
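A two-line numerical check of this aliasing identity (a sketch using the two frequencies above, which differ by exactly 2π):

```python
import numpy as np

n = np.arange(50)
x1 = np.exp(1j * (7 * np.pi / 6) * n)
x2 = np.exp(1j * (-5 * np.pi / 6) * n)

# The two digital frequencies differ by 2*pi, so the sequences coincide.
print(np.allclose(x1, x2))  # True
```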
1.7 Singularity functions
CT Impulse Function
Let δ_∆(t) = (1/∆) rect(t/∆).
Note that ∫_{−∞}^{∞} δ_∆(t) dt = 1.
What happens as ∆ → 0?
   ∆_1 > ∆_2 > ∆_3
In the limit, we obtain
   δ(t) = lim_{∆→0} δ_∆(t)
   δ(t) = { 0, t ≠ 0
          { ∞, t = 0
and
   ∫_{−∞}^{∞} δ(t) dt = 1
We will use impulses in three ways:
1. to sample signals
2. as a way to decompose signals into elementary components
3. as a way to characterize the response of a class of systems to an
arbitrary input signal
Sifting property
Consider a signal x(t) multiplied by δ_∆(t − t_0) for some fixed t_0:
From the mean value theorem of calculus,
   ∫_{−∞}^{∞} x(t) δ_∆(t − t_0) dt = x(ξ)
for some ξ which satisfies
   t_0 − ∆/2 ≤ ξ ≤ t_0 + ∆/2
As ∆ → 0, ξ → t_0 and x(ξ) → x(t_0).
So we have
   ∫_{−∞}^{∞} x(t) δ(t − t_0) dt = x(t_0)
provided x(t) is continuous at t_0.
Equivalence
Based on the sifting property,
   x(t) δ(t − t_0) ≡ x(t_0) δ(t − t_0)
Indefinite integral of δ(t)
Let u_δ(t) = ∫_{−∞}^{t} δ_∆(τ) dτ.
Let ∆ → 0:
   u(t) = ∫_{−∞}^{t} δ(τ) dτ
When we defined the unit step function, we did not specify its value at
t = 0. In this case, u(0) = 0.5.
More general form of the sifting property
   ∫_a^b x(τ) δ(τ − t_0) dτ = { x(t_0),  a < t_0 < b
                              { 0,       t_0 < a or b < t_0
DT Impulse Function (unit sample function)
   δ(n) = { 1, n = 0
          { 0, else
Sifting Property
   Σ_{n=n_1}^{n_2} x(n) δ(n − n_0) = { x(n_0),  n_1 ≤ n_0 ≤ n_2
                                     { 0,       else
Equivalence
   x(n) δ(n − n_0) = x(n_0) δ(n − n_0)
Indefinite sum of δ(n)
   u(n) = Σ_{m=−∞}^{n} δ(m)
1.8 Comb and Replication Operations
The sifting property yields a single sample at t_0. Consider multiplying
x(t) by an entire train of impulses:
Define
   x_s(t) = comb_T[x(t)]
          = x(t) Σ_n δ(t − nT)
          = Σ_n x(t) δ(t − nT)
          = Σ_n x(nT) δ(t − nT)
Area of each impulse = value of the sample there.
The replication operator similarly provides a compact way to express
periodic signals:
   x_p(t) = rep_T[x_0(t)] = Σ_n x_0(t − nT)
Note that x_0(t) = x_p(t) rect(t/T).
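On a sampled grid, rep reduces to summing shifted copies of the pulse, and one period can be windowed back out with rect. A rough numpy sketch (the pulse shape, period, and grid are arbitrary choices; the grid spans exactly four periods so circular shifts implement the replication):

```python
import numpy as np

dt = 0.01
T = 1.0
P = int(round(T / dt))                  # samples per period
t = np.arange(-2, 2, dt)                # grid covering exactly 4 periods

x0 = np.where(np.abs(t) < 0.25, 1.0, 0.0)   # a pulse narrower than T

# rep_T[x0](t) = sum_n x0(t - nT); on this circular grid the four
# circular shifts by multiples of P implement the replication exactly.
xp = sum(np.roll(x0, n * P) for n in range(4))

# One period recovers the pulse: x0(t) = xp(t) * rect(t/T)
x0_rec = xp * np.where(np.abs(t) < T / 2, 1.0, 0.0)

print(np.allclose(xp, np.roll(xp, P)))   # True: xp is T-periodic
print(np.allclose(x0_rec, x0))           # True
```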
Chapter 2
Systems
2.1 Chapter Outline
In this chapter, we will discuss:
1. System Properties
2. Convolution
3. Frequency Response
2.2 Systems
A system is a mapping which assigns to each signal x(t) in the input
domain a unique signal y(t) in the output range.
Examples
1. CT:  y(t) = ∫_{t−1/2}^{t+1/2} x(τ) dτ
let x(t) = sin(2πft)
   y(t) = ∫_{t−1/2}^{t+1/2} sin(2πfτ) dτ
        = (−1/2πf) cos(2πfτ) |_{t−1/2}^{t+1/2}
        = −(1/2πf) {cos[2πf(t + 1/2)] − cos[2πf(t − 1/2)]}
use cos(α ± β) = cos α cos β ∓ sin α sin β:
   y(t) = −(1/2πf) {[cos(2πft) cos(πf) − sin(2πft) sin(πf)]
                    − [cos(2πft) cos(πf) + sin(2πft) sin(πf)]}
   y(t) = (sin(πf)/(πf)) sin(2πft)
        = sinc(f) sin(2πft)
Look at particular values of f:
   f = 0:  x(t) ≡ 0,  y(t) ≡ 0
   f = 1:  x(t) = sin(2πt),  sinc(1) = 0,  y(t) ≡ 0
2. DT:  y(n) = (1/3)[x(n) + x(n−1) + x(n−2)]
Effect of the filter on the output:
1. widened, smoothed, or smeared each pulse
2. delayed the pulse train by one sample time
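The smearing and delay are easy to see by running the 3-point averager on a sparse pulse train with np.convolve (a small sketch; the input is an arbitrary choice):

```python
import numpy as np

h = np.ones(3) / 3                     # impulse response of the averager
x = np.zeros(12)
x[[2, 8]] = 1.0                        # sparse pulse train

y = np.convolve(x, h)[:len(x)]         # causal output, truncated to len(x)

# Each unit pulse becomes three samples of 1/3 starting at the pulse time.
print(np.round(y, 3))
```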
2.3 System Properties
Notation:
y(t) = S[x(t)]
A. Linearity
def. A system S is linear (L) if for any two inputs x_1(t) and x_2(t) and any
two constants a_1 and a_2, it satisfies:
   S[a_1 x_1(t) + a_2 x_2(t)] = a_1 S[x_1(t)] + a_2 S[x_2(t)]
Special cases:
1. homogeneity (let a_2 = 0)
   S[a_1 x_1(t)] = a_1 S[x_1(t)]
2. superposition (let a_1 = a_2 = 1)
   S[x_1(t) + x_2(t)] = S[x_1(t)] + S[x_2(t)]
Examples
1. y(t) = ∫_{t−1/2}^{t+1/2} x(τ) dτ
let x_3(t) = a_1 x_1(t) + a_2 x_2(t)
   y_3(t) = ∫_{t−1/2}^{t+1/2} x_3(τ) dτ
          = ∫_{t−1/2}^{t+1/2} [a_1 x_1(τ) + a_2 x_2(τ)] dτ
          = a_1 ∫_{t−1/2}^{t+1/2} x_1(τ) dτ + a_2 ∫_{t−1/2}^{t+1/2} x_2(τ) dτ
          = a_1 y_1(t) + a_2 y_2(t)
∴ system is linear
2. We can similarly show that
   y(n) = (1/3)[x(n) + x(n−1) + x(n−2)] is linear.
Consider two additional examples:
3. y(n) = n x(n)
let x_3(n) = a_1 x_1(n) + a_2 x_2(n)
   y_3(n) = n x_3(n)
          = a_1 n x_1(n) + a_2 n x_2(n)
          = a_1 y_1(n) + a_2 y_2(n)
∴ system is linear
4. y(t) = { −1,    x(t) < −1
          { x(t),  −1 ≤ x(t) ≤ 1
          { 1,     1 < x(t)
Suspect that the system is nonlinear; find a counterexample.
   x_1(t) ≡ 1   ⇒ y_1(t) ≡ 1
   x_2(t) ≡ 1/2 ⇒ y_2(t) ≡ 1/2
   x_3(t) = x_1(t) + x_2(t) ≡ 3/2 ⇒ y_3(t) ≡ 1 ≠ 3/2
∴ system is not linear
Because it is memoryless, this system is completely described by a curve
relating input to output at each time t:
B. Time Invariance
def. A system is time-invariant if delaying the input results in only an
identical delay in the output, i.e.
   if y_1(t) = S[x_1(t)]
   and y_2(t) = S[x_1(t − t_0)]
   then y_2(t) = y_1(t − t_0)
Examples
1. y(n) = (1/3)[x(n) + x(n−1) + x(n−2)]
assume
   y_1(n) = (1/3)[x_1(n) + x_1(n−1) + x_1(n−2)]
let x_2(n) = x_1(n − n_0)
   y_2(n) = (1/3)[x_2(n) + x_2(n−1) + x_2(n−2)]
          = (1/3)[x_1(n − n_0) + x_1(n − 1 − n_0) + x_1(n − 2 − n_0)]
          = (1/3)[x_1(n − n_0) + x_1(n − n_0 − 1) + x_1(n − n_0 − 2)]
          = y_1(n − n_0)
∴ system is TI.
We can similarly show that
2. y(t) = ∫_{t−1/2}^{t+1/2} x(τ) dτ is TI
3. y(n) = n x(n)
assume y_1(n) = n x_1(n)
let x_2(n) = x_1(n − n_0)
   y_2(n) = n x_2(n)
          = n x_1(n − n_0)
          ≠ (n − n_0) x_1(n − n_0) = y_1(n − n_0)
∴ system is not TI.
C. Causality
def. A system S is causal if the output at time t depends only on x(τ) for
τ ≤ t.
Causality is equivalent to the following property:
   If x_1(t) = x_2(t), t ≤ t_0
   then y_1(t) = y_2(t), t ≤ t_0
D. Stability
def. A system is said to be bounded-input bounded-output (BIBO) stable
if every bounded input produces a bounded output, i.e.
   M_x < ∞ ⇒ M_y < ∞.
Example
1. y(n) = (1/3)[x(n) + x(n−1) + x(n−2)]
Assume |x(n)| ≤ M_x for all n.
   |y(n)| = (1/3)|x(n) + x(n−1) + x(n−2)|
          ≤ (1/3)[|x(n)| + |x(n−1)| + |x(n−2)|]
          ≤ M_x
2.4 Convolution
Characterization of the behavior of an LTI system
1. Decompose the input into a sum of simple basis functions x_i(n):
      x(n) = Σ_{i=0}^{N−1} c_i x_i(n)
      y(n) = S[x(n)]
           = Σ_{i=0}^{N−1} c_i S[x_i(n)]
2. Choose basis functions which are all shifted versions of a single basis
   function x_0(n), i.e. x_i(n) = x_0(n − n_i).
      Let y_i(n) = S[x_i(n)], i = 0, ..., N−1
      then y_i(n) = y_0(n − n_i)
      and y(n) = Σ_{i=0}^{N−1} c_i y_0(n − n_i)
3. Choose the impulse as the basis function.
Denote the impulse response by h(n).
Now consider an arbitrary input x(n).
Sum over both the set of inputs and the set of outputs:
   x(n) = x(0)δ(n) + x(1)δ(n−1) + x(2)δ(n−2) + x(3)δ(n−3)
   y(n) = x(0)h(n) + x(1)h(n−1) + x(2)h(n−2) + x(3)h(n−3)
Convolution sum
   x(n) = Σ_{k=−∞}^{∞} x(k) δ(n − k)
   y(n) = Σ_{k=−∞}^{∞} x(k) h(n − k)
let ℓ = n − k ⇒ k = n − ℓ
   y(n) = Σ_{ℓ=∞}^{−∞} x(n − ℓ) h(ℓ) = Σ_{ℓ=−∞}^{∞} x(n − ℓ) h(ℓ)
Notation and identity
For any signals x_1(n) and x_2(n), we use an asterisk to denote their
convolution; and we have the following identity:
   x_1(n) ∗ x_2(n) = Σ_{k=−∞}^{∞} x_1(n − k) x_2(k)
                   = Σ_{k=−∞}^{∞} x_1(k) x_2(n − k)
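For finite sequences the convolution sum and its commutativity can be checked directly with np.convolve (a sketch with arbitrary sequences):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, -1.0, 0.5, 0.25])

# Full linear convolution has length len(x) + len(h) - 1
y1 = np.convolve(x, h)
y2 = np.convolve(h, x)   # commutativity: x * h = h * x

print(np.allclose(y1, y2))  # True
print(len(y1))              # 6
```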
Characterization of CT LTI systems
• Approximate x(t) by a superposition of rectangular pulses:
     x_∆(t) = Σ_{k=−∞}^{∞} x(k∆) δ_∆(t − k∆) ∆
• Find the response to a single pulse:
• Determine the response to x_∆(t):
     y_∆(t) = Σ_{k=−∞}^{∞} x(k∆) h_∆(t − k∆) ∆
Let ∆ → 0 (dτ), k∆ → τ:
   δ_∆(t) → δ(t)    x_∆(t) → x(t)
   h_∆(t) → h(t)    y_∆(t) → y(t)
Convolution Integral
   x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ
   y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ
Notation and Identity
For any signals x_1(t) and x_2(t), we use an asterisk to denote their
convolution; and we have the following identity:
   x_1(t) ∗ x_2(t) = ∫_{−∞}^{∞} x_1(τ) x_2(t − τ) dτ
                   = ∫_{−∞}^{∞} x_1(t − τ) x_2(τ) dτ
Example
DT system:  y(n) = (1/W) Σ_{k=0}^{W−1} x(n − k),  W an integer
Find the response to x(n) = e^{−n/D} u(n).
   W = width of the averaging window
   D = duration of the input
To find the impulse response, let x(n) = δ(n) ⇒ h(n) = y(n).
   h(n) = (1/W) Σ_{k=0}^{W−1} δ(n − k) = { 1/W,  0 ≤ n ≤ W − 1
                                         { 0,    else
Now use convolution to find the response to x(n) = e^{−n/D} u(n):
   y(n) = Σ_{k=−∞}^{∞} x(n − k) h(k)
Case 1: n < 0
   y(n) = 0
Case 2: 0 ≤ n ≤ W − 1
   y(n) = Σ_{k=0}^{n} x(n − k) h(k)
        = (1/W) Σ_{k=0}^{n} e^{−(n−k)/D}
        = (1/W) e^{−n/D} Σ_{k=0}^{n} e^{k/D}
Geometric Series
   Σ_{k=0}^{N−1} z^k = (1 − z^N)/(1 − z),  for any complex number z ≠ 1
   Σ_{k=0}^{∞} z^k = 1/(1 − z),  |z| < 1
   y(n) = (1/W) e^{−n/D} (1 − e^{(n+1)/D})/(1 − e^{1/D})
        = (1/W) (1 − e^{−(n+1)/D})/(1 − e^{−1/D})
Case 3: W ≤ n
   y(n) = Σ_{k=0}^{W−1} x(n − k) h(k)
        = (1/W) Σ_{k=0}^{W−1} e^{−(n−k)/D}
        = (1/W) e^{−n/D} Σ_{k=0}^{W−1} e^{k/D}
   y(n) = (1/W) e^{−n/D} (1 − e^{W/D})/(1 − e^{1/D})
        = (1/W) (1 − e^{−W/D})/(1 − e^{−1/D}) e^{−[n−(W−1)]/D}
Putting everything together:
   y(n) = { 0,                                                    n < 0
          { (1/W) (1 − e^{−(n+1)/D})/(1 − e^{−1/D}),              0 ≤ n ≤ W − 1
          { (1/W) (1 − e^{−W/D})/(1 − e^{−1/D}) e^{−[n−(W−1)]/D}, W ≤ n
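The piecewise closed form can be checked against a brute-force convolution (a sketch; the values W = 4 and D = 6 are arbitrary choices):

```python
import numpy as np

W, D = 4, 6.0
N = 40

n = np.arange(N)
x = np.exp(-n / D)                    # e^{-n/D} u(n), truncated to N samples
h = np.ones(W) / W                    # moving-average impulse response

y_conv = np.convolve(x, h)[:N]        # direct convolution

# Piecewise closed form derived above (case n < 0 never occurs here)
y_formula = np.where(
    n <= W - 1,
    (1 - np.exp(-(n + 1) / D)) / (1 - np.exp(-1 / D)) / W,
    (1 - np.exp(-W / D)) / (1 - np.exp(-1 / D)) * np.exp(-(n - (W - 1)) / D) / W,
)

print(np.allclose(y_conv, y_formula))  # True
```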
Causality for LTI systems
   y(n) = Σ_{k=−∞}^{n} x(k) h(n − k) + Σ_{k=n+1}^{∞} x(k) h(n − k)
The system will be causal ⇔ the second sum is zero for any input x(k).
This will be true ⇔ h(n − k) = 0, k = n + 1, ..., ∞
                 ⇔ h(k) = 0, k < 0
An LTI system is causal ⇔ h(k) = 0 for k < 0, i.e. the impulse response is
a causal signal.
Stability of LTI systems
Suppose the input is bounded, i.e. M_x < ∞.
   y(n) = Σ_{k=−∞}^{∞} x(k) h(n − k)
   |y(n)| = |Σ_{k=−∞}^{∞} x(k) h(n − k)|
          ≤ Σ_{k=−∞}^{∞} |x(k)| |h(n − k)|
          ≤ M_x Σ_{k=−∞}^{∞} |h(k)|
Therefore, it is sufficient for BIBO stability that the impulse response be
absolutely summable.
Conversely, suppose Σ_k |h(k)| = ∞.
Consider y(0) = Σ_k x(k) h(−k).
Assuming h(k) to be real-valued, let x(k) = { 1,   h(−k) > 0
                                            { −1,  h(−k) < 0
then y(0) = Σ_k |h(k)| = ∞, even though the input is bounded.
For an LTI system to be BIBO stable, it is necessary that the impulse
response be absolutely summable, i.e. Σ_k |h(k)| < ∞.
Examples
   y(n) = x(n) + 2y(n − 1)
Find the impulse response.
Let x(n) = δ(n), then h(n) = y(n).
Need to find the solution to
   y(n) = δ(n) + 2y(n − 1)
This example differs from earlier ones because the system is recursive, i.e.
the current output depends on previous output values as well as the
current and previous inputs.
1. must specify initial conditions for the system (assume y(−1) = 0)
2. cannot directly write a closed-form expression for y(n)
Find the output sequence term by term:
   y(0) = δ(0) + 2y(−1) = 1 + 2(0) = 1
   y(1) = δ(1) + 2y(0) = 0 + 2(1) = 2
   y(2) = δ(2) + 2y(1) = 0 + 2(2) = 4
Recognize the general form:
   h(n) = y(n) = 2^n u(n)
1. Assuming the system is initially at rest, it is causal
2. Σ_n |h(n)| = ∞ ⇒ system is not BIBO stable
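Iterating the recursion term by term, as above, confirms the geometric growth of the impulse response (a small sketch):

```python
def impulse_response(n_max):
    """h(n) for y(n) = x(n) + 2*y(n-1) with x(n) = delta(n), y(-1) = 0."""
    h = []
    y_prev = 0.0
    for n in range(n_max):
        x = 1.0 if n == 0 else 0.0   # delta(n)
        y = x + 2.0 * y_prev
        h.append(y)
        y_prev = y
    return h

print(impulse_response(6))  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```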
2.5 Frequency Response
We have seen that the impulse signal provides a simple way to
characterize the response of an LTI system to any input. Sinusoids play a
similar role.
Consider x(n) = e^{jωn}, with ω fixed.
Denote the response by y_ω(n).
Now consider
Since e^{jωm} e^{jωn} = e^{jω(n+m)},
   y_ω(n + m) = y_ω(n) e^{jωm}
Let n = 0:
   y_ω(m) = y_ω(0) e^{jωm}  for all m
∴ For a system which is homogeneous and time-invariant, the response to
a complex exponential input x(n) = e^{jωn} is y(n) = y_ω(0) x(n), a
frequency-dependent constant times the input x(n). Thus, complex
exponential signals are eigenfunctions of homogeneous, time-invariant
systems.
Comments
• We refer to the constant of proportionality as the frequency
  response of the system and denote it by
     H(e^{jω}) = y_ω(0)
• We write it as a function of e^{jω} rather than ω for two reasons:
  1. digital frequencies are only unique modulo 2π
  2. there is a relation between frequency response and the Z
     transform that this representation captures
• Note that we did not use superposition. We will need it later when
  we express the response to arbitrary signals in terms of the
  frequency response.
Magnitude and Phase of the Frequency Response
In general, H(e^{jω}) is complex-valued, i.e.
   H(e^{jω}) = A(ω) e^{jθ(ω)}
Thus for x(n) = e^{jωn},
   y(n) = A(ω) e^{j[ωn + θ(ω)]}
Suppose the response to any real-valued input x(n) is real-valued.
Let x(n) = cos(ωn) = (1/2)[e^{jωn} + e^{−jωn}].
Assuming superposition holds,
   y(n) = (1/2)[H(e^{jω}) e^{jωn} + H(e^{−jω}) e^{−jωn}]
   [y(n)]* = (1/2){[H(e^{jω})]* e^{−jωn} + [H(e^{−jω})]* e^{jωn}}
   y(n) = [y(n)]* ⇔ H(e^{−jω}) = [H(e^{jω})]*
Expressed in polar coordinates:
   A(−ω) e^{jθ(−ω)} = A(ω) e^{−jθ(ω)}
∴ A(−ω) = A(ω)    (even)
   θ(−ω) = −θ(ω)  (odd)
Examples
1. y(n) = (1/2)[x(n) + x(n − 1)]
Let x(n) = e^{jωn}:
   y(n) = (1/2)[e^{jωn} + e^{jω(n−1)}]
        = (1/2)[1 + e^{−jω}] e^{jωn}
∴ H(e^{jω}) = (1/2)[1 + e^{−jω}]
Factoring out the half-angle:
   H(e^{jω}) = (1/2) e^{−jω/2} [e^{jω/2} + e^{−jω/2}]
             = e^{−jω/2} cos(ω/2)
   |H(e^{jω})| = |e^{−jω/2}| |cos(ω/2)|
              = |cos(ω/2)|
   ∠H(e^{jω}) = ∠e^{−jω/2} + ∠cos(ω/2)
   ∠H(e^{jω}) = { −ω/2,      cos(ω/2) ≥ 0
                { −ω/2 ± π,  cos(ω/2) < 0
Note:
• even symmetry of |H(e^{jω})|
• odd symmetry of ∠H(e^{jω})
• periodicity of H(e^{jω}) with period 2π
• low-pass characteristic
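The half-angle factorization can be checked numerically against the definition H(e^{jω}) = Σ_n h(n) e^{−jωn} (a sketch; the frequency grid is an arbitrary choice):

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 501)
h = np.array([0.5, 0.5])          # h(n) of y(n) = (1/2)[x(n) + x(n-1)]

# Frequency response from the definition: H = sum_n h(n) e^{-j w n}
H = h[0] + h[1] * np.exp(-1j * w)

# Half-angle form derived above
H_half = np.exp(-1j * w / 2) * np.cos(w / 2)

print(np.allclose(H, H_half))                         # True
print(np.allclose(np.abs(H), np.abs(np.cos(w / 2))))  # True
```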
2. y(n) = (1/2)[x(n) − x(n − 2)]
Let x(n) = e^{jωn}:
   y(n) = (1/2)[e^{jωn} − e^{jω(n−2)}]
        = (1/2)[1 − e^{−j2ω}] e^{jωn}
   H(e^{jω}) = (1/2)[1 − e^{−j2ω}]
             = j e^{−jω} [(1/j2)(e^{jω} − e^{−jω})]
             = j e^{−jω} sin(ω)
   |H(e^{jω})| = |j| |e^{−jω}| |sin(ω)|
              = |sin(ω)|
   ∠H(e^{jω}) = ∠j + ∠e^{−jω} + ∠sin(ω)
   ∠H(e^{jω}) = { π/2 − ω,      sin(ω) ≥ 0
                { π/2 − ω ± π,  sin(ω) < 0
The filter has a bandpass characteristic.
3. y(n) = x(n) − x(n − 1) − y(n − 1)
Let x(n) = e^{jωn}; how do we find y(n)?
Assume the desired form of the output, i.e. y(n) = H(e^{jω}) e^{jωn}:
   H(e^{jω}) e^{jωn} = e^{jωn} − e^{jω(n−1)} − H(e^{jω}) e^{jω(n−1)}
   H(e^{jω})[1 + e^{−jω}] e^{jωn} = [1 − e^{−jω}] e^{jωn}
   H(e^{jω}) = (1 − e^{−jω}) / (1 + e^{−jω})
             = ( j e^{−jω/2} [(1/j2)(e^{jω/2} − e^{−jω/2})] ) /
               ( e^{−jω/2} [(1/2)(e^{jω/2} + e^{−jω/2})] )
             = j sin(ω/2)/cos(ω/2)
             = j tan(ω/2)
   |H(e^{jω})| = |tan(ω/2)|
   ∠H(e^{jω}) = { π/2,      tan(ω/2) ≥ 0
                { π/2 ± π,  tan(ω/2) < 0
Comments
• What happens at ω = π/2?
• Factoring out the half-angle is possible only for a relatively
  restricted class of filters
Chapter 3
Fourier Analysis
3.1 Chapter Outline
In this chapter, we will discuss:
1. Continuous-Time Fourier Series (CTFS)
2. Continuous-Time Fourier Transform (CTFT)
3. Discrete-Time Fourier Transform (DTFT)
3.2 Continuous-Time Fourier Series (CTFS)
Spectral representation for periodic CT signals
Assume x(t) = x(t + nT).
We seek a representation for x(t) of the form
   x(t) = (1/T) Σ_{k=0}^{N−1} A_k x_k(t)      (1a)
   x_k(t) = cos(2π f_k t + θ_k)               (1b)
Since x(t) is periodic with period T, we only use the frequency
components that are periodic with this period:
How do we choose the amplitude A_k and the phase θ_k of each component?
k ≠ 0:
   A_k x_k(t) = A_k cos(2πkt/T + θ_k)
              = (A_k/2) e^{jθ_k} e^{j2πkt/T} + (A_k/2) e^{−jθ_k} e^{−j2πkt/T}
              = X_k e^{j2πkt/T} + X_{−k} e^{−j2πkt/T}
k = 0:
   A_0 x_0(t) = A_0 cos(θ_0)
              = X_0
Given a fixed signal x(t), we don't know at the outset whether an exact
representation of the form given by Eq. (1) even exists.
However, we can always approximate x(t) by an expression of this form.
So we consider
   x̂(t) = (1/T) Σ_{k=−(N−1)}^{N−1} X_k e^{j2πkt/T}
We want to choose the coefficients X_k so that x̂(t) is a good
approximation to x(t).
Define e(t) = x̂(t) − x(t).
Recall
   P_e = (1/T) ∫_{−T/2}^{T/2} |e(t)|^2 dt
       = (1/T) ∫_{−T/2}^{T/2} |x̂(t) − x(t)|^2 dt
       = (1/T) ∫_{−T/2}^{T/2} |(1/T) Σ_{k=−N+1}^{N−1} X_k e^{j2πkt/T} − x(t)|^2 dt
       = (1/T) ∫_{−T/2}^{T/2} [(1/T) Σ_{k=−N+1}^{N−1} X_k e^{j2πkt/T} − x(t)]
                            × [(1/T) Σ_{ℓ=−N+1}^{N−1} X_ℓ* e^{−j2πℓt/T} − x*(t)] dt
Expanding the product,
   P_e = (1/T^3) Σ_{k=−N+1}^{N−1} Σ_{ℓ=−N+1}^{N−1} X_k X_ℓ* ∫_{−T/2}^{T/2} e^{j2π(k−ℓ)t/T} dt
       − (1/T^2) Σ_{k=−N+1}^{N−1} X_k ∫_{−T/2}^{T/2} x*(t) e^{j2πkt/T} dt
       − (1/T^2) Σ_{ℓ=−N+1}^{N−1} X_ℓ* ∫_{−T/2}^{T/2} x(t) e^{−j2πℓt/T} dt
       + (1/T) ∫_{−T/2}^{T/2} |x(t)|^2 dt
The first integral is
   ∫_{−T/2}^{T/2} e^{j2π(k−ℓ)t/T} dt = [T/(j2π(k−ℓ))] e^{j2π(k−ℓ)t/T} |_{−T/2}^{T/2}
       = [T/(j2π(k−ℓ))] [e^{jπ(k−ℓ)} − e^{−jπ(k−ℓ)}]
       = T sinc(k − ℓ)
       = { T, k = ℓ
         { 0, k ≠ ℓ
Let X̄_ℓ = ∫_{−T/2}^{T/2} x(t) e^{−j2πℓt/T} dt.  Then
   P_e = (1/T^2) Σ_{k=−N+1}^{N−1} [|X_k|^2 − X_k X̄_k* − X_k* X̄_k] + P_x
   X_k : unknown coefficients in the Fourier series approximation
   X̄_k : fixed numbers that depend on x(t)
We want to choose values for the coefficients X_k, k = −N+1, ..., N−1,
which will minimize P_e.
Fix ℓ between −N+1 and N−1, let X_ℓ = A_ℓ + jB_ℓ,
and consider ∂P_e/∂A_ℓ and ∂P_e/∂B_ℓ.
   P_e = (1/T^2) Σ_{k=−N+1}^{N−1} [|X_k|^2 − X_k X̄_k* − X_k* X̄_k] + P_x
   ∂P_e/∂A_ℓ = (1/T^2) ∂/∂A_ℓ [(A_ℓ + jB_ℓ)(A_ℓ − jB_ℓ) − (A_ℓ + jB_ℓ)X̄_ℓ* − (A_ℓ − jB_ℓ)X̄_ℓ]
             = (1/T^2) [X_ℓ* + X_ℓ − X̄_ℓ* − X̄_ℓ]
             = (2/T^2) Re{X_ℓ − X̄_ℓ}
   ∂P_e/∂A_ℓ = 0 ⇒ Re{X_ℓ} = Re{X̄_ℓ}
Similarly,
   ∂P_e/∂B_ℓ = (1/T^2) [jX_ℓ* − jX_ℓ − jX̄_ℓ* + jX̄_ℓ]
             = (2/T^2) Im{X_ℓ − X̄_ℓ}
and
   ∂P_e/∂B_ℓ = 0 ⇒ Im{X_ℓ} = Im{X̄_ℓ}
To summarize:
For x̂(t) = (1/T) Σ_{k=−N+1}^{N−1} X_k e^{j2πkt/T}
to be a minimum mean-squared error approximation to the signal x(t)
over the interval −T/2 ≤ t ≤ T/2, the coefficients X_k must satisfy
   X_k = ∫_{−T/2}^{T/2} x(t) e^{−j2πkt/T} dt
Example
   X_k = ∫_{−T/2}^{T/2} x(t) e^{−j2πkt/T} dt
       = A ∫_{−τ/2}^{τ/2} e^{−j2πkt/T} dt
       = [A/(−j2πk/T)] e^{−j2πkt/T} |_{−τ/2}^{τ/2}
       = [A/(−j2πk/T)] [e^{−jπkτ/T} − e^{jπkτ/T}]
       = Aτ sin(πkτ/T)/(πkτ/T)
Line spectrum
   X_k = Aτ sinc(kτ/T)
   |X_k| = Aτ |sinc(kτ/T)|     ∠X_k = { 0,   sinc(kτ/T) > 0
                                      { ±π,  sinc(kτ/T) < 0
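The coefficients of this pulse example can be verified by numerical integration (a sketch; the values of A, τ, and T are arbitrary choices, and np.sinc is the same normalized sinc used in this book):

```python
import numpy as np

# Fourier series coefficients of a centered pulse (height A, width tau,
# period T), computed by a Riemann sum and compared with
# X_k = A*tau*sinc(k*tau/T).
A, tau, T = 2.0, 0.5, 2.0
dt = 1e-5
t = np.arange(-T / 2, T / 2, dt)
x = np.where(np.abs(t) < tau / 2, A, 0.0)

X_num = [np.sum(x * np.exp(-2j * np.pi * k * t / T)) * dt for k in range(5)]
X_formula = [A * tau * np.sinc(k * tau / T) for k in range(5)]

for k in range(5):
    print(k, round(X_num[k].real, 4), round(X_formula[k], 4))
```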
Comments
Recall X_k = ∫_{−T/2}^{T/2} x(t) e^{−j2πkt/T} dt
1. X_0 = ∫_{−T/2}^{T/2} x(t) dt = T x_avg
2. X_{−k} = X_k* for real x(t)
3. lim_{|k|→∞} |X_k| = 0
What happens as N increases?
Let x̂_N(t) = (1/T) Σ_{k=−N+1}^{N−1} X_k e^{j2πkt/T}
1. If ∫_{−T/2}^{T/2} |x(t)|^2 dt < ∞, then ∫_{−T/2}^{T/2} |x̂_N(t) − x(t)|^2 dt → 0
2. If ∫_{−T/2}^{T/2} |x(t)| dt < ∞ and the other Dirichlet conditions are met,
   x̂_N(t) → x(t) for all t where x(t) is continuous
3. In the neighborhood of discontinuities, x̂_N(t) exhibits an overshoot
   or undershoot with maximum amplitude equal to 9 percent of the
   step size no matter how large N is (Gibbs phenomenon)
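The Gibbs overshoot is easy to observe numerically: partial sums of the pulse-train Fourier series overshoot the step by roughly 9% of the step size regardless of N. A sketch (the pulse parameters and the observation window near the edge are arbitrary choices):

```python
import numpy as np

def partial_sum(t, N, T=2.0, tau=1.0, A=1.0):
    """Partial Fourier series (|k| < N) of a pulse train, X_k = A*tau*sinc(k*tau/T)."""
    x = np.full_like(t, A * tau / T)                 # k = 0 term
    for k in range(1, N):
        x += (2 * A * tau * np.sinc(k * tau / T) / T) * np.cos(2 * np.pi * k * t / T)
    return x

t = np.linspace(0.3, 0.7, 20001)       # window around the edge at t = 0.5
overshoots = [partial_sum(t, N).max() - 1.0 for N in (20, 100, 500)]
print([round(v, 3) for v in overshoots])   # all near 0.09, independent of N
```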
3.3 Continuous-Time Fourier Transform
Spectral representation for aperiodic CT signals
Consider a fixed signal x(t) and let x_p(t) = rep_T[x(t)].
What happens to the Fourier series as T increases?
   τ = T/2
   τ = T/4
Fourier coefficients
   X_k = ∫_{−T/2}^{T/2} x(t) e^{−j2πkt/T} dt
Let T → ∞.
   k/T → f
   X_k → X(f)
   X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt
Fourier series expansion
   x_p(t) = (1/T) Σ_{k=−∞}^{∞} X_k e^{j2πkt/T}
Let T → ∞:
   x_p(t) → x(t)
   k/T → f,  X_k → X(f)
   (1/T) Σ_{k=−∞}^{∞} → ∫_{−∞}^{∞} df
   x(t) = ∫_{−∞}^{∞} X(f) e^{j2πft} df
Fourier Transform pairs
Forward transform
   X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt
Inverse transform
   x(t) = ∫_{−∞}^{∞} X(f) e^{j2πft} df
Sufficient conditions for existence of the CTFT
1. x(t) has finite energy:
   ∫_{−∞}^{∞} |x(t)|^2 dt < ∞
2. x(t) is absolutely integrable:
   ∫_{−∞}^{∞} |x(t)| dt < ∞
   and it satisfies the Dirichlet conditions
Transform relations
• Linearity
   a_1 x_1(t) + a_2 x_2(t)  ↔  a_1 X_1(f) + a_2 X_2(f)
• Scaling and shifting
   x((t − t_0)/a)  ↔  |a| X(af) e^{−j2πf t_0}
• Modulation
   x(t) e^{j2πf_0 t}  ↔  X(f − f_0)
• Reciprocity
   X(t)  ↔  x(−f)
• Parseval's relation
   ∫_{−∞}^{∞} |x(t)|^2 dt = ∫_{−∞}^{∞} |X(f)|^2 df
• Initial value
   ∫_{−∞}^{∞} x(t) dt = X(0)
Comments
1. Reflection is a special case of scaling and shifting with a = −1 and
   t_0 = 0, i.e. x(−t) ↔ X(−f)
2. The scaling relation exhibits reciprocal spreading
3. Uniqueness of the CTFT follows from Parseval's relation
CTFT for real signals
If x(t) is real, X(f) = [X(−f)]*.
   ⇒ |X(f)| = |X(−f)| and ∠X(f) = −∠X(−f)
In this case, the inverse transform may be written as:
   x(t) = 2 ∫_0^{∞} |X(f)| cos[2πft + ∠X(f)] df
Additional symmetry relations:
   x(t) is real and even ⇔ X(f) is real and even.
   x(t) is real and odd ⇔ X(f) is imaginary and odd.
Important transform pairs:
• rect(t) ↔ sinc(f)
• δ(t) ↔ 1 (by the sifting property)
  Proof: F{δ(t)} = ∫_{−∞}^{∞} δ(t) e^{−j2πft} dt = 1
• 1 ↔ δ(f) (by reciprocity)
• e^{j2πf_0 t} ↔ δ(f − f_0) (by the modulation property)
• cos(2πf_0 t) ↔ (1/2)[δ(f − f_0) + δ(f + f_0)]
Generalized Fourier transform
Note that δ(t) is absolutely integrable but not square integrable.
Consider δ_∆(t) = (1/∆) rect(t/∆):
   ∫_{−∞}^{∞} δ_∆(t) dt = 1
   ∫_{−∞}^{∞} |δ_∆(t)|^2 dt = 1/∆
   ∴ lim_{∆→0} ∫_{−∞}^{∞} |δ_∆(t)|^2 dt = ∞
The function x(t) ≡ 1 is neither absolutely nor square integrable; and the
integral ∫_{−∞}^{∞} 1·e^{−j2πft} dt is undefined.
Even when neither condition for existence of the CTFT is satisfied, we
may still be able to define a Fourier transform through a limiting process.
Let x_n(t), n = 0, 1, 2, ... denote a sequence of functions, each of which has
a valid CTFT X_n(f).
Suppose lim_{n→∞} x_n(t) = x(t), where x(t) does not have a valid transform.
If X(f) = lim_{n→∞} X_n(f) exists, we call it the generalized Fourier transform
of x(t), i.e.
   x_0(t) ↔ X_0(f)
   x_1(t) ↔ X_1(f)
   x_2(t) ↔ X_2(f)
   ...
   x(t) ↔ X(f)   (GCTFT)
Example
Let x_n(t) = rect(t/n).
   X_n(f) = n sinc(nf)   (by scaling)
   lim_{n→∞} x_n(t) = 1
What is lim_{n→∞} X_n(f)?
   X_n(f) → 0,  f ≠ 0
   X_n(0) → ∞
What is ∫_{−∞}^{∞} X_n(f) df?
By the initial value relation,
   ∫_{−∞}^{∞} X_n(f) df = x_n(0) = 1
∴ lim_{n→∞} X_n(f) = δ(f)
and we have 1 ↔ δ(f)   (GCTFT)
Efficient calculation of Fourier transforms
Suppose we wish to determine the CTFT of the following signal.
Brute force approach:
1. Evaluate the transform integral directly
   X(f) = ∫_{−2}^{0} (−1) e^{−j2πft} dt + ∫_{0}^{2} (1) e^{−j2πft} dt
2. Collect terms and simplify
Faster approach:
1. Write x(t) in terms of functions whose transforms are known
   x(t) = −rect((t + 1)/2) + rect((t − 1)/2)
2. Use transform relations to determine X(f)
   X(f) = 2 sinc(2f) [e^{−j2πf} − e^{j2πf}]
   X(f) = −j4 sinc(2f) sin(2πf)
Comments
1. A_x = 0 and X(0) = 0
2. x(t) is real and odd, and X(f) is imaginary and odd
CTFT and CT LTI systems
The key factor that made it possible to express the response y(t) of an
LTI system to an arbitrary input x(t) in terms of the impulse response
h(t) was the fact that we could write x(t) as the superposition of
impulses δ(t).
We can similarly express x(t) as a superposition of complex exponential
signals:
    x(t) = ∫_{−∞}^{∞} X(f) e^{j2πft} df

Let H̄(f) denote the frequency response of the system, i.e. the complex
gain it applies to e^{j2πft} at a fixed frequency f.
Then, by homogeneity and by superposition, the response to x(t) is

    y(t) = ∫_{−∞}^{∞} H̄(f) X(f) e^{j2πft} df

But also

    y(t) = ∫_{−∞}^{∞} Y(f) e^{j2πft} df

∴ Y(f) = H̄(f) X(f)   (1)

We also know that

    y(t) = ∫_{−∞}^{∞} h(t − τ) x(τ) dτ

What is the relation between h(t) and H̄(f)?
Let x(t) = δ(t) ⇒ y(t) = h(t);
then X(f) = 1 and Y(f) = H(f).
From Eq. (1), we conclude that H̄(f) = H(f).
Since the frequency response is the CTFT of the impulse response, we
will drop the bar.
Summarizing, we have two equivalent characterizations for CT LTI
systems:

    y(t) = ∫_{−∞}^{∞} h(t − τ) x(τ) dτ

    Y(f) = X(f) H(f)
Convolution Theorem
Since x(t) and h(t) are arbitrary signals, we also have the following
Fourier transform relation
    ∫ x_1(τ) x_2(t − τ) dτ  ↔  X_1(f) X_2(f)   (CTFT)

or

    x_1(t) ∗ x_2(t)  ↔  X_1(f) X_2(f)   (CTFT)
Product Theorem
By reciprocity, we also have the following result:
    x_1(t) x_2(t)  ↔  X_1(f) ∗ X_2(f)   (CTFT)
This can be very useful for calculating transforms of certain functions.
Example

    x(t) = { (1/2)[1 + cos(2πt)],  |t| ≤ 1/2
           { 0,                    |t| > 1/2

Find X(f).

    x(t) = (1/2)[1 + cos(2πt)] rect(t)

∴ X(f) = (1/2){ δ(f) + (1/2)[δ(f − 1) + δ(f + 1)] } ∗ sinc(f)
Since convolution obeys linearity, we can write this as
    X(f) = (1/2){ δ(f) ∗ sinc(f) + (1/2)[δ(f − 1) ∗ sinc(f) + δ(f + 1) ∗ sinc(f)] }

All three convolutions here are of the same general form.

Identity

For any signal w(t),

    w(t) ∗ δ(t − t_0) = w(t − t_0)

Proof:

    w(t) ∗ δ(t − t_0) = ∫ w(τ) δ(t − τ − t_0) dτ = w(t − t_0)   (by the sifting property)

Using the identity,

    X(f) = (1/2){ δ(f) ∗ sinc(f) + (1/2)[δ(f − 1) ∗ sinc(f) + δ(f + 1) ∗ sinc(f)] }
         = (1/2){ sinc(f) + (1/2)[sinc(f − 1) + sinc(f + 1)] }
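As a minimal numerical sketch (assuming NumPy, which is not part of the text), the closed-form transform of the raised-cosine pulse can be checked against a direct numerical evaluation of the transform integral; note that `np.sinc` uses the same normalized convention sin(πx)/(πx) as the text:

```python
import numpy as np

# Numerically check X(f) = (1/2){sinc(f) + (1/2)[sinc(f-1) + sinc(f+1)]}
# for x(t) = (1/2)[1 + cos(2πt)] on |t| <= 1/2, zero elsewhere.
t = np.linspace(-0.5, 0.5, 20000, endpoint=False)
dt = t[1] - t[0]
x = 0.5 * (1 + np.cos(2 * np.pi * t))        # the pulse on its support

f = np.linspace(-4, 4, 81)
X_num = np.array([(x * np.exp(-2j * np.pi * fk * t)).sum() * dt for fk in f])
X_closed = 0.5 * (np.sinc(f) + 0.5 * (np.sinc(f - 1) + np.sinc(f + 1)))

print(np.allclose(X_num.real, X_closed, atol=1e-5))   # True
```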
Fourier transform of Periodic Signals
• We previously developed the Fourier series as a spectral
representation for periodic CT signals
• Such signals are neither square integrable nor absolutely integrable,
and hence do not satisfy the conditions for existence of the CTFT
• By applying the concept of the generalized Fourier transform, we
can obtain a Fourier transform for periodic signals
• This allows us to treat the spectral analysis of all CT signals within
a single framework
• We can also obtain the same result directly from the Fourier series
Let x_0(t) denote one period of a signal that is periodic with period T, i.e.
x_0(t) = 0, |t| > T/2.
Define x(t) = rep_T[x_0(t)].
The Fourier series representation for x(t) is

    x(t) = (1/T) Σ_k X_k e^{j2πkt/T}

Taking the CTFT directly, we obtain

    X(f) = F{ (1/T) Σ_k X_k e^{j2πkt/T} }
         = (1/T) Σ_k X_k F{ e^{j2πkt/T} }   (by linearity)
         = (1/T) Σ_k X_k δ(f − k/T)
Also

    X_k = ∫_{−T/2}^{T/2} x(t) e^{−j2πkt/T} dt
        = ∫_{−∞}^{∞} x_0(t) e^{−j2πkt/T} dt
        = X_0(k/T)

Thus

    X(f) = (1/T) Σ_k X_0(k/T) δ(f − k/T)
         = (1/T) comb_{1/T}[X_0(f)]
Dropping the subscript 0, we may state this result in the form of a
transform relation:
    rep_T[x(t)]  ↔  (1/T) comb_{1/T}[X(f)]   (CTFT)

For our derivation, we required that x(t) = 0, |t| > T/2. However, when
the generalized transform is used to derive the result, this restriction is
not needed.
Example
3.4 Discrete-Time Fourier Transform
• Spectral representation for aperiodic DT signals
• As in the CT case, we may derive the DTFT by starting with a
spectral representation (the discretetime Fourier series) for
periodic DT signals and letting the period become inﬁnitely long
• Instead, we will take a shorter but less direct approach
Recall the continuous-time Fourier series:

    x(t) = (1/T) Σ_{k=−∞}^{∞} X_k e^{j2πkt/T}    (1)

    X_k = ∫_{−T/2}^{T/2} x(t) e^{−j2πkt/T} dt    (2)
Before, we interpreted (1) as a spectral representation for the CT
periodic signal x(t).
Now let's consider (2) to be a spectral representation for the sequence
X_k, −∞ < k < ∞.
We are effectively interchanging the time and frequency domains.
We want to express an arbitrary signal x(n) in terms of complex
exponential signals of the form e^{jωn}.
Recall that ω is only unique modulo 2π.
To obtain this, we make the following substitutions in (2):
    X_k = ∫_{−T/2}^{T/2} x(t) e^{−j2πkt/T} dt

    k → n
    T x(t) → X(e^{jω})
    −2πt/T → ω
    X_k → x(n)
    dt → −(T/2π) dω

    x(n) = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω

This is the inverse transform.
To obtain the forward transform, we make the same substitutions in (1):

    x(t) = (1/T) Σ_{k=−∞}^{∞} X_k e^{j2πkt/T}

    T x(t) → X(e^{jω})
    k → n
    X_k → x(n)
    2πt/T → −ω

    X(e^{jω}) = Σ_{n=−∞}^{∞} x(n) e^{−jωn}
Putting everything together:

    X(e^{jω}) = Σ_{n=−∞}^{∞} x(n) e^{−jωn}

    x(n) = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω

Sufficient conditions for existence:

    Σ_n |x(n)|² < ∞   or   Σ_n |x(n)| < ∞ plus the Dirichlet conditions.
Transform Relations
1. Linearity

       a_1 x_1(n) + a_2 x_2(n)  ↔  a_1 X_1(e^{jω}) + a_2 X_2(e^{jω})   (DTFT)

2. Shifting

       x(n − n_0)  ↔  X(e^{jω}) e^{−jωn_0}   (DTFT)

3. Modulation

       x(n) e^{jω_0 n}  ↔  X(e^{j(ω−ω_0)})   (DTFT)

4. Parseval's relation

       Σ_{n=−∞}^{∞} |x(n)|² = (1/2π) ∫_{−π}^{π} |X(e^{jω})|² dω

5. Initial value

       Σ_{n=−∞}^{∞} x(n) = X(e^{j0})
Comments
1. As discussed earlier, scaling in DT involves sampling rate changes;
   thus, the transform relations are more complicated than in the CT case
2. There is no reciprocity relation, because the time domain is a
   discrete-parameter domain whereas the frequency domain is a
   continuous-parameter domain
3. As in the CT case, Parseval's relation guarantees uniqueness of the
   DTFT
Some transform pairs
1. x(n) = δ(n)

       X(e^{jω}) = Σ_n δ(n) e^{−jωn} = 1   (by the sifting property)
2. x(n) = 1

   • does not satisfy the existence conditions
   • Σ_n e^{−jωn} does not converge in the ordinary sense
   • cannot use reciprocity as we did in the CT case

   Consider the inverse transform:

       x(n) = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω

   What function X(e^{jω}) would yield x(n) ≡ 1?
Let X(e^{jω}) = 2πδ(ω), −π ≤ ω ≤ π; then

    x(n) = (1/2π) ∫_{−π}^{π} 2πδ(ω) e^{jωn} dω = 1   (again by the sifting property).

Note that X(e^{jω}) must be periodic with period 2π, so we have:

    X(e^{jω}) = 2π Σ_k δ(ω − 2πk)
3. e^{jω_0 n}  ↔  rep_{2π}[2πδ(ω − ω_0)]   (DTFT, by the modulation property)

4. cos(ω_0 n)  ↔  rep_{2π}[πδ(ω − ω_0) + πδ(ω + ω_0)]   (DTFT)
5. x(n) = { 1,  0 ≤ n ≤ N − 1
          { 0,  else

       X(e^{jω}) = Σ_{n=0}^{N−1} e^{−jωn}
                 = (1 − e^{−jωN}) / (1 − e^{−jω})
                 = e^{−jω(N−1)/2} · sin(ωN/2) / sin(ω/2)
6. y(n) = { 1,  −(N − 1)/2 ≤ n ≤ (N − 1)/2
          { 0,  else

   where N is odd.

   y(n) = x(n + (N − 1)/2), where x(n) is the signal from Example 5.
       Y(e^{jω}) = X(e^{jω}) e^{jω(N−1)/2}   (by the shifting property)
                 = [ e^{−jω(N−1)/2} sin(ωN/2)/sin(ω/2) ] e^{jω(N−1)/2}

       Y(e^{jω}) = sin(ωN/2) / sin(ω/2)
What happens as N → ∞?

    y(n) → 1,  −∞ < n < ∞
    2π/N → 0
    Y(e^{jω}) → rep_{2π}[2πδ(ω)]

    (1/2π) ∫_{−π}^{π} Y(e^{jω}) dω = y(0)
DTFT and DT LTI systems
Recall that in the CT case we obtained a general characterization in terms of
the CTFT by expressing x(t) as a superposition of complex exponential
signals and then using the frequency response H(f) to determine the response
to each such signal.
Here we take a different approach.
For any DT LTI system, we know that the input and output are related by

    y(n) = Σ_k h(n − k) x(k)
Thus

    Y(e^{jω}) = Σ_n y(n) e^{−jωn}
              = Σ_n Σ_k h(n − k) x(k) e^{−jωn}
              = Σ_k [ Σ_n h(n − k) e^{−jωn} ] x(k)
              = Σ_k H(e^{jω}) e^{−jωk} x(k)   (by the shifting property)

where H(e^{jω}) is the DTFT of the impulse response h(n).
Rearranging,

    Y(e^{jω}) = H(e^{jω}) Σ_k x(k) e^{−jωk} = H(e^{jω}) X(e^{jω})
How is H(e^{jω}) related to the frequency response?
Consider

    x(n) = e^{jω_0 n}

    X(e^{jω}) = rep_{2π}[2πδ(ω − ω_0)]
              = 2π Σ_k δ(ω − ω_0 − 2πk)

    Y(e^{jω}) = H(e^{jω}) X(e^{jω})
              = 2π Σ_k H(e^{j(ω_0+2πk)}) δ(ω − ω_0 − 2πk)
              = H(e^{jω_0}) 2π Σ_k δ(ω − ω_0 − 2πk)

    Y(e^{jω}) = H(e^{jω_0}) X(e^{jω})

∴ y(n) = H(e^{jω_0}) x(n)
so H(e^{jω}) is also the frequency response of the DT system.
We thus have two equivalent characterizations for the response y(n) of a
DT LTI system to any input x(n):

    y(n) = Σ_k h(n − k) x(k)

    Y(e^{jω}) = H(e^{jω}) X(e^{jω})
Convolution Theorem
Since x(n) and h(n) are arbitrary signals, we also have the following
transform relation:
    Σ_k x_1(k) x_2(n − k)  ↔  X_1(e^{jω}) X_2(e^{jω})   (DTFT)

or

    x_1(n) ∗ x_2(n)  ↔  X_1(e^{jω}) X_2(e^{jω})   (DTFT)
Product theorem
As mentioned earlier, we do not have a reciprocity relation for the DT
case.
However, by direct evaluation of the DTFT, we also have the following
result:
    x_1(n) x_2(n)  ↔  (1/2π) ∫_{−π}^{π} X_1(e^{j(ω−µ)}) X_2(e^{jµ}) dµ   (DTFT)

Note that this is a periodic convolution.
Chapter 4
Sampling
4.1 Chapter Outline
In this chapter, we will discuss:
1. Analysis of Sampling
2. Relation between CTFT and DTFT
3. Sampling Rate Conversion (Scaling in DT)
4.2 Analysis of Sampling
A simple scheme for sampling a waveform is to gate it.

    T   – period
    τ   – interval for which the switch is closed
    τ/T – duty cycle

    x_s(t) = s(t) x(t)
Fourier Analysis
    X_s(f) = S(f) ∗ X(f)

    s(t) = rep_T[ (1/τ) rect(t/τ) ]

    S(f) = (1/T) comb_{1/T}[sinc(τf)]
         = (1/T) Σ_k sinc(τk/T) δ(f − k/T)

    X_s(f) = (1/T) Σ_k sinc(τk/T) X(f − k/T)
How do we reconstruct x(t) from x_s(t)?
Nyquist condition

Perfect reconstruction of x(t) from x_s(t) is possible if

    X(f) = 0,  |f| ≥ 1/(2T)

Nyquist sampling rate:

    f_s = 1/T = 2W
Ideal Sampling
What happens as τ → 0?

    s(t) → Σ_m δ(t − mT)

    x_s(t) → Σ_m x(mT) δ(t − mT) = comb_T[x(t)]

    X_s(f) → (1/T) Σ_k X(f − k/T) = (1/T) rep_{1/T}[X(f)]

• An ideal lowpass filter will again reconstruct x(t)
• In the sequel, we assume ideal sampling
Transform Relations
    rep_T[x(t)]  ↔  (1/T) comb_{1/T}[X(f)]   (CTFT)

    comb_T[x(t)]  ↔  (1/T) rep_{1/T}[X(f)]   (CTFT)
Given one relation, the other follows by reciprocity.
Whittaker-Kotelnikov-Shannon Sampling Expansion

    X_r(f) = H_r(f) X_s(f),   H_r(f) = T rect(Tf)

    x_r(t) = h_r(t) ∗ x_s(t),   h_r(t) = sinc(t/T)

    x_s(t) = Σ_m x(mT) δ(t − mT)

    x_r(t) = Σ_m x(mT) sinc( (t − mT)/T )

    x_r(nT) = Σ_m x(mT) sinc(n − m) = x(nT)
If the Nyquist condition is satisfied, x_r(t) ≡ x(t).
Zero Order Hold Reconstruction
    x_r(t) = Σ_m x(mT) rect( (t − T/2 − mT)/T ) = h_ZO(t) ∗ x_s(t)

    h_ZO(t) = rect( (t − T/2)/T )

    X_r(f) = H_ZO(f) X_s(f)

    H_ZO(f) = T sinc(Tf) e^{−j2πf(T/2)}
• One way to overcome the limitations of zero order hold
  reconstruction is to oversample, i.e. choose f_s = 1/T ≫ W
• This moves the spectral replications farther out to where they are
  better attenuated by sinc(Tf)
• What happens if we inadvertently undersample, and then
  reconstruct with an ideal lowpass filter?
Effect of undersampling

• frequency truncation error

    X_r(f) = 0,  |f| ≥ f_s/2

• aliasing error

  Frequency components at f_1 = f_s/2 + ∆ fold down to and mimic
  frequencies f_2 = f_s/2 − ∆.
Example

• x(t) = cos[2π(5000)t]
• sample at f_s = 8 kHz
• reconstruct with an ideal lowpass filter having cutoff at 4 kHz

Sampling

    x_r(t) = cos[2π(3000)t]

Note that x(t) and x_r(t) will have the same sample values at times
t = nT (T = 1/8000):

    x(nT) = cos{2π(5000)n/8000}
          = cos{2π[(5000)n − (8000)n]/8000}
          = cos{−2π(3000)n/8000}
          = x_r(nT)
Reconstruction
4.3 Relation between CTFT and DTFT
We have already shown that

    X_s(f_a) = (1/T) rep_{1/T}[X(f_a)]
Suppose we evaluate the CTFT of x_s(t) directly:

    X_s(f_a) = F{ Σ_n x_a(nT) δ(t − nT) }
             = Σ_n x_a(nT) F{ δ(t − nT) }
             = Σ_n x_a(nT) e^{−j2πf_a nT}
Recall

    X_d(e^{jω_d}) ≡ Σ_n x_d(n) e^{−jω_d n}
But x_d(n) = x_a(nT).
Let ω_d = 2πf_a T = 2π(f_a/f_s).
Rearranging as f_a = (ω_d/2π) f_s, we obtain

    X_d(e^{jω_d}) = X_s( (ω_d/2π) f_s )
Example

• CT Analysis

    x_a(t) = cos(2πf_{a0} t)

    X_a(f_a) = (1/2)[δ(f_a − f_{a0}) + δ(f_a + f_{a0})]

    X_s(f_a) = f_s rep_{f_s}[X_a(f_a)]
             = (f_s/2) Σ_k [δ(f_a − f_{a0} − kf_s) + δ(f_a + f_{a0} − kf_s)]
• DT Analysis

    x_d(n) = x_a(nT)
           = cos(2πf_{a0} nT)
           = cos(ω_{d0} n),   ω_{d0} = 2πf_{a0} T = 2π(f_{a0}/f_s)

    X_d(e^{jω_d}) = π Σ_k [δ(ω_d − ω_{d0} − 2πk) + δ(ω_d + ω_{d0} − 2πk)]
• Relation between the CT and DT Analyses

    X_d(e^{jω_d}) = X_s( (ω_d/2π) f_s )

    X_s(f_a) = (f_s/2) Σ_k [δ(f_a − f_{a0} − kf_s) + δ(f_a + f_{a0} − kf_s)]

    X_d(e^{jω_d}) = (f_s/2) Σ_k [ δ( ω_d f_s/(2π) − f_{a0} − kf_s )
                                + δ( ω_d f_s/(2π) + f_{a0} − kf_s ) ]

Recall δ(ax + b) = (1/|a|) δ(x + b/a). Then

    X_d(e^{jω_d}) = (f_s/2) Σ_k { (2π/f_s) δ( ω_d − 2πf_{a0}/f_s − 2πk )
                                + (2π/f_s) δ( ω_d + 2πf_{a0}/f_s − 2πk ) }
                  = π Σ_k [δ(ω_d − ω_{d0} − 2πk) + δ(ω_d + ω_{d0} − 2πk)]
Aliasing

• CT

    f_{a1} = f_s/2 + ∆_a  folds down to  f_{a2} = f_s/2 − ∆_a.

• DT

    Let ω_d = 2π(f_a/f_s) and ∆_d = 2π(∆_a/f_s).
    Then ω_{d1} = π + ∆_d is identical to ω_{d2} = π − ∆_d.
4.4 Sampling Rate Conversion
Recall the definitions for DT scaling.

Downsampling

    y(n) = x(Dn)

Upsampling

    y(n) = { x(n/D),  if n/D is an integer
           { 0,       else
In CT, we have a very simple transform relation for scaling:

    x(at)  ↔  (1/|a|) X(f/a)   (CTFT)

What is the transform relation for DT?

1. As in the definitions for downsampling and upsampling, we must treat
   these cases separately
2. The relations will combine elements from both the scaling and sampling
   transform relations for CT
Downsampling

    y(n) = x(Dn)

    Y(e^{jω}) = Σ_n x(Dn) e^{−jωn}

Let m = Dn ⇒ n = m/D:

    Y(e^{jω}) = Σ_m x(m) e^{−jωm/D}   (m/D an integer)

To remove the restriction on m, define the sequence

    s_D(m) = { 1,  m/D is an integer
             { 0,  else
then
    Y(e^{jω}) = Σ_m s_D(m) x(m) e^{−jωm/D}
Alternate expression for s_D(m):

D = 2:

    s_2(m) = (1/2)[1 + (−1)^m]
           = (1/2)[1 + e^{−j2πm/2}]

    s_2(0) = 1
    s_2(1) = 0
    s_2(m + 2k) = s_2(m)
D = 3:

    s_3(m) = (1/3)[1 + e^{−j2πm/3} + e^{−j2π(2)m/3}]

    s_3(0) = (1/3)[1 + 1 + 1] = 1
    s_3(1) = (1/3)[1 + e^{−j2π/3} + e^{−j2π(2)/3}] = 0
    s_3(2) = (1/3)[1 + e^{−j2π(2)/3} + e^{−j2π(4)/3}] = 0
    s_3(m + 3k) = s_3(m)
In general,

    s_D(m) = (1/D) Σ_{k=0}^{D−1} e^{−j2πkm/D}
           = (1/D) · (1 − e^{−j2πm}) / (1 − e^{−j2πm/D})
    s_D(m) = { 1,  m/D is an integer
             { 0,  else

    Y(e^{jω}) = Σ_m s_D(m) x(m) e^{−jωm/D}
              = Σ_m (1/D) Σ_{k=0}^{D−1} e^{−j2πkm/D} x(m) e^{−jωm/D}
              = (1/D) Σ_{k=0}^{D−1} Σ_m x(m) e^{−j[(ω+2πk)/D]m}
              = (1/D) Σ_{k=0}^{D−1} X(e^{j(ω+2πk)/D})
Decimator
To prevent aliasing due to downsampling, we prefilter the signal to
bandlimit it to the highest frequency ω_d = π/D.
The combination of a low pass ﬁlter followed by a downsampler is
referred to as a decimator.
With preﬁlter
Without preﬁlter
Upsampling
    y(n) = { x(n/D),  n/D is an integer
           { 0,       else

    Y(e^{jω}) = Σ_n x(n/D) e^{−jωn}   (n/D an integer)
              = Σ_n s_D(n) x(n/D) e^{−jωn}

Let m = n/D ⇒ n = mD:

    Y(e^{jω}) = Σ_m s_D(mD) x(m) e^{−jωmD}

but s_D(mD) ≡ 1,

∴ Y(e^{jω}) = X(e^{jωD})
Interpolator
To interpolate between the nonzero samples generated by upsampling, we
use a low pass filter with cutoff at ω_d = π/D.
The combination of an upsampler followed by a low pass ﬁlter is referred
to as an interpolator.
Time Domain Analysis of an Interpolator

    y(n) = Σ_k v(k) h(n − k),   v(k) = s_D(k) x(k/D)

Let ℓ = k/D ⇒ k = ℓD:

    y(n) = Σ_ℓ x(ℓ) h(n − ℓD)

    h(n) = (1/2π) ∫_{−π}^{π} H(e^{jω}) e^{jωn} dω
         = (1/2π) ∫_{−π/D}^{π/D} e^{jωn} dω
         = (1/2π) CTFT^{−1}{ rect( ω/(2π/D) ) } |_{t=n/(2π)}
         = (1/2π) · (2π/D) sinc( (2π/D)(n/(2π)) )
         = (1/D) sinc(n/D)

    y(n) = Σ_ℓ x(ℓ) h(n − ℓD)
         = (1/D) Σ_ℓ x(ℓ) sinc( (n − ℓD)/D )
Chapter 5
Z Transform (ZT)
5.1 Chapter Outline
In this chapter, we will discuss:
1. Derivation of the Z transform
2. Convergence of the ZT
3. ZT properties and pairs
4. ZT and linear, constant coeﬃcient diﬀerence equations
5. Inverse Z transform
6. General Form of Response of LTI Systems
5.2 Derivation of the Z Transform
1. Extension of DTFT
2. Transform exists for a larger class of signals than does DTFT
3. Provides an important tool for characterizing behavior of LTI
systems with rational transfer functions
• Recall suﬃcient conditions for existence of the DTFT
1. Σ_n |x(n)|² < ∞

   or

2. Σ_n |x(n)| < ∞
• By using impulses we were able to deﬁne the DTFT for periodic
signals which satisfy neither of the above conditions
Example 1
Consider x(n) = 2^n u(n).
It also satisfies neither condition for existence of the DTFT.
Define x̄(n) = r^{−n} x(n), r > 2.
Now

    Σ_{n=0}^{∞} x̄(n) = Σ_{n=0}^{∞} r^{−n} x(n)
                      = Σ_{n=0}^{∞} (2/r)^n
                      = 1/(1 − 2/r),   |2/r| < 1 or r > 2

∴ the DTFT of x̄(n) exists:

    X̄(e^{jω}) = Σ_{n=0}^{∞} x̄(n) e^{−jωn}
              = Σ_{n=0}^{∞} [2(re^{jω})^{−1}]^n
              = 1 / (1 − 2(re^{jω})^{−1}),   |2(re^{jω})^{−1}| < 1 or r > 2
Now let's express X̄(e^{jω}) in terms of x(n):

    X̄(e^{jω}) = Σ_{n=0}^{∞} r^{−n} x(n) e^{−jωn}
              = Σ_{n=0}^{∞} x(n) (re^{jω})^{−n}

Let z = re^{jω} and define the Z transform (ZT) of x(n) to be the DTFT of
x(n) after multiplication by the convergence factor r^{−n} u(n):

    X(z) = X̄(e^{jω}) = Σ_n x(n) z^{−n}

For the example x(n) = 2^n u(n),

    X(z) = 1/(1 − 2z^{−1}),   |z| > 2

It is important to specify the region of convergence, since the transform is
not uniquely defined without it.
Example 2

Let y(n) = −2^n u(−n − 1).

    Y(z) = Σ_n y(n) z^{−n} = − Σ_{n=−∞}^{−1} 2^n z^{−n}
         = − Σ_{n=−∞}^{−1} (z/2)^{−n}
         = − Σ_{n=1}^{∞} (z/2)^n
         = − Σ_{n=0}^{∞} (z/2)^n + 1

    Y(z) = 1 − 1/(1 − z/2),   |z/2| < 1 or |z| < 2

    Y(z) = (−z/2)/(1 − z/2) = 1/(1 − 2z^{−1}),   |z| < 2

So we have

    x(n) = 2^n u(n)        ↔  X(z) = 1/(1 − 2z^{−1}),  |z| > 2   (ZT)
    y(n) = −2^n u(−n − 1)  ↔  Y(z) = 1/(1 − 2z^{−1}),  |z| < 2   (ZT)
1. The two transforms have the same functional form
2. They differ only in their regions of convergence

Example 3

    w(n) = 2^n,  −∞ < n < ∞
    w(n) = x(n) − y(n)

By linearity of the ZT,

    W(z) = X(z) − Y(z)
         = 1/(1 − 2z^{−1}) − 1/(1 − 2z^{−1}) = 0

But note that X(z) and Y(z) have no common region of convergence.
Therefore, there is no ZT for

    w(n) = 2^n,  −∞ < n < ∞
5.3 Convergence of the ZT

1. A series Σ_{n=0}^{∞} u_n is said to converge to U if, given any real
   ε > 0, there exists an integer M such that

       | Σ_{n=0}^{N−1} u_n − U | < ε   for all N > M

2. Here the sequence u_n and the limit U may be either real or complex

3. A sufficient condition for convergence of the series Σ_{n=0}^{∞} u_n is
   absolute convergence, i.e.

       Σ_{n=0}^{∞} |u_n| < ∞

4. Note that absolute convergence is not necessary for ordinary
   convergence, as illustrated by the following example. The series

       Σ_{n=1}^{∞} (−1)^n (1/n) = −ln(2)

   is convergent, whereas the series Σ_{n=1}^{∞} 1/n is not.
5. However, when we discuss convergence of the Z transform, we
   consider only absolute convergence. Thus, we say that

       X(z) = Σ_n x(n) z^{−n}

   converges at z = z_0 if

       Σ_n |x(n) z_0^{−n}| = Σ_n |x(n)| r_0^{−n} < ∞

   where r_0 = |z_0|.
Region of Convergence for the ZT

As a consequence of restricting ourselves to absolute convergence, we
have the following properties:

P1. If X(z) converges at z = z_0, then it converges for all z for which
    |z| = r_0, where r_0 = |z_0|.
Proof:

    Σ_n |x(n) z^{−n}| = Σ_n |x(n)| r_0^{−n} < ∞   by hypothesis
P2. If x(n) is a causal sequence, i.e. x(n) = 0, n < 0, and X(z) converges
    for |z| = r_1, then it converges for all z such that |z| = r > r_1.

Proof:

    Σ_{n=0}^{∞} |x(n) z^{−n}| = Σ_{n=0}^{∞} |x(n)| r^{−n}
                              < Σ_{n=0}^{∞} |x(n)| r_1^{−n} < ∞   by hypothesis
P3. If x(n) is an anticausal sequence, i.e. x(n) = 0, n > 0, and X(z)
    converges for |z| = r_2, then it converges for all z such that |z| = r < r_2.

Proof:

    Σ_{n=−∞}^{0} |x(n) z^{−n}| = Σ_{n=0}^{∞} |x(−n)| r^n
                               < Σ_{n=0}^{∞} |x(−n)| r_2^n < ∞   by hypothesis
P4. If x(n) is a mixed causal sequence, i.e. x(n) ≠ 0 for some n < 0 and
    x(n) ≠ 0 for some n > 0, and X(z) converges for some |z| = r_0, then
    there exist two positive reals r_1 and r_2 with r_1 < r_0 < r_2 such
    that X(z) converges for all z satisfying r_1 < |z| < r_2.
Proof:

Let x(n) = x_−(n) + x_+(n), where x_−(n) is anticausal and x_+(n) is causal.
Since X(z) converges for |z| = r_0, X_−(z) and X_+(z) must also both
converge for |z| = r_0.
From properties 2 and 3, there exist two positive reals r_1 and r_2 with
r_1 < r_0 < r_2 such that X_−(z) converges for |z| < r_2 and X_+(z) converges
for |z| > r_1.
The ROC for X(z) is just the intersection of these two ROCs.
Example 3

    x(n) = 2^{−|n|}

    X(z) = Σ_n 2^{−|n|} z^{−n}
         = Σ_{n=−∞}^{−1} 2^n z^{−n} + Σ_{n=0}^{∞} 2^{−n} z^{−n}
         = Σ_{n=0}^{∞} (2^{−1}z)^n − 1 + Σ_{n=0}^{∞} (2^{−1}z^{−1})^n
         = 1/(1 − 2^{−1}z) − 1 + 1/(1 − 2^{−1}z^{−1})

    |2^{−1}z| < 1   and   |2^{−1}z^{−1}| < 1
    |z| < 2         and   |z| > 1/2

Combining everything,

    X(z) = (−3z^{−1}/2) / (1 − (5/2)z^{−1} + z^{−2}),   1/2 < |z| < 2
5.4 ZT Properties and Pairs

Transform Relations

1. Linearity

       a_1 x_1(n) + a_2 x_2(n)  ↔  a_1 X_1(z) + a_2 X_2(z)   (ZT)

2. Shifting

       x(n − n_0)  ↔  z^{−n_0} X(z)   (ZT)

3. Modulation

       z_0^n x(n)  ↔  X(z/z_0)   (ZT)

4. Multiplication by the time index

       n x(n)  ↔  −z (d/dz)[X(z)]   (ZT)

5. Convolution

       x(n) ∗ y(n)  ↔  X(z) Y(z)   (ZT)

6. Relation to the DTFT

   If X_ZT(z) converges for |z| = 1, then the DTFT of x(n) exists and

       X_DTFT(e^{jω}) = X_ZT(z) |_{z=e^{jω}}
Important Transform Pairs

1. δ(n)  ↔  1,  all z   (ZT)

2. a^n u(n)  ↔  1/(1 − az^{−1}),  |z| > |a|   (ZT)

3. −a^n u(−n − 1)  ↔  1/(1 − az^{−1}),  |z| < |a|   (ZT)

4. n a^n u(n)  ↔  az^{−1}/(1 − az^{−1})²,  |z| > |a|   (ZT)

5. −n a^n u(−n − 1)  ↔  az^{−1}/(1 − az^{−1})²,  |z| < |a|   (ZT)
5.5 ZT and Linear, Constant Coefficient
Difference Equations

1. From the convolution property, we obtain a characterization for all
   LTI systems
   • impulse response: y(n) = h(n) ∗ x(n)
   • transfer function: Y(z) = H(z) X(z)

2. An important class of LTI systems are those characterized by
   linear, constant coefficient difference equations:

       y(n) = Σ_{k=0}^{M} a_k x(n − k) − Σ_{ℓ=1}^{N} b_ℓ y(n − ℓ)

   • nonrecursive: N = 0; always finite impulse response (FIR)
   • recursive: N > 0; usually infinite impulse response (IIR)

3. Take the ZT of both sides of the equation:

       Y(z) = Σ_{k=0}^{M} a_k z^{−k} X(z) − Σ_{ℓ=1}^{N} b_ℓ z^{−ℓ} Y(z)

       H(z) = Y(z)/X(z)
            = ( Σ_{k=0}^{M} a_k z^{−k} ) / ( 1 + Σ_{ℓ=1}^{N} b_ℓ z^{−ℓ} )
4. M ≥ N: multiply numerator and denominator by z^M.

       H(z) = ( Σ_{k=0}^{M} a_k z^{M−k} ) / ( z^{M−N} [ z^N + Σ_{ℓ=1}^{N} b_ℓ z^{N−ℓ} ] )

5. M < N: multiply numerator and denominator by z^N.

       H(z) = ( z^{N−M} Σ_{k=0}^{M} a_k z^{M−k} ) / ( z^N + Σ_{ℓ=1}^{N} b_ℓ z^{N−ℓ} )
6. By the fundamental theorem of algebra, the numerator and
   denominator polynomials may always be factored:

   • M ≥ N:

       H(z) = Π_{k=1}^{M} (z − z_k) / ( z^{M−N} Π_{ℓ=1}^{N} (z − p_ℓ) )

   • M < N:

       H(z) = z^{N−M} Π_{k=1}^{M} (z − z_k) / Π_{ℓ=1}^{N} (z − p_ℓ)
7. Roots of the numerator and denominator polynomials:
   • zeros z_1, ..., z_M
   • poles p_1, ..., p_N
   • If M ≥ N, there are M − N additional poles at z = 0
   • If M < N, there are N − M additional zeros at z = 0
8. The poles and zeros play an important role in determining system
behavior
Example

    y(n) = x(n) + x(n − 1) − (1/2) y(n − 2)

    Y(z) = X(z) + z^{−1} X(z) − (1/2) z^{−2} Y(z)

    H(z) = Y(z)/X(z) = (1 + z^{−1}) / (1 + (1/2) z^{−2})

    zeros: z_1 = 0,  z_2 = −1
    poles: p_1 = j/√2,  p_2 = −j/√2

    H(z) = z(z + 1) / (z² + 1/2) = z(z + 1) / ( (z − j/√2)(z + j/√2) )
Effect of Poles and Zeros on the Frequency Response

The frequency response is H(e^{jω}):

    h(n)  ↔  H_DTFT(e^{jω}) = H(e^{jω})   (DTFT)

    h(n)  ↔  H_ZT(z)   (ZT)

Letting z = e^{jω},

    H(e^{jω}) = H_DTFT(e^{jω}) = H_ZT(e^{jω})
Assume M < N. Then

    H(z) = z^{N−M} Π_{k=1}^{M} (z − z_k) / Π_{ℓ=1}^{N} (z − p_ℓ)

    H(e^{jω}) = e^{jω(N−M)} Π_{k=1}^{M} (e^{jω} − z_k) / Π_{ℓ=1}^{N} (e^{jω} − p_ℓ)

    |H(e^{jω})| = Π_{k=1}^{M} |e^{jω} − z_k| / Π_{ℓ=1}^{N} |e^{jω} − p_ℓ|

    ∠H(e^{jω}) = ω(N − M) + Σ_{k=1}^{M} ∠(e^{jω} − z_k) − Σ_{ℓ=1}^{N} ∠(e^{jω} − p_ℓ)
Example

    H(z) = z(z + 1) / ( (z − j/√2)(z + j/√2) )

    |H(e^{jω})| = |e^{jω} + 1| / ( |e^{jω} − j/√2| |e^{jω} + j/√2| )

    ∠H(e^{jω}) = ω + ∠(e^{jω} + 1) − ∠(e^{jω} − j/√2) − ∠(e^{jω} + j/√2)
Contribution from a single pole
ω = 0 (here a and b denote the vectors from the zeros −1 and 0 to the
point e^{jω} on the unit circle, and c and d the vectors from the poles
j/√2 and −j/√2):

    |H(e^{j0})| = |a| |b| / ( |c| |d| )

    |a| = 2
    |b| = 1
    |c| = √(1 + 1/2) = √(3/2)
    |d| = √(3/2)

    |H(e^{j0})| = 4/3 ≈ 1.33

    ∠H(e^{j0}) = ∠a + ∠b − ∠c − ∠d

    ∠a = 0
    ∠b = 0
    ∠c = arctan(1/√2)
    ∠d = −∠c

    ∠H(e^{j0}) = 0
ω = π/4:

    |H(e^{jπ/4})| = |a| |b| / ( |c| |d| )

    |a| = √( (1 + 1/√2)² + 1/2 ) ≈ 1.85
    |b| = 1
    |c| = 1/√2
    |d| = √( (2/√2)² + (1/√2)² ) = √(5/2)

    |H(e^{jπ/4})| ≈ 1.65

    ∠H(e^{jπ/4}) = ∠a + ∠b − ∠c − ∠d

    ∠a = 22.5°
    ∠b = 45°
    ∠c = 0
    ∠d ≈ 63.4°

    ∠H(e^{jπ/4}) ≈ 4.1°
ω = π/2:

    |H(e^{jπ/2})| = |a| |b| / ( |c| |d| )

    |a| = √2
    |b| = 1
    |c| = 1 − 1/√2
    |d| = 1 + 1/√2

    |H(e^{jπ/2})| = 2√2 ≈ 2.83

    ∠H(e^{jπ/2}) = ∠a + ∠b − ∠c − ∠d

    ∠a = 45°
    ∠b = 90°
    ∠c = 90°
    ∠d = 90°

    ∠H(e^{jπ/2}) = −45°
General Rules
1. A pole near the unit circle will cause the frequency response to
increase in the neighborhood of that pole
2. A zero near the unit circle will cause the frequency response to
decrease in the neighborhood of that zero
5.6 Inverse Z Transform
1. Formally, the inverse ZT may be written as

       x(n) = (1/j2π) ∮_C X(z) z^{n−1} dz

   where the integration is performed in a counterclockwise direction
   around a closed contour C in the region of convergence of X(z),
   encircling the origin.
2. If X(z) is a rational function of z, i.e. a ratio of polynomials, it is
not necessary to evaluate the integral
3. Instead, we use a partial fraction expansion to express X(z) as a
sum of simple terms for which the inverse transform may be
recognized by inspection
4. The ROC plays a critical role in this process
5. We will illustrate the method via a series of examples
Example 1

The signal x(n) = (1/3)^n u(n) is input to a DT LTI system described by

    y(n) = x(n) − x(n − 1) + (1/2) y(n − 1)

Find the output y(n).
What are our options?

• direct substitution

  Assume y(−1) = 0.

    y(0) = x(0) − x(−1) + (1/2)y(−1) = 1 − 0 + (1/2)(0) = 1.000
    y(1) = x(1) − x(0) + (1/2)y(0) = 1/3 − 1 + (1/2)(1.0) = −0.167
    y(2) = x(2) − x(1) + (1/2)y(1) = 1/9 − 1/3 + (1/2)(−0.167) = −0.306
    y(3) = x(3) − x(2) + (1/2)y(2) = 1/27 − 1/9 + (1/2)(−0.306) = −0.227
• convolution

    y(n) = Σ_{k=−∞}^{∞} h(n − k) x(k)

  Find the impulse response by direct substitution:

    h(n) = δ(n) − δ(n − 1) + (1/2) h(n − 1)
    h(0) = 1 − 0 + (1/2)(0) = 1
    h(1) = 0 − 1 + (1/2)(1) = −1/2
    h(2) = 0 − 0 + (1/2)(−1/2) = −1/4
    h(3) = 0 − 0 + (1/2)(−1/4) = −1/8

  Recognize h(n) = δ(n) − (1/2)^n u(n − 1).

    y(n) = Σ_{k=−∞}^{∞} [ δ(n − k) − (1/2)^{n−k} u(n − k − 1) ] (1/3)^k u(k)
         = (1/3)^n u(n) − (1/2)^n Σ_{k=0}^{n−1} (2/3)^k u(n − 1)
         = (1/3)^n u(n) − (1/2)^n · [ (1 − (2/3)^n) / (1 − 2/3) ] u(n)
         = 4(1/3)^n u(n) − 3(1/2)^n u(n)
• DTFT

    x(n) = (1/3)^n u(n)
    X(e^{jω}) = 1 / (1 − (1/3) e^{−jω})

    h(n) = δ(n) − (1/2)^n u(n − 1)
    H(e^{jω}) = 1 − [ 1/(1 − (1/2) e^{−jω}) − 1 ]
              = (1 − e^{−jω}) / (1 − (1/2) e^{−jω})
    Y(e^{jω}) = H(e^{jω}) X(e^{jω})
              = (1 − e^{−jω}) / ( (1 − (1/2)e^{−jω})(1 − (1/3)e^{−jω}) )
              = (1 − e^{−jω}) / ( 1 − (5/6)e^{−jω} + (1/6)e^{−j2ω} )

    y(n) = (1/2π) ∫_{−π}^{π} [ (1 − e^{−jω}) / (1 − (5/6)e^{−jω} + (1/6)e^{−j2ω}) ] e^{jωn} dω
• ZT

    x(n) = (1/3)^n u(n)
    X(z) = 1 / (1 − (1/3) z^{−1}),   |z| > 1/3

    y(n) = x(n) − x(n − 1) + (1/2) y(n − 1)
    Y(z) = X(z) − z^{−1} X(z) + (1/2) z^{−1} Y(z)

    H(z) = Y(z)/X(z) = (1 − z^{−1}) / (1 − (1/2) z^{−1})

  What is the region of convergence?
  Since the system is causal, h(n) is a causal signal
  ⇒ H(z) converges for |z| > 1/2.

    Y(z) = H(z) X(z)
         = (1 − z^{−1}) / ( (1 − (1/2)z^{−1})(1 − (1/3)z^{−1}) )

    ROC[Y(z)] = ROC[H(z)] ∩ ROC[X(z)] = { z : |z| > 1/2 }
Partial Fraction Expansion (PFE)
(two distinct poles)

    (1 − z^{−1}) / ( (1 − (1/2)z^{−1})(1 − (1/3)z^{−1}) )
        = A_1/(1 − (1/2)z^{−1}) + A_2/(1 − (1/3)z^{−1})

To solve for A_1 and A_2, we multiply both sides by
(1 − (1/2)z^{−1})(1 − (1/3)z^{−1}) to obtain

    1 − z^{−1} = A_1 (1 − (1/3)z^{−1}) + A_2 (1 − (1/2)z^{−1})

Matching coefficients of z^0 and z^{−1}:

    1 = A_1 + A_2                 ⇒  A_1 = −3
    −1 = −(1/3)A_1 − (1/2)A_2     ⇒  A_2 = 4
so

    (1 − z^{−1}) / ( (1 − (1/2)z^{−1})(1 − (1/3)z^{−1}) )
        = −3/(1 − (1/2)z^{−1}) + 4/(1 − (1/3)z^{−1})

Check:

    1 − z^{−1} = −3(1 − (1/3)z^{−1}) + 4(1 − (1/2)z^{−1})
Now

    y(n) = y_1(n) + y_2(n)

where the possible ROCs are

    Y_1(z) = −3/(1 − (1/2)z^{−1}),   |z| < 1/2 or |z| > 1/2
    Y_2(z) = 4/(1 − (1/3)z^{−1}),    |z| < 1/3 or |z| > 1/3
and

    ROC[Y(z)] = ROC[Y_1(z)] ∩ ROC[Y_2(z)] = { z : |z| > 1/2 }

Recall the transform pairs:

    a^n u(n)        ↔  1/(1 − az^{−1}),  |z| > |a|   (ZT)
    −a^n u(−n − 1)  ↔  1/(1 − az^{−1}),  |z| < |a|   (ZT)

so

    −3/(1 − (1/2)z^{−1})  →  −3 (1/2)^n u(n)
    4/(1 − (1/3)z^{−1})   →  4 (1/3)^n u(n)

and

    y(n) = −3 (1/2)^n u(n) + 4 (1/3)^n u(n)
Consider again

    Y(z) = (1 − z^{−1}) / ( (1 − (1/2)z^{−1})(1 − (1/3)z^{−1}) )
         = −3/(1 − (1/2)z^{−1}) + 4/(1 − (1/3)z^{−1})

There are 3 possible ROCs for a signal with this ZT.
Case II: 1/3 < |z| < 1/2

The possible ROCs of the individual terms are

    Y_1(z) = −3/(1 − (1/2)z^{−1}),   |z| < 1/2 or |z| > 1/2
    Y_2(z) = 4/(1 − (1/3)z^{−1}),    |z| < 1/3 or |z| > 1/3

and ROC[Y(z)] = ROC[Y_1(z)] ∩ ROC[Y_2(z)], so

    ROC[Y_1(z)] = { z : |z| < 1/2 }
    ROC[Y_2(z)] = { z : |z| > 1/3 }

    −3/(1 − (1/2)z^{−1})  →  3 (1/2)^n u(−n − 1)
    4/(1 − (1/3)z^{−1})   →  4 (1/3)^n u(n)

    y(n) = 3 (1/2)^n u(−n − 1) + 4 (1/3)^n u(n)
Case III: |z| < 1/3

    ROC[Y_1(z)] = { z : |z| < 1/2 }
    ROC[Y_2(z)] = { z : |z| < 1/3 }

    −3/(1 − (1/2)z^{−1})  →  3 (1/2)^n u(−n − 1)
    4/(1 − (1/3)z^{−1})   →  −4 (1/3)^n u(−n − 1)

    y(n) = 3 (1/2)^n u(−n − 1) − 4 (1/3)^n u(−n − 1)
Summary of Possible ROCs and Signals

    Y(z) = (1 − z^{−1}) / ( (1 − (1/2)z^{−1})(1 − (1/3)z^{−1}) )

    ROC                 Signal
    1/2 < |z|           −3(1/2)^n u(n) + 4(1/3)^n u(n)
    1/3 < |z| < 1/2     3(1/2)^n u(−n − 1) + 4(1/3)^n u(n)
    |z| < 1/3           3(1/2)^n u(−n − 1) − 4(1/3)^n u(−n − 1)
Residue Method for Evaluating the Coefficients of the PFE

    Y(z) = (1 − z^{−1}) / ( (1 − (1/2)z^{−1})(1 − (1/3)z^{−1}) )
         = A_1/(1 − (1/2)z^{−1}) + A_2/(1 − (1/3)z^{−1})

    A_1 = (1 − (1/2)z^{−1}) Y(z) |_{z=1/2}
        = (1 − z^{−1}) / (1 − (1/3)z^{−1}) |_{z=1/2}
        = (1 − 2)/(1 − 2/3) = −3

    A_2 = (1 − (1/3)z^{−1}) Y(z) |_{z=1/3}
        = (1 − z^{−1}) / (1 − (1/2)z^{−1}) |_{z=1/3}
        = (1 − 3)/(1 − 3/2) = 4
Example 2 (complex conjugate poles)

    Y(z) = (1 + (1/2)z^{−1}) / ( (1 − jz^{−1})(1 + jz^{−1}) )
         = A_1/(1 − jz^{−1}) + A_2/(1 + jz^{−1})

    A_1 = (1 + (1/2)z^{−1}) / (1 + jz^{−1}) |_{z=j}  = (1/2)(1 − (1/2)j)

    A_2 = (1 + (1/2)z^{−1}) / (1 − jz^{−1}) |_{z=−j} = (1/2)(1 + (1/2)j)

Check:

    (1/2)(1 − (1/2)j)/(1 − jz^{−1}) + (1/2)(1 + (1/2)j)/(1 + jz^{−1})
      = [ (1/2)(1 − (1/2)j)(1 + jz^{−1}) + (1/2)(1 + (1/2)j)(1 − jz^{−1}) ]
            / ( (1 − jz^{−1})(1 + jz^{−1}) )
      = (1 + (1/2)z^{−1}) / ( (1 − jz^{−1})(1 + jz^{−1}) )

• Note that A_1 = A_2^*
• This is necessary for y(n) to be real-valued
• Use it to save computation
Example 3 (poles with multiplicity greater than 1)

    Y(z) = (1 + z^{−1}) / ( (1 − (1/2)z^{−1})² (1 − z^{−1}) ),   1/2 < |z| < 1

Recall

    n a^n u(n)  ↔  az^{−1}/(1 − az^{−1})²,  |z| > |a|   (ZT)
Try

    (1 + z^{−1}) / ( (1 − (1/2)z^{−1})² (1 − z^{−1}) )
        = A_1/(1 − (1/2)z^{−1})² + A_2/(1 − z^{−1})

    A_1 = (1 + z^{−1}) / (1 − z^{−1}) |_{z=1/2} = −3

    A_2 = (1 + z^{−1}) / (1 − (1/2)z^{−1})² |_{z=1} = 8
Check:

    −3/(1 − (1/2)z^{−1})² + 8/(1 − z^{−1})
      = [ −3(1 − z^{−1}) + 8(1 − z^{−1} + (1/4)z^{−2}) ] / ( (1 − (1/2)z^{−1})² (1 − z^{−1}) )
      = (5 − 5z^{−1} + 2z^{−2}) / ( (1 − (1/2)z^{−1})² (1 − z^{−1}) )
      ≠ (1 + z^{−1}) / ( (1 − (1/2)z^{−1})² (1 − z^{−1}) )

so these two terms alone do not reproduce Y(z); a term in the first power
of (1 − (1/2)z^{−1}) is also needed.
General form of the terms in the PFE for a pole p_ℓ with multiplicity m_ℓ:

    A_{ℓ,m_ℓ}/(1 − p_ℓ z^{−1})^{m_ℓ} + A_{ℓ,m_ℓ−1}/(1 − p_ℓ z^{−1})^{m_ℓ−1}
        + ... + A_{ℓ,1}/(1 − p_ℓ z^{−1})
Now we have

    (1 + z^{−1}) / ( (1 − (1/2)z^{−1})² (1 − z^{−1}) )
        = A_{1,2}/(1 − (1/2)z^{−1})² + A_{1,1}/(1 − (1/2)z^{−1}) + A_2/(1 − z^{−1})

    A_{1,2} = −3,  A_2 = 8
To solve for A_{1,1}, multiply both sides by (1 − (1/2)z^{−1})²:

    (1 + z^{−1})/(1 − z^{−1})
        = A_{1,2} + A_{1,1}(1 − (1/2)z^{−1}) + A_2 (1 − (1/2)z^{−1})²/(1 − z^{−1})

Differentiate with respect to z^{−1}:

    2/(1 − z^{−1})² = 0 + A_{1,1}(−1/2) + A_2 P(z)

where P(z) = d/dz^{−1}[ (1 − (1/2)z^{−1})²/(1 − z^{−1}) ] retains a factor
of (1 − (1/2)z^{−1}) and therefore vanishes at z = 1/2. Letting z = 1/2
(i.e. z^{−1} = 2):

    A_{1,1} = −2 · 2/(1 − 2)² = −4
Example 4
(numerator degree ≥ denominator degree)

    Y(z) = (1 + z^{−2}) / (1 − (1/2)z^{−1} − (1/2)z^{−2})

Reduce the degree of the numerator by long division, so

    (1 + z^{−2}) / (1 − (1/2)z^{−1} − (1/2)z^{−2})
        = −2 + (3 − z^{−1}) / (1 − (1/2)z^{−1} − (1/2)z^{−2})

In general, we will have a polynomial of degree M − N:

    Σ_{k=0}^{M−N} B_k z^{−k}  →  Σ_{k=0}^{M−N} B_k δ(n − k)
5.7 General Form of Response of LTI
Systems
If both X(z) and H(z) are rational,

    Y(z) = H(z) X(z)
         = [ P_H(z)/Q_H(z) ] · [ P_X(z)/Q_X(z) ]
         = [ P_H(z) / Π_{ℓ=1}^{N_H} (1 − p_ℓ^H z^{−1}) ] · [ P_X(z) / Π_{ℓ=1}^{N_X} (1 − p_ℓ^X z^{−1}) ]

    Y(z) = P_Y(z) / Π_{ℓ=1}^{N_Y} (1 − p_ℓ^Y z^{−1})

where

    P_Y(z) = P_H(z) P_X(z)
    N_Y = N_H + N_X

and the p_ℓ^Y are the combined set of poles p_ℓ^H and p_ℓ^X.
Drop the superscript/subscript Y.
Accounting for poles with multiplicity > 1:

    Y(z) = P(z) / Π_{ℓ=1}^{D} (1 − p_ℓ z^{−1})^{m_ℓ}

where D is the number of distinct poles and N = Σ_{ℓ=1}^{D} m_ℓ.

For M < N,

    Y(z) = Σ_{ℓ=1}^{D} Σ_{k=1}^{m_ℓ} A_{ℓk} / (1 − p_ℓ z^{−1})^k
• Each term under the summation will give rise to a term in the output y(n)
• It will be causal or anticausal depending on the location of the pole
  relative to the ROC
• Poles between the origin and the ROC result in causal terms
• Poles separated from the origin by the ROC result in anticausal terms
• For simplicity, consider only causal terms in what follows

Real pole with multiplicity 1:

    A/(1 − pz^{−1})  →  A p^n u(n)
Complex conjugate pair of poles with multiplicity 1:

    A/(1 − pz^{−1}) + A^*/(1 − p^* z^{−1})  →  A p^n u(n) + A^* (p^*)^n u(n)

    [A p^n + A^*(p^*)^n] u(n)
        = [ |A|e^{j∠A} (|p|e^{j∠p})^n + |A|e^{−j∠A} (|p|e^{−j∠p})^n ] u(n)
        = 2|A| |p|^n cos(n∠p + ∠A) u(n)

• a sinusoid with amplitude 2|A| |p|^n, which
  1. grows exponentially if |p| > 1
  2. is constant if |p| = 1
  3. decays exponentially if |p| < 1
• digital frequency ω_d = ∠p radians/sample
• phase ∠A
Examples
Real pole with multiplicity 2:

    A/(1 − pz^{−1})²  →  ?

Recall

    n a^n u(n)  ↔  az^{−1}/(1 − az^{−1})²,  |z| > |a|   (ZT)

    A/(1 − pz^{−1})² = (A/p) z [ pz^{−1}/(1 − pz^{−1})² ]
        →  (A/p)(n + 1) p^{n+1} u(n + 1)

    (A/p)(n + 1) p^{n+1} u(n + 1) = A(n + 1) p^n u(n)
Example (A=1)
• Similar results are obtained with complex conjugate poles that have
multiplicity 2
• In general, repeating a pole with multiplicity m results in
multiplication of the signal obtained with multiplicity 1 by a
polynomial in n with degree m−1
Example
$$\frac{A}{(1-pz^{-1})^3} \;\xrightarrow{\mathrm{ZT}^{-1}}\; \frac{A}{2}(n+1)(n+2)\,p^n u(n)$$
Stability Considerations
• A system is BIBO stable if every bounded input produces a bounded output
• A DT LTI system is BIBO stable ⇔ $\sum_n |h(n)| < \infty$
• $\sum_n |h(n)| < \infty \;\Leftrightarrow\; H(z)$ converges on the unit circle
• A causal DT LTI system is BIBO stable ⇔ all poles of H(z) are strictly inside the unit circle
Stability and the general form of the response:
1. Real pole with multiplicity 1: $Ap^n u(n)$ is bounded if $|p| \le 1$
2. Complex conjugate pair of poles with multiplicity 1: $2|A||p|^n\cos(\angle p\,n+\angle A)u(n)$ is bounded if $|p| \le 1$
3. Real pole with multiplicity 2: $A(n+1)p^n u(n)$ is bounded if $|p| < 1$
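These boundedness conditions are easy to check numerically. The following sketch (NumPy; the values of A and p are illustrative, not taken from the text) evaluates each of the three causal terms:

```python
import numpy as np

def term_single(A, p, n):
    """A p^n u(n): real pole with multiplicity 1."""
    return A * p**n

def term_conj_pair(A, p, n):
    """2|A| |p|^n cos(angle(p) n + angle(A)) u(n): conjugate pole pair."""
    return 2 * abs(A) * abs(p)**n * np.cos(np.angle(p) * n + np.angle(A))

def term_double(A, p, n):
    """A (n+1) p^n u(n): real pole with multiplicity 2."""
    return A * (n + 1) * p**n

n = np.arange(200)
# |p| < 1: all three terms stay bounded
assert np.abs(term_single(1.0, 0.9, n)).max() <= 1.0
assert np.abs(term_conj_pair(1.0, 0.9 * np.exp(1j * 0.4), n)).max() <= 2.0
assert np.isfinite(np.abs(term_double(1.0, 0.9, n)).max())
# |p| = 1 with multiplicity 2: the factor (n+1) grows without bound
assert np.abs(term_double(1.0, 1.0, n)).max() == 200.0
```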
Chapter 6
Discrete Fourier Transform (DFT)
6.1 Chapter Outline
In this chapter, we will discuss:
1. Derivation of the DFT
2. DFT Properties and Pairs
3. Spectral Analysis via the DFT
4. Fast Fourier Transform (FFT) Algorithm
5. Periodic Convolution
6.2 Derivation of the DFT
Summary of Spectral Representations
Signal Type     Transform                    Frequency Domain
CT, Periodic    CT Fourier Series            Discrete
CT, Aperiodic   CT Fourier Transform         Continuous
DT, Aperiodic   DT Fourier Transform         Continuous
DT, Periodic    Discrete Fourier Transform   Discrete
Computation of the DTFT
$$X(e^{j\omega}) = \sum_{n=-\infty}^{\infty}x(n)e^{-j\omega n}$$
In order to compute $X(e^{j\omega})$, we must do two things:
1. Truncate the summation so that it ranges over finite limits
2. Discretize $\omega = \omega_k$
Suppose we choose
1. $\bar{x}(n) = \begin{cases}x(n), & 0\le n\le N-1\\ 0, & \text{else}\end{cases}$
2. $\omega_k = 2\pi k/N$, $k = 0,\dots,N-1$
Then we obtain
$$\bar{X}(k) \equiv \sum_{n=0}^{N-1}\bar{x}(n)e^{-j2\pi kn/N}$$
This is the forward DFT. To obtain the inverse DFT, we could discretize the inverse DTFT:
$$x(n) = \frac{1}{2\pi}\int_0^{2\pi}X(e^{j\omega})e^{j\omega n}\,d\omega$$
with the substitutions
$$\omega \to \omega_k = \frac{2\pi k}{N},\qquad \int_0^{2\pi}d\omega \to \frac{2\pi}{N}\sum_{k=0}^{N-1},\qquad X(e^{j\omega}) \to \bar{X}(k),\qquad e^{j\omega n} \to e^{j2\pi kn/N}$$
This results in
$$x(n) \approx \frac{1}{N}\sum_{k=0}^{N-1}\bar{X}(k)e^{j2\pi kn/N}$$
Note the approximation sign:
1. we truncated x(n), so $\bar{X}(k) \approx X(e^{j2\pi k/N})$
2. we approximated the integral by a sum
Alternate approach
• Use the orthogonality of complex exponential signals
• Consider again
$$\bar{X}(k) = \sum_{n=0}^{N-1}\bar{x}(n)e^{-j2\pi kn/N}$$
• For fixed $0\le m\le N-1$, multiply both sides by $e^{j2\pi km/N}$ and sum over k:
$$\sum_{k=0}^{N-1}\bar{X}(k)e^{j2\pi km/N} = \sum_{k=0}^{N-1}\left[\sum_{n=0}^{N-1}\bar{x}(n)e^{-j2\pi kn/N}\right]e^{j2\pi km/N} = \sum_{n=0}^{N-1}\bar{x}(n)\sum_{k=0}^{N-1}e^{-j2\pi k(n-m)/N} = \sum_{n=0}^{N-1}\bar{x}(n)\left[\frac{1-e^{-j2\pi(n-m)}}{1-e^{-j2\pi(n-m)/N}}\right]$$
$$\sum_{k=0}^{N-1}\bar{X}(k)e^{j2\pi km/N} = \sum_{n=0}^{N-1}\bar{x}(n)\,N\delta(n-m) = N\bar{x}(m)$$
$$\therefore\ \bar{x}(m) = \frac{1}{N}\sum_{k=0}^{N-1}\bar{X}(k)e^{j2\pi km/N}$$
Summarizing:
$$X(k) = \sum_{n=0}^{N-1}x(n)e^{-j2\pi kn/N},\qquad x(n) = \frac{1}{N}\sum_{k=0}^{N-1}X(k)e^{j2\pi kn/N}$$
where we have dropped the tildes.
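The transform pair above can be implemented by direct summation; a short NumPy sketch (not from the text) that cross-checks against the library FFT:

```python
import numpy as np

def dft(x):
    """X(k) = sum_n x(n) e^{-j 2 pi k n / N} by direct summation."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # W[k, n]
    return W @ x

def idft(X):
    """x(n) = (1/N) sum_k X(k) e^{j 2 pi k n / N}."""
    X = np.asarray(X, dtype=complex)
    N = len(X)
    n = np.arange(N)
    W = np.exp(2j * np.pi * np.outer(n, n) / N)
    return (W @ X) / N

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
X = dft(x)
assert np.allclose(X, np.fft.fft(x))   # matches the library FFT
assert np.allclose(idft(X), x)         # the inverse recovers x(n)
```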
6.3 DFT Properties and Pairs
1. Linearity
$$a_1x_1(n)+a_2x_2(n) \;\xleftrightarrow{\mathrm{DFT}}\; a_1X_1(k)+a_2X_2(k)$$
2. Shifting
$$x(n-n_0) \;\xleftrightarrow{\mathrm{DFT}}\; X(k)e^{-j2\pi kn_0/N}$$
3. Modulation
$$x(n)e^{j2\pi k_0n/N} \;\xleftrightarrow{\mathrm{DFT}}\; X(k-k_0)$$
4. Reciprocity
$$X(n) \;\xleftrightarrow{\mathrm{DFT}}\; Nx(-k)$$
5. Parseval's relation
$$\sum_{n=0}^{N-1}|x(n)|^2 = \frac{1}{N}\sum_{k=0}^{N-1}|X(k)|^2$$
6. Initial value
$$\sum_{n=0}^{N-1}x(n) = X(0)$$
7. Periodicity
$$x(n+mN) = x(n),\qquad X(k+N) = X(k)\qquad\text{for all integers } m$$
8. Relation to the DTFT of a finite-length sequence: let $x_0(n) \ne 0$ only for $0\le n\le N-1$, with $x_0(n) \xleftrightarrow{\mathrm{DTFT}} X_0(e^{j\omega})$. Define $x(n) = \sum_m x_0(n+mN)$, with $x(n)\xleftrightarrow{\mathrm{DFT}} X(k)$. Then
$$X(k) = X_0(e^{j2\pi k/N})$$
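Several of these properties are easy to verify numerically; the sketch below (illustrative values only) checks the shifting, Parseval, and initial value properties with `numpy.fft`:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
x = rng.standard_normal(N)
X = np.fft.fft(x)
k = np.arange(N)

# Shifting: x(n - n0) <-> X(k) e^{-j 2 pi k n0 / N} (shifts are circular)
n0 = 3
assert np.allclose(np.fft.fft(np.roll(x, n0)),
                   X * np.exp(-2j * np.pi * k * n0 / N))

# Parseval's relation
assert np.isclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2) / N)

# Initial value
assert np.isclose(np.sum(x), X[0].real)
```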
Transform Pairs
1. $x(n) = \delta(n)$, $0\le n\le N-1$: $X(k) = 1$, $0\le k\le N-1$ (by relation to DTFT)
2. $x(n) = 1$, $0\le n\le N-1$: $X(k) = N\delta(k)$, $0\le k\le N-1$ (by reciprocity)
3. $x(n) = e^{j2\pi k_0n/N}$, $0\le n\le N-1$: $X(k) = N\delta(k-k_0)$, $0\le k\le N-1$ (by the modulation property)
4. $x(n) = \cos(2\pi k_0n/N)$, $0\le n\le N-1$: $X(k) = \frac{N}{2}\left[\delta(k-k_0)+\delta(k-(N-k_0))\right]$, $0\le k\le N-1$
5. $x(n) = \begin{cases}1, & 0\le n\le M-1\\ 0, & \text{else}\end{cases}$ with $M \le N$:
$$X(k) = e^{-j\frac{2\pi k}{N}(M-1)/2}\,\frac{\sin[2\pi kM/(2N)]}{\sin[2\pi k/(2N)]}\qquad\text{(by relation to DTFT)}$$
6.4 Spectral Analysis Via the DFT
An important application of the DFT is to numerically determine the
spectral content of signals. However, the extent to which this is possible
is limited by two factors:
1. Truncation of the signal, which causes leakage
2. Frequency domain sampling, which causes the picket fence effect
Truncation of the Signal
Let $x(n) = \cos(\omega_0 n)$. Recall that
$$X(e^{j\omega}) = \mathrm{rep}_{2\pi}\left[\pi\delta(\omega-\omega_0)+\pi\delta(\omega+\omega_0)\right]$$
Also, let $w(n) = \begin{cases}1, & 0\le n\le N-1\\ 0, & \text{else}\end{cases}$ and recall
$$W(e^{j\omega}) = e^{-j\omega(N-1)/2}\,\frac{\sin(\omega N/2)}{\sin(\omega/2)}$$
Note that $W(e^{j\omega}) = W(e^{j(\omega+2\pi m)})$ for all integers m. Now let $x_t(n) = x(n)w(n)$ (t = truncated).
By the product theorem,
$$X_t(e^{j\omega}) = \frac{1}{2\pi}\int_{-\pi}^{\pi}W(e^{j(\omega-\mu)})X(e^{j\mu})\,d\mu = \frac{1}{2\pi}\int_{-\pi}^{\pi}W(e^{j(\omega-\mu)})\,\mathrm{rep}_{2\pi}\left[\pi\delta(\mu-\omega_0)+\pi\delta(\mu+\omega_0)\right]d\mu$$
$$X_t(e^{j\omega}) = \frac{1}{2\pi}\int_{-\pi}^{\pi}W(e^{j(\omega-\mu)})\left[\pi\delta(\mu-\omega_0)+\pi\delta(\mu+\omega_0)\right]d\mu = \frac{1}{2}\left[\int_{-\pi}^{\pi}W(e^{j(\omega-\mu)})\delta(\mu-\omega_0)\,d\mu + \int_{-\pi}^{\pi}W(e^{j(\omega-\mu)})\delta(\mu+\omega_0)\,d\mu\right]$$
$$X_t(e^{j\omega}) = \frac{1}{2}\left[W(e^{j(\omega-\omega_0)}) + W(e^{j(\omega+\omega_0)})\right]$$
Frequency Domain Sampling
Let $x_r(n) = \sum_m x_t(n+mN)$ (r = replicated). Recall from the relation between the DFT and the DTFT of a finite-length sequence that
$$X_r(k) = X_t(e^{j2\pi k/N}) = \frac{1}{2}\left[W(e^{j(2\pi k/N-\omega_0)}) + W(e^{j(2\pi k/N+\omega_0)})\right]$$
Consider two cases:
1. $\omega_0 = 2\pi k_0/N$ for some integer $k_0$. Example: $k_0 = 1$, $N = 8$, $\omega_0 = \pi/4$.
$$X_r(k) = \frac{1}{2}\left[W(e^{j2\pi(k-k_0)/N}) + W(e^{j2\pi(k+k_0)/N})\right]$$
In this case,
$$X_r(k) = \frac{N}{2}\left[\delta(k-k_0)+\delta(k-(N-k_0))\right],\qquad 0\le k\le N-1$$
as shown before. There is no picket fence effect.
2. $\omega_0 = 2\pi k_0/N + \pi/N$ for some integer $k_0$. Example: $k_0 = 1$, $N = 8$, $\omega_0 = 3\pi/8$.
Picket fence effect:
• The spectral peak is midway between sample locations
• Each sidelobe peak occurs at a sample location
1. Both truncation and picket fence effects are reduced by increasing N
2. Sidelobes may be suppressed at the expense of a wider mainlobe by using a smoothly tapering window
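Both cases above can be reproduced in a few lines; the following sketch (N = 8, k0 = 1, as in the examples) shows all the energy in bins k0 and N − k0 in case 1, and energy spread across bins in case 2:

```python
import numpy as np

N = 8
n = np.arange(N)

# Case 1: w0 = 2 pi k0 / N (k0 = 1): the cosine lies exactly on DFT bins
x_on = np.cos(2 * np.pi * 1 * n / N)
X_on = np.fft.fft(x_on)
assert np.allclose(np.abs(X_on), [0, N/2, 0, 0, 0, 0, 0, N/2], atol=1e-9)

# Case 2: w0 = 2 pi k0 / N + pi / N: the peak falls midway between bins
x_off = np.cos((2 * np.pi * 1 / N + np.pi / N) * n)
X_off = np.fft.fft(x_off)
assert np.abs(X_off).max() < N / 2     # no single bin captures the peak
assert np.isclose(np.sum(np.abs(X_off) ** 2) / N, np.sum(x_off ** 2))
```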
6.5 Fast Fourier Transform (FFT) Algorithm
The FFT is an algorithm for efficient computation of the DFT. It is not a new transform. Recall
$$X^{(N)}(k) = \sum_{n=0}^{N-1}x(n)e^{-j2\pi kn/N},\qquad k = 0,1,\dots,N-1$$
The superscript (N) is used to show the length of the DFT. For each value of k, computation of X(k) requires N complex multiplications and N − 1 additions.
Define a complex operation (CO) as 1 complex multiplication and 1 complex addition. Computation of a length-N DFT then requires approximately $N^2$ COs.
To derive an efficient algorithm for computation of the DFT, we employ a divide-and-conquer strategy.
Assume N is even.
$$X^{(N)}(k) = \sum_{n=0}^{N-1}x(n)e^{-j2\pi kn/N} = \sum_{n\ \mathrm{even}}x(n)e^{-j2\pi kn/N} + \sum_{n\ \mathrm{odd}}x(n)e^{-j2\pi kn/N}$$
$$= \sum_{m=0}^{N/2-1}x(2m)e^{-j2\pi k(2m)/N} + \sum_{m=0}^{N/2-1}x(2m+1)e^{-j2\pi k(2m+1)/N}$$
$$X^{(N)}(k) = \sum_{m=0}^{N/2-1}x(2m)e^{-j2\pi km/(N/2)} + e^{-j2\pi k/N}\sum_{m=0}^{N/2-1}x(2m+1)e^{-j2\pi km/(N/2)}$$
Let $x_0(m) = x(2m)$ and $x_1(m) = x(2m+1)$, $m = 0,\dots,N/2-1$. We now have
$$X^{(N)}(k) = X_0^{(N/2)}(k) + e^{-j2\pi k/N}X_1^{(N/2)}(k),\qquad k = 0,\dots,N-1$$
Note that $X_0^{(N/2)}(k)$ and $X_1^{(N/2)}(k)$ are both periodic with period N/2, while $e^{-j2\pi k/N}$ is periodic with period N.
Computation
• Direct: $X^{(N)}(k)$, $k = 0,\dots,N-1$: $N^2$ COs
• Decimation by a factor of 2:
$X_0^{(N/2)}(k)$, $k = 0,\dots,N/2-1$: $N^2/4$ COs
$X_1^{(N/2)}(k)$, $k = 0,\dots,N/2-1$: $N^2/4$ COs
$X^{(N)}(k) = X_0^{(N/2)}(k) + e^{-j2\pi k/N}X_1^{(N/2)}(k)$, $k = 0,\dots,N-1$: N COs
Total: $N^2/2 + N$ COs. For large N, we have nearly halved the computation.
Consider a signal flow diagram of what we have done so far:
If N/2 is even, we can repeat the idea with each N/2-point DFT:
If $N = 2^M$, we repeat the process M times, resulting in M stages.
The first stage consists of 2-point DFTs:
$$X^{(2)}(k) = \sum_{n=0}^{1}x(n)e^{-j2\pi kn/2}$$
$$X^{(2)}(0) = x(0)+x(1),\qquad X^{(2)}(1) = x(0)-x(1)$$
Flow diagram of the 2-point DFT. Full example for N = 8 (M = 3).
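The decimation step derived above translates directly into a recursive implementation; a minimal sketch (assuming the length is a power of 2):

```python
import numpy as np

def fft_radix2(x):
    """Recursive decimation-in-time radix-2 FFT (len(x) a power of 2)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    X0 = fft_radix2(x[0::2])               # N/2-point DFT of even samples
    X1 = fft_radix2(x[1::2])               # N/2-point DFT of odd samples
    k = np.arange(N)
    twiddle = np.exp(-2j * np.pi * k / N)
    # X0 and X1 are periodic with period N/2; tile them to length N
    return np.tile(X0, 2) + twiddle * np.tile(X1, 2)

rng = np.random.default_rng(2)
x = rng.standard_normal(32)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```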
Ordering of Input Data
Computation ($N = 2^M$):
• $M = \log_2 N$ stages
• N COs per stage
• Total: $N\log_2 N$ COs
Comments
• The algorithm we derived is the decimation-in-time radix-2 FFT
1. input in bit-reversed order
2. in-place computation
3. output in normal order
• The dual of the above algorithm is the decimation-in-frequency radix-2 FFT
1. input in normal order
2. in-place computation
3. output in bit-reversed order
• The same approach may be used to derive either decimation-in-time or decimation-in-frequency mixed-radix FFT algorithms for any N which is a composite number
• All FFT algorithms are based on composite N and require O(N log N) computation
• The DFT of a length-N real signal can be expressed in terms of a length-N/2 DFT
6.6 Periodic Convolution
We developed the DFT as a computable spectral representation for DT signals. However, the existence of a very efficient algorithm for calculating it suggests another application. Consider the filtering of a length-N signal x(n) with an FIR filter containing M coefficients, where M ≪ N:
$$y(n) = \sum_{m=0}^{M-1}h(m)x(n-m)$$
Computation of each output point requires M multiplications and M − 1 additions.
Based on the notion that convolution in the time domain corresponds to multiplication in the frequency domain, we perform the following computations:
• Compute $X^{(N)}(k)$: $N\log_2 N$ COs
• Extend h(n) with zeros to length N and compute $H^{(N)}(k)$: $N\log_2 N$ COs
• Multiply the DFTs: $Y^{(N)}(k) = H^{(N)}(k)X^{(N)}(k)$: N complex multiplications
• Compute the inverse DFT $\bar{y}(n) = \mathrm{DFT}^{-1}\left\{Y^{(N)}(k)\right\}$: $N\log_2 N$ COs
Total: $3N\log_2 N$ COs + N complex multiplications. The computation per output point is $3\log_2 N$ COs + 1 complex multiplication.
How Periodic Convolution Arises
• Consider the inverse DFT of the product Y[k] = H[k]X[k]:
$$y(n) = \frac{1}{N}\sum_{k=0}^{N-1}H[k]X[k]e^{j2\pi kn/N}$$
• Substitute for X[k] in terms of x[n]:
$$y(n) = \frac{1}{N}\sum_{k=0}^{N-1}H[k]\left[\sum_{m=0}^{N-1}x[m]e^{-j2\pi km/N}\right]e^{j2\pi kn/N}$$
• Interchange the order of summations:
$$y(n) = \sum_{m=0}^{N-1}x[m]\left[\frac{1}{N}\sum_{k=0}^{N-1}H[k]e^{j2\pi k(n-m)/N}\right] = \sum_{m=0}^{N-1}x[m]h[n-m]$$
• Note the periodicity of the sequence h[n]:
$$h[n-m] = h[(n-m)\bmod N]$$
Periodic vs. Aperiodic Convolution
Aperiodic convolution:
$$y[n] = h[n]*x[n] = \sum_{m=-\infty}^{\infty}h[n-m]x[m]$$
Periodic convolution (N = 9):
$$y[n] = h[n]\otimes x[n] = \sum_{m=0}^{N-1}h[(n-m)\bmod N]\,x[m]$$
Comparison
Aperiodic convolution
Periodic convolution (N = 9)
Zero Padding to Match Aperiodic Convolution
Suppose x[n] has length N and h[n] has length M. Periodic convolution with length at least M + N − 1 will match the aperiodic result.
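A sketch of this zero-padding recipe (using numpy's FFT; the signal lengths are illustrative):

```python
import numpy as np

def fft_convolve(x, h):
    """Aperiodic (linear) convolution via DFTs of length >= len(x)+len(h)-1."""
    L = len(x) + len(h) - 1
    X = np.fft.fft(x, n=L)        # n=L zero-pads before transforming
    H = np.fft.fft(h, n=L)
    return np.fft.ifft(X * H).real

rng = np.random.default_rng(3)
x = rng.standard_normal(50)
h = rng.standard_normal(8)
assert np.allclose(fft_convolve(x, h), np.convolve(x, h))

# Without padding, length-50 DFTs give the periodic convolution, whose
# first len(h)-1 samples wrap around and differ from the aperiodic result
y_circ = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, n=len(x))).real
assert not np.allclose(y_circ[:7], np.convolve(x, h)[:7])
```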
Chapter 7
Digital Filter Design
7.1 Chapter Outline
In this chapter, we will discuss:
1. Overview of ﬁlter design problem
2. Finite impulse response ﬁlter design
3. Inﬁnite impulse response ﬁlter design
7.2 Overview
• The filter design problem consists of three tasks:
1. Specification: What frequency response or other characteristics of the filter are desired?
2. Approximation: What are the coefficients or, equivalently, poles and zeros of the filter that will approximate the desired characteristics?
3. Realization: How will the filter be implemented?
• Consider a simple example to illustrate these tasks.
Filter Design Example
• An analog signal contains frequency components ranging from DC (0 Hz) to 5 kHz
• Design a digital system to remove all but the frequencies in the range 2–4 kHz
• Assume the signal will be sampled at a 10 kHz rate
Speciﬁcation
• Ideal analog ﬁlter
• Ideal digital ﬁlter
Filter Tolerance Specification
$\omega_{pl}, \omega_{pu}$: lower and upper passband edges
$\omega_{sl}, \omega_{su}$: lower and upper stopband edges
$\delta_l, \delta_u$: lower and upper stopband ripple
passband ripple
Approximation
• Design by positioning poles and zeros (PZ plot design)
• Transfer function for the PZ plot filter:
$$H(z) = K\,\frac{\left(z-e^{j\pi/10}\right)\left(z-e^{j3\pi/10}\right)\left(z-e^{j9\pi/10}\right)}{\left(z-0.8e^{j5\pi/10}\right)\left(z-0.8e^{j7\pi/10}\right)}\times\frac{\left(z-e^{-j\pi/10}\right)\left(z-e^{-j3\pi/10}\right)\left(z-e^{-j9\pi/10}\right)}{\left(z-0.8e^{-j5\pi/10}\right)\left(z-0.8e^{-j7\pi/10}\right)}$$
Choose the constant K to yield unity magnitude response at the center of the passband.
• Frequency response of PZ plot ﬁlter
• Comments on PZ plot design
1. Passband asymmetry is due to extra zeros in lower stopband
2. Phase is highly nonlinear
3. Zeros on unit circle drive magnitude response to zero and
result in phase discontinuities
4. Polezero plot yields intuition about ﬁlter behavior, but does
not suggest how to meet design speciﬁcations:
Realization
• Cascade form of the transfer function:
$$H(z) = K\,\frac{\left(z-e^{j\pi/10}\right)\left(z-e^{j3\pi/10}\right)\left(z-e^{j9\pi/10}\right)}{\left(z-0.8e^{j5\pi/10}\right)\left(z-0.8e^{j7\pi/10}\right)}\times\frac{\left(z-e^{-j\pi/10}\right)\left(z-e^{-j3\pi/10}\right)\left(z-e^{-j9\pi/10}\right)}{\left(z-0.8e^{-j5\pi/10}\right)\left(z-0.8e^{-j7\pi/10}\right)}$$
Cascade Form Realization
• Cascade form in second-order sections with real-valued coefficients:
$$H(z) = K\left[z^2-2\cos(\pi/10)z+1\right]\times\left[\frac{z^2-2\cos(3\pi/10)z+1}{z^2-1.6\cos(5\pi/10)z+0.64}\right]\times\left[\frac{z^2-2\cos(9\pi/10)z+1}{z^2-1.6\cos(7\pi/10)z+0.64}\right]$$
• Convert to negative powers of z:
$$H(z) = K\left[1-2\cos(\pi/10)z^{-1}+z^{-2}\right]\times\left[\frac{1-2\cos(3\pi/10)z^{-1}+z^{-2}}{1-1.6\cos(5\pi/10)z^{-1}+0.64z^{-2}}\right]\left[\frac{1-2\cos(9\pi/10)z^{-1}+z^{-2}}{1-1.6\cos(7\pi/10)z^{-1}+0.64z^{-2}}\right]$$
(ignoring an overall time advance of 2 sample units)
• Overall System
• Overall System
• Second-order stages:
$$H_i(z) = \frac{1+a_{1i}z^{-1}+a_{2i}z^{-2}}{1-b_{1i}z^{-1}-b_{2i}z^{-2}},\qquad i = 1,2,3$$
$$y_i[n] = x_i[n]+a_{1i}x_i[n-1]+a_{2i}x_i[n-2]+b_{1i}y_i[n-1]+b_{2i}y_i[n-2]$$
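A direct sketch of a cascade of such second-order sections, using the sign convention of the difference equation above (the coefficient values are hypothetical, chosen only to keep the poles inside the unit circle):

```python
import numpy as np

def biquad(x, a1, a2, b1, b2):
    """One second-order section with the text's sign convention:
    y[n] = x[n] + a1 x[n-1] + a2 x[n-2] + b1 y[n-1] + b2 y[n-2]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n]
        if n >= 1:
            y[n] += a1 * x[n - 1] + b1 * y[n - 1]
        if n >= 2:
            y[n] += a2 * x[n - 2] + b2 * y[n - 2]
    return y

def cascade(x, sections):
    """Feed the output of each section into the next."""
    for sec in sections:
        x = biquad(x, *sec)
    return x

# Hypothetical coefficients (poles well inside the unit circle)
sections = [(0.5, 0.25, 0.3, -0.1), (-0.2, 0.1, 0.2, -0.05)]
impulse = np.zeros(256)
impulse[0] = 1.0
h = cascade(impulse, sections)

# Cross-check: DTFT of the impulse response vs. the product of the
# section transfer functions H_i(z) evaluated at z = e^{jw}
w = 0.3
z1 = np.exp(-1j * w)
H_from_h = np.sum(h * np.exp(-1j * w * np.arange(len(h))))
H_from_tf = np.prod([(1 + a1 * z1 + a2 * z1**2) / (1 - b1 * z1 - b2 * z1**2)
                     for a1, a2, b1, b2 in sections])
assert np.isclose(H_from_h, H_from_tf, atol=1e-6)
```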
Parallel Form Realization
• Expand the transfer function in partial fractions:
$$H(z) = d_0+d_1z^{-1}+d_2z^{-2} + \frac{a_{01}+a_{11}z^{-1}}{1-b_{11}z^{-1}-b_{21}z^{-2}} + \frac{a_{02}+a_{12}z^{-1}}{1-b_{12}z^{-1}-b_{22}z^{-2}}$$
Direct Form Realization
Multiply out all factors in the numerator and denominator of the transfer function:
$$H(z) = \frac{a_0+a_1z^{-1}+\dots+a_6z^{-6}}{1-b_1z^{-1}-\dots-b_4z^{-4}}$$
7.3 Finite Impulse Response Filter Design
Ideal Impulse Response
• Consider same ideal frequency response as before
• Inverse DTFT:
$$h[n] = \frac{1}{2\pi}\int_{-\pi}^{\pi}H(\omega)e^{j\omega n}\,d\omega$$
$$h[n] = \frac{1}{2\pi}\left[\int_{-4\pi/5}^{-2\pi/5}e^{j\omega n}\,d\omega + \int_{2\pi/5}^{4\pi/5}e^{j\omega n}\,d\omega\right]$$
$$h[n] = \frac{1}{j2\pi n}\left[\left(e^{-j2\pi n/5}-e^{-j4\pi n/5}\right)+\left(e^{j4\pi n/5}-e^{j2\pi n/5}\right)\right]$$
$$h[n] = \frac{1}{j2\pi n}\left[e^{-j3\pi n/5}\left(e^{j\pi n/5}-e^{-j\pi n/5}\right)+e^{j3\pi n/5}\left(e^{j\pi n/5}-e^{-j\pi n/5}\right)\right]$$
$$h[n] = e^{-j3\pi n/5}\,\tfrac{1}{5}\,\mathrm{sinc}\!\left(\tfrac{n}{5}\right)+e^{j3\pi n/5}\,\tfrac{1}{5}\,\mathrm{sinc}\!\left(\tfrac{n}{5}\right)$$
$$h[n] = 0.4\cos(3\pi n/5)\,\mathrm{sinc}(n/5)$$
Approximation of the Desired Filter Impulse Response by Truncation
• FIR filter equation:
$$y[n] = \sum_{m=0}^{M-1}a_m x[n-m]$$
• Filter coefficients (assuming M is odd):
$$a_m = h_{\mathrm{ideal}}[m-(M-1)/2],\qquad m = 0,\dots,M-1$$
Truncation of Impulse Response
Frequency Response of Truncated Filter
Analysis of Truncation
• Ideal, infinite impulse response: $h_{\mathrm{ideal}}[n]$
• Actual, finite impulse response: $h_{\mathrm{actual}}[n]$
• Window sequence w[n]:
$$w[n] = \begin{cases}1, & |n| \le (M-1)/2\\ 0, & \text{else}\end{cases}$$
• Relation between ideal and actual impulse responses:
$$h_{\mathrm{actual}}[n] = h_{\mathrm{ideal}}[n]\,w[n]$$
$$H_{\mathrm{actual}}(\omega) = \frac{1}{2\pi}\int_{-\pi}^{\pi}H_{\mathrm{ideal}}(\lambda)\,W(\omega-\lambda)\,d\lambda$$
Eﬀect of Filter Length
DTFT of Rectangular Window
$$W(\omega) = \sum_{n=-(M-1)/2}^{(M-1)/2}e^{-j\omega n}$$
Let $m = n+(M-1)/2$:
$$W(\omega) = \sum_{m=0}^{M-1}e^{-j\omega[m-(M-1)/2]} = e^{j\omega(M-1)/2}\,\frac{1-e^{-j\omega M}}{1-e^{-j\omega}} = \frac{\sin(\omega M/2)}{\sin(\omega/2)}$$
Graphical View of Convolution
$$H_{\mathrm{actual}}(\omega) = \frac{1}{2\pi}\int_{-\pi}^{\pi}H_{\mathrm{ideal}}(\lambda)\,W(\omega-\lambda)\,d\lambda$$
Relation between Window Attributes and Filter Frequency Response

Parameter   W(ω)                     H_actual(ω)
Δω          Mainlobe width           Transition bandwidth
δ           Area of first sidelobe   Passband and stopband ripple
Design of FIR Filters by Windowing
• Relation between ideal and actual impulse responses:
$$h_{\mathrm{actual}}[n] = h_{\mathrm{ideal}}[n]\,w[n]$$
• Choose a window sequence w[n] whose DTFT has
1. minimum mainlobe width
2. minimum sidelobe area
• The Kaiser window is the best choice:
1. based on optimal prolate spheroidal wavefunctions
2. contains a parameter that permits a tradeoff between mainlobe width and sidelobe area
Kaiser Window
$$w[n] = \begin{cases}\dfrac{I_0\!\left[\beta\left(1-[(n-\alpha)/\alpha]^2\right)^{1/2}\right]}{I_0(\beta)}, & 0\le n\le M-1\\[6pt] 0, & \text{else}\end{cases}$$
where $\alpha = (M-1)/2$ and $I_0(\cdot)$ is the zeroth-order modified Bessel function of the first kind.
Eﬀect of Parameter β
Filter Design Procedure for Kaiser Window
1. Choose the stopband and passband ripple and the transition bandwidth
2. Determine the parameter β:
$$A = -20\log_{10}\delta$$
$$\beta = \begin{cases}0.1102(A-8.7), & A > 50\\ 0.5842(A-21)^{0.4}+0.07886(A-21), & 21\le A\le 50\\ 0.0, & A < 21\end{cases}$$
3. Determine the filter length M:
$$M = \frac{A-8}{2.285\,\Delta\omega}+1$$
4. Example: $\Delta\omega = 0.1\pi$ radians/sample $= 0.05$ cycles/sample, $\delta = 0.01$, giving $\beta = 3.4$ and $M = 45.6$
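A sketch of this procedure in NumPy, using `numpy.kaiser` and the ideal bandpass impulse response derived in Section 7.3 (h[n] = 0.4 cos(3πn/5) sinc(n/5)); the frequency checks at the end are loose sanity bounds, not the formal tolerance test:

```python
import numpy as np

# Steps 1-2: ripple, transition bandwidth, and the parameter beta
delta = 0.01
dw = 0.1 * np.pi                      # radians/sample
A = -20 * np.log10(delta)             # 40 dB
if A > 50:
    beta = 0.1102 * (A - 8.7)
elif A >= 21:
    beta = 0.5842 * (A - 21) ** 0.4 + 0.07886 * (A - 21)
else:
    beta = 0.0

# Step 3: filter length, rounded up to an odd integer so (M-1)/2 is whole
M = int(np.ceil((A - 8) / (2.285 * dw))) + 1
if M % 2 == 0:
    M += 1                            # gives M = 47, as in the response plot

# Ideal bandpass impulse response, windowed by the Kaiser window
n = np.arange(M) - (M - 1) // 2
h_ideal = 0.4 * np.cos(3 * np.pi * n / 5) * np.sinc(n / 5)
h = h_ideal * np.kaiser(M, beta)

# Loose sanity checks on the response (frequencies in cycles/sample)
def dtft(h, w):
    return np.sum(h * np.exp(-1j * w * np.arange(len(h))))

assert abs(dtft(h, 2 * np.pi * 0.30)) > 0.9    # mid-passband, near unity
assert abs(dtft(h, 2 * np.pi * 0.05)) < 0.05   # deep in the lower stopband
```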
Filter Designed with Kaiser Window
Kaiser Filter Frequency Response
$M = 47$, $\beta = 3.4$, $\delta = 0.01 = -40$ dB, $\Delta\omega = 0.05$, $\omega_{sl} = 0.175$, $\omega_{pl} = 0.225$, $\omega_{pu} = 0.375$, $\omega_{su} = 0.425$ (cycles/sample)
Optimal FIR Filter Design
• Although the Kaiser window has certain optimal properties, filters designed using this window are not optimal
• Consider the design of filters to minimize the integral mean-squared frequency error
• Problem: choose $h_{\mathrm{actual}}[n]$, $n = -(M-1)/2,\dots,(M-1)/2$, to minimize
$$E = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left|H_{\mathrm{actual}}(\omega)-H_{\mathrm{ideal}}(\omega)\right|^2 d\omega$$
Minimum Mean-Squared Error Design
• Using Parseval's relation, we obtain a direct solution to this problem:
$$E = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left|H_{\mathrm{actual}}(\omega)-H_{\mathrm{ideal}}(\omega)\right|^2 d\omega = \sum_{n=-\infty}^{\infty}\left|h_{\mathrm{actual}}[n]-h_{\mathrm{ideal}}[n]\right|^2$$
$$= \sum_{n=-(M-1)/2}^{(M-1)/2}\left|h_{\mathrm{actual}}[n]-h_{\mathrm{ideal}}[n]\right|^2 + \sum_{n=-\infty}^{-(M+1)/2}\left|h_{\mathrm{ideal}}[n]\right|^2 + \sum_{n=(M+1)/2}^{\infty}\left|h_{\mathrm{ideal}}[n]\right|^2$$
• Solution: set $h_{\mathrm{actual}}[n] = h_{\mathrm{ideal}}[n]$, $n = -(M-1)/2,\dots,(M-1)/2$. The minimum error is given by
$$E = \sum_{n=-\infty}^{-(M+1)/2}\left|h_{\mathrm{ideal}}[n]\right|^2 + \sum_{n=(M+1)/2}^{\infty}\left|h_{\mathrm{ideal}}[n]\right|^2$$
• This is the truncation of the ideal impulse response
• We conclude that the original criterion of minimizing mean-squared error was not a good choice
• What was the problem with truncating the ideal impulse response? (See figure on following page)
Minimax (Equiripple) Filter Design
• No matter how large we pick M for the truncated impulse response, we cannot reduce the peak ripple
• Also, the ripple is concentrated near the band edges
• Rather than focusing on the integral of the error, let's consider the maximum error
• Parks and McClellan developed a method for the design of FIR filters based on minimizing the maximum of the weighted frequency-domain error
Effect of Filter Length: Parks-McClellan Equiripple Filters
7.4 Inﬁnite Impulse Response Filter Design
IIR vs. FIR Filters
• FIR filter equation:
$$y[n] = a_0x[n]+a_1x[n-1]+\dots+a_{M-1}x[n-(M-1)]$$
• IIR filter equation:
$$y[n] = a_0x[n]+a_1x[n-1]+\dots+a_{M-1}x[n-(M-1)] - b_1y[n-1]-b_2y[n-2]-\dots-b_Ny[n-N]$$
• IIR filters can generally achieve a given desired response with less computation than FIR filters
• It is easier to approximate arbitrary frequency response characteristics with FIR filters, including exactly linear phase
Infinite Impulse Response (IIR) Filter Design: Overview
• IIR digital filter designs are based on established methods for designing analog filters
• The approach is generally limited to frequency-selective filters with ideal passband/stopband characteristics
• The basic filter type is lowpass
IIR Filter Design Steps
• Choose prototype analog ﬁlter family
1. Butterworth
2. Chebychev Type I or II
3. Elliptic
• Choose analog-to-digital transformation method
1. Impulse Invariance
2. Bilinear Transformation
• Transform digital ﬁlter speciﬁcations to equivalent analog ﬁlter
speciﬁcations
• Design analog ﬁlter
• Transform analog ﬁlter to digital ﬁlter
• Perform frequency transformation to achieve highpass or bandpass
ﬁlter, if desired
Prototype Filter Types
Transformation via Impulse Invariance
• Sample the impulse response of the analog filter:
$$h_d[n] = T\,h_a(nT)$$
$$H_d(\omega) = \left.\sum_k H_a(f-k/T)\right|_{f = \frac{\omega}{2\pi T}}$$
Note that aliasing may occur.
Impulse Invariance
• Implementation of the digital filter:
1. Partial fraction expansion of the analog transfer function (assuming all poles have multiplicity 1):
$$H_a(s) = \sum_{k=1}^{N}\frac{A_k}{s-s_k}$$
2. Inverse Laplace transform:
$$h_a(t) = \sum_{k=1}^{N}A_k e^{s_k t}u(t)$$
3. Sample the impulse response:
$$h_d[n] = T\,h_a(nT) = T\sum_{k=1}^{N}A_k e^{s_k nT}u(nT) = T\sum_{k=1}^{N}A_k p_k^n u[n],\qquad p_k = e^{s_k T}$$
4. Take the Z transform:
$$H_d(z) = \sum_{k=1}^{N}\frac{TA_k}{1-p_k z^{-1}}$$
5. Combine terms:
$$H_d(z) = \frac{a_0+a_1z^{-1}+\dots+a_{M-1}z^{-(M-1)}}{1+b_1z^{-1}+b_2z^{-2}+\dots+b_Nz^{-N}}$$
6. Corresponding filter equation:
$$y[n] = a_0x[n]+a_1x[n-1]+\dots+a_{M-1}x[n-(M-1)] - b_1y[n-1]-b_2y[n-2]-\dots-b_Ny[n-N]$$
Impulse Invariance Example
• Design a second-order ideal lowpass digital filter with cutoff at $\omega_d = \pi/5$ radians/sample (assume T = 1).
1. Analog cutoff frequency:
$$f_c = \frac{\omega_d}{2\pi T} = 0.1\ \mathrm{Hz},\qquad \omega_c = 2\pi f_c = \pi/5\ \mathrm{rad/sec}$$
• Use a Butterworth analog prototype:
$$H_a(s) = \frac{0.3948}{[s-(-0.4443+j0.4443)][s-(-0.4443-j0.4443)]}$$
• Apply partial fraction expansion:
$$H_a(s) = \frac{-j0.4443}{s-(-0.4443+j0.4443)} + \frac{j0.4443}{s-(-0.4443-j0.4443)}$$
• Compute the inverse Laplace transform:
$$h_a(t) = \left[-j0.4443\,e^{(-0.4443+j0.4443)t} + j0.4443\,e^{(-0.4443-j0.4443)t}\right]u(t)$$
• Sample the impulse response:
$$h_d[n] = \left[-j0.4443\,e^{(-0.4443+j0.4443)n} + j0.4443\,e^{(-0.4443-j0.4443)n}\right]u[n] = \left[-j0.4443\,(0.6413e^{j0.4443})^n + j0.4443\,(0.6413e^{-j0.4443})^n\right]u[n]$$
• Compute the Z transform:
$$H_d(z) = \frac{-j0.4443}{1-0.6413e^{j0.4443}z^{-1}} + \frac{j0.4443}{1-0.6413e^{-j0.4443}z^{-1}} = \frac{0.2450z^{-1}}{1-1.1581z^{-1}+0.4113z^{-2}}$$
• Combine terms:
$$H_d(z) = \frac{0.2450z^{-1}}{\left(1-0.6413e^{j0.4443}z^{-1}\right)\left(1-0.6413e^{-j0.4443}z^{-1}\right)}$$
• Find the difference equation:
$$y[n] = 0.2450x[n-1] + 1.1581y[n-1] - 0.4113y[n-2]$$
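The resulting difference equation can be checked against the analog prototype; a sketch (T = 1) comparing the two frequency responses at low frequencies, where aliasing is small:

```python
import numpy as np

# Coefficients from the example (T = 1)
b = [0.0, 0.2450]                 # numerator: 0.2450 z^-1
a = [1.0, -1.1581, 0.4113]        # denominator: 1 - 1.1581 z^-1 + 0.4113 z^-2

def Hd(w):
    """Frequency response of the digital filter at w rad/sample."""
    z1 = np.exp(-1j * w)
    return (b[0] + b[1] * z1) / (a[0] + a[1] * z1 + a[2] * z1**2)

def Ha(s):
    """Analog Butterworth prototype from the example."""
    p = -0.4443 + 0.4443j
    return 0.3948 / ((s - p) * (s - np.conj(p)))

# At low frequencies the digital and analog responses agree closely; the
# small residual (including DC gain slightly below 1) is aliasing error.
for w in [0.0, 0.1, 0.2]:
    assert abs(Hd(w) - Ha(1j * w)) < 0.05
```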
Impulse Invariance Method: Summary
• Preserves the impulse response and the shape of the frequency response, if there is no aliasing
• Desired transition bandwidths map directly between the digital and analog frequency domains
• Passband and stopband ripple specifications are identical for both the digital and analog filter, assuming that there is no aliasing
• The final digital filter design is independent of the sampling interval parameter T
• Poles in the analog filter map directly to poles in the digital filter via the transformation $p_k = e^{s_k T}$
• There is no such relation between the zeros in the two filters
• The gain at DC in the digital filter may not equal unity, since the sampled impulse response may only approximately sum to 1
Bilinear Transformation Method
• Mapping between s and z:
$$s = \frac{2}{T}\left(\frac{1-z^{-1}}{1+z^{-1}}\right),\qquad H_d(z) = H_a\!\left(\frac{2}{T}\left(\frac{1-z^{-1}}{1+z^{-1}}\right)\right),\qquad z = \frac{1+(T/2)s}{1-(T/2)s}$$
• Mapping between s and z planes
• Mapping between analog and digital frequencies
Bilinear Transformation: Example No. 1
• Design a lowpass filter:
1. cutoff frequency $\omega_{dc} = \pi/5$ rad/sample
2. transition bandwidth and ripple: $\Delta\omega_d = 0.2\pi$ radians/sample $= 0.1$ cycles/sample, $\delta = 0.1$
3. use a Butterworth analog prototype
4. set T = 1
$$|H_d(\omega_{d1})| = 1-\delta,\qquad |H_d(\omega_{d2})| = \delta$$
• Map digital to analog frequencies:
$$\omega_a = \frac{2}{T}\tan(\omega_d/2)$$
$$\omega_{d1} = \pi/5 - 0.1\pi \;\Rightarrow\; \omega_{a1} = 0.3168\ \mathrm{rad/sec}$$
$$\omega_{d2} = \pi/5 + 0.1\pi \;\Rightarrow\; \omega_{a2} = 1.0191\ \mathrm{rad/sec}$$
• Solve for the filter order and cutoff frequency:
$$|H_a(\omega_a)|^2 = \frac{1}{1+(\omega_a/\omega_{ac})^{2N}},\qquad |H_a(\omega_{a1})| = 1-\delta,\qquad |H_a(\omega_{a2})| = \delta$$
$$N = \left\lceil\frac{1}{2}\,\frac{\log\!\left[\left(|H_a(\omega_{a2})|^{-2}-1\right)\big/\left(|H_a(\omega_{a1})|^{-2}-1\right)\right]}{\log(\omega_{a2}/\omega_{a1})}\right\rceil$$
$$\omega_{ac} = \frac{\omega_{a2}}{\left(|H_a(\omega_{a2})|^{-2}-1\right)^{1/(2N)}}$$
• Result: $N = 3$, $\omega_{ac} = 0.4738$
• Determine the transfer function of the analog filter:
$$H_a(s)H_a(-s) = \frac{1}{1+(s/j\omega_{ac})^{2N}}$$
• The poles are given by
$$s_k = \omega_{ac}\,e^{j\left(\frac{\pi}{2}+\frac{\pi}{2N}+\frac{2\pi k}{2N}\right)},\qquad k = 0,1,\dots,2N-1$$
• Take the N poles with negative real parts for $H_a(s)$:
$$s_0 = -0.2369+j0.4103 = 0.4738e^{j2\pi/3}$$
$$s_1 = -0.4738+j0.0000 = 0.4738e^{j\pi}$$
$$s_2 = -0.2369-j0.4103 = 0.4738e^{-j2\pi/3}$$
• Transfer function of the analog filter:
$$H_a(s) = \frac{\omega_{ac}^3}{(s-s_0)(s-s_1)(s-s_2)}$$
• Transform to the discrete-time filter:
$$H_d(z) = H_a\!\left(\frac{2}{T}\left(\frac{1-z^{-1}}{1+z^{-1}}\right)\right)$$
$$H_d(z) = \frac{\omega_{ac}^3\,(1+z^{-1})^3}{8\left[(1-z^{-1})-\frac{s_0}{2}(1+z^{-1})\right]\left[(1-z^{-1})-\frac{s_1}{2}(1+z^{-1})\right]\left[(1-z^{-1})-\frac{s_2}{2}(1+z^{-1})\right]}$$
$$= \frac{0.0083+0.0249z^{-1}+0.0249z^{-2}+0.0083z^{-3}}{1.0000-2.0769z^{-1}+1.5343z^{-2}-0.3909z^{-3}}$$
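These coefficients can be reproduced by mapping the analog poles through z = (1 + s/2)/(1 − s/2) and placing the three zeros at z = −1; a NumPy sketch (T = 1, DC gain normalized to unity):

```python
import numpy as np

wac = 0.4738
s_poles = wac * np.exp(1j * np.array([2*np.pi/3, np.pi, -2*np.pi/3]))

# Bilinear map of each analog pole (T = 1): z = (1 + s/2)/(1 - s/2)
z_poles = (1 + s_poles / 2) / (1 - s_poles / 2)

a = np.real(np.poly(z_poles))             # 1 + a1 z^-1 + a2 z^-2 + a3 z^-3
b = np.real(np.poly([-1.0, -1.0, -1.0]))  # zeros at z = -1: (1 + z^-1)^3
b = b * (np.sum(a) / np.sum(b))           # normalize DC gain to unity

assert np.allclose(b, [0.0083, 0.0249, 0.0249, 0.0083], atol=5e-4)
assert np.allclose(a, [1.0000, -2.0769, 1.5343, -0.3909], atol=5e-4)
```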
Bilinear Transformation: Example No. 2
• Design a lowpass filter:
1. cutoff frequency $\omega_{dc} = \pi/5$ rad/sample
2. transition bandwidth and ripple: $\Delta\omega_d = 0.1\pi$ radians/sample $= 0.05$ cycles/sample, $\delta = 0.01$
3. use a Butterworth analog prototype
4. set T = 1
• Result: N = 13
Frequency Transformations of Lowpass IIR Filters
• Given a prototype lowpass digital filter design with cutoff frequency $\theta_c$, we can transform directly to:
1. a lowpass digital filter with a different cutoff frequency
2. a highpass digital filter
3. a bandpass digital filter
4. a bandstop digital filter
• General form of transformation: $H(z) = \left.H_{lp}(Z)\right|_{Z^{-1} = G(z^{-1})}$
• Example: lowpass to lowpass
1. Transformation:
$$Z^{-1} = \frac{z^{-1}-\alpha}{1-\alpha z^{-1}}$$
2. Associated design formula:
$$\alpha = \frac{\sin\!\left(\frac{\theta_c-\omega_c}{2}\right)}{\sin\!\left(\frac{\theta_c+\omega_c}{2}\right)}$$
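A quick numerical check of the design formula (the cutoff values θc = 0.2π and ωc = 0.3π are hypothetical): the allpass substitution keeps z on the unit circle and maps ωc to θc.

```python
import numpy as np

theta_c, omega_c = 0.2 * np.pi, 0.3 * np.pi   # prototype and target cutoffs
alpha = np.sin((theta_c - omega_c) / 2) / np.sin((theta_c + omega_c) / 2)

# At z = e^{j wc}, the substitution Z^{-1} = (z^{-1} - alpha)/(1 - alpha z^{-1})
# should land on Z = e^{j thetac}:
z_inv = np.exp(-1j * omega_c)
Z_inv = (z_inv - alpha) / (1 - alpha * z_inv)
assert np.isclose(abs(Z_inv), 1.0)            # allpass: stays on unit circle
assert np.isclose(np.angle(Z_inv), -theta_c)  # wc in z maps to thetac in Z
```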
Chapter 8
Random Signals
8.1 Chapter Outline
In this chapter, we will discuss:
1. Single Random Variable
2. Two Random Variables
3. Sequences of Random Variables
4. Estimating Distributions
5. Filtering of Random Sequences
6. Estimating Correlation
8.2 Single Random Variable
Random Signals
Random signals are denoted by an upper case X and are completely characterized by the density function $f_X(x)$:
$$P(a\le X\le b) = \int_a^b f_X(x)\,dx \quad\Rightarrow\quad f_X(x)\,\Delta x \approx P(x\le X\le x+\Delta x)$$
The cumulative distribution function is $F_X(x) = P(X\le x)$:
$$F_X(x) = \int_{-\infty}^{x}f_X(u)\,du \qquad\therefore\qquad \frac{dF_X(x)}{dx} = f_X(x)$$
Properties of the density function:
1. $f_X(x) \ge 0$
2. $\int_{-\infty}^{\infty}f_X(x)\,dx = 1$
Example 1
$$f_X(x) = \lambda e^{-\lambda x}u(x)$$
$$\int_0^{\infty}f_X(x)\,dx = \int_0^{\infty}\lambda e^{-\lambda x}\,dx = \left.-e^{-\lambda x}\right|_0^{\infty} = 1$$
What is the probability that $X\le 1$?
$$P(X\le 1) = \int_0^1\lambda e^{-\lambda x}\,dx = \left.-e^{-\lambda x}\right|_0^1 = 1-e^{-\lambda}$$
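A Monte Carlo sketch of this result (λ = 2 is an arbitrary choice):

```python
import numpy as np

lam = 2.0                                  # arbitrary rate parameter
rng = np.random.default_rng(0)
samples = rng.exponential(scale=1 / lam, size=200_000)
p_hat = np.mean(samples <= 1.0)
assert abs(p_hat - (1 - np.exp(-lam))) < 0.01   # P(X <= 1) = 1 - e^{-lam}
```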
Expectation
$$E\{g(X)\} = \int g(x)f_X(x)\,dx \qquad\text{where } g \text{ is a function of } X$$
Special cases:
1. Mean value, $g(x) = x$:
$$\bar{X} = E\{X\} = \int xf_X(x)\,dx$$
2. Second moment, $g(x) = x^2$:
$$\overline{X^2} = E\{X^2\} = \int x^2f_X(x)\,dx$$
3. Variance, $g(X) = (X-\bar{X})^2$.
Expectation is a linear operator:
$$E\{a\,g(X)+b\,h(X)\} = aE\{g(X)\}+bE\{h(X)\}$$
$$\therefore\ \sigma_X^2 = E\left\{(X-\bar{X})^2\right\} = E\left\{X^2-2X\bar{X}+\bar{X}^2\right\} = E\{X^2\}-\bar{X}^2$$
Example 2
$$E\{X\} = \int_0^{\infty}\lambda xe^{-\lambda x}\,dx$$
Integrate by parts, $\int u\,dv = uv - \int v\,du$, with $u = x$, $du = dx$, $dv = \lambda e^{-\lambda x}dx$, $v = -e^{-\lambda x}$:
$$\int_0^{\infty}\lambda xe^{-\lambda x}\,dx = \left.-xe^{-\lambda x}\right|_0^{\infty} + \int_0^{\infty}e^{-\lambda x}\,dx = \left.-xe^{-\lambda x}\right|_0^{\infty} - \left.\frac{1}{\lambda}e^{-\lambda x}\right|_0^{\infty} = \frac{1}{\lambda}$$
Example 3
$$E\{X^2\} = \int_0^{\infty}\lambda x^2e^{-\lambda x}\,dx$$
Integrate by parts with $u = x^2$, $du = 2x\,dx$, $dv = \lambda e^{-\lambda x}dx$, $v = -e^{-\lambda x}$:
$$\int_0^{\infty}x^2\lambda e^{-\lambda x}\,dx = \left.-x^2e^{-\lambda x}\right|_0^{\infty} + 2\int_0^{\infty}xe^{-\lambda x}\,dx = \frac{2}{\lambda^2}$$
$$\sigma_X^2 = \frac{2}{\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{\lambda^2}$$
For $\lambda = 1$: $\bar{X} = 1$, $\sigma_X^2 = 1$, $\sigma_X = 1$.
X and Y have the same mean and same standard deviation
Transformations of a Random Variable
• $Y = aX+b$
• $\bar{Y} = a\bar{X}+b$
• $$\sigma_Y^2 = E\left\{(Y-\bar{Y})^2\right\} = E\left\{\left[aX+b-(a\bar{X}+b)\right]^2\right\} = a^2E\left\{(X-\bar{X})^2\right\} = a^2\sigma_X^2$$
• $$F_Y(y) = P\{Y\le y\} = P\{aX+b\le y\} \;\stackrel{a>0}{=}\; P\left\{X\le\frac{y-b}{a}\right\} = F_X\!\left(\frac{y-b}{a}\right)$$
• $$f_Y(y) = \frac{1}{a}f_X\!\left(\frac{y-b}{a}\right);\qquad\text{in general}\qquad f_Y(y) = \frac{1}{|a|}f_X\!\left(\frac{y-b}{a}\right)$$
8.3 Two Random Variables
1. Joint density function
$$P\{a\le X\le b \,\cap\, c\le Y\le d\} = \int_a^b\int_c^d f_{XY}(x,y)\,dy\,dx$$
2. Joint distribution function
$$F_{XY}(x,y) = \int_{-\infty}^{x}\int_{-\infty}^{y}f_{XY}(u,v)\,dv\,du$$
3. Marginal densities
$$f_X(x) = \int f_{XY}(x,y)\,dy,\qquad f_Y(y) = \int f_{XY}(x,y)\,dx$$
4. Expectation
$$E\{g(X,Y)\} = \iint g(x,y)f_{XY}(x,y)\,dx\,dy$$
5. Linearity
$$E\{a\,g(X,Y)+b\,h(X,Y)\} = aE\{g(X,Y)\}+bE\{h(X,Y)\}$$
6. Correlation: $g(x,y) = xy$, $\overline{XY} = E\{XY\}$
7. Covariance: $g(x,y) = (x-\bar{X})(y-\bar{Y})$
8. Correlation coefficient
$$\sigma_{XY}^2 = \frac{1}{\sigma_X\sigma_Y}\left[E\{XY\} - E\{X\bar{Y}\} - E\{\bar{X}Y\} + \bar{X}\bar{Y}\right] = \frac{1}{\sigma_X\sigma_Y}\left[\overline{XY}-\bar{X}\bar{Y}\right]$$
9. Independence
$$f_{XY}(x,y) = f_X(x)f_Y(y) \;\Rightarrow\; \sigma_{XY}^2 = 0 \text{ and } \overline{XY} = \bar{X}\bar{Y}$$
Example 4
$$f_{XY}(x,y) = \begin{cases}2, & 0\le y\le x\le 1\\ 0, & \text{else}\end{cases}$$
$$f_X(x) = \begin{cases}\int_0^x 2\,dy = 2x, & 0\le x\le 1\\ 0, & \text{else}\end{cases}\qquad
f_Y(y) = \begin{cases}\int_y^1 2\,dx = 2(1-y), & 0\le y\le 1\\ 0, & \text{else}\end{cases}$$
$$\overline{XY} = \int_{x=0}^{1}\int_{y=0}^{x}2xy\,dy\,dx = \int_{x=0}^{1}x\left[\int_{y=0}^{x}2y\,dy\right]dx = \int_{x=0}^{1}x^3\,dx = \left.\frac{x^4}{4}\right|_0^1 = \frac{1}{4}$$
$$\bar{X} = \int_0^1 x(2x)\,dx = \left.\frac{2}{3}x^3\right|_0^1 = \frac{2}{3},\qquad \bar{Y} = \frac{1}{3}$$
$$\overline{X^2} = \int_0^1 x^2(2x)\,dx = \left.\frac{2}{4}x^4\right|_0^1 = \frac{1}{2}$$
$$\therefore\ \sigma_X^2 = \frac{1}{2}-\frac{4}{9} = \frac{1}{18} = \sigma_Y^2$$
$$\sigma_{XY}^2 = \frac{\overline{XY}-\bar{X}\bar{Y}}{\sigma_X\sigma_Y} = \frac{1/4-2/9}{1/18} = \frac{1}{2}$$
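The joint density of Example 4 is easy to sample (X = √U has the marginal 2x, and Y given X is uniform on [0, X]), so the moments above can be checked by simulation:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400_000
X = np.sqrt(rng.uniform(size=n))     # inverse-CDF sampling: f_X(x) = 2x
Y = rng.uniform(size=n) * X          # Y | X uniform on [0, X] => f_XY = 2

assert abs(X.mean() - 2 / 3) < 0.005
assert abs(Y.mean() - 1 / 3) < 0.005
assert abs(X.var() - 1 / 18) < 0.005
rho = np.corrcoef(X, Y)[0, 1]        # correlation coefficient, should be 1/2
assert abs(rho - 0.5) < 0.01
```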
8.4 Sequences of Random Variables
Characteristics
Let $X_1, X_2, X_3, \dots, X_N$ denote a sequence of random variables. Their behavior is completely characterized by
$$P\left\{x_1^l\le X_1\le x_1^u,\ x_2^l\le X_2\le x_2^u,\ \dots,\ x_N^l\le X_N\le x_N^u\right\} = \int_{x_1^l}^{x_1^u}\int_{x_2^l}^{x_2^u}\cdots\int_{x_N^l}^{x_N^u}f_{X_1,X_2,\dots,X_N}(x_1,x_2,\dots,x_N)\,dx_1\,dx_2\cdots dx_N$$
It is apparent that this can get complicated very quickly. In practice, we consider only 2 cases:
1. $X_1, X_2, \dots, X_N$ are mutually independent:
$$f_{X_1,X_2,\dots,X_N}(x_1,x_2,\dots,x_N) = \prod_{i=1}^{N}f_{X_i}(x_i)$$
2. $X_1, X_2, \dots, X_N$ are jointly Gaussian.
Example 5
$X_1, X_2, \dots, X_n$ are i.i.d. with mean $\mu_X$ and variance $\sigma_X^2$; $N_1, N_2, \dots, N_n$ are i.i.d. with zero mean and variance $\sigma_N^2$, independent of the $X_i$; and $Y_i = aX_i + N_i$.
$$E\{Y_i\} = aE\{X_i\}+E\{N_i\} \;\Rightarrow\; \bar{Y} = a\bar{X}$$
$$\overline{Y^2} = E\left\{(aX_i+N_i)^2\right\} = a^2\overline{X^2}+\overline{N^2}$$
$$\sigma_Y^2 = \overline{Y^2}-\left(\bar{Y}\right)^2 = a^2\overline{X^2}+\sigma_N^2-a^2\left(\bar{X}\right)^2 = a^2\sigma_X^2+\sigma_N^2$$
$$\overline{XY} = E\{X_i(aX_i+N_i)\} = a\overline{X^2}$$
$$\sigma_{XY}^2 = \frac{a\overline{X^2}-a\left(\bar{X}\right)^2}{\sigma_X\left(a^2\sigma_X^2+\sigma_N^2\right)^{1/2}} = \frac{a\sigma_X}{\sqrt{a^2\sigma_X^2+\sigma_N^2}} = \frac{1}{\sqrt{1+\dfrac{\sigma_N^2}{a^2\sigma_X^2}}}$$
8.5 Estimating Distributions
So far, we have assumed that we have an analytical model for the probability densities of interest. This was reasonable for quantization. However, suppose we do not know the underlying densities. Can we estimate them from observations of the signal?
Suppose we observe a sequence of i.i.d. r.v.'s $X_1, X_2, \dots, X_N$, and let $E\{X\} = \mu_X$. How do we estimate $\mu_X$?
$$\hat{\mu}_X = \frac{1}{N}\sum_{n=1}^{N}X_n$$
How good is the estimate? $\hat{\mu}_X$ is itself a random variable; call it Y.
$$E\{Y\} = E\left\{\frac{1}{N}\sum_{n=1}^{N}X_n\right\} = E\{X\} = \mu_X$$
$$E\left\{|Y-\mu_X|^2\right\} = E\left\{\left|\frac{1}{N}\sum_{n=1}^{N}X_n-\mu_X\right|^2\right\} = \frac{1}{N^2}\sum_{m=1}^{N}\sum_{n=1}^{N}E\{[X_m-\mu_X][X_n-\mu_X]\} = \frac{1}{N^2}\,N\sigma_X^2 = \frac{1}{N}\sigma_X^2$$
(the cross terms vanish by independence)
How do we estimate the variance?
$$\hat{\sigma}_X^2 = \frac{1}{N}\sum_{n=1}^{N}\big[[X_n-\mu_X]-[\hat{\mu}_X-\mu_X]\big]^2$$
$$= \frac{1}{N}\sum_{n=1}^{N}|X_n-\mu_X|^2 - 2[\hat{\mu}_X-\mu_X]\,\frac{1}{N}\sum_{n=1}^{N}[X_n-\mu_X] + \frac{1}{N}\sum_{n=1}^{N}[\hat{\mu}_X-\mu_X]^2$$
$$= \frac{1}{N}\sum_{n=1}^{N}|X_n-\mu_X|^2 - [\hat{\mu}_X-\mu_X]^2$$
Call this r.v. Z. Then
$$E\{Z\} = \frac{1}{N}\sum_{n=1}^{N}\sigma_X^2 - E\left\{|\hat{\mu}_X-\mu_X|^2\right\}$$
$$E\left\{|\hat{\mu}_X-\mu_X|^2\right\} = E\left\{\left[\frac{1}{N}\sum_{n=1}^{N}[X_n-\mu_X]\right]^2\right\} = \frac{1}{N^2}\sum_{m=1}^{N}\sum_{n=1}^{N}E\{[X_m-\mu_X][X_n-\mu_X]\} = \frac{1}{N^2}\,N\sigma_X^2 = \frac{1}{N}\sigma_X^2$$
$$\therefore\ E\{Z\} = \sigma_X^2 - \frac{1}{N}\sigma_X^2 = \sigma_X^2\left(\frac{N-1}{N}\right)$$
So the estimate is not unbiased. We define the unbiased estimate as
$$\hat{\sigma}_{X,\mathrm{unb}}^2 = \frac{N}{N-1}\hat{\sigma}_X^2 = \frac{1}{N-1}\sum_{n=1}^{N}(X_n-\hat{\mu}_X)^2$$
$$\hat{\sigma}_X^2 = \frac{1}{N}\sum_{n=1}^{N}|X_n-\hat{\mu}_X|^2 = \frac{1}{N}\left[\sum_{n=1}^{N}|X_n|^2 - 2\hat{\mu}_X\sum_{n=1}^{N}X_n + N\hat{\mu}_X^2\right] = \frac{1}{N}\sum_{n=1}^{N}|X_n|^2 - \hat{\mu}_X^2$$
$$\therefore\ \hat{\sigma}_{X,\mathrm{unb}}^2 = \frac{N}{N-1}\hat{\sigma}_X^2$$
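NumPy exposes exactly this N vs. N − 1 choice through the `ddof` argument of `var`; a sketch averaging both estimators over many trials:

```python
import numpy as np

rng = np.random.default_rng(5)
N, trials = 10, 100_000
data = rng.standard_normal((trials, N))          # true variance is 1

var_biased = data.var(axis=1, ddof=0).mean()     # divide by N
var_unbiased = data.var(axis=1, ddof=1).mean()   # divide by N - 1

assert abs(var_biased - (N - 1) / N) < 0.01      # E{Z} = sigma^2 (N-1)/N
assert abs(var_unbiased - 1.0) < 0.01
```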
So far we have seen how to estimate statistics such as the mean and variance of a random variable. These ideas can be extended to the estimation of any moments of the random variable. But suppose we want to know the actual density itself. How do we do this? We need to compute the histogram.
Consider a sequence of i.i.d. r.v.'s with density $f_X(x)$. We divide the domain of X into intervals
$$I_k = \left\{x:\ \left(k-\tfrac{1}{2}\right)\Delta \le x \le \left(k+\tfrac{1}{2}\right)\Delta\right\}$$
We then define the r.v.'s
\[
U^k = \begin{cases} 1, & X \in I_k \\ 0, & \text{else} \end{cases}
\]
We observe a sequence of i.i.d. r.v.'s $X_n$, $n = 1, \ldots, N$, and compute the
corresponding r.v.'s $U^k_n$. Finally we let
\[
Z_k = \frac{1}{N} \sum_{n=1}^{N} U^k_n
\]
Then since this r.v. has the same form as the estimator of the mean which we
analyzed earlier, we know that
\[
E\{Z_k\} = E\{U^k_n\}, \qquad \sigma^2_{Z_k} = \frac{1}{N} \sigma^2_{U^k}
\]
Now what is $E\{U^k_n\}$?
\[
E\{U^k_n\} = 1 \cdot P\{X_n \in I_k\} + 0 \cdot P\{X_n \notin I_k\}
= P\{X_n \in I_k\}
= \int_{(k-\frac{1}{2})\Delta}^{(k+\frac{1}{2})\Delta} f_X(x)\,dx
\]
So we see that we are estimating the integral of the density function over
the range $\left(k - \frac{1}{2}\right)\Delta \leq x \leq \left(k + \frac{1}{2}\right)\Delta$.

According to the mean value theorem for integrals, there exists $\left(k - \frac{1}{2}\right)\Delta \leq \tau \leq \left(k + \frac{1}{2}\right)\Delta$
such that
\[
f_X(\tau)\Delta = \int_{(k-\frac{1}{2})\Delta}^{(k+\frac{1}{2})\Delta} f_X(x)\,dx = E\{Z_k\}
\]
Assuming $f_X(x)$ is continuous, as $\Delta \to 0$,
\[
E\{Z_k\} = f_X(\tau)\Delta \to f_X(k\Delta)\Delta
\]
In what follows, denote the mean of $U^k_n$ by $\bar{U}^k$, defined as $\bar{U}^k = E\{U^k_n\} = P\{X \in I_k\}$.
For fixed $f_X(x)$, $\bar{U}^k = P\{X \in I_k\}$ will increase with increasing $\Delta$, so there
is a tradeoff between measurement noise and measurement resolution.
Of course, for fixed $\Delta$ and $f_X(x)$, we can always improve our result by
increasing N.

What is the variance of $U^k_n$?
\[
E\left\{ (U^k_n)^2 \right\} = 1^2 \cdot P\{X \in I_k\} + 0^2 \cdot P\{X \notin I_k\}
= P\{X \in I_k\} = \bar{U}^k
\]
so
\[
\sigma^2_{U^k} = \bar{U}^k - (\bar{U}^k)^2 = \bar{U}^k (1 - \bar{U}^k)
\]
We would expect to operate in a range where $\Delta$ is small, so
\[
\sigma^2_{U^k} \approx \bar{U}^k
\]
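The histogram estimate $Z_k \approx f_X(k\Delta)\Delta$ can be tried directly. This is a NumPy sketch; the standard-normal data, bin width $\Delta$, and sample size are my own illustrative choices:

```python
import numpy as np

# Sketch: standard-normal data; bin width Delta and sample size N are arbitrary choices.
rng = np.random.default_rng(2)
N, Delta = 100_000, 0.25
X = rng.standard_normal(N)

k = np.arange(-8, 9)                                       # bin indices
edges = (k[0] - 0.5) * Delta + Delta * np.arange(len(k) + 1)
counts, _ = np.histogram(X, bins=edges)
Z = counts / N        # Z_k: fraction of samples in I_k, estimates f_X(k*Delta)*Delta
f_hat = Z / Delta     # density estimate at the bin centers k*Delta

f_true = np.exp(-(k * Delta) ** 2 / 2) / np.sqrt(2 * np.pi)
print("max |f_hat - f_true|:", np.abs(f_hat - f_true).max())
```

Shrinking `Delta` with `N` fixed makes each bin noisier (fewer samples per bin), while growing `Delta` blurs the density: the noise/resolution tradeoff described above.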
8.6 Filtering of Random Sequences
Let us define
\[
\mu_X(n) = E\{X_n\}, \qquad r_{XX}(m, n) = E\{X_m X_n\}
\]
Special case: a process is w.s.s. (wide-sense stationary) if

1. $\mu_X(n) \equiv \mu_X$
2. $r_{XX}(m, n) = r_{XX}(n - m, 0) \equiv r_{XX}(n - m)$

Convention:
\[
r_{AB}[a, b] = r_{AB}[b - a]
\]
Consider
\[
Y[n] = \sum_m h[n - m] X[m]
\]
so that
\[
E\{Y_n\} = \sum_m h[n - m] E\{X_m\}
\]
If $X_n$ is w.s.s.,
\[
E\{Y_n\} = \sum_m h[n - m] \mu_X = \mu_X \sum_k h[k]
\]
which is a constant, independent of n.
To find $r_{YY}$, we first define the crosscorrelation
\[
r_{XY}[m, n] = E\{X_m Y_n\}
= E\left\{ X_m \sum_k h[n - k] X_k \right\}
= \sum_k h[n - k] E\{X_m X_k\}
= \sum_k h[n - k] r_{XX}[k - m]
\]
Let $l = k - m \Rightarrow k = m + l$:
\[
r_{XY}[m, n] = \sum_l h[n - (m + l)] \, r_{XX}(l) = \sum_l h[n - m - l] \, r_{XX}(l)
\]
\[
\therefore \quad r_{XY}[m, n] = r_{XY}[n - m]
\]
Then
\[
r_{YY}[m, n] = E\{Y_m Y_n\}
= E\left\{ Y_m \sum_k h[n - k] X_k \right\}
= \sum_k h[n - k] E\{X_k Y_m\}
= \sum_k h[n - k] r_{XY}[m - k]
\]
Let $l = m - k \Rightarrow k = m - l$:
\[
r_{YY}[m, n] = \sum_l h[n - (m - l)] \, r_{XY}(l)
= \sum_l h[n - m + l] \, r_{XY}(l)
= \sum_l h[n - m - l] \, r_{XY}(-l)
\]
so $r_{YY}[m, n] = r_{YY}[n - m]$.

Summarizing: if X is w.s.s., Y is also w.s.s.
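The mean relation $E\{Y_n\} = \mu_X \sum_k h[k]$ derived above is easy to confirm empirically. A NumPy sketch; the FIR filter taps and input statistics are arbitrary choices for illustration:

```python
import numpy as np

# Sketch: arbitrary FIR filter h and input mean; checks E{Y_n} = mu_X * sum_k h[k].
rng = np.random.default_rng(3)
mu_X = 2.0
h = np.array([0.5, 0.3, 0.2])

X = mu_X + rng.standard_normal(500_000)   # w.s.s. input with mean mu_X
Y = np.convolve(X, h, mode="valid")       # Y[n] = sum_m h[m] X[n-m]

print("sample mean of Y:", Y.mean())
print("mu_X * sum(h)   :", mu_X * h.sum())
```

The two printed values agree to within sampling error, and the output mean is indeed independent of n.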
Interpretation of Correlation

Recall that for two different random variables X and Y, we defined the
covariance to be
\[
\sigma^2_{XY} = E\{(X - \mu_X)(Y - \mu_Y)\}
\]
and the correlation coefficient
\[
\rho_{XY} = \frac{\sigma^2_{XY}}{\sigma_X \sigma_Y}
\]
as measures of how related X and Y are, i.e., whether or not they tend to
vary together.

Since X[m] and X[m+n], or X[m] and Y[m+n], are just two different
pairs of r.v.'s, we can see that
\[
r_{XX}[n] = E\{X[m] X[m+n]\}, \qquad r_{XY}[n] = E\{X[m] Y[m+n]\}
\]
are closely related to the covariance and correlation coefficient. If X and Y have zero mean, these r.v.'s have the same
interpretation: how related are $X_m$ and $X_{m+n}$, or $Y_m$ and $Y_{m+n}$?

Let's consider some examples. Suppose X[n] is i.i.d. with zero mean.
(Since the mean of a w.s.s. process is just a constant offset or shift, we can let
it equal zero without loss of generality.) Then
\[
r_{XX}[n] = E\{X[m] X[m+n]\} = \begin{cases} \sigma_X^2, & n = 0 \\ 0, & \text{else} \end{cases}
\]
i.e.,
\[
r_{XX}[n] = \sigma_X^2 \, \delta[n]
\]
Now suppose
\[
Y[n] = \sum_{m=0}^{7} 2^{-m} X[n - m]
\]
This is a filtering operation with
\[
h[n] = 2^{-n} \left\{ u[n] - u[n - 8] \right\}
\]
We already know that
\[
r_{XY}[n] = h[n] * r_{XX}[n] = \sigma_X^2 \, h[n]
\]
We found $r_{XY}[n] = E\{X[m]Y[m+n]\} = \sigma_X^2 h[n]$. What about the sequence Y[n]? Is it i.i.d.? Recall
\[
r_{YY}[n] = h[n] * r_{XY}[-n] = h[n] * \sigma_X^2 h[-n]
= \sigma_X^2 \sum_m h[n - m] h[-m]
= \sigma_X^2 \sum_m h[n + m] h[m]
\]
Cases:

1. $n < -7$:
\[
r_{YY}[n] = 0
\]
2. $-7 \leq n \leq 0$: only the terms with $-n \leq m \leq 7$ contribute, so
\[
r_{YY}[n] = \sigma_X^2 \sum_{m=-n}^{7} 2^{-(n+m)} \, 2^{-m}
\]
Let $l = m + n$:
\[
r_{YY}[n] = \sigma_X^2 \, 2^{n} \sum_{l=0}^{n+7} 2^{-2l}
= \sigma_X^2 \, 2^{n} \, \frac{1 - 2^{-2(n+8)}}{1 - 2^{-2}}
= \frac{4}{3} \sigma_X^2 \left[ 2^{n} - 2^{-n-16} \right]
\]
3. $0 \leq n \leq 7$: only the terms with $0 \leq m \leq 7 - n$ contribute, so
\[
r_{YY}[n] = \sigma_X^2 \, 2^{-n} \sum_{m=0}^{7-n} 2^{-2m}
= \sigma_X^2 \, 2^{-n} \, \frac{1 - 2^{-2(8-n)}}{1 - 2^{-2}}
= \frac{4}{3} \sigma_X^2 \left[ 2^{-n} - 2^{n-16} \right]
\]
4. $n > 7$:
\[
r_{YY}[n] = 0
\]
Summarizing:
\[
r_{YY}[n] = \begin{cases}
0, & n < -7 \\[2pt]
\dfrac{4}{3} \sigma_X^2 \left[ 2^{n} - 2^{-n-16} \right], & -7 \leq n \leq 0 \\[6pt]
\dfrac{4}{3} \sigma_X^2 \left[ 2^{-n} - 2^{n-16} \right], & 0 \leq n \leq 7 \\[2pt]
0, & n > 7
\end{cases}
\]
Since $r_{YY}[n]$ is not proportional to $\delta[n]$, successive samples of Y[n] are correlated: the filtering has destroyed the i.i.d. property of the input.
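The piecewise formula can be checked deterministically against the defining sum $r_{YY}[n] = \sigma_X^2 \sum_m h[n+m]h[m]$. A short sketch, taking $\sigma_X^2 = 1$ for simplicity:

```python
import numpy as np

# Deterministic check of the piecewise r_YY formula against sum_m h[n+m] h[m],
# taking sigma_X^2 = 1 for simplicity.
h = 2.0 ** -np.arange(8)              # h[n] = 2^{-n} for 0 <= n <= 7

def r_yy_direct(n):
    return sum(h[n + m] * h[m] for m in range(8) if 0 <= n + m <= 7)

def r_yy_formula(n):
    if -7 <= n <= 0:
        return (4.0 / 3.0) * (2.0 ** n - 2.0 ** (-n - 16))
    if 0 < n <= 7:
        return (4.0 / 3.0) * (2.0 ** (-n) - 2.0 ** (n - 16))
    return 0.0

for n in range(-10, 11):
    assert abs(r_yy_direct(n) - r_yy_formula(n)) < 1e-12
print("piecewise formula matches the direct computation")
```

Spot values are easy to verify by hand as well, e.g. $r_{YY}[7] = h[0]h[7] = 2^{-7}$, which the formula reproduces.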
8.7 Estimating Correlation
Assume X and Y are jointly w.s.s. processes:
\[
E[X_n] = \mu_X, \qquad E[Y_n] = \mu_Y
\]
\[
r_{XY}[m, m+n] = E\{X[m] Y[m+n]\} = r_{XY}[n]
\]
Suppose we form the sum
\[
Z[n] = \frac{1}{M} \sum_{m=0}^{M-1} X[m] Y[m+n]
\]
Consider
\[
E\{Z[n]\} = \frac{1}{M} \sum_{m=0}^{M-1} E\{X[m] Y[m+n]\}
= \frac{1}{M} \sum_{m=0}^{M-1} r_{XY}[n] = r_{XY}[n]
\]
So we have an unbiased estimator of the correlation.

Let's look a little more closely at what the computation involves. Suppose we compute
\[
\hat{r}_{XY}[n] = Z[n] = \frac{1}{M} \sum_{m=0}^{M-1} X[m] Y[m+n]
\quad \text{for the interval } -N \leq n \leq N
\]
For $n = -N$, this uses
\[
X[0], X[1], \ldots, X[M-1] \quad \text{and} \quad Y[-N], Y[-N+1], \ldots, Y[-N+M-1]
\]
For $n = 0$:
\[
X[0], X[1], \ldots, X[M-1] \quad \text{and} \quad Y[0], Y[1], \ldots, Y[M-1]
\]
For $n = N$:
\[
X[0], X[1], \ldots, X[M-1] \quad \text{and} \quad Y[N], Y[N+1], \ldots, Y[N+M-1]
\]
In each case, we used the same data set for X, but the data used for Y varied.
Suppose instead we collect just two sets of data,
\[
X[0], \ldots, X[M-1] \quad \text{and} \quad Y[0], \ldots, Y[M-1],
\]
and use this data to compute the estimate for every n. Let
\[
X_{tr}[n] = X[n] \left\{ u[n] - u[n - M] \right\}, \qquad
Y_{tr}[n] = Y[n] \left\{ u[n] - u[n - M] \right\}
\]
Then let
\[
V[n] = \sum_m X_{tr}[m] \, Y_{tr}[m+n]
= \sum_{m=0}^{M-1} X[m] Y[m+n] \left\{ u[m+n] - u[m+n-M] \right\}
\]
\[
E\{V[n]\} = \sum_{m=0}^{M-1} E\{X[m] Y[m+n]\} \left\{ u[m+n] - u[m+n-M] \right\}
= r_{XY}[n] \sum_{m=0}^{M-1} \left\{ u[m+n] - u[m+n-M] \right\}
\]
The remaining sum simply counts the values of m in $0 \leq m \leq M-1$ for which $0 \leq m+n \leq M-1$; for $|n| \leq M-1$ this count is $M - |n|$. So an unbiased estimate of the correlation is given by
\[
\hat{r}_{XY}[n] = \frac{1}{M - |n|} \sum_{m=0}^{M-1} X[m] \, Y_{tr}[m+n]
\]
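The estimator above is straightforward to implement directly (summing only the lags where both truncated records overlap, then dividing by $M - |n|$). A NumPy sketch; the test signals, delay of 3 samples, and record length are hypothetical choices for illustration:

```python
import numpy as np

def r_xy_hat(X, Y, max_lag):
    """Unbiased estimate r_XY[n] = (1/(M-|n|)) * sum_m X[m] Y[m+n]."""
    M = len(X)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.zeros(len(lags))
    for i, n in enumerate(lags):
        m = np.arange(max(0, -n), min(M, M - n))   # indices with 0 <= m+n <= M-1
        r[i] = X[m] @ Y[m + n] / (M - abs(n))
    return lags, r

# Toy check (hypothetical data): Y is X delayed by 3, so r_XY[n] should peak at n = 3.
rng = np.random.default_rng(4)
X = rng.standard_normal(200_000)
Y = np.roll(X, 3)
lags, r = r_xy_hat(X, Y, max_lag=6)
print("lag of peak:", lags[np.argmax(r)])   # → lag of peak: 3
```

Because X has unit variance, the peak value is close to 1 while every other lag hovers near zero, matching $r_{XY}[n] = \sigma_X^2 \delta[n-3]$ for this toy pair.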
∗ Cover image taken from website
http://www.fmwconcepts.com/imagemagick
Digital Signal Processing in a Nutshell (Volume I)
Jan P Allebach Krithika Chandrasekar
2
Contents
1 Signals 1.1 Chapter Outline . . . . . . . . . . 1.2 Signal Types . . . . . . . . . . . . 1.3 Signal Characteristics . . . . . . . 1.4 Signal Transformations . . . . . . . 1.5 Special Signals . . . . . . . . . . . 1.6 Complex Variables . . . . . . . . . 1.7 Singularity functions . . . . . . . . 1.8 Comb and Replication Operations 2 Systems 2.1 Chapter Outline . . 2.2 Systems . . . . . . . 2.3 System Properties . 2.4 Convolution . . . . . 2.5 Frequency Response 5 5 6 7 10 14 17 23 27 29 29 29 32 36 44 51 51 51 57 70 79 79 79 87 90
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
3 Fourier Analysis 3.1 Chapter Outline . . . . . . . . . . . . . 3.2 ContinuousTime Fourier Series (CTFS) 3.3 ContinuousTime Fourier Transform . . 3.4 DiscreteTime Fourier Transform . . . . 4 Sampling 4.1 Chapter Outline . . . . . . . . . . . 4.2 Analysis of Sampling . . . . . . . . . 4.3 Relation between CTFT and DTFT 4.4 Sampling Rate Conversion . . . . . . 3
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . . . . . . . . . . . .4 Inﬁnite Impulse Response Filter Design 8 Random Signals 8. . . . . . . . . . .4 Sequences of Random Variables 8. . . . . . . . . . . . . . . . . .2 Derivation of the Z Transform . . . . . . . . . . . . . . . . . . . . 6. . . . . . .Linear Constant Coeﬃcient Diﬀerence Equations 5. . . . . 7. . . . . . . . . . . . . . . . 5. . . . . . . . . . .2 Overview . . . . . . . .5 Fast Fourier Transform (FFT) Algorithm 6. . . . 5. . . 8. . . . . . . 7. . .3 Finite Impulse Response Filter Design . . . . . . . . . . 8. . . . . . . . . . .3 Convergence of the ZT . .1 Chapter Outline . . . . . 8. . 5. . . . . . . . . . . . . . . . 6 Discrete Fourier Transform (DFT) 6. . 7 Digital Filter Design 7. . . . . . . . .6 Filtering of Random Sequences 8. . . . . . . . . . . . . . . . . . . . . . . 8. . . . . . . . . . . . .1 Chapter Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 Spectral Analysis Via the DFT . .4 CONTENTS 99 99 99 103 107 108 114 124 129 129 129 132 134 137 142 147 147 147 154 165 177 177 177 181 184 191 195 203 5 Z Transform (ZT) 5. . . .7 Estimating Correlation . .2 Single Random Variable . 6. . . . . . . . . . 6. . . .5 Estimating Distributions . . . . . . . . . .1 Chapter Outline . .6 Periodic Convolution . . . . . . .3 DFT Properties and Pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6 Inverse Z Transform . . . . . . . . . . . . .4 ZT Properties and Pairs . 6. . . . . . . . . . . . . . . . . . . . . . . .3 Two Random Variables .5 ZT. 5. .1 Chapter Outline . . . . . . . . . . . . . . . . . . . . . . . . . . .7 General Form of Response of LTI Systems . . . . . . . . . . . . . .2 Derivation of the DFT . . . . . . . . . . . . . . . . 7. . . . . . . . . . . . . . . . . . . . . . . . 5.
Signal Transformations 4. we will discuss: 1.Chapter 1 Signals 1. Signal Characteristics 3. Singularity Functions 7. Comb and Replication Operators 5 . Special Signals 5.1 Chapter Outline In this chapter. Complex Variables 6. Signal Types 2.
Digital (discretetime and discreteamplitude) The diﬀerent types of signals are shown in Figure 1. Continuoustime (CT) 2.2 Signal Types There are three diﬀerent types of signals: 1. SIGNALS 1. Discretetime (DT) 3. Figure 1.2(a) Diﬀerent types of signals. Comments • The CT signal need not be continuous in amplitude • The DT signal is undeﬁned between sampling instances • Levels need not be uniformly spaced in a digital signal • We make no distinction between discretetime and digital signals in theory • The independent variable need not be time .2(a) below.6 CHAPTER 1.
Deterministic or random 2.2(b) Equivalent representations for DT signals. we will use the subscripts d and a only when necessary for clarity.3. x(t) = 0 only for −∞ < t1 ≤ t ≤ t2 < ∞. If t0 ≥ 0.1. or twosided For rightsided signals. . 4. x(t) = 0 only when t ≥ t0 . Periodic or aperiodic For periodic signals. and x(t) = 0 for some t > 0. If t0 ≤ 0. x(t) = 0 only when t ≤ t0 .3 Signal Characteristics Signals can be: 1. the signal is causal. For leftsided signals. 1. the signal is anticausal. 7 Equivalent Representations for DT signals Figure 1.sampling interval Throughout this book. leftsided.e. Rightsided. for some ﬁxed T and for all integers n and times t. xd (n) = xa (nT ) T. x(t) = 0 for some t < 0. i. x(t) = x(t + nT ). For 2sided or mixed causal signals. Finite or inﬁnite duration For ﬁnite duration signals. SIGNAL CHARACTERISTICS • x(n) may or may not be sampled from an analog waveform. 3.
SIGNALS DT signal x(n)2 Ex = n x(t)2 dt −T Px = lim 1 N →∞ 2N + 1 N x(n)2 n=−N Mx = maxt x(t) Ax = x(t)dt 1 xavg = lim T →∞ 2T T Mx = maxn x(n) Ax = x(n) n x(t)dt −T xavg = lim 1 N →∞ 2N + 1 N x(n) n=−N Comments • If signal is periodic. we need only average over one period. x(t) = e−(t+1)/2 . e. for any n0 n=n0 • rootmeansquare (rms) value xrms = Px Examples Example 1 Figure 1.g. t > −1 t < −1 . if x(n)=x(n+N) for all n Px = 1 N n0 +N −1 x(n)2 .3(a) Signal for Example 1. 0.8 Metrics Metrics Energy Power Magnitude Area Average value CT signal Ex = x(t)2 dt 1 Px = lim T →∞ 2T T CHAPTER 1.
periodic (N = 8) . xavg = 0 Example 2 ∞ −1 e−(t+1)/2 2 dt. Ex = Ex = 2. mixed causal Metrics 1. Ax = 2 5.1.3. x(n) = A cos(πn/4) The signal is: 1. aperiodic 3. deterministic 2. Mx = 1 4. SIGNAL CHARACTERISTICS 9 The signal is: 1. deterministic 2.3(b) Signal for Example 2. rightsided 4. Px = 0 3. = let s = t + 1 =1 ∞ −s e ds 0 ∞ −e−s 0 Figure 1.
mixed causal CHAPTER 1. Mx = A 5. Px = 1 A cos(πn/4)2 8 n=0 7 7 Px = A2 16 A2 Px = 2 [1 + cos(πn/2)] n=0 A 3. SIGNALS Metrics 1.4 Signal Transformations ContinousTime Case Consider 1. 2sided 4. Ex = ∞ 2. xavg = 0 1. Reﬂection y(t) = x(−t) . xrms = √ 2 4. Ax is undeﬁned 6.10 3.
Scaling t y(t) = x( a ) 4. SIGNAL TRANSFORMATIONS 11 2.4.1. Scaling and Shifting Example 1 t y(t) = x( a − t0 ) What does this signal look like? center of pulse: t a − t0 = 0 ⇒ t = at0 . Shifting y(t) = x(t − t0 ) 3.
12 left edge of pulse: right edge of pulse: t a CHAPTER 1. SIGNALS − t0 = t a −1 2 1 t0 = 2 ⇒ t = a(t0 − 1 ) 2 1 ⇒ t = a(t0 + 2 ) − Example 2 y(t) = x t−t0 a What does this signal look like? center: (t − t0 )/a = 0 ⇒ t = t0 left edge of pulse: (t − t0 )/a = −1/2 ⇒ t = t0 − a/2 right edge of pulse: (t − t0 )/a = 1/2 ⇒ t = t0 + a/2 DiscreteTime Case Consider .
and are implemented by interpolation. D ∈ Z b. Scaling a. SIGNAL TRANSFORMATIONS 1. n/D ∈ Z.1. Noninteger delays can only be deﬁned in the context of a CT signal. downsampler y(n) = x(Dn). D ∈ Z y(n) = 0. Reﬂection y(n) = x(−n) 13 2. upsampler x(n/D). else .4. Shifting y(n) = x(n − n0 ) Note that n0 must be an integer. 3.
else Rectangle rect(t) = 1. n ≥ 0 0. 0. SIGNALS 1.5 Special Signals Unit step a.14 CHAPTER 1. t>0 t<0 b. DT case u(n) = 1. CT case u(t) = 1. t < 1/2 0. t > 1/2 .
CT xa (t) = A sin(ωa t + θ) where A is the amplitude ωa is the frequency θ is the phase Analog frequency radians ωa = 2πfa sec cycles sec . SPECIAL SIGNALS 15 Triangle Λ(t) = 1 − t. t ≤ 1 0.1.5. else Sinusoids a.
16 Period sec T cycle b. xd (n) will be periodic if and only if ωd = 2π(p/q) where p and q are integers. Depending on ω. xd (n) may not look like a sinusoid 2. DT xd (n) = A sin(ωd n + θ) CHAPTER 1. for all n 3. . In this case. SIGNALS = 1 fa digital frequency radians ωd = 2πµ sample Comments cycles sample 1. the period is q. Digital frequencies ω1 = ω0 and ω2 = ω0 + 2πk are equivalent x2 (n) = A sin(ω2 n + θ) = A sin[(ω0 + 2πk)n + θ] = x1 (n).
COMPLEX VARIABLES Sinc sinc(t) = sin(πt) (πt) 17 1.6 Complex Variables Deﬁnition Cartesian coordinates z = (x.1.6. y) Polar coordinates z = R∠θ R= x2 + y 2 x = R cos (θ) .
SIGNALS θ = arctan y = R sin (θ) y x Alternate representation deﬁne j = (0. 1) = 1∠ (π/2) then z = x + jy Additional notation Re {z} = x Im {z} = y z = R Algebraic Operations Addition (Cartesian Coordinates) z1 = x1 + jy1 z2 = x2 + jy2 z1 + z2 = (x1 + x2 ) + j(y1 + y2 ) .18 CHAPTER 1.
COMPLEX VARIABLES Multiplication (Polar Coordinates) 19 z1 = R1 ∠θ1 z2 = R2 ∠θ2 z1 z2 = R1 R2 ∠ (θ1 + θ2 ) Special Examples 1. 0) = 1∠180 3. −1 = (−1. j 2 = (1∠90)2 = 1∠180 = −1 .6. 1 = (1. 0) = 1∠0 2.1.
...) + j(θ − + − ..20 CHAPTER 1. SIGNALS Complex Conjugate z ∗ = x − jy = R∠ (−θ) Some useful identities 1 Re {z} = [z + z ∗ ] 2 1 Im {z} = [z − z ∗ ] j2 √ z = zz ∗ Complex Exponential Taylor series for exponential function of real variable x k=0 ex = ∞ xk k! Replace x by jθ ejθ = 1 + jθ + jθ2 jθ3 jθ4 jθ5 + + + 2! 3! 4! 5! θ2 θ4 θ3 θ5 = (1 − + − .) 2! 4! 3! 5! = cos θ + j sin θ .
1.6. COMPLEX VARIABLES 21 ejθ  = ∠ejθ cos2 θ + sin2 θ = 1 sin θ = arctan =θ cos θ Alternate form for polar coordinate representation of complex number z = R∠θ = Rejθ Multiplication of two complex numbers can be done using rules for multiplication of exponentials: z1 z2 = (R1 ejθ1 )(R2 ejθ2 ) = R1 R2 ej(θ1 +θ2 ) Complex exponential signal CT x(t) = Aejψ(t) ψ(t) = ωa t + θ instantaneous phase rad dψ(t) = ωa instantaneous phasor velocity dt sec .
It refers to the ability of one frequency to mimic another 2.22 CHAPTER 1. sample Aliasing 1. SIGNALS DT x(n) = Aej(ωd n+θ) ωd is the phasor velocity in Example ωd = π/3 θ = π/6 radians . ejω1 n = ejω2 n if ω2 = ω1 + 2πk . In DT. ejω1 t = ejω2 t for all t ⇔ ω1 = ω2 3. For CT case.
then x1 (n) = ejω1 n = ej(π+∆)n = ej[(π+∆)n−2πn] = ej[(−π+∆)n] = e−j(π−∆)n = x2 (n) 1.7 Singularity functions CT Impulse Function Let δ∆ (t) = Note that −∞ 1 t rect( ) ∆ ∆ ∞ δ∆ (t)dt = 1.7. SINGULARITY FUNCTIONS Consider 7π −5π ω1 = ω2 = 6 6 x1 (n) = ejω1 n x2 (n) = ejω2 n 23 In general. What happens as ∆ → 0? ∆1 > ∆2 > ∆3 . if ω1 = π + ∆ and ω2 = −(π − ∆).1.
to sample signals 2. as a way to characterize the response of a class of systems to an arbitrary input signal Sifting property Consider a signal x(t) multiplied by δ∆ (t − t0 )for some ﬁxed t0 : . ∞. t=0 t=0 δ(t)dt = 1 −∞ We will use impulses in three ways: 1.24 CHAPTER 1. we obtain δ(t) = lim δ∆ (t) ∆→0 δ(t) = and ∞ 0. SIGNALS In the limit. as a way to decompose signals into elementary components 3.
∞ 25 x(t)δ∆ (t − t0 )dt = x(ξ) −∞ for some ξ which satisﬁes t0 − ∆ 2 ≤ ξ ≤ t0 + ∆ 2 As ∆ → 0.ξ → t0 and x(ξ) → x(t0 ) So we have ∞ x(t)δ(t − t0 )dt = x(t0 ) −∞ provided x(t) is continuous at t0 . x(t)δ(t − t0 ) ≡ x(t0 )δ(t − t0 ) Indeﬁnite integral of δ(t) t Let uδ (t) = −∞ δ∆ (τ )dτ . SINGULARITY FUNCTIONS From the mean value theorem of calculus.1.7. Equivalence Based on sifting property.
t u(t) = −∞ δ(τ )dτ When we deﬁne the unit step function. else .26 CHAPTER 1.5 More general form of sifting property b x(τ )δ(τ − t0 )dτ = a x(t0 ). In this case. n = 0 0. a < t0 < b 0. we did not specify its value at t=0. t0 < a or b < t0 DT Impulse Function (unit sample function) δ(n) = 1. SIGNALS Let ∆ → 0. u(0) = 0.
8 Comb and Replication Operations Sifting property yields a single sample at t0 . COMB AND REPLICATION OPERATIONS 27 Sifting Property n2 x(n)δ(n − n0 ) = n=n1 x(n0 ).8. else Equivalence x(n)δ(n − n0 ) = x(n0 )δ(n − n0 ) Indeﬁnite sum of δ(n) n u(n) = m=−∞ δ(m) 1.1. Consider multiplying x(t) by an entire train of impulses: Deﬁne xs (t) = combT [x(t)] = x(t) n δ(t − nT ) = n x(t)δ(t − nT ) x(nT )δ(t − nT ) n = . n1 ≤ n0 ≤ n2 0.
SIGNALS Area of each impulse = value of sample there. The replication operator similarly provides a compact way to express periodic signals: xp (t) = repT [x0 (t)] = n x0 (t − nT ) Note that x0 (t) = xp (t)rect t T .28 CHAPTER 1.
2 Systems A system is a mapping which assigns to each signal x(t) in the input domain a unique signal y(t) in the output range. Convolution 3. we will discuss: 1. Frequency Response 2. 29 .Chapter 2 Systems 2.1 Chapter Outline In this chapter. System Properties 2.
SYSTEMS 1.30 Examples t+1/2 CHAPTER 2. CT y(t) = t−1/2 x(τ )dτ let x(t) = sin(2πf t) t+1/2 y(t) = t−1/2 sin(2πf t)dt −1 t+1/2 cos(2πf t)t−1/2 2πf 1 =− {cos[2πf (t + 1/2)] − cos[2πf (t − 1/2)]} 2πf = use cos(α ± β) = cos α cos β y(t) = − sin α sin β 1 {[cos(2πf t) cos(πf ) − sin(2πf t) sin(πf )] 2πf −[cos(2πf t) cos(πf ) + sin(2πf t)sin(πf )]} y(t) = sin(πf ) sin(2πf t) πf = sinc(f )sin(2πf t) Look at particular values of f f = 0 : x(t) ≡ 0 y(t) ≡ 0 f = 1 : x(t) = sin(2πt) sinc(1) = 0 y(t) ≡ 0 .
2. DT y(n) = 1 [x(n) + x(n − 1) + x(n − 2)] 3 Eﬀect of ﬁlter on output: 1. SYSTEMS 31 2.smoothed. or smeared each pulse 2. delayed pulse train by one sample time . widened.2.
3 System Properties Notation: y(t) = S[x(t)] A. SYSTEMS 2. homogeneity (let a2 = 0) S[a1 x1 (t)] = a1 S[x1 (t)] 2. superposition (let a1 = a2 = 1) S[x1 (t) + x2 (t)] = S[x1 (t)] + S[x2 (t)] Examples t+1/2 y(t) = t−1/2 x(τ )dτ let x3 (t) = a1 x1 (t) + a2 x2 (t) t+1/2 y3 (t) = t−1/2 t+1/2 x3 (τ )dτ [a1 x1 (τ ) + a2 x2 (τ )]dτ t−1/2 t+1/2 t+1/2 = = a1 x1 (τ )dτ + a2 t−1/2 t−1/2 x2 (τ )dτ = a1 y1 (t) + a2 y2 (t) ∴ system is linear We can similarly show that y(n) = 1 [x(n) + x(n − 1) + x(n − 2)] is linear. 3 . A system S is linear (L) if for any two inputs x1 (t) and x2 (t) and any two constants a1 and a2 . Linearity def. it satisﬁes: S[a1 x1 (t) + a2 x2 (t)] = a1 S[x1 (t)] + a2 S[x2 (t)] Special cases: 1.32 CHAPTER 2.
this system is completely described by a curve relating input to output at each time t: .2.ﬁnd a counterexample.3. x1 (t) ≡ 1 ⇒ y1 (t) ≡ 1 x2 (t) ≡ 1 2 ⇒ y2 (t) ≡ 1 2 3 2 x3 (t) = x1 (t) + x2 (t) = ⇒ y3 (t) ≡ 1 = 3 2 ∴system is not linear Because it is memoryless. 1 < x(t) 33 Suspect that system is nonlinear . y(n) = nx(n) let x3 (n) = a1 x1 (n) + a2 x2 (n) y3 (n) = nx3 (n) = (a1 nx1 (n) + a2 nx2 (n)) = a1 y1 (n) + a2 y2 (n) ∴ system is linear −1. SYSTEM PROPERTIES Consider two additional examples: 3. −1 ≤ x(t) ≤ 1 y(t) = 1. x(t) < −1 x(t).
i.34 CHAPTER 2. if y1 (t) = S[x1 (t)] and y2 (t) = S[x1 (t − t0 )] then y2 (t) = y1 (t − t0 ) Examples 1. A system is timeinvariant if delaying the input results in only an identical delay in the output.e. y(n) = assume y1 (n) = 1 [x1 (n) + x1 (n − 1) + x1 (n − 2)] 3 let x2 (n) = x1 (n − n0 ) y2 (n) = 1 [x2 (n) + x2 (n − 1) + x3 (n − 2)] 3 1 = [x1 (n − n0 ) + x1 (n − 1 − n0 ) + x1 (n − 2 − n0 )] 3 1 = [x1 (n − n0 ) + x1 (n − n0 − 1) + x1 (n − n0 − 2)] 3 = y1 (n − n0 ) 1 [x(n) + x(n − 1) + x(n − 2)] 3 ∴ system is TI. Time Invariance def. B. We can similarly show that . SYSTEMS Importance of linearity: We can represent response to a complex input in terms of responses to very simple inputs.
C.e. y(n) = y(n) = 1 x(n) + x(n − 1) + x(n − 2) 3 1 ≤ [x(n) + x(n − 1) + x(n − 2)] 3 ≤ Mx . A system S is causal if the output at time t depends only on x(τ ) for τ ≤t Casuality is equivalent to the following property: If x1 (t) = x2 (t).2.i. A system is said to be boundedinput boundedoutput (BIBO) stable if every bounded input produces a bounded output. t ≤ t0 then y1 (t) = y2 (t). y(t) = t−1/2 x(τ )dτ is TI 3. Mx < ∞ ⇒ My < ∞. y(n) = nx(n) assume y1 (n) = nx1 (n) let x2 (n) = x1 (n − n0 ) y2 (n) = nx2 (n) = nx1 (n − n0 ) = (n − n0 )x1 (n − n0 ) ∴ system is not TI. Example 1 [x(n) + x(n − 1) + x(n − 2)] 3 Assume x(n) ≤ Mx for all n 1. Stability def.3. Causality def. SYSTEM PROPERTIES t+1/2 35 2. t ≤ t0 D.
i=0. xi (n) = x0 (n − ni ) Let yi (n) = S[xi (n)].. Now consider an arbitrary input x(n).. Decompose input into sum of simple basis functions xi (n) N −1 x(n) = i=0 ci xi (n) y(n) = S[x(n)] N −1 = i=0 ci S[xi (n)] 2.36 CHAPTER 2. Choose impulse as basis function Denote impulse response by h(n)..4 Convolution Characterization of behavior of an LTI system 1..e. SYSTEMS 2. Choose basis functions which are all shifted versions of a single basis function x0 (n) i.N1 then yi (n) = y0 (n − ni ) N −1 and y(n) = i=0 ci y0 (n − ni ) 3. .
CONVOLUTION 37 Sum over both the set of inputs and the set of outputs.4.2. x(n) = x(0)δ(n) + x(1)δ(n − 1) + x(2)δ(n − 2) + x(3)δ(n − 3) y(n) = x(0)h(n) + x(1)h(n − 1) + x(2)h(n − 2) + x(3)h(n − 3) .
∞ x1 (n) ∗ x2 (n) = k=−∞ ∞ x1 (n − k)x2 (k) x1 (k)x2 (n − k) k=−∞ = Characterization of CT LTI systems • Approximate x(t) by a superposition of rectangular pulses: . and we have the following identity.38 CHAPTER 2. we use an asterisk to denote their convolution. SYSTEMS Convolution sum ∞ x(n) = k=−∞ ∞ x(k)δ(n − k) y(n) = k=−∞ x(k)h(n − k) let =n−k ⇒k =n− −∞ ∞ y(n) = =∞ x(n − )h( ) = =−∞ x(n − )h( ) Notation and identity For any signals x1 (n) and x2 (n).
CONVOLUTION ∞ 39 x∆ (t) = k=−∞ x(k∆)δ∆ (t − k∆)∆ • Find response to a single pulse: • Determine response to x∆ (t): ∞ y∆ (t) = k=−∞ x(k∆)h∆ (t − k∆)∆ Let ∆ → 0 (dτ ) k∆ → τ δ∆ (t) → δ(t) x∆ (t) → x(t) h∆ (t) → h(t) y∆ (t) → y(t) Convolution Integral ∞ x(t) = −∞ ∞ x(τ )δ(t − τ )dτ x(τ )h(t − τ )dτ −∞ y(t) = .2.4.
40
CHAPTER 2. SYSTEMS
Notation and Identity
For any signals x1 (t) and x2 (t), we use an asterisk to denote their convolution; and we have the following identity.
∞
x1 (n) ∗ x2 (n) =
−∞ ∞
x1 (τ )x2 (t − τ )dτ x1 (t − τ )x2 (τ )dτ
−∞
= Example DT system y(n) = 1 W
W −1
x(n − k) W  integer
k=0
Find response to x(n) = e−n/D u(n). W  width of averaging window D  duration of input To ﬁnd impulse response, let x(n) = δ(n) ⇒ h(n) = y(n). 1 h(n) = W
W −1
δ(n − k) =
k=0
1/W, 0,
0≤n≤W −1 else
Now use convolution to ﬁnd response to x(n) = e−n/D u(n).
∞
y(n) =
k=−∞
x(n − k)h(k)
Case 1: n < 0
y(n) = 0
2.4. CONVOLUTION
41
Case 2: 0 ≤ n ≤ W − 1
n
y(n) =
k=0
x(n − k)h(k) 1 W
n
= =
e−(n−k)/D
k=0 n
1 −n/D e W
ek/D
k=0
Geometrical Series
N −1
zk =
k=0 ∞
1 − zN , for any complex number z 1−z 1 , z < 1 1−z
zk =
k=0
y(n) =
1 −n/D 1 − e(n+1)/D e W 1 − e1/D
Case 3: W ≤ n
42
CHAPTER 2. SYSTEMS
W −1
y(n) =
k=0
x(n − k)h(k) 1 W
W −1
=
e−(n−k)/D
k=0 W −1
1 −n/D = e W
ek/D
k=0
y(n) = =
1 −n/D 1 − eW/D e W 1 − e1/D 1 W 1 − e−(W/D −[n−(W −1)]/D e 1 − e−1/D
Putting everything together n<0 0, 1 1 − e−(n+1)/D , 0≤n≤W −1 y(n) = W 1 − e−1/D −(W/D 1 1−e e−[n−(W −1)]/D , W ≤ n W 1 − e−1/D
Causality for LTI systems
n ∞
y(n) =
k=−∞
x(k)h(n − k) +
k=n+1
x(k)h(n − k)
System will be causal ⇔ second sum is zero for any input x(k). This will be true ⇔ h(n − k) = 0, k = n + 1, ..., ∞
2.4. CONVOLUTION ⇔ h(k) = 0, k < 0
43
An LTI system is causal ⇔ h(k) = 0, k < 0, i.e. the impulse response is a causal signal.
Stability of LTI systems
Suppose the input is bounded, i.e Mx < ∞
∞
y(n) =
k=∞
x(k)h(n − k)
∞
y(n) = 
k=−∞ ∞
x(k)h(n − k) x(k)h(n − k)
≤
k=−∞
∞
≤ Mx
k=−∞
h(k)
Therefore, it is suﬃcient for BIBO stability that the impulse response be absolutely summable. Suppose
k
h(k) < ∞. x(k)h(−k).
k
Consider y(0) =
Assuming h(k) to be realvalued, let x(k) = then y(0) =
k
1, −1,
h(−k) > 0 h(−k) < 0
h(k) < ∞
For an LTI system to be BIBO stable, it is necessary that the impulse response is absolutely summable, i.e. h(k) < ∞.
k
Examples y(n) = x(n) + y(n − 1) Find the impulse response. Let x(n) = δ(n), then h(n) = y(n). Need to ﬁnd solution to
i.e the current output depends on previous output values as well as the current and previous inputs. Sinusoids play a similar role Consider x(n) = ejωn . Assuming system is initially at rest. h(n) = y(n) = 2n u(n) 1. 1. SYSTEMS This example diﬀers from earlier ones because the system is recursive. ω is ﬁxed Denote response by yω (n) Now consider .5 Frequency Response We have seen that the impulse signal provides a simple way to characterize the response of an LTI system to any input. it is causal 2. y(0) = δ(0) + 2y(−1) = 1 + 2(0) = 1 y(1) = δ(1) + 2y(0) = 0 + 2(1) = 2 y(2) = δ(2) + 2y(1) = 0 + 2(2) = 4 Recognize general form. cannot directly write a closed form expression for y(n) Find output sequence term by term. n h(n) < ∞ ⇒ system is not BIBO stable 2. must specify initial conditions for the system (assume y(1)=0) 2.44 y(n) = δ(n) = 2y(n − 1) CHAPTER 2.
complex exponential signals are eigenfunctions of homogeneous. FREQUENCY RESPONSE 45 Since ejωm ejωn = ejω(n+m) yω (n + m) = yω (n)ejωm Let n = 0. a frequencydependent constant times the input x(n). Comments • We refer to the constant of proportionality as the frequency response of the system and denote it by H(ejω ) = yω (0) • We write it as a function of ejω rather than ω for two reasons: 1. Thus. digital frequencies are only unique modulo 2π 2. the response to a complex exponential input x(n) = ejωn is y(n) = yω (0)x(n). yω (m) = yω (0)ejωm for all m ∴ For a system which is homogeneous and timeinvariant.2.5. there is a relation between frequency response and the Z transform that this representation captures • Note that we did not use superposition. . We will need it later when we express the response to arbitrary signals in terms of the frequency response. timeinvariant systems.
y(n) = 1 H(ejω )ejωn + H(e−jω )e−jωn 2 1 [H(ejω )]∗ e−jωn + [H(e−jω )]∗ ejωn 2 [y(n)]∗ = y(n) = [y(n)]∗ ⇔ H(e−jω ) = [H(ejω ]∗ Expressed in polar coordinates A(−ω)ej∠θ(−ω) = A(ω)e−j∠θ(ω) ∴ A(−ω) = A(ω) even θ(−ω) = −θ(ω) odd Examples 1 [x(n) + x(n − 1)] 2 Let x(n) = ejωn 1 y(n) = [ejωn + ejω(n−1) ] 2 1 = [1 + e−jω ]ejωn 2 1 ∴ H(ejω ) = [1 + e−jω ] 2 Factoring out the halfangle: 1. SYSTEMS Magnitude and Phase of Frequency Response In general. 2 Assuming superposition holds.46 CHAPTER 2. H(ejω ) is complexvalued.e. i. H(ejω ) = A(ω)ejθ(ω) Thus for x(n) = ejωn y(n) = A(ω)ej[ωn+∠θ(ω)] Suppose response to any realvalued input x(n) is realvalued. y(n) = H(ejω ) = 1 −jω/2 jω/2 e [e + e−jω/2 ] 2 = e−jω/2 cos(ω/2) . 1 Let x(n) = cos(ωn) = [ejωn + e−jωn ].
cos(ω/2) ≥ 0 −ω/2 ± π. cos(ω/2) < 0 47 Note: • even symmetry of H(ejω ) • odd symmetry of ∠H(ejω ) • periodicity of H(ejω ) with period 2π • low pass characteristic 1 [x(n) − x(n − 2)] 2 Let x(n) = ejωn 2. FREQUENCY RESPONSE H(ejω ) = e−jω/2  cos(ω/2) =  cos(ω/2) ∠H(ejω ) = ∠e−jω/2 + ∠cos(ω/2) ∠H(ejω ) = −ω/2.5.2. y(n) = y(n) = 1 jωn [e − ejω(n−2) ] 2 1 = [1 − e−jω2 ]ejωn 2 1 [1 − e−jω2 ] 2 1 = je−jω [ (ejω − e−jω )] j2 H(ejω ) = .
SYSTEMS H(ejω ) = je−jω sin(ω) H(ejω ) = je−jω  sin(ω) =  sin(ω) ∠H(ejω ) = ∠j + ∠e−jω + ∠ sin(ω) ∠H(ejω ) = π/2 − ω. how do we ﬁnd y(n)? Assume desired form of output. 3. y(n) = H(ejω )ejωn H(ejω )ejωn = ejωn − ejω(n−1) − H(ejω )ejω(n−1) . sin(ω) ≥ 0 π/2 − ω ± π.48 CHAPTER 2. y(n) = x(n) − x(n − 1) − y(n − 1) Let x(n) = ejωn . i.e. sin(ω) < 0 The ﬁlter has a bandpass characteristic.
tan(ω/2) < 0 Comments • What happens at ω = π/2? • Factoring out the halfangle is possible only for a relatively restricted class of ﬁlters . FREQUENCY RESPONSE H(ejω )[1 + e−jω ]ejωn = [1 − e−jω ]ejωn H(ejω ) = = 1 − e−jω 1 + e−jω 1 je−jω/2 [ j2 (ejω/2 − e−jω/2 )] 49 1 e−jω/2 [ 2 (ejω/2 + e−jω/2 )] sin(ω/2) =j cos(ω/2) = j tan(ω/2) H(ejω ) =  tan(ω/2) ∠H(ejω ) = π/2.2. tan(ω/2) ≥ 0 π/2 ± π.5.
50 CHAPTER 2. SYSTEMS .
DiscreteTime Fourier Transorm (DTFT) 3.1 Chapter Outline In this chapter. we only use the frequency components that are periodic with this period: 51 . we will discuss: 1. ContinuousTime Fourier Transform (CTFT) 3.Chapter 3 Fourier Analysis 3. ContinuousTime Fourier Series (CTFS) 2.2 ContinuousTime Fourier Series (CTFS) Spectral representation for periodic CT signals Assume x(t) = x(t + nT ) We seek a representation for x(t) of the form x(t) = 1 T N −1 Ak xk (t) (1a) k=0 xk (t) = cos(2πfk t + θk ) (1b) Since x(t) is periodic with period T.
FOURIER ANALYSIS How do we choose amplitude Ak and the phase θk of each component? k = 0: Ak xk (t) = Ak cos(2πkt/T + θk ) Ak jθk j2πkt/T Ak −jθk −j2πkt/T = e e + e e 2 2 = Xk ej2πkt/T + X−k e−j2πkt/T k = 0: A0 x0 (t) = A0 cos(θ0 ) = X0 .52 CHAPTER 3.
we can always approximate x(t) by an expression of this form. Recall Pe = = 1 T 1 T T /2 e(t)2 dt −T /2 T /2 x(t) − x(t)2 dt −T /2 T /2 N −1 2 1 Xk ej2πkt/T − x(t) dt −T /2 T k=−(N −1) N −1 1 T /2 1 = Xk ej2/pikt/T − x(t) T −T /2 T k=−(N −1) N −1 1 × X ∗ e−j2π t/T − x∗ (t) dt T 1 = T =−(N −1) T /2 −T /2 N −1 Pe = 1 T 1 T 1 T 1 T N −1 Xk ej2πkt/T k=−N +1 t/T − X ∗ e−j2π =−N +1 N −1 − Xk ej2/pikt/T x∗ (t)− k=−N +1 x(t) 1 T X ∗ e−j2π =−N +1 t/T + x(t)x∗ (t) dt . So we consider x(t) = 1 T N −1 Xk ej2πkt/T k=−(N −1) We want to choose the coeﬃcients Xk so that x(t) is a good approximation to x(t).3.(1) even exists.2. Deﬁne e(t) = x(t) − x(t). However. CONTINUOUSTIME FOURIER SERIES (CTFS) 53 Given a ﬁxed signal x(t). we don’t know at the outset whether an exact representation of the form given by Eq.
0. and consider ∂Pe ∂Al and ∂Pe ∂Bl . N − 1 which will minimize Pe .54 CHAPTER 3. Fix l between −N + 1 and N − 1. FOURIER ANALYSIS Pe = 1 T3 1 T2 1 T2 1 T N −1 N −1 T /2 Xk X ∗ k=−N +1 =−N +1 N −1 T /2 −T /2 ej2π(k−l)t/T dt − Xk k=−N +1 N −1 −T /2 T /2 x∗ (t)ej2πkt/T dt x(t)e−j2πlt/T dt −T /2 − X∗ =−N +1 T /2 + x(t)2 dt −T /2 T /2 ej2π(k−l)t/T dt = −T /2 T ej2π(k−l)t/T j2π(k − l) )] T /2 −T /2 )] T = e[jπ(k− j2π(k − ) = T sinc(k − ) = T.. Let Xl = Al + jBl . . N −1 ∗ ∗ x(k)2 − Xk Xk − Xk Xk + Px −N +1 Pe = 1 T2 Xk unknown coeﬃcients in the Fourier Series approximation Xk ﬁxed numbers that depend on x(t) We want to choose values for the coeﬃcients Xk .. k=l k=l − e[−jπ(k− T /2 Let Xl = −T /2 x(t)e−j2πlt/T dt..k = −N + 1.
2.3. CONTINUOUSTIME FOURIER SERIES (CTFS) 1 T2 N −1 ∗ ∗ X(k)2 Xk Xk − Xk Xk + Px k=−N +1 55 Pe = 1 ∂Pe ∂Pe = 2 (Al + jBl )(Al − jBl ) − (Al + jBl )Xl∗ − (Al − jBl )Xl ∂Al T ∂Al 1 = 2 Xl∗ + Xl − Xl∗ − Xl T 2 = 2 Re X − l − Xl T ∂Pe = 0 ⇒ Re {Xl } = Re Xl ∂Al Similarly ∂Pe 1 = 2 jXl∗ − jXl − j Xl∗ + j Xl ∂Bl T 2 = j 2 Im Xl − Xl T and ∂Pe = 0 ⇒ Im {Xl } = Im Xl ∂Bl To summarize For x(t) = 1 T N −1 Xk ej2πkt/T k=−N +1 to be a minimum meansquared error approximation to the signal x(t) over the interval −T /2 ≤ t ≤ T /2. the coeﬃcients Xk must satisfy T /2 Xk = −T /2 x(t)e−j2πkt/T dt Example .
FOURIER ANALYSIS T /2 Xk = −T /2 x(t)e−j2πkt/T dt τ /2 =A −τ /2 e−j2πkt/T dt A e−j2πkt/T −j2πkt/T A = [e−jπkτ /T − ejπkτ /T ] −j2πk/T sin(πkτ /T ) = Aτ πkτ /T = Line spectrum Xk = Aτ sinc(kτ /T ) Xk  = Aτ sinc(kτ /T ) ∠Xk = 0. sinc(kτ /T ) < 0 . sinc(kτ /T ) > 0 ±π.56 CHAPTER 3.
−T /2 xN (t) − x(t)2 dt → 0 2. In the neighborhood of discontinuities. xN (t) exhibits an overshoot or undershoot with maximum amplitude equal to 9 percent of the step size no matter how large N is (Gibbs phenomena) 3. If −T /2 T /2 x(t)2 dt < ∞. Xk = 0.3 ContinuousTime Fourier Transform Spectral representation for aperiodic CT signals Consider a ﬁxed signal x(t) and let xp (t) = repT [x(t)]. lim Xk  = 0 T →∞ What happens as N increases? N −1 1 Let xN (t) = Xk ej2πkt/T T k=−N +1 T /2 T /2 1.3. X0 = −T /2 x(t)dt = T xavg 2. If −T /2 x(t)dt < ∞ and other Dirichilet conditions are met. .3. xN (t) → x(t) for all t where x(t) is continuous 3. CONTINUOUSTIME FOURIER TRANSFORM Comments Recall Xk = −T /2 T /2 57 T /2 x(t)e−j2πkt/T dt 1. k = 0 3.
FOURIER ANALYSIS What happens to the Fourier series as T increases? τ = T /2 τ = T /4 Fourier coeﬃcients T /2 Xk = −T /2 x(t)e−j2πkt/T dt Let T → ∞.58 CHAPTER 3. .
x(t) is absolutely integrable ∞ x(t)dt < ∞ −∞ and it satisﬁes Dirichlet conditions .3. CONTINUOUSTIME FOURIER TRANSFORM k/T → f Xk → X(f ) ∞ 59 X(f ) = −∞ x(t)e−j2πf t dt Fourier series expansion xp (t) = 1 T ∞ Xk ej2πkt/T −∞ Let T → ∞.3. xp (t) → x(t) k/T → f Xk → X(f ) 1 T ∞ ∞ → k=−∞ ∞ −∞ df X(f )ej2πf t df x(t) = −∞ Fourier Transform pairs Forward transform ∞ X(f ) = −∞ x(t)e−j2πf t dt Inverse Transform ∞ x(t) = −∞ X(f )ej2πf t df Suﬃcient conditions for Existence of CTFT 1. x(t) has ﬁnite energy ∞ x(t)2 dt < ∞ −∞ 2.
FOURIER ANALYSIS Transform relations • Linearity a1 x1 (t) + a2 x2 (t) ↔ a1 X1 (f ) + a2 X2 (f ) • Scaling and shifting t − t0 CT F T x ↔ aX(af )e−j2πf t0 a • Modulation x(t)ej2πf0 t CT F T CT F T ↔ X(f − f0 ) • Reciprocity X(t) ↔ x(−f ) • Parseval’s relation ∞ ∞ CT F T x(t)2 dt = −∞ −∞ X(f )2 df • Initial value ∞ x(t)dt = X(0) −∞ Comments 1. The scaling relation exhibits reciprocal spreading 3. Uniqueness of the CTFT follows from Parseval’s relation CTFT for real signals If x(t) is real. CT F T x(−t) ↔ X(−f ) 2.e. Reﬂection is a special case of scaling and shifting with a = −1 and t0 = 0. i. X(f ) = [X(−f )]∗ . ⇒X(f ) = X(−f ) and ∠Xf = −∠X(−f ) In this case.60 CHAPTER 3. the inverse transform may be written as: ∞ x(t) = 2 0 X(f ) cos[2πf t + ∠X(f )]df .
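Parseval's relation can be spot-checked on the pair rect(t) ↔ sinc(f): the time-domain energy of rect(t) is exactly 1, so the integral of sinc²(f) should also be 1. The Python sketch below truncates the frequency integral to |f| ≤ 50 (the stated tail bound is an estimate), so only approximate equality is expected:

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# Time-domain energy: integral of |rect(t)|^2 dt = 1 exactly.
# Frequency-domain energy: integral of |sinc(f)|^2 df, truncated to (-50, 50).
# The discarded tail is bounded by roughly 2/(pi^2 * 50) ~ 0.004.
df = 0.001
E_freq = sum(sinc(-50.0 + (i + 0.5) * df) ** 2 for i in range(100000)) * df

assert abs(E_freq - 1.0) < 0.01
```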
CONTINUOUSTIME FOURIER TRANSFORM Additional symmetry relations: x(t) is real and even ⇔ X(f ) is real and even.3. 61 Important transform pairs: • rect(t) ↔ sinc(f ) CT F T • δ(t) ↔ 1 CT F T (by sifting property) Proof F {δ(t)} = ∞ δ(t)e−j2πf t dt = 1 −∞ • 1 ↔ δ(f ) (by reciprocity) CT F T • ej2πf0 t CT F T ↔ δ(f − f0 ) (by modulation property) . x(t) is real and odd ⇔ X(f ) is real and odd.3.
. 2. ∆ δ∆ (t)dt = 1 −∞ ∞ δ∆ (t)2 dt = −∞ ∞ 1 ∆ ∴ lim ∆→0 δ∆ (t)2 dt = ∞ −∞ The function x(t) ≡ 1 is neither absolutely nor square integrable. Let xn (t). Suppose lim xn (t) = x(t).62 CHAPTER 3. Consider δ∆ (t) = ∞ 1 rect T t . n = 0. n→∞ If X(f ) = lim Xn (f ) exists. we call it the generalized Fourier transform n→∞ . and the integral ∞ 1e−j2πf t dt is undeﬁned. −∞ Even when neither condition for existence of the CTFT is satisﬁed. that function does not have a valid transform.. we may still be able to deﬁne a Fourier transform through a limiting process. denote a sequence of functions each of which has a valid CTFT Xn (f ). 1. . FOURIER ANALYSIS • cos(2πf0 t) ↔ CT F T 1 [δ(f − f0 ) + δ(f + f0 )] 2 Generalized Fourier transform Note that δ(t) is absolutely integrable but not square integrable.
Xn (f ) = nsinc(nf ) (by scaling) n→∞ lim xn (t) = 1 What is lim Xn (f )? n→∞ Xn (f ) → 0. f = 0 Xn (0) → ∞ ∞ What is −∞ Xn (f )df ? By the initial value relation ∞ Xn (f )df = xn (0) = 1 −∞ ∴ lim Xn (f ) = δ(f ) n→∞ and we have 1 GCT F T ↔ δ(f ) . CONTINUOUSTIME FOURIER TRANSFORM of x(t) i.e. x0 (t) ↔ X0 (f ) x1 (t) ↔ X1 (f ) x2 (t) ↔ X2 (f ) x(t) GCT F T CT F T CT F T CT F T 63 ↔ X(f ) Example Let xn (t) = rect(t/n).3.3.
Eﬃcient calculation of Fourier transforms

Suppose we wish to determine the CTFT of the following signal.

Brute force approach:
1. Evaluate the transform integral directly:
   X(f) = ∫_{−2}^{0} (−1) e^{−j2πft} dt + ∫_{0}^{2} (1) e^{−j2πft} dt
2. Collect terms and simplify.

Faster approach:
1. Write x(t) in terms of functions whose transforms are known:
   x(t) = −rect((t + 1)/2) + rect((t − 1)/2)
2. Use transform relations to determine X(f):
   X(f) = 2 sinc(2f) [ e^{−j2πf} − e^{j2πf} ] = −j4 sinc(2f) sin(2πf)
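Both routes must give the same spectrum. As a quick numerical cross-check (a Python sketch, standard library only), a Riemann sum of the defining integral can be compared with the closed form −j4 sinc(2f) sin(2πf):

```python
import math
import cmath

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def X_numeric(f, steps=20000):
    # Midpoint Riemann sum of the transform integral for
    # x(t) = -1 on (-2, 0), +1 on (0, 2), 0 elsewhere.
    dt = 4.0 / steps
    total = 0j
    for i in range(steps):
        t = -2.0 + (i + 0.5) * dt
        x = -1.0 if t < 0 else 1.0
        total += x * cmath.exp(-2j * math.pi * f * t) * dt
    return total

for f in (0.1, 0.3, 0.7):
    closed = -4j * sinc(2 * f) * math.sin(2 * math.pi * f)
    assert abs(X_numeric(f) - closed) < 1e-3
```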
. i. We can similarly express x(t) as a superposition of complex exponential signals: ∞ x(t) = −∞ X(f )ej2πf t df Let H(f ) denote the frequency response of the system. for a ﬁxed frequency f.3. Ax = 0 and X(0) = 0 2. x(t) is real and odd and X(f ) is imaginary and odd CTFT and CT LTI systems The key factor that made it possible to express the response y(t) of an LTI system to an arbitrary input x(t) in terms of the impulse response h(t) was the fact that we could write x(t) as the superposition of impulses δ(t). CONTINUOUSTIME FOURIER TRANSFORM 65 Comments 1.e.3.
Summarizing.66 CHAPTER 3. the response to x(t) is ∞ y(t) = −∞ H(f )X(f )ej2πf t dt But also ∞ y(t) = −∞ Y (f )ej2πf t dt ∴ Y (f ) = H(f )X(f ) (1) We also know that ∞ y(t) = −∞ h(t − τ )x(τ )dτ What is the relation between h(t) and H(f )? Let x(t) = δ(t) ⇒ y(t) = h(t) then X(f ) = 1 and Y (f ) = H(f ). we will drop the tilde. From Eq(1). FOURIER ANALYSIS then by homogeneity and by superposition Thus. we conclude that H(f ) = H(f ) Since the frequency response is the CTFT of the impulse response. we have two equivalent characterizations for CT LTI .
we also have the following Fourier transform relation x1 (τ )x2 (t − τ )dτ CT F T CT F T ↔ X1 (f )X2 (f ) x1 (t) ∗ x2 (t) ↔ X1 (f )X2 (f ) Product Theorem By reciprocity.3. CONTINUOUSTIME FOURIER TRANSFORM systems. t > 1/2 Find X(f ) CT F T 1 [1 + cos(2πt)]rect(t) 2 1 1 ∴ X(f ) = δ(f ) + [δ(f − 1) + δ(f + 1)] ∗ sinc(f ) 2 2 Since convolution obeys linearity. 1 [1 + cos(2πt)]. ∞ 67 y(t) = −∞ h(t − τ )x(τ )dτ Y (f ) = X(f )H(f ) Convolution Theorem Since x(t) and h(t) are arbitrary signals. we also have the following result: x1 (t)x2 (t) ↔ X1 (f ) ∗ X2 (f ) This can be very useful for calculating transforms of certain functions. t ≤ 1/2 Example x(t) = 2 0. we can write this as x(t) = .3.
and hence do not satisfy the conditions for existence of the CTFT • By applying the concept of the generalized Fourier transform.68 X(f ) = 1 2 CHAPTER 3. Identity For any signal w(t). w(t) ∗ δ(t − t0 ) = w(t − t0 ) Proof: w(t) ∗ δ(t − t0 ) = Using the identity. FOURIER ANALYSIS 1 δ(f ) ∗ sinc(f ) + [δ(f − 1) ∗ sinc(f ) + δ(f + 1) ∗ sinc(f )] 2 All three convolutions here are of the same general form. we can obtain a Fourier transform for periodic signals . X(f ) = 1 2 1 = 2 1 δ(f ) ∗ sinc(f ) + [δ(f − 1) ∗ sinc(f ) + δ(f + 1) ∗ sinc(f )] 2 1 sinc(f ) + [sinc(f − 1) + sinc(f + 1)] 2 w(τ )δ(t − τ − t0 )dτ = w(t − t0 ) (by sifting property) Fourier transform of Periodic Signals • We previously developed the Fourier series as a spectral representation for periodic CT signals • Such signals are neither square integrable nor absolutely integrable.
we may state this result in the form of a transform relation: repT [x(t)] ↔ CT F T 1 1 comb T [X(f )] T .3.3. x0 (t) = 0. The fourier series representation for x(t) is 1 x(t) = Xk ej2πkt/T T k Taking the CTFT directly. t > T /2. CONTINUOUSTIME FOURIER TRANSFORM 69 • This allows us to treat the spectral analysis of all CT signals within a single framework • We can also obtain the same result directly from the Fourier series Let x0 (t) denote one period of a signal that is periodic with period T. we obtain X(f ) = F 1 = T = 1 T 1 T Xk ej2πkt/T k Xk F ej2πkt/T k (by linearity) Xk δ(f − k/T ) k Also T /2 Xk = −T /2 ∞ x(t)e−j2πkt/T dt x0 (t)e−j2πkt/T dt −∞ = = X0 (k/T ) Thus X(f ) = = 1 T X0 (k/T )δ(f − k/T ) k 1 1 comb T [X0 (f )] T Dropping the subscript 0.e. Deﬁne x(t) = repT [x0 (t)]. i.
For our derivation, we required that x(t) = 0, |t| > T/2. However, when the generalized transform is used to derive the result, this restriction is not needed.
Example
3.4
DiscreteTime Fourier Transform
• Spectral representation for aperiodic DT signals • As in the CT case, we may derive the DTFT by starting with a spectral representation (the discretetime Fourier series) for periodic DT signals and letting the period become inﬁnitely long • Instead, we will take a shorter but less direct approach Recall the continuoustime fourier series: 1 x(t) = T
∞
Xk ej2πkt/T (1)
k=−∞
Xk = ∫_{−T/2}^{T/2} x(t) e^{−j2πkt/T} dt   (2)
Before, we interpreted (1) as a spectral representation for the CT periodic signal x(t). Now let's consider (2) to be a spectral representation for the sequence Xk, −∞ < k < ∞. We are effectively interchanging the time and frequency domains. We want to express an arbitrary signal x(n) in terms of complex exponential signals of the form e^{jωn}. Recall that ω is only unique modulo 2π. To obtain this, we make the following substitutions in (2):

Xk = ∫_{−T/2}^{T/2} x(t) e^{−j2πkt/T} dt

with  k → n,  T x(t) → X(e^{jω}),  −2πt/T → ω,  Xk → x(n),  dt → −(T/2π) dω,

which gives

x(n) = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω
This is the inverse transform. To obtain the forward transform, we make the same substitutions in (1):

x(t) = (1/T) Σ_{k=−∞}^{∞} Xk e^{j2πkt/T}

with  T x(t) → X(e^{jω}),  k → n,  Xk → x(n),  2πt/T → −ω,

which gives

X(e^{jω}) = Σ_{n=−∞}^{∞} x(n) e^{−jωn}
Putting everything together:

X(e^{jω}) = Σ_{n=−∞}^{∞} x(n) e^{−jωn}

x(n) = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω

Suﬃcient conditions for existence: Σ_n |x(n)|² < ∞, or Σ_n |x(n)| < ∞ plus Dirichlet conditions.
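The forward/inverse pair can be exercised numerically: for a short, absolutely summable sequence, a Riemann sum of the inverse-transform integral should recover each sample. A Python sketch (standard library only; the test sequence is arbitrary):

```python
import math
import cmath

x = [1.0, -0.5, 0.25, 0.125]  # short, absolutely summable test sequence

def X(w):
    # Forward DTFT: X(e^{jw}) = sum_n x(n) e^{-jwn}
    return sum(xn * cmath.exp(-1j * w * n) for n, xn in enumerate(x))

def x_inv(n, steps=20000):
    # Inverse DTFT: (1/2pi) * integral over (-pi, pi) of X(e^{jw}) e^{jwn} dw
    dw = 2 * math.pi / steps
    total = 0j
    for i in range(steps):
        w = -math.pi + (i + 0.5) * dw
        total += X(w) * cmath.exp(1j * w * n) * dw
    return total / (2 * math.pi)

for n in range(len(x)):
    assert abs(x_inv(n) - x[n]) < 1e-6
```

The midpoint rule is essentially exact here because the integrand is smooth and periodic over the full interval.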
Transform Relations

1. Linearity: a1 x1(n) + a2 x2(n) ↔ a1 X1(e^{jω}) + a2 X2(e^{jω})

2. Shifting: x(n − n0) ↔ X(e^{jω}) e^{−jωn0}

3. Modulation: x(n) e^{jω0 n} ↔ X(e^{j(ω − ω0)})

4. Parseval's relation: Σ_{n=−∞}^{∞} |x(n)|² = (1/2π) ∫_{−π}^{π} |X(e^{jω})|² dω

5. Initial value: Σ_{n=−∞}^{∞} x(n) = X(e^{j0})
Comments
1. As discussed earlier, scaling in DT involves sampling rate changes; thus, the transform relations are more complicated than in CT case 2. There is no reciprocity relation, because time domain is a discreteparameter whereas frequency domain is a continuousparameter 3. As in CT case, Parseval’s relation guarantees uniqueness of the DTFT
Some transform pairs

1. x(n) = δ(n):

X(e^{jω}) = Σ_n δ(n) e^{−jωn} = 1 (by the sifting property)
2. x(n) = 1
   • does not satisfy the existence conditions
   • Σ_n e^{−jωn} does not converge in the ordinary sense
   • cannot use reciprocity as we did for the CT case

   Consider the inverse transform x(n) = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω. What function X(e^{jω}) would yield x(n) ≡ 1? Let X(e^{jω}) = 2πδ(ω), −π ≤ ω ≤ π; then

   x(n) = (1/2π) ∫_{−π}^{π} 2πδ(ω) e^{jωn} dω = 1 (again by the sifting property)

   Note that X(e^{jω}) must be periodic with period 2π, so we have X(e^{jω}) = 2π Σ_k δ(ω − 2πk)

3. e^{jω0 n} ↔ rep_{2π}[2πδ(ω − ω0)] (by the modulation property)

4. cos(ω0 n) ↔ rep_{2π}[π δ(ω − ω0) + π δ(ω + ω0)]

5. x(n) = 1 for 0 ≤ n ≤ N − 1, and 0 else:

   X(e^{jω}) = Σ_{n=0}^{N−1} e^{−jωn} = (1 − e^{−jωN}) / (1 − e^{−jω}) = e^{−jω(N−1)/2} sin(ωN/2)/sin(ω/2)

6. y(n) = 1 for −(N − 1)/2 ≤ n ≤ (N − 1)/2 (N odd), and 0 else
y(n) = x(n + (N − 1)/2), where x(n) is the signal from Example 5, so by the shifting property

Y(e^{jω}) = X(e^{jω}) e^{jω(N−1)/2} = sin(ωN/2)/sin(ω/2)

What happens as N → ∞?

y(n) → 1, −∞ < n < ∞;  2π/N → 0;  Y(e^{jω}) → rep_{2π}[2πδ(ω)],

consistent with (1/2π) ∫_{−π}^{π} Y(e^{jω}) dω = y(0).

DTFT and DT LTI systems

Recall that in the CT case we obtained a general characterization in terms of the CTFT by expressing x(t) as a superposition of complex exponential signals and then using the frequency response H(f) to determine the response to each such signal. Here we take a different approach. For any DT LTI system, we know that the input and output are related by

y(n) = Σ_k h(n − k) x(k)
Thus

Y(e^{jω}) = Σ_n y(n) e^{−jωn}
  = Σ_n Σ_k h(n − k) x(k) e^{−jωn}
  = Σ_k x(k) Σ_n h(n − k) e^{−jωn}
  = Σ_k x(k) H(e^{jω}) e^{−jωk}   (by the shifting property)

where H(e^{jω}) is the DTFT of the impulse response h(n). Rearranging,

Y(e^{jω}) = H(e^{jω}) Σ_k x(k) e^{−jωk} = H(e^{jω}) X(e^{jω})

How is H(e^{jω}) related to the frequency response? Consider x(n) = e^{jω0 n}:

X(e^{jω}) = rep_{2π}[2πδ(ω − ω0)] = 2π Σ_k δ(ω − ω0 − 2πk)

Y(e^{jω}) = H(e^{jω}) X(e^{jω}) = 2π Σ_k H(e^{j(ω0 + 2πk)}) δ(ω − ω0 − 2πk) = H(e^{jω0}) 2π Σ_k δ(ω − ω0 − 2πk)

Y(e^{jω}) = H(e^{jω0}) X(e^{jω})   ∴ y(n) = H(e^{jω0}) x(n)
We thus have two equivalent characterizations for the response y(n) of a DT LTI system to any input x(n). we also have the following transform relation: x1 (k)x2 (n − k) k DT F T ↔ X1 (ejω )X2 (ejω ) or x1 (n) ∗ x2 (n) DT F T ↔ X1 (ejω )X2 (ejω ) Product theorem As mentioned earlier. by direct evaluation of the DTFT.3.4. we do not have a reciprocity relation for the DT case. we also have the following result: π DT F T 1 x1 (n)x2 (n) ↔ X1 (ej(ω−µ) )X2 (ejµ )dµ 2π −π Note that this is periodic convolution. DISCRETETIME FOURIER TRANSFORM 77 so H(ejω ) is also the frequency response of the DT system. However. . y(n) = k h(n − k)x(k) Y (ejω ) = H(ejω )X(ejω ) Convolution Theorem Since x(n) and h(n) are arbitrary signals.
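The convolution theorem is easy to exercise numerically: for two short sequences, the DTFT of their linear convolution should equal the product of their DTFTs at every ω. A minimal Python sketch (the sequences are arbitrary test values):

```python
import cmath

x = [1.0, 2.0, 0.5]
h = [0.5, -1.0, 0.25, 3.0]

def dtft(seq, w):
    # DTFT of a finite sequence starting at n = 0
    return sum(s * cmath.exp(-1j * w * n) for n, s in enumerate(seq))

def convolve(a, b):
    # Linear convolution of two finite sequences
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

y = convolve(x, h)
for w in (0.0, 0.7, 2.1, 3.0):
    assert abs(dtft(y, w) - dtft(x, w) * dtft(h, w)) < 1e-9
```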
T .interval for which switch is closed τ /T . Sampling Rate Conversion (Scaling in DT) 4.2 Analysis of Sampling A simple scheme for sampling a waveform is to gate it.period τ .duty cycle xs (t) = s(t)x(t) 79 . Relation between CTFT and DTFT 3.1 Chapter Outline In this chapter.Chapter 4 Sampling 4. we will discuss: 1. Analysis of Sampling 2.
80 CHAPTER 4. SAMPLING Fourier Analysis Xs (f ) = S(f ) ∗ X(f ) s(t) = repT S(f ) = 1 τ rect t τ 1 1 comb T [sinc(τ f )] T 1 = sinc(τ k/T )δ(f − k/T ) T k k Xs (f ) = 1 T sinc(τ k/T )X(f − k/T ) How do we reconstruct x(t) from xs (t)? .
we assume ideal sampling . f  ≥ 1/(2T ) Nyquist sampling rate 1 fs = = 2W T Ideal Sampling What happens as τ → 0? s(t) → m δ(t − mT ) x(mT )δ(t − mT ) = combT [x(t)] m xs (t) → Xs (f ) → 1 T X(f − k/T ) = k 1 1 rep T [X(f )] T • An ideal lowpass ﬁlter will again reconstruct x(t) • In the sequel. ANALYSIS OF SAMPLING 81 Nyquist condition Perfect reconstruction of x(t) from xs (t) is possible if X(f ) = 0.4.2.
Transform Relations

rep_T[x(t)] ↔ (1/T) comb_{1/T}[X(f)]
comb_T[x(t)] ↔ (1/T) rep_{1/T}[X(f)]

Given one relation, the other follows by reciprocity.

WhittakerKotelnikovShannon Sampling Expansion

Xr(f) = Hr(f) Xs(f),  Hr(f) = T rect(Tf)
xr(t) = hr(t) ∗ xs(t),  hr(t) = sinc(t/T)
xs(t) = Σ_m x(mT) δ(t − mT)

xr(t) = Σ_m x(mT) sinc((t − mT)/T)

xr(nT) = Σ_m x(mT) sinc(n − m) = x(nT)
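The sampling expansion can be exercised numerically. The sketch below uses an illustrative bandlimited signal, x(t) = sinc(0.8t) (bandwidth 0.4 Hz, sampled at fs = 1/T = 1 Hz, so the Nyquist condition holds), and truncates the sum to |m| ≤ 500, so only approximate agreement is expected:

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def x(t):
    # Illustrative bandlimited signal: bandwidth 0.4 Hz < fs/2 = 0.5 Hz
    return sinc(0.8 * t)

T = 1.0
def x_r(t, M=500):
    # Truncated sampling expansion: sum of x(mT) sinc((t - mT)/T)
    return sum(x(m * T) * sinc((t - m * T) / T) for m in range(-M, M + 1))

for t in (0.3, 1.7, -2.45):
    assert abs(x_r(t) - x(t)) < 1e-2
```

At the sample instants t = nT the expansion is exact term-by-term, since sinc(n − m) = δ(n − m); the off-grid points are where the interpolation is actually doing work.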
2.4. ANALYSIS OF SAMPLING 83 If Nyquist condition is satisﬁed. Zero Order Hold Reconstruction xr (t) = m x(mT )rect t − T /2 − mT T = hZO (t) ∗ xs (t) t − T /2 T hr (t) = rect Xr (f ) = HZO (f )Xs (f ) HZO (f ) = T sinc(T f )e−j2πf (T /2) . xr (t) ≡ x(t).
e. and then reconstruct with an ideal lowpass ﬁlter? . SAMPLING • One way to overcome the limitations of zero order hold reconstruction is to oversample. choose fs = 1/T >> W • This moves spectral replications farther out to where they are better attenuated by sinc(Tf) • What happens if we inadvertently undersample.84 CHAPTER 4. i.
Eﬀect of undersampling

• frequency truncation error: Xr(f) = 0, |f| ≥ fs/2
• aliasing error: frequency components at f1 = fs/2 + ∆ fold down to and mimic frequencies f2 = fs/2 − ∆

Example

• x(t) = cos[2π(5000)t]
• sample at fs = 8 kHz
• reconstruct with an ideal lowpass ﬁlter having cutoﬀ at 4 kHz

Sampling yields xr(t) = cos[2π(3000)t]. Note that x(t) and xr(t) will have the same sample values at times t = nT (T = 1/8000):

x(nT) = cos{2π(5000)n/8000}
  = cos{2π[(5000)n − (8000)n]/8000}
  = cos{−2π(3000)n/8000}
  = xr(nT)
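The sample-value identity above is exact, and a few lines of Python confirm that the 5 kHz tone and its 3 kHz alias produce identical samples at fs = 8 kHz:

```python
import math

fs, f_in, f_alias = 8000.0, 5000.0, 3000.0

# cos(2 pi 5000 n / 8000) = cos(2 pi n - 2 pi 3000 n / 8000)
#                         = cos(2 pi 3000 n / 8000)  for integer n,
# so the two tones are indistinguishable from their samples alone.
for n in range(100):
    s_in = math.cos(2 * math.pi * f_in * n / fs)
    s_alias = math.cos(2 * math.pi * f_alias * n / fs)
    assert abs(s_in - s_alias) < 1e-9
```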
Reconstruction
3 Relation between CTFT and DTFT We have already shown that Xs (fa ) = 1 1 rep T [X(fa )] T Suppose we evaluate the CTFT of xs (t) directly Xs (fa ) = F n xa (nT )δ(t − nT ) xa (nT )F {δ(t − nT )} = n Xs (fa ) = n xa (nT )e−j2πfa nT Recall Xd (ejωd ) ≡ n xd (n)e−jωd n .4. RELATION BETWEEN CTFT AND DTFT 87 4.3.
SAMPLING But xd (n) = xa (nT ). ωd Rearranging as fa = fs . Let ωd = 2πfa T = 2π(fa /fs ). we obtain 2π ωd Xd (ejωd ) = Xs fs 2π Example • CT Analysis xa (t) = cos(2πfa0 t) 1 Xa (fa ) = [δ(fa − fa0 ) + δ(fa + fa0 )] 2 Xs (fa ) = fs repfs [Xa (fa )] fs = [δ(fa − fa0 − kfs ) + δ(fa + fa0 − kfs )] 2 k .88 CHAPTER 4.
• DT Analysis

xd(n) = xa(nT) = cos(2πfa0 nT) = cos(ωd0 n),  ωd0 = 2πfa0 T = 2π(fa0/fs)

Xd(e^{jωd}) = π Σ_k [ δ(ωd − ωd0 − 2πk) + δ(ωd + ωd0 − 2πk) ]

• Relation between CT and DT Analyses

Xd(e^{jωd}) = Xs(ωd fs / 2π)

Xs(fa) = (fs/2) Σ_k [ δ(fa − fa0 − kfs) + δ(fa + fa0 − kfs) ]

Xd(e^{jωd}) = (fs/2) Σ_k [ δ(ωd fs/2π − fa0 − kfs) + δ(ωd fs/2π + fa0 − kfs) ]

Recall δ(ax + b) = (1/|a|) δ(x + b/a). Then

Xd(e^{jωd}) = (fs/2)(2π/fs) Σ_k [ δ(ωd − 2πfa0/fs − 2πk) + δ(ωd + 2πfa0/fs − 2πk) ]
  = π Σ_k [ δ(ωd − ωd0 − 2πk) + δ(ωd + ωd0 − 2πk) ]

Aliasing

• CT: fa1 = fs/2 + ∆a folds down to fa2 = fs/2 − ∆a
• DT: let ωd = 2π fa/fs and ∆d = 2π ∆a/fs; then ωd1 = π + ∆d is identical to ωd2 = π − ∆d
SAMPLING 4.4 Sampling Rate Conversion Recall deﬁnitions for DT scaling. Downsampling y(n) = x(Dn) Upsampling y(n) = x(n/D).90 CHAPTER 4. else . if n/D is an integer 0.
we have a very simple transform relation for scaling: f CT F T 1 x(at) ↔ X a a 91 What is the transform relation for DT? 1. deﬁne a sequence: 1.4. Relations will combine elements from both scaling and sampling transform relations for CT Downsampling y(n) = x(Dn) Y (ejω ) = n x(Dn)e−jωn let m = Dn ⇒ n = m/D Y (ejω ) = m x(m)e−jωm/D (m/D is an integer) To remove restrictions on m. SAMPLING RATE CONVERSION In CT. else then .4. m/D is an integer sD (m) = 0. As in deﬁnitions for downsampling and upsampling. we must treat these cases separately 2.
sD (m) = 1 D D−1 e−j2πkm/D k=0 1 1 − e−j2πm = D 1 − e−j2πm/D .92 Y (ejω ) = m CHAPTER 4. SAMPLING sD (m)x(m)ejωm/D Alternate expression for sD (m): D=2 s2 (m) = 1 [1 + (−1)m ] 2 1 = [1 + e−j2πm/2 ] 2 s2 (0) = 1 s2 (1) = 0 s2 (m + 2k) = s(m) D=3 1 s3 (m) = 3 [1 + e−j2πm/3 + e−j2π(2)m/3 ] 1 s3 (0) = 3 [1 + 1 + 1] = 1 s3 (1) = 1 [1 + e−j2π/3 + e−j2π(2)/3 ] = 0 3 s3 (2) = 1 [1 + e−j2π(2)/3 + e−j2π(4)/3 ] = 0 3 s3 (m + 3k) = s3 (m) In general.
sD(m) = 1 if m/D is an integer, and 0 else. Then

Y(e^{jω}) = Σ_m sD(m) x(m) e^{−jωm/D}
  = Σ_m (1/D) Σ_{k=0}^{D−1} e^{−j2πkm/D} x(m) e^{−jωm/D}
  = (1/D) Σ_{k=0}^{D−1} Σ_m x(m) e^{−j[(ω + 2πk)/D] m}
  = (1/D) Σ_{k=0}^{D−1} X(e^{j(ω + 2πk)/D})
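The downsampling relation can be spot-checked for D = 2 with a short test sequence (the values are arbitrary):

```python
import cmath
import math

x = [1.0, 0.5, -0.25, 2.0, 0.75, -1.5]  # finite-length test sequence
D = 2
y = x[::D]  # y(n) = x(Dn)

def dtft(seq, w):
    return sum(s * cmath.exp(-1j * w * n) for n, s in enumerate(seq))

# Check Y(e^{jw}) = (1/D) * sum over k of X(e^{j(w + 2 pi k)/D})
for w in (0.3, 1.1, 2.5):
    rhs = sum(dtft(x, (w + 2 * math.pi * k) / D) for k in range(D)) / D
    assert abs(dtft(y, w) - rhs) < 1e-9
```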
SAMPLING Decimator To prevent aliasing due to downsampling. The combination of a low pass ﬁlter followed by a downsampler is referred to as a decimator. .94 CHAPTER 4. we preﬁlter the signal to bandlimit it to highest frequency ωd = π/D.
n/D is an integer 0.4. SAMPLING RATE CONVERSION 95 With preﬁlter Without preﬁlter Upsampling y(n) = x(n/D).4. else x(n/D)e−jωn n Y (ejω ) = = n (n/D) is an integer sD (n)x(n/D)e−jωn Let m = n/D ⇒ n = mD Y (ejω ) = m sD (mD)x(m)e−jωmD but sD (mD) ≡ 1 ∴ Y (ejω ) = X(ejωD ) .
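The upsampling relation Y(e^{jω}) = X(e^{jωD}) admits a direct numerical check (here D = 3, with arbitrary test values):

```python
import cmath

x = [1.0, -0.5, 2.0, 0.25]
D = 3

# Upsample: insert D-1 zeros between samples, so y(mD) = x(m)
y = []
for s in x:
    y.append(s)
    y.extend([0.0] * (D - 1))

def dtft(seq, w):
    return sum(s * cmath.exp(-1j * w * n) for n, s in enumerate(seq))

for w in (0.2, 1.0, 2.8):
    assert abs(dtft(y, w) - dtft(x, w * D)) < 1e-9
```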
we use a low pass ﬁlter with cutoﬀ at ωd = π/D.96 CHAPTER 4. SAMPLING Interpolator To interpolate between the nonzero samples generated by upsampling. . The combination of an upsampler followed by a low pass ﬁlter is referred to as an interpolator.
4.4. SAMPLING RATE CONVERSION Time Domain Analysis of an Interpolator y(n) = k 97 v(k)h(n − k) v(k) = sD (k)x(k/D) let = k/D ⇒ k = D x( )h(n − D) 1 2π π y(n) = h(n) = H(ejω )ejωn dω −π π/D 1 = 2π = ejωn dω −π/D 1 CTFT−1 2π rect n 2π ω 2π/D t=n/2π 2π 1 2π sinc = 2π D D 1 n = sinc D D y(n) = = x( )h(n − D) x( )sinc n− D D .
Chapter 5

Z Transform (ZT)

5.1 Chapter Outline

In this chapter, we will discuss:

1. Derivation of the Z transform
2. Convergence of the ZT
3. ZT properties and pairs
4. ZT and linear, constant coeﬃcient diﬀerence equations
5. Inverse Z transform
6. General form of response of LTI systems

5.2 Derivation of the Z Transform

1. Extension of the DTFT
2. The transform exists for a larger class of signals than does the DTFT
3. Provides an important tool for characterizing the behavior of LTI systems with rational transfer functions

• Recall the suﬃcient conditions for existence of the DTFT
n x(n) < ∞ • By using impulses we were able to deﬁne the DTFT for periodic signals which satisfy neither of the above conditions Example 1 Consider x(n) = 2n u(n) It also satisﬁes neither condition for existence of the DTFT. Z TRANSFORM (ZT) x(n)2 < ∞ or 2. Deﬁne x(n) = r−n x(n). n CHAPTER 5. r>2 .100 1.
2. (2/r) = 1 or r > 2 1 − 2/r ∴ DTFT of x(n) exists X(ejω ) = n=0 ∞ x(n)e−jωn X(ejω ) = = [2(rejω )−1 ]n n=0 1 . X(z) = 1 . X(z) = X(ejω ) = n x(n)z −n For the example x(n) = 2n u(n). 2(rejω )−1  < 1 or r > 2 1 − 2(rejω )−1 Now let’s express X(ejω ) in terms of x(n) ∞ X(ejω ) = n=0 ∞ r−n x(n)e−jωn x(n)(rejω )−n n=0 = Let z = rejω and deﬁne the Z transform (ZT) of x(n) to be the DTFT of x(n) after multiplication by the convergence factor r−n u(n).5. DERIVATION OF THE Z TRANSFORM Now ∞ ∞ 101 x(n) = n=0 n=0 ∞ r−n x(n) (2/r)n n=0 = ∞ x(n) = n=0 ∞ 1 . 1 − 2z −1 z > 2 It is important to specify the region of convergence since the transform is not uniquely deﬁned without it. .
102 CHAPTER 5. z < 2 1 − 2z −1 1 . z < 2 1 − 2z −1 So we have ZT x(n) = 2n u(n) ↔ X(z) = . z > 2 1 − 2z −1 1 ZT y(n) = −2n u(−n − 1) ↔ Y (z) = . Z TRANSFORM (ZT) Example 2 Let y(n) = −2n u(−n − 1) Y (z) = n y(n)z −n −1 =− n=−∞ 2n z −n −1 Y (z) = − =− =− (z/2)−n n=−∞ ∞ (z/2)n n=1 ∞ (z/2)n + 1 n=0 Y (z) = 1 − 1 . z/2 < 1 or z < 2 1 − z/2 Y (z) = −z/2 1 − z/2 1 = .
Therefore. CONVERGENCE OF THE ZT 1. They diﬀer only in their regions of convergence 103 Example 3 w(n) = 2n . −∞ < n < ∞ w(n) = x(n) − y(n) By linearity of the ZT. −∞ < n < ∞ 5. A series un n=0 . W (z) = X(z) − Y (z) 1 1 = − =0 1 − 2z −1 1 − 2z −1 But note that X(z) and Y (z) have no common region of convergence.3. there is no ZT for w(n) = 2n .3 Convergence of the ZT ∞ 1. The two transforms have the same functional form 2.5.
is said to converge to U if, given any real ε > 0, there exists an integer M such that

| Σ_{n=0}^{N−1} un − U | < ε  for all N > M

2. Here the sequence un and the limit U may be either real or complex.

3. A suﬃcient condition for convergence of the series Σ_{n=0}^{∞} un is absolute convergence, i.e. Σ_{n=0}^{∞} |un| < ∞.

4. Note that absolute convergence is not necessary for ordinary convergence, as illustrated by the following example: the series Σ_{n=1}^{∞} (−1)^n (1/n) = −ln(2) is convergent, whereas the series Σ_{n=1}^{∞} 1/n is not.

5. However, when we discuss convergence of the Z transform, we consider only absolute convergence. Thus, we say that X(z) = Σ_n x(n) z^{−n} converges at z = z0 if

Σ_n |x(n) z0^{−n}| = Σ_n |x(n)| r0^{−n} < ∞,  where r0 = |z0|

Region of Convergence for ZT

As a consequence of restricting ourselves to absolute convergence, we have the following properties:
Proof: x(n)z −n = n n −n x(n)r0 105 < ∞ by hypothesis P2. Since X(z) converges for z = r0 .e. x(n) = 0. x(n) = 0 for some n < 0 and x(n) = 0 for some n > 0. i. n < 0. and X(z) converges for z = r1 .e. Proof: ∞ ∞ x(n)z −n = n=0 n=0 ∞ x(n)r−n −n x(n)r1 n=0 < < ∞ by hypothesis P3. then it converges for all z for which z = r0 . n > 0 and X(z) converges for z = r2 . and X(z) converges for some z = r0 . then it converges for all Z such that z = r > r1 . i. then it converges for all z such that z = r < r2 . i. If x(n) is an anticausal sequence.e. CONVERGENCE OF THE ZT P1. Proof: Let x(n) = x− (n) + x+ (n) where x− (n) is anticausal and x+ (n) is causal.3.5. Proof: 0 ∞ x(n)z −n = n=−∞ n=0 ∞ x(−n)rn n x(−n)r2 n=0 < < ∞ by hypothesis P4. If x(n) is a mixed causal sequence. X− (z) and X+ (z) must also both . If X(z) converges at z = z0 . where r0 = z0 . then there exists two positive reals r1 and r2 with r1 < r0 < r2 such that X(z) converges for all z satisfying r1 < z < r2 . If x(n) is a causal sequence. x(n) = 0.
The ROC for X(z) is just the intersection of these two ROC’s. CHAPTER 5. Z TRANSFORM (ZT) From properties 2 and 3. Example 3 x(n) = 2−n u(n) X(z) = n −1 ∞ 2−n z −n 2n z −n + n=−∞ ∞ −1 n=0 ∞ = = (2 2−n z −n (2−1 z −1 )n n=0 z)n − 1 + n=0 1 1 = −1+ 1 − 2−1 z 1 − 2−1 z −1 2−1 z < ∞ 2−1 z −1  < ∞ z < 2 z > 1/2 Combining everything −3z −1 /2 X(z) = . 1 − 5z −1 /2 + z −2 1/2 < z < 2 .106 converge for z = r0 . there exist two positive reals r1 and r2 with r1 < r0 < r2 such that X− (z) converges for z < r2 and X+ (z) converges for z > r1 .
−nan u(−n − 1) ↔ az −1 (1 − az −1 ) 2. 1 − az −1 ZT z > a z < a z > a z < a 3. 1 − az −1 2. Shifting x(n − n0 ) ↔ z −n0 X(z) 3. Relation to DTFT If XZT (z) converges for z = 1. .5. an u(n) ↔ ZT ZT all z 1 . 2. Convolution x(n) ∗ y(n) ↔ X(z)Y (z) 6. ZT PROPERTIES AND PAIRS 107 5. δ(n) ↔ 1. Linearity a1 x1 (n) + a2 x2 (n) ↔ a1 X1 (z) + a2 X2 (z) 2. Multiplication by time index nx(n) ↔ −z 5. Modulation n z0 x(n) ↔ X(z/z0 ) ZT ZT ZT 4. nan u(n) ↔ ZT 1 . then the DTFT of x(n) exists and XDT F T (ejω ) = XZT (z)z=ejω ZT ZT d [X(z)] dz Important Transform Pairs 1. az −1 (1 − az −1 ) ZT 5. −an u(−n − 1) ↔ 4.4.4 ZT Properties and Pairs Transform Relations 1.
Z TRANSFORM (ZT) 5. constant coeﬃcient diﬀerence equations M N y(n) = k=0 ak x(n − k) − =1 h y(n − ) • nonrecursive N =0 always ﬁnite impulse response (FIR) • recursive N >0 usually inﬁnite impulse response (IIR) 3. we obtain a characterization for all LTI systems • impulse response: y(n) = h(n) ∗ x(n) • transfer function: Y (z) = H(z)X(z) 2. An important class of LTI systems are those characterized by linear.108 CHAPTER 5.5 ZT.Linear Constant Coeﬃcient Diﬀerence Equations 1. Take ZT of both sides of the equation M N y(n) = k=0 M ak x(n − k) − =1 N b y(n − ) b z − Y (z) =1 Y (z) = k=0 ak z −k X(z) − M ak z −k H(z) = Y (z) = X(z) k=0 N 1+ =1 b z− . From the convolution property.
Roots of the numerator and denominator polynomials: • zeros z1 .5. M ≥ N Multiply numerator and denominator by z M .. M z N −M H(z) = zN + =1 k=0 N ak z M −k b zN − 6. ZT.. zM • poles p1 . . have M − N additional poles at z = ∞ • If M < N .LINEAR CONSTANT COEFFICIENT DIFFERENCE EQUATIONS109 4... . pN • If M ≥ N . M ak z M −k H(z) = k=0 N z M −N z N + =1 b zN − 5.. M < N Multiply numerator and denominator by z N . The poles and zeros play an important role in determining system behavior Example 1 y(n) = x(n) + x(n − 1) − y(n − 2) 2 . the numerator and denominator polynomials may always be factored • M ≥N M (z − zK ) H(z) = • M <N H(z) = z N −M N k=1 z M −N M k=1 (z N =1 (z −p ) − zk ) (z − p ) =1 7..5. have N − M additional zeros at z = 0 8. By the fundamental theorem of algebra.
Y(z) = X(z) + z^{−1} X(z) − (1/2) z^{−2} Y(z)

H(z) = Y(z)/X(z) = (1 + z^{−1}) / (1 + (1/2) z^{−2})
  = z(z + 1) / (z² + 1/2)
  = z(z + 1) / [ (z − j/√2)(z + j/√2) ]

zeros: z1 = 0, z2 = −1
poles: p1 = j/√2, p2 = −j/√2

Eﬀect of Poles and Zeros on Frequency Response

Frequency response H(e^{jω}):

h(n) ↔(DTFT) H_DTFT(e^{jω})
h(n) ↔(ZT) H_ZT(z)

Let z = e^{jω}.
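Since the poles at ±j/√2 lie inside the unit circle, the impulse response decays and its DTFT should match H(z) evaluated on the unit circle. The Python sketch below generates h(n) by running the difference equation (truncation to 200 terms is an assumption; the tail is negligible here) and compares against the closed form:

```python
import cmath

# Impulse response of y(n) = x(n) + x(n-1) - (1/2) y(n-2), by recursion
N = 200  # pole magnitude is 1/sqrt(2), so h(n) decays like (1/2)^(n/2)
h = [0.0] * N
for n in range(N):
    xn = 1.0 if n == 0 else 0.0
    xn1 = 1.0 if n == 1 else 0.0
    yn2 = h[n - 2] if n >= 2 else 0.0
    h[n] = xn + xn1 - 0.5 * yn2

def H_closed(w):
    # H(e^{jw}) = (1 + e^{-jw}) / (1 + (1/2) e^{-2jw})
    z1 = cmath.exp(-1j * w)
    return (1 + z1) / (1 + 0.5 * z1 * z1)

for w in (0.0, 0.5, 1.5, 3.0):
    H_sum = sum(h[n] * cmath.exp(-1j * w * n) for n in range(N))
    assert abs(H_sum - H_closed(w)) < 1e-8
```

At ω = 0 the closed form gives |H| = 2/(3/2) = 4/3 ≈ 1.33, matching the pole-zero vector evaluation worked out below.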
5.LINEAR CONSTANT COEFFICIENT DIFFERENCE EQUATIONS111 ⇒ H(ejω ) = HZT (ejω ) Assume M < N .5. ZT. M z N −M (z − zk ) k=1 H(z) = N (z − p ) =1 M ejω(N −M ) (ejω − zk ) k=1 H(ejω ) = N (ejω − p ) =1 M ejω − zk  H(ejω ) = k=1 N ejω − p  =1 M N ∠H(ejω ) = ω(N − M ) + k=1 ∠(ejω − zk ) − =1 ∠(ejω − p ) Example z(z + 1) √ √ (z − j/ 2)(z + j/ 2) ejω + 1 √ √ H(ejω ) = jω − j/ 2ejω + j/ 2 e √ √ ∠H(ejω ) = ω + ∠(ejω + 1) − ∠(ejω − j/ 2) − ∠(ejω + j/ 2) Contribution from a single pole H(z) = .
85 . Z TRANSFORM (ZT) H(ej0 ) = a = 2 b = 1 c = d = 1+ 3 2 ab cd 1 2 = 3 2 H(ej0 ) = 4 = 1.33 3 H(ej0 ) = ∠a + ∠b − ∠c − ∠d ∠a = 0 ∠b = 0 1 ∠c = arctan √2 ∠d = −∠c ∠H(ej0 ) = 0 ω = π/4 H(ejπ/4 ) = a = b = 1 1+ ab cd 1 √ 2 2 + 1 2 = 1.112 ω=0 CHAPTER 5.
5.1◦ ω = π/2 H(ejπ/2 ) = √ a = 2 b = 1 1 c = 1 − √2 d = 1 + 1 √ 2 ab cd H(ejπ/2 ) = 2.65 H(ejπ/4 ) = ∠a + ∠b − ∠c − ∠d ∠a = 22.83 H(ejπ/4 ) = ∠a + ∠b − ∠c − ∠d ∠a = 45◦ ∠b = 90◦ ∠c = 90◦ ∠d = 90◦ ∠H(ejπ/2 ) = −45◦ General Rules 1.5◦ ∠b = 45◦ ∠c = 0 ∠d = 63.5. A pole near the unit circle will cause the frequency response to increase in the neighborhood of that pole .4◦ ∠H(ejπ/4 ) = 4. ZT.LINEAR CONSTANT COEFFICIENT DIFFERENCE EQUATIONS113 c = d = 1 √ 2 2 √ 2 2 + 1 √ 2 2 = 5 2 H(ejπ/4 ) = 1.
2. A zero near the unit circle will cause the frequency response to decrease in the neighborhood of that zero

5.6 Inverse Z Transform

1. Formally, the inverse ZT may be written as

x(n) = (1/j2π) ∮_C X(z) z^{n−1} dz

where the integration is performed in a counterclockwise direction around a closed contour C in the region of convergence of X(z) and encircling the origin.

2. If X(z) is a rational function of z, i.e. a ratio of polynomials, it is not necessary to evaluate the integral.

3. Instead, we use a partial fraction expansion to express X(z) as a sum of simple terms for which the inverse transform may be recognized by inspection.

4. The ROC plays a critical role in this process.

We will illustrate the method via a series of examples.

Example 1

The signal x(n) = (1/3)^n u(n) is input to a DT LTI system described by

y(n) = x(n) − x(n − 1) + (1/2) y(n − 1)

Find the output y(n). What are our options?

• direct substitution: Assume y(−1) = 0.

y(0) = x(0) − x(−1) + (1/2)y(−1) = 1 − 0 + (1/2)(0) = 1.000
y(1) = x(1) − x(0) + (1/2)y(0) = 1/3 − 1 + (1/2)(1.000) = −0.167
y(2) = x(2) − x(1) + (1/2)y(1) = 1/9 − 1/3 + (1/2)(−0.167) = −0.306
y(3) = x(3) − x(2) + (1/2)y(2) = 1/27 − 1/9 + (1/2)(−0.306) = −0.227
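Direct substitution is easy to automate, and the values above can be checked against the closed form y(n) = 4(1/3)^n − 3(1/2)^n obtained later in this section via the ZT. A small Python sketch:

```python
# Run y(n) = x(n) - x(n-1) + (1/2) y(n-1) with x(n) = (1/3)^n u(n),
# comparing each step against the closed form 4*(1/3)^n - 3*(1/2)^n.
def x(n):
    return (1.0 / 3.0) ** n if n >= 0 else 0.0

y_prev = 0.0  # initial condition y(-1) = 0
for n in range(20):
    y = x(n) - x(n - 1) + 0.5 * y_prev
    closed = 4 * (1.0 / 3.0) ** n - 3 * (1.0 / 2.0) ** n
    assert abs(y - closed) < 1e-12
    y_prev = y
```

For instance, n = 1 gives 4/3 − 3/2 = −1/6 ≈ −0.167, agreeing with the hand computation.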
6. h(n) = δ(n) − δ(n − 1) + (1/2)h(n − 1) h(0) = 1 − 0 + (1/2)(0) = 1 h(1) = 0 − 1 + (1/2)(1) = −1/2 h(2) = 0 − 0 + (1/2)(−1/2) = −1/4 h(3) = 0 − 0 + (1/2)(−1/4) = −1/8 Recognize h(n) = δ(n) − (1/2)n u(n − 1). ∞ y(n) = k=−∞ δ(n − k) − (1/2)(n−k) u(n − k − 1) (1/3)k u(k) n−1 = (1/3)n u(n) − (1/2)n k=0 (2/3)k u(n − 1) = (1/3) u(n) − (1/2) − (2/3)n u(n) 1 − 2/3 = 4(1/3)n u(n) − 3(1/2)n u(n) n n1 • DTFT x(n) = (1/3)n u(n) X(ejω ) = 1 1 − 1 e−jω 3 h(n) = δ(n) − (1/2)n u(n − 1) 1 −1 1 − 1 e−jω 2 H(ejω ) = 1 − = 1 − e−jω 1 − 1 e−jω 2 . INVERSE Z TRANSFORM • convolution ∞ 115 y(n) = k=−∞ h(n − k)x(k) Find impulse response by direct substitution.5.
Y (z) = H(z)X(z) = (1 − 1 −1 )(1 2z 1 − z −1 − 1 z −1 ) 3 ROC[Y (z)] = ROC[H(z)] ∩ ROC[X(z)] = z : z > 1/2 . ⇒ H(z) converges for z > 1/2. h(n) is a causal signal. Z TRANSFORM (ZT) Y (ejω ) = H(ejω )X(ejω ) = = π −π 1 − e−jω (1 − 1−e 1 1 − 5 e−jω + 6 e−jω2 6 1 −jω )(1 − 1 e−jω ) 2e 3 −jω y(n) = • ZT 1 2π 1 − e−jω ejωn dω 1 1 − 5 e−jω + 6 e−jω2 6 x(n) = (1/3)n u(n) X(z) = 1 .116 CHAPTER 5. 1 − 1 z −1 3 z > 1 3 y(n) = x(n) − x(n − 1) + (1/2)y(n − 1) Y (z) = X(z) − z −1 X(z) + (1/2)z −1 Y (z) H(z) = 1 − z −1 Y (z) = X(z) 1 − 1 z −1 2 What is the region of convergence? Since system is causal.
z < 2 or z > 1 − 2z 4 1 Y2 (z) = 1 −1 .6.5. z < 3 or z > 1 − 3z Y1 (z) = 1 2 1 3 . we multiply both sides by 1 − 2 z −1 to obtain 1 1 − z −1 = A1 1 − 3 z −1 + A2 1 − 1 z −1 2 1 = A1 + A2 −1 = so (1 − 1 −1 )(1 2z A1 = −3 − 1 A2 2 A2 = 4 − 1 A1 3 −3 4 1 − z −1 = + check − 1 z −1 ) 1 − 1 z −1 1 − 1 z −1 3 2 3 1 1 − z −1 = −3 1 − 3 z −1 + 4 1 − 1 z −1 2 Now y(n) = y1 (n) + y2 (n) Possible ROC’s where −3 1 1 −1 . INVERSE Z TRANSFORM 117 Partial Fraction Expansion (PFE) (Two distinct poles) 1− 1 −1 2z 1 − z −1 A1 A2 = + 1 − 1 z −1 1 − 1 z −1 1 − 1 z −1 2 3 3 1 − 1 z −1 3 1 To solve for A1 and A2 .
Z TRANSFORM (ZT) ROC[Y (z)] = ROC[Y1 (z)] ∩ ROC[Y2 (z)] = z : z > 1 2 Recall transform pairs: an u(n) ↔ ZT 1 z > a 1 − az −1 ZT −an u(−n − 1) ↔ ZT −1 −3 → 1 −1 1− 2 z ZT 4 → 1 1− 3 z −1 −1 1 z < a 1 − az −1 −3 4 1 n 2 u(n) 1 n 3 u(n) and y(n) = −3 1 2 n u(n) + 4 1 3 n u(n) Consider again Y (z) = 1 − z −1 1− 1 − 1 z −1 3 −3 4 = + 1 − 1 z −1 1 − 1 z −1 2 3 1 −1 2z There are 3 possible ROC’s for a signal with this ZT.118 and CHAPTER 5. .
INVERSE Z TRANSFORM Case II: 1 1 < z < 3 2 Possible ROC’s −3 1 . z < or z > Y1 (z) = 2 1 − 1 z −1 2 4 1 Y2 (z) = .5.6. z < or z > 3 1 − 1 z −1 3 and ROC[Y (z)] = ROC[Y1 (z)] ∩ ROC[Y2 (z)] ∴ ROC[Y1 (z)] = z : z < ROC[Y2 (z)] = z : z > −3 1 − 1 z −1 2 4 1 − 1 z −1 3 y(n) = 3 Case III: 1 z < 3 ROC[Y1 (z)] = ROC[Y2 (z)] = −3 1 − 1 z −1 2 4 1 − 1 z −1 3 y(n) = 3 ZT −1 ZT −1 1 2 1 3 n 119 1 2 1 3 → −3 → 4 n 1 2 1 3 n u(−n − 1) u(n) 1 3 n ZT −1 1 2 u(−n − 1) + 4 u(n) z : z < z : z < 1 2 1 3 n 1 2 1 3 → 3 → −4 n u(−n − 1) n ZT −1 u(−n − 1) 1 3 n 1 2 u(−n − 1) − 4 u(−n − 1) .
Z TRANSFORM (ZT) Summary of Possible ROC’s and Signals Y (z) = 1 − z −1 1 1 − z −1 2 1 1 − z −1 3 −3 1 n 2 1 3 ROC < z < z < z < 1 3 1 2 1 2 Signal 1 n u(n) + 4 3 u(n) n 1 n 3 2 u(−n − 1) + 4 1 u(n) 3 n n 1 3 2 u(−n − 1) − 4 1 u(−n − 1) 3 Residue Method for Evaluating Coeﬃcients of PFE Y (z) = 1 −1 2z 1− 1 − z −1 A1 A2 = + 1 − 1 z −1 1 − 1 z −1 1 − 1 z −1 2 3 3 A1 = = 1 1 − z −1 Y (z) 2 1 − z −1 1 − 1 z −1 3 z= 1 2 z= 1 2 1−2 = 1 − 2/3 = −3 1 1 − z −1 Y (z) 3 1 − z −1 1 − 1 z −1 2 A2 = = = z= 1 3 z= 1 3 1−3 1 − 3/2 =4 .120 CHAPTER 5.
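The residue evaluations above reduce to evaluating simple rational functions of z^{−1}, which is easy to confirm numerically (a Python sketch; `zinv` stands for z^{−1}):

```python
# PFE coefficients of Y(z) = (1 - z^{-1}) / [(1 - z^{-1}/2)(1 - z^{-1}/3)]:
# multiply Y(z) by the factor for one pole, then evaluate at that pole.
def A1(zinv):  # (1 - z^{-1}/2) Y(z), to be evaluated at z = 1/2 (zinv = 2)
    return (1 - zinv) / (1 - zinv / 3.0)

def A2(zinv):  # (1 - z^{-1}/3) Y(z), to be evaluated at z = 1/3 (zinv = 3)
    return (1 - zinv) / (1 - zinv / 2.0)

assert abs(A1(2.0) - (-3.0)) < 1e-12
assert abs(A2(3.0) - 4.0) < 1e-12

# Cross-check: the PFE reproduces Y(z) at an arbitrary point in the ROC.
zinv = 0.5  # z = 2, inside |z| > 1/2
lhs = (1 - zinv) / ((1 - zinv / 2.0) * (1 - zinv / 3.0))
rhs = -3.0 / (1 - zinv / 2.0) + 4.0 / (1 - zinv / 3.0)
assert abs(lhs - rhs) < 1e-12
```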
6. INVERSE Z TRANSFORM 121 Example 2 (complex conjugate poles) Y (z) = 1 + 1 z −1 2 (1 − jz −1 ) (1 + jz −1 ) A1 A2 X(z) = + 1 − jz −1 1 + jz −1 A1 = = 1 + 1 z −1 2 1 + jz −1 1 2 1 1− j 2 z=j A2 = 1 + 1 z −1 2 1 − jz −1 1 1+ j 2 z=−j 1 = 2 check 1 2 j 1 1 − 1j 1+ 2 2 + 2 = 1 − jz −1 1 + jz −1 j 1 + jz −1 + 1 1 + 2 1 − jz −1 2 (1 − jz −1 ) (1 + jz −1 ) 1 −1 1 + 2z = (1 − jz −1 )(1 + jz −1 ) 1 2 1 − 1j 2 • Note that A1 = A∗ 2 • This is necessary for y(n) to be realvalued • Use it to save computation .5.
Example 3 (poles with multiplicity greater than 1)

Y(z) = (1 + z^{-1}) / [(1 - (1/2)z^{-1})^2 (1 - z^{-1})], 1/2 < |z| < 1

Recall: n a^n u(n) <--ZT--> a z^{-1}/(1 - a z^{-1})^2, |z| > |a|

Try a two-term expansion:

Y(z) = A1/(1 - (1/2)z^{-1})^2 + A2/(1 - z^{-1})

A1 = (1 + z^{-1})/(1 - z^{-1})|_{z=1/2} = (1 + 2)/(1 - 2) = -3
A2 = (1 + z^{-1})/(1 - (1/2)z^{-1})^2|_{z=1} = 2/(1/2)^2 = 8

Check:

[-3(1 - z^{-1}) + 8(1 - z^{-1} + (1/4)z^{-2})] / [(1 - (1/2)z^{-1})^2 (1 - z^{-1})]
= (5 - 5z^{-1} + 2z^{-2}) / [(1 - (1/2)z^{-1})^2 (1 - z^{-1})] ≠ Y(z)

so two terms are not enough. The general form of the terms in the PFE for a pole p with multiplicity m is

A_{ℓ,1}/(1 - p z^{-1}) + A_{ℓ,2}/(1 - p z^{-1})^2 + ... + A_{ℓ,m}/(1 - p z^{-1})^m

Now we have

(1 + z^{-1}) / [(1 - (1/2)z^{-1})^2 (1 - z^{-1})] = A_{1,1}/(1 - (1/2)z^{-1}) + A_{1,2}/(1 - (1/2)z^{-1})^2 + A2/(1 - z^{-1})

Multiplying both sides by (1 - (1/2)z^{-1})^2 and letting z = 1/2 gives A_{1,2} = -3, and the residue at z = 1 gives A2 = 8, as before. To solve for A_{1,1}, multiply both sides by (1 - (1/2)z^{-1})^2, differentiate with respect to z^{-1}, and evaluate at z^{-1} = 2:

d/dz^{-1} [(1 + z^{-1})/(1 - z^{-1})] = 2/(1 - z^{-1})^2 = -(1/2) A_{1,1} at z^{-1} = 2

A_{1,1} = -2 · 2/(1 - 2)^2 = -4

Example 4 (numerator degree ≥ denominator degree)

Y(z) = (1 + z^{-2}) / (1 - (1/2)z^{-1} - (1/2)z^{-2})

Reduce the degree of the numerator by long division:

(1 + z^{-2}) / (1 - (1/2)z^{-1} - (1/2)z^{-2}) = -2 + (3 - z^{-1}) / (1 - (1/2)z^{-1} - (1/2)z^{-2})

In general, for numerator degree M ≥ denominator degree N, we will have a polynomial of degree M - N:

Σ_{k=0}^{M-N} B_k z^{-k} --ZT^{-1}--> Σ_{k=0}^{M-N} B_k δ(n - k)
5.7 General Form of Response of LTI Systems

If both X(z) and H(z) are rational,

Y(z) = H(z) X(z) = [P_H(z)/Q_H(z)] [P_X(z)/Q_X(z)]

with

H(z) = P_H(z) / ∏_{ℓ=1}^{N_H} (1 - p_ℓ^H z^{-1}),   X(z) = P_X(z) / ∏_{ℓ=1}^{N_X} (1 - p_ℓ^X z^{-1})

so that

Y(z) = P_Y(z) / ∏_{ℓ=1}^{N_Y} (1 - p_ℓ^Y z^{-1}),   P_Y(z) = P_H(z) P_X(z),   N_Y = N_H + N_X

where {p_ℓ^Y} is the combined set of poles {p_ℓ^H} and {p_ℓ^X}. Accounting for poles with multiplicity > 1 (and dropping the superscript/subscript Y),

Y(z) = P(z) / ∏_{ℓ=1}^{D} (1 - p_ℓ z^{-1})^{m_ℓ}

where D is the number of distinct poles and N = Σ_{ℓ=1}^{D} m_ℓ. For M < N,

Y(z) = Σ_{ℓ=1}^{D} Σ_{k=1}^{m_ℓ} A_{ℓk} / (1 - p_ℓ z^{-1})^k
• Each term under the summation will give rise to a term in the output y(n)
• It will be causal or anticausal depending on the location of the pole relative to the ROC
• Poles between the origin and the ROC result in causal terms
• Poles separated from the origin by the ROC result in anticausal terms
• For simplicity, consider only causal terms in what follows

Real pole with multiplicity 1:

A/(1 - p z^{-1}) --ZT^{-1}--> A p^n u(n)
Complex conjugate pair of poles with multiplicity 1:

A/(1 - p z^{-1}) + A*/(1 - p* z^{-1}) --ZT^{-1}--> A p^n u(n) + A* (p*)^n u(n)

[A p^n + A* (p*)^n] u(n) = [|A| e^{j∠A} (|p| e^{j∠p})^n + |A| e^{-j∠A} (|p| e^{-j∠p})^n] u(n)
                         = 2|A| |p|^n cos(n∠p + ∠A) u(n)

• sinusoid with amplitude 2|A||p|^n:
  1. grows exponentially if |p| > 1
  2. constant if |p| = 1
  3. decays exponentially if |p| < 1
• digital frequency ω_d = ∠p radians/sample
• phase ∠A

[Figure: examples of the three cases.]
Real pole with multiplicity 2:

A/(1 - p z^{-1})^2 --ZT^{-1}--> ?

Recall n a^n u(n) <--ZT--> a z^{-1}/(1 - a z^{-1})^2. Writing

A/(1 - p z^{-1})^2 = (A/p) z · p z^{-1}/(1 - p z^{-1})^2, |z| > |p|

--ZT^{-1}--> (A/p)(n+1) p^{n+1} u(n+1) = A(n+1) p^n u(n)

(the u(n+1) may be replaced by u(n) since the n = -1 term is zero).

[Figure: example with A = 1.]

• Similar results are obtained with complex conjugate poles that have multiplicity 2
• In general, repeating a pole with multiplicity m results in multiplication of the signal obtained with multiplicity 1 by a polynomial in n with degree m - 1
Example:

A/(1 - p z^{-1})^3 --ZT^{-1}--> (A/2)(n+1)(n+2) p^n u(n)

Stability Considerations

• A system is BIBO stable if every bounded input produces a bounded output
• A DT LTI system is BIBO stable ⇔ Σ_n |h(n)| < ∞
• Σ_n |h(n)| < ∞ ⇔ H(z) converges on the unit circle
• A causal DT LTI system is BIBO stable ⇔ all poles of H(z) are strictly inside the unit circle

Stability and the general form of the response:

1. real pole with multiplicity 1: A p^n u(n) is bounded if |p| ≤ 1
2. complex conjugate pair of poles with multiplicity 1: 2|A||p|^n cos(n∠p + ∠A) u(n) is bounded if |p| ≤ 1
3. real pole with multiplicity 2: A(n+1) p^n u(n) is bounded if |p| < 1
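The causal PFE terms above can be cross-checked numerically: long division of Y(z) by its denominator generates the causal sequence term by term, which should match the closed form from Case I of the running example. A minimal sketch (the helper name `power_series` is ours, not from the text):

```python
def power_series(b, a, n_terms):
    """First n_terms values of the causal inverse of Y(z) = B(z^-1)/A(z^-1),
    computed by polynomial long division. Assumes a[0] = 1."""
    y = []
    b = list(b) + [0.0] * n_terms          # pad numerator with zeros
    for n in range(n_terms):
        val = b[n] - sum(a[k] * y[n - k] for k in range(1, min(n, len(a) - 1) + 1))
        y.append(val)
    return y

# Y(z) = (1 - z^-1) / [(1 - 0.5 z^-1)(1 - (1/3) z^-1)]
# denominator multiplied out: 1 - (5/6) z^-1 + (1/6) z^-2
a = [1.0, -5.0 / 6.0, 1.0 / 6.0]
b = [1.0, -1.0]
y = power_series(b, a, 10)
closed = [-3 * 0.5**n + 4 * (1 / 3) ** n for n in range(10)]   # Case I result
err = max(abs(u - v) for u, v in zip(y, closed))
print(err)
```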
Chapter 6

Discrete Fourier Transform (DFT)

6.1 Chapter Outline

In this chapter, we will discuss:

1. Derivation of the DFT
2. DFT Properties and Pairs
3. Spectral Analysis via the DFT
4. Fast Fourier Transform (FFT) Algorithm
5. Periodic Convolution

6.2 Derivation of the DFT

Summary of Spectral Representations

Transform                    Signal Type     Frequency Domain
CT Fourier Series            CT, Periodic    Discrete
CT Fourier Transform         CT, Aperiodic   Continuous
DT Fourier Transform         DT, Aperiodic   Continuous
Discrete Fourier Transform   DT, Periodic    Discrete
Computation of the DTFT

X(e^{jω}) = Σ_{n=-∞}^{∞} x(n) e^{-jωn}

In order to compute X(e^{jω}), we must do two things:

1. Truncate the summation so that it ranges over finite limits
2. Discretize ω = ω_k

Suppose we choose

1. x̃(n) = x(n) for 0 ≤ n ≤ N-1, and 0 else
2. ω_k = 2πk/N, k = 0, ..., N-1

Then we obtain

X(k) ≡ Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N}

This is the forward DFT. To obtain the inverse DFT, we could discretize the inverse DTFT:

x(n) = (1/2π) ∫_0^{2π} X(e^{jω}) e^{jωn} dω

with

ω → ω_k = 2πk/N,  dω → 2π/N,  ∫_0^{2π} → Σ_{k=0}^{N-1},  X(e^{jω}) → X(k),  e^{jωn} → e^{j2πkn/N}

This results in

x(n) ≈ (1/N) Σ_{k=0}^{N-1} X(k) e^{j2πkn/N}

• Note the approximation sign:
  1. we truncated x(n), so X(k) ≈ X(e^{j2πk/N})
  2. we approximated the integral by a sum
Alternate approach

• Use orthogonality of complex exponential signals
• Consider again X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N}
• For fixed 0 ≤ m ≤ N-1, multiply both sides by e^{j2πkm/N} and sum over k:

Σ_{k=0}^{N-1} X(k) e^{j2πkm/N} = Σ_{k=0}^{N-1} Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N} e^{j2πkm/N}
                               = Σ_{n=0}^{N-1} x(n) Σ_{k=0}^{N-1} e^{-j2πk(n-m)/N}
                               = Σ_{n=0}^{N-1} x(n) [1 - e^{-j2π(n-m)}] / [1 - e^{-j2π(n-m)/N}]
                               = Σ_{n=0}^{N-1} x(n) N δ(n - m) = N x(m)

(the geometric sum equals N when n = m and 0 otherwise, for 0 ≤ n, m ≤ N-1). Therefore

x(m) = (1/N) Σ_{k=0}^{N-1} X(k) e^{j2πkm/N}

Summarizing (where we have dropped the tildes):

X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N}
x(n) = (1/N) Σ_{k=0}^{N-1} X(k) e^{j2πkn/N}
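The forward/inverse pair above can be implemented directly from the definitions; the orthogonality argument guarantees that the inverse DFT recovers the original samples exactly (up to round-off). A minimal sketch:

```python
import cmath

def dft(x):
    """Forward DFT, straight from the definition (O(N^2))."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT: (1/N) sum of X(k) e^{+j 2 pi k n / N}."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [1.0, 2.0, 0.0, -1.0, 0.5, 0.0, 3.0, -2.0]
x_rec = idft(dft(x))
err = max(abs(u - v) for u, v in zip(x, x_rec))
print(err)   # round-off level only
```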
6.3 DFT Properties and Pairs

1. Linearity: a1 x1(n) + a2 x2(n) <--DFT--> a1 X1(k) + a2 X2(k)
2. Shifting: x(n - n0) <--DFT--> X(k) e^{-j2πk n0/N}
3. Modulation: x(n) e^{j2πk0 n/N} <--DFT--> X(k - k0)
4. Reciprocity: X(n) <--DFT--> N x(-k)
5. Parseval's relation: Σ_{n=0}^{N-1} |x(n)|^2 = (1/N) Σ_{k=0}^{N-1} |X(k)|^2
6. Initial value: Σ_{n=0}^{N-1} x(n) = X(0)
7. Periodicity: x(n + mN) = x(n) and X(k + mN) = X(k) for all integers m
8. Relation to DTFT of a finite-length sequence: let x0(n) = 0 except for 0 ≤ n ≤ N-1, with x0(n) <--DTFT--> X0(e^{jω}). Define x(n) = Σ_m x0(n + mN), with x(n) <--DFT--> X(k). Then X(k) = X0(e^{j2πk/N})
Transform Pairs

1. x(n) = δ(n), 0 ≤ n ≤ N-1:
   X(k) = 1, 0 ≤ k ≤ N-1 (by relation to DTFT)
2. x(n) = 1, 0 ≤ n ≤ N-1:
   X(k) = N δ(k), 0 ≤ k ≤ N-1 (by reciprocity)
3. x(n) = e^{j2πk0 n/N}, 0 ≤ n ≤ N-1:
   X(k) = N δ(k - k0), 0 ≤ k ≤ N-1 (by modulation property)
4. x(n) = cos(2πk0 n/N), 0 ≤ n ≤ N-1:
   X(k) = (N/2)[δ(k - k0) + δ(k - (N - k0))], 0 ≤ k ≤ N-1
5. x(n) = 1 for 0 ≤ n ≤ M-1, 0 else (M ≤ N):
   X(k) = e^{-j(2πk/N)(M-1)/2} sin[2πkM/(2N)] / sin[2πk/(2N)] (by relation to DTFT)
6.4 Spectral Analysis Via the DFT

An important application of the DFT is to numerically determine the spectral content of signals. However, the extent to which this is possible is limited by two factors:

1. Truncation of the signal - causes leakage
2. Frequency domain sampling - causes the picket fence effect

Truncation of the Signal

Let x(n) = cos(ω0 n). Recall that

X(e^{jω}) = rep_{2π}[π δ(ω - ω0) + π δ(ω + ω0)]

Also, let w(n) = 1 for 0 ≤ n ≤ N-1 and 0 else, and let x_t(n) = x(n) w(n) ("t" for truncated). Recall

W(e^{jω}) = e^{-jω(N-1)/2} sin(ωN/2)/sin(ω/2)

Note that W(e^{jω}) = W(e^{j(ω+2πm)}) for all integers m.
By the product theorem,

X_t(e^{jω}) = (1/2π) ∫_{-π}^{π} W(e^{j(ω-μ)}) X(e^{jμ}) dμ
            = (1/2π) ∫_{-π}^{π} W(e^{j(ω-μ)}) rep_{2π}[π δ(μ - ω0) + π δ(μ + ω0)] dμ
            = (1/2π) ∫_{-π}^{π} W(e^{j(ω-μ)}) [π δ(μ - ω0) + π δ(μ + ω0)] dμ
            = (1/2) [W(e^{j(ω-ω0)}) + W(e^{j(ω+ω0)})]

Frequency Domain Sampling

Let x_r(n) = Σ_m x_t(n + mN) ("r" for replicated). Recall from the relation between the DFT and the DTFT of a finite-length sequence that

X_r(k) = X_t(e^{j2πk/N}) = (1/2)[W(e^{j(2πk/N - ω0)}) + W(e^{j(2πk/N + ω0)})]

Consider two cases:
1. ω0 = 2πk0/N for some integer k0. Example: k0 = 1, N = 8, ω0 = π/4. In this case

X_r(k) = (N/2)[δ(k - k0) + δ(k - (N - k0))], 0 ≤ k ≤ N-1

as shown before. There is no picket fence effect.

2. ω0 = 2πk0/N + π/N for some integer k0. Example: k0 = 1, N = 8, ω0 = 3π/8. In this case the general expression

X_r(k) = (1/2)[W(e^{j(2πk/N - ω0)}) + W(e^{j(2πk/N + ω0)})]

does not reduce to impulses: the samples fall between the peaks of W, and every bin is nonzero.
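The two cases above are easy to reproduce numerically for N = 8: an on-bin cosine concentrates all its energy in bins k0 and N - k0, while the half-bin offset leaks into every bin. A small sketch (the helper name `dft_mag` is ours):

```python
import math, cmath

def dft_mag(x):
    """Magnitudes of the N-point DFT of a real sequence."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N)))
            for k in range(N)]

N = 8
w_on = 2 * math.pi * 1 / N          # omega0 exactly on bin k0 = 1
w_off = w_on + math.pi / N          # omega0 midway between bins (3*pi/8)
on = dft_mag([math.cos(w_on * n) for n in range(N)])
off = dft_mag([math.cos(w_off * n) for n in range(N)])
print([round(v, 3) for v in on])    # impulses at k = 1 and k = 7, height N/2 = 4
print([round(v, 3) for v in off])   # leakage: every bin nonzero
```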
Picket fence effect:

• The spectral peak is midway between sample locations
• Each sidelobe peak occurs at a sample location

Remedies:

1. Both truncation and picket fence effects are reduced by increasing N
2. Sidelobes may be suppressed at the expense of a wider mainlobe by using a smoothly tapering window

6.5 Fast Fourier Transform (FFT) Algorithm

The FFT is an algorithm for efficient computation of the DFT. It is not a new transform. Recall

X^{(N)}(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N}, k = 0, 1, ..., N-1

The superscript (N) is used to show the length of the DFT. For each value of k, computation of X(k) requires N complex multiplications and N-1 additions. Define a complex operation (CO) as 1 complex multiplication and 1 complex addition. Computation of a length-N DFT then requires approximately N^2 CO's. To derive an efficient algorithm for computation of the DFT, we employ a divide-and-conquer strategy.
Assume N is even. Split the sum into even- and odd-indexed terms:

X^{(N)}(k) = Σ_{n even} x(n) e^{-j2πkn/N} + Σ_{n odd} x(n) e^{-j2πkn/N}
           = Σ_{m=0}^{N/2-1} x(2m) e^{-j2πk(2m)/N} + Σ_{m=0}^{N/2-1} x(2m+1) e^{-j2πk(2m+1)/N}
           = Σ_{m=0}^{N/2-1} x(2m) e^{-j2πkm/(N/2)} + e^{-j2πk/N} Σ_{m=0}^{N/2-1} x(2m+1) e^{-j2πkm/(N/2)}

Let x0(m) = x(2m) and x1(m) = x(2m+1), m = 0, ..., N/2-1. Now we have

X^{(N)}(k) = X0^{(N/2)}(k) + e^{-j2πk/N} X1^{(N/2)}(k), k = 0, ..., N-1

Note that X0^{(N/2)}(k) and X1^{(N/2)}(k) are both periodic with period N/2, while e^{-j2πk/N} is periodic with period N.

Computation:

• Direct: X^{(N)}(k), k = 0, ..., N-1: N^2 CO's
• Decimation by a factor of 2:
  X0^{(N/2)}(k), k = 0, ..., N/2-1: N^2/4 CO's
  X1^{(N/2)}(k), k = 0, ..., N/2-1: N^2/4 CO's
  X^{(N)}(k) = X0^{(N/2)}(k) + e^{-j2πk/N} X1^{(N/2)}(k), k = 0, ..., N-1: N CO's
Total: N^2/2 + N CO's

For large N, we have nearly halved the computation. Consider a signal flow diagram of what we have done so far. If N/2 is even, we can repeat the idea with each N/2-point DFT. If N = 2^M, we repeat the process M times, resulting in M stages.
The first stage consists of 2-point DFT's:

X^{(2)}(k) = Σ_{n=0}^{1} x(n) e^{-j2πkn/2}

X^{(2)}(0) = x(0) + x(1)
X^{(2)}(1) = x(0) - x(1)

[Figure: flow diagram of the 2-point DFT, and the full example for N = 8 (M = 3).]
Ordering of Input Data

[Figure: bit-reversed ordering of the input for N = 8.]

Computation (N = 2^M): M = log2 N stages, N CO's per stage. Total: N log2 N CO's.

Comments

• The algorithm we derived is the decimation-in-time radix-2 FFT:
  1. input in bit-reversed order
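The even/odd split above maps directly onto a short recursive implementation. This sketch recurses instead of doing the in-place, bit-reversed bookkeeping of the flow diagram, but it performs the same butterfly computation and can be checked against the direct O(N^2) DFT:

```python
import cmath

def fft(x):
    """Decimation-in-time radix-2 FFT (len(x) must be a power of 2)."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft(x[0::2])                  # N/2-point DFT of even-indexed samples
    odd = fft(x[1::2])                   # N/2-point DFT of odd-indexed samples
    out = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]   # twiddle factor
        out[k] = even[k] + t
        out[k + N // 2] = even[k] - t    # uses period-N/2 property of the halves
    return out

x = [1.0, 2.0, 0.0, -1.0, 0.5, 0.0, 3.0, -2.0]
N = len(x)
direct = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
          for k in range(N)]
err = max(abs(a - b) for a, b in zip(fft(x), direct))
print(err)
```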
  2. in-place computation
  3. output in normal order
• The dual of the above algorithm is the decimation-in-frequency radix-2 FFT:
  1. input in normal order
  2. in-place computation
  3. output in bit-reversed order
• The same approaches may be used to derive either decimation-in-time or decimation-in-frequency mixed-radix FFT algorithms for any N which is a composite number
• All FFT algorithms are based on composite N and require O(N log N) computation
• The DFT of a length-N real signal can be expressed in terms of a length-N/2 DFT

6.6 Periodic Convolution

We developed the DFT as a computable spectral representation for DT signals. However, the existence of a very efficient algorithm for calculating it suggests another application, based on the notion that convolution in the time domain corresponds to multiplication in the frequency domain.

Consider the filtering of a length-N signal x(n) with an FIR filter containing M coefficients, where M << N:

y(n) = Σ_{m=0}^{M-1} h(m) x(n - m)

Direct computation of each output point requires M multiplications and M-1 additions. Alternatively, we can perform the following computations:

• Compute X^{(N)}(k): N log2 N CO's
• Extend h(n) with zeros to length N and compute H^{(N)}(k): N log2 N CO's
• Multiply the DFT's:
  Y^{(N)}(k) = H^{(N)}(k) X^{(N)}(k): N complex multiplications
• Compute the inverse DFT y(n) = DFT^{-1}{Y^{(N)}(k)}: N log2 N CO's

Total: 3N log2 N CO's + N complex multiplications. The computation per output point is 3 log2 N CO's + 1 complex multiplication.

How Periodic Convolution Arises

• Consider the inverse DFT of the product Y[k] = H[k]X[k]:

y(n) = (1/N) Σ_{k=0}^{N-1} H[k] X[k] e^{j2πkn/N}

• Substitute for X[k] in terms of x[n]:

y(n) = (1/N) Σ_{k=0}^{N-1} H[k] Σ_{m=0}^{N-1} x[m] e^{-j2πkm/N} e^{j2πkn/N}

• Interchange the order of the summations:

y(n) = Σ_{m=0}^{N-1} x[m] (1/N) Σ_{k=0}^{N-1} H[k] e^{j2πk(n-m)/N} = Σ_{m=0}^{N-1} x[m] h[n - m]

• Note the periodicity of the sequence h[n]:

h[n - m] = h[(n - m) mod N]
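The identity derived above (inverse DFT of a product equals mod-N convolution) can be verified directly with the naive DFT from earlier in the chapter. A minimal sketch (helper names ours):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def circular_conv(h, x):
    """Direct mod-N convolution: y[n] = sum_m h[(n - m) mod N] x[m]."""
    N = len(x)
    return [sum(h[(n - m) % N] * x[m] for m in range(N)) for n in range(N)]

h = [1.0, 0.5, 0.25, 0.0]
x = [1.0, -1.0, 2.0, 3.0]
direct = circular_conv(h, x)
via_dft = idft([H * X for H, X in zip(dft(h), dft(x))])
err = max(abs(a - b) for a, b in zip(direct, via_dft))
print(err)
```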
Periodic vs. Aperiodic Convolution

Aperiodic convolution:

y[n] = h[n] * x[n] = Σ_{m=-∞}^{∞} h[n - m] x[m]

Periodic convolution (N = 9):

y[n] = h[n] ⊛ x[n] = Σ_{m=0}^{N-1} h[(n - m) mod N] x[m]
Comparison

[Figure: aperiodic convolution of h[n] and x[n].]
[Figure: periodic convolution for N = 9.]

Zero Padding to Match Aperiodic Convolution

Suppose x[n] has length N and h[n] has length M. Periodic convolution of length at least M + N - 1 (with both sequences zero-padded to that length) will match the aperiodic result.
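The zero-padding claim above is simple to demonstrate: pad both sequences to length M + N - 1 and the circular result equals the linear one exactly. A minimal sketch (helper names ours):

```python
def aperiodic_conv(h, x):
    """Linear convolution, output length len(h) + len(x) - 1."""
    y = [0.0] * (len(h) + len(x) - 1)
    for n in range(len(y)):
        for m in range(len(x)):
            if 0 <= n - m < len(h):
                y[n] += h[n - m] * x[m]
    return y

def circular_conv(h, x):
    """Mod-N convolution of equal-length sequences."""
    N = len(x)
    return [sum(h[(n - m) % N] * x[m] for m in range(N)) for n in range(N)]

h = [1.0, 0.5, 0.25]                 # length M = 3
x = [1.0, -1.0, 2.0, 3.0, 0.5]       # length N = 5
L = len(h) + len(x) - 1              # M + N - 1 = 7
hp = h + [0.0] * (L - len(h))        # zero-pad both to length L
xp = x + [0.0] * (L - len(x))
err = max(abs(a - b) for a, b in zip(aperiodic_conv(h, x), circular_conv(hp, xp)))
print(err)
```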
Chapter 7

Digital Filter Design

7.1 Chapter Outline

In this chapter, we will discuss:

1. Overview of the filter design problem
2. Finite impulse response filter design
3. Infinite impulse response filter design

7.2 Overview

1. Specification - What frequency response or other characteristics of the filter are desired?
2. Approximation - What are the coefficients or, equivalently, poles and zeros of the filter that will approximate the desired characteristics?
3. Realization - How will the filter be implemented?

• The filter design problem consists of these three tasks
• Consider a simple example to illustrate them
Filter Design Example

• An analog signal contains frequency components ranging from DC (0 Hz) to 5 kHz
• Design a digital system to remove all but the frequencies in the range 2-4 kHz
• Assume the signal will be sampled at a 10 kHz rate

Specification

• [Figure: ideal analog filter.]
• [Figure: ideal digital filter.]
Filter Tolerance Specification

ω_pl, ω_pu - lower and upper passband edges
ω_sl, ω_su - lower and upper stopband edges
δ_l, δ_u - stopband and passband ripple bounds

Approximation

• Design by positioning poles and zeros (PZ plot design)
• Transfer function for the PZ plot filter:

H(z) = K (z - e^{jπ/10})(z - e^{-jπ/10})(z - e^{j3π/10})(z - e^{-j3π/10})(z - e^{j9π/10})(z - e^{-j9π/10})
       / [(z - 0.8e^{j5π/10})(z - 0.8e^{-j5π/10})(z - 0.8e^{j7π/10})(z - 0.8e^{-j7π/10})]

Choose the constant K to yield unity magnitude response at the center of the passband.

• [Figure: frequency response of the PZ plot filter.]
• Comments on PZ plot design:
  1. Passband asymmetry is due to extra zeros in the lower stopband
  2. Phase is highly nonlinear
  3. Zeros on the unit circle drive the magnitude response to zero and result in phase discontinuities
  4. The pole-zero plot yields intuition about filter behavior, but does not suggest how to meet design specifications
Realization

• Cascade form of the transfer function (same H(z) as above)

Cascade Form Realization

• Cascade form in second-order sections with real-valued coefficients:

H(z) = K [z^2 - 2cos(π/10)z + 1] × [z^2 - 2cos(3π/10)z + 1] / [z^2 - 1.6cos(5π/10)z + 0.64]
         × [z^2 - 2cos(9π/10)z + 1] / [z^2 - 1.6cos(7π/10)z + 0.64]

• Convert to negative powers of z (ignoring an overall time advance of 2 sample units):

H(z) = K [1 - 2cos(π/10)z^{-1} + z^{-2}] × [1 - 2cos(3π/10)z^{-1} + z^{-2}] / [1 - 1.6cos(5π/10)z^{-1} + 0.64z^{-2}]
         × [1 - 2cos(9π/10)z^{-1} + z^{-2}] / [1 - 1.6cos(7π/10)z^{-1} + 0.64z^{-2}]

• [Figure: overall system as a cascade of sections.]
• Second-order stages:

H_i(z) = (1 + a_{1i} z^{-1} + a_{2i} z^{-2}) / (1 - b_{1i} z^{-1} - b_{2i} z^{-2}), i = 1, 2, 3

y_i[n] = x_i[n] + a_{1i} x_i[n-1] + a_{2i} x_i[n-2] + b_{1i} y_i[n-1] + b_{2i} y_i[n-2]

Parallel Form Realization

• Expand the transfer function in partial fractions:

H(z) = (a_{01} + a_{11} z^{-1}) / (1 - b_{11} z^{-1} - b_{21} z^{-2}) + (a_{02} + a_{12} z^{-1}) / (1 - b_{12} z^{-1} - b_{22} z^{-2}) + d_0 + d_1 z^{-1} + d_2 z^{-2}

Direct Form Realization

• Multiply out all factors in the numerator and denominator of the transfer function:

H(z) = (a_0 + a_1 z^{-1} + ... + a_6 z^{-6}) / (1 - b_1 z^{-1} - ... - b_4 z^{-4})
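The second-order-stage difference equation above translates directly into code. This sketch (the function names `biquad` and `cascade` are ours, not from the text) runs a signal through a cascade of such sections:

```python
# Each second-order section implements
#   y[n] = x[n] + a1*x[n-1] + a2*x[n-2] + b1*y[n-1] + b2*y[n-2]
# matching the sign convention used in the text.

def biquad(x, a1, a2, b1, b2):
    y = []
    for n in range(len(x)):
        xm1 = x[n - 1] if n >= 1 else 0.0
        xm2 = x[n - 2] if n >= 2 else 0.0
        ym1 = y[n - 1] if n >= 1 else 0.0
        ym2 = y[n - 2] if n >= 2 else 0.0
        y.append(x[n] + a1 * xm1 + a2 * xm2 + b1 * ym1 + b2 * ym2)
    return y

def cascade(x, sections):
    """Run x through a list of (a1, a2, b1, b2) sections in series."""
    for a1, a2, b1, b2 in sections:
        x = biquad(x, a1, a2, b1, b2)
    return x

# FIR sanity check: cascading (1 + z^-1) and (1 - z^-1) gives 1 - z^-2,
# so the impulse response is [1, 0, -1, 0].
out = cascade([1.0, 0.0, 0.0, 0.0], [(1.0, 0.0, 0.0, 0.0), (-1.0, 0.0, 0.0, 0.0)])
print(out)
```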
7.3 Finite Impulse Response Filter Design

Ideal Impulse Response

• Consider the same ideal frequency response as before
• Inverse DTFT:

h[n] = (1/2π) ∫_{-π}^{π} H(ω) e^{jωn} dω
     = (1/2π) [∫_{-4π/5}^{-2π/5} e^{jωn} dω + ∫_{2π/5}^{4π/5} e^{jωn} dω]
     = (1/j2πn) [e^{-j2πn/5} - e^{-j4πn/5} + e^{j4πn/5} - e^{j2πn/5}]
     = (1/j2πn) [e^{-j3πn/5} (e^{jπn/5} - e^{-jπn/5}) + e^{j3πn/5} (e^{jπn/5} - e^{-jπn/5})]
     = (1/5) e^{-j3πn/5} sinc(n/5) + (1/5) e^{j3πn/5} sinc(n/5)

h[n] = 0.4 cos(3πn/5) sinc(n/5)
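The closed form just derived can be checked against a direct numerical evaluation of the inverse-DTFT integral over the two passbands (using sinc(x) = sin(πx)/(πx), as in the derivation):

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def h_closed(n):
    return 0.4 * math.cos(3 * math.pi * n / 5) * sinc(n / 5)

def h_numeric(n, steps=4000):
    """Midpoint-rule evaluation of (1/2pi) * integral of e^{j w n} over the
    two passbands [-4pi/5, -2pi/5] and [2pi/5, 4pi/5]."""
    total = 0.0
    for lo, hi in [(-0.8 * math.pi, -0.4 * math.pi), (0.4 * math.pi, 0.8 * math.pi)]:
        dw = (hi - lo) / steps
        for i in range(steps):
            w = lo + (i + 0.5) * dw
            total += math.cos(w * n) * dw     # imaginary parts cancel by symmetry
    return total / (2 * math.pi)

err = max(abs(h_closed(n) - h_numeric(n)) for n in range(-5, 6))
print(err)
```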
Approximation of the Desired Filter Impulse Response by Truncation

• FIR filter equation:

y[n] = Σ_{m=0}^{M-1} a_m x[n - m]

• Filter coefficients (assuming M is odd):

a_m = h_ideal[m - (M-1)/2], m = 0, ..., M-1

Truncation of the Impulse Response

• Ideal infinite impulse response: h_ideal[n]
• Actual finite impulse response: h_actual[n]
• Window sequence: w[n] = 1 for |n| ≤ (M-1)/2, 0 else
• Relation between ideal and actual impulse responses:

h_actual[n] = h_ideal[n] w[n]

H_actual(ω) = (1/2π) ∫_{-π}^{π} H_ideal(λ) W(ω - λ) dλ

[Figure: frequency response of the truncated filter.]
Effect of Filter Length

[Figure: truncated-filter responses for several values of M.]

DTFT of the Rectangular Window

W(ω) = Σ_{n=-(M-1)/2}^{(M-1)/2} e^{-jωn}

Let m = n + (M-1)/2:

W(ω) = Σ_{m=0}^{M-1} e^{-jω[m-(M-1)/2]} = e^{jω(M-1)/2} (1 - e^{-jωM})/(1 - e^{-jω}) = sin(ωM/2)/sin(ω/2)
Graphical View of the Convolution

H_actual(ω) = (1/2π) ∫_{-π}^{π} H_ideal(λ) W(ω - λ) dλ

Relation between Window Attributes and Filter Frequency Response

W(ω): mainlobe width Δω           →  H_actual(ω): transition bandwidth
W(ω): area δ of the first sidelobe →  H_actual(ω): passband and stopband ripple
Design of FIR Filters by Windowing

• Relation between ideal and actual impulse responses: h_actual[n] = h_ideal[n] w[n]
• Choose a window sequence w[n] for which the DTFT has:
  1. minimum mainlobe width
  2. minimum sidelobe area
• The Kaiser window is the best choice:
  1. based on optimal prolate spheroidal wavefunctions
  2. contains a parameter that permits a tradeoff between mainlobe width and sidelobe area

Kaiser Window

w[n] = I_0[β(1 - [(n - α)/α]^2)^{1/2}] / I_0(β), 0 ≤ n ≤ M-1, and 0 else

where α = (M-1)/2 and I_0(·) is the zeroth-order modified Bessel function of the first kind.
[Figure: effect of the parameter β on the window shape.]

Filter Design Procedure for the Kaiser Window

1. Choose stopband and passband ripple δ and transition bandwidth Δω
2. Determine the parameter β:

A = -20 log10 δ (attenuation in dB)

β = 0.1102(A - 8.7),                          A > 50
β = 0.5842(A - 21)^{0.4} + 0.07886(A - 21),   21 ≤ A ≤ 50
β = 0.0,                                      A < 21

3. Determine the filter length M:

M = (A - 8)/(2.285 Δω) + 1

4. Example: Δω = 0.1π radians/sample = 0.05 cycles/sample, δ = 0.01

A = 40, β = 3.4, M = 45.6

Filter Designed with the Kaiser Window
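The empirical design formulas above are easy to package as a small function and to check against the worked example (Δω = 0.1π, δ = 0.01). The function name `kaiser_params` is ours:

```python
import math

def kaiser_params(delta, delta_omega):
    """Kaiser-window FIR design formulas: ripple delta (linear),
    transition width delta_omega in radians/sample. Returns (A, beta, M)."""
    A = -20 * math.log10(delta)                 # attenuation in dB
    if A > 50:
        beta = 0.1102 * (A - 8.7)
    elif A >= 21:
        beta = 0.5842 * (A - 21) ** 0.4 + 0.07886 * (A - 21)
    else:
        beta = 0.0
    M = (A - 8) / (2.285 * delta_omega) + 1     # estimated filter length
    return A, beta, M

A, beta, M = kaiser_params(0.01, 0.1 * math.pi)
print(A, round(beta, 2), round(M, 1))   # 40.0 3.4 45.6
```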
Kaiser Filter Frequency Response

M = 47, β = 3.4
ω_sl = 0.175, ω_pl = 0.225, ω_pu = 0.375, ω_su = 0.425 (cycles/sample)
δ = 0.01 = -40 dB, Δω = 0.05 cycles/sample

Optimal FIR Filter Design

• Although the Kaiser window has certain optimal properties, filters designed using this window are not optimal
• Consider the design of filters to minimize the integral mean-squared frequency error
• Problem: choose h_actual[n], n = -(M-1)/2, ..., (M-1)/2, to minimize

E = (1/2π) ∫_{-π}^{π} |H_actual(ω) - H_ideal(ω)|^2 dω
Minimum Mean-Squared Error Design

• Using Parseval's relation, we obtain a direct solution to this problem:

E = (1/2π) ∫_{-π}^{π} |H_actual(ω) - H_ideal(ω)|^2 dω = Σ_{n=-∞}^{∞} |h_actual[n] - h_ideal[n]|^2
  = Σ_{n=-(M-1)/2}^{(M-1)/2} |h_actual[n] - h_ideal[n]|^2 + Σ_{n ≤ -(M+1)/2} |h_ideal[n]|^2 + Σ_{n ≥ (M+1)/2} |h_ideal[n]|^2

• Solution: set h_actual[n] = h_ideal[n], n = -(M-1)/2, ..., (M-1)/2. The minimum error is given by

E = Σ_{n ≤ -(M+1)/2} |h_ideal[n]|^2 + Σ_{n ≥ (M+1)/2} |h_ideal[n]|^2

• This is the truncation of the ideal impulse response
• We conclude that the original criterion of minimizing the mean-squared error was not a good choice
• What was the problem with truncating the ideal impulse response? (See figure on following page)

Minimax (Equiripple) Filter Design

• No matter how large we pick M for the truncated impulse response, we cannot reduce the peak ripple
• Also, the ripple is concentrated near the band edges
• Rather than focusing on the integral of the error, let's consider the maximum error
• Parks and McClellan developed a method for the design of FIR filters based on minimizing the maximum of the weighted frequency-domain error
Effect of Filter Length: Parks-McClellan Equiripple Filters

[Figure: equiripple filter responses for several values of M.]
7.4 Infinite Impulse Response Filter Design

IIR vs. FIR Filters

• FIR filter equation:

y[n] = a_0 x[n] + a_1 x[n-1] + ... + a_{M-1} x[n - (M-1)]

• IIR filter equation:

y[n] = a_0 x[n] + a_1 x[n-1] + ... + a_{M-1} x[n - (M-1)] - b_1 y[n-1] - b_2 y[n-2] - ... - b_N y[n-N]

• IIR filters can generally achieve a given desired response with less computation than FIR filters
• It is easier to approximate arbitrary frequency response characteristics, including exactly linear phase, with FIR filters

Infinite Impulse Response (IIR) Filter Design: Overview

• IIR digital filter designs are based on established methods for designing analog filters
• The approach is generally limited to frequency-selective filters with ideal passband/stopband characteristics
• The basic filter type is lowpass
• Achieve highpass or bandpass via transformations
• Achieve multiple stop/pass bands by combining multiple filters with a single passband

IIR Filter Design Steps

• Choose a prototype analog filter family:
  1. Butterworth
  2. Chebyshev Type I or II
  3. Elliptic
• Choose an analog-digital transformation method:
  1. Impulse invariance
  2. Bilinear transformation
• Transform the digital filter specifications to equivalent analog filter specifications
• Design the analog filter
• Transform the analog filter to a digital filter
• Perform a frequency transformation to achieve a highpass or bandpass filter, if desired

Prototype Filter Types

[Figure: magnitude responses of the prototype filter families.]
Transformation via Impulse Invariance

• Sample the impulse response of the analog filter:

h_d[n] = T h_a(nT)

H_d(ω) = Σ_k H_a(f - k/T)|_{f = ω/(2πT)}

Note that aliasing may occur.

Impulse Invariance

• [Figure: implementation of the digital filter.]
1. Partial fraction expansion of the analog transfer function (assuming all poles have multiplicity 1):

H_a(s) = Σ_{k=1}^{N} A_k/(s - s_k)

2. Inverse Laplace transform:

h_a(t) = Σ_{k=1}^{N} A_k e^{s_k t} u(t)

3. Sample the impulse response:

h_d[n] = T h_a(nT) = T Σ_{k=1}^{N} A_k e^{s_k nT} u(nT) = T Σ_{k=1}^{N} A_k p_k^n u[n], p_k = e^{s_k T}

4. Take the Z transform:

H_d(z) = Σ_{k=1}^{N} T A_k/(1 - p_k z^{-1})

5. Combine terms:

H_d(z) = (a_0 + a_1 z^{-1} + ... + a_{M-1} z^{-(M-1)}) / (1 + b_1 z^{-1} + b_2 z^{-2} + ... + b_N z^{-N})

6. Corresponding filter equation:

y[n] = a_0 x[n] + a_1 x[n-1] + ... + a_{M-1} x[n - (M-1)] - b_1 y[n-1] - b_2 y[n-2] - ... - b_N y[n-N]

Impulse Invariance Example

• Design a second-order ideal lowpass digital filter with cutoff at ω_d = π/5 radians/sample (assume T = 1)

1. Analog cutoff frequency: f_c = ω_d/(2πT) = 0.1 Hz, ω_c = 2π f_c = π/5 rad/sec

• Use a Butterworth analog prototype:

H_a(s) = 0.3948 / {[s - (-0.4443 + j0.4443)][s - (-0.4443 - j0.4443)]}
4.4443 z −1 1 − 0.2450z −1 = 1 − 1.641ej0.4443−j0.1581y[n − 1] − 0.4443(0.4443 )n u[n] • Compute Z transform Hd (z) = j0.4443(0.4443 z −1 ) • Find diﬀerence equation y[n] = 0.4443)] 169 • Compute inverse Laplace Transform ha (t) = [−j0.4443 z −1 )(1 0.4443e(−0.641e−j0.4443)t + j0.2450z −1 − 0. INFINITE IMPULSE RESPONSE FILTER DESIGN • Apply partial fraction expansion Ha (s) = −j0.4443e(−0.6413ej0.4443)] [s − (−0.6413e−j0.4443+j0.4443 + j0.4443 −j0.4443e(−0.4443−j0.4443e(−0.7. assuming that there is no aliasing .4443 z −1 0.4443 + [s − (−0.4443)n +j0.4443)t ]u(t) • Sample impulse response hd [n] = −j0.4443 − j0.6413e−j0.4113z −2 • Combine terms Hd (z) = (1 − 0.2450x[n − 1] + 1.1581z −1 + 0.6413ej0.4443 )n +j0.4443 + 1 − 0. if there is no aliasing • Desired transition bandwidths map directly between digital and analog frequnecy domains • Passband and stopband ripple speciﬁcations are identical for both digital and analog ﬁlter.4443 j0.4443+j0.4443)n u[n] = −j0.4113y[n − 2] Impulse Invariance MethodSummary • Preserves impulse response and shape of frequency response.
• The final digital filter design is independent of the sampling interval parameter T
• Poles in the analog filter map directly to poles in the digital filter via the transformation p_k = e^{s_k T}
• There is no such relation between the zeros of the two filters
• The gain at DC in the digital filter may not equal unity, since the sampled impulse response may only approximately sum to 1

Bilinear Transformation Method

• Mapping between s and z:

s = (2/T)(1 - z^{-1})/(1 + z^{-1}),   z = [1 + (T/2)s]/[1 - (T/2)s]

H_d(z) = H_a((2/T)(1 - z^{-1})/(1 + z^{-1}))

• [Figure: mapping between the s and z planes.]
• [Figure: mapping between analog and digital frequencies.]
Bilinear Transformation: Example No. 1

• Design a lowpass filter:
  1. cutoff frequency ω_dc = π/5 rad/sample
  2. transition bandwidth and ripple: Δω_d = 0.2π radians/sample = 0.1 cycles/sample, δ = 0.1
  3. use a Butterworth analog prototype
  4. set T = 1

|H_d(ω_d1)| = 1 - δ,   |H_d(ω_d2)| = δ

• Map digital to analog frequencies:
ω_a = (2/T) tan(ω_d/2)

ω_d1 = π/5 - 0.1π:  ω_a1 = 0.3168 rad/sec
ω_d2 = π/5 + 0.1π:  ω_a2 = 1.0191 rad/sec

• Solve for the filter order and cutoff frequency:

|H_a(ω_a)|^2 = 1/[1 + (ω_a/ω_ac)^{2N}],   |H_a(ω_a1)| = 1 - δ,   |H_a(ω_a2)| = δ

N = (1/2) log[(|H_a(ω_a2)|^{-2} - 1)/(|H_a(ω_a1)|^{-2} - 1)] / log(ω_a2/ω_a1)
ω_ac = ω_a2 / (|H_a(ω_a2)|^{-2} - 1)^{1/(2N)}

• Result: N = 3, ω_ac = 0.4738

• Determine the transfer function of the analog filter:

H_a(s) H_a(-s) = 1/[1 + (s/jω_ac)^{2N}]

• The poles are given by

s_k = ω_ac e^{j[π/2 + π/(2N) + 2πk/(2N)]}, k = 0, 1, ..., 2N - 1

• Take the N poles with negative real parts for H_a(s):

s_0 = 0.4738 e^{j2π/3} = -0.2369 + j0.4103
s_1 = 0.4738 e^{jπ} = -0.4738
s_2 = 0.4738 e^{-j2π/3} = -0.2369 - j0.4103

• Transfer function of the analog filter:

H_a(s) = ω_ac^3 / [(s - s_0)(s - s_1)(s - s_2)]
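The prewarping and order/cutoff formulas above can be verified in a few lines; the numbers below reproduce the quoted results N = 3 and ω_ac = 0.4738:

```python
import math

delta = 0.1
wd1 = math.pi / 5 - 0.1 * math.pi          # lower band edge (digital)
wd2 = math.pi / 5 + 0.1 * math.pi          # upper band edge (digital)

# prewarp: omega_a = (2/T) tan(omega_d / 2), with T = 1
wa1, wa2 = 2 * math.tan(wd1 / 2), 2 * math.tan(wd2 / 2)

H1, H2 = 1 - delta, delta                  # |Ha| targets at the two edges
N = 0.5 * math.log((H2**-2 - 1) / (H1**-2 - 1)) / math.log(wa2 / wa1)
N = math.ceil(N)                           # round up to an integer order
wac = wa2 / (H2**-2 - 1) ** (1 / (2 * N))  # cutoff from the stopband constraint
print(round(wa1, 4), round(wa2, 4), N, round(wac, 4))
```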
• Transform to the discrete-time filter:

H_d(z) = H_a((2/T)(1 - z^{-1})/(1 + z^{-1}))
       = ω_ac^3 (1 + z^{-1})^3 / ∏_{k=0}^{2} [2(1 - z^{-1}) - s_k(1 + z^{-1})]
       = (0.0083 + 0.0249z^{-1} + 0.0249z^{-2} + 0.0083z^{-3}) / (1.0000 - 2.0769z^{-1} + 1.5343z^{-2} - 0.3909z^{-3})

Bilinear Transformation: Example No. 2

• Design a lowpass filter: ω_dc = π/5 rad/sample
• Transition bandwidth and ripple: Δω_d = 0.1π radians/sample = 0.05 cycles/sample, δ = 0.01
• Use a Butterworth analog prototype
• Set T = 1
• Result: N = 13 (the narrower transition band and smaller ripple require a much higher order than in Example No. 1)
Frequency Transformations of Lowpass IIR Filters

• Given a prototype lowpass digital filter design with cutoff frequency θ_c, we can transform directly to:
  1. a lowpass digital filter with a different cutoff frequency
  2. a highpass digital filter
  3. a bandpass digital filter
  4. a bandstop digital filter
• General form of the transformation:

H(z) = H_lp(Z)|_{Z^{-1} = G(z^{-1})}

• Example (lowpass to lowpass):
  1. Transformation: Z^{-1} = (z^{-1} - α)/(1 - α z^{-1})
  2. Associated design formula: α = sin[(θ_c - ω_c)/2] / sin[(θ_c + ω_c)/2]
Chapter 8

Random Signals

8.1 Chapter Outline

In this chapter, we will discuss:

1. Single Random Variable
2. Two Random Variables
3. Sequences of Random Variables
4. Estimating Distributions
5. Filtering of Random Sequences
6. Estimating Correlation

8.2 Single Random Variable

Random Signals

Random signals are denoted by upper case X and are completely characterized by the density function f_X(x):

P(a ≤ X ≤ b) = ∫_a^b f_X(x) dx   ⇒   f_X(x) Δx ≈ P(x ≤ X ≤ x + Δx)
The cumulative distribution function is

F_X(x) = P(X ≤ x) = ∫_{-∞}^{x} f_X(ξ) dξ   ∴   dF_X(x)/dx = f_X(x)

Properties of the density function:

1. f_X(x) ≥ 0
2. ∫_{-∞}^{∞} f_X(x) dx = 1

Example 1

f_X(x) = λ e^{-λx} u(x)

∫_{-∞}^{∞} f_X(x) dx = ∫_0^{∞} λ e^{-λx} dx = -e^{-λx}|_0^{∞} = 1

What is the probability that X ≤ 1?

P(X ≤ 1) = ∫_0^1 λ e^{-λx} dx = -e^{-λx}|_0^1 = 1 - e^{-λ}
Expectation

E{g(X)} = ∫ g(x) f_X(x) dx, where g is a function of X

Special cases:

1. Mean value, g(x) = x:  X̄ = E{X} = ∫ x f_X(x) dx
2. Second moment, g(x) = x^2:  X̄² = E{X^2} = ∫ x^2 f_X(x) dx
3. Variance, g(x) = (x - X̄)^2:  σ_X^2 = E{(X - X̄)^2}

Expectation is a linear operator:

E{a g(X) + b h(X)} = a E{g(X)} + b E{h(X)}

∴ σ_X^2 = E{X^2 - 2X̄X + X̄^2} = E{X^2} - X̄^2

Example 2

E{X} = ∫_0^{∞} λ x e^{-λx} dx

Integrate by parts (∫ u dv = uv - ∫ v du) with u = x, du = dx, dv = λ e^{-λx} dx, v = -e^{-λx}:

∫_0^{∞} λ x e^{-λx} dx = -x e^{-λx}|_0^{∞} + ∫_0^{∞} e^{-λx} dx = [-x e^{-λx} - (1/λ) e^{-λx}]_0^{∞} = 1/λ
Example 3

E{X^2} = ∫_0^{∞} λ x^2 e^{-λx} dx

Integrate by parts with u = x^2, du = 2x dx, dv = λ e^{-λx} dx, v = -e^{-λx}:

∫_0^{∞} λ x^2 e^{-λx} dx = -x^2 e^{-λx}|_0^{∞} + 2 ∫_0^{∞} x e^{-λx} dx = 2/λ^2

σ_X^2 = 2/λ^2 - (1/λ)^2 = 1/λ^2

For λ = 1: X̄ = 1 and σ_X = 1. [Figure: two densities for which X and Y have the same mean and the same standard deviation.]

Transformations of a Random Variable

• Y = aX + b
• Ȳ = aX̄ + b
• σ_Y^2 = E{(Y - Ȳ)^2} = E{[aX + b - (aX̄ + b)]^2} = a^2 E{(X - X̄)^2} = a^2 σ_X^2
• F_Y(y) = P{Y ≤ y} = P{aX + b ≤ y} = P{X ≤ (y - b)/a} = F_X((y - b)/a), assuming a > 0
• f_Y(y) = (1/a) f_X((y - b)/a); in general, f_Y(y) = (1/|a|) f_X((y - b)/a)

8.3 Two Random Variables

1. Joint density function:

P{a ≤ X ≤ b ∩ c ≤ Y ≤ d} = ∫_a^b ∫_c^d f_XY(x, y) dy dx

2. Joint distribution function:

F_XY(x, y) = ∫_{-∞}^{x} ∫_{-∞}^{y} f_XY(ξ, η) dη dξ

3. Marginal densities:

f_X(x) = ∫ f_XY(x, y) dy,   f_Y(y) = ∫ f_XY(x, y) dx

4. Expectation:

E{g(X, Y)} = ∫∫ g(x, y) f_XY(x, y) dx dy
5. Linearity: E{a g(X, Y) + b h(X, Y)} = a E{g(X, Y)} + b E{h(X, Y)}
6. Correlation, g(x, y) = xy:  XȲ = E{XY}
7. Covariance, g(x, y) = (x - X̄)(y - Ȳ):  σ_XY = E{(X - X̄)(Y - Ȳ)} = E{XY} - X̄Ȳ = XȲ - X̄Ȳ
8. Correlation coefficient: ρ_XY = σ_XY/(σ_X σ_Y)
9. Independence: f_XY(x, y) = f_X(x) f_Y(y) ⇒ σ_XY = 0 and XȲ = X̄Ȳ

Example 4

f_XY(x, y) = 2 for 0 ≤ y ≤ x ≤ 1, and 0 else

f_X(x) = ∫_0^x 2 dy = 2x, 0 ≤ x ≤ 1 (0 else)
f_Y(y) = ∫_y^1 2 dx = 2(1 - y), 0 ≤ y ≤ 1 (0 else)
XȲ = ∫_{x=0}^{1} ∫_{y=0}^{x} 2xy dy dx = ∫_0^1 x [∫_0^x 2y dy] dx = ∫_0^1 x^3 dx = 1/4

X̄ = ∫_0^1 x (2x) dx = (2/3) x^3|_0^1 = 2/3

Ȳ = ∫_0^1 y · 2(1 - y) dy = 1/3

X̄² = ∫_0^1 x^2 (2x) dx = (2/4) x^4|_0^1 = 1/2
∴ σ_X^2 = 1/2 - (2/3)^2 = 1/18, and likewise σ_Y^2 = 1/6 - (1/3)^2 = 1/18

ρ_XY = (XȲ - X̄Ȳ)/(σ_X σ_Y) = (1/4 - 2/9)/(1/18) = (1/36)/(1/18) = 1/2

8.4 Sequences of Random Variables

Characteristics

Let X_1, X_2, ..., X_N denote a sequence of random variables. Their behavior is completely characterized by

P{x_1^l ≤ X_1 ≤ x_1^u, x_2^l ≤ X_2 ≤ x_2^u, ..., x_N^l ≤ X_N ≤ x_N^u}
= ∫_{x_1^l}^{x_1^u} ∫_{x_2^l}^{x_2^u} ... ∫_{x_N^l}^{x_N^u} f_{X1,X2,...,XN}(x_1, x_2, ..., x_N) dx_1 dx_2 ... dx_N

It is apparent that this can get complicated very quickly. In practice, we consider only 2 cases:

1. X_1, X_2, ..., X_N are mutually independent:

f_{X1,X2,...,XN}(x_1, x_2, ..., x_N) = ∏_{i=1}^{N} f_{Xi}(x_i)

2. X_1, X_2, ..., X_N are jointly Gaussian.

Example 5

X_1, X_2, ..., X_n are i.i.d. with mean X̄ and standard deviation σ_X; N_1, N_2, ..., N_n are i.i.d. with zero mean, independent of the X's.

Y_i = a X_i + N_i
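Example 4's moments can be checked by brute force, integrating f_XY(x, y) = 2 over the triangle 0 ≤ y ≤ x ≤ 1 with a midpoint Riemann sum. This is only a numerical approximation, so the values agree with 2/3, 1/3, and ρ = 1/2 to grid accuracy:

```python
# Midpoint Riemann-sum check of Example 4: f_XY(x, y) = 2 on 0 <= y <= x <= 1
S = 400                       # grid resolution per axis
dx = 1.0 / S
EX = EY = EXY = EX2 = EY2 = 0.0
for i in range(S):
    x = (i + 0.5) * dx
    for j in range(S):
        y = (j + 0.5) * dx
        if y <= x:            # restrict to the triangular support
            w = 2.0 * dx * dx
            EX += x * w; EY += y * w; EXY += x * y * w
            EX2 += x * x * w; EY2 += y * y * w
var_x = EX2 - EX**2
var_y = EY2 - EY**2
rho = (EXY - EX * EY) / (var_x * var_y) ** 0.5
print(round(EX, 2), round(EY, 2), round(rho, 2))
```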
E{Y_i} = a E{X_i} + E{N_i}, so Ȳ = aX̄

Ȳ² = E{(aX_i + N_i)^2} = a^2 X̄² + N̄²

σ_Y^2 = Ȳ² - Ȳ^2 = a^2 X̄² + σ_N^2 - a^2 X̄^2 = a^2 σ_X^2 + σ_N^2

XȲ = E{X_i (a X_i + N_i)} = a X̄²

σ_XY = a X̄² - a X̄^2 = a σ_X^2

ρ_XY = a σ_X^2 / [σ_X (a^2 σ_X^2 + σ_N^2)^{1/2}] = a σ_X / (a^2 σ_X^2 + σ_N^2)^{1/2} = [1 + σ_N^2/(a^2 σ_X^2)]^{-1/2}
8.5 Estimating Distributions
So far, we have assumed that we have an analytical model for the probability densities of interest. This was reasonable for quantization. However, suppose we do not know the underlying densities. Can we estimate them from observations of the signal?

Suppose we observe a sequence of i.i.d. r.v.'s X_1, X_2, ..., X_N, and let E{X} = µ_X. How do we estimate µ_X? A natural choice is the sample mean

   µ̂_X = (1/N) Σ_{n=1}^N X_n

How good is this estimate? µ̂_X is actually a random variable; call it Y. Then

   E{Y} = E{(1/N) Σ_{n=1}^N X_n} = E{X} = µ_X

so the estimate is unbiased. Its mean squared error is

   E{|Y − µ_X|²} = E{|(1/N) Σ_{n=1}^N X_n − µ_X|²}
                 = (1/N²) Σ_{m=1}^N Σ_{n=1}^N E{[X_m − µ_X][X_n − µ_X]}
                 = (1/N²) · N σ²_X      (cross terms vanish by independence)
                 = σ²_X / N
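The σ²_X/N behavior of the sample mean can be seen empirically. A sketch with illustrative parameter values:

```python
import numpy as np

# Empirical check that var(mu_hat) = sigma_X^2 / N for the sample mean of
# N i.i.d. draws.  Parameters are illustrative.
rng = np.random.default_rng(1)
sigma_X, N, trials = 2.0, 50, 100_000

X = rng.normal(0.0, sigma_X, size=(trials, N))
mu_hat = X.mean(axis=1)            # one sample-mean estimate per trial

var_emp = mu_hat.var()             # spread of the estimator across trials
var_theory = sigma_X**2 / N        # sigma_X^2 / N = 0.08
print(var_emp, var_theory)
```

Doubling N halves the estimator's variance, as the derivation predicts.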
How do we estimate the variance? Consider

   σ̂²_X = (1/N) Σ_{n=1}^N [X_n − µ̂_X]²
         = (1/N) Σ_{n=1}^N [(X_n − µ_X) − (µ̂_X − µ_X)]²
         = (1/N) Σ_{n=1}^N |X_n − µ_X|² − 2 [µ̂_X − µ_X] (1/N) Σ_{n=1}^N [X_n − µ_X] + [µ̂_X − µ_X]²
         = (1/N) Σ_{n=1}^N |X_n − µ_X|² − [µ̂_X − µ_X]²

Call this r.v. Z. Then

   E{Z} = σ²_X − E{|µ̂_X − µ_X|²}

and

   E{|µ̂_X − µ_X|²} = E{|(1/N) Σ_{n=1}^N [X_n − µ_X]|²}
                    = (1/N²) Σ_{m=1}^N Σ_{n=1}^N E{[X_m − µ_X][X_n − µ_X]}
                    = (1/N²) · N σ²_X
                    = σ²_X / N

   ∴ E{Z} = σ²_X − (1/N) σ²_X = [(N − 1)/N] σ²_X

So the estimate is not unbiased. We define the unbiased estimate as

   σ̂²_{X,unb} = [N/(N − 1)] σ̂²_X = (1/(N − 1)) Σ_{n=1}^N (X_n − µ̂_X)²
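In NumPy terms, the biased and unbiased estimators correspond to `var` with `ddof=0` and `ddof=1`. The sample values below are arbitrary:

```python
import numpy as np

# The biased estimator (1/N)*sum((Xn - mu_hat)^2) is numpy's var with ddof=0;
# multiplying by N/(N-1) gives the unbiased estimator, var with ddof=1.
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
N = len(x)

var_biased = x.var(ddof=0)       # (1/N)     * sum (x - x.mean())^2  -> 4.0
var_unbiased = x.var(ddof=1)     # (1/(N-1)) * sum (x - x.mean())^2  -> 32/7

print(var_biased, var_unbiased, var_biased * N / (N - 1))
```

The last two printed values coincide, which is exactly the N/(N − 1) correction derived above.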
A convenient computational form:

   σ̂²_X = (1/N) Σ_{n=1}^N |X_n − µ̂_X|²
         = (1/N) Σ_{n=1}^N X_n² − 2 µ̂_X (1/N) Σ_{n=1}^N X_n + µ̂_X²
         = (1/N) Σ_{n=1}^N X_n² − µ̂_X²

   ∴ σ̂²_{X,unb} = [N/(N − 1)] σ̂²_X

So far we have seen how to estimate statistics such as the mean and variance of a random variable. These ideas can be extended to estimation of any moments of the random variable. But suppose we want to know the actual density itself. How do we do this? We need to compute the histogram.

Consider a sequence of i.i.d. r.v.'s with density f_X(x). We divide the domain of X into intervals

   I_k = {x : (k − 1/2)Δ ≤ x ≤ (k + 1/2)Δ}
We then define the indicator r.v.

   U^k = 1, X ∈ I_k
       = 0, else

We observe a sequence of i.i.d. r.v.'s X_n, n = 1, ..., N and compute the corresponding r.v.'s U^k_n. Finally we let

   Z^k = (1/N) Σ_{n=1}^N U^k_n

Since this r.v. has the same form as the estimator of the mean which we analyzed earlier, we know that

   E{Z^k} = E{U^k_n},   σ²_{Z^k} = (1/N) σ²_{U^k}

Now what is E{U^k_n}?

   E{U^k_n} = 1 · P{X_n ∈ I_k} + 0 · P{X_n ∉ I_k}
            = P{X_n ∈ I_k}
            = ∫_{(k−1/2)Δ}^{(k+1/2)Δ} f_X(x) dx

Denote this quantity Ū^k. So we see that we are estimating the integral of the density function over the range (k − 1/2)Δ ≤ x ≤ (k + 1/2)Δ. By the mean value theorem for integrals, there exists τ with (k − 1/2)Δ ≤ τ ≤ (k + 1/2)Δ such that

   f_X(τ) Δ = ∫_{(k−1/2)Δ}^{(k+1/2)Δ} f_X(x) dx = E{Z^k}

Assuming f_X(x) is continuous, as Δ → 0,

   E{Z^k} = f_X(τ) Δ → f_X(kΔ) Δ
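A minimal histogram-based density estimate along these lines, using a Gaussian test density (an illustrative choice, not from the text):

```python
import numpy as np

# Histogram density estimate: Z_k = (count in bin I_k)/N estimates P{X in I_k},
# and Z_k / Delta approximates f_X at the bin center.  Gaussian test case.
rng = np.random.default_rng(2)
N, Delta = 500_000, 0.1
X = rng.normal(0.0, 1.0, N)

edges = np.arange(-4.0, 4.0 + Delta, Delta)
counts, _ = np.histogram(X, bins=edges)
Z = counts / N                     # estimates P{X in I_k}
f_hat = Z / Delta                  # density estimate at bin centers
centers = edges[:-1] + Delta / 2

f_true = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(f_hat - f_true)))
```

With N = 500 000 samples and Δ = 0.1, the estimate tracks the true density to within about 0.01 everywhere.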
What is the variance of U^k_n?

   E{(U^k_n)²} = 1² · P{X ∈ I_k} + 0² · P{X ∉ I_k} = P{X ∈ I_k} = Ū^k

so

   σ²_{U^k} = Ū^k − (Ū^k)² = Ū^k (1 − Ū^k)

We would expect to operate in a range where Δ is small, so Ū^k is small and

   σ²_{U^k} ≈ Ū^k

For fixed f_X(x), Ū^k = P{X ∈ I_k} will increase with increasing Δ, so there is a tradeoff between measurement noise and measurement resolution. Of course, for fixed Δ and f_X(x), we can always improve our result by increasing N.

8.6 Filtering of Random Sequences

Let us define

   µ_X(n) = E{X_n}
   r_XX(m, n) = E{X_m X_n}
Special case: a process is w.s.s. (wide-sense stationary) if

1. µ_X(n) ≡ µ_X
2. r_XX(m, n) = r_XX(n − m, 0) ≡ r_XX(n − m)

Convention: r_AB[a, b] = r_AB[b − a].

Consider Y[n] = Σ_m h[n − m] X[m]. Then

   E{Y_n} = Σ_m h[n − m] E{X_m}

If X_n is w.s.s.,

   E{Y_n} = Σ_m h[n − m] µ_X = µ_X Σ_n h[n]

which is a constant.
To find r_YY, we first define the cross-correlation

   r_XY[m, n] = E{X_m Y_n}
              = E{X_m Σ_k h[n − k] X_k}
              = Σ_k h[n − k] E{X_m X_k}
              = Σ_k h[n − k] r_XX[k − m]

Let l = k − m ⇒ k = m + l:

   r_XY[m, n] = Σ_l h[n − (m + l)] r_XX(l) = Σ_l h[n − m − l] r_XX(l)

   ∴ r_XY[m, n] = r_XY[n − m]

Then

   r_YY[m, n] = E{Y_m Y_n}
              = E{Y_m Σ_k h[n − k] X_k}
              = Σ_k h[n − k] E{Y_m X_k}
              = Σ_k h[n − k] r_XY[m − k]

Let l = m − k ⇒ k = m − l:

   r_YY[m, n] = Σ_l h[n − (m − l)] r_XY(l)
              = Σ_l h[n − m + l] r_XY(l)
              = Σ_l h[n − m − l] r_XY(−l)

so r_YY[m, n] = r_YY[n − m].

Summarizing: if X is w.s.s., then Y is also w.s.s.
Interpretation of Correlation

Recall that for two different random variables X and Y, we defined the covariance

   σ²_XY = E{(X − µ_X)(Y − µ_Y)}

and the correlation coefficient

   ρ_XY = σ²_XY / (σ_X σ_Y)

as measures of how related X and Y are. Since X[m] and X[m + n], or X[m] and Y[m + n], are just two different pairs of r.v.'s, the same interpretation applies, and we can see that

   r_XX[n] = E{X[m] X[m + n]}
   r_XY[n] = E{X[m] Y[m + n]}

are closely related to covariance and correlation coefficient. Since the mean of a w.s.s. process is just a constant offset or shift, we can let it equal zero without loss of generality. So suppose X and Y have zero mean. How related are X_m and X_{m+n}, or Y_m and Y_{m+n}? That is, do they tend to vary together?

Let's consider some examples. Suppose X[n] is i.i.d. with zero mean. Then

   r_XX[n] = E{X[m] X[m + n]} = σ²_X, n = 0
                              = 0,    else

i.e. r_XX[n] = σ²_X δ[n].

Now suppose

   Y[n] = Σ_{m=0}^{7} 2^{−m} X[n − m]

This is a filtering operation with h[n] = 2^{−n} (u[n] − u[n − 8]). We already know that

   r_XY[n] = h[n] ∗ r_XX[n] = σ²_X h[n]
What about the sequence Y[n]? Is it i.i.d.? Recall

   r_YY[n] = h[n] ∗ r_XY[−n]
           = h[n] ∗ σ²_X h[−n]
           = σ²_X Σ_m h[n − m] h[−m]
           = σ²_X Σ_m h[n + m] h[m]

Cases:

1. n < −7: the shifted copies of h do not overlap, so

   r_YY[n] = 0

2. −7 ≤ n ≤ 0:

   r_YY[n] = σ²_X Σ_{m=−n}^{7} 2^{−(n+m)} 2^{−m}
           = σ²_X 2^{−n} Σ_{m=−n}^{7} 2^{−2m}

   Substituting l = m + n:

           = σ²_X 2^{n} Σ_{l=0}^{n+7} 2^{−2l}
           = σ²_X 2^{n} (1 − 2^{−2(n+8)}) / (1 − 2^{−2})
           = (4/3) σ²_X [2^{n} − 2^{−n−16}]

3. 0 ≤ n ≤ 7:

   r_YY[n] = σ²_X Σ_{m=0}^{7−n} 2^{−(n+m)} 2^{−m}
           = σ²_X 2^{−n} Σ_{m=0}^{7−n} 2^{−2m}
           = σ²_X 2^{−n} (1 − 2^{−2(8−n)}) / (1 − 2^{−2})
           = (4/3) σ²_X [2^{−n} − 2^{n−16}]
4. n > 7: again no overlap, so

   r_YY[n] = 0

Summarizing:

   r_YY[n] = 0,                               n < −7
           = (4/3) σ²_X [2^{n} − 2^{−n−16}],  −7 ≤ n ≤ 0
           = (4/3) σ²_X [2^{−n} − 2^{n−16}],  0 ≤ n ≤ 7
           = 0,                               n > 7

So Y[n] is not i.i.d.: the filter has introduced correlation between samples up to 7 lags apart.
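The piecewise result above can be checked numerically, since r_YY[n] = σ²_X Σ_m h[m] h[m + n] is just the deterministic autocorrelation of h:

```python
import numpy as np

# Check the piecewise formula for r_YY[n] against the direct computation
# r_YY[n] = sigma_X^2 * sum_m h[m] h[m+n], with h[n] = 2^{-n}(u[n]-u[n-8]).
sigma2 = 1.0
h = 2.0 ** -np.arange(8)               # h[0..7] = 1, 1/2, ..., 1/128

# np.correlate(h, h, 'full') gives sum_m h[m] h[m+n] over lags n = -7..7
# (the autocorrelation is even in n, so the lag ordering does not matter here)
r_direct = sigma2 * np.correlate(h, h, mode="full")
n = np.arange(-7, 8)

r_formula = np.where(
    n <= 0,
    (4.0 / 3.0) * sigma2 * (2.0 ** n - 2.0 ** (-n - 16)),
    (4.0 / 3.0) * sigma2 * (2.0 ** -n - 2.0 ** (n - 16)),
)
print(np.max(np.abs(r_direct - r_formula)))
```

The two agree to machine precision, e.g. at n = 0 both give (4/3)σ²_X(1 − 2⁻¹⁶) and at n = 7 both give σ²_X · 2⁻⁷.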
8.7 Estimating Correlation

Assume X and Y are jointly w.s.s. processes:

   E{X_n} = µ_X,  E{Y_n} = µ_Y
   r_XY[m, m + n] = E{X[m] Y[m + n]} = r_XY[n]

Suppose we form the sum

   Z[n] = (1/M) Σ_{m=0}^{M−1} X[m] Y[m + n]

Consider

   E{Z[n]} = (1/M) Σ_{m=0}^{M−1} E{X[m] Y[m + n]}
           = (1/M) Σ_{m=0}^{M−1} r_XY[n]
           = r_XY[n]

So we have an unbiased estimator of correlation. Let's look a little more closely at what the computation involves. Suppose we compute

   r̂_XY[n] = Z[n]

for the interval −N ≤ n ≤ N. The data used are:

   For n = −N:  X[0], X[1], ..., X[M − 1] and Y[−N], Y[−N + 1], ..., Y[−N + M − 1]
   For n = 0:   X[0], X[1], ..., X[M − 1] and Y[0], Y[1], ..., Y[M − 1]
   For n = N:   X[0], X[1], ..., X[M − 1] and Y[N], Y[N + 1], ..., Y[N + M − 1]

In each case, we used the same data set for X, but the data used for Y varied.
Suppose instead we collect one record each of X and Y,

   X[0], ..., X[M − 1] and Y[0], ..., Y[M − 1]

and use this data to compute the estimate for every n. Let

   X_tr[n] = X[n] {u[n] − u[n − M]}
   Y_tr[n] = Y[n] {u[n] − u[n − M]}

Then let

   V[n] = Σ_m X_tr[m] Y_tr[m + n]
        = Σ_{m=0}^{M−1} X[m] Y[m + n] {u[m + n] − u[m + n − M]}

   E{V[n]} = Σ_{m=0}^{M−1} E{X[m] Y[m + n]} {u[m + n] − u[m + n − M]}
           = r_XY[n] Σ_{m=0}^{M−1} {u[m + n] − u[m + n − M]}
           = r_XY[n] (M − |n|),   |n| ≤ M − 1

since the sum simply counts the values of m for which both factors lie inside the record. So an unbiased estimate of correlation is given by

   r̂_XY[n] = (1/(M − |n|)) Σ_{m=0}^{M−1} X_tr[m] Y_tr[m + n]

∗ Cover image taken from website http://www.fmwconcepts.com/imagemagick
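A sketch of this estimator in NumPy, with illustrative signals (Y is a delayed, slightly noisy copy of X, so the estimate should peak at the delay):

```python
import numpy as np

# Sketch of the unbiased correlation estimate
#   r_hat_XY[n] = (1/(M - |n|)) * sum_m Xtr[m] Ytr[m+n]
# from a single record of M samples.  Signals are illustrative:
# Y[m] is roughly X[m-2], so the estimate should peak at lag n = 2.
rng = np.random.default_rng(3)
M = 4096
X = rng.normal(0.0, 1.0, M)
Y = np.roll(X, 2) + 0.1 * rng.normal(0.0, 1.0, M)    # Y[m] ~= X[m-2] + noise

# np.correlate(Y, X, 'full')[k] equals sum_m X[m] Y[m+n] at lag n = k - (M-1)
raw = np.correlate(Y, X, mode="full")
lags = np.arange(-50, 51)                            # inspect small lags only
r_hat = raw[(M - 1) + lags] / (M - np.abs(lags))     # unbiased normalization

best = lags[np.argmax(r_hat)]
print(best, r_hat.max())
```

Restricting attention to small |n| matters in practice: near |n| = M − 1 the divisor M − |n| is tiny, so those estimates are unbiased but extremely noisy.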