1438 Chapter 1
Houshou Chen
• Signals:
1. What is a signal
2. Classification of signals
3. Basic operations on signals
4. Elementary signals
• Systems:
1. What is a system
2. Classification of systems
3. LTI systems: circuit example
4. More examples and motivation
Signals
Classification of signals
Any signal x(t) can be decomposed into an even part x_e(t) and
an odd part x_o(t) by

    x(t) = (1/2)[x(t) + x(−t)] + (1/2)[x(t) − x(−t)],

where

    x_e(t) = (1/2)[x(t) + x(−t)]   and   x_o(t) = (1/2)[x(t) − x(−t)].
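As a quick numerical check of this decomposition (a minimal sketch; the sampled test signal and variable names below are illustrative, not from the notes):

```python
import numpy as np

# Sample an arbitrary test signal on a symmetric time grid so that
# x(-t) is available simply by reversing the sample vector.
t = np.linspace(-5, 5, 1001)
x = np.exp(-0.3 * t) * np.cos(2 * t)           # illustrative signal

x_rev = x[::-1]                                # samples of x(-t)
x_even = 0.5 * (x + x_rev)                     # x_e(t) = [x(t) + x(-t)] / 2
x_odd = 0.5 * (x - x_rev)                      # x_o(t) = [x(t) - x(-t)] / 2

# The two parts reconstruct x(t); x_e is even and x_o is odd.
assert np.allclose(x_even + x_odd, x)
assert np.allclose(x_even, x_even[::-1])
assert np.allclose(x_odd, -x_odd[::-1])
```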
• There are many similarities between x(t) and x[n] , but there is
one important difference.
• For continuous time, x(t) = e^{jω_0 t}, and we have:
  1. e^{jω_1 t} ≠ e^{jω_2 t} if ω_1 ≠ ω_2, i.e., any two signals with
     different frequencies are distinct.
  2. ω_1 > ω_2 ⇒ e^{jω_1 t} oscillates faster than e^{jω_2 t}.
  3. e^{jω_0 t} is periodic for any ω_0, with period T_0 = 2π/ω_0.
• The above three properties are not true for the discrete-time signal
  x[n] = e^{jΩ_0 n}.
  1. For a discrete-time signal, we have
     x[n] = e^{j(Ω_0 + 2π)n} = e^{jΩ_0 n} · e^{j2πn} = e^{jΩ_0 n},
     i.e., the signal x[n] at frequency (Ω_0 + 2π) is the same as that at
     frequency Ω_0, unlike the continuous case where e^{jω_1 t} ≠ e^{jω_2 t}
     if ω_1 ≠ ω_2.
  2. That is, for continuous-time signals the e^{jω_0 t} are all distinct for
     distinct ω_0. In discrete time, on the other hand,
     x[n] = e^{jΩ_0 n} = e^{j(Ω_0 + 2mπ)n} for any m ∈ Z
     ⇒ we only need to consider a frequency interval of length 2π,
     usually −π ≤ Ω < π or 0 ≤ Ω < 2π.
Figure 1: (−1)^n
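A small numerical illustration of this 2π-periodicity in discrete time (a sketch; the particular Ω_0 and the sample range are arbitrary choices, not from the notes):

```python
import numpy as np

n = np.arange(0, 32)
Omega0 = 0.7                                   # arbitrary discrete frequency

x1 = np.exp(1j * Omega0 * n)                   # e^{j Omega0 n}
x2 = np.exp(1j * (Omega0 + 2 * np.pi) * n)     # e^{j (Omega0 + 2 pi) n}

# The two sequences are identical sample by sample: discrete-time
# frequencies only matter modulo 2 pi.
assert np.allclose(x1, x2)

# By contrast, the continuous-time exponentials at w = Omega0 and
# w = Omega0 + 2 pi differ as functions of t (check at t = 0.5):
t = 0.5
print(np.exp(1j * Omega0 * t), np.exp(1j * (Omega0 + 2 * np.pi) * t))
```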
Operations on signals
• Operations on x(t): on the t-axis (independent variable) or on the
  x-axis (amplitude).
• y(t) = Ax(t) + B
  – Remark:
      |A| > 1: expand (A < 0: also reverse)
      |A| < 1: compress
      B > 0: shift up
      B < 0: shift down
• y(t) = 3x(t) + 4
Figure 2:
If we do
    y_1(t) = x(t) + B     (shift first)
    y_2(t) = A·y_1(t)     (scaling next)
⇒ y_2(t) = A(x(t) + B)
• Conclusion:
  – y(t) = Ax(t) + B: A first ⇒ B next
  – y(t) = A(x(t) + B): B first ⇒ A next
• On the independent variable t: y(t) = x(at + b)
  • Remark:
      |a| > 1: compress (a < 0: also reverse)
      |a| < 1: expand
      b > 0: shift left (advanced version)
      b < 0: shift right (delayed version)
Figure 3:
If we do
    y_1(t) = x(at)        (scaling first)
    y_2(t) = y_1(t + b)   (shift next)
⇒ y_2(t) = y_1(t + b) = x(a(t + b)) = x(at + ab)
Conclusion:
    y(t) = x(at + b): b first ⇒ a next
    y(t) = x(a(t + b)): a first ⇒ b next
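A small sketch of this ordering rule on a concrete signal (the test function x(t) and the values of a and b below are arbitrary choices, not from the notes):

```python
import numpy as np

def x(t):
    """Illustrative smooth test signal (Gaussian-windowed cosine)."""
    return np.exp(-t ** 2) * np.cos(3 * t)

a, b = 2.0, 1.5
t = np.linspace(-4, 4, 801)

# Route 1: shift first (t -> t + b), then scale (t -> a t).
shift_first = lambda s: x(s + b)               # w(t) = x(t + b)
y1 = shift_first(a * t)                        # w(a t) = x(a t + b)

# Route 2: scale first (t -> a t), then shift (t -> t + b/a).
scale_first = lambda s: x(a * s)               # v(t) = x(a t)
y2 = scale_first(t + b / a)                    # v(t + b/a) = x(a t + b)

# Both routes produce x(a t + b), matching the conclusion above.
assert np.allclose(y1, x(a * t + b))
assert np.allclose(y2, x(a * t + b))
```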
Figure 4:
Figure 5:
1. ramp function:
     r(t) = 0 for t ≤ 0,   r(t) = t for t ≥ 0
     r[n] = 0 for n ≤ 0,   r[n] = n for n ≥ 0
Figure 6:
2. unit step function:
     u(t) = 0 for t < 0,   u(t) = 1 for t ≥ 0
     u[n] = 0 for n = −1, −2, . . .,   u[n] = 1 for n = 0, 1, . . .
Figure 7:
• u(t) − u(t − 1)
• u(t − a) − u(t − b)
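These step differences carve out rectangular pulses; a minimal numerical sketch (the time grid and the values a = 1, b = 3 are illustrative assumptions):

```python
import numpy as np

def u(t):
    """Unit step: u(t) = 1 for t >= 0 and 0 otherwise."""
    return (t >= 0).astype(float)

t = np.linspace(-2, 5, 701)

pulse01 = u(t) - u(t - 1)          # equals 1 on [0, 1), 0 elsewhere
a, b = 1.0, 3.0
pulse_ab = u(t - a) - u(t - b)     # equals 1 on [a, b), 0 elsewhere

# Both differences are rectangular pulses of unit height,
# and the second one has area b - a.
assert set(np.unique(pulse01)) <= {0.0, 1.0}
assert np.isclose(pulse_ab.sum() * (t[1] - t[0]), b - a, atol=0.05)
```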
Figure 8:
Figure 9:
Figure 10:
Figure 11:
3. impulse function:
     δ(t) = 0 for t ≠ 0, and δ(t) = "1·∞" at t = 0 (unit area)   (impulse function)
     δ[n] = 0 for n ≠ 0, and δ[n] = 1 at n = 0   (delta function)
Figure 12:
2. sifting property:
     ∫_{−∞}^{∞} x(t) δ(t − t_0) dt = x(t_0)
     ∫_{a}^{b} x(t) δ(t − t_0) dt = x(t_0) if t_0 ∈ [a, b], and 0 otherwise
Figure 13:
3. δ(at) = (1/|a|) δ(t)
Figure 14:
4. δ(at + b) = δ(a(t + b/a)) = (1/|a|) δ(t + b/a)
Figure 15:
• x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ
  or  = ∫_{−∞}^{∞} x(t − τ) δ(τ) dτ    (continuous-time)
• x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k]
  or  = Σ_{k=−∞}^{∞} δ[k] x[n − k]     (discrete-time)
We see that any signal x(t) (or x[n]) can be written as a "linear
combination" of δ(t) (δ[n]) and its shifted versions δ(t − τ) (δ[n − k]),
i.e., a linear integral in continuous time and a linear sum in discrete time.
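A discrete-time sketch of this decomposition (the example sequence is arbitrary, not from the notes):

```python
import numpy as np

def delta(n):
    """Unit sample delta[n]: 1 at n = 0, else 0."""
    return (n == 0).astype(float)

n = np.arange(-5, 6)
x = np.array([0, 0, 1, 3, -2, 5, 4, 0, 1, 0, 0], dtype=float)  # arbitrary x[n]

# Rebuild x[n] as the weighted sum of shifted impulses: sum_k x[k] delta[n - k].
x_rebuilt = np.zeros_like(x)
for k, xk in zip(n, x):
    x_rebuilt += xk * delta(n - k)

assert np.allclose(x_rebuilt, x)
```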
• Remark:
      r′(t) = u(t),     u′(t) = δ(t)
      ∫_{−∞}^{t} δ(τ) dτ = u(t),     ∫_{−∞}^{t} u(τ) dτ = r(t)
  Also, with the discrete-time definitions above,
      r[n] − r[n − 1] = u[n − 1],     u[n] − u[n − 1] = δ[n]
      Σ_{k=−∞}^{n} δ[k] = u[n],     Σ_{k=−∞}^{n} u[k] = r[n + 1]
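The discrete-time relations are easy to check with running sums and first differences (a sketch over a finite index range; the range is an arbitrary choice and the signals start at zero so the running sums are exact):

```python
import numpy as np

n = np.arange(-5, 11)
delta = (n == 0).astype(float)                 # delta[n]
u = (n >= 0).astype(float)                     # u[n]
r = np.where(n >= 0, n, 0).astype(float)       # r[n]

# Running sum of delta[n] gives u[n]; running sum of u[n] gives r[n + 1].
assert np.allclose(np.cumsum(delta), u)
assert np.allclose(np.cumsum(u)[:-1], np.where(n[1:] >= 0, n[1:], 0))

# First differences invert the running sums:
assert np.allclose(np.diff(u), delta[1:])      # u[n] - u[n-1] = delta[n]
assert np.allclose(np.diff(r), u[:-1])         # r[n] - r[n-1] = u[n-1]
```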
• Similarly, we have
      u(t) = ∫_{−∞}^{∞} u(τ) δ(t − τ) dτ = ∫_{0}^{∞} δ(t − τ) dτ,
      u[n] = Σ_{k=−∞}^{n} δ[k]
• Similarly, we have
      u[n] = Σ_{k=−∞}^{∞} δ[n − k] u[k] = Σ_{k=0}^{∞} δ[n − k]
System
Figure 16:
Figure 17:
How to describe the relationship between the input v_i(t) and the
output v_o(t)?
Classification of systems
⇔ H{ Σ_{i=1}^{n} c_i x_i(t) } = Σ_{i=1}^{n} c_i H{x_i(t)}   (superposition)
Figure 18:
Figure 19:
x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ
⇒ y(t) = H{x(t)} = H{ ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ }
       = ∫_{−∞}^{∞} x(τ) H{δ(t − τ)} dτ = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ
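In discrete time the same argument gives y[n] = Σ_k x[k] h[n − k]; a minimal sketch comparing the explicit superposition sum with np.convolve (the sequences are arbitrary choices):

```python
import numpy as np

x = np.array([1.0, 2.0, 0.0, -1.0])            # input x[n], n = 0..3
h = np.array([0.5, 0.25, 0.25])                 # impulse response h[n], n = 0..2

# Explicit superposition sum: y[n] = sum_k x[k] h[n - k]
N = len(x) + len(h) - 1
y = np.zeros(N)
for n in range(N):
    for k in range(len(x)):
        if 0 <= n - k < len(h):
            y[n] += x[k] * h[n - k]

# Same result from the library convolution.
assert np.allclose(y, np.convolve(x, h))
print(y)
```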
2. s-domain: transfer function H(s)
x(t) = ∫_{−∞}^{∞} X(ω) e^{jωt} dω ⇒ y(t) = H{x(t)}
     = ∫_{−∞}^{∞} X(ω) H{e^{jωt}} dω = ∫_{−∞}^{∞} X(ω) H(ω) e^{jωt} dω
For x(t) = e^{st}:
     y(t) = ∫_{−∞}^{∞} h(τ) e^{s(t−τ)} dτ = ( ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ ) e^{st} = H(s) e^{st}   (= H(s) x(t))
For x[n] = z^n:
     y[n] = ( Σ_k h[k] z^{−k} ) z^n = H(z) z^n   (= H(z) x[n])
Or equivalently,
A(v_1 v_2 · · · v_n) = (Av_1, Av_2, · · ·, Av_n) = (λ_1 v_1, λ_2 v_2, · · ·, λ_n v_n)
                     = (v_1 v_2 · · · v_n) · diag(λ_1, · · ·, λ_n)  ⇒  AV = VD  ⇒  A = VDV^{−1}
y = VDV^{−1} x ⇒ V^{−1} y = D V^{−1} x ⇒ ỹ = D x̃   (with ỹ = V^{−1}y, x̃ = V^{−1}x)
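A numerical sketch of the eigenfunction property in discrete time: feeding x[n] = z^n through an FIR impulse response returns H(z)·z^n (the filter taps and the value of z below are arbitrary assumptions):

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])                  # FIR impulse response h[0..2]
z = 0.9 * np.exp(1j * 0.4)                     # arbitrary complex z

n = np.arange(0, 40)
x = z ** n                                     # x[n] = z^n

# LTI output y[n] = sum_k h[k] x[n - k] (x[n] = z^n defined for all n).
y = np.array([sum(h[k] * z ** (m - k) for k in range(len(h))) for m in n])

# Eigenvalue H(z) = sum_k h[k] z^{-k}; then y[n] = H(z) * z^n.
Hz = sum(h[k] * z ** (-k) for k in range(len(h)))
assert np.allclose(y, Hz * x)
```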
Figure 20:
Or block diagram
Figure 21:
From circuit theory, we have
    V_R(t) = R·i(t)
    V_L(t) = L di(t)/dt
    i(t) = C dV_C(t)/dt  ⇒  di(t)/dt = C d²V_C(t)/dt²
so
    V_L(t) = L·C d²V_C(t)/dt²
Finally, we have
    L·C d²V_C(t)/dt² + R·C dV_C(t)/dt + V_C(t) = V_s(t)
⇒  V_C″(t) + (R/L) V_C′(t) + (1/LC) V_C(t) = (1/LC) V_s(t)
i.e., with x(t) = V_s(t) and y(t) = V_C(t),
    y″(t) + (R/L) y′(t) + (1/LC) y(t) = (1/LC) x(t)
In general, y(t) for t ≥ t_0 depends on both the initial state s(t_0) and
the input x(τ), τ ≥ t_0, so we write
    y(t) = F(s(t_0); x(τ), τ ≥ t_0),
and then ZIR(t) = F(s(t_0); 0) (zero-input response) and
ZSR(t) = F(0; x(τ), τ ≥ t_0) (zero-state response).
1. For a linear system, the complete system response = ZIR + ZSR.
2. We will assume s(t_0) = 0 from now on and turn our attention back to
   the ZIR when we discuss the Laplace transform.
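As a sketch of the zero-state response of this second-order system, the ODE y″ + (R/L)y′ + (1/LC)y = (1/LC)x can be integrated numerically from rest; the component values and the step input below are illustrative assumptions, not from the notes:

```python
import numpy as np

R, L, C = 1.0, 0.5, 0.1                        # illustrative component values
a1, a0, b0 = R / L, 1.0 / (L * C), 1.0 / (L * C)

dt = 1e-4
t = np.arange(0.0, 6.0, dt)
x = np.ones_like(t)                            # unit-step source V_s(t) = u(t)

# Semi-implicit Euler integration of y'' + a1 y' + a0 y = b0 x, zero initial state.
y = np.zeros_like(t)
yp = 0.0                                       # y'(t)
for k in range(len(t) - 1):
    ypp = b0 * x[k] - a1 * yp - a0 * y[k]      # solve the ODE for y''
    yp += dt * ypp
    y[k + 1] = y[k] + dt * yp

# The zero-state step response settles at the DC gain b0/a0 = 1.
print(y[-1])                                   # approximately 1.0
```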
⇒ A = VDV^{−1}
x′(t) = VDV^{−1} x(t); letting y(t) = V^{−1} x(t) gives y′(t) = D y(t), so
    y_1(t) = c_1 e^{λ_1 t},   y_2(t) = c_2 e^{λ_2 t}
⇒ (x_1(t), x_2(t))^T = (v_1 v_2) (y_1(t), y_2(t))^T
⇒ (x_1(t), x_2(t))^T = (v_1 v_2) (c_1 e^{λ_1 t}, c_2 e^{λ_2 t})^T = c_1 e^{λ_1 t} v_1 + c_2 e^{λ_2 t} v_2
Summary
From the above example, we can see that there are several ways to
describe the relationship between the input x(t) and the output y(t)
for an LTI system x(t) ↔ y(t). These are:
1. Block diagram
Figure 22:
    y″(t) + (R/L) y′(t) + (1/LC) y(t) = (1/LC) x(t)
    (characteristic equation λ² + (R/L)λ + 1/LC = 0, two roots)
    where the homogeneous solution is
        c_1 e^{λ_1 t} + c_2 e^{λ_2 t}    (λ_1 ≠ λ_2, both real)
4. our focus:
     – time domain: h(t), the impulse response
     – frequency domain: H(s), the transfer function
Phasors
R:  V_R(t) = R·i(t)
    i(t) = e^{jωt} ⇒ V_R(t) = R e^{jωt} ⇒ Z_R = R   (independent of ω)
L:  V_L(t) = L di(t)/dt
    i(t) = e^{jωt} ⇒ V_L(t) = L·jω e^{jωt} ⇒ Z_L = jωL   (sL)
C:  i(t) = C dV_C(t)/dt
    V_C(t) = e^{jωt} ⇒ i(t) = jωC e^{jωt} ⇒ Z_C = 1/(jωC)   (1/(sC))
Figure 23:
V_C = [1/(jωC)] / [1/(jωC) + R + jωL] · V_s    (multiply top and bottom by jω/L)
    = H(jω) · V_s,
i.e.,  H(jω) = (1/LC) / [(jω)² + (R/L)(jω) + 1/LC]
or     H(s)  = (1/LC) / [s² + (R/L)s + 1/LC]
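A short sketch evaluating this H(jω), using the same illustrative R, L, C values assumed earlier (they are not from the notes):

```python
import numpy as np

R, L, C = 1.0, 0.5, 0.1                        # illustrative component values

def H(jw):
    """Transfer function H(s) = (1/LC) / (s^2 + (R/L) s + 1/LC), evaluated at s = jw."""
    return (1.0 / (L * C)) / (jw ** 2 + (R / L) * jw + 1.0 / (L * C))

w = np.array([0.1, 1.0, np.sqrt(1.0 / (L * C)), 10.0, 100.0])
for wi in w:
    print(f"w = {wi:8.3f}   |H(jw)| = {abs(H(1j * wi)):.4f}")

# At w -> 0 the gain is 1 (DC); near w = 1/sqrt(LC) the response peaks
# for small damping; for large w it rolls off like 1/w^2.
```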
Figure 24:
Figure 25:
⇒ [ (jω)² + (R/L)(jω) + 1/LC ] H(jω) e^{jωt} = (1/LC) e^{jωt}
⇒ H(jω) = (1/LC) / [ (jω)² + (R/L)(jω) + 1/LC ]
This is always true for any nth-order linear constant-coefficient ODE.
That is, given a differential equation for an LTI system,
    Σ_{i=0}^{n} a_i y^{(i)}(t) = Σ_{j=0}^{m} b_j x^{(j)}(t),
substitute x(t) = e^{jωt}, y(t) = H(jω) e^{jωt} into the ODE and use the fact
(d^i/dt^i) e^{jωt} = (jω)^i e^{jωt}; we then have
    H(s) = (b_m s^m + · · · + b_1 s + b_0) / (a_n s^n + · · · + a_1 s + a_0)
Figure 26:
Usually H(s) = N(s)/D(s)   (deg D(s) = n)
             = A_1/(s + p_1) + A_2/(s + p_2) + · · · + A_n/(s + p_n)
(assuming D(s) has n distinct roots; by P.F.E., partial fraction expansion)
⇒ h(t) = L^{−1}{H(s)}
       = A_1 e^{−p_1 t} u(t) + A_2 e^{−p_2 t} u(t) + · · · + A_n e^{−p_n t} u(t)
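A numerical sketch of this partial-fraction route for a rational H(s) with distinct real poles; the coefficients below are arbitrary, and the residues are computed with the standard formula A_i = N(s_i)/D′(s_i) at each pole s_i:

```python
import numpy as np

# H(s) = N(s)/D(s); here D(s) = (s + 1)(s + 3) = s^2 + 4 s + 3 (distinct real poles).
N = np.array([2.0])                            # N(s) = 2
D = np.array([1.0, 4.0, 3.0])                  # D(s) = s^2 + 4 s + 3

poles = np.roots(D)                            # pole locations s_i = -p_i
Dp = np.polyder(D)                             # D'(s)
residues = np.polyval(N, poles) / np.polyval(Dp, poles)   # A_i = N(s_i)/D'(s_i)

# h(t) = sum_i A_i exp(s_i t) u(t), evaluated for t >= 0.
t = np.linspace(0.0, 5.0, 501)
h = sum(A * np.exp(s * t) for A, s in zip(residues, poles)).real

# Cross-check: for H(s) = 2/((s+1)(s+3)) the impulse response is e^{-t} - e^{-3t}.
assert np.allclose(h, np.exp(-t) - np.exp(-3 * t))
```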
In general,
    block diagram > differential system > O.D.E. > h(t) (H(s)),
where > means "provides more information".
In signals & systems, we study the zero-state response.
Figure 27:
        ⎡ a  d  c  b ⎤
    A = ⎢ b  a  d  c ⎥    (circulant matrix)
        ⎢ c  b  a  d ⎥
        ⎣ d  c  b  a ⎦
• How to find the eigenvectors and eigenvalues for the circulant
matrix A?
• We can use the fact that A represents a discrete-time LTI system
to find the eigenvectors and eigenvalues.
    y[n] = Σ_{k=1}^{N} x[k] h[n − k]
Av = λv ⇒ (λI − A)v = 0 with v ≠ 0.
Therefore λI − A must be a singular matrix, i.e.,
det(λI − A) = 0   (the characteristic polynomial).
This is a polynomial of degree n if A is an n × n matrix.
In general, it is not easy to find the eigenvalues of a given n × n
matrix A.
    v_3 = [1, −1, 1, −1]^T = (e^{j(2π/4)·2·n})_{0≤n≤3} = (i^{2n})_{0≤n≤3}
    v_4 = [1, −i, −1, i]^T = (e^{j(2π/4)·3·n})_{0≤n≤3} = (i^{3n})_{0≤n≤3}
are eigenvectors of A.
eigenvalue of v_1:  a + d + c + b
eigenvalue of v_2:  (a − c) + i(d − b)
eigenvalue of v_3:  (a + c) − (d + b)
eigenvalue of v_4:  (a − c) − i(d − b)
Let e_i = (1/√4) v_i.
⇒ {e_1, e_2, e_3, e_4} are orthonormal eigenvectors of A.
In other words, we have A [e_1, e_2, e_3, e_4] = [e_1, e_2, e_3, e_4] D, i.e., AV = VD,
where V = [e_1, e_2, e_3, e_4], D = diag(λ_1, λ_2, λ_3, λ_4),
and λ_i is the eigenvalue associated with e_i.
Then
    (Ax)[n] = Σ_{k=0}^{3} h[n − k] x[k] = Σ_{k=0}^{3} h[k] x[n − k]
• i.e., (e^{j(2π/4)·n·k_0})_{0≤n≤3}, 0 ≤ k_0 ≤ 3, is an eigenvector of A with
  eigenvalue Σ_k h[k] e^{−j(2π/4)·k·k_0}.
    (1/√N)(e^{j(2π/N)·0·n})_{0≤n≤N−1} = e_1
    (1/√N)(e^{j(2π/N)·1·n})_{0≤n≤N−1} = e_2
      ⋮
    (1/√N)(e^{j(2π/N)·(N−1)·n})_{0≤n≤N−1} = e_N    (N eigenvectors)
Similarly, if we let x[n] = e^{j(2π/N)·n·k_0}, 0 ≤ k_0 ≤ N − 1,
⇒ Ax = Σ_k h[k] e^{j(2π/N)(n−k)k_0}
     = ( Σ_k h[k] e^{−j(2π/N)·k·k_0} ) · e^{j(2π/N)·n·k_0} = λ · x[n],
where λ = Σ_k h[k] e^{−j(2π/N)·k·k_0}.
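A small numerical sketch of this fact: the DFT vectors diagonalize any circulant matrix, and the eigenvalues are the DFT of its first column. The 4×4 values below are arbitrary; with h = (a, b, c, d) as the first column, the constructed matrix matches the (a d c b; b a d c; . . .) pattern above.

```python
import numpy as np

# Build a 4x4 circulant matrix from an arbitrary first column h = (a, b, c, d).
h = np.array([2.0, -1.0, 0.5, 3.0])
N = len(h)
A = np.array([[h[(n - k) % N] for k in range(N)] for n in range(N)])

# DFT vectors v_{k0}[n] = exp(j 2 pi n k0 / N); eigenvalues are the DFT of h.
n = np.arange(N)
for k0 in range(N):
    v = np.exp(1j * 2 * np.pi * n * k0 / N)
    lam = np.sum(h * np.exp(-1j * 2 * np.pi * n * k0 / N))   # sum_k h[k] e^{-j 2 pi k k0 / N}
    assert np.allclose(A @ v, lam * v)                        # A v = lambda v
```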
Also y = Ax = VDV^{−1} x ⇒ V^{−1} y = D V^{−1} x
⇒ ỹ = D x̃,   where ỹ = V^{−1}y = V^H y and x̃ = V^{−1}x = V^H x
   (V^{−1} = V^H since the e_i are orthonormal).