Professional Documents
Culture Documents
Linear System Theory, 2/E: Solutions Manual
Linear System Theory, 2/E: Solutions Manual
Wilson J. Rugh
Department of Electrical and Computer Engineering
Johns Hopkins University
PREFACE
With some lingering ambivalence about the merits of the undertaking, but with a bit more dedication than
the first time around, I prepared this Solutions Manual for the second edition of Linear System Theory. Roughly
40% of the exercises are addressed, including all exercises in Chapter 1 and all others used in developments in the
text. This coverage complements the 60% of those in an unscientific survey who wanted a solutions manual, and
perhaps does not overly upset the 40% who voted no. (The main contention between the two groups involved the
inevitable appearance of pirated student copies and the view that an available solution spoils the exercise.)
I expect that a number of my solutions could be improved, and that some could be improved using only
techniques from the text. Also the press of time and my flagging enthusiasm for text processing impeded the
crafting of economical solutions—some solutions may contain too many steps or too many words. However I
hope that the error rate in these pages is low and that the value of this manual is greater than the price paid.
Please send comments and corrections to the author at rugh@jhu.edu or ECE Department, Johns Hopkins
University, Baltimore, MD 21218 USA.
CHAPTER 1
Solution 1.1
(a) For k = 2, (A + B)2 = A 2 + AB + BA + B 2 . If AB = BA, then (A + B)2 = A 2 + 2AB + B 2 . In general if
AB = BA, then the k-fold product (A + B)k can be written as a sum of terms of the form A j B k−j , j = 0, . . . , k. The
k
number of terms that can be written as A j B k−j is given by the binomial coefficient . Therefore AB = BA
j
implies
k
Σ
k j k−j
(A + B)k = AB
j
j =0
(b) Write
det [λ I − A (t)] = λn + an−1 (t)λn−1 + . . . + a 1 (t)λ + a 0 (t)
where invertibility of A (t) implies a 0 (t) ≠ 0. The Cayley-Hamilton theorem implies
A n (t) + an−1 (t)A n−1 (t) + . . . + a 0 (t)I = 0
for all t. Multiplying through by A −1 (t) yields
_−a 1 (t)I −
. . . − an−1 (t)A n−2 (t) − A n−1 (t)
________________________________
A −1 (t) =
a 0 (t)
for all t. Since a 0 (t) = det [−A (t)], a 0 (t) = det A (t). Assume ε > 0 is such that det A (t) ≥ ε for all t. Since
A (t) ≤ α we have aij (t) ≤ α, and thus there exists a γ such that a j (t) ≤ γ for all t. Then, for all t,
a 1 (t)I + . . . + A n−1 (t)
______________________
A −1 (t) =
det A (t)
+ γ α + . . . + αn−1 ∆
_γ________________
≤ =β
ε
Solution 1.2
(a) If λ is an eigenvalue of A, then recursive use of Ap = λp shows that λk is an eigenvalue of A k . However to
show multiplicities are preserved is more difficult, and apparently requires Jordan form, or at least results on
similarity to upper triangular form.
(b) If λ is an eigenvalue of invertible A, then λ is nonzero and Ap = λp implies A −1 p = (1/ λ)p. As in (a),
addressing preservation of multiplicities is more difficult.
1 , . . . , λ__
(c) A T has eigenvalues λ__ n since det (λI − A ) = det (λI − A) = det (λI − A).
T T
(d) A H has eigenvalues λ1 , . . . , λn using (c) and the fact that the determinant (sum of products) of a conjugate is
the conjugate of the determinant. That is
-1-
Linear System Theory, 2/E Solutions Manual
_ _ ________
_
det (λ I − A H ) = det (λ I − A)H = det (λ I − A)
(e) α A has eigenvalues αλ1 , . . . , αλn since Ap = λp implies (α A)p = (αλ)p.
(f) Eigenvalues of A T A are not nicely related to eigenvalues of A. Consider the example
0 α 0 0
A= , ATA =
0 0 0 α
where the eigenvalues of A are both zero, and the eigenvalues of A T A are 0, α. (If A is symmetric, then (a)
applies.)
Solution 1.3
(a) If the eigenvalues of A are all zero, then det (λ I − A) = λn and the Cayley-Hamilton theorem shows that A is
nilpotent. On the other hand if one eigenvalue, say λ1 is nonzero, let p be a corresponding eigenvector. Then
A k p = λ k1 p ≠ 0 for all k ≥ 0, and A cannot be nilpotent. _
(b) Suppose Q is real and symmetric, and λ is an eigenvalue of Q. Then λ also _ _is_ an eigenvalue. From the
eigenvalue/eigenvector
_ equation Qp = λ p we get_ p H Qp = λ p H p. Also Qp = λ p, and _ transposing gives
p H Qp = λ p H p. Subtracting the two results gives (λ − λ)p H p = 0. Since p ≠ 0, this gives λ = λ, that is, λ is real.
(c) If A is upper triangular, then λ I − A is upper triangular. Recursive Laplace expansion of the determinant about
the first column gives
det (λ I − A) = (λ − a 11 ) . . . (λ − ann )
which implies the eigenvalues of A are the diagonal entries a 11 , . . . , ann .
Solution 1.4
(a)
0 0 1 0
A= implies A T A = implies A = 1
1 0 0 0
(b)
3 1 10 6
A= implies A T A =
1 3 6 10
Then
det (λI − A T A) = (λ − 16)(λ − 4)
which implies A = 4.
(c)
1−i 0 (1+i)(1−i) 0 2 0
A= implies A H A = =
0 1+i 0 (1−i)(1+i) 0 2
This gives A = √2 .
-2-
Linear System Theory, 2/E Solutions Manual
Solution 1.6 By definition of the spectral norm, for any α ≠ 0 we can write
______
A x
A = max A x = max
x = 1 x = 1 x
A α x
________ αA x
_________
= max = max
α x = 1 α x x = 1/α αx
Therefore
______
A x
A ≥
x
Solution 1.8 We use the following easily verified facts about partitioned vectors:
x1 x1 0
≥ x 1 , x 2 ; = x 1 , = x 2
x2 0 x2
Write
A 11 A 12 x1 A 11 x 1 + A 12 x 2
Ax = =
A 21 A 22 x2 A 21 x 1 + A 22 x 2
The other partitions are handled similarly. The last part is easy from the definition of induced norm. For example
if
-3-
Linear System Theory, 2/E Solutions Manual
0 A 12
A=
0 0
-4-
Linear System Theory, 2/E Solutions Manual
max x T A T A x = λmax (A T A)
x =1
Solution 1.12 Since A T A > 0 we have λi (A T A) > 0, i = 1, . . . , n, and (A T A)−1 > 0. Then by Exercise 1.11,
1
_________
A −1 2 = λmax ((A T A)−1 ) =
λmin (A T A)
n
Π λi (A T A) n −1
_[λ____________
T
__________________
i =1 max (A A)]
= ≤
λmin (A T A) . det (A T A) (det A)2
A 2(n−1)
_________
=
(det A)2
Therefore
A n−1
________
A −1 ≤
det A
Solution 1.13 Assume A ≠ 0, for the zero case is trivial. For any unity-norm x and y,
y T A x ≤ y T A x
Now let unity-norm xa be such that A xa = A , and let
Axa
_____
ya =
A
Therefore
max y T A x = A
x , y =1
Solution 1.14 The coefficients of the characteristic polynomial of a matrix are continuous functions of matrix
entries, since determinant is a continuous function of the entries (sum of products). Also the roots of a
polynomial are continuous functions of the coefficients. (A proof is given in Appendix A.4 of E.D. Sontag,
Mathematical Control Theory, Springer-Verlag, New York, 1990.) Since a composition of continuous functions
is a continuous function, the pointwise-in-t eigenvalues of A (t) are continuous in t.
This argument gives that the (nonnegative) eigenvalues of A T (t)A (t) are continuous in t. Then the maximum at
each t is continuous in t — plot two eigenvalues and consider their pointwise maximum to see this. Finally since
square root is a continuous function of nonnegative arguments, we conclude A (t) is continuous in t.
However for continuously-differentiable A (t), A (t) need not be continuously differentiable in t. Consider the
-5-
Linear System Theory, 2/E Solutions Manual
example
t 0
t , 0≤t ≤1
A (t) = , A (t) =
0 t2
t2 , 1 < t < ∞
Clearly the time derivative of A (t) is discontinuous at t = 1. (This overlaps Exercise 1.18 a bit.)
Also the eigenvalues of continuously-differentiable A (t) are not necessarily continuously differentiable, consider
0 1
A (t) =
−1 −t
Therefore
1
_ ______ 1
___
λmin (Q −1 ) = ≥
λmax (Q) ε2
1
_______ 1
___
λmax (Q −1 ) = ≤
λmin (Q) ε1
Solution 1.16 If W (t) − ε I is symmetric and positive semidefinite for all t, then for any x,
x T W (t) x ≥ ε x T x
for all t. At any value of t, let xt be an eigenvector corresponding to an eigenvalue (necessarily real) λt of W (t).
Then
x Tt W (t) xt = λt x Tt xt ≥ ε x Tt xt
That is λt ≥ ε. This holds for any eigenvalue of W (t) and every t. Since the determinant is the product of
eigenvalues,
det W (t) ≥ εn > 0
for any t.
-6-
Linear System Theory, 2/E Solutions Manual
Solution 1.17 Using the product rule to differentiate A (t) A −1 (t) = I yields
. _d_ A −1 (t) = 0
A (t) A −1 (t) + A (t)
dt
which gives
_d_ A −1 (t) = −A −1 (t) A. (t) A −1 (t)
dt
Solution 1.18 Assuming differentiability of both x (t) and x (t), and using the chain rule for scalar
functions,
_d_ _d_ x (t)
x (t)2 = 2x (t)
dt dt
_d_ x (t)
= 2x (t)
dt
Also we can write, using the product rule and the Cauchy-Schwarz inequality,
_d_ x (t)2 = _d_ x T (t) x (t) = x. T (t) x (t) + x T (t) x. (t) = 2x T (t) x. (t)
dt dt
.
≤ 2x (t)x (t)
Solution 1.19 To prove the contrapositive claim, suppose for each i, j there is a constant βij such that
t
Then by the inequality on page 7, noting that max fij (t) is a continuous function of t and taking the pointwise-
i, j
in-t maximum,
t t
0 0
t m n
≤ √mn
∫Σ Σ | fij (σ) d σ
0 i =1 j =1
n m
≤ √mn
Σ Σ βij < ∞ ,
i =1 j =1
t ≥0
k
The argument for Σ F ( j) is similar.
j =0
-7-
Linear System Theory, 2/E Solutions Manual
Solution 1.20 If λ(t), p (t) are a pointwise-in-t eigenvalue/eigenvector pair for A −1 (t), then
A −1 (t) p (t) = λ(t) p (t) = λ(t)p (t)
Note that
tb
∫ Q (σ ) d σ ≥ 0
ta
x T
∫ Q (σ ) d σ x = ∫ x T Q (σ ) x d σ ≥ 0
ta ta
Finally,
tb
∫ Q (σ ) d σ ≤ ε I
ta
∫ Q (σ) d σ ≤ ε
ta
Therefore
tb
∫ Q (σ) d σ ≤ n ε
ta
-8-
CHAPTER 2
.
Solution 2.3 The nominal solution for ũ(t) = sin (3t) is ỹ(t) = sin t. Let x 1 (t) = y (t), x 2 (t) = y (t) to write the
state equation
. x 2 (t)
x (t) =
−(4/ 3)x 31 (t) − (1/ 3)u (t)
Computing the Jacobians and evaluating gives the linearized state equation
. 0 1 0
x δ (t) = x (t) + u (t)
−4 sin2 t 0 δ −1/ 3 δ
y δ (t) = 1 0 x δ (t)
where
sin t 0
x δ (t) = x (t) − , u δ (t) = u (t) − sin (3t) , y δ (t) = y (t) − sin t , x δ (0) = x (0) −
cos t
1
, =
−1+2x 1 2x 2
∂x
∂u
1
evaluating at each of the constant nominals gives the corresponding 4 linearized state equations.
-9-
rank A = rank [ A b ].
Also, x̃ is a constant nominal with c x̃ = 0 if and only if
0 = A x̃ + bũ
0 = c x̃
that is, if and only if
−bũ
A
x̃ =
c 0
Solution 2.8
(a) Since
A B
C 0
is invertible, for any K
A + BK B A B I 0
=
C 0 C 0 K I
is invertible. Let
A + BK B
R1 R2
I 0
=
C 0
R3 R 4
0 I
Then the 1, 2-block gives R 2 = −(A + BK) BR 4 and the 2, 2-block gives CR 2 = I, that is, I = −C(A + BK)−1 BR 4
−1
-10-
Linear System Theory, 2/E Solutions Manual
A +Dũ bũ
Solution 2.12 For the given nominal input, nominal output, and nominal initial state, the nominal solution
satisfies
. 1
0
x̃ 1 (t) − x̃ 3 (t) , x̃(0) = −3
x̃ (t) =
−2
x̃ 2 (t) − 2 x̃ 3 (t)
1 = x̃ 2 (t) − 2 x̃ 3 (t)
Integrating for x̃ 1 (t) and then x̃ 3 (t) easily gives the nominal solution x̃ 1 (t) = t, x̃ 2 (t) = 2 t − 3, and x̃ 3 (t) = t − 2.
The corresponding linearized state equation is specified by
0 0 0 0
A = 1 0 −1 , B (t)= t , C = 0 1 −2
0 1 −2 0
It is unusual that the nominal input and nominal output are constants, but the linearization is time varying.
-11-
CHAPTER 3
Solution 3.2 Differentiating term k +1 of the Peano-Baker series using Leibniz rule gives
t σ1 σ2 σk
∂
___
∂τ
∫τ A (σ1 ) ∫τ A (σ2 ) ∫τ ... ∫τ A (σk +1 ) d σk +1 . . . d σ1
t σ2 σk τ τ
d d
= A (t) ∫ A (σ2 ) ∫τ ∫τ A (σk +1 ) d σk +1 A (τ) ∫ A (σ2 ) ∫ . . . d σk +1 . . . d σ2
. . . d σ2 ___ ___
... t− τ
τ
dτ τ τ
d τ
t σ1 σ2 σk
∂
+ ∫ A (σ 1 )
___
∂τ
∫τ A (σ2 ) ∫τ ... ∫τ A (σk +1 )
d σk +1 . . . d σ1
τ
t σ1 σ2 σk
∂
= ∫ A (σ 1 )
___
∂ τ
∫τ A (σ2 ) ∫τ ... ∫τ A (σk +1 )
d σk +1 . . . d σ1
τ
∂
___
∂τ
∫τ A (σ1 ) ∫τ A (σ2 ) ∫τ ... ∫τ A (σk +1 ) d σk +1 . . . d σ1
t σ1 σk−1 σk
∂
= ∫ A (σ 1 ) ∫τ ... ∫τ A (σ k )
___
∂τ
∫τ A (σk +1 ) d σk +1 d σk . . . d σ1
τ
t σ1 σk−1 σk
= ∫ A (σ 1 ) ∫τ ... ∫τ A (σ k ) 0 − A (τ) + ∫τ 0 d σk +1 d σk . . . d σ1
τ
t σ1 σ2 σk−1
= ∫ A (σ 1 ) ∫τ A (σ 2 ) ∫τ ... ∫τ A (σ k ) d σ k . . . d σ 1 − A (τ)
τ
Recognizing this as term k of the uniformly convergent series for −Φ(t, τ) A (τ) gives
∂
___ Φ(t, τ) = −Φ(t, τ) A (τ)
∂τ
(Of course it is simpler to use the formula for the derivative of an inverse matrix given in Exercise 1.17.)
-12-
Linear System Theory, 2/E Solutions Manual
Solution 3.6 Writing the state equation as a pair of scalar equations, the first one is
. −t
______
x 1 (t) = x 1 (t)
1 + t2
and an easy computation gives
x 1o
_________
x 1 (t) =
(1 + t 2 )1/2
Then the second scalar equation then becomes
. −4t
______ x 1o
_________
x 2 (t) = x 2 (t) +
1+t 2
(1 + t 2 )1/2
The complete solution formula gives, with some help from Mathematica,
t
. 1 (1 + σ2 )3/2
x 2o + ∫
________ _________ d σ x 1o
x 2 (t) = 2 2 2 2
(1 + t ) 0 (1 + t )
1
________ _√ (t 3 /4+5t/ 8)+(3/ 8) sinh−1 (t)
2
1+t
____________________________ x 1o
= x 2o +
(1 + t 2 )2 (1 + t 2 )2
If x 1o = 1, then as t →∞, x 2 (t) → 1/ 4, not zero.
− ∫ v ( τ) d τ
to
e
to obtain
t t
− ∫ v ( τ) d τ − ∫ v ( τ) d τ
_d_ r (t)e
to
≤ v (t)ψ(t)e
to
dt
− ∫ v ( τ) d τ t − ∫ v ( τ) d τ
≤ ∫ v (σ)ψ(σ)e dσ
to to
r (t)e
to
-13-
Linear System Theory, 2/E Solutions Manual
∫ v ( τ) d τ
t
eo
gives
t
t ∫ v ( τ) d τ
r (t) ≤ ∫ v (σ)ψ(σ)e σ dσ
to
At each t ≥ to let
a (t) = 2n 2 max aij (t)
1 ≤ i, j ≤ n
Note a (t) is a continuous function of t, as a quick sample sketch indicates. Then, since zi (t) ≤ z (t),
_d_ z (t)2 ≤ a (t)z (t)2 , t ≤ to
dt
Multiplying through by the positive quantity
t
− ∫ a (σ) d σ
to
e
gives t
− ∫ a (σ) d σ
_d_ e
to
z (t)2 ≤ 0 , t ≤ to
dt
Integrating both sides from to to t and using z (to ) = 0 gives
z (t) = 0 , t ≥ to
which implies z (t) = 0 for t ≥ to .
Solution 3.11 The vector function x (t) satisfies the given state equation if and only if it satisfies
t t τ t
x (t) = xo + ∫ A (σ) x(σ) d σ + ∫ ∫ E (τ, σ) x(σ) d σd τ + ∫ B (σ)u (σ) d σ
to to to to
Interchanging the order of integration in the double integral (Dirichlet’s formula) gives
-14-
Linear System Theory, 2/E Solutions Manual
t t t
z (t) = ∫ A (σ) z(σ) d σ + ∫ ∫ E (τ, σ) d τ z(σ) d σ
to to σ
t t
t
∆
= ∫ Â(t, σ) z (σ) d σ
to
Thus
t t
z (t) = ∫ Â(t, σ) z (σ) d σ ≤ ∫ Â(t, σ)z (σ) d σ
to to
By continuity, given T > 0 there exists a finite constant α such that Â(t, σ) ≤ α for to ≤ σ ≤ t ≤ to + T. Thus
t
z (t) ≤ ∫ α z (t) d σ , t ∈ [to , to +T ]
to
and the Gronwall-Bellman inequality gives z (t) = 0 for t ∈ [to , to +T ], implying that there can be no more
than one solution.
Φ(t, τ) − I + ∫ A (σ 1 ) d σ 1 + . . . + ∫ A (σ 1 ) ∫τ ... ∫τ A (σ k ) d σ k . . . d σ 1
τ τ
t σ1 σ j−1
∞
= Σ
j =k +1 τ
∫ A (σ 1 ) ∫τ ... ∫τ A (σ j ) d σ j . . . d σ 1
For any fixed T > 0 there is a finite constant α such that A (t) ≤ α for t ∈ [−T, T ], by continuity. Therefore
t σ1 σ j−1 t σ1 σ j−1
∞ ∞
Σ ∫ A (σ 1 )
j =k +1 τ
∫τ ... ∫τ A (σ j ) d σ j . . . d σ1 ≤ Σ
j =k +1
∫ A (σ 1 )
τ
∫τ ... ∫τ A (σ j ) d σ j . . . d σ1
t σ1 σ j−1
∞
≤ Σ ∫ A (σ1 ) ∫
j =k +1 τ τ
... ∫τ A (σ j ) d σ j . . . d σ1
.
.
.
t σ j−1
∞
≤ Σ
j =k +1
α ∫ j
τ
... ∫τ 1 d σ j . . . d σ1
∞
t − τ j
_______
≤ Σ
j =k +1
αj
j!
∞
(α2T) j
Σ
______ , t, τ ∈ [−T, T ]
≤
j =k +1 j!
We need to show that given ε > 0 there exists K such that
-15-
Linear System Theory, 2/E Solutions Manual
∞
_(α_____
2T) j
Σ
j =K +1 j!
<ε (*)
Solution 3.15 Writing the complete solution of the state equation at t f , we need to satisfy
tf
Thus there exists a solution that satisfies the boundary conditions if and only if
tf
There exists a unique solution that satisfies the boundary conditions if Ho + H f Φ(t f , to ) is invertible. To compute
a solution x (t) satisfying the boundary conditions:
(1) Compute Φ(t, to ) for t ∈ [to , t f ]
(2) Compute Ho + H f Φ(t f , to )
tf
-16-
CHAPTER 4
.
Solution 4.1 An easy way to compute A (t) is to use A (t) = Φ(t, 0)Φ(0, t). This gives
−2t −1
A (t) =
1 −2t
This A (t) commutes with its integral, so we can write Φ(t, τ) as the matrix exponential
t
−(t−τ)2 −(t−τ)
Solution 4.4 A linear state equation corresponding to the n th -order differential equation is
0 ...
1 0
0 ...
0 0
.
. . . .
x (t) = . . . . x (t)
. . . .
0 ...
0 1
−a 0 (t) −a 1 (t)
. . . −an−1 (t)
The corresponding adjoint state equation is
0 ... 0 a 0 (t)
−1 ... 0
a 1 (t)
. . . . .
z (t) =
. . . .
. . . . z (t)
0 ... 0 an−2 (t)
0 ... −1 an−1 (t)
th
To put this in the form of an n -order differential equation, start with
.
zn (t) = −zn−1 (t) + an−1 (t) zn (t)
.
zn−1 (t) = −zn−2 (t) + an−2 (t) zn (t)
These give
.. . _d_ [ a (t) z (t) ]
zn (t) = −zn−1 (t) + n−1 n
dt
_d_ [ a (t) z (t) ]
= zn−2 (t) − an−2 (t) zn (t) + n−1 n
dt
Next,
-17-
Linear System Theory, 2/E Solutions Manual
.
zn−2 (t) = −zn−3 (t) + an−3 (t) zn (t)
gives
d3
____ . d2
_d_ [ a (t) z (t) ] + ____
zn (t) = z n−2 (t) − n−2 n [ an−1 (t) zn (t) ]
dt 3 dt dt 2
_d_ [ a (t) z (t) ] + ____ d2
= −zn−3 (t) + an−3 (t) zn (t) − n−2 n [ an−1 (t) zn (t) ]
dt dt 2
Continuing gives the n th -order differential equation
dn
____ d n−1
_____ d n−2
_____
zn (t) = [ a n−1 (t) zn (t) ] − [ an−2 (t) zn (t) ]
dt n dt n−1 dt n−2
_d_ [ a (t) z (t) ] + (−1)n +1 a (t) z (t)
+ . . . + (−1)n 1 n 0 n
dt
Solution 4.6 For the first matrix differential equation, write the transpose of the equation as (transpose and
differentiation commute)
.T
X (t) = A T (t)X T (t) , X T (to ) = X To
This has the unique solution X T (t) = ΦA T (t) (t, to )X To , so that
X (t) = Xo Φ AT T (t) (t, to )
In the second matrix differential equation, let Φk (t, τ) be the transition matrix for Ak (t), k = 1, 2. Then it is easy
to verify (Leibniz rule) that a solution is
t
X (t) = Φ1 (t, to )Xo Φ T2 (t, to ) + ∫ Φ1 (t, σ)F (σ)Φ T2 (t, σ) d σ
to
Or, one can generate this expression by using the obvious integrating factors on the left and right sides of the
differential. equation. (To show this is a unique solution, show that the difference Z (t) between any two solutions
satisfies Z (t) = A 1 (t)Z (t) + Z (t)A T2 (t), with Z (to ) = 0. Integrate both sides and apply the Bellman-Gronwall
inequality to show Z (t) is identically zero.)
Solution 4.9 Clearly A (t) commutes with its integral. Thus we compute
0 1
τ
exp
−1 0
t
and then replace τ by ∫ a (σ) d σ. From the power series for the exponential,
0
∞
1 0 1k k
Σ
0 1 ___
τ
τ =
exp −1 0
−1 0
k =0 k!
∞ ∞
1 0 1 2k 2k 1 0 1 2k +1 2k +1
Σ Σ
_____ _ ______
τ
τ +
= −1 0 −1 0
k =0 (2k)!
k =0 (2k +1)!
∞ ∞
1 (−1)k 0 1 (−1)k
Σ
0
Σ
_____ _ ______ τ 2k +1
= τ 2k +
k +1
k =0 (2k)!
0 (−1)k k =0 (2k +1)!
(−1) 0
-18-
Linear System Theory, 2/E Solutions Manual
cos τ 0 0 sin τ
= +
0 cos τ −sin τ 0
cos τ sin τ
=
−sin τ cos τ
Replacing τ as noted above gives Φ(t, 0).
Solution 4.10 For sufficiency, suppose Φx (t, 0) = T (t)e Rt . Then T (0) = I and T (t) is continuously
differentiable. Let z (t) = T −1 (t) x (t) so that
Φz (t, 0) = T −1 (t)Φx (t, 0)T (0) = T −1 (t)T (t)e Rt = e Rt
.
Thus z (t) = R z (t).
For necessity, suppose P (t) is a variable change that gives
.
z (t) = Ra z (t)
Then
Φz (t, 0) = e = P −1 (t)Φx (t, 0)P (0)
Ra t
that is,
Φx (t, 0) = P (t)e P −1 (0)
Ra t
=e
A1t
( A 1 +A 2 ) e −A 1 t . e A 1 t e A 2 t
A1t −A t
This implies A (t) = e [ A 1 +A 2 ] e 1 . Therefore A (0) = A 1 +A 2 is clear, and
. A t −A t A t −A t
A (t) = A 1 e 1 ( A 1 +A 2 ) e 1 + e 1 ( A 1 +A 2 ) e 1 (−A 1 )
= A 1 A (t) − A (t) A 1
Conversely, assume A 1 and A 2 are such that
.
A (t) = A 1 A (t) − A (t) A 1 , A (0) = A 1 + A 2
This matrix differential equation has a unique solution (by rewriting it as a linear vector differential equation), and
from the calculation above this solution is
A (t) = e
A1t
( A 1 + A 2 ) e −A 1 t
Since
-19-
Linear System Theory, 2/E Solutions Manual
_d_ e
A1t
e
A2t
= A (t)e
A1t A2t
e , e
A10 A20
e =I
dt
for i = 1, 2, and
_∂_ Φ (t, τ) = A (t)Φ (t, τ) + A (t)Φ (t, τ) , Φ (τ, τ) = 0
∂t
12 11 12 12 22 12
Multiplying on the left by P (t), the result can be written as a dimension-4 linear state equation. Choosing the
initial condition corresponding to P (0) = I, some clever guessing gives
1 0
P (t) =
t 1
Solution 4.23 Using the formula for the derivative of an inverse matrix given in Exercise 1.17,
_∂_ Φ (−τ, −t) = _∂_ Φ −1 (−t, −τ) = −Φ −1 (−t, −τ) _∂_ Φ (−t, −τ) Φ −1 (−t, −τ)
∂t
A A
∂t
A A
∂t
A
∂
_____
= −Φ −1
A (−t, −τ) − ΦA (−t, −τ) Φ −1
A (−t, −τ)
∂(−t)
= −Φ −1
A (−t, −τ) −A (−t)ΦA (−t, −τ) Φ −1
A (−t, −τ)
= Φ −1
A (−t, −τ) A (−t) = ΦA (−τ, −t) A (−t)
Transposing gives
-20-
Linear System Theory, 2/E Solutions Manual
This implies
_∂_ Φ T (−τ, −t) = A T (−t)Φ (−τ, −t)
∂t
A A
and
_ _ _
∞
1 k
Σ
At (σ)t ___ A t (σ)t k
e = I + At (σ)t +
k =2 k!
Then
_
At (σ)t
R (t, σ) = Φ(t + σ, σ) − e
t +σ τ1 τk−1
∞ _
1 k
= Σ ∫σ A (τ1 ) ∫ A (τ2 ) . . . ∫σ
___
A (τk ) d τk . . . d τ1 − A t (σ)t k
k =2 σ k!
= α2 t 2 e α t
-21-
CHAPTER 5
Solution 5.3 Using the series definition, which involves talent in series recognition,
0 1 1 0
, k = 0, 1, . . .
A 2k +1 = , A 2k =
1 0 0 1
gives
0 t ___
1
t 2 0 ___
1
0 t3 ...
e At = I + + + +
t 0 2!
0 t2 3!
t3 0
−t −t
(e +e )/ 2 (e −e )/ 2
t t
cosh t sinh t
= =
(e t −e −t )/ 2 (e t +e −t )/ 2
sinh t cosh t
s −1 −1 s −1 s −1
2 2
(sI − A)−1 =
=
−1 s
s
_____ 1
_____
s −1
2 s −1
2
gives
1 0
P −1 AP =
0 −1
Then
et 0
cosh t sinh t
e At = P P −1 =
0 e −t
sinh t cosh t
-22-
Linear System Theory, 2/E Solutions Manual
∫ A (σ) d σ
t2/2 t
Φ(t, 0) = e 0 = exp
t t2/2
And since
t2 / 2 0
0 t
,
0 t2/2
t 0
commute,
1 0 2 0 1
Φ(t, 0) = exp t / 2 . exp
t
0 1 1 0
Φ(t, 0) = 2 = 2 2
0 e t /2
sinh t cosh t
e t /2 sinh t e t /2 cosh t
note that the two sides agree at t = 0, and the derivatives of the two sides with respect to t are identical.
If A is invertible and all its eigenvalues have negative real parts, then limt → ∞ e At = 0. This gives
∞
A ∫ e Aσ d σ = − I
0
that is,
∞ 0
A −1 = − ∫ e A σ d σ = ∫ e A σ d σ
0 ∞
Solution 5.9 Evaluating the given expression at t = 0 gives x (0) = 0. Using Leibniz rule to differentiate the
expression gives
t
t D ∫ u ( τ) d τ
. _d_ e A (t−σ) e σ
x (t) = ∫
dt 0
bu (σ) d σ
t
t D ∫ u ( τ) d τ
_∂_
= bu (t) + ∫ e A (t−σ) e σ
bu (σ) d σ
0 ∂t
D ∫ u ( τ) d τ
σ
Using the product rule and differentiating the power series for e gives
t t
t D ∫ u ( τ) d τ D ∫ u ( τ) d τ
.
x (t) = bu (t) + ∫ Ae A (t−σ) e σ
bu (σ) + e A (t−σ) Du (t)e σ
bu (σ) d σ
0
-23-
Linear System Theory, 2/E Solutions Manual
t t
t D ∫ u ( τ) d τ D ∫ u ( τ) d τ t
.
x (t) = bu (t) + A ∫ e A (t−σ) e σ bu (σ) d σ + Du (t) ∫ e A (t−σ) e σ bu (σ) d σ
0 0
Solution 5.12 We will show how to define β0 (t), . . . , βn−1 (t) such that
n−1 . n−1 n−1
k =0
Σ βk (t)Pk = Σ
k =0
βk (t)APk , Σ βk (0)Pk = I
k =0
(*)
which then gives the desired expression by Property 5.1. From the definitions,
P 1 = AP 0 − λ1 I , P 2 = AP 1 − λ2 P 1 , . . . , Pn−1 = APn−2 − λn−1 Pn−2
Also Pn = (A−λn I)Pn−1 = 0 by the Cayley-Hamilton theorem, so APn−1 = λn Pn−1 . Now we equate coefficients of
like Pk ’s in (*), rewritten as
n−1 . n−1
Σ βk (t)Pk = Σ βk (t)[Pk+1 + λk +1 Pk ]
k =0 k =0
.
0 0 . . . λn−1 0
βn−1 (t)
0 0 . . . 1 λn
βn−1 (t)
With the initial condition provided by β0 (0) = 1, βk (0) = 0, k = 1, . . . , n−1, the analytic solution of this state
equation provides a solution for (*). (The resulting expression for e At is sometimes called Putzer’s formula.)
-24-
Linear System Theory, 2/E Solutions Manual
S (t−to )
= Q (t, to )e
∫ tr [A (σ)] d σ
det Φ(T, 0) = det e RT = e 0
Because the integral in the exponent is positive, the product of eigenvalues of Φ(T, 0) is greater than unity, which
implies that at least one eigenvalue of Φ(T, 0) has magnitude greater than unity.Thus by the argument following
Example 5.12 there exist unbounded solutions.
Solution 5.22 The solution will be T-periodic for initial state xo if and only if xo satisfies (see text equation
(32))
to +T
[Φ −1
(to +T, to ) − I ] xo = ∫ Φ(to , σ)f(σ) d σ
to
-25-
Linear System Theory, 2/E Solutions Manual
to +T to +T
∫z T
(σ)f (σ) d σ = ∫ z To e A σ f (σ) d σ
0 0
T T
= −zo 1 ∫ sin2 (σ) d σ + zo 2 ∫ cos σ sin σ d σ
0 0
≠0
so there is no periodic solution.
Case 3: If ω = 1/k, k = 2, 3, . . . , then since
T T
the condition (+) will hold, and there exist periodic solutions.
In summary, there exist periodic solutions for all ω > 0 except ω = 1.
-26-
CHAPTER 6
Solution 6.1 If the state equation is uniformly stable, then there exists a positive γ such that for any to and xo
the corresponding solution satisfies
x (t) ≤ γxo , t ≥ to
Given a positive ε, take δ = ε / γ. Then, regardless of to , xo ≤ δ implies
x (t) ≤ γ δ = ε , t ≥ to
Conversely, given a positive ε suppose positive δ is such that, regardless of to , xo ≤ δ implies x (t) ≤ ε,
t ≥ to . For any ta ≥ to let xa be such that
xa = 1 , Φ(ta , to )xa = Φ(ta , to )
Therefore
Φ(ta , to ) ≤ ε / δ
Solution 6.4 Using the fact that A (t) commutes with its integral,
t
∫ A (σ) d σ
t−τ −e −(t−τ)
1
___
t−τ −e −(t−τ)
2
Φ(t, τ) = e τ
=I+ + + ...
e −(t−τ) e −(t−τ)
t−τ 2!
t−τ
For any fixed τ, φ11 (t, τ) clearly grows without bound as t → ∞, and thus the state equation is not uniformly
stable.
-27-
Linear System Theory, 2/E Solutions Manual
t t σ1
t t σ1
t t σ1
t−τ2
_|_____
= 1 + αt−τ + α2 + ...
2!
For | t−τ ≤ δ,
_α δ
2 2
____
Φ(t, τ) ≤ 1+α δ+ + ...
2!
= eα δ
Therefore
-28-
Linear System Theory, 2/E Solutions Manual
∞ ∞
j +( j −1)+ ...
+1
_2___________
∫ t j e λt dt ≤ ∫ e −(η/2 )t dt
j
0 (η e) j 0
j +( j −1)+ . . . +1
2j
_2___________ . ___
≤
(η e) j η
22j +( j −1)+ +1
...
_____________
=
e j Re [λ] j +1
Solution 6.12 By Theorem 6.4 uniform stability is equivalent to existence of a finite constant γ such that
e At ≤ γ for all t ≥ 0. Writing
m σk
t j−1
Σ Σ Wkj
______ λ t
e At = e k
k =1 j =1 ( j−1)!
where λ1 , . . . , λm are the distinct eigenvalues of A, suppose
Re[λk ] ≤ 0 , k = 1, . . . , m (*)
Re[λk ] = 0 implies σk = 1
λk t λ t
Since t e is bounded if Re[λk ] < 0 (for any j), and e k = 1 if Re [λk ] = 0, it is clear that
j−1
e At is
bounded for t ≥ 0. Thus (*) is a sufficient condition for uniform stability.
A necessary condition for uniform stability is
Re[λk ] ≤ 0 , k = 1, . . . , m
For if Re[λk ] > 0 for some k, the proof of Theorem 6.2 shows that e At grows without bound as t → ∞. The gap
between this necessary condition and the sufficient condition is illustrated by the two cases
0 0 0 1
A= , A=
0 0 0 0
Both satisfy the necessary condition, neither satisfy the sufficient condition, and the first case is uniformly stable
while the second case is not (unbounded solutions exist, as shown by easy computation of the transition matrix).
(It can be shown that a necessary and sufficient condition for uniform stability is that each eigenvalue of A has
nonpositive real part and any eigenvalue of A with zero real part has algebraic multiplicity equal to its geometric
multiplicity.)
for all t, to such that t ≥ to . Then given any xo , to , the corresponding solution at t ≥ to satisfies
−λ(t−to )
x (t) = Φ(t, to )xo ≤ Φ(t, to )xo ≤ γ e xo
-29-
Linear System Theory, 2/E Solutions Manual
−λ(ta −to )
x (ta ) = Φ(ta , to )xa = Φ(ta , to ) ≤ γ e
.
Solution 6.18 The variable change z (t) = P −1 (t) x (t) yields z (t) = 0 if and only if
.
P −1 (t) A (t)P (t) − P −1 (t)P (t) = 0
.
for all t. This clearly is equivalent to P (t) = A (t)P (t), which is equivalent to ΦA (t, τ) = P (t)P −1 (τ). Now, if P (t)
is a Lyapunov transformation, that is P (t) ≤ ρ < ∞ and det P (t) ≥ η > 0 for all t, then
P (τ)n−1
__________
ΦA (t, τ) ≤ P (t)P −1 (τ) ≤ P (t)
det P (τ)
∆
≤ ρn /η = γ
-30-
CHAPTER 7
Solution 7.3 Let  = FA, and take Q = F −1 , which is positive definite since F is positive definite. Then since F
is symmetric,
T
 Q + Q = A T FF −1 + F −1 FA = A T + A < 0
This gives exponential stability by Theorem 7.4.
Solution 7.5 By our default assumptions, a (t) is continuous. Since Q is constant, symmetric, and positive
definite, the first condition of Theorem 7.2 holds. Checking the second condition,
−a (t) −a (t)/ 2
≤0
A T (t)Q + QA (t) =
−a (t)/ 2 −1
gives the requirements
a (t) ≥ 0 , 4a (t) ≥ a 2 (t)
Thus the state equation is uniformly stable if a (t) is a continuous function satisfying 0 ≤ a (t) ≤ 4 for all t.
0 1
we need to assume that a (t) is continuously differentiable and η ≤ a (t) ≤ ρ for some positive constants η and ρ so
.
that the first condition of Theorem 7.4 is satisfied. For the second condition we need to assume a (t) ≤ −ν, for
some positive constant ν. Unfortunately this implies, taking any to ,
t
.
a (t) = a (to ) + ∫ a (σ) d σ ≤ a (to ) + ν to − ν t , t ≥ to
to
and for sufficiently large t the positivity condition on a (t) will be violated. Thus there is no a (t) for which the
given Q (t) shows uniform exponential stability of the given state equation.
-31-
Linear System Theory, 2/E Solutions Manual
η ≤ a (t) ≤ 1/ (2η)
for all t. Then
2a (t) + 1 − η ≥ η + 1 > 1
_a______
(t)+1 1
______
−η ≥ 1+ = 1+η > 1
a (t) 1/ (2η)
and Q (t)−ηI ≥ 0, for all t, follows easily. Similarly, with ρ = (2η+1)/ η we can show ρI−Q (t) ≥ 0 using
η+1
_2____ 1
___
ρ − 2a (t) − 1 ≥ −2 −1 = 1
η 2η
η+1
(t)+1 _2____
_a______ 1
____
ρ− ≥ −1− ≥1
a (t) η a (t)
Next consider
.
. 2a (t)−2a(t) 0
.
≤ −ν I
T
A (t)Q (t) + Q (t) A (t) + Q (t) = a (t)
_____
0 −2a(t)− 2
a (t)
This gives that for uniform exponential stability we also need existence of a small, positive constant ν such that
.
ν a 2 (t) − 2a 3 (t) ≤ a (t) ≤ a (t)−ν/2
for all t. For example, a (t) = 1 satisfies these conditions.
Solution 7.11 Suppose that for every symmetric, positive-definite M there exits a unique, symmetric,
positive-definite Q such that
A T Q + QA + 2µQ = −M (*)
that is,
(A + µ I)T Q + Q (A + µ I) = −M (**)
Then by the argument above Theorem 7.11 we conclude that all eigenvalues of A +µ I have negative real parts.
That is, if
0 = det [ λI − (A +µ I) ] = det [ (λ − µ)I − A ]
then Re [λ] < 0. Since µ > 0, this gives Re [λ − µ] < −µ, that is, all eigenvalues of A have real parts strictly less
than −µ.
Now suppose all eigenvalues of A have real parts strictly less than −µ. Then, as above, eigenvalues of
A + µ I have negative real parts. Then by Theorem 7.11, given symmetric, positive-definite M there exists a
unique, symmetric, positive-definite Q such that (**) holds, which implies (*) holds.
-32-
Linear System Theory, 2/E Solutions Manual
∞ ∞
∫ x Ta e A σ Me A σ xa d σ ≤ ∫ x Ta e A σ Me A σ xa d σ
T T
t 0
∫ x Ta e A σ Me A σ xa d σ = ∫ x Ta e A (t + τ) Me A(t + τ) xa d τ
T T
t 0
e At 2
_______
= x Ta e A t Qe At xa ≥ λmin (Q)e At xa 2 =
T
Q −1
Therefore
e At 2
_______ ≤ Q
Q −1
Solution 7.17 Let F = A + (µ−ε)I. Then F ≤ A +µ−ε, all eigenvalues of F have real parts less than −ε,
and
e Ft = e At e (µ−ε)t
Thus
e At = e −(µ − ε)t e Ft (*)
By Theorem 7.11 the unique solution of F Q + QF = −I is
T
∞
Q = ∫ e F σ e Fσ d σ
T
dσ
≥ −F T + F x T e F σ e F σ x
T
(Exercise1.9)
≥ −2(A +µ−ε) x T e F σ e F σ x
T
t dσ
∞
≥ −2 (A +µ−ε) ∫ x T e F σ e F σ x d σ
T
≥ −2 (A +µ−ε) x T Qx
Therefore
-33-
Linear System Theory, 2/E Solutions Manual
x T e F t e Ft x ≤ 2 (A +µ−ε) x T Qx , t ≥ 0
T
which gives
e Ft ≤ √2 ( A + µ − ε ) Q , t ≥ 0
Solution 7.19 To show uniform exponential stability of A (t), write the 1,2-entry of A (t) as a (t), and let
Q (t) = q (t) I, where
2+e −2t , t ≥ 1/ 2
q (t) =
q ⁄ (t) , −1/ 2 < t < 1/ 2
1
2
3 , t ≤ −1/ 2
Here q ⁄ (t) is a continuously-differentiable ‘patch’ satisfying 2 ≤ q ⁄ (t) ≤ 3 for −1/ 2 < t < 1/ 2, and another
1
2
1
2
condition to be specified below. Then we have 2 I ≤ Q (t) ≤ 3 I for all t. Next consider
. .
−2q (t)+q (t) a (t)q (t)
a (t)q (t) −6q (t)+q (t)+1
.
for all t. With t < −1/ 2 or t > 1/ 2 it is easy to show that q (t)−q (t)−1 ≥ 0, and a patch function can be sketched
such that this inequality is satisfied for −1/ 2 < t < 1/ 2. Then, for all t,
. .
−2q (t)+q (t)+1 ≤ −q (t) ≤ 0 , −6q (t)+q (t)+1 ≤ −5q (t) ≤ 0
. .
[−2q (t)+q (t)+1][−6q (t)+q (t)+1] − a 2 (t)q 2 (t) ≥ [5−a 2 (t)]q 2 (t) ≥ 4q 2 (t) ≥ 0
Thus we have proven uniform exponential stability.
To show A T (t) is not uniformly exponentially stable, write the state equation as two scalar equations to
compute
e −t 0
ΦA T (t) (t, 0) = , t ≥0
(e t −e −3t )/ 4 e −3t
Solution 7.20 Using the characterization of uniform stability in Exercise 6.1, given ε > 0, let δ = β−1 (α(ε)).
Then δ > 0, since α(ε) > 0, and the inverse exists since β(.) is strictly increasing. Then for any to , and any xo such
that xo ≤ δ, the corresponding solution is such that
v (t, x (t)) ≤ v (to , xo ) ≤ β(xo ) ≤ β(δ) = α(ε) , t ≥ to
Therefore
α(x (t)) ≤ v (t, x (t)) ≤ α(ε) , t ≥ to
But since α(.) is strictly increasing, this gives x (t) ≤ ε , t ≥ to , and thus the state equation is uniformly stable.
-34-
CHAPTER 8
A=
0 −1
A + AT =
√8 −2
Solution 8.6 Viewing F (t)x (t) as a forcing term, for any to , xo , and t ≥ to we can write
t
x (t) = ΦA +F (t, to ) xo = ΦA (t, to ) xo + ∫ ΦA (t, σ)F (σ) x(σ) d σ
to
Thus
t
∫ γF (σ) d σ
λto
e λt x (t) ≤ γ e
to
xo e
Therefore
-35-
Linear System Theory, 2/E Solutions Manual
∫ γF (σ) d σ
−λ(t−to )
x(t) ≤ γ e
t
eo xo
∞
∫ γF (σ) d σ
−λ(t−to )
≤γe
t
eo xo
−λ(t−to )
≤γe eγ β xo
Solution 8.8 We can follow the proof of Theorem 8.7 (first and last portions) to show that the solution
∞
Q (t) = ∫ e A
T
(t)σ
e A (t)σ d σ
0
of
A T (t)Q (t) + Q (t) A (t) = −I
is continuously-differentiable and satisfies, for all t,
ηI ≤ Q (t) ≤ ρI
which implies
1
__
Q −1 (t) ≤
η
Also, by the middle portion of the proof of Theorem 8.7,
. .
Q (t) ≤ 2A (t)Q (t)2
Therefore
. _βρ
___2
1⁄2Q −1 (t)Q (t) ≤
η
for all t. Write
. . .
x (t) = A (t) x (t) = [ A (t) − 1⁄2Q −1 (t)Q (t) ] x (t) + 1⁄2Q −1 (t)Q (t) x (t)
∆
.
= F (t) x (t) + 1⁄2Q −1 (t)Q (t) x (t)
-36-
Linear System Theory, 2/E Solutions Manual
and the result of Exercise 8.8 implies that there exists positive constants γ, λ such that, for any to and t ≥ to ,
t
−λ(t−to ) _βρ 2
∫ γ e −λ(t−σ)
___
x (t) ≤ γ e xo + x(σ) d σ
to
η
Therefore
t
_γβρ 2
∫
λto ____ e λσ x(σ) d σ
e λt
x (t) ≤ γ e xo +
to
η
∫ γβρ2 /η d σ
λto
e λt x (t) ≤ γ e
to
xo e
Thus
−(λ−γβρ2 /η)(t−to )
x (t) ≤ γ e xo
Now, writing the left side as ΦA (t, to )xo and for any to and t ≥ to choosing the appropriate unity-norm xo gives
−(λ−γβρ2 /η)(t−to )
ΦA (t, to ) ≤ γ e
For β sufficiently small this gives the desired uniform exponential stability. (Note that Theorem 8.6 also can be
.
used to conclude that uniform exponential stability of x (t) = F (t) x (t) implies uniform exponential stability of
. .
x (t) = [ F (t) + 1⁄2Q −1 (t)Q (t) ] x (t) = A (t) x (t)
for β sufficiently small.)
. .
Solution 8.10 With F (t) = A (t) + (µ / 2)I we have that F(t) ≤ α + µ / 2, F (t) = A (t), and the eigenvalues of
F (t) satisfy Re [λF (t)] ≤ −µ / 2. The unique solution of
F T (t)Q (t) + Q (t)F (t) = −I
is
∞
Q (t) = ∫ e F
T
(t)σ
e F (t)σ d σ
0
As in the proof of Theorem 8.7, there is a constant ρ such that Q (t) ≤ ρ for all t. Now, for any n × 1 vector z,
d T F T (t)σ F (t)σ
___ z = z T e F (t)σ [ F T (t) + F (t) ] e F (t)σ z
T
z e e
dσ
≥ −(2α + µ) z T e F
T
(t)σ
e F (t)σ z
-37-
Linear System Theory, 2/E Solutions Manual
∞
d
∫τ
___ T
(t)σ
e F (t)σ z
−z T e F
T
(t)τ
e F (t)τ z = dσ
z Te F
dσ
∞
≥ −(2α + µ) ∫ z T e F
T
(t)σ
e F (t)σ z d σ
τ
∞
≥ −(2α + µ) ∫ z T e F
T
(t)σ
e F (t)σ z d σ
0
≥ −(2α + µ) z T Q (t) z
Thus
e F (t)τ ≤ (2α + µ) Q (t) , τ ≥ 0
T
(t)τ
eF
and using
e F(t)τ = e A(t)τ e (µ /2) τ , τ ≥ 0
gives
e A(t)τ ≤ √(2
α
+ µ e (−µ /2) τ ,
)ρ τ≥0
Solution 8.11 Write (the chain rule is valid since u (t) is a scalar)
. dA
___ . db
___ .
q (t) = −A −1 (u (t)) (u (t))u (t) A −1 (u (t))b (u (t)) − A −1 (u (t)) (u (t))u (t)
du
du
∆ .
= −B̂(t)u (t)
Then
.
x (t) = A (u (t)) x (t) + b (u (t))
= A (u (t)) [ x (t) − q (t) ] + A (u (t))q (t) + b (u (t))
= A (u (t)) [ x (t) − q (t) ]
gives
_d_ [ x (t) − q (t) ] = A (u (t)) [ x (t) − q (t) ] + B̂(t)u. (t) (*)
dt
Since
dA
_d_ A (u (t)) = ___ . dA
___ .
(u (t))u (t) = (u (t))u (t)
dt du du
.
we can conclude from Theorem 8.7 that for δ sufficiently small, and u (t) such that u (t) ≤ δ for all t, there exist
positive constants γ and η (depending on u (t)) such that
ΦA (u (t)) (t, σ) ≤ γ e −η (t−σ) , t ≥σ≥0
But the smoothness assumptions on A (.) and b (.) and the bounds on u (t) also give that there exists a positive
constant β such that B̂(t) ≤ β for t ≥ 0. Thus the solution formula for (*) gives
x (t) − q (t) ≤ γx (0) − q (0) + γ βδ / η , t ≥0
for u (t) as above, and the claimed result follows.
-38-
CHAPTER 9
Im −β Im β 2 Im ...
0 Im −2βIm ...
=
B AB A 2 B . . .
0 0 Im ...
0 0 0 ...
. . . .
. . . .
. . . .
Clearly the two controllability matrices have the same rank. (The solution is even easier using rank tests from
Chapter 13.)
∫ T T
T
AQ + QA = Ae At BB T e A t + e At BB T e A t A T dt
0
∞
_d_
∫
T
= e At BB T e A t
dt
0 dt
= −BB T
Also it is clear that Q is positive semidefinite. If it is not positive definite, then for some nonzero, n × 1 x,
∞
0 = x Qx = ∫ x T e At BB T e A t x dt
T
T
0
∞
= ∫ x T e At B 2 dt
0
-39-
Linear System Theory, 2/E Solutions Manual
dj
___
0= x T e At B = x TA jB
dt j
t =0
for j = 0, 1, 2, . . . . But this implies
x T B AB . . . A n−1 B
=0
Solution 9.9 Suppose λ is an eigenvalue of A, and p is a corresponding left eigenvector. Then p ≠ 0, and
p TA = λ p T
This implies both
_
p HA = λ p H , A T p = λp
Now suppose Q is as claimed. Then
_
p H AQp + p H QA T p = λ p H Qp + λ p H Qp
= −p H BB T p
that is,
2Re [λ] p H Q p = −p H BB T p (*)
This gives Re [λ] ≤ 0 since Q is positive definite. Now suppose Re [λ] = 0. Then (*) gives p H B = 0. Also, for
j = 1, 2, . . . ,
_ _
p H A j B = λ p H A j−1 B = . . . = λ j p H B = 0
Thus
p H B AB . . . A n−1 B
=0
=0
-40-
Linear System Theory, 2/E Solutions Manual
Now suppose the state equation is output controllable on [to , t f ], but that Wy (to , t f ) is not invertible. Then
there exists a p × 1 vector ya ≠ 0 such that y Ta Wy (to , t f )ya = 0. Using by now familiar arguments, this gives
y Ta C (t f )Φ(t f , t)B (t) = 0 , t ∈ [to , t f ]
Consider the initial state
xo = Φ(to , t f )C T (t f )[ C (t f )C T (t f ) ]−1 ya
which is well defined and nonzero since rank C (t f ) = p. There exists an input ua (t) such that
tf
Premultiplying by y Ta gives
0= y Ta ya
This contradicts ya ≠ 0, and thus Wy (to , t f ) is invertible.
The rank assumption on C (t f ) is needed in the necessity proof to guarantee that xo is well defined. For
m = p = 1, invertibility of Wy (to , t f ) is equivalent to existence of a ta ∈ (to , t f ) such that
C (t f )Φ(t f , ta )B (ta ) ≠ 0
That is, there exists a ta ∈ (to , t f ) such that the output response at t f to an impulse input at ta is nonzero.
Solution 9.11 From Exercise 9.10, since rank C = p, the state equation is output controllable if and only if for
some fixed t f > 0,
tf
∆ A (t f −t) A T (t f −t)
Wy = ∫ Ce BB T e C T dt
0
by showing equivalence of the negations. If Wy is not invertible, there exists a nonzero p × 1 vector ya such that
y Ta Wy ya = 0. Thus
A (t f −t)
y Ta Ce B = 0 , t ∈ [0, t f ]
Differentiating repeatedly, and evaluating at t = t f gives
y Ta CA j B = 0 , j = 0, 1, . . .
Thus
y Ta CB CAB . . . CA n−1 B
=0
Conversely, if the rank condition fails, then there exists a nonzero ya such that y Ta CA j B = 0,
j = 0, . . . , n−1. Then
-41-
Linear System Theory, 2/E Solutions Manual
n−1
A (t f −t)
y Ta Ce B = y Ta C Σ αk (t f −t) A k B = 0 ,
k =0
t ∈ [0, t f ]
then
L 0 (t)
n −1
.
Σ αi (t)Li (t) =
i =0
α0 (t) . . . αn −1 (t)
.
. = Ln (t)
Ln −1 (t)
-42-
CHAPTER 10
Solution 10.2 We show equivalence of full-rank failure in the respective controllability and observability
matrices, and thus conclude that one realization is controllable and observable (minimal) if and only if the other is
controllable and observable (minimal). First,
rank B AB . . . A n−1 B
<n
Similarly,
C
CA
rank . <n
.
.
CA n−1
C (A+BC)n−1
-43-
Linear System Theory, 2/E Solutions Manual
where the left side is a product of invertible matrices by minimality. Therefore the two matrices on the right side
are invertible. Let
tf
that is,
tf
∫ F(σ)B T (σ) d σ W −1
x (to , t f ) = ∫ C T (t)H (t) dt
Mx (to , t f ) = P
to to
so we have
C (t) = H (t)P
for all t. Noting that 0 = P −1 . 0 . P, we have that P is a change of variables relating the two zero-A minimal
realizations. Since a change of variables always can be used to obtain a zero-A realization, this shows that any
two minimal realizations of a given weighting pattern are related by a variable change.
-44-
Linear System Theory, 2/E Solutions Manual
_d_ X (t) X (σ) = X (t) d
___
X (σ )
dt
dσ
which implies
d
___ _d_ X (t) X (σ)
X (σ) = X (−t)
dσ
dt
Integrate both sides with respect to t from a fixed to to a fixed t f > to to obtain
tf
X (σ) = ∫ X (−t)
d
___ _d_ X (t) dt X (σ)
(t f − to )
dσ to
dt
Now let
tf
A=
_____1
t f −to
∫ X (−t)
_d_ X (t) dt
dt
to
to write
d
___ X (σ) = A X (σ) , X (0) = I
dσ
This implies X (σ) = e A σ . (Of course there are quicker ways. For example note that
∂
_∂_ X (t+σ) = ___ d
___
X (t+σ) = X (t) X (σ )
∂t ∂σ dσ
. .
Evaluating at σ = 0 gives X (t) = X (t)X (0), which implies
. .
X (t) = X (0)e X (0)t = e X (0) t
Also the result holds for continuous solutions of the functional equation, though the proof is much more difficult.)
Solution 10.12 If rank Gi = ri we can write (admittedly using a matrix factorization unreviewed in the text)
Gi = Ci Bi
where Ci is p × ri , Bi is ri × m, and both have rank ri . Then it is easy to check that
B1
.
A = block diagonal { −λ i Ir i
, i = 1, . . . , r }, B=
.
. , C=
C1 . . . Cr
Br
is a realization of G (s) of dimension r 1 + . . . + rr = n. We need only show that this realization is controllable
and observable. Write
B1 0 . . . 0
0 B 2 . . . 0 Im λ 1 Im
. . . λ n−1 I
1 m
. . . .
B AB . . . A n−1 B =
. . . .
. . . .
. . . .
. . . .
. . . .
. . . λ n−1 I
0 0 . . . Br Im λr Im
r m
On the right side the first matrix has rank n, while the second is invertible due to its Vandermonde structure and
the fact that λ1 , . . . , λr are distinct. This shows controllability. A similar argument shows observability.
(Controllability and observability can be shown more easily using rank tests developed in Chapter 13.)
-45-
CHAPTER 11
1 −1
=1
the state equation is not minimal. It is easy to compute the impulse response:
G (t, σ) = C (t)e A (t−σ) B = (t 2 + 1) e −(t−σ)
Then a factorization is obvious, giving a minimal realization
.
x (t) = e t u (t)
y (t) = (t 2 + 1)e −t x (t)
Γ22 (t, σ) =
e 2t 0
It is easy to check that rank Γ22 (t, σ) = 2 for all t, σ, and a little more calculation shows that rank Γ33 (t, σ) = 2.
Then a minimal realization is, using formulas in the proof of Theorem 11.3,
1+e 2t
B (t) = Fr (t, t) =
e 2t
e 2t 0 −1 0 1
A (t) = Fs (t, t)F −1 (t, t) = F (t, t) =
2e 2t 0
0 2
-46-
Linear System Theory, 2/E Solutions Manual
1 1 1 ...
1 1 1 ...
Γ= 1 ...
1 1
. . . .
. . . .
. . . .
and clearly the rank condition in Theorem 11.7 is satisfied with l = k = n = 1. Then, following the proof of
Theorem 11.7,
F = Fs = Fc = Fr = H 1 = H s1 = 1
and a minimal (dimension-1) realization is
.
x (t) = x (t) + u (t)
y (t) = x (t)
For the truncated sequence,
1 1 1 0 ...
1 1 0 0 ...
1 0 0 0 ...
Γ=
0 0 0 ...
0
. . . . .
. . . . .
. . . . .
1 1 1 1 1 0
F = H3 = 1 1 0 , Fs = H s3 = 1 0 0
1 0 0 0 0 0
1
Fc = 1 1 1 , Fr =
1
1
gives a minimal realization specified by
0 1 0 1
A = Fs F −1 = 0 0 1 , B= 1 , C= 1 0 0
0 0 0 1
(This is an example of ‘Silverman’s formulas’ in Exercise 11.13. Also, it is not hard to see that truncation of the
sequence after any finite number n of 1’s will lead to a minimal realization of dimension n.)
G0 G1 . . .
G1 G2 . . .
. . .
. . .
Γ=
. . .
Gn−1 Gn . . .
. . .
. . .
. . .
suppose for some 1 ≤ i ≤ n a left-to-right column search yields that the first linearly dependent column is column
i. Then there exist scalars α0 , . . . , αi−2 such that column i is given by the linear combination
-47-
Linear System Theory, 2/E Solutions Manual
Gi−1 G0 Gi−2
Gi
G1 Gi−1
. . .
.
.
.
. = α0
. + . . . +α
.
i−2
Gn−2+i
Gn−1
Gn−3+i
.
.
.
. . .
.
.
.
By ignoring the top entry, this linear combination shows that column i +1 is given by the same linear combination
of the i−1 columns to its left, and so on. Thus by the rank assumption on Γ there cannot exist such an i, and the
first n columns of Γ are linearly independent. A similar argument shows that the first n columns of Γn,n +j are
linearly independent, for every j ≥ 0, and thus that Γnn is invertible.
It remains only to show that the given A, B, C provides a realization for G (s), since minimality is then
immediate. Premultiplication by Γnn verifies
Gk
.
Γ −1
nn
.
. = ek +1 , k = 0, . . . , n−1
Gn +k−1
Gk Gk +1
.
.
A
.
. = Γ snn ek +1 =
.
. , k = 0, . . . , n−1
Gn +k−1
Gn +k
Now, CB = G 0 , and
G0 G1
.
.
CA j B = CA j−1 A . = CA j−1 .
.
.
Gn−1
Gn
Gj
.
= ... =C
.
. = G j , j = 1, . . . , n
Gn−1+j
To complete the verification we use the fact that each dependent column of Γn,n +j is given by the same linear
combination of n columns to its left. This follows by writing column n +1 of Γ as a linear combination of the first
n (linearly independent) columns, and deleting partitions from the top of the resulting expression. This implies
that multiplying any column of Γn,n +j by A gives the next column to the right. Thus
Gn Gn +j
.
.
CA n +j B = CA j . =C .
.
.
G 2n−1
G 2n−1+j
= Gn +j , j = 1, 2, . . .
-48-
CHAPTER 12
Solution 12.1 If the state equation is uniformly bounded-input, bounded-output stable, then it is clear from the
definition that given δ we can take ε = η δ.
Now suppose the ε, δ condition holds. In particular we can take δ = 1 and assume ε is such that, for any to ,
u (t) ≤ 1 , t ≥ to
implies
y (t) ≤ ε , t ≥ to
Now suppose u (t) is any bounded input signal. Given to let µ = sup u (t). Note µ > 0 can be assumed, for
t ≥ to
otherwise we have a trivial case. Then u (t)/ µ ≤ 1 for all t ≥ to , and the zero-state response to u (t) satisfies
t
y (t) = ∫ G (t, σ)u (σ) d σ
to
t
= µ ∫ G (t, σ)u (σ)/µ d σ
to
Thus we have
sup y (t) ≤ ε sup u (t)
t ≥ to t ≥ to
∫ e A (t−δ−σ) BB T e A (t−δ−σ)
T
W (t−δ, t) = dσ
t−δ
It is easy to prove (by showing the equivalence of the negations by contradiction, as in the proof of Theorem 9.5)
that this is positive definite if and only if
-49-
Linear System Theory, 2/E Solutions Manual
B AB . . . A n−1 B
rank =n
0
−t /2
For a time-varying example, take scalar a (t) = 0, b (t) = e . Then
W (t−δ, t) = e −t (e δ −1)
Given any δ > 0, W (t−δ, t) > 0 for all t, but there exists no ε > 0 such that
W (t−δ, t) ≥ ε
for all t.
and the state equation is uniformly bounded-input, bounded-output stable with η = 1. However if we consider a
bounded input that is continuous and satisfies
1, 0≤t ≤1
u (t) =
0, t ≥2
then limt → ∞ u (t) = 0, but y (t) = 1 for t ≥ 1.
The result is true in the time-invariant case, however. Suppose
∞
and suppose u (t) is continuous, and u (t) → 0 as t → ∞. Then u (t) is bounded, and we let µ = sup u (t). Now
t≥0
given ε > 0, pick T 1 > 0 such that
∞
ε
∫ G (t) dt ≤ ___
2µ
T1
-50-
Linear System Theory, 2/E Solutions Manual
ε
___
u (t) ≤ , t ≥ T2
2ρ
Let T = 2 max [T 1 , T 2 ]. Then for t ≥ T,
t
y (t) ≤ ∫ G (t−σ)u (σ) d σ
0
T t
ε
= µ ∫ G (t−σ) d σ +
___
2ρ T
∫ G (t−σ) d σ
0
ε
___ ε
___
≤µ + ρ =ε
2µ 2ρ
This shows that y (t) → 0 as t → ∞.
Solution 12.11 The hypotheses imply that given ε > 0 there exist δ1 , δ2 > 0 such that if
xo < δ1 ; u (t) < δ2 , t ≥ to
where u (t) is n × 1, then the solution of
.
x (t) = A (t) x (t) + u (t) , x (to ) = xo
satisfies
x (t) < ε , t ≥ to
In particular, with xo = 0, this shows that if u (t) < δ2 for t ≥ to , then the corresponding zero-state solution of
the state equation
.
x (t) = A (t) x (t) + u (t)
y (t) = x (t) (*)
satisfies y (t) < ε for t ≥ to . But this implies uniform bounded-input, bounded-output stability by Exercise
12.1. Thus there exists a finite constant α such that the impulse response of (*), which is identical to the transition
matrix of A (t), satisfies
t
∫ Φ(t, σ) d σ ≤ α
to
for all t, to such that t ≥ to . Since A (t) is bounded, this gives uniform exponential stability of
.
x (t) = A (t) x (t)
by Theorem 6.8.
Solution 12.12 Suppose the impulse response is G (t), where G (t) = 0 for t < 0. For u (t) = e −λt , t ≥ 0,
-51-
Linear System Theory, 2/E Solutions Manual
∞ ∞
t
∞
∞
∞
∞
= ∫ ∫ G (t−σ)e −η t dt
e −λ σ d σ
0 0
where all integrals are well-defined because of the stability assumption, and λ, η > 0. Changing the variable of
integration in the inner integral from t to γ = t−σ gives
∞ ∞
∞
∫ y (t)e −η t
dt = ∫ ∫ G (γ)e −η γ d γ
e −η σ e −λ σ d σ
0 0 0
∞
=
G(s) s = η
∫ e −(η+λ)σ d σ
0
1
_____
= G(η)
η+λ
Without the stability assumption we can say that U (s) = 1/(s+λ) for Re [s ] > −λ, and the integral for G (s)
converges for Re [s ] > Re [p 1 ], . . . , Re [pn ], where p 1 , . . . , pn are the poles of G (s). Thus
∞
= ∫ y (t)e −st dt
G (s)
_____
Y (s) =
s+λ 0
then
∞
G (η )
∫ y (t)e −ηt dt = _____
η+λ
0
Solution 12.14 Given u (t), t ≥ 0, and xo , suppose x (t) is a solution of the given state equation. Then with
v (t) = y (t) = C x (t) we have
.
x (t) = A x (t) + Bu (t) , x (0) = xo
.
z (t) = AP z (t) + AB(CB)−1 C x (t)
= AP z (t) + A (I − P) x (t) , z (0) = xo
Thus
. .
x (t) − z (t) = AP [ x (t) − z (t) ] + Bu (t) , x (0) − z (0) = 0
and this gives
t
x (t) − z (t) = ∫ e AP (t−σ) Bu (σ) d σ
0
Since PB = 0 and
n−1
e AP (t−σ) = Σ αi (t−σ) (AP)i
i =0
-52-
Linear System Theory, 2/E Solutions Manual
we get
t
x (t) − z (t) = ∫ α0 (t−σ)Bu (σ) d σ
0
Then
.
w (t) = −(CB)−1 CAP z (t) − (CB)−1 CAB(CB)−1 C x (t) + (CB)−1 C x (t)
= −(CB)−1 CAP z (t) − (CB)−1 CAB(CB)−1 C x (t) + (CB)−1 CA x (t) + (CB)−1 CBu (t)
= −(CB)−1 CAP z (t) + (CB)−1 CA [ −B(CB)−1 C + I ] x (t) + u (t)
= (CB)−1 CAP[ x (t) − z (t) ] + u (t)
t
= (CB)−1 CAP ∫ α0 (t−σ)Bu (σ) d σ + u (t)
0
-53-
CHAPTER 13
a 11 +a 22 ± √
22
)
−4(a
(a
11
+a 11 a 22 −a 12 a 21 )
2
____________________________________
2
and since the eigenvalues are complex,
0 = det [ b Ab ] = a 21 b 21 − a 12 b 22 − (a 11 −a 22 )b 1 b 2
implies
(a 11 −a 22 )2 b 21 b 22 = (a 21 b 21 −a 12 b 22 )2 (**)
(a 21 b 21 −a 12 b 22 )2 + 4a 12 a 21 b 21 b 22 < 0
or,
(a 21 b 21 +a 12 b 22 )2 < 0
p TA = λ p T , p Tb = 0
which implies that the state equation is not controllable for this b, a contradiction. Therefore A cannot have real
eigenvalues, so it must have complex eigenvalues. (For the more challenging version of the problem, we can
show controllability for all nonzero b implies n = 2 by using a (real) P to transform A to real Jordan form. Then
for n > 2 pick a left eigenvector of P −1 AP and a real b ≠ 0 such that p T P −1 b = 0 to obtain a contradiction.)
-54-
Linear System Theory, 2/E Solutions Manual
rank
... = n +p
D CB CAB
This implies
B AB . . . A n−1 B
rank =n
in other words, the first rank condition in (+) holds. Now suppose
A B
rank < n +p
C D
Then
so In −A 0 B
rank < n +p
−C so Ip
D
so = 0
that is,
A 0 B
so In +p −
rank < n +p
C 0 D
so = 0
and this implies that (++) is not controllable. The contradiction shows that the second rank condition in (+) holds.
Solution 13.5 Since J has a single eigenvalue λ, controllability is equivalent to the condition
rank λ I−J
B =n
From the form of the matrix λ I−J it is clear that a necessary and sufficient condition for controllability is that the
set of rows of B corresponding to zero rows of λ I−J must be a linearly independent set of 1 × m vectors.
In the general Jordan form case, applying this condition for each eigenvalue λi gives a necessary and
sufficient condition for controllability. (Note that independence of one set of such rows of B (corresponding to one
distinct eigenvalue) from another set of such rows of B (corresponding to another distinct eigenvalue) is not
required.)
-55-
Linear System Theory, 2/E Solutions Manual
and controllability indices are defined by a left-to-right linear independence search, it is clear that controllability
indices are unaffected by state variable changes.
For the second part, let rk be the number of linearly dependent columns in A k B that arise in the left-to-right
column search of [ B AB . . . A n−1 B ]. Note r 0 = 0 since rank B = m. Then rk is the number of controllability
indices that have value ≤ k. This is because for each of the rk columns of the form A k Bi that are dependent, we
have ρi ≤ k, since for j > 0 the vector A k +j Bi also will be dependent on columns to its left. Thus for
k = 1, . . . , m, rk −rk−1 gives the number of controllability indices with value k. Writing
G 0 ... 0
G ... 0
0
BG ABG . . . A k BG = B AB . . . A k B
. . . .
. . . .
. . . .
0 0 ... G
and using the invertibility of G shows that the same sequence of rk ’s are generated by left-to-right column search
in [ BG ABG . . . A n−1 BG ].
with initial state z (to ) = xo and input va (t) = ua (t) − K (t) xa (t) has the solution z (t) = xa (t). Thus z (t f ) = 0. Since
this argument applies for any xo , the closed-loop state equation is controllable on [to , t f ].
Solution 13.12 By controllability, we can apply a variable change to controller form, with
 = Ao + Bo UP −1 = PAP −1 , B̂ = Bo R = PB
Then we can choose K̂ such that
0 1 ... 0
0 0 ... 0
. . . .
 + B̂K̂ = . . . .
. . . .
0 0 ... 1
−p 0 −p 1
. . . −p
n−1
-56-
Linear System Theory, 2/E Solutions Manual
0
0
.
B̂b̂ = .
.
0
1
Using × to denote various unimportant entries, set
0
1 × ... × 0
0
0 1 ... ×
0
.
. . . .
.
B̂b̂ = Bo Rb̂ = block diagonal . , i = 1, . . . , m . . . . . b̂ = .
. ρi × 1
. . . .
.
0
0 0 ... ×
0
1 0 0 ... 1 1
0 = b̂m−1 + × b̂m
1 = b̂m
Clearly there is a solution for the entries of b̂, regardless of the ×’s. Now it is easy to conclude controllability of
the single-input state equation by calculation of the form of the controllability matrix. Then changing to the
original state variables gives the result since controllability is preserved. In the original variables, take K = K̂P
and b = b̂. For an example to show that b alone does not suffice, take Exercise 13.11 with all ×’s zero.
Solution 13.14 Supposing the rank of the controllability matrix is q, Theorem 13.1 gives an invertible Pa such
that
 11  12 B̂ 1
P −1 , P −1
a APa = a B =
0
, CPa = Ĉ 1 Ĉ 2
0 Â 22
Ĉ 1 Â 11
rank . =l
.
.
n−1
Ĉ 1 Â 11
we have
-57-
Linear System Theory, 2/E Solutions Manual
à 11 0 à 13 B̃ 1
−1
P (P −1
a APa )P = Ã 21 Ã 22 Ã 23 , P −1
(P −1
a B) =
B̃ 2
(*)
0 0 Ã 33
0
CPa P = C̃ 1 0 C̃ 2
where à 11 is l × l, and in fact à 33 =  22 , C̃ 2 = Ĉ 2 . It is easy to see that the state equation formed from
C̃ 1 , Ã 11 , B̃ 1 is both controllable and observable. Also an easy calculation using block triangular structure shows
that the impulse response of the state equation defined by (*) is
A˜ 11 t
C̃ 1 e B̃ 1
It remains only to show that l = s. Using the effect of variable changes on the controllability and observability
matrices and the special structure of (*) give
C C̃ 1
CA C̃ 1 Ã 11
n−1
B AB . . . A n−1 B
.
= . B̃ 1 Ã 11 B̃ 1 . . . Ã 11 B̃ 1
.
.
.
.
n−1 n−1
CA C̃ 1 Ã 11
Thus
C̃ 1
C̃ 1 Ã 11
n−1
B̃ 1 Ã 11 B̃ 1 . . .
rank . Ã 11 B̃ 1 =s
.
.
n−1
C̃ 1 Ã 11
But
C̃ 1
C̃ 1 Ã 11
l−1
B̃ 1 Ã 11 B̃ 1 . . . Ã 11 B̃ 1
rank . = rank =l
.
.
l−1
C̃ 1 Ã 11
-58-
CHAPTER 14
W = ∫ e −At BB T e −A t dt
T
AW + WA = − ∫T _d_
e −At BB T e −A
T
t
dt
0 dt
−At f −A T t f
=−e BB T e + BB T
Letting K = −B T W −1 , we have
−At f −A T t f
(A + BK)W + W (A + BK)T = − ( e BB T e + BB T ) (*)
-59-
Linear System Theory, 2/E Solutions Manual
Solution 14.5
(a) For any n × 1 vector x,
x H (A + A T ) x = x H A x + x H A T x ≥ −2αm x H x
If λ is an eigenvalue of A, and x is a unity-norm eigenvector corresponding to λ, then
_
A x = λ x , x HAT = λ x H
and we conclude
_
λ + λ ≥ −2 αm
Therefore any eigenvalue of A satisfies Re [λ] ≥ −αm , and this implies that for α > αm all eigenvalues of A +α I
have positive real parts. Therefore all eigenvalues of −(A T +α I) = (−A−α I)T have negative real parts.
(b) Using Theorem 7.11, with α > αm , the unique solution of
Q (−A − α I)T + (−A − α I)Q = −BB T (*)
is
∞
Q = ∫ e −(A +α I)t BB T e −(A
T
+α I)t
dt
0
x T e −(A +α I)t B = 0 , t ≥ 0
and the usual sequential differentiation and evaluation at t = 0 gives a contradiction to controllability. Thus Q is
positive definite.
(c) Now consider the linear state equation
.
z (t) = ( A+α I−BB T Q −1 )z (t) (**)
Using (*) to write BB T Q −1 gives
. T
z (t) = −Q (A+α I ) Q −1 z (t)
But Q [ −(A + α I)T ]Q −1 has negative-real-part eigenvalues, which proves that (**) is exponentially stable.
(d) Invoking Lemma 14.6 gives that
.
z (t) = ( A−BB T Q −1 )z (t)
is exponentially stable with rate α > αm .
is a controllable single-input state equation. By a single-input controller form calculation, it is clear that we can
choose a 1 × n gain k that yields a closed-loop state equation with any specified characteristic polynomial. That
is,
-60-
Linear System Theory, 2/E Solutions Manual
Solution 14.8 Without loss of generality we can assume the change of variables in Theorem 13.1 has been
performed so that
A 11 A 12 B1
A= , B=
0 A 22 0
where A 11 is q × q, and
rank λ I−A 11
B1 = q
for all complex values of λ. Then the eigenvalues of A comprise the eigenvalues of A 11 and the eigenvalues of
A 22 . Also, for any complex λ,
λ I−A 11 −A 12 B 1
Now suppose rank [λ I−A B ] = n for all nonnegative-real-part eigenvalues of A. Then by (+) any such
eigenvalue must be an eigenvalue of A 11 , which implies that all eigenvalues of A 22 have negative real parts. But
we can compute an m × q matrix K 1 such that A 11 + B 1 K 1 has negative-real-part-eigenvalues. So setting
K = [ K 1 0 ] we have that
A + BK = [ A11+B1K1   A12 ; 0   A22 ]

has negative-real-part eigenvalues.

Conversely, if K is such that A + BK has negative-real-part eigenvalues, then A22 has negative-real-part eigenvalues. Thus if Re[λ] ≥ 0, then (+) gives

rank [ λI−A   B ] = q + (n−q) = n
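As an aside, the rank test is straightforward to apply numerically; a small sketch with made-up (A, B), not data from the exercise:

    import numpy as np

    def stabilizable(A, B, tol=1e-9):
        """Check rank [lambda I - A, B] = n at every eigenvalue of A with Re(lambda) >= 0."""
        n = A.shape[0]
        for lam in np.linalg.eigvals(A):
            if lam.real >= -tol:
                M = np.hstack([lam * np.eye(n) - A, B])
                if np.linalg.matrix_rank(M, tol=1e-8) < n:
                    return False
        return True

    # Made-up example: the unstable mode at +1 is reached by the input, the stable mode at -2 is not.
    A = np.diag([1.0, -2.0])
    B = np.array([[1.0], [0.0]])
    print(stabilizable(A, B))   # True: only nonnegative-real-part eigenvalues need the rank condition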
Solution 14.9 For controllability assume A and B have been transformed to controller form by a state variable
change. By Exercise 13.10 this does not alter the controllability indices. Then it is easy to show that A+BLC and
B are in controller form with the same block sizes, regardless of L and C. Thus the controllability indices do not
change. Similar arguments apply in the case of observability.
tr [A+BLC ] = tr [A ] + tr [BLC ]
= tr [A ] + tr [CBL ]
= tr [A ]
>0
Thus at least one eigenvalue of A+BLC has positive real part, regardless of L.
Thus in the kth row of G(s), the minimum difference between the numerator and denominator polynomial degrees among the entries Gk1(s), …, Gkm(s) is κk.
CHAPTER 15
∫_τ^{τ+δ} ‖B(σ)‖² dσ = ∫_τ^{τ+δ} ‖Φ(σ, τ)Φ(τ, σ)B(σ)B^T(σ)Φ^T(τ, σ)Φ^T(σ, τ)‖ dσ

Since A(t) is bounded, by Exercise 6.6 there is a positive constant γ such that ‖Φ(σ, τ)‖² ≤ γ² for σ ∈ [τ, τ+δ]. And since ∫_τ^{τ+δ} ‖B(σ)‖² dσ ≤ β1 for every τ,

∫_τ^t ‖B(σ)‖² dσ ≤ Σ_{j=0}^{k} ∫_{τ+jδ}^{τ+(j+1)δ} ‖B(σ)‖² dσ ≤ (k+1) β1 ≤ [ 1 + (t−τ)/δ ] β1
for all t, τ with t ≥ τ. (Of course this provides a simplification of the hypotheses of Theorem 15.5 for the
bounded-A (t) case.)
Solution 15.6 Write the given state equation in the partitioned form
[ ża(t) ; żb(t) ] = [ A11   A12 ; A21   A22 ] [ za(t) ; zb(t) ] + [ B1 ; B2 ] u(t)

y(t) = [ Ip   0 ] [ za(t) ; zb(t) ]
[ ża(t) ; żb(t) ; ėb(t) ] = [ A11+B1K1   A12+B1K2   −B1K2 ; A21+B2K1   A22+B2K2   −B2K2 ; 0   0   A22−HA12 ] [ za(t) ; zb(t) ; eb(t) ] + [ B1N ; B2N ; 0 ] r(t)

y(t) = [ Ip   0   0 ] [ za(t) ; zb(t) ; eb(t) ]
Thus we see that the eigenvalues of the closed-loop state equation are provided by the n eigenvalues of A +BK
and the (n−p) eigenvalues of A 22 −H A 12 . Furthermore, the block triangular structure gives the closed-loop
transfer function as
Y(s) = [ Ip   0 ] (sI − A − BK)^{−1} B N R(s)
Â = [ A+BLJC2   BLH ; GC2+GD22LJC2   F+GD22LH ] ,    B̂ = [ BLJD21 ; GD21+GD22LJD21 ]

Ĉ = [ C1+D1LJC2   D1LH ] ,    D̂ = D1LJD21

These expressions can be rewritten using

L = (I − JD22)^{−1} = I + J (I − D22J)^{−1} D22
which follows from Exercise 28.2 or is easily verified using the identity in Exercise 28.1.
CHAPTER 16
Solution 16.4 By Theorem 16.16 there exist polynomial matrices X (s), Y (s), A (s), and B (s) such that
N(s) X(s) + D(s)Y(s) = Ip (*)
Na (s) A(s) + Da (s)B(s) = Ip (**)
Since D^{−1}(s)N(s) = Da^{−1}(s)Na(s), we have Na(s) = Da(s)D^{−1}(s)N(s). Substituting this into (**) gives
It remains only to prove unimodularity. Since NL (s) and DL (s) are left coprime, there exist polynomial matrices
A(s) and B(s) such that
DL (s) A(s) + NL (s)B(s) = I
That is,
[ X(s)   Y(s) ; NL(s)   DL(s) ] [ D(s)   B(s) ; −N(s)   A(s) ] = [ I   X(s)B(s)+Y(s)A(s) ; 0   I ]
Multiplying on the right by

[ I   −[X(s)B(s)+Y(s)A(s)] ; 0   I ]

gives

[ X(s)   Y(s) ; NL(s)   DL(s) ] [ D(s)   −D(s)[X(s)B(s)+Y(s)A(s)]+B(s) ; −N(s)   N(s)[X(s)B(s)+Y(s)A(s)]+A(s) ] = I
That is,

[ D(s)   −D(s)(X(s)B(s)+Y(s)A(s))+B(s) ; −N(s)   N(s)(X(s)B(s)+Y(s)A(s))+A(s) ] = [ X(s)   Y(s) ; NL(s)   DL(s) ]^{−1}

Since this inverse is a polynomial matrix,

[ X(s)   Y(s) ; NL(s)   DL(s) ]

is unimodular.
I = ( Pρ s^ρ + … + P0 ) ( Qη s^η + … + Q0 )

Solution 16.10   Since N(s)D^{−1}(s) = Ñ(s)D̃^{−1}(s) and both are coprime right polynomial fraction descriptions, there exists a unimodular U(s) such that D(s) = D̃(s)U(s). Suppose for some integer 1 ≤ J ≤ m we have
ck [D ] = ck [D̃] , k = 1, . . . , J−1 ; cJ [D ] < cJ [D̃]
Writing D(s) and D̃(s) in terms of columns Dk (s) and D̃k (s) and writing the (i, j)-entry of U(s) as uij (s) give
Dk (s) = D̃ 1 (s)u 1,k (s) + . . . + D̃J (s)uJ,k (s) + . . . + D̃m (s)um,k (s) , k = 1, . . . , m
Using a similar column notation for D^{hc} and D^{l}(s) gives

Dk^{hc} s^{ck[D]} + Dk^{l}(s) = [ D̃1^{hc} s^{c1[D̃]} + D̃1^{l}(s) ] u_{1,k}(s) + … + [ D̃J^{hc} s^{cJ[D̃]} + D̃J^{l}(s) ] u_{J,k}(s) + … + [ D̃m^{hc} s^{cm[D̃]} + D̃m^{l}(s) ] u_{m,k}(s) ,    k = 1, …, m
We claim that

ck[D] = max_{j = 1, …, m} { cj[D̃] + degree u_{j,k}(s) }

This is shown by an argument using linear independence of D̃1^{hc}, …, D̃m^{hc} as follows. Let

c̃ = max_{j = 1, …, m} { cj[D̃] + degree u_{j,k}(s) }
and let µ_{j,k} be the coefficient of s^{c̃ − cj[D̃]} in u_{j,k}(s). Then not all the µ_{j,k} are zero, and the vector coefficient of the s^{c̃} term on the right side is

Σ_{j=1}^{m} µ_{j,k} D̃j^{hc}
where Ua(s) is (J−1) × J, from which rank U(s) ≤ m−1 for all values of s. This contradicts unimodularity. Thus cJ[D] = cJ[D̃]. The proof is complete since the roles of D(s) and D̃(s) can be reversed.
CHAPTER 17
Solution 17.1   If

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t)

is a realization of G^T(s), then

ż(t) = A^T z(t) + C^T v(t)
w(t) = B^T z(t)

is a realization for G(s) since

G(s) = [ G^T(s) ]^T = [ C (sI − A)^{−1} B ]^T = B^T (sI − A^T)^{−1} C^T
Furthermore, easy calculation of the controllability and observability matrices of the two realizations shows that
one is minimal if and only if the other is. Now, if N (s) and D (s) give a coprime left polynomial fraction
description for G(s), then there exist polynomial matrices X (s) and Y (s) such that
N (s) X (s) + D(s)Y(s) = I
Therefore
X T (s)N T (s) + Y T (s)D T (s) = I
which implies that N T (s) and D T (s) are right coprime. Also, since D(s) is row reduced, D T (s) is column reduced.
Thus we can write down a controller-form minimal realization for G T (s) = N T (s)[ D T (s) ]−1 as per Theorem 17.4,
and this provides a minimal realization for G (s) by the correspondence above.
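As a numerical aside, the transpose correspondence used above is easy to confirm on made-up data (random (A, B, C) below, nothing from the exercise):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, p = 3, 2, 2
    A, B, C = rng.standard_normal((n, n)), rng.standard_normal((n, m)), rng.standard_normal((p, n))

    s = 1.7 + 0.3j   # arbitrary test point, not an eigenvalue of A
    G = C @ np.linalg.inv(s * np.eye(n) - A) @ B
    Gdual = B.T @ np.linalg.inv(s * np.eye(n) - A.T) @ C.T
    print(np.allclose(G.T, Gdual))   # transfer function of (A^T, C^T, B^T) is the transpose

    # Controllability of (A, B) corresponds to observability of (A^T, B^T) and vice versa.
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    obs_dual = np.vstack([B.T @ np.linalg.matrix_power(A.T, k) for k in range(n)])
    print(np.linalg.matrix_rank(ctrb) == np.linalg.matrix_rank(obs_dual))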
Therefore

S Bo^T [ sI − Ao^T − Q^{−1} V Bo^T ]^{−1} = D^{−1}(s) Ψ^T(s)

Using the definition of N(s),

D^{−1}(s) N(s) = S Bo^T [ sI − (Ao^T + Q^{−1} V Bo^T) ]^{−1} Q^{−1} B
              = C Q [ sI − Q^{−1} A Q ]^{−1} Q^{−1} B
              = C (sI − A)^{−1} B
Note that D (s) is row reduced since Dlr = S −1 , which is invertible. Finally, if the state equation is controllable as
well as observable, hence minimal, then it is clear from the definition of D (s) that the degree of the polynomial
fraction description equals the dimension of the minimal realization. Therefore D −1 (s)N (s) is a coprime left
polynomial fraction description.
Solution 17.5 Suppose there is a nonzero h with the property that for each uo there is an xo such that
h C e^{At} xo + ∫_0^t h C e^{A(t−σ)} B uo e^{so σ} dσ = 0 ,    t ≥ 0
Suppose G (s) = N (s)D −1 (s) is a coprime right polynomial fraction description. Then taking Laplace transforms
gives
hC (sI − A)−1 xo + hN (s)D −1 (s)uo (s−so )−1 = 0
that is,
(s−so )hC (sI − A)−1 xo + hN (s)D −1 (s)uo = 0
If so is not a pole of G (s), then D (so ) is invertible. Thus evaluating at s = so gives
hN (so )D −1 (so )uo = 0
and we have that if so is not a pole of G (s), then for every ũo
hN (so )ũo = 0
Thus hN (so ) = 0, that is rank N (so ) < p < m, which implies that so is a transmission zero.
Conversely, suppose so is a transmission zero that is not a pole of G (s). Then for a right-coprime
polynomial fraction description G (s) = N (s)D −1 (s) we have that D (so ) is invertible, and rank N (so ) < p < m.
Thus there exists a nonzero 1 × p vector h such that hN (so ) = 0. Using the identity (just as in the proof of
Theorem 17.13)
(so I − A)−1 (s − so )−1 = (sI − A)−1 (so I − A)−1 + (sI − A)−1 (s−so )−1
we can write for any uo and the choice xo = (so I − A)−1 Buo ,
That is, h has the property that for any uo there is an xo such that

h C e^{At} xo + ∫_0^t h C e^{A(t−σ)} B uo e^{so σ} dσ = 0 ,    t ≥ 0
Since the numerator is the magnitude of a polynomial, it is finite for every so , and this implies det D (so ) = 0, that
is, so is a pole of G (s).
Now suppose so is such that det D (so ) = 0. By coprimeness of the right polynomial fraction description
N (s)D −1 (s), there exist polynomial matrices X (s) and Y (s) such that
X(s)N(s) + Y(s)D(s) = Im
for all s. Therefore
[ X(s)G(s) + Y(s) ] D(s) = Im
for all s, and thus
det [ X(s)G(s) + Y(s) ] det D(s) = 1
for all s. This implies that at s = so we must have
det [ X(so )G (so ) + Y (so ) ] = ∞
Since the entries of the polynomial matrices X (so ) and Y (so ) are finite, some entry of G (so ) must have infinite
magnitude.
CHAPTER 18
Solution 18.2
(a) If x ∈ A (A −1 V ), then clearly x ∈ Im [A ], and there exists y ∈ A −1 V such that x = Ay, which implies x ∈ V.
Therefore A (A −1 V ) ⊂ V ∩ Im [A ]. Conversely, suppose x ∈ V ∩ Im [A ]. Then x ∈ Im [A ] implies there exists y
such that x = Ay, and x ∈ V implies y ∈ A −1 V. Thus x ∈ A (A −1 V ), that is, V ∩ Im [A ] ⊂ A (A −1 V ).
(b) If x ∈ A^{−1}(AV), then Ax ∈ AV, so there exists y ∈ V such that Ax = Ay, that is, A(x−y) = 0. Thus writing

x = y + (x−y) ∈ V + Ker [A]

gives A^{−1}(AV) ⊂ V + Ker [A].
rank C [ B  AB  …  A^{n−1}B ] = p

and thus the proof involves showing that the rank condition is equivalent to positive definiteness of

∫_0^{t_f} C e^{A(t_f−t)} B B^T e^{A^T(t_f−t)} C^T dt
Solution 18.10 We show equivalence of the negations. First suppose 0 ≠ V ⊂ Ker [C ] is a controlled invariant
subspace. Then picking a friend F of V we have
(A + BF)V ⊂ V ⊂ Ker [C ]
Selecting 0 ≠ xo ∈ V, this gives
e (A + BF)t xo ∈ V , t ≥ 0
and thus
Ce (A + BF)t xo = 0 , t ≥ 0
Thus the closed-loop state equation is not observable, since the zero-input response to xo ≠ 0 is identical to the
zero-input response to the zero initial state.
Conversely, suppose the closed-loop state equation is not observable for some F. Then
N = ∩_{k=0}^{n−1} Ker [ C(A + BF)^k ] ≠ 0
0(r−q) × q B̂ 22
B̂ =
0(c−r) × q B̂ 32
0(n−c) × q 0(n−c) × (m−q)
0c × (n−c) Â 22
CHAPTER 19
for all z ∈ S,

Ker z^T = Ker [ z^T ; x^T ]    (*)

dim Ker [ z^T ; x^T ] < dim Ker z^T
Solution 19.2 By induction we will show that (W k ) ⊥ = V k , where V k is generated by the algorithm for V * in
Theorem 19.3:
V^0 = K
V^{k+1} = K ∩ A^{−1}(V^k + B) = V^k ∩ A^{−1}(V^k + B)
For k = 0 the claim becomes (K^⊥)^⊥ = K, which is established in Exercise 19.1. So suppose for some nonnegative integer K we have (W^K)^⊥ = V^K. Then, using Exercise 19.1,

(W^{K+1})^⊥ = [ W^K + A^T( W^K ∩ B^⊥ ) ]^⊥
            = (W^K)^⊥ ∩ [ A^T( W^K ∩ B^⊥ ) ]^⊥
            = V^K ∩ [ A^T( (V^K)^⊥ ∩ B^⊥ ) ]^⊥

Thus

(W^{K+1})^⊥ = V^K ∩ A^{−1}(V^K + B) = V^{K+1}
This completes the induction proof, and gives V * = V n = (W n ) ⊥ .
= R1
Assume now that for some positive integer K we have

Σ_{j=1}^{K} (A + BF)^{j−1} (B ∩ V*) = R^K = V* ∩ (A R^{K−1} + B)
Then

Σ_{j=1}^{K+1} (A + BF)^{j−1} (B ∩ V*) = B ∩ V* + (A + BF) Σ_{j=1}^{K} (A + BF)^{j−1} (B ∩ V*)
                                      = B ∩ V* + (A + BF) R^K      (+)
From the algorithm, R^K ⊂ R^n ⊂ V*, thus

(A + BF) R^K ⊂ (A + BF) V* ⊂ V*
Using the second part of Exercise 18.4 gives
B ∩ V * + (A + BF)R K = [ B + (A + BF)R K ] ∩ V *
Since (A + BF)R K + B = AR K + B, the right side of (+) can be rewritten as
B ∩ V * + (A + BF)R K = V * ∩ [ AR K + B ]
= R K +1
This completes the induction proof of the Hint, and Theorem 19.6 gives R * = R n .
[ w1  …  wq ]

Then

(E + BK) wj = E wj + B K wj = vj + B uj + B [ −u1  …  −uq ] ej = vj ,    j = 1, …, q

That is, K is such that

Im [ E + BK ] ⊂ V*
Ĉ1 = C1 P = [ Ĉ11   0   0 ]
Ĉ2 = C2 P = [ 0   Ĉ22   0 ]

B̂1 = [ B̂11 ; 0 ; B̂13 ] ,    B̂2 = [ 0 ; B̂22 ; B̂23 ]

Â = [ Â11   0   0 ; 0   Â22   0 ; Â31   Â32   Â33 ]
That is, with z(t) = P^{−1}x(t), the closed-loop state equation takes the partitioned form

ża(t) = Â11 za(t) + B̂11 r1(t)
żb(t) = Â22 zb(t) + B̂22 r2(t)
żc(t) = Â31 za(t) + Â32 zb(t) + Â33 zc(t) + B̂13 r1(t) + B̂23 r2(t)
y1(t) = Ĉ11 za(t)
y2(t) = Ĉ22 zb(t)
CHAPTER 20
Solution 20.1 A sketch shows that v (t) is a sequence of unit-height rectangular pulses, occurring every T
seconds, with the width of the k th pulse given by k/5, k = 0, . . . , 5. This is a piecewise-continuous (actually,
piecewise-constant) input, and the continuous-time solution formula gives
z(t) = e^{F(t−to)} z(to) + ∫_{to}^{t} e^{F(t−σ)} G v(σ) dσ
The integral term is not linear in the input sequence u(k), so we approximate the integral when u(k) is small. Changing integration variable to γ = T−τ, another way to write the integral term is

e^{FT} ∫_0^{|u(k)|T} e^{−Fγ} G dγ · sgn[u(k)]

For small u(k),

∫_0^{|u(k)|T} e^{−Fγ} dγ = ∫_0^{|u(k)|T} ( I − Fγ + … ) dγ ≈ |u(k)| T I

Then since |u(k)| sgn[u(k)] = u(k), this gives the approximate, linear, discrete-time state equation

z[(k+1)T] = e^{FT} z(kT) + e^{FT} G T u(k)
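As a rough numerical check of this approximation (made-up F, G, sampling period, and pulse width; the exact update assumes a unit-height pulse occupying the start of the period):

    import numpy as np
    from scipy.linalg import expm

    F = np.array([[0.0, 1.0], [-1.0, -0.5]])
    G = np.array([[0.0], [1.0]])
    T, u = 0.1, 0.03                      # small pulse width u*T

    # Exact one-period update: pulse of width u*T, then zero input (F assumed invertible here).
    def exact_step(z, u):
        z1 = expm(F * u * T) @ z + np.linalg.solve(F, expm(F * u * T) - np.eye(2)) @ G
        return expm(F * (1 - u) * T) @ z1

    z = np.array([[1.0], [0.0]])
    approx = expm(F * T) @ z + expm(F * T) @ G * T * u
    print(np.linalg.norm(exact_step(z, u) - approx))   # small for small u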
Solution 20.4 For a constant nominal input u (k) = ũ, constant nominal solutions are given by
ũ 2
x̃ = 2 , ỹ = ũ
ũ
4ũ
y δ (k) = 2ũ
−1 x δ (k) + 2ũ u δ (k)
Solution 20.10 Computing Φ( j +q, j) for the first few values of q ≥ 0 easily leads to the general formula for
Φ(k, j):
Φ(k, j) = [ 0   a1(k−1)a2(k−2)a1(k−3)a2(k−4) ⋯ a1(j) ; a2(k−1)a1(k−2)a2(k−3)a1(k−4) ⋯ a2(j)   0 ] ,    k−j odd, ≥ 1

Φ(k, j) = [ a1(k−1)a2(k−2)a1(k−3)a2(k−4) ⋯ a2(j)   0 ; 0   a2(k−1)a1(k−2)a2(k−3)a1(k−4) ⋯ a1(j) ] ,    k−j even, ≥ 2
≤ Σ_{j=k1}^{k−1} ‖Φ(k, j)‖ ‖Φ(j, ko)‖
‖Φ(k, ko)‖ ≤ (1/(k−k1)) Σ_{j=k1}^{k−1} ‖Φ(k, j)‖ ‖Φ(j, ko)‖ ,    k ≥ k1+1 ≥ ko+1
gives

P(k) = Φ_A(k, 0) = [ 1   Σ_{i=0}^{k−1} a(i) ; 0   1 ] ,    k ≥ 1

and

P^{−1}(k+1) = Φ_A(0, k+1) = [ 1   −Σ_{i=0}^{k} a(i) ; 0   1 ] ,    k ≥ 0
CHAPTER 21
12 z+7 2
z +7z+12
and
Y (z) = zc(zI−A)−1 xo + c(zI−A)−1 b U (z)
z
_________ z−1
1/ 20 _________ z
____
= −z−19 z−1
1/ 20
+ 2
z 2 +7z+12 z +7z+12 z−1
=0
Therefore the complete solution is y (k) = 0, k ≥ 0.
Y(s)/U(s) = [ 0   1 ] (sI − A)^{−1} b = 1/s

and

Z[y(kT)] / Z[u(kT)] = h (zI − F)^{−1} g = [ 0   1 ] [ z−1   −T ; 0   z−1 ]^{−1} [ T²/2 ; T ] = T/(z−1)
Solution 21.7
(a) The solution formula gives, using a standard formula for a finite geometric sum,
x(k) = (1+r/l)^k xo + Σ_{j=0}^{k−1} (1+r/l)^{k−j−1} b

     = (1+r/l)^k xo + b (1+r/l)^{k−1} [ 1 − 1/(1+r/l)^k ] / [ 1 − 1/(1+r/l) ]
(c) Set

0 = x(19) = (1.05)^{19} [ xo + (−50,000)/0.05 ] + 50,000/0.05
and solve to obtain xo = $604,266. Of course this means you have actually won only $654,266, but
congratulations remain appropriate.
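As an aside, the number quoted in (c) can be reproduced in a couple of lines; the 5% rate, the 50,000 withdrawals, and the 19 periods are read off the displayed equation:

    # Sketch: solve 0 = (1.05)**19 * (xo - 50_000/0.05) + 50_000/0.05 for xo.
    r, b, k = 0.05, -50_000.0, 19
    xo = -b * (1 - (1 + r) ** (-k)) / r
    print(round(xo))          # 604266, matching the value quoted above

    # Forward check with the recursion x(k+1) = (1+r)x(k) + b.
    x = xo
    for _ in range(k):
        x = (1 + r) * x + b
    print(abs(x) < 1e-6)      # True: the account is exhausted at k = 19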
Solution 21.9 With T = Td / l and v (t) = v (kT), kT ≤ t ≤ (k +1)T, evaluate the solution formula
z(t) = e^{F(t−τ)} z(τ) + ∫_τ^t e^{F(t−σ)} G v(σ−Td) dσ ,    t ≥ T

at t = (k+1)T, τ = kT to obtain

z[(k+1)T] = e^{FT} z(kT) + ∫_0^T e^{Fτ} dτ G v[(k−l)T]
          ≜ A z(kT) + B v[(k−l)T]
Defining

x(k) = [ z(kT) ; v[(k−l)T] ; ⋮ ; v[(k−1)T] ] ,    u(k) = v(kT) ,    ŷ(k) = y(kT)

we get
x(k+1) = [ A  B  0  ⋯  0  0 ; 0  0  1  ⋯  0  0 ; ⋮ ; 0  0  0  ⋯  1  0 ; 0  0  0  ⋯  0  1 ; 0  0  0  ⋯  0  0 ] x(k) + [ 0 ; 0 ; ⋮ ; 0 ; 1 ] u(k) ,    x(0) = [ z(0) ; v(−lT) ; ⋮ ; v(−2T) ; v(−T) ]

ŷ(k) = [ C  0  ⋯  0 ] x(k)
The dimension of the initial state is n+l. The transfer function of this state equation is the same as the transfer
function of
z (k +1) = Az (k) + Bu (k−l)
y (k) = Cz (k)
Taking the z-transform, using the right shift property, gives
Y (z) = C (zI−A)−1 Bz −l U (z)
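As a computational aside, A = e^{FT} and B = ∫_0^T e^{Fτ}dτ G can be obtained together from one matrix exponential of an augmented matrix; this trick is not part of the solution, and the F, G, T below are made up:

    import numpy as np
    from scipy.linalg import expm

    # Made-up continuous-time data and sampling period.
    F = np.array([[0.0, 1.0], [0.0, 0.0]])
    G = np.array([[0.0], [1.0]])
    T = 0.5

    # expm([[F, G], [0, 0]]*T) has block form [[e^{FT}, int_0^T e^{F tau} dtau G], [0, I]].
    n, m = F.shape[0], G.shape[1]
    M = expm(np.block([[F, G], [np.zeros((m, n + m))]]) * T)
    A, B = M[:n, :n], M[:n, n:]
    print(A)   # [[1. 0.5], [0. 1.]]
    print(B)   # [[0.125], [0.5]]  i.e. [T^2/2, T]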
Solution 21.13 By Lemma 21.6, given any ko there is a K-periodic solution of the forced state equation if and
only if there is an xo satisfying
[ I − Φ(ko+K, ko) ] xo = Σ_{j=ko}^{ko+K−1} Φ(ko+K, j+1) f(j)    (*)
Similarly there is a K-periodic solution of the unforced state equation if and only if there is a zo satisfying
[I − Φ(ko +K, ko )]zo = 0 (**)
Since there is no zo ≠ 0 satisfying (**), it follows that [I−Φ(ko +K, ko )] is invertible. This implies that for each ko
there exists a unique xo satisfying (*). For this xo the forced state equation has a K-periodic solution.
However, if there is a zo ≠ 0 satisfying (**), (*) might still have a solution if the right side is in the range of
[I−Φ(ko +K, ko )].
Solution 21.14 Since the forced state equation has no K-periodic solutions, for any ko there is by Exercise
21.13 a zo ≠ 0 such that the solution of
z (k +1) = A (k)z (k) , z (ko ) = zo
is K-periodic. Thus by Lemma 21.6,
[I − Φ(ko +K, ko )]zo = 0
and therefore [I − Φ(ko +K, ko )] is not invertible. Since there are no solutions to
[ I − Φ(ko+K, ko) ] xo = Σ_{j=ko}^{ko+K−1} Φ(ko+K, j+1) f(j)
-83-
Linear System Theory, 2/E Solutions Manual
we have by linear algebra that there exists a nonzero, n × 1 vector p such that
[I − Φ(ko +K, ko )]T p = 0
and
p^T Σ_{j=ko}^{ko+K−1} Φ(ko+K, j+1) f(j) ≜ q ≠ 0
Now pick any xo . Then it is easy to show that the corresponding solution satisfies p T x(ko +jK) = p T xo +jq,
j = 1, 2, . . . . This shows that the solution is unbounded.
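As a numerical aside, the criterion of Lemma 21.6 is easy to apply directly; a sketch with made-up 3-periodic data:

    import numpy as np

    K = 3
    A = [np.array([[0.0, 1.0], [-0.5, 0.3]]),
         np.array([[0.2, 0.0], [0.1, 0.4]]),
         np.array([[0.0, -1.0], [1.0, 0.0]])]          # A(k) = A[k % K], made up
    f = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]

    ko = 0
    Phi = np.eye(2)                                    # becomes Phi(ko+K, ko)
    rhs = np.zeros(2)                                  # zero-initial-state forced response over one period
    for j in range(ko, ko + K):
        rhs = A[j % K] @ rhs + f[j % K]                # after the loop: sum of Phi(ko+K, j+1) f(j)
        Phi = A[j % K] @ Phi

    xo = np.linalg.solve(np.eye(2) - Phi, rhs)         # unique when I - Phi(ko+K, ko) is invertible
    x = xo.copy()
    for j in range(ko, ko + K):                        # propagate one period: solution should return to xo
        x = A[j % K] @ x + f[j % K]
    print(np.allclose(x, xo))                          # True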
CHAPTER 22
Solution 22.4   If the state equation is uniformly exponentially stable, then there exist γ ≥ 1 and 0 ≤ λ < 1 such that

‖Φ(k, j)‖ ≤ γ λ^{k−j} ,    k ≥ j

Equivalently, for every k,

‖Φ(k+j, k)‖ ≤ γ λ^{j} ,    j ≥ 0

which implies

φ_j = sup_k ‖Φ(k+j, k)‖ ≤ γ λ^{j}

Then

lim_{j→∞} φ_j^{1/j} ≤ lim_{j→∞} (γ λ^{j})^{1/j} = λ lim_{j→∞} γ^{1/j} ≤ λ < 1
Now suppose

lim_{j→∞} (φ_j)^{1/j} < 1

Then there exist ε > 0 and a positive integer J such that φ_j < (1−ε)^j for all j > J. Let λ = 1−ε, and choose γ ≥ 1 such that max_{1≤j≤J} φ_j ≤ γ λ^J. Then for j ≤ J,

‖Φ(k+j, k)‖ ≤ sup_k ‖Φ(k+j, k)‖ = φ_j ≤ max_{1≤i≤J} φ_i ≤ γ λ^J ≤ γ λ^j

and for j > J,

‖Φ(k+j, k)‖ ≤ sup_k ‖Φ(k+j, k)‖ = φ_j < (1−ε)^j = λ^j ≤ γ λ^j

Thus ‖Φ(k, j)‖ ≤ γ λ^{k−j} for all k ≥ j, and the state equation is uniformly exponentially stable.
which is equivalent to

‖Φ_{A^T(−k)}(k, j)‖ ≤ γ λ^{k−j} ,    k ≥ j

which is equivalent to uniform exponential stability of A^T(−k).
However for the case of A^T(k), consider the example where A(k) is 3-periodic with

A(0) = [ 0   2 ; 1/2   0 ] ,    A(1) = [ 0   1/2 ; 1/2   0 ] ,    A(2) = [ 2   0 ; 0   1/2 ]

Then

Φ_{A(k)}(3, 0) = [ 1/2   0 ; 0   1/2 ] ,    Φ_{A^T(k)}(3, 0) = [ 2   0 ; 0   1/8 ]
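These two products are quickly confirmed (nothing assumed beyond the matrices just listed):

    import numpy as np

    A0 = np.array([[0, 2.0], [0.5, 0]])
    A1 = np.array([[0, 0.5], [0.5, 0]])
    A2 = np.array([[2.0, 0], [0, 0.5]])

    print(A2 @ A1 @ A0)          # Phi_A(3,0)      = diag(1/2, 1/2)
    print(A2.T @ A1.T @ A0.T)    # Phi_{A^T}(3,0)  = diag(2, 1/8); its powers grow like 2^(k/3)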
CHAPTER 23
Solution 23.1   With Q = qI, where q > 0, we compute A^T(k)QA(k) − Q to get the sufficient condition for uniform exponential stability:

a_1²(k), a_2²(k) ≤ 1 − ν/q ,    ν > 0

Thus the state equation is uniformly exponentially stable if there exists a constant α < 1 such that for all k

|a_1(k)|, |a_2(k)| ≤ α
With

Q = [ q1   0 ; 0   q2 ]

where q1, q2 > 0, the sufficient condition for uniform exponential stability becomes existence of a constant ν > 0 such that for all k,

a_1²(k) ≤ (q2 − ν)/q1 ,     a_2²(k) ≤ (q1 − ν)/q2
These conclusions show uniform exponential stability under weaker conditions, where one bounded coefficient
can be larger than unity if the other bounded coefficient is suitably small. For example, suppose sup_k |a_2(k)| = α < ∞. Then we can take q1 = α²+0.01, q2 = 1, and ν = 0.01 to conclude uniform exponential stability if a_1²(k) ≤ 0.99/(α²+0.01) for all k.
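As an aside, the diagonal-Q test is easy to spot-check numerically; the value of α and the trial coefficients below are made up:

    import numpy as np

    alpha, nu = 2.0, 0.01
    q1, q2 = alpha**2 + 0.01, 1.0
    Q = np.diag([q1, q2])

    def lyap_decrement(a1, a2):
        """Largest eigenvalue of A^T Q A - Q for A(k) = [[0, a1], [a2, 0]]; want <= -nu."""
        A = np.array([[0.0, a1], [a2, 0.0]])
        return max(np.linalg.eigvalsh(A.T @ Q @ A - Q))

    a1_max = np.sqrt(0.99 / (alpha**2 + 0.01))
    print(lyap_decrement(a1_max, alpha) <= -nu + 1e-12)   # True: a2 as large as alpha is allowed
    print(lyap_decrement(1.0, alpha) <= -nu)              # False: this Q does not cover a1 = 1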
Solution 23.4 Using the transition matrix computed in Exercise 20.10, an easy computation gives that
Q(k) = I + Σ_{j=k+1}^{∞} Φ^T(j, k) Φ(j, k)
Since this Q (k) is guaranteed to satisfy I ≤ Q (k) and A T (k)Q (k)A (k)−Q (k) ≤ −I for all k, a sufficient condition
for uniform exponential stability is existence of a constant ρ such that q 11 (k), q 22 (k) ≤ ρ for all k. Clearly this
holds if a_1²(k), a_2²(k) ≤ α < 1 for all k, but it also holds under weaker conditions. For example suppose the α-bound is violated only for k = 0, and

a_1²(0) > 1 ,    a_1²(0) a_2²(1) < α
Then we can conclude uniform exponential stability. (More sophisticated analyses should be possible . . . .)
Solution 23.6 If the state equation is exponentially stable, then by Theorem 23.7 there is for any symmetric M
a unique symmetric Q such that
A T QA − Q = −M
Write
m1 m2 q1 q2
M= , Q=
m2 m3 q2 q3
−1 0 a 20
q1
−m 1
0 −1−a 0 a 0 −m 2
q2 =
1 −2 0
q3
−m 3
The condition

det [ −1   0   a0² ; 0   −1−a0   a0 ; 1   −2   0 ] ≠ 0

reduces to the condition a0 ≠ 0, 1, −2. Assuming this condition we compute Q for M = I, and use the fact that
Q > 0 since M > 0. The expression
The expression

[ q1 ; q2 ; q3 ] = [ −1   0   a0² ; 0   −1−a0   a0 ; 1   −2   0 ]^{−1} [ −1 ; 0 ; −1 ]

gives

Q = ( 1/[ a0(a0+2)(a0−1) ] ) [ −a0(a0²+a0+2)   −2a0 ; −2a0   −2(a0+1) ]
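As a numerical aside, the closed form can be compared with a direct Lyapunov solve; the sketch assumes the exercise's A is the companion matrix [ 0  1 ; −a0  −1 ], which is consistent with the 3 × 3 coefficient matrix above, and the value of a0 is made up:

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    a0 = 0.5                                 # any value other than 0, 1, -2
    A = np.array([[0.0, 1.0], [-a0, -1.0]])  # assumed form of the exercise's A
    M = np.eye(2)

    # solve_discrete_lyapunov(a, q) returns X with a X a^H - X + q = 0; use a = A^T for A^T Q A - Q = -M.
    Q = solve_discrete_lyapunov(A.T, M)

    denom = a0 * (a0 + 2) * (a0 - 1)
    Q_closed = np.array([[-a0 * (a0**2 + a0 + 2), -2 * a0],
                         [-2 * a0, -2 * (a0 + 1)]]) / denom
    print(np.allclose(Q, Q_closed))   # True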
p^H A^T Q A p − p^H Q p = −p^H M p

That is,

( |λ|² − 1 ) p^H Q p = −p^H M p

If p^H M p > 0, then |λ|² − 1 < 0, which gives |λ| < 1. But suppose p^H M p = 0. Then for k ≥ 0,

0 = |λ|^{2k} p^H M p = ( λ̄^k p^H ) M ( λ^k p ) = p^H (A^T)^k M A^k p

Therefore

lim_{k→∞} A^k p = lim_{k→∞} λ^k p = 0
CHAPTER 24
A^T(k)A(k) = [ a_2²(k)   0 ; 0   a_1²(k) ]

it is clear that

λ_max^{1/2}(k) = max [ |a_1(k)| , |a_2(k)| ]

Thus Corollary 24.3 states that the state equation is uniformly stable if there exists a constant γ such that

Π_{i=j}^{k} max [ |a_1(i)| , |a_2(i)| ] ≤ γ     (#)
4 0
The eigenvalues are ± 2/ 3, so the state equation is uniformly stable, but clearly (#) fails.
r(k+1) Π_{j=ko}^{k} 1/[1+η(j)ν(j)] ≤ r(k) Π_{j=ko}^{k−1} 1/[1+η(j)ν(j)] + ν(k)ψ(k) Π_{j=ko}^{k} 1/[1+η(j)ν(j)] ,    k ≥ ko+1
Solution 24.7   By assumption ‖Φ_A(k, j)‖ ≤ γ for k ≥ j. Treating f(k, z(k)) as an input, the complete solution formula is

z(k) = Φ_A(k, ko) z(ko) + Σ_{j=ko}^{k−1} Φ_A(k, j+1) f(j, z(j)) ,    k ≥ ko+1

This gives

‖z(k)‖ ≤ γ ‖z(ko)‖ + Σ_{j=ko}^{k−1} γ ‖f(j, z(j))‖
       ≤ γ ‖z(ko)‖ + Σ_{j=ko}^{k−1} γ α_j ‖z(j)‖ ,    k ≥ ko+1
k , k < 0
which is bounded for each k. But for ko < 0, the solution of this state equation yields

z(0) = (3/2)^{−ko} zo

Clearly any candidate bound γ can be violated by choosing −ko sufficiently large, so the state equation is not uniformly stable.
CHAPTER 25
Solution 25.1 If M (ko , k f ) is not invertible, then there exists a nonzero, n × 1 vector xa such that
0 = xa^T M(ko, kf) xa
  = Σ_{j=ko}^{kf−1} xa^T Φ^T(j, ko) C^T(j) C(j) Φ(j, ko) xa
  = Σ_{j=ko}^{kf−1} ‖C(j) Φ(j, ko) xa‖²
This implies
C ( j)Φ( j, ko )xa = 0 , j = ko , . . . , k f −1
which shows that the nonzero initial state xa yields the same output on the interval as does the zero initial state.
Therefore the state equation is not observable.
On the other hand, for any initial state xo we can write, just as in the proof of Theorem 25.9,

xo = M^{−1}(ko, kf) O^T(ko, kf) [ y(ko) ; ⋮ ; y(kf−1) ]
= b (k f −1)b T (k f −1)
This W (0, k f ) has rank at most 1, and if n ≥ 2 the state equation is not reachable on [0, k f ].
and let u (k) = 0 for other values of k. Then it is easy to show that the zero-state response to this input yields
y (k f ) = y f . Thus the state equation is output reachable on [ko , k f ].
Conversely, suppose the state equation is output reachable on [ko , k f ]. If WO (ko , k f ) is not invertible, then
there exists a nonzero p × 1 vector ya such that
0 = ya^T WO(ko, kf) ya
  = Σ_{j=ko}^{kf−1} ya^T C(kf) Φ(kf, j+1) B(j) B^T(j) Φ^T(kf, j+1) C^T(kf) ya
  = Σ_{j=ko}^{kf−1} ‖ya^T C(kf) Φ(kf, j+1) B(j)‖²
Therefore
y Ta C (k f )Φ(k f , j +1)B( j) = 0 , j = ko , . . . , k f −1
But by output reachability, with y f = ya , there exists an input ua (k) such that
ya = Σ_{j=ko}^{kf−1} C(kf) Φ(kf, j+1) B(j) ua(j)

Thus

ya^T ya = Σ_{j=ko}^{kf−1} ya^T C(kf) Φ(kf, j+1) B(j) ua(j) = 0
and this implies ya = 0. This contradiction shows that WO (ko , k f ) must be invertible.
Note that if rank C (k f ) < p, then WO (ko , k f ) cannot be invertible, and the state equation cannot be output
reachable.
If m = p = 1, then
WO(ko, kf) = Σ_{j=ko}^{kf−1} G²(kf, j)
Thus the state equation is output reachable on [ko , k f ] if and only if G (k f , j) ≠ 0 for some j = ko , . . . , k f −1.
Solution 25.13 We will prove that the state equation is reconstructible if and only if
[ C ; CA ; ⋮ ; CA^{n−1} ] z = 0    implies    A^n z = 0      (*)
That is, if and only if the null space of the observability matrix is contained in the null space of A n .
First, suppose the state equation is not reconstructible. Then there exist n × 1 vectors xa and xb such that
xa ≠ xb and
[ C ; CA ; ⋮ ; CA^{n−1} ] xa = [ C ; CA ; ⋮ ; CA^{n−1} ] xb ,    A^n xa ≠ A^n xb
That is,

[ C ; CA ; ⋮ ; CA^{n−1} ] (xa − xb) = 0 ,    A^n (xa − xb) ≠ 0

Thus z = xa − xb is a nonzero vector such that

[ C ; CA ; ⋮ ; CA^{n−1} ] z = 0    and    A^n z ≠ 0
CHAPTER 26
1 1
and

R3(k) = [ B(k)   Φ(k+1, k)B(k−1)   Φ(k+1, k−1)B(k−2) ] = [ 0   k   2k−1 ; 1   1   k ]

From the respective ranks the state equation is 3-step reachable, but not 2-step reachable.
= [ 0_{1×n}   1 ] [ (zI−A)^{−1}   0 ; z^{−1} c (zI−A)^{−1}   z^{−1} ] [ b ; d ] − 1
= z^{−1} c (zI−A)^{−1} b + z^{−1} d − 1
= z^{−1} G(z) − 1
Solution 26.6 By Theorem 26.8 G (z) is realizable if and only if it is a matrix of (real-coefficient) strictly-
proper rational functions. By partial fraction expansion of G (z)/ z we can write G (z) in the form
G(z) = Σ_{l=1}^{m} Σ_{r=1}^{σ_l} G_{lr} z / (z − λ_l)^r
Here λ1, …, λm are distinct complex numbers such that if λ_L is complex, then λ_M = λ̄_L for some M. Furthermore the p × m complex matrices satisfy G_{Mr} = Ḡ_{Lr} for r = 1, …, σ_L. From Table 1.10 the corresponding unit pulse response is

G(k) = Σ_{l=1}^{m} Σ_{r=1}^{σ_l} G_{lr} \binom{k}{r−1} λ_l^{k+1−r}      (#)
Thus we can state that a unit pulse response G(k) is realizable if and only if
(a) there exist positive integers m, σ1, …, σm, distinct complex numbers λ1, …, λm, and σ1 + … + σm complex p × m matrices G_{lr} such that (#) holds for all k ≥ 1, and
(b) if λ_L is complex, then λ_M = λ̄_L for some M, and the p × m complex matrices satisfy G_{Mr} = Ḡ_{Lr} for r = 1, …, σ_L.
Solution 26.8 Suppose the given state equation is minimal and of dimension n. We can write its (strictly-
proper, rational) transfer function as
G(z) = [ c · adj(zI−A) · b ] / det(zI−A)
where the polynomial det (zI−A) has degree n. If the numerator and denominator polynomials have a common
root, then this root can be canceled without changing the inverse z transform of G (z). Therefore, following
Example 26.10, we can write by inspection a dimension-(n−1) realization of the unit pulse response of the
original state equation. This contradicts the assumed minimality, and the contradiction gives that the two
polynomials cannot have a common root.
Now suppose the polynomials det (zI−A) and c . adj (zI−A) . b have no common root, but that the given state
equation is not minimal. Then there is a minimal realization
z (k +1) = Fz (k) + gu (k)
y (k) = hz (k)
and we then have
[ c · adj(zI−A) · b ] / det(zI−A) = [ h · adj(zI−F) · g ] / det(zI−F)
where the polynomial det (zI−F) has degree no larger than n−1. This implies that the polynomials det (zI−A) and
c . adj (zI−A) . b have a common root—a contradiction. Therefore the given state equation is minimal.
Solution 26.12 Either by writing a minimal realization of G (z) in the form of Example 26.10 and computing
cA k b, k = 0, . . . , 4, or by long division of G (z), it is easy to verify the first 5 Markov parameters.
For the second part we can either work with an assumed transfer function, or assume a dimension-2 state
equation of the form
x(k+1) = [ 0   1 ; −a0   −a1 ] x(k) + [ 0 ; 1 ] u(k)

y(k) = [ c0   c1 ] x(k)
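As an aside, the Markov-parameter computation is easy to reproduce numerically; the G(z) below is a hypothetical example, not the one in Exercise 26.12:

    import numpy as np
    from scipy.signal import dimpulse

    # Hypothetical strictly proper G(z) = (c1*z + c0)/(z^2 + a1*z + a0).
    a1, a0 = -0.5, 0.06
    c1, c0 = 2.0, 1.0

    # Markov parameters from a companion-form realization: G(z) = sum_{k>=0} (c A^k b) z^{-(k+1)}.
    A = np.array([[0.0, 1.0], [-a0, -a1]])
    b = np.array([[0.0], [1.0]])
    c = np.array([[c0, c1]])
    markov = [(c @ np.linalg.matrix_power(A, k) @ b).item() for k in range(5)]

    # The same numbers from the unit pulse response of the transfer function (h[0] = 0, h[k] = cA^{k-1}b).
    _, (h,) = dimpulse(([c1, c0], [1.0, a1, a0], 1), n=6)
    print(markov)
    print(h.ravel()[1:])   # matches markov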
CHAPTER 27
Solution 27.4 Suppose the entry Gij (z) has one pole at z = 1, that is
Gij(z) = Nij(z) / [ (z−1) Dij(z) ]
where all roots of the polynomial Dij (z) have magnitude less than unity (so Dij (1) ≠ 0), and the polynomial Nij (z)
satisfies Nij (1) ≠ 0. Suppose that the m × 1 U (z) has all components zero except for U j (z) = z /(z−1). Then the
i th -component of the output is given by
Yi(z) = z Nij(z) / [ (z−1)² Dij(z) ]
By partial fraction expansion yi(k) includes decaying exponential terms, possibly a constant term, and the term

[ Nij(1) / Dij(1) ] k ,    k ≥ 0
Since this term is unbounded, every realization of G (z) fails to be uniform bounded-input, bounded-output stable.
Solution 27.7 The claim is not true in the time-varying case. Consider the scalar state equation
x (k +1) = x (k) + δ(k)u (k)
y (k) = x (k)
where δ(k) is the unit pulse. The zero-state response to any input is
y(k) = { 0 ,   k ≥ ko > 0 ;    u(0) ,   k ≥ ko+1, ko ≤ 0 }
Thus the state equation is uniform bounded-input, bounded-output stable with η = 1. However for ko = 0 and
u (k) = (1/ 2)k we have u (k) → 0 as k → ∞, but y (k) = 1 for all k ≥ 1.
For the time-invariant case the claim can be proved as follows. Assume u (k) → 0 as k → ∞. Given ε > 0
we will find a K such that y (k) ≤ ε, k ≥ K, which shows that y (k) → 0 as k → ∞. With
y(k) = Σ_{j=0}^{k} G(k−j) u(j)
µ = sup_{k≥0} ‖u(k)‖ ,    η = Σ_{k=0}^{∞} ‖G(k)‖

The first constant is finite for a well-defined sequence that goes to zero, and the second is finite by uniform bounded-input, bounded-output stability. Then there is a positive integer K1 such that

‖u(k)‖ ≤ ε/(2η) ,  k ≥ K1 ,     Σ_{k=K1}^{∞} ‖G(k)‖ ≤ ε/(2µ)

Splitting the convolution sum accordingly gives, for all sufficiently large k,

‖y(k)‖ ≤ µ · ε/(2µ) + ε/(2η) · η = ε
CHAPTER 28
Solution 28.2   Lemma 16.18 gives that if V11 and V are invertible, then

V^{−1} = [ V11   V12 ; V21   V22 ]^{−1} = [ V11^{−1} + V11^{−1} V12 Va^{−1} V21 V11^{−1}    −V11^{−1} V12 Va^{−1} ; −Va^{−1} V21 V11^{−1}    Va^{−1} ]

where Va = V22 − V21 V11^{−1} V12. From the expression V V^{−1} = I, written as

[ V11   V12 ; V21   V22 ] [ W11   W12 ; W21   W22 ] = I
we obtain
V11 W11 + V12 W21 = I
V21 W11 + V22 W21 = 0

Under the assumption that V11 and V22 are invertible these imply

W11 = V11^{−1} − V11^{−1} V12 W21 ,    W21 = −V22^{−1} V21 W11

and comparing this with the 1,1-block of V^{−1} from Lemma 16.18 gives

( V11 − V12 V22^{−1} V21 )^{−1} = V11^{−1} + V11^{−1} V12 ( V22 − V21 V11^{−1} V12 )^{−1} V21 V11^{−1}
K = −B̂^T (Â^T)^n [ Σ_{k=0}^{n} Â^k B̂ B̂^T (Â^T)^k ]^{−1} Â^{n+1}

That is,

K = −α B^T (α A^T)^n [ Σ_{k=0}^{n} (αA)^k (αB)(αB)^T (αA^T)^k ]^{−1} (αA)^{n+1}

  = −B^T (A^T)^n [ Σ_{k=0}^{n} α^{−2(n−k)} A^k B B^T (A^T)^k ]^{−1} A^{n+1}
Solution 28.4 Similar to Solution 13.11. However for the time-invariant case the reachability matrix rank test
can be used, rather than the eigenvector test, by writing
I KB KAB+(KB)2 . . .
0 0 I
. . . .
. . . .
. . . .
Solution 28.8 Supposing that the linear state equation is reachable, there exists K such that all eigenvalues of
A+BK have magnitude less than unity. Therefore (I−A−BK) is invertible, and if we suppose
[ A−I   B ; C   0 ]
is invertible, then C (I−A−BK)−1 B is invertible from Exercise 28.6. Then given any diagonal, m × m matrix Λ, we
can choose
N = [C (I−A−BK)−1 B ]−1 Λ
to obtain Ĝ(1) = Λ. For this closed-loop system, any x (0) and any constant input R (k) = ro yields
lim y (k) = Λro
k→∞
by the final value theorem. That is, the steady-state value of the response to constant inputs is ‘noninteracting.’
(For finite time values, or other inputs, interaction typically occurs.)
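As a numerical aside, the steady-state decoupling can be checked on made-up data; the stabilizing gain K below is simply taken as given rather than computed:

    import numpy as np

    # Made-up data: A + BK has all eigenvalues inside the unit circle for this K.
    A = np.array([[0.5, 1.0], [0.0, 1.2]])
    B = np.eye(2)
    C = np.array([[1.0, 1.0], [0.0, 1.0]])
    K = np.array([[0.0, -1.0], [0.0, -1.0]])          # A + BK = [[0.5, 0.0], [0.0, 0.2]]

    I = np.eye(2)
    Lam = np.diag([2.0, 3.0])                         # desired steady-state gain
    N = np.linalg.inv(C @ np.linalg.inv(I - A - B @ K) @ B) @ Lam

    # Closed-loop DC gain G_hat(1) = C (I - A - BK)^{-1} B N should equal Lam.
    print(np.allclose(C @ np.linalg.inv(I - A - B @ K) @ B @ N, Lam))   # True

    # Steady state of x(k+1) = (A+BK)x(k) + BN*ro, y = Cx, for a constant ro:
    ro = np.array([1.0, -1.0])
    xss = np.linalg.solve(I - A - B @ K, B @ N @ ro)
    print(np.allclose(C @ xss, Lam @ ro))             # True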
CHAPTER 29
Therefore if J (k) is bounded, that is, J (k) ≤ α < ∞ for all k, then eb (k) → 0 implies e (k) → 0, as k → ∞, and
x̂(k) is an asymptotic estimate of x (k).
[ xa(k+1) ; xb(k+1) ] = [ F11(k)   F12(k) ; F21(k)   F22(k) ] [ xa(k) ; xb(k) ] + [ G1(k) ; G2(k) ] u(k)

y(k) = [ Ip   0 ] [ xa(k) ; xb(k) ]
With

Pb(k) = [ −H̃(k)   I_{n−p} ]

we have

[ C(k) ; Pb(k) ]^{−1} = [ Ip   0 ; −H̃(k)   I_{n−p} ]^{−1} = [ Ip   0 ; H̃(k)   I_{n−p} ]
= [ Ip ; H̃(k) ] [ Ip   0 ] [ xa(k) ; xb(k) ] + [ 0 ; I_{n−p} ] z(k)
= [ xa(k) ; H̃(k) xa(k) + z(k) ]

Therefore

x̂a(k) = xa(k)
x̂b(k) = H̃(k) xa(k) + z(k)

where

z(k+1) = F̃(k) z(k) + G̃a(k) u(k) + G̃b(k) y(k)

This is exactly the same as the reduced-dimension observer in the text.
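As an aside, a scalar-measurement version of such an observer is easy to simulate; the formulas for F̃, G̃a, G̃b below are the standard ones obtained by forcing eb(k+1) = (A22 − H̃A12) eb(k), since the text's definitions are not repeated here, and all numerical values are made up:

    import numpy as np

    # Made-up plant with y(k) = x_a(k): the first state is measured, the second is estimated.
    A11, A12 = 0.9, 0.1
    A21, A22 = 0.2, 0.7
    B1, B2 = 1.0, 0.5
    H = 2.0                                   # constant gain playing the role of H~(k); A22 - H*A12 = 0.5

    F_t  = A22 - H * A12                      # F tilde
    Ga_t = B2 - H * B1                        # G_a tilde
    Gb_t = F_t * H + A21 - H * A11            # G_b tilde

    xa, xb, z = 1.0, -2.0, 0.0                # true state and observer state
    for k in range(30):
        u = np.sin(0.3 * k)
        y = xa                                # measurement at time k
        z_next = F_t * z + Ga_t * u + Gb_t * y
        xa, xb = A11 * xa + A12 * xb + B1 * u, A21 * xa + A22 * xb + B2 * u
        z = z_next
    xb_hat = H * xa + z                       # estimate of x_b at the final time (y = xa)
    print(abs(xb - xb_hat))                   # small: the error decays like 0.5^k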