
Engineering Tripos Part IIB

Module 4F2: Nonlinear Systems and Control


Examples Paper 4F2/4
Solutions

Question 1: The system has an equilibrium at x = 0. Notice that V (0) = 0 and V (x) > 0 for
all x ≠ 0. Moreover,

V̇(x) = 2a2 x1 ẋ1 + 2x2 ẋ2
     = 2a2 x1 x2 − 2x2 [a1 x2 + a2 x1 + (b1 x2 + b2 x1)² x2]
     = −2x2² [a1 + (b1 x2 + b2 x1)²]
     ≤ 0

For ε > 0 consider the compact set

S = {x ∈ R² | V(x) ≤ ε}.

Since V̇ (x) ≤ 0 for all x, S is an invariant compact set. The set

{x ∈ S | V̇ (x) = 0} = {x ∈ S | x2 = 0}

(recall that a1 > 0) contains no trajectories other than x(t) = 0 for all t. This is because
x(t) ∈ {x ∈ S | x2 = 0} for all t implies that x2(t) = 0 for all t, hence ẋ2(t) = 0 for all t;
since ẋ2 = −a2 x1 when x2 = 0 and a2 > 0, it follows that x1(t) = 0 for all t. By LaSalle’s
Theorem x = 0 is locally asymptotically stable and all trajectories starting in S converge to it.
Since ε can be chosen arbitrarily large, the domain of attraction of x = 0 is the whole of R² and
x = 0 is globally asymptotically stable.
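The derivative computation above can be cross-checked symbolically. Below is a minimal SymPy sketch, assuming (as the algebra above implies, though not quoted from the question sheet) that the dynamics are ẋ1 = x2, ẋ2 = −a2 x1 − (a1 + (b1 x2 + b2 x1)²) x2 and that V(x) = a2 x1² + x2².

# Minimal SymPy sketch: check that Vdot reduces to -2*x2**2*(a1 + (b1*x2 + b2*x1)**2)
# for the assumed dynamics and Lyapunov function (both inferred from the solution above).
import sympy as sp

x1, x2, a1, a2, b1, b2 = sp.symbols('x1 x2 a1 a2 b1 b2', real=True)

f1 = x2                                          # assumed: x1' = x2
f2 = -a2*x1 - (a1 + (b1*x2 + b2*x1)**2)*x2       # assumed: x2' = -a2*x1 - (a1 + (...)^2)*x2
V = a2*x1**2 + x2**2                             # assumed Lyapunov function

Vdot = sp.diff(V, x1)*f1 + sp.diff(V, x2)*f2
print(sp.simplify(Vdot + 2*x2**2*(a1 + (b1*x2 + b2*x1)**2)))   # prints 0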

Question 2: Let V(x) = f(x)ᵀPf(x). Since x = 0 is an equilibrium, f(0) = 0 and hence
V(0) = 0. Moreover, V(x) > 0 for all x ∈ ∆ with x ≠ 0. This is because P = Pᵀ > 0 and x = 0
is assumed to be the only equilibrium in ∆, therefore f(x) ≠ 0 for all x ∈ ∆ with x ≠ 0.

V̇(x) = ḟ(x)ᵀPf(x) + f(x)ᵀPḟ(x)
     = ẋᵀJ(x)ᵀPf(x) + f(x)ᵀPJ(x)ẋ
     = f(x)ᵀJ(x)ᵀPf(x) + f(x)ᵀPJ(x)f(x)
     = f(x)ᵀ[J(x)ᵀP + PJ(x)]f(x)
     < 0 for all x ∈ ∆ with x ≠ 0.

This is because −[J(x)ᵀP + PJ(x)] > 0 and x = 0 is the only equilibrium in ∆. Applying
Lyapunov’s Asymptotic Stability Theorem (Theorem 5 in Handout 2) with S = ∆ (assuming ∆
is open) we can conclude that x = 0 is locally asymptotically stable.
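As a concrete illustration (not part of the question), the condition can be checked numerically for a specific system. The sketch below uses the hypothetical example f(x) = (−x1 + x2, −x1 − x2 − x2³) with P = I, and simply samples J(x)ᵀP + PJ(x) over a grid.

# Numerical illustration of the Krasovskii-type condition for a hypothetical system
# f(x) = (-x1 + x2, -x1 - x2 - x2**3) with P = I (example chosen for this sketch only).
import numpy as np

P = np.eye(2)

def jacobian(x1, x2):
    return np.array([[-1.0, 1.0],
                     [-1.0, -1.0 - 3.0*x2**2]])

worst = -np.inf
for x1 in np.linspace(-2, 2, 41):
    for x2 in np.linspace(-2, 2, 41):
        J = jacobian(x1, x2)
        worst = max(worst, np.linalg.eigvalsh(J.T @ P + P @ J).max())

print(worst)   # about -2: negative, so J(x)^T P + P J(x) < 0 at every grid point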

Question 3: (i) One can show that P = Pᵀ > 0 is equivalent to all eigenvalues of P being real
and greater than zero. Therefore P is invertible (i.e. P⁻¹ exists). We want to show that for all
x ≠ 0, xᵀP⁻¹x > 0. Let y = P⁻¹x and notice that if x ≠ 0 then y ≠ 0. Then

xᵀP⁻¹x = (Py)ᵀP⁻¹(Py) = yᵀPP⁻¹Py = yᵀPy > 0

(ii) xᵀ(P + Q)x = xᵀPx + xᵀQx. xᵀPx > 0 for x ≠ 0 since P > 0. xᵀQx ≥ 0 since Q ≥ 0.
Therefore, P + Q > 0.

(iii) We want to show xᵀ(PQP)x > 0 for all x ≠ 0. Let y = Px and notice that y ≠ 0 if x ≠ 0.

xᵀ(PQP)x = (Px)ᵀQ(Px) = yᵀQy > 0.

(iv) Let y = Rx. Then

xᵀ(RᵀPR)x = (Rx)ᵀP(Rx) = yᵀPy ≥ 0

Notice that in this case we can only assert that RᵀPR ≥ 0 but not that RᵀPR > 0. This is
because it may be possible for y = Rx = 0 even if x ≠ 0.

(v) Let

x = [y; z].

Then

xᵀ [P 0; 0 Q] x = yᵀPy + zᵀQz ≥ 0.
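A quick numerical illustration of parts (i)–(v), using randomly generated matrices; this is only a sanity check of the signs, not a proof, and the matrix sizes and random seed are arbitrary choices.

# Numerical sanity check of the Question 3 facts (illustration only).
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
P = A @ A.T + n*np.eye(n)          # symmetric positive definite
B = rng.standard_normal((n, n))
Q = B @ B.T                        # symmetric positive (semi)definite
R = rng.standard_normal((n, 2*n))  # wide matrix, so R^T P R has a null space

def min_eig(M):
    return np.linalg.eigvalsh(M).min()

print(min_eig(np.linalg.inv(P)))   # (i)   > 0
print(min_eig(P + Q))              # (ii)  > 0
print(min_eig(P @ Q @ P))          # (iii) > 0 here (this Q happens to be positive definite)
print(min_eig(R.T @ P @ R))        # (iv)  >= 0: essentially zero (up to rounding)
M = np.block([[P, np.zeros((n, n))], [np.zeros((n, n)), Q]])
print(min_eig(M))                  # (v)   >= 0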

Question 4: State equations are

ẋ1 = x2
ẋ2 = −2x1 − 3x2 + u

or, in matrix form,

[ẋ1; ẋ2] = [0 1; −2 −3][x1; x2] + [0; 1]u = Ax + bu.

The Lyapunov matrix equation AᵀP + PA = −I becomes

[0 −2; 1 −3][p11 p12; p12 p22] + [p11 p12; p12 p22][0 1; −2 −3] = [−1 0; 0 −1]

(recall that P = Pᵀ, therefore p12 = p21).

−2p12 − 2p12 = −1 (from the (1,1) element)


−2p22 + p11 − 3p12 = 0 (from the (1,2) element)
p12 − 3p22 + p12 − 3p22 = −1 (from the (2,2) element)

(the equation for the (2,1) element is the same as that for the (1,2) element). The first equation
leads to p12 = 1/4. Substituting p12 into the last equation leads to p22 = 1/4. Substituting p12
and p22 into the second equation leads to p11 = 5/4. Overall,
P = [5/4 1/4; 1/4 1/4].

The eigenvalues of A are −1 and −2. Therefore, by the Linear Lyapunov Stability Theorem
(Theorem 5, Handout 2) P > 0. (This is also easy to check directly by computing the eigenvalues
of P itself).
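The hand calculation can be verified numerically; a minimal SciPy sketch is given below. Note that scipy.linalg.solve_continuous_lyapunov(a, q) solves a X + X aᵀ = q, so Aᵀ is passed as the first argument to obtain AᵀP + PA = −I.

# Numerical check of the Lyapunov equation A^T P + P A = -I for A = [[0, 1], [-2, -3]].
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

P = solve_continuous_lyapunov(A.T, -np.eye(2))   # solves A^T P + P A = -I

print(P)                        # [[1.25, 0.25], [0.25, 0.25]], i.e. [5/4 1/4; 1/4 1/4]
print(np.linalg.eigvalsh(P))    # both positive, so P > 0
print(np.linalg.eigvals(A))     # -1 and -2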

Now let u = −U sign(bᵀPx) and consider V(x) = xᵀPx. Clearly V(0) = 0 and, since P = Pᵀ > 0,
V(x) > 0 for all x ≠ 0. Moreover,

V̇(x) = ẋᵀPx + xᵀPẋ
     = (Ax + bu)ᵀPx + xᵀP(Ax + bu)
     = xᵀ(AᵀP + PA)x + 2u bᵀPx
     = −xᵀx − 2U(bᵀPx) sign(bᵀPx)
     ≤ −‖x‖²

since (bᵀPx) sign(bᵀPx) ≥ 0. Therefore, V̇(x) < 0 for all x ≠ 0. Applying Lyapunov’s Asymp-
totic Stability Theorem (Theorem 5, Handout 2) shows that x = 0 is locally asymptotically
stable. Since the above calculation holds for all x (i.e. S = R²), x = 0 is globally asymptotically
stable.
In fact, it can be shown that with this nonlinear feedback law the state reaches x = 0 in finite
time for some initial conditions. Such performance could never be achieved by a linear feedback
law; with linear feedback the solutions will approach x = 0 in infinite time (asymptotically).
Notice that the feedback law u = −U sign(bᵀPx) is discontinuous as a function of x. Therefore,
the closed loop system may violate the existence-uniqueness conditions of Theorem 1, Handout
1. In certain cases sliding may be observed (cf. Question 8, Examples Paper 4F2/3).
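For illustration, the closed loop can be simulated directly. The sketch below assumes U = 1 and an arbitrary initial condition (neither is specified above), and uses a small fixed Euler step precisely because the discontinuous right-hand side can produce the chattering/sliding just mentioned, which trips adaptive ODE solvers.

# Rough closed-loop simulation sketch with u = -U*sign(b^T P x); U and x(0) are
# illustrative choices, not values taken from the question.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([0.0, 1.0])
P = np.array([[1.25, 0.25], [0.25, 0.25]])
U = 1.0

x = np.array([2.0, -1.0])          # arbitrary initial condition
dt, T = 1e-4, 10.0
for _ in range(int(T/dt)):
    u = -U*np.sign(b @ P @ x)
    x = x + dt*(A @ x + b*u)       # forward Euler step

print(np.linalg.norm(x))           # close to zero: the state is driven to the origin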

Question 5: Consider e(t) = E sin(ωt) and u(t) = f (e(t)) = f (E sin(ωt)).


If E ≤ δ, then u(t) = 0 for all t, therefore N (E) = 0.
If E > δ, u(t) = 0 for |ωt| < sin⁻¹(δ/E), |π − ωt| < sin⁻¹(δ/E), etc. and u(t) = ±R otherwise.
Therefore
U1 = (1/π) ∫₀^{2π} u(t) sin θ dθ                                   (where θ = ωt)
   = (4/π) ∫₀^{π/2} u(t) sin θ dθ                                  (by the odd and quarter-wave symmetry of u)
   = (4/π) [ ∫₀^{sin⁻¹(δ/E)} 0 dθ + ∫_{sin⁻¹(δ/E)}^{π/2} R sin θ dθ ]
   = (4R/π) [−cos θ]_{sin⁻¹(δ/E)}^{π/2}
   = (4R/π) cos(sin⁻¹(δ/E))
   = (4R/π) √(1 − (δ/E)²).

Since the nonlinearity introduces no phase shift, V1 = 0. Therefore,

N(E) = 0                              if E ≤ δ
N(E) = (4R/(πE)) √(1 − (δ/E)²)        if E > δ.

A sketch of N(E) against E is zero up to E = δ, rises to a single maximum, and then decays towards zero as E → ∞.

To locate the maximum, differentiate N(E) with respect to E. After a few lines of algebra this
leads to

dN/dE = 4R(2δ² − E²) / (πE⁴ √(1 − (δ/E)²))

which is equal to zero when E = δ√2. The maximum value is therefore

N(δ√2) = 2R/(πδ).
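The closed-form N(E) can be cross-checked by evaluating the defining integral numerically. A minimal sketch, assuming the example values R = 1 and δ = 0.5 (arbitrary illustration choices):

# Numerical cross-check of the dead-zone relay describing function from Question 5.
import numpy as np

R, delta = 1.0, 0.5

def relay_deadzone(e):
    # output R for |e| > delta (with the sign of e), zero inside the dead zone
    return np.where(np.abs(e) > delta, R*np.sign(e), 0.0)

def N_numeric(E, n=20000):
    theta = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    u = relay_deadzone(E*np.sin(theta))
    U1 = (2.0/n)*np.sum(u*np.sin(theta))     # (1/pi) * integral, evaluated via the mean
    return U1/E

def N_formula(E):
    return 0.0 if E <= delta else (4*R/(np.pi*E))*np.sqrt(1 - (delta/E)**2)

for E in [0.4, 0.8, delta*np.sqrt(2), 3.0]:
    print(E, N_numeric(E), N_formula(E))      # the two columns agree
print(2*R/(np.pi*delta))                      # maximum value N(delta*sqrt(2)) = 2R/(pi*delta)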
Question 6: (a) e = E sin(ωt) and
N1(E) = (1/(πE)) [ ∫₀^{2π} f1(e) sin θ dθ + j ∫₀^{2π} f1(e) cos θ dθ ]

(writing θ = ωt). Therefore

N1(E) = (1/(πE)) [ ∫₀^{2π} (f2(e) + f3(e)) sin θ dθ + j ∫₀^{2π} (f2(e) + f3(e)) cos θ dθ ]
      = (1/(πE)) [ ∫₀^{2π} f2(e) sin θ dθ + j ∫₀^{2π} f2(e) cos θ dθ ]
        + (1/(πE)) [ ∫₀^{2π} f3(e) sin θ dθ + j ∫₀^{2π} f3(e) cos θ dθ ]
      = N2(E) + N3(E).

(b) Clearly part (a) can be generalised to “if f(e) = f1(e) + f2(e) + · · · + fN(e) then N(E) =
N1(E) + N2(E) + · · · + NN(E)”. With this in mind, notice that the A-D converter characteristic
can be written as

f(e) = Σ_{i=1}^{N} fi(e)

where

fi(e) = −δ   if e < −(2i−1)δ/2
fi(e) = 0    if |e| ≤ (2i−1)δ/2
fi(e) = δ    if e > (2i−1)δ/2.

From Question 5,

Ni(E) = 0                                        if E ≤ (2i−1)δ/2
Ni(E) = (4δ/(πE)) √(1 − ((2i−1)δ/(2E))²)         if E > (2i−1)δ/2.

Hence

N(E) = 0                                                     if E ≤ δ/2
N(E) = (4δ/(πE)) Σ_{i=1}^{k} √(1 − ((2i−1)δ/(2E))²)          if (2k−1)δ/2 < E ≤ (2k+1)δ/2, for k = 1, 2, . . . , N − 1
N(E) = (4δ/(πE)) Σ_{i=1}^{N} √(1 − ((2i−1)δ/(2E))²)          if E > (2N−1)δ/2.
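Numerically, the sum formula can be compared with a direct evaluation of the describing-function integral for the quantizer itself. The sketch below assumes δ = 1 and N = 3 levels (arbitrary illustration values) and models the A-D characteristic as a uniform quantizer with step δ saturating at ±Nδ, i.e. the sum of the fi above.

# Numerical check that the quantizer describing function equals the sum of
# dead-zone relay describing functions (illustrative parameters only).
import numpy as np

delta, Nlev = 1.0, 3

def quantizer(e):
    # uniform quantizer with step delta, saturating at +/- Nlev*delta
    return np.clip(np.round(e/delta), -Nlev, Nlev)*delta

def N_numeric(E, n=20000):
    theta = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    u = quantizer(E*np.sin(theta))
    return (2.0/n)*np.sum(u*np.sin(theta))/E

def N_sum(E):
    total = 0.0
    for i in range(1, Nlev + 1):
        thresh = (2*i - 1)*delta/2
        if E > thresh:
            total += (4*delta/(np.pi*E))*np.sqrt(1 - (thresh/E)**2)
    return total

for E in [0.3, 1.0, 2.0, 5.0]:
    print(E, N_numeric(E), N_sum(E))   # the two columns agree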

Question 7:
(a) From Handout 3, the fundamental component of (E sin θ)⁵ is

U1 sin θ = (1/2⁴) (5 choose 2) (−1)⁰ E⁵ sin θ = (10/16) E⁵ sin θ

so

N(E) = U1/E = (5/8) E⁴
(b) Any continuous function can be approximated by a polynomial over some interval (Weier-
strass theorem). Suppose that we know that the magnitude of the nonlinearity input e cannot
exceed some value Umax. Then we could approximate u = f(e) by a polynomial of degree n over the range |e| ≤ Umax, and
use the result of Question 6 to approximate the describing function as a weighted sum of de-
scribing functions of the monomial nonlinearities e^k for k = 0, 1, . . . , n.
NB: This idea may not be practical.
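A quick numerical cross-check of part (a), assuming nothing beyond the formula derived above: evaluate U1 = (1/π)∫₀^{2π} (E sin θ)⁵ sin θ dθ by numerical quadrature and compare U1/E with (5/8)E⁴.

# Numerical cross-check of N(E) = (5/8) E^4 for the quintic nonlinearity u = e^5.
import numpy as np

def N_numeric(E, n=20000):
    theta = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    u = (E*np.sin(theta))**5
    return (2.0/n)*np.sum(u*np.sin(theta))/E   # (1/(pi*E)) * integral

for E in [0.5, 1.0, 2.0]:
    print(N_numeric(E), (5.0/8.0)*E**4)        # the two values agree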

Question 8: We look for an intersection of g(jω) with −1/N (E). From Question 5, we know
that N (E) is real and 0 ≤ N (E) ≤ 8/π. So −1/N (E) takes values between −∞ and −π/8.

To find the intersection point, look for the frequency where g(jω) becomes real and negative.
g(jω) = 20/(jω(jω + 1)(jω + 2)) = 20/(jω(2 − ω²) − 3ω²)

The imaginary part is equal to zero when ω = √2. At this frequency g(j√2) = −10/3.
Since −10/3 < −π/8 there is an intersection between g(jω) and −1/N (E), so a limit cycle
exists.
In fact, there are 2 intersections, at two different values of E. Only the larger of the two values
of E (the one with E > √2/4) will lead to a stable limit cycle (see the discussion in Handout
3). To find this value we have to solve

N(E) = (4/(πE)) √(1 − (1/(4E))²) = 3/10.

This can be solved by iteration, or as a quadratic in E². A shortcut in this case is to set
1/(4E) = sin(φ) (a valid substitution if E > √2/4). The equation then becomes

(16/π) sin(φ) cos(φ) = (8/π) sin(2φ) = 3/10.

Hence, φ = 0.059 and E = 4.239.
So a limit cycle oscillation exists, with frequency √2 rad/sec, and amplitude 4.239 at the output
of the linear element.
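The intersection can also be found numerically. The sketch below assumes the dead-zone relay parameters R = 1, δ = 1/4, which are the values implied by the maximum 8/π and the formula N(E) = (4/(πE))√(1 − (1/(4E))²) used above.

# Numerical check of the Question 8 limit cycle prediction.
import numpy as np
from scipy.optimize import brentq

def g(w):
    return 20.0/(1j*w*(1j*w + 1)*(1j*w + 2))

w_star = brentq(lambda w: g(w).imag, 1.0, 2.0)   # frequency where Im g(jw) = 0
print(w_star, np.sqrt(2))                        # ~1.414 in both cases
print(g(w_star).real)                            # ~ -10/3

def N(E):
    return (4.0/(np.pi*E))*np.sqrt(1 - (1/(4*E))**2)

E_star = brentq(lambda E: N(E) - 0.3, 1.0, 20.0) # larger root of N(E) = 3/10
print(E_star)                                    # ~ 4.24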

Question 9: To check if G(s) is strictly positive real it is sufficient to check that (i) G(s) has
no poles with non-negative real parts or at ∞, and (ii) the real part of G(jω) is positive for all ω.

(a) No poles with Re(s) ≥ 0 and

G(jω) = (1 − jωT)/(1 + ω²T²).
Therefore, strictly positive real.

(b) Unstable pole at s = 1/T > 0, therefore not positive real.

(c) Stable, but not asymptotically stable. G(jω) = −j/ω, therefore Re(G(jω)) = 0. Finally

lim_{s→0} sG(s) = 1

i.e. positive residue at s = 0. Hence positive real.

(d) Asymptotically stable. The Nyquist diagram of G(jω) lies entirely in the open right half-plane,
i.e. Re(G(jω)) > 0 for all ω. Therefore, strictly positive real.

(e) Stable but not asymptotically stable.

G(jω) = jω ωn² / (ωn² − ω²).

Therefore, Re(G(jω)) = 0.

lim_{s→jωn} (s − jωn)G(s) = ωn²/2 > 0

Positive residue at s = jωn (similar calculation for s = −jωn). Hence, positive real.
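As a small numerical illustration of part (a), assuming G(s) = 1/(1 + sT) (the form implied by the expression for G(jω) above) and an arbitrary T = 0.5:

# Check that Re G(jw) stays strictly positive for G(s) = 1/(1 + sT), T = 0.5.
import numpy as np

T = 0.5
w = np.logspace(-3, 3, 1000)
G = 1.0/(1.0 + 1j*w*T)
print(G.real.min() > 0)     # True: Re G(jw) = 1/(1 + w^2 T^2) > 0 for all w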

Questions 10, 11, 12: Handwritten solutions.

J. Lygeros
Revised by J. Maciejowski March 2011
