ECE/MAE 7360
Optimal and Robust Control
(Fall 2003 Offering)
Instructor: Dr. YangQuan Chen, CSOIS, ECE Dept.
Tel.: (435) 797-0148
Email: yqchen@ieee.org or yqchen@ece.usu.edu
Control Systems Area
Fall'03 Course Offering
ECE/MAE 7360 Optimal and Robust Control. Advanced methods of control system
analysis and design. Operator approaches to optimal control, including LQR/LQG/LTR,
mu-analysis, H-infinity loop shaping, the gap metric, etc. Prerequisite: ECE 6320 or
instructor approval. (3 cr) (alternate Fall)
Day/Time/Venue: MW 2:30–3:45 PM, EL112 (Control Lab)
Instructor: Dr. YangQuan Chen, CSOIS, ECE Dept., (435) 797-0148
Text: Kemin Zhou with John Doyle, Essentials of Robust Control, Prentice-Hall, 1998.
Course Description: Robust control is concerned with designing control systems when
there is uncertainty about the model of the system to be controlled, or when there are
(possibly uncertain) external disturbances influencing the behavior of the system.
Optimal control is concerned with designing control systems to achieve a prescribed
performance (e.g., finding a controller for a given linear system that minimizes a
quadratic cost function). While optimal control theory was originally derived using the
techniques of the calculus of variations, most robust control methodologies have been
developed from an operator-theoretic perspective. In this course we will mainly use an
operator approach to study the basic results in robust control that have been developed
over the last fifteen years. Mathematical-programming-based techniques for solving
optimal control problems will also be briefly covered. This course provides a unified
treatment of multivariable control system design for systems subject to uncertainty and
performance requirements.
Course Topics:
1. Review of multivariable linear control theory and balanced model realization/reduction.
2. Signal/system norms, H∞/H2 spaces, and internal stability.
3. Performance specification and limitations.
4. Modeling uncertainty and robustness.
5. LFT and mu synthesis.
6. Parameterization of controllers.
7. H2 optimal control (LQR/Kalman filter/LQG/LTR).
8. H∞ optimal control (for unstructured perturbations).
9. Gap metric.
10. Solving optimal control problems numerically.
ECE/MAE 7360: Optimal and Robust Control
Course Syllabus – Fall 2003
From http://www.ece.usu.edu/academics/graduate_courses.html
***7360. Optimal and Robust Control.
Advanced methods of control system analysis and design. Operator approaches to optimal
control, including LQR, LQG, and L1 optimization techniques. Robust control theory, including
QFT, H-infinity, and interval polynomial approaches. Prerequisite: ECE/MAE 6320 or
instructor approval. Also taught as MAE 7360. (3 cr) (Sp)
Instructor: YangQuan Chen, Center for Self-Organizing and Intelligent Systems,
Department of Electrical and Computer Engineering, Utah State University
Room EL152; Tel. (435) 797-0148, yqchen@ece.usu.edu
Lecture Day/Time/Venue: MW 2:30–3:45 PM, EL112 (Control Lab)
Office Hours: MW 1:15–2:30 PM
Text: Kemin Zhou with John Doyle, Essentials of Robust Control, Prentice-Hall, 1998.
References: Will be given by the Instructor via email/web/ftp.
Software: (1) MATLAB Control Systems Toolbox; (2) MATLAB mu-Synthesis Toolbox;
(3) RIOTS_95, a MATLAB toolbox for solving general optimal control problems.
Course Requirements:
Homework 40 points
Midterm take home exam 10 points
Focused Individual Study Project/presentation 10 points
Design project 40 points
Notes:
1. The course will follow the outline on the next page.
2. The course will cover material from most chapters of the text as well as some materials taken
from the instructor's notes.
3. The course will be conducted as follows:
a) There will be lectures by the instructor on most Mondays/Wednesdays
b) Homework or project assignments will be given, via email, on the weekly basis
normally on Wednesday. The due is by the end of the next Wednesday.
c) There will be a midterm takehome exam.
d) For each student, a focused individual study project (FISP) is to be done with a
literature survey and a class presentation. Topics can be chosen by the individual
student, subject to the approval of the Instructor.
e) There are four design projects in total, using MATLAB Simulink, the Control
Systems Toolbox, the mu-Synthesis Toolbox, and the RIOTS_95 Toolbox. The
Instructor will provide a free student edition of RIOTS_95 (worth $99.00) for
solving general optimal control problems.
f) There is no final exam.
Course Description:
Robust control is concerned with designing control systems when there is uncertainty
about the model of the system to be controlled, or when there are (possibly uncertain)
external disturbances influencing the behavior of the system. Optimal control is concerned
with designing control systems to achieve a prescribed performance (e.g., finding a
controller for a given linear system that minimizes a quadratic cost function). While
optimal control theory was originally derived using the techniques of the calculus of
variations, most robust control methodologies have been developed from an
operator-theoretic perspective. In this course we will mainly use an operator approach to
study the basic results in robust control that have been developed over the last fifteen
years. Mathematical-programming-based techniques for solving optimal control problems
will also be briefly covered. This course provides a unified treatment of multivariable
control system design for systems subject to uncertainty and performance requirements.
Course Topics and Approximate Schedule:
Course Topics:
1. Review of multivariable linear control theory and balanced model realization/reduction.
2. Signal/system norms, H2/H∞ spaces, and internal stability.
3. Performance specification and limitations.
4. Modeling uncertainty and robustness.
5. LFT and mu synthesis.
6. Parameterization of controllers.
7. H2 optimal control (LQR/Kalman filter/LQG/LTR).
8. H∞ optimal control (for unstructured perturbations).
9. Gap metric.
10. Solving optimal control problems numerically.
Week 1: Mon Aug. 25 – Chapter 1: Introduction/linear algebra. Wed Aug. 27 – Chapters 2–3: Review of linear system theory. [HW#1]
Week 2: Mon Sept. 1 – No class (Labor Day). Wed Sept. 3 – No class (ASME DETC'03 Conference).
Week 3: Mon Sept. 8 – Chapters 4–5: Norms, stability. Wed Sept. 10 – Chapter 6: Performance specs/limitations. [HW#2]
Week 4: Mon Sept. 15 – Chapter 6: More on performance limitations. Wed Sept. 17 – Chapter 7: Balanced model reduction. [Project #1: Inverted pendulum control revisited]
Week 5: Mon Sept. 22 – Chapter 8: Modeling uncertainty. Wed Sept. 24 – Chapter 9: LFT (linear fractional transformation). [HW#3]
Week 6: Mon Sept. 29 – Chapter 10: mu and mu synthesis. Wed Oct. 1 – Chapter 10: More on mu. [HW#4]
Week 7: Mon Oct. 6 – Chapter 11: Controller parameterization (Youla parameterization). Wed Oct. 8 – Chapters 12–13: LQR/H2 control. [Project #2: Space shuttle robustness analysis (stability and performance)]
Week 8: Mon Oct. 13 – Lecturer's notes: LQG/LTR. Wed Oct. 15 – Chapter 14: H-infinity control. [HW#5]
Week 9: Mon Oct. 20 – Chapter 14: H-infinity control. Wed Oct. 22 – Chapter 14: H-infinity control. [Midterm take-home exam]
Week 10: Mon Oct. 27 – Chapter 15: H-infinity controller order reduction. Wed Oct. 29 – Chapter 16: H-infinity loop shaping. [HW#6]
Week 11: Mon Nov. 3 – Chapter 16: H-infinity loop shaping. Wed Nov. 5 – Chapter 16: H-infinity loop shaping. [Project #3: H-infinity control (performance) design of a highly maneuverable airplane]
Week 12: Mon Nov. 10 – Chapter 17: Gap metric. Wed Nov. 12 – Chapter 17: nu-gap metric. [HW#7]
Week 13: Mon Nov. 17 – Instructor's notes: Mathematical foundation of RIOTS_95. Wed Nov. 19 – Instructor's notes: Sample applications of RIOTS_95. [HW#8]
Week 14: Mon Nov. 24 – FISP presentations (3 students). Wed Nov. 26 – No class (Thanksgiving). [Project #4: Solving optimal control problems (you define your own OCP!) using RIOTS_95]
Week 15: Mon Dec. 1 – FISP presentations (2 students). Wed Dec. 3 – FISP presentations (2 students).
Week 16: Mon Dec. 8 – No class (IEEE CDC'03 Conference). Wed Dec. 10 – No class (IEEE CDC'03 Conference). [Email exit interview due Dec. 15]
No final exam. Everything due on Dec. 12, 12:00 PM.
Possible Topics for FISP (not limited to the following; students may propose their own topic
of interest, subject to the Instructor's approval):
1. l1 and l∞ optimal control (for rejection of unknown but bounded disturbances).
2. Structured perturbations, Kharitonov's theorem.
3. Quantitative feedback theory (QFT).
4. Linear matrix inequalities (LMI).
5. and many more …
Classical control in the 1930's and 1940's
Bode, Nyquist, Nichols, . . .
• Feedback amplifier design
• Single input, single output (SISO)
• Frequency domain
• Graphical techniques
• Emphasized design tradeoffs
– Effects of uncertainty
– Non-minimum phase systems
– Performance vs. robustness
Problems with classical control
Overwhelmed by complex systems:
• Highly coupled multiple input, multiple output systems
• Nonlinear systems
• Time-domain performance specifications
The origins of modern control theory
Early years
• Wiener (1930's – 1950's): generalized harmonic analysis, cybernetics,
filtering, prediction, smoothing
• Kolmogorov (1940's): stochastic processes
• Linear and nonlinear programming (1940's – )
Optimal control
• Bellman's Dynamic Programming (1950's)
• Pontryagin's Maximum Principle (1950's)
• Linear optimal control (late 1950's and 1960's)
– Kalman filtering
– Linear-Quadratic (LQ) regulator problem
– Stochastic optimal control (LQG)
The diversification of modern control in the 1960's and 1970's
• Applications of the Maximum Principle and optimization
– Zoom maneuver for time-to-climb
– Spacecraft guidance (e.g. Apollo)
– Scheduling, resource management, etc.
• Linear optimal control
• Linear systems theory
– Controllability, observability, realization theory
– Geometric theory, disturbance decoupling
– Pole assignment
– Algebraic systems theory
• Nonlinear extensions
– Nonlinear stability theory, small gain, Lyapunov
– Geometric theory
– Nonlinear filtering
• Extension of LQ theory to infinite-dimensional systems
• Adaptive control
Modern control application: Shuttle reentry
The problem is to control the reentry of the shuttle, from orbit to
landing. The modern control approach is to break the problem into two
pieces:
• Trajectory optimization: tremendous use of modern control principles
– State estimation (filtering) for navigation
– Bang-bang control of thrusters
– Digital autopilot
– Nonlinear optimal trajectory selection
• Flight control: primarily used classical methods with lots of simulation
– Gain-scheduled linear designs
– Uncertainty studied with ad-hoc methods
Modern control has had little impact on feedback design because it
neglects fundamental feedback tradeoffs and the role of plant uncertainty.
The 1970's and the return of the frequency domain
Motivated by the inadequacies of modern control, many researchers
returned to the frequency domain for methods for MIMO feedback control.
• British school
– Inverse Nyquist Array
– Characteristic Loci
• Singular values
– MIMO generalization of Bode gain plots
– MIMO generalization of Bode design
– Crude MIMO representations of uncertainty
• Multivariable loop-shaping and LQG/LTR
– Attempt to reconcile modern and classical methods
– Popular, but hopelessly flawed
– Too crude a representation of uncertainty
While these methods allowed modern and classical methods to be blended
to handle many MIMO design problems, it became clear that fundamentally
new methods needed to be developed to handle complex, uncertain,
interconnected MIMO systems.
Postmodern Control
• Mostly for fun. Sick of "modern control," but wanted a name equally
pretentious and self-absorbed.
• Other possible names are inadequate:
– Robust (too narrow, sounds too macho)
– Neoclassical (boring, sounds vaguely fascist)
– Cyberpunk (too nihilistic)
• Analogy with the postmodern movement in art, architecture, literature,
social criticism, philosophy of science, feminism, etc. (talk about
pretentious).
The tenets of postmodern control theory
• Theories don't design control systems, engineers do.
• The application of any methodology to real problems will require some
leap of faith on the part of the engineer (and some ad hoc fixes).
• The goal of the theoretician should be to make this leap smaller and
the ad hoc fixes less dominant.
Issues in postmodern control theory
• More connection with data
• Modeling
– Flexible signal representations and performance objectives
– Flexible uncertainty representations
– Nonlinear nominal models
– Uncertainty modeling in specific domains
• Analysis
• System identification
– Non-probabilistic theory
– System ID with plant uncertainty
– Resolving ambiguity; "uncertainty about uncertainty"
– Attributing residuals to perturbations, not just noise
– Interaction with modeling and system design
• Optimal control and filtering
– H∞ optimal control
– More general optimal control with mixed norms
– Robust performance for complex systems with structured uncertainty
Chapter 2: Linear Algebra
• linear subspaces
• eigenvalues and eigenvectors
• matrix inversion formulas
• invariant subspaces
• vector norms and matrix norms
• singular value decomposition
• generalized inverses
• semidefinite matrices
Linear Subspaces
• linear combination: $\alpha_1 x_1 + \cdots + \alpha_k x_k$, with $x_i \in \mathbb{F}^n$ and $\alpha_i \in \mathbb{F}$;
$\mathrm{span}\{x_1, x_2, \ldots, x_k\} := \{x = \alpha_1 x_1 + \cdots + \alpha_k x_k : \alpha_i \in \mathbb{F}\}$.
• $x_1, x_2, \ldots, x_k \in \mathbb{F}^n$ are linearly dependent if there exist $\alpha_1, \ldots, \alpha_k \in \mathbb{F}$,
not all zero, such that $\alpha_1 x_1 + \cdots + \alpha_k x_k = 0$; otherwise they are linearly independent.
• $\{x_1, x_2, \ldots, x_k\} \subset S$ is a basis for $S$ if $x_1, x_2, \ldots, x_k$ are linearly
independent and $S = \mathrm{span}\{x_1, x_2, \ldots, x_k\}$.
• $\{x_1, x_2, \ldots, x_k\}$ in $\mathbb{F}^n$ are mutually orthogonal if $x_i^* x_j = 0$ for all
$i \neq j$, and orthonormal if $x_i^* x_j = \delta_{ij}$.
• orthogonal complement of a subspace $S \subset \mathbb{F}^n$:
$S^\perp := \{y \in \mathbb{F}^n : y^* x = 0 \text{ for all } x \in S\}$.
• linear transformation: $A : \mathbb{F}^n \longrightarrow \mathbb{F}^m$.
• kernel or null space: $\mathrm{Ker}\,A = N(A) := \{x \in \mathbb{F}^n : Ax = 0\}$,
and the image or range of $A$ is $\mathrm{Im}\,A = R(A) := \{y \in \mathbb{F}^m : y = Ax,\ x \in \mathbb{F}^n\}$.
Let $a_i$, $i = 1, 2, \ldots, n$, denote the columns of a matrix $A \in \mathbb{F}^{m \times n}$; then
$\mathrm{Im}\,A = \mathrm{span}\{a_1, a_2, \ldots, a_n\}$.
• The rank of a matrix $A$ is defined by $\mathrm{rank}(A) = \dim(\mathrm{Im}\,A)$;
$\mathrm{rank}(A) = \mathrm{rank}(A^*)$. $A \in \mathbb{F}^{m \times n}$ is full row rank if $m \le n$ and
$\mathrm{rank}(A) = m$; $A$ is full column rank if $n \le m$ and $\mathrm{rank}(A) = n$.
• unitary matrix: $U^* U = I = U U^*$.
• Let $D \in \mathbb{F}^{n \times k}$ ($n > k$) be such that $D^* D = I$. Then there exists a
matrix $D_\perp \in \mathbb{F}^{n \times (n-k)}$ such that $[D \ \ D_\perp]$ is a unitary matrix.
• Sylvester equation: $AX + XB = C$ with $A \in \mathbb{F}^{n \times n}$, $B \in \mathbb{F}^{m \times m}$, and
$C \in \mathbb{F}^{n \times m}$ has a unique solution $X \in \mathbb{F}^{n \times m}$ if and only if
$\lambda_i(A) + \lambda_j(B) \neq 0$ for all $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$.
"Lyapunov equation": the special case $B = A^*$.
• Let $A \in \mathbb{F}^{m \times n}$ and $B \in \mathbb{F}^{n \times k}$. Then
$$\mathrm{rank}(A) + \mathrm{rank}(B) - n \le \mathrm{rank}(AB) \le \min\{\mathrm{rank}(A), \mathrm{rank}(B)\}.$$
• the trace of $A = [a_{ij}] \in \mathbb{C}^{n \times n}$:
$$\mathrm{Trace}(A) := \sum_{i=1}^{n} a_{ii}.$$
Trace has the following properties:
$\mathrm{Trace}(\alpha A) = \alpha\,\mathrm{Trace}(A)$, $\forall \alpha \in \mathbb{C},\ A \in \mathbb{C}^{n \times n}$;
$\mathrm{Trace}(A + B) = \mathrm{Trace}(A) + \mathrm{Trace}(B)$, $\forall A, B \in \mathbb{C}^{n \times n}$;
$\mathrm{Trace}(AB) = \mathrm{Trace}(BA)$, $\forall A \in \mathbb{C}^{n \times m},\ B \in \mathbb{C}^{m \times n}$.
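The trace identities above are easy to sanity-check numerically. The course software is MATLAB, but a self-contained pure-Python sketch works just as well for a quick check; the matrices and helper functions below are arbitrary illustrative choices, not from the text:

```python
# Check Trace(A + B) = Trace(A) + Trace(B) and Trace(AB) = Trace(BA)
# for two arbitrary 2x2 matrices, using small hand-rolled helpers.
def matmul(A, B):
    rows, cols, inner = len(A), len(B[0]), len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]

S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
assert trace(S) == trace(A) + trace(B)             # additivity
assert trace(matmul(A, B)) == trace(matmul(B, A))  # Trace(AB) = Trace(BA)
```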
Eigenvalues and Eigenvectors
• The eigenvalues and eigenvectors of $A \in \mathbb{C}^{n \times n}$: $\lambda \in \mathbb{C}$, $x \in \mathbb{C}^n$ with
$$Ax = \lambda x;$$
$x$ is a right eigenvector. $y$ is a left eigenvector if $y^* A = \lambda y^*$.
• eigenvalues: the roots of $\det(\lambda I - A)$.
• the spectral radius: $\rho(A) := \max_{1 \le i \le n} |\lambda_i|$.
• Jordan canonical form: for $A \in \mathbb{C}^{n \times n}$ there exists a nonsingular $T$ such that
$$A = T J T^{-1},$$
where $J = \mathrm{diag}\{J_1, J_2, \ldots, J_l\}$, $J_i = \mathrm{diag}\{J_{i1}, J_{i2}, \ldots, J_{i m_i}\}$, and
$$J_{ij} = \begin{bmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{bmatrix} \in \mathbb{C}^{n_{ij} \times n_{ij}}.$$
The transformation $T$ has the following form:
$$T = [T_1 \ T_2 \ \cdots \ T_l], \qquad T_i = [T_{i1} \ T_{i2} \ \cdots \ T_{i m_i}], \qquad T_{ij} = [t_{ij1} \ t_{ij2} \ \cdots \ t_{ij n_{ij}}],$$
where the $t_{ij1}$ are the eigenvectors of $A$,
$$A t_{ij1} = \lambda_i t_{ij1},$$
and the $t_{ijk} \neq 0$ defined by the following linear equations for $k \ge 2$,
$$(A - \lambda_i I)\, t_{ijk} = t_{ij(k-1)},$$
are called the generalized eigenvectors of $A$.
$A \in \mathbb{R}^{n \times n}$ with distinct eigenvalues can be diagonalized:
$$A\,[x_1 \ x_2 \ \cdots \ x_n] = [x_1 \ x_2 \ \cdots \ x_n] \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{bmatrix},$$
and has the following spectral decomposition:
$$A = \sum_{i=1}^{n} \lambda_i x_i y_i^*,$$
where $y_i \in \mathbb{C}^n$ is given by
$$\begin{bmatrix} y_1^* \\ y_2^* \\ \vdots \\ y_n^* \end{bmatrix} = [x_1 \ x_2 \ \cdots \ x_n]^{-1}.$$
• $A \in \mathbb{R}^{n \times n}$ with a real eigenvalue $\lambda \in \mathbb{R}$ has a real eigenvector $x \in \mathbb{R}^n$.
• If $A$ is Hermitian, i.e., $A = A^*$, then there exists a unitary $U$ such that $A = U \Lambda U^*$,
where $\Lambda = \mathrm{diag}\{\lambda_1, \lambda_2, \ldots, \lambda_n\}$ is real.
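For a small Hermitian example, the eigenvector definition and the decomposition $A = U\Lambda U^*$ can be verified directly. A minimal pure-Python sketch; the 2×2 matrix and its eigenvectors below are illustrative choices, not from the text:

```python
import math

A = [[2.0, 1.0], [1.0, 2.0]]      # Hermitian (real symmetric) example

# Eigenvalues from det(lambda*I - A) = lambda^2 - tr*lambda + det = 0.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0   # 3 and 1

# Orthonormal eigenvectors of this particular A.
x1 = [1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0)]
x2 = [1.0 / math.sqrt(2.0), -1.0 / math.sqrt(2.0)]

# Right-eigenvector definition: A x1 = lam1 x1.
Ax1 = [A[0][0] * x1[0] + A[0][1] * x1[1], A[1][0] * x1[0] + A[1][1] * x1[1]]
assert all(abs(Ax1[i] - lam1 * x1[i]) < 1e-12 for i in range(2))

# Spectral decomposition of a Hermitian matrix: A = lam1 x1 x1* + lam2 x2 x2*.
recon = [[lam1 * x1[i] * x1[j] + lam2 * x2[i] * x2[j] for j in range(2)]
         for i in range(2)]
assert all(abs(recon[i][j] - A[i][j]) < 1e-12 for i in range(2) for j in range(2))
```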
Matrix Inversion Formulas
• Block triangular factorizations:
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \begin{bmatrix} I & 0 \\ A_{21} A_{11}^{-1} & I \end{bmatrix} \begin{bmatrix} A_{11} & 0 \\ 0 & \Delta \end{bmatrix} \begin{bmatrix} I & A_{11}^{-1} A_{12} \\ 0 & I \end{bmatrix}, \qquad \Delta := A_{22} - A_{21} A_{11}^{-1} A_{12};$$
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \begin{bmatrix} I & A_{12} A_{22}^{-1} \\ 0 & I \end{bmatrix} \begin{bmatrix} \hat\Delta & 0 \\ 0 & A_{22} \end{bmatrix} \begin{bmatrix} I & 0 \\ A_{22}^{-1} A_{21} & I \end{bmatrix}, \qquad \hat\Delta := A_{11} - A_{12} A_{22}^{-1} A_{21}.$$
• Block inverses:
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} A_{11}^{-1} + A_{11}^{-1} A_{12} \Delta^{-1} A_{21} A_{11}^{-1} & -A_{11}^{-1} A_{12} \Delta^{-1} \\ -\Delta^{-1} A_{21} A_{11}^{-1} & \Delta^{-1} \end{bmatrix}$$
and
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} \hat\Delta^{-1} & -\hat\Delta^{-1} A_{12} A_{22}^{-1} \\ -A_{22}^{-1} A_{21} \hat\Delta^{-1} & A_{22}^{-1} + A_{22}^{-1} A_{21} \hat\Delta^{-1} A_{12} A_{22}^{-1} \end{bmatrix}.$$
For block triangular matrices,
$$\begin{bmatrix} A_{11} & 0 \\ A_{21} & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} A_{11}^{-1} & 0 \\ -A_{22}^{-1} A_{21} A_{11}^{-1} & A_{22}^{-1} \end{bmatrix}, \qquad \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} A_{11}^{-1} & -A_{11}^{-1} A_{12} A_{22}^{-1} \\ 0 & A_{22}^{-1} \end{bmatrix}.$$
• $\det A = \det A_{11} \det(A_{22} - A_{21} A_{11}^{-1} A_{12}) = \det A_{22} \det(A_{11} - A_{12} A_{22}^{-1} A_{21})$.
In particular, for any $B \in \mathbb{C}^{m \times n}$ and $C \in \mathbb{C}^{n \times m}$, we have
$$\det \begin{bmatrix} I_m & B \\ -C & I_n \end{bmatrix} = \det(I_n + CB) = \det(I_m + BC),$$
and for $x, y \in \mathbb{C}^n$,
$$\det(I_n + x y^*) = 1 + y^* x.$$
• matrix inversion lemma:
$$(A_{11} - A_{12} A_{22}^{-1} A_{21})^{-1} = A_{11}^{-1} + A_{11}^{-1} A_{12} (A_{22} - A_{21} A_{11}^{-1} A_{12})^{-1} A_{21} A_{11}^{-1}.$$
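The matrix inversion lemma and the determinant identity can be spot-checked in the scalar (1×1 block) case, where every term is just arithmetic. A pure-Python sketch with arbitrary illustrative numbers:

```python
# Scalar sanity check of the matrix inversion lemma
# (A11 - A12 A22^{-1} A21)^{-1} = A11^{-1} + A11^{-1} A12 (A22 - A21 A11^{-1} A12)^{-1} A21 A11^{-1}.
a11, a12, a21, a22 = 5.0, 2.0, 3.0, 4.0
lhs = 1.0 / (a11 - a12 * (1.0 / a22) * a21)
rhs = (1.0 / a11) + (1.0 / a11) * a12 \
      * (1.0 / (a22 - a21 * (1.0 / a11) * a12)) * a21 * (1.0 / a11)
assert abs(lhs - rhs) < 1e-12

# det(I + x y*) = 1 + y* x for a pair of real 2-vectors.
x, y = [1.0, 2.0], [3.0, 4.0]
M = [[1.0 + x[0] * y[0], x[0] * y[1]],
     [x[1] * y[0], 1.0 + x[1] * y[1]]]
det_M = M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert abs(det_M - (1.0 + y[0] * x[0] + y[1] * x[1])) < 1e-12
```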
Invariant Subspaces
• A subspace $S \subset \mathbb{C}^n$ is an $A$-invariant subspace if $Ax \in S$ for every $x \in S$.
For example, $\{0\}$, $\mathbb{C}^n$, and $\mathrm{Ker}\,A$ are all $A$-invariant subspaces.
Let $\lambda$ and $x$ be an eigenvalue and a corresponding eigenvector of $A \in \mathbb{C}^{n \times n}$.
Then $S := \mathrm{span}\{x\}$ is an $A$-invariant subspace since $Ax = \lambda x \in S$.
In general, let $\lambda_1, \ldots, \lambda_k$ (not necessarily distinct) and $x_i$ be a set of
eigenvalues and a set of corresponding eigenvectors and generalized eigenvectors.
Then $S = \mathrm{span}\{x_1, \ldots, x_k\}$ is an $A$-invariant subspace provided that all the
lower-rank generalized eigenvectors are included.
• An $A$-invariant subspace $S \subset \mathbb{C}^n$ is called a stable invariant subspace
if all the eigenvalues of $A$ constrained to $S$ have negative real parts.
Stable invariant subspaces are used to compute the stabilizing solutions of the
algebraic Riccati equations.
• Example:
$$A\,[x_1 \ x_2 \ x_3 \ x_4] = [x_1 \ x_2 \ x_3 \ x_4] \begin{bmatrix} \lambda_1 & 1 & & \\ & \lambda_1 & & \\ & & \lambda_3 & \\ & & & \lambda_4 \end{bmatrix}$$
with $\mathrm{Re}\,\lambda_1 < 0$, $\lambda_3 < 0$, and $\lambda_4 > 0$. Then it is easy to verify that
$$S_1 = \mathrm{span}\{x_1\}, \quad S_{12} = \mathrm{span}\{x_1, x_2\}, \quad S_{123} = \mathrm{span}\{x_1, x_2, x_3\},$$
$$S_3 = \mathrm{span}\{x_3\}, \quad S_{13} = \mathrm{span}\{x_1, x_3\}, \quad S_{124} = \mathrm{span}\{x_1, x_2, x_4\},$$
$$S_4 = \mathrm{span}\{x_4\}, \quad S_{14} = \mathrm{span}\{x_1, x_4\}, \quad S_{34} = \mathrm{span}\{x_3, x_4\}$$
are all $A$-invariant subspaces. Moreover, $S_1$, $S_3$, $S_{12}$, $S_{13}$, and $S_{123}$ are
stable $A$-invariant subspaces.
However, the subspaces
$$S_2 = \mathrm{span}\{x_2\}, \quad S_{23} = \mathrm{span}\{x_2, x_3\}, \quad S_{24} = \mathrm{span}\{x_2, x_4\}, \quad S_{234} = \mathrm{span}\{x_2, x_3, x_4\}$$
are not $A$-invariant subspaces, since the lower-rank generalized eigenvector $x_1$ of $x_2$
is not in these subspaces.
To illustrate, consider the subspace $S_{23}$. It is an $A$-invariant subspace if $A x_2 \in S_{23}$.
Since
$$A x_2 = \lambda_1 x_2 + x_1,$$
$A x_2 \in S_{23}$ would require that $x_1$ be a linear combination of $x_2$ and $x_3$, but this
is impossible since $x_1$ is independent of $x_2$ and $x_3$.
Vector Norms and Matrix Norms
Let $X$ be a vector space. $\|\cdot\|$ is a norm if
(i) $\|x\| \ge 0$ (positivity);
(ii) $\|x\| = 0$ if and only if $x = 0$ (positive definiteness);
(iii) $\|\alpha x\| = |\alpha|\,\|x\|$ for any scalar $\alpha$ (homogeneity);
(iv) $\|x + y\| \le \|x\| + \|y\|$ (triangle inequality)
for any $x \in X$ and $y \in X$.
Let $x \in \mathbb{C}^n$. Then we define the vector $p$-norm of $x$ as
$$\|x\|_p := \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}, \qquad 1 \le p < \infty.$$
In particular, for $p = 1, 2, \infty$ we have
$$\|x\|_1 := \sum_{i=1}^{n} |x_i|; \qquad \|x\|_2 := \sqrt{\sum_{i=1}^{n} |x_i|^2}; \qquad \|x\|_\infty := \max_{1 \le i \le n} |x_i|.$$
The matrix norm induced by a vector $p$-norm is defined as
$$\|A\|_p := \sup_{x \neq 0} \frac{\|Ax\|_p}{\|x\|_p}.$$
In particular, for $p = 1, 2, \infty$, the corresponding induced matrix norm can be computed as
$$\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{m} |a_{ij}| \ \text{(column sum)}; \qquad \|A\|_2 = \sqrt{\lambda_{\max}(A^* A)}; \qquad \|A\|_\infty = \max_{1 \le i \le m} \sum_{j=1}^{n} |a_{ij}| \ \text{(row sum)}.$$
The Euclidean 2-norm has some very nice properties. Let $x \in \mathbb{F}^n$ and $y \in \mathbb{F}^m$.
1. Suppose $n \ge m$. Then $\|x\| = \|y\|$ iff there is a matrix $U \in \mathbb{F}^{n \times m}$
such that $x = Uy$ and $U^* U = I$.
2. Suppose $n = m$. Then $|x^* y| \le \|x\|\,\|y\|$. Moreover, the equality holds iff
$x = \alpha y$ for some $\alpha \in \mathbb{F}$ or $y = 0$.
3. $\|x\| \le \|y\|$ iff there is a matrix $\Delta \in \mathbb{F}^{n \times m}$ with $\|\Delta\| \le 1$ such that
$x = \Delta y$. Furthermore, $\|x\| < \|y\|$ iff $\|\Delta\| < 1$.
4. $\|Ux\| = \|x\|$ for any appropriately dimensioned unitary matrix $U$.
Frobenius norm:
$$\|A\|_F := \sqrt{\mathrm{Trace}(A^* A)} = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2}.$$
Let $A$ and $B$ be any matrices with appropriate dimensions. Then
1. $\rho(A) \le \|A\|$ (this is also true for the Frobenius norm and any induced matrix norm);
2. $\|AB\| \le \|A\|\,\|B\|$; in particular, this gives $\|A^{-1}\| \ge \|A\|^{-1}$ if $A$ is
invertible (this is also true for any induced matrix norm);
3. $\|UAV\| = \|A\|$ and $\|UAV\|_F = \|A\|_F$ for any appropriately dimensioned
unitary matrices $U$ and $V$;
4. $\|AB\|_F \le \|A\|\,\|B\|_F$ and $\|AB\|_F \le \|B\|\,\|A\|_F$.
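The $p$-norms and the induced 1- and ∞-norm formulas can be checked with a few lines of pure Python; the vector and matrix below are arbitrary illustrative choices:

```python
import math

def p_norm(x, p):
    # vector p-norm: (sum |x_i|^p)^{1/p}
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

def inf_norm(x):
    return max(abs(v) for v in x)

x = [3.0, -4.0]
assert p_norm(x, 1) == 7.0               # ||x||_1 = 3 + 4
assert abs(p_norm(x, 2) - 5.0) < 1e-12   # ||x||_2 = sqrt(9 + 16)
assert inf_norm(x) == 4.0                # ||x||_inf

# Induced 1-norm = max column sum; induced inf-norm = max row sum.
A = [[1.0, -2.0], [3.0, 4.0]]
a1 = max(sum(abs(A[i][j]) for i in range(2)) for j in range(2))
ainf = max(sum(abs(A[i][j]) for j in range(2)) for i in range(2))
assert a1 == 6.0 and ainf == 7.0
```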
Singular Value Decomposition
Let $A \in \mathbb{F}^{m \times n}$. There exist unitary matrices
$$U = [u_1, u_2, \ldots, u_m] \in \mathbb{F}^{m \times m}, \qquad V = [v_1, v_2, \ldots, v_n] \in \mathbb{F}^{n \times n}$$
such that
$$A = U \Sigma V^*, \qquad \Sigma = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix}, \qquad \Sigma_1 = \begin{bmatrix} \sigma_1 & 0 & \cdots & 0 \\ 0 & \sigma_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_p \end{bmatrix},$$
where
$$\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_p \ge 0, \qquad p = \min\{m, n\}.$$
Singular values are good measures of the "size" of a matrix.
Singular vectors are good indications of strong/weak input or output directions.
Note that
$$A v_i = \sigma_i u_i, \qquad A^* u_i = \sigma_i v_i, \qquad A^* A v_i = \sigma_i^2 v_i, \qquad A A^* u_i = \sigma_i^2 u_i.$$
$\overline\sigma(A) = \sigma_{\max}(A) = \sigma_1$ is the largest singular value of $A$, and
$\underline\sigma(A) = \sigma_{\min}(A) = \sigma_p$ is the smallest singular value of $A$.
Geometrically, the singular values of a matrix $A$ are precisely the lengths of the
semi-axes of the hyperellipsoid $E$ defined by
$$E = \{y : y = Ax,\ x \in \mathbb{C}^n,\ \|x\| = 1\}.$$
Thus $v_1$ is the direction in which $\|y\|$ is largest for all $\|x\| = 1$, while $v_n$ is the
direction in which $\|y\|$ is smallest for all $\|x\| = 1$.
$v_1$ ($v_n$) is the highest (lowest) gain input direction;
$u_1$ ($u_m$) is the highest (lowest) gain observing direction.
For example,
$$A = \begin{bmatrix} \cos\theta_1 & -\sin\theta_1 \\ \sin\theta_1 & \cos\theta_1 \end{bmatrix} \begin{bmatrix} \sigma_1 & \\ & \sigma_2 \end{bmatrix} \begin{bmatrix} \cos\theta_2 & -\sin\theta_2 \\ \sin\theta_2 & \cos\theta_2 \end{bmatrix}.$$
$A$ maps the unit disk to an ellipsoid with semi-axes $\sigma_1$ and $\sigma_2$.
Alternative definitions:
$$\overline\sigma(A) := \max_{\|x\|=1} \|Ax\|,$$
and for the smallest singular value $\underline\sigma$ of a tall matrix:
$$\underline\sigma(A) := \min_{\|x\|=1} \|Ax\|.$$
Suppose $A$ and $\Delta$ are square matrices. Then
(i) $|\underline\sigma(A + \Delta) - \underline\sigma(A)| \le \overline\sigma(\Delta)$;
(ii) $\underline\sigma(A\Delta) \ge \underline\sigma(A)\,\underline\sigma(\Delta)$;
(iii) $\underline\sigma(A^{-1}) = \dfrac{1}{\overline\sigma(A)}$ if $A$ is invertible.
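The max/min gain interpretation of the singular values can be checked numerically: build the 2×2 rotation–scaling–rotation example and sample $\|Ax\|$ over the unit circle. The angles and singular values below are arbitrary illustrative choices:

```python
import math

# Build A = R(th1) * diag(s1, s2) * R(th2) as in the 2x2 example above.
th1, th2, s1, s2 = 0.7, -0.3, 3.0, 0.5

def rot(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = mul(mul(rot(th1), [[s1, 0.0], [0.0, s2]]), rot(th2))

# Sample ||Ax|| over the unit circle: the extreme gains approach sigma_1, sigma_2.
gains = []
for k in range(3600):
    t = 2.0 * math.pi * k / 3600.0
    x = (math.cos(t), math.sin(t))          # ||x|| = 1
    y = (A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1])
    gains.append(math.hypot(y[0], y[1]))

assert abs(max(gains) - s1) < 1e-4          # largest singular value
assert abs(min(gains) - s2) < 1e-4          # smallest singular value
```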
Some useful properties
Let $A \in \mathbb{F}^{m \times n}$ with
$$\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > \sigma_{r+1} = \cdots = 0, \qquad r \le \min\{m, n\}.$$
Then
1. $\mathrm{rank}(A) = r$;
2. $\mathrm{Ker}\,A = \mathrm{span}\{v_{r+1}, \ldots, v_n\}$ and $(\mathrm{Ker}\,A)^\perp = \mathrm{span}\{v_1, \ldots, v_r\}$;
3. $\mathrm{Im}\,A = \mathrm{span}\{u_1, \ldots, u_r\}$ and $(\mathrm{Im}\,A)^\perp = \mathrm{span}\{u_{r+1}, \ldots, u_m\}$;
4. $A \in \mathbb{F}^{m \times n}$ has a dyadic expansion:
$$A = \sum_{i=1}^{r} \sigma_i u_i v_i^* = U_r \Sigma_r V_r^*,$$
where $U_r = [u_1, \ldots, u_r]$, $V_r = [v_1, \ldots, v_r]$, and $\Sigma_r = \mathrm{diag}(\sigma_1, \ldots, \sigma_r)$;
5. $\|A\|_F^2 = \sigma_1^2 + \sigma_2^2 + \cdots + \sigma_r^2$;
6. $\|A\| = \sigma_1$;
7. $\sigma_i(U_0 A V_0) = \sigma_i(A)$, $i = 1, \ldots, p$, for any appropriately dimensioned
unitary matrices $U_0$ and $V_0$;
8. Let $k < r = \mathrm{rank}(A)$ and $A_k := \sum_{i=1}^{k} \sigma_i u_i v_i^*$; then
$$\min_{\mathrm{rank}(B) \le k} \|A - B\| = \|A - A_k\| = \sigma_{k+1}.$$
Generalized Inverses
Let $A \in \mathbb{C}^{m \times n}$. A matrix $X \in \mathbb{C}^{n \times m}$ is a right inverse if $AX = I$; one of the
right inverses is given by $X = A^* (A A^*)^{-1}$ (when $A$ has full row rank).
Similarly, if $YA = I$ then $Y$ is a left inverse of $A$.
The pseudo-inverse (or Moore–Penrose inverse) $A^+$ satisfies:
(i) $A A^+ A = A$;
(ii) $A^+ A A^+ = A^+$;
(iii) $(A A^+)^* = A A^+$;
(iv) $(A^+ A)^* = A^+ A$.
The pseudo-inverse is unique.
Let $A = BC$, where $B$ has full column rank and $C$ has full row rank. Then
$$A^+ = C^* (C C^*)^{-1} (B^* B)^{-1} B^*.$$
Alternatively, let $A = U \Sigma V^*$ with
$$\Sigma = \begin{bmatrix} \Sigma_r & 0 \\ 0 & 0 \end{bmatrix}, \qquad \Sigma_r > 0.$$
Then $A^+ = V \Sigma^+ U^*$ with
$$\Sigma^+ = \begin{bmatrix} \Sigma_r^{-1} & 0 \\ 0 & 0 \end{bmatrix}.$$
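The four Moore–Penrose conditions can be verified directly for a small full-column-rank example, where $A^+ = (A^* A)^{-1} A^*$. A pure-Python sketch with an illustrative 2×1 matrix:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def tr(A):  # transpose (conjugation is trivial for real matrices)
    return [list(r) for r in zip(*A)]

def close(X, Y):
    return all(abs(X[i][j] - Y[i][j]) < 1e-12
               for i in range(len(X)) for j in range(len(X[0])))

A = [[1.0], [2.0]]                  # full column rank, 2x1
AtA = mul(tr(A), A)                 # 1x1 matrix [[5.0]]
Aplus = [[tr(A)[0][j] / AtA[0][0] for j in range(2)]]   # (A*A)^{-1} A* = [[0.2, 0.4]]

AAp = mul(A, Aplus)
ApA = mul(Aplus, A)
assert close(mul(AAp, A), A)            # (i)   A A+ A = A
assert close(mul(ApA, Aplus), Aplus)    # (ii)  A+ A A+ = A+
assert close(tr(AAp), AAp)              # (iii) (A A+)* = A A+
assert close(tr(ApA), ApA)              # (iv)  (A+ A)* = A+ A
```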
Semidefinite Matrices
• $A = A^*$ is positive definite (semidefinite), denoted by $A > 0$ ($\ge 0$),
if $x^* A x > 0$ ($\ge 0$) for all $x \neq 0$.
• For $A \in \mathbb{F}^{n \times n}$ with $A = A^* \ge 0$, there exists $B \in \mathbb{F}^{n \times r}$ with
$r \ge \mathrm{rank}(A)$ such that $A = B B^*$.
• Let $B \in \mathbb{F}^{m \times n}$ and $C \in \mathbb{F}^{k \times n}$. Suppose $m \ge k$ and $B^* B = C^* C$.
Then there exists $U \in \mathbb{F}^{m \times k}$ such that $U^* U = I$ and $B = UC$.
• The square root of a positive semidefinite matrix $A$ is the matrix
$A^{1/2} = (A^{1/2})^* \ge 0$ defined by
$$A = A^{1/2} A^{1/2}.$$
Clearly, $A^{1/2}$ can be computed by using the spectral decomposition or the SVD:
let $A = U \Lambda U^*$; then
$$A^{1/2} = U \Lambda^{1/2} U^*,$$
where $\Lambda = \mathrm{diag}\{\lambda_1, \ldots, \lambda_n\}$ and $\Lambda^{1/2} = \mathrm{diag}\{\sqrt{\lambda_1}, \ldots, \sqrt{\lambda_n}\}$.
• Let $A = A^* > 0$ and $B = B^* \ge 0$. Then $A > B$ iff $\rho(B A^{-1}) < 1$.
• Let $X = X^* \ge 0$ be partitioned as
$$X = \begin{bmatrix} X_{11} & X_{12} \\ X_{12}^* & X_{22} \end{bmatrix}.$$
Then $\mathrm{Ker}\,X_{22} \subset \mathrm{Ker}\,X_{12}$. Consequently, if $X_{22}^+$ is the pseudo-inverse of
$X_{22}$, then $Y = X_{12} X_{22}^+$ solves $Y X_{22} = X_{12}$, and
$$\begin{bmatrix} X_{11} & X_{12} \\ X_{12}^* & X_{22} \end{bmatrix} = \begin{bmatrix} I & X_{12} X_{22}^+ \\ 0 & I \end{bmatrix} \begin{bmatrix} X_{11} - X_{12} X_{22}^+ X_{12}^* & 0 \\ 0 & X_{22} \end{bmatrix} \begin{bmatrix} I & 0 \\ X_{22}^+ X_{12}^* & I \end{bmatrix}.$$
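The square-root construction $A^{1/2} = U \Lambda^{1/2} U^*$ can be checked on a small example. The 2×2 matrix below (eigenvalues 6 and 1, known orthonormal eigenvectors) is an illustrative choice:

```python
import math

# A has eigenvalues 6 and 1 with orthonormal eigenvectors (2,1)/sqrt(5), (1,-2)/sqrt(5).
A = [[5.0, 2.0], [2.0, 2.0]]
l1, l2 = 6.0, 1.0
u1 = [2.0 / math.sqrt(5.0), 1.0 / math.sqrt(5.0)]
u2 = [1.0 / math.sqrt(5.0), -2.0 / math.sqrt(5.0)]

# A^{1/2} = U Lambda^{1/2} U* = sqrt(l1) u1 u1* + sqrt(l2) u2 u2*.
half = [[math.sqrt(l1) * u1[i] * u1[j] + math.sqrt(l2) * u2[i] * u2[j]
         for j in range(2)] for i in range(2)]

# Verify A^{1/2} A^{1/2} = A.
sq = [[sum(half[i][k] * half[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
assert all(abs(sq[i][j] - A[i][j]) < 1e-12 for i in range(2) for j in range(2))
```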
Reference Textbooks
• G. F. Franklin, J. D. Powell and A. Emami-Naeini, Feedback Control of Dynamic
Systems, 3rd Edition, Addison-Wesley, New York, 1994.
• B. D. O. Anderson and J. B. Moore, Optimal Control, Prentice Hall, London, 1989.
• F. L. Lewis, Applied Optimal Control and Estimation, Prentice Hall, Englewood Cliffs,
New Jersey, 1992.
• A. Saberi, B. M. Chen and P. Sannuti, Loop Transfer Recovery: Analysis and Design,
Springer, London, 1993.
• A. Saberi, P. Sannuti and B. M. Chen, H2 Optimal Control, Prentice Hall, London, 1995.
• B. M. Chen, Robust and H∞ Control, Springer, London, 2000.
Prepared by Ben M. Chen
Revision: Basic Concepts
What is a control system?
[Block diagram: a Controller drives the System to be controlled (aircraft, missiles,
economic systems, cars, etc.). The desired performance is the REFERENCE INPUT to the
system; the information about the system is its OUTPUT; their difference is the ERROR
fed back to the controller.]
Objective: To make the system OUTPUT and the desired REFERENCE as close
as possible, i.e., to make the ERROR as small as possible.
Key Issues: 1) How to describe the system to be controlled? (Modelling)
2) How to design the controller? (Control)
Some Control Systems Examples:
[Block diagram: for an Economic System, the desired performance is the reference,
government policies are the control input, and the economic output is fed back.]
A Live Demonstration on Control of a Coupled-Tank System through an Internet-Based
Virtual Laboratory Developed by NUS
The objective is to control the flow levels of two coupled tanks. It is a reduced-scale
model of some commonly used chemical plants.
Modelling of Some Physical Systems
A simple mechanical system (a cruise-control system): a mass $m$ is driven by an applied
force $u$ and retarded by a friction force $b\dot{x}$, where $x$ is the displacement and $\ddot{x}$
the acceleration.
By the well-known Newton's law of motion $f = ma$, where $f$ is the total force applied to an
object with mass $m$ and $a$ is the acceleration, we have
$$u - b\dot{x} = m\ddot{x} \quad \Longleftrightarrow \quad m\ddot{x} + b\dot{x} = u.$$
This is a 2nd-order ordinary differential equation with respect to the displacement $x$. It can be
written as a 1st-order ODE with respect to the speed $v = \dot{x}$:
$$\dot{v} + \frac{b}{m}\,v = \frac{u}{m} \qquad \leftarrow \text{model of the cruise-control system; } u \text{ is the input force, } v \text{ is the output.}$$
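A quick way to see the model behave is to integrate the first-order ODE numerically: for a constant input force, the speed should settle at the balance point where friction cancels the drive, v_ss = u/b. A pure-Python forward-Euler sketch with hypothetical parameter values:

```python
# Forward-Euler simulation of m*dv/dt + b*v = u; the parameter values are
# hypothetical, chosen only to illustrate the model.
m, b, u = 1000.0, 50.0, 500.0      # kg, N*s/m, N
v, dt = 0.0, 0.01                  # start at rest, 10 ms step
for _ in range(200_000):           # simulate 2000 s (100 time constants m/b)
    v += dt * (u - b * v) / m
assert abs(v - u / b) < 1e-6       # steady-state speed v_ss = u/b = 10 m/s
```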
A cruise-control system:
[Block diagram: the reference is the desired speed, 90 km/h; a controller (to be designed)
produces the force $u$; the plant $\dot{v} + \frac{b}{m}v = \frac{u}{m}$ produces the speed $v$,
which is fed back and subtracted from the reference.]
Basic electrical systems:
• resistor: $v = iR$
• capacitor: $i = C\,\dfrac{dv}{dt}$
• inductor: $v = L\,\dfrac{di}{dt}$
Kirchhoff's Voltage Law (KVL): The sum of voltage drops around any closed loop in a
circuit is 0, e.g., $v_1 + v_2 + v_3 + v_4 + v_5 = 0$.
Kirchhoff's Current Law (KCL): The sum of currents entering/leaving a node/closed
surface is 0, e.g., $i_1 + i_2 + i_3 + i_4 + i_5 = 0$.
Modelling of a simple electrical system: a voltage source $v_i$ drives a series RC circuit;
the output $v_o$ is the voltage across the capacitor $C$, and $i$ is the loop current through
the resistor $R$.
To find the relationship between the input ($v_i$) and the output ($v_o$) of the circuit, note that
$$i = C\,\frac{dv_o}{dt}, \qquad v_R = Ri = RC\,\frac{dv_o}{dt}.$$
By KVL, we have
$$v_R + v_o - v_i = 0 \quad \Longrightarrow \quad RC\,\frac{dv_o}{dt} + v_o - v_i = 0 \quad \Longleftrightarrow \quad RC\,\dot{v}_o + v_o = v_i,$$
a dynamic model of the circuit.
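This first-order model has the well-known step response vo(t) = vi(1 − e^(−t/RC)), which a numerical integration should reproduce. A pure-Python forward-Euler sketch with hypothetical component values:

```python
import math

# Forward-Euler simulation of RC*dvo/dt + vo = vi, compared against the exact
# step response vo(t) = vi*(1 - exp(-t/RC)); component values are hypothetical.
R, C, vi = 1000.0, 1e-3, 230.0     # ohms, farads (RC = 1 s), volts
vo, dt, steps = 0.0, 1e-4, 50_000  # 5 s of simulation
for _ in range(steps):
    vo += dt * (vi - vo) / (R * C)
t = steps * dt
exact = vi * (1.0 - math.exp(-t / (R * C)))
assert abs(vo - exact) < 0.1       # Euler tracks the analytic solution closely
```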
Control the output voltage of the electrical system:
[Block diagram: the reference is 230 volts; a controller (to be designed) produces $v_i$,
which drives the RC circuit $RC\,\dot{v}_o + v_o = v_i$; the output $v_o$ is fed back and
subtracted from the reference.]
Ordinary Differential Equations
Many real-life problems can be modelled as an ODE of the following form:
$$\ddot{y}(t) + a_1 \dot{y}(t) + a_0 y(t) = u(t).$$
This is called a 2nd-order ODE, as the highest-order derivative in the equation is 2. The ODE
is said to be homogeneous if $u(t) = 0$. In fact, many systems can be modelled or
approximated as a 1st-order ODE, i.e.,
$$\dot{y}(t) + a_0 y(t) = u(t).$$
An ODE is also called a time-domain model of the system, because it can be seen from the
above equations that $y(t)$ and $u(t)$ are functions of time $t$. The key issue associated with
an ODE is: how to find its solution? That is, how to find an explicit expression for $y(t)$ from
the given equation?
State Space Representation
Recall that many real-life problems can be modelled as an ODE of the form
$$\ddot{y}(t) + a_1 \dot{y}(t) + a_0 y(t) = u(t).$$
If we define so-called state variables,
$$x_1 = y, \qquad x_2 = \dot{y},$$
then
$$\dot{x}_1 = \dot{y} = x_2, \qquad \dot{x}_2 = \ddot{y} = -a_0 y - a_1 \dot{y} + u = -a_0 x_1 - a_1 x_2 + u.$$
We can rewrite these equations in a more compact (matrix) form:
$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -a_0 & -a_1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u, \qquad y = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.$$
This is called the state space representation of the ODE or the dynamic system.
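The state-space form is convenient for simulation: one Euler update of the state vector integrates the whole 2nd-order ODE. A pure-Python sketch; the coefficients are illustrative (they place the poles at −1 and −2, so the step response settles at u/a0):

```python
# Step response of x' = A x + B u, y = C x built from y'' + a1 y' + a0 y = u.
a0, a1, u = 2.0, 3.0, 1.0
A = [[0.0, 1.0], [-a0, -a1]]
B = [0.0, 1.0]
x, dt = [0.0, 0.0], 1e-3
for _ in range(20_000):            # 20 s, long past the slowest pole at -1
    dx = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
          A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
    x = [x[0] + dt * dx[0], x[1] + dt * dx[1]]
y = x[0]                           # output y = [1 0] x
assert abs(y - u / a0) < 1e-3      # steady state of this stable system is u/a0
```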
Laplace Transform and Inverse Laplace Transform
Let us first examine the following time-domain functions:
[Plot: a cosine function with a frequency $f = 0.2$ Hz; note that it has a period $T = 5$ seconds.]
[Plot: $x(t) = \cos(0.4\pi t) + \sin(0.8\pi t) + \cos(1.6\pi t)$. What are the frequencies of this function?]
The Laplace transform is a tool to convert a time-domain function into a frequency-domain
one, in which information about the frequencies of the function can be captured. It is often
much easier to solve problems in the frequency domain with the help of the Laplace transform.
Laplace Transform:
Given a time-domain function $f(t)$, its Laplace transform is defined as follows:
$$F(s) = L\{f(t)\} = \int_0^\infty f(t)\,e^{-st}\,dt.$$
Example 1: Find the Laplace transform of the constant function $f(t) = 1$.
$$F(s) = \int_0^\infty f(t)\,e^{-st}\,dt = \int_0^\infty e^{-st}\,dt = \left[-\frac{1}{s}\,e^{-st}\right]_0^\infty = 0 - \left(-\frac{1}{s}\right) = \frac{1}{s}, \qquad \mathrm{Re}(s) > 0.$$
Example 2: Find the Laplace transform of the exponential function $f(t) = e^{-at}$.
$$F(s) = \int_0^\infty f(t)\,e^{-st}\,dt = \int_0^\infty e^{-at}\,e^{-st}\,dt = \int_0^\infty e^{-(s+a)t}\,dt = \frac{1}{s+a}, \qquad \mathrm{Re}(s) > -a.$$
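Both examples can be checked numerically by truncating the integral at a large T and applying the trapezoidal rule. A pure-Python sketch; the truncation point, step count, and parameter values are illustrative choices:

```python
import math

# Trapezoidal approximation of F(s) = integral_0^inf f(t) e^{-st} dt,
# truncated at T (valid when the integrand has decayed by then).
def laplace_numeric(f, s, T=60.0, n=600_000):
    dt = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * dt
        total += f(t) * math.exp(-s * t)
    return total * dt

a, s = 0.5, 2.0
F1 = laplace_numeric(lambda t: 1.0, s)
assert abs(F1 - 1.0 / s) < 1e-5                  # Example 1: L{1} = 1/s
F2 = laplace_numeric(lambda t: math.exp(-a * t), s)
assert abs(F2 - 1.0 / (s + a)) < 1e-5            # Example 2: L{e^{-at}} = 1/(s+a)
```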
Inverse Laplace Transform
Given a frequency-domain function $F(s)$, the inverse Laplace transform converts it back
to its original time-domain function $f(t)$.
Here are some very useful Laplace and inverse Laplace transform pairs, $f(t) \Leftrightarrow F(s)$:
$$1 \Leftrightarrow \frac{1}{s}, \qquad t \Leftrightarrow \frac{1}{s^2}, \qquad e^{-at} \Leftrightarrow \frac{1}{s+a}, \qquad t\,e^{-at} \Leftrightarrow \frac{1}{(s+a)^2},$$
$$\sin at \Leftrightarrow \frac{a}{s^2+a^2}, \qquad \cos at \Leftrightarrow \frac{s}{s^2+a^2}, \qquad e^{-at}\sin bt \Leftrightarrow \frac{b}{(s+a)^2+b^2}, \qquad e^{-at}\cos bt \Leftrightarrow \frac{s+a}{(s+a)^2+b^2}.$$
Some useful properties of the Laplace transform:
1. Superposition:
$$L\{a_1 f_1(t) + a_2 f_2(t)\} = a_1 L\{f_1(t)\} + a_2 L\{f_2(t)\} = a_1 F_1(s) + a_2 F_2(s).$$
2. Differentiation (assume that $f(0) = 0$):
$$L\!\left\{\frac{df(t)}{dt}\right\} = L\{\dot f(t)\} = s\,L\{f(t)\} = s\,F(s), \qquad L\!\left\{\frac{d^2 f(t)}{dt^2}\right\} = L\{\ddot f(t)\} = s^2 L\{f(t)\} = s^2 F(s).$$
3. Integration:
$$L\!\left\{\int_0^t f(\zeta)\,d\zeta\right\} = \frac{1}{s}\,L\{f(t)\} = \frac{1}{s}\,F(s).$$
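The differentiation property can be checked numerically for f(t) = sin(bt), which satisfies f(0) = 0 and has the known transform b/(s² + b²). The quadrature routine and parameter values below are self-contained illustrative choices:

```python
import math

def laplace_numeric(f, s, T=60.0, n=600_000):
    # trapezoidal approximation of integral_0^T f(t) e^{-st} dt
    dt = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * dt
        total += f(t) * math.exp(-s * t)
    return total * dt

b, s = 2.0, 1.5
F = laplace_numeric(lambda t: math.sin(b * t), s)          # L{f} = b/(s^2+b^2)
Fdot = laplace_numeric(lambda t: b * math.cos(b * t), s)   # L{f'}
assert abs(F - b / (s * s + b * b)) < 1e-5
assert abs(Fdot - s * F) < 1e-5                            # L{f'} = s F(s), f(0)=0
```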
Re-express ODE Models using the Laplace Transform (Transfer Function)
Recall that the mechanical system in the cruise-control problem with m = 1 can be represented by an ODE:

    v̇ + bv = u

Taking the Laplace transform of both sides of the equation, we obtain

    L{v̇ + bv} = L{u}  ⇒  L{v̇} + b L{v} = L{u}  ⇒  sV(s) + bV(s) = U(s)
    ⇒  (s + b) V(s) = U(s)  ⇒  V(s)/U(s) = 1/(s + b) = G(s)

This ratio G(s) is called the transfer function of the system model.
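The transfer function V(s)/U(s) = 1/(s + b) can be represented and stepped numerically, e.g. with scipy; b = 1 below is an arbitrary illustrative value:

```python
import numpy as np
from scipy import signal

b = 1.0                                        # friction coefficient (illustrative)
G = signal.TransferFunction([1.0], [1.0, b])   # V(s)/U(s) = 1/(s + b)

# Unit-step response: v(t) = (1 - e^(-b t))/b, which settles at 1/b
t, v = signal.step(G, T=np.linspace(0, 8, 400))
print(round(v[-1], 3))  # 1.0
```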
A cruise-control system in the frequency domain: the reference R(s) is compared with the measured speed V(s); the error drives the controller (the driver), whose output U(s) drives the car model (the auto)

    G(s) = 1/(s + b)

[Block diagram: REFERENCE R(s) → (+/−) → controller → U(s) → G(s) → speed V(s), with V(s) fed back to the summing junction.]
In general, a feedback control system can be represented by the following block diagram:

[Block diagram: R(s) → (+/−) → E(s) → K(s) → U(s) → G(s) → Y(s), with Y(s) fed back to the summing junction.]

Given a system represented by G(s) and a reference R(s), the objective of control system design is to find a control law (or controller) K(s) such that the resulting output Y(s) is as close to the reference R(s) as possible, or the error E(s) = R(s) − Y(s) is as small as possible. However, many other factors have to be carefully considered when dealing with real-life problems. These factors include:
• disturbances
• noises
• uncertainties
• nonlinearities
Control Techniques – A Brief Overview:
A vast amount of research has been published on how to design control laws for various purposes. The approaches can be roughly classified as follows:
♦ Classical control: proportional-integral-derivative (PID) control, developed in the 1940s and used for control of industrial processes. Examples: chemical plants, commercial aeroplanes.
♦ Optimal control: linear quadratic regulator (LQR) control, the Kalman filter, and H₂ control, developed in the 1960s to achieve prescribed optimal performance, and boosted by the NASA Apollo Project.
♦ Robust control: H∞ control, developed in the 1980s and 90s to handle systems with uncertainties and disturbances while maintaining high performance. Example: military systems.
♦ Nonlinear control: currently a hot research area, developed to handle nonlinear systems with high performance. Examples: military systems such as aircraft and missiles.
♦ Intelligent control: knowledge-based control, adaptive control, neural and fuzzy control, etc., researched heavily in the 1990s, developed to handle systems with unknown models. Examples: economic systems, social systems, human systems.
Classical Control
Let us examine the following block diagram of a control system:

[Block diagram: R(s) → (+/−) → E(s) → K(s) → U(s) → G(s) → Y(s), with Y(s) fed back.]

Recall that the objective of control system design is to match the output Y(s) to the reference R(s). Thus, it is important to find the relationship between them. Recall that

    G(s) = Y(s)/U(s)  ⇒  Y(s) = G(s) U(s)

Similarly, we have U(s) = K(s) E(s) and E(s) = R(s) − Y(s). Thus,

    Y(s) = G(s) U(s) = G(s) K(s) E(s) = G(s) K(s) [R(s) − Y(s)]
    ⇒  [1 + G(s) K(s)] Y(s) = G(s) K(s) R(s)
    ⇒  H(s) = Y(s)/R(s) = G(s) K(s) / [1 + G(s) K(s)]

H(s) is the closed-loop transfer function from R to Y.
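The loop algebra above can be reproduced symbolically. A small sketch that treats G(s), K(s), R(s) as plain symbols and solves the loop equations for Y:

```python
import sympy as sp

G, K, R, Y = sp.symbols('G K R Y')

# Loop equations: Y = G*K*E with E = R - Y
Y_sol = sp.solve(sp.Eq(Y, G * K * (R - Y)), Y)[0]

# Closed-loop transfer function H = Y/R
H = sp.simplify(Y_sol / R)
print(H)  # G*K/(G*K + 1)
```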
We'll focus on control system design of first-order systems

    G(s) = b/(s + a)

with a proportional-integral (PI) controller

    K(s) = k_p + k_i/s = (k_p s + k_i)/s.

Thus, the block diagram of the control system can be simplified to a single block from R(s) to Y(s):

    H(s) = G(s) K(s) / [1 + G(s) K(s)] = (b k_p s + b k_i) / [s² + (a + b k_p) s + b k_i]

The whole control problem becomes how to choose an appropriate K(s) such that the resulting H(s) yields the desired properties between R and Y. The closed-loop system H(s) is a second-order system, as its denominator is a polynomial in s of degree 2.
Stability of Control Systems
Example 1: Consider a closed-loop system with

    H(s) = 1/(s² − 1),   R(s) = 1.

We have

    Y(s) = H(s) R(s) = 1/(s² − 1) = 1/[(s + 1)(s − 1)] = 0.5/(s − 1) − 0.5/(s + 1)

Using the Laplace transform pair e^(−at) ⇔ 1/(s + a), we obtain

    0.5 eᵗ ⇔ 0.5/(s − 1),   0.5 e^(−t) ⇔ 0.5/(s + 1)
    ⇒  y(t) = 0.5 (eᵗ − e^(−t))

This system is said to be unstable because the output response y(t) goes to infinity as time t gets larger and larger. This happens because the denominator of H(s) has one positive root at s = 1.

[Figure: y(t) versus time (seconds); the response grows without bound.]
Example 2: Consider a closed-loop system with

    H(s) = 1/(s² + 3s + 2),   R(s) = 1.

We have

    Y(s) = H(s) R(s) = 1/(s² + 3s + 2) = 1/[(s + 1)(s + 2)] = 1/(s + 1) − 1/(s + 2)

Using the Laplace transform table, we obtain

    y(t) = e^(−t) − e^(−2t)

This system is said to be stable because the output response y(t) goes to 0 as time t gets larger and larger. This happens because the denominator of H(s) has no roots with positive real part.

[Figure: y(t) versus time (seconds); the response peaks at about 0.25 and decays to zero.]
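The pole locations behind both examples can be checked numerically; a minimal sketch with numpy:

```python
import numpy as np

# Denominators of H(s) from the two examples
unstable_poles = np.roots([1, 0, -1])   # s^2 - 1       -> roots +1, -1
stable_poles   = np.roots([1, 3, 2])    # s^2 + 3s + 2  -> roots -1, -2

def is_stable(poles):
    """A system is stable if every pole has a strictly negative real part."""
    return all(p.real < 0 for p in poles)

print(is_stable(unstable_poles))  # False: there is a pole at s = +1
print(is_stable(stable_poles))    # True
```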
We consider a general second-order system,

    H(s) = ωₙ² / (s² + 2ζωₙ s + ωₙ²)

The system is stable if the denominator of the system, i.e., the characteristic equation

    s² + 2ζωₙ s + ωₙ² = 0,

has no roots with positive real part, and unstable if it has such roots. In particular, the system is stable when both roots lie in the open left-half plane, marginally stable when there are simple roots on the imaginary axis and none in the right-half plane, and unstable when any root lies in the right-half plane.

[Figure: pole locations in the complex plane labelled Stable, Marginally Stable, and Unstable.]
Stability in the State Space Representation
Consider a general linear system characterized by a state space form,

    ẋ = A x + B u
    y = C x + D u

Then:
1. It is stable if and only if all the eigenvalues of A are in the open left-half plane.
2. It is marginally stable if and only if all the eigenvalues of A are in the closed left-half plane, with some (simple) eigenvalues on the imaginary axis.
3. It is unstable if and only if A has at least one eigenvalue in the open right-half plane.

[Figure: complex plane with the left-half plane marked as the stable region and the right-half plane as the unstable region.]
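The three cases can be sketched as an eigenvalue test; the example matrices below are my own illustrations, and the marginal case is handled only for simple imaginary-axis eigenvalues:

```python
import numpy as np

def classify(A, tol=1e-9):
    """Classify stability of x' = A x from the eigenvalues of A.
    (Repeated imaginary-axis eigenvalues would need an extra Jordan-block check.)"""
    re = np.linalg.eigvals(A).real
    if np.all(re < -tol):
        return "stable"
    if np.any(re > tol):
        return "unstable"
    return "marginally stable"

print(classify(np.array([[0.0, 1.0], [-2.0, -3.0]])))  # eigenvalues -1, -2
print(classify(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # eigenvalues +-j
print(classify(np.array([[0.0, 1.0], [1.0, 0.0]])))    # eigenvalues +-1
```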
Lyapunov Stability
Consider a general dynamic system ẋ = f(x). If there exists a so-called Lyapunov function V(x) which satisfies the following conditions:
1. V(x) is continuous in x and V(0) = 0;
2. V(x) > 0 (positive definite);
3. V̇(x) = (∂V/∂x) f(x) < 0 (negative definite),
then the system is asymptotically stable at x = 0. If, in addition,

    V(x) → ∞ as ‖x‖ → ∞,

then the system is globally asymptotically stable at x = 0. In this case, the stability is independent of the initial condition x(0).
Lyapunov Stability for Linear Systems
Consider a linear system ẋ = A x. The system is asymptotically stable (i.e., the eigenvalues of A are all in the open left-half plane) if for any given real positive definite matrix Q = Qᵀ > 0 of appropriate dimension, there exists a real positive definite solution P = Pᵀ > 0 of the following Lyapunov equation:

    Aᵀ P + P A = −Q

Proof. Define a Lyapunov function V(x) = xᵀ P x. Obviously, the first and second conditions on the previous page are satisfied. Now consider

    V̇(x) = ẋᵀ P x + xᵀ P ẋ = xᵀ Aᵀ P x + xᵀ P A x = xᵀ (Aᵀ P + P A) x = −xᵀ Q x < 0.

Hence, the third condition is also satisfied. The result follows.
Note that the condition Q = Qᵀ > 0 can be replaced by Q = Qᵀ ≥ 0 with (Q^(1/2), A) detectable.
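The Lyapunov equation can be solved numerically with scipy; note `solve_continuous_lyapunov(a, q)` solves a X + X aᴴ = q, so passing a = Aᵀ and q = −Q yields the form used above. The matrix A below is an illustrative stable example:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable: eigenvalues -1, -2
Q = np.eye(2)                              # any Q = Q^T > 0

# Solve A^T P + P A = -Q
P = solve_continuous_lyapunov(A.T, -Q)

# P should be symmetric positive definite, confirming asymptotic stability
print(np.allclose(P, P.T))                  # True
print(bool(np.all(np.linalg.eigvalsh(P) > 0)))  # True
print(np.allclose(A.T @ P + P @ A, -Q))     # True
```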
Behavior of Second-Order Systems with a Step Input
Again, consider the following block diagram with a standard second-order system,

    H(s) = ωₙ² / (s² + 2ζωₙ s + ωₙ²),   R(s) = 1/s  (a unit step, r = 1).

The behavior of the system is fully characterized by ζ, which is called the damping ratio, and ωₙ, which is called the natural frequency.

[Figure: family of unit-step responses for various values of ζ.]
Control System Design with Time-domain Specifications
For the standard second-order system H(s) = ωₙ²/(s² + 2ζωₙ s + ωₙ²) driven by a unit step R(s) = 1/s, the response is commonly characterized by the rise time t_r, the overshoot M_p, and the 1% settling time t_s.

[Figure: step response annotated with the rise time t_r, the overshoot M_p, and the 1% settling time t_s.]

Two useful approximations are

    t_r ≅ 1.8/ωₙ,   t_s ≅ 4.6/(ζωₙ).
PID Design Technique:
With

    G(s) = b/(s + a)   and   K(s) = k_p + k_i/s = (k_p s + k_i)/s,

the feedback loop results in the closed-loop system

    H(s) = Y(s)/R(s) = G(s) K(s)/[1 + G(s) K(s)] = (b k_p s + b k_i)/[s² + (a + b k_p) s + b k_i]

The key issue now is to choose the parameters k_p and k_i such that the resulting system has the desired properties, such as a prescribed settling time and overshoot. Comparing this with the standard second-order system

    H(s) = ωₙ²/(s² + 2ζωₙ s + ωₙ²)

gives

    2ζωₙ = a + b k_p,   ωₙ² = b k_i   ⇒   k_p = (2ζωₙ − a)/b,   k_i = ωₙ²/b.
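The gain formulas above translate directly into a small helper function; the numbers in the sanity check are my own easy-to-verify values, not from the notes:

```python
def pi_gains(a, b, zeta, wn):
    """PI gains placing the closed-loop denominator of b/(s+a) with K(s)=kp+ki/s
    at the standard second-order pattern s^2 + 2*zeta*wn*s + wn^2."""
    kp = (2.0 * zeta * wn - a) / b
    ki = wn**2 / b
    return kp, ki

# Sanity check with easy numbers: a = 1, b = 2, zeta = 0.5, wn = 3
kp, ki = pi_gains(1.0, 2.0, 0.5, 3.0)
print(kp, ki)  # 1.0 4.5
```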
Cruise-Control System Design
Recall the model for the cruise-control system, i.e.,

    V(s)/U(s) = (1/m)/(s + b/m).

Assume that the mass of the car is m = 3000 kg and the friction coefficient is b = 1. Design a PI controller for it such that the speed of the car reaches the desired speed of 90 km/h in 10 seconds (i.e., the settling time is 10 s) and the maximum overshoot is less than 25%.
To achieve an overshoot of less than 25%, we obtain from the overshoot-versus-damping-ratio curve that ζ > 0.4; to be safe, we choose ζ = 0.6. To achieve a settling time of 10 s, we use

    t_s = 4.6/(ζωₙ)  ⇒  ωₙ = 4.6/(ζ t_s) = 4.6/(0.6 × 10) = 0.767.
The transfer function of the cruise-control system is

    G(s) = Y(s)/U(s) = (1/m)/(s + b/m) = (1/3000)/(s + 1/3000)  ⇒  a = b = 1/3000 = 0.000333.

Again, using the formulae derived,

    k_p = (2ζωₙ − a)/b = (2 × 0.6 × 0.767 − 1/3000)/(1/3000) = 2760
    k_i = ωₙ²/b = 0.767²/(1/3000) = 1765

The final cruise-control system: the 90 km/h speed reference drives the PI controller

    K(s) = 2760 + 1765/s

in the feedback loop around G(s).
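The numerical design can be reproduced and the closed loop simulated. A sketch with scipy, using the closed-loop form H(s) = (b k_p s + b k_i)/(s² + (a + b k_p)s + b k_i) derived earlier:

```python
import numpy as np
from scipy import signal

# Plant from the notes: G(s) = (1/3000)/(s + 1/3000), i.e. a = b = 1/3000
a = b = 1.0 / 3000.0
zeta, wn = 0.6, 0.767

kp = (2.0 * zeta * wn - a) / b   # proportional gain
ki = wn**2 / b                   # integral gain
print(round(kp), round(ki))      # 2760 1765

# Closed-loop step response to a 90 km/h reference
H = signal.TransferFunction([b * kp, b * ki], [1.0, a + b * kp, b * ki])
t, y = signal.step(H, T=np.linspace(0, 20, 2000))
print(round(90.0 * y[-1], 1))    # settles at the 90 km/h reference
```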
Simulation Result:
The resulting overshoot is less than 25% and the settling time is about 10 seconds. Thus, our design goal is achieved.

[Figure: closed-loop response, speed in km/h versus time in seconds, rising from 0 to settle near 90 km/h within about 10 seconds.]
Bode Plots
Consider the following feedback control system, which can be redrawn as a unity feedback loop around the open-loop transfer function K(s) G(s):

[Block diagram: r → (+/−) → e → K(s) → G(s) → y, equivalent to r → (+/−) → e → K(s)G(s) → y, with y fed back.]

Bode plots are the magnitude and phase responses of the open-loop transfer function, i.e., K(s) G(s), with s replaced by jω. For example, for the ball and beam system we considered earlier, we have

    K(s) G(s) = (0.23 s + 0.37) · (10/s²) = (2.3 s + 3.7)/s²,

so with s = jω,

    K(jω) G(jω) = (2.3 jω + 3.7)/(−ω²),
    |K(jω) G(jω)| = √((2.3ω)² + 3.7²) / ω²,
    ∠K(jω) G(jω) = tan⁻¹(2.3ω/3.7) − 180°.
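The magnitude and phase data for such plots can be generated with scipy; the sketch below uses the open-loop transfer function (2.3 s + 3.7)/s² as reconstructed above and spot-checks the closed-form expressions:

```python
import numpy as np
from scipy import signal

# Open-loop ball-and-beam model from the notes: K(s)G(s) = (2.3 s + 3.7)/s^2
L = signal.TransferFunction([2.3, 3.7], [1.0, 0.0, 0.0])

w = np.logspace(-1, 1, 200)            # 0.1 ... 10 rad/s, as in the plots
w, mag_db, phase_deg = signal.bode(L, w)

# Spot-check the magnitude against |L(jw)| = sqrt((2.3 w)^2 + 3.7^2)/w^2
w0 = w[0]
mag_manual_db = 20 * np.log10(np.sqrt((2.3 * w0)**2 + 3.7**2) / w0**2)
print(abs(mag_manual_db - mag_db[0]) < 1e-9)   # True

# Phase stays between -180 and -90 degrees for w > 0
print(bool(-180 < phase_deg[0] < -90))         # True
```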
[Figure: Bode magnitude (dB) and phase (degrees) plots of the ball and beam system versus frequency (rad/sec).]
Gain and phase margins

[Figure: Bode magnitude and phase plots annotated with the gain crossover frequency, the phase crossover frequency, and the corresponding gain margin and phase margin.]
Nyquist Plot
Instead of separating the response into magnitude and phase diagrams as in Bode plots, the Nyquist plot maps the open-loop transfer function K(s) G(s) directly onto the complex plane as ω varies, e.g.,

[Figure: Nyquist plot of an open-loop transfer function in the complex plane (real axis versus imaginary axis).]
Gain and phase margins
The gain margin and phase margin can also be found from the Nyquist plot by zooming into the region in the neighbourhood of the critical point −1.

[Figure: Nyquist plot near the point −1 with the gain margin GM and phase margin PM marked.]

Mathematically,

    GM = 1 / |K(jω_p) G(jω_p)|,  where ω_p is such that ∠K(jω_p) G(jω_p) = −180°;
    PM = ∠K(jω_g) G(jω_g) + 180°,  where ω_g is such that |K(jω_g) G(jω_g)| = 1.

Remark: The gain margin is the maximum additional gain that can be applied to the open loop such that the closed-loop system still remains stable. Similarly, the phase margin is the maximum additional phase lag the open loop can tolerate such that the closed-loop system still remains stable.
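The phase margin of the ball-and-beam loop can be computed numerically from these definitions; a sketch with numpy, again assuming K(s)G(s) = (2.3 s + 3.7)/s² (the notes quote PM = 58°, and the small discrepancy below is consistent with reading the value off a plot):

```python
import numpy as np

# Open-loop frequency response of the ball-and-beam loop
def loop(w):
    s = 1j * w
    return (2.3 * s + 3.7) / s**2

# Gain-crossover frequency w_g where |L(jw)| = 1.
# |L| is monotonically decreasing here, so bisection on a bracketing interval works.
lo, hi = 0.1, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if abs(loop(mid)) > 1.0:
        lo = mid
    else:
        hi = mid
wg = 0.5 * (lo + hi)

pm = 180.0 + np.degrees(np.angle(loop(wg)))   # phase margin in degrees
print(round(pm))  # 59

# The phase tan^-1(2.3 w / 3.7) - 180 never reaches -180 deg for w > 0,
# so the gain margin is infinite, matching GM = infinity in the notes.
```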
Example: Gain and phase margins of the ball and beam system: PM = 58°, GM = ∞.

[Figure: Bode magnitude and phase plots of the ball and beam system with the phase margin marked at the gain crossover frequency.]