FALL ‘21
LECTURE 1
Reduce workload
Perform tasks people can’t
Reduce the effects of disturbances
Reduce the effects of plant variations
Stabilize an unstable system
Improve the performance of a system (time response)
What are the typical objectives?
Goal: Design a controller so that the system has some desired characteristics.
Typical objectives:
Stabilize the system (Stabilization)
Regulate the system about some design point (Regulation)
Follow a given class of command signals (Tracking)
Reduce response to disturbances (Disturbance Rejection)
Obtain model
Analytic or from measured data (system ID)
Evaluation model → reduce size/complexity → Design model
Accuracy? Error model?
Design controller
Select technique (SISO, MIMO), (classical, state-space)
Choose parameters (optimization)
Figure: Antenna azimuth position control system: (a) system concept; (b) detailed layout; (c) schematic; (d) functional block diagram; (e) effect of high and low controller gain on the output response.
Modern Control Theory
The modern trend in engineering systems is toward greater complexity, due mainly to
the requirements of complex tasks and high accuracy.
Complex systems may have multiple inputs and multiple outputs and may be time varying.
Modern control theory, a new approach to the analysis and design of complex control systems,
has been developed since around 1960.
This new approach is based on the concept of state.
The concept of state by itself is not new; it has long existed in classical
dynamics and other fields.
Modern Control Theory Versus Conventional Control Theory
Modern control theory is essentially a time-domain approach, though it also uses
frequency-domain methods in certain cases (such as H-infinity control),
while conventional control theory is a complex frequency-domain approach.
Focus of this course
Assumptions:
Given a relatively simple (first or second order) system, and a set of stability/performance
requirements:
you can analyze the stability & expected performance
you have a reasonably good idea how to design a controller using classical techniques (Bode and/or root locus)
Will not focus too much on classical design in this course
However, it is important to know “the design process” for classical controllers,
and you should be able to provide a “classical interpretation” of any advanced control technique as a “sanity check”
State: The state of a dynamic system is the smallest set of variables (called state
variables) such that knowledge of these variables at 𝑡 = 𝑡0 , together with knowledge of the
input for 𝑡 ≥ 𝑡0 , completely determines the behavior of the system for any time 𝑡 ≥ 𝑡0 .
Note that the concept of state is by no means limited to physical systems. It is applicable to biological
systems, economic systems, social systems, and others.
State Variables: The state variables of a dynamic system are the variables making up the
smallest set of variables that determine the state of the dynamic system. If at least n
variables 𝑥1 , 𝑥2 , … , 𝑥𝑛 are needed to completely describe the behavior of a dynamic system,
then such 𝑛 variables are a set of state variables.
Note that
state variables need not be physically measurable or observable quantities.
Variables that do not represent physical quantities and those that are neither measurable nor observable can be chosen
as state variables.
Such freedom in choosing state variables is an advantage of the state-space methods.
Practically, however, it is convenient to choose easily measurable quantities for the state variables, if this is possible at
all, because optimal control laws will require the feedback of all state variables with suitable weighting.
Modeling in State Space
State Vector: If 𝑛 state variables are needed to completely describe the behavior of a
given system, then these 𝑛 state variables can be considered the 𝑛 components of a
vector 𝑥.
Such a vector is called a state vector.
A state vector is thus a vector that determines uniquely the system state 𝑥(𝑡) for any time 𝑡 ≥ 𝑡0 , once
the state at 𝑡 = 𝑡0 is given and the input 𝑢(𝑡) for 𝑡 ≥ 𝑡0 is specified.
If we define

    ẋ(t) = f(x, u, t)
    y(t) = g(x, u, t)

where the first one is the state equation and the second one is the output equation.
If vector functions f and/or g involve time t explicitly, then the system is called a time-varying system.
Modeling in State Space
If the state equation and the output equation are linearized about the operating state, then we have the
following linearized state equation and output equation:

    ẋ(t) = A(t)x(t) + B(t)u(t)
    y(t) = C(t)x(t) + D(t)u(t)

where
A(t) is called the state matrix,
B(t) the input matrix,
C(t) the output matrix, and
D(t) the direct transmission matrix.
If the vector functions f and g do not involve time t explicitly, then the system is called a
time-invariant system. In this case, the equations can be simplified to

    ẋ(t) = Ax(t) + Bu(t)
    y(t) = Cx(t) + Du(t)

The first equation is the state equation of the linear, time-invariant system and the second
one is the output equation for the same system.
In this course, we shall be concerned mostly with the linear, time-invariant (LTI)
systems described by the equations above.
A, B, C, D are constant and do not depend on t.
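A constant-coefficient LTI model can be built and simulated with standard numerical tools; here is a minimal sketch in Python (the matrices are illustrative, not from the lecture):

```python
import numpy as np
from scipy import signal

# an LTI system x' = Ax + Bu, y = Cx + Du with constant matrices
A = np.array([[0.0,  1.0],
              [-2.0, -3.0]])   # state matrix (poles at -1 and -2)
B = np.array([[0.0], [1.0]])   # input matrix
C = np.array([[1.0, 0.0]])     # output matrix
D = np.array([[0.0]])          # direct transmission matrix

sys = signal.StateSpace(A, B, C, D)
t, y = signal.step(sys)        # unit-step response
print(y[-1])                   # settles near the DC gain -C A^{-1} B = 0.5
```

Since both eigenvalues of A have negative real parts, the step response settles to the DC gain.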
Modeling in State Space
There are a variety of ways to develop these state-space models. We will explore this
process in detail.
Linearization of nonlinear models
Derivation from simple linear dynamics
Modeling in State Space: an example
Consider the mechanical system m ÿ + b ẏ + k y = u (mass m driven by force u, with
damper b and spring k). This system is of second order, which means that the system
involves two integrators. Let us define state variables x1(t) and x2(t) as

    x1(t) = y(t)
    x2(t) = ẏ(t)

Then we obtain

    ẋ1 = x2
    ẋ2 = −(k/m)x1 − (b/m)x2 + (1/m)u

or, in vector-matrix form, ẋ = Ax + Bu with

    A = [  0       1   ]      B = [  0  ]
        [ −k/m   −b/m  ]          [ 1/m ]

and the output equation y = Cx with C = [1  0].
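As a numeric sketch (in Python rather than the course's Matlab, with made-up m, b, k values), the mass-spring-damper m ÿ + b ẏ + k y = u can be assembled into state-space matrices directly:

```python
import numpy as np

# illustrative parameters, not from the lecture
m, b, k = 1.0, 0.5, 2.0

# state x = [y, y_dot]; x' = A x + B u, y = C x
A = np.array([[0.0,   1.0],
              [-k/m, -b/m]])
B = np.array([[0.0], [1.0/m]])
C = np.array([[1.0, 0.0]])

# the eigenvalues of A are the roots of m*s^2 + b*s + k = 0
print(np.linalg.eigvals(A))
```

The characteristic polynomial of A recovers the original second-order ODE coefficients, which is a quick sanity check on the construction.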
Fortunately there are very simple numerical tools available to perform each of these steps
Removes much of the “art” and/or “magic” required in classical control design → design process becomes more
systematic
Linear Algebra
Will not focus on the theorems/proofs in class – details will be handed out as necessary, but these are
in the textbooks.
Will focus on using the linear algebra to understand the behavior of the system dynamics so that we
can modify them using control. “Linear algebra in action”
Even so, this will require more algebra than most math courses you have taken.
Nonlinear and Robustness
We will spend more time on the analysis of the system stability assuming various types
of basic nonlinearities
All systems are nonlinear to some extent
How well will the controller based on linear model assumptions work on the full nonlinear system?
Goal is to develop tools to work with a nonlinear model, linearize it, design a controller,
and then analyze the performance on the original nonlinear system.
Also interested in understanding how the addition of certain types of nonlinearities (saturation, rate
limits, stiction) might influence stability/performance.
Will also explicitly consider the effects of uncertainty in the dynamic models
Basic issues such as gain and timing errors
Parameter errors
Unmodeled dynamics
Implementation Issues
With the increase in processor speeds and the ability to develop code right from
Simulink, there is not much point discussing all of the intricacies of digital design
But sometimes you have to use a small computer and/or you don’t get all the cycles
We will discuss the process enough to capture the main issues and the overall
implementation approach
Bottom line is that as long as you “sample” fast enough the effects are not catastrophic - it just adds a
little bit more delay to the feedback loop
Easily predicted and relatively easy to account for
Provides feedback on how fast you need to go – going faster costs more money and can create numerical
implementation issues
In this course, much of the implementation done will be with Matlab and Simulink
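The “sample fast enough” point can be illustrated by discretizing a continuous model with a zero-order hold; the sketch below uses Python's scipy rather than Matlab, with an illustrative system and sample time:

```python
import numpy as np
from scipy.signal import cont2discrete

# continuous-time model x' = Ax + Bu (poles at -1 and -2, illustrative)
A = np.array([[0.0,  1.0],
              [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

dt = 0.05   # sampling much faster than the slowest pole
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt, method='zoh')

# discrete eigenvalues are exp(lambda*dt); inside the unit circle -> still stable
print(np.abs(np.linalg.eigvals(Ad)))
```

The discretization maps each continuous pole λ to exp(λ·dt), so a stable continuous design stays stable; the sampling mainly adds a small delay, as noted above.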
Basic Definitions: Linearity
What is a linear dynamical system? A system G is linear with respect to its inputs and
outputs iff superposition holds: if y1(t) is the response to u1(t) and y2(t) is the response to
u2(t), then the response to αu1(t) + βu2(t) is αy1(t) + βy2(t), for any scalars α, β.
A system is said to be time-invariant if the relationship between the input and output is
independent of time.
So if the response to 𝑢(𝑡) is 𝑦(𝑡), then the response to 𝑢(𝑡 − 𝑡0 ) is 𝑦(𝑡 − 𝑡0 )
For example, ẏ(t) + 2y(t) = u(t) is LTI, but
ẏ(t) + 2t·y(t) = u(t) is not, since its coefficient depends explicitly on t
Creating State-Space Models
Most easily created from Nth-order differential equations that describe the dynamics
This was the case shown before
Only issue is which set of states to use – there are many choices
Problem is that we have restricted ourselves here to linear state space models, and
almost all systems are nonlinear in real-life.
Can develop linear models from nonlinear system dynamics
Nonlinear Systems
Example – pendulum dynamics: θ̈ + r θ̇ + (g/l) sin θ = 0 can be written in state-space form
by defining x1 = θ and x2 = θ̇:

    ẋ1 = x2
    ẋ2 = −r x2 − (g/l) sin x1

Setting f(x, u) = 0 yields x2 = 0 and x2 = −(g/(r l)) sin x1, which implies sin x1 = 0, i.e., x1 = 0 or π
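The pendulum's vector field and its two equilibria can be checked numerically; this Python sketch uses illustrative values for g/l and the damping r:

```python
import numpy as np

g_over_l, r = 9.81, 0.2   # illustrative parameters

def f(x):
    # x = [theta, theta_dot]; damped pendulum dynamics
    return np.array([x[1], -r * x[1] - g_over_l * np.sin(x[0])])

# f vanishes at both equilibria: hanging (x1 = 0) and inverted (x1 = pi)
print(f(np.array([0.0, 0.0])))
print(f(np.array([np.pi, 0.0])))
```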
Linearization: Taylor Series Expansion
Assume that the system is operating about some nominal points (𝑥𝑒 , 𝑢𝑒 )
Then write the actual state as 𝑥 𝑡 = 𝑥𝑒 + 𝛿𝑥(𝑡) and the actual inputs as 𝑢(𝑡) = 𝑢𝑒 + 𝛿𝑢(𝑡)
The “𝛿” is included to denote the fact that we expect the variations about the nominal to be “small”
The linearized equations can be derived by using the Taylor series expansion of f(. , . )
about 𝑥𝑒 and 𝑢𝑒
Each equation ẋi = fi(x, u) of the vector equation ẋ = f(x, u) can be expressed as

    ẋi = fi(xe, ue) + ∂fi/∂x|0 δx + ∂fi/∂u|0 δu + higher-order terms

where
∙|0 means that the function
should be evaluated at the
nominal values of 𝑥𝑒 and 𝑢𝑒
Linearization: Taylor Series Expansion
The meaning of “small” deviations now clear – the variations in 𝛿𝑥 and 𝛿𝑢 must be
small enough that we can ignore the higher order terms in the Taylor expansion of
f(x, u)
Combining all 𝑛 state equations gives (note that we also set “ ≈ ” → “ = ”)

    δẋ = A δx + B δu,   where A = ∂f/∂x|0 and B = ∂f/∂u|0
Linearization: Taylor Series Expansion
Similarly, if the nonlinear measurement equation is y = g(x, u) and 𝑦(𝑡) = 𝑦𝑒 + 𝛿𝑦, then

    δy = C δx + D δu,   where C = ∂g/∂x|0 and D = ∂g/∂u|0

Typically drop the “𝛿” as they are rather cumbersome, and (abusing notation) we write the
state equations as

    ẋ = A x + B u
    y = C x + D u

If the system is operating around just one set point, then the partial derivatives in the
expressions for 𝐴– 𝐷 are all constant → LTI linearized model.
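In practice the Jacobians ∂f/∂x and ∂f/∂u can be formed by finite differences; this Python sketch linearizes a damped pendulum with a torque input (parameters are illustrative, and the helper `jacobians` is our own, not a library routine):

```python
import numpy as np

g_over_l, r = 9.81, 0.2   # illustrative parameters

def f(x, u):
    # damped pendulum with torque input u
    return np.array([x[1], -r * x[1] - g_over_l * np.sin(x[0]) + u])

def jacobians(f, xe, ue, eps=1e-6):
    # central finite differences of f about the operating point (xe, ue)
    n = len(xe)
    A = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(xe + dx, ue) - f(xe - dx, ue)) / (2 * eps)
    B = (f(xe, ue + eps) - f(xe, ue - eps)) / (2 * eps)
    return A, B.reshape(n, 1)

A, B = jacobians(f, np.array([0.0, 0.0]), 0.0)
print(A)   # close to the analytic Jacobian [[0, 1], [-g/l, -r]]
print(B)   # close to [[0], [1]]
```

The numeric Jacobians match the analytic linearization about the hanging equilibrium, which is exactly the A and B of the linearized model above.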
Stability of LTI Systems
We can also infer stability of the original nonlinear system from analysis of the linearized
system model
Lyapunov’s indirect method: Let 𝑥𝑒 = 0 be an equilibrium point for the nonlinear
autonomous system ẋ = f(x), and let A = ∂f/∂x evaluated at 𝑥𝑒 be the linearized state matrix
Then:
The origin is an asymptotically stable equilibrium point for the nonlinear system if Re λi(A) < 0 ∀ 𝑖
The origin is unstable if Re λi(A) > 0 for any 𝑖
Note that this doesn’t say anything about the stability of the nonlinear system if the linearized system is only
neutrally stable (some Re λi(A) = 0 and none positive)
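The indirect method can be applied to the damped pendulum (illustrative parameters) by comparing the linearizations at the hanging and inverted equilibria:

```python
import numpy as np

g_over_l, r = 9.81, 0.2   # illustrative parameters

def A_at(theta_e):
    # Jacobian of [x2, -r*x2 - (g/l) sin x1] evaluated at (theta_e, 0)
    return np.array([[0.0,                          1.0],
                     [-g_over_l * np.cos(theta_e), -r]])

print(np.linalg.eigvals(A_at(0.0)))    # all Re(lambda) < 0 -> hanging point stable
print(np.linalg.eigvals(A_at(np.pi)))  # one Re(lambda) > 0 -> inverted point unstable
```

By Lyapunov's indirect method, the hanging equilibrium of the nonlinear pendulum is asymptotically stable and the inverted one is unstable, matching physical intuition.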
A very powerful result that is the basis of all linear control theory
Linearization: an example
A leaf spring, which is used on car suspensions, can be modeled as a cubic spring.
In this case, the nonlinear model for the suspension system is

    m ÿ + b ẏ + k1 y + k2 y³ = u

Find the equilibrium points and then make a state-space model
Linearization: an example
Setting ẏ = ÿ = 0 with 𝑢𝑒 = 0 gives

    k1 𝑦𝑒 + k2 𝑦𝑒³ = 0

The second condition corresponds to 𝑦𝑒 = 0 or 𝑦𝑒 = ±√(−𝑘1/𝑘2), which is only real if 𝑘1 and 𝑘2 are of opposite signs.
Linearizing about 𝑦𝑒 = 0 gives

    m δÿ + b δẏ + 𝑘1 δy = δu

which are the standard dynamics of a system with just a linear spring of stiffness 𝑘1
Stable motion about 𝑦 = 0 if 𝑘1 > 0
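The equilibria of the cubic spring and the linearized stiffness about y = 0 can be checked numerically; k1 and k2 below are made-up values chosen with opposite signs so the nonzero equilibria exist:

```python
import numpy as np

k1, k2 = 4.0, -1.0   # illustrative, opposite signs

# equilibria solve k1*y + k2*y^3 = 0, i.e. the cubic k2*y^3 + k1*y = 0
roots = np.roots([k2, 0.0, k1, 0.0])
print(np.sort(roots.real))   # 0 and +/- sqrt(-k1/k2) = +/- 2

# linearized stiffness about ye = 0: d/dy (k1*y + k2*y^3) at y = 0 is just k1
print(k1 + 3 * k2 * 0.0**2)  # = k1, the linear-spring result
```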
Jonathan P. How and Emilio Frazzoli. 16.30/31 Feedback Control Systems. Fall 2010.
Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu/.
License: Creative Commons BY-NC-SA.
Steven Hall. 16.06 Principles of Automatic Control. Fall 2012. Massachusetts Institute of
Technology: MIT OpenCourseWare, https://ocw.mit.edu/. License: Creative Commons
BY-NC-SA.
Katsuhiko Ogata, Modern Control Engineering, 2010, 5th edition, Prentice Hall.
Norman S. Nise, Control Systems Engineering, 2011, 6th edition, John Wiley & Sons.