
UCK363E – AUTOMATIC CONTROL II

FALL ‘21

LECTURE 1

Instructor: Barış Başpınar


Mathematical Models

 The designer uses physical laws to model the system mathematically.


 Mathematical models may assume many different forms.
 Depending on the particular system and the particular circumstances, one mathematical model may be
better suited than others.
 Once a mathematical model of a system is obtained, various analytical and computer tools can be used
for analysis and synthesis purposes.

 In obtaining a mathematical model, we must make a compromise between the simplicity of the model
and the accuracy of the results of the analysis.
 In deriving a reasonably simplified mathematical model, we frequently find it necessary to ignore certain
inherent physical properties of the system
 In particular, if a linear lumped-parameter mathematical model (that is, one employing ordinary
differential equations) is desired, it is always necessary to ignore certain nonlinearities and distributed
parameters that may be present in the physical system.
 If the effects that these ignored properties have on the response are small, good agreement will be
obtained between the mathematical model and the physical system.
Why do we need an automatic control system?

 Reduce workload
 Perform tasks people can’t
 Reduce the effects of disturbances
 Reduce the effects of plant variations
 Stabilize an unstable system
 Improve the performance of a system (time response)
What are the typical objectives?
 Goal: Design a controller so that the system has some desired characteristics.

 Typical objectives:
 Stabilize the system (Stabilization)
 Regulate the system about some design point (Regulation)
 Follow a given class of command signals (Tracking)
 Reduce response to disturbances. (Disturbance Rejection)

 Typically think of closed-loop control → so we would analyze the closed-loop dynamics.
 Open-loop control also possible (called “feedforward”) – more prone to modeling errors since inputs
are not changed as a result of the measured error.

Block diagrams of control systems:
a. open-loop system;
b. closed-loop system
Feedback Control Approach

 Establish control objectives


 Qualitative – don’t use too much fuel
 Quantitative – settling time of step response <3 sec
 Typically requires that you understand the process (expected commands and disturbances) and the
overall goals (bandwidths).
 Often requires that you have a strong understanding of the physical dynamics of the system so that you
do not “fight” them in inappropriate (i.e., inefficient) ways.

 Select sensors & actuators


 What aspects of the system are to be sensed and controlled?
 Consider sensor noise and linearity as key discriminators.
 Cost, reliability, size, . . .
Feedback Control Approach

 Obtain model
 Analytic or from measured data (system ID)
 Evaluation model → reduce size/complexity → Design model
 Accuracy? Error model?

 Design controller
 Select technique (SISO, MIMO), (classical, state-space)
 Choose parameters (optimization)

 Analyze closed-loop performance. Meet objectives?


 Analysis, simulation, experimentation, . . .
 Yes ⇒ done, No ⇒ iterate . . .
Feedback Control Approach: An Example - Position Control Systems

Antenna azimuth position control system:
a) system concept;
b) detailed layout;
c) schematic;
d) functional block diagram;
e) effect of high and low controller gain on the output response
Modern Control Theory

 The modern trend in engineering systems is toward greater complexity, due mainly to
the requirements of complex tasks and good accuracy.
 Complex systems may have multiple inputs and multiple outputs and may be time varying.

 Because of the necessity of meeting


 increasingly stringent requirements on the performance of control systems,
 the increase in system complexity, and
 easy access to large scale computers,

 modern control theory, which is a new approach to the analysis and design of complex control systems,
has been developed since around 1960.
 This new approach is based on the concept of state.
 The concept of state by itself is not new, since it has been in existence for a long time in the field of classical
dynamics and other fields
Modern Control Theory Versus Conventional Control Theory

 Modern control theory is contrasted with conventional control theory in that


 the former is applicable to multiple-input and multiple-output systems (MIMO), which may be linear or
nonlinear, time invariant or time varying,
 while the latter is applicable only to linear time-invariant single-input, single-output systems (SISO).

 Also, modern control theory is essentially a time-domain approach (and, in certain cases such as
H-infinity control, a frequency-domain approach),
 while conventional control theory is a complex frequency-domain approach.
Focus of this course

 Assumptions:
 Given a relatively simple (first or second order) system, and a set of stability/performance
requirements:
 you can analyze the stability & expected performance
 you have a reasonably good idea how to design a controller using classical techniques (Bode and/or root locus)

 Will not focus too much on the classical design in this course
 it is important to know “the design process” for classical controllers,
 you should be able to provide a “classical interpretation” of any advanced control technique as a “sanity check”

 Our focus in this class is on modern control theory (state-space methods)


 More systematic design tools exist – can be easily codified and implemented numerically – easy to
integrate optimization
 Easily handle larger-scale systems (many modes) and many sensors and actuators (MIMO).
Modeling in State Space

 State: The state of a dynamic system is the smallest set of variables (called state
variables) such that knowledge of these variables at 𝑡 = 𝑡0 , together with knowledge of the
input for 𝑡 ≥ 𝑡0 , completely determines the behavior of the system for any time 𝑡 ≥ 𝑡0 .
 Note that the concept of state is by no means limited to physical systems. It is applicable to biological
systems, economic systems, social systems, and others.

 State Variables: The state variables of a dynamic system are the variables making up the
smallest set of variables that determine the state of the dynamic system. If at least n
variables 𝑥1 , 𝑥2 , … , 𝑥𝑛 are needed to completely describe the behavior of a dynamic system,
then such 𝑛 variables are a set of state variables.
 Note that
 state variables need not be physically measurable or observable quantities.
 Variables that do not represent physical quantities and those that are neither measurable nor observable can be chosen
as state variables.
 Such freedom in choosing state variables is an advantage of the state-space methods.
 Practically, however, it is convenient to choose easily measurable quantities for the state variables, if this is possible at
all, because optimal control laws will require the feedback of all state variables with suitable weighting.
Modeling in State Space

 State Vector: If 𝑛 state variables are needed to completely describe the behavior of a
given system, then these 𝑛 state variables can be considered the 𝑛 components of a
vector 𝑥.
 Such a vector is called a state vector.
 A state vector is thus a vector that determines uniquely the system state 𝑥(𝑡) for any time 𝑡 ≥ 𝑡0 , once
the state at 𝑡 = 𝑡0 is given and the input 𝑢(𝑡) for 𝑡 ≥ 𝑡0 is specified.

 State-Space Equations: In state-space analysis we are concerned with three types of variables that
are involved in the modeling of dynamic systems: input variables, output variables, and state variables.
 Assume that a MIMO system involves 𝑛 integrators. Assume also that there are 𝑟 inputs
𝑢1 𝑡 , 𝑢2 𝑡 , … , 𝑢𝑟 (𝑡) and 𝑚 outputs 𝑦1 𝑡 , 𝑦2 𝑡 , … , 𝑦𝑚 (𝑡). Define 𝑛 outputs of the integrators as state
variables: 𝑥1 𝑡 , 𝑥2 𝑡 , … , 𝑥𝑛 𝑡 .
Modeling in State Space

 Then the system may be described by

ẋ1(t) = f1(x1, x2, …, xn; u1, u2, …, ur; t)
ẋ2(t) = f2(x1, x2, …, xn; u1, u2, …, ur; t)
⋮
ẋn(t) = fn(x1, x2, …, xn; u1, u2, …, ur; t)

 The outputs y1(t), y2(t), …, ym(t) of the system may be given by

y1(t) = g1(x1, x2, …, xn; u1, u2, …, ur; t)
y2(t) = g2(x1, x2, …, xn; u1, u2, …, ur; t)
⋮
ym(t) = gm(x1, x2, …, xn; u1, u2, …, ur; t)


Modeling in State Space

 If we define the vectors

x(t) = [x1(t); x2(t); …; xn(t)],   u(t) = [u1(t); u2(t); …; ur(t)],   y(t) = [y1(t); y2(t); …; ym(t)]

and the vector functions f and g accordingly, then the previous equations become

ẋ(t) = f(x, u, t)
y(t) = g(x, u, t)

where the first one is the state equation and the second one is the output equation.
If vector functions f and/or g involve time t explicitly, then the system is called a time-varying system.
Modeling in State Space

 If the state equation and output equation are linearized about the operating state, then we have the
following linearized state equation and output equation:

ẋ(t) = A(t)x(t) + B(t)u(t)
y(t) = C(t)x(t) + D(t)u(t)

where
 A(t) is called the state matrix,
 B(t) the input matrix,
 C(t) the output matrix, and
 D(t) the direct transmission matrix.

Block diagram of the linear, continuous-time control system represented in state space.
Modeling in State Space

 If the vector functions f and g do not involve time t explicitly, then the system is called a
time-invariant system. In this case, the equations can be simplified to

ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)

where the first equation is the state equation of the linear, time-invariant system and the second
one is the output equation for the same system.

 In this course, we shall be concerned mostly with the linear, time-invariant (LTI)
systems described by the equations above.
 A, B, C, D are constant and do not depend on t.
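
As a quick illustration, a minimal Python/NumPy sketch of such an LTI model (the matrix values below are arbitrary placeholders, not from the lecture):

```python
import numpy as np
from scipy.signal import StateSpace, step

# Hypothetical LTI system xdot = Ax + Bu, y = Cx + Du
# (placeholder matrices for illustration only)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

sys = StateSpace(A, B, C, D)   # continuous-time LTI model
t, y = step(sys)               # unit-step response y(t)
print(y[-1])                   # approaches the DC gain -C A^{-1} B = 0.5
```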
Modeling in State Space

 Reasons to use this form:


 State-variable form is a convenient way to work with complex dynamics. The matrix format is easy to
use on computers.
 Transfer functions only deal with input/output behavior, but state-space form provides easy access to
the “internal” features/response of the system.
 Allows us to explore new analysis and synthesis tools.
 Great for multiple-input multiple-output systems (MIMO), which are very hard to work with using
transfer functions.

 There are a variety of ways to develop these state-space models. We will explore this
process in detail.
 Linearization of nonlinear models
 Derivation from simple linear dynamics
Modeling in State Space: an example

 Consider the mechanical system shown in figure


 We assume that the system is linear.
 The external force 𝑢(𝑡) is the input to the system, and the displacement 𝑦(𝑡) of the mass is the output.
 The displacement 𝑦(𝑡) is measured from the equilibrium position in the absence of the external force.
 This system is a SISO system.

 This system is of second order: a mass m, a damper with coefficient b, and a spring with constant k,
so that the equation of motion is m ÿ + b ẏ + k y = u. This means that the system involves two
integrators. Let us define state variables x1(t) and x2(t) as

x1(t) = y(t),   x2(t) = ẏ(t)

 Then we obtain

ẋ1 = x2
ẋ2 = (1/m)(−k y − b ẏ) + (1/m) u

or

ẋ1 = x2
ẋ2 = −(k/m) x1 − (b/m) x2 + (1/m) u

 The output equation is

y = x1

Mechanical system
Modeling in State Space: an example

 In vector-matrix form, the equations can be written as

[ẋ1; ẋ2] = [0 1; −k/m −b/m] [x1; x2] + [0; 1/m] u

and

y = [1 0] [x1; x2]

 They are in the standard form:

ẋ = A x + B u,   y = C x + D u   (with D = 0 here)

Block diagram of the mechanical system


The block diagram of the system is illustrated.
Notice that the outputs of the integrators are state variables.
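
A short numerical sketch of this model in Python, assuming illustrative values for m, b, and k and using a simple forward-Euler integration:

```python
import numpy as np

# Mass-spring-damper m*y'' + b*y' + k*y = u in state-space form
# (parameter values are illustrative only)
m, b, k = 1.0, 0.5, 2.0

A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])     # output is the displacement y = x1

# Simple forward-Euler simulation of the unit-step response
dt = 0.001
x = np.zeros((2, 1))
for _ in range(10000):         # simulate 10 seconds
    x = x + dt * (A @ x + B * 1.0)   # input u = 1
print((C @ x).item())          # approaches the static value u/k = 0.5
```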
Modeling in State Space: summary of modeling principles

1. Identify the state of the system:
 positions
 velocities
 inductor currents
 capacitor voltages
 …

2. Use physics to find dx1/dt, dx2/dt, …

3. Organize as:

dx/dt = f(x, u),   y = g(x, u)

where
x – state vector
u – control input
y – measured output

4. Linearize if necessary.


Controller Design in State Space

 Control design: Split into 3 main parts


 Full-state feedback – fictitious since it requires more information than is typically (ever?) available
 Observer/estimator design – process of “estimating” the system state from the measurements that are
available
 Dynamic output feedback – combines these two parts with provable guarantees on stability (and
performance)

 Fortunately there are very simple numerical tools available to perform each of these steps

 Removes much of the “art” and/or “magic” required in classical control design → design process more
systematic
Linear Algebra

 Linear systems theory involves extensive use of linear algebra

 Will not focus on the theorems/proofs in class – details will be handed out as necessary, but these are
in the textbooks.

 Will focus on using the linear algebra to understand the behavior of the system dynamics so that we
can modify them using control. “Linear algebra in action”

 Even so, this will require more algebra than most math courses that you have taken.
Nonlinear and Robustness

 We will spend more time on the analysis of the system stability assuming various types
of basic nonlinearities
 All systems are nonlinear to some extent
 How well will the controller based on linear model assumptions work on the full nonlinear system?

 Goal is to develop tools to work with a nonlinear model, linearize it, design a controller,
and then analyze the performance on the original nonlinear system.
 Also interested in understanding how the addition of certain types of nonlinearities (saturation, rate
limits, stiction) might influence stability/performance.

 Will also explicitly consider the effects of uncertainty in the dynamic models
 Basic issues such as gain and timing errors
 Parameter errors
 Unmodeled dynamics
Implementation Issues

 With the increase in processor speeds and the ability to develop code right from
Simulink, there is not much point discussing all of the intricacies of digital design
 But sometimes you have to use a small computer and/or you don’t get all the cycles

 We will discuss the process enough to capture the main issues and the overall
implementation approach
 Bottom line is that as long as you “sample” fast enough the effects are not catastrophic - it just adds a
little bit more delay to the feedback loop
 Easily predicted and relatively easy to account for
 Provides feedback on how fast you need to go – going faster costs more money and can create
numerical implementation issues

 In this course, much of the implementation done will be with Matlab and Simulink
Basic Definitions: Linearity

 What is a linear dynamical system? A system G is linear with respect to its inputs and
outputs iff superposition holds:

G(α1 u1 + α2 u2) = α1 G u1 + α2 G u2

 So if y1 is the response of G to u1 (y1 = G u1), and y2 is the response of G to u2 (y2 = G u2),
then the response to α1 u1 + α2 u2 is α1 y1 + α2 y2
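
A quick numerical check of superposition, using a simple first-order LTI system as a stand-in:

```python
import numpy as np
from scipy.signal import StateSpace, lsim

# Numerical check of superposition on a simple first-order LTI system
sys = StateSpace([[-1.0]], [[1.0]], [[1.0]], [[0.0]])   # ydot = -y + u

t = np.linspace(0.0, 5.0, 500)
u1, u2 = np.sin(t), np.ones_like(t)
a1, a2 = 2.0, -0.5

_, y1, _ = lsim(sys, u1, t)
_, y2, _ = lsim(sys, u2, t)
_, y12, _ = lsim(sys, a1 * u1 + a2 * u2, t)

print(np.allclose(y12, a1 * y1 + a2 * y2))   # True (up to solver tolerance)
```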
Basic Definitions: Time-invariant

 A system is said to be time-invariant if the relationship between the input and output is
independent of time.
 So if the response to 𝑢(𝑡) is 𝑦(𝑡), then the response to 𝑢(𝑡 − 𝑡0 ) is 𝑦(𝑡 − 𝑡0 )

 Example: a system such as ẏ(t) + y(t) = u(t) is LTI, but ẏ(t) + t y(t) = u(t) is not
(its coefficient depends explicitly on t)
Creating State-Space Models

 Most easily created from 𝑁 𝑡ℎ order differential equations that describe the dynamics
 This was the case in the example above
 Only issue is which set of states to use – there are many choices

 Can be developed from transfer function model as well


 Much more on this later

 Problem is that we have restricted ourselves here to linear state-space models, and
almost all systems are nonlinear in real life.
 Can develop linear models from nonlinear system dynamics
Nonlinear Systems

 A system is nonlinear if the principle of superposition does not apply.


 Thus, for a nonlinear system the response to two inputs cannot be calculated by treating one input at a
time and adding the results.

 Linearization of Nonlinear Systems


 In control engineering a normal operation of the system may be around an equilibrium point, and the
signals may be considered small signals around the equilibrium.
 If the system operates around an equilibrium point and if the signals involved are small signals, then it
is possible to approximate the nonlinear system by a linear system.
 Such a linear system is equivalent to the nonlinear system considered within a limited operating range
Linearization: Equilibrium Points

 The linearization procedure presented in the following is based on the expansion of the nonlinear
function into a Taylor series about the operating point and the retention of only the linear term.
 Because we neglect higher-order terms of the Taylor series expansion, these neglected terms must be
small enough
 i.e., the variables must deviate only slightly from the operating condition

 The linearization will be performed about equilibrium points


 Characterized by setting the state derivative to zero: f(xe, ue) = 0
 Result is an algebraic set of equations that must be solved for both xe and ue
 Note that ẋe = 0 and u̇e = 0 by definition
 Typically think of these nominal conditions xe, ue as “set points” or “operating points” for the nonlinear system.

 Example – pendulum dynamics: θ̈ + r θ̇ + (g/l) sin θ = 0 can be written in state-space form
(x1 = θ, x2 = θ̇):

ẋ1 = x2
ẋ2 = −r x2 − (g/l) sin x1

 Setting f(x, u) = 0 yields x2 = 0 and x2 = −(g/(r l)) sin x1, which together imply sin x1 = 0, i.e., x1 = 0, π
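
A small Python sketch that verifies these pendulum equilibria numerically (the g, l, r values are illustrative):

```python
import numpy as np

# Pendulum theta'' + r*theta' + (g/l)*sin(theta) = 0 in state-space form
# (g, l, r values are illustrative)
g, l, r = 9.81, 1.0, 0.1

def f(x):
    # x = [theta, theta_dot]
    return np.array([x[1], -r * x[1] - (g / l) * np.sin(x[0])])

# Verify the two candidate equilibria (0, 0) and (pi, 0):
for xe in (np.array([0.0, 0.0]), np.array([np.pi, 0.0])):
    print(xe, f(xe))   # f(xe) ~ 0 at both points
```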
Linearization: Taylor Series Expansion

 Assume that the system is operating about some nominal points (𝑥𝑒 , 𝑢𝑒 )
 Then write the actual state as 𝑥 𝑡 = 𝑥𝑒 + 𝛿𝑥(𝑡) and the actual inputs as 𝑢(𝑡) = 𝑢𝑒 + 𝛿𝑢(𝑡)
 The “𝛿” is included to denote the fact that we expect the variations about the nominal to be “small”

 The linearized equations can be derived by using the Taylor series expansion of f(. , . )
about 𝑥𝑒 and 𝑢𝑒
 Each equation ẋi = fi(x, u) of the vector equation ẋ = f(x, u) can be expressed as

ẋi = fi(xe, ue) + Σj (∂fi/∂xj)|0 δxj + Σj (∂fi/∂uj)|0 δuj + higher-order terms

where (·)|0 means that the function should be evaluated at the nominal values xe and ue
Linearization: Taylor Series Expansion

 The meaning of “small” deviations is now clear – the variations in δx and δu must be small enough
that we can ignore the higher-order terms in the Taylor expansion of f(x, u)
 Combining for all n state equations gives (note that we also set “≈” → “=”)

δẋ = A δx + B δu,   with A = (∂f/∂x)|0, B = (∂f/∂u)|0 (the Jacobian matrices)
Linearization: Taylor Series Expansion

 Similarly, if the nonlinear measurement equation is y = g(x, u) and y(t) = ye + δy, then

δy = C δx + D δu,   with C = (∂g/∂x)|0, D = (∂g/∂u)|0

 Typically drop the “δ” as it is rather cumbersome, and (abusing notation) we write the
state equations as:

ẋ = A x + B u
y = C x + D u

which is the same form as the previous linear models.

 If the system is operating around just one set point, then the partial derivatives in the
expressions for A–D are all constant → LTI linearized model.
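
In practice the Jacobians A–D can also be formed numerically. A minimal finite-difference sketch, assuming an illustrative pendulum-like f(x, u) and operating point:

```python
import numpy as np

def jacobian(fun, z0, eps=1e-6):
    # Finite-difference Jacobian of fun at z0 - a numerical stand-in for
    # the partial-derivative matrices in the Taylor expansion.
    f0 = fun(z0)
    J = np.zeros((len(f0), len(z0)))
    for j in range(len(z0)):
        dz = np.zeros_like(z0)
        dz[j] = eps
        J[:, j] = (fun(z0 + dz) - f0) / eps
    return J

# Illustrative dynamics: damped pendulum with a torque input u
def f(x, u):
    return np.array([x[1], -0.1 * x[1] - 9.81 * np.sin(x[0]) + u[0]])

xe, ue = np.array([0.0, 0.0]), np.array([0.0])
A = jacobian(lambda x: f(x, ue), xe)   # df/dx at (xe, ue)
B = jacobian(lambda u: f(xe, u), ue)   # df/du at (xe, ue)
print(A)   # ~[[0, 1], [-9.81, -0.1]]
print(B)   # ~[[0], [1]]
```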
Stability of LTI Systems

 Consider a solution 𝑥𝑠 (𝑡) to a differential equation for a given initial condition 𝑥𝑠 𝑡0


 Solution is stable if other solutions xb(t) that start near xs(t0) stay close to xs(t) ∀ t ⇒ stable in the
sense of Lyapunov (SSL)
 If other solutions are SSL, but the 𝑥𝑏 (𝑡) do not converge to 𝑥𝑠 (𝑡) ⇒ solution is neutrally stable
 If other solutions are SSL and 𝑥𝑏 (𝑡) → 𝑥𝑠 (𝑡) as 𝑡 → ∞ ⇒ solution is asymptotically stable
 A solution 𝑥𝑠 (𝑡) is unstable if it is not stable

 Note that a linear (autonomous) system ẋ = Ax has an equilibrium point at xe = 0

 This equilibrium point is stable if and only if all of the eigenvalues of A satisfy Re λi(A) ≤ 0 and every
eigenvalue with Re λi(A) = 0 has a Jordan block of order one*
 Thus the stability test for a linear system is the familiar one of determining if Re λi(A) ≤ 0

*this basically means that these eigenvalues are not repeated
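
The eigenvalue test is a one-liner in NumPy; a minimal sketch with a placeholder matrix:

```python
import numpy as np

# Stability test for xdot = Ax: inspect the real parts of the eigenvalues of A
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # placeholder matrix

lam = np.linalg.eigvals(A)
print(lam)                            # -1 and -2 (order may vary)
print(bool(np.all(lam.real < 0)))     # True -> asymptotically stable
```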


Stability of LTI Systems: Lyapunov’s indirect method

 We can also infer stability of the original nonlinear system from the analysis of the linearized
system model
 Lyapunov’s indirect method: Let xe = 0 be an equilibrium point for the nonlinear
autonomous system

ẋ = f(x)

where f is continuously differentiable in a neighborhood of xe. Assume

A = (∂f/∂x)|x=0

Then:
 The origin is an asymptotically stable equilibrium point for the nonlinear system if Re λi(A) < 0 ∀ i
 The origin is unstable if Re λi(A) > 0 for any i

 Note that this doesn’t say anything about the stability of the nonlinear system if the linear system is neutrally
stable.

 A very powerful result that is the basis of all linear control theory
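
Applying the indirect method to the damped pendulum from before, a sketch with illustrative parameter values:

```python
import numpy as np

# Lyapunov's indirect method on the damped pendulum:
# f(x) = [x2, -r*x2 - (g/l)*sin(x1)], Jacobian evaluated at each equilibrium
g, l, r = 9.81, 1.0, 0.1   # illustrative values

for theta_e in (0.0, np.pi):
    A = np.array([[0.0, 1.0],
                  [-(g / l) * np.cos(theta_e), -r]])
    print(theta_e, np.linalg.eigvals(A).real)

# theta_e = 0:  both real parts < 0 -> asymptotically stable (hanging)
# theta_e = pi: one real part > 0  -> unstable (inverted)
```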
Linearization: an example

 A leaf spring, which is used on car suspensions, can be modelled as a cubic spring.
In this case, the nonlinear model for the suspension system is

m ÿ + k1 y + k2 y³ = u

 Considering the nonlinear spring with m = 1,

ÿ = −k1 y − k2 y³ + u

gives us the nonlinear model (x1 = y and x2 = ẏ)

ẋ1 = x2
ẋ2 = −k1 x1 − k2 x1³ + u

 Find the equilibrium points and then make a state space model
Linearization: an example

 For the equilibrium points, we must solve

f(x, 0) = 0  ⇒  x2 = 0 and k1 x1 + k2 x1³ = 0

which gives

x2e = 0 and ye (k1 + k2 ye²) = 0

 The second condition corresponds to ye = 0 or ye = ±√(−k1/k2), which is only real if k1 and k2 have
opposite signs.

 For the state-space model,

A = (∂f/∂x)|0 = [0 1; −(k1 + 3 k2 ye²) 0]

and the linearized model is δẋ = A δx


Linearization: an example

 For the equilibrium point ye = 0, ẏe = 0,

δÿ + k1 δy = 0

which are the standard dynamics of a system with just a linear spring of stiffness k1
 Stable motion about 𝑦 = 0 if 𝑘1 > 0

 Assume that k1 = −1, k2 = 1/2; then we should get an equilibrium point at ẏ = 0, y = ±√2, and
since k1 + k2 ye² = 0, the linearized stiffness is k1 + 3 k2 ye² = 2 k2 ye² = 2, so

δÿ + 2 δy = 0

which are the dynamics of a stable oscillator about the equilibrium point
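
A quick numerical cross-check of this example (a Python sketch using the k values above, with m = 1):

```python
import numpy as np

# Cubic-spring example: y'' = -k1*y - k2*y**3 + u, with m = 1
k1, k2 = -1.0, 0.5

ye = np.sqrt(-k1 / k2)          # nonzero equilibria y_e = +/- sqrt(2)
print(ye)

# Linearized stiffness about each equilibrium: k1 + 3*k2*y_e**2
for y0 in (0.0, ye, -ye):
    A = np.array([[0.0, 1.0],
                  [-(k1 + 3.0 * k2 * y0**2), 0.0]])
    print(y0, np.linalg.eigvals(A))
# y_e = 0:        eigenvalues +/-1 (real)  -> unstable for k1 = -1
# y_e = +/-sqrt2: eigenvalues +/-j*sqrt(2) -> stable oscillator
```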
References

Jonathan P. How and Emilio Frazzoli. 16.30/31 Feedback Control Systems. Fall 2010.
Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu/.
License: Creative Commons BY-NC-SA.
Steven Hall. 16.06 Principles of Automatic Control. Fall 2012. Massachusetts Institute of
Technology: MIT OpenCourseWare, https://ocw.mit.edu/. License: Creative Commons
BY-NC-SA.
Katsuhiko Ogata. Modern Control Engineering. 5th edition, Prentice Hall, 2010.
Norman S. Nise. Control Systems Engineering. 6th edition, Wiley, 2011.
