Modern Control Theory 10EE55 Notes (EEE V Semester)
PART - A
UNIT - 3
Derivation of transfer function from state model, diagonalization, eigenvalues, eigenvectors,
generalized eigenvectors.
6 Hours
UNIT - 4
Solution of state equation, state transition matrix and its properties, computation using
Laplace transformation, power series method, Cayley-Hamilton method, concept of
controllability & observability, methods of determining the same
10 Hours
PART - B
UNIT - 5
POLE PLACEMENT TECHNIQUES: stability improvements by state feedback, necessary
& sufficient conditions for arbitrary pole placement, state regulator design, and design of
state observer, Controllers- P, PI, PID.
10 Hours
UNIT - 6
Non-linear systems: Introduction, behaviour of non-linear systems, common physical
non-linearities: saturation, friction, backlash, dead zone, relay, multi-variable non-linearity.
3 Hours
UNIT - 7
Phase plane method, singular points, stability of nonlinear system, limit cycles, construction
of phase trajectories.
7 Hours
UNIT - 8
Liapunov stability criteria, Liapunov functions, direct method of Liapunov & the linear
system, Hurwitz criterion & Liapunov's direct method, construction of Liapunov functions
for nonlinear systems by Krasovskii's method.
6 Hours
TEXT BOOKS:
1. Digital Control & State Variable Methods - M. Gopal, 2nd edition, Tata McGraw-Hill, 2003
2. Control Systems Engineering - I. J. Nagrath & M. Gopal, 3rd edition, New Age International (P) Ltd.
REFERENCE BOOKS:
1. State Space Analysis of Control Systems - Katsuhiko Ogata, Prentice Hall Inc.
2. Automatic Control Systems - Benjamin C. Kuo & Farid Golnaraghi, 8th edition, John Wiley & Sons, 2003
3. Modern Control Engineering - Katsuhiko Ogata, PHI, 2003
4. Control Engineering Theory and Practice - M. N. Bandyopadhyay, PHI, 2007
5. Modern Control Systems - Dorf & Bishop, Pearson Education, 1998
CONTENTS
PART -A
UNIT - 1 & UNIT - 2
STATE VARIABLE ANALYSIS AND DESIGN: Introduction, concept of state, state
variables and state model, state modeling of linear systems, linearization of state equations.
State space representation using physical variables, phase variables & canonical variables
10 Hours
State space analysis is a modern approach and is also easier for analysis using
digital computers. It gives the total internal state of the system, considering all initial
conditions.
The conventional approach used to study the behaviour of linear time invariant
control systems uses time domain or frequency domain methods. When time domain
specifications are given for single input, single output linear time invariant systems, the
root locus technique is employed in designing the system. If frequency domain
specifications are given, frequency response plots like Bode plots are used in designing the
system.
State variable analysis applies to all types of systems:

Linear systems
Non-linear systems
Time invariant systems
Time varying systems
Multiple input and multiple output systems

and the analysis can be carried out with initial conditions.
State:-
The state is the condition of a system at any time instant 't'.
State variable:-
A set of variables which describe the state of the system at any time instant
are called state variables.
OR
The state of a dynamic system is the smallest set of variables (called
state variables) such that the knowledge of these variables at t = t0, together with the
knowledge of the input for t ≥ t0, completely determines the behaviour of the system
for any time t ≥ t0.
State space:-
The set of all possible values which the state vector X(t) can have (or
assume) at time t forms the state space of the system.
State vector:-
It is an (n x 1) column matrix whose elements are the state variables of the
system (where n is the order of the system); it is denoted by X(t).
[Figure: block diagram of a control system with inputs u1(t), ..., um(t), outputs y1(t), ..., yp(t), and internal state variables x1(t), ..., xn(t)]

The different variables may be represented by vectors (column matrices) as shown below: the input vector U(t) = [u1(t) u2(t) ... um(t)]^T, the output vector Y(t) = [y1(t) y2(t) ... yp(t)]^T, and the state variable vector X(t) = [x1(t) x2(t) ... xn(t)]^T.

The outputs of such a system depend on the state of the system and the instantaneous inputs, so the functional output equation can be written as

Y(t) = f(X(t), U(t))
Modern control refers to state-space methods developed in the late 1950s and early 1960s. In
modern control, system models are directly written in the time domain, and analysis and
design are done in the time domain. It should be noted that before Laplace transforms and
transfer functions became popular in the 1920s, engineers were studying systems in
the time domain. Therefore the resurgence of time domain analysis was not unusual, but
it was triggered by the development of computers and advances in numerical analysis.
Because computers were available, it was no longer necessary to develop analysis and
design methods that were strictly manual. An engineer could use computers to
numerically solve or simulate large systems that were nonlinear and time varying. State
space methods removed the previously mentioned limitations of classical control. The
period of the 1960s was the heyday of modern control.
The idea of state is familiar from a knowledge of the physical world and the means
of solving the differential equations used to model the physical world . Consider a ball
flying through the air. Intuitively we feel that if we know the ball's position and
velocity, we also know its future behaviour. It is on this basis that an outfielder positions
himself to catch a ball. Exactly the same information is needed to solve a differential
equation model of the problem. Consider for example the second order differential equation

Ẍ + aẊ + bX = f(t)

The solution to this equation can be found as the sum of the forced response, due to
f(t), and the natural or unforced response, i.e. the solution of the homogeneous equation

Ẍ + aẊ + bX = 0
If X1(t), X2(t), ......, Xn(t) are the state variables of the system chosen, then the
initial conditions of the state variables plus the u(t)'s for t > 0 should be sufficient to
decide the future behaviour, i.e. the y(t)'s for t > 0. Note that the state variables need not be
physically measurable or observable quantities. Practically, however, it is convenient to
choose easily measurable quantities. The number of state variables is then equal to the
order of the differential equation, which is normally equal to the number of energy
storage elements in the system.
Ẋ = AX + BU

Ẋ is called the derivative of the state vector, whose size is (n x 1)
X is called the state vector, whose size is (n x 1)
Output equation
The state variables X1(t), ......, Xn(t) represent the dynamic state of a system. Some of
the state variables themselves may be used as system outputs; ordinarily, the output
variables are linear combinations of the state variables:

Y1 = C11 X1 + C12 X2 + .......... + C1n Xn
Y2 = C21 X1 + C22 X2 + .......... + C2n Xn

Sometimes the output is a function of both state variables and inputs. For this general
case

Y = CX + DU

where the C matrix is of size (p x n) and the D matrix is of size (p x m).
State Model
The state equation of a system determines its dynamic state and the output equation gives
its output at any time t > 0, provided the state at t = 0 and the control forces for
t ≥ 0 are known. These two equations together form the state model of the system.
The state model of a linear system is therefore given by

Ẋ = AX + BU    (1)
Y = CX + DU

If we let m = 1 and p = 1 in the state model of a multiple input multiple output linear time
invariant system, we obtain the following state model for a SISO linear system:

Ẋ = AX + bu    (2)
Y = CX + du
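The state model above can be exercised numerically. The following sketch integrates Ẋ = AX + bu by forward Euler and reads the output y = cX + du; the second-order matrices are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Assumed example system: A, b, c, d for a SISO state model
#   X'(t) = A X(t) + b u(t),   y(t) = c X(t) + d u(t)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # system matrix (n x n)
b = np.array([0.0, 1.0])       # input vector (n x 1)
c = np.array([1.0, 0.0])       # output row vector (1 x n)
d = 0.0                        # direct feedthrough scalar

def simulate(x0, u, t_end, dt=1e-4):
    """Integrate the state equation with constant input u and
    return the output y(t_end)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(int(t_end / dt)):
        x = x + dt * (A @ x + b * u)   # Euler step of X' = AX + bu
    return c @ x + d * u

# Unit-step response: the steady state is x = -A^{-1} b u, so
# y settles at 0.5 for this example.
print(round(simulate([0.0, 0.0], 1.0, 10.0), 3))   # prints 0.5
```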
Let us now consider how the state model defined by equation (2) may be obtained for an
nth order SISO system whose describing differential equation relating output y with input
u is given by
d^n y/dt^n + a_{n-1} d^{n-1}y/dt^{n-1} + a_{n-2} d^{n-2}y/dt^{n-2} + ...... + a_1 dy/dt + a_0 y = b_0 u    (3)

y(0), dy/dt(0), ......, d^{n-1}y/dt^{n-1}(0) are the initial conditions.

The state variables X1, X2, ......, Xn can be chosen in many possible ways. A convenient way
is to define

X1 = y
X2 = ẏ
⋮
Xn = y^(n-1)
With the above definition of state variables, equation (3) is reduced to a set of 'n' first
order differential equations given below:

Ẋ1 = ẏ = X2
Ẋ2 = ÿ = X3
⋮
Ẋn-1 = y^(n-1) = Xn
Ẋn = y^(n) = -a0X1 - a1X2 - a2X3 - ...... - a_{n-1}Xn + b0u

The above equations result in the following state equations.
It is to be noted that the matrix A has a special form. It has all 1's in the upper
off-diagonal, its last row is comprised of the negatives of the coefficients of the original
differential equation, and all other elements are zero. This form of matrix A is known as
the 'BUSH FORM'. The set of state variables which yield the 'BUSH FORM' for the
matrix A are called 'phase variables'.
When A is in 'BUSH FORM', the vector b has the specialty that all its elements
except the last are zero. In fact A and b, and therefore the state equation, can be written
directly by inspection of the linear differential equation.
Where C = [ 1 0 ------0]
Note: There is one more state model called the canonical state model. We shall consider this
model after going through the transfer function.
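The "written by inspection" remark lends itself to a short helper. This is a sketch (the numerical coefficients are illustrative assumptions) that builds the Bush, i.e. phase-variable, form A, b and C for y^(n) + a_{n-1} y^(n-1) + ... + a_1 ẏ + a_0 y = b_0 u:

```python
import numpy as np

def bush_form(a, b0):
    """a = [a0, a1, ..., a_{n-1}]; returns (A, b, c) in Bush
    (phase-variable) form, written by inspection of the ODE."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)              # 1's on the upper off-diagonal
    A[-1, :] = -np.asarray(a, dtype=float)  # last row: negated coefficients
    b = np.zeros(n); b[-1] = b0             # all zeros except the last element
    c = np.zeros(n); c[0] = 1.0             # output y = X1
    return A, b, c

# Assumed example: y''' + 6y'' + 11y' + 6y = u
A, b, c = bush_form([6.0, 11.0, 6.0], 1.0)
print(A)   # last row is [-6, -11, -6]
```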
Having obtained the state model we next consider the problem of determining transfer
function from a given state model of SISO / MIMO systems.
1) SISO SYSTEM

For a SISO system with transfer function G(s),

G(s) = y(s)/u(s),  or  y(s) = G(s) u(s)    (1)

Taking Laplace transforms on both sides of the state model equations Ẋ = AX + Bu and
y = CX + du, and neglecting initial conditions,

sX(s) = AX(s) + Bu(s)
y(s) = CX(s) + du(s)

From the first equation,

(sI - A) X(s) = Bu(s),  so that  X(s) = (sI - A)^-1 Bu(s)

Substituting this in the output equation gives

y(s) = [C (sI - A)^-1 B + d] u(s),  i.e.  G(s) = C (sI - A)^-1 B + d
An important observation that needs to be made here is that while the state model is not
unique, the transfer function is unique; i.e., the transfer function obtained above must work
out to be the same irrespective of which particular state model is used to describe the
system.
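This uniqueness can be checked symbolically. The sketch below (the example matrices are assumptions for illustration) computes G(s) = C(sI - A)^-1 B + D for two different state models of the same second-order system and confirms they agree:

```python
import sympy as sp

s = sp.symbols('s')

def tf(A, B, C, D):
    """Transfer function G(s) = C (sI - A)^{-1} B + D of a SISO state model."""
    n = A.shape[0]
    return sp.simplify((C * (s * sp.eye(n) - A).inv() * B + D)[0, 0])

# Phase-variable model of y'' + 3y' + 2y = u ...
A1 = sp.Matrix([[0, 1], [-2, -3]]); B1 = sp.Matrix([0, 1])
C1 = sp.Matrix([[1, 0]]); D1 = sp.Matrix([[0]])
# ... and a diagonal (decoupled) model of the same system.
A2 = sp.Matrix([[-1, 0], [0, -2]]); B2 = sp.Matrix([1, 1])
C2 = sp.Matrix([[1, -1]]); D2 = sp.Matrix([[0]])

G1, G2 = tf(A1, B1, C1, D1), tf(A2, B2, C2, D2)
print(G1, G2)   # both equal 1/(s**2 + 3*s + 2)
```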
2) MIMO SYSTEM

[Figure: MIMO system with transfer matrix G(s) relating inputs u1(s), u2(s), ..., um(s) to outputs y1(s), y2(s), ..., yp(s)]
More often the system model is known in the transfer function form. It therefore becomes
necessary to have methods available for converting the transfer function model to a state
model. The process of going from the transfer function to the state equations is called
decomposition of the transfer function. In general there are three basic ways of
decomposing a transfer function: direct decomposition, parallel decomposition, and
cascade decomposition. Each has its own advantages and is best suited for a particular
situation.
Decomposition of TF

Taking inverse Laplace transforms of the decomposed transfer function gives an output equation of the form

y = X3 + 7 X2 + 2 X1

3. Cascade form
UNIT-2
STATE-SPACE REPRESENTATION
1 Introduction
The classical control theory and methods (such as root locus) that we have been using in
class to date are based on a simple input-output description of the plant, usually expressed as a
transfer function. These methods do not use any knowledge of the interior structure of the plant,
and limit us to single-input single-output (SISO) systems, and, as we have seen, allow only limited
control of the closed-loop behavior when feedback control is used. Modern control theory overcomes
many of these limitations by using a much "richer" description of the plant dynamics. The so-called
state-space description provides the dynamics as a set of coupled first-order differential equations
in a set of internal variables known as state variables, together with a set of algebraic equations
that combine the state variables into physical output variables.
For such systems the number of state variables, n, is equal to the number of
independent energy storage elements in the system. The values of the state variables at any
time t specify the energy of each energy storage element within the system and therefore
the total system energy, and the time derivatives of the state variables determine the rate
of change of the system energy. Furthermore, the values of the system state variables at
any time t provide sufficient information to determine the values of all other variables in the
system at that time.
There is no unique set of state variables that describe any given system; many
different sets of variables may be selected to yield a complete system description.
However, for a given system the order n is unique, and is independent of the particular set
of state variables chosen. State variable descriptions of systems may be formulated in
terms of physical and measurable variables, or in terms of variables that are not directly
measurable. It is possible to mathematically transform one set of state variables to
another; the important point is that any set of state variables must provide a complete
description of the system. In this note we concentrate on a particular set of state
variables that are based on energy storage variables in physical systems.
ẋ1 = f1(x, u, t)
ẋ2 = f2(x, u, t)
⋮                (1)
ẋn = fn(x, u, t)

where ẋi = dxi/dt and each of the functions fi(x, u, t), (i = 1, . . . , n) may be a general
nonlinear, time varying function of the state variables, the system inputs, and time.¹
It is common to express the state equations in a vector form, in which the set of n state
variables is written as a state vector x(t) = [x1 (t), x2 (t), . . . , xn (t)]T , and the set of r inputs
is written as an input vector u(t) = [u1 (t), u2 (t), . . . , ur (t)]T . Each state variable is a time
varying component of the column vector x(t).
This form of the state equations explicitly represents the basic elements contained
in the definition of a state determined system. Given a set of initial conditions (the values
of the xi at some time t0 ) and the inputs for t ≥ t0 , the state equations explicitly specify
the derivatives of all state variables. The value of each state variable at some time ∆t later
may then be found by direct integration. The system state at any instant may be
interpreted as a point in an n-dimensional state space, and the dynamic state response x(t)
can be interpreted as a path or trajectory traced out in the state space.
In vector notation the set of n equations in Eqs. (1) may be written:
ẋ = f (x, u, t) . (2)
ẋ = Ax + Bu (5)
In this note we use bold-faced type to denote vector quantities. Upper case letters are
used to denote general matrices while lower case letters denote column vectors. See
Appendix A for an introduction to matrix notation and operations.
where the state vector x is a column vector of length n, the input vector u is a column vector
of length r, A is an n × n square matrix of the constant coefficients aij, and B is an n × r
matrix of the constant coefficients bij that weight the inputs. The description of a
physical system in terms of a set of state variables does not necessarily include all of the
variables of direct engineering interest. An important property of the linear state equation
description is that all system variables may be represented by a linear combination of the
state variables xi and the system inputs ui. An arbitrary output variable may be written

y(t) = c1 x1 + c2 x2 + · · · + cn xn + d1 u1 + · · · + dr ur

where the ci and di are constants. If a total of m system variables are defined as outputs,
the m such equations may be written in matrix form. The output equations, Eqs. (8), are
commonly written in the compact form:
y = Cx + Du (9)
where y is a column vector of the output variables yi (t), C is an m × n matrix of the constant
coefficients cij that weight the state variables, and D is an m × r matrix of the constant
coefficients dij that weight the system inputs. For many physical systems the matrix D
is the null matrix, and the output equation reduces to a simple weighted combination of
the state variables:
y = Cx. (10)
ẋ = Ax + Bu (11)
y = Cx + Du. (12)
The matrices A and B are properties of the system and are determined by the system
structure and elements. The output equation matrices C and D are determined by the
particular choice of output variables.
The overall modeling procedure developed in this chapter is based on the following steps:
1. Determination of the system order n and selection of a set of state variables from
the linear graph system representation.
2. Generation of a set of state equations and the system A and B matrices using a
well defined methodology. This step is also based on the linear graph system
description.
Step 1: Draw n integrator (S −1 ) blocks, and assign a state variable to the output
of each block.
Figure 2: Vector block diagram for a linear system described by state-space system dynamics.
Step 2: At the input to each block (which represents the derivative of its state variable)
draw a summing element.
Step 3: Use the state equations to connect the state variables and inputs to the
summing elements through scaling operator blocks.
Step 4: Expand the output equations and sum the state variables and inputs through
a set of scaling operators to form the components of the output.
Example 1
Draw a block diagram for the general second-order, single-input single-output
system:
[ ẋ1 ]   [ a11  a12 ] [ x1 ]   [ b1 ]
[ ẋ2 ] = [ a21  a22 ] [ x2 ] + [ b2 ] u(t)

y(t) = [ c1  c2 ] [ x1 ; x2 ] + d u(t).    (i)
Solution: The block diagram shown in Fig. 3 was drawn using the four steps
described above.
Example 2
Find the transfer function and a single first-order differential equation relating
the output y(t) to the input u(t) for a system described by the first-order linear
state and output equations:
dx/dt = ax(t) + bu(t)    (i)
y(t) = cx(t) + du(t)    (ii)

Taking the Laplace transform of the state equation gives sX(s) = aX(s) + bU(s), which may
be rewritten with the state variable X(s) on the left-hand side:

(s − a) X(s) = bU(s),  so that  X(s) = [ b/(s − a) ] U(s)

and substituted into the Laplace transform of the output equation Y(s) = cX(s) + dU(s):

Y(s) = [ bc/(s − a) + d ] U(s)
     = [ (ds + (bc − ad)) / (s − a) ] U(s)    (vi)

The transfer function is:

H(s) = Y(s)/U(s) = (ds + (bc − ad)) / (s − a).    (vii)

The differential equation is found directly:

(s − a) Y(s) = (ds + (bc − ad)) U(s),    (viii)

and rewriting as a differential equation:

dy/dt − ay = d du/dt + (bc − ad) u(t).    (ix)
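The algebra of Example 2 can be confirmed with a computer algebra system; this short sketch reproduces Eq. (vii) symbolically:

```python
import sympy as sp

# From dx/dt = ax + bu, y = cx + du, eliminate X(s) and confirm
# H(s) = (d*s + (b*c - a*d)) / (s - a).
s, a, b, c, d = sp.symbols('s a b c d')
X = b / (s - a)                 # X(s)/U(s) from (s - a)X(s) = bU(s)
H = sp.simplify(c * X + d)      # Y(s)/U(s) = c X(s)/U(s) + d
target = (d * s + (b * c - a * d)) / (s - a)
print(sp.simplify(H - target))  # 0, confirming Eq. (vii)
```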
Classical representations of higher-order systems may be derived in an analogous set of
steps by using the Laplace transform and matrix algebra. A set of linear state and output
equations written in standard form
ẋ = Ax + Bu (13)
y = Cx + Du (14)
may be rewritten in the Laplace domain. The system equations are then
sX(s) = AX(s) + BU(s)
Y(s) = CX(s) + DU(s)    (15)

and the state equations may be rewritten:

sX(s) − AX(s) = [sI − A] X(s) = BU(s).    (16)

where the term sI creates an n × n matrix with s on the leading diagonal and zeros elsewhere.
(This step is necessary because matrix addition and subtraction is only defined for matrices
of the same dimension.) The matrix [sI − A] appears frequently throughout linear system
theory; it is a square n × n matrix with elements directly related to the A matrix:

           [ (s − a11)    −a12     · · ·    −a1n    ]
[sI − A] = [  −a21      (s − a22)  · · ·    −a2n    ]    (17)
           [    ⋮           ⋮       ⋱         ⋮     ]
           [  −an1        −an2     · · ·  (s − ann) ]
The state equations, written in the form of Eq. (16), are a set of n simultaneous
operational expressions. The common methods of solving linear algebraic equations, for
example Gaussian elimination, Cramer's rule, the matrix inverse, elimination and
substitution, may be directly applied to linear operational equations such as Eq. (16). For
low-order single-input single-output systems the transformation to a classical formulation
may be performed in the following steps:
2. Reorganize each state equation so that all terms in the state variables are on
the left-hand side.
3. Treat the state equations as a set of simultaneous algebraic equations and solve
for those state variables required to generate the output variable.
5. Write the output equation in operational form and identify the transfer function.
6. Use the transfer function to write a single differential equation between the
output variable and the system input.
Example 3
Use the Laplace transform method to derive a single differential equation for the
capacitor voltage vC in the series R-L-C electric circuit shown in Fig. 4
Solution: The linear graph method of state equation generation selects the
capacitor voltage vC (t) and the inductor current iL (t) as state variables, and
generates the following pair of state equations:
[ v̇C ]   [   0     1/C  ] [ vC ]   [  0  ]
[ i̇L ] = [ −1/L   −R/L  ] [ iL ] + [ 1/L ] Vin.    (i)

y(t) = [ 1  0 ] [ vC ; iL ] + [0] Vin    (ii)

Solving for VC(s) gives

VC(s) = [ (1/LC) / (s² + (R/L)s + 1/LC) ] Vs(s)    (vii)

and the corresponding differential equation is

d²vC/dt² + (R/L) dvC/dt + (1/LC) vC = (1/LC) Vs(t)    (ix)
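The result (vii) can be reproduced symbolically from the state matrices of Eq. (i); a sketch:

```python
import sympy as sp

# Series R-L-C state model with states vC, iL and output y = vC:
# check that VC(s)/Vs(s) reduces to (1/LC) / (s^2 + (R/L)s + 1/LC).
s, R, L, C = sp.symbols('s R L C', positive=True)
A = sp.Matrix([[0, 1/C], [-1/L, -R/L]])
B = sp.Matrix([0, 1/L])
Cout = sp.Matrix([[1, 0]])       # output row: y = vC
H = sp.simplify((Cout * (s * sp.eye(2) - A).inv() * B)[0, 0])
target = (1/(L*C)) / (s**2 + (R/L)*s + 1/(L*C))
print(sp.simplify(H - target))   # 0, confirming Eq. (vii)
```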
Cramer’s Rule, for the solution of a set of linear algebraic equations, is a useful method to
apply to the solution of these equations. In solving for the variable xi in a set of n linear
algebraic equations, such as Ax = b the rule states:
xi = det[A^(i)] / det[A]    (18)
where A(i) is another n × n matrix formed by replacing the ith column of A with the vector
b.
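Cramer's rule as stated in Eq. (18) can be sketched directly; the numbers below are illustrative assumptions:

```python
import sympy as sp

def cramer(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A^(i)) / det(A),
    where A^(i) has its i-th column replaced by b."""
    detA = A.det()
    x = []
    for i in range(A.shape[1]):
        Ai = A.copy()
        Ai[:, i] = b            # form A^(i) by column replacement
        x.append(Ai.det() / detA)
    return sp.Matrix(x)

A = sp.Matrix([[2, 1], [1, 3]])
b = sp.Matrix([5, 10])
print(cramer(A, b))             # Matrix([[1], [3]])
```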
If

[sI − A] X(s) = BU(s)    (19)

then the relationship between the ith state variable and the input is

Xi(s) = det[(sI − A)^(i)] / det[sI − A]    (20)

where (sI − A)^(i) is the matrix formed by replacing the ith column of [sI − A] with the
vector BU(s).
Example 4
Use Cramer’s Rule to solve for vL (t) in the electrical system of Example 3.
Solution: From Example 3 the state equations are:

[ v̇C ]   [   0     1/C  ] [ vC ]   [  0  ]
[ i̇L ] = [ −1/L   −R/L  ] [ iL ] + [ 1/L ] Vin(t)    (i)

In the Laplace domain,

[  s      −1/C   ] [ VC(s) ]   [  0  ]
[ 1/L   s + R/L  ] [ IL(s) ] = [ 1/L ] Vin(s).    (iii)

Applying Cramer's rule to solve for VC(s):

VC(s) = ( det[(sI − A)^(1)] / det[sI − A] ) Vin(s)
      = ( det[ 0, −1/C ; 1/L, (s + R/L) ] / det[ s, −1/C ; 1/L, (s + R/L) ] ) Vin(s)
      = ( (1/LC) / (s² + (R/L)s + (1/LC)) ) Vin(s).    (iv)

The current IL(s) is:

IL(s) = ( det[(sI − A)^(2)] / det[sI − A] ) Vin(s)
      = ( det[ s, 0 ; 1/L, 1/L ] / det[ s, −1/C ; 1/L, (s + R/L) ] ) Vin(s)
      = ( (s/L) / (s² + (R/L)s + (1/LC)) ) Vin(s).    (v)
The output equation may be written directly from the Laplace transform of Eq.
(ii): Y(s) = VC(s).
For a single-input single-output (SISO) system the transfer function may be found directly
by evaluating the inverse matrix

X(s) = (sI − A)^−1 BU(s).    (22)

Using the definition of the matrix inverse:

[sI − A]^−1 = adj[sI − A] / det[sI − A],    (23)

X(s) = ( adj[sI − A] B / det[sI − A] ) U(s).    (24)

and substituting into the output equations gives:

Y(s) = C [sI − A]^−1 BU(s) + DU(s)
     = ( C [sI − A]^−1 B + D ) U(s).    (25)

Expanding the inverse in terms of the determinant and the adjoint matrix yields:

Y(s) = ( ( C adj(sI − A) B + det[sI − A] D ) / det[sI − A] ) U(s)
     = H(s) U(s)    (26)

so that the required differential equation may be found by expanding:

det[sI − A] Y(s) = [ C adj(sI − A) B + det[sI − A] D ] U(s).    (27)

and taking the inverse Laplace transform of both sides.
Example 5
Use the matrix inverse method to find a differential equation relating vL (t) to
Vs (t) in the system described in Example 3.
Solution: The state vector, written in the Laplace domain,

X(s) = [sI − A]^−1 BU(s)    (i)

from the previous example is:

[ VC(s) ]   [  s      −1/C   ]^−1 [  0  ]
[ IL(s) ] = [ 1/L   s + R/L  ]    [ 1/L ] Vin(s).    (ii)

The determinant of [sI − A] is det[sI − A] = s² + (R/L)s + 1/LC. For the inductor
voltage vL = Vin − vC − R iL, the output matrices are C = [ −1  −R ] and D = [1]. Since

C adj(sI − A) B = [ −1  −R ] [ s + R/L   1/C ] [  0  ]
                             [  −1/L      s  ] [ 1/L ]
                = −(R/L)s − 1/(LC),    (vi)

the transfer function is

H(s) = ( −(R/L)s − 1/(LC) + (s² + (R/L)s + (1/LC)) · 1 ) / ( s² + (R/L)s + (1/LC) )
     = s² / ( s² + (R/L)s + (1/LC) ),    (vii)

which is the same result found by using Cramer's rule in Example 4.
We may multiply both sides of the equation by s^−n to ensure that all differential operators
have been eliminated:

( an + a_{n−1} s^−1 + · · · + a1 s^−(n−1) + a0 s^−n ) Y(s) =
( bn + b_{n−1} s^−1 + · · · + b1 s^−(n−1) + b0 s^−n ) U(s),    (29)

from which the output may be specified in terms of a transfer function. If we define a dummy
variable Z(s), and split Eq. (29) into two parts

and rearranged to generate a feedback structure that can be used as the basis for a block
diagram:

Z(s) = (1/an) U(s) − [ (a_{n−1}/an)(1/s) + · · · + (a1/an)(1/s^(n−1)) + (a0/an)(1/s^n) ] Z(s).    (33)
The dummy variable Z(s) is specified in terms of the system input u(t) and a weighted sum
of successive integrations of itself. Figure 5 shows the overall structure of this direct-form
block diagram. A string of n cascaded integrator (1/s) blocks, with Z(s) defined at the input
to the first block, is used to generate the feedback terms, Z(s)/s^i, i = 1, . . . n, in Eq. (33).
Equation (31) serves to combine the outputs from the integrators into the output y (t).
A set of state equations may be found from the block diagram by assigning the state
variables xi (t) to the outputs of the n integrators. Because of the direct cascade connection
of the integrators, the state equations take a very simple form. By inspection:
ẋ1 = x2
ẋ2 = x3
⋮
ẋ_{n−1} = xn
ẋn = −(a0/an) x1 − (a1/an) x2 − · · · − (a_{n−1}/an) xn + (1/an) u(t).    (34)
Example 6
Draw a direct form realization of a block diagram, and write the state equations
in phase variable form, for a system with the differential equation
d³y/dt³ + 7 d²y/dt² + 19 dy/dt + 13y = 13 du/dt + 26u    (i)
Solution: The system order is 3, and using the structure shown in Fig. 5 the
block diagram is as shown in Fig. 6.
The state and output equations are found directly from Eqs. (35) and (37):

[ ẋ1 ]   [  0    1    0 ] [ x1 ]   [ 0 ]
[ ẋ2 ] = [  0    0    1 ] [ x2 ] + [ 0 ] u(t),    (ii)
[ ẋ3 ]   [ −13  −19  −7 ] [ x3 ]   [ 1 ]

y(t) = [ 26  13  0 ] [ x1 ; x2 ; x3 ] + [0] u(t).    (iii)

Figure 6: Block diagram of the transfer operator of a third-order system found by a direct
realization.
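As a check, the realization above should reproduce the transfer function (13s + 26)/(s³ + 7s² + 19s + 13) implied by Eq. (i); a short symbolic sketch:

```python
import sympy as sp

# Direct-form realization of Example 6, from Eqs. (ii)-(iii).
s = sp.symbols('s')
A = sp.Matrix([[0, 1, 0], [0, 0, 1], [-13, -19, -7]])
B = sp.Matrix([0, 0, 1])
C = sp.Matrix([[26, 13, 0]])
H = sp.simplify((C * (s * sp.eye(3) - A).inv() * B)[0, 0])
print(H)   # equals (13*s + 26)/(s**3 + 7*s**2 + 19*s + 13)
```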
and expanding the inverse in terms of the determinant and the adjoint matrix

Y(s) = ( ( C adj(sI − A) B + det[sI − A] D ) / det[sI − A] ) U(s)
     = H(s) U(s),    (40)

where H(s) is defined to be the matrix transfer function relating the output vector Y(s) to
the input vector U(s):

H(s) = ( C adj(sI − A) B + det[sI − A] D ) / det[sI − A]    (41)
The state variables that produce a state model are not, in general, unique. However, there exist several
common methods of producing state models from transfer functions. Most control theory texts contain
developments of a standard form called the control canonical form, see, e.g., [1]. Another is the phase
variable canonical form.
Control Canonical Form
When the order of a transfer function's denominator is higher than the order of its numerator, the
transfer function is called strictly proper. Consider the general, strictly proper third-order transfer
function

Y(s)/U(s) = ( b2 s² + b1 s + b0 ) / ( s³ + a2 s² + a1 s + a0 ).    (1)

Dividing the numerator and denominator by s³,

Y(s)/U(s) = ( b2/s + b1/s² + b0/s³ ) / ( 1 + a2/s + a1/s² + a0/s³ )    (2)
which is a function containing numerous 1/s terms, or integrators. There are various signal-flow graph
configurations that will produce this function. One possibility is the control canonical form shown in
Figure 1. The state model for the signal flow configuration in Figure 1 is
[ ẋ1 ]   [  0     1     0  ] [ x1 ]   [ 0 ]
[ ẋ2 ] = [  0     0     1  ] [ x2 ] + [ 0 ] u    (3)
[ ẋ3 ]   [ −a0   −a1   −a2 ] [ x3 ]   [ 1 ]

y = [ b0  b1  b2 ] [ x1 ; x2 ; x3 ] + [0] u
Figure 1: Signal-flow graph of the control canonical form: U(s) feeds a chain of three 1/s integrators, with feed-forward gains b2, b1, b0 summed into Y(s) and feedback gains −a2, −a1, −a0.
Writing the loop equation for the signal-flow graph of Figure 1,

Y(s) = −a2 (1/s) Y(s) − a1 (1/s²) Y(s) − a0 (1/s³) Y(s) + b2 (1/s) U(s) + b1 (1/s²) U(s) + b0 (1/s³) U(s)    (4)

and then substituting the expression for Y(s) from the state model output equation in (3) yields

b0 X1(s) + b1 X2(s) + b2 X3(s) = b2 (1/s) U(s) + b1 (1/s²) U(s) + b0 (1/s³) U(s)
    − a2 (1/s) [ b0 X1(s) + b1 X2(s) + b2 X3(s) ]
    − a1 (1/s²) [ b0 X1(s) + b1 X2(s) + b2 X3(s) ]
    − a0 (1/s³) [ b0 X1(s) + b1 X2(s) + b2 X3(s) ].    (5)

Equation (5) can be rewritten as

( b0 X1(s) + b1 X2(s) + b2 X3(s) ) / U(s) = Y(s)/U(s) = ( b2/s + b1/s² + b0/s³ ) / ( 1 + a2/s + a1/s² + a0/s³ ),

which is identical to equation (2), proving that the control canonical form is valid.
As an example, consider

C(s)/R(s) = ( 2s² + 8s + 6 ) / ( s³ + 8s² + 26s + 6 )
          = ( b2/s + b1/s² + b0/s³ ) / ( 1 + a2/s + a1/s² + a0/s³ )
          = ( 2/s + 8/s² + 6/s³ ) / ( 1 + 8/s + 26/s² + 6/s³ ).

The coefficients corresponding with the control canonical form are:

b0 = 6, b1 = 8, b2 = 2, a0 = 6, a1 = 26, and a2 = 8.

[ ẋ1 ]   [  0    1    0 ] [ x1 ]   [ 0 ]
[ ẋ2 ] = [  0    0    1 ] [ x2 ] + [ 0 ] r
[ ẋ3 ]   [ −6  −26   −8 ] [ x3 ]   [ 1 ]

y = [ 6  8  2 ] [ x1 ; x2 ; x3 ] + [0] r.
Examination of Figure 1 shows one potential benefit of manipulating systems to conform to control
canonical form. If X1(s) happened to carry units of inches (position), then X2(s) might be inches/second
(velocity), and X3(s) might be inches/second/second (acceleration).
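The construction can be automated; the sketch below builds the control canonical form from the coefficients and verifies it against the worked example above:

```python
import sympy as sp

s = sp.symbols('s')

def control_canonical(b, a):
    """b = [b0, b1, b2], a = [a0, a1, a2] for the strictly proper
    transfer function (b2 s^2 + b1 s + b0)/(s^3 + a2 s^2 + a1 s + a0)."""
    A = sp.Matrix([[0, 1, 0], [0, 0, 1], [-a[0], -a[1], -a[2]]])
    B = sp.Matrix([0, 0, 1])
    C = sp.Matrix([[b[0], b[1], b[2]]])
    return A, B, C

# Coefficients from the example: C(s)/R(s) = (2s^2+8s+6)/(s^3+8s^2+26s+6)
A, B, C = control_canonical([6, 8, 2], [6, 26, 8])
H = sp.simplify((C * (s * sp.eye(3) - A).inv() * B)[0, 0])
print(H)   # equals (2*s**2 + 8*s + 6)/(s**3 + 8*s**2 + 26*s + 6)
```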
Phase Variable Canonical Form
Development of the phase variable canonical form as presented in [2] begins with the general fourth-
order, strictly proper transfer function
Y(s)/U(s) = ( b3 s³ + b2 s² + b1 s + b0 ) / ( s⁴ + a3 s³ + a2 s² + a1 s + a0 )
          = ( b3 s^−1 + b2 s^−2 + b1 s^−3 + b0 s^−4 ) / ( 1 + a3 s^−1 + a2 s^−2 + a1 s^−3 + a0 s^−4 )    (6)
which can be rearranged to read
After substituting Y(s) from (8) into (7), the expression becomes
The advances made in microprocessors, microcomputers, and digital signal processors have
accelerated the growth of digital control systems theory. The discrete-time systems are dynamic
systems in which the system-variables are defined only at discrete instants of time.
The terms sampled-data control systems, discrete-time control systems and digital control
systems have all been used interchangeably in control system literature. Strictly speaking, sampled-
data are pulse-amplitude modulated signals and are obtained by some means of sampling an analog
signal. Digital signals are generated by means of digital transducers or digital computers, often in
digitally coded form. The discrete-time systems, in a broad sense, describe all systems having some
form of digital or sampled signals.
Discrete-time systems differ from continuous-time systems in that the signals for a discrete-time
system are in sampled-data form. In contrast to continuous-time systems, the operation of
discrete-time systems is described by a set of difference equations. The analysis and design of
discrete-time systems may be effectively carried out by use of the z-transform, which evolved
from the Laplace transform as a special form.
UNIT - 4
Solution of state equation, state transition matrix and its properties, computation using Laplace transformation,
power series method, Cayley-Hamilton method, concept of controllability & observability, methods of determining
the same
10 Hours
After obtaining the various mathematical models, such as the physical, phase and canonical
forms of state models, the next step in the analysis is to obtain the solution of the state equation. From
the solution of the state equation, the transient response can then be obtained for a specific input. This
completes the analysis of the control system in state-space.
The solution of the state equation consists of two parts: the homogeneous solution and the forced
solution. The model with zero input is referred to as a homogeneous system and with non-zero input as
a forced system. The solution of the state equation with zero input is called the zero input response
(ZIR). The solution of the state model with zero state (zero initial conditions) is referred to as the zero
state response (ZSR). Hence the total response is the sum of the ZIR and the ZSR.
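The ZIR/ZSR decomposition can be illustrated numerically for a simple scalar example (the system and input below are assumptions for illustration):

```python
import numpy as np

# For x' = -2x + u with x(0) = 1 and a unit step input u:
#   ZIR = e^{-2t} x(0),   ZSR = (1 - e^{-2t})/2,   total = ZIR + ZSR.
a, x0, t = -2.0, 1.0, 1.5
zir = np.exp(a * t) * x0
zsr = (1 - np.exp(a * t)) / 2.0

# Total response by direct Euler integration of the full equation:
x, dt = x0, 1e-5
for _ in range(int(t / dt)):
    x += dt * (a * x + 1.0)
print(abs(x - (zir + zsr)) < 1e-4)   # True: total response = ZIR + ZSR
```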
Dept. of EEE, SJBIT
Modern Control Theory 10EE55

Consider the homogeneous state equation ẋ(t) = Ax(t). Assuming a solution of the form

x(t) = e^(At) k

the constant vector k is obtained from the initial condition x(t0):

k = (e^(At0))^−1 x(t0) = e^(−At0) x(t0)

so that

x(t) = e^(At) e^(−At0) x(t0) = e^(A(t−t0)) x(t0)

With t0 = 0,

x(t) = e^(At) x(0)    (8)

The initial state x(0) at t = 0 is driven to a state x(t) at time 't' by the matrix
exponential function e^(At). This matrix exponential function, which transfers the initial
state of the system in 't' seconds, is called the STATE TRANSITION MATRIX (STM) and is
denoted by φ(t).
φ(t) φ(−t) = I

Pre-multiplying both sides by φ^−1(t),

φ(−t) = φ^−1(t)
3). φ(t1) φ(t2) = φ(t1 + t2) :

Proof: φ(t1) = I + At1 + A²t1²/2! + A³t1³/3! + ....

and φ(t2) = I + At2 + A²t2²/2! + A³t2³/3! + ....

φ(t1) φ(t2) = I + A(t1 + t2) + A²(t1 + t2)²/2! + A³(t1 + t2)³/3! + .......
            = φ(t1 + t2)
4). φ(t2 − t1) φ(t1 − t0) = φ(t2 − t0) :

(This property implies that a state transition process can be divided into a number of
sequential transitions), i.e. the transition from t0 to t2,

x(t2) = φ(t2 − t0) x(t0)

is equal to the transition from t0 to t1 followed by the transition from t1 to t2, i.e.,

x(t1) = φ(t1 − t0) x(t0)
x(t2) = φ(t2 − t1) x(t1)

Proof: φ(t2 − t1) = I + A(t2 − t1) + A²(t2 − t1)²/2! + ....
and φ(t1 − t0) = I + A(t1 − t0) + A²(t1 − t0)²/2! + ....; multiplying the two series and
using property 3 gives φ(t2 − t1) φ(t1 − t0) = φ(t2 − t0).
Example (power series method): Find the STM for

a) A = [ 0  1 ; −1  −2 ]

φ(t) = I + At + A²t²/2! + A³t³/3! + ....

     = [ 1 − t²/2 + t³/3 − ...         t − t² + t³/2 − ...      ]
       [ −t + t² − t³/2 + ...          1 − 2t + 3t²/2 − 2t³/3 + ... ]

     = [ (1 + t) e^(−t)    t e^(−t)       ]
       [ −t e^(−t)         (1 − t) e^(−t) ]
b) For A = [ 1  1 ; 0  1 ]:

φ(t) = I + At + A²t²/2! + A³t³/3! + ......

     = [ 1  0 ; 0  1 ] + [ 1  1 ; 0  1 ] t + [ 1  1 ; 0  1 ]² t²/2! + .....

     = [ 1 + t + t²/2! + t³/3! + ...       t + t² + t³/2 + ...          ]
       [ 0                                 1 + t + t²/2! + t³/3! + ...  ]

     = [ e^t    t e^t ]
       [ 0      e^t   ]
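The two series results above can be checked numerically by summing the matrix exponential series:

```python
import numpy as np

# Sum phi(t) = I + At + A^2 t^2/2! + ... and compare with the
# closed forms obtained in parts a) and b).
def phi(A, t, terms=40):
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ A * t / k     # accumulates A^k t^k / k!
        out = out + term
    return out

t = 0.7
Aa = np.array([[0.0, 1.0], [-1.0, -2.0]])
closed_a = np.array([[(1 + t) * np.exp(-t), t * np.exp(-t)],
                     [-t * np.exp(-t), (1 - t) * np.exp(-t)]])
Ab = np.array([[1.0, 1.0], [0.0, 1.0]])
closed_b = np.array([[np.exp(t), t * np.exp(t)],
                     [0.0, np.exp(t)]])
print(np.allclose(phi(Aa, t), closed_a), np.allclose(phi(Ab, t), closed_b))
# prints: True True
```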
2). Inverse Laplace Transform Method:

Consider the state equation with zero input,

ẋ(t) = Ax(t);  x(t0) = x(0)

Taking Laplace transforms on both sides,

sX(s) − x(0) = AX(s)
[sI − A] X(s) = x(0)

Pre-multiplying both sides by [sI − A]^−1,

X(s) = [sI − A]^−1 x(0)

Taking the inverse Laplace transform on both sides,

x(t) = L^−1{ [sI − A]^−1 } x(0)

Comparing the above with

x(t) = e^(At) x(0)

it follows that

e^(At) = STM = φ(t) = L^−1{ [sI − A]^−1 }
Example:
Obtain the STM by the inverse Laplace transform method for the given system matrices
a). A = [1 1; 0 1]    b). A = [0 1; −1 −2]    c). A = [0 1; −2 −3]

a). [sI − A] = [ s−1   −1 ]
               [ 0     s−1 ]
φ(s) = [sI − A]^{-1} = (1/(s−1)²) [ s−1   1 ]
                                  [ 0     s−1 ]
φ(t) = L^{-1}{φ(s)} = [ e^t   te^t ]
                      [ 0     e^t ]
b). A = [0 1; −1 −2]    [sI − A] = [ s   −1 ]
                                   [ 1   s+2 ]
φ(s) = [sI − A]^{-1} = [ (s+2)/(s+1)²     1/(s+1)² ]
                       [ −1/(s+1)²        s/(s+1)² ]
φ(t) = L^{-1}{φ(s)} = [ (1+t)e^{−t}    te^{−t} ]
                      [ −te^{−t}       (1−t)e^{−t} ]
c). A = [0 1; −2 −3]    [sI − A] = [ s   −1 ]
                                   [ 2   s+3 ]
[sI − A]^{-1} = (1/((s+1)(s+2))) [ s+3   1 ]
                                 [ −2    s ]
Hence
φ(t) = L^{-1}{ (1/((s+1)(s+2))) [ s+3   1 ] }
                                [ −2    s ]
= [ 2e^{−t} − e^{−2t}        e^{−t} − e^{−2t} ]
  [ −2e^{−t} + 2e^{−2t}      −e^{−t} + 2e^{−2t} ]
= φ(t)
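The closed-form STMs above can be cross-checked numerically. The sketch below (a verification aid, not part of the original derivation; numpy and scipy assumed available) compares the result of case (c) with the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Case (c): A = [0 1; -2 -3]; STM found by the inverse Laplace method
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t = 0.5

e1, e2 = np.exp(-t), np.exp(-2 * t)
phi_closed = np.array([[2 * e1 - e2, e1 - e2],
                       [-2 * e1 + 2 * e2, -e1 + 2 * e2]])
phi_num = expm(A * t)   # e^{At} computed numerically
```

The two agree to machine precision, and φ(0) = I as required of any STM.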
where α0, α1, ....., α_{n−1} are constant coefficients which can be evaluated from the eigenvalues of matrix A as described below.
Step 1: For the given matrix, form its characteristic equation |λI − A| = 0 and find the eigenvalues λ1, λ2, ....., λn.
Step 2 (Case 1): If all eigenvalues are distinct, then form n simultaneous equations,
e^{λ1 t} = f(λ1) = α0 + α1 λ1 + α2 λ1² + ..... + α_{n−1} λ1^{n−1}
.....
e^{λn t} = f(λn) = α0 + α1 λn + α2 λn² + ..... + α_{n−1} λn^{n−1}
Solving these for α0, α1, ....., α_{n−1},
f(A) = α0 I + α1 A + .... + α_{n−1} A^{n−1}
= φ(t)
Examples:
1) Find f(A) = A^10 for A = [0 1; −1 −2]
The characteristic equation is |λI − A| = 0:
| λ   −1  |
| 1   λ+2 | = 0    →    (λ + 1)² = 0
Hence the eigenvalues are λ1 = λ2 = −1 (repeated).
Since A is 2×2, the corresponding polynomial function is
f(λ) = λ^10 = α0 + α1 λ    (1)
f(λ1) = λ1^10 = α0 + α1 λ1
(−1)^10 = α0 − α1
α0 − α1 = 1
Since this is a case of repeated eigenvalues, the second equation is obtained by differentiating the expansion for f(λ) on both sides (i.e. Eq. 1):
d/dλ [λ^10] |_{λ=−1} = α1    →    α1 = 10(−1)^9 = −10,  hence α0 = 1 + α1 = −9
f(A) = A^10 = α0 I + α1 A
= α0 [1 0; 0 1] + α1 [0 1; −1 −2]
= [ α0     α1       ]   =   [ −9    −10 ]
  [ −α1    α0 − 2α1 ]       [ 10    11  ]
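The Cayley-Hamilton result can be verified by brute-force matrix multiplication; a short check (numpy assumed):

```python
import numpy as np

A = np.array([[0, 1], [-1, -2]])
alpha1 = -10              # d/dλ[λ^10] at λ = -1 is 10(-1)^9
alpha0 = 1 + alpha1       # from α0 - α1 = 1
f_A = alpha0 * np.eye(2, dtype=int) + alpha1 * A
A10 = np.linalg.matrix_power(A, 10)   # direct computation of A^10
```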
2). Find the STM by the Cayley-Hamilton method, given:
a) A = [1 1; 0 1]    b) A = [0 1; −1 −2]    c) A = [0 1; −2 −3]

a) Consider A = [1 1; 0 1]
The characteristic equation is |λI − A| = 0:
| λ−1   −1  |
| 0     λ−1 | = 0    →    λ1 = 1 and λ2 = 1. Since A is 2×2 (a second-order system),
e^{λt} = α0 + α1 λ    (1)
At λ = λ1 this gives e^{λ1 t} = α0 + α1 λ1, i.e.
e^t = α0 + α1    (2)
Differentiating Eq. (1) with respect to λ and substituting λ = λ2 = 1,
te^{λt} = α1 at λ = λ2
te^t = α1    (3)
Hence α1 = te^t and α0 = e^t(1 − t), so
φ(t) = α0 I + α1 A
= [ e^t(1−t)   0        ] + [ te^t   te^t ]
  [ 0          e^t(1−t) ]   [ 0      te^t ]
= [ e^t   te^t ]
  [ 0     e^t  ]
b) A = [0 1; −1 −2]; the characteristic equation is |λI − A| = 0:
| λ   −1  |
| 1   λ+2 | = 0    →    λ1 = −1, λ2 = −1.
For this second-order system,
e^{λt} = α0 + α1 λ    (1)
e^{λ1 t} = α0 + α1 λ1
e^{−t} = α0 − α1;  (λ1 = −1)    (2)
Differentiating Eq. (1) with respect to λ and substituting λ = λ2 = −1,
α1 = te^{−t}
hence α0 = e^{−t}(1 + t) from Eq. (2).
The STM is given by φ(t) = e^{At} = α0 I + α1 A.
On simplification,
φ(t) = [ (1+t)e^{−t}    te^{−t} ]
       [ −te^{−t}       (1−t)e^{−t} ]
c). A = [0 1; −2 −3]; the characteristic equation is |λI − A| = 0:
| λ   −1  |
| 2   λ+3 | = 0    →    λ1 = −1, λ2 = −2.
The eigenvalues are distinct. The corresponding equations are
e^{λ1 t} = α0 + α1 λ1    (1)
e^{−t} = α0 − α1    (2)
and e^{λ2 t} = α0 + α1 λ2    (3)
e^{−2t} = α0 − 2α1
Solving Eqs. (2) and (3) for α0 and α1,
α0 = 2e^{−t} − e^{−2t} and α1 = e^{−t} − e^{−2t}.
φ(t) = α0 I + α1 A = [ α0      α1       ]
                     [ −2α1    α0 − 3α1 ]
= [ 2e^{−t} − e^{−2t}        e^{−t} − e^{−2t} ]      as before.
  [ −2e^{−t} + 2e^{−2t}      −e^{−t} + 2e^{−2t} ]
|λI − A| = 0:
| λ−2   −1    −4  |
| 0     λ−2   0   |
| 0     −3    λ−1 | = 0    →    (λ − 2)²(λ − 1) = 0
The eigenvalues are λ1 = 2 (repeated twice) and λ2 = 1.
The corresponding equations will be
e^{λ1 t} = α0 + α1 λ1 + α2 λ1²    (1)
e^{2t} = α0 + 2α1 + 4α2    (2)
Differentiating Eq. (1) with respect to λ1 and substituting λ1 = 2, on simplification
te^{2t} = α1 + 4α2    (3)
and e^{λ2 t} = α0 + α1 λ2 + α2 λ2²
e^t = α0 + α1 + α2    (4)
From Eqs. (2), (3) and (4), solving for α0, α1, α2,
α0 = 2te^{2t} − 3e^{2t} + 4e^t;
α1 = −3te^{2t} + 4e^{2t} − 4e^t;
α2 = te^{2t} − e^{2t} + e^t.
Hence the STM φ(t) = α0 I + α1 A + α2 A²
= [ e^{2t}    12e^t − 12e^{2t} + 13te^{2t}    −4e^t + 4e^{2t} ]
  [ 0         e^{2t}                          0               ]
  [ 0         −3e^t + 3e^{2t}                 e^t             ]
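A numerical spot check of this STM against the matrix exponential (the system matrix is assumed here as A = [2 1 4; 0 2 0; 0 3 1], reconstructed from the characteristic determinant and the STM entries above):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0, 4.0],
              [0.0, 2.0, 0.0],
              [0.0, 3.0, 1.0]])
t = 0.3
e1, e2, te2 = np.exp(t), np.exp(2 * t), t * np.exp(2 * t)
a0 = 2 * te2 - 3 * e2 + 4 * e1
a1 = -3 * te2 + 4 * e2 - 4 * e1
a2 = te2 - e2 + e1
phi = a0 * np.eye(3) + a1 * A + a2 * (A @ A)   # Cayley-Hamilton STM
```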
Pre-multiplying the state equation by e^{−At} and rearranging,
d/dt [e^{−At} x(t)] = e^{−At} Bu(t)
Integrating both sides from 0 to t,
e^{−At} x(t) − x(0) = ∫₀ᵗ e^{−Aτ} Bu(τ) dτ
x(t) = e^{At} x(0) + ∫₀ᵗ e^{A(t−τ)} Bu(τ) dτ
     = φ(t) x(0)  [ZIR]  +  ∫₀ᵗ φ(t − τ) Bu(τ) dτ  [ZSR]
x(t) = φ(t) x(0) + ∫₀ᵗ φ(t − τ) Bu(τ) dτ.
If the initial time is t0 instead of 0, then
x(t) = φ(t − t0) x(t0) + ∫_{t0}ᵗ φ(t − τ) Bu(τ) dτ    (4)
Eq. (4) is the solution of the state equation with input u(t); it represents the change of state from x(t0) to x(t) under the input u(t).
Example 1: Obtain the response of the system for a step input of 10 units, given the state model
ẋ(t) = [ −1   1  ] x(t) + [ 0  ] u(t)
       [ −1   −10 ]        [ 10 ]
y(t) = [1 0] x(t);  x(0) = 0
Solution: The state equation solution is x(t) = φ(t)x(0) + ∫₀ᵗ φ(t − τ)Bu(τ)dτ.
Since x(0) = 0, only the zero-state term remains. With a1 = 1.1125 and a2 = 9.88 (the magnitudes of the eigenvalues of A),
φ(t − τ) = [ 1.0128e^{−a1(t−τ)} − 0.0128e^{−a2(t−τ)}      0.114e^{−a1(t−τ)} − 0.114e^{−a2(t−τ)} ]
           [ −0.114e^{−a1(t−τ)} + 0.114e^{−a2(t−τ)}       −0.0128e^{−a1(t−τ)} + 1.012e^{−a2(t−τ)} ]
Hence
x(t) = ∫₀ᵗ φ(t − τ) [ 0; 10 ] (10) dτ
= [ 9.094 − 10.247e^{−a1 t} + 1.153e^{−a2 t} ]
  [ 9.092 + 1.151e^{−a1 t} − 10.243e^{−a2 t} ]
∴ y(t) = Cx(t) = [1 0] x(t)
= 9.094 − 10.247e^{−a1 t} + 1.153e^{−a2 t};
where a1 = 1.1125, a2 = 9.88
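As a sanity check, the steady-state output can be computed directly from x_ss = −A⁻¹B·u₀ (a numpy sketch; the small difference from 9.094 comes from the rounded a1, a2 above):

```python
import numpy as np

A = np.array([[-1.0, 1.0], [-1.0, -10.0]])
B = np.array([[0.0], [10.0]])
C = np.array([[1.0, 0.0]])
u0 = 10.0                                   # step input of 10 units
x_ss = -np.linalg.inv(A) @ B * u0           # steady-state state vector
y_ss = (C @ x_ss).item()                    # = 100/11 ≈ 9.09
```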
Example 2: Consider the system
ẋ(t) = [ 0    1  ] x(t) + [ 0 ] u(t)
       [ −2   −3 ]        [ 1 ]
Solution: The STM (found in the earlier examples) is
φ(t) = [ 2e^{−t} − e^{−2t}        e^{−t} − e^{−2t} ]
       [ −2e^{−t} + 2e^{−2t}      −e^{−t} + 2e^{−2t} ]
With x(0) = [1; −1],
ZIR = φ(t) x(0)
= [ 2e^{−t} − e^{−2t}        e^{−t} − e^{−2t}   ] [ 1  ]
  [ −2e^{−t} + 2e^{−2t}      −e^{−t} + 2e^{−2t} ] [ −1 ]
= [ 2e^{−t} − e^{−2t} − e^{−t} + e^{−2t}     ]
  [ −2e^{−t} + 2e^{−2t} + e^{−t} − 2e^{−2t}  ]
= [ e^{−t}  ]
  [ −e^{−t} ]
Adding the zero-state response ∫₀ᵗ φ(t − τ)Bu(τ)dτ for the input considered in the example, the complete response works out to
x(t) = [ (10/3)e^{−t} − e^{−2t} − (4/3)e^{−4t}   ]
       [ −(10/3)e^{−t} + 2e^{−2t} + (16/3)e^{−4t} ]
(the second row is ẋ1, as required, since ẋ1 = x2 for this model).
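The zero-input part can be confirmed numerically; x(0) = [1, −1]ᵀ is an eigenvector of A for λ = −1, which is why the ZIR decays as a pure e^{−t} mode:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
x0 = np.array([1.0, -1.0])
t = 0.7
zir = expm(A * t) @ x0                        # φ(t) x(0)
expected = np.array([np.exp(-t), -np.exp(-t)])
```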
The system has two state variables, X1(t) and X2(t). The control input u(t) affects the state variable X1(t), but it cannot affect the state variable X2(t). Hence the state variable X2(t) cannot be controlled by the input u(t), and the system is uncontrollable. In general, for an nth-order system with n state variables, if any one state variable is uncontrollable by the input u(t), the system is said to be UNCONTROLLABLE by the input u(t).
Definition:
The linear system given by
Ẋ(t) = AX(t) + Bu(t)
Y(t) = CX(t) + Du(t)
is said to be completely state controllable if there exists an unconstrained input vector u(t) which transfers the initial state x(t0) to any final state x(tf) in finite time (tf − t0), i.e., tf is finite. If all initial states are controllable, the system is completely controllable; otherwise the system is uncontrollable.
Methods to determine the Controllability:
1) Gilbert‟s Approach
2) Kalman‟s Approach.
Gilbert’s Approach:
This approach determines not only the controllability but also the uncontrollable state variables, if the system is uncontrollable.
The solution of the state equation is
X(t) = e^{At} X(0) + ∫₀ᵗ e^{A(t−τ)} Bu(τ) dτ.
The model is first transformed into the diagonal (canonical) form Ż = ΛZ + B̃u, with B̃ = P^{-1}B.
The system is state controllable iff none of the elements of B̃ is zero. If any element is zero, the corresponding canonical state variable cannot be controlled by the input u(t), and the system is uncontrollable. Since the test involves only the A and B matrices and is independent of the C matrix, the controllability of the system is frequently referred to as the controllability of the pair [A, B].
Example: Check the controllability of the system. If it is uncontrollable,
identify the state, which is uncontrollable.
Ẋ(t) = [ −1   −1 ] X(t) + [ 1 ] u(t)
       [ 0    −2 ]        [ 0 ]
Solution: First, let us convert the model into diagonal form.
|λI − A| = 0:
| λ+1   1   |
| 0     λ+2 | = 0
(λ + 1)(λ + 2) = 0
λ² + 3λ + 2 = 0
λ = −1, λ = −2
Eigenvector associated with λ = −2: (λI − A)v1 = 0 gives
[ −1   1 ] v1 = 0    →    v1 = [ 1 ]
[ 0    0 ]                     [ 1 ]
Eigenvector associated with λ = −1:
[ 0   1 ] v2 = 0    →    v2 = [ 1 ]
[ 0   1 ]                     [ 0 ]
P = [v1  v2] = [ 1   1 ],    P^{-1} = [ 0   1  ]
               [ 1   0 ]              [ 1   −1 ]
P^{-1}AP = [ 0   1  ] [ −1   −1 ] [ 1   1 ] = [ −2   0  ]
           [ 1   −1 ] [ 0    −2 ] [ 1   0 ]   [ 0    −1 ]
P^{-1}B = [ 0   1  ] [ 1 ] = [ 0 ]
          [ 1   −1 ] [ 0 ]   [ 1 ]
As the vector P^{-1}B contains a zero element (in the first row), the system is uncontrollable; the canonical state z1(t), associated with the eigenvalue λ = −2, is the one that cannot be controlled.
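The modal computation above can be reproduced with numpy (a sketch; P is the eigenvector matrix constructed above):

```python
import numpy as np

A = np.array([[-1.0, -1.0], [0.0, -2.0]])
B = np.array([[1.0], [0.0]])
P = np.array([[1.0, 1.0], [1.0, 0.0]])       # [v1 v2]
Pinv = np.linalg.inv(P)
A_diag = Pinv @ A @ P                        # diag(-2, -1)
B_tilde = Pinv @ B                           # zero in first row
```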
2) Kalman’s approach:
This method determines the controllability of the system.
Statement: The system described by
ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t)
is said to be completely state controllable iff the rank of the controllability matrix Qc is equal to the order of the system matrix A, where the controllability matrix Qc is given by
Qc = [B | AB | A²B | …… | A^{n−1}B].
i.e., if system matrix A is of order nxn then for state controllability
ρ (Qc ) = n
where ρ (Qc ) is rank of Qc.
Example: Using Kalman‟s Approach determine the state controllability of the system
described by
ÿ + 3ẏ + 2y = u̇ + u    (1)
with x1 = y, x2 = ẏ − u:
ẋ1 = x2 + u
ẋ2 = −2x1 − 3x2 − 2u
[ ẋ1 ] = [ 0    1  ] [ x1 ] + [ 1  ] u(t)
[ ẋ2 ]   [ −2   −3 ] [ x2 ]   [ −2 ]
A = [ 0   1  ],  B = [ 1  ]
    [ −2  −3 ]       [ −2 ]
Now AB = [ 0   1  ] [ 1  ] = [ −2 ]
         [ −2  −3 ] [ −2 ]   [ 4  ]
Qc = [B | AB] = [ 1    −2 ]
                [ −2   4  ]
|Qc| = | 1    −2 | = 0
       | −2   4  |
Hence the rank ρ(Qc) < 2, while the system matrix A is 2×2. Therefore the system is not state controllable.
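Kalman's rank test is easy to mechanise; a minimal numpy version of the check above:

```python
import numpy as np

A = np.array([[0, 1], [-2, -3]])
B = np.array([[1], [-2]])
Qc = np.hstack([B, A @ B])                   # controllability matrix [B | AB]
rank = np.linalg.matrix_rank(Qc)             # 1 < 2 → not state controllable
```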
Example: Determine the state controllability of the system by Kalman‟s approach.
ẋ(t) = [ 0   1    0  ] x(t) + [ 0 ] u(t)
       [ 0   0    1  ]        [ 0 ]
       [ 0   −2   −3 ]        [ 1 ]
Here AB = [0; 1; −3] and A²B = [1; −3; 7], so
Qc = [B | AB | A²B] = [ 0   0    1  ],   |Qc| = −1 ≠ 0
                      [ 0   1    −3 ]
                      [ 1   −3   7  ]
ρ(Qc) = 3 = order of the system. Therefore, the system is state controllable.
Verification by Gilbert’s approach:
Transforming the system model into canonical form with the usual procedure (the eigenvalues of A are 0, −1, −2),
Ż(t) = [ 0   0    0  ] Z(t) + [ 1/2 ] u(t)
       [ 0   −1   0  ]        [ −1  ]
       [ 0   0    −2 ]        [ 1/2 ]
Here B̃ = [1/2; −1; 1/2] has no zero element in any row; hence the system is controllable.
Observability:
Concept:
A system is completely observable if every state variable of the system affects some of the outputs. In other words, it is often desirable to obtain information on the state variables from measurements of the outputs and inputs. If any one of the states cannot be observed from the measurements of the outputs and inputs, that state is unobservable and the system is not completely observable, or simply unobservable.
Consider the state diagram of typical system with state variables as x1 and x2 and y and
u(t) as output and inputs respectively,
Q_o = [Cᵀ | AᵀCᵀ] = [ 1   −2 ]
                    [ 0   0  ]
|Q_o| = 0    →    ρ(Q_o) < 2
The order of the system is 2, so the system is unobservable.
Example: Show that the system
ẋ(t) = [ 0    1     0  ] x(t) + [ 0 ] u(t),   y(t) = [4  5  1] x(t)
       [ 0    0     1  ]        [ 0 ]
       [ −6   −11   −6 ]        [ 1 ]
is not completely observable.
Aᵀ = [ 0   0   −6  ],   Cᵀ = [ 4 ]
     [ 1   0   −11 ]         [ 5 ]
     [ 0   1   −6  ]         [ 1 ]
AᵀCᵀ = [ −6 ];   (Aᵀ)²Cᵀ = [ 6  ]
       [ −7 ]              [ 5  ]
       [ −1 ]              [ −1 ]
Q_o = [Cᵀ | AᵀCᵀ | (Aᵀ)²Cᵀ] = [ 4   −6   6  ]
                              [ 5   −7   5  ]
                              [ 1   −1   −1 ]
|Q_o| = 0, hence ρ(Q_o) < 3 = order of the system, and the system is unobservable.
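The same conclusion follows from a numpy computation of the observability matrix:

```python
import numpy as np

A = np.array([[0, 1, 0], [0, 0, 1], [-6, -11, -6]])
C = np.array([[4, 5, 1]])
# Qo = [C^T | A^T C^T | (A^T)^2 C^T]
Qo = np.hstack([C.T, A.T @ C.T, A.T @ A.T @ C.T])
rank = np.linalg.matrix_rank(Qo)             # 2 < 3 → unobservable
```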
[A − BK] = [ 0   1 ] − [ 0 ] [K1   K2]
           [ 0   0 ]   [ 1 ]
= [ 0     1   ]
  [ −K1   −K2 ]
|sI − (A − BK)| = s² + K2 s + K1
e) The corresponding characteristic equation is
S 2 + K 2S + K 1 = 0 (2)
Comparing the co-efficient of like powers of S from Equ (1) and (2)
K2=8, K1=32.
Hence controller K=[32 8].
Controller gain matrix K=[32 8] in feed back will have desired specifications.
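A quick eigenvalue check of the design (assuming, per the matrices above, A = [0 1; 0 0], B = [0; 1], and the desired characteristic equation s² + 8s + 32, i.e. poles at −4 ± j4):

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[32.0, 8.0]])
poles = np.linalg.eigvals(A - B @ K)         # closed-loop poles
```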
P = [ P1 ]
    [ P2 ] ;   Pi = [Pi1   Pi2   …   Pin]
    [ …  ]
    [ Pn ]
Taking the derivative of Eq. (3) and substituting for Ẋ(t) from Eq. (1),
X̄̇(t) = ĀX̄(t) + B̄u(t)    (4)
where Ā = PAP^{-1}, B̄ = PB    (5)
Ā = [ 0     1        0   ……   0  ]    (6)
    [ 0     0        1   ……   0  ]
    [ .     .        .        .  ]
    [ −αn   −α_{n−1}  ……      −α1 ]
B̄ = PB = [ 0 ]    (7)
          [ . ]
          [ 0 ]
          [ 1 ]
From Eq. (3),
x̄1 = P11 x1 + P12 x2 + ……… + P1n xn = P1 x    (8)
Taking the derivative,
x̄̇1 = P1 ẋ(t) = P1[Ax(t) + Bu(t)]    (9)
= P1 A x(t) + P1 B u(t)
Since x̄̇1 = x̄2 in the canonical form and must not depend on u(t), comparing (8) and (9) gives P1B = 0.
3) Compute Ā and B̄ as
Ā = PAP^{-1},  B̄ = PB
4) Assume the controller
K̄ = [K̄1  K̄2  ….  K̄n] and
determine Ā − B̄K̄ = [ 0           1    0   .   .   0          ]
                    [ 0           0    1   .   .   0          ]
                    [ .           .    .   .   .   .          ]
                    [ −αn − K̄1   .    .   .   .   −α1 − K̄n  ]
5) Determine the characteristic equation as
|sI − (Ā − B̄K̄)| = sⁿ + (α1 + K̄n)s^{n−1} + ……. + (αn + K̄1) = 0
Comparing with the desired characteristic equation sⁿ + a1 s^{n−1} + …… + an = 0,
K̄1 = an − αn
K̄2 = a_{n−1} − α_{n−1}
……
K̄n = a1 − α1, and hence
K̄ = [K̄1  K̄2  ….  K̄n]
8) The feedback controller K for the original system is
K = K̄P
This completes the design procedure.
Design of controller by Ackermann’s formula:
We have
P1 = [0  0  …  0  1] Qc^{-1}  and  P = [ P1        ]
                                       [ P1A       ]
                                       [ …         ]
                                       [ P1A^{n−1} ]
K = [an − αn   a_{n−1} − α_{n−1}   …   a1 − α1] P
= [0  0  …  0  1] Qc^{-1} [(a1 − α1)A^{n−1} + (a2 − α2)A^{n−2} + …… + (an − αn)I]    (1)
The characteristic polynomial of the A matrix is
|sI − A| = sⁿ + α1 s^{n−1} + α2 s^{n−2} + ……. + αn    (2)
According to the Cayley-Hamilton theorem,
Aⁿ + α1 A^{n−1} + ……. + αn I = 0    (3)
Using (3) in (1),
K = [0  0  …  0  1] Qc^{-1} φ(A)    (4)
where
φ(A) = Aⁿ + a1 A^{n−1} + a2 A^{n−2} + a3 A^{n−3} + ……. + an I    (5)
and Qc = [B | AB | A²B | ……. | A^{n−1}B]    (6)
Design procedure:
1. For given model, determine Qc and test the controllability. If controllable,
2. Compute the desired characteristics equation as
Sn + a1Sn−1 + ......... + a n = 0
and determine coefficients a1 , a 2 , a n
3. Compute the characteristic polynomial of the system matrix A as
φ(A) = Aⁿ + a1 A^{n−1} + a2 A^{n−2} + ……. + an I
4. Determine Qc^{-1}; the controller is then K = [0  0  ….  0  1] Qc^{-1} φ(A).
This completes the design procedure.
Ex. Design the controller K which places the closed-loop poles at −4 ± j4 for the system
Ẋ(t) = [ 0   1 ] X(t) + [ 0 ] u(t),  using Ackermann’s formula.
       [ 0   0 ]        [ 1 ]
Answer: I. Check for controllability:
Qc = [B | AB] = [ 0   1 ]
                [ 1   0 ]
It is controllable, as ρ(Qc) = 2, and
Qc^{-1} = [ 0   1 ]
          [ 1   0 ]
II. The desired characteristic equation is
(s + 4 + j4)(s + 4 − j4) = s² + 8s + 32 = 0,  so a1 = 8, a2 = 32.
III. φ(A) = A² + a1 A + a2 I = [ 32   8  ]
                               [ 0    32 ]
IV. K = [0  1] Qc^{-1} φ(A) = [1  0] [ 32   8  ] = [32   8]
                                     [ 0    32 ]
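Ackermann's formula takes only a few numpy lines (same assumed model as above); it reproduces K = [32 8]:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Qc = np.hstack([B, A @ B])                   # controllability matrix
phi_A = A @ A + 8 * A + 32 * np.eye(2)       # φ(A) = A² + a1·A + a2·I
K = np.array([[0.0, 1.0]]) @ np.linalg.inv(Qc) @ phi_A
```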
State regulator design: The state regulator system is a system whose non-zero initial state can be transferred to the zero state by the application of the input u(t). The design of such a system requires a control law
u(t) = −KX(t), which transfers the non-zero initial state to the zero state (rejection of disturbances) by properly designing the controller K.
The following steps give the design of such a controller. For a system given by Ẋ(t) = AX(t) + Bu(t):
1. Compute the controllability matrix
Qc = [B | AB | A²B | ……. | A^{n−1}B]
and the transformation P = [P1; P1A; …; P1A^{n−1}] with P1 = [0  0  …  0  1]Qc^{-1}.
3. Using the desired locations of the closed-loop poles (the desired eigenvalues), obtain the characteristic polynomial
(s − λ1)(s − λ2)…..(s − λn) = sⁿ + a1 s^{n−1} + a2 s^{n−2} + ….. + an
and determine a1, a2, ……. an.
4. The required state feedback controller K is
K = [an − αn   a_{n−1} − α_{n−1}   …..   a1 − α1] P
Example: A regulator system has the plant
Ẋ(t) = [ 0    1     0  ] X(t) + [ 0 ] u(t)
       [ 0    0     1  ]        [ 0 ]
       [ −6   −11   −6 ]        [ 1 ]
Y = [1  0  0] X(t)
Problems
1) Design a state feedback controller which places the closed-loop poles at −2 ± j3.464 and −5. Give the block diagram of the control configuration.
Solution: By observing the structure of the A and B matrices, the model is in controllable canonical form. Hence the controller K can be designed directly.
S1. |sI − A| = 0:
| s   −1    0   |
| 0   s     −1  | = s³ + 6s² + 11s + 6,   so α1 = 6, α2 = 11, α3 = 6.
| 6   11    s+6 |
S2. The desired characteristic polynomial is
(s + 2 + j3.464)(s + 2 − j3.464)(s + 5) = (s² + 4s + 16)(s + 5) = s³ + 9s² + 36s + 80,
so a1 = 9, a2 = 36, a3 = 80.
S3. K = [a3 − α3   a2 − α2   a1 − α1] = [74   25   3]
Block Diagram:
K1 = 74
K2 = 25
K3 = 3
Y(t) = Cx(t);
If A, B, C and the initial state X(0) are known, the model can simply be simulated to produce the state estimate X̂(t), with X̂(0) as the estimated initial state; the resulting observer is an open-loop observer.
The error in the output is
Y(t) − Ŷ(t) = C[X(t) − X̂(t)]    (2b)
The error in the state vector is
X̃(t) = X(t) − X̂(t)    (3)
Y(t) = [0  0  1] X(t)
1) The desired characteristic equation will be
(s + 2 + j3.464)(s + 2 − j3.464)(s + 5) = 0
=> s³ + 9s² + 36s + 80 = 0
hence, comparing with s³ + a1 s² + a2 s + a3 = 0,
a1 = 9, a2 = 36, a3 = 80    (I)
2) Let the observer gain be m = [m1, m2, ……. mn]ᵀ. With the plant in observable canonical form, A = [0  0  −6; 1  0  −11; 0  1  −6] and C = [0  0  1],
[A − mC] = [ 0   0   −(6 + m1)  ]
           [ 1   0   −(11 + m2) ]
           [ 0   1   −(6 + m3)  ]
|sI − (A − mC)| = 0 gives
s³ + (6 + m3)s² + (11 + m2)s + (6 + m1) = 0
Comparing with s³ + a1 s² + a2 s + a3 = 0,
a1 = 6 + m3,  a2 = 11 + m2,  a3 = 6 + m1    (II)
3) From I and II,
9 = 6 + m3    →    m3 = 3
36 = 11 + m2  →    m2 = 25
80 = 6 + m1   →    m1 = 74
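The structure used in step 2 can be checked numerically for any gain vector: for this observable-canonical pair, |sI − (A − mC)| always works out to s³ + (6 + m3)s² + (11 + m2)s + (6 + m1). A sketch with sample gains:

```python
import numpy as np

A = np.array([[0.0, 0.0, -6.0],
              [1.0, 0.0, -11.0],
              [0.0, 1.0, -6.0]])
C = np.array([[0.0, 0.0, 1.0]])
m = np.array([[5.0], [7.0], [2.0]])          # sample m1, m2, m3
coeffs = np.poly(A - m @ C)                  # observer characteristic polynomial
```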
and Y(t) = [1  0] [ X1(t)  ]    (4)
                  [ Xe(t) ]
From equation (3),
Ẋe(t) = a_ee Xe(t) + a_e1 X1(t) + b_e u(t)    (5)
where a_e1 X1(t) + b_e u(t) consists of known (measured) terms.
The equation for the measured state gives, in turn,
Ỹ = Ẏ(t) − a11 X1(t) − b1 u(t) = a_1e Xe(t)    (6)
Comparing equations (5) & (6) with
Ẋ(t) = AX(t) + bu(t)
and Y(t) = CX(t),
we have the similarity
X(t) → Xe(t)
A → a_ee
bu → a_e1 X1(t) + b_e u(t)    (7)
C → a_1e
Y → Ẏ(t) − a11 X1(t) − b1 u(t)
Using equation (7), the state model for the reduced-order state observer can be obtained as
X̂̇e(t) = a_ee X̂e(t) + a_e1 X1(t) + b_e u(t) + m(Ẏ(t) − a11 X1(t) − b1 u(t) − a_1e X̂e(t))    (8)
(the same structure as X̂̇(t) = AX̂(t) + bu(t) + m(y(t) − ŷ(t)), where ŷ(t) = a_1e X̂e(t)).
Defining X̃e(t) = Xe(t) − X̂e(t),
X̃̇e(t) = Ẋe(t) − X̂̇e(t) = (a_ee − m a_1e) X̃e(t)
Solution:
From the output equation it follows that the states X1(t) and X2(t) cannot be measured, as the corresponding elements of the C matrix are zero. Let us write the state equation as
[ Ẋ1 ]   [ 0   0   0   ] [ X1(t) ]   [ 50 ]
[ Ẋ2 ] = [ 1   0   −24 ] [ X2(t) ] + [ 2  ] u(t)
[ Ẋ3 ]   [ 0   1   −10 ] [ X3(t) ]   [ 1  ]
With the measured state X3 = Y, the partition is
a_1e = [0  1],   a_ee = [ 0   0 ],   a_e1 = [ 0   ]
                        [ 1   0 ]           [ −24 ]
a11 = [−10];   b_e = [ 50 ];   b1 = [1]
                     [ 2  ]
Let m = [ m1 ]
        [ m2 ]
The characteristic equation of the observer is
|sI − (a_ee − m a_1e)| = 0
s² + m2 s + m1 = 0
The desired characteristic equation is
(s + 10)(s + 10) = 0
s² + 20s + 100 = 0
From the two characteristic equations, equating the coefficients of like powers of s,
m2 = 20 and m1 = 100, so
m = [ 100 ]
    [ 20  ]
Hence the state model of the observer will be
X̂̇e(t) = {[ 0  0 ] − [ 100 ][0  1]} X̂e(t) + {[ 0   ] − [ 100 ](−10)} Y(t) + {[ 50 ] − [ 100 ]} u(t) + [ 100 ] Ẏ(t)
          [ 1  0 ]   [ 20  ]                 [ −24 ]   [ 20  ]               [ 2  ]   [ 20  ]          [ 20  ]
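The observer error dynamics (a_ee − m·a_1e) should carry the desired double pole at −10; a numpy check:

```python
import numpy as np

a_ee = np.array([[0.0, 0.0], [1.0, 0.0]])
a_1e = np.array([[0.0, 1.0]])
m = np.array([[100.0], [20.0]])
coeffs = np.poly(a_ee - m @ a_1e)            # error-dynamics characteristic polynomial
```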
CONTROLLERS
The controller is an element which accepts the error in some form and decides the proper corrective
action. The output of the controller is then applied to the system or process. The accuracy of the entire system depends on how sensitive the controller is to the detected error and how it manipulates that error. The controller has its own logic to handle the error. The controllers are classified based on
the response and mode of operation. On the basis of mode of operation, they are classified into
Continuous and Discontinuous controllers. The discontinuous mode controllers are further classified as
ON-OFF controllers and multi position controllers.
Continuous mode controllers, depending on the input-output relationship, are classified into three
basic types named as Proportional controller, Integral controller and Derivative controller. In many
practical cases, these controllers are used in combinations.
The examples of such composite controllers are Proportional – Integral (PI) controllers,
Proportional – Derivative (PD) controllers and Proportional - Integral – Derivative (PID) controllers.
The block diagram of a basic control system with controller is shown in Figure. The error detector compares
the feedback signal b(t) with the reference input r(t) to generate an error. e(t) = r(t) – b(t).
Proportional Controller:
In the proportional control mode, the output of the controller is proportional to the error e(t).
The relation between the error and the controller output is determined by a constant called proportional
gain constant denoted as KP. i.e. p(t) = KP e(t).
Though there exists a linear relation between the controller output and the error, for zero error the controller output should not be zero, as this would lead to zero input to the system or process. Hence, there exists some controller output P0 for zero error. Therefore, mathematically, the proportional control mode is expressed as p(t) = KP e(t) + P0.
The performance of the proportional controller depends on the proper design of the gain KP. As the proportional gain KP increases, the system gain increases and hence the steady-state error decreases. But due to the high gain, peak overshoot and settling time increase, and this may lead to instability of the system. So, a compromise is made to keep the steady-state error and overshoot within acceptable limits. Hence, when a proportional controller is used, the error reduces but cannot be made zero. The proportional controller is suitable where manual reset of the operating point is possible and the load changes are small.
Integral Controller:
We have seen that the proportional controller cannot adapt to changing load conditions. To overcome this situation, the integral mode, or reset action, controller is used. In this controller, the controller output P(t) is changed at a rate proportional to the actuating error signal e(t), i.e. dP(t)/dt = Ki e(t), so P(t) = Ki ∫₀ᵗ e(τ) dτ.
The constant Ki is called the integral constant. The output from the controller at any instant is the area under the actuating error curve up to that instant. If the error is zero, the controller output will not change. The integral controller is a relatively slow controller. It changes its output at a rate dependent on the integrating time constant, until the error signal is cancelled. Compared to the proportional controller, the integral controller requires time to build up an appreciable output. However, it continues to act till the error signal disappears. Hence, with the integral controller the steady-state error can be made zero. The reciprocal of the integral constant is known as the integral time constant Ti, i.e., Ti = 1/Ki.
Derivative Controller:
In this mode, the output of the controller depends on the rate of change of the error with respect to time. Hence it is also known as rate action mode or anticipatory action mode.
The mathematical equation for the derivative controller is p(t) = Kd de(t)/dt,
where Kd is the derivative gain constant. The derivative gain constant indicates by how much percentage the controller output must change for every percentage-per-second rate of change of the error. The advantage of the derivative control action is that it responds to the rate of change of error and can produce a significant correction before the magnitude of the actuating error becomes too large. Derivative control thus anticipates the actuating error, initiates an early corrective action, and tends to increase the stability of the system by improving the transient response. When the error is zero or constant, the derivative controller output is zero. Hence it is never used alone.
PI Controller:
This is a composite control mode obtained by combining the proportional mode and the integral mode. The mathematical expression for the PI controller is
p(t) = KP e(t) + Ki ∫₀ᵗ e(τ) dτ + P0
PID Controller:
This is a composite control mode obtained by combining the proportional mode, the integral mode and the derivative mode. The mathematical expression for the PID controller is
p(t) = KP e(t) + Ki ∫₀ᵗ e(τ) dτ + Kd de(t)/dt + P0
This is to note that derivative control is effective in the transient part of the
response as error is varying, whereas in the steady state, usually if any error is there, it is
constant and does not vary with time. In this aspect, derivative control is not effective in
the steady state part of the response. In the steady state part, if any error is there,
integral control will be effective to give proper correction to minimize the steady state
error. An integral controller is basically a low pass circuit and hence will not be effective in
transient part of the response where error is fast changing. Hence for the whole range of
time response both derivative and integral control actions should be provided in addition
to the inbuilt proportional control action for negative feedback control systems.
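The three control actions discussed above combine naturally in software. The sketch below is a minimal discrete-time PID (hypothetical gains and sample time, rectangular integration and a backward-difference derivative; an illustration, not a prescription from the notes):

```python
def make_pid(kp, ki, kd, dt):
    """Return a step(err) function implementing a discrete PID law."""
    state = {"integral": 0.0, "prev_err": 0.0}

    def step(err):
        state["integral"] += err * dt                 # integral action
        deriv = (err - state["prev_err"]) / dt        # derivative action
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv

    return step

pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
out = pid(1.0)   # first sample of a unit error: P + I + D contributions
```

On the first sample the derivative term dominates (anticipatory action), while the integral term only begins to accumulate, exactly the transient/steady-state division of labour described above.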
UNIT - 6
Non-linear systems: Introduction, behavior of non-linear system, common physical non linearity-
saturation, friction, backlash, dead zone, relay, multi variable non-linearity.
3 Hours
Many practical systems are sufficiently nonlinear so that the important features of
their performance may be completely overlooked if they are analyzed and designed
through linear techniques. The mathematical models of the nonlinear systems are
represented by nonlinear differential equations. Hence, there are no general methods
for the analysis and synthesis of nonlinear control systems. The fact that
superposition principle does not apply to nonlinear systems makes generalisation
difficult and study of many nonlinear systems has to be specific for typical situations.
Figure 6.1
A nonlinear system, when excited by a sinusoidal input, may generate several
harmonics in addition to the fundamental corresponding to the input frequency. The
amplitude of the fundamental is usually the largest, but the harmonics may be of
significant amplitude in many situations.
When a nonlinear system is excited by a sinusoidal input of constant frequency and the amplitude is increased from low values, the output frequency at some point exactly matches the input frequency and continues to remain matched thereafter. This phenomenon, which results in a synchronisation or matching of the output frequency with the input frequency, is called frequency entrainment or synchronisation.
6.3 Methods of Analysis
Nonlinear systems are difficult to analyse, and arriving at general conclusions is tedious.
However, starting with the classical techniques for the solution of standard nonlinear
differential equations, several techniques have been evolved which suit different types of
analysis. It should be emphasised that very often the conclusions arrived at will be useful for
the system under specified conditions and do not always lead to generalisations. The
commonly used methods are listed below.
Linearization Techniques: In reality all systems are nonlinear and linear systems are only
approximations of the nonlinear systems. In some cases, the linearization yields useful
information whereas in some other cases, linearised model has to be modified when the
operating point moves from one point to another. Many techniques like the perturbation method, series approximation techniques and quasi-linearization techniques are used to linearise a nonlinear system.
Phase Plane Analysis: This method is applicable to second order linear or nonlinear systems
for the study of the nature of phase trajectories near the equilibrium points. The system
behaviour is qualitatively analysed along with design of system parameters so as to get the
desired response from the system. The periodic oscillations in nonlinear systems
called limit cycle can be identified with this method which helps in investigating the stability
of the system.
Liapunov’s Method for Stability: The analytical solution of a nonlinear system is rarely possible.
If a numerical solution is attempted, the question of stability behaviour cannot be fully answered
as solutions to an infinite set of initial conditions are needed. The Russian mathematician A.M.
Liapunov introduced and formalised a method which allows one to conclude about the stability
without solving the system equations.
6.4 Classification of Nonlinearities
The nonlinearities are classified into i) Inherent nonlinearities and ii) Intentional
nonlinearities. The nonlinearities which are present in the components used in system due
to the inherent imperfections or properties of the system are known as inherent
nonlinearities. Examples are saturation in magnetic circuits, dead zone, back lash in
gears etc. However in some cases introduction of nonlinearity may improve the
performance of the system, make the system more economical consuming less space and
more reliable than the linear system designed to achieve the same objective. Such
nonlinearities introduced intentionally to improve the system performance are known as
intentional nonlinearities. Examples are different types of relays which are very
frequently used to perform various tasks. But it should be noted that the improvement in system performance due to nonlinearity is possible only under specific operating conditions. Under other conditions, nonlinearity generally degrades the performance of the system.
Saturation: This is the most common of all nonlinearities. All practical systems, when
driven by sufficiently large signals, exhibit the phenomenon of saturation due to
limitations of physical capabilities of their components. Saturation is a common
phenomenon in magnetic circuits and amplifiers.
Dead zone: Some systems do not respond to very small input signals. For a particular
range of input, the output is zero. This is called dead zone existing in a system. The
input-output curve is shown in figure.
Figure 6.3
Figure 6.4
Relay: A relay is a nonlinear power amplifier which can provide large power
amplification inexpensively and is therefore deliberately introduced in control systems. A relay
controlled system can be switched abruptly between several discrete states which are usually off,
full forward and full reverse. Relay controlled systems find wide applications in the
control field. The characteristic of an ideal relay is as shown in figure. In practice a relay has a
definite amount of dead zone, as shown. This dead zone is caused by the fact that the relay coil
requires a finite amount of current to actuate the relay. Further, since a larger coil current is
needed to close the relay than the current at which the relay drops out, the characteristic always
exhibits hysteresis.
Phase plane analysis is one of the earliest techniques developed for the study of second
order nonlinear system. It may be noted that in the state space formulation, the state variables
chosen are usually the output and its derivatives. The phase plane is thus a state plane in which the two state variables x1 and x2 are analysed; these may be the output variable y and its derivative ẏ. The method was first introduced by Poincaré, a French mathematician. The method is used for obtaining graphically a solution of the following two simultaneous equations of an autonomous system.
The plot of the state trajectories or phase trajectories of above said equation thus gives an
idea of the solution of the state as time t evolves without explicitly solving for the state. The
phase plane analysis is particularly suited to second order nonlinear systems with no input or
constant inputs. It can be extended to cover other inputs as well such as ramp inputs, pulse
inputs and impulse inputs.
There can be points in the phase plane at which the derivatives of both the state variables are zero. These points are called singular points. These are in fact equilibrium points
of the system. If the system is placed at such a point, it will continue to lie there if left
undisturbed. A family of phase trajectories starting from different initial states is called a
phase portrait. As time t increases, the phase portrait graphically shows how the system moves
in the entire state plane from the initial states in the different regions. Since the solutions from
each of the initial conditions are unique, the phase trajectories do not cross one another. If the
system has nonlinear elements which are piece-wise linear, the complete state space can be
divided into different regions and phase plane trajectories constructed for each of the regions
separately.
where ζ and ωn are the damping factor and undamped natural frequency of the system. Defining the state variables as x = x1 and ẋ = x2, we get the state equations in state variable form.
These equations may then be solved for the phase variables x1 and x2. The time response plots of x1, x2 for various values of damping with initial conditions can be plotted. When the differential equations describing the dynamics of the system are nonlinear, it is in general not possible to obtain a closed-form solution for x1, x2. For example, if the spring force is nonlinear, say (k1x + k2x³), solving these equations by integration is no longer an easy task. In such situations, a graphical method known as the phase-plane method is found to be very helpful. The coordinate plane with axes that correspond to the dependent variables x1 and x2 is called the phase plane. The curve described by the state point (x1, x2) in the phase plane with respect to time is called a phase trajectory. A phase trajectory can be easily constructed by graphical techniques.
Therefore, the locus of constant slope of the trajectory is given by f2(x1,x2) = Mf1(x1,x2)
The above equation gives the equation to the family of isoclines. For different values of
M, the slope of the trajectory, different isoclines can be drawn in the phase plane.
Knowing the value of M on a given isocline, it is easy to draw short line segments of slope M on each of these isoclines.
Consider a simple linear system with state equations
Dividing the above equations we get the slope of the state trajectory in the x1-x2 plane as
which is a straight line in the x1-x2 plane. We can draw different lines in the x1-x2 plane for
different values of M, called isoclines. If we draw a sufficiently large number of isoclines to cover the
complete state space as shown, we can see how the state trajectories are moving in the state
plane. Different trajectories can be drawn from different initial conditions. A large number of
such trajectories together form a phase portrait. A few typical trajectories are shown in figure
given below.
Figure 7.1
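The same phase portrait can also be traced numerically rather than graphically. The sketch below integrates an assumed damped second-order system (ẋ1 = x2, ẋ2 = −x1 − x2, chosen only for illustration) by simple Euler steps; the trajectory spirals in toward the singular point at the origin:

```python
def trajectory(x1, x2, dt=1e-3, steps=20000):
    """Euler-integrate x1' = x2, x2' = -x1 - x2 from the point (x1, x2)."""
    pts = [(x1, x2)]
    for _ in range(steps):
        # tuple assignment: both updates use the old (x1, x2)
        x1, x2 = x1 + dt * x2, x2 + dt * (-x1 - x2)
        pts.append((x1, x2))
    return pts

pts = trajectory(3.0, 0.0)    # phase trajectory from the initial state (3, 0)
x1_f, x2_f = pts[-1]          # after 20 s of simulated time, near the origin
```

Plotting x2 against x1 for the collected points reproduces one spiral of the phase portrait; repeating from several initial states gives the full portrait.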
The Procedure for construction of the phase trajectories can be summarised as
below:
1. For the given nonlinear differential equation, define the state variables as x1 and x2
and obtain the state equations as
Example 7.1: For the system having the transfer function and a relay with
dead zone as nonlinear element, draw the phase trajectory originating from the initial
condition (3,0).
Since the input is zero, e = r – c = -c and the differential equation in terms of error
will be
We can identify three regions in the state plane depending on the value of e = x1.
Region 1:
Here u = 1, so that the isoclines are given by
For different values of M, these are a number of straight lines parallel to the x1-axis.
M: 0, 1/3, 1/2, 2, 3, −4, −3, −2, −1
Region 3:
Here u = -1 so that on substitution we get or
These are also lines parallel to the x1-axis at a constant distance decided by the value of the
slope of the trajectory M.
Sample slopes: M = 0, 1/3, 1, 2, 3, -5, -4, -3, -2
Isoclines drawn for all three regions are as shown in figure. It is seen that trajectories from
either region 1 or region 3 approach the boundary between the regions and approach the origin
along a line of slope -1. The state can settle at any value of x1 between -1 and +1, since this
interval is the dead zone and no further movement towards the origin along the x1-axis is
possible. This results in a steady state error, the maximum value of which is equal to half the
dead zone. However, the presence of the dead zone can reduce the oscillations and result in a
faster response. To keep the steady state error in the output small, the dead zone should be made
as small as possible.
Example 7.2: For the system having a closed loop transfer function , plot the phase trajectory
originating from the initial condition (-1, 0).
When x2 = 0,
Figure 7.3
The isoclines are drawn as shown in figure. The starting point of the trajectory is marked at
(-1, 0). At (-1, 0), the slope is , i.e., the trajectory will start at an angle of 90°. From the
next isocline, the average slope is (8+4)/2 = 6, i.e., a line segment with a slope of 6 is drawn (at
an angle of 80.5°). The same procedure is repeated and the complete phase trajectory is obtained
as shown in figure.
7.3.2 Delta Method:
The delta method of constructing phase trajectories is applied to systems of the form
where f(x, ẋ, t) may be linear or nonlinear and may even be time varying, but must be
continuous and single valued.
With the help of this method, phase trajectory for any system with step or ramp or any time
varying input can be conveniently drawn. The method results in considerable time saving when
a single or a few phase trajectories are required rather than a complete phase portrait.
While applying the delta method, the above equation is first converted to the form
ẍ + ω²(x + δ) = 0
In general, δ depends upon the variables x, ẋ and t, but over a short interval the changes in
these variables are negligible. Thus over a short interval, we have δ = constant.
Let us choose the state variables as x1 = x and x2 = ẋ/ω; then the slope of the trajectory is
dx2/dx1 = -(x1 + δ)/x2.
With δ known at any point P on the trajectory and assumed constant for a short interval,
we can draw a short segment of the trajectory by using the trajectory slope dx2/dx1 given
in the above equation. A simple geometrical construction given below can be used for
this purpose.
1. From the initial point, calculate the value of δ.
2. Draw a short arc segment through the initial point with (-δ, 0) as centre,
thereby determining a new point on the trajectory.
3. Repeat the process at the new point and continue.
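The three-step construction can be sketched in code. Since the differential equation of the system is not fixed at this point, the sketch below assumes an illustrative stand-in equation ẍ + 0.5ẋ + x = 0, i.e. ẍ + ω²(x + δ) = 0 with ω = 1 and δ = 0.5ẋ; each iteration rotates the state point through a small clockwise angle about the arc centre (-δ, 0).

```python
import math

# Stand-in equation (assumed for illustration):
#   x'' + 0.5*x' + x = 0  ==  x'' + w^2*(x + delta) = 0,  w = 1, delta = 0.5*x'

def delta(x1, x2):
    """delta evaluated at the current state (assumed stand-in system)."""
    return 0.5 * x2

def delta_method_trajectory(x1, x2, dtheta=0.02, steps=2000):
    """Chain short circular arcs, each centred at (-delta, 0) with delta
    held constant over the arc (steps 1-3 above)."""
    pts = [(x1, x2)]
    for _ in range(steps):
        cx = -delta(x1, x2)          # arc centre on the x1-axis
        rx, ry = x1 - cx, x2         # radius vector to the current point
        c, s = math.cos(dtheta), math.sin(dtheta)
        x1 = cx + rx * c + ry * s    # rotate clockwise by dtheta
        x2 = -rx * s + ry * c
        pts.append((x1, x2))
    return pts
```

Note that the worked example refines δ using the mean of the arc endpoints; for a small step angle, the endpoint value used here gives a very similar trajectory.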
Example 7.3: For the system described by the equation given below, construct the
trajectory starting at the initial point (1, 0) using delta method.
Let then
So that
At the initial point, δ is calculated as δ = 0 + 1 - 1 = 0. Therefore, the initial arc is centred at
the point (0, 0). The mean value of the coordinates of the two ends of the arc is used to
calculate the next value of δ, and the procedure is continued. By constructing small arcs in
this way, the complete trajectory is obtained as shown in figure.
Figure 7.4
Dept. of EEE, SJBIT Page 113
Modern Control Theory 10EE55
which describes physical situations in many nonlinear systems. In terms of the state variables
, we obtain
The figure shows the phase trajectories of the system for μ > 0 and μ < 0. In the case of μ > 0, we
observe that for large values of x1(0) the system response is damped and the amplitude of
x1(t) decreases until the system state enters the limit cycle, as shown by the outer trajectory. On
the other hand, if x1(0) is initially small, the damping is negative, and hence the amplitude of
x1(t) increases until the system state enters the limit cycle, as shown by the inner trajectory.
When μ < 0, the trajectories move in the opposite directions as shown in figure.
Figure 7.5
A limit cycle is called stable if trajectories near the limit cycle, originating from outside
or inside, converge to that limit cycle. In this case, the system exhibits a sustained
oscillation with constant amplitude. This is shown in figure (i). The inside of the limit cycle is
an unstable region in the sense that trajectories starting there diverge towards the limit cycle, and
the outside is a stable region in the sense that trajectories converge towards the limit cycle.
A limit cycle is called an unstable one if trajectories near it diverge from this limit cycle. In this
case, an unstable region surrounds a stable region. If a trajectory starts within the stable region, it
converges to a singular point within the limit cycle. If a trajectory starts in the unstable region, it
diverges with time to infinity as shown in figure (ii). The inside of an unstable limit cycle is the
stable region, and the outside the unstable region.
7.5 Analysis and Classification of Singular Points:
Singular points are points in the state plane where ẋ1 = ẋ2 = 0. At these points the slope of the
trajectory dx2/dx1 is indeterminate. These points can also be the equilibrium points of the
nonlinear system, depending on whether the state trajectories can reach them or not. Consider a
linearised second order system represented by
Using linear transformation x = Mz, the equation can be transformed to canonical form
where λ1 and λ2 are the roots of the characteristic equation of the system.
The transformation given simply transforms the coordinate axes from x1-x2 plane to z1-z2 plane
having the same origin, but does not affect the nature of the roots of the characteristic
equation. The phase trajectories obtained by using this transformed state equation still carry the
same information except that the trajectories may be skewed or stretched along the coordinate
axes. In general, the new coordinate axes will not be rectangular.
The solution to the state equation being given by
Based on the nature of these eigenvalues and the trajectory in the z1-z2 plane, the singular
points are classified as follows.
Nodal Point:
Consider the case where the eigenvalues are real, distinct and negative, as shown in figure (a).
For this case the equation of the phase trajectory follows as
Consider a system with complex conjugate eigenvalues. The canonical form of the state
equation can be written as
The slope
We get,
This is the equation of a spiral in polar coordinates. A plot of this equation for negative
values of the real part is a family of equiangular spirals. The origin, which is a singular point in
this case, is called a stable focus. When the eigenvalues are complex conjugate with positive real
parts, the phase portrait consists of expanding spirals as shown in figure and the singular point is
an unstable focus. When transformed into the x1-x2 plane, the phase portrait in the above two
cases is essentially spiralling in nature, except that the spirals are now somewhat twisted in shape.
Consider now the case of complex conjugate eigenvalues with zero real parts, i.e., λ1, λ2 = ±jω.
In this case the trajectories are closed curves and the singular point is called a centre or vortex.
Figure 7.9
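The classification above can be automated from the eigenvalues of the system matrix. The sketch below is a straightforward implementation for a 2x2 matrix A of ẋ = Ax; the test cases use companion matrices of the form [[0, 1], [-b, -a]] for the characteristic equation λ² + aλ + b = 0.

```python
import numpy as np

def classify_singular_point(A):
    """Classify the origin of dx/dt = A x (A is 2x2) by its eigenvalues."""
    lam = np.linalg.eigvals(np.asarray(A, dtype=float))
    re, im = lam.real, lam.imag
    if np.allclose(im, 0.0):                  # real eigenvalues
        if np.all(re < 0):
            return "stable node"
        if np.all(re > 0):
            return "unstable node"
        return "saddle point"                  # one positive, one negative
    if np.allclose(re, 0.0):                  # purely imaginary pair
        return "centre"
    return "stable focus" if np.all(re < 0) else "unstable focus"
```

For example, the companion matrix of λ² + 3λ + 2 = 0 is [[0, 1], [-2, -3]], which this function classifies as a stable node.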
Example 7.4:
Determine the kind of singularity for the following differential equation.
i.e., λ² + 3λ + 2 = 0
λ1, λ2 = -2, -1. Since the roots are real and negative, the singular point is a stable
node.
At singular points,
i.e., λ(λ - 0.1) + 1 = 0 or λ² - 0.1λ + 1 = 0
The eigenvalues are complex with positive real part. The singular point is an
unstable focus.
Linearization around (-1, 0):
Therefore λ² - 0.1λ - 1 = 0
λ1, λ2 = 1.05 and -0.95. Since the roots are real, one negative and the other
positive, the singular point is a saddle point.
Example 7.6:
Determine the kind of singularity for the following differential equation.
At singular points,
So that the singular points
i.e., λ(λ + 0.5) + 2 = 0 or λ² + 0.5λ + 2 = 0
The eigenvalues are complex with negative real parts. The singular point is a
stable focus.
Linearization around (-2, 0):
Therefore λ² + 0.5λ - 2 = 0
λ1, λ2 = 1.19 and -1.69. Since the roots are real, one negative and the other positive,
the singular point is a saddle point.
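The root calculations in Examples 7.4 to 7.6 can be verified numerically with np.roots; the coefficient lists below are the characteristic polynomials reconstructed in those examples.

```python
import numpy as np

# Characteristic polynomials from Examples 7.4 - 7.6, written as
# coefficient lists [1, a, b] for lam^2 + a*lam + b = 0.
cases = [
    ([1, 3, 2],     "stable node"),      # Example 7.4
    ([1, -0.1, 1],  "unstable focus"),   # Example 7.5, first singular point
    ([1, -0.1, -1], "saddle point"),     # Example 7.5, around (-1, 0)
    ([1, 0.5, 2],   "stable focus"),     # Example 7.6, first singular point
    ([1, 0.5, -2],  "saddle point"),     # Example 7.6, around (-2, 0)
]
for coeffs, kind in cases:
    lam = np.roots(coeffs)               # roots of the quadratic
    print(kind, np.round(lam, 3))
```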
For linear time invariant (LTI) systems, the concept of stability is simple and can be
formalised as per the following two notions:
a) A system is stable with zero input and arbitrary initial conditions if the resulting trajectory
tends towards the equilibrium state.
b) A system is stable if with bounded input, the system output is bounded.
Equilibrium State: In the system ẋ = f(x, t), a state xe where f(xe, t) = 0 for all
t is called an equilibrium state of the system. If the system is linear time invariant, namely f(x, t) =
Ax, then there exists only one equilibrium state if A is non-singular, and there exist infinitely many
equilibrium states if A is singular. For nonlinear systems, there may be one or more equilibrium
states. Any isolated equilibrium state can be
shifted to the origin of the coordinates, so that f(0, t) = 0, by a translation of coordinates.
8.2 Stability Definitions:
The Russian mathematician A.M. Liapunov has clearly defined the different types of stability.
These are discussed below.
Stability: An equilibrium state xe of the system is said to be stable if for each real
number ε > 0 there is a real number δ(ε, t0) > 0 such that ||x0 - xe|| ≤ δ implies
||x(t; x0, t0) - xe|| ≤ ε for all t ≥ t0. The real number δ depends on ε and, in general, also depends
on t0. If δ does not depend on t0, the equilibrium state is said to be uniformly stable.
An equilibrium state xe of the system is said to be stable in the
sense of Liapunov if, corresponding to each S(ε), there is an S(δ) such that trajectories starting in
S(δ) do not leave S(ε) as t increases indefinitely.
Asymptotic Stability: An equilibrium state xe of the system is said to be
asymptotically stable, if it is stable in the sense of Liapunov and every solution starting within S(δ)
converges, without leaving S(ε), to xe as t increases indefinitely.
Asymptotic Stability in the Large: If asymptotic stability holds for all states from
which trajectories originate, the equilibrium state is said to be asymptotically stable in the large. An
equilibrium state xe of the system is said to be asymptotically stable in the
large if it is stable and if every solution converges to xe as t increases indefinitely. Obviously, a
necessary condition for asymptotic stability in the large is that there is only one equilibrium state in
the whole state space.
Instability: An equilibrium state xe is said to be unstable if for some real number ε > 0 and any
real number δ > 0, no matter how small, there is always a state x0 in S(δ) such that the trajectory
starting at this state leaves S(ε).
A scalar function V(x) is said to be negative semi-definite in a region if it is negative for all
states in the region except at the origin and at certain other states, where it is zero.
A scalar function V(x) is said to be indefinite if in the region it assumes both positive and
negative values, no matter how small the region is.
Examples:
A necessary and sufficient condition for the quadratic form xᵀAx, where A is an n×n real
symmetric matrix, to be negative definite is that the determinant of A be positive if n is even and
negative if n is odd, and that the successive principal minors of even order be positive and the
successive principal minors of odd order be negative.
A necessary and sufficient condition for the quadratic form xᵀAx, where A is an n×n real
symmetric matrix, to be positive semi-definite is that A be singular and that all the
principal minors of A be nonnegative.
A necessary and sufficient condition for the quadratic form xᵀAx, where A is an n×n real
symmetric matrix, to be negative semi-definite is that A be singular and that all the
principal minors of even order be nonnegative and those of odd order be nonpositive.
Example 8.1: Using Sylvester's criterion, determine the sign definiteness of the
following quadratic forms.
Successive principal
minors are
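Sylvester's criterion can be implemented directly by computing the leading principal minors of the matrix of the quadratic form. A minimal sketch:

```python
import numpy as np

def leading_minors(A):
    """Successive leading principal minors of a square matrix."""
    A = np.asarray(A, dtype=float)
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

def is_positive_definite(A):
    """Sylvester: all leading principal minors strictly positive."""
    return all(m > 0 for m in leading_minors(A))

def is_negative_definite(A):
    """Minors alternate in sign: odd order negative, even order positive."""
    return all((m < 0) if k % 2 else (m > 0)
               for k, m in enumerate(leading_minors(A), start=1))
```

For instance, A = [[2, 1], [1, 2]] has minors 2 and 3, so xᵀAx is positive definite, while A = [[-2, 1], [1, -2]] has minors -2 and 3 and is negative definite.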
Example 8.3:
2.
3. V̇(x, t) does not vanish identically for t ≥ t0 for any t0 and any x0 ≠ 0, where
x(t; x0, t0) denotes the trajectory or solution starting from x0 at t0.
Then the equilibrium state at the origin of the system is uniformly asymptotically stable in
the large.
If, however, there exists a positive definite scalar function V(x, t) such that V̇(x, t) is
identically zero, then the system can remain in a limit cycle. The equilibrium state at the
origin, in this case, is said to be stable in the sense of Liapunov.
Theorem 3: Suppose that a system is described by ẋ = f(x, t), where f(0, t) = 0 for all t ≥ t0.
If there exists a scalar function W(x, t) having continuous first partial derivatives and
satisfying the following conditions:
1. W(x, t) is positive definite in some region about the origin.
2. Ẇ(x, t) is positive definite in the same region,
then the equilibrium state at the origin is unstable.
Example 8.4: Determine the stability of the following system using Liapunov's method.
Let us choose
Then
Example 8.5: Determine the stability of the following system using Liapunov's method.
Let us choose
Then
This is a negative semi-definite function. If V̇ is to vanish identically for t ≥ t0, then
x2 must be zero for all t ≥ t0.
This means that V̇ vanishes identically only at the origin. Hence the equilibrium state
at the origin is asymptotically stable in the large.
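The computation in this example can be reproduced numerically. The system equations are not shown above, so the sketch assumes the classic pair ẋ1 = x2, ẋ2 = -x1 - x2 with V(x) = x1² + x2² (an assumption for illustration), which gives V̇ = -2x2² and matches the negative semi-definite conclusion:

```python
import numpy as np

# Assumed system (illustrative; the example's equations are not reproduced):
#   dx1/dt = x2,  dx2/dt = -x1 - x2,  with V(x) = x1^2 + x2^2
def f(x):
    return np.array([x[1], -x[0] - x[1]])

def V(x):
    return x[0] ** 2 + x[1] ** 2

def V_dot(x):
    grad = np.array([2 * x[0], 2 * x[1]])  # gradient of V
    return grad @ f(x)                     # V_dot = grad(V) . f(x)

# Algebraically, V_dot = 2*x1*x2 + 2*x2*(-x1 - x2) = -2*x2**2 <= 0,
# i.e. negative semi-definite: it vanishes on the whole line x2 = 0.
```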
Example 8.6: Determine the stability of the following system using Liapunov's method.
Let us choose
Then
This is an indefinite function.
Let us choose
Then
Therefore, for asymptotic stability we require that the above condition be satisfied. The
region of the state space where this condition is not satisfied is possibly a region of
instability. Let us concentrate on the region of the state space where this condition is
satisfied. The limiting condition for such a region defines the dividing lines: they lie in the first
and third quadrants and are rectangular hyperbolas, as shown in Figure 8.4. In the second and
fourth quadrants, the inequality is satisfied for all values of x1 and x2. Figure 8.4 shows the
region of stability and possible instability. Since the choice of Liapunov function is not
unique, it may be possible to choose another Liapunov function for the system under
consideration which yields a larger region of stability.
Conclusions:
1. Failure to find a V function to show stability, asymptotic stability or
instability of the equilibrium state under consideration gives no information on
stability.
2. Although a particular V function may prove that the equilibrium state
under consideration is stable or asymptotically stable in the region Ω, which includes
this equilibrium state, it does not necessarily mean that the motions are unstable
outside the region Ω.
3. For a stable or asymptotically stable equilibrium state, a V function with the
required properties always exists.
Note:
1. If V̇ does not vanish identically along any trajectory, then Q may
be chosen to be positive semi-definite.
2. In determining whether or not there exists a positive definite Hermitian or
real symmetric matrix P, it is convenient to choose Q = I, where I is the identity
matrix. Then the elements of P are determined from AᵀP + PA = -I, and the matrix P is
tested for positive definiteness.
-2P12 = -1
P11 – P12 – P22 = 0
2P12 – 2P22 = -1
1.5 > 0 and det(P) > 0. Therefore, P is positive definite. Hence, the equilibrium
state at the origin is asymptotically stable in the large.
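The three element equations above are exactly the components of AᵀP + PA = -Q for A = [[0, 1], [-1, -1]] and Q = I (this A is inferred from those equations, not stated explicitly in the text). A small solver that matches the elements directly:

```python
import numpy as np

# System matrix inferred from the element equations above:
A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])
Q = np.eye(2)

def solve_lyapunov_2x2(A, Q):
    """Solve A^T P + P A = -Q for a symmetric 2x2 P by matching the
    (1,1), (1,2) and (2,2) elements: 3 equations in (p11, p12, p22)."""
    a, b = A[0, 0], A[0, 1]
    c, d = A[1, 0], A[1, 1]
    M = np.array([[2 * a, 2 * c, 0.0],   # (1,1): 2a*p11 + 2c*p12
                  [b, a + d, c],         # (1,2): b*p11 + (a+d)*p12 + c*p22
                  [0.0, 2 * b, 2 * d]])  # (2,2): 2b*p12 + 2d*p22
    p11, p12, p22 = np.linalg.solve(M, -np.array([Q[0, 0], Q[0, 1], Q[1, 1]]))
    return np.array([[p11, p12], [p12, p22]])

P = solve_lyapunov_2x2(A, Q)   # gives P = [[1.5, 0.5], [0.5, 1.0]]
```

The leading minors of P are 1.5 and det(P) = 1.25, both positive, confirming the conclusion above.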
Example 8.9: Determine the stability of the equilibrium state of the following system.
-2P11 + 2P12 = -1
-2P11 – 5P12 + P22 = 0
-4P12 – 8P22 = -1
23/60 > 0 and det(P) > 0. Therefore, P is positive definite. Hence, the equilibrium
state at the origin is asymptotically stable in the large.
Example 8.11: Determine the stability range for the gain K of the system given below.
Let us solve,
For P to be positive definite, it is necessary and sufficient that 12 - 2k > 0 and k > 0, i.e.,
0 < k < 6. Thus for 0 < k < 6 the system is stable; that is, the origin is asymptotically
stable in the large.
We can obtain as
This is a negative definite matrix and hence the equilibrium state is asymptotically stable.
Therefore, the equilibrium state is asymptotically stable in the large.
Example 8.13: Using Krasovskii's theorem, examine the stability of the equilibrium state
x = 0 of the system given by
This is a negative definite matrix and hence the equilibrium state is asymptotically stable.
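Krasovskii's test can be checked by sampling F(x) = J(x) + Jᵀ(x) over the state space, where J is the Jacobian of f. The equations of this example are not reproduced above, so the sketch assumes an illustrative system ẋ1 = -x1, ẋ2 = x1 - x2 - x2³:

```python
import numpy as np

# Assumed illustrative system (the example's equations are not shown here):
#   dx1/dt = -x1,  dx2/dt = x1 - x2 - x2**3
def jacobian(x):
    return np.array([[-1.0, 0.0],
                     [1.0, -1.0 - 3.0 * x[1] ** 2]])

def krasovskii_matrix(x):
    """F(x) = J(x) + J(x)^T; if F(x) is negative definite for all x,
    the origin is asymptotically stable (Krasovskii, with P = I)."""
    J = jacobian(x)
    return J + J.T

def is_negative_definite(F):
    return F[0, 0] < 0 and np.linalg.det(F) > 0   # 2x2 Sylvester test

# Sample the state space: F(x) should be negative definite everywhere.
samples = [np.array([a, b]) for a in (-5.0, 0.0, 5.0) for b in (-5.0, 0.0, 5.0)]
print(all(is_negative_definite(krasovskii_matrix(x)) for x in samples))  # prints True
```

Here F(x) = [[-2, 1], [1, -2 - 6x2²]], whose determinant 3 + 12x2² is positive for all x2, so the test holds over the whole plane, not just at the sampled points.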