
Control System Analysis Using State Variable Methods

12.1 INTRODUCTION

In the preceding chapters, we showed that the root-locus and the frequency-response methods are quite
powerful for the analysis and design of feedback control systems. The analysis and design are carried out
using transfer functions, together with a variety of graphical tools such as root-locus plots, Nyquist plots,
Bode plots, Nichols chart, etc. These techniques of the so-called classical control theory have been greatly
enhanced by the availability and low cost of digital computers for system analysis and simulation. The
graphical tools can now be more easily used with computer graphics.
The classical design methods suffer from certain limitations due to the fact that the transfer function
model is applicable only to linear time-invariant systems, and there too it is generally restricted to SISO
systems, as the classical design approach becomes highly cumbersome for use in MIMO systems. Another
limitation of the transfer function technique is that it reveals only the system output for a given input and
provides no information about the internal behaviour of the system. There may be situations where the
output of a system is stable and yet some of the system elements may have a tendency to exceed their
specified ratings. In addition to this, it may sometimes be necessary and advantageous to provide a feedback
proportional to the internal variables of a system, rather than the output alone, for the purpose of stabilizing
and improving the performance of a system.
The limitations of classical methods based on transfer function models have led to the development of the
state space approach to analysis and design. It is a direct time-domain approach which provides a basis for
modern control theory. It is a powerful technique for the analysis and design of linear and nonlinear, time-
invariant or time-varying MIMO systems. The organization of the state space approach is such that it is
easily amenable to solution through digital computers.
It would be incorrect to conclude from the foregoing discussion that the state variable design methods can
completely replace the classical design methods. In fact, the classical control theory comprising a large body
of use-tested knowledge is going strong. State variable design methods prove their mettle in applications
which are intractable by classical methods.
The state variable formulation contributes to the application areas of classical control theory in a different
way. It is the most efficient form of system representation from the standpoint of computer simulation. To
compute the response of G(s) to an input R(s) requires expansion of {G(s)R(s)} into partial fractions, which,
in turn, requires computation of all the poles of {G(s)R(s)}, or all the roots of a polynomial. The roots of a
polynomial are very sensitive to their coefficients. Furthermore, to develop a computer program to carry out
partial fraction expansion is not simple. On the other hand, the response of state variable equations is easy to
program. Its computation does not require computation of roots or eigenvalues; therefore, it is less sensitive
to parameter variations. For these reasons, it is desirable to compute the response of G(s) through state
variable equations.
Many CAD packages handling both the classical and the modern tools of control system design use state
variable formulation. It is, therefore, helpful for the control engineer to be familiar with state variable
methods of system representation and analysis. This chapter introduces the main concepts of state variable
analysis. This material may be used as a brief terminating study or as an introduction to a more in-depth
discussion of state variable methods in Chapter 13.
We have been mostly concerned with SISO systems in the text so far. In this chapter also our emphasis
will be on SISO systems. However, many of the methods based on state variable concepts are applicable to
both SISO and MIMO systems with almost equal convenience; the only difference being the additional
computational effort for MIMO systems which is taken care of by CAD packages.

12.2 MATRICES

This section is intended to be a concise summary of facts about matrices¹ that the reader will need to know
in reading the present chapter. Having them all at hand will minimize the need to consult a book on matrix
theory. It also serves to define the notation and terminology which are, regrettably, not entirely standard.
No attempt has been made at proving every statement made in this section. The interested reader is urged
to consult a suitable book (for example [27, 28]) for details of proofs.
Basic definitions and algebraic operations associated with matrices are given below.
Matrix The matrix

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix} = [a_{ij}] \qquad (12.1)$$

is a rectangular array of nm elements. It has n rows and m columns; a_ij denotes the (i, j)th element, i.e., the element located in the ith row and jth column. A is said to be a rectangular matrix of order n × m.
When m = n, i.e., the number of columns is equal to that of rows, the matrix is said to be a square matrix of order n.
An n × 1 matrix, i.e., a matrix having only one column, is called a column matrix. A 1 × n matrix, i.e., a matrix having only one row, is called a row matrix.
Diagonal matrix A diagonal matrix is a square matrix whose elements off the principal diagonal are all zeros (a_ij = 0 for i ≠ j). The following matrix is a diagonal matrix.

$$\Lambda = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix} \qquad (12.2)$$

¹We will use upper case bold letters to represent matrices and lower case bold letters to represent vectors.
Unit (identity) matrix A unit matrix I is a diagonal matrix whose diagonal elements are all equal to unity (a_ii = 1, a_ij = 0 for i ≠ j).

$$\mathbf{I} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$$

Whenever necessary, an n × n unit matrix will be denoted by I_n.
Null (zero) matrix A null matrix 0 is a matrix whose elements are all equal to zero.

$$\mathbf{0} = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}$$

Whenever necessary, the dimensions of the null matrix will be indicated by two subscripts: 0_nm.
Matrix transpose If the rows and columns of an n × m matrix A are interchanged, the resulting m × n matrix, denoted as Aᵀ, is called the transpose of the matrix A. Namely, if A is given by Eqn. (12.1), then

$$A^{T} = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{n1} \\ a_{12} & a_{22} & \cdots & a_{n2} \\ \vdots & \vdots & & \vdots \\ a_{1m} & a_{2m} & \cdots & a_{nm} \end{bmatrix}$$

Some properties of the matrix transpose are
(i) (Aᵀ)ᵀ = A
(ii) (kA)ᵀ = kAᵀ, where k is a scalar
(iii) (A + B)ᵀ = Aᵀ + Bᵀ
(iv) (AB)ᵀ = BᵀAᵀ
Determinant of a matrix Determinants are defined for square matrices only. The determinant of the n × n matrix A, written as |A| or det A, is a scalar-valued function of A. It is found through the use of minors and cofactors.
The minor m_ij of the element a_ij is the determinant of a matrix of order (n – 1) × (n – 1) obtained from A by removing the row and the column containing a_ij.
The cofactor c_ij of the element a_ij is defined by the equation

$$c_{ij} = (-1)^{i+j} m_{ij}$$

Determinants can be evaluated by the method of Laplace expansion. If A is an n × n matrix, any arbitrary row k can be selected and |A| is then given by

$$|A| = \sum_{j=1}^{n} a_{kj} c_{kj}$$
Similarly, Laplace expansion can be carried out with respect to any arbitrary column l, to obtain

$$|A| = \sum_{i=1}^{n} a_{il} c_{il}$$

Laplace expansion reduces the evaluation of an n × n determinant down to the evaluation of a string of (n – 1) × (n – 1) determinants, namely the cofactors.
Some properties of determinants are
(i) det AB = (det A)(det B)
(ii) det Aᵀ = det A
(iii) det kA = kⁿ det A; A is an n × n matrix and k is a scalar
(iv) The determinant of any diagonal matrix is the product of its diagonal elements.
Singular matrix A square matrix is called singular if the associated determinant is zero.
Nonsingular matrix A square matrix is called nonsingular if the associated determinant is non-zero.
Adjoint matrix The adjoint matrix of a square matrix A is found by replacing each element a_ij of matrix A by its cofactor c_ij and then transposing.

$$\text{adj}\,A = A^{+} = \begin{bmatrix} c_{11} & c_{21} & \cdots & c_{n1} \\ c_{12} & c_{22} & \cdots & c_{n2} \\ \vdots & \vdots & & \vdots \\ c_{1n} & c_{2n} & \cdots & c_{nn} \end{bmatrix} = [c_{ji}]$$
Note that

A(adj A) = (adj A)A = |A| I    (12.3)

Matrix inverse The inverse of a square matrix is written as A⁻¹, and is defined by the relation

A⁻¹A = AA⁻¹ = I

From Eqn. (12.3) and the definition of the inverse matrix, we have

$$A^{-1} = \frac{\text{adj}\,A}{|A|} \qquad (12.4)$$
Some properties of matrix inverse are
(i) (A⁻¹)⁻¹ = A
(ii) (Aᵀ)⁻¹ = (A⁻¹)ᵀ
(iii) (AB)⁻¹ = B⁻¹A⁻¹
(iv) det A⁻¹ = 1/det A
(v) det P⁻¹AP = det A
(vi) The inverse of the diagonal matrix given by Eqn. (12.2) is

$$\Lambda^{-1} = \begin{bmatrix} 1/a_{11} & 0 & \cdots & 0 \\ 0 & 1/a_{22} & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1/a_{nn} \end{bmatrix}$$

Rank of a matrix The rank ρ(A) of a matrix A is the dimension of the largest square array (submatrix) in A with a non-zero determinant. Some properties of rank are
(i) ρ(Aᵀ) = ρ(A)
(ii) The rank of a rectangular matrix cannot exceed the lesser of the number of rows or the number of columns. A matrix whose rank is equal to the lesser of the number of rows and the number of columns is said to be of full rank.
    ρ(A) ≤ min(n, m); A is an n × m matrix
(iii) The rank of a product of two matrices cannot exceed the rank of either:
    ρ(AB) ≤ min[ρ(A), ρ(B)]
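The matrix facts above are easy to check numerically. The following NumPy sketch (an illustration added here, not part of the original text) computes the determinant, inverse, rank and adjoint of a small matrix and verifies the identity (12.3).

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

detA = np.linalg.det(A)          # determinant |A|
invA = np.linalg.inv(A)          # inverse A^(-1)
rank = np.linalg.matrix_rank(A)  # rank of A

# adjoint (adjugate): adj A = |A| * A^(-1), valid for a nonsingular A
adjA = detA * invA

# verify A (adj A) = (adj A) A = |A| I   -- Eqn. (12.3)
assert np.allclose(A @ adjA, detA * np.eye(2))
assert np.allclose(adjA @ A, detA * np.eye(2))

print(detA, rank)
print(invA)
```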
Partitioned matrix A matrix can be partitioned into submatrices. Broken lines are used to show the partitioning when the elements of the submatrices are explicitly shown. For example

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

The broken lines indicating the partitioning are sometimes omitted when the context makes it clear that partitioned matrices are being considered.
We will be frequently using the following forms of partitioning.
(i) Matrix A partitioned into its columns:

A = [a₁  a₂  ...  a_m]    (12.5a)

where

$$\mathbf{a}_i = \begin{bmatrix} a_{1i} \\ a_{2i} \\ \vdots \\ a_{ni} \end{bmatrix} = i\text{th column in } A$$
(ii) Matrix A partitioned into its rows:

$$A = \begin{bmatrix} \mathbf{a}_1 \\ \mathbf{a}_2 \\ \vdots \\ \mathbf{a}_n \end{bmatrix} \qquad (12.5b)$$

where

a_i = [a_i1  a_i2  ...  a_im] = ith row in A
(iii) A block diagonal matrix is a square matrix that can be partitioned so that the non-zero elements are contained only in square submatrices along the main diagonal,

$$A = \begin{bmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & A_m \end{bmatrix} = \text{diag}[A_1 \;\; A_2 \;\; \cdots \;\; A_m] \qquad (12.6a)$$
For this case
(a) |A| = |A₁| |A₂| ⋯ |A_m|
(b) A⁻¹ = diag[A₁⁻¹  A₂⁻¹  ⋯  A_m⁻¹], provided that A⁻¹ exists.
(iv) A square matrix which has all of its elements below (above) the main diagonal equal to zero is called an upper triangular (lower triangular) matrix. The determinant of any triangular matrix is the product of its diagonal elements.
The determinant of block triangular matrices is the product of the determinants of the square submatrices along the main diagonal. For example, for

$$A = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}; \qquad |A| = |A_{11}|\,|A_{22}| \qquad (12.6b)$$

Quadratic forms and definiteness of matrices An expression such as


n n
V(x1,x2,...,xn) = ÂÂ qij xi x j
i =1 j =1

involving terms of second degree in xi and xj is known as the quadratic form of n variables. When xi, xj and
qij are all real, the value of V is real, and the quadratic form can be expressed in the vector-matrix notation
as [155]
V(x) = x TQx (12.7a)
where
È x1 ˘
Í ˙
x2
x = Í ˙ ; (n ¥ 1) vector of variables
ÍM˙
Í ˙
Î xn ˚

È q11 q12 L q1n ˘


Íq ˙
21 q22 L q2 n ˙
Q= Í ; (n ¥ n)constant symmetric matrix (Q = QT ).
Í M M M ˙
Í ˙
Î qn1 qn 2 L qnn ˚
If for all x ≠ 0,
(i) V(x) = xᵀQx ≥ 0, then V(x) is called a positive semidefinite function and Q is called a positive semidefinite matrix;
(ii) V(x) = xᵀQx > 0, then V(x) is called a positive definite function and Q is called a positive definite matrix.
Sylvester's criterion states that the necessary and sufficient conditions for Q to be positive definite are that all the successive principal minors of Q be positive, i.e.,

$$q_{11} > 0; \quad \begin{vmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{vmatrix} > 0; \quad \begin{vmatrix} q_{11} & q_{12} & q_{13} \\ q_{21} & q_{22} & q_{23} \\ q_{31} & q_{32} & q_{33} \end{vmatrix} > 0; \; \ldots ; \; |Q| > 0 \qquad (12.7b)$$

The necessary and sufficient conditions for Q to be positive semidefinite are that Q is singular and all the
other principal minors of Q are non-negative.
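As a quick numerical illustration of Sylvester's criterion (a sketch assuming NumPy, not part of the original text), the helper below tests positive definiteness by checking the leading principal minors, and cross-checks the result against the eigenvalues of Q.

```python
import numpy as np

def is_positive_definite(Q: np.ndarray) -> bool:
    """Sylvester's criterion: all leading principal minors must be positive."""
    n = Q.shape[0]
    return all(np.linalg.det(Q[:k, :k]) > 0 for k in range(1, n + 1))

Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric test matrix

print(is_positive_definite(Q))            # True
print(np.all(np.linalg.eigvalsh(Q) > 0))  # cross-check via eigenvalues
```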

12.3 STATE VARIABLE REPRESENTATION

12.3.1 State Variable Modelling


We have already seen in Chapter 2 (refer Eqns (2.4)) that the application of physical laws to mechanical,
electrical, thermal, liquid-level, and other physical processes results in state variable models of the form
ẋ(t) = Ax(t) + bu(t); x(t₀) ≜ x⁰ : state equation    (12.8a)
y(t) = cx(t) + du(t) : output equation    (12.8b)
where
x(t) = n × 1 state vector of the nth-order dynamic system
u(t) = system input
y(t) = defined output
A = n × n matrix
b = n × 1 column matrix
c = 1 × n row matrix
d = scalar, representing direct coupling between input and output (direct coupling is rare in control systems, i.e., usually d = 0)
A review, through a couple of examples, of the basic concepts of state variable modelling presented earlier in Section 2.2 will be helpful.

Example 12.1 Two very common applications of dc motors are in speed and position control systems.
Figure 12.1 gives the basic block diagram of a speed control system. A separately excited dc motor
drives the load. A dc tachogenerator is attached to the motor shaft; the speed signal is fed back and the error
signal is used to control the armature voltage of the motor.

Fig. 12.1 Basic block diagram of a closed-loop speed control system


In the following, we derive the plant model for the speed control system. A separately excited dc
motor with armature voltage control is shown in Fig. 12.2.
The loop equation for the armature circuit is

$$u(t) = L_a \frac{d i_a(t)}{dt} + R_a i_a(t) + e_b(t) \qquad (12.9a)$$
where
La = inductance of armature winding (henrys)
Ra = resistance of armature winding (ohms)
ia = armature current (amperes)
eb = back emf (volts)
u = applied armature voltage (volts)
The torque balance equation is

$$T_M(t) = J \frac{d\omega(t)}{dt} + B\,\omega(t) \qquad (12.9b)$$

Fig. 12.2 Model of a separately excited dc motor
where
T_M = torque developed by the motor (newton-m)
J = equivalent moment of inertia of motor and load referred to motor shaft (kg-m²)
B = equivalent viscous friction coefficient of motor and load referred to motor shaft (newton-m/(rad/sec))
ω = angular velocity of motor shaft (rad/sec)
In servo applications, the dc motors are generally used in the linear range of the magnetization curve. Therefore, the air gap flux φ is proportional to the field current. For the armature-controlled motor, the field current i_f is held constant. Therefore, the torque T_M developed by the motor, which is proportional to the product of the armature current and the air gap flux, can be expressed as

T_M(t) = K_T i_a(t)    (12.9c)

where
K_T = motor torque constant (newton-m/amp)
The counter electromotive force e_b, which is proportional to φ and ω, can be expressed as

e_b(t) = K_b ω(t)    (12.9d)

where
K_b = back emf constant² (volts/(rad/sec))
Equations (12.9) can be reorganized as

$$\frac{di_a(t)}{dt} = -\frac{R_a}{L_a} i_a(t) - \frac{K_b}{L_a}\omega(t) + \frac{1}{L_a}u(t)$$
$$\frac{d\omega(t)}{dt} = \frac{K_T}{J} i_a(t) - \frac{B}{J}\omega(t) \qquad (12.10)$$

x₁(t) = ω(t) and x₂(t) = i_a(t) is the obvious choice for state variables. The output variable is y(t) = ω(t).

²In MKS units, K_b = K_T. Refer Section 3.5.

The plant model of the speed control system, organized into the vector-matrix notation, is given below:

$$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} -\dfrac{B}{J} & \dfrac{K_T}{J} \\[2mm] -\dfrac{K_b}{L_a} & -\dfrac{R_a}{L_a} \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 0 \\ \dfrac{1}{L_a} \end{bmatrix} u(t)$$

y(t) = x₁(t)
Let us assign numerical values to the system parameters. For the parameters³

R_a = 1 ohm, L_a = 0.1 H, J = 0.1 kg-m², B = 0.1 newton-m/(rad/sec), K_b = K_T = 0.1    (12.11)

the plant model becomes

ẋ(t) = Ax(t) + bu(t)
y(t) = cx(t)    (12.12)

where

$$A = \begin{bmatrix} -1 & 1 \\ -1 & -10 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ 10 \end{bmatrix}; \quad c = [1 \;\; 0]$$
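The plant matrices of Eqn. (12.12) follow directly from the parameter values (12.11); the short NumPy sketch below (illustrative only, not part of the original text) builds them exactly as in Example 12.1.

```python
import numpy as np

# Motor/load parameters from Eqn. (12.11)
Ra, La, J, B = 1.0, 0.1, 0.1, 0.1
Kb = KT = 0.1

# State vector x = [omega, ia]; plant model of Example 12.1
A = np.array([[-B / J,    KT / J ],
              [-Kb / La, -Ra / La]])
b = np.array([[0.0],
              [1.0 / La]])
c = np.array([[1.0, 0.0]])

print(A)   # [[-1, 1], [-1, -10]]  -- matches Eqn. (12.12)
print(b)   # [[0], [10]]
```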
Example 12.2 Figure 12.3 gives the basic block diagram of a position control system. The controlled variable is now the angular position θ(t) of the motor shaft:

$$\frac{d\theta(t)}{dt} = \omega(t) \qquad (12.13)$$

We make the following choice for state and output variables.
x₁(t) = θ(t), x₂(t) = ω(t), x₃(t) = i_a(t), y(t) = θ(t)
For this choice, we obtain the following plant model from Eqns (12.10) and (12.13).

$$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \\ \dot{x}_3(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & -\dfrac{B}{J} & \dfrac{K_T}{J} \\ 0 & -\dfrac{K_b}{L_a} & -\dfrac{R_a}{L_a} \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \dfrac{1}{L_a} \end{bmatrix} u(t)$$

y(t) = x₁(t)
For the system parameters given by (12.11), the plant model for the position control system becomes

ẋ(t) = Ax(t) + bu(t)
y(t) = cx(t)    (12.14)

where

$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & -1 & 1 \\ 0 & -1 & -10 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ 0 \\ 10 \end{bmatrix}; \quad c = [1 \;\; 0 \;\; 0]$$

Fig. 12.3 Basic block diagram of a closed-loop position control system

³These parameters have been chosen for computational convenience.
In Examples 12.1 and 12.2 discussed above, the selected state variables are the physical quantities of
the systems, which can be measured. It may sometimes be necessary and advantageous to provide a
feedback proportional to the state variables of a system, rather than the output alone, for the purpose of
stabilizing and improving the performance of a system (refer Chapter 13). The implementation of design
with state variable feedback becomes straightforward if the state variables are available for feedback.
The choice of physical variables of a system as state variables therefore helps in the implementation of
design. Another advantage of selecting physical variables for state variable formulation is that the
solution of state equation gives time variation of variables which have direct relevance to the physical
system.

12.3.2 Transformation of State Variables


It frequently happens that the state variables used in the original formulation of the dynamics of a system are not as convenient as another set of state variables. Instead of having to reformulate the system dynamics, it is possible to transform the set {A, b, c, d} of the original formulation (12.8) to a new set {Ā, b̄, c̄, d̄}. The change of variables is represented by a linear transformation

x = P x̄    (12.15a)

where x̄ is the state vector in the new formulation and x is the state vector in the original formulation. It is assumed that the transformation matrix P is a nonsingular n × n matrix, so that we can always write

x̄ = P⁻¹x    (12.15b)

We assume, moreover, that P is a constant matrix.
The original dynamics are expressed by

ẋ(t) = Ax(t) + bu(t); x(t₀) ≜ x⁰    (12.16a)

and the output by

y(t) = cx(t) + du(t)    (12.16b)

Substitution of x as given by Eqn. (12.15a) into these equations gives

$$P\dot{\bar{x}}(t) = AP\bar{x}(t) + bu(t)$$
$$y(t) = cP\bar{x}(t) + du(t)$$

or

$$\dot{\bar{x}}(t) = \bar{A}\bar{x}(t) + \bar{b}u(t); \quad \bar{x}(t_0) = P^{-1}x(t_0) \qquad (12.17a)$$
$$y(t) = \bar{c}\bar{x}(t) + \bar{d}u(t) \qquad (12.17b)$$

with

$$\bar{A} = P^{-1}AP, \quad \bar{b} = P^{-1}b, \quad \bar{c} = cP, \quad \bar{d} = d$$

In the next section, we will prove that both the linear systems (12.16) and (12.17) have identical output responses for the same input. The linear system (12.17) is said to be equivalent to the linear system (12.16), and P is called an equivalence or similarity transformation.
It is obvious that there exists an infinite number of equivalent systems, since the transformation matrix P can be arbitrarily chosen. Some transformations have been extensively used for the purposes of analysis and design. Five such special (canonical) transformations will be introduced in this chapter.

Example 12.3 Example 12.1 revisited

For the system of Fig. 12.2, we have taken angular velocity ω(t) and armature current i_a(t) as state variables:

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} \omega \\ i_a \end{bmatrix}$$

We now define new state variables as

$$\bar{x}_1 = \omega, \qquad \bar{x}_2 = -\omega + i_a$$

or

$$\bar{\mathbf{x}} = \begin{bmatrix} \bar{x}_1 \\ \bar{x}_2 \end{bmatrix} = \begin{bmatrix} x_1 \\ -x_1 + x_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -1 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

We can express velocity x₁(t) and armature current x₂(t) in terms of the variables x̄₁(t) and x̄₂(t):

x = P x̄    (12.18)

with

$$P = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$$
Using Eqns (12.17) and (12.12), we obtain the following state variable model for the system of Fig. 12.2 in terms of the transformed state vector x̄(t):

$$\dot{\bar{x}}(t) = \bar{A}\bar{x}(t) + \bar{b}u(t); \qquad y(t) = \bar{c}\bar{x}(t) \qquad (12.19)$$

where

$$\bar{A} = P^{-1}AP = \begin{bmatrix} 1 & 0 \\ -1 & 1 \end{bmatrix}\begin{bmatrix} -1 & 1 \\ -1 & -10 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -11 & -11 \end{bmatrix}$$

$$\bar{b} = P^{-1}b = \begin{bmatrix} 1 & 0 \\ -1 & 1 \end{bmatrix}\begin{bmatrix} 0 \\ 10 \end{bmatrix} = \begin{bmatrix} 0 \\ 10 \end{bmatrix}$$

$$\bar{c} = cP = [1 \;\; 0]\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} = [1 \;\; 0]$$

$$\bar{x}_1(t_0) = x_1(t_0); \qquad \bar{x}_2(t_0) = -x_1(t_0) + x_2(t_0) \qquad (12.20)$$
Equations (12.19) give an alternative state variable model of the system previously represented by Eqns (12.12). x̄(t) and x(t) both qualify to be state vectors of the given system (the two vectors individually characterize the system completely at time t), and the output y(t), as we shall see shortly, is uniquely determined from either of the models (12.12) and (12.19). The state variable model (12.19) is thus equivalent to the model (12.12), and the matrix P given by Eqn. (12.18) is an equivalence or similarity transformation.
The state variable model given by Eqn. (12.19) is in a canonical (special) form.
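Example 12.3 can also be checked numerically. The sketch below (illustrative, using NumPy; not part of the original text) forms Ā = P⁻¹AP, b̄ = P⁻¹b and c̄ = cP for the transformation matrix of Eqn. (12.18).

```python
import numpy as np

A = np.array([[-1.0, 1.0], [-1.0, -10.0]])
b = np.array([[0.0], [10.0]])
c = np.array([[1.0, 0.0]])

P = np.array([[1.0, 0.0], [1.0, 1.0]])   # x = P x_bar, Eqn. (12.18)
Pinv = np.linalg.inv(P)

A_bar = Pinv @ A @ P    # expected [[0, 1], [-11, -11]]
b_bar = Pinv @ b        # expected [[0], [10]]
c_bar = c @ P           # expected [[1, 0]]

print(A_bar, b_bar, c_bar, sep="\n")
```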

12.4 CONVERSION OF STATE VARIABLE MODELS TO TRANSFER FUNCTIONS

We shall derive the transfer function of a SISO system from the Laplace-transformed version of the state and
output equations. Refer Section 12.2 for the matrix operations used in the derivation.
Consider the state variable model (Eqns (12.8)):

ẋ(t) = Ax(t) + bu(t); x(t₀) ≜ x⁰
y(t) = cx(t) + du(t)    (12.21)
Taking the Laplace transform of Eqns (12.21), we obtain

sX(s) – x⁰ = AX(s) + bU(s)
Y(s) = cX(s) + dU(s)

where
X(s) ≜ L[x(t)]; U(s) ≜ L[u(t)]; Y(s) ≜ L[y(t)]
Manipulation of these equations gives

(sI – A)X(s) = x⁰ + bU(s); I is the n × n identity matrix

or

X(s) = (sI – A)⁻¹x⁰ + (sI – A)⁻¹bU(s)    (12.22a)
Y(s) = c(sI – A)⁻¹x⁰ + [c(sI – A)⁻¹b + d]U(s)    (12.22b)

Equations (12.22) are algebraic equations. If x⁰ and U(s) are known, X(s) and Y(s) can be computed from these equations.
In the case of a zero initial state (i.e., x⁰ = 0), the input–output behaviour of the system (12.21) is determined entirely by the transfer function

$$\frac{Y(s)}{U(s)} = G(s) = c(sI - A)^{-1}b + d \qquad (12.23)$$

We can express the inverse of the matrix (sI – A) as

$$(sI - A)^{-1} = \frac{(sI - A)^{+}}{|sI - A|} \qquad (12.24)$$

where
|sI – A| = determinant of the matrix (sI – A)
(sI – A)⁺ = adjoint of the matrix (sI – A)

Using Eqn. (12.24), the transfer function G(s) given by Eqn. (12.23) can be written as

$$G(s) = \frac{c(sI - A)^{+}b}{|sI - A|} + d \qquad (12.25)$$

12.4.1 Eigenvalues of a Matrix


For a general nth-order matrix

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix},$$
the matrix (sI – A) has the following appearance:

$$(sI - A) = \begin{bmatrix} s - a_{11} & -a_{12} & \cdots & -a_{1n} \\ -a_{21} & s - a_{22} & \cdots & -a_{2n} \\ \vdots & \vdots & & \vdots \\ -a_{n1} & -a_{n2} & \cdots & s - a_{nn} \end{bmatrix}$$

If we imagine calculating det(sI – A), we see that one of the terms will be the product of the diagonal elements of (sI – A):

$$(s - a_{11})(s - a_{22}) \cdots (s - a_{nn}) = s^n + \alpha_1' s^{n-1} + \cdots + \alpha_n',$$

a polynomial of degree n with the leading coefficient of unity. There will be other terms coming from the off-diagonal elements of (sI – A), but none will have a degree as high as n. Thus |sI – A| will be of the following form:

$$|sI - A| = \Delta(s) = s^n + \alpha_1 s^{n-1} + \cdots + \alpha_n \qquad (12.26)$$
where the α_i are constant scalars.
This is known as the characteristic polynomial of the matrix A. It plays a vital role in the dynamic behaviour of the system. The roots of this polynomial are called the characteristic roots or eigenvalues of matrix A. These roots, as we shall see in Section 12.6, determine the essential features of the unforced dynamic behaviour of the system (12.21).
The adjoint of an n × n matrix is itself an n × n matrix whose elements are the cofactors of the original matrix. Each cofactor is obtained by computing the determinant of the matrix that remains when a row and a column of the original matrix are deleted. It thus follows that each element in (sI – A)⁺ is a polynomial in s of maximum degree (n – 1). The adjoint of (sI – A) can therefore be expressed as

$$(sI - A)^{+} = Q_1 s^{n-1} + Q_2 s^{n-2} + \cdots + Q_{n-1} s + Q_n \qquad (12.27)$$
where the Q_i are constant n × n matrices. We can express the transfer function G(s) given by Eqn. (12.25) in the following form:

$$G(s) = \frac{c[Q_1 s^{n-1} + Q_2 s^{n-2} + \cdots + Q_{n-1}s + Q_n]b}{s^n + \alpha_1 s^{n-1} + \cdots + \alpha_{n-1}s + \alpha_n} + d \qquad (12.28)$$

G(s) is thus a rational function of s. When d = 0, the degree of the numerator polynomial of G(s) is strictly less than the degree of the denominator polynomial, and therefore the resulting transfer function is a strictly proper transfer function. When d ≠ 0, the degree of the numerator polynomial of G(s) will be equal to the degree of the denominator polynomial, giving a proper transfer function. Further,

$$d = \lim_{s\to\infty} G(s) \qquad (12.29)$$

From Eqns (12.26) and (12.28) we observe that the characteristic polynomial of matrix A of the system (12.21) is the same as the denominator polynomial of the corresponding transfer function G(s). If there are no cancellations between the numerator and denominator polynomials of G(s) in Eqn. (12.28), the eigenvalues of matrix A are the same as the poles of G(s). We will take up this aspect of the correspondence between state variable models and transfer functions in Section 12.8. It will be proved that for a completely controllable and completely observable state variable model, the eigenvalues of matrix A are the same as the poles of the corresponding transfer function.
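The characteristic polynomial and eigenvalues of A are readily computed numerically; the sketch below (illustrative, using NumPy; not part of the original text) does this for the position-control plant of Example 12.2.

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, -1.0, 1.0],
              [0.0, -1.0, -10.0]])   # position-control plant of Example 12.2

char_poly = np.poly(A)          # coefficients of |sI - A|, highest power first
eigenvalues = np.linalg.eigvals(A)

print(char_poly)    # ~ [1, 11, 11, 0]  ->  s^3 + 11 s^2 + 11 s
print(eigenvalues)  # 0 and the two roots of s^2 + 11 s + 11
```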
12.4.2 Invariance Property

It is recalled that the state variable model for a system is not unique, but depends on the choice of a set of state variables. A transformation

x(t) = P x̄(t); P is a nonsingular matrix    (12.30)

results in the following alternative state variable model (refer Eqns (12.17)) for the system (12.21):

$$\dot{\bar{x}}(t) = \bar{A}\bar{x}(t) + \bar{b}u(t); \quad \bar{x}(t_0) = P^{-1}x(t_0) \qquad (12.31a)$$
$$y(t) = \bar{c}\bar{x}(t) + du(t) \qquad (12.31b)$$

where Ā = P⁻¹AP, b̄ = P⁻¹b, c̄ = cP
The definition of a new set of internal state variables should evidently not affect the eigenvalues or input-output behaviour. This may be verified by evaluating the characteristic polynomial and the transfer function of the transformed system.
(i) |sI – Ā| = |sI – P⁻¹AP| = |sP⁻¹P – P⁻¹AP|
        = |P⁻¹(sI – A)P| = |P⁻¹| |sI – A| |P| = |sI – A|    (12.32)
(ii) System output in response to input u(t) is given by the transfer function
    Ḡ(s) = c̄(sI – Ā)⁻¹b̄ + d̄
        = cP(sI – P⁻¹AP)⁻¹P⁻¹b + d
        = cP(sP⁻¹P – P⁻¹AP)⁻¹P⁻¹b + d
        = cP[P⁻¹(sI – A)P]⁻¹P⁻¹b + d
        = cPP⁻¹(sI – A)⁻¹PP⁻¹b + d
        = c(sI – A)⁻¹b + d = G(s)    (12.33)

(iii) System output in response to initial state x̄(t₀) is given by (refer Eqn. (12.22b))
    c̄(sI – Ā)⁻¹x̄(t₀) = cP(sI – P⁻¹AP)⁻¹P⁻¹x(t₀)
        = c(sI – A)⁻¹x(t₀)    (12.34)
The input-output behaviour of the system (12.21) is thus invariant under the transformation (12.30).

Example 12.4 Consider the position control system of Example 12.2. The plant model of the system is reproduced below:

ẋ(t) = Ax(t) + bu(t)
y(t) = cx(t)    (12.35)

with

$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & -1 & 1 \\ 0 & -1 & -10 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ 0 \\ 10 \end{bmatrix}; \quad c = [1 \;\; 0 \;\; 0]$$
The characteristic polynomial of matrix A is

$$|sI - A| = \begin{vmatrix} s & -1 & 0 \\ 0 & s+1 & -1 \\ 0 & 1 & s+10 \end{vmatrix} = s(s^2 + 11s + 11)$$

The transfer function

$$G(s) = \frac{Y(s)}{U(s)} = \frac{c(sI - A)^{+}b}{|sI - A|} = \frac{[1 \;\; 0 \;\; 0]\begin{bmatrix} s^2 + 11s + 11 & s + 10 & 1 \\ 0 & s(s+10) & s \\ 0 & -s & s(s+1) \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ 10 \end{bmatrix}}{s(s^2 + 11s + 11)} = \frac{10}{s(s^2 + 11s + 11)} \qquad (12.36)$$
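The transfer function (12.36) can be cross-checked with scipy.signal.ss2tf, which carries out the conversion of Eqn. (12.23); the snippet below is an illustration, not part of the original text.

```python
import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0, 0.0],
              [0.0, -1.0, 1.0],
              [0.0, -1.0, -10.0]])
b = np.array([[0.0], [0.0], [10.0]])
c = np.array([[1.0, 0.0, 0.0]])
d = np.array([[0.0]])

num, den = signal.ss2tf(A, b, c, d)
print(num)  # ~ [0, 0, 0, 10]   -> numerator 10
print(den)  # ~ [1, 11, 11, 0]  -> s^3 + 11 s^2 + 11 s
```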

12.5 CONVERSION OF TRANSFER FUNCTIONS TO CANONICAL STATE VARIABLE MODELS

In the last section, we studied the problem—finding the transfer function from the state variable model of a
system. The converse problem—finding a state variable model from the transfer function of a system, is the
subject of discussion in this section. This problem is quite important because of the following reasons:
(i) Quite often the system dynamics is determined experimentally using standard test signals like a step,
impulse, or sinusoidal signal. A transfer function is conveniently fitted to the experimental data in
some best possible manner.
There are, however, many design techniques developed exclusively for state variable models. In
order to apply these techniques, experimentally obtained transfer function descriptions must be
realized into state variable models.
(ii) Realization of transfer functions into state variable models is needed even if the control system design
is based on frequency-domain design methods. In these cases the need arises for the purpose of
transient response simulation. Many algorithms and numerical integration computer programs
designed for solution of systems of first-order equations are available, but there is not much software
for the numerical inversion of Laplace transforms. Thus, if a reliable method is needed for calculating
the transient response of a system, one may be better off converting the transfer function of the system
to state variable description and numerically integrating the resulting differential equations rather
than attempting to compute the inverse Laplace transform by numerical methods.
We shall discuss here the problem of realization of transfer function into state variable models.
Note the use of the term ‘realization’. A state variable model that has a prescribed rational function
G(s) as its transfer function, is the realization of G(s). The term ‘realization’ is justified by the fact
that by using the state diagram, a pictorial representation of the state variable model, the system with
the transfer function G(s) can be built in the real world by an op amp circuit. We shall see shortly that
a state diagram is essentially an analog-computer program for the given system, which is characterized
by three basic operations: (i) multiplication of a variable by a constant, (ii) addition of several
variables, and (iii) integration of variables.
The following three problems are involved in the realization of a given transfer function into state variable
models:
(i) Is it possible at all to obtain state variable description from the given transfer function?
(ii) If yes, is the state variable description unique for a given transfer function?
(iii) How do we obtain the state variable description from the given transfer function?
The answer to the first problem has been given in the last section. A rational function G(s) is realizable by
a finite dimensional linear time-invariant state model if and only if G(s) is a proper rational function. A
proper rational function will have a state model of the form:

ẋ(t) = Ax(t) + bu(t)
y(t) = cx(t) + du(t)    (12.37)

where A, b, c and d are constant matrices of appropriate dimensions. A strictly proper rational function will have a state model of the form

ẋ(t) = Ax(t) + bu(t)
y(t) = cx(t)    (12.38)
Let us now turn to the second problem. In the last section, we saw that there are innumerable systems that
have the same transfer function. Hence, the representation of a transfer function in state variable form is
obviously not unique. However, all these representations will be equivalent.
In the remaining part of this section, we deal with the third problem. We shall develop three standard, or
‘canonical’ representations of transfer functions.
A linear time-invariant SISO system is described by a transfer function of the form

$$G(s) = \frac{\beta_0 s^m + \beta_1 s^{m-1} + \cdots + \beta_m}{s^n + \alpha_1 s^{n-1} + \cdots + \alpha_n}; \quad m \le n$$

where the coefficients α_i and β_i are real constant scalars. Note that there is no loss in generality to assume the coefficient of sⁿ to be unity.

In the following, we derive results for m = n; these results may be used for the case m < n by setting the appropriate β_i coefficients equal to zero. Therefore, our problem is to obtain a state variable model corresponding to the transfer function

$$G(s) = \frac{\beta_0 s^n + \beta_1 s^{n-1} + \cdots + \beta_n}{s^n + \alpha_1 s^{n-1} + \cdots + \alpha_n} \qquad (12.39)$$
12.5.1 First Companion Form
Our development starts with a transfer function of the form

$$\frac{Z(s)}{U(s)} = \frac{1}{s^n + \alpha_1 s^{n-1} + \cdots + \alpha_n} \qquad (12.40)$$

which can be written as

$$(s^n + \alpha_1 s^{n-1} + \cdots + \alpha_n)Z(s) = U(s)$$

The corresponding differential equation is

$$p^n z(t) + \alpha_1 p^{n-1}z(t) + \cdots + \alpha_n z(t) = u(t)$$

where

$$p^k z(t) \triangleq \frac{d^k z(t)}{dt^k}$$

Solving for the highest derivative of z(t), we obtain

$$p^n z(t) = -\alpha_1 p^{n-1}z(t) - \alpha_2 p^{n-2}z(t) - \cdots - \alpha_n z(t) + u(t) \qquad (12.41)$$
Now consider a chain of n integrators as shown in Fig. 12.4. Suppose that the output of the last integrator is z(t). Then the output of the just previous integrator is pz = dz/dt, and so forth. The output from the first integrator is p^(n-1)z(t), and the input to this integrator is thus p^n z(t). This leaves only the problem of obtaining p^n z(t) for use as input to the first integrator. In fact, this is already specified by Eqn. (12.41). Realization of this equation is shown in Fig. 12.4.

Fig. 12.4 Realization of the system (12.41)


Having developed a realization of the simple transfer function (12.40), we are now in a position to consider the more general transfer function (12.39). We decompose this transfer function into two parts, as shown in Fig. 12.5.

Fig. 12.5 Decomposition of the transfer function (12.39)

The output Y(s) can be written as

$$Y(s) = (\beta_0 s^n + \beta_1 s^{n-1} + \cdots + \beta_n)Z(s) \qquad (12.42a)$$

where Z(s) is given by

$$\frac{Z(s)}{U(s)} = \frac{1}{s^n + \alpha_1 s^{n-1} + \cdots + \alpha_n} \qquad (12.42b)$$
A realization of the transfer function (12.42b) has already been developed. Figure 12.4 shows this realization. The output of the last integrator is z(t), and the inputs to the integrators in the chain from the last to the first are the n successive derivatives of z(t).
Realization of the transfer function (12.42a) is now straightforward. The output

$$y(t) = \beta_0 p^n z(t) + \beta_1 p^{n-1}z(t) + \cdots + \beta_n z(t),$$

is nothing but the sum of the scaled versions of the inputs to the n integrators. Figure 12.6 shows the complete realization of the transfer function (12.39). All that remains to be done is to write the corresponding differential equations.

Fig. 12.6 Realization (state diagram) of the system (12.39)

To get one state variable model of the system, we identify the output of each integrator in Fig. 12.6 with a state variable, starting at the right and proceeding to the left. The corresponding differential equations, using this identification of state variables, are

$$\begin{aligned}
\dot{x}_1 &= x_2 \\
\dot{x}_2 &= x_3 \\
&\;\;\vdots \\
\dot{x}_{n-1} &= x_n \\
\dot{x}_n &= -\alpha_n x_1 - \alpha_{n-1}x_2 - \cdots - \alpha_1 x_n + u
\end{aligned} \qquad (12.43a)$$

The output equation is found by careful examination of the block diagram of Fig. 12.6. Note that there are two paths from the output of each integrator to the system output: one path upward through the box labelled β_i, and a second path down through the box labelled α_i and thence through the box labelled β_0. As a consequence,

$$y = (\beta_n - \alpha_n\beta_0)x_1 + (\beta_{n-1} - \alpha_{n-1}\beta_0)x_2 + \cdots + (\beta_1 - \alpha_1\beta_0)x_n + \beta_0 u \qquad (12.43b)$$

The state and output Eqns (12.43), organized in vector-matrix form, are given below:

ẋ(t) = Ax(t) + bu(t)
y(t) = cx(t) + du(t)    (12.44)

with

$$A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -\alpha_n & -\alpha_{n-1} & -\alpha_{n-2} & \cdots & -\alpha_1 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$$

$$c = [\beta_n - \alpha_n\beta_0 \quad \beta_{n-1} - \alpha_{n-1}\beta_0 \quad \cdots \quad \beta_1 - \alpha_1\beta_0]; \quad d = \beta_0$$

If the direct path through β_0 is absent (refer Fig. 12.6), then the scalar d is zero and the row matrix c contains only the β_i coefficients.
The matrix A in Eqns (12.44) has a very special structure: the coefficients of the denominator of the
transfer function preceded by minus signs form a string along the bottom row of the matrix. The rest of the
matrix is zero except for the ‘superdiagonal’ terms which are all unity. In matrix theory, a matrix with this
structure is said to be in companion form. For this reason, we identify the realization (12.44) as companion-
form realization of the transfer function (12.39). We call this the first companion form: another companion
form follows.
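Before moving on, a first companion realization can be assembled mechanically from the α and β coefficients. The sketch below is an illustration of Eqns (12.44) under the assumption m = n (pad β with leading zeros otherwise); the helper name first_companion_form is hypothetical and not from the text.

```python
import numpy as np

def first_companion_form(alpha, beta):
    """Build the first companion realization of Eqns (12.44).

    alpha = [a1, ..., an]      denominator coefficients (monic denominator)
    beta  = [b0, b1, ..., bn]  numerator coefficients of the same order n
    """
    n = len(alpha)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)          # superdiagonal of ones
    A[-1, :] = -np.array(alpha[::-1])   # bottom row: -an ... -a1
    b = np.zeros((n, 1)); b[-1, 0] = 1.0
    b0 = beta[0]
    c = np.array([beta[n - i] - alpha[n - 1 - i] * b0 for i in range(n)])
    return A, b, c.reshape(1, -1), b0

# G(s) = (s + 3) / (s^3 + 9 s^2 + 24 s + 20), as in Example 12.5
A, b, c, d = first_companion_form([9.0, 24.0, 20.0], [0.0, 0.0, 1.0, 3.0])
print(A)   # bottom row [-20, -24, -9]
print(c)   # [3, 1, 0]
```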
12.5.2 Second Companion Form
In the first companion form, the coefficients of the denominator of the transfer function appear in one of the rows of the A matrix. There is another companion form in which the coefficients appear in a column of the A matrix. This can be obtained by writing Eqn. (12.39) as

$$(s^n + \alpha_1 s^{n-1} + \cdots + \alpha_n)Y(s) = (\beta_0 s^n + \beta_1 s^{n-1} + \cdots + \beta_n)U(s)$$

or

$$s^n[Y(s) - \beta_0 U(s)] + s^{n-1}[\alpha_1 Y(s) - \beta_1 U(s)] + \cdots + [\alpha_n Y(s) - \beta_n U(s)] = 0$$

On dividing by sⁿ and solving for Y(s), we obtain

$$Y(s) = \beta_0 U(s) + \frac{1}{s}[\beta_1 U(s) - \alpha_1 Y(s)] + \cdots + \frac{1}{s^n}[\beta_n U(s) - \alpha_n Y(s)] \qquad (12.45)$$

Note that 1/sⁿ is the transfer function of a chain of n integrators. Realization of (1/sⁿ)[β_n U(s) – α_n Y(s)] requires a chain of n integrators with an input [β_n u – α_n y] to the first integrator in the chain from left-to-right. Realization of (1/s^(n-1))[β_(n-1)U(s) – α_(n-1)Y(s)] requires a chain of (n – 1) integrators with input [β_(n-1) u – α_(n-1) y] to the first integrator in the chain from left-to-right, and so forth. This immediately leads to the structure shown in Fig. 12.7. The signal y is fed back to each of the integrators in the chain and the signal u is fed forward. Thus the signal [β_n u – α_n y] passes through n integrators; the signal [β_(n-1) u – α_(n-1) y] passes through (n – 1) integrators, and so forth, to complete the realization of Eqn. (12.45). The structure retains the ladder-like shape of the first companion form, but the feedback paths are in different directions.

Fig. 12.7 Realization (state diagram) of Eqn. (12.45)


We can now write differential equations for the realization given by Fig. 12.7. To get one state variable model, we identify the output of each integrator in Fig. 12.7 with a state variable, starting at the left and proceeding to the right. The corresponding differential equations are

$$\begin{aligned}
\dot{x}_n &= x_{n-1} - \alpha_1(x_n + \beta_0 u) + \beta_1 u \\
\dot{x}_{n-1} &= x_{n-2} - \alpha_2(x_n + \beta_0 u) + \beta_2 u \\
&\;\;\vdots \\
\dot{x}_2 &= x_1 - \alpha_{n-1}(x_n + \beta_0 u) + \beta_{n-1}u \\
\dot{x}_1 &= -\alpha_n(x_n + \beta_0 u) + \beta_n u
\end{aligned}$$

and the output equation is

$$y = x_n + \beta_0 u$$
The state and output equations, organized in vector-matrix form, are given below:

ẋ(t) = Ax(t) + bu(t)
y(t) = cx(t) + du(t)    (12.46)

with

$$A = \begin{bmatrix} 0 & 0 & \cdots & 0 & -\alpha_n \\ 1 & 0 & \cdots & 0 & -\alpha_{n-1} \\ 0 & 1 & \cdots & 0 & -\alpha_{n-2} \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & -\alpha_1 \end{bmatrix}; \quad b = \begin{bmatrix} \beta_n - \alpha_n\beta_0 \\ \beta_{n-1} - \alpha_{n-1}\beta_0 \\ \vdots \\ \beta_1 - \alpha_1\beta_0 \end{bmatrix}$$

$$c = [0 \;\; 0 \;\; \cdots \;\; 0 \;\; 1]; \quad d = \beta_0$$
Compare the A, b, and c matrices of the second companion form with those of the first. We observe that the A, b, and c matrices of one companion form correspond to the transpose of the A, c, and b matrices, respectively, of the other.
There are many benefits derived from the companion forms of state variable models. One obvious benefit is that both the companion forms lend themselves easily to simple analog computer models. Both the companion forms also play an important role in pole-placement design through state feedback (refer Chapter 13).

12.5.3 Jordan Canonical Form

In the two canonical forms (12.44) and (12.46), the coefficients of the denominator of the transfer function appear in one of the rows or columns of matrix A. In another of the canonical forms, the poles of the transfer function form a string along the main diagonal of the matrix. This canonical form follows directly from the partial fraction expansion of the transfer function.
The general transfer function under consideration is (refer Eqn. (12.39))

$$G(s) = \frac{\beta_0 s^n + \beta_1 s^{n-1} + \cdots + \beta_n}{s^n + \alpha_1 s^{n-1} + \cdots + \alpha_n}$$

By long division, G(s) can be written as

$$G(s) = \beta_0 + \frac{\beta_1' s^{n-1} + \beta_2' s^{n-2} + \cdots + \beta_n'}{s^n + \alpha_1 s^{n-1} + \cdots + \alpha_n} = \beta_0 + G'(s)$$

The results are simplest when the poles of the transfer function are all distinct. The partial fraction expansion of the transfer function then has the form:

$$G(s) = \frac{Y(s)}{U(s)} = \beta_0 + \frac{r_1}{s - \lambda_1} + \frac{r_2}{s - \lambda_2} + \cdots + \frac{r_n}{s - \lambda_n} \qquad (12.47)$$
The coefficients r_i (i = 1, 2, ..., n) are the residues of the transfer function G′(s) at the corresponding poles at s = λ_i (i = 1, 2, ..., n). In the form of Eqn. (12.47), the transfer function consists of a direct path with gain β_0, and n first-order transfer functions in parallel. A block diagram representation of Eqn. (12.47) is shown in Fig. 12.8. The gains corresponding to the residues have been placed at the outputs of the integrators. This is quite arbitrary. They could have been located on the input side, or indeed split between the input and the output.
Identifying the outputs of the integrators with the state variables results in the following state and output equations:
ẋ(t) = Λx(t) + bu(t)
y(t) = cx(t) + du(t)    (12.48)

with

$$\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}; \quad b = \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}$$

$$c = [r_1 \;\; r_2 \;\; \cdots \;\; r_n]; \quad d = \beta_0$$

Fig. 12.8 Realization (state diagram) of G(s) in Eqn. (12.47)

It is observed that for this canonical state variable model, the matrix Λ is a diagonal matrix with the poles of G(s) as its diagonal elements. The unique decoupled nature of the canonical model is obvious from Equations (12.48); the n first-order differential equations are independent of each other:

$$\dot{x}_i(t) = \lambda_i x_i(t) + u(t); \quad i = 1, 2, \ldots, n \qquad (12.49)$$
This decoupling feature greatly helps in system analysis (refer [155]).
The block diagram representation of Fig. 12.8 can be turned into hardware only if all the poles at s = λ₁, λ₂, ..., λ_n are real. If they are complex, the feedback gains and the gains corresponding to the residues are complex. In this case, the representation must be considered as being purely conceptual; valid for theoretical studies, but not physically realizable. A realizable representation can be obtained by introducing an equivalence transformation.
Suppose that s = σ + jω, s = σ – jω and s = λ are the three poles of a transfer function. The residues at the pair of complex conjugate poles must be themselves complex conjugates. Partial fraction expansion of the transfer function with a pair of complex conjugate poles and a real pole has the form

$$G(s) = d + \frac{p + jq}{s - (\sigma + j\omega)} + \frac{p - jq}{s - (\sigma - j\omega)} + \frac{r}{s - \lambda}$$
A state variable model for this transfer function is given below (refer Eqns (12.48)):

ẋ = Λx + bu
y = cx + du    (12.50)

with

$$\Lambda = \begin{bmatrix} \sigma + j\omega & 0 & 0 \\ 0 & \sigma - j\omega & 0 \\ 0 & 0 & \lambda \end{bmatrix}; \quad b = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}; \quad c = [p + jq \quad p - jq \quad r]$$

Introducing an equivalence transformation

x = P x̄

with

$$P = \begin{bmatrix} 1/2 & -j1/2 & 0 \\ 1/2 & j1/2 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
we obtain (refer Eqns (12.17))

$$\dot{\bar{x}}(t) = \bar{A}\bar{x}(t) + \bar{b}u(t)$$
$$y(t) = \bar{c}\bar{x}(t) + du(t) \qquad (12.51)$$

where

$$\bar{A} = P^{-1}\Lambda P = \begin{bmatrix} 1 & 1 & 0 \\ j & -j & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \sigma + j\omega & 0 & 0 \\ 0 & \sigma - j\omega & 0 \\ 0 & 0 & \lambda \end{bmatrix}\begin{bmatrix} 1/2 & -j1/2 & 0 \\ 1/2 & j1/2 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \sigma & \omega & 0 \\ -\omega & \sigma & 0 \\ 0 & 0 & \lambda \end{bmatrix}$$

$$\bar{b} = P^{-1}b = \begin{bmatrix} 2 \\ 0 \\ 1 \end{bmatrix}; \qquad \bar{c} = cP = [p \;\; q \;\; r]$$

When the transfer function G(s) has repeated poles, the partial fraction expansion will not be as simple as Eqn. (12.47). Assume that G(s) has m distinct poles at s = λ₁, λ₂, ..., λ_m of multiplicity n₁, n₂, ..., n_m respectively; n = n₁ + n₂ + ⋯ + n_m. That is, G(s) is of the form

$$G(s) = \beta_0 + \frac{\beta_1' s^{n-1} + \beta_2' s^{n-2} + \cdots + \beta_n'}{(s - \lambda_1)^{n_1}(s - \lambda_2)^{n_2}\cdots(s - \lambda_m)^{n_m}} \qquad (12.52)$$

The partial fraction expansion of G(s) is of the form:

$$G(s) = \beta_0 + H_1(s) + \cdots + H_m(s) = \frac{Y(s)}{U(s)} \qquad (12.53)$$

where

$$H_i(s) = \frac{r_{i1}}{(s - \lambda_i)^{n_i}} + \frac{r_{i2}}{(s - \lambda_i)^{n_i - 1}} + \cdots + \frac{r_{in_i}}{(s - \lambda_i)} = \frac{Y_i(s)}{U(s)}$$
The first term in H_i(s) can be synthesized as a chain of n_i identical first-order systems, each having transfer function 1/(s – λ_i). The second term can be synthesized by a chain of (n_i – 1) first-order systems, and so forth. The entire H_i(s) can be synthesized by the system having the block diagram shown in Fig. 12.9.
We can now write differential equations for the realization of H_i(s) given by Fig. 12.9. To get one state variable formulation, we identify the output of each integrator with a state variable, starting at the right and proceeding to the left. The corresponding differential equations are

$$\begin{aligned}
\dot{x}_{i1} &= \lambda_i x_{i1} + x_{i2} \\
\dot{x}_{i2} &= \lambda_i x_{i2} + x_{i3} \\
&\;\;\vdots \\
\dot{x}_{in_i} &= \lambda_i x_{in_i} + u
\end{aligned} \qquad (12.54a)$$

Fig. 12.9 Realization (state diagram) of H_i(s) in Eqn. (12.53)

and the output is given by

$$y_i = r_{i1}x_{i1} + r_{i2}x_{i2} + \cdots + r_{in_i}x_{in_i} \qquad (12.54b)$$

If the state vector for the subsystem is defined by

$$\mathbf{x}_i = [x_{i1} \;\; x_{i2} \;\; \cdots \;\; x_{in_i}]^{T}$$
then Eqns (12.54) can be written in the standard form

ẋ_i = Λ_i x_i + b_i u
y_i = c_i x_i    (12.55)

where

$$\Lambda_i = \begin{bmatrix} \lambda_i & 1 & 0 & \cdots & 0 & 0 \\ 0 & \lambda_i & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & \lambda_i & 1 \\ 0 & 0 & 0 & \cdots & 0 & \lambda_i \end{bmatrix}; \quad b_i = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}; \quad c_i = [r_{i1} \;\; r_{i2} \;\; \cdots \;\; r_{in_i}]$$
Note that matrix Λ_i has two diagonals: the principal diagonal has the corresponding characteristic root (pole) and the superdiagonal has all 1's. In matrix theory, a matrix having this structure is said to be in Jordan form. For this reason, we identify the realization (12.55) as the Jordan canonical form.
According to Eqn. (12.53), the overall transfer function G(s) consists of a direct path with gain β_0 and m subsystems, each of which is in the Jordan canonical form, as shown in Fig. 12.10. The state vector of the overall system consists of the concatenation of the state vectors of each of the Jordan blocks:

$$\mathbf{x} = \begin{bmatrix} \mathbf{x}_1 \\ \mathbf{x}_2 \\ \vdots \\ \mathbf{x}_m \end{bmatrix} \qquad (12.56a)$$
Since there is no coupling between any of the subsystems, the Λ matrix of the overall system is 'block diagonal':

$$\Lambda = \begin{bmatrix} \Lambda_1 & 0 & \cdots & 0 \\ 0 & \Lambda_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & \Lambda_m \end{bmatrix} \qquad (12.56b)$$

Fig. 12.10 Subsystems in Jordan canonical form combined into overall system
where each of the submatrices Λ_i is in the Jordan canonical form (12.55). The b and c matrices of the overall system are the concatenations of the b_i and c_i matrices respectively of each of the subsystems:

$$b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}; \quad c = [c_1 \;\; c_2 \;\; \cdots \;\; c_m]; \quad d = \beta_0 \qquad (12.56c)$$

The state variable model (12.48), derived for the case of distinct poles, is a special case of the Jordan canonical form (12.56) where each Jordan block is of 1 × 1 dimension.

Example 12.5 In the following, we obtain three different realizations for the transfer function

$$G(s) = \frac{s + 3}{s^3 + 9s^2 + 24s + 20} = \frac{Y(s)}{U(s)}$$

First companion form Note that the given G(s) is a strictly proper fraction; the realization will therefore be of the form (12.38), i.e., the parameter d in the realization {A, b, c, d} is zero.
The state variable formulation in the first companion form can be written just by inspection of the given transfer function. Referring to Eqns (12.44), we obtain

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -20 & -24 & -9 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}u$$

$$y = [3 \;\; 1 \;\; 0]\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$
Figure 12.11a shows the state diagram in signal flow graph form.
Second companion form Referring to Eqns (12.46), we obtain

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} 0 & 0 & -20 \\ 1 & 0 & -24 \\ 0 & 1 & -9 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 3 \\ 1 \\ 0 \end{bmatrix}u$$

$$y = x_3$$
Figure 12.11b shows the state diagram.

Fig. 12.11 Three realizations of given G(s)

Jordan canonical form The given transfer function G(s) in the factored form:

$$G(s) = \frac{s + 3}{(s + 2)^2(s + 5)}$$

Using partial fraction expansion, we obtain

$$G(s) = \frac{1/3}{(s + 2)^2} + \frac{2/9}{s + 2} + \frac{-2/9}{s + 5}$$

The Λ matrix of the state variable model in Jordan canonical form will be block-diagonal, consisting of two Jordan blocks (refer Eqns (12.55)):

$$\Lambda_1 = \begin{bmatrix} -2 & 1 \\ 0 & -2 \end{bmatrix}, \qquad \Lambda_2 = [-5]$$

The corresponding b_i and c_i vectors are (refer Eqns (12.55)):

$$b_1 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; \quad c_1 = \left[\tfrac{1}{3} \;\; \tfrac{2}{9}\right]$$

$$b_2 = [1]; \quad c_2 = \left[-\tfrac{2}{9}\right]$$
The state variable model of the given G(s) in Jordan canonical form is therefore given by (refer Eqns (12.56))

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} -2 & 1 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}u$$

$$y = \left[\tfrac{1}{3} \;\; \tfrac{2}{9} \;\; -\tfrac{2}{9}\right]\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$
Figure 12.11c shows the state diagram. We note that Jordan canonical state variables are not completely
decoupled. The decoupling is blockwise; state variables of one block are independent of state variables
of all other blocks. However, the state variables of one block among themselves are coupled; the
coupling is unique and simple.
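The residues and poles needed for the Jordan (partial fraction) realization can be obtained with scipy.signal.residue; the snippet below (illustrative only, not part of the original text) reproduces the expansion used in Example 12.5.

```python
import numpy as np
from scipy import signal

# G(s) = (s + 3) / (s^3 + 9 s^2 + 24 s + 20)
num = [1.0, 3.0]
den = [1.0, 9.0, 24.0, 20.0]

r, p, k = signal.residue(num, den)
print(p)   # poles: -2 (repeated twice) and -5
print(r)   # residues: 1/3, 2/9 and -2/9 (pairing/order follows SciPy's convention)
print(k)   # direct polynomial term: empty, since G(s) is strictly proper
```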

12.6 SOLUTION OF STATE EQUATIONS

In this section, we investigate the solution of the state equation

ẋ(t) = Ax(t) + bu(t); x(t₀) ≜ x⁰    (12.57)

where x is the n × 1 state vector, u is a scalar input, A is an n × n constant matrix, and b is an n × 1 constant vector.

12.6.1 Matrix Exponential


Functions of square matrices arise in connection with the solution of vector differential equations. Of immediate interest to us are matrix infinite series.
Consider the infinite series in a scalar variable x:

$$f(x) = a_0 + a_1 x + a_2 x^2 + \cdots = \sum_{i=0}^{\infty} a_i x^i \qquad (12.58a)$$

with the radius of convergence r.
We can define an infinite series in a matrix variable A as

$$f(A) = a_0 I + a_1 A + a_2 A^2 + \cdots = \sum_{i=0}^{\infty} a_i A^i \qquad (12.58b)$$

An important relation between the scalar power series (12.58a) and the matrix power series (12.58b) is that if the absolute values of the eigenvalues of A are smaller than r, then the matrix power series (12.58b) converges (for proof, refer Lefschetz [33]).
Consider, in particular, the scalar power series

$$f(x) = 1 + x + \frac{1}{2!}x^2 + \cdots + \frac{1}{k!}x^k + \cdots = \sum_{i=0}^{\infty} \frac{1}{i!}x^i \qquad (12.59a)$$

It is well known that this power series converges to the exponential eˣ for finite x, so that

$$f(x) = e^x \qquad (12.59b)$$
It follows from this result that the matrix power series

$$f(A) = I + A + \frac{1}{2!}A^2 + \cdots + \frac{1}{k!}A^k + \cdots = \sum_{i=0}^{\infty} \frac{1}{i!}A^i$$

converges for all A. By analogy with the power series in Eqns (12.59) for the ordinary exponential function, we adopt the following nomenclature:
If A is an n × n matrix, the matrix exponential of A is

$$e^{A} \triangleq I + A + \frac{1}{2!}A^2 + \cdots + \frac{1}{k!}A^k + \cdots = \sum_{i=0}^{\infty} \frac{1}{i!}A^i$$

The following matrix exponential will appear in the solution of state equations:

$$e^{At} = I + At + \frac{1}{2!}A^2t^2 + \cdots + \frac{1}{k!}A^kt^k + \cdots = \sum_{i=0}^{\infty} \frac{1}{i!}A^it^i \qquad (12.60)$$

It converges for all A and all finite t.
In the following we examine some of the properties of the matrix exponential.

1. e^(A·0) = I    (12.61)
   This is easily verified by setting t = 0 in Eqn. (12.60).
2. e^(A(t + τ)) = e^(At) e^(Aτ) = e^(Aτ) e^(At)    (12.62)
   This is easily verified by multiplying out the first few terms of e^(At) and e^(Aτ).
3. (e^(At))⁻¹ = e^(–At)    (12.63)
   Setting τ = –t in Eqn. (12.62), we obtain
   e^(At) e^(–At) = e^(A·0) = I
   Thus the inverse of e^(At) is e^(–At).
   Since the inverse of e^(At) always exists, the matrix exponential is nonsingular for all finite values of t.
4. $\dfrac{d}{dt}e^{At} = Ae^{At} = e^{At}A$    (12.64)
   Term-by-term differentiation of Eqn. (12.60) gives

$$\frac{d}{dt}e^{At} = A + A^2t + \frac{1}{2!}A^3t^2 + \cdots + \frac{1}{(k-1)!}A^kt^{k-1} + \cdots$$
$$= A\left[I + At + \frac{1}{2!}A^2t^2 + \cdots + \frac{1}{(k-1)!}A^{k-1}t^{k-1} + \cdots\right] = Ae^{At}$$
$$= \left[I + At + \frac{1}{2!}A^2t^2 + \cdots + \frac{1}{(k-1)!}A^{k-1}t^{k-1} + \cdots\right]A = e^{At}A$$
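In practice e^(At) is computed numerically rather than from the series; scipy.linalg.expm does this. The sketch below (illustrative only, not part of the original text) checks properties (12.61) to (12.63) for the A matrix of Example 12.1.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0],
              [-1.0, -10.0]])
t = 0.5

# matrix exponential e^{At}
Phi = expm(A * t)

assert np.allclose(expm(A * 0.0), np.eye(2))                  # e^{A0} = I
assert np.allclose(np.linalg.inv(Phi), expm(-A * t))          # (e^{At})^-1 = e^{-At}
assert np.allclose(expm(A * (t + 0.2)), Phi @ expm(A * 0.2))  # e^{A(t+tau)} = e^{At} e^{Atau}
print(Phi)
```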

12.6.2 Solution of Homogeneous State Equation


The simplest form of the general differential Eqn. (12.57) is the homogeneous, i.e., unforced equation

ẋ(t) = Ax(t); x(t₀) ≜ x⁰    (12.65)

We assume a solution x(t) of the form

x(t) = e^(At) k    (12.66)

where e^(At) is the matrix exponential function defined in Eqn. (12.60) and k is a suitably chosen constant vector.
The assumed solution is, in fact, the true solution since it satisfies the differential Eqn. (12.65), as is seen below.

$$\dot{x}(t) = \frac{d}{dt}\left[e^{At}k\right] = \left[\frac{d}{dt}e^{At}\right]k$$

Using property (12.64) of the matrix exponential, we obtain

ẋ(t) = Ae^(At) k = Ax(t)

To evaluate the constant k in terms of the known initial state x(t₀), we substitute t = t₀ in Eqn. (12.66):

x(t₀) = e^(At₀) k

Using property (12.63) of the matrix exponential, we obtain

k = (e^(At₀))⁻¹ x(t₀) = e^(–At₀) x(t₀)

Thus the general solution to Eqn. (12.65) for the state x(t) at time t, given the state x(t₀) at time t₀, is

x(t) = e^(At) e^(–At₀) x(t₀) = e^(A(t – t₀)) x(t₀)    (12.67a)

We have used the property (12.62) of the matrix exponential to express the solution in this form.
If the initial time t₀ = 0, i.e., the initial state x⁰ is known at t = 0, we have from Eqn. (12.67a):

x(t) = e^(At) x(0)    (12.67b)

From Eqn. (12.67b) it is observed that the initial state x(0) ≜ x⁰ at t = 0 is driven to a state x(t) at time t. This transition in state is carried out by the matrix exponential e^(At). Due to this property, e^(At) is known as the state transition matrix, and is denoted by φ(t).
Properties of state transition matrix Properties of the matrix exponential, given earlier in Eqns (12.61)–(12.64), are restated below in terms of the state transition matrix φ(t).

1. $\dfrac{d}{dt}\varphi(t) = A\varphi(t)$; φ(0) = I
2. φ(t₂ – t₁)φ(t₁ – t₀) = φ(t₂ – t₀) for any t₀, t₁, t₂
   This property of the state transition matrix is important since it implies that a state transition process can be divided into a number of sequential transitions. The transition from t₀ to t₂:
   x(t₂) = φ(t₂ – t₀)x(t₀)
   is equal to the transition from t₀ to t₁ and then from t₁ to t₂:
   x(t₁) = φ(t₁ – t₀)x(t₀)
   x(t₂) = φ(t₂ – t₁)x(t₁)
3. φ⁻¹(t) = φ(–t)
4. φ(t) is a nonsingular matrix for all finite t.
Evaluation of state transition matrix Taking the Laplace transform on both sides of Eqn. (12.65) yields

sX(s) – x⁰ = AX(s)
where X(s) ≜ L[x(t)], x⁰ ≜ x(0)
Solving for X(s), we get

X(s) = (sI – A)⁻¹x⁰

The state vector x(t) can be obtained by inverse transforming X(s):

x(t) = L⁻¹[(sI – A)⁻¹]x⁰

Comparing this equation with Eqn. (12.67b), we get

e^(At) = φ(t) = L⁻¹[(sI – A)⁻¹]    (12.68)

The matrix (sI – A)⁻¹ = Φ(s) is known in the mathematical literature as the resolvent of A. The entries of the resolvent matrix Φ(s) are rational functions of s.

Example 12.6 Consider the system

$$\dot{x} = \begin{bmatrix} 0 & 0 & -2 \\ 0 & 1 & 0 \\ 1 & 0 & 3 \end{bmatrix}x; \qquad x(0) = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$$

The resolvent matrix

$$\Phi(s) = (sI - A)^{-1} = \begin{bmatrix} s & 0 & 2 \\ 0 & s - 1 & 0 \\ -1 & 0 & s - 3 \end{bmatrix}^{-1} = \frac{(sI - A)^{+}}{|sI - A|}$$

$$|sI - A| = (s - 1)^2(s - 2)$$

$$(sI - A)^{+} = \begin{bmatrix} (s - 1)(s - 3) & 0 & -2(s - 1) \\ 0 & (s - 1)(s - 2) & 0 \\ (s - 1) & 0 & s(s - 1) \end{bmatrix}$$

$$e^{At} = \mathcal{L}^{-1}[\Phi(s)] = \mathcal{L}^{-1}\begin{bmatrix} \dfrac{s - 3}{(s - 1)(s - 2)} & 0 & \dfrac{-2}{(s - 1)(s - 2)} \\ 0 & \dfrac{1}{s - 1} & 0 \\ \dfrac{1}{(s - 1)(s - 2)} & 0 & \dfrac{s}{(s - 1)(s - 2)} \end{bmatrix}$$

$$= \begin{bmatrix} 2e^{t} - e^{2t} & 0 & 2e^{t} - 2e^{2t} \\ 0 & e^{t} & 0 \\ -e^{t} + e^{2t} & 0 & 2e^{2t} - e^{t} \end{bmatrix}$$

Consequently, the free response of the system is

$$x(t) = e^{At}x(0) = \begin{bmatrix} 0 \\ e^{t} \\ 0 \end{bmatrix}$$

Note that x(t) could be more easily computed by taking the inverse Laplace transform of

X(s) = (sI – A)⁻¹x(0)
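The free response of Example 12.6 can be verified numerically with scipy.linalg.expm; the snippet below is an illustration, not part of the original text.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 0.0, -2.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 3.0]])
x0 = np.array([0.0, 1.0, 0.0])

for t in (0.0, 0.5, 1.0):
    x_t = expm(A * t) @ x0
    # analytic result from Example 12.6: x(t) = [0, e^t, 0]
    assert np.allclose(x_t, [0.0, np.exp(t), 0.0])
    print(t, x_t)
```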

12.6.3 Solution of Nonhomogeneous State Equation


When an input u(t) is present, the complete solution x(t) is obtained from the nonhomogeneous Eqn. (12.57). By writing Eqn. (12.57) as

ẋ(t) – Ax(t) = bu(t)

and premultiplying both sides of this equation by e^(–At), we obtain

e^(–At)[ẋ(t) – Ax(t)] = e^(–At) bu(t)    (12.69)

By applying the rule for the derivative of the product of two matrices, we can write (refer Eqn. (12.64))

$$\frac{d}{dt}\left[e^{-At}x(t)\right] = e^{-At}\frac{d}{dt}x(t) + \frac{d}{dt}\left(e^{-At}\right)x(t) = e^{-At}\dot{x}(t) - e^{-At}Ax(t) = e^{-At}\left[\dot{x}(t) - Ax(t)\right]$$

Use of this equality in Eqn. (12.69) gives

$$\frac{d}{dt}\left[e^{-At}x(t)\right] = e^{-At}bu(t)$$

Integrating both sides with respect to t between the limits 0 and t, we get

$$e^{-At}x(t)\Big|_{0}^{t} = \int_{0}^{t} e^{-A\tau}bu(\tau)\,d\tau$$

or

$$e^{-At}x(t) - x(0) = \int_{0}^{t} e^{-A\tau}bu(\tau)\,d\tau$$
Now premultiplying both sides by e^(At), we have

$$x(t) = e^{At}x(0) + \int_{0}^{t} e^{A(t - \tau)}bu(\tau)\,d\tau \qquad (12.70)$$

If the initial state is known at t = t₀, rather than t = 0, Eqn. (12.70) becomes

$$x(t) = e^{A(t - t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t - \tau)}bu(\tau)\,d\tau \qquad (12.71)$$
Equation (12.71) can also be written as

$$x(t) = \varphi(t - t_0)x(t_0) + \int_{t_0}^{t} \varphi(t - \tau)bu(\tau)\,d\tau \qquad (12.72)$$

where φ(t) = e^(At)

Equation (12.72) is the solution of Eqn. (12.57). This equation is called the state transition equation. It describes the change of state relative to the initial conditions x(t₀) and the input u(t).

Example 12.7 For the speed control system of Fig. 12.1, the following plant model was derived in Example 12.1 (refer Eqns (12.12)):

ẋ = Ax + bu
y = cx

with

$$A = \begin{bmatrix} -1 & 1 \\ -1 & -10 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ 10 \end{bmatrix}; \quad c = [1 \;\; 0]$$

State variables x₁ and x₂ are the physical variables of the system:
x₁(t) = ω(t), angular velocity of the motor shaft
x₂(t) = i_a(t), armature current
The output
y(t) = x₁(t) = ω(t)
In the following, we evaluate the response of this system to a unit step input under zero initial conditions.

$$(sI - A)^{-1} = \begin{bmatrix} s + 1 & -1 \\ 1 & s + 10 \end{bmatrix}^{-1} = \frac{1}{s^2 + 11s + 11}\begin{bmatrix} s + 10 & 1 \\ -1 & s + 1 \end{bmatrix}$$

$$= \begin{bmatrix} \dfrac{s + 10}{(s + a_1)(s + a_2)} & \dfrac{1}{(s + a_1)(s + a_2)} \\[2mm] \dfrac{-1}{(s + a_1)(s + a_2)} & \dfrac{s + 1}{(s + a_1)(s + a_2)} \end{bmatrix}; \quad a_1 = 1.1125, \; a_2 = 9.8875$$

$$e^{At} = \mathcal{L}^{-1}\left[(sI - A)^{-1}\right] = \begin{bmatrix} 1.0128e^{-a_1t} - 0.0128e^{-a_2t} & 0.114e^{-a_1t} - 0.114e^{-a_2t} \\ -0.114e^{-a_1t} + 0.114e^{-a_2t} & -0.0128e^{-a_1t} + 1.0128e^{-a_2t} \end{bmatrix}$$

u(t) = 1; t ≥ 0
x(0) = 0

Therefore,
t È
Í
t
(
1.14 e- a1 (t - t ) - e- a2 (t - t )) ˘
˙ dt
Úe
A(t – t )
x(t) = bdt = Ú
0
Í
(
0 Í1.14 -0.1123 e
Î
- a1 (t - t )
+ 8.8842 e- a2 (t - t ) ) ˙
˙˚

È 0.9094 - 1.0247 e- a1 t + 0.1153 e- a2 t ˘


= Í ˙
ÍÎ- 0.0132 + 0.1151 e- a1 t - 0.1019 e - a2 t ˙˚
The output
y(t) = w(t) = 0.9094 – 1.0247 e–1.1125t + 0.1153e–9.8875t; t ≥ 0
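A numerical simulation provides a convenient check on the closed-form step response. The sketch below is illustrative only; it assumes SciPy's signal module is available and reproduces y(t) = ω(t) for the plant of this example.

```python
import numpy as np
from scipy import signal

# Plant of Example 12.7 (speed control system)
A = np.array([[-1.0, 1.0],
              [-1.0, -10.0]])
b = np.array([[0.0], [10.0]])
c = np.array([[1.0, 0.0]])
d = np.array([[0.0]])

sys = signal.StateSpace(A, b, c, d)
t = np.linspace(0.0, 5.0, 501)
t, y = signal.step(sys, T=t)            # unit-step response, zero initial state

# Closed-form result derived above
a1, a2 = 1.1125, 9.8875
y_cf = 0.9094 - 1.0247*np.exp(-a1*t) + 0.1153*np.exp(-a2*t)
print(np.max(np.abs(y - y_cf)))         # small (round-off of the tabulated coefficients)
```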

12.6.4 Discrete-Time Solution


As illustrated in Fig. 12.12, the time axis is discretized into intervals of width T, and u(t) is approximated by
a staircase function, constant over the intervals. For the kth interval,
u(t) = u(kT); kT ≤ t < (k + 1)T; k = 0, 1, 2, ...   (12.73)
Using Eqn. (12.71) for t0 = kT,

x(t) = e^{A(t-kT)}x(kT) + \left[\int_{kT}^{t} e^{A(t-\tau)} b\,d\tau\right] u(kT); \quad kT \le t < (k+1)T   (12.74)

In response to the input u(kT), the state settles to the value x((k + 1)T), where

x((k+1)T) = e^{AT}x(kT) + \left[\int_{kT}^{(k+1)T} e^{A[(k+1)T-\tau]} b\,d\tau\right] u(kT)

Letting σ = (τ – kT), we have

x((k+1)T) = e^{AT}x(kT) + \left[\int_{0}^{T} e^{A(T-\sigma)} b\,d\sigma\right] u(kT)

With θ = T – σ, we get

x((k+1)T) = e^{AT}x(kT) + \left[\int_{0}^{T} e^{A\theta} b\,d\theta\right] u(kT)

Fig. 12.12  Time discretization

Therefore, for the kth interval,
x((k + 1)T) = Fx(kT) + gu(kT)   (12.75a)
where
F = e^{AT}   (12.75b)

g = \int_0^T e^{A\theta} b\,d\theta   (12.75c)

This permits the solution to be calculated forward in time.


The infinite series (12.60) may be used to compute F and g.

F = e^{AT} = I + AT + \frac{1}{2!}A^2T^2 + \frac{1}{3!}A^3T^3 + \cdots

= \sum_{i=0}^{\infty} \frac{A^i T^i}{i!}; \quad A^0 = I   (12.76)

For a finite T, this series is uniformly convergent. It is therefore possible to evaluate F within prescribed
accuracy. If the series is truncated at i = N, then we may write the finite series sum as

F \cong \sum_{i=0}^{N} \frac{A^i T^i}{i!}   (12.77)

which approximates the infinite series. The larger the N, the better is the approximation.
The integral in Eqn. (12.75c) can be evaluated term by term to give

g = \left[\int_0^T \left(I + A\theta + \frac{1}{2!}A^2\theta^2 + \cdots\right) d\theta\right] b

= \sum_{i=0}^{\infty} \frac{A^i T^{i+1}}{(i+1)!}\, b   (12.78)

g may also be computed by the approximation technique described above.
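A short sketch of the truncated-series computation of Eqns (12.77)–(12.78) follows. It is illustrative only (the function name discretize_series and the truncation index N = 20 are arbitrary choices), and it assumes NumPy.

```python
import numpy as np

def discretize_series(A, b, T, N=20):
    """Approximate F = e^{AT} and g = sum_{i} A^i T^{i+1}/(i+1)! b
    by truncating the series of Eqns (12.77)-(12.78) at i = N."""
    n = A.shape[0]
    F = np.zeros((n, n))
    G = np.zeros((n, n))                # integral of e^{A*theta} over [0, T]
    term = np.eye(n)                    # A^i T^i / i!, starting at i = 0
    for i in range(N + 1):
        F += term
        G += term * T / (i + 1)         # A^i T^{i+1} / (i+1)!
        term = term @ A * T / (i + 1)
    return F, G @ b

# For the model of Example 12.8 with T = 0.1 sec, this returns
# F ~ [[1, 0.0787], [0, 0.6065]] and g ~ [[0.0043], [0.0787]]
F, g = discretize_series(np.array([[0.0, 1.0], [0.0, -5.0]]),
                         np.array([[0.0], [1.0]]), T=0.1)
print(F, g)
```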

Example 12.8  Consider the state variable model

\dot{x}(t) = Ax(t) + bu(t)
y(t) = cx(t)
with
A = \begin{bmatrix} 0 & 1 \\ 0 & -5 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; \quad c = [1 \;\; 0]
The state transition matrix

e^{At} = L^{-1}[(sI - A)^{-1}] = L^{-1}\begin{bmatrix} \frac{1}{s} & \frac{1}{s(s+5)} \\ 0 & \frac{1}{s+5} \end{bmatrix} = \begin{bmatrix} 1 & \frac{1}{5}(1 - e^{-5t}) \\ 0 & e^{-5t} \end{bmatrix}

Discretizing the time axis into intervals of width T = 0.1 sec, we get the following state transition equation
that yields the discrete-time solution:
x((k + 1)T) = Fx(kT) + gu(kT)
where

F = e^{AT} = \begin{bmatrix} 1 & \frac{1}{5}(1 - e^{-5T}) \\ 0 & e^{-5T} \end{bmatrix} = \begin{bmatrix} 1 & 0.0787 \\ 0 & 0.6065 \end{bmatrix}

g = \int_0^T e^{A\theta} b\,d\theta = \begin{bmatrix} \int_0^T \frac{1}{5}(1 - e^{-5\theta})\,d\theta \\ \int_0^T e^{-5\theta}\,d\theta \end{bmatrix} = \begin{bmatrix} \frac{1}{5}\left(T - \frac{1}{5} + \frac{1}{5}e^{-5T}\right) \\ \frac{1}{5}(1 - e^{-5T}) \end{bmatrix} = \begin{bmatrix} 0.0043 \\ 0.0787 \end{bmatrix}
For a given initial state x(0), and input u(t); t ≥ 0,
x(T ) = Fx(0) + gu(0)
x(2T ) = Fx(T ) + gu(T )
x(3T ) = Fx(2T ) + gu(2T )

The output
y(kT ) = cx (kT )
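The same F and g can be obtained with standard software. The sketch below is illustrative only; it assumes SciPy's cont2discrete with the zero-order-hold method, reproduces the numbers of this example, and runs the recursion forward for a unit-step input.

```python
import numpy as np
from scipy.signal import cont2discrete

A = np.array([[0.0, 1.0], [0.0, -5.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])
d = np.array([[0.0]])

# Zero-order-hold discretization with T = 0.1 sec
F, g, _, _, _ = cont2discrete((A, b, c, d), dt=0.1, method='zoh')
print(F)        # ~ [[1, 0.0787], [0, 0.6065]]
print(g)        # ~ [[0.0043], [0.0787]]

# Recursion x((k+1)T) = F x(kT) + g u(kT) for a unit-step input
x = np.zeros((2, 1))
for k in range(5):
    x = F @ x + g * 1.0
    print(k + 1, x.ravel())
```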

12.7 CONCEPTS OF CONTROLLABILITY AND OBSERVABILITY

Controllability and observability are properties which describe structural features of a dynamic system.
These properties play an important role in modern control system design theory; the conditions on
controllability and observability often govern the control solution.
To illustrate the motivation of investigating controllability and observability properties, we consider the
problem of the stabilization of an inverted pendulum on a motor-driven cart.

Example 12.9  Figure 12.13 shows an inverted pendulum with its pivot mounted on a cart. The cart
is driven by an electric motor. The motor drives a pair of wheels of the cart; the whole cart and the
pendulum become the 'load' on the motor. The motor at time t exerts a torque T(t) on the wheels. The
linear force applied to the cart is u(t); T(t) = Ru(t), where R is the radius of the wheels.
The pendulum is obviously unstable. It can, however, be kept upright by applying a proper control
force u(t). This somewhat artificial system example represents a dynamic model of a space booster on
take off—the booster is balanced on top of the rocket engine thrust vector.
From inspection of Fig. 12.13, we construct the differential equations describing the dynamics of the
inverted pendulum and the cart. The horizontal displacement of the pivot on the cart with respect to the
fixed nonrotating coordinate frame, is z(t), while the rotational angle of the pendulum is θ(t). The
parameters of the system are as follows.
M = the mass of the cart
L = the length of the pendulum = 2l
m = the mass of the pendulum
J = the moment of inertia of the pendulum with respect to centre of gravity (CG)

Fig. 12.13  Inverted pendulum system
J = \int_{-l}^{l} r^2\,dm = \int_{-l}^{l} r^2 (\rho A\,dr) = \rho A\left[\frac{r^3}{3}\right]_{-l}^{l} = \rho A\left(\frac{2l^3}{3}\right) = \rho A(2l)\left(\frac{l^2}{3}\right) = \frac{ml^2}{3}

A = area of cross-section; ρ = density.
The horizontal and vertical positions of the CG of the pendulum are given by (z + l sin θ) and (l cos θ),
respectively. The forces exerted on the pendulum are the force mg on the centre of gravity, a horizontal
reaction force H and a vertical reaction force V (Fig. 12.14). H is the horizontal reaction force that the
cart exerts on the pendulum, whereas –H is the force exerted by the pendulum on the cart. Similar
convention applies to forces V and –V. Taking moments around CG of the pendulum, we get

J\frac{d^2\theta(t)}{dt^2} = V(t)\,l\sin\theta(t) - H(t)\,l\cos\theta(t)   (12.79a)

Summing up all forces on the pendulum in vertical and horizontal directions, we obtain

m\frac{d^2}{dt^2}(l\cos\theta(t)) = V(t) - mg   (12.79b)

m\frac{d^2}{dt^2}(z(t) + l\sin\theta(t)) = H(t)   (12.79c)

Summing up all forces on the cart in the horizontal direction, we get

M\frac{d^2 z(t)}{dt^2} = u(t) - H(t)   (12.79d)
In our problem, since the objective is to keep the pendulum upright, it seems reasonable to assume that
θ̇(t) and θ(t) will remain close to zero. In view of this, we can set with sufficient accuracy sin θ ≅ θ;
cos θ ≅ 1. With this approximation, we get from Eqns (12.79),

ml\,\ddot{\theta}(t) + (m + M)\,\ddot{z}(t) = u(t)
(J + ml^2)\,\ddot{\theta}(t) + ml\,\ddot{z}(t) - mgl\,\theta(t) = 0

These equations may be rearranged as

\ddot{\theta}(t) = \frac{ml(M+m)g}{\Delta}\,\theta(t) - \frac{ml}{\Delta}\,u(t)   (12.80a)

\ddot{z}(t) = -\frac{m^2 l^2 g}{\Delta}\,\theta(t) + \frac{(J + ml^2)}{\Delta}\,u(t)   (12.80b)

where
Δ = (M + m)J + Mml²

Suppose that the system parameters are M = 1 kg, m = 0.15 kg, and l = 0.5 m. Recall that g = 9.81 m/sec².
For these parameters, we have from Eqns (12.80),
\ddot{\theta}(t) = 16.3106\,\theta(t) - 1.4458\,u(t)   (12.81a)
\ddot{z}(t) = -1.0637\,\theta(t) + 0.9639\,u(t)   (12.81b)
Choosing the states x1 = θ, x2 = θ̇, x3 = z, and x4 = ż, we obtain the following state model for the
inverted pendulum on moving cart.
\dot{x} = Ax + bu   (12.82)
with
A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 16.3106 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ -1.0637 & 0 & 0 & 0 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ -1.4458 \\ 0 \\ 0.9639 \end{bmatrix}
The plant (12.82) is said to be completely controllable if every state x(t0) can be affected or
controlled to reach a desired state in finite time by some unconstrained control u(t). Shortly we will see
that the plant (12.82) satisfies this condition and therefore a solution exists to the following control
problem:
Move the cart from one location to another without causing the pendulum to fall.
The solution to this control problem is not unique. We normally look for a feedback control scheme
so that the destabilizing effects of disturbance forces (due to wind, for example) are filtered out.
Figure 12.15a shows a state-feedback control scheme for stabilizing the inverted pendulum. The closed-
loop system is formed by feeding back the state variables through a real constant matrix k;
u(t) = – kx(t)
The closed-loop system is thus described by
\dot{x}(t) = (A – bk)x(t)

Fig. 12.15 Control system with state feedback


The design objective in this case is to find the feedback matrix k such that the closed-loop system is stable.
The existence of solution to this design problem is directly based on the controllability property of the plant
(12.82).
Implementation of the state-feedback control solution requires access to all the state variables of the
plant model. In many control situations of interest, it is possible to install sensors to measure all the state
variables. This may not be possible or practical in some cases. For example, if the plant model includes
non-physical state variables, measurement of these variables using physical sensors is not possible.
Accuracy requirements or cost considerations may prohibit the use of sensors for some physical
variables also.
The input and the output of a system are always physical quantities, and are normally easily accessible
to measurement. We therefore need a subsystem that performs the estimation of state variables based on
the information received from the input u(t) and the output y(t). This subsystem is called an observer
whose design is based on observability property of the controlled system.
The plant (12.82) is said to be completely observable if all the state variables in x(t) can be observed
from the measurements of the output y(t) = q (t) and the input u(t). Shortly we will see that the plant
(12.82) does not satisfy this condition, and therefore a solution to the observer-design problem does not
exist when the inputs to the observer subsystem are u(t) and q (t).
Cart position z(t) is easily accessible to measurement and, as we shall see, the observability condition
is satisfied with this choice of input information to the observer subsystem. Figure 12.15b shows the
block diagram of the closed-loop system with an observer that estimates the state vector from
measurements of u(t) and z(t). The observed or estimated state vector, designated as x̂, is then used to
generate the control u through the feedback matrix k.

A study of controllability and observability properties, presented in this section, provides a basis for the
state-feedback design problems (Chapter 13). Further, these properties establish the conditions for complete
equivalence between the state variable and transfer function representations (Section 12.8).
12.7.1 Definitions of Controllability and Observability
In this section, we study the controllability and observability of linear time-invariant systems described by
state variable model of the following form:
\dot{x}(t) = Ax(t) + bu(t)   (12.83a)
y(t) = cx(t) + du(t)   (12.83b)
where A, b, c and d are respectively n × n, n × 1, 1 × n and 1 × 1 matrices, x(t) is the n × 1 state vector, and y(t) and
u(t) are, respectively, the output and input variables.

Controllability  For the linear system given by (12.83), if there exists an input u(t), defined over the interval
[0, t1], which transfers the initial state x(0) ≜ x0 to the state x1 in a finite time t1, the state x0 is said to be
controllable. If all initial states are controllable, the system is said to be completely controllable, or simply
controllable. Otherwise, the system is said to be uncontrollable.
From Eqn. (12.70), the solution of Eqn. (12.83a) is

x(t) = e^{At}x^0 + \int_0^t e^{A(t-\tau)}bu(\tau)\,d\tau

To study the controllability property, we may assume without loss of generality that x1 ≡ 0. Therefore if
the system (12.83) is controllable, there exists an input u(t) over [0, t1] such that

-x^0 = \int_0^{t_1} e^{-A\tau} b\,u(\tau)\,d\tau   (12.84)
From this equation, we observe that complete controllability of a system depends on A and b, and is
independent of output matrix c. The controllability of the system (12.83) is frequently referred to as the
controllability of the pair {A, b}.
It may be noted that according to the definition of controllability, there is no constraint imposed on the
input or on the trajectory that the state should follow. Further, the system is said to be uncontrollable although
it may be ‘controllable in part’.
From the definition of controllability, we observe that by complete controllability of a plant, we mean that
we can make the plant do whatever we please. Perhaps this definition is too restrictive in the sense that we
are asking too much of the plant. But if we are able to show that system equations satisfy this definition,
certainly there can be no intrinsic limitation on the design of the control system for the plant. However, if the
system turns out to be uncontrollable, it does not necessarily mean that the plant can never be operated in a
satisfactory manner. Provided that a control system will maintain the important variables in an acceptable
region, the fact that the plant is not completely controllable is immaterial.
Another important point which the reader must bear in mind is that almost all physical systems are
nonlinear in nature to a certain extent and a linear model is obtained after making certain approximations.
Small perturbations of the elements of A and b may cause an uncontrollable system to become controllable.
A common source of uncontrollable state variable models arises when redundant state variables are
defined. No one would intentionally use more state variables than the minimum number needed to
characterize the behaviour of the dynamic system. In a complex system with unfamiliar physics, one may be
tempted to write down differential equations for everything in sight and in doing so, may write down more
equations than are necessary. This will invariably result in an uncontrollable model for the system.
Observability For the linear system given by Eqns (12.83), if the knowledge of the output y and the input
u over a finite interval of time [0, t1] suffices to determine the state x(0) ≜ x0, the state x0 is said to be
observable. If all initial states are observable, the system is said to be completely observable, or simply
observable. Otherwise, the system is said to be unobservable.
The output of the system (12.83) is given by

y(t) = c\,e^{At}x^0 + c\int_0^t e^{A(t-\tau)}bu(\tau)\,d\tau + du(t)

The output and the input can be measured and used, so that the following signal η(t) can be obtained from
u and y.

\eta(t) \triangleq y(t) - c\int_0^t e^{A(t-\tau)}bu(\tau)\,d\tau - du(t) = c\,e^{At}x^0   (12.85)

Premultiplying by e^{A^T t}c^T and integrating from 0 to t1 gives

\left[\int_0^{t_1} e^{A^T t}c^T c\,e^{At}\,dt\right] x^0 = \int_0^{t_1} e^{A^T t}c^T \eta(t)\,dt   (12.86)

When the signal η(t) is available over a time interval [0, t1], and the system (12.83) is observable, then
the initial state x0 can be uniquely determined from Eqn. (12.86).
From Eqn. (12.86) we see that complete observability of a system depends on A and c, and is independent
of b. The observability of the system (12.83) is frequently referred to as the observability of the pair {A, c}.
Note that the system is said to be unobservable, although it may be ‘observable in part’. For plants that are
not completely observable, one may examine feedback control schemes which do not require complete state
feedback.
12.7.2 Controllability Test
It is difficult to guess whether a system is controllable or not from the defining Eqn. (12.84). Some simple
mathematical tests which answer the question of controllability have been developed. The following theorem
gives a simple controllability test4.
Theorem 12.1 The necessary and sufficient condition for the system (12.83) to be completely controllable
is that the n × n controllability matrix
U ≜ [b \;\; Ab \;\; A^2b \;\; \cdots \;\; A^{n-1}b]   (12.87)
has rank equal to n, i.e., ρ(U) = n.

Example 12.10 Recall the inverted pendulum of Example 12.9, shown in Fig. 12.13, in which the
object is to apply a force u(t) so that the pendulum remains balanced in the vertical position. We found
the linearized equations governing the system to be:

\dot{x} = Ax + bu
where x = [θ  θ̇  z  ż]^T

A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 16.3106 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ -1.0637 & 0 & 0 & 0 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ -1.4458 \\ 0 \\ 0.9639 \end{bmatrix}

z(t) = horizontal displacement of the pivot on the cart
θ(t) = rotational angle of the pendulum.
To check the controllability of this system, we compute the controllability matrix U:

U = [b \;\; Ab \;\; A^2b \;\; A^3b] = \begin{bmatrix} 0 & -1.4458 & 0 & -23.5816 \\ -1.4458 & 0 & -23.5816 & 0 \\ 0 & 0.9639 & 0 & 1.5379 \\ 0.9639 & 0 & 1.5379 & 0 \end{bmatrix}
Since |U| = 420.4851, U has full rank, and by Theorem 12.1, the system is completely controllable.
Thus if the angle q departs from equilibrium by a small amount, a control always exists which will drive
it back to zero.5 Moreover, a control also exists which will drive both q and z, as well as their derivatives,
to zero.
⁴ For proof, refer Chapter 5 of reference [155].
⁵ This justifies the assumption that θ(t) ≅ 0, provided we choose an appropriate control strategy.
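The rank test of Theorem 12.1 is easy to automate. The following sketch is illustrative only (it assumes NumPy); it builds U for the inverted-pendulum model above and confirms that ρ(U) = 4.

```python
import numpy as np

A = np.array([[0.0,      1.0, 0.0, 0.0],
              [16.3106,  0.0, 0.0, 0.0],
              [0.0,      0.0, 0.0, 1.0],
              [-1.0637,  0.0, 0.0, 0.0]])
b = np.array([[0.0], [-1.4458], [0.0], [0.9639]])

# Controllability matrix U = [b  Ab  A^2 b  A^3 b]
U = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(4)])
print(np.linalg.matrix_rank(U))        # 4  -> completely controllable
print(np.linalg.det(U))                # ~ 420.5
```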

Example 12.11 Consider the electrical network shown in


Fig. 12.16. Differential equations governing the dynamics of this
network can be obtained by various standard methods. By use of
nodal analysis, for example, we get

C_1\frac{de_1}{dt} + \frac{e_1 - e_2}{R_3} + \frac{e_1 - e_0}{R_1} = 0

C_2\frac{de_2}{dt} + \frac{e_2 - e_1}{R_3} + \frac{e_2 - e_0}{R_2} = 0

The appropriate state variables for the network are the capacitor voltages e1 and e2. Thus, the state
equations of the network are
\dot{x} = Ax + be_0
where x = [e1  e2]^T

A = \begin{bmatrix} -\left(\frac{1}{R_1} + \frac{1}{R_3}\right)\frac{1}{C_1} & \frac{1}{R_3 C_1} \\ \frac{1}{R_3 C_2} & -\left(\frac{1}{R_2} + \frac{1}{R_3}\right)\frac{1}{C_2} \end{bmatrix}; \quad b = \begin{bmatrix} \frac{1}{R_1 C_1} \\ \frac{1}{R_2 C_2} \end{bmatrix}

The controllability matrix of the system is

U = [b \;\; Ab] = \begin{bmatrix} \frac{1}{R_1 C_1} & -\frac{1}{(R_1 C_1)^2} + \frac{1}{R_3 C_1}\left(\frac{1}{R_2 C_2} - \frac{1}{R_1 C_1}\right) \\ \frac{1}{R_2 C_2} & -\frac{1}{(R_2 C_2)^2} + \frac{1}{R_3 C_2}\left(\frac{1}{R_1 C_1} - \frac{1}{R_2 C_2}\right) \end{bmatrix}
We see that under the condition
R1C1 = R2C2,
ρ(U) = 1 and the system becomes 'uncontrollable'. This condition is the one required to balance the
bridge, and in this case, the voltage across the terminals of R3 cannot be influenced by the input e0.

12.7.3 Observability Test


The following theorem gives a simple observability test6.
Theorem 12.2 The necessary and sufficient condition for the system (12.83) to be completely observable
is that the n × n observability matrix

V ≜ \begin{bmatrix} c \\ cA \\ \vdots \\ cA^{n-1} \end{bmatrix}   (12.88)

has rank equal to n, i.e., ρ(V) = n.
⁶ For proof, refer Chapter 5 of reference [155].

Example 12.12  We now return to the inverted pendulum of Example 12.10. Assuming that the only
output variable for measurement is θ(t), the position of the pendulum, then the linearized equations
governing the system are

\dot{x} = Ax + bu
y = cx

where
A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 16.3106 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ -1.0637 & 0 & 0 & 0 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ -1.4458 \\ 0 \\ 0.9639 \end{bmatrix}; \quad c = [1 \;\; 0 \;\; 0 \;\; 0]

The observability matrix

V = \begin{bmatrix} c \\ cA \\ cA^2 \\ cA^3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 16.3106 & 0 & 0 & 0 \\ 0 & 16.3106 & 0 & 0 \end{bmatrix}

|V| = 0, and therefore by Theorem 12.2, the system is not completely observable.
Consider now the displacement z(t) of the cart as the output variable. Then
c = [0 \;\; 0 \;\; 1 \;\; 0]
and the observability matrix

V = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1.0637 & 0 & 0 & 0 \\ 0 & -1.0637 & 0 & 0 \end{bmatrix}

|V| = 1.1315 ≠ 0; the system is therefore completely observable. The values of ż(t), θ(t) and θ̇(t) can
all be determined by observing z(t) over an arbitrary time interval.
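A similar numerical check of Theorem 12.2 for the two output choices of this example is sketched below (illustrative only; assumes NumPy).

```python
import numpy as np

A = np.array([[0.0,      1.0, 0.0, 0.0],
              [16.3106,  0.0, 0.0, 0.0],
              [0.0,      0.0, 0.0, 1.0],
              [-1.0637,  0.0, 0.0, 0.0]])

def obsv_rank(c):
    # Observability matrix V = [c; cA; cA^2; cA^3]
    V = np.vstack([c @ np.linalg.matrix_power(A, i) for i in range(4)])
    return np.linalg.matrix_rank(V)

print(obsv_rank(np.array([[1.0, 0.0, 0.0, 0.0]])))   # 2: y = theta is not enough
print(obsv_rank(np.array([[0.0, 0.0, 1.0, 0.0]])))   # 4: y = z makes the system observable
```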

12.7.4 Invariance Property


It is recalled that the state variable model for a system is not unique, but depends on the choice of a set of
state variables. A transformation
x(t) = P\bar{x}(t); P is a nonsingular constant matrix,
results in the following alternative state variable model (refer Eqns (12.17)) for the system (12.83):
\dot{\bar{x}}(t) = \bar{A}\bar{x}(t) + \bar{b}u(t); \quad \bar{x}(t_0) = P^{-1}x(t_0)
y(t) = \bar{c}\bar{x}(t) + du(t)
where
\bar{A} = P^{-1}AP, \quad \bar{b} = P^{-1}b, \quad \bar{c} = cP

The definition of a new set of internal state variables should evidently not affect the controllability and
observability properties. This may be verified by evaluating the controllability and observability matrices of
the transformed system.
I. \bar{U} = [\bar{b} \;\; \bar{A}\bar{b} \;\; \cdots \;\; (\bar{A})^{n-1}\bar{b}]
\bar{b} = P^{-1}b
\bar{A}\bar{b} = P^{-1}APP^{-1}b = P^{-1}Ab
(\bar{A})^2\bar{b} = \bar{A}(\bar{A}\bar{b}) = P^{-1}APP^{-1}Ab = P^{-1}A^2b
⋮
(\bar{A})^{n-1}\bar{b} = P^{-1}A^{n-1}b
Therefore,
\bar{U} = [P^{-1}b \;\; P^{-1}Ab \;\; \cdots \;\; P^{-1}A^{n-1}b] = P^{-1}U
where U = [b \;\; Ab \;\; \cdots \;\; A^{n-1}b]
Since P^{-1} is nonsingular,
ρ(\bar{U}) = ρ(U)
II. A similar relationship can be shown for the observability matrices.

12.8 EQUIVALENCE BETWEEN TRANSFER FUNCTION AND STATE VARIABLE REPRESENTATIONS

In frequency-domain analysis, it is tacitly assumed that the dynamic properties of a system are completely
determined by the transfer function of the system. That this is not always the case is illustrated by the
following examples.

Example 12.13 Consider the system

\dot{x} = Ax + bu
y = cx   (12.89)
with
A = \begin{bmatrix} -2 & 1 \\ 1 & -2 \end{bmatrix}; \quad b = \begin{bmatrix} 1 \\ 1 \end{bmatrix}; \quad c = [0 \;\; 1]
The controllability matrix
U = [b \;\; Ab] = \begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix}
Since ρ(U) = 1, the second-order system (12.89) is not completely controllable.
The eigenvalues of matrix A are the roots of the characteristic equation

|sI - A| = \begin{vmatrix} s+2 & -1 \\ -1 & s+2 \end{vmatrix} = 0

The eigenvalues are obtained as –1, –3. The modes of the transient response are therefore e^{–t} and e^{–3t}.
The transfer function of the system (12.89) is calculated as

G(s) = c(sI - A)^{-1}b = [0 \;\; 1]\begin{bmatrix} s+2 & -1 \\ -1 & s+2 \end{bmatrix}^{-1}\begin{bmatrix} 1 \\ 1 \end{bmatrix}

= [0 \;\; 1]\begin{bmatrix} \frac{s+2}{(s+1)(s+3)} & \frac{1}{(s+1)(s+3)} \\ \frac{1}{(s+1)(s+3)} & \frac{s+2}{(s+1)(s+3)} \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \frac{1}{s+1}
We find that because of pole-zero cancellation, both the eigenvalues of matrix A do not appear as poles
in G(s). The dynamic mode e–3t of the system (12.89) does not show up in input-output characterization
given by the transfer function G(s). Note that the system under consideration is not a completely
controllable system.
Example 12.14 Consider the system

\dot{x} = Ax + bu
y = cx   (12.90)
with
A = \begin{bmatrix} -2 & 1 \\ 1 & -2 \end{bmatrix}; \quad b = \begin{bmatrix} 1 \\ 0 \end{bmatrix}; \quad c = [1 \;\; -1]
The observability matrix
V = \begin{bmatrix} c \\ cA \end{bmatrix} = \begin{bmatrix} 1 & -1 \\ -3 & 3 \end{bmatrix}
Since ρ(V) = 1, the second-order system (12.90) is not completely observable.
The eigenvalues of matrix A are –1, –3. The transfer function of the system (12.90) is calculated as

G(s) = c(sI - A)^{-1}b = [1 \;\; -1]\begin{bmatrix} \frac{s+2}{(s+1)(s+3)} & \frac{1}{(s+1)(s+3)} \\ \frac{1}{(s+1)(s+3)} & \frac{s+2}{(s+1)(s+3)} \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \frac{1}{s+3}
Î
The dynamic mode e–t of the system (12.90) does not show up in the input-output characterization
given by the transfer function G(s). Note that the system under consideration is not a completely
observable system.
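The pole-zero cancellations of Examples 12.13 and 12.14 can also be seen numerically by converting each state model to a transfer function. The sketch below is illustrative only and assumes SciPy's ss2tf is available.

```python
import numpy as np
from scipy.signal import ss2tf

A = np.array([[-2.0, 1.0], [1.0, -2.0]])

# Example 12.13: uncontrollable pair {A, b}, c = [0 1]
num, den = ss2tf(A, np.array([[1.0], [1.0]]),
                 np.array([[0.0, 1.0]]), np.array([[0.0]]))
print(num, den)    # numerator ~ (s + 3), denominator (s + 1)(s + 3): the pole at -3 cancels

# Example 12.14: unobservable pair {A, c}, b = [1 0]^T
num, den = ss2tf(A, np.array([[1.0], [0.0]]),
                 np.array([[1.0, -1.0]]), np.array([[0.0]]))
print(num, den)    # numerator ~ (s + 1): the pole at -1 cancels
```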

In the following, we give two specific state transformations to reveal the underlying structure imposed
upon a system by its controllability and observability properties (for proof, refer [105]). These results are
then used to establish equivalence between transfer function and state variable representations.

Theorem 12.3 Consider an nth-order system


\dot{x} = Ax + bu
y = cx   (12.91a)
Assume that
ρ(U) = ρ[b \;\; Ab \;\; \cdots \;\; A^{n-1}b] = m < n
There exists an equivalence transformation
x = P\bar{x}   (12.91b)
that transforms the system (12.91a) to the following form:

\begin{bmatrix} \dot{\bar{x}}_1 \\ \dot{\bar{x}}_2 \end{bmatrix} = \begin{bmatrix} \bar{A}_c & \bar{A}_{12} \\ 0 & \bar{A}_{22} \end{bmatrix}\begin{bmatrix} \bar{x}_1 \\ \bar{x}_2 \end{bmatrix} + \begin{bmatrix} \bar{b}_c \\ 0 \end{bmatrix} u = \bar{A}\bar{x} + \bar{b}u   (12.91c)

y = [\bar{c}_1 \;\; \bar{c}_2]\begin{bmatrix} \bar{x}_1 \\ \bar{x}_2 \end{bmatrix} = \bar{c}\bar{x}

where the m-dimensional subsystem

\dot{\bar{x}}_1 = \bar{A}_c\bar{x}_1 + \bar{b}_c u + \bar{A}_{12}\bar{x}_2

is controllable from u (the additional driving term \bar{A}_{12}\bar{x}_2 has no effect on controllability), and the (n – m)
dimensional subsystem
\dot{\bar{x}}_2 = \bar{A}_{22}\bar{x}_2
is not affected by the input, and is therefore entirely uncontrollable.
This theorem shows that any system which is not completely controllable can be decomposed into
controllable and uncontrollable subsystems shown in Fig. 12.17. The state model (12.91c) is said to be in
controllability canonical form.

Fig. 12.17 The controllability canonical form of a state variable model

In Section 12.4, it was shown that the characteristic equations and transfer functions of equivalent systems
are identical. Thus, the set of eigenvalues of matrix A of system (12.91a) is the same as the set of eigenvalues of
matrix \bar{A} of system (12.91c), which is a union of the subsets of eigenvalues of matrices \bar{A}_c and \bar{A}_{22}. Also
the transfer function of system (12.91a) must be the same as that of (12.91c). The transfer function of
(12.91a) is calculated from Eqn. (12.91c) as⁷

G(s) = [\bar{c}_1 \;\; \bar{c}_2]\begin{bmatrix} sI - \bar{A}_c & -\bar{A}_{12} \\ 0 & sI - \bar{A}_{22} \end{bmatrix}^{-1}\begin{bmatrix} \bar{b}_c \\ 0 \end{bmatrix}

= [\bar{c}_1 \;\; \bar{c}_2]\begin{bmatrix} (sI - \bar{A}_c)^{-1} & (sI - \bar{A}_c)^{-1}\bar{A}_{12}(sI - \bar{A}_{22})^{-1} \\ 0 & (sI - \bar{A}_{22})^{-1} \end{bmatrix}\begin{bmatrix} \bar{b}_c \\ 0 \end{bmatrix}

= \bar{c}_1(sI - \bar{A}_c)^{-1}\bar{b}_c

Therefore, the input–output relationship for the system is dependent only on the controllable part of the
system. We will refer to the eigenvalues of \bar{A}_c as controllable poles and the eigenvalues of \bar{A}_{22} as
uncontrollable poles.
Only the controllable poles appear in the transfer function model; the uncontrollable poles are cancelled
by the zeros.
Theorem 12.4  Consider the nth-order system
\dot{x} = Ax + bu
y = cx   (12.92a)
Assume that

ρ(V) = ρ\begin{bmatrix} c \\ cA \\ \vdots \\ cA^{n-1} \end{bmatrix} = l < n

There exists an equivalence transformation
x = Q\bar{x}   (12.92b)
that transforms the system (12.92a) to the following form:

\begin{bmatrix} \dot{\bar{x}}_1 \\ \dot{\bar{x}}_2 \end{bmatrix} = \begin{bmatrix} \bar{A}_0 & 0 \\ \bar{A}_{21} & \bar{A}_{22} \end{bmatrix}\begin{bmatrix} \bar{x}_1 \\ \bar{x}_2 \end{bmatrix} + \begin{bmatrix} \bar{b}_1 \\ \bar{b}_2 \end{bmatrix} u = \bar{A}\bar{x} + \bar{b}u   (12.92c)

y = [\bar{c}_0 \;\; 0]\begin{bmatrix} \bar{x}_1 \\ \bar{x}_2 \end{bmatrix} = \bar{c}\bar{x}

⁷ \begin{bmatrix} A_1 & A_2 \\ 0 & A_3 \end{bmatrix}\begin{bmatrix} B_1 & B_2 \\ B_3 & B_4 \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix} gives \begin{bmatrix} B_1 & B_2 \\ B_3 & B_4 \end{bmatrix} = \begin{bmatrix} A_1^{-1} & -A_1^{-1}A_2A_3^{-1} \\ 0 & A_3^{-1} \end{bmatrix}

where the l-dimensional subsystem

\dot{\bar{x}}_1 = \bar{A}_0\bar{x}_1 + \bar{b}_1 u
y = \bar{c}_0\bar{x}_1

is observable from y, and the (n – l)-dimensional subsystem

\dot{\bar{x}}_2 = \bar{A}_{22}\bar{x}_2 + \bar{b}_2 u + \bar{A}_{21}\bar{x}_1

has no effect upon the output y, and is therefore entirely unobservable, i.e., nothing about \bar{x}_2 can be
inferred from output measurement.
This theorem shows that any system which is not completely observable can be decomposed into the
observable and unobservable subsystems shown in Fig. 12.18. The state model (12.92c) is said to be in
observability canonical form.

Fig. 12.18  The observability canonical form of a state variable model

Since systems (12.92a) and (12.92c) are equivalent, the set of eigenvalues of matrix A of system (12.92a)
is the same as the set of eigenvalues of matrix \bar{A} of system (12.92c), which is a union of the subsets of
eigenvalues of matrices \bar{A}_0 and \bar{A}_{22}. The transfer function of the system (12.92a) may be calculated
from (12.92c) as follows:

G(s) = [\bar{c}_0 \;\; 0]\begin{bmatrix} sI - \bar{A}_0 & 0 \\ -\bar{A}_{21} & sI - \bar{A}_{22} \end{bmatrix}^{-1}\begin{bmatrix} \bar{b}_1 \\ \bar{b}_2 \end{bmatrix}

= [\bar{c}_0 \;\; 0]\begin{bmatrix} (sI - \bar{A}_0)^{-1} & 0 \\ (sI - \bar{A}_{22})^{-1}\bar{A}_{21}(sI - \bar{A}_0)^{-1} & (sI - \bar{A}_{22})^{-1} \end{bmatrix}\begin{bmatrix} \bar{b}_1 \\ \bar{b}_2 \end{bmatrix}

= \bar{c}_0(sI - \bar{A}_0)^{-1}\bar{b}_1   (12.93)

which shows that the unobservable part of the system does not affect the input–output relationship. We will
refer to the eigenvalues of \bar{A}_0 as observable poles and the eigenvalues of \bar{A}_{22} as unobservable poles.
We now examine the use of state variable and transfer function models of a system to study its dynamic
properties.
We know that a system is asymptotically stable if all the eigenvalues of the characteristic matrix A of its
state variable model are in the left-half of complex plane. Also we know that a system is (bounded-input
bounded-output) BIBO stable if all the poles of its transfer function model are in the left-half of complex
plane. Since, in general, the poles of the transfer function model of a system are a subset of the eigenvalues
of the characteristic matrix A of the system, asymptotic stability always implies BIBO stability.
The reverse, however, may not always be true because the eigenvalues of the uncontrollable and/or
unobservable part of the system are hidden from the BIBO stability analysis. These may lead to instability of
a BIBO stable system. When a state variable model is both controllable and observable, all the eigenvalues
of the characteristic matrix A appear as poles in the corresponding transfer function. Therefore, BIBO stability
implies asymptotic stability only for a completely controllable and completely observable system.
To conclude, we may say that the transfer function model of a system represents its complete dynamics
only if the system is both controllable and observable.

Review Examples

Review Example 12.1 A feedback system has a closed-loop transfer function

\frac{Y(s)}{R(s)} = \frac{10(s+4)}{s(s+1)(s+3)}
Construct three different state models for this system:
(a) One where the system matrix A is a diagonal matrix.
(b) One where A is in first companion form.
(c) One where A is in second companion form.
Solution
(a) The given transfer function can be expressed as

\frac{Y(s)}{R(s)} = \frac{10(s+4)}{s(s+1)(s+3)} = \frac{40/3}{s} + \frac{-15}{s+1} + \frac{5/3}{s+3}

Therefore,

Y(s) = \frac{40/3}{s}R(s) + \frac{-15}{s+1}R(s) + \frac{5/3}{s+3}R(s)

Let X_1(s) = \frac{40/3}{s}R(s); this gives \dot{x}_1 = \frac{40}{3}r

X_2(s) = \frac{-15}{s+1}R(s); this gives \dot{x}_2 + x_2 = -15r

X_3(s) = \frac{5/3}{s+3}R(s); this gives \dot{x}_3 + 3x_3 = \frac{5}{3}r

In terms of x1, x2 and x3, the output y(t) is given by
y(t) = x1(t) + x2(t) + x3(t)
A state variable formulation for the given transfer function is defined by the following matrices:

\Lambda = \begin{bmatrix} 0 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -3 \end{bmatrix}; \quad b = \begin{bmatrix} 40/3 \\ -15 \\ 5/3 \end{bmatrix}; \quad c = [1 \;\; 1 \;\; 1]; \quad d = 0

Note that the coefficient matrix Λ is diagonal, and the state model is in Jordan canonical form.
We now construct two state models for the given transfer function in companion form. To do this, we
express the transfer function as

\frac{Y(s)}{R(s)} = \frac{10(s+4)}{s(s+1)(s+3)} = \frac{10s + 40}{s^3 + 4s^2 + 3s} = \frac{\beta_0 s^3 + \beta_1 s^2 + \beta_2 s + \beta_3}{s^3 + \alpha_1 s^2 + \alpha_2 s + \alpha_3};

\beta_0 = \beta_1 = 0, \; \beta_2 = 10, \; \beta_3 = 40, \; \alpha_1 = 4, \; \alpha_2 = 3, \; \alpha_3 = 0

(b) With reference to Eqns (12.44), we obtain the following state model in the first companion form:

A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & -3 & -4 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}; \quad c = [40 \;\; 10 \;\; 0]; \quad d = 0

(c) With reference to Eqns (12.46), the state model in second companion form becomes

A = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & -3 \\ 0 & 1 & -4 \end{bmatrix}; \quad b = \begin{bmatrix} 40 \\ 10 \\ 0 \end{bmatrix}; \quad c = [0 \;\; 0 \;\; 1]; \quad d = 0

Review Example 12.2 A linear time-invariant system is characterized by the homogeneous state
equation

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & -2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}

(a) Compute the solution of the homogeneous equation assuming the initial state vector

x(0) = \begin{bmatrix} 1 \\ 0 \end{bmatrix}

(b) Consider now that the system has a forcing function and is represented by the following
nonhomogeneous state equation:

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & -2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u

where u is a unit step input.
Compute the solution of this equation assuming initial conditions of part (a).
Solution
(a) Since

(sI - A) = \begin{bmatrix} s & -1 \\ 0 & s+2 \end{bmatrix}

we obtain

(sI - A)^{-1} = \begin{bmatrix} \frac{1}{s} & \frac{1}{s(s+2)} \\ 0 & \frac{1}{s+2} \end{bmatrix}

Hence

e^{At} = L^{-1}[(sI - A)^{-1}] = \begin{bmatrix} 1 & \frac{1}{2}(1 - e^{-2t}) \\ 0 & e^{-2t} \end{bmatrix}

x(t) = e^{At}x(0) = \begin{bmatrix} 1 \\ 0 \end{bmatrix}

(b) x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}bu(\tau)\,d\tau

Now

\int_0^t e^{A(t-\tau)}bu(\tau)\,d\tau = \begin{bmatrix} \frac{1}{2}\int_0^t [1 - e^{-2(t-\tau)}]\,d\tau \\ \int_0^t e^{-2(t-\tau)}\,d\tau \end{bmatrix} = \begin{bmatrix} -\frac{1}{4} + \frac{1}{2}t + \frac{1}{4}e^{-2t} \\ \frac{1}{2}(1 - e^{-2t}) \end{bmatrix}

Therefore,

x_1(t) = \frac{3}{4} + \frac{1}{2}t + \frac{1}{4}e^{-2t}

x_2(t) = \frac{1}{2}(1 - e^{-2t})
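The closed-form solution can be cross-checked by simulation; the sketch below is illustrative only (it assumes SciPy's lsim) and integrates the state equation from x(0) = [1 0]^T under a unit-step input.

```python
import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0], [0.0, -2.0]])
b = np.array([[0.0], [1.0]])
sys = signal.StateSpace(A, b, np.eye(2), np.zeros((2, 1)))   # output the full state

t = np.linspace(0.0, 4.0, 401)
u = np.ones_like(t)                          # unit step
_, _, x = signal.lsim(sys, u, t, X0=[1.0, 0.0])

# Closed-form solution derived above
x1 = 0.75 + 0.5*t + 0.25*np.exp(-2*t)
x2 = 0.5*(1 - np.exp(-2*t))
print(np.max(np.abs(x[:, 0] - x1)), np.max(np.abs(x[:, 1] - x2)))   # both ~ 0
```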
Review Example 12.3  The motion of a satellite in the equatorial (r, θ) plane is given by the state
equation [122]

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 3\omega^2 & 0 & 0 & 2\omega \\ 0 & 0 & 0 & 1 \\ 0 & -2\omega & 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} u_1 + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} u_2

where ω is the angular frequency of the satellite in circular, equatorial orbit, x1(t) and x3(t) are,
respectively, the deviations in position variables r(t) and θ(t) of the satellite, and x2(t) and x4(t) are,
respectively, the deviations in velocity variables ṙ(t) and θ̇(t). The inputs u1(t) and u2(t) are the thrusts
u_r and u_θ in the radial and tangential directions respectively, applied by small rocket engines or gas jets
(u = 0 when x = 0). Sensors have been installed for measuring r(t) and θ(t).
(a) Suppose that the tangential thruster becomes inoperable. Determine the controllability of the
system with the radial thruster alone.
(b) Suppose that the radial thruster becomes inoperable. Determine the controllability of the system
with the tangential thruster alone.
(c) Suppose that the tangential measuring device becomes inoperable. Determine the observability of
the system from radial position measurement (x1 = r) alone.
(d) Suppose that the radial measurements are lost. Determine the observability of the system from
tangential position measurement (x3 = θ) alone.
Solution
(a) With u2 = 0,

b = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}

The controllability matrix

U = [b \;\; Ab \;\; A^2b \;\; A^3b] = \begin{bmatrix} 0 & 1 & 0 & -\omega^2 \\ 1 & 0 & -\omega^2 & 0 \\ 0 & 0 & -2\omega & 0 \\ 0 & -2\omega & 0 & 2\omega^3 \end{bmatrix}

|U| = -\begin{vmatrix} 1 & 0 & -\omega^2 \\ 0 & -2\omega & 0 \\ -2\omega & 0 & 2\omega^3 \end{vmatrix} = -[-2\omega(2\omega^3 - 2\omega^3)] = 0

Therefore, ρ(U) < 4, and the system is not completely controllable with u1 alone.
(b) With u1 = 0,

b = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}

The controllability matrix

U = \begin{bmatrix} 0 & 0 & 2\omega & 0 \\ 0 & 2\omega & 0 & -2\omega^3 \\ 0 & 1 & 0 & -4\omega^2 \\ 1 & 0 & -4\omega^2 & 0 \end{bmatrix}

|U| = -\begin{vmatrix} 0 & 2\omega & 0 \\ 2\omega & 0 & -2\omega^3 \\ 1 & 0 & -4\omega^2 \end{vmatrix} = -12\omega^4 \ne 0

Therefore, ρ(U) = 4, and the system is completely controllable with u2 alone.


(c) With y = x1,
c = [1 \;\; 0 \;\; 0 \;\; 0]
The observability matrix

V = \begin{bmatrix} c \\ cA \\ cA^2 \\ cA^3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 3\omega^2 & 0 & 0 & 2\omega \\ 0 & -\omega^2 & 0 & 0 \end{bmatrix}

|V| = 0
Therefore, ρ(V) < 4, and the system is not completely observable from y = x1 alone.
(d) With y = x3,
c = [0 \;\; 0 \;\; 1 \;\; 0]
The observability matrix

V = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & -2\omega & 0 & 0 \\ -6\omega^3 & 0 & 0 & -4\omega^2 \end{bmatrix}

|V| = -12\omega^4 \ne 0

Therefore, ρ(V) = 4, and the system is completely observable from y = x3 alone.
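The four rank tests of this example can be verified numerically for any nonzero ω. The sketch below is illustrative only (the value ω = 0.5 is an arbitrary choice) and assumes NumPy.

```python
import numpy as np

w = 0.5                                      # any nonzero orbital rate
A = np.array([[0.0,     1.0,  0.0, 0.0],
              [3*w**2,  0.0,  0.0, 2*w],
              [0.0,     0.0,  0.0, 1.0],
              [0.0,    -2*w,  0.0, 0.0]])
b1 = np.array([[0.0], [1.0], [0.0], [0.0]])  # radial thruster
b2 = np.array([[0.0], [0.0], [0.0], [1.0]])  # tangential thruster
c1 = np.array([[1.0, 0.0, 0.0, 0.0]])        # radial position measurement
c3 = np.array([[0.0, 0.0, 1.0, 0.0]])        # tangential position measurement

ctrb = lambda b: np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(4)])
obsv = lambda c: np.vstack([c @ np.linalg.matrix_power(A, i) for i in range(4)])

print(np.linalg.matrix_rank(ctrb(b1)), np.linalg.matrix_rank(ctrb(b2)))   # 3, 4
print(np.linalg.matrix_rank(obsv(c1)), np.linalg.matrix_rank(obsv(c3)))   # 3, 4
```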
Review Example 12.4 Consider state equation of a single-input system which includes delay in
control action:
\dot{x}(t) = Ax(t) + bu(t - t_D)   (12.94a)
where x is the n × 1 state vector, u is the scalar input, t_D is the dead-time, and A and b are, respectively, n × n
and n × 1 real constant matrices.
For discrete-time solution, the time axis is discretized (sampled) into intervals of width T;
t = kT, k = 0, 1, 2, ...; and u(t) is approximated by a staircase function, constant over the intervals. Given
the capabilities of the modern microprocessor, we can always adjust the sampling frequency slightly so
that
t_D = NT   (12.94b)
where N is an integer.
For the kth interval, u(t) = u(kT – NT); kT ≤ t < (k + 1)T; k = 0, 1, 2, .... Following the steps given in
Eqns (12.73)–(12.75), we obtain
x((k + 1)T) = Fx(kT) + gu(kT – NT)   (12.95)
where

F = e^{AT} \quad and \quad g = \int_0^T e^{A\theta} b\,d\theta

Let us introduce N new state variables defined below.
x_{n+1}(k) = u(k - N)
x_{n+2}(k) = u(k - N + 1)
⋮
x_{n+N}(k) = u(k - 1)
The augmented state equation becomes

\begin{bmatrix} x(k+1) \\ x_{n+1}(k+1) \\ x_{n+2}(k+1) \\ \vdots \\ x_{n+N-1}(k+1) \\ x_{n+N}(k+1) \end{bmatrix} = \begin{bmatrix} F & g & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & 0 & \cdots & 0 \end{bmatrix}\begin{bmatrix} x(k) \\ x_{n+1}(k) \\ x_{n+2}(k) \\ \vdots \\ x_{n+N-1}(k) \\ x_{n+N}(k) \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} u(k)   (12.96)

Review Questions

12.1 Given a single-input single-output state variable model


\dot{x} = Ax + bu
y = cx
Prove that the eigenvalues of matrix A are invariant under state transformation x = P\bar{x}; P is a constant
nonsingular matrix.
12.2 Given a single-input single-output state variable model
\dot{x} = Ax + bu
y = cx
(a) Prove that \frac{Y(s)}{U(s)} = G(s) = c(sI – A)^{-1}b.
(b) Prove that G(s) is invariant under state transformation x = P\bar{x}; P is a constant nonsingular matrix.
12.3 Is the state variable description for a given transfer function

G(s) = \frac{\beta_0 s^n + \beta_1 s^{n-1} + \cdots + \beta_{n-1}s + \beta_n}{s^n + \alpha_1 s^{n-1} + \cdots + \alpha_{n-1}s + \alpha_n}
unique? If not, explain how three different state variable descriptions for the given transfer function
can be obtained.
12.4 (a) Write two n × n general matrices, one having companion form and the other having Jordan form
structure.
(b) An nth-order state variable model in Jordan canonical form always yields n decoupled first-order
differential equations. Is the statement true? Justify your answer.
12.5 Given a single-input state variable equation
\dot{x} = Ax + bu
Prove that

x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}bu(\tau)\,d\tau
12.6 List the properties of the state transition matrix
φ(t) = e^{At}
of the system
\dot{x} = Ax; \quad x(0) ≜ x_0
Prove that
e^{At} = L^{-1}[(sI – A)^{-1}]
Why is the matrix exponential e^{At} called the state transition matrix?
12.7 Given a single-input state equation
\dot{x} = Ax + bu
For discrete-time solution, the equation can be discretized to the following form:
x(kT + T) = Fx(kT) + gu(kT); T = sampling interval
Find the matrices F and g.
12.8 A system is described by the single-input single-output state variable model
\dot{x} = Ax + bu
y = cx
What is the motivation behind the concept of controllability of the system? Give a precise definition
of controllability and describe a controllability test in terms of matrices A and b.
Prove that the controllability property is invariant under state transformation x = P\bar{x}; P is a constant
nonsingular matrix.
12.9 A system is described by the single-input single-output state variable model
\dot{x} = Ax + bu
y = cx
What is the motivation behind the concept of observability? Give a precise definition of observability
and describe an observability test in terms of matrices A and c.
Prove that the observability property is invariant under state transformation x = P\bar{x}; P is a constant
nonsingular matrix.
12.10 (a) Show that if a continuous-time linear time-invariant system is asymptotically stable, it is also
BIBO stable.
(b) Show that a BIBO stable continuous-time linear time-invariant system is asymptotically stable
only if the system is completely controllable and completely observable.
12.11 Prove that the transfer function model of the single-input single-output system
\dot{x} = Ax + bu
y = cx
is a complete characterization of the system only if the system is both controllable and observable.

Problems

12.1 Figure P12.1 shows a control scheme for controlling the azimuth angle of a rotating antenna. The
plant consists of an armature-controlled dc motor with dc generator used as an amplifier. The
parameters of the plant are given below:
Motor torque constant, KT = 1.2 N-m/amp
Motor back emf constant, Kb = 1.2 V/(rad/sec)
Generator gain constant, Kg = 100 V/amp
Motor to load gear ratio, n = θ̇_L/θ̇_M = 1/2
R_f = 21 Ω, L_f = 5 H, R_g = 9 Ω, L_g = 0.06 H, R_a = 10 Ω, L_a = 0.04 H,
J = 1.6 N-m/(rad/sec²), B = 0.04 N-m/(rad/sec), motor inertia and friction are negligible.
Taking physically meaningful and measurable variables as state variables, derive a state model for
the system.

Fig. P12.1
12.2 Figure P12.2 shows a position control system with state variable feedback. The plant consists of a
field-controlled dc motor with a dc amplifier. The parameters of the plant are given below:
Amplifier gain, KA = 50 volt/volt
Motor field resistance, R_f = 99 Ω
Motor field inductance, Lf = 20 H
Motor torque constant, KT = 10 N-m/amp
Moment of inertia of load, J = 0.5 N-m/(rad/sec²)
Coefficient of viscous friction of load, B = 0.5 N-m/(rad/sec)
Motor inertia and friction are negligible.
Taking x1 = θ, x2 = θ̇, and x3 = i_f as the state variables, u = e_f as the input, and y = θ as the output,
derive a state variable model for the plant.


Fig. P12.2
12.3 Figure P12.3 shows the block diagram of a motor-driven single-link robot manipulator with position
and velocity feedback. The drive motor is an armature-controlled dc motor; e_a is the armature voltage, i_a
is the armature current, θ_M is the motor shaft position and θ̇_M is the motor shaft velocity. θ_L is the position of
the robot arm.

Fig. P12.3

Taking θ_M, θ̇_M and i_a as state variables, derive a state model for the feedback system.
12.4 Figure P12.4 shows the block diagram of a speed control system with state variable feedback. The
drive motor is an armature-controlled dc motor with armature resistance Ra, armature inductance La,
motor torque constant KT, inertia referred to motor shaft J, viscous friction coefficient referred to
motor shaft B, back emf constant Kb, and tachogenerator constant Kt. The applied armature voltage is
controlled by a three-phase full-converter. We have assumed a linear relationship between the control
voltage ec and the armature voltage ea . er is the reference voltage corresponding to the desired speed.

Fig. P12.4

Taking x1 = ω (speed) and x2 = i_a (armature current) as the state variables, u = e_r as the input, and y
= w as the output, derive a state variable model for the feedback system.
12.5 Consider the system
\dot{x} = \begin{bmatrix} -3 & 1 \\ -2 & 0 \end{bmatrix}x + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u

y = [1 \;\; 0]x

A similarity transformation is defined by

\bar{x} = Px = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}x

(a) Express the state model in terms of the states \bar{x}(t).
(b) Draw state diagrams in signal flow graph form for the state models in x(t) and \bar{x}(t).
(c) Show by Mason’s gain formula that the transfer functions for the two state diagrams in (b) are
equal.
12.6 Consider a double-integrator plant described by the differential equation

\frac{d^2\theta(t)}{dt^2} = u(t)

(a) Develop a state equation for this system with u as the input, and θ and θ̇ as the state variables x1
and x2 respectively.
(b) A similarity transformation is defined as

\bar{x} = Px = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}x

Express the state equation in terms of the states \bar{x}(t).
(c) Show that the eigenvalues of the system matrices of the two state equations in (a) and (b) are
equal.
12.7 A system is described by the state equation
\dot{x} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & 0 & -3 \end{bmatrix}x + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}u; \quad x(0) = x^0
Using the Laplace transform technique, transform the state equation into a set of linear algebraic
equations in the form
X(s) = G(s)x0 + H(s) U(s)
12.8 Give a block diagram for the programming of the system of Problem 12.7 on an analog computer.
12.9 The state diagram of a linear system is shown in Fig. P12.9. Assign the state variables and write the
dynamic equations of the system.
Fig. P12.9
12.10 Derive transfer functions corresponding to the following state models:
(a) \dot{x} = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}x + \begin{bmatrix} 1 \\ 0 \end{bmatrix}u; \quad y = [1 \;\; 0]x

(b) \dot{x} = \begin{bmatrix} -3 & 1 \\ -2 & 0 \end{bmatrix}x + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u; \quad y = [1 \;\; 0]x
12.11 Figure P12.11 shows the block diagram of a control system with state variable feedback and integral
control. The plant model is:

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} -3 & 2 \\ 4 & -5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix}u

y = [0 \;\; 1]x
(a) Derive a state model of the feedback system.
(b) Derive the transfer function Y(s)/R(s).

Fig. P12.11
12.12 Construct state models for the systems of Fig. P12.12a and Fig. P12.12b, taking outputs of simple lag
blocks as state variables.

Fig. P12.12a Fig. P12.12b


12.13 Construct state models for the following transfer functions. Obtain different canonical form for each
system.

(i) \frac{s+3}{s^2 + 3s + 2} \qquad (ii) \frac{5}{(s+1)^2(s+2)} \qquad (iii) \frac{s^3 + 8s^2 + 17s + 8}{(s+1)(s+2)(s+3)}
Give block diagrams for the analog computer simulation of these transfer functions.

12.14 Construct state models for the following differential equations. Obtain a different canonical form for
each system.
(i) \dddot{y} + 3\ddot{y} + 2\dot{y} = \dot{u} + u
(ii) \dddot{y} + 6\ddot{y} + 11\dot{y} + 6y = u
(iii) \dddot{y} + 6\ddot{y} + 11\dot{y} + 6y = \dddot{u} + 8\ddot{u} + 17\dot{u} + 8u
12.15 Derive two state models for the system with transfer function
\frac{Y(s)}{U(s)} = \frac{50(1 + s/5)}{s(1 + s/2)(1 + s/50)}
(a) One for which the system matrix is a companion matrix.
(b) One for which the system matrix is diagonal.
12.16 (a) Obtain state variable model in Jordan canonical form for the system with transfer function
\frac{Y(s)}{U(s)} = \frac{2s^2 + 6s + 5}{(s+1)^2(s+2)}
(b) Find the response y(t) to a unit-step input using the state variable model in (a)
(c) Give a block diagram for analog computer simulation of the transfer function.
12.17 Consider a continuous-time system
\dot{x}(t) = \begin{bmatrix} -2 & 2 \\ 1 & -3 \end{bmatrix}x(t) + \begin{bmatrix} -1 \\ 5 \end{bmatrix}u(t)

y(t) = [2 \;\; -4]x(t) + 6u(t)
Discretizing time axis into intervals of T = 0.2 sec, obtain state transition and output equations that
yield discrete-time solutions for x(t) and y(t).
12.18 The mathematical model of the plant of a control system is given below:
\frac{Y(s)}{U(s)} = G(s) = \frac{e^{-0.4s}}{s+1}
For digital simulation of the plant, obtain a difference equation model with T = 0.4 sec as the sampling
interval.
12.19 Given the system
\dot{x} = \begin{bmatrix} -2 & 1 \\ 1 & -2 \end{bmatrix}x + \begin{bmatrix} 1 \\ 1 \end{bmatrix}u
(a) Obtain a state diagram in signal flow graph form.
(b) From the signal flow graph, determine the state equation in the form
X(s) = G(s)x(0) + H(s)U(s)
(c) Using inverse Laplace transformation, obtain the
(i) zero-input response to initial condition
x(0) = [x_{10} \;\; x_{20}]^T; and
(ii) zero-state response to unit-step input.
12.20 Consider the system
\dot{x} = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}x + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u; \quad x(0) = \begin{bmatrix} 1 \\ 1 \end{bmatrix}

y = [1 \;\; 0]x
(a) Determine the stability of the system.
(b) Find the output response of the system to unit-step input.
12.21 Figure P12.21 shows the block diagram of a
control system with state variable feedback and
feedforward control. The plant model is

\dot{x} = \begin{bmatrix} -3 & 2 \\ 4 & -5 \end{bmatrix}x + \begin{bmatrix} 1 \\ 0 \end{bmatrix}u

y = [0 \;\; 1]x
(a) Derive a state model for the feedback system.
(b) Find the output y(t) of the feedback system to
a unit-step input r(t); the initial state is
assumed to be zero. Fig. P12.21
12.22 Consider the state equation

\dot{x} = \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix}x
Find a set of states x1(1) and x2(1) such that x1(2) = 2.
12.23 The following facts are known about the linear system

\dot{x}(t) = Ax(t)

If x(0) = \begin{bmatrix} 1 \\ -2 \end{bmatrix}, then x(t) = \begin{bmatrix} e^{-2t} \\ -2e^{-2t} \end{bmatrix}

If x(0) = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, then x(t) = \begin{bmatrix} e^{-t} \\ -e^{-t} \end{bmatrix}

Find e^{At} and hence A.
12.24 Show that the pair {A, c} is completely observable for all values of the a_i's.

A = \begin{bmatrix} 0 & 0 & 0 & \cdots & 0 & -a_n \\ 1 & 0 & 0 & \cdots & 0 & -a_{n-1} \\ 0 & 1 & 0 & \cdots & 0 & -a_{n-2} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & -a_2 \\ 0 & 0 & 0 & \cdots & 1 & -a_1 \end{bmatrix}

c = [0 \;\; 0 \;\; \cdots \;\; 0 \;\; 1]

12.25 Show that the pair {A, b} is completely controllable for all values of the a_i's.

A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{bmatrix}; \quad b = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}
12.26 Determine the controllability and observability properties of the following systems:
(i) A = \begin{bmatrix} -2 & 1 \\ 1 & -2 \end{bmatrix}; \; b = \begin{bmatrix} 1 \\ 0 \end{bmatrix}; \; c = [1 \;\; -1]

(ii) A = \begin{bmatrix} -1 & 0 \\ 0 & -2 \end{bmatrix}; \; b = \begin{bmatrix} 2 \\ 5 \end{bmatrix}; \; c = [0 \;\; 1]

(iii) A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & -2 & -3 \end{bmatrix}; \; b = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}; \; c = [10 \;\; 0 \;\; 0]

(iv) A = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & -3 \\ 0 & 1 & -4 \end{bmatrix}; \; b = \begin{bmatrix} 40 \\ 10 \\ 0 \end{bmatrix}; \; c = [0 \;\; 0 \;\; 1]
12.27 The following models realize the transfer function G(s) = \frac{1}{s+1}.

(i) A = \begin{bmatrix} -2 & 1 \\ 1 & -2 \end{bmatrix}; \; b = \begin{bmatrix} 1 \\ 1 \end{bmatrix}; \; c = [0 \;\; 1]

(ii) A = \begin{bmatrix} -1 & 0 \\ 0 & -3 \end{bmatrix}; \; b = \begin{bmatrix} 1 \\ 1 \end{bmatrix}; \; c = [1 \;\; 0]

(iii) A = \begin{bmatrix} -2 & 0 \\ 0 & -1 \end{bmatrix}; \; b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}; \; c = [0 \;\; 1]
Investigate the controllability and observability properties of these models. Find a state variable
model for the given transfer function which is both controllable and observable.
12.28 Consider the systems
(i) A = \begin{bmatrix} 0 & -2 \\ 1 & -3 \end{bmatrix}; \; b = \begin{bmatrix} 1 \\ 1 \end{bmatrix}; \; c = [0 \;\; 1]

(ii) A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{bmatrix}; \; b = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}; \; c = [4 \;\; 5 \;\; 1]
Determine the transfer function in each case. What can we say about controllability and observability
properties without making any further calculations?
12.29 Consider the system
\dot{x} = \begin{bmatrix} 1 & 1 & 0 \\ 0 & -2 & 1 \\ 0 & 0 & -1 \end{bmatrix}x + \begin{bmatrix} 0 \\ 1 \\ -2 \end{bmatrix}u; \quad y = [1 \;\; 0 \;\; 0]x
(a) Find the eigenvalues of A and from there determine the stability of the system.
(b) Find the transfer function model and from there determine the stability of the system.
(c) Are the two results same? If not, why?
12.30 Given a transfer function
G(s) = \frac{10}{s(s+1)} = \frac{Y(s)}{U(s)}
Construct three different state models for this system:
(a) One which is both controllable and observable.
(b) One which is controllable but not observable.
(c) One which is observable but not controllable.
