Increased Roles of Linear Algebra in Control Education

Robert E. Skelton and Tetsuya Iwasaki

In the past, control theory developed along different lines that required different mathematical tools. Time-varying systems required one set of tools, time-invariant another. Single-input, single-output problems lend themselves easily to one approach, but this approach is awkward for multivariable systems, etc. The purpose of this article is to point to the desirability of a unified approach to elementary control education using only tools of linear algebra as the enabling unifier. The following is a description and motivation for the proposed new control course (approach). There is only one main theorem from linear algebra, and at least 17 different control problems all reduce to solving a single linear algebra problem.

History

We seek a unifying point of view of systems and control with a focus on linear systems. With so many available, one may ask why a new approach in linear systems and control is necessary, especially since many directions in control theory have reached a fairly mature state. However, the tools used to develop the existing results are fundamentally different, and there is no unifying point of view. The unification of linear control design methods is not only of theoretical interest but also plays an important role in control education. This article serves as the basis for a new first-year graduate course which presents various control problems and their solutions in a unified manner.

In the sequel, we shall briefly summarize a historical background of systems and control theory, and then discuss the role of our unified approach from several different perspectives.

Transfer Function vs. State Space Methods

The frequency domain methods, based on the system description in terms of the transfer function, do not directly apply to time-varying and nonlinear systems (except for the isolated nonlinearities that allow describing function analysis). On the other hand, for linear time-invariant (LTI) systems, these methods accommodate a frequency domain parameterization of the class of all stabilizing controllers (due to Youla et al.; see [1]). With this parameterization, the closed-loop transfer function is an affine (and thus convex) function of the free parameter Q(s) belonging to an infinite-dimensional space. When used in optimization problems with closed-loop convex specifications [2], the Youla parameterization serves as a tool for computing the global solution via convex programming, but the infinite dimensionality prevents efficient computation, and the resulting controller is often of much larger order than the plant.
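For reference, the parameterization referred to above takes the standard affine form (standard notation, not taken verbatim from this article):

$$
T(s) \;=\; T_1(s) + T_2(s)\,Q(s)\,T_3(s), \qquad Q \in \mathcal{RH}_\infty ,
$$

where T_1, T_2, and T_3 are fixed stable transfer matrices determined by the plant and any one stabilizing controller. Convex specifications on the closed loop therefore become convex, but infinite-dimensional, constraints on the free parameter Q.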
The state space optimization theory is almost always related to the quadratic Lyapunov function in some way, and is naturally suited for treating linear time-varying (LTV) systems. A controller is usually designed by first computing a Lyapunov matrix and then using a formula in terms of the plant data and the Lyapunov matrix. If the plant is LTI, the Lyapunov matrix is constant, and can be found by solving algebraic Riccati equations (AREs) [3,4] or linear matrix inequalities (LMIs) [5,6]. An ARE has an analytical solution based on the Hamiltonian technique, while an LMI can be solved efficiently via finite dimensional convex programming [7]. Thus, the state space methods have a computational advantage over the above-mentioned transfer function approach. The resulting controller order is usually equal to the plant, which may still be too high for practical implementation. Many researchers in control theory view the lack of a tractable method for the design of low-order controllers as the most fundamental deficiency of modern control theory. This course uses a state space method and algebraic manipulations to derive parameterizations for controllers of low order (equal to or less than the plant) for various design specifications, to aid in the practical problem of designing a simple controller for a high-order complex system.


Deterministic vs. Stochastic Methods

The debate over whether to treat the system as deterministic or stochastic has been as heated as the debate over frequency or time domain methods. One argument against stochastic methods is that guarantees of absolute values of signals are not possible; yet, bounds on signals are very practical and important considerations when dealing with real systems with sensor and actuator saturations, physical limits of stresses in structures, etc. On the other hand, one argument for stochastic analysis is that there are no physical sensors and actuators without some electronic noise on the outputs. In the stochastic literature, covariance analysis is a cornerstone in filtering theory, and this powerful theory has found many practical uses. In the book [8] a step is taken to unify the two points of view (deterministic and stochastic). By offering a deterministic treatment of excitations (initial conditions and impulses), an analysis of the deterministic system is given that is shown to be mathematically equivalent to the covariance analysis of stochastic processes. Thus, a deterministic interpretation of the covariance matrix is given, and a background in stochastic processes is not required to take this course.

Modeling vs. Control Design

When applying modern control theory to high-order systems such as large scale flexible structures, reduced order plant models are sometimes used to design controllers of moderate complexity. However, it is well understood by now that the plant modeling and control design problems are not independent [9]. For this reason, twenty years of model reduction research have left us with few guarantees about closed-loop performance of such controllers. Hence, many of these researchers moved to the more specialized subject of "controller reduction," with little additional success. Direct design of fixed-order controllers based on the high order plant model promises a better answer theoretically, but the corresponding nonlinear mathematical program presents computational difficulties [10,11].

This course takes a new direct approach to fixed-order controller design based on computations involving matrix inequalities, as opposed to the equalities used in [10,11]. Algorithms guaranteeing global convergence for this type of problem have been proposed [12,13], as well as local algorithms that converge (without a guarantee) much faster [14,15,16]. The course introduces a unification of the modeling and control problem in the sense that the controller of fixed order is designed at the outset.

Performance vs. Stability

The comparison of classical vs. modern control methods has been characterized by the following oversimplification: "In classical control one designs for stability, but then must check for performance, whereas in modern control, one designs for performance, but then must check for stability." Both classical and modern approaches have made progress toward integration of stability and performance design objectives, but only for highly specialized definitions of performance and stability measures. In general, performance optimality does not guarantee robust stability. Hence, performance should be optimized with a robust stability constraint. To serve this purpose, it has been popular recently [17,18] to optimize the performance subject to an upper bound on the H∞ norm (roughly, the peak magnitude of the transfer function). The H∞ norm bound delivers a certain kind of stability margin and the H2 norm represents a (scalar) performance measure, related to the root mean squared (RMS) behavior of the output signal. We seek control design procedures in this course which can include, but not be limited to, these kinds of design objectives. We seek also a method that can treat the time domain L∞ norm [19,20], since it represents the peak values of real-time signals, such as sensor and actuator signals that are subject to saturation.

The vast majority of control theory has focused on stability, and performance guarantees have received much less attention. Being able to guarantee specific bounds on the response is needed in practice. Indeed, "stability" is a mathematical property of a particular "mathematical model" of the system and is never a guaranteed property of the physical system itself. When teaching about controller design for real-world implementation, one must differentiate between the physical system and its mathematical model. Quite often the model of the closed-loop system may have the "asymptotic stability" property, but the physical system never does. That is, there are always variables that do not go to zero in perfectly well-behaved systems. Every plane or spacecraft has rate gyros with spinning rotors (whose rotor position with respect to inertial space goes toward infinity). Even in the absence of external disturbances, the control signal produced by a digital computer does not asymptotically go to zero, but to some sort of limit cycle dictated by the computer wordlength. Therefore, the ability to guarantee upper bounds on signals is more important than the ability to guarantee asymptotic stability. If stability has been overemphasized in the control literature, performance has been underemphasized. Lack of stability might be considered a disaster, but in physical systems disaster comes long before instability. A stable billion-dollar space telescope would still be considered a disaster if it fails to meet the pointing accuracy required to make the observations and pictures useful.

This course discusses two kinds of robustness: "performance" robustness and "stability" robustness. By these phrases we mean that certain performance and stability guarantees hold in the presence of specified errors in the plant model or in the disturbances.

In 1956, the result of [21] states that if a system (solution) is asymptotically stable, then there exists a Lyapunov function to prove it so. The practical value of this theorem is that the search for a Lyapunov function is not a waste of time (the set of functions with the properties we seek is never an empty set). The theoretical value of this theorem is that the search for a characterization of all stabilizing controllers would be well served by a characterization of all Lyapunov functions. That is the approach of this course: to parameterize (for linear systems) the set of all quadratic Lyapunov functions. This allows us to capture the set of all stabilizing controllers. From this set, all controllers which can meet the performance requirements (a matrix inequality) are parameterized. The connections between quadratic Lyapunov functions and deterministic definitions of performance allow the unification of the theories for performance and stability guarantees.

Choosing a Design Space

From 1930 to 1955 the two-dimensional space of the complex plane was the workhorse of control theory. Design in this space allows placement of poles and zeros. Various other two-dimensional plots have special use in such designs, including the Bode plot, the Nichols plot, the Nyquist plot, and the Root Locus. The essential tool here was complex analysis.

In the 1960s state space became the popular design space [22,23]. Because the n-dimensional state space does not lend itself to plotting, the graphical methods made popular for the two-dimensional complex plane played less of a role in this period of control development. Rather, the essential tools used here were optimization, the calculus of variations, ordinary differential equations and Hilbert spaces.

In the 1980s a modern version of complex analysis extended the classical notions of stability margins to multi-input, multi-output (MIMO) systems in a systematic way [24,25,26]. By the small gain theorem, a certain stability margin of a system can be quantified by the "size" of the transfer function describing the system. For single-input, single-output (SISO) systems, the size can be measured by the peak value of the Bode magnitude plot. The structured singular value [27] or the multivariable stability margin [28] were proposed to extend the concept of SISO stability margin for MIMO transfer matrices.
Exact computation of these quantities is difficult, and hence approximate computation by the scaled H∞ norm has been used in practice. In the late 1980s a state space interpretation of H∞ theory was provided [4,29,30,31], and a simple synthesis algorithm based on two Riccati equations became available. Guaranteeing an upper bound on the scaled H∞ norm remains a major focus of control theorists to this day.

Since the late 1980s the application of convex programming techniques to systems analysis and synthesis has been gaining more attention [2,32]. By formulating control problems as convex optimization problems, one can compute, in principle, global solutions that indicate "limits of performance." In the early stage of this paradigm, the Youla parameterization [1] played the central role, leading to computationally expensive infinite-dimensional convex problems. In the 1990s, LMIs were recognized to be a new tool for control system design based on convex programming [7,33]. LMIs are not as universal as the Youla parameterization in the sense that the class of control specifications that can be treated by LMIs is not as large as that for the Youla parameterization. However, LMIs can be solved by finite-dimensional convex programming, and several efficient algorithms are now available [34,35,36].

During the mid-1980s another approach to control called "covariance control" was introduced [37]. The objectives here were the assignment of all n(n+1)/2 elements of the state covariance matrix, where n is the dimension of the closed loop system. The n(n+1)/2-dimensional "covariance space" has some important features. By increasing the design space from n (as in state space) to n(n+1)/2, the class of systems that can be studied with the simple tools of linear algebra is increased. That is, the class of problems that can be represented as linear problems in the n(n+1)/2-dimensional space is larger than the class of problems that are linear in the n-dimensional state space. This idea has been extended to control problems with design specifications related to the quadratic Lyapunov functions [38], which include the covariance matrix as a special case. Recent combination with the LMI approach resulted in added computational efficiency and a simple unified theory for various control problems including H∞ [6], H2 [39], and others [40,41,42]. For treating this large class of control problems, the covariance/Lyapunov-based control theory needs only the tools of linear algebra.

The fundamental goal of this course is to show a large class of control problems that reduce to a single problem in linear algebra. Hence linear algebra is the enabling tool that allows students to view the majority of linear system control problems from a common setting. The goal of this course is to show how to use linear algebra to accomplish this goal.

Notations: For a matrix A, A⊥ denotes a matrix whose rows are a basis for the null space of A^T. ||A|| denotes the spectral norm of A. For a vector-valued signal v, ||v||_L2 denotes the usual L2 norm, where the spatial norm is the Euclidean norm ||v(t)||^2 = v^T(t)v(t); similar definitions hold for ||v||_L∞, ||v||_ℓ2, and ||v||_ℓ∞. E[·] is the expectation operator for stochastic processes. Finally, for a transfer matrix T(s), ||T||_H∞ denotes the standard H∞ norm, where we use the same notation for the continuous-time and discrete-time transfer matrices.

A Unified Approach

This section presents a unified algebraic approach for designing controllers with various specifications including stability, performance, and robustness. In particular, many control design problems are shown to reduce to just one linear algebra problem of solving an LMI for the controller parameter [40,43]. We show explicitly the common mathematical structure of such control problems hidden in the proofs of previous synthesis results in the literature. To this end, we first introduce a state space framework on which our approach is based.

State Space Framework

Consider the feedback control system depicted in Fig. 1, where P is the LTI nominal model of the plant, Δ is the uncertain part of the plant, and C is the LTI controller to be designed. It is assumed that the uncertainty Δ is known to belong to a set BU, but is otherwise unknown. The signals ψ and ω are the exogenous signals that describe the uncertainty Δ; the signal y is the regulated output containing the error signal, actuator signal, etc.; the signal w is the external input consisting of the disturbance, sensor noise, command signal, etc.; and the signals u and z are the control input and the sensor output, respectively. The objective of robust control design is, given the uncertain plant model P and BU, to find a controller C such that the regulated signal y is "small" in response to the external input w, in the presence of the uncertainty Δ ∈ BU.

Figure 1. Control system configuration.

Different control problems can be formulated by considering different underlying uncertainty sets BU and different ways of measuring the size (norm) of the performance signals y and w. For instance, BU may be a class of norm-bounded uncertainties, or passive (positive real) uncertainties. For the nominal control problems, the uncertainty Δ is assumed to be negligible, i.e., BU = {0}. The size of a signal may be measured by its energy (L2 norm) or by its peak (L∞ norm), etc. In this article, we show a unified approach to solve various control problems defined by these different specifications.

The approach is based on the following state space realizations of the nominal plant P and the controller C:


   σx = A x + B_ψ ψ + B_w w + B_u u ,
   [ω; y; z] = [C_ω; C_y; C_z] x + D [ψ; w; u] ,          (1)
   σx_c = A_c x_c + B_c z ,   u = C_c x_c + D_c z ,

with D partitioned conformably (and with no direct feedthrough from u to z), where σ is the differentiation/delay operator for the continuous-time/discrete-time case, and x ∈ R^{n_p} and x_c ∈ R^{n_c} are the plant and the controller states, respectively. The closed-loop system can be described by

   σx_cl = A_cl x_cl + B_cl w + ⋯ ,          (2)

where x_cl = [x; x_c] ∈ R^{n_cl} is the closed-loop state, the dots indicate the uncertainty channel (ψ, ω), and the closed-loop matrices are affine functions of the controller parameter; in particular,

   A_cl = A + B_u G C_z ,   B_cl = B_w + B_u G D_zw ,   C_cl = C_y + D_yu G C_z ,   D_cl = D_yw + D_yu G D_zw ,          (3)

where A, B_ψ, B_w, B_u, C_ω, C_y, C_z and the associated D matrices here denote the augmented plant matrices (the plant data of (1) padded with zeros and identities to absorb the controller state), and

   G = [ D_c  C_c ; B_c  A_c ]          (4)

is the controller parameter collecting all of the controller coefficient matrices. If internal stability is the only requirement for the closed-loop system, the condition on A_cl is given by

   ∃ X such that X > 0 ,   A_cl X + X A_cl^T < 0 .

Then, the problem is to find a controller parameter matrix G satisfying the matrix inequality condition. This is an algebraic problem.

An Algebraic Approach

In this subsection, we consider a simple stabilization problem to explain the idea of our algebraic approach. As shown briefly in the previous subsection, the nominal closed-loop stability specification can be translated to the following inequalities:

   X > 0 ,   (A + B_u G C_z) X + X (A + B_u G C_z)^T < 0 .          (5)

The problem is to find a Lyapunov matrix X and a controller parameter G satisfying (5). Note that the Lyapunov inequality in (5) is not linear in terms of the variables X and G, and thus it is difficult to solve this problem directly by a numerical algorithm. However, the problem exhibits a special bilinear property; the Lyapunov inequality is affine with respect to either one of the two matrix variables if the other is fixed. Hence, if we fix X > 0, then (5) can be considered as a linear algebra problem to be solved for G.

This algebraic problem has an analytical solution. In particular, given X > 0, there exists G satisfying (5) if and only if X meets the following conditions [5,6]:

   B_u⊥ (A X + X A^T) B_u⊥^T < 0 ,          (6)

   C_z^T⊥ (X^{-1} A + A^T X^{-1}) C_z^T⊥^T < 0 .          (7)

Now the original algebraic problem (5) in terms of X and G is replaced by the new problem given by (6) and (7), which involves only the Lyapunov matrix X > 0. Note that inequality (6) is linear in X, and thus the set of matrices X > 0 satisfying (6) is convex. However, (7) is linear in X^{-1}, and the set of matrices X > 0 satisfying both (6) and (7) simultaneously is not convex.
Hence, the conditions are further reduced to inequalities involving only n_p × n_p matrices:

   B_u⊥ (A X + X A^T) B_u⊥^T < 0 ,          (8)

   C_z^T⊥ (Y A + A^T Y) C_z^T⊥^T < 0 ,          (9)

where A, B_u, and C_z now denote the original plant matrices of (1), and X and Y are n_p × n_p symmetric matrices defined as the leading blocks of the closed-loop Lyapunov matrix in (6)-(7), here written Xcl, and of its inverse:

   Xcl = [ X  * ; *  * ] ,   Xcl^{-1} = [ Y  * ; *  * ] ,

with * denoting entries of less importance. Note that Xcl > 0 implies that X ≥ Y^{-1} > 0, or equivalently,

   [ X  I ; I  Y ] ≥ 0 .          (10)

Hence, the existence of Xcl > 0 satisfying (6) and (7) implies the existence of X and Y satisfying (8)-(10). It can readily be checked that the converse is also true; if matrices X and Y satisfy (8)-(10), then the matrix defined by

   Xcl = [ X  X_2 ; X_2^T  X_3 ]          (11)

is positive definite and satisfies inequalities (6) and (7), where X_2 and X_3 are any matrices such that

   X_2 X_3^{-1} X_2^T = X - Y^{-1} ,   X_3 > 0 .          (12)

Thus, a feasible Lyapunov matrix can be found by searching for a matrix pair (X, Y) such that (8)-(10) hold. The inequalities (8)-(10) are affine with respect to the variables X and Y, and thus the set of matrix pairs (X, Y) satisfying these conditions is convex. As will be discussed in the next subsection, there are efficient computational algorithms to find such a matrix pair. Hence, we now declare that the stabilization problem is "solved."

Unified Perspective

In the previous subsection, we have illustrated an algebraic approach to linear control design by taking a simple example of the stabilization problem. This section formalizes the approach in a more general setting.

First, the control design specifications are translated to a matrix inequality of the form

   Γ G Λ + (Γ G Λ)^T + Θ < 0 ,          (13)

where G is the controller parameter defined in (4), and the other matrices are appropriately defined in terms of the plant data and the Lyapunov matrix X (and possibly the scaling matrix S as discussed later). For the stabilization problem in the previous subsection, inequality (13) corresponds to the Lyapunov inequality in (5) with

   Γ = B_u ,   Λ = C_z X ,   Θ = A X + X A^T .

Next, the linear algebra problem (13) is solved for the unknown variable G. The following theorem plays a crucial role in this step. Proofs for the solvability condition can be found in [5,6], and a proof for the general solution formula may be found in [6].

Theorem 1. Let matrices Γ, Λ, and Θ be given. Assume that ΛΛ^T > 0 for simplicity. Then the following statements are equivalent.
(i) There exists a matrix G satisfying (13).
(ii) The following two conditions hold:

   Γ⊥ Θ Γ⊥^T < 0 ,          (14)

   Λ^T⊥ Θ Λ^T⊥^T < 0 .          (15)

In this case, all such matrices G are parameterized by an explicit formula (16), in which a scalar ρ and a matrix F with ||F|| < 1 are the free parameters (ρ subject to a positivity condition given in [6]), and the matrices appearing in (16) are constructed from Γ, Λ, and Θ as in (17).

For the stabilization problem, the solvability conditions (14) and (15) reduce to (6) and (7), respectively. In general, inequalities (14) and (15) define convex sets in terms of X and X^{-1}; that is, X > 0 satisfies (14) and (15) if and only if

   X ∈ 𝒳_n   and   X^{-1} ∈ 𝒴_n ,          (18)

where 𝒳_n and 𝒴_n denote convex subsets of n × n symmetric matrices defined by LMIs. The second constraint in (18) is not convex in X, and thus it is difficult to compute a feasible Lyapunov matrix X with these characterizations.

In the final step of our algebraic approach, the internal structure of the augmented plant matrices in (4) is exploited to reduce the nonconvex conditions (18) for X to convex LMI conditions (19) on the pair (X, Y). These conditions are given by (8)-(10) for the stabilization problem.

There are several efficient algorithms to solve this type of convex LMI problem. The alternating projection algorithm [15] often converges quickly in practice, although no theoretical results are available for the worst-case convergence rate. The method of centers [34] is conceptually simple, and is a version of interior point methods; the rate of convergence is at least geometric. The most efficient algorithms currently available are probably the potential reduction methods [44,36] and the projective method [45]. The latter method has been used to develop a commercial software package [46]. Thus, a matrix pair (X, Y) in (19) can be found by applying any of these algorithms.

Finally, the controller design proceeds as follows:
1. Find a pair (X, Y) in (19) via convex programming.
2. Construct a feasible Lyapunov matrix as in (11) and (12), using X and Y.
3. Compute the controller parameter G by the explicit formula in (16).

The algebraic approach described above in the general setting is applicable to many control problems. In the next section, we shall list 17 control problems that all reduce to a single linear algebra problem: solve (13) for the matrix G.
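As an illustration of step 1 of this procedure, the following sketch (ours, not the article's; it assumes the cvxpy and scipy packages and an SDP-capable solver such as SCS) searches for a pair (X, Y) satisfying the LMIs (8)-(10) for the stabilization problem. The matrices A, Bu, Cz are arbitrary data chosen only for demonstration.

```python
# Minimal sketch of step 1 (feasibility of the LMIs (8)-(10)); assumed, illustrative data.
import numpy as np
import cvxpy as cp
from scipy.linalg import null_space

A  = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [1.0, -2.0, 0.5]])      # unstable third-order plant
Bu = np.array([[0.0], [0.0], [1.0]])   # control input matrix
Cz = np.array([[1.0, 0.0, 0.0]])       # measured output matrix
n  = A.shape[0]

# "Perp" matrices in the article's convention: rows of M_perp form a basis for
# the null space of M^T, so that Bu_perp @ Bu = 0 and CzT_perp @ Cz.T = 0.
Bu_perp  = null_space(Bu.T).T
CzT_perp = null_space(Cz).T

X = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, n), symmetric=True)
sym = lambda M: (M + M.T) / 2          # make matrix expressions explicitly symmetric
eps = 1e-6
constraints = [
    # (8): Bu_perp (A X + X A^T) Bu_perp^T < 0
    sym(Bu_perp @ (A @ X + X @ A.T) @ Bu_perp.T) << -eps * np.eye(Bu_perp.shape[0]),
    # (9): Cz^T_perp (Y A + A^T Y) Cz^T_perp^T < 0
    sym(CzT_perp @ (Y @ A + A.T @ Y) @ CzT_perp.T) << -eps * np.eye(CzT_perp.shape[0]),
    # (10): [X I; I Y] >= 0
    cp.bmat([[X, np.eye(n)], [np.eye(n), Y]]) >> 0,
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)                     # 'optimal' indicates a feasible pair (X, Y)
```

Step 2 would then assemble a full Lyapunov matrix from (X, Y) via (11)-(12), and step 3 would recover a controller from the explicit formula (16); those steps are omitted from this sketch.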


Control Problems Reducible to the Linear Algebra Problem

Continuous-Time Case

In this section, continuous-time systems are considered. We shall introduce various control problems and show that all the problems reduce to a single linear algebra problem of solving

   Γ G Λ + (Γ G Λ)^T + Θ < 0          (20)

for the controller parameter G. Specifically, we show appropriate matrices Γ, Λ, and Θ for each control problem in terms of the plant data and a Lyapunov matrix (and possibly a scaling matrix). Once these matrices are identified, the approach described in the previous section can be applied to each control problem. As a result, every control problem (except for those involving input/output scaling) reduces to an LMI convex problem of the type (19), and thus can be solved efficiently. Those problems with input/output scaling will not reduce to convex problems in general, due to nonconvex constraints on the scaling matrix. Nevertheless, the resulting matrix inequality characterization similar to (19) has been shown to be useful for computing a local solution [47] as well as a global solution [13].

Stabilizing Control

The simplest control problem is the one with the stability specification only. The result for this problem has been shown in a foregoing section. However, we state the result formally here again for completeness. Consider a linear time-invariant plant:

   ẋ = A x + B_u u ,   z = C_z x ,          (21)

where x is the state, u is the control input, and z is the measured output. The stabilization problem is the following: Determine whether or not there exists a controller in (1) which asymptotically stabilizes the system (21). Parameterize all such controllers when one exists.

This problem can be reduced to the above linear algebra problem (20) as follows.

Theorem 2. Let a controller G be given. The following statements are equivalent.
(i) The controller G solves the stabilization problem.
(ii) There exists a matrix X > 0 such that (20) holds, where

   [ Γ   Λ^T   Θ ] = [ B_u   X C_z^T   A X + X A^T ] .

Proof. The result directly follows from Lyapunov's stability theory [23], which states that (i) is equivalent to the existence of a Lyapunov matrix X > 0 satisfying

   (A + B_u G C_z) X + X (A + B_u G C_z)^T < 0 ,

where the matrices A, B_u, and C_z are the augmented matrices defined in (4).

Covariance Upper Bound Control

Next, we shall consider the covariance upper bound control problem to guarantee a bound on the closed-loop covariance. We explicitly show that the problem can be reduced to the LMI of the form (20). Consider the linear time-invariant system

   ẋ = A x + B_w w + B_u u ,   y = C_y x ,   z = C_z x + D_zw w ,          (22)

where x is the state, w is the white noise with intensity I, u is the control input, y is the output of interest, and z is the measured output. The covariance upper bound control problem [40,19,43] is the following: Let an output covariance bound R > 0 be given. Determine whether or not there exists a controller in (1) which asymptotically stabilizes the system (22) and yields an output covariance satisfying

   lim_{t→∞} E[ y(t) y^T(t) ] < R .

Parameterize all such controllers when one exists.

This problem can be reduced to the following.

Theorem 3. Let a controller G and an output covariance bound R > 0 be given. The following statements are equivalent.
(i) The controller G solves the covariance upper bound control problem.
(ii) There exists a matrix X > 0 such that both C_y X C_y^T < R and (20) hold, where

   Γ = [ B_u ; 0 ] ,   Λ = [ C_z X   D_zw ] ,   Θ = [ A X + X A^T   B_w ; B_w^T   -I ] .

Proof. Statement (i) holds if and only if there exists a state covariance upper bound X > 0 such that C_y X C_y^T < R and

   (A + B_u G C_z) X + X (A + B_u G C_z)^T + (B_w + B_u G D_zw)(B_w + B_u G D_zw)^T < 0 ,

or equivalently, by the Schur complement formula,

   [ (A + B_u G C_z) X + X (A + B_u G C_z)^T    B_w + B_u G D_zw ; (B_w + B_u G D_zw)^T    -I ] < 0 .

It is trivial to verify the equivalence of the above statements and (ii).
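The covariance analysis underlying Theorem 3 is easy to exercise numerically. The following sketch (ours, not from the article; it assumes numpy and scipy, and the closed-loop data are illustrative) computes the state covariance of a stable system driven by unit-intensity white noise from the Lyapunov equation A X + X A^T + B_w B_w^T = 0 and checks the output covariance bound C_y X C_y^T < R.

```python
# Covariance analysis sketch for an assumed, stable closed-loop system.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A    = np.array([[-1.0, 1.0],
                 [0.0, -2.0]])        # stable closed-loop state matrix
Bw   = np.array([[0.0], [1.0]])       # white-noise input matrix
Cy   = np.array([[1.0, 0.0]])         # output of interest
Rbar = np.array([[0.5]])              # output covariance bound

# State covariance: solve A X + X A^T = -Bw Bw^T.
X = solve_continuous_lyapunov(A, -Bw @ Bw.T)
Y_out = Cy @ X @ Cy.T                 # steady-state output covariance

# The bound holds iff Rbar - Cy X Cy^T is positive definite.
print(Y_out, np.all(np.linalg.eigvalsh(Rbar - Y_out) > 0))
```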
Linear Quadratic Regulator

Another control problem which falls into the framework of (20) is the Linear Quadratic Regulator (LQR) problem. Consider the linear time-invariant system

   ẋ = A x + B_w w + B_u u ,   y = C_y x + D_yu u ,   z = C_z x ,          (23)

where x is the state, w is the impulsive disturbance w(t) = w_0 δ(t), where δ(·) is the Dirac delta function, u is the control input, y is the output of interest, and z is the measured output. The LQR problem (e.g., [22,48]) is to guarantee an upper bound on the square integral (the L2 norm) of the output signal, as follows: Let a performance bound γ > 0 be given. Determine whether or not there exists a controller in (1) which asymptotically stabilizes the system (23) and yields the zero initial state response y such that ||y||_L2 < γ for all directions of the impulsive disturbance ||w_0|| ≤ 1. Parameterize all such controllers when one exists.

It turns out that this problem can be reduced to a mathematical problem which is the dual of that for the covariance upper bound control problem.

Theorem 4. Let a controller G and a performance bound γ > 0 be given. The following statements are equivalent.
(i) The controller G solves the LQR problem.
(ii) There exists a matrix Y > 0 such that both ||B_w^T Y B_w|| < γ² and (20) hold, where

   Γ = [ Y B_u ; D_yu ] ,   Λ = [ C_z   0 ] ,   Θ = [ Y A + A^T Y   C_y^T ; C_y   -I ] .

Proof. It is well known (e.g., [3]) that statement (i) is equivalent to the existence of a Lyapunov matrix Y > 0 such that ||B_w^T Y B_w|| < γ² and

   Y (A + B_u G C_z) + (A + B_u G C_z)^T Y + (C_y + D_yu G C_z)^T (C_y + D_yu G C_z) < 0 .

Then the result follows from the Schur complement formula.

L∞ Control

Consider the linear time-invariant system

   ẋ = A x + B_w w + B_u u ,   y = C_y x ,   z = C_z x + D_zw w ,          (24)

where x is the state, w is the disturbance with finite energy, y is the output of interest, and z and u are the measured output and the control input, respectively. The L∞ control problem is to find a controller that guarantees a bound on the peak value of the output y in response to any unit energy disturbance. This problem can be stated as follows: Let a performance bound γ > 0 be given. Determine whether or not there exists a controller in (1) which asymptotically stabilizes the system (24) and yields the output y satisfying ||y||_L∞ < γ for all disturbances w such that ||w||_L2 ≤ 1. Parameterize all such controllers when one exists.

This problem is mathematically equivalent to the covariance upper bound control problem with covariance bound R = γ²I. This fact can readily be verified from the analysis results in [49,50], and hence we give a characterization of the L∞ controllers without proof as follows.

Theorem 5. Let a controller G and a performance bound γ > 0 be given. The following statements are equivalent.
(i) The controller G solves the L∞ control problem.
(ii) There exists a matrix X > 0 such that both C_y X C_y^T < γ²I and (20) hold, where Γ, Λ, and Θ are given as in Theorem 3.

H∞ Control

Consider the linear time-invariant system

   ẋ = A x + B_w w + B_u u ,   y = C_y x + D_yw w + D_yu u ,   z = C_z x + D_zw w ,          (25)

where x is the state, y and w are the regulated output and the disturbance, and z and u are the measured output and the control input, respectively. Let the closed loop transfer matrix from w to y with the controller in (1) be denoted by T(s);

   T(s) = C_cl (sI - A_cl)^{-1} B_cl + D_cl ,

where the closed loop matrices are defined in (3). The H∞ control problem [4,51,26] can be stated as follows: Let a performance bound γ > 0 be given. Determine whether or not there exists a controller in (1) which asymptotically stabilizes the system (25) and yields the closed-loop transfer matrix such that ||T||_H∞ < γ. Parameterize all such controllers when one exists.

This problem may be interpreted in the following two ways. First, the energy-to-energy gain Γ_ee is equal to the H∞ norm of the corresponding transfer matrix, i.e., Γ_ee = ||T||_H∞. Hence, a controller that solves the H∞ control problem guarantees that the energy (L2 norm) of the output y is less than γ for all disturbances w with ||w||_L2 ≤ 1. Thus, γ is the worst-case performance bound. Another interpretation is to view the H∞ control problem as a robust stabilization problem. As shown by the small gain theorem, the condition ||T||_H∞ < γ guarantees robust stability with respect to norm bounded uncertainty Δ such that ||Δ|| < γ^{-1}, connected as w = Δy. Hence, in this case, γ is a robustness bound.

For the H∞ control problem, we have the following result.

Theorem 6. Let a controller G and a performance (or robustness) bound γ > 0 be given. The following statements are equivalent.
(i) The controller G solves the H∞ control problem.
(ii) There exists a matrix X > 0 such that (20) holds, where

   Γ = [ B_u ; 0 ; D_yu ] ,   Λ = [ C_z X   D_zw   0 ] ,
   Θ = [ A X + X A^T   B_w   X C_y^T ; B_w^T   -γI   D_yw^T ; C_y X   D_yw   -γI ] .

Proof. The result simply follows from substituting the definitions of the closed loop matrices in (3) into the bounded real Riccati inequality [31] and taking the Schur complement.
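For reference, the bounded real condition invoked in the proof above can be stated in its standard LMI form (a standard result, not quoted from the article): a stable system T(s) = C(sI - A)^{-1}B + D satisfies ||T||_{H∞} < γ if and only if there exists X = X^T > 0 such that

$$
\begin{bmatrix} A^{T}X + XA & XB & C^{T}\\ B^{T}X & -\gamma I & D^{T}\\ C & D & -\gamma I \end{bmatrix} < 0 .
$$

Substituting the closed-loop matrices of (3), which are affine in G, and applying a congruence and Schur-complement step yields an inequality of the form (20).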

Positive Real Control

Consider the linear time-invariant system

   ẋ = A x + B_w w + B_u u ,   y = C_y x + D_yw w + D_yu u ,   z = C_z x + D_zw w ,          (26)

where x is the state, z and u are the measured output and the control input, and y and w are the exogenous signals used to describe the design specification. The closed loop transfer matrix from w to y with the controller in (1) is given by

   T(s) = C_cl (sI - A_cl)^{-1} B_cl + D_cl ,

where the closed loop matrices are defined in (3). The transfer matrix T(s) is said to be strictly positive real if it is asymptotically stable and

   T(jω) + T^T(-jω) > 0

for all frequencies ω ∈ R. If, in addition, the above inequality holds for ω = ∞, the transfer matrix is called strongly positive real. Using this notion, the positive real control problem can be stated as follows: Determine whether or not there exists a controller in (1) which asymptotically stabilizes the system (26) and yields a strongly positive real closed loop transfer matrix T(s). Parameterize all such controllers when one exists.

This problem can be considered as a robust stabilization problem for systems with positive real (or passive) uncertainties [52,53]. The positive real control problem can also be reduced to a problem of the type given by (20) as follows.

Theorem 7. Let a controller G be given. The following statements are equivalent.
(i) The controller G solves the positive real control problem.
(ii) There exists a matrix X > 0 such that (20) holds, where Γ, Λ, and Θ are defined in terms of the plant data and X through the positive real inequality.

Proof. The result follows by substituting the closed loop matrices in (3) into the positive real lemma and rearranging terms.

Robust H2 Control

Consider the uncertain system described by the realization (1) together with the uncertainty feedback

   ψ = Δ ω ,          (27)

where x is the state, ψ and ω are the exogenous signals used to describe the uncertainty Δ, y and w are the output of interest and the impulsive disturbance, and z and u are the measured output and the control input. The nominal system (Δ = 0) is linear time-invariant, while the uncertainty Δ is assumed to belong to the following set of norm-bounded, time-varying, structured uncertainties:

   BU_c = { Δ(·) : Δ(t) ∈ U, ||Δ(t)|| ≤ 1 for all t ≥ 0 } ,          (28)

where

   U = { block diag(δ_1 I_{k_1}, ..., δ_s I_{k_s}, Δ_1, ..., Δ_r) : δ_i ∈ R, Δ_j ∈ R^{k_{s+j} × k_{s+j}} } .          (29)

In the above, we have implicitly assumed for simplicity that the uncertainty Δ is square. Define a subset of positive definite matrices that commute with Δ ∈ U:

   S = { block diag(S_1, ..., S_s, s_1 I_{k_{s+1}}, ..., s_r I_{k_{s+r}}) : S_i ∈ R^{k_i × k_i}, s_j ∈ R, S_i > 0, s_j > 0 } .          (30)

We consider the following robust performance problem to guarantee a bound on the energy of the output y in response to the worst-case impulsive disturbance w, for all uncertainties Δ ∈ BU_c: Let a robust performance bound γ > 0 be given. Find a controller in (1), for the uncertain system (27), such that the closed-loop system is robustly stable and the output y satisfies ||y||_L2 < γ for all impulsive disturbances w(t) = w_0 δ(t) with ||w_0|| ≤ 1, and for all possible uncertainties Δ ∈ BU_c.

Theorem 8. Let a controller G and a robust performance bound γ > 0 be given. Suppose there exist a Lyapunov matrix Y > 0 and a scaling matrix S ∈ S such that both ||B_w^T Y B_w|| < γ² and (20) hold, where Γ, Λ, and Θ are built from the plant data, the Lyapunov matrix Y, the bound γ, and the scaling matrix S (the structured uncertainty channel enters Θ through blocks -S and -S^{-1}). Then the controller G solves the robust H2 control problem.

Proof. The result follows from the analysis result in [55] and the definitions for the closed-loop matrices given in (3).
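The reason the scaling set S of (30) is useful is the standard commutation property (stated here only as a reminder, not verbatim from the article): every S in S commutes with every structured Δ, so a similarity scaling by S^{1/2} leaves the uncertainty untouched while reshaping the nominal loop,

$$
S\Delta = \Delta S \;\;\Longrightarrow\;\; S^{1/2}\,\Delta\,S^{-1/2} = \Delta
\qquad \text{for all } \Delta \in \mathbf{U},\; S \in \mathcal{S} .
$$

This is why a small-gain type condition imposed on the scaled nominal closed loop yields stability and performance guarantees that hold for every Δ in the uncertainty set.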
Robust L∞ Control

Consider the uncertain system given by the realization (1) together with the uncertainty feedback

   ψ = Δ ω ,          (31)

where the signals have the same meaning as in (27), except that w is now a disturbance with finite energy. The robust L∞ control problem is to guarantee a bound on the peak value of the output y for any unit energy disturbance w in the presence of uncertainty Δ ∈ BU_c: Let a robust performance bound γ > 0 be given. Find a controller in (1), for the uncertain system (31), such that the closed-loop system is robustly stable and the output y satisfies ||y||_L∞ < γ for any disturbance w such that ||w||_L2 ≤ 1, and for all possible uncertainties Δ ∈ BU_c.

Theorem 9. Let a controller G and a robust performance bound γ > 0 be given. Suppose there exist a Lyapunov matrix X > 0 and a scaling matrix S ∈ S such that both ||C_y X C_y^T|| < γ² and (20) hold, where Γ, Λ, and Θ are defined in terms of the plant data, X, and the scaling matrix S. Then the controller G solves the robust L∞ control problem.

Proof. The result follows from the LMI characterization of the robust L∞ performance bound given in [55] and the definitions for the closed-loop matrices in (3).

Robust H∞ Control

Consider the uncertain system described by the realization (1) together with the uncertainty feedback

   ψ = Δ ω ,          (32)

where x is the state, ψ and ω are the exogenous signals used to describe the uncertainty Δ ∈ BU_c, y and w are the output of interest and the finite energy disturbance, and z and u are the measured output and the control input. The robust H∞ control problem [57,58] is to guarantee a bound on the energy of the output y in response to any unit energy disturbance w in the presence of uncertainty Δ ∈ BU_c: Let a robust performance bound γ > 0 be given. Find a controller in (1), for the uncertain system (32), such that the closed-loop system is robustly stable and the output y satisfies ||y||_L2 < γ for any disturbance w such that ||w||_L2 ≤ 1, and for all possible uncertainties Δ ∈ BU_c.

Theorem 10. Let a controller G and a robust performance bound γ > 0 be given. Suppose there exist a Lyapunov matrix Y > 0 and a scaling matrix S ∈ S such that (20) holds, where Γ, Λ, and Θ are assembled from the plant data, Y, γ, and the scaling matrix S (with the uncertainty channel entering through blocks -S and -S^{-1} and the disturbance/performance channels through blocks involving γ). Then the controller G solves the robust H∞ control problem.

Discrete-Time Case

This subsection presents the discrete-time counterpart of the results given in the previous subsection. We shall show that, for the discrete-time case, many control problems can be reduced to the algebraic problem of solving a quadratic matrix inequality (QMI), referred to as (33), for the controller parameter G; the coefficient matrices Θ, Γ, Λ, R, and Q appearing in (33) are appropriately defined for each control problem. Note that this linear algebra problem is a special case of (20), which arose in the continuous-time case. To see this, use the Schur complement formula to rewrite (33) in the affine form (34), where we have used R > 0 (which holds for all the control problems considered below). Thus, both continuous-time and discrete-time control problems reduce to the LMI (20). In this regard, the LMI (20) defines a fundamental algebraic problem in the LMI formulation of control problems.

To solve the QMI (33) for G, we could apply Theorem 1 to the LMI (34). However, the QMI (33) can be directly solved by using the result of [40,57]. In this case, necessary and sufficient conditions for the existence of G satisfying (33) are given by conditions analogous to (14) and (15), and all such G are explicitly parameterized by a free parameter F such that ||F|| < 1. As in the continuous-time case, these existence conditions lead to LMIs in terms of the Lyapunov matrix X and its inverse X^{-1}, which can further be reduced to a convex LMI problem for the control problems that do not involve input/output scaling.
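As a concrete instance of the Schur complement step just described (a standard identity, included here only for illustration and not quoted from the article), consider the closed-loop stability condition used in Theorem 11 below. With A_cl = A + B_u G C_z built from the augmented matrices of (4), the discrete Lyapunov inequality, which is quadratic in A_cl and hence in G, is equivalent to a matrix inequality that is affine in A_cl:

$$
X - A_{cl}\,X\,A_{cl}^{T} > 0
\quad\Longleftrightarrow\quad
\begin{bmatrix} X & A_{cl}X \\ XA_{cl}^{T} & X \end{bmatrix} > 0 ,
\qquad X > 0 .
$$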


Stabilizing Control

Let us first consider the simplest control problem of stabilizing a linear time-invariant plant:

   x(k+1) = A x(k) + B_u u(k) ,   z(k) = C_z x(k) ,          (35)

where x is the state, u is the control input, and z is the measured output. The stabilization problem is the following: Determine whether or not there exists a controller in (1) which asymptotically stabilizes the system (35). Parameterize all such controllers when one exists.

This problem can be reduced to the following linear algebra problem.

Theorem 11. Let a controller G be given. The following statements are equivalent.
(i) The controller G solves the stabilization problem.
(ii) There exists a matrix X > 0 such that (33) holds, where Γ, Λ, Θ, and R are defined in terms of the augmented matrices A, B_u, C_z of (4) and the Lyapunov matrix X.

Proof. The result is just a restatement of Lyapunov's stability theory [59] for discrete-time systems, which says that statement (i) holds if and only if there exists a Lyapunov matrix X > 0 such that

   X > (A + B_u G C_z) X (A + B_u G C_z)^T .

This completes the proof.

Covariance Upper Bound Control

Next, we shall consider the covariance upper bound control problem. A state space model of the plant is given by

   x(k+1) = A x(k) + B_w w(k) + B_u u(k) ,   y(k) = C_y x(k) + D_yw w(k) ,   z(k) = C_z x(k) + D_zw w(k) ,          (36)

where x is the state, w is the white noise with covariance I, u is the control input, y is the output of interest, and z is the measured output. The covariance upper bound control problem is the following: Let an output covariance bound R > 0 be given. Determine whether or not there exists a controller in (1) which asymptotically stabilizes the system (36) and yields an output covariance satisfying

   lim_{k→∞} E[ y(k) y^T(k) ] < R .

Parameterize all such controllers when one exists.

For this problem, we have the following result.

Theorem 12. Let a controller G and an output covariance bound R > 0 be given. The following statements are equivalent.
(i) The controller G solves the covariance upper bound control problem.
(ii) There exists a matrix X > 0 such that both C_y X C_y^T + D_yw D_yw^T < R and (33) hold, where Γ, Λ, Θ, and R are defined in terms of the augmented plant data and X.

Proof. The result simply follows from the standard covariance analysis results (e.g., [8]).

Linear Quadratic Regulator

We consider the Linear Quadratic Regulator problem for the linear time-invariant system

   x(k+1) = A x(k) + B_w w(k) + B_u u(k) ,   y(k) = C_y x(k) + D_yw w(k) + D_yu u(k) ,   z(k) = C_z x(k) ,          (37)

where x is the state, w is the pulse disturbance w(k) = w_0 δ(k), where δ(k) is the Kronecker delta function, u is the control input, y is the output of interest, and z is the measured output. The LQR problem is defined by an upper bound on the square summation (the ℓ2 norm) of the output signal, as follows: Let a performance bound γ > 0 be given. Determine whether or not there exists a controller in (1) which asymptotically stabilizes the system (37) and yields the zero initial state response y satisfying ||y||_ℓ2 < γ for all directions of the pulse disturbance ||w_0|| ≤ 1. Parameterize all such controllers when one exists.

This problem can be reduced to the following.

Theorem 13. Let a controller G and a performance bound γ > 0 be given. The following statements are equivalent.
(i) The controller G solves the LQR problem.
(ii) There exists a matrix Y > 0 such that both ||B_w^T Y B_w + D_yw^T D_yw|| < γ² and (33) hold, where Γ, Λ, Θ, and R are defined in terms of the augmented plant data and Y.
Proof. Statement (i) holds if and only if there exists a matrix Y > 0 such that ||B_w^T Y B_w + D_yw^T D_yw|| < γ² and

   Y > (A + B_u G C_z)^T Y (A + B_u G C_z) + (C_y + D_yu G C_z)^T (C_y + D_yu G C_z) .          (38)

Then it is straightforward to verify the result by completing the square with respect to G.

ℓ∞ Control

Consider the linear time-invariant system

   x(k+1) = A x(k) + B_w w(k) + B_u u(k) ,   y(k) = C_y x(k) + D_yw w(k) ,   z(k) = C_z x(k) + D_zw w(k) ,          (39)

where x is the state, y is the output of interest, w is the finite energy disturbance, and z and u are the measured output and the control input, respectively. The ℓ∞ control problem [60] can be stated as follows: Let a performance bound γ > 0 be given. Determine whether or not there exists a controller in (1) which asymptotically stabilizes the system (39) and yields the output y such that ||y||_ℓ∞ < γ for any disturbance w with ||w||_ℓ2 ≤ 1. Parameterize all such controllers when one exists.

This problem reduces to the following.

Theorem 14. Let a controller G and a performance bound γ > 0 be given. The following statements are equivalent.
(i) The controller G solves the ℓ∞ control problem.
(ii) There exists a matrix X > 0 such that both ||C_y X C_y^T + D_yw D_yw^T|| < γ² and (33) hold, where Γ, Λ, Θ, and R are defined in terms of the augmented plant data and X.

Proof. The result follows from the analysis result of [60].

H∞ Control

Consider the linear time-invariant system

   x(k+1) = A x(k) + B_w w(k) + B_u u(k) ,   y(k) = C_y x(k) + D_yw w(k) + D_yu u(k) ,   z(k) = C_z x(k) + D_zw w(k) ,          (40)

where x is the state, y and w are the performance signals that we use to describe the design specification, and z and u are the measured output and the control input. We denote the closed loop transfer matrix from w to y, with the controller in (1), by

   T(z) = C_cl (zI - A_cl)^{-1} B_cl + D_cl .

The H∞ control problem [61,62] is the following: Let a performance bound γ > 0 be given. Determine whether or not there exists a controller in (1) which asymptotically stabilizes the system (40) and yields the closed loop transfer matrix such that ||T||_H∞ < γ. Parameterize all such controllers when one exists.

As in the continuous-time case, this problem has two distinct physical significances; namely, robustness with respect to norm-bounded perturbations, and the disturbance attenuation level measured by the energy-to-energy gain. The H∞ control problem for the discrete-time case reduces to the following.

Theorem 15. Let a controller G and a performance (or robustness) bound γ > 0 be given. The following statements are equivalent.
(i) The controller G solves the H∞ control problem.
(ii) There exists a matrix X > 0 such that (33) holds, where Γ, Λ, Θ, and R are defined in terms of the augmented plant data, X, and γ.

Proof. The result can be verified using the discrete-time bounded real lemma (e.g., [63]).

Robust H2 Control

Consider the uncertain system described by the realization (1) together with the uncertainty feedback

   ψ(k) = Δ(k) ω(k) ,          (41)

where x is the state, ψ and ω are the exogenous signals used to describe the uncertainty Δ, y and w are the output of interest and the pulse disturbance, and z and u are the measured output and the control input. We assume that the uncertainty Δ belongs to the following set of norm-bounded, time-varying, structured uncertainties:

   BU_D = { Δ(·) : Δ(k) ∈ U, ||Δ(k)|| ≤ 1 for all k ∈ I } ,          (42)

where I is the set of integers and U is the set of structured matrices defined by (29). In the sequel, we will use the set of scaling matrices S in (30), corresponding to the uncertainty structure (29). The robust H2 control problem for the discrete-time case is analogous to the continuous-time counterpart, and can be stated as follows: Let a robust performance bound γ > 0 be given. Find a controller in (1), for the uncertain system (41), such that the closed loop system is robustly stable and the output y satisfies ||y||_ℓ2 < γ for all pulse disturbances w(k) = w_0 δ(k) with ||w_0|| ≤ 1, and for all uncertainties Δ ∈ BU_D.

To address this problem, we impose the technical assumption ω(0) = 0. With this assumption, we have the following.


Theorem 16. Let a controller G and a robust performance bound γ > 0 be given. Suppose there exist a Lyapunov matrix Y > 0 and a scaling matrix S ∈ S such that both ||B_w^T Y B_w + D_yw^T D_yw|| < γ² and (33) hold, where Γ, Λ, Θ, and R are defined in terms of the augmented plant data, Y, γ, and the scaling matrix S. Then the controller G solves the robust H2 control problem.

Proof. The result follows from the analysis result of [55] by substituting the closed loop matrices defined in (3).

Robust ℓ∞ Control

Consider the uncertain system described by the realization (1) together with the uncertainty feedback

   ψ(k) = Δ(k) ω(k) ,          (43)

where x is the state, ψ and ω are the exogenous signals used to describe the uncertainty Δ ∈ BU_D, y and w are the output of interest and the finite energy disturbance, and z and u are the measured output and the control input. We consider the following robust ℓ∞ control problem: Let a robust performance bound γ > 0 be given. Find a controller in (1), for the uncertain system (43), such that the closed-loop system is robustly stable and the output y satisfies ||y||_ℓ∞ < γ for all disturbances such that ||w||_ℓ2 ≤ 1, and for all uncertainties Δ ∈ BU_D.

This problem may be (conservatively) approached using the following theorem.

Theorem 17. Let a controller G and a robust performance bound γ > 0 be given. Suppose there exist a Lyapunov matrix X > 0 and a scaling matrix S ∈ S such that both ||C_y X C_y^T + D_yw D_yw^T|| < γ² and (33) hold, where Γ, Λ, Θ, and R are defined in terms of the augmented plant data, X, γ, and the scaling matrix S. Then the controller G solves the robust ℓ∞ control problem.

Proof. The result follows from the characterization of the robust ℓ∞ performance bound given in [55].

Robust H∞ Control

Consider the uncertain system described by the realization (1) together with the uncertainty feedback

   ψ(k) = Δ(k) ω(k) ,          (44)

where x is the state, ψ and ω are the exogenous signals used to describe the uncertainty Δ ∈ BU_D, y and w are the signals that describe the performance specification, and z and u are the measured output and the control input. The robust H∞ control problem can be stated as follows: Let a robust performance bound γ > 0 be given. Find a controller in (1), for the uncertain system (44), such that the closed-loop system is robustly stable and the output y satisfies ||y||_ℓ2 < γ for all disturbances such that ||w||_ℓ2 ≤ 1, and for all uncertainties Δ ∈ BU_D.

The following formulation of the robust H∞ control problem is called the state space upper bound μ-synthesis [33,63,57].

Theorem 18. Let a controller G and a robust performance bound γ > 0 be given. Suppose there exist a Lyapunov matrix X > 0 and a scaling matrix S ∈ S satisfying inequality (33), with Γ, Λ, Θ, and R defined in terms of the augmented plant data, X, γ, and the scaling matrix S.
Then the controller G solves the robust H∞ control problem.

Proof. The result follows from the μ-upper bound of [63].

Conclusions

We have shown that many linear control design problems, with stability, performance, and robustness specifications, can be reduced to a single problem of solving a matrix inequality

   Γ G Λ + (Γ G Λ)^T + Θ < 0

for the controller parameter G, where the other matrices are appropriately defined for each control problem in terms of the plant data, a Lyapunov matrix X (or Y), and possibly a scaling matrix S.

When designing a controller based on this approach, one must find a Lyapunov matrix X (or Y) satisfying the existence conditions (14) and (15). It is shown that these conditions can be reduced to LMI feasibility problems, and several computational algorithms to solve these convex problems are discussed. Once such a Lyapunov matrix is computed, a controller can be obtained by the explicit formula given in (16) (see also [40,43] for a specialized formula for the discrete-time case).

We have shown that the 17 different control problems below (for both continuous-time and discrete-time cases):
- stabilization
- LQR (H2)
- robust H2
- H∞
- robust H∞
- L∞ (ℓ∞)
- robust L∞ (ℓ∞)
- covariance upper bound control
- positive real
all reduce to the same linear algebra problem! Many other control problems may also be shown reducible to linear algebra problems, and therefore linear algebra deserves more attention in the early stages of control education.

Robert E. Skelton is a Fellow of the American Institute of Aeronautics and Astronautics and a Fellow of the IEEE. He has received two international awards from the Japan Society for the Promotion of Science (1986), and a Senior Scientist Award from the Alexander von Humboldt Foundation. He held the Russell Severance Springer Chair at the University of California, Berkeley, in 1991. Skelton worked in the space industry (Lockheed Missiles and Space Co., Sperry Rand Corp.) for 12 years. He is currently a professor of Aeronautical and Astronautical Engineering and the director of the Space Systems Control Laboratory at Purdue University, West Lafayette, IN. He is the author or co-author of three books, one in print, Dynamic Systems Control (Wiley, 1988), and two under review, Linear Algebra in Systems, with Darrell Williamson, and A Unified Algebraic Approach to Control Design, with T. Iwasaki and K. Grigoriadis. He is associate editor of two journals: Mathematical Problems in Engineering and Mathematical Modelling of Systems.

Tetsuya Iwasaki received B.E. and M.E. degrees in electrical and electronic engineering from Tokyo Institute of Technology in 1987 and 1990, respectively. During his master's program he held a visiting position at the Power Electronics Research Center, University of Missouri at Columbia. He received his Ph.D. from the School of Aeronautics and Astronautics, Purdue University, in 1993, and then held a postdoctoral position at Purdue. Currently, he is a research associate at the Department of Systems Science, Tokyo Institute of Technology. His research interests include multivariable robust control with applications to aerospace systems.

References

[1] D.C. Youla, H.A. Jabr, and J.J. Bongiorno, "Modern Wiener-Hopf Design of Optimal Controllers: Part 2," IEEE Trans. Automat. Contr., 1976, vol. AC-21, pp. 319-338.
[2] S. Boyd and C.H. Barratt, Linear Controller Design: Limits of Performance, Prentice Hall, Englewood Cliffs, NJ, 1991.
[3] B.D.O. Anderson and J.B. Moore, Optimal Control: Linear Quadratic Methods, Prentice Hall, 1990.
[4] J.C. Doyle, K. Glover, P.P. Khargonekar, and B.A. Francis, "State Space Solutions to Standard H2 and H∞ Control Problems," IEEE Trans. Automat. Contr., August 1989, vol. AC-34, no. 8.
[5] P. Gahinet and P. Apkarian, "A Linear Matrix Inequality Approach to H∞ Control," Int. J. Robust Nonlin. Contr., 1994, vol. 4, pp. 421-448.
[6] T. Iwasaki and R.E. Skelton, "All Controllers for the General H∞ Control Problem: LMI Existence Conditions and State Space Formulas," Automatica, 1994, vol. 30, pp. 1307-1317.
[7] S.P. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM Studies in Applied Mathematics, 1994.
[8] R.E. Skelton, Dynamic Systems Control, Wiley, 1988.
[9] R.E. Skelton, "Model Error Concepts in Control Design," Int. J. Contr., 1989, vol. 49, no. 5, pp. 1725-1753.
[10] D.C. Hyland and D.S. Bernstein, "The Optimal Projection Equations for Fixed Order Dynamic Compensation," IEEE Trans. Automat. Contr., 1984, vol. AC-29, pp. 1034-1037.
[11] W.J. Naeije and O.H. Bosgra, "The Design of Dynamic Compensators for Linear Multivariable Systems," IFAC, Fredericton, Canada, 1977.
[12] K.C. Goh, M.G. Safonov, and G.P. Papavassilopoulos, "A Global Optimization Approach for the BMI Problem," Proc. IEEE Conf. Decision Contr., 1994, pp. 2009-2014.
[13] Y. Yamada, S. Hara, and H. Fujioka, "Global Optimization for Constantly Scaled H∞ Control Problem," Proc. American Contr. Conf., June 1995.
[14] J.C. Geromel, P.L.D. Peres, and S.R. Souza, "Output Feedback Stabilization of Uncertain Systems Through a Min/Max Problem," IFAC World Congress, 1993.
[15] K.M. Grigoriadis and R.E. Skelton, "Low-Order Control Design for LMI Problems Using Alternating Projection Methods," Proc. Allerton Conf., October 1993.
[16] T. Iwasaki and R.E. Skelton, "The XY Centering Algorithm for the Dual LMI Problem: A New Approach to Fixed Order Control Design," Int. J. Contr., 1993 (to appear).
[17] D.S. Bernstein and W.M. Haddad, "LQG Control with an H∞ Performance Bound: A Riccati Equation Approach," IEEE Trans. Automat. Contr., March 1989, vol. AC-34, no. 3, pp. 293-305.
[18] P.P. Khargonekar and M.A. Rotea, "Mixed H2/H∞ Control: A Convex Optimization Approach," IEEE Trans. Automat. Contr., 1991, vol. AC-36, no. 7, pp. 824-837.
[19] M.A. Rotea, "The Generalized H2 Control Problem," Automatica, 1993, vol. 29, pp. 373-386.
[20] G. Zhu and R.E. Skelton, "Mixed L2 and L∞ Problems by Weight Selection in Quadratic Optimal Control," Int. J. Contr., 1991, vol. 53, no. 5, pp. 1161-1176.
[21] J.L. Massera, "Contributions to Stability Theory," Ann. Math., 1956, vol. 64, pp. 182-206.
[22] R.E. Kalman, "Contributions to the Theory of Optimal Control," Bol. Soc. Mat. Mexicana, 1960, pp. 102-119.
[23] R.E. Kalman and J.E. Bertram, "Control System Analysis and Design Via the Second Method of Lyapunov I: Continuous Time Systems," J. Basic Engineering, June 1960, vol. 82, pp. 371-393.
[24] J.C. Doyle and G. Stein, "Multivariable Feedback Design: Concepts for a Classical/Modern Synthesis," IEEE Trans. Automat. Contr., February 1981, vol. AC-26, no. 1, pp. 4-16.
[25] M.G. Safonov and M. Athans, "A Multiloop Generalization of the Circle Criterion for Stability Margin Analysis," IEEE Trans. Automat. Contr., 1981, vol. AC-26, no. 2, pp. 415-422.
[26] G. Zames, "Feedback and Optimal Sensitivity: Model Reference Transformations, Multiplicative Seminorms, and Approximate Inverses," IEEE Trans. Automat. Contr., 1981, vol. AC-26, pp. 301-320.


[27] J.C. Doyle, "Analysis of Feedback Systems with Structured Uncertainties," IEE Proc., November 1982, vol. 129, Part D, no. 6, pp. 242-250.
[28] M.G. Safonov, "Stability Margins of Diagonally Perturbed Multivariable Feedback Systems," IEE Proc., November 1982, vol. 129, Part D, no. 6, pp. 251-256.
[29] K. Glover and J. Doyle, "State Space Formulae for All Stabilizing Controllers that Satisfy an H∞ Norm Bound and Relations to Risk Sensitivity," Sys. Contr. Lett., 1988, vol. 11, pp. 167-172.
[30] I.R. Petersen, "Disturbance Attenuation and H∞ Optimization: A Design Method Based on the Algebraic Riccati Equation," IEEE Trans. Automat. Contr., May 1987, vol. AC-32, no. 5, pp. 427-429.
[31] K. Zhou and P.P. Khargonekar, "An Algebraic Riccati Equation Approach to H∞ Optimization," Sys. Contr. Lett., 1988, vol. 11, pp. 85-92.
[32] S.P. Boyd, V. Balakrishnan, C.H. Barratt, N.M. Khraishi, X. Li, D.G. Meyer, and S.A. Norman, "A New CAD Method and Associated Architectures for Linear Controllers," IEEE Trans. Automat. Contr., March 1988, vol. AC-33, no. 3, pp. 268-283.
[33] J.C. Doyle, A. Packard, and K. Zhou, "Review of LFTs, LMIs and μ," Proc. IEEE Conf. Decision Contr., 1991, pp. 1227-1232.
[34] S. Boyd and L. El Ghaoui, "Method of Centers for Minimizing Generalized Eigenvalues," Linear Algebra and Applications, 1993, vol. 188-189, pp. 63-111.
[35] Yu. Nesterov and A. Nemirovsky, Interior Point Polynomial Methods in Convex Programming, SIAM Studies in Applied Mathematics, Philadelphia, 1994.
[36] L. Vandenberghe and S. Boyd, "A Primal-Dual Potential Reduction Method for Problems Involving Matrix Inequalities," Mathematical Programming, June 1993, Series B.
[37] A. Hotz and R.E. Skelton, "Covariance Control Theory," Int. J. Contr., 1987, vol. 46, no. 1, pp. 13-32.
[38] R.E. Skelton and T. Iwasaki, "Liapunov and Covariance Controllers," Int. J. Contr., 1993, vol. 57, no. 3, pp. 519-536.
[39] T. Iwasaki, R.E. Skelton, and J.C. Geromel, "Linear Quadratic Suboptimal Control with Static Output Feedback," Sys. Contr. Lett., 1994, vol. 23, pp. 421-430.
[40] T. Iwasaki, "A Unified Matrix Inequality Approach to Linear Control Design," Ph.D. dissertation, Purdue University, West Lafayette, IN 47907, December 1993.
[41] T. Iwasaki and R.E. Skelton, "Parametrization of All Stabilizing Controllers Via Quadratic Lyapunov Functions," J. Optimiz. Theory Appl., 1993 (to appear).
[42] T. Iwasaki and R.E. Skelton, "A Unified Approach to Fixed Order Controller Design Via Linear Matrix Inequalities," Proc. American Contr. Conf., 1994, pp. 35-39.

[43] R.E. Skelton, T. Iwasaki, and K.M. Grigoriadis, A Unified Algebraic Approach to Linear Control Design (in preparation).
[44] M. Kojima, S. Shindoh, and S. Hara, "Interior Point Methods for the Monotone Linear Complementarity Problem in Symmetric Matrices," Tech. Rep., Dept. Info. Sci., Tokyo Inst. Tech., 1994.
[45] A. Nemirovskii and P. Gahinet, "The Projective Method for Solving Linear Matrix Inequalities," Proc. American Contr. Conf., 1994, pp. 840-844.
[46] P. Gahinet, A. Nemirovskii, A.J. Laub, and M. Chilali, LMI Control Toolbox, The MathWorks Inc., 1994.
[47] M.A. Rotea and T. Iwasaki, "An Alternative to the D-K Iteration?," Proc. American Contr. Conf., 1994, pp. 53-57.
[48] W.S. Levine and M. Athans, "On the Determination of the Optimal Constant Output Feedback Gains for Linear Multivariable Systems," IEEE Trans. Automat. Contr., 1970, vol. AC-15, pp. 44-48.
[49] M. Corless, G. Zhu, and R.E. Skelton, "Robustness of Covariance Controllers," Proc. IEEE Conf. Decision Contr., 1989, pp. 2667-2672.
[50] D.A. Wilson, "Convolution and Hankel Operator Norms for Linear Systems," IEEE Trans. Automat. Contr., 1989, vol. AC-34, no. 1, pp. 94-98.
[51] B.A. Francis, A Course in H∞ Control Theory, Springer-Verlag, New York, 1987.
[52] W.M. Haddad and D.S. Bernstein, "Robust Stabilization with Positive Real Uncertainty: Beyond the Small Gain Theorem," Sys. Contr. Lett., 1991, pp. 191-208.
[53] W. Sun, P. Khargonekar, and D. Shim, "Solution to the Positive Real Control Problem for Linear Time-Invariant Systems," IEEE Trans. Automat. Contr., October 1994, vol. AC-39, no. 10, pp. 2034-2046.
[54] D.S. Bernstein and W.M. Haddad, "Robust Stability and Performance Analysis for State Space Systems Via Quadratic Lyapunov Bounds," SIAM J. Matrix Anal. Appl., April 1990, vol. 11, no. 2, pp. 239-271.
[55] T. Iwasaki, "Robust Performance Analysis for Systems with Norm-Bounded Time-Varying Structured Uncertainty," Int. J. Robust Nonlinear Contr., 1993 (to appear).
[56] A.A. Stoorvogel, "The Robust H2 Control Problem: A Worst-Case Design," IEEE Trans. Automat. Contr., 1993, vol. AC-38, no. 9, pp. 1358-1370.
[57] A. Packard, K. Zhou, P. Pandey, J. Leonhardson, and G. Balas, "Optimal Constant I/O Similarity Scaling for Full-Information and State-Feedback Control Problems," Sys. Contr. Lett., 1992, vol. 19, pp. 271-280.
[58] L. Xie and C.E. de Souza, "Robust H∞ Control for Linear Systems with Norm-Bounded Time-Varying Uncertainty," IEEE Trans. Automat. Contr., August 1992, vol. AC-37, no. 8, pp. 1188-1191.
[59] R.E. Kalman and J.E. Bertram, "Control System Analysis and Design Via the Second Method of Lyapunov II: Discrete Time Systems," J. Basic Engineering, June 1960, vol. 82, pp. 394-400.
[60] G. Zhu and R.E. Skelton, "Robust Discrete Controllers Guaranteeing ℓ2 and ℓ∞ Performances," IEEE Trans. Automat. Contr., 1992, vol. AC-37, no. 10, pp. 1620-1625.
[61] P.A. Iglesias and K. Glover, "State Space Approach to Discrete-Time H∞ Control," Int. J. Contr., 1991, vol. 54, no. 5, pp. 1031-1073.
[62] A.A. Stoorvogel, The H∞ Control Problem: A State Space Approach, Prentice Hall, 1992.
[63] A. Packard and J. Doyle, "The Complex Structured Singular Value," Automatica, 1993, vol. 29, no. 1, pp. 71-109.
