I. INTRODUCTION
Hybrid dynamical systems (HDS) have attracted considerable
attention in recent years (see, e.g., [1]–[4]). In general, HDSs are
those that consist of a logical discrete-event decision-making system
interacting with a continuous-time process. A simple example is
a climate control system in a typical home. The on/off nature of
the thermostat is modeled as a discrete-event system, whereas the
furnace and air-conditioner are modeled as continuous-time systems.
Some other examples include transmissions and stepper motors [3],
computer disk drives [1], robotic systems [4], higher-level flexible
manufacturing systems [5], and intelligent vehicle/highway systems
[6].
In this paper, we consider robust control problems for a new
class of HDSs. The HDS under consideration consists of a linear
continuous-time plant with the control and disturbance inputs and a
discrete-event controller. There are several theoretically interesting
and practically significant problems concerning the use of switched
controllers. In some situations it is possible to design several controllers and then switch between them to provide a performance
improvement over a fixed controller. In other situations the choice
of linear or nonlinear controllers available to the designer is limited
and the design task is to use the available set of controllers in an
optimal fashion. This latter problem is the type we consider in this
paper and includes, for example, the optimal switching between gears
in a gearbox and the optimal switching between heating and cooling
modes of operation in an air-conditioning plant. The controller is
defined by a collection of given nonlinear output feedback controllers
which are called basic controllers. Then, our control strategy is a
rule for switching from one basic controller to another. The control
goal is to achieve a level of performance defined by an integral
performance index. This integral performance index is similar to the
requirement in standard $H_\infty$-control theory (see, e.g., [7]–[10]). We
obtain a solution for an output feedback problem on an infinite time
interval. The main result is given in terms of the existence of suitable
solutions to a Riccati algebraic equation of the $H_\infty$-filtering type and
a corresponding dynamic programming equation.
Manuscript received August 22, 1995. This work was supported by the
Australian Research Council.
A. V. Savkin is with the Department of Electrical and Electronic Engineering, University of Western Australia, Nedlands, Perth, WA 6009, Australia.
R. J. Evans is with the Department of Electrical and Electronic Engineering
and Cooperative Research Center for Sensor Signal and Information Processing, University of Melbourne, Parkville, Victoria 3052, Australia.
Publisher Item Identifier S 0018-9286(98)05813-9.
The plant is described by the state equations

$$
\dot{x}(t) = Ax(t) + B_1\xi(t) + B_2u(t),\qquad
z(t) = Kx(t) + Gu(t),\qquad
y(t) = Cx(t) + v(t)
\tag{1}
$$

where $\xi(\cdot)$ and $v(\cdot)$ are the disturbance inputs. The controller has the form

$$
u(t) = U_{i_j}(y(t))\qquad \forall\,t \in [jT,\,(j+1)T),\ \forall\,j \in \{0, 1, 2, \ldots\}
\tag{2}
$$

where the switching index is determined from past measurements by

$$
i_j = I_j\bigl(y(\cdot)\big|_0^{jT}\bigr).
\tag{3}
$$
As above, our control strategy is a rule for switching from one basic
controller to another. Such a rule constructs a symbolic sequence
$\{i_j\}_{j=0}^{\infty}$ from the output measurement $y(\cdot)$. The sequence $\{i_j\}_{j=0}^{\infty}$
is called a switching sequence.

*Notation:* In this paper, $\|\cdot\|$ denotes the standard Euclidean norm.
Also, for any $T \in (0, \infty]$, $L_2[0, T)$ denotes the Hilbert space of square-integrable vector-valued functions defined on $[0, T)$.
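The switching architecture (2), (3) is easy to sketch in simulation: the plant runs in continuous time while, at the start of each interval $[jT, (j+1)T)$, a rule selects which basic controller drives the next interval from the measurements collected so far. A minimal sketch (all matrices, gains, and the switching rule are hypothetical; disturbances are set to zero; Euler integration):

```python
import numpy as np

# Hypothetical plant data for (1) (disturbances set to zero in this sketch):
A  = np.array([[0.0, 1.0], [-2.0, -0.5]])
B2 = np.array([[0.0], [1.0]])
C  = np.array([[1.0, 0.0]])

# Two illustrative basic output-feedback controllers U_1, U_2:
controllers = [lambda y: -1.0 * y, lambda y: -5.0 * y]

def switching_rule(y_hist):
    """Illustrative rule I_j: choose the high-gain controller while the
    most recent output is large, the low-gain one otherwise."""
    return 1 if abs(y_hist[-1]) > 0.5 else 0

T, dt, N = 1.0, 1e-3, 10       # dwell time T, Euler step, number of intervals
x = np.array([1.0, 0.0])       # initial state x(0)
y_hist, seq = [(C @ x).item()], []
for j in range(N):
    i_j = switching_rule(y_hist)    # i_j is held fixed over [jT, (j+1)T), as in (2), (3)
    seq.append(i_j)
    for _ in range(int(T / dt)):
        y = (C @ x).item()
        u = controllers[i_j](y)
        x = x + dt * (A @ x + (B2 * u).ravel())
        y_hist.append(y)
print("switching sequence:", seq)
```

The sequence printed at the end is exactly the switching sequence $\{i_j\}$ of (3) for this toy realization.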
*Definition 2.1:* Consider system (1). Let $Q = Q' > 0$ and $R = R' > 0$ be given matrices. Suppose that there exist constants $c > 0$ and $\varepsilon > 0$ and a function $\tilde{V}(x_0)$ such that $\tilde{V}(0) = 0$, $\tilde{V}(x_0) \ge 0$ for all $x_0 \in \mathbf{R}^n$, and for any vector $x_0 \in \mathbf{R}^n$ there exists a controller of the form (2) and (3) such that the following conditions hold.

1) For any initial condition $x(0)$ and disturbance inputs $[\xi(\cdot), v(\cdot)] \in L_2[0, \infty)$ the closed-loop system (1)–(3) has a unique solution which is defined on $[0, \infty)$.

2) The inequality

$$
\varepsilon\|x(NT)\|^2 + \int_0^{NT} \bigl(\|z(t)\|^2 - \xi(t)'Q\xi(t) - v(t)'Rv(t)\bigr)\,dt \le c\|x(0) - x_0\|^2 + \tilde{V}(x_0)
\tag{4}
$$

holds for all $N \in \mathbf{N}$ and for all solutions to the closed-loop system with any disturbance inputs $[\xi(\cdot), v(\cdot)] \in L_2[0, NT]$.
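The left side of (4) can be evaluated numerically along any simulated trajectory. The sketch below does this bookkeeping under hypothetical plant data, one fixed stabilizing output feedback, and one particular square-integrable disturbance realization; it illustrates the quantity being bounded, not a verification of (4), which quantifies over all disturbances:

```python
import numpy as np

# Hypothetical data for (1); Q and R weight the disturbances in (4).
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B1 = np.array([[0.0], [1.0]])
B2 = np.array([[0.0], [1.0]])
K = np.array([[1.0, 0.0]])
G = np.array([[0.2]])
C = np.array([[1.0, 0.0]])
Q = np.array([[4.0]])   # Q = Q' > 0
R = np.array([[4.0]])   # R = R' > 0

dt, horizon = 1e-3, 8.0
x = np.array([0.3, 0.0])
integral = 0.0          # accumulates the integral on the left side of (4)
for step in range(int(horizon / dt)):
    t = step * dt
    xi = np.array([0.2 * np.exp(-t)])    # a square-integrable disturbance
    v  = np.array([0.1 * np.exp(-t)])
    y  = C @ x + v
    u  = -1.0 * y                        # one fixed stabilizing output feedback
    z  = K @ x + G @ u
    integral += dt * (z @ z - xi @ Q @ xi - v @ R @ v)
    x = x + dt * (A @ x + B1 @ xi + B2 @ u)
print("integral term of (4):", float(integral))
```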
Consider the algebraic Riccati equation

$$
A\hat{P} + \hat{P}A' + \hat{P}\,[K'K - C'RC]\,\hat{P} + B_1Q^{-1}B_1' = 0
\tag{5}
$$

and the state equations

$$
\dot{\hat{x}}(t) = \bigl[A + \hat{P}\,[K'K - C'RC]\bigr]\hat{x}(t) + \hat{P}C'Ry(t) + \bigl[-\hat{P}K'G + B_2\bigr]u(t),\qquad \hat{x}(0) = \hat{x}_0
\tag{6}
$$

and let $W(\hat{x}(t), u(t), y(t))$ denote the function defined by (7).
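Equation (5) has an indefinite quadratic term $K'K - C'RC$, so standard LQ Riccati solvers do not apply directly. A simple numerical sketch, with hypothetical data, follows the limit construction used later in the paper ($\hat{P} = \lim_{t\to\infty} P(t)$ with zero initial condition): integrate the differential version of (5) forward until it settles.

```python
import numpy as np

# Hypothetical data; (5): A P + P A' + P (K'K - C'R C) P + B1 Q^{-1} B1' = 0
A  = np.array([[0.0, 1.0], [-2.0, -1.0]])
B1 = np.array([[0.0], [1.0]])
K  = np.array([[1.0, 0.0]])
C  = np.array([[1.0, 0.0]])
Q  = np.array([[1.0]])
R  = np.array([[10.0]])   # a large R keeps the quadratic term "mostly negative"

M = K.T @ K - C.T @ R @ C          # indefinite quadratic coefficient
S = B1 @ np.linalg.inv(Q) @ B1.T   # constant term B1 Q^{-1} B1'

def riccati_rhs(P):
    return A @ P + P @ A.T + P @ M @ P + S

P, dt = np.zeros((2, 2)), 1e-3     # integrate from P(0) = 0 with Euler steps
for _ in range(200_000):
    P_next = P + dt * riccati_rhs(P)
    if np.max(np.abs(P_next - P)) < 1e-12:   # settled: P_next ≈ P
        break
    P = P_next

residual = np.max(np.abs(riccati_rhs(P)))    # how well P solves (5)
print("P_hat approx:\n", P, "\nresidual:", residual)
```

For these (hypothetical) data the iteration converges because the matrix $A$ is stable and the quadratic term is negative semidefinite; existence of a suitable limit in general is exactly what condition 2) of the theory addresses.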
Then

$$
F^i(\hat{x}_0, L(\cdot)) = \sup_{y(\cdot)\in L_2[0,T]} \Bigl[ L(\hat{x}(T)) + \int_0^T W\bigl(\hat{x}(t), U_i(y(t)), y(t)\bigr)\,dt \Bigr]
\tag{8}
$$

where the supremum is taken over all solutions to the system (6) with $y(\cdot) \in L_2[0, T]$, $u(t) \equiv U_i(y(t))$, and initial condition $\hat{x}(0) = \hat{x}_0$.
Now we are in a position to present the main result of this paper.

*Theorem 3.1:* Consider system (1) and the basic controllers (2). Let $Q = Q' > 0$ and $R = R' > 0$ be given matrices. Suppose that $G'G > 0$, $(A, B_1)$ is controllable, $(A, K)$ is observable, and $F^i(\cdot, \cdot)$ is defined by (8). Then, the following statements are equivalent.

1) The output feedback robust control problem defined by the matrices $Q$ and $R$ has a solution via controlled switching with output feedback basic controllers (2).

2) There exist a constant $\varepsilon_0 > 0$ and a solution $\hat{P} > 0$ to the Riccati equation (5) such that the matrix $D = A + \hat{P}[K'K - C'RC]$ is stable and the dynamic programming equation

$$
\hat{V}(\hat{x}_0) = \min_{i=1,\ldots,k} F^i\bigl(\hat{x}_0, \hat{V}(\cdot)\bigr)
\tag{9}
$$

has a solution $\hat{V}(\hat{x}_0)$ such that $\hat{V}(0) = 0$ and $\hat{V}(\hat{x}_0) \ge \varepsilon_0\|\hat{x}_0\|^2$ for all $\hat{x}_0 \in \mathbf{R}^n$.
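The dynamic programming equation (9) is posed over a function class and is not directly computable. Its min–sup structure can still be illustrated on a deliberately crude finite analogue; everything below is hypothetical (a 1-D grid of estimator states, two scalar "controllers" modeled as contraction factors, three sampled disturbances standing in for the supremum, and a discount factor added purely so the toy iteration converges, which (9) itself does not contain):

```python
import numpy as np

xs = np.linspace(-2.0, 2.0, 41)     # grid of estimator states
ws = [-0.3, 0.0, 0.3]               # sampled disturbance values
contractions = [0.6, 0.2]           # toy closed-loop maps x -> a*x + w over one period
gamma = 0.9                         # discount, added only for convergence of the sketch

V = np.zeros_like(xs)
for _ in range(300):                # value iteration on V(x) = min_i max_w [cost + V(next)]
    V_new = np.empty_like(V)
    for n, x in enumerate(xs):
        # min over basic controllers of the worst case over sampled disturbances
        V_new[n] = min(
            max(x * x + gamma * np.interp(a * x + w, xs, V) for w in ws)
            for a in contractions
        )
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
mid = np.argmin(np.abs(xs))
print("V(0) =", V[mid], " V(2) =", V[-1])
```

The fixed point inherits the qualitative properties required in condition 2): it vanishes at the origin only in the disturbance-free limit and grows with $\|\hat{x}_0\|$.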
*Proof:* Suppose that condition 1) holds. Then, inequality (4) implies that

$$
J_\delta := \sup_{[\xi(\cdot), v(\cdot)]\in L_2[0,\infty)} \Bigl[ \int_0^\infty \|z(t)\|^2\,dt - (1-\delta)\int_0^\infty \bigl(\xi(t)'Q\xi(t) + v(t)'Rv(t)\bigr)\,dt \Bigr] \le c\|x(0)\|^2 < \infty \qquad \forall\,\delta\in(0,1)
\tag{14}
$$

where $c > 0$ is a constant such that condition (4) holds. Now consider the disturbance input $v(\cdot) \equiv -Cx(\cdot)$. Then, we have $y(\cdot) \equiv 0$ and $u(\cdot) \equiv 0$. Therefore, from (14) we have

$$
J_\delta^0 := \sup_{\xi(\cdot)\in L_2[0,\infty)} \Bigl[ \int_0^\infty \|z(t)\|^2\,dt - (1-\delta)\int_0^\infty \xi(t)'Q\xi(t)\,dt \Bigr] \le c\|x(0)\|^2 < \infty \qquad \forall\,\delta\in(0,1).
$$

Hence, it follows from [12, Lemma 5] that the solution $P(\cdot)$ to the Riccati equation

$$
\dot{P}(t) = AP(t) + P(t)A' + P(t)\,[K'K - C'RC]\,P(t) + B_1Q^{-1}B_1'
\tag{10}
$$

satisfies a uniform bound of the form $P(NT) \le (\,\cdot\,)^{-1}I$ for all $N \in \mathbf{N}$ (15). Furthermore, we can take $\delta \to 0$ and obtain from the continuity of $P(\cdot)$ that $P(NT) \le P_c(NT) \le (\,\cdot\,)^{-1}I$ for all $N \in \mathbf{N}$.

Now let $P_1(\cdot)$ be the solution to (10) with initial condition $P_1(0) = 0$. Then, well-known properties of the Riccati equation (10) imply that

$$
P_1(t) \le P_c(t) \qquad \forall\,t \ge 0
\tag{16}
$$

and

$$
P_1(t_1) \le P_1(t_2) \qquad \forall\,t_1 \le t_2.
\tag{17}
$$

Relations (16) and (17) imply that the limit $\hat{P} = \lim_{t\to\infty} P_1(t)$ exists; that is,

$$
P_1(t) \to \hat{P} \quad \text{as} \quad t \to \infty
\tag{18}
$$

and $\hat{P}$ satisfies the algebraic Riccati equation (5).

Let $\mathcal{U}$ be the class of all controllers of the form (2) and (3) and let $N \in \mathbf{N}$ be fixed. Introduce for any $j = 0, 1, \ldots, N$ the following function:

$$
V_j^N(\hat{x}_0) := \inf_{u(\cdot)\in\mathcal{U}}\ \sup_{y(\cdot)\in L_2[jT, NT]} \Bigl[ \varepsilon_0\|\hat{x}(NT)\|^2 + \int_{jT}^{NT} W\bigl(\hat{x}(t), u(t), y(t)\bigr)\,dt \Bigr]
\tag{19}
$$

where the supremum is taken over all solutions to (13) with initial condition $\hat{x}(jT) = \hat{x}_0$; in particular, $V_N^N(\hat{x}_0) = \varepsilon_0\|\hat{x}_0\|^2$. According to the theory of dynamic programming (see, e.g., [7] and [14]), $V_j^N(\cdot)$ satisfies the equations

$$
V_j^N(\hat{x}_0) = \min_{i=1,\ldots,k} F_j^i\bigl(\hat{x}_0, V_{j+1}^N(\cdot)\bigr)
\tag{20}
$$

where

$$
F_j^i(\hat{x}_0, L(\cdot)) = \sup_{y(\cdot)\in L_2[jT,(j+1)T)} \Bigl[ L\bigl(\hat{x}((j+1)T)\bigr) + \int_{jT}^{(j+1)T} W\bigl(\hat{x}(t), U_i(y(t)), y(t)\bigr)\,dt \Bigr]
$$

and the supremum is taken over all solutions to the system (13) with $y(\cdot) \in L_2[jT, (j+1)T)$, $u(t) \equiv U_i(y(t))$, and initial condition $\hat{x}(jT) = \hat{x}_0$.

Now we prove that there exists a function $Z(\cdot)$ such that $Z(0) = 0$ and

$$
V_j^N(\hat{x}_0) \le Z(\hat{x}_0) \qquad \forall\,\hat{x}_0, N, j.
\tag{21}
$$

Moreover, $V_j^N(\hat{x}_0) \le V_j^M(\hat{x}_0)$ for all $M \ge N$ (25). Hence, the limits $V_j(\hat{x}_0) := \lim_{N\to\infty} V_j^N(\hat{x}_0)$ exist and satisfy

$$
V_j(\hat{x}_0) = \min_{i=1,\ldots,k} F_j^i\bigl(\hat{x}_0, V_{j+1}(\cdot)\bigr).
$$

Finally, since $\hat{V}(\hat{x}_0) \ge \varepsilon_0\|\hat{x}_0\|^2$, we have

$$
\hat{V}(\hat{x}(0)) - \hat{V}(\hat{x}(NT)) \le \hat{V}(\hat{x}(0)) - \varepsilon_0\|\hat{x}(NT)\|^2.
$$

Furthermore, Lemma 3.1 implies that (11) holds with $X_0 = \hat{P}^{-1}$, $\varepsilon = \tfrac{1}{2}\min\{1/\|\hat{P}^{-1}\|, \varepsilon_0\}$, and $\tilde{V} \equiv \hat{V}$. Hence, condition (4) holds with $c = \|\hat{P}^{-1}\|$ and $\tilde{V} \equiv \hat{V}$. Conditions 1) and 2) of Definition 2.1 follow immediately from inequality (4), assumption $G'G > 0$, and observability of $(A, K)$. This completes the proof of the theorem.
*Remarks:*

1) It can be shown, using the methodology of [15], that robust stability of the closed-loop system (1)–(3) in the sense of Definition 2.1 implies input-to-state stability: for any initial condition $x(0)$ and disturbance inputs $[\xi(\cdot), v(\cdot)] \in L_\infty[0, \infty)$, the corresponding solution $x(\cdot)$ to the closed-loop system belongs to $L_\infty[0, \infty)$. Here $L_\infty[0, \infty)$ denotes the Banach space of measurable vector-valued functions defined and essentially bounded on $[0, \infty)$.

2) The disturbance rejection problem (4) is equivalent to a robust stabilization problem (see, e.g., [16]) in which the underlying linear system is described by the state equations (1). In this case, $\xi(t)$ and $v(t)$ are the uncertainty inputs and $z(t)$ is the uncertainty output; the uncertainty inputs $\xi(t)$ and $v(t)$ are required to satisfy a certain integral quadratic constraint. Then, Theorem 3.1 gives a solution to the corresponding problem of robust stabilization via output feedback switching control (2) and (3).
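The input-to-state property in Remark 1 can be illustrated numerically: drive a stable closed loop (a hypothetical stand-in for (1)–(3)) with an essentially bounded disturbance and record the supremum of the state norm, which should remain finite.

```python
import numpy as np

# Hypothetical stable closed-loop matrix standing in for the switched system (1)-(3).
Acl = np.array([[0.0, 1.0], [-3.0, -1.0]])
B1  = np.array([[0.0], [1.0]])

dt, steps = 1e-3, 20000
x = np.array([1.0, 0.0])
sup_norm = 0.0
rng = np.random.default_rng(1)
for _ in range(steps):
    xi = rng.uniform(-1.0, 1.0, size=1)   # an essentially bounded disturbance
    x = x + dt * (Acl @ x + B1 @ xi)
    sup_norm = max(sup_norm, np.linalg.norm(x))
print("sup_t ||x(t)|| =", sup_norm)
```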
APPENDIX

For any $x_f \in \mathbf{R}^n$ and $\xi(\cdot) \in L_2[0, NT]$, consider the cost functional

$$
J[x_f, \xi(\cdot)] = (x(0) - x_0)'X_0(x(0) - x_0) + \int_0^{NT} \Bigl(\xi(t)'Q\xi(t) - \|Kx(t) + Gu_0(t)\|^2 + \bigl(y_0(t) - Cx(t)\bigr)'R\bigl(y_0(t) - Cx(t)\bigr)\Bigr)\,dt
\tag{28}
$$

where $x(\cdot)$ is the solution to (1) with disturbance input $\xi(\cdot)$ and boundary condition $x(NT) = x_f$.
Now consider the following minimization problem:

$$
\min_{\xi(\cdot)\in L_2[0, NT]} J[x_f, \xi(\cdot)]
\tag{29}
$$

where the minimum is taken over all $x(\cdot)$ and $\xi(\cdot)$ related by (1) with the boundary condition $x(NT) = x_f$. This problem is a linear quadratic optimal tracking problem in which the system operates in reverse time.

We wish to convert the above tracking problem into a tracking problem of the form considered in [17] and [18]. In order to achieve this, first define $x_1(t)$ to be the solution to the state equations

$$
\dot{x}_1(t) = Ax_1(t) + B_2u_0(t), \qquad x_1(0) = 0.
\tag{30}
$$
Now let $\tilde{x}(t) = x(t) - x_1(t)$. Then, it follows from (1) and (30) that $\tilde{x}(t)$ satisfies the state equations

$$
\dot{\tilde{x}}(t) = A\tilde{x}(t) + B_1\xi(t)
\tag{31}
$$

where $\tilde{x}(0) = x(0)$. Furthermore, the cost function (28) can be rewritten as

$$
J[x_f, \xi(\cdot)] = \tilde{J}[\tilde{x}_f, \xi(\cdot)] = (\tilde{x}(0) - x_0)'X_0(\tilde{x}(0) - x_0) + \int_0^{NT} \Bigl(\xi(t)'Q\xi(t) - \bigl\|K[\tilde{x}(t) + x_1(t)] + Gu_0(t)\bigr\|^2 + \bigl(y_0(t) - C[\tilde{x}(t) + x_1(t)]\bigr)'R\bigl(y_0(t) - C[\tilde{x}(t) + x_1(t)]\bigr)\Bigr)\,dt
\tag{32}
$$

where $\tilde{x}(NT) = \tilde{x}_f = x_f - x_1(NT)$. Equations (31) and (32) now define a tracking problem of the form considered in [17] where $y_0(\cdot)$, $u_0(\cdot)$, and $x_1(\cdot)$ are all treated as reference inputs.
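The change of variables behind (30) and (31) is pure linearity: the trajectory of (1) splits into a disturbance-free component $x_1$ driven by $u_0$ and a component $\tilde{x}$ driven only by $\xi$. A sketch verifying the decomposition numerically, with hypothetical matrices and hypothetical reference/disturbance signals:

```python
import numpy as np

# Verify x(t) = x1(t) + x_tilde(t), where x1 solves (30) (driven by u0 only)
# and x_tilde solves (31) (driven by xi only). All data are hypothetical.
A  = np.array([[0.0, 1.0], [-2.0, -1.0]])
B1 = np.array([[0.0], [1.0]])
B2 = np.array([[1.0], [0.5]])

dt, steps = 1e-3, 5000
x  = np.array([0.5, -0.2])          # x(0)
x1 = np.zeros(2)                    # x1(0) = 0, per (30)
xt = x.copy()                       # x_tilde(0) = x(0), per (31)
for k in range(steps):
    t = k * dt
    u0 = np.array([np.sin(t)])      # reference input u0(.)
    xi = np.array([np.cos(2 * t)])  # disturbance input xi(.)
    x  = x  + dt * (A @ x  + B1 @ xi + B2 @ u0)   # full system (1)
    x1 = x1 + dt * (A @ x1 + B2 @ u0)             # subsystem (30)
    xt = xt + dt * (A @ xt + B1 @ xi)             # subsystem (31)
err = np.linalg.norm(x - (x1 + xt))
print("decomposition error:", err)  # zero up to rounding, by linearity
```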
Applying the results of [17] and [18] to the tracking problem (31), (32), we obtain

$$
\min_{\xi(\cdot)\in L_2[0, NT]} \tilde{J}[\tilde{x}_f, \xi(\cdot)] = \cdots + \int_0^{NT} W\bigl(x_1(t) + \hat{x}_1(t), u_0(t), y_0(t)\bigr)\,dt
\tag{33}
$$

where $\hat{x}_1(\cdot)$ is the solution to the state equations

$$
\dot{\hat{x}}_1(t) = \bigl[A + P(t)[K'K - C'RC]\bigr]\hat{x}_1(t) + \cdots.
$$

Hence

$$
\min_{\xi(\cdot)\in L_2[0, NT]} J[x_f, \xi(\cdot)] = \cdots + \int_0^{NT} W\bigl(\hat{x}(t), u_0(t), y_0(t)\bigr)\,dt
$$

where $\hat{x}(\cdot)$ is the solution to the state equations (6) with initial condition $\hat{x}(0) = x_0$. From this we can conclude that condition (11) with a given input–output pair $[u_0(\cdot), y_0(\cdot)]$ is equivalent to the inequality (12).
REFERENCES

[1] A. Gollu and P. P. Varaiya, "Hybrid dynamical systems," in Proc. 28th IEEE Conf. Decision and Control, Tampa, FL, 1989.
[2] P. J. Antsaklis, J. A. Stiver, and M. Lemmon, "Hybrid systems modeling and autonomous control systems," in Hybrid Systems, R. L. Grossman, A. Nerode, A. P. Ravn, and H. Rischel, Eds. New York: Springer-Verlag, 1993.
[3] R. W. Brockett, "Hybrid models for motion control systems," in Essays in Control, H. L. Trentelman and J. C. Willems, Eds. Boston, MA: Birkhäuser, 1993.
[4] A. Back, J. Guckenheimer, and M. Myers, "A dynamical simulation facility for hybrid systems," in Hybrid Systems, R. L. Grossman, A. Nerode, A. P. Ravn, and H. Rischel, Eds. New York: Springer-Verlag, 1993.
[5] S. B. Gershwin, "Hierarchical flow control: A framework for scheduling and planning discrete events in manufacturing systems," Proc. IEEE, vol. 77, no. 1, pp. 195–209, 1989.
[6] P. P. Varaiya, "Smart cars on smart roads: Problems of control," IEEE Trans. Automat. Contr., vol. 38, pp. 195–207, 1993.
[7] T. Basar and P. Bernhard, $H_\infty$-Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach. Boston, MA: Birkhäuser, 1991.
[8] I. R. Petersen, B. D. O. Anderson, and E. A. Jonckheere, "A first principles solution to the nonsingular $H_\infty$ control problem," Int. J. Robust and Nonlinear Control, vol. 1, no. 3, pp. 171–185, 1991.
[9] P. P. Khargonekar, K. M. Nagpal, and K. R. Poolla, "$H_\infty$ control with transients," SIAM J. Control and Optimization, vol. 29, no. 6, pp. 1373–1393, 1991.
[10] M. R. James and J. S. Baras, "Robust $H_\infty$ output feedback control for nonlinear systems," IEEE Trans. Automat. Contr., vol. 40, pp. 1007–1017, 1995.
[11] A. V. Savkin, R. J. Evans, and I. R. Petersen, "A new approach to robust control of hybrid systems," in Hybrid Systems III: Verification and Control, Lecture Notes in Computer Science 1066, R. Alur, T. A. Henzinger, and E. D. Sontag, Eds. New York: Springer-Verlag, 1996, pp. 553–562.
I. INTRODUCTION
An event graph is a Petri net in which each place has exactly one
input arc and one output arc. If the tokens have to stay a minimum
amount of time in the places, we speak of a timed event graph (TEG).
TEGs are well suited to modeling synchronization.

In many systems, synchronization is essential. In manufacturing, in order
to start a task, a machine and a part must both be ready. In computer
science, in order to carry out a computation, we need both a processor and
the information.
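The synchronization just described ("start when all inputs are ready") is a maximum of availability times, which is what makes TEGs linear in the (max, +) algebra: the vector $x(k)$ of $k$-th firing times satisfies a recurrence $x(k) = A \otimes x(k-1)$. A toy sketch with hypothetical holding times (two events: a machine start and a part delivery):

```python
import numpy as np

NEG_INF = -np.inf  # the max-plus "zero": absence of a direct dependency

def maxplus_matvec(A, x):
    """Max-plus product: (A (x) x)_i = max_j (A[i, j] + x[j])."""
    return np.max(A + x, axis=1)

# Toy timed event graph: event 1 = machine starts, event 2 = part delivered.
# A[i, j] = holding time on the place from event j (cycle k-1) to event i (cycle k).
A = np.array([
    [3.0, 2.0],       # machine restarts 3 after its last start, 2 after a part arrives
    [NEG_INF, 4.0],   # parts are delivered every 4 time units
])

x = np.zeros(2)                 # x(0): both events fire at time 0
times = [x]
for _ in range(5):
    x = maxplus_matvec(A, x)    # x(k) = A (x) x(k-1): fire when all inputs are ready
    times.append(x)
print(times[-1])                # prints [18. 20.]
```

After a transient, both events settle into the 4-time-unit cycle imposed by the slower resource (the part deliveries), which is the kind of throughput analysis TEG models support.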
Manuscript received March 5, 1997. This work was supported in part by
the European Community Framework IV programme through the research
network ALAPEDES.