

IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 43, NO. 9, SEPTEMBER 1998

A New Approach to Robust Control of Hybrid Systems Over Infinite Time

Andrey V. Savkin and Robin J. Evans

Abstract—The paper presents a new approach to output feedback robust control synthesis problems for hybrid dynamical systems. The hybrid system under consideration is a composite of a continuous plant and a discrete-event controller. An output feedback $H^\infty$-control problem on an infinite time interval is considered. The main results are given in terms of the existence of suitable solutions to a dynamic programming equation and a Riccati algebraic equation of the $H^\infty$-filtering type. These results show a connection between the theories of hybrid dynamical systems and robust and nonlinear control.

Index Terms—Controller switching, $H^\infty$ control, hybrid systems, Riccati equations, robust control.

I. INTRODUCTION

Hybrid dynamical systems (HDS) have attracted considerable attention in recent years (see, e.g., [1]–[4]). In general, HDSs are those that consist of a logical discrete-event decision-making system interacting with a continuous-time process. A simple example is a climate control system in a typical home. The on/off nature of the thermostat is modeled as a discrete-event system, whereas the furnace and air-conditioner are modeled as continuous-time systems. Some other examples include transmissions and stepper motors [3], computer disk drives [1], robotic systems [4], higher-level flexible manufacturing systems [5], and intelligent vehicle/highway systems [6].

In this paper, we consider robust control problems for a new class of HDSs. The HDS under consideration consists of a linear continuous-time plant with control and disturbance inputs and a discrete-event controller. There are several theoretically interesting and practically significant problems concerning the use of switched controllers. In some situations it is possible to design several controllers and then switch between them to provide a performance improvement over a fixed controller. In other situations the choice of linear or nonlinear controllers available to the designer is limited, and the design task is to use the available set of controllers in an optimal fashion. This latter problem is the type we consider in this paper and includes, for example, the optimal switching between gears in a gearbox and the optimal switching between heating and cooling modes of operation in an air-conditioning plant. The controller is defined by a collection of given nonlinear output feedback controllers which are called basic controllers. Our control strategy is then a rule for switching from one basic controller to another. The control goal is to achieve a level of performance defined by an integral performance index. This integral performance index is similar to the requirement in standard $H^\infty$-control theory (see, e.g., [7]–[10]). We obtain a solution for an output feedback problem on an infinite time interval. The main result is given in terms of the existence of suitable solutions to a Riccati algebraic equation of the $H^\infty$-filtering type and a dynamic programming equation. If such solutions exist, then it is shown that they can be used to construct a corresponding controller. Riccati algebraic equations have been widely studied in control theory, and there exist reliable methods for obtaining solutions. The solution of dynamic programming equations has been the subject of much research in the field of optimal control theory. Furthermore, in [11] a method for obtaining numerical solutions has been proposed for dynamic programming equations of the type considered in this paper. Since dynamic programming equations and $H^\infty$-type Riccati equations are well known in the modern robust and nonlinear control theories (see, e.g., [7], [9], [10], [12]), this paper shows that these theories, when suitably modified, provide a framework for studying HDS.

Manuscript received August 22, 1995. This work was supported by the Australian Research Council.
A. V. Savkin is with the Department of Electrical and Electronic Engineering, University of Western Australia, Nedlands, Perth, WA 6009, Australia.
R. J. Evans is with the Department of Electrical and Electronic Engineering and Cooperative Research Center for Sensor Signal and Information Processing, University of Melbourne, Parkville, Victoria 3052, Australia.
Publisher Item Identifier S 0018-9286(98)05813-9.

II. PROBLEM STATEMENT

We consider the linear system defined on the infinite time interval $[0,\infty)$:

$$\dot{x}(t) = Ax(t) + B_1\xi(t) + B_2 u(t)$$
$$z(t) = Kx(t) + Gu(t)$$
$$y(t) = Cx(t) + v(t) \qquad (1)$$

where $x(t) \in \mathbf{R}^n$ is the state, $\xi(t) \in \mathbf{R}^p$ and $v(t) \in \mathbf{R}^l$ are the disturbance inputs, $u(t) \in \mathbf{R}^h$ is the control input, $z(t) \in \mathbf{R}^q$ is the controlled output, $y(t) \in \mathbf{R}^l$ is the measured output, and $A, B_1, B_2, K, G,$ and $C$ are given matrices.
Controlled Switching: Suppose we have a collection of given nonlinear output feedback controllers

$$u_1(t) = U_1(y(t)),\quad u_2(t) = U_2(y(t)),\; \ldots,\; u_k(t) = U_k(y(t)) \qquad (2)$$

where $U_1(\cdot), U_2(\cdot), \ldots, U_k(\cdot)$ are given continuous functions from $\mathbf{R}^l$ to $\mathbf{R}^h$ such that $U_1(0) = 0, U_2(0) = 0, \ldots, U_k(0) = 0$. The controllers in (2) are called basic controllers. We will consider the following class of output feedback controllers. Let $T > 0$ be a given time. Let $j \in \mathbf{N}$ and let $I_j(\cdot)$ be a function which maps from the set of output measurements $\{y(\cdot)|_0^{jT}\}$ to the set of symbols $\{1, 2, \ldots, k\}$. Here $\{y(\cdot)|_0^{jT}\}$ denotes the set of all possible measured outputs $y(t)$ on the interval $[0, jT]$. Then, for any sequence of such functions $\{I_j\}_{j=0}^{\infty}$ we will consider the following dynamic nonlinear output feedback controller:

$$u(t) = U_{i_j}(y(t)) \quad \forall j \in \{0, 1, 2, \ldots\},\; \forall t \in [jT, (j+1)T), \quad \text{where } i_j = I_j(y(\cdot)|_0^{jT}). \qquad (3)$$

As above, our control strategy is a rule for switching from one basic controller to another. Such a rule constructs a symbolic sequence $\{i_j\}_{j=0}^{\infty}$ from the output measurement $y(\cdot)$. The sequence $\{i_j\}_{j=0}^{\infty}$ is called a switching sequence.
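The controller class (3) can be pictured as a dispatcher that, once every $T$ seconds, consults the measurement history and latches one basic controller for the next interval. A minimal simulation sketch (the scalar plant, the two basic controllers, and the threshold rule $I_j$ below are hypothetical stand-ins, not taken from the paper):

```python
import numpy as np

# Two hypothetical basic controllers U_1, U_2 (continuous, U_i(0) = 0),
# standing in for the given set (2).
def U1(y):
    return -2.0 * y

def U2(y):
    return -0.5 * y**3

basic_controllers = [U1, U2]

def I_j(y_history):
    # Hypothetical switching rule I_j: pick the controller index from the
    # measurement history y(.)|_0^{jT}; here, a crude energy threshold.
    return 0 if np.mean(np.abs(y_history)) > 0.3 else 1

# Simulate the scalar plant dx/dt = a x + b u with output y = x,
# latching a controller on every interval [jT, (j+1)T) as in (3).
a, b, T, dt = -1.0, 1.0, 0.5, 0.01
x, y_hist, switches = 1.0, [], []
steps_per_T = int(T / dt)
for j in range(10):                       # intervals [jT, (j+1)T)
    y_hist.append(x)                      # sampled measurement history
    i_j = I_j(np.array(y_hist))           # i_j = I_j(y(.)|_0^{jT})
    switches.append(i_j)
    U = basic_controllers[i_j]            # controller held for the interval
    for _ in range(steps_per_T):
        u = U(x)                          # u(t) = U_{i_j}(y(t))
        x += dt * (a * x + b * u)         # forward Euler step

print(switches, x)
```

The printed switching sequence starts with the aggressive linear controller and hands over to the nonlinear one once the measurement history becomes small, illustrating how (3) turns a measurement record into a symbolic sequence.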
Notation: In this paper, $\|\cdot\|$ denotes the standard Euclidean norm. Also, for any $\tau \in (0, \infty]$, $L_2[0,\tau)$ denotes the Hilbert space of square integrable vector-valued functions defined on $[0,\tau)$.

Definition 2.1: Consider system (1). Let $Q = Q' > 0$ and $R = R' > 0$ be given matrices. Suppose that there exist constants $c > 0$ and $\epsilon > 0$ and a function $\tilde{V}(x_0)$ such that $\tilde{V}(0) = 0$, $\tilde{V}(x_0) \geq 0$ for all $x_0 \in \mathbf{R}^n$, and for any vector $x_0 \in \mathbf{R}^n$ there exists a controller of the form (2) and (3) such that the following conditions hold.

1) For any initial condition $x(0)$ and disturbance inputs $[\xi(\cdot), v(\cdot)] \in L_2[0,\infty)$ the closed-loop system (1)–(3) has a unique solution which is defined on $[0,\infty)$.
0018–9286/98$10.00 © 1998 IEEE


2) For any solution $x(t)$ to the closed-loop system (1)–(3) with $[\xi(\cdot), v(\cdot)] \equiv 0$, $x(t) \to 0$ as $t \to \infty$.
3) The inequality
$$\epsilon\|x(NT)\|^2 + \int_0^{NT} \left(\|z(t)\|^2 - \xi(t)'Q\xi(t) - v(t)'Rv(t)\right) dt \leq c\|x(0) - x_0\|^2 + \tilde{V}(x_0) \qquad (4)$$
holds for all $N \in \mathbf{N}$ and for all solutions to the closed-loop system with any disturbance inputs $[\xi(\cdot), v(\cdot)] \in L_2[0, NT]$.

Then, the output feedback robust control problem defined by the matrices $Q$ and $R$ is said to have a solution via controlled switching with the output feedback basic controllers (2).

Remark: The problem defined in Definition 2.1 is similar to the $H^\infty$-control problem with transients (see, e.g., [9]) with a special class of output feedback controllers.
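Given sampled closed-loop trajectories, inequality (4) can be checked numerically: integrate the left-hand integrand by quadrature and compare the two sides. A minimal sketch (the trajectory data, weights, and constants below are invented for illustration; trapezoidal quadrature is just one choice):

```python
import numpy as np

def performance_gap(t, z, xi, v, x, Q, R, eps, c, x0, V_tilde):
    """Left side minus right side of inequality (4) for one sampled
    trajectory; a nonpositive value means (4) holds for this sample."""
    # Integrand ||z||^2 - xi' Q xi - v' R v at each sample time.
    f = (np.sum(z**2, axis=1)
         - np.einsum('ti,ij,tj->t', xi, Q, xi)
         - np.einsum('ti,ij,tj->t', v, R, v))
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))  # trapezoid rule
    lhs = eps * np.sum(x[-1]**2) + integral
    rhs = c * np.sum((x[0] - x0)**2) + V_tilde
    return lhs - rhs

# Hypothetical sampled data for a 2-state example with scalar disturbances.
t = np.linspace(0.0, 5.0, 501)
x = np.exp(-t)[:, None] * np.array([1.0, 0.5])   # decaying state samples
z = x.copy()                                      # pretend z = x here
xi = np.zeros((501, 1))                           # zero disturbance xi
v = np.zeros((501, 1))                            # zero disturbance v
gap = performance_gap(t, z, xi, v, x,
                      Q=np.eye(1), R=np.eye(1),
                      eps=0.1, c=2.0, x0=np.zeros(2), V_tilde=0.0)
print(gap)
```

With zero disturbances the integral reduces to the output energy, and for this decaying trajectory the gap is comfortably negative, i.e., (4) is satisfied for the assumed constants.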
III. THE MAIN RESULT

Our solution to the above problem involves the following Riccati algebraic equation:
$$A\hat{P} + \hat{P}A' + \hat{P}[K'K - C'RC]\hat{P} + B_1 Q^{-1} B_1' = 0. \qquad (5)$$

Also, we consider a set of state equations of the form
$$\dot{\hat{x}}(t) = [A + \hat{P}[K'K - C'RC]]\hat{x}(t) + \hat{P}C'Ry(t) + [\hat{P}K'G + B_2]u(t). \qquad (6)$$
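One way to compute $\hat{P}$ numerically is to integrate the differential Riccati equation (10) forward from a positive initial condition until it settles, which mirrors the limiting construction $P(t) \to \hat{P}$ used later in the proof of Theorem 3.1 (the paper itself prescribes no algorithm). A sketch under assumed toy matrices, chosen so that $K'K - C'RC \leq 0$ (the easy, definite case), using fixed-step RK4 rather than a production stiff solver:

```python
import numpy as np

# Toy data (assumed for illustration only, not from the paper).
A  = np.array([[0.0, 1.0], [-2.0, -3.0]])
B1 = np.array([[0.0], [1.0]])
K  = np.array([[1.0, 0.0]])
C  = np.array([[1.0, 0.0]])
Q  = np.array([[1.0]])
R  = np.array([[4.0]])

S = K.T @ K - C.T @ R @ C            # quadratic-term weight K'K - C'RC
W = B1 @ np.linalg.inv(Q) @ B1.T

def riccati_rhs(P):
    # Right side of (10): AP + PA' + P S P + B1 Q^{-1} B1'.
    return A @ P + P @ A.T + P @ S @ P + W

# Integrate (10) from P(0) = cI until the derivative is negligible.
P = 5.0 * np.eye(2)                  # c = 5, assumed large enough
h = 1e-3
for _ in range(200000):
    k1 = riccati_rhs(P)
    k2 = riccati_rhs(P + 0.5 * h * k1)
    k3 = riccati_rhs(P + 0.5 * h * k2)
    k4 = riccati_rhs(P + h * k3)
    P = P + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    if np.linalg.norm(riccati_rhs(P)) < 1e-10:
        break

residual = np.linalg.norm(riccati_rhs(P))    # how well P satisfies (5)
D = A + P @ S                                 # must be a stable matrix
print(residual, np.linalg.eigvals(D).real)
```

On exit, `P` is positive definite, the algebraic residual of (5) is essentially zero, and the eigenvalues of `D` lie in the open left half-plane, so this `P` plays the role of the stabilizing solution $\hat{P}$ for the toy data.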

Notation: Let $L(\cdot)$ be a given function from $\mathbf{R}^n$ to $\mathbf{R}$ and let $\hat{x}_0 \in \mathbf{R}^n$ be a given vector. Introduce the following cost function:
$$W(\hat{x}(t), u(t), y(t)) \triangleq \|K\hat{x}(t) + Gu(t)\|^2 - (C\hat{x}(t) - y(t))'R(C\hat{x}(t) - y(t)). \qquad (7)$$
Then
$$F^i(\hat{x}_0, L(\cdot)) \triangleq \sup_{y(\cdot) \in L_2[0,T]} \left[ L(\hat{x}(T)) + \int_0^T W(\hat{x}(t), U_i(y(t)), y(t))\, dt \right] \qquad (8)$$
where the supremum is taken over all solutions to the system (6) with $y(\cdot) \in L_2[0,T]$, $u(t) \equiv U_i(y(t))$, and initial condition $\hat{x}(0) = \hat{x}_0$.
Now we are in a position to present the main result of this paper.

Theorem 3.1: Consider system (1) and the basic controllers (2). Let $Q = Q' > 0$ and $R = R' > 0$ be given matrices. Suppose that $G'G > 0$, $(A, B_1)$ is controllable, $(A, K)$ is observable, and $F^i(\cdot,\cdot)$ is defined by (8). Then, the following statements are equivalent.

1) The output feedback robust control problem defined by the matrices $Q$ and $R$ has a solution via controlled switching with output feedback basic controllers (2).
2) There exist a constant $\epsilon_0 > 0$ and a solution $\hat{P} > 0$ to the Riccati equation (5) such that the matrix $D \triangleq A + \hat{P}[K'K - C'RC]$ is stable and the dynamic programming equation
$$\hat{V}(\hat{x}_0) = \min_{i=1,\ldots,k} F^i(\hat{x}_0, \hat{V}(\cdot)) \qquad (9)$$
has a solution $\hat{V}(\hat{x}_0)$ such that $\hat{V}(0) = 0$ and $\hat{V}(\hat{x}_0) \geq \epsilon_0\|\hat{x}_0\|^2$ for all $\hat{x}_0 \in \mathbf{R}^n$.


Furthermore, suppose that condition 2) holds, let $i_j(\hat{x}_0)$ be an index such that the minimum in (9) is achieved for $i = i_j(\hat{x}_0)$, and let $\hat{x}(\cdot)$ be the solution to (6) with the initial condition $\hat{x}(0) = x_0$. Then the controller (2) and (3) associated with the switching sequence $\{i_j\}_{j=0}^{\infty}$, where $i_j = i_j(\hat{x}(jT))$, solves the output feedback robust control problem defined by the matrices $Q$ and $R$ with $c = \|\hat{P}^{-1}\|$, $\epsilon = \frac{1}{2}\min\{(1/\|\hat{P}^{-1}\|), \epsilon_0\}$, and $\tilde{V}(\cdot) \equiv \hat{V}(\cdot)$.
In order to prove this theorem, we will use the following lemma.

Lemma 3.1: Let $X_0 = X_0' > 0$ and $X_T = X_T' > 0$ be given matrices, $N \in \mathbf{N}$ be a given number, $x_0 \in \mathbf{R}^n$ be a given vector, $\tilde{V}(x_0)$ be a given constant, and $y_0(\cdot)$ and $u_0(\cdot)$ be given vector functions. Suppose that the solution $P(\cdot)$ to the Riccati equation
$$\dot{P}(t) = AP(t) + P(t)A' + P(t)[K'K - C'RC]P(t) + B_1 Q^{-1} B_1' \qquad (10)$$
with initial condition $P(0) = X_0^{-1}$ is defined and positive definite on the interval $[0, NT]$. Then, the condition

$$x(NT)'X_T x(NT) + \int_0^{NT} \left(\|z(t)\|^2 - \xi(t)'Q\xi(t) - v(t)'Rv(t)\right) dt \leq (x(0) - x_0)'X_0(x(0) - x_0) + \tilde{V}(x_0) \qquad (11)$$

holds for all solutions to the system (1) with $y(\cdot) = y_0(\cdot)$ and $u(\cdot) = u_0(\cdot)$ if and only if

$$\int_0^{NT} W(\hat{x}(t), u_0(t), y_0(t))\, dt \leq \tilde{V}(x_0) + (x_f - \hat{x}(NT))'P(NT)^{-1}(x_f - \hat{x}(NT)) - x_f' X_T x_f \qquad (12)$$

for all $x_f \in \mathbf{R}^n$, the cost function (7), and the solution to the equation

$$\dot{\hat{x}}(t) = [A + P(t)[K'K - C'RC]]\hat{x}(t) + P(t)C'Ry(t) + [P(t)K'G + B_2]u(t) \qquad (13)$$

with $u(\cdot) = u_0(\cdot)$, $y(\cdot) = y_0(\cdot)$, and initial condition $\hat{x}(0) = x_0$.
The proof of Lemma 3.1 is given in the Appendix.
Proof of Theorem 3.1 [1) $\Rightarrow$ 2)]: If condition 1) holds, then for any $\delta \in (0,1)$ the controller (2) and (3) corresponding to $x_0 = 0$ is a solution to the following output feedback finite time interval $H^\infty$-control problem:

$$J_\delta \triangleq \sup_{[\xi(\cdot), v(\cdot)] \in L_2[0,\tau]} \left[ \int_0^\tau \left((1-\delta)\|z(t)\|^2 - \xi(t)'Q\xi(t) - v(t)'Rv(t)\right) dt - c\|x(0)\|^2 \right] < \infty \quad \forall \tau \in (0,\infty) \qquad (14)$$

where $c > 0$ is a constant such that condition (4) holds. Now consider the disturbance input $v(\cdot) \equiv -Cx(\cdot)$. Then, we have $y(\cdot) \equiv 0$ and $u(\cdot) \equiv 0$. Therefore, from (14) we have

$$J_\delta^0 \triangleq \sup_{\xi(\cdot) \in L_2[0,\tau]} \left[ \int_0^\tau \left((1-\delta)\|Kx(t)\|^2 - x(t)'C'RCx(t) - \xi(t)'Q\xi(t)\right) dt - c\|x(0)\|^2 \right] < \infty \quad \forall \tau \in (0,\infty).$$

Hence, it follows from [12, Lemma 5] that the solution $P(\cdot)$ to the Riccati equation
$$\dot{P}(t) = AP(t) + P(t)A' + P(t)[(1-\delta)K'K - C'RC]P(t) + B_1 Q^{-1} B_1'$$
with initial condition $P(0) = cI$ is defined and positive definite on $[0,\infty)$. Since condition (11) with $X_0 = cI$ and $X_T = \epsilon I$ holds for any $N \in \mathbf{N}$ and $u(\cdot)$ and $y(\cdot)$ connected by the controller (2) and (3), Lemma 3.1 implies that (12) holds. From (12) we obtain that
$$P(NT)^{-1} \geq \epsilon I \quad \forall N \in \mathbf{N}. \qquad (15)$$

Furthermore, we can take $\delta \to 0$ and obtain, from the continuity of the solutions to the Riccati equations with respect to the parameters on a finite-time interval and condition (15), that there exists a constant $c_0 > 0$ such that for any $c > c_0$ the solution $P_c(\cdot)$ to the Riccati equation (10) with initial condition $P_c(0) = cI$ is defined and positive definite on $[0,\infty)$ and
$$P_c(NT)^{-1} \geq \epsilon I \quad \forall N \in \mathbf{N}.$$

Now let $P_\infty(\cdot)$ be the solution to (10) with initial condition $P_\infty(0) = 0$. Then, well-known properties of the Riccati equation (10) (see, e.g., [13]) imply that
$$P_\infty(t) \leq P_c(t) \quad \forall t \geq 0 \qquad (16)$$
and
$$P_\infty(t_1) \leq P_\infty(t_2) \quad \forall t_1 \leq t_2. \qquad (17)$$

Relations (16) and (17) imply existence of the limit $\hat{P} \triangleq \lim_{t\to\infty} P_\infty(t)$ with $P_\infty(t) > 0$ and $\hat{P}^{-1} \geq \epsilon I$. Now it is clear that $\hat{P}$ is a solution to (5). Also, since $P_\infty(t) \to \hat{P}$ as $t \to \infty$, we have that $\hat{P}$ is the minimum constant solution. Hence, $\hat{P}$ is a stabilizing solution, i.e., the matrix $A + \hat{P}[K'K - C'RC]$ is stable (see, e.g., [13]).

Now let $c > 0$ be a constant such that (4) holds and $cI \geq \hat{P}$. We have proved above that there exists a solution to (10) with initial condition $P(0) = cI$ and
$$P(t) \to \hat{P} \quad \text{as } t \to \infty. \qquad (18)$$

Let $\mathcal{U}$ be the class of all controllers of the form (2) and (3) and let $N \in \mathbf{N}$ be fixed. Introduce for any $j = 0, 1, \ldots, N$ the following function:
$$V_j^N(\hat{x}_0) \triangleq \inf_{u(\cdot) \in \mathcal{U}} \; \sup_{y(\cdot) \in L_2[jT, NT]} \left[ \int_{jT}^{NT} W(\hat{x}(t), u(t), y(t))\, dt + \|\hat{x}(NT)\|^2 \right] \qquad (19)$$
where the supremum is taken over all solutions to (13) with initial condition $\hat{x}(jT) = \hat{x}_0$. According to the theory of dynamic programming (see, e.g., [7] and [14]), $V_j^N(\cdot)$ satisfies the equations
$$V_N^N(\hat{x}_0) = \|\hat{x}_0\|^2, \qquad V_j^N(\hat{x}_0) = \min_{i=1,\ldots,k} F_j^i(\hat{x}_0, V_{j+1}^N(\cdot))$$
where
$$F_j^i(\hat{x}_0, L(\cdot)) \triangleq \sup_{y(\cdot) \in L_2[jT,(j+1)T)} \left[ L(\hat{x}((j+1)T)) + \int_{jT}^{(j+1)T} W(\hat{x}(t), U_i(y(t)), y(t))\, dt \right] \qquad (20)$$
and the supremum is taken over all solutions to the system (13) with $y(\cdot) \in L_2[jT,(j+1)T)$, $u(t) \equiv U_i(y(t))$, and initial condition $\hat{x}(jT) = \hat{x}_0$. Now we prove that there exists a function $Z(\cdot)$ such that $Z(0) = 0$ and
$$V_j^N(\hat{x}_0) \leq Z(\hat{x}_0) \quad \forall \hat{x}_0, N, j. \qquad (21)$$

Indeed, condition (12) of Lemma 3.1 together with the definition (19) imply that
$$V_0^N(\hat{x}_0) \leq \tilde{V}(\hat{x}_0) \quad \forall \hat{x}_0 \in \mathbf{R}^n, \; \forall N. \qquad (22)$$

It can be seen from (19) that
$$V_j^N(x_0) \leq V_0^N(x_0) + \inf_{u(\cdot) \in \mathcal{U}} \int_0^{jT} W(\hat{x}(t), u(t), y_0(t))\, dt \quad \forall y_0(\cdot) \in L_2[0, jT]. \qquad (23)$$

Now consider an input $y_0(t) = 0$ for $t \in [0, NT]$. Then, $u(t) = 0$ for $t \in [0, jT]$ for all $u(\cdot) \in \mathcal{U}$, and the inequality (23) implies
$$V_j^N(x_0) \leq V_0^N(x_0) + \int_0^{jT} \left(\|K\hat{x}(t)\|^2 - \hat{x}(t)'C'RC\hat{x}(t)\right) dt \qquad (24)$$
for the solutions to (13) with $u \equiv 0$ and $y \equiv 0$. Since $\hat{P}$ is a stabilizing solution, condition (18) implies that $P(\cdot)$ is a stabilizing solution to the Riccati equation (10). Hence, system (13) with $u \equiv 0$ and $y \equiv 0$ is exponentially stable. From this and the inequalities (22) and (24), we have that there exists a constant $d > 0$ such that condition (21) holds with $Z(x_0) = \tilde{V}(x_0) + d\|x_0\|^2$. Also, it follows from (19) that
$$V_j^N(x_0) \leq V_j^M(x_0) \quad \forall M \geq N. \qquad (25)$$

Hence, from (21) and (25) we have existence of the limit $V_j(\hat{x}_0) \triangleq \lim_{N\to\infty} V_j^N(\hat{x}_0) \leq Z(\hat{x}_0)$. Clearly, $V_j(\cdot)$ is a solution to the equation
$$V_j(\hat{x}_0) = \min_{i=1,\ldots,k} F_j^i(\hat{x}_0, V_{j+1}(\cdot)).$$

Now conditions (18) and (21) imply existence of the limit $\hat{V}(\hat{x}_0) \triangleq \lim_{j\to\infty} V_j(\hat{x}_0)$, and $\hat{V}(\cdot)$ is a solution to equation (9). Also, it follows from the above that $\hat{V}(\hat{x}_0) \geq \epsilon_0\|\hat{x}_0\|^2$ and $\hat{V}(0) = 0$. This completes the proof of this statement.

2) $\Rightarrow$ 1): Equation (9) implies that for the controller associated with the switching sequence $\{i_j\}_{j=0}^{\infty}$ defined by (9), we have
$$\int_0^{NT} W(\hat{x}(t), u(t), y(t))\, dt \leq \hat{V}(\hat{x}(0)) - \hat{V}(\hat{x}(NT)) \leq \hat{V}(\hat{x}(0)) - \epsilon_0\|\hat{x}(NT)\|^2.$$
Furthermore, Lemma 3.1 implies that (11) holds with $X_0 = \hat{P}^{-1}$, $\epsilon = \frac{1}{2}\min\{(1/\|\hat{P}^{-1}\|), \epsilon_0\}$, and $\tilde{V} \equiv \hat{V}$. Hence, condition (4) holds with $c = \|\hat{P}^{-1}\|$ and $\tilde{V} \equiv \hat{V}$. Conditions 1) and 2) of Definition 2.1 follow immediately from inequality (4), assumption $G'G > 0$, and observability of $(A, K)$. This completes the proof of the theorem.

Remarks:
1) It can be shown, using the methodology of [15], that robust stability of the closed-loop system (1)–(3) in the sense of Definition 2.1 implies input-to-state stability: for any initial condition $x(0)$ and disturbance inputs $[\xi(\cdot), v(\cdot)] \in L_\infty[0,\infty)$, the corresponding solution $x(\cdot)$ to the closed-loop system belongs to $L_\infty[0,\infty)$. Here $L_\infty[0,\infty)$ denotes the Banach space of measurable vector-valued functions defined and essentially bounded on $[0,\infty)$.
2) The disturbance rejection problem (4) is equivalent to a robust stabilization problem (see, e.g., [16]) where the underlying linear system is described by the state equations (1). However, in this case, $\xi(t)$ and $v(t)$ are the uncertainty inputs, and $z(t)$ is the uncertainty output. The uncertainty inputs $\xi(t)$ and $v(t)$ are required to satisfy a certain integral quadratic constraint. Then, Theorem 3.1 gives a solution for the corresponding problem of robust stabilization via output feedback switching control (2) and (3).
APPENDIX

Proof of Lemma 3.1: Given an input–output pair $[u_0(\cdot), y_0(\cdot)]$, condition (11) must hold for all vector functions $x(\cdot)$, $\xi(\cdot)$, and $v(\cdot)$ satisfying (1) with $u(\cdot) = u_0(\cdot)$ and such that
$$y_0(t) = Cx(t) + v(t) \quad \forall t \in [0, NT]. \qquad (26)$$
Then, substitution of (26) into (11) implies that (11) holds if and only if
$$J[x_f, \xi(\cdot)] \geq x_f' X_T x_f - \tilde{V}(x_0) \qquad (27)$$
for all $\xi(\cdot) \in L_2[0, NT]$ and $x_f \in \mathbf{R}^n$, where $J[x_f, \xi(\cdot)]$ is defined by
$$J[x_f, \xi(\cdot)] \triangleq (x(0) - x_0)'X_0(x(0) - x_0) + \int_0^{NT} \left[\xi(t)'Q\xi(t) - \|Kx(t) + Gu_0(t)\|^2 + (y_0(t) - Cx(t))'R(y_0(t) - Cx(t))\right] dt \qquad (28)$$
and $x(\cdot)$ is the solution to (1) with disturbance input $\xi(\cdot)$ and boundary condition $x(NT) = x_f$.

Now consider the following minimization problem:
$$\min_{\xi(\cdot) \in L_2[0,NT]} J[x_f, \xi(\cdot)] \qquad (29)$$
where the minimum is taken over all $x(\cdot)$ and $\xi(\cdot)$ related by (1) with the boundary condition $x(NT) = x_f$. This problem is a linear quadratic optimal tracking problem in which the system operates in reverse time.

We wish to convert the above tracking problem into a tracking problem of the form considered in [17] and [18]. In order to achieve this, first define $x_1(t)$ to be the solution to the state equations
$$\dot{x}_1(t) = Ax_1(t) + B_2 u_0(t), \qquad x_1(0) = 0. \qquad (30)$$
Now let $\tilde{x}(t) = x(t) - x_1(t)$. Then, it follows from (1) and (30) that $\tilde{x}(t)$ satisfies the state equations
$$\dot{\tilde{x}}(t) = A\tilde{x}(t) + B_1\xi(t) \qquad (31)$$
where $\tilde{x}(0) = x(0)$. Furthermore, the cost function (28) can be rewritten as
$$J[x_f, \xi(\cdot)] = \tilde{J}[\tilde{x}_f, \xi(\cdot)] \triangleq (\tilde{x}(0) - x_0)'X_0(\tilde{x}(0) - x_0) + \int_0^{NT} \Big[\xi(t)'Q\xi(t) - \|K[\tilde{x}(t) + x_1(t)] + Gu_0(t)\|^2 + (y_0(t) - C[\tilde{x}(t) + x_1(t)])'R(y_0(t) - C[\tilde{x}(t) + x_1(t)])\Big] dt \qquad (32)$$
where $\tilde{x}(NT) = \tilde{x}_f = x_f - x_1(NT)$. Equations (31) and (32) now define a tracking problem of the form considered in [17], where $y_0(\cdot)$, $u_0(\cdot)$, and $x_1(\cdot)$ are all treated as reference inputs. In fact, the only difference between this tracking problem and the tracking problem considered in the proof of the result of [18] is that in this paper we have a sign-indefinite quadratic cost function.

The solution to this tracking problem is well known (e.g., see [17]). Indeed, if the Riccati equation (10) has a positive-definite solution defined on $[0, NT]$ with initial condition $P(0) = X_0^{-1}$, then the minimum in
$$\min_{\xi(\cdot) \in L_2[0,NT]} \tilde{J}[\tilde{x}_f, \xi(\cdot)]$$
will be achieved for any $x_0$, $u_0(\cdot)$, and $y_0(\cdot)$. Furthermore, as in [18], we can write
$$\min_{\xi(\cdot) \in L_2[0,NT]} \tilde{J}[\tilde{x}_f, \xi(\cdot)] = (\tilde{x}_f - \hat{x}_1(NT))'P(NT)^{-1}(\tilde{x}_f - \hat{x}_1(NT)) - \int_0^{NT} W(x_1(t) + \hat{x}_1(t), u_0(t), y_0(t))\, dt \qquad (33)$$
where $\hat{x}_1(\cdot)$ is the solution to the state equations
$$\dot{\hat{x}}_1(t) = [A + P(t)[K'K - C'RC]][x_1(t) + \hat{x}_1(t)] + P(t)C'Ry_0(t) + [P(t)K'G + B_2]u_0(t)$$
with initial condition $\hat{x}_1(0) = x_0$. Now let $\hat{x}(\cdot) = x_1(\cdot) + \hat{x}_1(\cdot)$. Using the fact that $\tilde{x}_f = x_f - x_1(NT)$, it follows that (33) can be rewritten as
$$\min_{\xi(\cdot) \in L_2[0,NT]} J[x_f, \xi(\cdot)] = (x_f - \hat{x}(NT))'P(NT)^{-1}(x_f - \hat{x}(NT)) - \int_0^{NT} W(\hat{x}(t), u_0(t), y_0(t))\, dt$$
where $\hat{x}(\cdot)$ is the solution to the state equations (13) with initial condition $\hat{x}(0) = x_0$. From this we can conclude that condition (11) with a given input–output pair $[u_0(\cdot), y_0(\cdot)]$ is equivalent to the inequality (12).
REFERENCES

[1] A. Gollu and P. P. Varaiya, "Hybrid dynamical systems," in Proc. 28th IEEE Conf. Decision and Control, Tampa, FL, 1989.
[2] P. J. Antsaklis, J. A. Stiver, and M. Lemmon, "Hybrid systems modeling and autonomous control systems," in Hybrid Systems, R. L. Grossman, A. Nerode, A. P. Ravn, and H. Rishel, Eds. New York: Springer-Verlag, 1993.
[3] R. W. Brockett, "Hybrid models for motion control systems," in Essays in Control, H. L. Trentelman and J. C. Willems, Eds. Boston, MA: Birkhauser, 1993.
[4] A. Back, J. Guckenheimer, and M. Myers, "A dynamical simulation facility for hybrid systems," in Hybrid Systems, R. L. Grossman, A. Nerode, A. P. Ravn, and H. Rishel, Eds. New York: Springer-Verlag, 1993.
[5] S. B. Gershwin, "Hierarchical flow control: A framework for scheduling and planning discrete events in manufacturing systems," Proc. IEEE, vol. 77, no. 1, pp. 195–209, 1989.
[6] P. P. Varaiya, "Smart cars on smart roads: Problems of control," IEEE Trans. Automat. Contr., vol. 38, pp. 195–207, 1993.
[7] T. Basar and P. Bernhard, $H^\infty$-Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach. Boston, MA: Birkhauser, 1991.
[8] I. R. Petersen, B. D. O. Anderson, and E. A. Jonckheere, "A first principles solution to the nonsingular $H^\infty$ control problem," Int. J. Robust and Nonlinear Control, vol. 1, no. 3, pp. 171–185, 1991.
[9] P. P. Khargonekar, K. M. Nagpal, and K. R. Poolla, "$H^\infty$ control with transients," SIAM J. Control and Optimization, vol. 29, no. 6, pp. 1373–1393, 1991.
[10] M. R. James and J. S. Baras, "Robust $H^\infty$ output feedback control for nonlinear systems," IEEE Trans. Automat. Contr., vol. 40, pp. 1007–1017, 1995.
[11] A. V. Savkin, R. J. Evans, and I. R. Petersen, "A new approach to robust control of hybrid systems," in Hybrid Systems III: Verification and Control, Lecture Notes in Computer Science 1066, T. A. Henzinger, R. Alur, and E. D. Sontag, Eds. New York: Springer-Verlag, 1996, pp. 553–562.
[12] K. M. Nagpal and P. P. Khargonekar, "Filtering and smoothing in an $H^\infty$ setting," IEEE Trans. Automat. Contr., vol. 36, pp. 152–166, 1991.
[13] L. Xie and C. E. de Souza, "$H^\infty$ state estimation for linear periodic systems," IEEE Trans. Automat. Contr., vol. 38, pp. 1704–1707, Nov. 1993.
[14] J. S. Baras and M. R. James, "Robust output feedback control for discrete-time nonlinear systems: The finite time case," in Proc. 32nd IEEE Conf. Decision and Control, San Antonio, TX, Dec. 1993, pp. 51–55.
[15] E. Sontag and Y. Wang, "On characterizations of the input-to-state stability property," Syst. Contr. Lett., vol. 24, pp. 351–359, 1995.
[16] A. V. Savkin and I. R. Petersen, "A connection between $H^\infty$ control and the absolute stabilizability of uncertain systems," Syst. Contr. Lett., vol. 23, no. 3, pp. 197–203, 1994.
[17] F. L. Lewis, Optimal Control. New York: Wiley, 1986.
[18] D. P. Bertsekas and I. B. Rhodes, "Recursive state estimation for a set-membership description of uncertainty," IEEE Trans. Automat. Contr., vol. AC-16, pp. 117–128, Feb. 1971.

Timed-Event Graphs with Multipliers and Homogeneous Min-Plus Systems

G. Cohen, S. Gaubert, and J. P. Quadrat

Abstract—The authors study fluid analogues of a subclass of Petri nets, called fluid timed-event graphs with multipliers, which are a timed extension of weighted T-systems studied in the Petri net literature. These event graphs can be studied naturally with a new algebra, analogous to the min-plus algebra, but defined on piecewise linear concave increasing functions, endowed with the pointwise minimum as addition and the composition of functions as multiplication. A subclass of dynamical systems in this algebra, which have a property of homogeneity, can be reduced to standard min-plus linear systems after a change of counting units. The authors give a necessary and sufficient condition under which a fluid timed-event graph with multipliers can be reduced to a fluid timed-event graph without multipliers. In the fluid case, this class corresponds to the so-called expansible timed-event graphs with multipliers of Munier, or to conservative weighted T-systems. The change of variable is called here a potential. Its restriction to the transition nodes of the event graph is a T-semiflow.

Index Terms—Discrete event systems, dynamic programming, max-plus algebra, potentials, timed-event graphs, timed Petri nets, weighted T-systems.

I. INTRODUCTION
An event graph is a Petri net such that each place has only one
input arc and one output arc. If the tokens have to stay a minimum
amount of time in the places, we speak of a timed event graph (TEG).
These TEGs are well adapted for modeling synchronizations. In
many systems, synchronization is essential. In manufacturing, in order
to start a task, a machine and a part must both be ready. In computer
science, in order to achieve a computation, we need a processor and
the information.
Manuscript received March 5, 1997. This work was supported in part by
the European Community Framework IV programme through the research
network ALAPEDES.

G. Cohen is with the Centre Automatique et Systèmes, École des Mines de Paris, Fontainebleau, France.
S. Gaubert and J. P. Quadrat are with INRIA-Rocquencourt, B.P. 105, 78153 Le Chesnay Cedex, France.
Publisher Item Identifier S 0018-9286(98)05807-3.

Several units of the same resource may be required to achieve a specific task. Then, the corresponding event graph consumes or produces more than one token in adjacent places at each transition firing. The corresponding event graph is called a timed-event graph with multipliers (TEGM). To assemble a bicycle, two wheels, a frame, and a certain amount of manpower are needed. In a chemical process, a reaction producing a molecule consumes in general more than one atom of a given sort.
Synchronization is not specific to discrete systems, and we will
consider here fluid analogues of timed-event graphs with multipliers
(FTEGM) in which fluids circulate instead of tokens. For instance, in
chemical processes, synchronization (stoichiometry here) is essential
and the products used in a chemical reaction may be fluids.
We give some mathematical tools well suited to manipulating FTEGM. In particular, very briefly, we introduce a new kind of power series, extending that considered in [1], which allows us to express the input–output relations of FTEGM (in [7], a systematic classification of all the kinds of power series that may arise in Petri net modeling is presented). These power series are elements of a new noncommutative min-plus algebra: the set of piecewise linear concave functions, endowed with the pointwise minimum as addition and the composition of functions as multiplication. This is the mathematical cost to pay for dealing with multipliers. This cost is somewhat expensive, and it is natural to look for particular cases in which it can be avoided.
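The algebra just described can be prototyped directly: represent a piecewise linear concave increasing function as a finite set of affine pieces $f(x) = \min_i(b_i + a_i x)$ with slopes $a_i \geq 0$. Pointwise minimum then simply concatenates pieces, and composition expands into one affine piece per pair of pieces. A minimal sketch (the list-of-pairs representation and function names are mine, not from the paper):

```python
# A concave increasing piecewise linear function is stored as a list of
# affine pieces (a, b) meaning x |-> b + a*x, with f(x) = min over pieces.
# Slopes a >= 0 keep the functions increasing, so composition stays concave.

def evaluate(f, x):
    return min(b + a * x for (a, b) in f)

def oplus(f, g):
    # "Addition" of the algebra: pointwise minimum (concatenate pieces).
    return f + g

def otimes(f, g):
    # "Multiplication" of the algebra: composition f o g.
    # f(g(x)) = min_i (b_i + a_i * min_j (d_j + c_j x))
    #         = min_{i,j} (b_i + a_i*d_j + a_i*c_j * x)   since a_i >= 0.
    return [(a * c, b + a * d) for (a, b) in f for (c, d) in g]

f = [(1.0, 2.0), (0.5, 4.0)]   # f(x) = min(2 + x, 4 + 0.5x)
g = [(2.0, 0.0)]               # g(x) = 2x

h = otimes(f, g)               # f o g
print(evaluate(h, 3.0), evaluate(f, evaluate(g, 3.0)))
```

Evaluating `otimes(g, f)` at the same point gives a different value, which illustrates concretely why this algebra, unlike the ordinary min-plus algebra, is noncommutative.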
In [14], Munier introduced and studied an important subclass of TEGM that we will call conservative, in which the product of the multipliers along any circuit is equal to one. The main result of [14] reduces such a TEGM to a conventional TEG after a duplication of transitions. In the context of Petri nets, where only the logical aspect is considered, this subclass is known as a conservative weighted T-system [17], [13].
When the multipliers derive from a potential (a vector indexed by both places and transitions), the dynamics of an FTEGM can be reduced to classical min-plus linear recurrent equations by a diagonal change of variables given by the potential. The existence of a potential is equivalent to the property pointed out by Munier. The restriction of a potential to transitions is called a semiflow in the Petri net literature [17]. In the example of the bicycle, counting pairs of wheels instead of wheels is quite natural. It is important to remark that this change of variables, called linearization, is a nonlinear operation in the min-plus algebra. However, with this way of counting, the dynamics become linear.
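After such a change of counting units, the dynamics take the standard min-plus linear form $x(k+1) = A \otimes x(k)$, where $(A \otimes v)_i = \min_j (A_{ij} + v_j)$, and the asymptotic firing-time increments give the throughput. A toy sketch of such a recurrence (the matrix below is invented; the bicycle net itself is not modeled here):

```python
def minplus_matvec(A, v):
    # (A (x) v)_i = min_j (A[i][j] + v[j]): a min-plus linear map.
    n = len(v)
    return [min(A[i][j] + v[j] for j in range(n)) for i in range(n)]

# Hypothetical 2-transition dater recurrence x(k+1) = A (x) x(k):
# x_i(k) = earliest time of the k-th firing of transition i.
A = [[1.0, 3.0],
     [2.0, 1.5]]

x = [0.0, 0.0]
times = [x]
for _ in range(20):
    x = minplus_matvec(A, x)
    times.append(x)

# After a transient, the increments x(k+1) - x(k) settle to the min-plus
# eigenvalue of A (the minimal circuit mean), i.e., the periodic regime.
incr = [times[-1][i] - times[-2][i] for i in range(2)]
print(incr)
```

For this matrix the minimal circuit mean is 1 (the self-loop on the first transition), so both increments settle to 1.0: one firing per time unit in the eventual periodic regime.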
As a by-product of the linearization, the existence of an eventual periodic regime is readily obtained, the performance being characterized in terms of invariants of the original net. We also show that linearizable FTEGMs are characterized by an input–output homogeneity property, which is essentially a conservation law between input and output quantities.

The fluid case, considered here, is much simpler than the discrete case considered by Munier. The linearization procedure does not increase the number of transitions of the system, while the expansion procedure of [14] results in a blow-up of the number of transitions. The fact that a fluid approximation is considered in the case of discrete systems may have an impact on the liveness of the Petri net. For instance, if a single wheel is available, in reality the production is blocked, but the fluid model gives a production rate of half a bicycle per unit of time. This liveness issue is solved by Munier [14] (see also [4] for nets with a single circuit). In this paper, we show that the discrete

