
Co-Simulation of Algebraically Coupled Dynamic Subsystems Without Disclosure of Proprietary Subsystem Models

Bei Gu
Student Member ASME

H. Harry Asada
Fellow ASME

d'Arbeloff Laboratory for Information Systems and Technology, Department of Mechanical Engineering, Massachusetts Institute of Technology

A method for simultaneously running a collection of dynamic simulators coupled by algebraic boundary conditions is presented. Dynamic interactions between subsystems are simulated without disclosing proprietary information about the subsystem models, as all the computations are performed based on input-output numerical data of encapsulated subsystem simulators coded by independent groups. First, this paper describes a system of interacting subsystems with a causal conflict as a high-index Differential-Algebraic Equation (DAE), and develops a systematic solution method using Discrete-Time Sliding Mode control. Stability and convergence conditions as well as error bounds are analyzed by using nonlinear control theory. Second, the algorithm is modified such that a subsystem simulator does not have to disclose its internal model and state variables for solving the overall DAE. The new algorithm is developed based on the generalized Kirchhoff laws, which allow us to represent algebraic boundary constraints as linear equations in the outputs of the interacting subsystems. Third, a multi-rate algorithm is developed for improving efficiency, accuracy, and convergence characteristics. Numerical examples verify the major theoretical results and illustrate features of the proposed method. [DOI: 10.1115/1.1648307]

1 Introduction

The role of simulators is not only to facilitate engineering analysis and design but also to provide a powerful means with which engineers communicate with each other. Engineers now communicate by exchanging simulators representing the whole behavior of the components and systems that they have developed [1]. Two examples illustrate the new role and utility of simulators: In the automobile industry, car makers request that suppliers provide simulators of supply parts, and evaluate them by connecting the supply part simulators to the engine simulator and body simulator of the automobile. In turn, car makers provide the suppliers with simulators depicting the conditions of the automobile system so that the suppliers can develop the right parts to meet the specifications. Today's manufacturer can communicate with thousands of suppliers through a supply chain management system over the Internet. In the air conditioner industry, former competitors are now forming alliances to use common components and integrate their products. Simulators representing detailed behavior of individual units are exchanged to streamline communications among engineers of alliance partners, thus allowing them to complete thorough engineering analysis and product development in a limited time. The role of simulation as a tool of engineering communication has the potential to grow as more vendors, suppliers, and alliance partners are integrated over the global network. Simulation technology is a vital communication tool in the era of alliances and partnerships as well as supply chain management. Undoubtedly, renewed features and functionality are now needed to keep up with this expected growth.

A critical problem in exchanging simulators is how to combine various simulators coded by different groups. For instance, automobile makers must integrate numerous simulators that have been developed and coded separately by diverse suppliers. The software aspect of this problem has been a focal point in the research community, and many software environments that encapsulate individual simulators and make them portable have been developed. CORBA, for example, has been used in the DOME (Distributed, Object-oriented, Modeling Environment) project and others [2]. The modeling aspect of this problem is still unresolved: The problem of simulating dynamic interactions among physical systems is more complicated due to the bi-directional nature of physical interactions. One subsystem affecting another subsystem is also influenced by the counteraction from the other. These interactions often create conflicting boundary conditions that constrain both sides of the interacting sub-simulators. Furthermore, these constraints cannot be resolved algebraically, especially in cases where nonlinear elements are involved. As a result, the total simulation model becomes a set of Differential-Algebraic Equations (DAEs). Although DAE models are generally difficult to compute, several DAE solution codes, such as DASSL, RADAU5, etc., have been developed [3,4]. The code DASSL has been widely used for solving DAEs [3]. Object-oriented modeling languages, e.g., Dymola, Omola, Modelica, etc., have adopted these DAE solvers to deal with large coupled dynamic systems [5-8]. These DAE solvers and object-oriented modeling languages, however, do not meet the functional requirements for the type of simulation environment that would allow alliance partners and suppliers to communicate by exchanging simulators: The simulators developed by suppliers and alliance partners are often proprietary, and hence the actual model and detailed structure of the simulator cannot be disclosed to others. In the air conditioner industry, models of key components, e.g., compressors, are the most important intellectual properties. Car makers must be able to test the performance of supply parts in diverse conditions, and the suppliers must provide enough information to convince the car maker, but the complete supply part models and model structures cannot be disclosed in many situations. It is therefore essential that multiple simulators, although algebraically coupled and forming a DAE, can be co-run without disclosure of the individual models and structures. The current DAE solvers, however, need to know each subsystem's dynamic equations in order to set up the solver, and hence the model cannot be kept confidential. The current DAE solvers do not apply to a collection of independently coded simulators. Simulators coded by different parties must be substantially changed or even completely recoded in order to use those DAE solvers. Therefore, the current DAE solvers are inappropriate for exchanging and combining existing subsystem simulators.

In an attempt to allow different parties to communicate by exchanging subsystem simulators without disclosing proprietary information, this paper presents a new simulation environment, termed the Co-Simulation environment. The Co-Simulation Environment is a software environment for simultaneously running a collection of subsystem simulators without revealing the subsystems' dynamic models. Subsystem simulators may be dynamically coupled with each other through boundary conditions represented as algebraic constraints. Communications with individual subsystem simulators are limited to input-output inquiries. Each subsystem simulator is required to respond to specific input data, but it does not have to disclose how the response is computed. Each subsystem is treated as a complete black box, and the internal state and model are kept confidential to the other subsystems and to the Co-Simulation Environment. The objective of this paper is to develop a computational algorithm to enable Co-Simulation. To this goal, we will first obtain a new DAE solver based on discrete-time sliding mode control, and then modify the algorithm so that the internal model is not needed for solving the DAE. The problem formulation, assumptions, and background information that are necessary before deriving the computational algorithm are presented in the following section.

Journal of Dynamic Systems, Measurement, and Control. Copyright © 2004 by ASME.

Contributed by the Dynamic Systems, Measurement, and Control Division of THE AMERICAN SOCIETY OF MECHANICAL ENGINEERS for publication in the ASME JOURNAL OF DYNAMIC SYSTEMS, MEASUREMENT, AND CONTROL. Manuscript received by the ASME Dynamic Systems and Control Division, March 2002; final revision, July 2003. Associate Editor: Y. Hurmuzlu.
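The input-output-only contract described above can be sketched as a minimal simulator interface. This is our illustrative Python, not code from the paper; the class and method names (`SubsystemSimulator`, `step`, `output`) and the vendor model inside are hypothetical:

```python
from abc import ABC, abstractmethod

class SubsystemSimulator(ABC):
    """Input-output contract of an encapsulated sub-simulator: the
    Co-Simulation Environment may only submit inputs and read outputs;
    the state vector and model equations stay hidden in vendor code."""

    @abstractmethod
    def step(self, u: float, dt: float) -> None:
        """Advance the hidden internal state one step under input u."""

    @abstractmethod
    def output(self) -> float:
        """Return the current output y."""

class VendorUnit(SubsystemSimulator):
    # A vendor's proprietary model (here an assumed first-order lag
    # x' = (u - x)/tau; the time constant and state are private details).
    def __init__(self, tau: float = 0.5, x0: float = 0.0):
        self._tau, self._x = tau, x0

    def step(self, u: float, dt: float) -> None:
        self._x += dt * (u - self._x) / self._tau

    def output(self) -> float:
        return self._x

sim = VendorUnit()
for _ in range(1000):          # input-output inquiries only
    sim.step(1.0, 0.01)
print(round(sim.output(), 3))  # 1.0
```

The caller learns nothing but the response values, which is exactly the disclosure level the Co-Simulation Environment assumes.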

2 Formulating Co-Simulation Problems

2.1 Combining Multiple Simulators That Dynamically Interact. Consider a nonlinear dynamic system consisting of two interacting subsystems A and B. The subsystems are represented by simulators A and B, which are coupled as shown in Fig. 1. Each subsystem simulator receives input u and produces output y while updating its internal state x. The objective is to simultaneously run the two simulators in order to examine their coupled behavior. In some instances, the two subsystems interact in such a way that the sole effect of B on A is manifested in that all or part of subsystem A's input u_A is explicitly determined by subsystem B's output y_B, and vice versa. In this case, the coupled dynamics can be simply simulated by feeding y_B to u_A and y_A to u_B. Each sub-simulator can be viewed as a black box, and the coupled simulation can be performed without knowing the internal state variables and the associated dynamic model of the subsystem.

Fig. 1 Interacting dynamic simulators with no causal conflict

The situation, however, is different when the two subsystems have a causal conflict in the input-output relationship. Consider the refrigeration cycle illustrated in Fig. 2. The system consists of a compressor, a condenser, two sets of a series combination of evaporator and expansion valve, and an accumulator, along with pipes connecting those components. Refrigerant is first pressurized at the compressor, liquefied in the condenser, and branched out to the two indoor units installed in different rooms. The outlets of the evaporators are merged, and the refrigerant is collected at the accumulator. Interactions occur at this merging point, where the mass flow rates from the two evaporators, u_A and u_B, sum to the mass flow rate in the pipe reaching the accumulator, ṁ:

u_A + u_B = ṁ   (1)

Fig. 2 Refrigeration cycle: an example of causal conflict

Let z be an independent variable, called a boundary variable, introduced to rewrite the above condition as:

u_A = z,  u_B = ṁ − z   (2)

When the two evaporators are placed in adjacent rooms connected by a short pipe, there is no tangible pressure difference between the two evaporators' outlets, denoted y_A and y_B. Namely,

y_A = y_B   (3)

In describing the dynamics of each evaporator, the outlet mass flow rate appears as a subset of the inputs and the outlet pressure appears as a subset of the outputs, and their input-output relationship cannot be reversed [9]. Therefore, a conflict occurs when combining the two evaporators, as shown in the figure. Both subsystems provide outputs that must be the same, while the inputs must conform to Eq. (2). This type of problem is often encountered when combining multiple subsystems. Subsystem inputs and outputs must conform to certain algebraic constraints, hence the total coupled system is described by DAEs. Two robot arms connected at the endpoints and a four-bar-linkage system are classical examples of DAEs. In general, energetically interacting dynamic subsystems can be described by a collection of dynamic state equations of the individual subsystems and a collection of algebraic equations representing boundary conditions constraining the subsystems:

ẋ = f(x, u, t)   (4)
0 = g(x, u, t)   (5)

where x ∈ R^n is the overall state vector comprising the subsystem state variables, e.g., x_A and x_B, and u ∈ R^m is the overall input vector comprising u_A, u_B, etc. It should be noted that the input vector u comprises two kinds of inputs: one is boundary variables associated with interactions among the subsystems, and the other is exogenous inputs from disturbance sources or control actions generated in the subsystems. When simulation is performed, the latter type of inputs are given as functions of state variables or prescribed time functions. Therefore, the functions on the right-hand side of Eqs. (4) and (5) are functions of x, z, and t. More than two subsystems may interact with each other, and the interactions may include both causally conflicting and non-conflicting interactions. Thus, Eqs. (4) and (5) represent a general system with diverse interactive structures and boundary conditions. In the following sections, however, we will consider the case that involves only two conflicting subsystems subject to one algebraic constraint with a single boundary variable, z. The pair of indoor heat exchangers described above is an example of such interacting subsystems. Extensions to more than two coupled subsystems, and to a mixture of conflicting and non-conflicting subsystems, will be discussed after the main theoretical results are obtained, as their description requires more complex notation. Finally, we assume that the interacting subsystems satisfy generalized Kirchhoff's laws. As will be shown later in Section 7, the algebraic constraint of such physical subsystems is given as a linear equation in terms of the outputs produced by the individual subsystems. Therefore, for two conflicting subsystems, A and B, it is given by:

0 = b_0 + b_A y_A + b_B y_B   (6)
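As a minimal illustration of Eqs. (1)-(3) and (6), the boundary constraint reduces to plain coefficient data, so a coordinator can evaluate it from output values alone. This is our sketch; the coefficient and output values are illustrative assumptions:

```python
# Eq. (6): the boundary constraint is just coefficients (b0, bA, bB),
# so the residual can be checked from subsystem outputs alone.

def constraint_residual(b0, bA, bB, yA, yB):
    # Returns g = b0 + bA*yA + bB*yB; the constraint holds when g == 0.
    return b0 + bA * yA + bB * yB

def boundary_inputs(z, mdot):
    # Eq. (2): parameterize the flow split by the boundary variable z,
    # so Eq. (1), uA + uB = mdot, holds for any choice of z.
    return z, mdot - z

# Pressure equality of Eq. (3), yA - yB = 0, as b0 = 0, bA = 1, bB = -1:
g = constraint_residual(0.0, 1.0, -1.0, yA=101.3, yB=101.3)
uA, uB = boundary_inputs(0.3, 1.0)
print(g, round(uA + uB, 9))  # 0.0 1.0
```

Note that no model structure is encoded here: only the linear coefficients of Eq. (6) and the numerical outputs cross the subsystem boundary.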

We will exploit this linear property of algebraic constraints in developing a co-simulation algorithm. We summarize the assumptions made as follows:

1. Only one algebraic constraint and one boundary variable are involved in each pair of conflicting subsystems;
2. The algebraic constraint is a linear function of the output variables, and is sufficiently differentiable;
3. The DAE system given by Eq. (4) subject to Eq. (5), and with a consistent set of initial conditions (i.e., initial conditions that satisfy the algebraic constraint), has a well-defined solution for x and z;
4. The DAE system has a finite index r (explained below), and the index does not change in a region of interest D = {(x, t, z) | x ∈ R^n, t ∈ R, z ∈ R};
5. Subsystems are stable, and the time rate of change of each state variable is bounded.

With regard to the last assumption, we further assume that each subsystem simulator can be run stably during co-simulation.

2.2 DAE Index and Sliding Mode Control. If the algebraic constraint equation explicitly includes the boundary variable z, and the constraint equation is solvable for z, it is a straightforward problem to satisfy the constraint. In general, however, the algebraic constraint does not explicitly include z, as in the case of the two robot arms and the refrigeration cycle. The constraint equation is then described as:

0 = g(x, t)   (7)

To relate the boundary variable z to the state variables x, it is necessary to differentiate the algebraic constraint along the state equation (4) as many times as necessary until the boundary variable z shows up. This differentiation process gives rise to the notion of the DAE index r. The minimum number of times that the constraint equation (5) must be differentiated with respect to time in order to solve for ż as a continuous function of t, x, and z is the index of the DAEs (4) and (5) [3]. Assuming that the DAE has an index r, by definition of the DAE index, the following set of algebraic equations must be satisfied by any exact solution of the DAE:

0 = g(x, t)
0 = dg(x, t)/dt
⋮
0 = d^{r−1} g(x, t)/dt^{r−1}   (8)

DAE systems with index 2 or higher, for which all the derivative constraints in Eq. (8) must be satisfied along with the original algebraic constraint, are referred to as high-index DAEs. Although high-index DAE systems are difficult to solve in general, they can be reduced to an index-one DAE by using a compound constraint equation comprising a weighted sum of the original constraint and its derivatives along the state equation (4):

s(x, z, t) = (d/dt + 1/τ)^{r−1} g = 0   (9)

where τ > 0. This new constraint equation requires only one differentiation to solve for ż. To our knowledge, this index reduction method was first found in [10]. There are also other ways of reducing the DAE index [11]. The above constraint equation can be viewed as a sliding manifold in nonlinear control theory. Moreover, a control algorithm to drive the system to the constraint surface, i.e., the sliding manifold, can be imported from the theory of sliding mode control. The adoption of this theory provides a rigorous proof of convergence and a systematic way of confining the state within a tunable distance from the sliding manifold [9,12]. Equation (9) represents a critically damped dynamic system with (r−1) modes having the same time constant τ. If the sliding manifold equation is identically satisfied, all modes of the sliding manifold subspace decay to zero asymptotically with time constant τ. The derivatives of the original algebraic constraint, as well as the weighted sum of the constraint and the derivatives, will remain zero for all time if the original algebraic constraint Eq. (7) is satisfied at the initial time. Thus the original high-index algebraic constraint can be replaced by the reduced-index algebraic constraint Eq. (9). The direct application of continuous sliding mode control to the DAE solver, however, incurs difficulties in dealing with numerical computation. Instability due to discretization, chattering, and stiff dynamics are major roadblocks for the Singularly Perturbed Sliding Manifold (SPSM) approach [9]. A discrete formulation is appropriate for addressing these issues [13]. In this paper the problem will be reformulated as a discrete-time sliding mode (DTSM) control, and a stable, completely numerical solution method will be developed. Furthermore, the algorithm will be extended such that subsystem simulators do not have to disclose their proprietary internal models.

3 Discrete-Time Sliding Mode Control

3.1 Discrete-Time DAE. In converting the continuous-time expressions into discrete time, the independent variable kΔt is written as t_k, and the dependent variables evaluated at time kΔt are written with a subscript k: For example, the state variable x(kΔt) is written as x_k, and the boundary variable z(kΔt + Δt) is written as z_{k+1}. Also, all functions evaluated at (x_k, t_k, z_k) are written with subscript k. For example, the sliding variable s defined in Eq. (9) is written simply as s_k when evaluated at (x_k, t_k, z_k), unless it is necessary to manifest x, t, and z. Use of this notation yields the following discrete-time high-index DAE system:

x_{k+1} = x_k + Δt f̂(x_k, z_k, t_k)   (10)
0 = g(x_{k+1}, t_{k+1})   (11)

where f̂ is an effective derivative corresponding to any explicit numerical integration method. If Euler's forward method is used, the effective derivative will be just f(x, z, t). We will use f instead of f̂ in the rest of the paper for simplicity. Note that the algebraic constraint Eq. (11) must be satisfied at the new state (x_{k+1}, t_{k+1}) as well as at the current state before transition. Note also that the algebraic constraint does not explicitly depend on the boundary variable z, since we are interested in high-index problems. Using Eq. (9), we define a sliding manifold to replace the original algebraic constraint Eq. (11). The algebraically coupled dynamic system with reduced index in the numerical simulation form is given by:

x_{k+1} = x_k + Δt f(x_k, z_k, t_k)   (12)
s_{k+1} = 0   (13)

Note that the sliding variable is defined by Eq. (9) in continuous time but is evaluated at (x_{k+1}, t_{k+1}, z_{k+1}) in the above expression.
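The effect of replacing Eq. (11) by the sliding-manifold condition of Eq. (13) can be seen in a toy computation. This is our sketch, not from the paper, with an assumed index r = 2 and illustrative values of Δt and τ:

```python
# Toy sketch: for an assumed index r = 2, Eq. (9) replaces the
# constraint 0 = g(x, t) with the sliding variable
#     s = (d/dt + 1/tau) g = dg/dt + g/tau.
# Enforcing s_{k+1} = 0 at every sample, as in Eq. (13), makes an
# initial constraint violation decay with the tunable time constant
# tau, instead of demanding g = 0 exactly at all times.

dt, tau = 1e-3, 0.05      # illustrative step size and time constant
g = 1.0                   # initial constraint violation g(0)
history = [g]
for k in range(250):      # simulate 0.25 s
    # s_{k+1} = (g_{k+1} - g_k)/dt + g_k/tau = 0, solved for g_{k+1}:
    g = g * (1.0 - dt / tau)
    history.append(g)
print(g < 0.01, all(b < a for a, b in zip(history, history[1:])))  # True True
```

The violation shrinks monotonically toward zero at the rate set by τ, which is the "tunable distance" behavior the DTSM controller of the next subsection enforces on the coupled system.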

For this discrete-time system, the dynamics are meaningful only at each sampling point, and the constraint equation needs to be satisfied only at the sampling points as well. We want to design a discrete-time controller that enforces the reduced constraint equation, Eq. (13), by driving the boundary variable z:

z_{k+1} = z_k + γ_k   (14)

where γ_k is the control input to be designed based upon the output variables y_k of the subsystem simulators that are connected through the algebraic constraint. In the above expression, the boundary variable z_k is treated as a state variable of the control system that coordinates a pair of conflicting sub-simulators. This coordinating system, called the Boundary Condition Coordinator (BCC), converts the original DAE into an approximate ODE by replacing incompatible boundary conditions among sub-simulators with compatible ones with tunable dynamics. This is a type of realization approach to solving DAEs. For a continuous-time system, discontinuous control is required to make the dynamic system exhibit a sliding mode, i.e., for the trajectories to perpetually point to the sliding manifold, s = 0. However, discontinuous control is not required for a discrete-time dynamic system to exhibit a sliding mode [14,15]. Our goal is to design a continuous control law for the above discrete-time DAE system given by Eqs. (12) and (13) so that the system may stay within a tolerable distance of the sliding manifold.

3.2 Control Law. From the assumptions, the sliding variable s is sufficiently differentiable with respect to x, t, and z around the point s = 0. Using Euler's forward method again, s_{k+1} can be expanded as follows:

s_{k+1} = s_k + Jx_k (x_{k+1} − x_k) + Jt_k Δt + Jz_k (z_{k+1} − z_k) + R_k   (15)

where

Jx_k = ∂s/∂x (x_k, t_k, z_k),  Jt_k = ∂s/∂t (x_k, t_k, z_k),  Jz_k = ∂s/∂z (x_k, t_k, z_k)   (16)

and R_k is the second-order remnant term. If we let w = [x^T, t, z]^T, then R_k is given by:

R_k = (1/2) Δw_k^T (∂²s/∂w²)|_{w_k + ξ Δw_k} Δw_k   (17)

where 0 < ξ < 1, according to the Mean Value Theorem. Substituting Eqs. (12) and (14) into Eq. (15) yields

s_{k+1} = s_k + δ_k + Jz_k γ_k + R_k   (18)

where

δ_k = Jx_k f(x_k, z_k, t_k) Δt + Jt_k Δt   (19)

Terms δ_k and R_k can be interpreted as external influences or disturbances applied to the control system regulating the sliding variable s. The term Jz_k γ_k is where the control γ_k can be applied to influence s_{k+1}. Note that Jz_k is non-zero and invertible, since the existence of a well-defined index r is assumed and hence the (r−1)st derivative of the function g contains a non-zero term in z. The objective of our control is to drive the sliding variable s_{k+1} to zero and keep it null for all time steps thereafter. In Eq. (18), set s_{k+1} to zero and solve for γ_k, the input that drives the next time step's sliding variable to zero. Unfortunately, Eq. (18) includes the term R_k, which is unknown in general. Therefore, we ignore the R_k term for the time being, and we consider the following nominal control:

O_k = −Jz_k^{−1} (s_k + δ_k)   (20)

Fig. 3 Conceptual profiles of trajectories using discrete-time sliding mode control

Figure 3 conceptually illustrates the possible trajectories of the sliding variable s within the extended state space (x, z), when driven by the above nominal control. The goal is to push the trajectory towards the s = 0 curve, ultimately letting it slide on the s = 0 curve once the trajectory enters the vicinity of the curve. In other words, the goal is for the system to exhibit a sliding mode; the transition of the state x is to be confined to an invariant set in the vicinity of the s = 0 curve. Due to the unaccounted effect of the R_k term, the trajectory does not exactly reach the curve s = 0. The resulting error is proportional to the magnitude of R_k. We intend to reduce this error by devising a more suitable control law, thus guaranteeing that the trajectory stays within an invariant set satisfying acceptable error bounds. To make this happen, the following requirements must be met: It must be guaranteed that the trajectory stably converges to the invariant set. The nominal control obtained above is a type of Newton-Raphson algorithm, which is often unstable for highly nonlinear systems. Typically, the control magnitude must be limited to ensure stability. From Eq. (17), the remnant term is apparently a quadratic form in Δx, Δt, and Δz, and it tends to zero as Δt and Δz approach zero:

lim_{Δt→0, Δz→0} R_k = 0   (21)

Note that Δx goes to zero as Δt approaches zero, since the time rate of change of the state variables, Δx/Δt, is assumed to be bounded, as stated in Section 2.1. To reduce R, the time step Δt must be reduced, and the control input γ_k = z_{k+1} − z_k = Δz_k must be bounded as well. On the other hand, the controller must be able to generate a control input γ_k that is large enough to overcome disturbances acting on the system, thus keeping the system within the invariant set. The exogenous input δ_k acting on the system tends to deviate the sliding variable. Furthermore, the uncertainty due to the unaccounted remnant R tends to deviate the sliding variable. We must guarantee that the system is kept within the invariant set despite these disturbances and uncertainty. To meet the first two requirements, we limit the control input using a saturation function:

γ_k = γ_0 sat(O_k / γ_0)   (22)

where the magnitude of the control input is limited to the threshold value γ_0. Note that the threshold value must be chosen to be large enough to meet the third requirement. We can derive the condition for the threshold value from the worst-case scenario. Considering the largest disturbances and uncertainty within the region of interest D, we will examine the following condition for the threshold value:

γ_0 ≥ sup_D |Jz^{−1}| ( sup_D |δ(Δt)| + sup_D |R(Δt, γ_0)| ) + ε   (23)

where ε is a small positive constant, and Jz^{−1}, δ(Δt), and R(Δt, γ_0) are assumed to be upper bounded. If there exist a threshold γ_0 and a sampling interval Δt, along with bounds for Jz^{−1}, δ(Δt), and R(Δt, γ_0), satisfying the condition given by Eq. (23), a convergence proof can be given for the control law described by Eqs. (20) and (22).

3.3 Proof.

a. The Invariant Set. We want to show that the closed-loop system with the above control law possesses an invariant set defined by:

|s| ≤ sup_D |R|   (24)

in the vicinity of the s = 0 curve. For s_k belonging to this set, taking the absolute value of the nominal input given by Eq. (20) and comparing it with the threshold γ_0 yields

|O_k| ≤ sup_D |Jz^{−1}| ( sup_D |R| + sup_D |δ| ) ≤ γ_0   (25)

Since this means that the nominal input for a trajectory starting from the region satisfying Eq. (24) is bounded by γ_0, the actual control input γ_k is given by the nominal input:

γ_k = γ_0 sat(O_k / γ_0) = O_k   (26)

This input brings the next-step sliding variable to:

s_{k+1} = s_k + δ_k − Jz_k Jz_k^{−1} (s_k + δ_k) + R_k = R_k   (27)

Therefore, the next-step sliding variable remains in |s_{k+1}| ≤ sup_D |R|; this shows that Eq. (24) defines an invariant set. Next, we will show that the invariant set is attractive.

b. Proof of Convergence. To prove that the invariant set around s = 0 is attractive, define a candidate Lyapunov function:

V(t_k, s_k) = V_k = |s_k|   (28)

This function V is positive definite and decrescent, since there exist positive constants a ≤ b such that a|s_k| ≤ V_k ≤ b|s_k|. Consider the case that the sliding variable s_k is outside the invariant set, and the nominal input is larger than the threshold γ_0:

|O_k| > γ_0 ≥ sup_D |Jz^{−1}| ( sup_D |δ(Δt)| + sup_D |R(Δt, γ_0)| ) + ε   (29)

The control input γ_k is given by γ_0 sign(O_k) in accordance with Eqs. (20) and (22). With this input, the next-step sliding variable is moved to:

s_{k+1} = s_k + δ_k + Jz_k γ_0 sign(O_k) + R_k = (s_k + δ_k)(1 − γ_0/|O_k|) + R_k   (30)

The forward difference of the Lyapunov function candidate, ΔV_k = V_{k+1} − V_k, is therefore obtained as:

ΔV_k = |s_{k+1}| − |s_k| = |(s_k + δ_k)(1 − γ_0/|O_k|) + R_k| − |s_k|   (31)

The forward difference ΔV_k can then be bounded by:

ΔV_k ≤ |s_k + δ_k| (1 − γ_0/|O_k|) + |R_k| − |s_k| ≤ |δ_k| + |R_k| − γ_0 |Jz_k|   (32)

From Eq. (29), this implies:

ΔV_k ≤ |δ_k| + |R_k| − γ_0 |Jz_k| ≤ −ε |Jz_k| < 0   (33)

This shows that the absolute value of the sliding variable keeps decreasing as long as the nominal input is larger than the threshold γ_0. Starting with a finite initial value, the magnitude of the sliding variable s_k becomes smaller than sup_D |R| in a finite number of steps for a finite ε. When this happens, the nominal input becomes:

|O_k| = |Jz_k^{−1} (s_k + δ_k)| ≤ sup_D |Jz^{−1}| ( sup_D |R| + sup_D |δ| ) ≤ γ_0   (34)

Therefore, the input γ_k is the same as the nominal input O_k, which brings the sliding variable to

s_{k+1} = s_k + δ_k − Jz_k Jz_k^{−1} (s_k + δ_k) + R_k = R_k   (35)

Namely, the sliding variable is within the invariant set: |s_{k+1}| ≤ sup_D |R|. When the trajectory starts from a point outside the invariant set but the nominal input is smaller than the threshold, the nominal input is given to the system without going through the saturation function and, as shown in Eq. (35), the invariant set is reached in a single step. This concludes the proof of convergence of the trajectory into the invariant set.

4 Co-Simulation With Minimum Information Disclosure

The DTSM method described in the previous section assumes full knowledge of the interacting subsystems. Subsystem simulators have to supply state equations and output equations in order to build the DTSM-based Boundary Condition Coordinator (BCC) prior to the start of the simulation. This requires the vendors of the subsystem components to disclose all the key elements of the model, information that is often proprietary in nature. The goal of this section is to modify the DTSM algorithm so that Co-Simulation of coupled subsystems may be performed without disclosure of the sub-simulator models. Required information from individual sub-simulators should be limited to input-output numerical data instead of symbolic or mathematical expressions of the entire model. In other words, the Boundary Condition Coordinator should request each sub-simulator to simply return output values for the inputs specified by the BCC. In the DTSM algorithm, the BCC needs to compute the following nominal control:

O_k = −Jz_k^{−1} (s_k + δ_k)   (20)
δ_k = Jx_k f(x_k, z_k, t_k) Δt + Jt_k Δt   (19)

The above expressions include state variables, and calculating them requires explicit knowledge of the state equations of the neighboring subsystems and the algebraic constraints representing their interaction. The objective of this section is to encapsulate all the computations involving state variables and state equations within the subsystem simulator, and to let the BCC perform coordination control based solely on outputs from the subsystem simulators, so that the BCC does not explicitly need the internal model and the state variables of the subsystems. There are three terms involved in the equivalent control given by Eq. (20): s_k, Jz_k^{−1}, and δ_k. The following two methods allow us to replace these terms by ones computed from the outputs and their derivatives alone.

(1) Computation of Sliding Variable s_k, Jacobian Jz_k, and DAE Index r. As shown in Eq. (8), the sliding variable comprises the derivatives of the constraint equation g(x, t). However, as shown in Eq. (6), the algebraic constraint is a linear combination of the output variables of the subsystem simulators. Namely,


s = (d/dt + 1/τ)^{r−1} g = (1/τ^{r−1}) b_0 + b_A (d/dt + 1/τ)^{r−1} y_A + b_B (d/dt + 1/τ)^{r−1} y_B   (36)

Therefore, the sliding variable s can be computed simply from the outputs of the subsystems and their time derivatives up to the (r−1)st order. The actual model of each subsystem is not needed for computing s. According to the definition, the Jacobian Jz is the partial derivative of the sliding function with respect to the boundary variable z. This is a seemingly complex function comprising many derivative terms of the constraint equations. However, examining the sliding variable definition, Eq. (9), and the definition of the DAE index reveals that only the (r−1)st derivative g^{(r−1)} is a function of the boundary input z. In other words, according to the definition of the index, the boundary variable z appears for the first time in the (r−1)st derivative. The other components of the sliding function are only functions of the state x and time t, and thus do not contribute to the partial derivative with respect to z. Therefore, in order to numerically evaluate the Jacobian Jz, the subsystem simulators only need to supply the numerical values of y^{(r−1)} computed at two values of the boundary variable z:

Jz ≈ [ b_A ( y_A^{(r−1)}(z_1) − y_A^{(r−1)}(z_2) ) + b_B ( y_B^{(r−1)}(z_1) − y_B^{(r−1)}(z_2) ) ] / (z_1 − z_2)   (37)

where y_A and y_B are the outputs of subsystems A and B, respectively. This approach may result in additional computation and communication for the subsystem simulator, but no additional information about the subsystem needs to be disclosed, which satisfies our requirements. Prior to the above computations, the DAE index r must be determined, and the output derivatives up to the (r−1)st order must be made available to the co-simulator. The DAE index r is determined in the following manner for subsystems A and B coupled by a linear algebraic constraint g. Let q_A be the number of times that the output y_A must be differentiated in order to relate it to the boundary variable z, and let q_B be that of subsystem B. This derivative number is analogous to the relative order of a nonlinear control system. The DAE index r is given by the minimum of the two relative orders: r = min(q_A, q_B) + 1, because the boundary variable z appears in the derivative of the constraint function g once the derivative of either output, y_A or y_B, begins to contain the boundary variable z in it. Taking one more derivative of the constraint function g yields an expression that can be solved for ż. Note that the relative orders q_A and q_B are properties of the individual subsystems, and therefore they do not vary even though the subsystems may be connected to different partners. The DAE index r, on the other hand, varies depending on which subsystems are connected to each other. However, no matter which subsystem is connected to a given subsystem with a relative order of q, the DAE index is no larger than q + 1. Therefore, the simulator of each subsystem needs to have only q derivatives of its output. In developing a co-simulation environment, each subsystem simulator should have as many output derivatives as its relative order.

(2) Eliminating the Computation of the δ_k Term. The term δ_k represents the influence of x and t upon the sliding variable s when they vary from time-step k to k+1 while z is kept constant. Therefore,

s(x_{k+1}, t_{k+1}, z_k) ≅ s_k + δ_k   (38)

where s(x_{k+1}, t_{k+1}, z_k) represents the sliding variable evaluated at (x_{k+1}, t_{k+1}, z_k). Combining Eq. (38) with the original Taylor expansion in Eq. (18) yields

s_{k+1} = s(x_{k+1}, t_{k+1}, z_k) + Jz_k γ_k + R̃_k   (39)

where R̃_k represents the second-order remnant given by:

R̃_k = (1/2) (∂²s/∂z²)|_{z_k + ξ γ_k} γ_k²   (40)

Use of Eq. (39) eliminates the need for computing δ_k, since the following nominal control that brings s_{k+1} to zero in Eq. (39) does not contain δ_k:

O_k = −Jz_k^{−1} s(x_{k+1}, t_{k+1}, z_k)   (41)

Note that this nominal control comprises Jz_k and s(x_{k+1}, t_{k+1}, z_k), both of which can be computed based on the subsystem simulator outputs and their relevant derivatives by using the following procedure:

A. Let the subsystem simulators compute x_{k+1} for the z_k given by the BCC; Eq. (12).
B. Let the subsystem simulators compute the outputs y_{k+1}^{(0)}, y_{k+1}^{(1)}, ..., y_{k+1}^{(r−2)}, y_{k+1}^{(r−1)}, and y^{(r−1)}(x_{k+1}, t_{k+1}, z_k), and return them to the BCC.
C. Based on Step B, let the BCC compute s(x_{k+1}, t_{k+1}, z_k) by using Eq. (36) and Jz_k by Eq. (37).

To stabilize the BCC, we need to limit the nominal control by using the following actual control with a saturation function:

γ_k = γ_0 sat( −Jz_k^{−1} s(x_{k+1}, t_{k+1}, z_k) / γ_0 )   (42)
where y A and y B are the outputs of subsystems A and B, respectively. This approach may result in additional computation and communication for the subsystem simulator, but no additional information about the subsystem need be disclosed, which satises our requirements. Prior to the above computations the DAE index r must be determined and the output derivatives up to the (r 1) st order must be made available for co-simulator. DAE index r is determined in the following manner for subsystems, A and B, coupled by a linear algebraic constraint, g. Let q A be the number of times that the output y A must be differentiated in order to relate it to the boundary variable z, while q B be that of subsystem B. This derivative number is analogous to the relative order of a nonlinear control system. The DAE index r is given by the minimum of the two relative orders: r min(qA ,qB) 1, because boundary variable z appears in the derivative of constraint function g, once the derivative of either output, y A or y B , begins to contain boundary variable z in it. Taking one more derivative of the constraint function g yields an expression that can be solved for z . Note that relative order q A and q B are properties of the individual subsystems and therefore they do not vary although they are connected to different subsystems. The DAE index r, on the other hand, varies depending on which subsystems are connected to each other. However, no matter which subsystem is connected to one subsystem with a relative order of q, the DAE index is no larger than q 1. Therefore, the simulator of each subsystem needs to have only q derivatives of its output. In developing co-simulation environment, each subsystem simulator should have as many output derivatives as its relative order. 2 Eliminating the Computation of the k Term. The term represents the inuence of x and t upon sliding variable s when they vary from time-step k to k 1 while z is kept constant. Therefore, s xk
1 ,t k 1 ,z k

This new control law requires a new threshold value to be determined, and a thorough proof is needed once again to guarantee convergence and stability. We will show the proof after making one more improvement, Multi-Rate computation, in the following section.
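As an illustration, the finite-difference Jacobian of Eq. (37) and the saturated update of Eqs. (41)–(42) can be sketched in a few lines. The toy output derivatives, the constraint coefficients b_A and b_B, and all numerical values below are hypothetical stand-ins, not the paper's example; the point is that the BCC touches only subsystem outputs, never subsystem internals.

```python
# Sketch of the BCC computing Jz by finite differences (Eq. (37)) and the
# saturated DTSM update (Eqs. (41)-(42)), using only subsystem *outputs*.
# Only y^(r-1) evaluated at two values of z is requested from each
# simulator; its internal model and state stay private.

def sat(u):
    """Unit saturation: u for |u| <= 1, else sign(u)."""
    return max(-1.0, min(1.0, u))

def dtsm_step(s_value, y_r1_A, y_r1_B, z, dz, lam, r, bA, bB, eta0):
    """One BCC update of the boundary variable z.

    s_value        : sliding variable s(x_{k+1}, t_{k+1}, z_k)
    y_r1_A, y_r1_B : callables returning y^(r-1) of each subsystem at the
                     current (frozen) state for a given z
    bA, bB         : coefficients of the linear constraint g = bA*yA + bB*yB
    """
    # Eq. (37): Jz from y^(r-1) sampled at z and z + dz
    Jz = lam ** (r - 1) * (
        bA * (y_r1_A(z + dz) - y_r1_A(z))
        + bB * (y_r1_B(z + dz) - y_r1_B(z))
    ) / dz
    dO = -s_value / Jz                 # Eq. (41): nominal control
    return z + eta0 * sat(dO / eta0)   # Eq. (42): saturated update

# Hypothetical index-2 pair: y_A^(1) depends linearly on z, y_B^(1) does not.
z_new = dtsm_step(
    s_value=-0.4,
    y_r1_A=lambda z: 2.0 * z - 1.0,
    y_r1_B=lambda z: 0.5,
    z=0.0, dz=1e-6, lam=0.1, r=2, bA=1.0, bB=-1.0, eta0=1.0,
)
```

Here the nominal correction exceeds the threshold, so z moves by the full limit η₀, mirroring the saturated regime analyzed in the convergence proof.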

5 Multi-Rate DTSM

5.1 Error Dynamics. It has been shown in Section 3 that the DTSM controller can stably bring the sliding variable s to zero within an error bounded by sup_D |R̄|. As the sliding variable converges, the errors in the algebraic constraint, g(x,t), and its relevant derivatives converge as well. The dynamic relationship between the sliding variable, s(x,z,t), and the constraint function, g(x,t), is governed by Eq. (9) and is illustrated by the block diagram in Fig. 4. The sliding variable s is filtered by (r−1) consecutive low-pass filters having (r−1) repeated poles at −1/λ, resulting in the error in the constraint equation, g(x,t), as shown in the figure. It is clear from this block diagram that the magnitude of the output of each block does not exceed the magnitude of the input to that block, propagated from the original input s. Therefore the following proposition holds:

Proposition. If the sliding variable is bounded, |s| ≤ sup_D |R̄|, then the error in constraint g also has the same upper bound: |g| ≤ sup_D |R̄|.

Fig. 4 Dynamic relationship between sliding variable s and constraint function g
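The Proposition can be checked numerically: pushing any signal bounded by S_max through (r−1) cascaded first-order lags with pole −1/λ, as in the block diagram of Fig. 4, never amplifies it. A minimal sketch, with an arbitrary illustrative input signal and parameter values:

```python
# Discrete check of the error-dynamics bound: if |s| <= S_max, then g,
# obtained from s through (r-1) first-order low-pass filters with pole
# -1/lam (Fig. 4), satisfies |g| <= S_max as well.
import math

lam, dt, r = 0.2, 1e-3, 3           # illustrative values only
S_max = 0.7
n_steps = 20000

g_stages = [0.0] * (r - 1)          # states of the (r-1) cascaded filters
worst = 0.0
for k in range(n_steps):
    # any bounded stand-in for the sliding variable, |s| <= S_max
    s = S_max * math.sin(0.37 * k * dt) * math.cos(5.0 * k * dt)
    u = s
    for i in range(r - 1):
        # forward-Euler discretization of lam * x' + x = u
        g_stages[i] += dt / lam * (u - g_stages[i])
        u = g_stages[i]
    worst = max(worst, abs(u))      # u is now the constraint error g
```

Because dt/λ < 1, each stage update is a convex combination of its state and input, so the bound propagates stage by stage, which is exactly the content of the Proposition.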

6 Vol. 126, MARCH 2004

Transactions of the ASME

The smaller the parameter λ, the faster g(x,t) converges. A small λ is therefore desirable to some extent, but making λ too small may incur an ill-conditioned problem. Examine Jz, the sensitivity of s to z:

    Jz = λ^(r−1) Σ_j b_j ∂y_j^(r−1)/∂z    (43)

Remember that, by the definition of the index r, the sensitivity Jz depends only on the (r−1)st-order derivative of the output y, and is proportional to λ^(r−1). Therefore, as λ decreases, Jz decreases (r−1) times faster than λ. As a result, the nominal control ΔO_k and the lower bound on the threshold η₀, both of which include the reciprocal of Jz, tend to increase rapidly, creating a large error bound, sup_D |R̄|. To alleviate this problem of diminishing sensitivity Jz leading to an increasing error bound sup_D |R̄|, the computation must be performed with a reduced time step Δt; otherwise the error bound increases and the system may become unstable. Use of a small time step Δt is costly, especially when the dimension of x is large. Such a small time step, however, is not needed for the computation of the individual subsystems. It is not efficient to reduce the step size for the entire system merely for the stability and error bound of the sliding controller. Although the smaller step size will improve accuracy in general, the extra accuracy gained from the subsystem simulators for computation of the state, x, will not be significant. Thus it is of great interest to decouple the overall step size of the Co-Simulation from the stability requirement of the sliding mode controller. Namely, we can compute the BCC using a time step smaller than the integration step size of the subsystem simulators. This leads to the algorithm of multi-rate simulation.

5.2 The Multi-Rate Control Law. To implement the multi-rate simulation, we divide one step of the BCC computation into n small steps. Let the integration of the subsystem simulators be in the time scale k, and the BCC computation in the time scale i, which is n times faster than the time scale k. Figure 5 shows how the multi-rate computation proceeds in the extended state space. The horizontal axis represents the aggregated state variables and time, x and t, while the vertical axis represents the boundary variable z. In the Single-Rate method, computation proceeds from (x_k, t_k, z_k) to (x_{k+1}, t_{k+1}, z_{k+1}) in one cycle, where the sliding variable is kept within the invariant set at both points in the x-t-z plane. In the Multi-Rate DTSM, the first step is to shift to point (x_{k+1}, t_{k+1}, z_k), following the procedure for computing s(x_{k+1}, t_{k+1}, z_k) described in Section 4.2. The second step is to move vertically towards s(x_{k+1}, t_{k+1}, z_{k+1}). This step is now divided into n small steps z_{k+1/n}, z_{k+2/n}, . . . , z_{k+n/n}, where subscript i/n means the i-th step of the n fast-rate computations. Modifying the previous control law, Eq. (41), in accordance with this subdivision yields the new nominal control:

    ΔO(x_{k+1}, t_{k+1}, z_{k+i/n}) = −Jz(x_{k+1}, t_{k+1}, z_{k+i/n})⁻¹ s(x_{k+1}, t_{k+1}, z_{k+i/n})    (44)

and the control input with a saturation function:

    Δz_{k+i/n} = η₀ sat( ΔO(x_{k+1}, t_{k+1}, z_{k+i/n}) / η₀ )    (45)

The threshold value η₀ must be determined to guarantee convergence to the invariant set. In finding the new threshold, an important condition for proving convergence is to guarantee:

    |s(x_{k+1}, t_{k+1}, z_{k+1})| < |s(x_k, t_k, z_k)|    (46)

In each small step i/n we can guarantee:

    |s(x_{k+1}, t_{k+1}, z_{k+(i+1)/n})| < |s(x_{k+1}, t_{k+1}, z_{k+i/n})|    (47)

Fig. 5 Multi-Rate DTSM

in the same way as in the previous proof for the Single-Rate DTSM. However, this is not sufficient to satisfy Eq. (46), since the sliding mode control in the fast time scale starts from s(x_{k+1}, t_{k+1}, z_k) and not from s(x_k, t_k, z_k), as illustrated in Fig. 5. Therefore, by defining

    ε̄_k = s(x_{k+1}, t_{k+1}, z_k) − s(x_k, t_k, z_k)    (48)

we must satisfy

    |s(x_{k+1}, t_{k+1}, z_{k+1})| ≤ |s(x_{k+1}, t_{k+1}, z_k)| − |ε̄_k|    (49)

through the n small-step computations. To this end, consider the following threshold value:

    n η₀ ≥ sup_D |Jz⁻¹| ( sup_D |ε̄| + sup_D |R̄(η₀²)| )    (50)

Comparison of Eq. (50) with Eq. (23) shows that the upper bound of ε_k is now replaced by the ε̄ term. In the previous threshold given by Eq. (23), both ε and R̄ decrease as the time step Δt gets smaller. Likewise, the ε̄ term and R̄ become smaller as the number of small time steps, n, increases. In other words, increasing n brings about the same effect as decreasing Δt. Therefore, updating the discrete-time sliding control at a faster rate than the subsystem simulators can guarantee convergence to the sliding manifold given by Eq. (24). The multi-rate approach simply runs the Boundary Condition Coordinator at a faster rate without changing the step size of the subsystem simulators. This is one of the major advantages over the single-rate approach.

The above argument on multi-rate computation is related to that of stiff problems. A number of effective methods have been developed for solving stiff problems in the DAE community [16]. One critical difference between those existing methods and our multi-rate DTSM is that ours does not need the internal states and models of the interacting subsystems, while the existing methods require explicit information about them.

5.3 Convergence Conditions for the Multi-Rate DTSM. Replacing the single-rate control law in Section 3 by the one given in Eqs. (44) and (45), we can show that sup_D |R̄(η₀²)| bounds an invariant set:

    |s| ≤ sup_D |R̄(η₀²)|    (51)
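The fast-rate loop of Eqs. (44) and (45) amounts to n saturated corrections of z with the subsystem state frozen. A minimal sketch, in which the sliding function is a hypothetical closed-form stand-in for the value the BCC would assemble from subsystem outputs; the deliberately small sensitivity Jz is the regime that motivates the multi-rate scheme:

```python
# Sketch of the fast-rate BCC loop of Eqs. (44)-(45): n saturated z-updates
# per subsystem step, with the subsystem state x_{k+1} frozen.

def sat(u):
    return max(-1.0, min(1.0, u))

def multirate_update(s_of_z, Jz_of_z, z, n, eta0):
    """Apply n fast-rate saturated corrections to the boundary variable z."""
    for _ in range(n):
        nominal = -s_of_z(z) / Jz_of_z(z)      # Eq. (44)
        z = z + eta0 * sat(nominal / eta0)     # Eq. (45)
    return z

# Hypothetical sliding function, linear in z, with small sensitivity
# Jz = 0.05; its zero (the sliding manifold) sits at z = 4.
s_fn = lambda z: 0.05 * (z - 4.0)
z_end = multirate_update(s_fn, lambda z: 0.05, z=0.0, n=8, eta0=1.0)
```

With η₀ = 1, a single-rate update could move z by at most 1 per subsystem step; the n fast sub-steps reach the manifold within one slow step, which is the n·η₀ effect on the left-hand side of Eq. (50).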


Convergence proof can be made in the same way as for the single-rate DTSM. When the nominal control is larger than the threshold value,

    |ΔO(x_{k+1}, t_{k+1}, z_{k+i/n})| = |Jz⁻¹ s(x_{k+1}, t_{k+1}, z_{k+i/n})| > η₀    (52)

the control input is:

    Δz_{k+i/n} = η₀ ΔO(x_{k+1}, t_{k+1}, z_{k+i/n}) / |ΔO(x_{k+1}, t_{k+1}, z_{k+i/n})|    (53)

The sliding variable s at step (x_{k+1}, t_{k+1}, z_{k+(i+1)/n}) is then brought to:

    s(x_{k+1}, t_{k+1}, z_{k+(i+1)/n}) = s(x_{k+1}, t_{k+1}, z_{k+i/n}) − η₀ s(x_{k+1}, t_{k+1}, z_{k+i/n}) / |ΔO(x_{k+1}, t_{k+1}, z_{k+i/n})| + R̄(Δz_{k+i/n}²)    (54)
Fig. 6 Time responses of sliding variable s for different values of λ

Taking absolute values and considering the initial condition, we have:

    |s(x_{k+1}, t_{k+1}, z_{k+(i+1)/n})| ≤ |s(x_{k+1}, t_{k+1}, z_{k+i/n})| ( 1 − η₀ / |ΔO(x_{k+1}, t_{k+1}, z_{k+i/n})| ) + |R̄(Δz_{k+i/n}²)|    (55)

Substituting the equivalent control and the threshold value again, we arrive at:

    |s(x_{k+1}, t_{k+1}, z_{k+(i+1)/n})| ≤ |s(x_{k+1}, t_{k+1}, z_{k+i/n})| − η₀ |Jz(x_{k+1}, t_{k+1}, z_{k+i/n})| + |R̄(Δz_{k+i/n}²)|    (56)

Therefore the sliding variable s(x_{k+1}, t_{k+1}, z_{k+i/n}) keeps decreasing monotonically as long as the nominal control is larger than η₀. When the nominal control is smaller than the threshold value, the sliding variable enters the invariant set in one step:

    |s(x_{k+1}, t_{k+1}, z_{k+(i+1)/n})| = |R̄(Δz_{k+i/n}²)| ≤ sup_D |R̄(η₀²)|    (57)

The above equations show that the invariant set, Eq. (51), is attractive and that the convergence rate is larger than sup_D |ε̄|/n per step in the fast time scale. Note that a threshold η₀ satisfying Eq. (50) exists for a sufficiently large n if sup_D |Jz⁻¹| and sup_D |ε̄| involved in the right-hand side of the inequality are finite in domain D and the second-order partial derivative ∂²s/∂z² in the remnant R̄_k is bounded within domain D. Since R̄_k is proportional to the square of η₀, as given by Eq. (40), inequality (50) is always satisfied by a small positive η₀ and a large n. By using such a value of η₀, the BCC can bring the trajectory into the invariant set in at most n steps.

6 Numerical Examples and Implementation

6.1 Numerical Examples. The major theoretical results on convergence obtained in the previous sections will be verified using numerical examples. Consider the following subsystems subject to an algebraic constraint.

Subsystem A:

    ẋ_A,1 = x_A,2
    ẋ_A,2 = −x_A,1 − x_A,2 + x_A,3    (58)
    ẋ_A,3 = f_ct(z) − x_A,3 − x_A,1 − x_A,2

where f_ct(z) is a nonlinear function of z to be defined later.

Subsystem B:

    ẋ_B,1 = x_B,2
    ẋ_B,2 = −x_B,1 − 2x_B,2 + x_B,3 + z    (59)
    ẋ_B,3 = −x_B,3 − x_B,1 − x_B,2 + z

Algebraic constraint:

    g(x,t) = x_A,1 − x_B,1 = 0    (60)

This is an index-3 DAE. The subsystems are simulated using Euler's forward method with Δt = 0.1, and the algebraic constraint is enforced by the Multi-Rate DTSM, Eqs. (44) and (45). Although this example is rather simple, instability occurs for parameter values that do not satisfy the convergence conditions. The following three cases are typical failure scenarios that we often encounter in selecting the parameter values.

a. A Small λ Incurs Instability. As discussed in Section 5, the error in the algebraic constraint g decreases more quickly as parameter λ becomes smaller. However, the sensitivity of s with respect to z, i.e., Jz, also becomes smaller as λ is reduced. As a result, the threshold η₀ must be large enough to compensate for the effects of x and t on s; otherwise instability is incurred. Figure 6 illustrates this phenomenon. In Fig. 6, tan(z) was used for the nonlinear function f_ct(z), and the initial values of x_A,1, x_A,2, and x_A,3 were set to 1, while x_B,1, x_B,2, and x_B,3 started from −1. Setting the initial value of z to 1 yields an inconsistent initial condition: s ≠ 0. First the Single-Rate DTSM, i.e., n = 1, was used for the computation in Fig. 6. Note that, as λ was reduced from 0.5 to 0.1, the sliding variable s diverged even for the relatively large threshold value η₀ = 1, since the stability condition, Eq. (23), was not satisfied for the small λ. The Multi-Rate DTSM solves this problem. Repeating the z computation n times yields an effect equivalent to increasing the threshold value n times: n·η₀. Figure 6 shows that n = 10 provides a stable result even for λ = 0.1. We can thus use a small η₀ that still satisfies the stability condition, Eq. (50). A smaller λ yields a smaller error in the algebraic constraint g, as shown in Fig. 7. Figure 8 shows the behavior of the boundary variable z in these three cases.

b. An Excessively Large Input Incurs Instability. The convergence condition requires the existence of

Fig. 7 Time responses of algebraic constraint g for different values of λ. For λ = 0.1 and n = 1, the trajectory immediately diverged; it became convergent as n was increased to 10.

Fig. 9 Time responses of boundary variable z for different values of η₀

η₀ and Δt that satisfy Eq. (50) or Eq. (23). The remnant R̄(η₀²) involved in the right-hand side of Eq. (50) is a function of η₀ and is nonzero when the sliding variable s is a nonlinear function of the boundary variable z. As η₀ increases, the remnant term sharply increases if the nonlinearity is significant. As a result, an even larger threshold η₀ may be required to satisfy Eq. (50), which may lead to divergence of the remnant. In consequence, the system may become unstable. Figures 9–11 illustrate this phenomenon. In Figs. 9–11, a·tan(z) was used for the nonlinear function f_ct(z), and the same initial conditions were used as in the above example except for z. The initial value of z was set to 10, which yielded an inconsistent initial condition, s ≠ 0. Parameter λ was fixed at 0.5. When η₀ was set to 20, i.e., a very large threshold value, the response of z diverged, as shown in Fig. 9, and the sliding variable s did not converge, as shown in Fig. 10. When η₀ was reduced to 1, the magnitude of R̄(η₀²) was significantly reduced, creating stable responses as shown in both figures. Note that n was set to 20 for η₀ = 1 and n = 1 for η₀ = 20, so that the product n·η₀ remained the same in both cases. From these results we find that, when the threshold η₀ is too large and the nonlinearity becomes significant, the change in z tends to be too large to regulate, even though the magnitude of the control limit is extended to a larger threshold value, e.g., η₀ = 20. The plot of |Jz⁻¹ R̄| shown in Fig. 11 verifies this stability condition; the absolute value of Jz⁻¹ R̄ is much larger than η₀ = 20, hence the stability condition, Eq. (50), is not satisfied and the remnant term tends to be erratic.

c. When n·η₀ Is Too Small, Divergence Occurs. In the co-simulation computation, the dynamics of x and t act as exogenous disturbances to the problem of regulating the sliding variable s. In the Multi-Rate DTSM, n·η₀ must be large enough to counteract the effects of x and t in order to stabilize the sliding variable s. When n·η₀ is too small, the s dynamics diverges. Figures 12 and 13 illustrate this phenomenon. In these figures, all the initial conditions and parameter values remain the same except that different combinations of n and η₀ were used. Figure 12 shows that n·η₀ = 10 is large enough to stabilize the s dynamics, while n·η₀ = 3 is too small. Figure 13 shows that the boundary variable z changes too slowly when n·η₀ is small. Essentially, the sliding variable becomes negative around t = 1 second, but the boundary variable is still in the far negative region, trying to correct the positive s of the past. This eventually leads to an unstable BCC.
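The three failure cases all concern the interplay of λ, n, and η₀. As a complement, the scheme can be exercised end to end on a much smaller problem. The sketch below co-simulates two hypothetical first-order subsystems (not the index-3 system of Eqs. (58)–(60)) coupled by g = y_A − y_B = 0, an index-2 causal conflict, using forward Euler and the multi-rate DTSM; all dynamics and parameter values are illustrative.

```python
# Minimal multi-rate DTSM co-simulation on a hypothetical index-2 pair:
#   subsystem A: xA' = -xA + z    (output yA = xA; z appears in yA', qA = 1)
#   subsystem B: xB' = -2*xB + 1  (output yB = xB, independent of z)
#   constraint : g = yA - yB = 0  ->  index r = qA + 1 = 2
# Sliding variable (Eq. (9) with r = 2): s = lam*g' + g, so Jz = lam here.

dt, lam, eta0, n_fast = 0.01, 0.1, 2.0, 5
xA, xB, z = 0.0, 0.5, 0.0

def sat(u):
    return max(-1.0, min(1.0, u))

for k in range(1000):
    # subsystem simulators advance one step with the current z (point A)
    xA += dt * (-xA + z)
    xB += dt * (-2.0 * xB + 1.0)
    # BCC fast loop (point B): n saturated corrections of z, state frozen
    for _ in range(n_fast):
        gdot = (-xA + z) - (-2.0 * xB + 1.0)   # g' from reported outputs
        s = lam * gdot + (xA - xB)
        dO = -s / lam                          # nominal control, Jz = lam
        z = z + eta0 * sat(dO / eta0)          # saturated fast-rate update

constraint_error = abs(xA - xB)
```

After the initial saturated transient, the inner loop zeroes s at every grid point, and the constraint error decays through the low-pass dynamics of Fig. 4; the final |y_A − y_B| is negligible.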

Fig. 8 Boundary variable z under different values of λ

Fig. 10 Time responses of sliding variable s for different values of threshold η₀


Fig. 11 |Jz⁻¹ R̄| for an unstable case

6.2 Implementation. Since the objective of Co-Simulation is to facilitate communications among engineers in different companies and organizations over the Internet, we need to implement this software in a distributed network environment. The subsystem simulators and the BCC need to be independent software modules that communicate through network connections to exchange inputs and outputs. Each module updates its state and provides its outputs after it has received all the necessary inputs. Any explicit integration algorithm can be used in each of the modules to update its state. It is thus the responsibility of the owner/developer of each subsystem simulator to compute its dynamic equations stably and accurately and to return the output values requested by the BCC. The system integrator, whose function is to simulate the coupled dynamics of multiple subsystems, has the tasks of defining all the boundary conditions and tuning the parameters λ, n, and η₀ such that the convergence conditions are satisfied and the error-bound requirement is met. As long as the subsystem simulators are stable and the conditions assumed in the previous sections for stability and error-bound analysis hold, these parameters can be tuned based on Eq. (50) and the guidelines demonstrated in the numerical examples. One problem, however, is that parameter λ may affect the numerical

Fig. 13 Time responses of boundary variable z for different values of the product n·η₀

Fig. 12 Time responses of sliding variable s for different values of the product n·η₀

stability of individual subsystem simulators. Reference [17] has analyzed the relationship between the value of parameter λ and the stability of subsystem simulators, and has developed a procedure for finding a suitable parameter λ that avoids instability in the subsystem simulators. The procedure is straightforward for a class of subsystems, while complex nonlinear subsystems having diverse time scales need more steps to find an appropriate value for parameter λ. For more details, see [17].

Since Co-Simulation is a distributed computation environment, we wish to minimize the amount of communication among simulation modules. When the Multi-Rate DTSM algorithm is used in the BCC, we have to evaluate s and Jz multiple times during one time step of subsystem computation. Since both s and Jz depend on y^(r−1)(z), the computation of s and Jz can be performed solely on the BCC side if the subsystem simulators can provide the (r−1)st derivative as a function of z, i.e., y^(r−1)(z), in lieu of numerical values for y^(r−1). It should be noted that the subsystem simulator does not have to disclose the complete function y^(r−1)(x, t, z) in the variables x, t, and z as a whole. It is required to provide only the function evaluated at the current state and time, x_k and t_k. The state variables and their values stay in the subsystem simulator, and the BCC does not learn the state variables or their relationship with the output derivative y^(r−1). It should also be noted that each subsystem simulator must contain code for derivatives of the output up to the q-th order, that is, the relative order of the subsystem in relating y to the boundary variable z. As mentioned in Section 4, although the DAE index r varies depending on the combination of subsystems, it does not exceed the relative order of each subsystem plus 1. Therefore, the necessary derivatives are at most of the q-th order. The Co-Simulation software environment has been developed using Java and will be reported in a separate paper.
Here, we simply list pseudo-code to illustrate the core of the Co-Simulation environment used for solving the DAEs resulting from conflicting sub-simulators. Table 1 recapitulates the computations performed at the BCC (the left column) and those at each subsystem simulator (the right column), as well as the communications in between. Starting at point A in the table, the BCC supplies the boundary variable z_k to the connected subsystem simulators. Using z_k and the stored x_k's, each subsystem simulator updates its internal state for one step, calculates the outputs and their derivatives (y_{k+1}, ẏ_{k+1}, . . . , y^(r−1)_{k+1}), and sends these values to the BCC together with the z-function y^(r−1)(z) evaluated at x_k and t_k.

Table 1 Computations at and communications between subsystem simulators and BCC.

Having received these data, the BCC calculates a temporary variable before proceeding to the fast-time-scale computation. This temporary variable is the part of the sliding variable s that is not affected by z; the purpose is to avoid unnecessary repetition of computation. Starting from point B, the boundary variable z is updated n consecutive times. This evaluates the control law given by Eqs. (44) and (45) in Section 5. After the BCC completes the n steps of computation for z, the computational thread goes back to point A, and the BCC sends the newly updated z to the connected simulators. This completes one time step of computation.
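The Table 1 exchange can be sketched as an object interface in which the state never crosses the module boundary; the class, the method names, and the one-state toy dynamics are illustrative only (the actual environment was written in Java):

```python
# Sketch of the Table 1 protocol: the BCC sees only an input/output
# interface; the state stays inside each subsystem object.  Names and
# dynamics are hypothetical.

class Subsystem:
    """Encapsulated simulator: accepts z, returns outputs/derivatives only."""

    def __init__(self, x0, dt):
        self._x = x0          # private state, never sent to the BCC
        self.dt = dt

    def advance(self, z):
        """Point A: integrate one step with the supplied boundary variable."""
        self._x += self.dt * (-self._x + z)     # hypothetical dynamics

    def outputs(self):
        """Point B: report y (and derivatives up to order q), not x."""
        return {"y": self._x}

    def y_r1_at(self, z):
        """y^(r-1) evaluated at the current frozen state for a given z."""
        return -self._x + z

sub = Subsystem(x0=1.0, dt=0.01)
sub.advance(z=0.5)
y = sub.outputs()["y"]
```

A BCC module would hold a list of such objects, call advance() on all of them, collect outputs() and y_r1_at() replies, and run the fast-rate z-updates, with no access to the private state.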

7 Extension to More Than Two Subsystems

So far the DTSM algorithm has been developed only for two interacting subsystems having incompatible boundary conditions. In this section, the DTSM method is extended to general multiple subsystems whose boundary conditions are described by a class of energetic junction conditions. Furthermore, we provide the physical foundations underpinning the assumptions and formulations made for the boundary conditions in the previous sections. The treatment and notation of boundary conditions used in this section are analogous to those of bond graphs [18].

We assume that all the subsystem models are connected through energetic junctions governed by generalized Kirchhoff laws, as used in bond graphs [18]. In bond graphs, there are two types of energetic junctions describing connections among elements and subsystems. Let e_j and f_j, respectively, be the effort and flow variables associated with the power bond of the j-th element or subsystem. The two types of connections are described by the following junction equations:

    e_1 = e_2 = · · · = e_m,    Σ_{j=1}^{m} f_j = 0    (61)

    f_1 = f_2 = · · · = f_m,    Σ_{j=1}^{m} e_j = 0    (62)

where m is the number of elements and subsystems connected to the same junction. Equation (61) describes a common effort junction, for which the effort variables, such as force, pressure, and voltage, are the same for all m elements; the flow variables associated with the common effort junction must sum to zero, i.e., the generalized Kirchhoff current law. Equation (62) defines a common flow junction, which is symmetric to the common effort junction, representing the generalized Kirchhoff voltage law. We use the common effort junction in the following discussion, since the common flow junction can be treated in the same manner.

It is convenient to use the bond graph notation to represent such energetic junctions and the causal relations among the connected subsystems [18]. Figure 14 shows three types of common effort junction with m connected subsystems. A short bar attached to either side of a power bond is called a causal stroke, indicating which variable, effort or flow, is determined by the connected subsystem. Figure 14(a) shows the case in which one and only one subsystem provides an effort to the junction, which is transmitted to the other (m−1) subsystems connected to the junction. In this case, the interacting subsystems are free of causal conflict: subsystem 1 provides an effort to the junction, and the junction passes the effort to all other connected subsystems. This is a generalization of the example shown in Fig. 1.

If no subsystem determines the effort variable at the common effort junction, as in the case of Fig. 14(b), we have to use the generalized Kirchhoff current law for the flow variables in order to find the common effort variable e:

    g ≡ Σ_{j=1}^{m} f_j = 0    (63)

Note that this constraint equation does not explicitly include the effort variable e, but each flow variable f_j and the effort variable are related via the dynamic equation of the individual subsystem. Differentiating the constraint, Eq. (63), (r−1) times yields an explicit relation between the f_j and e. Regarding the common effort variable e as a boundary variable z, we find that the system is a high-index DAE. In Section 2, we assumed that the algebraic constraint is a linear function of the output variables. That assumption turns out to be a consequence of the assumption that the subsystems interact through energetic junctions. This assumption has a profound physical sense and applies to a broad class of physical systems. If this constraint is the only interaction among the subsystems under consideration, it is a scalar case of causal conflict, and the solution method developed in the previous sections applies.

On the other hand, if multiple subsystems provide effort variables as outputs to the common effort junction, a different type of causal conflict is incurred, as shown in Fig. 14(c). In this case, we have to solve for the p flow variables (p > 1) to supply to subsystems 1 through p so that all p subsystems provide the same effort output, satisfying the continuity condition of the common effort junction. We have p unknown flow variables, which can be treated as boundary variables z_1, z_2, . . . , z_p. Also, we can obtain (p−1) independent equations from the continuity condition:

    e_1 = e_2 = · · · = e_p    (64)

This can be written as (p−1) independent equations, such as

    g_1 ≡ e_j − e_1 = 0
    ⋮    (65)
    g_{p−1} ≡ e_j − e_{p−1} = 0

where e_j can be the effort of any one of the p subsystems. Also, we have one equation from the generalized Kirchhoff current law:

    g_p ≡ Σ_{i=1}^{p} z_i + Σ_{j=p+1}^{m} f_j = 0    (66)

where the boundary variables are z_1 = f_1, z_2 = f_2, . . . , z_p = f_p. Therefore, we have p equations and p unknown boundary variables z, and hence the junction is solvable. This is a vector case of causal conflict. The index of the algebraic constraint derived from Kirchhoff's current law is one, while the indices of the other constraints, Eq. (65), depend on the p subsystems.

For this type of vectorial causal conflict, the DTSM algorithm must be extended. First, the sliding manifold must be extended to a vectorial expression using a vector sliding variable:

    s = [ s_1, s_2, . . . , s_j, . . . , s_p ]^T    (67)

Fig. 14 Three types of common effort junction: (a) only one subsystem provides an effort variable; (b) no subsystem provides an effort variable; (c) multiple subsystems provide effort variables.

Each component of the vector s is defined in accordance with Eq. (9) with its own index number. To enforce the sliding manifold, the DTSM must be modified to a multi-input, multi-output control. The Jacobian Jz becomes a matrix in the MIMO control, and a matrix norm must be used for the convergence conditions. If the algebraic constraints are in the form of Eq. (65), the Jacobian is sparse and easily inverted; see [17] for details.

A special case of the vector causal conflict is the refrigeration cycle discussed in Section 2. In that system, the two pipes coming from the two evaporators merge and are connected to the pipe leading to the accumulator. The mass flow rates of the three pipes sum to zero (the flow to the accumulator being negative), and the pressure is common to the three pipes. These conditions are described by Eqs. (1) and (3), which correspond to Eqs. (66) and (65), respectively. Although two algebraic constraints are present, they can be reduced to one. Since the constraint due to Kirchhoff's current law is index one, the boundary variables are explicitly involved in the constraint equation. Therefore the constraint can be solved explicitly for one boundary variable, which is thereby eliminated algebraically. In Eq. (2), the mass flow rate of subsystem B was explicitly solved for and represented by the boundary variable of subsystem A. Therefore, the system has only one boundary variable and one algebraic constraint. In general, the number of boundary variables and algebraic constraints can be reduced by at least one, since the generalized Kirchhoff laws provide index-one linear algebraic constraints that can be solved explicitly.
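To illustrate the vector case in the simplest setting, suppose p = 2 conflicting subsystems report (locally) linear effort laws in their boundary flows; then the junction equations (65) and (66) form a small linear system that the coordinator can solve directly. All coefficients below are hypothetical:

```python
# p = 2 effort-providing subsystems at a common effort junction.
# Unknown boundary flows z1, z2; hypothetical linear effort outputs:
#   e1 = a1*z1 + c1,   e2 = a2*z2 + c2
# Junction equations:
#   Eq. (65):  e1 - e2 = 0          (continuity of the common effort)
#   Eq. (66):  z1 + z2 + f_ext = 0  (Kirchhoff flow sum, f_ext given)

a1, c1 = 2.0, 1.0
a2, c2 = 1.0, 0.0
f_ext = -3.0

# Solve the 2x2 system [[a1, -a2], [1, 1]] [z1, z2]^T = [c2 - c1, -f_ext]
det = a1 * 1.0 - (-a2) * 1.0
z1 = ((c2 - c1) * 1.0 - (-a2) * (-f_ext)) / det
z2 = (a1 * (-f_ext) - (c2 - c1) * 1.0) / det

e1 = a1 * z1 + c1
e2 = a2 * z2 + c2
# both junction residuals should vanish: e1 == e2 and z1 + z2 + f_ext == 0
```

In the general nonlinear case the effort laws are not available in closed form, and the MIMO DTSM of Eq. (67) plays the role of this linear solve, driving the vector of junction residuals to zero using only sampled subsystem outputs.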

8 Conclusion

A method for co-simulating coupled subsystems has been presented in this paper. The importance of this method is that it allows us to simulate coupled dynamics without revealing the internal state and model of each simulator. The major technical contributions of this paper are:

1. A Discrete-Time Sliding Mode control method has been developed to compute the high-index Differential-Algebraic Equations resulting from the integration of subsystem simulators having incompatible boundary conditions.
2. A computational algorithm has been developed to execute co-simulation by merely exchanging input and output data with the individual subsystem simulators, thereby preserving proprietary information.
3. A multi-rate co-simulation method has been developed to improve computational efficiency, constraint error, and stability.


4. Convergence conditions and constraint error bounds for the above co-simulation methods have been obtained, and numerical examples have verified the theoretical results.
5. The physical foundations underpinning the major assumption and the resultant algorithm have been provided. The linearity of the constraint equations, the key assumption used in developing the algorithm, stems from the generalized Kirchhoff laws.

This paper has addressed, for the first time, how multiple subsystem simulators can be combined without disclosure of the internal states and models of the individual subsystems. This opens up a new research area dealing with proprietary information in numerical analysis. A number of challenging issues emerge that need further investigation. For example, a better protection mechanism will be needed to prevent a malicious user from stealing proprietary simulator information. The co-simulation method presented in this paper does not require the internal state and model information, but it does not guarantee that the proprietary information is protected. Another critical issue is to facilitate the tuning of the co-simulation parameters so that, despite limited access to proprietary subsystem information, users can perform co-simulation stably and efficiently without special knowledge. Tuning becomes more complicated as we deal with more heterogeneous subsystems having diverse time scales and granularity. In this paper, the basic convergence conditions and guidelines for selecting the parameters have been obtained, assuming that the individual subsystem simulators are stable. Selection of step sizes, however, is a difficult task when the interacting subsystem simulators differ significantly in time scale, accuracy, and stability margin. More powerful methods for selecting parameter values, or adaptive methods, may be needed in the future.

References

[1] Gu, B., Gordon, B. W., and Asada, H. H., 2000, "Co-Simulation of Coupled Dynamic Subsystems: A Differential-Algebraic Approach Using Singularly Perturbed Sliding Manifolds," Proceedings of the American Control Conference, Chicago, IL, June 28–30, pp. 757–761.
[2] Wallace, D. R., Abrahamson, S., Senin, N., and Sferro, P., 2000, "Integrated Design in a Service Marketplace," Comput.-Aided Des., 32(2), pp. 97–107.
[3] Brenan, K., Campbell, S., and Petzold, L., 1989, Numerical Solution of Initial Value Problems in Differential-Algebraic Equations, North-Holland, Amsterdam; also in SIAM Classics in Applied Mathematics, SIAM, Philadelphia, 1996.
[4] Hairer, E., and Wanner, G., 1991, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, Springer-Verlag, New York.
[5] Cellier, F. E., and Elmqvist, H., 1993, "Automated Formula Manipulation Supports Object-Oriented Continuous-System Modeling," IEEE Control Syst., 13(2).
[6] Andersson, M., 1990, "An Object-Oriented Language for Model Representation," Licentiate thesis TFRT-3208, Department of Automatic Control, Lund Institute of Technology, Lund, Sweden.
[7] Mattsson, S. E., Elmqvist, H., and Otter, M., 1998, "Physical System Modeling with Modelica," Control Eng. Pract., 6, pp. 501–510.
[8] Sinha, R., Paredis, C. J. J., Liang, V.-C., and Khosla, P. K., 2001, "Modeling and Simulation Methods for Design of Engineering Systems," ASME J. Comput. Inf. Sci. Eng., 1, pp. 84–91.
[9] Gordon, B. W., and Asada, H. H., 2000, "Modeling, Realization, and Simulation of Thermo-Fluid Systems Using Singularly Perturbed Sliding Manifolds," ASME J. Dyn. Syst., Meas., Control, 122, pp. 699–707.
[10] Baumgarte, J., 1972, "Stabilization of Constraints and Integrals of Motion in Dynamical Systems," Comput. Methods Appl. Mech. Eng., 1, pp. 1–16.
[11] Mattsson, S. E., and Söderlind, G., 1993, "Index Reduction in Differential-Algebraic Equations Using Dummy Derivatives," SIAM J. Sci. Comput., 14(3), pp. 677–692.
[12] Gordon, B. W., Liu, S., and Asada, H. H., 2000, "Realization of High Index Differential Algebraic Systems Using Singularly Perturbed Sliding Manifolds," Proceedings of the 2000 American Control Conference, Chicago, IL, June, pp. 752–756.
[13] Gu, B., and Asada, H. H., 2001, "Co-Simulation of Algebraically Coupled Dynamic Sub-Systems," Proceedings of the 2001 American Control Conference, Arlington, VA, pp. 2273–2278.
[14] Drakunov, S. V., and Utkin, V. I., 1992, "Sliding Mode Control in Dynamic Systems," Int. J. Control, 55(4), pp. 1029–1037.
[15] Utkin, V., Guldner, J., and Shi, J., 1999, Sliding Mode Control in Electromechanical Systems, Taylor & Francis, London.
[16] Ascher, U. M., and Petzold, L. R., 1998, Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, SIAM, Philadelphia.
[17] Gu, B., 2001, "Co-Simulation of Algebraically Coupled Dynamic Subsystems," Ph.D. thesis, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA.
[18] Karnopp, D., Margolis, D., and Rosenberg, R., 1990, System Dynamics: A Unified Approach, Wiley-Interscience, New York.
