

Lur'e Systems with Multilayer Perceptron and Recurrent Neural Networks: Absolute Stability and Dissipativity

J. A. K. Suykens, J. Vandewalle, B. De Moor

Abstract: Sufficient conditions for absolute stability and dissipativity of continuous-time recurrent neural networks with two hidden layers are presented. In the autonomous case this is related to a Lur'e system with multilayer perceptron nonlinearity. Such models are obtained after parameterizing general nonlinear models and controllers by a multilayer perceptron with one hidden layer and representing the control scheme in standard plant form. The conditions are expressed as matrix inequalities and can be employed for nonlinear H∞ control and for imposing closed-loop stability in dynamic backpropagation.

Index Terms: Lur'e systems, Lur'e-Postnikov Lyapunov function, matrix inequalities, multilayer recurrent neural networks, nonlinear control.

I. INTRODUCTION

In this paper we investigate a class of nonlinear models and controllers that are parameterized by multilayer perceptrons. As a result, recurrent neural networks [6], [28] with two hidden layers are obtained as closed-loop system equations. It is well known that multilayer perceptrons with one or more hidden layers are universal approximators in the sense that they are able to approximate any static continuous nonlinear function arbitrarily well on a compact interval [4], [10], [14]. In this sense generality is preserved after parameterizing nonlinear systems by means of multilayer perceptrons. On the other hand, the layered structure and the fact that the two-hidden-layer recurrent neural networks contain sector-type nonlinearities can be exploited in order to derive matrix inequalities [5] as sufficient conditions for global asymptotic stability. Observability, controllability, and identifiability issues for a class of recurrent neural networks have been studied in [1], [2], and [18]. In this paper an absolute stability criterion will be derived based on a Lur'e-Postnikov Lyapunov function. In the area of nonlinear control, the problem of extending H∞ results from linear control to input-affine and general nonlinear systems has received considerable interest. Solutions have been presented for the state and output feedback case, in terms of the solution to Hamilton-Jacobi-Isaacs equations [3], [11], [12], [22], [23], [27]. The notion of dissipativity, as proposed by Willems [26] and later developed by Hill and Moylan for nonlinear systems [7], [8], plays an important role in this context. In this paper we investigate dissipativity of the nonautonomous two-hidden-layer recurrent neural networks. This is done with respect to a supply rate of quadratic form (including the cases of passivity and finite L2-gain) and a storage function of

Manuscript received June 26, 1997. Recommended by Associate Editor, K. Passino. This work was carried out at the ESAT Laboratory and the Interdisciplinary Center of Neural Networks ICNN of the Katholieke Universiteit Leuven, in the framework of the Belgian Programme on Interuniversity Poles of Attraction, initiated by the Belgian State, Prime Minister's Office for Science, Technology and Culture (IUAP P4-02 & IUAP P4-24), and in the framework of a Concerted Action Project MIPS (Model-based Information Processing Systems) of the Flemish Community. The work of J. A. K. Suykens was supported by the National Fund for Scientific Research FWO-Flanders. The authors are with the Department of Electrical Engineering, Katholieke Universiteit Leuven, ESAT-SISTA, Kardinaal Mercierlaan 94, B-3001 Leuven (Heverlee), Belgium. Publisher Item Identifier S 0018-9286(99)02131-5.

Authorized licensed use limited to: Katholieke Universiteit Leuven. Downloaded on October 14, 2008 at 06:52 from IEEE Xplore. Restrictions apply.


quadratic form plus integral terms. The condition is also expressed as a matrix inequality. The derived matrix inequalities for absolute stability and dissipativity can be employed for controller synthesis. Nonlinear H∞ control for recurrent neural networks then involves solving a constrained nonlinear optimization problem which takes into account the matrix inequality. It also offers a way of improving Narendra's dynamic backpropagation procedure [15]. Dynamic backpropagation basically means that the controller is trained by optimizing on one or a set of specific reference inputs. However, nothing is then guaranteed concerning closed-loop stability or generalization of the controller toward other reference inputs which do not belong to the training set. The matrix inequalities can be used in order to impose global asymptotic stability of the closed-loop scheme or to guarantee a certain disturbance attenuation level for the control scheme in standard plant form. A similar approach has been taken for discrete-time multilayer recurrent neural networks in NLq theory, with applications to stabilization and control of systems that possess one or multiple equilibria, are periodic, quasi-periodic, or chaotic [19]. This paper is organized as follows. In Section II we discuss the parameterization of nonlinear models and controllers by a multilayer perceptron. The closed-loop systems can be represented as a two-hidden-layer recurrent neural network, which is explained in Section III. In Section IV a sufficient condition for absolute stability of this form is derived. In Section V a condition for dissipativity is given. In Section VI it is discussed how to employ the resulting matrix inequalities for nonlinear H∞ control and modified dynamic backpropagation.

II. A PARAMETERIZATION OF MODELS AND CONTROLLERS BY NEURAL NETS

For a given nonlinear plant, let us consider nonlinear models of the form

    ẋ = A^M x + B^M u + f(x, u) + K^M ε
    y = C^M x + ε                                            (1)

with f(·,·): R^n × R^m → R^n a continuous nonlinear mapping with f(0,0) = 0, input u ∈ R^m, output y ∈ R^l, and state vector x ∈ R^n. The matrix K^M ∈ R^{n×l} denotes a Kalman gain, in order to model process noise. Parameterizing f(·,·) with a one-hidden-layer multilayer perceptron and zero bias terms gives the model M:

    ẋ = A^M x + B^M u + W_AB σ(V_A x + V_B u) + K^M ε
    y = C^M x + ε                                            (2)

with interconnection matrices W_AB ∈ R^{n×n_h}, V_A ∈ R^{n_h×n}, V_B ∈ R^{n_h×m}. Here σ(·) denotes the activation function of the neural network (typically a tanh(·) nonlinearity), which is applied componentwise to the elements of a vector, with σ(0) = 0. The model M can be considered as the continuous-time version of the neural state-space models proposed in [20]. In case W_AB = 0, V_A = 0, V_B = 0, (2) corresponds to a Kalman filter with steady-state Kalman gain K^M. The model parameters of (2) might be identified using a prediction error algorithm. The gradient of the cost function can be computed using a sensitivity method (known as Narendra's dynamic backpropagation in the field of neural networks [15]).

Now let us consider, in connection to the model M, a nonlinear state feedback controller C1, a linear dynamic output feedback controller C2, and a nonlinear dynamic output feedback controller C3:

    C1: u = K^C x + W^C σ(V^C x)

    C2: ż = E^C z + F^C y + F_2^C d
        u = G^C z + H^C y + H_2^C d                          (3)

    C3: ż = E^C z + F^C y + F_2^C d + W_EF^C σ(V_E^C z + V_F^C y)
        u = G^C z + H^C y + H_2^C d + W_GH^C σ(V_G^C z + V_H^C y)

with controller state z ∈ R^{n_z} and reference input d ∈ R^l. The matrices are of dimension K^C ∈ R^{m×n}, W^C ∈ R^{m×n_h^C}, V^C ∈ R^{n_h^C×n}, etc., with n_h^C the number of hidden neurons in the hidden layers. It is straightforward to observe that the closed-loop systems for the model M, connected to one of the controllers C_i (i = 1, 2, 3), can be written in the form

    ṗ = A p + B σ(N p + H σ(C p + D_2 w) + D_1 w) + D_0 w    (4)

with state vector p = [x; z] and exogenous input w = [ε; d]. One obtains for [M C1]:

    A = A^M + B^M K^C,          B = [W_AB   B^M W^C]
    N = [V_A + V_B K^C; V^C],   H = [V_B W^C; 0]
    C = V^C,   D_0 = K^M,   D_1 = 0,   D_2 = 0               (5)

for [M C2]:

    A = [A^M + B^M H^C C^M   B^M G^C;  F^C C^M   E^C]
    B = [W_AB; 0],   N = [V_A + V_B H^C C^M   V_B G^C]
    H = 0,   C = 0,   D_2 = 0
    D_1 = [V_B H^C   V_B H_2^C]
    D_0 = [K^M + B^M H^C   B^M H_2^C;  F^C   F_2^C]          (6)

and for [M C3]:

    A = [A^M + B^M H^C C^M   B^M G^C;  F^C C^M   E^C]
    B = [W_AB   B^M W_GH^C   0;  0   0   W_EF^C]
    N = [V_A + V_B H^C C^M   V_B G^C;  V_H^C C^M   V_G^C;  V_F^C C^M   V_E^C]
    H = [V_B W_GH^C; 0; 0],   C = [V_H^C C^M   V_G^C]
    D_2 = [V_H^C   0]
    D_1 = [V_B H^C   V_B H_2^C;  V_H^C   0;  V_F^C   0]
    D_0 = [K^M + B^M H^C   B^M H_2^C;  F^C   F_2^C]          (7)
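As a numerical illustration of the closed-loop form (4) with the [M C1] matrices (5), the sketch below builds the matrices from randomly chosen model and controller data (all dimensions and values are hypothetical, not taken from the paper) and simulates the autonomous closed loop with a tanh activation by forward Euler:

```python
import numpy as np

# Hypothetical dimensions: n states, m inputs, nh model / nhc controller hidden neurons.
rng = np.random.default_rng(0)
n, m, nh, nhc = 4, 2, 3, 3

# Illustrative model M matrices of (2) and controller C1 matrices of (3).
AM, BM = -np.eye(n), 0.1 * rng.normal(size=(n, m))
W_AB, V_A, V_B = 0.1 * rng.normal(size=(n, nh)), 0.1 * rng.normal(size=(nh, n)), 0.1 * rng.normal(size=(nh, m))
KC, WC, VC = 0.1 * rng.normal(size=(m, n)), 0.1 * rng.normal(size=(m, nhc)), 0.1 * rng.normal(size=(nhc, n))

# Closed-loop matrices of (5): A = A^M + B^M K^C, B = [W_AB  B^M W^C],
# N = [V_A + V_B K^C; V^C], H = [V_B W^C; 0], C = V^C.
A = AM + BM @ KC
B = np.hstack([W_AB, BM @ WC])
N = np.vstack([V_A + V_B @ KC, VC])
H = np.vstack([V_B @ WC, np.zeros((nhc, nhc))])
C = VC

def f(p):
    """Autonomous right-hand side of (4): p' = A p + B sigma(N p + H sigma(C p))."""
    return A @ p + B @ np.tanh(N @ p + H @ np.tanh(C @ p))

# Forward-Euler simulation from a random initial state; with these small gains
# the linear part dominates and the state decays toward the origin.
p, dt = rng.normal(size=n), 1e-2
for _ in range(5000):
    p = p + dt * f(p)
print(np.linalg.norm(p))
```

Note that the origin is an equilibrium of (4) in the autonomous case because σ(0) = 0, which is the property exploited by the stability analysis of Section IV.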

III. TWO-HIDDEN-LAYER RECURRENT NEURAL NETS AND LUR'E SYSTEMS

Because the closed-loop systems for (2), (3) can be represented as a recurrent neural network with two hidden layers, we will study absolute stability and dissipativity of the following form:

    ṗ = A p + B σ1(N p + H σ2(C p + D_2 w) + D_1 w) + D_0 w
    e = E p + F σ3(M p + J σ4(G p + L_2 w) + L_1 w) + L_0 w  (8)
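The static nonlinearities σ_i in (8) are assumed below to satisfy sector conditions. For the common tanh activation this holds with sector [0, 1] and unit slope bound, as the following numerical spot check (illustrative only, on a finite grid) confirms:

```python
import numpy as np

# tanh belongs to the sector [0, 1]: tanh(z) * (tanh(z) - z) <= 0 for all z,
# and its slope 1 - tanh(z)^2 lies in [0, 1] everywhere.
z = np.linspace(-10.0, 10.0, 100001)
z = z[z != 0.0]  # the sector ratio tanh(z)/z is only defined for z != 0
sector_ok = np.all(np.tanh(z) * (np.tanh(z) - z) <= 0.0)
slope = 1.0 - np.tanh(z) ** 2
slope_ok = np.all((slope >= 0.0) & (slope <= 1.0))
print(sector_ok, slope_ok)  # True True
```

The same kind of check applies to any candidate activation before invoking the sector-based criteria of Sections IV and V.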



with external input w ∈ R^{n_w}, output e ∈ R^{n_e}, and state vector p ∈ R^{n_p}. For the static nonlinearities σ_i(·) with numbers of hidden neurons n_{h_i} we assume the sector conditions [0, k_i] (i = 1, 2, 3, 4, respectively). The interconnection matrices are of appropriate dimension. Note that when the closed-loop systems are written in standard plant form, w and e in (8) correspond to the exogenous input and regulated output, respectively. The discrete-time version of multilayer recurrent neural networks has been studied in the context of NLq theory [19]. Let us consider the autonomous case of (8) (zero external input w) and denote the state vector by x ∈ R^n:

    ẋ = A x + B σ1(N x + H σ2(C x))                          (9)

For N = 0 the two-hidden-layer recurrent neural network (9) can be represented as the Lur'e system

    ẋ = A x + B1 ξ
    η = C1 x                                                 (10)
    ξ = φ(η) = B2 σ1(H σ2(C2 η))

with B1 B2 = B, C2 C1 = C and B1 ∈ R^{n×m}, B2 ∈ R^{m×n_{h1}}, C2 ∈ R^{n_{h2}×l}, C1 ∈ R^{l×n}. The nonlinearity φ(·) is a multilayer perceptron with two hidden layers, zero bias terms, and activation functions σ1(·), σ2(·). The interconnection matrices of the output layer and the hidden layers are B2, H, C2, respectively. The Lur'e representation (10) consists of the linear dynamic system [A, B1, C1] interconnected by feedback to φ(·), which in general does not satisfy a sector condition. However, φ(·) is composed of units with activation functions that do satisfy a sector condition. This enables us to represent (10) as the Lur'e system

    ẋ = A_3 x + B_3 s
    r = C_3 x + D_3 s                                        (11)
    μ = σ1(z),  v = σ2(ζ)

with s = [μ; v], r = [z; ζ], z = N x + H v, v = σ2(C x), and A_3 = A, B_3 = [B  0], C_3 = [N; C], D_3 = [0  H; 0  0], and σ1(·), σ2(·) belonging to sector [0, k]. Absolute stability criteria for such Lur'e systems are readily available in the literature (see, e.g., [13], [16], and [25]), such as the circle and Popov criteria, which are related to a quadratic Lyapunov function and a Lur'e-Postnikov Lyapunov function, respectively. However, the nonzero D_3 matrix in the Lur'e representation complicates the analysis. In the next section we will derive a new criterion which is directly based on the form (9).

IV. ABSOLUTE STABILITY CRITERION

In this section we derive a sufficient condition for global asymptotic stability of the form (9). The following lemma is based on the Lur'e-Postnikov Lyapunov function

    V(x) = x^T P x + 2 Σ_{i=1}^{n_{h1}} ω_i ∫_0^{z_i} σ1(s) k1 ds + 2 Σ_{j=1}^{n_{h2}} γ_j ∫_0^{ζ_j} σ2(s) k2 ds    (12)

with P = P^T > 0, ω_i > 0, γ_j > 0 and z = N x + H σ2(ζ), ζ = C x. This Lyapunov function is positive everywhere with V(0) = 0 and radially unbounded.

Lemma 1: Let Γ = diag{γ_j}, Ω = diag{ω_i}, T = diag{τ_j}, Φ = diag{φ_i} be diagonal matrices with γ_j > 0, ω_i > 0, τ_j > 0, φ_i > 0 for i = 1, ..., n_{h1} and j = 1, ..., n_{h2}, and consider an arbitrary constant λ > 0. Assume the following condition on the slope of σ2(·):

    0 ≤ dσ2/dζ_j ≤ k3,   k3 ≥ 1,   ∀j                        (13)

in addition to the sector conditions σ1(·) ∈ [0, k1] and σ2(·) ∈ [0, k2]. Furthermore, let

    Y = Y^T = [A^T P + P A,   P B + k1 A^T N^T Ω + k1 N^T Φ,   k2 A^T C^T Γ + k2 C^T T;
               *,             k1 Ω N B + k1 B^T N^T Ω - 2Φ,    k2 B^T C^T Γ + k1 Φ H;
               *,             *,                               -2T - λ I]

    Z = [A^T C^T   0      0;
         B^T C^T   Ω H    0;
         0         0      (λ/k1)^{1/2} I]

where Z is assumed to be a full-rank matrix. Then, if there exist a P = P^T > 0 and diagonal matrices Γ, Ω, T, Φ such that the matrix inequality

    Y + k1 k3 Z Z^T < 0                                      (14)

is satisfied, then (9) is globally asymptotically stable (or absolutely stable in the large) with the origin as a unique equilibrium point.

Proof: From the sector conditions on σ1(·) and σ2(·) one has the inequalities [25]

    σ1(z_i) [σ1(z_i) - k1 n_i^T x - k1 h_i^T σ2(ζ)] ≤ 0,   ∀x ∈ R^n
    σ2(ζ_j) [σ2(ζ_j) - k2 c_j^T x] ≤ 0,   ∀x ∈ R^n

for all i, j, where n_i^T, h_i^T denote the ith rows of the matrices N, H and c_j^T the jth row of the matrix C, respectively. By taking the time derivative of the Lyapunov function (12) and applying the S-procedure [5] one obtains

    V̇ ≤ 2 x^T P [A x + B σ1(z)]
        + 2 k1 σ1(z)^T Ω [N A x + N B σ1(z) + H (dσ2/dζ) C (A x + B σ1(z))]
        + 2 k2 σ2(ζ)^T Γ C [A x + B σ1(z)]
        - 2 σ2(ζ)^T T [σ2(ζ) - k2 C x]
        - 2 σ1(z)^T Φ [σ1(z) - k1 N x - k1 H σ2(ζ)].

Defining ξ = [x; σ1(z); σ2(ζ)], this is written as the quadratic form ξ^T (Y + k1 Z Λ_3 Z^T) ξ < 0 with [9]

    Λ_3 = [0         dσ2/dζ   0;
           dσ2/dζ    0        0;
           0         0        I]   ≤  k3 I.

Multiplication from the left and right with the full-rank matrix Z yields Z Λ_3 Z^T ≤ k3 Z Z^T. The inequality has to hold for all nonzero ξ, which gives (14).

V. DISSIPATIVITY

In this section we analyze input-output (I/O) properties of (8). Therefore, we associate to (8) a supply rate of the form [7], [8]

    s(w, e) = [w; e]^T [Q11   Q12; Q12^T   Q22] [w; e]       (15)
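A condition of the type (14) can be checked numerically for given data by testing the largest eigenvalue of the symmetric matrix Y + k1 k3 Z Z^T. A minimal sketch follows, with a hypothetical negative definite Y and a small full-rank Z chosen purely for illustration (not data from the paper):

```python
import numpy as np

def absolute_stability_check(Y, Z, k1, k3):
    """Return True if Y + k1*k3*Z Z^T < 0, tested via its largest eigenvalue."""
    S = Y + k1 * k3 * (Z @ Z.T)
    S = 0.5 * (S + S.T)  # symmetrize against round-off
    return bool(np.max(np.linalg.eigvalsh(S)) < 0.0)

# Illustrative data only: a negative definite Y and a small (generically
# full-column-rank) Z, so the perturbed matrix stays negative definite.
rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5))
Y = -(M @ M.T + 5.0 * np.eye(5))   # Y = Y^T < 0, eigenvalues <= -5
Z = 0.1 * rng.normal(size=(5, 2))
print(absolute_stability_check(Y, Z, k1=1.0, k3=1.0))
```

In an actual design loop such an eigenvalue test would sit inside the constrained optimization over P and the diagonal multiplier matrices.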



with Q11, Q22 symmetric matrices. The following types of supply rates are of special importance: s(w, e) = 2 w^T e (passivity) and s(w, e) = γ² w^T w - e^T e (finite L2-gain γ). System (8) with supply rate (15) is called dissipative [7], [8] if, for all locally square integrable w and all t_f ≥ 0, one has ∫_0^{t_f} s(w(t), e(t)) dt ≥ 0 with p(0) = 0 and s(w, e) evaluated along the trajectory of (8). System (8) is then dissipative with respect to this supply rate if there exists a storage function V(p): R^{n_p} → R satisfying V(p) > 0 for p ≠ 0, V(0) = 0, and V̇(p) ≤ s(w, e), ∀w ∈ R^{n_w}, ∀e ∈ R^{n_e} [7], [8]. Here we investigate a storage function which consists of a quadratic form plus integral terms:

    V(p) = p^T P p + 2 Σ_{i=1}^{n_{h1}} ω_i ∫_0^{z_i} σ1(s) k1 ds + 2 Σ_{j=1}^{n_{h2}} γ_j ∫_0^{ζ_j} σ2(s) k2 ds    (16)

with P = P^T > 0, ω_i > 0, γ_j > 0 and z = N p + H σ2(ζ), ζ = C p. Note the resemblance with the Lur'e-Postnikov Lyapunov function for the autonomous case. In order to derive the next lemma, we have to assume D = D_0, D_i = 0 (i = 1, 2), L = L_0, L_i = 0 (i = 1, 2), and F = 0 in (8), giving

    ṗ = A p + B σ1(N p + H σ2(C p)) + D w
    e = E p + L w                                            (17)

This assumption is not needed for the special case of a quadratic storage function.

Lemma 2: Let Γ = diag{γ_j}, Ω = diag{ω_i}, T = diag{τ_j}, Φ = diag{φ_i} be diagonal matrices with γ_j > 0, ω_i > 0, τ_j > 0, φ_i > 0 for i = 1, ..., n_{h1} and j = 1, ..., n_{h2}, and consider an arbitrary constant λ > 0. Assume the following condition on the slope of σ2(·):

    0 ≤ dσ2/dζ_j ≤ k3,   k3 ≥ 1,   ∀j                        (18)

in addition to the sector conditions σ1(·) ∈ [0, k1], σ2(·) ∈ [0, k2]. Furthermore, let

    Y = Y^T = [A^T P + P A - E^T Q22 E,   P B + k1 A^T N^T Ω + k1 N^T Φ,   k2 A^T C^T Γ + k2 C^T T,   P D - E^T (Q12 + Q22 L);
               *,   k1 Ω N B + k1 B^T N^T Ω - 2Φ,   k2 B^T C^T Γ + k1 Φ H,   k1 Ω N D;
               *,   *,   -2T - λ I,   k2 Γ C D;
               *,   *,   *,   Y44]                           (19)

and

    Z = [A^T C^T   0      0;
         B^T C^T   Ω H    0;
         0         0      (λ/k1)^{1/2} I;
         D^T C^T   0      0]

where Z is assumed to be a full-rank matrix and Y44 = -Q11 - L^T Q22 L - Q12 L - L^T Q12^T. Then, if there exist a P = P^T > 0 and diagonal matrices Γ, Ω, T, Φ such that the matrix inequality

    Y + k1 k3 Z Z^T < 0                                      (20)

is satisfied, system (17) is dissipative with respect to supply rate (15) and storage function (16).

Proof: The outline of the proof is similar to the proof of Lemma 1. We investigate under what condition V̇ - s(w, e) ≤ 0 holds. Using the S-procedure [5] and the inequalities from the sector conditions of the nonlinearities, checking dissipativity yields the quadratic form ξ^T (Y + k1 Z Λ Z^T) ξ ≤ 0 with ξ = [p; σ1(z); σ2(ζ); w] and

    Λ = [0        dσ2/dζ   0;
         dσ2/dζ   0        0;
         0        0        I]   ≤  k3 I.

Multiplication from the left and right with the full-rank matrix Z gives Z Λ Z^T ≤ k3 Z Z^T. The quadratic form has to be negative for all nonzero ξ, which is satisfied if (20) holds.

Remark: The notions of dissipativity with finite L2-gain and storage function are the same as in the context of nonlinear H∞ control. However, the condition derived here is only sufficient, due to the storage function, which is not a general positive definite function but takes a form similar to a Lur'e-Postnikov Lyapunov function. Hence, in this sense, loss of dissipativity has a similar meaning as loss of absolute stability for the Lur'e-Postnikov Lyapunov function in Lemma 1 when the matrix inequality is not satisfied.

VI. NONLINEAR H∞ CONTROL FOR RECURRENT NEURAL NETWORKS AND MODIFIED DYNAMIC BACKPROPAGATION

Considering a supply rate with finite L2-gain, the condition of Lemma 2 can be employed for nonlinear H∞ control in order to design one of the controllers (3), based on the recurrent neural network model (2). Additional linear filters could be taken into account for the control scheme in standard plant form. The nonlinear H∞ optimal control problem is then formulated as

    min_{θ_c, P, Ω, Γ, T, Φ, λ}  γ    s.t.  Y + k1 k3 Z Z^T < 0    (21)

where θ_c denotes the controller parameter vector (related to C1, C2, or C3) and the matrix inequality from Lemma 2 is taken into account with Q11 = γ² I, Q22 = -I, Q12 = 0 in (15). One seeks the minimal disturbance attenuation level γ such that the matrix inequality is satisfied. The resulting nonlinear optimization problem is nonconvex and possibly nondifferentiable (when the two largest eigenvalues of the matrix inequality coincide) [17]. The matrix inequality from Lemma 1 can be used to impose global asymptotic stability of the closed-loop scheme for Narendra's dynamic backpropagation [15]. Controller design using dynamic backpropagation means that the neural controller is trained by optimizing on one or a set of specific reference inputs. This may lead to instabilities of the control scheme. One can overcome this problem by the modified dynamic backpropagation scheme

    min_{θ_c, P, Ω, Γ, T, Φ, λ}  J_track(θ_c)    s.t.  Y + k1 k3 Z Z^T < 0    (22)

where ŷ(t; θ_c) is the output of the neural control scheme, J_track(θ_c) is a cost function for the tracking error, defined on a given specific reference input d(t), and t_f is a finite time horizon. The matrix inequality constraint can be related to Lemma 1 as well as to Lemma 2 for imposing, respectively, global asymptotic stability or I/O stability with a fixed disturbance attenuation level γ*. Such methods have been successfully applied to discrete-time recurrent neural networks using NLq theory in [19].
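For a fixed controller, the search for the minimal attenuation level in a problem like (21) can be organized as a bisection on γ over a feasibility test. The sketch below is a toy stand-in (hypothetical data; the scalar matrix W mimics the Q11 = γ² I block, and the simple eigenvalue test replaces the full matrix inequality of Lemma 2) that illustrates the idea:

```python
import numpy as np

def feasible(gamma, Y0, Z, W, k1, k3):
    """Stand-in feasibility test: Y0 - gamma^2 * W + k1*k3*Z Z^T < 0.

    The attenuation level enters through -gamma^2 * W (W >= 0), so feasibility
    is monotone increasing in gamma, which makes bisection applicable."""
    S = Y0 - gamma**2 * W + k1 * k3 * (Z @ Z.T)
    S = 0.5 * (S + S.T)  # symmetrize against round-off
    return bool(np.max(np.linalg.eigvalsh(S)) < 0.0)

def minimal_gamma(Y0, Z, W, k1, k3, lo=0.0, hi=100.0, iters=60):
    """Bisect for the smallest feasible gamma in [lo, hi]; hi must be feasible."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if feasible(mid, Y0, Z, W, k1, k3) else (mid, hi)
    return hi  # the feasible endpoint of the final interval

# Illustrative data only.
rng = np.random.default_rng(2)
n = 4
M = rng.normal(size=(n, n))
Y0 = M @ M.T                    # positive part to be dominated by -gamma^2 * W
Z = 0.1 * rng.normal(size=(n, 2))
W = np.eye(n)
g = minimal_gamma(Y0, Z, W, k1=1.0, k3=1.0)
print(g)
```

The published problem is nonconvex in the controller parameters, so in practice such a bisection would be nested inside (or alternated with) the optimization over θ_c and the multiplier matrices.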

VII. CONCLUSION

In this paper absolute stability and dissipativity of continuous-time recurrent neural networks with two hidden layers have been studied. These types of models occur when one considers nonlinear models and controllers that are parameterized by multilayer perceptrons with one hidden layer. For the autonomous case a classical Lur'e system representation and a Lur'e system with multilayer perceptron nonlinearity are given. Sufficient conditions for absolute stability and dissipativity have been derived from a Lur'e-Postnikov Lyapunov function and a storage function of the same form. The criteria are expressed as matrix inequalities. They can be employed in order to impose closed-loop stability in Narendra's dynamic backpropagation procedure and for nonlinear H∞ control.

REFERENCES

[1] F. Albertini and E. D. Sontag, "For neural networks, function determines form," Neural Networks, vol. 6, pp. 975-990, 1993.
[2] ——, "State observability in recurrent neural networks," Syst. Contr. Lett., vol. 22, pp. 235-244, 1994.
[3] J. A. Ball, J. W. Helton, and M. L. Walker, "H∞ control for nonlinear systems with output feedback," IEEE Trans. Automat. Contr., vol. 38, pp. 546-559, 1993.
[4] A. R. Barron, "Universal approximation bounds for superposition of a sigmoidal function," IEEE Trans. Inform. Theory, vol. 39, no. 3, pp. 930-945, 1993.
[5] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, Studies in Applied Mathematics, vol. 15. Philadelphia, PA: SIAM, 1994.
[6] S. Haykin, Neural Networks: A Comprehensive Foundation. Englewood Cliffs, NJ: Macmillan, 1994.
[7] D. J. Hill and P. J. Moylan, "Connections between finite-gain and asymptotic stability," IEEE Trans. Automat. Contr., vol. AC-25, no. 5, pp. 931-936, 1980.
[8] ——, "The stability of nonlinear dissipative systems," IEEE Trans. Automat. Contr., vol. AC-21, pp. 708-711, 1976.
[9] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge, U.K.: Cambridge Univ. Press, 1985.
[10] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, pp. 359-366, 1989.
[11] A. Isidori and A. Astolfi, "Disturbance attenuation and H∞ control via measurement feedback in nonlinear systems," IEEE Trans. Automat. Contr., vol. 37, pp. 1283-1293, 1992.
[12] A. Isidori and W. Kang, "H∞ control via measurement feedback for general nonlinear systems," IEEE Trans. Automat. Contr., vol. 40, pp. 466-472, 1995.
[13] H. K. Khalil, Nonlinear Systems. New York: Macmillan, 1992.
[14] M. Leshno, V. Y. Lin, A. Pinkus, and S. Schocken, "Multilayer feedforward networks with a nonpolynomial activation function can approximate any function," Neural Networks, vol. 6, pp. 861-867, 1993.
[15] K. S. Narendra and K. Parthasarathy, "Gradient methods for the optimization of dynamical systems containing neural networks," IEEE Trans. Neural Networks, vol. 2, no. 2, pp. 252-262, 1991.
[16] K. S. Narendra and J. H. Taylor, Frequency Domain Criteria for Absolute Stability. New York: Academic, 1973.
[17] E. Polak and Y. Wardi, "Nondifferentiable optimization algorithm for designing control systems having singular value inequalities," Automatica, vol. 18, no. 3, pp. 267-283, 1982.
[18] E. D. Sontag and H. Sussmann, "Complete controllability of continuous-time recurrent neural networks," Syst. Contr. Lett., vol. 30, pp. 177-183, 1997.
[19] J. A. K. Suykens, J. P. L. Vandewalle, and B. L. R. De Moor, Artificial Neural Networks for Modeling and Control of Non-Linear Systems. Boston, MA: Kluwer, 1995.
[20] J. A. K. Suykens, B. De Moor, and J. Vandewalle, "Nonlinear system identification using neural state space models, applicable to robust control design," Int. J. Contr., vol. 62, no. 1, pp. 129-152, 1995.
[21] ——, "NLq theory: A neural control framework with global asymptotic stability criteria," Neural Networks, vol. 10, no. 4, pp. 615-637, 1997.
[22] A. J. van der Schaft, "A state-space approach to nonlinear H∞ control," Syst. Contr. Lett., vol. 16, pp. 1-8, 1991.
[23] ——, "L2-gain analysis of nonlinear systems and nonlinear state feedback H∞ control," IEEE Trans. Automat. Contr., vol. 37, pp. 770-784, 1992.
[24] H. Verrelst, K. Van Acker, J. Suykens, B. Motmans, B. De Moor, and J. Vandewalle, "Application of NLq neural control theory to a ball and beam system," European J. Contr., vol. 4, no. 2, pp. 148-157, 1998.
[25] M. Vidyasagar, Nonlinear Systems Analysis. Englewood Cliffs, NJ: Prentice-Hall, 1993.
[26] J. C. Willems, "Dissipative dynamical systems I: General theory. II: Linear systems with quadratic supply rates," Archive for Rational Mechanics and Analysis, vol. 45, pp. 321-343, 1972.
[27] C.-F. Yung, Y.-P. Lin, and F.-B. Yeh, "A family of nonlinear H∞ output feedback controllers," IEEE Trans. Automat. Contr., vol. 41, pp. 232-236, 1996.
[28] J. M. Zurada, Introduction to Artificial Neural Systems. West, 1992.


Noninteracting Control via Static Measurement Feedback for Nonlinear Systems with Relative Degree

S. Battilotti

Abstract: In this paper the authors give a necessary and sufficient geometric condition for achieving noninteraction via static measurement feedback for nonlinear systems with vector relative degree. Their analysis relies on the theory of connections and as a result gives systematic procedures for constructing a decoupling feedback law.

Index Terms: Measurement feedback, noninteracting control.

I. THE CLASS OF SYSTEMS AND CONTROL LAWS

Let us consider the affine nonlinear systems of the form

    ẋ = f(x) + Σ_{j=1}^m g_j(x) u_j
    y_i = h_i(x),   i = 1, ..., m
    z = k(x)                                                 (1)

where x ∈ M, a smooth (Hausdorff) manifold, the u_i's are input functions from a suitable function space (e.g., measurable R-valued functions defined on closed intervals of the form [0, T]), the y_i's are the R-valued output functions, and z ∈ R^s is the vector of

Manuscript received September 23, 1997. Recommended by Associate Editor, A. J. van der Schaft. The author is with the Dipartimento di Informatica e Sistemistica, 00184 Roma, Italy. Publisher Item Identifier S 0018-9286(99)02087-5.
