Proceedings of the 7th Mediterranean Conference on Control and Automation (MED99) Haifa, Israel - June 28-30, 1999

Nonlinear Identification of Automobile Vibration Dynamics ∗
David T. Westwick† Koshy George‡ Michel Verhaegen§
Systems and Control Engineering Group, Faculty of Information Technology and Systems, Delft University of Technology

Abstract The identification of nonlinear state-space systems from input-output measurements is considered. The system is separated into a linear state-space system with a static nonlinearity, driven by the state and input, in feedback. Initially, the contribution from the nonlinearity is treated as an unknown system input driving an otherwise linear plant. A neural network is then used to model the feedback nonlinearity. A realistic simulation of a nonlinear automobile suspension is used to demonstrate the application of the identification algorithm.

1 Introduction
Fatigue and vibration testing of automobiles is often performed using a 4-post “test-rig” (De Cuyper et al., 1998), where the test vehicle sits on 4 hydraulic actuators which are used to replicate motions recorded during a previous test-drive. Typically, recordings are made of the axle and chassis accelerations and suspension displacement at each of the 4 corners of the vehicle. The hydraulic actuators are then driven such that the suspension displacements, and axle and chassis accelerations track those measured in the field. Thus, realistic body motions can be reproduced under controlled conditions, and, for fatigue testing, over extended periods of time. The current solution to this “mission reproduction” involves an iterative, off-line procedure. First, an identification experiment is performed by exciting the system with a relatively broad-band noise input. A linear Frequency Response Function (FRF, essentially a non-parametric transfer function), is estimated from the test data. The target outputs are then filtered using the (frequency by frequency) inverse of the FRF, producing an initial input sequence. If, as is often the case, the initial input sequence does not cause the system to track the target outputs adequately, the inputs are refined. The inverse FRF is applied to the tracking error, and the result is added to the test input. This off-line refinement process iterates until sufficient tracking accuracy is obtained. The overall goal is to replace this off-line iterative tuning process with an online controller. The currently envisioned controller will include a feedback component, for disturbance rejection, and a feedforward (i.e. system inversion) element. Clearly, the design of the feed-forward element will require an accurate model of the car/test-rig system, which includes any significant nonlinearities present in the

∗ This research was supported by the EC Brite-Euram programme through project nr. BE-97-4186 (SCOOP).


system. Since this model will depend on the characteristics of the particular car attached to the test rig, it will have to be identified from experimental data. Since the model will be incorporated into an online controller, a relatively small, parametric form is desirable. Furthermore, since it will be incorporated into a feed-forward controller, it must be capable of performing "free run" simulations, rather than simply generating one-step-ahead predictions. Ideally, the model should be sufficiently accurate to be used without further refinement. In this paper, we will consider the identification of such a car/test-rig model.

1.1 Nonlinear State Space Models

Consider the following nonlinear state-space model,

    x_{k+1} = F(x_k, u_k)                        (1)
    y_k = C x_k + D u_k + v_k                    (2)

where x_k ∈ IR^n is the state, u_k ∈ IR^m is the input, and y_k ∈ IR^p is the output. The measurement noise, v_k ∈ IR^p, is assumed to be a white-noise sequence that is independent of the input, u_k, and the initial state, x_0. With the exception of the (linear) direct transmission term, this is exactly the nonlinear state-space model considered by Haykin (1999). In its general form, the nonlinear state-space model can be used to represent a wide variety of nonlinear systems, ranging from the voltages and currents in a nerve cell (Hodgkin and Huxley, 1952) to stiction and friction forces in a robot arm (Kim et al., 1997).

While a model with the structure (1 - 2) can be trained using a suitably designed iterative optimization, such as the back-propagation-through-time algorithm, the resulting error surface may be very complex, including many suboptimal local minima, as is the case with input-output models such as the NARMAX model class (Chen and Billings, 1989). Our goal is to develop a non-iterative procedure for the identification of nonlinear state-space models, or at least to limit the identification to well-defined optimizations with relatively simple cost-functions. Even where an iterative optimization remains necessary, the results may be used to initialize the optimization with a nearly optimal model, thus minimizing the possibility of converging to a sub-optimal local minimum. We note that similar approaches have been used in the design of nonlinear observers (Kim et al., 1997; Zhang et al., 1998) for a restricted class of SISO nonlinear systems, albeit in a neural-network setting. In this paper, we will develop an algorithm for identifying the dynamics of a subset of this class of models, the restrictions on the sub-class being determined by several of the procedures used in the identification.

2 Identification of a Class of Nonlinear State Space Systems

In this section, we describe the identification approach, and present the theoretical foundations for the procedures that are employed.

2.1 Linear/Nonlinear Decomposition

First, we extract a linear subsystem from the nonlinear state-space model. Let A and B be constant matrices of appropriate dimensions. Then, we may write (1 - 2) as

    x_{k+1} = A x_k + B u_k + F̃(x_k, u_k, A, B)        (3)
    y_k = C x_k + D u_k + v_k
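As a concrete illustration of the decomposition in (3), the following sketch (a hypothetical scalar system with an assumed cubic nonlinearity, not the paper's test-rig model) shows that any choice of A and B simply shifts the remainder into F̃, leaving the state trajectory unchanged:

```python
import numpy as np

# Hypothetical scalar system x_{k+1} = F(x_k, u_k); the cubic term and the
# particular (A, B) below are illustrative assumptions, not the paper's model.
def F(x, u):
    return 0.9 * x + u - 0.5 * x**3

A, B = 0.9, 1.0  # one (essentially arbitrary) choice of linear part

def F_tilde(x, u):
    # Residual nonlinearity of Eq. (3): F(x, u) - A*x - B*u
    return F(x, u) - A * x - B * u

rng = np.random.default_rng(0)
x = 0.0
for u in rng.normal(scale=0.1, size=200):
    x_next = A * x + B * u + F_tilde(x, u)  # decomposed form, Eq. (3)
    assert np.isclose(x_next, F(x, u))      # identical to the original F
    x = x_next
```

Here any nonlinear remainder, however it is split, is absorbed into F̃; the next subsection fixes one particular, statistically convenient split.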

Clearly, F̃(x_k, u_k, A, B) = F(x_k, u_k) − A x_k − B u_k, so that the choice of A and B is in some sense arbitrary. In the sequel, we will focus on one particular choice for A and B. Compute the RQ factorization of a matrix whose k'th column, Z_k, is a stack consisting of the input, u_k, the state, x_k, and the subsequent state, x_{k+1}:

    [ u_1 u_2 ... u_{N−1} ]   [ r11   0    0  ] [ q1 ]
    [ x_1 x_2 ... x_{N−1} ] = [ r21  r22   0  ] [ q2 ]        (4)
    [ x_2 x_3 ... x_N     ]   [ r31  r32  r33 ] [ q3 ]

By construction, q3 is orthogonal to the current input and state sequences. From the factorization, we have

    x_{k+1} = r32 r22^{−1} x_k + (r31 − r32 r22^{−1} r21) r11^{−1} u_k + r33 q3

Hence, by choosing

    A = r32 r22^{−1}
    B = (r31 − r32 r22^{−1} r21) r11^{−1}

in (3), we see that the contribution of the term F̃(x_k, u_k, A, B) = r33 q3 will be uncorrelated with (but clearly not independent of) the input and state. Thus, we can re-express the nonlinear state-space model (1 - 2) as a linear system with an auxiliary input,

    x_{k+1} = A x_k + B u_k + B̃ f_k        (5)
    y_k = C x_k + D u_k + v_k

where B̃ f_k is the k'th column of r33 q3.

[Figure 1: Linear/Nonlinear decomposition of a nonlinear state-space model. The linear system (A, B, C, D) maps u_k and f_k to y_k, with the static nonlinearity F̃ in feedback from the state and input.]

2.2 Overview of the Identification Approach

Equation (5) can be viewed as a multiple-input linear system, between [u_k^T f_k^T]^T and y_k. The first m inputs, u_k, are the measured system inputs. The second group of inputs, f_k, the "auxiliary" inputs, are generated by the nonlinearities in the system. In Section 2.3, subspace methods will be used to identify
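Numerically, this particular choice of (A, B) coincides with the least-squares regression of x_{k+1} on x_k and u_k; the RQ factorization is simply a numerically robust way of computing it. A sketch of this equivalence, using numpy's `lstsq` in place of an explicit RQ step, on an assumed toy system (the system coefficients here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
u = rng.normal(size=N)
x = np.zeros(N)
for k in range(N - 1):                     # toy nonlinear system (assumed)
    x[k + 1] = 0.8 * x[k] + 0.5 * u[k] + 0.1 * np.tanh(x[k])

Z = np.vstack([x[:-1], u[:-1]]).T          # regressors [x_k, u_k]
theta, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)
A_hat, B_hat = theta                       # plays the role of (A, B) in Eq. (3)

resid = x[1:] - Z @ theta                  # plays the role of r33*q3
# By construction, the residual is orthogonal to the state and input:
assert abs(resid @ x[:-1]) < 1e-8 * N
assert abs(resid @ u[:-1]) < 1e-8 * N
```

The residual here is uncorrelated with, but not independent of, the state, exactly as noted in the text for r33 q3.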

the linear dynamics, treating the "auxiliary" inputs as if they were process noise. This will be shown to lead to a bias in the identified linear dynamics. In Section 2.4, the conditions necessary for f_k and B̃ to be estimated from the linear residuals will be investigated. Of critical importance is the requirement that the auxiliary input be a static nonlinear function of the current system state and input. Section 2.5 will present a robust method for system inversion, which will be used to reconstruct the auxiliary input, f_k, as well as a procedure for estimating B̃. In Section 2.6, a neural network will be used to fit a static nonlinearity between the system state and input, and the auxiliary input. The overall algorithm will be summarized in Section 2.7.

2.3 Identification of the Linearized Model

Given records of u_k and y_k, one might attempt to estimate the quadruple (A, B, C, D) using linear system identification techniques. Direct, non-iterative estimation of the system matrices can be accomplished using subspace methods (Verhaegen and DeWilde, 1992; Verhaegen, 1993, 1994; Viberg, 1995). Consider the MOESP family of subspace methods for identification (Verhaegen and DeWilde, 1992; Verhaegen, 1993, 1994). First, the input and output are placed in Hankel matrices,

                [ u_1    u_2      ...  u_{N−s+1} ]
                [ u_2    u_3      ...  u_{N−s+2} ]
    U_{1,s,N} = [ ...                            ]        (6)
                [ u_s    u_{s+1}  ...  u_N       ]

where the subscripts refer to the index of the first data sample, the number of rows, s, in the matrix, and the index of the final data sample in the matrix. The number of rows, s, is a user-specified over-dimensioning parameter which is chosen to be greater than the system order, n. The output Hankel matrix, Y_{1,s,N}, is defined analogously. Then we observe that, for a linear system, we could write

    Y = Γ X + H U + V        (7)

where V is a Hankel matrix constructed from the output noise sequence, v_k, Γ is the extended observability matrix, defined as

    Γ = [ C^T  (CA)^T  ...  (CA^{s−1})^T ]^T        (8)

the matrix X = [ x_1 x_2 ... x_{N−s+1} ], and H is a lower block-triangular matrix containing the system's Markov parameters,

        [ D            0            ...  0   0 ]
        [ CB           D            ...  0   0 ]
    H = [ CAB          CB           ...  0   0 ]        (9)
        [ ...                                  ]
        [ CA^{s−2}B    CA^{s−3}B    ...  CB  D ]

The basic subspace schemes are unable to cope with either measurement noise or process noise, and are hence not suitable for this identification. Verhaegen (1993, 1994) has proposed schemes that use instrumental variables to remove the bias caused by measurement and process noise. However, even these schemes may be biased by the presence of the state nonlinearity.
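The Hankel construction in (6) can be sketched as follows (a hypothetical helper, shown for a scalar input sequence; for m inputs, each entry becomes an m-dimensional block):

```python
import numpy as np

def hankel_block(w, s):
    """Hankel matrix U_{1,s,N} of Eq. (6): row i holds w_{i+1} ... w_{N-s+i+1}."""
    w = np.asarray(w)
    N = len(w)
    return np.array([w[i:N - s + 1 + i] for i in range(s)])

u = np.arange(1, 11)          # u_1 ... u_10 (toy scalar input)
U = hankel_block(u, s=3)
assert U.shape == (3, 8)
assert U[0, 0] == 1 and U[1, 0] == 2 and U[2, 7] == 10
# Hankel (anti-diagonal) structure: U[i, j] depends only on i + j
assert U[0, 1] == U[1, 0]
```

The same helper applied to the outputs yields Y_{1,s,N}.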

For the nonlinear system (5),

    Y = Γ X + H U + H̃ F + V        (10)

where F is a Hankel matrix constructed from samples of the auxiliary input, f_k, and H̃ is defined analogously to (9), replacing B with B̃ and D with 0. Consider the following RQ factorization,

    [ U_{s+1,s,N} ]   [ R11   0    0  ] [ Q1 ]
    [ U_{1,s,N}   ] = [ R21  R22   0  ] [ Q2 ]        (11)
    [ Y_{s+1,s,N} ]   [ R31  R32  R33 ] [ Q3 ]

The usual subspace procedure (Verhaegen, 1993) is then followed. Note that the factor R32 = Y_{s+1,s,N} Q2^T is given by

    R32 = Γ X Q2^T + H̃ F Q2^T + V Q2^T        (12)

and that V Q2^T will vanish as N → ∞, since the measurement noise is assumed to be independent of the input. For a linear system, F would be zero, and R32 would contain an unbiased estimate of the column space of Γ. Thus, Γ is extracted from R32 using a SVD, and then used to construct estimates of A and C. The remaining matrices, B and D, can be obtained from a least-squares regression (Westwick and Verhaegen, 1996). Although using past inputs as instrumental variables removes the bias caused by measurement noise (and process noise, since it is independent of the input), they do not remove the bias due to the nonlinearity: the term H̃ F Q2^T cannot, in general, be expected to vanish.

Remark 1 To minimize the bias due to the nonlinearity, we must ensure that f_k is a zero-mean sequence. This can easily be accomplished by adding a constant row to the top of the data matrix (11).

Remark 2 Once an estimate, f̂_k, of the auxiliary input is available, the linear identification may be repeated using the extended input [u_k^T f̂_k^T]^T, to reduce the bias.

2.4 Tracking the Contribution due to the Nonlinearity

Once an estimate of the linear system is available, the next task is to estimate the "auxiliary" signal generated by the nonlinearity, which so far has been treated as a process disturbance. Consider the output of the identified linear model,

    x_{l,k+1} = A x_{l,k} + B u_k        (13)
    y_{l,k} = C x_{l,k} + D u_k

where the subscript l indicates the state and output of the linearized model. The linear residual sequence is

    z_{f,k} = y_k − y_{l,k} = y_{f,k} + v_k        (14)

Our task is to generate an input sequence, f̂_k, such that the output, ŷ_{f,k}, of the system

    x̂_{f,k+1} = A x̂_{f,k} + B̃ f̂_k        (15)
    ŷ_{f,k} = C x̂_{f,k}

tracks the nonlinear output component, y_{f,k}. First, given the noise-corrupted measurements z_{f,k}, we must determine the conditions under which the system (15) can track the ideal residual sequence.
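Computing the residual sequence of (13) - (14) amounts to a free-run simulation of the identified linear model; a minimal sketch, with assumed system matrices:

```python
import numpy as np

def linear_residuals(A, B, C, D, u, y, x0=None):
    """Simulate the identified linear model (13) and return the residual
    sequence z_{f,k} = y_k - y_{l,k} of Eq. (14)."""
    n = A.shape[0]
    x = np.zeros(n) if x0 is None else x0
    y_lin = np.empty_like(y)
    for k in range(len(u)):
        y_lin[k] = C @ x + D @ u[k]
        x = A @ x + B @ u[k]
    return y - y_lin

# toy check: if the data really came from the linear model, residuals vanish
A = np.array([[0.5]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[0.0]])
u = np.ones((4, 1))
y = np.array([[0.0], [1.0], [1.5], [1.75]])   # exact response from x0 = 0
assert np.allclose(linear_residuals(A, B, C, D, u, y), 0.0)
```

For the nonlinear system, the residuals returned by this routine are the targets z_{f,k} that the SDI procedure of Sec. 2.5 must track.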

Definition 1 A sequence {ϑ_i}_{i=0}^N in IR^p is said to be output trackable if there exists an input sequence {f̂_k}_{k=0}^N such that the output sequence {ŷ_{f,k}}_{k=0}^N of the system (15) satisfies ||ϑ − Y||_2 = 0, where

    ϑ = [ ϑ_0 ϑ_1 ... ϑ_N ],    Y = [ ŷ_{f,0} ŷ_{f,1} ... ŷ_{f,N} ]

For the system (15), let

        [ 0             0             ...  0 ]
        [ CB̃            0             ...  0 ]
    H̃ = [ CAB̃           CB̃            ...  0 ]        (16)
        [ ...                                ]
        [ CA^{N−1}B̃     CA^{N−2}B̃     ...  0 ]

Lemma 3 An arbitrary sequence {ϑ_i}_{i=0}^N in IR^p is output trackable if, and only if, ϑ ∈ Im H̃.

The proof of this result readily follows from the definitions of output trackability and H̃. Furthermore, consider a square plant (B̃ ∈ IR^{n×p}). If c_i denotes the i'th row of C, then the system is said to have a well-defined relative degree r = [ r_1 r_2 ... r_p ] if

    c_i A^l B̃ = 0,  ∀ l < r_i − 1,  1 ≤ i ≤ p

and the matrix

    [ c_1 A^{r_1−1} B̃ ]
    [ c_2 A^{r_2−1} B̃ ]
    [       ...       ]
    [ c_p A^{r_p−1} B̃ ]

is nonsingular.

Corollary 3.1 For a square system with a well-defined relative degree r, an arbitrary sequence {ϑ_i}_{i=0}^N in IR^p is output trackable if, and only if, ϑ_{j,i} = 0 for 0 ≤ i ≤ r_j − 1, 1 ≤ j ≤ p.

Thus, given a system with p outputs, we can uniquely identify the contributions due to at most p − 1 nonlinearities in the state. For such a system, there will exist a p-dimensional sequence f_k that results in perfect tracking of z_{f,k}. Note, however, that B̃ has not yet been estimated; since both B̃ and f_k are unknown, tracking ability alone is not sufficient to guarantee their correct identification. For an arbitrary B̃ ∈ IR^{n×p} (provided that it results in a system with well-defined relative degree r = 1), B̃ can be determined by minimizing the tracking error,

    B̃ = arg min || (I − H̃_T H̃_T^†) Y_T ||_2        (17)

where the subscript T indicates that the matrix has been truncated by removing its top row. Since both B̃ and f_k are unknown, this is a separable least squares optimization (Golub and Pereyra, 1973; Ruhe and Wedin, 1980). While the computations are burdensome, we note that fast orthogonalization techniques (Korenberg, 1988) can be used to reduce the computational and storage requirements dramatically.
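The relative-degree condition of Corollary 3.1 can be checked numerically. The following sketch (toy matrices assumed) returns each r_i, the smallest r with c_i A^{r−1} B̃ ≠ 0, together with the decoupling matrix whose nonsingularity is required:

```python
import numpy as np

def relative_degree(A, B, C, tol=1e-12):
    """Per-output relative degrees r_i: smallest r with c_i A^(r-1) B != 0,
    plus the stacked decoupling matrix whose nonsingularity is required."""
    p, n = C.shape
    r, rows = [], []
    for i in range(p):
        Ak = np.eye(n)
        for l in range(1, n + 1):
            row = C[i] @ Ak @ B          # c_i A^(l-1) B
            if np.linalg.norm(row) > tol:
                r.append(l)
                rows.append(row)
                break
            Ak = Ak @ A
        else:
            raise ValueError(f"output {i}: relative degree not defined")
    return r, np.vstack(rows)

# toy square system (assumed, for illustration only)
A = np.array([[0.0, 1.0], [-0.1, 0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
r, M = relative_degree(A, B, C)
assert r == [2]                          # c1 B = 0, c1 A B = 1 != 0
assert abs(np.linalg.det(M)) > 0
```

A singular decoupling matrix (or a row c_i A^l B̃ that never becomes nonzero) signals that the trackability conditions above fail for that output.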

2.5 Stable Dynamic Inversion (SDI)

Given an estimate of B̃, we now describe an approach that inverts the model of the plant whilst overcoming the main drawbacks of existing techniques, namely, the handling of non-minimum phase zeros, or zeros on the unit circle, and of noise in the desired signal. Recall that the linear residuals, z_{f,k}, are assumed to originate from the system (15), and that we have measurements,

    z_{f,k} = y_{f,k} + v_k

that contain an additive white noise process, v_k, with covariance R_v, that is independent of the input and initial condition. Since the desired signal is the linear residual, its SNR will be much lower than that of the raw measurements. Thus, noise performance is of utmost importance in this application.

Problem 4 Given the above state-space model of a plant, and a target time history {z_{f,k}}_{k=1}^N, determine {f̂_k}_{k=1}^N such that ŷ_{f,k} ≈ z_{f,k}, k = 1, ..., N, where ŷ_{f,k} is the response of the system (15) to the input f̂_k.

This is illustrated in Fig. 2, in which the 'System' is represented by eqn. (15). Our objective is to generate a desired input sequence f̂_k by obtaining a suitable 'inverse system' that relates this input sequence to the given measurement sequence z_{f,k}.

[Figure 2: Stable dynamic model inversion. The measurements z_{f,k} drive the 'Inverse System', whose output f̂_k drives the 'System' (15), producing the output ŷ_{f,k}.]

The SDI technique is based on augmenting the given state-space model (15) by a reasonable model for the input sequence, and then designing a Kalman filter to provide an estimate of the input sequence from the measurements z_{f,k}. From intuitive and physical reasoning, it seems realistic to model the input signal f̂_k as a random walk,

    f̂_{k+1} = f̂_k + η_k        (18)

for some η_k. For simplicity, we assume that η_k is white noise with covariance Q_η, uncorrelated with v_k. The resulting augmented system is as follows:

    [ x_{k+1} ]   [ A  B̃ ] [ x_k ]   [ 0 ]
    [ f_{k+1} ] = [ 0  I  ] [ f_k ] + [ I ] η_k        (19)

    z_k = [ C  0 ] [ x_k^T  f_k^T ]^T + v_k        (20)

In compact form, we have

    x_{a,k+1} = A_a x_{a,k} + G_a w_{a,k}
    z_k = C_a x_{a,k} + H_a w_{a,k}

where the definitions of the individual matrices and vectors are rather obvious, with w_{a,k} = [v_k^T η_k^T]^T. The following result shows that if the original system is observable, and has no zeros at z = 1, then the augmented system is observable; the proof readily follows from the definitions of observability and of the zeros of a system.

Lemma 5 Suppose the pair (C, A) is observable. The pair

    ( [ C  0 ],  [ A  B̃ ; 0  I ] )

is observable if, and only if, z = 1 is not a zero of the system (A, B̃, C, 0).

Evidently, the augmented system will be observable provided B̃ is chosen such that the LTI quadruple (A, B̃, C, 0) has no zeros at z = 1. Under the conditions for observability of the augmented system given in Lemma 5, we can set up a Kalman filter to estimate the input signal:

    x̂_{a,k+1|k} = A_a x̂_{a,k|k−1} − K ( C_a x̂_{a,k|k−1} − z_k )

where K = [K_1^T K_2^T]^T = A_a P C_a^T ( C_a P C_a^T + R_v )^{−1}, and P satisfies the following Riccati equation:

    P = A_a P A_a^T − A_a P C_a^T ( C_a P C_a^T + R_v )^{−1} C_a P A_a^T + [ 0  0 ; 0  Q_η ]

Here, the Kalman gains K_1 and K_2 respectively correspond to the state x_k and the input f_k. We can easily see that f̂_{k+1|k} = [ 0  I ] x̂_{a,k+1|k}, and hence the transfer function from the measurements z_k to the estimate of the input, f̂_{k|k−1} (i.e., the inverse system), has the following realization:

    Inverse System = [ 0  I ] ( zI − [ A − K_1 C   B̃ ; −K_2 C   I ] )^{−1} K        (21)

We note that the poles of this inverse system (i.e., the eigenvalues of A_a − K C_a) are the relocated eigenvalues of the augmented system matrix A_a. Moreover, we have the following result on the zeros of the inverse system; its proof is rather straightforward from the definition of zeros:

Lemma 6 Every eigenvalue of the system matrix A is a zero of the inverse system (21).

Therefore, by suitably designing the Kalman filter, we can easily take into account the presence of noise in target time histories. We recall that the method provided in (Devasia et al., 1996) does not handle noise in target time histories, and that the states corresponding to the zero dynamics are poorly re-constructed in the technique presented in (Hou and Patton, 1998). Moreover, this technique is more general than the earlier schemes in that it does not restrict the presence of zeros to the stable and unstable regions in the strict sense (Devasia et al., 1996), or to the stable region (Hou and Patton, 1998); see also (George et al., 1999).
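A sketch of the SDI machinery of (19) - (21): build the augmented model, obtain the Kalman gain by iterating the Riccati equation to its fixed point, and read off the one-step-ahead input estimate f̂_{k|k−1} = [0 I] x̂_{a,k|k−1}. All numerical values below are assumed, for illustration only:

```python
import numpy as np

def sdi_gain(A, Bt, C, Qe, Rv, iters=500):
    """Steady-state Kalman gain for the augmented system (19)-(20):
    state [x; f], with f modeled as a random walk driven by eta ~ (0, Qe)."""
    n, p = Bt.shape
    Aa = np.block([[A, Bt], [np.zeros((p, n)), np.eye(p)]])
    Ca = np.hstack([C, np.zeros((C.shape[0], p))])
    Qa = np.zeros((n + p, n + p)); Qa[n:, n:] = Qe
    P = np.eye(n + p)
    for _ in range(iters):                 # iterate the Riccati equation
        S = Ca @ P @ Ca.T + Rv
        P = Aa @ P @ Aa.T - Aa @ P @ Ca.T @ np.linalg.solve(S, Ca @ P @ Aa.T) + Qa
    K = Aa @ P @ Ca.T @ np.linalg.inv(Ca @ P @ Ca.T + Rv)
    return Aa, Ca, K

def sdi_estimate(Aa, Ca, K, z, n):
    """Run the inverse system (21): f_hat_{k|k-1} = [0 I] x_a_{k|k-1}."""
    xa = np.zeros(Aa.shape[0])
    f_hat = []
    for zk in z:
        f_hat.append(xa[n:].copy())
        xa = Aa @ xa - K @ (Ca @ xa - zk)
    return np.array(f_hat)

# illustrative 1-state, 1-output plant with assumed noise covariances
A = np.array([[0.5]]); Bt = np.array([[1.0]]); C = np.array([[1.0]])
Aa, Ca, K = sdi_gain(A, Bt, C, Qe=np.eye(1) * 0.1, Rv=np.eye(1) * 0.01)
z = np.ones((200, 1)) * 0.2               # constant target residual history
f = sdi_estimate(Aa, Ca, K, z, n=1)
# steady state of (15) with constant f: y = f/(1 - 0.5), so f -> 0.1
assert abs(f[-1, 0] - 0.1) < 0.02
```

Because the random-walk model embeds an integrator in the filter, slowly varying auxiliary inputs are tracked with zero steady-state innovation, which is exactly the behaviour the text relies on.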

2.6 Identification of the Nonlinearity

Thus far, the linear dynamics of the separated model (3) have been identified, along with an estimate of the input matrix, B̃, and an auxiliary input sequence, f̂_k, which causes the system (A, B̃, C, D) to track the linear residuals. In order to use this model in "free run", we need to be able to estimate the contribution of the nonlinearity, f_k, given the current state, x_k, and input, u_k. First, the state of the system must be reconstructed, by simulating the system using the extended input [u_k^T f̂_k^T]^T. Then, a traditional feed-forward sigmoidal neural network can be fitted from the input and estimated state, [x̂_k^T u_k^T]^T, to f̂_k. This can be accomplished efficiently using the Levenberg-Marquardt optimization procedure in a separable least squares framework (Sjöberg and Viberg, 1997).

Note that, in a free run simulation, the sequence f_k will be generated by the neural network using the state, which it will, in turn, influence. Thus, re-estimating B̃ and the neural network, using the input, state, and auxiliary input resulting from a free run simulation, may be beneficial. Note that because f(x̂_k, u_k) is included in the extended input, this will also provide an updated estimate of B̃.

2.7 Algorithmic Summary

Given length N records of input-output data from a system with m inputs and p outputs, and an upper bound, s, on the system order, n, we require the following additional assumptions.

Assumptions:
1. The input, u_k, is persistently exciting of order at least 2s, where s is the over-dimension parameter used in the subspace identification. Note that s must be chosen to be larger than the system order, n.
2. The measurement noise is independent of the input and state.
3. The system contains at most p − 1 nonlinearities, where p is the number of outputs. Thus, it must be possible to find a representation of the state, x, such that at most p − 1 components of the state have non-constant derivatives with respect to x and u. Note that this limit can be increased to p if the system can be placed in a canonical form where B̃ is known a priori.

Algorithm:
1. Fit a linear system between u_k and y_k using subspace methods, where the past inputs are used as instrumental variables (Verhaegen, 1993; Haverkamp and Verhaegen, 1997).
2. Use SDI, as described in Sec. 2.5, to produce an auxiliary input sequence, f̂_k, which causes the system (A, B̃, C, D) to track the linear residuals, y_{f,k}.
3. Estimate B̃ and f_k by solving the separable least squares minimization (17).
4. Fit a feed-forward NN which generates the auxiliary input, f̂_k, from the current state and input, [x_k u_k].
5. If the model is satisfactory, stop. Otherwise, re-estimate the linear subsystem using the extended input, [u_k^T f(x̂_k, u_k)^T]^T, and continue from step 3.

3 Simulations

3.1 Benchmark Example: A Quarter Car

The quarter-car model is the simplest model that includes a proper representation of the problem of controlling wheel load variations. It contains no representation of the geometric effects of having four wheels (and a single

body). Thus, it cannot describe problems related to handling, and it offers no possibility of studying longitudinal interactions, or the use of front suspension state information to improve the performance at the rear. However, it does appear to contain the most basic features of the real problem, and gives rise to design thinking which accords with experience. Thus, to demonstrate the application of our identification algorithm on a manageable, but nonetheless realistic, example, we will use a continuous-time simulation of a quarter-car model.

The quarter car can be expressed by a two-mass model, as shown in Fig. 3. It contains two vertical degrees of freedom: the displacement of the unsprung mass (or axle), x_w, and the displacement of the sprung mass (or body), x_c. The road displacement input is denoted by x_b. The differential equations for a two degrees of freedom model of a quarter car are

    m_c ẍ_c = −F_c(x_c − x_w) − d_c(ẋ_c − ẋ_w)
    m_w ẍ_w = −c_w(x_w − x_b) + F_c(x_c − x_w) + d_c(ẋ_c − ẋ_w)
    F_c(x) = c_c x + α x³ + β tan(πx / 2x_max)

where the subscripts c and w respectively denote the body and the wheel of the car, m_c and m_w respectively represent the sprung mass and the unsprung mass of the car, and c_c and c_w are the suspension and tire stiffness, respectively.

[Figure 3: The quarter car model: the sprung mass m_c rides on the unsprung mass m_w through the suspension spring c_c and damper d_c, with the tire stiffness c_w between m_w and the road.]

The only nonlinearity in this system is provided by the suspension spring, whose characteristic is given by F_c. The suspension spring, c_c, is nonlinear since it stiffens as it is removed from its equilibrium position; note that the tangent term in F_c causes the suspension travel to be limited to x_max. The system outputs are the accelerations of the masses, ẍ_c and ẍ_w, and the suspension length, x_c − x_w.

The quarter-car model was driven by a band-limited (10 Hz) input, presented through a zero-order hold at 1000 Hz. Integration was performed using the fourth-order Runge-Kutta algorithm. The data were then re-sampled to 200 Hz for analysis. The algorithm outlined in Sec. 2.7 was then used to fit a nonlinear state-space model to the input-output data. The accuracy of the nonlinear state-space model, as well as that of the initial linear model, was validated by comparing the system and model responses to a separate input signal. Model accuracy was expressed as the "Percent Variance Accounted For" (% VAF).
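A free-run sketch of the quarter-car equations above, integrated with a fourth-order Runge-Kutta step and a zero-order-hold road input; the masses, stiffnesses, damping, and nonlinear coefficients are assumed values, not those used in the paper:

```python
import numpy as np

# Assumed parameters (illustrative only, not the paper's values)
mc, mw = 400.0, 40.0          # sprung / unsprung mass [kg]
cc, cw = 2.0e4, 2.0e5         # suspension / tire stiffness [N/m]
dc = 1.5e3                    # suspension damping [N s/m]
alpha, beta, xmax = 1.0e6, 50.0, 0.1

def Fc(x):
    # hardening suspension spring; the tan term limits travel to +/- xmax
    return cc * x + alpha * x**3 + beta * np.tan(np.pi * x / (2 * xmax))

def deriv(s, xb):
    xc, vc, xw, vw = s            # state: positions and velocities
    f = Fc(xc - xw) + dc * (vc - vw)
    return np.array([vc, -f / mc, vw, (-cw * (xw - xb) + f) / mw])

def rk4_step(s, xb, h):
    k1 = deriv(s, xb)
    k2 = deriv(s + h / 2 * k1, xb)
    k3 = deriv(s + h / 2 * k2, xb)
    k4 = deriv(s + h * k3, xb)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

h, s = 1e-3, np.zeros(4)                       # 1000 Hz zero-order hold
xb = 0.005 * np.sin(2 * np.pi * 2 * np.arange(2000) * h)   # 2 Hz road input
for xbk in xb:
    s = rk4_step(s, xbk, h)
    assert abs(s[0] - s[2]) < xmax             # travel stays inside the limit
```

The outputs of interest (the two accelerations and the suspension travel) are obtained from `deriv` and `s[0] - s[2]` at each step, then decimated to the 200 Hz analysis rate.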

    VAF = ( 1 − Σ_{k=1}^N (y_k − ŷ_k)² / Σ_{k=1}^N y_k² ) × 100

where ŷ_k is an estimate of the sequence y_k. The results of these free run simulations are summarized in Table 1.

    Signal                  | Linear Model | Nonlinear Model
    ------------------------|--------------|----------------
    Suspension Displacement |     75.0     |      95.6
    Chassis Acceleration    |     74.2     |      98.4
    Axle Acceleration       |     72.6     |      97.0

Table 1: Accuracy of the linear and non-linear state-space models obtained from free-run simulations on cross-validation data. Accuracies are expressed as % VAF.

Figure 4 shows a 1 second segment of the suspension displacement (due to the validation input), superimposed on the outputs of the linear and nonlinear models. Figures 5 and 6 show the chassis and axle accelerations, respectively, for the same 1 second segment of the validation trial. Note that the model outputs (both linear and nonlinear models) are "free run" simulations, not one-step-ahead predictions.

[Figure 4: One second segment of the suspension displacement from the cross-validation data set (solid line). The outputs of the linear model (dashed line) and nonlinear state-space model (dash-dotted line) are superimposed.]

It is interesting to note that there are segments in the data (t between 4.0 and 4.4 seconds in Figs 4 - 6, for example) where the linear model fails completely. However, there are other instances (t between 4.7 and 5.0 seconds) where the linear model performs extremely well. Thus, it is clear that the system is being occasionally driven out of its linear range.
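The % VAF figure of merit used in Table 1 can be sketched as:

```python
import numpy as np

def vaf(y, y_hat):
    """Percent Variance Accounted For, as defined in the text."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return (1.0 - np.sum((y - y_hat) ** 2) / np.sum(y ** 2)) * 100.0

y = np.array([1.0, -2.0, 3.0, -1.0])
assert vaf(y, y) == 100.0                       # perfect free-run prediction
assert vaf(y, np.zeros(4)) == 0.0               # trivial zero predictor
assert abs(vaf(y, 0.9 * y) - 99.0) < 1e-6       # 10% amplitude error -> ~99
```

Note that the metric is computed on the free-run simulated outputs, so modelling errors accumulate over the whole record rather than being reset at each step.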

[Figure 5: One second segment of the chassis acceleration from the cross-validation data set (solid line). The outputs of the linear model (dashed line) and nonlinear state-space model (dash-dotted line) are superimposed.]

[Figure 6: One second segment of the axle acceleration from the cross-validation data set (solid line). The outputs of the linear model (dashed line) and nonlinear state-space model (dash-dotted line) are also shown.]

4 Conclusions

We have developed a technique for identifying a restricted class of nonlinear state-space systems, a structure which can be used to model a wide variety of systems. The application considered in this research was the identification of the dynamics of a car attached to a vibration test-rig. Here, the suspension springs, which are designed to stiffen as they approach the limits of the suspension travel, are thought to provide the most significant nonlinearity in the system. In particular, it is a nonlinearity which affects the state directly, making the nonlinear state-space model an obvious model structure to employ. The identified model was able to reproduce validation data sets with much greater accuracy than is possible with a linear model. We must emphasize that these were free run simulations, and not simply one-step-ahead predictions.

At present, the key restriction on the model class is that the "number" of nonlinearities must be less than the number of outputs. This constraint can be expressed more precisely in a variety of ways: the rank of the matrix r33 in (4) must be strictly less than p, for example. This restriction is due to the necessity of simultaneously estimating both the input matrix for the nonlinearities, B̃ ∈ IR^{n×p}, and the corresponding auxiliary input sequence, f_k. Methods for determining B̃ when it is in IR^{n×p} are under development.

References

Chen, S., and S. Billings (1989). "Representation of non-linear systems: the NARMAX model." International Journal of Control, 49, no. 3, pp. 1013-1032.

De Cuyper, J., J. Swevers, P. Coppens, C. Liefooghe, and M. Verhaegen (1998). "Advanced drive file development methods for improved service load simulation on multi axial durability test rigs." In Proceedings of the Asian Conference on Acoustics and Vibration.

Devasia, S., D. Chen, and B. Paden (1996). "Nonlinear inversion-based output tracking." IEEE Transactions on Automatic Control, 41, no. 7, pp. 930-942.

George, K., M. Verhaegen, and J. Scherpen (1999). "Stable inversion of MIMO linear discrete time nonminimum phase systems." In The 7th IEEE Mediterranean Conference on Control and Automation, Haifa, Israel.

Golub, G., and V. Pereyra (1973). "The differentiation of pseudo-inverses and nonlinear least squares problems whose variables separate." SIAM Journal of Numerical Analysis, 10, pp. 413-432.

Haverkamp, B., and M. Verhaegen (1997). SMI Toolbox: State Space Model Identification Software for Multivariable Dynamical Systems, 1st edn. Delft University of Technology.

Haykin, S. (1999). Neural Networks: A Comprehensive Foundation, 2nd edn. Prentice Hall.

Hodgkin, A., and A. Huxley (1952). "A quantitative description of membrane current and its application to conduction and excitation in nerve." Journal of Physiology, 117, pp. 500-544.

Hou, M., and R. Patton (1998). "Optimal filtering for systems with unknown inputs." IEEE Transactions on Automatic Control, 43, no. 3, pp. 445-449.

Kim, Y., F. Lewis, and C. Abdallah (1997). "A dynamic recurrent neural-network-based adaptive observer for a class of nonlinear systems." Automatica, 33, no. 8, pp. 1539-1543.

Korenberg, M. (1988). "Identifying nonlinear difference equation and functional expansion representations: The fast orthogonal algorithm." Annals of Biomedical Engineering, 16, pp. 123-142.

Ruhe, A., and P. Wedin (1980). "Algorithms for separable nonlinear least squares problems." SIAM Review, 22, pp. 318-337.

Sjöberg, J., and M. Viberg (1997). "Separable non-linear least squares minimization - possible improvements for neural net fitting." In IEEE Workshop on Neural Networks for Signal Processing.

Verhaegen, M., and P. DeWilde (1992). "Subspace model identification part 1: The output-error state-space model identification class of algorithms." International Journal of Control, 56, no. 5, pp. 1187-1210.

Verhaegen, M. (1993). "Subspace model identification part 3: Analysis of the ordinary output-error state-space model identification algorithm." International Journal of Control, 58, pp. 555-586.

Verhaegen, M. (1994). "Identification of the deterministic part of MIMO state space models given in innovations form from input-output data." Automatica, 30, no. 1, pp. 61-74.

Viberg, M. (1995). "Subspace-based methods for the identification of linear time-invariant systems." Automatica, 31, no. 12, pp. 1835-1851.

Westwick, D., and M. Verhaegen (1996). "Identifying MIMO Wiener systems using subspace model identification methods." Signal Processing, 52, pp. 235-258.

Zhang, Chan, and Cheung (1998). "Nonlinear observer design with unknown nonlinearity via B-spline network approach." In Proceedings of the American Control Conference, pp. 2339-2343.
