
Proceedings of International Joint Conference on Neural Networks, Atlanta, Georgia, USA, June 14-19, 2009

Stable Fourier Neural Networks with Application to Modeling Lettuce Growth


Juan Jose Cordova, Wen Yu
parameter identification was demonstrated in [3] and became a hot issue in the 1980s. Several robust modification techniques were proposed in [6]. Neuro identification is black-box approximation: all uncertainties can be considered part of the black box, i.e., unmodeled dynamics lie inside the black-box model rather than appearing as structured uncertainties. Therefore the commonly used robustifying techniques are not necessary. Using passivity theory, we proved that for continuous-time recurrent neural networks, gradient descent algorithms without robust modification are stable and robust to any bounded uncertainties [20], and that for continuous-time identification they are also robustly stable [21]. Do discrete-time neural networks have similar characteristics? In this paper we give an answer. To the best of our knowledge, identification without robust modification via FoNN has not yet been established in the literature.

One of the most important advances in modern agriculture was the use of mathematical models to describe physiological behavior and plant growth [19]. The incorporation of such mathematical models in plant production has allowed the use of advanced control laws for the climate control of greenhouses [12]. In order to study lettuce growth inside greenhouses, a biomass behavior model is needed, which uses a global radiation model with a finite Fourier series.

In this paper, the input-to-state stability approach is applied to obtain new learning laws for the FoNN that do not need robust modifications. The algorithm is successfully applied to modeling lettuce growth in a greenhouse. With this model, optimal control laws can be applied.

II. PRELIMINARIES

The main concern of this section is to review some concepts of input-to-state stability (ISS). Consider the following discrete-time nonlinear system
$$x(k+1) = f[x(k), u(k)], \qquad y(k) = h[x(k)] \tag{1}$$

Abstract— In general, neural networks cannot match nonlinear systems exactly, so a neuro identifier has to include a robust modification in order to guarantee Lyapunov stability. In this paper the input-to-state stability approach is applied to obtain robust training algorithms for the Fourier neural network (FoNN). The method is successfully applied to modeling lettuce growth in a greenhouse.

I. INTRODUCTION

Recent results show that the neural network technique is very effective for identifying a broad class of complex nonlinear systems when complete model information is not available. The Lyapunov approach can be used directly to obtain robust training algorithms for continuous-time neural networks [5][10][17][22]. Discrete-time neural networks are more convenient for real applications. Two types of stability for discrete-time neural networks have been studied: the stability of the networks themselves [4][18], and the stability of the learning algorithms [8][14]. In [14] it was assumed that neural networks can represent nonlinear systems exactly, and it was concluded that a backpropagation-type algorithm guarantees exact convergence. Gershgorin's theorem was used to derive stability conditions for network learning in [8].

The Fourier neural network (FoNN) is based on Fourier analysis and neural network (NN) theory [23]. The underlying idea in the design of the FoNN is to extend the Fourier learning method to NN-based training. Compared with traditional NNs, the FoNN employs orthogonal complex Fourier exponentials as its basis functions. Therefore, it has a clear physical meaning and is closely related to the frequency-response method. As a consequence, the network structure can be chosen easily and the parameters of the basis functions are physically determined. Without prior knowledge of the system model, all the nonlinearities and uncertainties of the dynamic system are lumped together and compensated online by the FoNN.

It is well known that normal identification algorithms are stable for ideal plants [6]. In the presence of disturbances or unmodeled dynamics, these adaptive procedures can easily become unstable. The lack of robustness in
Juan Jose Cordova and Wen Yu are with the Departamento de Control Automático, CINVESTAV-IPN, Av. IPN 2508, México D.F., 07360, México (email: yuw@ctrl.cinvestav.mx)

where $u(k) \in \Re^m$ is the input vector, $x(k) \in \Re^n$ is the state vector, and $y(k) \in \Re^l$ is the output vector. $f$ and $h$ are general nonlinear smooth functions, $f, h \in C^{\infty}$. Let us now recall the following definitions.

Definition 1: The system (1) is said to be globally input-to-state stable if there exist a $\mathcal{K}$-function $\gamma(\cdot)$

978-1-4244-3553-1/09/$25.00 2009 IEEE


(continuous and strictly increasing, $\gamma(0)=0$) and a $\mathcal{KL}$-function $\beta(\cdot,\cdot)$ ($\mathcal{K}$-function in the first argument, with $\lim_{k\to\infty}\beta(s,k)=0$) such that, for each $u \in L_{\infty}$ ($\sup_k\{\|u(k)\|\}<\infty$) and each initial state $x^0 \in \Re^n$, it holds that
$$\left\|x\left(k, x^0, u(k)\right)\right\| \le \beta\left(\left\|x^0\right\|, k\right) + \gamma\left(\|u(k)\|\right)$$
Definition 2: A smooth function $V:\Re^n \to \Re_{\ge 0}$ is called a smooth ISS-Lyapunov function for system (1) if:
(a) there exist $\mathcal{K}_{\infty}$-functions $\alpha_1(\cdot)$ and $\alpha_2(\cdot)$ ($\mathcal{K}$-functions with $\lim_{s\to\infty}\alpha_i(s)=\infty$) such that
between zero and one, and $c_{\alpha}$ is the parameter that converts the carbon dioxide equivalent to that of its sugar. Using singular perturbation techniques, the model reduces to the dry-matter dynamics driven by canopy photosynthesis,
$$\frac{dX_d}{dt} = c_{\alpha\beta}\,\frac{\left(1-e^{-c_{pl,d}X_d}\right)c_1 V_1\left(c_{co2,1}X_t^2 + c_{co2,2}X_t - c_{co2,3}\right)\left(X_c - \Gamma\right)}{c_1 V_1 + \left(c_{co2,1}X_t^2 + c_{co2,2}X_t - c_{co2,3}\right)\left(X_c - \Gamma\right)}$$
together with quasi-steady-state algebraic expressions for the CO$_2$ concentration $X_c$ (the positive root of a quadratic balance equation) and the greenhouse air temperature
$$Z_t = \frac{U_q + c_{rad}V_1 + \left(c_{cap,q,v}U_v + C_{al,ou}\right)V_t}{c_{cap,q,v}U_v + C_{al,ou}}$$

$$\alpha_1(\|s\|) \le V(s) \le \alpha_2(\|s\|), \qquad \forall s \in \Re^n$$
(b) there exist a $\mathcal{K}_{\infty}$-function $\alpha_3(\cdot)$ and a $\mathcal{K}$-function $\alpha_4(\cdot)$ such that
$$V_{k+1} - V_k \le -\alpha_3(\|x(k)\|) + \alpha_4(\|u(k)\|)$$
for all $x(k)\in\Re^n$, $u(k)\in\Re^m$.
Theorem 1: For a discrete-time nonlinear system, the following are equivalent [9]: it is input-to-state stable (ISS); it is robustly stable; it admits a smooth ISS-Lyapunov function.
Property: If a nonlinear system is input-to-state stable, the behavior of the system remains bounded when its inputs are bounded. This leads to the NARMA model [1]
$$y(k) = \Phi\left[y(k-1), y(k-2), \ldots, u(k-1), u(k-2), \ldots\right] = \Phi\left[X(k)\right]$$
$$X(k) = \left[y(k-1), y(k-2), \ldots, u(k-d), u(k-d-1), \ldots\right]^T \tag{2}$$
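As an illustration of how the regressor $X(k)$ of the NARMA model is assembled from past outputs and delayed inputs, here is a minimal sketch; the signal values and the orders $n_y$, $n_u$ are hypothetical, not from the paper:

```python
import numpy as np

def narma_regressor(y, u, k, n_y=2, n_u=2, d=1):
    """Build the NARMA regressor X(k) of model (2): past outputs
    y(k-1), ..., y(k-n_y) followed by delayed inputs u(k-d), ..., u(k-d-n_u+1)."""
    past_y = [y[k - i] for i in range(1, n_y + 1)]
    past_u = [u[k - d - i] for i in range(n_u)]
    return np.array(past_y + past_u)

# toy signals for illustration
y = [0.0, 0.1, 0.2, 0.3, 0.4]
u = [1.0, 1.0, 0.5, 0.5, 0.0]
X = narma_regressor(y, u, k=4)  # X(4) = [y(3), y(2), u(3), u(2)]
```

The regressor is then fed to the identifier in place of the unknown plant state.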

and the greenhouse humidity
$$Z_h = \frac{\left(1-e^{-c_{pl,d}X_d}\right)c_{v,pl,al}\,\dfrac{c_{v,5}}{c_R\left(Z_t + c_{t,abs}\right)}\,e^{\frac{c_{v,2}Z_t}{Z_t + c_{v,3}}} + \left(U_v + C_{leak}\right)V_h}{\left(1-e^{-c_{pl,d}X_d}\right)c_{v,pl,al} + \left(U_v + C_{leak}\right)}$$

where $I_0$ is the constant solar energy received at a normal point, $a$ is the coefficient of clouds in the atmosphere, $\psi$ is the elevation angle of the sun, and $t_d$ is the phenological day. For each real case, however, the models are different.

IV. FOURIER NEURAL NETWORK MODELING WITHOUT ROBUST MODIFICATION

For lettuce growth, the biomass depends strongly on the solar radiation $V_i$. It is very useful to have an analytic expression for $V_i$ instead of a complete model of lettuce growth. A theoretical model for $V_i$ is
$$V_i = I_0\, a \sin(\psi)\left[1 + 0.033\cos\!\left(2\pi\,\frac{t_d - 10}{365}\right)\right] \tag{5}$$
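A minimal numeric sketch of the radiation model (5) follows; the values of $I_0$ and $a$ and the use of radians for the elevation angle $\psi$ are illustrative assumptions:

```python
import math

def solar_radiation(t_d, psi, I0=1367.0, a=0.7):
    """Theoretical solar radiation V_i of Eq. (5): solar constant I0,
    cloud coefficient a, sun elevation angle psi (radians),
    phenological day t_d. The I0 and a defaults are illustrative."""
    eccentricity = 1.0 + 0.033 * math.cos(2.0 * math.pi * (t_d - 10) / 365.0)
    return I0 * a * math.sin(psi) * eccentricity

# e.g. day 50 with the sun at 45 degrees elevation
Vi = solar_radiation(t_d=50, psi=math.radians(45))
```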

where $\Phi(\cdot)$ is an unknown nonlinear difference equation representing the plant dynamics, $u(k)$ and $y(k)$ are measurable scalar input and output, and $d$ is the time delay. Note that Definitions 1 and 2 and Theorem 1 do not depend on the exact expression of the nonlinear system. In this paper, we apply ISS to the NARMA model (2).

III. LETTUCE GROWTH MODEL IN GREENHOUSE

Lettuce growth can be integrated through the lettuce biomass $X_d$, which is influenced by the climate behavior (solar radiation), CO$_2$ concentration, temperature, and humidity. In a general form [19]
$$\frac{dX_d}{dt} = c_{\beta}\left(c_{\alpha}\phi_{fot} - \phi_{resp}\right) \tag{4}$$
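The growth equation above can be integrated numerically; the sketch below uses a forward-Euler step with purely illustrative rate functions and parameter values (the paper does not specify them):

```python
def simulate_dry_matter(Xd0, phi_fot, phi_resp, c_alpha=0.5, c_beta=0.8,
                        dt=3600.0, steps=24):
    """Forward-Euler integration of dXd/dt = c_beta*(c_alpha*phi_fot - phi_resp).
    phi_fot and phi_resp are callables returning the photosynthesis and
    respiration rates (kg m^-2 s^-1) for the current dry matter Xd;
    all numeric values here are illustrative, not the paper's."""
    Xd = Xd0
    for _ in range(steps):
        Xd += dt * c_beta * (c_alpha * phi_fot(Xd) - phi_resp(Xd))
    return Xd

# constant toy rates: net growth because c_alpha*phi_fot > phi_resp
Xd_final = simulate_dry_matter(0.01, lambda x: 2e-7, lambda x: 5e-8)
```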

The Fourier neural network is a special artificial neural network with the following topology:
$$\hat{y}(k) = \Phi\!\left(\sum_{k=1}^{p} W_k \prod_{j=1}^{q}\cos\left(\omega_{kj}x_j + \varphi_{kj}\right)\right) \tag{6}$$
A finite Fourier series is
$$f(t) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}\left[a_n\cos(n\omega t) + b_n\sin(n\omega t)\right] \tag{7}$$
We write (6) in general neural-network form
$$\hat{y}(k) = \Phi\left[W_k X(k)\right] \tag{8}$$

where the scalar output $\hat{y}(k)$ and the vector input
$$X(k) = \prod_{j=1}^{q}\cos\left(\omega_{kj}x_j + \varphi_{kj}\right) \in \Re^{n\times 1} \tag{3}$$

where $X_d$ (kg m$^{-2}$) is the dry matter, $\phi_{fot}$ (kg m$^{-2}$s$^{-1}$) is the gross photosynthetic absorption of CO$_2$, $\phi_{resp}$ (kg m$^{-2}$s$^{-1}$) is the rate of respiration expressed in terms of the amount of carbohydrates consumed, and $c_{\beta}$ is the parameter of breathing and loss of synthesis in the conversion of carbohydrates to structural material, with a value

the weight matrix $W_k \in \Re^{1\times n}$, and $\Phi$ is an $m$-dimensional vector function. The typical presentation of the element $\phi_i(\cdot)$ is the sigmoid activation function. The identified nonlinear system is represented as (2), i.e.,
$$y(k) = \Phi\left[X(k)\right] \tag{9}$$
According to the Stone-Weierstrass theorem [2], this general nonlinear smooth function can be written as
$$y(k) = \Phi\left[W^*X(k)\right] - \mu(k) \tag{10}$$
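A minimal forward-pass sketch of the FoNN of (6), (8) with the basis (3) follows; the tanh output activation and the random parameter values are assumptions for illustration:

```python
import numpy as np

def fonn_forward(x, W, omega, phi, activation=np.tanh):
    """Forward pass of a Fourier neural network: each basis term is a
    product of cosines prod_j cos(omega[k,j]*x[j] + phi[k,j]), weighted by
    W[k] and passed through the output activation.
    Shapes: x (q,), W (p,), omega and phi (p, q)."""
    basis = np.prod(np.cos(omega * x + phi), axis=1)  # X(k) of Eq. (3), shape (p,)
    return activation(W @ basis)

rng = np.random.default_rng(0)
p, q = 17, 3
y_hat = fonn_forward(rng.standard_normal(q),
                     rng.standard_normal(p),
                     rng.standard_normal((p, q)),
                     rng.standard_normal((p, q)))
```

With all frequencies and phases set to zero every basis term is 1, so the output reduces to the activation of the weight sum, which gives a quick sanity check.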


where $W^*$ is the optimal weight and $\mu(k)$ is the modeling error. Since $\Phi$ is a bounded function and the output of the plant is assumed bounded, $\mu(k)$ is bounded: $\mu^2(k) \le \bar{\mu}$, where $\bar{\mu}$ is an unknown positive constant. The neuro identification error is defined as
$$e(k) = \hat{y}(k) - y(k) \tag{11}$$
Using a Taylor series around the point $W_k X(k)$, the identification error can be represented as
$$e(k) = \Phi\left[W_k X(k)\right] - \Phi\left[W^*X(k)\right] + \mu(k) = \Phi'\,\widetilde{W}_k X(k) + \zeta(k) \tag{12}$$

where $\widetilde{W}_k = W_k - W^*$, $\zeta(k) = \mu(k) + \delta(k)$, and $\delta(k)$ is the second-order approximation error. $\Phi'$ is the derivative of the nonlinear activation function $\Phi(\cdot)$ at the point $W_k X(k)$. Since $\Phi$ is a sigmoid activation function, $\zeta(k)$ is bounded: $\zeta^2(k) \le \bar{\zeta}$, where $\bar{\zeta}$ is an unknown positive constant. The following theorem gives a stable learning algorithm for the discrete-time single-layer neural network.

Theorem 2: If the single-layer neural network (8) is used to identify the nonlinear plant (9), the following gradient updating law without robust modification makes the identification error $e(k)$ bounded (stable in an $L_{\infty}$ sense):
$$W_{k+1} = W_k - \eta_k\, e(k)\, X^T(k) \tag{13}$$
where $\eta_k = \dfrac{\eta}{1 + \left\|\Phi' X^T(k)\right\|^2}$, $0 < \eta \le 1$.

Proof: Select the Lyapunov function
$$V_k = \left\|\widetilde{W}_k\right\|^2 = \sum_{i=1}^{n}\widetilde{w}_i^2 = tr\left\{\widetilde{W}_k^T\widetilde{W}_k\right\} \tag{14}$$
From the updating law (13), $\widetilde{W}_{k+1} = \widetilde{W}_k - \eta_k e(k)\Phi' X^T(k)$, so
$$\Delta V_k = V_{k+1} - V_k = \left\|\widetilde{W}_k - \eta_k e(k)\Phi' X^T(k)\right\|^2 - \left\|\widetilde{W}_k\right\|^2 = \eta_k^2 e^2(k)\left\|\Phi' X^T(k)\right\|^2 - 2\eta_k e(k)\,\Phi'\widetilde{W}_k X(k)$$
Using (12) and $0 < \eta \le 1$, $0 \le \eta_k \le 1$,
$$\Delta V_k = \eta_k^2 e^2(k)\left\|\Phi' X^T(k)\right\|^2 - 2\eta_k e(k)\left[e(k) - \zeta(k)\right] \le \eta_k^2 e^2(k)\left\|\Phi' X^T(k)\right\|^2 - 2\eta_k e^2(k) + \eta_k e^2(k) + \eta_k\zeta^2(k)$$
$$= -\eta_k\left[1 - \eta_k\left\|\Phi' X^T(k)\right\|^2\right]e^2(k) + \eta_k\zeta^2(k) \le -\eta_k\left[1 - \frac{\eta\left\|\Phi' X^T(k)\right\|^2}{1+\left\|\Phi' X^T(k)\right\|^2}\right]e^2(k) + \eta_k\zeta^2(k) \le -\pi e^2(k) + \eta\,\zeta^2(k) \tag{15}$$
where $\pi = \dfrac{\eta}{1+\kappa}\left(1 - \dfrac{\eta\kappa}{1+\kappa}\right) > 0$ and $\kappa = \max_k\left\|\Phi' X^T(k)\right\|^2$. Since
$$n\min\left(\widetilde{w}_i^2\right) \le V_k \le n\max\left(\widetilde{w}_i^2\right)$$
where $n\min(\widetilde{w}_i^2)$ and $n\max(\widetilde{w}_i^2)$ are $\mathcal{K}_{\infty}$-functions, $\pi e^2(k)$ is a $\mathcal{K}_{\infty}$-function, and $\eta\zeta^2(k)$ is a $\mathcal{K}$-function, from (12) and (14) we know $V_k$ is a function of $e(k)$ and $\zeta(k)$, so $V_k$ admits a smooth ISS-Lyapunov function as in Definition 2. By Theorem 1, the dynamics of the identification error are input-to-state stable. The INPUT corresponds to the second term of the last line in (15), i.e., the modeling error $\zeta(k) = \mu(k) + \delta(k)$; the STATE corresponds to the first term, i.e., the identification error $e(k)$. Because the INPUT $\zeta(k)$ is bounded and the dynamics are ISS, the STATE $e(k)$ is bounded.

Remark 1: (13) is the gradient descent algorithm in which the normalizing learning rate $\eta_k$ is time-varying in order to assure that the identification process is stable. This learning law is simple to use, because we do not need to select a learning rate that trades off fast convergence against stability, and no prior information is required.

Remark 2: If a fixed matrix $A \in \Re^{1\times m}$ is added in (8), with $\Phi$ an $m$-dimensional vector function,
$$\hat{y}(k) = A\,\Phi\left[W_k X(k)\right], \qquad W_k \in \Re^{m\times n} \tag{16}$$
the learning law (13) becomes
$$W_{k+1} = W_k - \frac{\eta}{1 + \left\|\Phi' A^T X^T(k)\right\|^2}\, e(k)\,\Phi'^T A^T X^T(k)$$
This is the same as Eq. (20) in [14], but there it is assumed that the neural network (16) can match the nonlinear system (9) exactly. In our case, modeling errors $\mu(k)$ and $\delta(k)$ are allowed.

Remark 3: The class of networks considered in this paper is nonlinear in the weights, as in [14] and [15]. Due to slow learning convergence and large data sets, many practical implementations of neural networks are linear in the weights [5][17]:
$$\hat{y}(k) = W_k\,\Phi\left[X(k)\right]$$
In this case the identification error dynamics (12) become
$$e(k) = W_k\Phi\left[X(k)\right] - W^*\Phi\left[X(k)\right] + \mu(k) = \widetilde{W}_k\Phi\left[X(k)\right] + \mu(k)$$
If we use the following updating law
$$W_{k+1} = W_k - \eta_k\, e(k)\,\Phi^T\left[X(k)\right], \qquad \eta_k = \frac{\eta}{1+\left\|\Phi\left[X(k)\right]\right\|^2},\quad 0<\eta\le 1 \tag{17}$$
we can prove that the identification error $e(k)$ is also bounded; the proof procedure is the same as in Theorem 2.
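The normalized-gradient law of Remark 3, Eq. (17), can be sketched as follows for a toy linear-in-the-weights model; the tanh basis and the toy plant are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def train_normalized_gradient(X, y, eta=1.0):
    """Normalized gradient law of Eq. (17) (linear-in-weights case):
    W_{k+1} = W_k - eta_k * e(k) * Phi(X(k))^T with the time-varying rate
    eta_k = eta / (1 + ||Phi(X(k))||^2), which needs no prior bound on
    the modeling error. Phi is elementwise tanh (an illustrative choice)."""
    W = np.zeros(X.shape[1])
    errors = []
    for x_k, y_k in zip(X, y):
        phi = np.tanh(x_k)                   # Phi[X(k)]
        e_k = W @ phi - y_k                  # e(k) = y_hat(k) - y(k)
        eta_k = eta / (1.0 + phi @ phi)      # normalized learning rate
        W = W - eta_k * e_k * phi            # update (17)
        errors.append(e_k)
    return W, np.array(errors)

# toy plant that really is linear in the tanh features (no modeling error)
rng = np.random.default_rng(1)
W_true = np.array([1.0, -2.0, 0.5])
X = rng.standard_normal((500, 3))
y = np.tanh(X) @ W_true
W_est, errs = train_normalized_gradient(X, y)
```

In this noiseless, persistently exciting toy case the weights also converge; in general the theorem only guarantees a bounded identification error.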


The modeling error $\mu(k)$ is smaller than the error $\zeta(k)$ of networks nonlinear in the weights, because $\zeta(k) = \mu(k) + \delta(k)$.

Now consider the multilayer neural network (or multilayer perceptron, MLP), represented as [11]
$$\hat{y}(k) = V_k\,\Phi\left[W_k X(k)\right] \tag{18}$$
where the scalar output $\hat{y}(k)$ and the vector input $X(k)\in\Re^{n\times 1}$ are defined in (3), the weights in the output layer are $V_k\in\Re^{1\times m}$, the weights in the hidden layer are $W_k\in\Re^{m\times n}$, and $\Phi$ is an $m$-dimensional vector function whose typical element $\phi_i(\cdot)$ is the sigmoid function. The neuro identification discussed in this paper is online: we use the identification error $e(k)$ to train the neural network (18) online so that $\hat{y}(k)$ approximates $y(k)$. The identified nonlinear system (2) can be represented as
$$y(k) = V^*\Phi\left[W^*X(k)\right] - \mu(k)$$
where $V^*$ and $W^*$ are the unknown weights that minimize the modeling error $\mu(k)$. The nonlinear plant (2) may also be expressed as
$$y(k) = V^0\,\Phi\left[W^*X(k)\right] - \zeta_1(k) \tag{19}$$
where $V^0$ is a known matrix chosen by the user. In general, $\left\|\zeta_1(k)\right\| \ge \left\|\mu(k)\right\|$. Using a Taylor series around the point $W_k X(k)$, the identification error can be represented as
$$e(k) = V_k\Phi\left[W_k X(k)\right] - V^0\Phi\left[W^*X(k)\right] + \zeta_1(k) = \widetilde{V}_k\Phi\left[W_k X(k)\right] + V^0\Phi'\widetilde{W}_k X(k) + \zeta(k) \tag{20}$$
where $\Phi'$ is the derivative of the nonlinear activation function $\Phi(\cdot)$ at the point $W_k X(k)$, $\widetilde{W}_k = W_k - W^*$, $\widetilde{V}_k = V_k - V^0$, $\zeta(k) = V^0\delta(k) + \zeta_1(k)$, and $\delta(k)$ is the second-order approximation error of the Taylor series.

In this paper we are interested only in open-loop identification, so we may assume that the plant (2) is bounded-input bounded-output stable, i.e., $y(k)$ and $u(k)$ in (2) are bounded. By the boundedness of the sigmoid function, $\zeta_1(k)$ in (19) is bounded, and $\delta(k)$ is bounded, so $\zeta(k)$ in (20) is bounded. The following theorem gives a stable backpropagation-like algorithm for the discrete-time multilayer neural network.

Theorem 3: If the multilayer neural network (18) is used to identify the nonlinear plant (2), the following backpropagation-like algorithm makes the identification error $e(k)$ bounded:
$$W_{k+1} = W_k - \eta_k\, e(k)\,\Phi' V^{0T} X^T(k), \qquad V_{k+1} = V_k - \eta_k\, e(k)\,\Phi^T \tag{21}$$
where $\eta_k = \dfrac{\eta}{1+\left\|\Phi' V^{0T} X^T(k)\right\|^2 + \left\|\Phi\right\|^2}$, $0<\eta\le 1$. The average of the identification error satisfies
$$J = \limsup_{T\to\infty}\frac{1}{T}\sum_{k=1}^{T} e^2(k) \le \frac{\eta}{\pi}\,\bar{\zeta} \tag{22}$$
where $\pi = \dfrac{\eta}{1+\kappa}\left(1 - \dfrac{\eta\kappa}{1+\kappa}\right) > 0$, $\kappa = \max_k\left(\left\|\Phi' V^{0T} X^T(k)\right\|^2 + \left\|\Phi\right\|^2\right)$, and $\bar{\zeta} = \max_k \zeta^2(k)$.

Proof: Select the positive definite scalar
$$L_k = \left\|\widetilde{W}_k\right\|^2 + \left\|\widetilde{V}_k\right\|^2 \tag{23}$$
From the updating law (21) we have
$$\widetilde{W}_{k+1} = \widetilde{W}_k - \eta_k e(k)\Phi' V^{0T} X^T(k), \qquad \widetilde{V}_{k+1} = \widetilde{V}_k - \eta_k e(k)\Phi^T$$
Since $\Phi'$ is a diagonal matrix, using (20) we have
$$\Delta L_k = \left\|\widetilde{W}_k - \eta_k e(k)\Phi' V^{0T} X^T(k)\right\|^2 + \left\|\widetilde{V}_k - \eta_k e(k)\Phi^T\right\|^2 - \left\|\widetilde{W}_k\right\|^2 - \left\|\widetilde{V}_k\right\|^2$$
$$= \eta_k^2 e^2(k)\left(\left\|\Phi' V^{0T} X^T(k)\right\|^2 + \left\|\Phi\right\|^2\right) - 2\eta_k e(k)\left(V^0\Phi'\widetilde{W}_k X(k) + \widetilde{V}_k\Phi\right)$$
$$= \eta_k^2 e^2(k)\left(\left\|\Phi' V^{0T} X^T(k)\right\|^2 + \left\|\Phi\right\|^2\right) - 2\eta_k e(k)\left[e(k) - \zeta(k)\right]$$
$$\le -\eta_k e^2(k)\left[1 - \eta_k\left(\left\|\Phi' V^{0T} X^T(k)\right\|^2 + \left\|\Phi\right\|^2\right)\right] + \eta_k\zeta^2(k) \le -\pi e^2(k) + \eta\,\zeta^2(k) \tag{24}$$
where $\pi$ is defined in (22). Because
$$n\min\left(\widetilde{w}_i^2\right) + \min\left(\widetilde{v}_i^2\right) \le L_k \le n\max\left(\widetilde{w}_i^2\right) + \max\left(\widetilde{v}_i^2\right)$$
and these bounds are $\mathcal{K}_{\infty}$-functions, while $\pi e^2(k)$ is a $\mathcal{K}_{\infty}$-function and $\eta\zeta^2(k)$ is a $\mathcal{K}$-function, from (20) and (23) we know $L_k$ is a function of $e(k)$ and $\zeta(k)$, so $L_k$ admits a smooth ISS-Lyapunov function as in Definition 2. By Theorem 1, the dynamics of the identification error are input-to-state stable; because the INPUT $\zeta(k)$ is bounded and the dynamics are ISS, the STATE $e(k)$ is bounded. (24) can be rewritten as
$$\Delta L_k \le -\pi e^2(k) + \eta\,\bar{\zeta} \tag{25}$$
Summing (25) from 1 up to $T$, and using $L_T > 0$ and $L_1$ constant, we obtain
$$\pi\sum_{k=1}^{T} e^2(k) \le L_1 - L_T + T\eta\bar{\zeta} \le L_1 + T\eta\bar{\zeta}$$
(22) is established.

Remark 4: The normalizing learning rates $\eta_k$ in (13) and (21) are time-varying in order to assure that the identification processes are stable. These learning gains are easier to choose: no prior information is required; for example, we may select $\eta = 1$. The contradiction between fast convergence and stable learning can be avoided. If instead we select $\eta$ as a dead-zone function, $\eta = \eta_0$ if $|e(k)| \ge \bar{\zeta}$ and $\eta = 0$ if $|e(k)| < \bar{\zeta}$, then (13) is the same as in [15] and [22]. If a $\sigma$-modification term or a modified $\delta$-rule term is added in (13), it becomes the algorithm of [7] or of [11]. But all of these need the upper bound of the modeling error $\bar{\zeta}$, and the identification error is enlarged by the robust modifications [6].

Remark 5: Since we assume that neural networks cannot match nonlinear systems exactly, we cannot make the parameters (weights) converge; we only force the output of the neural network to follow the output of the plant, i.e., the identification error is stable. Although the weights do not converge to their optimal values, (22) shows that the identification error converges to a ball of radius $\frac{\eta}{\pi}\bar{\zeta}$. Even if the input is persistently exciting, the modeling error $\zeta(k)$ prevents the weights from converging to their optimal values. It is possible that the output error converges while the weight errors remain large, when the network structure is not well defined. The relations between the output error and the weight errors are shown in (12) and (20). The simplest case is a network linear in the weights that matches the nonlinear plant exactly:
$$\text{plant: } y = W^*\Phi\left[X(k)\right], \quad \text{neural network: } \hat{y} = W_t\Phi\left[X(k)\right], \quad \text{output error: } y - \hat{y} = \left(W^* - W_t\right)\Phi\left[X(k)\right]$$
If $\Phi[X(k)]$ is large, a small output error $(y-\hat{y})$ does not imply good convergence of the weight error $(W^* - W_t)$.

Remark 6: $V^0$ does not affect the stability property of the neuro identification, but it influences the identification accuracy; see (22). We design an offline method to find a better value for $V^0$. If we let $V^0 = V_0$, the algorithm (21) can make the identification error convergent, i.e., $V_k$ will make the identification error smaller than that of $V_0$. $V^0$ may be selected by the following steps:
1) Start from any initial value $V^0 = V_0$.
2) Perform identification with this $V^0$ until time $T_0$.
3) If $\|e(T_0)\| < \|e(0)\|$, let $V_{T_0}$ be the new $V^0$, i.e., $V^0 = V_{T_0}$, and go to step 2 to repeat the identification process.
4) If $\|e(T_0)\| \ge \|e(0)\|$, stop this offline identification; $V_{T_0}$ is the final value for $V^0$.
With this prior knowledge of $V^0$, we may start the online identification (21).

Remark 7: Noise (or disturbance) is an important issue in system identification. There are two types of disturbances: external and internal. Internal disturbance can be regarded as the unmodeled dynamics $\zeta_1(k)$ in (19). A bounded internal disturbance does not affect the theoretical results of this paper, but a larger internal disturbance enlarges the identification error. External disturbance can be regarded as measurement noise, input noise, etc. From the structural point of view, input noise is propagated feedforward through each layer [1]; for example, a noise $\xi(k)$ is multiplied through $V_k\Phi[W_k\xi(k)]$ and reaches the output. Measurement noise is enlarged by the backpropagation of the identification error in (21), so the weights of the neural network are influenced by output noise. On the other hand, a small external disturbance can accelerate the convergence rate: according to persistent excitation theory [14], small disturbances in the input $u(t)$ or the output $y(t)$ enrich the frequency content of the signal $X(t)$, which is good for parameter convergence. The following simulation illustrates this point.

Fig. 1. $V_i$ values (Day 25 to 80).

V. EXPERIMENTS

For the Fourier neural network, we select $p = 17$; the initial values are

n    :  1       4       6       8       12      17
C_n  :  0.0248  0.0749  0.208   0.1449  0.009   0.008
w_n  :  0.01    0.04    0.06    0.08    0.12    0.17
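The four-step offline selection of $V^0$ described in Remark 6 above can be sketched as follows; the `identify` routine is a hypothetical stand-in for running the online law (21) over a window of length $T_0$:

```python
import numpy as np

def select_V0(identify, V0_init, T0=300, max_loops=10):
    """Offline selection of V^0 (the four steps of Remark 6): repeatedly
    identify over a window of length T0 and adopt V_{T0} as the new V^0
    while the terminal error keeps shrinking. `identify(V0, T0)` is
    assumed to return (V_{T0}, error_trajectory)."""
    V0 = V0_init
    for _ in range(max_loops):
        V_T0, e = identify(V0, T0)
        if abs(e[-1]) < abs(e[0]):     # step 3: error decreased, keep going
            V0 = V_T0
        else:                          # step 4: no improvement, stop
            break
    return V0

# toy stand-in: each pass shrinks both the error and the weights
def fake_identify(V0, T0):
    e = np.linspace(1.0, 0.5, T0) * np.linalg.norm(V0)
    return 0.9 * V0, e

V0_final = select_V0(fake_identify, np.ones(3))
```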

Figure 1 shows the values of $V_i$, and Figure 2 shows the modeling results. Theorem 2 gives a sufficient condition on $\eta$ for stable learning, $\eta \le 1$. In this example, we found that if $\eta \ge 2.7$ the learning process becomes unstable. The neuro identification discussed in this paper is online; we do not study the convergence of the weights but care about the identification error $e(k)$. The weights do not converge to constants or to optimal values. Compared with normal backpropagation, the time-varying learning rate $\eta_k$ in (21) is easier to realize. We use the same multilayer neural network 5-20-10-1 (two hidden layers with 20 and 10 nodes) and select a fixed learning


Fig. 2. Modeling results (Time [d], days 25 to 80).

rate $\eta = 0.025$ as in [13]. Perhaps a better $\eta$ exists, but the simulation would have to be run many times in order to find it. Theorem 2 shows that the identification error converges to the upper bound of $V^0\delta(k) + \zeta_1(k)$; a smaller $V^0$ may be helpful, but a smaller $V^0$ may make $\zeta_1(k)$ bigger. We used the offline procedure of Remark 6 to obtain a suitable $V^0$, with $T_0 = 300$. After 2 loops, $\|e(T_0)\|$ did not decrease, so we let $V_{300}$ be the new $V^0$. By Remark 6 the average of the identification error $\left(\frac{1}{200}\sum_{k=100}^{300} e^2(k)\right)$ becomes smaller ($J = 9.5\times 10^{-4}$) than in the case $V^0 = V_0$ ($J = 18\times 10^{-4}$).

VI. CONCLUSION

In this paper we studied nonlinear system identification with discrete-time single-layer and multilayer neural networks. Using the ISS approach, we conclude that the commonly used robustifying techniques, such as dead zone and $\sigma$-modification, are not necessary for the gradient descent law and the backpropagation-like algorithm. Further work will address discrete-time recurrent neural networks and neuro control based on the ISS approach.

REFERENCES
[1] M. Brown, C. J. Harris, Neurofuzzy Adaptive Modelling and Control, Prentice Hall, 1994.
[2] G. Cybenko, Approximation by Superposition of Sigmoidal Activation Function, Math. Control Sig. Syst., Vol. 2, 303-314, 1989.
[3] B. Egardt, Stability of Adaptive Controllers, Lecture Notes in Control and Information Sciences, Vol. 20, Springer-Verlag, Berlin, 1979.
[4] Z. Feng and A. N. Michel, Robustness Analysis of a Class of Discrete-Time Systems with Applications to Neural Networks, Proc. of American Control Conference, 3479-3483, San Diego, 1999.
[5] S. S. Ge, T. H. Lee and C. J. Harris, Adaptive Neural Network Control of Robotic Manipulators, World Scientific, London, Series in Robotics and Intelligent Systems, Vol. 19, 1998.
[6] P. A. Ioannou and J. Sun, Robust Adaptive Control, Prentice-Hall, Inc., Upper Saddle River, NJ, 1996.

[7] S. Jagannathan and F. L. Lewis, Identification of Nonlinear Dynamical Systems Using Multilayered Neural Networks, Automatica, Vol. 32, No. 12, 1707-1712, 1996.
[8] L. Jin and M. M. Gupta, Stable Dynamic Backpropagation Learning in Recurrent Neural Networks, IEEE Trans. Neural Networks, Vol. 10, No. 6, 1321-1334, 1999.
[9] Z. P. Jiang and Y. Wang, Input-to-State Stability for Discrete-Time Nonlinear Systems, Automatica, Vol. 37, No. 2, 857-869, 2001.
[10] E. B. Kosmatopoulos, M. M. Polycarpou, M. A. Christodoulou and P. A. Ioannou, High-Order Neural Network Structures for Identification of Dynamical Systems, IEEE Trans. Neural Networks, Vol. 6, No. 2, 422-431, 1995.
[11] F. L. Lewis, A. Yesildirek and K. Liu, Multilayer Neural-Net Robot Controller with Guaranteed Tracking Performance, IEEE Trans. Neural Networks, Vol. 7, No. 2, 388-399, 1996.
[12] A. Munack and H. J. Tantau, Mathematical and control applications in agriculture and horticulture, IFAC Workshop on Control Applications in Agriculture, Hannover, Germany, 1997.
[13] K. S. Narendra and K. Parthasarathy, Identification and Control of Dynamical Systems Using Neural Networks, IEEE Trans. Neural Networks, Vol. 1, No. 1, 4-27, 1990.
[14] M. M. Polycarpou and P. A. Ioannou, Learning and Convergence Analysis of Neural-Type Structured Networks, IEEE Trans. Neural Networks, Vol. 3, No. 1, 39-50, 1992.
[15] Q. Song, Robust Training Algorithm of Multilayered Neural Networks for Identification of Nonlinear Dynamic Systems, IEE Proceedings Control Theory and Applications, Vol. 145, No. 1, 41-46, 1998.
[16] Q. Song, J. Xiao and Y. C. Soh, Robust Backpropagation Training Algorithm for Multilayered Neural Tracking Controller, IEEE Trans. Neural Networks, Vol. 10, No. 5, 1133-1141, 1999.
[17] J. A. K. Suykens, J. Vandewalle and B. De Moor, Lur'e Systems with Multilayer Perceptron and Recurrent Neural Networks: Absolute Stability and Dissipativity, IEEE Trans. on Automatic Control, Vol. 44, 770-774, 1999.
[18] J. A. K. Suykens, J. Vandewalle and B. De Moor, NLq Theory: Checking and Imposing Stability of Recurrent Neural Networks for Nonlinear Modelling, IEEE Transactions on Signal Processing (special issue on neural networks for signal processing), Vol. 45, No. 11, 2682-2691, 1997.
[19] J. H. M. Thornley and I. R. Johnson, Plant and Crop Modelling, Clarendon Press, Oxford, 1990.
[20] W. Yu and X. Li, Some Stability Properties of Dynamic Neural Networks, IEEE Trans. Circuits and Systems, Part I, Vol. 48, No. 1, 256-259, 2001.
[21] W. Yu and X. Li, Some New Results on System Identification with Dynamic Neural Networks, IEEE Trans. Neural Networks, Vol. 12, No. 2, 412-417, 2001.
[22] W. Yu, A. S. Poznyak and X. Li, Multilayer Dynamic Neural Networks for Nonlinear System Online Identification, International Journal of Control, Vol. 74, No. 18, 1858-1864, 2001.
[23] W. Zou, L. Cai, Adaptive Fourier-Neural-Network-Based Control for a Class of Uncertain Nonlinear Systems, IEEE Trans. Neural Networks, Vol. 19, No. 10, 1689-1701, 2008.
