Research Article
Stability Analysis of Neural Networks-Based System Identification
This paper treats some problems related to nonlinear system identification. A stability analysis of a neural-network model for identifying nonlinear dynamic systems is presented. A constrained adaptive stable backpropagation (CSBP) updating law is derived and used in the proposed identification approach. The backpropagation training algorithm is modified to obtain an adaptive learning rate that guarantees convergence stability. The proposed learning rule is the backpropagation algorithm under the condition that the learning rate belongs to a specified range defining the stability domain; when this condition is satisfied, unstable phenomena during the learning process are avoided. A Lyapunov analysis leads to an expression for a convenient adaptive learning rate verifying the convergence stability criteria. Finally, the elaborated training algorithm is applied in several simulations, and the results confirm the effectiveness of the CSBP algorithm.
Copyright © 2008 Talel Korkobi et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
[Figure: structure of the neural identifier; inputs y(k), …, y(k − n + 1), u(k), …; output y^m(k + 1).]

$$I_j = \sum_{i=1}^{n+m+1} x_i w_{ij}, \qquad O_j = f(I_j), \qquad j = 1, \ldots, N,$$

$$I_k = \sum_{j=1}^{N} O_j v_j, \qquad y^m(k) = f(I_k), \qquad k = 1. \tag{4}$$
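To make the identifier structure concrete, the following minimal Python sketch implements the forward pass (4). The sigmoid activation is an assumption, suggested by the factor y^m(k + 1)(1 − y^m(k + 1)) appearing later in the derivative expressions; the function and variable names are illustrative only.

```python
import numpy as np

def sigmoid(z):
    # Logistic activation f; its derivative f(z)*(1 - f(z)) matches the
    # y^m(1 - y^m) factors that appear later in the gradient expressions.
    return 1.0 / (1.0 + np.exp(-z))

def identifier_forward(x, W, v):
    """Forward pass of the neural identifier, eq. (4).

    x : input vector of length n+m+1 (past outputs y(k), ..., y(k-n+1),
        past inputs u(k), ..., plus a bias term)
    W : (n+m+1, N) matrix of input-to-hidden weights w_ij
    v : (N,) vector of hidden-to-output weights v_j
    """
    I = x @ W            # I_j = sum_i x_i w_ij
    O = sigmoid(I)       # O_j = f(I_j)
    I_out = O @ v        # I_k = sum_j O_j v_j (single output, k = 1)
    ym = sigmoid(I_out)  # y^m = f(I_k)
    return ym, O
```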
neural-network model can match the nonlinear plant exactly [6]. Generally, some modifications to the normal-gradient algorithm or backpropagation should be applied so that the learning process is stable. For example, in [12, 16], some hard restrictions were added to the learning law, and in [11], the dynamic backpropagation has been modified with NLq stability constraints.

The paper is organized as follows. Section 2 describes the neural identifier structure considered in this paper and the usual backpropagation algorithm. In Section 3, through a stability analysis, a constrained adaptive stable backpropagation algorithm (CSBP) is proposed to provide a stable adaptive updating process. Three simulation examples demonstrate the effectiveness of the suggested algorithm in Section 4.

The weights are adjusted by the usual backpropagation (gradient-descent) law:

$$w_{ij}(k+1) = w_{ij}(k) - \varepsilon\,\frac{\partial J(k)}{\partial w_{ij}(k)}, \qquad v_j(k+1) = v_j(k) - \varepsilon\,\frac{\partial J(k)}{\partial v_j(k)}, \tag{5}$$

where $J(k) = \frac{1}{2}\big[y(k+1) - y^m(k+1)\big]^2$ and $\varepsilon$ is the learning rate. The partial derivatives are calculated with respect to the vectors of weights $W$ and $V$:

$$\frac{\partial J(k)}{\partial v_j(k)} = f'(I_k)\big(y(k+1) - y^m(k+1)\big)O_j, \qquad \frac{\partial J(k)}{\partial w_{ij}(k)} = \sum_{j=1}^{L} f'(I_k)\,f'(I_j)\big(y(k+1) - y^m(k+1)\big)v_j x_i. \tag{6}$$
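A correspondingly minimal sketch of the plain backpropagation update (5)-(6) follows, under the same sigmoid assumption (so f′(z) = f(z)(1 − f(z))) and reusing the illustrative identifier_forward helper above. The fixed learning rate eps is what the CSBP algorithm later replaces with an adaptive, stability-constrained value.

```python
def backprop_step(x, y_true, W, v, eps):
    """One update of W and v by the usual backpropagation law, eq. (5)-(6)."""
    ym, O = identifier_forward(x, W, v)
    e = y_true - ym                       # identification error y - y^m
    delta = ym * (1.0 - ym) * e           # f'(I_k) * (y - y^m)

    grad_v = -delta * O                   # dJ/dv_j for J = 0.5 * e^2
    grad_W = -np.outer(x, delta * v * O * (1.0 - O))  # dJ/dw_ij

    W_new = W - eps * grad_W              # eq. (5)
    v_new = v - eps * grad_v
    return W_new, v_new, e
```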
Under the update law (5), the weight errors evolve as

$$\widetilde{W}(k+1) = W(k) - \varepsilon\,\frac{\partial J}{\partial W(k)} - W^*, \tag{11}$$

$$\widetilde{V}(k+1) = V(k) - \varepsilon\,\frac{\partial J}{\partial V(k)} - V^*, \tag{14}$$

where $W^*$ and $V^*$ denote the optimal weights, so that $\widetilde{W} = W - W^*$ and $\widetilde{V} = V - V^*$ measure the deviation of the current weights from them. Let A and B be defined as follows:

$$A = \operatorname{tr}\big(\widetilde{W}^T(k+1)\,\widetilde{W}(k+1)\big) - \operatorname{tr}\big(\widetilde{W}^T(k)\,\widetilde{W}(k)\big), \qquad B = \widetilde{V}^T(k+1)\,\widetilde{V}(k+1) - \widetilde{V}^T(k)\,\widetilde{V}(k).$$
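The quantities A and B play the role of a Lyapunov difference. The corresponding Lyapunov candidate is not visible in the surviving text; assuming that, for this part of the analysis, it takes the usual weight-error form below (an assumption consistent with the identity ΔV_L(k) = A + B used in the next step), A and B are exactly the contributions of $\widetilde{W}$ and $\widetilde{V}$ to ΔV_L(k):

$$V_L(k) = \operatorname{tr}\big(\widetilde{W}^T(k)\,\widetilde{W}(k)\big) + \widetilde{V}^T(k)\,\widetilde{V}(k), \qquad \Delta V_L(k) = V_L(k+1) - V_L(k) = A + B.$$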
The ΔV_L(k) expression is calculated as

$$\begin{aligned}
\Delta V_L(k) &= A + B\\
&= \operatorname{tr}\!\left[\varepsilon^2\,\frac{\partial J}{\partial W^T(k)}\,\frac{\partial J}{\partial W(k)} - 2\varepsilon\,\widetilde{W}^T(k)\,\frac{\partial J}{\partial W(k)}\right] + \varepsilon^2\,\frac{\partial J}{\partial V^T(k)}\,\frac{\partial J}{\partial V(k)} - 2\varepsilon\,\widetilde{V}^T(k)\,\frac{\partial J}{\partial V(k)}\\
&= \varepsilon^2\left[\sum_{i,j}\left(\frac{\partial J}{\partial W_{ij}(k)}\right)^2 + \sum_{j}\left(\frac{\partial J}{\partial V_j(k)}\right)^2\right] - 2\varepsilon\left[\widetilde{V}^T(k)\,\frac{\partial J}{\partial V(k)} + \operatorname{tr}\!\left(\widetilde{W}^T(k)\,\frac{\partial J}{\partial W(k)}\right)\right]\\
&\leq \alpha\,\varepsilon^2 - 2\,\beta\,\varepsilon,
\end{aligned}\tag{15}$$

where

$$\alpha = \sum_{i,j}\left(\frac{\partial J}{\partial W_{ij}(k)}\right)^2 + \sum_{j}\left(\frac{\partial J}{\partial V_j(k)}\right)^2 = \|J_\theta\|^2, \qquad \beta = \widetilde{V}^T(k)\,\frac{\partial J}{\partial V(k)} + \operatorname{tr}\!\left(\widetilde{W}^T(k)\,\frac{\partial J}{\partial W(k)}\right) = \operatorname{tr}\big(J_\theta\cdot\widetilde{\theta}\big). \tag{16}$$

The stability condition ΔV_L(k) ≤ 0 is satisfied only if

$$\alpha\,\varepsilon^2 - 2\,\beta\,\varepsilon \leq 0. \tag{17}$$

Solving this second-degree equation in ε leads to the establishment of the result presented in (7): ΔV_L(k) ≤ 0 if ε satisfies the following condition:

$$0 \leq \varepsilon \leq \varepsilon_s, \tag{18}$$

where $\varepsilon_s = 2\operatorname{tr}\big(J_\theta\cdot\widetilde{\theta}\big)/\|J_\theta\|^2$. Using the expressions of $J_\theta$ and $\widetilde{\theta}$, we obtain

$$\varepsilon_s = \frac{2\operatorname{tr}\big(H_1\cdot\widetilde{W}^T(k) + H_2\cdot\widetilde{V}^T(k)\big)}{D + \sum_j\big[y^m(k+1)\big(1 - y^m(k+1)\big)\cdot e(k)\cdot O_j\big]^2}.$$

Theorem 2. Let $\varepsilon_I = [\varepsilon_W, \varepsilon_V]^T$ be the learning rates for the tuning parameters of the neural identifier $\theta = [W, V]^T$ and let $\lambda$ be defined as $\lambda = [\lambda_1, \lambda_2]$, where

$$\lambda_1 = \frac{\partial y^m(k)}{\partial W(k)}, \qquad \lambda_2 = \frac{\partial y^m(k)}{\partial V(k)}. \tag{20}$$

Then asymptotic convergence is guaranteed if the learning rates are chosen to satisfy

$$\varepsilon_W < \frac{2}{\lambda_{1\max}^2}, \qquad \varepsilon_V < \frac{2}{\lambda_{2\max}^2}, \tag{21}$$

where

$$\lambda_{1\max} = \left\|\frac{\partial y^m(k)}{\partial W(k)}\right\|_2 = \max\left(\left(\frac{\partial y^m(k)}{\partial W(k)}\right)^T\cdot\frac{\partial y^m(k)}{\partial W(k)}\right), \qquad \lambda_{2\max} = \left\|\frac{\partial y^m(k)}{\partial V(k)}\right\|_2 = \max\left(\left(\frac{\partial y^m(k)}{\partial V(k)}\right)^T\cdot\frac{\partial y^m(k)}{\partial V(k)}\right). \tag{22}$$

Lemma 1. If the learning rates are chosen as $\varepsilon_W = \varepsilon_V = \varepsilon$, then one has the convergence condition

$$\varepsilon < \frac{1}{\lambda_{1\max}^2} + \frac{1}{\lambda_{2\max}^2}. \tag{23}$$

Proof. Considering the Lyapunov function

$$V_L(k) = \frac{1}{2}\,e^2(k), \tag{24}$$

where

$$e(k) = y(k) - y^m(k). \tag{25}$$

The computation of the ΔV_L(k) expression leads to
$$\Delta V_L(k) = -\frac{e^2(k)}{2}\left[\varepsilon_W\left\|\frac{\partial y^m(k)}{\partial W(k)}\right\|_2^2 \times\left(2 - \varepsilon_W\left\|\frac{\partial y^m(k)}{\partial W(k)}\right\|_2^2\right) + \varepsilon_V\left\|\frac{\partial y^m(k)}{\partial V(k)}\right\|_2^2 \times\left(2 - \varepsilon_V\left\|\frac{\partial y^m(k)}{\partial V(k)}\right\|_2^2\right)\right],$$

$$\Delta V_L(k) \leq 0, \tag{28}$$

so

$$-\left\|\frac{\partial y^m(k)}{\partial W(k)}\right\|_2^2\left(2 - \varepsilon_W\left\|\frac{\partial y^m(k)}{\partial W(k)}\right\|_2^2\right) \leq 0, \qquad -\left\|\frac{\partial y^m(k)}{\partial V(k)}\right\|_2^2\left(2 - \varepsilon_V\left\|\frac{\partial y^m(k)}{\partial V(k)}\right\|_2^2\right) \leq 0. \tag{29}$$

Finally, when we define the matrix norm $\|\cdot\|_2$ by

$$\|\rho\|_2 = \max\big(\rho^T\cdot\rho\big), \tag{30}$$

the theorem results are established.

The stability condition ΔV_L(k) ≤ 0 is satisfied only if

$$\varepsilon_W < \frac{2}{\lambda_{1\max}^2}, \qquad \varepsilon_V < \frac{2}{\lambda_{2\max}^2}, \tag{31}$$

where

$$\lambda_{1\max} = \left\|\frac{\partial y^m(k)}{\partial W(k)}\right\|_2 = \max\left(\left(\frac{\partial y^m(k)}{\partial W(k)}\right)^T\cdot\frac{\partial y^m(k)}{\partial W(k)}\right), \qquad \lambda_{2\max} = \left\|\frac{\partial y^m(k)}{\partial V(k)}\right\|_2 = \max\left(\left(\frac{\partial y^m(k)}{\partial V(k)}\right)^T\cdot\frac{\partial y^m(k)}{\partial V(k)}\right). \tag{32}$$

Remark 1. Through simulations, learning rates are chosen belonging to the defined learning-rate stability range to prove the effectiveness of the proposed CSBP algorithm. The learning rate which guarantees convergence corresponds to

$$\varepsilon_W = \frac{2}{\phi + \lambda_{1\max}^2}, \qquad \varepsilon_V = \frac{2}{\phi + \lambda_{2\max}^2}. \tag{33}$$
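To illustrate how Remark 1 might be applied in practice, the following sketch computes per-step adaptive rates according to (33). It reuses the illustrative identifier_forward helper from the earlier sketches, approximates λ²₁max and λ²₂max by the squared Euclidean norms of the current sensitivity vectors ∂y^m/∂W and ∂y^m/∂V, and assumes φ is a small positive design constant that keeps the rates strictly inside the stability bounds (21).

```python
def csbp_learning_rates(x, W, v, phi=0.1):
    """Adaptive, stability-constrained learning rates, after eq. (33).

    phi > 0 is assumed to be a small design constant; it keeps
    eps_W < 2/lambda1max^2 and eps_V < 2/lambda2max^2 as required by (21).
    """
    ym, O = identifier_forward(x, W, v)
    dym_dv = ym * (1.0 - ym) * O                                # dy^m/dv_j
    dym_dW = np.outer(x, ym * (1.0 - ym) * v * O * (1.0 - O))   # dy^m/dw_ij

    lam1_sq = float(np.sum(dym_dW ** 2))  # approximates lambda1max^2
    lam2_sq = float(np.sum(dym_dv ** 2))  # approximates lambda2max^2

    eps_W = 2.0 / (phi + lam1_sq)         # eq. (33)
    eps_V = 2.0 / (phi + lam2_sq)
    return eps_W, eps_V
```

In a training loop, eps_W and eps_V would replace the single fixed eps of backprop_step, with eps_W applied to the W update and eps_V to the v update.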