Wuneng Zhou
Jun Yang
Liuwei Zhou
Dongbing Tong
Stability and Synchronization Control of Stochastic Neural Networks
Studies in Systems, Decision and Control
Volume 35
Series editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: kacprzyk@ibspan.waw.pl
About this Series
The series “Studies in Systems, Decision and Control” (SSDC) covers both new developments and advances, as well as the state of the art, in the various areas of broadly perceived systems, decision making and control, quickly, up to date and with high quality. The intent is to cover the theory, applications, and perspectives on the state of the art and future developments relevant to systems, decision making, control, complex processes and related areas, as embedded in the fields of engineering, computer science, physics, economics, social and life sciences, as well as the paradigms and methodologies behind them. The series contains monographs, textbooks, lecture notes and edited volumes in systems, decision making and control spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and others. Of particular value to both the contributors and the readership are the short publication timeframe and the worldwide distribution and exposure, which enable a wide and rapid dissemination of research output.
Wuneng Zhou
School of Information Sciences and Technology
Donghua University
Shanghai, China

Liuwei Zhou
School of Information Sciences and Technology
Donghua University
Shanghai, China
The past few decades have witnessed the successful application of neural networks
in many areas such as image processing, pattern recognition, associative memory,
and optimization problems.
In neural network dynamics, the state variables of the model are the output signals of the neurons, and a steady output is needed as the network evolves. The study of the stability of neural networks is therefore of utmost importance. In general, there are two kinds of stability: asymptotic stability and exponential stability.
On the other hand, neurons respond to information jointly, as a cluster, rather than as single neurons. This response takes the form of discharge behavior, which should be made consistent, that is, synchronized, by some control method. Therefore, the synchronization control of neural networks is also an important research topic. As with stability, there are asymptotic synchronization and exponential synchronization.
In the models of neural network dynamics, the following phenomena arise.
First, time delay, which is inherent in real neural networks and may cause oscillation and instability, has gained considerable research attention.
Second, sometimes there are uncertain parameters in neural networks. Therefore,
it is important to investigate the robust stability of neural networks with parameter
uncertainties.
Third, it has been shown that many neural networks may experience abrupt
changes in their structure and parameters due to phenomena such as component
failures or repairs, changing subsystem interconnections, and abrupt environmental
disturbances. In this situation, neural networks may be treated as systems with a finite number of modes that may switch from one to another at different times, and can be described by a finite-state Markov chain. The stability analysis problem for neural networks with Markovian switching has therefore received much research attention and produced a series of results.
Fourth, when the states of a system are determined not only by the current and past states but also by the derivatives of the past states, the system is called a neutral-type system. Indeed, some physical systems in the real world can be described by neutral-type models.
Finally, as we know, the synaptic transmission in real nervous systems can be
viewed as a noisy process brought on by random fluctuations from the release of
neurotransmitters and other probabilistic causes. In general, Gaussian noise has
been regarded as the disturbance arising in neural networks. The chief characteristic
of Gaussian noise is its continuous property.
However, in actual neural networks, the neuron's membrane potential is affected not only by Gaussian noise but also by instantaneous disturbances arising from Poisson spikes from other neurons. This requires that the neuron system possess a large number of impinging synapses and that the synapses have small membrane effects due to small coupling coefficients. These impinging synapses generate discontinuous disturbances in the neural networks.
The discontinuous disturbance cannot be modeled by Gaussian noise. Among stochastic processes, the Lévy process is discontinuous, and it can be decomposed into a continuous part and a jump part by the Lévy-Itô decomposition. So it is reasonable to model the noise of neural networks as a Lévy process. Therefore, the stability and synchronization analysis problems for neural networks with Lévy noise, possibly also with Markovian switching parameters, pose a new and serious challenge.
Focusing on the above models of neural network dynamics, in this book we consider the problems of stability and synchronization. Especially for stochastic neural networks, we study the almost surely asymptotic/exponential stability and synchronization and the pth moment asymptotic/exponential stability and synchronization. All of the results in this book are the authors' recent research achievements. The chapters are as follows.
Chapter 1 is devoted to the relevant mathematical foundations, including some main concepts and formulas, such as stochastic processes, martingales, stochastic differential equations, Itô's formula, and M-matrices, as well as the inequalities, both elementary and matrix, used in this book.
Chapter 2 is concerned with exponential stability analysis for neural networks
with fuzzy logical BAM and Markovian jump and synchronization control problem
of stochastically coupled neural networks.
Chapter 3 is devoted to some neural network models with uncertainty. In this
chapter, the robust stability of high-order neural networks and hybrid stochastic
neural networks is first investigated. The robust anti-synchronization and robust lag
synchronization of chaotic neural networks are discussed in the sequel.
Chapter 4 is devoted to adaptive synchronization for some neural network models. In this chapter, we study the problems of adaptive synchronization of BAM delayed neural networks, synchronization of stochastic T-S fuzzy neural networks with time delay and Markovian jumping parameters, synchronization of delayed neural networks based on parameter identification and via output coupling, and adaptive a.s. asymptotic synchronization for stochastic delay neural networks with
Symbols and Acronyms
Z                  The set of integers
R                  The set of real numbers
R+                 [0, ∞), the set of all nonnegative real numbers
R^n                The n-dimensional real Euclidean space
R^{m×n}            The space of all m × n real matrices
S                  {1, 2, ..., S}, the finite state space of a Markov chain
I                  The identity matrix
a ∨ b              The maximum of a and b
a ∧ b              The minimum of a and b
A > 0              A is symmetric positive definite
A ≥ 0              A is symmetric positive semi-definite
A < 0              A is symmetric negative definite
A ≤ 0              A is symmetric negative semi-definite
A^T                The transpose of matrix A
A^{-1}             The inverse of matrix A
trace(A)           The trace of a square matrix A
ρ(A)               The spectral radius of matrix A
λ_max(A)           The maximum eigenvalue of matrix A
λ_min(A)           The minimum eigenvalue of matrix A
det(A)             The determinant of matrix A
diag{·}            A block-diagonal matrix
|·|                The Euclidean norm of a vector or the trace norm of a matrix
‖A‖                ‖A‖ := sup{|Ax| : |x| = 1} = √(λ_max(A^T A))
f: A → B           The mapping f from A to B
C([-τ, 0]; R^n)    The space of continuous R^n-valued functions ϕ defined on [-τ, 0] with the norm ‖ϕ‖ = sup_{-τ≤θ≤0} |ϕ(θ)|
C^{2,1}(D × R+; R) The family of all real-valued functions V(x, t) defined on D × R+ which are twice continuously differentiable in x ∈ D and once differentiable in t ∈ R+
L^p_{F_t}([-τ, 0]; R^n)  The family of F_t-measurable C([-τ, 0]; R^n)-valued random variables ξ such that E‖ξ‖^p < ∞
L^1(R+; R+)        The family of functions γ: R+ → R+ such that ∫_0^∞ γ(t) dt < ∞
l^2[0, ∞)          The space of square-integrable vector functions on [0, ∞)
Ω                  The sample space
F                  A σ-algebra of subsets of Ω
(Ω, F, P)          A probability space
{F_t}_{t≥0}        A filtration
⟨M, M⟩_t           The quadratic variation of the martingale or local martingale {M_t}_{t≥0}
BAM                Bidirectional associative memory
CNN                Chaotic neural networks
DNN                Delayed neural networks
LMI                Linear matrix inequality
NN                 Neural networks
NSDDE              Neutral stochastic delayed differential equation
SDDE               Stochastic delayed differential equation
SDE                Stochastic differential equation
SDNN               Stochastic delayed neural networks
SNN                Stochastic neural networks
Chapter 1
Relative Mathematical Foundation
In this chapter, we will present some concepts and formulas as well as several impor-
tant inequalities which will be used throughout this book. We will begin with some
elementary concepts and formulas, such as stochastic processes and martingales,
SDEs, M-matrix, and Itô’s formula. Then some inequalities frequently used in this
book will follow in the sequel.
and

−∞ < lim_{t→∞} M(t) < ∞  a.s.

That is, all three processes X(t), A_2(t), and M(t) converge to finite random variables.
Lemma 1.2 (Strong law of large numbers for local martingales) [1, 9] Let M = {M(t)}_{t≥0} be a real-valued local martingale vanishing at t = 0. Then

lim_{t→∞} ∫_0^t d⟨M, M⟩_s / (1 + s)² < ∞  a.s.  ⇒  lim_{t→∞} M(t)/t = 0  a.s.
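A quick simulation can illustrate the lemma. Brownian motion B(t) is a martingale with quadratic variation ⟨B, B⟩_t = t, so ∫_0^∞ ds/(1+s)² = 1 < ∞ and the lemma predicts B(t)/t → 0 a.s. The following Python sketch (an illustration only, not part of the book's development) checks this on one sample path:

```python
import numpy as np

rng = np.random.default_rng(0)

# Brownian motion has <B, B>_t = t, so int_0^inf d<B,B>_s / (1+s)^2 = 1 < inf,
# and Lemma 1.2 predicts B(t)/t -> 0 almost surely as t -> infinity.
T, n = 10_000.0, 1_000_000
dt = T / n
increments = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.cumsum(increments)          # one sample path of B(t) on (0, T]

t = dt * np.arange(1, n + 1)
ratio = np.abs(B) / t
print(ratio[-1])                   # close to 0 for large t
```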
Four types of stochastic differential equations (SDEs) concerning the topic of this
book are displayed as follows.
1. SDE and Markov chain
The following equation is the general form of an n-dimensional stochastic differential equation with Markovian switching (but without Lévy jumps):

dx(t) = f(t, r(t), x(t), x_τ(t)) dt + g(t, r(t), x(t), x_τ(t)) dω(t)  (1.3)
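As an illustration of how a trajectory of an equation of this form can be approximated, the following sketch applies an Euler-Maruyama step to a scalar delay equation with a two-state Markov chain. All coefficients here are illustrative choices, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative coefficients (not from the book): two regimes i = 0, 1.
a = [1.0, 3.0]                    # decay rates per regime
sigma = [0.2, 0.5]                # noise intensities per regime
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])       # generator of the Markov chain r(t)

tau, dt, T = 0.5, 0.001, 5.0
n = int(T / dt)
d = int(tau / dt)

x = np.empty(n + 1)
x[0] = 1.0
hist = np.ones(d)                 # constant initial history on [-tau, 0)
r = 0                             # initial regime

for k in range(n):
    # delayed state: negative index picks x((k - d)*dt) out of the history
    x_tau = hist[k - d] if k < d else x[k - d]
    drift = -a[r] * x[k] + 0.5 * x_tau     # f(t, r, x, x_tau)
    diff = sigma[r] * x[k]                 # g(t, r, x, x_tau)
    x[k + 1] = x[k] + drift * dt + diff * np.sqrt(dt) * rng.normal()
    # regime switches with probability q_{r,1-r} * dt over a short step
    if rng.random() < -Q[r, r] * dt:
        r = 1 - r
```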
Lemma 1.4 ([6]) Let p > 1 and |D(y, i)| ≤ k|y| hold. Then

|x|^p ≤ k|y|^p + |x − D(y, i)|^p / (1 − k)^{p−1},  ∀(x, y, i) ∈ R^n × R^n × S,

or

−|x − D(y, i)|^p ≤ −(1 − k)^{p−1}|x|^p + k(1 − k)^{p−1}|y|^p.
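The first inequality can be spot-checked numerically. The sketch below is illustrative, with an arbitrary choice of D satisfying |D(y, i)| ≤ k|y|:

```python
import numpy as np

rng = np.random.default_rng(2)

# Numerical spot check of Lemma 1.4, taking D at its bound |D| = k|y|.
p, k = 2.5, 0.4
for _ in range(1000):
    x = rng.normal(size=3)
    y = rng.normal(size=3)
    # any D with |D| <= k|y| works; here D points along x
    D = k * np.linalg.norm(y) * (x / np.linalg.norm(x))
    lhs = np.linalg.norm(x) ** p
    rhs = k * np.linalg.norm(y) ** p \
        + np.linalg.norm(x - D) ** p / (1 - k) ** (p - 1)
    assert lhs <= rhs + 1e-9
print("Lemma 1.4 verified on 1000 random samples")
```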
LV(x, t) = V_t(x, t) + V_x(x, t) f(x) + (1/2) trace[g^T(x) V_{xx}(x, t) g(x)]  (1.6)

which is called the diffusion operator of (1.1), where

V_x(x, t) = (∂V(x, t)/∂x_1, ..., ∂V(x, t)/∂x_n),  V_{xx}(x, t) = (∂²V(x, t)/(∂x_i ∂x_j))_{n×n}.
For system (1.3) and system (1.4), the diffusion operator takes, respectively, the forms

LV(t, i, x, y) = V_t(x, t, i) + V_x(x, t, i) f(t, i, x, y) + (1/2) trace[g^T(t, i, x, y) V_{xx}(x, t, i) g(t, i, x, y)] + Σ_{j=1}^S γ_{ij} V(x, t, j),  (1.7)

and

LV(t, i, x, y) = V_t(t, i, x − D(y, i)) + V_x(t, i, x − D(y, i)) f̄(t, i, x, y) + (1/2) trace[ḡ^T(t, i, x, y) V_{xx}(t, i, x − D(y, i)) ḡ(t, i, x, y)] + Σ_{j=1}^S γ_{ij} V(t, j, x − D(y, i)).  (1.8)
The jump-diffusion operator for SDDE with Lévy noise (1.5) is defined by (see
[20])
LV(x, y, t, i) = V_t(x, t, i) + V_x(x, t, i) f(x, y, t, i) + (1/2) trace[g^T(x, y, t, i) V_{xx}(x, t, i) g(x, y, t, i)]
  + Σ_{k=1}^l ∫_R [V(x + h^{(k)}(x, y, t, i, z_k), t, i) − V(x, t, i)] ν_k(dz_k) + Σ_{j=1}^S γ_{ij} V(x, t, j).  (1.9)
V(x, t, r(t)) = V(x(0), 0, r_0) + ∫_0^t LV(x(s), x_τ(s), s, r(s)) ds
  + ∫_0^t V_x(x(s), s, r(s)) g(x(s), x_τ(s), s, r(s)) dB(s)  (1.10)
  + ∫_0^t ∫_R [V(x(s), s, r_0 + c(r(s), u)) − V(x(s), s, r(s))] μ(ds, du).
V(x, t, r(t)) = V(x(0), 0, r_0) + ∫_0^t LV(x(s), x(s − δ(s)), s, r(s)) ds
  + ∫_0^t V_x(x(s), s, r(s)) g(x(s), x(s − δ(s)), s, r(s)) dB(s)  (1.11)
  + Σ_{k=1}^l ∫_0^t ∫_R [V(x(s⁻) + h^{(k)}(x(s⁻), x((s − δ(s))⁻), s, r(s), z_k), s, r(s)) − V(x(s⁻), s, r(s))] Ñ_k(ds, dz_k).
The details of the function c and the martingale measure μ(ds, du) can be seen
in [9, pp. 46–48].
Obviously, (1.10) and (1.11) still hold if we replace 0 and t with bounded stopping times τ_1 and τ_2, respectively. Thus, the following lemmas are derived.
For systems (1.3) and (1.5), we have the Dynkin formula as follows.
Lemma 1.5 (Dynkin formula) [9, 11] For system (1.3), let V ∈ C^{2,1}(R^n × R+ × S; R+) and τ_1, τ_2 be bounded stopping times such that 0 ≤ τ_1 ≤ τ_2 a.s. (i.e., almost surely). If V(x(t), t, r(t)) and LV(x(t), x_τ(t), t, r(t)) are bounded on t ∈ [τ_1, τ_2] with probability 1, then

EV(x(τ_2), τ_2, r(τ_2)) = EV(x(τ_1), τ_1, r(τ_1)) + E ∫_{τ_1}^{τ_2} LV(x(s), x_τ(s), s, r(s)) ds.
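A simple sanity check of the Dynkin formula is possible in the special case dX = dB(t) with V(x) = x², where LV ≡ 1 and hence EV(X_t) = x_0² + t. The Monte Carlo sketch below uses illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# For dX = dB(t) and V(x) = x^2, Ito's formula gives LV = 1, so the Dynkin
# formula predicts E V(X_t) = x0^2 + t. We check this by Monte Carlo.
x0, t_end, paths, steps = 1.0, 1.0, 200_000, 100
dt = t_end / steps
X = np.full(paths, x0)
for _ in range(steps):
    X += np.sqrt(dt) * rng.normal(size=paths)   # Euler step, exact for BM

estimate = np.mean(X ** 2)
print(estimate)    # near x0^2 + t_end = 2.0
```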
Lemma 1.6 (Dynkin formula) [9] For system (1.5), let τ1 , τ2 be bounded stopping
times such that 0 ≤ τ1 ≤ τ2 a.s. If V (x(t), t, r (t)), and LV (x(t), x(t −δ(t)), t, r (t))
are bounded on t ∈ [τ1 , τ2 ] with probability 1, then
Lemma 1.7 (Dynkin formula) (See Ref. [6]) Let V ∈ C^{2,1}(R+ × S × R^n; R) and x(t) be a solution of Eq. (1.4). Then, for any stopping times 0 ≤ ρ_1 ≤ ρ_2 < ∞ a.s.,

EV(ρ_2, r(ρ_2), x(ρ_2) − D(x(ρ_2 − τ), r(ρ_2)))
  = EV(ρ_1, r(ρ_1), x(ρ_1) − D(x(ρ_1 − τ), r(ρ_1)))  (1.13)
  + E ∫_{ρ_1}^{ρ_2} LV(s, r(s), x(s), x(s − τ)) ds
1.1 Main Concepts and Formulas 7
holds, provided that V(t, r(t), x(t) − D(x(t − τ), r(t))) and LV(t, r(t), x(t), x(t − τ)) are bounded on t ∈ [ρ_1, ρ_2] with probability 1, where the operator LV: R+ × S × R^n × R^n → R is defined by (1.8).
For system (1.3) and (1.4), the following two lemmas are used to determine the
almost surely asymptotic stability of their solutions.
Assumption 1.8 ([19]) Both f and g satisfy the local Lipschitz condition. That is,
for each h > 0, there is an L h > 0 such that
Lemma 1.9 ([19]) Let Assumption 1.8 hold. Assume that there are functions V ∈ C^{2,1}(R+ × S × R^n; R+), ψ ∈ L^1(R+; R+), and w_1, w_2 ∈ C(R^n; R+) such that

LV(t, i, x, y) ≤ ψ(t) − w_1(x) + w_2(y),  ∀(t, i, x, y) ∈ R+ × S × R^n × R^n,  (1.14)

and

lim_{|x|→∞} inf_{0≤t<∞, i∈S} V(t, i, x) = ∞.  (1.16)
Lemma 1.10 ([10]) Let system (1.4) satisfy the following hypotheses.
(H1) Both f¯ and ḡ satisfy the local Lipschitz condition. That is, for each h > 0,
there is an L h > 0 such that
Assume also that there are functions V ∈ C^{2,1}(R+ × S × R^n; R), γ ∈ L^1(R+; R+), Q ∈ C(R^n × [−τ, ∞); R+) and W ∈ C(R^n; R+) such that

LV(t, i, x, y) ≤ γ(t) − Q(t, x) + Q(t − τ, y) − W(x − D̄(y, i))  (1.17)
1.1.4 M-Matrix
The theory of M-matrices has played an important role in the study of stability,
stabilization, control, etc. We cite the relative concepts of M-matrix below.
Definition 1.11 ([3]) A square matrix M = (m i j )n×n is called a nonsingular
M-matrix if M can be expressed in the form M = s In − G with some G ≥ 0
(i.e., each element of G is nonnegative) and s > ρ(G), where ρ(G) is the spectral
radius of G.
Lemma 1.12 ([9]) If M = (m_{ij})_{n×n} ∈ R^{n×n} with m_{ij} ≤ 0 (i ≠ j), then the following statements are equivalent.
(i) M is a nonsingular M-matrix.
(ii) Every real eigenvalue of M is positive.
(iii) M is positive stable. That is, M −1 exists and M −1 > 0 (i.e., M −1 ≥ 0 and
at least one element of M −1 is positive).
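Definition 1.11 and Lemma 1.12 translate directly into a numerical test. The helper below is an illustrative sketch, not from the book; it uses the fact that, for a matrix with nonpositive off-diagonal entries, the choice of s does not matter once G = sI − M ≥ 0, since ρ(G + cI) = ρ(G) + c for nonnegative G and c ≥ 0:

```python
import numpy as np

def is_nonsingular_m_matrix(M, tol=1e-10):
    """Check Definition 1.11: M = s*I - G with G >= 0 elementwise, s > rho(G)."""
    M = np.asarray(M, dtype=float)
    n = len(M)
    if np.any(M[~np.eye(n, dtype=bool)] > tol):
        return False                          # off-diagonal entries must be <= 0
    s = max(M.diagonal().max(), 0.0) + 1.0    # any s >= max diagonal works
    G = s * np.eye(n) - M                     # then G >= 0 elementwise
    return s > np.max(np.abs(np.linalg.eigvals(G)))   # s > rho(G)

# [[2, -1], [-1, 2]] is a classic nonsingular M-matrix: its eigenvalues 1 and 3
# are positive and its inverse (1/3)[[2, 1], [1, 2]] is nonnegative (Lemma 1.12).
print(is_nonsingular_m_matrix([[2, -1], [-1, 2]]))    # True
print(is_nonsingular_m_matrix([[1, -2], [-2, 1]]))    # False (eigenvalue -1)
```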
1.2 Frequently Used Inequalities

There are several inequalities which are used frequently in this book. The inequalities with respect to vectors and scalars are gathered in the first part, and those with respect to matrices are included in the second part.
More generally, this inequality can be written in the following form [7]:

x^T y + y^T x ≤ x^T M x + y^T M^{−1} y.

Lemma 1.14 (Young's inequality) [9] Let a, b ∈ R and β ∈ [0, 1]. Then |a|^β |b|^{1−β} ≤ β|a| + (1 − β)|b|.
Lemma 1.16 ([2]) Let Z ∈ R^{n×n} be a symmetric matrix. Then the inequality

λ_m(Z) x^T x ≤ x^T Z x ≤ λ_M(Z) x^T x

holds for any x ∈ R^n.
Lemma 1.17 (Gronwall's inequality) [9, 11] Let T > 0 and u(·) be a Borel measurable bounded nonnegative function on [0, T]. If

u(t) ≤ c + v ∫_0^t u(s) ds,  ∀t ∈ [0, T],

for constants c, v ≥ 0, then u(t) ≤ c e^{vt} for all t ∈ [0, T].
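Gronwall's bound can be illustrated numerically by solving the extremal case u(t) = c + v ∫_0^t u(s) ds with a left-endpoint rule, which stays below c·e^{vt}. The constants below are arbitrary illustrative choices:

```python
import numpy as np

# Solve the worst case u(t) = c + v * int_0^t u(s) ds numerically
# (left-endpoint rule) and compare with Gronwall's bound c * exp(v * t).
c, v, T, n = 2.0, 1.5, 1.0, 100_000
dt = T / n
u = np.empty(n + 1)
u[0] = c
integral = 0.0
for k in range(n):
    integral += u[k] * dt
    u[k + 1] = c + v * integral    # equivalent to u[k+1] = u[k]*(1 + v*dt)

t = np.linspace(0.0, T, n + 1)
bound = c * np.exp(v * t)
print(np.max(u - bound))           # <= 0: the discrete solution sits below
```

The left-endpoint discretization gives u[k] = c(1 + v·dt)^k ≤ c·e^{v·k·dt}, so the numerical curve never exceeds the bound.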
Lemma 1.18 (Doob's martingale inequality, see Ref. [9]) Let {M_t}_{t≥0} be an R^n-valued martingale and [a, b] a bounded interval in R+. If p > 1 and M_t ∈ L^p(Ω; R^n) (the family of R^n-valued random variables X with E|X|^p < ∞), then

E( sup_{a≤t≤b} |M_t|^p ) ≤ (p/(p − 1))^p E|M_b|^p.
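A Monte Carlo spot check of Doob's inequality for p = 2 and Brownian motion on [0, 1] (all parameters below are illustrative; the empirical left-hand side is roughly 1.5, well below the bound 4·E|B_1|² ≈ 4):

```python
import numpy as np

rng = np.random.default_rng(4)

# Doob with p = 2 for Brownian motion on [0, 1]:
# E(sup_t |B_t|^2) <= (2/1)^2 * E|B_1|^2 = 4.
paths, steps = 20_000, 200
dt = 1.0 / steps
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, steps)), axis=1)

lhs = np.mean(np.max(np.abs(B), axis=1) ** 2)   # approximates E(sup |B_t|^2)
rhs = 4.0 * np.mean(B[:, -1] ** 2)              # approximates 4 * E|B_1|^2
print(lhs, rhs)
```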
Lemma 1.20 (Jensen's inequality) [5, 16, 17] For any positive definite matrix M > 0, scalar γ > 0, and vector function w: [0, γ] → R^n such that the integrations concerned are well defined, the following inequality holds:

(∫_0^γ w(s) ds)^T M (∫_0^γ w(s) ds) ≤ γ ∫_0^γ w^T(s) M w(s) ds.
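A quadrature check of this inequality with an arbitrary vector function w and an arbitrary positive definite M (both purely illustrative choices):

```python
import numpy as np

# Midpoint-rule check of Jensen's integral inequality:
# (int_0^g w)^T M (int_0^g w) <= g * int_0^g w(s)^T M w(s) ds.
g, n = 2.0, 100_000
ds = g / n
s = (np.arange(n) + 0.5) * ds                       # midpoint grid on [0, g]
w = np.vstack([np.sin(s), np.cos(3 * s), s ** 2]).T  # arbitrary w(s) in R^3
M = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 3.0]])                      # positive definite

iw = w.sum(axis=0) * ds                              # int_0^g w(s) ds
lhs = iw @ M @ iw
rhs = g * np.einsum('si,ij,sj->s', w, M, w).sum() * ds
print(lhs <= rhs)    # True
```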
Lemma 1.22 ([12, 18]) Given matrices Ω, Γ and Ξ with appropriate dimensions and with Ω symmetric, then
Ω + Γ FΞ + Ξ T F T Γ T < 0
for any F satisfying F T F ≤ I , if and only if there exists a scalar ε > 0 such that
Ω + εΓ Γ T + ε−1 Ξ T Ξ < 0.
Lemma 1.23 Let D, S and F be real matrices of appropriate dimensions with F^T F ≤ I. Then, for any scalar ε > 0, we have D F S + (D F S)^T ≤ ε^{−1} D D^T + ε S^T S.
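This inequality follows by completing the square: RHS − LHS = (ε^{-1/2}D − ε^{1/2}S^T F^T)(ε^{-1/2}D − ε^{1/2}S^T F^T)^T + ε S^T(I − F^T F)S ≥ 0. A numerical spot check with illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(5)

# Spot check of D F S + (D F S)^T <= eps^{-1} D D^T + eps S^T S for F^T F <= I:
# the difference RHS - LHS must be positive semi-definite.
n = 4
D = rng.normal(size=(n, n))
S = rng.normal(size=(n, n))
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
F = 0.7 * Q                        # then F^T F = 0.49 I <= I

for eps in (0.1, 1.0, 10.0):
    lhs = D @ F @ S + (D @ F @ S).T
    rhs = (1.0 / eps) * D @ D.T + eps * S.T @ S
    gap = np.linalg.eigvalsh(rhs - lhs)
    assert gap.min() >= -1e-9      # RHS - LHS is PSD (up to rounding)
print("inequality holds for all tested eps")
```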
References
8. B. Liu, X.Z. Liu, G.R. Chen, Robust impulsive synchronization of uncertain dynamical net-
works. IEEE Trans. Circuits Syst. I. 52(7), 1431–1441 (2005)
9. X. Mao, C. Yuan, Stochastic Differential Equations with Markovian Switching (Imperial Col-
lege Press, London, 2006)
10. X. Mao, Y. Shen, C. Yuan, Almost surely asymptotic stability of neutral stochastic differential
delay equations with Markovian switching. Stoch. Process. Appl. 118(8), 1385–1406 (2008)
11. B. Øksendal, Stochastic Differential Equations: An Introduction with Applications (Springer, Berlin, 2005)
12. I.R. Petersen, A stabilization algorithm for a class of uncertain linear systems. Syst. Control
Lett. 8(4), 351–357 (1987)
13. M.G. Rosenblum, A.S. Pikovsky, J. Kurths, Phase synchronization in driven and coupled
chaotic oscillators. IEEE Trans. Circuits Syst. 44(10), 874–881 (1997)
14. Z. Wang, Y. Liu, K. Fraser, X. Liu, Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays. Phys. Lett. A 354(4), 288–297 (2006)
15. Z. Wang, Y. Liu, L. Liu, X. Liu, Exponential stability of delayed recurrent neural networks
with Markovian jumping parameters. Phys. Lett. A 356(4), 346–352 (2006)
16. Z. Wang, H. Shu, J. Fang, X. Liu, Robust stability for stochastic Hopfield neural networks with
time delays. Nonlinear Anal.: Real World Appl. 7(5), 1119–1128 (2006)
17. Z.-G. Wu, P. Shi, H. Su, J. Chu, Stochastic synchronization of Markovian jump neural networks
with time-varying delay using sampled-data. IEEE Trans. Cybern. 43(6), 1796–1806 (2013)
18. L. Xie, Output feedback H∞ control of systems with parameter uncertainty. Int. J. Control
63(4), 741–750 (1996)
19. C. Yuan, X. Mao, Robust stability and controllability of stochastic differential delay equations
with Markovian switching. Automatica 40(3), 343–354 (2004)
20. C.G. Yuan, X.R. Mao, Stability of stochastic delay hybrid systems with jumps. Eur. J. Control
16(6), 595–608 (2010)
Chapter 2
Exponential Stability and Synchronization
Control of Neural Networks
In this chapter, we are concerned with exponential stability analysis for neural net-
works with fuzzy logical BAM and Markovian jump and synchronization control
problem of stochastically coupled neural networks.
2.1.1 Introduction
It is well known that bidirectional associative memory (BAM) neural networks have been deeply investigated in recent years due to their applicability in image processing, signal processing, optimization, pattern recognition, and other areas. Many researchers have been attracted by this class of artificial neural networks, and a great deal of research has been done since fuzzy logical BAM neural networks were introduced by Kosko in [10–12]. In particular, since global stability is one of the most desirable dynamic properties of neural networks, there has been growing research interest in the stability analysis and synthesis of BAM neural networks. For example, the authors of [2] analyzed the global asymptotic stability of BAM neural networks with constant time delays, and the exponential stability of periodic solutions to Cohen-Grossberg-type BAM neural networks with time-varying delays was investigated in [36].
In recent years, the concept of incorporating fuzzy logic into neural networks has developed into an extensive research topic. Among the various methods developed for the analysis and synthesis of complex nonlinear systems, fuzzy logic control is an attractive and effective rule-based one. Therefore, fuzzy neural networks have received great attention, since they are a hybrid of fuzzy logic and traditional neural networks. In many of the model-based fuzzy control approaches, the well-known Takagi-Sugeno
for i = 1, 2, ..., m, j = 1, 2, ..., n, t ≥ 0, where u_i(t) and v_j(t) denote the activations of the ith and jth neurons, and g_j(t) and h_i(t) denote the states, respectively; a_i(t) and d_j(t) are positive constants, while f_k (k = 1, 2, ..., max(m, n)) are the activation functions; b_{ij}(t) and e_{ji}(t), c_{ij}(t) and w_{ji}(t) are elements of the fuzzy feedback MIN template and the fuzzy feedback MAX template; α_{ij}(t) and γ_{ji}(t), β_{ij}(t) and δ_{ji}(t) stand for the fuzzy feed-forward MIN template and the fuzzy feed-forward MAX template at time t; ∧ and ∨ denote the fuzzy AND and fuzzy OR operations, respectively; and I_i and J_j denote the external inputs. To draw our conclusions, we propose the following assumption.
Assumption 2.1 The neuron activation functions in (2.1) satisfy f_z(0) = 0 and are globally Lipschitz continuous; i.e., there exist positive constants λ_z such that | f_z(ξ_1) − f_z(ξ_2)| ≤ λ_z |ξ_1 − ξ_2| for all ξ_1, ξ_2 ∈ R.
Now, based on the fuzzy logical BAM neural networks of model (2.1), we discuss
the exponential stability of fuzzy logical BAM neural networks with Markovian
jumping parameters.
In this section, we consider the following fuzzy logical neural networks with
Markovian jumping parameters, which is actually a modification of (2.1):
u̇_i(t, r(t)) = −a_i(r(t)) u_i(t) + ∧_{j=1}^n b_{ij}(r(t)) f_j(v_j(t)) + ∨_{j=1}^n c_{ij}(r(t)) f_j(v_j(t))
  + ∧_{j=1}^n α_{ij}(r(t)) g_j(t) + ∨_{j=1}^n β_{ij}(t) g_j(t) + I_i(t),
v̇_j(t, r(t)) = −d_j(r(t)) v_j(t) + ∧_{i=1}^m e_{ji}(r(t)) f_i(u_i(t)) + ∨_{i=1}^m w_{ji}(r(t)) f_i(u_i(t))
  + ∧_{i=1}^m γ_{ji}(r(t)) h_i(t) + ∨_{i=1}^m δ_{ji}(t) h_i(t) + J_j(t).  (2.2)
L(t) = (l_1(t), l_2(t), ..., l_{m+n}(t))^T = (u_1(t), u_2(t), ..., u_m(t), v_1(t), ..., v_n(t))^T.
Definition 2.2 The system (2.2) is globally exponentially stable if there exist positive constants k and σ satisfying

‖L(t, η, φ) − L(t, η, ϕ)‖ ≤ σ ‖φ − ϕ‖ e^{−kt},  ∀t ≥ 0.
Lemma 2.3 Suppose l and l̄ are two states of system (2.2). Then the following inequalities hold for all r(t) = η ∈ S:

|∧_{j=1}^n τ_{ij} f_j(l_j) − ∧_{j=1}^n τ_{ij} f_j(l̄_j)| ≤ Σ_{j=1}^n |τ_{ij}| · | f_j(l_j) − f_j(l̄_j)|,

|∨_{j=1}^n ζ_{ij} f_j(l_j) − ∨_{j=1}^n ζ_{ij} f_j(l̄_j)| ≤ Σ_{j=1}^n |ζ_{ij}| · | f_j(l_j) − f_j(l̄_j)|.
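Both bounds reduce to the fact that the min and max operators are 1-Lipschitz with respect to the sup norm, which is dominated by the sum. A numerical spot check with an arbitrary activation (tanh here, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)

def f(x):
    return np.tanh(x)        # any activation works for this check

# Spot check of Lemma 2.3 with the fuzzy AND (min) and OR (max) operators.
for _ in range(1000):
    tau = rng.normal(size=5)
    l1 = rng.normal(size=5)
    l2 = rng.normal(size=5)
    bound = np.sum(np.abs(tau) * np.abs(f(l1) - f(l2)))
    assert abs(np.min(tau * f(l1)) - np.min(tau * f(l2))) <= bound + 1e-12
    assert abs(np.max(tau * f(l1)) - np.max(tau * f(l2))) <= bound + 1e-12
print("Lemma 2.3 verified on 1000 random samples")
```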
In this section, we discuss the global exponential stability of fuzzy logical BAM neural networks with Markovian jumping parameters. A new sufficient criterion is proposed to guarantee the exponential stability of the model.
Theorem 2.4 If there exist a positive scalar k > 0 and a positive definite matrix P_η > 0 such that the following linear matrix inequality holds:

k P_η − P_η W_η + G_η E_η P_η < 0,  (2.3)

then the system (2.2) is globally exponentially stable for any r(t) = η (∀η ∈ S), where G_η = diag(λ_1, ..., λ_{m+n}), W_η = diag(a_1(η), ..., a_m(η), d_1(η), ..., d_n(η)),

E_1 = (|b_{ij}(η)| + |c_{ij}(η)|)_{m×n},  E_2 = (|e_{ji}(η)| + |w_{ji}(η)|)_{n×m},  E_η = [ 0  E_1 ; E_2  0 ].
For the sake of discussing the global exponential stability of system (2.2), we consider the following Lyapunov-Krasovskii functional:

V(t, l(t), η) = e^{2kt} ( Σ_{i=1}^m P_i(η) l_i²(t) + Σ_{j=1}^n P_{m+j}(η) l_{m+j}²(t) ).
Let L be the weak infinitesimal generator of the random process {l(t), r(t), t ≥ 0}. Then, for each r(t) = η ∈ S, we can obtain

LV(t, l(t), η) = 2k e^{2kt} Σ_{i=1}^m P_i(η) l_i²(t) + 2 e^{2kt} Σ_{i=1}^m P_i(η) l_i(t) l̇_i(t)
  + 2 e^{2kt} ( k Σ_{j=1}^n P_{m+j}(η) l_{m+j}²(t) + Σ_{j=1}^n P_{m+j}(η) l_{m+j}(t) l̇_{m+j}(t) )
  + Σ_{η′=1}^S θ_{ηη′} e^{2kt} ( Σ_{i=1}^m P_i(η′) l_i²(t) + Σ_{j=1}^n P_{m+j}(η′) l_{m+j}²(t) )

= 2k e^{2kt} Σ_{i=1}^{m+n} P_i(η) l_i²(t)
  + 2 e^{2kt} { − Σ_{i=1}^m a_i(η) P_i(η) l_i²(t) − Σ_{j=1}^n d_j(η) P_{m+j}(η) l_{m+j}²(t)
  + Σ_{i=1}^m P_i(η) l_i(t) [∧_{j=1}^n b_{ij}(η) f_j(v_j(t, φ)) − ∧_{j=1}^n b_{ij}(η) f_j(v_j(t, ϕ))]
  + Σ_{i=1}^m P_i(η) l_i(t) [∨_{j=1}^n c_{ij}(η) f_j(v_j(t, φ)) − ∨_{j=1}^n c_{ij}(η) f_j(v_j(t, ϕ))]
  + Σ_{j=1}^n P_{m+j}(η) l_{m+j}(t) [∧_{i=1}^m e_{ji}(η) f_i(u_i(t, φ)) − ∧_{i=1}^m e_{ji}(η) f_i(u_i(t, ϕ))]
  + Σ_{j=1}^n P_{m+j}(η) l_{m+j}(t) [∨_{i=1}^m w_{ji}(η) f_i(u_i(t, φ)) − ∨_{i=1}^m w_{ji}(η) f_i(u_i(t, ϕ))] }.

Applying Lemma 2.3 and then Assumption 2.1 to the fuzzy terms yields

LV(t, l(t), η) ≤ 2k e^{2kt} Σ_{i=1}^{m+n} P_i(η) l_i²(t)
  + 2 e^{2kt} { − Σ_{i=1}^m a_i(η) P_i(η) l_i²(t) − Σ_{j=1}^n d_j(η) P_{m+j}(η) l_{m+j}²(t)
  + Σ_{i=1}^m P_i(η) l_i(t) [ Σ_{j=1}^n (|b_{ij}(η)| + |c_{ij}(η)|) · λ_{m+j} · |l_{m+j}(t)| ]
  + Σ_{j=1}^n P_{m+j}(η) l_{m+j}(t) [ Σ_{i=1}^m (|e_{ji}(η)| + |w_{ji}(η)|) · λ_i · |l_i(t)| ] }

≤ 2 e^{2kt} |l^T(t)| (k P_η − P_η W_η + G_η E_η P_η) |l(t)|.
which is equivalent to

‖L(t, η, φ) − L(t, η, ϕ)‖ ≤ (λ_M(P_η)/λ_m(P_η)) ‖φ − ϕ‖ e^{−kt}.
By Definition 2.2, we can draw the conclusion that the system (2.2) is globally exponentially stable for all r(t) = η ∈ S and t ≥ 0.
Remark 2.5 The conclusion holds only under Assumption 2.1; that is, the activation functions must satisfy the Lipschitz condition. The FLBAM model is different from the T-S fuzzy BAM model, which has been investigated in [3].
Remark 2.6 Note that (2.3) is a linear matrix inequality, which can be solved using the Matlab LMI toolbox. The matrix is relatively simple because we have not taken time delay into account. In general, time delays exist in many systems; in our model we ignore them for convenience.
In this section, a numerical example is given to demonstrate the feasibility of the proposed results.
Consider the following fuzzy logical BAM neural networks with Markovian jump-
ing parameters:
u̇_i(t, η) = −a_i(η) u_i(t) + ∧_{j=1}^2 b_{ij}(η) f_j(v_j(t)) + ∨_{j=1}^2 c_{ij}(η) f_j(v_j(t))
  + ∧_{j=1}^2 α_{ij}(η) g_j(t) + ∨_{j=1}^2 β_{ij}(η) g_j(t) + I_i(t),
v̇_j(t, η) = −d_j(η) v_j(t) + ∧_{i=1}^2 e_{ji}(η) f_i(u_i(t)) + ∨_{i=1}^2 w_{ji}(η) f_i(u_i(t))
  + ∧_{i=1}^2 γ_{ji}(η) h_i(t) + ∨_{i=1}^2 δ_{ji}(η) h_i(t) + J_j(t),

where a_1 = a_2 = d_1 = d_2 = 4.5, b = c = e = w = [ 1 1 ; 1 1 ], and α = β = γ = δ = [ 0.5 0.5 ; 0.5 0.5 ].
We take the activation functions as follows:

f_i(x) = (1/2)(|x + 1| − |x − 1|),  i = 1, 2.
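This activation equals x on [−1, 1] and saturates at ±1 outside, so its global Lipschitz constant is 1. A quick numerical check of the slope (purely illustrative):

```python
import numpy as np

# f(x) = (|x + 1| - |x - 1|) / 2 is piecewise linear with slope 1 on [-1, 1]
# and slope 0 outside, so its global Lipschitz constant is 1.
def f(x):
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

x = np.linspace(-3, 3, 2001)
slopes = np.abs(np.diff(f(x)) / np.diff(x))   # secant slopes on a fine grid
print(slopes.max())                           # close to 1.0, attained on [-1, 1]
```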
To satisfy Assumption 2.1, we take λ_i = 1 (i = 1, 2, 3, 4). Thus, with the numerical values above, we obtain the matrices W_η, G_η, and E_η as follows:

W_η = diag(4.5, 4.5, 4.5, 4.5),  G_η = diag(1, 1, 1, 1),

E_η = [ 0 0 2 2 ; 0 0 2 2 ; 2 2 0 0 ; 2 2 0 0 ].
By using the Matlab LMI Toolbox, we can solve the LMI (2.3); the solutions are

k = 14.1,  P_η = [ 15.3 5.6 8.0 8.0 ; 5.6 15.3 8.0 8.0 ; 8.0 8.0 15.3 5.6 ; 8.0 8.0 5.6 15.3 ].
By Theorem 2.4, the system is globally exponentially stable. For this example, Figs. 2.1 and 2.2 show the trajectories of the system for different initial conditions: (4, 2, −2, −4) for Fig. 2.1 and (2, 1, −1, −2) for Fig. 2.2. The simulation results confirm that the system is globally exponentially stable.
2.1.5 Conclusion
In this section, we have investigated the global exponential stability of fuzzy logical BAM neural networks with Markovian jumping parameters, a topic which had not previously received enough attention. Based on the Lyapunov functional approach and linear matrix inequalities, a new sufficient stability criterion has been derived, which can be tested using the Matlab LMI Toolbox. A numerical example is given to demonstrate the proposed results.
2.2.1 Introduction
In the past two decades, delayed neural networks (DNNs) have received considerable attention from researchers in different fields. As is known, DNNs often present complex and unpredictable behaviors in practice, beyond the traditional stability and periodic oscillation that have been investigated a great deal in past years. Recently, the synchronization problem of complex dynamical networks [5–9, 13, 17, 18, 27, 35, 38], such as the synchronization of DNNs, has become the latest focus of attention.
Thanks to the efforts of earlier researchers, several results on neural network synchronization have been proposed in the literature. For example, in Ref. [24], synchronization of coupled delayed neural networks was addressed for the first time. Some further studies in this field have appeared in recent years [14–16, 22, 26, 30, 34]. Wang and Cao studied synchronization in an array of linearly coupled networks with time-varying delay [27] and synchronization in an array of linearly stochastically coupled networks with time delays [7], respectively. In Ref. [6], via the Lyapunov functional method and the LMI approach, synchronization control of stochastic neural networks with time-varying delays was investigated and the gains of a controller that can ensure synchronization were obtained. In addition, in Ref. [16], the global exponential synchronization of coupled connected neural networks with delays was investigated and a sufficient condition was derived by using the LMI approach. Meanwhile, through the stability theory for impulsive functional differential equations, some new criteria to guarantee the robust synchronization of coupled networks via impulsive control were derived in Ref. [26]. And in Ref. [30], on the basis of Lyapunov stability theory, time-delay feedback control, and other techniques, the exponential synchronization problem of a class of stochastically perturbed chaotic delayed neural networks was considered.
It is well known that time delays are often encountered in many kinds of neural networks, and they can be sources of oscillation and instability [25, 28, 29, 31–33]. However, in the literature mentioned above, only discrete time delay has been considered. Another important kind of time delay, namely distributed time delay, has not attracted wide attention from researchers. Ref. [31] pointed out that there is usually a spatial extent in neural networks due to the presence of many parallel pathways with a variety of axon sizes and lengths, so a distribution of propagation delays will appear over a period of time. Although signal transmission is sometimes immediate and can be modeled with discrete delays, it may be distributed over a certain time period [29]. Hence, a realistic neural network model often includes both discrete and distributed delays [23].
Cao and Wang [7] investigated synchronization in linearly stochastically coupled networks via a simple adaptive feedback control scheme, considering the influence of noise and discrete time delays. In Ref. [6], synchronization of stochastic neural networks with discrete time delays was investigated using the LMI approach. Motivated by these recent works, and for the sake of modeling a more realistic and comprehensive network, we consider the synchronization of linearly stochastically coupled networks with both discrete and distributed time delays.
In this section, we aim to study the synchronization problem in an array of linearly
stochastically coupled neural networks with discrete and distributed time delays. By
employing the Lyapunov-Krasovskii functional method and the LMI approach, we
give several new criteria that ensure complete synchronization of the system. At
the same time, the estimation gains of the delayed feedback controller are obtained.
Then, an illustrative example is provided to demonstrate the effectiveness of our
results. Finally, we conclude the section.
In Ref. [7], an array of linearly stochastically coupled identical neural networks with
time delays has been considered by Cao and Wang as follows:

$$dx_i(t) = \big[-Cx_i(t) + Af(x_i(t)) + Bf(x_i(t-\tau))\big]dt + c_i\sum_{j=1}^{N}G_{ij}\Gamma x_j(t)\,dW_i^1(t) + d_i\sum_{j=1}^{N}G_{ij}\Gamma_\tau x_j(t-\tau)\,dW_i^2(t) + U_i\,dt, \quad i = 1, 2, \ldots, N, \qquad (2.5)$$
where $x_i(t) = [x_{i1}(t), x_{i2}(t), \ldots, x_{in}(t)]^T \in R^n$ $(i = 1, 2, \ldots, N)$ is the state vector
associated with the $i$th DNN; $f(x_i(t)) = [f_1(x_{i1}(t)), f_2(x_{i2}(t)), \ldots, f_n(x_{in}(t))]^T \in R^n$
is the vector of neuron activation functions with $f(0) = 0$; $C = \mathrm{diag}\{c_1, c_2, \ldots, c_n\} > 0$
is a diagonal matrix whose entries give the rate at which each unit resets
its potential to the resting state in isolation when disconnected from the external
inputs and the network; $A = (a_{ij})_{n\times n}$ and $B = (b_{ij})_{n\times n}$ stand for,
respectively, the connection weight matrix and the discretely delayed connection weight
matrix; $W_i = [W_i^1, W_i^2]^T$ are two-dimensional Brownian motions; $\Gamma \in R^{n\times n}$ and
$\Gamma_\tau \in R^{n\times n}$ denote the internal coupling of the network at time $t$ and $t - \tau$, where
$\tau > 0$ is the time-delay; $c_i$ and $d_i$ indicate the noise intensities; $U_i$ is the control
input; $G = (G_{ij})_{N\times N}$ describes the topological structure and the coupling
strength of the network, and it satisfies the following condition [27]:
$$G_{ii} = -\sum_{j=1,\,j\neq i}^{N} G_{ij}. \qquad (2.6)$$
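To make the model concrete, the following sketch discretizes (2.5) with an Euler-Maruyama step and $U_i = 0$. The weight matrices are borrowed from the numerical example later in this section; the noise intensities, coupling matrices and initial history are illustrative placeholders (in particular, noise weaker than in the later example) chosen so that the uncontrolled simulation stays tame.

```python
import numpy as np

# Illustrative Euler-Maruyama discretization of the coupled network (2.5)
# with U_i = 0. Weight matrices from the later numerical example; noise
# intensities and initial history are placeholders.
rng = np.random.default_rng(0)
n, N, tau, dt = 2, 4, 1.0, 0.01
lag = int(tau / dt)
steps = 1500

C = np.eye(n)
A = np.array([[2.0, -0.1], [-4.8, 4.5]])
B = np.array([[-1.7, -0.1], [-0.3, -4.1]])
Gamma, Gamma_tau = np.eye(n), np.eye(n)
G = np.array([[-2, 1, 0, 1], [1, -2, 1, 0],
              [0, 1, -2, 1], [1, 0, 1, -2]], dtype=float)  # zero row sums, cf. (2.6)
c = d = 0.05 * np.ones(N)      # placeholder noise intensities
f = np.tanh

x = np.zeros((steps + lag, N, n))      # x[k, i] = state of node i at step k
x[:lag + 1] = rng.uniform(-0.5, 0.5, size=(N, n))   # constant initial history
for k in range(lag, steps + lag - 1):
    dW1 = rng.normal(0.0, np.sqrt(dt), size=N)
    dW2 = rng.normal(0.0, np.sqrt(dt), size=N)
    for i in range(N):
        drift = -C @ x[k, i] + A @ f(x[k, i]) + B @ f(x[k - lag, i])
        diff1 = c[i] * Gamma @ (G[i] @ x[k])             # sum_j G_ij Γ x_j(t)
        diff2 = d[i] * Gamma_tau @ (G[i] @ x[k - lag])   # sum_j G_ij Γ_τ x_j(t-τ)
        x[k + 1, i] = x[k, i] + drift * dt + diff1 * dW1[i] + diff2 * dW2[i]
```

Because $G$ has zero row sums, the diffusion terms vanish once all nodes agree, which is why the stochastic coupling does not preclude complete synchronization.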
Though linearly stochastically coupled neural networks have been investigated
comparatively in depth, only the discrete time delay was considered. So, in order
24 2 Exponential Stability and Synchronization Control of Neural Networks
where $W = (w_{ij})_{n\times n}$ is the distributively delayed connection weight matrix. Then,
the initial states associated with model (2.7) are given as follows:
for any $\varphi_i \in L^2_{F_0}([-\tau, 0]; R^n)$, $x_i(t) = \varphi_i(t)$, $i = 1, 2, \ldots, N$, where
$-\tau \le t \le 0$.
Remark 2.7 Both the discrete and distributed time delays are considered in the new
model (2.7). Thus, the model is more realistic and comprehensive than (2.5). To the
best of the authors' knowledge, this is the first time that the synchronization problem
of stochastically coupled identical neural networks with discrete and distributed time
delays has been proposed. In order to obtain our results, the following assumption
is needed:

Assumption 2.8 The activation functions $f_i(u)$ are bounded and satisfy the Lipschitz
condition:
If there exists a nonempty subset $\Psi \subseteq R^n$ with $x_i^* \in \Psi$ such that, for any $t \ge 0$, we have
$x_i(t; t^*, X^*) \in R^n$ and

$$\lim_{t\to\infty} E\big\|x_i(t; t^*, X^*) - r(t; t^*, x_0)\big\|^2 = 0, \qquad (2.10)$$

where $i = 1, 2, \ldots, N$ and $x_0 \in R^n$, then the DNN model (2.7) is said to
achieve synchronization.
Next, we denote by $e_i(t) = x_i(t) - r(t)$ the error signal. From (2.7),
(2.9) and (2.6), the error signal system can be obtained as follows:

$$de_i(t) = \Big[-Ce_i(t) + Ag(e_i(t)) + Bg(e_i(t-\tau)) + W\int_{t-\tau}^{t}g(e_i(s))\,ds\Big]dt + c_i\sum_{j=1}^{N}G_{ij}\Gamma e_j(t)\,dW_i^1(t) + d_i\sum_{j=1}^{N}G_{ij}\Gamma_\tau e_j(t-\tau)\,dW_i^2(t) + U_i\,dt, \quad i = 1, 2, \ldots, N, \qquad (2.11)$$

where $g(e_i(t)) = f(e_i(t) + r(t)) - f(r(t))$ and $g(e_i(t-\tau)) = f(e_i(t-\tau) + r(t-\tau)) - f(r(t-\tau))$. From (2.8) and $g(0) = 0$, it is obvious that
Ui = K 1 ei (t) + K 2 ei (t − τ ) (2.13)
Remark 2.11 As Ref. [6] pointed out, the memoryless state-feedback controller
$U_i = Ke_i(t)$ is more popular in many real applications because it is easy to
implement, but its performance is not as good as that of (2.13). Though
$U_i = Ke_i(t) + \int_{t-\tau}^{t} K_1e_i(s)\,ds$ is a more general form of delayed feedback
controller, it is difficult to handle all the initial states of $e_i(t)$. The controller (2.13)
is thus a compromise between better performance and simple implementation. Hence,
in this section, we design the controller in the form (2.13).
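The controller (2.13) needs only the current and one delayed sample of the error, which is why it is simple to implement. A minimal sketch, with hypothetical gains (the section obtains the actual $K_1$, $K_2$ from an LMI):

```python
import numpy as np

# Sketch of the delayed feedback controller (2.13). The gains below are
# hypothetical placeholders, not the LMI solutions computed in the text.
K1 = np.array([[-9.0, 0.0], [0.0, -9.0]])
K2 = np.array([[-1.0, 0.0], [0.0, -1.0]])

def delayed_feedback(e_hist, k, lag):
    """U_i = K1 e_i(t) + K2 e_i(t - tau).

    e_hist holds the sampled error trajectory of one node (shape (steps, n)),
    k is the current step index, and lag = tau / dt in steps.
    """
    return K1 @ e_hist[k] + K2 @ e_hist[max(k - lag, 0)]

e_hist = np.full((10, 2), 0.3)          # constant error history for the demo
u = delayed_feedback(e_hist, 9, 5)      # -> [-3.0, -3.0] for these gains
```

In contrast to the integral controller discussed in the remark, no running buffer of all past errors must be integrated; a single delayed sample suffices.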
then, the error signal system (2.11) is globally asymptotically stable in mean square.
$$\Omega = PAA^TP + M^TM + PBB^TP + PWW^TP. \qquad (2.17)$$
Theorem 2.13 Let $0 < \sigma_i < 1$ $(i = 1, 2, \ldots, N)$ be any given constants. If there
exist positive definite matrices $P = (p_{ij})_{n\times n}$ and $Q_1 = (q_{ij})_{n\times n}$ such that the
following matrix inequality holds:

$$\mathcal{N} = \begin{bmatrix} \Pi_{11} & PK_2 & PA & M^T & PB & PW \\ * & \Pi_{22} & 0 & 0 & 0 & 0 \\ * & * & -I & 0 & 0 & 0 \\ * & * & * & -I & 0 & 0 \\ * & * & * & * & -I & 0 \\ * & * & * & * & * & -I \end{bmatrix} < 0 \qquad (2.18)$$

where $\Pi_{11}$ and $\Pi_{22}$ are defined in (2.15) and (2.16) respectively, then the
error signal model (2.11) is globally asymptotically stable in mean square.
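Checking a condition like (2.18) numerically amounts to assembling the block matrix and testing its largest eigenvalue. The sketch below only builds the block layout: since equations (2.15)–(2.16) defining $\Pi_{11}$ and $\Pi_{22}$ are not reproduced in this excerpt, they enter as given blocks, and every numeric value is an illustrative placeholder rather than a solution from the text.

```python
import numpy as np

# Assemble the block matrix N of (2.18) so N < 0 can be tested numerically.
# Pi11, Pi22 stand in for the omitted definitions (2.15)-(2.16); all values
# are placeholders.
n = 2
P = np.eye(n)
K2 = -np.eye(n)
A = np.array([[2.0, -0.1], [-4.8, 4.5]])
B = np.array([[-1.7, -0.1], [-0.3, -4.1]])
W = np.array([[-1.2, -0.3], [-0.4, -3.2]])
M = 1.2 * np.eye(n)
Pi11 = -10.0 * np.eye(n)   # placeholder for (2.15)
Pi22 = -10.0 * np.eye(n)   # placeholder for (2.16)
I, Z = np.eye(n), np.zeros((n, n))

Nmat = np.block([
    [Pi11,       P @ K2, P @ A, M.T, P @ B, P @ W],
    [(P @ K2).T, Pi22,   Z,     Z,   Z,     Z],
    [(P @ A).T,  Z,      -I,    Z,   Z,     Z],
    [M,          Z,      Z,     -I,  Z,     Z],
    [(P @ B).T,  Z,      Z,     Z,   -I,    Z],
    [(P @ W).T,  Z,      Z,     Z,   Z,     -I],
])
feasible = np.linalg.eigvalsh(Nmat).max() < 0   # the test N < 0 of (2.18)
```

In practice the unknowns $P$, $Q_1$ would be found by an LMI solver (the text uses the Matlab LMI toolbox); here the eigenvalue check only verifies a given candidate.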
$$V(t, e_i(t)) = \sum_{i=1}^{N} e_i^T(t)Pe_i(t) + \sum_{i=1}^{N}\int_{t-\tau}^{t} e_i^T(s)Q_1e_i(s)\,ds + \sum_{i=1}^{N}\int_{-\tau}^{0}\int_{t+s}^{t} e_i^T(\eta)Q_2e_i(\eta)\,d\eta\,ds \qquad (2.19)$$
where $P = (p_{ij})_{n\times n}$ and $Q_1 = (q_{ij})_{n\times n}$ are positive definite matrices to be
determined, and $Q_2 \ge 0$ is given by

$$Q_2 = (1 - \sigma_i)^{-1}\tau M^TM. \qquad (2.20)$$
By the Itô differential formula, the stochastic derivative of $V(t, e_i(t))$ along the error
system (2.11) can be obtained as follows:

$$dV(t, e_i(t)) = \mathcal{L}V(t, e_i(t))\,dt + \sum_{i=1}^{N} 2e_i^T(t)P\Big[c_i\sum_{j=1}^{N}G_{ij}\Gamma e_j(t)\,dW_i^1(t) + d_i\sum_{j=1}^{N}G_{ij}\Gamma_\tau e_j(t-\tau)\,dW_i^2(t)\Big], \qquad (2.21)$$
$$-\int_{t-\tau}^{t} e_i^T(s)Q_2e_i(s)\,ds + c_i^2\Big[\sum_{j=1}^{N}G_{ij}\Gamma e_j(t)\Big]^T\Big[\sum_{j=1}^{N}G_{ij}\Gamma e_j(t)\Big] + d_i^2\Big[\sum_{j=1}^{N}G_{ij}\Gamma_\tau e_j(t-\tau)\Big]^T\Big[\sum_{j=1}^{N}G_{ij}\Gamma_\tau e_j(t-\tau)\Big]\Big\}. \qquad (2.22)$$
Then, following from the relation (2.12) and Lemma 1.13, we can obtain

$$e_i^T(t)PAg(e_i(t)) \le \frac{1}{2}e_i^T(t)PAA^TPe_i(t) + \frac{1}{2}g^T(e_i(t))g(e_i(t)) \le \frac{1}{2}e_i^T(t)PAA^TPe_i(t) + \frac{1}{2}e_i^T(t)M^TMe_i(t) \qquad (2.23)$$

$$e_i^T(t)PBg(e_i(t-\tau)) \le \frac{1}{2}e_i^T(t)PBB^TPe_i(t) + \frac{1}{2}g^T(e_i(t-\tau))g(e_i(t-\tau)) \le \frac{1}{2}e_i^T(t)PBB^TPe_i(t) + \frac{1}{2}e_i^T(t-\tau)M^TMe_i(t-\tau) \qquad (2.24)$$
$$e_i^T(t)PW\int_{t-\tau}^{t}g(e_i(s))\,ds \le \frac{1}{2}e_i^T(t)PWW^TPe_i(t) + \frac{1}{2}\Big(\int_{t-\tau}^{t}g(e_i(s))\,ds\Big)^T\Big(\int_{t-\tau}^{t}g(e_i(s))\,ds\Big) \qquad (2.25)$$

$$e_i^T(t)PW\int_{t-\tau}^{t}g(e_i(s))\,ds \le \frac{1}{2}e_i^T(t)PWW^TPe_i(t) + \frac{1}{2}(1-\sigma_i)\int_{t-\tau}^{t}e_i^T(s)Q_2e_i(s)\,ds \qquad (2.27)$$
$$\le N\sum_{j=1}^{N}G_{ij}^2\lambda_{\max}(\Gamma^T\Gamma)\,e_j^T(t)e_j(t), \qquad (2.28)$$

$$\Big[\sum_{j=1}^{N}G_{ij}\Gamma_\tau e_j(t-\tau)\Big]^T\Big[\sum_{j=1}^{N}G_{ij}\Gamma_\tau e_j(t-\tau)\Big] \le N\sum_{j=1}^{N}G_{ij}^2\,e_j^T(t-\tau)\Gamma_\tau^T\Gamma_\tau e_j(t-\tau) \le N\sum_{j=1}^{N}G_{ij}^2\lambda_{\max}(\Gamma_\tau^T\Gamma_\tau)\,e_j^T(t-\tau)e_j(t-\tau) \qquad (2.29)$$
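The bound (2.28) combines $|\sum_j v_j|^2 \le N\sum_j |v_j|^2$ (Cauchy-Schwarz) with $|\Gamma e|^2 \le \lambda_{\max}(\Gamma^T\Gamma)|e|^2$. A numerical spot-check with an arbitrary inner coupling matrix:

```python
import numpy as np

# Spot-check of bound (2.28): for any vectors e_j,
#   |sum_j G_ij Γ e_j|^2 <= N * sum_j G_ij^2 * λ_max(Γ^T Γ) * |e_j|^2.
rng = np.random.default_rng(2)
N, n = 4, 2
Gamma = rng.normal(size=(n, n))                 # arbitrary inner coupling
lam = np.linalg.eigvalsh(Gamma.T @ Gamma).max()
G = np.array([[-2, 1, 0, 1], [1, -2, 1, 0],
              [0, 1, -2, 1], [1, 0, 1, -2]], dtype=float)
for _ in range(200):
    e = rng.normal(size=(N, n))
    for i in range(N):
        s = sum(G[i, j] * (Gamma @ e[j]) for j in range(N))
        lhs = s @ s
        rhs = N * sum(G[i, j] ** 2 * lam * (e[j] @ e[j]) for j in range(N))
        assert lhs <= rhs + 1e-9
```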
$$\sum_{i=1}^{N}\mathcal{L}V(t,e_i(t)) \le \sum_{i=1}^{N}\Big\{2e_i^T(t)P(-C+K_1)e_i(t) + 2e_i^T(t)PK_2e_i(t-\tau) + e_i^T(t)PAA^TPe_i(t) + e_i^T(t)M^TMe_i(t) + e_i^T(t)PBB^TPe_i(t) + e_i^T(t-\tau)M^TMe_i(t-\tau) + e_i^T(t)PWW^TPe_i(t) + (1-\sigma_i)\int_{t-\tau}^{t}e_i^T(s)Q_2e_i(s)\,ds + e_i^T(t)(Q_1+\tau Q_2)e_i(t) - e_i^T(t-\tau)Q_1e_i(t-\tau) - \int_{t-\tau}^{t}e_i^T(s)Q_2e_i(s)\,ds + cN\Lambda\lambda_{\max}(\Gamma^T\Gamma)\sum_{j=1}^{N}e_j^T(t)e_j(t) + dN\Lambda\lambda_{\max}(\Gamma_\tau^T\Gamma_\tau)\sum_{j=1}^{N}e_j^T(t-\tau)e_j(t-\tau)\Big\}$$

$$= \sum_{i=1}^{N}\Big\{e_i^T(t)\big[2P(-C+K_1) + PAA^TP + M^TM + PBB^TP$$
where $\lambda_{\max} < 0$ denotes the maximal eigenvalue of $\mathcal{N}$. Therefore, from all the above
derivations and the result (2.32), together with the study in Ref. [30], we conclude that
the error signal model (2.11) is globally asymptotically stable in mean square. This
completes the proof.
Remark 2.15 In this section, for simplicity of presentation, we consider a constant
time delay. For a time-varying delay, similar results can be derived without difficulty,
which would be more realistic and comprehensive.
Corollary 2.16 Let $0 < \sigma_i < 1$ $(i = 1, 2, \ldots, N)$ be any given constants. If
there exists a positive definite matrix $Q_1 = (q_{ij})_{n\times n}$ such that the following matrix
inequality holds:

$$\mathcal{N}_1 = \begin{bmatrix} \Xi_{11} & \rho K_2 & \rho A & M^T & \rho B & \rho W \\ * & \Xi_{22} & 0 & 0 & 0 & 0 \\ * & * & -I & 0 & 0 & 0 \\ * & * & * & -I & 0 & 0 \\ * & * & * & * & -I & 0 \\ * & * & * & * & * & -I \end{bmatrix} < 0 \qquad (2.33)$$

where

$$\Xi_{11} = \rho(-C+K_1) + \rho(-C+K_1)^T + Q_1 + (1-\sigma_i)^{-1}\tau^2M^TM + cN^2\Lambda\lambda_{\max}(\Gamma^T\Gamma)$$

and

$$\Xi_{22} = M^TM + dN^2\Lambda\lambda_{\max}(\Gamma_\tau^T\Gamma_\tau) - Q_1,$$

then the error signal model (2.11) is globally asymptotically stable in mean square.
Proof Let $P = \rho I$, where $\rho$ is a positive constant and $I$ is the identity matrix.
Corollary 2.16 then follows immediately from Theorem 2.13.

In order to present the designed estimate gain matrices $K_1$ and $K_2$ conveniently by
using the LMI toolbox in Matlab, we make a simple transformation. The following
theorem can then be easily derived.
Theorem 2.17 Let $0 < \sigma_i < 1$ $(i = 1, 2, \ldots, N)$ be any given constants. If there
exist positive definite matrices $P = (p_{ij})_{n\times n}$ and $Q_1 = (q_{ij})_{n\times n}$ such that the
following matrix inequality holds:

$$\mathcal{N}_2 = \begin{bmatrix} \Omega_{11} & K_2^* & PA & M^T & PB & PW \\ * & \Omega_{22} & 0 & 0 & 0 & 0 \\ * & * & -I & 0 & 0 & 0 \\ * & * & * & -I & 0 & 0 \\ * & * & * & * & -I & 0 \\ * & * & * & * & * & -I \end{bmatrix} < 0 \qquad (2.34)$$
Remark 2.20 Corollaries 2.16 and 2.19 show that our main result in Theorem 2.13
is general enough to cover some special cases, such as $P = \rho I$.

In this section, our main purpose is to verify the global asymptotic stability of the
error signal model (2.11). In order to illustrate the effectiveness of our results, an
example is presented here.

Example
Consider the following chaotic DNN with discrete and distributed time delays:
$$dx(t) = \Big[-Cx(t) + Af(x(t)) + Bf(x(t-\tau)) + W\int_{t-\tau}^{t}f(x(s))\,ds\Big]dt \qquad (2.36)$$

where $x(t) = [x_1(t), x_2(t)]^T$ is the state vector of a single node of the DNN,
$f(x(t)) = [\tanh(x_1(t)), \tanh(x_2(t))]^T$, $\tau = 1$,

$$C = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}, \quad A = \begin{bmatrix}2 & -0.1\\ -4.8 & 4.5\end{bmatrix}, \quad B = \begin{bmatrix}-1.7 & -0.1\\ -0.3 & -4.1\end{bmatrix}, \quad W = \begin{bmatrix}-1.2 & -0.3\\ -0.4 & -3.2\end{bmatrix}.$$
With the initial value chosen as $x_1(t) = 0.4$, $x_2(t) = 0.6$ for all $t \in [-1, 0]$,
the chaotic phase trajectory shown in Fig. 2.3 is obtained.
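The chaotic trajectory reported in Fig. 2.3 can be reproduced with a simple Euler discretization of (2.36); the distributed-delay integral is approximated by a Riemann sum over a history buffer. All parameters and the initial condition are those given above; the step size is an implementation choice.

```python
import numpy as np

# Euler discretization of the chaotic DNN (2.36); int_{t-τ}^t f(x(s)) ds is
# approximated by a Riemann sum over the stored history.
n, tau, dt = 2, 1.0, 0.005
lag = int(tau / dt)
steps = 3000
C = np.eye(n)
A = np.array([[2.0, -0.1], [-4.8, 4.5]])
B = np.array([[-1.7, -0.1], [-0.3, -4.1]])
W = np.array([[-1.2, -0.3], [-0.4, -3.2]])
f = np.tanh

x = np.zeros((steps + lag, n))
x[:lag + 1] = np.array([0.4, 0.6])                 # x(t) on [-1, 0]
for k in range(lag, steps + lag - 1):
    dist = f(x[k - lag:k + 1]).sum(axis=0) * dt    # ≈ ∫_{t-τ}^t f(x(s)) ds
    x[k + 1] = x[k] + dt * (-C @ x[k] + A @ f(x[k])
                            + B @ f(x[k - lag]) + W @ dist)
# plotting x[lag:, 0] against x[lag:, 1] gives a phase portrait like Fig. 2.3
```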
[Fig. 2.3 Chaotic phase trajectory of the DNN (2.36)]
In order to verify that our results make model (2.11) achieve synchronization, we
only need to test the global asymptotic stability of the error signal model given as
follows:
$$de_i(t) = \Big[-Ce_i(t) + Ag(e_i(t)) + Bg(e_i(t-\tau)) + W\int_{t-\tau}^{t}g(e_i(s))\,ds\Big]dt + c_i\sum_{j=1}^{N}G_{ij}\Gamma e_j(t)\,dW_i^1(t) + d_i\sum_{j=1}^{N}G_{ij}\Gamma_\tau e_j(t-\tau)\,dW_i^2(t) + [K_1e_i(t) + K_2e_i(t-\tau)]\,dt, \qquad (2.37)$$
where $i = 1, 2, \ldots, N$ and $e_i(t) = [e_{i1}(t), e_{i2}(t)]^T$. Let $c_i = \sqrt{0.1}$, $d_i = \sqrt{0.1}$, $N = 4$,

$$\Gamma = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}, \quad \Gamma_\tau = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix},$$

and the coupling matrix

$$G = (G_{ij})_{4\times 4} = \begin{bmatrix}-2 & 1 & 0 & 1\\ 1 & -2 & 1 & 0\\ 0 & 1 & -2 & 1\\ 1 & 0 & 1 & -2\end{bmatrix}.$$
The constant matrix $M$ referred to in (2.12) is chosen as $M = \begin{bmatrix}1.2 & 0\\ 0 & 1.2\end{bmatrix}$. Then,
according to Theorem 2.13 and by utilizing the Matlab LMI toolbox, the following
feasible results are derived:

$$P = \begin{bmatrix}4.9114 & 0.4327\\ 0.4327 & 1.4143\end{bmatrix}, \quad Q_1 = \begin{bmatrix}21.4806 & 0.1140\\ 0.1140 & 20.5590\end{bmatrix},$$

$$K_1^* = \begin{bmatrix}-45.4784 & 0.0047\\ 0.0047 & -45.5166\end{bmatrix}, \quad K_2^* = \begin{bmatrix}-7.0969 & -0.0766\\ -0.0766 & -6.4766\end{bmatrix}$$
[Fig. 2.4 Error signals $e_{i1}(t)$, $i = 1, 2, 3, 4$]
[Fig. 2.5 Error signals $e_{i2}(t)$, $i = 1, 2, 3, 4$]
and the gains

$$K_1 = \begin{bmatrix}-9.5166 & 2.9151\\ 2.9151 & -33.0759\end{bmatrix} \quad \text{and} \quad K_2 = \begin{bmatrix}-1.4801 & 0.3987\\ 0.3987 & -4.7022\end{bmatrix}$$

follow immediately.
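The excerpt does not spell out the transformation behind Theorem 2.17, but the reported gains are consistent with the standard change of variables $K_i^* = PK_i$, so that $K_i = P^{-1}K_i^*$. The sketch below recovers the stated $K_1$ and $K_2$ from $P$, $K_1^*$ and $K_2^*$ to within rounding.

```python
import numpy as np

# Recover the controller gains from the LMI solution, assuming the change of
# variables K_i* = P K_i (an assumption; the transformation is not stated in
# this excerpt, but it reproduces the reported numbers).
P = np.array([[4.9114, 0.4327], [0.4327, 1.4143]])
K1_star = np.array([[-45.4784, 0.0047], [0.0047, -45.5166]])
K2_star = np.array([[-7.0969, -0.0766], [-0.0766, -6.4766]])

K1 = np.linalg.solve(P, K1_star)   # = P^{-1} K1*
K2 = np.linalg.solve(P, K2_star)   # = P^{-1} K2*
# K1 ~ [[-9.5166, 2.9151], [2.9151, -33.0759]]
# K2 ~ [[-1.4801, 0.3987], [0.3987, -4.7022]]
```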
Under the initial states given in Table 2.1, applying the above results to the error
signal model (2.37), we obtain the wave diagrams of the error signals $e_{i1}(t)$ and
$e_{i2}(t)$ shown in Figs. 2.4 and 2.5, respectively $(i = 1, 2, 3, 4)$.

From Figs. 2.4 and 2.5 it is obvious that the error signal model (2.37), i.e. (2.11),
is globally asymptotically stable. That is to say, the simulation results show that
synchronization of an array of linearly stochastically coupled identical neural
networks with discrete and distributed time delays is achieved by the delayed feedback
controller we designed. Thus, our theoretical results have been confirmed by the
simulations, and we can conclude that our study in this section is effective.
2.2.5 Conclusion

The synchronization control problem for an array of coupled DNNs has been
thoroughly studied in this section. Several sufficient conditions guaranteeing
synchronization have been obtained by constructing a Lyapunov-Krasovskii functional
and using the LMI approach. In particular, both discrete and distributed time delay
terms have been considered in the model, together with the stochastic coupling term.
The delayed feedback controller gains have been obtained on the basis of the stability
condition of the error system. Finally, an illustrative example has been given to verify
the theoretical analysis. The results are novel, because there are few works on the
synchronization of systems with both discrete and distributed time delays. At the
same time, it is possible to apply the results to realistic systems in practice.
References
1. P.B. Ali, M. Syed, Stability analysis of Takagi-Sugeno fuzzy Cohen-Grossberg BAM neural
networks with discrete and distributed time-varying delays. Math. Comput. Model. 53(1), 151–
160 (2011)
2. S. Arik, V. Tavsanoglu, Global asymptotic stability analysis of bidirectional associative memory
neural networks with constant time delays. Neurocomputing 68, 161–176 (2005)
3. P. Balasubramanian, R. Rakkiyappan, R. Sathy, Delay dependent stability results for fuzzy
BAM neural networks with Markovian jumping parameters. Expert Syst. Appl. 38(1), 121–
130 (2011)
4. R. Belohlavek, Fuzzy logical bidirectional associative memory. Inf. Sci. 128(1), 91–103 (2000)
5. J. Cao, P. Li, W. Wang, Global synchronization in arrays of delayed neural networks with
constant and delayed coupling. Phys. Lett. A 353(4), 318–325 (2006)
6. J. Cao, X. Li, Stability in delayed Cohen-Grossberg neural networks: LMI optimization
approach. Phys. D 212(1), 54–65 (2005)
7. J. Cao, Z. Wang, Y. Sun, Synchronization in an array of linearly stochastically coupled networks
with time-delays. Phys. A: Stat. Mech. Appl. 385(2), 718–728 (2007)
8. G.R. Chen, J. Zhou, Z.R. Liu, Global synchronization of coupled delayed neural networks and
applications to chaotic CNN model. Int. J. Bifurc. Chaos 14(7), 2229–2240 (2004)
9. M. Chen, D. Zhou, Synchronization in uncertain complex networks. Chaos: An Interdisciplinary Journal of Nonlinear Science 16(1), 013101 (2006)
10. B. Kosko, Adaptive bi-directional associative memories. Appl. Opt. 26(23), 4947–4960 (1987)
11. B. Kosko, Bi-directional associative memories. IEEE Trans. Syst., Man Cybern 18(1), 49–60
(1988)
12. B. Kosko, Neural Networks and Fuzzy Systems—A Dynamical System Approach to Machine
Intelligence (Prentice-Hall, Englewood Cliffs, 1992)
13. C. Li, S. Li, X. Liao, J. Yu, Synchronization in coupled map lattices with small-world delayed
interactions. Phys. A 335(3), 365–370 (2004)
14. C.G. Li, G.R. Chen, Synchronization in general complex dynamical networks with coupling
delays. Phys. A 343, 263–278 (2004)
15. P. Li, J. Cao, Z. Wang, Robust impulsive synchronization of coupled delayed neural networks
with uncertainties. Phys. A 373, 261–272 (2007)
16. Z. Li, G. Chen, Robust adaptive synchronization of uncertain dynamical networks. Phys. Lett.
A 324(2), 166–178 (2004)
17. W. Lin, G. Chen, Using white noise to enhance synchronization of coupled chaotic systems.
Chaos 16(1), 013133–013134 (2006)
18. B. Liu, X.Z. Liu, G.R. Chen, Robust impulsive synchronization of uncertain dynamical net-
works. IEEE Trans. Circuits Syst. I 52(7), 1431–1441 (2005)
19. B. Liu, P. Shi, Delay-range-dependent stability for fuzzy BAM neural networks with time-
varying delays. Phys. Lett. A 373(21), 1830–1838 (2009)
20. X. Lou, B. Cui, Robust asymptotic stability of uncertain fuzzy BAM neural networks with
time-varying delays. Fuzzy Sets Syst. 158(24), 2746–2756 (2007)
21. X. Lou, B. Cui, Stochastic exponential stability for Markovian jumping BAM neural networks
with time-varying delays. IEEE Trans. Syst. 37(3), 713–719 (2007)
22. W. Lu, T. Chen, Synchronization of coupled connected neural networks with delays. IEEE
Trans. Circuits Syst. I 51(12), 2491–2503 (2004)
23. J. Lv, G. Chen, A time-varying complex dynamical network model and its controlled synchro-
nization criteria. IEEE Trans. Autom. Control 50(6), 841–846 (2005)
24. L.M. Pecora, T.L. Carroll, G. Johnson, D. Mar, K.S. Fink, Synchronization stability in coupled
oscillator arrays: solution for arbitrary configurations. Int. J. Bifurc. Chaos 10(2), 273–290
(2000)
25. S. Ruan, R. Filfil, Dynamics of a two-neuron system with discrete and distributed delays. Phys.
D 191(3), 323–342 (2004)
26. Y. Sun, J. Cao, Z. Wang, Exponential synchronization of stochastic perturbed chaotic delayed
neural networks. Neurocomputing 70(13), 2477–2485 (2007)
27. W. Wang, J. Cao, Synchronization in an array of linearly coupled networks with time-varying
delay. Phys. A 366, 197–211 (2006)
28. Z. Wang, J. Fang, X. Liu, Global stability of stochastic high-order neural networks with discrete
and distributed delays. Chaos Solitons Fractals 36(2), 388–396 (2008)
29. Z. Wang, S. Lauria, J. Fang, X. Liu, Exponential stability of uncertain stochastic neural networks
with mixed time-delays. Chaos Solitons Fractals 32(1), 62–72 (2007)
30. Z. Wang, Y. Liu, F. Karl, X. Liu, Stochastic stability of uncertain Hopfield neural networks
with discrete and distributed delays. Phys. Lett. A 354(4), 288–297 (2006)
31. Z. Wang, Y. Liu, X. Liu, On global asymptotic stability of neural networks with discrete and
distributed delays. Phys. Lett. A 345(4), 299–308 (2005)
32. Z. Wang, H. Shu, J. Fang, X. Liu, Robust stability for stochastic Hopfield neural networks with
time delays. Nonlinear Anal.: Real World Appl. 7(5), 1119–1128 (2006)
33. Z. Wang, H. Shu, Y. Liu, D.W.C. Ho, X. Liu, Robust stability analysis of generalized neural
networks with discrete and distributed time delays. Chaos Solitons Fractals 30(4), 886–896
(2006)
34. C.W. Wu, Perturbation of coupling matrices and its effect on the synchronizability in arrays of
coupled chaotic systems. Phys. Lett. A 319(5–6), 495–503 (2003)
35. C.W. Wu, Synchronization in array of coupled nonlinear system with delay and nonreciprocal
time-varying coupling. IEEE Trans. Circuits Syst. 52(5), 282–286 (2005)
36. H. Xiang, J. Cao, Exponential stability of periodic solution to Cohen-Grossberg-type BAM
networks with time-varying delays. Neurocomputing 72(7), 1702–1711 (2009)
37. H. Xiang, J. Wang, Exponential stability and periodic solution for fuzzy BAM neural networks
with time varying delays. Appl. Math. J. Chin. Univ. 24(2), 157–166 (2009)
38. W. Yu, J. Cao, Synchronization control of stochastic delayed neural networks. Phys. A 373,
252–260 (2007)
Chapter 3
Robust Stability and Synchronization
of Neural Networks
In this chapter, the robust stability of high-order neural networks and hybrid stochastic
neural networks is first investigated. The robust anti-synchronization and robust lag
synchronization of chaotic neural networks are discussed in the sequel.
3.1.1 Introduction
Because high-order neural networks perform better than traditional first-order
neural networks [18], they have been adopted in several fields, for example
associative memories [36], optimization [25] and pattern recognition [4]. To achieve
these good performances, sufficient conditions for the stability of neural networks
have been studied intensively, see e.g. [58, 59] and the references therein.
Time delays are frequently sources of instability [2, 8, 11], and stability
sufficient conditions for high-order neural networks with time delays have been
presented in the literature. Both delay-dependent and delay-independent stability
sufficient conditions have been developed to guarantee the asymptotic, exponential,
or absolute stability of high-order neural networks with discrete time delays, see
e.g. [10, 37, 60].
Since synaptic transmission is a noisy process caused by random fluctuations
in the release of neurotransmitters and other probabilistic factors [57],
investigating neural networks with stochastic perturbations is important both in
theory and in practice. Considering that certain stochastic inputs can stabilize or
destabilize a neural network [6], some new results on stability analysis for stochastic
neural networks with discrete time delays have been proposed, see e.g. [22, 50].
Besides the discrete time delays, there is a distribution of propagation delays over
a period of time in neural networks. Thus, mixed time delays, which comprise
discrete and distributed delays, should be taken into account when modeling a realistic
neural network [40, 65, 69]. The problem of global asymptotic stability for neural
networks with mixed time delays has been analyzed in [47, 52], and global asymptotic
stability for stochastic high-order neural networks with mixed time-invariant delays
has been studied in [57].
Uncertainties always exist in neural networks for various reasons, and investigat-
ing the robust stability for neural networks with parameter uncertainties is important
[2, 8, 11]. Recently, the robust exponential stability problem for uncertain stochastic
neural networks with mixed time delays has been studied in [53], where an LMI
approach has been established.
In this section, we develop a novel approach to establishing sufficient conditions
for high-order neural networks with mixed delays to be globally exponentially
stable. This approach is called the parameters weak coupling linear matrix inequality
set (PWCLMIS) approach. Suppose an LMI set consists of two coupled LMIs, one
containing no system parameters and the other containing no stability performance
parameters (for example, time delays); then the system parameters and the stability
performance parameters are only weakly coupled, and we call such an LMI set a
PWCLMIS. By introducing free-weighting matrices into the PWCLMIS and making
some algebraic transformations, we obtain excellent stability performance. Two
numerical examples are given to illustrate this characteristic. Furthermore, discrete
and distributed time-varying delay dependence are handled simultaneously in this
section, whereas the corresponding conditions in [57] are only distributed
time-invariant delay dependent. In addition, we remove some restraints in this section
so as to cover some results in recently published works, such as [10, 37, 57, 60].
Consider the high-order neural networks with mixed time delays as follows:

$$dx(t) = \Big[-Ax(t) + W_0f(x(t)) + W_1f(x(t-h(t))) + W_2\int_{t-\tau(t)}^{t}f(x(s))\,ds\Big]dt + \sigma(t, x(t), x(t-h(t)))\,dw(t), \qquad (3.1)$$
where
Remark 3.1 The constraints $d_\tau < 1$ and $d_h < 1$, which often appear in other
literature, have been removed in this section.

Remark 3.2 There are two differences from the work [57]. First, the discrete time
delay $h$ is a time-varying delay in this section but a time-invariant delay in [57].
Second, the scalar distributed delay $\tau(t)$ (bounded by $\tau > 0$) is unknown and
time-varying in this section, but a known constant in [57].
$$\mathrm{trace}\big[\sigma^T(t, x(t), x(t-h(t)))\,\sigma(t, x(t), x(t-h(t)))\big] \le |\Sigma_1x(t)|^2 + |\Sigma_2x(t-h(t))|^2, \qquad (3.3)$$
Remark 3.3 ([57]) The condition (3.3), imposed on the stochastic disturbance term
$\sigma(t, x(t), x(t-h(t)))$, has been used in recent papers dealing with stochastic neural
networks, see [22] and the references therein.
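One concrete noise intensity meeting (3.3), in fact with equality, is obtained by stacking $\Sigma_1 x(t)$ and $\Sigma_2 x(t-h(t))$ as the two columns of $\sigma$, one column per component of $w(t)$. The matrices $\Sigma_1$, $\Sigma_2$ below are borrowed from Example 3.15 purely for illustration.

```python
import numpy as np

# A noise intensity satisfying (3.3) with equality: one column of sigma per
# component of the Brownian motion w(t).
Sigma1 = np.array([[0.08, 0.0], [0.0, 0.08]])
Sigma2 = np.array([[0.09, 0.0], [0.0, 0.09]])

def sigma(x, y):
    # y stands for the delayed state x(t - h(t))
    return np.column_stack([Sigma1 @ x, Sigma2 @ y])

rng = np.random.default_rng(3)
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    s = sigma(x, y)
    lhs = np.trace(s.T @ s)                       # trace(sigma^T sigma)
    bound = (Sigma1 @ x) @ (Sigma1 @ x) + (Sigma2 @ y) @ (Sigma2 @ y)
    assert abs(lhs - bound) < 1e-12               # equality here
```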
Parameter uncertainties and stochastic perturbations are common sources of
disturbance in neural networks, and we model the uncertain stochastic neural
networks with mixed time delays as follows.
40 3 Robust Stability and Synchronization of Neural Networks
$$dx(t) = \big[-(A+\Delta A)x(t) + (W_0+\Delta W_0)f(x(t)) + (W_1+\Delta W_1)f(x(t-h(t))) + W_2\int_{t-\tau(t)}^{t}f(x(s))\,ds\big]dt + \sigma(t, x(t), x(t-h(t)))\,dw(t), \qquad (3.4)$$

where the matrices $\Delta A$, $\Delta W_0$ and $\Delta W_1$ are unknown matrices representing
time-varying parameter uncertainties and satisfying the following admissible condition:

$$\big[\Delta A \;\; \Delta W_0 \;\; \Delta W_1\big] = MF\big[N_1 \;\; N_2 \;\; N_3\big], \qquad (3.5)$$

where $M$, $N_1$, $N_2$ and $N_3$ are known real constant matrices, and $F$ is an unknown
time-varying matrix-valued function subject to

$$F^TF \le I. \qquad (3.6)$$
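Admissible uncertainties of the form (3.5)–(3.6) are easy to sample numerically: draw a random $F$, rescale it so that its largest singular value is at most 1 (which is exactly $F^TF \le I$), and form $\Delta A = MFN_1$. The $M$ and $N_1$ below are taken from Example 3.16 for illustration only.

```python
import numpy as np

# Sample an admissible uncertainty Delta_A = M F N1 with F^T F <= I.
rng = np.random.default_rng(4)
M = np.array([[0.1, 0.0], [0.0, 0.2]])
N1 = np.array([[0.2, 0.0], [0.0, 0.1]])

F = rng.normal(size=(2, 2))
F /= max(np.linalg.norm(F, 2), 1.0)   # spectral norm of F is now <= 1
Delta_A = M @ F @ N1
# F^T F <= I holds: I - F^T F is positive semidefinite
assert np.linalg.eigvalsh(np.eye(2) - F.T @ F).min() >= -1e-12
```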
|gi (x)| ≤ 1, ∀x ∈ R, i = 1, 2, . . . , n.
Denote by $x(t; \xi)$ the state trajectory of the neural network (3.1) or (3.4) from the
initial data $x(\theta) = \xi(\theta)$, $-\tau_0 \le \theta \le 0$, in $L^2_{F_0}([-\tau_0, 0]; R^n)$. According to [20],
the system (3.1) or (3.4) admits a trivial solution $x(t; 0) \equiv 0$ corresponding to the
initial data $\xi = 0$.
Before proceeding further, we introduce the definition of global exponential sta-
bility for the uncertain stochastic neural network (3.1) or (3.4) with discrete and
distributed time delays as follows:
Definition 3.6 For the neural network (3.1) or (3.4) and every $\xi \in L^2_{F_0}([-h, 0]; R^n)$,
the trivial solution (equilibrium point) is robustly, globally, exponentially stable in
the mean square if there exist positive constants $\beta > 0$ and $\mu > 0$ such that every
solution $x(t; \xi)$ of (3.1) or (3.4) satisfies
Theorem 3.8 Consider the dynamics of the high-order stochastic delayed neural
network (3.1). The system is robustly, globally, exponentially stable in the mean
square if there exist positive scalars $\rho > 0$, $\varepsilon_i > 0$ $(i = 1, 2, 3)$ and matrices
$P > 0$, $Q_1 > 0$, $Q_2 > 0$, $Z_j > 0$ $(j = 1, 2, 3, 4)$,

$$H = \begin{bmatrix}H_1\\ H_2\\ H_3\\ H_4\\ H_5\end{bmatrix}, \; J = \begin{bmatrix}J_1\\ J_2\\ J_3\\ J_4\\ J_5\end{bmatrix}, \; K = \begin{bmatrix}K_1\\ K_2\\ K_3\\ K_4\\ K_5\end{bmatrix}, \; L = \begin{bmatrix}L_1\\ L_2\\ L_3\\ L_4\\ L_5\end{bmatrix}, \; R = \begin{bmatrix}R_1\\ R_2\\ R_3\\ R_4\\ R_5\end{bmatrix}, \; S = \begin{bmatrix}S_1\\ S_2\\ S_3\\ S_4\\ S_5\end{bmatrix}$$

where

$$\Omega_{11} = -AP - PA + Q_1 + Q_2, \qquad \Omega_{13} = \varepsilon_1 L^{1/2}\Sigma_\mu, \qquad \Omega_{77} = \varepsilon_2 L\Sigma_\mu\Sigma_\mu - (1-d_h)Q_2,$$

$$\Phi = \Phi_1 + \Phi_2 + \Phi_2^T, \qquad \Phi_1 = \mathrm{diag}\{0, -(1-d_\tau)Q_1, 0, 0, 0\},$$

$$\Phi_2 = \big[H+K+L+S \;\; -H+J \;\; -J-K \;\; -L+R \;\; -R-S\big].$$
By Itô's differential formula [19], the stochastic derivative of $V(t, x(t))$ along
(3.1) is

where

(3.17)

and

$$2x^T(t)PW_2\int_{t-\tau}^{t}f(x(s))\,ds \le \varepsilon_3\Big(\int_{t-\tau}^{t}f(x(s))\,ds\Big)^T\Big(\int_{t-\tau}^{t}f(x(s))\,ds\Big) + \varepsilon_3^{-1}x^T(t)PW_2W_2^TPx(t), \qquad (3.18)$$
Denote

So we have

where

$$\xi(t) = \big[x^T(t) \;\; x^T(t-h(t))\big]^T, \qquad \Xi = \mathrm{diag}\{\Xi_{11}, \Xi_{22}\},$$

$$\Xi_{11} = -AP - PA + Q_1 + Q_2 + \rho\Sigma_1^T\Sigma_1 + \varepsilon_1 L\Sigma_\mu\Sigma_\mu + \varepsilon_1^{-1}PW_0W_0^TP + \varepsilon_2^{-1}PW_1W_1^TP + \varepsilon_3^{-1}PW_2W_2^TP,$$

$$\Xi_{22} = \varepsilon_2 L\Sigma_\mu\Sigma_\mu + \rho\Sigma_2^T\Sigma_2 - (1-d_h)Q_2.$$
It follows from the Schur Complement Lemma (Lemma 1.21) that (3.10) implies
$\Xi < 0$; hence we obtain

$$\mathcal{L}V_1(t, x(t)) < 0. \qquad (3.24)$$
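The Schur-complement step used here can be illustrated in miniature: for a symmetric block matrix whose (2,2) block is $-I$, negativity of the whole matrix is equivalent to negativity of the Schur complement $A + BB^T$ (the values below are arbitrary small test matrices).

```python
import numpy as np

# Schur complement demo: [[A, B], [B^T, -I]] < 0  <=>  A + B B^T < 0,
# since -I is always negative definite.
rng = np.random.default_rng(5)
n = 3
A = -np.eye(n)
B = 0.2 * rng.normal(size=(n, n))
big = np.block([[A, B], [B.T, -np.eye(n)]])
lam_big = np.linalg.eigvalsh(big).max()
lam_schur = np.linalg.eigvalsh(A + B @ B.T).max()
assert (lam_big < 0) == (lam_schur < 0)   # the two tests always agree
```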
where $\xi_2(t) = \big[x^T(t) \;\; x^T(t-\tau(t)) \;\; x^T(t-\tau) \;\; x^T(t-h(t)) \;\; x^T(t-h)\big]^T$.
3.1 Delay-Dependent Stability … 45
LV (t, x(t)) = LV1 (t, x(t)) + LV2 (t, x(t)) < 0. (3.27)
Denote λ1 = mini∈S {λmin (−Ψ1 )}, λ2 = mini∈S {λmin (−Ψ2 )}, we have
where

$$E\{\|x(t)\|^2\} \le \lambda_{\min}^{-1}(P)\,\eta\,e^{-kt}\sup_{-\tau_0\le s\le 0}E\{\|\phi(s)\|^2\}. \qquad (3.33)$$
Remark 3.9 Theorem 3.8 gives a new stability criterion for system (3.1). We define
a new Lyapunov-Krasovskii functional (3.12) which makes full use of the information
about the discrete and distributed time delays to derive the result. Furthermore,
some novel techniques have been exploited in the calculation of the time derivative
of $V(t)$. First, no assumptions about $Q_1$ and $Q_2$ have been imposed on the
system (3.1), whereas $Q_1 = \varepsilon_2 L\Sigma_\mu\Sigma_\mu + \rho\Sigma_2^T\Sigma_2$ and $Q_2 = \varepsilon_3\tau L\Sigma_\mu\Sigma_\mu$ were
adopted in [57]. Thus, the criterion presented in this section has the potential to
yield more general results. Second, the result in this section is exponential stability,
whereas the result in [57] is asymptotic stability, so the result in this section converges
faster. Last, the PWCLMIS presented by the authors in [28] has been employed in
this section.
If only the discrete time delay appears in the neural network, (3.1) can be simplified
to

The stability issue for stochastic high-order neural networks with discrete delays has
been investigated in [57], and the following corollary provides a more universal
result.
Corollary 3.10 Consider the dynamics of the neural network (3.34). The system
is robustly, globally, exponentially stable in the mean square if there exist positive
scalars $\rho > 0$, $\varepsilon_1 > 0$, $\varepsilon_2 > 0$ and matrices $P > 0$, $Q_2 > 0$, $Z_3 > 0$, $Z_4 > 0$,

$$L = \begin{bmatrix}L_1\\ L_2\\ L_3\end{bmatrix}, \quad R = \begin{bmatrix}R_1\\ R_2\\ R_3\end{bmatrix}, \quad S = \begin{bmatrix}S_1\\ S_2\\ S_3\end{bmatrix}$$
$$\begin{bmatrix}\bar{\Phi} & hL & hR & hS\\ * & -hZ_3 & 0 & 0\\ * & * & -hZ_3 & 0\\ * & * & * & -hZ_4\end{bmatrix} < 0, \qquad (3.37)$$
where

$$\bar{\Omega}_{11} = -AP - PA + Q_2, \qquad \bar{\Phi} = \bar{\Phi}_2 + \bar{\Phi}_2^T, \qquad \bar{\Phi}_2 = \big[L+S \;\; -L+R \;\; -R-S\big].$$
High-order neural networks of the type (3.38) have been intensively investigated
in the literature, such as [10, 37, 57, 60]. The following corollary provides a method
complementary to the results in [10, 37, 60]; furthermore, it is less restrictive than
the result in [57].
Corollary 3.11 Consider the dynamics of the neural network (3.38). The system is
robustly, globally, exponentially stable if there exist positive scalars $\varepsilon_1 > 0$, $\varepsilon_2 > 0$
and matrices $P > 0$, $Q_2 > 0$, $Z_3 > 0$, $Z_4 > 0$,

$$L = \begin{bmatrix}L_1\\ L_2\\ L_3\end{bmatrix}, \quad R = \begin{bmatrix}R_1\\ R_2\\ R_3\end{bmatrix}, \quad S = \begin{bmatrix}S_1\\ S_2\\ S_3\end{bmatrix}$$

such that the PWCLMIS which is constructed by (3.37) and the following LMI holds:

$$\begin{bmatrix}\bar{\Omega}_{11} & PW_0 & \Omega_{13} & PW_1 & 0\\ * & -\varepsilon_1I & 0 & 0 & 0\\ * & * & -\varepsilon_1I & 0 & 0\\ * & * & * & -\varepsilon_2I & 0\\ * & * & * & * & \Omega_{77}\end{bmatrix} < 0. \qquad (3.39)$$
Theorem 3.12 Consider the dynamics of the high-order uncertain stochastic delayed
neural network (3.4). The system is robustly, globally, exponentially stable in the
mean square if there exist positive scalars $\rho > 0$, $\varepsilon_i > 0$ $(i = 1, 2, 3)$ and matrices
$P > 0$, $Q_1 > 0$, $Q_2 > 0$, $Z_j > 0$ $(j = 1, 2, 3, 4)$,

$$H = \begin{bmatrix}H_1\\ H_2\\ H_3\\ H_4\\ H_5\end{bmatrix}, \; J = \begin{bmatrix}J_1\\ J_2\\ J_3\\ J_4\\ J_5\end{bmatrix}, \; K = \begin{bmatrix}K_1\\ K_2\\ K_3\\ K_4\\ K_5\end{bmatrix}, \; L = \begin{bmatrix}L_1\\ L_2\\ L_3\\ L_4\\ L_5\end{bmatrix}, \; R = \begin{bmatrix}R_1\\ R_2\\ R_3\\ R_4\\ R_5\end{bmatrix}, \; S = \begin{bmatrix}S_1\\ S_2\\ S_3\\ S_4\\ S_5\end{bmatrix}$$

such that the PWCLMIS which is constructed by (3.9), (3.11) and the following LMI
holds:

$$\begin{bmatrix}
\Omega_{11} & PW_0 & \Omega_{13} & PW_1 & PW_2 & \rho\Sigma_1^T & PM & -\varepsilon_4N_1^T & 0 & 0\\
* & -\varepsilon_1I & 0 & 0 & 0 & 0 & 0 & \varepsilon_4N_2^T & 0 & 0\\
* & * & -\varepsilon_1I & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
* & * & * & -\varepsilon_2I & 0 & 0 & 0 & \varepsilon_4N_3^T & 0 & 0\\
* & * & * & * & -\varepsilon_3I & 0 & 0 & 0 & 0 & 0\\
* & * & * & * & * & -\rho I & 0 & 0 & 0 & 0\\
* & * & * & * & * & * & -\varepsilon_4I & 0 & 0 & 0\\
* & * & * & * & * & * & * & -\varepsilon_4I & 0 & 0\\
* & * & * & * & * & * & * & * & \Omega_{77} & \rho\Sigma_2^T\\
* & * & * & * & * & * & * & * & * & -\rho I
\end{bmatrix} < 0. \qquad (3.40)$$
Proof From Theorem 3.8, the system (3.4) is robustly, globally, exponentially stable
in the mean square if there exist positive scalars $\rho > 0$, $\varepsilon_i > 0$ $(i = 1, 2, 3)$ and
matrices $P > 0$, $Q_1 > 0$, $Q_2 > 0$, $Z_j > 0$ $(j = 1, 2, 3, 4)$,

$$H = \begin{bmatrix}H_1\\ H_2\\ H_3\\ H_4\\ H_5\end{bmatrix}, \; J = \begin{bmatrix}J_1\\ J_2\\ J_3\\ J_4\\ J_5\end{bmatrix}, \; K = \begin{bmatrix}K_1\\ K_2\\ K_3\\ K_4\\ K_5\end{bmatrix}, \; L = \begin{bmatrix}L_1\\ L_2\\ L_3\\ L_4\\ L_5\end{bmatrix}, \; R = \begin{bmatrix}R_1\\ R_2\\ R_3\\ R_4\\ R_5\end{bmatrix}, \; S = \begin{bmatrix}S_1\\ S_2\\ S_3\\ S_4\\ S_5\end{bmatrix}$$

such that the PWCLMIS which is constructed by (3.9), (3.11) and the following LMI
holds:
$$\begin{bmatrix}
\tilde{\Omega}_{11} & P(W_0+\Delta W_0) & \Omega_{13} & P(W_1+\Delta W_1) & PW_2 & \rho\Sigma_1^T & 0 & 0\\
* & -\varepsilon_1I & 0 & 0 & 0 & 0 & 0 & 0\\
* & * & -\varepsilon_1I & 0 & 0 & 0 & 0 & 0\\
* & * & * & -\varepsilon_2I & 0 & 0 & 0 & 0\\
* & * & * & * & -\varepsilon_3I & 0 & 0 & 0\\
* & * & * & * & * & -\rho I & 0 & 0\\
* & * & * & * & * & * & \Omega_{77} & \rho\Sigma_2^T\\
* & * & * & * & * & * & * & -\rho I
\end{bmatrix} < 0, \qquad (3.41)$$
where
By Lemma 1.22, (3.42) holds if and only if there is a scalar $\varepsilon_4 > 0$ such that

$$\begin{bmatrix}
\Omega_{11} & PW_0 & \Omega_{13} & PW_1 & PW_2 & \rho\Sigma_1^T & 0 & 0\\
* & -\varepsilon_1I & 0 & 0 & 0 & 0 & 0 & 0\\
* & * & -\varepsilon_1I & 0 & 0 & 0 & 0 & 0\\
* & * & * & -\varepsilon_2I & 0 & 0 & 0 & 0\\
* & * & * & * & -\varepsilon_3I & 0 & 0 & 0\\
* & * & * & * & * & -\rho I & 0 & 0\\
* & * & * & * & * & * & \Omega_{77} & \rho\Sigma_2^T\\
* & * & * & * & * & * & * & -\rho I
\end{bmatrix} + \varepsilon_4^{-1}\begin{bmatrix}PM\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\end{bmatrix}\big[M^TP \;\; 0 \;\; 0 \;\; 0 \;\; 0 \;\; 0 \;\; 0 \;\; 0\big] + \varepsilon_4\begin{bmatrix}-N_1^T\\ N_2^T\\ 0\\ N_3^T\\ 0\\ 0\\ 0\\ 0\end{bmatrix}\big[-N_1 \;\; N_2 \;\; 0 \;\; N_3 \;\; 0 \;\; 0 \;\; 0 \;\; 0\big] < 0. \qquad (3.43)$$
It follows from the Schur Complement Lemma (Lemma 1.13) that (3.43) holds
if and only if (3.40) holds. The proof of Theorem 3.12 is completed.

If only the discrete time delay appears in the neural network, (3.4) can be simplified
to

such that the PWCLMIS which is constructed by (3.35), (3.37) and the following LMI
holds:
$$\begin{bmatrix}
\Omega_{11} & PW_0 & \Omega_{13} & PW_1 & \rho\Sigma_1^T & PM & -\varepsilon_4N_1^T & 0 & 0\\
* & -\varepsilon_1I & 0 & 0 & 0 & 0 & \varepsilon_4N_2^T & 0 & 0\\
* & * & -\varepsilon_1I & 0 & 0 & 0 & 0 & 0 & 0\\
* & * & * & -\varepsilon_2I & 0 & 0 & \varepsilon_4N_3^T & 0 & 0\\
* & * & * & * & -\rho I & 0 & 0 & 0 & 0\\
* & * & * & * & * & -\varepsilon_4I & 0 & 0 & 0\\
* & * & * & * & * & * & -\varepsilon_4I & 0 & 0\\
* & * & * & * & * & * & * & \Omega_{77} & \rho\Sigma_2^T\\
* & * & * & * & * & * & * & * & -\rho I
\end{bmatrix} < 0. \qquad (3.45)$$
If there are no stochastic perturbations σ(t, x(t), x(t − h)), the neural network
(3.44) will be reduced to
d x(t) = [−(A + ΔA)x(t) + (W0 + ΔW0 ) f (x(t)) + (W1 + ΔW1 ) f (x(t − h(t)))]dt.
(3.46)
We have the following corollary:
Corollary 3.14 Consider the dynamics of the neural network (3.46). The system is
robustly, globally, exponentially stable if there exist positive scalars ε1 > 0, ε2 > 0
and matrices P > 0, Q 2 > 0, Z 3 > 0, Z 4 > 0,
L = [L1T L2T L3T]T,  R = [R1T R2T R3T]T,  S = [S1T S2T S3T]T
52 3 Robust Stability and Synchronization of Neural Networks
such that the PWCLMIS, which is constructed by (3.37) and the following LMI, holds:

⎡ Ω11   P W0    Ω13    P W1    P M    −ε4 N1T   0   ⎤
⎢ ∗    −ε1 I    0      0       0       ε4 N2T   0   ⎥
⎢ ∗     ∗      −ε1 I   0       0       0        0   ⎥
⎢ ∗     ∗       ∗     −ε2 I    0       ε4 N3T   0   ⎥
⎢ ∗     ∗       ∗      ∗      −ε4 I    0        0   ⎥
⎢ ∗     ∗       ∗      ∗       ∗      −ε4 I     0   ⎥
⎣ ∗     ∗       ∗      ∗       ∗       ∗       Ω77  ⎦ < 0.   (3.47)
Example 3.15 Consider a two-neuron stochastic neural network (3.1) with both dis-
crete and distributed delays, where
A = diag{1.2, 1.2},  W0 = [1.5 −1.6; −1.6 1.5],  W1 = [1.2 0.3; 0.3 0.9],
W2 = [0.16 −0.64; −0.64 0.16],  Σμ = diag{0.2, 0.2},  Σ1 = diag{0.08, 0.08},
Σ2 = diag{0.09, 0.09},  L = 1.2,  dτ = 0.5,  dh = 0.5.
Applying Theorem 3.8, Table 3.1 gives the maximum allowable value of h for
different values of τ.
By contrast, applying Theorem 3.8 of [57] yields the maximum allowable value
τ = 1.7855. At the same time, Theorem 3.8 of [57] is independent of the discrete
time delay h.
Example 3.16 Consider a two-neuron uncertain stochastic neural network (3.4) with
mixed delays, where
A = diag{0.4, 0.4},  W0 = [0.1 −0.2; −0.2 0.1],  W1 = [0.2 0.3; 0.3 0.1],
W2 = [0.1 −0.64; −0.64 0.1],  Σμ = diag{0.1, 0.1},  Σ1 = diag{0.01, 0.01},
Σ2 = diag{0.04, 0.04},  M = diag{0.1, 0.2},  N1 = diag{0.2, 0.1},
N2 = diag{0.1, 0.2},  N3 = diag{0.2, 0.1},  L = 1.2,  dτ = 0.5,  dh = 0.5.
Applying Theorem 3.12, Table 3.2 gives the maximum allowable value of h for
different values of τ.
As Theorems 3.8 and 3.12 show, the admissible mixed time delays are large.
The reason we are able to obtain such large mixed time delays is that we employ
the PWCLMIS approach, in which the ranges of the time delays are free.
Observing the structure of the PWCLMIS, all system parameters A, W0, W1, W2, Σμ,
Σ1, Σ2, M, N1, N2 and N3 appear in one LMI, while the time delays τ and h appear
in another LMI that contains no system parameters. Moreover, the latter LMI contains
the free-weighting matrices H, J, K, L, R, S. Consequently, the admissible ranges of
the time delays are large (indeed, unrestricted).
3.1.5 Conclusion
This section has proposed new sufficient conditions for the global exponential stability
of deterministic and uncertain stochastic high-order neural networks. These conditions
depend on both the discrete and the distributed time-varying delays. The concept of
PWCLMIS has been developed in this section, and the stability criteria have been
formulated in terms of PWCLMIS. Large mixed time delays can be accommodated by the
PWCLMIS approach, and the criteria are more general than those in some recent works.
Two numerical examples have been given to demonstrate the merits of the presented criteria.
3.2.1 Introduction
Neural networks (cellular neural networks, Hopfield neural networks and bidirec-
tional associative memory networks) have been intensively studied over the past few
decades and have found application in a variety of areas, such as image processing,
pattern recognition, associative memory, and optimization problems [24, 26, 63].
In reality, time-delay systems are frequently encountered in various areas, e.g., in
neural networks, where a time delay is often a source of instability and oscillations.
Recently, both delay-independent and delay-dependent sufficient conditions have
been proposed to verify the asymptotical or exponential stability of delay neural
networks, see e.g. [9, 21, 27, 30, 47, 53, 67].
On the other hand, stochastic modeling has come to play an important role in
many real systems [3, 34], as well as in neural networks. Neural networks have finite
modes, which may jump from one to another at different times. Recently, it has been
shown in [23, 49] that the jumping between different neural network modes can be
governed by a Markovian chain. Furthermore, in real nervous systems, the synaptic
transmission is a noisy process brought on by random fluctuations from the release of
neurotransmitters and other probabilistic causes. It has also been known that a neural
network could be stabilized or destabilized by certain stochastic inputs [51]. Hence,
the stability analysis problem for stochastic neural networks becomes increasingly
significant, and some results related to this problem have recently been published,
see e.g. [48, 51, 53].
In this section, we study the global exponential stability problem for a class of
hybrid stochastic neural networks with mixed time delays and Markovian jumping
parameters, where the mixed delays comprise discrete and distributed time delays,
the parameter uncertainties are norm-bounded, and the neural networks are subjected
to stochastic disturbances described in terms of a Brownian motion. By utilizing a
Lyapunov-Krasovskii functional candidate and using the well-known S-procedure,
we convert the addressed stability analysis problem into a convex optimization problem. In this section, the free-weighting matrix approach is employed to derive a linear
matrix inequality (LMI)-based delay-dependent exponential stability criterion for
neural networks with mixed time delays and Markovian jumping parameters. Note
that LMIs can be easily solved by using the Matlab LMI toolbox, and no tuning of
parameters is required. Numerical examples demonstrate the effectiveness of this
method.
In this section, the neural network with mixed time delays is described as follows:

u̇(t) = −Au(t) + W0 g0(u(t)) + W1 g1(u(t − h)) + W2 ∫_{t−τ}^{t} g2(u(s))ds + V   (3.48)
3.2 Exponential Stability of Hybrid SDNN with Nonlinearity 55
Assumption 3.17 The neuron activation functions in (3.48) are bounded and satisfy
the following Lipschitz condition
Remark 3.18 In this section, the activation functions are not required to be continuous, differentiable, or monotonically increasing. Note that the types of activation
functions in (3.49) have been used in many papers, see [47–49, 51, 53].
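For the common choice gk = tanh, used in the numerical examples later in this chapter, these conditions hold with Lipschitz constant 1, and the bound is inherited by the shifted activations lk introduced below. A quick numerical spot check (the equilibrium component u* here is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
u_star = 0.3  # hypothetical equilibrium component

def l(x):
    # transformed activation l(x) = g(x + u*) - g(u*) with g = tanh
    return np.tanh(x + u_star) - np.tanh(u_star)

x = rng.uniform(-5.0, 5.0, 10_000)
# tanh is 1-Lipschitz, so |l(x)| <= |x|, and l(0) = 0 by construction
assert np.all(np.abs(l(x)) <= np.abs(x) + 1e-12)
assert l(0.0) == 0.0
print("Lipschitz bound holds on all samples")
```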
Let u ∗ be the equilibrium point of (3.48). For the purpose of simplicity, we trans-
form the intended equilibrium u ∗ to the origin by letting x = u − u ∗ , and then the
system (3.48) can be transformed into:
ẋ(t) = −Ax(t) + W0 l0(x(t)) + W1 l1(x(t − h)) + W2 ∫_{t−τ}^{t} l2(x(s))ds   (3.50)
where x(t) = [x1 (t), x2 (t), . . . , xn (t)]T ∈ Rn is the state vector of the transformed
system. It follows from (3.49) that the transformed neuron activation functions
lk (x) = gk (x + u ∗ ) − gk (u ∗ )(k = 0, 1, 2) satisfy
dx(t) = [−(A(r(t)) + ΔA(r(t)))x(t) + (W0(r(t)) + ΔW0(r(t)))l0(x(t))
        + (W1(r(t)) + ΔW1(r(t)))l1(x(t − h))
        + (W2(r(t)) + ΔW2(r(t))) ∫_{t−τ}^{t} l2(x(s))ds]dt
        + σ(t, x(t), x(t − h), r(t))dω(t).   (3.52)
where
y(t, i) = −(A(r(t)) + ΔA(r(t)))x(t) + (W0(r(t)) + ΔW0(r(t)))l0(x(t))
          + (W1(r(t)) + ΔW1(r(t)))l1(x(t − h)) + (W2(r(t)) + ΔW2(r(t))) ∫_{t−τ}^{t} l2(x(s))ds,
where ΔA(r (t)) is a diagonal matrix, and M A (r (t)), N A (r (t)), Mk (r (t)), Nk (r (t))
(k = 0, 1, 2), are known real constant matrices with appropriate dimensions at mode
r (t). The matrix F(t, r (t)), which may be time-varying, is unknown and satisfies
trace[σ T (t, x(t), x(t − h), r (t))σ(t, x(t), x(t − h), r (t))]
≤ |Σ1,r (t) x(t)|2 + |Σ2,r (t) x(t − h)|2 (3.56)
where Σ1i and Σ2i are known constant matrices with appropriate dimensions.
Observe the system (3.52) and let x(t; ξ) denote the state trajectory from the ini-
tial data x(θ) = ξ(θ) on −h ≤ θ ≤ 0 in L2F0 ([−h, 0]; Rn ). Clearly, the system
(3.52) admits an equilibrium point (trivial solution) x(t; 0) ≡ 0 corresponding to the
initial data ξ = 0. For all δ ∈ [−d, 0], suppose that ∃ > 0, such that
Recall that the Markovian process {r (t), t ≥ 0} takes values in the finite set
S = {1, 2, . . . , S}. For the sake of simplicity, we denote
Remark 3.20 The condition (3.56) imposed on the stochastic disturbance term,
σ(t, x(t), x(t − h), r (t)), has been exploited in recent papers dealing with stochas-
tic neural networks [57]. However, Markovian jumping parameters have not been
considered in [57]. The following stability concepts are needed in this section.
Definition 3.21 For the system (3.52) and every ξ ∈ L2F0 ([−h, 0]; Rn ), the equilib-
rium point is asymptotically stable in the mean square if, for every network mode,
and is globally exponentially stable in the mean square if, for every network mode,
there exist scalars α > 0 and β > 0 such that
The main purpose of the rest of this section is to establish LMI-based stability criteria
under which the system (3.52) is exponentially stable in the mean square.
First, we consider the uncertainty-free case, that is, the case with no parameter
uncertainties.
Theorem 3.22 The neural network (3.52) with F(t, r (t)) = 0 is globally expo-
nentially stable in the mean square, if ∀i ∈ S, there exist positive scalars ρi > 0,
εki > 0, (k = 1, 2, . . . , 8), positive definite matrices Pi , (i = 1, 2, . . . , S), Q 1 ,
Q 2 , S1 , S2 and matrices Hi = [H1i H2i H3i H4i ], L i = [L 1i L 2i L 3i L 4i ],
Ri = [R1i R2i R3i R4i ],
       ⎡ X11i  X12i  X13i  X14i ⎤
XiT =  ⎢ ∗     X22i  X23i  X24i ⎥ = Xi ,
       ⎢ ∗     ∗     X33i  X34i ⎥
       ⎣ ∗     ∗     ∗     X44i ⎦
       ⎡ Y11i  Y12i  Y13i  Y14i ⎤
YiT =  ⎢ ∗     Y22i  Y23i  Y24i ⎥ = Yi ,
       ⎢ ∗     ∗     Y33i  Y34i ⎥
       ⎣ ∗     ∗     ∗     Y44i ⎦
Pi < ρi I (3.59a)
Φi =
⎡ Φ11i  Φ12i  Φ13i  Φ14i  Pi W0i  Pi W1i  Pi W2i  H1iT    L1iT    R1iT W0i  R1iT W1i  R1iT W2i ⎤
⎢ ∗     Φ22i  Φ23i  Φ24i  0       0       0       H2iT    L2iT    R2iT W0i  R2iT W1i  R2iT W2i ⎥
⎢ ∗     ∗     Φ33i  Φ34i  0       0       0       H3iT    L3iT    R3iT W0i  R3iT W1i  R3iT W2i ⎥
⎢ ∗     ∗     ∗     Φ44i  0       0       0       H4iT    L4iT    R4iT W0i  R4iT W1i  R4iT W2i ⎥
⎢ ∗     ∗     ∗     ∗     −ε1i I  0       0       0       0       0         0         0        ⎥
⎢ ∗     ∗     ∗     ∗     ∗       −ε2i I  0       0       0       0         0         0        ⎥
⎢ ∗     ∗     ∗     ∗     ∗       ∗       −ε3i I  0       0       0         0         0        ⎥
⎢ ∗     ∗     ∗     ∗     ∗       ∗       ∗       −ε4i I  0       0         0         0        ⎥
⎢ ∗     ∗     ∗     ∗     ∗       ∗       ∗       ∗       −ε5i I  0         0         0        ⎥
⎢ ∗     ∗     ∗     ∗     ∗       ∗       ∗       ∗       ∗       −ε6i I    0         0        ⎥
⎢ ∗     ∗     ∗     ∗     ∗       ∗       ∗       ∗       ∗       ∗         −ε7i I    0        ⎥
⎣ ∗     ∗     ∗     ∗     ∗       ∗       ∗       ∗       ∗       ∗         ∗         −ε8i I   ⎦ < 0,
(3.59b)
     ⎡ −ε3i τ G2T G2 + Q2 + ε8i τ G2T G2 + X11i   X12i   X13i   X14i   H1iT ⎤
Ψi = ⎢ ∗                                          X22i   X23i   X24i   H2iT ⎥ ≥ 0   (3.59c)
     ⎢ ∗                                          ∗      X33i   X34i   H3iT ⎥
     ⎢ ∗                                          ∗      ∗      X44i   H4iT ⎥
     ⎣ ∗                                          ∗      ∗      ∗      S1   ⎦
     ⎡ Y11i   Y12i   Y13i   Y14i   L1iT ⎤
Ωi = ⎢ ∗      Y22i   Y23i   Y24i   L2iT ⎥ ≥ 0,   (3.59d)
     ⎢ ∗      ∗      Y33i   Y34i   L3iT ⎥
     ⎢ ∗      ∗      ∗      Y44i   L4iT ⎥
     ⎣ ∗      ∗      ∗      ∗      S2   ⎦
where

Φ11i = −Pi Ai − AiT Pi + Q1 + τ Q2 + Σ_{j=1}^{S} πij Pj + ε1i G0T G0 + ρi Σ1iT Σ1i
       + ε6i G0T G0 + ε4i τ Σ1iT Σ1i + ε5i h Σ1iT Σ1i + H1iT + H1i + L1iT + L1i
       − R1iT Ai − AiT R1i + τ X11i + h Y11i,

Φ33i = ε2i G1T G1 + ρi Σ2iT Σ2i − Q1 + ε7i G1T G1 + ε4i τ Σ2iT Σ2i + ε5i h Σ2iT Σ2i
       − L3i − L3iT + τ X33i + h Y33i,
where Pi > 0, Q1 > 0, Q2 > 0, S1 > 0, S2 > 0, ε4i > 0, ε5i > 0 (i = 1, 2, . . . , S)
are to be determined. By the Itô differential formula, the stochastic differential of
V(t, x(t), r(t)) along (3.52) with F(t, r(t)) = 0 can be obtained as follows:

dV(t, x(t), i) = LV(t, x(t), i)dt + 2xT(t)Pi σ(t, x(t), x(t − h), i)dω(t)   (3.61)
where

LV(t, x(t), i) = xT(t)[−Pi Ai − AiT Pi + Q1 + τ Q2 + Σ_{j=1}^{S} πij Pj]x(t)
+ 2xT(t)Pi W0i l0(x(t)) + 2xT(t)Pi W1i l1(x(t − h))
+ trace[σT(t, x(t), x(t − h), i)Pi σ(t, x(t), x(t − h), i)]
+ 2xT(t)Pi W2i ∫_{t−τ}^{t} l2(x(s))ds − xT(t − h)Q1 x(t − h) − ∫_{t−τ}^{t} xT(s)Q2 x(s)ds
+ 2[xT(t)H1iT + xT(t − τ)H2iT + xT(t − h)H3iT + yT(t, i)H4iT][x(t) − x(t − τ) − ∫_{t−τ}^{t} dx(s)]
+ 2[xT(t)L1iT + xT(t − τ)L2iT + xT(t − h)L3iT + yT(t, i)L4iT][x(t) − x(t − h) − ∫_{t−h}^{t} dx(s)]
+ 2[xT(t)R1iT + xT(t − τ)R2iT + xT(t − h)R3iT + yT(t, i)R4iT][−Ai x(t) + W0i l0(x(t))
+ W1i l1(x(t − h)) + W2i ∫_{t−τ}^{t} l2(x(s))ds − y(t, i)]
+ τ ξiT(t)Xi ξi(t) − ∫_{t−τ}^{t} ξiT(t)Xi ξi(t)ds + h ξiT(t)Yi ξi(t) − ∫_{t−h}^{t} ξiT(t)Yi ξi(t)ds
+ ε4i τ[‖Σ1i x(t)‖² + ‖Σ2i x(t − h)‖²] − ∫_{t−τ}^{t} ε4i[‖Σ1i x(s)‖² + ‖Σ2i x(s − h)‖²]ds
+ τ yT(t, i)S1 y(t, i) − ∫_{t−τ}^{t} yT(s, r(s))S1 y(s, r(s))ds
+ ε5i h[‖Σ1i x(t)‖² + ‖Σ2i x(t − h)‖²] − ∫_{t−h}^{t} ε5i[‖Σ1i x(s)‖² + ‖Σ2i x(s − h)‖²]ds
+ h yT(t, i)S2 y(t, i) − ∫_{t−h}^{t} yT(s, r(s))S2 y(s, r(s))ds   (3.62)

with ξi(t) = [xT(t)  xT(t − τ)  xT(t − h)  yT(t, i)]T.
2xT(t)Pi W2i ∫_{t−τ}^{t} l2(x(s))ds
≤ ε3i⁻¹ xT(t)Pi W2i W2iT Pi x(t) + ε3i (∫_{t−τ}^{t} l2(x(s))ds)T (∫_{t−τ}^{t} l2(x(s))ds)
≤ ε3i⁻¹ xT(t)Pi W2i W2iT Pi x(t) + ε3i τ ∫_{t−τ}^{t} l2T(x(s)) l2(x(s))ds
≤ ε3i⁻¹ xT(t)Pi W2i W2iT Pi x(t) + ε3i τ ∫_{t−τ}^{t} xT(s)G2T G2 x(s)ds.   (3.65)

trace[σT(t, x(t), x(t − h), i)Pi σ(t, x(t), x(t − h), i)]
≤ λmax(Pi) trace[σT(t, x(t), x(t − h), i)σ(t, x(t), x(t − h), i)]
≤ λmax(Pi)[xT(t)Σ1iT Σ1i x(t) + xT(t − h)Σ2iT Σ2i x(t − h)]
≤ ρi[xT(t)Σ1iT Σ1i x(t) + xT(t − h)Σ2iT Σ2i x(t − h)].   (3.66)
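The first inequality in (3.66) is the generic estimate trace(σT P σ) ≤ λmax(P) trace(σT σ), valid for any symmetric positive definite P. It can be spot-checked numerically with random data (not taken from the example systems of this section):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 3
B = rng.standard_normal((n, n))
P = B @ B.T + 0.1 * np.eye(n)      # random symmetric positive definite P
sig = rng.standard_normal((n, m))  # stand-in for the diffusion matrix sigma

lhs = np.trace(sig.T @ P @ sig)
rhs = np.linalg.eigvalsh(P).max() * np.trace(sig.T @ sig)
print(lhs <= rhs + 1e-9)  # the bound holds for every column of sigma separately
```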
Since

E[−2ξiT(t)HiT ∫_{t−τ}^{t} dx(s)] ≤ −2ξiT(t)HiT ∫_{t−τ}^{t} y(s, r(s))ds + ε4i⁻¹ ξiT(t)HiT Hi ξi(t)
+ ∫_{t−τ}^{t} ε4i E[‖Σ1(r(s))x(s)‖² + ‖Σ2(r(s))x(s − h)‖²]ds.   (3.67)
(3.70)

2ξiT(t)RiT W2i ∫_{t−τ}^{t} l2(x(s))ds ≤ ε8i⁻¹ ξiT(t)RiT W2i W2iT Ri ξi(t) + ε8i τ ∫_{t−τ}^{t} xT(s)G2T G2 x(s)ds
(3.71)
where

     ⎡ Θ11i   0   0      0           ⎤
Θi = ⎢ 0      0   0      0           ⎥ + HiT[I  −I  0  0] + [I  −I  0  0]T Hi + ε4i⁻¹ HiT Hi
     ⎢ 0      0   Θ33i   0           ⎥
     ⎣ 0      0   0      τ S1 + h S2 ⎦
+ LiT[I  0  −I  0] + [I  0  −I  0]T Li + ε5i⁻¹ LiT Li
+ RiT[−Ai  0  0  −I] + [−Ai  0  0  −I]T Ri
+ τ Xi + h Yi + ε6i⁻¹ RiT W0i W0iT Ri + ε7i⁻¹ RiT W1i W1iT Ri + ε8i⁻¹ RiT W2i W2iT Ri,
(3.72)
with

Θ11i = −Pi Ai − AiT Pi + Q1 + τ Q2 + Σ_{j=1}^{S} πij Pj + ε1i⁻¹ Pi W0i W0iT Pi + ε1i G0T G0
       + ε2i⁻¹ Pi W1i W1iT Pi + ε3i⁻¹ Pi W2i W2iT Pi + ρi Σ1iT Σ1i + ε6i G0T G0,

Θ33i = ε2i G1T G1 + ρi Σ2iT Σ2i − Q1 + ε7i G1T G1 + ε4i τ Σ2iT Σ2i + ε5i h Σ2iT Σ2i,
which with (3.59a) and using Schur complement, implies that there exists a scalar
α = λmin (−Θi ) > 0, such that
In the following, we will prove the mean-square exponential stability of the system
(3.52). To this end, we define λP = max_{i∈S} λmax(Pi) and λp = min_{i∈S} λmin(Pi).
According to dx(t) = y(t, i)dt + σ(t, x(t), x(t − h), r(t))dω(t), |x(t + δ)| ≤
|x(t)| and (3.60), there exist positive scalars δ1, δ2 such that

λp E|x(t)|² ≤ EV(x(t), t, i) ≤ λP E|x(t)|² + δ1 ∫_{t−d}^{t} E|x(s)|²ds   (3.74)
and
Considering and by Dynkin’s formula, one obtains that for each r (t) = i, i ∈ S,
t > 0,
(3.78)
or
which implies that the trivial solution of system (3.52) is exponentially stable in the
mean square. This completes the proof.
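The scalar α = λmin(−Θi) used in the proof gives the quadratic bound ξTΘiξ ≤ −α|ξ|² whenever Θi is symmetric negative definite, which is the step that turns the matrix condition into a decay estimate. A numerical check, with a random negative definite matrix standing in for Θi:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5))
Theta = -(B @ B.T + 0.5 * np.eye(5))      # random symmetric negative definite Theta
alpha = np.linalg.eigvalsh(-Theta).min()  # alpha = lambda_min(-Theta) > 0

x = rng.standard_normal((5, 1000))
quad = np.einsum('ij,ik,kj->j', x, Theta, x)  # x^T Theta x for each column of x
assert alpha > 0
assert np.all(quad <= -alpha * (x**2).sum(axis=0) + 1e-9)
print("quadratic form bounded by -alpha*|x|^2")
```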
Theorem 3.24 The dynamics of the neural network (3.52) is globally exponentially
stable in the mean square if, ∀i ∈ S, there exist positive scalars ρi > 0, εji > 0
(j = 1, 2, . . . , 8), λn > 0 (n = 1, 2, . . . , 7), positive definite matrices Pi (i = 1, 2, . . . , S),
Q1, Q2, S1, S2 and matrices Hi = [H1i H2i H3i H4i], Li = [L1i L2i L3i L4i],
Ri = [R1i R2i R3i R4i],

       ⎡ X11i  X12i  X13i  X14i ⎤          ⎡ Y11i  Y12i  Y13i  Y14i ⎤
XiT =  ⎢ ∗     X22i  X23i  X24i ⎥ = Xi ,   YiT = ⎢ ∗     Y22i  Y23i  Y24i ⎥ = Yi ,
       ⎢ ∗     ∗     X33i  X34i ⎥          ⎢ ∗     ∗     Y33i  Y34i ⎥
       ⎣ ∗     ∗     ∗     X44i ⎦          ⎣ ∗     ∗     ∗     Y44i ⎦

such that (3.59a), (3.59c), (3.59d) and the following
linear matrix inequalities hold:
⎡ ⎤
Φi + Φib Γ1i Γ2i Γ3i Γ4i Γ5i Γ6i Γ7i
⎢ ∗ −λ1i 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ −λ2i 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −λ3i 0 0 0 0 ⎥
⎢ ⎥<0 (3.80)
⎢ ∗ ∗ ∗ ∗ −λ4i 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ −λ5i 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ ∗ −λ6i 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ ∗ −λ7i
Φib = diag{λ1i NAiT NAi, 0, 0, 0, λ2i N0iT N0i, λ3i N1iT N1i, λ4i N2iT N2i,
           0, 0, λ5i N0iT N0i, λ6i N1iT N1i, λ7i N2iT N2i},
Γ1i = [−R1iT MAi − Pi MAi;  −R2iT MAi;  −R3iT MAi;  −R4iT MAi;  0;  0;  0;  0;  0;  0;  0;  0],
Γ2i = [Pi M0i;  0;  0;  0;  0;  0;  0;  0;  0;  0;  0;  0],
Γ3i = [Pi M1i;  0;  0;  0;  0;  0;  0;  0;  0;  0;  0;  0],
Γ4i = [Pi M2i;  0;  0;  0;  0;  0;  0;  0;  0;  0;  0;  0],
Γ5i = [R1iT M0i;  R2iT M0i;  R3iT M0i;  R4iT M0i;  0;  0;  0;  0;  0;  0;  0;  0],
Γ6i = [R1iT M1i;  R2iT M1i;  R3iT M1i;  R4iT M1i;  0;  0;  0;  0;  0;  0;  0;  0],
Γ7i = [R1iT M2i;  R2iT M2i;  R3iT M2i;  R4iT M2i;  0;  0;  0;  0;  0;  0;  0;  0],

where each Γki is a column of twelve blocks and the semicolons denote vertical stacking.
where

Γ1bi = [NAi  0  0  0  0  0  0  0  0  0  0  0],
Γ2bi = [0  0  0  0  N0i  0  0  0  0  0  0  0],
Γ3bi = [0  0  0  0  0  N1i  0  0  0  0  0  0],
Γ4bi = [0  0  0  0  0  0  N2i  0  0  0  0  0],
Γ5bi = [0  0  0  0  0  0  0  0  0  N0i  0  0],
Γ6bi = [0  0  0  0  0  0  0  0  0  0  N1i  0],
Γ7bi = [0  0  0  0  0  0  0  0  0  0  0  N2i].
By Theorem 3.22, the system (3.52) is globally exponentially stable in the mean
square, if (3.81), (3.59a), (3.59c) and (3.59d) hold for all F(t, r (t)) satisfying (3.55).
By Lemma 1.22, it follows that for all F(t, r (t)) satisfying (3.55), (3.81) holds if
and only if there exist scalars λji (j = 1, 2, . . . , 7) such that

Φi + λ1i⁻¹ Γ1i Γ1iT + λ1i Γ1biT Γ1bi + λ2i⁻¹ Γ2i Γ2iT + λ2i Γ2biT Γ2bi + λ3i⁻¹ Γ3i Γ3iT + λ3i Γ3biT Γ3bi
+ λ4i⁻¹ Γ4i Γ4iT + λ4i Γ4biT Γ4bi + λ5i⁻¹ Γ5i Γ5iT + λ5i Γ5biT Γ5bi + λ6i⁻¹ Γ6i Γ6iT + λ6i Γ6biT Γ6bi
+ λ7i⁻¹ Γ7i Γ7iT + λ7i Γ7biT Γ7bi < 0.
(3.82)
By Schur complement, it can be easily shown that (3.82) is equivalent to (3.80). This
completes the proof.
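The Schur complement step used here is the standard block criterion: a symmetric M = [A B; BT C] satisfies M < 0 if and only if C < 0 and A − BC⁻¹BT < 0. A random-instance check of the equivalence:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
# build a random symmetric negative definite block matrix M = [[A, B], [B^T, C]]
X = rng.standard_normal((2 * n, 2 * n))
M = -(X @ X.T + 0.2 * np.eye(2 * n))
A, B, C = M[:n, :n], M[:n, n:], M[n:, n:]

neg_def = lambda S: np.linalg.eigvalsh(S).max() < 0
# Schur complement test: M < 0  <=>  C < 0 and A - B C^{-1} B^T < 0
schur = A - B @ np.linalg.solve(C, B.T)
print(neg_def(M), neg_def(C) and neg_def(schur))
```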
Remark 3.25 Letting S = {1} and F(t, r(t)) = 0, the neural network (3.52) reduces
to (3.51) in [57]; likewise, it can be rewritten as (3.52) in [48]. Therefore,
Theorem 3.22 and Theorem 3.24 in this section can be regarded as extensions of
Theorem 1 in [57] and Theorem 1 in [48]
respectively. However, for delayed neural networks, the criteria proposed in [48] and
[57] are only applicable for systems with some admissible time delay. As is well
known, when the time delay is actually small, the delay-independent conditions tend
to be conservative. In this section, the free-weighting matrix approach is employed
to derive a delay-dependent criterion for neural networks with mixed time delays.
Remark 3.26 Theorem 3.24 presents a sufficient condition to guarantee the global
exponential stability in the mean square for the hybrid stochastic neural networks
with mixed time delays and nonlinearity. For neural networks with Markovian jumping
parameters, the problem of global exponential stability analysis for a class of neural
networks has been handled in [49], where both time delays and Markovian jumping
parameters are considered. A general class of stochastic interval additive neural
networks with time-varying delay and Markovian switching has been studied in [23].
It is worth noting that in [49] the parameter uncertainties, mixed time delays, and
stochastic disturbances were not fully taken into account; in [23], the mixed time
delays were likewise not considered. Letting F(t, r(t)) = 0, W2(r(t)) = 0, and
σ(t, x(t), x(t − h), r(t)) = 0 in (3.52), we obtain system (3.52) in [49]. Up to now,
the stability analysis problem for neural networks with Markovian jumping parameters,
stochastic disturbance, and mixed time delays has not been fully investigated despite
its practical importance. In this section, the stability problem has been thoroughly
discussed for hybrid stochastic neural networks with mixed time delays and nonlinearity.
We present two examples here to illustrate the usefulness of our main results. Our
aim is to examine the global exponential stability of a given delayed neural network
with Markovian jumping parameters.
Example 3.15 Consider a two-neuron network (3.52) with two modes and without
parameter uncertainties. The network parameters are given as follows:
A1 = diag{2.6, 2.7},  A2 = diag{2.5, 2.6},  G0 = diag{0.2, 0.3},  G1 = diag{0.4, 0.6},
G2 = diag{0.3, 0.4},  W01 = [1.2 −1.5; −1.7 1.2],  W02 = [1.1 −1.6; −1.8 1.2],
W11 = [−1.1 0.5; 0.5 0.8],  W12 = [−1.6 0.1; 0.3 0.4],  W21 = [0.6 0.1; 0.1 0.2],
W22 = [0.8 0.2; 0.2 0.3],  Σ11 = diag{0.08, 0.08},  Σ12 = diag{0.07, 0.06},
Σ21 = diag{0.09, 0.09},  Σ22 = diag{0.08, 0.04},  Γ = [−0.12 0.12; 0.11 −0.11],
τ = 0.12,  h = 0.13,  σ(t, x(t), x(t − h), 1) = (0.4x1(t − h), 0.5x2(t))T,
σ(t, x(t), x(t − h), 2) = (0.5x1(t), 0.3x2(t − h))T,  lk(x(t)) = tanh(x(t)),  k = 0, 1, 2.
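Independently of the LMI computation, the stability claim can be probed by direct simulation of the nominal system (F(t, r(t)) = 0). The sketch below is a naive Euler–Maruyama discretization of (3.52) with the mode data above; the constant initial history, the reading of Γ as the mode transition rate matrix, and the first-order delay handling via a history buffer are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two-mode data of this example (nominal system, F(t, r(t)) = 0)
A  = [np.diag([2.6, 2.7]), np.diag([2.5, 2.6])]
W0 = [np.array([[1.2, -1.5], [-1.7, 1.2]]), np.array([[1.1, -1.6], [-1.8, 1.2]])]
W1 = [np.array([[-1.1, 0.5], [0.5, 0.8]]),  np.array([[-1.6, 0.1], [0.3, 0.4]])]
W2 = [np.array([[0.6, 0.1], [0.1, 0.2]]),   np.array([[0.8, 0.2], [0.2, 0.3]])]
rates = (0.12, 0.11)  # |Gamma_ii|: mode switching rates (Gamma read as the generator)
tau, h, dt, T = 0.12, 0.13, 1e-3, 10.0
n_tau, n_h = round(tau / dt), round(h / dt)

def diffusion(x, xh, i):
    # sigma(t, x(t), x(t - h), mode) from the example data
    return np.array([0.4 * xh[0], 0.5 * x[1]]) if i == 0 else \
           np.array([0.5 * x[0], 0.3 * xh[1]])

buf = np.tile([0.5, -0.5], (max(n_tau, n_h) + 1, 1))  # constant initial history (assumed)
x, mode = buf[-1].copy(), 0
for _ in range(int(T / dt)):
    if rng.random() < rates[mode] * dt:            # Markov chain jump between the two modes
        mode = 1 - mode
    xh = buf[-n_h]                                 # discrete delay state x(t - h)
    dist = np.tanh(buf[-n_tau:]).sum(axis=0) * dt  # distributed delay: int_{t-tau}^t tanh(x(s)) ds
    drift = -A[mode] @ x + W0[mode] @ np.tanh(x) + W1[mode] @ np.tanh(xh) + W2[mode] @ dist
    x = x + drift * dt + diffusion(x, xh, mode) * np.sqrt(dt) * rng.standard_normal(2)
    buf = np.vstack([buf[1:], x])

# the trajectory should stay bounded and decay, consistent with Theorem 3.22
print(np.linalg.norm(x))
```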
By using the Matlab LMI toolbox, we solve the LMIs in (3.59a)–(3.59d) and obtain

P1 = [17.6679 4.2912; 4.2912 21.3771],  P2 = [4.6851 1.8324; 1.8324 5.7081],
ρ1 = 39.4315, ρ2 = 24.6899, ε11 = 64.8119, ε21 = 32.1690, ε31 = 25.0637, ε41 = 24.4488,
ε51 = 22.2422, ε61 = 32.2754, ε71 = 23.2970, ε81 = 22.6629, ε12 = 22.2509, ε22 = 17.9420,
ε32 = 20.2979, ε42 = 24.6447, ε52 = 20.4882, ε62 = 39.1595, ε72 = 27.5308,
ε82 = 25.4427.
Therefore, it follows from Theorem 3.22 that the two-neuron neural network
(3.52) without parameter uncertainties is globally exponentially stable in the mean
square. The responses of the state vector x(t) of system (3.52) for Example 3.15 are
shown in Fig. 3.1, which further illustrates the stability.
Example 3.16 We consider the neural network (3.52) with parameter uncertainties.
The network data are given as follows:
A1 = diag{2.7, 2.8, 2.9},  A2 = diag{2.6, 2.4, 2.7},
W01 = [0.3 1.8 0.5; −1.1 1.6 1.1; 0.6 0.4 −0.3],
W02 = [0.2 1.6 0.4; −1.3 1.4 1.2; 0.4 0.3 −0.2],
W11 = [0.8 0.2 0.1; 0.2 0.6 0.6; −0.8 1.1 −1.2],
W12 = [0.2 −0.1 0.3; 0.1 0.5 0.4; −0.9 1.3 −1.5],
W21 = [0.5 0.2 0.1; 0.3 0.7 −0.3; 1.2 −1.1 −0.5],
W22 = [0.6 0.3 0.2; 0.2 0.6 −0.5; 1.4 −1.2 −0.4],
Γ = [−2 2; 1 −1],
F(t, 1) = F(t, 2) = diag(sin(5t), cos(5t), sin(3t)),  G0 = G1 = G2 = 0.2I,
Σ11 = Σ12 = Σ21 = Σ22 = 0.8I,
MA1 = MA2 = M01 = M02 = M11 = M12 = M21 = M22 = NA1 = NA2
= N01 = N02 = N11 = N12 = N21 = N22 = 0.1I,
τ = 0.12,  h = 0.13,  lk(x(t)) = tanh(x(t)),  k = 0, 1, 2,
σ(t, x(t), x(t − h), 1) = (0.5x1(t), 0.3x2(t − h), 0.4x3(t))T,
σ(t, x(t), x(t − h), 2) = (0.4x1(t − h), 0.6x2(t − h), 0.7x3(t))T.
By solving the LMIs (3.59a), (3.59c), (3.59d) and (3.80), we obtain

P1 = [35.3471 −3.3118 −0.3826; −3.3118 33.7867 0.8046; −0.3826 0.8046 30.9089],
P2 = [6.9012 −2.2426 −0.1964; −2.2426 5.3516 0.4075; −0.1964 0.4075 4.1297],
which indicates that the neural network (3.52) with parameter uncertainties is glob-
ally exponentially stable in the mean square.
For Example 3.16, the responses of the state vector of system (3.52) are shown
in Fig. 3.2. The simulation results imply that the neural network (3.52) is globally
exponentially stable.
3.2.5 Conclusions
In this section, we have addressed the problem of global exponential stability analysis
for a class of uncertain stochastic delayed neural networks in which both mixed
time delays and Markovian jumping parameters exist. Free-weighting matrices are
employed to express the relationships between the terms in the Leibniz–Newton
formula, and an LMI-based global exponential stability criterion is derived for delayed
neural networks with mixed time delays and Markovian jumping parameters. Finally,
simulation examples demonstrate the usefulness of the proposed results.
3.3 Anti-Synchronization Control of Unknown CNN … 69
3.3.1 Introduction
Chaos synchronization, since it was first put forward by Pecora and Carroll [44],
has shown potential application value in secure communication [1, 7], image restoration
[32], optimization problems [35] and many other fields. Recently, research
focused on the synchronization of chaotic neural networks has been reported
[5, 41, 56, 61, 62], concerning, for example, complete synchronization [5],
generalized synchronization [5], phase synchronization [56], lag synchronization [62],
projective synchronization [61], anti-synchronization [41], and so forth.
Meanwhile, noise perturbation and time delay, which are unavoidable in practice, are
often sources of instability. That is to say, chaotic neural networks with noise
perturbation and delay are well worth studying.
The stability analysis problem for unknown chaotic neural networks with noise
perturbation and delay has not yet been properly investigated. Therefore, in this
section, we deal with the stability of a pair of anti-synchronized unknown
chaotic neural networks. With the aid of a special Lyapunov-Krasovskii functional, the
stability problem is converted into an optimization problem that can be easily solved
using the linear matrix inequality (LMI) approach. Note that by using the Matlab LMI
toolbox, we can easily solve the LMI, and no tuning of parameters is required [39, 42].
Finally, a numerical example is chosen to show the effectiveness and validity of the
proposed stability conditions.
In this section, the master chaotic neural network system with delay is

where x(t) = [x1(t), x2(t), . . . , xn(t)]T ∈ Rn represents the state vector of the neural
network; n is the total number of neurons; C = diag{c1, c2, . . . , cn} > 0 is the rate
with which each neuron resets its potential to the resting state when disconnected
from the network; A = (aij)n×n, B = (bij)n×n ∈ Rn×n are the connection weight
matrix and the delayed connection weight matrix; f denotes the activation functions,
f(x(t)) = (f1(x1(t)), f2(x2(t)), . . . , fn(xn(t)))T ∈ Rn and
f(xτ(t)) = (f1(x1(t − τ1)), f2(x2(t − τ2)), . . . , fn(xn(t − τn)))T ∈ Rn, where
τ > 0 is the transmission delay.
The unknown slave system with noise perturbation and delay is given as
Assumption 3.27 The neuron activation functions f(x) in (3.83) and (3.85) satisfy
|fi(x) − fi(y)| ≤ λi|x − y| for all x, y ∈ R, where λi > 0 is a constant.
Assumption 3.28 The function H(t, x, y) satisfies the Lipschitz condition, and
there exist constant matrices G1, G2 of appropriate dimensions such that
Assumption 3.29 f(0) ≡ 0 and σ(t, 0, 0) ≡ 0, so that the origin is an equilibrium
point of systems (3.83) and (3.85).
Letting e(t) = x(t) + y(t) be the anti-synchronization error, where x(t) and y(t)
are state variables of drive system (3.83) and response system (3.85). Then the error
system can be derived as follows
where g(e(t)) = f (x(t)) + f (y(t)), g(eτ (t)) = f (xτ (t)) + f (yτ (t)). The controller
U is designed as follows:
where K1 and K2 are feedback gains that can be determined through the LMI. Moreover,
from Assumption 3.27, we can easily get
|gi (ei (t))| = | f i (xi (t)) + f i (yi (t))| ≤ λi |ei (t)|. (3.87)
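For the common choice fi = tanh, as in the numerical example below, (3.87) holds with λi = 1: tanh is odd and 1-Lipschitz, so tanh(xi) + tanh(yi) = tanh(xi) − tanh(−yi) has magnitude at most |xi − (−yi)| = |ei|. A numerical spot check:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(-4.0, 4.0, 10_000)
y = rng.uniform(-4.0, 4.0, 10_000)
e = x + y  # anti-synchronization error

# g(e) = f(x) + f(y) = tanh(x) - tanh(-y); since tanh is odd and 1-Lipschitz,
# |g(e)| <= |x - (-y)| = |e|, i.e. the bound (3.87) with lambda_i = 1
g = np.tanh(x) + np.tanh(y)
assert np.all(np.abs(g) <= np.abs(e) + 1e-12)
print("bound |g(e)| <= |e| holds on all samples")
```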
Lemma 3.30 ([38]) Under Assumptions 3.27 and 3.28, suppose that e(t) (with e(t) ≡
e(t; t0, ζ)) is a solution of (3.89), and that a continuous, positive function
V(t, x) exists and satisfies: for t ≥ t0 − τ and x ∈ R, there exist positive constants c1,
c2 such that c1|x|² ≤ V(t, x) ≤ c2|x|²; if t ≥ t0 − τ, there exist constants 0 ≤ β < α,
If t ≥ t0, then

E(|e(t; t0, ζ)|²) ≤ (c2/c1) E(sup_{s∈[t0−τ, t0]} |ζ(s)|²) exp(−v(t − t0)),
Theorem 3.31 Under the above assumptions, systems (3.83) and (3.85) achieve
anti-synchronization if there exist constants λi > 0, vi > 0, εij > 0 and θij > 0
(i, j = 1, 2, . . . , n) such that, with Λ = diag{λ1, λ2, . . . , λn}, the update laws of the
parameters in (3.85) are ĉ˙i = vi ei(t)yi(t), â˙ij = −εij ei(t)fj(yi(t)),
b̂˙ij = −θij ei(t)fj(yi(t − τ)), and the feedback gains K1 and K2 of the controller U
satisfy the following LMI:

Ψ = ⎡ Ξ1          (1/2)K2                       ⎤ < 0,   (3.88)
    ⎣ (1/2)K2T    (1/2)μ⁻¹ΛTΛ + G2TG2 − Q       ⎦

where Ξ1 = (1/2)μATA + (1/2)μΛTΛ + (1/2)μBTB + G1TG1 + K1 − C + Q.
eT(t)Ag(e(t)) ≤ (1/2)μ eT(t)ATA e(t) + (1/2)μ⁻¹ gT(e(t))g(e(t))
             ≤ (1/2)μ eT(t)ATA e(t) + (1/2)μ⁻¹ eT(t)ΛTΛ e(t),   (3.92)

eT(t)Bg(e(t − τ)) ≤ (1/2)μ eT(t)BTB e(t) + (1/2)μ⁻¹ gT(e(t − τ))g(e(t − τ))
                  ≤ (1/2)μ eT(t)BTB e(t) + (1/2)μ⁻¹ eT(t − τ)ΛTΛ e(t − τ),   (3.93)

where Λ = diag{λ1, λ2, . . . , λn}. From Assumption 3.28, it follows

trace[HT(t, e(t), e(t − τ))H(t, e(t), e(t − τ))]
≤ eT(t)G1TG1 e(t) + eT(t − τ)G2TG2 e(t − τ).   (3.94)
LV(e(t)) ≤ eT(t)[(1/2)μATA + (1/2)μΛTΛ + (1/2)μBTB + G1TG1 + K1 − C + Q]e(t)
+ eT(t)K2 e(t − τ) + eT(t − τ)[(1/2)μ⁻¹ΛTΛ + G2TG2 − Q]e(t − τ).

That is,

LV(e(t)) ≤ [eT(t)  eT(t − τ)] Ψ [eT(t)  eT(t − τ)]T.   (3.95)
Therefore, we have

which implies

E(‖e(t)‖²) ≤ e^{−σ(t−t0)} E(‖ζ‖²),   (3.99)

and σ is the unique positive solution of v = α − β e^{vτ}. This completes the proof.
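The decay rate is determined by the scalar equation v = α − βe^{vτ}; for 0 ≤ β < α its right-hand side minus v is strictly decreasing on [0, α], so the unique positive root can be found by bisection. A small sketch with hypothetical values of α, β and τ:

```python
import math

def decay_rate(alpha, beta, tau, tol=1e-12):
    """Bisection for the unique positive root of v = alpha - beta*exp(v*tau),
    assuming 0 <= beta < alpha as required by the lemma."""
    f = lambda v: alpha - beta * math.exp(v * tau) - v
    lo, hi = 0.0, alpha  # f(0) = alpha - beta > 0 and f(alpha) <= 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# hypothetical values: alpha, beta from the Lyapunov estimates, tau the delay
v = decay_rate(1.0, 0.5, 0.12)
print(v)
```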
Remark 3.32 This section considers noise perturbation as well as time delay in the
anti-synchronization of unknown master and slave systems. Therefore, the controller
and Lyapunov function adopted here can be applied to more complex systems than
those in [41], which considers anti-synchronization of master and slave systems with
known parameters.
Remark 3.33 In Theorem 3.31, the controller U is designed with the time delay and
noise perturbation of the slave system taken into account. Since the theorem can also
be applied to pairs of systems free of the time-delay term or of noise, the following
corollaries can be derived.
Remark 3.34 According to Theorem 3.31, the network topologies must all be
irreducible. In practice, the network topology structures may be only partly irreducible;
for example, K1, . . . , Kl may be irreducible while Kl+1, . . . , Kκ are reducible. The
following corollary gives a synchronization condition for this situation.
Corollary 3.35 Under the above assumptions, systems (3.83) and (3.85) achieve
anti-synchronization without time delay if there exist arbitrary constants vi > 0,
where Ξ1 = 21 μA T A + 21 μΛT Λ + 21 μB T B + G 1T G 1 + K 1 − C + Q.
Corollary 3.36 Under the above assumptions, systems (3.83) and (3.85) achieve
anti-synchronization without noise perturbation if there exist constants vi > 0,
εij > 0 and θij > 0 (i, j = 1, 2, . . . , n) such that ĉ˙i = vi ei(t)yi(t),
â˙ij = −εij ei(t)fj(yi(t)), b̂˙ij = −θij ei(t)fj(yi(t − τ)), and the feedback gains K1
and K2 of the controller U satisfy the following LMI:

Ψ = ⎡ Ξ1    0                    ⎤ < 0,   (3.101)
    ⎣ 0     (1/2)μ⁻¹ΛTΛ − Q      ⎦

where Ξ1 = (1/2)μATA + (1/2)μΛTΛ + (1/2)μBTB + G1TG1 + K1 − C + Q.
[Figure: state trajectory in the x1–y1 plane.]
ĉ11 (0) = 1, ĉ22 (0) = 1.5, â11 (0) = 2, â22 (0) = 4, b̂11 (0) = −4,
[Figure: anti-synchronization errors of systems (3.83) and (3.85), including
e2 = x2 + y2, versus simulation time.]
As seen from the simulation results, with the help of the controller the master system
anti-synchronizes well with the slave system.
3.3.5 Conclusion
3.4.1 Introduction
literatures [9, 12, 55, 64, 68]. Nowadays, incorporating distributed time delays into
the model of DNNs to study the stability of neural networks has been an active
subject [47, 49, 51, 52, 57]. Very recently, research interest in the synchronization of
DNNs has spread widely, including complete synchronization and lag synchronization. For
example, in [14], some conditions were proposed for global synchronization of DNNs
with hybrid coupling by employing the Lyapunov functional method and Kronecker
product properties. Meanwhile, complete synchronization of neural networks based
on parameter identification and via output or state coupling was thoroughly investi-
gated in [29]. And [54] focused on realizing lag synchronization of chaotic system
based on a single controller. It should be mentioned that a propagation delay always
appears in the electronic implementation of dynamical system. So the research on
lag synchronization of DNNs is more realistic and practical. Moreover, it has been
pointed out in [53] that in hardware implementation of neural networks, the network
parameters of the system may be subjected to some changes due to the tolerances of
electronic components employed in the design. Therefore, it is significant to inves-
tigate the synchronization of neural networks with parameter uncertainties.
Thus, from the above discussion, we can see that the adaptive lag synchronization of parameter-uncertain chaotic neural networks with both discrete and distributed time delays is still a novel problem that has seldom been studied, and remains important and challenging. For example, [45] dealt with lag synchronization and parameter identification for chaotic neural networks with mixed time-varying delays and stochastic perturbation based on an adaptive feedback controller. In [43], the robust synchronization of uncertain chaotic neural networks with parameter perturbations and external disturbances was investigated by employing Lyapunov stability theory and the linear matrix inequality technique. Inspired by this recent literature and building on [43], we consider, in this section, the lag synchronization of an array of uncertain
chaotic neural networks with both discrete and distributed time-varying delays and
parameter perturbations based on adaptive control, which models a more realistic and comprehensive class of neural networks. By utilizing the Lyapunov functional method and some estimation techniques, we give several new criteria that ensure the lag synchronization of the two coupled systems under an adaptive feedback scheme. Then, an illustrative example is provided to demonstrate the effectiveness of our method.
Finally, we make a conclusion for the section.
In this section, we propose the uncertain chaotic neural network models with time-
varying parameters perturbation, which involve both discrete and distributed time-
varying delays, as follows:
\dot{x}_i(t) = -(c_i + \Delta c_{1i}(t))x_i(t) + \sum_{j=1}^{n}(a_{ij} + \Delta a_{1ij}(t)) f_j(x_j(t)) + \sum_{j=1}^{n}(b_{ij} + \Delta b_{1ij}(t)) g_j(x_j(t-\tau_1(t)))
+ \sum_{j=1}^{n}(d_{ij} + \Delta d_{1ij}(t)) \int_{t-\tau_2(t)}^{t} h_j(x_j(s))\,ds + J_i, \quad i = 1, 2, \ldots, n, \qquad (3.102)
where x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T \in \mathbb{R}^n is the state vector of the DNN; f(x(t)) = [f_1(x_1(t)), f_2(x_2(t)), \ldots, f_n(x_n(t))]^T \in \mathbb{R}^n, g(x(t-\tau_1(t))) = [g_1(x_1(t-\tau_1(t))), g_2(x_2(t-\tau_1(t))), \ldots, g_n(x_n(t-\tau_1(t)))]^T \in \mathbb{R}^n, and h(x(t)) = [h_1(x_1(t)), h_2(x_2(t)), \ldots, h_n(x_n(t))]^T \in \mathbb{R}^n are the activation functions of the neurons; C = \mathrm{diag}\{c_1, c_2, \ldots, c_n\} > 0 is a diagonal matrix whose ith entry represents the rate at which the ith unit resets its potential to the resting state in isolation when disconnected from the external inputs and the network; A = (a_{ij})_{n\times n}, B = (b_{ij})_{n\times n}, and D = (d_{ij})_{n\times n} stand for the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix, respectively; \Delta C_1(t), \Delta A_1(t), \Delta B_1(t), and \Delta D_1(t) are n \times n perturbation matrices bounded by \|\Delta C_1(t)\| \le c_1, \|\Delta A_1(t)\| \le a_1, \|\Delta B_1(t)\| \le b_1, and \|\Delta D_1(t)\| \le d_1; J = [J_1, J_2, \ldots, J_n]^T \in \mathbb{R}^n is the external input vector; \tau_1(t) and \tau_2(t) are the discrete time-varying delay and the distributed time-varying delay, respectively.
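The mixed-delay model above can be explored numerically. The following is a minimal sketch, not the book's code, that discretizes a two-neuron instance of (3.102) with a forward-Euler scheme; all parameter values are illustrative assumptions, the perturbation terms are dropped, and the distributed-delay integral is approximated by a Riemann sum over the stored state history.

```python
import numpy as np

# Sketch: forward-Euler simulation of the mixed-delay model (3.102)
# with illustrative (not book-specified) parameters. tau1 is the
# discrete delay, tau2 the distributed-delay window; the integral
# term is approximated by a Riemann sum over the stored history.
n, dt, steps = 2, 0.01, 2000
C = np.diag([1.0, 1.0])
A = np.array([[2.0, -0.1], [-5.0, 3.0]])
B = np.array([[-1.5, -0.1], [-0.2, -2.5]])
D = np.array([[-0.2, 0.1], [0.1, -0.2]])
J = np.zeros(n)
f = g = h = np.tanh
tau1, tau2 = 0.5, 1.0
d1, d2 = int(tau1 / dt), int(tau2 / dt)

hist = np.ones((steps + d2 + 1, n)) * 0.5    # constant initial history
for k in range(d2, steps + d2):
    x = hist[k]
    x_d = hist[k - d1]                       # discretely delayed state
    window = hist[k - d2:k]                  # distributed-delay window
    integral = h(window).sum(axis=0) * dt    # approximates the ds-integral
    dx = -C @ x + A @ f(x) + B @ g(x_d) + D @ integral + J
    hist[k + 1] = x + dt * dx

print(np.all(np.isfinite(hist)))
```

Since the activation functions are bounded and the linear term provides decay, the trajectory stays bounded for this step size; a production simulation would use a dedicated delay-differential-equation solver instead of plain Euler.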
Throughout this section, the following hypotheses are needed.
(H1 ) The neuron activation functions f (·), g(·) and h(·) satisfy the following
Lipschitz condition:
\|f(x) - f(y)\| \le \|\Lambda(x - y)\|,
the family of all continuous \mathbb{R}^n-valued functions \varphi_i(s) on [-\tau^*, 0] with the norm \|\varphi_i\| = \sup_{-\tau^* \le s \le 0} |\varphi_i(s)|.
Now we consider (3.102) as the master system, and similarly, the corresponding
controlled slave system is taken as
\dot{y}_i(t) = -(c_i + \Delta c_{2i}(t))y_i(t) + \sum_{j=1}^{n}(a_{ij} + \Delta a_{2ij}(t)) f_j(y_j(t)) + \sum_{j=1}^{n}(b_{ij} + \Delta b_{2ij}(t)) g_j(y_j(t-\tau_1(t)))
+ \sum_{j=1}^{n}(d_{ij} + \Delta d_{2ij}(t)) \int_{t-\tau_2(t)}^{t} h_j(y_j(s))\,ds + J_i + u_i(t) + \gamma_i, \quad i = 1, 2, \ldots, n, \qquad (3.104)
where u(t) = [u_1(t), u_2(t), \ldots, u_n(t)]^T is the adaptive feedback controller that implements the lag synchronization of the two coupled DNNs (3.103) and (3.105); \gamma \in \mathbb{R}^n is a nonlinear input; \Delta C_2(t), \Delta A_2(t), \Delta B_2(t), and \Delta D_2(t) are n \times n perturbation matrices bounded by \|\Delta C_2(t)\| \le c_2, \|\Delta A_2(t)\| \le a_2, \|\Delta B_2(t)\| \le b_2, and \|\Delta D_2(t)\| \le d_2. The initial condition of (3.104) is given by y_i(s) = \psi_i(s) \in L^2_{\mathcal{F}_0}([-\tau^*, 0]; \mathbb{R}^n), i = 1, 2, \ldots, n, -\tau^* \le s \le 0, where \tau^* = \max\{\tau_1^*, \tau_2^*\}.
To simplify the description, we introduce the following notation:
C̄(t) = ΔC2 (t) − ΔC1 (t), Ā(t) = ΔA2 (t) − ΔA1 (t),
B̄(t) = ΔB2 (t) − ΔB1 (t), D̄(t) = ΔD2 (t) − ΔD1 (t).
It follows from the above bounds that there exist positive constants c_3, a_3, b_3, d_3 such that

\|\bar{C}(t)\| \le c_3, \quad \|\bar{A}(t)\| \le a_3, \quad \|\bar{B}(t)\| \le b_3, \quad \|\bar{D}(t)\| \le d_3.
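The existence of these constants is just the triangle inequality: for example \|\bar{C}(t)\| = \|\Delta C_2(t) - \Delta C_1(t)\| \le \|\Delta C_2(t)\| + \|\Delta C_1(t)\| \le c_2 + c_1, so c_3 = c_1 + c_2 always works. A quick numerical spot-check, using perturbation shapes of the same form as in this section's later example (the 0.12 cos t and 0.15 sin t factors are illustrative):

```python
import numpy as np

# Spot-check of the triangle-inequality bound behind c3:
# ||dC2(t) - dC1(t)|| <= ||dC2(t)|| + ||dC1(t)|| <= c2 + c1,
# sampled at random times; the perturbation shapes are illustrative.
rng = np.random.default_rng(0)
c1, c2 = 0.12, 0.15
ok = True
for t in rng.uniform(0, 10, 100):
    dC1 = 0.12 * np.cos(t) * np.eye(2)
    dC2 = 0.15 * np.sin(t) * np.eye(2)
    ok &= np.linalg.norm(dC2 - dC1, 2) <= c1 + c2 + 1e-12
print(ok)
```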
Then, let e(t) = y(t) − x(t − λ) be the lag synchronization error between (3.103)
and (3.105), where λ is a propagation delay. One can derive the error dynamical
system as follows:
where e(t) = [e1 (t), e2 (t), . . . , en (t)]T , f¯(e(t)) = f (e(t)+x(t −λ))− f (x(t −λ)),
ḡ(e(t − τ1 (t))) = g(e(t − τ1 (t)) + x(t − τ1 (t) − λ)) − g(x(t − τ1 (t) − λ)) and
h̄(e(t)) = h(e(t) + x(t − λ)) − h(x(t − λ)).
Definition 3.37 The slave system (3.104) is said to be synchronized with the master system (3.102) if the error dynamical system (3.106) is globally asymptotically stable in mean square, i.e.,
We are going to derive the synchronization criteria of the uncertain chaotic neural
networks investigated in this section.
Theorem 3.38 Under the hypotheses (H1 ) ∼ (H3 ), let the controller u(t) = μ ⊗
e(t), where the coupling strength μ = (μ1 , μ2 , . . . , μn )T ∈ Rn is adaptive with the
following update law:
\gamma = -\frac{\Pi}{2\|e(t)\|^2}\, e(t), \qquad (3.109)

where

\Pi = \varepsilon_5^{-1} c_3^2\, y^T(t)y(t) + \varepsilon_6^{-1} a_3^2\, f^T(y(t))f(y(t)) + \varepsilon_7^{-1} b_3^2\, g^T(y(t-\tau_1(t)))g(y(t-\tau_1(t)))
+ \varepsilon_8^{-1} d_3^2 \Big(\int_{t-\tau_2(t)}^{t} h^T(y(s))ds\Big)\Big(\int_{t-\tau_2(t)}^{t} h(y(s))ds\Big), \qquad (3.110)
then the controlled uncertain slave system (3.105) will achieve adaptive lag syn-
chronization with the uncertain master system (3.103) in mean square.
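As a concrete reading of (3.109)–(3.110), the nonlinear input \gamma at each instant is a scaled copy of the error, with the scale built from the slave state and its history. The sketch below evaluates it once, assuming the history of y is stored so the integral in \Pi can be approximated by a Riemann sum; the scalar values match the illustrative ones used in this section's example, and the function name is hypothetical.

```python
import numpy as np

# Sketch: evaluating the nonlinear input gamma of (3.109)-(3.110) at one
# time step. y_hist holds y(s) over the window [t - tau2, t]; eps5..eps8
# and c3, a3, b3, d3 follow the illustrative scalars of the example.
f = g = h = np.tanh
eps5 = eps6 = eps7 = eps8 = 1.0
c3, a3, b3, d3 = 0.27, 0.54, 0.54, 0.54
dt = 0.01

def gamma_input(e, y, y_tau1, y_hist):
    # Riemann-sum approximation of the integral of h(y(s)) over the window
    H = h(y_hist).sum(axis=0) * dt
    Pi = (c3**2 / eps5) * y @ y \
       + (a3**2 / eps6) * f(y) @ f(y) \
       + (b3**2 / eps7) * g(y_tau1) @ g(y_tau1) \
       + (d3**2 / eps8) * H @ H
    return -Pi / (2.0 * (e @ e)) * e

e = np.array([0.3, -0.1])                  # current error e(t)
y = np.array([1.0, 0.5])                   # current slave state
y_tau1 = np.array([0.8, 0.4])              # y(t - tau1(t))
y_hist = np.tile(y, (100, 1))              # constant recent history
print(gamma_input(e, y, y_tau1, y_hist))
```

By construction \gamma^T e = -\Pi/2 \le 0, which is exactly the compensating negative term this input contributes to \dot V in the proof below.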
Proof Define the following Lyapunov functional candidate V (t, e(t)) by
V(t, e(t)) = \frac{1}{2}\Big[e^T(t)e(t) + \int_{t-\tau_1(t)}^{t} e^T(s)Pe(s)\,ds + \int_{t-\tau_2(t)}^{t}\!\!\int_{s}^{t} e^T(r)Qe(r)\,dr\,ds + \sum_{i=1}^{n}\frac{1}{\alpha_i}(\mu_i + L)^2\Big], \qquad (3.111)
3.4 Lag Synchronization of Uncertain Delayed CNN … 81
where P and Q are positive definite matrices with appropriate dimensions to be determined and L is a constant; Q is defined by

Q = \frac{(1 + \varepsilon_4^{-1} d_1^2)\tau_2^*}{1-\sigma}\,\Phi^T\Phi. \qquad (3.112)
By calculating the derivative of (3.111) along the trajectories of the error system
(3.106) and the update law (3.108), we have
e^T(t)A\bar{f}(e(t)) \le \frac{1}{2}e^T(t)AA^T e(t) + \frac{1}{2}\bar{f}^T(e(t))\bar{f}(e(t)), \qquad (3.114)

e^T(t)B\bar{g}(e(t-\tau_1(t))) \le \frac{1}{2}e^T(t)BB^T e(t) + \frac{1}{2}\bar{g}^T(e(t-\tau_1(t)))\bar{g}(e(t-\tau_1(t))), \qquad (3.115)

e^T(t)D\int_{t-\tau_2(t)}^{t}\bar{h}(e(s))ds \le \frac{1}{2}e^T(t)DD^T e(t) + \frac{1}{2}\Big(\int_{t-\tau_2(t)}^{t}\bar{h}^T(e(s))ds\Big)\Big(\int_{t-\tau_2(t)}^{t}\bar{h}(e(s))ds\Big). \qquad (3.116)
-e^T(t)\Delta C_1(t)e(t) \le \frac{\varepsilon_1}{2} e^T(t)e(t) + \frac{\varepsilon_1^{-1}}{2} c_1^2\, e^T(t)e(t), \qquad (3.117)

e^T(t)\Delta A_1(t)\bar{f}(e(t)) \le \frac{\varepsilon_2}{2} e^T(t)e(t) + \frac{\varepsilon_2^{-1}}{2} a_1^2\, \bar{f}^T(e(t))\bar{f}(e(t)), \qquad (3.118)

e^T(t)\Delta B_1(t)\bar{g}(e(t-\tau_1(t))) \le \frac{\varepsilon_3}{2} e^T(t)e(t) + \frac{\varepsilon_3^{-1}}{2} b_1^2\, \bar{g}^T(e(t-\tau_1(t)))\bar{g}(e(t-\tau_1(t))), \qquad (3.119)

e^T(t)\Delta D_1(t)\int_{t-\tau_2(t)}^{t}\bar{h}(e(s))ds \le \frac{\varepsilon_4}{2} e^T(t)e(t) + \frac{\varepsilon_4^{-1}}{2} d_1^2 \Big(\int_{t-\tau_2(t)}^{t}\bar{h}^T(e(s))ds\Big)\Big(\int_{t-\tau_2(t)}^{t}\bar{h}(e(s))ds\Big), \qquad (3.120)

-e^T(t)\bar{C}(t)y(t) \le \frac{\varepsilon_5}{2} e^T(t)e(t) + \frac{\varepsilon_5^{-1}}{2} c_3^2\, y^T(t)y(t), \qquad (3.121)

e^T(t)\bar{A}(t)f(y(t)) \le \frac{\varepsilon_6}{2} e^T(t)e(t) + \frac{\varepsilon_6^{-1}}{2} a_3^2\, f^T(y(t))f(y(t)), \qquad (3.122)

e^T(t)\bar{B}(t)g(y(t-\tau_1(t))) \le \frac{\varepsilon_7}{2} e^T(t)e(t) + \frac{\varepsilon_7^{-1}}{2} b_3^2\, g^T(y(t-\tau_1(t)))g(y(t-\tau_1(t))), \qquad (3.123)

e^T(t)\bar{D}(t)\int_{t-\tau_2(t)}^{t} h(y(s))ds \le \frac{\varepsilon_8}{2} e^T(t)e(t) + \frac{\varepsilon_8^{-1}}{2} d_3^2 \Big(\int_{t-\tau_2(t)}^{t} h^T(y(s))ds\Big)\Big(\int_{t-\tau_2(t)}^{t} h(y(s))ds\Big). \qquad (3.124)
According to the hypotheses (H1 ), (H2 ) and Lemma 1.20, the following terms can
be further estimated by
Then, substituting (3.114)–(3.127) into (3.113) and applying hypothesis (H2), we obtain
\dot{V}(t, e(t)) \le -e^T(t)Ce(t) + \frac{\varepsilon_1}{2}e^T(t)e(t) + \frac{\varepsilon_1^{-1}}{2}c_1^2\, e^T(t)e(t) + \frac{1}{2}e^T(t)AA^T e(t)
+ \frac{1}{2}e^T(t)\Lambda^T\Lambda e(t) + \frac{\varepsilon_2}{2}e^T(t)e(t) + \frac{\varepsilon_2^{-1}}{2}a_1^2\, e^T(t)\Lambda^T\Lambda e(t) + \frac{1}{2}e^T(t)BB^T e(t)
+ \frac{1}{2}e^T(t-\tau_1(t))\Omega^T\Omega e(t-\tau_1(t)) + \frac{\varepsilon_3}{2}e^T(t)e(t) + \frac{\varepsilon_3^{-1}}{2}b_1^2\, e^T(t-\tau_1(t))\Omega^T\Omega e(t-\tau_1(t))
+ \frac{1}{2}e^T(t)DD^T e(t) + \frac{1}{2}\tau_2^* \int_{t-\tau_2(t)}^{t} e^T(s)\Phi^T\Phi e(s)ds + \frac{\varepsilon_4}{2}e^T(t)e(t)
+ \frac{\varepsilon_4^{-1}}{2}d_1^2 \tau_2^* \int_{t-\tau_2(t)}^{t} e^T(s)\Phi^T\Phi e(s)ds + \frac{\varepsilon_5}{2}e^T(t)e(t) + \frac{\varepsilon_5^{-1}}{2}c_3^2\, y^T(t)y(t) + \frac{\varepsilon_6}{2}e^T(t)e(t)
+ \frac{\varepsilon_6^{-1}}{2}a_3^2\, f^T(y(t))f(y(t)) + \frac{\varepsilon_7}{2}e^T(t)e(t) + \frac{\varepsilon_7^{-1}}{2}b_3^2\, g^T(y(t-\tau_1(t)))g(y(t-\tau_1(t)))
+ \frac{\varepsilon_8}{2}e^T(t)e(t) + \frac{\varepsilon_8^{-1}}{2}d_3^2 \Big(\int_{t-\tau_2(t)}^{t} h^T(y(s))ds\Big)\Big(\int_{t-\tau_2(t)}^{t} h(y(s))ds\Big) + e^T(t)\gamma + \frac{1}{2}e^T(t)Pe(t)
- \frac{1-\delta}{2}e^T(t-\tau_1(t))Pe(t-\tau_1(t)) + \frac{1}{2}\tau_2^*\, e^T(t)Qe(t)
- \frac{1-\sigma}{2}\int_{t-\tau_2(t)}^{t} e^T(s)Qe(s)ds - L\,e^T(t)e(t). \qquad (3.128)
By taking

P = \frac{1}{1-\delta}\,\lambda_{\max}\big((1 + \varepsilon_3^{-1} b_1^2)\Omega^T\Omega\big) I, \qquad (3.130)

L = \lambda_{\max}(-C) + \frac{1}{2}(\varepsilon_1 + \varepsilon_2 + \varepsilon_3 + \varepsilon_4 + \varepsilon_5 + \varepsilon_6 + \varepsilon_7 + \varepsilon_8) + \frac{\varepsilon_1^{-1}}{2}c_1^2 + \frac{1}{2}\lambda_{\max}(AA^T)
+ \frac{1 + \varepsilon_2^{-1} a_1^2}{2}\lambda_{\max}(\Lambda^T\Lambda) + \frac{1}{2}\lambda_{\max}(BB^T) + \frac{1}{2}\lambda_{\max}(DD^T)
+ \frac{1}{2(1-\delta)}\lambda_{\max}\big((1 + \varepsilon_3^{-1} b_1^2)\Omega^T\Omega\big) + \lambda_{\max}\Big(\frac{\tau_2^*}{2}Q\Big) + 1, \qquad (3.131)
It is obvious that \dot{V}(t, e(t)) = 0 if and only if e(t) = 0, and otherwise \dot{V}(t, e(t)) < 0. So, based on the well-known invariance principle of functional differential equations, the orbit of system (3.106), starting from an arbitrary initial value, converges asymptotically as t \to \infty to the largest invariant set \Xi contained in \{\dot{V}(t, e(t)) = 0\},
Remark 3.39 It can be seen from the definition of the nonlinear input \gamma in (3.109) that \gamma \to \infty as e(t) \to 0, which is not realistic for practical applications. In order to prevent \gamma from approaching infinity, as proposed in [16], we replace \gamma with \hat{\gamma}, such that

\hat{\gamma} = \begin{cases} \gamma, & \|e(t)\| \ge \zeta, \\ -\dfrac{\Pi}{2\zeta^2} P^{-1} e(t), & \|e(t)\| < \zeta, \end{cases} \qquad (3.133)
where ζ is an adjustable parameter. Thus, the two uncertain chaotic DNNs (3.102)
and (3.104) will achieve synchronization within finite accuracy since the error is
bounded.
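The saturation rule (3.133) can be read as: use the raw input while the error is above the threshold \zeta, and below it freeze the denominator at 2\zeta^2 so the input stays bounded. A minimal sketch, taking P as the identity purely for illustration:

```python
import numpy as np

# Sketch of the saturation rule (3.133): the raw input gamma blows up as
# ||e|| -> 0, so below the threshold zeta the denominator is frozen at
# 2*zeta^2. P is taken as the identity here purely for illustration.
def gamma_hat(e, Pi, zeta, P_inv=None):
    if P_inv is None:
        P_inv = np.eye(len(e))
    if np.linalg.norm(e) >= zeta:
        return -Pi / (2.0 * (e @ e)) * e        # raw input (3.109)
    return -Pi / (2.0 * zeta**2) * (P_inv @ e)  # saturated branch

zeta, Pi = 0.05, 1.0
big = gamma_hat(np.array([1.0, 0.0]), Pi, zeta)     # ||e|| >= zeta branch
small = gamma_hat(np.array([1e-6, 0.0]), Pi, zeta)  # saturated branch
print(big, small)
```

On the saturated branch the magnitude of \hat{\gamma} shrinks linearly with \|e\| instead of growing, which is why the synchronization accuracy becomes finite rather than exact.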
Corollary 3.40 Under the hypotheses (H1 ) ∼ (H3 ), let the controller u(t) = μ ⊗
e(t), where the coupling strength μ = (μ1 , μ2 , . . . , μn )T ∈ Rn is adaptive with the
following update law:
\gamma = -\frac{\Pi}{2\|e(t)\|^2}\, e(t), \qquad (3.135)
where
then the controlled uncertain slave system (3.105) will achieve adaptive lag synchro-
nization with the uncertain master system (3.103) in mean square.
Proof Let εi = 1(i = 1, 2, . . . , 8). From the proof of Theorem 3.38, Corollary 3.40
can be obtained immediately.
If the two uncertain chaotic neural networks (3.103) and (3.105) have no distributed time-varying delays, then the following corollary follows directly. Consider the master system (3.137) and the slave system (3.138):
Corollary 3.41 Under the hypotheses (H1 ) ∼ (H3 ), let the controller u(t) = μ ⊗
e(t), where the coupling strength μ = (μ1 , μ2 , . . . , μn )T ∈ Rn is adaptive with the
following update law:
\gamma = -\frac{\Pi}{2\|e(t)\|^2}\, e(t), \qquad (3.140)

where

\Pi = \varepsilon_5^{-1} c_3^2\, y^T(t)y(t) + \varepsilon_6^{-1} a_3^2\, f^T(y(t))f(y(t)) + \varepsilon_7^{-1} b_3^2\, g^T(y(t-\tau_1(t)))g(y(t-\tau_1(t))), \qquad (3.141)
then the controlled uncertain slave system (3.138) will achieve adaptive lag synchro-
nization with the uncertain master system (3.137) in mean square.
The rest of the proof is similar to that of Theorem 3.38 and is hence omitted here.
If the two chaotic neural networks (3.103) and (3.105) have no parameter perturbations, then the following corollary can be obtained. Consider the master system (3.143) and the slave system (3.144):
\dot{x}(t) = -Cx(t) + Af(x(t)) + Bg(x(t-\tau_1(t))) + D\int_{t-\tau_2(t)}^{t} h(x(s))\,ds + J, \qquad (3.143)

\dot{y}(t) = -Cy(t) + Af(y(t)) + Bg(y(t-\tau_1(t))) + D\int_{t-\tau_2(t)}^{t} h(y(s))\,ds + J + u(t). \qquad (3.144)
Corollary 3.42 Under the hypotheses (H1 ) ∼ (H3 ), let the controller u(t) = μ ⊗
e(t), where the coupling strength μ = (μ1 , μ2 , . . . , μn )T ∈ Rn is adaptive with the
following update law:
In this section, our main purpose is to verify the global asymptotic stability of the error dynamical system (3.106). An example is presented to illustrate the effectiveness of our results.
Example
Consider the following uncertain chaotic DNNs with discrete and distributed time-
varying delays:
with

C = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad A = \begin{pmatrix} 2.1 & -0.12 \\ -5.1 & 3.1 \end{pmatrix}, \quad B = \begin{pmatrix} -1.6 & -0.1 \\ -0.2 & -2.4 \end{pmatrix}, \quad D = \begin{pmatrix} -9.3 & 5.0 \\ 6.1 & -2.1 \end{pmatrix},

\Delta C_1(t) = 0.12\cos(t)\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad \Delta A_1(t) = \Delta B_1(t) = \Delta D_1(t) = 0.12\cos(t)\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix},

J = [0, 0]^T, \tau_1(t) = \frac{e^t}{e^t + 1}, \tau_2(t) = 1, and x(t) = [x_1(t), x_2(t)]^T, f(x(t)) = g(x(t)) = h(x(t)) = [\tanh(x_1(t)), \tanh(x_2(t))]^T.
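The master system of this example can be reproduced numerically. The sketch below, an assumption-laden stand-in for the book's MATLAB simulation, integrates it with forward Euler using the matrices and perturbations above; for simplicity the time-varying delay \tau_1(t) = e^t/(e^t+1) is frozen at its limit 1, and the step size is an illustrative choice.

```python
import numpy as np

# Sketch: forward-Euler simulation of the example master system with
# the matrices C, A, B, D and perturbations given in the text;
# tau1(t) = e^t/(e^t+1) is frozen at its limit 1, tau2(t) = 1.
dt, T = 0.001, 10.0
steps = int(T / dt)
C = np.eye(2)
A = np.array([[2.1, -0.12], [-5.1, 3.1]])
B = np.array([[-1.6, -0.1], [-0.2, -2.4]])
D = np.array([[-9.3, 5.0], [6.1, -2.1]])
ones = np.ones((2, 2))
act = np.tanh
d1 = int(1.0 / dt)          # tau1 ~ 1 for large t
d2 = int(1.0 / dt)          # tau2(t) = 1

x = np.zeros((steps + d2 + 1, 2))
x[:d2 + 1] = [1.1, 1.2]     # constant initial condition on [-1, 0]
for k in range(d2, steps + d2):
    t = (k - d2) * dt
    dC = 0.12 * np.cos(t) * np.eye(2)
    dA = dB = dD = 0.12 * np.cos(t) * ones
    integral = act(x[k - d2:k]).sum(axis=0) * dt   # distributed-delay term
    dx = (-(C + dC) @ x[k] + (A + dA) @ act(x[k])
          + (B + dB) @ act(x[k - d1]) + (D + dD) @ integral)
    x[k + 1] = x[k] + dt * dx
print(np.all(np.isfinite(x)))
```

Because the nonlinearities are bounded and the linear part is dissipative, the trajectory remains bounded, consistent with the chaotic but bounded behavior the text describes.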
The corresponding slave system with controller and nonlinear input is described
as follows:
\dot{y}(t) = -(C + \Delta C_2(t))y(t) + (A + \Delta A_2(t))f(y(t)) + (B + \Delta B_2(t))g(y(t-\tau_1(t)))
+ (D + \Delta D_2(t))\int_{t-\tau_2(t)}^{t} h(y(s))\,ds + J + u(t) + \gamma, \qquad (3.147)
where \Delta C_2(t) = 0.15\sin(t)\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \Delta A_2(t) = \Delta B_2(t) = \Delta D_2(t) = 0.15\sin(t)\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, and u(t) and \gamma are defined as in Theorem 3.38. Based on the above description, let the arbitrary initial states of the two coupled uncertain chaotic DNNs be x_1(t) = 1.1, x_2(t) = 1.2; y_1(t) = -0.2, y_2(t) = 2.3, for all t \in [-1, 0]. The resulting numerical simulations are shown in Figs. 3.6 and 3.7.
In the simulations, the initial conditions of the adaptive feedback strength are taken as [\mu_1(0), \mu_2(0)]^T = [2.3, 1.2]^T, and \alpha_i = 30. According to (3.133), we bound the synchronization error by \|e(t)\| \le 0.05. The propagation delay is \lambda = 0.2, and the positive scalars are \varepsilon_i = 1 (i = 1, 2, \ldots, 8), c_3 = 0.27, a_3 = b_3 = d_3 = 0.54.

[Figures 3.6 and 3.7: trajectories of the master and slave states and the lag synchronization errors over t \in [0, 10].]
The simulation results can be summarized as follows. Figures 3.6 and 3.7 depict the adaptive lag synchronization between (3.146) and (3.147). Thus, from these simulations, one can conclude that lag synchronization of uncertain chaotic neural networks with mixed time-varying delays is realized via the adaptive feedback scheme and the appropriate nonlinear input.
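The quantity actually plotted in such figures is the lag synchronization error e(t) = y(t) - x(t - \lambda). The following sketch shows the index bookkeeping for measuring it from sampled trajectories; the master trajectory and the decaying transient are synthetic stand-ins, not simulation output from the book.

```python
import numpy as np

# Sketch: measuring the lag synchronization error e(t) = y(t) - x(t - lam)
# from sampled trajectories. y is constructed as an exact lagged copy of x
# plus a decaying transient, purely to illustrate the bookkeeping;
# lam = 0.2 matches the example's propagation delay.
dt, lam = 0.01, 0.2
shift = int(lam / dt)
t = np.arange(0, 10, dt)
x = np.sin(t)                                  # stand-in master trajectory
y = np.empty_like(x)
y[shift:] = x[:-shift] + np.exp(-t[shift:])    # lagged copy + transient
y[:shift] = x[0]
err = y[shift:] - x[:-shift]                   # e(t) = y(t) - x(t - lam)
print(abs(err[-1]) < 0.05)
```

With a decaying transient the final error falls well inside the \|e(t)\| \le 0.05 accuracy bound chosen above.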
3.4.5 Conclusion
The lag synchronization problem of uncertain chaotic DNNs has been thoroughly investigated via an adaptive feedback control scheme in this section. By employing Lyapunov-Krasovskii stability theory and some estimation methods, several novel sufficient conditions have been obtained to ensure the synchronization. In particular, both discrete and distributed time-varying delays have been introduced to model a more practical situation, and the corresponding numerical simulations have validated the feasibility of the proposed technique. It is believed that the results provide some practical guidelines for applications in this area.
References
12. J. Cao, P. Li, W. Wang, Global synchronization in arrays of delayed neural networks with
constant and delayed coupling. Phys. Lett. A 353(4), 318–325 (2006)
13. J. Cao, Z. Wang, Y. Sun, Synchronization in an array of linearly stochastically coupled networks
with time-delays. Phys. A: Stat. Mech. Appl. 385(2), 718–728 (2007)
14. J. Cao, G. Chen, P. Li, Global synchronization in an array of delayed neural networks with
hybrid coupling. IEEE Trans. Syst. Man Cybern. Part B: Cybern. 38(2), 488–498 (2008)
15. S. Celikovsky, V. Lynnyk, Efficient chaos shift keying method based on the second error
derivative anti-synchronization detection, in IEEE International Conference on Control and
Automation (2009), pp. 530–535
16. F. Chen, W. Zhang, LMI criteria for robust chaos synchronization of a class of chaotic systems.
Nonlinear Anal. Theory Methods Appl. 67(12), 3384–3393 (2007)
17. G.R. Chen, J. Zhou, Z.R. Liu, Global synchronization of coupled delayed neural networks and
applications to chaotic CNN model. Int. J. Bifurc. Chaos 14(7), 2229–2240 (2004)
18. A. Dembo, O. Farotimi, T. Kailath, High-order absolutely stable neural networks. IEEE Trans.
Circuits Syst. 38(1), 57–65 (1991)
19. A. Friedman, Stochastic Differential Equations and Their Applications (Academic Press, New
York, 1976)
20. J. Hale, Theory of Functional Differential Equations (Springer, New York, 1977)
21. Y. He, Q. Wang, M. Wu, C. Lin, Delay-dependent state estimation for delayed neural networks.
IEEE Trans. Neural Netw. 17(4), 1077–1081 (2006)
22. H. Huang, D.W.C. Ho, J. Lam, Stochastic stability analysis of fuzzy Hopfield neural networks
with time-varying delays. IEEE Trans. Circuits Syst.: Part II 52(5), 251–255 (2005)
23. H. Huang, D.W.C. Ho, Y. Qu, Robust stability of stochastic delayed additive neural networks
with Markovian switching. Neural Netw. 20(7), 799–809 (2007)
24. G. Joya, M. Atencia, F. Sandoval, Hopfield neural networks for optimization: study of the
different dynamics. Neurocomputing 43(1), 219–237 (2002)
25. N.B. Karayiannis, A.N. Venetsanopoulos, On the training and performance of high-order neural
networks. Math. Biosci. 129(2), 143–168 (1995)
26. W. Li, T. Lee, Hopfield neural networks for affine invariant matching. IEEE Trans. Neural
Netw. 12(6), 1400–1410 (2001)
27. X. Liao, G. Chen, E.N. Sanchez, Delay dependent exponential stability analysis of delayed
neural networks: an LMI approach. Neural Netw. 15(7), 855–866 (2002)
28. M. Li, W. Zhou, H. Wang, Y. Chen, R. Lu, H. Lu, Delay-dependent robust H∞ control for
uncertain stochastic systems, in Proceedings of the 17th World Congress of the International
Federation of Automatic Control, vol. 17 (2008), pp. 6004–6009
29. X. Lou, B. Cui, Synchronization of neural networks based on parameter identification and via
output or state coupling. J. Comput. Appl. Math. 222(2), 440–457 (2008)
30. H. Lu, Comments on “a generalized LMI-based approach to the global asymptotic stability of
delayed cellular neural networks”. IEEE Trans. Neural Netw. 16(3), 778–779 (2005)
31. W. Lu, T. Chen, Synchronization of coupled connected neural networks with delays. IEEE
Trans. Circuits Syst. I. 51(12), 2491–2503 (2004)
32. P. Lu, Y. Yang, Global asymptotic stability of a class of complex networks via decentralised
static output feedback control. IET Control Theory Appl. 4(11), 2463–2470 (2010)
33. J. Lv, X. Yu, G. Chen, Chaos synchronization of general complex dynamical networks. Phys.
A 334(1–2), 281–302 (2004)
34. X. Mao, Stochastic Differential Equations and Their Applications (Horwood, Chichester, 1997)
35. L.M. Pecora, T.L. Carroll, Synchronization in chaotic systems. Phys. Rev. Lett. 64(8), 821–824
(1990)
36. D. Psaltis, C. Park, J. Hong, Higher order associative memories and their optical implementa-
tions. Neural Netw. 1(2), 143–163 (1988)
37. F. Ren, J. Cao, LMI-based criteria for stability of high-order neural networks with time-varying
delay. Nonlinear Anal. Ser. B: Real World Appl. 7(5), 967–979 (2006)
38. F. Ren, J. Cao, Anti-synchronization of stochastic perturbed delayed chaotic neural networks.
Neural Comput. Appl. 18(5), 515–521 (2009)
39. M.G. Rosenblum, A.S. Pikovsky, J. Kurths, Phase synchronization in driven and coupled
chaotic oscillators. IEEE Trans. Circuits Syst. 44(10), 874–881 (1997)
40. S. Ruan, R. Filfil, Dynamics of a two-neuron system with discrete and distributed delays. Phys.
D 191(3), 323–342 (2004)
41. A.N. Ruiz Oliveras, F.R. Pisarchik, Optical chaotic communication using generalized and
complete synchronization. IEEE J. Quantum Electron. 46(3), 279–284 (2010)
42. L. Sheng, M. Gao, Adaptive hybrid lag projective synchronization of unified chaotic systems,
in Proceedings of the 29th Chinese Control Conference (2010), pp. 2097–2101
43. L. Sheng, H. Yang, Robust synchronization of a class of uncertain chaotic neural networks, in
7th World Congress on Intelligent Control and Automation (2008), pp. 4614–4618
44. S.H. Strogatz, Exploring complex networks. Nature 410(6825), 268–276 (2001)
45. Y. Tang, R. Qiu, J. Fang, Q. Miao, M. Xia, Adaptive lag synchronization in unknown stochas-
tic chaotic neural networks with discrete and distributed time-varying delays. Phys. Lett. A
372(24), 4425–4433 (2008)
46. W. Wang, J. Cao, Synchronization in an array of linearly coupled networks with time-varying
delay. Phys. A 366, 197–211 (2006)
47. Z. Wang, Y. Liu, X. Liu, On global asymptotic stability of neural networks with discrete and
distributed delays. Phys. Lett. A 345(4), 299–308 (2005)
48. Z. Wang, Y. Liu, F. Karl, X. Liu, Stochastic stability of uncertain Hopfield neural networks
with discrete and distributed delays. Phys. Lett. A 354(4), 288–297 (2006)
49. Z. Wang, Y. Liu, L. Liu, X. Liu, Exponential stability of delayed recurrent neural networks
with Markovian jumping parameters. Phys. Lett. A 356(4), 346–352 (2006)
50. L. Wan, J. Sun, Mean square exponential stability of stochastic delayed Hopfield neural net-
works. Phys. Lett. A 343(4), 306–318 (2005)
51. Z. Wang, H. Shu, J. Fang, X. Liu, Robust stability for stochastic Hopfield neural networks with
time delays. Nonlinear Anal. Real World Appl. 7(5), 1119–1128 (2006)
52. Z. Wang, H. Shu, Y. Liu, D.W.C. Ho, X. Liu, Robust stability analysis of generalized neural
networks with discrete and distributed time delays. Chaos Solitons Fractals 30(4), 886–896
(2006)
53. Z. Wang, S. Lauria, J. Fang, X. Liu, Exponential stability of uncertain stochastic neural networks
with mixed time-delays. Chaos Solitons Fractals 32(1), 62–72 (2007)
54. D. Wang, Y. Zhong, S. Chen, Lag synchronizing chaotic system based on a single controller.
Commun. Nonlinear Sci. Numer. Simul. 13(3), 637–644 (2008)
55. K. Wang, Z. Teng, H. Jiang, Adaptive synchronization of neural networks with time-varying
delay and distributed delay. Phys. A: Stat. Mech. Appl. 387(2–3), 631–642 (2008)
56. L. Wang, W. Liu, H. Shi, Noise chaotic neural networks with variable thresholds for the fre-
quency assignment problem in satellite communications. IEEE Trans. Syst. Man Cybern. Part
C: Appl. Rev. 38(2), 209–217 (2008)
57. Z. Wang, J. Fang, X. Liu, Global stability of stochastic high-order neural networks with discrete
and distributed delays. Chaos Solitons Fractals 36(2), 388–396 (2008)
58. Z. Wu, H. Su, J. Chu, W. Zhou, Improved result on stability analysis of discrete stochastic
neural networks with time delay. Phys. Lett. A 373(17), 1546–1552 (2009)
59. Z. Wu, H. Su, J. Chu, W. Zhou, New results on robust exponential stability for discrete recurrent
neural networks with time-varying delays. Neurocomputing 72(13), 3337–3342 (2009)
60. L. Xie, Output feedback H∞ control of systems with parameter uncertainty. Int. J. Control
63(4), 741–750 (1996)
61. Y. Xu, S. He, Fourier series chaotic neural networks, in Advanced Intelligent Computing Theo-
ries and Applications. With Aspects of Contemporary Intelligent Computing Techniques (2008),
pp. 84–91
62. L. Yan, L. Wang, Applications of transiently chaotic neural networks to image restoration, in
Proceedings of the 2003 International Conference on Neural Networks and Signal Processing,
vol. 1 (2003), pp. 265–269
63. S. Yong, P. Scott, N. Nasrabadi, Object recognition using multilayer Hopfield neural network.
IEEE Trans. Image Process. 6(3), 357–372 (1997)
64. W. Yu, J. Cao, Synchronization control of stochastic delayed neural networks. Phys. A 373,
252–260 (2007)
65. H. Zhao, Existence and global attractivity of almost periodic solution for cellular neural network
with distributed delays. Appl. Math. Comput. 154(3), 683–695 (2004)
66. Y. Zhang, Z. He, A secure communication scheme based on cellular neural networks, in Pro-
ceedings of the IEEE International Conference on Intelligent Process Systems, vol. 1 (1997),
pp. 521–524
67. Q. Zhang, X. Wen, J. Xu, Delay-dependent exponential stability of cellular neural networks
with time-varying delays. Chaos Solitons Fractals 23(4), 1363–1369 (2005)
68. W. Zhou, Y. Xu, H. Lu, L. Pan, On dynamics analysis of a new chaotic attractor. Phys. Lett. A
372(36), 5773–5777 (2008)
69. W. Zhou, H. Lu, C. Duan, Exponential stability of hybrid stochastic neural networks with mixed
time delays and nonlinearity. Neurocomputing 72(13), 3357–3365 (2009)
Chapter 4
Adaptive Synchronization of Neural Networks
The adaptive control strategy has been widely adopted due to its good performance for uncertain systems such as stochastic or nonlinear systems. In this chapter, adaptive control is designed for the synchronization of several kinds of neural networks, including BAM DNNs, SDNNs with Markovian switching, and T-S fuzzy NNs.
4.1.1 Introduction
The Bidirectional Associative Memory (BAM for short) neural network has gradually found wide application in pattern recognition, artificial intelligence, prediction, and control, owing to its capacity for associative memory. Since Bart Kosko [16] put forward the model in 1987, the BAM neural network has attracted much attention from researchers [12, 17, 32, 57] at home and abroad. Because current information is often stored in a distributed manner, bidirectional BAM neural networks, combined with an appropriate encoding method, can undoubtedly enhance the efficiency of effective association. Fen Wang [35] and his team studied the existence and stability of periodic solutions of BAM neural networks; Zhigang Liu [22] investigated the global attractor of BAM neural networks; Hongjun Xiang [51] solved the exponential stability problem of fuzzy BAM neural networks; and Xingyuan Wang and his colleagues [38] made great contributions to the adaptive synchronization of delayed neural networks. Because of the BAM neural network's sensitivity to initial values and the time-delay phenomenon inherent in neural networks, the overall characteristics of the network change markedly if small dissimilarities occur in the network parameters. The synchronization and parameter identification problems of BAM neural networks are therefore particularly important to study.
The BAM network is a very classic model [20]. Since this section is based on the BAM model, we first present it. The mathematical model of the delayed BAM neural network is described by the following delayed differential equations:
\dot{u}_i(t) = -c_i u_i(t) + \sum_{j=1}^{m} a_{ij} f_j(v_j(t-\tau_{ij})) + I_i, \quad i = 1, 2, \ldots, n,

\dot{v}_j(t) = -d_j v_j(t) + \sum_{i=1}^{n} b_{ji} g_i(u_i(t-\sigma_{ji})) + J_j, \quad j = 1, 2, \ldots, m. \qquad (4.1)
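The two-layer structure of (4.1) — each layer driven by the delayed output of the other — can be made concrete with a small simulation. The sketch below is a minimal stand-in with n = m = 2, a single common delay for all connections, and illustrative parameter values; the section's chaos-generating parameter sets are not reproduced here.

```python
import numpy as np

# Sketch: forward-Euler simulation of the BAM model (4.1) for n = m = 2,
# with one common delay tau for all connections and illustrative
# parameters (assumptions, not the section's values).
dt, steps, tau = 0.01, 2000, 0.5
d = int(tau / dt)
c = np.array([1.0, 1.0])
dcoef = np.array([1.0, 1.0])
A = np.array([[2.0, -0.1], [-5.0, 3.0]])       # a_ij, n x m
Bm = np.array([[1.5, -0.1], [-0.2, -2.5]])     # b_ji, m x n
I = J = np.zeros(2)
f = g = np.tanh

u = np.ones((steps + d + 1, 2)) * 0.2          # u history
v = np.ones((steps + d + 1, 2)) * -0.1         # v history
for k in range(d, steps + d):
    du = -c * u[k] + A @ f(v[k - d]) + I       # u-layer driven by delayed v
    dv = -dcoef * v[k] + Bm @ g(u[k - d]) + J  # v-layer driven by delayed u
    u[k + 1] = u[k] + dt * du
    v[k + 1] = v[k] + dt * dv
print(np.all(np.isfinite(u)) and np.all(np.isfinite(v)))
```

The cross-coupling through the bounded activations is what allows the two layers to reinforce each other's associations while the linear terms keep the states bounded.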
where C = \mathrm{diag}\{c_1, c_2, \ldots, c_n\} and D = \mathrm{diag}\{d_1, d_2, \ldots, d_m\}; u = (u_1, u_2, \ldots, u_n)^T and v = (v_1, v_2, \ldots, v_m)^T are the neuron states; A = (a_{ij})_{n\times m} and B = (b_{ji})_{m\times n} are the connection weight matrices between the neurons; I and J are the external inputs; and

f(u) = (f_1(u_1), f_2(u_2), \ldots, f_n(u_n))^T

denotes the neuron activation function. Studies show that, with a reasonable selection of the parameters C, A, D, B and the delay parameters, the system exhibits a certain amount of chaos [24]. In order to better observe the chaotic characteristics of the BAM neural network and the mutual relations of its two equations, we construct a slave system from each of the two equations; the two slave systems, coupled to each other through the activation functions, are as follows:
\dot{x}(t) = -C_1 x(t) + A_1 f(y(t-\tau)) + I_1 + K_1,
\dot{y}(t) = -D_1 y(t) + B_1 g(x(t-\sigma)) + J_1 + K_2. \qquad (4.3)
Here A_1 f(y(t-\tau)) and B_1 g(x(t-\sigma)) are the interaction functions of the two subsystems, and K_1 and K_2 are the controller vectors serving as external control inputs of the respective equations. Comparing Eq. (4.2) with Eq. (4.3), it is not difficult to find that the two pairs of equations are similar, but the parameters A, B, C, D are unknown and need to be identified. Therefore, the problem of determining the master-slave system parameters turns into that of designing a proper controller.
The controllers are functions of time:

K_1 = K_1(u, x, A_1, C_1, I_1, t), \quad K_2 = K_2(v, y, B_1, D_1, J_1, t).
Definition 4.4 The parameters are said to be identified if the following conditions hold:
4.1 Projective Synchronization of BAM Self-Adaptive DNN … 97
\lim_{t\to\infty} \|A_1(t) - A\| = 0, \quad \lim_{t\to\infty} \|B_1(t) - B\| = 0, \quad \lim_{t\to\infty} \|C_1(t) - C\| = 0, \quad \lim_{t\to\infty} \|D_1(t) - D\| = 0.
Definition 4.5 The error system between the master and slave systems is defined as

e_1(t) = x(t) - H_1 u(t), \quad e_2(t) = y(t) - H_2 v(t), \qquad (4.4)

where H_1 = \mathrm{diag}\{h_{11}, h_{12}, \ldots, h_{1n}\} and H_2 = \mathrm{diag}\{h_{21}, h_{22}, \ldots, h_{2n}\}.
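Definition (4.4) says the slave state should converge to a scaled copy of the master state, with per-component scales on the diagonals of H_1 and H_2. A tiny sketch of the bookkeeping, using the scaling matrices from this section's later example (the state values are illustrative):

```python
import numpy as np

# Sketch: projective synchronization errors of (4.4), e1 = x - H1 u and
# e2 = y - H2 v, with the example's scaling matrices H1 = diag{2, -1.5}
# and H2 = diag{1, -1}. State values are illustrative.
H1 = np.diag([2.0, -1.5])
H2 = np.diag([1.0, -1.0])
u = np.array([0.4, 0.2])           # master states
v = np.array([0.1, -0.3])
x = H1 @ u                         # slave state exactly on the target orbit
y = H2 @ v + np.array([0.05, 0])   # slave state slightly off it
e1 = x - H1 @ u
e2 = y - H2 @ v
print(e1, e2)
```

A negative diagonal entry (e.g. -1.5) means the corresponding slave component tracks the master component mirrored and rescaled, which is what distinguishes projective synchronization from complete synchronization.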
Substituting system (4.3) into (4.4), we obtain the equivalent error system

\dot{e}_1(t) = -C_1 x(t) + A_1 f(y(t-\tau)) + I_1 + K_1 + H_1 C u(t) - H_1 A f(v(t-\tau)) - H_1 I,

\dot{e}_2(t) = -D_1 y(t) + B_1 g(x(t-\sigma)) + J_1 + K_2 + H_2 D v(t) - H_2 B g(u(t-\sigma)) - H_2 J. \qquad (4.5)
Based on the above analysis and the corresponding theoretical basis, we design the projective synchronization controller according to the characteristics of BAM neural networks as follows:

1. Projective synchronization controller:

K_1(t) = -e_1(t) + [H_1 f(v(t-\tau)) - f(y(t-\tau))]A_1 + [x(t) - H_1 u(t)]C_1 + (H_1 - 1)I_1,

K_2(t) = -e_2(t) + [H_2 g(u(t-\sigma)) - g(x(t-\sigma))]B_1 + [y(t) - H_2 v(t)]D_1 + (H_2 - 1)J_1. \qquad (4.6)
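One evaluation of the first controller in (4.6) can be sketched as below. This is an assumption-laden illustration: all numerical values are hypothetical, the matrix products are written with the estimated matrices applied to the bracketed vectors for dimensional consistency (the text prints the reverse order), and the scalar "1" in (H_1 - 1) is read as the identity matrix.

```python
import numpy as np

# Sketch: one evaluation of the first projective controller of (4.6).
# A1, C1, I1 play the roles of the current parameter estimates; the
# bracketed terms are correction signals built from measurable states.
n = 2
H1 = np.diag([2.0, -1.5])
A1 = np.array([[2.0, -0.1], [-5.0, 3.0]])   # current estimate of A
C1 = np.diag([1.0, 1.0])                    # current estimate of C
I1 = np.array([0.1, 0.1])
f = np.tanh

def K1(e1, x, u, y_tau, v_tau):
    return (-e1                                   # error feedback
            + A1 @ (H1 @ f(v_tau) - f(y_tau))     # delayed-coupling correction
            + C1 @ (x - H1 @ u)                   # state-mismatch correction
            + (H1 - np.eye(n)) @ I1)              # input-scaling correction

e1 = np.array([0.2, -0.1])
x, u = np.array([0.5, 0.3]), np.array([0.25, -0.2])
y_tau, v_tau = np.array([0.4, 0.1]), np.array([0.2, 0.05])
print(K1(e1, x, u, y_tau, v_tau))
```

Each bracketed term vanishes on the target orbit x = H_1 u with identified parameters, so the controller effort goes to zero once projective synchronization is reached.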
Theorem 4.6 If the controller of the slave system is chosen as in (4.6) and the parameter identification rules of the slave system are chosen as in (4.8), then the trajectory of the slave system (4.3) and that of the master system (4.2) reach synchronization starting from any initial value.
\dot{V}_1 = \frac{1}{2}\{\dot{e}_1^T P_1 e_1 + e_1^T P_1 \dot{e}_1 + [\dot{A}_1(t)]^T R[A_1(t) - A] + [A_1(t) - A]^T R \dot{A}_1(t)
+ [\dot{C}_1(t)]^T S[C_1(t) - C] + [C_1(t) - C]^T S \dot{C}_1(t)\}
= e_1^T(t) P_1 \dot{e}_1(t) + [\dot{A}_1(t)]^T R[A_1(t) - A] + [\dot{C}_1(t)]^T S[C_1(t) - C]. \qquad (4.10)
and

\lim_{t\to\infty} \|A_1(t) - A\| = 0, \quad \lim_{t\to\infty} \|C_1(t) - C\| = 0, \quad \lim_{t\to\infty} \|x(t) - H_1 u(t)\| = 0.
A similar derivation for the second subsystem yields
\lim_{t\to\infty} (x(t) - H_1 u(t)) = 0, \quad \lim_{t\to\infty} (y(t) - H_2 v(t)) = 0,
and

\lim_{t\to\infty} \|A_1(t) - A\| = 0, \quad \lim_{t\to\infty} \|B_1(t) - B\| = 0, \quad \lim_{t\to\infty} \|C_1(t) - C\| = 0, \quad \lim_{t\to\infty} \|D_1(t) - D\| = 0.
Remark 4.7 It can be proved that the trajectory of the slave system and that of the master system reach orbital synchronization no matter where the initial values start from.
Remark 4.8 Many papers discuss the projective synchronization problem, but only a few of them address how to solve this problem for the BAM network and how to handle the accompanying parameter identification problem. The preceding proof provides answers to both questions.
H1 = diag{2, −1.5}, H2 = diag{1, −1}. The initial value of the slave system is as follows:

  (x1(t) x2(t) y1(t) y2(t))ᵀ = (0.25 0.12 0.36 0.55)ᵀ.

The system error trajectories for this initial value are shown in Fig. 4.1. Based on these parameters and the MATLAB numerical simulation, the evolution of the four errors e12(t), e11(t), e21(t), e22(t) can be obtained as shown in Figs. 4.2, 4.3 and 4.4. Figures 4.5 and 4.6 show the simulated identification of A and B; owing to space limitations, the results for C and D are omitted. The values of I and J defined in the master system are the same as those in the slave system.
4.1.5 Conclusion
There have been many research results on the projective synchronization problem. This section has analyzed the projective synchronization problem for BAM neural networks.
4.2.1 Introduction
Time delays are encountered in many areas, and a time delay is often a source of instability and oscillation. For neural networks with time delays, various sufficient conditions have been proposed to guarantee global asymptotic or exponential stability in the recent literature; see e.g. [9, 43, 56, 63, 64]. Meanwhile, many neural networks may undergo abrupt changes in their structure and parameters caused by phenomena such as component failures or repairs, changing subsystem interconnections, and abrupt environmental disturbances. In this situation, there exist finitely many modes in the neural networks, and the modes may jump (or switch) from one to another at different times. Such systems have been widely studied by many scholars; see e.g. [27, 45, 58, 66] and the references therein.
This section is concerned with the adaptive synchronization problem for T-S fuzzy neural networks with stochastic noises and Markovian jumping parameters by using the M-matrix method. The main purpose of this section is to design an adaptive feedback controller for T-S fuzzy neural networks with stochastic noises and Markovian jumping parameters. M-matrix-based criteria are derived to test whether, under the adaptive feedback controller, the T-S fuzzy neural networks are stochastically synchronized. Finally, a numerical simulation is used to demonstrate the usefulness of the derived M-matrix-based synchronization conditions.
where l ∈ S1 = {1, 2, . . . , ν}. μl1 , μl2 , . . . , μlg are the fuzzy sets. s1 (t), s2 (t),
. . . , sg (t) are the premise variables. ν is the number of fuzzy IF-THEN rules.
t ≥ 0 (or t ∈ R+ , the set of all non-negative real numbers) is the time variable.
x(t) = (x1 (t), x2 (t), . . . , xn (t))T ∈ Rn is the state vector of drive system (4.20)
associated with n neurons. f (x(t)) = [ f 1 (x1 (t)), f 2 (x2 (t)), . . . , f n (xn (t))]T ∈ Rn
denotes the activation function of neurons. τ > 0 is the state delay. As a matter of
convenience, for t ≥ 0, we denote C_l(r(t)) = C_l^i, A_l(r(t)) = A_l^i, B_l(r(t)) = B_l^i and D_l(r(t)) = D_l^i, respectively. Furthermore, in the drive system (4.20), for all i ∈ S, C_l^i = diag{c_{l1}^i, c_{l2}^i, . . . , c_{ln}^i} has positive and unknown entries c_{lv}^i > 0, and A_l^i = (a_{ljv}^i)_{n×n} and B_l^i = (b_{ljv}^i)_{n×n} are the connection weight and the delayed connection weight matrices, respectively.
Using the singleton fuzzifier, product fuzzy inference, and weighted average
defuzzifier, the output of the above fuzzy drive system is inferred as follows:
  dx(t) = Σ_{l=1}^{ν} h_l(s(t)){[−C_l^i x(t) + A_l^i f(x(t)) + B_l^i f(x(t − τ)) + D_l^i]dt},   (4.21)
where

  h_l(s(t)) = ϖ_l(s(t)) / Σ_{j=1}^{ν} ϖ_j(s(t)),   ϖ_l(s(t)) = Π_{j=1}^{g} μ_lj(s_j(t)),
in which μ_lj(s_j(t)) is the grade of membership of s_j(t) in μ_lj. Then it can be seen that, for l = 1, 2, . . . , ν and all t, Σ_{l=1}^{ν} h_l(s(t)) = 1 and h_l(s(t)) ≥ 0.
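The normalized firing strengths h_l(s(t)) described above can be computed directly. The sketch below assumes Gaussian membership functions with illustrative centers and widths (these are our own choices, not the chapter's):

```python
import numpy as np

def membership(s, center, width):
    """Grade of membership mu_lj(s_j) in (0, 1], here a Gaussian shape (assumption)."""
    return np.exp(-((s - center) / width) ** 2)

def fuzzy_weights(s, centers, widths):
    """Normalized firing strengths h_l(s(t)) for nu rules.

    centers, widths: (nu, g) arrays; s: (g,) premise-variable vector.
    Returns h with h_l >= 0 and sum_l h_l = 1, as required below (4.21).
    """
    # varpi_l(s) = product over j of mu_lj(s_j)
    w = np.prod(membership(s[None, :], centers, widths), axis=1)
    return w / w.sum()

# illustrative data: nu = 3 rules, g = 2 premise variables
centers = np.array([[0.0, 0.0], [1.0, -1.0], [-1.0, 1.0]])
widths = np.ones((3, 2))
h = fuzzy_weights(np.array([0.3, -0.5]), centers, widths)
```

The same weights h_l then blend the rule consequents in the defuzzified drive system (4.21).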
Corresponding to the fuzzy drive system (4.20), the fuzzy response system is
described by the following rules.
Response System Rule l: IF s1 (t) is μl1 , s2 (t) is μl2 , . . ., sg (t) is μlg , THEN
where y(t) = (y1(t), y2(t), . . . , yn(t))ᵀ ∈ Rⁿ is the state vector of response system (4.22). As a matter of convenience, for t ≥ 0, we denote Ĉ_l(r(t)) = Ĉ_l^i, Â_l(r(t)) = Â_l^i and B̂_l(r(t)) = B̂_l^i, respectively. Here Ĉ_l^i = diag{ĉ_{l1}^i, ĉ_{l2}^i, . . . , ĉ_{ln}^i}, Â_l^i = (â_{ljk}^i)_{n×n} and B̂_l^i = (b̂_{ljk}^i)_{n×n} are the fuzzy estimates of the unknown matrices C_l^i, A_l^i and B_l^i, respectively. ω(t) = [ω1(t), ω2(t), . . . , ωn(t)]ᵀ is an n-dimensional
Brownian motion defined on a complete probability space (Ω, F, P) with a natural
filtration {Ft }t≥0 (i.e. Ft = σ{ω(s) : 0 ≤ s ≤ t} is a σ-algebra), and is independent of
the Markovian process {r (t)}t≥0 , and σ : R+ ×S× S1 ×Rn ×Rn → Rn×n is the noise
intensity matrix and can be regarded as resulting from the occurrence of external random
fluctuation and other probabilistic causes. Ul (t) = (u l1 (t), u l2 (t), . . . , u ln (t))T ∈
Rn is a control input vector with the form of rules as follows.
Controller Rule l: IF s1 (t) is μl1 , s2 (t) is μl2 , . . ., sg (t) is μlg , THEN
where K l ∈ Rm×n are matrices to be determined later. Then the state-feedback fuzzy
controller is given by
  U(t) = Σ_{l=1}^{ν} h_l(s(t)){K_l(t)(y(t) − x(t))}.   (4.24)
  dy(t) = Σ_{l=1}^{ν} h_l(s(t)){[−Ĉ_l^i y(t) + Â_l^i f(y(t)) + B̂_l^i f(y(t − τ)) + D_l^i + U_l(t)]dt
      + σ(t, i, l, y(t) − x(t), y(t − τ) − x(t − τ))dω(t)}.   (4.25)

  de(t) = Σ_{l=1}^{ν} h_l(s(t)){[−C̃_l^i y(t) − C_l^i e(t) + Ã_l^i g(y(t)) + A_l^i g(e(t)) + B̃_l^i g(y_τ) + B_l^i g(e_τ) + U_l(t)]dt
      + σ(t, i, l, e(t), e_τ)dω(t)},   (4.26)
where C̃_l^i = Ĉ_l^i − C_l^i, Ã_l^i = Â_l^i − A_l^i, B̃_l^i = B̂_l^i − B_l^i. For simplicity, we write e(t − τ) = e_τ and f(x(t) + e(t)) − f(x(t)) = g(e(t)).
The initial condition associated with system (4.26) is given in the following form
for any ξ ∈ L2F0 ([−τ , 0], Rn ), where L2F0 ([−τ , 0], Rn ) is the family of all F0 -
measurable C([−τ , 0]; Rn )-value random variables satisfying that sup−τ ≤s≤0
E|ξ(s)|2 < ∞, and C([−τ , 0]; Rn ) denotes the family of all continuous Rn -valued
functions ξ(s) on [−τ , 0] with the norm ξ = sup−τ ≤s≤0 |ξ(s)|.
The main purpose of the rest of this section is to establish a criterion of adap-
tive synchronization for the system (4.21) and the response system (4.25) by using
adaptive feedback control and M-matrix method.
For this purpose, we introduce some assumptions, the definition and some lemmas
which will be used in the proofs of our main results.
Assumption 4.9 The neuron activation functions f i (·) are bounded and satisfy the
following Lipschitz condition:
Assumption 4.10 The noise intensity matrix σ(·, ·, ·, ·, ·) satisfies the linear growth
condition. That is, there exist two positive constants H1 and H2 such that
d x(t) = (t, r (t), x(t), xτ (t))dt + ð(t, r (t), x(t), xτ (t))dω(t) (4.27)
In this section, we give a criterion and three special cases of adaptive synchronization
by the M-matrix method for the drive system (4.21) and the response system (4.25).
  (L² + H2)q̄ < −(η q̃_i + Σ_{k=1}^{S} γ_ik q̃_k),   ∀i ∈ S, l ∈ S1,   (4.28)

and the parameter update laws of the matrices Ĉ_l^i, Â_l^i and B̂_l^i are chosen as

  ĉ˙_{lj}^i = γ_j q_l^i e_j y_j,   â˙_{ljv}^i = −α_jv q_l^i e_j f_v,   b̂˙_{ljv}^i = −β_jv q_l^i e_j (f_v)_τ,   (4.30)

where α_j > 0, γ_j > 0, α_jv > 0 and β_jv > 0 (j, v = 1, 2, . . . , n) are arbitrary constants.
Computing LV (t, i, l, e, eτ ) along the trajectory of error system (4.26), and using
(4.29) and (4.30), one can obtain that
4.2 Adaptive Synchronization of Stochastic T-S Fuzzy DNN … 109
LV(t, i, l, e, e_τ)
 = Σ_{l=1}^{ν} h_l(s(t)){V_t + V_e[−C̃_l^i y − C_l^i e + Ã_l^i f(y) + A_l^i g(e) + B̃_l^i f(y_τ) + B_l^i g(e_τ) + U_l(t)]
    + (1/2)trace(σᵀ(t, i, l, e, e_τ)V_ee σ(t, i, l, e, e_τ)) + Σ_{k=1}^{S} γ_ik V(t, k, e)}
 = Σ_{l=1}^{ν} h_l(s(t)){2 Σ_{j=1}^{n} (1/α_j)k_{lj}^i k̇_{lj}^i + 2 Σ_{j=1}^{n} (1/γ_j)c̃_{lj}^i c̃˙_{lj}^i
    + 2 Σ_{j=1}^{n} Σ_{v=1}^{n} (1/α_jv)ã_{ljv}^i ã˙_{ljv}^i + 2 Σ_{j=1}^{n} Σ_{v=1}^{n} (1/β_jv)b̃_{ljv}^i b̃˙_{ljv}^i
    + 2q_l^i eᵀ[−C̃_l^i y − C_l^i e + Ã_l^i f(y) + A_l^i g(e) + B̃_l^i f(y_τ) + B_l^i g(e_τ) + U_l(t)]   (4.31)
    + (1/2)trace(σᵀ(t, i, l, e, e_τ)(2q_l^i)σ(t, i, l, e, e_τ)) + Σ_{k=1}^{S} γ_ik q_l^k |e|²}
 = Σ_{l=1}^{ν} h_l(s(t)){2q_l^i eᵀ[−C_l^i e + A_l^i g(e) + B_l^i g(e_τ)]
    + (1/2)trace(σᵀ(t, i, l, e, e_τ)(2q_l^i)σ(t, i, l, e, e_τ)) + Σ_{k=1}^{S} γ_ik q_l^k |e|²}.
Now, using Assumptions 4.9 and 4.10 together with Lemma 1.13 yields

  (1/2)trace(σᵀ(t, i, l, e, e_τ)(2q_l^i)σ(t, i, l, e, e_τ)) ≤ q_l^i (H1|e|² + H2|e_τ|²).   (4.35)
LV(t, i, l, e, e_τ)
 ≤ Σ_{l=1}^{ν} h_l(s(t)){(η q_l^i + Σ_{k=1}^{S} γ_ik q_l^k)|e|² + (L² + H2)q_l^i |e_τ|²}
 ≤ Σ_{l=1}^{ν} h_l(s(t)){(η q̃_i + Σ_{k=1}^{S} γ_ik q̃_k)|e|² + (L² + H2)q̄ |e_τ|²}   (4.36)
 ≤ Σ_{l=1}^{ν} h_l(s(t)){−m|e|² + (L² + H2)q̄ |e_τ|²},

where m = −(η q̃_i + Σ_{k=1}^{S} γ_ik q̃_k) with [q̃1, q̃2, . . . , q̃S]ᵀ = M⁻¹m⃗.
Let ψ(t) = 0, ω1(e) = m|e|² and ω2(e_τ) = (L² + H2)q̄|e_τ|². Then inequality (4.36) shows that inequality (1.14) holds. Moreover, ω1(0) = 0 and ω2(0) = 0 when e = 0 and e_τ = 0, and inequality (4.28) implies ω1(e) > ω2(e_τ), so (1.15) holds. Finally, (1.16) holds when |e| → ∞ and |e_τ| → ∞. By Lemma 1.9, the error system (4.26) is almost surely asymptotically stable, and hence the noise-perturbed response system (4.25) can be adaptively almost surely asymptotically synchronized with the drive neural network (4.21). This completes the proof.
Remark 4.13 In Theorem 4.12, the condition (4.28) for adaptive synchronization of neural networks with Markovian jumping parameters, obtained by the M-matrix approach, is very different from conditions obtained by other methods, such as the linear matrix inequality method. Moreover, the condition can be checked once the drive system and the response system are given and the positive constant m is chosen.
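The check described in Remark 4.13 is straightforward to automate. The sketch below tests the nonsingular M-matrix property via leading principal minors and evaluates a scalar inequality of the form (4.28); all numbers are illustrative, not the chapter's example data:

```python
import numpy as np

def is_nonsingular_M_matrix(M):
    """Z-matrix (non-positive off-diagonal) with all leading principal minors positive."""
    M = np.asarray(M, dtype=float)
    n = len(M)
    z_ok = all(M[i, j] <= 0 for i in range(n) for j in range(n) if i != j)
    minors_ok = all(np.linalg.det(M[:k, :k]) > 0 for k in range(1, n + 1))
    return z_ok and minors_ok

def condition_428(eta, Gamma, q_tilde, q_bar, L, H2, i):
    """Check (L^2 + H2)*q_bar < -(eta*q_tilde_i + sum_k gamma_ik*q_tilde_k)."""
    rhs = -(eta * q_tilde[i] + float(Gamma[i] @ q_tilde))
    return (L ** 2 + H2) * q_bar < rhs

# illustrative data only
Gamma = np.array([[-2.0, 2.0], [3.0, -3.0]])   # generator of the Markov chain
ok_M = is_nonsingular_M_matrix(np.array([[2.0, -1.0], [-1.0, 2.0]]))
ok_428 = condition_428(eta=-10.0, Gamma=Gamma, q_tilde=np.array([1.0, 1.0]),
                       q_bar=1.0, L=1.0, H2=1.0, i=0)
```

Both checks reduce the synchronization test to elementary linear algebra, which is the practical advantage the remark points out.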
Now, we are in a position to consider three special cases of the neural networks
(4.21), (4.25) and (4.26). The proof is similar to that of Theorem 4.12, and hence
omitted.
Case 1. The matrices C_l^i, A_l^i and B_l^i of drive system (4.21) and the matrices Ĉ_l^i, Â_l^i and B̂_l^i of response system (4.25) have the same parameters, respectively. That is to say, C_l^i = Ĉ_l^i, A_l^i = Â_l^i and B_l^i = B̂_l^i. The drive system, the response system, and the error system can be represented as follows:
  dx(t) = Σ_{l=1}^{ν} h_l(s(t)){[−C_l^i x(t) + A_l^i f(x(t)) + B_l^i f(x(t − τ)) + D_l^i]dt},   (4.37)

  dy(t) = Σ_{l=1}^{ν} h_l(s(t)){[−C_l^i y(t) + A_l^i f(y(t)) + B_l^i f(y(t − τ)) + D_l^i + U_l(t)]dt
      + σ(t, i, l, y(t) − x(t), y(t − τ) − x(t − τ))dω(t)},   (4.38)

  de(t) = Σ_{l=1}^{ν} h_l(s(t)){[−C_l^i e(t) + A_l^i g(e(t)) + B_l^i g(e_τ) + U_l(t)]dt + σ(t, i, l, e(t), e_τ)dω(t)}.   (4.39)
For this case, one can get the following result that is analogous to Theorem 4.12.
Corollary 4.14 Assume that η = −2γ + α + L² + β + H1, γ = min_{l∈S1} min_{i∈S} min_{1≤j≤n} c_{lj}^i, α = max_{l∈S1} max_{i∈S} (ρ(A_l^i))², β = max_{l∈S1} max_{i∈S} (ρ(B_l^i))², q̄ = max_{l∈S1} max_{i∈S} q_l^i, q̃_i = min_{l∈S1} q_l^i, and

  (L² + H2)q̄ < −(η q̃_i + Σ_{k=1}^{S} γ_ik q̃_k).   (4.40)
Under Assumptions 4.9 and 4.10, the noise-perturbed response system (4.38) can be
adaptively synchronized with the drive system (4.37), if the feedback gain K l (t) of
controller (4.24) with the update law is chosen as
Case 2. The Markovian jumping parameters are removed from the neural networks (4.21), (4.25) and (4.26). The drive system, the response system, and the error system can be represented as follows:

  dx(t) = Σ_{l=1}^{ν} h_l(s(t)){[−C_l x(t) + A_l f(x(t)) + B_l f(x(t − τ)) + D_l]dt},   (4.42)

  dy(t) = Σ_{l=1}^{ν} h_l(s(t)){[−Ĉ_l y(t) + Â_l f(y(t)) + B̂_l f(y(t − τ)) + D_l + U_l(t)]dt
      + σ(t, l, y(t) − x(t), y(t − τ) − x(t − τ))dω(t)},   (4.43)

  de(t) = Σ_{l=1}^{ν} h_l(s(t)){[−C̃_l y(t) − C_l e(t) + Ã_l g(y(t)) + A_l g(e(t)) + B̃_l g(y_τ) + B_l g(e_τ) + U_l(t)]dt
      + σ(t, l, e(t), e_τ)dω(t)}.   (4.44)
For this case, one can also get the following result that is analogous to Theo-
rem 4.12.
Corollary 4.15 Assume that η = −2γ + α + L² + β + H1, γ = min_{l∈S1} min_{1≤j≤n} c_{lj}, α = max_{l∈S1} (ρ(A_l))², β = max_{l∈S1} (ρ(B_l))², and L² + H2 < −η. Under Assumptions 4.9 and 4.10, the noise-perturbed response system (4.43) can be adaptively synchronized with
the drive system (4.42), if the feedback gain K_l(t) of controller (4.24) with the update law is chosen as

  k̇_lj = −α_j q_l e_j²,   (4.45)

and the parameter update laws of the matrices Ĉ_l, Â_l and B̂_l are chosen as

  ĉ˙_lj = γ_j q_l e_j y_j,
  â˙_ljv = −α_jv q_l e_j f_v,   (4.46)
  b̂˙_ljv = −β_jv q_l e_j (f_v)_τ,

where α_j > 0, γ_j > 0, α_jv > 0 and β_jv > 0 (j, v = 1, 2, . . . , n) are arbitrary constants.
Case 3. The T-S fuzzy structure is removed from the neural networks (4.21), (4.25), (4.26) and from the controller (4.24). The drive system, the response system, the error system, and the controller can be represented as follows:
  k̇_j = −α_j q^i e_j²,   (4.51)

and the parameter update laws of the matrices Ĉ^i, Â^i and B̂^i are chosen as

  ĉ˙_j = γ_j q^i e_j y_j,
  â˙_jv = −α_jv q^i e_j f_v,   (4.52)
  b̂˙_jv = −β_jv q^i e_j (f_v)_τ,
where α j > 0, γ j > 0, αjv > 0 and βjv > 0 ( j, v = 1, 2, . . . , n) are arbitrary
constants, respectively.
[Fig. 4.7: switching of the system mode over t ∈ [0, 6]]
  Γ = [−7 7; 4 −4],   α1 = α2 = α3 = 1,
These parameters fully satisfy Assumptions 4.9 and 4.10 and inequality (4.40), and M is a nonsingular M-matrix. Hence, by Corollary 4.14, the main result is completely verified if the responses e1(t), e2(t) and e3(t) of the error system achieve adaptive synchronization.
To illustrate the effectiveness of the method proposed in this section, we adopt the M-matrix approach to compute the solutions for stochastic T-S fuzzy neural networks with Markovian jumping parameters and to simulate the dynamics of the error system. The simulation results are given in Figs. 4.7, 4.8 and 4.9. Among them, Fig. 4.7 shows the switching of the system mode, Fig. 4.8 shows the state responses e1(t), e2(t) and e3(t) of the error system, and Fig. 4.9 shows the dynamic curves of the feedback gains k1(t), k2(t) and k3(t). From the simulations, one can find that the neural networks with Markovian jumping parameters achieve adaptive synchronization.
4.2.5 Conclusions
We have studied the problem of adaptive synchronization for stochastic T-S fuzzy
neural networks with time-delay and Markovian jumping parameters. We have
removed the traditional monotonicity and smoothness assumptions on the activation
function. An M-matrix method has been developed to solve the problem addressed.
The adaptive synchronization controller has been designed by the M-matrix method for
T-S fuzzy neural networks with stochastic noises and Markovian jumping parame-
ters. Finally, a simulation example has been used to demonstrate the usefulness of
the proposed main results.
4.3 Synchronization of DNN Based on Parameter Identification … 115
[Figures: error responses e(t) of the error system and feedback gain curves versus time t]
4.3.1 Introduction
In the past few years, there has been a great deal of research on delayed neural networks (DNNs), owing to their complex and unpredictable behaviors in practice, in addition to the traditional topics of stability and periodic oscillation. As is known, network-induced delay is one of the main issues in neural networks, so in recent years much work has been done on the stability of neural networks with discrete time delays and with both discrete and distributed time delays [39, 41, 42, 44, 47, 48]. As proposed in [4, 11], artificial neural networks can exhibit chaotic behavior. Consequently, since the master-slave concept for the synchronization of chaotic systems was first proposed by Pecora and Carroll in 1990 [29], research interest in the synchronization of neural networks with or without time delays has spread into many different fields [2, 7, 8, 21, 50]. Synchronization of coupled delayed neural networks was addressed for
the first time in [1] in 2004. Since then, further studies have appeared in recent years [18, 32, 37, 54]. Very recently, several new results on the synchronization
problem of neural networks have been proposed in the literature. For example, Wang and Cao studied synchronization in an array of linearly (stochastically) coupled networks with time delays [5, 62]. In addition, in [14], some conditions were proposed for global synchronization of DNNs based on parameter identification by employing the invariance principle of functional differential equations and the update law
for adaptive control. Meanwhile, by introducing a descriptor technique and using
Lyapunov-Krasovskii functional, a multiple delayed state-feedback control design
for exponential synchronization problem of a class of delayed neural networks with
multiple time-varying discrete delays was presented in [23]. Moreover, [36] aimed to
study the global robust synchronization between two coupled neural networks with
all the parameters unknown and discrete time-varying delays via output or state cou-
pling. However, all the papers mentioned above only took the discrete time delays
into consideration, while the distributed time delays have not attracted much atten-
tion among researchers. Though the signal transmission can be modeled with discrete
delays because of the immediate process, it may be distributed during a certain time
period [44]. Hence, in order to modeling a realistic neural network, both discrete and
distributed delays should be involved in the model [11].
From the above discussion, we can see that the synchronization problem of neural networks with both discrete and distributed time delays is a novel problem that has seldom been studied. For example, in [36], the adaptive synchronization of neural networks with time-varying delays and distributed delays was investigated on the basis of the LaSalle invariance principle of functional differential equations and the adaptive feedback control technique. Inspired by this recent literature, and building on [36], we consider the synchronization of neural networks with both discrete and distributed time-varying delays based on parameter identification and via output coupling, which models a more realistic and comprehensive network.
In this section, we focus on the synchronization problem of two coupled neural networks with both discrete and distributed time-varying delays. The section is organized as follows. First, the formulations and preliminaries are given for the proof of the main results. Then, by using a Lyapunov functional and estimation methods, we propose several new conditions for the global synchronization of the two coupled systems, and give criteria for identifying the unknown parameters and designing the controller via output coupling. After that, some illustrative examples are provided to show the merits of this research. Finally, a conclusion is given for the whole section.
In this section, the following neural network model, namely the master system, which involves both discrete and distributed time-varying delays, is considered:
  dx_i(t) = [−c_i x_i(t) + Σ_{j=1}^{n} a_ij f_j(x_j(t)) + Σ_{j=1}^{n} b_ij u_j(x_j(t − τ1(t)))
      + Σ_{j=1}^{n} w_ij ∫_{t−τ2(t)}^{t} v_j(x_j(s))ds + J_i]dt,   i = 1, 2, . . . , n,   (4.55)

or equivalently

  dx(t) = [−Cx(t) + Af(x(t)) + Bu(x(t − τ1(t))) + W ∫_{t−τ2(t)}^{t} v(x(s))ds + J]dt,   (4.56)
where x(t) = [x1(t), x2(t), . . . , xn(t)]ᵀ ∈ Rⁿ is the state vector of the DNN; f(x(t)) = [f1(x1(t)), f2(x2(t)), . . . , fn(xn(t))]ᵀ ∈ Rⁿ, u(x(t − τ1(t))) = [u1(x1(t − τ1(t))), . . . , un(xn(t − τ1(t)))]ᵀ ∈ Rⁿ and v(x(t)) = [v1(x1(t)), v2(x2(t)), . . . , vn(xn(t))]ᵀ ∈ Rⁿ are the activation functions of the neurons, with f(0) = u(0) = v(0) = 0; C = diag{c1, c2, . . . , cn} > 0 is a diagonal matrix whose ith entry represents the rate at which the ith unit resets its potential to the resting state in isolation when disconnected from the external inputs and the network; A = (a_ij)_{n×n}, B = (b_ij)_{n×n} and W = (w_ij)_{n×n} stand for, respectively, the connection
weight matrix, the discretely delayed connection weight matrix and the distributive
delayed connection weight matrix; J = [J1 , J2 , . . . , Jn ]T ∈ Rn is the external input
vector function; τ1 (t) ≥ 0 and τ2 (t) ≥ 0 are the discrete time-varying delay and
distributed time-varying delay, respectively.
Similarly, the controlled slave system takes the following form:

  dy_i(t) = [−c̄_i y_i(t) + Σ_{j=1}^{n} ā_ij f_j(y_j(t)) + Σ_{j=1}^{n} b̄_ij u_j(y_j(t − τ1(t)))
      + Σ_{j=1}^{n} w̄_ij ∫_{t−τ2(t)}^{t} v_j(y_j(s))ds + J_i + κ_i(t)]dt,   i = 1, 2, . . . , n,   (4.57)
or equivalently
where C̄ = diag{c̄1 , c̄2 , . . . , c̄n } > 0, Ā = (āij )n×n , B̄ = (b̄ij )n×n and W̄ =
(w̄ij )n×n are all uncertain parameters to be identified. K(t) is a general controller that
can implement the synchronization of the two coupled DNNs and the identification
of the parameters.
Let e(t) = y(t) − x(t). Then we have

  de(t)/dt = −Ce(t) + (A + K)g(e(t)) + (B + K*)g̃(e(t − τ1(t))) + W ∫_{t−τ2(t)}^{t} ĝ(e(s))ds
      − (C̄ − C)y(t) + (Ā − A)f(y(t)) + (B̄ − B)u(y(t − τ1(t))) + (W̄ − W) ∫_{t−τ2(t)}^{t} v(y(s))ds,   (4.59)

where g(e(t)) = f(e(t) + x(t)) − f(x(t)), g̃(e(t − τ1(t))) = u(e(t − τ1(t)) + x(t − τ1(t))) − u(x(t − τ1(t))) and ĝ(e(t)) = v(e(t) + x(t)) − v(x(t)).
For any ζi , ξi ∈ L2F0 ([−τ ∗ , 0]; Rn ), we give the initial states: xi (t) = ζi (t), yi (t)
= ξi (t), i = 1, 2, . . . , n, where −τ ∗ ≤ t ≤ 0.
In order to establish our results, the following necessary assumptions are made.
Assumption 4.17 The activation functions f i (x) , u i (x) and vi (x) are bounded and
satisfy the Lipschitz condition:
∀x, y ∈ R, i = 1, 2, . . . , n,
Assumption 4.18 The delays satisfy τ1* ≥ τ1(t) ≥ 0 and τ2* ≥ τ2(t) ≥ 0, and are both differentiable and bounded with 1 > δ ≥ τ̇1(t) ≥ 0 and 1 > σ ≥ τ̇2(t) ≥ 0 for t ∈ [0, ∞).
If lim_{t→∞} E‖e_i(t)‖² = 0, i = 1, 2, . . . , n, then the error signal system (4.59) is globally asymptotically stable in mean square.
Theorem 4.20 Under Assumptions 4.17 and 4.18, the slave DNNs (4.57) is synchronized with the master DNNs (4.55) and lim_{t→∞}(c̄_i − c_i) = lim_{t→∞}(ā_ij − a_ij) = lim_{t→∞}(b̄_ij − b_ij) = lim_{t→∞}(w̄_ij − w_ij) = 0 (i, j = 1, 2, . . . , n), under the following three conditions:

(I) The time-varying delayed feedback controller is taken as

  κ_i(t) = Σ_{j=1}^{n} k_ij (f_j(y_j(t)) − f_j(x_j(t))) + Σ_{j=1}^{n} k*_ij (u_j(y_j(t − τ1(t))) − u_j(x_j(t − τ1(t)))),   i = 1, 2, . . . , n.   (4.60)
(II) The adapted parameters c̄_i, ā_ij, b̄_ij and w̄_ij with the update law are taken as

  c̄˙_i = γ_i e_i(t) y_i(t),   i = 1, 2, . . . , n,
  ā˙_ij = −α_ij e_i(t) f_j(y_j(t)),   i, j = 1, 2, . . . , n,
  b̄˙_ij = −β_ij e_i(t) u_j(y_j(t − τ1(t))),   i, j = 1, 2, . . . , n,   (4.61)
  w̄˙_ij = −ω_ij e_i(t) ∫_{t−τ2(t)}^{t} v_j(y_j(s))ds,   i, j = 1, 2, . . . , n,
where γi > 0, αij > 0, βij > 0 and ωij > 0 are arbitrary positive constants.
(III) The following inequality

  −μ_i c_i + μ_i ε_i (a_ii + k_ii) + (1/2) Σ_{j=1, j≠i}^{n} μ_i ε_j |a_ij + k_ij| + (1/2) Σ_{j=1, j≠i}^{n} μ_j ε_i |a_ji + k_ji|
      + (1/2) Σ_{j=1}^{n} μ_i φ_j |b_ij + k*_ij| + (1/2) Σ_{j=1}^{n} μ_i ϕ_j |w_ij| + (1/(2(1 − δ))) Σ_{j=1}^{n} μ_i φ_i |b_ji + k*_ji|
      + (τ2*/(2(1 − σ))) Σ_{j=1}^{n} μ_i ϕ_i |w_ji| < 0   (4.62)
holds, where μi > 0, εi > 0, φi > 0 and ϕi > 0 are all positive constants,
i, j = 1, 2, . . . , n.
Proof Consider the following Lyapunov-Krasovskii functional:

  V(t) = (1/2) Σ_{i=1}^{n} μ_i e_i²(t) + (1/(2(1 − δ))) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i |d_ji| ∫_{t−τ1(t)}^{t} |e_i(s)||g̃_i(e_i(s))|ds
      + (1/(2(1 − σ))) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i |w_ji| ∫_{t−τ2(t)}^{t} ∫_{s}^{t} |e_i(η)||ĝ_i(e_i(η))|dη ds
      + (1/2) Σ_{i=1}^{n} (μ_i/γ_i)(c̄_i − c_i)² + (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} (μ_i/α_ij)(ā_ij − a_ij)²
      + (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} (μ_i/β_ij)(b̄_ij − b_ij)² + (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} (μ_i/ω_ij)(w̄_ij − w_ij)²,   (4.63)

where d_ji = b_ji + k*_ji. Then the derivative of V(t) along the trajectory of the error system (4.59) can be derived as follows:
V̇(t) = Σ_{i=1}^{n} μ_i e_i(t)(−c_i e_i(t) + Σ_{j=1}^{n} (a_ij + k_ij)g_j(e_j(t)) + Σ_{j=1}^{n} (b_ij + k*_ij)g̃_j(e_j(t − τ1(t)))
      + Σ_{j=1}^{n} w_ij ∫_{t−τ2(t)}^{t} ĝ_j(e_j(s))ds − (c̄_i − c_i)y_i(t) + Σ_{j=1}^{n} (ā_ij − a_ij)f_j(y_j(t))
      + Σ_{j=1}^{n} (b̄_ij − b_ij)u_j(y_j(t − τ1(t))) + Σ_{j=1}^{n} (w̄_ij − w_ij) ∫_{t−τ2(t)}^{t} v_j(y_j(s))ds)
      + (1/(2(1 − δ))) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i |d_ji| |e_i(t)||g̃_i(e_i(t))|
      − ((1 − τ̇1(t))/(2(1 − δ))) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i |d_ji| |e_i(t − τ1(t))||g̃_i(e_i(t − τ1(t)))|
      + (1/(2(1 − σ))) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i |w_ji| τ2(t) |e_i(t)||ĝ_i(e_i(t))|
      − ((1 − τ̇2(t))/(2(1 − σ))) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i |w_ji| ∫_{t−τ2(t)}^{t} |e_i(s)||ĝ_i(e_i(s))|ds
      + [Σ_{i=1}^{n} (μ_i/γ_i)(c̄_i − c_i)c̄˙_i + Σ_{i=1}^{n} Σ_{j=1}^{n} (μ_i/α_ij)(ā_ij − a_ij)ā˙_ij
      + Σ_{i=1}^{n} Σ_{j=1}^{n} (μ_i/β_ij)(b̄_ij − b_ij)b̄˙_ij + Σ_{i=1}^{n} Σ_{j=1}^{n} (μ_i/ω_ij)(w̄_ij − w_ij)w̄˙_ij]
  ≤ −Σ_{i=1}^{n} μ_i c_i e_i²(t) + Σ_{i=1}^{n} μ_i (a_ii + k_ii)|e_i(t)||g_i(e_i(t))| + Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} μ_i |a_ij + k_ij| |e_i(t)||g_j(e_j(t))|
      + Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i |b_ij + k*_ij| |e_i(t)||g̃_j(e_j(t − τ1(t)))| + Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i |w_ij| |e_i(t)| ∫_{t−τ2(t)}^{t} |ĝ_j(e_j(s))|ds
      + (1/(2(1 − δ))) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i |d_ji| |e_i(t)||g̃_i(e_i(t))| − (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i |d_ji| |e_i(t − τ1(t))||g̃_i(e_i(t − τ1(t)))|
      + (τ2*/(2(1 − σ))) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i |w_ji| |e_i(t)||ĝ_i(e_i(t))| − (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i |w_ji| ∫_{t−τ2(t)}^{t} |e_i(s)||ĝ_i(e_i(s))|ds.   (4.64)
Using the inequality |a||b| ≤ (εa² + b²/ε)/2, we have

  Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} μ_i |a_ij + k_ij| |e_i(t)||g_j(e_j(t))|
      ≤ (1/2) Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} μ_i |a_ij + k_ij| ε_j e_i²(t) + (1/2) Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} μ_i |a_ij + k_ij| g_j²(e_j(t))/ε_j,   (4.65)

and

  (1/2) Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} μ_i |a_ij + k_ij| g_j²(e_j(t))/ε_j ≤ (1/2) Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} μ_i |a_ij + k_ij| |g_j(e_j(t))| |e_j(t)|
      = (1/2) Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} μ_j |a_ji + k_ji| |g_i(e_i(t))| |e_i(t)|.   (4.69)
With the same method and Assumption 4.18, the following (4.70) and (4.71) can be obtained immediately:

  Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i |b_ij + k*_ij| |e_i(t)||g̃_j(e_j(t − τ1(t)))|
      ≤ (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i |b_ij + k*_ij| φ_j e_i²(t)
      + (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_j |b_ji + k*_ji| |g̃_i(e_i(t − τ1(t)))| |e_i(t − τ1(t))|,   (4.70)
  Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i |w_ij| |e_i(t)| ∫_{t−τ2(t)}^{t} |ĝ_j(e_j(s))|ds
      ≤ (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i |w_ij| ϕ_j e_i²(t) + (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_j |w_ji| ∫_{t−τ2(t)}^{t} |ĝ_i(e_i(s))| |e_i(s)| ds.   (4.71)
Substituting (4.65)-(4.71) into (4.64) yields

  V̇(t) ≤ Σ_{i=1}^{n} {−μ_i c_i + μ_i ε_i (a_ii + k_ii) + (1/2) Σ_{j=1, j≠i}^{n} μ_i ε_j |a_ij + k_ij| + (1/2) Σ_{j=1, j≠i}^{n} μ_j ε_i |a_ji + k_ji|
      + (1/2) Σ_{j=1}^{n} μ_i φ_j |b_ij + k*_ij| + (1/2) Σ_{j=1}^{n} μ_i ϕ_j |w_ij| + (1/(2(1 − δ))) Σ_{j=1}^{n} μ_i φ_i |d_ji|
      + (τ2*/(2(1 − σ))) Σ_{j=1}^{n} μ_i ϕ_i |w_ji|} e_i²(t).   (4.72)
Therefore, if condition (III) in Theorem 4.20 is satisfied, we obtain V̇(t) = 0 if and only if e(t) = 0, and otherwise V̇(t) < 0. It can thus be concluded that the error signal model (4.59) is globally asymptotically stable in mean square. Based on the invariance principle of functional differential equations, as t → ∞ we have E‖e(t; ϕ)‖² → 0, c̄_i → c_i, ā_ij → a_ij, b̄_ij → b_ij, and w̄_ij → w_ij. Thus, all the unknown parameters with arbitrary initial values in the slave system (4.57) can be identified when (4.57) synchronizes with the master system (4.55). This completes the proof.
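For numerical implementation, update laws of the form (4.61) can be integrated with a simple Euler scheme. The sketch below shows only the parameter-update step with scalar gains and placeholder signal values (our own illustrative choices), not the full coupled simulation:

```python
import numpy as np

def update_step(e, y, f_y, u_y_del, v_y_int, params, gains, dt):
    """One Euler step of adaptive laws of the form (4.61).

    e: error e(t); y: slave state y(t); f_y = f(y(t));
    u_y_del = u(y(t - tau1)); v_y_int = integral of v(y(s)) over [t - tau2, t].
    """
    c_bar, A_bar, B_bar, W_bar = params
    gamma, alpha, beta, omega = gains
    c_bar = c_bar + dt * gamma * e * y                 # c_bar_i' = gamma_i e_i y_i
    A_bar = A_bar - dt * alpha * np.outer(e, f_y)      # a_bar_ij' = -alpha_ij e_i f_j(y_j)
    B_bar = B_bar - dt * beta * np.outer(e, u_y_del)   # b_bar_ij' = -beta_ij e_i u_j(y_j(t-tau1))
    W_bar = W_bar - dt * omega * np.outer(e, v_y_int)  # w_bar_ij' = -omega_ij e_i * integral term
    return c_bar, A_bar, B_bar, W_bar

# with zero error the estimates stay fixed, as (4.61) requires
n = 2
params0 = (np.ones(n), np.zeros((n, n)), np.zeros((n, n)), np.zeros((n, n)))
out = update_step(np.zeros(n), np.ones(n), np.ones(n), np.ones(n), np.ones(n),
                  params0, gains=(1.0, 1.0, 1.0, 1.0), dt=0.01)
```

In a full simulation this step would run alongside the integration of the error dynamics (4.59), with e(t) driving both.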
Remark 4.21 It can be seen from the form of the Lyapunov-Krasovskii functional (4.63) that neither symmetry nor positive (negative) definiteness of the coupling matrices is needed. Thus, the results are less restrictive.
Remark 4.22 In this section, we have chosen the general time-varying delayed feedback controller K(t) = Kg(e(t)) + K*g̃(e(t − τ1(t))) to model a more realistic situation. It should be mentioned that if the controller is taken as K(t) = Kg(e(t)), synchronization of the two coupled neural networks can also be achieved, but its performance is inferior to that of (4.60). That is to say, the former controller is more practical than the latter.
Remark 4.23 This section is concerned with time-varying delays in the general case. For the special case of constant time delays, similar results can be derived by the same method without difficulty.
If the two coupled neural networks (4.55) and (4.57) have no distributed time-varying delays, then we can obtain the following corollary directly. Consider the master system (4.73) and the slave system (4.74) as follows:
Corollary 4.24 Under Assumptions 4.17 and 4.18, the slave DNNs (4.57) is synchronized with the master DNNs (4.55) and lim_{t→∞}(c̄_i − c_i) = lim_{t→∞}(ā_ij − a_ij) = lim_{t→∞}(b̄_ij − b_ij) = 0 (i, j = 1, 2, . . . , n), under the following three conditions:

(I) The time-varying delayed feedback controller is taken as

  κ_i(t) = Σ_{j=1}^{n} k_ij (f_j(y_j(t)) − f_j(x_j(t))) + Σ_{j=1}^{n} k*_ij (u_j(y_j(t − τ1(t))) − u_j(x_j(t − τ1(t)))),   i = 1, 2, . . . , n.   (4.75)
(II) The adapted parameters c̄_i, ā_ij and b̄_ij with the update law are taken as

  c̄˙_i = γ_i e_i(t) y_i(t),   i = 1, 2, . . . , n,
  ā˙_ij = −α_ij e_i(t) f_j(y_j(t)),   i, j = 1, 2, . . . , n,   (4.76)
  b̄˙_ij = −β_ij e_i(t) u_j(y_j(t − τ1(t))),   i, j = 1, 2, . . . , n,
where γi > 0, αij > 0 and βij > 0 are arbitrary positive constants.
(III) The following inequality

  −μ_i c_i + μ_i ε_i (a_ii + k_ii) + (1/2) Σ_{j=1, j≠i}^{n} μ_i ε_j |a_ij + k_ij| + (1/2) Σ_{j=1, j≠i}^{n} μ_j ε_i |a_ji + k_ji|
      + (1/2) Σ_{j=1}^{n} μ_i φ_j |b_ij + k*_ij| + (1/(2(1 − δ))) Σ_{j=1}^{n} μ_i φ_i |b_ji + k*_ji| < 0   (4.77)
holds, where μi > 0, εi > 0 and φi > 0 are all positive constants, i, j = 1, 2, . . . , n.
Proof Let w_ij = 0 in the models (4.55) and (4.57). By utilizing the method proposed in the proof of Theorem 4.20, we can obtain Corollary 4.24 directly.
If the two coupled neural networks have neither discrete nor distributed time-varying delays, then the following corollary can be obtained immediately. Consider the master system (4.78) and the slave system (4.79) as follows:

Corollary 4.25 Under Assumptions 4.17 and 4.18, the slave DNNs (4.57) is synchronized with the master DNNs (4.55) and lim_{t→∞}(c̄_i − c_i) = lim_{t→∞}(ā_ij − a_ij) = 0 (i, j = 1, 2, . . . , n), under the following three conditions:

(I) The feedback controller is taken as

  κ_i(t) = Σ_{j=1}^{n} k_ij (f_j(y_j(t)) − f_j(x_j(t))),   i = 1, 2, . . . , n.   (4.80)
(II) The adapted parameters c̄_i and ā_ij with the update law are taken as

  c̄˙_i = γ_i e_i(t) y_i(t),   i = 1, 2, . . . , n,
  ā˙_ij = −α_ij e_i(t) f_j(y_j(t)),   i, j = 1, 2, . . . , n.   (4.81)
(III) The following inequality

  −μ_i c_i + μ_i ε_i (a_ii + k_ii) + (1/2) Σ_{j=1, j≠i}^{n} μ_i ε_j |a_ij + k_ij| + (1/2) Σ_{j=1, j≠i}^{n} μ_j ε_i |a_ji + k_ji| < 0   (4.82)
holds, where μi > 0 and εi > 0 are both positive constants, i = 1, 2, . . . , n.
Proof Let b_ij = 0 and w_ij = 0 in the proof of Theorem 4.20. By a similar technique, Corollary 4.25 can be derived immediately.
In this section, several numerical simulations are presented to illustrate the effective-
ness of our results.
Example
Consider the following master system with discrete and distributed time-varying
delays:
[Fig. 4.10: chaotic phase trajectory, y1(t) versus x1(t)]
  dx(t) = [−Cx(t) + Af(x(t)) + Bu(x(t − τ1(t))) + W ∫_{t−τ2(t)}^{t} v(x(s))ds + J]dt,   (4.83)

where x(t) = [x1(t), x2(t)]ᵀ, f(x(t)) = u(x(t)) = v(x(t)) = [tanh(x1(t)), tanh(x2(t))]ᵀ, τ1(t) = τ2(t) = 0.8, J = [0, 0]ᵀ, and

  C = [1 0; 0 1],  A = [2.1 −0.12; −5.1 3.1],  B = [−1.6 −0.1; −0.2 −2.4],  W = [−2.3 −0.5; −1.1 −0.2].
The initial values are chosen as x1 (t) = 0.4, x2 (t) = 0.6, ∀t ∈ [−1, 0], then, the
chaotic phase trajectories of DNNs (4.83) can be obtained as Fig. 4.10 shows.
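A trajectory of (4.83) can be reproduced with a fixed-step Euler scheme that keeps a history buffer for the delay terms. The step size, horizon, and Riemann-sum treatment of the distributed delay below are our own illustrative choices:

```python
import numpy as np

# system matrices from (4.83)
C = np.eye(2)
A = np.array([[2.1, -0.12], [-5.1, 3.1]])
B = np.array([[-1.6, -0.1], [-0.2, -2.4]])
W = np.array([[-2.3, -0.5], [-1.1, -0.2]])
tau, dt, steps = 0.8, 0.01, 2000
lag = int(tau / dt)

hist = np.tile([0.4, 0.6], (lag + 1, 1))   # constant initial function on [-tau, 0]
traj = [hist[-1].copy()]
for _ in range(steps):
    x = traj[-1]
    x_lag = hist[0]                        # x(t - tau) for the discrete delay term
    # distributed delay term approximated by a Riemann sum over the buffer
    integral = np.tanh(hist).sum(axis=0) * dt
    dx = (-C @ x + A @ np.tanh(x) + B @ np.tanh(x_lag) + W @ integral) * dt
    x_new = x + dx
    hist = np.vstack([hist[1:], x_new])    # slide the history window forward
    traj.append(x_new)
traj = np.array(traj)
```

Plotting traj[:, 0] against traj[:, 1] should reveal the chaotic attractor of Fig. 4.10; a smaller dt or a higher-order scheme gives a sharper picture.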
In order to prove that our results are practical and useful, the following slave
system with controller is considered:
  dy(t) = [−C̄y(t) + Āf(y(t)) + B̄u(y(t − τ1(t))) + W̄ ∫_{t−τ2(t)}^{t} v(y(s))ds + J
      + K(f(y(t)) − f(x(t))) + K*(u(y(t − τ1(t))) − u(x(t − τ1(t))))]dt.   (4.84)
It can be seen that ā22, b̄22 and w̄22 are the parameters to be identified. Next, we consider the synchronization criteria proposed in Theorem 4.20. In condition (II) of Theorem 4.20, the corresponding parameters are taken as α22 = 8.7, β22 = 6.2, ω22 = 3.6, and the parameters in condition (III) are μi = εi = φi = ϕi = 1 (i = 1, 2),
and δ = σ = 0.5, respectively. The gain matrix of the output coupling controller is
chosen as
  K = [−12 1; 6 −14],   K* = [−1 0.5; 1 −2].
Thus, based on the above description, all the synchronization criteria in Theorem 4.20 are satisfied. Now let the arbitrary initial states of the two coupled DNNs and the unknown parameters in (4.84) be as follows:
x1 (t) = 0.6, x2 (t) = 0.7; y1 (t) = −1.7, y2 (t) = 3.8; ∀t ∈ [−1, 0],
ā22 (0) = 3.9, b̄22 (0) = −1.8, w̄22 (0) = −0.4.
[Figures: state and error curves of the coupled DNNs over t ∈ [0, 20]]

The simulation results show that the two coupled DNNs have achieved synchronization. Finally, Fig. 4.14 indicates that
all the unknown parameters in the slave system can be identified at the same time,
when the synchronization is achieved. Thus, we can conclude that our research in
the synchronization of neural networks with mixed time-varying delays is useful and
meritable.
4.3.5 Conclusion
In this section, the synchronization problem of two coupled DNNs with mixed time-
varying delays has been studied based on parameter identification and via output
coupling. Several sufficient and less restrictive conditions ensuring global
synchronization have been derived on the basis of the Lyapunov-Krasovskii
functional and some estimation methods. In particular, both discrete and distributed
time-varying delays have been introduced to model a more practical system,
and, via output coupling, a general and novel delayed feedback controller has been
proposed. Moreover, the parameters in the slave system have been estimated through
the simulations, so the feasibility of the theoretical results has been verified.
Finally, the results appear applicable to practical problems in this area.
4.4.1 Introduction
As is well known, stochastic delay neural networks (SDNNs) with Markovian switch-
ing have played an important role in science and engineering owing to their many
practical applications, including image processing, pattern recognition, associative
memory, and optimization problems. In the past several decades, the characteristics
of SDNNs with Markovian switching, such as their various stability properties, have
attracted much attention from scholars in many fields of nonlinear science. Z.D. Wang et al.
considered exponential stability of delayed recurrent neural networks with Markov-
ian jumping parameters [43]. W. Zhang, Y. Tang and J. Fang investigated stochastic
stability of Markovian jumping genetic regulatory networks with mixed time delays
[59]. H. Huang et al. investigated robust stability of stochastic delayed additive
neural networks with Markovian switching [13]. Researchers have presented a number
of sufficient conditions and proved the global asymptotic stability and exponential
stability of SDNNs with Markovian switching (see, e.g., [27, 46, 49, 63] and the
references therein). The method most extensively used in recent publications is the
LMI approach.
In recent years, the synchronization of coupled neural networks has received
much attention for its potential applications, such as parallel recognition
and secure communication [10, 24]. Therefore, the investigation of
synchronization for SDNNs is of great significance and some stochastic synchro-
nization results have been investigated. In [19], an adaptive feedback controller is
designed to achieve complete synchronization of unidirectionally coupled delayed
neural networks with stochastic perturbation. In [31], via adaptive feedback control
techniques with suitable parameters update laws, several sufficient conditions are
derived to ensure lag synchronization of unknown delayed neural networks with or
without noise perturbation. In [6], a class of chaotic neural networks is discussed
and based on the Lyapunov stability method and the Halanay inequality lemma, a
delay-independent sufficient exponential synchronization condition is derived. The
simple adaptive feedback scheme has been used for the synchronization of neural
networks with or without time-varying delay in [3]. Tang and Fang in [34] intro-
duced a general model of an array of N linearly coupled delayed neural networks
with Markovian jumping hybrid coupling and by adaptive approach, some sufficient
criteria have been derived to ensure the synchronization in an array of jump neural
networks with mixed delays and hybrid coupling in mean square.
4.4 Adaptive a.s. Asymptotic Synchronization of SDNN … 129
In this section, we consider the neural network, called the drive system, represented
in the following compact form:
d x(t) = [−C(r (t))x(t) + A(r (t)) f (x(t)) + B(r (t)) f (x(t − τ (t))) + D(r (t))]dt,
(4.85)
where t ≥ 0 is the time, x(t) = (x1 (t), x2 (t), . . . , xn (t))T ∈ Rn is the state vector
associated with n neurons, f (x(t)) = ( f 1 (x1 (t)), f 2 (x2 (t)), . . . , f n (xn (t)))T ∈ Rn
denotes the activation functions of the neurons, τ (t) is the transmission delay satis-
fying that 0 < τ (t) ≤ τ̄ and τ̇ (t) ≤ τ̂ < 1, where τ̄ , τ̂ are constants. {r (t)}t≥0
is a Markov chain taking values in a finite state space S = {1, 2, . . . , S}. As a
matter of convenience, for t ≥ 0, we denote r (t) = i and A(r (t)) = Ai , B(r (t)) =
B i , C(r (t)) = C i , D(r (t)) = D i respectively. In model (4.85), furthermore, ∀i ∈ S,
C i = diag {c1i , c2i , . . . , cni } (i.e. C i is a diagonal matrix) has positive and unknown
entries cki > 0, Ai = (a ijk )n×n and B i = (bijk )n×n are the connection weight and
the delay connection weight matrices, respectively. D i = (d1i , d2i , . . . , dni )T ∈ Rn is
the constant external input vector.
For the drive systems (4.85), a response system is constructed as follows:
dy(t) = [−C(r (t))y(t) + A(r (t)) f (y(t)) + B(r (t)) f (y(t − τ (t))) + D(r (t)) + U (t)]dt
+ σ(t, r (t), y(t) − x(t), y(t − τ (t)) − x(t − τ (t)))dω(t),
(4.86)
where y(t) is the state vector of the response system (4.86), U (t) = (u 1 (t), u 2 (t),
. . . , u n (t))T ∈ Rn is a control input vector with the form of
U (t) = K (t)(y(t) − x(t)) = diag {k1 (t), k2 (t), . . . , kn (t)}(y(t) − x(t)), (4.87)
The noise term σ(t, r (t), y(t) − x(t), y(t − τ (t)) − x(t − τ (t)))dω(t) can
be regarded as a result of the occurrence of external random fluctuations and other
probabilistic causes.
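The controller (4.87), combined with the gain update law (4.91) given later, can be sketched in discrete time as follows; q_i, α_j, the step size, and the synthetic decaying error trajectory are illustrative choices of ours, not values from the text.

```python
import numpy as np

# Discrete-time sketch of the adaptive controller (4.87) with the gain update
# law (4.91): k_j' = -q_i * alpha_j * e_j^2. Here q_i = 1 and alpha_j = 1.
h = 0.001                        # Euler step (our choice)
alpha = np.array([1.0, 1.0])
qi = 1.0
k = np.zeros(2)                  # time-varying gains k_1(t), k_2(t)

def control(k, y, x):
    """U(t) = K(t)(y(t) - x(t)) with K(t) = diag(k_1(t), ..., k_n(t))."""
    return k * (y - x)

# drive the gain update with a synthetic exponentially decaying error e(t)
e0 = np.array([1.5, -0.8])
for step in range(5000):
    e = e0 * np.exp(-h * step)
    k += h * (-qi * alpha * e ** 2)   # Euler step of (4.91)

print(k)
```

Each gain decreases monotonically and settles near −α_j∫e_j² ds ≈ −e_j(0)²/2 as the error decays, which is the qualitative behavior the adaptive scheme relies on.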
Let e(t) = y(t) − x(t). For simplicity, we denote e(t − τ (t)) = eτ (t)
and f (x(t) + e(t)) − f (x(t)) = g(e(t)). From the drive system (4.85) and the
response system (4.86), the error system can be represented as follows:
The initial condition associated with system (4.88) is given in the following form:
e(s) = ξ(s), s ∈ [−τ̄ , 0],
for any ξ ∈ L2F0 ([−τ̄ , 0], Rn ), where L2F0 ([−τ̄ , 0], Rn ) is the family of all F0 -
measurable C([−τ̄ , 0]; Rn )-value random variables satisfying that sup−τ̄ ≤s≤0
E|ξ(s)|2 < ∞, and C([−τ̄ , 0]; Rn ) denotes the family of all continuous Rn -valued
functions ξ(s) on [−τ̄ , 0] with the norm ‖ξ‖ = sup−τ̄ ≤s≤0 |ξ(s)|.
To obtain the main result, we need the following assumptions.
Assumption 4.26 The activation functions of the neurons f (x(t)) satisfy the Lip-
schitz condition; that is, there exists a constant L > 0 such that

| f (u) − f (v)| ≤ L|u − v|, ∀u, v ∈ Rn .
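This assumption can be spot-checked numerically for the activation function used in the examples of this chapter, f = tanh, which satisfies the condition with L = 1 (since |tanh′| ≤ 1); the sampling range and seed below are arbitrary.

```python
import numpy as np

# Spot-check of the Lipschitz condition |f(u) - f(v)| <= L|u - v| for f = tanh
# with L = 1, on random sample pairs.
rng = np.random.default_rng(0)
u = rng.uniform(-5.0, 5.0, 10000)
v = rng.uniform(-5.0, 5.0, 10000)
m = u != v                                    # avoid division by zero
ratio = np.abs(np.tanh(u[m]) - np.tanh(v[m])) / np.abs(u[m] - v[m])
print(ratio.max())   # stays at or below L = 1 (up to rounding)
```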
Assumption 4.27 The noise intensity matrix σ(·, ·, ·, ·) satisfies the linear growth
condition; that is, there exist two positive constants H1 and H2 such that

trace(σ T (t, r (t), u(t), v(t))σ(t, r (t), u(t), v(t))) ≤ H1 |u(t)|2 + H2 |v(t)|2 .
Assumption 4.28 f (0) ≡ 0, σ(t, r0 , 0, 0) ≡ 0.
Remark 4.29 Under Assumptions 4.26–4.28, the error system (4.88) admits an equi-
librium point (or trivial solution) e(t, ξ), t ≥ 0.
The following stability concept and synchronization concept are needed in this
section.
Definition 4.30 The trivial solution e(t, ξ) of the error system (4.88) is said to be
almost surely asymptotically stable if

lim t→∞ e(t, ξ) = 0 a.s.,

for any ξ ∈ L2F0 ([−τ̄ , 0]; Rn ).
The response system (4.86) and the drive system (4.85) are said to be almost surely
asymptotically synchronized, if the error system (4.88) is almost surely asymptoti-
cally stable.
The main purpose of the rest of this section is to establish a criterion for adaptive
almost surely asymptotic synchronization of the drive system (4.85) and the response
system (4.86) by using adaptive feedback control and M-matrix techniques.
Consider an n-dimensional stochastic delay differential equation (SDDE, for
short) with Markovian switching
d x(t) = f (t, r (t), x(t), xτ (t))dt + g(t, r (t), x(t), xτ (t))dω(t) (4.89)
| f (t, i, x, y) − f (t, i, x̄, ȳ)| + |g(t, i, x, y) − g(t, i, x̄, ȳ)| ≤ L h (|x − x̄| + |y − ȳ|)
where q̄ = max i∈S qi .
Under Assumptions 4.26–4.28, the noise-perturbed response system (4.86) can be
adaptive almost surely asymptotically synchronized with the delay neural network
(4.85), if the feedback control gain K (t) of the controller (4.87) is chosen with the
update law
k̇ j = −qi α j e2j , (4.91)
Proof Under Assumptions 4.26–4.28, it can be seen that the error system (4.88)
satisfies Assumption 4.31.
V (t, i, e) = qi |e|2 + Σ_{j=1}^{n} (1/α j ) k 2j .
− e T C i e ≤ −γ|e|2 , (4.93)
and
where m = −(ηqi + Σ_{k=1}^{S} γik qk ), by (q1 , q2 , . . . , q S )T = M −1 m⃗ .
Let w1 (e) = m|e|2 , w2 (eτ ) = (L 2 + H2 )q̄|eτ |2 . Then inequalities (1.14) and
(1.15) hold by using (4.90), where γ(t) = 0 in (1.14). By Lemma 1.9, the error system
(4.88) is adaptive almost surely asymptotically stable, and hence the noise-perturbed
response system (4.86) can be adaptive almost surely asymptotically synchronized
with the drive delay neural network (4.85). This completes the proof.
Remark 4.33 In Theorem 4.32, the condition (4.90) for adaptive almost surely
asymptotic synchronization of SDNNs with Markovian switching, obtained by using
the M-matrix and Lyapunov functional methods, is generator-dependent and quite
different from other conditions, such as those of the linear matrix inequality method.
The condition is easy to check once the drive system and the response system are
given and the positive constant m is well chosen.
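The nonsingular M-matrix requirement itself can also be verified mechanically, using the classical characterization that a Z-matrix (non-positive off-diagonal entries) is a nonsingular M-matrix if and only if all its leading principal minors are positive. In the sketch below, Γ is borrowed from Example 4.36 and η = −3 is an assumed illustrative value standing in for the η of the theorem.

```python
import numpy as np

# M := -diag(eta, ..., eta) - Gamma, with Gamma from Example 4.36 and an
# illustrative eta = -3 (not a value from the text).
Gamma = np.array([[-1.2, 1.2], [0.5, -0.5]])
eta = -3.0
M = -eta * np.eye(2) - Gamma

def is_nonsingular_M_matrix(M):
    M = np.asarray(M, dtype=float)
    off_diag = M - np.diag(np.diag(M))
    if np.any(off_diag > 1e-12):          # must be a Z-matrix first
        return False
    n = M.shape[0]
    # positive leading principal minors
    return all(np.linalg.det(M[:j, :j]) > 0 for j in range(1, n + 1))

print(is_nonsingular_M_matrix(M))        # True for this eta
print(is_nonsingular_M_matrix(-M))       # False: -M is not even a Z-matrix
```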
Now, we are in a position to consider two special cases of the drive system (4.85)
and the response system (4.86).
Special case 1 The Markovian jumping parameters are removed from the neural
networks (4.85) and the response system (4.86). In this case, S = 1 and the drive
system, the response system and the error system can be represented, respectively,
as follows
and
de(t) = [−Ce(t) + Ag(e(t)) + Bg(eτ (t)) + U (t)]dt + σ(t, e(t), eτ (t))dω(t). (4.100)
For this case, one can get the following result analogous to Theorem 4.32.
Assume that
η < 0,
and
L 2 + H2 < −η. (4.101)
V (t, e) = |e|2 + Σ_{j=1}^{n} (1/α j ) k 2j .
The rest proof is similar to that of Theorem 4.32, and hence omitted.
Special case 2 The noise-perturbation is removed from the response system (4.86),
which yields the noiseless response system
dy(t) = [−Ĉ(r (t))y(t) + Â(r (t)) f (y(t)) + B̂(r (t)) f (y(t − τ (t))) + D(r (t)) + U (t)]dt
(4.103)
and the error system
de(t) = [−C(r (t))e(t) + A(r (t))g(e(t)) + B(r (t))g(eτ (t)) + U (t)]dt, (4.104)
respectively.
In this case, one can lead to the following results.
Corollary 4.35 Assume that M := −diag {η, η, . . . , η} − Γ (with S diagonal entries
η) is a nonsingular M-matrix, where
η = −2γ + α + L 2 + β,
and where q̄ = max i∈S qi .
Under Assumptions 4.26–4.28, the noiseless response system (4.103)
can be adaptive almost surely asymptotically synchronized with the unknown drive
delay neural network (4.85), if the feedback gain K (t) of the controller (4.87) is
chosen with the update law
k̇ j = −qi α j e2j , (4.106)
V (t, i, e) = qi |e|2 + Σ_{j=1}^{n} (1/α j ) k 2j .
The rest proof is similar to that of Theorem 4.32, and hence omitted.
Example 4.36 Consider a delay neural network (4.85), and its response system
(4.86) with Markovian switching and the following network parameters:
C1 = [2, 0; 0, 2.4], C2 = [1.5, 0; 0, 1], A1 = [3.2, −1.5; −2.7, 3.2], A2 = [2.1, −0.6; −0.8, 3.2],
B1 = [2.7, −3.1; 0, 2.3], B2 = [−1.4, −2.1; 0.3, 1.5], D1 = [0.4, 0.5]T , D2 = [0.4, 0.6]T ,
Γ = [−1.2, 1.2; 0.5, −0.5].
It can be checked that Assumptions 4.26–4.28 and the inequality (4.90) are satis-
fied and the matrix M is a nonsingular M-matrix. So the noise-perturbed response
system (4.86) can be adaptive almost surely asymptotically synchronized with the
drive delay neural network (4.85) by Theorem 4.32.
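The Markovian switching signal r(t) that drives such a simulation can be sampled from the generator Γ of this example in the standard way: the chain stays in state i for an exponential holding time with rate −Γ_ii and then jumps to the other state (there are only two). The seed and horizon below are illustrative choices of ours.

```python
import numpy as np

# Sampling the two-state switching signal r(t) from the generator of
# Example 4.36.
rng = np.random.default_rng(1)
Gamma = np.array([[-1.2, 1.2], [0.5, -0.5]])

def sample_switching_signal(T, r0=0):
    """Return the jump epochs and visited states of r(t) on [0, T]."""
    t, r = 0.0, r0
    times, states = [0.0], [r0]
    while t < T:
        t += rng.exponential(1.0 / -Gamma[r, r])   # holding time in state r
        r = 1 - r                                  # jump to the other state
        times.append(t)
        states.append(r)
    return times, states

times, states = sample_switching_signal(50.0)
print(len(times))   # number of recorded jump epochs (depends on the seed)
```

The piecewise-constant path (times, states) is then held fixed while the drive and response dynamics are integrated between jumps.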
The simulation results are given in Figs. 4.15 and 4.16. Among them, Fig. 4.15
shows the state response of the error system, e1 (t), e2 (t), and Fig. 4.16 shows the
feedback gains k1 , k2 . From these simulations, one can see that the stochastic delay
neural networks with Markovian switching are adaptive almost surely asymptotically
synchronized.
4.4.5 Conclusions
In this section, an adaptive feedback controller has been designed so that the response
system can be adaptive almost surely asymptotically synchronized
with the drive delay neural networks with Markovian switching. The method used to
obtain the sufficient condition for adaptive synchronization of the neural networks is
different from the linear matrix inequality technique. The condition obtained in
this section depends on the generator of the Markovian jumping models and
can be easily checked. An extensive simulation has been provided to demonstrate the
effectiveness of our theoretical results and analytical tools.
4.5.1 Introduction
In this section, we consider the neural network, called the drive system, represented
in the following compact form:
where t ≥ 0 (or t ∈ R+ , the set of all nonnegative real numbers) is the time
variable, x(t) = (x1 (t), x2 (t), . . . , xn (t))T ∈ Rn is the state vector associated with
n neurons, f (x(t)) = ( f 1 (x1 (t)), f 2 (x2 (t)), . . . , f n (xn (t)))T ∈ Rn denotes the
activation function of the neurons, τ (t) is the transmission delay satisfying that 0 <
τ (t) ≤ τ̄ and τ̇ (t) ≤ τ̂ < 1, where τ̄ , τ̂ are constants. As a matter of convenience,
for t ≥ 0, we denote r (t) = i and A(r (t)) = Ai , B(r (t)) = B i , C(r (t)) = C i
and D(r (t)) = D i respectively. In the drive system (4.107), furthermore, ∀i ∈ S,
C i = diag {c1i , c2i , . . . , cni } has positive and unknown entries cki > 0, Ai = (a ijk )n×n
and B i = (bijk )n×n are the connection weight and the delayed connection weight
4.5 Adaptive pth Moment Exponential Synchronization of SDNN … 139
matrices, respectively, and are both unknown matrices. D i = (d1i , d2i , . . . , dni )T ∈ Rn
is the constant external input vector.
For the drive systems (4.107), a response system is constructed as follows:
where y(t) is the state vector of the response system (4.108), Ĉ i = diag {ĉ1i , ĉ2i , . . . ,
ĉni }, Âi = (â ijk )n×n and B̂ i = (b̂ijk )n×n are the estimations of the unknown matrices
C i , Ai , and B i , respectively, U (t) = (u 1 (t), u 2 (t), . . . , u n (t))T ∈ Rn is a control
input vector with the form of
where C̃(r (t)) = Ĉ(r (t)) − C(r (t)), Ã(r (t)) = Â(r (t)) − A(r (t)) and B̃(r (t)) =
B̂(r (t)) − B(r (t)). Denote c̃ij = ĉij − cij , ã ijk = â ijk − a ijk and b̃ijk = b̂ijk − bijk , then
C̃ i = diag {c̃1i , c̃2i , . . . , c̃ni }, Ãi = (ã ijk )n×n and B̃ i = (b̃ijk )n×n .
The initial condition associated with system (4.110) is given in the following
form:
e(s) = ξ(s), s ∈ [−τ̄ , 0],
for any ξ(s) ∈ L2F0 ([−τ̄ , 0], Rn ), where L2F0 ([−τ̄ , 0], Rn ) is the family of all
F0 -measurable C([−τ̄ , 0]; Rn )-value random variables satisfying that sup−τ̄ ≤s≤0
E|ξ(s)|2 < ∞, and C([−τ̄ , 0]; Rn ) denotes the family of all continuous Rn -valued
functions ξ(s) on [−τ̄ , 0] with the norm ‖ξ‖ = sup−τ̄ ≤s≤0 |ξ(s)|.
Assumption 4.38 The noise intensity matrix σ(·, ·, ·, ·) satisfies the linear growth
condition; that is, there exist two positive constants H1 and H2 such that

trace(σ T (t, r (t), u(t), v(t))σ(t, r (t), u(t), v(t))) ≤ H1 |u(t)|2 + H2 |v(t)|2 .
Remark 4.39 Under Assumptions 4.37 and 4.38, the error system (4.110) admits an
equilibrium point (or trivial solution) e(t, ξ(s)), t ≥ 0.
The following stability concept and synchronization concept are needed in this
section.
Definition 4.40 The trivial solution e(t, ξ(s)) of the error system (4.110) is said to
be exponentially stable in pth moment if

lim sup t→∞ (1/t) log(E|e(t, ξ(s))| p ) < 0,

for any ξ(s) ∈ L pF0 ([−τ̄ , 0]; Rn ), where p ≥ 2, p ∈ Z. When p = 2, it is said to
be exponentially stable in mean square.
The drive system (4.107) and the response system (4.108) are said to be exponen-
tially synchronized in pth moment if the error system (4.110) is exponentially stable
in pth moment.
The main purpose of the rest of this section is to establish a criterion of adaptive
exponential synchronization in pth moment of the system (4.107) and the response
system (4.108) by using adaptive feedback control and M-matrix techniques.
To this end, we introduce some concepts and lemmas which will be used in the
proofs of our main results.
Consider an n-dimensional stochastic delayed differential equation (SDDE, for
short) with Markovian switching
d x(t) = f (t, r (t), x(t), xτ (t))dt + g(t, r (t), x(t), xτ (t))dω(t) (4.111)
λ2 < λ1 (1 − τ̂ ), (4.112)
and
LV (t, i, x, xτ ) ≤ −λ1 |x| p + λ2 |xτ | p (4.114)
for all t ≥ 0, i ∈ S and x ∈ Rn (x = x(t) for short). Then the SDDE (4.111) is
exponentially stable in pth moment.
Proof For the function V (t, i, x), applying Lemma 1.5 and using the above condi-
tions, we obtain that
c1 E|x| p ≤ EV (0, r0 , ξ(0)) + E ∫_0^t LV (s, r (s), x(s), xτ (s))ds
≤ EV (0, r0 , ξ(0)) + E ∫_0^t (−λ1 |x| p + λ2 |xτ | p )ds.

For ∫_0^t |xτ | p ds, let u = s − τ (s); then du = (1 − τ̇ (s))ds and

∫_0^t |xτ | p ds = ∫_{−τ (0)}^{t−τ (t)} (1/(1 − τ̇ (s)))|x(s)| p ds
≤ (1/(1 − τ̂ )) ∫_{−τ̄}^{t} |x(s)| p ds
= (1/(1 − τ̂ )) ∫_{−τ̄}^{0} |x(s)| p ds + (1/(1 − τ̂ )) ∫_0^t |x(s)| p ds
≤ (τ̄ /(1 − τ̂ )) max −τ̄ ≤s≤0 |ξ(s)| p + (1/(1 − τ̂ )) ∫_0^t |x(s)| p ds.

So
E|x| p ≤ c + v ∫_0^t E|x| p ds,
where
c = (1/c1 )(EV (0, r0 , ξ(0)) + (λ2 τ̄ /(1 − τ̂ )) max −τ̄ ≤s≤0 E|ξ(s)| p ),
v = (−λ1 (1 − τ̂ ) + λ2 )/(c1 (1 − τ̂ )).
It can be seen that c and v are constants with c > 0 and v < 0. By using Gronwall's
inequality, we have
E|x| p ≤ c exp(vt).
Therefore,

lim sup t→∞ (1/t) log(E|x(t, ξ)| p ) ≤ v < 0.

Thus the SDDE (4.111) is exponentially stable in pth moment. This completes
the proof.
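The final Gronwall step can be illustrated numerically: for the extremal case u′ = vu with u(0) = c and v < 0, the Euler-discretized trajectory stays at or below the bound c·exp(vt). The constants c, v and the step size below are illustrative, not values from the proof.

```python
import numpy as np

# Numerical illustration of the Gronwall bound u(t) <= c * exp(v t) for the
# extremal case u' = v u, u(0) = c, with c > 0 and v < 0.
c, v, h, T = 2.0, -0.8, 1e-3, 10.0
n = int(T / h)
u = np.empty(n + 1)
u[0] = c
for j in range(n):
    u[j + 1] = u[j] + h * v * u[j]      # Euler step of u' = v u

t = np.linspace(0.0, T, n + 1)
bound = c * np.exp(v * t)
print(u[-1], bound[-1])   # both decay toward 0; Euler stays below the bound
```

This mirrors the conclusion of the lemma: the pth moment decays at least exponentially with rate |v|.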
Now we are in a position to set up a criterion of adaptive exponential synchroniza-
tion in pth moment for the drive system (4.107) and the response system (4.108).
where q̄ = max i∈S qi .
Under Assumptions 4.37 and 4.38, the noise-perturbed response system (4.108)
can be adaptively exponentially synchronized in pth moment with the drive neural net-
work (4.107), if the feedback gain K (t) of the controller (4.109) is chosen with the
update law
k̇ j = −(1/2)α j pqi |e| p−2 e2j , (4.116)
and the parameters update laws of the matrices Ĉ i , Âi and B̂ i are chosen as

ĉ˙ ij = (γ j /2) pqi |e| p−2 e j y j ,
â˙ ijl = −(α jl /2) pqi |e| p−2 e j fl ,
b̂˙ ijl = −(β jl /2) pqi |e| p−2 e j ( fl )τ , (4.117)

where α j > 0, γ j > 0, α jl > 0, and β jl > 0 ( j, l = 1, 2, . . . , n) are arbitrary
constants, respectively.
Now, using Assumptions 4.37 and 4.38 together with Lemma 1.13 yields
− e T C i e ≤ −γ|e|2 , (4.119)
and
(1/2)trace (σ T (t, i, e, eτ )( p( p − 1)qi |e| p−2 )σ(t, i, e, eτ ))
(4.122)
≤ (1/2) p( p − 1)qi |e| p−2 (H1 |e|2 + H2 |eτ |2 ).
|e| p−2 |eτ |2 ≤ (( p − 2)/p)|e| p + (2/p)|eτ | p . (4.123)
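Inequality (4.123) is the Young-type estimate used to split the cross term; a quick random check with p = 3 (the value used in Example 4.46):

```python
import numpy as np

# Random check of |e|^(p-2)*|e_tau|^2 <= ((p-2)/p)|e|^p + (2/p)|e_tau|^p.
rng = np.random.default_rng(2)
p = 3
a = rng.uniform(0.0, 10.0, 100000)   # stands for |e|
b = rng.uniform(0.0, 10.0, 100000)   # stands for |e_tau|
lhs = a ** (p - 2) * b ** 2
rhs = (p - 2) / p * a ** p + 2 / p * b ** p
print(bool(np.all(lhs <= rhs + 1e-9)))   # True
```

Equality is attained only when |e| = |eτ|, which matches the weighted AM-GM inequality behind (4.123).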
LV (t, i, e, eτ )
≤ [(1/2) p(−2γ + α + L 2 + β + ( p − 1)H1 )qi + Σ_{k=1}^{S} γik qk ]|e| p
+ (1/2) p(L 2 + ( p − 1)H2 )qi |e| p−2 |eτ |2
≤ [ηqi + Σ_{k=1}^{S} γik qk ]|e| p + (L 2 + ( p − 1)H2 )qi |eτ | p (4.124)
≤ −m|e| p + (L 2 + ( p − 1)H2 )q̄|eτ | p .
Remark 4.43 In Theorem 4.42, the condition (4.115) for adaptive exponential
synchronization of neural networks with Markovian switching, obtained by the
M-matrix approach, is mode-dependent and quite different from other conditions,
such as those of the linear matrix inequality method. The condition can be checked
once the drive system and the response system are given and the positive constant
m is well chosen.
Now, we are in a position to consider two special cases of the drive system (4.107)
and the response system (4.108).
Special case 1 The Markovian jumping parameters are removed from the neural
networks. That is to say, S = 1. For this case, one can get the following result
analogous to Theorem 4.42.
Corollary 4.44 Assume that η < 0 and L 2 + ( p − 1)H2 < −η(1 − τ̂ ), where
and the update laws of the parameters of matrices Ĉ, Â and B̂ are chosen as
ĉ˙ j = (γ j /2) p|e| p−2 e j y j ,
â˙ jl = −(α jl /2) p|e| p−2 e j fl ,
b̂˙ jl = −(β jl /2) p|e| p−2 e j ( fl )τ , (4.126)
where α j > 0, γ j > 0, αjl > 0 and βjl > 0 ( j, l = 1, 2, . . . , n) are arbitrary
constants, respectively.
and

L 2 q̄ < −(ηqi + Σ_{k=1}^{S} γik qk )(1 − τ̂ ), ∀i ∈ S, (4.127)
where q̄ = max i∈S qi .
Under Assumption 4.37, the noiseless response system can be adaptively
exponentially synchronized in pth moment with the drive neural network, if the feedback
gain K (t) of the controller (4.109) is chosen with the update law (4.116)
and the parameters update laws of the matrices Ĉ i , Âi and B̂ i are chosen as (4.117).
Proof The proof is similar to that of Theorem 4.42, and hence omitted.
In this subsection, we present an example to illustrate the usefulness of the main
results obtained in this section. Adaptive exponential stability in pth moment is
examined for given stochastic delayed neural networks with Markovian jumping
parameters.
Example 4.46 Consider the delayed neural networks (4.107) with Markovian switch-
ing, the response stochastic delayed neural networks (4.108) with Markovian switch-
ing, and the error system (4.110) with the network parameters given as follows:
C1 = [2.1, 0; 0, 2.8], C2 = [2.5, 0; 0, 2.2], A1 = [1.2, −1.5; −1.7, 1.2],
A2 = [1.1, −1.6; −1.8, 1.2], B1 = [0.7, −0.2; 0, 0.3], B2 = [−0.4, −0.1; −0.3, 0.5],
D1 = D̂1 = [0.6, 0.1]T , D2 = D̂2 = [0.8, 0.2]T , Γ = [−0.12, 0.12; 0.11, −0.11],
α11 = α12 = α21 = α22 = β11 = β12 = β21 = β22 = 1,
σ(t, e(t), e(t − τ ), 1) = (0.4e1 (t − τ ), 0.5e2 (t))T ,
σ(t, e(t), e(t − τ ), 2) = (0.5e1 (t), 0.3e2 (t − τ ))T ,
p = 3, L = 1, f (x(t)) = tanh(x(t)), τ = 0.12.
It can be checked that Assumptions 4.37, 4.38, and the inequality (4.115) are
satisfied and the matrix M is a nonsingular M-matrix. So the noise-perturbed response
system (4.108) can be adaptive exponential synchronized in pth moment with the
drive neural network (4.107) by Theorem 4.42. The simulation results are given in
Figs. 4.17, 4.18, 4.19, 4.20 and 4.21. Among them, Fig. 4.17 shows the state response
of the error system, e1 (t), e2 (t). Figure 4.18 shows the feedback gains k1 , k2 . Figures 4.19,
4.20 and 4.21 show the trajectories generated by the parameter update laws for the
matrices Ĉ, Â and B̂, namely ĉ1 (t), ĉ2 (t), â11 (t), â12 (t), â21 (t), â22 (t), b̂11 (t),
b̂12 (t), b̂21 (t) and b̂22 (t). From the simulation figures, one can see that the stochastic
delayed neural networks with Markovian switching (4.107) and (4.108) achieve
adaptive exponential synchronization in pth moment.
4.5.5 Conclusions
In this section, we have dealt with the problem of mode- and delay-dependent
adaptive exponential synchronization in pth moment for neural networks with sto-
chastic delays and Markovian jumping parameters. We have removed the traditional
monotonicity and smoothness assumptions on the activation function. An M-matrix
approach has been developed to solve the problem addressed. The conditions for
the adaptive exponential synchronization in pth moment have been derived in terms
of some algebraic inequalities. These synchronization conditions are quite different
from those of the linear matrix inequality approach. Finally, a simple example has been
used to demonstrate the effectiveness of the main results obtained in this section.
References
1. J. Cao, P. Li, W. Wang, Global synchronization in arrays of delayed neural networks with
constant and delayed coupling. Phys. Lett. A 353(4), 318–325 (2006)
2. J. Cao, X. Li, Stability in delayed Cohen-Grossberg neural networks: LMI optimization
approach. Phys. D 212(1), 54–65 (2005)
3. J. Cao, J. Lu, Adaptive synchronization of neural networks with or without time-varying delays.
Chaos: Interdiscip. J. Nonlinear Sci. 16(1), 013133–013139 (2006)
4. J. Cao, L. Wang, Periodic oscillatory solution of bidirectional associative memory networks
with delays. Phys. Rev. E 61(2), 1825–1828 (2000)
5. J. Cao, Z. Wang, Y. Sun, Synchronization in an array of linearly stochastically coupled networks
with time-delays. Phys. A: Stat. Mech. Appl. 385(2), 718–728 (2007)
6. G. Chen, J. Zhou, Z. Liu, Classification of chaos in 3-d autonomous quadratic systems-I: basic
framework and methods. Int. J. Bifurc. Chaos 16(9), 2459–2479 (2006)
7. G.R. Chen, J. Zhou, Z.R. Liu, Global synchronization of coupled delayed neural networks and
applications to chaotic CNN model. Int. J. Bifurc. Chaos 14(7), 2229–2240 (2004)
8. M. Chen, D. Zhou, Synchronization in uncertain complex networks. Chaos: Interdiscip. J.
Nonlinear Sci. 16(1), 013101 (2006)
9. T. Chen, L. Wang, Power-rate global stability of dynamical systems with unbounded time-
varying delays. IEEE Trans. Circuits Syst. II: Express Briefs 54(8), 705–709 (2007)
10. M. Gilli, Strange attractors in delayed cellular neural networks. IEEE Trans. Circuits Syst. I:
Fundam. Theory Appl. 40(17), 849–853 (1993)
11. K. Gopalsamy, Stability of artificial neural networks with impulses. Appl. Math. Comput.
154(3), 783–813 (2004)
12. K. Gopalsamy, X. He, Delay-independent stability in bidirectional associative memory net-
works. IEEE Trans. Neural Netw. 5(6), 998–1002 (1994)
13. H. Huang, D.W.C. Ho, Y. Qu, Robust stability of stochastic delayed additive neural networks
with Markovian switching. Neural Netw. 20(7), 799–809 (2007)
14. H.R. Karimi, P. Maass, Delay-range-dependent exponential H∞ synchronization of a class of
delayed neural networks. Chaos, Solitons Fractals 41(3), 1125–1135 (2009)
15. J.H. Kim, C.H. Hyun, E. Kim, M. Park, Adaptive synchronization of uncertain chaotic systems
based on T-S fuzzy model. IEEE Trans. Fuzzy Syst. 15(3), 359–369 (2007)
16. B. Kosko, Adaptive bi-directional associative memories. Appl. Opt. 26(23), 4947–4960 (1987)
17. G.H. Li, Modified projective synchronization of chaotic system. Chaos, Solitons Fractals 32(5),
1786–1790 (2007)
18. P. Li, J. Cao, Z. Wang, Robust impulsive synchronization of coupled delayed neural networks
with uncertainties. Phys. A 373, 261–272 (2007)
19. X. Li, J. Cao, Adaptive synchronization for delayed neural networks with stochastic perturba-
tion. J. Frankl. Inst. 354(7), 779–791 (2008)
20. X. Liao, J. Yu, Qualitative analysis of bi-directional associative memory with time delay. Int.
J. Circuit Theory Appl. 26(3), 219–229 (1998)
21. B. Liu, X.Z. Liu, G.R. Chen, Robust impulsive synchronization of uncertain dynamical net-
works. IEEE Trans. Circuits Syst. I 52(7), 1431–1441 (2005)
22. Z.G. Liu, Global attractors of delayed BAM neural networks with reaction-diffusion terms. J.
Xiangnan Univ. 31(2), 5–11 (2010)
23. X. Lou, B. Cui, Synchronization of neural networks based on parameter identification and via
output or state coupling. J. Comput. Appl. Math. 222(2), 440–457 (2008)
24. H. Lu, Chaotic attractors in delayed neural networks. Phys. Lett. A 298(2–3), 109–116 (2002)
25. J. Lu, D.W.C. Ho, J. Cao, J. Kurths, Exponential synchronization of linearly coupled neural
networks with impulsive disturbances. IEEE Trans. Neural Netw. 22(2), 329–336 (2011)
26. Y. Lu, K. Yi, Adaptive projective synchronization of uncertain Rössler chaotic system. Comput.
Sci. 36(5), 91–193 (2009)
27. X. Mao, C. Yuan, Stochastic Differential Equations with Markovian Switching (Imperial Col-
lege Press, London, 2006)
28. M.J. Park, O. Kwon, J.H. Park, S.M. Lee, Simplified stability criteria for fuzzy Markovian
jumping Hopfield neural networks of neutral type with interval time-varying delays. Expert
Syst. Appl. 39(5), 5625–5633 (2012)
29. L.M. Pecora, T.L. Carroll, Synchronization in chaotic systems. Phys. Rev. Lett. 64(8), 821–824
(1990)
30. K. Sun, S. Qiu, L. Yin, Adaptive function projective synchronization and parameter identifica-
tion for chaotic systems. Inf. Control 39(3), 326–331 (2010)
31. Y. Sun, J. Cao, Adaptive lag synchronization of unknown chaotic delayed neural networks with
noise perturbation. Phys. Lett. A 364(3), 277–285 (2007)
32. Y. Sun, J. Cao, Z. Wang, Exponential synchronization of stochastic perturbed chaotic delayed
neural networks. Neurocomputing 70(13), 2477–2485 (2007)
33. Y. Sun, G. Feng, J. Cao, Stochastic stability of Markovian switching genetic regulatory net-
works. Phys. Lett. A 373(18), 1646–1652 (2009)
34. Y. Tang, J. Fang, Adaptive synchronization in an array of chaotic neural networks with mixed
delays and jumping stochastically hybrid coupling. Commun. Nonlinear Sci. Numer. Simul.
14(9), 3615–3628 (2009)
35. F. Wang, H.Y. Wu, Existence and stability of periodic solution for BAM neural networks.
Comput. Eng. Appl. 46(24), 15–18 (2010)
36. K. Wang, Z. Teng, H. Jiang, Adaptive synchronization of neural networks with time-varying
delay and distributed delay. Phys. A: Stat. Mech. Appl. 387(2–3), 631–642 (2008)
37. W. Wang, J. Cao, Synchronization in an array of linearly coupled networks with time-varying
delay. Phys. A 366, 197–211 (2006)
38. X.Y. Wang, Q.A. Zhao, Class of uncertain delayed neural network adaptive projection syn-
chronization. Acta Phys. Sin. 57(5) (2008)
39. Z. Wang, J. Fang, X. Liu, Global stability of stochastic high-order neural networks with discrete
and distributed delays. Chaos, Solitons Fractals 36(2), 388–396 (2008)
40. Z. Wang, D.W.C. Ho, X. Liu, State estimation for delayed neural networks. IEEE Trans. Neural
Netw. 16(1), 279–284 (2005)
41. Z. Wang, S. Lauria, J. Fang, X. Liu, Exponential stability of uncertain stochastic neural networks
with mixed time-delays. Chaos, Solitons Fractals 32(1), 62–72 (2007)
42. Z. Wang, Y. Liu, F. Karl, X. Liu, Stochastic stability of uncertain Hopfield neural networks
with discrete and distributed delays. Phys. Lett. A 354(4), 288–297 (2006)
43. Z. Wang, Y. Liu, L. Liu, X. Liu, Exponential stability of delayed recurrent neural networks
with Markovian jumping parameters. Phys. Lett. A 356(4), 346–352 (2006)
44. Z. Wang, Y. Liu, X. Liu, On global asymptotic stability of neural networks with discrete and
distributed delays. Phys. Lett. A 345(4), 299–308 (2005)
45. Z. Wang, Y. Liu, X. Liu, Exponential stabilization of a class of stochastic system with Markovian
jump parameters and mode-dependent mixed time-delays. IEEE Trans. Autom. Control 55(7),
1656–1662 (2010)
46. Z. Wang, Y. Liu, G. Wei, X. Liu, A note on control of discrete-time stochastic systems with
distributed delays and nonlinear disturbances. Automatica 46(3), 543–548 (2010)
47. Z. Wang, H. Shu, J. Fang, X. Liu, Robust stability for stochastic Hopfield neural networks with
time delays. Nonlinear Anal.: Real World Appl. 7(5), 1119–1128 (2006)
48. Z. Wang, H. Shu, Y. Liu, D.W.C. Ho, X. Liu, Robust stability analysis of generalized neural
networks with discrete and distributed time delays. Chaos, Solitons Fractals 30(4), 886–896
(2006)
49. Z.D. Wang, D.W.C. Ho, Y.R. Liu, X.H. Liu, Robust H∞ control for a class of nonlinear discrete
time-delay stochastic systems with missing measurements. Automatica 45(3), 1–8 (2010)
50. C.W. Wu, Synchronization in array of coupled nonlinear system with delay and nonreciprocal
time-varying coupling. IEEE Trans. Circuits Syst. 52(5), 282–286 (2005)
51. H.J. Xiang, Exponential stability of fuzzy BAM neural networks with diffusion. J. Xiangnan
Univ. 31(2), 12–19 (2010)
52. D. Xu, Z. Li, Controlled projective synchronization in nonpartially-linear chaotic systems. Int.
J. Bifurc. Chaos 12(06), 1395–1402 (2002)
53. L.X. Yang, W.S. He, X.J. Liu, H.B. Chen, Improved full state hybrid projective synchronization
in autonomous chaotic systems. J. Xianyang Norm. Univ. 25(2), 28–30 (2010)
54. W. Yu, J. Cao, Synchronization control of stochastic delayed neural networks. Phys. A 373,
252–260 (2007)
55. C. Yuan, X. Mao, Robust stability and controllability of stochastic differential delay equations
with Markovian switching. Automatica 40(3), 343–354 (2004)
56. H. Zhang, Y. Wang, D. Liu, Delay-dependent guaranteed cost control for uncertain stochastic
fuzzy systems with multiple time delays. IEEE Trans. Syst., Man Cybern., Part B 38(1), 125–
140 (2008)
57. J. Zhang, Y. Yang, Global stability analysis of bidirectional associative memory neural networks
with time delay. Int. J. Circuit Theory Appl. 29(2), 185–196 (2001)
58. L. Zhang, E. Boukas, Stability and stabilization of Markovian jump linear systems with partly
unknown transition probabilities. Automatica 45(2), 463–468 (2009)
59. W. Zhang, Y. Tang, J. Fang, Stochastic stability of Markovian jumping genetic regulatory
networks with mixed time delays. Appl. Math. Comput. 217(17), 7210–7225 (2011)
60. H. Zhao, S. Xu, Y. Zou, Robust H∞ filtering for uncertain Markovian jump systems with
mode-dependent distributed delays. Int. J. Adapt. Control Signal Process. 24(1), 83–94 (2010)
61. J. Zhou, T. Chen, L. Xiang, Chaotic lag synchronization of coupled delayed neural networks
and its applications in secure communication. Circuits, Syst., Signal Process. 24(5), 599–613
(2005)
62. J. Zhou, T. Chen, L. Xiang, Robust synchronization of delayed neural networks based on
adaptive control and parameters identification. Chaos, Solitons Fractals 27(4), 905–913 (2006)
63. W. Zhou, H. Lu, C. Duan, Exponential stability of hybrid stochastic neural networks with mixed
time delays and nonlinearity. Neurocomputing 72(13), 3357–3365 (2009)
64. W. Zhou, D. Tong, Y. Gao, C. Ji, H. Su, Mode and delay-dependent adaptive exponential syn-
chronization in pth moment for stochastic delayed neural networks with Markovian switching.
IEEE Trans. Neural Netw. Learn. Syst. 23(4), 662–668 (2012)
65. Q. Zhu, J. Cao, Adaptive synchronization under almost every initial data for stochastic neural
networks with time-varying delays and distributed delays. Commun. Nonlinear Sci. Numer.
Simul. 16(4), 2139–2159 (2011)
66. S. Zhu, Y. Shen, Passivity analysis of stochastic delayed neural networks with Markovian
switching. Neurocomputing 74(10), 1754–1761 (2011)
Chapter 5
Stability and Synchronization
of Neutral-Type Neural Networks
A system whose state is determined not only by the current and past states but also by the derivatives of the past states is called a neutral system. The problems of stability and synchronization of neutral neural networks play an important role within the broader study of neural networks. In this chapter, the robust stability of neutral neural networks is discussed first. Adaptive synchronization and projective synchronization of neutral neural networks are investigated in the following two sections. Exponential synchronization and exponential stability for neural networks of neutral type are discussed in the fourth and sixth sections, respectively. The issues of adaptive synchronization and adaptive asymptotic synchronization are addressed in the fifth and seventh sections.
5.1.1 Introduction
During the last few decades, neural networks (NNs) have attracted great attention due to their extensive applications in pattern recognition, signal processing, image processing, quadratic optimization, associative memories, and many other fields. A variety of NN models have been widely studied, such as Hopfield neural networks (HNNs), cellular neural networks (CNNs), Cohen-Grossberg neural networks (CGNNs), etc.
In some physical systems, the mathematical models are described by functional differential equations of neutral type, which depend on the delays of both the state and the state derivative. The practicality of neutral-type models has recently attracted researchers to investigate the stability and stabilization of neutral-type neural networks [5, 22, 23, 30, 35, 36, 39, 43, 82].
Consider the following neural networks of neutral type, which involve both discrete
and distributed time-varying delays, described by a differential equation:
$$
\begin{aligned}
\dot{u}_i(t) ={}& -(c_i + \Delta c_i(t))u_i(t) + \sum_{j=1}^{n}(a_{ij} + \Delta a_{ij}(t))g_j(u_j(t)) + \sum_{j=1}^{n}(b_{ij} + \Delta b_{ij}(t))g_j(u_j(t-\sigma(t))) \\
&+ \sum_{j=1}^{n}(d_{ij} + \Delta d_{ij}(t))\dot{u}_j(t-\sigma(t)) + \sum_{j=1}^{n}(e_{ij} + \Delta e_{ij}(t))\int_{t-\tau(t)}^{t} g_j(u_j(s))\,ds + J_i,
\end{aligned}
$$
5.1 Robust Stability of Neutral-Type NN with Mixed Time Delays 155
or equivalently,
$$
\begin{aligned}
d[u(t) - (D + \Delta D(t))u(t-\sigma(t))] ={}& \Big[-(C + \Delta C(t))u(t) + (A + \Delta A(t))g(u(t)) + (B + \Delta B(t))g(u(t-\sigma(t))) \\
&+ (E + \Delta E(t))\int_{t-\tau(t)}^{t} g(u(s))\,ds + J\Big]dt, \quad (5.1)
\end{aligned}
$$
where n is the number of neurons in the indicated neural network, u(t) = [u_1(t), u_2(t), …, u_n(t)]^T ∈ R^n is the neuron state vector at time t, J = [J_1, J_2, …, J_n]^T ∈ R^n is the external constant input, g(u(t)) = [g_1(u_1(t)), g_2(u_2(t)), …, g_n(u_n(t))]^T ∈ R^n is the activation function, and the delays σ(t) and τ(t) are time-varying continuous functions satisfying condition (5.2).
Throughout this section, we always assume that the activation functions are bounded and satisfy the Lipschitz condition, i.e., (H) there exist constants L_i > 0 such that |g_i(x) − g_i(y)| ≤ L_i|x − y| for any x, y ∈ R, i = 1, 2, …, n.
It is obvious that condition (H) implies that the activation functions are continuous but not necessarily monotonic. Consequently, system (5.1) has at least one equilibrium point according to Brouwer's fixed-point theorem.
Suppose u* = [u_1*, u_2*, …, u_n*]^T ∈ R^n is an equilibrium point of system (5.1). Let x(t) = u(t) − u*; then system (5.1) can be rewritten as system (5.5),
where x(t) = [x1 (t), x2 (t), . . . , xn (t)]T is the state vector of the transformed system,
f (x(t)) = [ f 1 (x1 (t)), f 2 (x2 (t)), . . . , f n (xn (t))]T with f i (xi (t)) = gi (xi (t)+u i∗ )−
gi (u i∗ ), (i = 1, 2, . . . , n).
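Condition (H) can be checked numerically for a concrete activation function. The sketch below assumes the common choice g(x) = tanh(x) (an illustration, not prescribed by the text), whose Lipschitz constant is L = 1, and confirms the bound on random point pairs:

```python
import numpy as np

# Numerical spot-check of the Lipschitz condition (H) for g(x) = tanh(x),
# whose Lipschitz constant is L = 1 since |tanh'(x)| <= 1.
rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, 1000)
y = rng.uniform(-5.0, 5.0, 1000)
mask = np.abs(x - y) > 1e-6            # avoid near-zero denominators
ratio = np.abs(np.tanh(x[mask]) - np.tanh(y[mask])) / np.abs(x[mask] - y[mask])
print(ratio.max() <= 1.0)
```

The same check applies to any candidate g_i: the empirical maximum of the difference quotient gives a lower estimate of the constant L_i appearing in (H).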
The equilibrium point of system (5.1) is robustly stable if and only if the origin of system (5.5) is robustly stable. As a result, it suffices to consider the robust stability of system (5.5).
In order to obtain a robust stability criterion for the delayed Hopfield neural network (5.5), we first deal with the asymptotic stability criterion for the nominal system of (5.5). If ΔC = 0, ΔA = 0, ΔB = 0, ΔD = 0, and ΔE = 0, then system (5.5) reduces to system (5.6).
Theorem 5.1 Suppose (H) holds. For any delays σ(t), τ(t) satisfying (5.2), system (5.6) is asymptotically stable if there exist positive definite matrices P, Q_2, Q_3, R and positive scalars ε_1, ε_2, ε_3, ε_4, ε_5, ε_6 such that the following LMI holds:
$$
\begin{bmatrix}
\Pi_{11} & \Pi_{12} & 0 & 0 & 0 & 0 & PA & PB & PE \\
* & \Pi_{22} & 0 & DPA & DPB & DPE & 0 & 0 & 0 \\
* & * & -Q_3 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & -\varepsilon_3 I & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & -\varepsilon_4 I & 0 & 0 & 0 & 0 \\
* & * & * & * & * & -\varepsilon_6 I & 0 & 0 & 0 \\
* & * & * & * & * & * & -\varepsilon_1 I & 0 & 0 \\
* & * & * & * & * & * & * & -\varepsilon_2 I & 0 \\
* & * & * & * & * & * & * & * & -\varepsilon_5 I
\end{bmatrix} < 0, \quad (5.7)
$$
where
$$
\Pi_{11} = -PC - C^T P^T + \tau Q_1 + Q_2 + \sigma^2 Q_3 + LRL + L(\varepsilon_1 + \varepsilon_3)L, \quad
\Pi_{12} = \tfrac{1}{2}(DPC + C^T P^T D^T), \quad
\Pi_{22} = L(\varepsilon_2 + \varepsilon_4)L - (1-\mu)Q_2.
$$
Then,
$$
\begin{aligned}
\dot V(t) \le{}& \cdots + x^T(t)\big(PA\varepsilon_1^{-1}A^T P^T + PB\varepsilon_2^{-1}B^T P^T + PE\varepsilon_5^{-1}E^T P^T\big)x(t) \\
&+ x^T(t-\sigma(t))\big(DPA\varepsilon_3^{-1}A^T P^T D^T + DPB\varepsilon_4^{-1}B^T P^T D^T + DPE\varepsilon_6^{-1}E^T P^T D^T\big)x(t-\sigma(t)) + \cdots
\end{aligned}
$$
and
Hence, V̇(t) < 0 when Σ < 0. Using Lemma 1.21, Σ < 0 is equivalent to Π < 0, where
$$
\Pi = \begin{bmatrix}
\Pi_{11} & \Pi_{12} & 0 & 0 & 0 & 0 & PA & PB & PE \\
* & \Lambda_{22} & 0 & DPA & DPB & DPE & 0 & 0 & 0 \\
* & * & -Q_3 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & -\varepsilon_3 I & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & -\varepsilon_4 I & 0 & 0 & 0 & 0 \\
* & * & * & * & * & -\varepsilon_6 I & 0 & 0 & 0 \\
* & * & * & * & * & * & -\varepsilon_1 I & 0 & 0 \\
* & * & * & * & * & * & * & -\varepsilon_2 I & 0 \\
* & * & * & * & * & * & * & * & -\varepsilon_5 I
\end{bmatrix},
$$
where Λ_{22} = L(ε_2 + ε_4)L − Q_2, and Π_{11} and Π_{12} are defined in Theorem 5.1.
Remark 5.3 For the case of σ(t) = σ, the delay-dependent stability criterion for neural networks of neutral type has been studied in [5, 23]; such criteria are less conservative than delay-independent ones when the delay is small.
If D = 0 for the nominal system (5.6), the following corollary can be easily
deduced.
Remark 5.5 For the case of D = 0, the system is no longer a neutral-type neural network. Neural networks with mixed time delays, which are not of neutral type, have been widely discussed, see e.g., [55].
If E = 0 for the nominal system (5.6), the following corollary can be easily
deduced.
Remark 5.7 For the case of E = 0, the system involves only the discrete time delay. Neutral-type neural networks with discrete time delays have been widely discussed, see e.g., [5, 22, 23, 30, 35, 36, 39, 43, 82].
Theorem 5.8 Suppose (H) holds. For any delays σ(t), τ(t) satisfying (5.2), system (5.5) is robustly stable if there exist positive definite matrices P, Q_2, Q_3, R and positive scalars ε_1, ε_2, ε_3, ε_4, ε_5, ε_6, φ_1, φ_2, φ_3, φ_4, φ_5, φ_6, φ_7, φ_8 such that the following LMI holds:
$$
\begin{bmatrix}
\Xi_{11} & \Xi_{12} & 0 & 0 & 0 & 0 & PA & PB & PE & \Xi_{1A} \\
* & \Pi_{22} & 0 & DPA & DPB & DPE & 0 & 0 & 0 & \Xi_{2A} \\
* & * & -Q_3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & \Xi_{44} & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & \Xi_{55} & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & * & \Xi_{66} & 0 & 0 & 0 & 0 \\
* & * & * & * & * & * & \Xi_{77} & 0 & 0 & 0 \\
* & * & * & * & * & * & * & \Xi_{88} & 0 & 0 \\
* & * & * & * & * & * & * & * & \Xi_{99} & 0 \\
* & * & * & * & * & * & * & * & * & \Xi_{AA}
\end{bmatrix} < 0, \quad (5.17)
$$
where Ξ_{1A} = [PH_1, H_4PH_1, 0, 0, 0, PH_2, PH_3, PH_5], Ξ_{2A} = [0, 0, H_4PH_2, H_4PH_3, H_4PH_5, 0, 0, 0], and Ξ_{AA} = diag(−φ_1I, −φ_2I, −φ_3I, −φ_4I, −φ_5I, −φ_6I, −φ_7I, −φ_8I).
$$
\begin{aligned}
\Omega ={}& \Omega_0 + \Delta_{11}^T F_4(t)F_2(t)\Delta_{12} + \Delta_{12}^T F_2^T(t)F_4^T(t)\Delta_{11} + \Delta_{21}^T F_4(t)F_3(t)\Delta_{22} + \Delta_{22}^T F_3^T(t)F_4^T(t)\Delta_{21} \\
&+ \Delta_{31}^T F_4(t)F_5(t)\Delta_{32} + \Delta_{32}^T F_5^T(t)F_4^T(t)\Delta_{31} + \Delta_{41}^T F_2(t)\Delta_{42} + \Delta_{42}^T F_2^T(t)\Delta_{41} \\
&+ \Delta_{51}^T F_3(t)\Delta_{52} + \Delta_{52}^T F_3^T(t)\Delta_{51} + \Delta_{61}^T F_5(t)\Delta_{62} + \Delta_{62}^T F_5^T(t)\Delta_{61} \\
<{}& 0, \quad (5.18)
\end{aligned}
$$
where
$$
\Omega_0 = \begin{bmatrix}
\Omega_{11} & \Omega_{12} & 0 & 0 & 0 & 0 & PA & PB & PE \\
* & \Pi_{22} & 0 & DPA & DPB & DPE & 0 & 0 & 0 \\
* & * & -Q_3 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & -\varepsilon_3 I & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & -\varepsilon_4 I & 0 & 0 & 0 & 0 \\
* & * & * & * & * & -\varepsilon_6 I & 0 & 0 & 0 \\
* & * & * & * & * & * & -\varepsilon_1 I & 0 & 0 \\
* & * & * & * & * & * & * & -\varepsilon_2 I & 0 \\
* & * & * & * & * & * & * & * & -\varepsilon_5 I
\end{bmatrix},
$$
with
$$
\Omega_{12} = \Pi_{12} + \tfrac{1}{2}\big[H_4F_4(t)G_4\,P\,H_1F_1(t)G_1 + G_1^T F_1^T(t)H_1^T P^T G_4^T F_4^T(t)H_4^T\big],
$$
$$
\begin{aligned}
\Delta_{11} &= [\,0,\ H_4PH_2,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0\,], & \Delta_{12} &= [\,0,\ 0,\ 0,\ G_4G_2,\ 0,\ 0,\ 0,\ 0,\ 0\,], \\
\Delta_{21} &= [\,0,\ H_4PH_3,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0\,], & \Delta_{22} &= [\,0,\ 0,\ 0,\ 0,\ G_4G_3,\ 0,\ 0,\ 0,\ 0\,], \\
\Delta_{31} &= [\,0,\ H_4PH_5,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0\,], & \Delta_{32} &= [\,0,\ 0,\ 0,\ 0,\ 0,\ G_4G_5,\ 0,\ 0,\ 0\,], \\
\Delta_{41} &= [\,PH_2,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0\,], & \Delta_{42} &= [\,0,\ 0,\ 0,\ 0,\ 0,\ 0,\ G_2,\ 0,\ 0\,], \\
\Delta_{51} &= [\,PH_3,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0\,], & \Delta_{52} &= [\,0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ G_3,\ 0\,], \\
\Delta_{61} &= [\,PH_5,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0\,], & \Delta_{62} &= [\,0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ G_5\,].
\end{aligned}
$$
From Lemma 1.22, we have
$$
\Omega_{11} < \Pi_{11} + \phi_1^{-1}PH_1H_1^TP^T + \phi_1 G_1^TG_1, \qquad
\Omega_{12} < \Pi_{12} + \tfrac{1}{2}\big[\phi_2^{-1}H_4PH_1H_1^TP^TH_4^T + \phi_2 G_1^TG_4^TG_4G_1\big],
$$
$$
\begin{aligned}
\Omega <{}& \Omega_0 + \phi_3^{-1}\Delta_{11}^T\Delta_{11} + \phi_3\Delta_{12}^T\Delta_{12} + \phi_4^{-1}\Delta_{21}^T\Delta_{21} + \phi_4\Delta_{22}^T\Delta_{22} + \phi_5^{-1}\Delta_{31}^T\Delta_{31} + \phi_5\Delta_{32}^T\Delta_{32} \\
&+ \phi_6^{-1}\Delta_{41}^T\Delta_{41} + \phi_6\Delta_{42}^T\Delta_{42} + \phi_7^{-1}\Delta_{51}^T\Delta_{51} + \phi_7\Delta_{52}^T\Delta_{52} + \phi_8^{-1}\Delta_{61}^T\Delta_{61} + \phi_8\Delta_{62}^T\Delta_{62}.
\end{aligned}
$$
Thus (5.18) holds if (5.17) holds. This completes the proof.
If E + ΔE(t) = 0 for the system (5.5), the following corollary can be easily
deduced.
Corollary 5.9 Suppose (H) holds. For given σ, system (5.5) with E + ΔE(t) = 0 is asymptotically stable if there exist positive definite matrices P, Q_2, Q_3, R and positive scalars ε_1, ε_2, ε_3, ε_4, φ_1, φ_2, φ_3, φ_4, φ_6, φ_7 such that the following LMI holds:
$$
\begin{bmatrix}
X_{11} & \Xi_{12} & 0 & 0 & 0 & PA & PB & X_{1A} \\
* & \Pi_{22} & 0 & DPA & DPB & 0 & 0 & X_{2A} \\
* & * & -Q_3 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & \Xi_{44} & 0 & 0 & 0 & 0 \\
* & * & * & * & \Xi_{55} & 0 & 0 & 0 \\
* & * & * & * & * & \Xi_{77} & 0 & 0 \\
* & * & * & * & * & * & \Xi_{88} & 0 \\
* & * & * & * & * & * & * & X_{AA}
\end{bmatrix} < 0.
$$
Remark 5.10 For the case of E + ΔE(t) = 0, the system involves only the discrete time delay. The robust stability of neutral-type neural networks with discrete time delays has been studied previously, see e.g., [5, 36].
Example 5.11 An example illustrating the result in Theorem 5.1 is given below. Consider the delayed neural network (5.6) with parameters
$$
C = \begin{bmatrix} 1.2 & 0 \\ 0 & 1.8 \end{bmatrix},\quad
A = \begin{bmatrix} 1.2 & 0 \\ 0 & 1.2 \end{bmatrix},\quad
B = \begin{bmatrix} 1.5 & 0 \\ 0 & 1.5 \end{bmatrix},\quad
D = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.2 \end{bmatrix},\quad
E = \begin{bmatrix} 0.8 & 0 \\ 0 & 0.8 \end{bmatrix}.
$$
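For a neutral system such as (5.6), the neutral coefficient matrix D is typically required to be contractive, i.e., its spectral radius must be below one (compare Assumption 5.15 later in this chapter). A quick numerical check of this for the D of Example 5.11, using numpy (the check itself is not part of the original text):

```python
import numpy as np

# Matrix D from Example 5.11; its spectral radius should be less than one.
D = np.array([[0.2, 0.0],
              [0.0, 0.2]])
rho_D = max(abs(np.linalg.eigvals(D)))
print(rho_D < 1.0)   # spectral radius is 0.2
```

The same one-liner can be applied to the matrices of Example 5.12 or to any candidate neutral coefficient before attempting the LMI feasibility test.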
Example 5.12 Consider the delayed neural network in (5.5) with the following parameters:
$$
C = \begin{bmatrix} 1.2 & 0 \\ 0 & 1.8 \end{bmatrix},\quad
A = \begin{bmatrix} -1.2 & 0.2 \\ 0.26 & 0.1 \end{bmatrix},\quad
B = \begin{bmatrix} -0.1 & 0.2 \\ 0.2 & 0.1 \end{bmatrix},\quad
D = \begin{bmatrix} -0.2 & 0 \\ 0.2 & -0.1 \end{bmatrix},\quad
E = \begin{bmatrix} 0.8 & 0 \\ 0 & 0.8 \end{bmatrix},
$$
$$
L = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.8 \end{bmatrix},\quad
H_1 = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix},\quad
H_2 = \begin{bmatrix} 0.2 & 0.1 \\ 0 & 0.1 \end{bmatrix},\quad
H_3 = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix},\quad
H_4 = \begin{bmatrix} 0.2 & 0.1 \\ 0.3 & 0 \end{bmatrix},
$$
$$
H_5 = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix},\quad
E_1 = \begin{bmatrix} 0.2 & 0.1 \\ 0.2 & 0.1 \end{bmatrix},\quad
E_2 = \begin{bmatrix} 0.3 & 0.2 \\ 0.5 & 0.4 \end{bmatrix},\quad
E_3 = \begin{bmatrix} 0.3 & 0.2 \\ 0.1 & 0.3 \end{bmatrix},\quad
E_4 = \begin{bmatrix} 0.1 & 0.2 \\ 0.3 & 0.3 \end{bmatrix},\quad
E_5 = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.2 \end{bmatrix}.
$$
5.1.5 Conclusions
In this section, we have dealt with the problem of robust stability for neural networks of neutral type with discrete and distributed time-varying delays. A linear matrix inequality (LMI) approach has been developed to solve the problem. The stability criteria have been derived in terms of LMIs, and numerical examples have been given to demonstrate the applicability of the proposed stability criteria.
5.2.1 Introduction
…synchronization [9, 11, 62], lag synchronization [48, 64], phase synchronization [54], H∞ synchronization [12, 13], etc.
By utilizing the adaptive control method, the parameters of the system need to be estimated and the control law needs to be updated as the neural networks evolve. In the past decade, much attention has been devoted to research on adaptive synchronization for neural networks (see, e.g., [14, 19, 24, 44, 57, 68, 78, 85] and the references therein).
Also, it is noted that many neural networks may experience abrupt changes in their structure and parameters caused by phenomena such as component failures or repairs, changing subsystem interconnections, and abrupt environmental disturbances. In this situation, there exist finitely many modes in the neural networks, and the modes may switch (or jump) from one to another at different times. This kind of system has been widely studied by many scholars, see, e.g., [14, 19, 24, 33, 41, 42, 47, 50, 57, 60, 61, 63, 68, 81, 85] and the references therein.
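The mode switching described above is usually modeled by a continuous-time Markov chain r(t) with generator Γ = (γ_ik), where the off-diagonal entries are jump rates and each row sums to zero. A minimal simulation sketch (the two-mode generator below is illustrative, not taken from the text):

```python
import numpy as np

# Sample a right-continuous Markov chain r(t) from a generator matrix:
# hold in mode i for an Exp(-gamma_ii) time, then jump with probabilities
# proportional to the off-diagonal rates gamma_ik.
Gamma = np.array([[-3.0, 3.0],
                  [2.0, -2.0]])   # illustrative 2-mode generator

def simulate_modes(Gamma, t_end, rng):
    t, mode = 0.0, 0
    path = [(0.0, 0)]
    while t < t_end:
        rate = -Gamma[mode, mode]
        t += rng.exponential(1.0 / rate)           # holding time in `mode`
        jump_probs = np.clip(Gamma[mode], 0.0, None) / rate
        mode = int(rng.choice(len(Gamma), p=jump_probs))
        path.append((t, mode))
    return path

path = simulate_modes(Gamma, 10.0, np.random.default_rng(0))
```

Such a sampled mode path can then drive the mode-dependent matrices A(r(t)), B(r(t)), etc., in a simulation of the switching neural network.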
Recently, the stability and synchronization of neutral-type systems, which depend on the delays of both the state and the state derivative, have attracted a lot of attention (see, e.g., [16, 20, 32, 34, 38, 75] and the references therein), since some physical systems in the real world can be described by neutral-type models. For example, Rubanik [38] encountered a neutral differential delay equation in his study of vibrating masses attached to an elastic bar, in which [x(t) y(t)]^T denotes the position vector of the vibrating masses, f_i(·,·,·,·) (i = 1, 2) denote the relative functions, ω_1 and ω_2 are the position coefficients, ε is the structure coefficient, and γ_1 and γ_2 are the vibrating acceleration coefficients.
Li [20] investigated the global robust stability of stochastic interval neural networks with continuously distributed delays of neutral type using the Lyapunov-Krasovskii functional method, and presented some new stability criteria in terms of linear matrix inequalities (LMIs). Park [34] proposed a dynamic feedback control scheme to achieve synchronization for neural networks of neutral type and derived a simple and efficient criterion in terms of LMIs. Zhang et al. [75] considered the problem of robust global exponential synchronization for neutral-type complex networks with time-varying delayed couplings and, using the Lyapunov functional method and some properties of the Kronecker product, obtained some sufficient conditions ensuring that the complex networks are robustly globally exponentially synchronized. Also, Kolmanovskii et al. [16] not only established a fundamental theory for neutral-type stochastic differential delay equations with Markovian switching but also discussed some important properties of the solutions, e.g., boundedness and stability. The problem of almost sure asymptotic stability was considered by Mao et al. [32] for neutral-type stochastic differential delay equations with Markovian switching.
In this section, we study the problem of adaptive synchronization for neutral-type
stochastic delay neural networks (NSDNN, for short) with Markovian switching
5.2 Adaptive Synchronization of Neutral-Type SNN … 167
Consider the neutral-type neural network (5.19), called the drive system and represented in compact form,
where x(t) = [x1 (t), x2 (t), . . . , xn (t)]T ∈ Rn is the state vector associated with n
neurons, f (·) denotes the neuron activation functions, τ represents the transmission
delay. For t ≥ 0, we denote r (t) = i, A(r (t)) = Ai , B(r (t)) = B i , C(r (t)) = C i ,
D(r (t)) = D i , and E(r (t)) = E i , respectively. In neural network (5.19), ∀i ∈
S, A^i = (a^i_{jk})_{n×n} and B^i = (b^i_{jk})_{n×n} are the connection weight and the delay connection weight matrices, respectively; C^i = diag{c_1^i, c_2^i, …, c_n^i} is a diagonal matrix with positive and unknown entries c_j^i > 0; D^i is called the neutral-type
parameter matrix; and E i = [E 1i , E 2i , . . . , E ni ]T ∈ Rn is the constant external input
vector.
The initial condition of system (5.19) is given in the following form:
where y(t) = [y1 (t), y2 (t), . . . , yn (t)]T ∈ Rn is the state vector of the response
system (5.21), U i = U (r (t)) = [u i1 (t), u i2 (t), . . . , u in (t)]T ∈ Rn is a control input
vector, ω(t) = [ω1 (t), ω2 (t), . . . , ωn (t)]T is an n-dimensional Brownian motion
defined on a complete probability space (Ω, F, P) with a natural filtration {Ft }t≥0
(i.e., Ft = σ{ω(s) : 0 ≤ s ≤ t} is a σ-algebra) and is independent of the Markovian
process {r (t)}t≥0 , and σ : R+ × S × Rn × Rn → Rn×n is the noise intensity matrix.
It is known that external random fluctuation and other probabilistic causes often lead
to this type of stochastic perturbations.
The initial condition of system (5.21) is given in the following form:
with e(0) = 0.
For systems (5.19), (5.21), and (5.23), the following assumptions are needed.
Assumption 5.13 For the vector f (·), there exists a constant L > 0 such that
Assumption 5.14 For the matrix σ(t, i, u(i), v(i)), there exist two positive constants H_1 and H_2 such that
ρ(D i ) = κi ≤ κ,
Definition 5.17 ([33]) The trivial solution e(t; ξ, i_0) of the error system (5.23) is said to be exponentially stable in pth moment if
$$
\limsup_{t\to\infty} \frac{1}{t}\log\big(\mathbb{E}|e(t;\xi,i_0)|^p\big) < 0
$$
for any initial data ξ ∈ L^p_{F_0}([−τ, 0]; R^n), where p ≥ 2, p ∈ Z (the set of integers). When p = 2, it is said to be exponentially stable in mean square.
It is said to be almost surely exponentially stable if
$$
\limsup_{t\to\infty} \frac{1}{t}\log\big(|e(t;\xi,i_0)|\big) < 0 \quad \text{a.s.}
$$
Remark 5.18 It can be obtained from the proof of Lemma 1.10 in [32] that if we
replace (H1) by the following (H1) , then the results (R1) and (R2) are also satisfied.
Theorem 5.19 Let Assumptions 5.13–5.15 hold, and let the error system (5.23) have a unique solution denoted by e(t; ξ, i_0) on t ≥ 0 for any initial data {e(θ) : −τ ≤ θ ≤ 0} = ξ ∈ C^b_{F_0}([−τ, 0]; R^n) with e(0) = 0.
Assume that M := −diag{η, η, …, η} − Γ (with S copies of η) is a nonsingular M-matrix, where
$$
\eta = -2\varsigma + \alpha + \beta, \quad (5.26)
$$
with α = max_{i∈S}(ρ(A^i))², β = max_{i∈S}(ρ(B^i))², and ς a nonnegative real number, and
$$
2\gamma - \kappa^2 - C_0^2 - 2L^2 - H_1 - H_2 \ge 0, \quad (5.27)
$$
with
$$
\dot{k}_j = -\alpha_j q_i (e_j - D^i(e_\tau)_j)^2. \quad (5.29)
$$
Then the noise-perturbed response system (5.21) can be adaptively almost surely asymptotically synchronized with the time-delay neural network (5.19).
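The M-matrix condition in Theorem 5.19 can be tested numerically: a matrix with nonpositive off-diagonal entries (a Z-matrix) is a nonsingular M-matrix exactly when its inverse is entrywise nonnegative. A sketch with illustrative values of η and Γ (both assumed, not taken from the text):

```python
import numpy as np

# Build M = -diag{eta, ..., eta} - Gamma and test the nonsingular
# M-matrix property via the sign patterns of M and of its inverse.
eta = -3.0                          # assumed value of -2*varsigma + alpha + beta
Gamma = np.array([[-1.0, 1.0],
                  [0.5, -0.5]])     # assumed generator of the Markov chain
M = -np.diag([eta, eta]) - Gamma

def is_nonsingular_M_matrix(M, tol=1e-12):
    off_diag = M - np.diag(np.diag(M))
    if (off_diag > tol).any():      # must be a Z-matrix
        return False
    try:
        M_inv = np.linalg.inv(M)
    except np.linalg.LinAlgError:
        return False
    return bool((M_inv >= -tol).all())  # inverse entrywise nonnegative

print(is_nonsingular_M_matrix(M))
```

Once the test passes, the vector (q_1, …, q_S)^T = M^{-1} m⃗ used later in the proof is guaranteed to have positive entries.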
Proof Under Assumptions 5.13–5.14 and the existence of e(t; ξ, i_0), it can be seen that f̄(t, r(t), e(t), e_τ(t)), ḡ(t, r(t), e(t), e_τ(t)), and D̄(e_τ(t), r(t)) satisfy (H1)′, (H2), and (H3). Consider the Lyapunov function
$$
V(t, i, x) = q_i |x|^2 + \sum_{j=1}^{n} \frac{1}{\alpha_j} k_j^2. \quad (5.30)
$$
Then
$$
\begin{aligned}
\mathcal{L}V(t,i,e,e_\tau) ={}& V_t(t,i,e-D^ie_\tau) + V_x(t,i,e-D^ie_\tau)\big[-C^ie + A^ig(e) + B^ig(e_\tau) + U^i\big] \\
&+ \tfrac{1}{2}\operatorname{trace}\big[\sigma^T(t,i,e,e_\tau)V_{xx}(t,i,e-D^ie_\tau)\sigma(t,i,e,e_\tau)\big] + \sum_{k=1}^{S}\gamma_{ik}V(t,k,e-D^ie_\tau), \quad (5.31)
\end{aligned}
$$
while
$$
V_t(t,i,e-D^ie_\tau) = \sum_{j=1}^{n}\frac{2}{\alpha_j}k_j\dot{k}_j = -2\sum_{j=1}^{n}k_jq_i(e_j - D^i(e_\tau)_j)^2,
$$
so that
$$
\begin{aligned}
\mathcal{L}V(t,i,e,e_\tau) \le{}& -2\sum_{j=1}^{n}k_jq_i(e_j - D^i(e_\tau)_j)^2 + 2q_i(e-D^ie_\tau)^T\big[-C^ie + A^ig(e) + B^ig(e_\tau) + U^i\big] \\
&+ q_i(H_1|e|^2 + H_2|e_\tau|^2) + \sum_{k=1}^{S}\gamma_{ik}q_k|e-D^ie_\tau|^2 \\
\le{}& 2q_i(e-D^ie_\tau)^T\big[-C^ie + A^ig(e) + B^ig(e_\tau)\big] + q_i(H_1|e|^2 + H_2|e_\tau|^2) \\
&+ \Big(-2\varsigma q_i + \sum_{k=1}^{S}\gamma_{ik}q_k\Big)|e-D^ie_\tau|^2.
\end{aligned}
$$
Moreover,
$$
\begin{aligned}
-2q_i(e-D^ie_\tau)^T C^i e &= -2q_ie^TC^ie + 2q_ie_\tau^T D^{iT} C^i e \\
&\le -2q_i\gamma|e|^2 + q_i\big(e_\tau^TD^{iT}D^ie_\tau + e^TC^{iT}C^ie\big) \\
&\le q_i(-2\gamma + C_0^2)|e|^2 + q_i\kappa^2|e_\tau|^2, \quad (5.33)
\end{aligned}
$$
$$
\begin{aligned}
2q_i(e-D^ie_\tau)^T A^i g(e) &\le q_i(e-D^ie_\tau)^T A^iA^{iT}(e-D^ie_\tau) + q_ig^T(e)g(e) \\
&\le q_iL^2|e|^2 + q_i\alpha|e-D^ie_\tau|^2, \quad (5.34)
\end{aligned}
$$
and
$$
\begin{aligned}
2q_i(e-D^ie_\tau)^T B^i g(e_\tau) &\le q_i(e-D^ie_\tau)^T B^iB^{iT}(e-D^ie_\tau) + q_ig^T(e_\tau)g(e_\tau) \\
&\le q_iL^2|e_\tau|^2 + q_i\beta|e-D^ie_\tau|^2. \quad (5.35)
\end{aligned}
$$
Hence
$$
\begin{aligned}
\mathcal{L}V(t,i,e,e_\tau) \le{}& -q_i(2\gamma - C_0^2 - L^2 - H_1)|e|^2 + q_i(\kappa^2 + L^2 + H_2)|e_\tau|^2 \\
&+ \Big[q_i(-2\varsigma + \alpha + \beta) + \sum_{k=1}^{S}\gamma_{ik}q_k\Big]|e - D^ie_\tau|^2 \\
={}& -aq_i|e|^2 + bq_i|e_\tau|^2 + \Big[\eta q_i + \sum_{k=1}^{S}\gamma_{ik}q_k\Big]|e - D^ie_\tau|^2 \quad (5.36) \\
\le{}& -aq_i|e|^2 + aq_i|e_\tau|^2 - m|e - D^ie_\tau|^2 - (a-b)q_i|e_\tau|^2,
\end{aligned}
$$
where $m = -\big[\eta q_i + \sum_{k=1}^{S}\gamma_{ik}q_k\big]$ with $(q_1, q_2, \ldots, q_S)^T = M^{-1}\vec{m}$, $a = 2\gamma - C_0^2 - L^2 - H_1$, and $b = \kappa^2 + L^2 + H_2$.
From (5.27) and b > 0, we can see that a > 0 and a − b ≥ 0. So inequality (5.36) implies
$$
\mathcal{L}V(t,i,e,e_\tau) \le \gamma(t) - Q(t,e) + Q(t-\tau, e_\tau) - W(e - \bar{D}(e_\tau, i)). \quad (5.37)
$$
Remark 5.21 The M-matrix method used in Theorem 5.19 to study the adaptive synchronization of neutral-type stochastic neural networks with Markovian switching is rarely employed and quite different from other approaches, such as the LMI technique. This M-matrix method can also be used to study the stability and synchronization of complex networks.
Remark 5.22 On the stochastic synchronization problem for neural networks with time-varying delay and Markovian jump parameters, Wu et al. [63] proposed a new sampled-data method combined with a stochastic Lyapunov functional, designed a mode-independent state-feedback sampling controller, and gave some delay-dependent criteria ensuring stochastic synchronization using LMI techniques. The sampling controller designed in [63] is well suited to real applications. Compared with [63], the model considered in this section, which includes the variation of the time-delay state and the stochastic disturbance, is more general, and the synchronization conditions obtained by the M-matrix method can be checked easily.
$$
\lambda_2 < \lambda_1, \quad (5.38)
$$
$$
\limsup_{t\to\infty}\frac{1}{t}\log\big(\mathbb{E}|x(t)|^p\big) \le -\upsilon, \quad (5.41)
$$
where υ = (λ_1 − λ_2)/(μ_1(1 − κ)^p) > 0, i.e., the system (1.4) is exponentially stable in pth moment.
Proof For the function V(t, i, x), applying the generalized Itô formula (see Lemma 3.1 in [16]) and using the above conditions, we obtain
$$
\mu_1(1-\kappa)^{p-1}|x(t)|^p \le \mu_1|x(t) - \bar{D}(x_\tau(t), r(t))|^p + \mu_1(1-\kappa)^{p-1}\kappa|x_\tau(t)|^p. \quad (5.43)
$$
So
$$
\mu_1(1-\kappa)^{p-1}\mathbb{E}|x(t)|^p \le \mu_1\mathbb{E}|x(t) - \bar{D}(x_\tau(t), r(t))|^p + \mu_1(1-\kappa)^{p-1}\kappa\mathbb{E}|x_\tau(t)|^p, \quad (5.44)
$$
while
$$
\int_0^t |x_\tau(s)|^p\,ds = \int_{-\tau}^{t-\tau}|x(s)|^p\,ds \le \int_{-\tau}^{t}|x(s)|^p\,ds = \int_{-\tau}^{0}|x(s)|^p\,ds + \int_{0}^{t}|x(s)|^p\,ds \le \tau\max_{-\tau\le s\le 0}|\xi(s)|^p + \int_0^t |x(s)|^p\,ds.
$$
Then
$$
\begin{aligned}
\mu_1(1-\kappa)^{p-1}\mathbb{E}|x(t)|^p \le{}& V(0, r(0), x(0)-\bar D(x_\tau(0), r(0))) + \lambda_2\tau\max_{-\tau\le s\le 0}|\xi(s)|^p \\
&+ \mathbb{E}\int_0^t(\lambda_2-\lambda_1)|x(s)|^p\,ds + \mu_1(1-\kappa)^{p-1}\kappa\,\mathbb{E}|x_\tau(t)|^p. \quad (5.46)
\end{aligned}
$$
This yields
$$
\begin{aligned}
\mu_1(1-\kappa)^{p-1}\sup_{0\le s\le t}\mathbb{E}|x(s)|^p \le{}& \cdots + \mu_1(1-\kappa)^{p-1}\kappa\sup_{0\le s\le\tau}\mathbb{E}|x_\tau(s)|^p + \mu_1(1-\kappa)^{p-1}\kappa\sup_{\tau\le s\le t}\mathbb{E}|x_\tau(s)|^p \quad (5.47) \\
\le{}& V(0, r(0), x(0)-\bar D(x_\tau(0), r(0))) + \big(\lambda_2\tau + \mu_1(1-\kappa)^{p-1}\kappa\big)\max_{-\tau\le s\le 0}|\xi(s)|^p \\
&+ \int_0^t(\lambda_2-\lambda_1)\sup_{0\le u\le s}\mathbb{E}|x(u)|^p\,ds + \mu_1(1-\kappa)^{p-1}\kappa\sup_{0\le s\le t}\mathbb{E}|x(s)|^p.
\end{aligned}
$$
Hence
$$
\begin{aligned}
\sup_{0\le s\le t}\mathbb{E}|x(s)|^p \le{}& \frac{1}{\mu_1(1-\kappa)^p}\Big[V(0, r(0), x(0)-\bar D(x_\tau(0), r(0))) + \big(\lambda_2\tau + \mu_1(1-\kappa)^{p-1}\kappa\big)\max_{-\tau\le s\le 0}|\xi(s)|^p \\
&+ \int_0^t(\lambda_2-\lambda_1)\sup_{0\le u\le s}\mathbb{E}|x(u)|^p\,ds\Big] \\
={}& \mu - \upsilon\int_0^t \sup_{0\le u\le s}\mathbb{E}|x(u)|^p\,ds, \quad (5.48)
\end{aligned}
$$
where
$$
\mu = \frac{1}{\mu_1(1-\kappa)^p}V(0, r(0), x(0)-\bar D(x_\tau(0), r(0))) + \frac{\lambda_2\tau}{\mu_1(1-\kappa)^p}\max_{-\tau\le s\le 0}|\xi(s)|^p + \frac{\kappa}{1-\kappa}\max_{-\tau\le s\le 0}|\xi(s)|^p
$$
and υ is defined in (5.41).
It can be seen that μ and υ are two positive constants. Therefore, using Gronwall's inequality (see [81]), we have
$$
\sup_{0\le s\le t}\mathbb{E}|x(s)|^p \le \mu e^{-\upsilon t}, \quad (5.49)
$$
thus
$$
\limsup_{t\to\infty}\frac{1}{t}\log \mathbb{E}|x(t)|^p \le -\upsilon < 0. \quad (5.50)
$$
From the above inequality and Definition 5.17, one can see that the system (1.4) is exponentially stable in pth moment. This completes the proof.
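The Gronwall step above, passing from an integral inequality of the form sup ≤ μ − υ∫sup to the exponential bound μe^{−υt}, can be sanity-checked by discretizing the equality case (the constants below are illustrative, not from the theorem):

```python
import numpy as np

# Discretize u(t) = mu - upsilon * int_0^t u(s) ds (the equality case of
# the integral inequality) and compare with the bound mu * exp(-upsilon*t).
mu, upsilon, dt, T = 2.0, 0.5, 1e-3, 10.0
n = int(T / dt)
u = np.empty(n)
integral = 0.0
for k in range(n):
    u[k] = mu - upsilon * integral   # left-endpoint Riemann sum
    integral += u[k] * dt
t = np.arange(n) * dt
bound = mu * np.exp(-upsilon * t)
print(np.max(np.abs(u - bound)))     # small discretization error only
```

The discretized solution tracks the exponential bound to within O(dt), which is exactly what Gronwall's inequality predicts for the continuous problem.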
with
$$
\dot{k}_j = -\frac{1}{2}\alpha_j p|e - D^ie_\tau|^{p-2}(e_j - D^i(e_\tau)_j)^2, \quad (5.52)
$$
where α_j > 0 (j = 1, 2, …, n) are arbitrary constants.
Then the noise-perturbed response system (5.21) can be adaptively exponentially synchronized in pth moment with the time-delay neural network (5.19).
Consider the Lyapunov function
$$
V(t, i, x) = |x|^p + \sum_{j=1}^{n}\frac{1}{\alpha_j}k_j^2. \quad (5.53)
$$
Then
$$
V_t(t,i,e-D^ie_\tau) = \sum_{j=1}^{n}\frac{2}{\alpha_j}k_j\dot{k}_j = -\sum_{j=1}^{n}k_jp|e-D^ie_\tau|^{p-2}(e_j - D^i(e_\tau)_j)^2,
$$
$$
V_{xx}(t,i,e-D^ie_\tau) = p(p-2)|e-D^ie_\tau|^{p-4}(e-D^ie_\tau)(e-D^ie_\tau)^T + p|e-D^ie_\tau|^{p-2}I \le p(p-1)|e-D^ie_\tau|^{p-2}I.
$$
Thus
$$
\mathcal{L}V(t,i,e,e_\tau) \le -\sum_{j=1}^{n}k_jp|e-D^ie_\tau|^{p-2}(e_j-D^i(e_\tau)_j)^2 + \cdots,
$$
and, for the term involving C^i,
$$
\begin{aligned}
-p|e-D^ie_\tau|^{p-2}e^TC^ie &\le -\gamma p|e-D^ie_\tau|^{p-2}|e|^2 \\
&\le \gamma p\big(-(1-\kappa)^{p-3}|e|^{p-2} + \kappa(1-\kappa)^{p-3}|e_\tau|^{p-2}\big)|e|^2 \\
&\le -\gamma p(1-\kappa)^{p-3}|e|^p + \gamma p\kappa(1-\kappa)^{p-3}\Big(\frac{2}{p}|e|^p + \frac{p-2}{p}|e_\tau|^p\Big) \quad (5.55) \\
&= \big[-\gamma p(1-\kappa)^{p-3} + 2\gamma\kappa(1-\kappa)^{p-3}\big]|e|^p + \gamma(p-2)\kappa(1-\kappa)^{p-3}|e_\tau|^p,
\end{aligned}
$$
$$
-p\varsigma|e - D^ie_\tau|^p \le -p\varsigma(1-\kappa)^{p-1}|e|^p + p\varsigma\kappa(1-\kappa)^{p-1}|e_\tau|^p. \quad (5.59)
$$
$$
\begin{aligned}
|e-D^ie_\tau|^{p-2}|e|^2 &\le \big(|e-D^ie_\tau|^p\big)^{\frac{p-2}{p}}\big(|e|^p\big)^{\frac{2}{p}} \le \frac{p-2}{p}|e-D^ie_\tau|^p + \frac{2}{p}|e|^p \\
&\le \frac{p-2}{p}(1+\kappa)^{p-1}\big(|e|^p + \kappa|e_\tau|^p\big) + \frac{2}{p}|e|^p \quad (5.60) \\
&= \Big[\frac{p-2}{p}(1+\kappa)^{p-1} + \frac{2}{p}\Big]|e|^p + \frac{p-2}{p}\kappa(1+\kappa)^{p-1}|e_\tau|^p,
\end{aligned}
$$
and
$$
\begin{aligned}
|e-D^ie_\tau|^{p-2}|e_\tau|^2 &\le \big(|e-D^ie_\tau|^p\big)^{\frac{p-2}{p}}\big(|e_\tau|^p\big)^{\frac{2}{p}} \le \frac{p-2}{p}|e-D^ie_\tau|^p + \frac{2}{p}|e_\tau|^p \\
&\le \frac{p-2}{p}(1+\kappa)^{p-1}\big(|e|^p + \kappa|e_\tau|^p\big) + \frac{2}{p}|e_\tau|^p \quad (5.61) \\
&= \frac{p-2}{p}(1+\kappa)^{p-1}|e|^p + \Big[\frac{p-2}{p}\kappa(1+\kappa)^{p-1} + \frac{2}{p}\Big]|e_\tau|^p.
\end{aligned}
$$
Combining the above estimates,
$$
\begin{aligned}
\mathcal{L}V(t,i,e,e_\tau) \le{}& \Big[-\gamma p(1-\kappa)^{p-3} + 2\gamma\kappa(1-\kappa)^{p-3} + C_0^2 + \alpha + \alpha\kappa + L^2 + \beta + \beta\kappa + (p-1)H_1 \\
&\quad + \frac{p-2}{2}(1+\kappa)^{p-1}\big(C_0^2 + 2L^2 + \kappa^2 + (\alpha+\beta)(1+\kappa)^2 + (p-1)(H_1+H_2)\big) - p\varsigma(1-\kappa)^{p-1}\Big]|e|^p \\
&+ \Big[\gamma(p-2)\kappa(1-\kappa)^{p-3} + \kappa^2 + \alpha\kappa + \alpha\kappa^2 + \beta\kappa + \beta\kappa^2 + L^2 + (p-1)H_2 \\
&\quad + \frac{p-2}{2}\kappa(1+\kappa)^{p-1}\big(C_0^2 + 2L^2 + \kappa^2 + (\alpha+\beta)(1+\kappa)^2 + (p-1)(H_1+H_2)\big) + p\varsigma\kappa(1-\kappa)^{p-1}\Big]|e_\tau|^p \quad (5.62) \\
={}& (U_1+U_2+U_3+U_4)|e|^p + (V_1+V_2+V_3+V_4)|e_\tau|^p = -\lambda_1|e|^p + \lambda_2|e_\tau|^p.
\end{aligned}
$$
Moreover, from (5.51), one can see that λ_2 < λ_1, i.e., (5.38) holds.
Therefore, by Theorem 5.23, the error system (5.23) is adaptively exponentially stable in pth moment, and hence the response system (5.21) can be exponentially synchronized in pth moment with the drive time-delay neural network (5.19). This completes the proof.
Next, we consider the case of p = 2 and have the following result.
Theorem 5.25 Suppose that the error system (5.23) has a unique solution denoted by
e(t; ξ, i 0 ) on t ≥ 0 for any initial data {e(θ) : −τ ≤ θ ≤ 0} = ξ ∈ CbF0 ([−τ , 0]; Rn )
with e(0) = 0. Let Assumptions 5.13–5.15 hold, and p = 2. Assume that
Θ1 + Θ2 < 0, (5.63)
with
$$
\dot{k}_j = -\alpha_j(e_j - D^i(e_\tau)_j)^2. \quad (5.64)
$$
Consider the Lyapunov function
$$
V(t, i, x) = |x|^2 + \sum_{j=1}^{n}\frac{1}{\alpha_j}k_j^2. \quad (5.65)
$$
Then
$$
V_t(t,i,e-D^ie_\tau) = \sum_{j=1}^{n}\frac{2}{\alpha_j}k_j\dot{k}_j = -2\sum_{j=1}^{n}k_j(e_j - D^i(e_\tau)_j)^2,
$$
and
$$
\mathcal{L}V(t,i,e,e_\tau) \le -2\sum_{j=1}^{n}k_j(e_j - D^i(e_\tau)_j)^2 + \cdots
$$
Let Θ_1 = −λ_1 and Θ_2 = λ_2. Then (5.40) holds, and (5.38) also holds by (5.63).
Therefore, by Theorem 5.23, when p = 2 the error system (5.23) is adaptively exponentially stable in mean square, and hence the response system (5.21) can be exponentially synchronized in mean square with the drive time-delay neural network (5.19). This completes the proof.
Remark 5.26 In the proofs of Theorems 5.24 and 5.25, the Lyapunov function V(t, i, x) may be taken as in the proof of Theorem 5.19. If so, the corresponding results can be obtained using the M-matrix method.
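Adaptive laws such as (5.52) and (5.64) can be implemented with a simple Euler step; since their right-hand sides are nonpositive, the gain is nonincreasing along any trajectory. A scalar sketch driven by an assumed decaying error signal (all constants illustrative):

```python
import numpy as np

# Euler discretization of k_dot = -alpha * (e - D*e_tau)**2, the p = 2
# gain update (5.64), along a stand-in exponentially decaying error.
alpha, D_n, dt, lag = 1.0, 0.2, 0.01, 100
e = [0.5 * np.exp(-0.02 * s) for s in range(300)]   # assumed error samples
k = 0.0
k_hist = [k]
for s in range(len(e)):
    e_tau = e[s - lag] if s >= lag else e[0]        # delayed error value
    z = e[s] - D_n * e_tau
    k += dt * (-alpha * z ** 2)                     # nonpositive increment
    k_hist.append(k)
print(k)
```

Because the increment is proportional to −(e − D e_τ)², the gain settles once the error has decayed, matching the nearly constant gain curves reported for the examples below.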
Theorem 5.28 Suppose that the error system (5.23) has a unique solution denoted by e(t; ξ, i_0) on t ≥ 0 for any initial data {e(θ) : −τ ≤ θ ≤ 0} = ξ ∈ C^b_{F_0}([−τ, 0]; R^n) with e(0) = 0. Let Assumption 5.27 hold, and let p ≥ 2, μ > 0, υ > 0. If the solution e(t; ξ, i_0) of the error system (5.23) obeys
$$
\mathbb{E}|e(t;\xi,i_0)|^p \le \mu e^{-\upsilon t}, \quad t \ge 0,
$$
then
$$
\limsup_{t\to\infty}\frac{1}{t}\log(|e(t)|) \le -\frac{\upsilon}{p} < 0 \quad \text{a.s.} \quad (5.68)
$$
Therefore, the noise-perturbed response system (5.21) can be almost surely exponentially synchronized with the time-delay neural network (5.19).
Proof Fix any ξ ∈ C^b_{F_0}([−τ, 0]; R^n) and write e(t; ξ, i_0) = e(t). For the error system,
$$
\begin{aligned}
d[e - D^i e_\tau] ={}& \big[-C^ie + A^ig(e) + B^ig(e_\tau) + U^i\big]dt + \sigma(t,i,e,e_\tau)d\omega(t) \\
={}& \big[-C^ie + A^ig(e) + B^ig(e_\tau) + (\operatorname{diag}\{k_1(t), k_2(t), \ldots, k_n(t)\} - \varsigma I)(e - D^ie_\tau)\big]dt \\
&+ \sigma(t,i,e,e_\tau)d\omega(t). \quad (5.69)
\end{aligned}
$$
For the second term in (5.71), using the continuous-type Hölder inequality in [33], Assumptions 5.13 and 5.27, the discrete-type Hölder inequality in [33], Lemma 4.3 in [16], and (5.72), respectively, we can obtain
$$
3^{p-1}\mathbb{E}\int_{\psi\tau}^{(\psi+1)\tau} \big| -C(r(s))e(s) + A(r(s))g(e(s)) + \cdots
$$
Therefore, substituting Eqs. (5.73), (5.74), and (5.75) into (5.71), together with e^{−υψτ} ≤ e^{−υ(ψ−1)τ} (ψ ≥ 1), yields
$$
\cdots \le \bar{\mu}e^{-\upsilon(\psi-1)\tau} = \hat{\mu}e^{-\upsilon\psi\tau}
$$
for all ψ ≥ 1. The Borel-Cantelli lemma in [33] shows that, for almost all ω ∈ Ω, (5.78) holds for all but finitely many ψ. Hence, for almost all ω ∈ Ω, there exists an integer ψ_0 = ψ_0(ω) such that Eq. (5.78) holds whenever ψ ≥ ψ_0.
This yields that, for almost all ω ∈ Ω,
$$
|e(t) - D(r(t))e_\tau(t)| \le e^{-p^{-1}(\upsilon-\varepsilon)(t-\tau)} \quad \text{whenever } t \ge \psi_0\tau. \quad (5.79)
$$
Noting that |e(t) − D(r(t))e_τ(t)| is finite on t ∈ [0, ψ_0τ], we observe that there is a finite random variable ζ = ζ(ω) such that, with probability 1,
$$
|e(t) - D(r(t))e_\tau(t)| \le \zeta e^{-p^{-1}(\upsilon-\varepsilon)t}, \quad \forall t \ge 0, \quad (5.80)
$$
which implies
$$
\begin{aligned}
\sup_{0\le s\le t}\big[e^{p^{-1}(\upsilon-\varepsilon)s}|e(s)|\big] &\le \zeta + \sup_{0\le s\le t}\big[\kappa e^{p^{-1}(\upsilon-\varepsilon)s}|e_\tau(s)|\big] \\
&\le \zeta + \kappa e^{p^{-1}(\upsilon-\varepsilon)\tau}\|\xi\| + \sup_{\tau\le s\le t}\big[\kappa e^{p^{-1}(\upsilon-\varepsilon)(s-\tau)}|e_\tau(s)|\big] \quad (5.82) \\
&\le \zeta + \kappa e^{p^{-1}(\upsilon-\varepsilon)\tau}\|\xi\| + \sup_{0\le s\le t}\big[\kappa e^{p^{-1}(\upsilon-\varepsilon)s}|e(s)|\big], \quad \forall t \ge 0.
\end{aligned}
$$
Since κe^{p^{-1}(υ−ε)τ} < 1, it follows that
$$
\sup_{0\le s\le t}\big[e^{p^{-1}(\upsilon-\varepsilon)s}|e(s)|\big] \le \frac{\zeta + \kappa e^{p^{-1}(\upsilon-\varepsilon)\tau}\|\xi\|}{1 - \kappa e^{p^{-1}(\upsilon-\varepsilon)\tau}}, \quad \forall t \ge 0. \quad (5.83)
$$
Hence
$$
\limsup_{t\to\infty}\frac{1}{t}\log(|e(t)|) \le -\frac{\upsilon-\varepsilon}{p} \quad \text{a.s.} \quad (5.84)
$$
Letting ε → 0, we obtain
$$
\limsup_{t\to\infty}\frac{1}{t}\log(|e(t)|) \le -\frac{\upsilon}{p} < 0 \quad \text{a.s.} \quad (5.85)
$$
Remark 5.29 The inequality |k_i(t)| ≤ k̄, ∀i ∈ S, in Assumption 5.27 is imposed to ensure that the term f̄(t, i, e, e_τ) = −C^ie + A^ig(e) + B^ig(e_τ) + (diag{k_1(t), k_2(t), …, k_n(t)} − ςI)(e − D^ie_τ) in the error system (5.69) satisfies |f̄(t, i, e, e_τ)| ≤ K(|e| + |e_τ|). In fact, under the conditions of Theorems 5.24 and 5.25, the response system (5.21) can be exponentially synchronized in pth moment with the drive time-delay neural network (5.19) by the control law U^i = (diag{k_1(t), k_2(t), …, k_n(t)} − ςI)(e(t) − D^ie_τ) (i ∈ S) with k̇_j = −½α_jp|e − D^ie_τ|^{p−2}(e_j − D^i(e_τ)_j)². In this case, the gains k_j (j = 1, …, n) eventually become stable and no longer need to be updated. Therefore, Assumption 5.27 is reasonable.
Remark 5.30 Although the conditions of Theorem 5.28 are stronger than those of Theorems 5.24 and 5.25, exponential synchronization in pth moment cannot be deduced from almost sure exponential synchronization for these systems. In fact, there is no natural relationship among the three kinds of synchronization, i.e., no one kind of synchronization is implied by any other.
In this subsection, two numerical examples will be given to support the main results of this section.
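Before turning to the examples, it may help to see how such a neutral stochastic error system can be simulated. The sketch below applies an Euler-Maruyama step to a scalar analogue of (5.69), d[e − d_n e(t − τ)] = −c e dt + σ e dω; all constants are illustrative and are not those of the examples:

```python
import numpy as np

# Euler-Maruyama for d[e - d_n*e(t - tau)] = -c*e dt + sigma*e dW.
# The variable z = e - d_n*e_tau is advanced, then e is recovered.
c, d_n, sigma, tau, dt, T = 2.0, 0.2, 0.1, 1.0, 0.01, 20.0
n, lag = int(T / dt), int(tau / dt)
rng = np.random.default_rng(1)
e = np.empty(n + 1)
e[0] = 1.0                       # initial segment e(s) = 1 on [-tau, 0]
z = e[0] - d_n * 1.0
for k in range(n):
    z += -c * e[k] * dt + sigma * e[k] * np.sqrt(dt) * rng.standard_normal()
    idx = k + 1 - lag
    e[k + 1] = z + d_n * (e[idx] if idx >= 0 else 1.0)
print(abs(e[-1]) < abs(e[0]))    # the error decays for these constants
```

Vector-valued systems with Markovian switching are simulated the same way, with the matrices C^i, A^i, B^i, D^i selected by a sampled mode path r(t).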
[Figure: state trajectory of the drive system in the (x1, x2) plane.]
[Figure: state trajectory of the response system in the (y1, y2) plane.]
f(x(t)) = 0.3 tanh(x(t)), τ = 1,
σ(t, 1, e(t), e(t − τ)) = [0.15e_1(t − τ), 0.2e_2(t)],
σ(t, 2, e(t), e(t − τ)) = [0.2e_1(t), 0.1e_2(t − τ)],
w(t) is taken as Gaussian white noise. The dynamic behaviors of the drive system
(5.19) and the response system (5.21) are given in Figs. 5.3 and 5.4, respectively,
with the initial states x(t) = [−0.25, −0.35]T , y(t) = [0.27, 0.30]T , and k(t) =
[−1, 1]T , t ∈ [−1, 0].
It can be computed from the above parameters of the systems that L = 0.3,
H1 = 0.22 , H2 = 0.152 , κ = 0.2, α = 20.1498, β = 28.0900, γ = 0.9, C0 = 1,
q1 = 7.6515, and q2 = 10.5410.
We further take α1 = α2 = 1, ς = 25, and m = 10.
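With the constants just stated, condition (5.27) of Theorem 5.19 can be verified directly by a plain numeric check (using only the values given above):

```python
# Check condition (5.27): 2*gamma - kappa**2 - C0**2 - 2*L**2 - H1 - H2 >= 0,
# with the constants computed for Example 5.31.
L, H1, H2, kappa = 0.3, 0.2 ** 2, 0.15 ** 2, 0.2
gamma, C0 = 0.9, 1.0
margin = 2 * gamma - kappa ** 2 - C0 ** 2 - 2 * L ** 2 - H1 - H2
print(margin >= 0)   # margin is 0.5175, so (5.27) holds
```

A positive margin here is what licenses the conclusion a > 0 and a − b ≥ 0 in the proof of Theorem 5.19.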
It can be checked that Assumptions 5.13–5.15 are satisfied, the matrix M in Theorem 5.19 is a nonsingular M-matrix, and (5.27) holds. So the response system (5.21) can be adaptively almost surely asymptotically synchronized with the drive system (5.19) by Theorem 5.19. The dynamic curve of the error system is shown in
Fig. 5.5. The evolution of gains k1 and k2 of the adaptive control law U (t) is given in
Fig. 5.6. Figure 5.5 shows that the two stochastic neural networks (5.19) and (5.21)
are synchronized.
Example 5.32 Consider a time-delay neural network (5.19) and its response system
(5.21) with Markovian switching and network parameters as those in Example 5.31.
5.2 Adaptive Synchronization of Neutral-Type SNN … 189
[Figures 5.5–5.7: time histories of the synchronization errors e_1(t), e_2(t) and of the adaptive gains k_1(t), k_2(t).]
Furthermore, from Fig. 5.7, we can also see that the gains k_1 and k_2 of the adaptive control law U(t) remain almost constant. In fact, it is checked from the simulation that k_1(1) = −1, k_1(2) = −1.0001, k_1(t) = −1.0002 (t ≥ 3), and k_2(1) = 1, . . ., k_2(6) = 0.99916, k_2(t) = 0.99915 (t ≥ 7). The reason is that the error system (5.23) approaches stability, so the adaptive control law need not be updated.
5.2.5 Conclusion
In this section, the problem of adaptive synchronization has been studied for neutral-type stochastic neural networks with Markovian switching parameters, including adaptive almost sure asymptotic synchronization, adaptive exponential synchronization in pth moment, and adaptive almost sure exponential synchronization. By combining the M-matrix approach, the stochastic analysis method, and Lyapunov functionals, sufficient conditions have been obtained to ensure each of these types of adaptive synchronization. Numerical examples have been given to demonstrate the applicability and effectiveness of the theoretical results obtained.
5.3.1 Introduction
Due to the various complex dynamic properties of neural networks, some of the previous network models could not characterize the neural reaction process precisely; see [2, 46, 48, 49, 52, 80, 81]. It is pretty obvious that, in the real world, the past state of the network affects the current state. Hence, there has been an extensive research interest in the study of neutral-type neural networks; see [20, 34, 82, 84].
The stability and synchronization of these neural networks are worth studying since
they can be applied to create chemical and biological systems, image processing,
information sciences, etc.
Most of the existing studies on neural networks have focused on complete synchronization and generalized synchronization. However, projective synchronization, because of the proportionality between its synchronized dynamical states, has recently started to attract researchers. According to [6], when chaotic systems exhibit invariance under a special type of continuous transformation, amplification and displacement of the attractor occur. The degree of amplification or displacement depends smoothly on the initial condition. Up to now, just a few articles
5.3 Mode-Dependent Projective Synchronization … 191
where x(t) = [x1 (t), x2 (t), . . . , xn (t)]T ∈ Rn is the state vector associated with n
neurons, f (·) denotes the neuron activation functions, τ (t) represents the transmis-
sion delay with 0 ≤ τ (t) ≤ τ̄ , and τ̇ (t) ≤ τ̂ < 1 and τ̄ , τ̂ are positive constants.
For t ≥ 0, we denote i = r(t), A^i = A(r(t)), B^i = B(r(t)), C^i = C(r(t)), D^i = D(r(t)), and E^i = E(r(t)), respectively. In neural network (5.86), ∀i ∈ S, A^i = (a^i_{jk})_{n×n} and B^i = (b^i_{jk})_{n×n} are the connection weight and the delay connection weight matrices, respectively, C^i = diag{c^i_1, c^i_2, . . . , c^i_n} is a diagonal matrix with positive and unknown entries c^i_j > 0, D^i is called the neutral-type parameter matrix, and E^i = [E^i_1, E^i_2, . . . , E^i_n]^T ∈ R^n is the constant external input vector.
The initial condition of system (5.86) is given in the following form:
Assumption 5.33 For the function f (·) in (5.86), there exists a constant L > 0 such
that
| f (x) − f (y)| ≤ L|x − y|
Assumption 5.34 For σ(t, i, u(t), v(t)) in (5.88), there exist two positive constants H_1 and H_2 such that
trace[σ T (t, r (t), u(t), v(t))σ(t, r (t), u(t), v(t))] ≤ H1 |u(t)|2 + H2 |v(t)|2
Assumption 5.35 For the neutral-type parameter matrices D^i (i = 1, 2, . . . , S), there are positive constants κ_i ∈ (0, 1) such that
ρ(D^i) = κ_i ≤ κ,
Then, we present some preliminary lemmas, which play an important role in the
proof of the main results.
Theorem 5.36 Under Assumptions 5.33–5.35, suppose that the following adaptive
controller and update law
with
k̇ j = −α j (e j − D i eτ j )2 (5.93)
c̃˙ij = γ j (e j − D i eτ j )y j , (5.94)
If there exists a positive constant q, such that the following inequalities hold:
κ2 + L 2 + H2 − (1 − τ̂ )q < 0, (5.99)
α + β + γ − 2ς < 0, (5.100)
Proof Under Assumptions 5.33–5.34, it can be seen that f(·) and σ(·) satisfy the usual local Lipschitz condition and the linear growth condition.
Let D(y, i) = D i y. Then from Assumption 5.35, we have
and
Also,
trace[σ T (t, r (t), e(t), e(t − τ (t)))σ(t, r (t), e(t), e(t − τ (t)))]
(5.112)
≤ H1 |e(t)|2 + H2 |e(t − τ (t))|2 .
Substituting Eqs. (5.108)–(5.112) into Eq. (5.107), from Eqs. (5.98)–(5.100), one
can obtain that
To this end, based on the Lyapunov stability theory, the noise-perturbed response system (5.88) can be adaptively projectively synchronized with the drive time-delay neural network (5.86). This completes the proof.
Now we remove the Markovian jumping parameter from the neural networks.
That is to say, S = 1. The drive system, the response system, and the error system
can be represented as follows, respectively:
Corollary 5.37 Under Assumptions 5.33–5.35, suppose that the following adaptive
controller and update law
with
k̇ j = −α j (e j − Deτ j )2 (5.118)
If there exists a positive constant q, such that the following inequalities hold,
κ2 + L 2 + H2 − (1 − τ̂ )q < 0, (5.124)
α̂ + β̂ + γ̂ − 2ς < 0, (5.125)
198 5 Stability and Synchronization of Neutral-Type Neural Networks
Next, we remove the noise perturbations from the response system (5.88). Hence,
the response system and the error system can be represented as follows, respectively:
d[y(t) − D^i y(t − τ(t))] = [−Ĉ^i y(t) + Â^i f(y(t)) + B̂^i f(y(t − τ(t))) + Ê^i ∫_{t−τ(t)}^{t} f(y(s))ds + U(t)] dt,   (5.126)
Corollary 5.38 Under Assumptions 5.33 and 5.35, suppose that the following adaptive controller and update law
with
k̇ j = −α j (e j − D i eτ j )2 (5.129)
c̃˙ij = γ j (e j − D i eτ j )y j , (5.130)
If there exists a positive constant q, such that the following inequalities hold,
κ2 + L 2 − (1 − τ̂ )q < 0, (5.135)
α + β + γ − 2ς < 0, (5.136)
with
k̇ j = −α j (e j − D i eτ j )2 (5.139)
c̃˙ij = γ j (e j − D i eτ j )y j , (5.140)
If there exists a positive constant q, such that the following inequalities hold,
κ2 + L 2 + H2 − (1 − τ̂ )q < 0, (5.145)
α + β + γ − 2ς < 0, (5.146)
Consider the time-delay neural network (5.86) and its response system (5.88) with
the scaling factor λ = 2 and following network parameters:
A^1 = [1.2 −1.5; −1.7 0.2],  A^2 = [1.1 −1.6; −1.8 1.2],  B^1 = [0.7 −0.2; 0 0.3],
B^2 = [−0.4 −0.1; 0.3 0.5],  C^1 = [0.2 0; 0 0.3],  C^2 = [0.5 0; 0 0.2],
D^1 = [0.1 0; 0 0.15],  D^2 = [0.2 0; 0 0.1],  Γ = [−0.12 0.12; 0.11 −0.11],
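For reference, the example's parameters can be entered numerically and the standing assumptions spot-checked. The sketch below (assuming numpy) verifies that the neutral-type matrices satisfy ρ(D^i) < 1, as required by Assumption 5.35, and that Γ is a valid generator matrix (rows summing to zero).

```python
import numpy as np

# Network parameters of the example, with quick checks of Assumption 5.35
# (rho(D^i) < 1) and of the generator property of Gamma.
A1 = np.array([[1.2, -1.5], [-1.7, 0.2]])
A2 = np.array([[1.1, -1.6], [-1.8, 1.2]])
B1 = np.array([[0.7, -0.2], [0.0, 0.3]])
B2 = np.array([[-0.4, -0.1], [0.3, 0.5]])
C1 = np.diag([0.2, 0.3])
C2 = np.diag([0.5, 0.2])
D1 = np.diag([0.1, 0.15])
D2 = np.diag([0.2, 0.1])
Gamma = np.array([[-0.12, 0.12], [0.11, -0.11]])

rho = lambda M: max(abs(np.linalg.eigvals(M)))   # spectral radius
assert rho(D1) < 1 and rho(D2) < 1               # Assumption 5.35 holds
assert np.allclose(Gamma.sum(axis=1), 0.0)       # generator: rows sum to zero
```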
5.3.5 Conclusions
5.4.1 Introduction
some simple yet generic criteria for determining the lag synchronization of coupled
chaotic delayed neural networks are derived based on the invariance principle of
functional differential equations. In [28], Lu et al. investigated globally exponential
synchronization for linearly coupled neural networks with time-varying delay and
impulsive disturbances. By referring to an impulsive delay differential inequality,
a sufficient condition of globally exponential synchronization for linearly coupled
neural networks with impulsive disturbances is derived therein.
In this section, we are concerned with the analysis issue for the mode and delay-
dependent adaptive exponential synchronization of neural networks with stochastic
delayed and Markovian switching parameters by employing M-matrix approach.
The main purpose of this section is to establish M-matrix-based stability criteria for
testing whether the stochastic delayed neural networks are stochastically exponentially synchronized in pth moment. We will use a simple example to illustrate the
usefulness of the derived M-matrix-based synchronization conditions.
In this section, we consider the neutral-type neural network, called the drive system, represented in the following compact form:
where t ≥ 0 (or t ∈ R+ , the set of all nonnegative real numbers) is the time
variable, x(t) = (x1 (t), x2 (t), . . . , xn (t))T ∈ Rn is the state vector associated with
n neurons, f (x(t)) = ( f 1 (x1 (t)), f 2 (x2 (t)), . . . , f n (xn (t)))T ∈ Rn denotes the
activation function of the neurons, and τ (t) is the transmission delay satisfying that
0 < τ(t) ≤ τ̄ and τ̇(t) ≤ τ̂ < 1, where τ̄ and τ̂ are constants. As a matter of convenience, for t ≥ 0, we denote r(t) = i and A(r(t)) = A^i, B(r(t)) = B^i, C(r(t)) = C^i, D(r(t)) = D^i, and N(r(t)) = N^i, respectively. In the drive system (5.147), furthermore, ∀i ∈ S, C^i = diag{c^i_1, c^i_2, . . . , c^i_n} has positive and unknown entries c^i_k > 0; A^i = (a^i_{jk})_{n×n} and B^i = (b^i_{jk})_{n×n} are the connection weight and the delayed connection weight matrices, respectively, and N^i = (n^i_{jk})_{n×n} is the neutral-type parameter matrix; all three are unknown matrices. D^i = (d^i_1, d^i_2, . . . , d^i_n)^T ∈ R^n is the constant external input vector.
For the drive systems (5.147), a response system is constructed as follows:
where y(t) is the state vector of the response system (5.148), Ĉ i = diag{ĉ1i , ĉ2i , . . . ,
ĉni }, Âi = (â ijk )n×n and B̂ i = (b̂ijk )n×n are the estimations of the unknown matrices
C i , Ai and B i , respectively, U (t) = (u 1 (t), u 2 (t), . . . , u n (t))T ∈ Rn is a control
input vector with the form of
U(t) = K(t)[y(t) − x(t) − N(r(t))(y(t − τ(t)) − x(t − τ(t)))]
     = diag{k_1(t), k_2(t), . . . , k_n(t)}[y(t) − x(t) − N(r(t))(y(t − τ(t)) − x(t − τ(t)))],   (5.149)
where C̃(r(t)) = Ĉ(r(t)) − C(r(t)), Ã(r(t)) = Â(r(t)) − A(r(t)), and B̃(r(t)) = B̂(r(t)) − B(r(t)). Denote c̃^i_j = ĉ^i_j − c^i_j, ã^i_{jk} = â^i_{jk} − a^i_{jk}, and b̃^i_{jk} = b̂^i_{jk} − b^i_{jk}; then C̃^i = diag{c̃^i_1, c̃^i_2, . . . , c̃^i_n}, Ã^i = (ã^i_{jk})_{n×n}, and B̃^i = (b̃^i_{jk})_{n×n}.
The initial condition associated with system (5.150) is given in the following
form:
for any ξ(s) ∈ L²_{F_0}([−τ̄, 0]; R^n), where L²_{F_0}([−τ̄, 0]; R^n) is the family of all F_0-measurable C([−τ̄, 0]; R^n)-valued random variables satisfying sup_{−τ̄≤s≤0} E|ξ(s)|² < ∞, and C([−τ̄, 0]; R^n) denotes the family of all continuous R^n-valued functions ξ(s) on [−τ̄, 0] with the norm ‖ξ‖ = sup_{−τ̄≤s≤0} |ξ(s)|.
To obtain the main result, we need the following assumptions.
Assumption 5.40 The activation functions of the neurons f (x(t)) satisfy the Lip-
schitz condition, that is, there exists a constant L > 0 such that
Assumption 5.41 The noise intensity matrix σ(·, ·, ·, ·) satisfies the linear growth condition; that is, there exist two positive constants H_1 and H_2 such that
Remark 5.42 Under Assumptions 5.40 and 5.41, the error system (5.150) admits an
equilibrium point (or trivial solution) e(t, ξ(s)), t ≥ 0.
The following stability concept and synchronization concept are needed in this
section.
Definition 5.43 The trivial solution e(t, ξ(s)) of the error system (5.150) is said to be exponentially stable in pth moment if

lim sup_{t→∞} (1/t) log(E|e(t, ξ(s))|^p) < 0,

for any ξ(s) ∈ L^p_{F_0}([−τ̄, 0]; R^n), where p ≥ 2, p ∈ Z. When p = 2, it is said to be exponentially stable in mean square.
The drive system (5.147) and the response system (5.148) are said to be exponentially synchronized in pth moment if the error system (5.150) is exponentially stable in pth moment.
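The limit in this definition can be estimated by Monte Carlo simulation. The sketch below does this for a scalar linear test SDE whose pth-moment exponent is known in closed form; the equation, parameter values, step size, and horizon are illustrative choices, not part of the text.

```python
import numpy as np

# Monte Carlo estimate of (1/t) log E|e(t)|^p for the scalar test SDE
# de = -a*e dt + s*e dw, whose exact pth-moment Lyapunov exponent is
# p*(-a + (p-1)*s**2/2). All parameter values below are illustrative.
rng = np.random.default_rng(1)
a, s, p = 1.0, 0.5, 2
h, T, paths = 1e-3, 5.0, 10000
e = np.ones(paths)                       # e(0) = 1 on every sample path
for _ in range(int(T / h)):
    e = e + h * (-a * e) + np.sqrt(h) * s * e * rng.standard_normal(paths)
est = np.log(np.mean(np.abs(e) ** p)) / T
exact = p * (-a + (p - 1) * s**2 / 2)    # = -1.75 for these values
print(est, exact)
```

A negative estimate close to the exact exponent confirms exponential stability in mean square for this test equation.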
The main purpose of the rest of this section is to establish a criterion of adaptive
exponential synchronization in pth moment of the system (5.147) and the response
system (5.148) using adaptive feedback control and M-matrix techniques.
Consider an n-dimensional stochastic delayed differential equation (SDDE, for
short) with Markovian switching
λ_2 < λ_1(1 − τ̂),   (5.152)

where c_1 = min_{i∈S} q_i and c_2 = max_{i∈S} q_i, for all t ≥ 0, i ∈ S, and x ∈ R^n (x = x(t) for short). Then the SDDE (5.151) is exponentially stable in pth moment.
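The nonsingular M-matrix conditions appearing in this chapter (e.g., in Theorem 5.19 and in Theorem 5.45 below) can be tested numerically. One standard equivalent characterization: a Z-matrix (nonpositive off-diagonal entries) is a nonsingular M-matrix if and only if all its eigenvalues have positive real parts. A minimal sketch with hypothetical matrices:

```python
import numpy as np

def is_nonsingular_M_matrix(M):
    """Z-matrix test plus the eigenvalue characterization: off-diagonal
    entries <= 0 and every eigenvalue has positive real part."""
    M = np.asarray(M, dtype=float)
    off = M - np.diag(np.diag(M))
    if np.any(off > 1e-12):          # positive off-diagonal entry: not a Z-matrix
        return False
    return bool(np.all(np.linalg.eigvals(M).real > 0))

# hypothetical 2x2 examples
assert is_nonsingular_M_matrix([[2.0, -1.0], [-0.5, 1.5]])
assert not is_nonsingular_M_matrix([[1.0, -2.0], [-2.0, 1.0]])  # det < 0
```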
Under Assumptions 5.40 and 5.41, the noise-perturbed response system (5.148) can be adaptively exponentially synchronized in pth moment with the drive neural network (5.147) if the feedback gain K(t) of the controller (5.149) with the update law is chosen as
5.4 Adaptive pth Moment Exponential Synchronization … 207
and the parameter update laws of the matrices Ĉ^i, Â^i, and B̂^i are chosen as

  ĉ˙^i_j = (γ_j/2) p q_i |e − N^i e_τ|^{p−2} (e_j − N^i e_{τj}) y_j,
  â˙^i_{jl} = −(α_{jl}/2) p q_i |e − N^i e_τ|^{p−2} (e_j − N^i e_{τj}) f_l,   (5.157)
  b̂˙^i_{jl} = −(β_{jl}/2) p q_i |e − N^i e_τ|^{p−2} (e_j − N^i e_{τj}) (f_l)_τ,
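One explicit Euler step of the update laws (5.157) can be written directly from the formulas. In the sketch below, all numerical values (states, gains, mode matrices) are illustrative placeholders, and the scalar coefficients follow the (γ_j/2) p q_i form of (5.157).

```python
import numpy as np

# One explicit Euler step of the parameter update laws (5.157).
# Every numerical value here is an illustrative placeholder.
p, q_i, h = 3, 1.0, 1e-3
gamma = np.array([0.5, 0.5])            # gamma_j
alpha = np.ones((2, 2))                 # alpha_jl
beta = np.ones((2, 2))                  # beta_jl
e, e_tau = np.array([0.4, -0.3]), np.array([0.1, 0.2])
N = np.diag([0.1, 0.1])                 # mode-i neutral-type matrix N^i
y = np.array([1.0, 0.5])
f, f_tau = np.tanh(y), np.tanh([0.9, 0.4])

z = e - N @ e_tau                       # e - N^i e_tau
w = np.linalg.norm(z) ** (p - 2) * z    # |e - N^i e_tau|^(p-2) (e_j - N^i e_tau_j)
c_hat_dot = 0.5 * gamma * p * q_i * w * y             # c-hat update direction
a_hat_dot = -0.5 * alpha * p * q_i * np.outer(w, f)   # a-hat_jl update direction
b_hat_dot = -0.5 * beta * p * q_i * np.outer(w, f_tau)
c_hat = np.array([0.2, 0.3]) + h * c_hat_dot          # Euler step for c-hat
```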
V(t, i, e) = q_i |e|^p + Σ_{j=1}^{n} [ (1/α_j) k_j² + (1/γ_j)(c̃^i_j)² + Σ_{l=1}^{n} (1/α_{jl})(ã^i_{jl})² + Σ_{l=1}^{n} (1/β_{jl})(b̃^i_{jl})² ].
LV(t, i, e, e_τ)
= V_t(t, i, e − N^i e_τ) + V_e(t, i, e − N^i e_τ)[−C̃^i y − C^i e + Ã^i f(y) + A^i g(e) + B̃^i f(y_τ) + B^i g(e_τ) + U(t)]   (5.158)
+ (1/2) trace(σ^T(t, i, e, e_τ) V_ee(t, i, e − N^i e_τ) σ(t, i, e, e_τ)) + Σ_{k=1}^{S} γ_{ik} V(t, k, e − N^i e_τ),
while
V_t(t, i, e − N^i e_τ) = 0,
V_e(t, i, e − N^i e_τ) = p q_i |e − N^i e_τ|^{p−2} (e − N^i e_τ)^T,   (5.159)
V_ee(t, i, e − N^i e_τ) = p(p − 2) q_i |e − N^i e_τ|^{p−4} (e − N^i e_τ)(e − N^i e_τ)^T + p q_i |e − N^i e_τ|^{p−2} I ≤ p(p − 1) q_i |e − N^i e_τ|^{p−2} I,
so
LV(t, i, e, e_τ)
≤ 2 Σ_{j=1}^{n} (1/α_j) k_j k̇_j + 2 Σ_{j=1}^{n} (1/γ_j) c̃^i_j c̃˙^i_j + 2 Σ_{j=1}^{n} Σ_{l=1}^{n} (1/α_{jl}) ã^i_{jl} ã˙^i_{jl} + 2 Σ_{j=1}^{n} Σ_{l=1}^{n} (1/β_{jl}) b̃^i_{jl} b̃˙^i_{jl}
+ p q_i |e − N^i e_τ|^{p−2} (e − N^i e_τ)^T [−C̃^i y − C^i e + Ã^i f(y) + A^i g(e) + B̃^i f(y_τ) + B^i g(e_τ) + U(t)]   (5.160)
+ (1/2) trace(σ^T(t, i, e, e_τ)(p(p − 1) q_i |e − N^i e_τ|^{p−2}) σ(t, i, e, e_τ)) + Σ_{k=1}^{S} γ_{ik} q_k |e − N^i e_τ|^p
= p q_i |e − N^i e_τ|^{p−2} (e − N^i e_τ)^T [−C^i e + A^i g(e) + B^i g(e_τ)]
+ (1/2) trace(σ^T(t, i, e, e_τ)(p(p − 1) q_i |e − N^i e_τ|^{p−2}) σ(t, i, e, e_τ)) + Σ_{k=1}^{S} γ_{ik} q_k |e − N^i e_τ|^p.
k=1
Now, using Assumptions 5.40 and 5.41 together with Lemmas 1.13, 1.3, 1.4 yields
− |e − N i eτ | p−2 ≤ −(1 − k) p−3 |e| p−2 + k(1 − k) p−3 |eτ | p−2 , (5.162)
and
Σ_{k=1}^{S} γ_{ik} q_k |e − N^i e_τ|^p
= γ_{ii} q_i |e − N^i e_τ|^p + Σ_{k=1,k≠i}^{S} γ_{ik} q_k |e − N^i e_τ|^p
= −Σ_{k=1,k≠i}^{S} γ_{ik} q_i |e − N^i e_τ|^p + Σ_{k=1,k≠i}^{S} γ_{ik} q_k |e − N^i e_τ|^p   (5.167)
≤ Σ_{k=1,k≠i}^{S} γ_{ik} q_i (−(1 − k)^{p−1}|e|^p + k(1 − k)^{p−1}|e_τ|^p) + Σ_{k=1,k≠i}^{S} γ_{ik} q_k (1 + k)^{p−1}(|e|^p + k|e_τ|^p).
k=1,k=i
|e − N^i e_τ|^{p−2}|e|²
≤ ((p − 2)/p)|e − N^i e_τ|^p + (2/p)|e|^p
≤ ((p − 2)/p)(1 + k)^{p−1}(|e|^p + k|e_τ|^p) + (2/p)|e|^p
= [((p − 2)/p)(1 + k)^{p−1} + 2/p]|e|^p + ((p − 2)/p) k(1 + k)^{p−1}|e_τ|^p,   (5.168)

|e − N^i e_τ|^{p−2}|e_τ|²
≤ ((p − 2)/p)|e − N^i e_τ|^p + (2/p)|e_τ|^p
≤ ((p − 2)/p)(1 + k)^{p−1}(|e|^p + k|e_τ|^p) + (2/p)|e_τ|^p
= ((p − 2)/p)(1 + k)^{p−1}|e|^p + [((p − 2)/p) k(1 + k)^{p−1} + 2/p]|e_τ|^p,
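The bound (5.168) rests on Young's inequality in the form a^{p−2} b² ≤ ((p − 2)/p) a^p + (2/p) b^p for a, b ≥ 0 and p ≥ 2; a quick random numerical check:

```python
import numpy as np

# Random check of the Young-type bound behind (5.168):
# a^(p-2) * b^2 <= ((p-2)/p) * a^p + (2/p) * b^p for a, b >= 0, p >= 2.
rng = np.random.default_rng(2)
p = 3
a = rng.uniform(0, 5, 10000)
b = rng.uniform(0, 5, 10000)
lhs = a ** (p - 2) * b**2
rhs = (p - 2) / p * a**p + 2 / p * b**p
assert np.all(lhs <= rhs + 1e-9)
```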
|e − N^i e_τ|^{p−2}(−e^T C^i e)
≤ (−|e − N^i e_τ|^{p−2}) γ|e|²
≤ γ|e|²(−(1 − k)^{p−3}|e|^{p−2} + k(1 − k)^{p−3}|e_τ|^{p−2})   (5.169)
≤ [−γ(1 − k)^{p−3} + γ(2/p) k(1 − k)^{p−3}]|e|^p + γ((p − 2)/p) k(1 − k)^{p−3}|e_τ|^p,
|e − N^i e_τ|^{p−2}(e − N^i e_τ)^T A^i g(e)
≤ |e − N^i e_τ|^{p−2}[(1/2)(α(1 + k) + L²)|e|² + (1/2)αk(1 + k)|e_τ|²]
= (1/2)(α(1 + k) + L²)|e − N^i e_τ|^{p−2}|e|² + (1/2)αk(1 + k)|e − N^i e_τ|^{p−2}|e_τ|²,   (5.170)

|e − N^i e_τ|^{p−2}(e − N^i e_τ)^T B^i g(e_τ)
≤ |e − N^i e_τ|^{p−2}[(1/2)β(1 + k)|e|² + (1/2)(βk(1 + k) + L²)|e_τ|²]
= (1/2)β(1 + k)|e − N^i e_τ|^{p−2}|e|² + (1/2)(βk(1 + k) + L²)|e − N^i e_τ|^{p−2}|e_τ|²,
|e − N^i e_τ|^{p−2} e_τ^T (N^i)^T C^i e
≤ |e − N^i e_τ|^{p−2}[(1/2)κ²|e_τ|² + (1/2)ι²|e|²]   (5.171)
= (1/2)ι²|e − N^i e_τ|^{p−2}|e|² + (1/2)κ²|e − N^i e_τ|^{p−2}|e_τ|².
LV(t, i, e, e_τ)
≤ q_i U_1 |e|^p + q_i V_1 |e_τ|^p
+ p q_i G_1 {[((p − 2)/p)(1 + k)^{p−1} + 2/p]|e|^p + ((p − 2)/p) k(1 + k)^{p−1}|e_τ|^p}
+ p q_i G_2 {((p − 2)/p)(1 + k)^{p−1}|e|^p + [((p − 2)/p) k(1 + k)^{p−1} + 2/p]|e_τ|^p}   (5.172)
+ Ū_4 |e|^p + V̄_4 |e_τ|^p
= (q_i U_1 + q_i U_2 + q_i U_3 + Ū_4)|e|^p + (q_i V_1 + q_i V_2 + q_i V_3 + V̄_4)|e_τ|^p,
where
G_1 = (1/2)ι² + (1/2)(α(1 + k) + L²) + (1/2)β(1 + k) + (1/2)(p − 1)H_1,
G_2 = (1/2)κ² + (1/2)(βk(1 + k) + L²) + (1/2)αk(1 + k) + (1/2)(p − 1)H_2,
V_1 = γ(p − 2) k(1 − k)^{p−3},
U_2 = G_1 ((p − 2)(1 + k)^{p−1} + 2),
V_2 = G_1 (p − 2) k(1 + k)^{p−1},
U_3 = G_2 (p − 2)(1 + k)^{p−1}.
Let
a_1 = min_{i∈S} Σ_{k=1,k≠i}^{S} γ_{ik},   a_2 = max_{i∈S} Σ_{k=1,k≠i}^{S} γ_{ik},   (5.173)
b_1 = min_{i∈S} Σ_{k=1,k≠i}^{S} γ_{ik} q_k,   b_2 = max_{i∈S} Σ_{k=1,k≠i}^{S} γ_{ik} q_k.   (5.174)
Then
Ū_4 = Σ_{k=1,k≠i}^{S} γ_{ik} q_i (−(1 − k)^{p−1}) + Σ_{k=1,k≠i}^{S} γ_{ik} q_k (1 + k)^{p−1}
    ≤ −a_1 q_i (1 − k)^{p−1} + a_2 q_i (1 + k)^{p−1} + (1 + k)^{p−1} Σ_{k=1}^{S} γ_{ik} q_k,   (5.175)

V̄_4 = Σ_{k=1,k≠i}^{S} γ_{ik} q_i k(1 − k)^{p−1} + k Σ_{k=1,k≠i}^{S} γ_{ik} q_k
    ≤ a_2 c_2 k(1 − k)^{p−1} + b_1 k.
Therefore,
LV(t, i, e, e_τ)
≤ [(U_1 + U_2 + U_3 − a_1(1 − k)^{p−1} + a_2(1 + k)^{p−1}) q_i + (1 + k)^{p−1} Σ_{k=1}^{S} γ_{ik} q_k] |e|^p
+ [(V_1 + V_2 + V_3) c_2 + a_2 c_2 k(1 − k)^{p−1} + b_1 k] |e_τ|^p   (5.176)
≤ [η q_i + (1 + k)^{p−1} Σ_{k=1}^{S} γ_{ik} q_k] |e|^p + [(V_1 + V_2 + V_3) c_2 + a_2 c_2 k(1 − k)^{p−1} + b_1 k] |e_τ|^p
≤ −m|e|^p + [(V_1 + V_2 + V_3) c_2 + a_2 c_2 k(1 − k)^{p−1} + b_1 k] |e_τ|^p.
Special case 1 The Markovian jumping parameters are removed from the neural
networks. That is to say, S = 1. For this case, one can get the following result
analogous to Theorem 5.45.
Corollary 5.47 Assume that η < 0 and (V_1 + V_2 + V_3) + a_2 k(1 − k)^{p−1} < −η(1 − τ̂),
where
Under Assumptions 5.40 and 5.41, the noise-perturbed response system can be adaptively exponentially synchronized in pth moment with the drive neural network if the feedback gain K(t) of the controller (5.149) with the update law is chosen as
and the update laws of the parameters of the matrices Ĉ, Â, and B̂ are chosen as

  ĉ˙_j = (γ_j/2) p |e − N e_τ|^{p−2} (e_j − N e_{τj}) y_j,
  â˙_{jl} = −(α_{jl}/2) p |e − N e_τ|^{p−2} (e_j − N e_{τj}) f_l,   (5.178)
  b̂˙_{jl} = −(β_{jl}/2) p |e − N e_τ|^{p−2} (e_j − N e_{τj}) (f_l)_τ,
Special case 2 When the noise perturbation is removed from the response system
(5.148), it yields the noiseless response system, which can lead to the following
results.
where
Ḡ 1 = G 1 − (1/2)( p − 1)H1 ,
Ḡ 2 = G 2 − (1/2)( p − 1)H2 ,
and
where
Proof The proof is similar to that of Theorem 5.45, and hence omitted.
In this section, we present an example to illustrate the usefulness of the main results obtained. The adaptive exponential synchronization in pth moment is examined for given stochastic delayed neural networks with Markovian jumping parameters.
Example 5.49 Consider the delayed neural networks (5.147) with Markovian switch-
ing, the response stochastic delayed neural networks (5.148) with Markovian switch-
ing, and the error system (5.150) with the network parameters given as follows:
C^1 = [2.1 0; 0 2.8],  C^2 = [2.5 0; 0 2.2],  A^1 = [1.2 −1.5; −1.7 1.2],
A^2 = [1.1 −1.6; −1.8 1.2],  B^1 = [0.7 −0.2; 0 0.3],  B^2 = [−0.4 −0.1; −0.3 0.5],
D^1 = D̂^1 = [0.6, 0.1]^T,  D^2 = D̂^2 = [0.8, 0.2]^T,  Γ = [−0.12 0.12; 0.11 −0.11],
α_{11} = α_{12} = α_{21} = α_{22} = β_{11} = β_{12} = β_{21} = β_{22} = 1,
σ(t, e(t), e(t − τ), 1) = (0.4 e_1(t − τ), 0.5 e_2(t))^T,
σ(t, e(t), e(t − τ), 2) = (0.5 e_1(t), 0.3 e_2(t − τ))^T,
p = 3, L = 1, f(x(t)) = tanh(x(t)), τ = 0.12.
It can be checked that Assumptions 5.40 and 5.41 and the inequality (5.180) are satisfied and that the matrix M is a nonsingular M-matrix. So the noise-perturbed response system (5.148) can be adaptively exponentially synchronized in pth moment with the drive neural network (5.147) by Theorem 5.45. The simulation results are given in Figs. 5.10, 5.11, 5.12, 5.13, and 5.14. Among them, Fig. 5.10 shows the state response of the error system, e_1(t) and e_2(t). Figure 5.11 shows the feedback gains k_1, k_2. Figures 5.12, 5.13, and 5.14 show the estimated parameters of the matrices Ĉ, Â, and B̂, namely c_1(t), c_2(t), a_{11}(t), a_{12}(t), a_{21}(t), a_{22}(t), b_{11}(t), b_{12}(t), b_{21}(t), and b_{22}(t). From the simulation figures, one can see that the stochastic delayed neural networks with Markovian switching (5.147) and (5.148) are adaptively exponentially synchronized in pth moment.
5.4.5 Conclusions
In this section, we have dealt with the problem of mode- and delay-dependent adaptive exponential synchronization in pth moment for neural networks with stochastic delays and Markovian jumping parameters. We have removed the traditional monotonicity and smoothness assumptions on the activation function. An M-matrix approach has been developed to solve the problem addressed. The conditions for adaptive exponential synchronization in pth moment have been derived in terms of some algebraic inequalities; these synchronization conditions differ substantially from linear-matrix-inequality conditions. Finally, a simple example has been used to demonstrate the effectiveness of the main results obtained in this section.
5.5.1 Introduction
During the past two decades, chaos synchronization has played a significant role in
nonlinear science since it can be applied to create chemical and biological systems,
image processing, secure communication systems, information science, and so on.
Different concepts of synchronization, like complete synchronization, generalized
synchronization, phase synchronization, lag synchronization, and anticipated synchronization, have been widely investigated. Researchers have typically synchronized two chaotic systems with the following strategies: the adaptive control method, the feedback control method, the active control method, etc.
5.5 Adaptive Synchronization of Neutral-Type SNN … 217
Consider the following neural network of neutral type with time-varying discrete delays and distributed delays, described by the differential equation:
d[x(t) − Dx(t − τ_1(t))] = [−C_t x(t) + A_t f̃(x(t)) + B_t g̃(x(t − τ_2(t))) + E_t ∫_{t−τ_3(t)}^{t} h̃(x(s))ds + J] dt,   (5.181)
where n is the number of neurons in the indicated neural network, x(t) = [x1 (t), x2
(t), . . . , xn (t)]T ∈ Rn is the state vector associated with n neurons, J = [J1 , J2 , . . . ,
Jn ]T ∈ Rn is the external constant input vector, f˜(·), g̃(·), h̃(·) denote the neuron
activation functions, and τk (t) (k = 1, 2, 3) represent the time-varying delays. In
system (5.181),
At = A + ΔA(t), Bt = B + ΔB(t),
(5.182)
Ct = C + ΔC(t), E t = E + ΔE(t),
F T (t)F(t) ≤ I. (5.184)
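An admissible uncertainty F(t) with F^T(t)F(t) ≤ I can be generated by normalizing a random matrix; the sketch below builds a norm-bounded perturbation of the standard form ΔA = M F(t) N_A (the matrices M and N_A below are hypothetical data).

```python
import numpy as np

# Construct a norm-bounded uncertainty Delta_A = M F(t) N_A with F^T F <= I,
# the standard structure behind (5.182)-(5.184). M and N_A are hypothetical.
rng = np.random.default_rng(3)
M = np.array([[0.1, 0.0], [0.0, 0.1]])
N_A = np.array([[0.2, 0.1], [0.0, 0.2]])

F = rng.uniform(-1, 1, (2, 2))
F = F / max(1.0, np.linalg.norm(F, 2))   # scale the largest singular value to <= 1
# verify the admissibility condition F^T F <= I
assert np.all(np.linalg.eigvalsh(np.eye(2) - F.T @ F) >= -1e-12)

Delta_A = M @ F @ N_A
# by submultiplicativity, ||Delta_A|| <= ||M|| * ||N_A||
assert np.linalg.norm(Delta_A, 2) <= np.linalg.norm(M, 2) * np.linalg.norm(N_A, 2) + 1e-12
```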
We consider the model (5.181) as the drive system. The response system is
d[y(t) − Dy(t − τ_1(t))] = [−C_t y(t) + A_t f̃(y(t)) + B_t g̃(y(t − τ_2(t))) + E_t ∫_{t−τ_3(t)}^{t} h̃(y(s))ds + J + u(t)] dt
+ σ(t, y(t) − x(t), y(t − τ_1(t)) − x(t − τ_1(t)), y(t − τ_2(t)) − x(t − τ_2(t)), y(t − τ_3(t)) − x(t − τ_3(t))) dω(t),   (5.185)
where V_t(t, e(t)) = ∂V(t, e(t))/∂t, V_e(t, e(t)) = (∂V(t, e(t))/∂e_1, ∂V(t, e(t))/∂e_2, . . . , ∂V(t, e(t))/∂e_n), and V_ee(t, e(t)) = (∂²V(t, e(t))/∂e_i ∂e_j)_{n×n}.
To prove our main results, the following assumptions are needed.
Assumption 5.50 There exist diagonal matrices L_i^− = diag{l_{i1}^−, l_{i2}^−, . . . , l_{in}^−} and L_i^+ = diag{l_{i1}^+, l_{i2}^+, . . . , l_{in}^+}, i = 1, 2, 3, satisfying
for all u, v ∈ R^n, u ≠ v, j = 1, 2, . . . , n.
Assumption 5.51 There exist positive constants τ1 , τ2 , τ3 , μ1 , μ2 , and μ3 such that
ρ(D) < 1,
By the facts that f (0) = g(0) = h(0) = 0 and σ(t, 0, 0, 0, 0) ≡ 0, the system
(5.186) admits a trivial solution e(t; 0) ≡ 0 corresponding to the initial data ξ = 0.
Hence, if the trivial solution of the system (5.186) is globally almost surely asymp-
totically stable, the system (5.181) and system (5.185) achieve synchronization for
almost every initial data.
Next, we introduce the definition of stochastic synchronization under almost every
initial data for the two coupled neural networks (5.181) and (5.185).
Definition 5.55 The two coupled neural networks (5.181) and (5.185) are said
to be stochastic synchronization for almost every initial data if for every ξ ∈
C2F0 ([−τ , 0]; Rn ),
In this section, the stochastic synchronization for the two coupled neural networks
(5.181) and (5.185) is investigated under Assumptions 5.50–5.54.
Firstly, we deal with the synchronization of neural networks (5.181) and (5.185)
without the parameter uncertainties ΔA(t), ΔB(t), ΔC(t), and ΔE(t).
Theorem 5.56 Under Assumptions 5.50–5.54, the two coupled neural networks
(5.181) and (5.185) without the parameter uncertainties can be synchronized for
almost every initial data, if there exist positive diagonal matrices H1 , H2 , H3 ,
P = diag{ p1 , p2 , . . . , pn }, positive definite matrices Q 1 , Q 2 , Q 3 , S1 , S2 , and a
positive scalar λ such that the following LMIs hold:
P ≤ λI, (5.188)
τ3 (S1 + S2 ) ≤ Q 3 , (5.189)
Π = [ Π11  Π12  0    P A    0    P B    P E   0
      ∗    Π22  0   −D P A  0   −D P B  0     D P E
      ∗    ∗    Π33  0      0    0      0     0
      ∗    ∗    ∗   −H1     0    0      0     0
      ∗    ∗    ∗    ∗      Π55  0      0     0
      ∗    ∗    ∗    ∗      ∗   −H2     0     0
      ∗    ∗    ∗    ∗      ∗    ∗     −S1    0
      ∗    ∗    ∗    ∗      ∗    ∗      ∗    −S2 ] < 0,   (5.190)
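In practice, once candidate matrices are available, a condition such as (5.190) can be checked numerically by assembling the symmetric block matrix and testing that its largest eigenvalue is negative. A minimal sketch with hypothetical scalar "blocks":

```python
import numpy as np

# Numerical check of an LMI condition of the form (5.190): assemble the
# symmetric matrix and test Pi < 0 via its largest eigenvalue.
# The entries below are hypothetical placeholders, purely for illustration.
Pi = np.array([
    [-2.0,  0.3,  0.1],
    [ 0.3, -1.5,  0.2],
    [ 0.1,  0.2, -1.0],
])
assert np.allclose(Pi, Pi.T)             # LMIs are stated for symmetric matrices
assert np.linalg.eigvalsh(Pi).max() < 0  # Pi < 0 (negative definite)
```

In a full design one would obtain the blocks from an LMI solver (e.g., the Matlab LMI Toolbox mentioned later in this section) and run exactly this kind of a posteriori check.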
V(t, e(t)) = Σ_{i=1}^{5} V_i(t, e(t)),   (5.193)
where
V_1(t, e(t)) = [e(t) − De(t − τ_1(t))]^T P [e(t) − De(t − τ_1(t))],

e^T(t − τ_1(t))[−2D P E] ∫_{t−τ_3(t)}^{t} h(e(s))ds
≤ e^T(t − τ_1(t))[D P E S_2^{−1} E^T P^T D^T] e(t − τ_1(t))   (5.196)
+ (∫_{t−τ_3(t)}^{t} h(e(s))ds)^T S_2 (∫_{t−τ_3(t)}^{t} h(e(s))ds),
trace[σ T (t, e(t), e(t − τ1 (t)), e(t − τ2 (t)), e(t − τ3 (t)))Pσ(t, e(t),
e(t − τ1 (t)), e(t − τ2 (t)), e(t − τ3 (t)))]
≤ λmax (P)trace[σ T (t, e(t), e(t − τ1 (t)), e(t − τ2 (t)), e(t − τ3 (t)))σ(t, e(t),
e(t − τ1 (t)), e(t − τ2 (t)), e(t − τ3 (t)))]
≤ λ[e T (t)R1 e(t) + e T (t − τ1 (t))R2 e(t − τ1 (t)) + e T (t − τ2 (t))R3 e(t − τ2 (t))
+e T (t − τ3 (t))R4 e(t − τ3 (t))].
(5.198)
LV2 (t, e(t)) = e T (t)Q 1 e(t) − (1 − τ˙1 (t))e T (t − τ1 (t))Q 1 e(t − τ1 (t))
(5.199)
≤ e T (t)Q 1 e(t) − e T (t − τ1 (t))[(1 − μ1 )Q 1 ]e(t − τ1 (t)),
LV3 (t, e(t)) = e T (t)Q 2 e(t) − (1 − τ˙2 (t))e T (t − τ2 (t))Q 2 e(t − τ2 (t))
(5.200)
≤ e T (t)Q 2 e(t) − e T (t − τ2 (t))[(1 − μ2 )Q 2 ]e(t − τ2 (t)),
LV_4(t, e(t)) = τ_3(t) h^T(e(t)) Q_3 h(e(t)) − ∫_{t−τ_3(t)}^{t} h^T(e(s)) Q_3 h(e(s))ds   (5.201)
≤ h^T(e(t))[τ_3 Q_3] h(e(t)) − ∫_{t−τ_3(t)}^{t} h^T(e(s)) Q_3 h(e(s))ds,
LV_5(t, e(t)) = 2 Σ_{i=1}^{n} (p_i/ϕ_i)(k_i + α) k̇_i = −2 Σ_{i=1}^{n} p_i (k_i + α)(e_i²(t) − d_i e_i(t) e_i(t − τ_1(t))).   (5.202)
Furthermore, the condition (5.189) yields
∫_{t−τ_3(t)}^{t} h^T(e(s))[τ_3(S_1 + S_2)] h(e(s))ds − ∫_{t−τ_3(t)}^{t} h^T(e(s)) Q_3 h(e(s))ds ≤ 0.   (5.203)
On the other hand, from Assumption 5.50, it follows that
e T (t −τ2 (t))L 3 H3 L 3 e(t −τ2 (t))−g T (e(t −τ2 (t)))H3 g(e(t −τ2 (t))) ≥ 0, (5.206)
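For g = tanh, the sector bound behind (5.206) reduces (with L_3 = I and H_3 = I) to e^T e − g(e)^T g(e) ≥ 0, since |tanh(x)| ≤ |x| for every real x; a quick numerical check:

```python
import numpy as np

# Check of the sector-type bound behind (5.206) for g = tanh with L3 = H3 = I:
# e^T e - g(e)^T g(e) >= 0, because |tanh(x)| <= |x| for all real x.
rng = np.random.default_rng(4)
e = rng.uniform(-5, 5, (1000, 2))
g = np.tanh(e)
vals = (e**2).sum(axis=1) - (g**2).sum(axis=1)
assert np.all(vals >= 0)
```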
where
Ψ(t) = [e^T(t), e^T(t − τ_1(t)), e^T(t − τ_2(t)), f^T(e(t)), h^T(e(t)), g^T(e(t − τ_2(t)))]^T,

Ξ = [ Ξ11  D P C + αP D  0    P A    0             P B
      ∗    Ξ22           0   −D P A  0            −D P B
      ∗    ∗             Ξ33  0      0             0
      ∗    ∗             ∗   −H1     0             0
      ∗    ∗             ∗    ∗      τ_3 Q_3 − H2  0
      ∗    ∗             ∗    ∗      ∗            −H3 ] < 0,
P ≤ λI, (5.209)
τ3 S1 ≤ Q 3 , (5.210)
Θ = [ Θ11  0    P A   0    P B   P E
      ∗    Θ22  0     0    0     0
      ∗    ∗   −H1    0    0     0
      ∗    ∗    ∗     Θ44  0     0
      ∗    ∗    ∗     ∗   −H3    0
      ∗    ∗    ∗     ∗    ∗    −S1 ] < 0,   (5.211)
Theorem 5.56 gives a new sufficient condition under which the two coupled neural networks (5.181) and (5.185) can be synchronized for almost every initial data. Theorem 5.56 is somewhat conservative in that it depends only on the delay constants τ_3, μ_1, and μ_2. By constructing a different Lyapunov–Krasovskii functional, the next theorem depends on all the delay constants τ_k, μ_k (k = 1, 2, 3), and is therefore less conservative than Theorem 5.56.
Theorem 5.59 Under Assumptions 5.50–5.54, the two coupled neural networks
(5.181) and (5.185) without the parameter uncertainties can be synchronized for
almost every initial data, if there exist positive diagonal matrices H1 , H2 , H4 ,
P = diag{ p1 , p2 , . . . , pn }, positive definite matrices Q 1 , Q 2 , Q 3 , T1 , T2 , T3 , T4 , T5 ,
S1 , S2 , and a positive scalar λ such that the following LMIs hold:
P ≤ λI, (5.214)
τ3 (S1 + S2 ) ≤ Q 3 , (5.215)
X = [ X11  X12  0    0    P A   0    P B   0    0    P E   0
      ∗    X22  0    0    X25   0    X27   0    0    0     X211
      ∗    ∗    X33  0    0     0    0     0    0    0     0
      ∗    ∗    ∗    X44  0     0    0     0    0    0     0
      ∗    ∗    ∗    ∗    X55   0    0     0    0    0     0
      ∗    ∗    ∗    ∗    ∗     X66  0     0    0    0     0
      ∗    ∗    ∗    ∗    ∗     ∗    X77   0    0    0     0
      ∗    ∗    ∗    ∗    ∗     ∗    ∗     X88  0    0     0
      ∗    ∗    ∗    ∗    ∗     ∗    ∗     ∗    X99  0     0
      ∗    ∗    ∗    ∗    ∗     ∗    ∗     ∗    ∗   −S1    0
      ∗    ∗    ∗    ∗    ∗     ∗    ∗     ∗    ∗    ∗    −S2 ] < 0,   (5.216)
where
X 11 = −2PC + Q 1 + Q 2 −2αP +λ(R1 + R4 )+ L 1 H1 L 1 + L 2 H2 L 2 + L 4 H4 L 4 +T1 ,
X 12 = D PC + αP D, X 22 = λR2 − (1 − μ1 )Q 1 , X 25 = −D P A, X 27 = −D P B,
X 211 = D P E, X 33 = λR3 − (1 − μ2 )Q 2 , X 44 = −(1 − μ3 )T1 ,
X 55 = −H1 + τ1 T4 + τ2 T5 , X 66 = −H4 + T2 , X 77 = τ3 Q 3 − H2 + T3 ,
X 88 = −(1 − μ2 )T2 , X 99 = −(1 − μ3 )T3 .
V(t, e(t)) = Σ_{i=1}^{10} V_i(t, e(t)),   (5.219)
where
V_6(t, e(t)) = ∫_{t−τ_3(t)}^{t} e^T(s) T_1 e(s)ds,
V_7(t, e(t)) = ∫_{t−τ_2(t)}^{t} g^T(e(s)) T_2 g(e(s))ds,
V_8(t, e(t)) = ∫_{t−τ_3(t)}^{t} h^T(e(s)) T_3 h(e(s))ds,
V_9(t, e(t)) = ∫_{−τ_1(t)}^{0} ∫_{t+γ}^{t} f^T(e(s)) T_4 f(e(s))ds dγ,
V_10(t, e(t)) = ∫_{−τ_2(t)}^{0} ∫_{t+γ}^{t} f^T(e(s)) T_5 f(e(s))ds dγ,
where T_1, T_2, T_3, T_4, T_5 are positive definite matrices.
By Itô's differential formula, we infer that
LV6 (t, e(t)) = e T (t)T1 e(t) − (1 − τ˙3 (t))e T (t − τ3 (t))T1 e(t − τ3 (t))
(5.220)
≤ e T (t)T1 e(t) − e T (t − τ3 (t))[(1 − μ3 )T1 ]e(t − τ3 (t)),
LV7 (t, e(t)) = g T (e(t))T2 g(e(t)) − (1 − τ˙2 (t))g T (e(t − τ2 (t)))T2 g(e(t − τ2 (t)))
≤ g T (e(t))T2 g(e(t)) − g T (e(t − τ2 (t)))[(1 − μ2 )T2 ]g(e(t − τ2 (t))),
(5.221)
LV8 (t, e(t)) = h T (e(t))T3 h(e(t)) − (1 − τ˙3 (t))h T (e(t − τ3 (t)))T3 h(e(t − τ3 (t)))
≤ h T (e(t))T3 h(e(t)) − h T (e(t − τ3 (t)))[(1 − μ3 )T3 ]h(e(t − τ3 (t))),
(5.222)
LV_9(t, e(t)) = τ_1(t) f^T(e(t)) T_4 f(e(t)) − ∫_{t−τ_1(t)}^{t} f^T(e(s)) T_4 f(e(s))ds ≤ f^T(e(t))[τ_1 T_4] f(e(t)),   (5.223)

LV_10(t, e(t)) = τ_2(t) f^T(e(t)) T_5 f(e(t)) − ∫_{t−τ_2(t)}^{t} f^T(e(s)) T_5 f(e(s))ds ≤ f^T(e(t))[τ_2 T_5] f(e(t)).   (5.224)
Using Assumption 5.50 yields
where
Φ(t) = [e^T(t), e^T(t − τ_1(t)), e^T(t − τ_2(t)), e^T(t − τ_3(t)), f^T(e(t)), g^T(e(t)), h^T(e(t)), g^T(e(t − τ_2(t))), h^T(e(t − τ_3(t)))]^T,
Λ = [ Λ11  X12  0    0    P A    0    P B    0    0
      ∗    Λ22  0    0   −D P A  0   −D P B  0    0
      ∗    ∗    X33  0    0      0    0      0    0
      ∗    ∗    ∗    X44  0      0    0      0    0
      ∗    ∗    ∗    ∗    X55    0    0      0    0
      ∗    ∗    ∗    ∗    ∗      X66  0      0    0
      ∗    ∗    ∗    ∗    ∗      ∗    X77    0    0
      ∗    ∗    ∗    ∗    ∗      ∗    ∗      X88  0
      ∗    ∗    ∗    ∗    ∗      ∗    ∗      ∗    X99 ] < 0,
with
Λ11 = −2PC + Q 1 + Q 2 − 2αP + λ(R1 + R4 ) + P E S1−1 E T P T + L 1 H1 L 1 +
L 2 H2 L 2 + L 4 H4 L 4 + T1 , Λ22 = λR2 − (1 − μ1 )Q 1 + D P E S2−1 E T P T D T .
By Lemma 1.21, X < 0 is equivalent to Λ < 0. Let ζ = λ_min(−X); clearly, the constant ζ > 0. This fact together with (5.226) gives
P ≤ λI, (5.228)
τ3 S1 ≤ Q 3 , (5.229)
Z = [ Z11  0    0    P A   0    P B   0    0    P E
      ∗    X33  0    0     0    0     0    0    0
      ∗    ∗    X44  0     0    0     0    0    0
      ∗    ∗    ∗    Z44   0    0     0    0    0
      ∗    ∗    ∗    ∗     X66  0     0    0    0
      ∗    ∗    ∗    ∗     ∗    X77   0    0    0
      ∗    ∗    ∗    ∗     ∗    ∗     X88  0    0
      ∗    ∗    ∗    ∗     ∗    ∗     ∗    X99  0
      ∗    ∗    ∗    ∗     ∗    ∗     ∗    ∗   −S1 ] < 0,   (5.230)
where
Z 11 = −2PC + Q 2 − 2αP + λ(R1 + R4 ) + L 1 H1 L 1 + L 2 H2 L 2 + L 4 H4 L 4 + T1 ,
Z 44 = −H1 + τ2 T5 .
And the adaptive feedback controller is designed as
P ≤ λI, (5.233)
τ3 (S1 + S2 ) ≤ Q 3 , (5.234)
[ Υ11  Υ12  0    0    Υ15  0    Υ17  0    0    Υ110  0     Υ112
  ∗    X22  0    0    Υ25  0    Υ27  0    0    0     Υ211  Υ212
  ∗    ∗    X33  0    0    0    0    0    0    0     0     0
  ∗    ∗    ∗    X44  0    0    0    0    0    0     0     0
  ∗    ∗    ∗    ∗    X55  0    0    0    0    0     0     0
  ∗    ∗    ∗    ∗    ∗    X66  0    0    0    0     0     0
  ∗    ∗    ∗    ∗    ∗    ∗    X77  0    0    0     0     0
  ∗    ∗    ∗    ∗    ∗    ∗    ∗    X88  0    0     0     0
  ∗    ∗    ∗    ∗    ∗    ∗    ∗    ∗    X99  0     0     0
  ∗    ∗    ∗    ∗    ∗    ∗    ∗    ∗    ∗   −S1    0     0
  ∗    ∗    ∗    ∗    ∗    ∗    ∗    ∗    ∗    ∗    −S2    0
  ∗    ∗    ∗    ∗    ∗    ∗    ∗    ∗    ∗    ∗     ∗     Υ1212 ] < 0,   (5.235)
where
Υ11 = X11 + φ1 N_C^T N_C, Υ12 = X12 + φ2 N_C^T N_C, Υ15 = P A + φ3 N_A^T N_A,
Υ17 = P B + φ4 N_B^T N_B, Υ110 = P E + φ5 N_E^T N_E, Υ25 = X25 + φ6 N_A^T N_A,
Υ27 = X27 + φ7 N_B^T N_B, Υ211 = X211 + φ8 N_E^T N_E,
Υ112 = [P M  P M  P M  P M  P M  0  0  0],
Υ212 = [0  0  0  0  0  P M  P M  P M],
Υ1212 = diag{−φ1 I, −φ2 I, −φ3 I, −φ4 I, −φ5 I, −φ6 I, −φ7 I, −φ8 I}.
And the adaptive feedback controller is designed as
where
Γ11 = X11 + φ1^{−1} P M M^T P^T + φ1 N_C^T N_C,
Γ12 = X12 + φ2^{−1} D P M M^T P^T D^T + φ2 N_C^T N_C,
Γ15 = P A + φ3^{−1} P M M^T P^T + φ3 N_A^T N_A,
Γ17 = P B + φ4^{−1} P M M^T P^T + φ4 N_B^T N_B,
Γ110 = P E + φ5^{−1} P M M^T P^T + φ5 N_E^T N_E,
Γ25 = X25 + φ6^{−1} D P M M^T P^T D^T + φ6 N_A^T N_A,
Γ27 = X27 + φ7^{−1} D P M M^T P^T D^T + φ7 N_B^T N_B,
From Lemma 1.21, (5.238) holds if and only if (5.235) holds. This completes the
proof.
Remark 5.63 In the main result, Theorem 5.62, we considered the distributed delay, the stochastic perturbation, and the parameter uncertainties. Two common sources of disturbance on neural networks, parameter uncertainties and stochastic perturbations, are unavoidable in practice. Thus, compared with [34, 75, 83], our models are more general and more useful in practice.
Example 5.64 Consider the synchronization error system (5.186) with u(t) = ke(t) such that k̇i = −ϕi ei²(t) + ϕi di ei(t) ei(t − τ1(t)), where ω(t) is a two-dimensional Brownian motion and e(t) = (e1(t), e2(t))^T is the state of the error system (5.186). Take f(e(t)) = g(e(t)) = h(e(t)) = [tanh(e1(t)), tanh(e2(t))]^T, τ1(t) = 0.9, τ2(t) = 0.3, τ3(t) = 0.4, L1 = L2 = L3 = L4 = 1, and
Let α = 30. Using the LMI toolbox in Matlab, we obtain the following feasible solutions to LMIs (5.233)–(5.235):
Q1 = [43.1531 −4.2386; −4.2386 77.9498], Q2 = [19.4407 −2.5683; −2.5683 18.1761],
Q3 = [64.6621 −5.5682; −5.5682 44.8942], P = [4.7654 −0.1010; 0.1010 5.3444],
H1 = [31.2593 −5.9066; −5.9066 26.1920], H2 = [57.4154 −0.3747; −0.3747 45.4589],
H4 = [27.4910 −4.2464; −4.2464 25.4116], T1 = [35.2360 0; 0 35.2360],
T2 = [16.5537 −2.3088; −2.3088 15.4271], T3 = [14.3788 −2.3348; −2.3348 13.6590],
T4 = [11.2095 −1.7910; −1.7910 10.6240], T5 = [20.3754 −2.0492; −2.0492 19.7232],
S1 = [95.0493 −8.1459; −8.1459 44.3732], S2 = [33.1449 −2.1269; −2.1269 32.6036],
λ = 15.9141, φ1 = 39.3405, φ2 = 37.0910, φ3 = 41.0812, φ4 = 42.3367,
φ5 = 41.7361, φ6 = 42.5957, φ7 = 43.6130, φ8 = 42.6723.
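Any feasible LMI solution such as the matrices above must be symmetric positive definite; for 2×2 blocks this can be spot-checked with Sylvester's criterion (both leading principal minors positive). A minimal sketch, reusing two of the reported values:

```python
def is_pos_def_2x2(m):
    """Sylvester's criterion for a symmetric 2x2 matrix:
    positive definite iff m[0][0] > 0 and det(m) > 0."""
    a, b = m[0]
    c, d = m[1]
    return a > 0 and a * d - b * c > 0

# Spot-check two of the reported feasible solutions.
Q1 = [[43.1531, -4.2386], [-4.2386, 77.9498]]
S1 = [[95.0493, -8.1459], [-8.1459, 44.3732]]
assert is_pos_def_2x2(Q1) and is_pos_def_2x2(S1)
```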
Therefore, from Theorem 5.62, we conclude that the two coupled neural networks (5.181) and (5.185) can be synchronized for almost all initial data.
Now, taking the initial data as e(0) = [0.6, 0.7]^T, k(0) = [15, 20]^T, ϕ1 = 0.2, and ϕ2 = 0.3, we draw the dynamic curves of the error system, the evolution of the adaptive coupling strengths k1 and k2, and the Brownian motion ω(t) in Figs. 5.15, 5.16 and 5.17, respectively. Figure 5.15 shows that the two coupled neural networks (5.181) and (5.185) are synchronized.
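The qualitative behaviour of these figures can be reproduced with an Euler–Maruyama discretization. The following scalar sketch uses toy drift, noise, and gain values (not the actual system (5.186)); it only illustrates the adaptive law k̇ = −ϕ e²(t) driving the error to zero under a stabilizing feedback u = k e:

```python
import random

random.seed(1)
dt, phi = 0.001, 0.2
e, k = 0.6, -15.0          # toy initial error and (negative) adaptive gain
for _ in range(10_000):    # simulate t in [0, 10]
    dW = random.gauss(0.0, dt ** 0.5)          # Brownian increment
    de = (-1.0 * e + k * e) * dt + 0.1 * e * dW  # drift + control u = k*e + noise
    dk = -phi * e * e * dt                       # adaptive law  k' = -phi*e^2
    e, k = e + de, k + dk
assert abs(e) < 1e-6       # error has converged to zero
```

With a negative gain the drift is strongly contracting, so the error decays exponentially while the gain barely moves, mirroring Figs. 5.15 and 5.16.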
[Fig. 5.15 Dynamic curves of the error system]
[Fig. 5.16 Evolution of the adaptive coupling strengths k1 and k2]
[Fig. 5.17 The Brownian motion ω(t)]
5.5.5 Conclusion
In this section, an adaptive feedback controller has been designed to achieve synchronization for neutral-type neural networks with stochastic perturbation and parameter uncertainties. Using the LaSalle-type invariance principle for stochastic differential delay equations, stochastic analysis theory, and the adaptive feedback control technique, we have obtained a stochastic synchronization criterion valid for almost all initial data. A numerical example and its simulation have been given to demonstrate the effectiveness of the results obtained. The method in this section can be further extended to the synchronization of neutral-type neural networks with mixed time delays and Markovian jumping parameters. In addition, by replacing the unknown parameters in the system with adaptive learning parameters, one can study the stability and synchronization of neutral-type neural networks. Furthermore, exponential synchronization, projective synchronization, and cluster synchronization of this model can be discussed in future work.
5.6.1 Introduction
As is well known, stochastic delay neural networks (SDNNs) with Markovian switching have played an important role in science and engineering owing to their many practical applications, including image processing, pattern recognition, associative memory, and optimization problems. Over the past several decades, the characteristics of SDNNs with Markovian switching, such as their various notions of stability, have attracted considerable attention from scholars in many fields of nonlinear science. Z.D. Wang et al. considered the exponential stability of delayed recurrent neural networks with Markovian jumping parameters [56]. W. Zhang et al. investigated the stochastic stability of Markovian jumping genetic regulatory networks with mixed time delays [73]. H. Huang et al. investigated the robust stability of stochastic delayed additive neural networks with Markovian switching [8]. Researchers have presented a number of sufficient conditions and proved the global asymptotic stability and exponential stability of SDNNs with Markovian switching [33, 58, 59, 80]. The method most extensively used in recent publications is the LMI approach.
However, many evolution processes are characterized by abrupt changes of state at certain moments of time. These processes are subject to short-term perturbations, and it is known that many biological phenomena involving bursting rhythm models in medicine and biology, as well as optimal control problems in economics, exhibit impulsive effects [10, 15]. Thus impulsive effects, as a natural description of observed phenomena in several real-world problems, must be taken into account when investigating the stability of neural networks [37]. Some results on the impulsive effects of delayed neural networks have been reported [37].
In this section, we aim to analyze the global exponential stability of stochastic neutral-type impulsive neural networks with both time delays and Markovian switching. LMI-based criteria for the global exponential stability of stochastic neutral-type impulsive neural networks are developed. A numerical simulation is given to show the validity of the developed results.
In this section, we consider the neutral-type neural networks with mixed time delays described as follows:

u̇(t) = −Au(t) + B f(u(t)) + E f(u(t − τ(t))) + D u̇(t − τ(t)) + F ∫_{t−τ(t)}^{t} f(u(η)) dη + U, t ≠ tk,
Δu(t) = Ik(u), t = tk, (5.239)
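Numerically, such an impulsive system is integrated by stepping the continuous dynamics between the impulse instants tk and applying the jump map Δu = Ik(u) at each tk. A scalar Euler sketch with hypothetical coefficients (a toy version without the delay and distributed-delay terms):

```python
def simulate_impulsive(u0, a, jump, t_end, dt=0.001, period=1.0):
    """Euler integration of u' = -a*u between impulses, with the jump
    u(t_k+) = u(t_k) + I_k(u) = u(t_k) + jump*u(t_k) at t_k = k*period."""
    u, t, next_tk = u0, 0.0, period
    while t < t_end:
        u += -a * u * dt          # continuous flow between impulses
        t += dt
        if t >= next_tk:          # impulse instant: Delta u = I_k(u)
            u += jump * u
            next_tk += period
    return u

# Contractive flow plus contractive jumps drive the state to zero.
u_final = simulate_impulsive(1.0, a=2.0, jump=-0.1, t_end=5.0)
assert abs(u_final) < 0.01
```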
5.6 Exponential Stability of Neutral-Type Impulsive SNN … 235
where t ≥ 0 is the time, u(t) = (u1(t), u2(t), ..., un(t))^T ∈ R^n is the state vector associated with the n neurons, f(u(t)) = (f1(u1(t)), f2(u2(t)), ..., fn(un(t)))^T ∈ R^n denotes the activation functions of the neurons, τ(t) is the transmission delay satisfying 0 < τ(t) ≤ τ ≤ inf{tk − tk−1}/μ, μ > 1, and τ̇(t) ≤ ρ < 1, where τ and ρ are constants, and U = (U1, U2, ..., Un)^T ∈ R^n is the constant external input vector.
ẋ(t) = −A(r(t))x(t) + B(r(t)) f(x(t)) + E(r(t)) f(x(t − τ(t))) + D(r(t)) ẋ(t − τ(t)) + F(r(t)) ∫_{t−τ(t)}^{t} f(x(η)) dη
    + σ(x(t), f(x(t)), f(x(t − τ(t))), ẋ(t − τ(t)), ∫_{t−τ(t)}^{t} f(x(η)) dη, t, r(t)) ω̇(t), t ≠ tk,
Δx(t) = Ik(x), t = tk, (5.240)
As a matter of convenience, for t ≥ 0 we denote r(t) = i and A(r(t)) = A^i, B(r(t)) = B^i, E(r(t)) = E^i, D(r(t)) = D^i, F(r(t)) = F^i, respectively. In model (5.240), furthermore, for all i ∈ S, A^i = diag{a1^i, a2^i, ..., an^i} (i.e., A^i is a diagonal matrix) with positive, unknown entries ak^i > 0, and B^i = (bij)n×n, E^i = (eij)n×n, D^i = (dij)n×n, and F^i = (fij)n×n are the connection weight and delayed connection weight matrices, respectively. U^i = (U1^i, U2^i, ..., Un^i)^T ∈ R^n is the constant external input vector.
We rewrite the neutral-type neural networks with mixed time delays and nonlinearity as follows:

ẋ(t) = −A^i x(t) + B^i f(x(t)) + E^i f(x(t − τ(t))) + D^i ẋ(t − τ(t)) + F^i ∫_{t−τ(t)}^{t} f(x(η)) dη
    + σ(x(t), f(x(t)), f(x(t − τ(t))), ẋ(t − τ(t)), ∫_{t−τ(t)}^{t} f(x(η)) dη, t, i) ω̇(t), t ≠ tk,
Δx(t) = Ik(x), t = tk,
x(t0 + s) = Φ(s), s ∈ [t0 − τ, t0], (5.241)
where ω(t) = (ω1(t), ω2(t), ..., ωn(t))^T is an n-dimensional Brownian motion defined on a complete probability space (Ω, F, P) with a natural filtration {Ft}t≥0 (i.e., Ft = σ{ω(s) : 0 ≤ s ≤ t} is a σ-algebra) and independent of the Markovian process {r(t)}t≥0, and σ : R+ × S × R^n × R^n × R^n × R^n × R^n → R^{n×n} is the noise intensity matrix, which can be regarded as the result of external random fluctuations and other probabilistic causes.
The initial condition associated with system (5.241) is given in the following
form:
Assumption 5.65 The activation functions of the neurons f(x(t)) satisfy the Lipschitz condition. That is, there exists a constant G > 0 such that
Assumption 5.66 The noise intensity matrix σ(·, ·, ·, ·, ·) satisfies the linear growth condition. That is, there exist five positive constants H1, H2, H3, H4, and H5 such that
trace[(σ(t, r(t), v1(t), v2(t), v3(t), v4(t), v5(t)))^T (σ(t, r(t), v1(t), v2(t), v3(t), v4(t), v5(t)))]
≤ H1|v1(t)|² + H2|v2(t)|² + H3|v3(t)|² + H4|v4(t)|² + H5|v5(t)|²,
for all (t, r(t), v1(t), v2(t), v3(t), v4(t), v5(t)) ∈ R+ × S × R^n × R^n × R^n × R^n × R^n.
f(0) ≡ 0, σ(t, r0, 0, 0, 0, 0, 0) ≡ 0.
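The linear growth condition can be verified pointwise for a candidate intensity map. For a hypothetical diagonal intensity σ = diag(0.1 v1, 0.2 v2) on two channels (an assumption for illustration only), trace(σ^T σ) is the sum of squared entries, and the bound holds with H1 = 0.01, H2 = 0.04, in this toy case with equality:

```python
def trace_sigma_sq(v):
    """trace(sigma^T sigma) for a hypothetical diagonal intensity
    sigma = diag(0.1*v[0], 0.2*v[1]) acting on two channels."""
    s1, s2 = 0.1 * v[0], 0.2 * v[1]
    return s1 * s1 + s2 * s2

# With H1 = 0.01 and H2 = 0.04 the growth condition
# trace(sigma^T sigma) <= H1*|v1|^2 + H2*|v2|^2 holds (here with equality).
for v in [(1.0, 2.0), (-3.0, 0.5), (0.0, 0.0)]:
    lhs = trace_sigma_sq(v)
    rhs = 0.01 * v[0] ** 2 + 0.04 * v[1] ** 2
    assert abs(lhs - rhs) < 1e-12
```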
Definition 5.68 The zero solution of the system (5.241) is said to be stochastically globally exponentially stable in the mean square if, for any solution x(t, t0, Φ) with the initial condition Φ ∈ PC, there exist constants α > 0 and K > 1 such that
The main purpose of the rest of this section is to establish a criterion of stochastic global exponential stability in the mean square for the system (5.241).
Theorem 5.69 Assume that 0 < τ(t) ≤ τ and τ̇(t) ≤ ρ < 1, where τ and ρ are constants. If the following conditions are satisfied:
(i) There exist positive definite symmetric matrices Q^i, P, W, M and a positive definite diagonal matrix L with appropriate dimensions such that
Λ^i = [ Λ^i_{1,1}  Λ^i_{1,2}  Λ^i_{1,3}  Λ^i_{1,4}  Λ^i_{1,5}
          ∗        Λ^i_{2,2}  Λ^i_{2,3}  Λ^i_{2,4}  Λ^i_{2,5}
          ∗          ∗        Λ^i_{3,3}  Λ^i_{3,4}  Λ^i_{3,5}
          ∗          ∗          ∗        Λ^i_{4,4}  Λ^i_{4,5}
          ∗          ∗          ∗          ∗        Λ^i_{5,5} ] < 0, (5.243)
where
Λ^i_{1,1} = −Q^i A^i − A^i Q^i + αQ^i + A^i W A^i + G^T L G + Σ_{j=1}^{S} πij Q^j + H1^T H1,
Λ^i_{1,2} = Q^i B^i − A^i W B^i,
Λ^i_{1,3} = Q^i E^i − A^i W E^i,
Λ^i_{1,4} = Q^i D^i − A^i W D^i,
Λ^i_{1,5} = Q^i F^i − A^i W F^i,
Λ^i_{2,2} = P + (B^i)^T W B^i + τ² M − L + H2^T H2,
Λ^i_{2,3} = (B^i)^T W E^i,
Λ^i_{2,4} = (B^i)^T W D^i,
Λ^i_{2,5} = (B^i)^T W F^i,
Λ^i_{3,3} = (E^i)^T W E^i − (1 − ρ)P e^{−ατ} + H3^T H3,
Λ^i_{3,4} = (E^i)^T W D^i,
Λ^i_{3,5} = (E^i)^T W F^i,
Λ^i_{4,4} = −(D^i)^T W D^i − (1 − ρ)e^{−ατ} W − H4^T H4,
Λ^i_{4,5} = (D^i)^T W F^i,
Λ^i_{5,5} = (F^i)^T W F^i − e^{−ατ} M − H5^T H5.
LV(x(t), i, t) = αe^{αt} x^T(t) Q^i x(t) + e^{αt} f^T(x(t)) P f(x(t))
− (1 − τ̇(t)) e^{α(t−τ(t))} f^T(x(t − τ(t))) P f(x(t − τ(t)))
+ e^{αt} ẋ^T(t) W ẋ(t) − (1 − τ̇(t)) e^{α(t−τ(t))} ẋ^T(t − τ(t)) W ẋ(t − τ(t))
+ τ ∫_{−τ}^{0} e^{αt} f^T(x(t)) M f(x(t)) dη − ∫_{−τ}^{0} e^{α(t+β)} f^T(x(t + β)) M f(x(t + β)) dβ
+ 2e^{αt} x^T(t) Q^i [−A^i x(t) + B^i f(x(t)) + E^i f(x(t − τ(t))) + D^i ẋ(t − τ(t)) + F^i ∫_{t−τ(t)}^{t} f(x(η)) dη]
+ (1/2) trace[σ^T(x(t), f(x(t)), f(x(t − τ(t))), ẋ(t − τ(t)), ∫_{t−τ(t)}^{t} f(x(η)) dη, t, i) 2e^{αt} Q^i σ(x(t), f(x(t)), f(x(t − τ(t))), ẋ(t − τ(t)), ∫_{t−τ(t)}^{t} f(x(η)) dη, t, i)]
+ e^{αt} x^T(t) Σ_{j=1}^{S} πij Q^j x(t). (5.245)
LV(x(t), i, t) ≤ αe^{αt} x^T(t) Q^i x(t)
+ 2e^{αt} x^T(t) Q^i [−A^i x(t) + B^i f(x(t)) + E^i f(x(t − τ(t))) + D^i ẋ(t − τ(t)) + F^i ∫_{t−τ(t)}^{t} f(x(η)) dη]
+ e^{αt} x^T(t) Σ_{j=1}^{S} πij Q^j x(t) + e^{αt} f^T(x(t)) P f(x(t))
− (1 − ρ) e^{α(t−τ(t))} f^T(x(t − τ(t))) P f(x(t − τ(t)))
+ e^{αt} ẋ^T(t) W ẋ(t) − (1 − ρ) e^{α(t−τ(t))} ẋ^T(t − τ(t)) W ẋ(t − τ(t))
+ τ ∫_{−τ}^{0} e^{αt} f^T(x(t)) M f(x(t)) dη − τ ∫_{t−τ}^{t} e^{αη} f^T(x(η)) M f(x(η)) dη
+ (1/2) trace[σ^T(x(t), f(x(t)), f(x(t − τ(t))), ẋ(t − τ(t)), ∫_{t−τ(t)}^{t} f(x(η)) dη, t, i) 2e^{αt} Q^i σ(x(t), f(x(t)), f(x(t − τ(t))), ẋ(t − τ(t)), ∫_{t−τ(t)}^{t} f(x(η)) dη, t, i)]. (5.246)
Now, using Assumptions 5.65 and 5.66 together with Lemma 1.16 yields
and
τ ∫_{−τ}^{0} e^{αt} f^T(x(t)) M f(x(t)) dη − τ ∫_{t−τ}^{t} e^{αη} f^T(x(η)) M f(x(η)) dη
≤ τ² e^{αt} f^T(x(t)) M f(x(t)) − τ(t) e^{α(t−τ)} ∫_{t−τ(t)}^{t} f^T(x(η)) M f(x(η)) dη (5.248)
≤ τ² e^{αt} f^T(x(t)) M f(x(t)) − e^{α(t−τ)} [∫_{t−τ(t)}^{t} f(x(η)) dη]^T M [∫_{t−τ(t)}^{t} f(x(η)) dη].
+ f^T(x(t)) (P + (B^i)^T W B^i + τ² M − L + H2^T H2) f(x(t))
+ f^T(x(t)) ((B^i)^T W F^i) ∫_{t−τ(t)}^{t} f(x(η)) dη
+ f^T(x(t − τ(t))) ((E^i)^T Q^i − (E^i)^T W A^i) x(t)
+ f^T(x(t)) ((B^i)^T W D^i) ẋ(t − τ(t)) + f^T(x(t − τ(t))) ((E^i)^T W B^i) f(x(t))
+ f^T(x(t − τ(t))) ((E^i)^T W E^i − (1 − ρ)P e^{−ατ} + H3^T H3) f(x(t − τ(t)))
+ f^T(x(t − τ(t))) (E^i)^T W D^i ẋ(t − τ(t))
+ f^T(x(t − τ(t))) (E^i)^T W F^i ∫_{t−τ(t)}^{t} f(x(η)) dη
+ [∫_{t−τ(t)}^{t} f(x(η)) dη]^T ((F^i)^T Q^i − (F^i)^T W A^i) x(t)
+ [∫_{t−τ(t)}^{t} f(x(η)) dη]^T (F^i)^T W B^i f(x(t))
+ ẋ^T(t − τ(t)) ((D^i)^T Q^i − (D^i)^T W A^i) x(t)
− [∫_{t−τ(t)}^{t} f(x(η)) dη]^T (e^{−ατ} M − (F^i)^T W F^i + H5^T H5) [∫_{t−τ(t)}^{t} f(x(η)) dη]
+ ẋ^T(t − τ(t)) ((D^i)^T W B^i) f(x(t))
+ [∫_{t−τ(t)}^{t} f(x(η)) dη]^T (F^i)^T W D^i ẋ(t − τ(t))
+ ẋ^T(t − τ(t)) (D^i)^T W F^i ∫_{t−τ(t)}^{t} f(x(η)) dη
− ẋ^T(t − τ(t)) ((1 − ρ)e^{−ατ} W − (D^i)^T W D^i + H4^T H4) ẋ(t − τ(t))
+ [∫_{t−τ(t)}^{t} f(x(η)) dη]^T (F^i)^T W E^i f(x(t − τ(t)))
+ ẋ^T(t − τ(t)) (D^i)^T W E^i f(x(t − τ(t)))
When t = tk, k ∈ N, we have
V(x(tk), i, tk) = e^{αtk} x^T(tk) Q^i x(tk) + ∫_{tk−τ(tk)}^{tk} e^{αη} f^T(x(η)) P f(x(η)) dη
+ ∫_{tk−τ(tk)}^{tk} e^{αη} ẋ^T(η) W ẋ(η) dη + τ ∫_{−τ}^{0} ∫_{tk+β}^{tk} e^{αη} f^T(x(η)) M f(x(η)) dη dβ
= e^{αtk−} (x(tk−) + Ik(x(tk−)))^T Q^i (x(tk−) + Ik(x(tk−))) + ∫_{tk−−τ(tk−)}^{tk−} e^{αη} f^T(x(η)) P f(x(η)) dη
+ ∫_{tk−−τ(tk−)}^{tk−} e^{αη} ẋ^T(η) W ẋ(η) dη + τ ∫_{−τ}^{0} ∫_{tk−+β}^{tk−} e^{αη} f^T(x(η)) M f(x(η)) dη dβ
= e^{αtk−} x^T(tk−) Q^i x(tk−) + 2e^{αtk−} x^T(tk−) Q^i Ik(x(tk−)) + e^{αtk−} Ik^T(x(tk−)) Q^i Ik(x(tk−))
+ ∫_{tk−−τ(tk−)}^{tk−} e^{αη} f^T(x(η)) P f(x(η)) dη + ∫_{tk−−τ(tk−)}^{tk−} e^{αη} ẋ^T(η) W ẋ(η) dη
+ τ ∫_{−τ}^{0} ∫_{tk−+β}^{tk−} e^{αη} f^T(x(η)) M f(x(η)) dη dβ
≤ V(x(tk−), i, tk−) + 2e^{αtk−} σk ‖Q^i‖ |x(tk−)|² + e^{αtk−} σk² ‖Q^i‖ |x(tk−)|²
≤ (1 + (2σk ‖Q^i‖ + σk² ‖Q^i‖)/λmin(Q^i)) V(x(tk−), i, tk−)
= θk V(x(tk−), i, tk−), (5.251)
where θk = 1 + (2σk ‖Q^i‖ + σk² ‖Q^i‖)/λmin(Q^i).
Hence,
Because μτ ≤ inf{tk − tk−1}, we have k − 1 ≤ (tk−1 − t0)/(μτ), which implies
H^{k−1} ≤ e^{(ln H / (μτ))(t − t0)}, t ∈ [tk−1, tk),
that is,
V(x(t), t, i) ≤ V(x(t0), t0, i) e^{(ln H / (μτ))(t − t0)}, t ∈ [tk−1, tk),
Remark 5.70 Let σ(·, ·, ·, ·, ·) = 0; then the system (5.241) reduces to the neutral-type impulsive neural networks studied in [1]. Thus, Theorem 5.69 in this section can be regarded as an extension of Theorem 3.1 in [1]. Note that, for delayed neural networks, the sufficient conditions proposed in [1] are applicable only to systems without noise perturbation.
Remark 5.71 Let D = 0 and σ(·, ·, ·, ·, ·) = 0; then the system (5.241) reduces to the impulsive neural networks (4) in [74]. It is worth noting that neutral-type systems contain information about the derivative of the past state, which further describes and models the dynamics of such complex neural reactions. The criteria proposed in [74], however, are applicable only to the stability analysis of neural networks that do not involve the neutral term.
Example 5.72 Consider neutral-type impulsive neural networks with time delay and
Markovian switching (5.241) and the following network parameters:
A1 = [2.9 0; 0 2.8], A2 = [2.5 0; 0 2.6], B1 = [0.2 0.18; 0.3 0.19], B2 = [0.3 0; 0.4 0],
D1 = [0.2 0; 0 0.2], D2 = [0.3 0; 0 0.3], E1 = [0.8 0.2; 0.2 0.3], E2 = [2.5 1.5; 1 2.5],
F1 = [4 0.04; 0.14 4], F2 = [4 1.5; 1 4], G = [1 0; 0 1],
H1 = [0.08 0; 0 0.08], H2 = [0.07 0; 0 0.07], H3 = [0.09 0; 0 0.09],
H4 = [0.06 0; 0 0.06], H5 = [0.04 0; 0 0.04], Π = [−0.45 0.45; 0.5 −0.5],
f(x(t)) = tanh(x(t)).
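For r(t) to be well defined, Π must be a valid generator of the two-state Markov chain: nonnegative off-diagonal rates and zero row sums, which the printed Π satisfies. The chain itself can be simulated by sampling exponential holding times (the standard construction; a sketch follows):

```python
import random

def is_generator(P):
    """Check off-diagonal entries >= 0 and each row sums to ~0."""
    n = len(P)
    for i in range(n):
        if abs(sum(P[i])) > 1e-9:
            return False
        if any(P[i][j] < 0 for j in range(n) if j != i):
            return False
    return True

PI = [[-0.45, 0.45], [0.5, -0.5]]
assert is_generator(PI)

def simulate_chain(P, state, t_end, rng):
    """Two-state CTMC: hold for an Exp(-P[i][i]) time, then switch."""
    t, switches = 0.0, 0
    while True:
        t += rng.expovariate(-P[state][state])  # holding time in state i
        if t >= t_end:
            return state, switches
        state, switches = 1 - state, switches + 1

state, switches = simulate_chain(PI, 0, 100.0, random.Random(0))
assert switches > 0 and state in (0, 1)
```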
Using the Matlab LMI toolbox, we solve the LMI (5.243) and obtain
Q1 = [3.5780 −2.2265; −2.2265 3.4268], Q2 = [−0.6333 −0.0025; −0.0025 0.1794], P = [1.3005 −1.4388; −1.4388 1.5744],
W = [0.2109 −0.2999; −0.2999 0.3364], M = [19.7642 −12.6598; −12.6598 23.7158], L = [2.6151 −2.0063; −2.0063 23.7158].
It can be checked that Assumptions 5.65–5.66 and the inequality (5.243) are satisfied and that the matrices Q^i, P, W, M, L are positive definite and symmetric. Therefore, by Theorem 5.69, the noise-perturbed neutral-type impulsive neural network with time delays and Markovian switching (5.241) is globally exponentially stable. The simulation results are given in Figs. 5.18 and 5.19, which show the state vector x(t) of system (5.241) and further illustrate the stability. From these simulations, one can see that the stochastic neutral-type impulsive neural network with time delays and Markovian switching is globally exponentially stable.
[Fig. 5.18 Simulation result for system (5.241)]
[Fig. 5.19 The states x_i(t) (i = 1, 2) of system (5.241) versus t (seconds)]
5.6.5 Conclusions
In this section, we have proposed a concept of global exponential stability for stochastic neutral-type impulsive neural networks with both time delays and Markovian switching. Making use of the LMI and Lyapunov functional methods, we have obtained a sufficient condition under which stochastic neutral-type impulsive neural networks with both time delays and Markovian switching are globally exponentially stable. The condition obtained in this section depends on the generator of the Markovian jumping model and can be easily checked. A simulation result is provided that demonstrates the effectiveness of our theoretical results and analytical tools.
5.7.1 Introduction
information science, image processing, and so on. In recent years, different control methods have been derived to achieve different types of synchronization [9, 51, 62, 63].
By utilizing the adaptive control method, the parameters of the system can be estimated and the control law updated as the neural networks evolve. In the past decade, much attention has been devoted to the adaptive synchronization of neural networks (see, e.g., [4, 21, 44, 48, 81] and the references therein).
Recently, the stability and synchronization of neutral-type systems, especially neutral-type neural networks, which depend on the derivatives of both the state and the delayed state, have attracted much attention (see, e.g., [16, 20, 32, 34, 79, 84] and the references therein), since some physical systems in the real world can be described by neutral-type models (see [38]).
However, the neutral term involving the derivative of the delayed state was not taken into account in the neural networks proposed in [4, 21, 44, 48, 81], while adaptive control was not investigated in [16, 20, 32, 34]. Zhou et al. [79] did not study almost sure (a.s.) synchronization for neutral-type neural networks, and Zhu et al. [84] did not consider the synchronization problem for neural networks with Markovian switching parameters.
In this section, the problem of almost sure (a.s.) asymptotic adaptive synchronization for neutral-type neural networks with stochastic perturbation and Markovian switching is studied. Firstly, we propose a new criterion of a.s. asymptotic stability for a general neutral-type stochastic differential equation, which extends existing results. Secondly, based on this stability criterion, by making use of the Lyapunov functional method and designing an adaptive controller, we obtain a condition for a.s. asymptotic adaptive synchronization of neutral-type neural networks with stochastic perturbation and Markovian switching. Finally, we give a numerical example to illustrate the effectiveness of the method and results obtained in this section.
Consider the following neutral-type neural network, called the drive system, represented in the following compact form:
where x(t) = [x1(t), x2(t), ..., xn(t)]^T ∈ R^n is the state vector associated with the n neurons, f(·) denotes the neuron activation functions, and τ represents the transmission delay. For t ≥ 0, we denote r(t) = i, A(r(t)) = A^i, B(r(t)) = B^i,
where y(t) = [y1(t), y2(t), ..., yn(t)]^T ∈ R^n is the state vector of the response system (5.256), U(r(t)) = U^i = [U1^i, U2^i, ..., Un^i]^T ∈ R^n is a control input vector, ω(t) = [ω1(t), ω2(t), ..., ωn(t)]^T is an n-dimensional Brownian motion defined on the complete probability space (Ω, F, P) with a natural filtration {Ft}t≥0 (i.e., Ft = σ{ω(s) : 0 ≤ s ≤ t} is a σ-algebra) and independent of the Markovian process {r(t)}t≥0, and σ : R+ × S × R^n × R^n → R^{n×n} is the noise intensity matrix. It is known that external random fluctuations and other probabilistic causes often lead to this type of stochastic perturbation.
The initial condition of system (5.256) is given in the following form:
with e(0) = 0.
The primary object here is to deal with the adaptive synchronization problem for the drive system (5.254) and the response system (5.256), and to derive sufficient conditions under which the response system (5.256) synchronizes with the drive system (5.254).
To prove our main results, the following assumptions are needed.
Assumption 5.73 The activation functions of the neurons f(·) satisfy the Lipschitz condition. That is, there exist constants L1, L2 > 0 such that
Assumption 5.74 The noise intensity matrix σ(·, ·, ·, ·) satisfies the boundedness condition. That is, there exist two positive constants H1 and H2 such that
for all (t, r (t), x(t), y(t)) ∈ R+ × S × Rn × Rn , and σ(t, i, 0, 0) = 0 for all
(t, i) ∈ R+ × S.
Assumption 5.75 For the external input matrix D^i (i ∈ S), there exist positive constants κi ∈ (0, 1) such that
Definition 5.76 (See Ref. [33]) The trivial solution e(t; ξ, i 0 ) of the error system
(5.258) is said to be almost surely asymptotically stable if
If the error system (5.258) is almost surely asymptotically stable, then the drive system (5.254) and the response system (5.256) are said to be almost surely asymptotically synchronized.
In this section, we give some criteria of adaptive synchronization for the drive system (5.254) and the response system (5.256). First, we establish a general result which can be widely applied.
Almost Surely Asymptotically Stable
Theorem 5.77 Let (H1), (H2), and (H3) hold. Assume that there are functions V ∈ C^{2,1}(R+ × S × R^n; R), γ ∈ L^1(R+; R+), and W1, W2, W3 ∈ C(R^n; R+) such that
(C2) LV(t, i, x, y) ≤ γ(t) − W1(x) + W2(y) − W3(x − D(y, i)) (5.262)
Then for any initial data {x(θ) : −τ̄ ≤ θ ≤ 0} = ξ ∈ C^b_{F0}([−τ̄, 0]; R^n) and r(0) = i0 ∈ S,
(R1) Eq. (1.4) has a unique global solution, denoted by x(t; ξ, i0).
(R2) Assume that W3(x) = 0 if and only if x = 0. The solution x(t; ξ, i0) obeys
that
Theorem 5.80 For systems (5.254) and (5.256), let Assumptions 5.73–5.75 hold, and let the error system (5.258) have a unique solution, denoted by e(t; ξ, i0), on t ≥ 0 for any initial data {e(θ) : −τ ≤ θ ≤ 0} = ξ ∈ C^b_{F0}([−τ, 0]; R^n) with e(0) = 0. Assume also that there exist a symmetric matrix Q1 > 0, diagonal matrices P^i > 0 (i = 1, ..., S), and positive scalars ρ, ρ1, ρ2, εi (i = 1, 2, 3, 4) such that
P^i < ρI (i = 1, 2, ..., S) (5.266)
[ (L2² ρ1 + H1 ρ)I − 2P^i C^i   C^i P^i   L2 I   τ L2 I
   ∗                           −ε1 I      0      0
   ∗                            0       −ε2 I    0
   ∗                            0         0    −ε4 I ] < 0 (5.267)
(i = 1, 2, ..., S)
(i = 1, 2, . . . , S)
[ Ξ11   C^i P^i   L2 I   L2 I   τ L2 I
   ∗    −ε1 I      0      0      0
   ∗      0      −ε2 I    0      0
   ∗      0        0    −ε3 I    0
   ∗      0        0      0    −ε4 I ] < 0 (5.270)
(i = 1, 2, ..., S),
where K* = diag{k1*, k2*, ..., kn*} with kj* (j = 1, 2, ..., n) arbitrary negative constants to be chosen, and Ξ11 = (L2² ρ1 + H1 ρ − ρ2 L1² + H2 ρ)I − 2P^i C^i + ε1 D^{iT} D^i. We choose the feedback control U^i with the update law as U^i = diag{k1, ..., kn}(e − D^i eτ) and k̇j = −βj pj^i (e − D^i eτ)j², where βj > 0 (j = 1, 2, ..., n) are arbitrary constants, pj^i is the jth diagonal entry of the matrix P^i, and (e − D^i eτ)j is the jth element of e − D^i eτ. Then the error system (5.258) is almost surely asymptotically stable. Therefore, the drive system (5.254) and the response system (5.256) are adaptively synchronized a.s.
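In simulation, the update law k̇j = −βj pj^i (e − D^i eτ)j² is discretized by a forward-Euler step. Because the right-hand side is non-positive, each gain is non-increasing and freezes once the corresponding component of e − D^i eτ vanishes; a sketch with hypothetical values:

```python
def gain_step(k, beta, p, z, dt):
    """One forward-Euler step of k' = -beta * p * z^2,
    where z is the j-th component of e - D^i * e_tau."""
    return k + dt * (-beta * p * z * z)

k = 0.5
for z in [0.4, 0.2, 0.1, 0.0]:     # error component shrinking to zero
    k_next = gain_step(k, beta=1.0, p=0.39, z=z, dt=0.01)
    assert k_next <= k             # gain is monotonically non-increasing
    k = k_next
```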
Proof Under Assumptions 5.73–5.75 and the existence of e(t; ξ, i0), it can be seen that F̄(t, r(t), e(t), eτ(t)), Ḡ(t, r(t), e(t), eτ(t)), and D̄(eτ(t), r(t)) satisfy (H1), (H2), and (H3), where
where Q2 = ε4^{−1} τ L2² I, and computing LV(t, i, e, eτ) along the trajectory of the error system yields
LV(t, i, e, eτ) = Vt(t, i, e − D^i eτ)
+ Vx(t, i, e − D^i eτ) [−C^i e + A^i g(e) + B^i g(eτ) + E^i ∫_{t−τ}^{t} g(e(s)) ds + U^i] (5.272)
+ (1/2) trace[σ^T(t, i, e, eτ) Vxx(t, i, e − D^i eτ) σ(t, i, e, eτ)]
+ Σ_{k=1}^{S} γik V(t, k, e − D^i eτ).
While
Vt(t, i, e − D^i eτ) = g^T(e(t)) Q1 g(e(t)) − g^T(e(t − τ)) Q1 g(e(t − τ))
+ τ e^T(t) Q2 e(t) − ∫_{−τ}^{0} e^T(t + s) Q2 e(t + s) ds
+ Σ_{j=1}^{n} (2/βj)(kj − kj*) k̇j (5.273)
Vx x (t, i, e − D i eτ ) = 2P i (5.275)
Σ_{k=1}^{S} γik V(t, k, e − D^i eτ) = Σ_{k=1}^{S} γik (e − D^i eτ)^T P^k (e − D^i eτ) (5.276)
So
LV(t, i, e, eτ) ≤ g^T(e) Q1 g(e) − g^T(eτ) Q1 g(eτ) + τ e^T Q2 e
− ∫_{t−τ}^{t} e^T(s) Q2 e(s) ds − 2 Σ_{j=1}^{n} (kj − kj*) pj^i (e − D^i eτ)j²
+ 2(e − D^i eτ)^T P^i [−C^i e + A^i g(e) + B^i g(eτ) + E^i ∫_{t−τ}^{t} g(e(s)) ds + diag{k1, ..., kn}(e − D^i eτ)] (5.277)
+ trace[σ^T(t, i, e, eτ) P^i σ(t, i, e, eτ)]
+ Σ_{k=1}^{S} γik (e − D^i eτ)^T P^k (e − D^i eτ)
= 2(e − D i eτ )T P i K ∗ (e − D i eτ ). (5.279)
2(e − D^i eτ)^T P^i A^i g(e)
≤ ε2 (e − D^i eτ)^T P^i A^i A^{iT} P^i (e − D^i eτ) + ε2^{−1} g^T(e) g(e) (5.283)
≤ ε2 (e − D^i eτ)^T P^i A^i A^{iT} P^i (e − D^i eτ) + ε2^{−1} L2² e^T e

2(e − D^i eτ)^T P^i B^i g(eτ)
≤ ε3 (e − D^i eτ)^T P^i B^i B^{iT} P^i (e − D^i eτ) + ε3^{−1} g^T(eτ) g(eτ) (5.284)
≤ ε3 (e − D^i eτ)^T P^i B^i B^{iT} P^i (e − D^i eτ) + ε3^{−1} L2² eτ^T eτ

2(e − D^i eτ)^T P^i E^i ∫_{t−τ}^{t} g(e(s)) ds
≤ ε4 (e − D^i eτ)^T P^i E^i E^{iT} P^i (e − D^i eτ) + ε4^{−1} [∫_{t−τ}^{t} g(e(s)) ds]^T [∫_{t−τ}^{t} g(e(s)) ds] (5.285)
≤ ε4 (e − D^i eτ)^T P^i E^i E^{iT} P^i (e − D^i eτ) + ε4^{−1} τ ∫_{t−τ}^{t} g^T(e(s)) g(e(s)) ds
≤ ε4 (e − D^i eτ)^T P^i E^i E^{iT} P^i (e − D^i eτ) + ε4^{−1} τ L2² ∫_{t−τ}^{t} e^T(s) e(s) ds
and
Therefore,
LV(t, i, e, eτ) ≤ −W1(e) + W2(eτ) − W3(e − D^i eτ) (5.287)
where
W1(e) = e^T W̄1 e,
W2(eτ) = eτ^T W̄2 eτ, (5.288)
W3(e − D^i eτ) = (e − D^i eτ)^T W̄3 (e − D^i eτ),
with
and
W̄3 = −(ε2 P^i A^i A^{iT} P^i + ε3 P^i B^i B^{iT} P^i + ε4 P^i E^i E^{iT} P^i + Σ_{k=1}^{S} γik P^k + 2P^i K*) (5.291)
Now, (5.267) is equivalent to W̄1 > 0, (5.268) is just W̄2 > 0, (5.269) is equivalent to W̄3 > 0, and (5.270) is equivalent to W̄1 > W̄2. So from the conditions of this theorem, we know that conditions (C1), (C2), and (C3) in Theorem 5.77 are all satisfied. Hence, by Theorem 5.77, the error system (5.258) is almost surely asymptotically stable, and therefore the drive system (5.254) and the response system (5.256) are adaptively synchronized a.s. The proof of Theorem 5.80 is completed.
In this section, a numerical example is given to support the main results obtained above.
Letting Γ = [−4 4; 2 −2], which means S = 2, we give the parameters concerning the drive system (5.254), the response system (5.256), and the error system (5.258) as follows:
D(1) = [0.2 0; 0 0.3], D(2) = [0.3 0; 0 0.1],
C(1) = [6 1; 1 7], C(2) = [4 0; 0 7],
A(1) = [−4 2; −6 2], A(2) = [−3 2; −3 1],
B(1) = [−2 1; 1 −3], B(2) = [−4 3; 1 −2],
E(1) = [−5 2; 2 −3], E(2) = [−4 −2; −2 −3],
J(1) = [1, 0]^T, J(2) = [−1, 1]^T.
We further set τ = 1, f(·) = tanh(·), σ(·) = e(t) + eτ(t). Then we can confirm that Assumptions 5.73–5.75 are satisfied with L1 = 0, L2 = 1, H1 = H2 = 2, and κ1 = κ2 = κ = 0.3.
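Assumption 5.75 requires the neutral operator D(i) to be a contraction. For these diagonal matrices the induced 2-norm is the largest absolute diagonal entry, so κ = 0.3 < 1 indeed works, as a quick check confirms:

```python
def diag_norm(d):
    """Spectral norm of a diagonal matrix, given as its diagonal."""
    return max(abs(x) for x in d)

D1, D2 = (0.2, 0.3), (0.3, 0.1)       # diagonals of D(1) and D(2)
kappa = max(diag_norm(D1), diag_norm(D2))
assert kappa == 0.3 and kappa < 1.0   # contraction constant kappa = 0.3
```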
Letting K* = [6 0; 0 8] and using the LMI toolbox in Matlab, we solve the matrix inequalities (5.265)–(5.270) and obtain the following results:
Q1 = [0.3196 0; 0 0.3196], P^1 = [0.3899 0; 0 0.3899],
P^2 = [0.4690 0; 0 0.4690], ρ = 0.5077,
ρ1 = 0.4498, ρ2 = 0.1078, ε1 = 9.9260,
ε2 = 178.9801, ε3 = 194.4959, ε4 = 226.2334.
So from Theorem 5.80, the drive system (5.254) and the response system (5.256)
are adaptive synchronized a.s., when the error system (5.258) has a unique solution.
5.7 Asymptotical Adaptive Synchronization of Neutral … 255
[Fig. 5.20 The Markov chain r(t)]
[Fig. 5.21 The states x2(t) and y2(t) of the drive and response systems]
To illustrate the effectiveness of the result in this section, we depict the evolution of the systems in Figs. 5.20, 5.21, 5.22 and 5.23. Figure 5.20 shows the two-state Markov chain in the systems. Figure 5.21 shows that the response system (5.256) synchronizes with the drive system (5.254) from about t = 7. It can be seen from Fig. 5.22 that the state of the error system (5.258) tends to zero from t = 7, which also illustrates the synchronization of the drive system (5.254) and the response system (5.256). The update law of the adaptive control gain K(t) is depicted in Fig. 5.23, which shows that the control gain K(t) no longer varies after the response system (5.256) synchronizes with the drive system (5.254).
[Fig. 5.22 The state e(t) of the error system]
[Fig. 5.23 The adaptive control gain K(t)]
5.7.5 Conclusions
In this section, we have proposed a new criterion of a.s. asymptotic stability for a general neutral-type stochastic differential equation, which extends existing results. Based upon this new stability criterion, we have obtained a condition for a.s. asymptotic adaptive synchronization of neutral-type neural networks with stochastic perturbation and Markovian switching, by making use of the Lyapunov functional method and designing an adaptive controller. The synchronization condition is expressed as a linear matrix inequality, which can be easily solved by Matlab. Finally, we have employed a numerical example to illustrate the effectiveness of the method and results obtained in this section.
Appendix
Proof The proof of (R1) is the same as in [32] and is omitted here. To prove (R2), we divide the argument into five steps. For simplicity, we write D for D̄ in what follows.
Step 1. We prove that the solution x(t; ξ, i0) of the system obeys
lim sup_{t→∞} V(t, r(t), x(t) − D(r(t), x(t − τ))) < ∞ a.s. (5.292)
In fact, let
M(t) = ∫_0^t Vx(s, r(s), x(s) − D(x(s − τ), r(s))) dB(s)
+ ∫_0^t ∫_R [V(s, i0 + h̄(r(s−), l), x(s) − D(x(s − τ), r(s))) − V(s, r(s), x(s) − D(x(s − τ), r(s)))] μ(ds, dl) (5.293)
Step 2. We prove that
sup_{0≤t<∞} V(t, r(t), x(t) − D(r(t), x(t − τ))) < ∞ a.s.
Note that
|x(t)| ≤ |D(r(t), x(t − τ))| + |x(t) − D(r(t), x(t − τ))| ≤ κ|x(t − τ)| + |x(t) − D(r(t), x(t − τ))|,
so that
sup_{0≤t≤T} |x(t)| ≤ κ sup_{0≤t≤T} |x(t − τ)| + sup_{0≤t≤T} |x(t) − D(r(t), x(t − τ))|
= κβ + κ sup_{0≤t≤T} |x(t)| + sup_{0≤t≤T} |x(t) − D(r(t), x(t − τ))|,
and hence, since κ < 1,
(1 − κ) sup_{0≤t≤T} |x(t)| ≤ κβ + sup_{0≤t≤T} |x(t) − D(r(t), x(t − τ))|.
In fact, taking expectations on both sides of (5.294) and letting t → ∞, we obtain
E ∫_0^∞ W(s) ds < ∞ (5.298)
where W(s) = W1(x(s)) − W2(x(s)) + W3(z(s)) and z(s) = x(s) − D(r(s), x(s − τ)).
This implies
\[
\int_0^\infty W(s)\,ds < \infty \quad \text{a.s.}, \tag{5.299}
\]
or equivalently,
\[
\int_0^\infty (W_1(x(s)) - W_2(x(s)))\,ds < \infty \quad \text{a.s.}
\]
and
\[
\int_0^\infty W_3(z(s))\,ds < \infty \quad \text{a.s.}
\]
Now we will prove (5.297): \(\lim_{t\to\infty} W_3(z(t)) = 0\) a.s. In fact, if (5.297) is false, then
\[
P(\Omega_1) \ge 3\varepsilon, \tag{5.301}
\]
\[
P(\Omega_2) \ge 1 - \varepsilon, \tag{5.302}
\]
and hence
\[
P(\Omega_1 \cap \Omega_2) \ge 2\varepsilon. \tag{5.303}
\]
Let I_A denote the indicator function of the set A. Noting the fact that σ_{2k} < ∞ whenever σ_{2k−1} < ∞, we can derive from (5.298) that
\[
\infty > E\int_0^\infty W_3(z(t))\,dt \ge \sum_{k=1}^\infty E\Big[ I_{\{\sigma_{2k-1}<\infty,\,\sigma_{2k}<\infty,\,\tau_h=\infty\}} \int_{\sigma_{2k-1}}^{\sigma_{2k}} W_3(z(t))\,dt \Big] \ge \varepsilon \sum_{k=1}^\infty E\big[ I_{\{\sigma_{2k-1}<\infty,\,\tau_h=\infty\}} (\sigma_{2k}-\sigma_{2k-1}) \big]. \tag{5.305}
\]
On the other hand, by (H1), there exists a constant K_h > 0 such that
\[
E\Big[ I_{\{\tau_h\wedge\sigma_{2k-1}<\infty\}} \sup_{0\le t\le T} |z(\tau_h\wedge(\sigma_{2k-1}+t)) - z(\tau_h\wedge\sigma_{2k-1})|^2 \Big] \le 2K_h^2 T(T+4). \tag{5.306}
\]
Since W_3(·) is continuous in R^n, it is uniformly continuous in the closed ball \(\bar S_h = \{x \in R^n : |x| \le h\}\). We can therefore choose δ = δ(ε) > 0 so small that
\[
\frac{2K_h^2 T(T+4)}{\delta^2} < \varepsilon.
\]
It then follows from (5.306) and Chebyshev's inequality (Lemma 1.19) that
\[
P\Big( \{\sigma_{2k-1}\wedge\tau_h < \infty\} \cap \Big\{ \sup_{0\le t\le T} |z(\tau_h\wedge(\sigma_{2k-1}+t)) - z(\tau_h\wedge\sigma_{2k-1})| \ge \delta \Big\} \Big) \le \frac{1}{\delta^2}\big(2K_h^2 T(T+4)\big) < \varepsilon.
\]
Noting that \(z(\tau_h\wedge(\sigma_{2k-1}+t)) = z(\sigma_{2k-1}+t)\) on \(\{\tau_h=\infty\}\), we hence have
\[
P\Big( \{\sigma_{2k-1}<\infty,\ \tau_h=\infty\} \cap \Big\{ \sup_{0\le t\le T} |z(\sigma_{2k-1}+t) - z(\sigma_{2k-1})| \ge \delta \Big\} \Big) < \varepsilon,
\]
and therefore, by the uniform continuity of W_3(·) in \(\bar S_h\),
\[
P\Big( \{\sigma_{2k-1}<\infty,\ \tau_h=\infty\} \cap \Big\{ \sup_{0\le t\le T} |W_3(z(\sigma_{2k-1}+t)) - W_3(z(\sigma_{2k-1}))| < \varepsilon \Big\} \Big) > \varepsilon. \tag{5.309}
\]
Set
\[
\bar\Omega_k = \Big\{ \sup_{0\le t\le T} |W_3(z(\sigma_{2k-1}+t)) - W_3(z(\sigma_{2k-1}))| < \varepsilon \Big\}.
\]
Noting (5.305) and (5.309), we then derive the contradiction
\[
\infty > \varepsilon \sum_{k=1}^\infty E\big[I_{\{\sigma_{2k-1}<\infty,\,\tau_h=\infty\}}(\sigma_{2k}-\sigma_{2k-1})\big] \ge \varepsilon \sum_{k=1}^\infty E\big[I_{\{\sigma_{2k-1}<\infty,\,\tau_h=\infty\}\cap\bar\Omega_k}(\sigma_{2k}-\sigma_{2k-1})\big] \ge \varepsilon T \sum_{k=1}^\infty P\big(\{\sigma_{2k-1}<\infty,\ \tau_h=\infty\}\cap\bar\Omega_k\big) \ge \varepsilon T \sum_{k=1}^\infty \varepsilon = \infty,
\]
which implies that z̄ ∈ Ker(W_3), whence Ker(W_3) ≠ ∅. From this, we can show that
for some ε̄ > 0. Since {z(t_k, ω̄)}_{k≥0} is bounded, we can find a subsequence {z(t̄_k, ω̄)}_{k≥0} which converges to ẑ ∈ R^n. Clearly, ẑ ∉ Ker(W_3), so W_3(ẑ) > 0. But, by (5.311),
But by (H2),
References
1. H. Bao, J. Cao, Stochastic global exponential stability for neutral-type impulsive neural net-
works with mixed time-delays and Markovian jumping parameters. Commun. Nonlinear Sci.
Numer. Simul. 16(9), 3786–3791 (2011)
2. G. Cai, Q. Yao, X. Fan, J. Ding, Adaptive projective synchronization in an array of asymmetric
neural networks. J. Comput. 7(8), 2024–2030 (2012)
3. S. Chen, J. Cao, Projective synchronization of neural networks with mixed time-varying delays
and parameter mismatch. Nonlinear Dyn. 67(2), 1397–1406 (2012)
4. X. Ding, Y. Gao, W. Zhou, D. Tong, H. Su, Adaptive almost surely asymptotically synchro-
nization for stochastic delayed neural networks with Markovian switching. Adv. Differ. Equ.
2013(1), 1–12 (2013)
5. J. Feng, S. Xu, Y. Zou, Delay-dependent stability of neutral type neural networks with distrib-
uted delays. Neurocomputing 72(10–12), 2576–2580 (2009)
6. J.M. González-Miranda, Amplification and displacement of chaotic attractors by means of
unidirectional chaotic driving. Phys. Rev. E 57(6), 7321–7324 (1998)
7. W.L. He, J.D. Cao, Adaptive synchronization of a class of chaotic neural networks with known
or unknown parameters. Phys. Lett. A 372(4), 408–416 (2008)
8. H. Huang, D.W.C. Ho, Y. Qu, Robust stability of stochastic delayed additive neural networks
with Markovian switching. Neural Netw. 20(7), 799–809 (2007)
9. X. Huang, J. Cao, Generalized synchronization for delayed chaotic neural networks a novel
coupling scheme. Nonlinearity 19(12), 2797–2811 (2006)
10. H. Huo, W. Li, Existence of positive periodic solution of a neutral impulsive delay predator-prey
system. Appl. Math. Comput. 185(1), 499–507 (2007)
11. H.R. Karimi, Robust synchronization and fault detection of uncertain master-slave systems
with mixed time-varying delays and nonlinear perturbations. Int. J. Control Autom. Syst. 9(4),
671–680 (2011)
12. H.R. Karimi, A sliding mode approach to H∞ synchronization of master-slave time-delay
systems with Markovian jumping parameters and nonlinear uncertainties. J. Frankl. Inst. 349(4),
1480–1496 (2012)
13. H.R. Karimi, H. Gao, LMI-based H∞ synchronization of second-order neutral master-slave
systems using delayed output feedback control. Int. J. Control Autom. Syst. 7(3), 371–380
(2009)
14. H.R. Karimi, M. Zapateiro, N. Luo, Adaptive synchronization of master-slave systems with
mixed neutral and discrete time-delays and nonlinear perturbations. Asian J. Control 14(1),
251–257 (2012)
15. S. Karthikeyan, K. Balachandran, Controllability of nonlinear stochastic neutral impulsive
systems. Nonlinear Anal. Hybrid Syst. 3(3), 266–276 (2009)
16. V. Kolmanovskii, N. Koroleva, T. Maizenberg, X. Mao, A. Matasov, Neutral stochastic differ-
ential delay equations with Markovian switching. Stoch. Anal. Appl. 21(4), 839–867 (2003)
17. O.M. Kwon, M.J. Park, S.M. Lee, J.H. Park, E.-J. Cha, Stability for neural networks with
time-varying delays via some new approaches. IEEE Trans. Neural Netw. Learn. Syst. 24(2),
181–193 (2013)
18. T.H. Lee, J.H. Park, O.M. Kwon, S.M. Lee, Stochastic sampled-data control for state estimation
of time-varying delayed neural networks. Neural Netw. 46(1), 99–108 (2013)
19. F. Li, X. Wang, P. Shi, Robust quantized H∞ control for network control systems with Markov-
ian jumps and time delays. Int. J. Innov. Comput. Inf. Control 9(12), 4889–4902 (2013)
20. X. Li, Global robust stability for stochastic interval neural networks with continuously distrib-
uted delays of neutral type. Appl. Math. Comput. 215(12), 4370–4384 (2010)
21. X. Li, J. Cao, Adaptive synchronization for delayed neural networks with stochastic perturba-
tion. J. Frankl. Inst. 354(7), 779–791 (2008)
22. C.-H. Lien, K.-W. Yu, Y.-F. Lin, Y.-J. Chung, L.-Y. Chung, Exponential convergence rate
estimation for uncertain delayed neural networks of neutral type. Chaos Solitons Fractals
40(5), 2491–2499 (2009)
23. L. Liu, Z. Han, W. Li, Global stability analysis of interval neural networks with discrete and
distributed delays of neutral type. Expert Syst. Appl. 36(3), 7328–7331 (2009)
24. P. Liu, Delay-dependent robust stability analysis for recurrent neural networks with time-
varying delay. Int. J. Innov. Comput. Inf. Control 9(8), 3341–3355 (2013)
25. Y. Liu, Stochastic asymptotic stability of Markovian jumping neural networks with Markov
mode estimation and mode-dependent delays. Phys. Lett. A 373(41), 3741–3742 (2009)
26. Y. Liu, Z. Wang, X. Liu, Stability analysis for a class of neutral-type neural networks with
Markovian jumping parameters and mode-dependent mixed delays. Neurocomputing 94, 46–
53 (2012)
27. X. Lou, B. Cui, Stochastic stability analysis for delayed neural networks of neutral type with
Markovian jump parameters. Chaos Solitons Fractals 39(5), 2188–2197 (2009)
28. J. Lu, D.W.C. Ho, J. Cao, J. Kurths, Exponential synchronization of linearly coupled neural
networks with impulsive disturbances. IEEE Trans. Neural Netw. 22(2), 329–336 (2011)
29. Q. Lu, L. Zhang, P. Shi, H. Karimi, Control design for a hypersonic aircraft using a switched
linear parameter-varying system approach. Proc. Inst. Mech. Eng. Part I: J. Syst. Control Eng.
227(1), 85–95 (2013)
30. H.H. Mai, X.F. Liao, C.D. Li, A semi-free weighting matrices approach for neutral-type delayed
neural networks. J. Comput. Appl. Math. 225(1), 44–55 (2009)
31. X. Mao, Stochastic Differential Equations and Their Applications (Horwood, Chichester, 1997)
32. X. Mao, Y. Shen, C. Yuan, Almost surely asymptotic stability of neutral stochastic differential
delay equations with Markovian switching. Stoch. Process. Appl. 118(8), 1385–1406 (2008)
33. X. Mao, C. Yuan, Stochastic Differential Equations with Markovian Switching (Imperial Col-
lege Press, London, 2006)
34. J.H. Park, Synchronization of cellular neural networks of neutral type via dynamic feedback
controller. Chaos Solitons Fractals 42(3), 1299–1304 (2009)
35. J.H. Park, O.M. Kwon, Global stability for neural networks of neutral-type with interval time-
varying delays. Chaos Solitons Fractals 41(3), 1174–1181 (2009)
36. J.H. Park, O.M. Kwon, S.M. Lee, LMI optimization approach on stability for delayed neural
networks of neutral-type. Appl. Math. Comput. 196(1), 236–244 (2008)
37. J.H. Park, C. Park, O. Kwon, S. Lee, A new stability criterion for bidirectional associative
memory neural networks of neutral-type. Appl. Math. Comput. 199(2), 716–722 (2008)
38. V.P. Rubanik, Oscillations of Qasilinear Systems with Retardation (Nauka, Moscow, 1969)
39. R. Samli, S. Arik, New results for global stability of a class of neutral-type neural systems with
time delays. Appl. Math. Comput. 210(2), 564–570 (2009)
40. L. Sheng, M. Gao, Robust stability of Markovian jump discrete-time neural networks with
partly unknown transition probabilities and mixed mode-dependent delays. Int. J. Syst. Sci.
44(2), 252–264 (2013)
41. P. Shi, E.K. Boukas, R. Agarwal, Control of Markovian jump discrete-time systems with norm
bounded uncertainty and unknown delay. IEEE Trans. Autom. Control 44(11), 2139–2144
(1999)
42. P. Shi, E.K. Boukas, R. Agarwal, Kalman filtering for continuous-time uncertain systems with
Markovian jumping parameters. IEEE Trans. Autom. Control 44(8), 1592–1597 (1999)
43. W. Su, Y. Chen, Global asymptotic stability analysis for neutral stochastic neural networks
with time-varying delays. Commun. Nonlinear Sci. Numer. Simul. 14(4), 1576–1581 (2009)
44. Y. Sun, J. Cao, Adaptive lag synchronization of unknown chaotic delayed neural networks with
noise perturbation. Phys. Lett. A 364(3), 277–285 (2007)
45. Y. Sun, G. Feng, J. Cao, Stochastic stability of Markovian switching genetic regulatory net-
works. Phys. Lett. A 373(18), 1646–1652 (2009)
46. Y. Tang, J. Fang, Adaptive synchronization in an array of chaotic neural networks with mixed
delays and jumping stochastically hybrid coupling. Commun. Nonlinear Sci. Numer. Simul.
14(9), 3615–3628 (2009)
47. Y. Tang, H. Gao, W. Zou, J. Kurths, Distributed synchronization in networks of agent systems
with nonlinearities and random switchings. IEEE Trans. Cybern. 43(1), 358–370 (2013)
48. Y. Tang, R. Qiu, J. Fang, Q. Miao, M. Xia, Adaptive lag synchronization in unknown stochas-
tic chaotic neural networks with discrete and distributed time-varying delays. Phys. Lett. A
372(24), 4425–4433 (2008)
49. Y. Tang, Z. Wang, J. Fang, Controller design for synchronization of an array of delayed neural
networks using a controllable probabilistic PSO. Inf. Sci. 181(20), 4715–4732 (2011)
50. Y. Tang, Z. Wang, H. Gao, S. Swift, J. Kurths, A constrained evolutionary computation method
for detecting controlling regions of cortical networks. IEEE-ACM Trans. Comput. Biol. Bioin-
form. 9(6), 1569–1581 (2012)
51. Y. Tang, W.K. Wong, Distributed synchronization of coupled neural networks via randomly
occurring control. IEEE Trans. Neural Netw. Learn. Syst. 24(3), 435–447 (2013)
52. D. Tong, Q. Zhu, W. Zhou, Y. Xu, J. Fang, Adaptive synchronization for stochastic T-S fuzzy
neural networks with time-delay and Markovian jumping parameters. Neurocomputing 27(6),
91–97 (2013)
53. K. Wang, Z. Teng, H. Jiang, Adaptive synchronization of neural networks with time-varying
delay and distributed delay. Phys. A: Stat. Mech. Appl. 387(2–3), 631–642 (2008)
54. Q. Wang, Q. Lu, Phase synchronization in small world chaotic neural networks. Chin. Phys.
Lett. 22(6), 1329–1332 (2005)
55. Z. Wang, J. Fang, X. Liu, Global stability of stochastic high-order neural networks with discrete
and distributed delays. Chaos Solitons Fractals 36(2), 388–396 (2008)
56. Z. Wang, Y. Liu, L. Liu, X. Liu, Exponential stability of delayed recurrent neural networks
with Markovian jumping parameters. Phys. Lett. A 356(4), 346–352 (2006)
57. Z. Wang, Y. Liu, X. Liu, Exponential stabilization of a class of stochastic system with Markovian
jump parameters and mode-dependent mixed time-delays. IEEE Trans. Autom. Control 55(7),
1656–1662 (2010)
58. Z. Wang, Y. Liu, G. Wei, X. Liu, A note on control of discrete-time stochastic systems with
distributed delays and nonlinear disturbances. Automatica 46(3), 543–548 (2010)
59. Z.D. Wang, D.W.C. Ho, Y.R. Liu, X.H. Liu, Robust H∞ control for a class of nonlinear discrete
time-delay stochastic systems with missing measurements. Automatica 45(3), 1–8 (2010)
60. Z. Wu, P. Shi, H. Su, J. Chu, Delay-dependent stability analysis for switched neural networks
with time-varying delay. IEEE Trans. Syst. Man Cybern. Part B: Cybern. 41(6), 1522–1530
(2011)
61. Z. Wu, P. Shi, H. Su, J. Chu, Passivity analysis for discrete-time stochastic Markovian jump
neural networks with mixed time-delays. IEEE Trans. Neural Netw. 22(10), 1566–1575 (2011)
62. Z. Wu, P. Shi, H. Su, J. Chu, Exponential synchronization of neural networks with discrete and
distributed delays under time-varying sampling. IEEE Trans. Neural Netw. Learn. Syst. 23(9),
1368–1376 (2012)
63. Z. Wu, P. Shi, H. Su, J. Chu, Stochastic synchronization of Markovian jump neural networks
with time-varying delay using sampled-data. IEEE Trans. Cybern. 43(6), 1796–1806 (2013)
64. Y. Yang, J. Cao, Exponential lag synchronization of a class of chaotic delayed neural networks
with impulsive effects. Phys. A: Stat. Mech. Appl. 386(1), 492–502 (2007)
65. W. Yu, J. Cao, Synchronization control of stochastic delayed neural networks. Phys. A: Stat.
Mech. Appl. 373(1), 252–260 (2007)
66. C. Yuan, X. Mao, Robust stability and controllability of stochastic differential delay equations
with Markovian switching. Automatica 40(3), 343–354 (2004)
67. D. Zhang, J. Xu, Projective synchronization of different chaotic time-delayed neural networks
based on integral sliding mode controller. Appl. Math. Comput. 217(1), 164–174 (2010)
68. L. Zhang, E. Boukas, Stability and stabilization of Markovian jump linear systems with partly
unknown transition probabilities. Automatica 45(2), 463–468 (2009)
69. L. Zhang, E.K. Boukas, H∞ control for discrete-time Markovian jump linear systems with
partly unknown transition probabilities. Int. J. Robust Nonlinear Control 19(8), 868–883 (2009)
70. L. Zhang, E.K. Boukas, H∞ control of a class of extended Markov jump linear systems. IET
Control Theory Appl. 3(7), 834–842 (2009)
71. L. Zhang, E.K. Boukas, J. Lam, Analysis and synthesis of Markov jump linear systems with
time-varying delays and partially known transition probabilities. IEEE Trans. Autom. Control
53(10), 2458–2464 (2008)
72. L. Zhang, J. Lam, Necessary and sufficient conditions for analysis and synthesis of Markov
jump linear systems with incomplete transition descriptions. IEEE Trans. Autom. Control
55(7), 1695–1701 (2010)
73. W. Zhang, Y. Tang, J. Fang, Stochastic stability of Markovian jumping genetic regulatory
networks with mixed time delays. Appl. Math. Comput. 217(17), 7210–7225 (2011)
74. Y. Zhang, J. Sun, Stability of impulsive neural networks with time delays. Phys. Lett. A 348(1),
44–50 (2005)
75. Y.J. Zhang, S.Y. Xu, Y.M. Chu, J.J. Lu, Robust global synchronization of complex networks
with neutral-type delayed nodes. Appl. Math. Comput. 216(3), 768–778 (2010)
76. H. Zhao, S. Xu, Y. Zou, Robust H∞ filtering for uncertain Markovian jump systems with
mode-dependent distributed delays. Int. J. Adapt. Control Signal Process 24(1), 83–94 (2010)
77. J. Zhou, T. Chen, L. Xiang, Chaotic lag synchronization of coupled delayed neural networks
and its applications in secure communication. Circuits Syst. Signal Process. 24(5), 599–613
(2005)
78. Q. Zhou, P. Shi, H. Liu, S. Xu, Neural-network-based decentralized adaptive output-feedback
control for large-scale stochastic nonlinear systems. IEEE Trans. Syst. Man Cybern. Part B:
Cybern. 42(6), 1608–1619 (2012)
79. W. Zhou, Y. Gao, D. Tong, C. Ji, J. Fang, Adaptive exponential synchronization in pth moment
of neutral-type neural networks with time delays and Markovian switching. Int. J. Control,
Autom. Syst. 11(4), 845–851 (2013)
80. W. Zhou, H. Lu, C. Duan, Exponential stability of hybrid stochastic neural networks with mixed
time delays and nonlinearity. Neurocomputing 72(13), 3357–3365 (2009)
81. W. Zhou, D. Tong, Y. Gao, C. Ji, H. Su, Mode and delay-dependent adaptive exponential syn-
chronization in pth moment for stochastic delayed neural networks with Markovian switching.
IEEE Trans. Neural Netw. Learn. Syst. 23(4), 662–668 (2012)
82. J. Zhu, Q. Zhang, C. Yang, Delay-dependent robust stability for Hopfield neural networks of
neutral-type. Neurocomputing 72(10), 2609–2617 (2009)
83. Q. Zhu, J. Cao, Adaptive synchronization under almost every initial data for stochastic neural
networks with time-varying delays and distributed delays. Commun. Nonlinear Sci. Numer.
Simul. 16(4), 2139–2159 (2011)
84. Q. Zhu, W. Zhou, D. Tong, J. Fang, Adaptive synchronization for stochastic neural networks
of neutral-type with mixed time-delays. Neurocomputing 99, 477–485 (2013)
85. S. Zhu, Y. Shen, Passivity analysis of stochastic delayed neural networks with Markovian
switching. Neurocomputing 74(10), 1754–1761 (2011)
Chapter 6
Stability and Synchronization of Neural
Networks with Lévy Noise
As a simple model of jump diffusions, Lévy noise describes neural noise in a more general sense than Brownian motion does. This chapter concentrates on the stability and synchronization issues of neural networks with Lévy noise. Almost surely exponential stability and pth moment asymptotic stability for such networks are discussed in the first two sections. Synchronization via sampled data and adaptive synchronization are investigated in the remaining two sections.
6.1.1 Introduction
In the past few years, neural networks have been successfully applied in many areas,
including image processing, pattern recognition, associative memory, and optimization problems. In the meantime, the stability analysis of neural networks has gained
much research attention. Many methods for stability researches, such as the linear
matrix inequality approach and M-matrix approach, have been investigated, see e.g.,
[17, 21, 22, 34, 35, 39, 45, 50, 53, 54, 60]. Various sufficient conditions have
been proposed to guarantee the global asymptotic or exponential stability for neural
networks.
Recently, it has been shown that many neural networks may have finite modes,
and the modes may switch from one to another at different times [17, 21, 22,
34, 45, 50, 54, 60]. In this situation, finite-state Markov chains can be used to
govern the switching between different modes of neural networks. Therefore, the
stability analysis problem for neural networks with Markovian switching has received
much research attention [17, 21, 22, 34, 45, 50, 54]. Notably, Mao and Yuan [22] studied the more general case of stochastic differential equations with Markovian switching and obtained a series of results for it.
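The switching mechanism described above can be sketched in a few lines: a continuous-time Markov chain with generator Γ = (γ_ij) stays in state i for an exponential holding time with rate −γ_ii and then moves to state j with probability proportional to γ_ij. A minimal sketch (the generator below is illustrative, not tied to a particular system in this chapter):

```python
import random

def simulate_ctmc(gamma, state, t_end, rng):
    """Sample a path of a continuous-time Markov chain with generator
    matrix gamma, starting from `state`, on [0, t_end].

    Returns a list of (jump_time, state) pairs."""
    t, path = 0.0, [(0.0, state)]
    while True:
        # Holding time in state i is exponential with rate -gamma[i][i].
        t += rng.expovariate(-gamma[state][state])
        if t >= t_end:
            return path
        # Jump to j != i with probability gamma[i][j] / (-gamma[i][i]).
        weights = [gamma[state][j] if j != state else 0.0
                   for j in range(len(gamma))]
        state = rng.choices(range(len(gamma)), weights=weights)[0]
        path.append((t, state))

gamma = [[-6.0, 6.0], [2.0, -2.0]]   # illustrative 2-state generator
path = simulate_ctmc(gamma, 0, 10.0, random.Random(0))
```

Each entry of `path` records a switching instant and the new mode, exactly the piecewise-constant process r(t) that drives the mode changes of the network.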
Even now, Gaussian white noise, or Brownian motion, is the most commonly used model for the disturbances arising in neural networks and other nonlinear systems [17, 21, 22, 34, 45, 50, 54]. However, Brownian motion is ill-suited to depicting instantaneous disturbance changes because of its continuity. Lévy noise, which frequently appears in areas of finance, statistical mechanics, and signal processing (see e.g., [1–3, 5, 26, 31, 52]) and is written as (B, N) by D. Applebaum in [2], is more suitable for modeling diversified system noise because
Lévy noise can be decomposed into a continuous part and a jump part by Lévy-Itô
decomposition. As a result, Lévy noise extends Gaussian noise to many types of
impulsive jump-noise processes found in real and model neurons as well as in mod-
els of finance and other random phenomena. In neural networks, a Lévy noise model
more accurately describes how the neuron's membrane potential evolves than a simpler diffusion model does, because the more general Lévy model includes not only pure-diffusion and pure-jump models but also jump-diffusion models [4, 10, 28]. Because of their Gaussian structure, however, pure-diffusion neuron models
rely on special limiting case assumptions of incoming Poisson spikes from other
neurons. These assumptions require at least that the number of impinging synapses
be large and that the synapses have small membrane effects due to the small coupling
coefficient [4, 13]. From the viewpoint of engineering applications, Lévy models are more valuable than Gaussian models because physical devices may be limited in their number of model-neuron connections [4, 23] and because real signals and noise can often be impulsive [4, 29]. As seen in [11, 42, 43, 46], systems with Lévy noise or, more generally, with Gaussian noise together with some kind of jump noise, are also called
jump diffusions. Hence, stability analysis problems for jump diffusions have drawn
an increasing research interest, see e.g., [11, 19, 25, 42–44, 46].
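The Lévy–Itô split mentioned above can be visualised directly: a Lévy-type path is a Brownian part plus a compound-Poisson jump part. The sketch below is purely illustrative (per-step Bernoulli draws approximate the Poisson jump times; all parameters are hypothetical):

```python
import math
import random

def levy_path(t_end, n_steps, sigma, jump_rate, jump_size, rng):
    """Simulate a scalar Levy-type path as the sum of a Brownian part and
    a compound-Poisson jump part (the two pieces of the Levy-Ito split).
    Over a small step dt, a jump occurs with probability ~ jump_rate*dt."""
    dt = t_end / n_steps
    path = [0.0]
    for _ in range(n_steps):
        inc = sigma * rng.gauss(0.0, math.sqrt(dt))   # continuous part
        if rng.random() < jump_rate * dt:             # jump part
            inc += jump_size(rng)
        path.append(path[-1] + inc)
    return path

# Unit volatility, 3 jumps per second on average, and jump amplitudes
# uniform on [-1, 1] (hypothetical choices for illustration).
p = levy_path(5.0, 5000, 1.0, 3.0, lambda r: r.uniform(-1.0, 1.0),
              random.Random(42))
```

Unlike a pure Brownian path, `p` exhibits occasional discontinuous increments, which is exactly the instantaneous-disturbance behaviour Brownian motion alone cannot capture.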
In this section, we introduce Lévy noise for neural network modeling and extend
the stochastic analysis approach for stability issues of neural networks with traditional
Gaussian noise to the area of neural networks with Lévy noise. By generalized
Itô’s formula for Lévy-type stochastic integrals [1], taking advantage of strong law
of large numbers for martingales and ergodicity of Markov chains, we derive a
sufficient condition of almost surely exponential stability for neural networks, which
depends only on the stationary probability distribution of the Markov chain and
some constants. Two numerical examples are provided to show the usefulness of the
proposed stability condition.
Assumption 6.1 (i) The functions σ(·) and H (·) satisfy σ(0, i) ≡ 0 and H (0, i, y)
≡ 0 for each i ∈ S and y ∈ Y .
(ii) Local Lipschitz condition For all n ∈ N, y ∈ Y, t ≥ 0, i ∈ S and x1 (t), x2 (t) ∈
Rn with |x1 | ∨ |x2 | < n, there exists a positive constant K (n) such that
Remark 6.2 One can immediately derive from Assumption 6.1 (i) that system (6.1)
admits a trivial solution x(t; 0) ≡ 0. Combining (i), (ii) in Assumption 6.1 and the
property of g(0) = 0, we have
for all x(t), y(t) ∈ R^n with |x| ∨ |y| < n and i ∈ S, y ∈ Y, which means that the local growth condition for system (6.1) holds; hence, by [1], the local solution of (6.1) exists and is unique.
The purpose of this section is to discuss the almost surely exponential stability of
the neural network (6.1). Let us begin with the following definition.
Definition 6.3 The trivial solution of (6.1), or simply, system (6.1), is said to be almost surely exponentially stable if for any x_0 ∈ R^n,
\[
\limsup_{t\to\infty} \frac{1}{t} \log |x(t; x_0)| < 0 \quad \text{a.s.}
\]
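To make the definition concrete: for the scalar linear SDE dx = ax dt + bx dB(t) (no jumps, no switching), the exact solution gives log x(t) = (a − b²/2)t + bB(t), so the limit in Definition 6.3 equals a − b²/2 and the system is almost surely exponentially stable if and only if a < b²/2. A small illustrative sketch (hypothetical parameters, not an example from the book) estimates the sample exponent:

```python
import math
import random

def sample_exponent(a, b, t_end=200.0, n_steps=20000, seed=7):
    """Estimate (1/t) log|x(t)| for dx = a*x dt + b*x dB(t) using the
    exact log-solution log x(t) = (a - b^2/2) t + b B(t)."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    log_x = 0.0
    for _ in range(n_steps):
        log_x += (a - 0.5 * b * b) * dt + b * rng.gauss(0.0, math.sqrt(dt))
    return log_x / t_end

# Theoretical exponent: a - b^2/2 = 0.5 - 1.125 = -0.625 < 0.
est = sample_exponent(a=0.5, b=1.5)
```

Although the drift a = 0.5 is destabilising on its own, the sample paths decay: `est` is negative, as Definition 6.3 requires.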
The following theorem shows that the stability criterion depends only on the stationary distribution of the Markov chain and a few constants.
Theorem 6.4 Let Assumption 6.1 hold. Assume that there exist a symmetric positive definite matrix Q and constants μ_i ∈ R, ρ_i, α_i, β_i ≥ 0 (i ∈ S) such that
for all x(t) ∈ R^n, where A_i = A(i) and F_i = F(i) for r(t) = i. Then the solution x(t; x_0) of (6.1) has the property that
\[
\limsup_{t\to\infty} \frac{1}{t} \log |x(t; x_0)| \le \sum_{i=1}^S \frac{\pi_i}{2} \Big[ \mu_i - 2\rho_i + \lambda \log\frac{\lambda_{\max}(Q)\beta_i^2}{\lambda_{\min}(Q)} \Big]. \tag{6.9}
\]
In particular, if \(\sum_{i=1}^S \frac{\pi_i}{2} \big[ \mu_i - 2\rho_i + \lambda \log\frac{\lambda_{\max}(Q)\beta_i^2}{\lambda_{\min}(Q)} \big] < 0\), then the neural network (6.1) is almost surely exponentially stable.
Proof For simplicity, we will write x(t), F(·), A(·), σ(·, ·), H(·, ·, ·) as x, F, A, σ, H, respectively. Obviously, (6.9) holds when x_0 = 0, so fix any x_0 ≠ 0. The generalized Itô formula shows
\[
\begin{aligned}
\log[x^T(t)Qx(t)] = {} & \log(x_0^TQx_0) + \int_0^t \bigg\{ \frac{1}{x^T(s)Qx(s)} \Big[ 2x^T(s)Q\big(-F(r(s))x(s) + A(r(s))g(x(s))\big) \\
& + \operatorname{trace}\big(\sigma^T(x(s),r(s))Q\sigma(x(s),r(s))\big) \Big] - \frac{2|x^T(s)Q\sigma(x(s),r(s))|^2}{(x^T(s)Qx(s))^2} \\
& + \sum_{j=1}^S \gamma_{ij}\log[x^T(t)Qx(t)] \bigg\}\,ds + \int_0^t \frac{2x^T(s)Q\sigma(x(s),r(s))}{x^T(s)Qx(s)}\,dB(s) \\
& + \int_0^t\!\!\int_Y \Big\{ \log\big[(x(s)+H(x(s-),r(s-),y))^T Q (x(s)+H(x(s-),r(s-),y))\big] \\
& \quad - \log[x^T(s)Qx(s)] \Big\}\, N(ds,dy) \\
= {} & \log(x_0^TQx_0) + \int_0^t \bigg\{ \frac{1}{x^TQx}\Big[2x^TQ\big(-Fx + Ag(x)\big) + \operatorname{trace}(\sigma^TQ\sigma)\Big] - \frac{2|x^TQ\sigma|^2}{(x^TQx)^2} \bigg\}\,ds \\
& + M_1(t) + M_2(t) + \int_0^t\!\!\int_Y \log\frac{(x+H)^TQ(x+H)}{x^TQx}\, \lambda\phi(dy)\,ds,
\end{aligned}
\]
where \(M_1(t) = \int_0^t \frac{2x^TQ\sigma}{x^TQx}\,dB(s)\) and \(M_2(t) = \int_0^t\!\int_Y \log\frac{(x+H)^TQ(x+H)}{x^TQx}\,\tilde N(ds,dy)\) are two martingales vanishing at t = 0. By condition (6.4), the quadratic variation of M_1(t) satisfies
\[
\int_0^t \frac{d\langle M_1, M_1\rangle_s}{(1+s)^2} = \int_0^t \frac{4|x^T(s)Q\sigma(x(s),r(s))|^2}{(x^T(s)Qx(s))^2(1+s)^2}\,ds \le \int_0^t \frac{4K(n)|x|^4\|Q\|^2}{\lambda_{\min}^2(Q)|x|^4(1+s)^2}\,ds \le \frac{4K(n)\|Q\|^2}{\lambda_{\min}^2(Q)} \int_0^\infty \frac{ds}{(1+s)^2} < \infty. \tag{6.10}
\]
\[
\limsup_{t\to\infty}\frac{1}{t}\log|x(t)| = \frac{1}{2}\limsup_{t\to\infty}\frac{1}{t}\log(x^T(t)Qx(t)) \le \sum_{i=1}^S \frac{\pi_i}{2}\Big[\mu_i - 2\rho_i + \lambda\log\frac{\lambda_{\max}(Q)\beta_i^2}{\lambda_{\min}(Q)}\Big].
\]
In the case of H (x(t), r (t), y) ≡ 0, which means that system (6.1) is disturbed
only by Gaussian white noise and has the form of
\[
\limsup_{t\to\infty}\frac{1}{t}\log|x(t; x_0)| \le \sum_{i=1}^S \frac{\pi_i}{2}\,[\mu_i - 2\rho_i]. \tag{6.16}
\]
Moreover, if \(\sum_{i=1}^S \frac{\pi_i}{2}[\mu_i - 2\rho_i] < 0\), then the neural network (6.15) is almost surely exponentially stable.
Remark 6.6 Theorem 6.4 is an extension of the original conclusion (i.e., Corollary 6.5) given by Mao in [22], where the disturbance considered contains only continuous Brownian motion. In a so-called jump-diffusion system such as (6.1), however, the disturbance can be either continuous or discontinuous. In particular, when H(x(t), r(t), y) ≡ 0 holds in Theorem 6.4, the jumps (discontinuous disturbances) are eliminated and our result is consistent with Mao's.
Two examples are presented here to show the usefulness of our results. Example 6.7 considers a neural network with two neurons and 2-state Markovian switching, while Example 6.8 considers one with three neurons and 3-state Markovian switching. The random variable y in the Poisson jump term of (6.1), which determines the distribution of the jump amplitudes, is uniformly distributed in Example 6.7 and normally distributed in Example 6.8.
Example 6.7 Consider a two-neuron neural network (6.1) with 2-state Markovian switching, where
\[
F_1 = \begin{pmatrix} 9 & 0 \\ 0 & 8 \end{pmatrix}, \quad F_2 = \begin{pmatrix} 7 & 0 \\ 0 & 8 \end{pmatrix}, \quad A_1 = \begin{pmatrix} -2 & 1 \\ 1.3 & -1 \end{pmatrix},
\]
\[
A_2 = \begin{pmatrix} -1.5 & 1 \\ -1 & -2 \end{pmatrix}, \quad G_1 = \begin{pmatrix} -1 & 0.5 \\ 0.5 & -1 \end{pmatrix}, \quad G_2 = \begin{pmatrix} -1 & -1 \\ -1 & -2 \end{pmatrix}.
\]
The neuron activation function is g(x) = tanh(x) and the noise intensity function σ(·) satisfies σ(x, i) = G_i x (i = 1, 2). B(t) is a scalar Brownian motion. We set
\[
S = \{1, 2\}, \quad \Gamma = \begin{pmatrix} -6 & 6 \\ 2 & -2 \end{pmatrix}, \quad \lambda = 3, \quad Y = \{y \mid -1 \le y \le 1\}, \quad Q = I_2, \quad H(x, i, y) = i\cos(y)x.
\]
Then we obtain
\[
\sum_{i=1}^2 \frac{\pi_i}{2}\Big[\mu_i - 2\rho_i + \lambda\log\frac{\lambda_{\max}(Q)\beta_i^2}{\lambda_{\min}(Q)}\Big] = -0.1891 < 0.
\]
By Theorem 6.4, the two-neuron neural network (6.1) is almost surely exponentially stable.
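The weights π_i entering this criterion form the stationary distribution of the Markov chain, i.e., the solution of πΓ = 0 with Σ_i π_i = 1. A small sketch (not the authors' computation) for the generator Γ of this example, assuming the chain is irreducible:

```python
import numpy as np

def stationary_distribution(gamma):
    """Solve pi @ gamma = 0 subject to sum(pi) = 1 for an irreducible
    generator matrix gamma."""
    gamma = np.asarray(gamma, dtype=float)
    n = gamma.shape[0]
    # Replace one balance equation by the normalisation constraint.
    a = np.vstack([gamma.T[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(a, b)

# Generator of the 2-state chain in Example 6.7.
pi = stationary_distribution([[-6.0, 6.0], [2.0, -2.0]])  # -> [0.25, 0.75]
```

For this Γ the chain spends three quarters of its time in the second mode, which is why that mode dominates the weighted sum in the criterion.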
Figures 6.1, 6.2, 6.3, and 6.4 show the 2-state Markov chain, Poisson point process
with uniformly distributed variable y, the state trajectory, and phase trajectory in
Example 6.7, respectively. We can see from Figs. 6.3 and 6.4 that the state of the system tends to zero at about 0.9 s, which verifies that the neural network in Example 6.7 is almost surely exponentially stable.
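As a rough cross-check of this example, the dynamics can be simulated with a crude Euler scheme (an illustrative sketch only, not the simulation behind Figs. 6.1–6.4; switching and jump times are approximated by per-step Bernoulli draws, and the initial value is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

F = [np.diag([9.0, 8.0]), np.diag([7.0, 8.0])]
A = [np.array([[-2.0, 1.0], [1.3, -1.0]]),
     np.array([[-1.5, 1.0], [-1.0, -2.0]])]
G = [np.array([[-1.0, 0.5], [0.5, -1.0]]),
     np.array([[-1.0, -1.0], [-1.0, -2.0]])]
gamma = np.array([[-6.0, 6.0], [2.0, -2.0]])   # Markov generator
lam = 3.0                                      # jump intensity

dt, T = 1e-3, 3.0
x = np.array([10.0, -5.0])                     # hypothetical initial value
x0_norm = float(np.linalg.norm(x))
mode = 0
for _ in range(int(T / dt)):
    if rng.random() < -gamma[mode, mode] * dt:   # mode switch
        mode = 1 - mode
    dB = rng.normal(0.0, np.sqrt(dt))
    drift = -F[mode] @ x + A[mode] @ np.tanh(x)  # g(x) = tanh(x)
    x = x + drift * dt + (G[mode] @ x) * dB
    if rng.random() < lam * dt:                  # Poisson jump, y ~ U(-1, 1)
        y = rng.uniform(-1.0, 1.0)
        x = x + (mode + 1) * np.cos(y) * x       # H(x, i, y) = i cos(y) x
final_norm = float(np.linalg.norm(x))
```

Despite the amplifying jumps, the strong drift −F_i x dominates and `final_norm` falls well below `x0_norm`, consistent with the almost sure decay predicted by Theorem 6.4.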
Fig. 6.1 The 2-state Markov chain r(t) in Example 6.7
Fig. 6.2 Poisson point process in Example 6.7 (uniformly distributed jump amplitude; a = −1, b = 1)
Fig. 6.3 State trajectory in Example 6.7
Fig. 6.4 Phase trajectory in Example 6.7
Example 6.8 Consider another, three-neuron neural network (6.1) with 3-state Markovian switching, where
\[
F_1 = \begin{pmatrix} 10 & 0 & 0 \\ 0 & 8 & 0 \\ 0 & 0 & 9 \end{pmatrix}, \quad F_2 = \begin{pmatrix} 7 & 0 & 0 \\ 0 & 9 & 0 \\ 0 & 0 & 8 \end{pmatrix}, \quad F_3 = \begin{pmatrix} 8 & 0 & 0 \\ 0 & 9 & 0 \\ 0 & 0 & 6 \end{pmatrix},
\]
\[
A_1 = \begin{pmatrix} -1 & 0.5 & 0.2 \\ 0.3 & -1 & 0.6 \\ 1 & 0.6 & -1 \end{pmatrix}, \quad A_2 = \begin{pmatrix} -1 & 1 & 1 \\ 0.8 & -1 & -1.3 \\ 1 & 0.7 & -2 \end{pmatrix},
\]
\[
A_3 = \begin{pmatrix} -1 & 0.2 & 0.5 \\ -0.5 & 1.2 & 0.4 \\ 0.6 & 1.1 & -1 \end{pmatrix}, \quad G_1 = \begin{pmatrix} -1 & 0.5 & 0.2 \\ 0.5 & -1 & 0.3 \\ 0.2 & 0.3 & -1 \end{pmatrix},
\]
\[
G_2 = \begin{pmatrix} -1 & -1 & 0.5 \\ -1 & -2 & 0.2 \\ 0.5 & 0.2 & -1 \end{pmatrix}, \quad G_3 = \begin{pmatrix} -1 & 1 & 0.6 \\ 1 & -1 & 0.2 \\ 0.6 & 0.2 & -1 \end{pmatrix}.
\]
The requirements for g(x), σ(·), and B(t) are the same as in Example 6.7. Also we set
\[
S = \{1, 2, 3\}, \quad \Gamma = \begin{pmatrix} -2 & 1 & 1 \\ 1 & -2 & 1 \\ 1 & 3 & -4 \end{pmatrix}, \quad \lambda = 2, \quad Y = \{y \mid 0.5 \le y \le 2\}, \quad Q = I_3, \quad H(x, i, y) = iy^2x.
\]
Let y be a normally distributed random variable with mean 1 and variance 0.25. We can obtain
\[
\sum_{i=1}^3 \frac{\pi_i}{2}\Big[\mu_i - 2\rho_i + \lambda\log\frac{\lambda_{\max}(Q)\beta_i^2}{\lambda_{\min}(Q)}\Big] = -0.0173 < 0.
\]
It follows from Theorem 6.4 that the three-neuron neural network (6.1) is almost surely exponentially stable.
Figures 6.5, 6.6, and 6.7 show the 3-state Markov chain, Poisson point process with
normally distributed variable y, and the state trajectory in Example 6.8, respectively.
In Fig. 6.7, the state of the system tends to zero at about 0.7 s, which confirms that the neural network in Example 6.8 is almost surely exponentially stable.
Fig. 6.5 The 3-state Markov chain r(t) in Example 6.8
Fig. 6.6 Poisson point process in Example 6.8 (normally distributed jump amplitude; mean 1, variance 0.25)
Fig. 6.7 State trajectory in Example 6.8 (responses of neuron dynamics to initial value (10, 2, −7))
6.1.5 Conclusion
In this section, we have dealt with the problem of almost surely exponential stability analysis for neural networks with Lévy noise and Markovian switching. By the generalized Itô formula for Lévy processes, together with the strong law of large numbers for martingales and the ergodic property of the Markov chain, we have derived a sufficient condition for almost surely exponential stability which depends only on the unique stationary distribution of the Markov chain and a few constants. Two examples have been used to demonstrate the effectiveness of the main results obtained in this section.
6.2.1 Introduction
We further assume that B(t), N (t, z), and r (t) in system (6.17) are independent.
For the purpose of stability study in this section, we impose the following assump-
tions.
Assumption 6.9 Assume that the system (6.17) has a unique solution on t ≥ −τ
which is denoted by x(t, ξ). The functions f, g, and h satisfy f (0, 0, t, i) ≡
0, g(0, 0, t, i) ≡ 0, h(0, 0, t, i, z) ≡ 0 for each (t, i) ∈ R+ × S and z ∈ Rl .
One can immediately derive from Assumption 6.9 that (6.17) admits a trivial
solution x(t; 0) ≡ 0 which is necessary for the following definitions of stability.
for any ξ ∈ L^p_{F_0}([−τ, 0]; R^n). When p = 2, it is said to be asymptotically stable in mean square.

for any ξ ∈ L^p_{F_0}([−τ, 0]; R^n).
In what follows, we will present the asymptotic and exponential pth moment stability conditions for system (6.17) and then propose an M-matrix approach to verify asymptotic stability. An application to neural networks will be put forward subsequently.
1. Stability of hybrid systems
Theorem 6.13 Let Assumptions 6.9 and 6.10 hold. Assume that there exist a function V ∈ C^{2,1}(R^n × R_+ × S; R_+) and positive constants p, λ, c_j (j = 1, 2, 3, 4), as well as a nonnegative function \(w(t) = o(\frac{1}{t^{1+2\lambda}})\) (t → ∞), such that
\[
c_3 > \frac{c_4}{1-\bar\delta}, \tag{6.18}
\]
\[
c_1|x|^p \le V(x, t, i) \le c_2|x|^p, \tag{6.19}
\]
\[
\mathcal{L}V(x, y, t, i) \le w(t) - c_3|x|^p + c_4|y|^p. \tag{6.20}
\]
Then system (6.17) is asymptotically stable in pth moment.

Proof Choose v such that
\[
v < \frac{c_2 + c_3\tau - |c_2 - c_3\tau|}{2c_2\tau} = \begin{cases} \dfrac{c_3}{c_2} & \text{if } c_2 \ge c_3\tau, \\[4pt] \dfrac{1}{\tau} & \text{if } c_2 < c_3\tau, \end{cases} \tag{6.21}
\]
thus
\[
0 < v < \frac{1}{\tau}. \tag{6.22}
\]
Letting k = λ/v and ψ(t) = (t + k)^λ, t ∈ R_+, we have
\[
\dot\psi(t) = \lambda(t+k)^{\lambda-1} = \frac{\lambda}{t+k}(t+k)^\lambda \le \frac{\lambda}{k}(t+k)^\lambda = v\psi(t). \tag{6.23}
\]
Noting that ψ(t) is increasing, by (6.23) and the differential mean value theorem we get, for some θ ∈ (t, t + τ),
\[
\psi(t+\tau) = \psi(t) + \tau\dot\psi(\theta) \le \psi(t) + \tau v\psi(\theta) \le \psi(t) + \tau v\psi(t+\tau),
\]
whence
\[
\psi(t+\tau) \le \frac{1}{1-\tau v}\,\psi(t). \tag{6.24}
\]
Applying Lemma 1.6 to ψ(t)V and then using conditions (6.19), (6.20), and (6.23), we can show that
\[
\begin{aligned}
0 \le c_1\psi(t)E|x(t)|^p &\le \psi(t)EV(x(t), t, r(t)) \\
&= \psi(0)EV(\xi(0), 0, r_0) + E\int_0^t \big(\dot\psi(s)V + \psi(s)\mathcal{L}V\big)\,ds \\
&\le c_2k^\lambda E\|\xi\|^p + \int_0^t \big[c_2v\psi(s)E|x(s)|^p + \psi(s)w(s) \\
&\quad - c_3\psi(s)E|x(s)|^p + c_4\psi(s)E|y(s)|^p\big]\,ds,
\end{aligned} \tag{6.25}
\]
where \(C = \big(c_2k^\lambda + \frac{c_4\tau(\tau+k)^\lambda}{1-\bar\delta}\big)E\|\xi\|^p\). From (6.21), we can get \(c_2v - c_3 + \frac{c_4}{(1-\bar\delta)(1-\tau v)} = 0\). So
\[
c_1\psi(t)E|x(t)|^p \le C + \int_0^t \psi(s)w(s)\,ds. \tag{6.27}
\]
Clearly \(\psi(t)w(t) = o(\frac{1}{t^{1+\lambda}})\) (t → ∞), which yields \(\int_0^\infty \psi(t)w(t)\,dt < \infty\). Dividing both sides of (6.27) by c_1ψ(t) and then letting t → ∞, we get
\[
\lim_{t\to\infty} E|x(t)|^p = 0.
\]
Remark 6.14 Theorem 6.13 extends Mao's conclusion concerning asymptotic stability (see [22], Theorem 5.31, p. 198), which is for hybrid systems without delay, to delayed hybrid systems with Lévy noise. In particular, when $h(x(t), x(t-\delta(t)), t, r(t), z) \equiv 0$ and $\delta(t) \equiv 0$ in system (6.17), our result is consistent with Mao's result.
Remark 6.15 Theorem 6.13 also gives an estimate of the convergence rate of $E|x(t)|^p$. If the criteria of Theorem 6.13 are satisfied, $E|x(t)|^p$ converges faster than $t^{-\lambda}$ $(t \to \infty)$, and indeed faster than $t^{-2\lambda}$ $(t \to \infty)$ (this can be proved similarly).
Corollary 6.16 Let Assumptions 6.9 and 6.10 hold. Assume that there exist a function $V \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+ \times S; \mathbb{R}_+)$ and positive constants $p, \lambda, c_j\ (j = 1, 2, 3, 4)$, as well as a nonnegative function $w(t) = o(e^{-\lambda t})$ $(t \to \infty)$, such that

$$c_3 > \frac{c_4}{1-\bar\delta} \tag{6.28}$$

$$c_2\lambda - c_3 + \frac{c_4e^{\tau\lambda}}{1-\bar\delta} \ge 0 \tag{6.29}$$

$$c_1|x|^p \le V(x,t,i) \le c_2|x|^p \tag{6.30}$$

$$\mathcal{L}V(x,y,t,i) \le w(t) - c_3|x|^p + c_4|y|^p \tag{6.31}$$

Then system (6.17) is exponentially stable in $p$th moment.
Proof Let

$$\phi(u) = c_2u - c_3 + \frac{c_4e^{\tau u}}{1-\bar\delta}. \tag{6.32}$$

It is derived from $\dot\phi(u) = c_2 + \frac{c_4\tau e^{\tau u}}{1-\bar\delta} > 0$ that $\phi(u)$ is increasing on $\mathbb{R}_+$. Inequalities (6.28) and (6.29) yield that $\phi(0) < 0$ and $\phi(\lambda) \ge 0$. By virtue of the property of continuous functions, there exists a unique $\lambda_0 \in (0, \lambda]$ such that $\phi(\lambda_0) = 0$.
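The root $\lambda_0$ can be located numerically by bisection, since $\phi$ is continuous and increasing. A sketch under illustrative constants (not from the text) chosen so that $\phi(0) < 0$ and $\phi(\lambda) \ge 0$:

```python
import math

# Bisection for the root lambda_0 of phi(u) = c2*u - c3 + c4*exp(tau*u)/(1 - db),
# as in the proof of Corollary 6.16. All constants are illustrative and chosen
# to mirror (6.28) and (6.29): phi(0) < 0 and phi(lam) >= 0.
c2, c3, c4, tau, db, lam = 1.0, 2.0, 0.5, 0.3, 0.2, 2.0

def phi(u):
    return c2 * u - c3 + c4 * math.exp(tau * u) / (1 - db)

assert phi(0) < 0 and phi(lam) >= 0   # phi is increasing, so the root is unique

lo, hi = 0.0, lam
for _ in range(80):                    # interval halves at each step
    mid = 0.5 * (lo + hi)
    if phi(mid) < 0:
        lo = mid
    else:
        hi = mid
lam0 = 0.5 * (lo + hi)
print(lam0)                            # root of phi in (0, lam]
```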
Letting $\psi(t) = e^{\lambda_0 t}$, we obtain (6.33) and (6.34).
Making use of (6.32), (6.33), and (6.34), we compute as in the proof of Theorem 6.13 that

$$\begin{aligned}
c_1\psi(t)E|x(t)|^p &\le \bar C + \int_0^t \psi(s)w(s)\,ds + \Big(c_2\lambda_0 - c_3 + \frac{c_4e^{\tau\lambda_0}}{1-\bar\delta}\Big)\int_0^t \psi(s)E|x(s)|^p\,ds \\
&= \bar C + \int_0^t \psi(s)w(s)\,ds + \phi(\lambda_0)\int_0^t \psi(s)E|x(s)|^p\,ds \\
&= \bar C + \int_0^t \psi(s)w(s)\,ds
\end{aligned} \tag{6.35}$$

where $\bar C = \big(c_2 + \frac{c_4\tau e^{\tau\lambda_0}}{1-\bar\delta}\big)E\|\xi\|^p$.
Noting that $\int_0^\infty \psi(t)w(t)\,dt < \infty$, dividing both sides of (6.35) by $c_1\psi(t)$ and then letting $t \to \infty$, we obtain

$$\limsup_{t\to\infty} \frac{1}{t}\log\big(E|x(t)|^p\big) \le -\lambda_0,$$

which means system (6.17) is exponentially stable in $p$th moment. The proof is completed.
Remark 6.17 Corollary 6.16 is more general than Mao's result [22] (Theorem 7.22, p. 290), which corresponds to the case $w(t) \equiv 0$. One manifestation of this is the extension from delayed hybrid systems to those with Lévy noise. In addition, even for delayed hybrid systems without Poisson jumps, the original result is a special case of ours. In fact, $w(t) \equiv 0$ means that the positive constant $\lambda$ can be chosen arbitrarily; then (6.29) must hold and Corollary 6.16 becomes Mao's conclusion.
and

$$\int_{\mathbb{R}} \sum_{k=1}^{l} \big(|x+h^{(k)}|^p - |x|^p\big)\,\nu_k(dz_k) \le \frac{a}{t^{1+b}} + \sigma_i|x|^p + \pi_i|y|^p \tag{6.38}$$
We further set

$$\omega_i = 0 \vee \Big(\beta_i + \frac{p-1}{2}\eta_i\Big) \tag{6.39}$$

$$\zeta_i = p\alpha_i + \frac{p(p-1)}{2}\rho_i + \sigma_i + (p-2)\omega_i \tag{6.40}$$

$$A = -\mathrm{diag}\{\zeta_1, \dots, \zeta_S\} - \Gamma \tag{6.41}$$

$$(q_1, \dots, q_S)^T = A^{-1}\mathbf{1} \tag{6.42}$$
Theorem 6.19 Let Assumptions 6.9, 6.10, and 6.18 hold and $p \ge 2$. If $A$ is a nonsingular M-matrix and condition (6.43) is satisfied, then system (6.17) is asymptotically stable in $p$th moment.
Proof It follows from Lemma 1.12 that $A^{-1}$ exists and $A^{-1} \ge 0$, which means that the sum of each row of $A^{-1}$ is positive. Hence, by (6.42), it can be deduced that $q_i > 0$, $\forall i \in S$.
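This step of the proof is easy to illustrate numerically: for a nonsingular M-matrix, $A^{-1}$ is entrywise nonnegative, so $q = A^{-1}\mathbf{1}$ is entrywise positive. A small sketch with an illustrative $2\times 2$ M-matrix:

```python
import numpy as np

# For a nonsingular M-matrix A, A^{-1} >= 0 elementwise, hence
# q = A^{-1} @ 1 has strictly positive entries (cf. Theorem 6.19).
# The matrix below is an illustrative 2x2 M-matrix: nonpositive
# off-diagonal entries and eigenvalues with positive real part.
A = np.array([[3.0, -1.0],
              [-2.0, 4.0]])

A_inv = np.linalg.inv(A)      # equals [[0.4, 0.1], [0.2, 0.3]]
q = A_inv @ np.ones(2)        # row sums of A_inv

print(bool(np.all(A_inv >= 0)), bool(np.all(q > 0)))  # True True
```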
Define the function $V : \mathbb{R}^n \times \mathbb{R}_+ \times S \to \mathbb{R}_+$ by $V(x,t,i) = q_i|x|^p$.
$$\begin{aligned}
&\Big[\Big(p\alpha_i + \frac{p(p-1)\rho_i}{2} + \sigma_i\Big)q_i + \sum_{j=1}^{S}\gamma_{ij}q_j\Big]|x|^p + \Big[p\beta_i + \frac{p(p-1)\eta_i}{2}\Big]q_i|x|^{p-2}|y|^2 + \pi_iq_i|y|^p + \frac{aq_i}{t^{1+b}} \\
&\quad\le \Big[\Big(p\alpha_i + \frac{p(p-1)\rho_i}{2} + \sigma_i\Big)q_i + \sum_{j=1}^{S}\gamma_{ij}q_j\Big]|x|^p + p\omega_iq_i|x|^{p-2}|y|^2 + \pi_iq_i|y|^p + \frac{aq_i}{t^{1+b}}
\end{aligned} \tag{6.46}$$
Substituting this and (6.40) into (6.45), noting that $p\omega_iq_i \ge 0$, we have

$$\begin{aligned}
\mathcal{L}V &\le \Big[\Big(p\alpha_i + \frac{p(p-1)\rho_i}{2} + \sigma_i + (p-2)\omega_i\Big)q_i + \sum_{j=1}^{S}\gamma_{ij}q_j\Big]|x|^p + (\pi_i + 2\omega_i)q_i|y|^p + \frac{aq_i}{t^{1+b}} \\
&= \frac{aq_i}{t^{1+b}} + \Big(\zeta_iq_i + \sum_{j=1}^{S}\gamma_{ij}q_j\Big)|x|^p + (\pi_i + 2\omega_i)q_i|y|^p \\
&\le w(t) - c_3|x|^p + c_4|y|^p
\end{aligned} \tag{6.47}$$

where $w(t) = \frac{a}{t^{1+b}}\max_{1\le i\le S}\{q_i\}$, $c_3 = 1$, $c_4 = \max_{1\le i\le S}\{(\pi_i + 2\omega_i)q_i\}$.
By condition (6.43), the inequality (6.18) holds. Hence, all the conditions of
Theorem 6.13 have been verified, so system (6.17) is asymptotically stable in pth
moment. The proof is completed.
6.2 Asymptotic Stability of SDNN with Lévy Noise 289

where $F$ is a diagonal positive definite matrix, $D$ and $E$ are, respectively, the connection weight matrix and the delayed connection weight matrix, $s_j\ (j = 1, 2)$ stand for the neuron activation functions with $s_j(0) = 0$, and the other symbols denote the same as those in system (6.17).
We need more hypotheses based on Assumption 6.18 to study the stability of
neural network (6.48).
$$\int_{\mathbb{R}} \sum_{k=1}^{l} \big(|x+h^{(k)}|^2 - |x|^2\big)\,\nu_k(dz_k) \le \frac{a}{t^{1+b}} + \sigma_i|x|^2 + \pi_i|y|^2 \tag{6.50}$$
Theorem 6.21 Let Assumptions 6.10 and 6.20 hold. If $A$ is a nonsingular M-matrix and $(\pi_i + 2\omega_i)q_i \le 1 - \bar\delta$, $\forall i \in S$, then the neural network (6.48) is asymptotically stable in mean square.
Proof Let

$$f(x(t), x(t-\delta(t)), t, r(t)) = -F(r(t))x(t) + D(r(t))s_1(x(t)) + E(r(t))s_2(x(t-\delta(t))) \tag{6.52}$$
Comparing with Theorem 6.19 in the case $p = 2$, we only need to show that (6.36) holds.

According to the conditions $s_j(0) = 0$ and (6.49), we get

$$|s_j(u)| \le |G_ju|, \quad j = 1, 2,\ \forall u \in \mathbb{R}^n \tag{6.53}$$

$$\begin{aligned}
x^Tf(x,y,t,i) &= x^T(-F_i)x + x^TD_is_1(x) + x^TE_is_2(y) \\
&\le \lambda_{\max}(-F_i)|x|^2 + |D_i||G_1||x|^2 + |E_i||G_2||x||y| \\
&\le \Big(\lambda_{\max}(-F_i) + |D_i||G_1| + \frac{|E_i||G_2|}{2}\Big)|x|^2 + \frac{|E_i||G_2|}{2}|y|^2
\end{aligned} \tag{6.54}$$
We therefore obtain from (6.51) that (6.36) holds, as required. It then follows from Theorem 6.19 that the neural network (6.48) is asymptotically stable in mean square.
Consider a two-neuron delayed neural network (6.48) with Lévy noise and 2-state Markovian switching, where the time delay is $\delta(t) = 0.15\sin(t) + 0.85$, so that $\tau = 1$ and $\dot\delta \le \bar\delta = 0.15$. $B(t)$ and $N(t,z)$, which compose the Lévy noise, are both one dimensional. The characteristic measure $\mu$ of the Poisson jump satisfies $\mu(dz) = \varsigma\phi(dz)$, where $\varsigma = 2$ is the intensity of the Poisson distribution and $\phi$ is the probability density of the standard normally distributed variable $z$.
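A path of the jump component just described can be sampled by drawing exponential inter-arrival times with rate $\varsigma = 2$ and standard normal jump amplitudes. A minimal sketch (the horizon `T` is an assumption, not from the text):

```python
import random

# Sample one path of a Poisson point process with intensity varsigma = 2
# on [0, T], with standard normally distributed jump amplitudes z.
# Purely illustrative sketch of the jump component of the Levy noise.
random.seed(0)
varsigma, T = 2.0, 20.0

jump_times = []
t = 0.0
while True:
    t += random.expovariate(varsigma)   # exponential inter-arrival times
    if t > T:
        break
    jump_times.append(t)

jump_sizes = [random.gauss(0.0, 1.0) for _ in jump_times]

# On average one expects varsigma * T = 40 jumps on [0, 20].
print(len(jump_times))
```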
We set

$$S = \{1, 2\}, \qquad \Gamma = \begin{pmatrix} -4 & 4 \\ 3 & -3 \end{pmatrix}$$

as the state space and transition rate matrix of the Markovian switching, and $s_j(\cdot) = \tanh(\cdot)$ $(j = 1, 2)$ as the neuron activation functions. Then $G_1 = G_2 = I_2$.
The other parameters concerning the neural network (6.48) are as follows:

$$F(1) = \begin{pmatrix} 6 & 0 \\ 0 & 7 \end{pmatrix}, \quad D(1) = \begin{pmatrix} -2 & 1 \\ 1 & -1 \end{pmatrix}, \quad E(1) = \begin{pmatrix} -1 & 1 \\ 1 & 2 \end{pmatrix},$$

$$g(x,y,t,1) = \frac{x+y}{4}, \qquad h(x,y,t,1,z) = \frac{1}{t+1} + \frac{yz - x}{2},$$

$$F(2) = \begin{pmatrix} 6 & 0 \\ 0 & 8 \end{pmatrix}, \quad D(2) = \begin{pmatrix} 1 & -1.2 \\ -1 & 1.5 \end{pmatrix}, \quad E(2) = \begin{pmatrix} 1 & 0 \\ 1.5 & 1 \end{pmatrix},$$

$$g(x,y,t,2) = \frac{x+y}{2}, \qquad h(x,y,t,2,z) = \frac{1}{(t+1)^2} - \frac{x}{3} + \frac{3yz}{4}.$$
The corresponding matrix $A$ defined by (6.41) is a nonsingular M-matrix. It then follows from Theorem 6.21 that the two-neuron neural network (6.48) is asymptotically stable in mean square.
Figures 6.8, 6.9, 6.10, and 6.11 show the 2-state Markov chain, the Poisson point process with normally distributed variable $z$, the state trajectory, and the evolution of the state norm square, respectively. We can see from Fig. 6.10 that the system state tends to zero by about $t = 5$, which verifies the stability of the two-neuron network (6.48). In Fig. 6.11, the two curves show the evolution of the norm square of the system state (solid line) and of the function $(t+1)^{-1/3}$ (dash-dot line) with time $t$. The solid line lies below the other one from $t = 5$ on, which illustrates that the convergence rate of the neural network (6.48) is faster than that of the function $(t+1)^{-1/3}$.
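The reported decay can be reproduced qualitatively with a crude Euler–Maruyama discretization. The sketch below uses the $F$, $D$, $E$ matrices and $\tanh$ activations from the example, but simplifies the noise to its Brownian part, ignores the actual time delay, and assumes a step size `dt`, so it is only an illustration, not a faithful simulation of (6.48):

```python
import math, random

# Crude Euler-Maruyama sketch of the two-neuron switched network. The drift
# uses the mode-1 and mode-2 matrices F, D, E from the text; the Poisson
# jumps are omitted and the delayed state is approximated by the current
# state, so only the qualitative mean-square decay is illustrated.
random.seed(1)
dt, T = 0.001, 5.0
F = {1: [[6, 0], [0, 7]], 2: [[6, 0], [0, 8]]}
D = {1: [[-2, 1], [1, -1]], 2: [[1, -1.2], [-1, 1.5]]}
E = {1: [[-1, 1], [1, 2]], 2: [[1, 0], [1.5, 1]]}
rates = {1: 4.0, 2: 3.0}            # leaving rates of Gamma = [[-4,4],[3,-3]]

def mv(M, v):                        # 2x2 matrix-vector product
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

x = [1.0, -1.0]                      # initial state
x_delayed = x[:]                     # crude placeholder for x(t - delta(t))
mode = 1
for _ in range(int(T / dt)):
    if random.random() < rates[mode] * dt:   # Markov chain switching
        mode = 3 - mode
    s1 = [math.tanh(v) for v in x]
    s2 = [math.tanh(v) for v in x_delayed]
    drift = [-mv(F[mode], x)[i] + mv(D[mode], s1)[i] + mv(E[mode], s2)[i]
             for i in range(2)]
    noise = [0.25 * (x[i] + x_delayed[i]) * random.gauss(0, math.sqrt(dt))
             for i in range(2)]      # g(x, y, t, 1) = (x + y)/4, Brownian part
    x = [x[i] + drift[i] * dt + noise[i] for i in range(2)]
    x_delayed = x[:]                 # ignores the actual delay, for brevity
norm_sq = x[0]**2 + x[1]**2
print(norm_sq)                       # small: the state has decayed toward 0
```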
[Fig. 6.8: 2-state Markov chain $r(t)$. Fig. 6.9: Poisson point process with normally distributed jump amplitudes. Fig. 6.10: state trajectory $x(t)$. Fig. 6.11: evolution of the state norm square $|x(t)|^2$.]
6.2.5 Conclusion

We have dealt with the problem of asymptotic $p$-stability analysis for stochastic delayed hybrid systems with Lévy noise. General criteria for asymptotic stability and exponential stability have been obtained through stochastic analysis. The M-matrix approach has been utilized to derive asymptotic stability criteria as well. As an application of our results, a mean square asymptotic stability condition has been derived for delayed hybrid neural networks with Lévy noise. An example has been used to demonstrate the effectiveness of the main results in this section.
6.3.1 Introduction

The past few decades have witnessed the successful application of neural networks in many areas such as image processing, pattern recognition, associative memory, and optimization problems. As an inherent feature of real neural networks, time delay, which may cause oscillation and instability, has gained considerable research attention focused on the topics of stability analysis and synchronization control [6, 17, 32, 36, 39, 40, 49, 54, 55]. In the references involved, the delay type can be constant, time-varying, discrete, or distributed, and the results can be delay-dependent or delay-independent.
Consider the n-dimensional stochastic delay neural network with Markovian switch-
ing of the form
where r (t) is the Markov chain and x(t) = [x1 (t), . . . , xn (t)]T ∈ Rn is the state vec-
tor associated with the n neurons. f (x(t)) = [ f 1 (x1 (t)), . . . , f n (xn (t))]T denotes
the neuron activation function. C(r (t)) > 0 is a diagonal matrix. A(r (t)) and B(r (t))
are the connection weight matrix and the delay connection weight matrix, respec-
tively. δ(t) denotes the time-varying delay and satisfies 0 ≤ δ1 ≤ δ(t) ≤ δ2 , δ̇ ≤ δ̄.
We further write δ12 = δ2 − δ1 .
In this section, system (6.55) is treated as the master system, and its slave system can be described by the following equation:

where $C(r(t))$, $A(r(t))$, and $B(r(t))$ are the same matrices as in (6.55). $e(t) = y(t) - x(t)$ is the error state, which appears in the Lévy noise intensity functions $g$ and $h$ satisfying $g : \mathbb{R}^n \times \mathbb{R}^n \times S \to \mathbb{R}^{n\times m}$ and $h : \mathbb{R}^n \times \mathbb{R}^n \times S \times \mathbb{R} \to \mathbb{R}^n$. $u(t)$ is the control input that will be designed in order to achieve the synchronization of systems (6.55) and (6.56).
where $l(e(t)) = f(y(t)) - f(x(t)) = f(x(t) + e(t)) - f(x(t))$. The initial data is given by $\{e(\theta) : -\sigma \le \theta \le 0\} = \xi(\theta) \in L^2_{\mathcal{F}_0}([-\sigma, 0]; \mathbb{R}^n)$, $r(0) = r_0$, where $\sigma = \max\{\delta_2, \tau\}$. It is assumed that $\omega(t)$, $N(t,z)$, and $r(t)$ in system (6.58) are independent. For simplicity, we will write $M(r(t))$ as $M_i$ when $r(t) = i$ in the sequel.

For the purpose of the synchronization of systems (6.55) and (6.56), i.e., the stability study of the error system (6.58), we impose the following assumptions.
$$\begin{aligned}
e^T(t)LD\,l(e(t)) &= \sum_{i=1}^{n} l_i(e_i(t))\beta_id_ie_i(t) \\
&\ge \sum_{i=1}^{n} d_i[l_i(e_i(t))]^2 = l^T(e(t))D\,l(e(t))
\end{aligned} \tag{6.59}$$
Definition 6.25 The master system (6.55) and the slave system (6.56) are said to be synchronous in mean square if the error system (6.58) is stable in mean square, that is, for any $\xi(0) \in L^2_{\mathcal{F}_0}([-\sigma, 0]; \mathbb{R}^n)$ and $r_0 = i \in S$,

$$\lim_{T\to\infty} E\int_0^T |e(t; \xi(0), r_0)|^2\,dt < \infty \tag{6.63}$$
We are now in a position to derive the condition under which the master system (6.55)
and the slave system (6.56) are synchronous in mean square. The main theorem below
reveals that such conditions can be expressed in terms of the positive definite solution
to a quadratic matrix inequality involving some scalar parameters.
Theorem 6.26 Let Assumptions 6.22, 6.23, and 6.24 hold. If there exist a matrix $J_i$, positive definite matrices $P_i, Q_1, Q_2, Q_3, R, W_1, W_2, W_3, W_4$, a positive diagonal matrix $D$, and positive constants $\rho_i, \varepsilon_i\ (i \in S)$ such that

$$P_i < \rho_iI \tag{6.64}$$

$$W_3 < I \tag{6.65}$$

$$\Pi_i < 0 \tag{6.66}$$
where

$$\Pi_i = \begin{pmatrix}
\Pi_{11} & 0 & \Pi_{13} & 0 & \Pi_{15} & 0 & \Pi_{17} & \Pi_{18} \\
* & \Pi_{22} & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & \Pi_{33} & 0 & 0 & 0 & 0 & 0 \\
* & * & * & \Pi_{44} & 0 & 0 & 0 & 0 \\
* & * & * & * & \Pi_{55} & \Pi_{56} & 0 & 0 \\
* & * & * & * & * & \Pi_{66} & 0 & 0 \\
* & * & * & * & * & * & \Pi_{77} & 0 \\
* & * & * & * & * & * & 0 & \Pi_{88}
\end{pmatrix}$$

with

$$\begin{aligned}
\Pi_{11} &= -2P_iC_i + \sum_{j=1}^{S}\gamma_{ij}P_j + \rho_i(G_{i1} + H_{i1}) + \varepsilon_iH_{i1} + Q_1 + Q_2 + Q_3 - W_1 - W_3 + \varepsilon_i^{-1}\lambda P_i^2, \\
\Pi_{13} &= J_i + W_3, \quad \Pi_{15} = W_1, \quad \Pi_{17} = P_iA_i + LD, \quad \Pi_{18} = P_iB_i, \\
\Pi_{22} &= \tau^2I + \delta_1^2W_1 + \delta_{12}^2W_2 - W_4, \quad \Pi_{33} = -W_3, \\
\Pi_{44} &= \rho_i(G_{i2} + H_{i2}) + \varepsilon_iH_{i2} - (1-\bar\delta)Q_1, \\
\Pi_{55} &= -Q_2 - W_1 - W_2, \quad \Pi_{56} = W_2, \quad \Pi_{66} = -Q_3 - W_2, \\
\Pi_{77} &= R - 2D, \quad \Pi_{88} = -(1-\bar\delta)R,
\end{aligned}$$
then the master system (6.55) and the slave system (6.56) are synchronous in mean
square. Moreover, the feedback gain matrix is determined by K i = Pi−1 Ji , (i ∈ S).
Proof Fix any (ξ(0), r0 ) ∈ Rn × S and write e(t; ξ(0), r0 ) = e(t) for simplicity.
Consider the following Lyapunov functional V ∈ C2,1 (Rn × R+ × S; R+ ) for the
error system (6.58):
$$V(e(t), t, r(t)) = \sum_{p=1}^{5} V_p(e(t), t, r(t)) \tag{6.67}$$

where

$$\begin{aligned}
V_1 &= e^T(t)P(r(t))e(t) \\
V_2 &= \int_{t-\delta(t)}^{t} e^T(s)Q_1e(s)\,ds + \int_{t-\delta_1}^{t} e^T(s)Q_2e(s)\,ds + \int_{t-\delta_2}^{t} e^T(s)Q_3e(s)\,ds \\
V_3 &= \int_{t-\delta(t)}^{t} l^T(e(s))Rl(e(s))\,ds
\end{aligned}$$
6.3 Synchronization of SDNN with Lévy Noise and Markovian … 299
$$\begin{aligned}
V_4 &= \delta_1\int_{-\delta_1}^{0}\int_{t+\theta}^{t} \dot e^T(s)W_1\dot e(s)\,ds\,d\theta + \delta_{12}\int_{-\delta_2}^{-\delta_1}\int_{t+\theta}^{t} \dot e^T(s)W_2\dot e(s)\,ds\,d\theta \\
V_5 &= \tau\int_{-\tau}^{0}\int_{t+\theta}^{t} \dot e^T(s)W_3\dot e(s)\,ds\,d\theta + \int_{t}^{t_{k+1}} \dot e^T(s)W_4\dot e(s)\,ds
\end{aligned}$$
By the generalized Itô formula, using

$$\operatorname{trace}(g^TP_ig) \le e^T(t)\rho_iG_{i1}e(t) + e^T(t-\delta(t))\rho_iG_{i2}e(t-\delta(t)) \tag{6.69}$$

together with Assumptions 6.22–6.24, we have

$$\begin{aligned}
\mathcal{L}V_1 &\le e^T(t)\Big[\Big(-2P_iC_i + \sum_{j=1}^{S}\gamma_{ij}P_j\Big)e(t) + 2P_iA_il(e(t)) + 2P_iB_il(e(t-\delta(t))) + 2P_iK_ie(t_k)\Big] \\
&\quad + e^T(t)\rho_iG_{i1}e(t) + e^T(t-\delta(t))\rho_iG_{i2}e(t-\delta(t)) \\
&\quad + (\rho_i + \varepsilon_i)\big[e^T(t-\delta(t))H_{i2}e(t-\delta(t)) + e^T(t)H_{i1}e(t)\big] + \varepsilon_i^{-1}\lambda e^T(t)P_i^2e(t)
\end{aligned} \tag{6.71}$$
$$\begin{aligned}
\mathcal{L}V_2 &\le e^T(t)Q_1e(t) - (1-\bar\delta)e^T(t-\delta(t))Q_1e(t-\delta(t)) \\
&\quad + e^T(t)Q_2e(t) - e^T(t-\delta_1)Q_2e(t-\delta_1) \\
&\quad + e^T(t)Q_3e(t) - e^T(t-\delta_2)Q_3e(t-\delta_2)
\end{aligned} \tag{6.72}$$
$$\begin{aligned}
\mathcal{L}V_3 &\le l^T(e(t))Rl(e(t)) - (1-\bar\delta)l^T(e(t-\delta(t)))Rl(e(t-\delta(t))) \\
&\quad + 2\big[e^T(t)LDl(e(t)) - l^T(e(t))Dl(e(t))\big] \\
&= l^T(e(t))(R-2D)l(e(t)) + 2e^T(t)LDl(e(t)) - (1-\bar\delta)l^T(e(t-\delta(t)))Rl(e(t-\delta(t)))
\end{aligned} \tag{6.73}$$
$$\begin{aligned}
\mathcal{L}V_4 &= \delta_1^2\dot e^T(t)W_1\dot e(t) - \delta_1\int_{t-\delta_1}^{t} \dot e^T(s)W_1\dot e(s)\,ds \\
&\quad + \delta_{12}^2\dot e^T(t)W_2\dot e(t) - \delta_{12}\int_{t-\delta_2}^{t-\delta_1} \dot e^T(s)W_2\dot e(s)\,ds \\
&\le \dot e^T(t)(\delta_1^2W_1 + \delta_{12}^2W_2)\dot e(t) - e^T(t)W_1e(t) + 2e^T(t)W_1e(t-\delta_1) \\
&\quad - e^T(t-\delta_1)(W_1 + W_2)e(t-\delta_1) + 2e^T(t-\delta_1)W_2e(t-\delta_2) - e^T(t-\delta_2)W_2e(t-\delta_2)
\end{aligned} \tag{6.74}$$
$$\begin{aligned}
\mathcal{L}V_5 &= \tau^2\dot e^T(t)W_3\dot e(t) - \tau\int_{t-\tau}^{t} \dot e^T(s)W_3\dot e(s)\,ds - \dot e^T(t)W_4\dot e(t) \\
&= \dot e^T(t)(\tau^2W_3 - W_4)\dot e(t) - \tau\int_{t-\tau}^{t} \dot e^T(s)W_3\dot e(s)\,ds
\end{aligned} \tag{6.75}$$
Combining (6.67), (6.71), (6.72), (6.73), (6.74), and (6.77), it can be derived that

$$\mathcal{L}V = \sum_{p=1}^{5}\mathcal{L}V_p \le \psi^T(t)\Pi_i\psi(t)$$

where $\psi(t) = [e^T(t)\ \dot e^T(t)\ e^T(t_k)\ e^T(t-\delta(t))\ e^T(t-\delta_1)\ e^T(t-\delta_2)\ l^T(e(t))\ l^T(e(t-\delta(t)))]^T$ and $J_i = P_iK_i$.
From (6.66), we have

$$\mathcal{L}V \le \lambda_{\max}(\Pi_i)|\psi(t)|^2 = -\kappa_i|\psi(t)|^2 \le -\kappa|e(t)|^2$$

where $-\kappa_i = \lambda_{\max}(\Pi_i)$ $(\kappa_i > 0,\ i \in S)$ and $-\kappa = \max_{i\in S}\{-\kappa_i\}$. Then it can be derived from (1.12) that

$$-E\int_0^T \mathcal{L}V\,dt = EV_0 - EV_T \le EV_0 \tag{6.80}$$

So it follows from Definition 6.25 that the master system (6.55) and the slave system (6.56) are synchronous in mean square. This completes the proof.
Corollary 6.28 Assume that (6.64) and (6.65) are satisfied under the conditions in Theorem 6.26. If the inequality

$$\Omega_i = \begin{pmatrix} \Omega_1 & \Omega_2 \\ \Omega_2^T & -\Omega_3 \end{pmatrix} < 0 \tag{6.81}$$

holds for any $i \in S$, where

$$\Omega_2^T = \begin{pmatrix} \sqrt{\lambda}P_i & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \tau I & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \qquad \Omega_3 = \begin{pmatrix} \varepsilon_iI & 0 \\ * & I \end{pmatrix},$$

then the master system (6.55) and the slave system (6.56) are synchronous in mean square and the feedback gain matrix can be determined by $K_i = P_i^{-1}J_i$ $(i \in S)$.

Proof It can be easily verified that $\Pi_i = \Omega_1 + \Omega_2\Omega_3^{-1}\Omega_2^T$. According to (6.81) and Lemma 1.21, we have $\Pi_i < 0$. The conclusion then follows from Theorem 6.26.
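The Schur-complement argument can be checked numerically on small illustrative matrices: with $\Omega_3 > 0$, negativity of the block matrix is equivalent to negativity of $\Omega_1 + \Omega_2\Omega_3^{-1}\Omega_2^T$. A sketch (all matrices below are illustrative, not from the example):

```python
import numpy as np

# Schur-complement check: for O3 > 0, [[O1, O2], [O2^T, -O3]] < 0 holds
# iff Pi = O1 + O2 @ inv(O3) @ O2^T < 0. Illustrative small matrices.
O1 = np.array([[-3.0, 0.5], [0.5, -4.0]])
O2 = np.array([[1.0], [0.2]])
O3 = np.array([[2.0]])

Pi = O1 + O2 @ np.linalg.inv(O3) @ O2.T
block = np.block([[O1, O2], [O2.T, -O3]])

pi_neg = bool(np.all(np.linalg.eigvalsh(Pi) < 0))
block_neg = bool(np.all(np.linalg.eigvalsh(block) < 0))
print(pi_neg, block_neg)  # True True
```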
Remark 6.29 Compared with the synchronization criteria in [40], our results are also expressed in terms of LMIs. However, unlike the approach adopted there, which constructs the LMI by expanding the dimensions of the matrices, we take a straightforward way to obtain lower-dimensional LMIs, which not only keeps the theoretical derivation simple but also ensures computational tractability for the calculation software.
One example is presented here in order to show the usefulness of our results. Our
aim is to examine the mean square synchronization of the given neural networks with
Lévy noise and Markovian switching.
Example 6.30 Consider the master system (6.55) and the slave system (6.56) with one-dimensional Lévy noise and 2-state Markovian switching, where $\delta(t) = \frac{1}{2}\sin(t) + 1$, $f(\cdot) = \tanh(\cdot)$, $\lambda = 2$, and $\phi$ is the standard normal distribution of the random variable $z$. The transition rate matrix is chosen as $\Gamma = \begin{pmatrix} -2 & 2 \\ 1 & -1 \end{pmatrix}$. The other parameters are as follows:

$$C(1) = \begin{pmatrix} 5 & 0 \\ 0 & 5 \end{pmatrix}, \quad C(2) = \begin{pmatrix} 6 & 0 \\ 0 & 8 \end{pmatrix},$$

$$A(1) = \begin{pmatrix} -1.3 & 0.2 \\ 0.1 & -1.2 \end{pmatrix}, \quad A(2) = \begin{pmatrix} -1.4 & 0.2 \\ 0.2 & 1.5 \end{pmatrix},$$

$$B(1) = \begin{pmatrix} -0.3 & 0.3 \\ -0.1 & 0.2 \end{pmatrix}, \quad B(2) = \begin{pmatrix} -0.2 & -0.3 \\ 0.1 & -0.2 \end{pmatrix},$$

$$g(1) = \frac{e(t)}{2} + \frac{e(t-\delta(t))}{3}, \qquad g(2) = \frac{e(t) - e(t-\delta(t))}{2},$$

$$h(1) = \Big(\frac{e(t)}{4\sqrt{2}} + \frac{e(t-\delta(t))}{2\sqrt{2}}\Big)z, \qquad h(2) = \Big(\frac{e(t)}{2\sqrt{2}} - \frac{e(t-\delta(t))}{3\sqrt{2}}\Big)z.$$

It can be computed that

$$\bar\delta = \frac{1}{2}, \quad \delta_1 = \frac{1}{2}, \quad \delta_2 = \frac{3}{2}, \quad \delta_{12} = 1, \quad L = I_2,$$

$$G_{11} = \frac{5}{12}I_2, \quad G_{12} = \frac{5}{18}I_2, \quad H_{11} = \frac{3}{8}I_2, \quad H_{12} = \frac{3}{4}I_2,$$

$$G_{21} = \frac{1}{2}I_2, \quad G_{22} = \frac{1}{2}I_2, \quad H_{21} = \frac{5}{6}I_2, \quad H_{22} = \frac{5}{9}I_2,$$

thus Assumptions 6.22, 6.23, and 6.24 are satisfied.
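The stated delay bounds can be confirmed directly; the following sketch checks $\delta_1 \le \delta(t) \le \delta_2$ and $|\dot\delta(t)| \le \bar\delta$ on a grid of sample points:

```python
import math

# Check of the delay bounds in Example 6.30: delta(t) = 0.5*sin(t) + 1
# satisfies 1/2 <= delta(t) <= 3/2 and |d delta/dt| = |0.5*cos(t)| <= 1/2.
def delta(t):
    return 0.5 * math.sin(t) + 1.0

def delta_dot(t):
    return 0.5 * math.cos(t)

ts = [0.01 * i for i in range(1000)]
in_range = all(0.5 <= delta(t) <= 1.5 for t in ts)
rate_ok = all(abs(delta_dot(t)) <= 0.5 for t in ts)
print(in_range, rate_ok)  # True True
```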
Now, using the Matlab LMI toolbox, we solve the LMIs (6.64), (6.65), and (6.81) and obtain

$$P(1) = \begin{pmatrix} 0.7963 & 0.0080 \\ 0.0080 & 0.7789 \end{pmatrix}, \quad P(2) = \begin{pmatrix} 0.5901 & 0.0102 \\ 0.0102 & 0.6030 \end{pmatrix},$$

$$Q_1 = \begin{pmatrix} 4.1727 & -0.0246 \\ -0.0246 & 4.1340 \end{pmatrix}, \quad Q_2 = \begin{pmatrix} 0.5384 & -0.0172 \\ -0.0172 & 0.5049 \end{pmatrix}, \quad Q_3 = \begin{pmatrix} 0.6292 & -0.0265 \\ -0.0265 & 0.5191 \end{pmatrix},$$

$$R = \begin{pmatrix} 1.3103 & -0.0399 \\ -0.0399 & 0.9316 \end{pmatrix}, \quad D = \begin{pmatrix} 1.0923 & 0 \\ 0 & 1.0923 \end{pmatrix},$$

$$W_1 = \begin{pmatrix} 0.5757 & 0.0108 \\ 0.0108 & 0.6078 \end{pmatrix}, \quad W_2 = \begin{pmatrix} 0.4900 & 0.0039 \\ 0.0039 & 0.5045 \end{pmatrix},$$

$$W_3 = \begin{pmatrix} 0.6942 & 0.0044 \\ 0.0044 & 0.7009 \end{pmatrix}, \quad W_4 = \begin{pmatrix} 1.7213 & 0.0057 \\ 0.0057 & 1.7410 \end{pmatrix},$$

$$J(1) = \begin{pmatrix} -0.6942 & -0.0044 \\ -0.0044 & -0.7009 \end{pmatrix}, \quad J(2) = \begin{pmatrix} -0.6942 & -0.0044 \\ -0.0044 & -0.7009 \end{pmatrix},$$

$$K(1) = \begin{pmatrix} -0.8729 & 0.0035 \\ 0.0033 & -0.8998 \end{pmatrix}, \quad K(2) = \begin{pmatrix} -1.1766 & 0.0127 \\ 0.0127 & -1.1625 \end{pmatrix},$$

$$\rho_1 = 1.0414, \quad \rho_2 = 0.9697, \quad \varepsilon_1 = 0.9149, \quad \varepsilon_2 = 0.9329, \quad \tau = 0.3228.$$
So the conditions of Corollary 6.28 are all satisfied. It follows from Corollary 6.28
that systems (6.55) and (6.56) are synchronous in mean square.
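The reported gains can be cross-checked against the relation $K_i = P_i^{-1}J_i$ from Corollary 6.28; using the mode-1 matrices above, the recomputed gain agrees with the printed one to about three decimal places:

```python
import numpy as np

# Cross-check of K_1 = P_1^{-1} J_1 using the mode-1 matrices reported in
# the example; agreement is up to the rounding printed in the text.
P1 = np.array([[0.7963, 0.0080], [0.0080, 0.7789]])
J1 = np.array([[-0.6942, -0.0044], [-0.0044, -0.7009]])
K1_reported = np.array([[-0.8729, 0.0035], [0.0033, -0.8998]])

K1 = np.linalg.solve(P1, J1)   # solves P1 @ K1 = J1, i.e. K1 = inv(P1) @ J1
print(float(np.max(np.abs(K1 - K1_reported))) < 1e-2)  # True
```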
To illustrate the effectiveness of the proposed method, we plot the stochastic factors in system (6.58), the synchronization behaviors of systems (6.55) and (6.56), and the control input in Figs. 6.12, 6.13, 6.14, 6.15, 6.16, and 6.17.

Figure 6.12 shows the 2-state Markov chain generated by the transition rate matrix $\Gamma$. The decomposition of the Lévy noise, namely a Brownian motion and a Poisson point process, is shown in Figs. 6.13 and 6.14, respectively. Figures 6.15 and 6.16 depict the state trajectories of the master and the slave system, and Fig. 6.17 shows the control input.

[Fig. 6.12: Markov chain $r(t)$. Fig. 6.13: Brownian motion $\omega(t)$. Fig. 6.14: Poisson point process with normally distributed jump amplitudes. Fig. 6.15: master system states $(x_1, x_2)$. Fig. 6.16: slave system states $(y_1, y_2)$; error state $e(t)$. Fig. 6.17: control input $u(t)$ $(u_1, u_2)$.]
6.3.5 Conclusion
We have dealt with the problem of sampled-data synchronization for stochastic delay
neural networks with Lévy noise and Markovian switching. By virtue of generalized
Itô’s formula and Lyapunov functional, a sufficient condition which depends on the
switching mode, time delay, and the upper bound of sampling intervals is presented to
guarantee the synchronization of the master system and the slave system. The desired
controller can be obtained via solving a set of LMIs expressed in this sufficient
condition. An example has been provided to demonstrate the effectiveness of the
main results.
6.4.1 Introduction

In the past few decades, neural networks have been successfully applied in many areas such as image processing, pattern recognition, associative memory, and optimization problems. As an inherent feature of real neural networks, time delays, which are one of the main causes of oscillation and instability, have gained considerable research attention focused on stability and synchronization issues [6, 17, 32, 36, 39, 40, 49, 54–56]. In the previous studies, the delay type can be constant, time-varying, discrete, or distributed, and the results can be delay-dependent or delay-independent. It is generally recognized that the delay-independent case is more conservative than the delay-dependent case [40]. On the other hand, it has been verified that neural networks frequently exhibit chaotic behaviors if the time delays and the parameters of the neural networks are properly chosen [57]. Chaos synchronization problems thus attract extensive research attention [7, 12, 18, 27] due to their favorable applications in secure communication, image processing, chemical and biological systems, and so on.
It has been noted that abrupt changes often emerge in the structure and parameters of many neural networks due to phenomena such as component failures or repairs, changing subsystem interconnections, and abrupt environmental disturbances. Neural networks in this case may be treated as systems which have finite modes, and the modes may jump from one to another at different times [22, 51]. So far, finite-state Markov chains have been a well-established model which can be used to govern the switching between different modes of neural networks. The stability and synchronization issues for Markovian switching neural networks have therefore received much research attention [17, 32, 51, 54, 56].
As shown in [9], the synaptic transmission in real nervous systems can be viewed as a noisy process brought on by random fluctuations from the release of neurotransmitters and other probabilistic causes. Consequently, stochastic noise has become an essential ingredient in the modeling of real neural networks.
6.4 Adaptive Synchronization of SDNN with Lévy Noise … 309
Consider the n-dimensional stochastic delay neural network with Markovian switch-
ing of the following form:
where r (t) is the Markov chain and x(t) = [x1 (t), . . . , xn (t)]T ∈ Rn is the state vec-
tor associated with the n neurons. f (x(t)) = [ f 1 (x1 (t)), . . . , f n (xn (t))]T denotes the
neuron activation function. C(r (t)) > 0 is a diagonal matrix. A(r (t)) and B(r (t)) are
the connection weight matrix and the delay connection weight matrix, respectively.
δ(t) is the time-varying delay and satisfies 0 ≤ δ1 ≤ δ(t) ≤ δ2 , δ̇ ≤ δ̄.
In this section, we will treat system (6.82) as the master system, and its slave system can be described by the following equation:

where $C(r(t))$, $A(r(t))$, and $B(r(t))$ are the same matrices as in (6.82). $e(t) = y(t) - x(t)$ is the error state, which appears in the Lévy noise intensity functions $g$ and $h$ satisfying $g : \mathbb{R}^n \times \mathbb{R}^n \times S \to \mathbb{R}^{n\times m}$ and $h : \mathbb{R}^n \times \mathbb{R}^n \times S \times \mathbb{R} \to \mathbb{R}^n$. $u(t)$ is the adaptive controller that will be designed in order to achieve the synchronization of systems (6.82) and (6.83).
The control signal is assumed to take the following form:

$$u(t) = K(t)e(t) \tag{6.84}$$

where $K(t) = \mathrm{diag}\{k_1(t), \dots, k_n(t)\}$ is the adaptive feedback gain matrix to be determined.
Substituting (6.84) into (6.83) and then subtracting (6.82) from (6.83) yields the error system

$$\begin{aligned}
de(t) &= [-C(r(t))e(t) + A(r(t))l(e(t)) + B(r(t))l(e(t-\delta(t))) + K(t)e(t)]\,dt \\
&\quad + g(e(t), e(t-\delta(t)), r(t))\,d\omega(t) + \int_{\mathbb{R}} h(e(t), e(t-\delta(t)), r(t), z)\,N(dt, dz)
\end{aligned} \tag{6.85}$$
where $l(e(t)) = f(y(t)) - f(x(t)) = f(x(t) + e(t)) - f(x(t))$. The initial data is given by $\{e(\theta) : -\delta_2 \le \theta \le 0\} = \xi(\theta) \in L^2_{\mathcal{F}_0}([-\delta_2, 0]; \mathbb{R}^n)$, $r(0) = r_0$. It is assumed that $\omega(t)$, $N(t,z)$, and $r(t)$ in system (6.85) are independent. For simplicity, we will write $M(r(t))$ as $M_i$ when $r(t) = i$ in the sequel.

Some hypotheses are presented below for the purpose of the synchronization of systems (6.82) and (6.83), i.e., the stability study of the error system (6.85).
Assumption 6.31 Each function $f_i : \mathbb{R} \to \mathbb{R}$ is nondecreasing and there exists a positive constant $\eta_i$ such that

$$\begin{aligned}
e^T(t)LD\,l(e(t)) &= \sum_{i=1}^{n} l_i(e_i(t))\eta_id_ie_i(t) \\
&\ge \sum_{i=1}^{n} d_i[l_i(e_i(t))]^2 = l^T(e(t))D\,l(e(t))
\end{aligned} \tag{6.86}$$
Definition 6.34 The master system (6.82) and the slave system (6.83) are said to be synchronous in mean square (or, stochastically synchronous, see [40]) if the error system (6.85) is stable in mean square, that is, for any $\xi(0) \in L^2_{\mathcal{F}_0}([-\delta_2, 0]; \mathbb{R}^n)$ and $r_0 = i \in S$,

$$\lim_{T\to\infty} E\int_0^T |e(t; \xi(0), r_0)|^2\,dt < \infty \tag{6.90}$$
We are now in a position to derive the condition under which the master system (6.82)
and the slave system (6.83) are synchronous in mean square. The main theorem below
reveals that such conditions can be expressed in terms of the positive definite solutions
to a quadratic matrix inequality involving some scalar parameters, and the update
law of feedback gain is dependent on one of these solutions.
Theorem 6.35 Let Assumptions 6.31, 6.32, and 6.33 hold. If there exist positive definite matrices $P_i, Q_1, Q_2, Q_3, R$, a positive diagonal matrix $D$, and positive constants $\rho_i, \varepsilon_i\ (i \in S)$ such that

$$P_i < \rho_iI \tag{6.91}$$

$$\Pi_i < 0 \tag{6.92}$$

and the update law of the feedback gain matrix $K(t)$ satisfies

$$\dot k_v(t) = -\alpha\sum_{u=1}^{n} e_u(t)P_{iuv}e_v(t), \quad (v = 1, \dots, n) \tag{6.93}$$

where

$$\Pi_i = \begin{pmatrix}
\Pi_{11} & 0 & 0 & 0 & \Pi_{15} & \Pi_{16} \\
* & \Pi_{22} & 0 & 0 & 0 & 0 \\
* & * & \Pi_{33} & 0 & 0 & 0 \\
* & * & * & \Pi_{44} & 0 & 0 \\
* & * & * & * & \Pi_{55} & 0 \\
* & * & * & * & * & \Pi_{66}
\end{pmatrix}$$

with

$$\begin{aligned}
\Pi_{11} &= -2P_iC_i - 2\beta P_i + \sum_{j=1}^{S}\gamma_{ij}P_j + \rho_i(G_{i1} + H_{i1}) + \varepsilon_iH_{i1} + Q_1 + Q_2 + Q_3 + \varepsilon_i^{-1}\lambda P_i^2, \\
\Pi_{15} &= P_iA_i + LD, \quad \Pi_{16} = P_iB_i, \\
\Pi_{22} &= \rho_i(G_{i2} + H_{i2}) + \varepsilon_iH_{i2} - (1-\bar\delta)Q_1, \\
\Pi_{33} &= -Q_2, \quad \Pi_{44} = -Q_3, \quad \Pi_{55} = R - 2D, \quad \Pi_{66} = -(1-\bar\delta)R,
\end{aligned}$$

$\alpha$ is an arbitrary positive constant, and $\beta$ is a positive constant to be determined, then the master system (6.82) and the slave system (6.83) are synchronous in mean square.
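The update law (6.93) can be discretized by a forward-Euler step. The sketch below couples it to a toy error recursion; the matrix `P`, the constant `alpha`, and the error dynamics are all illustrative assumptions, not the error system (6.85):

```python
import numpy as np

# Forward-Euler sketch of the adaptive gain update (6.93):
#   k_v' = -alpha * sum_u e_u * P[u, v] * e_v,  v = 1, ..., n.
# P stands in for the mode-dependent matrix P_i; the error recursion
# below is a toy placeholder, not the true error dynamics.
alpha, dt = 1.0, 0.01
P = np.array([[1.3, 0.07], [0.07, 1.9]])
e = np.array([0.5, -0.2])
k = np.zeros(2)

for _ in range(100):
    k_dot = np.array([-alpha * sum(e[u] * P[u, v] * e[v] for u in range(2))
                      for v in range(2)])
    k = k + dt * k_dot          # Euler step of the update law
    e = e * (1.0 + dt * k)      # toy error recursion driven by the gain

print(k)                        # both gains have drifted negative
```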
Proof Fix any (ξ(0), r0 ) ∈ Rn × S and write e(t; ξ(0), r0 ) = e(t) for simplicity.
Consider the following Lyapunov functional V ∈ C2,1 (Rn × R+ × S; R+ ) for the
error system (6.85):
$$V(e(t), t, r(t)) = \sum_{p=1}^{4} V_p(e(t), t, r(t)) \tag{6.94}$$

where

$$\begin{aligned}
V_1 &= e^T(t)P(r(t))e(t) \\
V_2 &= \int_{t-\delta(t)}^{t} e^T(s)Q_1e(s)\,ds + \int_{t-\delta_1}^{t} e^T(s)Q_2e(s)\,ds + \int_{t-\delta_2}^{t} e^T(s)Q_3e(s)\,ds \\
V_3 &= \int_{t-\delta(t)}^{t} l^T(e(s))Rl(e(s))\,ds \\
V_4 &= \sum_{v=1}^{n} \frac{(k_v(t) + \beta)^2}{\alpha}
\end{aligned}$$
By the generalized Itô formula, using

$$\operatorname{trace}(g^TP_ig) \le e^T(t)\rho_iG_{i1}e(t) + e^T(t-\delta(t))\rho_iG_{i2}e(t-\delta(t)) \tag{6.96}$$

together with Assumptions 6.31–6.33, we have

$$\begin{aligned}
\mathcal{L}V_1 &\le e^T(t)\Big[\Big(-2P_iC_i + \sum_{j=1}^{S}\gamma_{ij}P_j + 2P_iK(t)\Big)e(t) + 2P_iA_il(e(t)) + 2P_iB_il(e(t-\delta(t)))\Big] \\
&\quad + e^T(t)\rho_iG_{i1}e(t) + e^T(t-\delta(t))\rho_iG_{i2}e(t-\delta(t)) \\
&\quad + (\rho_i + \varepsilon_i)\big[e^T(t-\delta(t))H_{i2}e(t-\delta(t)) + e^T(t)H_{i1}e(t)\big] + \varepsilon_i^{-1}\lambda e^T(t)P_i^2e(t)
\end{aligned} \tag{6.98}$$
Now, we compute

$$\begin{aligned}
\mathcal{L}V_2 &\le e^T(t)Q_1e(t) - (1-\bar\delta)e^T(t-\delta(t))Q_1e(t-\delta(t)) \\
&\quad + e^T(t)Q_2e(t) - e^T(t-\delta_1)Q_2e(t-\delta_1) \\
&\quad + e^T(t)Q_3e(t) - e^T(t-\delta_2)Q_3e(t-\delta_2)
\end{aligned} \tag{6.99}$$
$$\begin{aligned}
\mathcal{L}V_3 &\le l^T(e(t))Rl(e(t)) - (1-\bar\delta)l^T(e(t-\delta(t)))Rl(e(t-\delta(t))) \\
&\quad + 2\big[e^T(t)LDl(e(t)) - l^T(e(t))Dl(e(t))\big] \\
&= l^T(e(t))(R-2D)l(e(t)) + 2e^T(t)LDl(e(t)) - (1-\bar\delta)l^T(e(t-\delta(t)))Rl(e(t-\delta(t)))
\end{aligned} \tag{6.100}$$
$$\begin{aligned}
\mathcal{L}V_4 &= \sum_{v=1}^{n} \frac{2(k_v(t)+\beta)\dot k_v(t)}{\alpha} = \sum_{v=1}^{n} \frac{2(k_v(t)+\beta)\big(-\alpha\sum_{u=1}^{n} e_u(t)P_{iuv}e_v(t)\big)}{\alpha} \\
&= -2e^T(t)P_iK(t)e(t) - 2\beta e^T(t)P_ie(t)
\end{aligned} \tag{6.101}$$
Combining (6.94), (6.98), (6.99), (6.100), and (6.101), it can be derived that

$$\mathcal{L}V = \sum_{p=1}^{4}\mathcal{L}V_p \le \xi^T(t)\Pi_i\xi(t)$$

where $\xi(t) = [e^T(t)\ e^T(t-\delta(t))\ e^T(t-\delta_1)\ e^T(t-\delta_2)\ l^T(e(t))\ l^T(e(t-\delta(t)))]^T$.
From (6.92), we have

$$\mathcal{L}V \le \lambda_{\max}(\Pi_i)|\xi(t)|^2 = -\kappa_i|\xi(t)|^2 \le -\kappa|e(t)|^2$$

where $-\kappa_i = \lambda_{\max}(\Pi_i)$ $(\kappa_i > 0,\ i \in S)$ and $-\kappa = \max_{i\in S}\{-\kappa_i\}$. Then it can be derived from (1.12) that

$$-E\int_0^T \mathcal{L}V\,dt = EV_0 - EV_T \le EV_0 \tag{6.104}$$

So it follows from Definition 6.34 that the master system (6.82) and the slave system (6.83) are synchronous in mean square. The proof is complete.
Remark 6.36 In order to make the update law easy to solve, many studies [33, 57] tend to select a positive diagonal matrix $P_i$ in the Lyapunov functional. In this theorem, a general positive definite matrix is adopted rather than a special diagonal one.
Corollary 6.38 Assume that (6.91) and (6.93) are satisfied under the conditions in Theorem 6.35. If the inequality

$$\Omega_i = \begin{pmatrix}
\bar\Pi_{11} & 0 & 0 & 0 & \Pi_{15} & \Pi_{16} & \sqrt{\lambda}P_i \\
* & \Pi_{22} & 0 & 0 & 0 & 0 & 0 \\
* & * & \Pi_{33} & 0 & 0 & 0 & 0 \\
* & * & * & \Pi_{44} & 0 & 0 & 0 \\
* & * & * & * & \Pi_{55} & 0 & 0 \\
* & * & * & * & * & \Pi_{66} & 0 \\
* & * & * & * & * & * & -\varepsilon_iI
\end{pmatrix} < 0 \tag{6.105}$$

holds for any $i \in S$, where $\bar\Pi_{11} = \Pi_{11} - \varepsilon_i^{-1}\lambda P_i^2$ and the $\Pi_{jj}$ $(j = 1, \dots, 6)$ are the same as those in Theorem 6.35, then the master system (6.82) and the slave system (6.83) are synchronous in mean square.
Proof Set

$$\Omega_1 = \begin{pmatrix}
\bar\Pi_{11} & 0 & 0 & 0 & \Pi_{15} & \Pi_{16} \\
* & \Pi_{22} & 0 & 0 & 0 & 0 \\
* & * & \Pi_{33} & 0 & 0 & 0 \\
* & * & * & \Pi_{44} & 0 & 0 \\
* & * & * & * & \Pi_{55} & 0 \\
* & * & * & * & * & \Pi_{66}
\end{pmatrix},$$

$$\Omega_2 = [\sqrt{\lambda}P_i\ 0\ 0\ 0\ 0\ 0]^T, \qquad \Omega_3 = \varepsilon_iI,$$

then we get

$$\Omega_i = \begin{pmatrix} \Omega_1 & \Omega_2 \\ \Omega_2^T & -\Omega_3 \end{pmatrix}$$
One example is presented here in order to show the usefulness of our results. We aim
to examine the mean square synchronization of the given chaotic neural networks
with Lévy noise and Markovian switching.
Example 6.39 Consider the master system (6.82) and the slave system (6.83) with one-dimensional Lévy noise and 2-state Markovian switching, where $\delta(t) = \frac{e^t}{e^t+1}$, $f_j(u_j) = \frac{|u_j+1| - |u_j-1|}{2}$, $\lambda = 0.2$, and $\phi$ is the standard normal distribution of the random variable $z$. The transition rate matrix is chosen as $\Gamma = \begin{pmatrix} -1 & 1 \\ 0.5 & -0.5 \end{pmatrix}$. The other parameters are as follows:

$$C(1) = \begin{pmatrix} 1 & 0 \\ 0 & 0.9 \end{pmatrix}, \quad C(2) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},$$

$$A(1) = \begin{pmatrix} 1.69 & 19 \\ 0.11 & 1.69 \end{pmatrix}, \quad A(2) = \begin{pmatrix} 1.79 & 19 \\ 0.09 & 1.79 \end{pmatrix},$$

$$B(1) = \begin{pmatrix} -1.33 & 0.3 \\ 0.2 & -1.33 \end{pmatrix}, \quad B(2) = \begin{pmatrix} -1.44 & 0.1 \\ 0.1 & -1.44 \end{pmatrix},$$

$$g(1) = \frac{e(t) + e(t-\delta(t))}{10}, \qquad g(2) = \frac{e(t) - e(t-\delta(t))}{20},$$

$$h(1) = \frac{e(t) + e(t-\delta(t))}{10}z, \qquad h(2) = \frac{e(t) - e(t-\delta(t))}{10}z.$$

It can be simply computed that

$$\bar\delta = \frac{1}{4}, \quad \delta_1 = \frac{1}{2}, \quad \delta_2 = 1, \quad L = I_2,$$

$$G_{11} = G_{12} = \frac{I_2}{50}, \quad G_{21} = G_{22} = \frac{I_2}{200}, \quad H_{11} = H_{12} = H_{21} = H_{22} = \frac{I_2}{25},$$

and thus Assumptions 6.31, 6.32, and 6.33 are satisfied.
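The activation $f_j(u) = \frac{|u+1| - |u-1|}{2}$ is the standard piecewise-linear saturation; the check below confirms it is bounded by 1 and Lipschitz with constant 1, consistent with $L = I_2$:

```python
# f(u) = (|u+1| - |u-1|)/2 equals u on [-1, 1] and saturates at +/-1
# outside; hence it is bounded by 1 and 1-Lipschitz (so L = I_2 works).
def f(u):
    return (abs(u + 1) - abs(u - 1)) / 2

pairs = [(-3.0, 2.0), (-0.5, 0.5), (0.0, 4.0), (1.5, -1.5), (0.1, 0.2)]
lipschitz_ok = all(abs(f(a) - f(b)) <= abs(a - b) + 1e-12 for a, b in pairs)
bounded_ok = all(abs(f(u)) <= 1 for u in [-10, -1, 0, 1, 10])
print(lipschitz_ok, bounded_ok)  # True True
```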
Now setting $\beta = 25$ and using the Matlab LMI toolbox, we solve the LMIs (6.91) and (6.105) and obtain

$$P(1) = \begin{pmatrix} 1.3527 & 0.0679 \\ 0.0679 & 1.8986 \end{pmatrix}, \quad P(2) = \begin{pmatrix} 1.3161 & 0.0580 \\ 0.0580 & 1.8601 \end{pmatrix},$$

$$Q_1 = \begin{pmatrix} 10.4401 & -3.9077 \\ -3.9077 & 23.3465 \end{pmatrix}, \quad Q_2 = Q_3 = \begin{pmatrix} 8.2883 & -2.9694 \\ -2.9694 & 18.0861 \end{pmatrix},$$

$$R = \begin{pmatrix} 13.9097 & -4.7776 \\ -4.7776 & 7.9970 \end{pmatrix}, \quad D = \begin{pmatrix} 16.8213 & 0 \\ 0 & 16.8213 \end{pmatrix},$$

$$\rho_1 = 17.7062, \quad \rho_2 = 17.9440, \quad \varepsilon_1 = 17.0156, \quad \varepsilon_2 = 16.9800.$$
So the conditions of Corollary 6.38 are all satisfied. It follows from Corollary 6.38 that systems (6.82) and (6.83) are synchronous in mean square.

To illustrate the effectiveness of the proposed method, we plot the stochastic factors in system (6.85), the chaotic behavior of system (6.82), the jumps and oscillations of the uncontrolled noisy system (6.83), and the synchronization behaviors of systems (6.82) and (6.83) with control input.

Figure 6.18 shows the 2-state Markov chain generated by the transition rate matrix $\Gamma$. The decomposition of the Lévy noise, namely a Brownian motion and a type of Poisson jump, is shown in Figs. 6.19 and 6.20, respectively. Figure 6.21 depicts the chaotic behavior of system (6.82). Figure 6.22 reveals the chaotic shape destroyed by the Lévy noise in system (6.83) without control. When the input acts on system (6.83), as plotted in Fig. 6.23, chaotic behavior like that of system (6.82) is regained. The synchronization behavior of the master system and the controlled slave system can then be seen from Fig. 6.24, which shows the error state tending to zero. Figures 6.25 and 6.26 exhibit the update law of the feedback gain and the control signals, respectively.

[Fig. 6.18: Markov chain $r(t)$. Fig. 6.19: Brownian motion $\omega(t)$. Fig. 6.20: Poisson point process with normally distributed jump amplitudes.]
[Fig. 6.21: chaotic attractor of the master system $(x_1, x_2)$. Fig. 6.22: uncontrolled noisy slave system $(y_1, y_2)$. Fig. 6.23: controlled slave system $(y_1, y_2)$. Fig. 6.24: error state $e(t)$.]

6.4.5 Conclusion

We have dealt with the problem of adaptive synchronization for stochastic delay neural networks with Lévy noise and Markovian switching. A criterion has been presented to guarantee the synchronization of the master system and the slave system. Via the solution of the LMIs in this criterion, the desired controller can be obtained. An example has been provided to show the effectiveness of the main results.
K(t)
−3.5
−4
−4.5
−5
−5.5
−6
5 10 15 20 25 30 35 40
Time
1
u(t)
−1
−2
−3
−4
5 10 15 20 25 30 35 40
Time
Chapter 7
Some Applications to Economy Based on Related Research Method
This chapter provides two applications of the methods of this book to finance and economics. As an application of the Lévy process, Sect. 7.1 offers a portfolio strategy for the financial market. Sect. 7.2 investigates a robust H∞ control strategy for a generic linear rational expectations model of the economy.
7.1.1 Introduction
To make a portfolio strategy is to search for the best allocation of wealth among different assets in a market. Taking European options as an instance, the core of the portfolio strategy problem is how to distribute the appropriate proportion of wealth to each option so as to maximize the total return at expiry. Two points recur in the literature on portfolio selection problems: setting up a market model that approximates the real financial market, and finding a way to solve it.
Portfolio strategy research is based on the portfolio selection analysis given by Markowitz [13]. Li and Ng [20] extended Markowitz's work to the multiperiod model and derived the analytical optimal portfolio policy. These previous studies assumed that the underlying market has only one state or mode, but the real market may have more than one state and can switch among them; portfolio policies under regime switching have therefore been widely discussed. In a classical financial market model, the key process S that models the evolution of the stock price is driven by a Brownian motion; this can be intuitively justified by the central limit theorem if one observes the movement of stocks. The analysis of Bernt Øksendal [25] was mainly based on the generalized Black-Scholes model, which has two assets B(t) and S(t) with dB(t) = ρ(t)B(t)dt and
of return, and the volatility of the stock vary with the switching of market states, and the stock prices are driven by a geometric Lévy process. (ii) The PDE determining the portfolio strategy and its solvability are extensions of the existing results.
Assume that {α(t) : t ≥ 0} is a Markov chain on (Ω, F, P) representing the regime of the financial market, for example, the bull or bear market of a stock market. Let S = {1, 2, . . . , S} be the regime space of this Markov chain, and Γ = (γij)S×S its transition rate matrix.
In this section, we consider a financial market model driven by a geometric Lévy process. The market consists of one risk-free asset, denoted by B, and n risky assets, denoted by S1, S2, . . . , Sn. The price processes of these assets obey the following dynamic equations, in which the price processes of the risky assets follow a geometric Lévy process:
$$
\begin{cases}
dB(t) = B(t)\,r(t,\alpha(t))\,dt, & B(0)=B_0,\\[4pt]
dS_k(t) = S_k(t)\Big[\mu_k(t,\alpha(t))\,dt + \sigma_k(t,\alpha(t))\,dW_k(t) + \displaystyle\int_{\mathbb{R}-\{0\}} z\,\tilde N_k(dt,dz)\Big], & S_k(0)=S_k^0>0,
\end{cases} \tag{7.1}
$$
where B(t) is the price of B with interest rate r(t, α(t)), and Sk(t) is the price of Sk with expected rate of return μk(t, α(t)) and volatility σk(t, α(t)), both of which follow the regime switching of the financial market. S1(t), S2(t), . . . , Sn(t) are independent of each other, and Wk(t) is a Brownian motion independent of {α(t) : t ≥ 0}. Ñk(·, ·) is defined below. Nk(dt, dz) and ηk(dz)dt denote the number of jumps and the average number of jumps, respectively, of the price process Sk(t) within time dt and jump range dz. That is,
$$\tilde N_k(dt,dz) = N_k(dt,dz) - \eta_k(dz)\,dt, \qquad \eta_k(dz)\,dt = \mathbb{E}\big[N_k(dt,dz)\big],$$
where E is the expectation operator. Moreover, we assume that Nk(dt, dz), α(t), and Wk(t) (k = 1, 2, . . . , n) are independent of each other.
Remark 7.1 The financial market model (7.1) is an extension of the B-S (Black-Scholes) market model, in which the interest rate of the bond, the rate of return, and the volatility of the stock vary with the switching of market states, and the stock prices are driven by a geometric Lévy process.
For the financial market model (7.1), we introduce the concept of a self-financing portfolio as follows:
$$V(t) := \varphi(t)B(t) + \sum_{k=1}^{n}\psi_k(t)S_k(t), \quad t\ge 0, \tag{7.3}$$
$$dV(t) = \varphi(t)\,dB(t) + \sum_{k=1}^{n}\psi_k(t)\,dS_k(t), \quad t\ge 0. \tag{7.4}$$
Problem formulation: In this section, we shall propose a portfolio strategy for the financial market model (7.1), determined by a partial differential equation (PDE) of parabolic type obtained using the Itô formula. The solvability of the PDE is studied by means of a variable transformation. Furthermore, the relationship between the solution of the PDE and the wealth process is discussed.
In this section, we give the following fundamental results. For the sake of simplicity, we write r(t, α(t)) as r, f(t, S(t)) as f, etc.
To obtain the main result, we first give the solution of (7.1) and the characterization of the differential (7.4) of the wealth process.
The exact solution of B(t) in (7.1) can be found as follows:
$$B(t) = B(0)\exp\Big(\int_0^t r(s,\alpha(s))\,ds\Big).$$
7.1 Portfolio Strategy of Financial Market with Regime … 331
To solve the second equation in (7.1) for Sk(t), it follows from the Itô formula that
$$
\begin{aligned}
d\ln S_k(t) &= \frac{1}{S_k(t)}\big[S_k(t)\mu_k(t,\alpha(t))\,dt + S_k(t)\sigma_k(t,\alpha(t))\,dW_k(t)\big] - \frac{1}{2}\,\frac{1}{S_k^2(t)}\,S_k^2(t)\sigma_k^2(t,\alpha(t))\,dt\\
&\quad + \int_{\mathbb{R}-\{0\}}\big[\ln(S_k(t)+zS_k(t)) - \ln S_k(t)\big]\,\tilde N_k(dt,dz)\\
&\quad + \int_{\mathbb{R}-\{0\}}\Big[\ln(S_k(t)+zS_k(t)) - \ln S_k(t) - \frac{1}{S_k(t)}\,zS_k(t)\Big]\eta_k(dz)\,dt + \sum_{j=1}^{S}\gamma_{ij}\ln S_k(t)\,dt\\
&= \Big[\mu_k(t,\alpha(t)) - \frac{1}{2}\sigma_k^2(t,\alpha(t))\Big]dt + \sigma_k(t,\alpha(t))\,dW_k(t)\\
&\quad + \int_{\mathbb{R}-\{0\}}\ln(1+z)\,\tilde N_k(dt,dz) + \int_{\mathbb{R}-\{0\}}\big[\ln(1+z)-z\big]\eta_k(dz)\,dt,
\end{aligned}
$$
where the Markov-switching term vanishes because each row of the transition rate matrix satisfies $\sum_{j=1}^{S}\gamma_{ij} = 0$.
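For constant coefficients, integrating this expression gives a simulation scheme for Sk(t): accumulate the log-increments and exponentiate. The sketch below assumes illustrative values for μk, σk, the jump intensity, and a uniform jump-size law on (−0.1, 0.1); none of these come from the model specification.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 1.0
n = int(T / dt)
mu, sigma, lam = 0.08, 0.2, 2.0      # assumed constant coefficients and jump intensity
S0 = 1.0

Ez = 0.0                             # mean of z under the assumed uniform(-0.1, 0.1) law
log_S = np.log(S0)
path = [S0]
for _ in range(n):
    # drift, diffusion, and the compensator -lam*E[z]*dt of the jump measure
    d_log = (mu - 0.5 * sigma**2 - lam * Ez) * dt + sigma * rng.normal(0.0, np.sqrt(dt))
    # each Poisson event changes ln S by ln(1 + z)
    for _ in range(rng.poisson(lam * dt)):
        z = rng.uniform(-0.1, 0.1)
        d_log += np.log(1.0 + z)
    log_S += d_log
    path.append(np.exp(log_S))

path = np.array(path)
assert (path > 0).all()              # the exponential form keeps prices positive
```

Because the price is obtained by exponentiating the log-dynamics, positivity of Sk(t) holds by construction, mirroring the role of the ln(1 + z) terms above.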
Proposition 7.3 Consider the price model (7.1) of a financial market. If a portfolio
(ϕ, ψ) is a self-financing strategy, then the wealth process {V (t)}t≥0 defined by (7.3)
satisfies
$$
\begin{aligned}
dV(t) &= \Big[r(t,\alpha(t))V(t) + \sum_{k=1}^{n}\psi_k(t)S_k(t)\Big(\mu_k(t,\alpha(t)) - r(t,\alpha(t)) - \int_{\mathbb{R}-\{0\}} z\,\eta_k(dz)\Big)\Big]dt\\
&\quad + \sum_{k=1}^{n}\psi_k(t)S_k(t)\sigma_k(t,\alpha(t))\,dW_k(t) + \sum_{k=1}^{n}\psi_k(t)S_k(t)\int_{\mathbb{R}-\{0\}} z\,N_k(dt,dz). \tag{7.6}
\end{aligned}
$$
Conversely, consider the model (7.1) of a financial market. If a pair (ϕ, ψ) of predictable processes and the associated wealth process {V(t)}t≥0 defined by formula (7.3) satisfy (7.6), then (ϕ, ψ) is a self-financing strategy.
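The drift rearrangement behind (7.6) is the algebraic identity ϕBr + Σk ψkSkμk = rV + Σk ψkSk(μk − r), obtained by eliminating ϕB via (7.3). A minimal numerical check of this identity, with all values assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
r = 0.03
psi = rng.uniform(0.5, 2.0, n)       # assumed risky-asset holdings
S = rng.uniform(10.0, 50.0, n)       # assumed prices
mu = rng.uniform(0.01, 0.10, n)      # assumed rates of return
phi = 1.5                            # assumed bond holding
B = 100.0
V = phi * B + psi @ S                # wealth, as in (7.3)

# drift written as in (7.4): phi*B*r + sum_k psi_k S_k mu_k
drift_a = phi * B * r + (psi * S) @ mu
# drift rearranged as in (7.6) (jump compensator term omitted from both sides)
drift_b = r * V + (psi * S) @ (mu - r)
assert abs(drift_a - drift_b) < 1e-9
```

The identity holds for any holdings, which is why (7.6) characterizes self-financing strategies without reference to the particular portfolio.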
Proof By (7.4) and (7.1),
$$
\begin{aligned}
dV(t) &= \varphi(t)\,dB(t) + \sum_{k=1}^{n}\psi_k(t)\,dS_k(t)\\
&= \varphi(t)B(t)r(t,\alpha(t))\,dt + \sum_{k=1}^{n}\psi_k(t)S_k(t)\Big[\mu_k(t,\alpha(t))\,dt + \sigma_k(t,\alpha(t))\,dW_k(t) + \int_{\mathbb{R}-\{0\}} z\,\tilde N_k(dt,dz)\Big]\\
&= \Big[\Big(V(t)-\sum_{k=1}^{n}\psi_k(t)S_k(t)\Big)r(t,\alpha(t)) + \sum_{k=1}^{n}\psi_k(t)S_k(t)\mu_k(t,\alpha(t))\Big]dt\\
&\quad + \sum_{k=1}^{n}\psi_k(t)S_k(t)\sigma_k(t,\alpha(t))\,dW_k(t) + \sum_{k=1}^{n}\psi_k(t)S_k(t)\int_{\mathbb{R}-\{0\}} z\,\tilde N_k(dt,dz)\\
&= \Big[r(t,\alpha(t))V(t) + \sum_{k=1}^{n}\psi_k(t)S_k(t)\Big(\mu_k(t,\alpha(t)) - r(t,\alpha(t)) - \int_{\mathbb{R}-\{0\}} z\,\eta_k(dz)\Big)\Big]dt\\
&\quad + \sum_{k=1}^{n}\psi_k(t)S_k(t)\sigma_k(t,\alpha(t))\,dW_k(t) + \sum_{k=1}^{n}\psi_k(t)S_k(t)\int_{\mathbb{R}-\{0\}} z\,N_k(dt,dz),
\end{aligned}
$$
$$V(t) = f(t,S(t)), \quad t\in[0,T], \qquad S(t)=(S_1(t),S_2(t),\ldots,S_n(t)), \tag{7.7}$$
$$\varphi(t) = \frac{f - \frac{\partial f}{\partial S} S^{T}}{B(t)}, \quad t \ge 0, \tag{7.8}$$
$$\psi(t) = \Big(\frac{\partial f}{\partial S_1}, \frac{\partial f}{\partial S_2}, \ldots, \frac{\partial f}{\partial S_n}\Big) = \frac{\partial f}{\partial S}, \quad t \ge 0, \tag{7.9}$$
and the function f (t, S) solves the following backward PDE of parabolic type:
$$\frac{\partial f}{\partial t} + r\sum_{k=1}^{n} S_k\frac{\partial f}{\partial S_k} + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} S_i\sigma_i S_j\sigma_j\frac{\partial^2 f}{\partial S_i\partial S_j} = r f, \quad t<T,\ S>0. \tag{7.10}$$
Moreover, if V (T ) = g(S(T )), then the function f (t, S) satisfies the following
equation:
f (T, S) = g(S), S > 0. (7.11)
For the converse part, we assume that T > 0. If there exists a function f(t, S) of C1,2 class such that (7.10) and (7.11) are satisfied, then the process (ϕ, ψ) defined by (7.9) and (7.8) is a self-financing strategy, and the wealth process V = {V(t)}t∈[0,T] corresponding to (ϕ, ψ) satisfies (7.7).
Proof We prove the direct part of Theorem 7.4 first.
For V(t) = f(t, S(t)), applying the Itô formula and collecting the drift, diffusion, and jump terms (the Markov-switching term $\sum_{j=1}^{S}\gamma_{ij} f(t,S(t))$ again vanishes since the rows of Γ sum to zero), we obtain
$$
\begin{aligned}
dV(t) &= \Big[\frac{\partial f}{\partial t} + \sum_{k=1}^{n}\frac{\partial f}{\partial S_k}S_k\mu_k + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^2 f}{\partial S_i\partial S_j}S_i\sigma_i S_j\sigma_j - \sum_{k=1}^{n}\int_{\mathbb{R}-\{0\}} z S_k\frac{\partial f}{\partial S_k}\,\eta_k(dz)\Big]dt\\
&\quad + \sum_{k=1}^{n}\frac{\partial f}{\partial S_k}S_k\sigma_k\,dW_k + \sum_{k=1}^{n}\int_{\mathbb{R}-\{0\}}\big[f(t,S+zS)-f(t,S)\big]N_k(dt,dz). \tag{7.12}
\end{aligned}
$$
On the other hand, since our strategy is self-financing, formula (7.6) is satisfied. Thus the rates of return and the volatilities in (7.12) and (7.6) must coincide, and hence
$$
\begin{cases}
\displaystyle\sum_{k=1}^{n}\psi_k(t)S_k(t)\sigma_k = \sum_{k=1}^{n}\frac{\partial f}{\partial S_k}(t,S)\,S_k\sigma_k,\\[10pt]
\displaystyle r(t,\alpha(t))\,f(t,S) + \sum_{k=1}^{n}\psi_k S_k(\mu_k-r) = \frac{\partial f}{\partial t} + \sum_{k=1}^{n}\frac{\partial f}{\partial S_k}S_k\mu_k + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^2 f}{\partial S_i\partial S_j}S_i\sigma_i S_j\sigma_j.
\end{cases} \tag{7.13}
$$
We can easily get Sk ≥ 0 from (7.5), which together with the first equation of
(7.13) and the independence of Sk (k = 1, 2, . . . , n) yields (7.9).
From the first equation of (7.13), (7.3), and (7.7), we have
$$\varphi B = f - \sum_{k=1}^{n}\frac{\partial f}{\partial S_k}S_k, \tag{7.14}$$
so that
$$\varphi = \frac{f-\sum_{k=1}^{n}\frac{\partial f}{\partial S_k}S_k}{B} = \frac{f-f_S S^{T}}{B}. \tag{7.15}$$
Substituting (7.9) into the second equation of (7.13), we have
$$r f - \sum_{k=1}^{n}\psi_k S_k r = \frac{\partial f}{\partial t} + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^2 f}{\partial S_i\partial S_j}S_i\sigma_i S_j\sigma_j,$$
which is (7.10).
Conversely, assume that f = f (t, S) is a C1,2 -class function which is a solution
of the PDE (7.10), and that (ϕ, ψ) is a process defined by (7.9) and (7.8).
First, we will show that a process V = V (t), t ∈ [0, T ] defined by (7.3) satisfies
the equation
V (t) = f (t, S(t)), t ∈ [0, T ]. (7.16)
In fact, substituting formulas (7.9) and (7.8) into the right-hand side of (7.3), we have
$$V(t) = \varphi B + \sum_{k=1}^{n}\psi_k S_k = \frac{f-\sum_{k=1}^{n}\frac{\partial f}{\partial S_k}S_k}{B}\,B + \sum_{k=1}^{n}\frac{\partial f}{\partial S_k}S_k = f, \quad t \ge 0.$$
Next, we will show that (ϕ, ψ) is a self-financing strategy, i.e., that (7.6) holds. By applying the Itô formula to the process V and the function f, we find that Eq. (7.12) is satisfied.
Furthermore, by (7.10),
$$\frac{\partial f}{\partial t} + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^2 f}{\partial S_i\partial S_j}S_i\sigma_i S_j\sigma_j = r f - r\sum_{k=1}^{n} S_k\frac{\partial f}{\partial S_k},$$
so that
$$\frac{\partial f}{\partial t} + \sum_{k=1}^{n}\frac{\partial f}{\partial S_k}S_k\mu_k + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^2 f}{\partial S_i\partial S_j}S_i\sigma_i S_j\sigma_j = r f + \sum_{k=1}^{n}(\mu_k-r)S_k\frac{\partial f}{\partial S_k}.$$
Hence
$$r V + \sum_{k=1}^{n}\psi_k S_k(\mu_k-r) = \frac{\partial f}{\partial t} + \sum_{k=1}^{n}\frac{\partial f}{\partial S_k}S_k\mu_k + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^2 f}{\partial S_i\partial S_j}S_i\sigma_i S_j\sigma_j \tag{7.17}$$
and
$$\sum_{k=1}^{n}\psi_k S_k\sigma_k = \sum_{k=1}^{n}\frac{\partial f}{\partial S_k}S_k\sigma_k.$$
Those together with (7.9) yield that (7.12) implies (7.6). The proof of Theorem 7.4
is completed.
Remark 7.5 In order to determine the portfolio strategy (ϕ, ψ) and obtain the final value V(t) from Theorem 7.4, we should find the solution of the PDE (7.10) with the final data (7.11). This is the key problem in the rest of this section, and we address it by the method of variable transformation.
Theorem 7.6 Let r(t, α(t)) in (7.1) be a constant r. The function f(t, S), t ≤ T, S > 0, given by the formula
$$f(t,S) = \frac{e^{-r(T-t)}}{\sqrt{2\pi}}\sum_{i=1}^{n}\int_{-\infty}^{\infty} e^{-\frac{x_i^2}{2}}\; g\Big(0,\ldots,0,\; S_i e^{\sigma_i\sqrt{T-t}\,x_i-(r-\frac{\sigma_i^2}{2})(t-T)},\; 0,\ldots,0\Big)\,dx_i \tag{7.18}$$
is a solution of the general Black-Scholes equation (7.10) with the final data (7.11).
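For n = 1 and the call payoff g(s) = (s − K)+, formula (7.18) can be evaluated by direct quadrature and compared with the classical one-asset Black-Scholes value. A sketch, with parameter values assumed purely for illustration:

```python
import numpy as np

def f_quadrature(S, K, r, sigma, tau, m=200001, lim=10.0):
    """Evaluate formula (7.18) for n = 1 with payoff g(s) = max(s - K, 0).

    tau = T - t; the exponent sigma*sqrt(tau)*x - (r - sigma^2/2)(t - T)
    equals sigma*sqrt(tau)*x + (r - sigma^2/2)*tau.
    """
    x = np.linspace(-lim, lim, m)
    payoff = np.maximum(S * np.exp(sigma * np.sqrt(tau) * x
                                   + (r - 0.5 * sigma**2) * tau) - K, 0.0)
    integrand = np.exp(-0.5 * x**2) * payoff
    dx = x[1] - x[0]
    integral = dx * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return np.exp(-r * tau) / np.sqrt(2.0 * np.pi) * integral

price = f_quadrature(S=100.0, K=100.0, r=0.05, sigma=0.2, tau=1.0)
# the classical one-asset Black-Scholes call value for these parameters is about 10.4506
assert abs(price - 10.4506) < 1e-3
```

The agreement with the closed-form value anticipates Corollary 7.7 below.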
Proof Introduce the change of variables
$$y_i = \ln S_i + \Big(r-\frac{\sigma_i^2}{2}\Big)(T-t), \quad i=1,\ldots,n, \qquad f(t,S) = e^{r(t-T)}\,q(t,y). \tag{7.19}$$
Since r is constant, direct computation gives
$$\frac{\partial f}{\partial t} = r e^{r(t-T)} q + e^{r(t-T)}\Big[\frac{\partial q}{\partial t} - \sum_{i=1}^{n}\Big(r-\frac{\sigma_i^2}{2}\Big)\frac{\partial q}{\partial y_i}\Big],$$
$$\frac{\partial f}{\partial S_i} = e^{r(t-T)}\,\frac{\partial q}{\partial y_i}\,\frac{1}{S_i},$$
$$\frac{\partial^2 f}{\partial S_i\partial S_j} = \begin{cases} e^{r(t-T)}\,\dfrac{\partial^2 q}{\partial y_i\partial y_j}\,\dfrac{1}{S_i S_j}, & i\neq j,\\[8pt] e^{r(t-T)}\Big[\dfrac{\partial^2 q}{\partial y_i\partial y_i} - \dfrac{\partial q}{\partial y_i}\Big]\dfrac{1}{S_i^2}, & i=j. \end{cases}$$
Substituting these expressions into (7.10), the terms involving r and σi cancel, and (7.10) reduces to
$$\frac{\partial q}{\partial t} + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\sigma_i\sigma_j\frac{\partial^2 q}{\partial y_i\partial y_j} = 0. \tag{7.20}$$
Let
$$\tau = T-t > 0, \qquad t = T-\tau, \qquad \tau\ge 0,\ t\le T,$$
$$q(t,y) = u(T-t,y), \quad \text{or} \quad u(\tau,y) = q(T-\tau,y). \tag{7.22}$$
Then
$$q_t(t,y) = -u_\tau(T-t,y), \qquad \frac{\partial q}{\partial y_i} = \frac{\partial u}{\partial y_i}(T-t,y), \qquad \frac{\partial^2 q}{\partial y_i\partial y_j} = \frac{\partial^2 u}{\partial y_i\partial y_j}(T-t,y).$$
$$u(\tau,y_1,y_2,\ldots,y_n) = \sum_{i=1}^{n}\frac{1}{\sigma_i\sqrt{2\pi\tau}}\int_{-\infty}^{\infty} e^{-\frac{(y_i-x_i)^2}{2\sigma_i^2\tau}}\; g(0,\ldots,0,e^{x_i},0,\ldots,0)\,dx_i. \tag{7.25}$$
In fact,
$$u_\tau(\tau,y) = -\sum_{i=1}^{n}\frac{1}{2\tau}\,\frac{1}{\sigma_i\sqrt{2\pi\tau}}\int_{-\infty}^{\infty} e^{-\frac{(y_i-x_i)^2}{2\sigma_i^2\tau}}\, g(0,\ldots,0,e^{x_i},0,\ldots,0)\,dx_i + \sum_{i=1}^{n}\frac{1}{\sigma_i\sqrt{2\pi\tau}}\int_{-\infty}^{\infty} e^{-\frac{(y_i-x_i)^2}{2\sigma_i^2\tau}}\, g(0,\ldots,0,e^{x_i},0,\ldots,0)\,\frac{(y_i-x_i)^2}{2\sigma_i^2\tau^2}\,dx_i,$$
$$\frac{\partial u}{\partial y_i} = \frac{1}{\sigma_i\sqrt{2\pi\tau}}\int_{-\infty}^{\infty} e^{-\frac{(y_i-x_i)^2}{2\sigma_i^2\tau}}\, g(0,\ldots,0,e^{x_i},0,\ldots,0)\Big(-\frac{y_i-x_i}{\sigma_i^2\tau}\Big)dx_i,$$
$$\frac{\partial^2 u}{\partial y_i\partial y_j} = \begin{cases} 0, & i\neq j,\\[8pt] \dfrac{1}{\sigma_i\sqrt{2\pi\tau}}\displaystyle\int_{-\infty}^{\infty} e^{-\frac{(y_i-x_i)^2}{2\sigma_i^2\tau}}\, g(0,\ldots,0,e^{x_i},0,\ldots,0)\Big[\frac{(y_i-x_i)^2}{\sigma_i^4\tau^2} - \frac{1}{\sigma_i^2\tau}\Big]dx_i, & i=j. \end{cases}$$
So
$$u_\tau(\tau,y) - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\sigma_i\sigma_j\frac{\partial^2 u}{\partial y_i\partial y_j} = 0.$$
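A quick numerical sanity check of the kernel representation: for n = 1 and the smooth payoff g(s) = s, the integral in (7.25) is the Gaussian mean of e^X with X ~ N(y, σ²τ), which equals e^{y+σ²τ/2} in closed form, and quadrature should reproduce it. The parameter values below are assumed for illustration only.

```python
import numpy as np

sigma, tau, y = 0.3, 0.7, 0.2          # assumed test values

# quadrature of (7.25) for n = 1 with g(s) = s, i.e. g(e^x) = e^x
x = np.linspace(y - 10.0, y + 10.0, 400001)
dx = x[1] - x[0]
kernel = np.exp(-(y - x) ** 2 / (2.0 * sigma**2 * tau)) / (sigma * np.sqrt(2.0 * np.pi * tau))
integrand = kernel * np.exp(x)
u_quad = dx * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

# closed form: E[e^X] for X ~ N(y, sigma^2 * tau)
u_closed = np.exp(y + 0.5 * sigma**2 * tau)
assert abs(u_quad - u_closed) < 1e-6
```

The same comparison can be repeated for other smooth payoffs for which the Gaussian integral is known.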
Substituting zi = xi − yi gives
$$u(\tau,y) = \sum_{i=1}^{n}\frac{1}{\sigma_i\sqrt{2\pi\tau}}\int_{-\infty}^{\infty} e^{-\frac{z_i^2}{2\sigma_i^2\tau}}\; g(0,\ldots,0,e^{z_i+y_i},0,\ldots,0)\,dz_i.
$$
In order to remove the factor σi²τ from the exponent in the above formula, we make another change of variables:
$$z_i = \sigma_i\sqrt{\tau}\,x_i, \qquad dz_i = \sigma_i\sqrt{\tau}\,dx_i. \tag{7.26}$$
Recalling the relationship between q and u described in (7.22), we therefore have
$$q(t,y) = \frac{1}{\sqrt{2\pi}}\sum_{i=1}^{n}\int_{-\infty}^{\infty} e^{-\frac{x_i^2}{2}}\; g(0,\ldots,0,e^{\sigma_i\sqrt{T-t}\,x_i+y_i},0,\ldots,0)\,dx_i.$$
Consider the European call option, whose payoff function is
$$g(S) = \sum_{i=1}^{n}(S_i-K_i)^{+}, \tag{7.27}$$
where Si > 0 and Ki > 0 is the strike price of Si. Then we have the following corollary of Theorem 7.6.
Corollary 7.7 For the European call option, the solution to the general Black-Scholes value problem (7.10) with the final data (7.27) is given by the formula
$$f(t,S) = \sum_{i=1}^{n} S_i\,\Phi\big(-A_i+\sigma_i\sqrt{T-t}\big) - e^{-r(T-t)}\sum_{i=1}^{n} K_i\,\Phi(-A_i), \tag{7.28}$$
where
$$-A_i = \frac{\big(r-\frac{\sigma_i^2}{2}\big)(T-t)+\ln\frac{S_i}{K_i}}{\sigma_i\sqrt{T-t}} =: d_2, \qquad -A_i+\sigma_i\sqrt{T-t} = \frac{\big(r+\frac{\sigma_i^2}{2}\big)(T-t)+\ln\frac{S_i}{K_i}}{\sigma_i\sqrt{T-t}} =: d_1,$$
i.e.,
$$f(t,S) = \sum_{i=1}^{n} S_i\,\Phi(d_1) - e^{-r(T-t)}\sum_{i=1}^{n} K_i\,\Phi(d_2).$$
In particular,
$$f(0,S) = \sum_{i=1}^{n} S_i\,\Phi(d_1) - e^{-rT}\sum_{i=1}^{n} K_i\,\Phi(d_2). \tag{7.29}$$
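Formula (7.29) is a sum of independent one-asset Black-Scholes call prices, so it is straightforward to implement. A sketch using the error function for Φ; the parameter values in the check are assumed for illustration:

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def basket_of_calls(S, K, sigma, r, tau):
    """Formula (7.29): a sum of one-asset Black-Scholes call prices (tau = T - t)."""
    total = 0.0
    for s, k, v in zip(S, K, sigma):
        d1 = ((r + 0.5 * v * v) * tau + math.log(s / k)) / (v * math.sqrt(tau))
        d2 = d1 - v * math.sqrt(tau)
        total += s * Phi(d1) - math.exp(-r * tau) * k * Phi(d2)
    return total

# a single asset reduces to the classical Black-Scholes call value (about 10.4506)
p1 = basket_of_calls([100.0], [100.0], [0.2], r=0.05, tau=1.0)
assert abs(p1 - 10.4506) < 1e-3
```

For several assets the function simply accumulates the per-asset terms, mirroring the sums over i in (7.28) and (7.29).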
Proof The summand in (7.18) is nonzero only when
$$S_i e^{\sigma_i\sqrt{T-t}\,x_i-(r-\frac{\sigma_i^2}{2})(t-T)} > K_i, \tag{7.30}$$
i.e.,
$$x_i > \frac{\ln\frac{K_i}{S_i}-\big(r-\frac{\sigma_i^2}{2}\big)(T-t)}{\sigma_i\sqrt{T-t}} =: A_i.$$
Hence
$$
\begin{aligned}
f(t,S) &= \frac{e^{-r(T-t)}}{\sqrt{2\pi}}\sum_{i=1}^{n} S_i e^{(r-\frac{\sigma_i^2}{2})(T-t)}\int_{A_i}^{\infty} e^{-\frac{x_i^2}{2}+\sigma_i\sqrt{T-t}\,x_i}\,dx_i - \frac{e^{-r(T-t)}}{\sqrt{2\pi}}\sum_{i=1}^{n} K_i\int_{A_i}^{\infty} e^{-\frac{x_i^2}{2}}\,dx_i\\
&= \frac{1}{\sqrt{2\pi}}\sum_{i=1}^{n} S_i\int_{A_i}^{\infty} e^{-\frac{1}{2}(x_i-\sigma_i\sqrt{T-t})^2}\,dx_i - \frac{e^{-r(T-t)}}{\sqrt{2\pi}}\sum_{i=1}^{n} K_i\int_{A_i}^{\infty} e^{-\frac{x_i^2}{2}}\,dx_i\\
&= \sum_{i=1}^{n} S_i\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{-A_i+\sigma_i\sqrt{T-t}} e^{-\frac{x_i^2}{2}}\,dx_i - e^{-r(T-t)}\sum_{i=1}^{n} K_i\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{-A_i} e^{-\frac{x_i^2}{2}}\,dx_i\\
&= \sum_{i=1}^{n} S_i\,\Phi\big(-A_i+\sigma_i\sqrt{T-t}\big) - e^{-r(T-t)}\sum_{i=1}^{n} K_i\,\Phi(-A_i),
\end{aligned}
$$
which is (7.28).
7.1.5 Conclusion
In this section, we have considered a financial market model with regime switching driven by a geometric Lévy process. This financial market model is based on multiple risky assets S1, S2, . . . , Sn driven by a Lévy process. Itô's formula and equivalent transformation methods have been used to solve this complicated financial market model. An example of the portfolio strategy and the final value problem, applying our method to the European call option, has been given at the end of this section.
7.2.1 Introduction
“Best policies can be evaluated, in theory at least, given an economy. But macroeconomists have only model economies at their disposal and necessarily these economies are abstractions. A concern then is that the model economy used to evaluate policy will provide poor guidance in practice. This leads to the search for policy that performs well for a broad class of economies. This is what robust control theory is all about.” The Nobel Prize-winning economist Edward C. Prescott wrote these sentences in the endorsements of the book [11].
Robust control for the economy has received attention since the early 1960s. In [7], ambiguity preferences in a static environment are axiomatized as multiple priors, and decision-making with multiple priors can be represented as max-min expected utility. The static environment of [7] is extended to a dynamic context in [4], where the set of priors is updated over time and a central axiom of dynamic consistency leads to a recursive structure for utility. The links between robust control and ambiguity aversion are formally established in [12], which shows that the model set of robust control can be thought of as a particular specification of the set of priors presented in [7]; once the least favorable prior is chosen, behavior can be rationalized as Bayesian with that prior. According to [33], in the economics literature the most prominent and influential approach to robust control is due to Hansen
and Sargent (and their co-authors), as summarized in their monograph [11]. The Hansen-Sargent approach starts with a nominal model and uses entropy as a distance measure to calibrate the model uncertainty set. The principal tools used to solve Hansen-Sargent robust control problems are state-space methods [8, 11]. It should be noted that all the approaches mentioned above adopt a bounded “worst-case” strategy, or can be described as an H∞ problem.
Many of the ideas and inspiration for robust control in economics come from control theory [33]. Alongside the development of robust control for economics, robust control within control theory itself has developed very quickly. Uncertainties, stochastic disturbances, time-varying or invariant delays, and nonlinearities, which frequently appear in economic systems (see, e.g., [3, 6, 17, 18, 28] and the references therein), have been investigated extensively in control theory. Robust stability of uncertain stochastic neural networks with time delay is studied in [37, 44]. Robust absolute stability for a class of time-delay uncertain singular systems with sector-bounded nonlinearity is studied in [31]. Robust stability for a class of Lur'e singular systems with state time delays is studied in [22]. Robust H∞ output feedback control for uncertain stochastic systems with time-varying delays is studied in [39]. Robust H∞ control for uncertain singular time-delay systems is studied in [36]. Robust exponential stability of stochastic systems with time-varying delay, nonlinearity, and Markovian switching is studied in [42]. The linear matrix inequality (LMI) approach is adopted in the above works because it can readily be checked with the standard Matlab LMI toolbox, and free-weighting matrices are introduced in some of these works to reduce the conservatism of the results. Unfortunately, although the upper bounds on the delays in the above works are adequate for process control in engineering, they are not large enough for economic systems, because the time delays of economic systems may range from days to decades. For example, the period of the American pork price oscillation is 4 years [5, 23], and the average length of the Kondratiev waves is 50 years, ranging from approximately 40 to 60 years [19].
A robust H∞ control condition with a very large upper bound on the time delay and a small disturbance attenuation level for a class of uncertain stochastic time-varying delay systems has been presented by the authors in [21]; however, the essence of its conservatism was not discussed fully.
Furthermore, because the LMI approach appeared only recently, few works study the robust control problem for economic systems via the LMI approach. One of the authors investigates the stability condition for the economic discrete-time singular dynamic input-output model in [15], and a state feedback control condition for the same model is presented in [14]. The free-weighting matrix technique has not been introduced in those works.
In this section, we deal with the robust H∞ control problem with a large time delay and small disturbance attenuation for a generic linear rational expectations model of the economy with uncertainties, time-varying delay, and random disturbances. Norm-bounded uncertainties are adopted to describe the uncertainties of the economic model. The concept of two levels of conservatism of stability and control sufficient conditions is developed. This concept covers the previous concepts of
7.2 Robust H∞ Control for a Generic Linear … 343
To analyze the robust control problem for a macroeconomy with large time delay, following the ideas of [18, 28], we consider the following generic linear rational expectations model of the economy:
where x(t) ∈ Rn is the state vector, u(t) ∈ Rm is the vector of policy instruments (the control vector), and v(t) ∈ Rq is the vector of random shocks (stochastic disturbances), which belongs to l2[0, ∞); y(t) ∈ Rp is the controlled output, or target vector, for example, inflation, output, and possibly the policy maker's control variable. d(t) is the time-varying lag (delay) satisfying
ψ(t) is the initial condition. B, Bv, C, and D are known real constant matrices, and the matrices A(t), Ad(t) represent the structured model uncertainties. We assume that A(t) and Ad(t) are time-varying matrices of the form A(t) = A + ΔA(t), Ad(t) = Ad + ΔAd(t), where A, Ad are known real constant matrices, and ΔA(t), ΔAd(t) are unknown matrices representing time-varying parameter uncertainties that satisfy the following admissible condition:
$$\big[\Delta A(t)\ \ \Delta A_d(t)\big] = M F(t)\big[N_1\ \ N_2\big], \tag{7.32}$$
where M, N1 , and N2 are known real constant matrices, and F(t) is the unknown
time-varying matrix-valued function subject to F T (t)F(t) ≤ I, ∀t. Analytically, the
structured uncertainties are defined independently of the state vector x(t).
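The admissible condition (7.32) can be sampled numerically: pick any F(t) with FᵀF ≤ I and form ΔA = M F N1. The sketch below uses assumed structure matrices (not from any example in the text) and checks the induced norm bound ‖ΔA‖ ≤ ‖M‖·‖N1‖ that the construction guarantees.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
# assumed structure matrices, for illustration only
M = rng.normal(size=(n, 2))
N1 = rng.normal(size=(2, n))
N2 = rng.normal(size=(2, n))

# any F with F^T F <= I; here a random matrix scaled to unit spectral norm
F = rng.normal(size=(2, 2))
F /= np.linalg.norm(F, 2)

dA = M @ F @ N1
dAd = M @ F @ N2
# F^T F <= I holds: the eigenvalues of I - F^T F are nonnegative
assert np.all(np.linalg.eigvalsh(np.eye(2) - F.T @ F) >= -1e-9)
# the perturbation's spectral norm is bounded by ||M|| * ||N1||
assert np.linalg.norm(dA, 2) <= np.linalg.norm(M, 2) * np.linalg.norm(N1, 2) + 1e-9
```

This is exactly why norm-bounded uncertainty is tractable: the LMI conditions only need M, N1, and N2, never the unknown F(t).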
Remark 7.9 The constraint ḋ(t) ≤ μ < 1 often appears in other robust control literature, see, e.g., [30, 38, 39]. This constraint is removed in this section.
Remark 7.10 According to [28], the monetary policy is the optimal response of policy makers facing uncertainty in model parameters. There are two approaches to model uncertainty: unstructured model uncertainty and structured model uncertainty. For example, [17] derives policies under the assumption of unstructured uncertainty, and [6] solves this problem with structured uncertainty. Equation (7.32) is a norm-bounded uncertainty model, which is one of the structured uncertainty models. This model of uncertainty has been adopted for economic systems, see, e.g., [27] and the references therein.
According to the assumption in [28], the authority uses only one instrument and commits to the stationary rule, that is, the state feedback law
$$u(t) = K x(t), \tag{7.33}$$
where K is the constant feedback gain to be designed.
In this section, we shall focus on the robust stabilization problem, whose purpose is to design a vector of policy instruments of the type (7.33) for the economy system (Σ) such that the closed-loop economy system (Σc) satisfies the following two requirements simultaneously:
(R1) The closed-loop system (Σc ) is asymptotically stable.
(R2) Under the zero initial condition, the controlled output y(t) satisfies
\[
\|y(t)\|_2 \le \gamma \|v(t)\|_2 \qquad (7.34)
\]
for all nonzero v(t) ∈ l2 [0, ∞) and all admissible uncertainties ΔA(t) and ΔAd (t), where γ > 0 is a prescribed scalar.
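For intuition, the scalar γ in (7.34) bounds the l2-gain from v to y; for a linear time-invariant system this gain is the H∞ norm, i.e., the peak frequency-response gain. A minimal numerical check on the toy transfer function G(s) = 1/(s + 1), chosen here only for illustration (its H∞ norm is exactly 1, attained at ω = 0):

```python
import numpy as np

# G(s) = 1 / (s + 1): a stable first-order system.
# Its H-infinity norm is sup_w |G(jw)| = 1, attained at w = 0.
w = np.linspace(0.0, 100.0, 200001)   # frequency grid
gain = np.abs(1.0 / (1j * w + 1.0))   # |G(jw)| on the grid
hinf = gain.max()
assert abs(hinf - 1.0) < 1e-9
```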
The following theorem provides a sufficient condition for the closed-loop economy
system (Σc ) with v(t) = 0 to be robust asymptotically stable.
Theorem 7.11 Given scalars h > 0 and μ. The closed-loop economy system (Σc ) with v(t) = 0 is robust asymptotically stable if there exist a scalar ε > 0, matrices X > 0, Q > 0, R > 0, Z 1 > 0, Z 2 > 0, L = col{L 1 , L 2 , L 3 }, S̄ = col{S1 , S2 , S3 }, J = col{J1 , J2 , J3 }, and Y such that the following PWCLMIs hold:
7.2 Robust H∞ Control for a Generic Linear … 345
\[
\begin{bmatrix}
\Omega & A_d X + L_2^T & L_3^T & X N_1^T \\
* & -Q & 0 & X N_2^T \\
* & * & -R & 0 \\
* & * & * & -\varepsilon I
\end{bmatrix} < 0, \qquad (7.35)
\]
\[
\begin{bmatrix}
\Phi & hL & h\bar{S} & hJ \\
* & -hZ_1 & 0 & 0 \\
* & * & -hZ_1 & 0 \\
* & * & * & -hZ_2
\end{bmatrix} < 0, \qquad (7.36)
\]
where \(\Omega = AX + XA^T + Q + R + BY + Y^T B^T + L_1 + L_1^T + \varepsilon M M^T\) and \(\Phi = \Phi_1 + \Phi_1^T + \mathrm{diag}\{0, \mu Q, 0\}\) with \(\Phi_1 = [\,J \;\; \bar{S}-L \;\; -\bar{S}-J\,]\).
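Conditions of this type are checked numerically with semidefinite programming solvers. As a much-reduced sketch of the underlying Lyapunov machinery (not the full PWCLMIs (7.35)–(7.36)): for a delay-free nominal system ẋ = Ax, the inequality A^T P + PA < 0 with P > 0 can be certified by solving a Lyapunov equation for a fixed Q > 0. The matrix A below is a hypothetical stable example:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable nominal matrix (illustrative, not this section's example).
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
Q = np.eye(2)

# Solve A^T P + P A = -Q; a positive definite solution P certifies
# asymptotic stability of dx/dt = A x.
P = solve_continuous_lyapunov(A.T, -Q)
assert np.allclose(A.T @ P + P @ A, -Q)
assert np.min(np.linalg.eigvalsh(P)) > 0
```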
Proof For the stability analysis of the closed-loop economy system (Σc ), we define
the following Lyapunov-Krasovskii functional:
\[
V(t, x(t)) = x^T(t)\hat{P}x(t) + \int_{t-d(t)}^{t} x^T(s)\hat{Q}x(s)\,ds + \int_{t-h}^{t} x^T(s)\hat{R}x(s)\,ds + \int_{-h}^{0}\!\int_{t+\theta}^{t} \dot{x}^T(s)\big(\hat{Z}_1 + \hat{Z}_2\big)\dot{x}(s)\,ds\,d\theta. \qquad (7.37)
\]
Taking the time derivative along the trajectories of (Σc ) with v(t) = 0 yields
\[
\begin{aligned}
\dot{V}(t, x(t)) &\le 2x^T(t)\hat{P}\dot{x}(t) + x^T(t)\hat{Q}x(t) - (1-\mu)x^T(t-d(t))\hat{Q}x(t-d(t)) \\
&\quad + x^T(t)\hat{R}x(t) - x^T(t-h)\hat{R}x(t-h) - \int_{t-h}^{t} \dot{x}^T(s)\big(\hat{Z}_1 + \hat{Z}_2\big)\dot{x}(s)\,ds \\
&= \xi^T(t)\big(\Psi(t) + \mathrm{diag}\{0, \mu\hat{Q}, 0\}\big)\xi(t) - \int_{t-h}^{t} \dot{x}^T(s)\big(\hat{Z}_1 + \hat{Z}_2\big)\dot{x}(s)\,ds, \qquad (7.38)
\end{aligned}
\]
where
\[
\xi(t) = \begin{bmatrix} x^T(t) & x^T(t-d(t)) & x^T(t-h) \end{bmatrix}^T,
\]
\[
\Psi(t) = \begin{bmatrix}
\hat{P}A(t) + A^T(t)\hat{P} + \hat{Q} + \hat{R} + \hat{P}BK + K^T B^T \hat{P} & \hat{P}A_d(t) & 0 \\
* & -\hat{Q} & 0 \\
* & * & -\hat{R}
\end{bmatrix}.
\]
346 7 Some Applications to Economy Based on Related Research Method
From the Leibniz–Newton formula, the following equations hold for any matrices L̂, Ŝ, and Ĵ with appropriate dimensions:
\[
2\xi^T(t)\hat{L}\Big[x(t) - x(t-d(t)) - \int_{t-d(t)}^{t} \dot{x}(s)\,ds\Big] = 0, \qquad (7.39)
\]
\[
2\xi^T(t)\hat{S}\Big[x(t-d(t)) - x(t-h) - \int_{t-h}^{t-d(t)} \dot{x}(s)\,ds\Big] = 0, \qquad (7.40)
\]
\[
2\xi^T(t)\hat{J}\Big[x(t) - x(t-h) - \int_{t-h}^{t} \dot{x}(s)\,ds\Big] = 0. \qquad (7.41)
\]
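The Leibniz–Newton formula used in (7.39)–(7.41) is just the fundamental theorem of calculus, x(t) − x(t − d) = ∫_{t−d}^{t} ẋ(s) ds. A quick numerical sanity check on the sample trajectory x(s) = sin s (t and d are arbitrary sample values):

```python
import numpy as np
from scipy.integrate import quad

# x(s) = sin(s), so xdot(s) = cos(s); t and d are sample values.
t, d = 2.0, 0.7
integral, _ = quad(np.cos, t - d, t)
# Leibniz-Newton: x(t) - x(t - d) equals the integral of xdot over [t-d, t].
assert abs((np.sin(t) - np.sin(t - d)) - integral) < 1e-10
```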
Adding the left-hand sides of (7.39)–(7.41) to (7.38) and splitting the last integral yields
\[
\begin{aligned}
\dot{V}(t, x(t)) &\le \xi^T(t)\big(\Psi(t) + \mathrm{diag}\{0, \mu\hat{Q}, 0\}\big)\xi(t) + 2\xi^T(t)\hat{L}\big(x(t) - x(t-d(t))\big) \\
&\quad + 2\xi^T(t)\hat{S}\big(x(t-d(t)) - x(t-h)\big) + 2\xi^T(t)\hat{J}\big(x(t) - x(t-h)\big) \\
&\quad - \int_{t-d(t)}^{t} \big(\dot{x}^T(s)\hat{Z}_1 + 2\xi^T(t)\hat{L}\big)\dot{x}(s)\,ds - \int_{t-h}^{t-d(t)} \big(\dot{x}^T(s)\hat{Z}_1 + 2\xi^T(t)\hat{S}\big)\dot{x}(s)\,ds \\
&\quad - \int_{t-h}^{t} \big(\dot{x}^T(s)\hat{Z}_2 + 2\xi^T(t)\hat{J}\big)\dot{x}(s)\,ds \\
&\le \xi^T(t)\big(\Psi(t) + \mathrm{diag}\{0, \mu\hat{Q}, 0\}\big)\xi(t) + \xi^T(t)\Big([\hat{L}\;\;0\;\;0] + [\hat{L}\;\;0\;\;0]^T\Big)\xi(t) \\
&\quad + \xi^T(t)\Big([\hat{J}\;\;\hat{S}-\hat{L}\;\;-\hat{S}-\hat{J}] + [\hat{J}\;\;\hat{S}-\hat{L}\;\;-\hat{S}-\hat{J}]^T\Big)\xi(t) \\
&\quad + h\,\xi^T(t)\Big(\hat{L}\hat{Z}_1^{-1}\hat{L}^T + \hat{S}\hat{Z}_1^{-1}\hat{S}^T + \hat{J}\hat{Z}_2^{-1}\hat{J}^T\Big)\xi(t) \\
&= \xi^T(t)\big(\Psi_1(t) + \Psi_2\big)\xi(t),
\end{aligned}
\]
where
\[
\Psi_1(t) = \Psi(t) + [\hat{L}\;\;0\;\;0] + [\hat{L}\;\;0\;\;0]^T,
\]
\[
\Psi_2 = [\hat{J}\;\;\hat{S}-\hat{L}\;\;-\hat{S}-\hat{J}] + [\hat{J}\;\;\hat{S}-\hat{L}\;\;-\hat{S}-\hat{J}]^T + h\hat{L}\hat{Z}_1^{-1}\hat{L}^T + h\hat{S}\hat{Z}_1^{-1}\hat{S}^T + h\hat{J}\hat{Z}_2^{-1}\hat{J}^T + \mathrm{diag}\{0, \mu\hat{Q}, 0\}.
\]
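The h L̂ Ẑ1⁻¹ L̂ᵀ-type terms in Ψ2 come from completing the square: for any Z > 0 and vectors a, b, one has −2aᵀb ≤ aᵀZ⁻¹a + bᵀZb. A numerical spot-check of this elementary bound with a random positive definite Z:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
G = rng.standard_normal((n, n))
Z = G @ G.T + n * np.eye(n)   # random positive definite weight
Zinv = np.linalg.inv(Z)

for _ in range(1000):
    a = rng.standard_normal(n)
    b = rng.standard_normal(n)
    # Completion of squares: (Z^{1/2} b + Z^{-1/2} a) has nonnegative
    # squared norm, which rearranges to -2 a^T b <= a^T Z^{-1} a + b^T Z b.
    assert -2 * a @ b <= a @ Zinv @ a + b @ Z @ b + 1e-9
```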
Decompose Ψ1 (t) = Ψ1 + ΔΨ1 (t), where
\[
\Psi_1 = \begin{bmatrix}
\Omega_{11} & \hat{P}A_d + \hat{L}_2^T & \hat{L}_3^T \\
* & -\hat{Q} & 0 \\
* & * & -\hat{R}
\end{bmatrix}, \qquad (7.44)
\]
\[
\Delta\Psi_1(t) = \begin{bmatrix}
\hat{P}\Delta A(t) + \Delta A^T(t)\hat{P} & \hat{P}\Delta A_d(t) & 0 \\
* & 0 & 0 \\
* & * & 0
\end{bmatrix}, \qquad (7.45)
\]
From (7.35), using Schur complements and the standard bound for the norm-bounded uncertainty (7.32), one obtains Ψ1 (t) < 0; similarly, (7.36) guarantees
\[
\Psi_2 < 0. \qquad (7.48)
\]
So, under the conditions of Theorem 7.11, one can ensure V̇ (t, x(t)) < 0. This completes the proof.
Theorem 7.12 Given scalars h > 0, μ, and γ. The closed-loop economy system (Σc ) is robust asymptotically stable and the H∞ -norm constraint (7.34) is achieved under the zero initial condition for all nonzero v(t), if there exist a scalar ε > 0, matrices X > 0, Q > 0, R > 0, Z 1 > 0, Z 2 > 0, L = col{L 1 , L 2 , L 3 }, S̄ = col{S1 , S2 , S3 }, J = col{J1 , J2 , J3 }, and Y such that the following PWCLMIs hold:
\[
\begin{bmatrix}
\Omega & A_d X + L_2^T & L_3^T & X N_1^T & X C^T \\
* & -Q & 0 & X N_2^T & Y^T D^T \\
* & * & -R & 0 & 0 \\
* & * & * & -\varepsilon I & 0 \\
* & * & * & * & -I
\end{bmatrix} < 0, \qquad (7.49)
\]
\[
\begin{bmatrix}
\Theta & hL & h\bar{S} & hJ \\
* & -hZ_1 & 0 & 0 \\
* & * & -hZ_1 & 0 \\
* & * & * & -hZ_2
\end{bmatrix} < 0, \qquad (7.50)
\]
where
\[
\Theta = \Theta_1 + \Theta_1^T + \Theta_2,
\]
\[
\Theta_1 = \begin{bmatrix} J & \bar{S}-L & -\bar{S}-J & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},
\]
\[
\Theta_2 = \begin{bmatrix}
0 & 0 & 0 & B_v \\
* & \mu Q & 0 & 0 \\
* & * & 0 & 0 \\
* & * & * & -\gamma^2 I
\end{bmatrix}.
\]
Proof It is easy to see that (7.49) and (7.50) imply (7.35) and (7.36), respectively. So, the closed-loop system (Σc ) is robust asymptotically stable.
Define the same Lyapunov–Krasovskii functional candidate V (t, x(t)) as in (7.37). Along the same lines as the proof of Theorem 7.11, one has
\[
\dot{V}(t, x(t)) \le \delta^T(t)\big(\tilde{\Lambda}_1(t) + \tilde{\Lambda}_2\big)\delta(t),
\]
where
\[
\delta(t) = \begin{bmatrix} x^T(t) & x^T(t-d(t)) & x^T(t-h) & v^T(t) \end{bmatrix}^T,
\]
\[
\tilde{\Lambda}_1(t) = \begin{bmatrix}
\Omega_{11} & \hat{P}A_d(t) & 0 & 0 \\
* & -\hat{Q} & 0 & 0 \\
* & * & -\hat{R} & 0 \\
* & * & * & 0
\end{bmatrix},
\]
\[
\begin{aligned}
\tilde{\Lambda}_2 &= \begin{bmatrix} \hat{J} & \hat{S}-\hat{L} & -\hat{S}-\hat{J} & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} + \begin{bmatrix} \hat{J} & \hat{S}-\hat{L} & -\hat{S}-\hat{J} & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}^T \\
&\quad + \begin{bmatrix} h\hat{L}\hat{Z}_1^{-1}\hat{L}^T + h\hat{S}\hat{Z}_1^{-1}\hat{S}^T + h\hat{J}\hat{Z}_2^{-1}\hat{J}^T & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix}
0 & 0 & 0 & \hat{P}B_v \\
* & \mu\hat{Q} & 0 & 0 \\
* & * & 0 & 0 \\
* & * & * & 0
\end{bmatrix}.
\end{aligned}
\]
To establish the H∞ performance, introduce
\[
J(t) = \int_0^t \big(y^T(s)y(s) - \gamma^2 v^T(s)v(s)\big)ds,
\]
where t > 0. Under the zero initial condition, x(t) = 0 for t ∈ [−h, 0], one has
\[
\begin{aligned}
J(t) &= \int_0^t \big(y^T(s)y(s) - \gamma^2 v^T(s)v(s) + \dot{V}(x(s))\big)ds - V(t, x(t)) \\
&\le \int_0^t \big(y^T(s)y(s) - \gamma^2 v^T(s)v(s) + \dot{V}(x(s))\big)ds \\
&\le \int_0^t \delta^T(s)\big(\Lambda_1(t) + \Lambda_2\big)\delta(s)\,ds, \qquad (7.53)
\end{aligned}
\]
where
\[
\Lambda_1(t) = \begin{bmatrix}
\Omega_{11} + \bar{\Omega}_{11} & \hat{P}A_d(t) & 0 & 0 \\
* & -\hat{Q} & 0 & 0 \\
* & * & -\hat{R} & 0 \\
* & * & * & 0
\end{bmatrix},
\]
\[
\Lambda_2 = \tilde{\Lambda}_2 + \begin{bmatrix}
0 & 0 & 0 & 0 \\
* & \mu\hat{Q} & 0 & 0 \\
* & * & 0 & 0 \\
* & * & * & -\gamma^2 I
\end{bmatrix},
\]
with Ω̄11 = C T C + K T D T DK + C T DK + K T D T C.
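The expression Ω̄11 is simply the expanded square (C + DK)^T (C + DK), which is consistent with the output term y^T y under the state feedback u = K x. A quick numerical verification of the identity on random matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
C = rng.standard_normal((2, 2))
D = rng.standard_normal((2, 2))
K = rng.standard_normal((2, 2))

# Omega-bar-11 as written in the text ...
Omega_bar = C.T @ C + K.T @ D.T @ D @ K + C.T @ D @ K + K.T @ D.T @ C
# ... equals the compact factored form (C + D K)^T (C + D K).
assert np.allclose(Omega_bar, (C + D @ K).T @ (C + D @ K))
```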
Along the same lines as in the proof of Theorem 7.11, according to (7.49) and (7.50), J (t) < 0 holds. This completes the proof of the theorem.
According to the two-person zero-sum game [26], the two players are u(t) and v(t), and the objective function is [27]
\[
J(t) = \int_0^t \big(x^T(s)\Xi_1 x(s) + u^T(s)\Xi_2 u(s) - \gamma^2 v^T(s)v(s)\big)ds, \qquad (7.54)
\]
where \(\Pi_1 = X\tilde{\Pi}_1^T\) and \(\Pi_2 = Y^T\tilde{\Pi}_2^T\).
By Theorem 7.12, we can obtain the state feedback controller parameter and the lower bound of the disturbance attenuation as follows:
\[
K = \begin{bmatrix} -26.1983 & -2.2999 \\ -7.8640 & -34.6549 \end{bmatrix}, \qquad \gamma = 0.8606.
\]
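To illustrate what such a gain does, the sketch below simulates a delayed closed loop ẋ(t) = (A + BK)x(t) + Ad x(t − d) with the K obtained above. The plant matrices A, Ad, B, the delay, and the constant pre-history are hypothetical placeholders (the example's actual plant data is not reproduced in this excerpt), chosen only so the simulation is well posed:

```python
import numpy as np

# Gain from the example above; the plant data below is HYPOTHETICAL.
K = np.array([[-26.1983, -2.2999], [-7.8640, -34.6549]])
A = np.array([[0.5, 0.1], [0.2, 0.3]])   # assumed open-loop dynamics
Ad = 0.1 * np.eye(2)                     # assumed delayed-state weight
B = np.eye(2)                            # assumed input matrix
d, dt, T = 0.2, 1e-3, 2.0                # constant delay, Euler step, horizon

steps, lag = int(T / dt), int(d / dt)
x = np.ones((steps + 1, 2))              # pre-history approximated by x(0) = [1, 1]
for k in range(steps):
    x_del = x[max(k - lag, 0)]
    # Euler step of dx/dt = (A + B K) x(t) + Ad x(t - d)
    x[k + 1] = x[k] + dt * ((A + B @ K) @ x[k] + Ad @ x_del)

# With this strongly stabilizing gain the state decays essentially to the origin.
assert np.linalg.norm(x[-1]) < 1e-3 * np.linalg.norm(x[0])
```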
Furthermore, we show the upper bounds of the time delay h and the lower bounds of the disturbance attenuation γ for various time-varying rates μ.
Table 7.1 lists the upper bounds on the time delay h for various μ with γ = 1 by Theorem 7.12 in this section.
Table 7.2 lists the lower bounds of the disturbance attenuation γ for various μ with h = 108 by Theorem 7.12 in this section.
Remark 7.16 To the best of our knowledge, there is so far no concrete model, with parameter values, of a macroeconomic system with time delay. So we cannot provide an example of a macroeconomic system with time delay which describes the real world, and we have to provide a numerical example, Example 7.15, to illustrate the merit of the present approach. How to obtain the value of the weight parameter of the delayed state, for instance Ad (t) in this section, is a challenge in modeling macroeconomic systems with time delay. This is an important direction for further research.
7.2.5 Discussions
parameters, the upper bound of the time delay, the minimum disturbance attenuation, and the maximum time-varying rate (if the delay is time-varying) of the system which ensure the ith condition holds. In other cases, we may not be able to compare the conservatism of conditions directly.
Now, we analyze the conservatism of results presented in this section.
In this section, we introduce free-weighting matrices L , S, J into V̇ (t, x(t)) = δ T (t) f (·)δ(t) by employing the Leibniz–Newton formula, i.e., g(L , S, J ) = 0. Then, V̇ (t, x(t)) = δ T (t)( f (·) + g(·))δ(t). The main idea of Parameters Weak Coupling Linear Matrix Inequalities (PWCLMIs) can be described as follows. Decompose f (·) and g(·) as
\[
f(\cdot) = f_1(\cdot) + f_2(\cdot)
\]
and
\[
g(\cdot) = g_1(\cdot) + g_2(\cdot) = 0.
\]
Obviously, conditions
\[
f_1(\cdot) + g_1(\cdot) < 0 \qquad (7.58)
\]
and
\[
f_2(\cdot) + g_2(\cdot) < 0 \qquad (7.59)
\]
imply the original condition
\[
f(\cdot) + g(\cdot) < 0. \qquad (7.60)
\]
Unfortunately, (7.58) and (7.59) together are only a sufficient condition for (7.60). So, the condition in this section ((7.58) and (7.59) hold) may lead to more conservatism than the original condition (7.60).
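The conservatism introduced by splitting can be seen on a small matrix example: the sum f1 + f2 may be negative definite while f1 alone is not, so requiring the split conditions separately rejects parameter sets that the original condition accepts. A sketch:

```python
import numpy as np

def is_neg_def(m):
    """Negative definite iff the largest eigenvalue of the symmetric m is < 0."""
    return np.max(np.linalg.eigvalsh(m)) < 0

f1 = np.array([[ 1.0, 0.0], [0.0, -5.0]])   # indefinite summand
f2 = np.array([[-3.0, 0.0], [0.0, -1.0]])   # negative definite summand
f = f1 + f2                                  # diag(-2, -6): negative definite

assert is_neg_def(f)        # the original (combined) condition holds ...
assert not is_neg_def(f1)   # ... but the split condition on f1 fails: conservatism
```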
In order to overcome this shortcoming, we denote f 1 (·), f 2 (·), g1 (·), g2 (·) in Theorem 7.11 as
And in Theorem 7.12, we denote f 1 (·), g1 (·), g2 (·) as above, and denote
then
If the set of values of system parameters S ensures that the original condition (7.60) holds, by (7.66) and (7.67), noting that ρ is a sufficiently small scalar, one has
That is, if (7.60) holds, then (7.58) and (7.59) hold. In other words, the PWCLMIs condition is a necessary condition for the original condition.
Based on the discussion above, one can see that the PWCLMIs condition is a necessary and sufficient condition for the original condition. So, the condition in this section is equivalent to the original condition on the first level of conservatism.
Remark 7.17 The equivalence of conditions with and without free-weighting matrices has been proved in the literature, see, e.g., [9, 40, 41]. From this section, the conservatism of the conditions studied in those works is all on the first level.
Remark 7.18 This characteristic is also shown in other works which introduce free-weighting matrices, see, e.g., [21, 36, 42] and the references therein. From this section, the conservatism of the conditions studied in those works is on the second level. So we can say that delay-dependent conditions with free-weighting matrices are always less conservative, on the second level, than those without free-weighting matrices.
From the above discussion, on the whole, the condition in this section is less conservative than the original one, and the value ranges of h, γ, and μ in this section are unrestricted. So, a very large time delay, a large time-varying rate, and a small disturbance attenuation can be achieved via the presented approach.
7.2.6 Conclusions
In this section, we have studied the problem of robust H∞ state feedback control for an economy described by a generic linear rational expectations model with uncertainties, time-varying delay, and stochastic disturbances. Norm-bounded uncertainties have been adopted to describe the uncertainties of the economic system. The state feedback controller has been designed, for all admissible uncertainties, such that the closed-loop system is asymptotically stable and achieves a prescribed H∞ performance level. The results have been presented in terms of PWCLMIs. The concept of two levels of conservatism has been proposed and used to analyze the conservatism of the presented results. Large time delay and small disturbance attenuation, which are of special importance to economic systems, have been obtained. Furthermore, by using a two-person zero-sum game, the corresponding result for the system has been obtained. A numerical example has been exploited to show the effectiveness and benefit of the obtained result.
References
1. D. Applebaum, Lévy Processes and Stochastic Calculus, 2nd edn. (Cambridge University Press,
Cambridge, 2008)
2. N. Bäuerle, A. Blatter, Optimal control and dependence modeling of insurance portfolios with
Lévy dynamics. Insur. Math. Econ. 48(3), 398–405 (2011)
3. A. Castelletti, F. Pianosi, R. Soncini-Sessa, Water reservoir control under economic, social and environmental constraints. Automatica 44(6), 1595–1607 (2008)
4. L.G. Epstein, M. Schneider, Recursive multiple-priors. J. Econ. Theory 113(1), 1–31 (2003)
5. C. Fan, Y. Zhang, The time delay and oscillation of economic system, in Proceedings of the
1986 International Conference of the System Dynamics Society (1986), pp. 525–535
6. M.P. Giannoni, Does model uncertainty justify caution? Robust optimal monetary policy in a forward-looking model. Macroecon. Dyn. 6(1), 111–144 (2002)
7. I. Gilboa, D. Schmeidler, Maxmin expected utility with non-unique prior. J. Math. Econ. 18(2),
141–153 (1989)
8. P. Giordani, P. Söderlind, Solution of macromodels with Hansen-Sargent robust policies: some
extensions. J. Econ. Dyn. Control 28(12), 2367–2397 (2004)
9. F. Gouaisbaut, D. Peaucelle, A note on stability of time delay systems, in 5th IFAC Symposium
on Robust Control Design (Rocond 06) (2006), 13 p
10. X. Guo, Q. Zhang, Optimal selling rules in a regime switching model. IEEE Trans. Autom.
Control 50(9), 1450–1455 (2005)
11. L.P. Hansen, T.J. Sargent, Robustness (Princeton University Press, Princeton, 2008)
12. L.P. Hansen, T.J. Sargent, G. Turmuhambetova, N. Williams, Robust control and model mis-
specification. J. Econ. Theory 128(1), 45–90 (2006)
13. H. Markowitz, Portfolio selection. J. Financ. 7(1), 77–91 (1952)
14. L. Jiang, J. Fang, W. Zhou, Stability analysis of economic discrete-time singular dynamic input-
output model, in Proceedings of the Seventh International Conference on Machine Learning
and Cybernetics, vol. 3 (2008), pp. 1434–1438
15. L. Jiang, J. Fang, W. Zhou, D. Zheng, H. Lu, Stability of economic input-output model, in
Proceedings of the 27th Chinese Control Conference (2008), pp. 804–807
16. J. Kallsen, Optimal portfolios for exponential Lévy processes. Math. Methods Oper. Res. 51(3),
357–374 (2000)
17. K. Kasa, Model uncertainty, robust policies, and the value of commitment. Macroecon. Dyn. 6(1), 145–166 (2002)
18. D.A. Kendrick, Stochastic control for economic models: past, present and the paths ahead. J. Econ. Dyn. Control 29(1), 3–30 (2005)
19. N.D. Kondratiev, The Major Economic Cycles (in Russian) (Moscow, 1925)
20. D. Li, W. Ng, Optimal dynamic portfolio selection: multiperiod mean-variance formulation.
Math. Financ. 10(3), 387–406 (2000)
21. M. Li, W. Zhou, H. Wang, Y. Chen, R. Lu, H. Lu, Delay-dependent robust H∞ control for
uncertain stochastic systems, in Proceedings of the 17th World Congress of the International
Federation of Automatic Control, vol. 17 (2008), pp. 6004–6009
22. R. Lu, X. Dai, H. Su, J. Chu, A. Xue, Delay-dependent robust stability and stabilization conditions for a class of Lur'e singular time-delay systems. Asian J. Control 10(4), 462–469 (2009)
23. D.G. Luenberger, Introduction to Dynamic Systems: Theory, Models, and Applications (Wiley,
New York, 1979)
24. M. Pemy, Q. Zhang, G.G. Yin, Liquidation of a large block of stock with regime switching. Math. Financ. 18(4), 629–648 (2008)
25. B. Øksendal, Stochastic Differential Equations: An Introduction with Applications (Springer, Berlin, 2005)
26. T. Parthasarathy, T.E.S. Raghavan, Some topics in two-person games. SIAM Rev. 14, 356–357
(1972)
27. B. Tang, C. Cheng, M. Zhong, Theory and Applications of Robust Economic Control, 1st edn.
(China Textile University Press, Shanghai, 2000)
28. R.J. Tetlow, P. von zur Muehlen, Robust monetary policy with misspecified models: does model uncertainty always call for attenuated policy? J. Econ. Dyn. Control 25(6), 911–949 (2001)
29. N. Vandaele, M. Vanmaele, A locally risk-minimizing hedging strategy for unit-linked life
insurance contracts in a Lévy processes financial market. Insur.: Math. Econ. 42(3), 1128–
1137 (2008)
30. Z. Wang, S. Lauria, J. Fang, X. Liu, Exponential stability of uncertain stochastic neural networks
with mixed time-delays. Chaos Solitons Fractals 32(1), 62–72 (2007)
31. H. Wang, A. Xue, R. Lu, Absolute stability criteria for a class of nonlinear singular systems
with time delay. Nonlinear Anal. Theory Methods Appl. 70(2), 621–630 (2009)
32. C. Weng, Constant proportion portfolio insurance under a regime switching exponential Lévy
processes. Insur.: Math. Econ. 52(3), 508–521 (2013)
33. N. Williams, Robust Control. An Entry for the New Palgrave (Princeton University Press,
Princeton, 2007)
34. H. Wu, Z. Li, Multi-period mean-variance portfolio selection with Markov regime switching
and uncertain time-horizon. J. Syst. Sci. Complex. 24(1), 140–155 (2011)
35. H. Wu, Z. Li, Multi-period mean-variance portfolio selection with regime switching and a
stochastic cash flow. Insur.: Math. Econ. 50(3), 371–384 (2012)
36. Z. Wu, W. Zhou, Delay-dependent robust H∞ control for uncertain singular time-delay systems.
IET Control Theory Appl. 1(5), 1234–1241 (2007)
37. Z. Wu, H. Su, J. Chu, W. Zhou, Improved result on stability analysis of discrete stochastic
neural networks with time delay. Phys. Lett. A 373(17), 1546–1552 (2009)
38. L. Xie, C.E. de Souza, Robust H∞ control for linear systems with norm-bounded time-varying
uncertainties. IEEE Trans. Autom. Control 37(8), 1188–1191 (1992)
39. S. Xu, T. Chen, H∞ output feedback control for uncertain stochastic systems with time-varying
delays. Automatica 40(12), 2091–2098 (2004)
40. S. Xu, J. Lam, On equivalence and efficiency of certain stability criteria for time-delay systems.
IEEE Trans. Autom. Control 52(1), 95–101 (2007)
41. S. Xu, J. Lam, A survey of linear matrix inequality techniques in stability analysis of delay
systems. Int. J. Syst. Sci. 39(12), 1095–1113 (2008)
42. D. Yue, Q.L. Han, Delay-dependent exponential stability of stochastic systems with time-
varying delay, nonlinearity and Markovian switching. IEEE Trans. Autom. Control 50(2),
217–222 (2005)
43. K.C. Yuen, C. Yin, On optimality of the barrier strategy for a general Lévy risk process. Math.
Comput. Model. 53(9), 1700–1707 (2011)
44. W. Zhou, M. Li, Mixed time-delays dependent exponential stability for uncertain stochastic
high-order neural networks. Appl. Math. Comput. 215(2), 503–513 (2009)
Index

B
Brownian motion, 2, 270
C
Chebyshev's inequality, 9, 261
D
Doob's martingale inequality, 9, 260
Dynkin's formula, 5, 62
E
Exponential stability, 13, 40, 104, 269, 342
G
Gronwall's inequality, 9
H
Hölder's inequality, 9
I
Itô's formula, 5, 270
J
Jensen's inequality, 10
L
Lévy noise, 4, 269, 270, 280
M
Markov chain, 2, 56, 104, 187, 269, 270, 328
Markovian switching, 3, 66, 128, 166, 269, 280
Martingale, 1
M-matrix, 8, 131, 211, 269
N
Neural network, 13, 37, 103, 165, 269, 342
P
Poisson random measure, 4
S
Schur's complements, 10, 44
Stability, 269
Stochastic process, 1, 27
Stopping time, 1, 260
Strong law of large numbers, 1, 270
Synchronization, 13, 22, 37, 93, 153, 269
T
Time delay, 39, 115, 154, 191, 280
Y
Young's inequality, 9, 144, 180
© Springer-Verlag Berlin Heidelberg 2016 357
W. Zhou et al., Stability and Synchronization Control of Stochastic
Neural Networks, Studies in Systems, Decision and Control 35,
DOI 10.1007/978-3-662-47833-2