
Studies in Systems, Decision and Control 35

Wuneng Zhou
Jun Yang
Liuwei Zhou
Dongbing Tong

Stability and
Synchronization
Control of
Stochastic Neural
Networks
Studies in Systems, Decision and Control

Volume 35

Series editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: kacprzyk@ibspan.waw.pl
About this Series

The series “Studies in Systems, Decision and Control” (SSDC) covers both new
developments and advances, as well as the state of the art, in the various areas of
broadly perceived systems, decision making and control, quickly, up to date and
with high quality. The intent is to cover the theory, applications, and perspectives
on the state of the art and future developments relevant to systems, decision
making, control, complex processes and related areas, as embedded in the fields of
engineering, computer science, physics, economics, social and life sciences, as well
as the paradigms and methodologies behind them. The series contains monographs,
textbooks, lecture notes and edited volumes in systems, decision making and
control spanning the areas of Cyber-Physical Systems, Autonomous Systems,
Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Bio-
logical Systems, Vehicular Networking and Connected Vehicles, Aerospace Sys-
tems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power
Systems, Robotics, Social Systems, Economic Systems and others. Of particular
value to both the contributors and the readership are the short publication timeframe
and the world-wide distribution and exposure which enable both a wide and rapid
dissemination of research output.

More information about this series at http://www.springer.com/series/13304


Wuneng Zhou · Jun Yang
Liuwei Zhou · Dongbing Tong


Stability and Synchronization


Control of Stochastic Neural
Networks

Wuneng Zhou
School of Information Sciences and Technology
Donghua University
Shanghai, China

Liuwei Zhou
School of Information Sciences and Technology
Donghua University
Shanghai, China

Jun Yang
Anyang Normal University
Anyang, China

Dongbing Tong
Shanghai University of Engineering Science
Shanghai, China

ISSN 2198-4182 ISSN 2198-4190 (electronic)


Studies in Systems, Decision and Control
ISBN 978-3-662-47832-5 ISBN 978-3-662-47833-2 (eBook)
DOI 10.1007/978-3-662-47833-2

Library of Congress Control Number: 2015946075

Springer Heidelberg New York Dordrecht London


© Springer-Verlag Berlin Heidelberg 2016
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or
dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained
herein or for any errors or omissions that may have been made.

Printed on acid-free paper

Springer-Verlag GmbH Berlin Heidelberg is part of Springer Science+Business Media


(www.springer.com)
Preface

The past few decades have witnessed the successful application of neural networks
in many areas such as image processing, pattern recognition, associative memory,
and optimization problems.
In neural network dynamics, the state variables of the model are the output
signals of the neurons, and a steady output is needed in the dynamical evolution of
neural networks. The stability of neural networks is therefore of utmost importance.
In general, there are two kinds of stability: asymptotic stability and exponential
stability.
On the other hand, the response of neurons to information is accomplished
jointly by a cluster of neurons rather than by a single neuron. This response takes
the form of discharge behavior, which should be made consistent, that is,
synchronized, by some control method. Therefore, the synchronization control of
neural networks is also an important research topic. Corresponding to the kinds of
stability, there are asymptotic synchronization and exponential synchronization.
In the models of neural network dynamics, the following phenomena arise.
First, time delay, which exists in real neural networks and may cause
oscillation and instability, has gained considerable research attention.
Second, sometimes there are uncertain parameters in neural networks. Therefore,
it is important to investigate the robust stability of neural networks with parameter
uncertainties.
Third, it has been shown that many neural networks may experience abrupt
changes in their structure and parameters due to phenomena such as component
failures or repairs, changing subsystem interconnections, and abrupt environmental
disturbances. In this situation, neural networks may be treated as systems that have
finite modes that may switch from one to another at different times, and can be
described by a finite-state Markov chain. The stability analysis problem for neural
networks with Markovian switching has therefore received much research attention,
and a series of results has been obtained.


Fourth, when the states of a system are determined not only by the states at the
current and past times but also by the derivatives of the past states, the system is
called a neutral-type system. Indeed, some physical systems in the real world can be
described by neutral-type models.
Finally, as we know, the synaptic transmission in real nervous systems can be
viewed as a noisy process brought on by random fluctuations from the release of
neurotransmitters and other probabilistic causes. In general, Gaussian noise has
been regarded as the disturbance arising in neural networks. The chief characteristic
of Gaussian noise is its continuous property.
However, in actual neural networks, the neuron's membrane potential is affected
not only by Gaussian noise but also by instantaneous disturbances caused by
Poisson spikes from other neurons. This requires that the neuron system possess a
large number of impinging synapses and that the synapses have small membrane
effects due to small coupling coefficients. These impinging synapses generate
discontinuous disturbances in the neural networks, which cannot be modeled by
Gaussian noise. Among stochastic processes, the Lévy process possesses
discontinuous sample paths, and it can be decomposed into a continuous part and a
jump part by the Lévy–Itô decomposition. So it is reasonable to model the noise of
neural networks as a Lévy process. Therefore, the stability and synchronization
analysis problems for neural networks with Lévy noise, possibly also with
Markovian switching parameters, become a new and severe challenge.
Focusing on the above models of neural network dynamics, in this book we
consider the problems of stability and synchronization. Especially for stochastic
neural networks, we study the almost surely asymptotic/exponential stability and
synchronization and the pth moment asymptotic/exponential stability and
synchronization. All of the results in this book are the authors' recent research
achievements. The chapters are organized as follows.
Chapter 1 is devoted to the relevant mathematical foundation, which includes
some main concepts and formulas, such as stochastic processes, martingales,
stochastic differential equations, Itô's formula, and the M-matrix, as well as the
inequalities, both elementary and matrix, used in this book.
Chapter 2 is concerned with exponential stability analysis for neural networks
with fuzzy logical BAM and Markovian jump and synchronization control problem
of stochastically coupled neural networks.
Chapter 3 is devoted to some neural network models with uncertainty. In this
chapter, the robust stability of high-order neural networks and hybrid stochastic
neural networks is first investigated. The robust anti-synchronization and robust lag
synchronization of chaotic neural networks are discussed in the sequel.
Chapter 4 is devoted to adaptive synchronization for some neural network
models. In this chapter, we study the problems of adaptive synchronization of
BAM delayed neural networks, synchronization of stochastic T-S fuzzy neural
networks with time delay and Markovian jumping parameters, synchronization of
delayed neural networks based on parameter identification and via output coupling,
adaptive a.s. asymptotic synchronization for stochastic delay neural networks with
Markovian switching, and adaptive pth moment exponential synchronization for
stochastic delayed Markovian jump neural networks, respectively.
Chapter 5 is devoted to the stability and synchronization of neutral-type neural
networks. In this chapter, we study the problems of robust stability, adaptive
synchronization, projective synchronization, adaptive pth moment exponential
synchronization, and asymptotical adaptive synchronization for delayed neutral-type
neural networks with Gaussian noise and Markovian switching parameters,
respectively.
Chapter 6 is devoted to the stability and synchronization of neural networks
with Lévy noise. In this chapter, we study the problems of almost surely exponential
stability, pth moment asymptotic stability, synchronization, and adaptive
synchronization for time-delay neural networks with Lévy noise and Markovian
switching parameters, respectively.
Chapter 7 is devoted to some applications to economics based on the related
research methods. In this chapter, we study the problems of the portfolio strategy of
a financial market with regime switching driven by a geometric Lévy process and
robust H∞ control for a generic linear rational expectations model of the economy,
respectively.
With the book completed, we would like to thank our students Hongqian Lu,
Minghao Li, Yan Gao, Qingyu Zhu, Xiaozheng Mou, Lezhi Wang, Zhengfeng
Zhang, Fenhan Wang, Anding Dai, and Xianghui Zhou for some of their research
work and manuscript entry work. We are grateful to the College of Information
Science and Technology, Donghua University, for financial support. We also wish
to thank Ms. Lu Yang and Jessie Guo for their publishing assistance. Finally, we
should thank our families, in particular Mrs. Xiuqin Liu, for their constant support.

Shanghai, China Wuneng Zhou


Anyang, China Jun Yang
Shanghai, China Liuwei Zhou
Shanghai, China Dongbing Tong
April 2015
Contents

1 Relative Mathematic Foundation . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Main Concepts and Formulas . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Stochastic Processes and Martingales . . . . . . . . . . . . . . . 1
1.1.2 Stochastic Differential Equations . . . . . . . . . . . . . . . . . . 2
1.1.3 Itô’s Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.4 M-Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2 Frequently Used Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.1 Elementary Inequality. . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.2 Matrix Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2 Exponential Stability and Synchronization Control
of Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1 Global Exponential Stability of NN with Fuzzy Logical BAM
and Markovian Jump . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.2 System Description and Preliminaries . . . . . . . . . . . . . . . 14
2.1.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.4 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.1.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2 Synchronization Control of Stochastically Coupled DNN . . . . . . 22
2.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.2.2 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2.3 Main Results and Proofs . . . . . . . . . . . . . . . . . . . . . . . . 26
2.2.4 Illustrative Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35


3 Robust Stability and Synchronization of Neural Networks . . . . . . . . 37
3.1 Delay-Dependent Stability Based on Parameters Weak
Coupling LMI Set of High-Order NN . . . . . . . . . . . . . . . . . . . . 37
3.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.1.2 Preliminaries and Problem Formulation. . . . . . . . . . . . . . 38
3.1.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.1.4 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.1.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.2 Exponential Stability of Hybrid SDNN with Nonlinearity . . . . . . 54
3.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.2.2 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.2.3 Main Results and Proofs . . . . . . . . . . . . . . . . . . . . . . . . 57
3.2.4 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.2.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.3 Anti-Synchronization Control of Unknown CNN with Delay
and Noise Perturbation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.3.2 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.3.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.3.4 Illustrative Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.3.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.4 Lag Synchronization of Uncertain Delayed CNN
Based on Adaptive Control . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.4.2 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3.4.3 Main Results and Proofs . . . . . . . . . . . . . . . . . . . . . . . . 80
3.4.4 Illustrative Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.4.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

4 Adaptive Synchronization of Neural Networks . . . . . . . . . . . . . . . . 93
4.1 Projective Synchronization of BAM Self-Adaptive DNN
with Unknown Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.1.2 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.1.3 Design of Controller. . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.1.4 Numerical Simulation . . . . . . . . . . . . . . . . . . . . . . . . . 100
4.1.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.2 Adaptive Synchronization of Stochastic T-S Fuzzy DNN
    with Markovian Jump . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
    4.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
    4.2.2 Problem Formulation and Preliminaries . . . . . . . . . . . . . 104

4.2.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.2.4 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . 113
4.2.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.3 Synchronization of DNN Based on Parameter Identification
and via Output Coupling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.3.2 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . 116
4.3.3 Main Results and Proofs . . . . . . . . . . . . . . . . . . . . . . . . 119
4.3.4 Illustrative Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.3.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.4 Adaptive a.s. Asymptotic Synchronization of SDNN
with Markovian Switching . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
4.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
4.4.2 Problem Formulation and Preliminaries. . . . . . . . . . . . . . 129
4.4.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.4.4 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.4.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4.5 Adaptive pth Moment Exponential Synchronization
of SDNN with Markovian Jump . . . . . . . . . . . . . . . . . . . . . . . 137
4.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
4.5.2 Problem Formulation and Preliminaries. . . . . . . . . . . . . . 138
4.5.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
4.5.4 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 146
4.5.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148

5 Stability and Synchronization of Neutral-Type
Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
5.1 Robust Stability of Neutral-Type NN with Mixed
Time Delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
5.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
5.1.2 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . 154
5.1.3 Main Results and Proofs . . . . . . . . . . . . . . . . . . . . . . . 156
5.1.4 Numerical Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5.1.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
5.2 Adaptive Synchronization of Neutral-Type SNN
with Markovian Switching . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
5.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
5.2.2 Problem Formulation and Preliminaries. . . . . . . . . . . . . . 167
5.2.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
5.2.4 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 187
5.2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190

5.3 Mode-Dependent Projective Synchronization
    of Neutral-Type DNN . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.3.2 Problem Formulation and Preliminaries. . . . . . . . . . . . . . 191
5.3.3 Main Results and Proofs . . . . . . . . . . . . . . . . . . . . . . . . 193
5.3.4 Numerical Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
5.3.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.4 Adaptive pth Moment Exponential Synchronization
of Neutral-Type NN with Markovian Switching . . . . . . . . . . . . . 202
5.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.4.2 Problem Formulation and Preliminaries. . . . . . . . . . . . . . 203
5.4.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.4.4 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 213
5.4.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.5 Adaptive Synchronization of Neutral-Type SNN
with Mixed Time Delays. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.5.2 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . 217
5.5.3 Main Results and Proofs . . . . . . . . . . . . . . . . . . . . . . . . 220
5.5.4 Numerical Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
5.5.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
5.6 Exponential Stability of Neutral-Type Impulsive SNN
with Markovian Switching . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
5.6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
5.6.2 Problem Formulation and Preliminaries. . . . . . . . . . . . . . 234
5.6.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
5.6.4 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 242
5.6.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
5.7 Asymptotical Adaptive Synchronization of Neutral Type
and Markovian Jump SNN . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
5.7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
5.7.2 Problem Formulation and Preliminaries. . . . . . . . . . . . . . 245
5.7.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
5.7.4 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 254
5.7.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264

6 Stability and Synchronization of Neural Networks
with Lévy Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
6.1 Almost Surely Exponential Stability of NN with Lévy Noise
and Markovian Switching . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
6.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
6.1.2 Model and Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . 270
6.1.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272

6.1.4 Numerical Simulation . . . . . . . . . . . . . . . . . . . . . . . . . 275
6.1.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
6.2 Asymptotic Stability of SDNN with Lévy Noise . . . . . . . . . . . . 280
6.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
6.2.2 Model and Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . 281
6.2.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
6.2.4 Numerical Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . 290
6.2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
6.3 Synchronization of SDNN with Lévy Noise and Markovian
Switching via Sampled Data . . . . . . . . . . . . . . . . . . . . . . . . . . 293
6.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
6.3.2 Model and Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . 295
6.3.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
6.3.4 Numerical Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . 304
6.3.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
6.4 Adaptive Synchronization of SDNN with Lévy Noise
and Markovian Switching . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
6.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
6.4.2 Model and Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . 310
6.4.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
6.4.4 Numerical Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . 317
6.4.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322

7 Some Applications to Economy Based on Related
Research Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
7.1 Portfolio Strategy of Financial Market with Regime Switching
Driven by Geometric Lévy Process . . . . . . . . . . . . . . . . . . . . . 327
7.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
7.1.2 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . 329
7.1.3 Main Results and Proofs . . . . . . . . . . . . . . . . . . . . . . . . 330
7.1.4 A Financial Example . . . . . . . . . . . . . . . . . . . . . . . . . . 339
7.1.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
7.2 Robust H∞ Control for a Generic Linear Rational
Expectations Model of Economy . . . . . . . . . . . . . . . . . . . . . . . 341
7.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
7.2.2 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . 343
7.2.3 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
7.2.4 Numerical Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
7.2.5 Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
7.2.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
Symbols and Acronyms

Z Field of integers
R Field of real numbers
R+ [0, ∞), the set of all nonnegative real numbers
R^n n-dimensional real Euclidean space
R^{m×n} Space of all m × n real matrices
S = {1, 2, . . . , S}, the finite state space of a Markov chain
I Identity matrix
a ∨ b The maximum of a and b
a ∧ b The minimum of a and b
A > 0 Symmetric positive definite
A ≥ 0 Symmetric positive semi-definite
A < 0 Symmetric negative definite
A ≤ 0 Symmetric negative semi-definite
A^T Transpose of matrix A
A^{-1} Inverse of matrix A
trace(A) The trace of a square matrix A
ρ(A) Spectral radius of matrix A
λ_max(A) Maximum eigenvalue of matrix A
λ_min(A) Minimum eigenvalue of matrix A
det(A) Determinant of matrix A
diag{· · ·} Block-diagonal matrix
|·| Euclidean norm of a vector or trace norm of a matrix
‖A‖ ‖A‖ := sup{|Ax| : |x| = 1} = √(λ_max(A^T A))
f : A → B The mapping f from A to B
C([−τ, 0]; R^n) The space of continuous R^n-valued functions ϕ defined on
[−τ, 0] with norm ‖ϕ‖ = sup_{−τ≤θ≤0} |ϕ(θ)|
C^{2,1}(D × R+; R) The family of all real-valued functions V(x, t) defined on
D × R+ which are continuously twice differentiable in x ∈ D
and once differentiable in t ∈ R+

L^p_{F_t}([−τ, 0]; R^n) The family of F_t-measurable C([−τ, 0]; R^n)-valued random
variables ξ such that E‖ξ‖^p < ∞
L^1(R+; R+) The family of functions γ : R+ → R+ such that
∫_0^∞ γ(t)dt < ∞
l^2[0, ∞) The space of square-integrable vector functions on [0, ∞)
Ω Sample space
F σ-algebra of subsets of Ω
(Ω, F, P) A probability space
{F_t}_{t≥0} A filtration
⟨M, M⟩_t The quadratic variation of the martingale or local martingale
{M_t}_{t≥0}
BAM Bidirectional associative memory
CNN Chaotic neural networks
DNN Delayed neural networks
LMI Linear matrix inequality
NN Neural networks
NSDDE Neutral stochastic delayed differential equation
SDDE Stochastic delayed differential equation
SDE Stochastic differential equation
SDNN Stochastic delayed neural networks
SNN Stochastic neural networks
Chapter 1
Relative Mathematic Foundation

In this chapter, we will present some concepts and formulas as well as several
important inequalities which will be used throughout this book. We will begin with
some elementary concepts and formulas, such as stochastic processes and
martingales, SDEs, the M-matrix, and Itô's formula. Then some inequalities
frequently used in this book will follow in the sequel.

1.1 Main Concepts and Formulas

1.1.1 Stochastic Processes and Martingales

A family {X(t)}_{t∈I} of R^n-valued random variables is called a stochastic process
with parameter set (or index set) I and state space R^n. The parameter set I is
usually the half line R+ = [0, ∞).
Let {F_t} be a filtration. A random variable τ : Ω → [0, ∞] (it may take the
value ∞) is called an {F_t}-stopping time if {ω : τ(ω) ≤ t} ∈ F_t for any t ≥ 0.
Let {X (t)}t≥0 be an Rn -valued stochastic process. It is said to be {Ft }-adapted if
for every t, X (t) is {Ft }-measurable.
An Rn -valued {Ft }-adapted integrable process {M(t)}t≥0 is called a martingale
with respect to {Ft } if

E(M(t)|F_s) = M(s) a.s. for all 0 ≤ s < t < ∞.

A right-continuous adapted process M = {M(t)}_{t≥0} is called a local martingale
if there exists a nondecreasing sequence {τ_k}_{k≥1} of stopping times with τ_k ↑ ∞ a.s.
such that every {M(τ_k ∧ t) − M(0)}_{t≥0} is a martingale.
The following results are the convergence theorem of nonnegative
semimartingales and the strong law of large numbers for local martingales.


Lemma 1.1 (The convergence theorem of nonnegative semi-martingales) Let A_1(t)
and A_2(t) be two continuous adapted increasing processes on t ≥ 0 with
A_1(0) = A_2(0) = 0 a.s. Let M(t) be a real-valued continuous local martingale with
M(0) = 0 a.s. Let ζ be a nonnegative F-measurable random variable such that
Eζ < ∞. Define

X(t) = ζ + A_1(t) − A_2(t) + M(t), t ≥ 0.

If X (t) is nonnegative, then

{ lim_{t→∞} A_1(t) < ∞ } ⊂ { lim_{t→∞} X(t) < ∞ } ∩ { lim_{t→∞} A_2(t) < ∞ } a.s.

where C ⊂ D a.s. means P(C ∩ D^c) = 0. In particular, if lim_{t→∞} A_1(t) < ∞ a.s.,
then, with probability one, we have

lim_{t→∞} X(t) < ∞, lim_{t→∞} A_2(t) < ∞

and

−∞ < lim_{t→∞} M(t) < ∞.

That is, all of the three processes X (t), A2 (t), and M(t) converge to finite random
variables.
Lemma 1.2 (Strong law of large numbers for local martingales) [1, 9] Let
M = {M(t)}_{t≥0} be a real-valued local martingale vanishing at t = 0. Then

lim_{t→∞} ∫_0^t d⟨M, M⟩_s / (1 + s)² < ∞ a.s. ⇒ lim_{t→∞} M(t)/t = 0 a.s.

1.1.2 Stochastic Differential Equations

Four types of stochastic differential equations (SDEs) concerning the topic of this
book are displayed as follows.
1. SDE and Markov chain
The following equation is the general form of an n-dimensional stochastic differ-
ential equation without Markovian jump.

d x(t) = f (t, x(t))dt + g(t, x(t))dω(t) (1.1)

where f : R+ × R^n → R^n and g : R+ × R^n → R^{n×m} are two functions, and ω(t)
is an m-dimensional Brownian motion.
Let {r(t), t ≥ 0} be a right-continuous Markov process on the probability
space which takes values in the finite space S = {1, 2, . . . , S} with generator
Γ = (π_ij) (i, j ∈ S) given by

P{r(t + Δ) = j | r(t) = i} = { π_ij Δ + o(Δ),     if i ≠ j,
                              { 1 + π_ii Δ + o(Δ), if i = j,

where Δ > 0 and lim_{Δ→0} o(Δ)/Δ = 0. Here π_ij ≥ 0 is the transition rate from i
to j if i ≠ j, and π_ii = −Σ_{j≠i} π_ij.
SDEs with Markovian switching have the form of

d x(t) = f (t, r (t), x(t))dt + g(t, r (t), x(t))dω(t) (1.2)

where f : R+ × S × Rn → Rn and g : R+ × S × Rn → Rn×m are two functions.
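
For intuition, a sample path of the hybrid system (1.2) can be simulated by
combining a first-order discretization of the transition probabilities above with an
Euler–Maruyama step. The following Python sketch is illustrative only: the
two-mode generator and the mode-dependent linear coefficients are our own
assumptions, not data from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-mode generator (pi_ij): off-diagonals >= 0, rows sum to zero.
Gamma = np.array([[-1.0,  1.0],
                  [ 2.0, -2.0]])
A = {0: -1.0, 1: -3.0}    # assumed drift per mode:     f(t, i, x) = A[i]*x
B = {0:  0.5, 1:  0.2}    # assumed diffusion per mode: g(t, i, x) = B[i]*x

dt, T = 1e-3, 5.0
x, r = 1.0, 0             # initial state x(0) and initial mode r(0)

for _ in range(int(T / dt)):
    # Mode switch with probability ~ -pi_ii * dt (first-order approximation).
    if rng.random() < -Gamma[r, r] * dt:
        r = 1 - r         # with only two modes the target mode is forced
    dW = rng.normal(0.0, np.sqrt(dt))
    x += A[r] * x * dt + B[r] * x * dW   # Euler-Maruyama step for (1.2)

print("x(T) =", x)
```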


2. SDDE
Consider, an n-dimensional stochastic delayed differential equation (SDDE, for
short) with Markovian jumping parameters

d x(t) = f (t, r (t), x(t), xτ (t))dt + g(t, r (t), x(t), xτ (t))dω(t) (1.3)

on t ∈ [0, ∞) with the initial data given by

{x(θ) : −τ ≤ θ ≤ 0} = ξ ∈ L^2_{F_0}([−τ, 0]; R^n).

f : R+ ×S×Rn ×Rn → Rn and g : R+ ×S×Rn ×Rn → Rn×m are two functions.


3. NSDDE
SDDEs of neutral-type have the form of

d[x(t) − D(x(t − τ), r(t))] = f(t, r(t), x(t), x(t − τ))dt
    + g(t, r(t), x(t), x(t − τ))dω(t), (1.4)

where t ∈ [0, ∞) with the initial data given by

{x(θ) : −τ ≤ θ ≤ 0} = ξ ∈ L^2_{F_0}([−τ, 0]; R^n).

f : R+ × S × R^n × R^n → R^n, g : R+ × S × R^n × R^n → R^{n×m}, and
D : R^n × S → R^n are three functions.
For the neutral term in Eq. (1.4), we have the following two lemmas.
Lemma 1.3 ([6]) Let p > 1 and |D(y, i)| ≤ k|y| hold; then

|x − D(y, i)|^p ≤ (1 + k)^{p−1}(|x|^p + k|y|^p), ∀(x, y, i) ∈ R^n × R^n × S.

Lemma 1.4 ([6]) Let p > 1 and |D(y, i)| ≤ k|y| hold; then

|x|^p ≤ k|y|^p + |x − D(y, i)|^p / (1 − k)^{p−1}, ∀(x, y, i) ∈ R^n × R^n × S,

or
−|x − D(y, i)| p ≤ −(1 − k) p−1 |x| p + k(1 − k) p−1 |y| p .

4. SDDE with Lévy noise


Let B(t) = (B_1(t), B_2(t), . . . , B_m(t))^T be an m-dimensional F_t-adapted
Brownian motion and N(·, ·) be an F_t-adapted Poisson random measure on
[0, +∞) × R^n with compensator Ñ satisfying Ñ(dt, dz) = N(dt, dz) − λφ(dz)dt,
where λ is the intensity of the Poisson process and φ is the probability distribution
of the random variable z.
Consider the n-dimensional stochastic delay hybrid system with Lévy noise of
the form

dx(t) = f(x(t), x(t − δ(t)), t, r(t))dt
    + g(x(t), x(t − δ(t)), t, r(t))dB(t)
    + ∫_{R^l} h(x(t⁻), x((t − δ(t))⁻), t, r(t), z)N(dt, dz) (1.5)

on t ∈ R+, where x(t⁻) = lim_{s↑t} x(s). Here δ : R+ → [0, τ] is a Borel measurable
function which stands for the time lag, while f : R^n × R^n × R+ × S → R^n,
g : R^n × R^n × R+ × S → R^{n×m} and h : R^n × R^n × R+ × S → R^{n×l}. We assume
that the initial data are given by {x(θ) : −τ ≤ θ ≤ 0} = ξ(θ) ∈ L^p_{F_0}([−τ, 0]; R^n),
r(0) = r_0. We note that each column h^{(k)} of the n × l matrix h = [h_ij] depends on
z only through the kth coordinate z_k, i.e.,

h^{(k)}(x, y, t, i, z) = h^{(k)}(x, y, t, i, z_k), z = (z_1, . . . , z_l)^T ∈ R^l, i ∈ S.
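
To make the three terms of (1.5) concrete, the following scalar Python sketch
discretizes the equation with a single fixed mode, a constant delay, and
compound-Poisson jumps; every coefficient function and constant here is an
illustrative assumption of ours, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed coefficients: drift f, Brownian coefficient g, jump coefficient h.
f = lambda x, xd: -2.0 * x + 0.5 * np.tanh(xd)
g = lambda x, xd: 0.3 * x
h = lambda x, xd, z: 0.1 * x * z
lam, tau, dt, T = 2.0, 0.5, 1e-3, 5.0    # jump intensity, delay, step, horizon

n, d = int(T / dt), int(tau / dt)
path = np.ones(n + d)                    # constant initial segment on [-tau, 0]

for k in range(d, n + d - 1):
    x, xd = path[k], path[k - d]         # x(t) and the delayed state x(t - tau)
    dW = rng.normal(0.0, np.sqrt(dt))
    # Poisson(lam*dt) jumps on [t, t+dt); each carries a standard normal mark z.
    jumps = sum(h(x, xd, rng.normal()) for _ in range(rng.poisson(lam * dt)))
    path[k + 1] = x + f(x, xd) * dt + g(x, xd) * dW + jumps

print("x(T) =", path[-1])
```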

1.1.3 Itô’s Formula

1. Diffusion operator and jump-diffusion operator


For system (1.1), given V ∈ C^{2,1}(R^n × R+; R+), define an operator
LV : R^n × R+ → R by

LV(x, t) = V_t(x, t) + V_x(x, t) f(x) + (1/2) trace[g^T(x)V_xx(x, t)g(x)] (1.6)

which is called the diffusion operator of (1.1), where

V_x(x, t) = ( ∂V(x, t)/∂x_1, . . . , ∂V(x, t)/∂x_n ),
V_xx(x, t) = ( ∂²V(x, t)/∂x_i∂x_j )_{n×n}.

For system (1.3) and system (1.4), the diffusion operator has the respective forms

LV(x, y, t, i) = V_t(x, t, i) + V_x(x, t, i) f(x, y, t, i)
    + (1/2) trace[g^T(x, y, t, i)V_xx(x, t, i)g(x, y, t, i)]
    + Σ_{j=1}^S γ_ij V(x, t, j) (1.7)

and

LV(t, i, x, y) = V_t(t, i, x − D(y, i)) + V_x(t, i, x − D(y, i)) f(t, i, x, y)
    + (1/2) trace[g^T(t, i, x, y)V_xx(t, i, x − D(y, i))g(t, i, x, y)]
    + Σ_{j=1}^S γ_ij V(t, j, x − D(y, i)). (1.8)

The jump-diffusion operator for the SDDE with Lévy noise (1.5) is defined by (see
[20])

LV(x, y, t, i) = V_t(x, t, i) + V_x(x, t, i) f(x, y, t, i)
    + (1/2) trace[g^T(x, y, t, i)V_xx(x, t, i)g(x, y, t, i)]
    + ∫_R Σ_{k=1}^l [V(x + h^{(k)}(x, y, t, i, z_k), t, i) − V(x, t, i)] ν_k(dz_k)
    + Σ_{j=1}^S γ_ij V(x, t, j). (1.9)

2. Itô’s formula and Dynkin’s formula


For systems (1.3) and (1.5), the generalized Itô formula can be given,
respectively, as follows.

V(x(t), t, r(t)) = V(x(0), 0, r_0) + ∫_0^t LV(x(s), x_τ(s), s, r(s))ds
    + ∫_0^t V_x(x(s), s, r(s))g(x(s), x_τ(s), s, r(s))dB(s)
    + ∫_0^t ∫_R [V(x(s), s, r_0 + c(r(s), u)) − V(x(s), s, r(s))]μ(ds, du). (1.10)

V (x, t, r (t))
 t
= V (x(0), 0, r0 ) + LV (x(s), x(s − δ(s)), s, r (s))ds
0
 t
+ Vx (x(s), s, r (s))g(x(s), x(s − δ(s)), s, r (s))d B(s)
0
l  t
 (1.11)
+ [V (x(s − ) + h (k) (x(s − ), x((s − δ(s))− ), s,
k=1 0 R

r (s), z k ), s, r (s)) − V (x(s − ), s, r (s))] Ñ (ds, dz k )


 t
+ [V (x(s − ), s, r0 + c(r (s), u))
0 R
− V (x(s − ), s, r (s))]μ(ds, du).

The details of the function c and the martingale measure μ(ds, du) can be seen
in [9, pp. 46–48].
Obviously, (1.10) and (1.11) hold if we replace 0 and t with bounded stopping
times τ_1 and τ_2, respectively. Thus, the following lemmas are derived.
For system (1.3), (1.5) we have the Dynkin formula as follows.

Lemma 1.5 (Dynkin formula) [9, 11] For system (1.3), let V ∈ C2,1 (Rn × R+ ×
S; R+ ) and τ1 , τ2 be bounded stopping times such that 0 ≤ τ1 ≤ τ2 a. s. (i.e., almost
surely). If V (x(t), t, r (t)) and LV (x(t), xτ (t), t, r (t)) are bounded on t ∈ [τ1 , τ2 ]
with probability 1, then
EV(x(τ_2), τ_2, r(τ_2)) = EV(x(τ_1), τ_1, r(τ_1)) + E ∫_{τ_1}^{τ_2} LV(x(s), x_τ(s), s, r(s))ds.

Lemma 1.6 (Dynkin formula) [9] For system (1.5), let τ1 , τ2 be bounded stopping
times such that 0 ≤ τ1 ≤ τ2 a.s. If V (x(t), t, r (t)), and LV (x(t), x(t −δ(t)), t, r (t))
are bounded on t ∈ [τ1 , τ2 ] with probability 1, then

EV(x(τ_2), τ_2, r(τ_2)) = EV(x(τ_1), τ_1, r(τ_1))
    + E ∫_{τ_1}^{τ_2} LV(x(s), x(s − δ(s)), s, r(s))ds. (1.12)

For NSDDEs, we have the following Dynkin formula.

Lemma 1.7 (Dynkin formula) (See Ref. [6]) Let V ∈ C^{2,1}(R+ × S × R^n; R) and
x(t) be a solution of Eq. (1.4). Then, for any stopping times 0 ≤ ρ_1 ≤ ρ_2 < ∞ a.s.,

EV(ρ_2, r(ρ_2), x(ρ_2) − D(x(ρ_2 − τ), r(ρ_2)))
    = EV(ρ_1, r(ρ_1), x(ρ_1) − D(x(ρ_1 − τ), r(ρ_1)))
    + E ∫_{ρ_1}^{ρ_2} LV(s, r(s), x(s), x(s − τ))ds (1.13)

holds, provided that V(t, r(t), x(t) − D(x(t − τ), r(t))) and LV(t, r(t), x(t), x(t − τ))
are bounded on t ∈ [ρ_1, ρ_2] with probability 1, where the operator
LV : R+ × S × R^n × R^n → R is defined by (1.8).

For system (1.3) and (1.4), the following two lemmas are used to determine the
almost surely asymptotic stability of their solutions.

Assumption 1.8 ([19]) Both f and g satisfy the local Lipschitz condition. That is,
for each h > 0, there is an L h > 0 such that

| f (t, i, x, y) − f (t, i, x̄, ȳ)| + |g(t, i, x, y) − g(t, i, x̄, ȳ)|


≤ L h (|x − x̄| + |y − ȳ|)

for all (t, i) ∈ R+ × S and those x, y, x̄, ȳ ∈ R^n with |x| ∨ |y| ∨ |x̄| ∨ |ȳ| ≤ h. Moreover,

sup{| f (t, i, 0, 0)| ∨ |g(t, i, 0, 0)| : t ≥ 0, i ∈ S} < ∞.

Lemma 1.9 ([19]) Let Assumption 1.8 hold. Assume that there are functions
V ∈ C^{2,1}(R+ × S × R^n; R+), ψ ∈ L^1(R+; R+), and w_1, w_2 ∈ C(R^n; R+) such
that

LV(t, i, x, y) ≤ ψ(t) − w_1(x) + w_2(y),
    ∀(t, i, x, y) ∈ R+ × S × R^n × R^n, (1.14)

w_1(0) = w_2(0) = 0, w_1(x) > w_2(x) ∀x ≠ 0, (1.15)

and

lim_{|x|→∞} inf_{0≤t<∞, i∈S} V(t, i, x) = ∞. (1.16)

Then the solution of Eq. (1.3) is almost surely asymptotically stable.

Lemma 1.10 ([10]) Let system (1.4) satisfy the following hypotheses.
(H1) Both f̄ and ḡ satisfy the local Lipschitz condition. That is, for each h > 0,
there is an L_h > 0 such that

|f̄(t, i, x, y) − f̄(t, i, x̄, ȳ)| ∨ |ḡ(t, i, x, y) − ḡ(t, i, x̄, ȳ)|
    ≤ L_h(|x − x̄| + |y − ȳ|)

for all (t, i) ∈ R+ × S and those x, y, x̄, ȳ ∈ R^n with |x| ∨ |y| ∨ |x̄| ∨ |ȳ| ≤ h.
(H2) For each i ∈ S, there is a constant κ_i ∈ (0, 1) such that

|D̄(x, i) − D̄(x̄, i)| ≤ κ_i |x − x̄| ∀x, x̄ ∈ R^n.

(H3) For each (t, i) ∈ R+ × S,

f̄(t, i, 0, 0) = 0, ḡ(t, i, 0, 0) = 0, D̄(0, i) = 0.


Assume also that there are functions V ∈ C^{2,1}(R+ × S × R^n; R), γ ∈ L^1(R+; R+),
Q ∈ C(R^n × [−τ, ∞); R+) and W ∈ C(R^n; R+) such that

LV(t, i, x, y) ≤ γ(t) − Q(t, x) + Q(t − τ, y) − W(x − D̄(y, i)) (1.17)

for (t, i, x, y) ∈ R+ × S × R^n × R^n, and

lim_{|x|→∞} [ inf_{(t,i)∈R+×S} V(t, i, x) ] = ∞. (1.18)

Then, we have the following results.


(R1) For any initial data {x(θ) : −τ ≤ θ ≤ 0} = ξ ∈ C^b_{F_0}([−τ, 0]; R^n) and
r(0) = i_0 ∈ S, Eq. (1.4) has a unique global solution, which is denoted by x(t; ξ, i_0).
(R2) The solution x(t; ξ, i_0) obeys

lim_{t→∞} x(t; ξ, i_0) = 0 a.s. (1.19)

if W has the property that W (x) = 0 if and only if x = 0.

1.1.4 M-Matrix

The theory of M-matrices has played an important role in the study of stability,
stabilization, control, etc. We cite the relative concepts of M-matrix below.
Definition 1.11 ([3]) A square matrix M = (m i j )n×n is called a nonsingular
M-matrix if M can be expressed in the form M = s In − G with some G ≥ 0
(i.e., each element of G is nonnegative) and s > ρ(G), where ρ(G) is the spectral
radius of G.
Lemma 1.12 ([9]) If M = (m_ij)_{n×n} ∈ R^{n×n} with m_ij ≤ 0 (i ≠ j), then the
following statements are equivalent.
(i) M is a nonsingular M-matrix.
(ii) Every real eigenvalue of M is positive.
(iii) M is positive stable. That is, M^{-1} exists and M^{-1} > 0 (i.e., M^{-1} ≥ 0 and
at least one element of M^{-1} is positive).
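
Definition 1.11 and Lemma 1.12 translate into a simple numerical test: for a matrix
with nonpositive off-diagonal entries, take s to be the largest diagonal entry, so that
G = sI − M is automatically nonnegative, and compare ρ(G) with s. A minimal
Python sketch of this check (the helper function is our own, not from the book):

```python
import numpy as np

def is_nonsingular_m_matrix(M, tol=1e-10):
    """Check Definition 1.11: M = s*I - G with G >= 0 and s > rho(G)."""
    M = np.asarray(M, dtype=float)
    off_diag = M - np.diag(np.diag(M))
    if np.any(off_diag > tol):          # off-diagonal entries must be <= 0
        return False
    s = np.max(np.diag(M))              # smallest shift making G nonnegative
    G = s * np.eye(M.shape[0]) - M
    return np.max(np.abs(np.linalg.eigvals(G))) < s - tol

print(is_nonsingular_m_matrix([[3.0, -1.0], [-1.0, 3.0]]))   # True
print(is_nonsingular_m_matrix([[1.0, -2.0], [-2.0, 1.0]]))   # False
```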

1.2 Frequently Used Inequalities

There are several inequalities which are used frequently in this book. The inequalities
with respect to vectors and scalars are gathered in the first part and those with respect
to matrices are included in the second part.

1.2.1 Elementary Inequality

Lemma 1.13 ([14–16]) Let x ∈ R^n, y ∈ R^n and ε > 0. Then we have

x^T y + y^T x ≤ ε x^T x + ε^{−1} y^T y.

More generally, this inequality can be written in the following form [7]:

x^T y + y^T x ≤ x^T M x + y^T M^{−1} y,

where M is any matrix with M > 0.

Lemma 1.14 (Young's inequality) [9] Let a, b ∈ R and β ∈ [0, 1]. Then

|a|^β |b|^{1−β} ≤ β|a| + (1 − β)|b|.

Lemma 1.15 (Hölder's inequality, see Ref. [9]) Let a_i ∈ R, k, p ∈ Z and p ≥ 1.
Then

| Σ_{i=1}^k a_i |^p ≤ k^{p−1} Σ_{i=1}^k |a_i|^p.

Lemma 1.16 ([2]) Let Z ∈ R^{n×n} be a symmetric matrix. Then the inequality

λ_m(Z) x^T x ≤ x^T Z x ≤ λ_M(Z) x^T x

holds for any x ∈ R^n.

Lemma 1.17 (Gronwall's inequality) [9, 11] Let T > 0 and u(·) be a Borel
measurable bounded nonnegative function on [0, T]. If

u(t) ≤ c + v ∫_0^t u(s)ds, ∀t ∈ [0, T],

for some constants c, v, then

u(t) ≤ c exp(vt), ∀t ∈ [0, T].

Lemma 1.18 (Doob's martingale inequality, see Ref. [9]) Let {M_t}_{t≥0} be an
R^n-valued martingale and [a, b] a bounded interval in R+. If p > 1 and
M_t ∈ L^p(Ω; R^n) (the family of R^n-valued random variables X with E|X|^p < ∞),
then

E( sup_{a≤t≤b} |M_t|^p ) ≤ ( p/(p − 1) )^p E|M_b|^p.

Lemma 1.19 (Chebyshev's inequality, see Ref. [9]) If c > 0, p > 0,
X ∈ L^p(Ω; R^n), then

P{ω : |X(ω)| ≥ c} ≤ c^{−p} E|X|^p.

Lemma 1.20 (Jensen's inequality) [5, 16, 17] For any positive definite matrix
M > 0, scalar γ > 0, and vector function w : [0, γ] → R^n such that the integrations
concerned are well defined, the following inequality holds:

( ∫_0^γ w(s)ds )^T M ( ∫_0^γ w(s)ds ) ≤ γ ∫_0^γ w^T(s)Mw(s)ds.
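
As a sanity check, Jensen's inequality can be verified numerically by discretizing
both integrals; in the Python sketch below, γ = 2, M = I, and the particular vector
function w are arbitrary illustrative choices of ours.

```python
import numpy as np

gamma, N = 2.0, 10000
s = np.linspace(0.0, gamma, N)
ds = gamma / (N - 1)
w = np.vstack([np.sin(s), np.cos(3 * s)])   # an arbitrary w : [0, gamma] -> R^2
M = np.eye(2)

iw = w.sum(axis=1) * ds                     # approximates the integral of w
lhs = iw @ M @ iw                           # (int w)^T M (int w)
rhs = gamma * np.sum(w * (M @ w)) * ds      # gamma * integral of w^T M w
print(lhs <= rhs)                           # True, as the lemma asserts
```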

1.2.2 Matrix Inequalities

Lemma 1.21 (Schur's complements) [4, 8, 13] Given constant matrices Ω_1, Ω_2,
Ω_3, where Ω_1 = Ω_1^T and 0 < Ω_2 = Ω_2^T, then Ω_1 + Ω_3^T Ω_2^{−1} Ω_3 < 0 if and only if

[ Ω_1   Ω_3^T ]               [ −Ω_2   Ω_3 ]
[ Ω_3   −Ω_2  ] < 0,   or     [ Ω_3^T  Ω_1 ] < 0.
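
Lemma 1.21 is easy to test numerically by comparing the definiteness of the block
matrix with that of the Schur complement directly; the random matrices in the
following Python sketch are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(2)

def is_negative_definite(A):
    return bool(np.all(np.linalg.eigvalsh(A) < 0))

# Symmetric Omega1, positive definite Omega2, arbitrary Omega3 (2 x 3 here).
O1 = -np.eye(3) + 0.1 * rng.standard_normal((3, 3))
O1 = (O1 + O1.T) / 2
O2 = np.eye(2)
O3 = 0.2 * rng.standard_normal((2, 3))

schur = is_negative_definite(O1 + O3.T @ np.linalg.inv(O2) @ O3)
block = is_negative_definite(np.block([[O1, O3.T], [O3, -O2]]))
print(schur, block)   # the two answers always coincide
```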

Lemma 1.22 ([12, 18]) Given matrices Ω, Γ and Ξ with appropriate dimensions
and with Ω symmetrical, then

Ω + Γ FΞ + Ξ T F T Γ T < 0

for any F satisfying F T F ≤ I , if and only if there exists a scalar ε > 0 such that

Ω + εΓ Γ T + ε−1 Ξ T Ξ < 0.

Lemma 1.23 Let D, S and F be real matrices of appropriate dimensions with
F^T F ≤ I. Then, for any scalar ε > 0, we have DFS + (DFS)^T ≤ ε^{−1}DD^T + εS^T S.

References

1. D. Applebaum, M. Siakalli, Stochastic stabilization of dynamical systems using Lévy noise.
Stoch. Dyn. 10(4), 509–527 (2010)
2. A. Berman, R. Plemmons, Nonnegative Matrices in Mathematical Sciences (Academic Press,
New York, 1979)
3. A. Berman, R.J. Plemmons, Nonnegative Matrices in Mathematical Sciences (SIAM, Philadel-
phia, 1987)
4. S. Boyd, L.E. Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and
Control Theory (SIAM, Philadelphia, 1994)
5. K. Gu, An integral inequality in the stability problem of time-delay systems, in Proceedings
of 39th IEEE Conference on Decision and Control, (2000) pp. 2805–2810
6. V. Kolmanovskii, N. Koroleva, T. Maizenberg, X. Mao, A. Matasov, Neutral stochastic
differential delay equations with Markovian switching. Stoch. Anal. Appl. 21(4), 839–867 (2003)
7. X. Liao, G. Chen, E.N. Sanchez, LMI-based approach for asymptotically stability analysis of
delayed neural networks. IEEE Trans. Circuits Syst. I 49(7), 1033–1039 (2002)

8. B. Liu, X.Z. Liu, G.R. Chen, Robust impulsive synchronization of uncertain dynamical
networks. IEEE Trans. Circuits Syst. I 52(7), 1431–1441 (2005)
9. X. Mao, C. Yuan, Stochastic Differential Equations with Markovian Switching (Imperial
College Press, London, 2006)
10. X. Mao, Y. Shen, C. Yuan, Almost surely asymptotic stability of neutral stochastic differential
delay equations with Markovian switching. Stoch. Process. Appl. 118(8), 1385–1406 (2008)
11. B. Øksendal, Stochastic Differential Equations: An Introduction with Applications (Springer,
Berlin, 2005)
12. I.R. Petersen, A stabilization algorithm for a class of uncertain linear systems. Syst. Control
Lett. 8(4), 351–357 (1987)
13. M.G. Rosenblum, A.S. Pikovsky, J. Kurths, Phase synchronization in driven and coupled
chaotic oscillators. IEEE Trans. Circuits Syst. 44(10), 874–881 (1997)
14. Z. Wang, Y. Liu, F. Karl, X. Liu, Stochastic stability of uncertain Hopfield neural networks
with discrete and distributed delays. Phys. Lett. A 354(4), 288–297 (2006)
15. Z. Wang, Y. Liu, L. Liu, X. Liu, Exponential stability of delayed recurrent neural networks
with Markovian jumping parameters. Phys. Lett. A 356(4), 346–352 (2006)
16. Z. Wang, H. Shu, J. Fang, X. Liu, Robust stability for stochastic Hopfield neural networks with
time delays. Nonlinear Anal.: Real World Appl. 7(5), 1119–1128 (2006)
17. Z.-G. Wu, P. Shi, H. Su, J. Chu, Stochastic synchronization of Markovian jump neural networks
with time-varying delay using sampled-data. IEEE Trans. Cybern. 43(6), 1796–1806 (2013)
18. L. Xie, Output feedback H∞ control of systems with parameter uncertainty. Int. J. Control
63(4), 741–750 (1996)
19. C. Yuan, X. Mao, Robust stability and controllability of stochastic differential delay equations
with Markovian switching. Automatica 40(3), 343–354 (2004)
20. C.G. Yuan, X.R. Mao, Stability of stochastic delay hybrid systems with jumps. Eur. J. Control
16(6), 595–608 (2010)
Chapter 2
Exponential Stability and Synchronization
Control of Neural Networks

In this chapter, we are concerned with exponential stability analysis for neural
networks with fuzzy logical BAM and Markovian jump, and with the
synchronization control problem of stochastically coupled neural networks.

2.1 Global Exponential Stability of NN with Fuzzy Logical


BAM and Markovian Jump

2.1.1 Introduction

It is well known that bidirectional associative memory (BAM) neural networks
have been deeply investigated in recent years due to their applicability to image
processing, signal processing, optimization, pattern recognition, and other areas.
Many researchers have been attracted to this class of artificial neural networks, and
a great deal of research has been done since fuzzy logical BAM neural networks
were introduced by Kosko in [10–12]. Especially, since global stability is one of
the most desirable dynamic properties of neural networks, there have been growing
research interests in the stability analysis and synthesis of BAM neural networks.
For example, the authors of [2] analyzed the global asymptotic stability of a BAM
neural network with constant time delays, and the exponential stability of the
periodic solution to Cohen-Grossberg-type BAM neural networks with time-varying
delays was investigated in [36].
In recent years, the concept of incorporating fuzzy logic into neural networks has
developed into an extensive research topic. Among the various methods developed
for the analysis and synthesis of complex nonlinear systems, fuzzy logic control is
an attractive and effective rule-based one. Therefore, fuzzy neural networks have
received great attention since they are hybrids of fuzzy logic and traditional neural
networks. In many of the model-based fuzzy control approaches, the well-known
Takagi-Sugeno

(T-S) fuzzy model is recognized as an convenient and efficient tool in functional


approximations. During the last decades, sufficient attention has been paid to the sta-
bility analysis and control synthesis of T-S fuzzy BAM neural networks [1, 19, 21]. In
[20], researchers discuss the global asymptotic stability problem of T-S fuzzy BAM
neural networks with time-varying delays. Moreover, the robust stability problem for
uncertain fuzzy BAM neural networks with Markovian jumping and time-varying
interval delays is investigated in [3]. However, in [4], a new class of fuzzy logical
bidirectional associative memory (FLBAM) neural networks is introduced and ana-
lyzed. This model not only varies from the traditional BAM neural networks, but
also is different from the T-S fuzzy BAM neural networks. In [37], the authors dis-
cussed the exponential stability and periodic solution for fuzzy logical BAM neural
networks with time-varying delays.
In this section, we are concerned with the exponential stability of fuzzy logical
BAM neural networks with Markovian jumping parameters. Most scholars have
investigated the global stability of T-S fuzzy BAM neural networks with
Markovian jumping parameters; however, the global stability of FLBAM neural
networks with Markovian jumping parameters has seldom been researched. The
main purpose of this section is to derive some sufficient conditions for the
exponential stability of fuzzy logical BAM neural networks with Markovian
jumping parameters by constructing a Lyapunov functional and utilizing the linear
matrix inequality (LMI) method.

2.1.2 System Description and Preliminaries

Consider the following FLBAM neural networks:

u̇_i(t) = −a_i(t)u_i(t) + ∧_{j=1}^n b_ij(t) f_j(v_j(t)) + ∨_{j=1}^n c_ij(t) f_j(v_j(t))
    + ∧_{j=1}^n α_ij(t)g_j(t) + ∨_{j=1}^n β_ij(t)g_j(t) + I_i(t),
v̇_j(t) = −d_j(t)v_j(t) + ∧_{i=1}^m e_ji(t) f_i(u_i(t)) + ∨_{i=1}^m w_ji(t) f_i(u_i(t))
    + ∧_{i=1}^m γ_ji(t)h_i(t) + ∨_{i=1}^m δ_ji(t)h_i(t) + J_j(t), (2.1)

for i ∈ {1, 2, . . . , m}, j ∈ {1, 2, . . . , n}, t ≥ 0, where u_i(t) and v_j(t) denote the
activations of the ith and jth neurons, and g_j(t) and h_i(t) denote the states,
respectively; a_i(t) and d_j(t) are positive constants, while f_k (k = 1, 2, . . . ,
max(m, n)) are the activation functions; b_ij(t) and e_ji(t) are elements of the fuzzy
feedback MIN template, and c_ij(t) and w_ji(t) of the fuzzy feedback MAX template;
α_ij(t) and γ_ji(t) stand for the fuzzy feed-forward MIN template, and β_ij(t) and
δ_ji(t) for the fuzzy feed-forward MAX template at time t; ∧ and ∨ denote the fuzzy
AND and fuzzy OR operations, respectively; I_i and J_j denote the external inputs.
To draw our conclusion, we propose the following assumption.

Assumption 2.1 The neuron activation functions in (2.1) satisfy f_z(0) = 0 and are
globally Lipschitz continuous, i.e., there exist positive constants λ_z fulfilling

|f_z(x) − f_z(y)| ≤ λ_z |x − y|

for all x, y ∈ R and z = 1, 2, . . . , max(m, n).

Now, based on the fuzzy logical BAM neural networks of model (2.1), we discuss
the exponential stability of fuzzy logical BAM neural networks with Markovian
jumping parameters.
In this section, we consider the following fuzzy logical neural networks with
Markovian jumping parameters, which is actually a modification of (2.1):

u̇_i(t, r(t)) = −a_i(r(t))u_i(t) + ∧_{j=1}^n b_ij(r(t)) f_j(v_j(t)) + ∨_{j=1}^n c_ij(r(t)) f_j(v_j(t))
    + ∧_{j=1}^n α_ij(r(t))g_j(t) + ∨_{j=1}^n β_ij(r(t))g_j(t) + I_i(t),
v̇_j(t, r(t)) = −d_j(r(t))v_j(t) + ∧_{i=1}^m e_ji(r(t)) f_i(u_i(t)) + ∨_{i=1}^m w_ji(r(t)) f_i(u_i(t))
    + ∧_{i=1}^m γ_ji(r(t))h_i(t) + ∨_{i=1}^m δ_ji(r(t))h_i(t) + J_j(t), (2.2)

where {r(t), t ≥ 0} is a homogeneous finite-state Markovian process with
right-continuous trajectories on the probability space which takes values in the
finite space S = {1, 2, . . . , S} with generator Γ = (θ_ηη′) (η, η′ ∈ S). Then we shall
work on the network model r(t) = η for each η ∈ S.
Suppose the vector

L(t) = (l_1(t), l_2(t), . . . , l_{m+n}(t))^T = (u_1(t), u_2(t), . . . , u_m(t), v_1(t), . . . , v_n(t))^T.

For any L ∈ R^{m+n}, we define the norm

‖L(t)‖ = max_{1≤i≤m, 1≤j≤n} ( sup_{t∈R} |u_i(t)|, sup_{t∈R} |v_j(t)| ).

Set B = {L | L = (u_1, . . . , u_m, v_1, . . . , v_n)^T}. For any L ∈ B, we define its
induced norm as

‖L‖ = ‖L(t)‖ = max_{1≤i≤m, 1≤j≤n} ( sup_{t∈R} |u_i(t)|, sup_{t∈R} |v_j(t)| ),

so that B is a Banach space.
For any φ, ϕ ∈ B, we denote the solutions of system (2.2) through (0, φ) and
(0, ϕ) by

L(t, r(t), φ) = (u_1(t, η, φ), u_2(t, η, φ), . . . , u_m(t, η, φ), v_1(t, η, φ), . . . , v_n(t, η, φ))^T,
L(t, r(t), ϕ) = (u_1(t, η, ϕ), u_2(t, η, ϕ), . . . , u_m(t, η, ϕ), v_1(t, η, ϕ), . . . , v_n(t, η, ϕ))^T,

where r(t) = η ∈ S, respectively.

Definition 2.2 The system (2.2) is globally exponentially stable if there exist
positive constants k and σ satisfying

‖L(t, η, φ) − L(t, η, ϕ)‖ ≤ σ‖φ − ϕ‖e^{−kt}

for all r(t) = η ∈ S and t ≥ 0.

Lemma 2.3 Suppose l and l′ are two states of system (2.2). Then the following
inequalities hold for all r(t) = η ∈ S:

| ∧_{j=1}^n τ_ij f_j(l_j) − ∧_{j=1}^n τ_ij f_j(l′_j) | ≤ Σ_{j=1}^n |τ_ij| · |f_j(l_j) − f_j(l′_j)|,
| ∨_{j=1}^n ζ_ij f_j(l_j) − ∨_{j=1}^n ζ_ij f_j(l′_j) | ≤ Σ_{j=1}^n |ζ_ij| · |f_j(l_j) − f_j(l′_j)|.
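
Reading ∧ and ∨ as coordinatewise min and max, Lemma 2.3 can be spot-checked
numerically; in the Python sketch below the template entries, the two states, and the
choice f = tanh are all illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(3)

tau = rng.standard_normal(5)          # template entries tau_ij for one fixed i
l, lp = rng.standard_normal(5), rng.standard_normal(5)   # two states l and l'
f = np.tanh                           # illustrative Lipschitz activation

lhs_and = abs(np.min(tau * f(l)) - np.min(tau * f(lp)))  # fuzzy AND = min
lhs_or = abs(np.max(tau * f(l)) - np.max(tau * f(lp)))   # fuzzy OR  = max
bound = np.sum(np.abs(tau) * np.abs(f(l) - f(lp)))
print(lhs_and <= bound, lhs_or <= bound)                 # True True
```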

2.1.3 Main Results

In this section, we will discuss the global exponential stability of fuzzy logical BAM
neural networks with Markovian jumping parameters. A new sufficient criterion will
be proposed to prove the exponential stability of the model.
Theorem 2.4 If there exist a positive scalar k > 0 and a positive definite matrix
P_η > 0 such that the following linear matrix inequality holds:

k P_η − P_η W_η + G_η E_η P_η < 0, (2.3)

then the system (2.2) is globally exponentially stable for any r(t) = η (∀η ∈ S),
where G_η = diag(λ_1, . . . , λ_{m+n}), W_η = diag(a_1(η), . . . , a_m(η), d_1(η), . . . , d_n(η)),
E_1 = (|b_ij(η)| + |c_ij(η)|)_{n×m}, E_2 = (|e_ji(η)| + |w_ji(η)|)_{n×m}, and

E_η = [ 0    E_2 ]
      [ E_1  0   ].

Proof To prove our conclusion, we denote that

l(t, r (t)) = L(t, r (t), φ) − L(t, r (t), ϕ),



then we can obtain from (2.2) that

l̇_i(t, r(t)) = −a_i(r(t))l_i(t, r(t))
    + [∧_{j=1}^n b_ij(r(t)) f_j(v_j(t, φ)) − ∧_{j=1}^n b_ij(r(t)) f_j(v_j(t, ϕ))]
    + [∨_{j=1}^n c_ij(r(t)) f_j(v_j(t, φ)) − ∨_{j=1}^n c_ij(r(t)) f_j(v_j(t, ϕ))],
l̇_{m+j}(t, r(t)) = −d_j(r(t))l_{m+j}(t, r(t))
    + [∧_{i=1}^m e_ji(r(t)) f_i(u_i(t, φ)) − ∧_{i=1}^m e_ji(r(t)) f_i(u_i(t, ϕ))]
    + [∨_{i=1}^m w_ji(r(t)) f_i(u_i(t, φ)) − ∨_{i=1}^m w_ji(r(t)) f_i(u_i(t, ϕ))]. (2.4)

For the sake of discussing the global exponential stability of system (2.2), we
consider the following Lyapunov-Krasovskii functional:

V(t, l(t), η) = e^{2kt} ( Σ_{i=1}^m P_i(η)l_i²(t) + Σ_{j=1}^n P_{m+j}(η)l²_{m+j}(t) ).

Let L be the weak infinitesimal generator of the random process {l(t), r(t), t ≥ 0}.
Then, for each r(t) = η ∈ S, we can obtain

LV(t, l(t), η) = 2ke^{2kt} Σ_{i=1}^m P_i(η)l_i²(t) + 2e^{2kt} Σ_{i=1}^m P_i(η)l_i(t)l̇_i(t)
    + 2e^{2kt} ( k Σ_{j=1}^n P_{m+j}(η)l²_{m+j}(t) + Σ_{j=1}^n P_{m+j}(η)l_{m+j}(t)l̇_{m+j}(t) )
    + Σ_{η′=1}^S θ_{ηη′} e^{2kt} ( Σ_{i=1}^m P_i(η′)l_i²(t) + Σ_{j=1}^n P_{m+j}(η′)l²_{m+j}(t) )

= 2ke^{2kt} Σ_{i=1}^{m+n} P_i(η)l_i²(t)
    + 2e^{2kt} Σ_{i=1}^m P_i(η)l_i(t){ −a_i(η)l_i(t)
        + [∧_{j=1}^n b_ij(η) f_j(v_j(t, φ)) − ∧_{j=1}^n b_ij(η) f_j(v_j(t, ϕ))]
        + [∨_{j=1}^n c_ij(η) f_j(v_j(t, φ)) − ∨_{j=1}^n c_ij(η) f_j(v_j(t, ϕ))] }
    + 2e^{2kt} Σ_{j=1}^n P_{m+j}(η)l_{m+j}(t){ −d_j(η)l_{m+j}(t)
        + [∧_{i=1}^m e_ji(η) f_i(u_i(t, φ)) − ∧_{i=1}^m e_ji(η) f_i(u_i(t, ϕ))]
        + [∨_{i=1}^m w_ji(η) f_i(u_i(t, φ)) − ∨_{i=1}^m w_ji(η) f_i(u_i(t, ϕ))] }

= 2ke^{2kt} Σ_{i=1}^{m+n} P_i(η)l_i²(t)
    + 2e^{2kt} { −Σ_{i=1}^m a_i(η)P_i(η)l_i²(t) − Σ_{j=1}^n d_j(η)P_{m+j}(η)l²_{m+j}(t)
        + Σ_{i=1}^m P_i(η)l_i(t)[∧_{j=1}^n b_ij(η) f_j(v_j(t, φ)) − ∧_{j=1}^n b_ij(η) f_j(v_j(t, ϕ))]
        + Σ_{i=1}^m P_i(η)l_i(t)[∨_{j=1}^n c_ij(η) f_j(v_j(t, φ)) − ∨_{j=1}^n c_ij(η) f_j(v_j(t, ϕ))]
        + Σ_{j=1}^n P_{m+j}(η)l_{m+j}(t)[∧_{i=1}^m e_ji(η) f_i(u_i(t, φ)) − ∧_{i=1}^m e_ji(η) f_i(u_i(t, ϕ))]
        + Σ_{j=1}^n P_{m+j}(η)l_{m+j}(t)[∨_{i=1}^m w_ji(η) f_i(u_i(t, φ)) − ∨_{i=1}^m w_ji(η) f_i(u_i(t, ϕ))] }

≤ 2ke^{2kt} Σ_{i=1}^{m+n} P_i(η)l_i²(t)
    + 2e^{2kt} { −Σ_{i=1}^m a_i(η)P_i(η)l_i²(t) − Σ_{j=1}^n d_j(η)P_{m+j}(η)l²_{m+j}(t)
        + Σ_{i=1}^m P_i(η)l_i(t)[ Σ_{j=1}^n (|b_ij(η)| + |c_ij(η)|) · |f_j(v_j(t, φ)) − f_j(v_j(t, ϕ))| ]
        + Σ_{j=1}^n P_{m+j}(η)l_{m+j}(t)[ Σ_{i=1}^m (|e_ji(η)| + |w_ji(η)|) · |f_i(u_i(t, φ)) − f_i(u_i(t, ϕ))| ] }

≤ 2ke^{2kt} Σ_{i=1}^{m+n} P_i(η)l_i²(t)
    + 2e^{2kt} { −Σ_{i=1}^m a_i(η)P_i(η)l_i²(t) − Σ_{j=1}^n d_j(η)P_{m+j}(η)l²_{m+j}(t)
        + Σ_{i=1}^m P_i(η)l_i(t)[ Σ_{j=1}^n (|b_ij(η)| + |c_ij(η)|) · λ_{m+j} · |l_{m+j}(t)| ]
        + Σ_{j=1}^n P_{m+j}(η)l_{m+j}(t)[ Σ_{i=1}^m (|e_ji(η)| + |w_ji(η)|) · λ_i · |l_i(t)| ] }

≤ 2e^{2kt} |l^T(t)| (k P_η − P_η W_η + G_η E_η P_η) |l(t)|.

Since k Pη − Pη Wη + G η E η Pη < 0, then we have

LV (t, l(t), r (t) = η) < 0.

That is to say, for each r (t) = η ∈ S, we can conclude that

V (l(t)) ≤ V (l(0)) = l T (0)Pη l(0) ≤ λ M (Pη )||φ − ϕ||2 ,

where λ M (Pη ) = max{λ1 , λ2 , . . . , λm+n }.


On the other hand, the following inequality holds for each r(t) = η ∈ S:

V (t, l(t), r (t) = η) ≥ e2kt λm (Pη )||l(t)||2 ,

where λm (Pη ) = min{λ1 , λ2 , . . . , λm+n }.


Hence, we have

e2kt λm (Pη )||l(t)||2 ≤ λ M (Pη )||φ − ϕ||2 ,

which is equivalent to

||L(t, η, φ) − L(t, η, ϕ)|| ≤ √(λ_M(P_η)/λ_m(P_η)) · ||φ − ϕ|| e^{−kt}.

By Definition 2.2, we can draw the conclusion that the system (2.2) is globally exponentially stable for all r(t) = η ∈ S and t ≥ 0.

Remark 2.5 The conclusion holds only under Assumption 2.1; that is, the activation functions must satisfy Lipschitz conditions. The FLBAM model is different from the T–S fuzzy BAM model, which has been investigated in [3].

Remark 2.6 Note that (2.3) is a linear matrix inequality, which can be solved by using the Matlab LMI toolbox. The matrix is relatively simple because we have not taken time delay into account. In general, time delays exist in many systems, but in our model we ignore them for convenience. A numerical feasibility check is sketched below.
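For illustration, a minimal numerical sketch of such a feasibility check is given below, assuming Python with cvxpy in place of the Matlab LMI toolbox used in the book. For a fixed k, (2.3) is an LMI in P_η; since the expression k P_η − P_η W_η + G_η E_η P_η need not be symmetric, the sketch imposes negative definiteness on its symmetrized version, and a bisection over k can then search for the largest feasible decay rate.

```python
# Feasibility check of (2.3) for a fixed decay rate k -- a sketch with cvxpy,
# using the data of the numerical example in Sect. 2.1.4.
import numpy as np
import cvxpy as cp

n = 4
W = 4.5 * np.eye(n)                                      # W_eta
G = np.eye(n)                                            # G_eta
E = np.block([[2 * np.ones((2, 2)), np.zeros((2, 2))],
              [np.zeros((2, 2)), 2 * np.ones((2, 2))]])  # E_eta as printed in Sect. 2.1.4
k = 1.0                                                  # fixed k; bisect over k for the maximum

P = cp.Variable((n, n), symmetric=True)
M = k * P - P @ W + G @ E @ P
prob = cp.Problem(cp.Minimize(0),
                  [P >> np.eye(n),                       # P_eta > 0 (normalized)
                   M + M.T << -1e-6 * np.eye(n)])        # symmetrized version of (2.3)
prob.solve()
print(prob.status)                                       # 'optimal' means (2.3) is feasible
```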

2.1.4 Numerical Examples

In this section, a numerical example is given to demonstrate the feasibility of the proposed results.
Consider the following fuzzy logical BAM neural network with Markovian jumping parameters:

u̇_i(t, η) = −a_i(η) u_i(t) + ∧_{j=1}^2 b_ij(η) f_j(v_j(t)) + ∨_{j=1}^2 c_ij(η) f_j(v_j(t))
    + ∧_{j=1}^2 α_ij(η) g_j(t) + ∨_{j=1}^2 β_ij(η) g_j(t) + I_i(t),

v̇_j(t, η) = −d_j(η) v_j(t) + ∧_{i=1}^2 e_ji(η) f_i(u_i(t)) + ∨_{i=1}^2 w_ji(η) f_i(u_i(t))
    + ∧_{i=1}^2 γ_ji(η) h_i(t) + ∨_{i=1}^2 δ_ji(η) h_i(t) + J_j(t),

where a_1 = a_2 = d_1 = d_2 = 4.5, b = c = e = w = [1 1; 1 1], and α = β = γ = δ = [0.5 0.5; 0.5 0.5].
We take the activation functions as

f_i(x) = (1/2)(|x + 1| − |x − 1|),  i = 1, 2.
To satisfy Assumption 2.1, we take λ_i = 1 (i = 1, 2, 3, 4), since each f_i has Lipschitz constant 1. Thus, with the numerical values above, we obtain the matrices W_η, G_η and E_η as follows:

W_η = [ 4.5 0 0 0
        0 4.5 0 0
        0 0 4.5 0
        0 0 0 4.5 ],   G_η = [ 1 0 0 0
                                0 1 0 0
                                0 0 1 0
                                0 0 0 1 ],

E_η = [ 2 2 0 0
        2 2 0 0
        0 0 2 2
        0 0 2 2 ].
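To visualize trajectories such as those in Figs. 2.1 and 2.2, the example system can be integrated directly. The sketch below is a minimal forward-Euler simulation under the assumption that the external inputs g_j, h_i, I_i, J_j (left unspecified above) are zero and that a single Markov mode is active; the fuzzy operators ∧ and ∨ become componentwise min and max.

```python
# Minimal forward-Euler simulation sketch of the example FLBAM network
# (single mode; external inputs assumed zero since they are not specified).
import numpy as np

a = d = 4.5
b = c = e = w = np.ones((2, 2))
f = lambda x: 0.5 * (np.abs(x + 1) - np.abs(x - 1))   # the activation above

def rhs(state):
    u, v = state[:2], state[2:]
    du = -a * u + np.min(b * f(v), axis=1) + np.max(c * f(v), axis=1)  # ∧ -> min, ∨ -> max
    dv = -d * v + np.min(e * f(u), axis=1) + np.max(w * f(u), axis=1)
    return np.concatenate([du, dv])

x = np.array([4.0, 2.0, -2.0, -4.0])                  # initial condition of Fig. 2.1
dt = 1e-3
for _ in range(int(5.0 / dt)):                        # t in [0, 5] as in the figures
    x = x + dt * rhs(x)
print(x)                                              # decays toward the origin
```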

Fig. 2.1 State trajectory of the system with initial conditions (4, 2, −2, −4) (curves u1(t), u2(t), v1(t), v2(t) over t ∈ [0, 5])

Fig. 2.2 State trajectory of the system with initial conditions (2, 1, −1, −2) (curves u1(t), u2(t), v1(t), v2(t) over t ∈ [0, 5])

By using the Matlab LMI Toolbox, we can solve the LMI (2.3); the solutions are

k = 14.1,   P_η = [ 15.3 5.6 8.0 8.0
                    5.6 15.3 8.0 8.0
                    8.0 8.0 15.3 5.6
                    8.0 8.0 5.6 15.3 ].

By Theorem 2.4, the system is globally exponentially stable. For this example, Figs. 2.1 and 2.2 show the trajectories of the system under different initial conditions: (4, 2, −2, −4) for Fig. 2.1 and (2, 1, −1, −2) for Fig. 2.2. The simulation results confirm that the system is globally exponentially stable.

2.1.5 Conclusion

In this section, we have investigated the global exponential stability of fuzzy logical BAM neural networks with Markovian jumping parameters, which had not received enough attention. Based on the Lyapunov functional approach and linear matrix inequalities, a new sufficient stability criterion has been derived, which can be tested by using the Matlab LMI Toolbox. A numerical example has been developed to demonstrate the proposed results.

2.2 Synchronization Control of Stochastically Coupled DNN

2.2.1 Introduction

In the past two decades, delayed neural networks (DNNs) have received considerable attention from researchers in different fields. As is known, DNNs often present complex and unpredictable behaviors in practice, beyond the traditional stability and periodic oscillation that were investigated a great deal in past years. Recently, the synchronization problem of complex dynamical networks [5–9, 13, 17, 18, 27, 35, 38], such as the synchronization of DNNs, has become the latest focus of attention.
Thanks to the efforts of earlier researchers, several results on neural network synchronization have been proposed in the literature. For example, in Ref. [24], synchronization of coupled delayed neural networks was studied for the first time. Further studies in this field have appeared in recent years [14–16, 22, 26, 30, 34]. Wang and Cao studied synchronization in an array of linearly coupled networks with time-varying delay [27], and synchronization in an array of linearly stochastically coupled networks with time delays [7], respectively. In Ref. [6], via the Lyapunov functional method and LMI approach, synchronization control of stochastic neural networks with time-varying delays was investigated, and the controller gains ensuring synchronization were obtained. In addition, in Ref. [16], the global exponential synchronization of coupled connected neural networks with delays was investigated and a sufficient condition was derived by using the LMI approach. Meanwhile, through the stability theory for impulsive functional differential equations, new criteria guaranteeing the robust synchronization of coupled networks via impulsive control were derived in Ref. [26]. And, in Ref. [30], on the basis of Lyapunov stability theory, time-delay feedback control and other techniques, the exponential synchronization problem of a class of stochastic perturbed chaotic delayed neural networks was considered.
It is well known that time delays are often encountered in many kinds of neural networks and can be sources of oscillation and instability [25, 28, 29, 31–33]. However, in the literature mentioned above, only discrete time delays have been considered. Another important type of delay, namely the distributed time delay, has not attracted wide attention. Ref. [31] pointed out that there is usually a spatial extent in neural networks due to the presence of many parallel pathways with a variety of axon sizes and lengths, so a distribution of propagation delays appears over a period of time. Although signal transmission is sometimes immediate and can be modeled with discrete delays, it may be distributed over a certain time period [29]. Hence, a realistic neural network model often includes both discrete and distributed delays [23].
Cao and Wang [7] investigated the synchronization of linearly stochastically coupled networks via a simple adaptive feedback control scheme, considering the influence of noise and discrete time delays. In Ref. [6], synchronization of stochastic neural networks with discrete time delays was studied using the LMI approach. Motivated by this recent literature, and in order to model a more realistic and comprehensive network, we consider the synchronization of linearly stochastically coupled networks with both discrete and distributed time delays.
In this section, we aim to study the synchronization problem in an array of linearly stochastically coupled neural networks with discrete and distributed time delays. By employing the Lyapunov–Krasovskii functional method and the LMI approach, we give several new criteria that ensure the complete synchronization of the system. At the same time, the gains of the delayed feedback controller are obtained. Then, an illustrative example is provided to demonstrate the effectiveness of our results. Finally, we conclude the section.

2.2.2 Problem Formulation

In Ref. [7], an array of linearly stochastically coupled identical neural networks with time delays was considered by Cao and Wang as follows:

dx_i(t) = [−C x_i(t) + A f(x_i(t)) + B f(x_i(t − τ))] dt + c_i Σ_{j=1}^N G_ij Γ x_j(t) dW_i1(t) + d_i Σ_{j=1}^N G_ij Γ_τ x_j(t − τ) dW_i2(t) + U_i dt,  i = 1, 2, ..., N,  (2.5)

where x_i(t) = [x_i1(t), x_i2(t), ..., x_in(t)]^T ∈ R^n (i = 1, 2, ..., N) is the state vector associated with the ith DNN; f(x_i(t)) = [f_1(x_i1(t)), f_2(x_i2(t)), ..., f_n(x_in(t))]^T ∈ R^n collects the activation functions of the neurons, with f(0) = 0; C = diag{c_1, c_2, ..., c_n} > 0 is a diagonal matrix giving the rate at which the ith unit resets its potential to the resting state in isolation when disconnected from the external inputs and the network; A = (a_ij)_{n×n} and B = (b_ij)_{n×n} stand for, respectively, the connection weight matrix and the discretely delayed connection weight matrix; W_i = [W_i1, W_i2]^T are two-dimensional Brownian motions; Γ ∈ R^{n×n} and Γ_τ ∈ R^{n×n} denote the internal coupling of the network at time t and t − τ, where τ > 0 is the time delay; c_i and d_i indicate the intensity of the noise; U_i is the control input; and G = (G_ij)_{N×N} describes the topological structure and the coupling strength of the network and satisfies the following condition [27]:

G_ii = − Σ_{j=1, j≠i}^N G_ij.  (2.6)
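Condition (2.6) simply says that each row of G sums to zero (diffusive coupling). As a trivial sketch, a matrix satisfying (2.6) can be built from any nonnegative off-diagonal topology; the ring topology below is the one used later in Sect. 2.2.4.

```python
# Building a coupling matrix satisfying (2.6): the diagonal entry of each row
# is set to minus the sum of its off-diagonal entries.
import numpy as np

def diffusive_coupling(adj):
    G = np.array(adj, dtype=float)
    np.fill_diagonal(G, 0.0)                   # ignore any given diagonal
    np.fill_diagonal(G, -G.sum(axis=1))        # enforce G_ii = -sum_{j != i} G_ij
    return G

ring = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(diffusive_coupling(ring))                # the G of Sect. 2.2.4
```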

Although linearly stochastically coupled neural networks have been investigated comparatively in depth, only the discrete time delay has been considered. So, in order to model more realistic and comprehensive stochastically coupled DNNs, a novel model is presented as follows:

dx_i(t) = [−C x_i(t) + A f(x_i(t)) + B f(x_i(t − τ)) + W ∫_{t−τ}^t f(x_i(s)) ds] dt + c_i Σ_{j=1}^N G_ij Γ x_j(t) dW_i1(t) + d_i Σ_{j=1}^N G_ij Γ_τ x_j(t − τ) dW_i2(t) + U_i dt,  i = 1, 2, ..., N,  (2.7)

where W = (w_ij)_{n×n} is the distributively delayed connection weight matrix. The initial states corresponding to model (2.7) are given as follows: for any ϕ_i ∈ L²_{F0}([−τ, 0]; R^n), we have x_i(t) = ϕ_i(t), i = 1, 2, ..., N, where −τ ≤ t ≤ 0.
Remark 2.7 It is obvious that both the discrete and distributed time delays are considered in the new model (2.7). Thus, the model is more realistic and comprehensive than (2.5). To the best of the authors' knowledge, it is the first time that the synchronization problem of stochastically coupled identical neural networks with discrete and distributed time delays has been studied. In order to derive our results, the following assumption is needed:
Assumption 2.8 The activation functions f_i(u) are bounded and satisfy the Lipschitz condition

|f_i(u) − f_i(v)| ≤ β_i |u − v|, ∀u, v ∈ R, i = 1, 2, ..., n,  (2.8)

where β_i > 0 is a constant.


Remark 2.9 Throughout this literature f i (u), the activation functions of the neurons,
are always supposed to be continuous, differentiable and nondecreasing. And we only
need the Lipschitz condition and boundedness to be satisfied. Actually, we can see
this type of activation functions in many papers, such as Refs. [7, 28] etc.
Definition 2.10 Suppose that x_i(t; t*, X*) is the solution of model (2.7), where X* = (x_1*, x_2*, ..., x_N*), and r(t) ∈ R^n is the response of an isolated node:

dr(t) = [−C r(t) + A f(r(t)) + B f(r(t − τ)) + W ∫_{t−τ}^t f(r(η)) dη] dt.  (2.9)

If there exists a nonempty subset Ψ ⊆ R^n, with x_i* ∈ Ψ, such that for any t ≥ 0 we have x_i(t; t*, X*) ∈ R^n and

lim_{t→∞} E ||x_i(t; t*, X*) − r(t; t*, x_0)||² = 0,  (2.10)

where i = 1, 2, ..., N and x_0 ∈ R^n, then the DNN model (2.7) is said to achieve synchronization.

Next, we denote by e_i(t) = x_i(t) − r(t) the error signal. From (2.7), (2.9) and (2.6), the error system is easily obtained as

de_i(t) = [−C e_i(t) + A g(e_i(t)) + B g(e_i(t − τ)) + W ∫_{t−τ}^t g(e_i(s)) ds] dt + c_i Σ_{j=1}^N G_ij Γ e_j(t) dW_i1(t) + d_i Σ_{j=1}^N G_ij Γ_τ e_j(t − τ) dW_i2(t) + U_i dt,  i = 1, 2, ..., N,  (2.11)

where g(e_i(t)) = f(e_i(t) + r(t)) − f(r(t)) and g(e_i(t − τ)) = f(e_i(t − τ) + r(t − τ)) − f(r(t − τ)). From (2.8) and g(0) = 0, it is obvious that

||g(e_i(t))|| ≤ ||M e_i(t)||,  (2.12)

where M = diag{β_1, β_2, β_3, ..., β_n} > 0 is a known constant matrix.


Considering make the controller more appropriate and realistic, we design a
delayed feedback controller of the following form:

Ui = K 1 ei (t) + K 2 ei (t − τ ) (2.13)

where K 1 ∈ n×n and K 2 ∈ n×n are constant gain matrices.

Remark 2.11 As proposed in Ref. [6], in many real applications the memoryless state-feedback controller U_i = K e_i(t) is more popular, since it has the advantage of easy implementation, but its performance is not as good as that of (2.13). Although U_i = K e_i(t) + ∫_{t−τ}^t K_1 e_i(s) ds is a more general form of delayed feedback controller, it is difficult to handle all the initial states of e_i(t). The controller (2.13) is thus a compromise between better performance and simple implementation, and we design the controller as (2.13) shows.

Definition 2.12 If the error signal satisfies

lim_{t→∞} E ||e_i(t)||² = 0,  i = 1, 2, ..., N,  (2.14)

then the error system (2.11) is globally asymptotically stable in mean square.

2.2.3 Main Results and Proofs

In this section, by using a properly designed delayed feedback controller, we present a new criterion for the synchronization of stochastically coupled neural networks with discrete and distributed time delays on the basis of the Lyapunov–Krasovskii functional approach.
In order to simplify the description, we denote:

Π_11 = P(−C + K_1) + (−C + K_1)^T P + Q_1 + (1 − σ_i)^{-1} τ² M^T M + c N² Λ λ_max(Γ^T Γ),  (2.15)

Π_22 = M^T M + d N² Λ λ_max(Γ_τ^T Γ_τ) − Q_1,  (2.16)

Ω = P A A^T P + M^T M + P B B^T P + P W W^T P.  (2.17)

Theorem 2.13 Let 0 < σ_i < 1 (i = 1, 2, ..., N) be any given constants. If there exist positive definite matrices P = (p_ij)_{n×n} and Q_1 = (q_ij)_{n×n} such that the following matrix inequality holds:

N = [ Π_11  P K_2  P A  M^T  P B  P W
       *    Π_22   0    0    0    0
       *     *    −I    0    0    0
       *     *     *   −I    0    0
       *     *     *    *   −I    0
       *     *     *    *    *   −I ] < 0,  (2.18)

where Π_11 and Π_22 are defined in (2.15) and (2.16), respectively, then the error system (2.11) is globally asymptotically stable in mean square.

Proof Define the Lyapunov–Krasovskii functional candidate V(t, e_i(t)) by

V(t, e_i(t)) = Σ_{i=1}^N e_i^T(t) P e_i(t) + Σ_{i=1}^N ∫_{t−τ}^t e_i^T(s) Q_1 e_i(s) ds + Σ_{i=1}^N ∫_{−τ}^0 ∫_{t+s}^t e_i^T(η) Q_2 e_i(η) dη ds,  (2.19)

where P = (p_ij)_{n×n} and Q_1 = (q_ij)_{n×n} are positive definite matrices to be determined, and Q_2 ≥ 0 is given by

Q_2 = (1 − σ_i)^{-1} τ M^T M.  (2.20)
2.2 Synchronization Control of Stochastically Coupled DNN 27

By I t ô differential formula, the stochastic derivative of V (t, ei (t)) along error system
(2.11) can be obtained as follows:


N 
N
d V (t, ei (t)) = LV (t, ei (t))dt + 2eiT (t)P ⎣ci G i j Γ x j (t)dWi1 (t)
i=1 j=1


N
+ di G i j Γτ x j (t − τ )dWi2 (t)⎦ , (2.21)
j=1

where the weak infinitesimal operator LV of the stochastic process is given by

LV(t, e_i(t)) = Σ_{i=1}^N 2 e_i^T(t) P [ −C e_i(t) + A g(e_i(t)) + B g(e_i(t − τ)) + K_1 e_i(t) + K_2 e_i(t − τ) + W ∫_{t−τ}^t g(e_i(s)) ds ]
  + Σ_{i=1}^N [ e_i^T(t)(Q_1 + τ Q_2) e_i(t) − e_i^T(t − τ) Q_1 e_i(t − τ) − ∫_{t−τ}^t e_i^T(s) Q_2 e_i(s) ds ]
  + Σ_{i=1}^N c_i² [ Σ_{j=1}^N G_ij Γ e_j(t) ]^T [ Σ_{j=1}^N G_ij Γ e_j(t) ]
  + Σ_{i=1}^N d_i² [ Σ_{j=1}^N G_ij Γ_τ e_j(t − τ) ]^T [ Σ_{j=1}^N G_ij Γ_τ e_j(t − τ) ]

= Σ_{i=1}^N { 2 [ e_i^T(t) P(−C + K_1) e_i(t) + e_i^T(t) P K_2 e_i(t − τ) + e_i^T(t) P A g(e_i(t)) + e_i^T(t) P B g(e_i(t − τ)) + e_i^T(t) P W ∫_{t−τ}^t g(e_i(s)) ds ]
  + e_i^T(t)(Q_1 + τ Q_2) e_i(t) − e_i^T(t − τ) Q_1 e_i(t − τ) − ∫_{t−τ}^t e_i^T(s) Q_2 e_i(s) ds
  + c_i² [ Σ_{j=1}^N G_ij Γ e_j(t) ]^T [ Σ_{j=1}^N G_ij Γ e_j(t) ] + d_i² [ Σ_{j=1}^N G_ij Γ_τ e_j(t − τ) ]^T [ Σ_{j=1}^N G_ij Γ_τ e_j(t − τ) ] }.  (2.22)

Then, from relation (2.12) and Lemma 1.13, we can obtain

e_i^T(t) P A g(e_i(t)) ≤ (1/2) e_i^T(t) P A A^T P e_i(t) + (1/2) g^T(e_i(t)) g(e_i(t))
  ≤ (1/2) e_i^T(t) P A A^T P e_i(t) + (1/2) e_i^T(t) M^T M e_i(t),  (2.23)

e_i^T(t) P B g(e_i(t − τ)) ≤ (1/2) e_i^T(t) P B B^T P e_i(t) + (1/2) g^T(e_i(t − τ)) g(e_i(t − τ))
  ≤ (1/2) e_i^T(t) P B B^T P e_i(t) + (1/2) e_i^T(t − τ) M^T M e_i(t − τ),  (2.24)

e_i^T(t) P W ∫_{t−τ}^t g(e_i(s)) ds ≤ (1/2) e_i^T(t) P W W^T P e_i(t) + (1/2) ( ∫_{t−τ}^t g(e_i(s)) ds )^T ( ∫_{t−τ}^t g(e_i(s)) ds ),  (2.25)

where M = diag{β_1, β_2, ..., β_n} is a known constant matrix. Moreover, it can be seen from Lemma 1.20, (2.12) and (2.20) that

(1/2) ( ∫_{t−τ}^t g(e_i(s)) ds )^T ( ∫_{t−τ}^t g(e_i(s)) ds ) ≤ (1/2) τ ∫_{t−τ}^t g^T(e_i(s)) g(e_i(s)) ds
  ≤ (1/2) τ ∫_{t−τ}^t e_i^T(s) M^T M e_i(s) ds = (1/2)(1 − σ_i) ∫_{t−τ}^t e_i^T(s) Q_2 e_i(s) ds.  (2.26)

Hence, from (2.25) and (2.26), we have

e_i^T(t) P W ∫_{t−τ}^t g(e_i(s)) ds ≤ (1/2) e_i^T(t) P W W^T P e_i(t) + (1/2)(1 − σ_i) ∫_{t−τ}^t e_i^T(s) Q_2 e_i(s) ds.  (2.27)

Next, we estimate the two following terms:

[ Σ_{j=1}^N G_ij Γ e_j(t) ]^T [ Σ_{j=1}^N G_ij Γ e_j(t) ] ≤ N Σ_{j=1}^N G_ij² e_j^T(t) Γ^T Γ e_j(t) ≤ N Σ_{j=1}^N G_ij² λ_max(Γ^T Γ) e_j^T(t) e_j(t),  (2.28)

[ Σ_{j=1}^N G_ij Γ_τ e_j(t − τ) ]^T [ Σ_{j=1}^N G_ij Γ_τ e_j(t − τ) ] ≤ N Σ_{j=1}^N G_ij² e_j^T(t − τ) Γ_τ^T Γ_τ e_j(t − τ) ≤ N Σ_{j=1}^N G_ij² λ_max(Γ_τ^T Γ_τ) e_j^T(t − τ) e_j(t − τ).  (2.29)

Therefore, applying (2.23), (2.24) and (2.27)–(2.29) to (2.22) yields

LV(t, e_i(t)) ≤ Σ_{i=1}^N { 2 e_i^T(t) P(−C + K_1) e_i(t) + 2 e_i^T(t) P K_2 e_i(t − τ) + e_i^T(t) P A A^T P e_i(t) + e_i^T(t) M^T M e_i(t) + e_i^T(t) P B B^T P e_i(t) + e_i^T(t − τ) M^T M e_i(t − τ) + e_i^T(t) P W W^T P e_i(t) + (1 − σ_i) ∫_{t−τ}^t e_i^T(s) Q_2 e_i(s) ds + e_i^T(t)(Q_1 + τ Q_2) e_i(t) − e_i^T(t − τ) Q_1 e_i(t − τ) − ∫_{t−τ}^t e_i^T(s) Q_2 e_i(s) ds + c N Λ λ_max(Γ^T Γ) Σ_{j=1}^N e_j^T(t) e_j(t) + d N Λ λ_max(Γ_τ^T Γ_τ) Σ_{j=1}^N e_j^T(t − τ) e_j(t − τ) }

= Σ_{i=1}^N { e_i^T(t) [ 2P(−C + K_1) + P A A^T P + M^T M + P B B^T P + P W W^T P + Q_1 + τ Q_2 + c N² Λ λ_max(Γ^T Γ) ] e_i(t) + 2 e_i^T(t) P K_2 e_i(t − τ) + e_i^T(t − τ) [ M^T M + d N² Λ λ_max(Γ_τ^T Γ_τ) − Q_1 ] e_i(t − τ) − σ_i ∫_{t−τ}^t e_i^T(s) Q_2 e_i(s) ds }

≤ Σ_{i=1}^N [ e_i^T(t)  e_i^T(t − τ) ] [ Π_11 + Ω   P K_2
                                          K_2^T P    Π_22 ] [ e_i(t) ; e_i(t − τ) ]

= Σ_{i=1}^N [ e_i^T(t)  e_i^T(t − τ) ] N [ e_i(t) ; e_i(t − τ) ],  (2.30)

where N = [ Π_11 + Ω   P K_2 ; K_2^T P   Π_22 ], Λ = max_{1≤i,j≤N} {G_ij²}, c = max_{1≤i≤N} {c_i²} and d = max_{1≤i≤N} {d_i²}.

From Lemma 1.21, the form N = [ Π_11 + Ω   P K_2 ; K_2^T P   Π_22 ] < 0 can be transformed to (2.18), and the two forms are equivalent. It is obvious from the Itô rule that

E V(t, e_i(t)) − E V(t_0, e_i(t_0)) = E ∫_{t_0}^t LV(s, e_i(s)) ds.  (2.31)

For positive constants η_i > 0 (i = 1, 2, ..., N), it can be concluded that

η_i E ||e_i(t)||² ≤ E V(t, e_i(t)) ≤ E V(t_0, e_i(t_0)) + E ∫_{t_0}^t LV(s, e_i(s)) ds ≤ E V(t_0, e_i(t_0)) + λ_max E ∫_{t_0}^t ||e_i(s)||² ds,  (2.32)

where λ_max < 0 denotes the maximal eigenvalue of N. Therefore, from the above results and (2.32), together with the study in Ref. [30], we conclude that the error system (2.11) is globally asymptotically stable in mean square. This completes the proof.

Remark 2.14 As presented in Theorem 2.13, the synchronization of an array of linearly stochastically coupled identical neural networks with discrete and distributed time delays is guaranteed if the matrix inequality (2.18) is feasible. Since (2.18) is linear in P > 0 and Q_1 > 0, by utilizing the Matlab LMI toolbox we can check the feasibility of (2.18) directly. Meanwhile, the gain matrices K_1 and K_2 can also be obtained.

Remark 2.15 In this section, for the sake of simplifying the description, we are concerned with a constant time delay. For a time-varying delay, similar results can be derived without difficulty, which would be more realistic and comprehensive.
Corollary 2.16 Let 0 < σ_i < 1 (i = 1, 2, ..., N) be any given constants. If there exists a positive definite matrix Q_1 = (q_ij)_{n×n} such that the following matrix inequality holds:

N_1 = [ Ξ_11  ρK_2  ρA  M^T  ρB  ρW
         *    Ξ_22   0    0    0    0
         *     *    −I    0    0    0
         *     *     *   −I    0    0
         *     *     *    *   −I    0
         *     *     *    *    *   −I ] < 0,  (2.33)

where Ξ_11 = ρ(−C + K_1) + ρ(−C + K_1)^T + Q_1 + (1 − σ_i)^{-1} τ² M^T M + c N² Λ λ_max(Γ^T Γ) and Ξ_22 = M^T M + d N² Λ λ_max(Γ_τ^T Γ_τ) − Q_1, then the error system (2.11) is globally asymptotically stable in mean square.
Proof Let P = ρI, where ρ is a positive constant and I is the identity matrix. Corollary 2.16 then follows immediately from Theorem 2.13.

To present the designed gain matrices K_1 and K_2 conveniently by using the LMI toolbox in Matlab, we make a simple transformation; the following theorem can then be easily derived.
Theorem 2.17 Let 0 < σ_i < 1 (i = 1, 2, ..., N) be any given constants. If there exist positive definite matrices P = (p_ij)_{n×n} and Q_1 = (q_ij)_{n×n} such that the following matrix inequality holds:

N_2 = [ Ω_11  K_2*  P A  M^T  P B  P W
         *    Ω_22   0    0    0    0
         *     *    −I    0    0    0
         *     *     *   −I    0    0
         *     *     *    *   −I    0
         *     *     *    *    *   −I ] < 0,  (2.34)

where Ω_11 = −PC + K_1* − C^T P + K_1*^T + Q_1 + (1 − σ_i)^{-1} τ² M^T M + c N² Λ λ_max(Γ^T Γ) and Ω_22 = M^T M + d N² Λ λ_max(Γ_τ^T Γ_τ) − Q_1, with K_1* = P K_1 and K_2* = P K_2, then the error system (2.11) is globally asymptotically stable in mean square.
Proof In Theorem 2.13, let K_1 = P^{-1} K_1* and K_2 = P^{-1} K_2*. Then Theorem 2.17 is derived directly.

Remark 2.18 The method in Theorem 2.17 for solving the gain matrices K_1 and K_2 was used in Ref. [6], and it is very useful for designing a controller that ensures system (2.7) achieves synchronization; a numerical sketch of this design procedure is given below.
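A minimal numerical sketch of the design procedure follows, assuming Python with cvxpy instead of the Matlab LMI toolbox: the LMI (2.34) is solved in the variables (P, Q_1, K_1*, K_2*) with the data of the illustrative example in Sect. 2.2.4, and the gains are then recovered as K_1 = P^{-1} K_1*, K_2 = P^{-1} K_2*.

```python
# Gain design via Theorem 2.17: solve (2.34) in (P, Q1, K1*, K2*), then
# recover K1 = P^{-1} K1* and K2 = P^{-1} K2*. Data from Sect. 2.2.4.
import numpy as np
import cvxpy as cp

n, N, tau, sigma = 2, 4, 1.0, 0.5
C = np.eye(n)
A = np.array([[2.0, -0.1], [-4.8, 4.5]])
B = np.array([[-1.7, -0.1], [-0.3, -4.1]])
W = np.array([[-1.2, -0.3], [-0.4, -3.2]])
M = 1.2 * np.eye(n)
Lam, c2, d2, lam_G = 4.0, 0.1, 0.1, 1.0        # Λ = max G_ij², c, d, λ_max(ΓᵀΓ)

P = cp.Variable((n, n), symmetric=True)
Q1 = cp.Variable((n, n), symmetric=True)
K1s, K2s = cp.Variable((n, n)), cp.Variable((n, n))   # K1* = P K1, K2* = P K2

O11 = (-P @ C + K1s - C.T @ P + K1s.T + Q1
       + (1 - sigma) ** -1 * tau ** 2 * M.T @ M
       + c2 * N ** 2 * Lam * lam_G * np.eye(n))
O22 = M.T @ M + d2 * N ** 2 * Lam * lam_G * np.eye(n) - Q1
Z, I = np.zeros((n, n)), np.eye(n)
N2 = cp.bmat([[O11,     K2s, P @ A, M.T, P @ B, P @ W],
              [K2s.T,   O22, Z,     Z,   Z,     Z],
              [A.T @ P, Z,   -I,    Z,   Z,     Z],
              [M,       Z,   Z,     -I,  Z,     Z],
              [B.T @ P, Z,   Z,     Z,   -I,    Z],
              [W.T @ P, Z,   Z,     Z,   Z,     -I]])
Ns = (N2 + N2.T) / 2                           # symmetrize before the PSD constraint
prob = cp.Problem(cp.Minimize(0),
                  [P >> 1e-3 * I, Q1 >> 1e-3 * I, Ns << -1e-6 * np.eye(6 * n)])
prob.solve()
K1 = np.linalg.solve(P.value, K1s.value)       # K1 = P^{-1} K1*
K2 = np.linalg.solve(P.value, K2s.value)       # K2 = P^{-1} K2*
print(prob.status, K1, K2)
```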

Corollary 2.19 Let 0 < σ_i < 1 (i = 1, 2, ..., N) be any given constants. If there exists a positive definite matrix Q_1 = (q_ij)_{n×n} such that the following matrix inequality holds:

N_3 = [ Δ_11  K_2*  ρA  M^T  ρB  ρW
         *    Δ_22   0    0    0    0
         *     *    −I    0    0    0
         *     *     *   −I    0    0
         *     *     *    *   −I    0
         *     *     *    *    *   −I ] < 0,  (2.35)

where Δ_11 = −ρC + K_1* − ρC^T + K_1*^T + Q_1 + (1 − σ_i)^{-1} τ² M^T M + c N² Λ λ_max(Γ^T Γ) and Δ_22 = M^T M + d N² Λ λ_max(Γ_τ^T Γ_τ) − Q_1, with K_1* = ρK_1 and K_2* = ρK_2, then the error system (2.11) is globally asymptotically stable in mean square.

Proof Let P = ρI in Theorem 2.17, where ρ is a positive constant and I is the identity matrix. Corollary 2.19 then follows immediately.

Remark 2.20 Corollaries 2.16 and 2.19 show that our main result in Theorem 2.13 is general enough to contain some special cases, such as P = ρI.

2.2.4 Illustrative Example

In this section, our main purpose is to verify the global asymptotic stability of the error system (2.11). In order to illustrate the effectiveness of our results, an example is presented here.

Example Consider the following chaotic DNN with discrete and distributed time delays:

dx(t) = [−C x(t) + A f(x(t)) + B f(x(t − τ)) + W ∫_{t−τ}^t f(x(s)) ds] dt,  (2.36)

where x(t) = [x_1(t), x_2(t)]^T is the state vector of a single node of the DNNs, f(x(t)) = [tanh(x_1(t)), tanh(x_2(t))]^T, τ = 1, and

C = [1 0; 0 1],  A = [2 −0.1; −4.8 4.5],  B = [−1.7 −0.1; −0.3 −4.1],  W = [−1.2 −0.3; −0.4 −3.2].

With the initial value chosen as x_1(t) = 0.4, x_2(t) = 0.6, ∀t ∈ [−1, 0], the chaotic phase trajectories are easily obtained, as Fig. 2.3 shows.
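As an aside, trajectories of (2.36) can be reproduced with a simple fixed-step integrator that keeps a history buffer for the delayed and distributed terms; the sketch below uses forward Euler with a left Riemann sum for the integral (the step size and horizon are our choices).

```python
# Fixed-step simulation sketch of the isolated node (2.36) with discrete
# delay tau and a distributed-delay integral, via forward Euler + history.
import numpy as np

C = np.eye(2)
A = np.array([[2.0, -0.1], [-4.8, 4.5]])
B = np.array([[-1.7, -0.1], [-0.3, -4.1]])
W = np.array([[-1.2, -0.3], [-0.4, -3.2]])
tau, dt, T = 1.0, 1e-3, 50.0
m = int(tau / dt)                                 # history length

hist = [np.array([0.4, 0.6])] * (m + 1)           # constant initial function
traj = []
for _ in range(int(T / dt)):
    x = hist[-1]
    integral = dt * np.sum(np.tanh(np.array(hist[-m - 1:-1])), axis=0)  # ∫ f(x(s)) ds
    dx = -C @ x + A @ np.tanh(x) + B @ np.tanh(hist[-m - 1]) + W @ integral
    hist.append(x + dt * dx)
    hist = hist[-(m + 2):]                        # keep only the needed window
    traj.append(hist[-1])
# np.array(traj) holds (x1, x2); plotting x1 against x2 reproduces a phase
# portrait like Fig. 2.3.
```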

Fig. 2.3 Chaotic phase trajectories of the DNN (2.36)

In order to verify that our results make the model (2.11) achieve synchronization, we just need to test the global asymptotic stability of the error system

de_i(t) = [−C e_i(t) + A g(e_i(t)) + B g(e_i(t − τ)) + W ∫_{t−τ}^t g(e_i(s)) ds] dt + c_i Σ_{j=1}^N G_ij Γ e_j(t) dW_i1(t) + d_i Σ_{j=1}^N G_ij Γ_τ e_j(t − τ) dW_i2(t) + [K_1 e_i(t) + K_2 e_i(t − τ)] dt,  (2.37)

where i = 1, 2, ..., N and e_i(t) = [e_i1(t), e_i2(t)]^T. Let c_i = √0.1, d_i = √0.1, N = 4, Γ = Γ_τ = [1 0; 0 1], and let the coupling matrix be

G = [ −2 1 0 1
       1 −2 1 0
       0 1 −2 1
       1 0 1 −2 ].

The constant matrix M referred to in (2.12) is chosen as M = [1.2 0; 0 1.2]. Then, according to Theorem 2.13 and by utilizing the Matlab LMI toolbox, the following feasible results are derived:

P = [4.9114 0.4327; 0.4327 1.4143],   Q_1 = [21.4806 0.1140; 0.1140 20.5590],

K_1* = [−45.4784 0.0047; 0.0047 −45.5166],   K_2* = [−7.0969 −0.0766; −0.0766 −6.4766].

Next, from K_1 = P^{-1} K_1* and K_2 = P^{-1} K_2*, we immediately obtain the gain matrices

K_1 = [−9.5166 2.9151; 2.9151 −33.0759]  and  K_2 = [−1.4801 0.3987; 0.3987 −4.7022].

Table 2.1 Initial states of model (2.37)

i         1      2      3      4
e_i1(t)   0.4    1.4    2.4    3.4
e_i2(t)   −1.6   −0.6   0.4    1.4

Fig. 2.4 Synchronization error e_i1 (curves e_i1(t), i = 1, ..., 4, over t ∈ [0, 20])

Fig. 2.5 Synchronization error e_i2 (curves e_i2(t), i = 1, ..., 4, over t ∈ [0, 20])
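The decay seen in Figs. 2.4 and 2.5 can be reproduced by an Euler–Maruyama sketch of the controlled error system (2.37). Below, purely for illustration, the nonlinearity g is replaced by tanh, which satisfies the same bound (2.12) with β = 1 ≤ 1.2; simulating the true g(e_i) = f(e_i + r) − f(r) would additionally require integrating the isolated node r(t). Step size and horizon are our choices.

```python
# Euler-Maruyama sketch of the controlled error system (2.37), i = 1..4,
# with g replaced by tanh purely for illustration (see the lead-in).
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, -0.1], [-4.8, 4.5]]); B = np.array([[-1.7, -0.1], [-0.3, -4.1]])
W = np.array([[-1.2, -0.3], [-0.4, -3.2]]); C = np.eye(2)
K1 = np.array([[-9.5166, 2.9151], [2.9151, -33.0759]])
K2 = np.array([[-1.4801, 0.3987], [0.3987, -4.7022]])
G = np.array([[-2, 1, 0, 1], [1, -2, 1, 0], [0, 1, -2, 1], [1, 0, 1, -2]], float)
ci = di = np.sqrt(0.1)
tau, dt, T = 1.0, 1e-3, 20.0
m = int(tau / dt)
g = np.tanh

e0 = np.array([[0.4, -1.6], [1.4, -0.6], [2.4, 0.4], [3.4, 1.4]])  # Table 2.1
hist = [e0.copy()] * (m + 1)                       # rows = nodes i, cols = components
for _ in range(int(T / dt)):
    e, e_tau = hist[-1], hist[-m - 1]
    integral = dt * np.sum(g(np.array(hist[-m - 1:-1])), axis=0)
    drift = (-e @ C.T + g(e) @ A.T + g(e_tau) @ B.T + integral @ W.T
             + e @ K1.T + e_tau @ K2.T)
    dW1 = rng.normal(0.0, np.sqrt(dt), (4, 1))
    dW2 = rng.normal(0.0, np.sqrt(dt), (4, 1))
    noise = ci * (G @ e) * dW1 + di * (G @ e_tau) * dW2  # Gamma = Gamma_tau = I
    hist.append(e + dt * drift + noise)
    hist = hist[-(m + 2):]
print(np.abs(hist[-1]).max())                      # near zero: errors have decayed
```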
Under the initial states given in Table 2.1, applying the above results to the error system (2.37), we derive the wave diagrams of the error signals e_i1(t) and e_i2(t) shown in Figs. 2.4 and 2.5, respectively (i = 1, 2, 3, 4). From Figs. 2.4 and 2.5 it is obvious that the error system (2.37), i.e., (2.11), is globally asymptotically stable. That is to say, the simulation results show that synchronization of an array of linearly stochastically coupled identical neural networks with discrete and distributed time delays is achieved by the delayed feedback controller we designed. Thus, our theoretical results have been confirmed by the simulations, and we can conclude that our study of the synchronization control problem of stochastically coupled neural networks with discrete and distributed time delays is practical and effective.

2.2.5 Conclusion

The synchronization control problem for an array of coupled DNNs has been thoroughly studied in this section. Several sufficient conditions guaranteeing synchronization have been obtained by constructing a Lyapunov–Krasovskii functional and using the LMI approach. In particular, discrete and distributed time-delay terms have been considered in the model, together with the stochastic coupling term. The delayed feedback controller gains have been obtained from the stability condition of the error system. Finally, an illustrative example has been given to verify the theoretical analysis. The results are novel, because there are few works on the synchronization of systems with both discrete and distributed time delays, and at the same time the results can be applied to realistic systems in practice.

References

1. P.B. Ali, M. Syed, Stability analysis of Takagi-Sugeno fuzzy Cohen-Grossberg BAM neural
networks with discrete and distributed time-varying delays. Math. Comput. Model. 53(1), 151–
160 (2011)
2. S. Arik, V. Tavsanoglu, Global asymptotic stability analysis of bidirectional associative memory
neural networks with constant time delays. Neurocomputing 68, 161–176 (2005)
3. P. Balasubramanian, R. Rakkiyappan, R. Sathy, Delay dependent stability results for fuzzy
BAM neural networks with Markovian jumping parameters. Expert Syst. Appl. 38(1), 121–
130 (2011)
4. R. Belohlavek, Fuzzy logical bidirectional associative memory. Inf. Sci. 128(1), 91–103 (2000)
5. J. Cao, P. Li, W. Wang, Global synchronization in arrays of delayed neural networks with
constant and delayed coupling. Phys. Lett. A 353(4), 318–325 (2006)
6. J. Cao, X. Li, Stability in delayed Cohen-Grossberg neural networks: LMI optimization
approach. Phys. D 212(1), 54–65 (2005)
7. J. Cao, Z. Wang, Y. Sun, Synchronization in an array of linearly stochastically coupled networks
with time-delays. Phys. A: Stat. Mech. Appl. 385(2), 718–728 (2007)
8. G.R. Chen, J. Zhou, Z.R. Liu, Global synchronization of coupled delayed neural networks and
applications to chaotic CNN model. Int. J. Bifurc. Chaos 14(7), 2229–2240 (2004)
9. M. Chen, D. Zhou, Synchronization in uncertain complex networks. Chaos: An Interdisciplinary Journal of Nonlinear Science 16(1), 013101 (2006)
10. B. Kosko, Adaptive bi-directional associative memories. Appl. Opt. 26(23), 4947–4960 (1987)
11. B. Kosko, Bi-directional associative memories. IEEE Trans. Syst., Man Cybern 18(1), 49–60
(1988)
12. B. Kosko, Neural Networks and Fuzzy Systems—A Dynamical System Approach to Machine
Intelligence (Prentice-Hall, Englewood Cliffs, 1992)
13. C. Li, S. Li, X. Liao, J. Yu, Synchronization in coupled map lattices with small-world delayed
interactions. Phys. A 335(3), 365–370 (2004)
14. C.G. Li, G.R. Chen, Synchronization in general complex dynamical networks with coupling
delays. Phys. A 343, 263–278 (2004)

15. P. Li, J. Cao, Z. Wang, Robust impulsive synchronization of coupled delayed neural networks
with uncertainties. Phys. A 373, 261–272 (2007)
16. Z. Li, G. Chen, Robust adaptive synchronization of uncertain dynamical networks. Phys. Lett.
A 324(2), 166–178 (2004)
17. W. Lin, G. Chen, Using white noise to enhance synchronization of coupled chaotic systems.
Chaos 16(1), 013133–013134 (2006)
18. B. Liu, X.Z. Liu, G.R. Chen, Robust impulsive synchronization of uncertain dynamical net-
works. IEEE Trans. Circuits Syst. I 52(7), 1431–1441 (2005)
19. B. Liu, P. Shi, Delay-range-dependent stability for fuzzy BAM neural networks with time-
varying delays. Phys. Lett. A 373(21), 1830–1838 (2009)
20. X. Lou, B. Cui, Robust asymptotic stability of uncertain fuzzy BAM neural networks with
time-varying delays. Fuzzy Sets Syst. 158(24), 2746–2756 (2007)
21. X. Lou, B. Cui, Stochastic exponential stability for Markovian jumping BAM neural networks
with time-varying delays. IEEE Trans. Syst. 37(3), 713–719 (2007)
22. W. Lu, T. Chen, Synchronization of coupled connected neural networks with delays. IEEE
Trans. Circuits Syst. I 51(12), 2491–2503 (2004)
23. J. Lv, G. Chen, A time-varying complex dynamical network model and its controlled synchro-
nization criteria. IEEE Trans. Autom. Control 50(6), 841–846 (2005)
24. L.M. Pecora, T.L. Carroll, G. Johnson, D. Mar, K.S. Fink, Synchronization stability in coupled
oscillator arrays: solution for arbitrary configurations. Int. J. Bifurc. Chaos 10(2), 273–290
(2000)
25. S. Ruan, R. Filfil, Dynamics of a two-neuron system with discrete and distributed delays. Phys.
D 191(3), 323–342 (2004)
26. Y. Sun, J. Cao, Z. Wang, Exponential synchronization of stochastic perturbed chaotic delayed
neural networks. Neurocomputing 70(13), 2477–2485 (2007)
27. W. Wang, J. Cao, Synchronization in an array of linearly coupled networks with time-varying
delay. Phys. A 366, 197–211 (2006)
28. Z. Wang, J. Fang, X. Liu, Global stability of stochastic high-order neural networks with discrete
and distributed delays. Chaos Solitons Fractals 36(2), 388–396 (2008)
29. Z. Wang, S. Lauria, J. Fang, X. Liu, Exponential stability of uncertain stochastic neural networks
with mixed time-delays. Chaos Solitons Fractals 32(1), 62–72 (2007)
30. Z. Wang, Y. Liu, F. Karl, X. Liu, Stochastic stability of uncertain Hopfield neural networks
with discrete and distributed delays. Phys. Lett. A 354(4), 288–297 (2006)
31. Z. Wang, Y. Liu, X. Liu, On global asymptotic stability of neural networks with discrete and
distributed delays. Phys. Lett. A 345(4), 299–308 (2005)
32. Z. Wang, H. Shu, J. Fang, X. Liu, Robust stability for stochastic Hopfield neural networks with
time delays. Nonlinear Anal.: Real World Appl. 7(5), 1119–1128 (2006)
33. Z. Wang, H. Shu, Y. Liu, D.W.C. Ho, X. Liu, Robust stability analysis of generalized neural
networks with discrete and distributed time delays. Chaos Solitons Fractals 30(4), 886–896
(2006)
34. C.W. Wu, Perturbation of coupling matrices and its effect on the synchronizability in arrays of
coupled chaotic systems. Phys. Lett. A 319(5–6), 495–503 (2003)
35. C.W. Wu, Synchronization in array of coupled nonlinear system with delay and nonreciprocal
time-varying coupling. IEEE Trans. Circuits Syst. 52(5), 282–286 (2005)
36. H. Xiang, J. Cao, Exponential stability of periodic solution to Cohen-Grossberg-type BAM
networks with time-varying delays. Neurocomputing 72(7), 1702–1711 (2009)
37. H. Xiang, J. Wang, Exponential stability and periodic solution for fuzzy BAM neural networks
with time varying delays. Appl. Math. J. Chin. Univ. 24(2), 157–166 (2009)
38. W. Yu, J. Cao, Synchronization control of stochastic delayed neural networks. Phys. A 373,
252–260 (2007)
Chapter 3
Robust Stability and Synchronization
of Neural Networks

In this chapter, the robust stability of high-order neural networks and hybrid stochastic
neural networks is first investigated. The robust anti-synchronization and robust lag
synchronization of chaotic neural networks are discussed in the sequel.

3.1 Delay-Dependent Stability Based on Parameters Weak Coupling LMI Set of High-Order NN

3.1.1 Introduction

Because high-order neural networks have better performance than traditional first-order neural networks [18], they have been adopted in several fields, for example associative memories [36], optimization [25] and pattern recognition [4]. To achieve good performance, sufficient conditions for the stability of neural networks have been studied intensively, see e.g. [58, 59] and the references therein. Time delays are frequently sources of instability [2, 8, 11], and stability sufficient conditions for high-order neural networks with time delays have been presented in the literature. Both delay-dependent and delay-independent sufficient conditions have been developed to guarantee the asymptotic, exponential, or absolute stability of high-order neural networks with discrete time delays, see e.g. [10, 37, 60].

Since synaptic transmission is a noisy process brought on by random fluctuations in the release of neurotransmitters and other probabilistic causes [57], investigating neural networks with stochastic perturbations is important both in theory and in practice. Considering that certain stochastic inputs can stabilize or destabilize a neural network [6], some new results on stability analysis for stochastic neural networks have been proposed, see e.g. [22, 50], where discrete time delays appear.


Besides discrete time delays, there is a distribution of propagation delays over a period of time in neural networks. Mixed time delays, which comprise discrete and distributed delays, should therefore be taken into account when modeling a realistic neural network [40, 65, 69]. The problem of global asymptotic stability for neural networks with mixed time delays has been analyzed in [47, 52], and global asymptotic stability for stochastic high-order neural networks with mixed time-invariant delays has been studied in [57].

Uncertainties always exist in neural networks for various reasons, and investigating the robust stability of neural networks with parameter uncertainties is important [2, 8, 11]. Recently, the robust exponential stability problem for uncertain stochastic neural networks with mixed time delays has been studied in [53], where an LMI approach was established.

In this section, we develop a novel approach to establish sufficient conditions for high-order neural networks with mixed delays to be globally exponentially stable. This approach is called the parameters weak coupling linear matrix inequality set (PWCLMIS) approach. Suppose an LMI set consists of two coupled LMIs, one free of system parameters and the other free of stability performance parameters (for example, time delays); then the system parameters and the stability performance parameters are only weakly coupled, and we call such an LMI set a PWCLMIS. Introducing free-weighting matrices into the PWCLMIS and making some algebraic transformations, we obtain excellent stability performance. Two numerical examples are given to illustrate this characteristic. Furthermore, both discrete and distributed time-varying delay dependence are treated simultaneously in this section, whereas the corresponding conditions in [57] are only distributed time-invariant delay dependent. In addition, we remove some restrictions in this section so as to cover some results in recently published works, such as [10, 37, 57, 60].

3.1.2 Preliminaries and Problem Formulation

Consider the high-order neural network with mixed time delays as follows:

dx(t) = [−A x(t) + W_0 f(x(t)) + W_1 f(x(t − h(t))) + W_2 ∫_{t−τ(t)}^t f(x(s)) ds] dt + σ(t, x(t), x(t − h(t))) dw(t),  (3.1)

where

x(t) = (x_1(t), x_2(t), ..., x_n(t))^T ∈ R^n,
f(x(t)) = (f_1(x(t)), f_2(x(t)), ..., f_L(x(t)))^T ∈ R^L,
f(x(t − h(t))) = (f_1(x(t − h(t))), f_2(x(t − h(t))), ..., f_L(x(t − h(t))))^T ∈ R^L,
A = diag{a_1, a_2, ..., a_n} > 0,  W_0 = [w_0ij]_{n×L} ∈ R^{n×L},  W_1 = [w_1ij]_{n×L} ∈ R^{n×L},
f_j(x(t)) = Π_{k∈I_j} [g_k(x_k(t))]^{d_k(j)}.

Here, I_j (j = 1, 2, ..., L) is a subset of {1, 2, ..., n}, d_k(j) is a positive integer, g_i(·) is an activation function with g_i(0) = 0, x(t) is the state vector associated with the n neurons, and the matrix A = diag{a_1, a_2, ..., a_n} has positive entries a_i > 0. The matrices W_0, W_1 and W_2 are, respectively, the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix. f(x(·)) is composed of products of L activation functions that reflect the high-order characteristics. The scalars h(t) > 0 and τ(t) > 0 are the unknown discrete time delay and the unknown distributed time delay, respectively, and satisfy

0 ≤ τ(t) ≤ τ,  τ̇(t) ≤ d_τ,
0 ≤ h(t) ≤ h,  ḣ(t) ≤ d_h,
τ_0 = max(τ, h).  (3.2)

Remark 3.1 The constraints d_τ < 1 and d_h < 1, which often appear elsewhere in the literature, have been removed in this section.

Remark 3.2 There are two differences from the work [57]. First, the discrete time delay h is time-varying in this section but time-invariant in [57]. Second, the scalar τ(t) is an unknown distributed time-varying delay in this section, whereas τ(t) = τ > 0 is a known constant distributed time delay in [57].

The stochastic disturbance w(t) = [w_1(t), w_2(t), ..., w_m(t)]^T ∈ R^m is a Brownian motion defined on the complete probability space (Ω, F, {F_t}_{t≥0}, P). Assume that σ: R_+ × R^n × R^n → R^n is locally Lipschitz continuous and satisfies the linear growth condition [20]. Moreover, σ satisfies

trace[σ^T(t, x(t), x(t − h(t))) σ(t, x(t), x(t − h(t)))] ≤ |Σ_1 x(t)|² + |Σ_2 x(t − h(t))|²,  (3.3)

where Σ_1 and Σ_2 are known constant matrices with appropriate dimensions.

Remark 3.3 ([57]) The condition (3.3) imposed on the stochastic disturbance term,
σ T (t, x(t), x(t −h(t))), has been used in recent papers dealing with stochastic neural
networks, see [22] and references therein.

The parameter uncertainties and the stochastic perturbations are common sources
of the disturbances on neural networks. And we model the uncertain stochastic neural
networks with mixed time delays as follows.

dx(t) = [−(A + ΔA) x(t) + (W_0 + ΔW_0) f(x(t)) + (W_1 + ΔW_1) f(x(t − h(t))) + W_2 ∫_{t−τ(t)}^t f(x(s)) ds] dt + σ(t, x(t), x(t − h(t))) dw(t),  (3.4)

where the matrices ΔA, ΔW_0 and ΔW_1 are unknown matrices representing time-varying parameter uncertainties and satisfying the following admissible condition:

[ΔA  ΔW_0  ΔW_1] = M F [N_1  N_2  N_3],  (3.5)

where M, N_1, N_2 and N_3 are known real constant matrices, and F is an unknown time-varying matrix-valued function subject to

F^T F ≤ I.  (3.6)

We make the following assumptions throughout this section [37].

Assumption 3.4 There exist constants μi > 0 such that

|gi (x)| ≤ μi |x|, ∀x ∈ R, i = 1, 2, . . . , n.

Assumption 3.5 The following holds for all gi (·):

|gi (x)| ≤ 1, ∀x ∈ R, i = 1, 2, . . . , n.

Denote by x(t; ξ) the state trajectory of the neural network (3.1) or (3.4) from the initial data x(θ) = ξ(θ) on −τ_0 ≤ θ ≤ 0 in L²_{F0}([−τ_0, 0]; R^n). According to [20], the system (3.1) or (3.4) admits a trivial solution x(t; 0) ≡ 0 corresponding to the initial data ξ = 0. Before proceeding further, we introduce the definition of global exponential stability for the uncertain stochastic neural network (3.1) or (3.4) with discrete and distributed time delays as follows:

Definition 3.6 For the neural network (3.1) or (3.4) and every ξ ∈ L²_{F0}([−τ_0, 0]; R^n), the trivial solution (equilibrium point) is robustly, globally, exponentially stable in the mean square if there exist positive constants β > 0 and μ > 0 such that every solution x(t; ξ) of (3.1) or (3.4) satisfies

E{||x(t; ξ)||²} ≤ μ e^{−βt} sup_{−τ_0 ≤ s ≤ 0} E{||ξ(s)||²}, ∀t > 0.  (3.7)

The main objective of this section is to establish LMI-based stability criteria that guarantee that the high-order uncertain stochastic neural network with mixed time delays is robustly exponentially stable, and whose admissible time delays are large.

3.1.3 Main Results

Before deriving the main results, we give the following lemma.

Lemma 3.7 ([37]) Let f(x) = (f_1(x), f_2(x), ..., f_L(x))^T ∈ R^L, where L is an integer, and Σ_μ = diag{μ_1, ..., μ_n}, where μ_i is defined in Assumption 3.4. Then, under Assumptions 3.4 and 3.5, the following inequality holds:

f^T(x) f(x) ≤ L x^T Σ_μ Σ_μ x.  (3.8)
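A quick numerical illustration of (3.8) follows; it is a sketch under the assumption g_k = tanh (which satisfies Assumptions 3.4 and 3.5 with μ_k = 1) and unit powers d_k(j) = 1, not part of the proof.

```python
# Random spot check of Lemma 3.7 with g_k = tanh, mu_k = 1, d_k(j) = 1:
# each f_j is a product of factors bounded by 1, one of which is bounded
# by |x_k|, so f(x)^T f(x) <= L x^T diag(mu)^2 x.
import numpy as np

rng = np.random.default_rng(2)
n, L = 3, 4
mu = np.ones(n)
idx = [rng.integers(0, n, size=2) for _ in range(L)]   # index sets I_j
for _ in range(1000):
    x = rng.normal(size=n)
    f = np.array([np.prod(np.tanh(x)[k]) for k in idx])
    assert f @ f <= L * (mu * x) @ (mu * x) + 1e-12
print("inequality (3.8) holds on all random samples")
```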

Exponential Stability for Deterministic Systems

The following theorem provides a sufficient condition for the network dynamics (3.1) to be robustly, globally, exponentially stable in the mean square.

Theorem 3.8 Consider the dynamics of the high-order stochastic delayed neural network (3.1). The system is robustly, globally, exponentially stable in the mean square if there exist positive scalars ρ > 0, ε_i > 0 (i = 1, 2, 3) and matrices P > 0, Q_1 > 0, Q_2 > 0, Z_j > 0 (j = 1, 2, 3, 4),

H = [H_1; H_2; H_3; H_4; H_5],  J = [J_1; J_2; J_3; J_4; J_5],  K = [K_1; K_2; K_3; K_4; K_5],
L = [L_1; L_2; L_3; L_4; L_5],  R = [R_1; R_2; R_3; R_4; R_5],  S = [S_1; S_2; S_3; S_4; S_5]

such that the following PWCLMIS holds:

P < ρI,  (3.9)

Ψ_1 = [ Ω_11  P W_0  Ω_13  P W_1  P W_2  ρΣ_1^T  0  0
         *   −ε_1 I   0     0      0      0      0  0
         *     *    −ε_1 I  0      0      0      0  0
         *     *      *   −ε_2 I   0      0      0  0
         *     *      *     *    −ε_3 I   0      0  0
         *     *      *     *      *    −ρI     0  0
         *     *      *     *      *      *    Ω_77  ρΣ_2^T
         *     *      *     *      *      *      *   −ρI ] < 0,  (3.10)

Ψ_2 = [ Φ   τH    τJ    τK    hL    hR    hS
        *  −τZ_1   0     0     0     0     0
        *    *   −τZ_1   0     0     0     0
        *    *     *   −τZ_2   0     0     0
        *    *     *     *   −hZ_3   0     0
        *    *     *     *     *   −hZ_3   0
        *    *     *     *     *     *   −hZ_4 ] < 0,  (3.11)

where

Ω_11 = −A P − P A + Q_1 + Q_2,
Ω_13 = ε_1 L^{1/2} Σ_μ,
Ω_77 = ε_2 L Σ_μ Σ_μ − (1 − d_h) Q_2,
Φ = Φ_1 + Φ_2 + Φ_2^T,
Φ_1 = diag{0, −(1 − d_τ) Q_1, 0, 0, 0},
Φ_2 = [ H + K + L + S   −H + J   −J − K   −L + R   −R − S ].

Proof Define a Lyapunov–Krasovskii functional V(t, x(t)) as

V(t, x(t)) = x^T(t) P x(t) + ∫_{t−τ(t)}^t x^T(s) Q_1 x(s) ds + ∫_{t−h(t)}^t x^T(s) Q_2 x(s) ds + ∫_{−τ}^0 ∫_{t+θ}^t x^T(s)(ε_3 τ L Σ_μ Σ_μ) x(s) ds dθ + ∫_{−τ}^0 ∫_{t+θ}^t ẋ^T(s)(Z_1 + Z_2) ẋ(s) ds dθ + ∫_{−h}^0 ∫_{t+θ}^t ẋ^T(s)(Z_3 + Z_4) ẋ(s) ds dθ.  (3.12)

By Itô's differential formula [19], the stochastic derivative of V(t, x(t)) along (3.1) is

dV(t, x(t)) = LV(t, x(t)) dt + [x^T(t) P σ(t, x(t), x(t − h(t))) + σ^T(t, x(t), x(t − h(t))) P x(t)] dw(t),  (3.13)

where

LV(t, x(t)) = x^T(t)(−A^T P − P A + Q_1 + Q_2) x(t) − (1 − τ̇(t)) x^T(t − τ(t)) Q_1 x(t − τ(t)) − (1 − ḣ(t)) x^T(t − h(t)) Q_2 x(t − h(t)) + 2x^T(t) P W_0 f(x(t)) + 2x^T(t) P W_1 f(x(t − h(t))) + 2x^T(t) P W_2 ∫_{t−τ}^t f(x(s)) ds − ∫_{t−τ}^t x^T(s)(ε_3 τ L Σ_μ Σ_μ) x(s) ds − ∫_{t−τ}^t ẋ^T(s)(Z_1 + Z_2) ẋ(s) ds − ∫_{t−h}^t ẋ^T(s)(Z_3 + Z_4) ẋ(s) ds + trace[σ^T(t, x(t), x(t − h(t))) P σ(t, x(t), x(t − h(t)))].  (3.14)

According to conditions (3.3) and (3.9), we have

trace[σ^T(t, x(t), x(t − h(t))) P σ(t, x(t), x(t − h(t)))] ≤ ρ[x^T(t) Σ_1^T Σ_1 x(t) + x^T(t − h(t)) Σ_2^T Σ_2 x(t − h(t))].  (3.15)

By Lemmas 1.13 and 3.7, we have

2x^T(t) P W_0 f(x(t)) ≤ ε_1 f^T(x(t)) f(x(t)) + ε_1^{-1} x^T(t) P W_0 W_0^T P x(t)
  ≤ x^T(t)(ε_1 L Σ_μ Σ_μ + ε_1^{-1} P W_0 W_0^T P) x(t),  (3.16)

2x^T(t) P W_1 f(x(t − h(t))) ≤ ε_2 f^T(x(t − h(t))) f(x(t − h(t))) + ε_2^{-1} x^T(t) P W_1 W_1^T P x(t)
  ≤ ε_2 x^T(t − h(t)) L Σ_μ Σ_μ x(t − h(t)) + ε_2^{-1} x^T(t) P W_1 W_1^T P x(t),  (3.17)

and

2x^T(t) P W_2 ∫_{t−τ}^t f(x(s)) ds ≤ ε_3 ( ∫_{t−τ}^t f(x(s)) ds )^T ( ∫_{t−τ}^t f(x(s)) ds ) + ε_3^{-1} x^T(t) P W_2 W_2^T P x(t),  (3.18)

where ε_1 > 0, ε_2 > 0, ε_3 > 0. By Lemmas 1.20 and 3.7, we can obtain

ε_3 ( ∫_{t−τ}^t f(x(s)) ds )^T ( ∫_{t−τ}^t f(x(s)) ds ) ≤ ε_3 τ ∫_{t−τ}^t f^T(x(s)) f(x(s)) ds ≤ ∫_{t−τ}^t x^T(s)(ε_3 τ L Σ_μ Σ_μ) x(s) ds.  (3.19)

By substituting (3.15)–(3.19) into (3.14) and noting (3.2), we have

LV(t, x(t)) ≤ x^T(t)[−A P − P A + Q_1 + Q_2 + ρΣ_1^T Σ_1 + ε_1 L Σ_μ Σ_μ + ε_1^{-1} P W_0 W_0^T P + ε_2^{-1} P W_1 W_1^T P + ε_3^{-1} P W_2 W_2^T P] x(t) + x^T(t − h(t))[ε_2 L Σ_μ Σ_μ + ρΣ_2^T Σ_2 − (1 − d_h) Q_2] x(t − h(t)) − (1 − d_τ) x^T(t − τ(t)) Q_1 x(t − τ(t)) − ∫_{t−τ}^t ẋ^T(s)(Z_1 + Z_2) ẋ(s) ds − ∫_{t−h}^t ẋ^T(s)(Z_3 + Z_4) ẋ(s) ds.  (3.20)

Denote

LV_1(t, x(t)) = x^T(t)[−A P − P A + Q_1 + Q_2 + ρΣ_1^T Σ_1 + ε_1 L Σ_μ Σ_μ + ε_1^{-1} P W_0 W_0^T P + ε_2^{-1} P W_1 W_1^T P + ε_3^{-1} P W_2 W_2^T P] x(t) + x^T(t − h(t))[ε_2 L Σ_μ Σ_μ + ρΣ_2^T Σ_2 − (1 − d_h) Q_2] x(t − h(t)),  (3.21)


 t
LV2 (t, x(t)) = −(1 − dτ )x T (t − τ (t))Q 1 x(t − τ (t)) − ẋ T (s)(Z 1 + Z 2 )
t−τ
 t
× ẋ(s)ds − ẋ T (s)(Z 3 + Z 4 )ẋ(s)ds. (3.22)
t−h

So we have

LV1 (t, x(t)) = ξ1T (t)Ξ ξ1 (t), (3.23)

where
 T
ξ(t) = x T (t) x T (t − h(t)) ,
Ξ = diag{Ξ11 , Ξ22 },
Ξ11 = − A P − P A + Q 1 + Q 2 + ρΣ1T Σ1 + ε1 LΣμ Σμ + ε−1 −1
1 P W 0 W 0 P + ε2 P
T

× W1 W1T P + ε−1 T
3 P W2 W2 P,
Ξ22 = ε2 LΣμ Σμ + ρΣ2T Σ2 − (1 − dh )Q 2 .

It follows from the Schur Complement Lemma (Lemma 1.21) that (3.10) implies
Ξ < 0, we obtain
LV1 (t, x(t)) < 0. (3.24)

Next, we observe LV_2(t, x(t)). By the Leibniz–Newton formula, the following equations are true for any matrices H, J, K, L, R and S with appropriate dimensions:

2ξ_2^T(t) H [x(t) − x(t − τ(t)) − ∫_{t−τ(t)}^t ẋ(s) ds] = 0,
2ξ_2^T(t) J [x(t − τ(t)) − x(t − τ) − ∫_{t−τ}^{t−τ(t)} ẋ(s) ds] = 0,
2ξ_2^T(t) K [x(t) − x(t − τ) − ∫_{t−τ}^t ẋ(s) ds] = 0,
2ξ_2^T(t) L [x(t) − x(t − h(t)) − ∫_{t−h(t)}^t ẋ(s) ds] = 0,
2ξ_2^T(t) R [x(t − h(t)) − x(t − h) − ∫_{t−h}^{t−h(t)} ẋ(s) ds] = 0,
2ξ_2^T(t) S [x(t) − x(t − h) − ∫_{t−h}^t ẋ(s) ds] = 0,

where ξ_2(t) = [x^T(t), x^T(t − τ(t)), x^T(t − τ), x^T(t − h(t)), x^T(t − h)]^T.

Adding the left-hand sides of these equations to (3.22), we obtain

LV_2(t, x(t)) ≤ −(1 − d_τ) x^T(t − τ(t)) Q_1 x(t − τ(t)) + 2ξ_2^T(t) H [x(t) − x(t − τ(t))] + 2ξ_2^T(t) J [x(t − τ(t)) − x(t − τ)] + 2ξ_2^T(t) K [x(t) − x(t − τ)] + τ ξ_2^T(t)(H Z_1^{-1} H^T + J Z_1^{-1} J^T + K Z_2^{-1} K^T) ξ_2(t) − ∫_{t−τ(t)}^t [ẋ^T(s) Z_1 + ξ_2^T(t) H] Z_1^{-1} [Z_1^T ẋ(s) + H^T ξ_2(t)] ds − ∫_{t−τ}^{t−τ(t)} [ẋ^T(s) Z_1 + ξ_2^T(t) J] Z_1^{-1} [Z_1^T ẋ(s) + J^T ξ_2(t)] ds − ∫_{t−τ}^t [ẋ^T(s) Z_2 + ξ_2^T(t) K] Z_2^{-1} [Z_2^T ẋ(s) + K^T ξ_2(t)] ds + 2ξ_2^T(t) L [x(t) − x(t − h(t))] + 2ξ_2^T(t) S [x(t) − x(t − h)] + 2ξ_2^T(t) R [x(t − h(t)) − x(t − h)] + h ξ_2^T(t)(L Z_3^{-1} L^T + R Z_3^{-1} R^T + S Z_4^{-1} S^T) ξ_2(t) − ∫_{t−h(t)}^t [ẋ^T(s) Z_3 + ξ_2^T(t) L] Z_3^{-1} [Z_3^T ẋ(s) + L^T ξ_2(t)] ds − ∫_{t−h}^{t−h(t)} [ẋ^T(s) Z_3 + ξ_2^T(t) R] Z_3^{-1} [Z_3^T ẋ(s) + R^T ξ_2(t)] ds − ∫_{t−h}^t [ẋ^T(s) Z_4 + ξ_2^T(t) S] Z_4^{-1} [Z_4^T ẋ(s) + S^T ξ_2(t)] ds

≤ ξ_2^T(t)[Φ + τ H Z_1^{-1} H^T + τ J Z_1^{-1} J^T + τ K Z_2^{-1} K^T + h L Z_3^{-1} L^T + h R Z_3^{-1} R^T + h S Z_4^{-1} S^T] ξ_2(t).  (3.25)

From (3.11), by the Schur Complement Lemma (Lemma 1.21), we obtain Φ + τ H Z_1^{-1} H^T + τ J Z_1^{-1} J^T + τ K Z_2^{-1} K^T + h L Z_3^{-1} L^T + h R Z_3^{-1} R^T + h S Z_4^{-1} S^T < 0; then

LV_2(t, x(t)) < 0.  (3.26)

So, from (3.20)–(3.22), (3.25) and (3.26), we have

LV(t, x(t)) = LV_1(t, x(t)) + LV_2(t, x(t)) < 0.  (3.27)

Denote λ_1 = min_{i∈S} {λ_min(−Ψ_1)} and λ_2 = min_{i∈S} {λ_min(−Ψ_2)}; we have

E{LV(t, x(t))} ≤ −λ_1 E{||x(t)||²} − λ_2 E{||x(t)||²} ≤ −λ_1 E{||x(t)||²} < 0.  (3.28)

Define a new function

W(t, x(t)) = e^{kt} V(t, x(t)),  k > 0;  (3.29)

its infinitesimal operator L is given by

LW(t, x(t)) = k e^{kt} V(t, x(t)) + e^{kt} LV(t, x(t)).  (3.30)

Let φ(t) = [ξ_2^T(t), ẋ^T(s)]^T. By the generalized Itô formula we can obtain from (3.29) that

E{W(t, x(t))} = E{W(0, x(0))} + ∫_0^t k e^{ks} E{V(s, x(s))} ds + ∫_0^t e^{ks} E{LV(s, x(s))} ds
  ≤ λ_max(P) E{||x(0)||²} + k e^{kt} [λ_max(P) + τ λ_max(Q_1) + h λ_max(Q_2) + τ λ_max(Z_1 + Z_2) + h λ_max(Z_3 + Z_4) + ε_3 τ² L Σ_μ Σ_μ] ∫_0^{τ_0} E{||φ(s)||²} ds − λ_1 e^{kt} ∫_0^{τ_0} E{||x(s)||²} ds
  ≤ η sup_{−τ_0 ≤ s ≤ 0} E{||φ(s)||²},  (3.31)

where

η = λ_max(P) + τ_0 k e^{kt} [λ_max(P) + τ λ_max(Q_1) + h λ_max(Q_2) + τ λ_max(Z_1 + Z_2) + h λ_max(Z_3 + Z_4) + ε_3 τ² L Σ_μ Σ_μ].

Also, it is easy to see that

E{V(t, x(t))} ≥ λ_min(P) E{||x(t)||²}.  (3.32)

From (3.31) and (3.32), it follows that

E{||x(t)||²} ≤ λ_min^{-1}(P) η e^{−kt} sup_{−τ_0 ≤ s ≤ 0} E{||φ(s)||²}.  (3.33)

This completes the proof.

Remark 3.9 Theorem 3.8 gives a new stability criterion for system (3.1). We define a new Lyapunov–Krasovskii functional (3.12) which makes full use of the information about the discrete and distributed time delays. Furthermore, some novel techniques have been exploited in the calculation of the time derivative of V(t). First, no structural assumptions about Q_1 and Q_2 are imposed on system (3.1), whereas Q_1 = ε_2 L Σ_μ Σ_μ + ρΣ_2^T Σ_2 and Q_2 = ε_3 τ L Σ_μ Σ_μ were adopted in [57]; thus the criterion presented here has the potential to yield more general results. Second, the result here is exponential stability, whereas the result in [57] is asymptotic stability, so the result in this section converges faster. Last, the PWCLMIS presented by the authors in [28] has been employed in this section.
If only the discrete time delay appears in the neural network, (3.1) simplifies to

dx(t) = [−A x(t) + W_0 f(x(t)) + W_1 f(x(t − h(t)))] dt + σ(t, x(t), x(t − h(t))) dw(t).  (3.34)

The stability issue for stochastic high-order neural network with discrete delays has
been investigated in [57], and the following corollary provides a more universal
result.

Corollary 3.10 Consider the dynamics of the neural network (3.34). The system is robustly, globally, exponentially stable in the mean square if there exist positive scalars ρ > 0, ε_1 > 0, ε_2 > 0 and matrices P > 0, Q_2 > 0, Z_3 > 0, Z_4 > 0,

L = [L_1; L_2; L_3],  R = [R_1; R_2; R_3],  S = [S_1; S_2; S_3]

such that the following PWCLMIS holds:

P < ρI,  (3.35)

[ Ω̄_11  P W_0  Ω_13  P W_1  ρΣ_1^T  0  0
   *   −ε_1 I   0     0      0      0  0
   *     *    −ε_1 I  0      0      0  0
   *     *      *   −ε_2 I   0      0  0
   *     *      *     *    −ρI     0  0
   *     *      *     *      *    Ω_77  ρΣ_2^T
   *     *      *     *      *      *   −ρI ] < 0,  (3.36)

[ Φ̄   hL    hR    hS
  *  −hZ_3   0     0
  *    *   −hZ_3   0
  *    *     *   −hZ_4 ] < 0,  (3.37)

where

Ω̄_11 = −A P − P A + Q_2,
Φ̄ = Φ̄_2 + Φ̄_2^T,
Φ̄_2 = [ L + S   −L + R   −R − S ].

Furthermore, if there are no stochastic perturbations, the neural network (3.34) reduces to

dx(t) = [−A x(t) + W_0 f(x(t)) + W_1 f(x(t − h(t)))] dt.  (3.38)

High-order neural networks of the type (3.38) have been intensively investigated in the literature, such as [10, 37, 57, 60]. The following corollary provides a complementary method to the results in [10, 37, 60]; furthermore, it is less restrictive than the result in [57].

Corollary 3.11 Consider the dynamics of the neural network (3.38). The system is robustly, globally, exponentially stable if there exist positive scalars ε_1 > 0, ε_2 > 0 and matrices P > 0, Q_2 > 0, Z_3 > 0, Z_4 > 0,

L = [L_1; L_2; L_3],  R = [R_1; R_2; R_3],  S = [S_1; S_2; S_3]

such that the PWCLMIS constructed by (3.37) and the following LMI holds:

[ Ω̄_11  P W_0  Ω_13  P W_1   0
   *   −ε_1 I   0     0      0
   *     *    −ε_1 I  0      0
   *     *      *   −ε_2 I   0
   *     *      *     *    Ω_77 ] < 0.  (3.39)

Exponential Stability for Uncertain Systems

The following theorem gives a sufficient condition for the network dynamics (3.4) to be robustly exponentially stable in the mean square.

Theorem 3.12 Consider the dynamics of the high-order uncertain stochastic delayed neural network (3.4). The system is robustly, globally, exponentially stable in the mean square if there exist positive scalars ρ > 0, ε_i > 0 (i = 1, 2, 3, 4) and matrices P > 0, Q_1 > 0, Q_2 > 0, Z_j > 0 (j = 1, 2, 3, 4),

H = [H_1; H_2; H_3; H_4; H_5],  J = [J_1; J_2; J_3; J_4; J_5],  K = [K_1; K_2; K_3; K_4; K_5],
L = [L_1; L_2; L_3; L_4; L_5],  R = [R_1; R_2; R_3; R_4; R_5],  S = [S_1; S_2; S_3; S_4; S_5]

such that the PWCLMIS constructed by (3.9), (3.11) and the following LMI holds:

[ Ω_11  P W_0  Ω_13  P W_1  P W_2  ρΣ_1^T  P M  −ε_4 N_1^T  0  0
   *   −ε_1 I   0     0      0      0      0    ε_4 N_2^T   0  0
   *     *    −ε_1 I  0      0      0      0      0         0  0
   *     *      *   −ε_2 I   0      0      0    ε_4 N_3^T   0  0
   *     *      *     *    −ε_3 I   0      0      0         0  0
   *     *      *     *      *    −ρI     0      0         0  0
   *     *      *     *      *      *    −ε_4 I   0         0  0
   *     *      *     *      *      *      *    −ε_4 I      0  0
   *     *      *     *      *      *      *      *       Ω_77  ρΣ_2^T
   *     *      *     *      *      *      *      *         *   −ρI ] < 0.  (3.40)

Proof From Theorem 3.8, the system (3.4) is robustly, globally, exponentially stable in the mean square if there exist positive scalars ρ > 0, ε_i > 0 (i = 1, 2, 3) and matrices P > 0, Q_1 > 0, Q_2 > 0, Z_j > 0 (j = 1, 2, 3, 4) and free-weighting matrices H, J, K, L, R, S as in Theorem 3.8 such that the PWCLMIS constructed by (3.9), (3.11) and the following LMI holds:

[ Ω̃_11  P(W_0 + ΔW_0)  Ω_13  P(W_1 + ΔW_1)  P W_2  ρΣ_1^T  0  0
   *        −ε_1 I       0       0            0      0      0  0
   *          *        −ε_1 I    0            0      0      0  0
   *          *          *     −ε_2 I         0      0      0  0
   *          *          *       *          −ε_3 I   0      0  0
   *          *          *       *            *    −ρI     0  0
   *          *          *       *            *      *    Ω_77  ρΣ_2^T
   *          *          *       *            *      *      *   −ρI ] < 0,  (3.41)

where Ω̃_11 = −(A + ΔA) P − P(A + ΔA) + Q_1 + Q_2.

According to (3.5), (3.41) can be rewritten as

Ψ_1 + U F V + V^T F^T U^T < 0,  (3.42)

where Ψ_1 is the nominal matrix of (3.10), U = [M^T P  0  0  0  0  0  0  0]^T and V = [−N_1  N_2  0  N_3  0  0  0  0].

By Lemma 1.22, (3.42) holds if and only if there is a scalar ε_4 > 0 such that

Ψ_1 + ε_4^{-1} U U^T + ε_4 V^T V < 0.  (3.43)
3.1 Delay-Dependent Stability … 51

It follows from the Schur Complement Lemma (Lemma 1.13) that (3.43) holds
if and only if (3.47) holds. The proof of Theorem 3.12 is completed.
If only the discrete time delay appears in the neural network, (3.4) can be simpli-
fied to

d x(t) = [−(A + ΔA)x(t) + (W0 + ΔW0 ) f (x(t)) + (W1 + ΔW1 )


× f (x(t − h(t)))]dt + σ(t, x(t), x(t − h))dw(t). (3.44)

We have the following corollary:


Corollary 3.13 Consider the dynamics of the neural network (3.44). The system
is robustly, globally, exponentially stable in the mean square if there exist positive
scalars ρ > 0, ε1 > 0, ε2 > 0 and matrices P > 0, Q 2 > 0, Z 3 > 0, Z 4 > 0,
⎡ ⎤ ⎡ ⎤ ⎡ ⎤
L1 R1 S1
L = ⎣ L 2 ⎦ , R = ⎣ R2 ⎦ , S = ⎣ S2 ⎦
L3 R3 S3

such that the PWCLMIS which is constructed by (3.35), (3.37) and following LMI
holds:
⎡ ⎤
Ω11 P W0 Ω13 P W1 ρΣ1T P M −ε4 N1T 0 0
⎢ ∗ −ε1 I 0 0 0 0 ε4 N2T 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ −ε1 I 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −ε2 I 0 0 ε4 N3T 0 0 ⎥
⎢ ⎥
⎢ ∗ 0 ⎥
⎢ ∗ ∗ ∗ −ρI 0 0 0 ⎥ < 0. (3.45)
⎢ ∗ ∗ ∗ ∗ ∗ −ε I 0 0 0 ⎥
⎢ 4 ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ −ε4 I 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ ∗ ∗ Ω77 ρΣ2T ⎦
∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −ρI

If there are no stochastic perturbations σ(t, x(t), x(t − h)), the neural network
(3.44) will be reduced to

d x(t) = [−(A + ΔA)x(t) + (W0 + ΔW0 ) f (x(t)) + (W1 + ΔW1 ) f (x(t − h(t)))]dt.
(3.46)
We have the following corollary:
Corollary 3.14 Consider the dynamics of the neural network (3.46). The system is
robustly, globally, exponentially stable if there exist positive scalars ε1 > 0, ε2 > 0
and matrices P > 0, Q 2 > 0, Z 3 > 0, Z 4 > 0,
⎡ ⎤ ⎡ ⎤ ⎡ ⎤
L1 R1 S1
L = ⎣ L 2 ⎦ , R = ⎣ R2 ⎦ , S = ⎣ S2 ⎦
L3 R3 S3
52 3 Robust Stability and Synchronization of Neural Networks

such that the PWCLMIS which is constructed by (3.37) and following LMI holds:
⎡ ⎤
Ω11 P W0 Ω13 P W1 P M −ε4 N1T 0
⎢ ⎥
⎢ ∗ −ε1 I 0 0 0 ε4 N2T 0 ⎥
⎢ ⎥
⎢ ∗ ∗ −ε I 0 0 0 0 ⎥
⎢ 1 ⎥
⎢ ∗ ∗ ∗ −ε2 I 0 ε4 N3T 0 ⎥ < 0. (3.47)
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ −ε4 I 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ −ε4 I 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ Ω77

3.1.4 Numerical Examples

Example 3.15 Consider a two-neuron stochastic neural network (3.1) with both dis-
crete and distributed delays, where
     
1.2 0 1.5 −1.6 1.2 0.3
A= , W0 = , W1 = ,
0 1.2 −1.6 1.5 0.3 0.9
     
0.16 −0.64 0.2 0 0.08 0
W2 = , Σμ = , Σ1 = ,
−0.64 0.16 0 0.2 0 0.08
 
0.09 0
Σ2 = , L = 1.2, dτ = 0.5,
0 0.09
dh = 0.5.

Applying Theorem 3.8, Table 3.1 gives the maximum allowable value h for dif-
ferent τ .
However, applying Theorem 3.8 in [57], we will gain the maximum allowable
value τ = 1.7855. At the same time, Theorem 3.8 in [57] is discrete time delay h
independence.

Example 3.16 Consider a two-neuron uncertain stochastic neural network (3.4) with
mixed delays, where
     
0.4 0 0.1 −0.2 0.2 0.3
A= , W0 = , W1 = ,
0 0.4 −0.2 0.1 0.3 0.1
     
0.1 −0.64 0.1 0 0.01 0
W2 = , Σμ = , Σ1 = ,
−0.64 0.1 0 0.1 0 0.01
     
0.04 0 0.1 0 0.2 0
Σ2 = , M= , N1 = ,
0 0.04 0 0.2 0 0.1
   
0.1 0 0.2 0
N2 = , N3 = , L = 1.2,
0 0.2 0 0.1
dτ = 0.5, dh = 0.5.
3.1 Delay-Dependent Stability … 53

Table 3.1 Maximum h τ Maximum allowable value


calculated for various τ of h
1.0000 × 1020 3.1467 × 1015
1.0000 × 1015 1.3350 × 1021
1.0000 × 1010 5.0000 × 1020

Table 3.2 Maximum h τ Maximum allowable value of h


calculated for various τ
1.0000 × 1020 5.6448 × 1014
1.0000 × 1015 1.2302 × 1020
1.0000 × 1010 2.8015 × 1019

Applying Theorem 3.12, Table 3.2 gives the maximum allowable value h for
different τ .
From Theorems 3.8 and 3.12, the admissible mixed time delays are large.
The reason of why we are able to obtain such large mixed time delays is we employ
PWCLMIS approach and the value fields of time delays in PWCLMIS are free.
Observing the structures of PWCLMIS. All systems parameters A, W0 , W1 , W2 , Σμ ,
Σ1 , Σ2 , M, N1 , N2 and N3 are in one LMI, and the time delays τ and h are in other
one LMI which without system parameters. At the same time, there are free-weighting
matrices H, J, K , L , R, S in the latter LMI. So, the value fields of time delays are
large (or free).

3.1.5 Conclusion

This section has proposed new sufficient conditions of global exponential stability for
deterministic and uncertain stochastic high-order neural networks. These conditions
are discrete and distributed time-varying delay-dependent conditions. The concept
of PWCLMIS has been developed in this section. The criteria have been developed in
terms of PWCLMIS. Large mixed time delays can be achieved by using PWCLMIS
approach. And the criteria are more general than those in some recent works. Two
numerical examples have been given to demonstrate the merit of presented criteria.
54 3 Robust Stability and Synchronization of Neural Networks

3.2 Exponential Stability of Hybrid SDNN with Nonlinearity

3.2.1 Introduction

Neural networks (cellular neural networks, Hopfield neural networks and bidirec-
tional associative memory networks) have been intensively studied over the past few
decades and have found application in a variety of areas, such as image processing,
pattern recognition, associative memory, and optimization problems [24, 26, 63].
In reality, time-delay systems are frequently encountered in various areas, e.g., in
neural networks, where a time delay is often a source of instability and oscillations.
Recently, both delay-independent and delay-dependent sufficient conditions have
been proposed to verify the asymptotical or exponential stability of delay neural
networks, see e.g. [9, 21, 27, 30, 47, 53, 67].
On the other hand, stochastic modeling has come to play an important role in
many real systems [3, 34], as well as in neural networks. Neural networks have finite
modes, which may jump from one to another at different times. Recently, it has been
shown in [23, 49] that the jumping between different neural network modes can be
governed by a Markovian chain. Furthermore, in real nervous systems, the synaptic
transmission is a noisy process brought on by random fluctuations from the release of
neurotransmitters and other probabilistic causes. It has also been known that a neural
network could be stabilized or destabilized by certain stochastic inputs [51]. Hence,
the stability analysis problem for stochastic neural networks becomes increasingly
significant, and some results related to this problem have recently been published,
see e.g. [48, 51, 53].
In this section, we study the global exponential stability problem for a class of
hybrid stochastic neural networks with mixed time delays and Markovian jumping
parameters, where the mixed delays comprise discrete and distributed time delays,
the parameter uncertainties are norm-bounded, and the neural networks are subjected
to stochastic disturbances described in terms of a Brownian motion. By utilizing a
Lyapunov-Krasovskii functional candidate and using the well-known S-procedure,
we convert the addressed stability analysis problem into a convex optimization prob-
lem. In this letter, the free-weighting matrix approach is employed to derive a linear
matrix inequality (LMI)-based delay-dependent exponential stability criterion for
neural networks with mixed time delays and Markovian jumping parameters. Note
that LMIs can be easily solved by using the Matlab LMI toolbox, and no tuning of
parameters is required. Numerical examples demonstrate the effectiveness of this
method.

3.2.2 Problem Formulation

In this letter, the neural network with mixed time delays is described as follows:
 t
u̇(t) = −Au(t) + W0 g0 (u(t)) + W1 g1 (u(t − h)) + W2 g2 (u(s))ds + V (3.48)
t−τ
3.2 Exponential Stability of Hybrid SDNN with Nonlinearity 55

where u(t) = [u 1 (t), u 2 (t), . . . , u n (t)]T ∈ Rn is the state vector associated


with n neurons and the diagonal matrix A = diag{a1 , a2 , . . . , an } has posi-
tive entries ak > 0. W0 = (wi0j )n×n , W1 = (wi1j )n×n and W2 = (wi2j )n×n
are, respectively, the connection weight matrix, the discretely delayed connection
weight matrix, and the distributively delayed connection weight matrix. gk (u(t)) =
[gk1 (u 1 ), gk2 (u 2 ), . . . , gkn (u n )]T (k = 0, 1, 2) denotes the neuron activation func-
tion with gk (0) = 0, and V = [V1 , V2 , . . . , Vn ]T is a constant external input vector.
The scalar h > 0, which may be unknown, denotes the discrete time delay, where
the scalar τ > 0 is the known distributed time delay.

Assumption 3.17 The neuron activation functions in (3.48) are bounded and satisfy
the following Lipschitz condition

|gk (x) − gk (y)| ≤ |G k (x − y)|, ∀x, y ∈ R(k = 0, 1, 2) (3.49)

where G k ∈ Rn×n are known constant matrices.

Remark 3.18 In this letter, none of the activation functions are required to be contin-
uous, differentiable, and monotonically increasing. Note that the types of activation
functions in (3.49) have been used in many papers, see [47–49, 51, 53].

Let u ∗ be the equilibrium point of (3.48). For the purpose of simplicity, we trans-
form the intended equilibrium u ∗ to the origin by letting x = u − u ∗ , and then the
system (3.48) can be transformed into:
 t
ẋ(t) = −A(x) + W0 lo (x(t)) + W1l1 (x(t − h)) + W2 l2 (x(s))ds (3.50)
t−τ

where x(t) = [x1 (t), x2 (t), . . . , xn (t)]T ∈ Rn is the state vector of the transformed
system. It follows from (3.49) that the transformed neuron activation functions
lk (x) = gk (x + u ∗ ) − gk (u ∗ )(k = 0, 1, 2) satisfy

|lk (x)| ≤ |G k (x)| (3.51)

where G k ∈ Rn×n (k = 0, 1, 2) are specified in (3.49).


By the model (3.50), we are in a position to introduce the hybrid stochastic neural
networks with mixed time delays and nonlinearity as follows.
We consider the following hybrid stochastic neural networks with mixed time
delays and nonlinearity, which is actually a modification of (3.50).

d x(t) = [−(A(r (t)) + ΔA(r (t)))x(t) + (W0 (r (t)) + ΔW0 (r (t)))l0 (x(t))
 t 1 (r (t)) + ΔW1 (r (t)))l1 (x(t − h)) + (W2 (r (t)) + ΔW2 (r (t)))
+ (W
× t−τ l2 (x(s))ds]dt + σ(t, x(t), x(t − h), r (t))dω(t).
(3.52)
56 3 Robust Stability and Synchronization of Neural Networks

where {r (t), t ≥ 0} is a right-continuous Markov chain which takes values in the


finite space S = {1, 2, . . . , S} with generator Γ = (πi j )(i, j ∈ S). For notational
convenience, we give the following definitions:

d x(t) = y(t, i)dt + σ(t, x(t), x(t − h), r (t))dω(t) (3.53)

where

y(t, i) = [−(A(r (t)) + ΔA(r (t)))x(t) + (W0 (r (t)) + ΔW0 (r (t)))l0 (x(t))
 t 1 (r (t)) + ΔW1 (r (t)))l1 (x(t − h)) + (W2 (r (t)) + ΔW2 (r (t)))
+ (W
× t−τ l2 (x(s))ds]dt,

ω(t) = [ω1 (t), ω2 (t), . . . , ωm (t)]T ∈ Rm is a Brownian motion defined on


(Ω, F, {Ft }t≥0 , P). Here,

ΔA(r (t)) = M A (r (t))F(t, r (t))N A (r (t)), ΔWk (r (t))


= Mk (r (t))F(t, r (t))Nk (r (t)) (3.54)
k = 0, 1, 2

where ΔA(r (t)) is a diagonal matrix, and M A (r (t)), N A (r (t)), Mk (r (t)), Nk (r (t))
(k = 0, 1, 2), are known real constant matrices with appropriate dimensions at mode
r (t). The matrix F(t, r (t)), which may be time-varying, is unknown and satisfies

F T (t, r (t))F(t, r (t)) ≤ I, ∀t ≥ 0; r (t) = i ∈ S (3.55)

Assume that σ : R+ × Rn × Rn × S is local Lipschitz continuous and satisfies the


linear growth condition [48]. Moreover, σ satisfies

trace[σ T (t, x(t), x(t − h), r (t))σ(t, x(t), x(t − h), r (t))]
≤ |Σ1,r (t) x(t)|2 + |Σ2,r (t) x(t − h)|2 (3.56)

where Σ1i and Σ2i are known constant matrices with appropriate dimensions.
Observe the system (3.52) and let x(t; ξ) denote the state trajectory from the ini-
tial data x(θ) = ξ(θ) on −h ≤ θ ≤ 0 in L2F0 ([−h, 0]; Rn ). Clearly, the system
(3.52) admits an equilibrium point (trivial solution) x(t; 0) ≡ 0 corresponding to the
initial data ξ = 0. For all δ ∈ [−d, 0], suppose that ∃ > 0, such that

|x(t + δ)| ≤ |x(t)|, d = max{τ , h} (3.57)

Recall that the Markovian process {r (t), t ≥ 0} takes values in the finite set
S = {1, 2, . . . , S}. For the sake of simplicity, we denote
3.2 Exponential Stability of Hybrid SDNN with Nonlinearity 57

A(i) = Ai , Wk (i) = Wki , M A (i) = W Ai , Mk (i) = Mki ,


N A (i) = N Ai , Nk (i) = Nki , Σ1 (i) = Σ1i , Σ2 (i) = Σ2i (3.58)

Note that the network mode we work on next is r (t) = i, ∀i ∈ S.


Remark 3.19 The parameter uncertainty structure as in (3.54), (3.55) has been
widely used in the literature addressing problems of robust control systems and
neural networks, see [23, 48, 53] and references therein.

Remark 3.20 The condition (3.56) imposed on the stochastic disturbance term,
σ(t, x(t), x(t − h), r (t)), has been exploited in recent papers dealing with stochas-
tic neural networks [57]. However, Markovian jumping parameters have not been
considered in [57]. The following stability concepts are needed in this section.

Definition 3.21 For the system (3.52) and every ξ ∈ L2F0 ([−h, 0]; Rn ), the equilib-
rium point is asymptotically stable in the mean square if, for every network mode,

lim E|x(t; ξ)|2 = 0


t→∞

and is globally exponentially stable in the mean square if, for every network mode,
there exist scalars α > 0 and β > 0 such that

E|x(t; ξ)|2 ≤ αe−βt sup E|ξ(θ)|2 .


−h≤θ≤0

The main purpose of the rest of this letter is to establish LMI-based stable criteria
under which the system (3.52) is exponential stable in the mean square.

3.2.3 Main Results and Proofs

Firstly, we consider the uncertainty-free case. That means, there are no parameter
uncertainties.

Theorem 3.22 The neural network (3.52) with F(t, r (t)) = 0 is globally expo-
nentially stable in the mean square, if ∀i ∈ S, there exist positive scalars ρi > 0,
εki > 0, (k = 1, 2, . . . , 8), positive definite matrices Pi , (i = 1, 2, . . . , S), Q 1 ,
Q 2 , S1 , S2 and matrices Hi = [H1i H2i H3i H4i ], L i = [L 1i L 2i L 3i L 4i ],
Ri = [R1i R2i R3i R4i ],
⎡ ⎤
X 11i X 12i X 13i X 14i
⎢ ∗ X 22i X 23i X 24i ⎥
Xi = Xi = ⎢
T
⎣ ∗
⎥,
∗ X 33i X 34i ⎦
∗ ∗ ∗ X 44i
58 3 Robust Stability and Synchronization of Neural Networks
⎡ ⎤
Y11i Y12i Y13i Y14i
⎢ ∗ Y22i Y23i Y24i ⎥
YiT = Yi = ⎢
⎣ ∗
⎥,
∗ Y33i Y34i ⎦
∗ ∗ ∗ Y44i

such that the following linear matrix inequalities hold.

Pi < ρi I (3.59a)
Φ
⎡i = ⎤
Φ11i Φ12i Φ13i Φ14i Pi W0i Pi W1i Pi W2i H1iT T
L 1i TW
R1i T T
0i R1i W1i R1i W2i
⎢ ⎥
⎢ ∗ Φ22i Φ23i Φ24i 0 0 TW
R2i0 0i R2i W1i R2i W2i ⎥
H2iT
T T T
L 2i
⎢ ⎥
⎢ ∗ ∗ Φ33i Φ34i H3iT T TW T T ⎥
⎢ 0 0 0 L 3i R3i 0i R3i W1i R3i W2i ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ Φ44i 0 0 0 T
H4i L 4i R4i W0i R4i W1i R4i W2i ⎥
T T T T
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ −ε1i I 0 ⎥
⎢ 0 0 0 0 0 0 ⎥
⎢ ∗ 0 ⎥
⎢ ∗ ∗ ∗ ∗ −ε2i I 0 0 0 0 0 ⎥ < 0,
⎢ ∗ ∗ ∗ ∗ ∗ ∗ −ε3i I 0 ⎥
⎢ 0 0 0 0 ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −ε4i I 0 0 ⎥
⎢ 0 0 ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −ε5i I 0 ⎥
⎢ 0 0 ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −ε6i I 0 ⎥
⎢ 0 ⎥
⎣ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −ε7i I 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −ε8i I

(3.59b)
⎡ ⎤
−ε3i τ G 2T G 2 + Q 2 + ε8i τ G 2T G 2 + X 11i X 12i X 13i X 14i H1iT
⎢ ⎥
⎢ ∗ X 22i X 23i X 24i H2iT ⎥
⎢ ⎥
Ψi = ⎢
⎢ ∗ ∗ X 33i X 34i H3iT ⎥≥0
⎥ (3.59c)
⎢ ⎥
⎣ ∗ ∗ ∗ X 44i H4iT ⎦
∗ ∗ ∗ ∗ S1
⎡ T ⎤
Y11i Y12i Y13i Y14i L 1i
⎢ ⎥
⎢ ∗ Y22i Y23i Y24i T
L 2i ⎥
⎢ ⎥
Ωi = ⎢
⎢ ∗ ∗ Y33i Y34i T
L 3i ⎥ ≥ 0,
⎥ (3.59d)
⎢ ⎥
⎣ ∗ ∗ ∗ Y44i T
L 4i ⎦
∗ ∗ ∗ ∗ S2

where


S
Φ11i = −Pi Ai − AiT Pi + Q 1 + τ Q 2 + πi j P j + ε1i G 0T G 0 + ρi Σ1iT Σ1i
j=1 ,
+ ε6i G 0T G 0 + ε4i τ Σ1iT Σ1i + ε5i hΣ1iT Σ1i + H1iT + H1i + L 1i T +L
1i
− R1i Ai − Ai R1i + τ X 11i + hY11i ,
T T

Φ12i = −H1iT + H2i + L 2i − AiT R2i + τ X 12i + hY12i ,


Φ13i = H3i − L 1i
T + L − AT R + τ X
3i i 3i 13i + hY13i ,
Φ14i = H4i + L 4i − R1i
T − AT R + τ X
i 4i 14i + hY14i ,
3.2 Exponential Stability of Hybrid SDNN with Nonlinearity 59

Φ22i = −H2iT − H2i + τ X 22i + hY22i ,


Φ23i = −H3i − L 2i T +τX
23i + hY23i ,
Φ24i = −H4i − R2i + τ X 24i + hY24i ,
T

Φ33i = ε2i G 1T G 1 + ρΣ2iT Σ2i − Q 1 + ε7i G 1T G 1 + ε4i τ Σ2iT Σ2i + ε5i hΣ2iT Σ2i −
L 3i − L 3i + τ X 33i + hY33i ,
T

Φ34i = −L 4i − R3i T +τX


34i + hY34i ,
Φ44i = τ S1 + h S2 − R4iT − R +τX
4i 44i + hY44i

Proof Construct the following Lyapunov-Krasovskii functional candidates:


t 0 t
V (t, x(t), r (t)) = x T (t)P(r (t))x(t) + t−h x T (s)Q 1 x(s)ds + −τ t+s x T (η)Q 2 x(η)dηds
t t
+ t−τ s ε4i [Σ1 (r (v))x(v)2 + Σ2 (r (v))x(v − h)2 ]dvds
t t
+ t−τ s y T (v, r (v))S1 y(v, r (v))dvds
t t
+ t−h s ε5i [Σ1 (r (v))x(v)2 + Σ2 (r (v))x(v − h)2 ]dvds
t t T
+ t−h s y (v, r (v))S2 y(v, r (v))dvds
(3.60)

where Pi > 0, Q 1 > 0, Q 2 > 0, S1 > 0, S2 > 0, ε4i > 0, ε5i > 0(i = 1, 2, . . . S)
are to be determined. By Itô differential formula, the stochastic derivation of
V (t, x(t), r (t)) along (3.52) with F(t, r (t)) = 0 can be obtained as follows:

d V (t, x(t), i) = LV (t, x(t), i)dt + (maddt[x T (t)Pi σ(t, x(t), x(t − h), i)])dω(t)
(3.61)
where


S
LV (t, x(t), i) = x T (t)[−maddt (Pi Ai ) + Q 1 + τ Q 2 + πi j P j ]x(t)
j=1
+ maddt (x T (t)Pi W0i l0 (x(t))) + maddt (x T (t)Pi W1i l1 (x(t − h)))
+ trace[σ T (t, x(t), x(t − h), i)Pi σ(t, x(t), x(t − h), i)]
t t
+ maddt (x T (t)Pi W2i t−τ l2 (x(s))ds) − x T (t − h)Q 1 x(t − h) − t−τ x T (s)Q 2 x(s)ds
t
+ 2[x T (t)H1iT + x T (t − τ )H2iT + x T (t − h)H3iT + y T (t, i)H4iT ][x(t) − x(t − τ ) − t−τ d x(s)]

T + x T (t − τ )L T + x T (t − h)L T + y T (t, i)L T ][x(t) − x(t − h) − t d x(s)]
+ 2[x T (t)L 1i 2i 3i 4i t−h
+ 2[x T (t)R1i T + x T (t − τ )R T + x T (t − h)R T + y T (t, i)R T ][−A x(t) + W l (x(t))
2i 3i 4i i 0i 0
t
+ W1i l1 (x(t − h)) + W2i t−τ l2 (x(s))ds − y(t, i)]
t t
+ τ ξiT (t)X i ξi (t) − t−τ ξiT (t)X i ξi (t)ds + hξiT (t)Yi ξi (t) − t−h ξiT (t)Yi ξi (t)ds
t
+ ε4i τ [Σ1i x(t)2 + Σ2i x(t − h)2 ] − t−τ ε4i [Σ1i x(s)2 + Σ2i x(s − h)2 ]ds
t
+ τ y T (t, i)S1 y(t, i) − t−τ y T (s, r (s))S1 y(s, r (s))ds
t
+ ε5i h[Σ1i x(t)2 + Σ2i x(t − h)2 ] − t−h ε5i [Σ1i x(s)2 + Σ2i x(s − h)2 ]ds
t
+ hy T (t, i)S2 y(t, i) − t−h y T (s, r (s))S2 y(s, r (s))ds
(3.62)
with ξi (t) = [x(t)T x(t − τ )T x(t − h)T y(t, i)T ]T .
60 3 Robust Stability and Synchronization of Neural Networks

From Lemma 1.13 and (3.51), we have:

maddt (x T (t)Pi W0i l0 (x(t))) ≤ ε−1


1i x (t)Pi W0i W0i Pi x(t) + ε1i l0 (x(t))l0 (x(t))
T T T
−1 T
≤ ε1i x (t)Pi W0i W0iT Pi x(t) + ε1i x T (t)G 0T G 0 x(t),
(3.63)
maddt (x T (t)Pi W1i l1 (x(t − h))) ≤ ε−1
2i x (t)Pi W1i W1i Pi x(t) + ε2i l1
T T T
−1 T
×(x(t − h))l1 (x(t − h)) ≤ ε2i x (t)Pi W1i W1i Pi x(t) + ε2i x (t − h)G 1T G 1 x(t − h),
T T
(3.64)

t
maddt (x T (t)Pi W2i t−τ l2 (x(s))ds) T  
ε−1
t t
≤ x T (t)P W W T P x(t) + ε
i 2i i 3i l 2 (x(s))ds l 2 (x(s))ds
3i 2i t−τ
t
t−τ 
(3.65)
≤ ε−1 x T (t)P W W T P x(t) + ε τ
i 2i 2i i 3i l 2 (x(s)) T l (x(s))ds
2
3i t−τ 
−1 T t
≤ ε3i x (t)Pi W2i W2i Pi x(t) + ε3i τ t−τ x (s)G 2 G 2 x(s)ds .
T T T

Next, it follows from the conditions (3.56) that

trace[σ T (t, x(t), x(t − h), i)Pi σ(t, x(t), x(t − h), i)]
≤ λmax (Pi )trace[σ T (t, x(t), x(t − h), i)σ(t, x(t), x(t − h), i)]
(3.66)
≤ λmax (Pi )[x(t)T Σ1iT Σ1i x(t) + x(t − h)T Σ2iT Σ2i x(t − h)]
≤ ρi [x(t)T Σ1iT Σ1i x(t) + x(t − h)T Σ2iT Σ2i x(t − h)].

Furthermore, it can be seen that


t
−2ξiT (t)HiT t−τ d x(s)
t t
= −2ξiT (t)HiT t−τ y(s, r (s))ds − 2ξiT (t)HiT t−τ σ(s, x(s), x(s − h), i)dω
t
≤ −2ξiT (t)HiT t−τ y(s, r (s))ds + ε−1 ξ T (t)HiT Hi ξi (t)
 2 i
4i
 t 
+ ε4i  t−τ σ(s, x(s), x(s − h), i)dω  .

Since
 
t t
E − 2ξiT (t)HiT t−τ d x(s) ≤ −2ξiT (t)HiT t−τ y(s, r (s))ds + ε−14i ξi (t)Hi Hi ξi (t)
T T
t
+ t−τ ε4i E[Σ1 (r (s))x(s)2 + Σ2 (r (s))x(s − h)2 ]ds.
(3.67)

Similarly, we can obtain that


 
t t
E − 2ξi (t)L i t−h d x(s) ≤ −2ξiT (t)L iT t−h
T T y(s, r (s))ds + ε−1
5i ξi (t)L i L i ξi (t)
T T
t
+ t−h ε5i E[Σ1 (r (s))x(s)2 + Σ2 (r (s))x(s − h)2 ]ds,
(3.68)
3.2 Exponential Stability of Hybrid SDNN with Nonlinearity 61

2ξiT RiT W0i l0 (x(t)) ≤ ε−1


6i ξi (t)Ri W0i W0i Ri ξi (t) + ε6i l0 (x(t))l0 (x(t)) (3.69)
T T T T
−1 T
≤ ε6i ξi (t)Ri W0i W0i Ri ξi (t) + ε6i x(t) G 0T G 0 x(t),
T T T

2ξiT RiT W1i l1 (x(t −h)) ≤ ε−1


7i ξi (t)Ri W1i W1i Ri ξi (t)+ε7i x(t −h) G 1 G 1 x(t −h)
T T T T T

(3.70)
 t  t
2ξiT RiT W2i l2 (x(s))ds ≤ ε−1 T T T
8i ξi (t)Ri W2i W2i Ri ξi (t)+ε8i τ x(s)T G 2T G 2 x(s)ds
t−τ t−τ
(3.71)

ε4i τ Σ1i x(t)2 = ε4i τ x T (t)Σ1iT Σ1i x(t),


ε4i τ Σ2i x(t − h)2 = ε4i τ x T (t − h)Σ2iT Σ2i x(t − h),
ε5i h Σ1i x(t)2 = ε5i hx T (t)Σ1iT Σ1i x(t),
ε5i h Σ2i x(t)2 = ε5i hx T (t − h)Σ2iT Σ2i x(t − h).

In views of (3.62)–(3.71), it follows that


   
t ξiT (t) ξiT (t)
E{LV (x(t), i)} ≤ ξiT (t)Θi ξi (t) − t−τ
E T Ψi ds
y (s, r (s)) y T (s, r (s))
   
t ξ T (s) ξiT (s)
− E T i Ωi ds,
t−τ y (s, r (s)) y T (s, r (s))

where
⎡ ⎤
Θ11i 0 0 0
⎢ 0 0 0 0 ⎥
Θi = ⎢
⎣ 0
⎥ + maddt{H T [1
⎦ − 1 0 0]} + ε−1 T
4i Hi Hi
0 Θ33i 0 i
0 0 0 τ S1 + h S2
+ maddt{L iT [1 0 − 1 0]} + ε−1
5i L i L i + maddt{Ri [−Ai 0 0
T T − 1]}
+ τ X i + hYi + ε−1 T T
6i Ri W0i W0i Ri + ε−1 T T
7i Ri W1i W1i Ri + ε−1
8i Ri W2i W2i Ri ,
T T
(3.72)

with


S
Θ11i = −Pi Ai − AiT Pi + Q 1 + τ Q 2 + πi j P j + ε−1 T
1i Pi W0i W0i Pi
j=1
+ ε1i G 0T G 0 + ε−1 T −1
2i Pi W1i W1i Pi + ε3i Pi W2i W2i Pi + ρi Σ1i Σ1i + ε6i G 0 G 0
T T T

+ ε4i τ Σ1iT Σ1i + ε5i hΣ1iT Σ1i ,

Θ33i = ε2i G 1T G 1 + ρi Σ2iT Σ2i − Q 1 + ε7i G 1T G 1 + ε7i G 1T G 1 + ε4i τ Σ2iT Σ2i + ε5i hΣ2iT Σ2i ,

which with (3.59a) and using Schur complement, implies that there exists a scalar
α = λmin (−Θi ) > 0, such that
62 3 Robust Stability and Synchronization of Neural Networks

E{LV (x(t), i)} ≤ −αE(|x(t)|2 ) (3.73)

In the following, we will prove the mean-square exponential stability of the system
(3.52). To this end, we define λ P = max λmax (Pi ),λ p = min λmin (Pi ).
i∈S i∈S
According to d x(t) = y(t, i)dt + σ(t, x(t), x(t − h), r (t))dω(t), |x(t + δ)| ≤
|x(t)| and (3.60), there exist positive scalars δ1 , δ2 such that
 t
λ p E|x(t)| ≤ EV (x(t), t, i) ≤ λ P E|x(t)| +
2 2
δ1 E|x(s)|2 ds (3.74)
t−d

and

EV (x(0), 0, r (0)) ≤ δ2 E φ(t)2 (3.75)

where d = max{τ , h}, t ∈ [−h, 0], x(t) = φ(t).


Let δ be a root to the inequality

δ(λ P + dδ1 eedτ ) ≤ α (3.76)

Considering and by Dynkin’s formula, one obtains that for each r (t) = i, i ∈ S,
t > 0,

E{eδt V (x(t), t, i)} = E{V (x(0), 0, r (0))}


 t
+E eδs [δV (x(s), s, r (s)) + LV (x(s), s, r (s))]ds
0
(3.77)

It then follows from (3.73), (3.74) and (3.75) that


t s
E{eδt V (x(t), t, i)} ≤ δ2 E{φ(t)2 } + E 0 eδs δ(λ P |x(s)|2 + s−d δ1 |x(β)|2 dβ)ds
 t δs 
= δ2 E{φ(t)2 } + E 0 δeδs λ P |x(s)|2 ds
t
− αE 0 e |x(s)|ds
s
t δs t δs
+ E 0 e δds s−d δ1 |x(β)| dβ − αE 0 e |x(s)|2 ds.
2

(3.78)

We can conclude that


 t δs  s
s−d δ1 |x(β)|
2 dβds
0 e  β+d
≤ −d δ1 |x(β)|2 β eδs dsdβ
t
t (3.79)
≤ deδd −d δ1 |x(β)|2 eδβ dβ
t 0
≤ deδd ( 0 δ1 |x(s)|2 eδs ds + −d δ1 |x(s)|2 eδs ds).
3.2 Exponential Stability of Hybrid SDNN with Nonlinearity 63

From (3.74), (3.78), (3.79) and (3.76), we obtain that

eδt λ p E|x(t)|2 ≤ eδt EV (x(t), t, i) ≤ (δ2 + dδδ1 eδd )E φ(t)2

or

E|x(t)|2 ≤ λ−1 δd 2 −δt


p (δ2 + dδδ1 e )E φ(t) e ,

which implies that the trivial solution of system (3.52) is exponentially stable in the
mean square. This completes the proof.

Remark 3.23 It is obvious that the freematrices Hi , L i , Ri in (3.62)  t express


t
the relationships among (x(t), x(t − τ ), t−τ d x(s)), (x(t), x(t − h), t−h d x(s))
t
and (Ai x(t), W0i l0 x(t), W1i l1 (x(t − h)), W2i t−τ l2 (x(s))ds, y(t, i)), respectively,
which are taken into account. These free matrices can be easily determined by solving
LMIs (3.59d).

By Theorem 3.22, we are in a position to present the solution to globally expo-


nentially stable for the system (3.52) with uncertainty.

Theorem 3.24 The dynamics of the neural network (3.52) is globally exponentially
stable in the mean square, if ∀i ∈ S, exist positive scalars ρi > 0, ε ji > 0, ( j =
1, 2, . . . , 8), λn > 0(n = 1, 2, . . . , 7), positive definite matrices Pi (i = 1, 2, . . . ,
S), Q 1 , Q 2 , S1 , S1 and matrices Hi = [H1i H2i ⎡H3i H4i ], L i = [L 1i ⎤L 2i L 3i
X 11i X 12i X 13i X 14i
⎢ ∗ X 22i X 23i X 24i ⎥
L 4i ],Ri = [R1i R2i R3i R4i ], X iT = X i = ⎢ ⎣ ∗
⎥, Y T =
∗ X 33i X 34i ⎦ i
∗ ∗ ∗ X 44i
⎡ ⎤
Y11i Y12i Y13i Y14i
⎢ ∗ Y22i Y23i Y24i ⎥
Yi = ⎢ ⎥
⎣ ∗ ∗ Y33i Y34i ⎦, such that (3.59a), (3.59c), (3.59d) and the following
∗ ∗ ∗ Y44i
linear matrix inequalities hold:
⎡ ⎤
Φi + Φib Γ1i Γ2i Γ3i Γ4i Γ5i Γ6i Γ7i
⎢ ∗ −λ1i 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ −λ2i 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −λ3i 0 0 0 0 ⎥
⎢ ⎥<0 (3.80)
⎢ ∗ ∗ ∗ ∗ −λ4i 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ −λ5i 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ ∗ −λ6i 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ ∗ −λ7i
64 3 Robust Stability and Synchronization of Neural Networks

where Φi is defined in (3.59b), and

Φib =
⎡ ⎤
λ1i N Ai
T N
Ai 0 0 0 0 0 0 0 0 0 0 0
⎢ 0 0 0 0 0 0 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ 0 0 0 0 0 0 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ 0 0 0 0 0 0 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ 0 λ2i N0iT N0i ⎥
⎢ 0 0 0 0 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ 0 0 0 0 0 λ3i N1iT N1i 0 0 0 0 0 0 ⎥
⎢ ⎥,
⎢ 0 0 0 0 0 0 λ4i N2iT N2i 0 0 0 0 0 ⎥
⎢ ⎥
⎢ 0 0 0 0 0 0 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ 0 0 0 0 0 0 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ 0 0 0 0 0 0 0 0 0 λ5i N0iT N0i 0 0 ⎥
⎢ ⎥
⎣ 0 0 0 0 0 0 0 0 0 0 λ6i N1iT N1i 0 ⎦
0 0 0 0 0 0 0 0 0 0 0 λ7i N2iT N2i

⎡ ⎤ ⎡ ⎤ ⎡ ⎤
−R1i
TM − P M
Ai i Ai Pi M0i Pi M1i
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ −R2i
TM
Ai ⎥ ⎢ 0 ⎥ ⎢ 0 ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ −R3i
TM 0 ⎥ ⎢ 0 ⎥ ⎢ ⎥
⎢ Ai ⎥ ⎢ ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ −R4iTM
Ai 0 ⎥ ⎢ 0 ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ 0 0 ⎥ ⎢ 0 ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
Γ1i = ⎢
⎢ 0 0 ⎥ Γ2i = ⎢
⎥ ⎢ 0 ⎥ Γ3i = ⎢
⎥ ⎢


⎢ 0 0 ⎥ ⎢ 0 ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ 0 0 ⎥ ⎢ 0 ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ 0 0 ⎥ ⎢ 0 ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ 0 0 ⎥ ⎢ 0 ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ 0 0 ⎦ ⎣ 0 ⎦ ⎣ ⎦
0 0 0
⎡ ⎤ ⎡ T ⎤ ⎡ T ⎤ ⎡ T ⎤
Pi M2i R M
1i 0i R M
1i 1i R1i M2i
⎢ ⎥ ⎢ T ⎥ ⎢ T ⎥ ⎢ T ⎥
⎢ 0 ⎥ ⎢ R2i M0i ⎥ ⎢ R2i M1i ⎥ ⎢ R2i M2i ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ 0 ⎥ ⎢ R T M0i ⎥ ⎢ R T M1i ⎥ ⎢ R T M2i ⎥
⎢ ⎥ ⎢ 3i ⎥ ⎢ 3i ⎥ ⎢ 3i ⎥
⎢ ⎥ ⎢ T ⎥ ⎢ T ⎥ ⎢ T ⎥
⎢ 0 ⎥ ⎢ R4i M0i ⎥ ⎢ R4i M1i ⎥ ⎢ R4i M2i ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ 0 ⎥ ⎢ 0 ⎥ ⎢ 0 ⎥ ⎢ 0 ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
Γ4i = ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ 0 ⎥ Γ5i = ⎢ 0 ⎥ Γ6i = ⎢ 0 ⎥ Γ7i = ⎢ 0 ⎥
⎢ ⎥
⎢ 0 ⎥ ⎢ 0 ⎥ ⎢ 0 ⎥ ⎢ 0 ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ 0 ⎥ ⎢ 0 ⎥ ⎢ 0 ⎥ ⎢ 0 ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ 0 ⎥ ⎢ 0 ⎥ ⎢ 0 ⎥ ⎢ 0 ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ 0 ⎥ ⎢ 0 ⎥ ⎢ 0 ⎥ ⎢ 0 ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ 0 ⎦ ⎣ 0 ⎦ ⎣ 0 ⎦ ⎣ 0 ⎦
0 0 0 0

Proof Replacing Ai and Wki , k = 0, 1, 2, in (3.59b) with Ai + M Ai Fi N Ai and


Wki + Mki Fi Nki , respectively, we have
3.2 Exponential Stability of Hybrid SDNN with Nonlinearity 65

Φi + maddt (Γ1i Fi Γ1bi ) + maddt (Γ2i Fi Γ2bi ) + maddt (Γ3i Fi Γ3bi )


+ maddt (Γ4i Fi Γ4bi ) + maddt (Γ5i Fi Γ5bi ) + + maddt (Γ6i Fi Γ6bi ) +
+ maddt (Γ7i Fi Γ7bi ) < 0
(3.81)

where
 
Γ1bi = N Ai 0 0 0 0 0 0 0 0 0 0 0 ,
 
Γ2bi = 0 0 0 0 N0i 0 0 0 0 0 0 0 ,
 
Γ3bi = 0 0 0 0 0 N1i 0 0 0 0 0 0 ,
 
Γ4bi = 0 0 0 0 0 0 N2i 0 0 0 0 0 ,
 
Γ5bi = 0 0 0 0 0 0 0 0 0 N0i 0 0 ,
 
Γ6bi = 0 0 0 0 0 0 0 0 0 0 N1i 0 ,
 
Γ7bi = 0 0 0 0 0 0 0 0 0 0 0 N2i .

By Theorem 3.22, the system (3.52) is globally exponentially stable in the mean
square, if (3.81), (3.59a), (3.59c) and (3.59d) hold for all F(t, r (t)) satisfying (3.55).
By Lemma 1.22, it follows that for all F(t, r (t)) satisfying (3.55), (3.81) holds if
and only if there exists scalars λ ji , ( j = 1, 2, . . . , 7) such that

Φi + λ−1 T −1 T T −1
1i Γ1i Γ1i + λ1i Γ1bi Γ1bi + λ2i Γ2i Γ2i + λ2i Γ2bi Γ2bi + λ3i Γ3i Γ3i + λ3i Γ3bi Γ3bi
T T T
−1 −1 −1
+λ4i Γ4i Γ4i + λ4i Γ4bi Γ4bi + λ5i Γ5i Γ5i + λ5i Γ5bi Γ5bi + λ6i Γ6i Γ6i + λ6i Γ6bi Γ6bi
T T T T T T

+λ−1
7i Γ7i Γ7i + λ7i Γ7bi Γ7bi < 0.
T T

(3.82)

By Schur complement, it can be easily shown that (3.82) is equivalent to (3.80). This
completes the proof.
Remark 3.25 Let S = {1} and F(t, r (t)) = 0, the neural network (3.52) can be
reduced to (3.51) in [57]. Let S = {1} and F(t, r (t)) = 0, the neural network (3.52)
can be rewritten as (3.52) in [48]. Therefore, Theorem 3.22 and Theorem 3.24 in the
letter can be regarded as the expansions of Theorem 1 in [57] and Theorem 1 in [48]
respectively. However, for delayed neural networks, the criteria proposed in [48] and
[57] are only applicable for systems with some admissible time delay. As is well
known, when the time delay is actually small, the delay-independent conditions tend
to be conservative. In this section, the free-weighting matrix approach is employed
to derive a delay-dependent criterion for neural networks with mixed time delays.

Remark 3.26 Theorem 3.24 presents a sufficient condition to guarantee the global
exponential stability in the mean square for the hybrid stochastic neural networks
66 3 Robust Stability and Synchronization of Neural Networks

with mixed time delays and nonlinearity. For neural networks with Markovian jump-
ing parameters, the problems of global exponential stability analysis for a class of
neural networks have been handled in [49], where both time delays and Markov-
ian jumping parameters are considered. A general of stochastic interval additive
neural networks with time-varying delay and Markovian switching have been stud-
ied in [23]. It is worth noting that the parameter uncertainties, mixed time delays,
and stochastic disturbance have not been fully taken into account. Also, in [23], the
mixed time delays have not been considered. Let F(t, r (t)) = 0, W2 (r (t)) = 0, and
σ(t, x(t), x(t − h), r (t)) = 0 in (3.52), then we can obtain system (3.52) in [49]. It
can be realized that, up to now, the stability analysis problem for neural networks with
Markovian jumping parameters, stochastic disturbance, and mixed time delays has
not been fully investigated despite its practical importance. In this letter, as a special
case, the stability problem has been thoroughly discussed for the hybrid stochastic
neural networks with mixed time delays and nonlinearity.

3.2.4 Numerical Examples

We present two examples here in order to illustrate the usefulness of our main results.
Our aim is to examine the globally exponentially stable of a given delayed neural
network with Markovian jumping parameters. Example 3.15. Consider a two-neuron
network (3.52) with two modes and without parameter uncertainties. The network
parameters are given as follows:
       
2.6 0 2.5 0 0.2 0 0.4 0
A1 = , A2 = , G0 = , G1 = ,
0 2.7 0 2.6 0 0.3 0 0.6
       
0.3 0 1.2 −1.5 1.1 −1.6 −1.1 0.5
G2 = , W01 = , W02 = , W11 = ,
0 0.4 −1.7 1.2 −1.8 1.2 0.5 0.8
       
−1.6 0.1 0.6 0.1 0.8 0.2 0.08 0
W12 = , W21 = , W22 = , Σ11 = ,
0.3 0.4 0.1 0.2 0.2 0.3 0 0.08
       
0.07 0 0.09 0 0.08 0 −0.12 0.12
Σ12 = , Σ21 = , Σ22 = ,Γ = ,
0 0.06 0 0.09 0 0.04 0.11 −0.11
τ = 0.12, h = 0.13, σ(t, x(t), x(t − h), 1) = (0.4x1 (t − h), 0.5x2 (t))T
σ(t, x(t), x(t − h), 2) = (0.5x1 (t), 0.3x2 (t − h))T , lk (x(t)) = tanh(x(t)), k =
0, 1, 2.
By using Matlab LMI toolbox, we solve the LMIs in (3.59a)–(3.59d) and obtain
   
17.6679 4.2912 4.6851 1.8324
P1 = , P2 =
4.2912 21.3771 1.8324 5.7081
ρ1 = 39.4315, ρ2 = 24.6899, ε11 = 64.8119, ε21 = 32.1690, ε31 = 25.0637,
ε41 = 24.4488,
ε51 = 22.2422, ε61 = 32.2754, ε71 = 23.2970, ε81 = 22.6629, ε12 = 22.2509,
ε22 = 17.9420,
3.2 Exponential Stability of Hybrid SDNN with Nonlinearity 67

Fig. 3.1 State trajectory of


the system for Example 3.15

ε32 = 20.2979, ε42 = 24.6447, ε52 = 20.4882, ε62 = 39.1595, ε72 = 27.5308,
ε82 = 25.4427.
Therefore, it follows from Theorem 3.22 that the two-neuron neural network
(3.52) without parameter uncertainties is globally exponentially stable in the mean
square. The responses of the state vector x(t) of system (3.52) for Example 3.15 are
shown in Fig. 3.1, which further illustrate the stability. Example 3.16. We consider
neural network (3.52) with parameter uncertainties. The network data are given as
follows:
⎡ ⎤ ⎡ ⎤ ⎡ ⎤
2.7 0 0 2.6 0 0 0.3 1.8 0.5
A1 = ⎣ 0 2.8 0 ⎦ , A2 = ⎣ 0 2.4 0 ⎦ , W01 = ⎣ −1.1 1.6 1.1 ⎦ ,
0 0 2.9 0 0 2.7 0.6 0.4 −0.3
⎡ ⎤ ⎡ ⎤ ⎡ ⎤
0.2 1.6 0.4 0.8 0.2 0.1 0.2 −0.1 0.3
W02 = ⎣ −1.3 1.4 1.2 ⎦ , W11 = ⎣ 0.2 0.6 0.6 ⎦ , W12 = ⎣ 0.1 0.5 0.4 ⎦ ,
0.4 0.3 −0.2 −0.8 1.1 −1.2 −0.9 1.3 −1.5
⎡ ⎤ ⎡ ⎤
0.5 0.2 0.1 0.6 0.3 0.2  
−2 2
W21 = ⎣ 0.3 0.7 −0.3 ⎦ , W22 = ⎣ 0.2 0.6 −0.5 ⎦ , Γ = ,
1 −1
1.2 −1.1 −0.5 1.4 −1.2 −0.4
F(t, 1) = F(t, 2) = diag(sin(5t), cos(5t), sin(3t)), G 0 = G 1 = G 2 = 0.2I ,
Σ11 = Σ12 = Σ21 = Σ22 = 0.8I ,
M A1 = M A2 = M01 = M02 = M11 = M12 = M21 = M22 = N A1 = N A2
= N01 = N02 = N11 = N12 = N21 = N22 = 0.1I,
τ = 0.12, h = 0.13, lk (x(t)) = tanh(x(t)), k = 0, 1, 2,
σ(t, x(t), x(t − h), 1) = (0.5x1 (t), 0.3x2 (t − h), 0.4x3 (t))T ,
σ(t, x(t), x(t − h), 2) = (0.4x1 (t − h), 0.6x2 (t − h), 0.7x3 (t))T .
By solving the LMIs (3.59a), (3.59c), (3.59d) and (3.80), we obtain
⎡ ⎤ ⎡ ⎤
35.3471 −3.3118 −0.3826 6.9012 −2.2426 −0.1964
P1 = ⎣ −3.3118 33.7867 0.8046 ⎦, P2 = ⎣ −2.2426 5.3516 0.4075 ⎦,
−0.3826 0.8046 30.9089 −0.1964 0.4075 4.1297
68 3 Robust Stability and Synchronization of Neural Networks

Fig. 3.2 State trajectory of


the system for Example 3.16

ρ1 = 44.7997, ρ2 = 21.8688 , ε11 = 106.4891, ε21 = 76.2694, ε31 = 74.3977,


ε41 = 33.2942,
ε51 = 28.3790, ε61 = 61.1545, ε71 = 48.9937, ε81 = 48.0338, ε12 = 34.0457,
ε22 = 33.0474,
ε32 = 33.6439, ε42 = 36.4455, ε52 = 31.6921, ε62 = 60.7741, ε72 = 53.1769,
ε82 = 52.0863,

which indicates that the neural network (3.52) with parameter uncertainties is glob-
ally exponentially stable in the mean square.
For Example 3.16, the responses of the state vector of system (3.52) are shown
as Fig. 3.2. The simulation results imply that the neural network (3.52) is globally
exponentially stable.

3.2.5 Conclusions

In this section, we have addressed the problem of globally exponentially stable analy-
sis for a class of uncertain stochastic delayed neural networks, where both mixed
time delays and Markovian jumping parameter exist. Free-weighting matrices are
employed to express the relationship between the terms in the Leibniz–Newton for-
mula, and an LMI-based globally exponentially stable criterion is derived for delayed
neural networks with mixed time delays and Markovian jumping parameters. Finally,
simulation examples demonstrate the usefulness of the proposed results.
3.3 Anti-Synchronization Control of Unknown CNN … 69

3.3 Anti-Synchronization Control of Unknown CNN


with Delay and Noise Perturbation

3.3.1 Introduction

Chaotic neural networks, since its first putting forward by Pecora and Carroll [44],
illustrates a potential application value in secure communication [1, 7], image restora-
tion [32], optimization problems [35] and many other fields. Recently, research
focused on synchronization issue on chaotic neural networks has been proposed
[5, 41, 56, 61, 62]. Most of them are concerned about synchronization of chaotic
network. Such as complete synchronization [5], generalized synchronization [5],
phase synchronization [56], lag synchronization [62], projective synchronization
[61], anti-synchronization [41], and so forth.
Meanwhile, noise perturbation and time delay, known as an unavoidable nature
influence in practice, are always the reason of instability. That is to say, chaotic neural
network with noise perturbation and delay has great studying prospect.
The stability analysis problem for unknown chaotic neural network with noise
perturbation and delay has now not been properly investigated. Therefore, in this
section, we deal with the stability of a couple of anti-synchronization unknown
chaotic network. With the aid of a special Lyapunov-Krasovskii function, the stability
problem is converted into an optimization problem which can be easily solved using
linear matrix inequality (LMI) approach. Note that by using the MTALAB toolbox,
we can easily solve the LMI, and no tuning of parameters is required [39, 42]. Finally,
a numerical example is chosen to show the effectiveness and validity of the proposed
stability conditions.

3.3.2 Problem Formation

In this section, the master chaotic neural networks system with delay is

d x(t) = [−C x(t) + A f (x(t)) + B f (xτ (t))]dt, (3.83)

where x(t) = [x1 (t), x2 (t), . . . , xn (t)]T ∈ Rn represents the state vector of neural
network; n is the total number of neurons; C = diag{c1 , c2 , . . . , cn } > 0 is the rate
of resting state when disconnection happen; A = (ai j )n×n , B = (bi j )n×n ∈ Rn×n
is the connect weight and delayed connect weight matrix; f is activation func-
tions, f (x(t)) = ( f 1 (xi1 (t), r (t)), f 2 (xi2 (t), r (t)), . . . , f n (xin (t), r (t)))T ∈ Rn ,
f (xτ (t)) = ( f 1 (x1 (t − τ1 )), f 2 (x2 (t − τ2 )), . . . , f n (xn (t − τn )))T ∈ Rn , where
τ > 0 is the transmission delay.
70 3 Robust Stability and Synchronization of Neural Networks

The unknown slave system with noise perturbation and delay is given as

dy(t) = [−Ĉ y(t) + Â f (y(t)) + B̂ f (yτ (t)) + U ]dt


+ H (t, y(t) − x(t − σ), yτ (t) − xτ (t − σ))dω(t), (3.84)

where Ĉ = C + ΔC, Â = A + ΔA and B̂ = B + ΔB, ΔC, ΔA, ΔB are estimations


of unknown matrices C, A, and B; ω(t) is an n-dimensional Brown motion that
satisfying E{dω(t)} = 0 and E{[dω(t)]2 = dt}, σ > 0 is time delay, U is a controller.
In order to develop our main results, following assumptions and lemmas are
required to make.

Assumption 3.27 The neuron activation functions f (x) in (3.83) and (3.85) satisfies
| f i (x) − f i (y)| ≤ λi |x − y|, x, y ∈ R, λi is a constant and λi > 0.

Assumption 3.28 The function H (t, x, y) satisfies the Lipschitz condition, and
there exist constant matrices of appropriate dimensions G 1 , G 2 satisfy

trace[H T (t, x, y)H (t, x, y)] ≤ ||G 1 x||2 + ||G 2 y||2 ,

for all (t, x, y) ∈ R × Rn × Rn .

Assumption 3.29 The equilibrium point of system (3.83) and (3.85) is f (0) ≡ 0,
σ(t, 0, 0) ≡ 0.

Letting e(t) = x(t) + y(t) be the anti-synchronization error, where x(t) and y(t)
are state variables of drive system (3.83) and response system (3.85). Then the error
system can be derived as follows

de(t) = [−Ce(t) + Ag(e(t)) + Bg(e(t − τ ))


− ΔC y(t) + ΔA f (y(t)) + ΔB f (y(t − τ ))]dt
+ H (t, e(t), e(t − τ ))dω(t) + U dt (3.85)

where g(e(t)) = f (x(t)) + f (y(t)), g(eτ (t)) = f (xτ (t)) + f (yτ (t)). The controller
U is designed as follows:

U = K 1 (x(t) + y(t)) + K 2 (x(t − τ ) + y(t − τ ))


= K 1 e(t) + K 2 e(t − τ ), (3.86)

where K 1 and K 2 are feedback gains that can be determined through LMI. Moreover,
From Assumption 3.27, we can easily get

|gi (ei (t))| = | f i (xi (t)) + f i (yi (t))| ≤ λi |ei (t)|. (3.87)

Lemma 3.30 ([38]) Using Assumptions 3.27 and 3.28, supposing e(t) (with e(t) ≡
e(t; t0 , ζ)) is a solution of (3.89), and assuming a continuous, positive function
3.3 Anti-Synchronization Control of Unknown CNN … 71

V (t, x) exits and satisfies: for t ≥ t0 − τ and x ∈ R, there exist positive constants c1 ,
c2 , that c1 |x|2 ≤ V (t, x) ≤ c2 |x|2 ; if t ≥ t0 − τ , there exist constants 0 ≤ β < α,

E(V (t, e(t))) ≤ −αE(V (t, e(t))) + βE(V (t − τ , e(t − τ ))).

If t ≥ t0 , then
c2
E(|e(t; t0 , ζ)|2 ) ≤ E(sups∈[t0 τ ,t0 ] |ζ(s)|2 ) exp(−v + (t − t0 )),
c1

where v + ∈ (0, α−β] is the unique solution of equation v = α−βevτ . Furthermore,


the trivial solution of (3.86) is globally exponentially stable in the mean square.

3.3.3 Main Results

In this section, we build a series of theory and corollaries to study anti-synchronization


between master system (3.83) and slave system (3.85). The aim is to design a con-
troller to realize anti-synchronization between the master system (3.83) and the slave
system (3.85).

Theorem 3.31 Under the above assumptions, system (3.83) and (3.85) is anti-
synchronization, if there exist arbitrary constants λi > 0, vi > 0, i j > 0
and θi j > 0 (i, j = 1, 2, . . . , n) satisfying: time delay coupling weight: Λ =
diag{λ1 , λ2 , . . . , λn }, update laws of parameters in (3.85) are ĉ˙i = vi ei (t)yi (t),
â˙ i j = −i j ei (t) f j (yi (t)), ḃi j = −θi j ei (t) f j (yi (t − τ )). And the feedback gains
K 1 and K 2 of controller U satisfying the following LMI inequality, where
 
Ξ 1
K
Ψ = 1 1T 1 −1 T 2 2 T < 0. (3.88)
2 K2 2 μ Λ λ + G2 G2 − Q

where Ξ1 = 21 μA T A + 21 μΛT Λ + 21 μB T B + G 1T G 1 + K 1 − C + Q.

Proof We use the following Lyapunov functional to derive anti-synchronization cri-


terion
 t
1 T
V (e(t)) = e (t)e(t) + e T (s)Qe(s)ds
2 t−τ
n  
1 1  
n n
1 1
+ Δci2 + Δai2j + Δbi2j , (3.89)
2 vi αi j θi j
i=1 j=1 j=1

where Q is a positive definite matrix.


72 3 Robust Stability and Synchronization of Neural Networks

Using Ito’s formula [15], the following differential can be obtained

d V (e(t)) = LV (e(t))dt + Ve (e(t))H (t, e(t), e(t − τ ))dω. (3.90)

The weak infinitesimal operator L is


 
LV (e(t)) = e T (t) − Ce(t) + Ag(e(t)) + Bg(e(t − τ )) + U
+ e T (t)Qe(t) − e T (t − τ )Qe(t − τ )
 
+ e T (t) − ΔC y(t) + ΔA f (y(t)) + ΔB f (y(t − τ ))
1  
+ trace H T (t, e(t), e(t − τ ))H (t, e(t), e(t − τ )) (3.91)
2
 n 
n n
+ Δci ei (t)y j (t) − Δai j ei (t) f j (y j (t))
i=1 i=1 j=1

n 
n
− Δbi j ei (t) f j (y j (t − τ )).
j=1 j=1

Using (3.88), we can get

e T (t)Ag(e(t))
1 1 −1 T
≤ μe T (t)A T Ae(t) + μ g (e(t))g(e(t))
2 2
1 1 −1 T
≤ μe T (t)A T Ae(t) + μ e (t)ΛT Λe(t), (3.92)
2 2
e T (t)Ag(e(t − τ ))
1 1 −1 T
≤ μe T (t)B T Be(t) + μ g (e(t − τ ))g(e(t − τ ))
2 2
1 1 −1 T
≤ μe T (t)B T Be(t) + μ e (t − τ )ΛT Λe(t − τ ), (3.93)
2 2
where Λ = diag{λ1 , λ2 , . . . , λn }. From Assumption 3.27, it follows
 
trace H T (t, e(t), e(t − τ ))H (t, e(t), e(t − τ ))
≤ e T (t)G 1T G 1 e(t) + e T (t − τ )G 2T G 2 e(t − τ ). (3.94)

Then, we can conclude that

1 1 1
LV (e(t)) ≤ e T (t) μA T A + μΛT Λ + μB T B + G 1T G 1
2 2 2

+ K 1 − C + Q e(t) + e T (t)K 2 e(t − τ )
1 
+ e T (t − τ ) μ−1 ΛT Λ + G 2T G 2 − Q e(t − τ )
2
3.3 Anti-Synchronization Control of Unknown CNN … 73

That is:
 
  e(t)
LV (e(t)) ≤ e T (t) e T (t − τ ) Ψ . (3.95)
e(t − τ )

Therefore, we have

d V (e(t)) = LV (e(t))dt + Ve (e(t))H (t, e(t), e(t − τ ))dω(t)


 
 T  e(t)
≤ e (t) e (t − τ ) Ψ
T (3.96)
e(t − τ )
+ e T (t)H (t, e(t), e(t − τ ))dω(t).

Taking mathematical expectation of (3.100), we have


   
dEV (t) = E LV (e(t))dt + E e T (t)H (t, e(t), e(t − τ ))dω(t)
  
 T  e(t)
= E e (t) e (t − τ ) Ψ
T . (3.97)
e(t − τ )

Seen from (3.89) and Lemma 3.30, we can derive that


   
E V (e(t) ≤ sup E v(e(s)) e−σ(t−t0 ) , (3.98)
t0 −τ ≤s≤t0

which implies
   
E ||e(t)||2 ≤ e−σ(t−t0 ) E ||ζ||2 , (3.99)

and σ is the unique positive solution of v = α − βevτ . This completes the proof.
Remark 3.32 This section considered noise perturbation as well as time delay when
considering the anti-synchronization of unknown master and slave system. There-
fore, the controller and Lyapunov function adopted here can be applied in more
complex systems than that in [41], which is considering anti-synchronization of
known parameters master and slave system.
Remark 3.33 In Theorem 3.31, the controller U with time delay and noise pertur-
bation are considered in the slave system. While the theorem can be also adopted in
pair of systems free of time-delay term or noise, following corollaries can be derived.
Remark 3.34 According to Theorem 3.31, the network topologies must be all irre-
ducible. In practical, the network topology structures may be partly irreducible, for
example, K 1 , . . . , K l are irreducible, K l+1 , . . . , K κ are reducible. The following
corollary gives a synchronization condition in this situation.
Corollary 3.35 Under the above assumptions, system (3.83) and (3.85) is anti-
synchronization without time delay, if there exist arbitrary constants vi > 0,
74 3 Robust Stability and Synchronization of Neural Networks

i j > 0 and θi j > 0 (i, j = 1, 2, . . . , n) satisfying: ĉ˙i = vi ei (t)yi (t), â˙ i j =


−i j ei (t) f j (yi (t)), ḃi j = −θi j ei (t) f j (yi (t − τ )). And the feedback gains K 1 and
K 2 of controller U satisfying the following LMI inequality
 
Ξ1 0
Ψ = < 0. (3.100)
0 21 μ−1 ΛT λ + G 2T G 2 − Q

where Ξ1 = 21 μA T A + 21 μΛT Λ + 21 μB T B + G 1T G 1 + K 1 − C + Q.

Corollary 3.36 Under the above assumptions, system (3.83) and (3.85) is anti-
synchronization without noise perturbation, if there exist constants vi > 0, i j > 0
and θi j > 0 (i, j = 1, 2, . . . , n) satisfying: ĉ˙i = vi ei (t)yi (t), â˙ i j = −i j ei (t)
f j (yi (t)), ḃi j = −θi j ei (t) f j (yi (t − τ )). And the feedback gains K 1 and K 2 of
controller U satisfying the following LMI inequality
 
Ξ1 0
Ψ = < 0. (3.101)
0 21 μ−1 ΛT λ − Q

where Ξ1 = 21 μA T A + 21 μΛT Λ + 21 μB T B + G 1T G 1 + K 1 − C + Q.

3.3.4 Illustrative Example

In this section, an example is presented to show the usefulness and effectiveness of


our main results. The aim is to explore stability condition of a pair of master and
slave chaotic neural network system.
Example In this example, following parameters about (3.83) and (3.85) are
considered
     
1.1 0 1.8 −0.12 −1.4 −0.1
C= , A= , B= ,
0 1.1 −5.0 2.9 −0.28 2.5
     
ĉ 0 â11 −0.1 b̂11 −0.1
Ĉ = 1 , Â = , B̂ = ,
0 ĉ2 −5.0 â22 −0.2 4
v1 = 1.5, v2 = 0.3, 11 = 0.1, 22 = 0.5, θ11 = 1.0.

And the noise perturbation is


 
√ ||e(t)|| 0
H (t, e(t), e(t − τ )) = 2· ,
0 ||e(t − τ )||

and f (x) = tanh(x(t)).


3.3 Anti-Synchronization Control of Unknown CNN … 75

Fig. 3.3 The chaotic 3


performance of system
(3.83) 2

y1
0

−1

−2

−3
−0.6 −0.4 −0.2 0 0.2 0.4 0.6 0.8
x1

In the simulations, we choose delay differential equations to simulate the master


and slave systems (3.83) and (3.85). The initial conditions of unknown parameters
are

ĉ11 (0) = 1, ĉ22 (0) = 1.5, â11 (0) = 2, â22 (0) = 4, b̂11 (0) = −4,

respectively. The initial conditions of simulation are: τ = 0.1; T = 200;


[x1 (s) x2 (s)]T = [0.4 0.6]T , [y1 (s) y2 (s)]T = [0.2 0.3]T , for s ∈ [−1, 0].
 Moreover,
 in order to satisfy Assumptions 3.27 and 3.28, we take G 1 = G 2 =
10
, λ1 = λ2 = 1.1. By using Matlab LMI toolbox, (3.92) can be solved with
01
 
5.911 −28.029
following feasible solutions: μ = −8.0628, K 1 = , K2 =
−28.029 23.052
   
0.1 0 51.49 0
,Q= .
0 0.1 0 51.49
The simulation results are as follows: Fig. 3.3 illustrates the chaotic performance
of the master system (3.83); Fig. 3.4 shows the trajectories of error system (3.86);
Fig. 3.5 represents the variation of unknown parameters in the slave system.

Fig. 3.4 Anti- 1


synchronization error of
e1=x1+y1

0.5
system (3.83) and (3.85) 0
−0.5
−1
0 50 100 150 200 250 300

2
e2=x2+y2

0
−2
−4
−6
0 50 100 150 200 250 300
Simulation Time
76 3 Robust Stability and Synchronization of Neural Networks

Fig. 3.5 Parameters 8


c11
variation in the slave system 6 c22
a11
a22
4 b11

−2

−4

−6

−8
0 50 100 150 200 250 300

Seen from simulation results, we can know that with the help of controller, the
master system anti-synchronizes with the slave system well.

3.3.5 Conclusion

In this section, the anti-synchronization of unknown chaotic neural networks master


and slave systems with time delay and noise perturbation have been studies and a
controller is proposed to control the above systems. By employing the Lyapunov-
Krasovskii stability theory as well as the LMI method, several novel and sufficient
conditions have been established to guarantee the master and slave systems. More-
over, the corresponding numerical simulation has illustrated the effectiveness and
validity of the proposed scheme.

3.4 Lag Synchronization of Uncertain Delayed CNN Based


on Adaptive Control

3.4.1 Introduction

Since the master–slave concept for constructing synchronization of two coupled


chaotic systems was proposed in [35], the past few years have witnessed the fruitful
applications of the control and synchronization of chaotic systems in various fields
such as general complex dynamical networks [33], secure communication [66], etc.
In such applications, different kinds of neural networks have been extensively investi-
gated [13, 17, 31, 46]. Due to the finite speed of information processing, the existence
of time-varying delays often causes oscillation, stability, divergence, or even syn-
chronization in neural networks. In recent years, delayed neural networks (DNNs)
have received much attention of researchers and many results have appeared in the
3.4 Lag Synchronization of Uncertain Delayed CNN … 77

literatures [9, 12, 55, 64, 68]. Nowadays, considering the distributed time delays into
the model of DNNs to study the stability of neural networks has been an active sub-
ject [47, 49, 51, 52, 57]. Very recently, the research hot on synchronization of DNNs
has spread widely, including complete synchronization and lag synchronization. For
example, in [14], some conditions were proposed for global synchronization of DNNs
with hybrid coupling by employing the Lyapunov functional method and Kronecker
product properties. Meanwhile, complete synchronization of neural networks based
on parameter identification and via output or state coupling was thoroughly investi-
gated in [29]. And [54] focused on realizing lag synchronization of chaotic system
based on a single controller. It should be mentioned that a propagation delay always
appears in the electronic implementation of dynamical system. So the research on
lag synchronization of DNNs is more realistic and practical. Moreover, it has been
pointed out in [53] that in hardware implementation of neural networks, the network
parameters of the system may be subjected to some changes due to the tolerances of
electronic components employed in the design. Therefore, it is significant to inves-
tigate the synchronization of neural networks with parameter uncertainties.
Thus, from the above discussion, we can see that the adaptive lag synchronization
of parameters uncertain chaotic neural networks with both discrete and distributed
time delays is still a novel problem that has been seldom studied, and remains impor-
tant and challenging. For example, [45] dealt with lag synchronization and parameter
identification for chaotic neural networks with mixed time-varying delays and sto-
chastic perturbation based on an adaptive feedback controller. And in [43], the robust
synchronization of uncertain chaotic neural networks with parameters perturbation
and external disturbances is researched by employing Lyapunov stability theory and
linear matrix inequality technique. Inspired by these recent literatures and basing on
[43], we consider, in this section, the lag synchronization of an array of uncertain
chaotic neural networks with both discrete and distributed time-varying delays and
parameters perturbation based on adaptive control, which can model a more realistic
and comprehensive neural networks. By utilizing the Lyapunov functional method
and some estimation techniques, we give several new criterions that can ensure the lag
synchronization of the two coupled systems based on an adaptive feedback scheme.
Then, an illustrative example is provided to prove the effectiveness of our method.
Finally, we make a conclusion for the section.

3.4.2 Problem Formulation

In this section, we propose the uncertain chaotic neural network models with time-
varying parameters perturbation, which involve both discrete and distributed time-
varying delays, as follows:
78 3 Robust Stability and Synchronization of Neural Networks


n 
n
ẋi (t) = − (ci + Δc1i (t))xi (t) + (ai j +Δa1i j (t)) f j (x j (t)) + (bi j + Δb1i j (t))g j (x j (t − τ1 (t)))
j=1 j=1


n  t
+ (di j + Δd1i j (t)) h j (x j (s))ds + Ji , i = 1, 2, . . . , n, (3.102)
j=1 t−τ2 (t)

or, in a compact form:

ẋ(t) = −(C + ΔC1 (t))x(t) + (A + ΔA1 (t)) f (x(t))


 t
+ (B + ΔB1 (t))g(x(t − τ1 (t))) + (D + ΔD1 (t)) h(x(s))ds + J,
t−τ2 (t)
(3.103)

where x(t) = [x1 (t), x2 (t), . . . , xn (t)]T ∈ Rn is the state vector associated with
the ith DNNs; f (x(t)) = [ f 1 (x1 (t)), f 2 (x2 (t)), . . . , f n (xn (t))]T ∈ Rn , g(x(t −
τ1 (t))) = [g1 (x1 (t − τ1 (t))), g2 (x2 (t − τ1 (t))), . . . , gn (xn (t − τ1 (t)))]T ∈ Rn , and
h(x(t)) = [h 1 (x1 (t)), h 2 (x2 (t)), . . . , h n (xn (t))]T ∈ Rn are the activation functions
of the neurons; C = diag{c1 , c2 , . . . , cn } > 0 is a diagonal matrix that presents the
rate of the ith unit resetting its potential to the resting state in isolation when dis-
connected from the external inputs and the network; A = (ai j )n×n , B = (bi j )n×n ,
and D = (di j )n×n stand for, respectively, the connection weight matrix, the dis-
cretely delayed connection weight matrix and the distributive delayed connection
weight matrix; ΔC1 (t), ΔA1 (t), ΔB1 (t), and ΔD1 (t) are n×n perturbation matrices
bounded by ΔC1 (t) ≤ c1 , ΔA1 (t) ≤ a1 , ΔB1 (t) ≤ b1 and ΔD1 (t) ≤ d1 ;
J = [J1 , J2 , . . . , Jn ]T ∈ Rn is the external input vector function; τ1 (t) and τ2 (t) are
the discrete time-varying delay and distributed time-varying delay, respectively.
Throughout this section, the following hypotheses are needed.
(H1 ) The neuron activation functions f (·), g(·) and h(·) satisfy the following
Lipschitz condition:
 f (x) − f (y) ≤ Λ(x − y) ,

g(x) − g(y) ≤ Ω(x − y) ,

h(x) − h(y) ≤ Φ(x − y) , ∀x, y ∈ Rn ,

where Λ ∈ Rn×n , Ω ∈ Rn×n and φ ∈ Rn×n are known constant matrices.


(H2 ) τ1 (t) and τ2 (t) are bounded and differential functions on R+ , i.e., 0 ≤
τ1 (t) ≤ τ1∗ , 0 ≤ τ2 (t) ≤ τ2∗ and 0 ≤ τ̇1 (t) ≤ δ < 1, 0 ≤ τ̇2 (t) ≤ σ < 1, for all t.
(H3 ) f (0) = u(0) = v(0) ≡ 0.

The initial condition of (3.102)
  given by xi (s) = ϕi (s), i = 1, 2, . . . , n, −τ
are
≤ s ≤ 0, (τ ∗ = max τ1∗ , τ2∗ ), for any ϕi ∈ L2F0 ([−τ ∗ , 0]; Rn ), where
L2F0 ([−τ ∗ , 0]; Rn ) denotes the family of all F0 -measurable C([−τ ∗ , 0]; Rn )-valued
random variables satisfying sup −τ ∗ ≤s≤0 E|ϕi (s)|2 < ∞, and C([−τ ∗ , 0]; Rn ) is
3.4 Lag Synchronization of Uncertain Delayed CNN … 79

the family of all continuous Rn -valued functions ϕi (s) on [−τ ∗ , 0] with the norm
ϕi  = sup−τ ∗ ≤s≤0 |ϕi (s)| .
Now we consider (3.102) as the master system, and similarly, the corresponding
controlled slave system is taken as


n 
n
ẏi (t) = −(ci + Δc2i (t))yi (t) + (ai j +Δa2i j (t)) f j (y j (t)) + (bi j + Δb2i j (t))g j (y j (t − τ1 (t)))
j=1 j=1


n  t
+ (di j + Δd2i j (t)) h j (y j (s))ds + Ji + u i (t) + γi , i = 1, 2, . . . , n,
j=1 t−τ2 (t)

(3.104)

or, in a compact form:

ẏ(t) = − (C + ΔC2 (t))y(t) + (A + ΔA2 (t)) f (y(t)) + (B + ΔB2 (t))g(y(t − τ1 (t)))


 t
+ (D + ΔD2 (t)) h(y(s))ds + J + u(t) + γ (3.105)
t−τ2 (t)

where u(t) = [u 1 (t), u 2 (t), . . . , u n (t)]T is the adaptive feedback controller that can
implement the lag synchronization of the two coupled DNNs (3.103) and (3.105);
γ ∈ Rn is a nonlinear input; ΔC2 (t), ΔA2 (t),ΔB2 (t), and ΔD2 (t), are n × n
perturbation matrices bounded by ΔC2 (t) ≤ c2 , ΔA2 (t) ≤ a2 , ΔB2 (t) ≤ b2
and ΔD2 (t) ≤ d2 . The initial condition of (3.104) are givenby yi (s)  = ψi (s) ∈
L2F0 ([−τ ∗ , 0]; Rn ), i = 1, 2, . . . , n, −τ ∗ ≤ s ≤ 0, (τ ∗ = max τ1∗ , τ2∗ ).
To simplify the description, we make the following denotes:

C̄(t) = ΔC2 (t) − ΔC1 (t), Ā(t) = ΔA2 (t) − ΔA1 (t),

B̄(t) = ΔB2 (t) − ΔB1 (t), D̄(t) = ΔD2 (t) − ΔD1 (t).

It can be seen from the above mentioned that there must exist positive constants
c3 , a3 , b3 , d3 , such that
       
C̄(t) ≤ c3 ,  Ā(t) ≤ a3 ,  B̄(t) ≤ b3 ,  D̄(t) ≤ d3 .

Then, let e(t) = y(t) − x(t − λ) be the lag synchronization error between (3.103)
and (3.105), where λ is a propagation delay. One can derive the error dynamical
system as follows:

ė(t) = − (C + ΔC1 (t))e(t) + (A + ΔA1 (t)) f¯(e(t)) + (B + ΔB1 (t))ḡ(e(t − τ1 (t)))


 t
+ (D + ΔD1 (t)) h̄(e(s))ds − C̄(t)y(t) + Ā(t) f (y(t))
t−τ2 (t)
 t
+ B̄(t)g(y(t − τ1 (t))) + D̄(t) h(y(s))ds + u(t) + γ, (3.106)
t−τ2 (t)
80 3 Robust Stability and Synchronization of Neural Networks

where e(t) = [e1 (t), e2 (t), . . . , en (t)]T , f¯(e(t)) = f (e(t)+x(t −λ))− f (x(t −λ)),
ḡ(e(t − τ1 (t))) = g(e(t − τ1 (t)) + x(t − τ1 (t) − λ)) − g(x(t − τ1 (t) − λ)) and
h̄(e(t)) = h(e(t) + x(t − λ)) − h(x(t − λ)).
Definition 3.37 The slave system (3.104) is synchronized with the master system
(3.102), when the error dynamical system (3.106) achieves globally asymptotically
stable in mean square, i.e.,

lim Eei (t)2 = 0, i = 1, 2, . . . , n. (3.107)


t→∞

3.4.3 Main Results and Proofs

We are going to derive the synchronization criteria of the uncertain chaotic neural
networks investigated in this section.
Theorem 3.38 Under the hypotheses (H1 ) ∼ (H3 ), let the controller u(t) = μ ⊗
e(t), where the coupling strength μ = (μ1 , μ2 , . . . , μn )T ∈ Rn is adaptive with the
following update law:

μ̇i = −αi ei2 (t), i = 1, 2, . . . , n, (3.108)

in which αi > 0 is an arbitrary constant, and the mark ⊗ is defined as μ ⊗ e(t) =


[μ1 e1 (t), μ2 e2 (t), . . . , μn en (t)]T . If there exist positive scalars εi (i = 1, 2, . . . , 8)
and nonlinear input γ, such that

−Π
γ= e(t), (3.109)
2e(t)2

where

Π = ε−1 −1 2 T −1 2 T
5 c3 y (t)y(t) + ε6 a3 f (y(t)) f (y(t)) + ε7 b3 g (y(t − τ1 (t)))g(y(t − τ1 (t)))
2 T
 t  t
+ ε−1
8 d3
2
h T (y(s))ds h(y(s))ds, (3.110)
t−τ2 (t) t−τ2 (t)

then the controlled uncertain slave system (3.105) will achieve adaptive lag syn-
chronization with the uncertain master system (3.103) in mean square.
Proof Define the following Lyapunov functional candidate V (t, e(t)) by
 t  t  t
1 T
V (t, e(t)) = [e (t)e(t) + e T (s)Pe(s)ds + e T (r )Qe(r )dr ds
2 t−τ1 (t) t−τ2 (t) s
n
1
+ (μi + L)2 ] (3.111)
αi
i=1
3.4 Lag Synchronization of Uncertain Delayed CNN … 81

where P, Q are positive definite matrices; L is a constant; P and L are matrices with
appropriate dimensions to be determined; Q is defined by

(1 + ε−1 2 ∗
4 d1 )τ2
Q= Φ T Φ. (3.112)
1−σ

By calculating the derivative of (3.111) along the trajectories of the error system
(3.106) and the update law (3.108), we have

V̇(t, e(t)) = −eT(t)(C + ΔC1(t))e(t) + eT(t)(A + ΔA1(t)) f̄(e(t))
    + eT(t)(B + ΔB1(t))ḡ(e(t − τ1(t))) + eT(t)(D + ΔD1(t)) ∫_{t−τ2(t)}^{t} h̄(e(s))ds − eT(t)C̄(t)y(t)
    + eT(t)Ā(t) f(y(t)) + eT(t)B̄(t)g(y(t − τ1(t))) + eT(t)D̄(t) ∫_{t−τ2(t)}^{t} h(y(s))ds + eT(t)(μ ⊗ e(t))
    + eT(t)γ + ½eT(t)Pe(t) − ((1 − τ̇1(t))/2)eT(t − τ1(t))Pe(t − τ1(t)) + ½τ2(t)eT(t)Qe(t)
    − ((1 − τ̇2(t))/2) ∫_{t−τ2(t)}^{t} eT(s)Qe(s)ds − Σ_{i=1}^{n} (μi + L)ei²(t).    (3.113)

Then, by Lemma 1.13, the following inequalities hold:

eT(t)A f̄(e(t)) ≤ ½eT(t)AATe(t) + ½ f̄ T(e(t)) f̄(e(t)),    (3.114)

eT(t)Bḡ(e(t − τ1(t))) ≤ ½eT(t)BBTe(t) + ½ḡT(e(t − τ1(t)))ḡ(e(t − τ1(t))),    (3.115)

eT(t)D ∫_{t−τ2(t)}^{t} h̄(e(s))ds ≤ ½eT(t)DDTe(t)
    + ½ ∫_{t−τ2(t)}^{t} h̄T(e(s))ds ∫_{t−τ2(t)}^{t} h̄(e(s))ds.    (3.116)

Moreover, by utilizing Lemma 1.13, the following terms can be estimated by

−eT(t)ΔC1(t)e(t) ≤ (ε1/2)eT(t)e(t) + (ε1⁻¹/2)c1²eT(t)e(t),    (3.117)

eT(t)ΔA1(t) f̄(e(t)) ≤ (ε2/2)eT(t)e(t) + (ε2⁻¹/2)a1² f̄ T(e(t)) f̄(e(t)),    (3.118)

eT(t)ΔB1(t)ḡ(e(t − τ1(t))) ≤ (ε3/2)eT(t)e(t) + (ε3⁻¹/2)b1²ḡT(e(t − τ1(t)))ḡ(e(t − τ1(t))),    (3.119)

  
eT(t)ΔD1(t) ∫_{t−τ2(t)}^{t} h̄(e(s))ds ≤ (ε4/2)eT(t)e(t)
    + (ε4⁻¹/2)d1² ∫_{t−τ2(t)}^{t} h̄T(e(s))ds ∫_{t−τ2(t)}^{t} h̄(e(s))ds,    (3.120)

−eT(t)C̄(t)y(t) ≤ (ε5/2)eT(t)e(t) + (ε5⁻¹/2)c3²yT(t)y(t),    (3.121)

eT(t)Ā(t) f(y(t)) ≤ (ε6/2)eT(t)e(t) + (ε6⁻¹/2)a3² f T(y(t)) f(y(t)),    (3.122)

eT(t)B̄(t)g(y(t − τ1(t))) ≤ (ε7/2)eT(t)e(t) + (ε7⁻¹/2)b3²gT(y(t − τ1(t)))g(y(t − τ1(t))),    (3.123)

eT(t)D̄(t) ∫_{t−τ2(t)}^{t} h(y(s))ds ≤ (ε8/2)eT(t)e(t)
    + (ε8⁻¹/2)d3² ∫_{t−τ2(t)}^{t} hT(y(s))ds ∫_{t−τ2(t)}^{t} h(y(s))ds.    (3.124)

According to the hypotheses (H1 ), (H2 ) and Lemma 1.20, the following terms can
be further estimated by

f̄ T(e(t)) f̄(e(t)) ≤ ‖Λe(t)‖² = eT(t)ΛTΛe(t),    (3.125)

ḡT(e(t − τ1(t)))ḡ(e(t − τ1(t))) ≤ ‖Ωe(t − τ1(t))‖² = eT(t − τ1(t))ΩTΩe(t − τ1(t)),    (3.126)

∫_{t−τ2(t)}^{t} h̄T(e(s))ds ∫_{t−τ2(t)}^{t} h̄(e(s))ds ≤ τ2* ∫_{t−τ2(t)}^{t} h̄T(e(s))h̄(e(s))ds
    ≤ τ2* ∫_{t−τ2(t)}^{t} ‖Φe(s)‖²ds = τ2* ∫_{t−τ2(t)}^{t} eT(s)ΦTΦe(s)ds.    (3.127)

Then, substituting (3.114)–(3.127) into (3.113) and applying the hypothesis (H2),
one obtains

V̇(t, e(t)) ≤ −eT(t)Ce(t) + (ε1/2)eT(t)e(t) + (ε1⁻¹/2)c1²eT(t)e(t) + ½eT(t)AATe(t)
    + ½eT(t)ΛTΛe(t) + (ε2/2)eT(t)e(t) + (ε2⁻¹/2)a1²eT(t)ΛTΛe(t) + ½eT(t)BBTe(t)
    + ½eT(t − τ1(t))ΩTΩe(t − τ1(t)) + (ε3/2)eT(t)e(t) + (ε3⁻¹/2)b1²eT(t − τ1(t))ΩTΩe(t − τ1(t))
    + ½eT(t)DDTe(t) + ½τ2* ∫_{t−τ2(t)}^{t} eT(s)ΦTΦe(s)ds + (ε4/2)eT(t)e(t)
    + (ε4⁻¹/2)d1²τ2* ∫_{t−τ2(t)}^{t} eT(s)ΦTΦe(s)ds + (ε5/2)eT(t)e(t) + (ε5⁻¹/2)c3²yT(t)y(t)
    + (ε6/2)eT(t)e(t) + (ε6⁻¹/2)a3² f T(y(t)) f(y(t)) + (ε7/2)eT(t)e(t)
    + (ε7⁻¹/2)b3²gT(y(t − τ1(t)))g(y(t − τ1(t))) + (ε8/2)eT(t)e(t)
    + (ε8⁻¹/2)d3² ∫_{t−τ2(t)}^{t} hT(y(s))ds ∫_{t−τ2(t)}^{t} h(y(s))ds + eT(t)γ + ½eT(t)Pe(t)
    − ((1 − δ)/2)eT(t − τ1(t))Pe(t − τ1(t)) + ½τ2*eT(t)Qe(t)
    − ((1 − σ)/2) ∫_{t−τ2(t)}^{t} eT(s)Qe(s)ds − LeT(t)e(t).    (3.128)

It follows from the definition of γ and Q in (3.109) and (3.112) that



V̇(t, e(t)) ≤ eT(t)[−C + ½(ε1 + ε2 + ε3 + ε4 + ε5 + ε6 + ε7 + ε8)I + (ε1⁻¹/2)c1²I
    + ½AAT + ((1 + ε2⁻¹a1²)/2)ΛTΛ + ½BBT + ½DDT + ½P + (τ2*/2)Q − LI]e(t)
    + eT(t − τ1(t))[((1 + ε3⁻¹b1²)/2)ΩTΩ − ((1 − δ)/2)P]e(t − τ1(t))
  ≤ eT(t)[λmax(−C) + ½(ε1 + ε2 + ε3 + ε4 + ε5 + ε6 + ε7 + ε8) + (ε1⁻¹/2)c1² + ½λmax(AAT)
    + λmax(((1 + ε2⁻¹a1²)/2)ΛTΛ) + ½λmax(BBT) + ½λmax(DDT) + ½λmax(P) + λmax((τ2*/2)Q) − L]e(t)
    + eT(t − τ1(t))[λmax(((1 + ε3⁻¹b1²)/2)ΩTΩ) − ((1 − δ)/2)P]e(t − τ1(t)).    (3.129)

By taking

P = (1/(1 − δ)) λmax((1 + ε3⁻¹b1²)ΩTΩ) I    (3.130)

and

L = λmax(−C) + ½(ε1 + ε2 + ε3 + ε4 + ε5 + ε6 + ε7 + ε8) + (ε1⁻¹/2)c1² + ½λmax(AAT)
    + λmax(((1 + ε2⁻¹a1²)/2)ΛTΛ) + ½λmax(BBT) + ½λmax(DDT)
    + (1/(2(1 − δ))) λmax((1 + ε3⁻¹b1²)ΩTΩ) + λmax((τ2*/2)Q) + 1,    (3.131)

we can derive that

V̇ (t, e(t)) ≤ −e T (t)e(t) (3.132)

It is obvious that V̇(t, e(t)) = 0 if and only if e(t) = 0, and otherwise V̇(t, e(t)) < 0.
Based on the well-known invariance principle of functional differential equations,
the orbit of system (3.106), starting from an arbitrary initial value, converges
asymptotically to the largest invariant set Ξ contained in V̇(t, e(t)) = 0 as t → ∞,
where Ξ = {ei(t) = 0 | μi = μ0 ∈ Rn, i = 1, 2, . . . , n}. Thus, according to
Definition 3.35, we can conclude that the lag synchronization between the uncertain
chaotic neural networks (3.102) and (3.104) is achieved under the adaptive feedback
controller. This completes the proof.

Remark 3.39 It can be seen from the definition of the nonlinear input γ in (3.109)
that γ → ∞ as e(t) → 0, which is not realistic in practical applications. In
order to prevent γ from approaching infinity, as proposed in [16], we replace γ
with γ̂, such that
"
γ, e(t) ≥ ζ,
γ̂ = −Π (3.133)
2ζ 2
P −1 e(t), e(t) ≤ ζ,

where ζ is an adjustable parameter. Thus, the two uncertain chaotic DNNs (3.102)
and (3.104) will achieve synchronization within finite accuracy since the error is
bounded.
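The switching rule (3.133) is straightforward to implement; the sketch below is one possible rendering. P_inv stands for P⁻¹ (with P as in (3.130) it is a positive multiple of the identity), and the function names are our own.

```python
import numpy as np

# Sketch of the saturated input gamma-hat of (3.133): outside the ball
# ||e|| >= zeta the original gamma of (3.109) is used; inside it the
# denominator is frozen at zeta^2 so the input stays bounded.
def gamma_hat(e, Pi, zeta, P_inv):
    norm_e = float(np.linalg.norm(e))
    if norm_e >= zeta:
        return -Pi / (2.0 * norm_e**2) * e
    return -Pi / (2.0 * zeta**2) * (P_inv @ e)
```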

Corollary 3.40 Under the hypotheses (H1 ) ∼ (H3 ), let the controller u(t) = μ ⊗
e(t), where the coupling strength μ = (μ1 , μ2 , . . . , μn )T ∈ Rn is adaptive with the
following update law:

μ̇i = −αi ei2 (t), i = 1, 2, . . . , n, (3.134)

in which αi > 0 is an arbitrary constant, and ⊗ is defined as μ ⊗ e(t) =
[μ1e1(t), μ2e2(t), . . . , μnen(t)]T. If there exists a nonlinear input γ such that

γ = −(Π / (2‖e(t)‖²)) e(t),    (3.135)

where

Π = c3²yT(t)y(t) + a3² f T(y(t)) f(y(t)) + b3²gT(y(t − τ1(t)))g(y(t − τ1(t)))
    + d3² ∫_{t−τ2(t)}^{t} hT(y(s))ds ∫_{t−τ2(t)}^{t} h(y(s))ds,    (3.136)

then the controlled uncertain slave system (3.105) will achieve adaptive lag synchro-
nization with the uncertain master system (3.103) in mean square.

Proof Let εi = 1(i = 1, 2, . . . , 8). From the proof of Theorem 3.38, Corollary 3.40
can be obtained immediately.

If the two uncertain chaotic neural networks (3.103) and (3.105) have no distributed
time-varying delays, then we can get the following corollary directly. Consider
the master system (3.137) and the slave system (3.138) shown as follows:

ẋ(t) = −(C + ΔC1(t))x(t) + (A + ΔA1(t)) f(x(t))
       + (B + ΔB1(t))g(x(t − τ1(t))) + J,    (3.137)

ẏ(t) = −(C + ΔC2(t))y(t) + (A + ΔA2(t)) f(y(t))
       + (B + ΔB2(t))g(y(t − τ1(t))) + J + u(t) + γ.    (3.138)

Corollary 3.41 Under the hypotheses (H1 ) ∼ (H3 ), let the controller u(t) = μ ⊗
e(t), where the coupling strength μ = (μ1 , μ2 , . . . , μn )T ∈ Rn is adaptive with the
following update law:

μ̇i = −αi ei2 (t), i = 1, 2, . . . , n, (3.139)

in which αi > 0 is an arbitrary constant, and ⊗ is defined as μ ⊗ e(t) =
[μ1e1(t), μ2e2(t), . . . , μnen(t)]T. If there exist positive scalars εi (i = 1, 2, . . . , 8)
and a nonlinear input γ such that

γ = −(Π / (2‖e(t)‖²)) e(t),    (3.140)

where

Π = ε5⁻¹c3²yT(t)y(t) + ε6⁻¹a3² f T(y(t)) f(y(t)) + ε7⁻¹b3²gT(y(t − τ1(t)))g(y(t − τ1(t))),    (3.141)

then the controlled uncertain slave system (3.138) will achieve adaptive lag synchro-
nization with the uncertain master system (3.137) in mean square.

Proof Construct the following Lyapunov functional:

V(t, e(t)) = ½[eT(t)e(t) + ∫_{t−τ1(t)}^{t} eT(s)Pe(s)ds + Σ_{i=1}^{n} (1/αi)(μi + L)²].    (3.142)

The rest of the proof is similar to that of Theorem 3.38 and is hence omitted here.

If the two chaotic neural networks (3.103) and (3.105) have no parameter
perturbations, then the following corollary can be obtained. Consider the master
system (3.143) and the slave system (3.144) shown as follows:
ẋ(t) = −Cx(t) + A f(x(t)) + Bg(x(t − τ1(t))) + D ∫_{t−τ2(t)}^{t} h(x(s))ds + J,    (3.143)

ẏ(t) = −Cy(t) + A f(y(t)) + Bg(y(t − τ1(t))) + D ∫_{t−τ2(t)}^{t} h(y(s))ds
       + J + u(t).    (3.144)

Corollary 3.42 Under the hypotheses (H1 ) ∼ (H3 ), let the controller u(t) = μ ⊗
e(t), where the coupling strength μ = (μ1 , μ2 , . . . , μn )T ∈ Rn is adaptive with the
following update law:

μ̇i = −αi ei2 (t), i = 1, 2, . . . , n, (3.145)

in which αi > 0 is an arbitrary constant, and ⊗ is defined as μ ⊗ e(t) =
[μ1e1(t), μ2e2(t), . . . , μnen(t)]T. Then the controlled slave system (3.144) will
achieve adaptive lag synchronization with the master system (3.143) in mean square.
Proof Let Q = (τ2*/(1 − σ)) ΦTΦ in the Lyapunov functional (3.111). By the same
method as in the proof of Theorem 3.38, Corollary 3.42 follows directly.

3.4.4 Illustrative Example

In this section, our main purpose is to verify the global asymptotic stability
of the error dynamical system (3.106). An example is presented to illustrate the
effectiveness of our results.
Example
Consider the following uncertain chaotic DNNs with discrete and distributed
time-varying delays:

ẋ(t) = −(C + ΔC1(t))x(t) + (A + ΔA1(t)) f(x(t))
       + (B + ΔB1(t))g(x(t − τ1(t))) + (D + ΔD1(t)) ∫_{t−τ2(t)}^{t} h(x(s))ds + J,    (3.146)

with

C = [1, 0; 0, 1],  A = [2.1, −0.12; −5.1, 3.1],  B = [−1.6, −0.1; −0.2, −2.4],
D = [−9.3, 5.0; 6.1, −2.1],  ΔC1(t) = 0.12 cos(t)[1, 0; 0, 1],
ΔA1(t) = ΔB1(t) = ΔD1(t) = 0.12 cos(t)[1, 1; 1, 1],

J = [0, 0]T, τ1(t) = e^t/(e^t + 1), τ2(t) = 1, x(t) = [x1(t), x2(t)]T, and f(x(t)) =
g(x(t)) = h(x(t)) = [tanh(x1(t)), tanh(x2(t))]T.
3.4 Lag Synchronization of Uncertain Delayed CNN … 87

The corresponding slave system with controller and nonlinear input is described
as follows:

ẏ(t) = −(C + ΔC2(t))y(t) + (A + ΔA2(t)) f(y(t)) + (B + ΔB2(t))g(y(t − τ1(t)))
       + (D + ΔD2(t)) ∫_{t−τ2(t)}^{t} h(y(s))ds + J + u(t) + γ,    (3.147)
 
where ΔC2(t) = 0.15 sin(t)[1, 0; 0, 1], ΔA2(t) = ΔB2(t) = ΔD2(t) =
0.15 sin(t)[1, 1; 1, 1], and u(t) and γ are defined as in Theorem 3.38. Based on the
above description, let the arbitrary initial states of the two coupled uncertain chaotic
DNNs be as follows: x1(t) = 1.1, x2(t) = 1.2; y1(t) = −0.2, y2(t) = 2.3; ∀t ∈ [−1, 0].
Then, the convincing numerical simulations shown in Figs. 3.6 and 3.7 can be obtained.
In the simulations, the initial conditions of the adaptive feedback strength are taken
as [μ1(0), μ2(0)]T = [2.3, 1.2]T, and αi = 30. According to (3.133), we bound

Fig. 3.6 t − x1(t) − y1(t)

Fig. 3.7 t − e1(t) − e2(t)

the synchronization error by ‖e(t)‖ ≤ 0.05 (i.e., ζ = 0.05). The propagation delay is λ = 0.2,
and the positive scalars are εi = 1 (i = 1, 2, . . . , 8), c3 = 0.27, a3 = b3 = d3 = 0.54.
The simulation results can be described as follows. Figures 3.6 and 3.7 depict
the adaptive lag synchronization between (3.146) and (3.147). Thus, from these
simulations, one can conclude that lag synchronization of uncertain chaotic neural
networks with mixed time-varying delays is realized via the adaptive feedback scheme
and the appropriate nonlinear input.
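For readers who wish to reproduce this example, the following hedged Python sketch integrates (3.146)-(3.147) with plain Euler stepping (the systems are deterministic). The step size, horizon, and final printout are our own choices; history buffers handle the discrete delay τ1(t), a Riemann sum approximates the distributed-delay integral, and P is taken as the identity in the saturation of (3.133). Note ΔA1 = ΔB1 = ΔD1 and ΔA2 = ΔB2 = ΔD2 here, so one matrix per system is reused; J = [0, 0]T is zero and omitted.

```python
import numpy as np

dt, T = 1e-3, 10.0
N = int(T / dt)
lag = int(round(0.2 / dt))                   # propagation delay lambda = 0.2
zeta = 0.05                                  # accuracy bound from (3.133)
C = np.eye(2)
A = np.array([[2.1, -0.12], [-5.1, 3.1]])
B = np.array([[-1.6, -0.1], [-0.2, -2.4]])
D = np.array([[-9.3, 5.0], [6.1, -2.1]])
ones2 = np.ones((2, 2))
act = np.tanh                                # f = g = h = tanh, componentwise
buf = int(1.0 / dt)                          # tau2 = 1 -> history length
x_h = np.tile([1.1, 1.2], (N + buf, 1))      # constant initial segments
y_h = np.tile([-0.2, 2.3], (N + buf, 1))
mu, alpha = np.array([2.3, 1.2]), 30.0
c3, a3 = 0.27, 0.54                          # eps_i = 1, a3 = b3 = d3

for k in range(buf, N + buf - 1):
    t = (k - buf) * dt
    x, y = x_h[k], y_h[k]
    dC1, dP1 = 0.12 * np.cos(t) * np.eye(2), 0.12 * np.cos(t) * ones2
    dC2, dP2 = 0.15 * np.sin(t) * np.eye(2), 0.15 * np.sin(t) * ones2
    k1 = k - int(round(np.exp(t) / (np.exp(t) + 1) / dt))   # index of t - tau1(t)
    Ix = act(x_h[k - buf:k]).sum(axis=0) * dt               # int h(x(s)) ds
    Iy = act(y_h[k - buf:k]).sum(axis=0) * dt
    x_h[k + 1] = x + dt * (-(C + dC1) @ x + (A + dP1) @ act(x)
                           + (B + dP1) @ act(x_h[k1]) + (D + dP1) @ Ix)
    e = y - x_h[k - lag]                                    # lag error
    Pi = c3**2 * y @ y + a3**2 * (act(y) @ act(y)
         + act(y_h[k1]) @ act(y_h[k1]) + Iy @ Iy)           # Pi of (3.136)
    gamma = -Pi / (2.0 * max(e @ e, zeta**2)) * e           # saturated input
    y_h[k + 1] = y + dt * (-(C + dC2) @ y + (A + dP2) @ act(y)
                           + (B + dP2) @ act(y_h[k1]) + (D + dP2) @ Iy
                           + mu * e + gamma)
    mu = mu - alpha * e**2 * dt                             # update law (3.108)

print("final lag error:", y_h[-1] - x_h[-1 - lag])
```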

3.4.5 Conclusion

The lag synchronization problem of uncertain chaotic DNNs has been investigated
via an adaptive feedback control scheme in this section. By employing the
Lyapunov-Krasovskii stability theory and some estimation methods, novel sufficient
conditions have been obtained to ensure the synchronization. In particular,
both discrete and distributed time-varying delays have been introduced to model a
more practical situation, and the corresponding numerical simulations have validated
the feasibility of the proposed technique. It is believed that these results provide
some practical guidelines for applications in this area.

References

1. A. Arenas, A. Díaz-Guilera, J. Kurths, Y. Moreno, C. Zhou, Synchronization in complex networks.
Phys. Rep. 469(3), 93–153 (2008)
2. S. Arik, Global robust stability analysis of neural networks with discrete time delays. Chaos
Solitons Fractals 26(5), 1407–1414 (2005)
3. L. Arnold, Stochastic Differential Equations: Theory and Applications (Wiley, New York,
1972)
4. E. Artyomov, O. Yadid-Pecht, Modified high-order neural network for invariant pattern recog-
nition. Pattern Recognit. Lett. 26(6), 843–851 (2005)
5. W. Baoyun, H. Zenya, N. Jingnan, To implement the CDMA multiuser detector by using
transiently chaotic neural network. IEEE Trans. Aerosp. Electron. Syst. 33(3), 1068–1071
(1997)
6. S. Blythe, X. Mao, X. Liao, Stability of stochastic delay neural networks. J. Frankl. Inst. 338(4),
481–495 (2001)
7. S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, D.-U. Hwang, Complex networks: structure
and dynamics. Phys. Rep. 424(4–5), 175–308 (2006)
8. J. Cao, T. Chen, Globally exponentially robust stability and periodicity of delayed neural
networks. Chaos Solitons Fractals 22(4), 957–963 (2004)
9. J. Cao, X. Li, Stability in delayed Cohen-Grossberg neural networks: LMI optimization
approach. Phys. D 212(1), 54–65 (2005)
10. J. Cao, J. Liang, J. Lam, Exponential stability of high-order bidirectional associative memory
neural networks with time delays. Phys. D: Nonlinear Phenom. 199(3), 425–436 (2004)
11. J. Cao, D. Huang, Y. Qu, Global robust stability of delayed recurrent neural networks. Chaos
Solitons Fractals 23(1), 221–229 (2005)

12. J. Cao, P. Li, W. Wang, Global synchronization in arrays of delayed neural networks with
constant and delayed coupling. Phys. Lett. A 353(4), 318–325 (2006)
13. J. Cao, Z. Wang, Y. Sun, Synchronization in an array of linearly stochastically coupled networks
with time-delays. Phys. A: Stat. Mech. Appl. 385(2), 718–728 (2007)
14. J. Cao, G. Chen, P. Li, Global synchronization in an array of delayed neural networks with
hybrid coupling. IEEE Trans. Syst. Man Cybern. Part B: Cybern. 38(2), 488–498 (2008)
15. S. Celikovsky, V. Lynnyk, Efficient chaos shift keying method based on the second error
derivative anti-synchronization detection, in IEEE International Conference on Control and
Automation (2009), pp. 530–535
16. F. Chen, W. Zhang, LMI criteria for robust chaos synchronization of a class of chaotic systems.
Nonlinear Anal. Theory Methods Appl. 67(12), 3384–3393 (2007)
17. G.R. Chen, J. Zhou, Z.R. Liu, Global synchronization of coupled delayed neural networks and
applications to chaotic CNN model. Int. J. Bifurc. Chaos 14(7), 2229–2240 (2004)
18. A. Dembo, O. Farotimi, T. Kailath, High-order absolutely stable neural networks. IEEE Trans.
Circuits Syst. 38(1), 57–65 (1991)
19. A. Friedman, Stochastic Differential Equations and Their Applications (Academic Press, New
York, 1976)
20. J. Hale, Theory of Functional Differential Equations (Springer, New York, 1977)
21. Y. He, Q. Wang, M. Wu, C. Lin, Delay-dependent state estimation for delayed neural networks.
IEEE Trans. Neural Netw. 17(4), 1077–1081 (2006)
22. H. Huang, D.W.C. Ho, J. Lam, Stochastic stability analysis of fuzzy Hopfield neural networks
with time-varying delays. IEEE Trans. Circuits Syst.: Part II 52(5), 251–255 (2005)
23. H. Huang, D.W.C. Ho, Y. Qu, Robust stability of stochastic delayed additive neural networks
with Markovian switching. Neural Netw. 20(7), 799–809 (2007)
24. G. Joya, M. Atencia, F. Sandoval, Hopfield neural networks for optimization: study of the
different dynamics. Neurocomputing 43(1), 219–237 (2002)
25. N.B. Karayiannis, A.N. Venetsanopoulos, On the training and performance of high-order neural
networks. Math. Biosci. 129(2), 143–168 (1995)
26. W. Li, T. Lee, Hopfield neural networks for affine invariant matching. IEEE Trans. Neural
Netw. 12(6), 1400–1410 (2001)
27. X. Liao, G. Chen, E.N. Sanchez, Delay dependent exponential stability analysis of delayed
neural networks: an LMI approach. Neural Netw. 15(7), 855–866 (2002)
28. M. Li, W. Zhou, H. Wang, Y. Chen, R. Lu, H. Lu, Delay-dependent robust H∞ control for
uncertain stochastic systems, in Proceedings of the 17th World Congress of the International
Federation of Automatic Control, vol. 17 (2008), pp. 6004–6009
29. X. Lou, B. Cui, Synchronization of neural networks based on parameter identification and via
output or state coupling. J. Comput. Appl. Math. 222(2), 440–457 (2008)
30. H. Lu, Comments on “a generalized LMI-based approach to the global asymptotic stability of
delayed cellular neural networks”. IEEE Trans. Neural Netw. 16(3), 778–779 (2005)
31. W. Lu, T. Chen, Synchronization of coupled connected neural networks with delays. IEEE
Trans. Circuits Syst. I. 51(12), 2491–2503 (2004)
32. P. Lu, Y. Yang, Global asymptotic stability of a class of complex networks via decentralised
static output feedback control. IET Control Theory Appl. 4(11), 2463–2470 (2010)
33. J. Lv, X. Yu, G. Chen, Chaos synchronization of general complex dynamical networks. Phys.
A 334(1–2), 281–302 (2004)
34. X. Mao, Stochastic Differential Equations and Their Applications (Horwood, Chichester, 1997)
35. L.M. Pecora, T.L. Carroll, Synchronization in chaotic systems. Phys. Rev. Lett. 64(8), 821–824
(1990)
36. D. Psaltis, C. Park, J. Hong, Higher order associative memories and their optical implementa-
tions. Neural Netw. 1(2), 143–163 (1988)
37. F. Ren, J. Cao, LMI-based criteria for stability of high-order neural networks with time-varying
delay. Nonlinear Anal. Ser. B: Real World Appl. 7(5), 967–979 (2006)
38. F. Ren, J. Cao, Anti-synchronization of stochastic perturbed delayed chaotic neural networks.
Neural Comput. Appl. 18(5), 515–521 (2009)

39. M.G. Rosenblum, A.S. Pikovsky, J. Kurths, Phase synchronization in driven and coupled
chaotic oscillators. IEEE Trans. Circuits Syst. 44(10), 874–881 (1997)
40. S. Ruan, R. Filfil, Dynamics of a two-neuron system with discrete and distributed delays. Phys.
D 191(3), 323–342 (2004)
41. A.N. Ruiz Oliveras, F.R. Pisarchik, Optical chaotic communication using generalized and
complete synchronization. IEEE J. Quantum Electron. 46(3), 279–284 (2010)
42. L. Sheng, M. Gao, Adaptive hybrid lag projective synchronization of unified chaotic systems,
in Proceedings of the 29th Chinese Control Conference (2010), pp. 2097–2101
43. L. Sheng, H. Yang, Robust synchronization of a class of uncertain chaotic neural networks, in
7th World Congress on Intelligent Control and Automation (2008), pp. 4614–4618
44. S.H. Strogatz, Exploring complex networks. Nature 410(6825), 268–276 (2001)
45. Y. Tang, R. Qiu, J. Fang, Q. Miao, M. Xia, Adaptive lag synchronization in unknown stochas-
tic chaotic neural networks with discrete and distributed time-varying delays. Phys. Lett. A
372(24), 4425–4433 (2008)
46. W. Wang, J. Cao, Synchronization in an array of linearly coupled networks with time-varying
delay. Phys. A 366, 197–211 (2006)
47. Z. Wang, Y. Liu, X. Liu, On global asymptotic stability of neural networks with discrete and
distributed delays. Phys. Lett. A 345(4), 299–308 (2005)
48. Z. Wang, Y. Liu, F. Karl, X. Liu, Stochastic stability of uncertain Hopfield neural networks
with discrete and distributed delays. Phys. Lett. A 354(4), 288–297 (2006)
49. Z. Wang, Y. Liu, L. Liu, X. Liu, Exponential stability of delayed recurrent neural networks
with Markovian jumping parameters. Phys. Lett. A 356(4), 346–352 (2006)
50. L. Wan, J. Sun, Mean square exponential stability of stochastic delayed Hopfield neural net-
works. Phys. Lett. A 343(4), 306–318 (2005)
51. Z. Wang, H. Shu, J. Fang, X. Liu, Robust stability for stochastic Hopfield neural networks with
time delays. Nonlinear Anal. Real World Appl. 7(5), 1119–1128 (2006)
52. Z. Wang, H. Shu, Y. Liu, D.W.C. Ho, X. Liu, Robust stability analysis of generalized neural
networks with discrete and distributed time delays. Chaos Solitons Fractals 30(4), 886–896
(2006)
53. Z. Wang, S. Lauria, J. Fang, X. Liu, Exponential stability of uncertain stochastic neural networks
with mixed time-delays. Chaos Solitons Fractals 32(1), 62–72 (2007)
54. D. Wang, Y. Zhong, S. Chen, Lag synchronizing chaotic system based on a single controller.
Commun. Nonlinear Sci. Numer. Simul. 13(3), 637–644 (2008)
55. K. Wang, Z. Teng, H. Jiang, Adaptive synchronization of neural networks with time-varying
delay and distributed delay. Phys. A: Stat. Mech. Appl. 387(2–3), 631–642 (2008)
56. L. Wang, W. Liu, H. Shi, Noise chaotic neural networks with variable thresholds for the fre-
quency assignment problem in satellite communications. IEEE Trans. Syst. Man Cybern. Part
C: Appl. Rev. 38(2), 209–217 (2008)
57. Z. Wang, J. Fang, X. Liu, Global stability of stochastic high-order neural networks with discrete
and distributed delays. Chaos Solitons Fractals 36(2), 388–396 (2008)
58. Z. Wu, H. Su, J. Chu, W. Zhou, Improved result on stability analysis of discrete stochastic
neural networks with time delay. Phys. Lett. A 373(17), 1546–1552 (2009)
59. Z. Wu, H. Su, J. Chu, W. Zhou, New results on robust exponential stability for discrete recurrent
neural networks with time-varying delays. Neurocomputing 72(13), 3337–3342 (2009)
60. L. Xie, Output feedback H∞ control of systems with parameter uncertainty. Int. J. Control
63(4), 741–750 (1996)
61. Y. Xu, S. He, Fourier series chaotic neural networks, in Advanced Intelligent Computing Theo-
ries and Applications. With Aspects of Contemporary Intelligent Computing Techniques (2008),
pp. 84–91
62. L. Yan, L. Wang, Applications of transiently chaotic neural networks to image restoration, in
Proceedings of the 2003 International Conference on Neural Networks and Signal Processing,
vol. 1 (2003), pp. 265–269
63. S. Yong, P. Scott, N. Nasrabadi, Object recognition using multilayer Hopfield neural network.
IEEE Trans. Image Process. 6(3), 357–372 (1997)

64. W. Yu, J. Cao, Synchronization control of stochastic delayed neural networks. Phys. A 373,
252–260 (2007)
65. H. Zhao, Existence and global attractivity of almost periodic solution for cellular neural network
with distributed delays. Appl. Math. Comput. 154(3), 683–695 (2004)
66. Y. Zhang, Z. He, A secure communication scheme based on cellular neural networks, in Pro-
ceedings of the IEEE International Conference on Intelligent Process Systems, vol. 1 (1997),
pp. 521–524
67. Q. Zhang, X. Wen, J. Xu, Delay-dependent exponential stability of cellular neural networks
with time-varying delays. Chaos Solitons Fractals 23(4), 1363–1369 (2005)
68. W. Zhou, Y. Xu, H. Lu, L. Pan, On dynamics analysis of a new chaotic attractor. Phys. Lett. A
372(36), 5773–5777 (2008)
69. W. Zhou, H. Lu, C. Duan, Exponential stability of hybrid stochastic neural networks with mixed
time delays and nonlinearity. Neurocomputing 72(13), 3357–3365 (2009)
Chapter 4
Adaptive Synchronization of Neural Networks

The adaptive control strategy has been widely adopted due to its good performance
in uncertain systems such as stochastic systems and nonlinear systems. In this chapter,
adaptive control is designed for the synchronization of several kinds of neural networks,
including BAM DNNs, SDNNs with Markovian switching, and T-S fuzzy NNs.

4.1 Projective Synchronization of BAM Self-Adaptive DNN
    with Unknown Parameters

4.1.1 Introduction

The Bidirectional Associative Memory (BAM for short) neural network has found
increasingly wide application in pattern recognition, artificial intelligence, prediction,
and control, owing to its capacity as an associative memory. Since Bart Kosko [16] put
forward the model in 1987, the BAM neural network has attracted a lot of attention
from researchers [12, 17, 32, 57] at home and abroad. Because information is often
stored in a distributed manner, the bidirectional structure of the BAM neural network
undoubtedly enhances the efficiency of effective association when an appropriate
encoding method is used. Fen Wang [35] and his team studied the existence and
stability of periodic solutions of BAM neural networks; Zhigang Liu [22] studied the
global attractor of BAM neural networks; Hongjun Xiang [51] solved the exponential
stability problem of fuzzy BAM neural networks; and Xingyuan Wang and his
colleagues [38] made great contributions to the study of adaptive synchronization of
delayed neural networks. Because of the sensitivity of BAM neural networks to initial
values and the time-delay phenomenon inherent in neural networks, the overall
characteristics of such a network change obviously when small dissimilarities in the
network parameters occur. Hence the synchronization and parameter identification
problems of BAM neural networks are particularly important.


As an important model, the synchronization issue of the BAM neural network has
been the focus of scholars' research. For coupled linear systems, Mainieri and Xu
found that drive-response systems can synchronize up to a scaling factor; this type of
synchronization is called projective synchronization [26, 52]. Projective synchronization
has also been a recent hot issue among scholars: Kehui Sun [30] and his partners
studied the adaptive function projective synchronization problem of chaotic systems,
and Lixin Yang [53] studied the projective synchronization problem of an improved
chaotic system.

Projective synchronization has important theoretical significance and application
prospects in information engineering, medical engineering, and chaotic secure
communication. Based on the foregoing two points, this section starts from both BAM
system parameter identification and delayed systems: by constructing an appropriate
Lyapunov function, adaptive projective synchronization of the delayed BAM system
is implemented, and its feasibility is further proved through numerical simulation. The
advantage of this section is that it not only realizes the projective synchronization of
the BAM system but also realizes parameter identification at the same time, a feature
that few papers have possessed.

4.1.2 Problem Formulation

The BAM model is a classic one [20]. Since this section is based on the BAM model,
we first present it as follows.
1. The mathematical model of the delayed BAM neural network, given as a system of
delayed differential equations, is described as follows:

u̇i(t) = −ci ui(t) + Σ_{j=1}^{m} aij f j(v j(t − τij)) + Ii,   i = 1, 2, . . . , n,

v̇ j(t) = −d j v j(t) + Σ_{i=1}^{n} b ji gi(ui(t − σ ji)) + J j,   j = 1, 2, . . . , m.    (4.1)

2. In compact form, the equations are equivalent to

u̇(t) = −Cu(t) + A f(v(t − τ)) + I,
v̇(t) = −Dv(t) + Bg(u(t − σ)) + J.    (4.2)

In these systems,

C = diag{c1, c2, . . . , cn},  D = diag{d1, d2, . . . , dm},
ci > 0, d j > 0 (i = 1, 2, . . . , n, j = 1, 2, . . . , m),

u = (u1, u2, . . . , un)T and v = (v1, v2, . . . , vm)T are the neuron states; A = (aij)n×m and
B = (b ji)m×n are the connection weight matrices between neurons; I and J are the
external inputs; and

f(u) = ( f1(u1), f2(u2), . . . , fn(un))T,
g(v) = (g1(v1), g2(v2), . . . , gm(vm))T.

Assumption 4.1 The function fi : R → R (i = 1, 2, . . . , n) is bounded, and there
exists a constant Ki > 0 such that | fi(x1) − fi(x2)| ≤ Ki|x1 − x2| for all x1, x2 ∈ R.

These functions are the neuron activation functions. Studies show that a reasonable
selection of the parameters C, A, D, B and of the delay parameters makes the system
exhibit a certain amount of chaos [24]. In order to better observe the chaotic
characteristics of the BAM neural network system and the mutual relations of the two
equations, we construct a slave system for each of the two equations; the two slave
systems are linked through the interaction functions and are given as follows:

ẋ(t) = −C1 x(t) + A1 f(y(t − τ)) + I1 + K1,
ẏ(t) = −D1 y(t) + B1 g(x(t − σ)) + J1 + K2.    (4.3)

In this system,

x(t) = (x1(t), x2(t), . . . , xn(t))T,
y(t) = (y1(t), y2(t), . . . , yn(t))T

are the state vectors, and

A1 f(y(t − τ)),  B1 g(x(t − σ))

are the interaction functions of the two subsystems. K1 and K2 are the controllers,
i.e., the external control inputs of the two equations.
Comparing Eqs. (4.2) and (4.3), it is not difficult to find that the two pairs of
equations are similar, but the parameters A, B, C, D are unknown parameters which
need to be identified. Therefore, the problem of determining the master-slave system
parameters turns into the problem of designing a proper controller

K1 = K1(u, x, A1, C1, I1, t),
K2 = K2(v, y, B1, D1, J1, t),

where

A1 = (A11(t), A12(t), . . . , A1p(t))T ∈ Rp,
B1 = (B11(t), B12(t), . . . , B1p(t))T ∈ Rp,
C1 = (C11(t), C12(t), . . . , C1p(t))T ∈ Rp,
D1 = (D11(t), D12(t), . . . , D1p(t))T ∈ Rp

are time-varying estimates of the corresponding parameters.

Definition 4.2 Parameter-updating rules are as follows:

Ȧ1 = A1 (u, v, x, y, A1 , C1 , I1 , t),

Ḃ1 = B1 (u, v, x, y, B1 , D1 , J1 , t),

Ċ1 = C1 (u, v, x, y, A1 , C1 , I1 , t),

Ḋ1 = D1 (u, v, x, y, B1 , D1 , J1 , t).

Definition 4.3 The conditions of projective synchronization are defined as follows:

lim_{t→∞} (x(t) − H1u(t)) = 0,
lim_{t→∞} (y(t) − H2v(t)) = 0.

Definition 4.4 The corresponding parameter conditions should conform to the
following statement:

lim_{t→∞} ‖A1(t) − A‖ = 0,
lim_{t→∞} ‖B1(t) − B‖ = 0,
lim_{t→∞} ‖C1(t) − C‖ = 0,
lim_{t→∞} ‖D1(t) − D‖ = 0.

Definition 4.5 The error system of the master-slave system is defined as

e1(t) = x(t) − H1u(t),
e2(t) = y(t) − H2v(t),    (4.4)

where H1 and H2 are n-order diagonal matrices:

H1 = diag{h11, h12, . . . , h1n},
H2 = diag{h21, h22, . . . , h2n}.

Substituting system (4.3) into system (4.4), we can get the equivalent error system
as follows:

ė1(t) = −C1x(t) + A1 f(y(t − τ)) + I1 + K1 + H1Cu(t) − H1A f(v(t − τ)) − H1 I,
ė2(t) = −D1y(t) + B1g(x(t − σ)) + J1 + K2 + H2Dv(t) − H2Bg(u(t − σ)) − H2 J.    (4.5)

4.1.3 Design of Controller

According to the conclusions of the study and the corresponding theoretical basis,
we design the projective synchronization controller based on the characteristics of
BAM neural networks as follows.
1. The projective synchronization controller:

K1(t) = −e1(t) + [H1 f(v(t − τ)) − f(y(t − τ))]A1 + [x(t) − H1u(t)]C1 + (H1 − 1)I1,
K2(t) = −e2(t) + [H2g(u(t − σ)) − g(x(t − σ))]B1 + [y(t) − H2v(t)]D1 + (H2 − 1)J1.    (4.6)

2. The errors can then be expressed as

ė1(t) = −e1(t) + H1 f(v(t − τ))(A1 − A) + H1(I1 − I) − H1u(t)(C1 − C),
ė2(t) = −e2(t) + H2g(u(t − σ))(B1 − B) + H2(J1 − J) − H2v(t)(D1 − D).    (4.7)

3. The unknown-parameter update rules:

Ȧ1(t) = −R⁻¹ f(v(t − τ))H1P1e1,
Ḃ1(t) = −Q⁻¹g(u(t − σ))H2P2e2,
Ċ1(t) = −S⁻¹u(t)H1P1e1,
Ḋ1(t) = −T⁻¹v(t)H2P2e2.    (4.8)
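As a hedged illustration of how the controller (4.6) and the update laws (4.8) fit together in one integration step, consider the sketch below. The positive-definite gains R and P1 are taken as identities for readability; F_v and F_y denote the matrix-valued activations (see (4.18) below) at v(t − τ) and y(t − τ), so F_v @ A1 realizes A1 f(v(t − τ)); the C1/I1 terms of (4.6) follow the same pattern and are abbreviated into `affine`.

```python
import numpy as np

# Sketch of one Euler step of the control/identification loop built from
# (4.6) and (4.8), with R = P1 = I. `affine` collects the remaining
# C1/I1 terms of (4.6), which the caller supplies.
def control_step(e1, A1, F_v, F_y, H1, affine, dt):
    K1 = -e1 + (H1 @ F_v - F_y) @ A1 + affine     # controller (4.6)
    A1 = A1 - dt * (F_v.T @ (H1.T @ e1))          # update law (4.8)
    return K1, A1
```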

Theorem 4.6 If the controller (4.6) and the parameter identification rules (4.8) are
adopted, then the trajectory of the slave system (4.3) and that of the master system
(4.2) reach synchronization starting from any initial values.

4. Proof of Theorem 4.6. Define

V1[e1(t), A1(t), C1(t)]
  = ½[e1T(t)P1e1(t) + (A1(t) − A)T R(A1(t) − A) + (C1(t) − C)T S(C1(t) − C)].    (4.9)

Differentiating the above equation gives

V̇1 = ½{ė1T P1e1 + e1T P1ė1 + [Ȧ1(t)]T R[A1(t) − A] + [A1(t) − A]T R Ȧ1(t)
     + [Ċ1(t)]T S[C1(t) − C] + [C1(t) − C]T S Ċ1(t)}
   = e1T(t)P1ė1(t) + [Ȧ1(t)]T R[A1(t) − A] + [Ċ1(t)]T S[C1(t) − C].    (4.10)

Substituting (4.7) and (4.8) into (4.10) yields

V̇1 = −e1T(t)P1e1(t) ≤ 0,    (4.11)



and hence

lim_{t→∞} ‖A1(t) − A‖ = 0,
lim_{t→∞} ‖C1(t) − C‖ = 0,
lim_{t→∞} ‖x(t) − H1u(t)‖ = 0.

Similarly, define the Lyapunov function

V2[e2(t), B1(t), D1(t)]
  = ½[e2T(t)P2e2(t) + (B1(t) − B)T Q(B1(t) − B) + (D1(t) − D)T T(D1(t) − D)].    (4.12)

Differentiating:

V̇2(t) = e2T(t)P2ė2(t) + Ḃ1T(t)Q[B1(t) − B] + Ḋ1T(t)T[D1(t) − D].    (4.13)

By the same derivation, we can prove

V̇2 = −e2T(t)P2e2(t) ≤ 0,    (4.14)

and hence

lim_{t→∞} (x(t) − H1u(t)) = 0,
lim_{t→∞} (y(t) − H2v(t)) = 0,

and

lim_{t→∞} ‖A1(t) − A‖ = 0,
lim_{t→∞} ‖B1(t) − B‖ = 0,
lim_{t→∞} ‖C1(t) − C‖ = 0,
lim_{t→∞} ‖D1(t) − D‖ = 0.

Inference 1. We use the following controller:

K1(t) = −e1(t) + [H1 f(v(t − τ)) − f(y(t − τ))]A1 + [x(t) − H1u(t)]C1 + (H1 − 1)I1,
K2(t) = −e2(t) + [H2g(u(t − σ)) − g(x(t − σ))]B1 + [y(t) − H2v(t)]D1 + (H2 − 1)J1.    (4.15)

And at the same time, we use the following update rules:

Ȧ1(t) = − f(v(t − τ))T H1e1,
Ḃ1(t) = −g(u(t − σ))T H2e2,
Ċ1(t) = −u(t)H1e1,
Ḋ1(t) = −v(t)H2e2.    (4.16)

Remark 4.7 It can be proved that the trajectories of the slave system and of the master
system reach orbital synchronization no matter what the initial values are.

Remark 4.8 There are many papers discussing the projective synchronization problem,
but only a few of them address the same problem for BAM networks together with the
parameter identification problem. The foregoing proof settles both questions at once.

4.1.4 Numerical Simulation

In order to verify the effectiveness of the controller, we give a numerical simulation
example. Based on the classical BAM neural network model, we use the following
model as the master system:

u̇(t) = −Cu(t) + A f(v(t − τ)) + I,
v̇(t) = −Dv(t) + Bg(u(t − σ)) + J.    (4.17)

u(t) = [u1(t), u2(t)]T, τ = 1, σ = 1, C = (−1, −1)T, A = (−1.5, −0.1, −0.2, −2.5)T,
I = (1, 5)T, J = (3, 1.5)T,
and


f(v(t − τ)) = [tanh(v1(t − τ))  tanh(v2(t − τ))  0  0;
               0  0  tanh(v1(t − τ))  tanh(v2(t − τ))],

g(u(t − σ)) = [Ξ1  Ξ2  0  0;
               0  0  Ξ1  Ξ2],    (4.18)

where Ξ1 = (|u1(t − σ) + 1| − |u1(t − σ) − 1|)/2 and Ξ2 = (|u2(t − σ) + 1| −
|u2(t − σ) − 1|)/2.
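A minimal sketch of the matrix-valued activations (4.18), with v_tau and u_sig denoting v(t − τ) and u(t − σ) (each a length-2 vector); the function names are our own.

```python
import numpy as np

def f_mat(v_tau):
    t1, t2 = np.tanh(v_tau)
    return np.array([[t1, t2, 0.0, 0.0],
                     [0.0, 0.0, t1, t2]])

def g_mat(u_sig):
    xi = (np.abs(u_sig + 1) - np.abs(u_sig - 1)) / 2   # Xi_1, Xi_2
    return np.array([[xi[0], xi[1], 0.0, 0.0],
                     [0.0, 0.0, xi[0], xi[1]]])
```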
According to Inference 1, we choose Eqs. (4.17) and (4.18) for the numerical
simulation and obtain the chaotic attractor of the foregoing system. Finally, we define
the slave system as follows:

ẋ(t) = − C1 x(t) + A1 f (y(t − τ )) + I1 + K 1 ,
(4.19)
ẏ(t) = − D1 y(t) + B1 g(x(t − σ)) + J1 + K 2 .

Choose the initial value as follows:

(u 1 (t) u 2 (t) v1 (t) v2 (t))T = (0.1 0.1 0.1 0.1)T ,

H1 = diag{2, −1.5}, H2 = diag{1, −1}. The initial values of the slave system are as
follows:
(x1 (t) x2 (t) y1 (t) y2 (t))T = (0.25 0.12 0.36 0.55)T ,

A1 = (A11 (t) A12 (t) A13 (t) A14 (t))T = (1 1 1 1)T ,

B1 = (B11 (t) B12 (t) B13 (t) B14 (t))T = (1 1 1 1)T ,

C1 = (C11 (t) C12 (t) C13 (t) C14 (t))T = (1 1 1 1)T ,

D1 = (D11 (t) D12 (t) D13 (t) D14 (t))T = (1 1 1 1)T ,

The initial values of the error system are as follows (see Fig. 4.1 for the chaotic feature of the system):

e1 (t) = (e11 (t) e12 (t))T = (0.35 0.45),

e2 (t) = (e21 (t) e22 (t))T = (0.35 0.45).

Based on these parameters, MATLAB numerical simulation gives the evolution of the
four error components e11(t), e12(t), e21(t), e22(t), as shown in Figs. 4.2, 4.3 and 4.4.
Figures 4.5 and 4.6 show the identification progress of A and B; due to limited space,
we omit those of C and D. The values of I and J in the master system are the same as
those in the slave system.

4.1.5 Conclusion

There have been many research results on the projective synchronization problem.
This section analyzes the projective synchronization problem of BAM neural networks

Fig. 4.1 The chaotic feature of the system

Fig. 4.2 The synchronization progress of parameter e11(t)

Fig. 4.3 The synchronization progress of parameter e12(t)

Fig. 4.4 The synchronization progress of parameter e21(t)

Fig. 4.5 The parameter identification progress of A

Fig. 4.6 The parameter identification progress of B

by constructing a suitable controller, driving the two interconnected systems, namely
the master system and the corresponding slave system, into a state of orbital
synchronization. In addition, parameter identification of the BAM neural network has
been realized, which demonstrates that the system parameters ultimately tend to
stable values after some oscillation. Based on the Lyapunov stability theory, a
theoretical analysis and finally a numerical simulation with MATLAB have proved the
feasibility of this method.

4.2 Adaptive Synchronization of Stochastic T-S Fuzzy DNN
    with Markovian Jump

4.2.1 Introduction

The synchronization problem of neural networks has been extensively investigated
over the last decade due to their successful applications in many areas (see e.g.
[40]), such as communication, signal processing, and combinatorial optimization.
Synchronization means bringing two separately occurring events into synchrony so
that they occur at the same time. The goal of synchronizing neural networks evolving
separately, one called “the drive system” and the other called “the response system,”
is that those systems share a common trajectory from a certain time onward. Moreover,
the adaptive synchronization for neural networks has drawn much attention due to its
potential applications in many fields (see e.g., [19, 25, 31, 61, 65]), such as parameter
estimation adaptive control and model reference adaptive control.
On the other hand, the well-known Takagi-Sugeno (T-S) fuzzy model is recognized
as an efficient tool in approximating a complex nonlinear system. The T-S fuzzy
modeling is a multi-model approach in which some linear models are blended into
an overall single model through nonlinear membership functions to represent the
nonlinear dynamics. Based on the T-S fuzzy model, the adaptive synchronization for
fuzzy neural networks is addressed in [15, 28] by a simple analytic method, such
as the linear matrix inequality approach. In reality, time-delay system is frequently

encountered in many areas and a time delay is often a source of instability and
oscillators. For neural networks with time delays, various sufficient conditions have
been proposed to guarantee the global asymptotic or exponential stability in some
recent literatures, see e.g. [9, 43, 56, 63, 64]. Meanwhile, many neural networks can
be with abrupt changes in their structure and parameters caused by some phenomena
such as component failures or repairs, changing subsystem interconnections, and
abrupt environmental disturbances. In this situation, there exist finite modes in the
neural networks, and the modes may jump (or switch) from one to another at different
times. This kind of systems is widely studied by many scholars, see e.g. [27, 45, 58,
66] and the references therein.
This section is concerned with the adaptive synchronization problem for T-S fuzzy
neural networks with stochastic noises and Markovian jumping parameters, treated
by the M-matrix method. The main purpose of this section is to design an adaptive
feedback controller for such networks; the M-matrix-based criteria then test whether,
under this controller, the T-S fuzzy neural networks are stochastically synchronized.
Finally, a numerical simulation is used to demonstrate the usefulness of the derived
M-matrix-based synchronization conditions.

4.2.2 Problem Formulation and Preliminaries

Given a probability space (Ω, F, P), {r (t), t ≥ 0} is a homogeneous Markov chain


taking values in a finite set S = {1, 2, . . . , S} with the generator Γ = (γij ) S×S ,
i, j ∈ S. Consider the following stochastic T-S fuzzy neural networks with time-
delay and Markovian jumping parameters described by the following rules.
Drive System Rule l: IF s1 (t) is μl1 , s2 (t) is μl2 , . . ., sg (t) is μlg , THEN

dx(t) = [−Cl(r(t))x(t) + Al(r(t)) f(x(t)) + Bl(r(t)) f(x(t − τ)) + Dl(r(t))]dt,    (4.20)

where l ∈ S1 = {1, 2, . . . , ν}. μl1 , μl2 , . . . , μlg are the fuzzy sets. s1 (t), s2 (t),
. . . , sg (t) are the premise variables. ν is the number of fuzzy IF-THEN rules.
t ≥ 0 (or t ∈ R+ , the set of all non-negative real numbers) is the time variable.
x(t) = (x1 (t), x2 (t), . . . , xn (t))T ∈ Rn is the state vector of drive system (4.20)
associated with n neurons. f (x(t)) = [ f 1 (x1 (t)), f 2 (x2 (t)), . . . , f n (xn (t))]T ∈ Rn
denotes the activation function of neurons. τ > 0 is the state delay. As a matter of
convenience, for t ≥ 0, we denote Cl (r (t)) = Cli , Al (r (t)) = Ali ,Bl (r (t)) = Bli and
Dl(r(t)) = Dli, respectively. In the drive system (4.20), furthermore, ∀i ∈ S, Cli =
diag{cl1i, cl2i, . . . , clni} has positive and unknown entries clvi > 0, while Ali = (aljvi)n×n
and Bli = (bljvi)n×n are the connection weight and the delayed connection weight
matrices, respectively, and are both unknown. Dli = (dl1i, dl2i, . . . , dlni)T ∈ Rn
is the constant external input vector.

Using the singleton fuzzifier, product fuzzy inference, and weighted average
defuzzifier, the output of the above fuzzy drive system is inferred as follows:

dx(t) = Σ_{l=1}^{ν} hl(s(t)){[−Cli x(t) + Ali f(x(t)) + Bli f(x(t − τ)) + Dli]dt},    (4.21)

where

hl(s(t)) = ϖl(s(t)) / Σ_{j=1}^{ν} ϖj(s(t)),    ϖl(s(t)) = Π_{j=1}^{g} μlj(sj(t)),

s(t) = [s1(t) s2(t) · · · sg(t)],
in which μlj(sj(t)) is the grade of membership of sj(t) in μlj. Then it can be seen
that, for l = 1, 2, . . . , ν and all t, Σ_{l=1}^{ν} hl(s(t)) = 1 and hl(s(t)) ≥ 0.
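The normalized fuzzy weights are easy to compute in practice. A minimal sketch, assuming the memberships are supplied as a (ν, g) array of μlj(sj(t)) values:

```python
import numpy as np

# Products of memberships over the premise variables, normalized so the
# weights are nonnegative and sum to one.
def fuzzy_weights(memberships):
    w = np.prod(memberships, axis=1)        # varpi_l = prod_j mu_lj(s_j)
    return w / w.sum()                      # h_l = varpi_l / sum_j varpi_j

mus = np.array([[0.9, 0.4], [0.2, 0.7]])    # nu = 2 rules, g = 2 premises (made-up)
print(fuzzy_weights(mus))                   # nonnegative, sums to 1
```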
Corresponding to the fuzzy drive system (4.20), the fuzzy response system is
described by the following rules.
Response System Rule l: IF s1 (t) is μl1 , s2 (t) is μl2 , . . ., sg (t) is μlg , THEN

dy(t) = [−Ĉl (r (t))y(t) + Âl (r (t)) f (y(t))


+ B̂l (r (t)) f (y(t − τ )) + Dl (r (t)) + Ul (t)]dt (4.22)
+ σ(t, r (t), l, y(t) − x(t), y(t − τ ) − x(t − τ ))dω(t),

where y(t) = (y1(t), y2(t), . . . , yn(t))T ∈ Rn is the state vector of response system
(4.22). For convenience, for t ≥ 0, we denote Ĉl(r(t)) = Ĉli, Âl(r(t)) = Âli and
B̂l(r(t)) = B̂li, respectively. Here Ĉli = diag{ĉl1i, ĉl2i, . . . , ĉlni}, Âli = (âljki)n×n and
B̂li = (b̂ljki)n×n are the fuzzy estimations of the unknown matrices
Cli , Ali and Bli , respectively. ω(t) = [ω1 (t), ω2 (t), . . . , ωn (t)]T is an n-dimensional
Brownian motion defined on a complete probability space (Ω, F, P) with a natural
filtration {Ft }t≥0 (i.e. Ft = σ{ω(s) : 0 ≤ s ≤ t} is a σ-algebra), and is independent of
the Markovian process {r (t)}t≥0 , and σ : R+ ×S× S1 ×Rn ×Rn → Rn×n is the noise
intensity matrix and can be regarded as a result from the occurrence of eternal random
fluctuation and other probabilistic causes. Ul (t) = (u l1 (t), u l2 (t), . . . , u ln (t))T ∈
Rn is a control input vector with the form of rules as follows.
Controller Rule l: IF s1 (t) is μl1 , s2 (t) is μl2 , . . ., sg (t) is μlg , THEN

Ul(t) = Kl(t)(y(t) − x(t)) = diag{kl1(t), kl2(t), . . . , kln(t)}(y(t) − x(t)),    (4.23)

where K l ∈ Rm×n are matrices to be determined later. Then the state-feedback fuzzy
controller is given by


U(t) = Σ_{l=1}^{ν} hl(s(t)){Kl(t)(y(t) − x(t))}.    (4.24)

Therefore the overall fuzzy response system is given by


dy(t) = Σ_{l=1}^{ν} hl(s(t)){[−Ĉli y(t) + Âli f(y(t)) + B̂li f(y(t − τ)) + Dli + Ul(t)]dt
        + σ(t, i, l, y(t) − x(t), y(t − τ) − x(t − τ))dω(t)}.    (4.25)

Denote the synchronization error signal by e(t) = y(t) − x(t). Synchronization


between the drive and response system means that e(t) → 0 as t → ∞. From (4.21)
and (4.25), the state e(t) of error system is arranged as


de(t) = Σ_{l=1}^{ν} hl(s(t)){[−C̃li y(t) − Cli e(t) + Ãli g(y(t)) + Ali g(e(t))
        + B̃li g(yτ) + Bli g(eτ) + Ul(t)]dt + σ(t, i, l, e(t), eτ)dω(t)},    (4.26)

where C̃li = Ĉli − Cli , Ãli = Âli − Ali , B̃li = B̂li − Bli . For the purpose of simplicity,
we mark e(t − τ ) = eτ and f (x(t) + e(t)) − f (x(t)) = g(e(t)).
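For intuition about how the blended error dynamics evolve, the following hedged sketch performs one Euler-Maruyama step of (4.26) in the matched case C̃ = Ã = B̃ = 0 (Case 1 below), for the matrices of the current Markov mode; the mode switching itself is abstracted away, and all names are illustrative.

```python
import numpy as np

# One Euler-Maruyama step of the matched error dynamics under the fuzzy
# controller (4.24). h holds the rule weights h_l(s(t)), K[l] the diagonal
# gains k_lj(t), g the shifted activation, and sigma(e, e_tau) the n x n
# noise intensity; all are caller-supplied.
def em_step(e, e_tau, h, C, A, B, K, g, sigma, dt, rng):
    drift = np.zeros_like(e)
    for l, hl in enumerate(h):                  # blend over the nu fuzzy rules
        drift += hl * (-C[l] @ e + A[l] @ g(e) + B[l] @ g(e_tau) + K[l] * e)
    dw = rng.standard_normal(e.shape) * np.sqrt(dt)
    return e + drift * dt + sigma(e, e_tau) @ dw
```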
The initial condition associated with system (4.26) is given in the following form

e(s) = ξ(s), s ∈ [−τ , 0],

for any ξ ∈ L2F0 ([−τ , 0], Rn ), where L2F0 ([−τ , 0], Rn ) is the family of all F0 -
measurable C([−τ , 0]; Rn )-value random variables satisfying that sup−τ ≤s≤0
E|ξ(s)|2 < ∞, and C([−τ , 0]; Rn ) denotes the family of all continuous Rn -valued
functions ξ(s) on [−τ, 0] with the norm ‖ξ‖ = sup−τ≤s≤0 |ξ(s)|.
The main purpose of the rest of this section is to establish a criterion of adap-
tive synchronization for the system (4.21) and the response system (4.25) by using
adaptive feedback control and M-matrix method.
For this purpose, we introduce some assumptions, the definition and some lemmas
which will be used in the proofs of our main results.

Assumption 4.9 The neuron activation functions f i (·) are bounded and satisfy the
following Lipschitz condition:

| f i (u) − f i (v)| ≤ |G i (u − v)|, ∀u, v ∈ Rn , i = 1, 2, . . . , n,

where G i ∈ Rn×n (i = 1, 2, . . . , n) are known constant matrices.



Assumption 4.10 The noise intensity matrix σ(·, ·, ·, ·, ·) satisfies the linear growth
condition. That is, there exist two positive constants H1 and H2 such that

trace(σ(t, r (t), l, u(t), v(t)))T (σ(t, r (t), l, u(t), v(t)))


≤ H1 |u(t)|2 + H2 |v(t)|2

for all (t, r (t), l, u(t), v(t)) ∈ R+ × S × S1 × Rn × Rn .

Consider an n-dimensional stochastic delayed differential equation (SDDE, for


short) with Markovian jumping parameters

d x(t) = (t, r (t), x(t), xτ (t))dt + ð(t, r (t), x(t), xτ (t))dω(t) (4.27)

on t ∈ [0, ∞) with the initial data given by

{x(θ) : −τ ≤ θ ≤ 0} = ξ ∈ L2F0([−τ, 0]; Rn).

For V ∈ C2,1 (R+ × S × Rn ; R+ ), define an operator L from R+ × S × Rn to R


by Eq. (1.7).
For the SDDE with Markovian jumping parameters again, the following hypoth-
esis is imposed on the coefficients  and ð.
Assumption 4.11 ([55]) Both  and ð satisfy the local Lipschitz condition. That is,
for each h > 0, there is an L h > 0 such that

|(t, i, x, y) − (t, i, x̄, ȳ)| + |ð(t, i, x, y) − ð(t, i, x̄, ȳ)|


≤ L h (|x − x̄| + |y − ȳ|)

for all (t, i) ∈ R+ × S and those x, y, x̄, ȳ ∈ Rn with |x| ∨ |y| ∨ |x̄| ∨ |ȳ| ≤ h. Moreover,

sup{|(t, i, 0, 0)| ∨ |ð(t, i, 0, 0)| : t ≥ 0, i ∈ S} < ∞.

4.2.3 Main Results

In this section, we give a criterion and three special cases of adaptive synchronization
by the M-matrix method for the drive system (4.21) and the response system (4.25).

Theorem 4.12 Assume that M := −diag{η, η, . . . , η} − Γ (with S diagonal entries η)
is a nonsingular M-matrix, where η = −2γ + α + L² + β + H1,
γ = min_{l∈S1} min_{i∈S} min_{1≤j≤n} clji, α = max_{l∈S1} max_{i∈S} (ρ(Ali))²,
β = max_{l∈S1} max_{i∈S} (ρ(Bli))². Let m > 0 and m⃗ = (m, m, . . . , m)T ∈ RS, so that
all elements of M⁻¹m⃗ are positive; according to Lemma 1.12,
(q̃1, q̃2, . . . , q̃S)T := M⁻¹m⃗ ≫ 0. Furthermore, assume also that

(L² + H2)q̄ < −(η q̃i + Σ_{k=1}^{S} γik q̃k),  ∀i ∈ S, l ∈ S1,    (4.28)

where q̄ = max_{l∈S1} max_{i∈S} qli and q̃i = min_{l∈S1} qli.
Under Assumptions 4.9 and 4.10, the noise-perturbed fuzzy response system (4.25)
can be adaptively synchronized with the unknown fuzzy drive system (4.21), if the
feedback gain Kl(t) of controller (4.24) is adapted according to the following update
law

k̇lj = −αj qli ej²,    (4.29)

and the parameter update laws of the matrices Ĉli, Âli and B̂li are chosen as

dĉlji/dt = γj qli ej yj,  dâljvi/dt = −αjv qli ej fv,  db̂ljvi/dt = −βjv qli ej ( fv)τ,    (4.30)

where αj > 0, γj > 0, αjv > 0 and βjv > 0 ( j, v = 1, 2, . . . , n) are arbitrary
constants, respectively.
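The hypotheses of Theorem 4.12 can be checked numerically. Below is a hedged sketch: it builds M, certifies the nonsingular M-matrix property through q̃ = M⁻¹m⃗ > 0 (via Lemma 1.12), and tests inequality (4.28). The scalar inputs must be computed from the network data as in the theorem statement; the function name is our own.

```python
import numpy as np

def check_theorem_4_12(Gamma, gamma_, alpha_, beta_, L, H1, H2,
                       q_bar, q_tilde, m=1.0):
    S = Gamma.shape[0]
    eta = -2.0 * gamma_ + alpha_ + L**2 + beta_ + H1
    M = -eta * np.eye(S) - Gamma
    q = np.linalg.solve(M, m * np.ones(S))
    if not np.all(q > 0):
        return False                             # not a nonsingular M-matrix
    rhs = -(eta * q_tilde + Gamma @ q_tilde)     # right side of (4.28), per mode
    return bool(np.all((L**2 + H2) * q_bar < rhs))
```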

Proof Define a Lyapunov function candidate as

V(t, i, l, e, eτ) = Σ_{l=1}^{ν} hl(s(t)){qli |e|² + Σ_{j=1}^{n} [(1/αj)klj² + (1/γj)(c̃lji)²
    + Σ_{v=1}^{n} (1/αjv)(ãljvi)² + Σ_{v=1}^{n} (1/βjv)(b̃ljvi)²]}.

Computing LV(t, i, l, e, eτ) along the trajectory of the error system (4.26), and using
(4.29) and (4.30), one can obtain

LV(t, i, l, e, eτ)
  = Σ_{l=1}^{ν} hl(s(t)){Vt + Ve[−C̃li y − Cli e + Ãli f(y) + Ali g(e) + B̃li f(yτ) + Bli g(eτ) + Ul(t)]
    + ½ trace(σT(t, i, l, e, eτ)Vee σ(t, i, l, e, eτ)) + Σ_{k=1}^{S} γik V(t, k, e)}
  = Σ_{l=1}^{ν} hl(s(t)){Σ_{j=1}^{n} (2/αj)klj k̇lj + Σ_{j=1}^{n} (2/γj)c̃lji (dc̃lji/dt)
    + Σ_{j=1}^{n} Σ_{v=1}^{n} (2/αjv)ãljvi (dãljvi/dt) + Σ_{j=1}^{n} Σ_{v=1}^{n} (2/βjv)b̃ljvi (db̃ljvi/dt)
    + 2qli eT[−C̃li y − Cli e + Ãli f(y) + Ali g(e) + B̃li f(yτ) + Bli g(eτ) + Ul(t)]
    + ½ trace(σT(t, i, l, e, eτ)(2qli)σ(t, i, l, e, eτ)) + Σ_{k=1}^{S} γik qlk |e|²}    (4.31)
  = Σ_{l=1}^{ν} hl(s(t)){2qli eT[−Cli e + Ali g(e) + Bli g(eτ)]
    + ½ trace(σT(t, i, l, e, eτ)(2qli)σ(t, i, l, e, eτ)) + Σ_{k=1}^{S} γik qlk |e|²}.

Now, using Assumptions 4.9 and 4.10 together with Lemma 1.13 yields

− e T Cli e ≤ −γ|e|2 , (4.32)

2e T Ali g(e) ≤ e T Ali (Ali )T e + g T (e)g(e)


(4.33)
≤ (α + L 2 )|e|2 ,

2e T Bli g(eτ ) ≤ e T Bli (Bli )T e + g T (eτ )g(eτ )


(4.34)
≤ (β|e|2 + L 2 |eτ |2 ),

and
(1/2)trace (σ T (t, i, l, e, eτ )(2qli )σ(t, i, l, e, eτ ))
(4.35)
≤ qli (H1 |e|2 + H2 |eτ |2 ).

Substituting (4.32)–(4.35) into (4.31) yields

LV(t, i, l, e, eτ)
  ≤ Σ_{l=1}^{ν} hl(s(t)){(η qli + Σ_{k=1}^{S} γik qlk)|e|² + (L² + H2)qli |eτ|²}
  ≤ Σ_{l=1}^{ν} hl(s(t)){(η q̃i + Σ_{k=1}^{S} γik q̃k)|e|² + (L² + H2)q̄ |eτ|²}
  ≤ Σ_{l=1}^{ν} hl(s(t)){−m|e|² + (L² + H2)q̄ |eτ|²},    (4.36)

where m = −(η q̃i + Σ_{k=1}^{S} γik q̃k) by [q̃1, q̃2, . . . , q̃S]T = M⁻¹m⃗.
Let ψ(t) = 0, ω1(e) = m|e|² and ω2(eτ) = (L² + H2)q̄|eτ|². Then inequality
(4.36) implies that inequality (1.14) holds. ω1(0) = 0 and ω2(0) = 0 when e = 0
and eτ = 0, and inequality (4.28) implies ω1(e) > ω2(eτ), so (1.15) holds. Moreover,
(1.16) holds when |e| → ∞ and |eτ| → ∞. By Lemma 1.9, the error system (4.26) is
adaptively almost surely asymptotically stable, and hence the noise-perturbed response
system (4.25) can be almost surely asymptotically synchronized with the drive
neural network (4.21). This completes the proof.

Remark 4.13 In Theorem 4.12, the condition (4.28) for adaptive synchronization of
neural networks with Markovian jumping parameters, obtained by the M-matrix
approach, is very different from those obtained by, e.g., the linear matrix inequality
method. The condition can be checked once the drive system and the response system
are given and the positive constant m is chosen.

Now, we are in a position to consider three special cases of the neural networks
(4.21), (4.25) and (4.26). The proof is similar to that of Theorem 4.12, and hence
omitted.
Case 1. The matrices Cli, Ali and Bli of the drive system (4.21) and the matrices Ĉli,
Âli and B̂li of the response system (4.25) have the same parameters, respectively; that is
to say, Cli = Ĉli, Ali = Âli and Bli = B̂li. The drive system, the response system, and
the error system can be represented as follows:


dx(t) = Σ_{l=1}^{ν} hl(s(t)){[−Cli x(t) + Ali f(x(t)) + Bli f(x(t − τ)) + Dli]dt},    (4.37)


dy(t) = Σ_{l=1}^{ν} hl(s(t)){[−Cli y(t) + Ali f(y(t)) + Bli f(y(t − τ)) + Dli + Ul(t)]dt
        + σ(t, i, l, y(t) − x(t), y(t − τ) − x(t − τ))dω(t)},    (4.38)


de(t) = Σ_{l=1}^{ν} hl(s(t)){[−Cli e(t) + Ali g(e(t)) + Bli g(eτ) + Ul(t)]dt
        + σ(t, i, l, e(t), eτ)dω(t)}.    (4.39)

For this case, one can get the following result that is analogous to Theorem 4.12.
Corollary 4.14 Assume that η = −2γ + α + L² + β + H1, γ = min_{l∈S1} min_{i∈S} min_{1≤j≤n} clji,
α = max_{l∈S1} max_{i∈S} (ρ(Ali))², β = max_{l∈S1} max_{i∈S} (ρ(Bli))², q̄ = max_{l∈S1} max_{i∈S} qli,
q̃i = min_{l∈S1} qli, and

(L² + H2)q̄ < −(η q̃i + Σ_{k=1}^{S} γik q̃k).    (4.40)

Under Assumptions 4.9 and 4.10, the noise-perturbed response system (4.38) can be
adaptively synchronized with the drive system (4.37), if the feedback gain K l (t) of
controller (4.24) with the update law is chosen as

k̇lj = −α j qli e2j , (4.41)

where α j > 0 ( j = 1, 2, . . . , n) is arbitrary constant.


Case 2. The Markovian jumping parameters are removed from the neural networks
(4.21), (4.25) and (4.26). That is to say, S = 1. The drive system, the response system
and the error system can be represented as follows:


dx(t) = Σ_{l=1}^{ν} hl(s(t)){[−Cl x(t) + Al f(x(t)) + Bl f(x(t − τ)) + Dl]dt},    (4.42)


dy(t) = Σ_{l=1}^{ν} hl(s(t)){[−Ĉl y(t) + Âl f(y(t)) + B̂l f(y(t − τ)) + Dl + Ul(t)]dt
        + σ(t, l, y(t) − x(t), y(t − τ) − x(t − τ))dω(t)},    (4.43)


de(t) = Σ_{l=1}^{ν} hl(s(t)){[−C̃l y(t) − Cl e(t) + Ãl g(y(t)) + Al g(e(t))
        + B̃l g(yτ) + Bl g(eτ) + Ul(t)]dt + σ(t, l, e(t), eτ)dω(t)}.    (4.44)

For this case, one can also get the following result that is analogous to Theo-
rem 4.12.
Corollary 4.15 Assume that η = −2γ + α + L² + β + H1, γ = min_{l∈S1} min_{1≤j≤n} clj,
α = max_{l∈S1} (ρ(Al))², β = max_{l∈S1} (ρ(Bl))², and L² + H2 < −η. Under Assumptions 4.9
and 4.10, the noise-perturbed response system (4.43) can be adaptively synchronized with

the drive system (4.42), if the feedback gain K l (t) of controller (4.24) with the update
law is chosen as
k̇lj = −α j ql e2j , (4.45)

and the parameter update laws of the matrices Ĉl, Âl and B̂l are chosen as

dĉlj/dt = γj ql ej yj,
dâljv/dt = −αjv ql ej fv,
db̂ljv/dt = −βjv ql ej ( fv)τ,    (4.46)

where α j > 0, γ j > 0, αjv > 0 and βjv > 0 ( j, v = 1, 2, . . . , n) are arbitrary
constants, respectively.
Case 3. The T-S Fuzzy control is removed from the neural networks (4.21),
(4.25), (4.26) and the controller (4.24). The drive system, the response system, the
error system, and the controller can be represented as follows:

d x(t) = [−C i x(t) + Ai f (x(t)) + B i f (x(t − τ )) + D i ]dt, (4.47)

dy(t) = [−Ĉi y(t) + Âi f(y(t)) + B̂i f(y(t − τ)) + Di + U(t)]dt
        + σ(t, i, y(t) − x(t), y(t − τ) − x(t − τ))dω(t),    (4.48)

de(t) = [−C̃i y(t) − Ci e(t) + Ãi f(y(t)) + Ai g(e(t))
        + B̃i f(yτ) + Bi g(eτ) + U(t)]dt + σ(t, i, e(t), eτ)dω(t),    (4.49)

U(t) = K(t)(y(t) − x(t)) = diag{k1(t), k2(t), . . . , kn(t)}(y(t) − x(t)).    (4.50)

Corollary 4.16 Assume that η = −2γ + α + L² + β + H1, γ = min_{i∈S} min_{1≤j≤n} cji,
α = max_{i∈S} (ρ(Ai))², β = max_{i∈S} (ρ(Bi))², q̄ = max_{i∈S} qi, and
(L² + H2)q̄ < −(ηqi + Σ_{k=1}^{S} γik qk).
Under Assumptions 4.9 and 4.10, the noise-perturbed response system (4.48) can be
adaptively synchronized with the drive system (4.47), if the feedback gain K (t) of
controller (4.50) with the update law is chosen as

k̇ j = −α j q i e2j , (4.51)

and the parameter update laws of the matrices Ĉi, Âi and B̂i are chosen as

dĉj/dt = γj qi ej yj,
dâjv/dt = −αjv qi ej fv,
db̂jv/dt = −βjv qi ej ( fv)τ,    (4.52)

where α j > 0, γ j > 0, αjv > 0 and βjv > 0 ( j, v = 1, 2, . . . , n) are arbitrary
constants, respectively.
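For concreteness, a hedged sketch of one Euler step of the gain update (4.51) and the identification laws (4.52), taking all the arbitrary constants αj, γj, αjv, βjv equal to one; f_y and f_ytau denote f(y(t)) and f(y(t − τ)), and the names are our own.

```python
import numpy as np

def identify_step(k, C_hat, A_hat, B_hat, e, y, f_y, f_ytau, q_i, dt):
    k = k - dt * q_i * e**2                        # k_j' = -q^i e_j^2
    C_hat = C_hat + dt * q_i * np.diag(e * y)      # diagonal entries c_j
    A_hat = A_hat - dt * q_i * np.outer(e, f_y)    # a_jv' = -q^i e_j f_v
    B_hat = B_hat - dt * q_i * np.outer(e, f_ytau) # b_jv' = -q^i e_j (f_v)_tau
    return k, C_hat, A_hat, B_hat
```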

4.2.4 Numerical Examples

An example is presented to demonstrate the effectiveness of the main results obtained
in this section. The aim is to examine the adaptive stability of a given stochastic T-S
fuzzy neural network with Markovian jumping parameters.
Consider the following stochastic T-S fuzzy neural networks with time-delay and
Markovian jumping parameters (the drive system (4.37), the response system (4.38)
and the error system (4.39)), and the network parameters are given as follows:
Model 1:
    C11 = diag{1.5, 1, 1.1},
    A11 = [1.2, −1.5, 1.1; −1.7, 1.2, 1.2; 1, 1.3, 1.5],
    B11 = [0.7, −0.2, 0.8; 0, 0.3, 0.6; 0.7, 1.5, 1.7],  D11 = [0.6, 0.6, 0.1]T,
    C12 = diag{0.9, 1.1, 1},
    A12 = [1.1, −1.5, 1; −1.8, 1.3, 1.1; 2.1, 1.2, 2.6],
    B12 = [−0.4, −0.2, 1.9; 0.2, 0.6, 2.3; 0.6, 1.3, 0.7],  D12 = [0.9, 0.8, 0.2]T,
    σ(t, 1, 1, et, eτ) = (0.4eτ1, 0.5et2, 0.5eτ3)T,
    σ(t, 1, 2, et, eτ) = (0.5et1, 0.3eτ2, 0.5et3)T;    (4.53)

Model 2:
    C21 = diag{1, 0.9, 0.8},
    A21 = [1.1, −1.6, 1; −1.8, 1.2, 1.1; 2.1, 1.1, 2.5],
    B21 = [−0.4, −0.1, 1.8; 0.3, 0.5, 2.4; 0.7, 1.4, 0.8],  D21 = [0.6, 0.7, 0.1]T,
    C22 = diag{1.1, 1.2, 1},
    A22 = [1.1, −1.6, 0.5; −1.7, 1.2, 0.3; −1.7, −1.8, 1.2],
    B22 = [−0.4, −0.1, 0.5; 0.3, 0.5, 0.7; 1.4, 0.6, 0.8],  D22 = [0.8, 0.12, 0.2]T,
    σ(t, 2, 1, et, eτ) = (0.3eτ1, 0.4et2, 0.4eτ3)T,
    σ(t, 2, 2, et, eτ) = (0.5et1, 0.3eτ2, 0.2et3)T.    (4.54)

Fig. 4.7 Switching of system mode



Γ = [−7 7; 4 −4],      α_1 = α_2 = α_3 = 1,
f(x(t)) = tanh(x(t)),      τ = 0.5,      h_1(s(t)) = sin²(e_1),      h_2(s(t)) = cos²(e_1).

These parameters satisfy Assumptions 4.9 and 4.10, Inequality (4.40), and the requirement that M be a nonsingular M-matrix. So, by Corollary 4.14, the main result is verified if the responses e_1(t), e_2(t) and e_3(t) of the error system achieve adaptive synchronization, i.e., converge to zero.
To illustrate the effectiveness of the method proposed in this section, we adopt the M-matrix approach to compute the solutions for stochastic T-S fuzzy neural networks with Markovian jumping parameters and to simulate the dynamics of the error system. The simulation results are given in Figs. 4.7, 4.8 and 4.9. Among them, Fig. 4.7 shows the switching of the system mode; a sketch of how such a switching signal can be generated is given below. Figure 4.8 shows the state response of e_1(t), e_2(t) and e_3(t) of the error system, and Fig. 4.9 shows the dynamic curves of the feedback gains k_1(t), k_2(t) and k_3(t). From the simulations, one can find that the neural networks with Markovian jumping parameters achieve adaptive synchronization.
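The switching signal in Fig. 4.7 can be reproduced, for instance, by directly simulating the two-state continuous-time Markov chain with the generator Γ given above. The following is a minimal sketch (not the authors' code); the step size, horizon, and random seed are arbitrary illustrative choices.

```python
import numpy as np

def simulate_markov_chain(Gamma, T, dt, mode0=0, seed=0):
    """Simulate a continuous-time Markov chain with generator Gamma on [0, T]."""
    rng = np.random.default_rng(seed)
    modes = np.empty(int(T / dt), dtype=int)
    mode = mode0
    for k in range(modes.size):
        modes[k] = mode
        # For small dt, the chain leaves mode i during dt with prob ~ -Gamma[i, i]*dt.
        if rng.random() < -Gamma[mode, mode] * dt:
            rates = Gamma[mode].copy()
            rates[mode] = 0.0
            mode = rng.choice(rates.size, p=rates / rates.sum())
    return modes

Gamma = np.array([[-7.0, 7.0], [4.0, -4.0]])
r = simulate_markov_chain(Gamma, T=6.0, dt=1e-3)  # r[k] in {0, 1}, i.e. Models 1/2
```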

4.2.5 Conclusions

We have studied the problem of adaptive synchronization for stochastic T-S fuzzy neural networks with time-delay and Markovian jumping parameters, removing the traditional monotonicity and smoothness assumptions on the activation function. An M-matrix method has been developed to solve the problem addressed, and the adaptive synchronization controller has been designed by this method for T-S fuzzy neural networks with stochastic noises and Markovian jumping parameters. Finally, a simulation example has been used to demonstrate the usefulness of the main results.

Fig. 4.8 The state response of e_1(t), e_2(t) and e_3(t) of the error system

Fig. 4.9 The dynamic curves of feedback gains k_1(t), k_2(t) and k_3(t)

4.3 Synchronization of DNN Based on Parameter Identification and via Output Coupling

4.3.1 Introduction

In the past few years, there has been a great deal of research on delayed neural networks (DNNs), owing to their complex and unpredictable behaviors in practice, beyond the traditional topics of stability and periodic oscillation. As is known, network-induced delay is one of the main issues in neural networks, so in recent years many works on the stability of neural networks with discrete time delays and with both discrete and distributed time delays have been carried out [39, 41, 42, 44, 47, 48]. As proposed in [4, 11], artificial neural networks can exhibit chaotic behavior, and so, since the master-slave conception for the synchronization of chaotic systems was first proposed by Pecora and Carroll in 1990 [29], research on the synchronization of neural networks with or without time delays has spread into many different fields [2, 7, 8, 21, 50]. Synchronization of coupled delayed neural networks was first reported in [1] in 2004, and some further studies have appeared in recent years [18, 32, 37, 54]. Very recently, several new results on the synchronization problem of neural networks have been proposed in the literature. For example, Wang and Cao studied synchronization in an array of linearly (stochastically) coupled networks with time delays [5, 62]. In [14], some conditions were proposed for global synchronization of DNNs based on parameter identification, by employing the invariance principle of functional differential equations and an update law for adaptive control. Meanwhile, by introducing a descriptor technique and using a Lyapunov-Krasovskii functional, a multiple delayed state-feedback control design for the exponential synchronization problem of a class of delayed neural networks with multiple time-varying discrete delays was presented in [23]. Moreover, [36] studied the global robust synchronization between two coupled neural networks with all parameters unknown and discrete time-varying delays, via output or state coupling. However, all the papers mentioned above took only the discrete time delays into consideration, while the distributed time delays have attracted much less attention. Although signal transmission can be modeled with discrete delays when the process is essentially instantaneous, it may in fact be distributed over a certain time period [44]. Hence, in order to model a realistic neural network, both discrete and distributed delays should be included in the model [11].
From the above discussion, we can see that the synchronization problem of neural networks with both discrete and distributed time delays has seldom been studied. In [36], for example, the adaptive synchronization of neural networks with time-varying delays and distributed delays was investigated on the basis of the LaSalle invariance principle of functional differential equations and the adaptive feedback control technique. Inspired by these recent works and building on [36], we consider the synchronization of neural networks with both discrete and distributed time-varying delays, based on parameter identification and via output coupling, which models a more realistic and comprehensive network.
In this section, we focus on the synchronization problem of two coupled neural networks with both discrete and distributed time-varying delays. The section is organized as follows. First, the formulations and preliminaries are given for the proof of the main results. Then, by using a Lyapunov functional and estimation methods, we propose several new conditions for the global synchronization of the two coupled systems, and give the criteria for identifying the unknown parameters and designing the controller via output coupling. Some illustrative examples are then provided to show the merits of this research. Finally, a conclusion is given for the whole section.

4.3.2 Problem Formulation

In this section, the following neural network model, called the master system, which involves both discrete and distributed time-varying delays, is considered:


dx_i(t) = [−c_i x_i(t) + Σ_{j=1}^{n} a_{ij} f_j(x_j(t)) + Σ_{j=1}^{n} b_{ij} u_j(x_j(t − τ_1(t)))
          + Σ_{j=1}^{n} w_{ij} ∫_{t−τ_2(t)}^{t} v_j(x_j(s))ds + J_i]dt,   i = 1, 2, …, n,        (4.55)

or equivalently

dx(t) = [−Cx(t) + A f(x(t)) + Bu(x(t − τ_1(t))) + W ∫_{t−τ_2(t)}^{t} v(x(s))ds + J]dt,        (4.56)

where x(t) = [x_1(t), x_2(t), …, x_n(t)]^T ∈ R^n is the state vector of the DNN;

f(x(t)) = [f_1(x_1(t)), f_2(x_2(t)), …, f_n(x_n(t))]^T ∈ R^n,
u(x(t − τ_1(t))) = [u_1(x_1(t − τ_1(t))), u_2(x_2(t − τ_1(t))), …, u_n(x_n(t − τ_1(t)))]^T ∈ R^n,

and v(x(t)) = [v_1(x_1(t)), v_2(x_2(t)), …, v_n(x_n(t))]^T ∈ R^n are the activation functions of the neurons, with f(0) = u(0) = v(0) = 0; C = diag{c_1, c_2, …, c_n} > 0 is a diagonal matrix whose ith entry represents the rate at which the ith unit resets its potential to the resting state in isolation when disconnected from the external inputs and the network; A = (a_{ij})_{n×n}, B = (b_{ij})_{n×n} and W = (w_{ij})_{n×n} stand for, respectively, the connection weight matrix, the discretely delayed connection weight matrix and the distributively delayed connection weight matrix; J = [J_1, J_2, …, J_n]^T ∈ R^n is the external input vector; τ_1(t) ≥ 0 and τ_2(t) ≥ 0 are the discrete time-varying delay and the distributed time-varying delay, respectively.
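To make the mixed-delay structure concrete, the following is a minimal numerical sketch (not from the book) of how the right-hand side of (4.56) can be evaluated on a uniform time grid; constant delays, tanh activations, and the helper name rhs are illustrative assumptions, and the history buffer is assumed to start at t = −max(τ_1, τ_2) so that all indices below stay nonnegative.

```python
import numpy as np

def rhs(k, hist, dt, C, A, B, W, J, tau1, tau2, f=np.tanh, u=np.tanh, v=np.tanh):
    """Right-hand side of (4.56) at grid index k; hist[k] stores x at that index."""
    x = hist[k]                                   # x(t)
    x_d = hist[k - int(round(tau1 / dt))]         # x(t - tau1)
    m = int(round(tau2 / dt))                     # steps covering [t - tau2, t]
    seg = v(hist[k - m:k + 1])                    # samples of v(x(s))
    integral = dt * (seg.sum(axis=0) - 0.5 * (seg[0] + seg[-1]))  # trapezoidal rule
    return -C @ x + A @ f(x) + B @ u(x_d) + W @ integral + J
```

An explicit Euler step of the master system is then simply hist[k + 1] = hist[k] + dt * rhs(k, ...).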
Similarly, the controlled slave system takes the following form:


dy_i(t) = [−c̄_i y_i(t) + Σ_{j=1}^{n} ā_{ij} f_j(y_j(t)) + Σ_{j=1}^{n} b̄_{ij} u_j(y_j(t − τ_1(t)))
          + Σ_{j=1}^{n} w̄_{ij} ∫_{t−τ_2(t)}^{t} v_j(y_j(s))ds + J_i + κ_i(t)]dt,   i = 1, 2, …, n,        (4.57)

or equivalently

dy(t) = [−C̄y(t) + Ā f(y(t)) + B̄u(y(t − τ_1(t))) + W̄ ∫_{t−τ_2(t)}^{t} v(y(s))ds + J + K(t)]dt,        (4.58)

where C̄ = diag{c̄1 , c̄2 , . . . , c̄n } > 0, Ā = (āij )n×n , B̄ = (b̄ij )n×n and W̄ =
(w̄ij )n×n are all uncertain parameters to be identified. K(t) is a general controller that
can implement the synchronization of the two coupled DNNs and the identification
of the parameters.
Let e(t) = y(t) − x(t); then we have

de(t)/dt = −Ce(t) + (A + K)g(e(t)) + (B + K*)g̃(e(t − τ_1(t))) + W ∫_{t−τ_2(t)}^{t} ĝ(e(s))ds
           − (C̄ − C)y(t) + (Ā − A)f(y(t)) + (B̄ − B)u(y(t − τ_1(t)))
           + (W̄ − W) ∫_{t−τ_2(t)}^{t} v(y(s))ds,        (4.59)

where g(e(t)) = f (e(t) + x(t)) − f (x(t)), g̃(e(t − τ1 (t))) = u(e(t − τ1 (t)) + x(t −
τ1 (t))) − u(x(t − τ1 (t))) and ĝ(e(t)) = v(e(t) + x(t)) − v(x(t)).
For any ζi , ξi ∈ L2F0 ([−τ ∗ , 0]; Rn ), we give the initial states: xi (t) = ζi (t), yi (t)
= ξi (t), i = 1, 2, . . . , n, where −τ ∗ ≤ t ≤ 0.
In order to achieve our results, the following necessary assumption is made.
Assumption 4.17 The activation functions f_i(x), u_i(x) and v_i(x) are bounded and satisfy the Lipschitz condition:

|f_i(x) − f_i(y)| ≤ ε_i|x − y|,  |u_i(x) − u_i(y)| ≤ φ_i|x − y|  and  |v_i(x) − v_i(y)| ≤ ϕ_i|x − y|,
∀x, y ∈ R, i = 1, 2, …, n,

where ε_i > 0, φ_i > 0 and ϕ_i > 0 are all positive scalars.

Assumption 4.18 τ_1* ≥ τ_1(t) ≥ 0 and τ_2* ≥ τ_2(t) ≥ 0 are both differentiable and bounded, with 1 > δ ≥ τ̇_1(t) ≥ 0 and 1 > σ ≥ τ̇_2(t) ≥ 0 for t ∈ [0, ∞).

Definition 4.19 If the error signal satisfies

lim_{t→∞} E‖e_i(t)‖² = 0,   i = 1, 2, …, n,

then the error signal system (4.59) is said to be globally asymptotically stable in mean square.

4.3.3 Main Results and Proofs

In this section, by employing the Lyapunov-Krasovskii functional and estimation methods, we will give several new criteria for the synchronization of two coupled neural networks with discrete and distributed time-varying delays via output coupling, and rules for designing the delayed feedback controller will be proposed. Furthermore, all the connection weight matrices can be estimated.

Theorem 4.20 Under Assumptions 4.17 and 4.18, the slave DNN (4.57) is synchronized with the master DNN (4.55) and lim_{t→∞}(c̄_i − c_i) = lim_{t→∞}(ā_{ij} − a_{ij}) = lim_{t→∞}(b̄_{ij} − b_{ij}) = lim_{t→∞}(w̄_{ij} − w_{ij}) = 0 (i, j = 1, 2, …, n), provided the following three conditions hold:
(I) Let the time-varying delayed feedback controller


κ_i(t) = Σ_{j=1}^{n} k_{ij}(f_j(y_j(t)) − f_j(x_j(t)))
       + Σ_{j=1}^{n} k*_{ij}(u_j(y_j(t − τ_1(t))) − u_j(x_j(t − τ_1(t)))),   i = 1, 2, …, n.        (4.60)

(II) The adapted parameters c̄i , āij , b̄ij and w̄ij with the update law are taken as
c̄˙_i = γ_i e_i(t)y_i(t);   i = 1, 2, …, n,
ā˙_{ij} = −α_{ij} e_i(t) f_j(y_j(t));   i, j = 1, 2, …, n,
b̄˙_{ij} = −β_{ij} e_i(t) u_j(y_j(t − τ_1(t)));   i, j = 1, 2, …, n,        (4.61)
w̄˙_{ij} = −ω_{ij} e_i(t) ∫_{t−τ_2(t)}^{t} v_j(y_j(s))ds;   i, j = 1, 2, …, n,

where γi > 0, αij > 0, βij > 0 and ωij > 0 are arbitrary positive constants.
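Numerically, the update laws (4.61) amount to one extra Euler step per time step of the simulation. The sketch below (not the authors' code) shows this step; the variable names and the forward-Euler discretization are illustrative assumptions, with e, y, f_y, u_y_d and v_int standing for e(t), y(t), f(y(t)), u(y(t − τ_1(t))) and the distributed-delay integral of v(y(s)).

```python
import numpy as np

def adapt_step(e, y, f_y, u_y_d, v_int, Cb, Ab, Bb, Wb, gamma, alpha, beta, omega, dt):
    """One forward-Euler step of the parameter update laws (4.61)."""
    Cb = Cb + dt * gamma * e * y                # c-bar'_i  =  gamma_i e_i y_i
    Ab = Ab - dt * alpha * np.outer(e, f_y)     # a-bar'_ij = -alpha_ij e_i f_j(y_j)
    Bb = Bb - dt * beta * np.outer(e, u_y_d)    # b-bar'_ij = -beta_ij e_i u_j(y_j(t - tau1))
    Wb = Wb - dt * omega * np.outer(e, v_int)   # w-bar'_ij = -omega_ij e_i * integral of v_j
    return Cb, Ab, Bb, Wb
```

Here gamma is a length-n vector of rates γ_i and alpha, beta, omega are n×n arrays of rates, multiplied entrywise into the outer products.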
(III) The following inequality

−μ_i c_i + μ_i ε_i(a_{ii} + k_{ii}) + (1/2) Σ_{j=1, j≠i}^{n} μ_i ε_j|a_{ij} + k_{ij}| + (1/2) Σ_{j=1, j≠i}^{n} μ_j ε_i|a_{ji} + k_{ji}|
+ (1/2) Σ_{j=1}^{n} μ_i φ_j|b_{ij} + k*_{ij}| + (1/2) Σ_{j=1}^{n} μ_i ϕ_j|w_{ij}| + (1/(2(1 − δ))) Σ_{j=1}^{n} μ_i φ_i|b_{ji} + k*_{ji}|
+ (τ_2*/(2(1 − σ))) Σ_{j=1}^{n} μ_i ϕ_i|w_{ji}| < 0        (4.62)

holds, where μi > 0, εi > 0, φi > 0 and ϕi > 0 are all positive constants,
i, j = 1, 2, . . . , n.
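Before proceeding to the proof, note that condition (III) is directly checkable: for given weights and gains, the left-hand side of (4.62) is a finite sum for each i. The following is a sketch of such a check (not from the book); all argument names are illustrative, with mu, eps, phi, vphi the constants μ_i, ε_i, φ_i, ϕ_i and tau2s the bound τ_2*.

```python
import numpy as np

def condition_III(C, A, B, W, K, Ks, mu, eps, phi, vphi, delta, sigma, tau2s):
    """Return the left-hand side of (4.62) for every i; all must be negative."""
    n = len(mu)
    lhs = np.empty(n)
    AK, BK = A + K, B + Ks                       # a_ij + k_ij and b_ij + k*_ij
    for i in range(n):
        s = -mu[i] * C[i, i] + mu[i] * eps[i] * AK[i, i]
        s += 0.5 * sum(mu[i] * eps[j] * abs(AK[i, j]) +
                       mu[j] * eps[i] * abs(AK[j, i]) for j in range(n) if j != i)
        s += 0.5 * sum(mu[i] * phi[j] * abs(BK[i, j]) +
                       mu[i] * vphi[j] * abs(W[i, j]) for j in range(n))
        s += sum(mu[i] * phi[i] * abs(BK[j, i]) for j in range(n)) / (2 * (1 - delta))
        s += tau2s * sum(mu[i] * vphi[i] * abs(W[j, i]) for j in range(n)) / (2 * (1 - sigma))
        lhs[i] = s
    return lhs
```

Condition (III) holds when every entry of the returned array is negative.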

Proof Define the following Lyapunov-Krasovskii functional candidate V(t):

V(t) = (1/2) Σ_{i=1}^{n} μ_i e_i²(t) + (1/(2(1 − δ))) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i|d_{ji}| ∫_{t−τ_1(t)}^{t} |e_i(s)||g̃_i(e_i(s))|ds
     + (1/(2(1 − σ))) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i|w_{ji}| ∫_{t−τ_2(t)}^{t} ∫_{s}^{t} |e_i(η)||ĝ_i(e_i(η))|dη ds
     + Σ_{i=1}^{n} (μ_i/(2γ_i))(c̄_i − c_i)² + Σ_{i=1}^{n} Σ_{j=1}^{n} (μ_i/(2α_{ij}))(ā_{ij} − a_{ij})²
     + Σ_{i=1}^{n} Σ_{j=1}^{n} (μ_i/(2β_{ij}))(b̄_{ij} − b_{ij})² + Σ_{i=1}^{n} Σ_{j=1}^{n} (μ_i/(2ω_{ij}))(w̄_{ij} − w_{ij})²,        (4.63)

where d_{ji} = b_{ji} + k*_{ji}. Then the derivative of V(t) along the trajectory of the error system (4.59) can be derived as follows:


V̇(t) = Σ_{i=1}^{n} μ_i e_i(t)[−c_i e_i(t) + Σ_{j=1}^{n} (a_{ij} + k_{ij})g_j(e_j(t)) + Σ_{j=1}^{n} (b_{ij} + k*_{ij})g̃_j(e_j(t − τ_1(t)))
    + Σ_{j=1}^{n} w_{ij} ∫_{t−τ_2(t)}^{t} ĝ_j(e_j(s))ds − (c̄_i − c_i)y_i(t) + Σ_{j=1}^{n} (ā_{ij} − a_{ij})f_j(y_j(t))
    + Σ_{j=1}^{n} (b̄_{ij} − b_{ij})u_j(y_j(t − τ_1(t))) + Σ_{j=1}^{n} (w̄_{ij} − w_{ij}) ∫_{t−τ_2(t)}^{t} v_j(y_j(s))ds]
    + (1/(2(1 − δ))) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i|d_{ji}||e_i(t)||g̃_i(e_i(t))|
    − ((1 − τ̇_1(t))/(2(1 − δ))) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i|d_{ji}||e_i(t − τ_1(t))||g̃_i(e_i(t − τ_1(t)))|
    + (1/(2(1 − σ))) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i|w_{ji}||e_i(t)| τ_2(t)|ĝ_i(e_i(t))|
    − ((1 − τ̇_2(t))/(2(1 − σ))) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i|w_{ji}| ∫_{t−τ_2(t)}^{t} |e_i(s)||ĝ_i(e_i(s))|ds
    + Σ_{i=1}^{n} [(μ_i/γ_i)(c̄_i − c_i)c̄˙_i + Σ_{j=1}^{n} (μ_i/α_{ij})(ā_{ij} − a_{ij})ā˙_{ij}
    + Σ_{j=1}^{n} (μ_i/β_{ij})(b̄_{ij} − b_{ij})b̄˙_{ij} + Σ_{j=1}^{n} (μ_i/ω_{ij})(w̄_{ij} − w_{ij})w̄˙_{ij}]

  ≤ Σ_{i=1}^{n} [−μ_i c_i e_i²(t) + μ_i(a_{ii} + k_{ii})|e_i(t)||g_i(e_i(t))|] + Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} μ_i|a_{ij} + k_{ij}||e_i(t)||g_j(e_j(t))|
    + Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i|b_{ij} + k*_{ij}||e_i(t)||g̃_j(e_j(t − τ_1(t)))| + Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i|w_{ij}||e_i(t)| ∫_{t−τ_2(t)}^{t} |ĝ_j(e_j(s))|ds
    + (1/(2(1 − δ))) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i|d_{ji}||e_i(t)||g̃_i(e_i(t))| − (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i|d_{ji}||e_i(t − τ_1(t))||g̃_i(e_i(t − τ_1(t)))|
    + (τ_2*/(2(1 − σ))) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i|w_{ji}||e_i(t)||ĝ_i(e_i(t))| − (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i|w_{ji}| ∫_{t−τ_2(t)}^{t} |e_i(s)||ĝ_i(e_i(s))|ds        (4.64)

Following Lemma 1.13, we can derive

Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} μ_i|a_{ij} + k_{ij}||e_i(t)||g_j(e_j(t))| = Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} μ_i|a_{ij} + k_{ij}| ε_j|e_i(t)| (|g_j(e_j(t))|/ε_j)
    ≤ (1/2) Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} μ_i|a_{ij} + k_{ij}|ε_j e_i²(t) + (1/2) Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} μ_i|a_{ij} + k_{ij}| (g_j²(e_j(t))/ε_j),        (4.65)

and it can be seen from Assumption 4.17 that

g T (e(t))g(e(t)) ≤ e T (t)ΛT Λe(t) (4.66)

g̃ T (e(t))g̃(e(t)) ≤ e T (t)MT Me(t) (4.67)

ĝ T (e(t))ĝ(e(t)) ≤ e T (t)N T Ne(t) (4.68)

where Λ = diag{ε1 , ε2 , . . . , εn } > 0, M = diag{φ1 , φ2 , . . . , φn } > 0, and N =


diag{ϕ1 , ϕ2 , . . . , ϕn } > 0 are constant matrices. Furthermore, from (4.66), the last
term in (4.65) can be estimated by

(1/2) Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} μ_i|a_{ij} + k_{ij}| (g_j²(e_j(t))/ε_j) ≤ (1/2) Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} μ_i|a_{ij} + k_{ij}||g_j(e_j(t))||e_j(t)|
    = (1/2) Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} μ_j|a_{ji} + k_{ji}||g_i(e_i(t))||e_i(t)|.        (4.69)

With the same method and Assumption 4.18, the following (4.70) and (4.71) can be obtained immediately:


Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i|b_{ij} + k*_{ij}||e_i(t)||g̃_j(e_j(t − τ_1(t)))|
    ≤ (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i|b_{ij} + k*_{ij}|φ_j e_i²(t)
    + (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_j|b_{ji} + k*_{ji}||g̃_i(e_i(t − τ_1(t)))||e_i(t − τ_1(t))|,        (4.70)


Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i|w_{ij}||e_i(t)| ∫_{t−τ_2(t)}^{t} |ĝ_j(e_j(s))|ds
    ≤ (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_i|w_{ij}|ϕ_j e_i²(t) + (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} μ_j|w_{ji}| ∫_{t−τ_2(t)}^{t} |ĝ_i(e_i(s))||e_i(s)|ds.        (4.71)

Then, substituting (4.65)–(4.71) into (4.64) yields

V̇(t) ≤ Σ_{i=1}^{n} {−μ_i c_i + μ_i ε_i(a_{ii} + k_{ii}) + (1/2) Σ_{j=1, j≠i}^{n} μ_i ε_j|a_{ij} + k_{ij}| + (1/2) Σ_{j=1, j≠i}^{n} μ_j ε_i|a_{ji} + k_{ji}|
      + (1/2) Σ_{j=1}^{n} μ_i φ_j|b_{ij} + k*_{ij}| + (1/2) Σ_{j=1}^{n} μ_i ϕ_j|w_{ij}| + (1/(2(1 − δ))) Σ_{j=1}^{n} μ_i φ_i|d_{ji}|
      + (τ_2*/(2(1 − σ))) Σ_{j=1}^{n} μ_i ϕ_i|w_{ji}|} e_i²(t).        (4.72)

Therefore, if condition (III) in Theorem 4.20 is satisfied, we obtain V̇(t) = 0 if and only if e(t) = 0, and V̇(t) < 0 otherwise. It can then be concluded that the error signal model (4.59) is globally asymptotically stable in mean square. Based on the invariance principle of functional differential equations, when t → ∞ we have E‖e(t; ϕ)‖² → 0, c̄_i → c_i, ā_{ij} → a_{ij}, b̄_{ij} → b_{ij}, and w̄_{ij} → w_{ij}. Thus, all the unknown parameters with arbitrary initial values in the slave system (4.57) can be identified when (4.57) synchronizes with the master system (4.55). This completes the proof.
Remark 4.21 It can be seen from the form of the Lyapunov-Krasovskii functional (4.63) that neither symmetry nor positive (negative) definiteness of the coupling matrices is needed; thus, the results are less restrictive.
Remark 4.22 In this section, we have chosen the general time-varying delayed feedback controller K(t) = Kg(e(t)) + K*g̃(e(t − τ_1(t))) to model a more realistic situation. It should be mentioned that if the controller is taken as K(t) = Kg(e(t)), the synchronization of the two coupled neural networks can also be achieved, but its performance is not better than that of (4.60). That is to say, the former controller is more practical than the latter.
Remark 4.23 This section is concerned with time-varying delays in the general case. For the special case with constant time delays, similar results can be derived by the same method without difficulty.

If the two coupled neural networks (4.55) and (4.57) have no distributed time-
varying delays, then, we can get the following corollary directly. Consider the master
system (4.73) and the slave system (4.74) as follows:

d x(t) = [−C x(t) + A f (x(t)) + Bu(x(t − τ1 (t))) + J ]dt, (4.73)

dy(t) = [−C̄ y(t) + Ā f (y(t)) + B̄u(y(t − τ1 (t))) + J + K(t)]dt. (4.74)

Corollary 4.24 Under Assumptions 4.17 and 4.18, the slave DNN (4.74) is synchronized with the master DNN (4.73) and lim_{t→∞}(c̄_i − c_i) = lim_{t→∞}(ā_{ij} − a_{ij}) = lim_{t→∞}(b̄_{ij} − b_{ij}) = 0 (i, j = 1, 2, …, n), provided the following three conditions hold:
(I) Let the time-varying delayed feedback controller


κ_i(t) = Σ_{j=1}^{n} k_{ij}(f_j(y_j(t)) − f_j(x_j(t)))
       + Σ_{j=1}^{n} k*_{ij}(u_j(y_j(t − τ_1(t))) − u_j(x_j(t − τ_1(t)))),   i = 1, 2, …, n.        (4.75)

(II) The adapted parameters c̄i , āij and b̄ij with the update law are taken as
c̄˙_i = γ_i e_i(t)y_i(t);   i = 1, 2, …, n,
ā˙_{ij} = −α_{ij} e_i(t) f_j(y_j(t));   i, j = 1, 2, …, n,        (4.76)
b̄˙_{ij} = −β_{ij} e_i(t) u_j(y_j(t − τ_1(t)));   i, j = 1, 2, …, n,

where γi > 0, αij > 0 and βij > 0 are arbitrary positive constants.
(III) The following inequality

−μ_i c_i + μ_i ε_i(a_{ii} + k_{ii}) + (1/2) Σ_{j=1, j≠i}^{n} μ_i ε_j|a_{ij} + k_{ij}| + (1/2) Σ_{j=1, j≠i}^{n} μ_j ε_i|a_{ji} + k_{ji}|
+ (1/2) Σ_{j=1}^{n} μ_i φ_j|b_{ij} + k*_{ij}| + (1/(2(1 − δ))) Σ_{j=1}^{n} μ_i φ_i|b_{ji} + k*_{ji}| < 0        (4.77)

holds, where μi > 0, εi > 0 and φi > 0 are all positive constants, i, j = 1, 2, . . . , n.

Proof Let w_{ij} = 0 in the models (4.55) and (4.57). By utilizing the method proposed in the proof of Theorem 4.20, we obtain Corollary 4.24 directly.

If the two coupled neural networks have neither discrete time-varying delay nor
distributed time-varying delay, then, the following corollary can be obtained imme-
diately. Consider the master system (4.78) and the slave system (4.79) as follows:

d x(t) = [−C x(t) + A f (x(t)) + J ]dt, (4.78)

dy(t) = [−C̄ y(t) + Ā f (y(t)) + J + K(t)]dt. (4.79)

Corollary 4.25 Under Assumptions 4.17 and 4.18, the slave DNN (4.79) is synchronized with the master DNN (4.78) and lim_{t→∞}(c̄_i − c_i) = lim_{t→∞}(ā_{ij} − a_{ij}) = 0 (i, j = 1, 2, …, n), provided the following three conditions hold:
(I) Let the time-varying delayed feedback controller


κ_i(t) = Σ_{j=1}^{n} k_{ij}(f_j(y_j(t)) − f_j(x_j(t))),   i = 1, 2, …, n.        (4.80)

(II) The adapted parameters c̄i and āij with the update law are taken as
c̄˙_i = γ_i e_i(t)y_i(t);   i = 1, 2, …, n,
ā˙_{ij} = −α_{ij} e_i(t) f_j(y_j(t));   i, j = 1, 2, …, n,        (4.81)

where γi > 0 and αij > 0 are arbitrary positive constants.


(III) The following inequality

−μ_i c_i + μ_i ε_i(a_{ii} + k_{ii}) + (1/2) Σ_{j=1, j≠i}^{n} μ_i ε_j|a_{ij} + k_{ij}| + (1/2) Σ_{j=1, j≠i}^{n} μ_j ε_i|a_{ji} + k_{ji}| < 0        (4.82)
holds, where μi > 0 and εi > 0 are both positive constants, i = 1, 2, . . . , n.

Proof Let b_{ij} = 0 and w_{ij} = 0 in the proof of Theorem 4.20. On the basis of the same technique, Corollary 4.25 is derived immediately.

4.3.4 Illustrative Example

In this section, several numerical simulations are presented to illustrate the effectiveness of our results.
Example
Consider the following master system with discrete and distributed time-varying
delays:

Fig. 4.10 Chaotic phase trajectories of DNNs (4.83): y_1(t) against x_1(t)

dx(t) = [−Cx(t) + A f(x(t)) + Bu(x(t − τ_1(t))) + W ∫_{t−τ_2(t)}^{t} v(x(s))ds + J]dt,        (4.83)

where x(t) = [x_1(t), x_2(t)]^T, f(x(t)) = u(x(t)) = v(x(t)) = [tanh(x_1(t)), tanh(x_2(t))]^T, τ_1(t) = τ_2(t) = 0.8, J = [0, 0]^T, and

C = [1 0; 0 1],   A = [2.1 −0.12; −5.1 3.1],   B = [−1.6 −0.1; −0.2 −2.4],   W = [−2.3 −0.5; −1.1 −0.2].

The initial values are chosen as x1 (t) = 0.4, x2 (t) = 0.6, ∀t ∈ [−1, 0], then, the
chaotic phase trajectories of DNNs (4.83) can be obtained as Fig. 4.10 shows.
In order to prove that our results are practical and useful, the following slave
system with controller is considered:

dy(t) = [−C̄y(t) + Ā f(y(t)) + B̄u(y(t − τ_1(t))) + W̄ ∫_{t−τ_2(t)}^{t} v(y(s))ds + J
        + K(f(y(t)) − f(x(t))) + K*(u(y(t − τ_1(t))) − u(x(t − τ_1(t))))]dt,        (4.84)

where y(t) = [y_1(t), y_2(t)]^T, J = [0, 0]^T, and

C̄ = [1 0; 0 1],   Ā = [2.1 −0.12; −5.1 ā_22],   B̄ = [−1.6 −0.1; −0.2 b̄_22],   W̄ = [−2.3 −0.5; −1.1 w̄_22].

So, it can be seen that ā_22, b̄_22 and w̄_22 are the parameters to be identified. Next, we consider the synchronization criteria proposed in Theorem 4.20. In condition (II) of Theorem 4.20, the corresponding parameters are taken as α_22 = 8.7, β_22 = 6.2, ω_22 = 3.6, and the parameters in condition (III) are μ_i = ε_i = φ_i = ϕ_i = 1 (i = 1, 2)

and δ = σ = 0.5, respectively. The gain matrix of the output coupling controller is
chosen as


K = [−12 1; 6 −14],   K* = [−1 0.5; 1 −2].

Thus, based on the above description, all the synchronization criteria in Theorem 4.20 can be satisfied. Then, let the arbitrary initial states of the two coupled DNNs and the unknown parameters in (4.84) be as follows:

x_1(t) = 0.6, x_2(t) = 0.7;   y_1(t) = −1.7, y_2(t) = 3.8;   ∀t ∈ [−1, 0],
ā_22(0) = 3.9,   b̄_22(0) = −1.8,   w̄_22(0) = −0.4.

Hence, the following convincing numerical simulations can be derived as shown in


Figs. 4.11, 4.12, 4.13 and 4.14.
As shown in Figs. 4.11 and 4.12, it is obvious that the slave system (4.84) is synchronized with the master system after a short time. From Fig. 4.13, we can see that the error signals are globally asymptotically stable, which also means that the two coupled DNNs have achieved synchronization. Finally, Fig. 4.14 indicates that all the unknown parameters in the slave system are identified at the same time as synchronization is achieved. Thus, we can conclude that our research on the synchronization of neural networks with mixed time-varying delays is useful and valuable.

Fig. 4.11 t − x_1(t) − y_1(t)

Fig. 4.12 t − x_2(t) − y_2(t)

Fig. 4.13 Synchronization error of e_1(t) and e_2(t)

Fig. 4.14 Parameter identification of systems (4.83) and (4.84)

4.3.5 Conclusion

In this section, the synchronization problem of two coupled DNNs with mixed time-varying delays has been thoroughly studied, based on parameter identification and via output coupling. Several sufficient and less restrictive conditions ensuring global synchronization have been derived on the basis of the Lyapunov-Krasovskii functional and some estimation methods. In particular, both discrete and distributed time-varying delays have been introduced to model a more practical system, and via output coupling, a general and novel delayed feedback controller has been proposed. Moreover, the parameters in the slave system have been estimated through the simulations. Therefore, the feasibility of the theoretical results has been verified. Finally, we can see that it is possible to apply the results to applications in this area.

4.4 Adaptive a.s. Asymptotic Synchronization of SDNN with Markovian Switching

4.4.1 Introduction

As is known, stochastic delay neural networks (SDNNs) with Markovian switching have played an important role in the fields of science and engineering, with many practical applications including image processing, pattern recognition, associative memory, and optimization problems. In the past several decades, the characteristics of SDNNs with Markovian switching, such as various kinds of stability, have attracted much attention from scholars in various fields of nonlinear science. Z.D. Wang et al. considered exponential stability of delayed recurrent neural networks with Markovian jumping parameters [43]. W. Zhang, Y. Tang and J. Fang investigated stochastic stability of Markovian jumping genetic regulatory networks with mixed time delays [59]. H. Huang et al. investigated robust stability of stochastic delayed additive neural networks with Markovian switching [13]. Researchers have presented a number of sufficient conditions and proved the global asymptotic stability and exponential stability of SDNNs with Markovian switching (see, e.g., [27, 46, 49, 63] and the references therein). The method used most extensively in recent publications is the LMI approach.
In recent years, the synchronization of coupled neural networks has received much attention owing to its potential applications, such as parallel recognition and secure communication [10, 24]. Therefore, the investigation of synchronization for SDNNs is of great significance, and some stochastic synchronization results have been reported. In [19], an adaptive feedback controller is designed to achieve complete synchronization of unidirectionally coupled delayed neural networks with stochastic perturbation. In [31], via adaptive feedback control techniques with suitable parameter update laws, several sufficient conditions are derived to ensure lag synchronization of unknown delayed neural networks with or without noise perturbation. In [6], a class of chaotic neural networks is discussed and, based on the Lyapunov stability method and the Halanay inequality lemma, a delay-independent sufficient exponential synchronization condition is derived. A simple adaptive feedback scheme has been used for the synchronization of neural networks with or without time-varying delay in [3]. Tang and Fang in [34] introduced a general model of an array of N linearly coupled delayed neural networks with Markovian jumping hybrid coupling, and by an adaptive approach some sufficient criteria have been derived to ensure synchronization in an array of jump neural networks with mixed delays and hybrid coupling in mean square.

Although it is of practical importance, adaptive almost sure asymptotic synchronization for SDNNs with Markovian switching has seldom been addressed. Motivated by the above discussions, in this section we aim to analyze the adaptive almost sure asymptotic synchronization of SDNNs with Markovian switching. An M-matrix-based criterion for determining whether SDNNs with Markovian switching achieve adaptive almost sure asymptotic synchronization is developed, an adaptive feedback controller is proposed for the SDNNs with Markovian switching, and a numerical simulation is given to show the validity of the developed results.

4.4.2 Problem Formulation and Preliminaries

In this section, we consider the neural networks called drive system and represented
by the compact form as follows:

d x(t) = [−C(r (t))x(t) + A(r (t)) f (x(t)) + B(r (t)) f (x(t − τ (t))) + D(r (t))]dt,
(4.85)

where t ≥ 0 is the time, x(t) = (x1 (t), x2 (t), . . . , xn (t))T ∈ Rn is the state vector
associated with n neurons, f (x(t)) = ( f 1 (x1 (t)), f 2 (x2 (t)), . . . , f n (xn (t)))T ∈ Rn
denote the activation functions of the neurons, τ (t) is the transmission delay satis-
fying that 0 < τ (t) ≤ τ̄ and τ̇ (t) ≤ τ̂ < 1, where τ̄ , τ̂ are constants. {r (t)}t≥0
is a Markov chain taking values in a finite state space S = {1, 2, . . . , S}. As a
matter of convenience, for t ≥ 0, we denote r (t) = i and A(r (t)) = Ai , B(r (t)) =
B i , C(r (t)) = C i , D(r (t)) = D i respectively. In model (4.85), furthermore, ∀i ∈ S,
C i = diag {c1i , c2i , . . . , cni } (i.e. C i is a diagonal matrix) has positive and unknown
entries cki > 0, Ai = (a ijk )n×n and B i = (bijk )n×n are the connection weight and
the delay connection weight matrices, respectively. D i = (d1i , d2i , . . . , dni )T ∈ Rn is
the constant external input vector.
For the drive systems (4.85), a response system is constructed as follows:

dy(t) = [−C(r (t))y(t) + A(r (t)) f (y(t)) + B(r (t)) f (y(t − τ (t))) + D(r (t)) + U (t)]dt
+ σ(t, r (t), y(t) − x(t), y(t − τ (t)) − x(t − τ (t)))dω(t),
(4.86)

where y(t) is the state vector of the response system (4.86), U (t) = (u 1 (t), u 2 (t),
. . . , u n (t))T ∈ Rn is a control input vector with the form of

U (t) = K (t)(y(t) − x(t)) = diag {k1 (t), k2 (t), . . . , kn (t)}(y(t) − x(t)), (4.87)

ω(t) = (ω_1(t), ω_2(t), …, ω_n(t))^T is an n-dimensional Brownian motion defined on a complete probability space (Ω, F, P) with a natural filtration {F_t}_{t≥0} (i.e. F_t = σ{ω(s) : 0 ≤ s ≤ t} is a σ-algebra) and is independent of the Markovian process {r(t)}_{t≥0}, and σ : R_+ × S × R^n × R^n → R^{n×n} is the noise intensity matrix, which can be regarded as resulting from the occurrence of external random fluctuations and other probabilistic causes.
Let e(t) = y(t)− x(t). For the purpose of simplicity, we mark e(t −τ (t)) = eτ (t)
and f (x(t) + e(t)) − f (x(t)) = g(e(t)). From the drive system (4.85) and the
response system (4.86), the error system can be represented as follows:

de(t) = [−C(r (t))e(t) + A(r (t))g(e(t))


(4.88)
+ B(r (t))g(eτ (t)) + U (t)]dt + σ(t, r (t), e(t), eτ (t))dω(t).

The initial condition associated with system (4.88) is given in the following form:

e(s) = ξ(s), s ∈ [−τ̄ , 0],

for any ξ ∈ L2F0 ([−τ̄ , 0], Rn ), where L2F0 ([−τ̄ , 0], Rn ) is the family of all F0 -
measurable C([−τ̄ , 0]; Rn )-value random variables satisfying that sup−τ̄ ≤s≤0
E|ξ(s)|2 < ∞, and C([−τ̄ , 0]; Rn ) denotes the family of all continuous Rn -valued
functions ξ(s) on [−τ̄ , 0] with the norm ξ = sup−τ̄ ≤s≤0 |ξ(s)|.
To obtain the main result, we need the following assumptions.
Assumption 4.26 The activation functions of the neurons f (x(t)) satisfy the Lip-
schitz condition. That is, there exists a constant L > 0 such that

| f (u) − f (v)| ≤ L|u − v|, ∀u, v ∈ Rn .

Assumption 4.27 The noise intensity matrix σ(·, ·, ·, ·) satisfies the linear growth condition. That is, there exist two positive constants H_1 and H_2 such that
trace(σ(t, r (t), u(t), v(t)))T (σ(t, r (t), u(t), v(t))) ≤ H1 |u(t)|2 + H2 |v(t)|2

for all (t, r (t), u(t), v(t)) ∈ R+ × S × Rn × Rn .

Assumption 4.28 In the drive system (4.85),

f (0) ≡ 0, σ(t, r0 , 0, 0) ≡ 0.

Remark 4.29 Under Assumptions 4.26–4.28, the error system (4.88) admits an equi-
librium point (or trivial solution) e(t, ξ), t ≥ 0.

The following stability concept and synchronization concept are needed in this
section.
Definition 4.30 The trivial solution e(t, ξ) of the error system (4.88) is said to be almost surely asymptotically stable if

P( lim_{t→∞} |e(t; i_0, ξ)| = 0 ) = 1

for any ξ ∈ L²_{F_0}([−τ̄, 0]; R^n).
The response system (4.86) and the drive system (4.85) are said to be almost surely
asymptotically synchronized, if the error system (4.88) is almost surely asymptoti-
cally stable.
The main purpose of the rest of this section is to establish a criterion of adaptive almost sure asymptotic synchronization between the drive system (4.85) and the response system (4.86) by using adaptive feedback control and M-matrix techniques.
Consider an n-dimensional stochastic delay differential equation (SDDE, for
short) with Markovian switching

d x(t) = f (t, r (t), x(t), xτ (t))dt + g(t, r (t), x(t), xτ (t))dω(t) (4.89)

on t ∈ [0, ∞) with the initial data given by

{x(θ) : −τ̄ ≤ θ ≤ 0} = ξ ∈ L2L0 ([−τ̄ , 0]; Rn ).

If V ∈ C2,1 (R+ × S × Rn ; R+ ), define an operator L from R+ × S × Rn to R by


Eq. (1.7).
For the SDDE with Markovian switching again, the following hypothesis is
imposed on the coefficients f and g.
Assumption 4.31 Both f and g satisfy the local Lipschitz condition. That is, for each h > 0 there is an L_h > 0 such that

|f(t, i, x, y) − f(t, i, x̄, ȳ)| + |g(t, i, x, y) − g(t, i, x̄, ȳ)| ≤ L_h(|x − x̄| + |y − ȳ|)

for all (t, i) ∈ R_+ × S and those x, y, x̄, ȳ ∈ R^n with |x| ∨ |y| ∨ |x̄| ∨ |ȳ| ≤ h. Moreover,

sup{|f(t, i, 0, 0)| ∨ |g(t, i, 0, 0)| : t ≥ 0, i ∈ S} < ∞.

4.4.3 Main Results

In this section, we give a criterion of adaptive almost sure asymptotic synchronization for the drive system (4.85) and the response system (4.86).

Theorem 4.32 Assume that M := −diag{η, η, …, η} − Γ (with S diagonal entries η) is a nonsingular M-matrix, where

η = −2γ + α + L² + β + H₁,
γ = min_{i∈S} min_{1≤j≤n} c_j^i,   α = max_{i∈S}(ρ(A^i))²,   β = max_{i∈S}(ρ(B^i))².

Let m > 0 and −



m = (m, m, . . . , m )T . Then (q1 , q2 , . . . , q S )T := M −1 −

m 0
  
S
(i.e. all elements of M −1 −

m are positive) by Lemma 1.12. Assume also that


S
(L + H2 )q̄ < − ηqi +
2
γik qk , ∀i ∈ S, (4.90)
k=1

where q̄ = max qi .
i∈S
Under Assumptions 4.26–4.28, the noise-perturbed response system (4.86) can be adaptively almost surely asymptotically synchronized with the delay neural network (4.85), if the feedback control gain K(t) of the controller (4.87) with the update law is chosen as

k̇_j = −q_i α_j e_j²,        (4.91)

where α_j > 0 (j = 1, 2, …, n) are arbitrary constants.

Proof Under Assumptions 4.26–4.28, it can be seen that the error system (4.88)
satisfies Assumption 4.31.

For each i ∈ S, choose a nonnegative function as follows:

V(t, i, e) = q_i|e|² + Σ_{j=1}^{n} (1/α_j)k_j².

Then it is obvious that the condition (1.16) holds.


Computing LV (t, i, e) along the trajectory of error system (4.88), and using
(4.91), one can obtain that

LV(t, i, e) = V_t + V_e[−C^i e + A^i g(e) + B^i g(e_τ) + U(t)]
    + (1/2)trace(σ^T(t, i, e, e_τ)V_ee σ(t, i, e, e_τ)) + Σ_{k=1}^{S} γ_{ik} V(t, k, e)
  = 2 Σ_{j=1}^{n} (1/α_j)k_j k̇_j + 2q_i e^T[−C^i e + A^i g(e) + B^i g(e_τ) + U(t)]
    + q_i trace(σ^T(t, i, e, e_τ)σ(t, i, e, e_τ)) + Σ_{k=1}^{S} γ_{ik} q_k|e|²
  = 2q_i e^T[−C^i e + A^i g(e) + B^i g(e_τ)]
    + q_i trace(σ^T(t, i, e, e_τ)σ(t, i, e, e_τ)) + Σ_{k=1}^{S} γ_{ik} q_k|e|².        (4.92)
Now, using Assumptions 4.26 and 4.27 together with Lemma 1.13 yields

− e T C i e ≤ −γ|e|2 , (4.93)

2e T Ai g(e) ≤ e T Ai (Ai )T e + g T (e)g(e) ≤ (α + L 2 )|e|2 , (4.94)

2e T B i g(eτ ) ≤ e T B i (B i )T e + g T (eτ )g(eτ ) ≤ β|e|2 + L 2 |eτ |2 , (4.95)

and

qi trace (σ T (t, i, e, eτ )σ(t, i, e, eτ )) ≤ qi (H1 |e|2 + H2 |eτ |2 ). (4.96)

Substituting (4.93)–(4.96) into (4.92) yields

LV(t, i, e) ≤ (ηq_i + Σ_{k=1}^{S} γ_{ik} q_k)|e|² + (L² + H₂)q_i|e_τ|²
            ≤ −m|e|² + (L² + H₂)q̄|e_τ|²,        (4.97)

where m = −(ηq_i + Σ_{k=1}^{S} γ_{ik} q_k) by (q_1, q_2, …, q_S)^T = M⁻¹m⃗.
Let w_1(e) = m|e|² and w_2(e_τ) = (L² + H₂)q̄|e_τ|². Then inequalities (1.14) and (1.15) hold by using (4.90), where γ(t) = 0 in (1.14). By Lemma 1.9, the error system (4.88) is almost surely asymptotically stable, and hence the noise-perturbed response system (4.86) can be adaptively almost surely asymptotically synchronized with the drive delay neural network (4.85). This completes the proof.
Remark 4.33 In Theorem 4.32, condition (4.90) for the adaptive almost sure asymptotic synchronization of SDNNs with Markovian switching, obtained by using the M-matrix and Lyapunov functional method, is generator-dependent and very different from conditions obtained by, for example, the linear matrix inequality method. The condition is easy to check once the drive system and the response system are given and the positive constant m is well chosen; a sketch of such a check is given below.
Special case 1 The Markovian jumping parameters are removed from the neural
networks (4.85) and the response system (4.86). In this case, S = 1 and the drive
system, the response system and the error system can be represented, respectively,
as follows

d x(t) = [−C x(t) + A f (x(t)) + B f (x(t − τ (t))) + D]dt, (4.98)

dy(t) = [−C y(t) + A f (y(t)) + B f (y(t − τ (t))) + D + U (t)]dt


(4.99)
+ σ(t, y(t) − x(t), y(t − τ (t)) − x(t − τ (t)))dω(t),

and
de(t) = [−Ce(t) + Ag(e(t))
(4.100)
+ Bg(eτ (t)) + U (t)]dt + σ(t, e(t), eτ (t))dω(t).

For this case, one can get the following result analogous to Theorem 4.32.

Corollary 4.34 Let

η = −2γ + α + L² + β + H₁,
γ = min_{1≤j≤n} c_j,   α = (ρ(A))²,   β = (ρ(B))².

Assume that

η < 0

and

L² + H₂ < −η.        (4.101)

Under Assumptions 4.26–4.28, the noise-perturbed response system (4.99) can be adaptively almost surely asymptotically synchronized with the delay neural network (4.98), if the feedback gain K(t) of the controller (4.87) with the update law is chosen as

k̇_j = −α_j e_j²,        (4.102)

where α_j > 0 (j = 1, 2, …, n) are arbitrary constants.

Proof Choose the following nonnegative function:

V(t, e) = |e|² + Σ_{j=1}^{n} (1/α_j)k_j².

The rest of the proof is similar to that of Theorem 4.32, and is hence omitted.
Special case 2 The noise-perturbation is removed from the response system (4.86),
which yields the noiseless response system

dy(t) = [−Ĉ(r (t))y(t) + Â(r (t)) f (y(t)) + B̂(r (t)) f (y(t − τ (t))) + D(r (t)) + U (t)]dt
(4.103)
and the error system

de(t) = [−C(r (t))e(t) + A(r (t))g(e(t)) + B(r (t))g(eτ (t)) + U (t)]dt, (4.104)

respectively.
In this case, the following results can be obtained.
Corollary 4.35 Assume that M := −diag{η, η, …, η} − Γ (with S diagonal entries η) is a nonsingular M-matrix, where

η = −2γ + α + L² + β.

Let m > 0 and m⃗ = (m, m, …, m)^T. Then (q_1, q_2, …, q_S)^T := M⁻¹m⃗ ≫ 0 by Lemma 1.12. Assume also that


L²q̄ < −(ηq_i + Σ_{k=1}^{S} γ_{ik} q_k),   ∀i ∈ S,        (4.105)

where q̄ = max_{i∈S} q_i.
Under Assumptions 4.26–4.28, the noiseless response system (4.103) can be adaptively almost surely asymptotically synchronized with the unknown drive delay neural network (4.85), if the feedback gain K(t) of the controller (4.87) with the update law is chosen as

k̇_j = −q_i α_j e_j²,        (4.106)

where α_j > 0 (j = 1, 2, …, n) are arbitrary constants.

Proof For each i ∈ S, choose a nonnegative function as follows:

V(t, i, e) = q_i|e|² + Σ_{j=1}^{n} (1/α_j)k_j².

The rest of the proof is similar to that of Theorem 4.32, and is hence omitted.

4.4.4 Numerical Examples

In this section, an illustrative example is given to support our main results.

Example 4.36 Consider a delay neural network (4.85), and its response system
(4.86) with Markovian switching and the following network parameters:





C₁ = [2 0; 0 2.4],   C₂ = [1.5 0; 0 1],   A₁ = [3.2 −1.5; −2.7 3.2],   A₂ = [2.1 −0.6; −0.8 3.2],
B₁ = [2.7 −3.1; 0 2.3],   B₂ = [−1.4 −2.1; 0.3 1.5],   D₁ = [0.4; 0.5],   D₂ = [0.4; 0.6],   Γ = [−1.2 1.2; 0.5 −0.5],
σ(t, e(t), e(t − τ), 1) = (0.4e_1(t − τ), 0.5e_2(t))^T,
σ(t, e(t), e(t − τ), 2) = (0.5e_1(t), 0.3e_2(t − τ))^T,
f(x(t)) = g(x(t)) = tanh(x(t)),   τ = 0.12,   L = 1.

It can be checked that Assumptions 4.26–4.28 and the inequality (4.90) are satisfied and that the matrix M is a nonsingular M-matrix. So the noise-perturbed response system (4.86) can be adaptively almost surely asymptotically synchronized with the drive delay neural network (4.85) by Theorem 4.32, as illustrated by the simulation sketch below.
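The following is a rough Euler-Maruyama sketch (not the authors' simulation code) of the error system (4.88) for this example with the adaptive gain law (4.91); the weights q_1, q_2, the rates α_j, the initial history, and the componentwise interpretation of the noise term are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
C = [np.diag([2.0, 2.4]), np.diag([1.5, 1.0])]
A = [np.array([[3.2, -1.5], [-2.7, 3.2]]), np.array([[2.1, -0.6], [-0.8, 3.2]])]
B = [np.array([[2.7, -3.1], [0.0, 2.3]]), np.array([[-1.4, -2.1], [0.3, 1.5]])]
Gamma = np.array([[-1.2, 1.2], [0.5, -0.5]])
q = np.array([1.0, 1.0]); alpha = np.array([1.0, 1.0])   # illustrative choices
dt, tau, T = 1e-3, 0.12, 20.0
d, n = int(round(tau / dt)), int(round(T / dt))
e = np.zeros((n + 1, 2)); e[:d + 1] = [0.5, -0.5]        # constant initial history
k = np.zeros(2); mode = 0
g = np.tanh
for t in range(d, n):
    et, ed = e[t], e[t - d]                               # e(t), e(t - tau)
    sig = np.array([0.4 * ed[0], 0.5 * et[1]]) if mode == 0 else \
          np.array([0.5 * et[0], 0.3 * ed[1]])
    drift = -C[mode] @ et + A[mode] @ g(et) + B[mode] @ g(ed) + k * et  # U = K(t)e
    e[t + 1] = et + drift * dt + sig * rng.normal(size=2) * np.sqrt(dt)
    k += -q[mode] * alpha * et ** 2 * dt                  # update law (4.91)
    if rng.random() < -Gamma[mode, mode] * dt:            # Markovian switching
        mode = 1 - mode
```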

Fig. 4.15 The state response of the error system: e_1(t), e_2(t)

Fig. 4.16 The feedback gains k_1(t), k_2(t)

The simulation results are given in Figs. 4.15 and 4.16. Among them, Fig. 4.15 shows the state response e_1(t), e_2(t) of the error system, and Fig. 4.16 shows the feedback gains k_1(t), k_2(t). From these simulations, one can find that the stochastic delay neural networks with Markovian switching achieve adaptive almost sure asymptotic synchronization.

4.4.5 Conclusions

In this section, we have proposed a concept of adaptive almost sure asymptotic synchronization for stochastic delay neural networks with Markovian switching. Making use of the M-matrix and Lyapunov functional method, we have obtained a sufficient condition under which the response stochastic delay neural network with Markovian switching can be adaptively almost surely asymptotically synchronized with the drive delay neural network with Markovian switching. The method used to obtain this sufficient condition is different from the linear matrix inequality technique. The condition obtained here depends on the generator of the Markovian jumping models and can be easily checked. An extensive simulation result is provided that demonstrates the effectiveness of our theoretical results and analytical tools.

4.5 Adaptive pth Moment Exponential Synchronization of SDNN with Markovian Jump

4.5.1 Introduction

In reality, time-delay systems are frequently encountered in many areas, and a time delay is often a source of instability and oscillation. For neural networks with time delays, various sufficient conditions have been proposed to guarantee global asymptotic or exponential stability in the recent literature, see e.g. [13, 43, 63] and the references therein, in which many methods have been exploited, such as the linear matrix inequality approach.
Meanwhile, many neural networks may experience abrupt changes in their struc-
ture and parameters caused by some phenomena such as component failures or
repairs, changing subsystem interconnections, and abrupt environmental distur-
bances. In this situation, there exist finite modes in the neural networks, and the
modes may be switched (or jumped) from one to another at different times. This
kind of system is widely studied by many scholars, see e.g. [27, 33, 45, 58, 60] and
the references therein.
As we know, synchronization for neural networks means achieving accordance between the states of the drive system and the response system; that is to say, the state of the error system between the drive and response systems eventually reaches zero as time approaches infinity. In particular, adaptive synchronization for neural networks is a synchronization in which the parameters of the drive system need to be estimated and the synchronization control law needs to be updated in real time as the neural networks evolve.
Up to now, the synchronization problem of neural networks has been extensively investigated over the last decade owing to successful applications in many areas, such as signal processing, combinatorial optimization, and communication. Moreover, adaptive synchronization for neural networks has been used in real neural network control, such as parameter-estimation adaptive control and model-reference adaptive control. In the past decade, much attention has been devoted to research on adaptive synchronization for neural networks (see e.g. [19, 25, 31, 61, 65] and the references therein). In [31], the adaptive lag synchronization issue of unknown chaotic delayed neural networks with noise perturbation is considered, and suitable parameter update laws and several sufficient conditions to ensure lag synchronization of unknown delayed neural networks with or without noise perturbation are derived. In [19], an adaptive feedback controller is designed to achieve complete synchronization of unidirectionally coupled delayed neural networks with stochastic perturbation, and the globally almost surely asymptotical stability of the error dynamical system is investigated by a LaSalle-type invariance principle. In [65], an adaptive synchronization condition under almost every initial datum for stochastic neural networks with time-varying delays and distributed delays is derived. In [61], the issues of lag synchronization of coupled chaotic delayed neural networks are investigated: by using adaptive control with a linear feedback update law, some simple yet generic criteria for determining the lag synchronization of coupled chaotic delayed neural networks are derived based on the invariance principle of functional differential equations. In [25], Lu et al. investigated globally exponential synchronization for linearly coupled neural networks with time-varying delay and impulsive disturbances; by referring to an impulsive delay differential inequality, a sufficient condition for globally exponential synchronization of linearly coupled neural networks with impulsive disturbances is derived there.
In this section, we are concerned with the analysis of mode- and delay-dependent adaptive exponential synchronization of neural networks with stochastic delays and Markovian switching parameters by employing the M-matrix approach. The main purpose of this section is to establish M-matrix-based stability criteria for testing whether stochastic delayed neural networks are exponentially synchronized in the pth moment. We will use a simple example to illustrate the usefulness of the derived M-matrix-based synchronization conditions.

4.5.2 Problem Formulation and Preliminaries

In this section, we consider the neural networks called drive system and represented
by the compact form as follows

d x(t) = [−C(r (t))x(t) + A(r (t)) f (x(t))


(4.107)
+B(r (t)) f (x(t − τ (t))) + D(r (t))]dt,

where t ≥ 0 (or t ∈ R+ , the set of all nonnegative real numbers) is the time
variable, x(t) = (x1 (t), x2 (t), . . . , xn (t))T ∈ Rn is the state vector associated with
n neurons, f (x(t)) = ( f 1 (x1 (t)), f 2 (x2 (t)), . . . , f n (xn (t)))T ∈ Rn denotes the
activation function of the neurons, τ (t) is the transmission delay satisfying that 0 <
τ (t) ≤ τ̄ and τ̇ (t) ≤ τ̂ < 1, where τ̄ , τ̂ are constants. As a matter of convenience,
for t ≥ 0, we denote r (t) = i and A(r (t)) = Ai , B(r (t)) = B i , C(r (t)) = C i
and D(r (t)) = D i respectively. In the drive system (4.107), furthermore, ∀i ∈ S,
C i = diag {c1i , c2i , . . . , cni } has positive and unknown entries cki > 0, Ai = (a ijk )n×n
and B i = (bijk )n×n are the connection weight and the delayed connection weight
matrices, respectively, and are both unknown matrices. D i = (d1i , d2i , . . . , dni )T ∈ Rn
is the constant external input vector.
For the drive systems (4.107), a response system is constructed as follows:

dy(t) = [−Ĉ(r (t))y(t) + Â(r (t)) f (y(t))


+ B̂(r (t)) f (y(t − τ (t))) + D(r (t)) + U (t)]dt (4.108)
+ σ(t, r (t), y(t) − x(t), y(t − τ (t))
− x(t − τ (t)))dω(t),

where y(t) is the state vector of the response system (4.108), Ĉ i = diag {ĉ1i , ĉ2i , . . . ,
ĉni }, Âi = (â ijk )n×n and B̂ i = (b̂ijk )n×n are the estimations of the unknown matrices
C i , Ai , and B i , respectively, U (t) = (u 1 (t), u 2 (t), . . . , u n (t))T ∈ Rn is a control
input vector with the form of

U (t) = K (t)(y(t) − x(t))


(4.109)
= diag {k1 (t), k2 (t), . . . , kn (t)}(y(t) − x(t)),

ω(t) = (ω_1(t), ω_2(t), …, ω_n(t))^T is an n-dimensional Brownian motion defined on a complete probability space (Ω, F, P) with a natural filtration {F_t}_{t≥0} (i.e. F_t = σ{ω(s) : 0 ≤ s ≤ t} is a σ-algebra) and is independent of the Markovian process {r(t)}_{t≥0}, and σ : R_+ × S × R^n × R^n → R^{n×n} is the noise intensity matrix, which can be regarded as resulting from the occurrence of external random fluctuations and other probabilistic causes.
Let e(t) = y(t)− x(t). For the purpose of simplicity, we mark e(t −τ (t)) = eτ (t)
and f (x(t) + e(t)) − f (x(t)) = g(e(t)). From the drive system (4.107) and the
response system (4.108), the error system of theirs can be represented as follows:

de(t) = [−C̃(r (t))y(t) − C(r (t))e(t) + Ã(r (t)) f (y(t))


+ A(r (t))g(e(t)) + B̃(r (t)) f (yτ (t))
(4.110)
+ B(r (t))g(eτ (t)) + U (t)]dt
+ σ(t, r (t), e(t), eτ (t))dω(t),

where C̃(r (t)) = Ĉ(r (t)) − C(r (t)), Ã(r (t)) = Â(r (t)) − A(r (t)) and B̃(r (t)) =
B̂(r (t)) − B(r (t)). Denote c̃ij = ĉij − cij , ã ijk = â ijk − a ijk and b̃ijk = b̂ijk − bijk , then
C̃ i = diag {c̃1i , c̃2i , . . . , c̃ni }, Ãi = (ã ijk )n×n and B̃ i = (b̃ijk )n×n .
The initial condition associated with system (4.110) is given in the following
form:
e(s) = ξ(s), s ∈ [−τ̄ , 0],

for any ξ(s) ∈ L2F0 ([−τ̄ , 0], Rn ), where L2F0 ([−τ̄ , 0], Rn ) is the family of all
F0 -measurable C([−τ̄ , 0]; Rn )-value random variables satisfying that sup−τ̄ ≤s≤0
E|ξ(s)|2 < ∞, and C([−τ̄ , 0]; Rn ) denotes the family of all continuous Rn -valued
functions ξ(s) on [−τ̄ , 0] with the norm ξ(s) = sup−τ̄ ≤s≤0 |ξ(s)|.

To obtain the main result, we need the following assumptions.


Assumption 4.37 The activation functions of the neurons f (x(t)) satisfy the Lip-
schitz condition. That is, there exists a constant L > 0 such that

| f (u) − f (v)| ≤ L|u − v|, ∀u, v ∈ Rn .

Assumption 4.38 The noise intensity matrix σ(·, ·, ·, ·) satisfies the linear growth
condition. That is, there exist two positive constants H1 and H2 such that

trace(σ(t, r (t), u(t), v(t)))T (σ(t, r (t), u(t), v(t)))


≤ H1 |u(t)|2 + H2 |v(t)|2

for all (t, r (t), u(t), v(t)) ∈ R+ × S × Rn × Rn .

Remark 4.39 Under Assumptions 4.37 and 4.38, the error system (4.110) admits an
equilibrium point (or trivial solution) e(t, ξ(s)), t ≥ 0.

The following stability concept and synchronization concept are needed in this
section.
Definition 4.40 The trivial solution e(t, ξ(s)) of the error system (4.110) is said to be exponentially stable in the pth moment if

lim sup_{t→∞} (1/t) log(E|e(t, ξ(s))|^p) < 0

for any ξ(s) ∈ L^p_{F_0}([−τ̄, 0]; R^n), where p ≥ 2, p ∈ Z. When p = 2, it is said to be exponentially stable in mean square.
The drive system (4.107) and the response system (4.108) are said to be exponentially synchronized in the pth moment if the error system (4.110) is exponentially stable in the pth moment.
The main purpose of the rest of this section is to establish a criterion of adaptive
exponential synchronization in pth moment of the system (4.107) and the response
system (4.108) by using adaptive feedback control and M-matrix techniques.
To this end, we introduce some concepts and lemmas which will be used in the
proofs of our main results.
Consider an n-dimensional stochastic delayed differential equation (SDDE, for
short) with Markovian switching

d x(t) = f (t, r (t), x(t), xτ (t))dt + g(t, r (t), x(t), xτ (t))dω(t) (4.111)

on t ∈ [0, ∞) with the initial data given by

{x(θ) : −τ̄ ≤ θ ≤ 0} = ξ(θ) ∈ L^p_{F_0}([−τ̄, 0]; R^n).

For V ∈ C2,1 (R+ × S × Rn ; R+ ), define an operator L from R+ × S × Rn × Rn


to R by Eq. (1.7).

4.5.3 Main Results

In this section, we give a criterion of adaptive exponential synchronization in pth


moment for the drive system (4.107) and the response system (4.108). First, we
establish a general result which can be applied widely.

Theorem 4.41 Assume that there is a function V (t, i, x) ∈ C2,1 (R+ × S × Rn ; R+ )


and positive constants p, c1 , λ1 , and λ2 such that

λ2 < λ1 (1 − τ̂ ), (4.112)

c1 |x| p ≤ V (t, i, x) (4.113)

and
LV (t, i, x, xτ ) ≤ −λ1 |x| p + λ2 |xτ | p (4.114)

for all t ≥ 0, i ∈ S and x ∈ R^n (x = x(t) for short). Then the SDDE (4.111) is exponentially stable in the pth moment.

Proof For the function V(t, i, x), applying Lemma 1.5 and using the above conditions, we obtain

c_1 E|x|^p ≤ EV(0, r_0, ξ(0)) + E ∫_0^t LV(s, r(s), x(s), x_τ(s))ds
           ≤ EV(0, r_0, ξ(0)) + E ∫_0^t (−λ_1|x|^p + λ_2|x_τ|^p)ds.

For ∫_0^t |x_τ|^p ds, let u = s − τ(s); then du = (1 − τ̇(s))ds and

∫_0^t |x_τ|^p ds = ∫_{−τ(0)}^{t−τ(t)} (1/(1 − τ̇(s))) |x(u)|^p du
               ≤ (1/(1 − τ̂)) ∫_{−τ̄}^{t} |x(s)|^p ds
               = (1/(1 − τ̂)) ∫_{−τ̄}^{0} |x(s)|^p ds + (1/(1 − τ̂)) ∫_0^t |x(s)|^p ds
               ≤ (τ̄/(1 − τ̂)) max_{−τ̄ ≤ s ≤ 0} |ξ(s)|^p + (1/(1 − τ̂)) ∫_0^t |x(s)|^p ds.

So

E|x|^p ≤ c + v ∫_0^t E|x|^p ds,

where

c = c_1^{−1}(EV(0, r_0, ξ(0)) + (λ_2 τ̄/(1 − τ̂)) max_{−τ̄ ≤ s ≤ 0} E|ξ(s)|^p),
v = (−λ_1(1 − τ̂) + λ_2)/(c_1(1 − τ̂)).

It can be seen that c and v are constants with c > 0 and v < 0. By using Gronwall's inequality, we have

E|x|^p ≤ c exp(vt).

Therefore

lim sup_{t→∞} (1/t) log(E|x(t, ξ)|^p) ≤ v < 0.

Thus the SDDE (4.111) is exponentially stable in the pth moment. This completes the proof.
Now we are in a position to set up a criterion of adaptive exponential synchroniza-
tion in pth moment for the drive system (4.107) and the response system (4.108).

Theorem 4.42 Assume that M := −diag{η, η, …, η} − Γ (with S diagonal entries η) is a nonsingular M-matrix, where

η = (1/2)p[−2γ + α + L² + β + (p − 1)H₁] + (1/2)(p − 2)[L² + (p − 1)H₂],
γ = min_{i∈S} min_{1≤j≤n} c_j^i,
α = max_{i∈S}(ρ(A^i))²,
β = max_{i∈S}(ρ(B^i))²,   p ≥ 2.

Let m > 0 and m⃗ = (m, m, …, m)^T (in this case, (q_1, q_2, …, q_S)^T := M⁻¹m⃗ ≫ 0, i.e. all elements of M⁻¹m⃗ are positive, by Lemma 1.12). Assume also that

(L² + (p − 1)H₂)q̄ < −(ηq_i + Σ_{k=1}^{S} γ_{ik} q_k)(1 − τ̂),   ∀i ∈ S,        (4.115)

where q̄ = max_{i∈S} q_i.
Under Assumptions 4.37 and 4.38, the noise-perturbed response system (4.108) can be adaptively exponentially synchronized in the pth moment with the drive neural network (4.107), if the feedback gain K(t) of the controller (4.109) with the update law is chosen as

k̇_j = −(1/2)α_j p q_i |e|^{p−2} e_j²,        (4.116)

and the parameter update laws of the matrices Ĉ^i, Â^i and B̂^i are chosen as

ĉ˙_j^i = (γ_j/2) p q_i |e|^{p−2} e_j y_j,
â˙_{jl}^i = −(α_{jl}/2) p q_i |e|^{p−2} e_j f_l,        (4.117)
b̂˙_{jl}^i = −(β_{jl}/2) p q_i |e|^{p−2} e_j (f_l)_τ,

where α_j > 0, γ_j > 0, α_{jl} > 0, and β_{jl} > 0 (j, l = 1, 2, …, n) are arbitrary constants.
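In discrete time, (4.116) and (4.117) again reduce to one Euler step per iteration; the only change from the mean-square case is the common weight (1/2)p q_i|e|^{p−2}. The following sketch uses illustrative names and is not the book's code, with f_y and f_y_tau standing for f(y(t)) and f(y(t − τ(t))).

```python
import numpy as np

def pth_moment_step(e, y, f_y, f_y_tau, k, c_hat, A_hat, B_hat, qi, p,
                    alpha_j, gamma_j, alpha_jl, beta_jl, dt):
    """One Euler step of the gain law (4.116) and parameter laws (4.117)."""
    w = 0.5 * p * qi * np.linalg.norm(e) ** (p - 2)   # common factor (1/2) p q_i |e|^{p-2}
    k     = k     + dt * (-w * alpha_j * e ** 2)               # (4.116)
    c_hat = c_hat + dt * ( w * gamma_j * e * y)                # diagonal entries of C-hat^i
    A_hat = A_hat + dt * (-w * alpha_jl * np.outer(e, f_y))    # entries a-hat^i_jl
    B_hat = B_hat + dt * (-w * beta_jl * np.outer(e, f_y_tau)) # entries b-hat^i_jl
    return k, c_hat, A_hat, B_hat
```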

Proof For each i ∈ S, choose a nonnegative function as follows:

V(t, i, e) = q_i|e|^p + Σ_{j=1}^{n} [(1/α_j)k_j² + (1/γ_j)(c̃_j^i)² + Σ_{l=1}^{n} (1/α_{jl})(ã_{jl}^i)² + Σ_{l=1}^{n} (1/β_{jl})(b̃_{jl}^i)²].

Clearly V(t, i, x) obeys (4.113) with c_1 = min_{i∈S} q_i. Computing LV(t, i, e, e_τ) along the trajectory of the error system (4.110), and using (4.116) and (4.117), one can obtain
LV(t, i, e, e_τ) = V_t(t, i, e) + V_e(t, i, e)[−C̃^i y − C^i e + Ã^i f(y) + A^i g(e) + B̃^i f(y_τ) + B^i g(e_τ) + U(t)]
    + (1/2)trace(σ^T(t, i, e, e_τ)V_ee(t, i, e)σ(t, i, e, e_τ)) + Σ_{k=1}^{S} γ_{ik} V(t, k, e)
  = 2 Σ_{j=1}^{n} (1/α_j)k_j k̇_j + 2 Σ_{j=1}^{n} (1/γ_j)c̃_j^i ĉ˙_j^i + 2 Σ_{j=1}^{n} Σ_{l=1}^{n} (1/α_{jl})ã_{jl}^i â˙_{jl}^i + 2 Σ_{j=1}^{n} Σ_{l=1}^{n} (1/β_{jl})b̃_{jl}^i b̂˙_{jl}^i
    + p q_i|e|^{p−2} e^T[−C̃^i y − C^i e + Ã^i f(y) + A^i g(e) + B̃^i f(y_τ) + B^i g(e_τ) + U(t)]
    + (1/2)trace(σ^T(t, i, e, e_τ)(p(p − 1)q_i|e|^{p−2})σ(t, i, e, e_τ)) + Σ_{k=1}^{S} γ_{ik} q_k|e|^p        (4.118)
  = p q_i|e|^{p−2} e^T[−C^i e + A^i g(e) + B^i g(e_τ)]
    + (1/2)trace(σ^T(t, i, e, e_τ)(p(p − 1)q_i|e|^{p−2})σ(t, i, e, e_τ)) + Σ_{k=1}^{S} γ_{ik} q_k|e|^p.

Now, using Assumptions 4.37 and 4.38 together with Lemma 1.13 yields

− e T C i e ≤ −γ|e|2 , (4.119)

e^T A^i g(e) ≤ (1/2)e^T A^i(A^i)^T e + (1/2)g^T(e)g(e) ≤ (1/2)(α + L²)|e|²,        (4.120)

e^T B^i g(e_τ) ≤ (1/2)e^T B^i(B^i)^T e + (1/2)g^T(e_τ)g(e_τ) ≤ (1/2)(β|e|² + L²|e_τ|²),        (4.121)

and

(1/2)trace(σ^T(t, i, e, e_τ)(p(p − 1)q_i|e|^{p−2})σ(t, i, e, e_τ)) ≤ (1/2)p(p − 1)q_i|e|^{p−2}(H₁|e|² + H₂|e_τ|²).        (4.122)

On the other hand, making use of Young's inequality, we have

|e|^{p−2}|e_τ|² ≤ ((p − 2)/p)|e|^p + (2/p)|e_τ|^p.        (4.123)

Substituting (4.119)–(4.123) into (4.118) yields

LV(t, i, e, e_τ) ≤ [(1/2)p(−2γ + α + L² + β + (p − 1)H₁)q_i + Σ_{k=1}^{S} γ_{ik} q_k]|e|^p
                 + (1/2)p(L² + (p − 1)H₂)q_i|e|^{p−2}|e_τ|²
               ≤ (ηq_i + Σ_{k=1}^{S} γ_{ik} q_k)|e|^p + (L² + (p − 1)H₂)q_i|e_τ|^p        (4.124)
               ≤ −m|e|^p + (L² + (p − 1)H₂)q̄|e_τ|^p.

Let λ1 = m, λ2 = (L 2 + ( p − 1)H2 )q̄. Then inequalities (4.114) and (4.112)


hold. By Theorem 4.41, the error system (4.110) is adaptive exponential stability in
pth moment, and hence the noise-perturbed response system (4.108) can be adaptive
exponential synchronized in pth moment with the neural network (4.107). This
completes the proof.

Remark 4.43 In Theorem 4.42, the condition (4.115) of the adaptive exponential
synchronization for neural networks with Markovian switching obtained by using
M-matrix approach is mode dependent and very different to those, such as linear
matrix inequality method. And the condition can be checked if the drive system and
the response system are given and the positive constant m be chosen.

Now, we are in a position to consider two special cases of the drive system (4.107)
and the response system (4.108).
Special case 1 The Markovian jumping parameters are removed from the neural
networks. That is to say, S = 1. For this case, one can get the following result
analogous to Theorem 4.42.
4.5 Adaptive pth Moment Exponential Synchronization of SDNN … 145

Corollary 4.44 Assume that η < 0 and L 2 + ( p − 1)H2 < −η(1 − τ̂ ), where

η = (1/2) p[−2γ + α + L 2 + β + ( p − 1)H1 ]


+ (1/2)( p − 2)[L 2 + ( p − 1)H2 ].

Under Assumptions 4.37 and 4.38, the noise-perturbed response system

k̇ j = −(1/2)α j p|e| p−2 e2j , (4.125)

and the update laws of the parameters of matrices Ĉ, Â and B̂ are chosen as
⎧ γj
⎪ ˙
⎨ ĉ j = 2 p|e|
p−2 e y ,
j j
˙â jl = − αjl p|e| p−2 e j fl , (4.126)

⎩˙ β
2
b̂jl = − 2jl p|e| p−2 e j ( fl )τ ,

where α j > 0, γ j > 0, αjl > 0 and βjl > 0 ( j, l = 1, 2, . . . , n) are arbitrary
constants, respectively.

Proof Choose the following nonnegative function:




n
V (t, e) = |e| p + 1 2
αj k j + 1
γj (c̃ j )2
j=1 

n
n
+ αjl (ãjl )
1 2 + βjl (b̃jl )
1 2 .
l=1 l=1

The proof is similar to that of Theorem 4.42, and hence omitted.


Special case 2 When the noise-perturbation is removed from the response system
(4.108), it yields the noiseless response system which can lead to the following
results.

Corollary 4.45 Assume that M := −diag {η, η, . . . , η } − Γ is a nonsingular


  
S
M-matrix, where

η = (1/2) p(−2γ + α + L 2 + β) + (1/2)( p − 2)L 2 ,

and 

S
L q̄ < − ηqi +
2
γik qk (1 − τ̂ ), ∀i ∈ S, (4.127)
k=1

where q̄ = max qi .
i∈S
Under Assumption 4.37, the noiseless-perturbed response system can be adaptive
exponential synchronized in pth moment with the drive neural network, if the feed-
146 4 Adaptive Synchronization of Neural Networks

back gain K (t) of the controller (4.109) with the update law is chosen as (4.116)
and the parameters update laws of matrices Cˆ i , Âi and B̂ i are chosen as (4.117).
Proof The proof is similar to that of Theorem 4.42, and hence omitted.

4.5.4 Numerical Examples

In the section, we present an example to illustrate the usefulness of the main results
obtained in this section. The adaptive exponential stability in pth moment is examined
for a given stochastic delayed neural networks with Markovian jumping parameters.
Example 4.46 Consider the delayed neural networks (4.107) with Markovian switch-
ing, the response stochastic delayed neural networks (4.108) with Markovian switch-
ing, and the error system (4.110) with the network parameters given as follows:




2.1 0 2.5 0 1.2 −1.5
C1 = , C2 = , A1 = ,
0 2.8 0 2.2 −1.7 1.2




1.1 −1.6 0.7 −0.2 −0.4 −0.1
A2 = , B1 = , B2 = ,
−1.8 1.2 0 0.3 −0.3 0.5




0.6 0.8 −0.12 0.12
D1 = D̂1 = , D2 = D̂2 = ,Γ = ,
0.1 0.2 0.11 −0.11
α11 = α12 = α21 = α22 = β11 = β12 = β21 = β22 = 1,
σ(t, e(t), e(t − τ ), 1) = (0.4e1 (t − τ ), 0.5e2 (t))T ,
σ(t, e(t), e(t − τ ), 2) = (0.5e1 (t), 0.3e2 (t − τ ))T ,
p = 3, L = 1, f (x(t)) = tanh(x(t)), τ = 0.12.

It can be checked that Assumptions 4.37, 4.38, and the inequality (4.115) are
satisfied and the matrix M is a nonsingular M-matrix. So the noise-perturbed response
system (4.108) can be adaptive exponential synchronized in pth moment with the
drive neural network (4.107) by Theorem 4.42. The simulation results are given in
Figs. 4.17, 4.18, 4.19, 4.20 and 4.21. Among them, Fig. 4.17 shows the state response
of errors system e1 (t), e2 (t). Figure 4.18 shows the feedback gain k1 , k2 . Figures 4.19,
4.20 and 4.21 show the parameters update laws of matrices C, % A,
% % B chosen as
c1 (t), c2 (t), a11 (t), a12 (t), a21 (t), a22 (t), b11 (t), b12 (t), b21 (t) and b22 (t). From the
simulations figures, one can see that the stochastic delayed neural networks with
markovian switching (4.107) and (4.108) are adaptive exponential synchronization
in pth moment.

4.5.5 Conclusions

In this section, we have dealt with the problem of the mode and delay-dependent
adaptive exponential synchronization in pth moment for neural networks with sto-
4.5 Adaptive pth Moment Exponential Synchronization of SDNN … 147

Fig. 4.17 The response 5


curve of e1 (t), e2 (t) of the e1(t)
4
errors system e2(t)
3

−1

−2

−3

−4
0 100 200 300 400 500 600
t

Fig. 4.18 The dynamic 2


curve of the feedback gain k1(t)
k1 , k2 0 k (t)
2

−2

−4

−6

−8

−10

−12
0 100 200 300 400 500 600
t

Fig. 4.19 The dynamic 8


curve of the parameters
c1 (t), c2 (t) 7
c (t)
6 1
c (t)
2
5

0
0 100 200 300 400 500 600
t
148 4 Adaptive Synchronization of Neural Networks

Fig. 4.20 The dynamic 4


curve of the parameters
a11 (t), a12 (t), a21 (t), a22 (t) 3

2 a11(t)
a (t)
1 12
a (t)
21
0
a22(t)
−1

−2

−3

−4
0 100 200 300 400 500 600
t

Fig. 4.21 The dynamic 0.6


curve of the parameters
b11 (t), b12 (t), b21 (t), b22 (t) 0.4

0.2 b (t)
11
b12(t)
0
b (t)
21
−0.2 b (t)
22

−0.4

−0.6

−0.8
0 100 200 300 400 500 600
t

chastic delayed and Markovian jumping parameters. We have removed the traditional
monotonicity and smoothness assumptions on the activation function. An M-matrix
approach has been developed to solve the problem addressed. The conditions for
the adaptive exponential synchronization in pth moment have been derived in terms
of some algebraical inequalities. These synchronization conditions are much differ-
ent to those of linear matrix inequality. Finally, a simple example has been used to
demonstrate the effectiveness of the main results which obtained in this section.

References

1. J. Cao, P. Li, W. Wang, Global synchronization in arrays of delayed neural networks with
constant and delayed coupling. Phys. Lett. A 353(4), 318–325 (2006)
2. J. Cao, X. Li, Stability in delayed Cohen-Grossberg neural networks: LMI optimization
approach. Phys. D 212(1), 54–65 (2005)
References 149

3. J. Cao, J. Lu, Adaptive synchronization of neural networks with or without time-varying delays.
Chaos: Interdiscip. J. Nonlinear Sci. 16(1), 013133–013139 (2006)
4. J. Cao, L. Wang, Periodic oscillatory solution of bidirectional associative memory networks
with delays. Phys. Rev. E 61(2), 1825–1828 (2000)
5. J. Cao, Z. Wang, Y. Sun, Synchronization in an array of linearly stochastically coupled networks
with time-delays. Phys. A: Stat. Mech. Appl. 385(2), 718–728 (2007)
6. G. Chen, J. Zhou, Z. Liu, Classification of chaos in 3-d autonomous quadratic systems-I: basic
framework and methods. Int. J. Bifurc. Chaos 16(9), 2459–2479 (2006)
7. G.R. Chen, J. Zhou, Z.R. Liu, Global synchronization of coupled delayed neural networks and
applications to chaotic CNN model. Int. J. Bifurc. Chaos 14(7), 2229–2240 (2004)
8. M. Chen, D. Zhou, Synchronization in uncertain complex networks. Chaos: Interdiscip. J.
Nonlinear Sci. 16(1), 013101 (2006)
9. T. Chen, L. Wang, Power-rate global stability of dynamical systems with unbounded time-
varying delays. IEEE Trans. Circuits Syst. II: Express Briefs 54(8), 705–709 (2007)
10. M. Gilli, Strange attractors in delayed cellular neural networks. IEEE Trans. Circuits Syst. I:
Fundam. Theory Appl. 40(17), 849–853 (1993)
11. K. Gopalsamy, Stability of artificial neural networks with impulses. Appl. Math. Comput.
154(3), 783–813 (2004)
12. K. Gopalsamy, X. He, Delay-independent stability in bidirectional associative memory net-
works. IEEE Trans. Neural Netw. 5(6), 998–1002 (1994)
13. H. Huang, D.W.C. Ho, Y. Qu, Robust stability of stochastic delayed additive neural networks
with Markovian switching. Neural Netw. 20(7), 799–809 (2007)
14. H.R. Karimi, P. Maass, Delay-range-dependent exponential H∞ synchronization of a class of
delayed neural networks. Chaos, Solitons Fractals 41(3), 1125–1135 (2009)
15. J.H. Kim, C.H. Hyun, E. Kim, M. Park, Adaptive synchronization of uncertain chaotic systems
based on T-S fuzzy model. IEEE Trans. Fuzzy Syst. 15(3), 359–369 (2007)
16. B. Kosko, Adaptive bi-directional associative memories. Appl. Opt. 26(23), 4947–4960 (1987)
17. G.H. Li, Modified projective synchronization of chaotic system. Chaos, Solitons Fractals 32(5),
1786–1790 (2007)
18. P. Li, J. Cao, Z. Wang, Robust impulsive synchronization of coupled delayed neural networks
with uncertainties. Phys. A 373, 261–272 (2007)
19. X. Li, J. Cao, Adaptive synchronization for delayed neural networks with stochastic perturba-
tion. J. Frankl. Inst. 354(7), 779–791 (2008)
20. X. Liao, J. Yu, Qualitative analysis of bi-directional associative memory with time delay. Int.
J. Circuit Theory Appl. 26(3), 219–229 (1998)
21. B. Liu, X.Z. Liu, G.R. Chen, Robust impulsive synchronization of uncertain dynamical net-
works. IEEE Trans. Circuits Syst. I 52(7), 1431–1441 (2005)
22. Z.G. Liu, Global attractors of delayed BAM neural networks with reaction-diffusion terms. J.
Xiangnan Univ. 31(2), 5–11 (2010)
23. X. Lou, B. Cui, Synchronization of neural networks based on parameter identification and via
output or state coupling. J. Comput. Appl. Math. 222(2), 440–457 (2008)
24. H. Lu, Chaotic attractors in delayed neural networks. Phys. Lett. A 298(2–3), 109–116 (2002)
25. J. Lu, D.W.C. Ho, J. Cao, J. Kurths, Exponential synchronization of linearly coupled neural
networks with impulsive disturbances. IEEE Trans. Neural Netw. 22(2), 329–336 (2011)
26. Y. Lu, K. Yi, Adaptive projective synchronization of uncertain Rössler chaotic system. Comput.
Sci. 36(5), 91–193 (2009)
27. X. Mao, C. Yuan, Stochastic Differential Equations with Markovian Switching (Imperial Col-
lege Press, London, 2006)
28. M.J. Park, O. Kwon, J.H. Park, S.M. Lee, Simplified stability criteria for fuzzy Markovian
jumping Hopfield neural networks of neutral type with interval time-varying delays. Expert
Syst. Appl. 39(5), 5625–5633 (2012)
29. L.M. Pecora, T.L. Carroll, Synchronization in chaotic systems. Phys. Rev. Lett. 64(8), 821–824
(1990)
150 4 Adaptive Synchronization of Neural Networks

30. K. Sun, S. Qiu, L. Yin, Adaptive function projective synchronization and parameter identifica-
tion for chaotic systems. Inf. Control 39(3), 326–331 (2010)
31. Y. Sun, J. Cao, Adaptive lag synchronization of unknown chaotic delayed neural networks with
noise perturbation. Phys. Lett. A 364(3), 277–285 (2007)
32. Y. Sun, J. Cao, Z. Wang, Exponential synchronization of stochastic perturbed chaotic delayed
neural networks. Neurocomputing 70(13), 2477–2485 (2007)
33. Y. Sun, G. Feng, J. Cao, Stochastic stability of Markovian switching genetic regulatory net-
works. Phys. Lett. A 373(18), 1646–1652 (2009)
34. Y. Tang, J. Fang, Adaptive synchronization in an array of chaotic neural networks with mixed
delays and jumping stochastically hybrid coupling. Commun. Nonlinear Sci. Numer. Simul.
14(9), 3615–3628 (2009)
35. F. Wang, H.Y. Wu, Existence and stablity of periodic solution for BAM neural networks.
Comput. Eng. Appl. 46(24), 15–18 (2010)
36. K. Wang, Z. Teng, H. Jiang, Adaptive synchronization of neural networks with time-varying
delay and distributed delay. Phys. A: Stat. Mech. Appl. 387(2–3), 631–642 (2008)
37. W. Wang, J. Cao, Synchronization in an array of linearly coupled networks with time-varying
delay. Phys. A 366, 197–211 (2006)
38. X.Y. Wang, Q.A. Zhao, Class of uncertain delayed neural network adaptive projection syn-
chronization. Acta Phys. Sin. 57(5) (2008)
39. Z. Wang, J. Fang, X. Liu, Global stability of stochastic high-order neural networks with discrete
and distributed delays. Chaos, Solitons Fractals 36(2), 388–396 (2008)
40. Z. Wang, D.W.C. Ho, X. Liu, State estimation for delayed neural networks. IEEE Trans. Neural
Netw. 16(1), 279–284 (2005)
41. Z. Wang, S. Lauria, J. Fang, X. Liu, Exponential stability of uncertain stochastic neural networks
with mixed time-delays. Chaos, Solitons Fractals 32(1), 62–72 (2007)
42. Z. Wang, Y. Liu, F. Karl, X. Liu, Stochastic stability of uncertain Hopfield neural networks
with discrete and distributed delays. Phys. Lett. A 354(4), 288–297 (2006)
43. Z. Wang, Y. Liu, L. Liu, X. Liu, Exponential stability of delayed recurrent neural networks
with Markovian jumping parameters. Phys. Lett. A 356(4), 346–352 (2006)
44. Z. Wang, Y. Liu, X. Liu, On global asymptotic stability of neural networks with discrete and
distributed delays. Phys. Lett. A 345(4), 299–308 (2005)
45. Z. Wang, Y. Liu, X. Liu, Exponential stabilization of a class of stochastic system with Markovian
jump parameters and mode-dependent mixed time-delays. IEEE Trans. Autom. Control 55(7),
1656–1662 (2010)
46. Z. Wang, Y. Liu, G. Wei, X. Liu, A note on control of discrete-time stochastic systems with
distributed delays and nonlinear disturbances. Automatica 46(3), 543–548 (2010)
47. Z. Wang, H. Shu, J. Fang, X. Liu, Robust stability for stochastic Hopfield neural networks with
time delays. Nonlinear Anal.: Real World Appl. 7(5), 1119–1128 (2006)
48. Z. Wang, H. Shu, Y. Liu, D.W.C. Ho, X. Liu, Robust stability analysis of generalized neural
networks with discrete and distributed time delays. Chaos, Solitons Fractals 30(4), 886–896
(2006)
49. Z.D. Wang, D.W.C. Ho, Y.R. Liu, X.H. Liu, Robust H∞ control for a class of nonlinear discrete
time-delay stochastic systems with missing measurements. Automatica 45(3), 1–8 (2010)
50. C.W. Wu, Synchronization in array of coupled nonlinear system with delay and nonreciprocal
time-varying coupling. IEEE Trans. Circuits Syst. 52(5), 282–286 (2005)
51. H.J. Xiang, Exponential stablity of fuzzy BAM neural networks with diffusion. J. Xiangnan
Univ. 31(2), 12–19 (2010)
52. D. Xu, Z. Li, Controlled projective synchronization in nonpartially-linear chaotic systems. Int.
J. Bifurc. Chaos 12(06), 1395–1402 (2002)
53. L.X. Yang, W.S. He, X.J. Liu, H.B. Chen, Improved full state hybrid projective synchronization
in autonomous chaotic systems. J. Xianyang Norm. Univ. 25(2), 28–30 (2010)
54. W. Yu, J. Cao, Synchronization control of stochastic delayed neural networks. Phys. A 373,
252–260 (2007)
References 151

55. C. Yuan, X. Mao, Robust stability and controllability of stochastic differential delay equations
with Markovian switching. Automatica 40(3), 343–354 (2004)
56. H. Zhang, Y. Wang, D. Liu, Delay-dependent guaranteed cost control for uncertain stochastic
fuzzy systems with multiple time delays. IEEE Trans. Syst., Man Cybern., Part B 38(1), 125–
140 (2008)
57. J. Zhang, Y. Yang, Global stability analysis of bidirectional associative memory neural networks
with time delay. Int. J. Circuit Theory Appl. 29(2), 185–196 (2001)
58. L. Zhang, E. Boukas, Stability and stabilization of Markovian jump linear systems with partly
unknown transition probabilities. Automatica 45(2), 463–468 (2009)
59. W. Zhang, Y. Tang, J. Fang, Stochastic stability of Markovian jumping genetic regulatory
networks with mixed time delays. Appl. Math. Comput. 217(17), 7210–7225 (2011)
60. H. Zhao, S. Xu, Y. Zou, Robust H∞ filtering for uncertain Markovian jump systems with
mode-dependent distributed delays. Int. J. Adapt. Control Signal Process. 24(1), 83–94 (2010)
61. J. Zhou, T. Chen, L. Xiang, Chaotic lag synchronization of coupled delayed neural networks
and its applications in secure communication. Circuits, Syst., Signal Process. 24(5), 599–613
(2005)
62. J. Zhou, T. Chen, L. Xiang, Robust synchronization of delayed neural networks based on
adaptive control and parameters identification. Chaos, Solitons Fractals 27(4), 905–913 (2006)
63. W. Zhou, H. Lu, C. Duan, Exponential stability of hybrid stochastic neural networks with mixed
time delays and nonlinearity. Neurocomputing 72(13), 3357–3365 (2009)
64. W. Zhou, D. Tong, Y. Gao, C. Ji, H. Su, Mode and delay-dependent adaptive exponential syn-
chronization in pth moment for stochastic delayed neural networks with Markovian switching.
IEEE Trans. Neural Netw. Learn. Syst. 23(4), 662–668 (2012)
65. Q. Zhu, J. Cao, Adaptive synchronization under almost every initial data for stochastic neural
networks with time-varying delays and distributed delays. Commun. Nonlinear Sci. Numer.
Simul. 16(4), 2139–2159 (2011)
66. S. Zhu, Y. Shen, Passivity analysis of stochastic delayed neural networks with Markovian
switching. Neurocomputing 74(10), 1754–1761 (2011)
Chapter 5
Stability and Synchronization
of Neutral-Type Neural Networks

When the states of a system are decided not only by states of the current time and the
past time but also by the derivative of the past states, the system can be called a neutral
system. The problems of stability and synchronization of neutral neural networks play
an important role in the same issues of neural networks. In this chapter, robust stability
of neutral neural networks is first discussed. Adaptive synchronization and projective
synchronization of neutral neural networks are investigated in the following two
sections. Exponential synchronization and exponential stability for neural networks
of neutral type are discussed respectively, in the fourth and sixth section. The issues
of adaptive synchronization and adaptive asymptotic synchronization are addressed
in the fifth and seventh sections.

5.1 Robust Stability of Neutral-Type NN with Mixed


Time Delays

5.1.1 Introduction

During last decades, neural networks (NNs) have attracted great attention due to their
extensive application in pattern recognition, signal processing, image processing,
quadratic optimization, associative memories, and many other fields. A variety of
models of NNs have been widely studied such as Hopfield neural networks (HNNs),
cellular neural networks (CNNs), Cohen-Grossberg neural networks (CGNNs), etc.
In some physical systems, mathematic models are described by functional dif-
ferential equations of neutral type, which depends on the delays of state and state
derivative. The practicality of neutral-type models recently attracts researchers to
investigate the stability and stabilization of the neutral-type neural networks [5, 22,
23, 30, 35, 36, 39, 43, 82].

© Springer-Verlag Berlin Heidelberg 2016 153


W. Zhou et al., Stability and Synchronization Control of Stochastic
Neural Networks, Studies in Systems, Decision and Control 35,
DOI 10.1007/978-3-662-47833-2_5
154 5 Stability and Synchronization of Neutral-Type Neural Networks

Time delays undoubtedly present complex and unpredictable behaviors in prac-


tice. The existence of time delays has an influence on the stability of a neural net-
work by bringing oscillatory and instability characteristics. However, the neutral-type
neural networks discussed in [22, 30, 35, 36, 39, 43, 82] just consider the discrete
delays, and a few researchers studied the distributed delays of the neutral-type neural
networks [5, 23]. Although the signal propagation is sometimes instantaneous and
can be modeled with discrete delays, it may also be distributed during a certain time
period. Hence, in this section, we would take the distributed delays into consideration.
On the other hand, several adequate conditions, either delay-dependent or delay-
independent condition, have been proposed to guarantee the asymptotic
[5, 23, 35, 36, 39, 43, 82], exponential [22, 30], or robust stability [5, 36] for
delayed neural networks. The weight coefficients of neurons rely on certain resis-
tance and capacitance values, which are subject to uncertainties practically. It is
significant to guarantee the robust stability of neural networks.
In this section, we aim to study the robust stability for neural networks of neutral
type with both discrete and distributed time delays. Based on Lyapunov-Krasovskii
stability theory and linear matrix inequality (LMI) technique, we give several new
criteria that can guarantee the stability of the system. In the mean time, some numeri-
cal examples are also given to demonstrate the applicability of our proposed stability
criteria.

5.1.2 Problem Formulation

Consider the following neural networks of neutral type, which involve both discrete
and distributed time-varying delays, described by a differential equation:


n
u̇ i (t) = − (ci + Δci (t))u i (t) + (ai j + Δai j (t))g j (u j (t))
j=1

n
+ (bi j + Δbi j (t))g j (u j (t − σ(t)))
j=1

n
+ (di j + Δdi j (t))u̇ j (t − σ(t))
j=1

n  t
+ (ei j + Δei j (t)) g j (u j (s))ds + Ji ,
j=1 t−τ (t)
5.1 Robust Stability of Neutral-Type NN with Mixed Time Delays 155

or equivalently,

d[u(t) − (D + ΔD(t))u(t − σ(t))] = − (C + ΔC(t))u(t) + (A + ΔA(t))g(u(t))

+ (B + ΔB(t))g(u(t − σ(t)))
 t 
+ (E + ΔE(t)) g(u(s))ds + J dt,
t−τ (t)
(5.1)
where n is the number of neurons in the indicated neural network, u(t) = [u 1 (t),
u 2 (t), . . . , u n (t)]T ∈ Rn is the neuron state vector at time t, J = [J1 , J2 , . . . , Jn ]T ∈
Rn is the external constant input, g(u(t)) = [g1 (u 1 (t)), g2 (u 2 (t)), . . . , gn (u n (t))]T
∈ Rn is the activation function, and the delay σ(t) and τ (t) are time-varying contin-
uous functions that satisfy

0 ≤ σ(t) ≤ σ, σ̇(t) ≤ μ, 0 ≤ τ (t) ≤ τ (5.2)

where σ, τ , and μ are constants. C = diag{c1 , c2 , . . . , cn } is a positive definite


diagonal matrix, A = (ai j )n×n , B = (bi j )n×n , D = (di j )n×n , E = (ei j )n×n ∈ Rn×n
are the interconnection matrices representing the weight coefficients of the neurons,
ΔC(t), ΔA(t), ΔB(t), ΔD(t), and ΔE(t) are parametric uncertainties defined by

ΔC(t) = H1 F1 (t)G 1 , ΔA(t) = H2 F2 (t)G 2 ,


ΔB(t) = H3 F3 (t)G 3 , ΔD(t) = H4 F4 (t)G 4 , (5.3)
ΔE(t) = H5 F5 (t)G 5 ,

where Hi , G i (i = 1, 2, 3, 4) are known constant real matrices with appropriate


dimensions, and Fi (t) are unknown time-varying matrices satisfying

FiT (t)Fi (t) ≤ I, (i = 1, 2, 3, 4). (5.4)

Throughout this section, we always assume that the activation functions are
bounded and satisfy Lipschitz condition, i.e., (H) There exist constants L i > 0
such that gi (x) − gi (y) ≤ L i x − y, for any x, y ∈ Rn , i = 1, 2, . . . , n.
It is obvious that the condition (H) infers that the activation functions are continu-
ous but not always monotonic. Consequently, system (5.1) has at least an equilibrium
point according to the Brouwer’s fixed-point theorem.
Suppose u ∗ = [u ∗1 , u ∗2 , . . . , u ∗n ]T ∈ Rn is an equilibrium point of system (5.1),
let x(t) = u(t) − u ∗ , and then system (5.1) can be rewritten as

d[x(t) − (D + ΔD(t))x(t − σ(t))]



= − (C + ΔC(t))x(t) + (A + ΔA(t)) f (x(t))
(5.5)
 t 
+ (B + ΔB(t)) f (x(t − σ(t))) + (E + ΔE(t)) f (x(s))ds dt
t−τ (t)
156 5 Stability and Synchronization of Neutral-Type Neural Networks

where x(t) = [x1 (t), x2 (t), . . . , xn (t)]T is the state vector of the transformed system,
f (x(t)) = [ f 1 (x1 (t)), f 2 (x2 (t)), . . . , f n (xn (t))]T with f i (xi (t)) = gi (xi (t)+u i∗ )−
gi (u i∗ ), (i = 1, 2, . . . , n).
The equilibrium point of system (5.1) is robustly stable if and only if the origin
of system (5.5) is robustly stable. As a result, we could only consider robust stability
of system (5.5).

5.1.3 Main Results Proofs

In order to obtain robust stability criterion of delayed Hopfield neural networks (5.5),
firstly, we deal with the asymptotic stability criterion for the nominal system of (5.5).
If ΔC = 0, ΔA = 0, ΔB = 0, ΔD = 0, and ΔE = 0, then the system (5.5) can be
rewritten as

d[x(t) − Dx(t − σ(t))]


  t 
= −C x(t) + A f (x(t)) + B f (x(t − σ(t))) + E f (x(s))ds dt. (5.6)
t−τ (t)

Theorem 5.1 Suppose (H) holds, for any delay σ(t), τ (t) satisfying (5.2), then sys-
tem (5.6) is asymptotically stable, if there exist positive definite matrices P, Q 2 , Q 3 , R
and positive scalars ε1 , ε2 , ε3 , ε4 , ε5 , ε6 such that the following LMI holds:
⎡ ⎤
Π11 Π12 0 0 0 0 PA PB PE
⎢ ∗ Π22 0 D P A D P B D P E 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ −Q 3 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −ε3 I 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ −ε4 I 0 0 ⎥
⎢ 0 0 ⎥ < 0, (5.7)
⎢ ∗ ∗ ∗ ∗ ∗ −ε6 I 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ −ε1 I 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −ε2 I 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −ε5 I

where
Π11 = −PC − C T P T + τ Q 1 + Q 2 + σ 2 Q 3 + L R L + L(ε1 + ε3 )L ,
Π12 = 21 (D PC + C T P T D T ),
Π22 = L(ε2 + ε4 )L − (1 − μ)Q 2 .

Proof Consider the following Lyapunov-Krasovskii functional method for system


(5.6) as

V (t) = V1 (t) + V2 (t) + V3 (t) + V4 (t) + V5 (t),


5.1 Robust Stability of Neutral-Type NN with Mixed Time Delays 157
0 t
where V1 (t) = [x(t) − Dx(t − σ)]T P[x(t) − Dx(t − σ)], V2 (t) = −τ t+s x T (η)
t 0 t
Q 1 x(η)dηds, V3 (t) = t−σ(t) x T (s)Q 2 x(s)ds, V4 (t) = σ −σ t+β x T (s)Q 3 x(s)ds
t
dβ, V5 (t) = t−σ(t) f T (x(s))R f (x(s))ds, where P, Q 2 , Q 3 , and R are the positive
definite solutions to the inequality (5.7), and Q 1 ≥ 0 is defined by Q 1 := τ L(ε5 +
ε6 )L, where L is a symmetric matrix.
The time derivative of Lyapunov-Krasovskii functional method V (t) along the
trajectories of system (5.6) is derived as

V̇1 (t) = 2 x(t) − Dx(t − σ(t))]T P − C x(t) + A f (x(t))
t 
+ B f (x(t − σ(t))) + E t−τ (t) f (x(s))ds
= x T (t)(−PC − C T P T )x(t) + x T (t − σ(t))(D PC + C T P T D T )
x(t) + 2x T (t)P A f (x(t)) + 2x T (t)P B f (x(t − σ(t)))
−2x T (t − σ(t))D P A f (x(t)) − 2x T (t − σ(t))D P B
t
f (x(t − σ(t))) + 2x T (t)P E t−τ (t) f (x(s))ds
t
−2x T (t − σ(t))D P E t−τ (t) f (x(s))ds.

From Lemma 1.13, we have the following inequalities:

2x T (t)P A f (x(t)) + 2x T (t)P B f (x(t − σ(t))) − 2x T (t − σ(t))D P A f (x(t))


t
−2x T (t − σ(t))D P B f (x(t − σ(t))) + 2x T (t)P E t−τ (t) f (x(s))ds
t
−2x (t − σ(t))D P E t−τ (t) f (x(s))ds
T

≤ x T (t)P Aε−11 A P x(t) + f (x(t))ε1 f (x(t))


T T T
−1 T T
+x (t)P Bε2 B P x(t) + f T (x(t − σ(t)))ε2 f (x(t − σ(t)))
T

+x T (t − σ(t))D P Aε−1 3 A P D x(t − σ(t)) + f (x(t))ε3 f (x(t))


T T T T
−1
+x T (t − σ(t))D P Bε4 B T P T D T x(t − σ(t))
+ f T (x(t − σ(t)))ε4 f (x(t − σ(t))) + x T (t)P Eε−1 T T
5 E P x(t)
T 
t t
+ t−τ (t) f (x(s))ds ε5 t−τ (t) f (x(s))ds
+x T (t − σ(t))D P Eε−1
6 E P D x(t − σ(t)).
T T T

Then,

V̇1 (t) ≤ x T (t)(−PC − C T P T + P Aε−1 −1 T T


1 A P + P Bε2 B P
T T

+ P Eε−1 −1 T T T
5 E P )x(t) + x (t − σ(t))(D P Aε3 A P D
T T T

+ D P Bε−1 −1 T T T
4 B P D + D P Eε6 E P D )x(t − σ(t))
T T T

+ x T (t − σ(t))(D PC + C T P T D T )x(t) + f T (x(t))(ε1 + ε3 ) f (x(t))


+ f T (x(t − σ(t)))(ε2 + ε4 ) f (x(t − σ(t)))
 t T  t 
+ f (x(s))ds (ε5 + ε6 ) f (x(s))ds
t−τ (t) t−τ (t)
≤ x (t)(−PC − C P +
T T T
P Aε−1
1 A
T
P + P Bε−1
T T T
2 B P
158 5 Stability and Synchronization of Neutral-Type Neural Networks

+ P Eε−1 −1 T T T
5 E P )x(t) + x (t − σ(t))(D P Aε3 A P D
T T T

+ D P Bε−1 −1 T T T
4 B P D + D P Eε6 E P D )x(t − σ(t))
T T T

+ x T (t − σ(t))(D PC + C T P T D T )x(t) + x T (t)L(ε1 + ε3 )L x(t)


+ x T (t − σ(t))L(ε2 + ε4 )L x(t − σ(t))
 t T  t 
+ f (x(s))ds (ε5 + ε6 ) f (x(s))ds . (5.8)
t−τ (t) t−τ (t)

From Lemma 1.20, we have


 t T  t 
f (x(s))ds (ε5 + ε6 ) f (x(s))ds
t−τ (t) t−τ (t)
 t
≤ τ (t) f T (x(s))(ε5 + ε6 ) f (x(s))ds (5.9)
t−τ (t)
 t
≤τ x T (s)L(ε5 + ε6 )L x(s)ds.
t−τ (t)

By differential formula, we could infer


 t
V̇2 (t) = τ x T (t)Q 1 x(t) − x T (s)Q 1 x(s)ds, (5.10)
t−τ

V̇3 (t) = x T (t)Q 2 x(t) − (1 − σ̇(t))x T (t − σ(t))Q 2 x(t − σ(t))


(5.11)
≤ x T (t)Q 2 x(t) − (1 − μ)x T (t − σ(t))Q 2 x(t − σ(t)),
 t
V̇4 (t) = σ 2 x T (t)Q 3 x(t) − σ x T (s)Q 3 x(s)ds
t−σ
 t T  t  (5.12)
≤ σ x (t)Q 3 x(t) −
2 T
x(s)ds Q3 x(s)ds ,
t−σ t−σ

V̇5 (t) = f T (x(t))R f (x(t)) − f T (x(t − σ(t)))R f (x(t − σ(t)))


(5.13)
≤ x T (t)L R L x(t).

Substituting (5.8)–(5.13) into V̇ (t), we get V̇ (t) ≤ ξ T (t)Σξ(t), where


t T
ξ(t) = x(t) x(t − σ(t)) t−σ(t) x(s)ds ,
⎡ ⎤
Σ11 Π12 0
Σ = ⎣ ∗ Σ22 0 ⎦ ,
∗ ∗ −Q 3
5.1 Robust Stability of Neutral-Type NN with Mixed Time Delays 159

and

Σ11 = −PC − C T P T + τ Q 1 + Q 2 + σ 2 Q 3 + L R L + L(ε1 + ε3 )L


+ P Aε−1 −1 T T −1 T T
1 A P + P Bε2 B P + P Eε5 E P ,
T T

Σ22 = L(ε2 + ε4 )L − (1 − μ1 )Q 2 + D P Aε−1 T T T


3 A P D
+ D P Bε−1 −1 T T T
4 B P D + D P Eε6 E P D .
T T T

Hence, V̇ (t) < 0 when Σ < 0. Using Lemma 1.21, Σ < 0 is equivalent to
Π < 0, where
⎡ ⎤
Π11 Π12 0 0 0 0 PA PB PE
⎢ ∗ Π22 0 D P A D P B D P E 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ −Q 3 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −ε3 I 0 0 0 0 0 ⎥
⎢ ⎥
Π =⎢ ⎢ ∗ ∗ ∗ ∗ −ε4 I 0 0 0 0 ⎥ ⎥.
⎢ ∗ ∗ ∗ ∗ ∗ −ε6 I 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ −ε1 I 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −ε2 I 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −ε5 I

Based on Lyapunov-Krasovskii stability theorem, the nominal system (5.6) is


asymptotically stable. This completes the proof of Theorem 5.1.
If σ(t) = σ for the nominal system (5.6), the following corollary can be easily
deduced.
Corollary 5.2 Suppose (H) holds, for given σ, system (5.5) with σ(t) = σ is asymp-
totically stable, if there exist positive definite matrices P, Q 2 , Q 3 , R and positive
scalars ε1 , ε2 , ε3 , ε4 , ε5 , ε6 such that the following LMI holds:
⎡ ⎤
Π11 Π12 0 0 0 0 PA PB PE
⎢ ∗ Λ22 0 D P A D P B D P E 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ −Q 3 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −ε3 I 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ −ε4 I 0 0 ⎥
⎢ 0 0 ⎥ < 0, (5.14)
⎢ ∗ ∗ ∗ ∗ ∗ −ε6 I 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ −ε1 I 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −ε2 I 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −ε5 I

where Λ22 = L(ε2 + ε4 )L − Q 2 , Π11 and Π12 are defined in Theorem 5.1.
Remark 5.3 For the case of σ(t) = σ, the delay-dependent stability criterion for
neural networks of neutral type has been studied in [5, 23], which is less conservative
than delay-independent criteria when the delay is small.
160 5 Stability and Synchronization of Neutral-Type Neural Networks

If D = 0 for the nominal system (5.6), the following corollary can be easily
deduced.

Corollary 5.4 Suppose (H) holds, system (5.6) with D = 0 is asymptotically


stable, if there exist positive definite matrices P, Q 2 , Q 3 , R and positive scalars
ε1 , ε2 , ε5 , ε6 such that the following LMI holds:
⎡ ⎤
Γ11 0 0 PA PB PE
⎢ ∗ Γ22 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ −Q 3 0 0 0 ⎥
⎢ ⎥<0 (5.15)
⎢ ∗ ∗ ∗ −ε1 I 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ −ε2 I 0 ⎦
∗ ∗ ∗ ∗ ∗ −ε5 I

where Γ11 = −PC − C T P T + τ Q 1 + Q 2 + σ 2 Q 3 + L R L + Lε1 L, Γ22 = Lε2 L −


(1 − μ1 )Q 2 .

Remark 5.5 For the case of D = 0, the system is no longer a neutral-type neural
network. Neural networks with mixed time delays, which is not neutral type, have
been widely discussed, see e.g., [55].

If E = 0 for the nominal system (5.6), the following corollary can be easily
deduced.

Corollary 5.6 Suppose (H) holds, system (5.6) with E = 0 is asymptotically


stable, if there exist positive definite matrices P, Q 2 , Q 3 , R and positive scalars
ε1 , ε2 , ε5 , ε6 such that the following LMI holds:
⎡ ⎤
Φ11 Π12 0 0 0 PA PB
⎢ ∗ Π22 0 D P A D P B 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ −Q 3 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −ε3 I 0 0 ⎥
⎢ 0 ⎥ < 0, (5.16)
⎢ ∗ ∗ ∗ ∗ −ε4 I 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ −ε1 I 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ −ε2 I

where Φ11 = −PC − C T P T + Q 2 + σ 2 Q 3 + L R L + L(ε1 + ε3 )L .

Remark 5.7 For the case of E = 0, the system just concludes the discrete time delay.
Neutral-type neural networks with discrete time delays have been widely discussed,
see e.g., [5, 22, 23, 30, 35, 36, 39, 43, 82].

Theorem 5.8 Suppose (H) holds, for any delay σ(t), τ (t) satisfying (5.2), the sys-
tem (5.5) is robust stable, if there exist positive definite matrices P, Q 2 , Q 3 , R and
positive scalars ε1 , ε2 , ε3 , ε4 , ε5 , ε6 , φ1 , φ2 , φ3 , φ4 , φ5 , φ6 , φ7 , φ8 such that the fol-
lowing LMI holds:
5.1 Robust Stability of Neutral-Type NN with Mixed Time Delays 161
⎡ ⎤
Ξ11 Ξ12 0 0 0 0 PA PB P E Ξ1A
⎢ ∗ Π22 0 D P A D P B D P E 0 0 0 Ξ2 A ⎥
⎢ ⎥
⎢ ∗ ∗ −Q 3 0 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ Ξ44 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ Ξ55 0 0 0 0 0 ⎥
⎢ ⎥ < 0, (5.17)
⎢ ∗ ∗ ∗ ∗ ∗ Ξ66 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ Ξ77 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ ∗ Ξ88 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ Ξ99 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ΞAA

where Ξ11 = Π11 + φ1 G 1T G 1 , Ξ12 = Π12 + φ2 G 1T G 4T G 4 G 1 , Ξ44 = −ε3 I +


φ3 G 2T G 4T G 4 G 2 , Ξ55 = −ε4 I + φ4 G 3T G 4T G 4 G 3 , Ξ66 = −ε6 I + φ5 G 5T G 4T G 4 G 5 ,
Ξ77 = −ε1 I +φ6 G 2 G 2 , Ξ88 = −ε2 I +φ  7 G 3 G 3, Ξ99 = −ε5 I +φ8 G 5 G 5 , Ξ1A =
T T T

P H1 H4 P H1 0 0 0 P H2 P H3 P H5 , Ξ2 A = 0 0 H4 P H2 H4 P H3 H4 P H5 0
0 0 , Ξ A A = diag (−φ1 I, −φ2 I, −φ3 I, −φ4 I, −φ5 I, −φ6 I, −φ7 I, −φ8 I ).

Proof Replacing A, B, C, D, E in (5.7) with A + H2 F2 (t)G 2 , B + H3 F3 (t)G 3 ,


C + H1 F1 (t)G 1 , D + H4 F4 (t)G 4 , E + H5 F5 (t)G 5 , respectively, we can infer that
(5.7) is equivalent to the following inequality:

Ω = Ω0 + Δ11
T
F4 (t)F2 (t)Δ12 + Δ12
T
F2T (t)F4T (t)Δ11 + Δ21
T
F4 (t)F3 (t)Δ22
+ Δ22
T
F3T (t)F4T (t)Δ21 + Δ31
T
F4 (t)F5 (t)Δ32 + Δ32
T
F5T (t)F4T (t)Δ31
+ Δ41
T
F2 (t)Δ42 + Δ42
T
F2T (t)Δ41 + Δ51
T
F3 (t)Δ52 + Δ52
T
F3T (t)Δ51
+ Δ61
T
F5 (t)Δ62 + Δ62
T
F5T (t)Δ61
< 0,
(5.18)
where
⎡ ⎤
Ω11 Ω12 0 0 0 0 PA PB PE
⎢ ∗ Π22 0 D P A D P B D P E 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ −Q 3 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −ε3 I 0 0 0 0 0 ⎥
⎢ ⎥
Ω0 = ⎢
⎢ ∗ ∗ ∗ ∗ −ε4 I 0 0 0 0 ⎥⎥,
⎢ ∗ ∗ ∗ ∗ ∗ −ε6 I 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ −ε1 I 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −ε2 I 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −ε5 I

with

Ω11 = Π11 − P H1 F1 (t)G 1 − G 1T F1T (t)H1T P T ,


162 5 Stability and Synchronization of Neutral-Type Neural Networks

1
Ω12 = Π12 + [H4 F4 (t)G 4 P H1 F1 (t)G 1 +
2
G 1T F1T (t)H1T P T G 4T F4T (t)H4T ],
 
Δ11 = 0 H4 P H2 0 0 0 0 0 0 0 ,
 
Δ12 = 0 0 0 G 4 G 2 0 0 0 0 0 ,
 
Δ21 = 0 H4 P H3 0 0 0 0 0 0 0 ,
 
Δ22 = 0 0 0 0 G 4 G 3 0 0 0 0
 
Δ31 = 0 H4 P H5 0 0 0 0 0 0 0 ,
 
Δ32 = 0 0 0 0 0 G 4 G 5 0 0 0 ,
 
Δ41 = P H2 0 0 0 0 0 0 0 0 ,
 
Δ32 = 0 0 0 0 0 0 G 2 0 0 ,
 
Δ51 = P H3 0 0 0 0 0 0 0 0 ,
 
Δ52 = 0 0 0 0 0 0 0 G 3 0 ,
 
Δ61 = P H5 0 0 0 0 0 0 0 0 ,
 
Δ62 = 0 0 0 0 0 0 0 0 G 5 .

From Lemma 1.22, we have Ω11 < Π11 + φ−1 1 P H1 H1 P + φ1 G 1 G 1 , Ω12 <
T T T
1 −1
Π12 + 2 [φ2 H4 P H1 H1 P H4 + φ2 G 1 G 4 G 4 G 1 ],
T T T T T

Ω < Ω0 + φ−1 T T −1 T
3 Δ11 Δ11 + φ3 Δ12 Δ12 + φ4 Δ21 Δ21 + φ4 Δ22 Δ22
T

+ φ−1 −1 T −1 T
5 Δ31 Δ31 + φ5 Δ32 Δ32 + φ6 Δ41 Δ41 + φ6 Δ42 Δ42 + φ7 Δ51 Δ51
T T T

+ φ7 Δ52
T
Δ52 + φ−1
8 Δ61 Δ61 + φ8 Δ62 Δ62 .
T T

Thus (5.18) holds if and only if (5.17) holds. This completes the proof.

If E + ΔE(t) = 0 for the system (5.5), the following corollary can be easily
deduced.
5.1 Robust Stability of Neutral-Type NN with Mixed Time Delays 163

Corollary 5.9 Suppose (H) holds, for given σ, system (5.5) with E + ΔE(t) = 0 is
asymptotically stable, if there exist positive definite matrices P, Q 2 , Q 3 , R and pos-
itive scalars ε1 , ε2 , ε3 , ε4 , φ1 , φ2 , φ3 , φ4 , φ6 , φ7 such that the following LMI holds:
⎡ ⎤
X11 Ξ12 0 0 0 PA P B X1A
⎢ ∗ Π22 0 D P A D P B 0 0 X2 A ⎥
⎢ ⎥
⎢ ∗ ∗ −Q 3 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ Ξ44 0 0 0 0 ⎥
⎢ ⎥<0
⎢ ∗ ∗ ∗ ∗ Ξ55 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ Ξ77 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ ∗ Ξ88 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ ∗ XAA

where X11 = −PC − C T P T + Q 2 + σ 2 Q 3 + LR L + L(ε1 + ε3 )L + φ1 G1T G 1 ,


X1A = P H1 H4 P H1 0 0 P H2 P H3 , X2 A = 0 0 H4 P H2 H4 P H3 0 0 ,
X A A = diag (−φ1 I, −φ2 I, −φ3 I, −φ4 I, −φ6 I, −φ7 I ) .

Remark 5.10 For the case of E + ΔE(t) = 0, the system just concludes the discrete
time delay. The robust stability of neutral-type neural networks with discrete time
delays has been studied previously, see e.g., [5, 36].

5.1.4 Numerical Example

Example 5.11 An example illustrating the result in Theorem 5.1 is given below.
Consider a delayed neural network in (5.6) with parameters as
         
1.2 0 1.2 0 1.5 0 0.2 0 0.8 0
C= ,A= ,B = ,D = ,E = .
0 1.8 0 1.2 0 1.5 0 0.2 0 0.8

 In this example, the activation function is assumed to satisfy (H) with L =


0.2 0
.
0 0.2
Then, by the Matlab LMI toolbox, the maximum allowed delay satisfying the
LMI in (5.6) can be calculated as τ = 0.679 and the system is stable for any σ > 0.
By choosing x(0) = [ 0.6 0.7 ]T , system (5.6) with τ = 0.679, σ = 1 is asymp-
totically stable, as shown in Fig. 5.1.
164 5 Stability and Synchronization of Neutral-Type Neural Networks

Fig. 5.1 The solution


trajectory of Example 5.11

Example 5.12 Consider the delayed neural network in (5.5) with the following para-
meters:
     
1.2 0 −1.2 0.2 −0.1 0.2
C= ,A= ,B = ,
0 1.8 0.26 0.1 0.2 0.1
     
−0.2 0 0.8 0 0.5 0
D= ,E = ,L = ,
0.2 −0.1 0 0.8 0 0.8
     
0.5 0 0.2 0.1 0.1 0
H1 = , H2 = , H3 = ,
0 0.5 0 0.1 0 0.1
     
0.2 0.1 0.1 0 0.2 0.1
H4 = , H5 = , E1 = ,
0.3 0 0 0.1 0.2 0.1
     
0.3 0.2 0.3 0.2 0.1 0.2
E2 = , E3 = , E4 = ,
0.5 0.4 0.1 0.3 0.3 0.3
 
0.2 0
E5 = .
0 0.2

Let τ = 0.3, σ = 1, μ1 = 0.5, and by applying Theorem 5.8, there exists a


feasible solution which guarantees the robust stability of the system (5.5):
   
7.2482 −1.6753 0.4691 0.7241
Q2 = , Q3 = ,
 −1.6753 14.0989  0.7241 2.9979
11.8673 −0.8115 0.8904 0.9289
P= ,R = ,
−0.8115 19.1026 0.9289 2.4763

ε1 = 9.9193, ε2 = 4.3784, ε3 = 2.9376, ε4 = 1.4568, ε5 = 9.2442,


ε6 = 9.2442, φ1 = 24.4595, φ2 = 9.6003, φ3 = 8.1301, φ4 = 8.1856,
φ5 = 9.4180, φ6 = 8.3283, φ7 = 7.3000, φ8 = 11.9921.
5.1 Robust Stability of Neutral-Type NN with Mixed Time Delays 165

Fig. 5.2 The solution


trajectory of Example 5.12

By choosing x(0) = [ 10 10 ]T , and F1 (t) = F2 (t) = F3 (t) = F5 (t) = sin(t),


system (5.5) with τ = 0.3, σ = 1 is asymptotically stable, as shown in Fig. 5.2.

5.1.5 Conclusions

In this section, we have dealt with the problem of robust stability for neural networks
of neutral type, which includes discrete and distributed time-varying delays. A linear
matrix inequality (LMI) approach has been developed to solve the problem addressed.
The stability criteria have been derived in terms of LMI method, and some numerical
examples have been given to demonstrate the applicability of our proposed stability
criteria.

5.2 Adaptive Synchronization of Neutral-Type SNN


with Markovian Switching

5.2.1 Introduction

As it is well known, the stability and synchronization of neural networks can be


applied to create chemical and biological systems, secure communication systems,
information science, image processing, and so on. The practice of neural network
models attracts researchers to investigate the stability and stabilization of the neutral-
type neural network (see e.g., [17, 18, 51]). The synchronization for the drive system
and the response system is achieved when the states of the error system can eventually
approach zero. In recent years, different control methods, like adaptive control, are
derived to achieve different concepts of synchronization, such as generalized syn-
166 5 Stability and Synchronization of Neutral-Type Neural Networks

chronization [9, 11, 62], lag synchronization [48, 64], phase synchronization [54],
H∞ synchronization [12, 13], etc.
By utilizing adaptive control method, the parameters of the system need to be
estimated and the control law needs to be updated when the neural networks evolve.
In the past decade, much attention has been devoted to the research of the adaptive
synchronization for neural networks (see e.g., [14, 19, 24, 44, 57, 68, 78, 85] and
the references therein).
Also, it is noted that many neural networks may experience abrupt changes in
their structure and parameters caused by some phenomena such as component fail-
ures or repairs, changing subsystem interconnections, and abrupt environmental dis-
turbances. In this situation, there exist finite modes in the neural networks, and the
modes may be switched (or jumped) from one to another at different times. This kind
of systems is widely studied by many scholars, see e.g., [14, 19, 24, 33, 41, 42, 47,
50, 57, 60, 61, 63, 68, 81, 85] and the references therein.
Recently, the stability and synchronization of neutral-type systems which depend
on the delays of state and state derivative have attracted a lot of attention (see e.g.,
[16, 20, 32, 34, 38, 75] and the references therein) since the fact that some physical
systems in the real world can be described by neutral-type models. For example,
a neutral differential delay equation encountered by Rubanik [38] in his study of
vibrating masses attached to an elastic bar is

ẍ(t) + ω12 x(t) = ε f 1 (x(t), ẋ(t), y(t), ẏ(t)) + γ1 ÿ(t − τ )


ÿ(t) + ω22 x(t) = ε f 2 (x(t), ẋ(t), y(t), ẏ(t)) + γ2 ẍ(t − τ )

where [x(t) y(t)]T denotes the position vector of vibrating masses, f i (·, ·, ·, ·)(i =
1, 2) denote the relative functions, ω1 and ω2 are the position coefficients, ε is the
structure coefficient, and γ1 and γ2 are the vibrating acceleration coefficients.
The work by Li [20] investigated the global robust stability for stochastic inter-
val neural networks with continuously distributed delays of neutral type using the
Lyapunov-Krasovskii functional method. Some new stability criteria are presented
in terms of linear matrix inequality (LMI). Park [34] proposed a dynamic feedback
control scheme to achieve the synchronization for neural networks with neutral type
and derives a simple and efficient criterion in terms of LMIs for synchronization.
The work by Zhang et al. [75] considered the problems of robust global exponen-
tial synchronization for neutral-type complex networks with time-varying delayed
couplings and obtained some sufficient conditions that ensure the complex networks
to be robustly globally exponentially synchronized using the Lyapunov functional
method and some properties of Kronecker product. Also, Kolmanovskii et al. [16]
not only established a fundamental theory for neutral-type stochastic differential
delay equations with Markovian switching but also discussed some important prop-
erties of the solutions, e.g., boundedness and stability. The problem of almost sure
asymptotic stability was considered by Mao et al. [32] for neutral-type stochastic
differential delay equations with Markovian switching.
In this section, we study the problem of adaptive synchronization for neutral-type
stochastic delay neural networks (NSDNN, for short) with Markovian switching
5.2 Adaptive Synchronization of Neutral-Type SNN … 167

parameters. By M-matrix approach and the stochastic analysis method, some


synchronization criteria are obtained to ensure the adaptive almost sure asymptot-
ical synchronization, exponential synchronization in pth moment, and almost sure
exponential synchronization, respectively, for the neutral-type stochastic neural net-
works. Numerical examples are provided to illustrate the effectiveness of the results
obtained in this section.
Comparing with existing works on stochastic neural networks with Markovian
switching in [85], the differential of the time-delay state is considered in this section.
And by contrast of the models used in [20, 34] for neutral-type time-delay neural
networks, our work takes into account the stochastic disturbance and the Markovian
switching parameters. So the model proposed in this section possesses universality.

5.2.2 Problem Formulation and Preliminaries

Consider the following neutral-type neural networks called drive system and repre-
sented by the compact form as follows:

d[x(t) − D(r (t))x(t − τ )]


= [−C(r (t))x(t) + A(r (t)) f (x(t)) (5.19)
+ B(r (t)) f (x(t − τ )) + E(r (t))]dt,

where x(t) = [x1 (t), x2 (t), . . . , xn (t)]T ∈ Rn is the state vector associated with n
neurons, f (·) denotes the neuron activation functions, τ represents the transmission
delay. For t ≥ 0, we denote r (t) = i, A(r (t)) = Ai , B(r (t)) = B i , C(r (t)) = C i ,
D(r (t)) = D i , and E(r (t)) = E i , respectively. In neural network (5.19), ∀i ∈
S, Ai = (a ijk )n×n and B i = (bijk )n×n are the connection weight and the delay
connection weight matrices, respectively; C i = diag{c1i , c2i , . . . , cni } is a diagonal
matrix and has positive and unknown entries cij > 0; D i is called the neutral-type
parameter matrix; and E i = [E 1i , E 2i , . . . , E ni ]T ∈ Rn is the constant external input
vector.
The initial condition of system (5.19) is given in the following form:

x(s) = ξx (s), s ∈ [−τ , 0], r (0) = i 0 (5.20)

for any ξx ∈ L2F0 ([−τ , 0]; Rn ).


For the drive system (5.19), the response system is

d[y(t) − D(r (t))y(t − τ )]


= [−C(r (t))y(t) + A(r (t)) f (y(t)) + B(r (t)) f (y(t − τ ))
(5.21)
+ E(r (t)) + U (r (t))]dt
+ σ(t, r (t), y(t) − x(t), y(t − τ ) − x(t − τ ))dω(t),
168 5 Stability and Synchronization of Neutral-Type Neural Networks

where y(t) = [y1 (t), y2 (t), . . . , yn (t)]T ∈ Rn is the state vector of the response
system (5.21), U i = U (r (t)) = [u i1 (t), u i2 (t), . . . , u in (t)]T ∈ Rn is a control input
vector, ω(t) = [ω1 (t), ω2 (t), . . . , ωn (t)]T is an n-dimensional Brownian motion
defined on a complete probability space (Ω, F, P) with a natural filtration {Ft }t≥0
(i.e., Ft = σ{ω(s) : 0 ≤ s ≤ t} is a σ-algebra) and is independent to the Markovian
process {r (t)}t≥0 , and σ : R+ × S × Rn × Rn → Rn×n is the noise intensity matrix.
It is known that external random fluctuation and other probabilistic causes often lead
to this type of stochastic perturbations.
The initial condition of system (5.21) is given in the following form:

y(s) = ξ y (s), s ∈ [−τ , 0], r (0) = i 0 (5.22)

for any ξ y ∈ L2F0 ([−τ , 0]; Rn ).


Let e(t) = y(t) − x(t) be the error vector. From the drive system and the response
system, the error system can be written as follows:

d[e(t) − D(r (t))eτ (t)]


= [−C(r (t))e(t) + A(r (t))g(e(t)) + B(r (t))g(eτ (t)) (5.23)
+ U (r (t))]dt + σ(t, r (t), e(t), eτ (t))dω(t),

where eτ (t) = e(t − τ ), g(e(t)) = f (x(t) + e(t)) − f (x(t)).


The initial condition of system (5.23) is given in the following form:

e(s) = ξ(s) = ξ y (s) − ξx (s), s ∈ [−τ , 0], r (0) = i 0 , (5.24)

with e(0) = 0.
For systems (5.19), (5.21), and (5.23), the following assumptions are needed.

Assumption 5.13 For the vector f (·), there exists a constant L > 0 such that

| f (x) − f (y)| ≤ L|x − y|

for any x, y ∈ Rn and f (0) ≡ 0.

Assumption 5.14 For the matrix σ(t, i, u(i), v(i)), there exist two positives H1 and
H2 such that

trace[σ T (t, r (t), u(t), v(t))σ(t, r (t), u(t), v(t))]


≤ H1 |u(t)|2 + H2 |v(t)|2

for all (t, r (t), u(t), v(t)) ∈ R+ × S × Rn × Rn and σ(t, r0 , 0, 0) ≡ 0.


5.2 Adaptive Synchronization of Neutral-Type SNN … 169

Assumption 5.15 For the neutral-type parameter matrices D i (i = 1, 2, . . . , S),


there exists positive κi ∈ (0, 1), such that

ρ(D i ) = κi ≤ κ,

where κ = max κi and ρ(D i ) is the spectral radius of matrix D i .


i∈S

The following concepts are necessary in this section.


Definition 5.16 ([33]) The trivial solution e(t; ξ, i 0 ) of the error system (5.23) is
said to be almost surely asymptotically stable if

P( lim |e(t; ξ, i 0 )| = 0) = 1 (5.25)


t→∞

for any initial data ξ ∈ C([−τ , 0]; Rn ).

Definition 5.17 ([33]) The trivial solution e(t; ξ, i 0 ) of the error system (5.23) is
said to be exponentially stable in pth moment if

1
lim sup log(E|e(t; ξ, i 0 )| p ) < 0,
t→∞ t
p
for any initial data ξ ∈ LF0 ([−τ , 0]; Rn ), where p ≥ 2, p ∈ Z (the set of integral
numbers). When p = 2, it is said to be exponentially stable in mean square.
It is said to be almost surely exponentially stable if

1
lim sup log(|e(t; ξ, i 0 )|) < 0 a.s.
t→∞ t

for any initial data ξ ∈ C([−τ , 0]; Rn ).

Now we describe the problem to solve in this section as follows.


Target Description: For the neutral-type drive neural networks (5.19) with
Markovian switching parameters and the initial condition (5.20) and the neutral-
type response neural networks (5.21) with Markovian switching parameters, sto-
chastic disturbance and the initial condition (5.22), using Lyapunov functional,
M-matrix, and the stochastic analysis methods, to obtain some criteria of adap-
tive almost sure asymptotical synchronization, exponential synchronization in pth
moment and almost sure exponential synchronization, respectively.
Then, we present a preliminary lemma which plays an important role in the proof
of the main theorems.

Remark 5.18 It can be obtained from the proof of Lemma 1.10 in [32] that if we
replace (H1) by the following (H1) , then the results (R1) and (R2) are also satisfied.
170 5 Stability and Synchronization of Neutral-Type Neural Networks

(H1) Given any initial data {x(θ) : −τ ≤ θ ≤ 0} = ξ ∈ CbF0 ([−τ , 0]; Rn ),


Eq. (1.4) has a unique solution denoted by x(t; ξ, i 0 ) on t ≥ 0. Moreover, both
f¯(t, r (t), x(t), y(t)) and ḡ(t, r (t), x(t), y(t)) are locally bounded in (x, y) while
uniformly bounded in (t, r (t)), i.e., for any h > 0, there is a K h > 0, such that

| f¯(t, r (t), x(t), y(t))| ∨ |ḡ(t, r (t), x(t), y(t))| ≤ K h ,

for all t ≥ 0, r (t) ∈ S, and x, y ∈ Rn with |x| ∨ |y| ≤ h.

5.2.3 Main Results

Almost Sure Asymptotical Synchronization


In this subsection, we give a criterion of adaptive almost sure asymptotical synchro-
nization for the drive system (5.19) and the response system (5.21).

Theorem 5.19 Let Assumptions 5.13–5.15 hold, and the error system (5.23) has a
unique solution denoted by e(t; ξ, i 0 ) on t ≥ 0 for any initial data {e(θ) : −τ ≤ θ ≤
0} = ξ ∈ CbF0 ([−τ , 0]; Rn ) with e(0) = 0.
Assume that M := −diag{η, η, . . . , η } − Γ is a nonsingular M-matrix, where
  
S

η = −2ς + α + β, (5.26)

with α = max(ρ(Ai ))2 , β = max(ρ(B i ))2 and ς is a nonnegative real number, and
i∈S i∈S

2γ − κ − C02 − 2L 2 − H1 − H2 ≥ 0,
2
(5.27)

where γ = min min cij , C0 = max |C i |.


i∈S 1≤ j≤n i∈S
m = [m, m, . . . , m ]T and [q1 , q2 , . . . , q S ]T := M −1 −
Let m > 0, −
→ →
m . We choose
  
S
the feedback control U i (i ∈ S) with the update law as

U i = (diag{k1 (t), k2 (t), . . . , kn (t)} − ς I )(e(t) − D i eτ ) (5.28)

with

k̇ j = −α j qi (e j − D i (eτ ) j )2 , (5.29)

where α j > 0( j = 1, 2, . . . , n) are arbitrary constants.


5.2 Adaptive Synchronization of Neutral-Type SNN … 171

Then the noise-perturbed response system (5.21) can be adaptive almost surely
asymptotically synchronized with the time-delay neural network (5.19).

Proof Under Assumptions 5.13–5.14, and the existence of e(t; ξ, i 0 ), it can be seen
that f¯(t, r (t), e(t), eτ (t)), ḡ(t, r (t), e(t), eτ (t)), and D̄(eτ (t), r (t)) satisfy (H1) ,
(H2), and (H3), where

f¯(t, r (t), e(t), eτ (t))


= − C(r (t))e(t) + A(r (t))g(e(t)) + B(r (t))g(eτ (t))
+ U (r (t)),
ḡ(t, r (t), e(t), eτ (t)) = σ(t, r (t), e(t), eτ (t)),
D̄(eτ (t), r (t)) = D(r (t))eτ (t).

Now, by Theorem 2.10 in [33], [q1 , q2 , . . . , q S ]T 0, i.e., all elements of M −1 −



m
are positive. For each i ∈ S, choose a nonnegative function


n
1 2
V (t, i, x) = qi |x|2 + k . (5.30)
αj j
j=1

Then Eq. (1.18) holds.


Computing LV (t, i, e, eτ ) along the trajectory of error system (5.23), we have

LV (t, i, e, eτ )
= Vt (t, i, e − D i eτ ) + Vx (t, i, e − D i eτ )[−C i e + Ai g(e)
+ B i g(eτ ) + U i ]
1 (5.31)
+ trace[σ T (t, i, e, eτ )Vx x (t, i, e − D i eτ )σ(t, i, e, eτ )]
2
S
+ γik V (t, k, e − D i eτ ),
k=1

while


n
2 
n
Vt (t, i, e − D i eτ ) = k j k̇ j = −2 k j qi (e j − D i (eτ ) j )2 ,
αj
j=1 j=1

Vx (t, i, e − D i eτ ) = 2qi (e − D i eτ )T , Vx x (t, i, e − D i eτ ) = 2qi .


172 5 Stability and Synchronization of Neutral-Type Neural Networks

Using Assumption 5.14 and (5.29), one can obtain that

LV (t, i, e, eτ )

n
≤−2 k j qi (e j − D i (eτ ) j )2
j=1

+ 2qi (e − D i eτ )T [−C i e + Ai g(e) + B i g(eτ )


+ (diag{k1 (t), k2 (t), . . . , kn (t)} − ς I )(e − D i eτ )]
+ qi trace[σ T (t, i, e, eτ )σ(t, i, e, eτ )]
(5.32)

S
+ γik qk |e − D eτ |
i 2

k=1
≤ 2qi (e − D i eτ )T [−C i e + Ai g(e) + B i g(eτ )]
+ qi (H1 |e|2 + H2 |eτ |2 )
 
S
+ −2ςqi + γik qk |e − D i eτ |2 .
k=1

Now, from Assumption 5.15, we can infer that

− 2qi (e − D i eτ )T C i e
= − 2qi e T C i e + 2qi eτT D i T C i e
(5.33)
≤ − 2qi γ|e|2 + qi (eτT D i T D i eτ + e T C i T C i e)
≤ qi (−2γ + C02 )|e|2 + qi κ2 |eτ |2 ,

Using Assumption 5.13, we have

2qi (e − D i eτ )T Ai g(e)
≤ qi (e − D i eτ )T Ai Ai T (e − D i eτ ) + qi g T (e)g(e) (5.34)
≤ qi L |e| + qi α|e − D eτ | ,
2 2 i 2

and

2qi (e − D i eτ )T B i g(eτ )
≤ qi (e − D i eτ )T B i B i T (e − D i eτ ) + qi g T (eτ )g(eτ ) (5.35)
≤ qi L 2 |eτ |2 + qi β|e − D i eτ |2 .
5.2 Adaptive Synchronization of Neutral-Type SNN … 173

Substituting (5.33)–(5.35) into (5.32) yields

LV (t, i, e, eτ )
≤ − qi (2γ − C02 − L 2 − H1 )|e|2 + qi (κ2 + L 2 + H2 )|eτ |2
 
S
+ qi (−2ς + α + β) + γik qk |e − D i eτ |2
k=1 (5.36)
 

S
= − aqi |e|2 + bqi |eτ |2 + ηqi + γik qk |e − D i eτ |2
k=1
≤ − aqi |e| + aqi |eτ | − m|e −
2 2
D i eτ |2 − (a − b)qi |eτ |2 ,


S
where m = −[ηqi + γik qk ] by (q1 , q2 , . . . , q S )T = M −1 −

m , a = 2γ − C02 −
k=1
L 2 − H1 , b = κ2 + L2 +
H2 .
From (5.27) and b > 0, we can see that a > 0 and a − b ≥ 0. So the inequality
(5.36) implies

LV (t, i, e, eτ )
(5.37)
≤ γ(t) − Q(t, e) + Q(t − τ , eτ ) − W (e − D̄(eτ , i)),

where γ(t) = 0, Q(t, x) = aqi |x|2 and W (e − D̄(eτ , i)) = m|e − D i eτ |2 .


Therefore, the inequality (1.17) holds and by Lemma 1.10, the error system
(5.23) is adaptive almost surely asymptotically stable, and hence the noise-perturbed
response system (5.21) can be adaptive almost surely asymptotically synchronized
with the drive time-delay neural network (5.19). This completes the proof.

Remark 5.20 In Theorem 5.19, we have assumed that M := −diag{η, η, . . . , η }−Γ


  
S
is a nonsingular M-matrix, where η = −2ς + α + β, α = max(ρ(Ai ))2 , β =
i∈S
max(ρ(B i ))2 , and ς ≥ 0. Here, ς is an adjustable parameter to ensure that M is an
i∈S
M-matrix for selected networks parameters Ai and B i and the generator Γ . So we
add −ς I into the feedback control update law (5.28) such that the noise-perturbed
response system (5.21) can be adaptive almost surely asymptotically synchronized
with the drive time-delay neural network (5.19). This designing method of the con-
trol law is similarly used in the subsequent discussion of the adaptive exponential
synchronization in pth moment and the adaptive almost sure exponential synchro-
nization for NSDNN with Markovian switching.

Remark 5.21 The M-matrix method used in Theorem 5.19 to study the adaptive
synchronization for neutral-type stochastic neural networks with Markovian switch-
ing is rarely occurred and very different to those, such as the LMI technology. This
174 5 Stability and Synchronization of Neutral-Type Neural Networks

M-matrix method can be used in researching the stability and synchronization of the
complex networks.

Remark 5.22 On the stochastic synchronization problem for neural networks with
time-varying time delay and Markovian jump parameters, Wu et al. in [63] pro-
posed a new method of sampled data combining stochastic Lyapunov functional,
designed a mode-independent state feedback sampling controller, and gave some
delay-dependent criteria to ensure the stochastic synchronization using LMI technol-
ogy. The sampling controller designed in [63] is more suitable for real applications.
Comparing this section with [63], the model that includes variation of the time-
delay state and the stochastic disturbance is more general and the synchronization
conditions obtained by M-matrix method may be checked easily.

Exponential Synchronization in pth Moment


In this subsection, we give a criterion of adaptive exponential synchronization in
pth moment for the drive system (5.19) and the response system (5.21). First, we
establish a general result which can be applied widely.
Theorem 5.23 Let x(t) be a solution of the NSDDE (1.4) and ξ(s) be the initial
condition. Assume that there exists a function V (t, i, x) ∈ C2,1 (R+ × S × Rn ; R+ )
and positive constants p ≥ 1, μ1 , λ1 , and λ2 such that

λ2 < λ1 , (5.38)

μ1 |x| p ≤ V (t, i, x), (5.39)

LV (t, i, x, xτ ) ≤ −λ1 |x| p + λ2 |xτ | p , (5.40)

for all t ≥ 0, i ∈ S and x(t) ∈ Rn . Then

1
lim sup log(E|x(t)| p ) ≤ −υ, (5.41)
t→∞ t

λ1 −λ2
where υ = μ1 (1−κ) p > 0, i.e., the system (1.4) is exponential stable in pth moment.

Proof For the function V (t, i, x), applying the generalized Itô formula (see Lemma
3.1 in [16]) and using above conditions, we obtain that

EV (t, r (t), x(t) − D̄(xτ (t), r (t)))


= EV (0, r (0), x(0) − D̄(xτ (0), r (0)))
 t (5.42)
+E LV (s, r (s), x(s), xτ (s))ds.
0
5.2 Adaptive Synchronization of Neutral-Type SNN … 175

By Lemma 4.5 in [16], we have

μ1 (1 − κ) p−1 |x(t)| p
(5.43)
≤ μ1 |x(t) − D̄(xτ (t), r (t))| p + μ1 (1 − κ) p−1 κ|xτ (t)| p .

So

μ1 (1 − κ) p−1 E|x(t)| p
(5.44)
≤ μ1 E|x(t) − D̄(xτ (t), r (t))| p + μ1 (1 − κ) p−1 κE|xτ (t)| p .

Using Eqs. (5.39), (5.40) and (5.42), it is obvious that

μ1 E|x(t) − D̄(xτ (t), r (t))| p


≤ V (0, r (0), x(0) − D̄(xτ (0), r (0)))
 t
+E LV (s, r (s), x(s), xτ (s))ds
0 (5.45)
≤ V (0, r (0), x(0) − D̄(xτ (0), r (0)))
 t
+ E (−λ1 |x(s)| p + λ2 |xτ (s)| p )ds,
0

while
 t
|xτ (s)| p ds
0
 t−τ  t
= |x(s)| p ds ≤ |x(s)| p ds
−τ −τ
 0  t
= |x(s)| p ds + |x(s)| p ds
−τ 0
 t
≤ τ max |ξ(s)| + p
|x(s)| p ds.
τ ≤s≤0 0

Substituting (5.45) into (5.44), one has

μ1(1 − κ)^{p−1}E|x(t)|^p
≤ V(0, r(0), x(0) − D̄(x_τ(0), r(0))) + λ2 τ max_{−τ≤s≤0}|ξ(s)|^p
+ E ∫_0^t (λ2 − λ1)|x(s)|^p ds + μ1(1 − κ)^{p−1}κE|x_τ(t)|^p. (5.46)

This yields

μ1(1 − κ)^{p−1} sup_{0≤s≤t} E|x(s)|^p
≤ V(0, r(0), x(0) − D̄(x_τ(0), r(0))) + λ2 τ max_{−τ≤s≤0}|ξ(s)|^p
+ ∫_0^t (λ2 − λ1) sup_{0≤s≤t} E|x(s)|^p ds
+ μ1(1 − κ)^{p−1}κ sup_{0≤s≤τ} E|x_τ(s)|^p + μ1(1 − κ)^{p−1}κ sup_{τ≤s≤t} E|x_τ(s)|^p (5.47)
≤ V(0, r(0), x(0) − D̄(x_τ(0), r(0))) + (λ2 τ + μ1(1 − κ)^{p−1}κ) max_{−τ≤s≤0}|ξ(s)|^p
+ ∫_0^t (λ2 − λ1) sup_{0≤s≤t} E|x(s)|^p ds + μ1(1 − κ)^{p−1}κ sup_{0≤s≤t} E|x(s)|^p.

Then, we compute that

sup_{0≤s≤t} E|x(s)|^p
≤ (1/(μ1(1 − κ)^p))[V(0, r(0), x(0) − D̄(x_τ(0), r(0)))
+ (λ2 τ + μ1(1 − κ)^{p−1}κ) max_{−τ≤s≤0}|ξ(s)|^p
+ ∫_0^t (λ2 − λ1) sup_{0≤s≤t} E|x(s)|^p ds] (5.48)
= μ − υ ∫_0^t sup_{0≤s≤t} E|x(s)|^p ds,

where μ = (1/(μ1(1 − κ)^p))V(0, r(0), x(0) − D̄(x_τ(0), r(0))) + (λ2 τ/(μ1(1 − κ)^p)) max_{−τ≤s≤0}|ξ(s)|^p + (κ/(1 − κ)) max_{−τ≤s≤0}|ξ(s)|^p, and υ is defined in (5.41).
≤s≤0
It can be seen that μ and υ are two positive constants. Therefore, using Gronwall's inequality (see [81]), we have

sup_{0≤s≤t} E|x(s)|^p ≤ μ exp(−υt), (5.49)

thus

lim sup_{t→∞} (1/t) log E|x(t)|^p ≤ −υ < 0. (5.50)

From the above inequality and Definition 5.17, one can conclude that the system (1.4) is exponentially stable in pth moment. This completes the proof.
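The conclusion of Theorem 5.23 can also be illustrated numerically. The sketch below is a toy example of ours, not the book's: the scalar delayed SDE dx = (−a x + x_τ b)dt + c x dω and all parameter values are illustrative assumptions, simulated by the Euler–Maruyama scheme, and the pth moment is estimated over many paths:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, tau, p = 3.0, 0.5, 0.3, 1.0, 2       # placeholder coefficients
dt, T, n_paths = 0.01, 20.0, 2000
n_lag, n_steps = int(tau / dt), int(T / dt)

# constant initial history x(s) = 1 on [-tau, 0]
x = np.ones((n_paths, n_lag + n_steps + 1))
for k in range(n_steps):
    i = n_lag + k
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    x[:, i + 1] = x[:, i] + (-a * x[:, i] + b * x[:, i - n_lag]) * dt \
                  + c * x[:, i] * dW

t = np.arange(1, n_steps + 1) * dt
moment = np.mean(np.abs(x[:, n_lag + 1:]) ** p, axis=0)   # E|x(t)|^p
print("fitted decay rate (~ -upsilon):", np.polyfit(t, np.log(moment), 1)[0])
```

The fitted slope of log E|x(t)|^p against t approximates −υ in (5.41); for the stable placeholder parameters above it should come out strictly negative, consistent with exponential stability in pth moment.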

Now we are in a position to set up a criterion of adaptive exponential synchronization in pth moment for the drive system (5.19) and the response system (5.21). We will divide the discussion into two parts: (1) p ≥ 3 and (2) p = 2.
Theorem 5.24 Suppose that the error system (5.23) has a unique solution denoted by e(t; ξ, i_0) on t ≥ 0 for any initial data {e(θ) : −τ ≤ θ ≤ 0} = ξ ∈ C^b_{F0}([−τ, 0]; R^n) with e(0) = 0. Let Assumptions 5.13–5.15 hold, and p ≥ 3. Assume that

(U1 + U2 + U3 + U4) + (V1 + V2 + V3 + V4) < 0, (5.51)

where

U1 = −γ p(1 − κ)^{p−3} + 2γκ(1 − κ)^{p−3},
U2 = C0^2 + L^2 + (α + β)(1 + κ) + (p − 1)H1,
U3 = ((p − 2)/2)(1 + κ)^{p−1}(C0^2 + κ^2 + 2L^2 + (α + β)(1 + κ)^2 + (p − 1)(H1 + H2)),
U4 = −pς(1 − κ)^{p−1}, V1 = γ(p − 2)κ(1 − κ)^{p−3},
V2 = κ^2 + L^2 + (α + β)κ(1 + κ) + (p − 1)H2,
V3 = κU3, V4 = −κU4,

and ς is a nonnegative number.
The feedback controller U^i (i ∈ S) with the update law is chosen as

U^i = (diag{k1(t), k2(t), . . . , kn(t)} − ς I)(e(t) − D^i e_τ)

with

k̇_j = −(1/2)α_j p|e − D^i e_τ|^{p−2}(e_j − (D^i e_τ)_j)^2, (5.52)

where α_j > 0 (j = 1, 2, . . . , n) are arbitrary constants.
Then the noise-perturbed response system (5.21) can be adaptively exponentially synchronized in pth moment with the time-delay neural network (5.19).
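Condition (5.51) is a scalar inequality in the system constants, so it can be tested mechanically. The helper below is a sketch of ours that evaluates U1–U4 and V1–V4 exactly as defined in the theorem; the numbers in the sample call are placeholders, not constants taken from any example of this book:

```python
def theorem_5_24_condition(p, gamma, kappa, C0, L, alpha, beta, H1, H2, varsigma):
    """Return True if (U1+U2+U3+U4) + (V1+V2+V3+V4) < 0, i.e. (5.51) holds."""
    U1 = -gamma * p * (1 - kappa) ** (p - 3) \
         + 2 * gamma * kappa * (1 - kappa) ** (p - 3)
    U2 = C0**2 + L**2 + (alpha + beta) * (1 + kappa) + (p - 1) * H1
    U3 = ((p - 2) / 2) * (1 + kappa) ** (p - 1) * (
        C0**2 + kappa**2 + 2 * L**2
        + (alpha + beta) * (1 + kappa) ** 2 + (p - 1) * (H1 + H2))
    U4 = -p * varsigma * (1 - kappa) ** (p - 1)
    V1 = gamma * (p - 2) * kappa * (1 - kappa) ** (p - 3)
    V2 = kappa**2 + L**2 + (alpha + beta) * kappa * (1 + kappa) + (p - 1) * H2
    V3, V4 = kappa * U3, -kappa * U4
    return (U1 + U2 + U3 + U4) + (V1 + V2 + V3 + V4) < 0

# illustrative placeholder constants only:
print(theorem_5_24_condition(p=3, gamma=5.0, kappa=0.1, C0=1.0, L=0.3,
                             alpha=0.5, beta=0.5, H1=0.04, H2=0.02, varsigma=30.0))
```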

Proof For each i ∈ S, choose a nonnegative function

V(t, i, x) = |x|^p + Σ_{j=1}^n (1/α_j)k_j^2. (5.53)

Then (5.39) holds, where μ1 = 1. Furthermore,

V_t(t, i, e − D^i e_τ) = Σ_{j=1}^n (2/α_j)k_j k̇_j = −Σ_{j=1}^n k_j p|e − D^i e_τ|^{p−2}(e_j − (D^i e_τ)_j)^2,

V_x(t, i, e − D^i e_τ) = p|e − D^i e_τ|^{p−2}(e − D^i e_τ)^T,

V_{xx}(t, i, e − D^i e_τ) = p(p − 2)|e − D^i e_τ|^{p−4}[(e − D^i e_τ)^T]^2 + p|e − D^i e_τ|^{p−2}
≤ p(p − 1)|e − D^i e_τ|^{p−2}.

So from Assumption 5.14, we have

LV(t, i, e, e_τ)
≤ −Σ_{j=1}^n k_j p|e − D^i e_τ|^{p−2}(e_j − (D^i e_τ)_j)^2
+ p|e − D^i e_τ|^{p−2}(e − D^i e_τ)^T[−C^i e + A^i g(e) + B^i g(e_τ)
+ (diag{k1(t), k2(t), . . . , kn(t)} − ς I)(e − D^i e_τ)]
+ (1/2)p(p − 1)|e − D^i e_τ|^{p−2} trace[σ^T(t, i, e, e_τ)σ(t, i, e, e_τ)] (5.54)
≤ p|e − D^i e_τ|^{p−2}(e − D^i e_τ)^T[−C^i e + A^i g(e) + B^i g(e_τ) − ς(e − D^i e_τ)]
+ (1/2)p(p − 1)H1|e − D^i e_τ|^{p−2}|e|^2 + (1/2)p(p − 1)H2|e − D^i e_τ|^{p−2}|e_τ|^2.
2
Now, we can infer from Lemmas 4.3 and 4.5 in [16] that

−p|e − D^i e_τ|^{p−2} e^T C^i e
≤ −γ p|e − D^i e_τ|^{p−2}|e|^2
≤ γ p(−(1 − κ)^{p−3}|e|^{p−2} + κ(1 − κ)^{p−3}|e_τ|^{p−2})|e|^2
≤ −γ p(1 − κ)^{p−3}|e|^p + γ pκ(1 − κ)^{p−3}((2/p)|e|^p + ((p − 2)/p)|e_τ|^p) (5.55)
= [−γ p(1 − κ)^{p−3} + 2γκ(1 − κ)^{p−3}]|e|^p + γ(p − 2)κ(1 − κ)^{p−3}|e_τ|^p,

p|e − D^i e_τ|^{p−2} e_τ^T (D^i)^T C^i e
≤ p|e − D^i e_τ|^{p−2}((1/2)κ^2|e_τ|^2 + (1/2)C0^2|e|^2) (5.56)
= (1/2)pC0^2|e − D^i e_τ|^{p−2}|e|^2 + (1/2)pκ^2|e − D^i e_τ|^{p−2}|e_τ|^2,
2 2

p|e − D i eτ | p−2 (e − D i eτ )T Ai g(e)



1
≤ p|e − D i eτ | p−2 (e − D i eτ )T Ai (Ai )T (e − D i eτ )
2

1
+ g T (e)g(e)
2
  (5.57)
1 1
≤ p|e − D i eτ | p−2 (α + ακ + L 2 )|e|2 + (ακ + ακ2 )|eτ |2
2 2
1
= p(α + ακ + L 2 )|e − D i eτ | p−2 |e|2
2
1
+ p(ακ + ακ2 )|e − D i eτ | p−2 |eτ |2 ,
2
and

p|e − D^i e_τ|^{p−2}(e − D^i e_τ)^T B^i g(e_τ)
≤ p|e − D^i e_τ|^{p−2}((1/2)(e − D^i e_τ)^T B^i(B^i)^T(e − D^i e_τ) + (1/2)g^T(e_τ)g(e_τ))
≤ p|e − D^i e_τ|^{p−2}((1/2)(β + βκ)|e|^2 + (1/2)(βκ + βκ^2 + L^2)|e_τ|^2) (5.58)
= (1/2)p(β + βκ)|e − D^i e_τ|^{p−2}|e|^2 + (1/2)p(βκ + βκ^2 + L^2)|e − D^i e_τ|^{p−2}|e_τ|^2.
2
Using Lemma 4.5 in [16], one can obtain

−pς|e − D^i e_τ|^p ≤ −pς(1 − κ)^{p−1}|e|^p + pςκ(1 − κ)^{p−1}|e_τ|^p. (5.59)

On the other hand, by Young's inequality in [33], we have

|e − D^i e_τ|^{p−2}|e|^2
≤ (|e − D^i e_τ|^p)^{(p−2)/p}(|e|^p)^{2/p} ≤ ((p − 2)/p)|e − D^i e_τ|^p + (2/p)|e|^p
≤ ((p − 2)/p)(1 + κ)^{p−1}(|e|^p + κ|e_τ|^p) + (2/p)|e|^p (5.60)
= (((p − 2)/p)(1 + κ)^{p−1} + 2/p)|e|^p + ((p − 2)/p)κ(1 + κ)^{p−1}|e_τ|^p,

and

|e − D^i e_τ|^{p−2}|e_τ|^2
≤ (|e − D^i e_τ|^p)^{(p−2)/p}(|e_τ|^p)^{2/p}
≤ ((p − 2)/p)|e − D^i e_τ|^p + (2/p)|e_τ|^p
≤ ((p − 2)/p)(1 + κ)^{p−1}(|e|^p + κ|e_τ|^p) + (2/p)|e_τ|^p (5.61)
= ((p − 2)/p)(1 + κ)^{p−1}|e|^p + (((p − 2)/p)κ(1 + κ)^{p−1} + 2/p)|e_τ|^p.

So substituting Eqs. (5.55)–(5.61) into Eq. (5.54) yields

LV(t, i, e, e_τ)
≤ [−γ p(1 − κ)^{p−3} + 2γκ(1 − κ)^{p−3} + C0^2 + α + ακ + L^2 + β + βκ + (p − 1)H1
+ ((p − 2)/2)(1 + κ)^{p−1}(C0^2 + 2L^2 + κ^2 + (α + β)(1 + κ)^2 + (p − 1)(H1 + H2)) − pς(1 − κ)^{p−1}]|e|^p
+ [γ(p − 2)κ(1 − κ)^{p−3} + κ^2 + ακ + ακ^2 + βκ + βκ^2 + L^2 + (p − 1)H2
+ ((p − 2)/2)κ(1 + κ)^{p−1}(C0^2 + 2L^2 + κ^2 + (α + β)(1 + κ)^2 + (p − 1)(H1 + H2)) + pςκ(1 − κ)^{p−1}]|e_τ|^p (5.62)
= (U1 + U2 + U3 + U4)|e|^p + (V1 + V2 + V3 + V4)|e_τ|^p
= −λ1|e|^p + λ2|e_τ|^p,

where λ1 = −U1 − U2 − U3 − U4 and λ2 = V1 + V2 + V3 + V4. This shows that (5.40) holds.

Moreover, from (5.51), one can see that λ2 < λ1, i.e., (5.38) holds.
Therefore, by Theorem 5.23, the error system (5.23) is adaptively exponentially stable in pth moment, and hence the response system (5.21) can be exponentially synchronized in pth moment with the drive time-delay neural network (5.19). This completes the proof.

Next, we consider the case p = 2, for which we have the following result.

Theorem 5.25 Suppose that the error system (5.23) has a unique solution denoted by e(t; ξ, i_0) on t ≥ 0 for any initial data {e(θ) : −τ ≤ θ ≤ 0} = ξ ∈ C^b_{F0}([−τ, 0]; R^n) with e(0) = 0. Let Assumptions 5.13–5.15 hold, and p = 2. Assume that

Θ1 + Θ2 < 0, (5.63)

where Θ1 = −2γ + C0^2 + L^2 + (α + β)(1 + κ) − 2ς(1 − κ) + H1, Θ2 = κ^2 + L^2 + (α + β)κ(1 + κ) + 2ςκ(1 − κ) + H2, and ς is a positive constant.
The feedback controller U^i (i ∈ S) with the update law is chosen as

U^i = (diag{k1(t), k2(t), . . . , kn(t)} − ς I)(e(t) − D^i e_τ)

with

k̇_j = −α_j(e_j − (D^i e_τ)_j)^2, (5.64)

where α_j > 0 (j = 1, 2, . . . , n) are arbitrary constants.
Then the noise-perturbed response system (5.21) can be adaptively exponentially synchronized in pth moment (i.e., in mean square) with the time-delay neural network (5.19).
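A companion check for this mean-square case evaluates Θ1 and Θ2 directly; again this is a sketch of ours, with all inputs supplied by the user as placeholders:

```python
def theorem_5_25_condition(gamma, kappa, C0, L, alpha, beta, H1, H2, varsigma):
    """Return True if Theta1 + Theta2 < 0, i.e. condition (5.63) holds (p = 2)."""
    theta1 = (-2 * gamma + C0**2 + L**2 + (alpha + beta) * (1 + kappa)
              - 2 * varsigma * (1 - kappa) + H1)
    theta2 = (kappa**2 + L**2 + (alpha + beta) * kappa * (1 + kappa)
              + 2 * varsigma * kappa * (1 - kappa) + H2)
    return theta1 + theta2 < 0
```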

Proof For each i ∈ S, choose a nonnegative function

V(t, i, x) = |x|^2 + Σ_{j=1}^n (1/α_j)k_j^2. (5.65)

Then (5.39) holds, where μ1 = 1. Furthermore,

V_t(t, i, e − D^i e_τ) = Σ_{j=1}^n (2/α_j)k_j k̇_j = −Σ_{j=1}^n 2k_j(e_j − (D^i e_τ)_j)^2,

V_x(t, i, e − D^i e_τ) = 2(e − D^i e_τ)^T, V_{xx}(t, i, e − D^i e_τ) = 2.



Similar to the proof of Theorem 5.24, we have

LV(t, i, e, e_τ)
≤ −Σ_{j=1}^n 2k_j(e_j − (D^i e_τ)_j)^2
+ 2(e − D^i e_τ)^T[−C^i e + A^i g(e) + B^i g(e_τ)
+ (diag{k1(t), k2(t), . . . , kn(t)} − ς I)(e − D^i e_τ)]
+ trace[σ^T(t, i, e, e_τ)σ(t, i, e, e_τ)] (5.66)
≤ 2(e − D^i e_τ)^T[−C^i e + A^i g(e) + B^i g(e_τ) − ς(e − D^i e_τ)] + H1|e|^2 + H2|e_τ|^2
≤ (−2γ + C0^2 + L^2 + (α + β)(1 + κ) − 2ς(1 − κ) + H1)|e|^2
+ (κ^2 + L^2 + (α + β)κ(1 + κ) + 2ςκ(1 − κ) + H2)|e_τ|^2
= Θ1|e|^2 + Θ2|e_τ|^2.

Let Θ1 = −λ1 and Θ2 = λ2. Then (5.40) holds, and (5.38) also holds by (5.63).
Therefore, by Theorem 5.23, when p = 2, the error system (5.23) is adaptively exponentially stable in mean square, and hence the response system (5.21) can be exponentially synchronized in mean square with the drive time-delay neural network (5.19). This completes the proof.

Remark 5.26 In the proofs of Theorems 5.24 and 5.25, the Lyapunov function V(t, i, x) may also be taken as in the proof of Theorem 5.19. If so, we can obtain the corresponding results using the M-matrix method.

Almost Sure Exponential Synchronization


In this subsection, we will discuss the almost sure exponential synchronization for
NSDNNs based on the exponential stability in pth moment.
Assumption 5.27 For the feedback controller U^i (i ∈ S) in Theorems 5.24 and 5.25, there exists a constant k̄ such that

|k_i(t)| ≤ k̄, ∀i ∈ S.

Theorem 5.28 Suppose that the error system (5.23) has a unique solution denoted by e(t; ξ, i_0) on t ≥ 0 for any initial data {e(θ) : −τ ≤ θ ≤ 0} = ξ ∈ C^b_{F0}([−τ, 0]; R^n) with e(0) = 0. Let Assumption 5.27 hold, and p ≥ 2, μ > 0, υ > 0. If the solution e(t; ξ, i_0) of the error system (5.23) obeys

sup_{0≤s≤t} E|e(s)|^p ≤ μ exp(−υt), (5.67)

then

lim sup_{t→∞} (1/t) log(|e(t)|) ≤ −υ/p < 0, a.s. (5.68)

Therefore, the noise-perturbed response system (5.21) can be almost surely exponentially synchronized with the time-delay neural network (5.19).
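Relation (5.68) concerns a single sample path, so it can be checked from one simulated trajectory. The helper below is an assumption-laden sketch of ours: e_traj is taken to be an array of error states produced by any simulation loop (e.g., an Euler–Maruyama discretization of the error system (5.69)):

```python
import numpy as np

def sample_lyapunov_exponent(e_traj: np.ndarray, dt: float) -> float:
    """Fit (1/t) log|e(t)| on the trajectory tail; should approach -upsilon/p."""
    norms = np.linalg.norm(e_traj, axis=1)
    t = np.arange(1, len(norms) + 1) * dt
    tail = slice(len(norms) // 2, None)          # discard the transient
    return float(np.polyfit(t[tail], np.log(norms[tail] + 1e-300), 1)[0])
```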

Proof Fix any ξ ∈ C^b_{F0}([−τ, 0]; R^n) and write e(t; ξ, i_0) = e(t). For the error system

d[e − D^i e_τ] = [−C^i e + A^i g(e) + B^i g(e_τ) + U^i]dt + σ(t, i, e, e_τ)dω(t)
= [−C^i e + A^i g(e) + B^i g(e_τ) (5.69)
+ (diag{k1(t), k2(t), . . . , kn(t)} − ς I)(e − D^i e_τ)]dt + σ(t, i, e, e_τ)dω(t),

and each integer ψ ≥ 1, we have

sup_{0≤θ≤τ}[e(ψτ + θ) − D(r(ψτ + θ))e((ψ − 1)τ + θ)]
= e(ψτ) − D(r(ψτ))e((ψ − 1)τ)
+ sup_{0≤θ≤τ} ∫_{ψτ}^{ψτ+θ} [−C^i e(s) + A^i g(e(s)) + B^i g(e_τ(s)) (5.70)
+ (diag{k1(t), k2(t), . . . , kn(t)} − ς I)(e(s) − D^i e_τ(s))]ds
+ sup_{0≤θ≤τ} ∫_{ψτ}^{ψτ+θ} σ(s, r(s), e(s), e_τ(s))dω(s).

Together with the Hölder inequality [33], this yields

E[sup_{0≤θ≤τ} |e(ψτ + θ) − D(r(ψτ + θ))e((ψ − 1)τ + θ)|^p]
≤ 3^{p−1} E[|e(ψτ) − D(r(ψτ))e((ψ − 1)τ)|^p]
+ 3^{p−1} E[∫_{ψτ}^{(ψ+1)τ} |−C(r(s))e(s) + A(r(s))g(e(s)) + B(r(s))g(e_τ(s)) (5.71)
+ (diag{k1(t), k2(t), . . . , kn(t)} − ς I)(e(s) − D(r(s))e_τ(s))|ds]^p
+ 3^{p−1} sup_{0≤θ≤τ} E[∫_{ψτ}^{ψτ+θ} |σ(s, r(s), e(s), e_τ(s))|dω(s)]^p.

Now from (5.67), we have

E|e(t)|^p ≤ μ exp(−υt). (5.72)

Next, we estimate the three terms in (5.71), respectively.
For the first term in (5.71), using Lemma 4.3 in [16] and (5.72), one can obtain

3^{p−1}E[|e(ψτ) − D(r(ψτ))e((ψ − 1)τ)|^p]
≤ 3^{p−1}E[(1 + κ)^{p−1}(|e(ψτ)|^p + κ|e((ψ − 1)τ)|^p)] (5.73)
≤ (3(1 + κ))^{p−1}μ(e^{−υψτ} + κe^{−υ(ψ−1)τ}).

For the second term in (5.71), using the continuous-type Hölder inequality in [33], Assumptions 5.13 and 5.27, the discrete-type Hölder inequality in [33], Lemma 4.3 in [16], and (5.72), respectively, we can obtain

3^{p−1} E[∫_{ψτ}^{(ψ+1)τ} |−C(r(s))e(s) + A(r(s))g(e(s)) + B(r(s))g(e_τ(s))
+ (diag{k1(t), k2(t), . . . , kn(t)} − ς I)(e(s) − D(r(s))e_τ(s))|ds]^p
≤ 3^{p−1}τ^{p−1} E ∫_{ψτ}^{(ψ+1)τ} ((ϕ + αL)|e(s)| + βL|e_τ(s)| + (k̄ − ς)|e(s) − D(r(s))e_τ(s)|)^p ds
≤ (9τ)^{p−1} E ∫_{ψτ}^{(ψ+1)τ} ((ϕ + αL)^p|e(s)|^p + (βL)^p|e_τ(s)|^p (5.74)
+ (k̄ − ς)^p(1 + κ)^{p−1}(|e(s)|^p + κ|e_τ(s)|^p))ds
= (9τ)^{p−1}((ϕ + αL)^p + (k̄ − ς)^p(1 + κ)^{p−1}) ∫_{ψτ}^{(ψ+1)τ} E|e(s)|^p ds
+ (9τ)^{p−1}((βL)^p + (k̄ − ς)^p κ(1 + κ)^{p−1}) ∫_{ψτ}^{(ψ+1)τ} E|e_τ(s)|^p ds
≤ (9τ)^{p−1}((ϕ + αL)^p + (k̄ − ς)^p(1 + κ)^{p−1})τμe^{−υψτ}
+ (9τ)^{p−1}((βL)^p + (k̄ − ς)^p κ(1 + κ)^{p−1})τμe^{−υ(ψ−1)τ},

where ϕ = max_{i∈S} ρ(C^i).

For the third term in (5.71), making use of the Burkholder–Davis–Gundy inequality [33], Assumption 5.14, the continuous-type and discrete-type Hölder inequalities [33], and (5.72), respectively, we can get that there exists C_p > 0 such that

3^{p−1} sup_{0≤θ≤τ} E[∫_{ψτ}^{ψτ+θ} |σ(s, r(s), e(s), e_τ(s))|dω(s)]^p
≤ 3^{p−1}C_p E[∫_{ψτ}^{ψτ+θ} |σ(s, r(s), e(s), e_τ(s))|^2 ds]^{p/2}
≤ 3^{p−1}C_p θ^{p/2−1} E ∫_{ψτ}^{ψτ+θ} (H1|e(s)|^2 + H2|e_τ(s)|^2)^{p/2} ds (5.75)
≤ 3^{p−1}C_p θ^{p/2−1} 2^{p−1} E ∫_{ψτ}^{ψτ+θ} (H1^{p/2}|e(s)|^p + H2^{p/2}|e_τ(s)|^p)ds
≤ 6^{p−1}C_p θ^{p/2−1}τμ(H1^{p/2}e^{−υψτ} + H2^{p/2}e^{−υ(ψ−1)τ}).

Therefore, substituting Eqs. (5.73), (5.74), and (5.75) into (5.71), together with e^{−υψτ} ≤ e^{−υ(ψ−1)τ} (ψ ≥ 1), yields

E[sup_{0≤θ≤τ} |e(ψτ + θ) − D(r(ψτ + θ))e((ψ − 1)τ + θ)|^p]
≤ (3(1 + κ))^{p−1}μ(e^{−υψτ} + κe^{−υ(ψ−1)τ})
+ (9τ)^{p−1}((ϕ + αL)^p + (k̄ − ς)^p(1 + κ)^{p−1})τμe^{−υψτ} (5.76)
+ (9τ)^{p−1}((βL)^p + (k̄ − ς)^p κ(1 + κ)^{p−1})τμe^{−υ(ψ−1)τ}
+ 6^{p−1}C_p θ^{p/2−1}τμ(H1^{p/2}e^{−υψτ} + H2^{p/2}e^{−υ(ψ−1)τ})
≤ μ̄e^{−υ(ψ−1)τ} = μ̂e^{−υψτ},

where μ̄ = 3^{p−1}(1 + κ)^p μ + (9τ)^{p−1}((ϕ + αL)^p + (βL)^p + (k̄ − ς)^p(1 + κ)^p)τμ + 6^{p−1}C_p θ^{p/2−1}τμ(H1^{p/2} + H2^{p/2}), μ̂ = μ̄e^{υτ}, and μ̂ is a positive constant independent of ψ.
Thus, for any ε ∈ (0, υ),

P{ω : sup_{0≤θ≤τ} |e(ψτ + θ) − D(r(ψτ + θ))e((ψ − 1)τ + θ)|^p > e^{−(υ−ε)ψτ}} ≤ μ̂e^{−εψτ} (5.77)

for all ψ ≥ 1. The Borel–Cantelli lemma in [33] shows that, for almost all ω ∈ Ω,

sup_{0≤θ≤τ} |e(ψτ + θ) − D(r(ψτ + θ))e((ψ − 1)τ + θ)|^p ≤ e^{−(υ−ε)ψτ} (5.78)

holds for all but finitely many ψ. Hence, for almost all ω ∈ Ω, there exists an integer ψ0 = ψ0(ω) such that Eq. (5.78) holds whenever ψ ≥ ψ0.
This yields that, for almost all ω ∈ Ω,

|e − D(r(t))e_τ(t)| ≤ e^{−(υ−ε)(t−τ)/p}, whenever t ≥ ψ0 τ. (5.79)

Noting that |e − D(r(t))e_τ(t)| is finite on t ∈ [0, ψ0 τ], we observe that there is a finite random variable ζ = ζ(ω) such that, with probability 1,

|e − D(r(t))e_τ(t)| ≤ ζe^{−(υ−ε)t/p}, ∀t ≥ 0. (5.80)

Hence, with probability 1,

e^{(υ−ε)t/p}|e| ≤ ζ + κe^{(υ−ε)t/p}|e_τ|, ∀t ≥ 0, (5.81)

which implies

sup_{0≤s≤t}[e^{(υ−ε)s/p}|e(s)|]
≤ ζ + sup_{0≤s≤t}[κe^{(υ−ε)s/p}|e_τ(s)|]
≤ ζ + κe^{(υ−ε)τ/p}(‖ξ‖ + sup_{τ≤s≤t}[e^{(υ−ε)(s−τ)/p}|e_τ(s)|]) (5.82)
≤ ζ + κe^{(υ−ε)τ/p}(‖ξ‖ + sup_{0≤s≤t}[e^{(υ−ε)s/p}|e(s)|]), ∀t ≥ 0.

Since κe^{(υ−ε)τ/p} < 1, it follows that

sup_{0≤s≤t}[e^{(υ−ε)s/p}|e(s)|] ≤ (ζ + κe^{(υ−ε)τ/p}‖ξ‖)/(1 − κe^{(υ−ε)τ/p}), ∀t ≥ 0. (5.83)

This immediately yields

lim sup_{t→∞} (1/t) log(|e(t)|) ≤ −(υ − ε)/p, a.s. (5.84)

Letting ε → 0, we obtain

lim sup_{t→∞} (1/t) log(|e(t)|) ≤ −υ/p < 0, a.s. (5.85)

By Definition 5.17, the noise-perturbed response system (5.21) is almost surely exponentially synchronized with the time-delay neural network (5.19). This completes the proof.

Remark 5.29 The inequality |k_i(t)| ≤ k̄, ∀i ∈ S, in Assumption 5.27 is imposed to ensure that the term f̄(t, i, e, e_τ) = −C^i e + A^i g(e) + B^i g(e_τ) + (diag{k1(t), k2(t), . . . , kn(t)} − ς I)(e − D^i e_τ) in the error system (5.69) satisfies |f̄(t, i, e, e_τ)| ≤ K(|e| + |e_τ|). In fact, under the conditions of Theorems 5.24 and 5.25, the response system (5.21) can be exponentially synchronized in pth moment with the drive time-delay neural network (5.19) by the control law U^i = (diag{k1(t), k2(t), . . . , kn(t)} − ς I)(e(t) − D^i e_τ) (i ∈ S) with k̇_j = −(1/2)α_j p|e − D^i e_τ|^{p−2}(e_j − (D^i e_τ)_j)^2. In this case, the gains k_j (j = 1, . . . , n) eventually approach constant values and no longer need updating. Therefore, Assumption 5.27 is reasonable.

Remark 5.30 Although the conditions of Theorem 5.28 are stronger than those of Theorems 5.24 and 5.25, the exponential synchronization in pth moment cannot be deduced from the almost sure exponential synchronization for these systems. In fact, there is no natural relationship among the three kinds of synchronization, i.e., no one kind of synchronization implies any other.

5.2.4 Numerical Examples

In this subsection, two numerical examples are given to support the main results obtained above.

Example 5.31 Let the state space of the Markov chain {r(t)}_{t≥0} be S = {1, 2} with generator

Γ = [−1.2 1.2; 0.5 −0.5].

Consider a time-delay neural network (5.19) and its response system (5.21) with Markovian switching and the following network parameters:

A^1 = [2.7 8; 0.4 2.7], A^2 = [3 8; 0.3 2.5], B^1 = [−4.3 1; 0.7 −4.3],
B^2 = [−5 0.3; 0.3 −5], C^1 = [1 0; 0 0.9], C^2 = [1 0; 0 1],
D^1 = [0.1 0; 0 0.2], D^2 = [0.11 0; 0 0.19],

Fig. 5.3 Dynamic behavior of the drive system (5.19) (phase plot of x1 versus x2)

Fig. 5.4 Dynamic behavior of the response system (5.21) (phase plot of y1 versus y2)

f(x(t)) = 0.3 tanh(x(t)), τ = 1,
σ(t, 1, e(t), e(t − τ)) = (0.15e1(t − τ), 0.2e2(t))^T,
σ(t, 2, e(t), e(t − τ)) = (0.2e1(t), 0.1e2(t − τ))^T,

and w(t) is taken as Gaussian white noise. The dynamic behaviors of the drive system (5.19) and the response system (5.21) are given in Figs. 5.3 and 5.4, respectively, with the initial states x(t) = [−0.25, −0.35]^T, y(t) = [0.27, 0.30]^T, and k(t) = [−1, 1]^T, t ∈ [−1, 0].
It can be computed from the above parameters of the systems that L = 0.3, H1 = 0.2^2, H2 = 0.15^2, κ = 0.2, α = 20.1498, β = 28.0900, γ = 0.9, C0 = 1, q1 = 7.6515, and q2 = 10.5410.
We further take α1 = α2 = 1, ς = 25, and m = 10.
It can be checked that Assumptions 5.13–5.15 are satisfied, the matrix M in Theorem 5.19 is a nonsingular M-matrix, and (5.27) holds. So the response system (5.21) can be adaptively almost surely asymptotically synchronized with the drive system (5.19) by Theorem 5.19. The dynamic curve of the error system is shown in Fig. 5.5. The evolution of the gains k1 and k2 of the adaptive control law U(t) is given in Fig. 5.6. Figure 5.5 shows that the two stochastic neural networks (5.19) and (5.21) are synchronized.
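The constants quoted above can be reproduced directly from the mode matrices of this example; the following short NumPy sketch of ours does so:

```python
import numpy as np

A = [np.array([[2.7, 8.0], [0.4, 2.7]]), np.array([[3.0, 8.0], [0.3, 2.5]])]
B = [np.array([[-4.3, 1.0], [0.7, -4.3]]), np.array([[-5.0, 0.3], [0.3, -5.0]])]
C = [np.array([[1.0, 0.0], [0.0, 0.9]]), np.array([[1.0, 0.0], [0.0, 1.0]])]
D = [np.array([[0.1, 0.0], [0.0, 0.2]]), np.array([[0.11, 0.0], [0.0, 0.19]])]

rho = lambda M: max(abs(np.linalg.eigvals(M)))   # spectral radius
alpha = max(rho(Ai) ** 2 for Ai in A)            # 20.1498 (attained by A^1)
beta = max(rho(Bi) ** 2 for Bi in B)             # 28.0900 (attained by B^2)
kappa = max(rho(Di) for Di in D)                 # 0.2
gamma = min(min(np.diag(Ci)) for Ci in C)        # 0.9
print(alpha, beta, kappa, gamma)
```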

Example 5.32 Consider a time-delay neural network (5.19) and its response system
(5.21) with Markovian switching and network parameters as those in Example 5.31.

Fig. 5.5 The curve of the errors e1 and e2 in (5.23)

Fig. 5.6 The evolution graph of the gains k1 and k2 of the adaptive controller U(t) in (5.28)

Fig. 5.7 The curve of the errors e1 and e2 in (5.23) and the gains k1 and k2 in (5.28)

We take α1 = α2 = 1, ς = 25, and p = 3.
It can be checked that Assumptions 5.13–5.15 and (5.51) are satisfied. So the response system (5.21) can be adaptively exponentially synchronized in pth moment with the drive system (5.19) by Theorem 5.24. The dynamic curve of the error system and the evolution of the gains k1 and k2 of the adaptive control law U(t) are shown in Fig. 5.7, which shows that the two stochastic neural networks (5.19) and (5.21) are synchronized.

Furthermore, from Fig. 5.7, we can also see that the gains k1 and k2 of the adaptive control law U(t) are almost constant. In fact, it can be checked from the simulation that k1(1) = −1, k1(2) = −1.0001, k1(t) = −1.0002 (t ≥ 3), and k2(1) = 1, . . ., k2(6) = 0.99916, k2(t) = 0.99915 (t ≥ 7). The reason is that the error system (5.23) approaches stability, so the adaptive control law no longer needs updating.

5.2.5 Conclusion

In this section, the problem of adaptive synchronization has been studied for neutral-type stochastic neural networks with Markovian switching parameters, including adaptive almost sure asymptotic synchronization, adaptive exponential synchronization in pth moment, and adaptive almost sure exponential synchronization. By combining the M-matrix approach, the stochastic analysis method, and Lyapunov functionals, some sufficient conditions have been obtained to ensure each of these kinds of adaptive synchronization. Numerical examples have been given to demonstrate the applicability and effectiveness of the theoretical results obtained.

5.3 Mode-Dependent Projective Synchronization


of Neutral-Type DNN

5.3.1 Introduction

Due to the various complex dynamic properties of neural networks, some of the previous network models could not characterize the neural reaction process precisely; see [2, 46, 48, 49, 52, 80, 81]. It is obvious that, in the real world, the past state of the network affects the current state. Hence, there has been extensive research interest in the study of neutral-type neural networks; see [20, 34, 82, 84]. The stability and synchronization of these neural networks are worth studying since they can be applied in chemical and biological systems, image processing, information sciences, etc.
Most of the existing studies of neural networks have focused on complete synchronization and generalized synchronization. However, projective synchronization, because of the proportionality between its synchronized dynamical states, has recently started to attract researchers. According to [6], when chaotic systems exhibit invariance properties under a special type of continuous transformation, amplification and displacement of the attractor occur. This degree of amplification or displacement depends smoothly on the initial condition. Up to now, just a few articles have investigated the projective synchronization of neural networks. In [67], an integral sliding mode controller was presented to achieve the projective synchronization of different chaotic time-delayed neural networks. In [3], the projective synchronization of neural networks with mixed time-varying delays and parameter mismatch was discussed.
Random and abrupt variations, such as sudden environmental disturbances, component failures or repairs, and changing subsystem interconnections, may change the behaviors of dynamic systems. Mode-dependent neural networks can describe such variations by switching (or jumping) among different modes, governed by a Markovian chain. Therefore, the state space of the network contains continuous and discrete states: the dynamics of the network are continuous, while the Markovian jumping between different modes is discrete. Many researchers have already made much progress on mode-dependent neural networks, see [25–27, 29, 40, 56, 72].
Along with Markovian jumping modes, distributed time delay and noise perturbations should be considered in order to describe neural cells in the real world precisely. Distributed time delay reflects the distributed propagation of neural signals over a certain time period through a large number of parallel pathways with a variety of axon sizes and lengths, while noise perturbations describe fluctuations arising from the release of neurotransmitters and other probabilistic causes. In the last few years, distributed time delay and noise perturbation have been incorporated into various neural network models. Li [20] discussed the global robust stability of stochastic interval neural networks with continuously distributed delays of neutral type. Liu et al. [26] addressed the stability problem for a class of Markovian jumping neutral-type neural networks with mode-dependent mixed time delays. Tang and Fang [46] investigated adaptive synchronization in an array of chaotic neural networks with mixed delays and stochastically hybrid jumping coupling.
In this section, we aim to address the mode-dependent projective synchronization problem for a couple of stochastic neutral-type neural networks with distributed time delays. Using the Lyapunov stability theory and the adaptive control method, a sufficient projective synchronization criterion for these neutral-type neural networks is derived. A numerical simulation is given to demonstrate the feasibility and effectiveness of the theoretical result.

5.3.2 Problem Formulation and Preliminaries

Consider the following neutral-type neural network with parameter switching as the drive system:

d[x(t) − D^i x(t − τ(t))] = [−C^i x(t) + A^i f(x(t)) + B^i f(x(t − τ(t))) + E^i ∫_{t−τ(t)}^t f(x(s))ds]dt, (5.86)

where x(t) = [x1 (t), x2 (t), . . . , xn (t)]T ∈ Rn is the state vector associated with n
neurons, f (·) denotes the neuron activation functions, τ (t) represents the transmis-
sion delay with 0 ≤ τ (t) ≤ τ̄ , and τ̇ (t) ≤ τ̂ < 1 and τ̄ , τ̂ are positive constants.
For t ≥ 0, we denote i = r (t), Ai = A(r (t)), B i = B(r (t)), C i = C(r (t)),
D i = D(r (t)), and E i = E(r (t)), respectively. In neural network (5.86), ∀i ∈ S,
Ai = (a ijk )n×n and B i = (bijk )n×n are the connection weight and the delay connec-
tion weight matrices, respectively, C i = diag{c1i , c2i , . . . , cni } is a diagonal matrix
and has positive and unknown entries cij > 0, D i is called the neutral-type parameter
matrix, and E i = [E 1i , E 2i , . . . , E ni ]T ∈ Rn is the constant external input vector.
The initial condition of system (5.86) is given in the following form:

x(s) = ξ_x(s), s ∈ [−τ, 0], r(0) = i_0, (5.87)

for any ξ_x ∈ L^2_{F0}([−τ, 0]; R^n).


For the drive system (5.86), the response system is

d[y(t) − D^i y(t − τ(t))] = [−Ĉ^i y(t) + Â^i f(y(t)) + B̂^i f(y(t − τ(t))) + Ê^i ∫_{t−τ(t)}^t f(y(s))ds + U(t)]dt
+ σ(t, r(t), y(t) − λx(t), y(t − τ(t)) − λx(t − τ(t)))dω(t), (5.88)
where y(t) = [y1(t), y2(t), . . . , yn(t)]^T ∈ R^n is the state vector of the response system (5.88), U(t) = [u1(t), u2(t), . . . , un(t)]^T ∈ R^n is a control input vector, λ ≠ 0 is a scaling factor, ω(t) = [ω1, ω2, . . . , ωn]^T is an n-dimensional Brownian motion defined on a complete probability space (Ω, F, P) with a natural filtration {F_t}_{t≥0} (i.e., F_t = σ{ω(s) : 0 ≤ s ≤ t} is a σ-algebra) that is independent of the Markovian process {r(t)}_{t≥0}, and σ : R+ × S × R^n × R^n → R^{n×n} is the noise intensity matrix.
The initial condition of system (5.88) is given in the following form:

y(s) = ξ_y(s), s ∈ [−τ, 0], r(0) = i_0, (5.89)

for any ξ_y ∈ L^2_{F0}([−τ, 0]; R^n).


Let e(t) = y(t) − λx(t) be the projective synchronization error vector. The error neutral-type neural network can then be written as

d[e(t) − D^i e(t − τ(t))] = d[y(t) − D^i y(t − τ(t))] − λd[x(t) − D^i x(t − τ(t))]
= [−C̃^i y(t) + Ã^i f(y(t)) + B̃^i f(y(t − τ(t))) + Ẽ^i ∫_{t−τ(t)}^t f(y(s))ds
− C^i e(t) + A^i g(e(t)) + B^i g(e(t − τ(t))) + E^i ∫_{t−τ(t)}^t g(e(s))ds + U(t)]dt
+ σ(t, r(t), e(t), e(t − τ(t)))dω(t), (5.90)

where g(e(t)) = f(y(t)) − λf(x(t)), C̃^i = Ĉ^i − C^i, Ã^i = Â^i − A^i, B̃^i = B̂^i − B^i, and Ẽ^i = Ê^i − E^i.
The initial condition of system (5.90) is given in the following form:

e(s) = ξ(s) = ξ_y(s) − λξ_x(s), s ∈ [−τ, 0], r(0) = i_0. (5.91)

To prove our main results, the following assumptions are needed.

Assumption 5.33 For the function f (·) in (5.86), there exists a constant L > 0 such
that
| f (x) − f (y)| ≤ L|x − y|

for any x, y ∈ Rn and f (0) ≡ 0.

Assumption 5.34 For σ(·) in (5.88), there exist two positive constants H1 and H2 such that

trace[σ^T(t, r(t), u(t), v(t))σ(t, r(t), u(t), v(t))] ≤ H1|u(t)|^2 + H2|v(t)|^2

for all (t, r(t), u(t), v(t)) ∈ R+ × S × R^n × R^n, and σ(t, r_0, 0, 0) ≡ 0.

Assumption 5.35 For the neutral-type parameter matrices D^i (i = 1, 2, . . . , S), there are positive constants κ_i ∈ (0, 1) such that

ρ(D^i) = κ_i ≤ κ,

where κ = max_{i∈S} κ_i and ρ(D^i) is the spectral radius of the matrix D^i.

The preliminary lemmas that play an important role in the proofs of the main results (in particular, Lemmas 1.13 and 1.20) have been presented in Chap. 1.
5.3.3 Main Results and Proofs

Theorem 5.36 Under Assumptions 5.33–5.35, suppose that the adaptive controller and update law are chosen as

U(t) = (diag{k1(t), k2(t), . . . , kn(t)} − ς I)(e(t) − D^i e(t − τ(t))) (5.92)

with

k̇_j = −α_j(e_j − (D^i e_τ)_j)^2, (5.93)

where α_j > 0 (j = 1, 2, . . . , n) are arbitrary constants and ς is a positive constant.



The parameter update laws of the matrices C̃^i, Ã^i, B̃^i, and Ẽ^i are chosen as

c̃˙^i_j = γ_j(e_j − (D^i e_τ)_j)y_j, (5.94)

ã˙^i_{jl} = −α_{jl}(e_j − (D^i e_τ)_j)f_l, (5.95)

b̃˙^i_{jl} = −β_{jl}(e_j − (D^i e_τ)_j)(f_l)_τ, (5.96)

ẽ˙^i_{jl} = −ϕ_{jl}(e_j − (D^i e_τ)_j) ∫_{t−τ(t)}^t f_l ds. (5.97)

If there exists a positive constant q such that the following inequalities hold:

−2δ + C0^2 + L^2 + H1 + τ̄^2 L^2 + q < 0, (5.98)

κ^2 + L^2 + H2 − (1 − τ̂)q < 0, (5.99)

α + β + γ − 2ς < 0, (5.100)

where δ = min_{i∈S} min_{1≤j≤n} c^i_j, C0 = max_{i∈S}|C^i|, α = max_{i∈S}(ρ(A^i))^2, β = max_{i∈S}(ρ(B^i))^2, and γ = max_{i∈S}(ρ(E^i))^2, then the noise-perturbed response system (5.88) can be adaptively projectively synchronized with the time-delay neural network (5.86).
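Note that (5.98) gives an upper bound on q and (5.99) a lower bound, while (5.100) constrains ς separately, so checking the theorem amounts to testing an interval for nonemptiness. The helper below is a sketch of ours, with all inputs supplied by the user:

```python
def theorem_5_36_conditions(delta, C0, L, H1, H2, kappa, tau_bar, tau_hat,
                            alpha, beta, gamma_E, varsigma):
    """Return (feasible, (q_lower, q_upper)) for conditions (5.98)-(5.100)."""
    q_upper = 2 * delta - C0**2 - L**2 - H1 - tau_bar**2 * L**2   # from (5.98)
    q_lower = (kappa**2 + L**2 + H2) / (1 - tau_hat)              # from (5.99)
    controller_ok = alpha + beta + gamma_E - 2 * varsigma < 0     # (5.100)
    return controller_ok and q_lower < q_upper, (q_lower, q_upper)
```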

Proof Under Assumptions 5.33–5.34, it can be seen that f(·) and σ(·) satisfy the usual local Lipschitz condition and the linear growth condition.
Let D(y, i) = D^i y. Then from Assumption 5.35, we have

|D(x, i) − D(y, i)| ≤ κ_i|x − y|, ∀x, y ∈ R^n, D(0, i) = 0,

and

|D(y, i)| ≤ κ_i|y| ≤ κ|y|, ∀(y, i) ∈ R^n × S. (5.101)

For each i ∈ S, choose a nonnegative function

V(t, i, e) = V1(t, i, e) + V2(t, i, e) + V3(t, i, e) + V4(t, i, e), (5.102)

where

V1(t, i, e) = [e(t) − D^i e(t − τ(t))]^T[e(t) − D^i e(t − τ(t))],
V2(t, i, e) = Σ_{j=1}^n (1/α_j)k_j^2 + Σ_{j=1}^n (1/γ_j)(c̃^i_j)^2 + Σ_{j=1}^n Σ_{l=1}^n (1/α_{jl})(ã^i_{jl})^2 + Σ_{j=1}^n Σ_{l=1}^n (1/β_{jl})(b̃^i_{jl})^2 + Σ_{j=1}^n Σ_{l=1}^n (1/ϕ_{jl})(ẽ^i_{jl})^2,
V3(t, i, e) = τ̄ ∫_{−τ(t)}^0 ∫_{t+ε}^t g^T(e(s))g(e(s))ds dε,
V4(t, i, e) = ∫_{t−τ(t)}^t e^T(s)q e(s)ds.
Computing LV(t, i, e, e_τ) along the trajectory of the error system (5.90), we have

LV1(t, i, e) = 2[e(t) − D^i e(t − τ(t))]^T[−C̃^i y(t) + Ã^i f(y(t))
+ B̃^i f(y(t − τ(t))) + Ẽ^i ∫_{t−τ(t)}^t f(y(s))ds − C^i e(t) + A^i g(e(t))
+ B^i g(e(t − τ(t))) + E^i ∫_{t−τ(t)}^t g(e(s))ds
+ (diag{k1(t), k2(t), . . . , kn(t)} − ς I)(e(t) − D^i e(t − τ(t)))]
+ trace[σ^T(t, r(t), e(t), e(t − τ(t)))σ(t, r(t), e(t), e(t − τ(t)))]. (5.103)
From (5.93)–(5.97), one can obtain that

LV2(t, i, e) = 2[e(t) − D^i e(t − τ(t))]^T[C̃^i y(t) − Ã^i f(y(t)) − B̃^i f(y(t − τ(t)))
− Ẽ^i ∫_{t−τ(t)}^t f(y(s))ds − diag{k1(t), k2(t), . . . , kn(t)}(e(t) − D^i e(t − τ(t)))]. (5.104)

By Itô's differential formula [82], we can infer that

LV3(t, i, e) ≤ g^T(e(t))τ̄^2 g(e(t)) − ∫_{t−τ(t)}^t g^T(e(s))τ̄ g(e(s))ds
≤ e^T(t)τ̄^2 L^2 e(t) − ∫_{t−τ(t)}^t g^T(e(s))τ̄ g(e(s))ds, (5.105)

LV4(t, i, e) ≤ e^T(t)q e(t) − e^T(t − τ(t))(1 − τ̂)q e(t − τ(t)). (5.106)

From Eqs. (5.103)–(5.106), we have

LV(t, i, e) ≤ −2[e(t) − D^i e(t − τ(t))]^T C^i e(t)
+ 2[e(t) − D^i e(t − τ(t))]^T A^i g(e(t))
+ 2[e(t) − D^i e(t − τ(t))]^T B^i g(e(t − τ(t)))
+ 2[e(t) − D^i e(t − τ(t))]^T E^i ∫_{t−τ(t)}^t g(e(s))ds
− 2[e(t) − D^i e(t − τ(t))]^T ς I[e(t) − D^i e(t − τ(t))]
+ trace[σ^T(t, r(t), e(t), e(t − τ(t)))σ(t, r(t), e(t), e(t − τ(t)))]
+ e^T(t)(τ̄^2 L^2 + q)e(t) − ∫_{t−τ(t)}^t g^T(e(s))τ̄ g(e(s))ds
− e^T(t − τ(t))(1 − τ̂)q e(t − τ(t)). (5.107)
From Assumption 5.33 and Lemma 1.13,

−2[e(t) − D^i e(t − τ(t))]^T C^i e(t)
= −2e^T(t)C^i e(t) + 2e^T(t − τ(t))(D^i)^T C^i e(t)
≤ −2e^T(t)C^i e(t) + e^T(t − τ(t))(D^i)^T D^i e(t − τ(t)) + e^T(t)(C^i)^T C^i e(t) (5.108)
≤ (−2δ + C0^2)|e(t)|^2 + κ^2|e(t − τ(t))|^2.

Using Assumption 5.33, one can obtain

2[e(t) − D^i e(t − τ(t))]^T A^i g(e(t))
≤ [e(t) − D^i e(t − τ(t))]^T A^i(A^i)^T[e(t) − D^i e(t − τ(t))] + g^T(e(t))g(e(t)) (5.109)
≤ α|e(t) − D^i e(t − τ(t))|^2 + L^2|e(t)|^2,

and

2[e(t) − D^i e(t − τ(t))]^T B^i g(e(t − τ(t)))
≤ [e(t) − D^i e(t − τ(t))]^T B^i(B^i)^T[e(t) − D^i e(t − τ(t))] + g^T(e(t − τ(t)))g(e(t − τ(t))) (5.110)
≤ β|e(t) − D^i e(t − τ(t))|^2 + L^2|e(t − τ(t))|^2.
From Lemma 1.20, it is easy to see that

2[e(t) − D^i e(t − τ(t))]^T E^i ∫_{t−τ(t)}^t g(e(s))ds
≤ [e(t) − D^i e(t − τ(t))]^T E^i(E^i)^T[e(t) − D^i e(t − τ(t))]
+ (∫_{t−τ(t)}^t g(e(s))ds)^T(∫_{t−τ(t)}^t g(e(s))ds) (5.111)
≤ γ|e(t) − D^i e(t − τ(t))|^2 + ∫_{t−τ(t)}^t g^T(e(s))τ̄ g(e(s))ds.

Also,

trace[σ^T(t, r(t), e(t), e(t − τ(t)))σ(t, r(t), e(t), e(t − τ(t)))] ≤ H1|e(t)|^2 + H2|e(t − τ(t))|^2. (5.112)

Substituting Eqs. (5.108)–(5.112) into Eq. (5.107) and using Eqs. (5.98)–(5.100), one can obtain that

LV(t, i, e) ≤ (−2δ + C0^2 + L^2 + H1 + τ̄^2 L^2 + q)|e(t)|^2
+ (κ^2 + L^2 + H2 − (1 − τ̂)q)|e(t − τ(t))|^2 (5.113)
+ (α + β + γ − 2ς)|e(t) − D^i e(t − τ(t))|^2
< 0.

Based on the Lyapunov stability theory, the noise-perturbed response system (5.88) can thus be adaptively projectively synchronized with the drive time-delay neural network (5.86). This completes the proof.

Now we remove the Markovian jumping parameter from the neural networks, that is, S = 1. The drive system, the response system, and the error system can then be represented as follows, respectively:

d[x(t) − Dx(t − τ(t))] = [−C x(t) + A f(x(t)) + B f(x(t − τ(t))) + E ∫_{t−τ(t)}^t f(x(s))ds]dt, (5.114)

d[y(t) − Dy(t − τ(t))] = [−Ĉ y(t) + Â f(y(t)) + B̂ f(y(t − τ(t))) + Ê ∫_{t−τ(t)}^t f(y(s))ds + U(t)]dt
+ σ(t, r(t), y(t) − λx(t), y(t − τ(t)) − λx(t − τ(t)))dω(t), (5.115)

d[e(t) − De(t − τ(t))] = d[y(t) − Dy(t − τ(t))] − λd[x(t) − Dx(t − τ(t))]
= [−C̃ y(t) + Ã f(y(t)) + B̃ f(y(t − τ(t))) + Ẽ ∫_{t−τ(t)}^t f(y(s))ds
− Ce(t) + Ag(e(t)) + Bg(e(t − τ(t))) + E ∫_{t−τ(t)}^t g(e(s))ds + U(t)]dt
+ σ(t, r(t), e(t), e(t − τ(t)))dω(t). (5.116)

From Theorem 5.36, we can obtain the following corollary.

Corollary 5.37 Under Assumptions 5.33–5.35, suppose that the adaptive controller and update law are chosen as

U(t) = (diag{k1(t), k2(t), . . . , kn(t)} − ς I)(e(t) − De(t − τ(t))) (5.117)

with

k̇_j = −α_j(e_j − (De_τ)_j)^2, (5.118)

where α_j > 0 (j = 1, 2, . . . , n) are arbitrary constants and ς is a positive constant.


The parameter update laws of the matrices C̃, Ã, B̃, and Ẽ are chosen as

c̃˙_j = γ_j(e_j − (De_τ)_j)y_j, (5.119)

ã˙_{jl} = −α_{jl}(e_j − (De_τ)_j)f_l, (5.120)

b̃˙_{jl} = −β_{jl}(e_j − (De_τ)_j)(f_l)_τ, (5.121)

ẽ˙_{jl} = −ϕ_{jl}(e_j − (De_τ)_j) ∫_{t−τ(t)}^t f_l ds. (5.122)

If there exists a positive constant q such that the following inequalities hold:

−2δ̂ + Ĉ0^2 + L^2 + H1 + τ̄^2 L^2 + q < 0, (5.123)

κ^2 + L^2 + H2 − (1 − τ̂)q < 0, (5.124)

α̂ + β̂ + γ̂ − 2ς < 0, (5.125)

where δ̂ = min_{1≤j≤n} c_j, Ĉ0 = ρ(C), α̂ = ρ(A)^2, β̂ = ρ(B)^2, and γ̂ = ρ(E)^2, then the noise-perturbed response system (5.115) can be adaptively projectively synchronized with the time-delay neural network (5.114).

Next, we remove the noise perturbation from the response system (5.88). Hence, the response system and the error system can be represented as follows, respectively:

d[y(t) − D^i y(t − τ(t))] = [−Ĉ^i y(t) + Â^i f(y(t)) + B̂^i f(y(t − τ(t))) + Ê^i ∫_{t−τ(t)}^t f(y(s))ds + U(t)]dt, (5.126)

d[e(t) − D^i e(t − τ(t))] = d[y(t) − D^i y(t − τ(t))] − λd[x(t) − D^i x(t − τ(t))]
= [−C̃^i y(t) + Ã^i f(y(t)) + B̃^i f(y(t − τ(t))) + Ẽ^i ∫_{t−τ(t)}^t f(y(s))ds
− C^i e(t) + A^i g(e(t)) + B^i g(e(t − τ(t))) + E^i ∫_{t−τ(t)}^t g(e(s))ds + U(t)]dt. (5.127)
From Theorem 5.36, we can also obtain the following corollary.

Corollary 5.38 Under Assumptions 5.33 and 5.35, suppose that the adaptive controller and update law are chosen as

U(t) = (diag{k1(t), k2(t), . . . , kn(t)} − ς I)(e(t) − D^i e(t − τ(t))) (5.128)

with

k̇_j = −α_j(e_j − (D^i e_τ)_j)^2, (5.129)

where α_j > 0 (j = 1, 2, . . . , n) are arbitrary constants and ς is a positive constant.


The parameter update laws of the matrices C̃^i, Ã^i, B̃^i, and Ẽ^i are chosen as

c̃˙^i_j = γ_j(e_j − (D^i e_τ)_j)y_j, (5.130)

ã˙^i_{jl} = −α_{jl}(e_j − (D^i e_τ)_j)f_l, (5.131)

b̃˙^i_{jl} = −β_{jl}(e_j − (D^i e_τ)_j)(f_l)_τ, (5.132)

ẽ˙^i_{jl} = −ϕ_{jl}(e_j − (D^i e_τ)_j) ∫_{t−τ(t)}^t f_l ds. (5.133)

If there exists a positive constant q such that the following inequalities hold:

−2δ + C0^2 + L^2 + τ̄^2 L^2 + q < 0, (5.134)

κ^2 + L^2 − (1 − τ̂)q < 0, (5.135)

α + β + γ − 2ς < 0, (5.136)

where δ = min_{i∈S} min_{1≤j≤n} c^i_j, C0 = max_{i∈S}|C^i|, α = max_{i∈S}(ρ(A^i))^2, β = max_{i∈S}(ρ(B^i))^2, and γ = max_{i∈S}(ρ(E^i))^2, then the noise-free response system (5.126) can be adaptively projectively synchronized with the time-delay neural network (5.86).
Now suppose λ = 1 and let e(t) = y(t) − x(t) be the synchronization error vector. The error neutral-type neural network can then be written as

d[e(t) − D^i e(t − τ(t))] = d[y(t) − D^i y(t − τ(t))] − d[x(t) − D^i x(t − τ(t))]
= [−C̃^i y(t) + Ã^i f(y(t)) + B̃^i f(y(t − τ(t))) + Ẽ^i ∫_{t−τ(t)}^t f(y(s))ds
− C^i e(t) + A^i g(e(t)) + B^i g(e(t − τ(t))) + E^i ∫_{t−τ(t)}^t g(e(s))ds + U(t)]dt
+ σ(t, r(t), e(t), e(t − τ(t)))dω(t), (5.137)

where g(e(t)) = f(y(t)) − f(x(t)), C̃^i = Ĉ^i − C^i, Ã^i = Â^i − A^i, B̃^i = B̂^i − B^i, and Ẽ^i = Ê^i − E^i.
From Theorem 5.36, we can also obtain the following corollary.
Corollary 5.39 Under Assumptions 5.33–5.35, suppose that the adaptive controller and update law are chosen as

U(t) = (diag{k1(t), k2(t), . . . , kn(t)} − ς I)(e(t) − D^i e(t − τ(t))) (5.138)

with

k̇_j = −α_j(e_j − (D^i e_τ)_j)^2, (5.139)

where α_j > 0 (j = 1, 2, . . . , n) are arbitrary constants and ς is a positive constant.


The parameter update laws of the matrices C̃^i, Ã^i, B̃^i, and Ẽ^i are chosen as

c̃˙^i_j = γ_j(e_j − (D^i e_τ)_j)y_j, (5.140)

ã˙^i_{jl} = −α_{jl}(e_j − (D^i e_τ)_j)f_l, (5.141)

b̃˙^i_{jl} = −β_{jl}(e_j − (D^i e_τ)_j)(f_l)_τ, (5.142)

ẽ˙^i_{jl} = −ϕ_{jl}(e_j − (D^i e_τ)_j) ∫_{t−τ(t)}^t f_l ds. (5.143)

If there exists a positive constant q such that the following inequalities hold:

−2δ + C0^2 + L^2 + H1 + τ̄^2 L^2 + q < 0, (5.144)

κ^2 + L^2 + H2 − (1 − τ̂)q < 0, (5.145)

α + β + γ − 2ς < 0, (5.146)

where δ = min_{i∈S} min_{1≤j≤n} c^i_j, C0 = max_{i∈S}|C^i|, α = max_{i∈S}(ρ(A^i))^2, β = max_{i∈S}(ρ(B^i))^2, and γ = max_{i∈S}(ρ(E^i))^2, then the noise-perturbed response system (5.88) can be adaptively synchronized (λ = 1, i.e., completely synchronized) with the time-delay neural network (5.86).

5.3.4 Numerical Example

Consider the time-delay neural network (5.86) and its response system (5.88) with the scaling factor λ = 2 and the following network parameters:

A^1 = [1.2 −1.5; −1.7 0.2], A^2 = [1.1 −1.6; −1.8 1.2], B^1 = [0.7 −0.2; 0 0.3],
B^2 = [−0.4 −0.1; 0.3 0.5], C^1 = [0.2 0; 0 0.3], C^2 = [0.5 0; 0 0.2],
D^1 = [0.1 0; 0 0.15], D^2 = [0.2 0; 0 0.1], Γ = [−0.12 0.12; 0.11 −0.11],

α_{11} = α_{12} = α_{21} = α_{22} = β_{11} = β_{12} = β_{21} = β_{22} = γ_1 = γ_2 = 1,
σ(t, 1, e(t), e(t − τ)) = (0.15e1(t − τ), 0.2e2(t))^T,
σ(t, 2, e(t), e(t − τ)) = (0.2e1(t), 0.1e2(t − τ))^T,
f(x(t)) = g(x(t)) = 0.3 tanh(x(t)), L = 0.3, ς = 5,
τ = 0.12, α1 = α2 = 1.

It can be checked that Assumptions 5.33–5.35 and inequalities (5.98)–(5.100) are satisfied. So the response system (5.88) can be adaptively projectively synchronized with the drive system (5.86) by Theorem 5.36. The dynamic curve of the error system is shown in Fig. 5.8. The evolution of the adaptive coupling strengths k1 and k2 is given in Fig. 5.9. Figure 5.8 shows that the two coupled neural networks (5.86) and (5.88) are synchronized.
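For readers who want to reproduce such figures, the following Euler–Maruyama sketch is ours, under several simplifying assumptions: the Markov chain is frozen in mode 1, the distributed-delay matrices E^i are omitted (they are not listed above), the parameter estimates are held at their true values so that only the feedback part (5.92)–(5.93) is exercised, and the initial histories are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T, tau, lam, varsigma, lr = 1e-3, 10.0, 0.12, 2.0, 5.0, 1.0
n_lag, n = int(tau / dt), int(T / dt)
A = np.array([[1.2, -1.5], [-1.7, 0.2]]); B = np.array([[0.7, -0.2], [0.0, 0.3]])
Cm = np.array([[0.2, 0.0], [0.0, 0.3]]); D = np.array([[0.1, 0.0], [0.0, 0.15]])
f = lambda v: 0.3 * np.tanh(v)

x = np.zeros((n + n_lag + 1, 2)); y = np.zeros_like(x); k = np.zeros(2)
x[: n_lag + 1] = [0.2, -0.1]; y[: n_lag + 1] = [-0.3, 0.4]  # placeholder histories
for i in range(n_lag, n + n_lag):
    e, e_tau = y[i] - lam * x[i], y[i - n_lag] - lam * x[i - n_lag]
    z = e - D @ e_tau
    u = (np.diag(k) - varsigma * np.eye(2)) @ z              # controller (5.92)
    k += -lr * z**2 * dt                                     # update law (5.93)
    dW = rng.normal(0.0, np.sqrt(dt), 2)
    noise = np.array([0.15 * e_tau[0], 0.2 * e[1]]) * dW     # sigma(t, 1, e, e_tau)
    dx = (-Cm @ x[i] + A @ f(x[i]) + B @ f(x[i - n_lag])) * dt
    dy = (-Cm @ y[i] + A @ f(y[i]) + B @ f(y[i - n_lag]) + u) * dt + noise
    # neutral-type step: x(t+dt) = x(t) + D [x(t+dt-tau) - x(t-tau)] + dx
    x[i + 1] = x[i] + D @ (x[i + 1 - n_lag] - x[i - n_lag]) + dx
    y[i + 1] = y[i] + D @ (y[i + 1 - n_lag] - y[i - n_lag]) + dy
print("final |e|:", np.linalg.norm(y[-1] - lam * x[-1]))
```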

Fig. 5.8 The curve of the synchronization errors e1 and e2

Fig. 5.9 The evolution graph of the adaptive coupling strengths k1 and k2

5.3.5 Conclusions

In this section, we have discussed the projective synchronization problem for a couple of mode-dependent neutral-type neural networks. To describe the real world precisely, distributed time delay and noise perturbation have been considered in the model. A sufficient projective synchronization criterion for these neutral-type neural networks has been derived based on the Lyapunov stability theory and the adaptive control method. A numerical simulation has been exploited to illustrate the feasibility and effectiveness of the theoretical result obtained. In this section, the transition probabilities of the Markovian chain are assumed to be completely known. However, incomplete transition probabilities are often encountered in practice, since adequate samples of the transitions are time consuming to obtain, as discussed in [68–72]. Therefore, the method of this section can be further extended to solve the synchronization problem for models with partially unknown transition parameters in the Markovian chain.

5.4 Adaptive pth Moment Exponential Synchronization


of Neutral-Type NN with Markovian Switching

5.4.1 Introduction

In reality, time-delay systems are frequently encountered in many areas, and a time delay is often a source of instability and oscillation. For neural networks with time delays, various sufficient conditions have been proposed to guarantee global asymptotic or exponential stability in the recent literature, see e.g., [8, 56, 80] and the references therein, in which many methods have been exploited, such as the linear matrix inequality approach.
Meanwhile, many neural networks may experience abrupt changes in their structure and parameters caused by phenomena such as component failures or repairs, changing subsystem interconnections, and abrupt environmental disturbances. In this situation, there exist finitely many modes in the neural networks, and the modes may switch (or jump) from one to another at different times. This kind of system has been widely studied by many scholars, see e.g., [33, 45, 57, 68, 76] and the references therein.
As we know, synchronization of neural networks means achieving accordance of the states of the drive system and the response system; that is, the state of the error system between the drive and response systems converges to zero as time approaches infinity. In particular, adaptive synchronization of neural networks is a synchronization scheme in which the parameters of the drive system must be estimated and the synchronization control law must be updated in real time as the neural network evolves.
Up to now, the synchronization problem of the neural networks has been exten-
sively investigated over the last decade due to their successful applications in many
areas, such as signal processing, combinatorial optimization, communication, etc.
Moreover, the adaptive synchronization for neural networks has been used in real
neural networks control, such as parameter estimation adaptive control, model ref-
erence adaptive control, etc. In the past decade, much attention has been devoted
to the research of the adaptive synchronization for neural networks (see e.g., [21,
28, 44, 77, 83] and the references therein). In [44], the adaptive lag synchroniza-
tion issue of unknown chaotic delayed neural networks with noise perturbation is
considered and the suitable parameter update laws and several sufficient conditions
to ensure lag synchronization of unknown delayed neural networks with or without
noise perturbation are derived. An adaptive feedback controller is designed to achieve
complete synchronization of unidirectionally coupled delayed neural networks with
stochastic perturbation and the globally almost surely asymptotical stability of the
error dynamical system is investigated by LaSalle-type invariance principle in [21].
In [83], adaptive synchronization condition under almost every initial data for sto-
chastic neural networks with time-varying delays and distributed delays is derived.
In [77], the issues of lag synchronization of coupled chaotic delayed neural networks
are investigated. Using the adaptive control with the linear feedback updated law,
5.4 Adaptive pth Moment Exponential Synchronization … 203

some simple yet generic criteria for determining the lag synchronization of coupled
chaotic delayed neural networks are derived based on the invariance principle of
functional differential equations. In [28], Lu et al. investigated globally exponential
synchronization for linearly coupled neural networks with time-varying delay and
impulsive disturbances. By referring to an impulsive delay differential inequality,
a sufficient condition of globally exponential synchronization for linearly coupled
neural networks with impulsive disturbances is derived in the section.
In this section, we are concerned with the mode- and delay-dependent adaptive exponential synchronization of stochastic delayed neural networks with Markovian switching parameters, employing the M-matrix approach. The main purpose of this section is to establish M-matrix-based criteria for testing whether the stochastic delayed neural network is exponentially synchronized in pth moment. We will use a simple example to illustrate the usefulness of the derived M-matrix-based synchronization conditions.

5.4.2 Problem Formulation and Preliminaries

In this section, we consider the neutral-type neural network, called the drive system, represented in the compact form

d[x(t) − N(r(t))x(t − τ(t))] = [−C(r(t))x(t) + A(r(t))f(x(t)) + B(r(t))f(x(t − τ(t))) + D(r(t))]dt, (5.147)

where t ≥ 0 (or t ∈ R+, the set of all nonnegative real numbers) is the time variable, x(t) = (x1(t), x2(t), . . . , xn(t))^T ∈ R^n is the state vector associated with n neurons, f(x(t)) = (f_1(x1(t)), f_2(x2(t)), . . . , f_n(xn(t)))^T ∈ R^n denotes the activation function of the neurons, and τ(t) is the transmission delay satisfying 0 < τ(t) ≤ τ̄ and τ̇(t) ≤ τ̂ < 1, where τ̄ and τ̂ are constants. For convenience, for t ≥ 0, we denote r(t) = i and A(r(t)) = A^i, B(r(t)) = B^i, C(r(t)) = C^i, D(r(t)) = D^i, and N(r(t)) = N^i, respectively. In the drive system (5.147), furthermore, ∀i ∈ S, C^i = diag{c^i_1, c^i_2, . . . , c^i_n} has positive and unknown entries c^i_k > 0, and A^i = (a^i_{jk})_{n×n}, B^i = (b^i_{jk})_{n×n}, and N^i = (n^i_{jk})_{n×n} are the connection weight, the delayed connection weight, and the neutral-type parameter matrices, respectively, all of which are unknown. D^i = (d^i_1, d^i_2, . . . , d^i_n)^T ∈ R^n is the constant external input vector.
For the drive system (5.147), a response system is constructed as follows:

d[y(t) − N(r(t))y(t − τ(t))]
= [−Ĉ(r(t))y(t) + Â(r(t))f(y(t)) + B̂(r(t))f(y(t − τ(t))) + D(r(t)) + U(t)]dt (5.148)
+ σ(t, r(t), y(t) − x(t), y(t − τ(t)) − x(t − τ(t)))dω(t),

where y(t) is the state vector of the response system (5.148), Ĉ^i = diag{ĉ^i_1, ĉ^i_2, . . . , ĉ^i_n}, Â^i = (â^i_{jk})_{n×n}, and B̂^i = (b̂^i_{jk})_{n×n} are the estimates of the unknown matrices C^i, A^i, and B^i, respectively, U(t) = (u1(t), u2(t), . . . , un(t))^T ∈ R^n is a control input vector of the form

U(t) = K(t)(y(t) − x(t) − N(r(t))(y(t − τ(t)) − x(t − τ(t))))
= diag{k1(t), k2(t), . . . , kn(t)}(y(t) − x(t) − N(r(t))(y(t − τ(t)) − x(t − τ(t)))), (5.149)

ω(t) = (ω1(t), ω2(t), . . . , ωn(t))^T is an n-dimensional Brownian motion defined on a complete probability space (Ω, F, P) with a natural filtration {F_t}_{t≥0} (i.e., F_t = σ{ω(s) : 0 ≤ s ≤ t} is a σ-algebra) that is independent of the Markovian process {r(t)}_{t≥0}, and σ : R+ × S × R^n × R^n → R^{n×n} is the noise intensity matrix, which can be regarded as resulting from external random fluctuations and other probabilistic causes.
Let e(t) = y(t) − x(t). For simplicity, we write e(t − τ(t)) = e_τ(t) and f(x(t) + e(t)) − f(x(t)) = g(e(t)). From the drive system (5.147) and the response system (5.148), their error system can be represented as follows:

d[e(t) − N(r(t))e(t − τ(t))]
= [−C̃(r(t))y(t) − C(r(t))e(t) + Ã(r(t))f(y(t)) + A(r(t))g(e(t)) (5.150)
+ B̃(r(t))f(y_τ(t)) + B(r(t))g(e_τ(t)) + U(t)]dt
+ σ(t, r(t), e(t), e_τ(t))dω(t),

where C̃(r(t)) = Ĉ(r(t)) − C(r(t)), Ã(r(t)) = Â(r(t)) − A(r(t)), and B̃(r(t)) = B̂(r(t)) − B(r(t)). Denote c̃^i_j = ĉ^i_j − c^i_j, ã^i_{jk} = â^i_{jk} − a^i_{jk}, and b̃^i_{jk} = b̂^i_{jk} − b^i_{jk}; then C̃^i = diag{c̃^i_1, c̃^i_2, . . . , c̃^i_n}, Ã^i = (ã^i_{jk})_{n×n}, and B̃^i = (b̃^i_{jk})_{n×n}.
The initial condition associated with system (5.150) is given in the following form:

e(s) = ξ(s), s ∈ [−τ̄, 0],

for any ξ(s) ∈ L^2_{F0}([−τ̄, 0]; R^n), where L^2_{F0}([−τ̄, 0]; R^n) is the family of all F_0-measurable C([−τ̄, 0]; R^n)-valued random variables satisfying sup_{−τ̄≤s≤0} E|ξ(s)|^2 < ∞, and C([−τ̄, 0]; R^n) denotes the family of all continuous R^n-valued functions ξ(s) on [−τ̄, 0] with the norm ‖ξ(s)‖ = sup_{−τ̄≤s≤0}|ξ(s)|.
To obtain the main result, we need the following assumptions.

Assumption 5.40 The activation functions of the neurons f (x(t)) satisfy the Lip-
schitz condition, that is, there exists a constant L > 0 such that

| f (u) − f (v)| ≤ L|u − v|, ∀u, v ∈ Rn .



Assumption 5.41 The noise intensity matrix σ(·, ·, ·, ·) satisfies the linear growth condition, that is, there exist two positive constants H1 and H2 such that

trace(σ(t, r(t), u(t), v(t))^T σ(t, r(t), u(t), v(t))) ≤ H1|u(t)|^2 + H2|v(t)|^2

for all (t, r(t), u(t), v(t)) ∈ R+ × S × R^n × R^n.

Remark 5.42 Under Assumptions 5.40 and 5.41, the error system (5.150) admits an
equilibrium point (or trivial solution) e(t, ξ(s)), t ≥ 0.

The following stability concept and synchronization concept are needed in this
section.
Definition 5.43 The trivial solution e(t, ξ(s)) of the error system (5.150) is said to be exponentially stable in pth moment if

lim sup_{t→∞} (1/t) log(E|e(t, ξ(s))|^p) < 0

for any ξ(s) ∈ L^p_{F0}([−τ̄, 0]; R^n), where p ≥ 2, p ∈ Z. When p = 2, it is said to be exponentially stable in mean square.
The drive system (5.147) and the response system (5.148) are said to be exponentially synchronized in pth moment if the error system (5.150) is exponentially stable in pth moment.
The main purpose of the rest of this section is to establish a criterion of adaptive
exponential synchronization in pth moment of the system (5.147) and the response
system (5.148) using adaptive feedback control and M-matrix techniques.
Consider an n-dimensional stochastic delayed differential equation (SDDE, for short) with Markovian switching,

d[x(t) − N(r(t))x(t − τ(t))] = f(t, r(t), x(t), x_τ(t))dt + g(t, r(t), x(t), x_τ(t))dω(t), (5.151)

on t ∈ [0, ∞) with the initial data given by

{x(θ) : −τ̄ ≤ θ ≤ 0} = ξ(θ) ∈ L^p_{F0}([−τ̄, 0]; R^n).

5.4.3 Main Results

In this section, we give a criterion of adaptive exponential synchronization in pth


moment for the drive system (5.147) and the response system (5.148). Firstly, we
establish a general result which can be applied widely.

Theorem 5.44 Assume that there is a function V(t, i, x) ∈ C^{2,1}(R+ × S × R^n; R+) and positive constants p, c1, λ1, and λ2 such that

λ2 < λ1(1 − τ̂), (5.152)

c1|x|^p ≤ V(t, i, x), (5.153)

where c1 = min_{i∈S} q_i, and

LV(t, i, x, x_τ) ≤ −λ1|x|^p + λ2|x_τ|^p, (5.154)

for all t ≥ 0, i ∈ S, and x ∈ R^n (x = x(t) for short). Then the SDDE (5.151) is exponentially stable in pth moment.

Proof The proof is similar to that of Theorem 5.23 and is therefore omitted here.

Now we are in a position to set up a criterion of adaptive exponential synchronization in pth moment for the drive system (5.147) and the response system (5.148).

Theorem 5.45 Assume that M := −diag{η, η, . . . , η} − Γ (with η repeated S times) is a nonsingular M-matrix, where

η = (U1 + U2 + U3) − a1(1 − k)^{p−1} + a2(1 + k)^{p−1},
γ = min_{i∈S} min_{1≤j≤n} c^i_j,
α = max_{i∈S}(ρ(A^i))^2,
β = max_{i∈S}(ρ(B^i))^2, p ≥ 2.

Let m > 0 and m⃗ = (m, m, . . . , m)^T ∈ R^S. (In this case, (q1, q2, . . . , qS)^T := M^{−1}m⃗ ≫ 0, i.e., all elements of M^{−1}m⃗ are positive, by Lemma 1.12.) Assume also that

(V1 + V2 + V3)c2 + a2 c2 k(1 − k)^{p−1} + b1 k < −(ηq_i + (1 + k)^{p−1} Σ_{k=1}^S γ_{ik}q_k)(1 − τ̂), ∀i ∈ S, (5.155)

where c2 = max_{i∈S} q_i.
Under Assumptions 5.40 and 5.41, the noise-perturbed response system (5.148) can be adaptively exponentially synchronized in pth moment with the drive neural network (5.147) if the feedback gain K(t) of the controller (5.149) with the update law is chosen as

k̇_j = −(1/2)α_j p q_i|e − N^i e_τ|^{p−2}(e_j − (N^i e_τ)_j)^2, (5.156)

and the parameter update laws of the matrices Ĉ^i, Â^i, and B̂^i are chosen as

ĉ˙^i_j = (γ_j/2)p q_i|e − N^i e_τ|^{p−2}(e_j − (N^i e_τ)_j)y_j,
â˙^i_{jl} = −(α_{jl}/2)p q_i|e − N^i e_τ|^{p−2}(e_j − (N^i e_τ)_j)f_l, (5.157)
b̂˙^i_{jl} = −(β_{jl}/2)p q_i|e − N^i e_τ|^{p−2}(e_j − (N^i e_τ)_j)(f_l)_τ,

where α_j > 0, γ_j > 0, α_{jl} > 0, and β_{jl} > 0 (j, l = 1, 2, . . . , n) are arbitrary constants, respectively.
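Discretized with an explicit Euler step, the update laws (5.156)–(5.157) share the common factor p q_i|e − N^i e_τ|^{p−2}(e − N^i e_τ). The sketch below (ours, not the authors' code; qi, p, the uniform learning rate, and all arguments are placeholders supplied by a surrounding simulation loop) makes that structure explicit:

```python
import numpy as np

def adapt_step(k, C_hat, A_hat, B_hat, e, e_tau, y, f_vals, f_vals_tau,
               N_i, qi, p=2, rate=1.0, dt=1e-3):
    """One Euler step of (5.156)-(5.157) with uniform learning rate `rate`."""
    z = e - N_i @ e_tau
    w = p * qi * np.linalg.norm(z) ** (p - 2) * z      # shared factor
    k     += -0.5 * rate * w * z * dt                  # gain law (5.156)
    C_hat += 0.5 * rate * np.diag(w * y) * dt          # first line of (5.157)
    A_hat += -0.5 * rate * np.outer(w, f_vals) * dt    # second line of (5.157)
    B_hat += -0.5 * rate * np.outer(w, f_vals_tau) * dt
    return k, C_hat, A_hat, B_hat
```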

Proof For each i ∈ S, choose a nonnegative function as follows:

V(t, i, e) = q_i|e|^p + Σ_{j=1}^n ((1/α_j)k_j^2 + (1/γ_j)(c̃^i_j)^2 + Σ_{l=1}^n (1/α_{jl})(ã^i_{jl})^2 + Σ_{l=1}^n (1/β_{jl})(b̃^i_{jl})^2).

Clearly, V(t, i, x) obeys (5.153) with c1 = min_{i∈S} q_i. Computing LV(t, i, e, e_τ) along the trajectory of the error system (5.150), and using (5.156) and (5.157), one can obtain that

LV(t, i, e, e_τ)
= V_t(t, i, e − N^i e_τ) + V_e(t, i, e − N^i e_τ)[−C̃^i y − C^i e + Ã^i f(y) + A^i g(e)
+ B̃^i f(y_τ) + B^i g(e_τ) + U(t)] (5.158)
+ (1/2)trace(σ^T(t, i, e, e_τ)V_{ee}(t, i, e − N^i e_τ)σ(t, i, e, e_τ))
+ Σ_{k=1}^S γ_{ik}V(t, k, e − N^i e_τ),

while

V_t(t, i, e − N^i e_τ) = 0,
V_e(t, i, e − N^i e_τ) = p q_i|e − N^i e_τ|^{p−2}(e − N^i e_τ)^T,
V_{ee}(t, i, e − N^i e_τ) = p(p − 2)q_i|e − N^i e_τ|^{p−4}[(e − N^i e_τ)^T]^2 + p q_i|e − N^i e_τ|^{p−2} (5.159)
≤ p(p − 1)q_i|e − N^i e_τ|^{p−2},

so

LV(t, i, e, e_τ)
≤ 2Σ_{j=1}^n (1/α_j)k_j k̇_j + 2Σ_{j=1}^n (1/γ_j)c̃^i_j c̃˙^i_j
+ 2Σ_{j=1}^n Σ_{l=1}^n (1/α_{jl})ã^i_{jl}ã˙^i_{jl} + 2Σ_{j=1}^n Σ_{l=1}^n (1/β_{jl})b̃^i_{jl}b̃˙^i_{jl}
+ p q_i|e − N^i e_τ|^{p−2}(e − N^i e_τ)^T[−C̃^i y − C^i e + Ã^i f(y) + A^i g(e)
+ B̃^i f(y_τ) + B^i g(e_τ) + U(t)] (5.160)
+ (1/2)trace(σ^T(t, i, e, e_τ)(p(p − 1)q_i|e − N^i e_τ|^{p−2})σ(t, i, e, e_τ))
+ Σ_{k=1}^S γ_{ik}q_k|e − N^i e_τ|^p
= p q_i|e − N^i e_τ|^{p−2}(e − N^i e_τ)^T[−C^i e + A^i g(e) + B^i g(e_τ)]
+ (1/2)trace(σ^T(t, i, e, e_τ)(p(p − 1)q_i|e − N^i e_τ|^{p−2})σ(t, i, e, e_τ))
+ Σ_{k=1}^S γ_{ik}q_k|e − N^i e_τ|^p.
k=1

Now, using Assumptions 5.40 and 5.41 together with Lemmas 1.13, 1.3, 1.4 yields

− e T C i e ≤ −γ|e|2 , γ = min min cij , (5.161)


i∈S 1≤ j≤n

− |e − N i eτ | p−2 ≤ −(1 − k) p−3 |e| p−2 + k(1 − k) p−3 |eτ | p−2 , (5.162)

assume that 0 < κ < 1,

eτT (N i )T C i e ≤ 21 (κ)2 |eτ |2 + 21 ι2 |e|2 ,


κ = max(ρ(N i )), (5.163)
i∈S
ι = max(ρ(C i )),
i∈S

(e − N i eτ )T Ai g(e) ≤ (1/2)(e − N i eτ )T Ai (Ai )T (e − N i eτ )


+(1/2)g T (e)g(e)
(5.164)
≤ (1/2)(α(1 + k) + L 2 )|e|2
+(1/2)(αk(1 + k)|eτ |2 ,

(e − N i eτ )T B i g(eτ ) ≤ (1/2)e − N i eτT B i (B i )T e


+(1/2)g T (eτ )g(eτ )
(5.165)
≤ (1/2)(β(1 + k)|e|2
+(1/2)(βk(1 + k) + L 2 )|eτ |2 ,

and

(1/2)trace (σ T (t, i, e, eτ )( p( p − 1)qi |e − N i eτ | p−2 )σ(t, i, e, eτ ))


(5.166)
≤ (1/2) p( p − 1)qi |e − N i eτ | p−2 (H1 |e|2 + H2 |eτ |2 ).

Using Lemmas 1.3 and 1.4,

Σ_{k=1}^S γ_{ik}q_k|e − N^i e_τ|^p
= γ_{ii}q_i|e − N^i e_τ|^p + Σ_{k=1,k≠i}^S γ_{ik}q_k|e − N^i e_τ|^p
= −Σ_{k=1,k≠i}^S γ_{ik}q_i|e − N^i e_τ|^p + Σ_{k=1,k≠i}^S γ_{ik}q_k|e − N^i e_τ|^p (5.167)
≤ Σ_{k=1,k≠i}^S γ_{ik}q_i(−(1 − k)^{p−1}|e|^p + k(1 − k)^{p−1}|e_τ|^p)
+ Σ_{k=1,k≠i}^S γ_{ik}q_k(1 + k)^{p−1}(|e|^p + k|e_τ|^p).

On the other hand, making use of the Young inequality, we have

|e − N^i e_τ|^{p−2}|e|^2
≤ ((p − 2)/p)|e − N^i e_τ|^p + (2/p)|e|^p
≤ ((p − 2)/p)(1 + k)^{p−1}(|e|^p + k|e_τ|^p) + (2/p)|e|^p (5.168)
= (((p − 2)/p)(1 + k)^{p−1} + 2/p)|e|^p + ((p − 2)/p)k(1 + k)^{p−1}|e_τ|^p,

|e − N^i e_τ|^{p−2}|e_τ|^2
≤ ((p − 2)/p)|e − N^i e_τ|^p + (2/p)|e_τ|^p
≤ ((p − 2)/p)(1 + k)^{p−1}(|e|^p + k|e_τ|^p) + (2/p)|e_τ|^p
= ((p − 2)/p)(1 + k)^{p−1}|e|^p + (((p − 2)/p)k(1 + k)^{p−1} + 2/p)|e_τ|^p,

|e − N i eτ | p−2 (−e T C i e)
≤ (−|e − N i eτ | p−2 )γ|e|2
≤ γ|e|2 (−(1 − k) p−3 |e| p−2 + k(1 − k) p−3 |eτ | p−2 ) (5.169)
= (−γ(1 − k) p−3 + γ 2p k(1 − k) p−3 )|e| p
+γ p−2p k(1 − k)
p−3 |e | p
τ

|e − N i eτ | p−2 (e − N i eτ )T Ai g(e)
≤ |e − N i eτ | p−2 ((1/2)(α(1 + k) + L 2 )|e|2
+(1/2)(αk(1 + k)|eτ |2 )
= (1/2)(α(1 + k) + L 2 )|e − N i eτ | p−2 |e|2
+(1/2)αk(1 + k)|e − N i eτ | p−2 |eτ |2
(5.170)
|e − N i eτ | p−2 (e − N i eτ )T B i g(eτ )
≤ |e − N i eτ | p−2 ((1/2)β(1 + k)|e|2
+(1/2)(βk(1 + k) + L 2 )|eτ |2 )
= (1/2)β(1 + k)|e − N i eτ | p−2 |e|2
+(1/2)(βk(1 + k) + L 2 )|e − N i eτ | p−2 |eτ |2
210 5 Stability and Synchronization of Neutral-Type Neural Networks

|e − N i eτ | p−2 eτT (N i )T C i e
≤ |e − N i eτ | p−2 ((1/2)(κ)2 |eτ |2 + (1/2)ι2 |e|2 ) (5.171)
= (1/2)ι2 |e − N i eτ | p−2 |e|2 + (1/2)κ2 |e − N i eτ | p−2 |eτ |2 .

Substituting (5.161)–(5.171) into (5.160) yields

LV (t, i, e, eτ )
≤ qi U1 |e| p + qi V1 |eτ | p
"
+ pqi G 1 p−2 (1 + k) p−1
! p ! 
+ 2p |e| p + p−2 k(1 + k) p−1 |e | p
τ
p
+ pqi G 2 p−2 (1 + k) p−1 |e| p (5.172)
p ! 
+ p−2 p k(1 + k) p−1 + 2 |e | p
p τ
+Ū4 |e| p + V̄4 |eτ | p
= (qi U1 + qi U2 + qi U3 + Ū4 )|e| p
+(qi V1 + qi V2 + qi V3 + V̄4 )eτ | p ,

where

G 1 = (1/2)ι2 + (1/2)(α(1 + k) + L 2 )
+(1/2)β(1 + k) + (1/2)( p − 1)H1 ,

G 2 = (1/2)κ2 (1/2)β(k(1 + k) + L 2 )
+(1/2)αk(1 + k) + (1/2)( p − 1)H2 ,

U1 = −γ p(1 − k) p−3 + 2γk(1 − k) p−3 ,

V1 = γ( p − 2)k(1 − k) p−3 ,

U2 = G 1 (( p − 2)(1 + k) p−1 + 2)

V2 = G 1 ( p − 2)k(1 + k) p−1 ,

U3 = G 2 ( p − 2)(1 + k) p−1 ,

V3 = G 2 (( p − 2)k(1 + k) p−1 + 2).

Let


S 
S
a1 = min γik , a2 = max γik , (5.173)
i∈S i∈S
k=1,k=i k=1,k=i
5.4 Adaptive pth Moment Exponential Synchronization … 211


S 
S
b1 = min γik qk , b2 = max γik qk . (5.174)
i∈S i∈S
k=1,k=i k=1,k=i

Then


S 
S
Ū4 = γik qi (−(1 − k) p−1 ) + γik qk (1 + k) p−1
k=1,k=i k=1,k=i
≤ −a1 qi (1 − k) p−1 + a2 qi (1 + k) p−1
S
+(1 + k) p−1 γik qk (5.175)
 k=1 
S 
S
V̄4 = γik qi k(1 − k)1− p + k γik qk
k=1,k=i k=1,k=i
≤ a2 c2 k(1 − k) p−1 + b1 k.

Therefore,

"LV (t, i, e, eτ )
≤ (U1 + U2 + U3 − a1 (1 − k) p−1 + a2 (1 + k) p−1 )qi


S
+ (1 + k) p−1 γik qk |e| p + ((V1 + V2 + V3 )c2
k=1
+a c k(1 − k) p−1 + b1 k)|eτ | p
 2 2  (5.176)

S
≤ ηqi + (1 + k) p−1 γik qk |e| p + ((V1 + V2 + V3 )c2
k=1
+a2 c2 k(1 − k) p−1 + b1 k)|eτ | p
≤ −m|e| p + ((V1 + V2 + V3 )c2
+a2 c2 k(1 − k) p−1 + b1 k)|eτ | p

Let λ1 = m, λ2 = (V1 + V2 + V3 )c2 + a2 c2 k(1 − k) p−1 + b1 k. Then inequalities


(5.154) and (5.152) hold. By Theorem 5.44, the error system (5.150) is adaptive
exponential stability in pth moment, and hence the noise-perturbed response system
(5.148) can be adaptive exponential synchronized in pth moment with the neural
network (5.147). This completes the proof.
Remark 5.46 In Theorem 5.45, the condition (5.155) of the adaptive exponen-
tial synchronization for neural networks with Markovian switching obtained using
M-matrix approach is mode dependent and very different to those, such as linear
matrix inequality method. And the condition can be checked if the drive system and
the response system are given and the positive constant m be chosen. To the best of
our knowledge, the method combining Lyapunov function and M-matrix approach
in this section is rarely used in the researching area of adaptive exponential synchro-
nization in pth moment for stochastic neural networks with Markovian switching.
Now, we are in a position to consider two special cases of the drive system (5.147)
and the response system (5.148).
212 5 Stability and Synchronization of Neutral-Type Neural Networks

Special case 1 The Markovian jumping parameters are removed from the neural
networks. That is to say, S = 1. For this case, one can get the following result
analogous to Theorem 5.45.

Corollary 5.47 Assume that η < 0 and (V1 +V2 +V3 )+a2 k(1−k) p−1 < −η(1−τ̂ ),
where

η = (U1 + U2 + U3 ) − a1 (1 − k) p−1 + a2 (1 + k) p−1 .

Under Assumptions 5.40 and 5.41, the noise-perturbed response system can be adap-
tive exponential synchronized in pth moment with the drive neural network, if the
feedback gain K (t) of the controller (5.149) with the update law is chosen as

k̇ j = −(1/2)α j p|e − N i eτ | p−2 (e j − N i eτ j )2 , (5.177)

and the update laws of the parameters of matrices Ĉ, Â, and B̂ are chosen as
⎧ i γj
˙
⎨ ĉ j = 2 p|e

α
− N i eτ | p−2 (e j − N i eτ j )y j ,
â˙ jl = − 2 p|e − N i eτ | p−2 (e j − N i eτ j ) fl ,
i jl
(5.178)

⎩ ˙i β
b̂ jl = − 2jl p|e − N i eτ | p−2 (e j − N i eτ j )( fl )τ ,

where α j > 0, γ j > 0, α jl > 0, and β jl > 0 ( j, l = 1, 2, . . . , n) are arbitrary


constants, respectively.

Proof Choose the following nonnegative function:




n
V (t, e) = |e| p + 1 2
αj k j + 1
γj (c̃ j )2
j=1


n 
n
+ α jl (ã jl )
1 2 + β jl (b̃ jl )
1 2 .
l=1 l=1

The proof is similar to that of Theorem 5.45, and hence omitted.

Special case 2 When the noise perturbation is removed from the response system
(5.148), it yields the noiseless response system, which can lead to the following
results.

Corollary 5.48 Assume that M := −diag {η, η, . . . , η } − Γ is a nonsingular


  
S
M-matrix, where

η = (U1 + Ū2 + Ū3 ) − a1 (1 − k) p−1 + a2 (1 + k) p−1 , (5.179)


5.4 Adaptive pth Moment Exponential Synchronization … 213

where

Ū2 = Ḡ 1 (( p − 2)(1 + k) p−1 + 2),

Ū3 = Ḡ 2 ( p − 2)(1 + k) p−1 ,

Ḡ 1 = G 1 − (1/2)( p − 1)H1 ,

Ḡ 2 = G 2 − (1/2)( p − 1)H2 ,

and

(V1 + V̄2 + V̄3 )c2 + a2 c2 k(1 − k) p−1 + b1 k <


 
 S (5.180)
− ηqi + (1 + k) p−1 γik qk (1 − τ̂ ), ∀i ∈ S,
k=1

where

V̄2 = Ḡ 1 ( p − 2)k(1 + k) p−1

V̄3 = Ḡ 2 (( p − 2)k(1 + k) p−1 + 2).

Under Assumptions 5.40, the noiseless-perturbed response system can be adaptive


exponential synchronized in pth moment with the drive neural network, if the feed-
back gain K (t) of the controller (5.149) with the update law is chosen as (5.156)
and the parameters update laws of matrices Cˆ i , Âi , and B̂ i are chosen as (5.157).

Proof The proof is similar to that of Theorem 5.45, and hence omitted.

5.4.4 Numerical Examples

In the section, we present an example to illustrate the usefulness of the main results
obtained in this section. The adaptive exponential stability in pth moment is examined
for a given stochastic delayed neural networks with Markovian jumping parameters.

Example 5.49 Consider the delayed neural networks (5.147) with Markovian switch-
ing, the response stochastic delayed neural networks (5.148) with Markovian switch-
ing, and the error system (5.150) with the network parameters given as follows:
214 5 Stability and Synchronization of Neutral-Type Neural Networks

Fig. 5.10 The response 5


curve of e1 (t) and e2 (t) of e (t)
1
4
the errors system e2(t)
3

−1

−2

−3

−4
0 100 200 300 400 500 600
t


    
2.1 0 2.5 0 1.2 −1.5
C1 = , C2 = , A1 = ,
0 2.8 0 2.2 −1.7 1.2
     
1.1 −1.6 0.7 −0.2 −0.4 −0.1
A2 = , B1 = , B2 = ,
−1.8 1.2 0 0.3 −0.3 0.5
     
0.6 0.8 −0.12 0.12
D1 = D̂1 = , D2 = D̂2 = ,Γ = ,
0.1 0.2 0.11 −0.11
α11 = α12 = α21 = α22 = β11 = β12 = β21 = β22 = 1,
σ(t, e(t), e(t − τ ), 1) = (0.4e1 (t − τ ), 0.5e2 (t))T ,
σ(t, e(t), e(t − τ ), 2) = (0.5e1 (t), 0.3e2 (t − τ ))T ,
p = 3, L = 1, f (x(t)) = tanh(x(t)), τ = 0.12.

It can be checked that Assumptions 5.40, 5.41, and the inequality (5.180) are
satisfied and the matrix M is a nonsingular M-matrix. So the noise-perturbed response
system (5.148) can be adaptive exponential synchronized in pth moment with the
drive neural network (5.147) by Theorem 5.45. The simulation results are given
in Figs. 5.10, 5.11, 5.12, 5.13, and 5.14. Among them, Fig. 5.10 shows the state
response of errors system e1 (t), e2 (t). Figure 5.11 shows the feedback gain k1 , k2 .
Figures 5.12, 5.13, and 5.14 show the parameters update laws of matrices C, # A,#
#
B chosen as c1 (t), c2 (t), a11 (t), a12 (t), a21 (t), a22 (t), b11 (t), b12 (t), b21 (t), and
b22 (t). From the simulations figures, one can see that the stochastic delayed neural
networks with Markovian switching (5.147) and (5.148) are adaptive exponential
synchronization in pth moment.
5.4 Adaptive pth Moment Exponential Synchronization … 215

Fig. 5.11 The dynamic 2


curve of the feedback gains k (t)
1
k1 and k2 0 k2(t)

−2

−4

−6

−8

−10

−12
0 100 200 300 400 500 600
t

Fig. 5.12 The dynamic 8


curve of the parameters c1 (t)
7
and c2 (t)
c (t)
6 1
c (t)
2
5

0
0 100 200 300 400 500 600
t

Fig. 5.13 The dynamic 4


curve of the parameters
3
a11 (t), a12 (t), a21 (t), and
a22 (t) 2 a11(t)
a (t)
1 12
a (t)
21
0
a (t)
22
−1

−2

−3

−4
0 100 200 300 400 500 600
t
216 5 Stability and Synchronization of Neutral-Type Neural Networks

Fig. 5.14 The dynamic 0.6


curve of the parameters
b11 (t), b12 (t), b21 (t), and 0.4
b22 (t) b (t)
0.2 11
b (t)
12
0
b21(t)
−0.2 b (t)
22

−0.4

−0.6

−0.8
0 100 200 300 400 500 600
t

5.4.5 Conclusions

In this section, we have dealt with the problem of the mode and delay-dependent
adaptive exponential synchronization in pth moment for neural networks with sto-
chastic delayed and Markovian jumping parameters. We have removed the traditional
monotonicity and smoothness assumptions on the activation function. A M-matrix
approach has been developed to solve the problem addressed. The conditions for
the adaptive exponential synchronization in pth moment have been derived in terms
of some algebraical inequalities. These synchronization conditions are much differ-
ent to those of linear matrix inequality. Finally, a simple example has been used to
demonstrate the effectiveness of the main results which obtained in this section.

5.5 Adaptive Synchronization of Neutral-Type SNN


with Mixed Time Delays

5.5.1 Introduction

During the past two decades, chaos synchronization has played a significant role in
nonlinear science since it can be applied to create chemical and biological systems,
image processing, secure communication systems, information science, and so on.
Different concepts of synchronization, like complete synchronization, generalized
synchronization, phase synchronization, lag synchronization, and anticipated syn-
chronization, have been widely investigated. Researchers used to synchronize two
chaotic systems by following synchronization strategies: adaptive control method,
feedback control method, active control method, etc.
5.5 Adaptive Synchronization of Neutral-Type SNN … 217

Recently, the practicality of neutral-type models attracts researchers to investigate


the stability and stabilization of the neutral-type neural networks, like [20]. How-
ever, the synchronization of coupled neutral-type neural networks has been rarely
researched (see [34, 75]). Due to the fact that many systems in the real world can be
described by neutral-type neural networks, the investigation on the synchronization
of coupled neutral-type neural networks has a lot of potential applications in many
areas.
It is well known that time delays present complex and unpredictable behaviors
in practice, which are often caused by the finite switching speeds of the amplifiers.
The investigations on synchronization of neural networks discussed in [7, 21, 34,
44, 46, 65, 75] just consider the discrete delays, and just [53, 83] take the distributed
delays into consideration. However, the neural signal propagation is often distributed
during a certain time period with the presence of an amount of parallel pathways with
a variety of axon sizes and lengths. Hence, the distributed delays would be put in our
models.
Furthermore, in real world, fluctuations from the release of neurotransmitters and
other probabilistic causes may affect the stability property of neutral-type neural
networks. However, due to the difficulty of mathematics, noise perturbations have
been seldom applied to study synchronization problems (see [21, 44, 46, 65, 83]).
Adding noise perturbations to our model makes the results obtained in this section
more general and realistic. In practice, the weight coefficients of neurons rely on cer-
tain capacitance and resistance values which are subject to parameter uncertainties.
Our main target is to find sufficient conditions to ensure the adaptive synchro-
nization for stochastic neural networks of neutral-type with mixed time delays and
parameter uncertainties. Inspired by recently well-studied works [21, 83], in this
section, an adaptive feedback controller is proposed for the synchronization of cou-
pled neutral-type neural networks with stochastic perturbation, based on LaSalle-
type invariance principle for stochastic differential delay equations, the stochastic
analysis theory, and the adaptive feedback control technique. To achieve the syn-
chronization of coupled stochastic neutral-type neural networks, we develop a linear
matrix inequality (LMI, for short) approach to derive some new criteria. Finally, a
numerical example and its simulations are given to show the effectiveness of our
results.

5.5.2 Problem Formulation

Consider the following neural networks of neutral type with time-varying discrete
delays and distributed delays described by the following differential equation:

d[x(t) − Dx(t − τ1 (t))] = −Ct x(t) + At f˜(x(t)) + Bt g̃(x(t − τ2 (t)))
t  (5.181)
+ E t t−τ3 (t) h̃(x(s))ds + J dt,
218 5 Stability and Synchronization of Neutral-Type Neural Networks

where n is the number of neurons in the indicated neural network, x(t) = [x1 (t), x2
(t), . . . , xn (t)]T ∈ Rn is the state vector associated with n neurons, J = [J1 , J2 , . . . ,
Jn ]T ∈ Rn is the external constant input vector, f˜(·), g̃(·), h̃(·) denote the neuron
activation functions, and τk (t) (k = 1, 2, 3) represent the time-varying delays. In
system (5.181),

At = A + ΔA(t), Bt = B + ΔB(t),
(5.182)
Ct = C + ΔC(t), E t = E + ΔE(t),

where the diagonal matrix C = diag{c1 , c2 , . . . , cn }, D = diag{d1 , d2 , . . . , dn } has


positive entries ci > 0, di > 0 (i = 1, 2, . . . , n); A, B, and E are the interconnection
matrices representing the weight coefficients of the neurons; and ΔA, ΔB, ΔC, and
ΔE represent the parameter uncertainties of the system, which are assumed to be of
the form
 
ΔA(t)Δ(t) ΔC(t) ΔE(t)  (5.183)
= M F(t) N A N B NC N E

where M, N A , N B , NC , and N E are some given constant matrices with appropriate


dimensions; and F(t) is an unknown matrix representing the parameter perturbation
which satisfies

F T (t)F(t) ≤ I. (5.184)

We consider the model (5.181) as the drive system. The response system is

d[y(t) − Dy(t − τ1 (t))] = −Ct y(t) + At f˜(y(t)) + Bt g̃(y(t − τ2 (t)))
t 
+ E t t−τ 3 (t) h̃(y(s))ds + J + u(t) dt + σ(t, y(t) − x(t),
y(t − τ1 (t)) − x(t − τ1 (t)), y(t − τ2 (t)) − x(t − τ2 (t)),
y(t − τ3 (t)) − x(t − τ3 (t)))dω(t),
(5.185)

where u(t) = [u 1 (t), u 2 (t), . . . , u n (t)]T ∈ Rn is the controller, ω(t) = [ω1 , ω2 , . . . ,


ωn ]T is an n-dimensional Brownian motion defined on a complete probability space
(Ω, F, P) with a natural filtration {Ft }t≥0 (i.e., Ft = σ{ω(s) : 0 ≤ s ≤ t}), and
σ : R+ × Rn × Rn → Rn×n is the noise intensity matrix. It is known that external
random fluctuation and other probabilistic causes often lead to this type of stochastic
perturbations.
Let e(t) = y(t) − x(t) be the synchronization error, then the system of synchro-
nization error can be written as follows:

d[e(t) − De(t − τ1 (t))] = − Ct e(t) + At f (e(t)) + Bt g(e(t − τ2 (t)))
t 
+ E t t−τ3 (t) h(e(s))ds + u(t) dt + σ(t, e(t),
e(t − τ1 (t)), e(t − τ2 (t)), e(t − τ3 (t)))dω(t),
(5.186)
5.5 Adaptive Synchronization of Neutral-Type SNN … 219

where f (e(t)) = f˜(x(t) + e(t)) − f˜(x(t)), g(e(t)) = g̃(x(t) + e(t)) − g̃(x(t)),


h(e(t)) = h̃(x(t) + e(t)) − h̃(x(t)).
Throughout the section, we assume that f˜(t), g̃(t), h̃(t), and σ(·) satisfy the usual
local Lipschitz condition and linear growth condition. It is known from [31] that
e(θ) = ξ(t) on −τ ≤ θ ≤ 0 in C2F0 ([−τ , 0]; Rn ) for any given initial data, and
the error system (5.186) has a unique global solution on t ≥ 0 denoted by e(t; ξ).
We write e(t; ξ) = e(t) for simplicity. Let C2,1 (Rn × R+ ; R+ ) denotes the family
of all nonnegative functions V (t, e(t)) on Rn × R+ which are continuously twice
differentiable in e(t) and differentiable in t. For each V ∈ C2,1 (Rn ×R+ ; R+ ), along
the trajectory of the system (5.186), we define an operator LV from Rn × R+ to
R by

LV (t, e(t)) = Vt (t, e(t)) + Ve (t, e(t)) − Ct e(t) + At f (e(t)) + Bt g(e(t − τ2 (t)))
t 
+ E t t−τ 3 (t)
h(e(s))ds + u(t) + 21 trace[σ T (t, e(t), e(t − τ1 (t)),
e(t − τ2 (t)), e(t − τ3 (t)))Vee σ(t, e(t), e(t − τ1 (t)), e(t − τ2 (t)),
e(t − τ3 (t)))],
(5.187)

∂V (t,e(t))
where Vt (t, e(t)) = ∂t , Ve (t, e(t)) = ( ∂V (t,e(t))
∂e1 , ∂V (t,e(t))
∂e2 , . . . , ∂V (t,e(t))
∂en ),
∂ 2 V (t,e(t))
Vee (t, e(t)) = ( ∂ei ∂e j )n×n .
To prove our main results, the following assumptions are needed.
Assumption 5.50 There exist diagonal matrices L i− = diag{li1− − −
, li2 , . . . , lin } and
+ + + +
L i = diag{li1 , li2 , . . . , lin }, i = 1, 2, 3 satisfying

f˜j (u) − f˜j (v) g̃ j (u) − g̃ j (v) h̃ j (u) − h̃ j (v)


l1−j ≤ ≤ l1+j , l2−j ≤ ≤ l2+j , l3−j ≤ ≤ l3+j ,
u−v u−v u−v

for all u, v ∈ Rn , u = v, j = 1, 2, . . . , n.
Assumption 5.51 There exist positive constants τ1 , τ2 , τ3 , μ1 , μ2 , and μ3 such that

0 ≤ τ1 (t) ≤ τ1 , 0 ≤ τ2 (t) ≤ τ2 , 0 ≤ τ3 (t) ≤ τ3 ,

τ̇1 (t) ≤ μ1 < 1, τ̇2 (t) ≤ μ2 < 1, τ̇3 (t) ≤ μ3 < 1.

Assumption 5.52 There exist positive definite matrices R1 , R2 , R3 , and R4 such


that

trace[σ T (t, x1 , x2 , x3 , x4 )σ(t, x1 , x2 , x3 , x4 )]


≤ x1T R1 x1 + x2T R2 x2 + x3T R3 x3 + x4T R4 x4 ,

for all x1 , x2 , x3 , x4 ∈ Rn and t ∈ R+ .


Assumption 5.53 σ(t, 0, 0, 0, 0) ≡ 0.
220 5 Stability and Synchronization of Neutral-Type Neural Networks

Assumption 5.54 The matrix D satisfies

ρ(D) < 1,

where the notation ρ(D) is the spectral radius of D.

By the facts that f (0) = g(0) = h(0) = 0 and σ(t, 0, 0, 0, 0) ≡ 0, the system
(5.186) admits a trivial solution e(t; 0) ≡ 0 corresponding to the initial data ξ = 0.
Hence, if the trivial solution of the system (5.186) is globally almost surely asymp-
totically stable, the system (5.181) and system (5.185) achieve synchronization for
almost every initial data.
Next, we introduce the definition of stochastic synchronization under almost every
initial data for the two coupled neural networks (5.181) and (5.185).
Definition 5.55 The two coupled neural networks (5.181) and (5.185) are said
to be stochastic synchronization for almost every initial data if for every ξ ∈
C2F0 ([−τ , 0]; Rn ),

lim e(t; ξ) = 0 a.s.,


t→∞

where “a.s.” denotes “almost surely.”

5.5.3 Main Results and Proofs

In this section, the stochastic synchronization for the two coupled neural networks
(5.181) and (5.185) is investigated under Assumptions 5.50–5.54.
Firstly, we deal with the synchronization of neural networks (5.181) and (5.185)
without the parameter uncertainties ΔA(t), ΔB(t), ΔC(t), and ΔE(t).
Theorem 5.56 Under Assumptions 5.50–5.54, the two coupled neural networks
(5.181) and (5.185) without the parameter uncertainties can be synchronized for
almost every initial data, if there exist positive diagonal matrices H1 , H2 , H3 ,
P = diag{ p1 , p2 , . . . , pn }, positive definite matrices Q 1 , Q 2 , Q 3 , S1 , S2 , and a
positive scalar λ such that the following LMIs hold:

P ≤ λI, (5.188)

τ3 (S1 + S2 ) ≤ Q 3 , (5.189)
5.5 Adaptive Synchronization of Neutral-Type SNN … 221
⎡ ⎤
Π11 Π12 0 PA 0 PB PE 0
⎢ ∗ Π22 0 −D P A 0 −D P B 0 D P E⎥
⎢ ⎥
⎢ ∗ ∗ Π33 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ −H1 0 0 0 0 ⎥
Π =⎢
⎢ ∗
⎥ < 0, (5.190)
⎢ ∗ ∗ ∗ Π55 0 0 0 ⎥⎥
⎢ ∗ ∗ ∗ ∗ 0 −H2 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ ∗ −S1 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ ∗ −S2

where Π11 = −2PC + λ(R1 + R4 ) − 2αP + Q 1 + Q 2 + L 1 H1 L 1 + L 3 H3 L 3 ,


Π12 = D PC + αP D, Π22 = λR2 − (1 − μ1 )Q 1 ,
Π33 = λR3 − (1 − μ2 )Q 2 + L 2 H2 L 2 , Π55 = τ3 Q 3 − H3 .
And the adaptive feedback controller is designed as

u(t) = k(y(t) − x(t)), (5.191)

where the feedback strength k = diag{k1 , k2 , . . . , kn } is updated by the following


law:

k̇i = −ϕi ei2 (t) + ϕi di ei (t)ei (t − τ1 (t)), (5.192)

with ϕi > 0 (i = 1, 2, . . . , n), an arbitrary positive constant.

Proof Consider the following Lyapunov-Krasovskii function for system (5.186) as


5
V (t, e(t)) = Vi (t, e(t)), (5.193)
i=1

where
V1 (t, e(t)) = [e(t)
t − De(t − τ1 (t))] P[e(t) − De(t − τ1 (t))],
T

V2 (t, e(t)) = t−τ1 (t) e T (s)Q 1 e(s)ds,


t
V3 (t, e(t)) = t−τ2 (t) e T (s)Q 2 e(s)ds,
0 t
V4 (t, e(t)) = −τ3 (t) t+γ h T (e(s))Q 3 h(e(s))dsdγ,
n
pi
V5 (t, e(t)) = ϕi (ki + α) ,
2
i=1
with that Q 1 , Q 2 , Q 3 , P = diag{ p1 , p2 , . . . , pn } are positive definite matrices, and
α, pi (i = 1, 2, . . . , n) are positive constants.
Then it follows from (5.186) and (5.187) that
222 5 Stability and Synchronization of Neutral-Type Neural Networks

LV1 (t, e(t)) = 2[e(t) − De(t − τ1 (t))]T P − Ce(t) + A f (e(t)) + Bg(e(t − τ2 (t)))
t 
+ E t−τ 3 (t)
h(e(s))ds + ke(t) + trace[σ T (t, e(t), e(t − τ1 (t)),
e(t − τ2 (t)), e(t − τ3 (t)))Pσ(t, e(t), e(t − τ1 (t)), e(t − τ2 (t)),
e(t − τ3 (t)))]
= e T (t)[−2PC]e(t) + e T (t)[2P A] f (e(t))
t
+e T (t)[2P B]g(e(t − τ2 (t))) + e T (t)[2P E] t−τ 3 (t)
h(e(s))ds
+e T (t − τ1 (t))[2D PC]e(t) + e T (t − τ1 (t))[−2D P A] f (e(t))
+e T (t − τ1 (t))[−2D P B]g(e(t − τ2 (t)))
t
+e T (t − τ1 (t))[−2D P E] t−τ 3 (t)
h(e(s))ds
n 
n
+2 pi ki ei2 (t) − 2 di pi ki ei (t)ei (t − τ1 (t))
i=1 i=1
+trace[σ T (t, e(t), e(t − τ1 (t)), e(t − τ2 (t)), e(t − τ3 (t)))Pσ(t, e(t),
e(t − τ1 (t)), e(t − τ2 (t)), e(t − τ3 (t)))].
(5.194)

From Lemma 1.13, we have


t
e T (t)[2P E] t−τ3 (t) h(e(s))ds
≤ e T (t)[P E S1−1 E T P T ]e(t) (5.195)
T 
t t
+ t−τ3 (t) h(e(s))ds S1 t−τ3 (t) h(e(s))ds ,

t
e T (t − τ1 (t))[−2D P E] t−τ3 (t) h(e(s))ds
≤ e T (t − τ1 (t))[D P E S2−1 E T P T D T ]e(t − τ1 (t)) (5.196)
T 
t t
+ t−τ3 (t) h(e(s))ds S2 t−τ3 (t) h(e(s))ds ,

where S1 and S2 are two positive definite matrices.


Utilizing Lemma 1.20 yields
T 
t t
t−τ3 (t) h(e(s))ds (S1 + S2 ) t−τ3 (t) h(e(s))ds
t
≤ τ3 (t) t−τ3 (t) h T (e(s))(S1 + S2 )h(e(s))ds (5.197)
t
≤ t−τ3 (t) h (e(s))[τ3 (S1 + S2 )]h(e(s))ds.
T

It follows from Assumption 5.52 and (5.188) that

trace[σ T (t, e(t), e(t − τ1 (t)), e(t − τ2 (t)), e(t − τ3 (t)))Pσ(t, e(t),
e(t − τ1 (t)), e(t − τ2 (t)), e(t − τ3 (t)))]
≤ λmax (P)trace[σ T (t, e(t), e(t − τ1 (t)), e(t − τ2 (t)), e(t − τ3 (t)))σ(t, e(t),
e(t − τ1 (t)), e(t − τ2 (t)), e(t − τ3 (t)))]
≤ λ[e T (t)R1 e(t) + e T (t − τ1 (t))R2 e(t − τ1 (t)) + e T (t − τ2 (t))R3 e(t − τ2 (t))
+e T (t − τ3 (t))R4 e(t − τ3 (t))].
(5.198)
5.5 Adaptive Synchronization of Neutral-Type SNN … 223

By Ito’s differential formula [82], we could infer that

LV2 (t, e(t)) = e T (t)Q 1 e(t) − (1 − τ˙1 (t))e T (t − τ1 (t))Q 1 e(t − τ1 (t))
(5.199)
≤ e T (t)Q 1 e(t) − e T (t − τ1 (t))[(1 − μ1 )Q 1 ]e(t − τ1 (t)),

LV3 (t, e(t)) = e T (t)Q 2 e(t) − (1 − τ˙2 (t))e T (t − τ2 (t))Q 2 e(t − τ2 (t))
(5.200)
≤ e T (t)Q 2 e(t) − e T (t − τ2 (t))[(1 − μ2 )Q 2 ]e(t − τ2 (t)),
t
LV4 (t, e(t)) = τ3 (t)h T (e(t))Q 3 h(e(t)) − t−τ3 (t) h T (e(s))Q 3 h(e(s))ds
t (5.201)
≤ h T (e(t))[τ3 Q 3 ]h(e(t)) − t−τ3 (t) h T (e(s))Q 1 h(e(s))ds,

n
pi  n
LV5 (t, e(t)) = 2 (ki +α)k̇i = −2 pi (ki +α)(ei2 (t)−di ei (t)ei (t −τ1 (t))).
ϕi
i=1 i=1
(5.202)
Furthermore, the condition (5.189) yields
 t  t
h T (e(s))[τ3 (S1 + S2 )]h(e(s))ds − h T (e(s))Q 3 h(e(s))ds ≤ 0.
t−τ3 (t) t−τ3 (t)
(5.203)
On the other hand, from Assumption 5.50, it follows that

e T (t)L 1 H1 L 1 e(t) − f T (e(t))H1 f (e(t)) ≥ 0, (5.204)

e T (t)L 2 H2 L 2 e(t) − h T (e(t))H2 h(e(t)) ≥ 0, (5.205)

e T (t −τ2 (t))L 3 H3 L 3 e(t −τ2 (t))−g T (e(t −τ2 (t)))H3 g(e(t −τ2 (t))) ≥ 0, (5.206)

where H1 , H2 , and H3 are positive diagonal matrices, and L j = diag{l j1 , l j2 , . . . ,


l jn }, l ji = max{|l − +
ji |, |l ji |} ( j = 1, 2, 3) for i = 1, 2, . . . , n.
Substituting inequalities (5.194)–(5.206) into (5.193), it can be derived that

LV (t, e(t)) = e T (t)[−2PC + Q 1 + Q 2 − 2αP + λ(R1 + R4 ) + P E S1−1 E T P T


+L 1 H1 L 1 + L 2 H2 L 2 ]e(t) + e T (t)[2P A] f (e(t))
+e T (t)[2P B]g(e(t − τ2 (t))) + e T (t − τ1 (t))[2D PC + 2αP D]e(t)
+e T (t − τ1 (t))[−2D P A] f (e(t))
+e T (t − τ1 (t))[−2D P B]g(e(t − τ2 (t)))
+e T (t − τ1 (t))[λR2 − (1 − μ1 )Q 1 + D P E S2−1 E T P T D T ]e(t − τ1 (t))
+e T (t − τ2 (t))[λR3 − (1 − μ2 )Q 2 + L 3 H3 L 3 ]e(t − τ2 (t))
+h T (e(t))[τ3 Q 3 − H2 ]h(e(t)) + f T (e(t))[−H1 ] f (e(t))
+g T (e(t − τ2 (t)))[−H3 ]g(e(t − τ2 (t))) − e T (t)[λR4 ]e(t)
+e T (t − τ3 (t))[λR4 ]e(t − τ3 (t))
= Ψ T (t)Ξ Ψ (t) − e T (t)[λR4 ]e(t) + e T (t − τ3 (t))[λR4 ]e(t − τ3 (t)),
(5.207)
224 5 Stability and Synchronization of Neutral-Type Neural Networks

where

Ψ T (t) = [e T (t), e T (t−τ1 (t)), e T (t−τ2 (t)), f T (e(t)), h T (e(t)), g T (e(t−τ2 (t)))]T ,
⎡ ⎤
Ξ11 D PC + αP D 0 PA 0 PB
⎢ ∗ Ξ22 0 −D P A 0 −D P B ⎥
⎢ ⎥
⎢ ∗ ∗ Ξ33 0 0 0 ⎥
Ξ =⎢
⎢ ∗
⎥ < 0,
⎢ ∗ ∗ −H1 0 0 ⎥ ⎥
⎣ ∗ ∗ ∗ ∗ τ3 Q 3 − H2 0 ⎦
∗ ∗ ∗ ∗ ∗ −H3

Ξ11 = −2PC+λ(R1 +R4 )−2αP+Q 1 +Q 2 +L 1 H1 L 1 +L 2 H2 L 2 +P E S1−1 E T P T ,


Ξ22 = λR2 − (1 − μ1 )Q 1 + D P E S2−1 E T P T D T ,
Ξ33 = λR3 − (1 − μ2 )Q 2 + L 3 H3 L 3 .
Using Lemma 1.21, Π < 0 is equivalent to Ξ < 0. Let ν = λmin (−Π ), clearly,
the constant ν > 0. This fact together with (5.207) gives

LV (t, e(t)) ≤ −e T (t)(λR4 + ν I )e(t) + e T (t − τ3 (t))(λR4 − ν I )e(t − τ3 (t))


= −ω1 (e(t)) + ω2 (e(t − τ3 (t))),
(5.208)
where ω1 (e(t)) = e T (λR4 + ν I )e(t) and ω2 (e(t)) = e T (λR4 − ν I )e(t).
It can be seen that ω1 (e(t)) > ω2 (e(t)) for any e(t) = 0. Therefore, apply-
ing LaSalle-type invariance principle for the stochastic differential delay equations,
we can conclude that the two coupled neural networks (5.181) and (5.185) can be
synchronized for almost every initial data. This completes the proof.

Let D = 0, from Theorem 5.56, we obtain the following corollary.


Corollary 5.57 Under Assumptions 5.50–5.54, the two coupled neural networks
(5.181) and (5.185) without the parameter uncertainties and with D = 0 can be
synchronized for almost every initial data, if there exist positive diagonal matrices
H1 , H2 , H3 , P = diag{ p1 , p2 , . . . , pn }, positive definite matrices Q 2 , Q 3 , S1 , and
a positive scalar λ such that the following LMIs hold:

P ≤ λI, (5.209)

τ3 S1 ≤ Q 3 , (5.210)
⎡ ⎤
Θ11 0 PA 0 PB PE
⎢ ∗ Θ22 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ −H1 0 0 0 ⎥
Θ=⎢
⎢ ∗
⎥ < 0, (5.211)
⎢ ∗ ∗ Θ44 0 0 ⎥⎥
⎣ ∗ ∗ ∗ 0 −H3 0 ⎦
∗ ∗ ∗ ∗ ∗ −S1
5.5 Adaptive Synchronization of Neutral-Type SNN … 225

where Θ11 = −2PC + λ(R1 + R4 ) − 2αP + Q 2 + L 1 H1 L 1 + L 2 H2 L 2 ,


Θ22 = λR3 − (1 − μ2 )Q 2 + L 3 H3 L 3 , Θ44 = τ3 Q 3 − H2 .
And the adaptive feedback controller is designed as

u(t) = k(y(t) − x(t)), (5.212)

where the feedback strength k = diag{k1 , k2 , . . . , kn } is updated by the following


law:

k̇i = −ϕi ei2 (t), (5.213)

with ϕi > 0 (i = 1, 2, . . . , n), an arbitrary positive constant.

Remark 5.58 For the case of D = 0 and ΔA = ΔB = ΔC = ΔE = 0, the


systems are no longer neutral-type neural networks and the parameters are constant.
By setting D = 0 in Theorem 5.56, we can obtain the adaptive synchronization result
Theorem 5.51 in [83].

Theorem 5.56 gives a new sufficient condition to prove that the two coupled neural
networks (5.181) and (5.185) can be synchronized for almost every initial data. It
makes Theorem 5.56 a little conservatism that it only depends on delay constants
τ3 , μ1 , and μ2 . By constructing a different Lyapunov-Krasovskii function, the next
theorem depends on all the delay constants τk , μk (k = 1, 2, 3), such that it is less
conservative than Theorem 5.56.

Theorem 5.59 Under Assumptions 5.50–5.54, the two coupled neural networks
(5.181) and (5.185) without the parameter uncertainties can be synchronized for
almost every initial data, if there exist positive diagonal matrices H1 , H2 , H4 ,
P = diag{ p1 , p2 , . . . , pn }, positive definite matrices Q 1 , Q 2 , Q 3 , T1 , T2 , T3 , T4 , T5 ,
S1 , S2 , and a positive scalar λ such that the following LMIs hold:

P ≤ λI, (5.214)

τ3 (S1 + S2 ) ≤ Q 3 , (5.215)
⎡ ⎤
X 11 X 12 0 0 PA 0 PB 0 0 PE 0
⎢ ∗ X 22 0 0 X 25 0 X 27 0 0 0 X 211 ⎥
⎢ ⎥
⎢ ∗ ∗ X 33 0 0 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ X 44 0 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ X 55 0 0 0 0 0 0 ⎥
⎢ ⎥
X =⎢
⎢ ∗ ∗ ∗ ∗ ∗ X 66 0 0 0 0 0 ⎥ ⎥ < 0, (5.216)
⎢ ∗ ∗ ∗ ∗ ∗ ∗ X 77 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ ∗ X 88 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ X 99 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −S1 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −S2
226 5 Stability and Synchronization of Neutral-Type Neural Networks

where
X 11 = −2PC + Q 1 + Q 2 −2αP +λ(R1 + R4 )+ L 1 H1 L 1 + L 2 H2 L 2 + L 4 H4 L 4 +T1 ,
X 12 = D PC + αP D, X 22 = λR2 − (1 − μ1 )Q 1 , X 25 = −D P A, X 27 = −D P B,
X 211 = D P E, X 33 = λR3 − (1 − μ2 )Q 2 , X 44 = −(1 − μ3 )T1 ,
X 55 = −H1 + τ1 T4 + τ2 T5 , X 66 = −H4 + T2 , X 77 = τ3 Q 3 − H2 + T3 ,
X 88 = −(1 − μ2 )T2 , X 99 = −(1 − μ3 )T3 .

And the adaptive feedback controller is designed as

u(t) = k(y(t) − x(t)), (5.217)

where the feedback strength k = diag{k1 , k2 , . . . , kn } is updated by the following


law:

k̇i = −ϕi ei2 (t) + ϕi di ei (t)ei (t − τ1 (t)), (5.218)

with ϕi > 0 (i = 1, 2, . . . , n), an arbitrary positive constant.

Proof Consider the following Lyapunov-Krasovskii function:


10
V (t, e(t)) = Vi (t, e(t)), (5.219)
i=1

where t
V6 (t, e(t)) = t−τ3 (t) e T (s)T1 e(s)ds,
t
V7 (t, e(t)) = t−τ2 (t) g T (e(s))T2 g(e(s))ds,
t
V8 (t, e(t)) = t−τ3 (t) h T (e(s))T3 h(e(s))ds,
0 t
V9 (t, e(t)) = −τ1 (t) t+γ f T (e(s))T4 f (e(s))dsdγ,
0 t
V10 (t, e(t)) = −τ2 (t) t+γ f T (e(s))T5 f (e(s))dsdγ,
with that T1 , T2 , T3 , T4 , T5 are positive definite matrices.
By Ito’s differential formula, we could infer that

LV6 (t, e(t)) = e T (t)T1 e(t) − (1 − τ˙3 (t))e T (t − τ3 (t))T1 e(t − τ3 (t))
(5.220)
≤ e T (t)T1 e(t) − e T (t − τ3 (t))[(1 − μ3 )T1 ]e(t − τ3 (t)),

LV7 (t, e(t)) = g T (e(t))T2 g(e(t)) − (1 − τ˙2 (t))g T (e(t − τ2 (t)))T2 g(e(t − τ2 (t)))
≤ g T (e(t))T2 g(e(t)) − g T (e(t − τ2 (t)))[(1 − μ2 )T2 ]g(e(t − τ2 (t))),
(5.221)

LV8 (t, e(t)) = h T (e(t))T3 h(e(t)) − (1 − τ˙3 (t))h T (e(t − τ3 (t)))T3 h(e(t − τ3 (t)))
≤ h T (e(t))T3 h(e(t)) − h T (e(t − τ3 (t)))[(1 − μ3 )T3 ]h(e(t − τ3 (t))),
(5.222)
5.5 Adaptive Synchronization of Neutral-Type SNN … 227

t
LV9 (t, e(t)) = τ1 (t) f T (e(t))T4 f (e(t)) − t−τ1 (t) f T (e(s))T4 f (e(s))ds
(5.223)
≤ f T (e(t))[τ1 T4 ] f (e(t)),
t
LV10 (t, e(t)) = τ2 (t) f T (e(t))T5 f (e(t)) − t−τ2 (t) f T (e(s))T5 f (e(s))ds
≤ f T (e(t))[τ2 T5 ] f (e(t)).
(5.224)
Using Assumption 5.50 yields

e T (t)L 4 H4 L 4 e(t) − g T (e(t))H4 g(e(t)) ≤ 0, (5.225)

where H4 is a positive diagonal matrix, and L j = diag{l j1 , l j2 , . . . , l jn }, l ji =


max{|l − +
ji |, |l ji |} ( j = 1, 2, 3) for i = 1, 2, . . . , n.
Substituting inequalities (5.194)–(5.205) and (5.220)–(5.225) into (5.219), it can
be derived that

LV (t, e(t)) = e T (t)[−2PC + Q 1 + Q 2 − 2αP + λ(R1 + R4 ) + P E S1−1 E T P T


+L 1 H1 L 1 + L 2 H2 L 2 + L 4 H4 L 4 + T1 ]e(t) + e T (t)[2P A] f (e(t))
+e T (t)[2P B]g(e(t − τ2 (t))) + e T (t − τ1 (t))[2D PC + 2αP D]e(t)
+e T (t − τ1 (t))[−2D P A] f (e(t))
+e T (t − τ1 (t))[−2D P B]g(e(t − τ2 (t)))
+e T (t − τ1 (t))[λR2 − (1 − μ1 )Q 1 + D P E S2−1 E T P T D T ]e(t − τ1 (t))
+e T (t − τ2 (t))[λR3 − (1 − μ2 )Q 2 ]e(t − τ2 (t))
+e T (t − τ3 (t))[−(1 − μ3 )T1 ]e(t − τ3 (t))
+h T (e(t))[τ3 Q 3 − H2 + T3 ]h(e(t))
+ f T (e(t))[−H1 + τ1 T4 + τ2 T5 ] f (e(t)) + g T (e(t))[−H4 + T2 ]g(e(t))
+h T (e(t − τ3 (t)))[−(1 − μ3 )T3 ]h(e(t − τ3 (t)))
+g T (e(t − τ2 (t)))[−(1 − μ2 )T2 ]g(e(t − τ2 (t)))
−e T (t)[λR4 ]e(t) + e T (t − τ3 (t))[λR4 ]e(t − τ3 (t))
= Φ T (t)ΛΦ(t) − e T (t)[λR4 ]e(t) + e T (t − τ3 (t))[λR4 ]e(t − τ3 (t)),
(5.226)

where
⎡ ⎤
e(t)
⎢ e(t − τ1 (t)) ⎥
⎢ ⎥
⎢ e(t − τ2 (t)) ⎥
⎢ ⎥
⎢ e(t − τ3 (t)) ⎥
⎢ ⎥
Φ(t) = ⎢
⎢ f (e(t)) ⎥,

⎢ g(e(t)) ⎥
⎢ ⎥
⎢ h(e(t)) ⎥
⎢ ⎥
⎣g(e(t − τ2 (t)))⎦
h(e(t − τ3 (t)))
228 5 Stability and Synchronization of Neutral-Type Neural Networks
⎡ ⎤
Λ11 X 12 0 0 PA 0 PB 0 0
⎢ ∗ Λ22 0 0 −D P A 0 −D P B 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ X 33 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ X 44 0 0 0 0 0 ⎥
⎢ ⎥
Λ=⎢
⎢ ∗ ∗ ∗ ∗ X 55 0 0 0 0 ⎥ ⎥ < 0,
⎢ ∗ ∗ ∗ ∗ ∗ X 66 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ X 77 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ ∗ ∗ X 88 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ X 99

with
Λ11 = −2PC + Q 1 + Q 2 − 2αP + λ(R1 + R4 ) + P E S1−1 E T P T + L 1 H1 L 1 +
L 2 H2 L 2 + L 4 H4 L 4 + T1 , Λ22 = λR2 − (1 − μ1 )Q 1 + D P E S2−1 E T P T D T .
Using Lemma 1.21, X < 0 is equivalent to Λ < 0, let ζ = λmin (−X ), clearly,
the constant ζ > 0. This fact together with (5.226) gives

LV (t, e(t)) ≤ −e T (t)(λR4 + ζ I )e(t) + e T (t − τ3 (t))(λR4 − ζ I )e(t − τ3 (t))


= −ς1 (e(t)) + ς2 (e(t − τ3 (t))),
(5.227)
where ς1 (e(t)) = e T (λR4 + ζ I )e(t) and ς2 (e(t)) = e T (λR4 − ζ I )e(t).
It is obvious that ς1 (e(t)) > ς2 (e(t)) for any e(t) = 0. Therefore, applying
LaSalle-type invariance principle for the stochastic differential delay equations, we
can conclude that the two coupled neural networks (5.181) and (5.185) can be syn-
chronized for almost every initial data. This completes the proof.
Let D = 0, from Theorem 5.59, we obtain the following corollary.
Corollary 5.60 Under Assumptions 5.50–5.54, the two coupled neural networks
(5.181) and (5.185) without the parameter uncertainties and with D = 0 can be
synchronized for almost every initial data, if there exist positive diagonal matrices
H1 , H2 , H4 , P = diag{ p1 , p2 , . . . , pn }, positive definite matrices Q 2 , Q 3 , T1 , T2 ,
T3 , T5 , S1 , and a positive scalar λ such that the following LMIs hold:

P ≤ λI, (5.228)

τ3 S1 ≤ Q 3 , (5.229)
⎡ ⎤
Z 11 0 0 PA 0 PB 0 0 PE
⎢ ∗ X 33 0 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ X 44 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ Z 44 0 0 0 0 0 ⎥
⎢ ⎥
Z =⎢
⎢ ∗ ∗ ∗ ∗ X 66 0 0 0 0 ⎥⎥ < 0, (5.230)
⎢ ∗ ∗ ∗ ∗ ∗ X 77 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ X 88 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ ∗ ∗ X 99 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −S1
5.5 Adaptive Synchronization of Neutral-Type SNN … 229

where
Z 11 = −2PC + Q 2 − 2αP + λ(R1 + R4 ) + L 1 H1 L 1 + L 2 H2 L 2 + L 4 H4 L 4 + T1 ,
Z 44 = −H1 + τ2 T5 .
And the adaptive feedback controller is designed as

u(t) = k(y(t) − x(t)), (5.231)

where the feedback strength k = diag{k1 , k2 , . . . , kn } is updated by the following


law:

k̇i = −ϕi ei2 (t), (5.232)

with ϕi > 0 (i = 1, 2, . . . , n), an arbitrary positive constant.


Remark 5.61 Similar to Corollary 5.57, by setting D = 0 in Theorem 5.59, we can
obtain the adaptive synchronization result Theorem 5.54 in [83].
We are in position of dealing with the adaptive synchronization problem of systems
(5.181) and (5.185) with the parameter uncertainties ΔA(t), ΔB(t), ΔC(t), and
ΔE(t). By Lemma 1.22, we can deduce the following result based on Theorem 5.59.
Theorem 5.62 Under Assumptions 5.50–5.54, the two coupled neural networks
(5.181) and (5.185) can be synchronized for almost every initial data, if there
exist positive diagonal matrices H1 , H2 , H4 , P = diag{ p1 , p2 , . . . , pn }, positive
definite matrices Q 1 , Q 2 , Q 3 , T1 , T2 , T3 , T4 , T5 , S1 , S2 , and positive scalars λ,
φ j ( j = 1, 2, . . . , 8) such that the following LMIs hold:

P ≤ λI, (5.233)

τ3 (S1 + S2 ) ≤ Q 3 , (5.234)
⎡ ⎤
Υ11 Υ12 0 0 Υ15 0 Υ17 0 0 Υ110 0 Υ112
⎢ ∗ X 22 0 0 Υ25 0 Υ27 0 0 0 Υ211 Υ212 ⎥
⎢ ∗ ∗ 0 ⎥
⎢ X 33 0 0 0 0 0 0 0 0 ⎥
⎢ ∗ ∗ ∗ X 44 0 0 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ X 55 0 0 0 0 0 0 0 ⎥
⎢ ∗ ∗ ∗ ∗ ∗ 0 ⎥
⎢ X 66 0 0 0 0 0 ⎥ < 0, (5.235)
⎢ ∗ ∗ ∗ ∗ ∗ ∗ X 77 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ ∗ X 88 0 0 0 0 ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ 0 ⎥
⎢ X 99 0 0 ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −S1 0 0 ⎥
⎣ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −S2 0

∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ Υ1212

where
Υ11 = X 11 + φ1 NCT NC , Υ12 = X 12 + φ2 NCT NC , Υ15 = P A + φ3 N AT N A ,
Υ17 = P B + φ4 N BT N B , Υ110 = P E + φ5 N ET N E , Υ25 = X 25 + φ6 N AT N A ,
Υ27 = X 27 + φ7 N BT N B , Υ211 = X 211 + φ8 N ET N E ,
230 5 Stability and Synchronization of Neutral-Type Neural Networks
 
Υ112 = P M P M P M P M P M 0 0 0 ,
 
Υ212 = 0 0 0 0 0 P M P M P M ,
Υ1212 = diag{−φ1 I, −φ2 I, −φ3 I, −φ4 I, −φ5 I, −φ6 I, −φ7 I, −φ8 I }.
And the adaptive feedback controller is designed as

u(t) = k(y(t) − x(t)), (5.236)

where the feedback strength k = diag{k1 , k2 , . . . , kn } is updated by the following


law:

k̇i = −ϕi ei2 (t) + ϕi di ei (t)ei (t − τ1 (t)), (5.237)

with ϕi > 0 (i = 1, 2, . . . , n), an arbitrary positive constant.

Proof Replacing A, B, C, and E in (5.216) with A + M F(t)N A , B + M F(t)N B ,


C + M F(t)NC , and E + M F(t)N E , respectively. Then utilizing Lemma 1.22, we
have
⎡ ⎤
Γ11 Γ12 0 0 Γ15 0 Γ17 0 0 Γ110 0
⎢ ∗ X 22 0 0 Γ25 0 Γ27 0 0 0 Γ211 ⎥
⎢ ⎥
⎢ ∗ ∗ X 33 0 0 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ X 44 0 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ X 55 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ X 66 0 0 0 0 0 ⎥
⎢ ⎥ < 0, (5.238)
⎢ ∗ ∗ ∗ ∗ ∗ ∗ X 77 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ ∗ X 88 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ X 99 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −S1 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ −S2

where
Γ11 = X 11 + φ−1
1 P M M P + φ1 N C N C ,
T T T
−1
Γ12 = X 12 + φ2 D P M M T P T D T + φ2 NCT NC ,
Γ15 = P A + φ−1
3 P M M P + φ3 N A N A ,
T T T
−1
Γ17 = P B + φ4 P M M P + φ4 N BT N B ,
T T

Γ110 = P E + φ−1
5 P M M P + φ5 N E N E ,
T T T
−1
Γ25 = X 25 + φ6 D P M M P D + φ6 N AT N A ,
T T T

Γ27 = X 27 + φ−1
7 D P M M P D + φ7 N B N B ,
T T T T

Γ211 = X 211 + φ−1


8 D P M M P D + φ8 N E N E .
T T T T

From Lemma 1.21, (5.238) holds if and only if (5.235) holds. This completes the
proof.
5.5 Adaptive Synchronization of Neutral-Type SNN … 231

Remark 5.63 In the main result Theorem 5.62, we considered the distributed delay,
the stochastic perturbation, and the parameter uncertainties. Two common sources of
the disturbances on neural networks, the parameter uncertainties and the stochastic
perturbations, are unavoidable in practice. Thus, compared with [34, 75, 83], our
models are more general and useful in practice.

5.5.4 Numerical Example

In this section, we give an example so as to demonstrate the effectiveness of Theorem


5.62.

Example 5.64 Consider the synchronization error system (5.186) with u(t) = ke(t)
such that k̇i = −ϕi ei2 (t) + ϕi di ei (t)ei (t −τ1 (t)), ω(t) is a two-dimensional Brown-
ian motion, where e(t) = (e1 (t), e2 (t))T is the state of the error system (5.186).
Take f (e(t)) = g(e(t)) = h(e(t)) = [tanh(e1 (t)), tanh(e2 (t))]T , τ1 (t) = 0.9,
τ2 (t) = 0.3, τ3 (t) = 0.4, L 1 = L 2 = L 3 = L 4 = 1, and

 σ(t, e(t), e(t − τ1 (t)), e(t − τ2 (t)), e(t − τ3 (t))) 


0.5e1 (t) + 0.2e1 (t − τ1 (t)) 0
= ,
0 0.5e2 (t − τ2 (t)) + 0.1e2 (t − τ3 (t))

then R1 = 0.4I , R2 = 0.2I , R3 = 0.4I , R4 = 0.1I .


Other parameters of the error system are given as follows:
     
0.5 −0.2 −3.5 −2 1.41 0
A= ,B = ,C = ,
2.75 −1.25 −2.5 −2.8 0 1.41
     
0.2 0 −15.5 1 0.1 0
D= ,E = , NA = ,
0 0.4 −0.2 4.3 −0.1 0.1
     
0.1 0 0.1 0.15 0.2 0
NB = , NC = , NE = ,
−0.1 0 0 −0.1 0 0.2
 
−1.6 −0.1
M= .
−0.3 −2.5

Let α = 30, using LMI toolbox in Matlab, we can obtain the following feasible
solutions to LMIs (5.233)–(5.235):
   
43.1531 −4.2386 19.4407 −2.5683
Q1 = , Q2 = ,
−4.2386 77.9498 −2.5683 18.1761
   
64.6621 −5.5682 4.7654 −0.1010
Q3 = ,P = ,
−5.5682 44.8942 0.1010 5.3444
232 5 Stability and Synchronization of Neutral-Type Neural Networks
   
31.2593 −5.9066 57.4154 −0.3747
H1 = , H2 = ,
−5.9066 26.1920 −0.3747 45.4589
   
27.4910 −4.2464 35.2360 0
H4 = , T1 = ,
−4.2464 25.4116 0 35.2360
   
16.5537 −2.3088 14.3788 −2.3348
T2 = , T3 = ,
−2.3088 15.4271 −2.3348 13.6590
   
11.2095 −1.7910 20.3754 −2.0492
T4 = , T5 = ,
−1.7910 10.6240 −2.0492 19.7232
   
95.0493 −8.1459 33.1449 −2.1269
S1 = , S2 = ,
−8.1459 44.3732 −2.1269 32.6036
λ = 15.9141, φ1 = 39.3405, φ2 = 37.0910, φ3 = 41.0812, φ4 = 42.3367,
φ5 = 41.7361, φ6 = 42.5957, φ7 = 43.6130, φ8 = 42.6723.

Therefore, from Theorem 5.62, we conclude that the two coupled neural networks
(5.181) and (5.185) can be synchronized for almost every initial data.
Now by taking the initial data as e(0) = [0.6, 0.7]T , k(0) = [15, 20]T , ϕ1 = 0.2,
and ϕ2 = 0.3, we can draw the dynamic curves of the error system, the evolution of
adaptive coupling strengths k1 and k2 , and the Brownian motion ω(t), respectively,
as Figs. 5.15, 5.16 and 5.17. Figure 5.15 shows that the two coupled neural networks
(5.181) and (5.185) are synchronized.

Fig. 5.15 The curve of the 40


e1(t)
synchronization errors e1 e2(t)
30
and e2
20

10

−10

−20

−30

−40
0 2 4 6 8 10
t
5.5 Adaptive Synchronization of Neutral-Type SNN … 233

Fig. 5.16 The evolution 20


k1(t)
graph of the adaptive k2(t)
15
coupling strengths k1 and k2
10

−5

−10

−15

−20
0 2 4 6 8 10
t

Fig. 5.17 The evolution 25


w1(t)
graph of the Brownian w2(t)
motions ω1 and ω2 20

15

10

−5

−10
0 2 4 6 8 10
t

5.5.5 Conclusion

In this section, an adaptive feedback controller has been designed to achieve the
synchronization for the neutral-type neural networks with stochastic perturbation
and parameter uncertainties. Using LaSalle-type invariance principle for stochastic
differential delay equations, the stochastic analysis theory, and the adaptive feed-
back control technique, we have obtained the stochastic synchronization criterion for
almost every initial data. A numerical example and its simulation have been given
to demonstrate the effectiveness of the results obtained. The method in this section
can be further extended to the study of the synchronization of neutral-type neural
networks with mixed time delays and Markovian jumping parameters. In addition,
by replacing the unknown parameters in this system with adaptive learning para-
meters, we can research the stability and the synchronization of neutral-type neural
networks. Furthermore, exponential synchronization, project synchronization, and
cluster synchronization of this model can be discussed in the near future.
234 5 Stability and Synchronization of Neutral-Type Neural Networks

5.6 Exponential Stability of Neutral-Type Impulsive SNN


with Markovian Switching

5.6.1 Introduction

As we know, the stochastic delay neural networks (SDNNs) with Markovian switch-
ing have played an important role in the fields of science and engineering for its many
practical applications, including image processing, pattern recognition, associative
memory, and optimization problems. In the past several decades, the characteristics
of SDNNs with Markovian switching, such as the various stability, have focused lots
of attention from scholars in various fields of nonlinear science. Z.D. Wang et al. con-
sidered exponential stability of delayed recurrent neural networks with Markovian
jumping parameters [56]. W. Zhang et al. investigated stochastic stability of Markov-
ian jumping genetic regulatory networks with mixed time delays [73]. H. Huang
et al. investigated robust stability of stochastic delayed additive neural networks
with Markovian switching [8]. The researchers presented a number of sufficient con-
ditions and proved the global asymptotic stability and exponential stability of the
SDNN with Markovian switching [33, 58, 59, 80]. The most extensively method
used for recent publications is the LMI approach.
However, many evolution processes are characterized by the fact that at certain
moments of time they experience a change of state abruptly. These processes are
subject to short-term perturbations and it is known that many biological phenomena
involving bursting rhythm models in medicine and biology optimal control in eco-
nomics do exhibit impulsive effects [10, 15]. Thus impulsive effects, as a natural
description of observed phenomena of several real-world problems, are necessary to
consider when investigating the stability of neural networks [37]. Some impulsive
effects of delayed neural networks results have been investigated [37].
In this section, we aim to analyze the globally exponential stability for stochas-
tic neutral-type impulsive neural networks with both time delays and Markovian
switching. LMI approach-based criteria are determined whether globally exponen-
tial stability for stochastic neutral-type impulsive neural networks is developed. A
numerical simulation is given to show the validity of developed results.

5.6.2 Problem Formulation and Preliminaries

In this section, we consider the neutral networks with mixed time delays which is
described as follows:

⎨ u̇(t) = [−Au(t) + B f (u(t)) t + E f (u(t − τ (t)))
+D u̇(t − τ (t))] + F t−τ (t) f (u(η))dη + U, t = tk , (5.239)

Δu(t) = Ik (u), t = tk ,
5.6 Exponential Stability of Neutral-Type Impulsive SNN … 235

where t ≥ 0 is the time, u(t) = (u 1 (t), u 2 (t), . . . , u n (t))T ∈ Rn is the sate vector
associated with n neurons, f (u(t)) = ( f 1 (u 1 (t)), f 2 (u 2 (t)), . . . , f n (u n (t)))T ∈
Rn denote the activation functions of the neurons, τ (t) is the transmission delay
satisfying that 0 < τ (t) ≤ τ ≤ in f {tk − tk−1 }/μ, μ > 1 and τ̇ (t) ≤ ρ < 1, where
τ , ρ are constants, and U = (U1 , U2 , . . . , Un )T ∈ Rn is the constant external input
vector.

⎪ ẋ(t) = −A(r (t))x(t) + B(r (t)) f (x(t)) t + E(r (t)) f (x(t − τ (t))



⎨ +D(r (t))ẋ(t − τ (t)) + F(r (t)) t−τ (t) f (x(η))dη
+σ(x(t),
t f (x(t)), f (x(t − τ (t))), ẋ(t − τ (t)),



⎪ f (x(η))dη, t, r (t))ω̇(t), t  = tk ,
⎩ t−τ (t)
Δx(t) = Ik (x), t = tk ,
(5.240)
as a matter of convenience, for t ≥ 0, we denote r (t) = i and A(r (t)) =
Ai , B(r (t)) = B i , E(r (t)) = E i , D(r (t)) = D i , F(r (t)) = F i , respectively. In
model (5.240), furthermore, ∀i ∈ S, Ai = diag {a1i , a2i , . . . , ani } (i.e., Ai is a diago-
nal matrix) has positive and unknown entries Aik > 0, B i = (bi j )n×n , E i = (ei j )n×n ,
D i = (di j )n×n and F i = ( f i j )n×n are the connection weight and the delay connec-
tion weight matrices, respectively. U i = (U1i , U2i , . . . , Uni )T ∈ Rn is the constant
external input vector.
We rewrite the neutral networks with mixed time delays and nonlinearity as fol-
lows:

⎪ ẋ(t) = −Ai x(t) + B i f (x(t)) t + E f (x(t − τ (t))
i



⎪ +D ẋ(t − τ (t)) + F t−τ (t) f (x(η))dη
i i


+σ(x(t),
t f (x(t)), f (x(t − τ (t))), ẋ(t − τ (t)),

⎪ t−τ (t) f (x(η))dη, t, i)ω̇(t), t  = tk ,



⎪ Δx(t) = I (x), t = tk ,
⎩ + k
x(t0 + s) = Φ(s), s ∈ [t0 − τ , t0 ],
(5.241)
ω(t) = (ω1 (t), ω2 (t), . . . , ωn (t)) is an n-dimensional Brown moment defined on
T

a complete probability space (Ω, F, P) with a natural filtration {Ft }t≥0 (i.e., Ft =
σ{ω(s) : 0 ≤ s ≤ t} is a σ-algebra) and is independent to the Markovian process
{r (t)}t≥0 , and σ : R+ × S × Rn × Rn → Rn×n is the noise intensity matrix and can
be regarded as a result from the occurrence of eternal random fluctuation and other
probabilistic causes.
The initial condition associated with system (5.241) is given in the following
form:

x(t0+ + s) = Φ(s), s ∈ [t0 − τ , t0 ], (5.242)

for any Φ ∈ PC(Φ|Φ : R → Rn , Rn ), where Φ ∈ PC(Φ|Φ : R → Rn , Rn ) is


continuous for all subinterval [t − τ , t] ∈ R satisfying that the sup norm Φ =
supt0 −τ ≤s≤t0 |Φ|.
236 5 Stability and Synchronization of Neutral-Type Neural Networks

To obtain the main result, we need the following assumptions:

Assumption 5.65 The activation functions of the neurons f (x(t)) satisfy the Lip-
schitz condition. That is, there exists a constant G > 0 such that

| f (u) − f (v)| ≤ G|u − v|, ∀u, v ∈ Rn .

Assumption 5.66 The noise intensity matrix σ(·, ·, ·, ·, ·) satisfies the linear growth
condition. That is, there exist five positives H1 ,H2 ,H3 ,H4 , and H5 such that

trace (σ(t, r (t), v1 (t), v2 (t), v3 (t), v4 (t), v5 (t)))T (σ(t, r (t), v1 (t), v2 (t), v3 (t), v4 (t), v5 (t)))
≤ H1 |v1 (t)|2 + H2 |v2 (t)|2 + H3 |v3 (t)|2 + H4 |v4 (t)|2 + H5 |v5 (t)|2 ,

for all (t, r (t), v1 (t), v2 (t), v3 (t), v4 (t), v5 (t)) ∈ R+ ×S×Rn ×Rn ×Rn ×Rn ×Rn .

Assumption 5.67 In the system (5.241),

f (0) ≡ 0, σ(t, r0 , 0, 0, 0, 0, 0) ≡ 0.

Definition 5.68 The zero solution of the system (5.241) is said to be stochastic
globally exponential stable in the mean square such that

E  x(t, t0 , Φ) 2 ≤ k sup E  Φ(s) 2 e−α(t−t0 ) , t ≥ t0 ,


t0 <tk <t

if for any solution x(t, t0 , Φ) with the initial condition Φ ∈ PC, there exist constants
α > 0 and K > 1.

The main purpose of the rest of this section is to establish a criterion of stochastic
globally exponential stable in the mean square of the system (5.241).

5.6.3 Main Results

In this section, we give a criterion of stochastic globally exponential stable in the


mean square for neutral-type impulsive neural networks with mixed time delays,
Markovian jumping, and stochastic disturbance of the system (5.241).

Theorem 5.69 Assume that 0 < τ (t) ≤ τ and τ̇ (t) ≤ ρ < 1, where τ and ρ are
constants, if the following conditions are satisfied:
(i) There exist positive definite symmetry matrices Q i , P, W, M, positive definite
diagonal matrix L with appropriate dimensions such that
5.6 Exponential Stability of Neutral-Type Impulsive SNN … 237
⎡ i ⎤
Λ1,1 Λi1,2 Λi1,3 Λi1,4 Λi1,5
⎢ ∗ Λi Λi Λi2,4 Λi2,5 ⎥
⎢ 2,2 2,3 ⎥
⎢ ⎥
Λi = ⎢ ∗ ∗ Λi3,3 Λi3,4 Λi3,5 ⎥ < 0, (5.243)
⎢ ⎥
⎣ ∗ ∗ ∗ Λi4,4 Λi4,5 ⎦
∗ ∗ ∗ ∗ Λi5,5

where


S
Λi1,1 = −Q i Ai − Ai Q i + αQ i + Ai W Ai + G T LG + πi j Q j + H1T H1 ,
j=1
Λi1,2 = Q i B i − Ai W B i ,
Λi1,3 = Q i E i − Ai W E i ,
Λi1,4 = Q i D i − Ai W D i ,
Λi1,5 = Q i F i − Ai W F i ,
T
Λi2,2 = P + (B i ) W B i + τ 2 M − L + H2T H,
T
Λi2,3 = (B i ) W E i ,
T
Λi2,4 = (B i ) W D i ,
T
Λi2,5 = (B i ) W F i ,
T
Λi3,3 = (E i ) W E i − (1 − ρ)Pe−ατ + H3T H3 ,
T
Λi3,4 = (E i ) W D i ,
T
Λi3,5 = (E i ) W F i ,
T
Λi4,4 = −(D i ) W D i − (1 − ρ)e−ατ W − H4T H4 ,
T
Λi4,5 = (D i ) W F i ,
T
Λi5,5 = (F i ) W F i − e−ατ M − H5T H5 .

(ii) For any σk > 0, k ∈ N ,  I (x(tk− )) ≤ σk  x(tk− ) .


(iii) max{θk } ≤ H < eαμτ , H is a constant.

Proof Construct the following Lyapunov-Krasovskii functional candidates:


t
V (x(t), i, t) = eαt x T (t)Q i x(t) + t−τ (t) e
αη f T (x(η))P f (x(η))dη
t αη T
0 t αη f T (x(η))M f (x(η))dηdβ,
+ t−τ (t) e ẋ (η))W ẋ(η))dη + τ −τ t+β e
(5.244)

where Q i > 0, P > 0, W > 0, M > 0, (i = 1, 2, . . . , S) are to be determined. By


I t ô differential formula, the stochastic derivation of V (x(t), i, t) along (5.241) can
be obtained as follows:
238 5 Stability and Synchronization of Neutral-Type Neural Networks

LV (x(t), i, t) =
αeαt x T (t)Q i x(t)
+eαt f T (x(t))P f (x(t))
−(1 − τ̇ (t))eα(t−τ (t)) f T (x(t − τ (t)))P f (x(t − τ (t)))
+eαt T
ẋ (t))W ẋ(t) − (1 − τ̇ (t))e
α(t−τ (t)) ẋ T (t − τ (t))W ẋ(t − τ (t))
0 αt T
+τ −τ e f (x(t))M f (x(t))dη
0 (t+β) T 
− −τ e f (x(t + β))M f (x(t + β))dη

+2eαt x T (t)Q i −Ai x(t) + B i f (x(t))
+E i f (x(t − τ (t))ẋ(t − τ (t)) 
t
+ D i ẋ(t − τ (t)) + F i t−τ (t) f (x(η))dη
t
+(1/2)trace [σ T (x(t), f (x(t)), f (x(t − τ (t))), ẋ(t − τ (t)), t−τ (t) f (x(η))dη, t, i)
αt i
t
2e Q σ(x(t), f (x(t)), f (x(t − τ (t))), ẋ(t − τ (t)), t−τ (t) f (x(η))dη, t, i))

S
+eαt x T (t) πi j Q j x(t),
j=1
(5.245)
LV (x(t), i, t) ≤
αeαt x T (t)Q i x(t) + 2eαt x T (t)Q i −Ai x(t)
+B i f (x(t)) + E i f (x(t − τ (t)))ẋ(t − τ (t))

t
+ D i ẋ(t − τ (t)) + F i t−τ (t) f (x(η))dη

S
+eαt x T (t) πi j Q j x(t) + eαt f T (x(t))P f (x(t))
j=1
−(1 − ρ)eα(t−τ (t)) f T (x(t − τ (t)))P f (x(t − τ (t)))
+eαt ẋ T (t))W ẋ(t) − (1 − ρ)eα(t−τ (t)) ẋ T (t − τ (t))W ẋ(t − τ (t))
0 αt T t
+τ −τ e f (x(t))M f (x(t))dη − τ t−τ eαη f T (x(η))M f (x(η))dη
T
t
+(1/2)trace [σ (x(t), f (x(t)), f (x(t − τ (t))), ẋ(t − τ (t)), t−τ (t) f (x(η))dη, t, i)

2eαt Q i σ(x(t), f (x(t)), f (x(t − τ (t))), ẋ(t − τ (t)), t−τ
t
(t) f (x(η))dη, t, i)].
(5.246)

Now, using Assumptions 5.65 and 5.66 together with Lemma 1.16 yields

x T (t)G T LGx(t) − f T (x(t))L f (x(t)) ≥ 0, (5.247)

and

0 t
τ −τ eαt f T (x(t))M f (x(t))dη − τ t−τ eαη f T (x(η))M f (x(η))dη
≤ τ 2 eαt f T (x(t))M
t f (x(t))
−τ (t)eα(t−τ ) t−τ (t) f T (x(η))M f (x(η))dη (5.248)
≤ τ 2 eαt f T (x(t))M f (x(t))
t !T t !
−eα(t−τ ) t−τ (t) f (x(η)dη M t−τ (t) f (x(η)dη .
5.6 Exponential Stability of Neutral-Type Impulsive SNN … 239

It follows from the Assumption 5.66 that



trace σ T (x(t), f (x(t)), f (x(t − τ (t))),
t
ẋ(t − τ (t)), t−τ (t) f (x(η))dη, t, i)
eαt Q i σ(x(t), f (x(t)), f (x(t − τ (t))),

t
ẋ(t − τ (t)), t−τ (t) f (x(η))dη, t, i)

≤ λmax (Q i )eαt trace σ T (x(t), f (x(t)), f (x(t − τ (t))),
t
ẋ(t − τ (t)), t−τ (t) f (x(η))dη, t, i)
σ(x(t), f (x(t)), f (x(t − τ (t))), 
t
ẋ(t − τ (t)), t−τ (t) f (x(η))dη, t, i)

≤ λmax (Q i )eαt x(t)T H1T H x(t).
(5.249)
+ f (x(t))T H2T H f (x(t))
+ f T (x(t − τ (t)))H3T H f (x(t − τ (t)))
+ẋ T (t − τ (t))H4T H ẋ(t − τ (t)) 
t !T t
+ t−τ (t) f (x(η))dη H5T H t−τ (t) f (x(η))dη

≤ eαt q i x(t)T H1T H x(t)
+ f (x(t))T H2T H f (x(t))
+ f T (x(t − τ (t)))H3T H f (x(t − τ (t)))
+ẋ T (t − τ (t))H4T H ẋ(t − τ (t)) 
t !T t
+ t−τ (t) f (x(η))dη H5 H t−τ (t) f (x(η))dη .
T

Substituting (5.247)–(5.249) into (5.246) yields


$
LV (x(t), i, t) ≤ eαt x T (t) − Q i Ai − Ai Q i + αQ i

S !
+ Ai W Ai + G T LG + πi j Q j + H1T H1 x(t)
j=1

+ x (Q B − A W B ) f (x(t)) + x T (t)(Q i E i − Ai W E i ) f (x(t − τ ((t))


T i i i i
 t
+ x (Q F − A W F )
T i i i i
f (x(η))dη + x T (t)(Q i D i − Ai W D i )ẋ(t − (τ (t))
t−τ (t)
i T i T T
+ f (x(t))((B ) Q − (B ) W Ai )x(t) + f T (x(t))((B i ) W E i ) f (x(t − (τ (t)))
T i

T
+ f T (x(t))(P + (B i ) W B i + τ 2 M − L + H2T H ) f (x(t)
 t
i T
+ f (x(t))((B ) W F )
T i
f (x(η))dη
t−τ (t)
240 5 Stability and Synchronization of Neutral-Type Neural Networks

T T
+ f T (x(t − (τ (t)))((E i ) Q i − (E i ) W Ai )x(t)
T T
+ f T (x(t))((B i ) W D i )ẋ(t − (τ (t)) + f T (x(t − (τ (t)))((E i ) W B i ) f (x(t)
T
+ f T (x(t − (τ (t)))((E i ) W E i − (1 − ρ)Pe−ατ + H3T H3 )) f (x(t − (τ (t)))
T
+ f T (x(t − (τ (t)))(E i ) W D i ẋ(t − (τ (t))
 t
T
+ f T (x(t − (τ (t)))(E i ) W F i f (x(η))dη
t−τ (t)
 t T
T T
+ f (x(η))dη ((F i ) Q i − (F i ) W Ai )x(t)
t−τ (t)
 t T
T
+ f (x(η))dη (F i ) W B i f (x(t))
t−τ (t)
T T
+ ẋ (t − (τ (t))((D i ) Q i − (D i ) W Ai )x(t)
T
 t T  t 
T
− f (x(η))dη (e−ατ M − (F i ) W F i + H5T H5 ) f (x(η))dη
t−τ (t) t−τ (t)
i T
+ ẋ (t − (τ (t))((D ) W B ) f (x(t))
T i
 t T
T
+ f (x(η))dη (F i ) W D i ẋ(t − (τ (t))
t−τ (t)
 t
T
+ ẋ T (t − (τ (t))(D i ) W F i f (x(η))dη
t−τ (t)
T
− ẋ T (t − (τ (t))((1 − ρ)e−ατ W − (D i ) W D i + H4T H4 )ẋ(t − (τ (t))
 t T
T
+ f (x(η))dη (F i ) W E i f (x(t − (τ (t)))
t−τ (t)
W
%
+ ẋ (t − (τ (t))(D i ) E i f (x(t − (τ (t)))
T

= eαt ς T (t)Λi ς(t), (5.250)

where Λi is defined in (5.243), and


 t 
ς (t) = [x(t) , f (x(t)) , f (x(t − τ )) ẋ (t − (τ (t)),
T T T T T
f (x(η))dη] T
.
t−τ (t)

When t = tk , k ∈ N, we have
 tk
V (x(tk ), i, tk ) = eαtk x T (tk )Q i x(tk ) + eαη f T (x(η))P f (x(η))dη
tk −τ (tk )
 tk
+ eαη ẋ T (η))W ẋ(η))dη
tk −τ (tk )
5.6 Exponential Stability of Neutral-Type Impulsive SNN … 241
 0  tk
+τ eαη f T (x(η))M f (x(η))dηdβ
−τ tk +β

= eαtk (x(tk− ) + Ik (x(tk− )))T Q i (x(tk− ) + Ik (x(tk− )))
 t−
−τ (tk− )eαη f T (x(η))P f (x(η))dη
k
+
tk−
 t−
eαη ẋ T (η))W ẋ(η))dη
k
+
tk− −τ (tk− )
 0  t−
eαη f T (x(η))M f (x(η))dηdβ
k

−τ tk− +β
− −
= eαtk x T (tk− )Q i x(tk− ) + 2eαtk x T (tk− )Q i Ik (x(tk− ))
 t−

+ eαtk IkT (x(tk− ))Q i Ik (x(tk− )) + eαη f T (x(η))P f (x(η))dη
k

tk− −τ (tk− )
 t−
eαη ẋ T (η))W ẋ(η))dη
k
+
tk− −τ (tk− )
 0  t−
eαη f T (x(η))M f (x(η))dηdβ
k

−τ tk− +β
− −
≤ V (x(tk− ), i, tk− ) + 2eαtk σk Q i x(tk− )2 + eαtk σk2 Q i x(tk− )2
 
2σk Q i  + σk2 Q i  −
≤ 1+ )V (x(t k , i, tk− )
λmin (Q i )
= θk V (x(tk− ), i, tk− ), (5.251)

2σk Q i +σk2 Q i 
where θk = (1 + λmin (Q i )
).
Hence,

V (x(t), t, i) ≤ V (x(t0 ), t0 , i) Π θk ≤ V (x(t0 ), t0 , i)H k−1 , t ∈ [tk−1 , tk ).


t0 <tk <t

tk−1 −t0
Because μτ ≤ in f {tk − tk−1 }, we have k − 1 ≤ μτ , which implies
!
ln H
(t−t0 )
H k−1 ≤e μτ
, t ∈ [tk−1 , tk ),

that is
!
ln H
(t−t0 )
V (x(t), t, i) ≤ V (x(t0 ), t0 , i)e μτ
, t ∈ [tk−1 , tk ),
242 5 Stability and Synchronization of Neutral-Type Neural Networks

on the other hand,


t
V (x(t0 ), t0 , i) = eαt0 x T (t0 )Q i x(t0 ) + t00−τ (t0 ) eαη f T (x(η))P f (x(η))dη
t
+ t00−τ (t0 ) eαη ẋ T (η)W ẋ(η)dη
0 t
+τ −τ t00+β eαη f T (x(η))M f (x(η))dηdβ
$ −ατ
! −ατ
!
≤ eαt0 λ M (Q i ) + λ M (P) 1−eα g 2M + λ M (W ) 1−eα
−ατ
! %
+ λ M (M) α1 τ − 1−eα g 2M Φ2 .
(5.252)
Therefore, we have
!
ln H
(t−t0 )
EV (x(t), t, i) ≤ EV (x(t0 ), t0 , i)e μτ
$ ! !
( ln H )(t−t0 ) αt0 −ατ −ατ
≤ e μτ e λ M (Q i ) + λ M (P) 1−eα g 2M + λ M (W ) 1−eα
−ατ
! %
+ λ M (M) α1 τ − 1−eα g 2M EΦ2 .
(5.253)

Thus, by Definition 5.68, the system (5.241) is said to be stochastic globally


exponential stable in the mean square which is proved.

Remark 5.70 Let σ(·, ·, ·, ·, ·) = 0, and the system (5.241) can be reduced to neutral-
type impulsive neural networks which have been studied in [1]. Thus, Theorem 5.69
in this section can be regarded as the expansions of Theorem 3.1 in [1]. However, for
delayed neural networks, the sufficient conditions proposed in [1] are only applicable
for systems without noise perturbation.

Remark 5.71 Let D = 0, σ(·, ·, ·, ·, ·) = 0, and the system (5.241) can be reduced to
neutral-type impulsive neural networks (4) in [74]. It is worth noting that the systems
will contain some information about the derivative of the past state to further describe
and model the dynamics for such complex neural reactions. However, the criteria
proposed in [74] are only applicable for the stability analysis of neural networks
without involving the issue of neutral-type neural networks.

5.6.4 Numerical Examples

In this section, an illustrative example is given to support our main results.

Example 5.72 Consider neutral-type impulsive neural networks with time delay and
Markovian switching (5.241) and the following network parameters:
       
$$A_1 = \begin{bmatrix} 2.9 & 0 \\ 0 & 2.8 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 2.5 & 0 \\ 0 & 2.6 \end{bmatrix}, \quad B_1 = \begin{bmatrix} 0.2 & 0.18 \\ 0.3 & 0.19 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 0.3 & 0 \\ 0.4 & 0 \end{bmatrix},$$
$$D_1 = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.2 \end{bmatrix}, \quad D_2 = \begin{bmatrix} 0.3 & 0 \\ 0 & 0.3 \end{bmatrix}, \quad E_1 = \begin{bmatrix} 0.8 & 0.2 \\ 0.2 & 0.3 \end{bmatrix}, \quad E_2 = \begin{bmatrix} 2.5 & 1.5 \\ 1 & 2.5 \end{bmatrix},$$
$$F_1 = \begin{bmatrix} 4 & 0.04 \\ 0.14 & 4 \end{bmatrix}, \quad F_2 = \begin{bmatrix} 4 & 1.5 \\ 1 & 4 \end{bmatrix}, \quad G = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},$$
$$H_1 = \begin{bmatrix} 0.08 & 0 \\ 0 & 0.08 \end{bmatrix}, \quad H_2 = \begin{bmatrix} 0.07 & 0 \\ 0 & 0.07 \end{bmatrix}, \quad H_3 = \begin{bmatrix} 0.09 & 0 \\ 0 & 0.09 \end{bmatrix},$$
$$H_4 = \begin{bmatrix} 0.06 & 0 \\ 0 & 0.06 \end{bmatrix}, \quad H_5 = \begin{bmatrix} 0.04 & 0 \\ 0 & 0.04 \end{bmatrix}, \quad \Pi = \begin{bmatrix} -0.45 & 0.45 \\ 0.5 & -0.5 \end{bmatrix},$$
$$\alpha = 0.25, \quad \tau = 0.2, \quad \rho = 0.6, \quad \mu = 3,$$
$$\sigma(t, x(t), f(x(t-\tau)), 1) = \big(0.4 f_1(x(t-\tau)),\ 0.5 x_2(t)\big)^{T},$$
$$\sigma(t, x(t), f(x(t-\tau)), 2) = \big(0.5 x_1(t),\ 0.3 f_2(x(t-\tau))\big)^{T},$$
$$f(x(t)) = \tanh(x(t)).$$

Using the Matlab LMI toolbox, we solve the LMI (5.243) and obtain
$$Q_1 = \begin{bmatrix} 3.5780 & -2.2265 \\ -2.2265 & 3.4268 \end{bmatrix}, \quad Q_2 = \begin{bmatrix} -0.6333 & -0.0025 \\ -0.0025 & 0.1794 \end{bmatrix}, \quad P = \begin{bmatrix} 1.3005 & -1.4388 \\ -1.4388 & 1.5744 \end{bmatrix},$$
$$W = \begin{bmatrix} 0.2109 & -0.2999 \\ -0.2999 & 0.3364 \end{bmatrix}, \quad M = \begin{bmatrix} 19.7642 & -12.6598 \\ -12.6598 & 23.7158 \end{bmatrix}, \quad L = \begin{bmatrix} 2.6151 & -2.0063 \\ -2.0063 & 23.7158 \end{bmatrix}.$$

It can be checked that Assumptions 5.65–5.66 and the inequality (5.243) are satisfied, and that the matrices Q, P, W, M, and L are positive definite and symmetric. Hence, by Theorem 5.69, the noise-perturbed neutral-type impulsive neural network with time delays and Markovian switching (5.241) is globally exponentially stable. The simulation results are given in Figs. 5.18 and 5.19, which show the state vector x(t) of system (5.241) and further illustrate the stability. From these simulations, one can see that the stochastic neutral-type impulsive neural network with time delays and Markovian switching is globally exponentially stable.
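For readers who wish to reproduce this kind of feasibility computation outside Matlab, the following is a minimal sketch of an LMI feasibility check in Python with CVXPY. The matrix A_demo and the Lyapunov-type inequality below are illustrative placeholders only; they stand in for the actual LMI (5.243) and are not the matrices of Example 5.72.

```python
# A minimal sketch of an LMI feasibility check in Python/CVXPY, standing in
# for the Matlab LMI toolbox computation above. A_demo and the Lyapunov-type
# inequality are illustrative placeholders, not the actual LMI (5.243).
import numpy as np
import cvxpy as cp

A_demo = np.array([[-2.9, 0.2],
                   [0.3, -2.8]])      # placeholder stable system matrix
n = A_demo.shape[0]
Q = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [
    Q - eps * np.eye(n) >> 0,                           # Q > 0
    A_demo.T @ Q + Q @ A_demo + eps * np.eye(n) << 0,   # A^T Q + Q A < 0
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)   # "optimal" indicates the LMI is feasible
print(Q.value)
```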

Fig. 5.18 The response of the state vectors x1(t) and x2(t)

Fig. 5.19 The response of the impulsive vectors x1(t) and x2(t)

5.6.5 Conclusions

In this section, we have proposed a concept of globally exponential stability for stochastic neutral-type impulsive neural networks with both time delays and Markovian switching. Making use of LMIs and the Lyapunov functional method, we have obtained a sufficient condition under which stochastic neutral-type impulsive neural networks with both time delays and Markovian switching are globally exponentially stable. The condition obtained in this section depends on the generator of the Markovian jumping models and can be easily checked. Extensive simulation results are provided that demonstrate the effectiveness of our theoretical results and analytical tools.

5.7 Asymptotical Adaptive Synchronization of Neutral Type and Markovian Jump SNN

5.7.1 Introduction

As is well known, the stability and synchronization of neural networks can be applied in chemical and biological systems, secure communication systems, information science, image processing, and so on. In recent years, different control methods have been derived to achieve various kinds of synchronization [9, 51, 62, 63].
By utilizing the adaptive control method, the parameters of the system need to be estimated and the control law needs to be updated as the neural networks evolve. In the past decade, much attention has been devoted to the research of adaptive synchronization for neural networks (see, e.g., [4, 21, 44, 48, 81] and the references therein).
Recently, the stability and synchronization of neutral-type systems, especially neutral-type neural networks, which depend on the derivative of the state and the delayed state, have attracted a lot of attention (see, e.g., [16, 20, 32, 34, 79, 84] and the references therein), since some physical systems in the real world can be described by neutral-type models (see [38]).
However, the neutral term involving the derivative of the delayed state was not taken into account in the neural networks proposed in [4, 21, 44, 48, 81], while adaptive control was not investigated in [16, 20, 32, 34]. Zhou et al. in [79] did not study almost sure (a.s.) synchronization for neutral-type neural networks, and Zhu et al. in [84] did not investigate the synchronization problem for neural networks with Markovian switching parameters.
In this section, the problem of almost sure (a.s.) asymptotic adaptive synchronization for neutral-type neural networks with stochastic perturbation and Markovian switching is investigated. First, we propose a new criterion of a.s. asymptotic stability for a general neutral-type stochastic differential equation, which extends existing results. Second, based upon this stability criterion, by making use of the Lyapunov functional method and designing an adaptive controller, we obtain a condition of a.s. asymptotic adaptive synchronization for neutral-type neural networks with stochastic perturbation and Markovian switching. Finally, we introduce a numerical example to illustrate the effectiveness of the method and result obtained in this section.

5.7.2 Problem Formulation and Preliminaries

Consider the following neutral-type neural networks called drive system and repre-
sented by the compact form as follows:

$$
\begin{aligned}
d[x(t) - D(r(t)) x(t-\tau)] =\ & \Big[ -C(r(t)) x(t) + A(r(t)) f(x(t)) + B(r(t)) f(x(t-\tau)) \\
& + E(r(t)) \int_{t-\tau}^{t} f(x(s))\,ds + J(r(t)) \Big] dt, \qquad (5.254)
\end{aligned}
$$

where x(t) = [x1 (t), x2 (t), . . . , xn (t)]T ∈ Rn is the state vector associated with n
neurons, f (·) denotes the neuron activation functions, and τ represents the trans-
mission delay. For t ≥ 0, we denote r (t) = i, A(r (t)) = Ai , B(r (t)) = B i ,

$C(r(t)) = C^i$, $D(r(t)) = D^i$, $E(r(t)) = E^i$, and $J(r(t)) = J^i$, respectively. In the neural network (5.254), for all $i \in S$, $A^i$, $B^i$, and $E^i$ are the connection weight matrix, the discrete delay connection weight matrix, and the distributed delay connection weight matrix, respectively; $C^i = \operatorname{diag}\{c_1^i, c_2^i, \ldots, c_n^i\}$ is a positive diagonal matrix; $D^i$ is called the neutral-type parameter matrix; and $J^i = [J_1^i, J_2^i, \ldots, J_n^i]^T \in \mathbb{R}^n$ is the constant external input vector.
The initial condition of system (5.254) is given in the following form:

x(s) = ξx (s), s ∈ [−τ , 0], r (0) = i 0 (5.255)

for any ξx ∈ L2F0 ([−τ , 0]; Rn ).


For the drive system (5.254), the response system is

$$
\begin{aligned}
d[y(t) - D(r(t)) y(t-\tau)] =\ & \Big[ -C(r(t)) y(t) + A(r(t)) f(y(t)) + B(r(t)) f(y(t-\tau)) \\
& + E(r(t)) \int_{t-\tau}^{t} f(y(s))\,ds + J(r(t)) + U(r(t)) \Big] dt \qquad (5.256) \\
& + \sigma(t, r(t), y(t) - x(t), y(t-\tau) - x(t-\tau))\, d\omega(t),
\end{aligned}
$$

where $y(t) = [y_1(t), y_2(t), \ldots, y_n(t)]^T \in \mathbb{R}^n$ is the state vector of the response system (5.256), $U(r(t)) = U^i = [U_1^i, U_2^i, \ldots, U_n^i]^T \in \mathbb{R}^n$ is a control input vector, $\omega(t) = [\omega_1(t), \omega_2(t), \ldots, \omega_n(t)]^T$ is an n-dimensional Brownian motion defined on the complete probability space $(\Omega, \mathcal{F}, P)$ with a natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$ (i.e., $\mathcal{F}_t = \sigma\{\omega(s) : 0 \le s \le t\}$ is a σ-algebra) and independent of the Markovian process $\{r(t)\}_{t\ge 0}$, and $\sigma : \mathbb{R}_+ \times S \times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^{n\times n}$ is the noise intensity matrix. It is known that external random fluctuations and other probabilistic causes often lead to this type of stochastic perturbation.
The initial condition of system (5.256) is given in the following form:

y(s) = ξ y (s), s ∈ [−τ , 0], r (0) = i 0 (5.257)

for any ξ y ∈ L2F0 ([−τ , 0]; Rn ).


Let e(t) = y(t) − x(t) be the synchronization error vector. From the drive system
and the response system, the error system can be written as follows:

$$
\begin{aligned}
d[e(t) - D(r(t)) e_\tau] =\ & \Big[ -C(r(t)) e(t) + A(r(t)) g(e(t)) + B(r(t)) g(e_\tau) \\
& + E(r(t)) \int_{t-\tau}^{t} g(e(s))\,ds + U(r(t)) \Big] dt \qquad (5.258) \\
& + \sigma(t, r(t), e(t), e_\tau)\, d\omega(t),
\end{aligned}
$$

where eτ = e(t − τ ), g(e(t)) = f (x(t) + e(t)) − f (x(t)).



The initial condition of system (5.258) is given in the following form:

e(s) = ξ(s) = ξ y (s) − ξx (s), s ∈ [−τ , 0], r (0) = i 0 . (5.259)

with e(0) = 0.
The primary objective here is to deal with the adaptive synchronization problem of the drive system (5.254) and the response system (5.256), and to derive sufficient conditions such that the response system (5.256) synchronizes with the drive system (5.254).
To prove our main results, the following assumptions are needed.

Assumption 5.73 The activation functions f(·) of the neurons satisfy the Lipschitz condition. That is, there exist constants $L_1, L_2 > 0$ such that
$$L_1 |x - y| \le |f(x) - f(y)| \le L_2 |x - y|, \quad \forall x, y \in \mathbb{R}^n.$$

Assumption 5.74 The noise intensity matrix σ(·, ·, ·, ·) satisfies the boundedness condition. That is, there exist two positive constants $H_1$ and $H_2$ such that
$$\operatorname{trace}\big(\sigma(t, r(t), x(t), y(t))^T \sigma(t, r(t), x(t), y(t))\big) \le H_1 |x(t)|^2 + H_2 |y(t)|^2$$
for all $(t, r(t), x(t), y(t)) \in \mathbb{R}_+ \times S \times \mathbb{R}^n \times \mathbb{R}^n$, and $\sigma(t, i, 0, 0) = 0$ for all $(t, i) \in \mathbb{R}_+ \times S$.

Assumption 5.75 For the neutral-type parameter matrix $D^i$ ($i \in S$), there exists a positive constant $\kappa_i \in (0, 1)$ such that
$$\rho(D^i) = \kappa_i \le \kappa \in (0, 1),$$
where $\kappa = \max_{i\in S} \kappa_i$ and $\rho(D^i)$ is the spectral radius of the matrix $D^i$.

The following concept is necessary in this section.

Definition 5.76 (See Ref. [33]) The trivial solution $e(t; \xi, i_0)$ of the error system (5.258) is said to be almost surely asymptotically stable if
$$P\big(\lim_{t\to\infty} |e(t; \xi, i_0)| = 0\big) = 1 \qquad (5.260)$$
for any initial value $\xi \in C([-\tau, 0]; \mathbb{R}^n)$.

If the error system (5.258) is almost surely asymptotically stable, then the drive system (5.254) and the response system (5.256) are said to be almost surely asymptotically synchronized.

5.7.3 Main Results

In this section, we give some criteria of adaptive synchronization for the drive system (5.254) and the response system (5.256). First, we establish a general result which can be applied widely.
Almost Surely Asymptotically Stable
Theorem 5.77 Let (H1), (H2), and (H3) hold. Assume that there are functions $V \in C^{2,1}(\mathbb{R}_+ \times S \times \mathbb{R}^n; \mathbb{R})$, $\gamma \in L^1(\mathbb{R}_+; \mathbb{R}_+)$, and $W_1, W_2, W_3 \in C(\mathbb{R}^n; \mathbb{R}_+)$ such that

(C1) $\displaystyle \lim_{|x|\to\infty} \Big[\inf_{(t,i)\in \mathbb{R}_+\times S} V(t, i, x)\Big] = \infty;$ \qquad (5.261)

(C2) $\mathcal{L}V(t, i, x, y) \le \gamma(t) - W_1(x) + W_2(y) - W_3(x - D(y, i))$ \qquad (5.262)
for $(t, i, x, y) \in \mathbb{R}_+ \times S \times \mathbb{R}^n \times \mathbb{R}^n$;

(C3) $W_1(0) = W_2(0) = W_3(0) = 0, \quad W_1(x) \ge W_2(x), \ \forall x \ne 0.$ \qquad (5.263)

Then for any initial data $\{x(\theta) : -\bar{\tau} \le \theta \le 0\} = \xi \in C^b_{\mathcal{F}_0}([-\bar{\tau}, 0]; \mathbb{R}^n)$ and $r(0) = i_0 \in S$:

(R1) Eq. (1.4) has a unique global solution, which is denoted by $x(t; \xi, i_0)$.

(R2) Assume that $W_3(x) = 0$ if and only if $x = 0$. Then the solution $x(t; \xi, i_0)$ obeys
$$\lim_{t\to\infty} x(t; \xi, i_0) = 0 \quad a.s., \qquad (5.264)$$
i.e., $x(t; \xi, i_0)$ is almost surely asymptotically stable.


The proof of this theorem is given in the Appendix.
Remark 5.78 Theorem 5.77 is an extension of Theorem 3.1 in [32]; that is, when we take $W_1(x) = W_2(x)$ in our theorem, Theorem 5.77 coincides with Theorem 3.1 in [32]. Moreover, Theorem 5.77 is also an extension of Theorem 2.1 in [66] when we take $W_3(x) = 0$ with $\bar{D} = 0$.
Remark 5.79 From the proof of Theorem 5.77, we can see that if the condition (H1) is replaced by (H1)′, then the conclusion (R2) still holds.
Almost Sure Asymptotical Synchronization
We now give a criterion of adaptive almost sure asymptotical synchronization for the
drive system (5.254) and the response system (5.256).

Theorem 5.80 For systems (5.254) and (5.256), let Assumptions 5.73–5.75 hold, and let the error system (5.258) have a unique solution, denoted by $e(t; \xi, i_0)$, on $t \ge 0$ for any initial data $\{e(\theta) : -\tau \le \theta \le 0\} = \xi \in C^b_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$ with $e(0) = 0$. Assume also that there exist a symmetric matrix $Q_1 > 0$, diagonal matrices $P^i > 0$ ($i = 1, \ldots, S$), and positive scalars $\rho, \rho_1, \rho_2, \varepsilon_i$ ($i = 1, 2, 3, 4$), such that

$$\rho_2 I < Q_1 < \rho_1 I, \qquad (5.265)$$

$$P^i < \rho I \quad (i = 1, 2, \ldots, S), \qquad (5.266)$$

$$\begin{bmatrix} (L_2^2 \rho_1 + H_1 \rho) I - 2 P^i C^i & C^i P^i & L_2 I & \tau L_2 I \\ * & -\varepsilon_1 I & 0 & 0 \\ * & 0 & -\varepsilon_2 I & 0 \\ * & 0 & 0 & -\varepsilon_4 I \end{bmatrix} < 0 \quad (i = 1, 2, \ldots, S), \qquad (5.267)$$

$$(\rho_2 L_1^2 - \rho H_2 - \varepsilon_3^{-1} L_2^2) I - \varepsilon_1 D^{iT} D^i < 0 \quad (i = 1, 2, \ldots, S), \qquad (5.268)$$

$$\begin{bmatrix} \sum_{k=1}^{S} \gamma_{ik} P^k + 2 P^i K^* & \varepsilon_2 P^i A^i & \varepsilon_3 P^i B^i & \varepsilon_4 P^i E^i \\ * & -\varepsilon_2 I & 0 & 0 \\ * & 0 & -\varepsilon_3 I & 0 \\ * & 0 & 0 & -\varepsilon_4 I \end{bmatrix} < 0 \quad (i = 1, 2, \ldots, S), \qquad (5.269)$$

$$\begin{bmatrix} \Xi_{11} & C^i P^i & L_2 I & L_2 I & \tau L_2 I \\ * & -\varepsilon_1 I & 0 & 0 & 0 \\ * & 0 & -\varepsilon_2 I & 0 & 0 \\ * & 0 & 0 & -\varepsilon_3 I & 0 \\ * & 0 & 0 & 0 & -\varepsilon_4 I \end{bmatrix} < 0 \quad (i = 1, 2, \ldots, S), \qquad (5.270)$$

where $K^* = \operatorname{diag}\{k_1^*, k_2^*, \ldots, k_n^*\}$ with $k_j^*$ ($j = 1, 2, \ldots, n$) arbitrary negative constants to be chosen, and $\Xi_{11} = (L_2^2 \rho_1 + H_1 \rho - \rho_2 L_1^2 + H_2 \rho) I - 2 P^i C^i + \varepsilon_1 D^{iT} D^i$. Choose the feedback control $U^i$ with the update law as $U^i = \operatorname{diag}\{k_1, \ldots, k_n\}(e - D^i e_\tau)$ and $\dot{k}_j = -\beta_j p_j^i (e - D^i e_\tau)_j^2$, where $\beta_j > 0$ ($j = 1, 2, \ldots, n$) are arbitrary constants, $p_j^i$ is the jth diagonal entry of the matrix $P^i$, and $(e - D^i e_\tau)_j$ is the jth element of $e - D^i e_\tau$. Then the error system (5.258) is almost surely asymptotically stable. Therefore, the drive system (5.254) and the response system (5.256) are adaptively synchronized a.s.
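To make the update law concrete, the following is a schematic Euler discretization of one step of the adaptive gain dynamics $\dot{k}_j = -\beta_j p_j^i (e - D^i e_\tau)_j^2$ in Python. The step size and all numerical values below (β, the diagonal of P^i, D^i, and the error samples) are illustrative assumptions, not data taken from the theorem.

```python
# Schematic Euler update of the adaptive gains k_j of Theorem 5.80:
# kdot_j = -beta_j * p^i_j * (e - D^i e_tau)_j^2.  All numbers here are
# illustrative assumptions.
import numpy as np

def gain_step(k, e, e_tau, D_i, p_diag, beta, dt):
    """One Euler step of the gain dynamics; p_diag is the diagonal of P^i."""
    z = e - D_i @ e_tau                  # the controlled signal e - D^i e_tau
    return k - beta * p_diag * z**2 * dt

k = np.zeros(2)                           # initial gains k_j(0)
beta = np.array([1.0, 1.0])               # arbitrary positive constants beta_j
p_diag = np.array([0.39, 0.39])           # diag(P^i), illustrative
D_i = np.diag([0.2, 0.3])                 # neutral-type matrix D^i, illustrative
e, e_tau = np.array([1.0, -0.5]), np.array([0.8, -0.4])
k = gain_step(k, e, e_tau, D_i, p_diag, beta, dt=0.01)
print(k)
```

Because $(e - D^i e_\tau)_j^2 \ge 0$ and $\beta_j p_j^i > 0$, each gain $k_j$ is nonincreasing along trajectories, so the feedback strengthens only while the weighted error persists.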

Proof Under Assumptions 5.73–5.75 and the existence of $e(t; \xi, i_0)$, it can be seen that $\bar{F}(t, r(t), e(t), e_\tau(t))$, $\bar{G}(t, r(t), e(t), e_\tau(t))$, and $\bar{D}(e_\tau(t), r(t))$ satisfy (H1)′, (H2), and (H3), where
$$
\begin{aligned}
\bar{F}(t, r(t), e(t), e_\tau(t)) =\ & -C(r(t)) e(t) + A(r(t)) g(e(t)) + B(r(t)) g(e_\tau(t)) \\
& + E(r(t)) \int_{t-\tau}^{t} g(e(s))\,ds + U(r(t)), \\
\bar{G}(t, r(t), e(t), e_\tau(t)) =\ & \sigma(t, r(t), e(t), e_\tau(t)), \\
\bar{D}(e_\tau(t), r(t)) =\ & D(r(t)) e_\tau(t).
\end{aligned}
$$

For each $i \in S$, choosing the nonnegative function
$$
\begin{aligned}
V(t, i, e) =\ & (e - D^i e_\tau)^T P^i (e - D^i e_\tau) + \int_{t-\tau}^{t} g^T(e(s)) Q_1 g(e(s))\,ds \\
& + \int_{-\tau}^{0}\!\!\int_{t+s}^{t} e^T(\theta) Q_2 e(\theta)\,d\theta\,ds + \sum_{j=1}^{n} \frac{1}{\beta_j} (k_j - k_j^*)^2, \qquad (5.271)
\end{aligned}
$$
where $Q_2 = \varepsilon_4^{-1} \tau L_2^2 I$, and computing $\mathcal{L}V(t, i, e, e_\tau)$ along the trajectory of the error system (5.258), we have

$$
\begin{aligned}
\mathcal{L}V(t, i, e, e_\tau) =\ & V_t(t, i, e - D^i e_\tau) \\
& + V_x(t, i, e - D^i e_\tau) \Big[ -C^i e + A^i g(e) + B^i g(e_\tau) + E^i \int_{t-\tau}^{t} g(e(s))\,ds + U^i \Big] \\
& + \frac{1}{2} \operatorname{trace}\big[\sigma^T(t, i, e, e_\tau)\, V_{xx}(t, i, e - D^i e_\tau)\, \sigma(t, i, e, e_\tau)\big] \\
& + \sum_{k=1}^{S} \gamma_{ik} V(t, k, e - D^i e_\tau). \qquad (5.272)
\end{aligned}
$$

While
$$
\begin{aligned}
V_t(t, i, e - D^i e_\tau) =\ & g^T(e(t)) Q_1 g(e(t)) - g^T(e(t-\tau)) Q_1 g(e(t-\tau)) \\
& + \tau e^T(t) Q_2 e(t) - \int_{-\tau}^{0} e^T(t+s) Q_2 e(t+s)\,ds + \sum_{j=1}^{n} \frac{2}{\beta_j} (k_j - k_j^*) \dot{k}_j \qquad (5.273) \\
=\ & g^T(e(t)) Q_1 g(e(t)) - g^T(e_\tau) Q_1 g(e_\tau) + \tau e^T(t) Q_2 e(t) - \int_{t-\tau}^{t} e^T(s) Q_2 e(s)\,ds \\
& - 2 \sum_{j=1}^{n} (k_j - k_j^*) p_j^i (e - D^i e_\tau)_j^2,
\end{aligned}
$$
$$V_x(t, i, e - D^i e_\tau) = 2 (e - D^i e_\tau)^T P^i, \qquad (5.274)$$
$$V_{xx}(t, i, e - D^i e_\tau) = 2 P^i, \qquad (5.275)$$
$$\sum_{k=1}^{S} \gamma_{ik} V(t, k, e - D^i e_\tau) = \sum_{k=1}^{S} \gamma_{ik} (e - D^i e_\tau)^T P^k (e - D^i e_\tau). \qquad (5.276)$$

So
$$
\begin{aligned}
\mathcal{L}V(t, i, e, e_\tau) \le\ & g^T(e) Q_1 g(e) - g^T(e_\tau) Q_1 g(e_\tau) + \tau e^T Q_2 e - \int_{t-\tau}^{t} e^T(s) Q_2 e(s)\,ds \\
& - 2 \sum_{j=1}^{n} (k_j - k_j^*) p_j^i (e - D^i e_\tau)_j^2 \\
& + 2 (e - D^i e_\tau)^T P^i \Big[ -C^i e + A^i g(e) + B^i g(e_\tau) \\
& + E^i \int_{t-\tau}^{t} g(e(s))\,ds + \operatorname{diag}\{k_1, \ldots, k_n\}(e - D^i e_\tau) \Big] \qquad (5.277) \\
& + \operatorname{trace}\big[\sigma^T(t, i, e, e_\tau) P^i \sigma(t, i, e, e_\tau)\big] + \sum_{k=1}^{S} \gamma_{ik} (e - D^i e_\tau)^T P^k (e - D^i e_\tau).
\end{aligned}
$$

It is easy to get that
$$2 (e - D^i e_\tau)^T P^i \operatorname{diag}\{k_1, \ldots, k_n\}(e - D^i e_\tau) = 2 \sum_{j=1}^{n} k_j p_j^i (e - D^i e_\tau)_j^2. \qquad (5.278)$$
Using Eq. (5.278), we have
$$
\begin{aligned}
& 2 (e - D^i e_\tau)^T P^i \operatorname{diag}\{k_1, \ldots, k_n\}(e - D^i e_\tau) - 2 \sum_{j=1}^{n} (k_j - k_j^*) p_j^i (e - D^i e_\tau)_j^2 \\
&\quad = 2 \sum_{j=1}^{n} k_j^* p_j^i (e - D^i e_\tau)_j^2 = 2 (e - D^i e_\tau)^T P^i K^* (e - D^i e_\tau). \qquad (5.279)
\end{aligned}
$$

According to Assumption 5.73 and Lemma 1.13, we have
$$g^T(e) Q_1 g(e) \le \lambda_{\max}(Q_1)\, g^T(e) g(e) \le \rho_1 L_2^2 |e|^2, \qquad (5.280)$$
$$-g^T(e_\tau) Q_1 g(e_\tau) \le -\lambda_{\min}(Q_1)\, g^T(e_\tau) g(e_\tau) \le -\rho_2 L_1^2 |e_\tau|^2, \qquad (5.281)$$
$$2 e_\tau^T D^{iT} P^i C^i e \le \varepsilon_1 e_\tau^T D^{iT} D^i e_\tau + \varepsilon_1^{-1} e^T C^i P^i P^i C^i e, \qquad (5.282)$$
$$
\begin{aligned}
2 (e - D^i e_\tau)^T P^i A^i g(e) &\le \varepsilon_2 (e - D^i e_\tau)^T P^i A^i A^{iT} P^i (e - D^i e_\tau) + \varepsilon_2^{-1} g^T(e) g(e) \\
&\le \varepsilon_2 (e - D^i e_\tau)^T P^i A^i A^{iT} P^i (e - D^i e_\tau) + \varepsilon_2^{-1} L_2^2 e^T e, \qquad (5.283)
\end{aligned}
$$
$$
\begin{aligned}
2 (e - D^i e_\tau)^T P^i B^i g(e_\tau) &\le \varepsilon_3 (e - D^i e_\tau)^T P^i B^i B^{iT} P^i (e - D^i e_\tau) + \varepsilon_3^{-1} g^T(e_\tau) g(e_\tau) \\
&\le \varepsilon_3 (e - D^i e_\tau)^T P^i B^i B^{iT} P^i (e - D^i e_\tau) + \varepsilon_3^{-1} L_2^2 e_\tau^T e_\tau, \qquad (5.284)
\end{aligned}
$$
$$
\begin{aligned}
2 (e - D^i e_\tau)^T P^i E^i \int_{t-\tau}^{t} g(e(s))\,ds &\le \varepsilon_4 (e - D^i e_\tau)^T P^i E^i E^{iT} P^i (e - D^i e_\tau) \\
&\quad + \varepsilon_4^{-1} \Big(\int_{t-\tau}^{t} g(e(s))\,ds\Big)^T \Big(\int_{t-\tau}^{t} g(e(s))\,ds\Big) \qquad (5.285) \\
&\le \varepsilon_4 (e - D^i e_\tau)^T P^i E^i E^{iT} P^i (e - D^i e_\tau) + \varepsilon_4^{-1} \tau \int_{t-\tau}^{t} g^T(e(s)) g(e(s))\,ds \\
&\le \varepsilon_4 (e - D^i e_\tau)^T P^i E^i E^{iT} P^i (e - D^i e_\tau) + \varepsilon_4^{-1} \tau L_2^2 \int_{t-\tau}^{t} e^T(s) e(s)\,ds,
\end{aligned}
$$
and
$$\operatorname{trace}\big(\sigma^T(t, i, e, e_\tau) P^i \sigma(t, i, e, e_\tau)\big) \le \rho \big(H_1 e^T e + H_2 e_\tau^T e_\tau\big). \qquad (5.286)$$

Therefore,
$$\mathcal{L}V(t, i, e, e_\tau) \le -W_1(e) + W_2(e_\tau) - W_3(e - D^i e_\tau), \qquad (5.287)$$
where
$$W_1(e) = e^T \bar{W}_1 e, \quad W_2(e_\tau) = e_\tau^T \bar{W}_2 e_\tau, \quad W_3(e - D^i e_\tau) = (e - D^i e_\tau)^T \bar{W}_3 (e - D^i e_\tau), \qquad (5.288)$$
with
$$\bar{W}_1 = -\big[\rho_1 L_2^2 I - 2 P^i C^i + \varepsilon_1^{-1} C^{iT} P^i P^i C^i + \varepsilon_2^{-1} L_2^2 I + \rho H_1 I + \varepsilon_4^{-1} \tau^2 L_2^2 I\big], \qquad (5.289)$$
$$\bar{W}_2 = -\rho_2 L_1^2 I + \varepsilon_1 D^{iT} D^i + \varepsilon_3^{-1} L_2^2 I + \rho H_2 I, \qquad (5.290)$$
and
$$\bar{W}_3 = -\Big[\varepsilon_2 P^i A^i A^{iT} P^i + \varepsilon_3 P^i B^i B^{iT} P^i + \varepsilon_4 P^i E^i E^{iT} P^i + \sum_{k=1}^{S} \gamma_{ik} P^k + 2 P^i K^*\Big]. \qquad (5.291)$$

Now, (5.267) is equivalent to $\bar{W}_1 > 0$, (5.268) is just $\bar{W}_2 > 0$, (5.269) is equivalent to $\bar{W}_3 > 0$, and (5.270) is equivalent to $\bar{W}_1 > \bar{W}_2$. So from the conditions of this theorem, we know that the conditions (C1), (C2), and (C3) in Theorem 5.77 are all satisfied. Thus, by Theorem 5.77, the error system (5.258) is almost surely asymptotically stable, and hence the drive system (5.254) and the response system (5.256) are adaptively synchronized a.s. The proof of Theorem 5.80 is completed.

5.7.4 Numerical Examples

In this section, a numerical example will be given to support the main results obtained in this section.

Letting $\Gamma = \begin{bmatrix} -4 & 4 \\ 2 & -2 \end{bmatrix}$, which means $S = 2$, we give the parameters concerning the drive system (5.254), the response system (5.256), and the error system (5.258) as follows:
$$D(1) = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.3 \end{bmatrix}, \quad D(2) = \begin{bmatrix} 0.3 & 0 \\ 0 & 0.1 \end{bmatrix}, \quad C(1) = \begin{bmatrix} 6 & 1 \\ 1 & 7 \end{bmatrix}, \quad C(2) = \begin{bmatrix} 4 & 0 \\ 0 & 7 \end{bmatrix},$$
$$A(1) = \begin{bmatrix} -4 & 2 \\ -6 & 2 \end{bmatrix}, \quad A(2) = \begin{bmatrix} -3 & 2 \\ -3 & 1 \end{bmatrix}, \quad B(1) = \begin{bmatrix} -2 & 1 \\ 1 & -3 \end{bmatrix}, \quad B(2) = \begin{bmatrix} -4 & 3 \\ 1 & -2 \end{bmatrix},$$
$$E(1) = \begin{bmatrix} -5 & 2 \\ 2 & -3 \end{bmatrix}, \quad E(2) = \begin{bmatrix} -4 & -2 \\ -2 & -3 \end{bmatrix}, \quad J(1) = [1, 0]^T, \quad J(2) = [-1, 1]^T.$$

We further set $\tau = 1$, $f(\cdot) = \tanh(\cdot)$, $\sigma(\cdot) = e(t) + e_\tau(t)$. Then we can confirm that Assumptions 5.73–5.75 are satisfied with $L_1 = 0$, $L_2 = 1$, $H_1 = H_2 = 2$, and $\kappa_1 = \kappa_2 = \kappa = 0.3$.

Letting $K^* = \begin{bmatrix} 6 & 0 \\ 0 & 8 \end{bmatrix}$ and using the LMI toolbox in Matlab, we solve the matrix inequalities (5.265)–(5.270) and obtain the following results:
$$Q_1 = \begin{bmatrix} 0.3196 & 0 \\ 0 & 0.3196 \end{bmatrix}, \quad P^1 = \begin{bmatrix} 0.3899 & 0 \\ 0 & 0.3899 \end{bmatrix}, \quad P^2 = \begin{bmatrix} 0.4690 & 0 \\ 0 & 0.4690 \end{bmatrix},$$
$$\rho = 0.5077, \quad \rho_1 = 0.4498, \quad \rho_2 = 0.1078, \quad \varepsilon_1 = 9.9260,$$
$$\varepsilon_2 = 178.9801, \quad \varepsilon_3 = 194.4959, \quad \varepsilon_4 = 226.2334.$$
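As a quick sanity check, conditions (5.265) and (5.266) reduce to scalar comparisons for this solution, since $Q_1$, $P^1$, and $P^2$ are all multiples of the identity; a few lines of Python confirm them:

```python
# Sanity check of (5.265)-(5.266) for the solution reported above.  Since
# Q_1, P^1, P^2 are multiples of the identity, the matrix inequalities
# reduce to scalar comparisons.
rho, rho1, rho2 = 0.5077, 0.4498, 0.1078
q1, p1, p2 = 0.3196, 0.3899, 0.4690
print(rho2 < q1 < rho1)        # True: rho_2*I < Q_1 < rho_1*I
print(p1 < rho and p2 < rho)   # True: P^1 < rho*I and P^2 < rho*I
```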

So from Theorem 5.80, the drive system (5.254) and the response system (5.256)
are adaptive synchronized a.s., when the error system (5.258) has a unique solution.

Fig. 5.20 The varying curve of Markov chain with two states

Fig. 5.21 The dynamic trajectory of the drive system and the response system

To illustrate the effectiveness of the result in this section, we depict the evolution of the systems in Figs. 5.20, 5.21, 5.22 and 5.23. Figure 5.20 shows the two-state Markov chain in the systems. Figure 5.21 shows that the drive system (5.254) and the response system (5.256) synchronize from the moment t = 7. It can be seen from Fig. 5.22 that the state of the error system (5.258) tends to zero from t = 7, which also illustrates the synchronization of the drive system (5.254) and the response system (5.256). The update law of the adaptive control gain K(t) is depicted in Fig. 5.23, which shows that the control gain K(t) no longer varies after the response system (5.256) synchronizes with the drive system (5.254).

Fig. 5.22 The trajectory of the error state

Fig. 5.23 The dynamic curve of the update law of the gain K(t)

5.7.5 Conclusions

In this section, we have proposed a new criterion of a.s. asymptotic stability for a general neutral-type stochastic differential equation, which extends existing results. Based upon this new stability criterion, we have obtained a condition of a.s. asymptotic adaptive synchronization for neutral-type neural networks with stochastic perturbation and Markovian switching by making use of the Lyapunov functional method and designing an adaptive controller. The synchronization condition is expressed as a linear matrix inequality which can be easily solved by Matlab. Finally, we have employed a numerical example to illustrate the effectiveness of the method and result obtained in this section.

Appendix

Proof The proof of (R1) is the same as in [32] and is omitted here. To prove (R2), we divide the argument into five steps. For simplicity, we write $\bar{D}$ as $D$ in the sequel.

Step 1. To prove that the solution $x(t; \xi, i_0)$ of the system obeys
$$\limsup_{t\to\infty} V(t, r(t), x(t) - D(r(t), x(t-\tau))) < \infty \quad a.s. \qquad (5.292)$$
In fact, let
$$
\begin{aligned}
M(t) =\ & \int_0^t V_x(s, r(s), x(s) - D(x(s-\tau), r(s)))\, dB(s) \\
& + \int_0^t \!\int_{\mathbb{R}} \big(V(s, i_0 + \bar{h}(r(s-), l), x(s) - D(x(s-\tau), r(s))) \\
& \qquad\quad - V(s, r(s), x(s) - D(x(s-\tau), r(s)))\big)\, \mu(ds, dl), \qquad (5.293)
\end{aligned}
$$
which is a continuous local martingale with $M(0) = 0$ a.s. By the generalized Itô formula (Lemma 1.10), we have
$$
\begin{aligned}
& V(t, i, x(t) - D(i, x(t-\tau))) \\
&\quad \le V(0, i_0, x(0) - D(i, x(-\tau))) + \int_0^t \mathcal{L}V(s, r(s), x(s), x(s) - D(r(s), x(s-\tau)))\,ds + M(t) \\
&\quad \le V(0, i_0, x(0) - D(i, x(-\tau))) \\
&\qquad + \int_0^t \big(\gamma(s) - W_1(x(s)) + W_2(x(s-\tau)) - W_3(x(s) - D(r(s), x(s-\tau)))\big)\,ds + M(t) \\
&\quad \le V(0, i_0, x(0) - D(i, x(-\tau))) + \int_0^t \gamma(s)\,ds - \int_0^t W_1(x(s))\,ds + \int_0^t W_2(x(s-\tau))\,ds \\
&\qquad - \int_0^t W_3(x(s) - D(r(s), x(s-\tau)))\,ds + M(t) \\
&\quad = V(0, i_0, x(0) - D(i, \xi(-\tau))) + \int_0^t \gamma(s)\,ds - \int_0^t W_1(x(s))\,ds + \int_{-\tau}^{t-\tau} W_2(x(s))\,ds \\
&\qquad - \int_0^t W_3(x(s) - D(r(s), x(s-\tau)))\,ds + M(t) \\
&\quad \le V(0, i_0, x(0) - D(i, \xi(-\tau))) + \int_{-\tau}^{0} W_2(x(s))\,ds + \int_0^t \gamma(s)\,ds \\
&\qquad - \int_0^t \big(W_1(x(s)) - W_2(x(s))\big)\,ds - \int_0^t W_3(x(s) - D(r(s), x(s-\tau)))\,ds + M(t). \qquad (5.294)
\end{aligned}
$$
By the convergence theorem of nonnegative semimartingales (Lemma 1.1), we obtain (5.292).

Step 2. To prove
$$\sup_{0 \le t < \infty} |x(t)| < \infty \quad a.s. \qquad (5.295)$$
Indeed, from (5.292), we have
$$\sup_{0 \le t < \infty} V(t, r(t), x(t) - D(r(t), x(t-\tau))) < \infty \quad a.s.,$$
which, together with (C1), yields
$$\sup_{0 \le t < \infty} |x(t) - D(r(t), x(t-\tau))| < \infty \quad a.s. \qquad (5.296)$$
Now, for any $T > 0$, by (H2), we have that if $0 \le t \le T$, then
$$|x(t)| \le |D(r(t), x(t-\tau))| + |x(t) - D(r(t), x(t-\tau))| \le \kappa |x(t-\tau)| + |x(t) - D(r(t), x(t-\tau))|,$$
where $\kappa = \max_{i\in S} \kappa_i < 1$. This implies
$$
\begin{aligned}
\sup_{0 \le t \le T} |x(t)| &\le \kappa \sup_{0 \le t \le T} |x(t-\tau)| + \sup_{0 \le t \le T} |x(t) - D(r(t), x(t-\tau))| \\
&\le \kappa \beta + \kappa \sup_{0 \le t \le T} |x(t)| + \sup_{0 \le t \le T} |x(t) - D(r(t), x(t-\tau))|,
\end{aligned}
$$
where β is the bound for the initial data ξ. Hence,
$$\sup_{0 \le t \le T} |x(t)| \le \frac{1}{1-\kappa} \Big(\kappa \beta + \sup_{0 \le t \le T} |x(t) - D(r(t), x(t-\tau))|\Big).$$
Letting $T \to \infty$ and using (5.296), we obtain (5.295).


Step 3. To prove
$$\lim_{t\to\infty} W_3(x(t) - D(r(t), x(t-\tau))) = 0 \quad a.s. \qquad (5.297)$$
In fact, taking expectations on both sides of (5.294) and letting $t \to \infty$, we obtain
$$E \int_0^{\infty} W(s)\,ds < \infty, \qquad (5.298)$$
where $W(s) = W_1(x(s)) - W_2(x(s)) + W_3(z(s))$ and $z(s) = x(s) - D(r(s), x(s-\tau))$.

This implies
$$\int_0^{\infty} W(s)\,ds < \infty \quad a.s., \qquad (5.299)$$
or equivalently,
$$\int_0^{\infty} \big(W_1(x(s)) - W_2(x(s))\big)\,ds < \infty \quad a.s. \quad \text{and} \quad \int_0^{\infty} W_3(z(s))\,ds < \infty \quad a.s.$$
From (5.299), we have
$$\liminf_{t\to\infty} W(t) = 0 \quad a.s., \qquad (5.300)$$
or equivalently,
$$\liminf_{t\to\infty} \big(W_1(x(t)) - W_2(x(t))\big) = 0 \quad a.s. \quad \text{and} \quad \liminf_{t\to\infty} W_3(z(t)) = 0 \quad a.s.$$
Now we will prove (5.297), i.e., $\lim_{t\to\infty} W_3(z(t)) = 0$ a.s. In fact, if (5.297) is false, then
$$P\Big\{\limsup_{t\to\infty} W_3(z(t)) > 0\Big\} > 0.$$
Hence, there is a number $\varepsilon > 0$ such that
$$P(\Omega_1) \ge 3\varepsilon, \qquad (5.301)$$
where $\Omega_1 = \{\limsup_{t\to\infty} W_3(z(t)) > 2\varepsilon\}$.
Recalling (5.295), as well as the boundedness of the initial data ξ, we can find a positive number h, which depends on ε, sufficiently large for
$$P(\Omega_2) \ge 1 - \varepsilon, \qquad (5.302)$$
where $\Omega_2 = \{\sup_{-\tau \le t < \infty} |z(t)| < h\}$.
It is easy to see from (5.301) and (5.302) that
$$P(\Omega_1 \cap \Omega_2) \ge 2\varepsilon. \qquad (5.303)$$

We now define a sequence of stopping times as follows:
$$
\begin{aligned}
\tau_h &= \inf\{t \ge 0 : |x(t)| \wedge |z(t)| \ge h\}, \\
\sigma_1 &= \inf\{t \ge 0 : W_3(z(t)) \ge 2\varepsilon\}, \\
\sigma_{2k} &= \inf\{t \ge \sigma_{2k-1} : W_3(z(t)) < \varepsilon\}, \quad k = 1, 2, \ldots, \\
\sigma_{2k+1} &= \inf\{t \ge \sigma_{2k} : W_3(z(t)) \ge 2\varepsilon\}, \quad k = 1, 2, \ldots,
\end{aligned}
$$
where, throughout this section, we set $\inf \emptyset = \infty$. From (5.300) and the definitions of $\Omega_1$ and $\Omega_2$, we observe that if $\omega \in \Omega_1 \cap \Omega_2$, then
$$\tau_h = \infty \quad \text{and} \quad \sigma_k < \infty \quad \forall k \ge 1. \qquad (5.304)$$

Let $I_A$ denote the indicator function of a set $A$. Noting that $\sigma_{2k} < \infty$ whenever $\sigma_{2k-1} < \infty$, we can derive from (5.298) that
$$
\begin{aligned}
\infty &> E \int_0^{\infty} W_3(z(t))\,dt \ge \sum_{k=1}^{\infty} E\Big[I_{\{\sigma_{2k-1}<\infty,\, \sigma_{2k}<\infty,\, \tau_h=\infty\}} \int_{\sigma_{2k-1}}^{\sigma_{2k}} W_3(z(t))\,dt\Big] \\
&\ge \varepsilon \sum_{k=1}^{\infty} E\big[I_{\{\sigma_{2k-1}<\infty,\, \tau_h=\infty\}} (\sigma_{2k} - \sigma_{2k-1})\big]. \qquad (5.305)
\end{aligned}
$$
On the other hand, by (H1), there exists a constant $K_h > 0$ such that
$$|f(t, i, x, y)|^2 \vee |g(t, i, x, y)|^2 \le K_h^2$$
whenever $|x| \vee |y| < h$ and $(t, i) \in \mathbb{R}_+ \times S$. By the Hölder inequality (Lemma 1.15) and Doob's martingale inequality (Lemma 1.18), we compute that, for any $T > 0$ and $k = 1, 2, \ldots$,
$$E\Big[I_{\{\tau_h \wedge \sigma_{2k-1} < \infty\}} \sup_{0 \le t \le T} |z(\tau_h \wedge (\sigma_{2k-1} + t)) - z(\tau_h \wedge \sigma_{2k-1})|^2\Big] \le 2 K_h^2 T (T + 4). \qquad (5.306)$$

Since $W_3(\cdot)$ is continuous in $\mathbb{R}^n$, it is uniformly continuous in the closed ball $\bar{S}_h = \{x \in \mathbb{R}^n : |x| \le h\}$. We can therefore choose $\delta = \delta(\varepsilon) > 0$ so small that
$$|W_3(x) - W_3(y)| < \varepsilon/2 \quad \text{whenever } x, y \in \bar{S}_h,\ |x - y| < \delta. \qquad (5.307)$$



We furthermore choose $T = T(\varepsilon, \delta, h) > 0$ sufficiently small for
$$2 K_h^2 T (T + 4) < \delta^2 \varepsilon.$$
It then follows from (5.306) and Chebyshev's inequality (Lemma 1.19) that
$$P\Big(\{\sigma_{2k-1} \wedge \tau_h < \infty\} \cap \Big\{\sup_{0 \le t \le T} |z(\tau_h \wedge (\sigma_{2k-1} + t)) - z(\tau_h \wedge \sigma_{2k-1})| \ge \delta\Big\}\Big) \le \frac{1}{\delta^2} \big(2 K_h^2 T (T + 4)\big) < \varepsilon.$$
Noting that
$$\{\sigma_{2k-1} < \infty, \tau_h = \infty\} = \{\tau_h \wedge \sigma_{2k-1} < \infty, \tau_h = \infty\} \subseteq \{\tau_h \wedge \sigma_{2k-1} < \infty\},$$
we hence have
$$P\Big(\{\sigma_{2k-1} < \infty, \tau_h = \infty\} \cap \Big\{\sup_{0 \le t \le T} |z(\sigma_{2k-1} + t) - z(\sigma_{2k-1})| \ge \delta\Big\}\Big) < \varepsilon.$$

By (5.303) and (5.304), we further compute
$$
\begin{aligned}
& P\Big(\{\sigma_{2k-1} < \infty, \tau_h = \infty\} \cap \Big\{\sup_{0 \le t \le T} |z(\sigma_{2k-1} + t) - z(\sigma_{2k-1})| < \delta\Big\}\Big) \\
&\quad = P(\{\sigma_{2k-1} < \infty, \tau_h = \infty\}) \\
&\qquad - P\Big(\{\sigma_{2k-1} < \infty, \tau_h = \infty\} \cap \Big\{\sup_{0 \le t \le T} |z(\sigma_{2k-1} + t) - z(\sigma_{2k-1})| \ge \delta\Big\}\Big) \\
&\quad > 2\varepsilon - \varepsilon = \varepsilon. \qquad (5.308)
\end{aligned}
$$

By (5.307), we hence obtain that
$$P\Big(\{\sigma_{2k-1} < \infty, \tau_h = \infty\} \cap \Big\{\sup_{0 \le t \le T} |W_3(z(\sigma_{2k-1} + t)) - W_3(z(\sigma_{2k-1}))| < \varepsilon\Big\}\Big) > \varepsilon. \qquad (5.309)$$
Set
$$\bar{\Omega}_k = \Big\{\sup_{0 \le t \le T} |W_3(z(\sigma_{2k-1} + t)) - W_3(z(\sigma_{2k-1}))| < \varepsilon\Big\}.$$
Noting that
$$\sigma_{2k}(\omega) - \sigma_{2k-1}(\omega) \ge T \quad \text{if } \omega \in \{\sigma_{2k-1} < \infty, \tau_h = \infty\} \cap \bar{\Omega}_k,$$
we derive from (5.305) and (5.309) that
$$
\begin{aligned}
\infty &> \varepsilon \sum_{k=1}^{\infty} E\big[I_{\{\sigma_{2k-1}<\infty,\, \tau_h=\infty\}} (\sigma_{2k} - \sigma_{2k-1})\big] \ge \varepsilon \sum_{k=1}^{\infty} E\big[I_{\{\sigma_{2k-1}<\infty,\, \tau_h=\infty\} \cap \bar{\Omega}_k} (\sigma_{2k} - \sigma_{2k-1})\big] \\
&\ge \varepsilon T \sum_{k=1}^{\infty} P\big(\{\sigma_{2k-1} < \infty, \tau_h = \infty\} \cap \bar{\Omega}_k\big) \ge \varepsilon T \sum_{k=1}^{\infty} \varepsilon = \infty,
\end{aligned}
$$
which is a contradiction. So (5.297) must hold.


Step 4. To prove that $\operatorname{Ker}(W_3) \ne \emptyset$ and
$$\lim_{t\to\infty} d\big(x(t; \xi, i_0) - D(x(t-\tau; \xi, i_0), r(t)),\ \operatorname{Ker}(W_3)\big) = 0 \quad a.s. \qquad (5.310)$$

By (5.297) and (5.296), we see that there is an $\Omega_0 \subset \Omega$ with $P(\Omega_0) = 1$ such that
$$\lim_{t\to\infty} W_3(z(t, \omega)) = 0 \quad \text{and} \quad \sup_{0 \le t < \infty} |z(t, \omega)| < \infty \quad \forall \omega \in \Omega_0. \qquad (5.311)$$
Choose any $\omega \in \Omega_0$. Then $\{z(t, \omega)\}_{t \ge 0}$ is bounded in $\mathbb{R}^n$, so there must be an increasing sequence $\{t_k\}_{k \ge 1}$ such that $t_k \to \infty$ and $\{z(t_k, \omega)\}_{k \ge 1}$ converges to some $\bar{z} \in \mathbb{R}^n$. Thus
$$W_3(\bar{z}) = \lim_{k\to\infty} W_3(z(t_k, \omega)) = 0,$$

which implies that $\bar{z} \in \operatorname{Ker}(W_3)$, whence $\operatorname{Ker}(W_3) \ne \emptyset$. From this, we can show that
$$\lim_{t\to\infty} d(z(t, \omega), \operatorname{Ker}(W_3)) = 0 \quad \forall \omega \in \Omega_0. \qquad (5.312)$$
If this is false, then there is some $\bar{\omega} \in \Omega_0$ such that
$$\limsup_{t\to\infty} d(z(t, \bar{\omega}), \operatorname{Ker}(W_3)) > 0.$$
Hence, there is a subsequence $\{z(t_k, \bar{\omega})\}_{k \ge 0}$ of $\{z(t, \bar{\omega})\}_{t \ge 0}$ such that
$$\lim_{k\to\infty} d(z(t_k, \bar{\omega}), \operatorname{Ker}(W_3)) > \bar{\varepsilon}$$
for some $\bar{\varepsilon} > 0$. Since $\{z(t_k, \bar{\omega})\}_{k \ge 0}$ is bounded, we can find a subsequence $\{z(\bar{t}_k, \bar{\omega})\}_{k \ge 0}$ which converges to some $\hat{z} \in \mathbb{R}^n$. Clearly, $\hat{z} \notin \operatorname{Ker}(W_3)$, so $W_3(\hat{z}) > 0$. But, by (5.311),
$$W_3(\hat{z}) = \lim_{k\to\infty} W_3(z(\bar{t}_k, \bar{\omega})) = 0,$$
a contradiction. Hence, (5.312) must hold, and therefore (5.310) holds as well.


Step 5 To prove (R2).
Under the assume that

W3 (x) = 0 if and only if x = 0 (5.313)

we have Ker(W3 ) = {0}. It then follows from (5.310) that

lim [x(t) − D(x(t − τ ), r (t))] = lim z(t) = 0 a.s.


t→0 t→0

But by (H2),

|x(t)| ≤ |D(x(t − τ ), r (t))| + |x(t) − D(x(t − τ ), r (t))|


≤ κ|x(t − τ )| + |x(t) − D(x(t − τ ), r (t))|

where κ ∈ (0, 1) has been defined above. Letting t → ∞ we obtain that

lim sup |x(t)| ≤ κ lim sup |x(t)| a.s.


     
t→∞ t→∞

This together with (5.295) yields

lim |x(t)| = 0 a.s.


t→∞

which is the (5.264) and the proof is therefore completed.



References

1. H. Bao, J. Cao, Stochastic global exponential stability for neutral-type impulsive neural net-
works with mixed time-delays and Markovian jumping parameters. Commun. Nonlinear Sci.
Numer. Simul. 16(9), 3786–3791 (2011)
2. G. Cai, Q. Yao, X. Fan, J. Ding, Adaptive projective synchronization in an array of asymmetric
neural networks. J. Comput. 7(8), 2024–2030 (2012)
3. S. Chen, J. Cao, Projective synchronization of neural networks with mixed time-varying delays
and parameter mismatch. Nonlinear Dyn. 67(2), 1397–1406 (2012)
4. X. Ding, Y. Gao, W. Zhou, D. Tong, H. Su, Adaptive almost surely asymptotically synchro-
nization for stochastic delayed neural networks with Markovian switching. Adv. Differ. Equ.
2013(1), 1–12 (2013)
5. J. Feng, S. Xu, Y. Zou, Delay-dependent stability of neutral type neural networks with distrib-
uted delays. Neurocomputing 72(10–12), 2576–2580 (2009)
6. J.M. González-Miranda, Amplification and displacement of chaotic attractors by means of
unidirectional chaotic driving. Phys. Rev. E 57(6), 7321–7324 (1998)
7. W.L. He, J.D. Cao, Adaptive synchronization of a class of chaotic neural networks with known
or unknown parameters. Phys. Lett. A 372(4), 408–416 (2008)
8. H. Huang, D.W.C. Ho, Y. Qu, Robust stability of stochastic delayed additive neural networks
with Markovian switching. Neural Netw. 20(7), 799–809 (2007)
9. X. Huang, J. Cao, Generalized synchronization for delayed chaotic neural networks: a novel
coupling scheme. Nonlinearity 19(12), 2797–2811 (2006)
10. H. Huo, W. Li, Existence of positive periodic solution of a neutral impulsive delay predator-prey
system. Appl. Math. Comput. 185(1), 499–507 (2007)
11. H.R. Karimi, Robust synchronization and fault detection of uncertain master-slave systems
with mixed time-varying delays and nonlinear perturbations. Int. J. Control Autom. Syst. 9(4),
671–680 (2011)
12. H.R. Karimi, A sliding mode approach to H∞ synchronization of master-slave time-delay
systems with Markovian jumping parameters and nonlinear uncertainties. J. Frankl. Inst. 349(4),
1480–1496 (2012)
13. H.R. Karimi, H. Gao, LMI-based H∞ synchronization of second-order neutral master-slave
systems using delayed output feedback control. Int. J. Control Autom. Syst. 7(3), 371–380
(2009)
14. H.R. Karimi, M. Zapateiro, N. Luo, Adaptive synchronization of master-slave systems with
mixed neutral and discrete time-delays and nonlinear perturbations. Asian J. Control 14(1),
251–257 (2012)
15. S. Karthikeyan, K. Balachandran, Controllability of nonlinear stochastic neutral impulsive
systems. Nonlinear Anal. Hybrid Syst. 3(3), 266–276 (2009)
16. V. Kolmanovskii, N. Koroleva, T. Maizenberg, X. Mao, A. Matasov, Neutral stochastic differ-
ential delay equations with Markovian switching. Stoch. Anal. Appl. 21(4), 839–867 (2003)
17. O.M. Kwon, M.J. Park, S.M. Lee, J.H. Park, E.-J. Cha, Stability for neural networks with
time-varying delays via some new approaches. IEEE Trans. Neural Netw. Learn. Syst. 24(2),
181–193 (2013)
18. T.H. Lee, J.H. Park, O.M. Kwon, S.M. Lee, Stochastic sampled-data control for state estimation
of time-varying delayed neural networks. Neural Netw. 46(1), 99–108 (2013)
19. F. Li, X. Wang, P. Shi, Robust quantized H∞ control for network control systems with Markov-
ian jumps and time delays. Int. J. Innov. Comput. Inf. Control 9(12), 4889–4902 (2013)
20. X. Li, Global robust stability for stochastic interval neural networks with continuously distrib-
uted delays of neutral type. Appl. Math. Comput. 215(12), 4370–4384 (2010)
21. X. Li, J. Cao, Adaptive synchronization for delayed neural networks with stochastic perturba-
tion. J. Frankl. Inst. 354(7), 779–791 (2008)
22. C.-H. Lien, K.-W. Yu, Y.-F. Lin, Y.-J. Chung, L.-Y. Chung, Exponential convergence rate
estimation for uncertain delayed neural networks of neutral type. Chaos Solitons Fractals
40(5), 2491–2499 (2009)

23. L. Liu, Z. Han, W. Li, Global stability analysis of interval neural networks with discrete and
distributed delays of neutral type. Expert Syst. Appl. 36(3), 7328–7331 (2009)
24. P. Liu, Delay-dependent robust stability analysis for recurrent neural networks with time-
varying delay. Int. J. Innov. Comput. Inf. Control 9(8), 3341–3355 (2013)
25. Y. Liu, Stochastic asymptotic stability of Markovian jumping neural networks with Markov
mode estimation and mode-dependent delays. Phys. Lett. A 373(41), 3741–3742 (2009)
26. Y. Liu, Z. Wang, X. Liu, Stability analysis for a class of neutral-type neural networks with
Markovian jumping parameters and mode-dependent mixed delays. Neurocomputing 94, 46–
53 (2012)
27. X. Lou, B. Cui, Stochastic stability analysis for delayed neural networks of neutral type with
Markovian jump parameters. Chaos Solitons Fractals 39(5), 2188–2197 (2009)
28. J. Lu, D.W.C. Ho, J. Cao, J. Kurths, Exponential synchronization of linearly coupled neural
networks with impulsive disturbances. IEEE Trans. Neural Netw. 22(2), 329–336 (2011)
29. Q. Lu, L. Zhang, P. Shi, H. Karimi, Control design for a hypersonic aircraft using a switched
linear parameter-varying system approach. Proc. Inst. Mech. Eng. Part I: J. Syst. Control Eng.
227(1), 85–95 (2013)
30. H.H. Mai, X.F. Liao, C.D. Li, A semi-free weighting matrices approach for neutral-type delayed
neural networks. J. Comput. Appl. Math. 225(1), 44–55 (2009)
31. X. Mao, Stochastic Differential Equations and Their Applications (Horwood, Chichester, 1997)
32. X. Mao, Y. Shen, C. Yuan, Almost surely asymptotic stability of neutral stochastic differential
delay equations with Markovian switching. Stoch. Process. Appl. 118(8), 1385–1406 (2008)
33. X. Mao, C. Yuan, Stochastic Differential Equations with Markovian Switching (Imperial Col-
lege Press, London, 2006)
34. J.H. Park, Synchronization of cellular neural networks of neutral type via dynamic feedback
controller. Chaos Solitons Fractals 42(3), 1299–1304 (2009)
35. J.H. Park, O.M. Kwon, Global stability for neural networks of neutral-type with interval time-
varying delays. Chaos Solitons Fractals 41(3), 1174–1181 (2009)
36. J.H. Park, O.M. Kwon, S.M. Lee, LMI optimization approach on stability for delayed neural
networks of neutral-type. Appl. Math. Comput. 196(1), 236–244 (2008)
37. J.H. Park, C. Park, O. Kwon, S. Lee, A new stability criterion for bidirectional associative
memory neural networks of neutral-type. Appl. Math. Comput. 199(2), 716–722 (2008)
38. V.P. Rubanik, Oscillations of Quasilinear Systems with Retardation (Nauka, Moscow, 1969)
39. R. Samli, S. Arik, New results for global stability of a class of neutral-type neural systems with
time delays. Appl. Math. Comput. 210(2), 564–570 (2009)
40. L. Sheng, M. Gao, Robust stability of Markovian jump discrete-time neural networks with
partly unknown transition probabilities and mixed mode-dependent delays. Int. J. Syst. Sci.
44(2), 252–264 (2013)
41. P. Shi, E.K. Boukas, R. Agarwal, Control of Markovian jump discrete-time systems with norm
bounded uncertainty and unknown delay. IEEE Trans. Autom. Control 44(11), 2139–2144
(1999)
42. P. Shi, E.K. Boukas, R. Agarwal, Kalman filtering for continuous-time uncertain systems with
Markovian jumping parameters. IEEE Trans. Autom. Control 44(8), 1592–1597 (1999)
43. W. Su, Y. Chen, Global asymptotic stability analysis for neutral stochastic neural networks
with time-varying delays. Commun. Nonlinear Sci. Numer. Simul. 14(4), 1576–1581 (2009)
44. Y. Sun, J. Cao, Adaptive lag synchronization of unknown chaotic delayed neural networks with
noise perturbation. Phys. Lett. A 364(3), 277–285 (2007)
45. Y. Sun, G. Feng, J. Cao, Stochastic stability of Markovian switching genetic regulatory net-
works. Phys. Lett. A 373(18), 1646–1652 (2009)
46. Y. Tang, J. Fang, Adaptive synchronization in an array of chaotic neural networks with mixed
delays and jumping stochastically hybrid coupling. Commun. Nonlinear Sci. Numer. Simul.
14(9), 3615–3628 (2009)
47. Y. Tang, H. Gao, W. Zou, J. Kurths, Distributed synchronization in networks of agent systems
with nonlinearities and random switchings. IEEE Trans. Cybern. 43(1), 358–370 (2013)

48. Y. Tang, R. Qiu, J. Fang, Q. Miao, M. Xia, Adaptive lag synchronization in unknown stochas-
tic chaotic neural networks with discrete and distributed time-varying delays. Phys. Lett. A
372(24), 4425–4433 (2008)
49. Y. Tang, Z. Wang, J. Fang, Controller design for synchronization of an array of delayed neural
networks using a controllable probabilistic PSO. Inf. Sci. 181(20), 4715–4732 (2011)
50. Y. Tang, Z. Wang, H. Gao, S. Swift, J. Kurths, A constrained evolutionary computation method
for detecting controlling regions of cortical networks. IEEE-ACM Trans. Comput. Biol. Bioin-
form. 9(6), 1569–1581 (2012)
51. Y. Tang, W.K. Wong, Distributed synchronization of coupled neural networks via randomly
occurring control. IEEE Trans. Neural Netw. Learn. Syst. 24(3), 435–447 (2013)
52. D. Tong, Q. Zhu, W. Zhou, Y. Xu, J. Fang, Adaptive synchronization for stochastic T-S fuzzy
neural networks with time-delay and Markovian jumping parameters. Neurocomputing 27(6),
91–97 (2013)
53. K. Wang, Z. Teng, H. Jiang, Adaptive synchronization of neural networks with time-varying
delay and distributed delay. Phys. A: Stat. Mech. Appl. 387(2–3), 631–642 (2008)
54. Q. Wang, Q. Lu, Phase synchronization in small world chaotic neural networks. Chin. Phys.
Lett. 22(6), 1329–1332 (2005)
55. Z. Wang, J. Fang, X. Liu, Global stability of stochastic high-order neural networks with discrete
and distributed delays. Chaos Solitons Fractals 36(2), 388–396 (2008)
56. Z. Wang, Y. Liu, L. Liu, X. Liu, Exponential stability of delayed recurrent neural networks
with Markovian jumping parameters. Phys. Lett. A 356(4), 346–352 (2006)
57. Z. Wang, Y. Liu, X. Liu, Exponential stabilization of a class of stochastic system with Markovian
jump parameters and mode-dependent mixed time-delays. IEEE Trans. Autom. Control 55(7),
1656–1662 (2010)
58. Z. Wang, Y. Liu, G. Wei, X. Liu, A note on control of discrete-time stochastic systems with
distributed delays and nonlinear disturbances. Automatica 46(3), 543–548 (2010)
59. Z.D. Wang, D.W.C. Ho, Y.R. Liu, X.H. Liu, Robust H∞ control for a class of nonlinear discrete
time-delay stochastic systems with missing measurements. Automatica 45(3), 1–8 (2010)
60. Z. Wu, P. Shi, H. Su, J. Chu, Delay-dependent stability analysis for switched neural networks
with time-varying delay. IEEE Trans. Syst. Man Cybern. Part B: Cybern. 41(6), 1522–1530
(2011)
61. Z. Wu, P. Shi, H. Su, J. Chu, Passivity analysis for discrete-time stochastic Markovian jump
neural networks with mixed time-delays. IEEE Trans. Neural Netw. 22(10), 1566–1575 (2011)
62. Z. Wu, P. Shi, H. Su, J. Chu, Exponential synchronization of neural networks with discrete and
distributed delays under time-varying sampling. IEEE Trans. Neural Netw. Learn. Syst. 23(9),
1368–1376 (2012)
63. Z. Wu, P. Shi, H. Su, J. Chu, Stochastic synchronization of Markovian jump neural networks
with time-varying delay using sampled-data. IEEE Trans. Cybern. 43(6), 1796–1806 (2013)
64. Y. Yang, J. Cao, Exponential lag synchronization of a class of chaotic delayed neural networks
with impulsive effects. Phys. A: Stat. Mech. Appl. 386(1), 492–502 (2007)
65. W. Yu, J. Cao, Synchronization control of stochastic delayed neural networks. Phys. A: Stat.
Mech. Appl. 373(1), 252–260 (2007)
66. C. Yuan, X. Mao, Robust stability and controllability of stochastic differential delay equations
with Markovian switching. Automatica 40(3), 343–354 (2004)
67. D. Zhang, J. Xu, Projective synchronization of different chaotic time-delayed neural networks
based on integral sliding mode controller. Appl. Math. Comput. 217(1), 164–174 (2010)
68. L. Zhang, E. Boukas, Stability and stabilization of Markovian jump linear systems with partly
unknown transition probabilities. Automatica 45(2), 463–468 (2009)
69. L. Zhang, E.K. Boukas, H∞ control for discrete-time Markovian jump linear systems with
partly unknown transition probabilities. Int. J. Robust Nonlinear Control 19(8), 868–883 (2009)
70. L. Zhang, E.K. Boukas, H∞ control of a class of extended Markov jump linear systems. IET
Control Theory Appl. 3(7), 834–842 (2009)
71. L. Zhang, E.K. Boukas, J. Lam, Analysis and synthesis of Markov jump linear systems with
time-varying delays and partially known transition probabilities. IEEE Trans. Autom. Control
53(10), 2458–2464 (2008)

72. L. Zhang, J. Lam, Necessary and sufficient conditions for analysis and synthesis of Markov
jump linear systems with incomplete transition descriptions. IEEE Trans. Autom. Control
55(7), 1695–1701 (2010)
73. W. Zhang, Y. Tang, J. Fang, Stochastic stability of Markovian jumping genetic regulatory
networks with mixed time delays. Appl. Math. Comput. 217(17), 7210–7225 (2011)
74. Y. Zhang, J. Sun, Stability of impulsive neural networks with time delays. Phys. Lett. A 348(1),
44–50 (2005)
75. Y.J. Zhang, S.Y. Xu, Y.M. Chu, J.J. Lu, Robust global synchronization of complex networks
with neutral-type delayed nodes. Appl. Math. Comput. 216(3), 768–778 (2010)
76. H. Zhao, S. Xu, Y. Zou, Robust H∞ filtering for uncertain Markovian jump systems with
mode-dependent distributed delays. Int. J. Adapt. Control Signal Process 24(1), 83–94 (2010)
77. J. Zhou, T. Chen, L. Xiang, Chaotic lag synchronization of coupled delayed neural networks
and its applications in secure communication. Circuits Syst. Signal Process. 24(5), 599–613
(2005)
78. Q. Zhou, P. Shi, H. Liu, S. Xu, Neural-network-based decentralized adaptive output-feedback
control for large-scale stochastic nonlinear systems. IEEE Trans. Syst. Man Cybern. Part B:
Cybern. 42(6), 1608–1619 (2012)
79. W. Zhou, Y. Gao, D. Tong, C. Ji, J. Fang, Adaptive exponential synchronization in pth moment
of neutral-type neural networks with time delays and Markovian switching. Int. J. Control,
Autom. Syst. 11(4), 845–851 (2013)
80. W. Zhou, H. Lu, C. Duan, Exponential stability of hybrid stochastic neural networks with mixed
time delays and nonlinearity. Neurocomputing 72(13), 3357–3365 (2009)
81. W. Zhou, D. Tong, Y. Gao, C. Ji, H. Su, Mode and delay-dependent adaptive exponential syn-
chronization in pth moment for stochastic delayed neural networks with Markovian switching.
IEEE Trans. Neural Netw. Learn. Syst. 23(4), 662–668 (2012)
82. J. Zhu, Q. Zhang, C. Yang, Delay-dependent robust stability for Hopfield neural networks of
neutral-type. Neurocomputing 72(10), 2609–2617 (2009)
83. Q. Zhu, J. Cao, Adaptive synchronization under almost every initial data for stochastic neural
networks with time-varying delays and distributed delays. Commun. Nonlinear Sci. Numer.
Simul. 16(4), 2139–2159 (2011)
84. Q. Zhu, W. Zhou, D. Tong, J. Fang, Adaptive synchronization for stochastic neural networks
of neutral-type with mixed time-delays. Neurocomputing 99, 477–485 (2013)
85. S. Zhu, Y. Shen, Passivity analysis of stochastic delayed neural networks with Markovian
switching. Neurocomputing 74(10), 1754–1761 (2011)
Chapter 6
Stability and Synchronization of Neural Networks with Lévy Noise

As a simple model of jump diffusions, Lévy noise describes neural noise in a more general sense than Brownian motion does. This chapter concentrates on the stability and synchronization issues of neural networks with Lévy noise. Almost surely exponential stability and pth moment asymptotic stability for such networks are discussed in the first two sections. Synchronization via sampled data and adaptive synchronization are investigated in the remaining two sections.

6.1 Almost Surely Exponential Stability of NN with Lévy Noise and Markovian Switching

6.1.1 Introduction

In the past few years, neural networks have been successfully applied in many areas, including image processing, pattern recognition, associative memory, and optimization problems. In the meantime, the stability analysis of neural networks has gained much research attention. Many methods for stability research, such as the linear matrix inequality approach and the M-matrix approach, have been investigated; see, e.g., [17, 21, 22, 34, 35, 39, 45, 50, 53, 54, 60]. Various sufficient conditions have been proposed to guarantee the global asymptotic or exponential stability of neural networks.
Recently, it has been shown that many neural networks may have finite modes,
and the modes may switch from one to another at different times [17, 21, 22,
34, 45, 50, 54, 60]. In this situation, finite-state Markov chains can be used to
govern the switching between different modes of neural networks. Therefore, the
stability analysis problem for neural networks with Markovian switching has received


much research attention [17, 21, 22, 34, 45, 50, 54]. In summary, Mao and Yuan [22] studied the more general case of stochastic differential equations with Markovian switching and obtained a series of results.
To date, Gaussian white noise or Brownian motion has been regarded as a commonly used model to describe the disturbances arising in neural networks or other nonlinear systems [17, 21, 22, 34, 45, 50, 54]. However, Brownian motion is ill-suited to depicting instantaneous disturbance changes because of its continuity. Lévy noise, which frequently appears in the areas of finance, statistical mechanics, and signal processing (see, e.g., [1–3, 5, 26, 31, 52]) and is written as (B, N) by D. Applebaum in [2], is more suitable for modeling diversified system noise because Lévy noise can be decomposed into a continuous part and a jump part by the Lévy–Itô decomposition. As a result, Lévy noise extends Gaussian noise to many types of impulsive jump-noise processes found in real and model neurons as well as in models of finance and other random phenomena. In neural networks, a Lévy noise model describes how the neuron's membrane potential evolves more accurately than a simpler diffusion model does, because the more general Lévy model includes not only pure-diffusion and pure-jump models but also jump-diffusion models [4, 10, 28]. Owing to their Gaussian structure, however, pure-diffusion neuron models rely on special limiting-case assumptions of incoming Poisson spikes from other neurons. These assumptions require at least that the number of impinging synapses be large and that the synapses have small membrane effects due to the small coupling coefficient [4, 13]. From the viewpoint of engineering applications, Lévy models are more valuable than Gaussian models because physical devices may be limited in their number of model-neuron connections [4, 23] and because real signals and noise can often be impulsive [4, 29]. As seen in [11, 42, 43, 46], systems with Lévy noise or, more generally, with Gaussian noise and some kind of jump noise are also called jump diffusions. Hence, stability analysis problems for jump diffusions have drawn increasing research interest; see, e.g., [11, 19, 25, 42–44, 46].
In this section, we introduce Lévy noise for neural network modeling and extend the stochastic analysis approach for the stability of neural networks with traditional Gaussian noise to neural networks with Lévy noise. By the generalized Itô formula for Lévy-type stochastic integrals [1], and taking advantage of the strong law of large numbers for martingales and the ergodicity of Markov chains, we derive a sufficient condition for the almost surely exponential stability of such neural networks, which depends only on the stationary probability distribution of the Markov chain and some other constants. Two numerical examples are provided to show the usefulness of the proposed stability condition.

6.1.2 Model and Preliminaries

Let $r(t)$, $t \ge 0$, be a right-continuous Markov chain on the probability space taking values in a finite state space $S = \{1, 2, \ldots, S\}$ with generator $\Gamma = (\gamma_{ij})_{S\times S}$.

As a standing hypothesis, we assume in this section that the Markov chain is irreducible, or ergodic. The algebraic interpretation of irreducibility is $\operatorname{rank}(\Gamma) = S - 1$. Under this condition, the Markov chain has a unique stationary distribution $\pi = (\pi_1, \pi_2, \ldots, \pi_S) \in \mathbb{R}^{1\times S}$, which can be determined by solving the linear equation $\pi\Gamma = 0$ subject to $\sum_{j=1}^{S} \pi_j = 1$ and $\pi_j > 0$, $\forall j \in S$ [21].
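For instance, this stationary distribution can be computed numerically by appending the normalization constraint to πΓ = 0; a minimal Python helper (applied here, as a check, to the generator used in Example 6.7 below) is:

```python
# A small helper computing the stationary distribution pi of an ergodic
# Markov chain from its generator Gamma, by solving pi*Gamma = 0 together
# with sum(pi) = 1 as a least-squares system; a sketch for illustration.
import numpy as np

def stationary_distribution(Gamma):
    S = Gamma.shape[0]
    A = np.vstack([Gamma.T, np.ones((1, S))])  # rows: Gamma^T pi = 0, 1^T pi = 1
    b = np.zeros(S + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

Gamma = np.array([[-6.0, 6.0],
                  [2.0, -2.0]])                # generator used in Example 6.7
print(stationary_distribution(Gamma))          # -> [0.25, 0.75]
```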
Consider the n-dimensional stochastic neural network with Lévy noise and Markovian switching of the form
$$dx(t) = [-F(r(t)) x(t) + A(r(t)) g(x(t))]\,dt + \sigma(x(t), r(t))\,dB(t) + \int_Y H(x(t-), r(t-), y)\, N(dt, dy) \qquad (6.1)$$
with initial data $x(0) = x_0$, $r(0) = r_0$, where $\sigma : \mathbb{R}^n \times S \to \mathbb{R}^{n\times m}$ and $H : \mathbb{R}^n \times S \times \mathbb{R}^n \to \mathbb{R}^n$. Here $x(t) = (x_1(t), \ldots, x_n(t))^T \in \mathbb{R}^n$ is the state vector associated with the n neurons, $F(\cdot)$ is a positive diagonal matrix, and $g(x(t)) = (g_1(x_1(t)), \ldots, g_n(x_n(t)))^T$ denotes the neuron activation function with $g(0) = 0$. The value of the random variable y, which determines the probability distribution of the random jump amplitudes, is limited to $y \in Y \subset \mathbb{R}^n$. We further assume that $B(t)$, $N(t, y)$, and $r(t)$ in system (6.1) are independent.
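Before turning to the analysis, a minimal Euler-type simulation sketch of a jump-diffusion of the form (6.1) may help fix ideas. The sketch deliberately freezes the Markov chain in mode 1 and borrows the mode-1 parameters of Example 6.7 below (F_1, A_1, G_1, g = tanh, H(x, 1, y) = cos(y)x, λ = 3); the time step and horizon are assumptions chosen only for illustration.

```python
# A minimal Euler-type simulation sketch for a jump-diffusion of the form
# (6.1).  The Markov chain is frozen in mode 1; F, A, G, g = tanh,
# H(x, 1, y) = cos(y) x and lam = 3 are borrowed from Example 6.7 below,
# so the switching itself is deliberately omitted here.
import numpy as np

rng = np.random.default_rng(0)
dt, T, lam = 1e-3, 5.0, 3.0
F = np.diag([9.0, 8.0])                       # F_1
A = np.array([[-2.0, 1.0], [1.3, -1.0]])      # A_1
G = np.array([[-1.0, 0.5], [0.5, -1.0]])      # G_1, sigma(x, 1) = G_1 x

x = np.array([5.0, -5.0])
for _ in range(int(T / dt)):
    drift = -F @ x + A @ np.tanh(x)                   # -F x + A g(x)
    diffusion = (G @ x) * rng.normal() * np.sqrt(dt)  # sigma(x,1) dB, scalar B
    x = x + drift * dt + diffusion
    if rng.random() < lam * dt:                       # Poisson jump, rate lam
        y = rng.uniform(-1.0, 1.0)                    # jump mark y ~ U[-1, 1]
        x = x + np.cos(y) * x                         # H(x, 1, y) = cos(y) x
print(x)   # in a stable regime the state decays toward zero
```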
For the purpose of stability study in this section, we impose the following assump-
tion.

Assumption 6.1 (i) The functions σ(·) and H(·) satisfy $\sigma(0, i) \equiv 0$ and $H(0, i, y) \equiv 0$ for each $i \in S$ and $y \in Y$.
(ii) (Local Lipschitz condition) For all $n \in \mathbb{N}$, $y \in Y$, $t \ge 0$, $i \in S$, and $x_1(t), x_2(t) \in \mathbb{R}^n$ with $|x_1| \vee |x_2| < n$, there exists a positive constant $K(n)$ such that
$$|g(x_1(t)) - g(x_2(t))|^2 + \|\sigma(x_1(t), i) - \sigma(x_2(t), i)\|^2 + \int_Y |H(x_1(t), i, y) - H(x_2(t), i, y)|^2 \phi(dy) \le K(n) |x_1(t) - x_2(t)|^2. \qquad (6.2)$$

Remark 6.2 One can immediately derive from Assumption 6.1 (i) that system (6.1) admits a trivial solution $x(t; 0) \equiv 0$. Combining (i) and (ii) in Assumption 6.1 with the property $g(0) = 0$, we have
$$|g(x(t))|^2 \le K(n) |x(t)|^2, \qquad (6.3)$$
$$\|\sigma(x(t), i)\|^2 \le K(n) |x(t)|^2, \qquad (6.4)$$
$$\int_Y |H(x(t), i, y)|^2 \phi(dy) \le K(n) |x(t)|^2 \qquad (6.5)$$
for all $x(t), y(t) \in \mathbb{R}^n$ with $|x| \vee |y| < n$ and $i \in S$, $y \in Y$, which means that the local growth condition for system (6.1) holds; hence, from [1], the local solution of (6.1) exists uniquely.

The purpose of this section is to discuss the almost surely exponential stability of
the neural network (6.1). Let us begin with the following definition.

Definition 6.3 The trivial solution of (6.1), or simply, system (6.1), is said to be almost surely exponentially stable if for any $x_0 \in \mathbb{R}^n$,
$$\limsup_{t\to\infty} \frac{1}{t} \log(|x(t; x_0)|) < 0 \quad a.s.$$

6.1.3 Main Results

The following theorem shows that the stability criterion depends only on the state of the Markov chain and some other constants.

Theorem 6.4 Let Assumption 6.1 hold. Assume that there exist a symmetric positive definite matrix $Q$ and constants $\mu_i \in \mathbb{R}$, $\rho_i, \alpha_i, \beta_i \ge 0$ ($i \in S$) such that
$$2 x^T Q [-F_i x + A_i g(x)] + \operatorname{trace}[\sigma(x, i)^T Q \sigma(x, i)] \le \mu_i x^T Q x, \qquad (6.6)$$
$$|x^T Q \sigma(x, i)|^2 \ge \rho_i (x^T Q x)^2, \qquad (6.7)$$
$$\alpha_i |x| \le |H(x, i, y) + x| \le \beta_i |x| \qquad (6.8)$$
for all $x(t) \in \mathbb{R}^n$, where $A_i = A(i)$ and $F_i = F(i)$ for $r(t) = i$. Then the solution $x(t; x_0)$ of (6.1) has the property that
$$\limsup_{t\to\infty} \frac{1}{t} \log(|x(t; x_0)|) \le \sum_{i=1}^{S} \frac{\pi_i}{2} \Big[\mu_i - 2\rho_i + \lambda \log \frac{\lambda_{\max}(Q) \beta_i^2}{\lambda_{\min}(Q)}\Big]. \qquad (6.9)$$
In particular, if $\sum_{i=1}^{S} \frac{\pi_i}{2} \big[\mu_i - 2\rho_i + \lambda \log \frac{\lambda_{\max}(Q) \beta_i^2}{\lambda_{\min}(Q)}\big] < 0$, then the neural network (6.1) is almost surely exponentially stable.
Proof For simplicity, we write $x(t), F(\cdot), A(\cdot), \sigma(\cdot,\cdot), H(\cdot,\cdot,\cdot)$ as $x, F, A, \sigma, H$, respectively. Obviously, (6.9) holds when $x_0 = 0$. Fix any $x_0 \ne 0$. The generalized Itô formula shows
$$
\begin{aligned}
\log[x^T(t) Q x(t)] =\ & \log(x_0^T Q x_0) + \int_0^t \Big\{ \frac{1}{x^T(s) Q x(s)} \Big[ 2 x^T(s) Q \big(-F(r(s)) x(s) + A(r(s)) g(x(s))\big) \\
& + \operatorname{trace}\big(\sigma^T(x(s), r(s)) Q \sigma(x(s), r(s))\big) \Big] - \frac{2 |x^T(s) Q \sigma(x(s), r(s))|^2}{(x^T(s) Q x(s))^2} \\
& + \sum_{j=1}^{S} \gamma_{ij} \log[x^T(t) Q x(t)] \Big\}\,ds + \int_0^t \frac{2 x^T(s) Q \sigma(x(s), r(s))}{x^T(s) Q x(s)}\,dB(s) \\
& + \int_0^t \!\int_Y \Big( \log\big[(x(s) + H(x(s-), r(s-), y))^T Q (x(s) + H(x(s-), r(s-), y))\big] \\
& \qquad\quad - \log[x^T(s) Q x(s)] \Big)\, N(ds, dy) \\
=\ & \log(x_0^T Q x_0) + \int_0^t \Big\{ \frac{1}{x^T Q x} \Big[ 2 x^T Q \big(-F x + A g(x)\big) + \operatorname{trace}\big(\sigma^T Q \sigma\big) \Big] - \frac{2 |x^T Q \sigma|^2}{(x^T Q x)^2} \Big\}\,ds \\
& + M_1(t) + M_2(t) + \int_0^t \!\int_Y \log \frac{(x + H)^T Q (x + H)}{x^T Q x}\, \lambda \phi(dy)\,ds,
\end{aligned}
$$
where $M_1(t) = \int_0^t \frac{2 x^T Q \sigma}{x^T Q x}\,dB(s)$ and $M_2(t) = \int_0^t \!\int_Y \log \frac{(x + H)^T Q (x + H)}{x^T Q x}\, \tilde{N}(ds, dy)$ are two martingales vanishing at $t = 0$. By condition (6.4), the quadratic variation of $M_1(t)$ satisfies
$$\int_0^t \frac{d\langle M_1, M_1\rangle_s}{(1+s)^2} = \int_0^t \frac{4 |x^T(s) Q \sigma(x(s), r(s))|^2}{(x^T(s) Q x(s))^2 (1+s)^2}\,ds \le \int_0^t \frac{4 K(n) |x|^4 \|Q\|^2}{\lambda_{\min}^2(Q) |x|^4 (1+s)^2}\,ds \le \frac{4 K(n) \|Q\|^2}{\lambda_{\min}^2(Q)} \int_0^{\infty} \frac{ds}{(1+s)^2} < \infty. \qquad (6.10)$$
Noting that $Q$ is a positive definite matrix, by condition (6.8), we get that
$$\log \frac{\lambda_{\min}(Q) \alpha_{r(s-)}^2}{\lambda_{\max}(Q)} \le \log \frac{\lambda_{\min}(Q) |x + H|^2}{\lambda_{\max}(Q) |x|^2} \le \log \frac{(x + H)^T Q (x + H)}{x^T Q x} \le \log \frac{\lambda_{\max}(Q) |x + H|^2}{\lambda_{\min}(Q) |x|^2} \le \log \frac{\lambda_{\max}(Q) \beta_{r(s-)}^2}{\lambda_{\min}(Q)},$$
which means that
$$\Big| \log \frac{(x + H)^T Q (x + H)}{x^T Q x} \Big|^2 \le L,$$
where $L = \big|\log \frac{\lambda_{\min}(Q) \alpha_{r(s-)}^2}{\lambda_{\max}(Q)}\big|^2 \vee \big|\log \frac{\lambda_{\max}(Q) \beta_{r(s-)}^2}{\lambda_{\min}(Q)}\big|^2$. Thus
$$\int_0^t \frac{d\langle M_2, M_2\rangle_s}{(1+s)^2} = \int_0^t \!\int_Y \frac{\big|\log \frac{(x + H)^T Q (x + H)}{x^T Q x}\big|^2}{(1+s)^2}\, \lambda \phi(dy)\,ds \le \lambda L \int_0^{\infty} \frac{ds}{(1+s)^2} < \infty. \qquad (6.11)$$

From (6.10) and (6.11), we have
$$\lim_{t\to\infty} \int_0^t \frac{d\langle M_k, M_k\rangle_s}{(1+s)^2} < \infty \quad (k = 1, 2) \quad a.s.$$
It follows from Lemma 1.2 that
$$\lim_{t\to\infty} \frac{M_k(t)}{t} = 0 \quad (k = 1, 2) \quad a.s. \qquad (6.12)$$
Similarly, we obtain
$$\int_0^t \!\int_Y \log \frac{(x + H)^T Q (x + H)}{x^T Q x}\, \lambda \phi(dy)\,ds \le \int_0^t \!\int_Y \log \frac{\lambda_{\max}(Q) |x + H|^2}{\lambda_{\min}(Q) |x|^2}\, \lambda \phi(dy)\,ds \le \lambda \int_0^t \log \frac{\lambda_{\max}(Q) \beta_{r(s-)}^2}{\lambda_{\min}(Q)}\,ds.$$
Now, making use of conditions (6.6) and (6.7), we derive that
$$\log(x^T(t) Q x(t)) \le \log(x_0^T Q x_0) + M_1(t) + M_2(t) + \int_0^t \Big(\mu_{r(s)} - 2 \rho_{r(s)} + \lambda \log \frac{\lambda_{\max}(Q) \beta_{r(s-)}^2}{\lambda_{\min}(Q)}\Big)\,ds. \qquad (6.13)$$
By the ergodic property of the Markov chain, we have
$$\lim_{t\to\infty} \frac{1}{t} \int_0^t \Big(\mu_{r(s)} - 2 \rho_{r(s)} + \lambda \log \frac{\lambda_{\max}(Q) \beta_{r(s-)}^2}{\lambda_{\min}(Q)}\Big)\,ds = \sum_{i=1}^{S} \pi_i \Big(\mu_i - 2\rho_i + \lambda \log \frac{\lambda_{\max}(Q) \beta_i^2}{\lambda_{\min}(Q)}\Big). \qquad (6.14)$$
It then follows from (6.12)–(6.14) that
$$\limsup_{t\to\infty} \frac{1}{t} \log(|x(t)|) = \frac{1}{2} \limsup_{t\to\infty} \frac{1}{t} \log(x^T(t) Q x(t)) \le \sum_{i=1}^{S} \frac{\pi_i}{2} \Big(\mu_i - 2\rho_i + \lambda \log \frac{\lambda_{\max}(Q) \beta_i^2}{\lambda_{\min}(Q)}\Big),$$
as required. This completes the proof.



In the case of $H(x(t), r(t), y) \equiv 0$, system (6.1) is disturbed only by Gaussian white noise and has the form
$$dx(t) = [-F(r(t)) x(t) + A(r(t)) g(x(t))]\,dt + \sigma(x(t), r(t))\,dB(t), \qquad (6.15)$$
and the criterion of almost surely exponential stability is given below.

Corollary 6.5 Let Assumption 6.1 and conditions (6.6) and (6.7) hold. Then the solution $x(t; x_0)$ of (6.15) has the property that
$$\limsup_{t\to\infty} \frac{1}{t} \log(|x(t; x_0)|) \le \sum_{i=1}^{S} \frac{\pi_i}{2} [\mu_i - 2\rho_i]. \qquad (6.16)$$
Moreover, if $\sum_{i=1}^{S} \frac{\pi_i}{2} [\mu_i - 2\rho_i] < 0$, then the neural network (6.15) is almost surely exponentially stable.

Remark 6.6 Theorem 6.4 is an extension of the original conclusion (i.e., Corollary 6.5) given by Mao in [22], where the considered disturbance contains only continuous Brownian motion. In a so-called jump-diffusion system such as (6.1), however, the disturbance can be either continuous or discontinuous. In particular, when $H(x(t), r(t), y) \equiv 0$ holds in Theorem 6.4, the jumps (discontinuous disturbances) are eliminated, and our result is consistent with Mao's.

6.1.4 Numerical Simulation

Two examples are presented here in order to show the usefulness of our results. Example 6.7 treats a neural network with two neurons and 2-state Markovian switching, while Example 6.8 treats one with three neurons and 3-state Markovian switching. The random variable y in the Poisson jump term of (6.1), which determines the distribution of the jump amplitudes, is uniformly distributed in Example 6.7 and normally distributed in Example 6.8.

Example 6.7 Consider a two-neuron neural network (6.1) with 2-state Markovian switching, where

\[
F_1 = \begin{bmatrix} 9 & 0 \\ 0 & 8 \end{bmatrix},\quad
F_2 = \begin{bmatrix} 7 & 0 \\ 0 & 8 \end{bmatrix},\quad
A_1 = \begin{bmatrix} -2 & 1 \\ 1.3 & -1 \end{bmatrix},\quad
A_2 = \begin{bmatrix} -1.5 & 1 \\ -1 & -2 \end{bmatrix},\quad
G_1 = \begin{bmatrix} -1 & 0.5 \\ 0.5 & -1 \end{bmatrix},\quad
G_2 = \begin{bmatrix} -1 & -1 \\ -1 & -2 \end{bmatrix}.
\]

The neuron activation function is g(x) = tanh(x) and the noise intensity function
σ(·) satisfies σ(x, i) = G i x, (i = 1, 2). B(t) is a scalar Brownian motion. We set

\[
S = \{1, 2\},\quad
\Gamma = \begin{bmatrix} -6 & 6 \\ 2 & -2 \end{bmatrix},\quad
\lambda = 3,\quad
Y = \{y \mid -1 \le y \le 1\},\quad
Q = I_2,\quad
H(x, i, y) = i\cos(y)x.
\]

Let y be a uniformly distributed random variable on [−1, 1]. We can get

\[
\pi_1 = 0.25,\ \pi_2 = 0.75,\ \mu_1 = -10.2271,\ \mu_2 = -4.6148,\ \rho_1 = 0.25,\ \rho_2 = 0.1459,
\]
\[
\alpha_1 = 1.5403,\ \alpha_2 = 2.0806,\ \beta_1 = 2,\ \beta_2 = 3.
\]

Then we obtain \(\sum_{i=1}^{2} \frac{\pi_i}{2}\big[\mu_i - 2\rho_i + \lambda \log\frac{\lambda_{\max}(Q)\beta_i^2}{\lambda_{\min}(Q)}\big] = -0.1891 < 0\). By Theorem 6.4, the two-neuron neural network (6.1) is almost surely exponentially stable.
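This number is straightforward to reproduce: one only needs the stationary distribution of the Markov chain and the constants listed above. The following Python sketch (our illustration, not part of the original example) solves πΓ = 0 with Σᵢπᵢ = 1 and evaluates the criterion of Theorem 6.4.

import numpy as np

# Generator of the 2-state Markov chain and constants from Example 6.7
Gamma = np.array([[-6.0, 6.0], [2.0, -2.0]])
lam = 3.0                                   # Poisson intensity
mu   = np.array([-10.2271, -4.6148])
rho  = np.array([0.25, 0.1459])
beta = np.array([2.0, 3.0])
q_min, q_max = 1.0, 1.0                     # lambda_min(Q), lambda_max(Q) for Q = I_2

# Stationary distribution: solve pi @ Gamma = 0 together with sum(pi) = 1
A = np.vstack([Gamma.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Criterion of Theorem 6.4
crit = np.sum(pi / 2 * (mu - 2 * rho + lam * np.log(q_max * beta**2 / q_min)))
print(pi, crit)  # approx [0.25, 0.75] and -0.1891 < 0, i.e. a.s. exponential stability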

Figures 6.1, 6.2, 6.3, and 6.4 show the 2-state Markov chain, the Poisson point process with uniformly distributed variable y, the state trajectory, and the phase trajectory in Example 6.7, respectively. We can see from Figs. 6.3 and 6.4 that the state of the system tends to zero at about 0.9 s, which verifies that the neural network in Example 6.7 is almost surely exponentially stable.

Fig. 6.1 Markov chain in Example 6.7 (plot: 2-state Markov chain, r(t) over 0–10 s)

Fig. 6.2 Poisson point process in Example 6.7 (plot: random jump amplitude, uniformly distributed on [−1, 1], over 0–15 s)

Fig. 6.3 State trajectory in Example 6.7 (plot: responses of neuron dynamics x1, x2 to initial value (5, −5))

Fig. 6.4 Phase trajectory in Example 6.7 (plot: phase trajectory of neuron dynamics in (x1, x2, t) coordinates)

Example 6.8 Consider another three-neuron neural network (6.1) with 3-state Markovian switching, where

\[
F_1 = \begin{bmatrix} 10 & 0 & 0 \\ 0 & 8 & 0 \\ 0 & 0 & 9 \end{bmatrix},\quad
F_2 = \begin{bmatrix} 7 & 0 & 0 \\ 0 & 9 & 0 \\ 0 & 0 & 8 \end{bmatrix},\quad
F_3 = \begin{bmatrix} 8 & 0 & 0 \\ 0 & 9 & 0 \\ 0 & 0 & 6 \end{bmatrix},
\]
\[
A_1 = \begin{bmatrix} -1 & 0.5 & 0.2 \\ 0.3 & -1 & 0.6 \\ 1 & 0.6 & -1 \end{bmatrix},\quad
A_2 = \begin{bmatrix} -1 & 1 & 1 \\ 0.8 & -1 & -1.3 \\ 1 & 0.7 & -2 \end{bmatrix},\quad
A_3 = \begin{bmatrix} -1 & 0.2 & 0.5 \\ -0.5 & 1.2 & 0.4 \\ 0.6 & 1.1 & -1 \end{bmatrix},
\]
\[
G_1 = \begin{bmatrix} -1 & 0.5 & 0.2 \\ 0.5 & -1 & 0.3 \\ 0.2 & 0.3 & -1 \end{bmatrix},\quad
G_2 = \begin{bmatrix} -1 & -1 & 0.5 \\ -1 & -2 & 0.2 \\ 0.5 & 0.2 & -1 \end{bmatrix},\quad
G_3 = \begin{bmatrix} -1 & 1 & 0.6 \\ 1 & -1 & 0.2 \\ 0.6 & 0.2 & -1 \end{bmatrix}.
\]

The requirements for g(x), σ(·), and B(t) are the same as in Example 6.7. Also we set

\[
S = \{1, 2, 3\},\quad
\Gamma = \begin{bmatrix} -2 & 1 & 1 \\ 1 & -2 & 1 \\ 1 & 3 & -4 \end{bmatrix},\quad
\lambda = 2,\quad
Y = \{y \mid 0.5 \le y \le 2\},\quad
Q = I_3,\quad
H(x, i, y) = iy^2x.
\]

Let y be a normally distributed random variable with mean 1 and variance 0.25.
We can obtain

\[
\pi_1 = 0.3333,\ \pi_2 = 0.4667,\ \pi_3 = 0.2,\ \mu_1 = -12.1512,\ \mu_2 = -4.4047,\ \mu_3 = -10.1603,
\]
\[
\rho_1 = 0.1769,\ \rho_2 = 0.0543,\ \rho_3 = 0.0682,\ \alpha_1 = 1.25,\ \alpha_2 = 1.5,\ \alpha_3 = 1.75,\ \beta_1 = 5,\ \beta_2 = 9,\ \beta_3 = 13.
\]

Thus we have \(\sum_{i=1}^{3} \frac{\pi_i}{2}\big[\mu_i - 2\rho_i + \lambda \log\frac{\lambda_{\max}(Q)\beta_i^2}{\lambda_{\min}(Q)}\big] = -0.0173 < 0\). It follows from Theorem 6.4 that the three-neuron neural network (6.1) is almost surely exponentially stable.

Figures 6.5, 6.6, and 6.7 show the 3-state Markov chain, the Poisson point process with normally distributed variable y, and the state trajectory in Example 6.8, respectively. In Fig. 6.7, the state of the system tends to zero at about 0.7 s, which confirms that the neural network in Example 6.8 is almost surely exponentially stable.
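The sample paths shown in these figures can be generated with a standard Euler scheme for jump-diffusions: between jumps, drift and diffusion are advanced as usual; the Markov chain switches with probabilities given by I + ΓΔt; and a compound Poisson term adds H(x, r, y) at jump times. The Python sketch below is our own illustration of one possible discretization of (6.1), not the authors' code; the arguments F, A, G (lists of mode matrices), H (the jump map), and sample_y (the jump-mark sampler) are placeholders to be filled in with the data of Example 6.7 or 6.8.

import numpy as np

def simulate(F, A, G, H, Gamma, lam, sample_y, x0, r0, T=7.0, dt=1e-3, rng=None):
    """Euler scheme for dx = [-F_r x + A_r tanh(x)]dt + G_r x dB + jumps,
    with Markovian switching r(t) and Poisson jumps of intensity lam (a sketch)."""
    rng = rng or np.random.default_rng(0)
    n_steps = int(T / dt)
    x, r = np.array(x0, float), r0
    path = np.empty((n_steps, len(x0)))
    for k in range(n_steps):
        # continuous part: drift plus scalar Brownian increment
        dB = rng.normal(0.0, np.sqrt(dt))
        x = x + (-F[r] @ x + A[r] @ np.tanh(x)) * dt + (G[r] @ x) * dB
        # compound Poisson jumps: on average lam * dt jumps per step
        for _ in range(rng.poisson(lam * dt)):
            x = x + H(x, r, sample_y(rng))
        # Markov chain: switch to mode j with probability Gamma[r, j] * dt
        u, acc = rng.random(), 0.0
        for j in range(Gamma.shape[0]):
            if j != r:
                acc += Gamma[r, j] * dt
                if u < acc:
                    r = j
                    break
        path[k] = x
    return path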

Fig. 6.5 Markov chain in Example 6.8 (plot: 3-state Markov chain, r(t) over 0–10 s)

Fig. 6.6 Poisson point process in Example 6.8 (plot: random jump amplitude, normally distributed with mean 1 and variance 0.25, over 0–16 s)

Fig. 6.7 State trajectory in Example 6.8 (plot: responses of neuron dynamics x1, x2, x3 to initial value (10, 2, −7))

6.1.5 Conclusion

In this section, we have dealt with the problem of almost surely exponential stability analysis for neural networks with Lévy noise and Markovian switching. By the generalized Itô formula with respect to the Lévy process, and by making use of the strong law of large numbers for martingales and the ergodic property of the Markov chain, we have derived a sufficient condition for almost surely exponential stability which depends on the unique stationary probability distribution of the Markov chain and several other constants. Two examples have been used to demonstrate the effectiveness of the main results obtained in this section.

6.2 Asymptotic Stability of SDNN with Lévy Noise

6.2.1 Introduction

Stability issues of stochastic systems with Markovian switching have received


tremendous research attention, see [20–22, 50, 51, 54]. A Markovian switching
system is a hybrid system whose state vector has two components, x(t) and r(t): the first is regarded as the state, the second as the mode. Governed by
a Markov chain with finite state space, the system switches from one mode to another
in a random way [51]. This switching manner is more suitable for the description of
random failures, abrupt changes, or sudden disturbances arising in many real sys-
tems. In the past few years, the study of stability problem regarding systems driven by
Lévy noise [2, 3, 25, 44] or, so-called, jump-diffusion systems [38, 41–43, 46, 47]
has become an increasing interest. Many sufficient conditions of stability have been
presented for these stochastic systems [2, 3, 16, 25, 38, 42, 44, 46, 47]. Actually,
there is no essential difference between these two kinds of systems. As Applebaum
shows in his book [1], by Lévy-Itô decomposition, Lévy noise can be decomposed
into a continuous part and a jump part which, respectively, correspond to the diffu-
sion and jump term in systems. Therefore, it is no surprise that stability analysis for
hybrid systems with Lévy noise [41–43, 46–48, 51] tends to be a new research focus.
In addition, time delays, which commonly appear in practical systems, are often the
cause of instability. Hence, the stability of stochastic delay systems has long been an active research area [16, 25, 37, 51, 54, 59].
In stability issues concerning systems with Lévy noise, many studies address the problem of exponential stability [2, 3, 25, 38, 42, 44, 46, 47] for its fast convergence. However, higher convergence rates always require stronger restrictions on the system parameters. Compared with exponential stability, asymptotic stability is at a disadvantage in convergence rate, but its requirements on the system are

reduced correspondingly. Nevertheless, the problem of asymptotic stability analysis


has received little attention [16] for systems with Lévy noise. In this section, our aim is to tackle the pth moment asymptotic stability analysis problem for stochastic
hybrid delayed systems with Lévy noise. By the technique of Itô’s formula and
M-matrix approach, we extend Mao’s work [22] to the case that hybrid systems are
with both delay and Lévy noise. Several sufficient conditions are obtained for the
pth moment asymptotic stability and exponential stability as well.

6.2.2 Model and Preliminaries

Consider the n-dimensional stochastic delay hybrid system with jumps

\[
\begin{aligned}
dx(t) ={}& f(x(t), x(t-\delta(t)), t, r(t))dt + g(x(t), x(t-\delta(t)), t, r(t))dB(t) \\
&+ \int_{\mathbb{R}^l} h(x(t^-), x((t-\delta(t))^-), t, r(t), z)N(dt, dz)
\end{aligned} \tag{6.17}
\]

on t ∈ R+, where x(t⁻) = lim_{s↑t} x(s). Here, δ : R+ → [0, τ] is a Borel measurable function which stands for the time lag, while f : R^n × R^n × R+ × S → R^n, g : R^n × R^n × R+ × S → R^{n×m} and h : R^n × R^n × R+ × S → R^{n×l}. We assume that the initial data are given by {x(θ) : −τ ≤ θ ≤ 0} = ξ(θ) ∈ L^p_{F₀}([−τ, 0]; R^n), r(0) = r₀. We note that each column h⁽ᵏ⁾ of the n × l matrix h = [h_{ij}] depends on z only through the kth coordinate z_k, i.e.,

\[
h^{(k)}(x, y, t, i, z) = h^{(k)}(x, y, t, i, z_k), \qquad z = (z_1, \ldots, z_l)^T \in \mathbb{R}^l,\ i \in S.
\]

We further assume that B(t), N (t, z), and r (t) in system (6.17) are independent.
For the purpose of stability study in this section, we impose the following assump-
tions.

Assumption 6.9 Assume that the system (6.17) has a unique solution on t ≥ −τ
which is denoted by x(t, ξ). The functions f, g, and h satisfy f (0, 0, t, i) ≡
0, g(0, 0, t, i) ≡ 0, h(0, 0, t, i, z) ≡ 0 for each (t, i) ∈ R+ × S and z ∈ Rl .

Assumption 6.10 Assume that δ is differentiable and its derivative is bounded by a constant δ̄ ∈ [0, 1), namely δ̇ ≤ δ̄, ∀t ≥ 0.

One can immediately derive from Assumption 6.9 that (6.17) admits a trivial
solution x(t; 0) ≡ 0 which is necessary for the following definitions of stability.

Definition 6.11 The trivial solution of (6.17) is said to be asymptotically stable in pth moment if

\[
\lim_{t\to\infty} E|x(t; \xi)|^p = 0
\]

for any ξ ∈ L^p_{F₀}([−τ, 0]; R^n). When p = 2, it is said to be asymptotically stable in mean square.

Definition 6.12 The trivial solution of (6.17) is said to be exponentially stable in pth moment if

\[
\limsup_{t\to\infty} \frac{1}{t}\log E|x(t; \xi)|^p < 0
\]

for any ξ ∈ L^p_{F₀}([−τ, 0]; R^n).

6.2.3 Main Results

In what follows, we will present the asymptotic and exponential p-stability conditions
for system (6.17), then propose the M-matrix approach to achieve the asymptotic
stability. An application in neural networks will be put forward subsequently.
1. Stability of hybrid systems
Theorem 6.13 Let Assumptions 6.9 and 6.10 hold. Assume that there exist a function V ∈ C^{2,1}(R^n × R+ × S; R+) and positive constants p, λ, c_j (j = 1, 2, 3, 4) as well as a nonnegative function w(t) = o(1/t^{1+2λ}) (t → ∞) such that

\[
c_3 > \frac{c_4}{1-\bar\delta} \tag{6.18}
\]
\[
c_1|x|^p \le V(x, t, i) \le c_2|x|^p \tag{6.19}
\]
\[
\mathcal{L}V(x, y, t, i) \le w(t) - c_3|x|^p + c_4|y|^p \tag{6.20}
\]

for all x, y ∈ R^n, t ≥ 0 and i ∈ S. Then system (6.17) is asymptotically stable in pth moment.

Proof Fix any ξ and write x(t; ξ) = x(t). We set

\[
v = \frac{c_2 + c_3\tau - \sqrt{(c_2 - c_3\tau)^2 + \dfrac{4c_2c_4\tau}{1-\bar\delta}}}{2c_2\tau} \tag{6.21}
\]

From (6.18), we get

\[
v > \frac{c_2 + c_3\tau - \sqrt{(c_2 - c_3\tau)^2 + 4c_2c_3\tau}}{2c_2\tau} = 0
\]

and

\[
v < \frac{c_2 + c_3\tau - |c_2 - c_3\tau|}{2c_2\tau}
= \begin{cases} \dfrac{c_3}{c_2} & \text{if } c_2 \ge c_3\tau \\[2mm] \dfrac{1}{\tau} & \text{if } c_2 < c_3\tau \end{cases}
\]

thus

\[
0 < v < \frac{1}{\tau} \tag{6.22}
\]
Letting k = λ/v and ψ(t) = (t + k)^λ, t ∈ R+, we have

\[
\dot\psi(t) = \lambda(t+k)^{\lambda-1} = \frac{\lambda}{t+k}(t+k)^{\lambda} \le \frac{\lambda}{k}(t+k)^{\lambda} = v\psi(t). \tag{6.23}
\]

Noting that ψ(t) is increasing, by (6.23) and the differential mean value theorem, we get

\[
\psi(t+\tau) = \psi(t) + \tau\dot\psi(\epsilon) \le \psi(t) + \tau v\psi(\epsilon) \le \psi(t) + \tau v\psi(t+\tau)
\]

where ε ∈ (t, t + τ). Since τv < 1, we further get

\[
\psi(t+\tau) \le \frac{1}{1-\tau v}\psi(t) \tag{6.24}
\]

Applying Lemma 1.6 to ψ(t)V and then using conditions (6.19), (6.20), and (6.23), we can show that

\[
\begin{aligned}
0 &\le c_1\psi(t)E|x(t)|^p \le \psi(t)EV(x(t), t, r(t)) \\
&= \psi(0)EV(\xi(0), 0, r_0) + E\int_0^t (\dot\psi(s)V + \psi(s)\mathcal{L}V)\,ds \\
&\le c_2k^{\lambda}E\|\xi\|^p + \int_0^t \big[c_2v\psi(s)E|x(s)|^p + \psi(s)w(s) - c_3\psi(s)E|x(s)|^p + c_4\psi(s)E|y(s)|^p\big]ds
\end{aligned} \tag{6.25}
\]

By Assumption 6.10 and (6.24), we compute

\[
\begin{aligned}
\int_0^t \psi(s)E|y(s)|^p ds &= \int_0^t \psi(s)E|x(s-\delta(s))|^p ds \\
&\le \frac{1}{1-\bar\delta}\int_{-\tau}^t \psi(u+\tau)E|x(u)|^p du \\
&\le \frac{1}{1-\bar\delta}\Big(\int_{-\tau}^0 \psi(\tau)E|x(u)|^p du + \int_0^t \frac{\psi(s)E|x(s)|^p}{1-\tau v}\,ds\Big) \\
&\le \frac{\tau(\tau+k)^{\lambda}E\|\xi\|^p}{1-\bar\delta} + \frac{\int_0^t \psi(s)E|x(s)|^p ds}{(1-\bar\delta)(1-\tau v)}.
\end{aligned} \tag{6.26}
\]

Substituting (6.26) into (6.25), we have

\[
c_1\psi(t)E|x(t)|^p \le C + \int_0^t \psi(s)w(s)ds + \Big(c_2v - c_3 + \frac{c_4}{(1-\bar\delta)(1-\tau v)}\Big)\int_0^t \psi(s)E|x(s)|^p ds,
\]

where \(C = \big(c_2k^{\lambda} + \frac{c_4\tau(\tau+k)^{\lambda}}{1-\bar\delta}\big)E\|\xi\|^p\). From (6.21), we can get \(c_2v - c_3 + \frac{c_4}{(1-\bar\delta)(1-\tau v)} = 0\). So

\[
c_1\psi(t)E|x(t)|^p \le C + \int_0^t \psi(s)w(s)ds. \tag{6.27}
\]

Clearly ψ(t)w(t) = o(1/t^{1+λ}) (t → ∞). This yields \(\int_0^{\infty} \psi(t)w(t)dt < \infty\). Dividing both sides of (6.27) by c₁ψ(t), noting that ψ(t) → ∞, and then letting t → ∞, we get

\[
\lim_{t\to\infty} E|x(t)|^p = 0.
\]

This completes the proof.

Remark 6.14 Theorem 6.13 extends Mao's conclusion concerning asymptotic stability (see [22], Theorem 5.31, p. 198), which is for hybrid systems without delay, to delayed hybrid systems with Lévy noise. In particular, when h(x(t), x(t − δ(t)), t, r(t), z) ≡ 0 and δ(t) ≡ 0 hold in system (6.17), our result is consistent with Mao's result.

Remark 6.15 Theorem 6.13 also gives an estimate of the convergence rate of E|x(t)|^p. If the criteria of Theorem 6.13 are satisfied, E|x(t)|^p converges faster than t^{−λ} (t → ∞), and in fact faster than t^{−2λ} (t → ∞), which can be proved similarly.

If we further assume that w(t) = o(e−λt ) (t → ∞) in Theorem 6.13, it is expected


that E|x(t)| p has an exponential convergence rate, namely system (6.17) is expo-
nentially stable in pth moment. We present the following argument to show this.

Corollary 6.16 Let Assumptions 6.9 and 6.10 hold. Assume that there exist a function V ∈ C^{2,1}(R^n × R+ × S; R+) and positive constants p, λ, c_j (j = 1, 2, 3, 4) as well as a nonnegative function w(t) = o(e^{−λt}) (t → ∞) such that

\[
c_3 > \frac{c_4}{1-\bar\delta} \tag{6.28}
\]
\[
c_2\lambda - c_3 + \frac{c_4e^{\tau\lambda}}{1-\bar\delta} \ge 0 \tag{6.29}
\]
\[
c_1|x|^p \le V(x, t, i) \le c_2|x|^p \tag{6.30}
\]
\[
\mathcal{L}V(x, y, t, i) \le w(t) - c_3|x|^p + c_4|y|^p \tag{6.31}
\]

for all x, y ∈ R^n, t ≥ 0 and i ∈ S. Then system (6.17) is exponentially stable in pth moment.

Proof Let

\[
\phi(u) = c_2u - c_3 + \frac{c_4e^{\tau u}}{1-\bar\delta}. \tag{6.32}
\]

It is derived from \(\dot\phi(u) = c_2 + \frac{c_4\tau e^{\tau u}}{1-\bar\delta} > 0\) that φ(u) is increasing on R+. Inequalities (6.28) and (6.29) yield that φ(0) < 0 and φ(λ) ≥ 0. By virtue of the continuity and monotonicity of φ, there exists a unique λ₀ ∈ (0, λ] such that φ(λ₀) = 0.

Letting ψ(t) = e^{λ₀t}, we get

\[
\dot\psi(t) = \lambda_0\psi(t) \tag{6.33}
\]
\[
\psi(t+\tau) = e^{\lambda_0\tau}\psi(t) \tag{6.34}
\]

Making use of (6.32), (6.33), and (6.34), we compute as in the proof of Theorem 6.13 that

\[
\begin{aligned}
c_1\psi(t)E|x(t)|^p &\le \bar C + \int_0^t \psi(s)w(s)ds + \Big(c_2\lambda_0 - c_3 + \frac{c_4e^{\tau\lambda_0}}{1-\bar\delta}\Big)\int_0^t \psi(s)E|x(s)|^p ds \\
&= \bar C + \int_0^t \psi(s)w(s)ds + \phi(\lambda_0)\int_0^t \psi(s)E|x(s)|^p ds \\
&= \bar C + \int_0^t \psi(s)w(s)ds
\end{aligned} \tag{6.35}
\]

where \(\bar C = \big(c_2 + \frac{c_4\tau e^{\tau\lambda_0}}{1-\bar\delta}\big)E\|\xi\|^p\).

Noting that \(\int_0^{\infty} \psi(t)w(t)dt < \infty\), dividing both sides of (6.35) by c₁ψ(t) and then letting t → ∞, we obtain

\[
\limsup_{t\to\infty} \frac{1}{t}\log(E|x(t)|^p) \le -\lambda_0,
\]

which means system (6.17) is exponentially stable in pth moment. The proof is completed.

Remark 6.17 Corollary 6.16 proposes a more general result than Mao's ([22], Theorem 7.22, p. 290), which differs from ours in that w(t) ≡ 0. One manifestation of this is the extension from delayed hybrid systems to those with Lévy noise. In addition, even for delayed hybrid systems without Poisson jumps, the original result is a special case of ours. In fact, w(t) ≡ 0 means that the positive constant λ can be chosen arbitrarily; then (6.29) must hold, and Corollary 6.16 becomes Mao's conclusion.

2. M-matrix approach for asymptotic stability

We now apply the M-matrix approach to the study of pth moment asymptotic stability. The following hypothesis regarding system (6.17) is essential for achieving the asymptotic p-stability (p ≥ 2) condition in Theorem 6.13.

Assumption 6.18 For each i ∈ S, there exist constants αᵢ, βᵢ, ρᵢ, ηᵢ, σᵢ, πᵢ and positive constants a, b such that

\[
x^Tf(x, y, t, i) \le \alpha_i|x|^2 + \beta_i|y|^2 \tag{6.36}
\]
\[
|g|^2 \le \rho_i|x|^2 + \eta_i|y|^2 \tag{6.37}
\]

and

\[
\int_{\mathbb{R}}\sum_{k=1}^{l}\big(|x+h^{(k)}|^p - |x|^p\big)\nu_k(dz_k) \le \frac{a}{t^{1+b}} + \sigma_i|x|^p + \pi_i|y|^p \tag{6.38}
\]

for all (x, y, t) ∈ R^n × R^n × R+ and z_k ∈ R.

We further set

\[
\omega_i = 0 \vee \Big(\beta_i + \frac{p-1}{2}\eta_i\Big) \tag{6.39}
\]
\[
\zeta_i = p\alpha_i + \frac{p(p-1)}{2}\rho_i + \sigma_i + (p-2)\omega_i \tag{6.40}
\]
\[
A = -\mathrm{diag}\{\zeta_1, \ldots, \zeta_S\} - \Gamma \tag{6.41}
\]
\[
(q_1, \ldots, q_S)^T = A^{-1}\mathbf{1} \tag{6.42}
\]

where \(\mathbf{1} = (1, \ldots, 1)^T\).
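Conditions (6.41)–(6.42) are easy to check numerically: A is a nonsingular M-matrix if and only if it is a Z-matrix (nonpositive off-diagonal entries, which here holds automatically since Γ has nonnegative off-diagonal entries) whose eigenvalues all have positive real part; then A⁻¹ ≥ 0 entrywise, so the qᵢ are positive. The following small Python sketch is our own illustration of this test, not the authors' code.

import numpy as np

def m_matrix_q(zeta, Gamma):
    """Form A = -diag(zeta) - Gamma, test the nonsingular M-matrix property,
    and return q = A^{-1} 1 as in (6.41)-(6.42)."""
    A = -np.diag(zeta) - Gamma
    z_matrix = np.all(A - np.diag(np.diag(A)) <= 0)       # off-diagonal <= 0
    stable = np.all(np.linalg.eigvals(A).real > 0)        # eigenvalues in right half-plane
    if not (z_matrix and stable):
        raise ValueError("A is not a nonsingular M-matrix")
    return np.linalg.solve(A, np.ones(len(zeta)))         # q > 0 entrywise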



Theorem 6.19 Let Assumptions 6.9, 6.10, and 6.18 hold and p ≥ 2. If A is a non-
singular M-matrix and

(πi + 2ωi )qi < 1 − δ̄, ∀i ∈ S (6.43)

then system (6.17) is asymptotically stable in pth moment.

Proof It follows from Lemma 1.12 that A−1 exists and A−1 ≥ 0, which means that
the sum of each row of A−1 is positive. Hence, by (6.42), it can be deduced that
qi > 0, ∀i ∈ S.
Define the function V : Rn × R+ × S → R+ by

V (x, t, i) = qi |x| p .

Clearly, V obeys (6.19) with c₁ = min_{i∈S} qᵢ and c₂ = max_{i∈S} qᵢ. (6.37) yields that

\[
|x^Tg|^2 \le \rho_i|x|^4 + \eta_i|x|^2|y|^2. \tag{6.44}
\]

We compute the operator \(\mathcal{L}V\) from R^n × R^n × R+ × S to R by (1.9) as follows:

\[
\begin{aligned}
\mathcal{L}V ={}& pq_i|x|^{p-2}x^Tf + \frac{p}{2}q_i|x|^{p-2}|g|^2 + \frac{p(p-2)}{2}q_i|x|^{p-4}|x^Tg|^2 \\
&+ \sum_{j=1}^{S}\gamma_{ij}q_j|x|^p + q_i\int_{\mathbb{R}}\sum_{k=1}^{l}\big(|x+h^{(k)}|^p - |x|^p\big)\nu_k(dz_k)
\end{aligned} \tag{6.45}
\]

By conditions (6.36)–(6.39) and (6.44), we have

\[
\begin{aligned}
\mathcal{L}V \le{}& pq_i\alpha_i|x|^p + pq_i\beta_i|x|^{p-2}|y|^2 + \frac{pq_i\rho_i}{2}|x|^p + \frac{pq_i\eta_i}{2}|x|^{p-2}|y|^2 \\
&+ \frac{p(p-2)q_i\rho_i}{2}|x|^p + \frac{p(p-2)q_i\eta_i}{2}|x|^{p-2}|y|^2 + \frac{aq_i}{t^{1+b}} + \sigma_iq_i|x|^p + \pi_iq_i|y|^p + \sum_{j=1}^{S}\gamma_{ij}q_j|x|^p \\
={}& \Big[\Big(p\alpha_i + \frac{p(p-1)\rho_i}{2} + \sigma_i\Big)q_i + \sum_{j=1}^{S}\gamma_{ij}q_j\Big]|x|^p + \Big[p\beta_i + \frac{p(p-1)\eta_i}{2}\Big]q_i|x|^{p-2}|y|^2 + \pi_iq_i|y|^p + \frac{aq_i}{t^{1+b}} \\
\le{}& \Big[\Big(p\alpha_i + \frac{p(p-1)\rho_i}{2} + \sigma_i\Big)q_i + \sum_{j=1}^{S}\gamma_{ij}q_j\Big]|x|^p + p\omega_iq_i|x|^{p-2}|y|^2 + \pi_iq_i|y|^p + \frac{aq_i}{t^{1+b}}
\end{aligned} \tag{6.46}
\]

By virtue of Lemma 1.14,

\[
|x|^{p-2}|y|^2 = (|x|^p)^{\frac{p-2}{p}}(|y|^p)^{\frac{2}{p}} \le \frac{p-2}{p}|x|^p + \frac{2}{p}|y|^p.
\]

Substituting this and (6.40) into (6.46), noting that pωᵢqᵢ ≥ 0 and that (6.41)–(6.42) imply \(\zeta_iq_i + \sum_{j=1}^{S}\gamma_{ij}q_j = -1\), we have

\[
\begin{aligned}
\mathcal{L}V &\le \Big[\Big(p\alpha_i + \frac{p(p-1)\rho_i}{2} + \sigma_i + (p-2)\omega_i\Big)q_i + \sum_{j=1}^{S}\gamma_{ij}q_j\Big]|x|^p + (\pi_i + 2\omega_i)q_i|y|^p + \frac{aq_i}{t^{1+b}} \\
&= \frac{aq_i}{t^{1+b}} + \Big(\zeta_iq_i + \sum_{j=1}^{S}\gamma_{ij}q_j\Big)|x|^p + (\pi_i + 2\omega_i)q_i|y|^p \\
&\le w(t) - c_3|x|^p + c_4|y|^p
\end{aligned} \tag{6.47}
\]

where \(w(t) = \frac{a}{t^{1+b}}\max_{i\in S}q_i\), c₃ = 1, and \(c_4 = \max_{i\in S}\{(\pi_i + 2\omega_i)q_i\}\).

By condition (6.43), the inequality (6.18) holds. Hence, all the conditions of
Theorem 6.13 have been verified, so system (6.17) is asymptotically stable in pth
moment. The proof is completed.

3. Asymptotic stability of neural networks


As an application of Theorem 6.19, we discuss the mean square asymptotic stability
of delayed neural networks with Lévy noise and Markovian switching.
Consider the neural network of the form

\[
\begin{aligned}
dx(t) ={}& [-F(r(t))x(t) + D(r(t))s_1(x(t)) + E(r(t))s_2(x(t-\delta(t)))]dt \\
&+ g(x(t), x(t-\delta(t)), t, r(t))dB(t) + \int_{\mathbb{R}^l} h(x(t^-), x((t-\delta(t))^-), t, r(t), z)N(dt, dz)
\end{aligned} \tag{6.48}
\]

where F is a diagonal positive definite matrix, D and E are, respectively, the connection weight matrix and the delayed connection weight matrix, s_j (j = 1, 2) stand for the neuron activation functions with s_j(0) = 0, and the other symbols denote the same quantities as in system (6.17).
We need more hypotheses based on Assumption 6.18 to study the stability of
neural network (6.48).

Assumption 6.20 (1) The neuron activation functions s_j (j = 1, 2) satisfy the Lipschitz condition

\[
|s_j(u) - s_j(v)| \le |G_j(u - v)| \quad \forall u, v \in \mathbb{R}^n \tag{6.49}
\]

where G_j (j = 1, 2) are known constant matrices.
(2) g(0, 0, t, i) ≡ 0 and h(0, 0, t, i, z) ≡ 0 hold for all (t, i) ∈ R+ × S and z ∈ R^l.
(3) The function g satisfies (6.37), and h satisfies (6.38) in the case p = 2, i.e.,

\[
\int_{\mathbb{R}}\sum_{k=1}^{l}\big(|x+h^{(k)}|^2 - |x|^2\big)\nu_k(dz_k) \le \frac{a}{t^{1+b}} + \sigma_i|x|^2 + \pi_i|y|^2 \tag{6.50}
\]

For each i ∈ S, we now set

\[
\begin{cases}
\alpha_i = \lambda_{\max}(-F_i) + |D_i||G_1| + \dfrac{|E_i||G_2|}{2} \\[1mm]
\beta_i = \dfrac{|E_i||G_2|}{2} \\[1mm]
\omega_i = 0 \vee \Big(\beta_i + \dfrac{\eta_i}{2}\Big) \\[1mm]
\zeta_i = 2\alpha_i + \rho_i + \sigma_i \\[1mm]
A = -\mathrm{diag}\{\zeta_1, \ldots, \zeta_S\} - \Gamma \\[1mm]
(q_1, \ldots, q_S)^T = A^{-1}\mathbf{1}
\end{cases} \tag{6.51}
\]

where \(\mathbf{1} = (1, \ldots, 1)^T\), Fᵢ = F(i), Dᵢ = D(i), Eᵢ = E(i).

Theorem 6.21 Let Assumptions 6.10 and 6.20 hold. If A is a nonsingular M-matrix and (πᵢ + 2ωᵢ)qᵢ < 1 − δ̄, ∀i ∈ S, then the neural network (6.48) is asymptotically stable in mean square.

Proof Let

\[
f(x(t), x(t-\delta(t)), t, r(t)) = -F(r(t))x(t) + D(r(t))s_1(x(t)) + E(r(t))s_2(x(t-\delta(t))) \tag{6.52}
\]

Comparing with Theorem 6.19 in the case p = 2, we only need to show that (6.36) holds.
According to the conditions s_j(0) = 0 and (6.49), we get

\[
|s_j(u)| \le |G_ju|, \quad j = 1, 2,\ \forall u \in \mathbb{R}^n \tag{6.53}
\]

By (6.52) and (6.53), we compute

\[
\begin{aligned}
x^Tf(x, y, t, i) &= x^T(-F_i)x + x^TD_is_1(x) + x^TE_is_2(y) \\
&\le \lambda_{\max}(-F_i)|x|^2 + |D_i||G_1||x|^2 + |E_i||G_2||x||y| \\
&\le \Big(\lambda_{\max}(-F_i) + |D_i||G_1| + \frac{|E_i||G_2|}{2}\Big)|x|^2 + \frac{|E_i||G_2|}{2}|y|^2
\end{aligned} \tag{6.54}
\]

We therefore obtain from (6.51) that

\[
x^Tf(x, y, t, i) \le \alpha_i|x|^2 + \beta_i|y|^2
\]

as required. It then follows from Theorem 6.19 that the neural network (6.48) is asymptotically stable in mean square.

6.2.4 Numerical Simulation

Consider a two-neuron delayed neural network (6.48) with Lévy noise and 2-state Markovian switching, where the time delay is δ(t) = 0.15 sin(t) + 0.85, which means that τ = 1 and δ̇ ≤ δ̄ = 0.15. B(t) and N(t, z), which compose the Lévy noise, are both one-dimensional. The characteristic measure μ of the Poisson jump satisfies μ(dz) = ςφ(dz), where ς = 2 is the intensity of the Poisson distribution and φ is the probability distribution of the standard normally distributed variable z.

We set

\[
S = \{1, 2\}, \quad \Gamma = \begin{bmatrix} -4 & 4 \\ 3 & -3 \end{bmatrix}
\]

as the state space and transition rate matrix with respect to the Markovian switching, and s_j(·) = tanh(·) (j = 1, 2) as the neuron activation functions. Then G₁ = G₂ = I₂.
The other parameters concerning the neural network (6.48) are as follows:

\[
F(1) = \begin{bmatrix} 6 & 0 \\ 0 & 7 \end{bmatrix},\quad
D(1) = \begin{bmatrix} -2 & 1 \\ 1 & -1 \end{bmatrix},\quad
E(1) = \begin{bmatrix} -1 & 1 \\ 1 & 2 \end{bmatrix},
\]
\[
g(x, y, t, 1) = \frac{x+y}{4}, \qquad h(x, y, t, 1, z) = \frac{1}{t+1} + \frac{yz - x}{2},
\]
\[
F(2) = \begin{bmatrix} 6 & 0 \\ 0 & 8 \end{bmatrix},\quad
D(2) = \begin{bmatrix} 1 & -1.2 \\ -1 & 1.5 \end{bmatrix},\quad
E(2) = \begin{bmatrix} 1 & 0 \\ 1.5 & 1 \end{bmatrix},
\]
\[
g(x, y, t, 2) = \frac{x+y}{2}, \qquad h(x, y, t, 2, z) = \frac{1}{(t+1)^{2/3}} - x + \frac{3yz}{4}.
\]

Computing the parameters in (6.37), (6.49), (6.50), and (6.51), we obtain

\[
\alpha_1 = -2.0314,\ \beta_1 = 1.3229,\ \rho_1 = 0.125,\ \eta_1 = 0.125,\ \sigma_1 = -0.5,\ \pi_1 = 1.5,\ \omega_1 = 1.3854,\ \zeta_1 = -4.4377,
\]
\[
\alpha_2 = -2.5839,\ \beta_2 = 1.0308,\ \rho_2 = 0.5,\ \eta_2 = 0.5,\ \sigma_2 = -2,\ \pi_2 = 2.25,\ \omega_2 = 1.2808,\ \zeta_2 = -6.6677,
\]
\[
a = 6,\qquad b = \tfrac{1}{3},
\]

and

\[
A = -\mathrm{diag}\{\zeta_1, \zeta_2\} - \Gamma = \begin{bmatrix} 8.4377 & -4 \\ -3 & 9.6677 \end{bmatrix}
\]

is a nonsingular M-matrix, with

\[
q = [q_1, q_2]^T = A^{-1}\mathbf{1} = [0.1964, 0.1644]^T.
\]

Hence, we can verify

(π1 + 2ω1 )q1 = 0.8390 < 0.85 = 1 − δ̄


(π2 + 2ω2 )q2 = 0.7910 < 0.85 = 1 − δ̄.

It then follows from Theorem 6.21 that the two-neuron neural network (6.48) is
asymptotically stable in mean square.
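These numbers can be checked in a few lines; the sketch below (our own illustration, using the values computed above) reproduces q and the two products in the delay condition of Theorem 6.21.

import numpy as np

Gamma = np.array([[-4.0, 4.0], [3.0, -3.0]])
zeta = np.array([-4.4377, -6.6677])
pi_ = np.array([1.5, 2.25])        # pi_1, pi_2 from (6.50)
omega = np.array([1.3854, 1.2808])

A = -np.diag(zeta) - Gamma          # [[8.4377, -4], [-3, 9.6677]]
q = np.linalg.solve(A, np.ones(2))  # approx [0.1964, 0.1644]
print(q, (pi_ + 2 * omega) * q)     # approx [0.839, 0.791], both < 0.85 = 1 - delta_bar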
Figures 6.8, 6.9, 6.10, and 6.11 show the 2-state Markov chain, the Poisson point process with normally distributed variable z, the state trajectory, and the evolution of the squared state norm, respectively. We can see from Fig. 6.10 that the system state tends to zero at about t = 5, which verifies the stability of the two-neuron network (6.48). In Fig. 6.11, the two curves show the evolution over time t of the squared norm of the system state (solid line) and of the function (t + 1)^{−1/3} (dash-dot line). The solid line lies below the other from t = 5 on, which illustrates that the convergence rate of the neural network (6.48) is faster than that of the function (t + 1)^{−1/3}.

Fig. 6.8 2-state Markov chain (plot: r(t) over 0–20 s)

Fig. 6.9 Poisson point process (plot: random jump amplitude with normally distributed jumps, over 0–20 s)

Fig. 6.10 State trajectory (plot: responses of neuron dynamics x1, x2 to initial value (6, −6))

Fig. 6.11 Trajectory of the squared state norm (plot: |x(t)|² solid line and (t + 1)^{−1/3} dash-dot line, over 0–20 s)

6.2.5 Conclusion

We have dealt with the problem of asymptotic p-stability analysis for stochastic delayed hybrid systems with Lévy noise. General criteria for asymptotic stability and exponential stability have been obtained through stochastic analysis, and the M-matrix approach has been utilized to derive checkable asymptotic stability criteria. As an application of our results, a mean square asymptotic stability condition has been derived for delayed hybrid neural networks with Lévy noise. An example has been used to demonstrate the effectiveness of the main results in this section.

6.3 Synchronization of SDNN with Lévy Noise


and Markovian Switching via Sampled Data

6.3.1 Introduction

The past few decades have witnessed the successful applications of neural net-
works in many areas such as image processing, pattern recognition, associative
memory, and optimization problems. Being inherent in real neural networks, time delay, which may cause oscillation and instability, has gained considerable research attention focusing on the topics of stability analysis and synchronization
control [6, 17, 32, 36, 39, 40, 49, 54, 55]. In the references involved, the delay
type can be constant, time-varying, discrete, or distributed, and the results can be

delay-dependent or delay-independent [36]. It is generally recognized that the delay-independent case performs more conservatively than the delay-dependent case [40]. Hence, a great deal of studies have been devoted to seeking delay-dependent criteria.
[40]. Hence, a great deal of studies have devoted into the seeking of delay-dependent
criteria.
It has been realized in [9] that the synaptic transmission in real nervous systems
can be viewed as a noisy process brought on by random fluctuations from the release
of neurotransmitters and other probabilistic causes. Consequently, stochastic noise
has become an indispensable member in neural networks modeling. Even to now,
Gaussian white noise or Brownian motion has been regarded as a commonly used
model to describe the disturbance arising in neural networks and other nonlinear systems
[17, 22, 32, 36, 39, 49, 54, 55]. However, Brownian motion, as a continuous noise,
is at a disadvantage to depict instantaneous disturbance changes. Lévy noise, written
as (B, N ) [1, 3], which is frequently found in areas of finance, statistical mechanics,
and signal processing, is more appropriate for modeling diversified system noise
because Lévy noise can be decomposed into a continuous part and a jump part by
Lévy-Itô decomposition. As a result, Lévy noise extends Gaussian noise to many
types of impulsive jump-noise processes found in real and model neurons as well
as in models of finance and other random phenomena. In neural networks, a Lévy
noise model more accurately describes how the neuron’s membrane potential evolves
than does a simpler diffusion model because the more general Lévy model includes
not only pure-diffusion and pure-jump models but also jump-diffusion models as
well [10, 28]. For the reason of Gaussian structure, however, pure-diffusion neuron
models rely on special limiting case assumptions of incoming Poisson spikes from
other neurons. These assumptions require at least that the number of impinging
synapses be large and that the synapses have small membrane effects due to the small
coupling coefficient [13]. In the view of engineering applications, Lévy models are
more valuable than Gaussian models because physical devices may be limited in their
number of model-neuron connections [23] and because real signals and noise can
often be impulsive [29]. As seen in [43, 46, 47], system with Lévy noise, or more
generally, with Gaussian noise and some kinds of jump noise is also called jump
diffusions. Therefore, stability analysis problems for jump diffusions have drawn an
increasing research interest, see e.g., [2, 3, 16, 28, 43, 44, 46, 47, 51].
It has been shown that many neural networks may experience abrupt changes in
their structure and parameters due to the phenomena such as component failures
or repairs, changing subsystem interconnections, and abrupt environmental distur-
bances. In this situation, neural networks may treated as systems which have finite
modes, and the modes may switch from one to another at different times [22, 51].
As a result, finite-state Markov chains can be used to govern the switching between
different modes of neural networks. The stability analysis problem for neural net-
works with Markovian switching has therefore received much research attention [17,
32, 51, 54, 55]. As a summary, Mao and Yuan [22] studied the more general case,
stochastic differential equations with Markovian switching, and got a series of results
about it.

Along with the booming development of digital hardware technologies, the


sampled-data control method, which keeps the control signal constant during the sampling period and allows it to change only at the sampling instants, has been increasingly
employed in dealing with stabilization and synchronization problems of networks.
Since the work of [24], a series of sampled-data control schemes have been presented using the same concept [8, 14, 15, 40]. In [15], the synchronization of a complex network has been studied. Wu et al. have put forward in [40] the synchronization of Markovian jump neural networks.
Motivated by the studies mentioned above, we aim to tackle the problem of
sampled-data synchronization of delayed neural networks with Lévy noise and
Markovian switching. An LMI-based condition is proposed to guarantee the sta-
bility of the error system, and thus, the master system can synchronize with the
slave system. The mode-dependent sampled-data controller is meanwhile derived. A
numerical simulation is presented to verify the effectiveness of the proposed criterion.

6.3.2 Model and Preliminaries

Consider the n-dimensional stochastic delay neural network with Markovian switch-
ing of the form

\[
dx(t) = [-C(r(t))x(t) + A(r(t))f(x(t)) + B(r(t))f(x(t-\delta(t)))]dt \tag{6.55}
\]

where r (t) is the Markov chain and x(t) = [x1 (t), . . . , xn (t)]T ∈ Rn is the state vec-
tor associated with the n neurons. f (x(t)) = [ f 1 (x1 (t)), . . . , f n (xn (t))]T denotes
the neuron activation function. C(r (t)) > 0 is a diagonal matrix. A(r (t)) and B(r (t))
are the connection weight matrix and the delay connection weight matrix, respec-
tively. δ(t) denotes the time-varying delay and satisfies 0 ≤ δ1 ≤ δ(t) ≤ δ2 , δ̇ ≤ δ̄.
We further write δ12 = δ2 − δ1 .
In this section, system (6.55) is treated as the master system and its slave system
can be described by the following equation:

\[
\begin{aligned}
dy(t) ={}& [-C(r(t))y(t) + A(r(t))f(y(t)) + B(r(t))f(y(t-\delta(t))) + u(t)]dt \\
&+ g(e(t), e(t-\delta(t)), r(t))d\omega(t) + \int_{\mathbb{R}} h(e(t), e(t-\delta(t)), r(t), z)N(dt, dz)
\end{aligned} \tag{6.56}
\]

where C(r(t)), A(r(t)), and B(r(t)) are the same matrices as in (6.55). e(t) = y(t) − x(t) is the error state, which arises in the Lévy noise intensity functions g and h satisfying g : R^n × R^n × S → R^{n×m} and h : R^n × R^n × S × R → R^n. u(t) is the control input that will be designed in order to achieve the synchronization of systems (6.55) and (6.56).

The control signal is assumed to be generated by a zero-order-hold function with


a sequence of hold times 0 = t0 < t1 < · · · < tk < · · · , (limk→∞ tk = +∞). That
is, the mode-dependent controller takes the following form:

u(t) = K (r (t))e(tk ), tk ≤ t < tk+1 (6.57)

where K (r (t)) is the sampled-data feedback gain matrix to be determined. e(tk ) is a


discrete measurement of e(t) at the sampling instant tk . It is assumed that tk+1 −tk ≤ τ
for any integer k ≥ 0, where τ is the upper bound of sampling intervals.
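In simulation, the zero-order hold in (6.57) simply means that the control input is refreshed only at the sampling instants and held constant in between. The Python fragment below is a schematic illustration of one possible implementation (our assumption, not the book's code); K is a hypothetical list of mode-dependent gain matrices.

import numpy as np

def sampled_data_control(e_traj, t_grid, t_samples, K, r_traj):
    """Zero-order-hold controller u(t) = K(r(t)) e(t_k) for t_k <= t < t_{k+1}."""
    u = np.zeros_like(e_traj)
    k = 0
    for i, t in enumerate(t_grid):
        # advance to the most recent sampling instant t_k <= t
        while k + 1 < len(t_samples) and t_samples[k + 1] <= t:
            k += 1
        e_held = e_traj[np.searchsorted(t_grid, t_samples[k])]   # e(t_k), held value
        u[i] = K[r_traj[i]] @ e_held                             # mode-dependent gain
    return u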
Substituting (6.57) into (6.56) and then subtracting (6.55) from (6.56) yields the error system

\[
\begin{aligned}
de(t) ={}& [-C(r(t))e(t) + A(r(t))l(e(t)) + B(r(t))l(e(t-\delta(t))) + K(r(t))e(t_k)]dt \\
&+ g(e(t), e(t-\delta(t)), r(t))d\omega(t) + \int_{\mathbb{R}} h(e(t), e(t-\delta(t)), r(t), z)N(dt, dz)
\end{aligned} \tag{6.58}
\]

where l(e(t)) = f(y(t)) − f(x(t)) = f(x(t) + e(t)) − f(x(t)). The initial data is given by {e(θ) : −σ ≤ θ ≤ 0} = ξ(θ) ∈ L²_{F₀}([−σ, 0]; R^n), r(0) = r₀, where σ = max{δ₂, τ}. It is assumed that ω(t), N(t, z), and r(t) in system (6.58) are independent. For simplicity, we will write M(r(t)) as Mᵢ when r(t) = i in the sequel.
For the purpose of the synchronization of systems (6.55) and (6.56), i.e., the
stability study of error system (6.58), we impose the following assumptions.

Assumption 6.22 Each function f i : R → R is nondecreasing and there exists a


positive constant βi such that

| f i (u) − f i (v)| ≤ βi |u − v| ∀u, v ∈ R, i = 1, 2, . . . , n.

Denote L = diag{β₁, ..., βₙ}. It can be deduced from Assumption 6.22 that [49]

\[
e^T(t)LDl(e(t)) = \sum_{i=1}^{n} l_i(e_i(t))\beta_id_ie_i(t) \ge \sum_{i=1}^{n} d_i[l_i(e_i(t))]^2 = l^T(e(t))Dl(e(t)) \tag{6.59}
\]

where D = diag{d₁, ..., dₙ} is an arbitrary positive diagonal matrix.

Assumption 6.23 ∀i ∈ S, there exist two semi-positive definite matrices Gᵢ₁ and Gᵢ₂ such that

\[
\mathrm{trace}\big(g^T(e, e(t-\delta(t)), i)g(e, e(t-\delta(t)), i)\big) \le e^T(t)G_{i1}e(t) + e^T(t-\delta(t))G_{i2}e(t-\delta(t)) \tag{6.60}
\]

Assumption 6.24 (a) The characteristic measure ν(dz)dt satisfies

\[
\nu(dz)dt = \lambda\phi(dz)dt \tag{6.61}
\]

where λ is the intensity of the Poisson distribution and φ is the probability distribution of the random variable z.
(b) ∀i ∈ S, there exist two semi-positive definite matrices Hᵢ₁ and Hᵢ₂ such that

\[
\int_{\mathbb{R}} h^T(e, e(t-\delta(t)), i, z)h(e, e(t-\delta(t)), i, z)\nu(dz) \le e^T(t)H_{i1}e(t) + e^T(t-\delta(t))H_{i2}e(t-\delta(t)) \tag{6.62}
\]

We now begin with the following definition.

Definition 6.25 The master system (6.55) and slave system (6.56) are said to be synchronous in mean square if the error system (6.58) is stable in mean square, that is, for any ξ(0) ∈ L²_{F₀}([−σ, 0]; R^n) and r₀ = i ∈ S,

\[
\lim_{T\to\infty} E\int_0^T |e(t; \xi(0), r_0)|^2\,dt < \infty \tag{6.63}
\]

6.3.3 Main Results

We are now in a position to derive the condition under which the master system (6.55)
and the slave system (6.56) are synchronous in mean square. The main theorem below
reveals that such conditions can be expressed in terms of the positive definite solution
to a quadratic matrix inequality involving some scalar parameters.

Theorem 6.26 Let Assumptions 6.22, 6.23, and 6.24 hold. If there exist a matrix Jᵢ, positive matrices Pᵢ, Q₁, Q₂, Q₃, R, W₁, W₂, W₃, W₄, a positive diagonal matrix D, and positive constants ρᵢ, εᵢ (i ∈ S) such that

\[
P_i < \rho_iI \tag{6.64}
\]
\[
W_3 < I \tag{6.65}
\]
\[
\Pi_i < 0 \tag{6.66}
\]

where

\[
\Pi_i = \begin{bmatrix}
\Pi_{11} & 0 & \Pi_{13} & 0 & \Pi_{15} & 0 & \Pi_{17} & \Pi_{18} \\
* & \Pi_{22} & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & \Pi_{33} & 0 & 0 & 0 & 0 & 0 \\
* & * & * & \Pi_{44} & 0 & 0 & 0 & 0 \\
* & * & * & * & \Pi_{55} & \Pi_{56} & 0 & 0 \\
* & * & * & * & * & \Pi_{66} & 0 & 0 \\
* & * & * & * & * & * & \Pi_{77} & 0 \\
* & * & * & * & * & * & * & \Pi_{88}
\end{bmatrix}
\]

with

\[
\begin{aligned}
&\Pi_{11} = -2P_iC_i + \sum_{j=1}^{S}\gamma_{ij}P_j + \rho_i(G_{i1} + H_{i1}) + \varepsilon_iH_{i1} + Q_1 + Q_2 + Q_3 - W_1 - W_3 + \varepsilon_i^{-1}\lambda P_i^2, \\
&\Pi_{13} = J_i + W_3,\quad \Pi_{15} = W_1,\quad \Pi_{17} = P_iA_i + LD,\quad \Pi_{18} = P_iB_i, \\
&\Pi_{22} = \tau^2I + \delta_1^2W_1 + \delta_{12}^2W_2 - W_4,\quad \Pi_{33} = -W_3, \\
&\Pi_{44} = \rho_i(G_{i2} + H_{i2}) + \varepsilon_iH_{i2} - (1-\bar\delta)Q_1, \\
&\Pi_{55} = -Q_2 - W_1 - W_2,\quad \Pi_{56} = W_2,\quad \Pi_{66} = -Q_3 - W_2, \\
&\Pi_{77} = R - 2D,\quad \Pi_{88} = -(1-\bar\delta)R,
\end{aligned}
\]
then the master system (6.55) and the slave system (6.56) are synchronous in mean
square. Moreover, the feedback gain matrix is determined by K i = Pi−1 Ji , (i ∈ S).

Proof Fix any (ξ(0), r₀) ∈ R^n × S and write e(t; ξ(0), r₀) = e(t) for simplicity. Consider the following Lyapunov functional V ∈ C^{2,1}(R^n × R+ × S; R+) for the error system (6.58):

\[
V(e(t), t, r(t)) = \sum_{p=1}^{5} V_p(e(t), t, r(t)) \tag{6.67}
\]

where

\[
\begin{aligned}
V_1 &= e^T(t)P(r(t))e(t) \\
V_2 &= \int_{t-\delta(t)}^{t} e^T(s)Q_1e(s)ds + \int_{t-\delta_1}^{t} e^T(s)Q_2e(s)ds + \int_{t-\delta_2}^{t} e^T(s)Q_3e(s)ds \\
V_3 &= \int_{t-\delta(t)}^{t} l^T(e(s))Rl(e(s))ds \\
V_4 &= \delta_1\int_{-\delta_1}^{0}\!\!\int_{t+\theta}^{t} \dot e^T(s)W_1\dot e(s)\,ds\,d\theta + \delta_{12}\int_{-\delta_2}^{-\delta_1}\!\!\int_{t+\theta}^{t} \dot e^T(s)W_2\dot e(s)\,ds\,d\theta \\
V_5 &= \tau\int_{-\tau}^{0}\!\!\int_{t+\theta}^{t} \dot e^T(s)W_3\dot e(s)\,ds\,d\theta + \int_{t}^{t_{k+1}} \dot e^T(s)W_4\dot e(s)ds
\end{aligned}
\]

Computing \(\mathcal{L}V_1\) by (1.9), we can obtain

\[
\begin{aligned}
\mathcal{L}V_1 ={}& e^T(t)\Big(-2P_iC_i + \sum_{j=1}^{S}\gamma_{ij}P_j\Big)e(t) + 2e^T(t)P_iA_il(e(t)) + 2e^T(t)P_iB_il(e(t-\delta(t))) \\
&+ 2e^T(t)P_iK_ie(t_k) + \mathrm{trace}(g^TP_ig) + \int_{\mathbb{R}}\big[(e(t)+h)^TP_i(e(t)+h) - e^T(t)P_ie(t)\big]\nu(dz) \\
={}& e^T(t)\Big(-2P_iC_i + \sum_{j=1}^{S}\gamma_{ij}P_j\Big)e(t) + 2e^T(t)P_iA_il(e(t)) + 2e^T(t)P_iB_il(e(t-\delta(t))) \\
&+ 2e^T(t)P_iK_ie(t_k) + \mathrm{trace}(g^TP_ig) + \int_{\mathbb{R}}\big[h^TP_ih + 2e^T(t)P_ih\big]\nu(dz)
\end{aligned} \tag{6.68}
\]

From Assumption 6.23 and (6.64), we get

\[
\mathrm{trace}(g^TP_ig) \le e^T(t)\rho_iG_{i1}e(t) + e^T(t-\delta(t))\rho_iG_{i2}e(t-\delta(t)) \tag{6.69}
\]

From Assumption 6.24, Lemma 1.13, and (6.64), we have

\[
\begin{aligned}
\int_{\mathbb{R}}\big[h^TP_ih + 2e^T(t)P_ih\big]\nu(dz)
&\le \int_{\mathbb{R}}\big(\rho_i|h|^2 + \varepsilon_i|h|^2 + \varepsilon_i^{-1}e^T(t)P_i^2e(t)\big)\nu(dz) \\
&\le (\rho_i + \varepsilon_i)\big[e^T(t-\delta(t))H_{i2}e(t-\delta(t)) + e^T(t)H_{i1}e(t)\big] + \lambda\varepsilon_i^{-1}e^T(t)P_i^2e(t)
\end{aligned} \tag{6.70}
\]

Substituting (6.69) and (6.70) into (6.68), we obtain that

\[
\begin{aligned}
\mathcal{L}V_1 \le{}& e^T(t)\Big[-2P_iC_i + \sum_{j=1}^{S}\gamma_{ij}P_j + \rho_i(G_{i1} + H_{i1}) + \varepsilon_iH_{i1} + \varepsilon_i^{-1}\lambda P_i^2\Big]e(t) \\
&+ e^T(t-\delta(t))\big[\rho_i(G_{i2} + H_{i2}) + \varepsilon_iH_{i2}\big]e(t-\delta(t)) \\
&+ 2e^T(t)P_iA_il(e(t)) + 2e^T(t)P_iB_il(e(t-\delta(t))) + 2e^T(t)P_iK_ie(t_k)
\end{aligned} \tag{6.71}
\]

\[
\begin{aligned}
\mathcal{L}V_2 \le{}& e^T(t)Q_1e(t) - (1-\bar\delta)e^T(t-\delta(t))Q_1e(t-\delta(t)) \\
&+ e^T(t)Q_2e(t) - e^T(t-\delta_1)Q_2e(t-\delta_1) + e^T(t)Q_3e(t) - e^T(t-\delta_2)Q_3e(t-\delta_2)
\end{aligned} \tag{6.72}
\]

Similarly, from Assumption 6.22 and (6.59), we compute

\[
\begin{aligned}
\mathcal{L}V_3 \le{}& l^T(e(t))Rl(e(t)) - (1-\bar\delta)l^T(e(t-\delta(t)))Rl(e(t-\delta(t))) + 2\big[e^T(t)LDl(e(t)) - l^T(e(t))Dl(e(t))\big] \\
={}& l^T(e(t))(R - 2D)l(e(t)) + 2e^T(t)LDl(e(t)) - (1-\bar\delta)l^T(e(t-\delta(t)))Rl(e(t-\delta(t)))
\end{aligned} \tag{6.73}
\]

From Lemma 1.20, we get

\[
\begin{aligned}
\mathcal{L}V_4 ={}& \delta_1^2\dot e^T(t)W_1\dot e(t) - \delta_1\int_{t-\delta_1}^{t}\dot e^T(s)W_1\dot e(s)ds + \delta_{12}^2\dot e^T(t)W_2\dot e(t) - \delta_{12}\int_{t-\delta_2}^{t-\delta_1}\dot e^T(s)W_2\dot e(s)ds \\
\le{}& \dot e^T(t)(\delta_1^2W_1 + \delta_{12}^2W_2)\dot e(t) - e^T(t)W_1e(t) - e^T(t-\delta_1)(W_1 + W_2)e(t-\delta_1) \\
&- e^T(t-\delta_2)W_2e(t-\delta_2) + 2e^T(t)W_1e(t-\delta_1) + 2e^T(t-\delta_1)W_2e(t-\delta_2)
\end{aligned} \tag{6.74}
\]

\[
\mathcal{L}V_5 = \tau^2\dot e^T(t)W_3\dot e(t) - \tau\int_{t-\tau}^{t}\dot e^T(s)W_3\dot e(s)ds - \dot e^T(t)W_4\dot e(t)
= \dot e^T(t)(\tau^2W_3 - W_4)\dot e(t) - \tau\int_{t-\tau}^{t}\dot e^T(s)W_3\dot e(s)ds \tag{6.75}
\]

Noting that t − t_k ≤ τ, we derive from Lemma 1.20 that

\[
\begin{aligned}
-\tau\int_{t-\tau}^{t}\dot e^T(s)W_3\dot e(s)ds
&\le -(t - t_k)\int_{t_k}^{t}\dot e^T(s)W_3\dot e(s)ds \\
&\le -\Big(\int_{t_k}^{t}\dot e(s)ds\Big)^TW_3\Big(\int_{t_k}^{t}\dot e(s)ds\Big) \\
&= -e^T(t)W_3e(t) + 2e^T(t)W_3e(t_k) - e^T(t_k)W_3e(t_k)
\end{aligned} \tag{6.76}
\]

Substituting (6.76) into (6.75) and noting (6.65), we obtain

\[
\mathcal{L}V_5 \le \dot e^T(t)(\tau^2I - W_4)\dot e(t) - e^T(t)W_3e(t) - e^T(t_k)W_3e(t_k) + 2e^T(t)W_3e(t_k) \tag{6.77}
\]

Combining (6.67), (6.71), (6.72), (6.73), (6.74), and (6.77), it can be derived that

\[
\begin{aligned}
\mathcal{L}V ={}& \sum_{p=1}^{5}\mathcal{L}V_p \\
\le{}& e^T(t)\Big[-2P_iC_i + \sum_{j=1}^{S}\gamma_{ij}P_j + \rho_i(G_{i1} + H_{i1}) + \varepsilon_iH_{i1} + Q_1 + Q_2 + Q_3 - W_1 - W_3 + \varepsilon_i^{-1}\lambda P_i^2\Big]e(t) \\
&+ \dot e^T(t)(\tau^2I + \delta_1^2W_1 + \delta_{12}^2W_2 - W_4)\dot e(t) - e^T(t_k)W_3e(t_k) \\
&+ e^T(t-\delta(t))\big[\rho_i(G_{i2} + H_{i2}) + \varepsilon_iH_{i2} - (1-\bar\delta)Q_1\big]e(t-\delta(t)) \\
&- e^T(t-\delta_1)(Q_2 + W_1 + W_2)e(t-\delta_1) - e^T(t-\delta_2)(Q_3 + W_2)e(t-\delta_2) \\
&+ l^T(e(t))(R - 2D)l(e(t)) - (1-\bar\delta)l^T(e(t-\delta(t)))Rl(e(t-\delta(t))) \\
&+ 2e^T(t)(J_i + W_3)e(t_k) + 2e^T(t)W_1e(t-\delta_1) + 2e^T(t)(P_iA_i + LD)l(e(t)) \\
&+ 2e^T(t)P_iB_il(e(t-\delta(t))) + 2e^T(t-\delta_1)W_2e(t-\delta_2) \\
={}& \psi^T(t)\Pi_i\psi(t)
\end{aligned} \tag{6.78}
\]

where ψ(t) = [e^T(t), ė^T(t), e^T(t_k), e^T(t−δ(t)), e^T(t−δ₁), e^T(t−δ₂), l^T(e(t)), l^T(e(t−δ(t)))]^T and Jᵢ = PᵢKᵢ.
From (6.66), we have

\[
\mathcal{L}V \le -\kappa_i|\psi(t)|^2 \le -\kappa|\psi(t)|^2 \le -\kappa|e(t)|^2 \tag{6.79}
\]

where −κᵢ = λ_max(Πᵢ) (κᵢ > 0, i ∈ S) and −κ = max_{i∈S}{−κᵢ}. Then it can be derived from (1.12) that

\[
-E\int_0^T \mathcal{L}V\,dt = EV_0 - EV_T \le EV_0 \tag{6.80}
\]

We then obtain from (6.79) and (6.80) that

\[
E\int_0^T |e(t)|^2dt \le \frac{1}{\kappa}EV_0 < \infty.
\]

So it follows from Definition 6.25 that the master system (6.55) and slave system (6.56) are synchronous in mean square. This completes the proof.

Remark 6.27 Inspired by [40], we construct V₅ in the Lyapunov functional (6.67), which is both τ- and t_{k+1}-dependent. This makes full use of the sawtooth structure of t − t_k and of the entire available information regarding the actual sampling pattern. Thus, our results based on the Lyapunov functional (6.67) are less conservative than those of Theorem 1 in [40].

In Theorem 6.26, the stability analysis result is established in terms of a quadratic matrix inequality (6.66), which is generally difficult to solve with the LMI toolbox. For the sake of solvability, we convert it into a linear matrix inequality in the corollary below.

Corollary 6.28 Assume that (6.64) and (6.65) are satisfied under the conditions in Theorem 6.26. If the inequality

\[
\Omega_i = \begin{bmatrix} \Omega_1 & \Omega_2 \\ \Omega_2^T & -\Omega_3 \end{bmatrix} < 0 \tag{6.81}
\]

holds for any i ∈ S, where Ω₁ is the same 8 × 8 block matrix as Πᵢ except that Π₁₁ and Π₂₂ are replaced by

\[
\Omega_{11} = \Pi_{11} - \varepsilon_i^{-1}\lambda P_i^2, \qquad \Omega_{22} = \Pi_{22} - \tau^2I,
\]

and

\[
\Omega_2^T = \begin{bmatrix} \lambda P_i & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \tau I & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix},
\qquad
\Omega_3 = \begin{bmatrix} \lambda\varepsilon_iI & 0 \\ * & I \end{bmatrix},
\]

then the master system (6.55) and the slave system (6.56) are synchronous in mean square, and the feedback gain matrix can be determined by Kᵢ = Pᵢ⁻¹Jᵢ (i ∈ S).

Proof It can be easily verified that Πᵢ = Ω₁ + Ω₂Ω₃⁻¹Ω₂ᵀ. According to (6.81) and Lemma 1.21, we have Πᵢ < 0. The conclusion then follows from Theorem 6.26.
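The identity used here is the Schur complement: for Ω₃ > 0, the block matrix in (6.81) is negative definite if and only if Ω₁ + Ω₂Ω₃⁻¹Ω₂ᵀ < 0. This is easy to sanity-check numerically; the toy Python illustration below uses random blocks of our own choosing, not the actual system matrices.

import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2
O3 = np.eye(m) + 0.1 * rng.random((m, m)); O3 = (O3 + O3.T) / 2   # Omega_3 > 0
O2 = rng.random((n, m))
O1 = -5 * np.eye(n)                                               # strongly negative Omega_1

Omega = np.block([[O1, O2], [O2.T, -O3]])
schur = O1 + O2 @ np.linalg.solve(O3, O2.T)
print(np.all(np.linalg.eigvalsh(Omega) < 0),
      np.all(np.linalg.eigvalsh(schur) < 0))   # the two verdicts always agree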

Remark 6.29 Compared with the synchronization criteria in [40], our results are likewise expressed in terms of LMIs. However, unlike the approach adopted there, which constructs LMIs by expanding the dimensions of the matrices, we take a straightforward route to lower-dimensional LMIs, which ensures both simplicity in the theoretical derivation and computational tractability for the calculation software.
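Although the book solves such LMIs with the Matlab LMI toolbox, they can also be posed in Python with a semidefinite-programming front end such as CVXPY. The sketch below is our hedged illustration of the pattern only; it solves a toy LMI of the same flavor, and the full 8-block Ωᵢ of Corollary 6.28 would be assembled with cp.bmat in the same way.

import numpy as np
import cvxpy as cp

n = 2
C = np.diag([5.0, 5.0])                     # an example system matrix (assumption)
P = cp.Variable((n, n), symmetric=True)
J = cp.Variable((n, n))
eps = 1e-3

# Toy LMI: P > 0 and  -P C - C^T P + J + J^T < 0  (same structure as the feasibility test)
constraints = [P >> eps * np.eye(n),
               -P @ C - C.T @ P + J + J.T << -eps * np.eye(n)]
cp.Problem(cp.Minimize(0), constraints).solve(solver=cp.SCS)
K = np.linalg.solve(P.value, J.value)       # recover the gain as K = P^{-1} J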

6.3.4 Numerical Simulation

One example is presented here in order to show the usefulness of our results. Our
aim is to examine the mean square synchronization of the given neural networks with
Lévy noise and Markovian switching.

Example 6.30 Consider the master system (6.55) and slave system (6.56) with one-dimensional Lévy noise and 2-state Markovian switching, where δ(t) = (1/2)sin(t) + 1, f(·) = tanh(·), λ = 2, and φ is the standard normal distribution of the random variable z. The transition rate matrix is chosen as

\[
\Gamma = \begin{bmatrix} -2 & 2 \\ 1 & -1 \end{bmatrix}.
\]

The other parameters are as follows:

\[
C(1) = \begin{bmatrix} 5 & 0 \\ 0 & 5 \end{bmatrix},\quad
C(2) = \begin{bmatrix} 6 & 0 \\ 0 & 8 \end{bmatrix},\quad
A(1) = \begin{bmatrix} -1.3 & 0.2 \\ 0.1 & -1.2 \end{bmatrix},\quad
A(2) = \begin{bmatrix} -1.4 & 0.2 \\ 0.2 & 1.5 \end{bmatrix},
\]
\[
B(1) = \begin{bmatrix} -0.3 & 0.3 \\ -0.1 & 0.2 \end{bmatrix},\quad
B(2) = \begin{bmatrix} -0.2 & -0.3 \\ 0.1 & -0.2 \end{bmatrix},
\]
\[
g(1) = \frac{e(t)}{2} + \frac{e(t-\delta(t))}{3}, \qquad g(2) = \frac{e(t) - e(t-\delta(t))}{2},
\]
\[
h(1) = \Big(\frac{e(t)}{4\sqrt{2}} + \frac{e(t-\delta(t))}{2\sqrt{2}}\Big)z, \qquad
h(2) = \Big(\frac{e(t)}{2\sqrt{2}} - \frac{e(t-\delta(t))}{3\sqrt{2}}\Big)z.
\]

Furthermore, we compute that

\[
\bar\delta = \tfrac{1}{2},\quad \delta_1 = \tfrac{1}{2},\quad \delta_2 = \tfrac{3}{2},\quad \delta_{12} = 1,\quad L = I_2,
\]
\[
G_{11} = \tfrac{5}{12}I_2,\quad G_{12} = \tfrac{5}{18}I_2,\quad H_{11} = \tfrac{3}{8}I_2,\quad H_{12} = \tfrac{3}{4}I_2,
\]
\[
G_{21} = \tfrac{1}{2}I_2,\quad G_{22} = \tfrac{1}{2}I_2,\quad H_{21} = \tfrac{5}{6}I_2,\quad H_{22} = \tfrac{5}{9}I_2,
\]

thus Assumptions 6.22, 6.23, and 6.24 are satisfied.
Now, using the Matlab LMI toolbox, we solve the LMIs (6.64), (6.65), and (6.81) and obtain

\[
P(1) = \begin{bmatrix} 0.7963 & 0.0080 \\ 0.0080 & 0.7789 \end{bmatrix},\quad
P(2) = \begin{bmatrix} 0.5901 & 0.0102 \\ 0.0102 & 0.6030 \end{bmatrix},
\]
\[
Q_1 = \begin{bmatrix} 4.1727 & -0.0246 \\ -0.0246 & 4.1340 \end{bmatrix},\quad
Q_2 = \begin{bmatrix} 0.5384 & -0.0172 \\ -0.0172 & 0.5049 \end{bmatrix},\quad
Q_3 = \begin{bmatrix} 0.6292 & -0.0265 \\ -0.0265 & 0.5191 \end{bmatrix},
\]
\[
R = \begin{bmatrix} 1.3103 & -0.0399 \\ -0.0399 & 0.9316 \end{bmatrix},\quad
D = \begin{bmatrix} 1.0923 & 0 \\ 0 & 1.0923 \end{bmatrix},
\]
\[
W_1 = \begin{bmatrix} 0.5757 & 0.0108 \\ 0.0108 & 0.6078 \end{bmatrix},\quad
W_2 = \begin{bmatrix} 0.4900 & 0.0039 \\ 0.0039 & 0.5045 \end{bmatrix},
\]
\[
W_3 = \begin{bmatrix} 0.6942 & 0.0044 \\ 0.0044 & 0.7009 \end{bmatrix},\quad
W_4 = \begin{bmatrix} 1.7213 & 0.0057 \\ 0.0057 & 1.7410 \end{bmatrix},
\]
\[
J(1) = J(2) = \begin{bmatrix} -0.6942 & -0.0044 \\ -0.0044 & -0.7009 \end{bmatrix},
\]
\[
K(1) = \begin{bmatrix} -0.8729 & 0.0035 \\ 0.0033 & -0.8998 \end{bmatrix},\quad
K(2) = \begin{bmatrix} -1.1766 & 0.0127 \\ 0.0127 & -1.1625 \end{bmatrix},
\]
\[
\rho_1 = 1.0414,\quad \rho_2 = 0.9697,\quad \varepsilon_1 = 0.9149,\quad \varepsilon_2 = 0.9329,\quad \tau = 0.3228.
\]

So the conditions of Corollary 6.28 are all satisfied. It follows from Corollary 6.28
that systems (6.55) and (6.56) are synchronous in mean square.
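The gains reported above follow from Pᵢ and Jᵢ exactly as in the corollary, Kᵢ = Pᵢ⁻¹Jᵢ; numerically one solves a linear system rather than forming the inverse. A brief check with the values above (our illustration):

import numpy as np

P1 = np.array([[0.7963, 0.0080], [0.0080, 0.7789]])
J1 = np.array([[-0.6942, -0.0044], [-0.0044, -0.7009]])
K1 = np.linalg.solve(P1, J1)   # K(1) = P(1)^{-1} J(1), approx [[-0.873, 0.0035], [0.0033, -0.900]]
print(K1)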
To illustrate the effectiveness of the proposed method, we plot the stochastic factors in system (6.58), the synchronization behavior of systems (6.55) and (6.56), and the control input in Figs. 6.12, 6.13, 6.14, 6.15, 6.16, and 6.17.
Figure 6.12 shows the 2-state Markov chain generated by the transition rate matrix
Γ . The decomposition of Lévy noise, namely a Brownian motion and a Poisson
point process, is shown in Fig. 6.13 and Fig. 6.14, respectively. Figure 6.15 depicts

Fig. 6.12 2-state Markov chain (plot: r(t) over 0–40 s)

Fig. 6.13 Brownian motion (plot: ω(t) over 0–40 s)

Fig. 6.14 Poisson point process (plot: random jump amplitude with normally distributed jumps, over 0–40 s)

the synchronization behavior of each corresponding state component of the master system and the slave system. It can be seen from Fig. 6.16 that the error state trajectory tends to the zero reference line as time increases, which illustrates the stability of the error system, in other words, the synchronization of the master system and the slave system. The control input signals are exhibited in Fig. 6.17.

Fig. 6.15 Phase trajectory of master system and slave system with u(t) = 0 (plots: (x1, x2) plane and (y1, y2) plane)

Fig. 6.16 State responses of the error system (plot: e1, e2 over 0–40 s)
Fig. 6.17 Control input (plot: u1, u2 over 0–40 s)

6.3.5 Conclusion

We have dealt with the problem of sampled-data synchronization for stochastic delay
neural networks with Lévy noise and Markovian switching. By virtue of generalized
Itô’s formula and Lyapunov functional, a sufficient condition which depends on the
switching mode, time delay, and the upper bound of sampling intervals is presented to
guarantee the synchronization of the master system and the slave system. The desired
controller can be obtained via solving a set of LMIs expressed in this sufficient
condition. An example has been provided to demonstrate the effectiveness of the
main results.

6.4 Adaptive Synchronization of SDNN with Lévy Noise


and Markovian Switching

6.4.1 Introduction

In the past few decades, neural networks have been successfully applied in many
areas such as image processing, pattern recognition, associative memory, and opti-
mization problems. Being inherent in real neural networks, time delays, which are one of the main causes of oscillation and instability, have gained considerable research attention focusing on stability and synchronization issues [6, 17, 32, 36, 39, 40, 49, 54–56]. In the previous studies,
the delay type can be constant, time-varying, discrete, or distributed, and the results
can be delay-dependent or delay-independent case. It is generally recognized that the
delay-independent case performs more conservatively than the delay-dependent case
[40]. On the other hand, it has been verified that neural networks frequently exhibit
chaotic behaviors if the time delays and the parameters of neural networks are prop-
erly chosen [57]. Chaos synchronization problems thus attract extensive research
attention [7, 12, 18, 27] due to their favorable applications in secure communica-
tion, image processing, chemical and biological systems, and so on.
It has been noted that abrupt changes often emerge in the structure and para-
meters of many neural networks due to the phenomena such as component failures
or repairs, changing subsystem interconnections, and abrupt environmental distur-
bances. Neural networks in this case may be treated as systems which have finite
modes, and the modes may jump from one to another at different times [22, 51].
So far, finite-state Markov chains have been a well-established model that can
be used to govern the switching between different modes of neural networks. The
stability and synchronization issues for Markovian switching neural networks have
therefore received much research attention [17, 32, 51, 54, 56].
As shown in [9], the synaptic transmission in real nervous systems can be viewed
as a noisy process brought on by random fluctuations from the release of neurotrans-
mitters and other probabilistic causes. Consequently, stochastic noise has become an

indispensable component in modeling neural networks. Even to now, Gaussian white


noise or Brownian motion has been regarded as a widely used model to describe the
disturbance arising in neural networks and other nonlinear systems [17, 22, 32, 36,
39, 49, 54, 56]. Nevertheless, being a continuous noise, Brownian motion is at a
disadvantage to depict instantaneous disturbance changes. Lévy noise, which is fre-
quently found in areas of finance, statistical mechanics, and signal processing, is
more appropriate for modeling diversified system noise because Lévy noise can be
decomposed into a continuous part and a jump part by Lévy-Itô decomposition [1,
3], that is, Lévy noise includes both jump and diffusion. As a result, Lévy noise can
extend Gaussian noise to many types of impulsive jump-noise processes appearing in
real and model neurons as well as in models of finance and other random phenomena.
In neuron systems, a Lévy noise model more accurately describes how the neuron’s
membrane potential evolves than does a simpler diffusion model because the more
general Lévy model contains not only pure-diffusion and pure-jump models but also
jump-diffusion models as well [10, 28]. For the reason of Gaussian structure, how-
ever, pure-diffusion neuron models rely on special restrictive assumptions of incom-
ing Poisson spikes from other neurons. These assumptions require that the neuron
system must possess a large number of impinging synapses and that the synapses
have small membrane effects due to the small coupling coefficient [13]. From an
engineering application perspective, Lévy models are more valuable than Gaussian
models because physical devices may be limited in their number of model-neuron
connections [23] and because real signals and noise can often be impulsive [29]. As
seen in [43, 46, 47], system with Lévy noise, or more generally, with Gaussian noise
and some kinds of jump noise is also called jump diffusions. Hence, stability analysis
problems for this type of systems have drawn an increasing research interest, see e.g.,
[2, 3, 16, 28, 43, 44, 46, 47, 51, 55].
Up to now, synchronization issues have become a hot topic of neural networks
studies. The adaptive control strategy has been widely adopted due to its good performance in uncertain systems (e.g., stochastic systems) and nonlinear systems (e.g.,
chaotic systems). In [33], via LaSalle invariant principle, the adaptive synchroniza-
tion problem is discussed for neural networks with time-varying delay and distributed
delay. An adaptive exponential synchronization scheme is presented in [30] for a class
of neural networks with time-varying and distributed delays and reaction–diffusion
terms. In [57], the adaptive synchronization problem is investigated for a class of sto-
chastic delayed neural networks. The same problem is studied in [58] for neutral-type
neural networks with stochastic perturbation and parameter uncertainties.
Motivated by the studies mentioned above, we aim to cope with the adaptive syn-
chronization problem of delayed Markovian switching neural networks with Lévy
noise. An LMI-based condition is proposed to guarantee the synchronization of the
master system and the slave system. The adaptive control law is derived simultane-
ously. The effectiveness of the proposed criterion is verified by a numerical example.

6.4.2 Model and Preliminaries

Consider the n-dimensional stochastic delay neural network with Markovian switch-
ing of the following form:

\[
dx(t) = [-C(r(t))x(t) + A(r(t))f(x(t)) + B(r(t))f(x(t-\delta(t)))]dt \tag{6.82}
\]

where r (t) is the Markov chain and x(t) = [x1 (t), . . . , xn (t)]T ∈ Rn is the state vec-
tor associated with the n neurons. f (x(t)) = [ f 1 (x1 (t)), . . . , f n (xn (t))]T denotes the
neuron activation function. C(r (t)) > 0 is a diagonal matrix. A(r (t)) and B(r (t)) are
the connection weight matrix and the delay connection weight matrix, respectively.
δ(t) is the time-varying delay and satisfies 0 ≤ δ1 ≤ δ(t) ≤ δ2 , δ̇ ≤ δ̄.
In this section, we will treat system (6.82) as the master system and its slave
system can be described by the following equation:

\[
\begin{aligned}
dy(t) ={}& [-C(r(t))y(t) + A(r(t))f(y(t)) + B(r(t))f(y(t-\delta(t))) + u(t)]dt \\
&+ g(e(t), e(t-\delta(t)), r(t))d\omega(t) + \int_{\mathbb{R}} h(e(t), e(t-\delta(t)), r(t), z)N(dt, dz)
\end{aligned} \tag{6.83}
\]

where C(r(t)), A(r(t)), and B(r(t)) are the same matrices as in (6.82). e(t) = y(t) − x(t) is the error state, which arises in the Lévy noise intensity functions g and h satisfying g : R^n × R^n × S → R^{n×m} and h : R^n × R^n × S × R → R^n. u(t) is the adaptive controller that will be designed in order to achieve the synchronization of systems (6.82) and (6.83).
The control signal is assumed to take the following form:

u(t) = K (t)e(t) (6.84)

where K (t) = diag{k1 (t), . . . , kn (t)} is the adaptive feedback gain matrix to be
determined.
Substituting (6.84) into (6.83) and then subtracting (6.82) from (6.83) yields the error system

\[
\begin{aligned}
de(t) ={}& [-C(r(t))e(t) + A(r(t))l(e(t)) + B(r(t))l(e(t-\delta(t))) + K(t)e(t)]dt \\
&+ g(e(t), e(t-\delta(t)), r(t))d\omega(t) + \int_{\mathbb{R}} h(e(t), e(t-\delta(t)), r(t), z)N(dt, dz)
\end{aligned} \tag{6.85}
\]

where l(e(t)) = f(y(t)) − f(x(t)) = f(x(t) + e(t)) − f(x(t)). The initial data is given by {e(θ) : −δ₂ ≤ θ ≤ 0} = ξ(θ) ∈ L²_{F₀}([−δ₂, 0]; R^n), r(0) = r₀. It is assumed that ω(t), N(t, z), and r(t) in system (6.85) are independent. For simplicity, we will write M(r(t)) as Mᵢ when r(t) = i in the sequel.
Some hypotheses are presented below for the purpose of the synchronization of
systems (6.82) and (6.83), i.e., the stability study of error system (6.85).
Assumption 6.31 Each function f i : R → R is nondecreasing and there exists a
positive constant ηi such that

| f i (u) − f i (v)| ≤ ηi |u − v| ∀u, v ∈ R, i = 1, 2, . . . , n.

Denote L = diag{η₁, ..., ηₙ}. It can be deduced from Assumption 6.31 that [49]

\[
e^T(t)LDl(e(t)) = \sum_{i=1}^{n} l_i(e_i(t))\eta_id_ie_i(t) \ge \sum_{i=1}^{n} d_i[l_i(e_i(t))]^2 = l^T(e(t))Dl(e(t)) \tag{6.86}
\]

where D = diag{d₁, ..., dₙ} is an arbitrary positive diagonal matrix.


Assumption 6.32 ∀i ∈ S, there exist two semi-positive definite matrices G i1 and
G i2 such that

trace(g T (e, e(t − δ(t)), i)g(e, e(t − δ(t)), i))


(6.87)
≤ e T (t)G i1 e(t) + e T (t − δ(t))G i2 e(t − δ(t))

Assumption 6.33 (a) The characteristic measure ν(dz)dt satisfies

ν(dz)dt = λφ(dz)dt (6.88)

where λ is the intensity of Poisson distribution and φ is the probability distribu-


tion of random variable z.
(b) ∀i ∈ S, there exist two semi-positive definite matrices Hi1 and Hi2 such that

h T (e, e(t − δ(t)), i, z)h(e, e(t − δ(t)), i, z)ν(dz)
R (6.89)
≤ e T (t)Hi1 e(t) + e T (t − δ(t))Hi2 e(t − δ(t))

Definition 6.34 The master system (6.82) and slave system (6.83) are said to be synchronous in mean square (or stochastically synchronous, see [40]) if the error system (6.85) is stable in mean square, that is, for any ξ(0) ∈ L²_{F₀}([−δ₂, 0]; R^n) and r₀ = i ∈ S,

\[
\lim_{T\to\infty} E\int_0^T |e(t; \xi(0), r_0)|^2\,dt < \infty \tag{6.90}
\]

6.4.3 Main Results

We are now in a position to derive the condition under which the master system (6.82)
and the slave system (6.83) are synchronous in mean square. The main theorem below
reveals that such conditions can be expressed in terms of the positive definite solutions
to a quadratic matrix inequality involving some scalar parameters, and the update
law of feedback gain is dependent on one of these solutions.

Theorem 6.35 Let Assumptions 6.31, 6.32, and 6.33 hold. If there exist positive matrices Pᵢ, Q₁, Q₂, Q₃, R, a positive diagonal matrix D, and positive constants ρᵢ, εᵢ (i ∈ S) such that

\[
P_i < \rho_iI \tag{6.91}
\]
\[
\Pi_i < 0 \tag{6.92}
\]

and the update law of the feedback gain matrix K(t) satisfies

\[
\dot k_v(t) = -\alpha\sum_{u=1}^{n} e_u(t)P_i^{uv}e_v(t), \quad (v = 1, \ldots, n) \tag{6.93}
\]

where P_i^{uv} denotes the (u, v) entry of Pᵢ,

\[
\Pi_i = \begin{bmatrix}
\Pi_{11} & 0 & 0 & 0 & \Pi_{15} & \Pi_{16} \\
* & \Pi_{22} & 0 & 0 & 0 & 0 \\
* & * & \Pi_{33} & 0 & 0 & 0 \\
* & * & * & \Pi_{44} & 0 & 0 \\
* & * & * & * & \Pi_{55} & 0 \\
* & * & * & * & * & \Pi_{66}
\end{bmatrix}
\]

with

\[
\begin{aligned}
&\Pi_{11} = -2P_iC_i - 2\beta P_i + \sum_{j=1}^{S}\gamma_{ij}P_j + \rho_i(G_{i1} + H_{i1}) + \varepsilon_iH_{i1} + Q_1 + Q_2 + Q_3 + \varepsilon_i^{-1}\lambda P_i^2, \\
&\Pi_{15} = P_iA_i + LD,\quad \Pi_{16} = P_iB_i,\quad \Pi_{22} = \rho_i(G_{i2} + H_{i2}) + \varepsilon_iH_{i2} - (1-\bar\delta)Q_1, \\
&\Pi_{33} = -Q_2,\quad \Pi_{44} = -Q_3,\quad \Pi_{55} = R - 2D,\quad \Pi_{66} = -(1-\bar\delta)R,
\end{aligned}
\]

α is an arbitrary positive constant, and β is a positive constant to be determined, then the master system (6.82) and the slave system (6.83) are synchronous in mean square.

Proof Fix any (ξ(0), r₀) ∈ R^n × S and write e(t; ξ(0), r₀) = e(t) for simplicity. Consider the following Lyapunov functional V ∈ C^{2,1}(R^n × R+ × S; R+) for the error system (6.85):

\[
V(e(t), t, r(t)) = \sum_{p=1}^{4} V_p(e(t), t, r(t)) \tag{6.94}
\]

where

\[
\begin{aligned}
V_1 &= e^T(t)P(r(t))e(t) \\
V_2 &= \int_{t-\delta(t)}^{t} e^T(s)Q_1e(s)ds + \int_{t-\delta_1}^{t} e^T(s)Q_2e(s)ds + \int_{t-\delta_2}^{t} e^T(s)Q_3e(s)ds \\
V_3 &= \int_{t-\delta(t)}^{t} l^T(e(s))Rl(e(s))ds \\
V_4 &= \sum_{v=1}^{n} \frac{(k_v(t) + \beta)^2}{\alpha}
\end{aligned}
\]

Computing LV1 by (1.9), we can obtain

LV1

S
= e T (t)[(−2Pi Ci + γi j P j + 2Pi K (t))e(t)
j=1

+ 2Pi Ai l(e(t)) + 2Pi Bi l(e(t − δ(t)))] + trace(g T Pi g)



+ [(e(t) + h)T Pi (e(t) + h) − e T (t)Pi e(t)]ν(dz)
R (6.95)

S
= e T (t)[(−2Pi Ci + γi j P j + 2Pi K (t))e(t)
j=1

+ 2Pi Ai l(e(t)) + 2Pi Bi l(e(t − δ(t)))] + trace(g T Pi g)



+ (h T Pi h + 2e T (t)Pi h)ν(dz)
R

From Assumption 6.32 and (6.91), we get

trace(g T Pi g)
(6.96)
≤ e T (t)ρi G i1 e(t) + e T (t − δ(t))ρi G i2 e(t − δ(t))

From Assumption 6.33, Lemma 1.13 and (6.91), we have


314 6 Stability and Synchronization of Neural Networks…

(h T Pi h + 2e T (t)Pi h)ν(dz)
R

≤ (ρi |h|2 + i |h|2 + i−1 e T (t)Pi2 e(t))ν(dz) (6.97)
R
≤ (ρi + i )[e (t − δ(t))Hi2 e(t − δ(t))
T

+ e T (t)Hi1 e(t)] + λi−1 e T (t)Pi2 e(t)

Substituting (6.96) and (6.97) into (6.95), we obtain that

LV1

S
≤ e T (t)[(−2Pi Ci + γi j P j + 2Pi K (t))e(t)
j=1
+ 2Pi Ai l(e(t)) + 2Pi Bi l(e(t − δ(t)))]
+ e T (t)ρi G i1 e(t) + e T (t − δ(t))ρi G i2 e(t − δ(t))
+ (ρi + i )[e T (t − δ(t))Hi2 e(t − δ(t))
(6.98)
+ e T (t)Hi1 e(t)] + i−1 λe T (t)Pi2 e(t)

S
= e T (t)[−2Pi Ci + γi j P j + 2Pi K (t)
j=1

+ ρi (G i1 + Hi1 ) + i Hi1 + i−1 λPi2 ]e(t)


+ e T (t − δ(t))[ρi (G i2 + Hi2 ) + i Hi2 ]e(t − δ(t))
+ e T (t)2Pi Ai l(e(t)) + e T (t)2Pi Bi l(e(t − δ(t)))

Now, we compute

LV2
≤ e T (t)Q 1 e(t) − (1 − δ̄)e T (t − δ(t))Q 1 e(t − δ(t))
(6.99)
+ e T (t)Q 2 e(t) − e T (t − δ1 )Q 2 e(t − δ1 )
+ e T (t)Q 3 e(t) − e T (t − δ2 )Q 3 e(t − δ2 )

Similarly, from Assumption 6.31 and (6.86), we compute

LV3
≤ l T (e(t))Rl(e(t)) − (1 − δ̄)l T (e(t − δ(t)))Rl(e(t − δ(t)))
+ 2[e T (t)L Dl(e(t)) − l T (e(t))Dl(e(t))] (6.100)
= l T (e(t))(R − 2D)l(e(t)) + 2e T (t)L Dl(e(t))
− (1 − δ̄)l T (e(t − δ(t)))Rl(e(t − δ(t)))
6.4 Adaptive Synchronization of SDNN with Lévy Noise … 315

Making use of (6.93) yields

LV4

n
2(kv (t) + β)k̇v (t)
=
α
v=1
n (6.101)
n
2(kv (t) + β)(−α u=1 eu (t)Pi uv ev (t))
=
α
v=1
= −2e (t)Pi K (t)e(t) − 2βe T (t)Pi e(t)
T

Combining (6.94), (6.98), (6.99), (6.100), and (6.101), it can be derived that


4
LV = LV p
p=1


S
≤ e T (t)[−2Pi Ci − 2β Pi + γi j P j + ρi (G i1 + Hi1 )
j=1

+ i Hi1 + Q 1 + Q 2 + Q 3 + i−1 λPi2 ]e(t)


+ e T (t − δ(t))[ρi (G i2 + Hi2 ) + i Hi2 − (1 − δ̄)Q 1 )] (6.102)
× e(t − δ(t)) − e (t − δ1 )Q 2 e(t − δ1 )
T

− e T (t − δ2 )Q 3 e(t − δ2 ) + l T (e(t))(R − 2D)l(e(t))


− l T (e(t − δ(t)))(1 − δ̄)Rl(e(t − δ(t)))
+ 2e T (t)(Pi Ai + L D)l(e(t))
+ 2e T (t)Pi Bi l(e(t − δ(t)))
= ξ T (t)Πi ξ(t)

where
ξ(t) = [e T (t) e T (t − δ(t)) e T (t − δ1 ) e T (t − δ2 ) l T (e(t)) l T (e(t − δ(t)))]T .
From (6.92), we have

LV ≤ −κi |ψ(t)|2 ≤ −κ|ψ(t)|2 ≤ −κ|e(t)|2 (6.103)

where −κi = λmax (Πi ), (κi > 0, i ∈ S) and −κ = maxi∈S {−κi }. Then it can be
derived from (1.12) that
 T
−E LV dt = E V0 − EVT ≤ EV0 (6.104)
0
316 6 Stability and Synchronization of Neural Networks…

We then obtain from (6.103) and (6.104) that


 T 1
E |e(t)|2 dt ≤ EV0 < ∞.
0 κ

So it follows from Definition 6.34 that the master system (6.82) and slave system
(6.83) are synchronous in mean square. The proof is complete.

Remark 6.36 In order to make the update law easy to be solved, many studies [33, 57]
tend to select a positive diagonal matrix Pi in Lyapunov functional. In this theorem,
a common positive definite matrix is adopted rather than a special diagonal one.

Remark 6.37 In the constructing process of Lyapunov functional, we take into


account both the switching modes and the time delay, which yields the mode and
delay-dependent synchronization criterion for neural networks.

The synchronization criterion in Theorem 6.35 involves a quadratic matrix inequal-


ity (6.92), which may be unsolvable for LMI toolbox. We now resort it to the linear
matrix inequality by the corollary below.

Corollary 6.38 Assume that (6.91) and (6.93) are satisfied under the conditions in
Theorem 6.35. If the inequality
⎡ √ ⎤
Π̄11 0 0 0 Π15 Π16 λPi
⎢ ∗ Π22 0 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ Π33 0 0 0 0 ⎥
⎢ ⎥
Ωi = ⎢
⎢ ∗ ∗ ∗ Π44 0 0 0 ⎥ ⎥<0 (6.105)
⎢ ∗ ∗ ∗ ∗ Π55 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ ∗ ∗ ∗ Π66 0 ⎦
∗ ∗ ∗ ∗ ∗ ∗ −i I

holds for any i ∈ S, where Π̄11 = Π11 − i−1 λPi2 and Π j j ( j = 1, . . . , 6) are the
same as those in Theorem 6.35, then the master system (6.82) and the slave system
(6.83) are synchronous in mean square.

Proof Set ⎡ ⎤
Π̄11 0 0 0 Π15 Π16
⎢ ∗ Π22 0 0 0 0 ⎥
⎢ ⎥
⎢ ∗ ∗ Π33 0 0 0 ⎥
Ω1 = ⎢
⎢ ∗ ∗

⎢ ∗ Π44 0 0 ⎥⎥
⎣ ∗ ∗ ∗ ∗ Π55 0 ⎦
∗ ∗ ∗ ∗ ∗ Π66

Ω2 = [ λPi 0 0 0 0 0]T
Ω3 = i I,
6.4 Adaptive Synchronization of SDNN with Lévy Noise … 317

then we get
Ω1 Ω2
Ωi =
Ω2T −Ω3

It can be easily verified that Πi = Ω1 + Ω2 Ω3−1 Ω2T . According to (6.105) and


Lemma 1.21, we have Πi < 0. The conclusion then follows from Theorem 6.35.

6.4.4 Numerical Simulation

One example is presented here in order to show the usefulness of our results. We aim
to examine the mean square synchronization of the given chaotic neural networks
with Lévy noise and Markovian switching.

Example 6.39 Consider the master system (6.82) and slave system (6.83) with one-
t
dimensional Lévy noise and 2-state Markovian switching, where δ(t) = ete+1 , f j (u j )
|u j +1|−|u j −1|
= ,
λ = 0.2, φ is the standard normal distribution of random variable
2
−1 1
z. The transition rate matrix is chosen as Γ = . The other parameters
0.5 −0.5
are as follows:

1 0 10
C(1) = , C(2) = ,
0 0.9 01

1.69 19 1.79 19
A(1) = , A(2) = ,
0.11 1.69 0.09 1.79

−1.33 0.3 −1.44 0.1
B(1) = , B(2) =
0.2 −1.33 0.1 −1.44
e(t) + e(t − δ(t)) e(t) − e(t − δ(t))
g(1) = , g(2) = ,
10 20
e(t) + e(t − δ(t)) e(t) − e(t − δ(t))
h(1) = z, h(2) = z.
10 10
It can be simply computed that

1 1
δ̄ = , δ1 = , δ2 = 1, L = I2 ,
4 2
I2 I2
G 11 = G 12 = , G 21 = G 22 = ,
50 200
I2
H11 = H12 = H21 = H22 = ,
25
and thus Assumptions 6.31, 6.32, and 6.33 are satisfied.
318 6 Stability and Synchronization of Neural Networks…

Now setting β = 25 and using the Matlab LMI toolbox, we solve the LMIs (6.91)
and (6.105) and obtain

1.3527 0.0679 1.3161 0.0580
P(1) = , P(2) = ,
0.0679 1.8986 0.0580 1.8601

10.4401 −3.9077 8.2883 −2.9694
Q1 = , Q2 = ,
−3.9077 23.3465 −2.9694 18.0861

8.2883 −2.9694
Q3 = ,
−2.9694 18.0861

13.9097 −4.7776 16.8213 0
R= , D= ,
−4.7776 7.9970 0 16.8213
ρ1 = 17.7062, ρ2 = 17.9440, 1 = 17.0156 2 = 16.9800.

So the conditions of Corollary 6.38 are all satisfied. It follows from Corollary 6.38
that systems (6.82) and (6.83) are synchronous in mean square.
To illustrate the effectiveness of the proposed method, we plot the stochastic fac-
tors in system (6.85), the chaotic behavior of system (6.82), the jumps and oscillations
of uncontrolled noise system (6.83), and the synchronization behaviors of system
(6.82) and system (6.83) with control input.
Figure 6.18 shows the 2-state Markov chain generated by the transition rate matrix
Γ . The decomposition of Lévy noise, namely a Brownian motion and a type of
Poisson jump, is shown in Fig. 6.19 and 6.20, respectively. Figure 6.21 depicts the
chaotic behavior of system (6.82). Fig. 6.22 reveals the destroyed chaos shape caused

Fig. 6.18 2-State Markov 2−state Markov chain


chain 3

2.5

2
r(t)

1.5

0.5

0
5 10 15 20 25 30 35 40
Time
6.4 Adaptive Synchronization of SDNN with Lévy Noise … 319

Fig. 6.19 Brownian motion Brownian motion


8

ω(t)
0

−2

−4

−6
5 10 15 20 25 30 35 40
Time

Fig. 6.20 Poisson point Poisson point process with normally distributed jump
process 0

−1
Random jump amplitude

−2

−3

−4

−5

−6
5 10 15 20 25 30 35 40
Time

by the Lévy noise in system (6.83) without control. When the input acts in system
(6.83), as plotted in Fig. 6.23, the chaotic behavior like system (6.82) is regained.
Then the synchronization behavior of the master system and controlled slave system
can be seen from Fig. 6.24, which shows the tending to zero property of the error
state. Figures 6.25 and 6.26 exhibit the update law of feedback gain and the control
signals, respectively.
320 6 Stability and Synchronization of Neural Networks…

Fig. 6.21 Phase trajectory Chaotic behavior of master system


of master system 1.5

0.5

x2
0

−0.5

−1

−1.5
−20 −15 −10 −5 0 5 10 15 20
x1

Fig. 6.22 Phase trajectory of Trajectory of slave system with u(t)=0


slave system with u(t) = 0 4

3.5

2.5
2

2
y

1.5

0.5

0
0 5 10 15 20 25 30 35
y
1

6.4.5 Conclusion

We have discussed the problem of adaptive synchronization for stochastic chaotic


neural networks with Lévy noise and Markovian switching. By virtue of generalized
Itô’s formula and Lyapunov functional, a sufficient condition which depends on the
switching modes and time delay is presented to guarantee the synchronization of
6.4 Adaptive Synchronization of SDNN with Lévy Noise … 321

Fig. 6.23 Phase trajectory Trajectory of slave system with u(t)=K(t)e(t)


of controlled slave system 1.5

0.5

y2
0

−0.5

−1

−1.5
−20 −15 −10 −5 0 5 10 15 20
y
1

Fig. 6.24 State responses of The error state


error system 0.6
e
1
e
0.4 2

0.2

0
e(t)

−0.2

−0.4

−0.6

−0.8
5 10 15 20 25 30 35 40
Time

the master system and the slave system. Via the solution of LMIs in this criterion,
the desired controller can be obtained. An example has been provided to show the
effectiveness of the main results.
322 6 Stability and Synchronization of Neural Networks…

Fig. 6.25 Update law Update law of K(t)


−1
k1
−1.5 k
2
−2
−2.5
−3

K(t)
−3.5
−4
−4.5
−5
−5.5
−6
5 10 15 20 25 30 35 40
Time

Fig. 6.26 Control input Control input u(t)=K(t)e(t)


4
u1
3 u2

1
u(t)

−1

−2

−3

−4
5 10 15 20 25 30 35 40
Time

References

1. D. Applebaum, Lévy Processes and Stochastic Calculus, 2nd edn. (Cambridge University Press,
Cambridge, 2008)
2. D. Applebaum, M. Siakalli, Asymptotic stability of stochastic differential equations driven by
Lévy noise. J. Appl. Probab. 46(4), 1116–1129 (2009)
3. D. Applebaum, M. Siakalli, Stochastic stabilization of dynamical systems using Lévy noise.
Stoch. Dyn. 10(4), 509–527 (2010)
4. P. Ashok, B. Kosko, Stochastic resonance in continuous and spiking neuron models with Lévy
noise. IEEE Trans. Neural Netw. 19(12), 1993–2008 (2008)
5. M.O. Cáceres, The rigid rotator with Lévy noise. Phys. A 283(1–2), 140–145 (2000)
6. J. Cao, L. Li, Cluster synchronization in an array of hybrid coupled neural networks with delay.
Neural Netw. 22(4), 335–342 (2009)
7. H.B. Fotsin, J. Daafouz, Adaptive synchronization of uncertain chaotic Colpitts oscillators
based on parameter identification. Phys. Lett. A 339(3–5), 304–315 (2005)
References 323

8. E. Fridman, A refined input delay approach to sampled-data control. Automatica 46(2), 421–
427 (2010)
9. S. Haykin, Neural Networks (Prentice-Hall, New Jersey, 1994)
10. N. Hohn, A.N. Burkitt, Shot noise in the leaky integrate-and-fire neuron. Phys. Rev. E 63(3),
031902/1–11 (2001)
11. L. Hu, S. Gan, X. Wang, Asymptotic stability of balanced methods for stochastic jump-diffusion
differential equations. J. Comput. Appl. Math. 238, 126–143 (2013)
12. D. Huang, Simple adaptive-feedback controller for identical chaos synchronization. Phys. Rev.
E 71(3), 0372031–4 (2005)
13. A.F. Kohn, Dendritic transformations on random synaptic inputs as measured from a neuron’s
spike train—modeling and simulation. IEEE Trans. Biomed. Eng. 36(1), 44–54 (1989)
14. H.K. Lam, F.H.F. Leung, Design and stabilization of sampled-data neural-network-based con-
trol systems. IEEE Trans. Cybern. 36(5), 995–1005 (2006)
15. T.H. Lee, Z.-G. Wu, J.H. Park, Synchronization of a complex dynamical network with coupling
time-varying delays via sampled-data control. Appl. Math. Comput. 219(3), 1354–1366 (2012)
16. D. Liu, G. Yang, W. Zhang, The stability of neutral stochastic delay differential equations with
Poisson jumps by fixed points. J. Comput. Appl. Math. 235(10), 3115–3120 (2011)
17. Y. Liu, Z. Wang, X. Liu, An LMI approach to stability analysis of stochastic high-order Markov-
ian jumping neural networks with mixed time delays. Nonlinear Anal.: Hybrid Syst. 2(1),
110–120 (2008)
18. J. Lu, J. Cao, Adaptive complete synchronization of two identical or different chaotic (hyper-
chaotic) systems with fully unknown parameters. Chaos 15(4), 43901–1–10 (2005)
19. J. Luo, Comparison principle and stability of Ito stochastic differential delay equations with
Poisson jump and Markovian switching. Nonlinear Anal. 64(2), 253–262 (2006)
20. X. Mao, J. Lam, S. Xu, H. Gao, Razumikhin method and exponential stability of hybrid
stochastic delay interval systems. J. Math. Anal. Appl. 314(1), 45–66 (2006)
21. X. Mao, G.G. Yin, C. Yuan, Stabilization and destabilization of hybrid systems of stochastic
differential equations. Automatica 43(2), 264–273 (2007)
22. X. Mao, C. Yuan, Stochastic Differential Equations with Markovian Switching (Imperial Col-
lege Press, Cambridge, 2006)
23. C. Mead, Analog VLSI and Neural Systems (Addison-Wesley, Boston, 1989)
24. Y. Mikheev, V. Sobolev, E. Fridman, Asymptotic analysis of digital control systems. Autom.
Remote Cont. 49(9), 1175–1180 (1988)
25. C. Ning, Y. He, M. Wu, Q. Liu, pth moment exponential stability of neutral stochastic differential
equations driven by Lévy noise. J. Frankl. Inst. 349(9), 2925–2933 (2012)
26. G.D. Nunno, B. Øksendal, F. Proske, White noise analysis for Lévy processes. J. Funct. Anal.
206(1), 109–148 (2004)
27. J.H. Park, Adaptive control for modified projective synchronization of a four-dimensional
chaotic system with uncertain parameters. J. Comput. Appl. Math. 213(1), 288–293 (2008)
28. J. Peng, Z. Liu, Stability analysis of stochastic reaction-diffusion delayed neural networks with
Lévy noise. Neural Comput. Appl. 20(4), 535–541 (2011)
29. G. Samorodnitsky, M.S. Taqqu, Stable Non-Gaussian Random Processes: Stochastic Models
with Infinite Variance (Chapman & Hall/CRC Press, London, 1994)
30. L. Sheng, H. Yang, X. Lou, Adaptive exponential synchronization of delayed neural networks
with reaction-diffusion terms. Chaos Solitons Fractals 40(2), 930–939 (2009)
31. R. Situ, Theory of Stochastic Differential Equations with Jumps and Their Applications
(Springer Science Business Media, New York, 2005)
32. D. Tong, Q. Zhu, W. Zhou, Y. Xu, J. Fang, Adaptive synchronization for stochastic T-S fuzzy
neural networks with time-delay and Markovian jumping parameters. Neurocomputing 117(6),
91–97 (2013)
33. K. Wang, Z. Teng, H. Jiang, Adaptive synchronization of neural networks with time-varying
delay and distributed delay. Phys. A: Stat. Mech. Appl. 387(2–3), 631–642 (2008)
34. Z. Wang, Y. Liu, L. Liu, X. Liu, Exponential stability of delayed recurrent neural networks
with Markovian jumping parameters. Phys. Lett. A 356(4), 346–352 (2006)
324 6 Stability and Synchronization of Neural Networks…

35. Z. Wang, Y. Liu, X. Liu, On global asymptotic stability of neural networks with discrete and
distributed delays. Phys. Lett. A 345(4), 299–308 (2005)
36. Z. Wang, H. Shu, J. Fang, X. Liu, Robust stability for stochastic Hopfield neural networks with
time delays. Nonlinear Anal.: Real World Appl. 7(5), 1119–1128 (2006)
37. Z. Wang, Y. Wang, Y. Liu, Global synchronization for discrete-time stochastic complex net-
works with randomly occurred nonlinearities and mixed time delays. IEEE Trans. Nerual Netw.
21(1), 11–25 (2010)
38. I.-S. Wee, Stability for multidimensional jump-diffusion processes. Stoch. Process. Appl. 80(2),
193–209 (1999)
39. Z. Wu, H. Su, J. Chu, W. Zhou, Improved result on stability analysis of discrete stochastic
neural networks with time delay. Phys. Lett. A 373(17), 1546–1552 (2009)
40. Z.-G. Wu, P. Shi, H. Su, J. Chu, Stochastic synchronization of Markovian jump neural networks
with time-varying delay using sampled-data. IEEE Trans. Cybern. 43(6), 1796–1806 (2013)
41. F. Xi, Asymptotic properties of jump-diffusion processes with state-dependent switching.
Stoch. Process. Appl. 119(7), 2198–2221 (2009)
42. F. Xi, G. Yin, Almost sure stability and instability for switching-jump-diffusion systems with
state-dependent switching. J. Math. Anal. Appl. 400(2), 460–474 (2013)
43. F.B. Xi, On the stability of jump-diffusions with Markovian switching. J. Math. Anal. Appl.
341(1), 588–600 (2008)
44. Y. Xu, X.-Y. Wang, H.-Q. Zhang, Stochastic stability for nonlinear systems driven by Lévy
noise. Nonlinear Dyn. 68(1–2), 7–15 (2012)
45. X. Yang, J. Cao, J. Lu, Synchronization of Markovian coupled neural networks with nonidenti-
cal node-delays and random coupling strengths. IEEE Trans. Neural Netw. Learn. Syst. 23(1),
60–71 (2012)
46. Z.X. Yang, G. Yin, Stability of nonlinear regime-switching jump diffusion. Nonlinear Anal.-
Theory Methods Appl. 75(9), 3854–3873 (2012)
47. G. Yin, F.B. Xi, Stability of regime-switching jump diffusions. SIAM J. Control Optim. 48(7),
4525–4549 (2010)
48. G.G. Yin, C. Zhu, Hybrid Switching Diffusions Properties and Applications. Stochastic Mod-
elling and Applied Probability (Springer, 2010)
49. W. Yu, J. Cao, Synchronization control of stochastic delayed neural networks. Phys. A: Stat.
Mech. Appl. 373(1), 252–260 (2007)
50. C. Yuan, J. Lygeros, Stabilization of a class of stochastic differential equations with Markovian
switching. Syst. Control Lett. 54(9), 819–833 (2005)
51. C.G. Yuan, X.R. Mao, Stability of stochastic delay hybrid systems with jumps. Eur. J. Control
16(6), 595–608 (2010)
52. L. Zeng, B. Xu, Effects of asymmetric Lévy noise in parameter-induced aperiodic stochastic
resonance. Phys. A 389(2), 5128–5136 (2010)
53. W. Zhang, Y. Tang, J. Fang, Stability of delayed neural networks with time-varying impulses.
Neural Netw. 36, 59–63 (2012)
54. W. Zhou, D. Tong, Y. Gao, C. Ji, H. Su, Mode and delay-dependent adaptive exponential syn-
chronization in pth moment for stochastic delayed neural networks with Markovian switching.
IEEE Transa. Neural Netw. Learn. Syst. 23(4), 662–668 (2012)
55. W. Zhou, J. Yang, X. Yang, A. Dai, H. Liu, J. Fang, Almost surely exponential stability of
neural networks with Lévy noise and Markovian switching. Neurocomputing 145, 154–159
(2014)
56. W. Zhou, Q. Zhu, P. Shi, H. Su, J. Fang, L. Zhou, Adaptive synchronization for neutral-type
neural networks with stochastic perturbation and Markovian switching parameters. IEEE Trans.
Cybern. 44(12), 2848–2860 (2014)
57. Q. Zhu, J. Cao, Adaptive synchronization under almost every initial data for stochastic neural
networks with time-varying delays and distributed delays. Commun. Nonlinear Sci. Numer.
Simul. 16(4), 2139–2159 (2011)
58. Q. Zhu, W. Zhou, D. Tong, J. Fang, Adaptive synchronization for stochastic neural networks
of neutral-type with mixed time-delays. Neurocomputing 99(1), 477–485 (2013)
References 325

59. Q.X. Zhu, F.B. Xi, X.D. Li, Robust exponential stability of stochastically nonlinear jump
systems with mixed time delays. J. Optim. Theory Appl. 154(1), 154–174 (2012)
60. Y. Zhu, Q. Zhang, Z. Wei, Robust stability analysis of Markov jump standard genetic regulatory
networks with mixed time delays and uncertainties. Neurocomputing 110, 44–50 (2013)
Chapter 7
Some Applications to Economy Based
on Related Research Method

This chapter provides two applications with respect to the topic of this book in finance
and economy. As an application of Lévy process, Sect. 7.1 offers a portfolio strategy
of financial market. Robust H∞ control strategy is investigated for a generic linear
rational expectation model of economy.

7.1 Portfolio Strategy of Financial Market with Regime


Switching Driven by Geometric Lévy Process

7.1.1 Introduction

To make a portfolio strategy is to search for a best allocation of wealth among different
asset in markets. Taking the European options for an instance, how to distribute the
appropriate proportions of each option to maximize total returns at expire time is
the core of portfolio strategy problem. There are two points mentioned among the
relevant literatures for portfolio selection problems: setting up a market model that
approximates to the real financial market, and the way of solving it.
Portfolio strategy researches are based on portfolio selection analysis given by
Markowitz H. [13]. Extensions of Markowitz’s work to the multiperiod model have
been given by Li and Ng [20] which derived the analytical optimal portfolio policy.
These previous researches were assuming that the underlying market has only one
state or mode. But the real market might have more than one state, and could switch
among them. Then, portfolio policies under regime switching have been widely
discussed. In a financial market model, the key process S that models the evolu-
tion of stock price should be a Brownian motion. Indeed, this can be intuitively
justified on the basis of the central limit theorem if one perceives the movement
of stocks. The analysis of Bernt sendal [25] was mainly based on the generalized
Black-Scholes model which has two assets B(t) and S(t) as dB(t) = ρ(t)B(t)dt and

© Springer-Verlag Berlin Heidelberg 2016 327


W. Zhou et al., Stability and Synchronization Control of Stochastic
Neural Networks, Studies in Systems, Decision and Control 35,
DOI 10.1007/978-3-662-47833-2_7
328 7 Some Applications to Economy Based on Related Research Method

dS(t) = α(t)S(t)dt + β(t)S(t)dW (t), where W (t) is a Brownian motion. In that


case, Øksendal formulated optimal selling decision-making as an optimal stopping
problem and derived a closed-form solution. The underlying problem may be treated
as a free boundary value problem, which was extended to incorporate possible regime
switching by Guo X. [10] and Moustapha Pemy [24] with the switching represented
by a two-state Markov chain. The rate of return α(t) in the above Black-Scholes
models in [10] and [24] is a Markov chain which is different from the general one.
As an application, Huiling Wu [34, 35] has given the strategy of multiperiod mean–
variance portfolio selection with regime switching and a stochastic cash flow which
depends on the states of a stochastic market following a discrete-time Markov chain.
Been put in the Markov jump, Black-Scholes model with regime switching is much
closer to the real market.
In recent years, Lévy process as a more general process than Brownian motion
has been applied in financial portfolio optimization. Jan Kallsen [16] gave an optimal
portfolio strategy of securities market under exponential Lévy process. More specific
than exponential Lévy process, a financial market model with stock price following
the geometric Lévy process was discussed by David Applebaum [1] in which a Lévy
process X (t) and geometric Lévy motion S(t) = e X (t) were introduced. Taking
X to be a Lévy process could force our stock prices clearly not moving continu-
ously, and a more realistic approach is that the stock price is allowed small jumps in
small time intervals. Some applications of financial market driven by Lévy process is
taken on life insurance. Nele Vandaele [29] shows the real risk-minimizing hedging
strategy for unit-linked life insurance in financial market driven by a Lévy process.
While, Chengguo Weng [32] has analyzed the constant proportion portfolio insur-
ance by assuming that the risky asset price follows a regime switching exponential
Lévy process and obtained the analytical forms of the shortfall probability, expected
shortfall, and expected gain. Optimizing proportional reinsurance and investment
policies in a multidimensional Lévy-driven insurance model is discussed by Nicole
Bäuerle [2]. Moreover, under a general method, Kam Chuen Yuen [43] has consid-
ered the optimal dividend problem for the insurance risk process in a general Lévy
process which shows that if the Lévy density is a completely monotone function,
then the optimal dividend strategy is a barrier strategy.
Among all the above literatures, those portfolios are always based on one risk-free
asset and only one risky asset which may limit the choice of stocks. However, in a real
financial market, there always exists more than one risky asset in a portfolio. That is
why we are going to extend the single-stock financial market model to a multi-stock
financial market model driven by geometric Lévy process which is more closer to the
real market than proposed portfolios cited above. In this section, we set up a general
Black-Scholes model with geometric Lévy process. For the general Black-Scholes
model of the financial market, a portfolio strategy which is determined by a partial
differential equation (PDE) of parabolic type is given by using Itô formula. The
solvability of the PDE is researched by making use of variables transformation. An
application of the solvability of the PDE on the European options with the final data
is given finally. The contributions of this section are as follows: (i) The B-S market
model is extended into general form in which the interest rate of the bond, the rate
7.1 Portfolio Strategy of Financial Market with Regime … 329

of return, and the volatility of the stock vary as the market states switching and the
stock prices are driven by geometric Lévy process. (ii) The PDE determining the
portfolio strategy and its solvability are extensions of the existing results.

7.1.2 Problem Formulation

Assume that {α(t): t ≥ 0} denotes a Markov chain in (Ω, F, P), as the regime of
financial market, for example, the bull market or bear market of a stock market. Let
S = {1, 2, . . . , S} be the regime space of this Markov chain, and Γ = (γi j ) S×S be
the transition rate matrix.
In this section, we consider a financial market model driven by geometric Lévy
process. The market consists of one risk-free asset denoted by B, and n risky assets
denoted by S1 , S2 , . . . , Sn . The price process of these assets obeys the following
dynamic equations in which the price process of the risky assets follows the geometric
Lévy process, i.e.,


⎨ dB(t) = B(t)r (t,
⎪  α(t))dt, B(0) = B0  
dSk (t) = Sk (t) μk (t, α(t))dt + σk (t, α(t))dWk (t) + R−{0} z Ñk (dt, dz) ,


⎩ S (0) = S 0 > 0.
k k
(7.1)
where B(t) is the price of B with the interest rate r (t, α(t)), Sk (t) is the price of
Sk with the expect rate of return μk (t, α(t)) and the volatility σk (t, α(t)), which
follows the regime switching of financial market. S1 (t), S2 (t), . . . , Sn (t) are inde-
pendent from each other. Wk (t) is the Brownian motion which is independent from
{α(t) : t ≥ 0}. Ñk (·, ·) is defined below

Ñk (dt, dz) = Nk (dt, dz) − ηk (dz)dt,

Nk (dt, dz) and ηk (dz)dt indicate the number of jumps and average number of jumps
within time dt and jump range dz of price process Sk (t), respectively. That is

ηk (dz)dt = E[Nk (dt, dz)],

where E is the expectation operator. Moreover, we assume that Nk (dt, dz), α(t), and
Wk (t) (k = 1, 2, . . . , n) are independent from each other.

Remark 7.1 The finance market model (7.1) is an extension of the B-S market model
in which the interest rate of the bond, the rate of return, and the volatility of the stock
vary as the market states switching and the stock prices are driven by geometric Lévy
process.
330 7 Some Applications to Economy Based on Related Research Method

For finance market model (7.1), we introduce the concept of self-financing port-
folio as follows:

Definition 7.2 A self-financing portfolio (ϕ, ψ) = (ϕ, ψ1 , ψ2 , . . . , ψn ) for the


financial market model (7.1) is a series of predictable processes

{ϕ(t)}t≥0 , {ψk (t)}t≥0 (k = 1, 2, . . . , n),

i.e., for each T > 0,


T n

T
| ϕ(s) |2 ds + | ψk (s) |2 ds < ∞, (7.2)
0 k=1 0

and the corresponding wealth process {V (t)}t≥0 , defined by

n
V (t) := ϕ(t)B(t) + ψk (t)Sk (t), t ≥ 0 (7.3)
k=1

is an Itô process satisfying

n
dV(t) = ϕ(t)dB(t) + ψk (t)dSk (t), t ≥ 0. (7.4)
k=1

Problem formulation: In this note, we shall propose a portfolio strategy for the
financial market model (7.1) which is determined by a partial differential equation
(PDE) of parabolic type by using Itô formula. The solvability of the PDE is researched
by making use of variables transformation. Furthermore, the relationship between
the solution of the PDE and the wealth process will be discussed.

7.1.3 Main Results and Proofs

In this section, we shall give the following fundamental results. For the sake of
simplification, we write r (t, α(t)) as r , f (t, S(t)) as f , etc.
To obtain the main result, we give the solution of (7.1) and the characteristic of
the derivation (7.4) of the wealth process.
The exact solutions of B(t) in (7.1) can be found as follows:
t
B(t) = B(0) exp r (s, α(s))ds .
0
7.1 Portfolio Strategy of Financial Market with Regime … 331

To solve the second equation in (7.1) for Sk (t), it follows from the I t ô formula that

1
d ln Sk (t) = [Sk (t)μk (t, α(t))dt + Sk (t)σk (t, α(t))dWk (t)]
Sk (t)

1 1
− 2 Sk2 (t)σk2 (t, α(t))dt + [ln(Sk (t) + zSk (t))
2 Sk (t) R−{0}

− ln(Sk (t))] Ñk (dt, dz) + ln(Sk (t) + zSk (t))
R−{0}

S
1
− ln(Sk (t)) − zSk (t) ηk (dz)dt + γi j ln(Sk (t))
Sk (t)
j=1

1 2
= μk (t, α(t)) − σk (t, α(t)) dt + σk (t, α(t))dWk (t)
2

+ ln(1 + z) Ñk (dt, dz) + [ln(1 + z) − z]ηk (dz)dt.
R−{0} R−{0}

Integrating both sides of the above equation from 0 to t, we have


 t  t
1 2
Sk (t) = Sk0 exp μk (s, α(s)) − σk (s, α(s)) ds + σk (s, α(s))dWk (s)
0 2 0
t
+ ln(1 + z) Ñk (ds, dz)
0 R−{0}
t 
+ [ln(1 + z) − z]ηk (dz)ds . (7.5)
0 R−{0}

Proposition 7.3 Consider the price model (7.1) of a financial market. If a portfolio
(ϕ, ψ) is a self-financing strategy, then the wealth process {V (t)}t≥0 defined by (7.3)
satisfies


n
dV(t) = r (t, α(t))V (t) + ψk (t)Sk (t) μk (t, α(t)) − r (t, α(t))
k=1


n
− zηk (dz) dt + ψk (t)Sk (t)σk (t, α(t))dWk (t)
R−{0} k=1

n
+ ψk (t)Sk (t) z Nk (dt, dz). (7.6)
k=1 R−{0}

Conversely, consider the model (7.1) of a financial market. If a pair (ϕ, ψ) of pre-
dictable processes following the wealth process {V (t)}t≥0 defined by formula (7.3)
satisfies (7.6), then (ϕ, ψ) is a self-financing strategy.
332 7 Some Applications to Economy Based on Related Research Method

Proof Substituting (7.1) into (7.4), we have

n
dV(t) = ϕ(t)dB(t) + ψk (t)dSk (t)
k=1

n
= ϕ(t)B(t)r (t, α(t))dt + ψk (t)Sk (t) μk (t, α(t))dt+
k=1

σk (t, α(t))dWk (t) + z Ñk (dt, dz)
R−{0}
  

n

n
= V (t) − ψk (t)Sk (t) r (t, α(t)) + ψk (t)Sk (t)μk (t, α(t)) dt +
k=1 k=1

n
ψk (t)Sk (t)σk (t, α(t))dWk (t) + ψk (t)Sk (t) z Ñk (dt, dz)
k=1 k=1 R−{0}


n
= r (t, α(t))V (t) + ψk (t)Sk (t) μk (t, α(t)) − r (t, α(t)) −
k=1


n
zηk (dz) dt + ψk (t)Sk (t)σk (t, α(t))dWk (t)
R−{0} k=1

n
+ ψk (t)Sk (t) z Nk (dt, dz),
k=1 R−{0}

which is Eq. (7.6).


Conversely, from (7.1) and (7.6), we can obtain (7.4).
This completes the proof of the above proposition.
Now we give the following fundamental results:
Theorem 7.4 Consider the model (7.1) of a financial market. Assume that the port-
folio (ϕ, ψ1 , ψ2 , . . . , ψ
n ) is a self-financing
 strategy, {V
(t)}  is the wealth process
t≥0
defined by (7.3) and nk=1 ψk Sk R−{0} zηk (dz) = n
k=1 R−{0} zψk Sk ηk (dz). If
there exists a function f (t, S) of C class (the set of functions which are once
1,2

differentiable in t and continuously twice differentiable in S) such that

V (t) = f (t, S(t)), t ∈ [0, T ], S(t) = (S1 (t), S2 (t), . . . , Sn (t)) (7.7)

holds true, then the portfolio(ϕ, ψ1 , ψ2 , . . . , ψn ) satisfying

f − ∂∂ Sf S T
ϕ(t) = , t ≥0 (7.8)
B(t)

∂f ∂f ∂f ∂f
ψ(t) = , ,..., = , t ≥0 (7.9)
∂ S1 ∂ S2 ∂ Sn ∂S
7.1 Portfolio Strategy of Financial Market with Regime … 333

and the function f (t, S) solves the following backward PDE of parabolic type:

∂f
∂fn
1

∂2 f
n n
+r Sk + Si σi S j σ j = r f, t < T, S > 0. (7.10)
∂t ∂ Sk 2 ∂ Si ∂ S j
k=1 i=1 j=1

Moreover, if V (T ) = g(S(T )), then the function f (t, S) satisfies the following
equation:
f (T, S) = g(S), S > 0. (7.11)

For the converse part, we assume that T > 0. If there exists a function f (t, S) of
C1,2 class such that (7.10) and (7.11) are satisfied, then the process (ϕ, ψ) defined
by (7.9) and (7.8) is a self-financing strategy. The wealth process V = {V (t)}t∈[0.T ]
corresponding to (ϕ, ψ) satisfies (7.7).
Proof We prove the direct part of Theorem 7.4 first.
For
V (t) = f (t, S(t)),

by applying the I t ô formula, we can infer that

∂f
∂f n
dV(t) = (t, S(t))dt + (t, S(t))(Sk μk dt + Sk σk dWk )
∂t ∂ Sk
k=1

1

∂2 f
n n
+ (t, S(t))Si σi S j σ j dt
2 ∂ Si ∂ S j
i=1 j=1
n

+ ( f (t, S + zS) − f (t, S)) Ñk (dt, dz)


k=1 R−{0}

n 
∂f
+ f (t, S + zS) − f (t, S) − z (t, S)Sk ηk (dz)dt
∂ Sk
k=1 R−{0}

S
+ γi j f (t, S(t))
j=1

∂f
∂f n
1

∂2 f
n n
=⎣ + Sk μk + Si σi S j σ j
∂t ∂ Sk 2 ∂ Si ∂ S j
k=1 i=1 j=1
n


∂f
n
∂f
− z Sk ηk (dz) dt + Sk σk dWk
∂ S ∂ Sk
k=1 R−{0} k k=1

n
+ [ f (t, S + zS) − f (t, S)]N (dt, dz). (7.12)
k=1 R−{0}
334 7 Some Applications to Economy Based on Related Research Method

On the other hand, since our strategy is self-financing, the formula (7.6) is satisfied.
Thus, the rate of return and the volatility in (7.12) and (7.6) should be coincided,
and hence
⎧ n n ∂ f

⎪ k=1 ψk (t)Sk (t)σk = k=1 ∂ Sk (t, S)Sk σk ,



 (7.13)

⎪ r (t, α(t)) f (t, S) + nk=1 ψk Sk (μk − r )

⎪   
⎩= ∂ f + n ∂ f S μ + 1 n n ∂ f
2
∂t k=1 ∂ Sk k k 2 i=1 j=1 ∂ Si ∂ S j Si σi S j σ j .

We can easily get Sk ≥ 0 from (7.5), which together with the first equation of
(7.13) and the independence of Sk (k = 1, 2, . . . , n) yields (7.9).
From the first equation of (7.13), (7.3), and (7.7), we have

n
∂f
r ϕB = f − Sk . (7.14)
∂ Sk
k=1

So that n ∂f
f − k=1 ∂ Sk Sk f − fS ST
ϕ= = . (7.15)
B B
Substituting (7.9) into the second equation of (7.13), we have

n
∂f 1

∂2 f
n n
rf − ψk Sk r = + Si σi S j σ j ,
∂t 2 ∂ Si ∂ S j
k=1 i=1 j=1

which is (7.10).
Conversely, assume that f = f (t, S) is a C1,2 -class function which is a solution
of the PDE (7.10), and that (ϕ, ψ) is a process defined by (7.9) and (7.8).
First, we will show that a process V = V (t), t ∈ [0, T ] defined by (7.3) satisfies
the equation
V (t) = f (t, S(t)), t ∈ [0, T ]. (7.16)

In fact, substituting formulas (7.9) and (7.8) into the right-hand side of (7.3), we
have

n
V (t) = ϕB + ψk Sk
k=1
n ∂f
f − k=1 ∂ Sk Sk

n
∂f
= B+ Sk
B ∂ Sk
k=1
= f, t ≥ 0.

This proves Eq. (7.16).


7.1 Portfolio Strategy of Financial Market with Regime … 335

Next, we will show that (ϕ, ψ) is a self-financing strategy, i.e., (7.6) holds.
By applying the I t ô formula to the process V and function f , we have that
Eq. (7.12) is satisfied.
Furthermore, by (7.10),

1

∂2 f
∂f
n n n
∂f
+ Si σi S j σ j = r f − r Sk ,
∂t 2 ∂ Si ∂ S j ∂ Sk
i=1 j=1 k=1

∂f

n
∂f 1
n
n
∂2 f

n
∂f
+ Sk μk + Si σi S j σ j = r f + (μk − r )Sk .
∂t ∂ Sk 2 ∂ Si ∂ S j ∂ Sk
k=1 i=1 j=1 k=1

Then, by (7.16) and (7.9), we have

n
∂f

n
∂f 1

∂2 f
n n
rV + ψk Sk (μk − r ) = + Sk μk + Si σi S j σ j
∂t ∂ Sk 2 ∂ Si ∂ S j
k=1 k=1 i=1 j=1
(7.17)
and

n
∂f
ψk Sk σk = Sk σk .
∂ Sk
k=1 k=1

Those together with (7.9) yield that (7.12) implies (7.6). The proof of Theorem 7.4
is completed.

Remark 7.5 In order to determine the portfolio strategy (φ, ψ) and obtain the final
value V (t), from Theorem 7.4, we should find the solution of the PDF (7.10) with
the final data (7.11). This is the key problem in the rest of this section. We have the
following result in terms of method of variables transformation.

Theorem 7.6 Let r (t, α(t)) in (7.1) be a constant r . The function f (t, S), t ≤
T, S > 0 given by the following formula
n
e−r (T −t)
∞ − xi2 √ σi2
f (t, S) = √ e 2 g(0, . . . , 0, Si eσi T −t xi −(r − 2 )(t−T ) , 0, . . . , 0)dxi
2π i=1 −∞
(7.18)
is a solution of the general Black-Scholes equation (7.10) with the final data (7.11).

Proof We are going to do some equivalent transformations of general B-S equa-


tion (7.10), in order to get an appropriate equivalent equation with analytic solutions.
The procedure will be divided into four steps.
Step I. Let

1 1
f (t, S1 , . . . , Sn ) = er (t−T ) q t, ln S1 − r − σ12 (t − T ), . . . , ln Sn − r − σn2 (t − T ) ,
2 2
(7.19)
336 7 Some Applications to Economy Based on Related Research Method

and denote yi = ln Si − (r − 21 σi2 )(t − T ) (i = 1, 2, . . . , n), then

∂f d(er (t−T ) )
= q + er (t−T ) qt
∂t dt 

n 
r (t−T ) dr r (t−T ) ∂q 1 2
=e q (t − T ) + r + e qt − r − σi
dt ∂ yi 2
i=1
 

n
r (t−T ) r (t−T ) dr r (t−T ) ∂q ∂q 1 2
= re q +e q (t − T ) + e − r − σi ,
dt ∂t ∂ yi 2
i=1
∂f ∂q 1
= er (t−T ) ,
∂ Si ∂ yi Si
∂2 f ∂(er (t−T ) ∂∂qyi S1i )
=
∂ Si ∂ S j ∂Sj


⎨er (t−T ) ∂ y∂i ∂qy j S1i S1j ,
2
i  = j,

= ∂2q ∂q 1

⎩er (t−T ) ∂ yi ∂ yi S1i S1i − , i = j.
∂ yi S 2
i

Inserting the above formulas into Eq. (7.10), we get


⎡ ⎤

n
dr (t−T ) (t−T ) ∂q ∂q 1 ∂q 1
rf + (t − T )qe r +e r ⎣ − r − σi ⎦ + r
2 er (t−T ) Si
dt ∂t ∂ yi 2 ∂ yi Si
i=1 i=1

n
∂2q 1 1 1

n
∂q 2 1 2
+ er (t−T ) Si σi S j σ j − er (t−T ) S S = rf
2 ∂ yi ∂ y j Si S j 2 ∂ yi i S 2 i
i=1 j=1 i=1 i

which can be simplified as

1

∂2q
n n
dr ∂q
(t − T )q + + σi σ j = 0. (7.20)
dt ∂t 2 ∂ yi ∂ y j
i=1 j=1

The final data f (T, S) = g(S) can be rewritten as

q(T, S) = g(e S1 , e S2 , . . . , e Sn ). (7.21)

Step II. We introduce another variable and a new function as follows:

τ = T − t > 0, t = T − τ , τ ≥ 0, t ≤ T,
q(t, y) = u(T − t, y) or u(τ , y) = q(T − τ , y). (7.22)
7.1 Portfolio Strategy of Financial Market with Regime … 337

It can be computed that

qt (t, y) = −u τ (T − t, y),
∂q ∂u
= (T − t, y),
∂ yi ∂ yi
∂2q ∂2u
= (T − t, y).
∂ yi ∂ y j ∂ yi ∂ y j

Substituting the above formulas into (7.20), we get


 n n ∂2u
dV
dt (t − T )u(T − t, y) − u τ (T − t, y) + 1
2 i=1 j=1 ∂ yi ∂ y j σi σ j = 0,
u(0, y) = g(e y ).
(7.23)
Since r (t, α(t)) is assumed as a constant r , (7.23) can be changed into
 n n ∂2u
u τ (τ , y) − 1
2 i=1 j=1 ∂ yi ∂ y j σi σ j = 0,
(7.24)
u(0, y) = g(e y ).

Step III. We claim that the unique solution of (7.24) is

(yi −xi )2
n ∞ −
1
e 2σi2 τ
u(t, y1 , y2 , . . . , yn ) = √ g(0, . . . , 0, e xi , 0, . . . , 0)dxi .
2πτ −∞ σi
i=1
(7.25)
In fact,

(yi −xi )2
n −
1
∞ e 2σi2 τ
u τ (τ , y) = − √ g(0, . . . , 0, e xi , 0, . . . , 0)dxi
2 2πτ τ −∞ σi
i=1
(yi −xi )2
n −
1
∞ e 2σi2 τ
(yi − xi )2
+√ g(0, . . . , 0, e xi , 0, . . . , 0) dxi ,
2πτ −∞ σi 2σi2 τ 2
i=1
(yi −xi )2
−  
∂u 1 e ∞ 2σi2 τ
yi − xi
= √ g(0, . . . , 0, e , 0, . . . , 0) − 2
xi
dxi ,
∂ yi 2πτ −∞ σi σi τ


⎨ 0, i = j,
∂ u
2
(yi −xi )2
=  ∞ g(e x1 ,...,e xn ) − 2σ2 τ
∂ yi ∂ y j ⎪
⎩ √1 e i
(yi −xi )2
− 12 σi2 dxi , i = j.
−∞
2πτ 2 σi 4 2 σi τ σi τ
338 7 Some Applications to Economy Based on Related Research Method

So
1

∂2u
n n
u τ (τ , g) − σi σ j
2 ∂ yi ∂ y j
i=1 j=1
(yi −xi )2
n ∞ −
1
e 2σi2 τ
=− √ g(0, . . . , 0, e xi , 0, . . . , 0)dxi
2 2πτ τ −∞ σi
i=1
(yi −xi )2
n ∞ −
1
e 2σi2 τ
(yi − xi )2
+√ g(0, . . . , 0, e xi , 0, . . . , 0) dxi
2πτ −∞ σi 2σi2 τ 2
i=1
(yi −xi )2
 
1
1
n ∞ g(e x1 , . . . , e xn ) − 2σi2 τ (yi − xi )2 1
− √ e − 2 σi2 dxi
2 2πτ
i=1 −∞ σi2 σi4 τ 2 σi τ
= 0.

Step IV. By introducing a change of variables z i = xi − yi , we have xi = z i + yi


and dxi = dz i , where z i ∈ (−∞, ∞). It follows that

(yi −xi )2
n −
1
∞ e 2σi2 τ
u(τ , y) = √ g(0, . . . , 0, e zi +yi , 0, . . . , 0)dz i .
2πτ −∞ σi
i=1

In order to get rid of the denominator σi2 τ in the exponent in the above formula, we
make another change of variables as:

z i = σi τ xi . (7.26)

So dz i = σi τ dxi .
Recalling the relationship between q and u described in (7.22), we therefore have
n
1
∞ − xi2 √
q(t, y) = √ e 2 g(0, . . . , 0, eσi T −t xi +yi , 0, . . . , 0)dxi .
2π i=1 −∞

Hence, by formula (7.19), we have


n
e−r (T −t)
∞ − xi2 √ σi2
f (t, S) = √ e 2 g(0, . . . , 0, eσi T −t xi +ln Si −(r − 2 )(t−T ) , 0, . . . , 0)dxi .
2π i=1 −∞
7.1 Portfolio Strategy of Financial Market with Regime … 339

Since eln S = S, then


n
e−r (T −t)
∞ − xi2 √ σi2
f (t, S) = √ e 2 g(0, . . . , 0, Si eσi T −t xi −(r − 2 )(t−T ) , 0, . . . , 0)dxi .
2π i=1 −∞

In this way we proved Theorem 7.6.

7.1.4 A Financial Example

As an application, we consider the European call option. In Theorem 7.6, we have


given the solution of the general B-S equation (7.10) which depends on the final data
(7.11)), i.e., f (T, s) = g(s). More specifically, we take the final data g(s) for the
European call option as

n
g(S) = g(S1−k1 , S2−k2 , . . . , Sn−kn ) = (Si − K i )+ , (7.27)
i=1

where Si > 0 and K i > 0 is the strike price of Si . Then we have the following
corollary from Theorem 7.6.
Corollary 7.7 For the European call option, the solution to the general Black-
Scholes value problem (7.10) with the final data (7.27) is given by formula

n

n
f (t, S) = Si Φ(−Ai + σi T − t) − e−r (T −t) K i Φ(−Ai ), (7.28)
i=1 i=1

where
σi2 Si
(r − 2 )(T − t) + ln Ki
−Ai = √ =: d2 ,
σi T − t
σi2 Si
√ (r + 2 )(T − t) + ln Ki
−Ai + σi T − t = √ =: d1 ,
σi T − t

i.e.,

n
f (t, S) = Si Φ(d1 ) − e−r (T −t) K i Φ(d2 ).
i=1 i=1

In particular,

n
−r T
f (0, S) = Si Φ(d1 ) − e K i Φ(d2 ). (7.29)
i=1 i=1
340 7 Some Applications to Economy Based on Related Research Method

Proof For a European call option, we infer that

√ σi2
Si eσi T −t xi −(r − 2 )(t−T ) > Ki . (7.30)

Divided (7.30) by Si and taken the ln, we get


 
√ σ2 Ki
σi T − t xi − r − i (t − T ) > ln ,
2 Si

i.e.,
Ki σ2
ln Si − (r − 2i )(T − t)
xi > √ =: Ai .
σi T − t

Hence, from (7.18) and (7.27), it follows that


n
e−r (T −t)
∞ − xi2 √ σi2
f (t, S) = √ e 2 Si eσi T −t xi −(r − 2 )(t−T ) dxi
2π i=1 Ai
∞ x2
e−r (T −t)

n
e− 2 dxi
i
− √ Ki
2π i=1 Ai

e−r (T −t)

n σi2 xi2 √
= √ Si e(r − 2 )(T −t) e− 2 +σi T −t xi dxi
2π i=1 Ai
∞ x2
e−r (T −t)

n
e− 2 dxi
i
− √ Ki
2π i=1 Ai

1

n σi2 √
1 2 1 2
= √ Si e− 2 (T −t) e− 2 (xi −σi T −t) + 2 σi (T −t) dxi
2π i=1 Ai
∞ x2
e−r (T −t)

n
e− 2 dxi
i
− √ Ki
2π i=1 Ai
∞ x2
1
∞ e−r (T −t)

n 2
n
− z2
e− 2 dxi
i
= √ √ S i e dz − √ K i
2π i=1 Ai −σi T −t 2π i=1 Ai


n −Ai +σi √T −t x 2
n −Ai xi2
1 − 2i −r (T −t) 1
= Si √ e dxi −e Ki √ e− 2 dxi
i=1
2π −∞ i=1
2π −∞

n

n
= Si Φ(−Ai + σi T − t) − e−r (T −t) K i Φ(−Ai ),
i=1 i=1

where Φ(t) is the probability distribution function of a standard Gaussion random


variable N (0, 1), i.e.,
7.1 Portfolio Strategy of Financial Market with Regime … 341
t
1 x2
Φ(t) = √ e− 2 dx, t ∈ R.
2π −∞

In this way, we have proved Corollary 7.7.


Remark 7.8 The above result is about the European call option. A similar repre-
sentation to those from the 
above corollary in the European put option case, can be
obtained by taking g(S) = i=1 n
(K i − Si )+ , Si > 0 for some fixed K i > 0.

7.1.5 Conclusion

In this section, we have considered a financial market model with regime switching
driven by geometric Lévy process. This financial market model is based on the mul-
tiple risky assets S1 , S2 , . . . , Sn driven by Lévy process. Its formula and equivalent
transformation methods have been used to solve this complicated financial market
model. An example of the portfolio strategy and the final value problem to apply our
method to the European call option has been given in the end of this section.

7.2 Robust H∞ Control for a Generic Linear Rational


Expectations Model of Economy

7.2.1 Introduction

“Best policies can be evaluated, in theory at least, given an economy. But macro-
economists have only model economies at their disposal and necessarily these
economies are abstractions. A concern then is that the model economy used to eval-
uate policy will provide poor guidance in practice. This leads to the search for policy
that performs well for a broad class of economies. This is what robust control theory
is all about.” The Nobel Prize-winning economist, Edward C. Prescott, wrote these
sentences in the endorsements of book [11].
Robust control for economy has received attention since the early 1960s. In [7],
ambiguity preferences of static environment are axiomatized as multiple priors, and
decision-making with multiple priors can be represented as max–min expected utility.
The static environment of [7] is extended to a dynamic context in [4], where the set
of priors is updated over time and the dynamic consistent central axiom leads to
a recursive structure for utility. The links between robust control and ambiguity
aversion are formally established in [12], which shows that the model set of robust
control can be thought of as a particular specification of the set of priors presented
in [7], and once the least favorable prior is chosen, behavior could be rationalized as
Bayesian with that prior. According to the literature [33], in the economics literature,
the most prominent and influential approach to robust control is due to Hansen
342 7 Some Applications to Economy Based on Related Research Method

and Sargent (and their co-authors), which is summarized in their monograph [11].
Hansen-Sargent approach starts with a nominal model and uses entropy as a distance
measure to calibrate the model uncertainty set. The principal tools used to solve
Hansen-Sargent robust control problems are state-space methods [8, 11]. It needs to
note that, all approaches mentioned above adopt a bounded “worst-case” strategy, or
can be described as an H∞ problem.
Many of the ideas and inspiration for robust control in economics come from con-
trol theory [33]. With the development of robust control for economy, the robust con-
trol in control theory is developed very fast. Uncertainties, stochastic disturbances,
time-varying or invariant delays, nonlinearities, which always appear in economic
systems (see, e.g., [3, 6, 17, 18, 28] and references wherein), are investigated sen-
sitively in control theory. Robust stability of uncertain stochastic neural networks
with time delay is studied in [37, 44]. Robust absolute stability for a class of time
delay uncertain singular systems with sector-bounded nonlinearity is studied in [31].
Robust stability for a class of Lur’e singular system with state time delays is studied
in [22]. Robust H∞ output feedback control for uncertain stochastic systems with
time-varying delays is studied in [39]. Robust H∞ control for uncertain singular
time delay systems is studied in [36]. Robust exponential stability of stochastic sys-
tems with time-varying delay, nonlinearity, and Markovian switching is studied in
[42]. Linear matrix inequality (LMI) approach is adopted in above works as this
approach can be readily checked by exploiting the standard Matlab LMI toolbox,
and free-weighting matrices are introduced in some of the above works to reduce the
conservatism of results. Unfortunately, although the upper bounds of delays in above
works are fit for processing control in engineering, they are not large enough for eco-
nomic systems. Because the time delays of economic systems maybe from days to
decades. For example, the period of American pork price oscillation is 4 years [5,
23], the average and range length of Kondratiev waves is 50 and from approximately
40 to 60 years [19], respectively.
Robust H∞ control condition with very large upper bound of time delay and small
disturbance attenuation for a class uncertain stochastic time-varying delay system
has been presented by the authors in [21], however, we have not discussed the essence
of conservatism fully.
Furthermore, because the LMI approach appeared very recently, there are few
literatures that study the robust problem for economic system via LMI approach. One
of the authors investigates the condition of stability for the economic discrete-time
singular dynamic input–output model in [15]. Furthermore, a state feedback control
condition for the economic discrete-time singular dynamic input–output model is
presented in [14]. The free-weighting matrix technology has not been introduced
into the above literatures.
In this section, we deal with the robust H∞ control with large time delay and
small disturbance attenuation problem for a generic linear rational expectations
model of economy with uncertainties, time-varying delay, and random disturbances.
The norm-bounded uncertainties are adopted to illustrate the uncertainties of eco-
nomic model. The concept of two levels of conservatism of stability and control
sufficient conditions is developed. This concept covers the previous concepts of
7.2 Robust H∞ Control for a Generic Linear … 343

conservatism. The approach of Parameters Weak Coupling Linear Matrix Inequal-


ities (PWCLMIs) is developed. Robust H∞ control sufficient condition is obtained
in terms of PWCLMIs, and two levels of conservatism of the condition are low.
So large time delay and small disturbance attenuation can be achieved in this note.
Furthermore, by using two-person zero-sum game, the H∞ control result of system
is obtained too. An example is given to demonstrate the effectiveness and merit of
presented method.

7.2.2 Problem Formulation

To analyze the robust control problem for macroeconomy with large time delay,
according to the thoughts of literatures [18, 28], we consider the following generic
linear rational expectations model of economy:

(Σ) :ẋ(t) = A(t)x(t) + Ad (t)x(t − d(t)) + Bu(t) + Bv v(t),


y(t) = C x(t) + Du(t),
x(t) = ψ(t), ∀t ∈ [−h, 0],

where x(t) ∈ Rn is the state vector, u(t) ∈ Rm is the vector of policy instruments
(control vector), and v(t) ∈ Rq is the vector of random shocks (stochastic distur-
bances) which belongs to l2 [0, ∞), y(t) ∈ R p is the controlled output, or target
vector, for example, inflation, output, and possibly the policy maker’s control vari-
able. d(t) is the time-varying lag (delay) satisfying

0 < d(t) < h, ḋ(t) ≤ μ. (7.31)

ψ(t) is the initial condition. B, Bv , C, and D are known real constant matrices, and
the matrices A(t), Ad (t) represent the structured model uncertainties. We assume
that A(t), Ad (t) are time-varying matrices of the form A(t) = A + ΔA(t), Ad (t) =
Ad + ΔAd (t). Here A, Ad are known real constant matrices, ΔA(t), ΔAd (t) are
unknown matrices representing time-varying parameter uncertainties and satisfying
the following admissible condition:
   
ΔA(t) ΔAd (t) = M F(t) N1 N2 (7.32)

where M, N1 , and N2 are known real constant matrices, and F(t) is the unknown
time-varying matrix-valued function subject to F T (t)F(t) ≤ I, ∀t. Analytically, the
structured uncertainties are defined independently of the state vector x(t).

Remark 7.9 The constraint ḋ(t) ≤ μ < 1 always appears in other robust control
literatures, see, e.g., [30, 38, 39]. And this constraint is removed from this section.
344 7 Some Applications to Economy Based on Related Research Method

Remark 7.10 According to [28], the monetary policy is the optimal response of
policy makers facing uncertainty in model parameters. There are two approaches
to model uncertainty, unstructured model uncertainty and structured model uncer-
tainty. For example, [17] derives policies under the assumption of unstructured uncer-
tainty, and [6] solves this problem with structured uncertainty. Equation (7.32) is a
norm-bounded uncertainty model. Norm-bounded uncertainty is one of the struc-
tured uncertainty models. This model of uncertainty has been adopted for economic
system, see, e.g., [27] and references wherein.

According to the assumption in [28], the authority uses only one instrument and
commits to the stationary rule, that is

u(t) = K x(t), (7.33)

where K ∈ Rn is the vector of parameters to be determined.


By substituting (7.33) into system (Σ), we have the closed-loop economy system
as follows:

(Σc ) :ẋ(t) = A(t)x(t) + Ad (t)x(t − d(t)) + BKx(t) + Bv v(t),


y(t) = C x(t) + DKx(t),
x(t) = ψ(t), ∀t ∈ [−h, 0].

In this section, we shall focus on the robust stabilization problem whose purpose
is to vector policy instruments of the type (7.33) for the economy system (Σ), such
that the closed-loop economy system (Σc ) satisfies the following two requirements
simultaneously:
(R1) The closed-loop system (Σc ) is asymptotically stable.
(R2) Under the zero initial condition, the controlled output y(t) satisfies

y(t) 2 < γ 2 v(t) 2 (7.34)

for all nonzero v(t) ∈ l2 [0, ∞) and all admissible uncertainties ΔA(t) and ΔAd (t),
where γ > 0 is a prescribed scalar.

7.2.3 Main Results

The following theorem provides a sufficient condition for the closed-loop economy
system (Σc ) with v(t) = 0 to be robust asymptotically stable.

Theorem 7.11 Given scalars h > 0 and μ. The closed-loop economy system (Σc )
with v(t) = 0 is robust asymptotically stable if there exist scalar ε > 0, matrices X >
0, Q > 0, R > 0, Z 1 > 0, Z 2 > 0, L = col{L 1 , L 2 , L 3 }, S̄ = col{S1 , S2 , S3 }, J =
col{J1 , J2 , J3 }, and Y such that the following PWCLMIs holds:
7.2 Robust H∞ Control for a Generic Linear … 345
⎡ ⎤
Ω Ad X + L 2T L 3T X N1T
⎢∗ −Q 0 X N2T ⎥
⎢ ⎥ < 0, (7.35)
⎣∗ ∗ −R 0 ⎦
∗ ∗ ∗ −εI
⎡ ⎤
Φ hL h S̄ hJ
⎢ ∗ −h Z 1 0 0 ⎥
⎢ ⎥
⎣ ∗ ∗ −h Z 1 0 ⎦ < 0, (7.36)
∗ ∗ ∗ −h Z 2

where Ω = AX + XAT + Q + R + BY + Y T B T +  L 1 + L 1 + εM M , Φ =
T T

Φ1 + Φ1 + diag{0, μQ, 0}, Φ1 = J S̄ − L − S̄ − J .


T

In this case, the gain matrix of controller can be chosen by K = YX−1 .

Proof For the stability analysis of the closed-loop economy system (Σc ), we define
the following Lyapunov-Krasovskii functional:
t t
V (t, x(t)) = x T (t) P̂ x(t) + x T (s) Q̂x(s)ds + x T (s) R̂x(s)ds
t−d(t) t−h
0 0
+ ẋ T (s)( Ẑ 1 + Ẑ 2 )ẋ(s)dsdθ (7.37)
−h t+θ

with P̂ > 0, Q̂ > 0, R̂ > 0, Ẑ 1 > 0, Ẑ 2 > 0.


Noting (7.31), and calculating the difference of V (t, x(t)) along the system (Σc )
with v(t) = 0, one has

V̇ (t, x(t)) ≤ 2x T (t) P̂ ẋ(t) + x T (t) Q̂x(t) − (1 − μ)x T (t − d(t)) Q̂x(t − d(t))
t
+ x T (t) R̂x(t) − x T (t − h) R̂x(t − h) − ẋ T (s)( X̂ 1 + X̂ 2 )ẋ(s)ds
t−h
t
= ξ (t)(Ψ (t) + diag{0, μ Q̂, 0})ξ(t) −
T
ẋ T (s)( X̂ 1 + X̂ 2 )ẋ(s)ds,
t−h
(7.38)

where
 T
ξ(t) = x T (t) x T (t − d(t)) x T (t − h) ,
⎡ ⎤
P̂ A(t) + A(t) P̂ + Q̂ + R̂ + P̂BK + K T B T P̂ P̂ Ad (t) 0
Ψ (t) = ⎣ ∗ − Q̂ 0 ⎦.
∗ ∗ − R̂
346 7 Some Applications to Economy Based on Related Research Method

From the Leibniz–Newton formula, the following equations are true for any matri-
ces L̂, Ŝ, and Jˆ with appropriate dimensions:
t
2ξ (t) L̂ x(t) − x(t − d(t)) −
T
ẋ(s)ds = 0, (7.39)
t−d(t)
 t−d(t) 
2ξ T (t) Ŝ x(t − d(t)) − x(t − h) − ẋ(s)ds = 0, (7.40)
t−h
t−h
T ˆ
2ξ (t) J x(t) − x(t − h) − ẋ(s)ds = 0. (7.41)
t

Adding the left sides of (7.39)–(7.41) to (7.38), one has

V̇ (t, x(t)) ≤ ξ T (t)(Ψ (t) + diag{0, μ Q̂, 0})ξ(t) + 2ξ T (t) L̂(x(t) − x(t − d(t)))
+ 2ξ T (t) Ŝ(x(t − d(t)) − x(t − h)) + 2ξ T (t) Jˆ(x(t) − x(t − h))
t t−d(t)
− (ẋ (s) Ẑ 1 + 2ξ (t) L̂)ẋ(s)ds −
T T
(ẋ T (s) Ẑ 1 + 2ξ T (t) Ŝ)
t−d(t) t−h
t
× ẋ(s)ds − (ẋ T (s) Ẑ 2 + 2ξ T (t) Jˆ)ẋ(s)ds
t−h
   T !
≤ ξ T (t)(Ψ (t) + diag{0, μ Q̂, 0})ξ(t) + ξ T (t) L̂ 0 0 + L̂ 0 0 ξ(t)
   T !
+ ξ T (t) Jˆ Ŝ − L̂ − Ŝ − Jˆ + Jˆ Ŝ − L̂ − Ŝ − Jˆ ξ(t)

+ ξ T (t)(h L̂ Ẑ 1−1 L̂ T + h Ŝ Ẑ 1−1 Ŝ T + h Jˆ Ẑ 2−1 Jˆ T )ξ(t)


= ξ T (t)(Ψ1 (t) + Ψ2 )ξ(t), (7.42)

where
   T
Ψ1 (t) = Ψ (t) + L̂ 0 0 + L̂ 0 0 ,
   T
Ψ2 = Jˆ Ŝ − L̂ − Ŝ − Jˆ + Jˆ Ŝ − L̂ − Ŝ − Jˆ + h L̂ Ẑ 1−1 L̂ T + h Ŝ Ẑ 1−1 Ŝ T
+ h Jˆ Ẑ 2−1 JˆT + diag{0, μ Q̂, 0}.

Furthermore, Ψ1 (t) can be decomposed as follows:

Ψ1 (t) = Ψ1 + ΔΨ1 (t), (7.43)

where
⎡ ⎤
Ω11 P̂ Ad + L̂ 2T L̂ 3T
Ψ1 = ⎣ ∗ − Q̂ 0 ⎦, (7.44)
∗ ∗ − R̂
7.2 Robust H∞ Control for a Generic Linear … 347
⎡ ⎤
P̂ΔA(t) + ΔA T (t) P̂ P̂ΔAd (t) 0
ΔΨ1 (t) = ⎣ ∗ 0 0⎦ , (7.45)
∗ ∗ 0

with Ω11 = P̂ A + A T P̂ + Q̂ + R̂ + P̂BK + K T B T P̂ + L̂ 1 + L̂ 1T .


By Lemma 1.23, one has
⎡ ⎤ ⎡ T⎤
P̂ M   N1  
ΔΨ1 (t) = ⎣ 0 ⎦ F(t) N1 N2 0 + ⎣ N2T ⎦ F T (t) M T P̂ 0 0
0 0
⎡ ⎤ ⎡ T⎤
P̂ M   N1  
≤ ε ⎣ 0 ⎦ M T P̂ 0 0 + ε−1 ⎣ N2T ⎦ N1 N2 0 . (7.46)
0 0

From (7.43) to (7.46), one has


⎡ ⎤ ⎡ T⎤
Ω11 + ε P̂ M M T P̂ P̂ Ad + L̂ 2T L̂ 3T N1  
Ψ1 (t) ≤ ⎣ ∗ − Q̂ 0 ⎦ + ε−1 ⎣ N2T ⎦ N1 N2 0 .
∗ ∗ − R̂ 0

By Surch complement, Ψ1 (t) < 0 is equivalent to


⎡ ⎤
Ω11 + ε P̂ M M T P̂ P̂ Ad 0 N1T
⎢ ∗ − Q̂ 0 N2T ⎥
⎢ ⎥ < 0. (7.47)
⎣ ∗ ∗ − R̂ 0 ⎦
∗ ∗ ∗ −εI

On the other hand, denote X = P̂ −1 , Q = X Q̂ X, R = X R̂ X, L = X L̂ X, pre-


and post-multiplying (7.35) by diag{ P̂, P̂, P̂, I }, it is easy to see that (7.47) holds.
At the same time, denote Z 1 = X Ẑ 1 X, Z 2 = X Ẑ 2 X, pre- and post-multiplying
(7.36) by diag{ P̂, P̂, P̂, P̂, P̂, P̂}, and by Surch complement again, one has

Ψ2 < 0. (7.48)

So, from Theorem 7.11, one can ensure V̇ (t, x(t)) < 0. This completes the proof.

Next, we will analyze the H∞ performance of the closed-loop economy system


(Σc ), and give the result in the following theorem.

Theorem 7.12 Given scalars h > 0, μ, and γ. The closed-loop economy system
(Σc ) is robust asymptotically stable and the H∞ -norm constraint (7.34) is achieved
under the zero initial condition for all nonzero v(t), if there exist scalar ε > 0,
matrices X > 0, Q > 0, R > 0, Z 1 > 0, Z 2 > 0, L = col{L 1 , L 2 , L 3 }, S̄ =
col{S1 , S2 , S3 }, J = col{J1 , J2 , J3 }, and Y such that the following PWCLMIs holds:
348 7 Some Applications to Economy Based on Related Research Method
\[
\begin{bmatrix}
\Omega & A_{d}X + L_{2}^{T} & L_{3}^{T} & XN_{1}^{T} & XC^{T} \\
* & -Q & 0 & XN_{2}^{T} & Y^{T}D^{T} \\
* & * & -R & 0 & 0 \\
* & * & * & -\varepsilon I & 0 \\
* & * & * & * & -I
\end{bmatrix} < 0, \tag{7.49}
\]
\[
\begin{bmatrix}
\Theta & hL & h\bar{S} & hJ \\
* & -hZ_{1} & 0 & 0 \\
* & * & -hZ_{1} & 0 \\
* & * & * & -hZ_{2}
\end{bmatrix} < 0, \tag{7.50}
\]

where

\[
\Theta = \Theta_{1} + \Theta_{1}^{T} + \Theta_{2},
\]
\[
\Theta_{1} = \begin{bmatrix} J & \bar{S}-L & -\bar{S}-J & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},
\]
\[
\Theta_{2} = \begin{bmatrix}
0 & 0 & 0 & B_{v} \\
* & \mu Q & 0 & 0 \\
* & * & 0 & 0 \\
* & * & * & -\gamma^{2} I
\end{bmatrix}.
\]

In this case, the controller gain matrix can be chosen as K = Y X⁻¹.

Proof It is easy to see that (7.49) and (7.50) imply (7.35) and (7.36), respectively. So the closed-loop system (Σc) is robustly asymptotically stable.
Define the same Lyapunov–Krasovskii functional candidate V(t, x(t)) as in (7.37). Along the same lines as in the proof of Theorem 7.11, one has

\[
\dot{V}(t,x(t)) \le \delta^{T}(t)\bigl(\tilde{\Lambda}_{1}(t) + \tilde{\Lambda}_{2}\bigr)\delta(t), \tag{7.51}
\]

where

\[
\delta(t) = \begin{bmatrix} x^{T}(t) & x^{T}(t-d(t)) & x^{T}(t-h) & v^{T}(t) \end{bmatrix}^{T},
\]
\[
\tilde{\Lambda}_{1}(t) = \begin{bmatrix}
\Omega_{11} & \hat{P}A_{d}(t) & 0 & 0 \\
* & -\hat{Q} & 0 & 0 \\
* & * & -\hat{R} & 0 \\
* & * & * & 0
\end{bmatrix},
\]
\[
\begin{aligned}
\tilde{\Lambda}_{2} ={}& \begin{bmatrix} \hat{J} & \hat{S}-\hat{L} & -\hat{S}-\hat{J} & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} + \begin{bmatrix} \hat{J} & \hat{S}-\hat{L} & -\hat{S}-\hat{J} & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}^{T} \\
&+ \begin{bmatrix} h\hat{L}\hat{Z}_{1}^{-1}\hat{L}^{T} + h\hat{S}\hat{Z}_{1}^{-1}\hat{S}^{T} + h\hat{J}\hat{Z}_{2}^{-1}\hat{J}^{T} & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix}
0 & 0 & 0 & \hat{P}B_{v} \\
* & \mu\hat{Q} & 0 & 0 \\
* & * & 0 & 0 \\
* & * & * & 0
\end{bmatrix}.
\end{aligned}
\]

In order to deal with the H∞ performance of the system (Σc), we introduce

\[
J(t) = \int_{0}^{t}\bigl(y^{T}(s)y(s) - \gamma^{2}v^{T}(s)v(s)\bigr)\,ds, \tag{7.52}
\]

where t > 0.
Under the zero initial condition, x(t) = 0 for t ∈ [−h, 0], one has

\[
\begin{aligned}
J(t) &= \int_{0}^{t}\bigl(y^{T}(s)y(s) - \gamma^{2}v^{T}(s)v(s) + \dot{V}(x(s))\bigr)\,ds - V(t,x(t)) \\
&\le \int_{0}^{t}\bigl(y^{T}(s)y(s) - \gamma^{2}v^{T}(s)v(s) + \dot{V}(x(s))\bigr)\,ds \\
&\le \int_{0}^{t}\delta^{T}(s)\bigl(\Lambda_{1}(s) + \Lambda_{2}\bigr)\delta(s)\,ds,
\end{aligned}
\tag{7.53}
\]

where

\[
\Lambda_{1}(t) = \begin{bmatrix}
\Omega_{11} + \bar{\Omega}_{11} & \hat{P}A_{d}(t) & 0 & 0 \\
* & -\hat{Q} & 0 & 0 \\
* & * & -\hat{R} & 0 \\
* & * & * & 0
\end{bmatrix},
\]
\[
\Lambda_{2} = \tilde{\Lambda}_{2} + \begin{bmatrix}
0 & 0 & 0 & 0 \\
* & \mu\hat{Q} & 0 & 0 \\
* & * & 0 & 0 \\
* & * & * & -\gamma^{2} I
\end{bmatrix},
\]

with \(\bar{\Omega}_{11} = C^{T}C + K^{T}D^{T}DK + C^{T}DK + K^{T}D^{T}C = (C+DK)^{T}(C+DK)\).
Along the same lines as in the proof of Theorem 7.11, according to (7.49) and (7.50), J(t) < 0 holds for all t > 0, that is, ‖y‖₂ ≤ γ‖v‖₂ for all nonzero v(t), so the H∞-norm constraint (7.34) is achieved. This completes the proof of the theorem.
According to the two-person zero-sum game framework [26], the two players are u(t) and v(t), and the objective function is [27]

\[
J(t) = \int_{0}^{t}\bigl(x^{T}(s)\Xi_{1}x(s) + u^{T}(s)\Xi_{2}u(s) - \gamma^{2}v^{T}(s)v(s)\bigr)\,ds, \tag{7.54}
\]

where Ξ1 > 0, Ξ2 > 0.


Denoting Ξ1 = Π̃1ᵀ Ξ̃1⁻¹ Π̃1 and Ξ2 = Π̃2ᵀ Ξ̃2⁻¹ Π̃2, we have the following corollary.

Corollary 7.13 Given scalars h > 0, μ, and γ, suppose the objective function of the two-person zero-sum game is (7.54). The economy system (Σc) is robustly asymptotically stable, and the H∞-norm constraint (7.34) is achieved under the zero initial condition for all nonzero v(t), if there exist a scalar ε > 0, matrices X > 0, Q > 0, R > 0, Z1 > 0, Z2 > 0, L = col{L1, L2, L3}, S̄ = col{S1, S2, S3}, J = col{J1, J2, J3}, and Y such that the following PWCLMIs hold:
\[
\begin{bmatrix}
\Omega & A_{d}X + L_{2}^{T} & L_{3}^{T} & XN_{1}^{T} & \Pi_{1} & \Pi_{2} \\
* & -Q & 0 & XN_{2}^{T} & 0 & 0 \\
* & * & -R & 0 & 0 & 0 \\
* & * & * & -\varepsilon I & 0 & 0 \\
* & * & * & * & -\tilde{\Xi}_{1} & 0 \\
* & * & * & * & * & -\tilde{\Xi}_{2}
\end{bmatrix} < 0, \tag{7.55}
\]
\[
\begin{bmatrix}
\Theta & hL & h\bar{S} & hJ \\
* & -hZ_{1} & 0 & 0 \\
* & * & -hZ_{1} & 0 \\
* & * & * & -hZ_{2}
\end{bmatrix} < 0, \tag{7.56}
\]

where

\[
\Pi_{1} = X\tilde{\Pi}_{1}^{T}, \qquad \Pi_{2} = Y^{T}\tilde{\Pi}_{2}^{T}.
\]

In this case, the controller gain matrix can be chosen as K = Y X⁻¹.


Remark 7.14 Corollary 7.13 is a special case of Theorem 7.12. If another objective function is chosen by adopting another game, a corresponding result can be obtained along the same lines. In this sense, Theorem 7.12 is a general robust H∞ control result for the macroeconomic system (Σc).

7.2.4 Numerical Example

In this subsection, an example is presented to illustrate the usefulness of the method developed in this section.
Example 7.15 Consider the system (Σ) with the following parameters:

\[
A = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.2 \end{bmatrix}, \quad
A_{d} = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}, \quad
B = \begin{bmatrix} 0.4 & 0 \\ 0 & 0.4 \end{bmatrix}, \quad
B_{v} = \begin{bmatrix} 0.4 & 0 \\ 0 & 0.1 \end{bmatrix},
\]
\[
C = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}, \quad
D = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.1 \end{bmatrix}, \quad
M = \begin{bmatrix} 0.01 \\ 0.02 \end{bmatrix}, \quad
N_{1} = \begin{bmatrix} 0.01 & 0.02 \end{bmatrix}, \quad
N_{2} = \begin{bmatrix} 0.01 & 0.01 \end{bmatrix},
\]
\[
h = 10^{8}, \quad \mu = 10.
\]

By Theorem 7.12, we can obtain the state feedback controller gain and the lower bound of the disturbance attenuation as follows:

\[
K = \begin{bmatrix} -26.1983 & -2.2999 \\ -7.8640 & -34.6549 \end{bmatrix}, \qquad \gamma = 0.8606.
\]
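As an illustration of the numerical machinery behind such results, the following is a minimal sketch of the change of variables K = Y X⁻¹ on a simplified, delay-free stabilization LMI, using the A and B of this example. It is not the full PWCLMIs (7.49)–(7.50); the use of the CVXPY package and all variable names are illustrative assumptions.

```python
# Minimal sketch: the change of variables K = Y X^{-1} on a simplified,
# delay-free stabilization LMI  A X + X A^T + B Y + Y^T B^T < 0.
# This is NOT the full PWCLMIs (7.49)-(7.50); it only illustrates how
# the gain is recovered once an LMI in (X, Y) is feasible.
import cvxpy as cp
import numpy as np

A = np.array([[0.1, 0.0], [0.0, 0.2]])   # A from Example 7.15
B = np.array([[0.4, 0.0], [0.0, 0.4]])   # B from Example 7.15

n = A.shape[0]
X = cp.Variable((n, n), symmetric=True)  # X = P_hat^{-1}
Y = cp.Variable((n, n))                  # Y = K X

eps = 1e-6                               # margin to enforce strict inequalities
lmi = A @ X + X @ A.T + B @ Y + Y.T @ B.T
constraints = [X >> eps * np.eye(n), lmi << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve()

K = Y.value @ np.linalg.inv(X.value)     # recover the gain K = Y X^{-1}
print("status:", prob.status)
print("K =\n", K)
```

The full design would assemble (7.49)–(7.50) as block matrices in the same way; the congruence transformation in the proof of Theorem 7.11 is exactly what makes the problem linear in (X, Y).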

Table 7.1 Maximum h for various μ with γ = 1

μ               0.3             1               5               10
Theorem 7.12    0.6356 × 10¹²   0.6366 × 10¹²   0.6360 × 10¹²   0.6365 × 10¹²

Table 7.2 Minimum γ for various μ with h = 10⁸

μ               0.3       0.5       1         5
Theorem 7.12    0.6181    0.6065    0.8395    0.6420

Furthermore, we examine the upper bound of the time delay h and the lower bound of the disturbance attenuation γ for various time-varying rates μ.
Table 7.1 lists the upper bounds on the time delay h for various μ with γ = 1, obtained by Theorem 7.12.
Table 7.2 lists the lower bounds of the disturbance attenuation γ for various μ with h = 10⁸, obtained by Theorem 7.12.
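As a sketch of how entries of this kind can be generated, one may bisect on the parameter of interest, checking feasibility of the PWCLMIs at each trial value. The feasibility oracle below is a hypothetical helper (it would assemble and solve (7.49)–(7.50) for given h, μ, γ, e.g., with CVXPY as in the previous sketch); only the bisection wrapper is shown.

```python
# Hedged sketch: bisection on the admissible delay h, given a feasibility
# oracle for the PWCLMIs (7.49)-(7.50).  The oracle feasible(h, mu, gamma)
# is a hypothetical helper, not implemented here; it would return True
# iff the LMIs are feasible for the given parameters.
from typing import Callable

def max_delay(mu: float, gamma: float,
              feasible: Callable[[float, float, float], bool],
              h_lo: float = 0.0, h_hi: float = 1e13,
              iters: int = 60) -> float:
    """Largest h in [h_lo, h_hi] with feasible(h, mu, gamma), by bisection.

    Assumes feasibility is monotone in h (feasible for all h below some
    threshold), which matches how Table 7.1 reports a maximum h.
    """
    for _ in range(iters):
        h = 0.5 * (h_lo + h_hi)
        if feasible(h, mu, gamma):
            h_lo = h          # still feasible: search larger delays
        else:
            h_hi = h          # infeasible: shrink the interval
    return h_lo

# Usage (with a hypothetical oracle pwclmis_feasible):
#   h_max = max_delay(mu=10.0, gamma=1.0, feasible=pwclmis_feasible)
```

Minimizing γ for fixed h (Table 7.2) can be treated the same way, or directly as a semidefinite program with γ² as a decision variable.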

Remark 7.16 To the best of our knowledge, no concrete macroeconomic model with time delay and identified parameter values is available so far. Hence we cannot provide a real-world example of a macroeconomic system with time delay, and instead give the numerical Example 7.15 to illustrate the merit of the present approach. How to obtain the weight parameter of the delayed state, for instance A_d(t) in this section, is a challenge in modeling macroeconomic systems with time delay, and an important direction for further research.

7.2.5 Discussions

In this subsection, we discuss the conservatism of the results presented in this section.
One can estimate the conservatism of conditions by two standards. The first standard is the scope of application of a condition, that is, the set of system parameter values for which the condition holds. For example, if every set of system parameter values for which the second condition holds also makes the first condition hold, while some values for which the first condition holds do not make the second condition hold, then the first condition is less conservative than the second; we write this concisely as S1 ⊃ S2 (see Table 7.3). The second standard is the achievable stability or stabilization performance, including the admissible time delay, the admissible time-varying rate, and the disturbance attenuation of the system. The larger the upper bound of the time delay, the faster the admissible time-varying rate, or the smaller the lower bound of the disturbance attenuation, the less conservative the condition. From this point of view, there are two levels of conservatism of a condition.
Table 7.3 lists the cases in which the first condition is less conservative than the second, where Sᵢ, hᵢ, γᵢ, and μᵢ denote the set of system parameter values, the upper bound of the time delay, the minimum disturbance attenuation, and the maximum time-varying rate (if the delay is time-varying) for which the ith condition holds. In other cases, the conservatism of two conditions may not be directly comparable.

Table 7.3 Cases in which the conservatism of the first condition is less than that of the second

1st level    S1 ⊃ S2    h, γ, and μ are not considered
2nd level    S1 = S2    h1 > h2, μ1 = μ2, γ1 = γ2;
                        h1 > h2, μ1 = μ2, γ1 < γ2;
                        h1 > h2, μ1 > μ2, γ1 < γ2;
                        h1 = h2, μ1 = μ2, γ1 < γ2;
                        h1 = h2, μ1 > μ2, γ1 < γ2;
                        h1 = h2, μ1 > μ2, γ1 = γ2
Now, we analyze the conservatism of the results presented in this section.
In this section, we introduce free-weighting matrices L, S, J into V̇(t, x(t)) = δᵀ(t) f(·) δ(t) by employing the Leibniz–Newton formula; that is, we add zero-valued terms of the form g(L, S, J) = 0 (cf. (7.39)–(7.41)). Then V̇(t, x(t)) = δᵀ(t)(f(·) + g(·))δ(t). The main idea of Parameters Weak Coupling Linear Matrix Inequalities (PWCLMIs) can be described as follows.
Decompose f(·) and g(·) as

f(·) = f1(·) + f2(·)

and

g(·) = g1(·) + g2(·) = 0,

respectively. Then one has

V̇(t, x(t)) = δᵀ(t)(f1(·) + g1(·) + f2(·) + g2(·))δ(t). (7.57)

Obviously, if the conditions

f1(·) + g1(·) < 0 (7.58)

and

f2(·) + g2(·) < 0 (7.59)

hold, then the original condition

f(·) = f(·) + g(·) < 0 (7.60)

holds, which ensures V̇(t, x(t)) < 0.



Unfortunately, (7.58) and (7.59) holding together is only a sufficient condition for (7.60) to hold. So the condition in this section (that (7.58) and (7.59) both hold) may be more conservative than the original condition (7.60).
In order to overcome this shortcoming, we denote f1(·), f2(·), g1(·), g2(·) in Theorem 7.11 as

f1(·) = f1(A, Ad, B, C, D, M, N1, N2, P, Q, R), (7.61)
f2(·) = f2(h, μ, P, Q, R, Z1, Z2), (7.62)
g1(·) = g1(L), (7.63)
g2(·) = g2(L, S, J). (7.64)

In Theorem 7.12, we denote f1(·), g1(·), g2(·) as above, and denote

f2(·) = f2(h, γ, μ, Bv, P, Q, R, Z1, Z2). (7.65)

Note that f1(·) does not involve h, γ, μ, and f2(·) does not involve the system parameters, except that f2(·) in Theorem 7.12 involves Bv because the disturbance v(t) is introduced. At the same time, g1(·) and g2(·) are both functions of the free-weighting matrices, which can be chosen freely. As shown in Sect. 7.2.3, the first LMI of each theorem (condition) is represented by f1(·) + g1(·) and the second LMI by f2(·) + g2(·); hence h, γ, μ are only weakly coupled with the system parameters A, Ad, B, C, D, M, N1, N2. For this reason, we call such a set of LMIs Parameters Weak Coupling Linear Matrix Inequalities (PWCLMIs).
First, we compare the conservatism of the condition in this section ((7.58) and (7.59) both hold) with that of the original condition ((7.60) holds) at the first level.
Since g1(L) is a symmetric matrix and L is a free-weighting matrix, denoting by ρ a sufficiently small scalar, one can always set

g1(L) = f2(·) + ρI, (7.66)

and then

g2(L, S, J) = −g1(L) = −f2(·) − ρI. (7.67)

If the set of system parameter values S ensures that the original condition (7.60) holds, then by (7.66) and (7.67), noting that ρ is sufficiently small, one has

f1(·) + g1(·) = f1(·) + f2(·) + ρI < 0,
f2(·) + g2(·) = f2(·) − f2(·) − ρI = −ρI < 0.

That is, if (7.60) holds, then (7.58) and (7.59) hold. In other words, the PWCLMIs condition is a necessary condition for the original condition.

Based on the above discussion, one can see that the PWCLMIs condition is both a sufficient and a necessary condition for the original condition. Hence, at the first level of conservatism, the condition in this section is equivalent to the original condition.

Remark 7.17 The equivalence of conditions with and without free-weighting matrices has been proved in the literature; see, e.g., [9, 40, 41]. In the terminology of this section, the conservatism of the conditions studied in those works is compared at the first level.

Second, we compare the conservatism of the condition in this section ((7.58) and (7.59) both hold) with that of the original condition (7.60) at the second level.
At this level, we focus on the value fields of h, γ, and μ. To estimate these value fields, we analyze the structure of the LMIs. In the condition of this section (see Eqs. (7.61)–(7.65) and the two theorems of this section), the stability and control performance parameters h, γ, μ in the second LMI are only weakly coupled with the system parameters A, Ad, B, C, D, M, N1, N2 in the first LMI. In particular, in Theorem 7.11 the second LMI, which involves the stability performance parameters, contains no system parameters at all. At the same time, the free-weighting matrices L, S, J appear in the second LMI. So the value fields of the stability and control performance parameters in this section are large (or free). In addition, we decompose the term −(1 − μ)Q of the original condition into −Q and μQ, which removes the constraint μ < 1 from Theorems 7.11 and 7.12. In the original condition, by contrast, h, μ, γ are bounded by all the system parameters A, Ad, B, Bv, C, D, M, N1, N2, and μ < 1 is required. According to these facts, the condition in this section is less conservative than the original condition at the second level.

Remark 7.18 This characteristic also appears in other works that introduce free-weighting matrices; see, e.g., [21, 36, 42] and the references therein. In the terminology of this section, the conservatism of the conditions studied in those works is compared at the second level. Thus, at the second level, delay-dependent conditions with free-weighting matrices are always less conservative than those without.

From the above discussion, the condition in this section is, on the whole, less conservative than the original one, and the value fields of h, γ, and μ in this section are free. Consequently, a very large time delay, a large time-varying rate, and a small disturbance attenuation can be achieved by adopting the presented approach.

7.2.6 Conclusions

In this section, we have studied the problem of robust H∞ state feedback control for an economy described by a generic linear rational expectations model with uncertainties, time-varying delay, and stochastic disturbances. Norm-bounded uncertainties have been adopted to describe the uncertainties of the economic system. The state feedback controller has been designed, for all admissible uncertainties, such that the closed-loop system is asymptotically stable and achieves a prescribed H∞ performance level. The results have been presented in terms of PWCLMIs. The concept of two levels of conservatism has been proposed and used to analyze the conservatism of the presented results. A large admissible time delay and a small disturbance attenuation, which are of special importance to economic systems, have been obtained. Furthermore, by using a two-person zero-sum game, a corresponding result has been obtained. A numerical example has been exploited to show the effectiveness and benefit of the results.

References

1. D. Applebaum, Lévy Processes and Stochastic Calculus, 2nd edn. (Cambridge University Press,
Cambridge, 2008)
2. N. Bäuerle, A. Blatter, Optimal control and dependence modeling of insurance portfolios with
Lévy dynamics. Insur. Math. Econ. 48(3), 398–405 (2011)
3. A. Castelletti, F. Pianosi, R. Soncini-Sessa, Water reservoir control under economic, social and
environmental constraints. Automatica 44(6), 1595–1607 (2008)
4. L.G. Epstein, M. Schneider, Recursive multiple-priors. J. Econ. Theory 113(1), 1–31 (2003)
5. C. Fan, Y. Zhang, The time delay and oscillation of economic system, in Proceedings of the
1986 International Conference of the System Dynamics Society (1986), pp. 525–535
6. M.P. Giannoni, Does model uncertainty justify caution? Robust optimal monetary policy in a
forward-looking model. Macroecon. Dyn. 6(1), 111–144 (2002)
7. I. Gilboa, D. Schmeidler, Maxmin expected utility with non-unique prior. J. Math. Econ. 18(2),
141–153 (1989)
8. P. Giordani, P. Söderlind, Solution of macromodels with Hansen-Sargent robust policies: some
extensions. J. Econ. Dyn. Control 28(12), 2367–2397 (2004)
9. F. Gouaisbaut, D. Peaucelle, A note on stability of time delay systems, in 5th IFAC Symposium
on Robust Control Design (Rocond 06) (2006), 13 p
10. X. Guo, Q. Zhang, Optimal selling rules in a regime switching model. IEEE Trans. Autom.
Control 50(9), 1450–1455 (2005)
11. L.P. Hansen, T.J. Sargent, Robustness (Princeton University Press, Princeton, 2008)
12. L.P. Hansen, T.J. Sargent, G. Turmuhambetova, N. Williams, Robust control and model mis-
specification. J. Econ. Theory 128(1), 45–90 (2006)
13. M. Harry, Portfolio selection. J. Financ. 7(1), 77–91 (1952)
14. L. Jiang, J. Fang, W. Zhou, Stability analysis of economic discrete-time singular dynamic input-
output model, in Proceedings of the Seventh International Conference on Machine Learning
and Cybernetics, vol. 3 (2008), pp. 1434–1438
15. L. Jiang, J. Fang, W. Zhou, D. Zheng, H. Lu, Stability of economic input-output model, in
Proceedings of the 27th Chinese Control Conference (2008), pp. 804–807
16. J. Kallsen, Optimal portfolios for exponential Lévy processes. Math. Methods Oper. Res. 51(3),
357–374 (2000)
17. K. Kasa, Model uncertainty, robust policies, and the value of commitment. Macroecon. Dyn.
6(1), 145–166 (2002)
18. D.A. Kendrick, Stochastic control for economic models: past, present and the paths ahead. J.
Econ. Dyn. Control 29(1), 3–30 (2005)
19. N.D. Kondratiev, The Major Economic Cycles (in Russian) (Moscow, 1925)
20. D. Li, W. Ng, Optimal dynamic portfolio selection: multiperiod mean-variance formulation.
Math. Financ. 10(3), 387–406 (2000)

21. M. Li, W. Zhou, H. Wang, Y. Chen, R. Lu, H. Lu, Delay-dependent robust H∞ control for
uncertain stochastic systems, in Proceedings of the 17th World Congress of the International
Federation of Automatic Control, vol. 17 (2008), pp. 6004–6009
22. R. Lu, X. Dai, H. Su, J. Chu, A. Xue, Delay-dependent robust stability and stabilization
conditions for a class of Lur’e singular time-delay systems. Asian J. Control 10(4), 462–469 (2009)
23. D.G. Luenberger, Introduction to Dynamic Systems: Theory, Models, and Applications (Wiley,
New York, 1979)
24. M. Pemy, Q. Zhang, G.G. Yin, Liquidation of a large block of stock with regime switching.
Math. Financ. 18(4), 629–648 (2008)
25. B. Øksendal, Stochastic Differential Equations: An Introduction with Applications (Springer,
Berlin, 2005)
26. T. Parthasarathy, T.E.S. Raghavan, Some topics in two-person games. SIAM Rev. 14, 356–357
(1972)
27. B. Tang, C. Cheng, M. Zhong, Theory and Applications of Robust Economic Control, 1st edn.
(China Textile University Press, Shanghai, 2000)
28. R.J. Tetlow, P. von zur Muehlen, Robust monetary policy with misspecified models: does model
uncertainty always call for attenuated policy? J. Econ. Dyn. Control 25(6), 911–949 (2001)
29. N. Vandaele, M. Vanmaele, A locally risk-minimizing hedging strategy for unit-linked life
insurance contracts in a Lévy process financial market. Insur. Math. Econ. 42(3), 1128–1137 (2008)
30. Z. Wang, S. Lauria, J. Fang, X. Liu, Exponential stability of uncertain stochastic neural networks
with mixed time-delays. Chaos Solitons Fractals 32(1), 62–72 (2007)
31. H. Wang, A. Xue, R. Lu, Absolute stability criteria for a class of nonlinear singular systems
with time delay. Nonlinear Anal. Theory Methods Appl. 70(2), 621–630 (2009)
32. C. Weng, Constant proportion portfolio insurance under a regime switching exponential Lévy
process. Insur. Math. Econ. 52(3), 508–521 (2013)
33. N. Williams, Robust Control. An Entry for the New Palgrave (Princeton University Press,
Princeton, 2007)
34. H. Wu, Z. Li, Multi-period mean-variance portfolio selection with Markov regime switching
and uncertain time-horizon. J. Syst. Sci. Complex. 24(1), 140–155 (2011)
35. H. Wu, Z. Li, Multi-period mean-variance portfolio selection with regime switching and a
stochastic cash flow. Insur. Math. Econ. 50(3), 371–384 (2012)
36. Z. Wu, W. Zhou, Delay-dependent robust H∞ control for uncertain singular time-delay systems.
IET Control Theory Appl. 1(5), 1234–1241 (2007)
37. Z. Wu, H. Su, J. Chu, W. Zhou, Improved result on stability analysis of discrete stochastic
neural networks with time delay. Phys. Lett. A 373(17), 1546–1552 (2009)
38. L. Xie, C.E. de Souza, Robust H∞ control for linear systems with norm-bounded time-varying
uncertainties. IEEE Trans. Autom. Control 37(8), 1188–1191 (1992)
39. S. Xu, T. Chen, H∞ output feedback control for uncertain stochastic systems with time-varying
delays. Automatica 40(12), 2091–2098 (2004)
40. S. Xu, J. Lam, On equivalence and efficiency of certain stability criteria for time-delay systems.
IEEE Trans. Autom. Control 52(1), 95–101 (2007)
41. S. Xu, J. Lam, A survey of linear matrix inequality techniques in stability analysis of delay
systems. Int. J. Syst. Sci. 39(12), 1095–1113 (2008)
42. D. Yue, Q.L. Han, Delay-dependent exponential stability of stochastic systems with time-
varying delay, nonlinearity and Markovian switching. IEEE Trans. Autom. Control 50(2),
217–222 (2005)
43. K.C. Yuen, C. Yin, On optimality of the barrier strategy for a general Lévy risk process. Math.
Comput. Model. 53(9), 1700–1707 (2011)
44. W. Zhou, M. Li, Mixed time-delays dependent exponential stability for uncertain stochastic
high-order neural networks. Appl. Math. Comput. 215(2), 503–513 (2009)
Index

A
Asymptotic stability, 7, 38

B
Brownian motion, 2, 270

C
Chebyshev’s inequality, 9, 261

D
Doob’s martingale inequality, 9, 260
Dynkin’s formula, 5, 62

E
Exponential stability, 13, 40, 104, 269, 342

G
Gronwall’s inequality, 9

H
Hölder’s inequality, 9

I
Itô’s formula, 5, 270

J
Jensen’s inequality, 10

L
Lévy noise, 4, 269, 270, 280
LMI, 37, 128, 154, 166, 302, 342
Local martingale, 1, 257
Lyapunov function, 14, 71, 108, 298

M
Markov chain, 2, 56, 104, 187, 269, 270, 328
Markovian switching, 3, 66, 128, 166, 269, 280
Martingale, 1
M-matrix, 8, 131, 211, 269

N
Neural network, 13, 37, 103, 165, 269, 342

P
Poisson random measure, 4

S
Schur’s complements, 10, 44
Stability, 269
Stochastic process, 1, 27
Stopping time, 1, 260
Strong law of large numbers, 1, 270
Synchronization, 13, 22, 37, 93, 153, 269

T
Time delay, 39, 115, 154, 191, 280

Y
Young’s inequality, 9, 144, 180
