
Neurocomputing 97 (2012) 192–200


Analysis and design of associative memories based on stability of cellular neural networks

Qi Han (a,*), Xiaofeng Liao (b), Tingwen Huang (c), Jun Peng (a), Chuandong Li (b), Hongyu Huang (b)

(a) School of Electrical and Information Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
(b) State Key Laboratory of Power Transmission Equipment and System Security, College of Computer Science, Chongqing University, Chongqing 400030, China
(c) Texas A&M University at Qatar, Doha, P.O. Box 23874, Qatar

Article history: Received 6 September 2011; received in revised form 14 June 2012; accepted 17 June 2012; available online 1 July 2012. Communicated by J. Liang.

Keywords: Cellular neural networks; Associative memories; Cloning template

Abstract. It is well known that the prototype patterns in associative memories can be represented by stable equilibrium points of cellular neural networks (CNNs). Therefore, the stability of equilibrium points of CNNs is critical for associative memories based on CNNs. In this paper, some criteria for the stability of CNNs are established. In fact, these criteria give constraint conditions on the relationships among the parameters of CNNs. Compared with previous works, our results relax the conservatism of these relationships and extend the range of the parameter values. Two design procedures for the parameters of CNNs are given to achieve associative memories under our criteria. Finally, an example is given to verify the theoretical results and design procedures.

© 2012 Elsevier B.V. All rights reserved.

1. Introduction

Cellular neural networks (CNNs) were first introduced in 1988 [1,2]. CNNs can be arranged in a matrix and implemented by simple analog circuits called cells. Each cell in a CNN is connected only to its neighboring cells. Therefore, CNNs are well suited to very-large-scale integration implementations due to this local interconnection property, and they have found applications in a variety of areas, such as image processing [3], pattern recognition [4] and associative memories [5]. In this paper, we mainly discuss the application of CNNs to associative memories. At its simplest, an associative memory is a system which stores mappings from specific input patterns to specific output patterns. That is to say, a system which "associates" two patterns is one in which, when one of the two patterns is presented, the other can be reliably recalled. There are two kinds of associative memories: auto-associative and hetero-associative. In an associative memory, the retrieved pattern can differ from the probe in content or format; in some other search approaches, however, the two patterns have to be the same or similar, as in the similarity search of [6].

Since Liu and Michel [7] reported that CNNs are effective as an associative-memory medium, associative memories have received a great deal of interest. Next, we introduce associative memories in chronological order. Sparsely interconnected neural networks for associative memories were presented in [8], where a sparse synthesis technique was applied to the design of a class of CNNs. A design algorithm for CNNs with a space-invariant cloning template, with applications to associative memories, was presented in [9]. A synthesis procedure for associative memories using discrete-time CNNs (DTCNNs) with learning and forgetting capabilities was presented in [10]. A synthesis procedure of CNNs for associative memories was introduced in [11], where the method assured the global asymptotic stability of the equilibrium point. DTCNNs with a globally asymptotically stable equilibrium point were designed to behave as associative memories in [12]. In the last ten years, associative memories have also been realized via local stability of equilibrium points of CNNs. In [13,14], the number of memory patterns of CNNs that are locally exponentially stable was obtained, and design procedures for associative memories based on CNNs were given. A design method for synthesizing associative memories based on discrete-time recurrent neural networks was presented in [15]. In [16], a new design procedure for synthesizing associative memories based on CNNs with time delays, characterized by input and output matrices, was introduced.

From the above introduction to associative memories, it is easy to see that the stability of CNNs plays an important role in associative memories. The prototype patterns in associative memories can be represented by stable equilibrium points of cellular neural networks. Therefore, it is important to study the stability of equilibrium points of CNNs. There has been abundant research on the stability of CNNs. Some sufficient conditions for

* Corresponding author. E-mail address: yiding1981@yahoo.com.cn (Q. Han).

http://dx.doi.org/10.1016/j.neucom.2012.06.017

CNNs to be stable were obtained by constructing Lyapunov functions [17–24], and these conditions generally made the equilibrium point globally asymptotically stable. However, some authors presented conditions which made equilibrium points locally stable, in which case there were generally multiple equilibrium points [13,14,25–27].

In previous papers, the conditions for the stability of CNNs are conservative. For example, bias vectors were computed from one of the memory patterns in [7–9], and the relations between cloning templates were stronger and tighter in [13–16]. Therefore, the aim of this paper is to relax the conservative relationships among the parameters of CNNs. In order to obtain our results, we choose the initial states of the CNNs to be zero. Our results expand the range of admissible parameter values of CNNs. When the inputs and outputs of a CNN are given, the values of the parameters can be obtained by our methods. In fact, if we obtain appropriate parameter values for a CNN, we can realize associative memories. Therefore, from the above theoretical analysis, we give design procedures for associative memories based on CNNs.

The remaining parts of this paper are organized as follows. In Section 2, a class of CNNs is given. In Section 3, the relationships among the parameters of CNNs are given by a theorem and some corollaries; these results give methods for obtaining the values of the parameters A, D and V of a CNN. In Section 4, two design procedures for associative memories and a flow chart showing how to obtain the parameters of a CNN are given. In Section 5, an example is given to verify the theoretical results and design procedures. Some conclusions are finally drawn in Section 6.

2. Preliminaries

Consider a class of cellular neural networks defined by the following differential equation:

$$\dot{y}_{ij}(t) = -c_{ij}\,y_{ij}(t) + \sum_{k=k_1(i,r)}^{k_2(i,r)}\ \sum_{l=l_1(j,r)}^{l_2(j,r)} a_{kl}\, g_{i+k,j+l}(y(t)) + \eta_{ij},
\qquad
\eta_{ij} = \sum_{k=k_1(i,r)}^{k_2(i,r)}\ \sum_{l=l_1(j,r)}^{l_2(j,r)} d_{kl}\, u_{kl} + v_{ij} \tag{1}$$

where $y_{ij}(t)\in\mathbb{R}$ denotes the state, $c_{ij}$ is a positive parameter, $r$ is a positive integer denoting the neighborhood radius, $A=(a_{kl})_{(2r+1)\times(2r+1)}\neq 0$ is the intra-neuron connection weight matrix (cloning template), $D=(d_{kl})_{(2r+1)\times(2r+1)}$ is the input cloning template, $u_{kl}$ is the input, $v_{ij}$ is the bias, $k_1(i,r)=\max\{1-i,-r\}$, $k_2(i,r)=\min\{N-i,r\}$, $l_1(j,r)=\max\{1-j,-r\}$, $l_2(j,r)=\min\{M-j,r\}$, and $g(\cdot)$ is the activation function defined by

$$g(y) = \big(|y+1| - |y-1|\big)/2.$$

$g_{i+k,j+l}(y(t))$ is the output of a cell of the CNN. The outputs of a CNN can be the memory patterns of an associative memory.

Let $r=1$ and $n=NM$. If the system (1) has N rows and M columns, then it can be put in vector form as

$$\dot{x} = -Cx + Af(x) + DU + V \tag{2}$$

where $x=(x_1,x_2,\ldots,x_n)^T=(y_{11},y_{12},\ldots,y_{1M},\ldots,y_{NM})^T$, the coefficient matrices A and D are obtained from the templates A and D, $C=\mathrm{diag}(c_1,\ldots,c_n)$, the input vector $U=(u_1,\ldots,u_n)^T$, $V=(v_1,\ldots,v_n)^T$ and $f(x)=(g(y_1),\ldots,g(y_n))^T$. The kth cell in Eq. (2) is denoted by $O_k$ ($k=iN+j$, where $1\le i\le N$, $1\le j\le M$; i denotes the ith row and j the jth column of the CNN). The expressions of the templates A and D are in Appendix A. The matrices A and D are rather complex: they are not only sparse, but the positions of their elements also follow certain rules. Therefore, these two matrices cannot be solved for by previous methods, such as the method in [28]. In order to obtain the values of these two matrices, we need to transform them into two vectors; the methods are shown in Section 3.2.

Let $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_n)^T\in U^n=\{x\in\mathbb{R}^n \mid x_i=1 \text{ or } x_i=-1,\ i=1,2,\ldots,n\}$ and $C(\alpha)=\{x\in\mathbb{R}^n \mid x_i\alpha_i>1,\ i=1,2,\ldots,n\}$. Then, for $x\in C(\alpha)$, Eq. (2) can be rewritten as

$$\dot{x} = -Cx + A\alpha + DU + V \tag{3}$$

If b is an equilibrium point of (3), then we have

$$b = C^{-1}(A\alpha + DU + V) \in C(\alpha) \tag{4}$$

Lemma 1 ([7]). Suppose $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_n)^T\in U^n$. If $b=(b_1,b_2,\ldots,b_n)^T=C^{-1}(A\alpha+DU+V)\in C(\alpha)$, then b is an asymptotically stable equilibrium point of (2).

Proof. Eq. (3) has a unique equilibrium point at $x_e=C^{-1}(A\alpha+DU+V)$, and $x_e=b\in C(\alpha)$ by assumption. This equilibrium is also asymptotically stable, since Eq. (3) has all its n eigenvalues at $-c_i$, $i=1,2,\ldots,n$.

3. Main result

In this section, we first give some results on the stability of CNNs. Then, on the basis of these results, some methods are obtained for realizing associative memories based on CNNs.

3.1. Stability of CNNs

First, we give a theorem and some corollaries for achieving associative memories. Eq. (3) can be rewritten componentwise as

$$\dot{x}_i = -c_i x_i + \sum_{j=1}^{n} a_{ij}\alpha_j + \sum_{j=1}^{n} d_{ij}u_j + v_i, \qquad i=1,2,\ldots,n \tag{5}$$

Theorem 1. In Eq. (5), let $x_i(0)=0$, $i=1,2,\ldots,n$.

(i) If $\sum_{j=1}^{n} a_{ij}\alpha_j + \sum_{j=1}^{n} d_{ij}u_j + v_i > c_i$, then Eq. (5) converges to a positive stable equilibrium point, whose value is bigger than 1.
(ii) If $\sum_{j=1}^{n} a_{ij}\alpha_j + \sum_{j=1}^{n} d_{ij}u_j + v_i < -c_i$, then Eq. (5) converges to a negative stable equilibrium point, whose value is less than −1.

The proof of Theorem 1 is given in Appendix B1.

By the above theorem, we can obtain the following corollaries immediately.

Corollary 1. In Eq. (5), let $x_i(0)=0$ and $a_{ii}\ge c_i$, $i=1,2,\ldots,n$.

(i) If $\sum_{j=1,j\ne i}^{n} a_{ij}\alpha_j + \sum_{j=1}^{n} d_{ij}u_j + v_i > 0$, then Eq. (5) converges to a positive stable equilibrium point, whose value is bigger than 1.
(ii) If $\sum_{j=1,j\ne i}^{n} a_{ij}\alpha_j + \sum_{j=1}^{n} d_{ij}u_j + v_i < 0$, then Eq. (5) converges to a negative stable equilibrium point, whose value is less than −1.

The proof of Corollary 1 is given in Appendix B2.

Corollary 2. In Eq. (5), let $x_i(0)=0$ and $a_{ii} < c_i$.

(i) If $\sum_{j=1}^{n} a_{ij}\alpha_j + \sum_{j=1}^{n} d_{ij}u_j + v_i > c_i$, then Eq. (5) converges to a positive stable equilibrium point, whose value is bigger than 1.
(ii) If $\sum_{j=1}^{n} a_{ij}\alpha_j + \sum_{j=1}^{n} d_{ij}u_j + v_i < -c_i$, then Eq. (5) converges to a negative stable equilibrium point, whose value is less than −1.

The proof of Corollary 2 is the same as that of Theorem 1.

If we choose $v_i=0$ in Theorem 1, then we obtain the following corollary:

Corollary 3. In Eq. (5), let $x_i(0)=0$ and $v_i=0$.

(i) If $\sum_{j=1}^{n} a_{ij}\alpha_j + \sum_{j=1}^{n} d_{ij}u_j > c_i$, then Eq. (5) converges to a positive equilibrium point, whose value is bigger than 1.
(ii) If $\sum_{j=1}^{n} a_{ij}\alpha_j + \sum_{j=1}^{n} d_{ij}u_j < -c_i$, then Eq. (5) converges to a negative equilibrium point, whose value is less than −1.

If we choose $v_i=0$ in Corollary 1, then we obtain the following corollary:

Corollary 4. In Eq. (5), let $x_i(0)=0$, $v_i=0$ and $a_{ii}\ge c_i$.

(i) If $\sum_{j=1,j\ne i}^{n} a_{ij}\alpha_j + \sum_{j=1}^{n} d_{ij}u_j > 0$, then Eq. (5) converges to a positive stable equilibrium point, whose value is bigger than 1.
(ii) If $\sum_{j=1,j\ne i}^{n} a_{ij}\alpha_j + \sum_{j=1}^{n} d_{ij}u_j < 0$, then Eq. (5) converges to a negative stable equilibrium point, whose value is less than −1.

Remark 1. In Theorem 1 and its corollaries, although the initial states of the CNN have to be set to zero, our methods do not strictly limit the relations between the cloning templates of the CNN. For example, $\sum_{j=1}^{n} |a_{ij}+b_{ij}| < c_i < 1$ and $\sum_{i=-1}^{1}\sum_{j=-1}^{1} |a_{ij}+b_{ij}| < 1$ are required in [15,16]; our results do not impose these limitations.

Theorem 1 and its corollaries will be used in the following to realize associative memories based on CNNs. In order to realize associative memories, the inputs of the CNN and the memory patterns (outputs of the CNN) should be given. Then, according to the above results, we train the CNN on the basis of the inputs and outputs, and the parameters (weight values) of the CNN can be obtained. Actually, the above theorem and corollaries give methods and constraints for obtaining the parameters of a CNN. For example, when the initial states of a CNN are zero, the relationships among the parameters can be obtained by Theorem 1. If the values of some parameters of a CNN are fixed, the regions of the values of the other parameters can be obtained. If the inputs, outputs and parameters A, C and D are fixed, the region of the bias V can be obtained. The main difference between Theorem 1 and Corollary 3 is whether all biases $v_i$ ($i=1,2,\ldots,n$) are equal to 0; similarly, the main difference between Corollary 1 and Corollary 4 is also whether all biases $v_i$ are equal to 0. The main difference between Theorem 1 and Corollary 1 is whether $a_{ii}\ge c_i$ must hold; similarly, the main difference between Corollary 3 and Corollary 4 is also whether $a_{ii}\ge c_i$ must hold.

In Appendix C, we give a number of notations. These notations will be used in the matrix equations of the next subsection.

3.2. Associative memories

Next, we discuss associative memories based on CNNs by use of the above results and the notations in Appendix C; the explanation of the new notations in this subsection can be found in Appendix C.

First, we discuss how to obtain the cloning templates A and D of a CNN. We can make use of the inputs and outputs of the cells in set R to obtain A and D. Because all values of $v_i$ ($i=1,2,\ldots,n$) of the cells in set R are equal to zero, we choose all $v_i$ of all cells in set R equal to zero. Then, by Corollary 3 or 4, the relationship among $\sum_{j=1}^{n} a_{ij}\alpha_j$, $\sum_{j=1}^{n} d_{ij}u_j$ and $c_i$ ($i=1,\ldots,n$) can be established. Then, we use Theorem 1 or Corollary 1 to calculate the regions of $v_i$ ($i=1,2,\ldots,n$) of the cells in set P or Q.

Therefore, in order to obtain the parameters A, D, C and V of a CNN, we divide the question into two cases, on the basis of Corollary 3 and Corollary 4, respectively.

(i) Corollary 3 is used to obtain the relationship among the parameters A, D and C; furthermore, the values of A and D can be obtained. Let $\lambda_i > \max_{1\le i\le n}\{c_i\}$. We choose

$$\hat{F}' = 0.5\hat{D} \tag{9}$$

and

$$\hat{A}G + \hat{D}U = \hat{\Lambda}G \tag{10}$$

It is obvious that Eqs. (9) and (10) satisfy Corollary 3. From Eq. (9), the sign of $\sum_{j=1}^{n} a_{ij}\alpha_j$ in a cell $O_i$ and the sign of the output of the cell are the same. From Eq. (10), the sign of $\sum_{j=1}^{n} a_{ij}\alpha_j + \sum_{j=1}^{n} d_{ij}u_j$ in a cell $O_i$ of a CNN and the sign of the output of the cell are the same. By Corollary 3, we know that the sign of $\sum_{j=1}^{n} a_{ij}\alpha_j + \sum_{j=1}^{n} d_{ij}u_j$ in a cell of a CNN determines the sign of the output of the cell.

(ii) Corollary 4 is used to achieve associative memories. Let $\lambda_i > 0$. We choose

$$\hat{F}'' = 0.5\hat{D} \tag{11}$$

and

$$\tilde{A}G + \hat{D}U = \hat{\Lambda}G \tag{12}$$

It is obvious that Eqs. (11) and (12) satisfy Corollary 4.

Eqs. (9)–(12) give the methods for computing the parameters A and D. However, we find that it is difficult to obtain $\hat{A}$ and $\hat{D}$ directly through Eqs. (9)–(12). For example, in Eq. (10), the values of $\hat{A}G$, $U$ and $\hat{\Lambda}G$ are known while the value of $\hat{D}$ is unknown, and there is no direct method to obtain $\hat{D}$. So Eqs. (9)–(12) need to be transformed.

Eq. (9) can be transformed as

$$\hat{\Omega}' L_A = 0.5\hat{D} \tag{13}$$

Therefore,

$$L_A = 0.5\,\mathrm{pinv}(\hat{\Omega}')\,\hat{D} \tag{14}$$

where $\mathrm{pinv}(\cdot)$ denotes the pseudo-inverse of a matrix.

Eq. (11) can be transformed as

$$\hat{\Omega}'' L_A = 0.5\hat{D} \tag{15}$$

Therefore,

$$L_A = 0.5\,\mathrm{pinv}(\hat{\Omega}'')\,\hat{D} \tag{16}$$

Eq. (10) can be transformed as

$$\hat{X} L_D = \hat{D} - \hat{F}' \tag{17}$$

Therefore,

$$L_D = \mathrm{pinv}(\hat{X})\,(\hat{D} - \hat{\Omega}' L_A) \tag{18}$$

Eq. (12) can be transformed as

$$\hat{X} L_D = \hat{D} - \hat{F}'' \tag{19}$$

Therefore,

$$L_D = \mathrm{pinv}(\hat{X})\,(\hat{D} - \hat{\Omega}'' L_A) \tag{20}$$
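The pseudo-inverse step behind Eqs. (14)–(20) can be sketched as follows. The unknown 9-entry template vector (here called `L`, standing in for $L_A$ or $L_D$) is recovered in the least-squares sense from an overdetermined linear system $M L = b$ assembled from the set-R cells; `M` and `b` below are made-up stand-ins for the paper's $\hat{\Omega}$/$\hat{X}$ matrices and right-hand sides, not values from the example.

```python
import numpy as np

# Hedged sketch of the pinv(.) solve in Eqs. (14)-(20).
rng = np.random.default_rng(0)
L_true = rng.normal(size=9)      # hypothetical 3x3 template, flattened
M = rng.normal(size=(30, 9))     # 30 constraint rows built from set-R cells
b = M @ L_true                   # right-hand side consistent with L_true

# Least-squares recovery via the Moore-Penrose pseudo-inverse, as in Eq. (14).
L = np.linalg.pinv(M) @ b
print(np.allclose(L, L_true))    # True here, since M has full column rank
```

As Remark 2 notes, when the system matrix is rank-deficient the pseudo-inverse only yields an approximate (minimum-norm least-squares) template, which is exactly the behavior of `np.linalg.pinv`.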
After transforming Eqs. (9)–(12), it is easy to obtain the parameters A and D from $L_A$ and $L_D$.

Remark 2. When the matrices $\hat{\Omega}'$, $\hat{\Omega}''$ or $\hat{X}$ are not invertible, the values of $L_A$ or $L_D$ are approximate values.

Next, we discuss how to obtain the bias $v_i$ for the cells in sets P and Q. In set R, the value of $v_i$ of a cell is equal to zero. However, in sets P and Q, the values of $v_i$ ($O_i\in P,Q$) are not equal to zero, so we need to calculate them. The parameters A and D can be obtained by Corollary 3 or 4, and the ranges of $v_i$ ($i=1,\ldots,n$) can then be obtained by Theorem 1 or Corollary 1. From Theorem 1, we know that when $v_i > c_i - \sum_{j=1}^{n} a_{ij}\alpha_j - \sum_{j=1}^{n} d_{ij}u_j$, Eq. (5) converges to a positive stable equilibrium point, which means that the output of cell $O_i$ is positive. Because the sign of the outputs of all cells in set P is positive, we can choose $\hat{v}^{+} > \max_{1\le i\le n}\{c_i - \sum_{j=1}^{n} a_{ij}\alpha_j - \sum_{j=1}^{n} d_{ij}u_j\}$ as the bias of all cells in set P. Similarly, $\hat{v}^{-} < \min_{1\le i\le n}\{-c_i - \sum_{j=1}^{n} a_{ij}\alpha_j - \sum_{j=1}^{n} d_{ij}u_j\}$ is the bias of all cells in set Q. Then, the regions of $\hat{v}^{+}$ and $\hat{v}^{-}$ can be obtained by Theorem 1 or Corollary 1. We can consider the regions of $\hat{v}^{+}$ and $\hat{v}^{-}$ from two aspects, as follows.

(i) If Theorem 1 is used to realize associative memories based on a CNN, we obtain the following result:
If $\alpha_i=1$ in Eq. (5), we get $v_i(l) > c_i - \sum_{j=1}^{n} a_{ij}\alpha_j^l - \sum_{j=1}^{n} d_{ij}u_j^l = x_i'(l)$.
If $\alpha_i=-1$ in Eq. (5), we get $v_i(l) < -c_i - \sum_{j=1}^{n} a_{ij}\alpha_j^l - \sum_{j=1}^{n} d_{ij}u_j^l = x_i'(l)$.
Therefore, we choose

$$\hat{v}^{+} \ge \max_{1\le l\le m,\ O_i\in P} |x_i'(l)| \tag{21}$$

as the bias of all cells in set P, and

$$\hat{v}^{-} \le -\max_{1\le l\le m,\ O_i\in Q} |x_i'(l)| \tag{22}$$

as the bias of all cells in set Q.

(ii) If Corollary 1 is used to realize associative memories based on a CNN, we obtain the following results:
If $\alpha_i=1$ in Eq. (5), we get $v_i(l) > -\sum_{j=1}^{n} a_{ij}\alpha_j^l - \sum_{j=1}^{n} d_{ij}u_j^l = x_i''(l)$.
If $\alpha_i=-1$ in Eq. (5), we get $v_i(l) < -\sum_{j=1}^{n} a_{ij}\alpha_j^l - \sum_{j=1}^{n} d_{ij}u_j^l = x_i''(l)$.
Therefore, we choose

$$\hat{v}^{+} \ge \max_{1\le l\le m,\ O_i\in P} |x_i''(l)| \tag{23}$$

as the bias of all cells in set P, and

$$\hat{v}^{-} \le -\max_{1\le l\le m,\ O_i\in Q} |x_i''(l)| \tag{24}$$

as the bias of all cells in set Q.

Remark 3. The cloning template D is computed from the cells in set R. Therefore, the values of D are not affected by the cells in sets P and Q, which makes the values of D more accurate than those in [15]. The bias vector is computed from the cells in sets P and Q, which makes the outputs of the cells in sets P and Q always correct in the process of associative memory recall.

Remark 4. In [15], the authors proved that associative memories based on CNNs have high capacity (high capacity means that a large number of memory patterns can be stored by a CNN). The model of this paper is similar to that of [15]; therefore, we do not prove the high-capacity property here.

4. Design procedure of a CNN

In this section, two design procedures for the parameters of a CNN are given based on the above results.

(i) If Theorem 1 and Corollary 3 are chosen to achieve associative memories, we give the following design procedure for the parameters of a CNN. We use Corollary 3 to obtain the parameters A and D from set R, and use Theorem 1 to obtain the biases $v_i$ ($i=1,\ldots,n$) for the cells in sets P and Q.

Step 1. Denote a matrix $G=(\alpha^1,\alpha^2,\ldots,\alpha^m)$ as the memory patterns of the associative memory, where $\alpha^i$ is a set of outputs of all cells in the CNN, and m is the number of patterns. Denote an input matrix $U=(U^1,U^2,\ldots,U^m)$ corresponding to the matrix of memory patterns G.
Step 2. Divide all cells of the CNN into three sets. If all outputs of a cell in all memory patterns are 1, the cell is classified into set P. If all outputs of a cell in all memory patterns are −1, the cell is classified into set Q. If the outputs of a cell over the memory patterns are both 1 and −1, the cell is classified into set R.
Step 3. Choose the biases $v_i$ ($O_i\in R$) equal to zero.
Step 4. Determine all state constants $c_{ij}$, $1\le i\le N$, $1\le j\le M$. Obtain the coefficient matrix C.
Step 5. Determine the matrix $\Lambda$ such that $\lambda_i > \max_{1\le i\le n}\{c_i\}$, and obtain the matrix $\Lambda'$.
Step 6. Compute the cloning template A by use of Eq. (14) and the relationship between the inputs and outputs of the cells in set R. Obtain the coefficient matrix A.
Step 7. Compute the cloning template D by use of Eq. (18) and the relationship between the inputs and outputs of the cells in set R. Obtain the coefficient matrix D.
Step 8. Compute $x_i'(l)$ ($O_i\in P$, $1\le l\le m$) in set P. Choose $\hat{v}^{+} \ge \max_{1\le l\le m,\ O_i\in P} |x_i'(l)|$ in terms of (21); the biases $v_i$ ($O_i\in P$) are equal to $\hat{v}^{+}$.
Step 9. Compute $x_i'(l)$ ($O_i\in Q$, $1\le l\le m$) in set Q. Choose $\hat{v}^{-} \le -\max_{1\le l\le m,\ O_i\in Q} |x_i'(l)|$ in terms of (22); the biases $v_i$ ($O_i\in Q$) are equal to $\hat{v}^{-}$.
Step 10. Synthesize the CNN with the connection weight matrices A, C, D and the bias vector V.

From these ten steps, we obtain the flow chart in Fig. 1.

(ii) If Corollary 1 and Corollary 4 are chosen to realize associative memories, we give the following design procedure for a CNN:
Steps 1–4 of (ii) are the same as those of (i).
Step 5. Determine the matrix $\Lambda$ such that $\lambda_i > 0$, and obtain the matrix $\Lambda'$.
Step 6. Determine $a_{00}$ such that $a_{00} \ge \max_{1\le i\le n}\{c_i\}$ holds.
Step 7. Compute the cloning template A from (16) in set R. Obtain the coefficient matrix A.
Step 8. Compute the cloning template D from (20) in set R. Obtain the coefficient matrix D.
Step 9. Compute $x_i''(l)$ ($O_i\in P$, $1\le l\le m$) in set P. Choose $\hat{v}^{+} > \max_{1\le l\le m,\ O_i\in P} |x_i''(l)|$ in terms of (23); the bias $v_i$ of all cells in set P is equal to $\hat{v}^{+}$.
Step 10. Compute $x_i''(l)$ ($O_i\in Q$, $1\le l\le m$) in set Q. Choose $\hat{v}^{-} < -\max_{1\le l\le m,\ O_i\in Q} |x_i''(l)|$ in terms of (24); the bias $v_i$ of all cells in set Q is equal to $\hat{v}^{-}$.
Step 11. Synthesize the CNN with the connection weight matrices A, C, D and the bias vector V.

5. Numerical examples

In this section, we give some numerical simulations to verify the theoretical results of this paper. Consider the same example introduced in [16]. The inputs and the output patterns of a CNN are represented by two pairs of 5 × 5 pixel images shown in Fig. 2 (black pixel = 1, white pixel = −1), where the inputs of the CNN compose the word "MO" in Fig. 2(a), and the patterns to be memorized constitute the word "LS" in Fig. 2(b).

We design all parameters of a CNN to realize associative memories for the patterns in Fig. 2 by use of design procedure (i) in Section 4.

Step 1. In terms of Fig. 2, we obtain the memory patterns $G=(\alpha^1,\alpha^2)$ and the corresponding input matrix $U=(U^1,U^2)$, whose ±1 entries are read off the pixel images.
Step 2. From the memory patterns, all cells of the CNN are divided into three sets: $P=\{O_2,O_7,O_{12},O_{22},O_{23},O_{24}\}$, $Q=\{O_1,O_5,O_6,O_8,O_9,O_{10},O_{11},O_{15},O_{16},O_{18},O_{20},O_{21},O_{25}\}$ and $R=\{O_3,O_4,O_{13},O_{14},O_{17},O_{19}\}$.
Step 3. Choose the biases of all cells in set R equal to zero, namely, $v_3=v_4=v_{13}=v_{14}=v_{17}=v_{19}=0$.
Step 4. Let $c_{ij}=1$, $1\le i\le N$, $1\le j\le M$; then we obtain $C=\mathrm{diag}(1,1,\ldots,1)_{n\times n}$.
Step 5. Let $\Lambda=\mathrm{diag}(4,4,\ldots,4)_{n\times n}$; then we have $\Lambda'=\mathrm{diag}(4,4,\ldots,4)_{nm\times nm}$.
Step 6. From Eq. (14), we get $L_A=(0,0,0,0,2,0,0,0,0)^T$. Then we have $A=\mathrm{diag}(2)_{n\times n}$.
Step 7. From Eq. (18), we get $L_D=(1.5682, 2.9394, 0.6136, 1.6515, 0.6136, 0.1212, 0.3864, 0.3258, 0.25)^T$. Then we can obtain D.
Step 8. Let $\hat{v}^{+}=20$ in set P. Therefore, $v_2=v_7=v_{12}=v_{22}=v_{23}=v_{24}=20$.
Step 9. Let $\hat{v}^{-}=-20$ in set Q. Therefore, $v_1=v_5=v_6=v_8=v_9=v_{10}=v_{11}=v_{15}=v_{16}=v_{18}=v_{20}=v_{21}=v_{25}=-20$.
Step 10. Synthesize the CNN with A, C, D and V.

Note that $a_{ii}>c_{ij}$ in the example, whereas in the previous paper [15] the condition $\sum_{j=1}^{n} |a_{ij}+b_{ij}| < c_{ij}$ must be satisfied. Therefore, in this paper, although the initial states of the CNN have to be set to zero, we remove many limitations on the relationships among the cloning templates.

From the above ten steps, we obtain a CNN which realizes the associative memory from "MO" to "LS". When the inputs of the CNN are "O", the time response curves of all cells in the CNN are shown in Fig. 3, where we find that the states of all cells become stable after some time. When the states of all cells are stable, the value of the equilibrium point is

x* = (22.5671, 20.7718, 4.1968, 3.9240, 19.7945, 20.2415, 18.5446, 26.3927, 29.7259, 27.6048, 20.2415, 19.7718, 4.7119, 3.2877, 24.4686, 20.2415, 3.7271, 18.1355, 4.7119, 23.6959, 20.1657, 20.3248, 26.3700, 28.2790, 21.1051)^T.

Therefore, all outputs of the CNN corresponding to the equilibrium point form a ±1 vector that coincides with the pattern "S".

When the inputs of the CNN are "M", the time response curves of all cells in the CNN are shown in Fig. 4. Then the value of the equilibrium point is

x* = (20.9309, 23.3777, 4.0755, 3.9543, 22.3247, 23.5141, 26.4382, 16.0296, 18.7113, 21.1657,

Fig. 1. Flow chart showing how to obtain the parameters of a CNN by use of Theorem 1 and Corollary 3.

Fig. 2. (a) Inputs of a CNN and (b) outputs of the CNN, i.e., the memory patterns.
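The recall dynamics described above can be sketched end to end: integrate the vector CNN of Eq. (2) from the required zero initial state and check that the output $f(x)$ settles on a ±1 pattern. All matrices below are small toy values chosen to satisfy $a_{ii}>c_i$, as in the example; they are not the templates of Section 5.

```python
import numpy as np

# Hedged sketch of the recall dynamics of Eq. (2) on 4 toy cells.
n = 4
C = np.eye(n)
A = 2.0 * np.eye(n)                       # a_ii = 2 > c_i = 1, cf. the example
D = 0.5 * np.eye(n)
U = np.array([ 1.0, -1.0,  1.0, -1.0])    # toy input pattern
V = np.array([ 5.0, -5.0,  5.0, -5.0])    # biases drive each cell's sign

g = lambda y: (np.abs(y + 1) - np.abs(y - 1)) / 2   # activation of Eq. (1)

x = np.zeros(n)                           # zero initial state, as required
dt = 1e-2
for _ in range(5000):                     # forward Euler up to t = 50
    x += dt * (-C @ x + A @ g(x) + D @ U + V)

print(g(x))        # outputs saturate to the stored +-1 pattern [ 1 -1  1 -1]
```

Each cell equilibrates at $x_i^* = \pm(a_{ii} + 0.5 + 5) = \pm 7.5$, well outside $[-1,1]$, so the outputs $g(x_i^*)$ are exactly ±1, matching the saturated equilibrium values (magnitudes around 20) reported in the example.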
24.9837, 19.0749, 3.3634, 3.2877, 27.6048, 23.7565, 3.4847, 26.1049, 3.8483, 24.4686, 23.8322, 24.9004, 24.7337, 23.7489, 24.4080)^T.

Therefore, all outputs of the CNN corresponding to this equilibrium point form a ±1 vector that coincides with the pattern "L".

Therefore, the simulation results show that the CNN with the inputs ("MO") is able to recall the output patterns ("LS") effectively and efficiently.

Remark 5. We used the values of the example in [16] to design a CNN. When the inputs of that CNN are "MO", we find that its outputs are not exactly the same as "LS". Therefore, our design methods are better than the methods of [16].

In reality, noise is inevitable. Therefore, we examine the sensitivity of the above-mentioned CNN to noisy data. When the inputs of the CNN in Fig. 5(a) are contaminated by noise, the memory patterns in Fig. 5(b) can still be obtained. Thus, the associative memories based on the CNN have a strong ability of noise immunity.

Fig. 3. Time response curves of all cells of the CNN when the inputs are "O".

Fig. 4. Time response curves of all cells of the CNN when the inputs are "M".

Fig. 5. When the inputs of the CNN are contaminated by noise, the memory patterns can still be obtained. (a) Inputs of the CNN and (b) outputs of the CNN, i.e., the memory patterns.

6. Conclusions

In this paper, some new methods for associative memories based on CNNs are given. A theorem and some corollaries are obtained, where the initial states are zero. From these results, we give constraint conditions among the parameters of a CNN. Compared with the theories in [15,16], our theories relax the conservatism of the relationships among the parameters of CNNs. We give methods for obtaining the parameters A, D and V of a CNN. Then, two design procedures are obtained by use of the above theories and analysis, where a flow chart showing how to obtain the parameters of a CNN is given. Finally, an example is given to show that our methods are effective and useful. The example also shows that the associative memories obtained by our methods have a strong ability of noise immunity.

Acknowledgment

This work was supported in part by the National Natural Science Foundation of China under Grant 60973114, Grant 61170249 and Grant 61003247; in part by the Natural Science Foundation project of CQCSTC under Grant 2009BA2024 and Grant 2010BB2284; in part by the State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, under Grant 2007DA10512711206; in part by the Teaching and Research Program of Chongqing Education Committee (KJ110401); in part by the First Batch of Supporting Program for University Excellent Talents in Chongqing; and in part by the Research Project of Chongqing University of Science and Technology under Grant CK2011Z17.
198 Q. Han et al. / Neurocomputing 97 (2012) 192–200

Appendix A. Expression of A,D, A and D Appendix B2. Proof of Corollary 1

Denote the expression of template A as follows: (i) When ai ¼ 1, Eq. (6) can be rewritten as
2 3 0 1
ar,r . . . ar,0 . . . ar,r X
n X
n
6 ^
þ
xi ¼ aii =ci þ @ aij aj þ dij uj þ vi A=ci ð7Þ
6 & ^ & ^ 7 7
6 7 j ¼ 1,j a i j¼1
6
A ¼ 6 a0,r . . . a00 . . . a0,r 7 :
7
6 ^ & ^ & ^ 7 Pn Pn
4 5 When aii =ci Z 1, j ¼ 1,j a i aij aj þ j ¼ 1 dij uj þ vi 4 0 and
ar,r . . . ar,0 . . . ar,r þ
xi ð0Þ ¼ 0, we have xi 41.
ð2r þ 1Þð2r þ 1Þ
(ii) When ai ¼ 1, Eq. (6) can be rewritten as
The definition of matrix D is similar to A. 0 1
The matrix A ¼ ðaij Þnn , defined by (2), composed of template X n X n

xi ¼ aii =ci þ @ aij aj þ dij uj þ vi A=ci ð8Þ
has the form
j ¼ 1,j a i j¼1
2 3
A1 A2 0 0 ::: 0 0
6A Pn Pn
6 3 A1 A2 0 ::: 0 0 77 If aii =ci Z 1, j ¼ 1,j a i aij aj þ j ¼ 1 dij uj þvi o 0 and xi ð0Þ ¼ 0,
6 7
6 0 A3 A1 A2 ::: 0 0 7 we have xi o 1.

6 7
6 0 0 A3 A1 ::: 0 0 7
6 7 ,
6 7
6 ^ ^ ^ ^ & ^ ^ 7
6 7 Appendix C. Notations
6 0 0 0 0 0 A1 A2 7
4 5
0 0 0 0 0 A3 A1 Suppose that there exists a set of memory patterns, and these
nn
2 3
a00 a01 0 ... 0 0 memory patterns can be written as a matrix G ¼ ða1 , a2 , . . . , am Þ,
6a 0 7 where ai ¼ ðai1 , ai2 , . . . , ain ÞT A Un and ai is a set of outputs of all cells
6 0,1 a00 a01 . . . 0 7
6 7 of a CNN. Therefore, G has n rows and m columns. Choose a set of
6 0 a a . . . 0 0 7
6 0,1 00 7
A1 ¼ 6 7 input patterns, and these input patterns can be written as a
6 ^ ^ ^ & ^ ^ 7
6 7 matrix U ¼ ðU 1 ,U 2 , . . . ,U i , . . . ,U m Þ, where U i ¼ ðui1 ,ui2 , . . . ,uij ,
6 0 0 0 ... a00 a01 7
4 5 . . . ,uin ÞT and uij denotes an input data of the jth cell of a CNN in
0 0 0 . . . a0,1 a00 the ith memory pattern.
MM

2 3 Then, we divide all cells of a CNN into three small sets on the
a10 a11 0 ... 0 0 basis of the memory patterns G. When all outputs of a cell of a
6a a10 a11 ... 0 0 7 CNN in all memory patterns are 1, we classify the cell as set
6 1,1 7
6 7
6 0 a1,1 a10 ... 0 0 7 P ¼ fOa ,Ob , . . .g, where Oa denotes ath cell in Eq. (2). When all
6 7
A2 ¼ 6 7 outputs of a cell of the CNN in all memory patterns are  1, we
6 ^ ^ ^ & ^ ^ 7
6 7 classify the cell as set Q ¼ fOc ,Od , . . .g. When all outputs of a cell of
6 0 0 0 ... a10 a11 7
4 5 the CNN in different ai are different, namely, the outputs can be
0 0 0 ... a1,1 a10
2 MM
3 1 and  1, we classify the cell as set R ¼ fOe ,Of , . . .g. We assume
a1,0 a1,1 0 ... 0 0 that the number of elements in set P is k1 , the number of elements
6a a1,0 a1,1 ... 0 0 7 in set Q is k2 , and the number of elements in set R is k3 .
6 1,1 7
6 7
6 0 a1,1 a1,0 ... 0 0 7 Choose l A f1,2, . . . ,mg, q A f1,2, . . . ,ng and k A fR1 ,R2 , . . . ,Rk3 g.
6 7
and A3 ¼ 6 7 Next, we introduce some symbols as follows:
6 ^ ^ ^ & ^ ^ 7
6 7 0 1
6 0 0 0 ... a1,0 a1,1 7 aii 0 . . . 0
4 5
B C
0 0 0 ... a1,1 a1,0 B 0 aii . . . 0 C
MM Let LA ¼ B B ^
C ,
@ ^ & ^ C A
The definition of matrix D ¼ ðdij Þnn is similar to A.
0 0 . . . aii
nn

$L=\operatorname{diag}(l_1,l_2,\ldots,l_n)_{n\times n}$, where $l_i>0$,

$\tilde A=A-L_A$, $\hat D=(D_{R_1},D_{R_2},\ldots,D_{R_{k_3}})^T$, $\hat A=(A_{R_1},A_{R_2},\ldots,A_{R_{k_3}})^T$ and $\hat L=(L_{R_1},L_{R_2},\ldots,L_{R_{k_3}})^T$, where $Y_{R_i}$ denotes the $R_i$th row of the matrix $Y$,

$L_D=(d_{-1,-1},d_{-1,0},d_{-1,1},d_{0,-1},d_{0,0},d_{0,1},d_{1,-1},d_{1,0},d_{1,1})$,

$L_A=(a_{-1,-1},a_{-1,0},a_{-1,1},a_{0,-1},a_{0,0},a_{0,1},a_{1,-1},a_{1,0},a_{1,1})$,

$L_0=\operatorname{diag}(L,L,\ldots,L)_{(NMm)\times(NMm)}$.

Appendix B1. Proof of Theorem 1

In Eq. (5) there exists a unique equilibrium point

$$x_i^*=\Bigl(\sum_{j=1}^{n}a_{ij}\alpha_j+\sum_{j=1}^{n}d_{ij}u_j+v_i\Bigr)\Big/c_i. \qquad(6)$$

(i) If $\sum_{j=1}^{n}a_{ij}\alpha_j+\sum_{j=1}^{n}d_{ij}u_j+v_i>c_i$ and $x_i(0)=0$, we have $x_i^*>1$ in Eq. (6). Therefore, when $\sum_{j=1}^{n}a_{ij}\alpha_j+\sum_{j=1}^{n}d_{ij}u_j+v_i>c_i$, Eq. (5) converges to a positive stable equilibrium point by Lemma 1, and the value of the positive equilibrium point is greater than $1$.

(ii) If $\sum_{j=1}^{n}a_{ij}\alpha_j+\sum_{j=1}^{n}d_{ij}u_j+v_i<-c_i$ and $x_i(0)=0$, we have $x_i^*<-1$ in Eq. (6). Therefore, when $\sum_{j=1}^{n}a_{ij}\alpha_j+\sum_{j=1}^{n}d_{ij}u_j+v_i<-c_i$, Eq. (5) converges to a negative stable equilibrium point by Lemma 1, and the value of the negative equilibrium point is less than $-1$.
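As a quick numerical sanity check of case (i), one can freeze the neighbour outputs at the pattern values, so the cell reduces to the scalar dynamics $\dot x=-cx+s$ with $s=\sum_j a_{ij}\alpha_j+\sum_j d_{ij}u_j+v_i$, whose equilibrium is $x^*=s/c$ as in Eq. (6). The sketch below uses invented parameter values, not values from the paper:

```python
import numpy as np

# Invented scalar example: with neighbour outputs frozen at the pattern
# values, the cell obeys  dx/dt = -c*x + s, with equilibrium x* = s/c.
c = 1.0
a = np.array([0.5, 2.0, 0.5]); alpha = np.array([1.0, 1.0, 1.0])
d = np.array([0.2, 0.2, 0.2]); u = np.array([1.0, 1.0, 1.0])
v = 0.1
s = float(a @ alpha + d @ u + v)
assert s > c                      # the inequality of case (i) holds

x, dt = 0.0, 1e-3                 # x(0) = 0, forward-Euler integration
for _ in range(20000):
    x += dt * (-c * x + s)

print(x, s / c)                   # x settles near x* = s/c = 3.7 > 1
```

Running this with any parameters satisfying the case-(i) inequality drives the state from $x(0)=0$ to an equilibrium above $1$, in agreement with the proof.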
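The classification of cells into the sets $P$, $Q$ and $R$ described above is mechanical; the following Python sketch illustrates it (the function name and data layout are ours: the memory patterns are the columns of an $n\times m$ array $G$ with $\pm 1$ entries):

```python
import numpy as np

def classify_cells(G):
    """Split the cell indices 0..n-1 into the sets P, Q, R according to
    the memory patterns stored as the columns of the n-by-m matrix G:
    P if the cell's output is +1 in every pattern, Q if it is -1 in
    every pattern, and R if the output differs across patterns."""
    G = np.asarray(G)
    P = [i for i in range(G.shape[0]) if np.all(G[i] == 1)]
    Q = [i for i in range(G.shape[0]) if np.all(G[i] == -1)]
    R = [i for i in range(G.shape[0]) if i not in P and i not in Q]
    return P, Q, R

# Three cells, two memory patterns (columns): cell 0 is always +1,
# cell 1 is always -1, cell 2 changes sign, so k1 = k2 = k3 = 1.
P, Q, R = classify_cells([[1, 1], [-1, -1], [1, -1]])
print(P, Q, R)  # [0] [1] [2]
```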
$G_0=((\alpha^1)^T,(\alpha^2)^T,\ldots,(\alpha^m)^T)^T$,

$$x'_q(l)=c_q-\sum_{j=1}^{n}a_{qj}\alpha_j^l-\sum_{j=1}^{n}d_{qj}u_j^l,$$

Let $F'=((A\alpha^1)^T,(A\alpha^2)^T,\ldots,(A\alpha^m)^T)^T$,

$\hat F'=(F'_{R_1},F'_{R_2},\ldots,F'_{R_{k_3}},F'_{NM+R_1},\ldots,F'_{l\cdot NM+R_{k_3}},\ldots,F'_{m\cdot NM+R_{k_3}})^T$,

$$x''_q(l)=-\sum_{j=1}^{n}a_{qj}\alpha_j^l-\sum_{j=1}^{n}d_{qj}u_j^l,$$

$F''=((\tilde A\alpha^1)^T,(\tilde A\alpha^2)^T,\ldots,(\tilde A\alpha^m)^T)^T$,

$\hat F''=(F''_{R_1},F''_{R_2},\ldots,F''_{R_{k_3}},F''_{NM+R_1},\ldots,F''_{l\cdot NM+R_{k_3}},\ldots,F''_{m\cdot NM+R_{k_3}})^T$,

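Given candidate template matrices, the quantities $x'_q(l)$ and $x''_q(l)$ defined above are straightforward to evaluate. A sketch (with invented example values, not taken from the paper) computes them for one stored pattern:

```python
import numpy as np

def margins(A, D, c, alpha_l, u_l):
    """Return the vectors (x'_q(l))_q and (x''_q(l))_q for one memory
    pattern alpha_l with input u_l, following the definitions above:
    x'_q(l) = c_q - sum_j a_qj*alpha^l_j - sum_j d_qj*u^l_j, and
    x''_q(l) consists of the same sums without the c_q term."""
    s = A @ alpha_l + D @ u_l
    return c - s, -s

# Invented 2-cell example with a single stored pattern.
A = np.array([[2.0, 0.5],
              [0.5, 2.0]])
D = 0.3 * np.eye(2)
c = np.ones(2)
alpha = np.array([1.0, -1.0])
xp, xpp = margins(A, D, c, alpha, alpha)
print(xp, xpp)  # approximately [-0.8, 2.8] and [-1.8, 1.8]
```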
$$X_q^l=\begin{pmatrix}
0 & u^l_{(q-1)M+1} & u^l_{(q-1)M+2}\\
u^l_{(q-1)M+1} & u^l_{(q-1)M+2} & u^l_{(q-1)M+3}\\
u^l_{(q-1)M+2} & u^l_{(q-1)M+3} & u^l_{(q-1)M+4}\\
\vdots & \vdots & \vdots\\
u^l_{qM-2} & u^l_{qM-1} & u^l_{qM}\\
u^l_{qM-1} & u^l_{qM} & 0
\end{pmatrix}_{M\times 3},$$

$$X^l=\begin{pmatrix}
0 & X_1^l & X_2^l\\
X_1^l & X_2^l & X_3^l\\
X_2^l & X_3^l & X_4^l\\
\vdots & \vdots & \vdots\\
X_{N-2}^l & X_{N-1}^l & X_N^l\\
X_{N-1}^l & X_N^l & 0
\end{pmatrix}_{(NM)\times 9},\qquad
X=\begin{pmatrix}X^1\\ X^2\\ \vdots\\ X^m\end{pmatrix}_{(NMm)\times 9},$$

$\hat X=(X_{R_1},X_{R_2},\ldots,X_{R_{k_3}},X_{NM+R_1},\ldots,X_{l\cdot NM+R_j},\ldots,X_{m\cdot NM+R_{k_3}})^T$,

$$O_q^l=\begin{pmatrix}
0 & \alpha^l_{(q-1)M+1} & \alpha^l_{(q-1)M+2}\\
\alpha^l_{(q-1)M+1} & \alpha^l_{(q-1)M+2} & \alpha^l_{(q-1)M+3}\\
\alpha^l_{(q-1)M+2} & \alpha^l_{(q-1)M+3} & \alpha^l_{(q-1)M+4}\\
\vdots & \vdots & \vdots\\
\alpha^l_{qM-2} & \alpha^l_{qM-1} & \alpha^l_{qM}\\
\alpha^l_{qM-1} & \alpha^l_{qM} & 0
\end{pmatrix}_{M\times 3},$$

$$O^l=\begin{pmatrix}
0 & O_1^l & O_2^l\\
O_1^l & O_2^l & O_3^l\\
O_2^l & O_3^l & O_4^l\\
\vdots & \vdots & \vdots\\
O_{N-2}^l & O_{N-1}^l & O_N^l\\
O_{N-1}^l & O_N^l & 0
\end{pmatrix}_{(NM)\times 9},\qquad
O=\begin{pmatrix}O^1\\ O^2\\ \vdots\\ O^m\end{pmatrix}_{(NMm)\times 9},$$

$$LO_q^l=\begin{pmatrix}
0 & 0 & 0 & 0 & \alpha^l_{(q-1)M+1} & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & \alpha^l_{(q-1)M+2} & 0 & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & 0 & \alpha^l_{qM} & 0 & 0 & 0 & 0
\end{pmatrix},$$

$$LO^l=\begin{pmatrix}LO_1^l\\ LO_2^l\\ \vdots\\ LO_N^l\end{pmatrix},\qquad
LO=\begin{pmatrix}LO^1\\ LO^2\\ \vdots\\ LO^m\end{pmatrix},$$

$\bar O=O-LO$,

$\hat O'=(O_{R_1},O_{R_2},\ldots,O_{R_{k_3}},O_{NM+R_1},\ldots,O_{l\cdot NM+R_k},\ldots,O_{m\cdot NM+R_{k_3}})^T$,

$\hat O''=(\bar O_{R_1},\bar O_{R_2},\ldots,\bar O_{R_{k_3}},\bar O_{NM+R_1},\ldots,\bar O_{l\cdot NM+R_k},\ldots,\bar O_{m\cdot NM+R_{k_3}})^T$,

$\bar D=L_0G_0$,

$\hat D=(\bar D_{R_1},\bar D_{R_2},\ldots,\bar D_{R_{k_3}},\bar D_{NM+R_1},\ldots,\bar D_{l\cdot NM+R_k},\ldots,\bar D_{m\cdot NM+R_{k_3}})^T$.

References

[1] L.O. Chua, L. Yang, Cellular neural networks: theory, IEEE Trans. Circuits Syst. 35 (1988) 1257–1272.
[2] L.O. Chua, L. Yang, Cellular neural networks: applications, IEEE Trans. Circuits Syst. 35 (1988) 1273–1290.
[3] K.R. Crounse, L.O. Chua, Methods for image processing and pattern formation in cellular neural networks: a tutorial, IEEE Trans. Circuits Syst. (1995) 583–601.
[4] T. Sziranyi, J. Csicsvari, High-speed character recognition using a dual cellular neural network architecture (CNND), IEEE Trans. Circuits Syst. II 40 (1993) 223–231.
[5] M. Brucoli, L. Carnimeo, G. Grassi, Discrete-time cellular neural networks for associative memories with learning and forgetting capabilities, IEEE Trans. Circuits Syst. I 42 (1995) 396–399.
[6] A. Gionis, P. Indyk, R. Motwani, Similarity search in high dimensions via hashing, in: Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999.
[7] D.R. Liu, A.N. Michel, Cellular neural networks for associative memories, IEEE Trans. Circuits Syst. II 40 (1993) 119–121.
[8] D.R. Liu, A.N. Michel, Sparsely interconnected neural networks for associative memories with applications to cellular neural networks, IEEE Trans. Circuits Syst. II 41 (1994) 295–307.
[9] D.R. Liu, Cloning template design of cellular neural networks for associative memories, IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 44 (1997) 646–650.
[10] M. Brucoli, L. Carnimeo, G. Grassi, Discrete-time cellular neural networks for associative memories with learning and forgetting capabilities, IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 42 (1995) 396–399.
[11] G. Grassi, A new approach to design cellular neural networks for associative memories, IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 44 (1997) 835–838.
[12] G. Grassi, On discrete-time cellular neural networks for associative memories, IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 48 (2001) 107–111.
[13] Z.G. Zeng, D.S. Huang, Z.F. Wang, Memory pattern analysis of cellular neural networks, Phys. Lett. A 342 (2005) 114–128.
[14] Z.G. Zeng, D.S. Huang, Z.F. Wang, Pattern memory analysis based on stability theory of cellular neural networks, Appl. Math. Modelling 32 (2008) 112–121.
[15] Z.G. Zeng, J. Wang, Design and analysis of high-capacity associative memories based on a class of discrete-time recurrent neural networks, IEEE Trans. Syst. Man Cybern., Part B: Cybern. 38 (2008) 1525–1536.
[16] Z.G. Zeng, J. Wang, Associative memories based on continuous-time cellular neural networks designed using space-invariant cloning templates, Neural Networks 22 (2009) 651–657.
[17] M. Gilli, M. Biey, P. Checco, Equilibrium analysis of cellular neural networks, IEEE Trans. Circuits Syst. I: Regular Papers 51 (2004) 903–912.
[18] N. Takahashi, A new sufficient condition for complete stability of cellular neural networks with delay, IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 47 (2000) 793–799.
[19] X.M. Li, L.H. Huang, J.H. Wu, Further results on the stability of delayed cellular neural networks, IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 50 (2003) 1239–1242.
[20] X.X. Liao, J. Wang, Z.G. Zeng, Global asymptotic stability and global exponential stability of delayed cellular neural networks, IEEE Trans. Circuits Syst. II: Express Briefs 52 (2005) 403–409.
[21] X.X. Liao, Q. Luo, Z.G. Zeng, Y.X. Guo, Global exponential stability in Lagrange sense for recurrent neural networks with time delays, Nonlinear Anal. Real World Appl. 9 (2008) 1535–1557.
[22] C.D. Zheng, H.G. Zhang, Z.S. Wang, New delay-dependent global exponential stability criterion for cellular-type neural networks with time-varying delays, IEEE Trans. Circuits Syst. II: Express Briefs 56 (2009) 250–254.
[23] S.P. Xiao, X.M. Zhang, New globally asymptotic stability criteria for delayed cellular neural networks, IEEE Trans. Circuits Syst. II: Express Briefs 56 (2009) 659–663.
[24] C.D. Zheng, H.G. Zhang, Z.S. Wang, Improved robust stability criteria for delayed cellular neural networks via the LMI approach, IEEE Trans. Circuits Syst. II: Express Briefs 57 (2010) 41–45.
[25] W.H. Chen, W.X. Zheng, A new method for complete stability analysis of cellular neural networks with time delay, IEEE Trans. Neural Networks 21 (2010) 1126–1139.
[26] Z.G. Zeng, J. Wang, X.X. Liao, Stability analysis of delayed cellular neural networks described using cloning templates, IEEE Trans. Circuits Syst. I: Regular Papers 51 (2004) 2313–2324.

[27] Z.G. Zeng, J. Wang, Complete stability of cellular neural networks with time-varying delays, IEEE Trans. Circuits Syst. I: Regular Papers 53 (2006) 944–955.
[28] A. Knoblauch, Neural associative memory with optimal Bayesian learning, Neural Comput. 23 (2011) 1393–1451.

Qi Han received the BS degree in Computer Science and Technology from Shandong University (Weihai), China, in 2005, the MS degree in Computer Software and Theory from Chongqing University, China, in 2009, and the PhD degree in Computer Science and Technology from Chongqing University, China, in 2012. His current research interests cover chaos control and synchronization, cellular automata, neural networks and associative memories.

Xiaofeng Liao received the BS and MS degrees in mathematics from Sichuan University, Chengdu, China, in 1986 and 1992, respectively, and the PhD degree in circuits and systems from the University of Electronic Science and Technology of China in 1997. From 1999 to 2001, he was involved in postdoctoral research at Chongqing University. From November 1997 to April 1998, he was a research associate at the Chinese University of Hong Kong. From October 1999 to October 2000, he was a research associate at the City University of Hong Kong. From March 2001 to June 2001 and from March 2002 to June 2002, he was a senior research associate at the City University of Hong Kong. From March 2006 to April 2007, he was a research fellow at the City University of Hong Kong. He has published more than 150 international journal and conference papers. His current research interests include neural networks, nonlinear dynamical systems, bifurcation and chaos, and cryptography.

Tingwen Huang received the BS degree in mathematics from Southwest Normal University, Chongqing, China, in 1990, the MS degree in applied mathematics from Sichuan University, Chengdu, China, in 1993, and the PhD degree in mathematics from Texas A&M University, College Station, in 2002. From 1994 to 1998, he was a Lecturer with Jiangsu University, Jiangsu, China. From January 2003 to July 2003, he was a Visiting Assistant Professor at Texas A&M University. From August 2003 to July 2009, he was an Assistant Professor with Texas A&M University at Qatar, Doha, Qatar. Since July 2009, he has been an Associate Professor at the same institution. His research areas include neural networks, complex networks, chaos and dynamics of systems, operator semigroups and their applications, and traveling wave phenomena.

Jun Peng received a PhD in Computer Software and Theory from Chongqing University in 2003, an MA in computer system architecture from Chongqing University in 2000, and a BSc in Applied Mathematics from Northeast University in 1992. Since 1992 he has worked at Chongqing University of Science and Technology. He was a visiting scholar in the Laboratory of Cryptography and Information Security at Tsukuba University, Japan, in 2004, and in the Department of Computer Science at California State University, Sacramento, in 2007. He has authored or coauthored over 50 peer-reviewed journal and conference papers, and has served as a program committee member or session co-chair for over 10 international conferences. His current research interests are cryptography, chaos and network security.

Chuandong Li received the BS degree in Applied Mathematics from Sichuan University, Chengdu, China, in 1992, and the MS degree in operational research and control theory and the PhD degree in Computer Software and Theory from Chongqing University, Chongqing, China, in 2001 and 2005, respectively. He has been a Professor at the College of Computer Science, Chongqing University, Chongqing, China, since 2007. From November 2006 to November 2008, he served as a research fellow in the Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Hong Kong, China. His current research interests cover iterative learning control of time-delay systems, neural networks, chaos control and synchronization, and impulsive dynamical systems.

Hongyu Huang is currently an associate professor at the College of Computer Science of Chongqing University, China. He received his BS and MS degrees from Chongqing Normal University and Chongqing University in 2002 and 2005, respectively. Right after receiving his PhD degree from Shanghai Jiao Tong University in 2009, he joined Chongqing University. His research interests include vehicular ad hoc networks, complex networks and distributed systems.
