Communications
in Computer and Information Science 873
Commenced Publication in 2007
Founding and Former Series Editors:
Phoebe Chen, Alfredo Cuzzocrea, Xiaoyong Du, Orhun Kara, Ting Liu,
Dominik Ślęzak, and Xiaokang Yang
Editorial Board
Simone Diniz Junqueira Barbosa
Pontifical Catholic University of Rio de Janeiro (PUC-Rio),
Rio de Janeiro, Brazil
Joaquim Filipe
Polytechnic Institute of Setúbal, Setúbal, Portugal
Igor Kotenko
St. Petersburg Institute for Informatics and Automation of the Russian
Academy of Sciences, St. Petersburg, Russia
Krishna M. Sivalingam
Indian Institute of Technology Madras, Chennai, India
Takashi Washio
Osaka University, Osaka, Japan
Junsong Yuan
University at Buffalo, The State University of New York, Buffalo, USA
Lizhu Zhou
Tsinghua University, Beijing, China
More information about this series at http://www.springer.com/series/7899
Kangshun Li · Wei Li (Eds.)
Computational Intelligence
and Intelligent Systems
9th International Symposium, ISICA 2017
Guangzhou, China, November 18–19, 2017
Revised Selected Papers, Part I
Editors

Kangshun Li
College of Mathematics and Informatics
South China Agricultural University
Guangzhou, China

Wei Li
Jiangxi University of Science and Technology
Ganzhou, Jiangxi, China

Zhangxing Chen
Chemical and Petroleum Engineering
University of Calgary
Calgary, AB, Canada

Yong Liu
School of Computer Science and Engineering
The University of Aizu
Aizu-Wakamatsu, Fukushima, Japan
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Preface
Volumes CCIS 873 and CCIS 874 comprise the proceedings of the 9th International
Symposium on Intelligence Computation and Applications (ISICA 2017), held in
Guangzhou, China, during November 18–19, 2017. ISICA 2017 attracted over 180
submissions. After rigorous review and plagiarism checking, 51 high-quality papers
were included in CCIS 873, while another 50 papers were collected in CCIS 874.
The ISICA conferences are among the first series of international conferences on
computational intelligence that combine elements of learning, adaptation, evolution,
and fuzzy logic to create programs as alternative solutions to artificial intelligence.
ISICA 2017 featured the most up-to-date research in the analysis and theory of
evolutionary computation, neural network architectures and learning, neuro-dynamics
and neuro-engineering, fuzzy logic and control, collective intelligence and hybrid
systems, deep learning, knowledge discovery, learning, and reasoning. ISICA 2017
provided a venue to foster technical exchanges, renew everlasting friendships, and
establish new connections. Prof. Yuanxiang Li, one of the pioneers in parallel and
evolutionary computing at Wuhan University, wrote a beautiful poem in Chinese for
the ISICA 2017 event. It is our pleasure to translate his poem under the title
"Computational Intelligence Debate on the Pearl River":
Wear a smile on a bright face;
Under the night light on the Pearl River;
You are like star and moon shining on the Tower Small Slim Waist;
Ride waves on the cruise ship;
Leave bridges behind in a boundless moment.
These proceedings would not have been possible without the dedication and
commitment of both the staff at the Springer Beijing Office and the
CCIS editorial staff. We would like to thank the authors for submitting their work, as
well as the Program Committee members and reviewers for their enthusiasm, time, and
expertise. The invaluable help of active members from the Organizing Committee,
including Wei Li, Hui Wang, Lei Yang, Yan Chen, Lixia Zhang, Weiguang Chen,
Zhuozhi Liang, Junlin Jin, Ying Feng, and Yunru Lu, in setting up and maintaining the
online submission systems by EasyChair, assigning the papers to the reviewers, and
preparing the camera-ready version of the proceedings is highly appreciated. We would
like to thank them personally for helping to make ISICA 2017 a success.
Honorary Chairs
Hisao Ishibuchi Osaka Prefecture University, Japan
Qingfu Zhang City University of Hong Kong, SAR China
Yang Xiang Deakin University, Australia
General Chairs
Kangshun Li South China Agricultural University, China
Zhangxing Chen University of Calgary, Canada
Yong Liu University of Aizu, Japan
Program Chairs
Aniello Castiglione University of Salerno, Italy
Jing Liu Xidian University, China
Han Huang South China University of Technology, China
Hailin Liu Guangdong University of Technology, China
Publicity Chairs
Lei Yang South China Agricultural University, China
Lixia Zhang South China Agricultural University, China
Program Committee
Aimin Zhou East China Normal University, China
Allan Rocha University of Calgary, Canada
Dazhi Jiang Shantou University, China
Dongbo Zhang Guangdong University of Science and Technology,
China
Ehsan Aliabadian University of Calgary, Canada
Ehsan Amirian University of Calgary, Canada
Feng Wang Wuhan University, China
Guangming Lin Southern University of Science and Technology, China
Guoliang He Wuhan University, China
Contents (Excerpt)

The Dynamic Relationship Between Bank Credit and Real Estate Price in China
Xiaofan Wang and Li Zhou

Centralized Access Control Scheme Based on OAuth for Social Networks
Yue Liu, Wei Gao, and Jingyun Liao

A Novel Monitor Image De-hazing for Heavy Haze on the Freeway
Chunyu Xu, Yufeng Wang, and Wenyong Dong

A New Recurrent Neural Network with Fewer Neurons
S. Chen et al.
1 Introduction
Recurrent neural networks have been studied extensively in the fields of signal
processing [1, 2], pattern classification [3, 4], robotics [5, 6], optimization [7], and
others. In particular, the Hopfield network [8] was devised for solving optimization
problems online, and recurrent neural networks, with their powerful parallelism and
online solving capability, have since become a popular research branch in online
optimization, making substantial advances in both theory and application. A recurrent
neural network [9] was developed for nonlinear programming problems; it handles the
equality and inequality constraints through a penalty term and converges to an
approximate optimal solution. A switched-capacitor neural network was proposed [10]
for solving nonlinear convex programming problems; however, that model becomes
unstable when the optimal solution lies outside the feasible region. A neural network
was proposed for solving linear and quadratic programming problems [11] and proven
to converge globally to the optimal solution, but the slack variables introduced into the
problem make the dimension of the model too large. A dual neural network was then
proposed to reduce the dimension; it is composed of a single layer of neurons, and the
dimension of the dual network equals its number of neurons. The model and its
modifications [13, 14] have been applied to the kinematic control of robots [12, 15].
A simplified dual neural network was proposed in [16]; it greatly reduces complexity
while its convergence property remains sound. That model, which consists of just a
single neuron, has been applied to the k-winners-take-all (KWTA) problem in real
time [17]. However, it only handles quadratic programming problems with a square
quadratic term in the cost function and box constraints. In this paper, a recurrent neural
network for solving general quadratic programming problems is proposed. It uses
fewer neurons, and the dimension of the model is greatly reduced while sound
accuracy and efficiency are maintained.
The remainder of this paper is organized as follows. In Sect. 2, a neural network
model is presented for solving quadratic programming problems. In Sect. 3, the
convergence of the neural network is analyzed, and the model is proven to be globally
convergent to the optimal solution of the quadratic programming problem. In Sect. 4,
a discrete-time model for solving the same problem and an alternative neural network
model for solving the quadratic programming problem under irredundant equality
constraints are studied. In Sect. 5, numerical examples are given to demonstrate the
effectiveness of our method. Section 6 concludes the paper.

Throughout this paper, R denotes the real number field, A^T denotes the transpose
of a matrix A, and I denotes the identity matrix.
2 Mathematical Model
Consider the following quadratic programming problem:

$$\min\ \tfrac{1}{2}x^T W x + c^T x \quad \text{s.t.}\quad Ax = b,\ \ Ex \le e \tag{1}$$

where $W \in R^{n\times n}$ is positive definite, $c, x \in R^n$, $A \in R^{m\times n}$, $b \in R^m$, $E \in R^{q\times n}$, and $e \in R^q$. Writing the equality constraint $Ax = b$ as the pair of inequalities $Ax \le b$ and $-Ax \le -b$, problem (1) can be restated compactly as

$$\min\ \tfrac{1}{2}x^T W x + c^T x \quad \text{s.t.}\quad Bx \le d \tag{2}$$

where

$$B = \begin{bmatrix} A \\ -A \\ E \end{bmatrix}, \qquad d = \begin{bmatrix} b \\ -b \\ e \end{bmatrix} \tag{3}$$

Enforcing only the currently most violated constraint of (2) yields the single-constraint subproblem

$$\min\ \tfrac{1}{2}x^T W x + c^T x \quad \text{s.t.}\quad B_r^T x - d_r \le 0 \tag{4}$$
In Eq. (4), r denotes the row index of the largest element of $Bx - d$, $B_r^T$ denotes the rth row of B, and $d_r$ denotes the rth element of d. According to the KKT conditions, the solution of problem (4) satisfies:
$$Wx + c - \mu B_r = 0, \qquad \begin{cases} B_r^T x - d_r = 0 & \text{if } \mu < 0 \\ B_r^T x - d_r \le 0 & \text{if } \mu = 0 \end{cases} \tag{5}$$
The dual variable of the inequality constraint in Eq. (4) is denoted by $\mu \in R$; under the sign convention of (5), $\mu \le 0$ at the solution. Equation (5) can be simplified with an upper saturation function $g(u) = \min(u, 0)$, i.e., $g(u) = u$ for $u \le 0$ and $g(u) = 0$ for $u > 0$, as follows:
$$Wx + c - \mu B_r = 0, \qquad B_r^T x - d_r = g(B_r^T x - d_r - \mu) \tag{6}$$
Since W is positive definite, x can be solved explicitly in terms of $\mu$ from the first equality in Eq. (6):

$$x = \mu W^{-1} B_r - W^{-1} c \tag{8}$$
The second equation of (6) is realized by the single-neuron dynamics

$$\varepsilon \dot{\mu} = g(B_r^T x - d_r - \mu) - (B_r^T x - d_r) \tag{9}$$

where $\varepsilon > 0$ is a scaling parameter. Substituting (8) into (9) generates the neural network dynamics with the following state equation and output equation.

State equation:

$$\varepsilon \dot{\mu} = g(-\mu + B_r^T W^{-1} B_r \mu - B_r^T W^{-1} c - d_r) - B_r^T W^{-1} B_r \mu + B_r^T W^{-1} c + d_r \tag{10}$$

Output equation:

$$x = \mu W^{-1} B_r - W^{-1} c \tag{11}$$

where r is the row index of the largest element of $Bx - d$, and B and d are as in Eq. (3).
Remark 2.1: Only one dynamic neuron is required in the neural network (10), regardless of the size of problem (1). The recurrent neural networks in [13–16, 18] require at least q dynamic neurons to solve a general quadratic programming problem, q being the number of inequality constraints of problem (1). The proposed model uses just a single dynamic neuron, which greatly reduces the number of neurons and the computational complexity.
Remark 2.2: The neural network dynamics modeled by (10) form a switched dynamic system. Under the endogenous switching signal r (the signal flow in the neural network is plotted in Fig. 1), it switches within a family of dynamic systems:

$$\varepsilon \dot{\mu} = f_r(\mu), \quad r \in S = \{1, 2, \ldots, p\}, \qquad r : \mu \to S \tag{12}$$

where p is the number of rows of B, i.e., p = 2m + q.
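To make the single-neuron model concrete, here is a minimal simulation sketch in Python/NumPy. It integrates the state equation (10) with a plain forward-Euler step and reads the solution off the output equation (11). Everything numeric in it is an assumption for illustration: the problem data W, c, E, e, the scaling parameter eps, the step size h, and the iteration count are not taken from the paper, and with no equality constraints B and d in (3) simply reduce to E and e.

```python
import numpy as np

# Hypothetical QP (assumed for illustration, not from the paper):
#   min 1/2 x^T W x + c^T x   s.t.   x1 + x2 <= 1,  x1 >= 0,  x2 >= 0
W = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([-2.0, -3.0])
E = np.array([[1.0, 1.0],
              [-1.0, 0.0],
              [0.0, -1.0]])
e = np.array([1.0, 0.0, 0.0])

B, d = E, e                    # Eq. (3) with no equality constraints
Winv = np.linalg.inv(W)

def g(u):
    # Upper saturation function: g(u) = u for u <= 0, and 0 for u > 0.
    return np.minimum(u, 0.0)

eps = 0.1                      # scaling parameter (assumed value)
h = 1e-3                       # Euler step size (assumed value)
mu, r = 0.0, 0                 # single dynamic neuron state, initial switch index
for _ in range(20_000):
    x = mu * Winv @ B[r] - Winv @ c       # output equation (11)
    r = int(np.argmax(B @ x - d))         # endogenous switching signal r
    s = B[r] @ x - d[r]                   # residual B_r^T x - d_r
    mu += (h / eps) * (g(s - mu) - s)     # state equation (10), Euler step

print("x* ~", mu * Winv @ B[r] - Winv @ c, " mu* ~", mu)
```

For this data the loop settles at mu* = -1.5, giving x* = (0.25, 0.75), which matches the KKT conditions (5) with only the first constraint active.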
3 Convergence Analysis

A feasible way to prove the global convergence of the proposed system is to construct a common Lyapunov function; however, finding such a common Lyapunov function is a difficult problem. Work on contraction theory [21, 22] greatly simplifies the proof process by reasoning about the virtual dynamics of the system. In this paper, contraction analysis is used to prove that the proposed model (10) is convergent. The proof is based on the following definition and lemma.

Definition 3.1 ([22]): Given the system equations $\dot{x} = f(x, t)$, a region of the state space is called a contraction region with respect to a uniformly positive definite metric $M(x, t) = H^T H$ if

$$\Big(\frac{\partial f}{\partial x}\Big)^T M + M\,\frac{\partial f}{\partial x} + \dot{M} \le -\beta M \quad (\text{with } \beta > 0)$$

in that region.

Lemma 3.1 ([22]): If the whole state space is a contraction region, then all trajectories of the system converge exponentially to a single trajectory; in particular, the system converges globally to its equilibrium point when one exists.

We are now ready to state the convergence result for the proposed model.

Theorem 3.1: The proposed network (10) converges to its equilibrium point from any starting point $\mu(0) \in R$, and the output given by Eq. (11) at the equilibrium is the optimal solution to problem (1).
Proof: Contraction theory is applied to analyze the convergence of (10). Substituting the output equation (11), $x = \mu W^{-1} B_r - W^{-1} c$, into (9) gives

$$\dot{\mu} = f(\mu, t) = \frac{1}{\varepsilon}\Big(g(-\mu + B_r^T W^{-1} B_r \mu - B_r^T W^{-1} c - d_r) - B_r^T W^{-1} B_r \mu + B_r^T W^{-1} c + d_r\Big) \tag{13}$$

The partial derivative $\partial f/\partial \mu$ is

$$\frac{\partial f}{\partial \mu} = \begin{cases} -\dfrac{B_r^T W^{-1} B_r}{\varepsilon} & \text{if } B_r^T x - d_r - \mu \ge 0 \\ -\dfrac{1}{\varepsilon} & \text{if } B_r^T x - d_r - \mu < 0 \end{cases} \tag{14}$$

Since $W^{-1}$ is positive definite, $B_r^T W^{-1} B_r > 0$, so both branches of (14) are strictly negative. Taking the constant metric $M = 1$, the contraction condition

$$\Big(\frac{\partial f}{\partial x}\Big)^T M + M\,\frac{\partial f}{\partial x} + \dot{M} \le -\beta M \tag{16}$$

holds over the whole state space, with $\beta = \min_{r \in S}\min\{2B_r^T W^{-1} B_r/\varepsilon,\ 2/\varepsilon\} > 0$, for every switching index r. By Lemma 3.1, the network (10) is globally contracting and converges to its equilibrium point; at the equilibrium, (13) enforces the KKT conditions (5)–(6), hence the output (11) is the optimal solution of problem (1). ∎
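As a numerical sanity check of (14), the following sketch (reusing W, c, B, d, Winv, g, and eps from the sketch after Remark 2.2) differentiates the right-hand side of (13) by central finite differences for each fixed switching index r; the probe points are arbitrary and chosen away from the kink of g. Both branches come out negative, which is exactly what lets the constant metric M = 1 satisfy (16).

```python
def f13(mu, r):
    # Right-hand side of Eq. (13) for a fixed switching index r.
    a = B[r] @ Winv @ B[r]                  # B_r^T W^-1 B_r > 0
    s = a * mu - B[r] @ Winv @ c - d[r]     # B_r^T x - d_r, with x from (8)
    return (g(s - mu) - s) / eps

for r in range(len(B)):
    a = B[r] @ Winv @ B[r]
    for mu in (-4.0, 0.0, 4.0):             # arbitrary probe points
        slope = (f13(mu + 1e-6, r) - f13(mu - 1e-6, r)) / 2e-6
        # Expect -a/eps where g is saturated and -1/eps where it is not.
        print(f"r={r}  mu={mu:+.1f}  df/dmu={slope:.3f}  "
              f"(-a/eps={-a/eps:.3f}, -1/eps={-1/eps:.3f})")
```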
4 Extensions
A discrete-time counterpart of the network (10)–(11) is obtained by an Euler discretization with step size $\omega$:

$$\mu_{n+1} = \mu_n + \omega\big(g(B_r^T x_n - d_r - \mu_n) - B_r^T x_n + d_r\big) \tag{17}$$

$$x_n = \mu_n W^{-1} B_r - W^{-1} c \tag{18}$$

where r is the row index of the largest element of $Bx_n - d$, and B and d are as in the equation set (3).
In the discrete-time case, contraction theory is an extension of the well-known contraction mapping theorem. We again use contraction theory to analyze the convergence. For discrete-time systems, the definition of a contraction region and the condition for contraction are stated below.
Definition 4.1 ([22]): Given the discrete-time system $x_{n+1} = f_n(x_n, n)$, a region of the state space is a contraction region with respect to a uniformly positive definite metric $M_n(x_n, n) = H_n^T H_n$ if there exists $\beta > 0$ such that, in that region,

$$F_n^T F_n - I \le -\beta I < 0, \qquad F_n = H_{n+1}\,\frac{\partial f_n}{\partial x_n}\,H_n^{-1}$$
Lemma 4.1 ([22]): If the whole state space is a contraction region, global convergence to the given trajectory is guaranteed.
Theorem 4.1: The discrete model (17) converges to the equilibrium point from any initialization $\mu_0 \in R$, and the output at the equilibrium point, as given by (18), is the optimal solution to problem (1), under the condition

$$0 < \omega < 2 \quad \text{and} \quad \omega < \frac{2}{B_i^T W^{-1} B_i} \quad \text{for all } i \in S \tag{19}$$
Proof: Substituting (18) into (17) gives

$$\mu_{n+1} = f_n(\mu_n, n) = \omega\,g(-\mu_n + B_r^T W^{-1} B_r \mu_n - B_r^T W^{-1} c - d_r) + (1 - \omega B_r^T W^{-1} B_r)\mu_n + \omega B_r^T W^{-1} c + \omega d_r \tag{20}$$

Calculating $\partial f_n/\partial \mu_n$ yields

$$\frac{\partial f_n}{\partial \mu_n} = \begin{cases} 1 - \omega B_r^T W^{-1} B_r & \text{if } v_n \ge 0 \\ 1 - \omega & \text{if } v_n < 0 \end{cases} \tag{21}$$

where $v_n = B_r^T x_n - d_r - \mu_n$ is the argument of g in (20). Under condition (19), both branches of (21) lie strictly inside $(-1, 1)$, so $F_n = \partial f_n/\partial \mu_n$ satisfies the contraction condition of Definition 4.1 with $M_n = 1$; by Lemma 4.1, the discrete model converges globally to its equilibrium, whose output (18) satisfies the KKT conditions (5) and is therefore the optimal solution of problem (1). ∎
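The discrete model runs directly as an iteration. The sketch below (again reusing W, c, B, d, Winv, and g from the Section 2 sketch) first picks a step size w inside the region allowed by condition (19) — the 0.5 safety factor and the iteration count are arbitrary choices — and then iterates (17) and (18):

```python
# Step size satisfying condition (19): 0 < w < 2 and w < 2 / (B_i^T W^-1 B_i).
bound = min(2.0 / (Bi @ Winv @ Bi) for Bi in B)
w = 0.5 * min(2.0, bound)                 # 0.5 is an arbitrary safety margin

mu, r = 0.0, 0
for n in range(200):
    x = mu * Winv @ B[r] - Winv @ c       # output equation (18)
    r = int(np.argmax(B @ x - d))         # switching signal
    s = B[r] @ x - d[r]
    mu = mu + w * (g(s - mu) - s)         # discrete state equation (17)

print("x* ~", mu * Winv @ B[r] - Winv @ c)
```

For the example data this converges to the same x* as the continuous model; per (21), the closer w·B_r^T W^-1 B_r is to 1, the faster the saturated branch contracts.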
An alternative model keeps the equality constraints explicit instead of folding them into B. With r denoting the row index of the most violated inequality constraint, the corresponding single-inequality subproblem is

$$\min\ \tfrac{1}{2}x^T W x + c^T x \quad \text{s.t.}\quad Ax = b,\ \ E_r^T x - e_r \le 0 \tag{25}$$

Based on the KKT conditions and the upper saturation function $g(\cdot)$, the solution to problem (25) satisfies

$$\begin{cases} Wx + c - A^T y - \mu E_r = 0 \\ Ax = b \\ E_r^T x - e_r = g(E_r^T x - e_r - \mu) \end{cases} \tag{26}$$

where $y \in R^m$ is the dual variable of the equality constraints.
Solving the first two equations of (26) for x with $\mu$ fixed (possible when rank(A) = m) yields the output equation

$$x = \mu P E_r - P c + W^{-1} A^T (A W^{-1} A^T)^{-1} b, \qquad P = W^{-1} - W^{-1} A^T (A W^{-1} A^T)^{-1} A W^{-1} \tag{27}$$

and, in analogy with (9), the third equation of (26) is realized by the single-neuron state equation

$$\varepsilon \dot{\mu} = g(E_r^T x - e_r - \mu) - (E_r^T x - e_r) \tag{29}$$

Before stating the convergence result for the neural network (29), the following lemma, which is used in the convergence proof, is presented.
Lemma 4.2: The symmetric matrix $P \in R^{n\times n}$, $P = W^{-1} - W^{-1}A^T(AW^{-1}A^T)^{-1}AW^{-1}$, is positive semi-definite, i.e., $z^T P z \ge 0$ for all $z \in R^n$; moreover, $z^T P z > 0$ whenever $z^T(W^{-1}A^T(AW^{-1}A^T)^{-1}A - I) \ne 0$.
Proof: Since $W^{-1} \in R^{n\times n}$ is positive definite, it can be factorized as $W^{-1} = QQ^T$ with Q positive definite. Define $G = AQ \in R^{m\times n}$; then G also has full row rank (since A has full row rank and Q is nonsingular), so it admits the singular value decomposition $G = U[\Sigma\ 0]V^T$, where $U \in R^{m\times m}$ and $V \in R^{n\times n}$ are unitary, $[\Sigma\ 0] \in R^{m\times n}$, and $\Sigma \in R^{m\times m}$ is diagonal with all positive diagonal elements. Writing $w = Q^T z$ and substituting the decomposition,

$$z^T P z = w^T\big(I - G^T (G G^T)^{-1} G\big)w = w^T V \begin{bmatrix} 0 & 0 \\ 0 & I_{n-m} \end{bmatrix} V^T w \ge 0,$$

with equality exactly when w lies in the range of $G^T$, i.e., when z lies in the range of $A^T$, which is equivalent to $z^T(W^{-1}A^T(AW^{-1}A^T)^{-1}A - I) = 0$. ∎
The stability of the neural network (29) and its global convergence to the optimal solution of (1) are guaranteed by the following theorem.
Theorem 4.2: The neural network (29) converges exponentially to its equilibrium from any initial point $\mu(0) \in R$, and its output at this equilibrium, given by the output equation (27), is the optimal solution to problem (1), provided that rank(A) = m in (1) and

$$E_i^T\big(W^{-1}A^T(AW^{-1}A^T)^{-1}A - I\big) \ne 0 \quad \text{for } i = 1, 2, \ldots, q,$$

where $E_i^T$ denotes the ith row of the matrix E.
Proof: This theorem can be proven by a two-step procedure similar to the proof of Theorem 4.1. The first step solves for the equilibrium and shows that the output at this equilibrium is the optimal solution to problem (1). The second step proves global contraction to the equilibrium. Note that, by Lemma 4.2, the condition $E_i^T(W^{-1}A^T(AW^{-1}A^T)^{-1}A - I) \ne 0$ for $i = 1, 2, \ldots, q$ guarantees $E_r^T P E_r > 0$ for every possible switching signal r; based on this result, the global contraction of the neural network (29) can be established. The detailed proof of the two steps is omitted.
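Lemma 4.2 and the rank-type condition of Theorem 4.2 are easy to probe numerically. The sketch below uses random data (dimensions and seed are arbitrary assumptions): it forms P, confirms that its spectrum is non-negative, and shows that E_r^T P E_r is strictly positive for a generic row E_r but vanishes when E_r lies in the row space of A, i.e., exactly when the condition of Theorem 4.2 fails.

```python
import numpy as np

rng = np.random.default_rng(0)            # arbitrary seed
n, m = 5, 2                               # arbitrary sizes with rank(A) = m
A = rng.standard_normal((m, n))           # full row rank with probability 1
G0 = rng.standard_normal((n, n))
W = G0 @ G0.T + n * np.eye(n)             # symmetric positive definite W
Winv = np.linalg.inv(W)

AWAinv = np.linalg.inv(A @ Winv @ A.T)
P = Winv - Winv @ A.T @ AWAinv @ A @ Winv

print("min eig of P:", np.linalg.eigvalsh(P).min())     # ~0, never negative

Er = rng.standard_normal(n)               # a generic (hypothetical) row of E
cond = Er @ (Winv @ A.T @ AWAinv @ A - np.eye(n))
print("condition holds:", np.linalg.norm(cond) > 1e-8)  # True
print("E_r^T P E_r =", Er @ P @ Er)                     # strictly positive

Ea = A.T @ rng.standard_normal(m)         # a row inside the row space of A
print(np.linalg.norm(Ea @ (Winv @ A.T @ AWAinv @ A - np.eye(n))),  # ~0
      Ea @ P @ Ea)                                                 # ~0
```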