
January 9, 2012 13:15 00306

International Journal of Neural Systems, Vol. 22, No. 1 (2012) 21–35


© World Scientific Publishing Company
DOI: 10.1142/S0129065712003067
A NOVEL EFFICIENT LEARNING ALGORITHM
FOR SELF-GENERATING FUZZY NEURAL
NETWORK WITH APPLICATIONS
FAN LIU

and MENG JOO ER

School of EEE, Nanyang Technological University


Singapore, 639798, Singapore

liuf0009@e.ntu.edu.sg

emjer@ntu.edu.sg
In this paper, a novel efficient learning algorithm for a self-generating fuzzy neural network (SGFNN) is proposed based on ellipsoidal basis functions (EBFs); the network is functionally equivalent to a Takagi-Sugeno-Kang (TSK) fuzzy system. The proposed algorithm is simple and efficient and is able to generate a fuzzy neural network with high accuracy and a compact structure. The structure learning algorithm of the proposed SGFNN combines criteria of fuzzy-rule generation with a pruning technique. The Kalman filter (KF) algorithm is used to adjust the consequent parameters of the SGFNN. The SGFNN is employed in a wide range of applications, ranging from function approximation and nonlinear system identification to a chaotic time-series prediction problem and a real-world fuel consumption prediction problem. Simulation results and comparative studies with other algorithms demonstrate that a more compact architecture with high performance can be obtained by the proposed algorithm. In particular, this paper presents an adaptive modeling and control scheme for a drug delivery system based on the proposed SGFNN. A simulation study demonstrates the ability of the proposed approach to estimate the drug's effect and regulate blood pressure at a prescribed level.
Keywords: Self-generating fuzzy neural network; ellipsoidal basis function (EBF); criteria of generation and pruning; Kalman filter (KF) algorithm.
1. Introduction

Over the past decade, many novel artificial intelligence and machine learning algorithms have been successfully developed and widely applied, for example, neural network models for earthquake magnitude prediction using multiple seismicity indicators,[1] estimation of freeway work zone capacity based on a neuro-fuzzy logic model,[2] nonparametric identification of structures based on fuzzy wavelet neural networks using the nonlinear autoregressive moving average with exogenous inputs approach[3] and nonlinear complex system identification based on internal recurrent neural networks (IRNN).[4]
Recently, many researchers have focused on combining evolutionary algorithms (EAs) with machine learning algorithms. Hung and Adeli[5] combined a genetic algorithm (GA) with an adaptive conjugate gradient neural network learning algorithm for training feedforward neural networks. Elragal[6] adopted the particle swarm optimization (PSO)[7] algorithm to update the weights and biases of a neural network to improve its prediction accuracy.
Another well-known development is the fuzzy neural network (FNN), which has been proven to reap the benefits of both fuzzy logic and neural networks.[1,8–12] Theoretical investigations have proven that fuzzy logic systems and neural
∗Corresponding author.
networks can approximate any function to any prescribed accuracy provided that sufficient fuzzy rules or hidden neurons are available.[13,14] In FNN systems, standard neural networks are designed to approximate a fuzzy inference system through the structure of neural networks, while the parameters of the fuzzy system are modified by means of the learning algorithms used in neural networks.[15] Twin issues associated with a fuzzy system are (1) parameter estimation, which involves determining the parameters of premises and consequents, and (2) structure identification, which involves partitioning the input space and determining the number of fuzzy rules for a specific performance.[16]
FNN systems have been found to be very efficient and are in widespread use in several areas such as adaptive control,[1,7,18,19] signal processing,[20] nonlinear system identification,[21,22] pattern recognition[23] and so on. Besides the well-known adaptive-network-based fuzzy inference system (ANFIS),[24] many FNN algorithms have been presented. Juang et al.[9] proposed an online self-constructing neural fuzzy inference network, which is a modified Takagi-Sugeno-Kang (TSK)-type fuzzy system possessing the learning ability of a neural network. Leng et al.[25] proposed a self-organizing fuzzy neural network (SOFNN) employing the optimal brain surgeon (OBS) method for pruning. A novel hybrid learning approach, termed self-organizing fuzzy neural networks based on genetic algorithms (SOFNNGA), which is used to design a growing FNN and to implement Takagi-Sugeno (TS)-type fuzzy models, has also been proposed by Leng et al.[26]
However, like most online learning algorithms, it suffers from a slow learning speed due to the growing and pruning criteria, and from a complicated learning process due to the use of the GA in optimizing the topology of the initial network structure. A significant work in the development of FNNs is the online sequential learning algorithm known as the resource allocating network (RAN),[27] which dynamically determines the number of hidden-layer neurons based on the properties of the input samples. An enhancement of the RAN, named RANEKF,[28] was proposed, in which the extended Kalman filter (EKF) method rather than the least mean squares (LMS) algorithm is used for updating the parameters of the network. Another improvement of the RAN, developed in Ref. 23, employs a pruning method whereby inactive hidden neurons can be detected and removed during the learning process. A further improvement of the RAN in Ref. 29 takes into consideration orthogonal techniques such as QR factorization and singular value decomposition (SVD) to determine the appropriate input structure of the RBF network and to prune irrelevant neurons within the same network. Another significant development of FNNs was made in Ref. 3, where a new dynamic time-delay fuzzy wavelet neural network was proposed. The model consists of a dynamic time-delay neural network, wavelets, fuzzy logic and the reconstructed state-space concept[30,31] from chaos theory, and can be used for many applications such as structural system identification[3] and nonlinear system control.[32]
Recently, the idea of a self-generating method has been introduced in FNN systems. The well-known growing and pruning radial basis function network (GAP-RBF)[33] algorithm generates an FNN automatically based on growing and pruning approaches. The generalized GAP-RBF (GGAP-RBF)[34] algorithm can be used for an arbitrary sampling density of the training samples. A fast and accurate online self-organizing scheme for parsimonious fuzzy neural networks (FAOS-PFNN)[35] based on RBF neural networks has been proposed to accelerate the learning speed and increase the approximation accuracy by incorporating a pruning strategy into new growth criteria. Unfortunately, as in most RBF-based online learning algorithms, all the widths of the Gaussian membership functions of the input variables in a rule are the same due to the use of RBF neural networks. This usually does not coincide with reality, especially when the input variables have significantly different operating intervals.
Based on the key idea of the self-generating method, this paper presents an efficient algorithm for constructing a self-generating fuzzy neural network (SGFNN) that identifies a TSK-type fuzzy model. The structure learning algorithm for generating new ellipsoidal basis function (EBF) neurons is based on the system error criterion and the ε-completeness of fuzzy rules.[36] The salient features of the approach can be summarized as follows.

• By using criteria for the generation and pruning of neurons, the SGFNN can recruit or remove EBF neurons automatically so as to achieve optimal system performance.
• All the widths of the Gaussian membership functions of the input variables in a rule are different and can be adjusted, owing to the use of the EBF neural network.

• Overlapping of membership functions can be significantly reduced, since the number of membership functions of every input variable is defined separately; they can be the same or different.

• The KF algorithm is adopted for adjusting the consequent parameters. The linear least squares (LLS) method is employed to adjust the weights of many other FNNs.[10,12] Although the LLS method is computationally simple and fast for determining weights online, it is computationally expensive when dealing with matrix inversion. The KF algorithm shows good performance and robustness in noisy environments. As a compromise between learning speed and system robustness, the KF algorithm rather than the LLS method is used to adjust the consequent parameters of the SGFNN.
The effectiveness of the proposed SGFNN algorithm is demonstrated via benchmark problems in the areas of function approximation, nonlinear dynamic system identification, chaotic time-series prediction and a real-world benchmark regression problem. Comprehensive comparisons with other popular learning algorithms have been made. In particular, an adaptive modeling and control scheme based on the SGFNN for a drug delivery system is presented. The proposed SGFNN is a novel intelligent modeling tool which can model the unknown nonlinearities of the complex drug delivery system and adapt online to changes and uncertainties in the system.
This paper is organized as follows. Section 2 introduces the proposed SGFNN. The structure learning algorithm, which includes the criteria for generating and pruning neurons, is given in detail in Sec. 3; the KF algorithm for parameter learning is also presented in that section. Section 4 presents simulation results and comparative studies with other popular learning algorithms, as well as the adaptive modeling and control scheme for the drug delivery system using the proposed SGFNN. A detailed discussion of the merits and working principle of the SGFNN algorithm is presented in Sec. 5. Finally, conclusions are drawn in Sec. 6.
Fig. 1. Structure of the SGFNN (Layer 1: input layer; Layer 2: membership function layer; Layer 3: rule layer; Layer 4: output layer).
2. The Proposed Self-Generating Fuzzy Neural Network

The SGFNN is constructed based on EBF neural networks, which are functionally equivalent to the TSK fuzzy model. The SGFNN has a total of four layers, as shown in Fig. 1. Layer one transmits the values of the input linguistic variables x_i (i = 1, 2, ..., r) directly to the next layer, where r is the number of input variables. Each input variable x_i has u membership functions A_ij (j = 1, 2, ..., u), as shown in layer two, which are in the form of a Gaussian function given by

$$A_{ij} = \exp\left[-\frac{(x_i - c_{ij})^2}{\sigma_{ij}^2}\right], \quad i = 1, 2, \ldots, r, \;\; j = 1, 2, \ldots, u \qquad (1)$$
where A_ij is the jth membership function of the ith input variable x_i, and c_ij and σ_ij are the center and width of the jth membership function with respect to the ith input variable, respectively. Layer three is the rule layer. Each node in this layer represents a possible IF-part of a fuzzy rule. If the T-norm operator used to compute each rule's firing strength is multiplication, the output of the jth rule R_j (j = 1, 2, ..., u) is given by

$$\varphi_j(x_1, x_2, \ldots, x_r) = \exp\left[-\sum_{i=1}^{r}\frac{(x_i - c_{ij})^2}{\sigma_{ij}^2}\right], \quad j = 1, 2, \ldots, u \qquad (2)$$
Layer four is the output layer, and each node represents an output linguistic variable. The weighted summation of the incoming signals is given by

$$y(x_1, x_2, \ldots, x_r) = \sum_{j=1}^{u} w_j \varphi_j \qquad (3)$$

where y is the output variable, w_j is the THEN-part or connection weight of the jth rule and φ_j is obtained from (2).
For the TSK model, the weights are polynomials of the input variables given by

$$w_j = a_j b = a_{0j} + a_{1j} x_1 + \cdots + a_{rj} x_r, \quad j = 1, 2, \ldots, u \qquad (4)$$

where a_j = [a_{0j} a_{1j} a_{2j} ... a_{rj}] is the weight vector of the input variables with respect to rule j and b = [1 x_1 x_2 ... x_r]^T is a column vector.
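As a concrete illustration of Eqs. (1)-(4), the forward pass of the network can be sketched as below; the array names (`centers`, `widths`, `a`) and their shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def firing_strengths(x, centers, widths):
    """Eq. (2): EBF firing strength of each of the u rules.

    x has length r; centers and widths are r x u, so every input
    variable keeps its own width in each rule (the EBF property)."""
    d2 = ((x[:, None] - centers) ** 2) / (widths ** 2)  # r x u terms
    return np.exp(-d2.sum(axis=0))                      # length u

def sgfnn_output(x, centers, widths, a):
    """Eqs. (3)-(4): TSK weight w_j = a_j b with b = [1, x]^T,
    then the weighted sum of firing strengths."""
    phi = firing_strengths(x, centers, widths)
    b = np.concatenate(([1.0], x))
    w = a @ b                       # one TSK weight per rule
    return float(phi @ w)

# Usage: a single rule centred exactly at the input fires with
# strength 1, so the output equals that rule's TSK weight.
x = np.array([0.5, -0.5])
centers = x[:, None]                # r x 1: one rule at x
widths = np.ones((2, 1))
a = np.array([[1.0, 2.0, 0.0]])     # w_1 = 1 + 2(0.5) + 0(-0.5) = 2
y = sgfnn_output(x, centers, widths, a)
```
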
3. Learning Algorithm

The learning algorithm of the SGFNN consists of structure learning and parameter learning, which construct the FNN automatically and dynamically. In this section, the learning process of the SGFNN, including structure learning and parameter learning, is presented. In structure learning, an FNN with high accuracy and a compact structure is constructed: EBF neurons are generated and pruned dynamically during the learning process. In parameter learning, the KF algorithm is used to adjust the consequent parameters of the SGFNN.
3.1. Criteria of fuzzy-rule generation

3.1.1. System errors

The output error of the SGFNN system with regard to the reference signal is an important criterion for determining whether a new rule should be recruited. Consider the ith observation (x_i, d_i), where x_i is the input vector and d_i is the desired output. The overall output of the SGFNN with the existing structure is denoted by y_i. The system error is defined as follows:

$$e_i = \|d_i - y_i\|. \qquad (5)$$

If

$$e_i > k_e \qquad (6)$$
where k_e is a predefined error tolerance, a new fuzzy rule should be considered, provided that the other criteria of generation are satisfied simultaneously. The term k_e decays during the learning process as follows:

$$k_e = \begin{cases} e_{\max}, & 1 < i < n/3 \\ \max[e_{\max}\,\beta^{i},\; e_{\min}], & n/3 \le i \le 2n/3 \\ e_{\min}, & 2n/3 < i \le n \end{cases} \qquad (7)$$

where e_max is the chosen maximum error, e_min is the desired accuracy of the SGFNN output, n is the number of learning iterations and β is a convergence constant given by

$$\beta = \left(\frac{e_{\min}}{e_{\max}}\right)^{3/n}. \qquad (8)$$
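The decay schedule of Eqs. (7)-(8) can be sketched as follows; this is a hedged illustration with assumed values e_max = 0.5 and e_min = 0.03, not the authors' code.

```python
def error_threshold(i, n, e_max=0.5, e_min=0.03):
    """Eq. (7): error tolerance k_e for the i-th observation of n."""
    beta = (e_min / e_max) ** (3.0 / n)   # Eq. (8), convergence constant
    if i < n / 3:
        return e_max                      # coarse learning phase
    if i <= 2 * n / 3:
        return max(e_max * beta ** i, e_min)
    return e_min                          # fine learning phase

n = 300
ks = [error_threshold(i, n) for i in range(1, n + 1)]
```

The threshold starts loose, so early rules cover coarse regions, and tightens toward e_min as learning proceeds.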
3.1.2. ε-completeness of fuzzy rules

The ε-completeness of fuzzy rules means that, for any input within the operating range, there exists at least one fuzzy rule such that the match degree (or firing strength) is not less than ε. The minimum value of ε is usually selected as ε_min = 0.5.[37]
The firing strength of each rule shown in (2) can be regarded as a function of the regularized Mahalanobis distance (M-distance), i.e.,

$$\varphi(x_1, x_2, \ldots, x_r) = \exp[-\mathrm{md}^2(j)] \qquad (9)$$

where

$$\mathrm{md}(j) = \sqrt{(X - C_j)^T \Sigma_j^{-1} (X - C_j)} \qquad (10)$$

is the M-distance, X = (x_1, x_2, ..., x_r)^T ∈ R^r, C_j = (c_{1j}, c_{2j}, ..., c_{rj})^T ∈ R^r and Σ_j^{-1} is calculated as follows:
$$\Sigma_j^{-1} = \begin{bmatrix} \dfrac{1}{\sigma_{1j}^2} & 0 & \cdots & 0 \\ 0 & \dfrac{1}{\sigma_{2j}^2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \dfrac{1}{\sigma_{rj}^2} \end{bmatrix}, \quad j = 1, 2, \ldots, u. \qquad (11)$$
According to the ε-completeness criterion of fuzzy rules, when a new observation (X_i, d_i), i = 1, 2, ..., n, arrives, the M-distance md_i(j) between the observation X_i and the center vector C_j (j = 1, 2, ..., u) of the existing EBF units is calculated according to (9) and (10).
Find

$$J = \arg\min_{1 \le j \le u} \mathrm{md}_i(j). \qquad (12)$$

If

$$\mathrm{md}_{i,\min} = \mathrm{md}_i(J) > k_d \qquad (13)$$

this implies that the existing FNN does not satisfy ε-completeness and a new rule should be considered. Here, k_d is a predefined threshold that can be chosen as follows:

$$k_d = \begin{cases} d_{\max} = \sqrt{\ln(1/\varepsilon_{\min})}, & 1 < i < n/3 \\ \max[d_{\max}\,\gamma^{i},\; d_{\min}], & n/3 \le i \le 2n/3 \\ d_{\min} = \sqrt{\ln(1/\varepsilon_{\max})}, & 2n/3 < i \le n \end{cases} \qquad (14)$$

where γ ∈ (0, 1) is a decay constant given by

$$\gamma = \left(\frac{d_{\min}}{d_{\max}}\right)^{3/n} = \left[\frac{\sqrt{\ln(1/\varepsilon_{\max})}}{\sqrt{\ln(1/\varepsilon_{\min})}}\right]^{3/n}. \qquad (15)$$
The idea behind the choice of k_e and k_d is called coarse learning. The rationale is to first find and cover the more troublesome positions which have large errors between the desired and actual outputs but are not properly covered by existing rules.[10,21]
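The growth check of Eqs. (9)-(13) can be sketched as below; the arrays `centers` and `widths` are hypothetical. Note that, with the diagonal Σ_j^{-1} of Eq. (11), the M-distance reduces to a width-scaled Euclidean distance.

```python
import numpy as np

def m_distance(x, centers, widths):
    """Eq. (10) with the diagonal Sigma_j^{-1} of Eq. (11):
    one regularized M-distance per existing EBF unit."""
    return np.sqrt((((x[:, None] - centers) / widths) ** 2).sum(axis=0))

def needs_new_rule(x, centers, widths, k_d):
    """Eqs. (12)-(13): a rule is grown when even the nearest
    EBF unit is farther than the threshold k_d."""
    md = m_distance(x, centers, widths)
    J = int(np.argmin(md))          # Eq. (12)
    return bool(md[J] > k_d), J     # Eq. (13)

# Two one-dimensional rules at 0 and 2; k_d = sqrt(ln(1/0.5)) ~ 0.83
# corresponds to d_max with eps_min = 0.5 in Eq. (14).
centers = np.array([[0.0, 2.0]])
widths = np.ones((1, 2))
grow_near, _ = needs_new_rule(np.array([0.1]), centers, widths, k_d=0.83)
grow_far, _ = needs_new_rule(np.array([5.0]), centers, widths, k_d=0.83)
```
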
3.2. Criteria of fuzzy-rule pruning

If inactive hidden neurons can be deleted during the learning process, a more parsimonious network topology can be achieved. In the SGFNN learning algorithm, the pruning strategy is the same as that of the GDFNN,[38] which is based on the error reduction ratio (ERR) method of Ref. 39. The ERR method calculates the sensitivity and significance of the fuzzy rules in order to determine which rules should be deleted.

Suppose that, for n observations, (3) can be written as a linear regression model in the following compact form:

$$D = \Phi W + E \qquad (16)$$
where D ∈ R^n is the desired output, Φ is the regressor matrix, W is the weight vector and E is the error vector.

The regressor matrix Φ can be rewritten as

$$\Phi = KA \qquad (17)$$

where K is an n × v (v = u × (r + 1)) matrix with orthogonal columns and A is a v × v upper triangular matrix. Substituting (17) into (16), we obtain

$$D = KAW + E = KG + E. \qquad (18)$$

The orthogonal least squares solution G is given by G = (K^T K)^{-1} K^T D, or equivalently

$$g_i = \frac{k_i^T D}{k_i^T k_i}, \quad 1 \le i \le v. \qquad (19)$$
The ERR due to k_i, as defined in Ref. 39, is given by

$$\mathrm{err}_i = \frac{g_i^2\, k_i^T k_i}{D^T D}, \quad 1 \le i \le v. \qquad (20)$$

Substituting (19) into (20) yields

$$\mathrm{err}_i = \frac{(k_i^T D)^2}{k_i^T k_i\, D^T D}, \quad 1 \le i \le v. \qquad (21)$$
Define the ERR matrix ERR = (δ_1, δ_2, ..., δ_u) ∈ R^{(r+1)×u}, whose elements are obtained from (21) and whose jth column corresponds to the jth rule. Furthermore, define

$$\eta_j = \sqrt{\frac{\delta_j^T \delta_j}{r+1}}, \quad j = 1, 2, \ldots, u. \qquad (22)$$

Then η_j represents the significance of the jth rule. If

$$\eta_j < k_{\mathrm{err}}, \quad j = 1, 2, \ldots, u \qquad (23)$$

where k_err is a predefined parameter, then the jth rule is pruned.
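The ERR-based significance test of Eqs. (16)-(23) can be sketched as follows, using QR factorization to obtain the orthogonal matrix K of Eq. (17); the synthetic data and the threshold value are illustrative assumptions.

```python
import numpy as np

def rule_significance(Phi, D, r, u):
    """Eqs. (17)-(22): QR factorization gives Phi = K A with
    orthogonal K; err_i per Eq. (21); eta_j per Eq. (22)."""
    K, _ = np.linalg.qr(Phi)
    err = (K.T @ D) ** 2 / ((K ** 2).sum(axis=0) * (D @ D))
    ERR = err.reshape(u, r + 1).T       # column j <-> rule j
    return np.sqrt((ERR ** 2).sum(axis=0) / (r + 1))

rng = np.random.default_rng(0)
n, r, u = 100, 2, 3
Phi = rng.standard_normal((n, u * (r + 1)))
# Target driven only by rule 1's columns, so rules 2 and 3 are
# insignificant and should be flagged by Eq. (23).
D = Phi[:, :3] @ np.array([1.0, 2.0, -1.0]) + 0.01 * rng.standard_normal(n)
eta = rule_significance(Phi, D, r, u)
prune = eta < 0.1                        # Eq. (23) with k_err = 0.1
```
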
3.3. Determination of premise parameters

When a new rule has been generated, the problem is how to allocate the parameters of its Gaussian membership functions, namely the centers and widths. Firstly, suppose that u neurons have been generated and that a new neuron is to be generated when the ith observation X_i (i = 1, 2, ..., n) arrives, according to the criteria of rule generation. Next, the incoming multidimensional input vector X_i is projected onto the corresponding one-dimensional membership functions for each input variable x_k (k = 1, 2, ..., r), and we define the Euclidean distance (E-distance) between the datum x_k^i and the boundary set ζ_k as follows:

$$\mathrm{ed}_k(j) = |x_k^i - \zeta_k(j)|, \quad j = 1, 2, \ldots, u+2 \qquad (24)$$

where u is the number of generated neurons and ζ_k ∈ {x_{k,min}, c_{k1}, c_{k2}, ..., c_{ku}, x_{k,max}}.
We define

$$j_{\min} = \arg\min_{j=1,2,\ldots,u+2} \mathrm{ed}_k(j). \qquad (25)$$

If

$$\mathrm{ed}_k(j_{\min}) \le k_m \qquad (26)$$

where k_m is a predefined constant, the new incoming datum x_k^i can be represented by the existing fuzzy set A_{kj_min}(c_{kj_min}, σ_{kj_min}) (k = 1, 2, ..., r) without generating a new membership function. Otherwise, a new Gaussian membership function is allocated, whose width and center are defined as follows:

$$\sigma_k = \frac{\max\{|c_k - c_{k-1}|, |c_k - c_{k+1}|\}}{\sqrt{\ln(1/\varepsilon)}} \qquad (27)$$

$$c_k(u+1) = x_k^i \qquad (28)$$

where c_{k−1} and c_{k+1} are the centers of the two membership functions neighboring the new one.
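The allocation procedure of Eqs. (24)-(28) can be sketched per input dimension as below; the function and variable names are hypothetical, and ε = 0.9 is an assumed overlap constant.

```python
import numpy as np

def allocate_membership(x_k, centres_k, x_min, x_max, k_m=0.5, eps=0.9):
    """Eqs. (24)-(28) for one input dimension k: reuse an existing
    fuzzy set when the E-distance to the boundary set is small,
    otherwise allocate a new Gaussian centre and width."""
    boundary_set = np.concatenate(([x_min], centres_k, [x_max]))
    ed = np.abs(x_k - boundary_set)           # Eq. (24)
    if ed.min() <= k_m:                       # Eqs. (25)-(26)
        return centres_k, None                # covered by an existing set
    new_centres = np.sort(np.append(centres_k, x_k))   # Eq. (28)
    i = int(np.searchsorted(new_centres, x_k))
    neighbours = [new_centres[j] for j in (i - 1, i + 1)
                  if 0 <= j < len(new_centres)]
    # Eq. (27): width from the farther neighbouring centre.
    sigma = max(abs(x_k - c) for c in neighbours) / np.sqrt(np.log(1 / eps))
    return new_centres, sigma

centres, sigma = allocate_membership(5.0, np.array([0.0, 2.0]), -10.0, 10.0)
reused, no_sigma = allocate_membership(2.1, np.array([0.0, 2.0]), -10.0, 10.0)
```
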
3.4. Determination of consequent parameters

After the premise parameters and the structure of the SGFNN are determined, it is important to determine the consequent parameters. In this paper, the KF algorithm[11] is used to adjust the consequent parameters.

Firstly, we suppose that u neurons are generated for n observations with r input variables. Rewriting (3) in the compact form

$$Y = W\Phi, \qquad (29)$$

the KF algorithm consists of the recurrent formulas

$$S_i = S_{i-1} - \frac{S_{i-1}\psi_i \psi_i^T S_{i-1}}{1 + \psi_i^T S_{i-1}\psi_i}, \quad i = 1, 2, \ldots, n \qquad (30)$$

$$W_i = W_{i-1} + S_i \psi_i (T_i^T - \psi_i^T W_{i-1}), \quad i = 1, 2, \ldots, n \qquad (31)$$

with the initial conditions W_0 = 0 and S_0 = αI, where S_i is the error covariance matrix for the ith observation, ψ_i is the ith column of Φ, T_i is the ith target, W_i is the weight matrix after the ith iteration, α is a large positive number and I is an identity matrix.
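The recursion of Eqs. (30)-(31) is the standard recursive least squares form and can be sketched as follows for a scalar output; the data are synthetic and α = 10^4 is an assumed initialization, not the authors' code.

```python
import numpy as np

def kf_fit(Phi, T, alpha=1e4):
    """Eqs. (30)-(31): Phi is v x n, its i-th column is psi_i;
    T holds the n targets; W_0 = 0 and S_0 = alpha * I."""
    v, n = Phi.shape
    W = np.zeros(v)
    S = alpha * np.eye(v)
    for i in range(n):
        psi = Phi[:, i]
        Spsi = S @ psi
        S = S - np.outer(Spsi, Spsi) / (1.0 + psi @ Spsi)   # Eq. (30)
        W = W + (S @ psi) * (T[i] - psi @ W)                 # Eq. (31)
    return W

# Recover known weights from noise-free synthetic data.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((3, 200))
w_true = np.array([0.5, -1.0, 2.0])
W = kf_fit(Phi, w_true @ Phi)
```

The update avoids any explicit matrix inversion, which is the computational advantage over batch LLS noted above.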
4. Illustrative Examples

In this section, the effectiveness of the proposed algorithm is demonstrated by MATLAB-based simulation studies on five examples: two-input nonlinear sinc function approximation,[24] nonlinear dynamic system identification,[21] the Mackey-Glass time-series prediction problem,[21] a real-world benchmark regression problem[40] and a real-world drug delivery system.[41] Simulation results are compared with those of other learning algorithms, namely the RBF-AFS,[21] the OLS,[39] the MRAN,[23] the ANFIS,[24] the DFNN,[10] the GDFNN,[38] the SOFNN,[25] the SOFNNGA,[26] the RAN,[27] the RANEKF,[28] the GAP-RBF,[33] the OS-ELM(RBF)[42] and the FAOS-PFNN.[35]
4.1. Example 1: Two-input nonlinear sinc function

This function was used to demonstrate the efficiency of the ANFIS.[24] The sinc function is defined as follows:

$$z = \mathrm{sinc}(x, y), \quad x \in [-10, 10], \; y \in [-10, 10]. \qquad (32)$$

A total of 121 sampled two-input data points and the corresponding target data are used as the training data. The parameters of the SGFNN are chosen as follows: ε = 0.9, e_max = 0.5, e_min = 0.03, k_err = 0.00015, k_m = 0.5, ε_max = 0.8, ε_min = 0.5 and α = 300.
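Under the usual reading of Ref. 24, where sinc(x, y) = sin(x)sin(y)/(xy) and the 121 samples form an 11 × 11 grid over the input range, the training set could be generated as below; the grid layout is an assumption, since the text only states the sample count.

```python
import numpy as np

def sinc2(x, y):
    """Two-input sinc: sin(x)sin(y)/(xy), with the removable
    singularities at x = 0 or y = 0 set to their limit 1."""
    sx = np.where(x == 0, 1.0, np.sin(x) / np.where(x == 0, 1.0, x))
    sy = np.where(y == 0, 1.0, np.sin(y) / np.where(y == 0, 1.0, y))
    return sx * sy

g = np.linspace(-10.0, 10.0, 11)          # assumed 11 x 11 grid
X, Y = np.meshgrid(g, g)
inputs = np.column_stack([X.ravel(), Y.ravel()])   # 121 input pairs
targets = sinc2(X.ravel(), Y.ravel())
```
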
In order to determine the effect of noise, the training data are mixed with Gaussian white noise sequences which have zero mean and different variances, as shown in Table 1. The results are illustrated in Table 1 and Fig. 2.

For the same variance (e.g. σ = 0.1), the SGFNN generates ten fuzzy rules with ten membership functions for the input variables x and y, respectively. The total number of parameters is 70 and the root mean squared error (RMSE) is 0.0229. The number of parameters is less than those of the ANFIS[24] (72) and the SOFNNGA[26] (76), but more than that of the
Table 1. Results of two-input function with noise.

Variance (σ)   Number of fuzzy rules   Number of parameters   RMSE
σ = 0          9                       59                     0.0139
σ = 0.01       8                       56                     0.0175
σ = 0.05       9                       59                     0.0217
σ = 0.1        10                      70                     0.0229
Fig. 2. Root Mean Squared Error (RMSE).
SOFNN[25] (68). The RMSE of the SGFNN is less than that of the SOFNN[25] (0.0767) but more than that of the SOFNNGA.[26] The SGFNN has better performance than the ANFIS[24] and the SOFNN[25] in terms of both network structure and RMSE. Moreover, the training RMSE of the SGFNN is close to that of the SOFNNGA[26] (0.0173) while using fewer parameters.
4.2. Example 2: Nonlinear dynamic system identification

The identified nonlinear dynamic system is described as follows:

$$y(t+1) = \frac{y(t)\,y(t-1)\,[y(t) + 2.5]}{1 + y^2(t) + y^2(t-1)} + u(t), \quad t \in [1, 200],$$
$$y(0) = 0, \quad y(1) = 0, \quad u(t) = \sin(2\pi t/25). \qquad (33)$$

To identify the plant, a series-parallel identification model governed by the following equation is used:

$$y(t+1) = f(y(t), y(t-1), u(t)) \qquad (34)$$

where f is the function implemented by the SGFNN, with three inputs and one output. There are 200 input-target data sets chosen as training data. The parameters of the SGFNN are set as follows: ε = 0.9, e_max = 0.5, e_min = 0.03, k_err = 0.0015, k_m = 0.5, ε_max = 0.8, ε_min = 0.5 and α = 320.
Simulation results are shown in Figs. 3 and 4. The membership functions of the input variables y(t), y(t−1) and u(t) are shown in Figs. 5 to 7.
Fig. 3. Fuzzy rule generation.

Fig. 4. Root Mean Squared Error (RMSE).

Fig. 5. Membership functions of input y(t).

Fig. 6. Membership functions of input y(t−1).

Fig. 7. Membership functions of input u(t).
A set of six fuzzy rules is generated, with five, four and four membership functions for the inputs y(t), y(t−1) and u(t), respectively. It can be seen that the number of membership functions of each input variable is not the same. Table 2 shows a comparison of structure and performance for different algorithms. The RMSE of the SGFNN is 0.0228, which is less than that of the other algorithms, and the total number of parameters is 44, which is also less than that of the other algorithms. As seen in Table 2, for the nonlinear dynamic system identification problem, the proposed SGFNN algorithm outperforms the other learning algorithms, providing satisfactory RMSE performance in spite of a simpler network structure.
Table 2. Results of nonlinear dynamic system identification.

Algorithms    Number of fuzzy rules   Number of parameters   RMSE
OLS[39]       65                      326                    0.0288
RBF-AFS[21]   35                      280                    0.1384
DFNN[10]      6                       48                     0.0283
GDFNN[38]     6                       48                     0.0241
SGFNN         6                       44                     0.0228
4.3. Example 3: Mackey-Glass time-series prediction

The Mackey-Glass time series[21] is a benchmark problem which has been considered by many researchers. The time series is generated by

$$x(t+1) = (1-a)\,x(t) + \frac{b\,x(t-\tau)}{1 + x^{10}(t-\tau)}. \qquad (35)$$

The same parameters as in Refs. 10 and 21, i.e., a = 0.1, b = 0.2, τ = 17 and the initial condition x(0) = 1.2, are chosen. The prediction model is also the same as in Refs. 10 and 21, i.e.,

$$x(t+6) = f[x(t), x(t-6), x(t-12), x(t-18)]. \qquad (36)$$
For the purpose of training and testing, 4000 samples are generated between t = 0 and t = 4000 from (35), with the initial conditions x(t) = 0 for t < 0 and x(0) = 1.2. We choose 1000 data points between t = 124 and t = 1123 to prepare the input and output sample data for (36). In order to demonstrate the prediction ability of the SGFNN approach, another 1000 data points between t = 1124 and t = 2123 are used for testing. Simulation results and comparisons with the OLS,[39] the RBF-AFS[21] and the DFNN[10] are presented in Table 3.
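The data preparation described above can be sketched as follows; the discrete recursion is an illustrative reading of Eq. (35) with the stated parameters, not the authors' code.

```python
import numpy as np

# Parameters as stated: a = 0.1, b = 0.2, tau = 17, x(0) = 1.2,
# and x(t) = 0 for t < 0.
a, b, tau = 0.1, 0.2, 17
x = np.zeros(4001)
x[0] = 1.2
for t in range(4000):
    xd = x[t - tau] if t >= tau else 0.0            # delayed term
    x[t + 1] = (1 - a) * x[t] + b * xd / (1 + xd ** 10)   # Eq. (35)

# Eq. (36): 1000 training pairs for t = 124, ..., 1123.
train_in = np.array([[x[t], x[t - 6], x[t - 12], x[t - 18]]
                     for t in range(124, 1124)])
train_out = np.array([x[t + 6] for t in range(124, 1124)])
```

The test pairs for t = 1124, ..., 2123 are built the same way from the tail of the series.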
From the simulation results, it is clear that the SGFNN can obtain better performance with lower
Table 3. Comparisons of structure and performance with different algorithms.

Algorithms    Number of fuzzy rules   Training RMSE   Testing RMSE
OLS[39]       13                      0.0158          0.0163
RBF-AFS[21]   21                      0.0107          0.0128
DFNN[10]      5                       0.0132          0.0131
SGFNN         7                       0.0112          0.0113
RMSE for training and testing, even though it generates more rules than the DFNN.[10] Moreover, the SGFNN shows superiority over the OLS[39] and the RBF-AFS[21] in terms of the testing RMSE and network structure, respectively.
4.4. Example 4: Fuel consumption prediction of automobiles

In order to further validate the performance of the proposed SGFNN algorithm, comparisons of the SGFNN with other popular learning algorithms[22,23,27,28,33,35,42] are presented for the benchmark prediction problem named auto-mpg prediction.[40] All the simulation results are averaged over 50 trials. The average RMSE values for training and testing are calculated and compared in this section.
The auto-mpg problem is to predict the fuel consumption (miles per gallon) of different models of cars based on the displacement, horsepower, weight and acceleration of the cars. A total of 392 observations are collected for the prediction problem. Each observation consists of seven inputs (four continuous ones: displacement, horsepower, weight and acceleration; and three discrete ones: cylinders, model year and origin) and one continuous output (the fuel consumption). For simplicity, the seven input attributes and the output have been normalized to the range [0, 1]. For the sake of comparison with other learning algorithms, 320 training data and 72 testing data are randomly chosen from the auto-mpg database in each trial of the simulation studies. Table 4 summarizes the results for the auto-mpg regression problem in terms of training RMSE, testing RMSE and the number of generated fuzzy rules. The number of fuzzy rules for the OS-ELM(RBF)[42] was determined based on a model selection process, while for the other algorithms it is generated automatically. As observed from Table 4, the average number of fuzzy rules of the SGFNN is 5.15, which is slightly more than the other algorithms except the OS-ELM(RBF).[42] The average training RMSE of the SGFNN is 0.0613, which is less than that of the other algorithms except the FAOS-PFNN,[35] meaning that the approximation performance of the SGFNN is better than the other algorithms except the FAOS-PFNN.[35] It should be highlighted that the SGFNN has the lowest testing RMSE and the best generalization performance among all the learning algorithms.
Table 4. Comparisons of the SGFNN with different algorithms on the auto-mpg problem.

Algorithms        Number of fuzzy rules   Training RMSE   Testing RMSE
RAN[27]           4.44                    0.2923          0.3080
RANEKF[28]        5.14                    0.1088          0.1387
MRAN[22,23]       4.46                    0.1086          0.1376
GAP-RBF[33]       3.12                    0.1144          0.1028
OS-ELM(RBF)[42]   25                      0.0696          0.0759
FAOS-PFNN[35]     2.9                     0.0321          0.0775
SGFNN             5.15                    0.0613          0.0658
4.5. Example 5: Real-world drug delivery system

For the real-world application, we employ the SGFNN to model the unknown nonlinearities of a complex blood pressure system. We investigate the use of the fuzzy neural network technique for modeling and automatic control of the mean arterial pressure (MAP) through the intravenous infusion of sodium nitroprusside (SNP).

Control of the MAP in many clinical situations, such as certain operating procedures for hypertensive patients, is one attractive application of postsurgical drug delivery systems. A powerful medication for control of the MAP is SNP, which has emerged as an effective vasodilator drug.[43] A model of the MAP[41] of a patient under the influence of SNP is given as follows:

$$\mathrm{MAP}(t) = p_0 + p(t) + p_d(t) + n(t) \qquad (37)$$
where MAP(t) is the mean arterial pressure, p_0 is the initial blood pressure, p(t) is the change in pressure due to the infusion rate of SNP, p_d(t) is the change in pressure due to the renin reflex action, which is the body's reaction to the use of a vasodilator drug, and n(t) is stochastic background noise.
A nominal discrete-time model of the MAP of a patient under the influence of SNP is given as follows:

$$y(k) = f[y(k-1), u(k-d), u(k-m)] = a_0 y(k-1) + b_0 u(k-d) + b_1 u(k-m) + n(k) \qquad (38)$$
where y(k) is the output of the system, which represents the change in MAP from the initial blood pressure at discrete time k; u(k) is the input of the system, which represents the infusion rate of SNP at discrete time k; d and m are integer delays, which represent the initial transport delay and the recirculation time delay, respectively; a_0, b_0 and b_1 are parameters which may vary considerably from patient to patient or within the same patient under different conditions; and n(k) is an unknown disturbance term which may contain unmodeled dynamics, disturbances, measurement noise, effects due to sampling of continuous-time signals, etc. The model is also known as an autoregressive with exogenous inputs model.
Using linear modeling techniques, the parameters a_0, b_0 and b_1 are assumed to be constant, thus resulting in a linear system, and the time delays d and m are constant integers in (38). This is a restrictive assumption, since in practical systems these values may vary from patient to patient or within the same patient under different conditions. It has been suggested that d and m have a general range between 30 s and 80 s.[44]
In the context of using the SGFNN for blood
pressure control, the FNN is viewed as a modeling
method: the knowledge about the system dynamics and
mapping characteristics is stored in the network.
Here, the direct inverse control method is used to control the blood pressure system. In this method, which
is based on the reference model of the system, the FNN is used to learn and approximate the
inverse dynamics of the drug delivery system, and
the resulting FNN is then used to estimate the drug
infusion rate given the desired blood pressure level
r(t). When the FNN is used as a controller in a drug
delivery system, the control objective is to obtain an appropriate control input u(t) that makes the output of the
system y(t) approximate the desired blood pressure level
r(t). The control procedure consists of two stages: (1)
the learning stage and (2) the application stage. In the learning
stage, the FNN is used to identify the inverse dynamics of the drug delivery system, while in the application
stage the FNN acts as a controller to generate the
appropriate control input. The direct inverse control
method is shown in Fig. 8. It can be easily derived
from (38) that the inverse model of the dynamic sys-
tem is given by
u(k) = f^{-1}[y(k + d), y(k - 1 + d), . . . , u(k - m + d)]   (39)
The generation of u(k) requires knowledge of the
future values y(k + d) and y(k - 1 + d). To overcome this problem, they are usually replaced by their
reference values r(k + d) and r(k - 1 + d). Another
problem is that the inverse function f^{-1} may not
always exist.
Fig. 8. Control structure of drug delivery system.
Instead of considering the existence of
the function f^{-1}, the inverse model of the dynamic
system can always be configured in a nonlinear
regression model as follows
u(k) = g[y(k), y(k - 1), . . . , y(k - m + d)] = G(z, k)   (40)
where z = [y(k), y(k - 1), . . . , y(k - m + d)]^T.
The SGFNN is trained to obtain an estimate of
the inverse dynamics as illustrated in Fig. 8. The
output of the SGFNN is calculated as
u_SGFNN(z, k) = W D(z)   (41)
where W denotes the consequent parameters of the network.
The inverse dynamics of the drug delivery system is
identified by the SGFNN, which is then used as a
controller to generate the control output. The objective of the simulation studies is to demonstrate the
capability of the SGFNN to approximate a dynamic
system and control a drug delivery system based on the
sensitive model.
Without any loss of generality, we assume the
integer delays d = 3 and m = 6, and a sampling time of
15 s. We use the sensitive model of the drug
delivery system, which is described as
y(k) = 0.606y(k - 1) + 3.5u(k - 3) + 1.418u(k - 6)   (42)
In the simulation study, the FNN is trained to
model the inverse dynamics of the drug delivery
system. The input signal to the system (SNP infusion rate) for SGFNN training is set as u(k) =
|A sin(2πk/250)|, where A is set to 10. For the
purpose of training, 200 training samples are generated from (42) with the initial conditions y(k) = 0 and
u(t) = 0 for t ≤ 0.
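The training-data generation described above can be sketched as follows. The pairing of inputs and targets follows (43), assuming all samples are available offline; this is an illustration of the setup, not the authors' exact preprocessing.

```python
import math

# Sensitive patient model (42) driven by the rectified-sine excitation
# u(k) = |A sin(2*pi*k/250)| with A = 10, as in the training setup.
A, N = 10.0, 200
u = [abs(A * math.sin(2 * math.pi * k / 250)) for k in range(N)]

y = [0.0] * N
for k in range(N):
    y_prev = y[k - 1] if k >= 1 else 0.0
    y[k] = 0.606 * y_prev \
         + 3.5 * (u[k - 3] if k >= 3 else 0.0) \
         + 1.418 * (u[k - 6] if k >= 6 else 0.0)

# Inverse-model training pairs following (43): inputs (y(k), y(k-3)),
# target u(k).  Offline, every required sample is already recorded.
X = [(y[k], y[k - 3]) for k in range(3, N)]
t = [u[k] for k in range(3, N)]
print(len(X), len(t))  # 197 197
```

The first few samples are dropped because the delayed regressor y(k - 3) is undefined there, which is why slightly fewer than 200 pairs result.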
The inverse model of (42) is given by
u(k) = f(y(k), y(k - 3))   (43)
The parameters of the SGFNN are set as follows:
e_max = 0.2, e_min = 0.02, k_err = 0.0001, k_m = 0.95,
with the remaining design constants set to 0.9, 0.95,
0.8, 0.5 and 400, respectively.
The results are illustrated in Figs. 9 and 10. A
total of seven fuzzy rules are generated during the
training process. The RMSE at the end of the training process is shown in Fig. 10.
Fig. 9. Fuzzy rule generation.
Fig. 10. Root Mean Squared Error (RMSE) during the training process.
It can be seen from these figures that the proposed
FNN can model the drug delivery system very well,
as the RMSE is 0.0134 at the end of the training process. After training, the SGFNN is tested for online
adaptive control of the system. The reference trajec-
tory represents a reduction of MAP from 140 mmHg
to 100 mmHg initially, maintaining the level at
100 mmHg. Considering another situation in the simulation study, the output measurement y(k) is corrupted
by white noise n(k) with a variance of 1 mmHg
(resulting in a peak-to-peak noise level of approximately 4 mmHg), which is considered a moderate
noise level in the physiological system. Simulation results are shown in Figs. 11 to 13. Figure 11
depicts a comparison between the actual change and the
desired change of blood pressure. The long-dash
curve denotes the desired change of blood pressure,
which is maintained at 0 mmHg initially and then
increases to 40 mmHg at t_s = 500. The solid curve
denotes the actual change of blood pressure in the
drug delivery system. Figure 12 demonstrates that
the SGFNN controller is able to regulate the MAP
to the desired set-point even with the noise. It can
be seen from Fig. 12 that, for the sensitive condition
(k = 2.88 mmHg/ml/h), the overshoot of the MAP is
3.76%, which is less than that of the fuzzy controller,^45
i.e. 13.2%. The actual infusion rate of the SNP is
demonstrated in Fig. 13.
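As a rough consistency check of the control structure, the loop of Fig. 8 can be emulated with the exact algebraic inverse of (42) standing in for the trained SGFNN, with the unknown future output replaced by the reference value as described above. This is a noiseless sanity-check sketch, not the SGFNN controller itself.

```python
# Direct inverse control of the sensitive model (42): the controller
# solves (42) for u(k) given the desired y(k+3), replacing the unknown
# future output y(k+2) by the reference r (measurement noise omitted).
N, r_level = 120, 40.0   # desired change of blood pressure (mmHg)
u = [0.0] * N
y = [0.0] * N
for k in range(N):
    u_old = u[k - 3] if k >= 3 else 0.0
    # inverse of (42): u(k) = (r - 0.606*r - 1.418*u(k-3)) / 3.5
    u[k] = (r_level - 0.606 * r_level - 1.418 * u_old) / 3.5
    # plant update per (42)
    y_prev = y[k - 1] if k >= 1 else 0.0
    y[k] = 0.606 * y_prev \
         + 3.5 * (u[k - 3] if k >= 3 else 0.0) \
         + 1.418 * (u[k - 6] if k >= 6 else 0.0)
print(round(y[-1], 1))  # -> 40.0, i.e. the set-point is reached
```

Because the inverse recursion for u(k) has a contraction ratio of 1.418/3.5 and the plant pole is 0.606, the closed loop settles on the set-point without steady-state error in this idealized case.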
Fig. 11. Actual change of blood pressure (solid curve) and
desired change of blood pressure (long-dash curve).
Fig. 12. Real MAP using the SGFNN controller.
Fig. 13. Actual infusion rate of the SNP.
5. Discussions
The basic idea of the proposed SGFNN algorithm
is to construct a TSK fuzzy system based on EBF
neural networks. The motivation of this paper is to
provide a simple and efficient algorithm to configure
a fuzzy neural network so that (1) the system can
be used as a modeling tool to model and control
a nonlinear dynamic system, and (2) the system can
be used for real-world benchmark prediction problems. Many related learning algorithms
have been developed by other researchers, as discussed
in Sec. 1. Here, we give a comparative
study between other state-of-the-art algorithms and
the proposed SGFNN.
5.1. Structure identification
The structure identification of the proposed algorithm is self-adaptive. The resulting structure
depends critically on the generation and pruning criteria of the learning algorithm.
5.2. Parameter adjustment
The method of parameter adjustment has a great
impact on the learning speed of the proposed learning algorithm. In the proposed algorithm, nonlinear parameters (premise parameters) are directly
adjusted during the learning process. On the other
hand, linear parameters (consequent parameters) are
modified in each step by the KF method, in which
the solution is globally optimal. The learning speed
is much faster than that of other algorithms^{9,15,21} in which
the back-propagation (BP) algorithm is employed.
The BP method is well known to be slow and easily
trapped in local minima.
If the SGFNN is employed for an online identification or control process, the adaptive capability of
the KF algorithm decreases as more sample
data are collected, especially if the identified system
is to account for time-varying characteristics of the
incoming data. Therefore, the effect of old training
data should decay when new data arrive. One commonly used approach is to add a forgetting factor λ
to (30):
S_t = (1/λ)[S_{t-1} - S_{t-1} Ψ_i Ψ_i^T S_{t-1} / (1 + Ψ_i^T S_{t-1} Ψ_i)],  i = 1, 2, . . . , n   (44)
where 0 < λ < 1.
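The forgetting-factor update in (44) can be sketched as a recursive least-squares routine. The names below (`S` for the covariance matrix, `phi` for the regressor, `theta` for the consequent parameters, `lam` for λ) are generic placeholders, and the routine is an illustration of the technique rather than the paper's implementation.

```python
def kf_forgetting(X, t, lam=0.98, s0=1e4):
    """KF/RLS consequent update with a forgetting factor, following (44):

        S_t = (1/lam) * (S_{t-1} - S_{t-1} phi phi^T S_{t-1}
                                   / (1 + phi^T S_{t-1} phi))

    `lam` in (0, 1) discounts old samples so that recent data dominate.
    """
    n = len(X[0])
    S = [[s0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    theta = [0.0] * n
    for phi, ti in zip(X, t):
        Sphi = [sum(S[i][j] * phi[j] for j in range(n)) for i in range(n)]
        denom = 1.0 + sum(phi[i] * Sphi[i] for i in range(n))
        # covariance update with forgetting, as in (44)
        S = [[(S[i][j] - Sphi[i] * Sphi[j] / denom) / lam for j in range(n)]
             for i in range(n)]
        # parameter update driven by the prediction error
        err = ti - sum(phi[i] * theta[i] for i in range(n))
        theta = [theta[i] + sum(S[i][j] * phi[j] for j in range(n)) * err
                 for i in range(n)]
    return theta

# Recover a static linear map t = 2*x1 - x2 from noiseless data.
X = [(1.0, 0.5), (0.3, 2.0), (2.0, 1.0), (0.7, 0.2),
     (1.5, 1.5), (0.2, 1.0), (1.1, 0.4), (0.9, 1.8)]
t = [2 * x1 - x2 for x1, x2 in X]
theta = kf_forgetting(X, t)
print(theta)  # close to the true parameters [2, -1]
```

With λ = 1 this reduces to the ordinary KF/RLS update; values of λ slightly below 1 trade estimation variance for the ability to track time-varying parameters.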
5.3. Generalization
Another important issue of FNNs is generalization
capability. Note that the approximation and generalization capability of the resulting FNN depends
on the structure and parameters of the system. In
this paper, two criteria are used to create the fuzzy
rules and the KF method is adopted to update the
consequent parameters of the SGFNN. It follows
that a parsimonious network structure and suitable
weights in the consequents can be obtained simultaneously. Consequently, the resulting fuzzy neural network is able to achieve good generalization accuracy.
As seen from the auto-mpg prediction, the SGFNN
obtains the best generalization performance of all
the learning algorithms compared.
6. Conclusions
In this paper, a novel efficient algorithm for
constructing a self-generating fuzzy neural network
(SGFNN) that performs as a TSK fuzzy system based on
EBF neural networks has been proposed. Structure and parameter identification of the SGFNN
can be done automatically and simultaneously.
Structure learning is based on criteria for generating and pruning neurons. The KF algorithm has
been used to adjust the consequent parameters of the
SGFNN. The effectiveness of the proposed algorithm has been demonstrated in nonlinear function
approximation, nonlinear dynamic system identification, time-series prediction and a real-world benchmark prediction problem. Simulation results show
that a more efficient fuzzy neural network with
high accuracy and compact structure can be self-generated by the proposed SGFNN. Comprehensive
comparisons with other well-known learning algorithms have been presented in this paper. In particular, an adaptive
modeling and control scheme based on the SGFNN
for a drug delivery system has been presented. The proposed
SGFNN is a novel intelligent modeling tool which
can model the unknown nonlinearities of the complex
drug delivery system and adapt online to changes
and uncertainties in the system.
References
1. A. Panakkat and H. Adeli, Recurrent neural network for approximate earthquake time and location prediction using multiple seismicity indicators, Computer-Aided Civil and Infrastructure Engineering 24(4) (2009) 280-292.
2. H. Adeli and X. Jiang, Neuro-fuzzy logic model for freeway work zone capacity estimation, Journal of Transportation Engineering 129(5) (2003) 484-493.
3. H. Adeli and X. Jiang, Dynamic fuzzy wavelet neural network model for structural system identification, Journal of Structural Engineering 132(1) (2006) 102-111.
4. G. Puscasu and B. Codres, Nonlinear system identification based on internal recurrent neural networks, International Journal of Neural Systems 19(2) (2009) 115-125.
5. S. L. Hung and H. Adeli, A parallel genetic/neural network learning algorithm for MIMD shared memory machines, IEEE Transactions on Neural Networks 5(6) (1994) 900-909.
6. H. M. Elragal, Improving neural networks prediction accuracy using particle swarm optimization combiner, International Journal of Neural Systems 19(5) (2009) 387-393.
7. D. Wu, K. Warwick, Z. Ma, M. N. Gasson, J. G. Burgess, S. Pan and T. Z. Aziz, Prediction of Parkinson's disease tremor onset using radial basis function neural network based on particle swarm optimization, International Journal of Neural Systems 20(2) (2010) 109-116.
8. J. S. Wang and C. S. G. Lee, Efficient neuro-fuzzy control systems for autonomous underwater vehicle control, in Proceedings of the IEEE International Conference on Robotics and Automation 3 (2001) 2986-2991.
9. C. F. Juang and C. T. Lin, An on-line self-constructing neural fuzzy inference network and its applications, IEEE Transactions on Fuzzy Systems 6(1) (1998) 12-32.
10. S. Wu and M. J. Er, Dynamic fuzzy neural networks: a novel approach to function approximation, IEEE Transactions on Systems, Man, and Cybernetics, Part B, Cybernetics 30(2) (2000) 358-364.
11. C. T. Lin and C. S. G. Lee, Neural Fuzzy Systems: A Neuro-Fuzzy Synergism to Intelligent Systems (Prentice Hall, Upper Saddle River, 1996).
12. D. Nauck, Neuro-fuzzy systems: Review and prospects, in Proceedings of the 5th European Congress on Intelligent Techniques and Soft Computing (EUFIT'97) (1997) 1044-1053.
13. J. S. R. Jang and C. T. Sun, Functional equivalence between radial basis function networks and fuzzy inference systems, IEEE Transactions on Neural Networks 4(1) (1993) 156-159.
14. A. M. Schaefer and H. G. Zimmermann, Recurrent neural networks are universal approximators, International Journal of Neural Systems 17(4) (2007) 253-263.
15. D. A. Linkens and H. O. Nyongesa, Learning systems in intelligent control: An appraisal of fuzzy, neural and genetic algorithm control applications, IEE Proceedings on Control Theory and Applications 143 (1996) 367-386.
16. M. Sugeno and G. T. Kang, Structure identification of fuzzy model, Fuzzy Sets and Systems 28(1) (1988) 15-33.
17. K. Tanaka, M. Sano and H. Watanabe, Modeling and control of carbon monoxide concentration using a neuro-fuzzy technique, IEEE Transactions on Fuzzy Systems 3(3) (1995) 271-279.
18. Y. Gao and M. J. Er, An intelligent adaptive control scheme for postsurgical blood pressure regulation, IEEE Transactions on Neural Networks 16(2) (2005) 475-483.
19. D. C. Theodoridis, Y. S. Boutalis and M. A. Christodoulou, Indirect adaptive control of unknown multivariable nonlinear systems with parametric and dynamic uncertainties using a new neuro-fuzzy system description, International Journal of Neural Systems 20(2) (2010) 129-148.
20. J. P. Deng, N. Sundararajan and P. Saratchandran, Communication channel equalization using complex-valued minimal radial basis function neural networks, IEEE Transactions on Neural Networks 13(6) (2002) 687-696.
21. K. B. Cho and B. H. Wang, Radial basis function based adaptive fuzzy systems and their applications to system identification and prediction, Fuzzy Sets and Systems 83(3) (1996) 325-339.
22. Y. Lu, N. Sundararajan and P. Saratchandran, A sequential learning scheme for function approximation using minimal radial basis function neural networks, Neural Computation 9(2) (1997) 461-478.
23. Y. Lu, N. Sundararajan and P. Saratchandran, Performance evaluation of a sequential minimal radial basis function (RBF) neural network learning algorithm, IEEE Transactions on Neural Networks 9(2) (1998) 308-318.
24. J. S. R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man, and Cybernetics 23(3) (1993) 665-684.
25. G. Leng, G. Prasad and T. M. McGinnity, An on-line algorithm for creating self-organizing fuzzy neural networks, Neural Networks 17(10) (2004) 1477-1493.
26. G. Leng and T. M. McGinnity, Design for self-organizing fuzzy neural networks based on genetic algorithms, IEEE Transactions on Fuzzy Systems 14(6) (2006) 755-765.
27. J. Platt, A resource-allocating network for function interpolation, Neural Computation 3(2) (1991) 213-225.
28. V. Kadirkamanathan and M. Niranjan, A function estimation approach to sequential learning with neural networks, Neural Computation 5(6) (1993) 954-975.
29. M. Salmeron, J. Ortega, C. G. Puntonet and A. Prieto, Improved RAN sequential prediction using orthogonal techniques, Neurocomputing 41(1) (2001) 153-172.
30. A. Samant and H. Adeli, Enhancing neural network incident detection algorithms using wavelets, Computer-Aided Civil and Infrastructure Engineering 16(4) (2001) 239-245.
31. A. Karim and H. Adeli, Comparison of the fuzzy wavelet RBFNN freeway incident detection model with the California algorithm, Journal of Transportation Engineering 128(1) (2002) 21-30.
32. X. Jiang and H. Adeli, Dynamic fuzzy wavelet neuroemulator for nonlinear control of irregular high-rise building structures, International Journal for Numerical Methods in Engineering 74(7) (2008) 1045-1066.
33. G. B. Huang, P. Saratchandran and N. Sundararajan, An efficient sequential learning algorithm for growing and pruning RBF (GAP-RBF) networks, IEEE Transactions on Systems, Man, and Cybernetics, Part B, Cybernetics 34(6) (2004) 2284-2292.
34. G. B. Huang, P. Saratchandran and N. Sundararajan, A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation, IEEE Transactions on Neural Networks 16(1) (2005) 57-67.
35. N. Wang, M. J. Er and X. Y. Meng, A fast and accurate online self-organizing scheme for parsimonious fuzzy neural networks, Neurocomputing 72(16-18) (2009) 3818-3829.
36. C. C. Lee, Fuzzy logic in control systems: Fuzzy logic controller, Parts I and II, IEEE Transactions on Systems, Man, and Cybernetics 20(2) (1990) 404-436.
37. L. X. Wang, A Course in Fuzzy Systems and Control (Prentice Hall, Englewood Cliffs, 1997).
38. S. Wu, M. J. Er and Y. Gao, A fast approach for automatic generation of fuzzy rules by generalized dynamic fuzzy neural networks, IEEE Transactions on Fuzzy Systems 9(4) (2001) 578-594.
39. S. Chen, C. F. N. Cowan and P. M. Grant, Orthogonal least squares learning algorithm for radial basis function networks, IEEE Transactions on Neural Networks 2(2) (1991) 302-309.
40. A. Frank and A. Asuncion, UCI Machine Learning Repository [http://archive.ics.uci.edu/ml], Irvine, CA: University of California, School of Information and Computer Science (2010).
41. J. B. Slate, Model-Based Design of a Controller for Infusing Sodium Nitroprusside During Postsurgical Hypertension, Ph.D. dissertation (Univ. of Wisconsin, Madison, WI, 1980).
42. N. Y. Liang, G. B. Huang, P. Saratchandran and N. Sundararajan, A fast and accurate online sequential learning algorithm for feedforward networks, IEEE Transactions on Neural Networks 17(6) (2006) 1411-1423.
43. J. G. Reves, L. G. Sheppard, R. Wallach and W. A. Lell, Therapeutic uses of sodium nitroprusside and an automated method of administration, International Anesthesiology Clinics 16(2) (1978) 51-88.
44. S. Isaka and A. V. Sebald, Control strategies for arterial blood pressure regulation, IEEE Transactions on Biomedical Engineering 40(4) (1993) 353-363.
45. H. Ying, M. McEachern, D. W. Eddleman and L. C. Sheppard, Fuzzy control of mean arterial pressure in postsurgical patients with sodium nitroprusside infusion, IEEE Transactions on Biomedical Engineering 39(10) (1992) 1060-1070.