
International Journal of Computational Cognition (http://www.YangSky.com/yangijcc.htm)
Volume 1, Number 4, Pages 79-90, December 2003
Publisher Item Identifier S 1542-5908(03)10404-6/$20.00
Article electronically published on December 25, 2002 at http://www.YangSky.com/ijcc14.htm. Please cite this paper as: Ching-Hung Lee, Jang-Lee Hong, Yu-Ching Lin, and Wei-Yu Lai, "Type-2 Fuzzy Neural Network Systems and Learning," International Journal of Computational Cognition (http://www.YangSky.com/yangijcc.htm), Volume 1, Number 4, Pages 79-90, December 2003.

TYPE-2 FUZZY NEURAL NETWORK SYSTEMS AND LEARNING

CHING-HUNG LEE, JANG-LEE HONG, YU-CHING LIN, AND WEI-YU LAI

Abstract. This paper presents a type-2 fuzzy neural network system (type-2 FNN) and its learning using a genetic algorithm. The so-called type-1 fuzzy neural network (FNN) offers a parallel computation scheme, easy implementation, a fuzzy logic inference system, and parameter convergence, and its membership functions (MFs) and rules can be designed and trained from linguistic information and numeric data. However, there is uncertainty associated with such information or data, and type-2 fuzzy sets are used to treat it. Type-2 fuzzy sets let us model and minimize the effects of uncertainties in rule-based fuzzy logic systems (FLSs). In this paper, the previous results on the type-1 FNN are extended to a type-2 one, and the corresponding learning algorithm is derived using a real-coded genetic algorithm. Copyright 2002 Yang's Scientific Research Institute, LLC. All rights reserved.

1. Introduction

Recently, intelligent systems, including fuzzy logic systems, neural networks, and genetic algorithms, have been successfully used in a wide variety of applications. Fuzzy neural network systems (neuro-fuzzy systems), which combine the advantages of fuzzy logic systems and neural networks, have become a very active subject in many scientific and engineering areas, such as model reference control problems, PID controller tuning, and signal processing [2,3,6-11]. In our previous results, the FNN offers a parallel computation scheme, easy implementation, a fuzzy logic inference system, and parameter convergence. The membership functions (MFs) and rules can be designed and trained from linguistic information and numeric data. Thus, it is easy to design an FNN system to achieve a satisfactory level
Received by the editors December 18, 2002 / final version received December 23, 2002.
Key words and phrases. Fuzzy neural network, type-2 fuzzy sets, genetic algorithm.
This work is supported by the National Science Council, Taiwan, R.O.C., under Grant NSC-91-2213-E155-012.
Copyright 2002 Yang's Scientific Research Institute, LLC. All rights reserved.

of accuracy by manipulating the network structure and learning algorithm of the FNN. However, there is uncertainty associated with information or data, and type-2 fuzzy sets are used to treat it.
Recently, Mendel and Karnik [5,12,14,15] have developed a complete theory of type-2 fuzzy logic systems. These systems are again characterized by IF-THEN rules, but their antecedent or consequent sets are type-2. A type-2 fuzzy set can represent and handle uncertain information effectively; that is, type-2 fuzzy sets let us model and minimize the effects of uncertainties in rule-based fuzzy logic systems (FLSs). The purpose of this paper is to develop a type-2 fuzzy neural network, i.e., to extend our previous results on the FNN to the type-2 case. The learning algorithm is derived by a genetic algorithm.
The genetic algorithm (GA) was first proposed by Holland in 1975 [13,17,18]. It is motivated by the mechanism of natural selection, a biological process in which stronger individuals are likely to be the winners in a competing environment. It provides an alternative to traditional optimization techniques by using directed random searches to locate optimal solutions in complex problems [1,4,13,17,18]. Recently, the GA has emerged as a popular family of methods for global optimization. Through the use of genetic operations, the GA performs a search by evolving a population of potential solutions [17,18].
The organization of this paper is as follows. In Section 2, we briefly introduce the preliminaries: the type-1 fuzzy neural network, the genetic algorithm, and type-2 fuzzy sets. Section 3 presents the main result: type-2 FNN systems and their learning algorithm. Finally, conclusions are summarized in Section 4.
2. Preliminaries
2.1. Fuzzy Neural Network (Type-1 FNN System). The fuzzy neural network (FNN) system is a fuzzy inference system realized in a neural network structure [2,3,7,10,11]. A schematic diagram of the four-layered FNN is shown in Fig. 1. It is a static version of the recurrent fuzzy neural network (RFNN) [7]. The type-1 FNN system has four layers in total. Nodes in layer one are input nodes representing input linguistic variables. Nodes in layer two are membership nodes; here, the Gaussian function is used as the membership function (MF). Each membership node is responsible for mapping an input linguistic variable into a possibility distribution for that variable. The rule nodes reside in layer three, and the last layer contains the output variable nodes. More details about FNNs, convergence theorems, and the learning algorithm can be found in [6-9]. The FNN used here has also been shown to be a universal approximator: for any given real function h : R^n -> R^p, continuous on a compact set K contained in R^n, and arbitrary \epsilon > 0, there exists an FNN system F(x, W) such that \|F(x, W) - h(x)\| < \epsilon for every x in K.

Figure 1. Schematic diagram of fuzzy neural networks.


For generality, we consider m fuzzy rules, which can be treated independently, as with the jth fuzzy rule in Figure 2. The simplified fuzzy reasoning is described as follows.

Given the training input data x_k, k = 1, 2, ..., n, and the desired outputs y_p, p = 1, 2, ..., m, the jth control rule has the following form:

R^j: IF x_1 is A_1^j and ... and x_n is A_n^j THEN y_1 is \theta_1^j and ... and y_m is \theta_m^j

where j is the rule number, the A_q^j are membership functions of the antecedent part, and the \theta_p^j are real numbers of the consequent part. When the inputs are given, the truth value \alpha_j of the premise of the jth rule is calculated by

(1)    \alpha_j = A_1^j(x_1) \cdot A_2^j(x_2) \cdots A_n^j(x_n).

Among the commonly used defuzzification strategies, the simplified fuzzy reasoning yields a superior result. The output y_p of the fuzzy reasoning can be derived from the following equation:

(2)    y_p = \sum_i \theta_p^i \alpha_i, \quad p = 1, 2, ..., m

82

LEE, HONG, LIN, AND LAI

where \alpha_i is the truth value of the premise of the ith rule.
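The simplified fuzzy reasoning of Eqs. (1)-(2) can be sketched as follows for a one-output system. The Gaussian MF form matches the paper; the specific rule parameters are illustrative values, not ones taken from the paper.

```python
import math

def gaussian_mf(x, mean, sigma):
    """Type-1 Gaussian membership grade of a crisp input x."""
    return math.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

def fnn_output(x, rules):
    """Simplified fuzzy reasoning of Eqs. (1)-(2) for a one-output FNN.
    Each rule is (means, sigmas, theta): antecedent Gaussian MFs and a
    crisp consequent value theta."""
    y = 0.0
    for means, sigmas, theta in rules:
        # Eq. (1): truth value = product of antecedent MF grades
        alpha = 1.0
        for xi, m, s in zip(x, means, sigmas):
            alpha *= gaussian_mf(xi, m, s)
        # Eq. (2): weighted sum of truth values (no normalization)
        y += theta * alpha
    return y

# Two illustrative rules over two inputs (parameter values are made up)
rules = [((-1.0, 0.0), (1.0, 1.0),  2.0),
         (( 1.0, 0.0), (1.0, 1.0), -1.0)]
print(fnn_output((0.0, 0.0), rules))
```

Note that Eq. (2) is the simplified (unnormalized) reasoning; a weighted-average variant would divide by the sum of the truth values.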

Figure 2. Construction of the jth component of the FNN.

2.2. Type-2 Fuzzy Sets. The concept of a type-2 fuzzy set was initially proposed as an extension of ordinary (type-1) fuzzy sets by Prof. Zadeh [19]. A clear definition of type-2 fuzzy sets was then given by Mizumoto and Tanaka [16]. Recently, Mendel and Karnik [5,12,14,15] have developed a complete theory of type-2 fuzzy logic systems (FLSs). These systems are again characterized by IF-THEN rules, but their antecedent or consequent sets are type-2. A type-2 fuzzy set can represent and handle uncertain information effectively; that is, type-2 fuzzy sets let us model and minimize the effects of uncertainties in rule-based FLSs. As noted in the literature [14,15], there are at least four sources of uncertainty in type-1 FLSs, e.g., the antecedents and consequents of rules, measurement noise, and noisy training data. All of these uncertainties can be translated into uncertainties about fuzzy MFs. Type-1 fuzzy sets cannot treat them because their MFs are crisp: type-1 MFs are two-dimensional, whereas type-2 MFs are three-dimensional. It is the new third dimension of type-2 MFs that makes it possible to model the uncertainties.
Subsequently, we use the following notation and terminology to describe fuzzy sets. First, A is a type-1 fuzzy set, and the membership grade of x in X in A is \mu_A(x), which is a crisp number in [0,1]. A type-2 fuzzy set in X is \tilde{A}, and the membership grade of x in X in \tilde{A} is \mu_{\tilde{A}}(x), which is a type-1 fuzzy set in [0,1]. The type-2 fuzzy set \tilde{A} in X can be represented as

(3)    \mu_{\tilde{A}}(x) = f_x(u_1)/u_1 + f_x(u_2)/u_2 + \cdots + f_x(u_m)/u_m = \sum_i f_x(u_i)/u_i.
A useful notion for type-2 fuzzy sets is the footprint of uncertainty (FOU); see, e.g., Fig. 3 [5,12,14,15]. Figures 3(a) and 3(b) show Gaussian MFs with uncertain STD and Gaussian MFs with uncertain mean. These are used to develop the primary and consequent parts of the type-2 FNN systems. The MFs with uncertain mean and uncertain STD are described as

(4)    \mu_{\tilde{A}}(x) = \exp\left(-\frac{(x - m)^2}{2\sigma^2}\right), \quad m \in [m_1, m_2], \quad \text{and} \quad \mu_{\tilde{A}}(x) = \exp\left(-\frac{(x - m)^2}{2\sigma^2}\right), \quad \sigma \in [\sigma_1, \sigma_2],

respectively. Obviously, this type of membership grade can be represented as a bounded interval given by an upper MF and a lower MF, denoted \overline{\mu}_{\tilde{A}}(x) and \underline{\mu}_{\tilde{A}}(x). Details about type-2 fuzzy sets can be found in the literature [5,12,14,15].
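For the uncertain-mean case of Eq. (4), the upper and lower MF bounds can be computed as sketched below. The plateau-at-1 form of the upper MF inside [m_1, m_2] and the min-of-endpoints form of the lower MF follow the standard interval type-2 Gaussian construction [5,12,14,15]; the numeric parameters are illustrative.

```python
import math

def gauss(x, m, s):
    """Gaussian membership grade, as in Eq. (4)."""
    return math.exp(-((x - m) ** 2) / (2.0 * s ** 2))

def interval_mf_uncertain_mean(x, m1, m2, sigma):
    """Upper and lower membership grades for a Gaussian MF whose mean is
    uncertain, m in [m1, m2].  The upper MF plateaus at 1 inside [m1, m2];
    the lower MF is attained at the farther boundary mean."""
    if x < m1:
        upper = gauss(x, m1, sigma)
    elif x > m2:
        upper = gauss(x, m2, sigma)
    else:
        upper = 1.0
    lower = min(gauss(x, m1, sigma), gauss(x, m2, sigma))
    return lower, upper

lo, up = interval_mf_uncertain_mean(0.7, 0.4, 1.1, 0.3)
print(lo, up)  # lower <= upper always holds
```

The interval [lower, upper] is exactly the vertical slice of the FOU at x.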
Figure 3. Type-2 fuzzy sets: (a) Gaussian MFs with uncertain mean; (b) Gaussian MFs with uncertain STD.
The basics of fuzzy logic do not change from type-1 to type-2 sets. The difference between the two systems lies in the output processing: a type-2 FLS must use a type reducer to reduce its type-2 output fuzzy sets to type-1 sets. As Fig. 4 shows [5,12,14,15], the structure of a type-2 FLS is similar to that of a type-1 FLS; it includes a fuzzifier, a knowledge base, an inference engine, a type reducer, and a defuzzifier. Based on this block diagram, we will explain the FLS operations of type-2 FNN systems in the next section.


Figure 4. The block diagram of type-2 FLS.

2.3. Genetic Algorithm (GA). The GA uses a direct analogy of natural evolution. It presumes that a potential solution of a problem is an individual that can be represented by a set of parameters. These parameters can be structured as a string of values and are regarded as the genes of a chromosome. We briefly introduce the GA here. A population consists of a finite number of chromosomes. The GA evaluates a population and generates a new one iteratively, with each successive population referred to as a generation. The fitness value, a positive value used to reflect how good a chromosome is at solving the problem, is closely related to its objective value. In the operation process, an initial population P(0) is given, and the GA then generates a new generation P(t) based on the previous generation P(t-1). The GA uses three basic operators to manipulate the genetic composition of a population: reproduction, crossover, and mutation [1,4,17,18]. The most common representation in GAs is binary [4,13,18]: the chromosomes consist of sets of genes, which are generally characters belonging to the alphabet {0,1}. In this paper, however, a real-coded GA is used to tune the parameters. It is more natural to represent the genes directly as real numbers, since the representations of the solutions are then very close to the natural formulation. Therefore, a chromosome here is a vector of floating-point numbers. The crossover and mutation operators developed for this coding are introduced below.
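The generate-evaluate-reproduce cycle described above can be sketched as a minimal real-coded GA loop. The specific operator choices below (tournament of size 2, blend crossover, single-gene mutation, and all bounds and rates) are illustrative assumptions for the sketch, not the paper's exact settings.

```python
import random

def genetic_search(fitness, dim, pop_size=20, generations=60,
                   lo=-1.0, hi=1.0, p_cross=0.8, p_mut=0.1):
    """Minimal real-coded GA: P(t) is built from P(t-1) by tournament
    reproduction, blend crossover, and single-gene mutation."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]

    def tournament():
        # Reproduction: the fitter of two random individuals survives.
        a, b = random.sample(pop, 2)
        return list(a) if fitness(a) >= fitness(b) else list(b)

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            child = tournament()
            if random.random() < p_cross:
                # Crossover: blend the child toward a second parent.
                mate = tournament()
                beta = random.uniform(0.0, 0.5)
                child = [c + beta * (m - c) for c, m in zip(child, mate)]
            if random.random() < p_mut:
                # Mutation: perturb one randomly chosen gene.
                i = random.randrange(dim)
                child[i] += random.uniform(-0.1, 0.1)
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Maximize a fitness with optimum at every gene equal to 0.5
best = genetic_search(lambda c: -sum((g - 0.5) ** 2 for g in c), dim=3)
print(best)
```

Because only fitness comparisons drive the search, no gradient information is required, which is what makes the GA usable for the type-2 FNN training in Section 3.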


3. Type-2 Fuzzy Neural Network and GA

3.1. Type-2 FNN Systems. Here we consider a type-2 FLS with a rule base of R rules in the type-2 FNN system, i.e., n inputs and m outputs with R rules. The jth control rule is described in the following form:

R^j: IF x_1 is \tilde{A}_1^j and ... and x_n is \tilde{A}_n^j THEN y_1 is \tilde{\theta}_1^j and ... and y_m is \tilde{\theta}_m^j

where j is the rule number, the \tilde{A}_q^j are type-2 MFs of the antecedent part, and the \tilde{\theta}_p^j are type-1 fuzzy sets of the consequent part. The antecedent-part MFs are represented as an upper MF and a lower MF, denoted \overline{\mu}_{\tilde{A}}(x) and \underline{\mu}_{\tilde{A}}(x) (see Fig. 3). The consequent part is an interval set \tilde{\theta} = [\theta_l, \theta_r]. These rules let us simultaneously account for uncertainty about the antecedent membership functions and the consequent parameter values.
When the inputs are given, the firing strength of the jth rule is

(5)    F^j = \mu_{\tilde{A}_1^j}(x_1) \sqcap \mu_{\tilde{A}_2^j}(x_2) \sqcap \cdots \sqcap \mu_{\tilde{A}_n^j}(x_n)

where \sqcap is the meet operation [5,12,14,15]. Here the antecedent operation is the product t-norm; that is, the firing interval in (5) can be calculated by

(6)    \overline{\alpha}_j = \overline{\mu}_{\tilde{A}_1^j}(x_1) \cdot \overline{\mu}_{\tilde{A}_2^j}(x_2) \cdots \overline{\mu}_{\tilde{A}_n^j}(x_n) \quad \text{and} \quad \underline{\alpha}_j = \underline{\mu}_{\tilde{A}_1^j}(x_1) \cdot \underline{\mu}_{\tilde{A}_2^j}(x_2) \cdots \underline{\mu}_{\tilde{A}_n^j}(x_n).

Finally, type reduction and defuzzification should be considered. Similarly to the FNN, the center-of-sets (COS) type-reduction method is used here to find

(7)    y_l = \sum_{i=1}^{M} \underline{\alpha}^i \theta_l^i \quad \text{and} \quad y_r = \sum_{i=1}^{M} \overline{\alpha}^i \theta_r^i

where the firing-strength membership grade of each rule is either \underline{\alpha}^i or \overline{\alpha}^i. Hence, the defuzzified output of an interval type-2 FLS is

(8)    y = \frac{y_l + y_r}{2}.

Note that if the rule number R is even, M = R/2. On the other hand, if R is odd, M = (R-1)/2 and

(9)    y = \frac{y_l + y_r}{2} + \frac{(\underline{\alpha}^{M+1} + \overline{\alpha}^{M+1})(\theta_l^{M+1} + \theta_r^{M+1})}{4}.

Here we simplify the computational procedure for computing y_r and y_l, which differs from that in the literature [14,15]; details of the comparison can be found in [14,15].
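The simplified type reduction and defuzzification of Eqs. (7)-(8) can be sketched as follows. The pairing of lower firing strengths with left consequent endpoints and upper firing strengths with right endpoints is a reading of the simplified procedure above, stated here as an assumption; the sample numbers are illustrative.

```python
def simplified_type_reduction(lower_alphas, upper_alphas, theta_l, theta_r):
    """Simplified COS-style type reduction of Eqs. (7)-(8): y_l pairs the
    lower firing strengths with the left consequent endpoints, y_r pairs
    the upper firing strengths with the right endpoints, and the
    defuzzified output is their average."""
    y_l = sum(a * t for a, t in zip(lower_alphas, theta_l))   # Eq. (7), left
    y_r = sum(a * t for a, t in zip(upper_alphas, theta_r))   # Eq. (7), right
    return (y_l + y_r) / 2.0                                  # Eq. (8)

# Two rules: firing intervals [0.2, 0.4] and [0.5, 0.8],
# consequent intervals [1.0, 2.0] and [-1.0, 0.0]
print(simplified_type_reduction([0.2, 0.5], [0.4, 0.8], [1.0, -1.0], [2.0, 0.0]))
```

This avoids the iterative switching-point search of the Karnik-Mendel procedure, which is the computational simplification the paragraph above refers to.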


Figure 5 summarizes the above discussion and shows the fuzzy inference system (jth rule) of the type-2 FNN system.

Example: Computation of a type-2 FNN system with two rules.
Suppose a type-2 FNN system has the following two rules:

R^1: IF x_1 is \tilde{A}_1 AND x_2 is \tilde{B}_1 THEN y = \tilde{w}_1.
R^2: IF x_1 is \tilde{A}_2 AND x_2 is \tilde{B}_2 THEN y = \tilde{w}_2.

Figure 6 summarizes the computation of this type-2 FNN system. In the first layer, the output values are the inputs x_1 and x_2, respectively. In layer 2, one determines the MF grades of the type-2 MFs, i.e., the grades of the upper and lower MFs. Thus, one obtains [\underline{\mu}_{\tilde{A}_i}(x_1), \overline{\mu}_{\tilde{A}_i}(x_1)] and [\underline{\mu}_{\tilde{B}_i}(x_2), \overline{\mu}_{\tilde{B}_i}(x_2)], i = 1, 2. Then, using the product operation in layer 3, one has [\underline{\alpha}_i(x_1, x_2), \overline{\alpha}_i(x_1, x_2)] = [\underline{\mu}_{\tilde{A}_i}(x_1) \cdot \underline{\mu}_{\tilde{B}_i}(x_2), \overline{\mu}_{\tilde{A}_i}(x_1) \cdot \overline{\mu}_{\tilde{B}_i}(x_2)]. Finally, y_r and y_l should be determined. Noting that \tilde{w}_i = [\underline{w}_i, \overline{w}_i], one has y_l = \underline{\alpha}_1 \underline{w}_1 + \underline{\alpha}_2 \underline{w}_2, y_r = \overline{\alpha}_1 \overline{w}_1 + \overline{\alpha}_2 \overline{w}_2, and the defuzzified value y = (y_r + y_l)/2.

Remark: It is clear that the type-2 FNN system is a generalization of the FNN system; that is, the type-2 FNN system reduces to a type-1 one if the fuzzy sets are type-1. The detailed computations of the two systems are otherwise the same.
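The two-rule example can be traced end to end in code. Gaussian MFs with uncertain mean are assumed for both antecedents, following Fig. 3; all numeric parameters below are illustrative, not taken from the paper.

```python
import math

def gauss(x, m, s):
    return math.exp(-((x - m) ** 2) / (2.0 * s ** 2))

def interval_gauss(x, m1, m2, s):
    """[lower, upper] grade of a Gaussian MF with uncertain mean in [m1, m2]."""
    upper = 1.0 if m1 <= x <= m2 else gauss(x, m1 if x < m1 else m2, s)
    lower = min(gauss(x, m1, s), gauss(x, m2, s))
    return lower, upper

def type2_fnn_two_rules(x1, x2, A, B, W):
    """Forward pass of the two-rule example: layer 2 interval MF grades,
    layer 3 product firing intervals, then y_l, y_r and y = (y_l + y_r)/2."""
    y_l = y_r = 0.0
    for Ai, Bi, (w_lo, w_hi) in zip(A, B, W):
        a_lo, a_hi = interval_gauss(x1, *Ai)        # layer 2, input x1
        b_lo, b_hi = interval_gauss(x2, *Bi)        # layer 2, input x2
        alpha_lo, alpha_hi = a_lo * b_lo, a_hi * b_hi  # layer 3, product t-norm
        y_l += alpha_lo * w_lo                      # lower bound sum
        y_r += alpha_hi * w_hi                      # upper bound sum
    return (y_l + y_r) / 2.0

# Illustrative parameters: (m1, m2, sigma) per MF, (w_lower, w_upper) per rule
A = [(-0.5, 0.0, 1.0), (0.5, 1.0, 1.0)]
B = [(-0.5, 0.0, 1.0), (0.5, 1.0, 1.0)]
W = [(1.0, 2.0), (-2.0, -1.0)]
print(type2_fnn_two_rules(0.0, 0.0, A, B, W))  # = 1 - e^(-1) for these values
```

Setting m1 = m2 and w_lower = w_upper collapses every interval to a point, and the computation reduces to the type-1 FNN of Section 2.1, as the remark above states.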

Figure 5. Fuzzy inference of Type-2 FNN.


3.2. Training of the Type-2 FNN: Genetic Algorithm. It is known that the type-1 FNN system is a universal approximator [2,3,6-9]; that is, in general, for function mapping or system identification, it is easy to design an FNN system that achieves a satisfactory level of accuracy. To this end, we


Figure 6. Computation example of a Type-2 FNN.

determine the feature parameters that represent a type-2 fuzzy set. Using these parameters, a type-2 FNN system can be encoded as a chromosome. Then, the real-coded genetic algorithm is used to optimize the type-2 FNN system, i.e., the antecedent and consequent MFs.

The training process using the real-coded genetic algorithm is described as follows.

Learning Process
Step 1: Construct and initialize the type-1 FNN system.
Step 2: Use the back-propagation algorithm to train the type-1 FNN and obtain a set of Gaussian functions (means, variances) and the weighting vector.
Step 3: Use the results of Step 2 and add uncertainty to the antecedent and consequent parts, i.e., m_1, m_2 = m \mp \Delta m with \sigma and w_1, w_2 = w \mp \Delta w, or m with \sigma_1, \sigma_2 = \sigma \mp \Delta\sigma and w_1, w_2 = w \mp \Delta w.
Step 4: Construct the chromosome (2R means + R STDs + 2R weights).
Step 5: Use the GA to train the type-2 FNN to find the optimal values.
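Steps 3 and 4 can be sketched as follows for the uncertain-mean case. The spreads `d_m` and `d_w` and the exact gene layout within the chromosome are illustrative design assumptions; the paper fixes only the parameter groups being encoded.

```python
def encode_chromosome(means, stds, weights, d_m=0.1, d_w=0.1):
    """Step 3-4 sketch: blur trained type-1 parameters into a type-2
    chromosome.  Each mean m becomes the pair (m - d_m, m + d_m) and each
    weight w the interval (w - d_w, w + d_w); STDs are kept crisp here."""
    genes = []
    for m_row, s_row in zip(means, stds):          # one row per rule
        for m, s in zip(m_row, s_row):             # one entry per input
            genes.extend([m - d_m, m + d_m, s])    # 2 means + 1 STD
    for w in weights:                              # one weight per rule
        genes.extend([w - d_w, w + d_w])           # interval weight
    return genes

# One rule, two inputs: gene count is 3*R*n + 2*R = 3*1*2 + 2*1 = 8
c = encode_chromosome([[0.0, 1.0]], [[0.5, 0.5]], [2.0])
print(len(c))  # 8
```

The resulting vector of floating-point numbers is exactly the real-coded chromosome that Step 5 hands to the GA.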


The objective of parameter learning is to optimally adjust the free parameters of the type-2 FNN for each incoming data point. Accordingly, the chromosome should be defined in this phase.

Chromosome: the genes of each chromosome (denoted x_i) comprise two parts: the MFs and the weighting vector. Each MF contains two mean values (for the upper and lower MFs) and an STD (or one mean and two STD values), together with the weighting vector. Therefore, for a given n-input, one-output type-2 FNN with R rules, the number of genes in each chromosome is 3Rn + 2R.
Fitness function: Here the fitness function is defined as

(10)    fitness(x) = \frac{1}{E}, \quad E = \sum_t \sum_i \left(d_i(t) - y_i(t)\right)^2

where d_i(t) and y_i(t) are the desired output and the type-2 FNN system output, respectively.
Reproduction: Tournament selection is used in the reproduction process [13,18].
Crossover: Here, the real-coded crossover operation is

(11)    x'_{1i} = x_{1i} + \beta (x_{2i} - x_{1i})

(12)    x'_{2i} = x_{1i} + \beta (x_{1i} - x_{2i})

where fitness(x_1) \geq fitness(x_2); x_{1i} and x_{2i} are the ith genes of the parents x_1 and x_2, respectively; x'_{1i} and x'_{2i} are the ith genes of the offspring x'_1 and x'_2; and \beta is a random number with 0 \leq \beta \leq 0.5.
Mutation: The mutation operation is

(13)    x'_{1i} = x_{1i} + \delta

where i denotes the index of a randomly chosen gene; x_{1i} and x'_{1i} are the ith genes of the parent x_1 and the offspring x'_1, respectively; and \delta is a random number in a given range.
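The crossover of Eqs. (11)-(12) and the mutation of Eq. (13) translate directly into code. The range of the mutation step is an illustrative assumption, since the paper only says it is "a given range".

```python
import random

def crossover(x1, x2, beta=None):
    """Eqs. (11)-(12): x1 is the fitter parent and 0 <= beta <= 0.5,
    so both offspring are pulled toward the fitter parent."""
    if beta is None:
        beta = random.uniform(0.0, 0.5)
    c1 = [a + beta * (b - a) for a, b in zip(x1, x2)]  # Eq. (11)
    c2 = [a + beta * (a - b) for a, b in zip(x1, x2)]  # Eq. (12)
    return c1, c2

def mutate(x, delta_range=0.1):
    """Eq. (13): add a bounded random delta to one randomly chosen gene."""
    y = list(x)
    i = random.randrange(len(y))
    y[i] += random.uniform(-delta_range, delta_range)
    return y

c1, c2 = crossover([1.0, 2.0], [3.0, 0.0], beta=0.25)
print(c1, c2)  # [1.5, 1.5] and [0.5, 2.5]
```

Because beta stays below 0.5, the first child never crosses the midpoint between the parents and the second child is reflected away from the weaker parent, which biases the search toward the fitter individual.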
4. Conclusion

This paper has presented a type-2 FNN system and the corresponding genetic learning algorithm. The type-2 FNN can be used to treat the uncertainty associated with information or data: owing to the properties of type-2 fuzzy sets, it can represent and handle uncertain information effectively. Therefore, the previous results on the FNN have been extended to


a type-2 one. We determine the feature parameters that represent a type-2 fuzzy set; using these parameters, a type-2 FNN system can be encoded as a chromosome, and the real-coded genetic algorithm is then used to optimize the type-2 FNN system, i.e., the antecedent and consequent MFs.
References
[1] T. N. Bui and B. R. Moon, Genetic Algorithm and Graph Partitioning, IEEE
Trans. on Computers, Vol.45, No.7, July 1996.
[2] Y. C. Chen and C. C. Teng, A Model Reference Control Structure Using A Fuzzy Neural Network, Fuzzy Sets and Systems, Vol. 73, pp. 291-312, 1995.
[3] Y. C. Chen and C. C. Teng, Fuzzy Neural Network Systems in Model Reference
Control Systems, in Neural Network Systems: Technique and Applications, Vol. 6,
Edited by C. T. Leondes, Academic Press, Inc., pp. 285-313, 1998.
[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, 1989.
[5] N. Karnik, J. Mendel, and Q. Liang, Type-2 Fuzzy Logic Systems, IEEE Trans.
On Fuzzy Systems, Vol. 7, No. 6, pp. 643-658, 1999.
[6] C. H. Lee and C. C. Teng, Fine Tuning of Membership Functions for Fuzzy Neural
Systems, Asian Journal of Control, Vol. 3, No. 3, pp. 18-25, 2001.
[7] C. H. Lee and C. C. Teng, Identification and Control of Dynamic Systems Using
Recurrent Fuzzy Neural Networks, IEEE Trans. on Fuzzy Systems, Vol. 8, No. 4,
pp. 349-366, August 2000.
[8] C. H. Lee and C. C. Teng, Approximation of Periodic Function Using a Modified
Fuzzy Neural Network, International Journal of Fuzzy Systems, Vol. 2, No. 3, pp.
176-182, 2000.
[9] C. H. Lee and C. C. Teng, Tuning PID Controller of Unstable Processes: A Fuzzy
Neural Network Approach, Fuzzy Sets and Systems, Vol. 128, No.1, pp. 95-106,
2002.
[10] C. T. Lin and C. S. G. Lee, Neural Fuzzy Systems, Prentice Hall: Englewood Cliff,
1996.
[11] C. T. Lin and C. S. G. Lee, Neural-Network-Based Fuzzy Logic Control and Decision System, IEEE Trans. Computers, Vol. C-40, No. 12, pp. 1320-1336, 1991.
[12] Q. Liang and J. Mendel, Interval Type-2 Fuzzy Logic Systems: Theory and Design,
IEEE Trans. On Fuzzy Systems, Vol. 8, No.5, pp. 535-550, 2000.
[13] K. F. Man, K. S. Tang, S. Kwong, Genetic Algorithms: Concepts and Applications, IEEE Transactions on Industrial Electronics, Vol. 43, No. 5, October 1996.
[14] J. Mendel and R. John, Type-2 Fuzzy Sets Made Simple, IEEE Trans. On Fuzzy
Systems, Vol. 10, No. 2, pp. 117-127, 2002.
[15] J. M. Mendel, Uncertain Rule-Based Fuzzy Logic Systems: Introduction and New
Directions, Prentice-Hall: NJ, 2001.
[16] M. Mizumoto and K. Tanaka, Some Properties of Fuzzy Sets of Type 2, Information and Control, Vol. 31, pp. 312-340, 1976.
[17] Z. Michalewicz, Genetic Algorithms + Data Structure = Evolutionary Programs,
Springer-Verlag, Berlin, 3rd edition 1997.
[18] M. Srinivas, Lalit M. Patnaik, Genetic Algorithms: A Survey, Computer, Vol. 27,
pp.17-66, 1994.
[19] L. A. Zadeh, The Concept of A Linguistic Variable and Its Application to Approximate Reasoning - I, Information Sciences, Vol. 8, pp. 199-249, 1975.


Ching-Hung Lee^a, Jang-Lee Hong^b, Yu-Ching Lin^a, and Wei-Yu Lai^a

^a Department of Electrical Engineering, Yuan Ze University, No. 135, Yuan-Tung Road, Chung-Li, Taoyuan 320, Taiwan, R.O.C.
^b Department of Electronic Engineering, Van Nung Institute of Technology, Chung-Li, Taoyuan 320, Taiwan, R.O.C.
E-mail address: chlee@saturn.yzu.edu.tw (C.-H. Lee)
