Genetic Learning Algorithms for Fuzzy Neural Nets

P. V. Krishnamraju*

J. J. Buckley†

K. D. Reilly*

Y. Hayashi‡

* Dept. of Computer and Information Sciences, University of Alabama at Birmingham, Birmingham, AL 35294.

† Mathematics Department, University of Alabama at Birmingham, Birmingham, AL 35294.

‡ Department of Computer and Information Sciences, Ibaraki University, Hitachi-shi, Ibaraki 316, Japan.

In this paper we present a genetic learning algorithm for fuzzy neural nets. Illustrations are provided.



1 Introduction

This paper is concerned with learning algorithms for fuzzy neural nets. In this section we first introduce the basic notation to be employed and then discuss what we mean by a fuzzy neural net. We then briefly survey the literature on learning algorithms for fuzzy neural nets. The second section contains a description of our genetic learning algorithm, the third section contains experimental results, and the last section has a brief summary and conclusions.

All our fuzzy sets are fuzzy subsets of the real numbers. We place a bar over a symbol if it represents a fuzzy set, so Ā, B̄, ..., V̄, ... are all fuzzy subsets of the real numbers. The membership function of a fuzzy set Ā evaluated at x is written Ā(x). The α-cut of a fuzzy set B̄ is


    B̄[α] = { x | B̄(x) ≥ α }    (1)



for 0 < α ≤ 1. We separately specify B̄[0], the support of B̄, as the closure of the union of the B̄[α], 0 < α ≤ 1. A triangular fuzzy number N̄ is defined by three numbers a < b < c, where: (1) N̄(x) = 0 for x ≤ a and x ≥ c, and N̄(x) = 1 if and only if x = b; and (2) the graph of y = N̄(x) is a straight line segment from (a, 0) to (b, 1) on [a, b], and from (b, 1) to (c, 0) on [b, c]. If the graph of y = N̄(x) is instead a continuous, monotonically increasing (decreasing) curve on [a, b] (on [b, c]) from (a, 0) to (b, 1) (from (b, 1) to (c, 0)), then we say N̄ is a triangular shaped fuzzy number. All our fuzzy sets are either triangular fuzzy numbers or triangular shaped fuzzy numbers, and we abbreviate both as N̄ = (a/b/c). We say N̄ ≥ 0 if a ≥ 0. A measure of fuzziness of N̄ is the length of its support, and we write fuzz(N̄) = c − a. We say M̄ is more fuzzy than N̄ if fuzz(M̄) ≥ fuzz(N̄). We employ standard fuzzy arithmetic, based on the extension principle, throughout the paper. If M̄(x) ≤ N̄(x) for all x, then we write M̄ ≤ N̄. All our fuzzy neural nets are layered (three layers), feedforward networks; in this paper a fuzzy neural net must have fuzzy signals and/or fuzzy weights. The basic fuzzy neural net used in this paper is shown in Figure 1. The input X̄ and the weights W̄_i and V̄_i are all triangular fuzzy numbers; due to fuzzy arithmetic, the output Ȳ (and the target T̄) can be a triangular shaped fuzzy number.
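To make the notation concrete, here is a short sketch (ours, not part of the paper; all names are illustrative) of a triangular fuzzy number N̄ = (a/b/c), its α-cuts, and fuzz(N̄) = c − a:

```python
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    """N = (a/b/c): membership rises linearly from (a, 0) to (b, 1), falls to (c, 0)."""
    a: float
    b: float
    c: float

    def membership(self, x: float) -> float:
        if x <= self.a or x >= self.c:
            return 0.0
        if x <= self.b:
            return (x - self.a) / (self.b - self.a)
        return (self.c - x) / (self.c - self.b)

    def alpha_cut(self, alpha: float) -> tuple[float, float]:
        """N[alpha] = closed interval of x with N(x) >= alpha, for 0 < alpha <= 1;
        N[0] is the support [a, c]."""
        return (self.a + alpha * (self.b - self.a),
                self.c - alpha * (self.c - self.b))

    def fuzz(self) -> float:
        """Measure of fuzziness: the length of the support."""
        return self.c - self.a

N = TriangularFuzzyNumber(1.0, 1.25, 1.5)
print(N.alpha_cut(0.5))   # (1.125, 1.375)
print(N.fuzz())           # 0.5
```

For a triangular shaped fuzzy number the α-cut end points come from the curved sides instead of straight line segments, but the interface is the same.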



Figure 1: Fuzzy Neural Net (Input Layer, Hidden Layer, Output Layer)

All neurons, except the input neuron, have a transfer function y = f(x). This f is assumed to be a continuous, non-decreasing mapping from the real numbers into [−r, r], for r some positive integer. The input node acts as the identity mapping.



The input to node i in the hidden layer is therefore

    I_i = W̄_i · X̄,  1 ≤ i ≤ 4,    (2)

and node i's output is

    Z̄_i = f(I_i),  1 ≤ i ≤ 4.    (3)

The input to the output node is

    I_0 = V̄_1·Z̄_1 + V̄_2·Z̄_2 + V̄_3·Z̄_3 + V̄_4·Z̄_4,    (4)

with final output

    Ȳ = f(I_0).    (5)

Standard fuzzy arithmetic is used to evaluate equations (2)-(5). The learning problem for fuzzy neural nets is to find the weights W̄_i and V̄_i, 1 ≤ i ≤ 4, given training data (X̄_l, T̄_l), 1 ≤ l ≤ L, where T̄_l is the desired output when X̄_l is the input.

We now briefly survey the literature. In a series of papers [14, 15, 16, 17, 18] the authors developed a fuzzy backpropagation algorithm for a fuzzy neural net. They assumed that the inputs (the X̄_l), the weights, and the bias terms are all symmetric triangular fuzzy numbers with X̄_l ≥ 0, all l (they employed a sigmoidal f). Let E denote their error measure, which is based on α-cuts of Ȳ_l and T̄_l, 1 ≤ l ≤ L. Their algorithm is based on the partial derivatives ∂E/∂w_{i1}, ∂E/∂w_{i2}, etc. These derivatives are complicated, and become more complicated if we allow more general fuzzy sets for the signals and weights, so this method does not seem to generalize to more complicated fuzzy inputs and/or weights. In [19, 20] these authors have also developed a backpropagation based learning algorithm for fuzzy neural nets.

In [12, 13] the authors developed a fuzzy backpropagation algorithm for a fuzzy neural net: what they did was to directly fuzzify the standard delta rule in backpropagation to update the values of the weights. It is interesting to note that this procedure can fail to converge to the correct weights. What they found [4] was that there are values for the weights that make their error measure sufficiently small, but do not make Ȳ_l close to T̄_l. The algorithm has been corrected, but no new results have been reported.

In [23, 24] the authors have: (1) real number signals; (2) monotone increasing membership functions for the fuzzy weights; and (3) a special fuzzy error measure. They employ a learning algorithm, also based on backpropagation, so that the fuzzy neural net can learn the weights. Yamakawa's new fuzzy neuron [28] has a learning algorithm for the weights, and the fuzzy neural net in [22, 27] is similar to Yamakawa's fuzzy neuron; these papers have a learning algorithm for both the real weights and the trapezoidal fuzzy numbers, with the learning algorithm for the real weights similar to standard backpropagation.

Genetic algorithms [21, 25] are a method of directed random search, and they are finding more and more applications in fuzzy systems [5]. One of the authors of this paper (with Y. Hayashi) has proposed a genetic algorithm, and a fuzzy genetic algorithm, to train a fuzzy neural network [5]. Until now, no computer experiments have been presented for genetic learning algorithms for a fuzzy neural net. We do not present the fundamentals of genetic algorithms in this paper, but instead refer the reader to the popular text [10] on genetic algorithms. We discuss our method in the next section.

2 Genetic Learning Algorithms

The first thing to discuss is the error measure to be minimized. Recall that the training data is (X̄_l, T̄_l), 1 ≤ l ≤ L. If X̄_l is the input, let Ȳ_l be the actual output. Let

    Ȳ_l[α] = [y_{l1}(α), y_{l2}(α)]  and  T̄_l[α] = [t_{l1}(α), t_{l2}(α)],

1 ≤ l ≤ L, for α in the set {0, 0.25, 0.50, 0.75, 1.0}. Define

    E_1 = Σ_{l=1}^{L} Σ_{α} |y_{l1}(α) − t_{l1}(α)|,    (6)

    E_2 = Σ_{l=1}^{L} Σ_{α} |y_{l2}(α) − t_{l2}(α)|,    (7)

and

    E = E_1 + E_2.    (8)

The genetic algorithm is to search for the fuzzy weights that drive E to zero.
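The paper gives no implementation, but the evaluation above is straightforward to sketch numerically: represent each fuzzy set by its α-cuts on the grid {0, 0.25, 0.50, 0.75, 1.0} and use interval arithmetic for the extension-principle operations. The following Python sketch is ours (names and signatures are illustrative, not from the paper); it assumes the monotone non-decreasing squashing function of equation (9) below, which maps an interval end point by end point.

```python
Interval = tuple[float, float]

def imul(p: Interval, q: Interval) -> Interval:
    """Interval product: the extension-principle multiplication on one alpha-cut."""
    prods = [p[0] * q[0], p[0] * q[1], p[1] * q[0], p[1] * q[1]]
    return (min(prods), max(prods))

def iadd(p: Interval, q: Interval) -> Interval:
    return (p[0] + q[0], p[1] + q[1])

def squash(x: float, r: float) -> float:
    """Piece-wise (3-segment) linear squashing function of equation (9)."""
    return max(-r, min(r, x))

def net_alpha_cut(X: Interval, W: list[Interval], V: list[Interval],
                  r_hidden: float, r_out: float = 1.0) -> Interval:
    """One alpha-cut of the output Y for the 4-hidden-neuron net of Figure 1."""
    I0: Interval = (0.0, 0.0)
    for Wi, Vi in zip(W, V):
        Ii = imul(Wi, X)                                        # equation (2)
        Zi = (squash(Ii[0], r_hidden), squash(Ii[1], r_hidden)) # equation (3)
        I0 = iadd(I0, imul(Vi, Zi))                             # equation (4)
    return (squash(I0[0], r_out), squash(I0[1], r_out))         # equation (5)

def error(Y_cuts: list[Interval], T_cuts: list[Interval]) -> float:
    """E = E1 + E2 over the alpha grid: summed end-point deviations, (6)-(8)."""
    return sum(abs(y[0] - t[0]) + abs(y[1] - t[1])
               for y, t in zip(Y_cuts, T_cuts))
```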

In this paper the fuzzy weights are assumed to be symmetric triangular fuzzy numbers. Let W̄_i = (w_{i1}/w_{i2}/w_{i3}) and V̄_i = (v_{i1}/v_{i2}/v_{i3}), 1 ≤ i ≤ 4. Then w_{i2} = (w_{i1} + w_{i3})/2 and v_{i2} = (v_{i1} + v_{i3})/2, so W̄_i is completely known if you know w_{i1} and w_{i3}, and the same for V̄_i. Therefore, the genetic algorithm just needs to keep track of the supports of the fuzzy weights: W̄_i[0] = [w_{i1}, w_{i3}] and V̄_i[0] = [v_{i1}, v_{i3}], 1 ≤ i ≤ 4.

The transfer function f, in each hidden neuron and in the output neuron, is

    f(x) = −r for x ≤ −r,   f(x) = x for −r < x < r,   f(x) = r for x ≥ r,    (9)

where r is a positive integer. We choose the value of r for the application. The value of r is always one in the output neuron, because all our target fuzzy sets T̄_l are in the interval [−1, 1].

A member of the population in the genetic algorithm is

    (w_{11}/w_{13}, w_{21}/w_{23}, ..., v_{41}/v_{43}),    (10)

coded in binary notation (zeros and ones). We employed tournament selection [21, 26], instead of the more familiar roulette wheel selection [10], to choose population members for mating. The values of the parameters (probability of crossover, etc.) in the genetic algorithm can vary slightly from experiment to experiment, but their approximate values are: (1) population size = 2000; (2) probability of crossover = 0.80; and (3) probability of mutation = 0.0003.
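The paper does not list its implementation (tournament selection is taken from [21, 26]), so the loop just described is only sketched below under stated assumptions: an 8-bit binary coding per support end point and a search range of [−2, 2] for every end point are our choices, not the paper's, and a full implementation would also enforce w_{i1} ≤ w_{i3} and v_{i1} ≤ v_{i3} when decoding a member.

```python
import random

BITS = 8                 # bits per end point (assumed, illustrative)
N_GENES = 16             # w_i1, w_i3, v_i1, v_i3 for i = 1..4
LO, HI = -2.0, 2.0       # assumed search range for each end point

def decode(bits: list[int]) -> list[float]:
    """Map each BITS-long slice of the chromosome to a real end point in [LO, HI]."""
    vals = []
    for g in range(N_GENES):
        n = int("".join(map(str, bits[g * BITS:(g + 1) * BITS])), 2)
        vals.append(LO + (HI - LO) * n / (2 ** BITS - 1))
    return vals

def tournament(pop, fitness, k=2):
    """Tournament selection: index of the best (lowest-error) of k random members."""
    return min(random.sample(range(len(pop)), k), key=lambda i: fitness[i])

def evolve(fit_fn, pop_size=2000, p_cross=0.80, p_mut=0.0003, generations=100):
    """Minimize fit_fn (the error E) over binary-coded weight supports."""
    length = BITS * N_GENES
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        fitness = [fit_fn(decode(m)) for m in pop]
        nxt = []
        while len(nxt) < pop_size:
            a = pop[tournament(pop, fitness)][:]
            b = pop[tournament(pop, fitness)][:]
            if random.random() < p_cross:          # one-point crossover
                cut = random.randrange(1, length)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):
                for j in range(length):
                    if random.random() < p_mut:    # bitwise mutation
                        child[j] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return decode(min(pop, key=lambda m: fit_fn(decode(m))))
```

With fit_fn set to the error measure E of equation (8), evolve() returns the best decoded support end points found.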

3 Experiments

Now we may discuss the results of our computer experiments on genetic learning algorithms for fuzzy neural nets. The complete experimental design is shown in Table 1. In the Input column, real means X̄_l = real number, all l, and fuzzy means X̄_l = symmetric triangular fuzzy number, all l; the mixed case, case 3, has X̄_l = real for some l and X̄_l fuzzy otherwise. In the Output column, real means the target output T̄_l = real number, and fuzzy designates T̄_l = triangular shaped fuzzy number in [−1, 1]. In cases 5-7, more (less) fuzzy in the Output column stands for: (1) fuzz(X̄_l) < fuzz(T̄_l), all l, is Output = more fuzzy; (2) fuzz(X̄_l) > fuzz(T̄_l), all l, is Output = less fuzzy; and (3) fuzz(X̄_l) < fuzz(T̄_l) for some l, and fuzz(X̄_l) > fuzz(T̄_l) otherwise, is Output = more and less fuzzy.

Table 1: Experimental Design

    Case | Input          | Output
    1    | fuzzy          | real
    2    | fuzzy          | real and fuzzy
    3    | real and fuzzy | fuzzy
    4    | fuzzy          | fuzzy
    5    | fuzzy          | more fuzzy
    6    | fuzzy          | less fuzzy
    7    | fuzzy          | more and less fuzzy

In [16] the authors conjectured that: (1) if fuzz(output) ≤ fuzz(input), all l, then all the weights can be real numbers; and (2) if fuzz(output) > fuzz(input), then the weights are fuzzy. Our experiments were designed to test this conjecture; we discuss the outcome in the last section. In this paper we report results on cases 4-6. Additional results are available, as subject fare for the conference and as part of a more detailed study for a journal length paper.

Case 4. Both the inputs X̄_l and the targets T̄_l are fuzzy. The training data is presented in Table 2. The value of r, in the hidden layer and the output layer, was set equal to one. The fuzzy neural net learned the training data perfectly (zero error), and all weights were fuzzy in the neural net.

Table 2: Training Data for Case 4

Case 5. Here fuzz(X̄_l) < fuzz(T̄_l), all l: the training set comes from T̄ = F(X̄) = ĀX̄ + B̄, where Ā and B̄ are triangular fuzzy numbers, with fuzz(X̄_l) = 0.50 and fuzz(T̄_l) = 1.00 in the training data. The training data is presented in Table 3. The value of r, as in the previous case, was set equal to one in the hidden layer and the output layer. The net learned this mapping well, with results given at the conference and available in subsequent publications.

Table 3: Training Data for Case 5

Case 6. The training set comes from T̄ = F(X̄) = 1/X̄ for X̄ in (−∞, −1] ∪ [1, ∞). We restricted X̄ to be in [1, 3] for training. This is a contraction mapping, because fuzz(T̄) < fuzz(X̄), and T̄ is a triangular shaped fuzzy number. The training data is presented in Table 4. The squashing function in the hidden neurons used r = 3. The fuzzy neural net was unable to learn the training data. We felt this was mainly due to the fact that the piece-wise (3-segment) linear squashing function may not be a sufficiently good approximation for learning this non-linear function. We changed the squashing function to a nonlinear mapping, with test results given at the conference.

Table 4: Training Data for Case 6

    No. | Input (X̄)        | fuzz(X̄) | Desired Output (T̄) | fuzz(T̄)
    1   | (1.00/1.25/1.50) | 0.50     | (2/3 / 4/5 / 1)     | 1/3
    2   | (1.50/1.75/2.00) | 0.50     | (1/2 / 4/7 / 2/3)   | 1/6
    3   | (2.00/2.25/2.50) | 0.50     | (2/5 / 4/9 / 1/2)   | 1/10
    4   | (2.50/2.75/3.00) | 0.50     | (1/3 / 4/11 / 2/5)  | 1/15
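Table 4 can be checked with exact arithmetic: 1/x is decreasing on [1, 3], so under the extension principle it swaps the end points of X̄ = (a/b/c), giving T̄ = (1/c / 1/b / 1/a). A short verification sketch (ours, not from the paper):

```python
from fractions import Fraction

def reciprocal(a, b, c):
    """T = 1/X for X = (a/b/c) with a > 0: the decreasing map swaps end
    points, so T is the triangular shaped number (1/c / 1/b / 1/a)."""
    return (1 / c, 1 / b, 1 / a)

# The four training pairs of Table 4: supports [1.0, 1.5], ..., [2.5, 3.0].
for k in (2, 3, 4, 5):
    a = Fraction(k, 2)
    X = (a, a + Fraction(1, 4), a + Fraction(1, 2))
    T = reciprocal(*X)
    print(X, "->", T, "fuzz(T) =", T[2] - T[0])
```

Running this reproduces the output column and the fuzz(T̄) values 1/3, 1/6, 1/10, 1/15.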

4 Summary and Conclusions

In this paper we presented a genetic algorithm for training a fuzzy neural net. We showed that it worked well in learning the mapping T̄ = ĀX̄ + B̄, where Ā and B̄ are triangular fuzzy numbers, but that it did not work well in modeling T̄ = 1/X̄, putatively because the piece-wise linear squashing function we used may not well approximate a nonlinear function. It should work well using a nonlinear squashing function. Further research is needed on this conjecture.

It has been shown [1, 7] that a (regular) fuzzy neural net (like that in this paper) is not a universal approximator, because it is a monotone increasing mapping. What this means is that if X̄ ≤ X̄' are inputs, then Ȳ ≤ Ȳ' are the corresponding outputs. All the functions that we tried to approximate were monotone increasing, so we see no theoretical reason why the fuzzy neural net should not be able to model these mappings.

Our results show that you do not necessarily get real weights if fuzz(output) ≤ fuzz(input) [16]. However, we used a different squashing function than the one employed in [16].

References

[1] J.J. Buckley and Y. Hayashi. Are regular fuzzy neural nets universal approximators? In Proc. of International Joint Conference on Neural Networks, Nagoya, Japan, October 25-29, 1993, volume 1, pages 721-724.

[2] J.J. Buckley and Y. Hayashi. Can fuzzy neural nets approximate continuous fuzzy functions? Fuzzy Sets and Systems. To appear.

[3] J.J. Buckley and Y. Hayashi. Direct fuzzification of neural network and fuzzified delta rule. In Proc. of the Second International Conference on Fuzzy Logic and Neural Networks (IIZUKA'92), Iizuka, Japan, July 17-22, 1992, pages 73-76.

[4] J.J. Buckley and Y. Hayashi. Fuzzy backpropagation for fuzzy neural networks. Unpublished manuscript.

[5] J.J. Buckley and Y. Hayashi. Fuzzy genetic algorithm and applications. Fuzzy Sets and Systems. To appear.

[6] J.J. Buckley and Y. Hayashi. Fuzzy neural nets and applications. Fuzzy Systems and AI, 1:11-41, 1992.

[7] J.J. Buckley and Y. Hayashi. Fuzzy neural nets: A survey. Fuzzy Sets and Systems. To appear.

[8] J.J. Buckley and Y. Hayashi. Fuzzy neural networks. In R.R. Yager and L.A. Zadeh, editors, Fuzzy Sets, Neural Networks and Soft Computing. Van Nostrand Reinhold, New York. To appear.

[9] L. Davis, editor. Handbook of Genetic Algorithms. Van Nostrand Reinhold, New York, 1991.

[10] D.E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA, 1989.

[11] Y. Hayashi, J.J. Buckley, and E. Czogala. Direct fuzzification of neural networks. In Proc. of First Asian Fuzzy Systems Symposium, Singapore, November 23-26, 1993.

[12] Y. Hayashi, J.J. Buckley, and E. Czogala. Fuzzy neural network with fuzzy signals and weights. In Proc. of International Joint Conference on Neural Networks, Nagoya, Japan, October 25-29, 1993, volume 1, pages 725-728.

[13] Y. Hayashi, J.J. Buckley, and E. Czogala. Fuzzy neural network with fuzzy signals and weights. International Journal of Intelligent Systems, 8:527-537, 1993.

[14] H. Ishibuchi, R. Fujioka, and H. Tanaka. An architecture of neural networks for input vectors of fuzzy numbers. In Proc. of IEEE International Conference on Fuzzy Systems (FUZZ-IEEE'92), San Diego, CA, 1992, pages 1293-1300.

[15] H. Ishibuchi, H. Tanaka, and H. Okada. Fuzzy neural networks with fuzzy weights and fuzzy biases. In Proc. of IEEE International Conference on Neural Networks, San Francisco, CA, March 28-April 1, 1993, volume III, pages 1650-1655.

[16] H. Ishibuchi, K. Kwon, and H. Tanaka. Implementation of fuzzy if-then rules by fuzzy neural networks with fuzzy weights. In Proc. of the First European Congress on Fuzzy and Intelligent Technologies, Aachen, Germany, September 7-10, 1993, pages 209-215.

[17] H. Ishibuchi and H. Tanaka. Interpolation of fuzzy if-then rules by neural networks. In Proc. of the Second International Conference on Fuzzy Logic and Neural Networks (IIZUKA'92), Iizuka, Japan, July 17-22, 1992, pages 337-340.

[18] H. Ishibuchi, K. Kwon, and H. Tanaka. Learning of fuzzy neural networks from fuzzy inputs and fuzzy targets. In Proc. of the Fifth IFSA World Congress, Seoul, Korea, July 4-9, 1993, volume I, pages 147-150.

[19] H. Ishibuchi, H. Okada, and H. Tanaka. Learning of neural networks from fuzzy inputs and fuzzy targets. In Proc. of International Joint Conference on Neural Networks, Beijing, China, November 3-6, 1992, volume II, pages 447-452.

[20] H. Ishibuchi, R. Fujioka, and H. Tanaka. Neural networks that learn from fuzzy if-then rules. IEEE Transactions on Fuzzy Systems, 1:85-97, 1993.

[21] J.R. Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge, MA, 1992.

[22] K. Nakamura and K. Hashizume. Fuzzy network production system. In Proc. of the Second International Conference on Fuzzy Logic and Neural Networks (IIZUKA'92), Iizuka, Japan, July 17-22, 1992, pages 127-130.

[23] D. Nauck and R. Kruse. A fuzzy neural network learning fuzzy control rules and membership functions by fuzzy error backpropagation. In Proc. of IEEE International Conference on Neural Networks, San Francisco, CA, March 28-April 1, 1993, pages 1022-1027.

[24] D. Nauck and R. Kruse. A neural fuzzy controller learning by fuzzy error backpropagation. In Proc. of NAFIPS'92, Puerto Vallarta, Mexico, December 15-17, 1992, pages 388-397.

[25] R. Serra and G. Zanarini. Complex Systems and Cognitive Processes. Springer-Verlag, 1990.

[26] R.E. Smith, D.E. Goldberg, and J.A. Earickson. SGA-C: A C-language implementation of a simple genetic algorithm. Technical Report TCGA Report No. 91002, The Clearinghouse for Genetic Algorithms, Department of Engineering Mechanics, The University of Alabama, Tuscaloosa, AL 35487, 1991.

[27] M. Tokunaga, K. Kohno, Y. Hamatani, and K. Watanabe. Learning mechanism and an application of ffs-network reasoning system. In Proc. of NAFIPS, 1993, pages 123-126.

[28] T. Yamakawa, E. Uchino, T. Miki, and H. Kusanagi. A neo fuzzy neuron and its application to fuzzy system identification and prediction of the system behavior. In Proc. of the Second International Conference on Fuzzy Logic and Neural Networks (IIZUKA'92), Iizuka, Japan, July 17-22, 1992, pages 477-483.
