

FUZZY NEURAL NETWORK THEORY AND APPLICATION

Puyin Liu • Hongxing Li

SERIES IN MACHINE PERCEPTION AND ARTIFICIAL INTELLIGENCE
Volume 59

World Scientific

FUZZY NEURAL NETWORK THEORY AND APPLICATION

**SERIES IN MACHINE PERCEPTION AND ARTIFICIAL INTELLIGENCE***

Editors: H. Bunke (Univ. Bern, Switzerland)

P. S. P. Wang (Northeastern Univ., USA)

**Vol. 43: Agent Engineering**

(Eds. Jiming Liu, Ning Zhong, Yuan Y. Tang and Patrick S. P. Wang)

Vol. 44: Multispectral Image Processing and Pattern Recognition

(Eds. J. Shen, P. S. P. Wang and T. Zhang)

Vol. 45: Hidden Markov Models: Applications in Computer Vision

(Eds. H. Bunke and T. Caelli)

Vol. 46: Syntactic Pattern Recognition for Seismic Oil Exploration

(K. Y. Huang)

Vol. 47: Hybrid Methods in Pattern Recognition

(Eds. H. Bunke and A. Kandel)

Vol. 48: Multimodal Interface for Human-Machine Communications

(Eds. P. C. Yuen, Y. Y. Tang and P. S. P. Wang)

Vol. 49: Neural Networks and Systolic Array Design

(Eds. D. Zhang and S. K. Pal)

Vol. 50: Empirical Evaluation Methods in Computer Vision

(Eds. H. I. Christensen and P. J. Phillips)

Vol. 51: Automatic Diatom Identification

(Eds. H. du Buf and M. M. Bayer)

Vol. 52: Advances in Image Processing and Understanding

A Festschrift for Thomas S. Huang

(Eds. A. C. Bovik, C. W. Chen and D. Goldgof)

Vol. 53: Soft Computing Approach to Pattern Recognition and Image Processing

(Eds. A. Ghosh and S. K. Pal)

Vol. 54: Fundamentals of Robotics — Linking Perception to Action

(M. Xie)

Vol. 55: Web Document Analysis: Challenges and Opportunities

(Eds. A. Antonacopoulos and J. Hu)

Vol. 56: Artificial Intelligence Methods in Software Testing

(Eds. M. Last, A. Kandel and H. Bunke)

Vol. 57: Data Mining in Time Series Databases

(Eds. M. Last, A. Kandel and H. Bunke)

Vol. 58: Computational Web Intelligence: Intelligent Technology for

Web Applications

(Eds. Y. Zhang, A. Kandel, T. Y. Lin and Y. Yao)

Vol. 59: Fuzzy Neural Network Theory and Application

(P. Liu and H. Li)

*For the complete list of titles in this series, please write to the Publisher.

Series in Machine Perception and Artificial Intelligence, Vol. 59

FUZZY NEURAL NETWORK THEORY AND APPLICATION

Puyin Liu
National University of Defense Technology, Changsha, China

Hongxing Li
Beijing Normal University, Beijing, China

World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI

Published by

World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: Suite 202, 1060 Main Street, River Edge, NJ 07661
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

FUZZY NEURAL NETWORK THEORY AND APPLICATION

Copyright © 2004 by World Scientific Publishing Co. Pte. Ltd.

All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.

ISBN 981-238-786-2

Printed in Singapore by World Scientific Printers (S) Pte Ltd

Table of Contents

Foreword (xi)
Preface (xv)

Chapter I Introduction (1)
§1.1 Classification of fuzzy neural networks (1)
§1.2 Fuzzy neural networks with fuzzy operators (2)
§1.3 Fuzzified neural networks (4)
    1.3.1 Learning algorithm for regular FNN's (4)
    1.3.2 Universal approximation of regular FNN's (6)
§1.4 Fuzzy systems and fuzzy inference networks (7)
    1.4.1 Fuzzy systems (8)
    1.4.2 Fuzzy inference networks (11)
§1.5 Fuzzy techniques in image restoration (12)
    1.5.1 Crisp nonlinear filters (12)
    1.5.2 Fuzzy filters (13)
§1.6 Notations and preliminaries (14)
§1.7 Outline of the topics of the chapters (17)
References

Chapter II Fuzzy Neural Networks for Storing and Classifying (21)
§2.1 Two layer max-min fuzzy associative memory (21)
    2.1.1 FAM's based on '∨-∧' (25)
    2.1.2 BP learning algorithm (26)
§2.2 FAM with threshold (30)
    2.2.1 Fuzzy δ-learning algorithm
    2.2.2 Simulation example (35)
§2.3 BP learning algorithm of FAM's (36)
    2.3.1 FAM's based on '∨-∧' (36)
    2.3.2 FAM's based on '∨-*' (39)
    2.3.3 Two analytic functions (45)

§2.4 Fuzzy ART and fuzzy ARTMAP (48)
    2.4.1 ART1 architecture (50)
    2.4.2 Fuzzy ART (53)
    2.4.3 Fuzzy ARTMAP (60)
    2.4.4 Real examples (62)
References

Chapter III Fuzzy Associative Memory—Feedback Networks (65)
§3.1 Fuzzy Hopfield networks (68)
    3.1.1 Attractor and attractive basin (69)
    3.1.2 Learning algorithm based on fault-tolerance (73)
    3.1.3 Simulation example (76)
§3.2 Fuzzy Hopfield network with threshold (78)
    3.2.1 Attractor and stability (79)
    3.2.2 Analysis of fault-tolerance (83)
§3.3 Stability and fault-tolerance of FBAM (88)
    3.3.1 Stability analysis (88)
    3.3.2 Fault-tolerance analysis (91)
    3.3.3 A simulation example (94)
§3.4 Learning algorithm for FBAM (96)
    3.4.1 Learning algorithm based on fault-tolerance (96)
    3.4.2 A simulation example (98)
    3.4.3 Optimal fault-tolerance (100)
    3.4.4 An example (105)
§3.5 Connection network of FBAM (106)
    3.5.1 Fuzzy row-restricted matrix (106)
    3.5.2 The connection relations of attractors (107)
    3.5.3 The elementary memory of (R, L) (111)
    3.5.4 The transition laws of states (114)
§3.6 Equilibrium analysis of fuzzy Hopfield network (117)
    3.6.1 Connection relations of attractors (118)
    3.6.2 Elementary memory of W (122)
    3.6.3 The state transition laws (124)

References (127)

Chapter IV Regular Fuzzy Neural Networks (131)
§4.1 Regular fuzzy neuron and regular FNN (131)
    4.1.1 Regular fuzzy neuron (133)
    4.1.2 Regular fuzzy neural network (134)
    4.1.3 A counter example of universal approximation (135)
    4.1.4 An example of universal approximation (138)
§4.2 Learning algorithms (143)
    4.2.1 Preliminaries (145)
    4.2.2 Calculus of ∨-∧ functions (147)
    4.2.3 Error function (151)
    4.2.4 Partial derivatives of error function (153)
    4.2.5 Learning algorithm and simulation (155)
§4.3 Conjugate gradient algorithm for fuzzy weights (157)
    4.3.1 Fuzzy CG algorithm and convergence (158)
    4.3.2 GA for finding optimal learning constant (163)
    4.3.3 Simulation examples (164)
§4.4 Universal approximation to fuzzy valued functions (166)
    4.4.1 Fuzzy valued Bernstein polynomial (166)
    4.4.2 Four-layer regular feedforward FNN (170)
    4.4.3 An example (173)
§4.5 Approximation analysis of regular FNN (177)
    4.5.1 Closure fuzzy mapping (177)
    4.5.2 Learning algorithm (183)
§4.6 Approximation of regular FNN with integral norm (189)
    4.6.1 Integrable bounded fuzzy valued functions (189)
    4.6.2 Universal approximation with integral norm (191)
References (194)

Chapter V Polygonal Fuzzy Neural Networks (199)
§5.1 Uniformity analysis of feedforward networks (199)
    5.1.1 Uniform approximation of four-layer network (200)
    5.1.2 Uniformity analysis of three-layer neural network (207)

§5.2 Symmetric polygonal fuzzy number (211)
    5.2.1 Symmetric polygonal fuzzy number space (212)
    5.2.2 Polygonal linear operator (217)
    5.2.3 Extension operations based on polygonal fuzzy numbers (220)
§5.3 Polygonal FNN and learning algorithm (223)
    5.3.1 Three-layer feedforward polygonal FNN (223)
    5.3.2 Learning algorithm (226)
    5.3.3 A simulation (231)
§5.4 Approximation of polygonal FNN (233)
    5.4.1 I/O relationship analysis of polygonal FNN (234)
    5.4.2 Approximation of polygonal FNN (240)
References (247)

Chapter VI Approximation Analysis of Fuzzy Systems (251)
§6.1 Piecewise linear function (252)
    6.1.1 SPLF and its properties (252)
    6.1.2 Approximation of SPLF's (254)
§6.2 Approximation of generalized fuzzy systems with integral norm (259)
    6.2.1 Generalized Mamdani fuzzy system (261)
    6.2.2 Generalized T-S fuzzy system (268)
§6.3 Hierarchical system of generalized T-S fuzzy system (273)
    6.3.1 Hierarchical fuzzy system (274)
    6.3.2 Generalized hierarchical T-S fuzzy system (276)
§6.4 Approximation of hierarchical T-S fuzzy system (282)
    6.4.1 Universal approximation with maximum norm (282)
    6.4.2 Realization procedure of universal approximation (285)
    6.4.3 Universal approximation with integral norm (288)
References (290)

Chapter VII Stochastic Fuzzy Systems and Approximations (296)
§7.1 Stochastic process and stochastic integral (296)
    7.1.1 Stochastic measure and stochastic integral (297)
    7.1.2 Canonical representation of Brownian motion (299)
§7.2 Stochastic fuzzy systems (301)

    7.2.1 Stochastic T-S system (301)
    7.2.2 Stochastic Mamdani fuzzy system (304)
§7.3 Universal approximation of stochastic process by T-S system (307)
    7.3.1 Uniform approximation (307)
    7.3.2 Approximation with mean square sense (309)
§7.4 Universal approximation of stochastic Mamdani fuzzy system (315)
    7.4.1 Approximation of stochastic Mamdani fuzzy system (315)
    7.4.2 Example (318)
References

Chapter VIII Application of FNN to Image Restoration (321)
§8.1 Generalized fuzzy inference network (326)
    8.1.1 Generalized defuzzification operator (327)
    8.1.2 Fuzzy inference network (332)
    8.1.3 Universal approximation of generalized FINN (335)
    8.1.4 Simulation of system identification (338)
§8.2 Representation of two-dimensional image by FINN (343)
    8.2.1 Local FNN representation of image (343)
    8.2.2 Optimal FNN filter (348)
    8.2.3 Experiment results (350)
§8.3 Image restoration based on FINN (353)
    8.3.1 Fuzzy partition (354)
    8.3.2 Selection type FNN and its universal approximation (357)
    8.3.3 A novel FNN filter (361)
    8.3.4 Learning algorithm (362)
    8.3.5 Simulation examples (363)
References (366)

Indices (371)
§1 Symbols (371)
§2 Terminologies (373)


Foreword

Authored by Professors P. Liu and H. Li, "Fuzzy Neural Network Theory and Application," or FNNTA for short, is a highly important work. Essentially, FNNTA is a treatise that deals authoritatively and in depth with the basic issues and problems that arise in the conception, design and utilization of fuzzy neural networks or, more or less equivalently, neurofuzzy systems. Much of the theory developed in FNNTA goes considerably beyond what can be found in the literature.

Fuzzy neural networks, or neurofuzzy systems, as they are frequently called, have a long history. The embryo was a paper on fuzzy neurons by my former student, Ed Lee, which was published in 1975. Thereafter, there was little activity until 1988, when H. Takagi and I. Hayashi obtained a basic patent in Japan, assigned to Matsushita, which described systems in which techniques drawn from fuzzy logic and neural networks were employed in combination to achieve superior performance. The pioneering work of Takagi and Hayashi opened the door to development of a wide variety of neurofuzzy systems. Today, there is an extensive literature and a broad spectrum of applications, especially in the realm of consumer products.

A question which arises is: Why is there so much interest and activity in the realm of neurofuzzy systems? What is it that neurofuzzy systems can do that cannot be done equally well by other types of systems? To understand the underlying issues, it is helpful to view neurofuzzy systems in a broader perspective, namely, in the context of soft computing. What is soft computing? Essentially, soft computing is a coalition of methodologies which are tolerant of imprecision, uncertainty and partial truth, and which collectively provide a foundation for conception, design and utilization of intelligent systems. The principal members of the coalition are: fuzzy logic, neurocomputing, evolutionary computing, probabilistic computing, rough set theory, chaotic computing and machine learning.

In science, as in many other realms of human activity, there is a tendency to be nationalistic, that is, to commit oneself to a particular methodology and employ it exclusively. A case in point is the well-known Hammer Principle: When the only tool you have is a hammer, everything looks like a nail. Another version is what I call the Vodka Principle: No matter what your problem is, vodka will solve it. What is quite obvious is that if A, B, ..., N are complementary methodologies, then much can be gained by forming a coalition of A, B, ..., N. A basic credo which underlies soft computing is that, in general, better results can be obtained by employing the constituent methodologies of soft computing in combination rather than in a stand-alone mode.

In this broader perspective, neurofuzzy systems may be viewed as the domain of a synergistic combination of neurocomputing and fuzzy logic, inheriting from neurocomputing the concepts and techniques related to learning and approximation, and inheriting from fuzzy logic the concepts and techniques related to granulation, linguistic variables, fuzzy if-then rules and rules of inference and constraint propagation.

An important type of neurofuzzy system, which was pioneered by Arabshahi et al, starts with a neuro-based algorithm such as the backpropagation algorithm, and improves its performance by employing fuzzy if-then rules for adaptive adjustment of parameters. In such applications, fuzzy if-then rules are employed as a language for describing human expertise. What should be noted is that the basic idea underlying this approach is applicable to any type of algorithm in which human expertise plays an essential role in choosing parameter values and controlling their variation as a function of performance.

Another important direction, which emerged in the early nineties, involves viewing a Takagi-Sugeno fuzzy inference system as a multilayer network which is similar to a multilayer neural network. Parameter adjustment in such systems is achieved through the use of gradient techniques which are very similar to those associated with backpropagation. A prominent example is the ANFIS system developed by Roger Jang, a student of mine who conceived ANFIS as a part of his doctoral dissertation at UC Berkeley. The widely used method of radial basis functions falls into the same category.

Still another important direction, initiated by G. Bortolan, involves a fuzzification of a multilayer, feedforward neural network, resulting in a fuzzy neural network, FNN. It is this direction that is the principal concern of the work of Professors Liu and Li.

Much of the material in FNNTA is original with the authors and reflects their extensive experience. The coverage is both broad and deep, extending from the basics of FNN's and FAM's (fuzzy associative memories) to approximation theory of fuzzy systems, stochastic fuzzy systems and application to image restoration.

A basic issue that has a position of centrality in fuzzy neural network theory, and is treated as such by the authors, is that of approximation and, in particular, universal approximation. What is particularly worthy of note is the authors' treatment of universal approximation of fuzzy-valued functions. Clearly, universal approximation is an issue that is of great theoretical interest. A question which arises is: Does the theory of universal approximation come to grips with problems which arise in the design of fuzzy neural networks in realistic settings? I believe that this issue is in need of further exploration. In particular, my feeling is that the usual assumption about continuity of the function that is approximated is too weak, and that the problem of approximation of functions which are smooth, with smoothness defined as a fuzzy characteristic, that is, a matter of degree, rather than continuous, must be addressed.

FNNTA is not intended for a casual reader. It is a deep work which addresses complex issues and aims at definitive answers. It ventures into territories which have not been explored, and lays the groundwork for new and important applications. Professors Liu and Li, and the publisher, deserve our thanks and congratulations for producing a work that is an important contribution not just to the theory of fuzzy neural networks but, more broadly, to the conception and design of intelligent systems.

Lotfi A. Zadeh
Professor in the Graduate School
Computer Science Division
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley, CA 94720-1776
Director, Berkeley Initiative in Soft Computing (BISC)
March, 2004


Preface

As a hybrid intelligent system of soft computing techniques, the fuzzy neural network (FNN) is an efficient tool for dealing with nonlinearly complicated systems in which linguistic information and data information coexist. FNN's are thoroughly and systematically studied in this book, and the perspective of the book is centered on two typical problems concerning FNN's: universal approximation and learning algorithms. The achievements on universal approximation of FNN's provide the theoretic basis for FNN applications in many real fields, such as system modeling and identification, system forecasting, pattern recognition, and digital image restoration. Many efficient methods and techniques for treating these practical problems are developed, and many application examples are included. The achievements of the book will provide the necessary theoretic basis for soft computing techniques and the applications of FNN's.

There have been a few books and monographs on the subject of FNN's or neuro-fuzzy systems. Several distinctive aspects together make this book unique. First, the book is a thorough summation and deepening of the authors' work in recent years in the related fields, and it includes the latest research surveys and references on the subjects, so readers can obtain up-to-date information. Second, in view of the two basic problems, learning algorithms and universal approximations of FNN's constitute the central part of the book. The basic tools for studying learning algorithms are the max-min (∨-∧) functions, the cuts of fuzzy sets and interval arithmetic; the bridges for researching universal approximations of fuzzified neural networks and fuzzy inference type networks, such as regular FNN's, polygonal FNN's, generalized fuzzy systems and generalized fuzzy inference networks, are the fuzzy valued Bernstein polynomial, the improved type extension principle and the piecewise linear functions. The book treats FNN models both from the mathematical perspective, with the details of most proofs included (only simple and obvious proofs are omitted), and from the applied or computational perspective, with the realization steps of the main results shown. So it is helpful for readers who are interested in the mathematical aspects of FNN's, and also useful for those who are concerned not with the details of the proofs but with the applied aspects of FNN's. Third, the arrangement of the contents of the book is novel and there is little overlap with other books in the field. Finally, almost all common FNN models are included in the book, and as many related references as possible are listed at the end of each chapter; those FNN models and references make this book valuable to people interested in various FNN models and applications. Also, the book includes many well-designed simulation examples for readers' convenience in understanding the related results. Many concepts are first introduced here for the approximation and learning of FNN's, and several learning algorithms for fuzzy weights are developed. A series of equivalent conditions that guarantee universal approximations are built; the constructive proofs of universal approximations provide much convenience in modeling or identifying a real system by FNN's, and they are also useful for building learning algorithms to optimize FNN architectures. So readers may easily enter the related fields through this book by taking the two subjects as leads.

The specific prerequisites include fuzzy set theory, neural networks, interval analysis and image processing. For the fuzzy theory one of the following books should be sufficient: Zimmermann H.-J. (1991); Dubois D. and Prade H. (1980). For the neural networks one of the following can provide sufficient background: Khanna T. (1990); Haykin S. (1994). For the interval analysis it suffices to reference one of the following: Alefeld G. and Herzberger J. (1983); Diamond P. and Kloeden P. (1994). And for image processing one of the following is sufficient: Jain A. K. (1989); Astola J. and Kuosmanen P. (1997). The details of these books are given in the references of Chapter I.

Now let us sketch out the main points of the book. This book consists of four primary parts. The first part focuses mainly on FNN's based on the fuzzy operators '∨' and '∧', including FNN's for storing and classifying fuzzy patterns, and dynamical FNN's taking fuzzy Hopfield networks and fuzzy bidirectional associative memories (FBAM's) as typical models; they are dealt with in Chapter II and Chapter III, respectively. The second part is mainly devoted to the research of universal approximations of fuzzified neural networks and their learning algorithms; the fuzzified neural networks mean mainly two classes of FNN's, i.e. regular FNN's and polygonal FNN's, and implementations and applications of fuzzified neural networks are also included. The third part focuses on the research of universal approximations of fuzzy systems, including those of generalized fuzzy systems to integrable functions and those of stochastic fuzzy systems to some stochastic processes; the learning algorithms for the stochastic fuzzy systems are also studied. The fourth part is devoted to the applications of the achievements and methodology on FNN's to digital image restoration. A FNN representation of digital images is built for reconstructing images and filtering noise, and based on fuzzy inference networks some efficient FNN filters are developed for removing impulse noise and restoring images. The details will be presented in Chapter I, and readers may easily find the respective contents in which they are interested.

When referring to a theorem, a definition, a lemma, a corollary, etc. in the same chapter, we utilize the respective numbers as they appear in the statements. For example, Theorem 4.2 means the second theorem in Chapter IV, while Definition 2.4 indicates the fourth definition in Chapter II.

Although we have tried very hard to give references to original papers, there are many researchers working on FNN's and we are not always aware of the contributions by various authors to which we should give credit. We apologize for our omissions. However, we think the references that we have listed are helpful for readers to find the related works in the literature.

We were supported by several National Natural Science Foundation of China grants (e.g. No. 69974041, No. 60375023 and No. 60174013) during the years this book was written. We are specially grateful to Professors Guo Guirong and He Xingui, who read the book carefully and made many insightful comments. We are indebted to Professor Lotfi A. Zadeh of the University of California, Berkeley, who wrote the foreword of the book in the midst of pressing affairs at the authors' invitation. Thanks are also due to Professor Horst Bunke, who accepted this book into the new book series edited by him, and to Ian Seldrup, the editor of the book, and the staff at World Scientific Publishing for displaying a lot of patience in our final cooperation.

Puyin Liu and Hongxing Li
March 2004
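The max-min (∨-∧) functions named above as a basic tool can be made concrete with a small sketch. The following is a minimal, hypothetical illustration of max-min composition used as the recall step of a fuzzy associative memory, with a correlation-minimum (fuzzy Hebbian) encoding of a single pattern pair; the membership values are invented for the example and this is not code from the book:

```python
import numpy as np

def max_min_compose(x, W):
    """Max-min composition y_j = max_i min(x_i, w_ij),
    the basic recall operation of a fuzzy associative memory (FAM)."""
    # Broadcast x against each column of W, take min elementwise,
    # then max over the input index i.
    return np.max(np.minimum(x[:, None], W), axis=0)

# Hypothetical fuzzy pattern pair (membership grades in [0, 1]).
A = np.array([0.2, 0.9, 0.5])
B = np.array([0.8, 0.4])

# Correlation-minimum (fuzzy Hebbian) encoding: w_ij = min(a_i, b_j).
W = np.minimum(A[:, None], B[None, :])

recalled = max_min_compose(A, W)
print(recalled)  # → [0.8 0.4], i.e. B is recalled from A
```

Here recall is exact because the height of A (0.9) is at least the height of B (0.8); with several stored pairs, or with lower heights, max-min recall may only approximate B, which is exactly the storage-capability issue the book studies.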

In practice. fuzzy logic and neural networks together with genetic algorithm and probabilistic reasoning [68-71]. To simulate biological control mechanisms. efficiently and to understand biological computational power. neural computing with learning and curve fitting.1 Classification of fuzzy neural networks As a main ingredient of soft computing. and without solving any complex integral. soft computing is to exploit the tolerance for imprecision. In the partnership of fuzzy logic. fuzzy logic is mainly concerned with imprecision and approximate reasoning. a biological control mechanism can carry out complex tasks without having to develop some mathematical models. §1.CHAPTER I Introduction As information techniques including their theory and applications develop further. C. In [36. 37] Lee S. and Lee E. The soft computing techniques can provide us with an efficient computation tool to deal with the highly nonlinear and complicated systems [67]. 61]. neural computing. fuzzy neural network (FNN) is a hybrid intelligent system that possess the capabilities of adjusting adaptively and intelligent information processing. developed a fuzzy associa- . fuzzy logic and neural computation and so on. such as adaptive control. and probabilistic reasoning with uncertainty and belief propagation. such as fuzzy logic. As a collection of methodologies. knowledge-based engineering. in which natural linguistic information and data information coexist [40]. robustness and low solution cost. So such novel neural systems had not attracted any attention until 1987 when Kosko B. uncertainty and partly truth to achieve tractability. neural computing and probabilistic reasoning. probabilistic reasoning and genetic algorithm (GA) and so on. Those techniques take their source at Zadeh's soft data analysis. 
it is extremely difficult to make an artificial mobile robot to perform the same tasks with vague and imprecise information for the robot involves a fusion of most existing control techniques. thoroughly a few of powerful fields in modern technology have recently emerged [30. However. firstly proposed the fuzzy neurons and some systematic results on FNN's were developed by softening the McCulloch-Pitts neurons in the middle 1970s when the interest in neural networks faltered. T. the studying objects related have become highly nonlinear and complicated systems. differential or any other types of mathematical equations.

The achievements related to FAM's . In practice FNN's have found useful in many application fields. Feedback neural networks (1990's).2 Liu and Li Fuzzy Neural Network Theory and Application tive memory (FAM) to deal with intelligent information by introducing some fuzzy operators in associative memory networks [32]. for in practice many fuzzy patterns to handle are inherently imprecise and distorted. J. A lot of new new concepts. Equally. So storage capability and fault-tolerance are two main problems we focus on in the research on FAM's.1: Based on fuzzy operators < Feedforward neural networks (1980's). 42].e. So far many learning algorithms. Since the early 1980s the research on neural networks has increased dramatically because of the works done by Hopfield J. for instance. T . (see [26]).2 Fuzzy neural networks with fuzzy operators FNN's based on fuzzy operators are firstly studied by Lee and Lee in 1970s. It possesses the capability of storing and recalling fuzzy information or fuzzy patterns [32. Fuzzy inference networks < Takagi-Sugeno type (1990's). pattern recognition [33. as many fuzzy patterns as possible can be stored in a FAM. one may broadly classify all FNN models as three main types as shown in Figure 1. i. system modelling [16. The research on feedforward FAM's. and knowledge engineering and so on. system reliability analysis [7. A FAM is a feedforward FNN whose information flows from input layer to output layer. Mamdani type (1990's). the fuzzy delta rule and the fuzzy back propagation (BP) algorithm and so on have been developed to train FAM's and to improve storage capability of a FAM [32]. Such FNN's have become one of the foci in neural network research since Kosko introduced the fuzzy operators ' V and 'A' in associative memory to define fuzzy associative memory (FAM) in 1987. 41]. Based on fuzziness involved in FNN's developed since the late of 1980s. including their topological architecture designs. including the fuzzy Hebbian rule. 32. 
fault-tolerance. 24]. learning algorithms and so on attracts much attention [41]. 61]. 56]. the capability of a FAM to recall the right fuzzy pattern from a distorted input is of real importance. In practice a applicable FAM should possess strong storage capability. such as innovative architecture and training rules and models about FNN's have been developed [30. i.e. „. The FNN models have also attracted many scholars' attention. the selection of fuzzy operators for defining the internal operations.1 Classification of FNN's §1. Regular FNN's (1990's). Figure 1. T Fuzzified neural networks • FNN < Improved FNN's (1990's). Generalized type (1990's).

On the one hand, the feedforward FAM's can identify a fuzzy relation by designing a suitable learning algorithm, and such FAM's can be applied to realize fuzzy logic relations efficiently. Pedrycz in [51] put forward two logic type fuzzy neurons based on general fuzzy operators. Blanco et al in [4, 5] express a fuzzy system as a fuzzy relational equation. Li and Ruan in [38] use FAM's based on several fuzzy operator pairs, including '∨–∧', '∨–×', '+–×' and '+–∧', to identify many fuzzy system classes by building a few novel learning algorithms, and they show the convergence of fuzzy delta type iteration algorithms. A main object of the research on FAM's is to improve the storage capability, which relates closely to fuzzy relational equation theory; many methods for solving fuzzy relational equations are employed to improve the storage capability of FAM's. Liu et al utilize the approaches for solving fuzzy relational equations to build a series of equivalent conditions under which a given fuzzy pattern family can be stored by a FAM [41]. Furthermore, some learning algorithms for improving the storage capability of FAM's are developed. These constitute one of the main parts of Chapter II.

Another important class of FNN's based on fuzzy operators is that of feedback FNN's. A feedback neural network as a dynamical system finishes information processing by iterating repeatedly from an initial input to an equilibrium state. In the book we focus mainly on two classes of dynamical FNN's: fuzzy Hopfield networks and the fuzzy bidirectional associative memory (FBAM), which are presented in Chapter II. The fuzzy operators '∨' and '∧' are employed to define the operations between fuzzy patterns. An equilibrium state of a dynamical FNN turns out to be the right fuzzy pattern to recall; thus the recalling procedure of a dynamical FNN is in nature a process in which the FNN evolves to its equilibrium state from an initial fuzzy pattern. Such dynamical FNN's have been found useful in many applied areas, such as pattern recognition [33, 59], pattern classification [41, 58], system analysis [38], signal processing [27] and so on.

Adaptive resonance theory (ART) is an efficient neural model of human cognitive information processing: it imitates the human brain in understanding the objective world, which is a procedure of self-improving again and again. It has since led to an evolving series of real-time neural network models for unsupervised category learning and pattern recognition. The model families include ART1, which can process patterns expressed as vectors whose components are either 0 or 1 [8]; ART2, which can categorize either analog or binary input patterns [9]; and ART3, which can carry out parallel search, or hypothesis testing, of distributed recognition codes in a multi-level network hierarchy [10]. The fuzzy ART model developed by Carpenter et al in [11] generalizes ART1 to be capable of learning stable recognition categories in response to both analog and binary input patterns. The research related to fuzzy ART is mainly focused on its classification characteristics and applications to pattern recognition in all kinds of applied fields.
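As a concrete illustration of the '∨–∧' (max–min) operations underlying FAM recall, the following toy sketch stores one fuzzy pattern pair by correlation-minimum encoding and recalls it by sup-min composition. It is an illustrative assumption of this survey's editors, not one of the algorithms developed in Chapter II.

```python
import numpy as np

def fam_recall(W, x):
    # sup-min composition: y_j = max_i min(x_i, W[i, j])
    return np.minimum(x[:, None], W).max(axis=0)

x = np.array([0.2, 1.0, 0.5])   # a normal fuzzy pattern (max membership = 1)
y = np.array([0.7, 0.3])
W = np.minimum.outer(x, y)      # correlation-minimum encoding: W_ij = min(x_i, y_j)
print(fam_recall(W, x))         # recalls the stored pattern y
```

Because x is normal (its largest membership is 1), the sup-min recall reproduces y exactly; for distorted inputs the same composition returns the nearest stored response, which is the storage-capability question studied via fuzzy relational equations.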

By the comparison between the two dynamical FNN's and the corresponding crisp neural networks we can see:

1. The FNN's do not need the transfer function used in the crisp networks, for a main function of the transfer function in artificial neural networks lies in controlling the output range, which may be achieved by the fuzzy operators '∨' and '∧' ('∧' acts as a threshold function) [4, 5].

2. It is much insufficient to represent fuzzy information by strings consisting only of 0 and 1; we should utilize fuzzy patterns whose components belong to [0, 1] to describe fuzzy information.

So the dynamical FNN's may be applied much more widely than the crisp networks. Similarly to crisp dynamical networks, in the research related to dynamical FNN's the stability analysis, including the global stability of the dynamical systems and the Lyapunov stability of the equilibrium states (attractors), the attractive basins of attractors, and the discrimination between attractors and pseudo-attractors, are the main subjects to study. Those problems will be studied thoroughly in Chapter III.

§1.3 Fuzzified neural networks

A fuzzified neural network means a FNN whose inputs, outputs and connection weights are all fuzzy sets, which is also viewed as a pure fuzzy system [61]. One most important class of fuzzified neural networks is the regular FNN class, each member of which is the fuzzification of a crisp feedforward neural network: the topological architecture is identical to that of the corresponding crisp neural network, and the internal operations are based on Zadeh's extension principle [44] and fuzzy arithmetic [20]. So for a regular FNN, a fuzzy input determines a fuzzy output through the internal relationships among the fuzzy sets of the network. Since regular FNN's were put forward by Buckley et al [6] and Ishibuchi et al [28] in about the 1990s, systematic achievements have been built by focusing mainly on two basic problems: learning algorithms and universal approximation.

§1.3.1 Learning algorithms for regular FNN's

There are two main approaches to design learning algorithms for regular FNN's: α-cut learning [28, 41] and the genetic algorithm (GA) for fuzzy weights [2]. The main idea of the α-cut learning algorithm rests with the fact that for any α ∈ [0, 1] we utilize the BP algorithm for crisp neural networks to determine the two endpoints of the α-cut, consequently establish the α-cut of a fuzzy weight, and then define the fuzzy weight; in this way the fuzzy connection weights of the regular FNN are trained suitably. However, the α-cut learning algorithm frequently loses its effectiveness: for α1, α2 ∈ [0, 1] with α1 < α2, the α-cuts W_{α1}, W_{α2} obtained by the BP algorithm need not satisfy W_{α2} ⊆ W_{α1} if no constraint is added, and therefore the fuzzy set W cannot be defined. So in order to ensure the effectiveness of the algorithm,
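The nestedness requirement W_{α2} ⊆ W_{α1} for α1 < α2 can be made concrete with a small check. In the sketch below (illustrative only, with hypothetical endpoint values standing in for BP-trained ones) each α-cut of a fuzzy weight is an interval, and a family of independently trained cuts is tested for consistency.

```python
def alpha_cuts_nested(cuts):
    """cuts: dict mapping alpha -> (left, right) endpoint pair.
    Valid cuts of a fuzzy number must be nested:
    a1 < a2 implies [l2, r2] is contained in [l1, r1]."""
    alphas = sorted(cuts)
    for a1, a2 in zip(alphas, alphas[1:]):
        l1, r1 = cuts[a1]
        l2, r2 = cuts[a2]
        if not (l1 <= l2 <= r2 <= r1):
            return False
    return True

# Endpoints trained independently at each alpha level can violate nestedness:
print(alpha_cuts_nested({0.2: (0.0, 1.0), 0.8: (0.4, 0.6)}))   # consistent
print(alpha_cuts_nested({0.2: (0.0, 1.0), 0.8: (-0.1, 0.6)}))  # inconsistent
```

When the check fails, no fuzzy weight W exists with those sets as its α-cuts, which is exactly why the constrained optimization below is needed.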

it is necessary to solve the following optimization problem:

min { E(W_1, ..., W_n) | W_1, ..., W_n are fuzzy sets },
s.t. (W_i)_{α2} ⊆ (W_i)_{α1}, ∀ i ∈ {1, ..., n}, ∀ α1, α2 ∈ [0, 1] : α1 < α2,    (1.1)

where E(·) is an error function. The problem (1.1) is generally insolvable, and even if we may find a solution of (1.1) in some special cases, the corresponding solving procedure will be extremely complicated. To avoid solving (1.1), a common method is to introduce some constraints on the fuzzy connection weights. For instance, we may choose the fuzzy weights as some common fuzzy numbers, such as triangular fuzzy numbers, trapezoidal fuzzy numbers, Gaussian type fuzzy numbers and so on, which can be determined by a few adjustable parameters. Ishibuchi et al utilize triangular or trapezoidal fuzzy numbers to develop some α-cut learning algorithms for training the fuzzy weights of regular FNN's. Another difficulty hindering the realization of the α-cut learning algorithm is to define a suitable error function E(·) [41], so that not only does its minimization ensure the realization of the given input–output (I/O) relationship, but its related derivatives are also easy to calculate. The solution of such a problem rests in nature with the treatment of the α-cut learning. Park et al in [50] study the inverse procedure of the learning for a regular FNN, that is, using the desired fuzzy outputs of the FNN to establish conversely the conditions for the corresponding fuzzy inputs; this is a fuzzy version of the corresponding problem for crisp neural networks [39]. And some successful applications of regular FNN's in the approximate realization of fuzzy inference rules are demonstrated in [28].

The GA's for fuzzy weights are developed for such fuzzy sets as can be determined uniquely by a few adjustable parameters, when it is possible to code the fuzzy weights and to ensure a one-to-one correspondence between a code sequence in the GA and the fuzzy connection weights.

No matter how different the fuzzy weights and error functions of these learning algorithms are, two important operations '∨' and '∧' are often involved. An indispensable step to construct the fuzzy BP algorithm is to differentiate the '∨–∧' operations by using the unit step function. That is, for a given real constant a, let

d(x ∨ a)/dx = 1 if x > a, 0 if x < a;    d(x ∧ a)/dx = 1 if x < a, 0 if x > a.

The above representations are valid only for the special case x ≠ a; if x = a, they are no longer valid. Based on these two derivative formulas, the chain rules for differentiation of composite functions hold only in form, approximately, and lack a rigorous mathematical sense. Applying the results in [73] to analyze the '∨–∧' operations fully, and developing a rigorous theory for the calculus of the '∨' and '∧' operations, are two subsidiary results in Chapter IV.
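In code, the formal derivatives above amount to indicator functions. The minimal sketch below (an illustration, choosing the value 0 at the undefined point x = a purely by convention, which, as noted, lacks rigorous justification) also checks the formulas against finite differences away from the kink.

```python
def dmax_dx(x, a):
    # d(x ∨ a)/dx: 1 for x > a, 0 for x < a (x = a undefined; 0 by convention)
    return 1.0 if x > a else 0.0

def dmin_dx(x, a):
    # d(x ∧ a)/dx: 1 for x < a, 0 for x > a
    return 1.0 if x < a else 0.0

# Finite-difference check away from the non-differentiable point x = a:
h, a = 1e-6, 0.5
for x in (0.2, 0.9):
    num = (max(x + h, a) - max(x - h, a)) / (2 * h)
    assert abs(num - dmax_dx(x, a)) < 1e-6
```

At x = a both one-sided difference quotients disagree, which is precisely where the chain rule for fuzzy BP becomes only formal.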

Aliev et al in [2] employ a simple GA to train the triangular fuzzy number weights and biases of regular FNN's; they encode all fuzzy weights as a binary string (chromosome) to complete the search process. Regardless of whether the α-cut learning algorithm or the GA for fuzzy weights is used, these methods are efficient only for a few special fuzzy numbers, such as triangular or trapezoidal fuzzy numbers, Gaussian type fuzzy numbers and so on, so the applications of the learning algorithms are much restricted. It is therefore meaningful and important to develop BP type learning algorithms, or GA's for the fuzzy weights of regular FNN's, within a general framework; that is, we have to build learning algorithms for general fuzzy weights. To speed up the convergence of the fuzzy BP algorithm we develop a fuzzy conjugate gradient (CG) algorithm [18] to train a regular FNN with general fuzzy weights, and obtain some new results by studying the computational aspects and training algorithms for the approximation problem. The research in this field is in its infancy, and many fundamental problems, such as how to define a suitable error function, what more efficient coding techniques can be employed, and what the more efficient genetic strategies are, remain to be solved.

§1.3.2 Universal approximation of regular FNN's

Another basic problem for regular FNN's is universal approximation, which can provide us with the theoretic basis for FNN applications. The universal approximation of crisp feedforward neural networks means the fact that for any compact set U of the input space and any continuous function f defined on the input space, f can be represented on U with an arbitrarily given degree of accuracy ε > 0 by a feedforward crisp neural network. The research related has attracted many scholars since the late 1980s. It is shown that a three-layer feedforward neural network with a given nonlinear activation function in the hidden layer is capable of approximating a generic class of functions, including continuous and integrable ones [13, 14, 57]; the transfer function related is assumed to be an increasing real function. Recently Scarselli and Tsoi [57] presented a detailed survey of recent works on approximation by feedforward neural networks. The achievements in this field have not only solved the approximate representation of some multivariate functions by combinations of finite compositions of one-variable functions, but have also been found useful in many real fields, such as the approximation of structural synthesis [57], system identification [14], pattern classification [25], and adaptive filtering [52]. The approximate representation of a continuous function by a three-layer feedforward network can, in the approximate sense, solve the 13th Hilbert problem with a simple approach [57], whereas Kolmogorov had to employ a complicated approach to solve the problem analytically in the 1950s [57]. Since the middle 1990s many authors have begun to pay attention to similar approximation problems in the fuzzy environment [6, 22, 28, 41].
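The crisp universal approximation property is easy to observe numerically. The sketch below (an illustration by the editors, not a construction from the text) fixes random hidden weights of a three-layer sigmoid network and fits only the output weights to sin(x) on [0, π] by least squares; the uniform error shrinks as the hidden layer grows.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, np.pi, 200)
f = np.sin(x)                      # target continuous function on a compact set

# Three-layer network: random fixed hidden weights, trainable output weights.
n_hidden = 30
w = rng.normal(scale=4.0, size=n_hidden)
b = rng.normal(scale=4.0, size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(x[:, None] * w + b)))   # hidden sigmoid activations

c, *_ = np.linalg.lstsq(H, f, rcond=None)         # least-squares output layer
err = np.max(np.abs(H @ c - f))                    # uniform approximation error
print(err)
```

Increasing `n_hidden` drives `err` toward 0, which is the finite-sample face of the density results cited in [13, 14, 57].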

Firstly, Buckley et al in [6] study systematically the universal approximation of FNN's and obtain the following fact: hybrid FNN's can be universal approximators of fuzzy functions, while regular FNN's are not capable of approximating continuous fuzzy functions to any degree of accuracy on the compact sets of a general fuzzy number space. Corresponding to different practical problems, respective hybrid FNN's with different topological architectures and internal operations have to be constructed [6]. Considering the arbitrariness of a hybrid FNN in its architectures and internal operations, we find such a FNN inconvenient for realizations and applications. Regular FNN's, whose topological architectures are identical to the corresponding crisp ones and whose internal operations are based on the extension principle and fuzzy arithmetic, have on the other hand been found convenient and useful in many applications. Thus some important questions arise: What are the conditions on continuous fuzzy functions under which they can be arbitrarily closely approximated by regular FNN's? That is, which function class can guarantee the universal approximation of regular FNN's to hold? Can the corresponding equivalent conditions be established? Since the inputs and outputs of regular FNN's are fuzzy sets, the common operation laws do not hold any more, and it is difficult to employ the approaches used for crisp feedforward neural networks to solve the above problems for regular FNN's.

The above problems have attracted many scholars' attention. At first Buckley and Hayashi [6] show a necessary condition for the fuzzy functions that can be approximated arbitrarily closely by regular FNN's, that is, the fuzzy functions are increasing. Then Feuring et al in [22] restrict the inputs of regular FNN's to trapezoidal fuzzy numbers, and build the approximate representations of a class of trapezoidal fuzzy functions by regular FNN's; also they employ the approximation of regular FNN's with trapezoidal fuzzy number inputs and connection weights to solve the overfitting problem. However, these results solve the first problem only partly, and do not answer the second problem. To solve the universal approximation of regular FNN's more completely, we establish some sufficient conditions for fuzzy valued functions defined on an interval [0, T0] that ensure the universal approximation of three-layer regular FNN's to hold [41], and some realization algorithms for the approximating procedure are built. Chapter IV and Chapter V develop a comprehensive and thorough discussion of the above problems.

In practice many I/O relationships whose internal operations are characterized by fuzzy sets are inherently fuzzy and imprecise, for instance industrial process control, chemical reactions, natural evolution processes and so on [17, 30]. Regular FNN's have become efficient tools to model these real processes; for example, fuzzy regression models [28], data fitting models [22] and telecommunication networks [42] are successful examples of regular FNN applications.

§1.4 Fuzzy systems and fuzzy inference networks

Fuzzified neural networks as a class of pure fuzzy systems can deal with natural linguistic information efficiently.

In practice, however, much more cases relate to data information. From a real fuzzy system we can get a collection of data information that characterizes its I/O relationship by digital sensors or data surveying instruments, and data information constitutes the external conditions that may adjust the system parameters, including the membership functions of fuzzy sets and the defuzzification, etc. In a fuzzy system we can deal with linguistic information by developing a family of fuzzy inference rules such as 'IF...THEN...'; therefore fuzzy systems also possess the function of self-learning and self-improving. So it is of very real importance to develop some systematic tools that are able to utilize linguistic and data information synthetically and rationally. In recent twenty years, fuzzy systems and fuzzy inference networks have attracted much attention, for they have been found useful in many applied fields such as pattern recognition [30, 33, 56], system modelling and identification [16, 35, 60], automatic control [24, 30], signal processing [12, 53], data compression [47], telecommunication [42] and so on. Fuzzy systems take an important role in the research related. As in the research of neural networks, we study the applications of fuzzy systems and fuzzy inference networks by taking their universal approximation as a start point; in the following let us take the research on the approximating capability of fuzzy systems and fuzzy inference networks as a thread to present a survey of the theory and application of these two classes of systems.

§1.4.1 Fuzzy systems

In practice there are three common classes of fuzzy systems [30, 61]: pure fuzzy systems, Mamdani fuzzy systems and Takagi-Sugeno (T-S) fuzzy systems. Pure fuzzy systems deal mainly with linguistic information, while the latter two can handle both linguistic information and data information [61]. We can distinguish a Mamdani fuzzy system and a T-S fuzzy system by their inference rule consequents: the rule consequents of a Mamdani fuzzy system are fuzzy sets, while those of a T-S fuzzy system are functions of the system input variables.

Fuzzy system architecture.

Figure 1.2 The typical architecture of a fuzzy system: input x → fuzzifier → singleton fuzzy set → fuzzy inference (pure fuzzy system) with fuzzy rule base → fuzzy output Y → defuzzifier → crisp output y0

As shown in

Figure 1.2, the typical architecture of a fuzzy system consists of three parts: fuzzifier, pure fuzzy system and defuzzifier. The internal structures of the pure fuzzy system are determined by a sequence of fuzzy inference rules, the inference composition rules and the defuzzification. Suppose the fuzzy rule base is composed of N fuzzy rules R1, ..., RN. For a given input vector x, by the fuzzifier we get a singleton fuzzy set. Using the fuzzy rule Rj and the implication relation we can establish a fuzzy set Yj defined on the output space [30]. By a t-conorm S (generally chosen as S = ∨) we synthesize Y1, ..., YN to determine the fuzzy set Y defined on the output space:

Y(y) = S(Y1(y), S(Y2(y), ..., S(Y_{N-1}(y), YN(y)) ...)).

We call Y a synthesizing fuzzy set [30]. Finally we utilize the defuzzifier De to establish the crisp output y0 = De(Y); the defuzzifier usually means the method of center of gravity.

As one of the main subjects related to fuzzy systems, universal approximation has attracted much attention since the early 1990s [61, 72]. Up to the present, the research on the related problems focuses on the approximation of continuous functions by fuzzy systems and the realization of such approximations, and we can classify the achievements in the field into two classes. One belongs to existential results: the existence of the approximating fuzzy systems is shown by the Stone-Weierstrass Theorem [30, 61], which gives strong restrictions on the antecedent fuzzy sets, that is, the fuzzy sets are Gaussian type fuzzy numbers and the compositions are based on 'Σ–×' or '∨–×'. Such an approach may answer the existence problem of fuzzy systems under certain conditions. However its drawbacks are obvious, since it cannot deal with many important practical problems, such as: how can the approximating procedure of fuzzy systems express the given I/O relationship? How is the related accuracy estimated? With a given accuracy, how can the size of the fuzzy rule base of the corresponding fuzzy system be calculated? And so on. The other is the constructive proving method, in which the antecedent fuzzy sets and the composition fuzzy operators can be general: the fuzzy sets may be chosen as general fuzzy numbers with a certain ranking order and the composition may be '∨–T', where T is a t-norm. By the constructive procedures we may directly build the related approximating fuzzy systems, and an approximating fuzzy system with a given accuracy may be established accordingly. So the constructive methods can be more efficient and applicable. In recent years the research related has attracted many scholars' attention. Ying et al in [65] employ a general defuzzification [23] to generalize Mamdani fuzzy systems and T-S fuzzy systems, respectively. Zeng et al in [72] propose some accuracy analysis methods for fuzzy system approximation, and some necessary conditions for fuzzy system approximation, together with their comparison, are built. Although the related achievements are of much real significance, their application areas are definitely restricted.
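The inference chain just described (singleton fuzzifier, min implication, S = ∨ aggregation, center-of-gravity defuzzifier) can be sketched numerically. The triangular membership functions and rule base below are made up for illustration and are not taken from the text.

```python
import numpy as np

y = np.linspace(0.0, 10.0, 1001)          # discretized output universe

def tri(u, a, b, c):
    # triangular membership function with support [a, c] and peak at b
    return np.maximum(np.minimum((u - a) / (b - a), (c - u) / (c - b)), 0.0)

def mamdani(x, rules):
    # rules: list of (antecedent mf of x, consequent mf sampled on y)
    Y = np.zeros_like(y)
    for ant, cons in rules:
        w = ant(x)                                 # firing strength of rule Rj
        Y = np.maximum(Y, np.minimum(w, cons))     # Y = S_j (w_j ∧ Y_j), S = max
    return Y                                       # the synthesizing fuzzy set

def cog(Y):
    # center-of-gravity defuzzifier: y0 = sum(y * Y(y)) / sum(Y(y))
    return float(np.sum(y * Y) / np.sum(Y))

rules = [
    (lambda x: tri(x, -5, 0, 5),  tri(y, 0, 2, 4)),   # IF x is Small  THEN y is Low
    (lambda x: tri(x, 0, 5, 10),  tri(y, 3, 5, 7)),   # IF x is Medium THEN y is Mid
    (lambda x: tri(x, 5, 10, 15), tri(y, 6, 8, 10)),  # IF x is Large  THEN y is High
]
print(cog(mamdani(2.5, rules)))   # crisp output between the Low and Mid peaks
```

Each rule contributes a clipped consequent; the pointwise maximum is the synthesizing fuzzy set Y, and the COG step collapses it to the crisp output y0.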

Although fuzzy system research has attracted many scholars' attention, and the related achievements have been successfully applied to many practical areas, particularly to fuzzy control, many important and fundamental problems in the field remain to be solved.

First, in addition to continuous functions, how about the universal approximation of fuzzy systems to other general functions? For instance, in the control processes of many nonlinear optimal control models and of pulse circuits, the related systems are non-continuous but integrable. Therefore the research in which fuzzy systems are generalized within a general framework, and more general functions, including integrable functions, are approximately represented by the general fuzzy systems with an arbitrary degree of accuracy, is very important both in theory and in practice.

Another problem is the 'rule explosion' phenomenon caused by the so-called 'curse of dimensionality', meaning that the number of fuzzy rules in a fuzzy system may increase exponentially as the number of its input variables increases. When we increase the input variables, the scale of the rule base immediately becomes excessive, and consequently the system is not implementable; so the applications are usually limited to systems with very few, two or at most four, input variables [62]. Thus 'rule explosion' does seriously hinder the applications of fuzzy systems. To overcome the above drawback, Raju et al defined in [53] a new type of fuzzy system, that is, the hierarchical fuzzy system. Such a system is constructed from a series of lower dimensional fuzzy systems, which are linked in a hierarchical fashion; the number of fuzzy rules needed in the hierarchical system is a linear function of the number of the input variables, so we may avoid the 'rule explosion'. Naturally we may put forward an important problem: how may the representation capability of hierarchical fuzzy systems be analyzed, that is, are hierarchical fuzzy systems universal approximators or not? Kikuchi et al in [31] show that it is impossible to give a precise expression of an arbitrarily given continuous function by a hierarchical fuzzy system, so we have to analyze the approximation capability of hierarchical fuzzy systems. If a function is continuously differentiable on the whole space, Wang in [62] shows the arbitrarily close approximation of the function by hierarchical fuzzy systems, and he also in [63] gave the sensitivity properties of hierarchical fuzzy systems and designed a suitable system structure. To realize the given fuzzy inferences, an open question remains: for each compact set U and an arbitrary continuous, or integrable, function f on U, how may we find a hierarchical fuzzy system to approximate f uniformly with an arbitrary error bound ε?

The third important problem is fuzzy system approximation in the stochastic environment. Recently the research on the properties of artificial neural networks in the stochastic environment has attracted many scholars' attention. The approximation capabilities of a class of neural networks to stochastic processes, and the problem whether the neural networks are able to learn stochastic processes, have been systematically studied.
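The contrast between exponential and linear rule growth can be made concrete. In the sketch below, the m**n count for a complete grid-of-rules system with n inputs and m linguistic terms per input is standard, while the (n − 1)·m² count assumes a chain of two-input subsystems — one plausible hierarchical layout, used here purely for illustration rather than as the structure of [53] or [62].

```python
def rules_standard(n_inputs, m_terms):
    # complete rule base: one rule per cell of the antecedent grid
    return m_terms ** n_inputs

def rules_hierarchical(n_inputs, m_terms):
    # chain of (n_inputs - 1) two-input subsystems: linear in n_inputs
    return (n_inputs - 1) * m_terms ** 2

print(rules_standard(6, 5), rules_hierarchical(6, 5))  # 15625 vs 125
```

Already at six inputs with five terms each, the flat rule base needs 15625 rules while the chained layout needs 125, which is the 'rule explosion' the hierarchical construction avoids.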

It is shown that the approximation identity neural networks can, in the mean square sense, approximate a class of stochastic processes to an arbitrary degree of accuracy. Many achievements have been obtained, and they have been found useful in many applied areas, such as system modeling [15, 16], system identification [46, 56], pattern recognition [58], system forecasting [45] and so on. The fuzzy systems can simultaneously deal with data information and linguistic information; further, they can successfully be applied to noise image processing, boundary detection for noise images, classification and detection of noise, system modeling in noise environments and so on. So it is undoubtedly very important to study the approximation in the stochastic environment; the research related, that is, the approximation capabilities of fuzzy systems to stochastic processes, is the basis for constructing the related fuzzy systems. The final problem is to estimate the size of the rule base of the approximating fuzzy system for a given accuracy. The systematic study of the above problems constitutes the central parts of Chapter VI and Chapter VII; also, many well-designed simulation examples illustrate our results.

§1.4.2 Fuzzy inference networks

A fuzzy inference system can simulate and realize natural language and logic inference mechanisms. Using fuzzy inference networks we may represent a fuzzy system as the I/O relationship of a neural system: a fuzzy inference network is a multilayer feedforward network, so a fuzzy system and its corresponding fuzzy inference network are functionally equivalent [30]. As an organic fusion of inference system and neural network, a fuzzy inference network can realize automatic generation and automatic matching of fuzzy rules, and it can deal with all kinds of information, including linguistic information and data information, efficiently, since it possesses adaptiveness and fault-tolerance; it can adjust adaptively to the changes of conditions and self-improve.

Theoretically the research on fuzzy inference networks focuses mainly on three parts. First, study the fuzzy inference networks within a general framework, so that the network architecture is as simple as possible [30]. Second, design a feedforward neural network to realize a known fuzzy system. Third, build some suitable learning algorithms, so that the connection weights are adjusted rationally to establish suitable antecedent and consequent fuzzy sets.

Defuzzification constitutes one important object in the study of fuzzy inference networks, and it attracts many scholars' attention. There are mainly four defuzzification methods: the center of gravity (COG) method [33, 54], the maximum of mean (MOM) method [61], the α-cut integral method, and the p-mean method [55]. They have respective starting points and applying fields; also they have their own advantages and disadvantages. In addition, many novel defuzzifications for special subjects have been put forward in recent years. For example, the COG method synthesizes the actions of all points in the support of the synthesizing fuzzy set to establish a crisp output, while the special functions of some particular points, e.g. the points with maximum membership, are neglected.
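The different emphases of the COG and MOM defuzzifiers show up on any asymmetric synthesizing fuzzy set. The two-peak membership function below is an arbitrary illustration chosen by the editors, not an example from the text.

```python
import numpy as np

y = np.linspace(0.0, 10.0, 1001)
# Asymmetric synthesizing fuzzy set: a tall peak at 3 and a lower bump at 7.
Y = np.maximum(np.maximum(1 - np.abs(y - 3) / 2, 0),
               np.maximum(0.6 * (1 - np.abs(y - 7) / 2), 0))

def cog(y, Y):
    # center of gravity: weighs every point of the support
    return float((y * Y).sum() / Y.sum())

def mom(y, Y):
    # maximum of mean: uses only the points with maximum membership
    return float(y[Y == Y.max()].mean())

print(mom(y, Y))   # stays at the tall peak
print(cog(y, Y))   # pulled toward the secondary bump
```

MOM ignores everything except the maximizing points, while COG is shifted by the low secondary bump — exactly the trade-off between the two families of defuzzifiers discussed above.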

However, the MOM method takes only the points with maximum membership into consideration, while the other points are left out of consideration. The α-cut integral and p-mean methods are two other mean summing forms over all points in the support of the synthesizing set. The research on how we can define defuzzifications and fuzzy inference networks within a general framework attracts much attention. Up to now many general defuzzifiers have been put forward [23, 54, 55, 60, 66], but they possess respective drawbacks: either the general principles are too many to be applied conveniently, or the definitions are too concrete to be generalized to general cases. To introduce some general principles for defuzzification and to build a class of generalized fuzzy inference networks within a general framework constitute a preliminary to the study of FNN applications in image processing in Chapter VIII.

§1.5 Fuzzy techniques in image restoration

The objective of image restoration is to reconstruct an image from a degraded one resulting from system errors, noises and so on. In practice it is inevitable that ambiguity and uncertainty be brought in during the acquisition or transmission of digital images. There are two ways to achieve the restoration objective [3, 52]. One is to model the corrupted image degraded by motion, system distortion and additive noises, whose statistical models are known; the inverse process may then be applied to restore the degraded images. The other is called image enhancement, that is, constructing digital filters to remove noises and restore the corrupted images, to smooth non-impulse noises and to enhance edges or other salient features of the image. Originally, image restoration included the subjects related to the first way only; recently many scholars have put the second way into the field of image restoration [12, 35, 60]. Linear filter theory is an efficient tool to process additive Gaussian noise, but it cannot deal with non-additive Gaussian noise. So the research on nonlinear filters has been attracting many scholars' attention [3, 60], especially in filtering theory to remove system distortion and impulse noises. This approach is highly nonlinear in nature and cannot be characterized by traditional mathematical modeling; instead, one may use human knowledge expressed heuristically in natural language to describe such images, and fuzzy sets and fuzzy logic can be efficiently incorporated to do that. So it is convenient to employ fuzzy techniques in image processing. Recently fuzzy techniques have been efficiently applied in the field of image restoration; the related discussions appeared in about 1981 [48, 49], but not until 1994 did the systematic results related occur.

§1.5.1 Crisp nonlinear filters

The rank selection (RS) filter is a useful nonlinear filtering model, whose simplest form is the median filter [52].

By the median filter, impulsive type noise can be suppressed, but fine image structures are also removed; moreover, when the noise probability p > 0.5, the median filter can result in poor filtering performance. To offer improved performance, many generalizations of the median filter have been developed. They include the weighted order statistic filter, the center weighted median filter, the permutation filter, the rank conditioned rank selection (RCRS) filter, and the stack filter, etc (see [3, 52]). The guidance for building all kinds of RS type filters is to remove impulsive noise while keeping the fine image structure. The RCRS filter is built by introducing a feature vector and a rank selection operator; it possesses the advantage of utilizing the rank condition and the selection feature of the sample set simultaneously, and thus all RS type filters may be handled within a general framework [3]. However, although RS type filters improve the median filter from different aspects, their own shortcomings are not overcome, for the outputs of all these filters are the observation samples in the operating window of the image. For example, an RCRS filter may change the fine image structure while removing impulsive noise; moreover, the complexity of the RCRS filter increases exponentially with the order (the length of the operating window), and when the noise probability exceeds 0.5 it is difficult to get a restoration with good performance. Such facts have spurred the development of fuzzy filters.

§1.5.2 Fuzzy filters

The RCRS filter cannot overcome the drawbacks of the median filter thoroughly, since its ultimate output is still chosen from the gray levels in the operating window. By fuzzifying the selection function as a fuzzy rank, the RCRS filter can be generalized to a new version — the rank conditioned fuzzy selection (RCFS) filter [12]. Although the RCFS filter improves the performance of the RCRS filter, problems similar to those of the RCRS filter still arise. So in more cases fuzzy techniques may be used to improve the RS type filters in the following respects: extending the output range, soft decision and fuzzy inference structure. In recent years fuzzy theory as a soft technique has been successfully applied in modeling degraded images and building noise filters; such a fuzzy filter synthesizes the advantages of RS type filters, and as a signal restoration model it can also be generalized as a neural network filter. Extending the output range means generalizing crisp filters within a fuzzy framework, which improves the performance of RS filters as well as the filtering capability; the image information may thereby be used more efficiently. Soft decision means that we may use fuzzy set theory to soften the constraint conditions for the digital image and to build the image restoration techniques; Civanlar et al in [17] firstly establish an efficient image restoration model by soft decision. Fuzzy inference structure utilizes natural language, such as 'Dark', 'Darker', 'Medium', 'Brighter', 'Bright' and so on, to describe the gray levels of the image related, in which the key part is to define suitable membership functions
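A minimal one-dimensional version of the median filter and of a center weighted median makes the trade-off above concrete (an illustrative sketch; real image filters operate on two-dimensional windows).

```python
import numpy as np

def median_filter(x, k=3):
    # running median over an odd-length window, edges replicated
    pad = k // 2
    xp = np.pad(x, pad, mode='edge')
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

def cw_median_filter(x, k=3, w=3):
    # center weighted median: the center sample is counted w times before ranking
    pad = k // 2
    xp = np.pad(x, pad, mode='edge')
    return np.array([np.median(list(xp[i:i + k]) + [xp[i + pad]] * (w - 1))
                     for i in range(len(x))])

sig = np.array([1., 1., 9., 1., 1.])   # an isolated impulse
print(median_filter(sig))              # impulse suppressed
print(cw_median_filter(sig))           # larger center weight preserves the sample
```

The plain median removes the impulse, while a large center weight preserves the center sample unchanged — the same mechanism that preserves fine detail also lets impulses through, which is exactly the tension the RS-type and fuzzy generalizations try to resolve.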

Yu and Chen in [66] generalize the stack filter as a fuzzy stack filter. One of the key steps in doing so is fuzzifying a positive Boolean function (PBF) as a fuzzy PBF, by which we may estimate a PBF from the upper and the lower, and by which the filtering performance is much improved. And so the fuzzy stack filter includes the stack filter as a special case; the performance of the related filters in processing high probability (p > 0.5) noise images may be advantageous [35, 60]. Furthermore, in [47] the classical vector median filter is generalized to a fuzzy one. An obvious advantage of such approaches is that the fuzzy rules may be adjusted adaptively. Moreover, the fault-tolerance of fuzzy relational equations is a tool for image compression and reconstruction. Of course, the research on image restoration by fuzzy techniques is still in its infancy, and many elementary problems related remain unsolved; constructing a systematic theory in this field is a main object of future research on the subject. A central part of Chapter VIII is to build some optimal FNN filters by developing suitable fuzzy inference rules and fuzzy inference networks, and then some FNN mechanisms are constructed to design noise filters.

§1.6 Notations and preliminaries

In the following let us present the main notations and terminologies used in the book, and account for the organization of the book. Suppose N is the natural number set, Z is the integer set, and R+ is the collection of all nonnegative real numbers. Let R = R^1, and let R^d be the d-dimensional Euclidean space. If x ∈ R, Int(x) means the maximum integer not exceeding x. If X is a universe, by F(X) we denote the collection of all fuzzy sets defined on X, and A̅ is the closure of A. By A, B, C, ... we denote subsets of R^d. For A, B ⊂ R^d, let dH(A, B) be the Hausdorff metric between A and B:

dH(A, B) = max{ ∨_{x∈A} ∧_{y∈B} {‖x − y‖}, ∨_{y∈B} ∧_{x∈A} {‖x − y‖} },   (1.3)

where '∨' means the supremum operator 'sup', '∧' means the infimum operator 'inf', and ‖·‖ means the Euclidean norm. For intervals [a, b], [c, d] ⊂ R, define the metric dE([a, b], [c, d]) as follows:

dE([a, b], [c, d]) = {(a − c)^2 + (b − d)^2}^{1/2}.   (1.4)

Given intervals [a, b], [c, d] ⊂ R, it is easy to show that

dH([a, b], [c, d]) ≤ dE([a, b], [c, d]) ≤ √2 · dH([a, b], [c, d]),   (1.5)

that is, the metrics dE and dH are equivalent.
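The two-sided bound (1.5) can be checked numerically. In the sketch below (the helper names are ours), dH between two closed intervals reduces to the larger endpoint gap max{|a − c|, |b − d|}—a standard fact for intervals on the real line:

```python
import math

def d_H(interval1, interval2):
    """Hausdorff metric between closed intervals [a,b] and [c,d].
    The sup/inf definition (1.3) reduces to max(|a-c|, |b-d|) here."""
    (a, b), (c, d) = interval1, interval2
    return max(abs(a - c), abs(b - d))

def d_E(interval1, interval2):
    """Euclidean metric (1.4): {(a-c)^2 + (b-d)^2}^(1/2)."""
    (a, b), (c, d) = interval1, interval2
    return math.hypot(a - c, b - d)

# Check the equivalence bounds (1.5): d_H <= d_E <= sqrt(2) * d_H.
I, J = (0.0, 2.0), (1.0, 5.0)
assert d_H(I, J) <= d_E(I, J) <= math.sqrt(2) * d_H(I, J)
```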

F0(R) means the subset of F(R) whose members A satisfy the following conditions:

(i) The kernel of A satisfies Ker(A) = {x ∈ R | A(x) = 1} ≠ ∅;
(ii) ∀α ∈ (0, 1], the α-cut A_α is a bounded and closed interval, A_α = [a_α^−, a_α^+];
(iii) The support of A, Supp(A) = cl{x ∈ R | A(x) > 0}, is a bounded and closed set of R.

If A ∈ F0(R), then A is called a bounded fuzzy number. We denote the support Supp(A) of a fuzzy set A by A_0. If we generalize the condition (ii) as

(ii)' A is a convex fuzzy set, that is, ∀x1, x2 ∈ R and ∀a ∈ [0, 1], A(a·x1 + (1 − a)·x2) ≥ A(x1) ∧ A(x2),

then it is easy to show that (ii)' is equivalent to the fact that ∀α ∈ (0, 1], A_α ⊂ R is an interval. Denote the collection of fuzzy sets satisfying (i), (ii)' and (iii) as F_c(R). Obviously F0(R) ⊂ F_c(R).

For A, B ∈ F0(R), define the metric between A and B as [19, 20]:

D(A, B) = ∨_{α∈[0,1]} { dH(A_α, B_α) } = ∨_{α∈[0,1]} { |a_α^− − b_α^−| ∨ |a_α^+ − b_α^+| },   (1.6)

where A_α = [a_α^−, a_α^+] and B_α = [b_α^−, b_α^+]. By [19] it follows that (F0(R), D) is a complete metric space; |A| means D(A, {0}).

For a given function f : R^d → R, we may extend f as f : F0(R)^d → F(R) by the extension principle [44]:

f(A1, ..., Ad)(y) = ∨_{f(x1,...,xd)=y} { ∧_{i=1}^{d} Ai(xi) },   (1.7)

and f is called an extended function. Denote F0(R)^d = F0(R) × ··· × F0(R). For (A1, ..., Ad), (B1, ..., Bd) ∈ F0(R)^d we also denote, for simplicity,

D((A1, ..., Ad), (B1, ..., Bd)) = Σ_{i=1}^{d} D(Ai, Bi).   (1.8)

It is also easy to show that (F0(R)^d, D) is a complete metric space.
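As a concrete illustration of the supremum metric (1.6), the sketch below takes triangular fuzzy numbers—a familiar special case of bounded fuzzy numbers; the (l, m, r) parametrization and the finite α-grid approximation of the supremum are our own simplifications:

```python
def alpha_cut(tri, alpha):
    """alpha-cut of a triangular fuzzy number (l, m, r): a closed
    interval, as condition (ii) requires; endpoints are affine in alpha."""
    l, m, r = tri
    return (l + alpha * (m - l), r - alpha * (r - m))

def D(triA, triB, grid=101):
    """Approximate D(A,B) = sup over alpha of d_H(A_alpha, B_alpha),
    formula (1.6), on a finite alpha grid; d_H between intervals is
    the larger endpoint gap."""
    best = 0.0
    for k in range(grid):
        alpha = k / (grid - 1)
        (a, b), (c, d) = alpha_cut(triA, alpha), alpha_cut(triB, alpha)
        best = max(best, max(abs(a - c), abs(b - d)))
    return best

A = (0.0, 1.0, 2.0)          # triangular fuzzy number "about 1"
B = (0.5, 1.5, 2.5)          # triangular fuzzy number "about 1.5"
print(D(A, B))               # every endpoint gap is 0.5 here
```

For triangular fuzzy numbers the endpoint gaps are affine in α, so the supremum is attained at α = 0 or α = 1; the grid is only needed for general bounded fuzzy numbers.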

For simplicity, for x^1 = (x_1^1, ..., x_n^1) and x^2 = (x_1^2, ..., x_n^2), we denote

x^1 ∨ x^2 = (x_1^1 ∨ x_1^2, ..., x_n^1 ∨ x_n^2),  x^1 ∧ x^2 = (x_1^1 ∧ x_1^2, ..., x_n^1 ∧ x_n^2).

For n, m ∈ N, by μ_{n×m} we denote the collection of all fuzzy matrices with n rows and m columns. C^1(R) is the collection of all continuously differentiable functions on R, and C^1([a, b]) is the set of all continuously differentiable functions on the closed interval [a, b].

Let μ be the Lebesgue measure on R^d, where B is a σ-algebra on R^d, and let f : R^d → R be a measurable function. Given p ∈ [1, +∞), if f is a p-integrable function on R^d and A ⊂ R^d, define the Lp(μ)-norm of f as

‖f‖_{A,p} = { ∫_A |f(x)|^p dμ }^{1/p},  Lp(A) = { f : R^d → R | ‖f‖_{A,p} < +∞ },

where μ_A is the Lebesgue measure on A. If A = R^d we also write f's norm as ‖f‖_p, and Lp(R^d, B, μ) = Lp(R^d) = { f : R^d → R | ‖f‖_p < +∞ }.

We call σ : R → R a continuous sigmoidal function if σ is continuous, increasing and bounded, and lim_{x→−∞} σ(x) = 0, lim_{x→+∞} σ(x) = 1.

Definition 1.1 [14] Suppose g : R → R, and FN : R^d → R is a three layer feedforward neural network whose transfer function is g, that is,

FN(x1, ..., xd) = Σ_{j=1}^{m} vj · g( Σ_{i=1}^{d} wij · xi + θj ),  (x1, ..., xd) ∈ R^d.

If the family FN(·) can constitute a universal approximator, then g is called a Tauber-Wiener function.

Obviously, a continuous sigmoidal function is a Tauber-Wiener function. Moreover, if g is a generalized sigmoidal function σ : R → R, that is, lim_{x→−∞} σ(x) = 0 and lim_{x→+∞} σ(x) = 1, then by [14] it follows that g is a Tauber-Wiener function.

And Cp is a sub-class of the collection of continuous fuzzy functions F0(R)^d → F0(R). Other terminologies and notations not emphasized here may be found by the readers in the respective chapters, or sections, in which they are utilized.
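The three layer network of Definition 1.1 is just a finite sum of ridge functions. A minimal sketch, assuming a logistic transfer function and purely illustrative weights:

```python
import math

def sigmoid(x):
    """A continuous sigmoidal function: increasing, bounded, tending to
    0 at -infinity and 1 at +infinity, hence Tauber-Wiener by [14]."""
    return 1.0 / (1.0 + math.exp(-x))

def FN(x, W, theta, v):
    """Three layer feedforward network of Definition 1.1:
    FN(x_1,...,x_d) = sum_j v_j * g(sum_i w_ij * x_i + theta_j)."""
    hidden = [sigmoid(sum(w_i * x_i for w_i, x_i in zip(col, x)) + t)
              for col, t in zip(W, theta)]
    return sum(v_j * h_j for v_j, h_j in zip(v, hidden))

# Tiny example: d = 2 inputs, m = 2 hidden units (weights are made up).
W = [(1.0, -1.0), (0.5, 0.5)]   # W[j] holds the weights into hidden unit j
theta = (0.0, -0.25)
v = (2.0, -1.0)
y = FN((0.3, 0.7), W, theta, v)
```

With all weights and thresholds zero and v = (1,), the output is g(0) = 0.5, which is a quick sanity check on the formula.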

§1.7 Outline of the topics of the chapters

The book tries to develop FNN theory systematically through three main types of FNN models, especially the FNN models and the learning algorithms related. They are: FNN's based on fuzzy operators, which are respectively treated in Chapter II and Chapter III; fuzzified neural networks taking regular FNN's and polygonal FNN's as main components, which are dealt with by Chapter IV and Chapter V; and fuzzy inference networks able to realize the common fuzzy systems, such as Mamdani fuzzy systems, T-S fuzzy systems and stochastic fuzzy systems and so on, which are handled in Chapter VI, Chapter VII and Chapter VIII. In each chapter we take some simulation examples to illustrate the effectiveness of our results.

Chapter II treats two classes of FNN models—feedforward fuzzy associative memories (FAM's) for storing fuzzy patterns, and fuzzy adaptive resonance theory (ART) for classifying fuzzy patterns. A fuzzy pattern can be expressed as a vector whose components belong to [0, 1]. To improve the storage capability of a FAM, we in §2.1 build a novel feedforward FNN—the FAM with threshold, that is, we introduce a threshold to each neural unit in a FAM, by which some correct fuzzy patterns may be recalled through imprecise inputs. Some equivalent conditions under which a given family of fuzzy pattern pairs can be stored in the FAM completely are established. To take advantage of the adaptivity of neural systems we build two classes of iteration learning algorithms for the FAM, which are called the fuzzy delta algorithm and the fuzzy BP algorithm, in §2.2 and §2.3, respectively. Moreover, an analytic learning algorithm for connection weights and thresholds, which guarantees that the FAM stores the given fuzzy pattern pair family, is built. §2.4 focuses on a fuzzy classifying network—the fuzzy ART. After recalling some fundamental concepts of ART1 we define a fuzzy version of ART1 through the fuzzy operators '∨' and '∧'. We characterize the classifying procedure of the fuzzy ART and develop some useful properties about how a fuzzy pattern is classified. Finally, corresponding to a crisp ARTMAP we propose its fuzzy version—the fuzzy ARTMAP—by joining two fuzzy ART's together.

Chapter III deals with another type of FNN's based on the fuzzy operators '∨' and '∧'—feedback FAM's, which are dynamic FNN's and can also recall the right stored fuzzy patterns. We focus on two classes of dynamic FAM's; they are fuzzy Hopfield networks and fuzzy bidirectional associative memories (FBAM's). §3.1 reports many useful dynamic properties of the fuzzy Hopfield networks by studying attractors and attractive basins. It is also shown that the dynamical systems are uniformly stable and their attractors are Lyapunov stable. To improve the storage capability and fault-tolerance, the fuzzy Hopfield networks with threshold are reported in §3.2, and based on fault-tolerance we develop an analytic learning algorithm. In §3.3 and §3.4 the corresponding problems for FBAM's are analyzed; at first we show the fact that the FBAM's converge to their stable equilibria. Many simulation examples are studied in detail to illustrate our conclusions.

The transitive laws of attractors, attractors or limit cycles, and the discrimination of the pseudo-attractors of these two dynamical FNN's are presented in §3.5 and §3.6, respectively. The basic tools for doing this include connection networks, fuzzy row-restricted matrices, elementary memories and so on; some learning algorithms based on fault-tolerance are then built.

Chapter IV develops the systematic theory of regular FNN's by focusing mainly on two classes of important problems: learning algorithms for the fuzzy weights of regular FNN's, and approximating capability, i.e. universal approximation of regular FNN's to fuzzy functions. Here a regular FNN means mainly a multi-layer feedforward FNN. To this end, we at first introduce regular fuzzy neurons and present some of their useful properties. Then we define regular FNN's by connecting a group of regular fuzzy neurons, and give some results about the I/O relationships of regular FNN's. In §4.3 we introduce a novel error function related to three layer feedforward regular FNN's and develop a fuzzy BP algorithm for the fuzzy weights. Using the fuzzy BP algorithm we can employ a three layer regular FNN to realize a family of fuzzy inference rules approximately; the realization steps of the approximating procedure are presented and illustrated by a simulation example. To speed up the convergence of the fuzzy BP algorithm, an improved fuzzy BP algorithm, whose learning constant in each iteration is determined by GA, is developed to realize the approximation with a given accuracy. §4.4 develops a fuzzy CG algorithm for the fuzzy weights of three layer regular FNN's. It is also shown in theory that the fuzzy CG algorithm converges to the minimum point of the error function, and simulation examples demonstrate the fact that the fuzzy CG algorithm indeed improves on the fuzzy BP algorithm in convergence speed.

Buckley's conjecture that 'the regular FNN's can be universal approximators of the continuous and increasing fuzzy function class' is proved to be false by a counterexample. However, it can be proven that regular FNN's can approximate the extended function of any continuous function with arbitrarily given degree of accuracy on any compact set of F_c(R). Taking these facts as the basis, we in §4.5 take the fuzzy Bernstein polynomial as a bridge to show that four layer feedforward regular FNN's can be approximators to the continuous fuzzy valued function class; thus, the universal approximation problem for four layer regular FNN's is solved completely. In §4.6 we develop some equivalent conditions for the fuzzy function class Cp which can guarantee universal approximation of four layer regular FNN's to hold. Finally in the chapter, we in §4.7 employ a regular FNN to represent integrable bounded fuzzy valued functions approximately in the integral norm sense. Many simulation examples are shown to illustrate our conclusions.

In Chapter V we proceed to analyze universal approximation of regular FNN's. The main problem to solve is to simplify the equivalent conditions on the fuzzy function class Cp in Chapter IV. The basic tools for doing this are the ∨−∧ function and the polygonal fuzzy numbers. The main contributions are to introduce a novel class of FNN models—polygonal FNN's—and to present useful properties of these FNN's.

These properties include topological architecture, internal operations, I/O relationship analysis, approximation capability, learning algorithms and so on. In §5.1 we at first, by developing a novel extension principle and fuzzy arithmetic, improve Zadeh's extension principle in F(R); with it many extended operations, such as the extended multiplication and the extended division and so on, can be simplified strikingly. §5.2 reports the topological and analytic properties of the polygonal fuzzy number space F^n(R): the space is a completely separable metric space; also it is locally compact; a subset of the space is compact if and only if the set is bounded and closed; and a bounded fuzzy number can be a limit of a sequence of polygonal fuzzy numbers. Based on the novel extension principle, §5.3 defines the polygonal FNN, which is a three layer feedforward network with polygonal fuzzy number inputs, outputs and connection weights. In §5.3 a fuzzy BP algorithm for the fuzzy weights of the polygonal FNN's is also developed, and it is successfully applied to the approximate realization of fuzzy inference rules. §5.4 treats universal approximation of the polygonal FNN's, and shows the fact that a fuzzy function class can guarantee universal approximation of the polygonal FNN's if and only if each fuzzy function in the class is increasing, which simplifies the corresponding conditions in §4.6. So the polygonal FNN's are more applicable.

Chapter VI deals mainly with the approximation capability of generalized fuzzy systems in the integral norm. To this end, we in §6.1 at first develop uniformity analysis for three layer and four layer crisp feedforward neural networks, that is: for a given function family, the crisp neural networks can approximate each function in the family uniformly with a given accuracy; also we can construct the approximating neural networks directly through the function family. The basic tool for doing this is the piecewise linear function, which is one central part of §6.1; a few approximation theorems for the piecewise linear functions expressing each Lp(μ)-integrable function are also established. In §6.2 we define the generalized fuzzy systems, which include generalized Mamdani fuzzy systems and generalized T-S fuzzy systems as special cases, and show the universal approximation of the generalized fuzzy systems to Lp(μ)-integrable functions in the integral norm sense. For a given accuracy ε > 0, an upper bound on the size of the fuzzy rule base of a corresponding approximating fuzzy system is estimated.

One main impediment hindering the application of fuzzy systems is the 'rule explosion' problem, that is, the size of the fuzzy rule base of a fuzzy system increases exponentially as the input space dimensionality increases. To overcome such an obstacle, we in §6.3 employ the hierarchy introduced by Raju et al to define hierarchical fuzzy systems; a hierarchical fuzzy system and the corresponding higher dimensional fuzzy system are equivalent. So the hierarchical fuzzy systems can be universal approximators with the maximum norm and with the integral norm, respectively, by which the 'rule explosion' problem can be solved successfully. Thus, the fuzzy systems can also be applied to the cases of higher dimensional complicated systems.
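The 'rule explosion' arithmetic is easy to make concrete. Assuming m fuzzy sets per input variable and, for the hierarchical case, a chain of two-input subsystems in the style of Raju et al (the exact hierarchy used in §6.3 may differ), the rule counts compare as follows:

```python
def rules_flat(n_inputs, m_sets):
    """A single fuzzy system with a complete rule base needs m^n rules:
    one rule per combination of antecedent fuzzy sets."""
    return m_sets ** n_inputs

def rules_hierarchical(n_inputs, m_sets):
    """A cascade of n-1 two-input subsystems, each with a complete
    m^2 rule base, grows only linearly in the number of inputs."""
    return (n_inputs - 1) * m_sets ** 2

# With m = 5 fuzzy sets per variable:
for n in (2, 4, 6, 8):
    print(n, rules_flat(n, 5), rules_hierarchical(n, 5))
# 8 inputs need 390625 rules in the flat system but only 175 in the cascade.
```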

Some further subjects about the approximation of fuzzy systems are studied in Chapter VII. At first we treat fuzzy inference networks within a general framework: a general fuzzy inference network model is introduced by defining a generalized defuzzifier, and it includes the common fuzzy inference networks as special cases. In theory, the generalized fuzzy inference networks can be universal approximators, which provides us with the theoretic basis for the applications of generalized fuzzy inference networks. In dynamical system identification we demonstrate by some real examples that the performance resulting from generalized fuzzy inference networks is much better than that from crisp neural networks or from fuzzy systems with Gaussian type antecedent fuzzy sets.

Then we discuss the approximation capability of fuzzy systems in a stochastic environment. §7.2 introduces two classes of stochastic fuzzy systems; they are stochastic Mamdani fuzzy systems and stochastic T-S fuzzy systems. To this end we in §7.1 recall some basic concepts of stochastic analysis, for instance the stochastic integral, the stochastic measure and the canonical representation of a stochastic process, and so on. For the processes concerned, their stochastic integrals with respect to an orthogonal incremental process exist, and the stochastic integrals can be expressed approximately as an algebraic summation of a sequence of random variables. The systematic analysis of the approximating capability of stochastic fuzzy systems, including stochastic Mamdani fuzzy systems and stochastic T-S fuzzy systems in the mean square sense, is then presented. Learning algorithms for stochastic Mamdani fuzzy systems and stochastic T-S fuzzy systems are also developed, and the approximating realization procedure of some stochastic processes, including a class of non-stationary processes, by stochastic fuzzy systems is demonstrated by some simulation examples. Many further simulation examples are presented to illustrate the approximating results in the chapter.

Chapter VIII focuses mainly on the application of FNN's in image restoration. In §8.2 we propose FNN representations of a 2-D digital image by defining the deviation fuzzy sets and coding the image as the connection weights of a fuzzy inference network; such representations possess many useful properties. The representation is accurate when the image is uncorrupted, and the image can be completely reconstructed; however, when the image is corrupted, the representation may smooth noises and serve as a filter. Based on the minimum absolute error (MAE) criterion we design an optimal filter FR, whose filtering performance is much better than that of the median filter: FR can preserve the uncorrupted structure of the image and remove impulse noise simultaneously. However, when the noise probability exceeds 0.5, i.e. p > 0.5, FR may result in bad filtering performance. In order to improve FR for high probability noise, §8.3 develops a novel FNN—the selection type FNN, which can be a universal approximator and is suitable for the design of noise filters; the antecedent fuzzy sets of the selection type FNN are adjusted rationally. Also based on the MAE criterion, an optimal FNN filter is built.

It preserves the uncorrupted structures of the image as far as possible, and also removes impulse noise to the greatest extent. Further, the FNN filter can also suppress some hybrid type noises. By many real examples we demonstrate that restoration images with good performance can be obtained through the filter FR, or the FNN filter. Especially, the filtering performances of FNN filters in restoring high probability (p > 0.5) noise images may be much better than those of RS type filters, including the RCRS filter. So by the FNN filter a restoration image with high quality may be built from a corrupted image degraded by high or low probability impulse noises.

References

[1] Alefeld G. & Herzberger J., Introduction to interval computations, New York: Academic Press, 1983.
[2] Aliev R. A., et al., Genetic algorithm-based learning of fuzzy neural network. Part 1: feed-forward fuzzy neural networks, Fuzzy Sets and Systems, 2001, 118: 351-358.
[3] Astola J. & Kuosmanen P., Fundamentals of Nonlinear Digital Filtering, Boca Raton, FL: CRC, 1997.
[4] Blanco A., Delgado M. & Requena I., Identification of relational equations by fuzzy neural networks, Fuzzy Sets and Systems, 1995, 71: 215-226.
[5] Blanco A., Delgado M. & Requena I., Improved fuzzy neural networks for solving relational equations, Fuzzy Sets and Systems, 1995, 72: 311-322.
[6] Buckley J. J. & Hayashi Y., Can fuzzy neural nets approximate continuous fuzzy functions?, Fuzzy Sets and Systems, 1994, 61(1): 43-51.
[7] Cai K. Y., Introduction to Fuzzy Reliability, Boston/Dordrecht/London: Kluwer Academic Publishers, 1996.
[8] Carpenter G. A. & Grossberg S., A massively parallel architecture for a self-organizing neural pattern recognition machine, Computer Vision, Graphics, and Image Processing, 1987, 37: 54-115.
[9] Carpenter G. A. & Grossberg S., ART2: Stable self-organization of pattern recognition codes for analog input patterns, Applied Optics, 1987, 26: 4919-4930.
[10] Carpenter G. A. & Grossberg S., ART3: Hierarchical search using chemical transmitters in self-organizing pattern recognition architectures, Neural Networks, 1990, 3: 129-152.
[11] Carpenter G. A., Grossberg S. & Rosen D. B., Fuzzy ART: an adaptive resonance algorithm for rapid, stable classification of analog patterns (Tech. Rep. CAS/CNS-91-006), Proc. of Internat. Joint Conf. on Neural Networks, 1991, 2: 411-416.
[12] Chen R. C. & Yu P. T., Fuzzy selection filters for image restoration with neural learning, IEEE Trans. on Signal Processing, 1999, 47: 1446-1450.
[13] Chen T., Chen H. & Liu R. W., Approximation capability in C(R̄^n) by multilayer feedforward networks and related problems, IEEE Trans. on Neural Networks, 1995, 6(1): 25-30.

[14] Chen T., Approximating problems in system identification by neural networks, Science in China, Series A, 1994, 37(4): 414-422.
[15] Chung F. L. & Lee T., Fuzzy competitive learning, Neural Networks, 1994, 7(3): 539-551.
[16] Chung F. L. & Duan J., On multistage fuzzy neural network modeling, IEEE Trans. on Fuzzy Systems, 2000, 8: 125-142.
[17] Civanlar M. R. & Trussell H. J., Digital image restoration using fuzzy sets, IEEE Trans. on Acoust., Speech, Signal Processing, 1986, 34: 919-936.
[18] Dai Y., Han J., Liu G., et al., Convergence properties of nonlinear conjugate gradient algorithms, SIAM J. on Optimization, 1999, 10: 345-358.
[19] Diamond P. & Kloeden P., Characterization of compact subsets of fuzzy sets, Fuzzy Sets and Systems, 1989, 29: 341-348.
[20] Diamond P. & Kloeden P., Metric Spaces of Fuzzy Sets, Singapore: World Scientific Publishing, 1994.
[21] Dubois D. & Prade H., Fuzzy sets and systems: theory and applications, New York: Academic Press, 1980.
[22] Feuring T. & Lippe W., The fuzzy neural network approximation lemma, Fuzzy Sets and Systems, 1999, 102(2): 227-237.
[23] Filev D. P. & Yager R. R., A generalized defuzzification method via BAD distributions, Internat. J. of Intelligent Systems, 1991, 6: 687-697.
[24] Haykin S., Neural networks, a comprehensive foundation, New York: IEEE Press, 1994.
[25] Hopfield J. J., Neural networks and physical systems with emergent collective computational abilities, Proc. Nat. Acad. Sci., 1982, 79: 2554-2558.
[26] Horikawa S., Furuhashi T. & Uchikawa Y., On fuzzy modeling using fuzzy neural networks with the back-propagation algorithm, IEEE Trans. on Neural Networks, 1992, 3: 801-806.
[27] Ingman D. & Merlis Y., Maximum entropy signal reconstruction with neural networks, IEEE Trans. on Neural Networks, 1992, 3: 195-201.
[28] Ishibuchi H., Fujioka R. & Tanaka H., Neural networks that learn from fuzzy if-then rules, IEEE Trans. on Fuzzy Systems, 1993, 1: 85-97.
[29] Jain A. K., Fundamentals of digital image processing, Englewood Cliffs, NJ: Prentice Hall, 1989.
[30] Jang J.-S. R., Sun C.-T. & Mizutani E., Neuro-fuzzy and soft computing, NJ: PTR Prentice-Hall, 1997.
[31] Khanna T., Foundation of neural networks, Reading, MA: Addison-Wesley, 1990.
[32] Kikuchi H., Otake A. & Nakanishi S., Functional completeness of hierarchical fuzzy modeling, Information Sciences, 1998, 110: 51-61.
[33] Kosko B., Fuzzy associative memories, in: Kandel A. (Ed.), Fuzzy expert systems, Reading, MA: Addison-Wesley, 1987.
[34] Kuo Y., et al., High-stability AWFM filter for signal restoration and its hardware design, Fuzzy Sets and Systems, 2000, 114: 185-202.
[35] Kwan H. K. & Cai Y., A fuzzy neural network and its application to pattern recognition, IEEE Trans. on Fuzzy Systems, 1994, 2: 185-193.

[36] Lee S. C. & Lee E. T., Fuzzy sets and neural networks, J. Cybernetics, 1974, 4: 83-103.
[37] Lee S. C. & Lee E. T., Fuzzy neural networks, Math. Biosci., 1975, 23: 151-177.
[38] Li X. & Ruan Da, Novel neural algorithms based on fuzzy δ rules for solving fuzzy relation equations: Part I, II, III, Fuzzy Sets and Systems, 1997, 90: 11-23; 1999, 103: 473-486; 2000, 109: 355-362.
[39] Linden A. & Kindermann J., Inversion of multilayer nets, Proc. IEEE Intern. Joint Conf. on Neural Networks, 1989: 425-430.
[40] Liu Puyin & Li Hongxing, Research on fuzzy neural network theory: a survey, Fuzzy Systems and Math. (In Chinese), 10(1): 77-87.
[41] Liu Puyin & Zhang Hanjiang, Mathematical problems in research on reliability of telecommunication networks, Acta Sinca Telecomm. (In Chinese), 21(10): 50-57.
[42] Liu Puyin & Zhang Weiming, Soft computing and its philosophical intension, Studies in Dialectics of Nature (In Chinese), 2000, 16(5): 29-34.
[43] Lorentz G. G., Bernstein polynomials, New York: Chelsea, 1986.
[44] Nguyen H. T., A note on the extension principle for fuzzy sets, J. Math. Anal. Appl., 1978, 64: 369-380.
[45] Nie J., Nonlinear time-series forecasting: A fuzzy-neural approach, Neurocomputing, 1997, 16: 63-76.
[46] Nishina T. & Hagiwara M., Fuzzy inference neural network, Neurocomputing, 1997, 14: 223-239.
[47] Nobuhara H., Pedrycz W. & Hirota K., Fast solving method of fuzzy relational equation and their application to lossy image compression/reconstruction, IEEE Trans. on Fuzzy Systems, 2000, 8: 325-334.
[48] Pal S. K. & King R. A., Image enhancement using fuzzy sets, Electron. Lett., 1980, 16: 376-378.
[49] Pal S. K. & King R. A., Image enhancement using smoothing with fuzzy sets, IEEE Trans. on Systems, Man, and Cybernet., 1981, 11: 494-501.
[50] Pal S. K. & Mitra S., Multilayer perceptron, fuzzy sets and classification, IEEE Trans. on Neural Networks, 1992, 3(5): 683-697.
[51] Park S. & Han T., Iterative inversion of fuzzified neural networks, IEEE Trans. on Fuzzy Systems, 2000, 8: 266-280.
[52] Pedrycz W., Fuzzy neural networks and neurocomputations, Fuzzy Sets and Systems, 1993, 56(1): 1-28.
[53] Pitas I. & Venetsanopoulos A. N., Nonlinear digital filters—principles and applications, Boston: Kluwer Academic Publishers, 1990.
[54] Raju G. V. S., Zhou J. & Kisner R., Hierarchical fuzzy control, Internat. J. of Control, 1991, 54: 1201-1216.
[55] Runkler T. A., A set of axioms for defuzzification strategies towards a theory of rational defuzzification operators, Proc. IEEE Internat. Conf. on Fuzzy Systems, 1993, 2: 1161-1166.
[56] Saade J. J. & Diab H. B., Defuzzification techniques for fuzzy controllers, IEEE Trans. on Systems, Man, and Cybernet.—Part B, 2000, 30: 223-229.

[57] Scarselli F. & Tsoi A. C., Universal approximation using feedforward neural networks: a survey of some existing methods, and some new results, Neural Networks, 1998, 11(1): 15-37.
[58] Simpson P. K., Fuzzy Min-Max neural networks—Part 1: classification, IEEE Trans. on Neural Networks, 1992, 3: 777-786.
[59] Simpson P. K., Fuzzy Min-Max neural networks—Part 2: clustering, IEEE Trans. on Fuzzy Systems, 1993, 1: 32-45.
[60] Tsai H. H. & Yu P. T., Adaptive fuzzy hybrid multichannel filters for removal of impulsive noise from color images, Signal Processing, 1999, 74(2): 127-151.
[61] Wang L. X., Adaptive fuzzy systems and control: design and stability analysis, Englewood Cliffs, NJ: PTR Prentice-Hall, 1994.
[62] Wang L. X., Universal approximation by hierarchical fuzzy systems, Fuzzy Sets and Systems, 1998, 93(2): 223-230.
[63] Wang L. X., Analysis and design of hierarchical fuzzy systems, IEEE Trans. on Fuzzy Systems, 1999, 7(6): 617-624.
[64] Yager R. R. & Zadeh L. A. (Eds.), Fuzzy sets, neural networks and soft computing, New York: Van Nostrand Reinhold, 1994.
[65] Ying H., Ding Y., Li S., et al., Comparison of necessary conditions for typical Takagi-Sugeno and Mamdani fuzzy systems as universal approximators, IEEE Trans. on Systems, Man and Cybernet.—Part A, 1999, 29(5): 508-514.
[66] Yu P. T. & Chen R. C., Fuzzy stack filters—their definition, fundamental properties and application in image processing, IEEE Trans. on Image Processing, 1996, 5: 838-854.
[67] Zadeh L. A., Soft computing and fuzzy logic, IEEE Software, 1994, 11(6): 48-56.
[68] Zadeh L. A., Fuzzy logic, neural networks, and soft computing, Commun. ACM, 1994, 37(3): 77-84.
[69] Zadeh L. A., Fuzzy logic = computing with words, IEEE Trans. on Fuzzy Systems, 1996, 4(2): 103-111.
[70] Zadeh L. A., From computing with numbers to computing with words—from manipulation of measurements to manipulation of perceptions, IEEE Trans. on Circuits and Systems—I, 1999, 45(1): 105-119.
[71] Zadeh L. A., Fuzzy logic, neural networks and soft computing, One-page Course Announcement of CS 294-4, the University of California at Berkeley, 1992.
[72] Zeng X. J. & Singh M. G., A relationship between membership functions and approximation accuracy in fuzzy systems, IEEE Trans. on Systems, Man and Cybernet., 1996, 26(1): 176-180.
[73] Zhang X. H., Huang C. C., Tan S., et al., The min-max function differentiation and training of fuzzy neural networks, IEEE Trans. on Neural Networks, 1996, 7: 1139-1150.
[74] Zimmermann H.-J., Fuzzy set theory and its application, Dordrecht: Kluwer Academic Press, 1991.

CHAPTER II

Fuzzy Neural Networks for Storing and Classifying

If the fuzzy information handled by a FNN flows in one direction, from input to output, such a FNN is called a feedforward FNN. In this chapter, we focus mainly on feedforward FNN's whose internal operations are based on the fuzzy operator pair '∨−∧'. Such a FNN constitutes a fuzzy perceptron, which is called a fuzzy associative memory (FAM) and was first proposed by Kosko B. in about 1987. It is developed from a crisp feedforward neural network by introducing the fuzzy operators '∨' and '∧', and the fuzzy information it handles can be described by vectors in [0,1]^n. In recent years FAM's have been applied widely in many real fields, such as fuzzy relational structure modeling [18, 21, 39], signal processing [42], pattern classification [43, 45-47] and so on.

An important subject related to FAM's is the storage capacity of the network [9, 10, 20, 23, 48], since the hardware and computation requirements for implementing a FAM with good storage capacity can be reduced significantly. So there exist a lot of researches about the storage capacity of FAM's in recent years. At first Kosko [25] developed a fuzzy Hebbian rule for FAM's, but it suffers from very poor storage capacity. To make up for the defects of the fuzzy Hebbian rule, Fan et al improved Kosko's method with the maximum solution matrix, and in [12] some equivalent conditions are developed under which a family of fuzzy pattern pairs can be stored in a FAM completely. The classifying capability is another important subject related to the storage capacity of a FNN: the stronger the classifying ability of a FNN is, the more fuzzy patterns the FNN can store.

By introducing the fuzzy operators '∨' and '∧', the crisp adaptive resonance theory (ART) can be generalized to a FNN model—the fuzzy ART, which can provide us with a much easier classification of a given fuzzy pattern family [7]. In this chapter we present further researches on FAM's in storage capacity, learning algorithms for the connection weight matrices, associative spaces and so on. Some optimal connecting fashions of the neurons in a FAM, under which a family of fuzzy pattern pairs can be stored in the FAM, are given, and some learning algorithms are developed based on the storage capacity of the FAM. Finally we propose some systematic approaches to deal with the fuzzy ART, and many of its classifying characteristics are developed; some real examples show the stronger classifying capability of the fuzzy ART.

§2.1 Two-layer max--min fuzzy associative memory

Since the fuzzy operators '$\vee$' and '$\wedge$' can adapt the outputs to a prefixed range, such as $[0,1]$, no transfer function in FAM's is considered in the following; also the operation '$\wedge$' acts as a threshold function [2, 3]. Suppose the input signal $\mathbf{x}\in[0,1]^n$, the output signal $\mathbf{y}\in[0,1]^m$, and $W=(w_{ij})_{n\times m}\in\mu_{n\times m}$ is the connection weight matrix. Then the input--output (I/O) relationship of a two-layer FAM can be expressed as

$$\mathbf{y}=\mathbf{x}\circ W,\quad\text{that is,}\quad y_j=\bigvee_{i=1}^{n}\{x_i\wedge w_{ij}\}\ (j=1,...,m),\qquad(2.1)$$

where '$\circ$' means the '$\vee$--$\wedge$' composition operation, $\mathbf{x}=(x_1,...,x_n)$ and $\mathbf{y}=(y_1,...,y_m)$. The topological architecture corresponding to (2.1) is shown in Figure 2.1.

[Figure 2.1 Topological architecture of the two-layer max--min FAM (input layer, output layer)]

Denote $M=\{1,...,m\}$, $N=\{1,...,n\}$ and $P=\{1,...,p\}$ $(p\in\mathbb{N})$. Give a fuzzy pattern pair family $(\mathcal{X},\mathcal{Y})=\{(\mathbf{x}_k,\mathbf{y}_k)\mid k\in P\}$, where $\mathbf{x}_k=(x_1^k,...,x_n^k)$, $\mathbf{y}_k=(y_1^k,...,y_m^k)$. One of the main objects in studying (2.1) is to develop learning algorithms for $W$, so that each pattern pair in $(\mathcal{X},\mathcal{Y})$ can be stored in (2.1). Kosko in [25] develops a fuzzy Hebbian learning algorithm for $W$, that is, $W$ can be established by the following formula:

$$W=\bigwedge_{k\in P}\{\mathbf{x}_k^{\top}\circ\mathbf{y}_k\},\qquad(2.2)$$

where $\mathbf{x}_k^{\top}$ means the transpose of $\mathbf{x}_k$. The analytic learning algorithm (2.2) can not ensure that each pattern pair $(\mathbf{x}_k,\mathbf{y}_k)$ $(k\in P)$ is stored in (2.1). To guarantee that more pattern pairs in $(\mathcal{X},\mathcal{Y})$ can be stored in (2.1), we improve the algorithm (2.2) in the following. For $i\in N$, $j\in M$, define the sets

$$G_{ij}(\mathcal{X},\mathcal{Y})=\{k\in P\mid x_i^k>y_j^k\},\quad E_{ij}(\mathcal{X},\mathcal{Y})=\{k\in P\mid x_i^k=y_j^k\},\quad L_{ij}(\mathcal{X},\mathcal{Y})=\{k\in P\mid x_i^k<y_j^k\},$$
$$GE_{ij}(\mathcal{X},\mathcal{Y})=G_{ij}(\mathcal{X},\mathcal{Y})\cup E_{ij}(\mathcal{X},\mathcal{Y}),\qquad LE_{ij}(\mathcal{X},\mathcal{Y})=L_{ij}(\mathcal{X},\mathcal{Y})\cup E_{ij}(\mathcal{X},\mathcal{Y}).$$
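The '$\vee$--$\wedge$' composition (2.1) and Kosko's fuzzy Hebbian rule (2.2) can be sketched in a few lines of plain Python. This is a minimal illustration; the function names and the toy numbers are ours, not from the text, and matrices are represented as lists of row lists.

```python
def maxmin_compose(x, W):
    # (2.1): y_j = max_i min(x_i, w_ij), the 'V-A' composition y = x o W
    return [max(min(x[i], W[i][j]) for i in range(len(x)))
            for j in range(len(W[0]))]

def fuzzy_hebbian(pairs):
    # Kosko's rule (2.2): w_ij = min over k of min(x_i^k, y_j^k),
    # i.e. the componentwise AND of the outer 'V-A' products x_k^T o y_k
    n, m = len(pairs[0][0]), len(pairs[0][1])
    return [[min(min(x[i], y[j]) for x, y in pairs)
             for j in range(m)] for i in range(n)]

# toy example: one input pattern pushed through a fixed weight matrix
y = maxmin_compose([0.2, 0.9], [[0.5, 0.1], [0.3, 0.8]])   # -> [0.3, 0.8]
```

With several training pairs, the matrix produced by `fuzzy_hebbian` often fails to recall the stored outputs exactly, which is the poor storage capacity that the improved algorithm below addresses.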

By the following analytic learning algorithm we can re-establish the connection weight matrix $W=W_0=(w^0_{ij})_{n\times m}$ in (2.1):

$$w^0_{ij}=\begin{cases}\bigwedge_{k\in G_{ij}(\mathcal{X},\mathcal{Y})}\{y_j^k\},& G_{ij}(\mathcal{X},\mathcal{Y})\neq\varnothing,\\[2pt] 1,& G_{ij}(\mathcal{X},\mathcal{Y})=\varnothing.\end{cases}\qquad(2.3)$$

For $i\in N$, $j\in M$, define the sets $S^G_{ij}(W_0,\mathcal{Y})$ and $M^w$ respectively as follows:

$$S^G_{ij}(W_0,\mathcal{Y})=\{k\in GE_{ij}(\mathcal{X},\mathcal{Y})\mid y_j^k\le w^0_{ij}\},\qquad M^w=\{W\in\mu_{n\times m}\mid\forall k\in P,\ \mathbf{x}_k\circ W=\mathbf{y}_k\}.$$

Theorem 2.1 For the given fuzzy pattern pair family $\{(\mathbf{x}_k,\mathbf{y}_k)\mid k\in P\}$ and $W_0=(w^0_{ij})_{n\times m}$ defined by (2.3), we have:
(i) $\forall k\in P$, $\mathbf{x}_k\circ W_0\subset\mathbf{y}_k$; and if the fuzzy matrix $W$ satisfies $\forall k\in P$, $\mathbf{x}_k\circ W\subset\mathbf{y}_k$, then $W\subset W_0$;
(ii) if $M^w\neq\varnothing$, then $W_0\in M^w$, and $\forall W=(w_{ij})_{n\times m}\in M^w$, $W\subset W_0$, i.e. $W_0$ is the maximum element of $M^w$;
(iii) the set $M^w\neq\varnothing$ if and only if $\forall j\in M$, $\bigcup_{i\in N}S^G_{ij}(W_0,\mathcal{Y})=P$.

Proof. (i) By the definition of $W_0$, for any $k\in P$, $i\in N$, $j\in M$: if $k\in G_{ij}(\mathcal{X},\mathcal{Y})$, then $w^0_{ij}\le y_j^k$, so $w^0_{ij}\wedge x_i^k\le y_j^k$; if $k\notin G_{ij}(\mathcal{X},\mathcal{Y})$, then $x_i^k\le y_j^k$, and again $w^0_{ij}\wedge x_i^k\le y_j^k$. Hence $\bigvee_{i\in N}\{x_i^k\wedge w^0_{ij}\}\le y_j^k$, i.e. $\mathbf{x}_k\circ W_0\subset\mathbf{y}_k$. Moreover, if $W=(w_{ij})_{n\times m}$ satisfies $\forall k\in P$, $\mathbf{x}_k\circ W\subset\mathbf{y}_k$, then for each $k\in G_{ij}(\mathcal{X},\mathcal{Y})$ we have $w_{ij}\wedge x_i^k\le y_j^k<x_i^k$, which forces $w_{ij}\le y_j^k$; thus $w_{ij}\le\bigwedge_{k\in G_{ij}(\mathcal{X},\mathcal{Y})}\{y_j^k\}=w^0_{ij}$, that is $W\subset W_0$. So (i) is true.

(ii) Suppose $W=(w_{ij})_{n\times m}\in M^w$. By (i), $W\subset W_0$, hence for any $k\in P$, $j\in M$,
$$y_j^k=\bigvee_{i\in N}\{x_i^k\wedge w_{ij}\}\le\bigvee_{i\in N}\{x_i^k\wedge w^0_{ij}\}\le y_j^k,$$
so $\mathbf{x}_k\circ W_0=\mathbf{y}_k$. Therefore $W_0\in M^w$, and $W_0$ is the maximum element of $M^w$. (ii) is proved.

(iii) Let $M^w\neq\varnothing$. Then $W_0\in M^w$ by (ii). For any $j\in M$ and $k\in P$, $y_j^k=\bigvee_{i\in N}\{x_i^k\wedge w^0_{ij}\}$, so there is $i_0\in N$ with $x_{i_0}^k\wedge w^0_{i_0j}=y_j^k$; then $x_{i_0}^k\ge y_j^k$ and $w^0_{i_0j}\ge y_j^k$, i.e. $k\in S^G_{i_0j}(W_0,\mathcal{Y})$. Hence $\bigcup_{i\in N}S^G_{ij}(W_0,\mathcal{Y})=P$. Conversely, suppose $\bigcup_{i\in N}S^G_{ij}(W_0,\mathcal{Y})=P$ for every $j\in M$. For any $k\in P$ and $j\in M$ there is $i\in N$ with $k\in S^G_{ij}(W_0,\mathcal{Y})$, i.e. $x_i^k\ge y_j^k$ and $w^0_{ij}\ge y_j^k$, so $x_i^k\wedge w^0_{ij}\ge y_j^k$; combining this with (i) we get $\bigvee_{i\in N}\{x_i^k\wedge w^0_{ij}\}=y_j^k$. Thus $W_0\in M^w$, and $M^w\neq\varnothing$. (iii) is true. $\square$

For a given connection weight matrix $W\in\mu_{n\times m}$, define
$$P_2^a(W)=\{(\mathbf{x},\mathbf{y})\in[0,1]^n\times[0,1]^m\mid\mathbf{x}\circ W=\mathbf{y}\}.$$
The set $P_2^a(W)$ is called the associative space of (2.1). In practice, a main problem in studying FAM's is how to design the network (2.1) so that it can store as many fuzzy pattern pairs as possible, which can be viewed as the problem of enlarging the associative space of (2.1). For FAM's based on the fuzzy operator pair '$\vee$--$\wedge$', it is impossible to treat this problem by increasing the units or node layers of the FAM. To account for this fact, we propose a three-layer FAM, as shown in Figure 2.2.

[Figure 2.2 Topological architecture of the three-layer FAM (input layer, hidden layer, output layer)]
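The analytic rule (2.3) and the storage guarantee of Theorem 2.1(i) are easy to check numerically. The following is a minimal sketch in plain Python (our own helper names and made-up data, not the book's):

```python
def maxmin_compose(x, W):
    # (2.1): y_j = max_i min(x_i, w_ij)
    return [max(min(x[i], W[i][j]) for i in range(len(x)))
            for j in range(len(W[0]))]

def analytic_W0(pairs):
    # (2.3): w0_ij = min{ y_j^k : x_i^k > y_j^k }, or 1 when G_ij is empty
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W0 = [[1.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            g = [y[j] for x, y in pairs if x[i] > y[j]]   # targets over G_ij
            if g:
                W0[i][j] = min(g)
    return W0

# two storable pattern pairs; W0 recalls both of them exactly
pairs = [([1.0, 0.3], [0.3]), ([0.3, 1.0], [0.6])]
W0 = analytic_W0(pairs)
```

Running `maxmin_compose` on each stored input reproduces the stored output, and by Theorem 2.1(i) the recalled output never exceeds the target componentwise, even for families that cannot be stored completely.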

Suppose the output of the hidden unit $k$ in Figure 2.2 is $o_k$, and let $W_1=(w^{(1)}_{ik})_{n\times l}$ and $W_2=(w^{(2)}_{kj})_{l\times m}$ be the connection weight matrices of the two layers. Then the corresponding I/O relationship can be expressed as follows:

$$o_k=\bigvee_{i=1}^{n}\{x_i\wedge w^{(1)}_{ik}\}\ (k=1,...,l),\qquad y_j=\bigvee_{k=1}^{l}\{o_k\wedge w^{(2)}_{kj}\}\ (j=1,...,m).\qquad(2.5)$$

Define $P_3^a(W_1,W_2)=\{(\mathbf{x},\mathbf{y})\mid\mathbf{x}=(x_1,...,x_n),\ \mathbf{y}=(y_1,...,y_m)\ \text{satisfy (2.5)}\}$; we call $P_3^a(W_1,W_2)$ the associative space of the three-layer FAM (2.5). Next let us prove that the storage capacities of (2.1) and (2.5) are identical.

Theorem 2.2 Let $W_1=(w^{(1)}_{ik})_{n\times l}$, $W_2=(w^{(2)}_{kj})_{l\times m}$. Then we can conclude that:
(i) for given $W_1$, $W_2$, there is $W\in\mu_{n\times m}$ so that $P_3^a(W_1,W_2)\subset P_2^a(W)$;
(ii) if $l\ge m\wedge n$, then for given $W\in\mu_{n\times m}$ there are $W_1\in\mu_{n\times l}$, $W_2\in\mu_{l\times m}$, so that $P_2^a(W)\subset P_3^a(W_1,W_2)$.

Proof. (i) Define $W=(w_{ij})_{n\times m}$ as follows:
$$w_{ij}=\bigvee_{k=1}^{l}\{w^{(1)}_{ik}\wedge w^{(2)}_{kj}\}\quad(i\in N,\ j\in M).$$
For any $(\mathbf{x},\mathbf{y})\in P_3^a(W_1,W_2)$ and $j\in M$, by (2.5) and the distributivity of '$\wedge$' over '$\vee$' we have
$$y_j=\bigvee_{k=1}^{l}\Big\{\Big(\bigvee_{i=1}^{n}\{x_i\wedge w^{(1)}_{ik}\}\Big)\wedge w^{(2)}_{kj}\Big\}=\bigvee_{i=1}^{n}\Big\{x_i\wedge\Big(\bigvee_{k=1}^{l}\{w^{(1)}_{ik}\wedge w^{(2)}_{kj}\}\Big)\Big\}=\bigvee_{i=1}^{n}\{x_i\wedge w_{ij}\}.$$
Therefore $(\mathbf{x},\mathbf{y})\in P_2^a(W)$, so that $P_3^a(W_1,W_2)\subset P_2^a(W)$.

(ii) Let $l\ge m$ (the case $l\ge n$ is treated similarly). For the given $W=(w_{ij})_{n\times m}$, define the connection weight matrices of (2.5) as follows:
$$w^{(1)}_{ik}=\begin{cases}w_{ik},& k\le m,\\ 0,&\text{otherwise};\end{cases}\qquad w^{(2)}_{kj}=\begin{cases}1,& k=j,\ k\le m,\\ 0,&\text{otherwise}.\end{cases}\qquad(2.6)$$
For any $(\mathbf{x},\mathbf{y})\in P_2^a(W)$, by (2.6) it follows that $o_k=\bigvee_{i=1}^{n}\{x_i\wedge w_{ik}\}=y_k$ for $k\le m$, and $o_k=0$ for $m<k\le l$. Hence for $j\in M$,
$$\bigvee_{k=1}^{l}\{o_k\wedge w^{(2)}_{kj}\}=o_j=\bigvee_{i=1}^{n}\{x_i\wedge w_{ij}\}=y_j,$$
i.e. $(\mathbf{x},\mathbf{y})\in P_3^a(W_1,W_2)$. Therefore $P_2^a(W)\subset P_3^a(W_1,W_2)$, and (ii) is proved. $\square$

By Theorem 2.2, increasing the unit layers of a FAM based on '$\vee$--$\wedge$' can not improve its storage capacity. To improve FAM's in their storage capacity, or to enlarge their associative space, we introduce thresholds in the following.

2.1.1 FAM with threshold

In the FAM shown in Figure 2.1, we introduce thresholds $c_i$ and $d_j$ to the input unit $i$ and the output unit $j$, respectively, where $i\in N$, $j\in M$. Then the corresponding I/O relationship can be expressed as

$$y_j=\Big(\bigvee_{i=1}^{n}\{(x_i\vee c_i)\wedge w_{ij}\}\Big)\vee d_j=\bigvee_{i=1}^{n}\{(x_i\vee c_i\vee d_j)\wedge(w_{ij}\vee d_j)\}\quad(j\in M).\qquad(2.7)$$

Using the fuzzy matrix $W=(w_{ij})_{n\times m}$ and the fuzzy vectors $\mathbf{c}=(c_1,...,c_n)$, $\mathbf{d}=(d_1,...,d_m)$, we re-write (2.7) as

$$\mathbf{y}=((\mathbf{x}\vee\mathbf{c})\circ W)\vee\mathbf{d}.\qquad(2.8)$$

The network (2.7) is called a FAM with threshold. To treat its storage capacity we introduce the following set:
$$J_i(\mathcal{X},\mathcal{Y})=\{j\in M\mid LE_{ij}(\mathcal{X},\mathcal{Y})\neq\varnothing\}\quad(i\in N).$$
By the following (2.9) (2.10) we establish the connection weight matrix $W_0=(w^0_{ij})_{n\times m}$ and the threshold vectors $\mathbf{c}_0=(c^0_1,...,c^0_n)$, $\mathbf{d}_0=(d^0_1,...,d^0_m)$:

$$w^0_{ij}=\begin{cases}\bigwedge_{k\in G_{ij}(\mathcal{X},\mathcal{Y})}\{y_j^k\},& G_{ij}(\mathcal{X},\mathcal{Y})\neq\varnothing,\\[2pt] 1,& G_{ij}(\mathcal{X},\mathcal{Y})=\varnothing;\end{cases}\qquad(2.9)$$

$$c^0_i=\begin{cases}\bigwedge\{y_j^k\mid k\in P,\ j\in J_i(\mathcal{X},\mathcal{Y})\},& J_i(\mathcal{X},\mathcal{Y})\neq\varnothing,\\[2pt] 0,& J_i(\mathcal{X},\mathcal{Y})=\varnothing;\end{cases}\qquad d^0_j=\bigwedge_{k\in P}\{y_j^k\}\quad(j\in M).\qquad(2.10)$$
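The forward pass of the threshold FAM (2.8) is a one-line extension of the plain composition. Below is a minimal plain-Python sketch (our own names and toy values, not from the text):

```python
def threshold_fam(x, W, c, d):
    # (2.8): y = ((x OR c) o W) OR d, where OR is the componentwise max
    xc = [max(xi, ci) for xi, ci in zip(x, c)]       # x OR c
    n, m = len(x), len(W[0])
    return [max(max(min(xc[i], W[i][j]) for i in range(n)), d[j])
            for j in range(m)]

W = [[0.5, 0.1], [0.3, 0.8]]
# with zero thresholds this reduces to the plain 'V-A' FAM (2.1)
y_plain = threshold_fam([0.2, 0.9], W, [0.0, 0.0], [0.0, 0.0])   # [0.3, 0.8]
# an output threshold lifts any component that falls below it
y_lift = threshold_fam([0.2, 0.9], W, [0.0, 0.0], [0.5, 0.0])    # [0.5, 0.8]
```

The second call illustrates why thresholds enlarge the associative space: the output floor $d_j$ lets the network reach target values that no pure max--min combination of the inputs could produce.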

For $i\in N$, $j\in M$, define the sets
$$TG_{ij}((\mathcal{X},\mathcal{Y}),\mathbf{c}_0)=\{k\in P\mid x_i^k\vee c_i^0\vee d_j^0>y_j^k\},\quad TE_{ij}((\mathcal{X},\mathcal{Y}),\mathbf{c}_0)=\{k\in P\mid x_i^k\vee c_i^0\vee d_j^0=y_j^k\},$$
$$TL_{ij}((\mathcal{X},\mathcal{Y}),\mathbf{c}_0)=\{k\in P\mid x_i^k\vee c_i^0\vee d_j^0<y_j^k\},\quad TGE_{ij}((\mathcal{X},\mathcal{Y}),\mathbf{c}_0)=TG_{ij}((\mathcal{X},\mathcal{Y}),\mathbf{c}_0)\cup TE_{ij}((\mathcal{X},\mathcal{Y}),\mathbf{c}_0),$$
$$TS^G_{ij}((W_0,\mathbf{d}_0),\mathcal{Y})=\{k\in TGE_{ij}((\mathcal{X},\mathcal{Y}),\mathbf{c}_0)\mid y_j^k\le w^0_{ij}\vee d_j^0\}.$$
Moreover, define
$$M^{wcd}=\{(W,\mathbf{c},\mathbf{d})\mid\forall k\in P,\ ((\mathbf{x}_k\vee\mathbf{c})\circ W)\vee\mathbf{d}=\mathbf{y}_k\},$$
$$TP^a(W,\mathbf{c},\mathbf{d})=\{(\mathbf{x},\mathbf{y})\in[0,1]^n\times[0,1]^m\mid((\mathbf{x}\vee\mathbf{c})\circ W)\vee\mathbf{d}=\mathbf{y}\}.$$
We call $TP^a(W,\mathbf{c},\mathbf{d})$ the associative space of the FAM with threshold.

Theorem 2.3 Let $\{(\mathbf{x}_k,\mathbf{y}_k)\mid k\in P\}$ be a given fuzzy pattern pair family, and $W_0$, $\mathbf{c}_0$, $\mathbf{d}_0$ be defined by (2.9) (2.10). Then:
(i) for all $i\in N$, $j\in M$, $S^G_{ij}(W_0,\mathcal{Y})\subset TS^G_{ij}((W_0,\mathbf{d}_0),\mathcal{Y})$;
(ii) if $(W,\mathbf{c},\mathbf{d})\in M^{wcd}$, then for all $i\in N$, $j\in M$, $w_{ij}\le w^0_{ij}$ and $d_j\le d^0_j$.

Proof. (i) If $k\in S^G_{ij}(W_0,\mathcal{Y})$, then $x_i^k\ge y_j^k$ and $y_j^k\le w^0_{ij}$, so $x_i^k\vee c_i^0\vee d_j^0\ge y_j^k$ and $y_j^k\le w^0_{ij}\vee d_j^0$, i.e. $k\in TS^G_{ij}((W_0,\mathbf{d}_0),\mathcal{Y})$.
(ii) Let $(W,\mathbf{c},\mathbf{d})\in M^{wcd}$. For every $k\in P$, $j\in M$, $y_j^k=\big(\bigvee_{i\in N}\{(x_i^k\vee c_i)\wedge w_{ij}\}\big)\vee d_j\ge d_j$, hence $d_j\le\bigwedge_{k\in P}\{y_j^k\}=d^0_j$. Moreover, for $k\in G_{ij}(\mathcal{X},\mathcal{Y})$ we have $(x_i^k\vee c_i)\wedge w_{ij}\le y_j^k<x_i^k\le x_i^k\vee c_i$, which forces $w_{ij}\le y_j^k$; therefore $w_{ij}\le\bigwedge_{k\in G_{ij}(\mathcal{X},\mathcal{Y})}\{y_j^k\}=w^0_{ij}$. $\square$

Theorem 2.4 Let $(W,\mathbf{c},\mathbf{d})\in M^{wcd}$. Then there is a threshold vector $\mathbf{c}_1=(c^1_1,...,c^1_n)$ with $c^1_i\le c^0_i$ $(i\in N)$, such that $(W,\mathbf{c}_1,\mathbf{d}_0)\in M^{wcd}$.

Proof. For $i\in N$, define the set of index pairs at which the threshold $c_i$ actually acts:
$$KJ_i(\mathbf{c},\mathcal{Y})=\{(k,j)\in P\times M\mid c_i>x_i^k\},$$
and put
$$c^1_i=\begin{cases}\bigwedge_{(k,j)\in KJ_i(\mathbf{c},\mathcal{Y})}\{y_j^k\},& KJ_i(\mathbf{c},\mathcal{Y})\neq\varnothing,\\[2pt] 0,& KJ_i(\mathbf{c},\mathcal{Y})=\varnothing.\end{cases}$$
If $KJ_i(\mathbf{c},\mathcal{Y})=\varnothing$, then $c_i\le x_i^k$ for all $k\in P$, so $x_i^k\vee c^1_i=x_i^k=x_i^k\vee c_i$ and the outputs are unchanged when $c_i$ is replaced by $c^1_i=0$. If $KJ_i(\mathbf{c},\mathcal{Y})\neq\varnothing$, then for each $(k,j)\in KJ_i(\mathbf{c},\mathcal{Y})$, $y_j^k\ge(x_i^k\vee c_i)\wedge w_{ij}=c_i\wedge w_{ij}$, and checking the two cases $c_i\le x_i^k$ and $c_i>x_i^k$ componentwise shows that replacing $(\mathbf{c},\mathbf{d})$ by $(\mathbf{c}_1,\mathbf{d}_0)$ changes no component of the output; hence $(W,\mathbf{c}_1,\mathbf{d}_0)\in M^{wcd}$. The inequality $c^1_i\le c^0_i$ follows from the definitions of $c^1_i$ and $c^0_i$. $\square$

By Theorem 2.3 and Theorem 2.4, $W_0$, $\mathbf{c}_0$ and $\mathbf{d}_0$ defined by (2.9) (2.10) possess a maximality in the sense of storing fuzzy patterns.

Lemma 2.1 Suppose $j\in M$ and $k\in P$. Then for every $i\in N$,
$$(x_i^k\vee c_i^0\vee d_j^0)\wedge(w^0_{ij}\vee d_j^0)\le y_j^k,$$
and consequently, if $k\in TS^G_{ij}((W_0,\mathbf{d}_0),\mathcal{Y})$ for some $i\in N$, then
$$\bigvee_{i\in N}\{(x_i^k\vee c_i^0\vee d_j^0)\wedge(w^0_{ij}\vee d_j^0)\}=y_j^k.$$

Proof. Fix $i\in N$ and suppose, on the contrary, that both factors exceed $y_j^k$. Since $d_j^0=\bigwedge_{k'\in P}\{y_j^{k'}\}\le y_j^k$, the first factor can exceed $y_j^k$ only if $x_i^k>y_j^k$ or $c_i^0>y_j^k$. If $x_i^k>y_j^k$, then $k\in G_{ij}(\mathcal{X},\mathcal{Y})$, so $w^0_{ij}\le y_j^k$ by (2.9), whence $w^0_{ij}\vee d_j^0\le y_j^k$, a contradiction. If $c_i^0>y_j^k$, then $j\notin J_i(\mathcal{X},\mathcal{Y})$ by (2.10), i.e. $LE_{ij}(\mathcal{X},\mathcal{Y})=\varnothing$, so $G_{ij}(\mathcal{X},\mathcal{Y})=P$ and again $w^0_{ij}\le y_j^k$, a contradiction. This proves the first inequality. If moreover $k\in TS^G_{ij}((W_0,\mathbf{d}_0),\mathcal{Y})$ for some $i\in N$, then $x_i^k\vee c_i^0\vee d_j^0\ge y_j^k$ and $w^0_{ij}\vee d_j^0\ge y_j^k$, so the corresponding term equals $y_j^k$, and therefore the supremum equals $y_j^k$. $\square$

Theorem 2.5 For the given fuzzy pattern pair family $\{(\mathbf{x}_k,\mathbf{y}_k)\mid k\in P\}$, the set $M^{wcd}\neq\varnothing$ if and only if $\forall j\in M$, $\bigcup_{i\in N}TS^G_{ij}((W_0,\mathbf{d}_0),\mathcal{Y})=P$.

Proof. Necessity: let $M^{wcd}\neq\varnothing$; by Theorem 2.4 there is $(W,\mathbf{c}_1,\mathbf{d}_0)\in M^{wcd}$ with $\mathbf{c}_1\le\mathbf{c}_0$. Fix $j\in M$ and $k\in P$. Since
$$y_j^k=\Big(\bigvee_{i\in N}\{(x_i^k\vee c^1_i)\wedge w_{ij}\}\Big)\vee d^0_j,$$
either $d^0_j=y_j^k$, in which case $x_i^k\vee c_i^0\vee d_j^0\ge y_j^k$ and $w^0_{ij}\vee d_j^0\ge y_j^k$ for any $i\in N$, so $k\in TS^G_{ij}((W_0,\mathbf{d}_0),\mathcal{Y})$; or there is $i\in N$ with $(x_i^k\vee c^1_i)\wedge w_{ij}=y_j^k$, whence $x_i^k\vee c^1_i\ge y_j^k$ and $w_{ij}\ge y_j^k$. In the latter case, $c^1_i\le c^0_i$ gives $x_i^k\vee c_i^0\vee d_j^0\ge x_i^k\vee c^1_i\ge y_j^k$, and $w^0_{ij}\ge w_{ij}\ge y_j^k$ by Theorem 2.3(ii), so again $k\in TS^G_{ij}((W_0,\mathbf{d}_0),\mathcal{Y})$. Hence $\bigcup_{i\in N}TS^G_{ij}((W_0,\mathbf{d}_0),\mathcal{Y})=P$.
Sufficiency: suppose the condition holds. By Lemma 2.1 and (2.7), for every $k\in P$ and $j\in M$,
$$\Big(\bigvee_{i\in N}\{(x_i^k\vee c_i^0)\wedge w^0_{ij}\}\Big)\vee d^0_j=\bigvee_{i\in N}\{(x_i^k\vee c_i^0\vee d_j^0)\wedge(w^0_{ij}\vee d_j^0)\}=y_j^k,$$
that is, $((\mathbf{x}_k\vee\mathbf{c}_0)\circ W_0)\vee\mathbf{d}_0=\mathbf{y}_k$. Therefore $(W_0,\mathbf{c}_0,\mathbf{d}_0)\in M^{wcd}$, and $M^{wcd}\neq\varnothing$. $\square$

i. y). 3}. ) .7.0.3.5) (0. 5}.1.0.7. discriminate the following equality: UTSg((W0)d0). < & ) .c°). Mwcd ^ 0. • G Theorem 2.0.8.0. ? ) .co).y)..4. Table 2.0.0.3) (0.0.C d . yjfc)|fc € P } .c 0 .4..2. o!°.y).4) (0.6. and establish «. For i G N. T/iCTl (Wb. By Table 2.4.0.4) (0. d o ) .3) Step 1.6.0.4) (0.0. That is.7.2) (0. d 0 ) G M .Chapter II Fuzzy Neural Networks for Storing and Classifying 35 Then put W0 = (w°).5.d0.1 we give a fuzzy pattern pair family as {(xfc..0. By Theorem 2. . Proof.2) (0.5.4 implies.7. Let N = {1.8) with threshold possesses good storage capacity by a simulation example.5.3) (0. U 5 g ( W 0 .0.0.5) (0.1. j G M.0.D 2.0.0. M w < ! ^ 0.5.3.0. 2.8.y).0..0.4. Using the following steps we can realize the algorithm (2.0.5.4.0. Since 5 g ( W 0 ) y ) C TSg((Wo.d0) Mwcd.5. it follows that U TS?((W0. and P = {1. .0. so f/iaf Vfc € P. Theorem 2.0. c°. c 0 = (c?. LEi:j(X. there is W G /i„ X m.4. Le£ W0 = (w^).. calculate the sets Gij(X.d0.0.6. (W0...do).6.y). c 0 = (c?.0. Step 2.co). S'iep 5.. 3.0. T£^((*.0. co. For any i e N.1.c0. M = {1.?•.0..6.do..6 -For a given fuzzy pattern family {(xjt..0.4.4) (0. .5.6.4) (0.0. ( # .4) (0. .. For any j G M.8. and (W 0 .9). j € M.1 Fuzzy pattern pair family k xfc yk (0.0. 4.0. . d 0 = « .0.2 Simulation example In the subsection we demonstrate that FAM (2.2.0.1.y) P.c0).1) (0.0.0. .d0).7. y ) = P .0.3) (0.0.0. Thus.2. VjGM.3.1 we get.0..3.0.0.d 0 ) G M - = .4.4. .3.0. ief M™ ^ 0.e.0.0. d 0 = « .3) (0.0.0.4.0.c°). calculate and determine the following sets: TGij((X.3.7. yfc)|fc G P } . 8}.5. TSg((W 0 . TGE^((X.y)=P? y). 2.8. xfc o W = yfc.6.7.

If yes, go to the following step; otherwise go to Step 5.
Step 4. Put $W_0=(w^0_{ij})$, $\mathbf{c}_0=(c^0_1,...,c^0_n)$ and $\mathbf{d}_0=(d^0_1,...,d^0_m)$ as the connection weight matrix and the threshold vectors of the input units and output units, respectively. Stop.
Step 5. The family cannot be stored completely in (2.8). Stop.

By the above steps we obtain, for the family in Table 2.1, the connection weight matrix $W_0$ and the threshold vectors $\mathbf{c}_0$, $\mathbf{d}_0$, and we may easily show that $\bigcup_{i\in N}TS^G_{ij}((W_0,\mathbf{d}_0),\mathcal{Y})=P$ holds for every $j\in M$; so by Theorem 2.5, all fuzzy pattern pairs in Table 2.1 can be stored in FAM (2.8). However, if we use the Hebbian learning rule (2.2), we may easily imply that only $(\mathbf{x}_5,\mathbf{y}_5)$ can be stored in FAM (2.1). Therefore, we can improve a FAM in storage capacity by introducing suitable thresholds and learning algorithms.

§2.2 Fuzzy δ-learning algorithm

Introducing thresholds to the units and designing analytic learning algorithms can improve a FAM as (2.1) in its storage capacity. However, an analytic learning algorithm can not show the adaptivity and self-adjustability of FAM's. To overcome such defects, we develop in this section a dynamic learning scheme, the fuzzy δ-learning algorithm, and present its convergence.

2.2.1 FAM's based on '$\vee$--$\wedge$'

Give a fuzzy pattern pair family $\{(\mathbf{x}_k,\mathbf{y}_k)\mid k\in P\}$, and by the matrices $X$, $Y$ denote
$$X=(\mathbf{x}_1,...,\mathbf{x}_p)^{\top}=\begin{pmatrix}x_1^1&x_2^1&\cdots&x_n^1\\ \vdots&\vdots&&\vdots\\ x_1^p&x_2^p&\cdots&x_n^p\end{pmatrix},\qquad Y=(\mathbf{y}_1,...,\mathbf{y}_p)^{\top}=\begin{pmatrix}y_1^1&y_2^1&\cdots&y_m^1\\ \vdots&\vdots&&\vdots\\ y_1^p&y_2^p&\cdots&y_m^p\end{pmatrix}.$$
Then all of $(\mathbf{x}_1,\mathbf{y}_1),...,(\mathbf{x}_p,\mathbf{y}_p)$ can be stored in FAM (2.1) if and only if there is a fuzzy matrix $W=(w_{ij})_{n\times m}$ satisfying
$$X\circ W=Y.\qquad(2.15)$$
(2.15) is a fuzzy relational equation based on the '$\vee$--$\wedge$' composition [27-29, 41, 42], and $(\mathbf{x}_1,\mathbf{y}_1),...,(\mathbf{x}_p,\mathbf{y}_p)$ are memory patterns of FAM (2.1) if and only if the solutions of (2.15) exist [38]. Using the following algorithm we can demonstrate the learning procedure for the connection weight matrix $W$ of FAM (2.1).

Algorithm 2.1 Fuzzy δ-learning algorithm. With the following steps we can realize the iteration of $w_{ij}$ for $i\in N$, $j\in M$:

Step 1. Initialization: $\forall i\in N$, $j\in M$, put $w_{ij}(0)=1$ and $t=0$.
Step 2. Calculate the real output: $Y(t)=X\circ W(t)$, where $W(t)=(w_{ij}(t))_{n\times m}$; that is, $\forall k\in P$, $j\in M$,
$$y_j^k(t)=\bigvee_{i=1}^{n}\{x_i^k\wedge w_{ij}(t)\}.$$
Step 3. Adjust the connection weights: let $\eta\in(0,1]$ be a learning constant, and put
$$w_{ij}(t+1)=\begin{cases}w_{ij}(t)-\eta\cdot(y_j^k(t)-y_j^k),& w_{ij}(t)\wedge x_i^k>y_j^k,\\ w_{ij}(t),&\text{otherwise}.\end{cases}\qquad(2.16)$$
Step 4. Discriminate whether $W(t+1)=W(t)$: if yes, stop; otherwise let $t=t+1$ and go to Step 2.

Preceding the analysis of the convergence of the fuzzy δ-learning algorithm, we present an example to demonstrate its realizing procedure. To this end let $P=N=\{1,2,3,4\}$ and $M=\{1,2,3\}$, give a fuzzy pattern pair family $\{(\mathbf{x}_k,\mathbf{y}_k)\mid k\in P\}$ for training, and establish the fuzzy matrices $X\in[0,1]^{4\times4}$ and $Y\in[0,1]^{4\times3}$ in (2.15). Running Steps 1-4, the sequence of connection weight matrices $\{W(t)\}$ converges with 40 iterations to a matrix $\overline W$, and obviously (2.15) is true for $\overline W$.
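The iteration (2.16) can be sketched directly in plain Python. This is a minimal illustration under our own simplifying assumptions: the patterns are swept one by one within each pass, matrices are lists of lists, and the single training pair is made up, not taken from the book's example.

```python
def maxmin_compose(x, W):
    # (2.1): y_j = max_i min(x_i, w_ij)
    return [max(min(x[i], W[i][j]) for i in range(len(x)))
            for j in range(len(W[0]))]

def fuzzy_delta_learning(pairs, eta=0.5, max_iter=500):
    # Sketch of Algorithm 2.1: start from w_ij(0) = 1 and shrink every weight
    # whose contribution min(x_i^k, w_ij) exceeds the target y_j^k, per (2.16)
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = [[1.0] * m for _ in range(n)]
    for _ in range(max_iter):
        changed = False
        for x, y in pairs:
            out = maxmin_compose(x, W)           # real output y_j^k(t)
            for i in range(n):
                for j in range(m):
                    if min(x[i], W[i][j]) > y[j]:
                        W[i][j] -= eta * (out[j] - y[j])
                        changed = True
        if not changed:                          # W(t+1) = W(t): stop
            break
    return W
```

On a storable pair such as `([1.0, 0.3], [0.3])` the first weight decays monotonically toward 0.3 while the second is never touched, matching the non-increasing behavior proved in Theorem 2.7 below.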

Theorem 2.7 Suppose the fuzzy matrix sequence $\{W(t)\mid t=1,2,...\}$ is obtained by Algorithm 2.1. Then:
(i) $\{W(t)\mid t=1,2,...\}$ is a non-increasing sequence of fuzzy matrices;
(ii) $\{W(t)\mid t=1,2,...\}$ converges.

Proof. (i) Let $t$ be the iteration step, and $i\in N$, $j\in M$. If $x_i^k\wedge w_{ij}(t)\le y_j^k$, then by (2.16), $w_{ij}(t+1)=w_{ij}(t)$. If $x_i^k\wedge w_{ij}(t)>y_j^k$, then $y_j^k(t)\ge x_i^k\wedge w_{ij}(t)>y_j^k$, so $w_{ij}(t+1)=w_{ij}(t)-\eta\,(y_j^k(t)-y_j^k)<w_{ij}(t)$. In either case $w_{ij}(t+1)\le w_{ij}(t)$, that is $W(t+1)\subset W(t)$.
(ii) For each $i\in N$, $j\in M$, the sequence $\{w_{ij}(t)\mid t=1,2,...\}\subset[0,1]$ is non-increasing and bounded, so the limit $\lim_{t\to+\infty}w_{ij}(t)$ exists; hence $\{W(t)\mid t=1,2,...\}$ converges. $\square$
{ ^ ( t ) ^ = 1..y). y)=9. } is a non-increasing sequence of fuzzy (ii) {W{t)\t = 1.y) Wij(0) A x'l0 = x*0 > yk°. = > W ( i + 1) C W(t).15) is non-empty. (2. Therefore. t h a t is.. we have. T h e n keGi:i(x. (ii) Since V* = 1.16). 2. Then (i) {W{t)\t = 1. < yf.v(Vj(t) ~ Vj) < Wij(t). = 1. t h e limit lim Wijit) exists. Proof. yk)\k G P } .

r] • {yf{t) .Chapter II 39 Fuzzy Neural Networks for Storing and Classifying Wij(to + 1). (2.20)..18) is true..17) we can conclude that f. If lim wu{t) = kj > y^°. We choose V ~V{)- V§{t)-y* • Then (2. if Mw = 0. yfc)|fc G P} be a fuzzy pattern pair family.y)=9. So the first part of the theorem holds. which contradicts (2.1 if we choose the learning constant rj as an adjustable value changing with the iteration step t. Wij(t)Ax$>y!j.} is non-increasing. Therefore. then the convergence speed of the algorithm can speed up. To this end we at first present the following definition [18.8 the following theorem is trivial. kj = ltj n • ( lim y^U) . 2. 42. then \/ {x*? hwVj{t)}. where wf.2 FAM's based on 'V . then WQ G MW is a maximum element of Mw. and lim y^°(t) = yk-°.—y-l-no -* fceGy ( * . Theorem 2. and W(t) = (wij(t))nxm be a fuzzy matrix defined by (2. is defined by (2. since by (i) of Theorem 2. strikingly. wi:j(t + 1) = wtj{t) . Gij(x. if the following conditions hold: .1] a fuzzy operator.yk-°). l ] 2 —• [0. Moreover. And the other part of the theorem is a direct result of Theorem 2. :v) lim Wij(t) t—>+oo <• i.20) By Theorem 2..16) is transformed into the following iteration scheme: (wij-WAyJ. Definition 2.yf).19). (2. 35. D In Algorithm 2. the fuzzy matrix sequence {W(t)|i = 1. Wij(t + i) = < [ Wij(t).9). y ) ^ 0. = > ' lim Wij (t) = yf.7. Hence (2.*' Since the fuzzy operator pair 'V — A' can not treat many real problems. then W0 is a maximum element of the {W\X oW CY}. Considering (2.19) i'eN i'eTS Also by (2.1 We call the mapping T : [0. Therefore. Vfit)= lim WiAt) > yk°. Then the sequence {W(t)\t = 1. otherwise. if Mw ^ 0. that is 17(f).2.1.2.. .} converges to WQ = (wfj)nXm as t —> +00..=* ^yfif)- V K ° A % } > ^ ° .. it is necessary to study the FAM's based on other fuzzy operator pairs. So Gy ( # .2. 52]. which is a contradiction.16).9 Let {(xjt.

1]. then (2. Define W* = {w*Anxm G / i n x m as follows: a (2. considering that a* (aa*b) < b. (a a* b) a* b > a. l ] | a T x < b}.c.1] : aa*b = sup{a.y)= M? = {We {k G p\xf * w*tj > y*}. 6. ' . =4> T{a. Moreover. =>• a a» 5 > ai a .21) being similar with Theorem 2. T(a.21) becomes as y = x © W. (4) Va. define oa»& G [0. (aa* b) a* b > a. To this end we at first design an analytic learning algorithm for W. we can conclude that. Proof. T(6. G [0.1].1).0) = 0. a a . we call T a t-norm. and write the t—norm T as '*'. Since a * b < a * b. c) = T(o. y){(xk.1].10 Given a fuzzy pattern pair family (X. a\. (i) is true. (hi) b < bi. If the fuzzy operator T satisfies: Va G [0. (ii) a < ai. d)\ (3) Va. then a < c. l ) = l.1. For a.b) = aTb. From now on. b.X. we call T a t-conorm. Let us now present some useful properties of the operator 'a*'.1]. 40. MnxmlVA. j G M). also using the definition of 'a*' we get. As for (ii) (iii). and W* = (w^nxrn is defined by (2.1]. 6). T ( l .2 Let a. and Va G [0. and T be a t—norm.21) If choose '©' as the 'V — *' composition operation.y) and Mw we may introduce the sets S. ==>• a a* 6 < a a* 6i..1]. b.22). and get a FAM based on the fuzzy operator pair 'V — *': yj=\/{xi*wij}(jGM). T(a.1]. a). (2) If a. b) = T(b. we denote T(a. .22) Recalling Sfj(W0. Lemma 2. T(0. Similar with (2. a a* (a * b) > b. Theorem 2. b. Then (i) a * (a a* b) < b. For a given fuzzy pattern pair family {(x^.f(W*. yfc)|fc G P } . b\ G [0. (i) By the definition of the operator 'a*' it follows that a * ( a a . xfe © W = y fc }. b < d.21) develop some analytic learning algorithms and iterative learning algorithms for the connection weight matrix W. f e ) < 6. keP s. 1) = a. b e [0. T(T(o. we can in (2. ( a * b ) > b. If T is a fuzzy operator. j G M) and M™ respectively as follows: (i G <• = A K * y)} (* G N. c G [0. Then yk)\k G P } . b G [0. they are also the direct results of the definition of ' a .y) N.f(w. b) < T(c.a) = a. (2. 
(1) $T(0,0)=0$, $T(1,1)=1$;
(2) $\forall a,b\in[0,1]$, $T(a,b)=T(b,a)$;
(3) $\forall a,b,c\in[0,1]$, $T(T(a,b),c)=T(a,T(b,c))$;
(4) $\forall a,a_1,b,b_1\in[0,1]$, if $a\le a_1$ and $b\le b_1$, then $T(a,b)\le T(a_1,b_1)$.

If the fuzzy operator $T$ satisfies $\forall a\in[0,1]$, $T(a,1)=a$, we call $T$ a t-norm; if $\forall a\in[0,1]$, $T(a,0)=a$, we call $T$ a t-conorm. From now on we write a t-norm $T$ as '$*$', i.e. $T(a,b)=a*b$; for further discussions see [18, 40, 52].

In (2.1) we substitute the t-norm '$*$' for '$\wedge$' and get a FAM based on the fuzzy operator pair '$\vee$--$*$':
$$y_j=\bigvee_{i=1}^{n}\{x_i*w_{ij}\}\quad(j\in M).\qquad(2.21)$$
If we choose '$\odot$' as the '$\vee$--$*$' composition operation, then (2.21) becomes $\mathbf{y}=\mathbf{x}\odot W$. For a given fuzzy pattern pair family $\{(\mathbf{x}_k,\mathbf{y}_k)\mid k\in P\}$ we can, in analogy with (2.1), develop analytic learning algorithms and iterative learning algorithms for the connection weight matrix $W$ in (2.21). To this end we at first design an analytic learning algorithm for $W$. For $a,b\in[0,1]$, define $a\,\alpha_*\,b\in[0,1]$ as
$$a\,\alpha_*\,b=\sup\{x\in[0,1]\mid a*x\le b\}.$$
Let us now present some useful properties of the operator '$\alpha_*$'.

Lemma 2.2 Let $a,a_1,b,b_1\in[0,1]$, and '$*$' be a continuous t-norm. Then:
(i) $a*(a\,\alpha_*\,b)\le b$, $a\,\alpha_*\,(a*b)\ge b$, and $(a\,\alpha_*\,b)\,\alpha_*\,b\ge a$;
(ii) $a\le a_1\Rightarrow a\,\alpha_*\,b\ge a_1\,\alpha_*\,b$;
(iii) $b\le b_1\Rightarrow a\,\alpha_*\,b\le a\,\alpha_*\,b_1$.

Proof. (i) By the continuity of '$*$', the supremum in the definition of $a\,\alpha_*\,b$ is attained, so $a*(a\,\alpha_*\,b)\le b$. Since $a*b\le a*b$, $b$ belongs to $\{x\mid a*x\le a*b\}$, so $a\,\alpha_*\,(a*b)\ge b$. And since $(a\,\alpha_*\,b)*a\le b$, $a$ belongs to $\{x\mid(a\,\alpha_*\,b)*x\le b\}$, so $(a\,\alpha_*\,b)\,\alpha_*\,b\ge a$. (ii) and (iii) are direct results of the monotonicity of '$*$' and the definition of '$\alpha_*$'. $\square$

Define $W^*=(w^*_{ij})_{n\times m}\in\mu_{n\times m}$ as follows:
$$w^*_{ij}=\bigwedge_{k\in P}\{x_i^k\,\alpha_*\,y_j^k\}\quad(i\in N,\ j\in M).\qquad(2.22)$$
Recalling the sets $S^G_{ij}(W_0,\mathcal{Y})$ and $M^w$, we introduce the sets $S^{*G}_{ij}(W^*,\mathcal{Y})$ ($i\in N$, $j\in M$) and $M^w_*$ respectively as follows:
$$S^{*G}_{ij}(W^*,\mathcal{Y})=\{k\in P\mid x_i^k*w^*_{ij}\ge y_j^k\},\qquad M^w_*=\{W\in\mu_{n\times m}\mid\forall k\in P,\ \mathbf{x}_k\odot W=\mathbf{y}_k\}.$$
Then we can obtain a conclusion for the FAM (2.21) similar to Theorem 2.1.

Theorem 2.10 Given a fuzzy pattern pair family $\{(\mathbf{x}_k,\mathbf{y}_k)\mid k\in P\}$, and $W^*=(w^*_{ij})_{n\times m}$ defined by (2.22). Then:
(i) $\forall k\in P$, $\mathbf{x}_k\odot W^*\subset\mathbf{y}_k$; and if the fuzzy matrix $W$ satisfies $\forall k\in P$, $\mathbf{x}_k\odot W\subset\mathbf{y}_k$, then $W\subset W^*$;
(ii) if $M^w_*\neq\varnothing$, then $W^*\in M^w_*$, and $\forall W=(w_{ij})_{n\times m}\in M^w_*$, $W\subset W^*$;
(iii) the set $M^w_*\neq\varnothing$ if and only if $\forall j\in M$, $\bigcup_{i\in N}S^{*G}_{ij}(W^*,\mathcal{Y})=P$.

Proof. (i) For any $k\in P$, $i\in N$, $j\in M$, by (2.22) and Lemma 2.2 we have
$$x_i^k*w^*_{ij}\le x_i^k*(x_i^k\,\alpha_*\,y_j^k)\le y_j^k,$$
hence $\bigvee_{i\in N}\{x_i^k*w^*_{ij}\}\le y_j^k$, i.e. $\mathbf{x}_k\odot W^*\subset\mathbf{y}_k$. Moreover, if $W=(w_{ij})_{n\times m}$ satisfies $\forall k\in P$, $\mathbf{x}_k\odot W\subset\mathbf{y}_k$, then $x_i^k*w_{ij}\le y_j^k$ for all $i,j,k$, so $w_{ij}\le x_i^k\,\alpha_*\,y_j^k$, and therefore $w_{ij}\le\bigwedge_{k\in P}\{x_i^k\,\alpha_*\,y_j^k\}=w^*_{ij}$, that is $W\subset W^*$. (i) is true.

(ii) Let $W\in M^w_*$. By (i), $W\subset W^*$, hence for any $k\in P$, $j\in M$,
$$y_j^k=\bigvee_{i\in N}\{x_i^k*w_{ij}\}\le\bigvee_{i\in N}\{x_i^k*w^*_{ij}\}\le y_j^k,$$
so $\mathbf{x}_k\odot W^*=\mathbf{y}_k$. Thus $W^*\in M^w_*$, and $W^*$ is the maximum element of $M^w_*$. (ii) is proved.

(iii) If $M^w_*\neq\varnothing$, then $W^*\in M^w_*$ by (ii); for any $j\in M$ and $k\in P$ there is $i_0\in N$ with $x_{i_0}^k*w^*_{i_0j}=y_j^k$, that is $k\in S^{*G}_{i_0j}(W^*,\mathcal{Y})$, hence $\bigcup_{i\in N}S^{*G}_{ij}(W^*,\mathcal{Y})=P$. Conversely, if the union equals $P$ for every $j\in M$, then for each $k\in P$ and $j\in M$ there is $i\in N$ with $x_i^k*w^*_{ij}\ge y_j^k$; combining this with (i) we get $\bigvee_{i\in N}\{x_i^k*w^*_{ij}\}=y_j^k$, so $W^*\in M^w_*$, and $M^w_*\neq\varnothing$. (iii) is true. $\square$
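For the Łukasiewicz t-norm used in the simulation example that follows, both the residuation operator '$\alpha_*$' and the analytic rule (2.22) have closed forms. A minimal plain-Python sketch (our own function names and toy data):

```python
def t_luk(a, b):
    # Lukasiewicz t-norm: a*b = max(a + b - 1, 0)
    return max(a + b - 1.0, 0.0)

def residuum(a, b):
    # a alpha* b = sup{x in [0,1] | a*x <= b}; for Lukasiewicz: min(1-a+b, 1)
    return min(1.0 - a + b, 1.0)

def analytic_Wstar(pairs):
    # (2.22): w*_ij = AND over k of (x_i^k alpha* y_j^k)
    n, m = len(pairs[0][0]), len(pairs[0][1])
    return [[min(residuum(x[i], y[j]) for x, y in pairs)
             for j in range(m)] for i in range(n)]
```

Lemma 2.2(i) can be verified numerically on samples: `t_luk(a, residuum(a, b))` never exceeds `b`, which is exactly the inequality driving Theorem 2.10(i).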
let (J ^ G ( W * .. ieN Proof. xfe © W* C yfc.

0.0. 2. and the corresponding connection weight matrix is W*. Calculate the real output: yk(t) = x/.0.0.5 0. (0.6 1.0 0.Iteration scheme: The connection weights iterate with the following scheme (where r/ G (0. Let P = N = {1. y2 y3 y4 = = = = (0.10 by a simulation example. X . k G P).0. 0}. Step 3. For i G N.2 Fuzzy 5—learning algorithm based on t—norm. Therefore.?. 1}}.8. Initialization: put t = 0.e.42 Liu and Li Fuzzy Neural Network Theory and Application Next let us illustrate the application of Theorem 2. by (2.0.0. Give the fuzzy pattern pair family {(xfc.0. 5 ^ ( 1 ^ * .0.5.0.22) to establish the connection weight matrix W*.3). for i G N. So we feep /CGP get W* = « j ) 4 X 3 : 0. we can develop an iteration scheme for learning the connection weight matrix W of the FAM (2. i. j G M.6. k € P. Vj G M. 3. Similarly with Algorithm 2.® W(t). ft (2.xf.0.9 0.4).0.1) (0. > y*}.0. Each i6N pattern pair in {(x^.8 0.21).5 0.7 0. Step 2. . 4}.10. that is Algorithm 2. Step 4. yk)\k G P} can be stored in the FAM (2.4. . y k ) | f c e P } : X! x2 x3 x4 = = = = (0. b G [0.0.5. With the following steps we can establish the connection weights of FAM (2.6.y% «.6).7. Let W(t) = K .3. l]|x* + a. Easily we have.( t ) ) n X m . G [0.6.«(*) * ^ > otherwise.1. y)(t) = \ / {x\ * Wij(t)} (j G M. a * b = max{a + b — 1.1 < y^} = min{l + y* . = A { 4 « * V j } = A {min{l + y) . M = {1.v • m 1 Wij(t). 3}. and the t .0.4. A .7. and Wij{t) — 1.5. Using the analytic learning algorithm (2.8 Moreover.0.0.4) (0.4. l]|xf * x < y1*} sup{z G [0.8 0.y) = P.3). by Theorem 2.0.6 0. j G M.x\. 1}.22).n o r m * is defined as follows: Vo.21).5.4).8. (0.1].6. (0.0.0.0 0.23) .7.1] is a learning constant): | «. 2. yj.8.„(*) .7 W? = I 1. .5. Therefore.21): Step 1. it. ^ ) = {k G P\x* * w*. by the definition of '*' it is easy to show x i a* Uj = = supja.9.2) (0. |J ^ ( W 7 * .

y^}.. W(t + 1) C W(t). it suffices to prove (ii). yvmj (2. 2 . There is k0 G G y ( # .9..} be a sequence of fuzzy matrices obtained by Algorithm 2. Similarly with (2.24) is false.. then we can conclude that Vi = l .. then wi:j(0) = 1 > x\a a* y1*0.yfc)|fc G P} be a given fuzzy pattern pair family. otherwise let t = t + 1 go to Step 2.. and {W(t)\t = 1.. (Hi) If WW € Hnxm. stop. so that Wij(ti) < %i0 <**yf.1. and so {W(t)\t = 1.2.27) »'€N . Thus.. since the proofs of (i) (iii) are similar with ones of Theorem 2. y„.. (2.24). By Lemma 2. yj° = <.. we can get the following result. Xxl x\ ••• xl) \wnl w„2--..26) is true. } converges to the maximum solution of (2. (2. 1.. let t 0 = max{i G {0. Discriminate W(t + 1) = W{t)l if yes.2 we get. by the continuity of the t . if t = 0.. let us next to prove by (2. Wij(t0 + 2) = 1 > Wij(tQ + 1). f„k0 0 ^ „.} converges to the maximum solution of X ®W <ZY as t —> +oo. X®W = Y.2. Then (i)\/t = 1.26) In fact..10. By Theorem 2. satisfying re*0 a* yf = /\ {xfa. }.wnm) \y{ yp2 ••• Theorem 2.8 by Theorem 2..j(t 0 + 1) * < „fco . y).25) if Xi° > y1*0.. yk)\k G P} if and only if the following equalities hold: lx\ < \ /Wu W21 W12 (y\ \ W22 0 y\ W2r.18). then {W(t)\t = 1.e {W(t)|t = 0..1. wij{t) >x«°a* . } K j ( t ) < x^0 a*yf}. . i. Therefore A (2. .. (ii) If there isW € ^nxm.2. the FAM (2. If lim wtAt) = Zi?.23) may imply. Similarly with the equation (2.=» tij^^W j'eN Hj >Vi0.-. lim Wit) = W* = (w*:)„ xm .24) holds. So (2. . l i yj which contradicts (i). w.ko\ f ) <yf.> -feo yf w*j.2.11 Let {(xfc.16) that V* G {0...•(*)}. If xf° < keP y • °.43 Chapter II Fuzzy Neural Networks for Storing and Classifying Step 5.} converges.e. then by the definition of 'a*'. lim WiAt) > x*0 a . And if there is tx G N.n o r m '*'. Proof. and Wij(to + 1) < x^ a* y^°.2.2..24) that is.. x(°° a* yk° < 1.15). *..} is non-increasing. The t—norm '*' is continuous as a two-variate function.fo x«° * {xl a" *. 
By (2.24), the limit matrix $\widetilde W=\lim_{t\to+\infty}W(t)$, which exists by (i), satisfies $\widetilde w_{ij}\ge w^*_{ij}$ for all $i\in N$, $j\in M$. On the other hand, if $x_i^k*\widetilde w_{ij}>y_j^k$ for some $i,j,k$, then by the continuity of '$*$', for all large $t$ the adjustment (2.23) is applied with
$$y_j^k(t)=\bigvee_{i'\in N}\{x_{i'}^k*w_{i'j}(t)\}\ge x_i^k*w_{ij}(t)>y_j^k$$
bounded away from $y_j^k$, contradicting the convergence of $\{w_{ij}(t)\}$. Hence $x_i^k*\widetilde w_{ij}\le y_j^k$ for all $i,j,k$, i.e. $X\odot\widetilde W\subset Y$, and therefore $\widetilde W\subset W^*$ by Theorem 2.10(i). Consequently $\widetilde W=W^*$, and when $X\odot W=Y$ is solvable, $W^*$ is its maximum solution by Theorem 2.10(ii). (ii) is proved. $\square$

Next we discuss an application of Algorithm 2.2.
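Algorithm 2.2 can be sketched in plain Python before turning to the numerical experiment. The sketch below fixes the Łukasiewicz t-norm and sweeps the (made-up) training pairs one by one; names and data are ours, not the book's.

```python
def tnorm_delta_learning(pairs, eta=0.9, max_iter=500):
    # Sketch of Algorithm 2.2 with the Lukasiewicz t-norm a*b = max(a+b-1, 0)
    t = lambda a, b: max(a + b - 1.0, 0.0)
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = [[1.0] * m for _ in range(n)]            # Step 1: w_ij(0) = 1
    for _ in range(max_iter):
        changed = False
        for x, y in pairs:
            # Step 2: real output y_j^k(t) of the 'V-*' FAM (2.21)
            out = [max(t(x[i], W[i][j]) for i in range(n)) for j in range(m)]
            for i in range(n):
                for j in range(m):
                    if t(x[i], W[i][j]) > y[j]:  # Step 3: rule (2.23)
                        W[i][j] -= eta * (out[j] - y[j])
                        changed = True
        if not changed:                          # Step 4: W(t+1) = W(t)
            break
    return W
```

With a single storable pair such as `([1.0, 0.2], [0.3])`, the offending weight decays toward the residuum value 0.3 while the harmless one stays at 1, mirroring the limit $W^*$ of Theorem 2.11.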

And by (2.26) and the definition of W(t) it follows that

  w_ij(t+1) = w_ij(t) − η · (y_j^{k_0}(t) − y_j^{k_0}).

Since {w_ij(t)} converges, by (2.27) lim_{t→+∞} y_j^{k_0}(t) = y_j^{k_0}, and hence lim_{t→+∞} x_i^{k_0} * w_ij(t) ≤ y_j^{k_0}; together with the choice of t_0 this contradicts (2.25). Therefore (2.24) holds, i.e. lim_{t→+∞} W(t) = W*. □

Next we discuss an application of Algorithm 2.2. Define the t-norm '*' as follows [27-29]:

  a * b = max{0, a + b − 1}   (a, b ∈ [0,1]).

Let N = {1, 2, 3}, M = {1} and P = {1, 2, 3}, with the input patterns X = (x_1, x_2, x_3)^T and output patterns Y = (y_1, y_2, y_3)^T given as above. By Theorem 2.11 the maximum solution of X ⊗ W = Y exists, and W* = (0.00000, 0.70000, 0.90000)^T; that is, the fuzzy pattern pair family {(x^k, y^k) | k = 1, 2, 3} can be stored in the FAM (2.21). Table 2.2 shows the iteration step number of Algorithm 2.2 with different learning constants η and the ultimate connection weight matrix W:

  Table 2.2 Simulation results of Algorithm 2.2

  No.  learning constant (η)  iterations (t)  converged matrix (W)
  1    0.90000                5               (0.00000, 0.64000, 0.90000)^T
  2    0.80000                7               (0.00000, 0.68000, 0.90000)^T
  3    0.50000                16              (0.00000, 0.69531, 0.90000)^T
  4    0.30000                30              (0.00000, 0.69948, 0.90000)^T
  5    0.10000                86              (0.00000, 0.69999, 0.90000)^T
  6    0.01000                594             (0.00000, 0.70000, 0.90000)^T

By Table 2.2 we may see: the larger the learning constant η is, the quicker the convergent speed of the matrix sequence {W(t) | t = 1, 2, ...} is, but the difference between W and W* is then obvious. As η becomes smaller and smaller, W gradually gets close to W*; when η = 0.01 we get W = W*. Thus, a meaningful problem related to Algorithm 2.2 is how to determine η so that the convergent speed and the sufficient closeness between W and W* can be guaranteed simultaneously.
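As a quick illustration of the composition used in this example, the following sketch computes y_j = ∨_i (x_i * w_ij) under the t-norm a * b = max{0, a + b − 1}; the vector x and matrix W here are made-up stand-ins, not the X, Y of the simulation.

```python
# Sketch of the max-* composition X (x) W with the t-norm
# a * b = max{0, a + b - 1}. Data are illustrative, not the book's.
def t_norm(a, b):
    return max(0.0, a + b - 1.0)

def max_t_compose(x, W):
    # y_j = max over i of x_i * w_ij
    n, m = len(W), len(W[0])
    return [max(t_norm(x[i], W[i][j]) for i in range(n)) for j in range(m)]

x = [0.8, 0.5, 1.0]
W = [[0.0], [0.7], [0.9]]      # a 3x1 weight matrix, as in the example setup
print(max_t_compose(x, W))     # single output close to 0.9
```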

§2.3 BP learning algorithm of FAM's

In this section we present the back propagation (BP) algorithm for the connection weight matrix W of the FAM (2.1).

2.3.1 Two analytic functions

Next we build approximately analytic representations of the fuzzy operators '∨' and '∧'. Since for a given a ∈ [0,1] the functions a ∨ x and a ∧ x are not differentiable on [0,1] (see [51]), as a preliminary for the BP algorithm of FAM's we first define the differentiable functions 'La' and 'Sm', by which the fuzzy operators '∨' and '∧' can be approximated, respectively, and the partial derivatives related in the BP algorithm can be calculated. Define the d-variate functions La, Sm : R_+ × [0,1]^d → [0,1] as

  La(s; x_1, ..., x_d) = ( Σ_{i=1}^d x_i · exp{s·x_i} ) / ( Σ_{i=1}^d exp{s·x_i} ),
  Sm(s; x_1, ..., x_d) = ( Σ_{i=1}^d x_i · exp{−s·x_i} ) / ( Σ_{i=1}^d exp{−s·x_i} ).   (2.28)

By (2.28) it is easy to show that La(s; x_1, ..., x_d), Sm(s; x_1, ..., x_d) ∈ [x_1 ∧ ··· ∧ x_d, x_1 ∨ ··· ∨ x_d].

Lemma 2.3 Suppose d > 1, s > 0, and x_1, ..., x_d ∈ [0,1]. Denote x_min = x_1 ∧ ··· ∧ x_d, x_max = x_1 ∨ ··· ∨ x_d, and

  x_s = max{x ∈ {x_1, ..., x_d} | x < x_max}, if this set is nonempty;
  x^s = min{x ∈ {x_1, ..., x_d} | x > x_min}, if this set is nonempty.

Then for x_1, ..., x_d ∈ [0,1] the following estimations hold:

  | ∨_i {x_i} − La(s; x_1, ..., x_d) | ≤ (d−1) · exp{−s(x_max − x_s)},
  | Sm(s; x_1, ..., x_d) − ∧_i {x_i} | ≤ (d−1) · exp{−s(x^s − x_min)}.

Moreover,

  lim_{s→+∞} La(s; x_1, ..., x_d) = ∨_i {x_i},   lim_{s→+∞} Sm(s; x_1, ..., x_d) = ∧_i {x_i}.

Proof. If x_max = x_min, then x_1 = ··· = x_d, and La(s; x_1, ..., x_d) = x_1 = ∨{x_i}, so the conclusions are trivial. At first let x_max > x_min, and suppose (after reordering) x_i = x_max exactly for i ∈ {1, ..., q}, so that x_i < x_max for i ∈ {q+1, ..., d}. Then

  ∨{x_i} − La(s; x_1, ..., x_d)
   = ( Σ_{i=1}^d (x_max − x_i)·exp{s·x_i} ) / ( Σ_{i=1}^d exp{s·x_i} )
   = ( Σ_{i=q+1}^d |x_i − x_max|·exp{s(x_i − x_max)} ) / ( Σ_{i=1}^d exp{s(x_i − x_max)} )
   ≤ (1/q) · Σ_{i=q+1}^d exp{s(x_i − x_max)}
   ≤ ((d−q)/q) · exp{−s(x_max − x_s)} ≤ (d−1) · exp{−s(x_max − x_s)}.

Since x_max > x_s, we have lim_{s→+∞} (d−1)·exp{−s(x_max − x_s)} = 0, and therefore lim_{s→+∞} La(s; x_1, ..., x_d) = ∨{x_i}. Similarly we can prove the other conclusions. □
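A small numerical sketch (with made-up values) shows the behaviour claimed in Lemma 2.3: La and Sm from (2.28) approach the maximum and the minimum as s grows.

```python
import math

# Numerical sketch of La and Sm from (2.28): softmax/softmin style
# weighted averages that approach max and min as s grows. The sample
# values xs are illustrative.
def La(s, xs):
    w = [math.exp(s * x) for x in xs]
    return sum(x * wi for x, wi in zip(xs, w)) / sum(w)

def Sm(s, xs):
    w = [math.exp(-s * x) for x in xs]
    return sum(x * wi for x, wi in zip(xs, w)) / sum(w)

xs = [0.2, 0.7, 0.9]
for s in (1.0, 10.0, 100.0):
    print(s, La(s, xs), Sm(s, xs))
# As s increases, La(s; xs) -> max(xs) = 0.9 and Sm(s; xs) -> min(xs) = 0.2,
# within the Lemma 2.3 bound (d - 1) * exp(-s * (x_max - x_s)).
```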

Lemma 2.4 The functions La(s; x_1, ..., x_d) and Sm(s; x_1, ..., x_d) are continuously differentiable on [0,1]^d. Moreover, for j ∈ {1, ..., d},

  (i)  ∂La(s; x_1, ..., x_d)/∂x_j = ( −exp(s·x_j) / ( Σ_{i=1}^d exp(s·x_i) )² ) · Σ_{i=1}^d (s·x_i − s·x_j − 1)·exp(s·x_i);

  (ii) ∂Sm(s; x_1, ..., x_d)/∂x_j = ( exp(−s·x_j) / ( Σ_{i=1}^d exp(−s·x_i) )² ) · Σ_{i=1}^d (s·x_i − s·x_j + 1)·exp(−s·x_i).

Proof. It suffices to show (i) since the proof of (ii) is similar. By the definition of La(s; x_1, ..., x_d) we can prove

  ∂La(s; x_1, ..., x_d)/∂x_j
   = Σ_{i=1}^d (∂/∂x_j) ( x_i·exp(s·x_i) / Σ_{i'=1}^d exp(s·x_{i'}) )
   = ( exp(s·x_j) / ( Σ_{i=1}^d exp(s·x_i) )² ) · { (1 + s·x_j) · Σ_{i=1}^d exp(s·x_i) − s · Σ_{i=1}^d x_i·exp(s·x_i) }
   = ( −exp(s·x_j) / ( Σ_{i=1}^d exp(s·x_i) )² ) · Σ_{i=1}^d (s·x_i − s·x_j − 1)·exp(s·x_i).

So (i) is true. □

By Lemma 2.4, we may conclude that the following facts hold for a constant a ∈ [0,1]:

  ∂La(s; x, a)/∂x = ( 1 / (1 + exp(s(a − x)))² ) · { 1 − (s·a − s·x − 1)·exp(s(a − x)) };
  ∂Sm(s; x, a)/∂x = ( 1 / (1 + exp(−s(a − x)))² ) · { 1 + (s·a − s·x + 1)·exp(−s(a − x)) }.   (2.29)

Therefore, x > a ⟹ lim_{s→+∞} (∂La(s; x, a)/∂x) = 1, lim_{s→+∞} (∂Sm(s; x, a)/∂x) = 0; and x < a ⟹ lim_{s→+∞} (∂La(s; x, a)/∂x) = 0, lim_{s→+∞} (∂Sm(s; x, a)/∂x) = 1. So for a given constant a ∈ [0,1], it follows that

  lim_{s→+∞} ∂La(s; x, a)/∂x = ∂(a ∨ x)/∂x = { 1, x > a;   1/2, x = a;   0, x < a },   (2.30)

  lim_{s→+∞} ∂Sm(s; x, a)/∂x = ∂(a ∧ x)/∂x = { 0, x > a;   1/2, x = a;   1, x < a }.   (2.31)

2.3.2 BP learning algorithm

To develop the BP learning algorithm for the connection weight matrix W of (2.1), firstly we define a suitable error function. Suppose {(x^k, y^k) | k ∈ P} is a fuzzy pattern pair family for training. For the input pattern x^k of (2.1), let the corresponding real output pattern be o^k = (o_1^k, ..., o_m^k) : o^k = x^k ∘ W, that is

  o_j^k = ∨_{i∈N} { x_i^k ∧ w_ij }   (k ∈ P, j ∈ M).

Define the error function E(W) as follows:

  E(W) = (1/2) Σ_{k=1}^p || o^k − y^k ||² = (1/2) Σ_{k=1}^p Σ_{j=1}^m (o_j^k − y_j^k)².   (2.32)

Since E(W) is non-differentiable with respect to w_ij, we cannot design the BP algorithm directly using E(W). So we utilize the functions La and Sm to replace the fuzzy operators ∨ and ∧, respectively. By Lemma 2.3, when s is sufficiently large, we have

  E(W) ≈ e(W) = (1/2) Σ_{k=1}^p Σ_{j=1}^m ( La(s; Sm(s; x_1^k, w_1j), ..., Sm(s; x_n^k, w_nj)) − y_j^k )².   (2.33)

e(W) is a differentiable function, so we can employ the partial derivative ∂e(W)/∂w_ij to develop a BP algorithm of (2.1).

Theorem 2.12 Given the fuzzy pattern pair family {(x^k, y^k) | k ∈ P}, e(W) is continuously differentiable with respect to w_ij for i ∈ N, j ∈ M. And

  ∂e(W)/∂w_ij = Σ_{k=1}^p (o_j^k − y_j^k) · ( −exp(s·Δ(i,k)) · Γ(s,k) / ( Σ_{i'=1}^n exp(s·Δ(i',k)) )² )
                 · ( 1 + (s·x_i^k − s·w_ij + 1)·exp(−s(x_i^k − w_ij)) ) / ( 1 + exp(−s(x_i^k − w_ij)) )²,

where o_j^k = La(s; Δ(1,k), ..., Δ(n,k)), Γ(s,k) = Σ_{i'=1}^n { s·Δ(i',k) − s·Δ(i,k) − 1 }·exp(s·Δ(i',k)), and Δ(i,k) = Sm(s; x_i^k, w_ij).

Proof. By Lemma 2.4, considering Δ(i,k) = Sm(s; x_i^k, w_ij) for i ∈ N, j ∈ M, we have

  ∂La(s; Sm(s; x_1^k, w_1j), ..., Sm(s; x_n^k, w_nj)) / ∂Sm(s; x_i^k, w_ij)
   = −exp(s·Δ(i,k)) · Γ(s,k) / ( Σ_{i'=1}^n exp(s·Δ(i',k)) )².

And by (2.29) easily we can show

  ∂Sm(s; x_i^k, w_ij)/∂w_ij = ( 1 + (s·x_i^k − s·w_ij + 1)·exp(−s(x_i^k − w_ij)) ) / ( 1 + exp(−s(x_i^k − w_ij)) )².

By the chain rule

  ∂e(W)/∂w_ij = Σ_{k=1}^p ( ∂e(W)/∂Sm(s; x_i^k, w_ij) ) · ( ∂Sm(s; x_i^k, w_ij)/∂w_ij ),

the formula follows. Therefore, e(W) is continuously differentiable with respect to w_ij. □
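The gradient formula of Theorem 2.12 can be checked against a finite difference; in the sketch below the sizes, data and the value of s are made up, and the symbol names (Sm2, La, de_dw) are ad-hoc.

```python
import math

# Numerical check of the gradient of Theorem 2.12 (as stated above, with
# the error factor o_j - y_j included): the analytic derivative of e(W)
# with respect to w_ij should agree with a central finite difference.
s = 10.0

def Sm2(x, w):                       # Sm(s; x, w) for two arguments
    E = math.exp(s * (x - w))
    return (x + w * E) / (1.0 + E)

def La(vals):                        # La(s; v_1, ..., v_n)
    ws = [math.exp(s * v) for v in vals]
    return sum(v * w for v, w in zip(vals, ws)) / sum(ws)

def e_fun(W, X, Y):
    total = 0.0
    for xk, yk in zip(X, Y):
        for j, yj in enumerate(yk):
            o = La([Sm2(xk[i], W[i][j]) for i in range(len(xk))])
            total += 0.5 * (o - yj) ** 2
    return total

def de_dw(W, X, Y, i, j):            # analytic d e(W) / d w_ij
    n, g = len(W), 0.0
    for xk, yk in zip(X, Y):
        d = [Sm2(xk[p], W[p][j]) for p in range(n)]   # Delta(p, k)
        o = La(d)
        ew = [math.exp(s * v) for v in d]
        gamma = sum((s * d[p] - s * d[i] - 1.0) * ew[p] for p in range(n))
        dLa = -ew[i] * gamma / sum(ew) ** 2
        E = math.exp(-s * (xk[i] - W[i][j]))
        dSm = (1.0 + (s * xk[i] - s * W[i][j] + 1.0) * E) / (1.0 + E) ** 2
        g += (o - yk[j]) * dLa * dSm
    return g

X, Y = [[0.2, 0.7, 0.5]], [[0.6]]
W = [[0.4], [0.9], [0.1]]
h = 1e-6
W[0][0] += h; ep = e_fun(W, X, Y)
W[0][0] -= 2 * h; em = e_fun(W, X, Y)
W[0][0] += h
print(de_dw(W, X, Y, 0, 0), (ep - em) / (2 * h))  # the two values agree
```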

Using the partial derivatives in Theorem 2.12 we can design a BP algorithm for W of (2.1).

Algorithm 2.3 BP learning algorithm of FAM's.

Step 1. Initialization. Put w_ij(0) = 0, let W(0) = (w_ij(0))_{n×m}, and set t = 1.

Step 2. Denote W(t) = (w_ij(t))_{n×m}.

Step 3. Iteration scheme. W(t) iterates with the following law:

  w'_ij = w_ij(t) − η · ∂e(W(t))/∂w_ij + α · Δw_ij(t−1),  and set  w_ij(t+1) = (w'_ij ∨ 0) ∧ 1,

where η > 0 is the learning constant and α the momentum constant.

Step 4. Stop condition. Discriminate: |e(W(t+1))| < ε? If yes, output w_ij(t+1); otherwise, let t = t+1 and go to Step 2.
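Algorithm 2.3 can be sketched end-to-end as follows. This is a minimal illustration on a single made-up pattern pair; for brevity the analytic gradient of Theorem 2.12 is replaced by a finite-difference approximation of ∂e/∂w_ij, while the clipping (w' ∨ 0) ∧ 1 and the momentum term α of Step 3 are kept.

```python
import math

# Minimal sketch of Algorithm 2.3 on one input/output pair. The gradient
# is approximated by central differences instead of the closed form of
# Theorem 2.12; data, sizes and constants are illustrative.
def La(s, xs):
    w = [math.exp(s * v) for v in xs]
    return sum(v * wi for v, wi in zip(xs, w)) / sum(w)

def Sm(s, xs):
    w = [math.exp(-s * v) for v in xs]
    return sum(v * wi for v, wi in zip(xs, w)) / sum(w)

def e(W, x, y, s=20.0):
    m = len(W[0])
    o = [La(s, [Sm(s, [x[i], W[i][j]]) for i in range(len(x))]) for j in range(m)]
    return 0.5 * sum((oj - yj) ** 2 for oj, yj in zip(o, y))

def train(x, y, n, m, eta=0.3, alpha=0.05, steps=500, h=1e-5):
    W = [[0.5] * m for _ in range(n)]
    dW_prev = [[0.0] * m for _ in range(n)]
    for _ in range(steps):
        for i in range(n):
            for j in range(m):
                W[i][j] += h; e_plus = e(W, x, y)
                W[i][j] -= 2 * h; e_minus = e(W, x, y)
                W[i][j] += h
                g = (e_plus - e_minus) / (2 * h)
                d = -eta * g + alpha * dW_prev[i][j]   # Step 3 update law
                dW_prev[i][j] = d
                W[i][j] = min(1.0, max(0.0, W[i][j] + d))  # (w' v 0) ^ 1
    return W

x, y = [0.64, 0.50, 0.70, 0.60], [0.64, 0.70]
W = train(x, y, 4, 2)
print(e(W, x, y))  # small final error
```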

In the following we illustrate Algorithm 2.3 by a simulation to train the FAM (2.1). To this end, give a fuzzy pattern pair family as shown in Table 2.3.

  Table 2.3 Fuzzy pattern pair family for training

  No.  Input pattern              Desired output  Real pattern
  1    (0.64, 0.50, 0.70, 0.60)   (0.64, 0.70)    (0.6400, 0.7000)
  2    (0.40, 0.45, 0.80, 0.65)   (0.65, 0.80)    (0.6500, 0.7867)
  3    (0.75, 0.70, 0.35, 0.25)   (0.75, 0.50)    (0.7250, 0.5325)
  4    (0.33, 0.67, 0.35, 0.50)   (0.67, 0.50)    (0.6700, 0.5000)
  5    (0.65, 0.70, 0.90, 0.75)   (0.75, 0.80)    (0.7500, 0.7867)
  6    (0.95, 0.30, 0.45, 0.60)   (0.80, 0.60)    (0.7250, 0.6000)
  7    (0.80, 1.00, 0.85, 0.70)   (0.80, 0.80)    (0.7864, 0.7867)
  8    (0.10, 0.50, 0.70, 0.65)   (0.65, 0.70)    (0.6500, 0.7000)
  9    (0.70, 0.70, 0.25, 0.56)   (0.70, 0.56)    (0.7000, 0.5600)

Choose α = 0.05, η = 0.3, and let s = 100. With 1000 iterations, by Algorithm 2.3 we can establish the real outputs of (2.1), as shown in Table 2.3. By comparison we know that Algorithm 2.3 possesses a quick convergent speed and a high convergent accuracy.

The further subjects for FAM's include designing the related learning algorithms based on GA [5, 33-35, 44, 49], system modeling and identification [21, 50], system control [31, 32], signal processing [42], analysis on fault-tolerance of systems [22, 37, 38], and so on, and applying the results obtained in many real fields. These researches are at their infancy, and so they have a great prospect for future research.

§2.4 Fuzzy ART and fuzzy ARTMAP

Through the learning of a FAM, a given family of fuzzy patterns may be stored in the FAM, and the connection weight matrix W is established. If a new fuzzy pattern is presented to the FAM and asked to be stored in W, the FAM has to be trained, violating the original W. Thus, FAM's as competitive networks do not have stable learning in response to arbitrary input patterns. The learning instability occurs because of the network's adaptivity, which causes prior learning to be eroded by more recent learning. How can a system be receptive to significant new patterns and yet remain stable in response to irrelevant patterns? Adaptive resonance theory (ART), developed by Carpenter et al, addresses such a dilemma [6]. As each input pattern is presented to an ART network, it is compared with the prototype vector that it most closely matches; if the match between the prototype and the input vector is not adequate, a new prototype is created. In this way previously learned memories are not eroded by new learning. Fuzzy ART is a fuzzy version of ART1 [7], so let us now recall ART1 and its architecture.

2.4.1 ART1 architecture

An ART1 network consists of five parts: two subsystems, which are called the attentional subsystem C (comparing layer) and the orienting subsystem R (recognition layer), respectively; two gain controllers G1 and G2, which generate the controlling signals G1 and G2, respectively; and the reset controller 'Reset'. The five components act together to form an efficient pattern classifying model. ART1 can process patterns expressed as vectors whose components are either 0 or 1. The ART1 has an architecture as shown in Figure 2.3.

Figure 2.3 Architecture of ART1

Figure 2.4 Attentional subsystem

Let us now describe the respective functions of the five parts of ART1 in Figure 2.3. There exist n units (nodes) in the comparing layer C, which are connected respectively with each node in the recognition layer R. Each node in C accepts three signals: the input x_i, the gain controlling signal G1, and the feedback signal of the winning node in the recognition layer R. The outputs c_i of the nodes in C are determined by the 2/3 criterion, i.e. the majority criterion: the value of c_i is identical to the common value of the majority of its three input signals. The output vector of C is c = (c_1, ..., c_n).

There exist m nodes in the recognition layer R, each of which is connected with the nodes in C to form a feedforward competing network; m means the number of classified fuzzy patterns. By R new fuzzy patterns can be added, dynamically, to the set of the patterns classified.

Figure 2.5 Orienting subsystem

Suppose the output of the recognition layer R is r = (r_1, ..., r_m), and denote R_0 = r_1 ∨ ··· ∨ r_m. When an input x = (x_1, ..., x_n) ∈ {0,1}^n is presented, the gain controller G2 tests whether it is 0: x ≠ 0 ⟹ G2 = 1, otherwise G2 = 0; that is, G2 = x_1 ∨ ··· ∨ x_n. The controlling signal G1 is the product of G2 and the complement of R_0: G1 = G2·(1 − R_0). When there is no input signal the network is in a waiting state: x = 0 ⟹ G2 = 0, there is no competition in the recognition layer R, all the components of the output vector r are zero, and G1 = 0.

In the following we will explain how an ART1 works.

First step — matching. When a signal x ≠ 0 is presented to the network, G2 = 1; since r = 0 we have R_0 = 0, so G1 = G2·(1 − R_0) = 1, and by the 2/3 criterion c = x, which propagates forwardly to R. Suppose the connection weight between the i-th node in C and the j-th node in R is b_ij. Then we get the input of the j-th node in R as follows:

  P_j = Σ_{i=1}^n b_ij · x_i   (j = 1, ..., m).

We call P_j the matching degree between x and b_j = (b_1j, ..., b_nj). The nodes in R compete to generate a winning node. Choose such

a node j* whose matching degree is maximum, i.e. P_{j*} = ∨_{1≤j≤m} {P_j}; the node j* is called a winning node.

Second step — comparing. The winning node gives r_{j*} = 1 and r_j = 0 (j ≠ j*). The output vector r = (r_1, ..., r_m) of R returns to C through the connection weight matrix T = (t_ij)_{n×m}. Now R_0 = 1, so G1 = G2·(1 − R_0) = 0, and by the 2/3 criterion the output c = (c_1, ..., c_n) of C characterizes the matching degree M_0 between the template t_{j*} = (t_{1j*}, ..., t_{nj*}) corresponding to the winning node and the input pattern x:

  M_0 = (x, t_{j*}) = Σ_{i=1}^n t_{ij*} · x_i = Σ_{i=1}^n c_i.

Suppose there exist M_1 nonzero components in x, i.e. M_1 = x_1 + ··· + x_n. Since x_i ∈ {0,1}, M_0 is the number of overlapping nonzero components between t_{j*} and x, so M_0/M_1 reflects the similarity between x and t_{j*}. Give ρ ∈ [0,1] as a minimum similarity — a vigilance — between the input pattern x and the template t_j corresponding to a winning node. If M_0/M_1 ≥ ρ, then x and t_{j*} are close enough, the match finished in the first step is effective, the 'resonance' between x and t_{j*} takes place, and we go to the fourth step. If M_0/M_1 < ρ, then x and t_{j*} cannot satisfy the similarity condition, and we go to the third step.

Third step — searching. Through the reset signal 'Reset' the match finished in the first step loses its efficacy: the winning node established by the first step keeps restrained, and the restraining state is kept until the ART1 network receives a new pattern. The network returns to the matching state of the first step, and a new winning node is generated among the remaining nodes. If the circulating procedure does not stop until all nodes in R are used, then an (m+1)-th node has to be added to store the current pattern as a new template: let t_{i(m+1)} = 1, b_{i(m+1)} = x_i (i = 1, ..., n).

Fourth step — learning. The fact that r_{j*} = 1 results in the weight vectors t_{j*} and b_{j*} being adjusted, so that a stronger 'resonance' between x and t_{j*} takes place. b_{ij*} and t_{ij*} iterate according to the following Algorithm 2.4.

Algorithm 2.4 The connection weight matrices B = (b_ij)_{n×m}, T = (t_ij)_{n×m} iterate with the following steps:

Step 1. Initialization: let t = 0, and choose the initial values of b_ij and t_ij as

  b_ij(0) = 1/(1 + n),  t_ij(0) = 1   (i = 1, ..., n; j = 1, ..., m).

Step 2. Receive an input: give the input pattern x = (x_1, ..., x_n) ∈ {0,1}^n.

Step 3. Determine the winning node j*: calculate the matching degrees P_j, j = 1, ..., m, and compute j*: P_{j*} = ∨_{1≤j≤m} {P_j}.

Step 4. Compute the similarity degree:

  M_0 = Σ_{i=1}^n t_{ij*} · x_i,  M_1 = Σ_{i=1}^n x_i.

Step 5. Vigilance test. If M_0/M_1 ≥ ρ, go to Step 7; if M_0/M_1 < ρ, let the winning node j* be invalid, put r_{j*} = 0, and go to Step 6.

Step 6. Search a pattern class. If the invalid node number is less than m, go to Step 3. If the invalid node number equals m, then in R add the (m+1)-th node, let t_{i(m+1)} = 1, b_{i(m+1)} = x_i (i = 1, ..., n), put m = m+1, and go to Step 2.

Step 7. Adjust the connection weights. t_{ij*} and b_{ij*} are adjusted with the following scheme:

  t_{ij*}(t+1) = t_{ij*}(t) · x_i,
  b_{ij*}(t+1) = t_{ij*}(t) · x_i / ( 0.5 + Σ_{i'=1}^n t_{i'j*}(t) · x_{i'} ).

Set t = t+1, and go to Step 2.

Algorithm 2.4 for the ART1 network is an on-line learning algorithm. In the ART1 network, by the vigilance ρ we can establish the number of the patterns classified: the larger ρ is, the more the classified patterns are. All patterns classified constitute a classification of the input patterns, each class of which includes some similar patterns; thus, we classify x into the pattern class that includes t_{j*}.

2.4.2 Fuzzy ART

Like an ART1 network, a fuzzy ART consists also of two subsystems [7, 13, 14]: one is the attentional subsystem, and another is the orienting subsystem, as shown in Figure 2.6. The attentional subsystem is a two-layer network architecture, consisting of the F_1^x layer and the F_2^x layer, while an F_0^x layer is used for transforming the input patterns: F_0^x is the input layer, accepting the input fuzzy patterns, and F_2^x is a pattern expressing layer, which accepts all the information coming from the F_1^x layer and whose nodes stand respectively for the classified pattern classes. The orienting subsystem consists of a reset node 'Reset'.
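Before turning to the fuzzy version in detail, the crisp ART1 cycle of Algorithm 2.4 — matching by P_j, the vigilance test M_0/M_1 ≥ ρ, reset-and-search, and fast learning — can be sketched compactly. This is an illustration with made-up binary data; unlike Algorithm 2.4, category nodes are created on demand rather than pre-allocated.

```python
# Compact, illustrative sketch of the ART1 cycle of Algorithm 2.4:
# bottom-up weights b, top-down templates t, vigilance rho.
def art1(patterns, rho=0.7):
    b, t, labels = [], [], []
    for x in patterns:
        m1 = sum(x)
        tried = set()
        while True:
            cand = [j for j in range(len(b)) if j not in tried]
            if not cand:
                # all nodes reset: add a new node storing x as template
                t.append(list(x))
                b.append([xi / (0.5 + m1) for xi in x])
                labels.append(len(b) - 1)
                break
            # winner by matching degree P_j = sum_i b_ij * x_i
            j = max(cand, key=lambda j: sum(bi * xi for bi, xi in zip(b[j], x)))
            m0 = sum(ti * xi for ti, xi in zip(t[j], x))
            if m0 / m1 >= rho:               # vigilance passed: resonance
                t[j] = [ti * xi for ti, xi in zip(t[j], x)]
                s = sum(t[j])
                b[j] = [ti / (0.5 + s) for ti in t[j]]
                labels.append(j)
                break
            tried.add(j)                     # reset the winner, search on

    return labels

data = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]]
print(art1(data))  # -> [0, 0, 1]: first two patterns share a category
```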
By the F_0^x layer we can complete the complement coding of the input fuzzy pattern x: for x = (x_1, ..., x_n) with x_i ∈ [0,1] (i = 1, ..., n), the output I of F_0^x is determined as follows:

  I = (x, x^c) = (x_1, ..., x_n, x_1^c, ..., x_n^c),  where x_i^c = 1 − x_i (i = 1, ..., n).

Thus F_1^x includes 2n nodes. From now on, we take the pattern I as an input of a fuzzy ART. Let the connection weight between the node i in F_1^x and the node j in F_2^x be W_ij^x, and the connection weight between the node j in F_2^x and the node i in F_1^x be w_ji^x; suppose F_2^x includes m nodes. All fuzzy patterns expressed in F_2^x constitute a classification of the input patterns, each class of which includes some similar patterns.
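A compact sketch of one fuzzy ART presentation using this complement coding — the choice function t_j(I) = |I ∧ w_j|/(α_x + |w_j|), the vigilance test |I ∧ w_j|/|I| ≥ ρ, and fast learning w ← I ∧ w — follows; the data and parameter values are made up.

```python
# Illustrative sketch of one fuzzy ART presentation: complement coding,
# choice function, vigilance test, fast learning. Parameters are ad hoc.
def complement_code(x):
    return x + [1.0 - xi for xi in x]

def present(I, W, alpha=0.001, rho=0.75):
    # visit committed nodes in decreasing order of the choice function
    order = sorted(range(len(W)),
                   key=lambda j: -sum(map(min, I, W[j])) / (alpha + sum(W[j])))
    for j in order:
        if sum(map(min, I, W[j])) / sum(I) >= rho:   # resonance
            W[j] = [min(a, b) for a, b in zip(I, W[j])]
            return j
    W.append(list(I))                                # new category
    return len(W) - 1

W = []
print(present(complement_code([0.20, 0.80]), W))   # -> 0 (new category)
print(present(complement_code([0.25, 0.75]), W))   # -> 0 (resonates)
print(present(complement_code([0.90, 0.10]), W))   # -> 1 (mismatch: new)
```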

. ^ n V y ^ J . I is classified into F2X. wjt(0).. .vh 2n and we call dis(yi....... Before discussing the I/O relationship of the fuzzy ART. M x are parameters of the fuzzy ART. ax G (0... I is classified into F£.. if w* = w*(0) = (1..54 Liu and Li Fuzzy Neural Network Theory and Application + 2/ Attentional subsystem Figure 2. Then W^(0).Denote yi V y 2 = (yj V yl.. x...°ld W*.6 Fuzzy ART architecture Figure 2. where j = 1. respectively: W?A0) = 1 a x + Mx > <i(°) = 1 (« = l . Suppose the initial values of the connection weights W* and w^ are W£(0). . ^ ( 0 ) ) . .y 2 ) = YI \v\~Vi\ tne metric between the fuzzy patterns yi and y2.. .. . +oo) is a uncommitted node parameter [14]. . . w*(0) = ( ^ ( 0 ) . W x > new = W ? I is not classified intoF*. We call the node j a uncommitted node in F*. m.. yi A y 2 = {y\ A |/J. . y\n A y f j . For j G {1. j = l. denote |y J | = 2n Y. otherwise the node j is called a committed node... where a x . l]2n (q = 1... +oo) is a selection parameter.old x W• = W• I is not classified intoF*.. w*(0) correspond.2). Denote W*(0) = (W5(0). y f j G [0. w*' n e w = w*...m}. and Mx e [2n. Suppose yq = (y\.. 2 n . define Wx.7 Geometry form of pattern w* We call w* a template.1). we introduce some notations.m)... . respectively to the j—th connection weight vectors before the input fuzzy pattern x is expressed in F£. .W^ n ) j (0))..

TX j A l / x.35) . . may be taken as a candidacy of a standard pattern class.old . and |lxAwx'old|>M-p... Repeat this procedure.oldi i l node j is a committed node.. x c ). = {uj Ax r x.m}.. and the number of the rectangles increases. Next let us show the geometry sense of fuzzy pattern in the fuzzy ART [14].m} is taken as j * . (2. and it can not ensure Ax > p.old .4. x new n l If Ax < p. For j G M.oldi + hV | where j G {l. then easily we can show. w x .x.. If I x G i? x '° . If let n = 2. at first it is not classified by the fuzzy ART.. if Ax > p.. IT „.l|w. and others are inactive.old I A wxJold i* w x... If one by one each j G {1. correspondingly. then I is stored in F£ as a representative of a new fuzzy pattern class.55 Chapter II Fuzzy Neural Networks for Storing and Classifying For the input I of F x . a x + Mx *.. into which the input pattern I will be classified. if |I X | = M.1]. > ivj ~\ c\ Vx}cJ=wi' x. If I x 0 i? x '° . as shown in Figure 2. then w x can be established by two vertices u x v x of the rectangle i? x ..W X „) we can establish an input of F x layer as t(I) = (£i(I). . then I is classified into w x ». Its maximum value is determined by the vigilance p. the connection weight vectors can be trained as follows: x.34) x |I A w x.. by the upward connection weight matrix (W X .old . x. and the the template vector keeps unchanged. For the input fuzzy pattern I x .. . Similarly with Algorithm 2.. then the reset node 'Reset' in the orienting subsystem generates a signal to enable the node j * inhibitory.. that is x.new Wj' = w x. And we choose a second-maximum tj(T) as the new active node j * . r wx^new x ld .new T A x.i m (I)) : HI node j is an uncommitted node. Let j * G {!...old| | l AA w f | j* _ w w ' a x +4r. the weight vector w x can be expressed by two n dimensional vectors u x and v x : where u x < v x ..m} : i/. w x .(I) = V {^QO}- And in F 2 only the node j * is active. then u x '° < x < v x '° .old . Give an input pattern I x = (x. 
The rectangle i? x ' o l d is a geometry representation of the template vector w x '° . For a given vigilance p G [0. n e w ^ w ? ' ° • And the weight vectors change. and in F-f we add a new node according to I.new _ XM w „.7. The matching degree between wx„ and I is Ax = (|I A wx»° |)/|I|..-(!) (2.

36) it follows that \R*'new\< n ( l . the rectangle R*f is chosen precede R^° . Then the rectangle i? x '° is chosen precede -Rx'° if and only if 1 r>x. and I £ R^° \ R^° .36) (x V v x ' o l d ) .(x A u x ' o l d ) I = n .old| |w x .37) implies that tj1 (I) > tj2 (I). n e w | n-|i?x. . Thus.| i $ o l d | Using the assumption we get.oldi | t x.oldi x. By computation we can see |I X A w x ' o l d | = | (u x ' o l d A x. ^-i^- a x + |w x ' o l d | \ (2. then the input space is filled with small rectangles. If p « 1.old i |j> iDx. i | x. o l d | .| i ? x < ° T IT A 32 \ / x.old i a x + n .6 Suppose an input fuzzy pattern I = (x. then there are only a few of rectangles.oldi "x + lw-. {v x ' o l d V x} c ) | = E(^A^old)+f:(alV^°Id)c (2. i? x '° < \R?'° • Then the rectangle R*'° is chosen precede R*'° . that is. p « 0.e. x c ) be an input fuzzy pattern of the fuzzy ART.old i «x + |wjx' iAw ld i t (l) 3 |wx-old| . | i 7->x.34) easily we can show \ . x c ) is presented to the fuzzy ART. D Lemma 2.\R^ old _.ax + n . | i7->x. and I G i? x '° (~l i? x '° .oid. ". To this end we at first give three lemmas. I x can be classified into a pattern class that includes the weight vector w x '° .34) it follows that JlAw x '° ld | tjAl) 31 ^ ' i I x.|Rf n So by (2. j i . Lemma 2. By the assumption and (2.oldi . Therefore.Hence (2. respectively.p).5 Let I = (x.old i dis(l.\R^ \ Proof.new| a x + |w x i ' old | a x + |w x ' o l d | ax + n . j'2 are committed nodes in F x layer. (I) n [ ) < \Kh |iA<. So by (2.oid.old i a x + Iw^' w ld r i n-\R*. x°) by the fuzzy ART when choosing the parameter ax as 'very small' 'medium' and 'very large'. Proof. By the assumption we have.\Rjl \ ax + n.56 Liu and Li Fuzzy Neural Network Theory and Application then in F * layer. x.37) i ax + n . if p is very small. i.oldi" ax + n-\R^ \ }• . i | i7->x.oldi ax + |wj2' . iDX. iJ:L(I) > tj2(T).M\ i r i a x + |w x . 1 t- )^(n\ax i 7-»x.35) (2. by the classifying rule of the fuzzy ART. n — \R^° \> n — \R^° |. 
Now we present the classifying order of the input pattern I = (x.

oldi n-\Rh x+ . iDx.| a x + n-\R*. . i x. • In order to utilize the parameter a x to establish the classifying order of the input fuzzy pattern I by the fuzzy ART.| W r WId |w-Id| ax + | w .7 Suppose I = (x. i?*'° ld ). Then the rectangle R*'° is chosen precede R*'° if and only if „ i i7~>x.\Rjl \ I 1 nr ?i ax + n. / . it„•' . Rf°")< dis(l.oldi iT->x.\R^ \ (2.\R^ . Using the assumption and (2.40) v / ax + n.n + Q.34) we can conclude that **(!)= n — |i?x.| ^ o l d | ) | n-\Rn ^ ' n V^x + n-\RlM\ „ _ iDx.-' ~ \Uji n ~\Uh I < u *• a x + n . and I e R%'° ni?*'° . D Lemma 2.oldi ax + |wj2' .57 Chapter II Fuzzy Neural Networks for Storing and Classifying Then I r>x.„ew |lAW «x + -°ld| .oldi (2.38) But | J R*' new | = |.R*' old |+dis(l.oldi + |wi2' . b t " 'I 7~>X. n-lR^r ld old di s d ^d^+. n |Dx.\R£ \} which implies the lemma. (2.38) we get.01ai ^ J + a x ( ' ^ .\O ax + n-|i?j2' | D a x + n . n— Ji V / . we define the function „ iDx.oldi\ dis(l.2.old| ^ — ^ r .new i7->x.old|\f ) < n + Q!x — it. | i % n e w | = |i?^ o l d |+dis(l. 1^(1) > tj2(T) if and only if the following fact holds: j. | 1 nx.-i^ri *a*r i i^ D h{x) + . ' "I n'X .40) it follows that tjl (I) > th (I) if and only if (2. Consequently by (2.\R^ \ But for k = 1. I-.newi tn(I) > tn{l) <=• ax I r> x i°ldi „ 2 >\ ' > ' • +ra. which is replace into (2.\R^ | a x + n .Jg-°"). i | QX x.39) is true.39) Proof.c Id. ^ ° l d ) . Dx.oldi /ir>x. x c ) is an input pattern presented to the fuzzy ART.oldi* ax + n~ \R^ \ Therefore the following fact holds: th (I) > t.oldi *.-a (I) «=*• J1 v ' J2\ / ^ ^r > iDx.old\ dis(I.(*) = {n + x.\R^ | a x + n .
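The geometric reading above is easy to exercise in code. The sketch below (with made-up numbers) decodes a template w = (u, v^c) into its rectangle, computes the size |R| = Σ_i (v_i − u_i), and applies fast learning w ← I ∧ w, which realizes the expansion (2.36).

```python
# Geometric view of a committed fuzzy ART template (cf. Figure 2.7):
# w = (u, v^c) encodes the rectangle [u_1, v_1] x ... x [u_n, v_n].
def rect_of(w, n):
    u, vc = w[:n], w[n:]
    return u, [1.0 - c for c in vc]          # vertices (u, v)

def size(w, n):
    u, v = rect_of(w, n)
    return sum(vi - ui for ui, vi in zip(u, v))

n = 2
w = [0.2, 0.3, 1.0 - 0.6, 1.0 - 0.5]         # rectangle [0.2,0.6] x [0.3,0.5]
x = [0.7, 0.4]                               # a point outside the rectangle
I = x + [1.0 - xi for xi in x]               # complement code of x
w_new = [min(a, b) for a, b in zip(I, w)]    # fast learning: w <- I ^ w
print(rect_of(w_new, n))                     # grows to [0.2,0.7] x [0.3,0.5]
print(size(w, n), size(w_new, n))            # size grows from 0.6 to 0.7
```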

In this subsection we discuss further how the committed nodes are chosen in the learning of the fuzzy ART; the main results come from [7, 14]. In order to utilize the parameter α_x to establish the classifying order of the input fuzzy pattern I by the fuzzy ART, for committed nodes j_1, j_2 we define the functions

  φ_0(x) = x · (|R_{j_2}^{x,old}| − |R_{j_1}^{x,old}|) / (x + n − |R_{j_2}^{x,old}|),
  φ_1(x) = [ x · (|R_{j_2}^{x,old}| − |R_{j_1}^{x,old}|) + dis(I, R_{j_2}^{x,old}) · (x + n − |R_{j_1}^{x,old}|) ] / (x + n − |R_{j_2}^{x,old}|).

By Lemma 2.6, if I ∈ R_{j_2}^{x,old} \ R_{j_1}^{x,old}, then R_{j_1}^{x,old} is chosen precede R_{j_2}^{x,old} if and only if

  dis(I, R_{j_1}^{x,old}) < φ_0(α_x);   (2.40)

and by Lemma 2.7, if I ∉ R_{j_1}^{x,old} ∪ R_{j_2}^{x,old}, then R_{j_1}^{x,old} is chosen precede R_{j_2}^{x,old} if and only if

  dis(I, R_{j_1}^{x,old}) < φ_1(α_x).   (2.41)

Moreover, if |R_{j_1}^{x,old}| ≤ |R_{j_2}^{x,old}|, it follows by computation that

  dφ_0(x)/dx = (n − |R_{j_2}^{x,old}|) · (|R_{j_2}^{x,old}| − |R_{j_1}^{x,old}|) / (x + n − |R_{j_2}^{x,old}|)² ≥ 0,
  dφ_1(x)/dx = (n − |R_{j_2}^{x,new}|) · (|R_{j_2}^{x,old}| − |R_{j_1}^{x,old}|) / (x + n − |R_{j_2}^{x,old}|)² ≥ 0,

so the functions φ_0(·) and φ_1(·) are nondecreasing on [0, +∞), and

  φ_0(0) = 0,  φ_0(+∞) = lim_{x→+∞} φ_0(x) = |R_{j_2}^{x,old}| − |R_{j_1}^{x,old}|;
  φ_1(0) = dis(I, R_{j_2}^{x,old}) · (n − |R_{j_1}^{x,old}|) / (n − |R_{j_2}^{x,old}|),
  φ_1(+∞) = lim_{x→+∞} φ_1(x) = dis(I, R_{j_2}^{x,old}) + |R_{j_2}^{x,old}| − |R_{j_1}^{x,old}|.

Theorem 2.13 Suppose an input pattern I = (x, x^c) is presented to the fuzzy ART, j_1, j_2 are committed nodes, and the parameter α_x ≈ 0. Then
(i) if I ∈ R_{j_1}^{x,old} ∩ R_{j_2}^{x,old}, then I chooses first the rectangle among R_{j_1}^{x,old}, R_{j_2}^{x,old} with the smaller size;
(ii) if I ∈ R_{j_1}^{x,old} \ R_{j_2}^{x,old}, then I chooses first R_{j_1}^{x,old};
(iii) if I ∉ R_{j_1}^{x,old} ∪ R_{j_2}^{x,old}, then I will first choose R_{j_1}^{x,old} if and only if

  dis(I, R_{j_1}^{x,old}) < φ_1(0),   (2.42)

that is,

  dis(I, R_{j_1}^{x,old}) · (n − |R_{j_2}^{x,old}|) < dis(I, R_{j_2}^{x,old}) · (n − |R_{j_1}^{x,old}|).   (2.43)

Proof. (i) is a direct corollary of Lemma 2.5. (ii) As α_x → 0, t_{j_1}(I) → 1 while t_{j_2}(I) < 1, so R_{j_1}^{x,old} is chosen first. (iii) Since φ_1(·) is continuous at 0, for α_x ≈ 0 the criterion (2.41) becomes dis(I, R_{j_1}^{x,old}) < φ_1(0), which is exactly (2.43). □

Next let us proceed to discuss the choosing order of the input fuzzy pattern I = (x, x^c) by the fuzzy ART when the parameter α_x is 'medium' or 'sufficiently large'.

Theorem 2.14 Suppose the fuzzy pattern I = (x, x^c) is presented to the fuzzy ART, and α_x is 'medium', i.e. 0 < α_x < +∞. Then
(i) if I ∈ R_{j_2}^{x,old} \ R_{j_1}^{x,old}, then I will first choose R_{j_1}^{x,old} if and only if dis(I, R_{j_1}^{x,old}) < φ_0(α_x); moreover 0 < φ_0(α_x) < |R_{j_2}^{x,old}| − |R_{j_1}^{x,old}|;
(ii) if I ∉ R_{j_1}^{x,old} ∪ R_{j_2}^{x,old}, then I will first choose R_{j_1}^{x,old} if and only if dis(I, R_{j_1}^{x,old}) < φ_1(α_x); moreover φ_1(0) < φ_1(α_x) < φ_1(+∞).

By Lemma 2.6, Lemma 2.7 and the monotonicity of φ_0(·) and φ_1(·) we easily obtain the following conclusion when α_x ≈ +∞.

Theorem 2.15 Suppose the fuzzy pattern I = (x, x^c) is presented to the fuzzy ART, and the parameter α_x ≈ +∞. Then
(i) if I ∈ R_{j_2}^{x,old} \ R_{j_1}^{x,old}, then I will first choose R_{j_1}^{x,old} if and only if

  dis(I, R_{j_1}^{x,old}) < |R_{j_2}^{x,old}| − |R_{j_1}^{x,old}|;

(ii) if I ∉ R_{j_1}^{x,old} ∪ R_{j_2}^{x,old}, then I will first choose R_{j_1}^{x,old} if and only if

  dis(I, R_{j_1}^{x,old}) < dis(I, R_{j_2}^{x,old}) + |R_{j_2}^{x,old}| − |R_{j_1}^{x,old}|.   (2.44)
Lin.60 Liu and Li Fuzzy Neural Network Theory and Application the uncommitted nodes by the fuzzy A R T some tentative researches are presented in [14].
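The inter-ART interaction just described — raise \rho^x past the match of the offending node and search again — is the match-tracking control loop. The following is a minimal sketch of that loop, our own illustration rather than the book's algorithm: each committed category is summarized by its choice value T_j, its match degree m_j = |I^x ^ w_j^x| / |I^x|, and its predicted class.

```python
def match_track(categories, rho_x, target, eps=1e-6):
    """Search fuzzy ART_x categories under match tracking.

    categories: list of (T_j, m_j, label) triples, where T_j is the choice
    value, m_j the match degree, and label the class predicted via the map
    field. Returns the index of the first resonating node that predicts
    `target`, or None when the search exhausts all committed nodes (an
    uncommitted node would then be chosen).
    """
    rho = rho_x
    while True:
        # committed nodes passing the vigilance test m_j >= rho
        live = [(T, m, lab, j) for j, (T, m, lab) in enumerate(categories)
                if m >= rho]
        if not live:
            return None
        T, m, lab, j = max(live)   # winner of the F_2^x competition (largest T_j)
        if lab == target:
            return j
        rho = m + eps              # wrong match: raise rho_x just past this node
```

For example, `match_track([(0.9, 0.8, 'A'), (0.7, 0.95, 'B')], 0.5, 'B')` first activates the 'A' node (larger choice value), raises the vigilance above its match degree 0.8, and then resonates on the 'B' node, returning index 1.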

As the respective input patterns of fuzzy ART_x and fuzzy ART_y, I^x = (x, x^c) and I^y = (y, y^c) are two complement codes, where x is a stimulus fuzzy pattern and y is a response fuzzy pattern. Let the j-th connection weight vector from F_2^x down to F_1^x be w_j^x = (w_{j1}^x, ..., w_{j,2n_x}^x), and let w_k^y = (w_{k1}^y, ..., w_{k,2n_y}^y) be the k-th connection weight vector from F_2^y down to F_1^y. In the fuzzy ART_x, the output of layer F_1^x is a^x = (a_1^x, ..., a_{2n_x}^x), and b^x = (b_1^x, ..., b_{2n_x}^x) is the output of layer F_2^x. Similarly, in the fuzzy ART_y, suppose a^y = (a_1^y, ..., a_{2n_y}^y) and b^y = (b_1^y, ..., b_{n_y}^y) are the output patterns of F_1^y and F_2^y, respectively. Also we suppose a^{xy} = (a_1^{xy}, ..., a_{n_y}^{xy}) is an output pattern of the field F^{xy}. The field F^{xy} is called a map field, which accepts the outputs coming from the fuzzy ART_x and the fuzzy ART_y, and w_j^{xy} = (w_{j1}^{xy}, ..., w_{j,n_y}^{xy}) is the j-th connection weight vector of F_2^x to F^{xy}. If in F_2^x the j*-th node is active, where j* means an active node in F_2^x, then its output can be transported to the field F^{xy} through the weight vector w_{j*}^{xy}, which is a prediction of the fuzzy pattern I^x. And w_{j*}^{xy} may be classified into a defined fuzzy pattern class; only when an identical fuzzy pattern class is obtained by fuzzy ART_x and fuzzy ART_y does the prediction succeed.

Map field activation is governed by the activity of the fuzzy ART_x and the fuzzy ART_y, in the following way:

  a^{xy} = b^y ^ w_{j*}^{xy},  if the j*-th F_2^x node is active and F_2^y is active;
  a^{xy} = w_{j*}^{xy},        if the j*-th F_2^x node is active and F_2^y is inactive;
  a^{xy} = b^y,                if F_2^x is inactive and F_2^y is active;
  a^{xy} = 0,                  if F_2^x is inactive and F_2^y is inactive.

Searching match. The vigilance of F^{xy} is \rho, and resonance requires |a^{xy}| = |b^y ^ w_{j*}^{xy}| >= \rho \cdot |b^y|. If |a^{xy}| < \rho \cdot |b^y|, a mis-match between b^y and w_{j*}^{xy} takes place; then we increase \rho^x so that \rho^x \cdot |I^x| > |I^x ^ w_{j*}^x|, and the search procedure is active. When the system accepts an input pattern, the vigilance \rho^x of ART_x equals its minimum value. Through the search procedure of ART_x we can obtain the fact: either there is an active node j* in F_2^x satisfying |a^x| = |I^x ^ w_{j*}^x| >= \rho^x \cdot |I^x|, or there is no such a node, and then F_2^x stops the expressing procedure of the input patterns.

Learning of map field. The connection weight w_j^{xy} of F_2^x -> F^{xy} is trained with the following scheme:
Step 1. Initialize: w_{jk}^{xy}(0) = 1.
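The four activation cases and the map-field vigilance test can be written down directly. The sketch below is our own illustration of those rules (the fuzzy intersection ^ is the componentwise minimum; function names are assumptions):

```python
import numpy as np

def map_field_activity(w_jstar_xy=None, b_y=None):
    """Map field activation a^xy for the four F_2^x / F_2^y activity cases.

    w_jstar_xy: weight vector of the active F_2^x node j* (None if F_2^x inactive).
    b_y:        output pattern of F_2^y (None if F_2^y inactive).
    """
    if w_jstar_xy is not None and b_y is not None:
        return np.minimum(b_y, w_jstar_xy)   # a^xy = b^y ^ w_{j*}^{xy}
    if w_jstar_xy is not None:
        return np.asarray(w_jstar_xy, float) # a^xy = w_{j*}^{xy}
    if b_y is not None:
        return np.asarray(b_y, float)        # a^xy = b^y
    return np.zeros(0)                       # a^xy = 0

def map_field_match(a_xy, b_y, rho):
    """Vigilance test of F^xy: resonance iff |a^xy| >= rho * |b^y|."""
    return float(np.sum(a_xy)) >= rho * float(np.sum(b_y))
```

With one-hot class codes, `map_field_activity([1.0, 0, 0], [1.0, 0, 0])` passes the test at rho = 1 (the prediction agrees with the ART_y category), while `map_field_activity([1.0, 0, 0], [0, 1.0, 0])` yields |a^xy| = 0 and fails it, which is exactly the mis-match that triggers the increase of rho^x.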
