


Optimization of Fuzzy Expert Systems Using Genetic Algorithms and Neural Networks
Christiaan Perneel, Member, IEEE, Jean-Marc Themlin, Jean-Michel Renders, and Marc Acheroy, Member, IEEE
Abstract--In this paper, fuzzy logic theory is used to build a specific decision-making system for heuristic search algorithms. Such algorithms are typically used in expert systems. To improve the performance of the overall system, a set of important parameters of the decision-making system is identified. Two optimization methods for learning the optimum parameters, namely genetic algorithms and gradient-descent techniques based on a neural network formulation of the problem, are used to improve the performance. The decision-making system and both optimization methods are tested on a target recognition system.

DECISION-making systems and expert systems are usually designed to solve complex problems with numerous candidate solutions. The exploration of all possible solutions and uninformed search methods like depth-first or breadth-first search (see, e.g., [1], chapter 2) are often impractical. To guide the expert system toward a solution in a more efficient manner, knowledge has to be provided by experts. This can be done with heuristics when the complex problem can be considered as a graph-search problem in an uncertain environment. Then, well-known heuristic search techniques (see, e.g., [1] and [2]) can be used to find a solution in an efficient way.

To deal with imperfect information in expert systems, three main classes of approaches can be used (see [3], chapter 1). The most classic approach to model uncertainty is the probabilistic one (see, e.g., [4] and [5]). A second approach is based on the Dempster-Shafer theory of evidence [6] (see, e.g., [7]). Bellman and Zadeh [8] and Zadeh [9] introduced the fuzzy logic approach. A survey of the different approaches is made by Prade [10]. These approaches are criticized by Lindley [11] and Cheeseman [12] from a probabilistic viewpoint. A defence of the fuzzy logic approach is made, however, by Zadeh [13] and Kosko [14].

The interest in fuzzy expert systems has grown considerably over the past few years. The fuzzy reasoning approach is motivated by the following advantages: a) it provides an efficient way to cope with imperfect information, especially imprecision in the available knowledge, b) it offers some flexibility in the decision-making process, and c) it
Manuscript received August 12, 1993; revised January 25, 1995. C. Perneel and M. Acheroy are with the Signal and Image Center, Royal Military Academy, B-1040 Brussels, Belgium. J.-M. Themlin is with the Groupe de Physique des États Condensés, Sciences Faculty of Luminy, URA CNRS 783, Case 901, F-13288 Marseille, France. J.-M. Renders is with the Artificial Intelligence Group, Computer Systems Department of Tractebel, B-1200 Brussels, Belgium. IEEE Log Number 9412930.


gives an interesting man/machine interface by simplifying rule extraction from (human) experts and by allowing a simpler a posteriori interpretation of the system reasoning.

The design of fuzzy expert systems, however, relies on a particular modelling of imperfect information, and modelling imperfections usefully and efficiently remains a delicate and sometimes critical task. Moreover, fuzzy reasoning is based on definitions and conventions chosen somewhat arbitrarily. For example, a large choice of membership functions can be used in the definition of the linguistic terms; in the same way, there are several possible definitions of fuzzy operators such as "AND," "OR," "fuzzy implication," and of the "defuzzification" scheme (see, e.g., [15]). The potential user is thus confronted with a variety of choices in the design of a fuzzy expert system for a particular application and does not know the optimum choice in advance. To mention a few interesting contributions, Sugeno and Kang [16] and Sugeno and Yasukawa [17] employ heuristic rules of thumb to identify a good structure of a fuzzy model.

Once a fuzzy expert system has been designed, it depends on a large set of parameters, e.g., the shape of the membership functions, weights, etc. To improve the performance of the overall system (to decrease the computing time and to obtain better global results), the parameters have to be tuned by an appropriate optimization method. To solve a similar problem in the field of the optimization of fuzzy logic controllers, gradient-descent methods are used by, e.g., Nomura et al. [18] and Bersini et al. [19], and genetic algorithms (GA's) by Karr [20], Karr and Gentry [21], and Thrift [22].
The aim of this paper is twofold: a) to build a fuzzy expert system in the field of decision making with imperfect information, solving hierarchical graph-search problems, and to identify a set of important parameters whose automatic tuning should improve the performance of the overall system, and b) to use and to compare two optimization methods for the learning of the optimum parameters, namely GA's and gradient-descent techniques based on a neural network formulation of the problem.

This paper is organized as follows. In Section II, the decision-making problem and the traditional approaches to solve it are developed. A specific decision-making system for heuristic search algorithms, based on the fuzzy logic approach, is presented in Section III. In Section IV, two optimization methods for the learning of the optimum parameters of the decision-making system are presented: after the statement of the optimization problem (Section IV-A), the first optimization method, based on GA's, is described (Section IV-B), and Section IV-C presents the second optimization method, based on the neural network approach. The particular decision-making application is described in Section V, while the results obtained with the described methods are presented in Section VI.

1063-6706/95$04.00 © 1995 IEEE

II. PROBLEM STATEMENT

We consider the following decision-making problem: to find a decision consisting of a sequence of decision elements (or hypotheses) optimizing some criteria in an environment characterized by imperfect information. Let D be a candidate decision consisting of n decision elements d_i

D = (d_1, d_2, ..., d_n),   d_i ∈ D^i

where each decision element d_i belongs to a finite, discrete set D^i. Let 𝒟 denote the discrete set of all the global decisions, so

D ∈ 𝒟 = D^1 × D^2 × ... × D^n,   n_i = |D^i|,   |𝒟| = ∏_{i=1}^{n} n_i

and let L be the function which associates a global rating r with each global decision D

L: 𝒟 → R: D → r = L(D)

with R the space of the possible rating values (mostly a subset of the real numbers). The global rating is a measure of the quality of the decision, taking into account the stochastic nature of the environment. We suppose that the set of heuristics can be chosen in such a way that:
1) L can be defined for each element of 𝒟, and
2) L(D) takes its maximum value at the optimum decision.

As a link between the decision and the imperfect information, a number M of measurements (or observations) are available

m_i: 𝒟 → M_i: D → m_i(D),   i = 1, ..., M

where M_i is the measurement space. Heuristic functions rate the different candidate decisions according to these measurements

h_i: M_i → R: m_i → h_i(m_i),   i = 1, ..., M.

These ratings describe how well (or how likely) a decision (and its associated measurement) fits in with the environment. Each heuristic can be considered as a piece of knowledge, usually coming from an expert, which partially assesses the quality of the decision. The ratings of the M different heuristics

q_i(D) = h_i(m_i(D)),   i = 1, ..., M

are grouped in the vector Q(D) = (q_1(D), q_2(D), ..., q_M(D)) and combined to form a global rating r

r = L(D) = O(Q(D)) = O[h_1(m_1(D)), ..., h_M(m_M(D))]

where O is the combination operator across all heuristics.

The initial decision-making problem can now be formalized as an optimization problem: to find the maximum of L(D). In the remainder of this paper, the decision-making problem will be considered as a hierarchical graph-searching problem: the graph consists of several nodes grouped by levels, each node of level k representing a partial decision

d_{1→k} = (d_1, d_2, ..., d_k)

so that each global decision D = d_{1→n} can be associated with a specific path in the decision graph.

Let us further assume that knowledge is revealed partially at each level in an incremental fashion. In other words, Q can be decomposed in the following manner

Q(D) = (q_11(d_{1→1}), ..., q_1M_1(d_{1→1}), q_21(d_{1→2}), ..., q_2M_2(d_{1→2}), ..., q_n1(d_{1→n}), ..., q_nM_n(d_{1→n}))

with ∑_{i=1}^{n} M_i = M and M_i the number of heuristics of level i. Each q_ij is the individual (or local) rating of heuristic j at level i and depends only on the partial decision d_{1→i}. At each level i, it is assumed that the individual heuristic ratings q_ij can be combined into a partial rating L_i(d_{1→i}) which represents the quality of the partial decision up to this level, given the partial knowledge (heuristics) of this level. The global rating can then be expressed as a combination of the partial ratings

r = L(D) = L[L_1(d_{1→1}), L_2(d_{1→2}), ..., L_n(d_{1→n})].

Without these assumptions, only the terminal nodes (the nodes of the last level n) are given a rating and there is no means of evaluating the quality of a partial decision; classic graph-search strategies such as A* may then be well adapted in certain cases (see, e.g., [1] and [2]). The previous assumptions, however, allow more efficient strategies which guide the search in a smarter way by exploiting the partial information (partial rating) available at each node. In particular, Branch and Bound methods can be applied, provided that, for every node of level k, it is possible to find an upper limit L_sup(d_{1→k}) to the global rating of all the decisions containing the partial decision represented by the node considered.

Let {N_1, N_2, ..., N_L} be the set (the "list") of nodes which are still to be developed into their successors. Initially, these nodes are formed by the n_1 possible decision elements of level 1: {d_1^1, ..., d_1^{n_1}}. During the search, we consider the node N_i (or (d_{1→k})) for which L_sup(d_{1→k}) is greater than or equal to the upper limit L_sup of all other nodes in the list. This particular node is developed into its successors (d_{1→k}, d_{k+1}^1), ..., (d_{1→k}, d_{k+1}^{n_{k+1}}), which are evaluated. When a final node (a complete decision) is evaluated and its (global) rating is greater than or equal to the (partial) ratings of all other nodes in the list, the search is halted and this final node is guaranteed to be an optimum decision.
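The Branch and Bound search described above can be illustrated with a short program. This is a minimal sketch, not the authors' implementation: the layered candidate sets, the additive toy rating, and the best-case bound mentioned in the usage note are assumptions introduced for the example.

```python
import heapq

def branch_and_bound(levels, l_sup, rating):
    """Best-first Branch and Bound over a layered decision graph.

    levels : levels[k] lists the candidate decision elements of level k+1
    l_sup  : upper bound on the global rating of any completion of a
             partial decision (must never underestimate it)
    rating : exact global rating, defined for complete decisions
    """
    n = len(levels)
    key = lambda d: rating(d) if len(d) == n else l_sup(d)
    # Max-heap via negated keys: the top entry always carries the largest
    # upper bound among the nodes still to be developed.
    heap = [(-key((d,)), (d,)) for d in levels[0]]
    heapq.heapify(heap)
    while heap:
        _, partial = heapq.heappop(heap)
        if len(partial) == n:
            # A complete decision whose rating dominates every remaining
            # upper bound is guaranteed optimal, so the search halts.
            return partial
        for d in levels[len(partial)]:          # develop the node
            child = partial + (d,)
            heapq.heappush(heap, (-key(child), child))
    return None
```

On a toy two-level problem with an additive rating and the obvious best-case bound (partial sum plus the maximum attainable value of each undecided level), the procedure returns the optimum without necessarily developing every node.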

III. DECISION MAKING IN AN UNCERTAIN ENVIRONMENT WITH FUZZY LOGIC

To emphasize the structure and the parameters of the global rating function

L(d_{1→n}) = L[L_1(d_{1→1}), L_2(d_{1→2}), ..., L_n(d_{1→n})] = O(Q(d_{1→n}))

we derive in this section an expression of the global rating function in the framework of fuzzy logic (see, e.g., [8], [9], [14], and [23]). This global rating function should logically reinforce candidate solutions whose heuristics give mostly a good rating and should disadvantage candidate solutions whose heuristics give mostly a bad rating. Furthermore, there must be a smooth transition in the rating of the different candidate solutions.

To build such a global rating function L(D), we start with the design of the rating function corresponding to one heuristic h. Starting from some measurements or observations, each heuristic rates a candidate decision, describing how well it fits the hypothesis that the candidate is the solution to the problem. The measurements or observations are assumed to provide crisp values. Very often, the range of the measured variable is more important than its exact value; the range of the space of the measurement values can therefore be divided into a number of classes, each characterized by a membership function and a linguistic term (e.g., good, average, and bad; see Fig. 1). Each linguistic term is a fuzzy set which designates a category partially qualifying a candidate solution in the sense of the considered heuristic; the transitions between these categories are often blurred. Given a measurement or observation m, the heuristic returns a fuzzy vector h^T(m) of size T, where T is the number of different linguistic terms. The elements of this fuzzy vector correspond to the degrees of membership with which the measurement or observation belongs to the different classes. The set of heuristics thus forms a knowledge base of fuzzy rules whose antecedents are related to the measurements or observations and whose consequent part determines the fuzzy (partial) quality of the decision.

These T membership values must be combined into one unique nonfuzzy value, which is the rating given by the heuristic. This transformation of the fuzzy vector h^T(m) into a unique nonfuzzy rating value is called defuzzification. It is usually done by assigning to each linguistic term t (e.g., bad, average, and good) a rating membership function g_t(r), where r is the nonfuzzy rating value. An example of the rating membership functions for T = 3 is given in Fig. 2. Assuming that the rating values must belong to the interval [K_min, K_max], these functions obey the following rules: g_t: R → [0, 1], with g_t(K_min) = 1 if t is the "worst degree of fit" linguistic term and g_t(K_max) = 1 if t is the "best degree of fit" linguistic term; in between, the rating must increase when the linguistic term expresses an improvement and must decrease when the linguistic term expresses a worsening.

Given a heuristic h with its membership values μ_t(m), we can define the cumulated rating membership function g_c(r), using, e.g., the min operator as fuzzy inference rule

g_c(r) = max_t min(g_t(r), μ_t(m)),   ∀r

and the well-known fuzzy centroid defuzzification scheme can be chosen to obtain a nonfuzzy rating q for the heuristic h

q = ∫ r g_c(r) dr / ∫ g_c(r) dr.

We have explained how the heuristics rate the candidate decisions based on some measurements or observations. Now, the individual heuristic ratings must be combined for each level: the partial or level rating L_i(d_{1→i}) is the combination of the M_i heuristic ratings q_ij. Different methods (e.g., minimum level combination, weighted sum combination, centroid level combination) can be used. To avoid the too pessimistic (see [15]) classical combination rule, minimum level combination, which consists in taking the minimum heuristic rating as the level rating (see, e.g., [8] and [23]), the weighted sum approach will be used, where a weight w_ij is given to each heuristic according to its reliability

L_i(d_{1→i}) = ∑_j w_ij q_ij / ∑_j w_ij.

To get a rating of the partial decision (d_{1→i}) up to level i, this rating is in turn a combination of the i different level ratings L_1(d_{1→1}) (= L_1(d_1)), L_2(d_{1→2}), ..., L_i(d_{1→i}). Using again a weighted sum, the rating up to level i will be

L_{1→i}(d_{1→i}) = ∑_{k=1}^{i} β_k L_k(d_{1→k}) / ∑_{k=1}^{i} β_k

with β_k the weight corresponding to level k. To compare partial decisions at every stage of the search, and in particular two partial decisions at different levels, the rating of the decision at the lower level must be extrapolated up to the highest level, because otherwise the partial decisions at the lower levels might be advantaged with respect to the decisions at the higher levels.
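The per-heuristic rating pipeline (membership values, min inference, centroid defuzzification) and the weighted-sum level combination can be sketched as follows. This is an illustrative sketch only: the triangular rating terms, the rating interval [0, 1], and the sample weights are assumptions, whereas the system described in this paper uses trapezoidal functions on its own interval [K_min, K_max].

```python
import numpy as np

def tri(r, a, b, c):
    """Triangular membership function on the rating axis."""
    return np.maximum(np.minimum((r - a) / (b - a), (c - r) / (c - b)), 0.0)

def defuzzify(mu, r_min=0.0, r_max=1.0, n=1001):
    """Centroid defuzzification of a T = 3 fuzzy vector mu = (bad, average, good).

    Each linguistic term t clips its rating membership g_t at mu[t]
    (min inference); the clipped curves are merged with max and the
    centroid of the resulting cumulated function g_c is returned.
    """
    r = np.linspace(r_min, r_max, n)
    terms = [tri(r, -0.5, 0.0, 0.5),    # "bad"    peaks at r_min
             tri(r, 0.0, 0.5, 1.0),     # "average"
             tri(r, 0.5, 1.0, 1.5)]     # "good"   peaks at r_max
    g_c = np.zeros_like(r)
    for g_t, m in zip(terms, mu):
        g_c = np.maximum(g_c, np.minimum(g_t, m))
    if g_c.sum() == 0.0:
        return 0.5 * (r_min + r_max)    # degenerate all-zero vector
    return float((r * g_c).sum() / g_c.sum())

def level_rating(q, w):
    """Weighted-sum combination of the heuristic ratings of one level."""
    return sum(wi * qi for wi, qi in zip(w, q)) / sum(w)
```

For instance, a fuzzy vector loaded entirely on the "good" term defuzzifies to a rating near the top of the interval, while an all-"bad" vector lands near the bottom.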

For a partial decision d_{1→p} with p < n, the global rating can be written as

L(d_{1→p}) = [∑_{k=1}^{p} β_k L_k(d_{1→k}) + ∑_{k=p+1}^{n} β_k L̂_k] / ∑_{k=1}^{n} β_k

where the estimated level ratings L̂_k are computed from estimated membership values μ̂_kj for the heuristics of the levels p+1 up to the highest level. Different ways can be followed to obtain these estimated values. The best case approach consists in taking the maximum value for the linguistic term corresponding to the best degree of fit

μ̂_kj = (0, ..., 0, 1).

The best case approach is a cautious one: it provides an upper limit to the global rating of all partial decisions. This upper limit rating can be used, e.g., with the Branch and Bound search method (see, e.g., [1]). If a less cautious approach is desired, e.g., with the A-algorithm (see, e.g., [1]), one can choose an arbitrary value for the estimated memberships, taking into account that several such estimations have to be done.¹

¹A complete and detailed description of the mathematical background of this method can be found in Perneel et al. [24].

Fig. 1. Example of heuristic fuzzy membership functions (T = 3: bad, average, and good).

Fig. 2. An example of rating membership functions with T = 3 (breakpoints τ_0, ..., τ_6 between K_min and K_max).

IV. OPTIMIZATION OF THE FUZZY EXPERT SYSTEM

A. Introduction

In the decision-making system described above, the heuristic set plays a crucial role and strongly affects the quality of the decision adopted by the search method. It is not an easy task, however, to determine an optimal set of heuristics, especially as the heuristics often rely on implicit representations of experts and, consequently, on uncertain, imprecise, or inconsistent knowledge. One can usually isolate a number of heuristic parameters modelling the shape of the heuristic membership functions or describing the way of combining heuristics (operator O). From the fuzzy logic approach described in Section III, we can extract the parameter vector

θ = (β_1, ..., β_n, w_11, ..., w_nM_n, θ_H11, ..., θ_HnM_n)

with β_i the weight corresponding to level i, w_ij the weight associated with heuristic j of level i, and θ_Hij the vector which describes the shape of the T membership functions of heuristic j of level i. It is then possible to express the global measure of quality, or global rating, as a function of a decision D and of the parameter vector θ

r = L(D, θ).
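The best case extrapolation admits a very small numeric sketch (the weights and level ratings below are invented for the example): undecided levels are credited with the best possible rating K_max, so the result can only stay equal or decrease as real level ratings come in, which is exactly the property a Branch and Bound upper bound needs.

```python
def l_sup(level_ratings, beta, k_max=1.0):
    """Best-case extrapolated rating of a partial decision.

    level_ratings : the level ratings L_1 .. L_p decided so far
    beta          : the level weights beta_1 .. beta_n for ALL n levels
    k_max         : best possible rating, assumed for the undecided levels
    """
    p = len(level_ratings)
    known = sum(b * l for b, l in zip(beta, level_ratings))
    unknown = sum(beta[p:]) * k_max     # credit undecided levels best-case
    return (known + unknown) / sum(beta)
```

With β = (1, 1, 2) and a first-level rating of 0.5, the bound starts at 0.875 and never increases as the remaining levels are filled in with ratings of at most k_max.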

In this section, two different methods are used to optimize the parameter set of the expert system, namely GA's and gradient-descent techniques. When adopting such an approach, we are faced with two interwoven optimization problems. The first (inner) problem was considered in Section II: it consists of finding a good decision D, preferably the decision D* which maximizes the global rating (or global measure of the quality) L

L(D*, θ) = max_D L(D, θ).

As the set of possible decisions is finite and discrete (for any θ), this problem can be solved by the fuzzy logic method described in Section III, with the best case approach (Branch and Bound). The second (outer) problem consists of finding the parameter vector θ* which minimizes some kind of error E(θ) between the optimum decisions D*(θ), resulting from the solution of the first problem for a given parameter set θ, and reference (desired) decisions given by a teacher for a set of particular decision problems constituting a learning database

E(θ*) = min_θ E(θ)

with the error function E(θ) defined as

E(θ) = ∑_i ||D_i*(θ) - D_i^teacher||_a    (1)

where D_i^teacher is the desired decision for the ith particular decision problem of the learning database, D_i*(θ) is the solution of the inner optimization problem related to the ith particular decision problem, using θ as heuristic parameters, and ||X - Y||_a denotes some user-defined distance between X and Y. It is hoped that solving the whole problem with a limited learning database will result in heuristic parameters well fitted to a larger number of decision problems (generalization capability).

Unfortunately, the landscape of the global error function is composed of "terraces" or flat plateaus. This peculiar landscape jeopardizes the applicability of traditional methods relying on hill-climbing principles. For example, steepest-descent methods (see, e.g., [25]) will fail because the derivatives are generally zero (except on the terrace boundaries). The simplex method (see [26]) will also fail as soon as all points of the polyhedron lie on the same terrace. On the contrary, this kind of landscape does not constitute an obstacle to the robustness of GA's: it has indeed been shown that a GA can solve similar optimization problems, such as the optimization of the third de Jong function (see [27], chapter 4). The GA will therefore be studied to solve the outer optimization problem.

B. Optimization Using GA's

If the first approximation of the parameter set θ, usually derived from rules of thumb or given by experts, is not satisfactory, these parameters must be properly set by statistics on a large number of experiments. Since this task is very time consuming in practice, it is preferable to have recourse to some automatic tuning of the heuristic parameters.

1) The GA: The GA method (see [27]) is an iterative search algorithm based on an analogy with the process of natural selection (Darwinism) and evolutionary genetics. The search aims to optimize a user-defined function called the fitness function. To perform this task, a GA maintains a "population" of candidate points, called "individuals," spread over the entire search space. At each iteration, called a "generation," a new population is created. This new generation generally consists of individuals which fit better than the previous ones into the external environment, as represented by the fitness function. As the population iterates through successive generations, the individuals tend in general toward the optimum of the fitness function.

A GA requires from the environment only an objective function measuring the fitness score of each individual, and no other information nor assumptions such as derivatives and differentiability. Three major differences from classical optimization methods are to be noted:
- The GA works in parallel on a number of search points (potential solutions) and not on a unique solution, which means that the search method is not local in scope but rather global over the search space.
- To generate a new population on the basis of a previous one, a GA performs three steps: it evaluates the fitness score of each individual of the old population; it selects individuals on the basis of their fitness score; and it recombines these selected individuals using "genetic operators" such as mutation and crossover, which can be considered, from an algorithmic point of view, as a means to change the current solutions locally and to combine them.
- Both selection and combination steps are performed by using probability rules rather than deterministic ones, to maintain a globally exploratory search.

What makes a GA attractive is its ability to accumulate information about an initially unknown search space and to exploit this knowledge to guide the subsequent search into useful subspaces. The fundamental implicit mechanism underlying this search consists of the combination of high-performance "building blocks" discovered during past trials.

2) GA and the Optimization of the Fuzzy Expert System: To solve the parameter tuning problem with a GA, a population of individuals is formed. Each individual (or chromosome) consists of a particular parameter vector θ_i, grouped in the population P = {θ_1, θ_2, ..., θ_μ}. The cost function of the individual θ_i is taken as the error function E(θ_i) defined by (1). The chromosome length depends on the number of levels, the number of heuristics, and the number of parameters describing the set of heuristic membership functions. For the membership function example of Fig. 1, θ_H would be (m_1, m_2, ..., m_7). Fig. 3 outlines the principle of the method.
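A minimal generational GA of the kind described here is sketched below, with fitness-proportional (roulette-wheel) selection, one-point crossover, gene-wise mutation, and an optional elitism strategy. It is a toy illustration, not the authors' implementation; the population size, generation count, and mutation probability are arbitrary defaults rather than the values of Table IV.

```python
import random

def ga(fitness, n_genes, alleles, pop_size=20, generations=60,
       p_mut=0.05, elitist=True, rng=None):
    """Minimal generational GA over chromosomes of discrete allele values.

    fitness must return a positive score (roulette-wheel selection);
    n_genes must be at least 2 (one-point crossover needs a cut point).
    """
    rng = rng or random.Random(0)
    pop = [[rng.choice(alleles) for _ in range(n_genes)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        total = sum(scores)

        def select():
            # Fitness-proportional (roulette-wheel) selection.
            x = rng.uniform(0, total)
            for ind, s in zip(pop, scores):
                x -= s
                if x <= 0:
                    return ind
            return pop[-1]

        nxt = []
        while len(nxt) < pop_size:
            a, b = select(), select()
            cut = rng.randrange(1, n_genes)        # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_genes):               # gene-wise mutation
                if rng.random() < p_mut:
                    child[i] = rng.choice(alleles)
            nxt.append(child)
        pop = nxt
        cand = max(pop, key=fitness)
        if fitness(cand) > fitness(best):
            best = cand
        if elitist:
            pop[0] = best[:]                       # elitism strategy
    return best
```

Because selection and recombination are probabilistic, only the trend of the best individual is meaningful; the derivative-free nature of the method is what makes it usable on the terraced error landscape discussed above.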

C. Optimization Using the Gradient-Descent Method Formulated as a Neural Network Problem

For tuning the parameters, a neural-like (NN) approach can be used instead of the GA. Indeed, the optimization problem can be seen as the training of a neural net in such a way that the network represents a global rating function L(D) whose maximum value is obtained when the true optimum decision is provided as input of the network. The NN structure is designed in such a way that the adjustable connection weights are nothing else than the parameter vector θ. To transform the fuzzy reasoning scheme presented in Section III into a network (this is always possible; see, e.g., [28] and [29]), we rewrite the final equation of Section III in terms of a transformation module Q followed by the combination of the heuristic ratings. The transformation module Q implements both the measurements and the evaluation of the corresponding heuristic membership values; its output consists of the vector of membership values

(μ^1(m_1(D)), ..., μ^T(m_1(D)), ..., μ^1(m_M(D)), ..., μ^T(m_M(D)))

which feeds the fuzzy reasoning system. In this work, we considered only the adjustment of the heuristic weights (which do not intervene in the transformation module Q, but only in the combination of the heuristic ratings). One may consider that the transformation module Q contains tunable parameters as well, the module then becoming part of an "extended" neural network; indeed, it could be necessary in certain cases to refine heuristic parameters such as those describing the shape of the heuristic membership functions.

The NN is trained to realize a global rating function L(D) as close as possible to a desired global rating L_teacher(D), which is chosen to be maximum for the true optimum decision of each problem and to associate lower global ratings with poorer decisions. The learning database is made of several (C) decision problems P_k, k = 1, 2, ..., C, a group of p_k decisions D for each problem P_k, and the desired global rating L_teacher(D) for each decision D. The structure of the database, including the transformation module Q, is illustrated in Table I, where the data used for training the NN (inputs and desired outputs) are emphasized. The optimization consists in minimizing the cost function, or error, between the network output and the desired ratings. Once the NN is trained, the optimized weights (or parameters) are extracted and can be used in the selected search methods to solve the decision-making problem (inner problem). A schematic diagram sketching the neural-like approach is given in Fig. 4. The saturation function in the central part of Fig. 5 is not explicitly calculated but results from the previous formula, which gives a value limited to [K_min, K_max].

TABLE I. DATABASE FOR THE NEURAL APPROACH (decision problems, decisions D, heuristic membership values, and desired ratings).

Fig. 3. Schematic diagram of the heuristic parameter optimization method using GA's.

Fig. 4. Schematic diagram of the heuristic parameter optimization method using the neural network approach.
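For the case actually treated in this work, where only the heuristic weights are adjusted, the tunable part of the network reduces to a normalized weighted sum of precomputed heuristic ratings, and the gradient step can be written directly. The sketch below is a simplified stand-in (a single combination stage, no saturation, analytic gradient instead of generalized backpropagation), and the data in the usage note are toy values.

```python
import numpy as np

def train_weights(Q, targets, eta=0.1, epochs=500):
    """Gradient-descent fit of heuristic weights w so that the normalized
    weighted sum L(D; w) = sum_j w_j q_j / sum_j w_j matches the teacher
    ratings over a database of decisions.

    Q       : (n_decisions, n_heuristics) array of heuristic ratings q_ij
    targets : desired global rating L_teacher for each decision
    """
    w = np.ones(Q.shape[1])
    for _ in range(epochs):
        s = w.sum()
        out = Q @ w / s                     # network output L(D; w)
        err = out - targets
        # dL/dw_j = (q_j - L) / sum(w) for the normalized weighted sum,
        # so dE/dw_j = 2 * sum_i err_i * (q_ij - out_i) / s
        grad = 2.0 * ((Q - out[:, None]) / s).T @ err
        w -= eta * grad
        w = np.clip(w, 1e-6, None)          # keep the weights positive
    return w
```

With ratings generated by a known weight vector, a few hundred steps recover weights whose normalized combination reproduces the teacher ratings.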

The training of the neural network is performed by using a generalized backpropagation algorithm (see, e.g., [30], chapter 5). It aims to minimize the global error with respect to θ

E(θ) = ∑_{i=1}^{C} ∑_{j=1}^{p_i} [L(D_ij, θ) - L_teacher(D_ij)]²    (3)

by a gradient descent

Δθ = -η ∂E(θ)/∂θ   with   ∂E(θ)/∂θ = 2 ∑_{i=1}^{C} ∑_{j=1}^{p_i} [L(D_ij, θ) - L_teacher(D_ij)] ∂L(D_ij, θ)/∂θ

where η (0 < η < 1) is a learning rate.

Fig. 5. Schematic representation of the tunable part of the neural network.

V. DESCRIPTION OF THE APPLICATION

Both methods were tested on a target recognition system. The recognition problem consists in identifying armored vehicles from short-distance two-dimensional infrared images. The major difficulty lies in the lack of knowledge of the position and orientation of the vehicle with respect to the camera. Indeed, it is completely impracticable to implement a close matching of the image of the vehicle with a template, given all the possible positions and orientations of the vehicle, and also given all the possible vehicles. Therefore, the problem is divided into two subproblems: the first is the computation of the orientation and position of the vehicle, based on a crude model of the vehicle; the second is the identification of the vehicle in a reference position and orientation. An expert system was designed to accomplish these two tasks.

The first task, position and orientation detection, consists of putting a system of three axes {X, Y, Z} on the image of the vehicle according to predefined conventions. In our application, the conventions are the following:
- The X axis is the line on the side of the vehicle between the wheel train and the ground.
- The Y axis is the line on the front or on the rear of the vehicle between the front or rear wheels and the ground.
- The Z axis is the vertical direction of the vehicle.
- The origin of the axes system is, by convention, at the extremity of the X axis on the engine side if the image gives a side view of the vehicle. If the image gives a front or rear view, the origin is on the left of the Y axis when the engine is at the front of the image, and on the right of the Y axis when the engine is at the rear of the image.

Lengths are normalized on the wheel train height if the image gives a side view of the vehicle, or on the distance between the floor of the vehicle body and the ground if the image gives a front or rear view of the vehicle. An illustration of these conventions can be found in Fig. 6.

The position and orientation detection is divided into five (N = 5) subtasks, corresponding to five different levels of increasing knowledge of the position and orientation of the vehicle.

Level 1: Determination of one principal direction out of the eight most important orientations found on the image. This principal orientation is either the direction of the X axis if the image is a side view, or the direction of the Y axis if the image is a front or rear view.

Once the first task of detecting the position and orientation of the vehicle is accomplished. the identification of the vehicle. it is sufficient to verify that the templates match the corresponding areas on the image. will be described in this paper. each gene w!~ of individual k taking 20 discrete allele values (integers from 1-20). 6. The norm IIX . such as number of wheels. For each of these images. VI. B. The chromosome length is M (21). Trape- Yes 0.PERNEEL er al. 1. Three linguistic terms have been selected: good. The location and the shape of these characteristic details is known in advance. or above the Y-line importance of the orientation (global and local] comparison with the orientation Y a r i s 90° comparison with the orientation horizon 90° importance of the orientation relative orientation of the third axis with the first one relative orientation of the third axis with the vertical one Fig. System Tuned with GA's We chose to limit the parameter vector 8 to e = (w11 I ' ' ' I %A{. TABLE 111 POSITION AND ORIEUTATION DETECTION RESULTS (MANUAL TUNING) Orientation first axis ( X or U) Orientation second XIS ( Z ) Orientation third axis (U o r X) &ition of the coordinate system 1 S i i% I 96 3% I 45 2% I 80. Level 5: Determination of the third axis from 16 candidates.0 . either the Y axis if it is a side view. These results are obtained with a manually-tuned system. Table I1 gives some details about the heuristics of the expert system. * ( e . are used as rating functions. etc. On Fig. A set of U different images Ij is considered as learning database.05 zoidal type rating functions. )= f(IjIei) found by solving the inner problem. 
Level 2: Determination, out of two sets of lines (a set of X-candidates and a set of Y-candidates), of the line of the X axis if it is a side view, or the line of the Y axis if it is a front or rear view. At this level, the expert system has to decide whether it is dealing with a side or front/rear view, and it also chooses the correct X or Y line.

Level 3: Determination of the position of the origin if the image is a side view, and of the engine position if the image is a front/rear view.

Level 4: Determination of the Z axis from nine candidates.

Level 5: Determination of the third axis from 16 candidates: either the Y axis if it is a side view, or the X axis if it is a front or rear view.

The size of the solution space is 69 120.

Once the position and orientation of the vehicle are known, it remains to identify the type of vehicle. This is done by defining characteristic details for each vehicle to be recognized, such as the number of wheels. The location and the shape of these characteristic details are known in advance, so templates can be created with their location on the vehicle specified. Since the position and orientation of the vehicle are known, the templates can be matched by pattern recognition techniques such as cross-correlation or neural networks (cf. [32] and [33]).

Fig. 6. Axes conventions.

VI. RESULTS

The fuzzy reasoning method described in Section II is implemented in the automatic target recognition system for position and orientation detection. The best-case value approach is used (Branch and Bound). Trapezoidal-type rating functions are used as rating membership functions; three linguistic terms have been selected: good, average, and bad. The overlapping factor is 50% for all heuristics and rating membership functions.

A. Manually Tuned Fuzzy Expert System

The results for a database containing 135 infrared images of eight different vehicles in different positions are shown in Table III. These results are obtained with a manually-tuned system. Table II gives some details about the heuristics of the expert system.

TABLE III
POSITION AND ORIENTATION DETECTION RESULTS (MANUAL TUNING)
(success rates for: orientation of the first axis (X or Y); orientation of the second axis (Z); orientation of the third axis (Y or X); position of the coordinate system)

B. System Tuned with GA's

We chose to limit the parameter vector θ to θ = (w_1, ..., w_M); the chromosome length is M (21), the weights taking discrete values with an interval of 0.05. For each learning image I_j, the desired axis system D_teacher has been determined, to compare it with the coordinate system D*(θ) = f(I_j; θ) found by solving the inner problem. The numerical values used in our application are given in Table IV.

TABLE IV
NUMERICAL VALUES USED FOR THE OPTIMIZATION WITH GA
Number of individuals (p): 40
Number of learning images (U): 46
Probability of mutation: 0.05
Selection pressure: -
Elitism strategy: Yes
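The trapezoidal rating membership functions can be sketched as follows. The placement of the three terms (bad, average, good) on a normalized [0, 1] rating scale and the exact breakpoints are illustrative assumptions, chosen so that adjacent terms overlap by 50% of each ramp and their memberships sum to one on the overlap.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear ramps."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical placement of the three linguistic terms on [0, 1].
def bad(x):
    return trapezoid(x, -1.0, 0.0, 0.25, 0.5)

def average(x):
    return trapezoid(x, 0.25, 0.5, 0.5, 0.75)

def good(x):
    return trapezoid(x, 0.5, 0.75, 1.0, 1.5)
```

With these breakpoints, a rating of 0.375 belongs half to "bad" and half to "average", which is the intended effect of the 50% overlapping factor.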
Note that the knowledge of two axes is sufficient to perform the second phase of the recognition problem.

TABLE II
HEURISTIC DESCRIPTION OF THE RECOGNITION SYSTEM
- importance of the orientation (see Perneel et al. [31])
- rejects orientations near the orientation horizon + 90°
- favours the orientations of candidate wheels (collinear regions)
- length measure of the line
- favours X-candidates if candidate wheels have been found for the orientation
- rejects Y-candidates if candidate wheels have been found for the orientation
- measure of the length-height ratio
- measure of the quantity of white areas below the line
- measure of the gradient of the line
- measure of the quantity of white areas above the line
- comparison of the relative position of the lines
- measure of the relative position of the wheels with respect to the line
- measure of the absolute position of the line
- measure of gray-level statistics at the outer points of the line
- comparison of the orientation of the line with the chosen orientation
- measure of the quantity of white areas at the outer points of the X-line, or above the Y-line
- importance of the orientation (global and local)
- comparison with the orientation Y axis + 90°
- comparison with the orientation horizon + 90°
- importance of the orientation
- relative orientation of the third axis with the first one
- relative orientation of the third axis with the vertical one

The tuning of these parameters was based on common sense, so a learning database was not necessary.
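The best-case value search over the hierarchical decision levels can be sketched as follows. As an assumption for illustration only, the global rating of a (partial) decision is taken here as the min of its level ratings, so that a partial rating is an optimistic bound on any completion; the paper's actual fuzzy combination of heuristic ratings (Section II) is more elaborate.

```python
import heapq

def branch_and_bound(level_ratings):
    """Best-first Branch and Bound on a tree of decision levels.
    level_ratings[k][i] is the rating of candidate i at level k."""
    n_levels = len(level_ratings)
    heap = [(-1.0, ())]                    # max-heap via negated bounds
    while heap:
        neg_bound, partial = heapq.heappop(heap)
        if len(partial) == n_levels:
            # The first complete decision popped is optimal: its exact rating
            # is at least every remaining optimistic bound.
            return partial, -neg_bound
        k = len(partial)
        for i, r in enumerate(level_ratings[k]):
            bound = min(-neg_bound, r)     # min can only decrease: admissible
            heapq.heappush(heap, (-bound, partial + (i,)))

# Toy three-level problem with 2, 3, and 2 candidates per level.
ratings = [
    [0.9, 0.4],
    [0.7, 0.8, 0.2],
    [0.6, 0.95],
]
best, value = branch_and_bound(ratings)    # best == (0, 1, 1), value == 0.8
```

Because the bound is optimistic, whole subtrees whose bound falls below an already-found complete decision are never expanded, which is what keeps the number of tested nodes far below the total solution count.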

The advantage of this optimization method is its simplicity. To obtain a representative database, the learning database contains 46 images of eight different vehicles in different positions, under different light environments (different adjustments of the IR camera). In Fig. 7, the cost of the best individual (lower curve) is represented as well as the mean cost (upper curve) of the population.

Fig. 7. Fitness of best individual and mean fitness.

It has to be mentioned that this optimization method is time consuming: solving one problem needs on the average about 15 minutes on a workstation HP425 (64 Mb RAM). To optimize the fuzzy system (without any image-processing job, since all the image processing work was solved previously), the problem of putting a coordinate system on a vehicle is solved 368 000 times (46 images, 40 individuals, and 200 generations) to tune the different parameters; three weeks of computing time was needed on the same workstation. This high computing time is not a real disadvantage of this method, because the optimization has to be done only once.

After the learning process, the new weights were tested on a larger database of 135 different images (containing the learning database). The results are compared with those obtained for the manually tuned weights; they are given in Tables V and VI. Table V expresses the percentage of same, better, and worse results obtained with the new weights. A better result is obtained when at least one of the orientations or positions of an axis is more exact; a worse result is obtained when at least one of the orientations or positions of an axis is less exact. Table VI shows the improvement in more detail.

TABLE V
GLOBAL COMPARISON OF THE RESULTS

TABLE VI
POSITION AND ORIENTATION DETECTION RESULTS

C. System Tuned with Neural Network Approach

Instead of searching the parameter vector θ with GA's, we use the neural-like method with gradient descent to determine the parameter vector. Although the application is the same, the strategies adopted for the problem are completely different: the problem as seen by GA's is a nested optimization problem, in which the role of the GA is to perform the outer optimization on the basis of a distance criterion (see Section III-B), while the problem as seen by neural networks is the learning of a mapping L(D) that approximates the desired rating L_teacher(D) (as judged by the teacher). This method of solving the decision-making problem is explained in detail in the Appendix.

The function L_teacher(D), schematically represented in Fig. 8, equals one if D is a correct decision, decreases with the distance ||D − D_teacher|| when this distance is smaller than a threshold D_max, and is zero otherwise. Another choice, which corresponds to some kind of classification, could also be used (see Fig. 9).

Fig. 8. Schematic representation of the function L_teacher(D).

The learning database is made of 69 images (eight vehicles in different positions) where the coordinate system of an armored vehicle has to be determined (see Table I). For each of these images, a group of 20 decisions is provided, containing about 50% good examples and 50% bad examples. To reduce the computing time, we have used a part of the final population given by a GA solving the decision-making problem related to the same image; this database has been used previously (see Section V-B) for both methods. The learning is performed in batch mode: one iteration consists of the presentation of 69 * 20 = 1380 patterns, followed by one generalized backpropagation step. The learning rate was chosen to be η = 0.01 to ensure stability.
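One possible L_teacher can be sketched as follows; the linear fall-off between the correct decision and the threshold D_max is an assumption for illustration, since the paper only shows the shape schematically.

```python
import numpy as np

def l_teacher(d, d_teacher, d_max):
    """Desired rating: 1 for the correct decision, decreasing with the
    distance ||D - D_teacher||, and 0 beyond the threshold D_max."""
    dist = np.linalg.norm(np.asarray(d, float) - np.asarray(d_teacher, float))
    return max(0.0, 1.0 - dist / d_max)
```

With such a target, decisions close to the teacher's axis system still receive partial credit, which gives the gradient descent a useful error signal on near-miss decisions.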

Fig. 9. Schematic representation of another function L_teacher(D).

After the learning phase, the optimum weights found by generalized backpropagation were tested on a larger database of 135 different images (including the learning database), using the fuzzy Branch and Bound method for solving the decision problem. The results are compared in Table VII with those obtained for the manually-tuned weights.

TABLE VII
GLOBAL COMPARISON OF THE RESULTS

Fig. 10. Learning curve of the network.

Fig. 10 shows that, after a few iterations, the mean relative error levels off. It is observed that the neural-like tuning outperforms the manual tuning. The performance of the NN method, however, does not achieve the performance level of the tuning with GA's. Three reasons can be set forth to explain this inferiority:

1) The difficulty of choosing an adequate learning database. For example, one could choose the learning patterns at random. This generally results in a poor discrimination among good decisions, since the number of learning patterns representing good decisions (close to the maximum of L_teacher) is insufficient. On the contrary, choosing only the learning patterns among good candidate decisions would result in the penalty of some heuristics which could nevertheless be necessary to discriminate decisions of poor quality. For the two possibilities mentioned before, the obtained results were very poor. To combine both possibilities, i.e., to have enough good examples and to take some examples at random, the final population given by a GA solving the decision-making problem was used (see the Appendix). It has to be mentioned, however, that the time needed to build a representative database may not be ignored.

2) The difficulty of choosing an adequate desired global rating function (L_teacher). The definition of L_teacher is arbitrary and somewhat artificial; some definitions may be better than others. It must be kept in mind that the real objective of the learning is to build a rating function L(D) whose maximum coincides with the maximum of L_teacher(D), rather than approximating L_teacher(D) as closely as possible everywhere.

3) There always exists a chance of finding a local minimum with the neural-like approach instead of the global one. This problem does not exist with the method based upon GA's, because many candidates are considered in parallel and, due to the cross-over and the mutation, jumps are made in the solution space. Also, contrary to usual applications of neural networks in function approximation, which try to render the error as small as possible by increasing the network complexity (e.g., the number of nodes), the structure and the number of parameters are fixed beforehand. This explains the relatively high error rate: given the structure and the limited number of parameters of the neural network, and the difficulty of choosing an efficient database for the neural-like approach, the improvements obtained for this optimization method are not so spectacular.

On the other hand, the time complexity to train the NN is limited in comparison with the GA.

VII. CONCLUSIONS

In this paper, the fuzzy logic theory has been used to build a specific decision-making system for heuristic search algorithms. To improve the performance of the overall system, two appropriate optimization methods, namely GA's and gradient-descent techniques based on a neural-like formulation of the problem, have been tested to tune the parameters of the decision-making system. The decision-making system has been tested on a target recognition problem with good results. The genetic algorithm optimization method provides a marked improvement of the results.

The requirements of Branch and Bound (BB) and GA's can be compared as follows:

                                  BB        GA
  Requirements for efficiency     strong    medium
  Alternative solutions           no        yes
  Unstructured information        no        yes

APPENDIX
DECISION MAKING IN UNCERTAIN ENVIRONMENT WITH GENETIC ALGORITHMS

We have investigated the usefulness of GA's for the resolution of the graph-search problem described in the main part of the text. The GA approach is motivated by the following arguments:

- The basic requirements for the efficiency of GA's are less stringent than those of Branch and Bound or other heuristic methods: GA's do not require structured information (availability of a partial rating).
- The time before finding a satisfactory solution is variable for Branch and Bound methods (short if the efficiency requirements are satisfied, long otherwise), whereas GA's need an approximately constant time in between.
- Although GA's are well known to be highly reliable, they are not absolutely guaranteed to find the global best solution, which will certainly be found by Branch and Bound methods.
- At the end of the graph-search, GA's result in a family of alternative solutions.
- GA's turn out to be very useful by providing interesting solutions used as a database for training learning systems such as neural networks.

The application of GA's to a simpler decision problem, namely a binary multiple-fault diagnosis problem, has already been proposed (see [34]).

To solve the graph-searching problem with GA's, a population of individuals is formed. Each individual (or chromosome) consists of a particular sequence of decision elements D_i = (d_i1, d_i2, ..., d_in), each gene d_ij taking its own number of allele values; the individuals are grouped in the population P = {D_1, D_2, ..., D_p}. The chromosome length is thus n.

The fitness value of each individual is the global rating of the decision represented by the individual, L(D_i). The fitness function therefore depends on the particular approach adopted to cope with the uncertainties: the Bayesian approach (see [35]), the fuzzy approach (Section II), or belief functions (see [36]). For computing fitness values, the Bayesian and fuzzy logic approaches are examined and compared. The fitness values are scaled to belong to the interval [0, 500]. The GA parameters used in our application are summarized in Table IX.

TABLE IX
GA PARAMETERS USED IN OUR APPLICATION

A. Results

The method described above is tested on the armored vehicle recognition system, solving a hierarchical graph-search problem.

1) Single-phase GA: Fig. 11 shows the learning curves (evolution of the fitness) for the fuzzy logic approach (see Section II) over 50 experiments with different initial populations. The diagrams are given for one particular image, but similar results are observed for most of the available images in our database. Starting from top to bottom of the diagrams, the curves represent, respectively:

Curve 1: the highest value (over 50 experiments) of the best fitness of each population;
Curve 2: the mean value (over 50 experiments) of the best fitness of each population;
Curve 3: the lowest value (over 50 experiments) of the best fitness of each population;
Curve 4: the highest value (over 50 experiments) of the mean fitness of each population;
Curve 5: the mean value (over 50 experiments) of the mean fitness of each population;
Curve 6: the lowest value (over 50 experiments) of the mean fitness of each population.

Fig. 11. Learning curves for the fuzzy logic approach.

As expected, the reliability of the GA search is excellent, since all experiments result in an optimum (or near-optimum) decision.
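A single-phase GA for the hierarchical decision problem can be sketched as follows. The population size (40), number of generations (200), mutation probability (0.05), and elitism follow the values reported for the application; the allele counts per level, the tournament selection, and the fitness function (a toy stand-in for the global rating L(D)) are illustrative assumptions.

```python
import random

N_LEVELS = [2, 24, 10, 9, 16]   # assumed allele counts per gene (level)

def fitness(chromo, target):
    # Toy rating: fraction of genes matching a hidden optimal decision.
    return sum(a == b for a, b in zip(chromo, target)) / len(chromo)

def evolve(target, pop_size=40, generations=200, p_mut=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(n) for n in N_LEVELS] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda c: fitness(c, target), reverse=True)
        new_pop = [scored[0]]                        # elitism: keep the best
        while len(new_pop) < pop_size:
            # Tournament selection of two parents, one-point crossover.
            p1 = max(rng.sample(scored, 3), key=lambda c: fitness(c, target))
            p2 = max(rng.sample(scored, 3), key=lambda c: fitness(c, target))
            cut = rng.randrange(1, len(N_LEVELS))
            child = p1[:cut] + p2[cut:]
            child = [rng.randrange(n) if rng.random() < p_mut else g
                     for g, n in zip(child, N_LEVELS)]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=lambda c: fitness(c, target))

best = evolve(target=[1, 7, 3, 0, 5])
```

Note that, unlike Branch and Bound, nothing here assumes that partial decisions can be rated: the fitness is only ever evaluated on complete chromosomes, which is the "unstructured information" advantage discussed in the conclusions.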
When structured information is available, however, it is also possible to exploit it by designing more efficient strategies based on GA's, which involve running a GA-based search several times and in an incremental fashion (multi-stage GA): a multi-stage GA can be designed which first solves the partial decision problem limited to the first k levels, and then solves the complete decision problem by gradually adding grouped levels of decision. A family of alternative solutions can also be useful in several applications with a human interface, where the decision maker prefers to have a set of alternative strategies and to make the final decision himself.
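The two-stage idea can be sketched as follows: phase 1 searches only the first two levels, and phase 2 extends the phase-1 population with random genes for the remaining levels before continuing on full chromosomes. The allele counts and the matching-based fitness functions are toy stand-ins for the level ratings L1, L2 and the global rating.

```python
import random

N_LEVELS = [2, 24, 10, 9, 16]   # assumed allele counts per level
TARGET = [1, 7, 3, 0, 5]        # hidden optimal decision (toy)

def ga(pop, fit, lengths, gens, rng, p_mut=0.05):
    """A compact GA with elitism, tournament selection, and mutation."""
    for _ in range(gens):
        pop.sort(key=fit, reverse=True)
        new = [pop[0]]                                   # elitism
        while len(new) < len(pop):
            p1, p2 = (max(rng.sample(pop, 3), key=fit) for _ in range(2))
            cut = rng.randrange(1, len(lengths))         # one-point crossover
            child = [rng.randrange(n) if rng.random() < p_mut else g
                     for g, n in zip(p1[:cut] + p2[cut:], lengths)]
            new.append(child)
        pop = new
    return pop

rng = random.Random(2)

# Phase 1: partial decisions (d1, d2), fitness from the first two levels only.
fit12 = lambda c: sum(a == b for a, b in zip(c, TARGET[:2]))
pop = [[rng.randrange(n) for n in N_LEVELS[:2]] for _ in range(40)]
pop = ga(pop, fit12, N_LEVELS[:2], gens=50, rng=rng)

# Phase 2: complete each partial decision with random genes and search on
# full chromosomes; the first two genes are still allowed to change.
full_fit = lambda c: sum(a == b for a, b in zip(c, TARGET))
pop = [c[:2] + [rng.randrange(n) for n in N_LEVELS[2:]] for c in pop]
pop = ga(pop, full_fit, N_LEVELS, gens=150, rng=rng)
best = max(pop, key=full_fit)
```

Phase 1 thus plays the role described in the Appendix: it biases the starting population of the full search toward promising values of the first levels.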

2) Multi-stage GA: We chose to subdivide the GA search into two stages. In the first stage, the GA performs a search limited to the first two levels; only the level ratings L1 and L2 (of levels 1 and 2, respectively) are used and combined to compute the fitness of a partial decision. An individual therefore represents a partial decision (d1, d2). At the beginning of phase 2, a new population of complete decisions is formed from the last generation of phase 1 by randomly adding three decision elements. During phase 2, the first two genes (or levels) are still allowed to change. Phase 1 can thus be considered as a means to choose a biased starting population for a usual single-phase GA. Additional GA parameters for the multi-stage search are given in Table X.

TABLE X
SUMMARY OF ADDITIONAL TWO-STAGE GENETIC ALGORITHM PARAMETERS

Curves 1 and 2 of Fig. 11 show that a satisfactory result can already be expected after 40 generations, corresponding to at most 1600 tested nodes (well below the 69 120 terminal nodes of the overall tree). The percentage of success (frequency of achieving a decision identical or nearly identical to the true optimum one) was 96% for all 135 images of our database. In addition, tests have been done to examine the final population of the single-phase GA: 52.5% of the individuals of the final population correspond to a sufficient result (at least two correct axes), while the other individuals correspond to an insufficient result.

Fig. 12. Learning curves for the multi-stage GA.

Fig. 12 shows the learning curves for a multi-stage GA, using the same conventions as Fig. 11. It is observed in Fig. 12 that the two-stage GA outperforms the single-stage method: a faster convergence is achieved. This efficiency is easily explained by the fact that the particular structure of the information is exploited in an improved way.

Fig. 13. Comparison of the success rates of the Bayesian and fuzzy logic approaches.

In Fig. 13, the lowest curve corresponds to the success rate using the Bayesian rating, the curve in the middle relates to the fuzzy-derived rating, and the third curve corresponds to the multi-stage GA. Considering the first two curves, it turns out that the fuzzy logic approach converges more rapidly. This could be explained by the fact that the operators used to build the global rating are more adapted to the fundamental GA mechanisms. Similar results were obtained with other armored vehicles on different images.

REFERENCES

[1] N. Nilsson, Principles of Artificial Intelligence. San Mateo, CA: Morgan Kaufmann, 1980.
[2] J. Pearl, Heuristics: Intelligent Search Strategies for Computer Problem Solving. Reading, MA: Addison-Wesley, 1984.
[3] P. Torasso and L. Console, Diagnostic Problem Solving: Combining Heuristic, Approximate and Causal Reasoning. London: North Oxford Academic, 1989.
[4] N. Nilsson, "Probabilistic logic," Artificial Intell., vol. 28, no. 1, pp. 71-87, 1986.
[5] D. Spiegelhalter, "Probabilistic reasoning in predictive expert systems," in Uncertainty in Artificial Intelligence, Kanal and Lemmer, Eds. Amsterdam: North-Holland, 1986, pp. 47-67.
[6] G. Shafer, A Mathematical Theory of Evidence. Princeton, NJ: Princeton Univ. Press, 1976.
[7] G. Shafer, "Probability judgment in artificial intelligence and expert systems," Statistical Sci., vol. 2, no. 1, pp. 3-16, 1987.
[8] R. Bellman and L. Zadeh, "Decision making in a fuzzy environment," Management Sci., vol. 17, pp. 141-164, 1970.
[9] L. Zadeh, "Outline of a new approach to the analysis of complex systems and decision processes," IEEE Trans. Syst., Man, Cybern., vol. SMC-3, no. 1, pp. 28-44, 1973.

[10] H. Prade, "A computational approach to approximate and plausible reasoning with applications to expert systems," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-7, pp. 260-283, 1985.
[11] D. Lindley, "The probability approach to the treatment of uncertainty in artificial intelligence and expert systems," Statistical Sci., vol. 2, pp. 17-24, 1987.
[12] P. Cheeseman, "Probabilistic versus fuzzy reasoning," in Uncertainty in Artificial Intelligence, Kanal and Lemmer, Eds. Amsterdam: North-Holland, 1986, pp. 85-102.
[13] L. Zadeh, "Is probability theory sufficient for dealing with uncertainty in AI: A negative view," in Uncertainty in Artificial Intelligence, Kanal and Lemmer, Eds. Amsterdam: North-Holland, 1986, pp. 103-116.
[14] B. Kosko, Neural Networks and Fuzzy Systems. Englewood Cliffs, NJ: Prentice-Hall, 1992.
[16] M. Sugeno and G. Kang, "Structure identification of fuzzy model," Fuzzy Sets Syst., vol. 28, pp. 15-33, 1988.
[17] M. Sugeno and T. Yasukawa, "A fuzzy-logic-based approach to qualitative modeling," IEEE Trans. Fuzzy Syst., vol. 1, pp. 7-31, 1993.
[18] H. Nomura, I. Hayashi, and N. Wakami, "A learning method of fuzzy inference rules by descent method," in Proc. IEEE Int. Conf. Fuzzy Syst., 1992, pp. 203-210.
[19] H. Bersini, J.-P. Nordvik, and A. Bonarini, "A simple direct fuzzy controller derived from its neural equivalent," in Proc. 2nd IEEE Int. Conf. Fuzzy Syst., 1993, pp. 345-350.
[20] C. Karr, "Design of an adaptive fuzzy logic controller using a genetic algorithm," in Proc. 4th Int. Conf. Genetic Algorithms, 1991, pp. 450-457.
[21] C. Karr and E. Gentry, "Fuzzy control of pH using genetic algorithms," IEEE Trans. Fuzzy Syst., vol. 1, pp. 46-53, 1993.
[22] P. Thrift, "Fuzzy logic synthesis with genetic algorithms," in Proc. 4th Int. Conf. Genetic Algorithms, 1991, pp. 509-513.
[23] E. Mamdani, "Application of fuzzy logic to approximate reasoning using linguistic synthesis," IEEE Trans. Computers, vol. C-26, pp. 1182-1191, 1977.
[24] C. Perneel, M. de Mathelin, and M. Acheroy, "Automatic target recognition fuzzy system for thermal infrared images," in Proc. 2nd IEEE Int. Conf. Fuzzy Syst., San Francisco, CA, Mar. 28-Apr. 1, 1993, pp. 576-581.
[25] G. Beveridge and R. Schechter, Optimization: Theory and Practice. New York: McGraw-Hill, 1970.
[26] J. Nelder and R. Mead, "A simplex method for function minimization," Comput. J., vol. 7, pp. 308-313, 1965.
[27] D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley, 1989.
[28] J. Jang, "ANFIS: Adaptive-network-based fuzzy inference systems," IEEE Trans. Syst., Man, Cybern., vol. 23, pp. 665-685, 1993.
[29] J. Jang and C. Sun, "Functional equivalence between radial basis function networks and fuzzy inference systems," IEEE Trans. Neural Networks, vol. 4, pp. 156-159, 1993.
[30] R. Hecht-Nielsen, Neurocomputing. Redwood City, CA: Addison-Wesley, 1990.
[31] C. Perneel, M. de Mathelin, and M. Acheroy, "Detection of important directions on thermal infrared images with application to target recognition," in Proc. SPIE Conf. Forward Looking Infrared Image Process., Orlando, FL, Apr. 12-16, 1993.
[32] R. Duda and P. Hart, Pattern Classification and Scene Analysis. New York: Wiley, 1973.
[33] R. Lippmann, "An introduction to computing with neural nets," IEEE ASSP Mag., vol. 4, pp. 4-22, Apr. 1987.
[34] W. Potter, J. Miller, and O. Weyrich, "A comparison of methods for diagnostic decision making," Expert Syst. Applicat., 1990.
[35] M. de Mathelin, C. Perneel, and M. Acheroy, "Bayesian estimation versus fuzzy logics for heuristic search algorithms," in Proc. 2nd IEEE Int. Conf. Fuzzy Syst., San Francisco, CA, Mar. 28-Apr. 1, 1993, pp. 944-951.
[36] C. Perneel, M. de Mathelin, and M. Acheroy, "An heuristic search algorithm based on belief functions," in Proc. AI '94: 14th Avignon Int. Conf., Paris, France, May 30-June 3, 1994, to appear.

Christiaan Perneel (M'94) was born in 1963. In 1986, he received the master's degree of engineer in telecommunications at the Royal Military Academy of Brussels, Belgium, and he later received the Ph.D. degree. Currently, he is a Lecturer at the Department of Applied Mathematics of the Royal Military Academy, where he teaches probability and statistics. His research interests include image processing, pattern recognition, GA's, and imperfect information.

Jean-Marc Themlin was born on November 4, 1963. He received the bachelor's degree in physics in 1985 at the Facultés Universitaires Notre-Dame de la Paix in Namur, Belgium, and the Ph.D. degree in 1991. After post-doctoral studies in Marseille, France, in 1991-1992, he became Maître de conférences, involved in teaching and research at the Faculté des Sciences de Luminy in Marseille. His primary research interests in experimental physics are photoemission and inverse photoemission, applied to solids, surfaces, and interfaces to reveal their electronic properties.

Jean-Michel Renders was born in Brussels, Belgium. He received the Master's degree in mechanical and electrical engineering from the Université Libre de Bruxelles in 1987, and the Ph.D. degree from the same university in 1993, where he worked on GA's and fuzzy systems applied to artificial intelligence and image processing. He is currently working at Tractebel Energy Engineering (Artificial Intelligence Section). His research interests include neural networks, GA's, and artificial intelligence techniques applied to process control and power systems.

Marc Acheroy (M'90) was born in 1948. He received the master's degree of engineer in transport-mechanics at the Royal Military Academy of Brussels in 1971, the degree of Engineer on Automation Control in 1981, and the Ph.D. degree in 1983 at the Université Libre de Bruxelles. In 1978, he became Engineer of the Military Material. Since 1985, he has been teaching at the Royal Military Academy, as assistant professor and, since 1991, as professor. He is the head of the Electrical Engineering Dept. and of the Signal and Image Center of the Royal Military Academy, head of the pattern recognition cell of the Signal and Image Center, and the Belgian representative of IAPR (International Association of Pattern Recognition). His research interests include signal and image processing, especially image compression and restoration.