Coactive Neural Fuzzy Modeling







1. Introduction

Adaptive neuro-fuzzy models enjoy many of the advantages claimed for both NNs and FSs; their linguistic interpretability allows prior knowledge to be embedded in their construction, and allows the results of learning to be understood.

2. CANFIS Architecture

2.1. Comparison with a simple back-prop NN

We contrast the neuro-fuzzy model CANFIS with well-discussed black-box NNs. From an architectural standpoint, the hidden layer of a simple back-prop NN is tantamount to the consequent layer of CANFIS: the NN's weights between the output layer and the hidden layer correspond to the membership values on the fuzzy association layer in CANFIS. One typical criticism of the simple back-prop NN is that it updates its weight coefficients globally; the weights are used in a global fashion, and the NN tries to find one specific set of weights common to all training patterns. In contrast, CANFIS's powerful capability stems from its pattern-dependent weights between the consequent layer and the fuzzy association layer: the membership values correspond to dynamically changeable weights that depend on the input patterns. In this sense, both CANFIS and a Radial Basis Function Network (RBFN) are locally tuned [13, 19]. With bell-shaped fuzzy membership functions (MFs), CANFIS can be functionally equivalent to an RBFN [6]; just as an RBFN does, it can produce a center-weighted response to small receptive fields localizing the primary input excitation, and it converges very quickly compared with simple back-propagation NNs. This comparison emphasizes the inside transparency of CANFIS, and suggests an easy implementation of such a neuro-fuzzy model by modifying an available back-prop program.

2.2. Non-linear Rule

In pursuit of a truly adaptive network, there should be no constraints on neuron functions; replacing a neuron function at the consequent layer with a nonlinear one may improve performance.

2.2.1. Sigmoidal Rule

Suppose we have a sigmoidal function as a consequent C1:

C1 = 1 / (1 + exp[-(p1 x + q1 y + r1)]).

In this case we have a sigmoidal rule, that is, a nonlinear consequent.

2.2.2. Neural Rule

Furthermore, when each consequent is realized by an NN, we have neural rules, as illustrated in Figure 2 (above). Putting more hidden nodes in the NN is equivalent to adding more rules in CANFIS.
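As a minimal sketch of how bell MFs and a sigmoidal rule consequent combine in CANFIS, the following Python fragment computes a firing-strength-weighted output for two rules. The generalized bell MF form is standard; all parameter values here are illustrative assumptions, not taken from the paper's experiments.

```python
import math

def bell_mf(x, a, b, c):
    """Generalized bell MF: 1 / (1 + |(x - c) / a|^(2b))."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def sigmoidal_consequent(x, y, p, q, r):
    """Sigmoidal rule consequent: C = 1 / (1 + exp(-(p*x + q*y + r)))."""
    return 1.0 / (1.0 + math.exp(-(p * x + q * y + r)))

def canfis_output(x, y, rules):
    """Firing strengths act as pattern-dependent weights between the
    fuzzy-association layer and the consequent layer."""
    firing = [bell_mf(x, *mx) * bell_mf(y, *my) for mx, my, _ in rules]
    total = sum(firing)
    return sum(w * cq(x, y) for w, (_, _, cq) in zip(firing, rules)) / total

# Two rules with hypothetical MF and consequent parameters
rules = [
    ((2.0, 2.0, -1.0), (2.0, 2.0, -1.0),
     lambda x, y: sigmoidal_consequent(x, y, 1.0, 0.5, 0.0)),
    ((2.0, 2.0, 1.0), (2.0, 2.0, 1.0),
     lambda x, y: sigmoidal_consequent(x, y, -0.5, 1.0, 0.2)),
]
```

Because each consequent is squashed into (0, 1) and the firing strengths are normalized, the combined output also stays in (0, 1) for these rules.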

When multiple outputs are required, fuzzy rules can be constructed with shared membership values to express possible correlations between the outputs; Figure 2 (above) visualizes this concept. Since the modifiable parameters are correlated by the juxtaposed rules, the consequents may also be correlated. Alternatively, each output can be given an independent set of fuzzy rules, but then it may be hard to realize correlations between outputs, and an additional concern lies in the number of adjustable parameters, which drastically increases as outputs increase.

Many modular network models were discussed in [15, 14, 10, 11]. Namely, when two neural consequents, "Neural Rule1" and "Neural Rule2," are fused into one NN (Local Expert NN1), and "Neural Rule3" and "Neural Rule4" are fused into another NN (Local Expert NN2), we have a construction similar to a typical modular network, as illustrated in Figure 2 (below); the outputs of the two local expert NNs are mediated by an integrating unit (typically a gating network). Their design procedure, detailed in [16], is strongly based on an independent training scheme whereby antecedent parts and consequent parts are trained individually and then put together. Another training approach is to train both antecedent and consequent parts concurrently [1, 9].

3. Simulation example (N-shape problem)

We shall let the following concrete example further clarify CANFIS's learning capabilities. This simulation example is quite simple: fitting an N-shaped letter that has two corners, a pointed top left-hand corner and a rounded bottom right-hand corner. This letter is shown as a dashed line in Figures 3 (Q), (R), and (S). To see how adaptive capability depends on the architecture itself, four simple back-prop NNs and five CANFIS models (A)-(E) were tested using the same conventional gradient descent (GD) backpropagation algorithm with a fixed momentum (0.8) and a small fixed learning rate. In this simulation, bell-shaped MFs were used.

The five models are (A) CANFIS with linear rules (i.e., ANFIS), (B) CANFIS with sigmoidal rules, and (C), (D), (E) CANFIS with neural rules. The difference between (C) and (D) resides in the neuron functions in the output layer of the neural consequents: each neural consequent has sigmoidal functions in its output layer in (C), while each has identity functions in (D). CANFIS (E) has sigmoidal functions to generate final outputs at the fuzzy association layer, whereas CANFISs (C) and (D) have identity functions there. The results are shown in Tables 1 and 2.

[Figure 3: (Q) CANFIS with two linear rules; (R) CANFIS with three linear rules; (S) NN with three hidden units; initial (epoch = 0) and final outputs versus the desired output.]
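The common training setup described above, plain gradient-descent backprop with a fixed momentum of 0.8 and a small fixed learning rate, can be sketched as follows. The objective below is a toy quadratic stand-in, not the N-shape data; the learning rate and epoch count are illustrative assumptions.

```python
def train_gd_momentum(params, grad_fn, lr=0.01, momentum=0.8, epochs=1000):
    """Plain GD backprop with a fixed momentum term: each parameter is
    moved along an exponentially smoothed negative gradient."""
    velocity = [0.0] * len(params)
    for _ in range(epochs):
        grads = grad_fn(params)
        for i, g in enumerate(grads):
            velocity[i] = momentum * velocity[i] - lr * g  # momentum update
            params[i] += velocity[i]
    return params

# Toy objective f(p) = (p0 - 3)^2 + (p1 + 1)^2, minimized at (3, -1)
grad = lambda p: [2 * (p[0] - 3), 2 * (p[1] + 1)]
p = train_gd_momentum([0.0, 0.0], grad)
```

The same update rule applies whether the parameters are NN weights or CANFIS antecedent/consequent coefficients; only `grad_fn` changes.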

4. Analysis of results

4.1. Convergence comparison

The NNs with a small number of hidden neurons (three or four), possessing sigmoidal neuron functions, never reach the fitting-error level of Figure 3 (R) within a preset iteration limit (200,000); their outputs, shown in Figure 3 (S), do not evolve to fit the pointed corner. In contrast, CANFIS with two or three rules is able to capture the peculiarity of the N-shape, as shown in Figures 3 (Q) and (R): despite the discontinuity at the left-hand pointed corner of the N-shape, the actual outputs (solid line) were almost perfectly fitted into the desired N-shape (dashed line), extracting very well both the pointed and the round corners. This example has reinforced the strength of CANFIS with the bell MFs in convergence capacity, in spite of its comparable number of adjustable parameters (compare Tables 1 and 2).

[Tables 1 and 2: numbers of hidden units or rules, training errors, checking errors, and parameter counts for the compared models.]

4.2. Evolution of antecedents (MFs)

Another interesting point in the results is that the MFs' adaptations depend on the training method. When three rules are introduced, it is observed that two MFs (MF2, MF3) transit back and forth at the beginning of their evolution, as if struggling to find comfortable niches (see Figure 4 (left)). The MF shapes can metamorphose into unexpected ones that manually tuned MFs may not match; they never cease to amaze us in their neat fittings. A systematic training procedure leads to the results presented in Figures 3 (Q) and (R); the details are explained in the next section.

Fig. 4: Trajectories of the three MFs (left) and a result of the N-shape problem obtained by ANFIS with three rules (right), based on the GD method alone.
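The way MF parameters drift under the GD method can be illustrated with a small sketch: one generalized bell MF's parameters (a, b, c) descend a squared-error loss toward a target bump. The data, learning rate, and the use of finite-difference gradients (in place of analytic backprop formulas) are all illustrative assumptions.

```python
def bell_mf(x, a, b, c):
    """Generalized bell MF: 1 / (1 + |(x - c) / a|^(2b))."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def numeric_grad(f, params, eps=1e-6):
    """Central finite-difference gradient (stand-in for backprop)."""
    grads = []
    for i in range(len(params)):
        hi, lo = params[:], params[:]
        hi[i] += eps
        lo[i] -= eps
        grads.append((f(hi) - f(lo)) / (2 * eps))
    return grads

# Target: samples of a bell bump centred at c = 2; start mis-centred at c = 0.5
data = [(0.1 * i, bell_mf(0.1 * i, 1.0, 2.0, 2.0)) for i in range(40)]
loss = lambda p: sum((bell_mf(x, *p) - t) ** 2 for x, t in data)
params = [1.5, 2.0, 0.5]  # a (width), b (slope), c (centre)
for _ in range(2000):     # plain GD: the MF shape gradually migrates
    params = [p - 0.005 * g for p, g in zip(params, numeric_grad(loss, params))]
```

Tracking `params` over the iterations gives exactly the kind of MF trajectory plotted in Figure 4 (left).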

4.3. Evolution of Consequents (Rules)

In many situations, the initial setups of the MFs' parameters (antecedents) and of the consequents may not be perfect; trusting the initial guess may turn out to be an obstacle to obtaining better results, and indeed the resultant MFs in Figures 3 (Q) and (R) may not match any initial guess. Let us now assume that we acquire two consequents from data sets which purposely coincide with the two side lines of the N. Is it a good idea to stick to them? Figure 5 (A) shows two results obtained using the two fixed consequents, and Figure 5 (B) suggests that ANFIS is able to adapt the MFs to the N-shape to some extent; yet clinging to those two consequents may not be useful in obtaining a good fit. There must be some optimal combinations of shapes of MFs and forms of consequents; the question is what combinations of them are best.

When both antecedent and consequent parts are trained simultaneously, each consequent output does not have to fit the desired N-shape: it is the combined outputs that fit the N-shape. For instance, the left-hand base of MF1 reached the pointed corner of the N-shape, and MF1 evolved to a shape different from the MF1 in Figure 3 (Q). These results confirm that CANFIS can grasp the peculiarity of a given data set because of its adaptive capability.

The original ANFIS enjoys its hybrid learning procedure, based on both the heuristic gradient descent (GD) method and least-squares estimation (LSE): in the forward pass, the consequent parameters are updated by LSE using neuron outputs, and in the backward pass, the antecedent parameters are updated by the GD method using error signals. Figures 6 (X) and (Y) show results obtained with the hybrid learning algorithm using an adaptive step-size strategy and a combination of the GD method and LSE. Unexpectedly, CANFIS with the GD method alone converged faster than CANFIS with the hybrid learning procedure: as Figures 6 (V) and (W) show, CANFIS based on the hybrid learning procedure did not fit the N-shape very well, while CANFIS with the GD method alone recognized the features of the N-shape well, as shown in Figure 4 (right). It may be worth considering possible reasons for this observation. LSE can find certain values of the consequents that have minimal errors with the current MF setup, but it may specialize the consequents to such an extent that the MFs are prevented from evolving; without prior knowledge, the procedure may end up losing its way to a global minimum. The hybrid learning procedure basically predominates when intuitively positioned MFs do not need to evolve very much. Interestingly, the outputs of the adapted consequents depicted in Figure 7 (A)-(D) end up fitting the N-shape to some extent even when the initial MFs are poorly set up, as in (D), where there is no intersection between the two modified bell MFs.
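The forward (LSE) half of the hybrid procedure can be sketched as follows: with the MFs frozen, the coefficients of linear rules enter the overall output linearly, so they can be solved in one shot by least squares. The one-input linear-rule form f_i = p_i*x + r_i and all parameter values below are simplifying assumptions for illustration.

```python
def bell_mf(x, a, b, c):
    """Generalized bell MF: 1 / (1 + |(x - c) / a|^(2b))."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def lse_consequents(samples, mfs):
    """Forward pass of hybrid learning: freeze the MFs, then solve the
    linear-rule coefficients by least squares on the normal equations."""
    rows, targets = [], []
    for x, t in samples:
        w = [bell_mf(x, *mf) for mf in mfs]
        s = sum(w)
        # Each rule i contributes normalized-weight columns (wbar*x, wbar).
        rows.append([v for wi in w for v in (wi / s * x, wi / s)])
        targets.append(t)
    n = len(rows[0])
    # A^T A theta = A^T y, solved by Gauss-Jordan elimination
    # (no pivoting; A^T A is symmetric positive definite here).
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    aty = [sum(r[i] * t for r, t in zip(rows, targets)) for i in range(n)]
    for col in range(n):
        piv = ata[col][col]
        ata[col] = [v / piv for v in ata[col]]
        aty[col] /= piv
        for i in range(n):
            if i != col:
                f = ata[i][col]
                ata[i] = [vi - f * vj for vi, vj in zip(ata[i], ata[col])]
                aty[i] -= f * aty[col]
    return aty  # [p1, r1, p2, r2, ...]
```

The backward pass (GD on the antecedent parameters using error signals) would then follow, alternating with this step each epoch; as discussed above, the one-shot optimality of LSE is precisely what can keep the MFs from evolving.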

Fig. 7: Different outputs of optimized consequents after the training phase. The models are CANFIS with linear rules (A), sigmoidal rules (B), and neural rules (C) and (D).

5. Conclusion

Automatic rule extraction is a useful aspect of adaptive neuro-fuzzy models, and adaptation lies behind many successful reports: though adaptive systems may suffer from the difficulties described above, they will most likely outperform manually designed systems, and the adaptive learning procedure surely helps. Yet acquired rules may sometimes be hard to understand, and it may be futile to pursue interpretability in such cases; in this context, we should pay attention to the limitations of the transparency of weight coefficients from a fuzzy logic standpoint.

To obtain higher precision within a fixed amount of computation, we may construct nonlinear rules such as neural rules, since manipulating neuron functions may have a beneficial effect on performance enhancement; or we may prefer to choose a more sophisticated MF, such as an asymmetric MF, and to limit the number of MFs so that we can also retain interpretability. We have also described several problems faced in designing neuro-fuzzy systems for use in practical environments. Some of the discussed CANFIS models are tested and discussed by application to a more complicated problem [11, 12]. Exploring these ideas further may be a good small step toward finding an ideal model in the true sense of an "adaptive network."

References

[1] S. Haykin, "Modular Networks," in Neural Networks: A Comprehensive Foundation, ch. 12. Macmillan.
