Research Journal of Applied Sciences 1: 72-83, 2006
© Medwell Online, 2006

Recognition of Tool Wear by Using Extended Kalman Filter in Artificial Neural Network

1J. Hameed Hussain and 2S. Purushothaman
1Department of Mechanical Engineering, Sathyabama Deemed University, Old Mahabalipuram Road, Chennai, India-603103, India
2Department of Mechanical Engineering, Hindustan College of Engineering, Old Mahabalipuram Road, Chennai, India-603103, India

Abstract: The condition of the tool in a turning operation is monitored by using an Artificial Neural Network (ANN). The recursive Kalman filter algorithm is used for weight updation of the ANN. To monitor the status of the tool, tool wear patterns are collected. The patterns are transformed from n-dimensional feature space to a lower dimensional space (two dimensions). This is done by using two discriminant vectors φ1 and φ2. These discriminant vectors are found by the optimal discriminant plane method. Thirty patterns are used for training the ANN. A comparison between the classification performances of the ANN trained without reducing the dimensions of the input patterns and with reduced dimensions of the input patterns is done. The ANN trained with transformed tool wear patterns gives better results in terms of improved classification performance in fewer iterations, when compared with the results of the ANN trained without transforming the dimensions of the input patterns to a lower dimension.

Key words: Back-propagation algorithm, extended Kalman filter, optimal discriminant plane

INTRODUCTION

In the manufacturing industries, automated machine tools are used. Some of them are single-spindle and multi-spindle automats, capstan and turret lathes and computer numerical control machines. In all these machines, a predefined sequence of instructions, like machining steps and programming methods, is used to execute the operations, so that good quality parts with mass production are achieved.
When the tools are worn out, they are replaced with new tools, or reground and used. The duration after which a tool has to be replaced or reground can be expressed in terms of the amount of flank wear land width of the tool (VB) or tool life in minutes. Established data, both in terms of tool life and amount of tool wear, are available, based on which the tools can be replaced or reground. There is no assurance that the tool will last till the established time. There is every possibility for the tool to fail in advance. Artificial neural networks have been used to detect the amount of flank wear of the tool.

The methods used for monitoring tool wear are direct and indirect. The direct methods use measurements of volumetric loss of tool material. This procedure is done off-line. Some of the direct methods include change in work piece dimension, optical techniques, radioactive methods and pneumatic gauging methods. The indirect methods use the measurement of cutting-related parameters, like cutting forces, tool holder vibration, acoustic emission, etc. Due to the complexity and unpredictable nature of the machining process, the process has to be modeled with model-based techniques. Modeling correlates the process state variables to the process parameters. The process state variable is VB. The process parameters are feed rate, cutting speed (S) and depth of cut. Some of the modeling techniques are multiple regression analysis and group method data handling. These methods require a relationship between process parameters and process state variables (Chryssolouris and Guillot). The neural network approach does not require any modeling between process parameters and process state variables. The network maps the input domains with the output domains. The inputs are process parameters and the outputs are process state variables. Each process parameter or process state variable is called a feature. The combination of input and output constitutes a pattern. Many patterns together are called data.
In this study, instead of using the actual dimension of the input pattern (input vector), the dimension is reduced to two. The two-dimensional input vector does not represent any individual feature of the original n-dimensional input pattern; instead, it is a combination of the 'n' features of the original pattern.

Corresponding Author: J. Hameed Hussain, Department of Mechanical Engineering, Sathyabama Deemed University, Old Mahabalipuram Road, Chennai, India-603103, India

The components of the reduced pattern do not have any dimensional quantity.

Transformation of n-dimensional input patterns into two-dimensional input vectors: The process of changing the dimensions of a vector is called transformation. The transformation of a set of n-dimensional real vectors onto a plane is called a mapping operation. The result of this operation is a planar display. The main advantage of the planar display is that the distribution of the original patterns of higher dimensions (more than two dimensions) can be seen on a two-dimensional graph. The mapping operation can be linear or non-linear. A linear classification algorithm (Fisher) and a method for constructing a classifier on the optimal discriminant plane, with a distance criterion for multiclass classification with a small number of patterns (Hong and Yang), have been developed. The method of considering the number of patterns and the feature size (Foley) and the relations between discriminant analysis and multilayer perceptrons have been analyzed. A linear mapping is used to map an n-dimensional vector space onto a two-dimensional space. Some of the linear mapping algorithms are principal component mapping (Kittler and Young), generalized declustering mapping (Sammon; Gelsema and Eden), least squared error mapping (Mix and Jones) and projection pursuit mapping (Friedman and Tukey). In this study, the generalized declustering optimal discriminant plane is used.
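As a concrete illustration of such a planar display, the following sketch (not from the paper) applies one of the linear mappings listed above, principal component mapping, to synthetic data; the array names and sizes are assumptions made for the example:

```python
import numpy as np

# Planar display via principal component mapping: n-D patterns are
# projected onto the two leading eigenvectors of their covariance
# matrix, giving 2-D coordinates that can be plotted as a scatter.
rng = np.random.default_rng(42)
X = rng.normal(size=(30, 5))          # 30 synthetic patterns, 5 features
Xc = X - X.mean(axis=0)               # center the data

cov = np.cov(Xc, rowvar=False)        # 5 x 5 covariance matrix
vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
A = vecs[:, -2:].T                    # 2 x n matrix of leading eigenvectors

Y = Xc @ A.T                          # planar images of the 30 patterns
print(Y.shape)                        # 30 points in two dimensions
```

Each row of Y is the two-dimensional image of one original pattern; plotting the two columns against each other shows the distribution of the higher-dimensional data.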
The mapping of the original pattern X onto a new vector Y on a plane is done by a matrix transformation, which is given by

Y = A X    (1)

where

A = [φ1, φ2]^T    (2)

Y is a two-dimensional vector and φ1 and φ2 are the discriminant vectors (also called projection vectors). An overview of different mapping techniques is given by Siedlecki et al. The vectors φ1 and φ2 are obtained by optimizing a given criterion. The plane formed by the discriminant vectors is the optimal discriminant plane. This plane gives the highest possible classification performance for the new patterns.

The steps involved in the linear mapping are:

Step 1: Computation of the discriminant vectors φ1 and φ2; this is specific for a particular linear mapping algorithm.

Step 2: Computation of the planar images of the original data points; this is common for all linear mapping algorithms.

Computation of discriminant vectors φ1 and φ2: The criterion to evaluate the classification performance is given by the ratio between the between-class scatter matrix and the within-class scatter matrix, which is non-singular:

J(φ) = (φ^T Sb φ) / (φ^T Sw φ)    (3)

Sb = Σ P(ωi) (mi − m0)(mi − m0)^T    (4)

Sw = Σ P(ωi) E[(X − mi)(X − mi)^T]    (5)

where:
P(ωi) = the a priori probability of the ith class
mi = the mean of each feature of the ith class
m0 = the global mean of each feature of all the patterns in all the classes
Xi (i = 1, 2, ..., L) = the n-dimensional patterns
L = the number of patterns

Eq. 3 states that the distance between the class centers should be maximum. The discriminant vector φ1 that maximizes J in Eq. 3 is found as a solution of the eigenvalue problem given by

Sb φ1 = λmax Sw φ1    (6)

where:
λmax = the greatest non-zero eigenvalue of (Sw⁻¹ Sb)
φ1 = the eigenvector corresponding to λmax

The reason for choosing the eigenvector with the maximum eigenvalue is that the Euclidean distance of this vector will be the maximum, when compared with that of the other eigenvectors of Eq. 6. Another discriminant vector φ2 is obtained by using the same criterion of Eq. 3. The discriminant vector φ2 should also satisfy the condition given by:

φ2^T φ1 = 0    (7)

Eq. 7 indicates that the solution obtained is geometrically independent and the vectors φ1 and φ2 are perpendicular to each other. Whenever the patterns are perpendicular to each other, it means that there is absolutely no redundancy, or repetition of a pattern, during the collection of tool wear patterns in the turning operation. The discriminant vector φ2 is found as a solution of the eigenvalue problem, which is given by

Q Sb φ2 = λ2 Sw φ2    (8)

where:
λ2 = the greatest non-zero eigenvalue of (Sw⁻¹ Q Sb)
Q = the projection matrix, which is given by

Q = I − (φ1 φ1^T Sw⁻¹) / (φ1^T Sw⁻¹ φ1)    (9)

where I = an identity matrix.

The eigenvector corresponding to the maximum eigenvalue of Eq. 8 is the discriminant vector φ2. In Eq. 6 and 8, Sw should be non-singular. The Sw matrix should be non-singular even for a more general discriminant analysis and multi-orthonormal vectors (Foley and Sammon; Liu and Cheng). If the determinant of Sw is zero, then Singular Value Decomposition (SVD) of Sw has to be done. On using SVD, Sw is decomposed into three matrices U, V and W. The matrices U and W are unitary matrices and V is a diagonal matrix with non-negative diagonal elements arranged in decreasing order. A very small value is to be added to those diagonal elements of the V matrix whose value is zero. This process is called perturbation. After perturbing the V matrix, the matrix Sw′ is calculated by

Sw′ = U V′ W^T    (10)

where Sw′ = the non-singular matrix which has to be considered in the place of Sw.

The perturbing value should be the minimum that is just sufficient to make Sw′ non-singular. The method of SVD computation and its applications are given in the literature. As per Eq. 7, when the values of φ1 and φ2 are inner-producted, the resultant value should be zero. In reality, the inner product will not be exactly zero.
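The computation of φ1 and φ2 described above can be sketched as follows. This is an illustration of Eqs. 3-9 on synthetic two-class data with equal a priori probabilities, not the authors' code; the data, class means and matrix sizes are assumptions:

```python
import numpy as np

# Sketch of Eqs. 3-9: build the between-class scatter Sb and the
# within-class scatter Sw, then find the two discriminant vectors
# phi1 and phi2 of the optimal discriminant plane.
rng = np.random.default_rng(0)
classes = [rng.normal(loc=m, scale=0.5, size=(15, 4))   # two synthetic
           for m in ([0, 0, 0, 0], [2, 1, 0, 1])]       # 4-feature classes
m0 = np.vstack(classes).mean(axis=0)                    # global mean m0

Sb = np.zeros((4, 4))
Sw = np.zeros((4, 4))
for Xc in classes:                                      # P(wi) = 1/2 each
    mi = Xc.mean(axis=0)
    Sb += 0.5 * np.outer(mi - m0, mi - m0)              # Eq. 4
    Sw += 0.5 * np.cov(Xc, rowvar=False, bias=True)     # Eq. 5

# pinv uses SVD internally, playing the role of the perturbation step
Swinv = np.linalg.pinv(Sw)

# phi1: eigenvector of Sw^-1 Sb for the largest eigenvalue (Eq. 6)
vals, vecs = np.linalg.eig(Swinv @ Sb)
phi1 = np.real(vecs[:, np.argmax(vals.real)])

# phi2: solves Q Sb phi2 = lambda2 Sw phi2 (Eq. 8) with the projection
# matrix Q of Eq. 9, which enforces phi2^T phi1 = 0 (Eq. 7)
Q = np.eye(4) - np.outer(phi1, Swinv @ phi1) / (phi1 @ Swinv @ phi1)
vals2, vecs2 = np.linalg.eig(Swinv @ Q @ Sb)
phi2 = np.real(vecs2[:, np.argmax(vals2.real)])

print(abs(phi1 @ phi2))   # near zero, per Eq. 7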
This is due to floating point operations.

Computation of the two-dimensional vectors from the original n-dimensional input patterns: The two-dimensional vector set is obtained by:

yi = (φ1, φ2)^T Xi    (11)

The vector set yi is obtained by projecting the original patterns X onto the space spanned by φ1 and φ2, by using Eq. 11. The two components of yi can be plotted in a two-dimensional graph, to know the distribution of the original patterns.

Basics of Artificial Neural Network (ANN): An Artificial Neural Network (ANN) is an abstract simulation of a real nervous system that contains a collection of neuron units communicating with each other via axon connections. Such a model bears a strong resemblance to axons and dendrites in a nervous system. Due to this self-organizing and adaptive nature, the model offers, potentially, a new parallel processing paradigm. This model can be more robust and user-friendly than the traditional approaches. An ANN has nodes or neurons, which are described by difference or differential equations. The nodes are interconnected layer-wise or interconnected among themselves. Each node in a layer receives the inner product of the synaptic weights with the outputs of the nodes in the previous layer. This inner product is called the activation value. When the activation value is given as input to a neuron, the output of that neuron should lie in the closed interval [0, 1]. To achieve this, a sigmoid function is used to squash the activation value. The supervised learning method is most suitable for learning tool wear, where both the inputs and the outputs of the patterns are used for training the ANN. The commonly used supervised learning method is the back-propagation algorithm (BPA).
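A minimal sketch of Eq. 11 followed by one sigmoid-squashed forward pass through a small network; the discriminant vectors, weights and layer sizes below are placeholder assumptions, chosen only to make the example self-contained:

```python
import numpy as np

# Project an n-D pattern onto the (phi1, phi2) plane (Eq. 11), then
# feed the 2-D vector through one hidden layer whose activation
# values are squashed into [0, 1] by the sigmoid.
def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

phi1 = np.array([0.5, 0.5, 0.5, 0.5])    # placeholder discriminant vectors
phi2 = np.array([0.5, -0.5, 0.5, -0.5])
X = np.array([1.0, 2.0, 3.0, 4.0])       # one n-dimensional pattern

y = np.stack([phi1, phi2]) @ X           # Eq. 11: 2-D image of X

W1 = np.full((3, 2), 0.1)                # 2 inputs -> 3 hidden nodes
W2 = np.full((1, 3), 0.1)                # 3 hidden -> 1 output (e.g., VB)
hidden = sigmoid(W1 @ y)                 # activation value, then squash
out = sigmoid(W2 @ hidden)

print(y, out)                            # output lies in (0, 1)
```

Only the two-dimensional image y, rather than the full four-dimensional pattern, enters the network, which is the dimensionality reduction the paper exploits.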
The main drawback of the BPA is that it is very slow in convergence and gets stuck in local minima. In order to increase the convergence rate, the recursive extended Kalman filter algorithm is used.

Extended Kalman Filter algorithm (EKF): The algorithm uses a modified form of the BPA, to minimize the difference between the desired outputs and the actual outputs with respect to the inputs (inner products) to the non-linearities, whereas in the conventional BPA the difference between the desired outputs and the outputs of the network is minimized with respect to the weights.

Fig. 1: Multilayer artificial neural network

The EKF algorithm is a state estimation method for a non-linear system and it can be used as a parameter estimation method, by augmenting the state with unknown parameters. A multi-layered network is a non-linear system with a layered structure and its learning algorithm can be regarded as parameter estimation for such a system. The multi-layered ANN is shown in Fig. 1. The EKF-based learning algorithm gives approximately the minimum variance estimates of the weights. The convergence of EKF is faster than that of BPA. Error values, which are generated by EKF, are used to estimate the inputs to the non-linearities. The estimated inputs, along with the input vectors to the respective nodes, are used to produce an updated set of weights through a system of linear equations at each node. Using the Kalman filter at each layer solves these systems of linear equations. In the EKF algorithm, the inputs to the non-linearities are estimated and their error co-variance matrix is minimized. This minimization of the co-variance of the error helps in faster convergence of the network.
The steps involved in training the ANN, by using the extended Kalman filter algorithm, are:

Step 1: Initialize the weights and thresholds randomly between the layers, together with the initial trace of the error co-variance matrix Q and the accelerating parameters, which are set to a very small value.

Step 2: Present the inputs of a pattern and compute the outputs of the nodes in the successive layers.
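The per-pattern update behind these steps can be sketched as follows. This is a highly simplified global EKF that estimates the weights of a single sigmoid node, not the paper's layer-wise scheme with estimated inputs to the non-linearities; the data, noise level R and initial co-variance are assumptions made for the example:

```python
import numpy as np

# EKF as parameter estimation: the weights are the state, each training
# pattern supplies one "measurement" of the node output.
def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(1)
w_true = np.array([1.5, -2.0])            # weights to be recovered
X = rng.normal(size=(200, 2))             # 200 two-feature patterns
d = sigmoid(X @ w_true)                   # desired (noise-free) outputs

w = np.zeros(2)                           # Step 1: initial weights,
P = np.eye(2) * 10.0                      # error co-variance matrix,
R = 1e-3                                  # assumed measurement noise

for x, target in zip(X, d):               # Step 2 onward, per pattern
    out = sigmoid(w @ x)                  # forward pass
    H = out * (1.0 - out) * x             # Jacobian of output w.r.t. w
    S = H @ P @ H + R                     # innovation co-variance
    K = (P @ H) / S                       # Kalman gain
    w = w + K * (target - out)            # weight update
    P = P - np.outer(K, H @ P)            # co-variance update

print(np.round(w, 2))                     # moves toward w_true
```

The shrinking co-variance P plays the role described above: as it is minimized, the gain K decreases and the weight estimates settle, which is why EKF training typically converges in fewer iterations than plain BPA.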
