PUBLISHED BY : Quantum Page Pvt. Ltd.
Site Office : Industrial Area, Ghaziabad
Sales Office : East Rohtas Nagar, Shahdara, Delhi

© ALL RIGHTS RESERVED. No part of this publication may be reproduced or transmitted, in any form or by any means, without permission.

Information contained in this work is derived from sources believed to be reliable. Every effort has been made to ensure accuracy; however, neither the publisher nor the authors guarantee the accuracy or completeness of any information published herein, and neither the publisher nor the authors shall be responsible for any errors, omissions, or damages arising out of the use of this information.

Application of Soft Computing (CS/IT : Sem-7)
1st Edition : 2010-11
2nd Edition : 2011-12
3rd Edition : 2012-13
4th Edition : 2013-14
5th Edition : 2014-15
6th Edition : 2015-16
7th Edition : 2016-17
8th Edition : 2019-20 (Thoroughly Revised Edition)
Price : Rs. 110/- only

UNIT-1 : Neural Network-I (Introduction & Architecture)

Commonly used activation functions in a neural network :
1. Identity (linear) function : f(x) = x for all x. The output remains the same as the input; the input layer uses the identity activation function.
2. Binary step (threshold) function : f(x) = 1 if x >= θ, and 0 if x < θ, where θ is the threshold value. It is used in single layer networks to convert the net input into a binary output (1 or 0).
3. Bipolar step function : f(x) = 1 if x >= θ, and -1 if x < θ, where θ represents the threshold value. This function is also used in single layer networks to convert the net input into a bipolar output (+1 or -1).
4. Signum function : if the net input y_in is greater than 0 the output is +1, otherwise the output is -1.
5. Binary sigmoid function : f(x) = 1 / (1 + e^(-ax)), where a is the slope parameter. It is a continuous function that varies gradually between the asymptotic values 0 and 1. By varying the parameter a we obtain sigmoid functions of different slopes; the slope at the origin equals a/4.

Que 1.10. Derive an activation function for thresholding function.   [AKTU 2014-15 (Sem-IV), Marks 10]
Answer
1. Fig. 1.10.1 shows a simple model of an artificial neuron built around a thresholding unit.
   [Fig. 1.10.1 : Simple model of an artificial neuron.]
2. Let x1, x2, ..., xn be the inputs to the artificial neuron and w1, w2, ..., wn be the weights attached to these inputs.
3. The total (net) input is the weighted sum
   I = w1x1 + w2x2 + ... + wnxn = Σ (i = 1 to n) wi xi
4. The sum I is passed through the thresholding unit. A commonly used activation function is the thresholding function : if the value of I is greater than the threshold value θ, the output is 1, otherwise it is 0, i.e.,
   y = f(I) = 1 if I > θ, and 0 if I <= θ
5. The neuron therefore fires (output 1) only when the weighted sum of its inputs exceeds the threshold; for such a unit the network performs the thresholding activation function.
6. Types of activation functions used for thresholding are :
   a. Bipolar continuous : f(net) = 2 / (1 + exp(-λ net)) - 1
   b. Bipolar binary : f(net) = sgn(net) = +1 if net > 0, and -1 if net < 0
7. Activation functions (a) and (b) are known as bipolar continuous and bipolar binary functions respectively. Unipolar responses of neurons are created by shifting and scaling the bipolar activation functions.

Neural Network Architecture : Single Layer and Multilayer Feed Forward Networks, Recurrent Network.

Questions-Answers
Long Answer Type and Medium Answer Type Questions

Que 1.11. Draw a single layer feed forward network and explain its working functions.
Answer
[Fig. 1.11.1 : Single layer feed forward network.]
Working functions :
1. A single layer feed forward network comprises an input layer and an output layer of processing units.
2. The input layer neurons receive the input signals and the output layer neurons receive the output signals.
3. The synaptic links carrying the weights connect every input neuron to the output neurons, but not vice-versa.
4. Such a network is said to be feed forward in type, or acyclic in nature.
5. The input layer merely transmits the signals to the output layer; the output layer performs the computation.
6. The sum of the products of the weights and the node inputs is calculated in each output node.
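The activation functions listed above, and the thresholding neuron of Que 1.10, are easy to check numerically. The following is a minimal Python/NumPy sketch; the function names, sample weights and threshold value are illustrative assumptions, not taken from the text.

```python
import numpy as np

def identity(x):                       # output equals input (input layer)
    return x

def binary_step(x, theta=0.0):         # 1 if net input >= theta, else 0
    return np.where(x >= theta, 1, 0)

def bipolar_step(x, theta=0.0):        # 1 if net input >= theta, else -1
    return np.where(x >= theta, 1, -1)

def signum(x):                         # +1 for positive net input, -1 otherwise
    return np.where(x > 0, 1, -1)

def binary_sigmoid(x, a=1.0):          # continuous, in (0, 1); slope at origin = a/4
    return 1.0 / (1.0 + np.exp(-a * x))

# Thresholding neuron of Que 1.10: I = sum(w_i * x_i), output 1 if I > theta.
x = np.array([1.0, 0.0, 1.0])          # illustrative inputs
w = np.array([0.4, 0.3, 0.5])          # illustrative weights
I = w @ x                              # net input = 0.9
print(binary_step(I, theta=0.5))       # exceeds the threshold, so it prints 1
print(np.round(binary_sigmoid(I, a=2.0), 3))
```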
Que 1.13. Write short notes on :
i. Single layer feed forward network
ii. Multilayer feed forward network
iii. Recurrent network    [AKTU 2015-16 (Sem-IV), Marks 05]
OR
What is neural network architecture ? Explain the different types of neural network architectures.
Answer
Neural network architecture : Neural network architecture refers to the arrangement of neurons into layers and the connection patterns between layers, along with the activation functions and learning methods used.
Different types of neural network architecture :
1. Single layer feed forward network : Refer Q. 1.11.
2. Multilayer feed forward network :
   i. This network is formed by the interconnection of several layers : an input layer, one or more hidden layers, and an output layer.
   ii. The input layer receives the input signal and buffers it, while the output layer generates the output of the network.
   iii. Any layer between the input and the output layer is called a hidden layer; it is internal to the network and has no direct contact with the external environment.
   iv. Adding hidden layers increases the computational capability of the network, since each hidden layer recombines the outputs of the previous layer through its own weights.
   [Fig. 1.13.1 : Multilayer feed forward network (input layer, hidden layer, output layer).]
3. Recurrent network :
   i. A Recurrent Neural Network (RNN) is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence.
   ii. This allows recurrent networks to exhibit dynamic temporal behaviour.
   iii. RNNs can use their internal state (memory) to process sequences of inputs.
   iv. RNNs are designed to take a series of inputs with no predetermined limit on size.

Que 1.14. Draw neural network architecture and explain auto-associative properties in it.   [AKTU 2013-14]
Answer
Neural network architecture :
[Fig. 1.14.1 : Artificial neural network architecture (an input layer of nodes connected to an output layer of nodes).]
Auto-associative properties :
1. In an auto-associative network the training input vector and the target output vector are the same, i.e., each pattern is associated with itself.
2. An auto-associative network can therefore be used to reproduce (restore) stored data.
3. It can recall the correct stored pattern even when a corrupted, noisy or partial version of that pattern is presented as input.

Que 1.15. How does a recurrent network work ? Compare a recurrent neural network with a multilayer neural network.
Answer
Working of a recurrent network :
1. The network first takes x0 from the sequence of inputs and produces h0; h0 together with x1 is then the input for the next step.
2. So h0 and x1 form the input for the next step, as shown in Fig. 1.15.1; similarly, h1 from that step is the input together with x2 for the step after it, and so on.
3. In this way the RNN keeps remembering the context while training.
4. The outputs are influenced not just by the weights applied on the inputs, as in a regular neural network, but also by a hidden state vector representing the context based on the prior inputs/outputs; the same input can therefore produce a different output depending on the previous inputs in the sequence.
[Fig. 1.15.1 : A recurrent network unfolded over time.]
Comparison :
S. No. | Recurrent neural network | Multilayer neural network
1. | Data and calculations also flow in the backward direction, from the output back towards the input. | Data and calculations flow in a single direction, from the input data to the outputs.
2. | It contains feedback links. | It does not contain feedback links.
3. | It is used for text data, speech data and time-series data. | It is used for image data.

Que 1.16. Construct a recurrent network with four input nodes, three hidden nodes and four output nodes that has lateral inhibition structure in the output layer.
Answer
A recurrent network with four input nodes, three hidden nodes and four output nodes, in which the output layer nodes are connected to one another through lateral (feedback) links, is shown in Fig. 1.16.1.
[Fig. 1.16.1 : A recurrent network with lateral connections in the output layer.]
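To make the recurrence of Que 1.15 concrete, below is a minimal sketch of a simple RNN cell processing a sequence; the tanh nonlinearity, random weights and random inputs are illustrative assumptions, not taken from the text (the sizes follow Que 1.16).

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 3                       # four input nodes, three hidden nodes
Wx = rng.normal(size=(n_hidden, n_in))      # input-to-hidden weights
Wh = rng.normal(size=(n_hidden, n_hidden))  # hidden-to-hidden (recurrent) weights
b = np.zeros(n_hidden)

def rnn_step(x_t, h_prev):
    # The new hidden state depends on the current input AND the previous state,
    # so the same input can produce different outputs for different histories.
    return np.tanh(Wx @ x_t + Wh @ h_prev + b)

h = np.zeros(n_hidden)                      # h0 before any input is seen
for t, x_t in enumerate(rng.normal(size=(5, n_in))):   # a sequence of five inputs
    h = rnn_step(x_t, h)
    print(f"step {t}: h = {np.round(h, 3)}")
```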
Que 1.17. Write a short note on recurrent auto-associative memory. Explain its pros and cons.
Answer
Recurrent auto-associative memory :
1. Recurrent auto-associative memory is also called dynamic memory.
2. A dynamic memory performs recall through an input/output feedback loop which requires time; Fig. 1.17.1 illustrates such a memory M.
3. The operator M operates at the present instant k on the present output v_k to produce the output at the next instant, v_(k+1) = M[v_k].
4. A unit time delay is needed in the feedback loop for the cyclic operation.
5. Patterns are associated with themselves, i.e., the stored pairs are of the form (a_p, a_p).
6. The recall evolves dynamically and finally converges to an equilibrium state according to the recursive formula v_(k+1) = M[v_k].
[Fig. 1.17.1 : Recurrent auto-associative memory.]
Pros : It tends to eliminate noise superimposed on the initial (key) vector.
Cons : It has limited storage capability and may converge to spurious memories (attractors).
Application (adaptive spell correction) :
1. A recurrent auto-associative memory can be used to build a spell checker that adapts to a particular typist : the network is trained using observations of the specific corrections that the typist makes.
2. Corrections that are made enough times are characterized as possible errors, and the word used to replace the errant word is then defined as the corrected word associated with that specific error.
3. If the typist makes the same error again, the program subtly suggests the corrected word that is associated with the error.
4. Since the actual behaviour of the typist is used to determine potential errors, problems associated with traditional spell-checking methods do not apply.

Various Learning Techniques : Perceptron and Convergence Rule.

Questions-Answers
Long Answer Type and Medium Answer Type Questions

Que 1.18. Briefly discuss classification of learning algorithms.
Answer
Classification of learning algorithms : Learning algorithms in a neural network are broadly classified as supervised, unsupervised and reinforcement learning (Fig. 1.18.1).
a. Supervised learning :
   1. Learning is performed with the help of a trainer (teacher).
   2. In an ANN, each input vector requires a corresponding target vector, which represents the desired output.
   3. The input vector together with the target vector is called a training pair.
   4. During training, the input vector presented to the network results in an output vector (the actual output).
   5. The actual output is compared with the desired (target) output; if they differ, an error signal is generated and is used to adjust the weights until the actual output matches the desired output.
b. Unsupervised learning :
   1. There is no feedback from the environment to say what the outputs should be or whether they are correct.
   2. The network itself discovers patterns, regularities, features or categories from the input data and the relations of the input data over the output.
   3. Exact clusters are formed by discovering similarities and dissimilarities, so this mode of learning is also called self-organizing.
c. Reinforcement learning :
   1. Learning based on critic information is called reinforcement learning, and the feedback signal is called the reinforcement signal.
   2. The network receives some feedback from the environment, but the feedback is only evaluative, not instructive.
   3. The external reinforcement signals are processed in a critic signal generator, and the obtained critic signals are sent to the ANN for adjustment of the weights, so that better critic feedback is obtained in future.
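The practical difference between the first two modes can be seen in a toy weight update. The sketch below is illustrative only : the numbers, the learning rate and the simple prototype-following update used for the unsupervised case (in the spirit of competitive/self-organizing learning) are assumptions, not taken from the text.

```python
import numpy as np

x = np.array([0.5, -1.0])      # input vector (illustrative)
w = np.array([0.2, 0.4])       # current weights
eta = 0.1                      # learning rate

# Supervised: a target is supplied, so an explicit error signal can be formed
# and used to adjust the weights (training pair = (x, target)).
target = 1.0
y = w @ x                      # actual output of a simple linear unit
error = target - y             # error signal
w_supervised = w + eta * error * x

# Unsupervised: no target exists; the weights simply move towards the input,
# so the unit gradually becomes a prototype for similar inputs.
w_unsupervised = w + eta * (x - w)

print("supervised  :", np.round(w_supervised, 3))
print("unsupervised:", np.round(w_unsupervised, 3))
```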
Que 1.20. What is meant by learning ? How is supervised learning different from unsupervised learning ? What is reinforcement learning ? Discuss the basic difference between supervised and unsupervised learning with a suitable example.
OR
Draw an artificial neural network. Explain supervised and unsupervised learning in artificial neural network.   [AKTU 2016-17 (Sem-IV)]
Answer
Learning : Learning is the process by which a neural network adapts its weights (and biases) in response to the information presented to it, so that its actual outputs gradually move closer to the desired behaviour.
[Fig. 1.20.1 : A simple artificial neural network (input layer, hidden layer, output layer).]
Supervised, unsupervised and reinforcement learning : Refer Q. 1.18.
Difference between supervised and unsupervised learning :
S. No. | Supervised learning | Unsupervised learning
1. | It uses known and labelled data as input. | It uses unknown (unlabelled) data as input.
2. | Computational complexity is high. | Computational complexity is low.
3. | The number of classes is known in advance. | The number of classes is not known.
4. | It gives accurate and reliable results. | It gives moderately accurate and reliable results.
Example : Classifying handwritten digits when the correct digit is given for every training image is supervised learning; grouping the same images into clusters of similar-looking digits without any labels is unsupervised learning.

Que 1.21. Explain the perceptron learning rule. State the perceptron convergence theorem.
Answer
Perceptron learning rule :
1. The perceptron learning rule is based on desired behaviour. The desired behaviour is summarized by a set of input-output pairs {p1, t1}, {p2, t2}, ..., {pq, tq}, where p is an input to the network and t is the corresponding correct (target) output.
2. The objective of the perceptron rule is to reduce the error e, i.e., the difference between the neuron response a and the target vector t.
3. The rule calculates the desired changes to the perceptron's weights and biases, given an input vector p and the associated error e.
4. Each time the rule is executed, the perceptron has a better chance of producing the correct outputs.
5. If the output is 0 while the target is 1, the input vector is added to the weight vector; this makes the weight vector point closer to the input vector, increasing the chance that the input vector will be classified as 1 in future.
6. If the output is 1 while the target is 0, the input vector is subtracted from the weight vector; this makes the weight vector point farther away from the input vector, increasing the chance that it will be classified as 0 in future.
7. The perceptron rule is guaranteed to converge on a solution in a finite number of iterations if a solution exists, i.e., if the input vectors to be classified are linearly separable.
Perceptron convergence theorem : The theorem states that if there is a weight vector w* such that f(p . w*) = t(p) for all training patterns p, then for any starting weight vector, the perceptron learning rule will converge to a weight vector (not necessarily unique and not necessarily w*) that gives the correct response for all training patterns, and it will do so in a finite number of steps.

Que 1.22. Explain Hebbian learning.
Answer
1. The Hebbian learning rule specifies how much the weight of the connection between two units should be increased or decreased in proportion to the product of their activations.
2. The rule states that the connection between two neurons is strengthened if the neurons fire simultaneously.
3. Three major points were stated as part of the Hebbian learning mechanism :
   i. Information is stored in the connections between neurons in a neural network, in the form of weights.
   ii. The weight change between two neurons is proportional to the product of their activation values :
       w_ij(new) = w_ij(old) + x_i * y_j
   iii. As learning takes place, simultaneous or repeated activation of weakly connected neurons incrementally changes the strength and pattern of the weights, leading to stronger connections.
4. When used for pattern association, the input-output pattern pairs are associated through the weight matrix W built up by these updates.
5. The Hebbian rule works well as long as all the input patterns are orthogonal or uncorrelated.

Que 1.25. Explain the delta rule for pattern association.
Answer
1. The delta rule is used for supervised training of a neural network.
2. The weight adjustment is
   ΔW_ij = η (d_j - y_j) f'(net_j) x_i
   where η is the learning rate, d_j the desired output, y_j the actual output, f'(net_j) the derivative of the activation function, and x_i the i-th input.
3. The term f'(net_j) is the derivative of the activation function evaluated at the net input.
4. The rule can be readily derived from the condition of least squared error between d_j and y_j, by calculating the gradient of the squared error with respect to each weight and adjusting the weight in the direction that makes the error smaller.

Que 1.26. Write the different learning rules used in a neural network.
Answer
Different learning rules in a neural network are :
1. Hebbian learning rule : Refer Q. 1.22.
2. Perceptron learning rule : Refer Q. 1.21.
3. Delta learning rule : Refer Q. 1.25.
4. Correlation learning rule :
   i. The correlation learning rule is based on a similar principle as the Hebbian learning rule.
   ii. It assumes that weights between simultaneously responding neurons should be strongly positive, and weights between neurons with opposite reactions should be strongly negative.
   iii. Unlike the Hebbian rule, the correlation rule is supervised learning.
   iv. In mathematical form, the correlation learning rule is
       ΔW_ij = η x_i d_j
       where d_j is the desired value of the output signal, x_i the input signal, ΔW_ij the change in weight, and η the learning rate.
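For a single processing unit, the four rules above differ mainly in the quantity that multiplies the input in the weight update. A minimal Python sketch of one update step for each rule is given below; the input, target, initial weights and learning rate are illustrative assumptions, not values from the text.

```python
import numpy as np

x = np.array([1.0, -0.5, 0.25])   # input vector (illustrative)
d = 1.0                           # desired / target output
eta = 0.1                         # learning rate
w = np.array([0.2, -0.1, 0.4])    # current weights

net = w @ x
y_sigmoid = 1.0 / (1.0 + np.exp(-net))   # continuous output
y_step = 1.0 if net >= 0 else 0.0        # hard-limit output

dw_hebbian = eta * x * y_sigmoid                                   # uses activations only
dw_perceptron = eta * (d - y_step) * x                             # error-driven; zero here, output already correct
dw_delta = eta * (d - y_sigmoid) * y_sigmoid * (1 - y_sigmoid) * x # error * f'(net) * input
dw_correlation = eta * x * d                                       # uses the desired output directly

for name, dw in [("Hebbian", dw_hebbian), ("Perceptron", dw_perceptron),
                 ("Delta", dw_delta), ("Correlation", dw_correlation)]:
    print(f"{name:11s} dW = {np.round(dw, 4)}")
```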
Auto-associative and Hetero-associative Memory.

Questions-Answers
Long Answer Type and Medium Answer Type Questions

Que 1.27. What is hetero-associative and auto-associative memory ?   [AKTU 2018-19 (Sem-IV), Marks 07]
OR
What is hetero-associative memory ? Describe it in the context of a neural network.
Answer
Difference between hetero-associative and auto-associative memory :
S. No. | Hetero-associative memory | Auto-associative memory
1. | The training input vector and the target output vector are different. | The training input vector and the target output vector are the same.
2. | The associated pattern pairs (s, t) are stored; the recalled output can differ from the input in type and format. | Each pattern is stored in association with itself, so the recalled output has the same form as the input.
3. | Presenting an input pattern recalls the stored output pattern associated with it. | Even a distorted (noisy) input pattern can recall the stored pattern perfectly.
Hetero-associative neural network :
1. A hetero-associative network consists of a single layer of weighted interconnections; such nets are also known as hetero-correlators.
2. The input layer has n units and the output layer has m units, with a weighted interconnection from every input unit to every output unit.
3. The weights are determined, using the Hebb rule or the Delta rule, in such a way that the network stores a set of pattern associations.
[Fig. 1.27.1 : Hetero-associative neural network with inputs x1 ... xn and outputs y1 ... ym.]

Que 1.28. Using the Hebb rule, find the weight matrix for a hetero-associative network that stores the given set of input vectors S = (s1 s2 s3 s4) mapped to the output vectors T = (t1 t2).   [AKTU 2015-16 (Sem-IV), Marks 10]
Answer
The weight matrix for S -> T is calculated by adding the outer products s_p^T t_p of the training pairs.
For S1 = (1 0 1 0) and t1 = (1 0) :
   s1^T t1 =
   [1 0]
   [0 0]
   [1 0]
   [0 0]
For S2 = (1 1 0 0) and t2 = (1 0) :
   s2^T t2 =
   [1 0]
   [1 0]
   [0 0]
   [0 0]
The remaining pairs, which are associated with the target (0 1), are treated in the same way, and the final weight matrix W is the sum of all the individual outer-product matrices.

Que 1.29. Write a short note on Bidirectional Associative Memory (BAM).
Answer
1. Bidirectional Associative Memory (BAM) is a recurrent hetero-associative memory.
2. It contains two layers of neurons, namely an input layer and an output layer, which are fully connected to each other; once the weights have been established, presenting a pattern at one layer retrieves the associated pattern at the other layer.
3. Here x and y represent the input and output layers respectively, with neurons x1, ..., xn and y1, ..., ym joined by the interconnecting linkage.
   [Fig. 1.29.1 : Structure of a BAM.]
4. The BAM is used to store P bipolar pairs (a_p, b_p), p = 1, 2, ..., P, where a_p = (a_p1, ..., a_pn), b_p = (b_p1, ..., b_pm) and a_pi, b_pj ∈ {-1, 1}.
5. The first layer has n neurons and the second has m neurons.
6. BAM learning is accomplished with the simple Hebbian rule : the weight matrix is W = X^T Y = Σ_p a_p^T b_p, where X and Y are the matrices whose rows are the bipolar vectors a_p and b_p.
7. This learning rule leads to poor memory storage capacity, is sensitive to noise, and is subject to spurious steady states during recall.
8. The retrieval process is an iterative feedback process that starts with an initializing vector applied at one layer; the signal passes through W to the other layer, is thresholded, passes back through W^T, and so on, until the pair of recalled patterns stabilizes.
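A minimal sketch of BAM storage and one recall cycle is shown below. The two bipolar training pairs are illustrative assumptions and are not the training pairs of Que 1.28.

```python
import numpy as np

# Two illustrative bipolar training pairs (a_p, b_p); not taken from the text.
A = np.array([[ 1,  1,  1,  1],
              [ 1, -1,  1, -1]])          # a_p vectors, n = 4
B = np.array([[ 1, -1],
              [-1,  1]])                  # b_p vectors, m = 2

W = A.T @ B                               # Hebbian rule: W = sum_p a_p^T b_p

sign = lambda v: np.where(v >= 0, 1, -1)  # bipolar thresholding

a_noisy = np.array([-1, 1, 1, 1])         # a_1 with its first bit flipped
b_recalled = sign(a_noisy @ W)            # forward pass recalls b_1 = (1, -1)
a_recalled = sign(b_recalled @ W.T)       # backward pass cleans up to a_1 = (1, 1, 1, 1)
print(b_recalled, a_recalled)
```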
