[Figure: ANN viewed from several disciplines: computer science (artificial intelligence), mathematics (approximation theory, optimization), statistics (time series, data mining), engineering (image and signal processing), and control theory (robotics).]
Figure 1-1 The multi-disciplinary point of view of neural networks.
Some credit card companies are also using neural networks in their application screening process.

This newest method of looking into the future by analyzing past experiences has generated its own unique set of problems. One such problem is to provide a reason behind a computer-generated answer, say, as to why a particular loan application was denied. It has been difficult to explain how a network learned and why it recommends a particular decision. The inner workings of neural networks are "black boxes"; some people have even called the use of neural networks "voodoo engineering." To justify the decision-making process, some vendors have provided programs that explain which input through which node dominates the decision-making process. From this information, experts in the application may be able to understand which input data are important and why.

Besides such explanation of a network's work, research is progressing in other more promising areas. This chapter goes through some of these areas and briefly details the current work. The objective is to make the reader aware of various possibilities where neural networks offer solutions, such as language processing, character recognition, image compression and pattern recognition.
Neural networks can be viewed from a multi-disciplinary point of view, as shown in Figure 1-1.
1.2 Application Scope of Neural Networks

The neural network has a good scope of being used in the following areas:

1. Air traffic control could be automated, with the location, altitude, direction and speed of each radar blip taken as input to the network. The output would be the air traffic controller's instruction in response to each blip.
2. Animal behavior, predator/prey relationships and population cycles may be suitable for analysis by neural networks.
3. Appraisal and valuation of property, buildings, automobiles, machinery, etc. should be an easy task for a neural network.

Artificial Neural Network: Definition
An artificial neural network (ANN) may be defined as an information-processing model that is inspired by the way biological nervous systems, such as the brain, process information. This model tries to replicate only the most basic functions of the brain. The key element of ANN is the novel structure of its information-processing system. An ANN is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems.

Artificial neural networks, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. In biological systems, learning involves adjustments to the synaptic connections that exist between the neurons. ANNs undergo a similar change that occurs when the concept on which they are built leaves the academic environment and is thrown into the harsher world of users who simply want to get a job done on computers accurately all the time. Many neural networks now being designed are statistically quite accurate, but they still leave their users with a bad taste as they falter when it comes to solving problems accurately. They might be 85-90% accurate. Unfortunately, few applications tolerate that level of error.
Advantages of Neural Networks

Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, could be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network could be thought of as an "expert" in the particular category of information it has been given to analyze. This expert could be used to provide projections in new situations of interest and answer "what if" questions. Other advantages of working with an ANN include:

1. Adaptive learning: An ANN is endowed with the ability to learn how to do tasks based on the data given for training or initial experience.
2. Self-organization: An ANN can create its own organization or representation of the information it receives during learning time.
3. Real-time operation: ANN computations may be carried out in parallel. Special hardware devices are being designed and manufactured to take advantage of this capability of ANNs.
4. Fault tolerance via redundant information coding: Partial destruction of a neural network leads to the corresponding degradation of performance. However, some network capabilities may be retained even after major network damage.
Currently, neural networks cannot function as a user interface which translates spoken words into instructions for a machine, but someday they may have this skill. Then VCRs, home security systems, CD players and word processors would simply be activated by voice. Touch screen and voice editing would replace the word processing of today. Besides, spreadsheets and databases would be imparted a level of usability that would be pleasing to everyone. But for now, neural networks are only entering the marketplace in niche areas. These niches indeed involve applications where the answers provided by the software programs are not accurate but vague. Loan approval is one such area. Financial institutions make more money if they succeed in keeping a low bad-loan rate. For these institutions, installing systems that are "90% accurate" in selecting the genuine loan applicants might be an improvement over their current selection process. Indeed, some banks have found that the failure rate on loans approved by neural networks is lower than that on loans approved by some of their best traditional methods.
Other areas within the application scope of neural networks include the following:

4. Betting on horse races, stock markets, sporting events, etc. could be aided by predictions from networks trained on past results. ... Employee hiring could be optimized if the neural networks could predict which job applicant would show the best job performance.
12. Expert consultants could package their intuitive expertise into a neural network to automate their services.
13. Fraud detection regarding credit cards, insurance or taxes could be automated.
14. Handwriting and typewriting could be recognized by imposing a grid over the writing; each square of the grid then becomes an input to the neural network. The network's actual output vector is compared with the desired output vector, and the difference between the two output vectors guides the adjustment of the net.
2.3.2.1 Supervised Learning
[Figure: the input X is applied to the neural network (weights W) to produce the actual output; an error signal generator compares the actual output with the desired output and sends error signals back for weight adjustment.]
Figure 2-12 Supervised learning.
The error signal generated by comparing the actual output with the desired output is sent back to the network. This error signal is used for the adjustment of weights until the actual output matches the desired (target) output. In this type of training, a supervisor or teacher is required for error minimization; hence, the network trained by this method is said to be using supervised training methodology. In supervised learning, it is assumed that the correct "target" output values are known for each input pattern.
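This error-driven loop can be sketched in a few lines of Python. The sketch below is an illustration only, not this chapter's specific algorithm: the function name train_supervised, the learning rate alpha and the simple error-correction update w = w + alpha*(t - y)*x are all assumptions of the sketch.

    import numpy as np

    # Minimal sketch of supervised learning for one linear unit.
    # The "error signal generator" compares desired and actual output;
    # the error drives the weight adjustment (alpha is an assumed rate).
    def train_supervised(inputs, targets, alpha=0.1, epochs=20):
        w = np.zeros(inputs.shape[1])   # weights start at zero
        b = 0.0                         # bias starts at zero
        for _ in range(epochs):
            for x, t in zip(inputs, targets):
                y = np.dot(w, x) + b    # actual output of the net
                error = t - y           # error signal: desired minus actual
                w += alpha * error * x  # adjust weights toward the target
                b += alpha * error
        return w, b

    # Example: learn the bipolar AND targets used later in this chapter.
    X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
    t = np.array([1, -1, -1, -1], dtype=float)
    print(train_supervised(X, t))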
2.3.2.2 Unsupervised Learning
The learning here is performed without the help of a teacher. Consider the learning process of a tadpole: it learns by itself. That is, a child fish learns to swim by itself; it is not taught by its mother. Thus, its learning process is independent and is not supervised by a teacher. In an ANN following unsupervised learning, the input vectors of similar type are grouped without the use of training data to specify how a member of each group looks or to which group a member belongs. In the training process, the network receives the input patterns and organizes these patterns to form clusters. When a new input pattern is applied, the neural network gives an output response indicating the class to which the input pattern belongs. If a class cannot be found for the input pattern, then a new class is generated. The block diagram of unsupervised learning is shown in Figure 2-13.

From Figure 2-13 it is clear that there is no feedback from the environment to inform what the outputs should be or whether the outputs are correct. In this case, the network must itself discover patterns, regularities, features or categories from the input data and relations for the input data over the output. While discovering all these features, the network undergoes change in its parameters. This process is called self-organizing, in which exact clusters will be formed by discovering similarities and dissimilarities among the objects.
2.3.2.3 Reinforcement Learning

This learning process is similar to supervised learning. In the case of supervised learning, the correct target output values are known for each input pattern. But, in some cases, less information might be available.
[Figure: the input X is applied to the ANN (weights W) to produce the output; there is no feedback from the environment.]
Figure 2-13 Unsupervised learning.
[Figure: the input X is applied to the network (weights W) to produce the actual output; the error signal generator receives a reinforcement signal R from the environment and sends error signals back to the network.]
Figure 2-14 Reinforcement learning.
For example, the network might be told that its actual output is only "50% correct," or so. Thus, here only critic information is available, not the exact information. The learning based on this critic information is called reinforcement learning, and the feedback sent is called the reinforcement signal.
2.4.3 Threshold

The threshold θ is a fixed value used in the activation function, based upon which the final output of the net is calculated:

y = f(net) = 1 if net ≥ θ; -1 if net < θ

where θ is the fixed threshold value.
2.4.4 Learning Rate

The learning rate, denoted by "α", is used to control the amount of weight adjustment at each step of training. The learning rate ranges from 0 to 1 and determines the rate of learning at each time step.

2.4.5 Momentum Factor

Convergence is made faster if a momentum factor is added to the weight updation process. This is generally done in the back-propagation network. If momentum has to be used, the weights from one or more previous training patterns must be saved. Momentum helps the net make reasonably large weight adjustments as long as the corrections are in the same general direction for several patterns.

2.4.6 Vigilance Parameter

The vigilance parameter, denoted by "ρ", is generally used in the adaptive resonance theory (ART) network. It ranges approximately from 0.7 to 1 to perform useful work in controlling the number of clusters.
2.4.7 Notations

The notations listed below have been used in this textbook for explaining the network training algorithms.

x_i : Activation of unit X_i; for an input unit, it equals the input signal.
y_j : Activation of unit Y_j, y_j = f(y_inj).
w_ij : Weight on the connection from unit X_i to unit Y_j.
b_j : Bias acting on unit Y_j; the bias has a constant activation of 1.
W : Weight matrix, W = {w_ij}.
y_inj : Net input to unit Y_j, given by y_inj = b_j + Σ_i x_i w_ij.
||x|| : Norm (magnitude) of vector X.
θ_j : Threshold for activation of neuron Y_j.
S : Training input vector, S = (s_1, ..., s_i, ..., s_n).
T : Training output vector, T = (t_1, ..., t_j, ..., t_m).
X : Input vector, X = (x_1, ..., x_i, ..., x_n).
Δw_ij : Change in weight, Δw_ij = w_ij(new) - w_ij(old).
α : Learning rate; it controls the amount of weight adjustment at each step of training.
2.5 McCulloch-Pitts Neuron

2.5.1 Theory

The McCulloch-Pitts neuron, discovered in 1943, was the earliest neural network. It is usually called the M-P neuron. The M-P neurons are connected by directed weighted paths. It should be noted that the activation of an M-P neuron is binary, that is, at any time step the neuron may fire or may not fire. The weights associated with the communication links may be excitatory (weights positive) or inhibitory (weights negative). All the excitatory connection weights entering into a particular neuron have the same value. The threshold plays a major role in the M-P neuron: there is a fixed threshold for each neuron, and if the net input to the neuron is greater than the threshold then the neuron fires. Also, it should be noted that any nonzero inhibitory input would prevent the neuron from firing. The M-P neurons are most widely used in the case of logic functions.
2.5.2 Architecture

A simple M-P neuron is shown in Figure 2-18. As already discussed, the M-P neuron has both excitatory and inhibitory connections. A connection is excitatory with weight w (w > 0) or inhibitory with weight -p (p > 0). In Figure 2-18, inputs from x1 to xn possess excitatory weighted connections and inputs from x(n+1) to x(n+m) possess inhibitory weighted interconnections. Since the firing of the output neuron is based upon the threshold, the activation function here is defined as

f(y_in) = 1 if y_in ≥ θ; 0 if y_in < θ

For inhibition to be absolute, the threshold with the activation function should satisfy the following condition:

θ > nw - p

The output will fire if it receives, say, "k" or more excitatory inputs but no inhibitory inputs, where

kw ≥ θ > (k - 1)w
The M-P neuron has no particular training algorithm. An analysis has to be performed to determine the values of the weights and the threshold. Here the weights of the neuron are set along with the threshold to make the neuron perform a simple logic function. The M-P neurons are used as building blocks on which we can model any function or phenomenon that can be represented as a logic function.
[Figure: output neuron Y with excitatory inputs x1, ..., xn carrying weight w and inhibitory inputs x(n+1), ..., x(n+m) carrying weight -p.]
Figure 2-18 McCulloch-Pitts neuron model.
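As a concrete illustration, the following minimal Python sketch implements a fixed-weight M-P neuron. The helper name mp_neuron is an assumption of this sketch, and the weights and threshold shown (w1 = 1, w2 = -1, θ = 1) are the ANDNOT values derived in the solved problems later in this chapter.

    def mp_neuron(inputs, weights, theta):
        """McCulloch-Pitts neuron: fires (output 1) only when the net
        input sum(x_i * w_i) reaches the fixed threshold theta."""
        net = sum(x * w for x, w in zip(inputs, weights))
        return 1 if net >= theta else 0

    # ANDNOT: one excitatory weight (+1), one inhibitory weight (-1), theta = 1.
    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, "->", mp_neuron((x1, x2), (1, -1), theta=1))
    # Fires only for (1, 0), matching the ANDNOT truth table.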
2.6 Linear Separability
An ANN does not give an exact solution for a nonlinear problem; however, it provides possible approximate solutions to nonlinear problems. Linear separability is the concept wherein the separation of the input space into regions is based on whether the network response is positive or negative.

A decision line is drawn to separate positive and negative responses. The decision line may also be called the decision-making line, decision-support line or linear-separable line. The necessity of the linear separability concept was felt to classify the patterns based upon their output responses. Generally, the net input calculated to the output unit is given as

y_in = b + Σ (i=1 to n) x_i w_i

For example, if a bipolar step activation function is used over the calculated net input (y_in), then the value of the function is 1 for a positive net input and -1 for a negative net input. Also, it is clear that there exists a boundary between the regions where y_in > 0 and y_in < 0. This boundary, called the decision boundary, can be determined by the relation

b + Σ (i=1 to n) x_i w_i = 0
On the basis of the number of input units in the network, the above equation may represent a line, a plane or a hyperplane. The linear separability of the network is based on the decision-boundary line. If there exist weights (with bias) for which the training input vectors having positive (correct) response, +1, lie on one side of the decision boundary and all the other vectors having negative (incorrect) response, -1, lie on the other side of the decision boundary, then we can conclude the problem is "linearly separable."

Consider a single-layer network as shown in Figure 2-19 with bias included. The net input for the network shown in Figure 2-19 is given as

y_in = b + x1 w1 + x2 w2
The separating line, for which the boundary lies between the values of x1 and x2 so that the net gives a positive response on one side and a negative response on the other side, is given as

b + x1 w1 + x2 w2 = 0
[Figure: a single-layer net with inputs x1 and x2 (weights w1 and w2), a bias b, and output neuron Y.]
Figure 2-19 A single-layer neural net.
Thus, the requirement for the positive response of the net is

b + x1 w1 + x2 w2 > 0

During the training process, the values of w1, w2 and b are determined so that the net will produce a positive (correct) response for the training data. If, on the other hand, a threshold value is being used, then the condition for obtaining the positive response from the output unit is

net input received > θ (threshold)
y_in > θ
x1 w1 + x2 w2 > θ

The separating line equation will then be

x1 w1 + x2 w2 = θ
x2 = -(w1/w2) x1 + θ/w2   (with w2 ≠ 0)
During the training process, the values of w1 and w2 have to be determined so that the net will have a correct response to the training data. For this correct response, the line passes close to the origin. In certain situations, even for a correct response, the separating line does not pass through the origin.

Consider a network having a positive response in the first quadrant and a negative response in all other quadrants (an AND function) with either binary or bipolar data; then the decision line is drawn separating the positive response region from the negative response region. This is depicted in Figure 2-20.

Thus, based on the conditions discussed above, the equation of this decision line may be obtained. Also, in all the networks that we will be discussing, the representation of data plays a major role.
[Figure: the decision boundary line separating the positive response region from the negative response region.]
Figure 2-20 Decision boundary line.
However, the data representation mode has to be decided: whether it will be in binary or bipolar form. It may be noted that the bipolar representation is better than the binary representation. Using bipolar data representation, the missing data can be distinguished from mistaken data. Missing values are represented by 0, and mistakes can be represented by reversing the input value from +1 to -1, or vice versa.
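A short numerical check ties this section together. The sketch below assumes the bipolar AND weights obtained later in this chapter by Hebb training (w1 = w2 = 2, b = -2) and verifies that the sign of the net input y_in = b + x1 w1 + x2 w2 places every bipolar pattern on the correct side of the decision line.

    # Decision line: x2 = -(w1/w2) * x1 - b/w2  (w2 != 0).
    w1, w2, b = 2.0, 2.0, -2.0   # assumed weights from the Hebb AND example
    print("decision line: x2 = %+.1f*x1 %+.1f" % (-w1 / w2, -b / w2))

    and_patterns = {(1, 1): 1, (1, -1): -1, (-1, 1): -1, (-1, -1): -1}
    for (x1, x2), target in and_patterns.items():
        y_in = b + x1 * w1 + x2 * w2       # net input to the output unit
        response = 1 if y_in > 0 else -1   # bipolar step activation
        print((x1, x2), y_in, response == target)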
2.7 Hebb Network

2.7.1 Theory

For a neural net, the Hebb learning rule is a simple one. Donald Hebb stated in 1949 that, in the brain, the learning is performed by the change in the synaptic gap. Hebb explained it thus: "When an axon of cell A is near enough to excite cell B, and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both the cells such that A's efficiency, as one of the cells firing B, is increased."
According to the Hebb rule, the weight vector is found to increase proportionately to the product of the input and the learning signal. Here the learning signal is equal to the neuron's output. In Hebb learning, if two interconnected neurons are "on" simultaneously, then the weights associated with these neurons can be increased by the modification made in their synaptic gap (strength). The weight update in the Hebb rule is given by

w_i(new) = w_i(old) + x_i y
The Hebb rule is more suited for bipolar data than binary data. If binary data is used, the above weight updation formula cannot distinguish two conditions, namely:

1. A training pair in which an input unit is "on" and the target value is "off."
2. A training pair in which both the input unit and the target value are "off."

Thus, there are limitations in applying the Hebb rule to binary data, and the representation using bipolar data is advantageous, as the sketch below demonstrates.
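The limitation is easy to verify numerically. In the minimal sketch below (the helper name hebb_update is assumed), binary data gives a zero update in both of the conditions listed above, while bipolar data distinguishes them.

    def hebb_update(w, x, y):
        """Hebb rule: w_i(new) = w_i(old) + x_i * y."""
        return [wi + xi * y for wi, xi in zip(w, x)]

    # Binary data: "input on, target off" and "input off, target off"
    # both give x*y = 0, so the weight never changes.
    print(hebb_update([0.0], [1], 0))    # -> [0.0]
    print(hebb_update([0.0], [0], 0))    # -> [0.0]

    # Bipolar data distinguishes the same two conditions.
    print(hebb_update([0.0], [1], -1))   # -> [-1.0]
    print(hebb_update([0.0], [-1], -1))  # -> [1.0]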
2.7.2 Flowchart of Training Algorithm

The training algorithm is used for the calculation and adjustment of weights. The flowchart for the training algorithm of the Hebb network is given in Figure 2-21. The notations used in the flowchart have already been discussed in Section 2.4.7. In Figure 2-21, s:t refers to each training input and target output pair. As long as there exists a pair of training input and target output, the training takes place; else, it is stopped.
2.7.3 Training Algorithm

The training algorithm of the Hebb network is given below:

Step 0: First initialize the weights. Basically in this network they may be set to zero, i.e., w_i = 0 for i = 1 to n, where "n" may be the total number of input neurons.
Step 1: Steps 2-4 have to be performed for each input training vector and target output pair, s:t.
Step 2: Input unit activations are set; generally, the activation function of the input layer is the identity function: x_i = s_i for i = 1 to n.
Step 3: The output unit activation is set: y = t.
Step 4: Weight and bias adjustments are performed:
w_i(new) = w_i(old) + x_i y
b(new) = b(old) + y

Solved Problems
5. Implement the ANDNOT function using a McCulloch-Pitts neuron (use binary data representation).

Solution: In the case of the ANDNOT function, the response is true if the first input is true and the second input is false; for all other input variations, the response is false. The truth table for the ANDNOT function is given in Table 2.

Table 2
x1  x2  y
0   0   0
0   1   0
1   0   1
1   1   0

Case 1: Assume both weights as excitatory, i.e., w1 = w2 = 1. Then for the four inputs, calculate the net input using y_in = x1 w1 + x2 w2. For inputs

(0, 0): y_in = 0×1 + 0×1 = 0
(0, 1): y_in = 0×1 + 1×1 = 1
(1, 0): y_in = 1×1 + 0×1 = 1
(1, 1): y_in = 1×1 + 1×1 = 2

From the calculated net inputs, it is not possible to fire the neuron for input (1, 0) only. Hence, these weights are not suitable.

Case 2: Assume one weight as excitatory and the other as inhibitory, i.e., w1 = 1, w2 = -1. Now calculate the net inputs. For inputs

(0, 0): y_in = 0×1 + 0×(-1) = 0
(0, 1): y_in = 0×1 + 1×(-1) = -1
(1, 0): y_in = 1×1 + 0×(-1) = 1
(1, 1): y_in = 1×1 + 1×(-1) = 0

From the calculated net inputs, it is now possible to fire the neuron for input (1, 0) only, by fixing a threshold of 1, i.e., θ ≥ 1 for the Y unit. Thus, w1 = 1, w2 = -1, θ = 1.

Note: The value of θ is calculated using the following:
θ ≥ nw - p
θ ≥ 2×1 - 1, so θ ≥ 1 [for "p" inhibitory, only the magnitude is considered]

Thus, the output of neuron Y can be written as

y = f(y_in) = 1 if y_in ≥ 1; 0 if y_in < 1
6. Implement the XOR function using McCulloch-Pitts neurons (consider binary data).

Solution: The truth table for the XOR function is given in Table 3.

Table 3
x1  x2  y
0   0   0
0   1   1
1   0   1
1   1   0

In this case, the output is "ON" only for an odd number of 1's; for the rest, it is "OFF." The XOR function cannot be represented by a simple and single logic function; it is represented as (x' denotes the complement of x)

y = x1 x2' + x1' x2
z1 = x1 x2'   (function 1)
z2 = x1' x2   (function 2)
y = z1 OR z2  (function 3)

A single-layer net is not sufficient to represent the function; an intermediate layer is necessary. The neural net for the XOR function, with intermediate units Z1 and Z2 and output unit Y, is shown in Figure 6.

[Figure 6 Neural net for XOR function.]

First function (z1 = x1 x2'): The truth table for function z1 is shown in Table 4.

Table 4
x1  x2  z1
0   0   0
0   1   0
1   0   1
1   1   0

The net representation is given as follows.

Case 1: Assume both weights as excitatory, i.e., w11 = w21 = 1. Calculate the net inputs. For inputs

(0, 0): z_in1 = 0×1 + 0×1 = 0
(0, 1): z_in1 = 0×1 + 1×1 = 1
(1, 0): z_in1 = 1×1 + 0×1 = 1
(1, 1): z_in1 = 1×1 + 1×1 = 2

Hence, it is not possible to obtain function z1 using these weights.

Case 2: Assume one weight as excitatory and the other as inhibitory, i.e., w11 = 1, w21 = -1. Calculate the net inputs. For inputs

(0, 0): z_in1 = 0×1 + 0×(-1) = 0
(0, 1): z_in1 = 0×1 + 1×(-1) = -1
(1, 0): z_in1 = 1×1 + 0×(-1) = 1
(1, 1): z_in1 = 1×1 + 1×(-1) = 0

On the basis of this calculated net input, it is possible to get the required output with a threshold θ ≥ 1 for the Z1 neuron (Figure 7).

[Figure 7 Neural net for Z1.]

Second function (z2 = x1' x2): The truth table for function z2 is shown in Table 5.

Table 5
x1  x2  z2
0   0   0
0   1   1
1   0   0
1   1   0

The net representation is given as follows.

Case 1: Assume both weights as excitatory, i.e., w12 = w22 = 1. Now calculate the net inputs. For the inputs

(0, 0): z_in2 = 0×1 + 0×1 = 0
(0, 1): z_in2 = 0×1 + 1×1 = 1
(1, 0): z_in2 = 1×1 + 0×1 = 1
(1, 1): z_in2 = 1×1 + 1×1 = 2

Hence, it is not possible to obtain function z2 using these weights.

Case 2: Assume one weight as excitatory and the other as inhibitory, i.e., w12 = -1, w22 = 1. Now calculate the net inputs. For the inputs

(0, 0): z_in2 = 0×(-1) + 0×1 = 0
(0, 1): z_in2 = 0×(-1) + 1×1 = 1
(1, 0): z_in2 = 1×(-1) + 0×1 = -1
(1, 1): z_in2 = 1×(-1) + 1×1 = 0

Thus, based on this calculated net input, it is possible to get the required output with a threshold θ ≥ 1 for the Z2 neuron (Figure 8).

[Figure 8 Neural net for Z2.]

Third function (y = z1 OR z2): The truth table for this function is shown in Table 6.

Table 6
x1  x2  z1  z2  y
0   0   0   0   0
0   1   0   1   1
1   0   1   0   1
1   1   0   0   0

Here the weights from Z1 and Z2 to the output neuron Y are taken as excitatory, i.e., v1 = v2 = 1 (Figure 9). Now calculate the net input. For inputs

(0, 0): y_in = 0×1 + 0×1 = 0
(0, 1): y_in = 0×1 + 1×1 = 1
(1, 0): y_in = 1×1 + 0×1 = 1
(1, 1): y_in = 0×1 + 0×1 = 0 (because for x1 = 1 and x2 = 1, z1 = 0 and z2 = 0)

[Figure 9 Neural net for Y (Z1 OR Z2).]

Setting a threshold of θ ≥ 1 with v1 = v2 = 1, the net gives the required output. Therefore, the analysis for the XOR function using M-P neurons is complete. Thus, for the XOR function, the weights are obtained as

w11 = w22 = 1 (excitatory)
w12 = w21 = -1 (inhibitory)
v1 = v2 = 1 (excitatory), θ ≥ 1
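As a check on this analysis, the following sketch wires the derived weights (w11 = w22 = 1, w12 = w21 = -1, v1 = v2 = 1, thresholds θ = 1) into the two-layer net of Figure 6 and reproduces the XOR truth table; mp_neuron is the same assumed helper introduced in Section 2.5.

    def mp_neuron(inputs, weights, theta=1):
        net = sum(x * w for x, w in zip(inputs, weights))
        return 1 if net >= theta else 0

    def xor_net(x1, x2):
        z1 = mp_neuron((x1, x2), (1, -1))   # z1 = x1 AND NOT x2
        z2 = mp_neuron((x1, x2), (-1, 1))   # z2 = NOT x1 AND x2
        return mp_neuron((z1, z2), (1, 1))  # y = z1 OR z2

    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, "->", xor_net(x1, x2))  # 0 1 1 0, as in Table 3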
7. Using the linear separability concept, obtain the response for the OR function (take bipolar inputs and bipolar targets).

Solution: Table 7 is the truth table for the OR function with bipolar inputs and targets.

Table 7
x1  x2  y
1   1   1
1   -1  1
-1  1   1
-1  -1  -1

The truth table inputs and corresponding outputs have been plotted in Figure 10. If the output is 1, it is denoted as "+"; else "-". Assuming the coordinates (-1, 0) and (0, -1) as (x1, y1) and (x2, y2), the slope "m" of the straight line can be obtained as

m = (y2 - y1)/(x2 - x1) = (-1 - 0)/(0 + 1) = -1

[Figure 10 Graph of the OR function.]

Using this value, the equation for the line is y = mx + c. Here the axes are not x and y but x1 and x2, so the equation becomes

x2 = -x1 - 1

This can be compared with the decision-boundary form x2 = -(w1/w2) x1 - b/w2. Comparing the two equations, we get w1/w2 = 1 and b/w2 = 1; therefore, taking w2 = 1, the weights and bias are w1 = 1, w2 = 1, b = 1. Calculating the net input and output of the OR function on the basis of these weights and bias, we get the entries in Table 8.

Table 8
x1  x2  y_in = b + x1 w1 + x2 w2   y
1   1   3    1
1   -1  1    1
-1  1   1    1
-1  -1  -1   -1

The threshold is taken as "1"; the output is 1 if the calculated net input is greater than or equal to 1, and -1 otherwise. Hence, using the linear separability concept, the response is obtained for the OR function.
8. Design a Hebb net to implement the logical AND function (use bipolar inputs and targets).

Solution: The training data for the AND function is given in Table 9.

Table 9
Inputs           Target
x1  x2  b        y
1   1   1        1
1   -1  1        -1
-1  1   1        -1
-1  -1  1        -1

The network is trained using the Hebb network training algorithm discussed in Section 2.7.3. Initially the weights and bias are set to zero, i.e.,

w1 = w2 = b = 0

First input [x1 x2 b] = [1 1 1] and target y = 1. Setting the initial weights as the old weights and applying the Hebb rule w_i(new) = w_i(old) + x_i y, we get

w1(new) = w1(old) + x1 y = 0 + 1×1 = 1
w2(new) = w2(old) + x2 y = 0 + 1×1 = 1
b(new) = b(old) + y = 0 + 1 = 1

The weights calculated above are the final weights obtained after presenting the first input pattern, i.e., [w1 w2 b] = [1 1 1]. They are used as the initial weights when the second input pattern is presented. The weight change here is Δw_i = x_i y; hence the weight changes relating to the first input are

Δw1 = x1 y = 1×1 = 1
Δw2 = x2 y = 1×1 = 1
Δb = y = 1

Second input [x1 x2 b] = [1 -1 1] and target y = -1. The weight changes here are

Δw1 = x1 y = 1×(-1) = -1
Δw2 = x2 y = (-1)×(-1) = 1
Δb = y = -1

The new weights here are

w1(new) = w1(old) + Δw1 = 1 - 1 = 0
w2(new) = w2(old) + Δw2 = 1 + 1 = 2
b(new) = b(old) + Δb = 1 - 1 = 0

Similarly, by presenting the third and fourth input patterns, the new weights can be calculated. Table 10 shows the values of the weights for all inputs.

Table 10
Inputs (x1 x2 b, y)    Weight changes (Δw1 Δw2 Δb)    Weights (w1 w2 b)
(initial)                                             (0  0  0)
1  1  1,  1            1  1  1                        1  1  1
1  -1 1,  -1           -1 1  -1                       0  2  0
-1 1  1,  -1           1  -1 -1                       1  1  -1
-1 -1 1,  -1           1  1  -1                       2  2  -2

For each input, the weights obtained so far are used to draw the separating line x2 = -(w1/w2) x1 - b/w2. For the first input [1 1 1], the separating line is

x2 = -x1 - 1

Similarly, for the second input the separating line is x2 = 0, and for the third and fourth inputs it is x2 = -x1 + 1. The graphs for each of these separating lines are shown in Figure 11, where the "+" mark is used for output "1" and the "-" mark for output "-1". From Figure 11 it can be noticed that, for the first input, the decision boundary differentiates only the first and fourth inputs, and not all negative responses are separated from positive responses. When the second input pattern is presented, the decision boundary separates (1, 1) from (1, -1) and (-1, -1), but not from (-1, 1). However, the boundary line obtained from the third and fourth training pairs separates the positive response region from the negative response region. Hence, the weights obtained from these are the final weights, given as

w1 = 2, w2 = 2, b = -2

The network can be represented as shown in Figure 12.

[Figure 11 Decision boundary for AND function using the Hebb rule for each training pair.]
[Figure 12 Hebb net for AND function.]
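The weight trajectory of Table 10 can be reproduced directly. The sketch below is a plain transcription of the training algorithm of Section 2.7.3, making one pass over the four bipolar training pairs.

    # Hebb training for bipolar AND: reproduces Table 10 row by row.
    patterns = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
    w1 = w2 = b = 0
    for (x1, x2), y in patterns:
        w1 += x1 * y   # delta_w1 = x1 * y
        w2 += x2 * y   # delta_w2 = x2 * y
        b += y         # delta_b  = y
        print((x1, x2), y, "->", (w1, w2, b))
    # Final weights: w1 = 2, w2 = 2, b = -2, as in Figure 12.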
9. Design a Hebb net to implement the OR function (consider bipolar inputs and targets).

Solution: The training pairs for the OR function are given in Table 11.

Table 11
Inputs           Target
x1  x2  b        y
1   1   1        1
1   -1  1        1
-1  1   1        1
-1  -1  1        -1

Initially the weights and bias are set to zero, i.e., w1 = w2 = b = 0. The network is trained using the Hebb training algorithm discussed in Section 2.7.3, and the final weights are calculated. The weights are considered final if the boundary line obtained from them separates the positive response region from the negative response region.

By presenting all the input patterns, the weights are calculated. Table 12 shows the weights calculated for all the inputs.

Table 12
Inputs (x1 x2 b, y)    Weight changes (Δw1 Δw2 Δb)    Weights (w1 w2 b)
(initial)                                             (0  0  0)
1  1  1,  1            1  1  1                        1  1  1
1  -1 1,  1            1  -1 1                        2  0  2
-1 1  1,  1            -1 1  1                        1  1  3
-1 -1 1,  -1           1  1  -1                       2  2  2

Using the final weights, the boundary line equation is

x2 = -(w1/w2) x1 - b/w2 = -x1 - 1

The decision region for this net is shown in Figure 13. It is observed in Figure 13 that this straight line separates the pattern space into two regions. The input patterns (1, 1), (1, -1) and (-1, 1), for which the output response is "1", lie on one side of the boundary, and the input pattern (-1, -1), for which the output response is "-1", lies on the other side of the boundary. Thus, the final weights are

w1 = 2, w2 = 2, b = 2

The network can be represented as shown in Figure 14.

[Figure 13 Decision boundary for OR function.]
[Figure 14 Hebb net for OR function.]
10. Use the Hebb rule method to implement the XOR function (take bipolar inputs and targets).

Solution: The training patterns for the XOR function are shown in Table 13.

Table 13
Inputs           Target
x1  x2  b        y
1   1   1        -1
1   -1  1        1
-1  1   1        1
-1  -1  1        -1

Here, a single-layer network with two input neurons, one bias and one output neuron is considered. The initial weights and bias are set to zero, i.e., w1 = w2 = b = 0. By using the Hebb training algorithm, the network is trained and the weights are calculated as shown in Table 14.

Table 14
Inputs (x1 x2 b, y)    Weight changes (Δw1 Δw2 Δb)    Weights (w1 w2 b)
(initial)                                             (0  0  0)
1  1  1,  -1           -1 -1 -1                       -1 -1 -1
1  -1 1,  1            1  -1 1                        0  -2 0
-1 1  1,  1            -1 1  1                        -1 -1 1
-1 -1 1,  -1           1  1  -1                       0  0  0

The final weights obtained after presenting all the input patterns do not give the correct output for all patterns. The graph shown in Figure 15 indicates that the four input pairs that are present cannot be divided by a single line into two regions. Thus, the XOR function is a case of a pattern classification problem which is not linearly separable. The XOR function can be made linearly separable by decomposing it into the intermediate functions z1 and z2, as discussed in Problem 6; solving with this intermediate layer results in two decision boundary lines for separating the positive and negative regions of the XOR function.

[Figure 15 Decision boundary for XOR function.]
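Running the same one-pass Hebb training on the bipolar XOR targets shows the failure numerically: the four updates of Table 14 cancel, the weights return to zero, and no separating line exists.

    # Hebb training on bipolar XOR: the updates cancel out.
    patterns = [((1, 1), -1), ((1, -1), 1), ((-1, 1), 1), ((-1, -1), -1)]
    w1 = w2 = b = 0
    for (x1, x2), y in patterns:
        w1, w2, b = w1 + x1 * y, w2 + x2 * y, b + y
    print(w1, w2, b)  # -> 0 0 0: XOR is not linearly separable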
11. Using the Hebb rule, find the weights required to perform the following classification of the given input patterns shown in Figure 16. The patterns are shown as a 3×3 matrix form in the squares. The pattern "I" has target value 1 (belonging to the members of the class), and the pattern "O" has target value -1 (not belonging to the members of the class).

[Figure 16 Data for input patterns.]

Solution: The training input patterns for the given net (Figure 16) are indicated in Table 15.

Table 15
Pattern   x1  x2  x3  x4  x5  x6  x7  x8  x9   b   y
I         1   1   1   -1  1   -1  1   1   1    1   1
O         1   1   1   1   -1  1   1   1   1    1   -1

A single-layer network with nine input neurons, one bias and one output neuron is formed. Set the initial weights and bias to zero, i.e., w1 = w2 = ... = w9 = b = 0.
Case 1: Presenting the first input pattern (I) with target y = 1, we calculate the weight changes Δw_i = x_i y and the new weights using

w_i(new) = w_i(old) + Δw_i

Setting the old weights as the initial weights here, we obtain

w1(new) = w1(old) + x1 y = 0 + 1×1 = 1
w2(new) = w2(old) + x2 y = 0 + 1×1 = 1
w3(new) = w3(old) + x3 y = 0 + 1×1 = 1

Similarly, calculating for the other weights, we get

w4(new) = -1, w5(new) = 1, w6(new) = -1
w7(new) = 1, w8(new) = 1, w9(new) = 1
b(new) = b(old) + y = 0 + 1 = 1

The weights after presenting the first input pattern are

W = [1 1 1 -1 1 -1 1 1 1 1]

Case 2: Now we present the second input pattern (O) with target y = -1. The initial weights used here are the final weights obtained after presenting the first input pattern. The weights are calculated as shown below (y = -1 for this case), with Δw_i = x_i y:

w1(new) = w1(old) + x1 y = 1 + 1×(-1) = 0
w2(new) = w2(old) + x2 y = 1 + 1×(-1) = 0
w3(new) = w3(old) + x3 y = 1 + 1×(-1) = 0
w4(new) = w4(old) + x4 y = -1 + 1×(-1) = -2
w5(new) = w5(old) + x5 y = 1 + (-1)×(-1) = 2
w6(new) = w6(old) + x6 y = -1 + 1×(-1) = -2
w7(new) = w7(old) + x7 y = 1 + 1×(-1) = 0
w8(new) = w8(old) + x8 y = 1 + 1×(-1) = 0
w9(new) = w9(old) + x9 y = 1 + 1×(-1) = 0
b(new) = b(old) + y = 1 + (-1) = 0

The final weights after presenting the two input patterns are

W(new) = [0 0 0 -2 2 -2 0 0 0 0]

The obtained weights are indicated in the Hebb net shown in Figure 17.

[Figure 17 Hebb net for the pattern classification of Problem 11.]
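The nine-input computation of Problem 11 is the same rule applied component-wise. The sketch below assumes the bipolar pixel encoding implied by the final weights (1 for a filled square, -1 for an empty one) and recovers W(new) = [0 0 0 -2 2 -2 0 0 0] with bias 0.

    import numpy as np

    # 3x3 patterns as bipolar vectors, read row by row (assumed encoding).
    I = np.array([1, 1, 1, -1, 1, -1, 1, 1, 1])  # pattern "I", target +1
    O = np.array([1, 1, 1, 1, -1, 1, 1, 1, 1])   # pattern "O", target -1

    w = np.zeros(9)
    b = 0
    for x, y in [(I, 1), (O, -1)]:
        w += x * y   # Hebb rule applied to all nine weights at once
        b += y
    print(w, b)      # -> [ 0. 0. 0. -2. 2. -2. 0. 0. 0.] 0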
12. Find the weights required to perform the following classification of the given input patterns using the Hebb rule: the "L" pattern shown in Figure 18 belongs to the class (target value +1) and the "U" pattern does not belong to the class (target value -1). In each pattern, a "+" symbol indicates that the pixel is present (value 1) and an empty square indicates that it is absent (value -1).

[Figure 18 Input data for the given patterns.]

Solution: The training input patterns for Figure 18 are given in Table 16.

Table 16
Pattern   x1  x2  x3  x4  x5  x6  x7  x8  x9   b   y
L         1   -1  -1  1   -1  -1  1   1   1    1   1
U         1   -1  1   1   -1  1   1   1   1    1   -1

A single-layer network with nine input neurons, one bias and one output neuron is formed. Set the initial weights and bias to zero, i.e., w1 = w2 = ... = w9 = b = 0. Presenting the two input patterns in turn and applying the Hebb rule, as in Problem 11, gives the final weights after presenting the two input patterns:

W(new) = [0 0 -2 0 0 -2 0 0 0 0]

The obtained weights are indicated in the Hebb net shown in Figure 19.

[Figure 19 Hebb net of Figure 18.]

2.10 Review Questions

1. Define an artificial neural network.
2. State the properties of the processing element of an artificial neural network.
3. How many signals can be sent by a neuron at a particular time instant?
4. Draw a simple artificial neuron and discuss the calculation of the net input.
5. What is the influence of weight on the net input calculation?
6. List the main components of the biological neuron.
7. Compare and contrast the biological neuron and the artificial neuron.
8. State the characteristics of an artificial neural network.
9. Discuss in detail the historical development of artificial neural networks.
10. What are the basic models of an artificial neural network?
11. Define net architecture and give its classifications.
12. Define learning.
13. Differentiate between supervised and unsupervised learning.
14. How is the critic information used in the learning process?
15. What is the necessity of an activation function?
16. List the commonly used activation functions.
17. What is the impact of weight in an artificial neural network?
18. What is the other name for weight?
19. Define bias and threshold.
20. What is a learning rate parameter?
21. How does a momentum factor make convergence faster?
22. State the role of the vigilance parameter in an ART network.
23. Why is the McCulloch-Pitts neuron widely used in logic functions?
24. Indicate the difference between excitatory and inhibitory weighted interconnections.
25. Define linear separability.
26. Justify that the XOR function is non-linearly separable by a decision boundary line.
27. How can the equation of a straight line be formed using linear separability?
28. In what ways is bipolar representation better than binary representation?
29. State the training algorithm used for the Hebb network.
30. Compare feed-forward and feedback networks.

2.11 Exercise Problems

1. For the network shown in Figure 20, calculate the net input to the output neuron.

[Figure 20 Neural net.]
2. Calculate the output of neuron Y for the net shown in Figure 21. Use binary and bipolar sigmoidal activation functions.

[Figure 21 Neural net.]

3. Design neural networks with only one M-P neuron that implement the three basic logic operations: (i) NOT(x1); (ii) OR(x1, x2); (iii) NAND(x1, x2), where x1 and x2 take values in {0, 1}.
4. (b) Show that the ...
5. (a) Construct a feed-forward network with five input nodes, three hidden nodes and four output nodes that has a lateral inhibition structure in the output layer.
(b) Construct a recurrent network with four input nodes, three hidden nodes and two output nodes that has feedback links from the hidden layer to the input layer.
6. Using the linear separability concept, obtain the response for the NAND function.
7. Design a Hebb net to implement the logical AND function with
(a) binary inputs and targets and
(b) binary inputs and bipolar targets.
8. Implement the NOR function using a Hebb net with
(a) bipolar inputs and targets and
(b) bipolar inputs and binary targets.
9. Classify the input patterns shown in Figure 22 using the Hebb training algorithm.

[Figure 22 Input pattern.]

10. Using the Hebb rule, find the weights required to perform the following classifications: the vectors (1 -1 1 -1) and (1 1 1 -1) belong to the class (target value +1); the vectors (-1 -1 1 1) and (1 1 -1 1) do not belong to the class (target value -1). Also, using each of the training vectors as input, test the response of the net.

2.12 Projects

1. Write a program to classify the letters and numerals using the Hebb learning rule. Take a pair of letters or numerals of your own. After training the network, test the response of the net using a suitable activation function. Perform the classification using bipolar data as well as binary data.
2. Write suitable programs for implementing logic functions using McCulloch-Pitts neurons.
3. Write a computer program to train a Madaline to perform the AND function, using the MRI algorithm.
4. Write a program for implementing BPN, training a single-hidden-layer back-propagation network with bipolar sigmoidal units (λ = 1) to achieve the following two-to-one mappings:

y = 6 sin(π x1) + cos(π x2)
y = sin(π x1) cos(0.2 π x2)

Set up two sets of data, each consisting of 10 input-output pairs, one for training and the other for testing. The input-output data are obtained by varying the input variables (x1, x2) within [-1, +1] randomly. Also normalize the output data within [-1, 1]. Apply training to find the proper weights in the network.