
Soft Computing

300 (3)
Dr. Lekshmi R. R.
Asst. Prof.
Department of Electrical & Electronics
Engineering
Amrita School of Engineering
Learning Techniques
Hebb rule
In 1949, Donald Hebb postulated how biological neurons learn:
• When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it,
– some growth process or metabolic change takes place in one or both cells,
– such that A's efficiency, as one of the cells firing B, is increased.
• Stated as the Hebbian learning rule:
– If two neurons on either side of a synapse (connection) are activated simultaneously (i.e. synchronously), then the strength of that synapse is selectively increased.
Hebb rule
• If two neighbouring neurons are activated and deactivated at the same time,
– the weight connecting these neurons should increase.
• For neurons operating in opposite phase,
– the weight between them should decrease.
• If there is no signal correlation,
– the weight should not change.

$\Delta w_{ij} = x_i x_j$
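A minimal Python sketch of this unsupervised update (illustrative only, not from the slides), assuming bipolar activations:

    # Basic Hebb update for the synapse between neurons i and j:
    # dw_ij = x_i * x_j (bipolar activations assumed; sketch only).

    def hebb_delta(x_i: int, x_j: int) -> int:
        return x_i * x_j

    print(hebb_delta(1, 1))    #  1: activated together    -> weight increases
    print(hebb_delta(-1, -1))  #  1: deactivated together  -> weight increases
    print(hebb_delta(1, -1))   # -1: opposite phase        -> weight decreases
    # For uncorrelated signals, these updates average out to zero over time.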
Implement AND Gate with Binary output

[Figure: single-layer net with inputs $X_1$, $X_2$ (signals $x_1$, $x_2$), weights $w_1$, $w_2$, and output neuron $Y$ (signal $y$).]

$\Delta w_i = x_i\, t$

• The weight change is proportional to the product of:
– the input
– the learning signal (here, the target $t$)
Hebbian Learning
• Initialize the weights (and bias).
• For each training vector:
– Set the input activations $x_i$.
– Determine the net input $y_{in}$.
– Set the output activation $y = t$ (the target).
– Adjust the weights and bias:
• $w_i(\text{new}) = w_i(\text{old}) + x_i\, t$
• $b(\text{new}) = b(\text{old}) + t$
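Below is a minimal Python sketch of this procedure (names are illustrative, not from the slides), run on the bipolar AND data of the next slide; it reproduces the weight trajectory in the table that follows:

    # Hebb-net training loop (sketch): w_i += x_i * t, b += t.

    def train_hebb(samples):
        w1 = w2 = b = 0  # initialize weights and bias to zero
        for x1, x2, t in samples:
            w1 += x1 * t  # dw1 = x1 * t
            w2 += x2 * t  # dw2 = x2 * t
            b += t        # db  = t (bias input fixed at 1)
            print((x1, x2, t), "->", (w1, w2, b))
        return w1, w2, b

    # Bipolar AND gate: target is 1 only when both inputs are 1.
    and_data = [(-1, -1, -1), (-1, 1, -1), (1, -1, -1), (1, 1, 1)]
    train_hebb(and_data)  # ends at w1 = 2, w2 = 2, b = -2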
Design a Hebb net to implement AND gate
No. of inputs: 3 (x1, x2, and a bias input fixed at 1)
No. of outputs: 1

Truth table (bipolar target):

x1  x2 |  t
 0   0 | -1
 0   1 | -1
 1   0 | -1
 1   1 |  1

Initial weights:

Weight | Value
  w1   |   0
  w2   |   0
  b    |   0

For training, the inputs are also taken in bipolar form (0 → -1), as in the table below.
x1  x2   1 |  t | Δw1  Δw2  Δb | w1  w2   b
           |    |              |  0   0   0
-1  -1   1 | -1 |  1    1   -1 |  1   1  -1
-1   1   1 | -1 |  1   -1   -1 |  2   0  -2
 1  -1   1 | -1 | -1    1   -1 |  1   1  -3
 1   1   1 |  1 |  1    1    1 |  2   2  -2
Design a neural network for the following truth table
w1 = 2; w2 = 2; b = -2

Decision boundary:
$x_1 w_1 + x_2 w_2 + b = 0$
$2x_1 + 2x_2 - 2 = 0$
$x_1 + x_2 - 1 = 0$
$x_1 = 1 - x_2$

The boundary line passes through $(x_1, x_2) = (1, 0)$ and $(0, 1)$.

Threshold: 2
Activation function: Bipolar

x1  x2   1 |  t | x1·w1  x2·w2    b | y_in |  y
-1  -1   1 | -1 |   -2     -2    -2 |  -6  | -1
-1   1   1 | -1 |   -2      2    -2 |  -2  | -1
 1  -1   1 | -1 |    2     -2    -2 |  -2  | -1
 1   1   1 |  1 |    2      2    -2 |   2  |  1
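A small Python sketch (illustrative, not from the slides) that recomputes the table above for the trained weights:

    # Verify the trained Hebb net on all bipolar AND inputs.
    w1, w2, b = 2, 2, -2

    def bipolar(y_in):
        # Bipolar step; the slides state threshold 2, and any threshold
        # in (-2, 2] yields the same outputs for this net.
        return 1 if y_in >= 2 else -1

    for x1, x2, t in [(-1, -1, -1), (-1, 1, -1), (1, -1, -1), (1, 1, 1)]:
        y_in = x1 * w1 + x2 * w2 + b  # net input including bias
        print(x1, x2, y_in, bipolar(y_in), "target:", t)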
Delta Rule
• Proposed by Widrow and Hoff.
• One of the most common learning rules.
• Depends on supervised learning.
• States that the modification of a node's synaptic weight is equal to the product of:
– the error
– the input

$\Delta w_{ij} = \eta \,(t - y)\, x_i$

• For a given input vector, compare the output vector with the target.
• If the difference is zero,
– no learning takes place;
• otherwise,
– the weights are adjusted to reduce this difference.
• The change in weight is based on:
– the learning rate $\eta$
– the activation of the input
– the difference between the expected output and the actual output
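As a sketch (assumed learning rate and names, not from the slides), the delta rule applied to the same bipolar AND data, using a linear output during training:

    # Delta (Widrow-Hoff) rule: dw_i = eta * (t - y) * x_i.

    ETA = 0.1  # learning rate (assumed value)

    def train_delta(samples, epochs=50):
        w1 = w2 = b = 0.0
        for _ in range(epochs):
            for x1, x2, t in samples:
                y = x1 * w1 + x2 * w2 + b  # linear output
                err = t - y                # target minus actual output
                w1 += ETA * err * x1
                w2 += ETA * err * x2
                b += ETA * err             # bias as weight on constant input 1
        return w1, w2, b

    and_data = [(-1, -1, -1), (-1, 1, -1), (1, -1, -1), (1, 1, 1)]
    print(train_delta(and_data))
    # Converges toward (0.5, 0.5, -0.5), the least-squares solution;
    # thresholding this linear output at 0 reproduces the AND gate.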
Thank you
