
Unit I – Introduction to ANN

S. Vivekanandan

Cabin: TT 319A
E-Mail: svivekanandan@vit.ac.in
Mobile: 8124274447
Content

HEBB NET
• Introduction
• Architecture
• Linear separability
• Algorithm & flow chart
• Examples for logic functions
• Examples for vector functions
• Features



HEBB Net
• One of the oldest and most famous learning rules, proposed by Donald Hebb in 1949
• A purely feedforward, unsupervised learning rule
• "When an axon of cell A is near enough to excite a cell B and repeatedly or
persistently takes part in firing it, some growth process or metabolic change takes
place in one or both cells such that A's efficiency, as one of the cells firing B,
is increased."
This may be split into two parts:
• If two neurons are activated simultaneously, then the strength of the
connection between them should be increased.
• If two neurons are activated asynchronously, then the strength of the
connection between them is weakened or eliminated.
Simply put: if the product of input and output is positive, the weight
increases; otherwise the weight decreases.

Δwi = xi y
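As a quick check of this rule (a minimal sketch of my own, not from the slides), the sign of the update can be tabulated for all four bipolar input/output combinations:

```python
# Sign of the Hebb update Δw = x * y for all bipolar (x, y) pairs.
for x, y in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    dw = x * y  # positive when x and y agree, negative otherwise
    print(f"x = {x:+d}, y = {y:+d}  ->  Δw = {dw:+d}")
```

Agreement (both +1 or both -1) strengthens the connection; disagreement weakens it.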
Architecture

• Single-layer feedforward net
• The bias input is fixed at 1
• The input and output data should be in bipolar form

[Figure: single-layer Hebb net. Inputs X1 ... Xi ... Xn connect to the output unit Yj through weights w1 ... wn; a constant input of 1 supplies the bias b.]


Linear separability
• In general, for any output unit, the desired response is '1' if its
corresponding input pattern is a member of the class, and '0' if it is not.
• The purpose of training is to adjust the weights so that the net responds
correctly to the training patterns.
• The activation function is taken as a step function.
• The net input to the output unit is

yin = b + Σ xi wi

• Setting the net input to zero,

b + Σ xi wi = 0

gives the boundary region of the net input.



• The boundary between the region where yin ≥ 0 and the region where yin < 0
is the 'decision boundary'. Depending on the number of inputs, the equation of
this boundary can represent a line, a plane, or a hyperplane.
• On training, if the training input vectors with correct response +1
lie on one side of the boundary and the training input vectors with response
-1 lie on the other side, then the problem is linearly separable;
otherwise it is linearly non-separable.
• Say, with two input units, the equation of the line separating the positive
and negative regions is given by

b + x1 w1 + x2 w2 = 0,  i.e.  x2 = -(w1/w2) x1 - b/w2
• These two regions are called the decision regions of the net.
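To make this concrete, here is a small illustrative check (the weights w1 = w2 = 1, b = -1 are an assumed example, not from the slides): it computes yin = b + Σ xi wi for each bipolar point and reports which side of the decision line it falls on.

```python
# Decision-boundary check with assumed example weights w1 = w2 = 1, b = -1.
w1, w2, b = 1, 1, -1
for x1, x2 in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    y_in = b + x1 * w1 + x2 * w2          # net input to the output unit
    side = "+1 region" if y_in >= 0 else "-1 region"
    print(f"({x1:+d}, {x2:+d}): y_in = {y_in:+d} -> {side}")
```

With these weights only (1, 1) falls in the +1 region, so the four points are separated by the line x2 = -x1 + 1.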



HEBB Net - Algorithm
Step 1 : Initialize all weights and bias to zero:
wi = 0 (i = 1 to n); b = 0
Step 2 : For each input/target pair (s : t), perform Steps 3-6.
Step 3 : Set activations for the input vector:
xi = si (i = 1 to n)
Step 4 : Set activation for the output:
y = t
Step 5 : Adjust the weights:
wi(new) = wi(old) + xi y   (i.e. Δwi = xi y)
Step 6 : Adjust the bias:
b(new) = b(old) + y
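The algorithm above translates directly into code. The following Python sketch (the function name hebb_train is my own) implements Steps 1-6 for a single output unit:

```python
def hebb_train(samples):
    """Train a single Hebb neuron on (input vector, target) pairs."""
    n = len(samples[0][0])
    w = [0] * n                      # Step 1: weights initialized to zero
    b = 0                            #         bias initialized to zero
    for s, t in samples:             # Step 2: for each (s : t) pair
        x = list(s)                  # Step 3: activate input, xi = si
        y = t                        # Step 4: activate output, y = t
        for i in range(n):           # Step 5: wi(new) = wi(old) + xi*y
            w[i] += x[i] * y
        b += y                       # Step 6: b(new) = b(old) + y
    return w, b
```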



Flow chart

[Flow chart: Start → initialize weights and bias → for each training pair s : t → activate input (xi = si) → activate output (y = t) → update weights (wi(new) = wi(old) + xi y) → update bias (b(new) = b(old) + y) → next pair → Stop]
HEBB net for the AND function (Bipolar inputs and targets)

X1    X2    b     y
 1     1    1     1
 1    -1    1    -1
-1     1    1    -1
-1    -1    1    -1

• The AND function gives a high output '+1' only if both inputs are high; otherwise it gives '-1'.
• Initially the weights and bias are set to zero:
w1 = w2 = b = 0
The weight changes are calculated using

Δwi = xi y,   Δb = y



Δwi = xi y,   Δb = y,   wi(new) = wi(old) + Δwi




Inputs              y     Weight changes (Δwi = xi y)     Weights
X1    X2    b       y     ΔW1    ΔW2    Δb                W1    W2    b
                                                           0     0    0
 1     1    1       1       1      1     1                 1     1    1
 1    -1    1      -1      -1      1    -1                 0     2    0
-1     1    1      -1       1     -1    -1                 1     1   -1
-1    -1    1      -1       1      1    -1                 2     2   -2




This completes one epoch of training. After each input pattern, the separating line can be obtained from

x2 = -(w1/w2) x1 - b/w2

Input     W1    W2    b     Separating line
first      1     1    1     x2 = -x1 - 1
second     0     2    0     x2 = 0
third      1     1   -1     x2 = -x1 + 1
fourth     2     2   -2     x2 = -x1 + 1
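Running the earlier hebb_train sketch on this data reproduces the last row of the table:

```python
and_bipolar = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
w, b = hebb_train(and_bipolar)
print(w, b)  # [2, 2] -2, i.e. the separating line x2 = -x1 + 1
```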
HEBB net for AND

[Figure: final Hebb net for the bipolar AND function. Inputs X1 and X2 connect to the output Yj with weights 2 and 2; the bias input 1 connects with weight -2. The decision line x2 = -x1 + 1 separates (1, 1) from (1, -1), (-1, 1), and (-1, -1).]


HEBB net for the AND function (Binary inputs and targets)

X1    X2    b     y
 1     1    1     1
 1     0    1     0
 0     1    1     0
 0     0    1     0

• With binary targets, the AND function gives a high output '1' only if both inputs are high; otherwise it gives '0'.
• Initially the weights and bias are set to zero:
w1 = w2 = b = 0
The weight changes are calculated using

Δwi = xi y,   Δb = y



Inputs              y     Weight changes (Δwi = xi y)     Weights
X1    X2    b       y     ΔW1    ΔW2    Δb                W1    W2    b
                                                           0     0    0
 1     1    1       1       1      1     1                 1     1    1
 1     0    1       0       0      0     0                 1     1    1
 0     1    1       0       0      0     0                 1     1    1
 0     0    1       0       0      0     0                 1     1    1

This completes one epoch of training. The separating line can be obtained from

x2 = -(w1/w2) x1 - b/w2

Input     W1    W2    b     Separating line
first      1     1    1     x2 = -x1 - 1
second     1     1    1     x2 = -x1 - 1
third      1     1    1     x2 = -x1 - 1
fourth     1     1    1     x2 = -x1 - 1

Since y = 0 for three of the four patterns, those patterns produce no weight change; the weights never move after the first pattern, so the net cannot learn the AND function with binary targets.
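The same hebb_train sketch confirms that nothing is learned beyond the first pattern, since every pattern with t = 0 contributes a zero update:

```python
and_binary = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]
w, b = hebb_train(and_binary)
print(w, b)  # [1, 1] 1 -- unchanged after the first pattern
```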
HEBB net for the AND function (Binary inputs and Bipolar targets)

X1    X2    b     y
 1     1    1     1
 1     0    1    -1
 0     1    1    -1
 0     0    1    -1

• Here the AND function gives a high output '+1' only if both inputs are high; otherwise it gives '-1'.
• Initially the weights and bias are set to zero:
w1 = w2 = b = 0
The weight changes are calculated using

Δwi = xi y,   Δb = y



Inputs              y     Weight changes (Δwi = xi y)     Weights
X1    X2    b       y     ΔW1    ΔW2    Δb                W1    W2    b
                                                           0     0    0
 1     1    1       1       1      1     1                 1     1    1
 1     0    1      -1      -1      0    -1                 0     1    0
 0     1    1      -1       0     -1    -1                 0     0   -1
 0     0    1      -1       0      0    -1                 0     0   -2

This completes one epoch of training. The separating line can be obtained from

x2 = -(w1/w2) x1 - b/w2

Input     W1    W2    b     Separating line
first      1     1    1     x2 = -x1 - 1
second     0     1    0     x2 = 0
third      0     0   -1     undefined (w2 = 0)
fourth     0     0   -2     undefined (w2 = 0)

Because a '0' input contributes nothing to the weight update, the final weights w1 = w2 = 0 give no usable decision line; this is why the bipolar representation of the inputs is preferred.
• This can be continued with OR, XOR, NAND, ANDNOT, etc.

Take home Problem


Using the Hebb rule, find the weights required to perform the following classifications:
vectors (1, 1, 1, 1) and (-1, 1, -1, -1) are members of the class (target +1); vectors
(1, 1, 1, -1) and (1, -1, -1, 1) are not members of the class (target -1).
Using each of the training x vectors as input, test the response of the net.

X1    X2    X3    X4    b     y
 1     1     1     1    1     1
 1     1     1    -1    1    -1
-1     1    -1    -1    1     1
 1    -1    -1     1    1    -1
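A minimal sketch for this problem, reusing the hebb_train function from the algorithm section and taking the sign of the net input as the response:

```python
pairs = [((1, 1, 1, 1), 1), ((1, 1, 1, -1), -1),
         ((-1, 1, -1, -1), 1), ((1, -1, -1, 1), -1)]
w, b = hebb_train(pairs)                      # train on all four pairs
for s, t in pairs:                            # test each training vector
    y_in = b + sum(xi * wi for xi, wi in zip(s, w))
    y = 1 if y_in >= 0 else -1                # step (sign) activation
    print(s, "target:", t, "response:", y)
```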



Learning Rules

General learning rule – Vector-based problems

The learning signal r is in general a function of wi, x, and sometimes of the
teacher's signal di:

r = r(wi, x, di)

The general learning rule is

Δwi = c r(wi, x, di) x

where c is a positive number called the learning constant that
determines the rate of learning. The weight vector adapted at time t becomes,
at the next instant or learning step,

wi(t + 1) = wi(t) + c r(wi(t), x(t), di(t)) x(t)
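A sketch of this rule in code (names of my own choosing), with the learning signal passed in as a function; the Hebbian rule of the next slide corresponds to choosing r as the neuron's output:

```python
def general_update(w, x, c, r, d=None):
    """One learning step: w <- w + c * r(w, x, d) * x."""
    signal = r(w, x, d)                       # learning signal r(wi, x, di)
    return [wi + c * signal * xi for wi, xi in zip(w, x)]
```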

Hebbian Learning for Vector Problems

• The learning signal is equal to the neuron's output:

r = f(wi · x) = oi,  so  Δwi = c oi x

• The weights are initialized at small random values around wi = 0 prior to learning.
• The Hebbian learning rule represents a purely feedforward,
unsupervised learning.

Rule: When an axon of cell A is near enough to excite a cell B and
repeatedly or persistently takes part in firing it, some growth
process or metabolic change takes place in one or both cells such
that A's efficiency, as one of the cells firing B, is increased.
Example: the network is to be trained using a set of three input vectors,
for an arbitrary choice of learning constant c = 1.
[The input vectors and initial weight vector were given in a figure on the original slide.]
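Since the vectors themselves appeared only in a figure, here is a hedged sketch with hypothetical input vectors and initial weights, showing three Hebbian steps with c = 1 and a bipolar (sign) activation:

```python
def sign(net):
    return 1 if net >= 0 else -1              # bipolar activation

c = 1.0
w = [1.0, -1.0, 0.0]                          # hypothetical initial weights
inputs = [[1.0, -2.0, 1.5],                   # hypothetical input vectors
          [1.0, -0.5, -2.0],
          [0.0, 1.0, -1.0]]
for x in inputs:
    o = sign(sum(wi * xi for wi, xi in zip(w, x)))   # neuron output
    w = [wi + c * o * xi for wi, xi in zip(w, x)]    # Δw = c * o * x
    print("output:", o, "weights:", w)
```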


Features of HEBB Learning

• Feedforward, unsupervised learning.
• "When an axon of cell A is near enough to excite a cell B and repeatedly
or persistently takes part in firing it, some growth process or change
takes place in one or both cells, increasing the efficiency."
• If oi xj is positive, the result is an increase in weight; otherwise the weight decreases.

