
Hopfield Networks

Presentation By
Utkarsh Trivedi
Y8544
Topics Covered
What is a Hopfield Network
Some interesting facts
Major Applications
Mathematical model of HNs
Learning HNs through examples


What is a Hopfield Network?
According to Wikipedia, a Hopfield net is a form of recurrent artificial
neural network invented by John Hopfield. Hopfield nets serve
as content-addressable memory systems with binary threshold
units. They are guaranteed to converge to a local minimum, but
convergence to one of the stored patterns is not guaranteed.

What are HNs (informally)?
These are single-layered recurrent networks.
Every neuron in the network receives feedback from all the other neurons in the network.
The state of each neuron is either +1 or -1 (rather than 1 and 0) in order for the network to work correctly.
The number of input nodes is always equal to the number of output nodes.
The following figure shows a Hopfield network with four nodes.
Some Interesting Facts
The recall process of a Hopfield network is similar to our own recall mechanism.
Both are based on content-addressable memory.
If some of the neurons in the network are destroyed, the performance degrades, but some network capabilities may be retained even with major network damage, just like our brains.
Did you know that we are similar?
Major Applications
Recalling or Reconstructing corrupted patterns
Large-scale computational intelligence systems
Handwriting Recognition Software
Practical applications of HNs are limited because the number of
training patterns can be at most about 14% of the number of nodes in
the network (for example, a 120-node network can reliably store only about 17 patterns).
If the network is overloaded, i.e. trained with more than the
maximum acceptable number of attractors, then it will not converge
to clearly defined attractors.
Mathematical Modeling of HNs
For a network of n neurons with bipolar states v_j, external inputs i_j, and thresholds T_j, the net input to neuron i is

    h_i = sum_{j=1..n} w_ij v_j + i_i - T_i

or, in vector form,

    h = W v + i - t

where

    h = [h_1, h_2, ..., h_n]^T,    i = [i_1, i_2, ..., i_n]^T

and W is the symmetric weight matrix with zero diagonal (no self-connections):

    W = | 0     w_12  w_13  ...  w_1n |
        | w_21  0     w_23  ...  w_2n |
        | ...                         |
        | w_n1  w_n2  w_n3  ...  0    |
Take the signum function as the neuron activation function, i.e.

    v_i = +1 if h_i > 0
    v_i = -1 if h_i < 0
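The net-input computation and the signum activation can be written compactly with NumPy. A minimal sketch (the 4-node weights, inputs, and thresholds below are made-up illustrative values, not taken from the slides):

```python
import numpy as np

# Made-up 4-node example: symmetric weights with a zero diagonal.
W = np.array([[ 0, -1,  1,  1],
              [-1,  0, -1,  1],
              [ 1, -1,  0, -1],
              [ 1,  1, -1,  0]])
v = np.array([1, -1, 1, -1])    # current bipolar states
i_ext = np.zeros(4)             # external inputs (assumed zero here)
t = np.zeros(4)                 # thresholds (assumed zero here)

h = W @ v + i_ext - t           # net input h = Wv + i - t
v_new = np.where(h > 0, 1, -1)  # signum activation (h == 0 mapped to -1 in this sketch)
print(h, v_new)
```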
Mathematical Modeling of HNs
The network is updated asynchronously, one neuron at a time:

    v_i^(k+1) = sgn( sum_{j=1..n} w_ij v_j^k + i_i - T_i )

Liapunov energy function:

    E = -(1/2) v^T W v - i^T v + t^T v
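Each asynchronous update can only lower (or leave unchanged) this energy, which is why the network always settles into a local minimum. Below is a minimal sketch of the update rule and the energy function, assuming NumPy; the zero-input, zero-threshold case is used unless vectors are supplied, and the function and variable names are mine, not from the slides:

```python
import numpy as np

def energy(W, v, i_ext, t):
    # Liapunov energy: E = -1/2 v^T W v - i^T v + t^T v
    return -0.5 * v @ W @ v - i_ext @ v + t @ v

def recall(W, v0, i_ext=None, t=None, max_sweeps=100):
    """Asynchronous recall: update one neuron at a time until nothing changes."""
    v = v0.copy()
    n = len(v)
    i_ext = np.zeros(n) if i_ext is None else i_ext
    t = np.zeros(n) if t is None else t
    for _ in range(max_sweeps):
        changed = False
        for k in np.random.permutation(n):                 # random asynchronous order
            h = W[k] @ v + i_ext[k] - t[k]                 # net input to neuron k
            new = 1 if h > 0 else (-1 if h < 0 else v[k])  # keep state when h == 0
            if new != v[k]:
                v[k] = new
                changed = True
        if not changed:            # fixed point reached (local minimum of E)
            break
    return v
```

Printing energy(W, v, i_ext, t) before and after each sweep of recall would show the monotone decrease that the Liapunov function guarantees.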
Power Of Hopfield Networks
We want to understand how to achieve this kind of performance from simple Hopfield networks.
Learning HNs through a simple example
[Figure: a three-node Hopfield network with units Oa, Ob, Oc and weights W1,1 through W3,3 labelled on the connections between them]
There are various ways to train these kinds of networks, such as the back-propagation algorithm, recurrent learning algorithms, and genetic algorithms, but there is one very simple algorithm for training these simple networks, called the one-shot method.
I will be using this algorithm to train the network.
Learning HNs through a simple example
Let's train this network on the following patterns:
Pattern 1: Oa(1) = -1, Ob(1) = -1, Oc(1) = 1
Pattern 2: Oa(2) = 1, Ob(2) = -1, Oc(2) = -1
Pattern 3: Oa(3) = -1, Ob(3) = 1, Oc(3) = 1
w1,1 = 0
w1,2 = OA(1) OB(1) + OA(2) OB(2) + OA(3) OB(3) = (-1)(-1) + (1)(-1) + (-1)(1) = -1
w1,3 = OA(1) OC(1) + OA(2) OC(2) + OA(3) OC(3) = (-1)(1) + (1)(-1) + (-1)(1) = -3
w2,2 = 0
w2,1 = OB(1) OA(1) + OB(2) OA(2) + OB(3) OA(3) = (-1)(-1) + (-1)(1) + (1)(-1) = -1
w2,3 = OB(1) OC(1) + OB(2) OC(2) + OB(3) OC(3) = (-1)(1) + (-1)(-1) + (1)(1) = 1
w3,3 = 0
w3,1 = OC(1) OA(1) + OC(2) OA(2) + OC(3) OA(3) = (1)(-1) + (-1)(1) + (1)(-1) = -3
w3,2 = OC(1) OB(1) + OC(2) OB(2) + OC(3) OB(3) = (1)(-1) + (-1)(-1) + (1)(1) = 1
Note that the weight matrix is symmetric (w1,2 = w2,1, w1,3 = w3,1, w2,3 = w3,2), as required for a Hopfield network.
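The same one-shot computation can be reproduced in a few lines. A minimal sketch (NumPy assumed; variable names are mine): it builds the weight matrix as the sum of outer products of the three patterns and zeroes the diagonal.

```python
import numpy as np

# The three training patterns (Oa, Ob, Oc) from above.
patterns = np.array([[-1, -1,  1],   # pattern 1
                     [ 1, -1, -1],   # pattern 2
                     [-1,  1,  1]])  # pattern 3

W = patterns.T @ patterns            # w_ij = sum over patterns of O_i O_j
np.fill_diagonal(W, 0)               # no self-connections
print(W)
# [[ 0 -1 -3]
#  [-1  0  1]
#  [-3  1  0]]   -- symmetric, matching the hand-computed weights
```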
Learning HNs through an example
Moving on to a slightly more complex problem, described in Haykin's Neural Networks book.
The book uses N = 120 neurons and trains the network with 120-pixel images, where each pixel is represented by one neuron.
The following 8 patterns were used to train the neural network.
Learning HNs through an example
To test the recognition power of HNs, they needed corrupted images. They flipped the value of each pixel with probability p = 0.25.
The trained HN was then run on these corrupted images, and after a certain number of iterations the output converged to one of the learned patterns.
The next slides show the results that they obtained.
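A rough sketch of that experiment: the actual 120-pixel digit images from Haykin's book are not reproduced here, so random bipolar patterns stand in for them, but the corruption step (flip each pixel with p = 0.25) and the asynchronous recall follow the same idea.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 120, 8
stored = rng.choice([-1, 1], size=(P, N))   # stand-ins for the 8 training images

W = stored.T @ stored                       # one-shot (Hebbian) weights
np.fill_diagonal(W, 0)

# Corrupt the first stored pattern: flip each pixel with probability 0.25.
v = stored[0].copy()
v[rng.random(N) < 0.25] *= -1

# Asynchronous recall until the state stops changing.
for _ in range(50):
    prev = v.copy()
    for k in rng.permutation(N):
        h = W[k] @ v
        if h != 0:
            v[k] = 1 if h > 0 else -1
    if np.array_equal(v, prev):
        break

print(stored @ v / N)   # overlap with each stored pattern; +1.0 means an exact match
```

With 8 patterns in 120 neurons, which is well under the ~14% limit mentioned earlier, the corrupted probe typically settles back onto the pattern it came from.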
Learning HNs through an example
[Result figures: the corrupted input images and the stored patterns they converge to]
Flow Chart summarizing overall process
1. Train the HN using the standard patterns.
2. Update the weight vectors of the network.
3. Run the trained network with a corrupted pattern.
4. The network returns the reconstructed pattern.
Shortcomings of HNs
The number of training patterns can be at most about 14% of the number of nodes in the network.
If more patterns are used, then:
the stored patterns become unstable;
spurious stable states appear (i.e., stable states which do not correspond to stored patterns).
The network can sometimes misinterpret a corrupted pattern.
(The sketch below illustrates the effect of overloading the network.)
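A quick way to see the overload effect, sketched under the same stand-in assumption of random bipolar patterns: store increasingly many patterns in a 100-neuron network and count how many of them are still fixed points of the update rule.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
for P in (5, 14, 40):                        # below, near, and well above ~0.14 * N
    stored = rng.choice([-1, 1], size=(P, N))
    W = stored.T @ stored                    # one-shot (Hebbian) weights
    np.fill_diagonal(W, 0)
    # A stored pattern is stable if one synchronous update leaves it unchanged.
    stable = sum(np.array_equal(np.where(W @ p > 0, 1, -1), p) for p in stored)
    print(f"{P} patterns stored, {stable} remain stable")
```

The exact counts vary with the random seed, but well past the ~14% load most of the nominally stored patterns stop being stable fixed points, and the states the network does settle into are the spurious ones described above.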
References
Zurada: Introduction to Artificial Neural Systems
Haykin: Neural Networks: A Comprehensive Foundation
J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities", 1982
R. Rojas: Neural Networks, Springer-Verlag, Berlin, 1996
Wikipedia: http://en.wikipedia.org/wiki/Hopfield_networks
Thank you.
Questions
What is the major difference between an HN and a fully recurrent network?
What is content-addressable memory and how is it different from RAM?
What will happen if we train an HN on only one pattern?
If we train an HN with a pattern, will it automatically be trained for its inverse?
Why can't we increase the number of nodes in the network in order to overcome its shortcomings?
(Ignore the increased computational complexity and time.)
