19/10/2009

Hopfield Models
General idea: Artificial Neural Networks → Dynamical Systems
Initial conditions → Equilibrium points
Continuous Hopfield Model
The state x_i(t) of neuron i obeys

C_i \frac{dx_i(t)}{dt} = -\frac{x_i(t)}{R_i} + \sum_{j=1}^{N} w_{ij}\,\varphi_j(x_j(t)) + I_i
a) The synaptic weight matrix is symmetric: w_{ij} = w_{ji}, for all i and j.
b) Each neuron has a nonlinear activation of its own, i.e. y_i = \varphi_i(x_i). Here, \varphi_i(\cdot) is chosen as a sigmoid function.
c) The inverse of the nonlinear activation function exists, so we may write x_i = \varphi_i^{-1}(y_i).
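As a quick numerical check of this model, the sketch below (a minimal illustration; the network size, weights, step size, and choice \varphi = tanh are all assumptions, not from the slides) integrates the equation with a forward-Euler step and verifies that the energy defined on the next slide never increases:

```python
import numpy as np

# Minimal continuous Hopfield network; all parameter values are illustrative.
rng = np.random.default_rng(0)
N = 3
W = rng.standard_normal((N, N))
W = (W + W.T) / 2        # assumption (a): symmetric weights
R = np.ones(N)           # leak resistances R_i
C = np.ones(N)           # capacitances C_i
I = np.zeros(N)          # external inputs I_i

phi = np.tanh            # assumption (b): sigmoid activation, y_i = phi(x_i)

def energy(x):
    """Lyapunov function of the continuous model. For phi = tanh,
    int_0^y artanh(u) du = y*artanh(y) + 0.5*log(1 - y^2)."""
    y = phi(x)
    integral = y * np.arctanh(y) + 0.5 * np.log(1.0 - y**2)
    return -0.5 * y @ W @ y + np.sum(integral / R) - I @ y

x = 0.1 * rng.standard_normal(N)   # initial condition
dt = 0.01
energies = [energy(x)]
for _ in range(2000):
    # C_i dx_i/dt = -x_i/R_i + sum_j w_ij * phi(x_j) + I_i
    x = x + dt * (-x / R + W @ phi(x) + I) / C
    energies.append(energy(x))

# E is (numerically) non-increasing: the trajectory slides downhill to an equilibrium.
print(bool(np.all(np.diff(energies) <= 1e-8)))
```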
Continuous Hopfield Model

Lyapunov function:

E = -\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N} w_{ij}\,y_i y_j + \sum_{i=1}^{N}\frac{1}{R_i}\int_{0}^{y_i}\varphi_i^{-1}(y)\,dy - \sum_{i=1}^{N} I_i y_i

Differentiating along trajectories:

\frac{dE}{dt} = -\sum_{i=1}^{N}\left(\sum_{j=1}^{N} w_{ij}\,y_j - \frac{x_i}{R_i} + I_i\right)\frac{dy_i}{dt}
             = -\sum_{i=1}^{N} C_i\,\frac{dx_i}{dt}\,\frac{dy_i}{dt}
             = -\sum_{i=1}^{N} C_i\,\big[\varphi_i^{-1}\big]'(y_i)\left(\frac{dy_i}{dt}\right)^2 \le 0,

since \varphi_i^{-1} is monotonically increasing. Hence E never increases, and the network converges to an equilibrium point.
Discrete Hopfield Model

- Recurrent network
- Fully connected
- Symmetrically connected (w_{ij} = w_{ji}, or W = W^T)
- Zero self-feedback (w_{ii} = 0)
- One layer

Binary states:
  x_i = 1: firing at maximum value
  x_i = 0: not firing
or bipolar states:
  x_i = 1: firing at maximum value
  x_i = -1: not firing
Discrete Hopfield Model (Bipolar)

Transfer function for neuron i:

x_i = \begin{cases} 1 & \text{if } \sum_{j\ne i} w_{ij}\,x_j > \theta_i \\ -1 & \text{if } \sum_{j\ne i} w_{ij}\,x_j < \theta_i \\ x_i & \text{otherwise} \end{cases}

x = (x_1, x_2, ..., x_N): bipolar vector, the network state.
\theta_i: threshold value of x_i.

With all \theta_i = 0, this is x_i = \mathrm{sgn}\Big(\sum_{j\ne i} w_{ij}\,x_j\Big), or in matrix form x = \mathrm{sgn}(Wx).
Discrete Hopfield Model

Energy function:

E = -\frac{1}{2}\sum_{i}\sum_{j\ne i} w_{ij}\,x_i x_j + \sum_{i}\theta_i x_i

For simplicity, we consider all thresholds \theta_i = 0:

E = -\frac{1}{2}\sum_{i}\sum_{j\ne i} w_{ij}\,x_i x_j
Discrete Hopfield Model

Learning prescription (Hebbian learning):

Pattern \xi^{s} = (\xi^{s}_1, \xi^{s}_2, ..., \xi^{s}_N), where the \xi^{s}_i take the value 1 or -1, and \{\xi^{s} : s = 1, 2, ..., M\} are the M memory patterns.

w_{ij} = \frac{1}{N}\sum_{s=1}^{M}\xi^{s}_i\,\xi^{s}_j, \qquad w_{ii} = 0

In matrix form:

W = \frac{1}{N}\sum_{s=1}^{M}\xi^{s}\big(\xi^{s}\big)^{T} - \frac{M}{N}\,I
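A minimal sketch of this prescription (the two example patterns are arbitrary choices, not from the slides):

```python
import numpy as np

def hebbian_weights(patterns):
    """W = (1/N) * sum_s xi^s (xi^s)^T - (M/N) * I  (zero self-feedback)."""
    X = np.asarray(patterns)          # shape (M, N), entries +1/-1
    M, N = X.shape
    return X.T @ X / N - (M / N) * np.eye(N)

W = hebbian_weights([[1, -1, 1], [-1, 1, -1]])
print(W)    # symmetric, with zero diagonal
```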
Discrete Hopfield Model

The energy function is lowered by this learning rule: for a stored pattern x = \xi^{s} (with M = 1),

E = -\frac{1}{2}\sum_{i}\sum_{j\ne i} w_{ij}\,\xi^{s}_i\,\xi^{s}_j = -\frac{1}{2N}\sum_{i}\sum_{j\ne i}\big(\xi^{s}_i\,\xi^{s}_j\big)^2 = -\frac{N-1}{2} < 0
Discrete Hopfield Model

Pattern association (asynchronous update). If neuron i is updated at step k,

\Delta E = E(k+1) - E(k) = -\big[x_i(k+1) - x_i(k)\big]\sum_{j\ne i} w_{ij}\,x_j(k) \le 0,

since x_i(k+1) = \mathrm{sgn}\big(\sum_{j\ne i} w_{ij}\,x_j(k)\big): the change in x_i always has the same sign as its net input, so every asynchronous update decreases (or preserves) the energy.
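The asynchronous rule and its energy guarantee can be sketched directly. The weight matrix here is the Hebbian one for two arbitrary example patterns; everything else follows the update rule above:

```python
import numpy as np

def energy(W, x):
    # E = -1/2 * sum_{i, j != i} w_ij x_i x_j  (w_ii = 0, thresholds = 0)
    return -0.5 * x @ W @ x

def recall(W, x, steps=100, seed=0):
    """Asynchronous update: one randomly chosen neuron per step."""
    x = np.asarray(x, dtype=float).copy()
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i = rng.integers(len(x))
        h = W[i] @ x                     # net input to neuron i
        if h != 0:                       # x_i is left unchanged when h == 0
            e_before = energy(W, x)
            x[i] = np.sign(h)
            assert energy(W, x) <= e_before + 1e-12   # Delta E <= 0
    return x

# Hebbian weights for the patterns (1, -1, 1) and (-1, 1, -1)
W = np.array([[0, -2, 2], [-2, 0, -2], [2, -2, 0]]) / 3
print(recall(W, [1, 1, 1]))    # converges to the stable state (1, -1, 1)
```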
Discrete Hopfield Model

Example: consider a network with three neurons whose weight matrix is

W = \frac{1}{3}\begin{pmatrix} 0 & -2 & 2 \\ -2 & 0 & -2 \\ 2 & -2 & 0 \end{pmatrix}

[Figure: the three-neuron network; edge weight 2/3 between neurons 1 and 3, and -2/3 between neurons 1-2 and 2-3.]
[Figure: state-transition diagram over the eight bipolar states; (1, -1, 1) and (-1, 1, -1) are stable, and every other state flows into one of them.]

The model with three neurons has two fundamental memories, (-1, 1, -1)^T and (1, -1, 1)^T.
State (1, -1, 1)^T:

Wx = \frac{1}{3}\begin{pmatrix} 0 & -2 & 2 \\ -2 & 0 & -2 \\ 2 & -2 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 4 \\ -4 \\ 4 \end{pmatrix}, \qquad \mathrm{sgn}(Wx) = \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} = x

A stable state.
State (-1, 1, -1)^T:

Wx = \frac{1}{3}\begin{pmatrix} 0 & -2 & 2 \\ -2 & 0 & -2 \\ 2 & -2 & 0 \end{pmatrix}\begin{pmatrix} -1 \\ 1 \\ -1 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} -4 \\ 4 \\ -4 \end{pmatrix}, \qquad \mathrm{sgn}(Wx) = \begin{pmatrix} -1 \\ 1 \\ -1 \end{pmatrix} = x

A stable state.
State (1, 1, 1)^T:

Wx = \frac{1}{3}\begin{pmatrix} 0 & -2 & 2 \\ -2 & 0 & -2 \\ 2 & -2 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 0 \\ -4 \\ 0 \end{pmatrix}, \qquad \mathrm{sgn}(Wx) = \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}

(components with zero net input keep their previous value). An unstable state; however, it converges to its nearest stable state (1, -1, 1)^T.
State (-1, 1, 1)^T:

Wx = \frac{1}{3}\begin{pmatrix} 0 & -2 & 2 \\ -2 & 0 & -2 \\ 2 & -2 & 0 \end{pmatrix}\begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 0 \\ 0 \\ -4 \end{pmatrix}, \qquad \mathrm{sgn}(Wx) = \begin{pmatrix} -1 \\ 1 \\ -1 \end{pmatrix}

(components with zero net input keep their previous value). An unstable state; however, it converges to its nearest stable state (-1, 1, -1)^T.
Thus, the synaptic weight matrix can be determined from the two patterns:

W = \frac{1}{3}\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}\begin{pmatrix} 1 & -1 & 1 \end{pmatrix} + \frac{1}{3}\begin{pmatrix} -1 \\ 1 \\ -1 \end{pmatrix}\begin{pmatrix} -1 & 1 & -1 \end{pmatrix} - \frac{2}{3}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 0 & -2 & 2 \\ -2 & 0 & -2 \\ 2 & -2 & 0 \end{pmatrix}
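The stability checks above can be reproduced mechanically by applying x → sgn(Wx) to every bipolar state (a small verification sketch; the convention that a zero net input leaves the component unchanged is the one used above):

```python
import numpy as np
from itertools import product

W = np.array([[0, -2, 2], [-2, 0, -2], [2, -2, 0]]) / 3

def step(x):
    h = W @ x
    # sgn with the convention that a zero net input keeps x_i unchanged
    return np.where(h > 0, 1, np.where(h < 0, -1, x))

for state in product([-1, 1], repeat=3):
    x = np.array(state)
    nxt = step(x)
    label = "stable" if np.array_equal(nxt, x) else "-> " + str(tuple(int(v) for v in nxt))
    print(state, label)
```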
Computer Experiments
Significance of the Hopfield Model
1) The Hopfield model establishes a bridge between various disciplines.
2) The speed of pattern recall in Hopfield models is independent of the number of patterns stored in the net.
Limitations of the Hopfield Model
1) Memory capacity;
2) Spurious memories;
3) Auto-associative memory;
4) Reinitialization;
5) Oversimplification.

The memory capacity depends directly on the number of neurons in the network. A theoretical result is

p < \frac{N}{2\log N}

When N is large, the capacity is approximately p = 0.14N.
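To put numbers on these bounds (taking the logarithm as natural, an assumption):

```python
import math

N = 100
p_bound = N / (2 * math.log(N))   # theoretical bound p < N / (2 log N)
p_large = 0.14 * N                # rough large-N estimate quoted above
print(round(p_bound, 1), round(p_large))   # about 10.9 patterns vs 14
```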
Travelling Salesman Problem
Find the shortest route through n cities, visiting each city exactly once and returning to the starting city.
A classic combinatorial optimization problem; algorithms that find an exact solution are NP-hard.
Travelling Salesman Problem

x_{ij}: output of neuron (i, j), equal to 1 when city i occupies position j in the tour. N cities require N^2 neurons. d_{ij}: distance between cities i and j.

Energy function:

E = \frac{W_1}{2}\left(\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k\ne j} x_{ij}\,x_{ik} + \sum_{j=1}^{n}\sum_{i=1}^{n}\sum_{k\ne i} x_{ij}\,x_{kj}\right) + \frac{W_2}{2}\sum_{i=1}^{n}\sum_{k\ne i}\sum_{j=1}^{n} d_{ik}\,x_{ij}\left(x_{k,j+1} + x_{k,j-1}\right)

with the cyclic boundary conditions x_{i,n+1} = x_{i,1} and x_{i,0} = x_{i,n}.

For example, the tour "city 1, then 4, then 2, then 3" is encoded as:

City/Position  1  2  3  4
      1        1  0  0  0
      2        0  0  1  0
      3        0  0  0  1
      4        0  1  0  0
The potentials y_{ij} are updated by descending the energy:

y_{ij}(t+1) = k\,y_{ij}(t) - W_1\left(\sum_{k\ne j}^{N} x_{ik}(t) + \sum_{k\ne i}^{N} x_{kj}(t)\right) - W_2\sum_{k\ne i}^{N} d_{ik}\left(x_{k,j+1}(t) + x_{k,j-1}(t)\right) + W_3

x_{ij}(t) = \frac{1}{1 + e^{-y_{ij}(t)}}

and the outputs are discretized against the mean activation:

\bar{x}_{ij}(t) = \begin{cases} 1 & \text{if } x_{ij}(t) > \dfrac{1}{N^2}\displaystyle\sum_{k,l} x_{kl}(t) \\ 0 & \text{otherwise} \end{cases}
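The whole scheme can be sketched end to end. Everything below is an illustrative assumption: the penalty weights W1 and W2, the bias W3, the decay k, the random city coordinates, and the iteration count. Hopfield-Tank networks are notoriously sensitive to these choices, so the sketch demonstrates the mechanics rather than a reliable TSP solver:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
cities = rng.random((n, 2))
d = np.linalg.norm(cities[:, None] - cities[None, :], axis=-1)   # d_ij

W1, W2, W3, k = 2.0, 1.0, 2.0, 0.9     # assumed penalties, bias, and decay
y = 0.1 * rng.standard_normal((n, n))  # potential of neuron (city i, position j)

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-np.clip(y, -50, 50)))

for _ in range(500):
    x = sigmoid(y)
    row = x.sum(axis=1, keepdims=True) - x   # sum_{k != j} x_ik: one position per city
    col = x.sum(axis=0, keepdims=True) - x   # sum_{k != i} x_kj: one city per position
    # sum_k d_ik (x_{k,j+1} + x_{k,j-1}), cyclic in j (d_ii = 0 already excludes k = i)
    nbr = d @ (np.roll(x, -1, axis=1) + np.roll(x, 1, axis=1))
    y = k * y - W1 * (row + col) - W2 * nbr + W3

x = sigmoid(y)
tour = (x > x.mean()).astype(int)    # discretize against the mean activation
print(tour)
```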
