Pattern Association
Heteroassociative memory NN
However, if the vector pairs are bipolar, the weight matrix W = {w_ij} is given by

w_ij = Σ_p s_i(p) t_j(p)

i.e. the weight matrix for the set of patterns is the sum of the weight matrices used to store each pattern pair separately.
Algorithm:
Step 0: Initialize the weights using either the Hebb rule or the Delta rule.
Step 1: For each input vector, do Steps 2-4.
Step 2: Set the activations of the input layer units equal to the current input vector: x_i = s_i
Step 3: Compute the net input to the output units: y_in_j = Σ_i x_i w_ij
Step 4: Determine the activations of the output units. For bipolar targets:
y_j = 1 if y_in_j > 0; 0 if y_in_j = 0; -1 if y_in_j < 0
The output vector y gives the pattern associated with the input vector x. The heteroassociative memory is not iterative.
If the responses of the net are binary, a suitable activation function is
f(x) = 1 if x > 0; 0 if x <= 0
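The recall procedure above is straightforward to implement. Below is a minimal sketch in Python with NumPy (the helper names hebb_weights and recall are illustrative, not from the notes):

import numpy as np

def hebb_weights(S, T):
    # Hebb rule: W = sum over pairs p of the outer product s(p)^T t(p)
    S, T = np.asarray(S), np.asarray(T)
    return S.T @ T                     # shape: (n input units, m output units)

def recall(x, W):
    # Steps 2-4 for one input vector (the net is not iterative)
    y_in = np.asarray(x) @ W           # Step 3: net input to each output unit
    return np.where(y_in > 0, 1, np.where(y_in < 0, -1, 0))   # Step 4 (bipolar)

Given training pairs stacked as rows of S and T, recall(s(p), hebb_weights(S, T)) reproduces the associated target t(p) for each stored input in the example that follows.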
An example (heteroassociative net using the Hebb rule):
Suppose the net is to be trained to store the following mapping from input bipolar vectors s = (s1, s2, s3, s4) to output bipolar vectors with two components t = (t1, t2), where

s(1) = (  1, -1, -1, -1 )   t(1) = (  1, -1 )
s(2) = (  1,  1, -1, -1 )   t(2) = (  1, -1 )
s(3) = ( -1, -1, -1,  1 )   t(3) = ( -1,  1 )
s(4) = ( -1, -1,  1,  1 )   t(4) = ( -1,  1 )
Hence

     [  4  -4 ]
W =  [  2  -2 ]
     [ -2   2 ]
     [ -4   4 ]
where the weight matrix to store the first pattern pair is given by the outer product of the vectors s = (1, -1, -1, -1) and t = (1, -1), i.e.

[  1 ]              [  1  -1 ]
[ -1 ]  ( 1 -1 )  = [ -1   1 ]
[ -1 ]              [ -1   1 ]
[ -1 ]              [ -1   1 ]
Similarly for the 2nd, 3rd, and 4th pairs. The weight matrix to store all four pattern pairs is the sum of the weight matrices to store each pattern pair separately, i.e.

[  4  -4 ]   [  1  -1 ]   [  1  -1 ]   [  1  -1 ]   [  1  -1 ]
[  2  -2 ] = [ -1   1 ] + [  1  -1 ] + [  1  -1 ] + [  1  -1 ]
[ -2   2 ]   [ -1   1 ]   [ -1   1 ]   [  1  -1 ]   [ -1   1 ]
[ -4   4 ]   [ -1   1 ]   [ -1   1 ]   [ -1   1 ]   [ -1   1 ]
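The same matrix can be assembled numerically as a check; this is just the sum of the four outer products shown above (continuing the NumPy sketch):

import numpy as np

S = np.array([[ 1, -1, -1, -1],    # s(1)
              [ 1,  1, -1, -1],    # s(2)
              [-1, -1, -1,  1],    # s(3)
              [-1, -1,  1,  1]])   # s(4)
T = np.array([[ 1, -1],            # t(1)
              [ 1, -1],            # t(2)
              [-1,  1],            # t(3)
              [-1,  1]])           # t(4)

W = sum(np.outer(s, t) for s, t in zip(S, T))   # equivalently S.T @ T
print(W)    # [[ 4 -4] [ 2 -2] [-2  2] [-4  4]]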
Testing the net using the training input: note that the net input to any particular output unit is the dot product of the input vector (a row vector) with the column of the weight matrix that holds the weights for the output unit in question. The row vector of all the net inputs is the product of the input vector and the weight matrix, i.e. the entire computation is represented as

x W = ( y_in1, y_in2 )

Hence, applying each training input vector:

1st:  (  1, -1, -1, -1 )·W = (   8,  -8 ) → (  1, -1 )
2nd:  (  1,  1, -1, -1 )·W = (  12, -12 ) → (  1, -1 )
3rd:  ( -1, -1, -1,  1 )·W = (  -8,   8 ) → ( -1,  1 )
4th:  ( -1, -1,  1,  1 )·W = ( -12,  12 ) → ( -1,  1 )
(the output vectors are obtained by applying the bipolar activation function: y_j = 1 if y_in_j > 0, -1 if y_in_j < 0)
The above results show that the correct response has been obtained for all of the training patterns.
However, using an input vector x = (-1, 1, -1, -1), which is similar to the training vector s = (1, 1, -1, -1) (the 2nd training vector) and differs from it only in the first component:

( -1, 1, -1, -1 )·W = ( 4, -4 ) → ( 1, -1 )

i.e. the net associates a known output pattern with this input.
Testing the net with the pattern x = (-1, 1, 1, -1), which differs from each of the training patterns in at least two components:

( -1, 1, 1, -1 )·W = ( 0, 0 ) → ( 0, 0 )

i.e. the output is not one of the outputs with which the net was trained; the net does not recognize the pattern.
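All of the tests above can be reproduced with the same weight matrix; the sketch below applies the bipolar activation to x·W for each test vector (the recall helper is the illustrative one defined earlier):

import numpy as np

W = np.array([[ 4, -4],
              [ 2, -2],
              [-2,  2],
              [-4,  4]])

def recall(x, W):
    y_in = np.asarray(x) @ W
    return y_in, np.where(y_in > 0, 1, np.where(y_in < 0, -1, 0))

for x in [( 1, -1, -1, -1),    # training vectors ...
          ( 1,  1, -1, -1),
          (-1, -1, -1,  1),
          (-1, -1,  1,  1),
          (-1,  1, -1, -1),    # one component differs -> still mapped to (1, -1)
          (-1,  1,  1, -1)]:   # two or more components differ -> (0, 0), not recognized
    print(x, *recall(x, W))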
Autoassociative memory NN
Usually the weights are set using the Hebb rule (the outer product), i.e.

W = Σ_{p=1..P} s^T(p) s(p)

[Figure: autoassociative net with input units X1, …, Xi, …, Xn fully connected to output units Y1, …, Yi, …, Yn]

The autoassociative net can be used to determine whether an input vector is "known" or "unknown" (i.e. not stored in the net). The procedure is:
Step 0: Set the weights using the Hebb rule.
Step 1: For each testing input vector, do Steps 2-4.
Step 2: Set the activations of the input units equal to the input vector.
Step 3: Compute the net input to each output unit: y_in_j = Σ_i x_i w_ij
Step 4: Apply the bipolar activation function to each net input: y_j = 1 if y_in_j > 0, -1 otherwise. If the response vector y reproduces a stored vector, the input vector is recognized.
For example, suppose the single vector s = ( 1, 1, 1, -1 ) is stored, so that W = s^T s. Using the net is simply computing ( 1, 1, 1, -1 )·W = ( 4, 4, 4, -4 ) → ( 1, 1, 1, -1 ), i.e. the input vector is recognized.
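A minimal sketch of this recognition test, assuming the single stored vector ( 1, 1, 1, -1 ) (the helper name is illustrative):

import numpy as np

s = np.array([1, 1, 1, -1])          # the stored vector
W = np.outer(s, s)                   # Hebb rule: W = s^T s

def response(x, W):
    # bipolar activation applied to the net inputs x.W
    return np.where(np.asarray(x) @ W > 0, 1, -1)

print(response(s, W))                # [ 1  1  1 -1]: the stored vector is reproduced, so it is recognized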
Similarly, it can be shown that the net recognizes the vectors ( -1, 1, 1, -1 ), ( 1, -1, 1, -1 ), ( 1, 1, -1, -1 ) and ( 1, 1, 1, 1 ), i.e. the net recognizes vectors with one mistake in the input vector.
The net can also be shown to recognize the vectors formed when one component is missing (set to 0), i.e. ( 0, 1, 1, -1 ), ( 1, 0, 1, -1 ), ( 1, 1, 0, -1 ) and ( 1, 1, 1, 0 ).
However, it can be shown that the net does not recognize an input vector with two mistakes, such as ( -1, -1, 1, -1 ). In general, a net is more tolerant of missing data than it is of mistakes in the data.
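These claims can be checked numerically; in the sketch below a test vector counts as recognized when the net's response equals the stored vector:

import numpy as np

s = np.array([1, 1, 1, -1])
W = np.outer(s, s)

def recognized(x):
    return np.array_equal(np.where(np.asarray(x) @ W > 0, 1, -1), s)

one_mistake = [(-1, 1, 1, -1), (1, -1, 1, -1), (1, 1, -1, -1), (1, 1, 1, 1)]
one_missing = [(0, 1, 1, -1), (1, 0, 1, -1), (1, 1, 0, -1), (1, 1, 1, 0)]

print(all(recognized(x) for x in one_mistake))   # True
print(all(recognized(x) for x in one_missing))   # True
print(recognized((-1, -1, 1, -1)))               # False: two mistakes are too many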
It is fairly common for an autoassociative net to have its diagonal terms set to zero, e.g.

      [  0   1   1  -1 ]
W0 =  [  1   0   1  -1 ]
      [  1   1   0  -1 ]
      [ -1  -1  -1   0 ]
Using this net, it still does not recognize an input vector with two mistakes, such as ( -1, -1, 1, -1 ).
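A quick check of the zero-diagonal variant (continuing the sketch; the two-mistake vector is still not mapped back to the stored vector):

import numpy as np

s = np.array([1, 1, 1, -1])
W0 = np.outer(s, s)
np.fill_diagonal(W0, 0)                  # set the diagonal terms to zero

x = np.array([-1, -1, 1, -1])            # two mistakes
y = np.where(x @ W0 > 0, 1, -1)
print(y, np.array_equal(y, s))           # [ 1  1 -1  1] False -> not recognized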
Storage capacity:
This is the number of vectors (or pattern pairs) that can be stored in the net before the net begins to forget.
More than one vector can be stored in an autoassociative net by adding the weight matrices for each vector together.
The capacity of an autoassociative net depends on the number of components the stored vectors have and on the relationships among the stored vectors: more vectors can be stored if they are mutually orthogonal.
An autoassociative net with four nodes can store three mutually orthogonal vectors (each orthogonal to the other two). However, the weight matrix for four mutually orthogonal vectors is always singular (with the diagonal terms set to zero, all the elements are zero).
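The orthogonality remark can be illustrated numerically. The sketch below uses one particular set of mutually orthogonal 4-component bipolar vectors (chosen only for illustration) and zeroes the diagonal as above: three of them can be stored and recalled, but adding the fourth wipes the matrix out:

import numpy as np

# one set of mutually orthogonal bipolar vectors in 4 dimensions
vecs = [np.array(v) for v in [(1, 1, 1, 1), (1, 1, -1, -1), (1, -1, -1, 1), (1, -1, 1, -1)]]

def store(vs):
    # sum of outer products, with the diagonal terms set to zero
    W = sum(np.outer(v, v) for v in vs)
    np.fill_diagonal(W, 0)
    return W

W3 = store(vecs[:3])
print(all(np.array_equal(np.where(v @ W3 > 0, 1, -1), v) for v in vecs[:3]))   # True: 3 vectors stored
print(store(vecs))    # all zeros: four mutually orthogonal vectors cannot be stored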