
3 Prototype Vectors

Measurement Vector:

p = [ shape ; texture ; weight ]

Prototype Banana: p1 = [ -1 ; 1 ; -1 ]    Prototype Apple: p2 = [ 1 ; 1 ; -1 ]

Shape:   {1 : round ;  -1 : elliptical}
Texture: {1 : smooth ; -1 : rough}
Weight:  {1 : > 1 lb. ; -1 : < 1 lb.}
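The ±1 coding above can be written down directly as NumPy vectors (a small sketch; the names `banana` and `apple` are ours, not from the slides):

```python
import numpy as np

# Prototype patterns encoded with the +-1 scheme above:
# p = [shape, texture, weight]
banana = np.array([-1, 1, -1])   # elliptical, smooth, < 1 lb
apple  = np.array([ 1, 1, -1])   # round, smooth, < 1 lb
```

Note that the two fruits differ in only one element (shape), which is what makes the "rough banana" test later in the chapter interesting.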
3 Perceptron

[Figure: single-layer perceptron. Input p (R x 1) feeds a symmetric
hard-limit layer: weights W (S x R), bias b (S x 1), net input
n = Wp + b (S x 1), output a (S x 1).]

a = hardlims(Wp + b)
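The transfer function and layer on this slide can be sketched in a few lines of NumPy (the helper names `hardlims` and `perceptron` are ours):

```python
import numpy as np

def hardlims(n):
    """Symmetric hard limit: +1 where n >= 0, -1 elsewhere."""
    return np.where(n >= 0, 1, -1)

def perceptron(W, b, p):
    """Single-layer perceptron: a = hardlims(W p + b)."""
    return hardlims(W @ p + b)

# One neuron (S = 1) with two inputs (R = 2)
W = np.array([[1.0, 2.0]])
b = np.array([-2.0])
a = perceptron(W, b, np.array([2.0, 2.0]))   # n = 2 + 4 - 2 = 4, so a = [1]
```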
3 Two-Input Case
[Figure: two-input neuron (left) and its decision boundary in the
(p1, p2) plane (right). The boundary crosses the p1-axis at 2 and the
p2-axis at 1, with n > 0 on the side the weight vector points toward
and n < 0 on the other.]

a = hardlims(Wp + b)        w1,1 = 1    w1,2 = 2    b = -2

a = hardlims(n) = hardlims( [1 2] p + (-2) )

Decision Boundary:

Wp + b = 0        [1 2] p + (-2) = 0
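A quick numerical check of the boundary p1 + 2 p2 - 2 = 0 with the slide's weights (a sketch; the variable names are ours):

```python
import numpy as np

# Slide's weights: w11 = 1, w12 = 2, b = -2
W = np.array([1.0, 2.0])
b = -2.0

hardlims = lambda n: 1 if n >= 0 else -1

# Points on either side of the boundary p1 + 2*p2 - 2 = 0
a_above = hardlims(W @ np.array([2.0, 2.0]) + b)   # n = 4  -> +1
a_below = hardlims(W @ np.array([0.0, 0.0]) + b)   # n = -2 -> -1
n_on    = W @ np.array([2.0, 0.0]) + b             # exactly on the boundary
```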
3 Apple/Banana Example
a = hardlims( [w1,1 w1,2 w1,3] [p1 ; p2 ; p3] + b )

The decision boundary should separate the prototype vectors. Here the
plane p1 = 0 does the job:

[-1 0 0] [p1 ; p2 ; p3] + 0 = 0

The weight vector should be orthogonal to the decision boundary, and
should point in the direction of the vector which should produce an
output of 1. The bias determines the position of the boundary.

[Figure: the plane p1 = 0 in measurement space, separating the apple
prototype p2 (on the p1 = 1 side) from the banana prototype p1 (on the
p1 = -1 side).]
3 Testing the Network
Banana:

a = hardlims( [-1 0 0] [-1 ; 1 ; -1] + 0 ) = 1    (banana)

Apple:

a = hardlims( [-1 0 0] [1 ; 1 ; -1] + 0 ) = -1    (apple)

"Rough" Banana:

a = hardlims( [-1 0 0] [-1 ; -1 ; -1] + 0 ) = 1    (banana)
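The three tests above can be reproduced in a few lines, using the weights W = [-1 0 0] and b = 0 chosen on the previous slide (a sketch; names are ours):

```python
import numpy as np

hardlims = lambda n: np.where(n >= 0, 1, -1)

W = np.array([[-1.0, 0.0, 0.0]])   # weight vector chosen on the previous slide
b = np.array([0.0])

tests = {
    "banana":       np.array([-1.0,  1.0, -1.0]),
    "apple":        np.array([ 1.0,  1.0, -1.0]),
    "rough banana": np.array([-1.0, -1.0, -1.0]),
}
# +1 means banana, -1 means apple
results = {name: int(hardlims(W @ p + b)[0]) for name, p in tests.items()}
```

The rough banana still classifies as a banana because only the texture element changed, and the chosen weight vector ignores texture and weight entirely.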
3 Hamming Network

[Figure: Hamming network. A feedforward layer (weights W1, S x R;
bias b1, S x 1) feeds a recurrent layer (weights W2, S x S) through a
one-step delay D.]

Feedforward layer:    a1 = purelin(W1 p + b1)

Recurrent layer:      a2(0) = a1        a2(t + 1) = poslin(W2 a2(t))
3 Feedforward Layer

Feedforward Layer for Banana/Apple Recognition (S = 2)

[Figure: feedforward layer with input p (R x 1), weights W1 (S x R),
bias b1 (S x 1), and output a1 = purelin(W1 p + b1) (S x 1).]

W1 = [ p1^T ; p2^T ] = [ -1 1 -1 ; 1 1 -1 ]

b1 = [ R ; R ] = [ 3 ; 3 ]

a1 = W1 p + b1 = [ p1^T p + 3 ; p2^T p + 3 ]
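The first layer can be sketched directly from these definitions (helper names are ours). Each output element is R plus the inner product of the input with one prototype, so it ranges from 0 (opposite pattern) to 2R = 6 (exact match):

```python
import numpy as np

p1 = np.array([-1.0, 1.0, -1.0])    # banana prototype
p2 = np.array([ 1.0, 1.0, -1.0])    # apple prototype
R  = 3

W1 = np.vstack([p1, p2])            # prototypes stored as rows of W1
b1 = np.array([R, R], dtype=float)  # b1 = [3, 3]

def feedforward(p):
    """a1 = purelin(W1 p + b1); purelin is the identity."""
    return W1 @ p + b1

a1 = feedforward(np.array([-1.0, -1.0, -1.0]))   # "rough" banana
```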
3 Recurrent Layer
[Figure: recurrent layer with weights W2 (S x S), one-step delay D,
initial condition a2(0) = a1, and output a2(t + 1) (S x 1).]

a2(0) = a1        a2(t + 1) = poslin(W2 a2(t))

W2 = [ 1 -ε ; -ε 1 ],    ε < 1 / (S - 1)

a2(t + 1) = poslin( [ 1 -ε ; -ε 1 ] a2(t) )
          = poslin( [ a2,1(t) - ε a2,2(t) ; a2,2(t) - ε a2,1(t) ] )
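The competition can be sketched as a short iteration (a sketch; `compete` is our name, and ε = 0.5 is one choice satisfying ε < 1/(S-1) for S = 2). Each neuron subtracts a fraction of the other's output until only the largest initial value remains positive:

```python
import numpy as np

poslin = lambda n: np.maximum(n, 0.0)   # positive linear transfer function

S   = 2
eps = 0.5                               # any eps < 1/(S-1) works here
W2  = np.array([[1.0, -eps],
                [-eps, 1.0]])

def compete(a1, max_steps=20):
    """Iterate a2(t+1) = poslin(W2 a2(t)) from a2(0) = a1 until it settles."""
    a2 = np.asarray(a1, dtype=float)
    for _ in range(max_steps):
        nxt = poslin(W2 @ a2)
        if np.array_equal(nxt, a2):     # converged (winner-take-all reached)
            break
        a2 = nxt
    return a2

winner = compete([4.0, 2.0])            # only the larger entry survives
```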
3 Hamming Operation

First Layer

Input ("Rough" Banana):

p = [ -1 ; -1 ; -1 ]

a1 = [ -1 1 -1 ; 1 1 -1 ] [ -1 ; -1 ; -1 ] + [ 3 ; 3 ]
   = [ 1 + 3 ; -1 + 3 ] = [ 4 ; 2 ]
3 Hamming Operation
Second Layer


a2(1) = poslin( W2 a2(0) ) = poslin( [ 1 -0.5 ; -0.5 1 ] [ 4 ; 2 ] )
      = poslin( [ 3 ; 0 ] ) = [ 3 ; 0 ]

a2(2) = poslin( W2 a2(1) ) = poslin( [ 1 -0.5 ; -0.5 1 ] [ 3 ; 0 ] )
      = poslin( [ 3 ; -1.5 ] ) = [ 3 ; 0 ]
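The whole Hamming run for the rough banana can be reproduced end to end (a sketch; ε = 0.5 as on these slides, variable names ours). The first layer scores the input against each prototype, then two recurrent steps settle the competition:

```python
import numpy as np

poslin = lambda n: np.maximum(n, 0.0)

# Hamming network for banana/apple (weights from the slides)
W1 = np.array([[-1.0, 1.0, -1.0],    # banana prototype
               [ 1.0, 1.0, -1.0]])   # apple prototype
b1 = np.array([3.0, 3.0])
W2 = np.array([[1.0, -0.5],
               [-0.5, 1.0]])

p = np.array([-1.0, -1.0, -1.0])     # "rough" banana

a2 = W1 @ p + b1                     # first layer: a2(0) = a1 = [4, 2]
trace = [a2.tolist()]
for _ in range(2):
    a2 = poslin(W2 @ a2)             # recurrent competition
    trace.append(a2.tolist())
# trace goes [4, 2] -> [3, 0] -> [3, 0]: neuron 0 (banana) wins
```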
3 Hopfield Network

[Figure: Hopfield network. The input p (S x 1) sets the initial
condition of a recurrent layer with weights W (S x S), bias b (S x 1),
and one-step delay D.]

a(0) = p        a(t + 1) = satlins(Wa(t) + b)
3 Apple/Banana Problem
W = [ 1.2 0 0 ; 0 0.2 0 ; 0 0 0.2 ],    b = [ 0 ; 0.9 ; -0.9 ]

a1(t + 1) = satlins( 1.2 a1(t) )

a2(t + 1) = satlins( 0.2 a2(t) + 0.9 )

a3(t + 1) = satlins( 0.2 a3(t) - 0.9 )

Test: "Rough" Banana

a(0) = [ -1 ; -1 ; -1 ]    a(1) = [ -1 ; 0.7 ; -1 ]
a(2) = [ -1 ; 1 ; -1 ]     a(3) = [ -1 ; 1 ; -1 ]    (Banana)
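The convergence shown above can be checked by iterating the slide's update rule (a sketch; `hopfield` is our name). The second element is pulled toward +1 by its bias while the saturated elements stay put:

```python
import numpy as np

satlins = lambda n: np.clip(n, -1.0, 1.0)   # symmetric saturating linear

# Hopfield weights and biases from the slide
W = np.diag([1.2, 0.2, 0.2])
b = np.array([0.0, 0.9, -0.9])

def hopfield(p, steps=3):
    """a(0) = p; a(t+1) = satlins(W a(t) + b)."""
    a = np.asarray(p, dtype=float)
    history = [a]
    for _ in range(steps):
        a = satlins(W @ a + b)
        history.append(a)
    return history

hist = hopfield([-1.0, -1.0, -1.0])   # "rough" banana
# a(1) ~ [-1, 0.7, -1]; a(2) and a(3) equal [-1, 1, -1]  (banana)
```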
3 Summary
• Perceptron
– Feedforward Network
– Linear Decision Boundary
– One Neuron for Each Decision
• Hamming Network
– Competitive Network
– First Layer – Pattern Matching (Inner Product)
– Second Layer – Competition (Winner-Take-All)
– # Neurons = # Prototype Patterns
• Hopfield Network
– Dynamic Associative Memory Network
– Network Output Converges to a Prototype Pattern
– # Neurons = # Elements in each Prototype Pattern
