
Hopfield: an example

Suppose a Hopfield net is to be trained to recall the vectors (1, -1, -1, 1) and (1, 1, -1, -1).

Laurene Fausett, Fundamentals of Neural Networks, Prentice Hall

[Figure: four-unit Hopfield network (units 1-4) with connection weights w11, w12, ..., w44]

Klinkhachorn:CpE320

Hopfield: an example (cont)

Step 1: Calculate the weight matrix, W = X^T X (with w_ii = 0)

For the stored vectors (1, -1, -1, 1) and (1, 1, -1, -1):

        [ w11 w12 w13 w14 ]   [  0   0  -2   0 ]
    W = [ w21 w22 w23 w24 ] = [  0   0   0  -2 ]
        [ w31 w32 w33 w34 ]   [ -2   0   0   0 ]
        [ w41 w42 w43 w44 ]   [  0  -2   0   0 ]
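The weight computation in Step 1 can be sketched in Python with NumPy (a minimal illustration, not part of the original slides):

```python
import numpy as np

# Bipolar patterns the net should store (from the example)
patterns = np.array([[1, -1, -1, 1],
                     [1,  1, -1, -1]])

# Hebbian outer-product rule: W = X^T X, then zero the diagonal (w_ii = 0)
W = patterns.T @ patterns
np.fill_diagonal(W, 0)

print(W)
# [[ 0  0 -2  0]
#  [ 0  0  0 -2]
#  [-2  0  0  0]
#  [ 0 -2  0  0]]
```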

Hopfield: an example (cont)

Step 2: For the unknown input pattern X(0) = (1, -1, 1, 1), assign the output Y(0) = (1, -1, 1, 1).

Step 3: Iterate (update outputs) until convergence.

Assume unit 3 is randomly selected to be updated:

    y3(1) = F( [w31 w32 w33 w34] . X(0) )
          = F( [-2 0 0 0] . (1, -1, 1, 1)^T )
          = F(-2) = -1

Hopfield: an example (cont)

Step 3: New X(1) = Y(1) = (1, -1, -1, 1)

Assume unit 1 is randomly selected to be updated:

    y1(2) = F( [w11 w12 w13 w14] . X(1) )
          = F( [0 0 -2 0] . (1, -1, -1, 1)^T )
          = F(2) = 1

Hopfield: an example (cont)

Step 3: New X(2) = Y(2) = (1, -1, -1, 1)

Assume unit 2 is randomly selected to be updated:

    y2(3) = F( [w21 w22 w23 w24] . X(2) )
          = F( [0 0 0 -2] . (1, -1, -1, 1)^T )
          = F(-2) = -1

Hopfield: an example (cont)

Step 3: New X(3) = Y(3) = (1, -1, -1, 1)

Assume unit 4 is randomly selected to be updated:

    y4(4) = F( [w41 w42 w43 w44] . X(3) )
          = F( [0 -2 0 0] . (1, -1, -1, 1)^T )
          = F(2) = 1

Repeat until convergence:

    X(n) = Y(n) = (1, -1, -1, 1)  <---->  perfectly recalled
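The asynchronous recall loop of Steps 2-3 can be sketched as follows (a minimal illustration; a fixed update order stands in for the slides' random unit selection, and a zero net input leaves the unit unchanged):

```python
import numpy as np

# Weight matrix from Step 1
W = np.array([[ 0,  0, -2,  0],
              [ 0,  0,  0, -2],
              [-2,  0,  0,  0],
              [ 0, -2,  0,  0]])

def recall(x, order):
    """Asynchronously update one unit at a time: y_i = F(W[i] . y)."""
    y = np.array(x)
    for i in order:
        s = W[i] @ y
        # Bipolar threshold F; keep the previous state when s == 0
        y[i] = 1 if s > 0 else -1 if s < 0 else y[i]
    return y

# Unknown pattern (1, -1, 1, 1); units updated in the slides' order 3, 1, 2, 4
print(recall([1, -1, 1, 1], order=[2, 0, 1, 3]))   # -> [ 1 -1 -1  1]
```

The same loop reproduces the second example: starting from (-1, 1, -1, -1) with update order 2, 1, 4, 3, it settles on (1, 1, -1, -1).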

Hopfield: an example (cont)

Step 2: For the unknown input pattern X(0) = (-1, 1, -1, -1), assign the output Y(0) = (-1, 1, -1, -1).

Step 3: Iterate (update outputs) until convergence.

Assume unit 2 is randomly selected to be updated:

    y2(1) = F( [w21 w22 w23 w24] . X(0) )
          = F( [0 0 0 -2] . (-1, 1, -1, -1)^T )
          = F(2) = 1

Hopfield: an example (cont)

Step 3: New X(1) = Y(1) = (-1, 1, -1, -1)

Assume unit 1 is randomly selected to be updated:

    y1(2) = F( [w11 w12 w13 w14] . X(1) )
          = F( [0 0 -2 0] . (-1, 1, -1, -1)^T )
          = F(2) = 1

Hopfield: an example (cont)

Step 3: New X(2) = Y(2) = (1, 1, -1, -1)

Assume unit 4 is randomly selected to be updated:

    y4(3) = F( [w41 w42 w43 w44] . X(2) )
          = F( [0 -2 0 0] . (1, 1, -1, -1)^T )
          = F(-2) = -1

Hopfield: an example (cont)

Step 3: New X(3) = Y(3) = (1, 1, -1, -1)

Assume unit 3 is randomly selected to be updated:

    y3(4) = F( [w31 w32 w33 w34] . X(3) )
          = F( [-2 0 0 0] . (1, 1, -1, -1)^T )
          = F(-2) = -1

Repeat until convergence:

    X(n) = Y(n) = (1, 1, -1, -1)  <---->  perfectly recalled

Hamming Networks


Hamming Nets

A minimum-error classifier for binary vectors, where error is defined using Hamming distance.

Consider the following exemplars:

    Exemplar#
    1:  +1 +1 +1 +1 +1 +1
    2:  +1 +1 +1 -1 -1 -1
    3:  -1 -1 -1 +1 -1 +1
    4:  -1 -1 -1 +1 +1 +1

For example, given the input vector (1, 1, 1, 1, -1, 1), the Hamming distances from the four exemplars are 1, 2, 3, and 4 respectively. The input vector is therefore assigned to exemplar #1, since it gives the smallest Hamming distance.
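The minimum-Hamming-distance rule can be sketched directly (a small illustration using the four exemplars above; the function name is my own):

```python
import numpy as np

exemplars = np.array([[ 1,  1,  1,  1,  1,  1],
                      [ 1,  1,  1, -1, -1, -1],
                      [-1, -1, -1,  1, -1,  1],
                      [-1, -1, -1,  1,  1,  1]])

def hamming_classify(x):
    """Return the 0-based index of the closest exemplar and all distances."""
    # Hamming distance = number of positions where the bits disagree
    distances = np.sum(exemplars != np.array(x), axis=1)
    return int(np.argmin(distances)), distances.tolist()

winner, d = hamming_classify([1, 1, 1, 1, -1, 1])
print(winner + 1, d)   # -> 1 [1, 2, 3, 4]
```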

Hamming Net - Architecture

[Figure: Hamming net architecture - an n-input feature layer feeding a competitive category layer]

Hamming Net - Feature Layer

n inputs fully connected to m processing elements (m exemplars).

Each processing element calculates the number of bits at which the input vector and an exemplar agree.

The weights are set in a one-shot learning phase as follows:

Let Xp = (xp1, xp2, xp3, ..., xpn), p = 1..m, be the m exemplar vectors. If xpi takes on the values -1 or +1, then the learning phase consists of setting the weights to

    wji = 0.5*xji     for j = 1..m, i = 1..n
    wj0 = 0.5*n       for j = 1..m

Hamming Net - Feature Layer

Analysis

During recall, an input vector is processed through each processing element as follows:

    Sj = Σ(i=0..n) wji*xi                for j = 1..m
       = 0.5*{ Σ(i=1..n) xji*xi + n }    for j = 1..m

Since xji and xi take on the values -1 or +1, if n_aj is the number of bits at which xji and xi agree, and n_dj is the number of bits at which they disagree, then

    Sj = 0.5*(n_aj - n_dj + n)           for j = 1..m

But n = n_aj + n_dj, so

    Sj = 0.5*(n_aj - n_dj + n_aj + n_dj) = n_aj

Therefore the output Sj from each processing element represents the number of bits at which the input vector and exemplar agree!
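The identity Sj = n_aj can be checked numerically (a minimal sketch, using the two exemplars of the worked example later in the slides, (1, -1, -1, 1) and (1, 1, -1, -1)):

```python
import numpy as np

exemplars = np.array([[1, -1, -1, 1],
                      [1,  1, -1, -1]])
n = exemplars.shape[1]

# One-shot learning: w_ji = 0.5*x_ji, bias w_j0 = 0.5*n
W = 0.5 * exemplars
w0 = 0.5 * n

x = np.array([1, -1, 1, 1])              # unknown input
S = W @ x + w0                           # feature-layer outputs S_j
agree = np.sum(exemplars == x, axis=1)   # n_aj, counted directly

print(S, agree)   # -> [3. 1.] [3 1]
```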

Hamming Net - Category Layer

The processing element with the largest initial state (smallest Hamming distance to the input vector) wins out.

Competitive learning through lateral connections: each node j is laterally connected to every other node k in the layer through a connection of fixed strength

    wkj = 1      for k = j
    wkj = -ε     for k ≠ j, where 0 < ε < 1/m

Hamming Net - Category Layer

Competition through lateral inhibition

Initialize the network with the unknown input pattern:

    yj(0) = sj = Σ(i=0..n) wji*xi     for j = 1..m

After initialization of the category layer, the stimulus from the input layer is removed and the category layer is left to iterate until stabilization. At the t-th iteration, the output of the j-th processing element is

    yj(t+1) = Ft[ yj(t) - ε*Σ(k=1..m, k≠j) yk(t) ]

where yj(t) is the output of node j at time t, and

    Ft(s) = s   if s > 0
          = 0   if s ≤ 0

At convergence of the competition in the category layer, only the corresponding winner is active in the output layer.
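The lateral-inhibition competition can be sketched as a small iterative loop (a MAXNET-style illustration; ε = 1/2 matches the hardware example later in the slides, and the convergence cap is my own choice):

```python
import numpy as np

def maxnet(s, eps, max_iter=100):
    """Iterate y_j(t+1) = F[y_j(t) - eps * sum_{k != j} y_k(t)] until one node survives."""
    y = np.array(s, dtype=float)
    for _ in range(max_iter):
        # y.sum() - y is the sum over all other nodes k != j; F clips negatives to 0
        y_new = np.maximum(0.0, y - eps * (y.sum() - y))
        if np.count_nonzero(y_new) <= 1:
            return y_new
        y = y_new
    return y

print(maxnet([3.0, 1.0], eps=0.5))   # -> [2.5 0. ]
```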

Hamming Net

[Figure: complete Hamming net - feature layer feeding the MAXNET category layer]

Hamming Net: an example

Suppose a Hamming net is to be trained to recognize the vectors (1, -1, -1, 1) and (1, 1, -1, -1).

[Figure: inputs x1..x4 plus bias x0 = 1 feeding feature-layer units 1 and 2, which feed category-layer units 1 and 2]

Hamming Net: an example

Feature Layer: (1, -1, -1, 1) and (1, 1, -1, -1)

    W = [ w10 w11 w12 w13 w14 ] = [ 0.5*4  0.5*1  0.5*(-1)  0.5*(-1)  0.5*1    ]
        [ w20 w21 w22 w23 w24 ]   [ 0.5*4  0.5*1  0.5*1     0.5*(-1)  0.5*(-1) ]

                                = [ 2  0.5  -0.5  -0.5   0.5 ]
                                  [ 2  0.5   0.5  -0.5  -0.5 ]

Hamming Net: an example

Feature Layer: For the unknown input pattern (1, -1, 1, 1):

    [ s1 ]   [ w10 w11 w12 w13 w14 ]   [ x0 ]
    [ s2 ] = [ w20 w21 w22 w23 w24 ] . [ x1 x2 x3 x4 ]^T

    [ s1 ]   [ 2  0.5  -0.5  -0.5   0.5 ]                        [ 3 ]
    [ s2 ] = [ 2  0.5   0.5  -0.5  -0.5 ] . (1, 1, -1, 1, 1)^T = [ 1 ]

Hamming Net: an example

Category layer: Software implementation

Since s1 = 3 and s2 = 1,

    s1 = winner

Hamming Net: an example

Category layer: Hardware implementation

At t = 0:

    y1(0) = 3
    y2(0) = 1

Let ε = 1/2. Then at t = 1:

    y1(1) = Ft[ y1(0) - ε*y2(0) ] = Ft[ 3 - (1/2)*1 ] = 2.5
    y2(1) = Ft[ y2(0) - ε*y1(0) ] = Ft[ 1 - (1/2)*3 ] = 0

Since y1(1) is the only positive output,

    y1 = winner

Hamming Net VS Hopfield Net

Lippmann (1987): a Hopfield net cannot do any better than a Hamming net when used to optimally classify binary vectors.

A Hopfield network with n input nodes has n*(n-1) connections.

A Hopfield net has limited capacity, approximately 0.15*n (the number of exemplars it can store).

The capacity of a Hamming net does not depend on the size of the input vector; it equals the number of elements m in its category layer, which is independent of n.

The number of connections in a Hamming network equals m*(m+n).

Hamming Net VS Hopfield Net

Example:

A Hopfield network with 100 inputs might hold 10 exemplars and requires close to 10,000 connections. The equivalent Hamming net requires only 10*(10+100) = 1,100 connections.

A Hamming net with 10,000 connections and 100 input components would be able to hold approximately 62 exemplars!
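The connection counts above can be reproduced in a few lines (a small sketch; solving m*(m+n) = C for m is just the quadratic formula):

```python
import math

n = 100                                    # input components
hopfield_connections = n * (n - 1)         # 9,900 -- "close to 10,000"
hamming_connections = 10 * (10 + n)        # m = 10 exemplars -> 1,100

# Largest m with m*(m+n) ~ 10,000 connections: m^2 + n*m - C = 0
C = 10_000
m = round((-n + math.sqrt(n * n + 4 * C)) / 2)   # 61.8 -> approximately 62

print(hopfield_connections, hamming_connections, m)   # -> 9900 1100 62
```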
