Jun 02, 2012

© Attribution Non-Commercial (BY-NC)

Manosur Naseri
Saeedeh Nazari
Mostafa Taghubi
Reza Zarei
Sara Sattarzedeh
Mehran Shakarami

Hopfield Networks

Elman Networks

Content

- Introduction (Hopfield Networks)
- Convergence
- Matlab Toolbox

Introduction: Hopfield Network

- Auto-association network
- Fully connected (clique) with symmetric weights
- Weight values based on the Hebbian principle
- Performance: must iterate a bit to converge on a pattern, but generally much less computation than in backpropagation networks
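The Hebbian weight rule mentioned above can be illustrated outside the toolbox. Below is a minimal Python sketch (not the MATLAB code used later in these slides); the outer-product form and the zero diagonal are the standard textbook construction for bipolar ±1 patterns:

```python
def hebbian_weights(patterns):
    """Outer-product (Hebbian) weights for bipolar patterns.

    patterns: list of lists of +1/-1 values, all the same length.
    Returns a symmetric weight matrix with a zero diagonal.
    """
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                      # no self-connections
                    w[i][j] += p[i] * p[j] / n  # symmetric: w[i][j] == w[j][i]
    return w

# Store one 4-dimensional pattern.
W = hebbian_weights([[1, -1, 1, -1]])
assert W[0][1] == W[1][0]        # symmetric weights
assert W[0][0] == 0.0            # zero diagonal
```

The symmetry and zero diagonal are what the convergence guarantees later in these slides rely on.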

Weights and Energy Function

Hopfield neurons may be discrete (TLU) or continuous (tanh, sigmoid, ...), with bipolar or unipolar outputs.

[Diagram: a feedforward network (inputs → outputs) contrasted with a feedback network, where the outputs loop back to the inputs]

State Space

- N = dimension of the patterns; the states are the corners of an N-dimensional hypercube
- Selected target points are stored as equilibria

Method of Convergence

- Example: 2D state space
- Existence of equilibrium points is guaranteed, BUT not all of them are stable equilibrium points
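Because the states are just the 2^N corners of the hypercube, for small N one can enumerate them and check directly which corners are equilibria. A minimal Python sketch (illustrative only; the two-neuron weights below store the patterns [1, -1] and [-1, 1]):

```python
from itertools import product

def is_equilibrium(w, s):
    """A state is an equilibrium if no single-neuron update changes it."""
    n = len(s)
    for i in range(n):
        h = sum(w[i][j] * s[j] for j in range(n))
        if (1 if h >= 0 else -1) != s[i]:
            return False
    return True

# Weights storing [1, -1] and [-1, 1] via the outer-product rule.
w = [[0, -1], [-1, 0]]
corners = list(product([-1, 1], repeat=2))   # the 2^N hypercube corners
stable = [s for s in corners if is_equilibrium(w, list(s))]
assert stable == [(-1, 1), (1, -1)]          # only the two targets remain
```

For larger networks this brute-force check is infeasible, which is why the later slides probe the domains of attraction by simulation instead.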

Convergence

The network stabilizes when the states of all neurons stay the same. IMPORTANT PROPERTY: a Hopfield network will ALWAYS stabilize after finite time.

Note: there is no way to escape a local minimum except by changing the input.

[Figures: the complement of a stored pattern is also an equilibrium; a disturbed pattern converges back to a stored pattern]
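The always-stabilizes property comes from an energy function E = -1/2 Σᵢⱼ wᵢⱼ sᵢ sⱼ that can only decrease (or stay the same) under single-neuron updates with symmetric, zero-diagonal weights. A small Python check (an illustrative sketch, not part of the slides' MATLAB code):

```python
def energy(w, s):
    """Hopfield energy E = -1/2 * sum_ij w[i][j] * s[i] * s[j]."""
    n = len(s)
    return -0.5 * sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

def async_step(w, s, i):
    """Update neuron i (threshold at zero); returns the new state list."""
    h = sum(w[i][j] * s[j] for j in range(len(s)))
    s = list(s)
    s[i] = 1 if h >= 0 else -1
    return s

# Symmetric weights storing the pattern [1, -1, 1]; start from a disturbed state.
w = [[0, -1, 1], [-1, 0, -1], [1, -1, 0]]
s = [-1, -1, 1]
for i in range(3):
    s_new = async_step(w, s, i)
    assert energy(w, s_new) <= energy(w, s)  # energy never increases
    s = s_new
assert s == [1, -1, 1]  # recovered the stored pattern
```

Since the energy is bounded below and strictly drops whenever a neuron flips, the network must settle in finitely many updates.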

Synchronous & Asynchronous Update

Synchronous:
- All neuron outputs are updated at once
- High convergence speed
- Low guarantee of convergence to a stable equilibrium point

Asynchronous:
- Only one neuron output is updated at a time
- Low convergence speed
- High guarantee of convergence to a stable equilibrium point
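The trade-off can be seen on a two-neuron network: a synchronous update can cycle forever between two states, while asynchronous updates settle. An illustrative Python sketch (the weights below store [1, -1] and [-1, 1]; sign(0) is taken as +1 by convention):

```python
def sign(x):
    return 1 if x >= 0 else -1

def sync_step(w, s):
    """Synchronous update: every neuron fires from the same old state."""
    return [sign(sum(w[i][j] * s[j] for j in range(len(s)))) for i in range(len(s))]

def async_sweep(w, s):
    """Asynchronous update: neurons fire one at a time, seeing fresh states."""
    s = list(s)
    for i in range(len(s)):
        s[i] = sign(sum(w[i][j] * s[j] for j in range(len(s))))
    return s

w = [[0, -1], [-1, 0]]   # stores the patterns [1, -1] and [-1, 1]
s0 = [1, 1]

# Synchronous updates oscillate between [1,1] and [-1,-1] forever...
assert sync_step(w, s0) == [-1, -1]
assert sync_step(w, sync_step(w, s0)) == [1, 1]

# ...while one asynchronous sweep already reaches a stable point.
s1 = async_sweep(w, s0)
assert s1 == [-1, 1]
assert async_sweep(w, s1) == s1   # fixed point
```

This is why asynchronous updating gives the stronger convergence guarantee, at the cost of updating one neuron per step.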

Feedforward vs. Feedback Networks

Feedforward networks:
- Solutions are known
- Weights are learned
- Evolve in the weight space
- Used for: prediction, classification, function approximation

Feedback networks:
- Solutions are unknown
- Weights are prescribed
- Evolve in the state space
- Used for: constraint satisfaction, optimization, feature matching

net = newhop(T)

T: an R×Q matrix of Q target vectors (values must be +1 or -1)

Given a set of target equilibrium points represented as a matrix T of vectors, newhop returns weights and biases for a recursive network.

Once the network has been designed, it can be tested with one or more input vectors; ideally, input vectors close to a target equilibrium point will converge to it. The ability to run batches of trial input vectors quickly allows you to check the design in a relatively short time. First you might check that the target equilibrium point vectors are indeed stored in the network. Then you could try other input vectors to determine the domains of attraction of the target equilibrium points and the locations of spurious equilibrium points, if any are present.

Consider a Hopfield network with just two neurons. The target equilibrium points are defined to be stored in the network as the two columns of the matrix T.

T = [1 -1; -1 1];
net = newhop(T);
W = net.LW{1,1}
W =
    0.6925   -0.4694
   -0.4694    0.6925
b = net.b{1,1}
b =
     0
     0
Ai = T;
[Y,Pf,Af] = sim(net,2,[],Ai);
Y
Y =
     1    -1
    -1     1

Next test the design with the target vectors T to see if they are stored in the network. The targets are used as inputs for the simulation function sim.

Ai = T;
[Y,Pf,Af] = sim(net,2,[],Ai);   % 2 = the size of the batch
Y
Y =
     1    -1
    -1     1

Thus, the network has indeed been designed to be stable at its design points. Next you can try another input condition that is not a design point, such as

Ai = {[-0.9; -0.8; 0.7]};
[Y,Pf,Af] = sim(net,{1 5},{},Ai);
Y{1}

This produces

ans =
    -1
    -1
     1

We would like to obtain a Hopfield network that has the two stable points defined by the two target (column) vectors in T.

T = [+1 -1; -1 +1];

Here is a plot where the stable points are shown at the corners:

xlabel('a(1)'); ylabel('a(2)');
net = newhop(T);
[Y,Pf,Af] = net([],[],T);
Y
Y =
     1    -1
    -1     1

Here we define a random starting point and simulate the Hopfield network for 20 steps. It should reach one of its stable points.

We can make a plot of the Hopfield network's activity:

a = {rands(2,1)};
[y,Pf,Af] = net({20},{},a);
record = [cell2mat(a) cell2mat(y)];
start = cell2mat(a);
hold on
plot(start(1,1),start(2,1),'bx',record(1,:),record(2,:));

color = 'rgbmy';
for i = 1:25
    a = {rands(2,1)};
    [y,Pf,Af] = net({20},{},a);
    record = [cell2mat(a) cell2mat(y)];
    start = cell2mat(a);
    plot(start(1,1),start(2,1),'kx', ...
         record(1,:),record(2,:),color(rem(i,5)+1))
end

The Hopfield network designed here is shown to have an undesired equilibrium point. However, such points are unstable, in that any noise in the system will move the network out of them.

T = [+1 -1; -1 +1];
plot(T(1,:),T(2,:),'r*')
axis([-1.1 1.1 -1.1 1.1])
title('Hopfield Network State Space')
xlabel('a(1)'); ylabel('a(2)');

Unfortunately, the network has undesired stable points at places other than the corners. We can see this when we simulate the Hopfield network for the five initial conditions in P.

These points lie exactly between the two target stable points. The result is that they all move into the center of the state space, where an undesired stable point exists.

plot(0,0,'ko');
P = [-1.0 -0.5 0.0 +0.5 +1.0; ...
     -1.0 -0.5 0.0 +0.5 +1.0];
color = 'rgbmy';
for i = 1:5
    a = {P(:,i)};
    [y,Pf,Af] = net({1 50},{},a);
    record = [cell2mat(a) cell2mat(y)];
    start = cell2mat(a);
    plot(start(1,1),start(2,1),'kx',record(1,:), ...
         record(2,:),color(rem(i,5)+1))
    drawnow
end

We would like to obtain a Hopfield network that has the four stable points defined by the four target (column) vectors in T.

T = [+1 +1 -1 +1; ...
     -1 +1 +1 -1; ...
     -1 -1 -1 +1; ...
     +1 +1 +1 +1; ...
     -1 -1 +1 +1];
net = newhop(T);

Here we define 4 random starting points and simulate the Hopfield network for 50 steps. Some initial conditions will lead to desired stable points; others will lead to undesired stable points.

ans =
     1    -1     1     1     1
    -1     1    -1     1     1
     1    -1     1     1     1
     1     1    -1     1    -1

We would like to obtain a Hopfield network that has the two stable points defined by the two target (column) vectors in T.

T = [+1 +1; -1 +1; -1 -1];

Here is a plot where the stable points are shown at the corners:

axis([-1 1 -1 1 -1 1])
set(gca,'box','on'); axis manual; hold on;
plot3(T(1,:),T(2,:),T(3,:),'r*')
title('Hopfield Network State Space')
xlabel('a(1)'); ylabel('a(2)'); zlabel('a(3)');
view([37.5 30]);

The function newhop creates a Hopfield network given the stable points T.

net = newhop(T);

Here we define a random starting point and simulate the Hopfield network for 10 steps. It should reach one of its stable points.

a = {rands(3,1)};
[y,Pf,Af] = net({1 10},{},a);

Now we simulate the Hopfield network for the following initial conditions, each a column vector of P. These points lie exactly between the two target stable points. The result is that they all move into the center of the state space, where an undesired stable point exists.

P = [ 1.0 -1.0 -0.5  1.00  1.00  0.0; ...
      0.0  0.0  0.0  0.00  0.00 -0.0; ...
     -1.0  1.0  0.5 -1.01 -1.00  0.0];
cla
plot3(T(1,:),T(2,:),T(3,:),'r*')
color = 'rgbmy';
for i = 1:6
    a = {P(:,i)};
    [y,Pf,Af] = net({1 10},{},a);
    record = [cell2mat(a) cell2mat(y)];
    start = cell2mat(a);
    plot3(start(1,1),start(2,1),start(3,1),'kx', ...
          record(1,:),record(2,:),record(3,:),color(rem(i,5)+1))
end

Network Target


Programming


Example:

J = imnoise(I, 'salt & pepper', d)

[Images: the original image and versions with 10%, 20%, 30%, and 40% salt & pepper noise]
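The same denoising experiment can be sketched without the toolbox. Below is an illustrative Python version: the "image" is a small bipolar pattern, the corruption step is a deterministic stand-in for imnoise's salt & pepper flips, and recall uses the asynchronous update rule from earlier in the slides:

```python
def recall(w, s, sweeps=5):
    """Asynchronous Hopfield recall: sweep the neurons until settled."""
    s = list(s)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

# Store one 8-"pixel" pattern with the outer-product (Hebbian) rule.
p = [1, -1, 1, 1, -1, -1, 1, -1]
n = len(p)
w = [[0 if i == j else p[i] * p[j] / n for j in range(n)] for i in range(n)]

# Flip 2 of the 8 pixels (~25% "noise", a stand-in for imnoise).
noisy = list(p)
for i in (1, 4):
    noisy[i] = -noisy[i]

assert noisy != p
assert recall(w, noisy) == p   # the stored image is recovered
```

With a single stored pattern, any corruption of fewer than half the pixels keeps the state in the pattern's basin of attraction, so the recall succeeds; heavier noise can instead fall into the complement pattern.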

