Dynamics online book [1].

A HN is a recurrent neural network characterized by fully connected weighted connections between neurons, often updated asynchronously in discrete time steps. The symmetric weights in the network form an energy function (Hamiltonian), which guides the activation of neurons towards stable patterns [12]. However, this model also has some drawbacks, as it deviates from neurobiology in several aspects.

Assumptions of the Model
1 The network is formed by discrete-state neurons, referred to as binary neurons, and neuron responses are binary. Neurons have only two possible states, active or inactive, and their effect on each of their neighbors is fixed. Although neurons in the cerebral cortex may be differentiated into inhibitory and excitatory, there is a continuous spectrum of states between the two, and hence of resting membrane potentials. Also, as the membrane potential is dynamic, neural responses (action potentials) are too.
2 The activation threshold is fixed, while real neurons have adaptive spiking thresholds.
3 Binary neurons linearly combine the inputs of their neighbors. However, dendrites can also integrate information nonlinearly.
4 The connections and their weights are symmetric and are defined by a connectivity matrix. The symmetric nature of the connectivity matrix enables the preservation of memory, as the excitation of one neuron by another is reciprocated with equal strength, resulting in a bidirectional and balanced interaction between neurons. However, this symmetry of interactions does not occur in the brain and is biologically unrealistic. Additionally, it violates Dale's law, as a neuron cannot be excitatory and inhibitory at the same time.
5 The network is fully connected, meaning each neuron is connected to all other neurons in the network. Although the cortex may be approximated as a fully connected network, this is biologically unrealistic [13]: in biological neural networks, neurons are connected only to a subset of the other neurons.
6 Neural states are updated asynchronously, allowing for the exploration of all possible states in the state space. Nevertheless, despite the perception of information propagation occurring at different time steps in the brain, the brain operates dynamically, with time passing synchronously across all neurons.
7 Time is discretized, although neurons operate in continuous time.
8 The model is static. Once it successfully retrieves a memory and converges to an attractor, it remains in it indefinitely. The system must be manually restarted in a different state to converge to a different attractor. The human brain is, of course, capable of retrieving multiple memories without such a reboot, although a similar process of membrane potential initialization may be feasible.

Definition of the Model
A HN is formed by N neurons, each connected to every other neuron. Each connection has a weight ω_ij, which defines the influence a certain neuron (i) has on another (j). All weights can be represented by the connectivity matrix (W), which will be discussed later on.

The state of a neuron at time t can be defined as s_i(t), having a value of 1 if it is active, or -1 if it is inactive. Therefore, the network state can be defined as:

\bar{s}(t) = (s_1, s_2, \dots, s_N)^T \quad (1)

As presented before, this model assumes that neurons are binary. This means that if the linear combination of the weights (ω_ij) and the states of the presynaptic neurons reaches the threshold of the sign function of the postsynaptic neuron, sgn(x ≥ 0), the latter is activated (+1). If that is not the case, sgn(x < 0), the postsynaptic neuron is inactivated (-1). The future state of the postsynaptic neuron (s_i') can be written as:

s_i' = \mathrm{sgn}\left( \sum_{j=1}^{N} \omega_{ij} s_j \right) \quad (2)
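The update rule of Eq. (2) can be sketched in a few lines. This is an illustrative Python rendering (the study itself used MATLAB), and the function name `update_neuron` is ours:

```python
# Sketch of the binary-neuron update rule s_i' = sgn(sum_j w_ij * s_j),
# with the convention sgn(x) = +1 for x >= 0 and -1 for x < 0.
def update_neuron(W, s, i):
    """Next state of neuron i, given the weight matrix W and network state s."""
    h = sum(W[i][j] * s[j] for j in range(len(s)))  # linear input sum (local field)
    return 1 if h >= 0 else -1

# Toy network of 3 neurons with symmetric weights favoring the state (1, 1, -1):
W = [[0.0, 1/3, -1/3],
     [1/3, 0.0, -1/3],
     [-1/3, -1/3, 0.0]]
s = [1, 1, -1]
print([update_neuron(W, s, i) for i in range(3)])  # prints [1, 1, -1]: a fixed point
```

Note that sgn(0) is mapped to +1, matching the sgn(x ≥ 0) convention used above.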
The asynchronous dynamics, in which a neuron j is chosen to be updated randomly, can be mathematically represented in the following way:

\bar{s} = (s_1, \dots, s_j, \dots, s_N)^T \rightarrow \bar{s}' = (s_1, \dots, s_j', \dots, s_N)^T \quad (4)

Hebbian Learning Rule
A HN is able to store binary patterns (η_i = ±1). Each stored pattern coincides with an attractor of the system, consisting of a stable fixed point with a nonempty basin of attraction. Each pattern µ stored in the network can be written as a vector of activity states, one for each neuron in the network, and can be expressed as follows:

\bar{\eta}^{\mu} = (\eta_1^{\mu}, \eta_2^{\mu}, \dots, \eta_N^{\mu})^T \quad (5)

The Hebbian rule states that two neurons that fire together may be strongly connected, meaning that the weight of the connection ω_ij between two neurons i and j will be elevated and strengthened if the connected neurons have the same activation for a pattern. This can be summarized by the following equation:

\omega_{ij} = \frac{1}{N} \eta_i \eta_j \quad (6)

where η_i and η_j are the states of the neurons i and j when the system reaches the state of the memory pattern η̄ (attractor). To make this value independent of the size of the network, the weight is divided by N, the number of neurons constituting the network.

The connection between two neurons can be excitatory or inhibitory. A positive link (excitatory connection) is directly linked to memory formation, as it implies that if a neuron is activated, the positively connected ones will also activate. In contrast, an inhibitory connection means that an activated neuron suppresses the activation of the neurons linked to it, therefore not allowing memory formation.

Storing Memory Patterns
A memory pattern is successfully stored in a Hopfield network, and the state s̄ = η̄ is an attractor, if two conditions are fulfilled:
1 The state s̄ = η̄ is a stable fixed point.
2 The state s̄ = η̄ has a non-empty basin of attraction.

1 The state s̄ = η̄ is a stable fixed point if the system remains in this state in the following time steps, meaning s̄' = η̄, with the state s̄_o = η̄ as the initial condition of the network. Assuming s̄ = η̄, this can be proven in the following way:

s_i' = \mathrm{sgn}\Big( \sum_j \omega_{ij} s_j \Big) = \mathrm{sgn}\Big( \sum_j \omega_{ij} \eta_j \Big) = \mathrm{sgn}\Big( \sum_j \frac{1}{N} \eta_i \eta_j \eta_j \Big) = \mathrm{sgn}\Big( \frac{1}{N} \eta_i \sum_j \eta_j \eta_j \Big)
= \mathrm{sgn}\Big( \frac{1}{N} \eta_i \sum_j 1 \Big) = \mathrm{sgn}\Big( \frac{1}{N} \eta_i N \Big) = \mathrm{sgn}(\eta_i) = \eta_i \quad (7)

2 The state s̄ = η̄ has a non-empty basin of attraction if there is at least one other network state that is attracted to the pattern η̄, meaning that the system converges to the state s̄ = η̄ when initialized in a different state s̄_o ≠ η̄. This is summarized as:

\{ \exists\, \bar{s}_o \neq \bar{\eta} : \bar{s}_o \rightarrow \bar{s} = \bar{\eta} \}

Assuming that s̄_o is the initial state of the network and

\bar{s}_o = (\eta_1, -\eta_2, -\eta_3, \dots, \eta_N)^T,

where the sign of a fraction f of the entries has been flipped, the proof proceeds as follows:

s_i' = \mathrm{sgn}\Big( \sum_j \omega_{ij} s_j^o \Big) = \mathrm{sgn}\Big( \sum_j \frac{1}{N} \eta_i \eta_j s_j^o \Big) = \mathrm{sgn}\Big( \frac{1}{N} \eta_i \sum_j \eta_j s_j^o \Big) \quad (8)

where \sum_j \eta_j s_j^o = N(1-f)\, \eta_j^2 + N f\, \eta_j(-\eta_j). Knowing that \eta_j^2 = 1 and \eta_j(-\eta_j) = -1:

\sum_j \eta_j s_j^o = N(1 - 2f) \quad (9)

s_i' = \mathrm{sgn}\Big( \eta_i \frac{1}{N} N (1 - 2f) \Big) =
\begin{cases} \eta_i & \text{if } f < 0.5, \text{ since } \eta_i (1 - 2f) \text{ has the same sign as } \eta_i \\ -\eta_i & \text{if } f > 0.5, \text{ since } \eta_i (1 - 2f) \text{ has the opposite sign} \end{cases} \quad (10)
Casas and Franco Page 4 of 13
their activity states. However, this overlap can introduce an interference term between patterns, which can hinder the retrieval of specific memories.

As the number of stored patterns increases, the basins of attraction of different patterns tend to shrink, and the minima in the energy landscape come closer to each other. This phenomenon results in increased interference between patterns, where the system may converge to incorrect attractors if their basins of attraction are in close proximity. Additionally, the noise from spurious patterns becomes more prominent, as these patterns also occupy storage resources and further constrain the energy landscape.

When the ratio of patterns to neurons (M/N) exceeds the capacity of the network, the basins of attraction of different attractors may become so close to each other that they collapse into a single minimum in the energy landscape. This can result in the loss of one of the two patterns, making its retrieval impossible, a phenomenon known as catastrophic forgetting. To mitigate the occurrence of catastrophic forgetting, it is crucial to maintain an optimal M/N ratio in the network, where the number of stored patterns is carefully balanced against the number of neurons available.

Due to the increasing cross-talk between patterns, the probability of a neuron erroneously changing to the flipped state of the intended pattern during retrieval increases as the ratio M/N increases. Through experimentation with different erroneous state-flip probabilities, it has been determined that the critical ratio ensuring that the erroneous state-flip probability remains lower than 1%, so that all patterns are stored correctly, is α_c = 0.185 [1]. This implies that, in a large network with a finite number N of neurons, on average about 0.01N neurons in each pattern may change to the wrong state when updated and exhibit erroneous activity.

However, this is only valid for the first iteration. This small set of flipped neurons could potentially trigger cascades of state changes in other neurons in subsequent iterations, also leading to catastrophic forgetting. To mitigate the risk of such cascades, the number of stored patterns can be kept below a safer ratio limit of α_c = 0.138 [1].

Connectivity and Neuronal Loss Model
The model development and computations described below were performed using MATLAB (MathWorks, Natick, MA) version R2020b [5].

In total, two network models were developed: a connectivity degeneration model and a neural degeneration model, which will be described in detail in their respective sections. Following the Hopfield mechanism described in the previous sections, we constructed two fully connected HNs for each of the two models, to investigate the influence of either connection loss or neural loss on the two necessary conditions for memory storage and retrieval: stability of the fixed points and non-emptiness of the basins of attraction, respectively.

To study the storage capabilities, a network of N = 150 neurons was used in both models, while to study the retrieval capabilities a network of N = 100 neurons was used. This difference in network size allowed us to reduce the computational cost of the algorithm: no significant changes were observed for networks of more than 100 neurons when storing 10 patterns, and the larger network was only useful for the analysis of the storage results, as it gave a broader view of the dynamics. For all the models and submodels, we made the network learn M = 10 patterns, and all the tests were performed with random patterns, defined as N-dimensional vectors generated by randomly assigning a value of -1 or 1 to each component. Random patterns allow for minimal redundancy in the stored information, as such information cannot be compressed; therefore, they represent the worst-case scenario in memory formation. Furthermore, this approach avoids bias in the results, as all the patterns have similar stability and characteristics.

To iterate every submodel we used asynchronous sequential cyclic updating, meaning that only one neuron is updated at a time, neurons are updated one by one in an ordered manner, and this process is repeated for a certain number of cycles. To update a neuron, its state was changed to -1 or 1 depending on whether its summed inputs were < 0 or ≥ 0, respectively. As previously stated, asynchronicity is required in a HN to explore the entire state space, which does not happen with synchronous dynamics. State transitions only happen if the right neuron is updated; otherwise, the state does not change. Hence, several time steps are needed to converge to an attractor, and there may be attractors that cannot be reached if synchronous updating is applied. Although fixed schedules, such as sequential or parallel updating, can ensure convergence to a stable attractor, they may be biased towards certain patterns or configurations. However, as we are working with random patterns, the updating schedule is less relevant. In addition, as sequential updates converge faster if the initial state is close to a stored pattern, and we are testing convergence close to the attractors (most of the time), this
will allow us to reduce the computational cost of the experiments.

All the submodels were iterated for 10 cycles, where in each cycle each neuron was updated once. To obtain an average of the results, the same calculations were repeated for 400 Monte Carlo (MC) realizations when testing memory storage and for 100 MC realizations when testing retrieval (due to the high computational cost). The required number of cycles was determined by previously estimating the average number of iterations required for the system to converge to the designated attractor in a non-degenerated network. We found that 10 cycles were more than enough for convergence and also computationally optimal. This coincides with the results found in [4], where the number of iterations of the HN required to achieve convergence to the designated attractor from an arbitrary starting state, as a function of the number of links per node in the network, never rises above 7 cycles, even for low connectivity.

To test the stability of the patterns, for both the connectivity degeneration model and the neural degeneration model, the network was initialized in a state equal to the intended pattern. It is important to note that stability was only tested for one of the 10 patterns: as patterns are random and we are averaging the number of stable patterns over 400 MC simulations, it can be assumed that all patterns are stable on average if one of them is. The stability of the designated attractor was taken as confirmed if the network remained in this state after 10 cycles with a flipping error of at most 1%, that is, with no more than 1% of the neurons in the wrong state. This refers to the capacity of a network presented in previous sections. For patterns to be well stored and to constitute stable fixed points, the number of stored patterns must be below the critical ratio limit of α_c = 0.138. This will be met as long as we consider stable only the attractors that result in a converged state with a maximum error rate of 1%. Even though the critical capacity calculation typically applies to larger networks, we assume our network to be large for the purpose of convenient mathematical treatment, while acknowledging the limitations. To account for this error, we computed the overlap between the intended vector and the state of the network over time. We also kept track of the energy of the network over time, using the energy function presented before, to ensure the patterns were indeed stable and to see the rate at which patterns lost their stability and hence their energy increased.

To test the recall of the patterns, under both connectivity degeneration and neural degeneration, we introduced random flips (changes of a neuron's state to the opposite one) with an increasing probability p into one of the patterns, and initialized the network in such a distorted state. This was performed repeatedly for all 10 patterns and for flipping probabilities from 0 to 0.5 in steps of 0.01. To evaluate the recall performance, the non-emptiness of the designated attractor's basin was taken as confirmed if the network converged to the intended attractor within 10 cycles; that is, if after 10 cycles the overlap between the intended pattern and the state of the network was equal to or higher than 95%, as employed in Tolmachev et al. [6]. The number of retrieved patterns for every flipping probability was averaged over the number of patterns (M = 10) and the number of MC simulations (MC = 100).

Next, we will elucidate how the loss of connections and neurons was introduced into a HN to create the two degeneration models.

Connectivity Degeneration Model
We first investigated how the existence of multiple attractors in a HN is influenced by its connectivity. To do so, as explained above, we constructed two submodels to investigate the two necessary conditions for the continued existence and global stability of the designated attractor and for the convergence to it in the context of connection loss.

Similarly to [4], the loss of connections was introduced into the model by randomly setting weights in the connectivity matrix to 0 with probability p, hence destroying the connection. For each probability p the same connectivity matrix was used, but a different set of connections was destroyed, as the destruction was random. The disconnection probability p was studied for increasing values from 0 to 1 in steps of 0.01, and for each value the stability of the patterns and the convergence to them were evaluated individually. Thus, to test memory storage, the total number of iterations was 400 MC realizations × 101 disconnection probabilities × 10 cycles × 100 updates, and to test memory retrieval it was 100 MC realizations × 101 disconnection probabilities × 51 flipping probabilities × 10 patterns × 10 cycles × 100 updates.

Neural Degeneration Model
Secondly, we investigated how the existence of multiple attractors in a HN is influenced by its nodes, the neurons. The loss of neurons was introduced into the model by randomly setting rows of weights in the connectivity matrix to 0 with probability p, hence breaking all the connections of the affected neurons.
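Both lesion protocols described above can be sketched compactly. This is an illustrative Python rendering under our own names (the study itself used MATLAB), not the original implementation:

```python
import random

def lesion_connections(W, p, rng):
    """Connectivity degeneration: each weight is independently set to 0 with probability p."""
    return [[0.0 if rng.random() < p else w for w in row] for row in W]

def lesion_neurons(W, p, rng):
    """Neural degeneration: entire rows of weights are zeroed with probability p,
    removing all of a neuron's connections at once."""
    return [[0.0] * len(row) if rng.random() < p else list(row) for row in W]

rng = random.Random(1)
W = [[0.0, 0.5, -0.5],
     [0.5, 0.0, 0.5],
     [-0.5, 0.5, 0.0]]
assert lesion_connections(W, 0.0, rng) == W                # p = 0: nothing destroyed
assert lesion_connections(W, 1.0, rng) == [[0.0] * 3] * 3  # p = 1: all connections lost
assert lesion_neurons(W, 1.0, rng) == [[0.0] * 3] * 3      # p = 1: all neurons silenced
```

Note that zeroing individual weights generally breaks the symmetry of W (ω_ij may survive while ω_ji does not), which is precisely the loss of Hopfield structure discussed in the Results.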
Figure 3 a) Memory storage stability against network size, b) Energy over time in connectivity decay, and c) Attractor energy during connectivity decay.

Figure 4 a) Overlap between the retrieved and the intended state for different disconnection probabilities. b) Percentage of well-recalled memories.

Connectivity degeneration: memory storage and pattern stability
Storage stability and network capacity have been studied in a scenario of connectivity degeneration in the form of loss of connections between neurons. This allowed us to determine the minimal set of connections necessary to preserve the functioning of the network.

Figure 3 a) reflects the capacity of the network compared to the number of lost connections, b) shows the energy of the network over time for different disconnection probabilities, and c) shows the energy associated with the state of the network after 10 cycles as connectivity decay takes place. It can be observed that, as the network size is reduced, the number of stable patterns progressively decreases, and criticality occurs mainly when more than 60% of the connections are lost. This can be explained by taking into account the capacity of the network. This term depends on the network size; however, when removing individual connections, the initially fully connected network loses its symmetry and Hopfield structure, which ensured the existence of the fixed-point attractors, and these lose their stability. Therefore, the smaller the number of connections, the fewer patterns the network can learn while maintaining their stability.

In order to assess the convergence of the system during the degeneration of the network, we plotted the energy over time for selected degrees of degeneration in a randomly chosen MC realization. The energy after supposed convergence (10 cycles) was also plotted against the progression of the degeneration (disconnection probability) in the network. As long as the energy remains constant over the cycles, the system remains in the attractor, confirming its stability. However, as the disconnection probability increases, the energy starts fluctuating over time, suggesting that the system may have escaped the initial attractor due to either its low stability or the possibility that the attractor has ceased to exist. This is also indicated by the magnitude of the energy of the attractors: as the neurons lose connections, memories become more unstable and their storage is weakened, and therefore the energy of the attractors decreases.

Another phenomenon worth mentioning is the linear increase in energy as connections are lost, as opposed to what is seen for neuronal death. When connections are lost, the energy of the network is proportional to the number of connections that have been destroyed, which shows that Hopfield networks are robust to the loss of connections.

Connectivity degeneration: memory retrieval and non-zero basin of attraction
To investigate memory retrieval in the context of connectivity degeneration, a methodology similar to the one employed in the study by Anafi et al. [4] has been adopted, with modifications to accommodate the storage of multiple patterns. The probability of destroying a connection, rather than the number of connections destroyed, is represented in Figure 4.

Consistently with the previous results, criticality remains at around 60% destruction.

Similar results have been obtained for neuronal death (see Figure 6), yet in that case a far more abrupt memory loss can be observed. Memory retrieval impairment due to noisy inputs remains similar to the neural degeneration scenario. Nevertheless, there is an intrinsically lower recall ability even for mildly noisy inputs.

Neuronal degeneration: storage stability and capacity
Figure 5 a) shows the capacity of the network compared to the neural death probability, b) the energy
of the network with neuronal loss, and c) the attractor energy while the neural death probability increases.

Figure 5 a) Memory storage stability against network size, b) Network energy with neural degeneration, and c) Attractor energy during neural death.

In Figure 5 b) and c) it can be observed that the energy remains constant during all cycles as long as the degenerated population remains above the critical memory capacity for the given patterns. This implies that the system remains at the given pattern, indicating that this pattern is stable (and, by generalization, all of them are, as can be seen across different MC realizations). On the other hand, after criticality, the pattern loses stability, the basin of attraction is reduced in size, and erroneous activity in some neurons (even if it is just a small number) may push the system to another attractor with a different energy (capacity reduction and increased interference of the spurious attractors). Far from the criticality, the attractor of the previous pattern may disappear or merge with the attractor of another pattern with a different energy, resulting in the loss of the stored memory. This shows that Hopfield networks may be robust against losing nodes. Although the network is not vulnerable to losing hubs, as it is fully connected and has no hubs, it is vulnerable to spurious attractors.

In Figure 5 a), we observe a slight decrease in stability as the first 50 neurons are lost. Once degeneration goes beyond 50 dead neurons, the critical capacity threshold (boundary) is crossed. As stated in the definition of the Hopfield model, when the ratio M/N increases above 0.138, the stability and retrieval of patterns are no longer guaranteed: the interference of spurious attractors becomes increasingly relevant, and a sudden nonlinear decrease in attractor stability occurs.

It should be noted that criticality occurs at a destruction probability (60%) similar to that observed in the connectivity degeneration scenario. This suggests that both connections and neurons have a comparable effect on the capacity of the network. Once the symmetry of

The percolation phenomenon in the memory storage capacity of the network, which refers to the ability to correctly store and memorize a specific pattern of activity, is more pronounced in the case of connectivity degeneration than in neuronal degeneration (see Figure 3). However, unlike in the case of a low number of available neurons, no increase in stability is observed for low connectivity. This may be attributed to the fact that the state vector s̄, which defines the state of the network, always has nonzero values in the neuronal degeneration scenario. Even if a neuron becomes isolated due to the degeneration of connections, its value remains constant at 1 or -1 indefinitely, as it is not influenced by other neurons and does not switch to zero. As a result, these isolated neurons may still contribute to the overlap with the intended pattern during retrieval if, by chance, their state matches the corresponding state in the pattern. However, on average, the overlap will not match the pattern, and therefore for 100% loss of connections the overlap remains at 0.

Neuronal degeneration: memory retrieval and non-empty basin of attraction
To evaluate convergence to the intended attractor, an approach similar to the one applied in Tolmachev et al. [6] was used, testing convergence for an increasing number of flips of the intended pattern. As when evaluating the storage capacity, the number of patterns was fixed, but the number of neurons available to store them changed over time.

Results are shown in Figure 6. It can be observed that there is a loss of generalization as neural degeneration progresses. Figure 6 a) shows that, although the ability to recall memories from clean, non-noisy inputs is strongly maintained even for high neural degeneration, this ability quickly deteriorates when the input stimulus is corrupted, noisy, or altered. The robustness of the memory system towards recalling memories from noisy inputs is thus highly influenced by the loss of neurons and their connections. The pattern completion and error correction abilities of the memory system may be the first to be affected, even before a loss of memories or of the ability to recall them is noticed. This finding may imply that individuals with Alzheimer's disease may
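The energy bookkeeping used in the results above can be sketched as follows. The quadratic Hopfield energy E(s̄) = -(1/2) Σ_ij ω_ij s_i s_j is standard, but this Python rendering and its names are illustrative, not the original MATLAB code. With symmetric weights and a zero diagonal, the energy never increases under asynchronous sign updates, which is why a constant energy trace indicates that the network is sitting in an attractor:

```python
def energy(W, s):
    """Hopfield energy E(s) = -1/2 * sum_ij w_ij * s_i * s_j."""
    n = len(s)
    return -0.5 * sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

def async_update(W, s, i):
    """Asynchronous sign update of neuron i, in place."""
    h = sum(W[i][j] * s[j] for j in range(len(s)))
    s[i] = 1 if h >= 0 else -1

# Symmetric toy network; the energy trace is non-increasing along the updates.
W = [[0.0, 1.0, -1.0],
     [1.0, 0.0, -1.0],
     [-1.0, -1.0, 0.0]]
s = [-1, 1, 1]
trace = [energy(W, s)]
for cycle in range(3):          # sequential cyclic updating, as in the Methods
    for i in range(len(s)):
        async_update(W, s, i)
        trace.append(energy(W, s))
assert all(e2 <= e1 for e1, e2 in zip(trace, trace[1:]))
```

Once weights are deleted asymmetrically, as in the connectivity degeneration model, this monotonicity guarantee is lost, which is consistent with the fluctuating energy traces observed at high disconnection probabilities.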
References
1 Gerstner, W., Kistler, W. M., Naud, R., & Paninski, L. (2014).
Neuronal dynamics: From single neurons to networks and models of
cognition. Cambridge University Press.
2 Bruck, J., & Roychowdhury, V. P. (1990). On the number of
spurious memories in the Hopfield model (neural network). IEEE
Transactions on Information Theory, 36(2), 393–397.
https://doi.org/10.1109/18.52486
3 Hillar, C., & Tran, N. (2018). Robust exponential memory in Hopfield
networks. Journal of Mathematical Neuroscience, 8.
https://doi.org/10.1186/s13408-017-0056-2
4 Anafi, R. C., & Bates, J. H. T. (2010). Balancing robustness against
the dangers of multiple attractors in a Hopfield-type model of
biological attractors. PloS One, 5(12), e14413.
https://doi.org/10.1371/journal.pone.0014413
5 The MathWorks Inc. (2022). MATLAB version: 9.13.0 (R2022b),
Natick, Massachusetts: The MathWorks Inc.
https://www.mathworks.com
6 Tolmachev, P., & Manton, J. H. (2020). New insights on learning
rules for Hopfield networks: Memory and objective function
minimisation. 2020 International Joint Conference on Neural
Networks (IJCNN).
7 Morrison, M., Maia, P. D., & Kutz, J. N. (2017). Preventing
neurodegenerative memory loss in Hopfield neuronal networks using
cerebral organoids or external microelectronics. Computational and
Mathematical Methods in Medicine, 2017.
8 Gosti, G., Folli, V., Leonetti, M., & Ruocco, G. (2019). Beyond the
maximum storage capacity limit in Hopfield recurrent neural
networks. Entropy, 21(8), 726.
9 Arriagada, P. V., Growdon, J. H., Hedley-Whyte, E. T., & Hyman,
B. T. (1992). Neurofibrillary tangles but not senile plaques parallel
duration and severity of Alzheimer’s disease. Neurology, 42(3),
631-631.
10 Dennis, E. L., & Thompson, P. M. (2014). Functional brain
connectivity using fMRI in aging and Alzheimer’s disease.
Neuropsychology review, 24(1), 49–62.
https://doi.org/10.1007/s11065-014-9249-6
11 Lepage, M., Montoya, A., Pelletier, M., Achim, A. M., Menear, M., & Lal,
S. (2006). Associative memory encoding and recognition in schizophrenia:
An event-related fMRI study. Biological Psychiatry, 60(11), 1215–1223.
https://doi.org/10.1016/j.biopsych.2006.03.043
12 Hoffmann, H. (2019). Sparse associative memory. Neural
Computation, 31(5), 998–1014.
https://doi.org/10.1162/neco_a_01181
13 Lansner, A. (2009). Associative memory models: from the
cell-assembly theory to biophysically detailed cortex simulations.
Trends in Neurosciences, 32(3), 178–186.
https://doi.org/10.1016/j.tins.2008.12.002
16/04/23 07:27 C:\Users\...\Code_Casas_Franco_et_al.m 1 of 8
for Miter = 1:length(M) % Loop over the values of M to simulate the network for different numbers of stored patterns
    for realization = 1:100 % Perform 100 realizations of the network for each value of M
        pat1 = 2*randi([0 1],100,1)-1; % Random binary pattern of length 100 with values -1 and 1; this is the first stored pattern
        W = (1/N).*(pat1*pat1'); % Initialize the connectivity matrix with the Hebbian contribution of pat1; eq. (6) normalizes by N, the number of neurons
        for pati = 1:M(Miter)-1 % Loop over the remaining M-1 patterns to update the connectivity matrix
            pattern = 2*randi([0 1],100,1)-1; % Random binary pattern of length 100 with values -1 and 1
            W = W + (1/N).*(pattern*pattern'); % Add the Hebbian contribution of the current pattern
        end
        s0 = pat1; % Set the initial state of the network to the first stored pattern
        % Asynchronous sequential cyclic updating with the sign rule of eq. (2).
        % (This loop is not visible in the printed excerpt; it is reconstructed
        % from the updating scheme described in the Methods.)
        for cycle = 1:10
            for neuron = 1:N
                if W(neuron,:)*s0 >= 0
                    s0(neuron) = 1;
                else
                    s0(neuron) = -1;
                end
            end
        end
        % Compute and store the overlap between the first stored pattern and the final state of the network
        over = dot(pat1,s0);
        overlaps(Miter) = overlaps(Miter) + over;
        % Check if the overlap between the first stored pattern and the final state
        % of the network is above the threshold (0.98), indicating stability
        if over/N >= 0.98
            stability(Miter) = stability(Miter) + 1;
        end
    end
end
% Plot the number of networks with stable input patterns as a function of the number of stored patterns
figure;
plot(M,stability,'-o');
grid on;
title('Number of networks with stable input pattern');
xlabel('Number of stored patterns');
ylabel('Number of stable input patterns');
% Overlap
overlap = dot(patterns(:,1),s0)/N_max; % Normalized overlap, ranging from -1 to 1
% Stability
if overlap >= 0.98
    % Although the tolerated fraction of flipped neurons is 1%, the threshold is
    % 0.98 because the overlap ranges between -1 and 1, not from 0 to 1.
    stability(cut) = stability(cut) + 1;
end
end
end
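The 0.98 threshold follows directly from the overlap definition m = (1/N) Σ_i η_i s_i: flipping k of the N entries lowers the overlap from 1 to 1 - 2k/N, so tolerating 1% flipped neurons corresponds to m ≥ 0.98. A quick check of this relation (an illustrative Python fragment with our own names, not part of the MATLAB listing):

```python
def overlap(eta, s):
    """Normalized overlap m = (1/N) * sum_i eta_i * s_i, ranging from -1 to 1."""
    return sum(a * b for a, b in zip(eta, s)) / len(eta)

N = 100
eta = [1] * N                 # any reference pattern works; the signs cancel out
s_1pct = [-1] + [1] * (N - 1)  # 1% of the entries flipped (k = 1, N = 100)
assert overlap(eta, eta) == 1.0
assert overlap(eta, s_1pct) == 0.98          # m = 1 - 2k/N
assert overlap(eta, [-x for x in eta]) == -1.0
```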
subplot(1,3,2);
for prob = 1:10:length(pdisconnect)
    plot(1:cycles+1,H(prob,:),'-o'); grid on; hold on;
end
hold off;
title('Energy over time in connectivity decay'); % (Example MC realization)
xlabel('Time (cycles)'); ylabel('Energy of the network');
legend('p=0','p=0.1','p=0.2','p=0.3','p=0.4','p=0.5','p=0.6','p=0.7','p=0.8','p=0.9','p=1');
subplot(1,3,3);
plot(pdisconnect,H(:,end)); grid on;
title('Attractor Energy during connectivity decay'); % (Example MC realization)
xlabel('Disconnection Probability p'); ylabel('Energy in the last cycle');
% Plot results
figure;
subplot(1,2,1);
for c = 1:10:len_pflip
    plot(pflip,overlaps(c,:)); grid on; hold on;
end
hold off;
title('Overlap between the retrieved and the intended state for different disconnection probabilities');
xlabel('Number of flips in the initial state'); ylabel('Average overlap of the patterns');
legend('p=0','p=0.1','p=0.2','p=0.3','p=0.4','p=0.5');
subplot(1,2,2);
plot(pdisconnect,retrieval(:,11)); grid on;
title('Percentage of well-recalled memories');
xlabel('Disconnection Probability p'); ylabel('Average ratio of retrieved memories');
% Overlap
overlap = dot(patterns(:,1),s0)/(N_max-death); % Normalized overlap over the surviving neurons, ranging from -1 to 1
% Stability
if overlap >= 0.98 % equivalent to s0 == pat1 up to 1% of flipped neurons
    % Although the tolerated fraction of flipped neurons is 1%, the threshold is
    % 0.98 because the overlap ranges between -1 and 1, not from 0 to 1.
    stability(death+1) = stability(death+1) + 1;
end
end
end
subplot(1,3,2);
subplot(1,3,3);
plot(linspace(0,N_max,length(H))/N_max,H(:,end)); grid on;
title('Attractor Energy during Neural Death'); % (Example MC realization)
xlabel('Neural Death Probability p'); ylabel('Energy in the last cycle');
subplot(1,2,2);
plot(linspace(0,1,N_max+1),retrieval(:,11)); grid on;
title('Percentage of well-recalled memories');
xlabel('Neural Death Probability p'); ylabel('Average ratio of recalled memories');
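For readers without MATLAB, the retrieval test described in the Methods (store patterns with the Hebbian rule, flip a fraction p of one pattern, run asynchronous sequential cycles, and count the pattern as recalled if the final overlap reaches at least 0.95) can be condensed into a short Python sketch. This is an illustrative re-implementation under our own names, not the original code:

```python
import random

def retrieval_test(N, M, p_flip, cycles, rng):
    """True if the first stored pattern is recalled (overlap >= 0.95) from a cue
    in which each entry was flipped with probability p_flip."""
    patterns = [[rng.choice([-1, 1]) for _ in range(N)] for _ in range(M)]
    # Hebbian weights, eq. (6), summed over the stored patterns.
    W = [[sum(p[i] * p[j] for p in patterns) / N for j in range(N)]
         for i in range(N)]
    target = patterns[0]
    s = [-x if rng.random() < p_flip else x for x in target]  # distorted cue
    for _ in range(cycles):            # asynchronous sequential cyclic updating
        for i in range(N):
            h = sum(W[i][j] * s[j] for j in range(N))
            s[i] = 1 if h >= 0 else -1
    m = sum(a * b for a, b in zip(target, s)) / N
    return m >= 0.95

rng = random.Random(7)
# A lightly corrupted cue to a single stored pattern is reliably completed.
print(retrieval_test(N=60, M=1, p_flip=0.1, cycles=5, rng=rng))  # prints True
```

Sweeping p_flip from 0 to 0.5 and averaging over patterns and MC realizations reproduces the structure of the recall curves discussed in the Results.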