
RESEARCH

Modeling the Hopfield model and its fragility against loss of connections

Jan Casas*† and Natàlia Franco†

*Correspondence: jan.casas01@estudiant.upf.edu
Department of Experimental Sciences and Health, Universitat Pompeu Fabra, 08003 Barcelona, Spain
Full list of author information is available at the end of the article
†Equal contributor

Abstract
There is active research on understanding the basic concepts underlying the symptoms and progression of neurodegenerative disorders. The Hopfield model has been widely used to address this issue, as it captures some essential features of neural dynamics, such as pattern recognition and memory retrieval, which are directly related to associative memory. In this paper the Hopfield model is used to simulate the consequences of losing either connections or neurons, and the effect this has on the recall capacity of the neural networks constituting memory in the human brain. The results obtained show how the stability of stored patterns decreases as the network is disrupted in different ways. They provide an explanation of the pathological mechanisms of the most common neurodegenerative diseases, such as Alzheimer's, of schizophrenia, and of other disorders, such as hyperthymesia, and of why they have such widely known consequences.

Keywords: Hopfield model; neurodegenerative disorders; Alzheimer; associative memory; neuronal loss; connectivity loss

Introduction
Motivation
In our current society, neurodegenerative disorders are becoming a major issue of concern. Many research institutions are working to understand the mechanisms that trigger such degeneration, in an attempt to prevent or anticipate future changes in the cognitive state of patients. Some areas of research have focused on modeling the basic principles that underlie those conditions, which is what this study addresses, making use of the Hopfield model.

Background
The Hopfield model has been widely used to describe associative memory because of its ability to complete partial patterns and reconstruct perturbed patterns. Despite its simplifications and assumptions, it has demonstrated a remarkable capability to provide intuitive results that reflect aspects of reality. The model describes a fully connected recurrent neural network, meaning that loops are formed between neurons. It is composed of a set of binary neurons forming a network that behaves in a defined manner. This allows us to store in the same network a set of memory patterns such that, when a new input is introduced, the state of the system converges to the closest memory pattern.

The general structure of a Hopfield network consists of a set of nodes with either binary thresholds implemented through deterministic functions (e.g. the sign function) or continuous thresholds implemented through probabilistic functions (e.g. the sigmoid function), connected to each other by recurrent loops and, in the classical formulation of the model, to themselves by self-loops, allowing the formation of feedforward loops (FFL) and feedback loops (FBL).

A model of associative memory has been proposed by adapting the mathematical description of the Hopfield model given in Gerstner et al. [1].

Methods
In this section, we delineate the mechanisms that govern a Hopfield network (HN) and then apply these principles to the development of our models of neural and connectivity degeneration. The basic concepts that define the model are explained first, followed by a deeper look at different ways of understanding its functioning. To build the models we have followed the notation of the Neuronal Dynamics online book [1].

A HN is a recurrent neural network characterized by fully connected, weighted connections between neurons, often updated asynchronously in discrete time steps. The symmetric weights in the network define an energy function (Hamiltonian), which guides the activation of neurons towards stable patterns [12]. However, the model also has some drawbacks, as it deviates from neurobiology in several aspects.

Assumptions of the Model
1 The network is formed by discrete-state neurons, referred to as binary neurons, and neuron responses are binary. Neurons have only two possible states, active or inactive, and their effect on each of their neighbors is fixed. Although neurons in the cerebral cortex may be differentiated into inhibitory and excitatory, there is a continuous spectrum of states between the two, and so of resting membrane potentials. Also, as the membrane potential is dynamic, neural responses (action potentials) are too.
2 The activation threshold is fixed, while real neurons have adaptive spiking thresholds.
3 Binary neurons linearly combine the inputs of their neighbors. However, dendrites can also integrate information nonlinearly.
4 The connections and their weights are symmetric and are defined by a connectivity matrix. The symmetrical nature of the connectivity matrix enables the preservation of memory, as the excitation of one neuron by another is reciprocated with equal strength, resulting in a bidirectional and balanced interaction between neurons. However, this symmetry of interactions does not occur in the brain and is biologically unrealistic. Additionally, it violates Dale's law, as a neuron cannot be at the same time excitatory and inhibitory.
5 The network is fully connected, meaning each neuron is connected to all other neurons in the network. Although the cortex may be approximated as a fully connected network, this is biologically unrealistic [13]. In biological neural networks, neural connections are typically sparse and exhibit specific patterns based on the structure and function of the brain regions involved. Additionally, the computational complexity of a fully connected network may not be feasible in terms of energy consumption and processing efficiency. The brain has evolved to address this challenge by employing hubs of connections that serve as shortcuts, as seen in scale-free networks.
6 Neural states are updated asynchronously, allowing for the exploration of all possible states in the state space. Nevertheless, despite the perception of information propagation occurring at different time steps in the brain, the brain operates dynamically, with time passing synchronously across all neurons.
7 Time is discretized, although neurons operate in continuous time.
8 The model is static. Once it successfully retrieves a memory and converges to an attractor, it remains in it indefinitely. The system must be manually restarted in a different state to converge to a different attractor. Although the human brain is of course capable of retrieving multiple memories without such a reboot, a similar process of membrane potential initialization may be feasible.

Definition of the Model
A HN is formed by N neurons connected to each other. Each connection has a weight ω_ij, which defines the influence one neuron (j) has on another (i). All weights can be represented by the connectivity matrix (W), which will be discussed later on.

The state of a neuron at time t can be defined as s_i(t), having a value of 1 if it is active, or -1 if it is inactive. Therefore, the network state can be defined as:

s̄(t) = (s_1, s_2, ..., s_N)^T    (1)

As presented before, this model assumes that neurons are binary. This means that if the linear combination of the weights (ω_ij) and the states of the presynaptic neurons is nonnegative, the sign function of the postsynaptic neuron activates it (+1); if instead the combination is negative, the postsynaptic neuron is inactivated (-1). The future state of the postsynaptic neuron (s_i′) can be written as:

s_i′ = sgn( Σ_{j=1}^{N} ω_ij s_j )    (2)

where the sign (sgn) function is:

sgn(x) = 1, if x ≥ 0 ⟷ s_i′ = 1
sgn(x) = −1, if x < 0 ⟷ s_i′ = −1    (3)
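To make the threshold rule of Eqs. (2)-(3) concrete, the following is a minimal MATLAB sketch of a single asynchronous update (the toy weights and variable names are illustrative assumptions, not taken from the code listing at the end of the paper):

% Minimal sketch of one asynchronous update step, Eqs. (2)-(3).
% W is a symmetric N-by-N weight matrix; s is a column vector of +/-1 states.
N = 4;
W = [0 1 -1 1; 1 0 1 -1; -1 1 0 1; 1 -1 1 0]/N; % toy symmetric weights
s = [1; -1; 1; 1];      % current network state

k = randi(N);           % pick one neuron at random (asynchronous dynamics)
h = W(k,:)*s;           % linear combination of presynaptic inputs to neuron k
if h >= 0               % sgn(x) = 1 for x >= 0
    s(k) = 1;
else                    % sgn(x) = -1 for x < 0
    s(k) = -1;
end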

As introduced previously, the Hopfield model assumes asynchronous dynamics. Therefore, the updated state of the network (s̄′) after one timestep, considering that neuron j is chosen to be updated at random, can be represented mathematically in the following way:

s̄ = (s_1, ..., s_N)^T → s̄′ = (s′_1, ..., s′_j, ..., s′_N)^T    (4)

Hebbian Learning Rule
A HN is able to store binary patterns (η_i = ±1). Each stored pattern coincides with an attractor of the system, consisting of a stable fixed point with a non-empty basin of attraction. Each of the µ patterns stored in the network can be written as a vector of activity states, one for each neuron in the network, and can be expressed as follows:

η̄^µ = (η^µ_1, η^µ_2, ..., η^µ_N)^T    (5)

The Hebbian rule states that two neurons that fire together may be strongly connected, meaning that the weight ω_ij of the connection between two neurons i and j will be elevated and strengthened if the connected neurons have the same activation in a pattern. This can be summarized by the following equation:

ω_ij = (1/N) η_i η_j    (6)

where η_i and η_j are the states of the neurons i and j when the system reaches the state of the memory pattern η̄ (the attractor). To make this value independent of the size of the network, the weight is divided by N, the number of neurons constituting the network.

The connections between two neurons can be excitatory or inhibitory. A positive link (excitatory connection) is directly linked to memory formation, as it implies that if a neuron is activated, the positively connected ones will also activate. In contrast, an inhibitory connection means that an activated neuron suppresses the activation of the others linked to it, therefore not allowing memory formation.

Storing Memory Patterns
A memory pattern is successfully stored in a Hopfield network, and the state s̄ = η̄ is an attractor, if two conditions are fulfilled:
1 The state s̄ = η̄ is a stable fixed point.
2 The state s̄ = η̄ has a non-empty basin of attraction.

1 The state s̄ = η̄ is a stable fixed point if the system remains in this state in the following time steps, meaning s̄′ = η̄ when the state s̄o = η̄ is the initial condition of the network. Assuming s̄ = η̄, this can be proven in the following way:

s_i′ = sgn( Σ_j ω_ij s_j ) = sgn( Σ_j ω_ij η_j ) = sgn( Σ_j (1/N) η_i η_j η_j )
     = sgn( (1/N) η_i Σ_j η_j η_j ) = sgn( (1/N) η_i N ) = sgn( η_i ) = η_i    (7)

2 The state s̄ = η̄ is stable, and hence has a non-empty basin of attraction, if there is at least one other network state that is attracted to the pattern η̄, meaning that the system converges to the state s̄ = η̄ when initialized in a different state s̄o ≠ η̄. This is summarized as:

{∃ s̄o ≠ η̄ : s̄o → s̄ = η̄}

Assuming the initial state of the network is s̄o = (η_1, −η_2, −η_3, ..., η_N)^T, where the sign of a fraction f of the entries is flipped, the proof proceeds as follows:

s_i′ = sgn( Σ_j ω_ij s^o_j ) = sgn( Σ_j (1/N) η_i η_j s^o_j ) = sgn( (1/N) η_i Σ_j η_j s^o_j )    (8)

where Σ_j η_j s^o_j = N(1 − f)(η_j η_j) + N f (η_j (−η_j)). Knowing that η_j η_j = 1 and η_j (−η_j) = −1:

Σ_j η_j s^o_j = N(1 − 2f)    (9)

s_i′ = sgn( (1/N) η_i N(1 − 2f) ) = sgn( η_i (1 − 2f) )
     = η_i, if f < 0.5, since (1 − 2f) > 0 leaves the sign of η_i unchanged
     = −η_i, if f > 0.5, since (1 − 2f) < 0 reverses the sign of η_i    (10)

Hence, any initial state with fewer than half of its entries flipped is attracted back to η̄, which proves that the basin of attraction is non-empty.
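The fixed-point condition proved in Eq. (7) can also be checked numerically. The sketch below (a toy example with an assumed size N; names are illustrative, not from the paper's code listing) stores one random pattern with the Hebbian rule of Eq. (6) and verifies that the pattern is left unchanged by a full update sweep:

% Store one random pattern with the Hebbian rule, Eq. (6), and verify
% Eq. (7): starting from s = eta, an update sweep must return eta itself.
N   = 50;
eta = 2*randi([0 1],N,1) - 1;   % random +/-1 pattern
W   = (1/N).*(eta*eta');        % Hebbian weights

s = eta;                        % initialize the network at the pattern
for k = 1:N                     % asynchronous sequential sweep
    % here W(k,:)*s = eta(k), which is never 0, so MATLAB's sign()
    % agrees with the model's convention sgn(0) = 1
    s(k) = sign(W(k,:)*s);
end
assert(isequal(s,eta));         % eta is a stable fixed point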

Spurious States
The Hopfield model is known to exhibit spurious memories, where a random input has a high probability of leading to a stable neural activity that is distinct from any previously learned pattern [2].

If the system starts in a state where the memory pattern is negated, the system will end in an attractor of the opposite sign, −η̄. These states are named spurious memories, also known as reversed states. In the case of associative memory, spurious states correspond to memories that cannot be recalled, so they are not desirable.

The energy landscape of a HN storing a single pattern is divided into two symmetric basins of attraction of equal size. If the initial state s̄o has less than 50% (f < 0.5) of the individual states flipped compared to the intended pattern we want to retrieve, then the system converges to the non-spurious stored attractor η̄. If more than half of the individual initial states are flipped (f > 0.5), then there is more overlap with the spurious state than with the stored pattern, and the system converges to the spurious state.

Connectivity matrix
As previously illustrated, the connectivity matrix of a HN is intrinsically symmetric (ω_ij = ω_ji), which accounts for its recurrent nature and the presence of loops in the network. It is defined by the outer product of the memory pattern η̄ with itself:

W = (1/N) η̄ η̄^T    (11)

Depending on the sign of the elements of the vector containing the pattern η̄, the weight of the connection between each pair of neurons will be positive or negative. If positive, it will result in an excitatory interaction; if negative, in an inhibitory one. Following the Hebbian rule, it should be noted that if two neurons have an excitatory interaction, they will tend to be activated at the same time (fire together); otherwise, they will fire at different times (out of sync).

Storing Multiple Memory Patterns
As previously shown, a Hopfield network can store a memory pattern, which will be recovered even if the input stimulus is an incomplete or corrupted version of the intended memory.

Nevertheless, a Hopfield network can store multiple patterns, so the connectivity matrix should be rewritten each time a new memory is acquired during the learning stage. Consider a set of different vectors of binary memory patterns:

η̄^µ = (η^µ_1, η^µ_2, ..., η^µ_N)^T

where µ = 1, 2, ..., M indexes the memorized patterns. Considering multiple memory patterns, the Hebbian rule can be adapted in the following way:

ω_ij = (1/N) Σ_{µ=1}^{M} η^µ_i η^µ_j    (12)

Therefore, the connectivity matrix can be written as the sample correlation matrix of the memory patterns, divided by N to keep the weights independent of the size of the network:

W = (1/N) Σ_{µ=1}^{M} η̄^µ (η̄^µ)^T    (13)

Normalizing by the number of neurons ensures that the strength of connections scales appropriately with the size of the network and that all neurons are treated equally in the network dynamics. This prevents the dynamics from becoming dominated by large weights as the network size grows, that is, from retrieval becoming dominated by the larger and more stable patterns, which would lead to incorrect retrieval and instability in the network. Note that this normalization is only appropriate for HNs with symmetric connections; for other types of networks or connection schemes, different normalizations may be required.

Energy of a Hopfield Network
The energy function represents the memories stored in the network as local minima, and going downhill in a basin of attraction is equivalent to retrieving information. The energy function proves the existence of local minima, therefore allowing memory storage.

The energy of a HN is defined as the energy of the network state at a given discrete time. Mathematically, the energy function of a HN can be expressed in matrix form as follows:

H(s̄) = −(1/2) s̄^T W s̄ = −(1/2) Σ_{i,j} ω_ij s_i s_j    (14)

The energy in a Hopfield network never increases: it always decreases or remains constant.
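Before the formal proof, this monotonicity can be illustrated numerically. The sketch below (sizes and names are illustrative assumptions; the Hebbian accumulation loop mirrors the one in the code listing at the end of the paper) builds W from Eq. (13), evaluates Eq. (14), and checks that single-neuron updates never raise the energy:

% Build W from M random patterns (Eq. (13)), evaluate the energy (Eq. (14)),
% and check that asynchronous single-neuron updates never increase it.
N = 100;  M = 10;
patterns = 2*randi([0 1],N,M) - 1;

W = zeros(N);
for mu = 1:M
    W = W + (1/N).*(patterns(:,mu)*patterns(:,mu)'); % Hebbian sum of outer products
end

s = 2*randi([0 1],N,1) - 1;       % random initial state
H = (-1/2).*s'*W*s;               % energy, Eq. (14)
for t = 1:10*N                    % many single-neuron updates
    k = randi(N);
    if W(k,:)*s >= 0, s(k) = 1; else, s(k) = -1; end
    Hnew = (-1/2).*s'*W*s;
    assert(Hnew <= H + 1e-12);    % Delta H <= 0, up to rounding
    H = Hnew;
end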

Formally, this can be proven in the following way. The future energy function can be written as:

H′ = H(s̄′) = −(1/2) Σ_{i,j} ω_ij s′_i s′_j    (15)

Therefore, the variation of energy between the current state and the next one, which should be zero or negative, is:

ΔH = H′ − H = −(1/2) Σ_{i,j} ω_ij (s′_i s′_j − s_i s_j) ≤ 0    (16)

Splitting the sum into the terms that involve the updated neuron k and those that do not:

ΔH = −(1/2) ω_kk (s′_k s′_k − s_k s_k) − (1/2) Σ_{j≠k} ω_kj (s′_k s′_j − s_k s_j)
     − (1/2) Σ_{i≠k} ω_ik (s′_i s′_k − s_i s_k) − (1/2) Σ_{i,j≠k} ω_ij (s′_i s′_j − s_i s_j)
   = −(1/2) Σ_{j≠k} ω_kj (s′_k s′_j − s_k s_j) − (1/2) Σ_{i≠k} ω_ik (s′_i s′_k − s_i s_k)    (17)

where the first term vanishes because s′_k s′_k = s_k s_k = 1, and the last sum vanishes because, under asynchronous updating, only neuron k changes state (s′_i = s_i for all i ≠ k).

Considering the symmetry property (ω_ij = ω_ji), and using s′_j = s_j for j ≠ k:

ΔH = −Σ_{j≠k} ω_kj (s′_k − s_k) s_j = −(s′_k − s_k) Σ_{j≠k} ω_kj s_j
   = −(s′_k − s_k) Σ_j ω_kj s_j + ω_kk (s′_k − s_k) s_k    (18)

Two different scenarios should be considered:
1 If s′_k = s_k → ΔH = 0.
2 If s′_k ≠ s_k:
1) s_k = −1, s′_k = 1 → Σ_j ω_kj s_j ≥ 0. Therefore, the change in energy is

ΔH = −(1 − (−1)) Σ_j ω_kj s_j + ω_kk (1 − (−1))(−1) = −2 Σ_j ω_kj s_j − 2 ω_kk → ΔH < 0    (19)

2) s_k = 1, s′_k = −1 → Σ_j ω_kj s_j < 0. Repeating the above process, the same result is obtained: ΔH < 0.

The conclusion that can be drawn from the above demonstrations is that the energy in a HN never increases: ΔH ≤ 0.

In the presence of noise in the network, spurious attractors may emerge. These attractors have lower energy and thus greater stability compared to the actual fixed points. If the added noise exceeds a certain threshold, it can cause the system to escape from the attractor and result in the loss of stored memory. Figure 1 illustrates the energy landscape of a Hopfield network.

Figure 1 Energy landscape of a Hopfield network [3]

Network Capacity
The capacity of a Hopfield network is described as the number of memories that can be stored and correctly recalled. It depends on the number of neurons in the given network (and on the connections, if symmetry is broken in the connectivity matrix). Formally, the storage capacity of a HN can be defined as the maximum number of patterns M_max that a network composed of N neurons is capable of retrieving, and it is given by:

α = M_max / N    (20)

When a HN stores multiple memories, the same neurons can be utilized to memorize different patterns. This phenomenon aligns with the biological concept of memory storage through the creation of circuits known as reverberations. In the brain, when multiple memories are stored, neurons from different circuits may be repurposed as nodes for new circuits to store additional memories. In the Hopfield model, this phenomenon is reflected as the degree of overlap between different patterns, indicating the similarity between their activity states.
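This overlap between patterns can be measured directly. The sketch below (an illustrative example, not part of the original code) computes the normalized pairwise overlaps of random patterns; for independent ±1 entries they are of order 1/√N, which is precisely the source of the cross-talk discussed next:

% Normalized pairwise overlaps between random stored patterns:
% m(mu,nu) = (1/N) * sum_i eta_i^mu * eta_i^nu.
N = 100;  M = 10;
patterns = 2*randi([0 1],N,M) - 1;
m = (patterns'*patterns)/N;  % M-by-M overlap matrix; diagonal entries are 1
disp(m(1,2));                % off-diagonal entries are typically ~1/sqrt(N)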

However, this overlap can introduce an interference term between patterns, which can hinder the retrieval of specific memories.

As the number of stored patterns increases, the basins of attraction of different patterns tend to shrink, and the minima in the energy landscape come closer to each other. This phenomenon results in increased interference between patterns, where the system may converge to incorrect attractors if their basins of attraction are in close proximity. Additionally, the presence of noise from spurious patterns becomes more prominent, as these patterns also occupy storage resources and further constrain the energy landscape.

When the ratio of patterns to neurons (M/N) exceeds the capacity of the network, the basins of attraction of different attractors may become so close to each other that they collapse into a single minimum in the energy landscape. This can result in the loss of one of the two patterns, making its retrieval impossible, a phenomenon known as catastrophic forgetting. To mitigate the occurrence of catastrophic forgetting, it is crucial to maintain an optimal M/N ratio in the network, where the number of stored patterns is carefully balanced with the number of neurons available.

Due to the increasing cross-talk between patterns, the probability of a neuron erroneously changing to the flipped state of the intended pattern during retrieval increases as the ratio M/N increases. Through experimentation with different erroneous state-flip probabilities, it has been determined that the critical ratio αc, which ensures that the erroneous state-flip probability remains lower than 1% and all patterns are stored correctly, is αc = 0.185 [1]. This implies that in a large network with a finite number N of neurons, on average, up to about 0.01N neurons in each pattern may change to the wrong state when updated and exhibit erroneous activity.

However, this is only valid for the first iteration. This small set of flipped neurons could potentially trigger cascades of state changes in other neurons in subsequent iterations, leading also to catastrophic forgetting. To mitigate the risk of such cascades, the number of stored patterns can be kept below the safer ratio limit of αc = 0.138 [1].

Connectivity and Neuronal Loss Model
The model development and computations described below were performed using MATLAB (MathWorks, Natick, MA) R2022b [5].

In total, two network models were developed: a connectivity degeneration model and a neural degeneration model, each described in detail in its respective section below. Following the Hopfield mechanism described in the previous sections, we constructed a fully connected HN for each of the two models to investigate the influence of either connection loss or neural loss on the two necessary conditions for memory storage and retrieval: the stability of the fixed points and the non-emptiness of the basins of attraction, respectively.

To study the storage capabilities, a network of N = 150 neurons was used in both models, while to study the retrieval capabilities a network of N = 100 neurons was used. This difference in network size allowed us to reduce the computational cost of the algorithm: no significant changes were observed for networks with more than 100 neurons when storing 10 patterns, and the larger size was only useful for the analysis of the storage results, as it gave a broader view of the dynamics. For all the models and submodels, we made the network learn M = 10 patterns, and all the tests were performed with random patterns, defined as N-dimensional vectors generated by randomly assigning a value of -1 or 1 to each component. Random patterns allow for minimal redundancy in the stored information, as the stored information cannot be compressed; therefore, they represent the worst-case scenario in memory formation. Furthermore, this approach avoids bias in the results, as all the patterns have similar stability and characteristics.

To iterate every submodel we used asynchronous sequential cyclic updating, meaning that only one neuron is updated at a time: neurons are updated one by one in an ordered manner, and this process is repeated for a certain number of cycles. To update a neuron, its state was set to -1 or 1 depending on whether its summed inputs were < 0 or ≥ 0, respectively. As previously stated, asynchronicity is required in a HN to explore the entire state space, which does not happen with synchronous dynamics. State transitions only happen if the right neuron is updated; otherwise, the state does not change. Hence, several time steps are needed to converge to an attractor, and there may be attractors that cannot be reached if synchronous updating is applied. Although fixed schedules, such as sequential or parallel updating, can ensure convergence to a stable attractor, they may be biased towards certain patterns or configurations; however, as we are working with random patterns, the updating schedule is less relevant. In addition, as sequential updates converge faster if the initial state is close to a stored pattern, and we are testing convergence close to the attractors (most of the time), this will allow us to reduce the computational cost of the experiments.
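One cycle of this asynchronous sequential cyclic updating can be condensed as follows (a sketch with assumed sizes; the full experiment loops in the code listing at the end of the paper follow the same structure):

% One cycle of asynchronous sequential cyclic updating: neurons are
% visited in order, and each update immediately uses the current state.
N = 100;  M = 10;
patterns = 2*randi([0 1],N,M) - 1;
W = (1/N).*(patterns*patterns');
s = patterns(:,1);

for cycle = 1:10            % the experiments iterate 10 such cycles
    for n = 1:N
        if W(n,:)*s >= 0    % threshold rule, Eqs. (2)-(3)
            s(n) = 1;
        else
            s(n) = -1;
        end
    end
end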

All the submodels were iterated for 10 cycles, where in each cycle each neuron was updated once, and to obtain an average of the results, the same calculations were repeated for 400 Monte Carlo (MC) realizations when testing memory storage and for 100 MC realizations when testing retrieval (due to the high computational cost). The required number of cycles was determined by previously estimating the average number of iterations needed for the system to converge to the designated attractor in a non-degenerated network. We found that 10 cycles were more than enough for convergence and also computationally optimal. This coincides with the results found in [4], where the number of iterations of the HN required to achieve convergence to the designated attractor from an arbitrary starting state, as a function of the number of links per node in the network, never rises above 7 cycles, even for low connectivity.

To test the stability of the patterns, for both the connectivity degeneration model and the neural degeneration model, the network was initialized in a state equal to the intended pattern. It is important to note that stability was only tested for one of the 10 patterns: as patterns are random and we average the number of stable patterns over 400 MC simulations, it can be assumed that all patterns are stable on average if one of them is. The stability of the designated attractor was taken as confirmed if the network remained in this state after 10 cycles with a flipping error of at most 1%, that is, with at most 1% of the neurons in the wrong state. This relates to the capacity of a network presented in previous sections: for patterns to be well stored and constitute stable fixed points, the number of stored patterns must be below the critical ratio limit of αc = 0.138, which will be met as long as we consider stable only the attractors that result in a converged state with a maximum error rate of 1%. Even though the critical capacity calculation typically applies to larger networks, we assume our network to be large for the purpose of convenient mathematical treatment, while acknowledging the limitations. To account for this error, we computed the overlap between the intended vector and the state of the network over time. We also kept track of the energy of the network over time, using the energy function presented before, to ensure the patterns were indeed stable and to see the rate at which patterns lost their stability and hence their energy increased.

To test the recall of the patterns, under both connectivity degeneration and neural degeneration, we introduced random flips (changes of a neuron's state to the opposite one) with an increasing probability p into one of the patterns, and initialized the network in such a distorted state. This was performed repeatedly for all 10 patterns and for 51 flipping probabilities from 0 to 0.5 in steps of 0.01. To evaluate the recall performance, the non-emptiness of the designated attractor was taken as confirmed if the network converged to the intended attractor within 10 cycles, that is, if after 10 cycles the overlap between the intended pattern and the state of the network was equal to or higher than 95%, as employed in Tolmachev et al. [6]. The number of retrieved patterns for every flipping probability was averaged over the number of patterns (M = 10) and the number of MC simulations (MC = 100).

Next, we will elucidate how the loss of connections and neurons was introduced into a HN to create the two degeneration models.

Connectivity Degeneration Model
We first investigated how the existence of multiple attractors in a HN is influenced by its connectivity. To do so, as explained above, we constructed two submodels to investigate the two necessary conditions for the continued existence and global stability of the designated attractor and for the convergence to it in the context of connection loss. Similarly to [4], the loss of connections was introduced in the model by randomly setting weights in the connectivity matrix to 0 with probability p, hence destroying those connections. For each probability p the same connectivity matrix was used, but a different set of connections was destroyed, as the destruction was random. The disconnection probability p was studied for increasing values from 0 to 1 in steps of 0.01, and for each value the stability of the patterns and the convergence to them were evaluated individually. Thus, to test memory storage the total number of iterations was 400 MC realizations × 101 disconnection probabilities × 10 cycles × 100 updates, and to test memory retrieval it was 100 MC realizations × 101 disconnection probabilities × 51 flipping probabilities × 10 patterns × 10 cycles × 100 updates.
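The two perturbations used in these tests reduce to a few lines; the sketch below condenses the masking and flipping operations from the code listing at the end of the paper (the parameter values are illustrative):

% Connection destruction and initial-state corruption, condensed from
% the full code listing.
N = 100;  M = 10;  p_cut = 0.3;  p_flip = 0.1;
patterns = 2*randi([0 1],N,M) - 1;
W = (1/N).*(patterns*patterns');

W = W.*(rand(N) >= p_cut);          % destroy each weight with prob. p_cut;
                                    % the mask is not symmetric, so the
                                    % damaged W generally loses w_ij = w_ji
flip = 1 - 2*(rand(N,1) <= p_flip); % -1 with prob. p_flip, +1 otherwise
s0 = patterns(:,1).*flip;           % distorted initial state of pattern 1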

Neural Degeneration Model
Secondly, we investigated how the existence of multiple attractors in a HN is influenced by its nodes, the neurons. The loss of neurons was introduced into the model by randomly setting rows of weights in the connectivity matrix to 0 with probability p, hence breaking all the connections of a neuron and leaving it isolated from its neighboring cells. However, even if a neuron is isolated, its state may still remain active or inactive and therefore influence the computation of the overlap with the intended pattern. To fully implement neural death, the state of the neuron in both the network's state vector s̄ and in each of the stored patterns was also changed to 0 with probability p.

In the model, neurons are killed randomly, without replacement, one by one every 10 cycles, until no neurons are left. In this case, we consistently use the same connectivity matrix, as the modification is applied to the memory patterns rather than to the connectivity matrix. Meanwhile, we keep track of the neurons that have died. As a result, neural death accumulates over time as new neurons continue to die along with their connections.

For visualization purposes, this process can be approximated as an increasing neural death probability from 0 to 1 in steps of 0.01, as the percentage of dead neurons is equivalent to the average number of neurons that die for each probability p. The accuracy of this approximation increases with the number of MC realizations used to average the results. For each neural death probability p, the storage and retrieval evaluations were conducted individually, in the same manner as in the connectivity degeneration model. Thus, to test memory storage the total number of iterations was 400 MC realizations × 150 perished neurons × 10 cycles × 100 updates, and to test memory retrieval it was 100 MC realizations × 100 perished neurons × 51 flipping probabilities × 10 patterns × 10 cycles × 100 updates.

Results
In this section, we present the results of our study of the Hopfield model's sensitivity to loss of connections. We conducted simulations with various parameter settings, including different network sizes, pattern loads, and probabilities of connection loss and neuronal death. The findings provide insights into the fragility of the Hopfield model under network degradation, shedding light on memory storage and retrieval dynamics under these conditions. We report stability, retrieval accuracy, and energy dynamics analyses to elucidate the impact of loss of connections on the Hopfield model's performance.

Network capacity against number of stored patterns
A Hopfield network with 100 neurons, fully connected and with a symmetric connectivity matrix, was used as a "healthy" baseline condition to study the system's capacity in the absence of any network degradation. This choice of network size was made to balance computational cost with adequate capacity for memory patterns, and it serves as a reference for the expected performance of the network in healthy conditions.

Figure 2 Network capacity against number of stored patterns

The plot shown in Figure 2 has been obtained by making the network learn an increasing number of patterns and iteratively recalculating the connectivity matrix to obtain the corresponding weights. It can be observed that the capacity of the network reaches a limit when the number of patterns to be stored is around 13% of the number of neurons. After this point, the stability of the stored patterns decreases considerably. This observation is consistent with the M/N ratio limit of αc = 0.185 introduced in previous sections, as the capacity of the network saturates around 18 patterns, which corresponds to a ratio of 0.18 for the current network. This indicates that exceeding the maximum capacity of the network results in performance degradation, confirming the relevance of the previously established capacity limit.

When excessive cross-talk occurs between stored patterns in a Hopfield network, the valleys in the energy landscape become too close to each other, resulting in interactions between patterns that prevent complete recovery of the stored patterns. Therefore, we state that to store 10 patterns with negligible interference we need approximately 10 times more neurons. Hence, in the subsequent tests we will utilize networks of 100 to 150 neurons to ensure an appropriate balance between computational cost and network capacity.
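For the baseline network these limits can be made explicit; with N = 100 and the two critical ratios quoted from [1], the predicted maximum numbers of safely stored patterns are roughly 14 and 18, bracketing the saturation observed in Figure 2:

% Predicted storage limits for the N = 100 baseline network,
% using the two critical ratios alpha_c quoted from [1].
N = 100;
M_cascade_safe = 0.138*N   % ~14 patterns: safe against error cascades
M_first_update = 0.185*N   % ~18 patterns: <1% wrong neurons on the first update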

Connectivity degeneration: memory storage and pattern stability
Storage stability and network capacity were studied in a scenario of connectivity degeneration in the form of loss of connections between neurons. This allowed us to determine the minimal set of connections necessary to preserve the functioning of the network.

Figure 3 a) reflects the capacity of the network as a function of the number of lost connections, b) shows the energy of the network over time for different disconnection probabilities, and c) shows the energy associated with the state of the network after 10 cycles as connectivity decay takes place.

Figure 3 a) Memory storage stability against network size, b) Energy over time in connectivity decay, and c) Attractor energy during connectivity decay

It can be observed that, as the network is reduced, the number of stable patterns progressively decreases, and criticality occurs mostly when more than 60% of the connections are lost. This can be explained by taking into account the capacity of the network. This term depends on the network size; however, when individual connections are removed, the initially fully connected network loses its symmetry and its Hopfield structure, which ensured the existence of the fixed-point attractors, and these lose their stability. Therefore, the smaller the number of connections, the fewer patterns the network can learn while maintaining their stability.

In order to assess the convergence of the system during the degeneration of the network, we plotted the energy over time for selected degrees of degeneration in a randomly chosen MC realization. The energy after supposed convergence (10 cycles) was also plotted against the progression of the degeneration (the disconnection probability). As long as the energy remains constant over the cycles, the system remains in the attractor, confirming its stability. However, as the disconnection probability increases, the energy starts fluctuating over time, suggesting that the system may have escaped the initial attractor, due to either its low stability or the possibility that the attractor has ceased to exist. This is also indicated by the magnitude of the energy of the attractors: as the neurons lose connections, memories become more unstable and their storage is weakened, and the magnitude of the (negative) attractor energy decreases.

Another interesting phenomenon worth mentioning is the linear increase in energy as connections are lost, as opposed to what is seen for neuronal death. When connections are lost, the energy of the network is proportional to the number of connections that have been destroyed, which shows that Hopfield networks are robust to the loss of connections.

Connectivity degeneration: memory retrieval and non-empty basin of attraction
To investigate memory retrieval in the context of connectivity degeneration, a methodology similar to the one employed in the study by Anafi et al. [4] was adopted, with modifications to accommodate the storage of multiple patterns. The probability of destroying a connection, rather than the number of connections destroyed, is represented in Figure 4.

Figure 4 a) Overlap between the retrieved and the intended state for different disconnection probabilities. b) Percentage of well-recalled memories

Consistently with previous results, criticality remains around 60% destruction.

Similar results were obtained for neuronal death (see Figure 6), yet in that case a far more abrupt memory loss can be observed. Memory retrieval impairment due to noisy inputs remains similar to the neural degeneration scenario. Nevertheless, there is an intrinsically lower recall ability even for mildly noisy inputs.

Neuronal degeneration: storage stability and capacity
Figure 5 a) shows the capacity of the network as a function of the neural death probability, b) the energy of the network under neuronal loss, and c) the attractor energy as the neural death probability increases.

Figure 5 a) Memory storage stability against network size, b) Network energy with neural degeneration, and c) Attractor energy during neural death

In Figure 5 b) and c) it can be observed that the energy remains constant during all cycles as long as the degenerated population remains above the critical memory capacity for the given patterns. This implies that the system remains at the given pattern, indicating that this pattern is stable (and, by generalization, all of them are, as can be seen through different MC realizations). On the other hand, after criticality, the pattern loses stability, the basin of attraction is reduced in size, and erroneous activity in some neurons (even if it is just a small number) may push the system to another attractor with different energy (capacity reduction and increased interference of the spurious attractors). Far from the criticality, the attractor of the previous pattern may disappear or merge with the attractor of another pattern, which has a different energy, resulting in the loss of the stored memory. This shows that Hopfield networks may be robust against losing nodes. Although the network is not vulnerable to losing hubs, since it is fully connected and has no hubs, it is vulnerable to spurious attractors.

In Figure 5 a), we observe a slight decrease in stability as the first 50 neurons are lost. Once degeneration goes beyond 50 dead neurons, the critical capacity threshold (boundary) is crossed. As stated by the definition of the Hopfield model, when the ratio M/N increases above 0.138, the stability and retrieval of patterns are not guaranteed. The interference of spurious attractors becomes increasingly relevant and a sudden, nonlinear decrease in attractor stability occurs.

It should be noted that criticality occurs at a destruction probability (60%) similar to that observed in the connectivity degeneration scenario. This suggests that connections and neurons have a comparable effect on the capacity of the network. Once the symmetry of the connectivity matrix is lost, each lost connection proportionally reduces the capacity of the network in the same way as each lost neuron. However, as Figures 3 c) and 5 c) show, capacity is lost much faster when neurons are destroyed than when the same happens to connections. This can be attributed to the fact that when a neuron dies, multiple connections are lost simultaneously, resulting in a more rapid reduction of capacity.

The percolation phenomenon in the memory storage capacity of the network, which refers to the ability to correctly store and memorize a specific pattern of activity, is more pronounced in the case of connectivity degeneration than in neuronal degeneration (see Figure 3). However, unlike in the case of a low number of available neurons, no increase in stability is observed for low connectivity. This may be attributed to the fact that the state vector s̄, which defines the state of the network, always has nonzero values in the connectivity degeneration scenario. Even if a neuron becomes isolated due to degeneration of connections, its value remains constant at 1 or -1 indefinitely, as it is not influenced by other neurons and does not switch to zero. As a result, these isolated neurons may still contribute to the overlap with the intended pattern during retrieval if, by chance, their state matches the state in the pattern. However, on average, the overlap will not match the pattern, and therefore for 100% loss of connections the overlap remains at 0.

Neuronal degeneration: memory retrieval, non-empty basin of attraction
To evaluate convergence to the intended attractor, an approach similar to the one applied in Tolmachev et al. [6] was used, testing convergence for an increasing number of flips of the intended pattern. As in the evaluation of storage capacity, the number of patterns was fixed, but the number of neurons available to store them changes over time.
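Neural death, as implemented here, zeroes a neuron's entry in every stored pattern, so a network initialized at a damaged pattern inherits those zeros and the dead neurons simply drop out of the overlap. A condensed sketch of this bookkeeping, following the operations in the code listing at the end of the paper (the death probability is an illustrative value):

% Kill a fraction p of neurons by zeroing their entries in all stored
% patterns, rebuild W, and measure the overlap with pattern 1.
N = 150;  M = 10;  p = 0.4;
patterns = 2*randi([0 1],N,M) - 1;

dead = rand(N,1) < p;               % neurons selected to die
patterns(dead,:) = 0;               % dead neurons contribute nothing
W = (1/N).*(patterns*patterns');

s0 = patterns(:,1);                 % initialize at the damaged pattern
overlap = dot(patterns(:,1),s0)/N;  % equals (number of surviving neurons)/N here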

Results are shown in Figure 6. It can be observed that there is a loss of generalization as neural degeneration progresses. Figure 6 a) shows that, although the ability to recall memories from clean, non-noisy inputs is strongly maintained even under high neural degeneration, this ability quickly deteriorates when the input stimulus is corrupted, noisy, or altered. The robustness of the memory system in recalling memories from noisy inputs is highly influenced by the loss of neurons and their connections. The pattern completion and error correction abilities of the memory system may be the first to be affected, even before a loss of memories, or of the ability to recall them, is noticed. This finding may imply that individuals with Alzheimer's disease may have difficulty recognizing relatives and close friends if they are covering their faces with make-up or personal protective equipment such as surgical masks. This observation could be a potential symptom to consider during cognitive performance tests for early diagnosis of the disease.

Figure 6 a) Overlap between the retrieved and the intended state for different neural death probabilities, b) Percentage of well-recalled memories

On the other hand, Figure 6 b) demonstrates that although the ability to retrieve stored memories decreases as more neurons die and the disease progresses, percolation does not occur until 60% of the nodes in the neural network have been destroyed, which is consistent with previous results. In this case, retrieval is significantly more affected before percolation occurs than stability is, which indicates the significant impact of losing even a single neuron. This finding highlights the vulnerability of memory retrieval to neural degeneration and suggests that even a small loss of neurons can have a substantial effect on memory performance.

Discussion
The results of this study suggest that both neuronal and connection loss are linked to major changes in the stability landscape of a Hopfield network. The proposed model is able to reproduce the intrinsic mechanisms underlying the most common neurodegenerative disorders, as well as demonstrate the potential benefits that treating this loss in time could have on the progression of the condition.

To corroborate our results, the existing literature on the theme was analyzed.

A curve similar to our memory storage stability against network size result, shown in Figure 5, was obtained in a study performed by M. Morrison et al. [7]. They investigated the potential of using cerebral organoids (COs) to prevent neurodegenerative memory loss in Hopfield networks.

Figure 7 Performance for Hopfield networks encoded with an optimal memory set

Figure 7 has been obtained from their study. It shows the effect of using different numbers of COs on the performance of the Hopfield network. Observing the top left plot, it can be seen that the curve determined by the Performance and Damage axes has a shape similar to the one in Figure 5. Nevertheless, they do not represent exactly the same thing: in Morrison's study the Hopfield network is connected to a rather small auxiliary network, while in Figure 5 there is no auxiliary network.

Now consider Figure 2, where the number of networks with stable input patterns is plotted as the number of stored patterns increases. A similar result was obtained in a study performed by G. Gosti et al. [8]. A plot they obtained is shown in Figure 8. It exposes how the network performs as the number of patterns to be stored exceeds the limit and how the performance degrades. This result corroborates the one obtained in Figure 2.

The above paper also provides a justification for Figures 3 and 5. Plot c) of both figures shows how the energy of the stored patterns increases when connections are lost (Figure 3) or neurons are eliminated (Figure 5). It explains what happens when network capacity is reduced and the stored patterns can no longer be successfully recalled.

Figure 8 Retrieval rate against distance between memories

Figure 9, from reference [8], plots the overlap between the original pattern and the retrieved pattern on the y-axis, as a function of the number of stored patterns in the network. It shows that at low capacities the network's ability to correctly recall stored patterns decreases as the number of memories increases, while at higher capacities it increases, indicating that the network is more robust for a higher number of stored patterns.

Figure 9 Overlap between retrieved and original patterns as a function of stored patterns

This supports the results we obtained when plotting the capacity of the network as it is reduced by eliminating neurons or connections.

Conclusions
The aim of this study was to research the effects of neuronal loss and connection loss on the stability of a Hopfield network.

One of the main findings of these simulations is that neuronal loss considerably affects the stability of the stored patterns in a Hopfield network. The smaller the number of remaining neurons, the more memories will fail to be recovered. This is consistent with previous research on the mechanisms underlying common neurodegenerative disorders such as Alzheimer's.

In particular, a study titled 'Neurofibrillary tangles but not senile plaques parallel duration and severity of Alzheimer's disease' [9] found a strong correlation between neuronal loss and the severity of cognitive impairment, particularly memory recall, providing evidence for the role neuronal loss plays in memory impairment in Alzheimer patients.

Another conclusion that can be drawn from our study is the impact connectivity loss has on the ability to recall memories. A study performed by Dennis, E.L. and Thompson, P.M. [10] supports the results we have obtained: through functional magnetic resonance imaging, they revealed the impact that changes in brain connectivity have on Alzheimer's disease and mild cognitive impairment.

A study titled 'Associative Memory Encoding and Recognition in Schizophrenia: An Event-Related fMRI Study' [11] investigated the relationship between schizophrenia and associative memory, concluding that patients suffering from this disorder had an altered associative memory. This could be linked to the inability of those patients to discern reality from fiction, which flattens the energy landscape and makes it harder to correctly recall stored information. The energy of the stored patterns increases as the number of memorized ones grows; in schizophrenic patients, as non-real memories are not correctly eliminated, the network starts saturating, thereby flattening the energy landscape.

Future work could focus on putting together these two models of connectivity and neural degeneration to obtain a complete model of Alzheimer's disease and study its consequences.

In conclusion, this study has advanced our understanding of the effects of network disruption, in the form of neuronal and connectivity loss, and of the potential mechanisms behind Alzheimer's disease and other neurological pathologies.

Competing interests
The authors declare that they have no competing interests.

References
1 Gerstner, W., Kistler, W. M., Naud, R., & Paninski, L. (2014). Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge University Press.
2 Bruck, J., & Roychowdhury, V. P. (1990). On the number of spurious memories in the Hopfield model (neural network). IEEE Transactions on Information Theory, 36(2), 393-397. https://doi.org/10.1109/18.52486
3 Hillar, C., & Tran, N. (2018). Robust exponential memory in Hopfield networks. Journal of Mathematical Neuroscience, 8. https://doi.org/10.1186/s13408-017-0056-2
4 Anafi, R. C., & Bates, J. H. T. (2010). Balancing robustness against the dangers of multiple attractors in a Hopfield-type model of biological attractors. PLoS ONE, 5(12), e14413. https://doi.org/10.1371/journal.pone.0014413
5 The MathWorks Inc. (2022). MATLAB version 9.13.0 (R2022b). Natick, Massachusetts: The MathWorks Inc. https://www.mathworks.com
6 Tolmachev, P., & Manton, J. H. (2020). New insights on learning rules for Hopfield networks: Memory and objective function minimisation. 2020 International Joint Conference on Neural Networks (IJCNN).
7 Morrison, M., Maia, P. D., & Kutz, J. N. (2017). Preventing neurodegenerative memory loss in Hopfield neuronal networks using cerebral organoids or external microelectronics. Computational and Mathematical Methods in Medicine, 2017.
8 Gosti, G., Folli, V., Leonetti, M., & Ruocco, G. (2019). Beyond the maximum storage capacity limit in Hopfield recurrent neural networks. Entropy, 21(8), 726.
9 Arriagada, P. V., Growdon, J. H., Hedley-Whyte, E. T., & Hyman, B. T. (1992). Neurofibrillary tangles but not senile plaques parallel duration and severity of Alzheimer's disease. Neurology, 42(3), 631.
10 Dennis, E. L., & Thompson, P. M. (2014). Functional brain connectivity using fMRI in aging and Alzheimer's disease. Neuropsychology Review, 24(1), 49-62. https://doi.org/10.1007/s11065-014-9249-6
11 Lepage, M., Montoya, A., Pelletier, M., Achim, A. M., Menear, M., & Lal, S. (2006). Associative memory encoding and recognition in schizophrenia: An event-related fMRI study. Biological Psychiatry, 60(11), 1215-1223. https://doi.org/10.1016/j.biopsych.2006.03.043
12 Hoffmann, H. (2019). Sparse associative memory. Neural Computation, 31(5), 998-1014. https://doi.org/10.1162/neco_a_01181
13 Lansner, A. (2009). Associative memory models: From the cell-assembly theory to biophysically detailed cortex simulations. Trends in Neurosciences, 32(3), 178-186. https://doi.org/10.1016/j.tins.2008.12.002
Code listing: Code_Casas_Franco_et_al.m

%% HOPFIELD MODEL AND ITS FRAGILITY AGAINST LOSS OF CONNECTIONS
% Jan Casas & Natàlia Franco

%% CAPACITY Vs Nº OF PATTERNS IN HEALTHY CONDITIONS

N = 100; % Number of neurons in the network
cycle = 10; % Number of complete cycles for updating the network during simulation
M = [5 10 15 25 50 75]; % Array of values representing the number of stored patterns in the network

overlaps = zeros(1,length(M)); % Array to store overlap results for each value of M
stability = zeros(1,length(M)); % Array to store stability results for each value of M

for Miter = 1:length(M) % Loop over the values of M to simulate the network for different numbers of stored patterns
    for realization = 1:100 % Perform 100 realizations of the network for each value of M
        pat1 = 2*randi([0 1],100,1)-1; % Generate a random binary pattern of length 100 with values of -1 and 1; this serves as the first stored pattern in the network

        % Update the connectivity matrix W with the contribution of the first stored pattern
        W = (1/M(Miter)).*pat1*pat1';

        for pati = 1:M(Miter)-1 % Loop over the remaining M-1 patterns to update the connectivity matrix
            pattern = 2*randi([0 1],100,1)-1; % Generate a random binary pattern of length 100 with values of -1 and 1
            W = W + (1/M(Miter)).*pattern*pattern'; % Update the connectivity matrix W with the contribution of the current pattern
        end

        s0 = pat1; % Set the initial state of the network to the first stored pattern

        for i = 1:cycle % Run all cycles for updating the network
            for n = 1:100 % Update each neuron in the network
                sk = W(n,:)*s0; % Compute the weighted sum for the current neuron
                if sk >= 0 % Apply the activation function (threshold) to determine the updated state of the neuron
                    sk = 1;
                else
                    sk = -1;
                end
                s0(n) = sk; % Update the state of the current neuron
            end
        end

        % Compute and store the overlap between the first stored pattern and the final state of the network
        over = dot(pat1,s0);
        overlaps(Miter) = overlaps(Miter) + over;

        % Check if the overlap between the first stored pattern and the final state of the network is above a threshold (0.98), indicating stability
        if over/N >= 0.98
            stability(Miter) = stability(Miter) + 1;
        end
    end
end

overlaps = (overlaps./100)./100; % Normalize the overlap results by dividing by the number of realizations and the number of neurons in the network

% Plot the number of networks with stable input patterns as a function of the number of stored patterns
figure;
plot(M,stability,'-o');
grid on;
title('Number of networks with stable input pattern');
xlabel('Number of stored patterns');
ylabel('Number of stable input patterns');

%% CONNECTIVITY DEGENERATION - Memory Storage

N_max = 150; % Number of neurons in the network
cycles = 10; % Number of complete cycles
montecarlo = 400; % Number of Monte Carlo simulations
M = 10; % Number of patterns
patterns = zeros(N_max,M); % Array to store patterns
pdisconnect = 0:0.01:1; % Disconnection probability vector
len_pdisconnect = length(pdisconnect); % Length of the probability vector
stability = zeros(1,len_pdisconnect); % Average ratio of stable patterns for each disconnection probability
H = zeros(len_pdisconnect,cycles+1); % Average energy vector

% Monte Carlo simulations
for realization = 1:montecarlo % Loop over realizations
    for pati = 1:M % Loop over patterns
        patterns(:,pati) = 2*randi([0 1],N_max,1)-1; % Generate random patterns
    end

    for cut = 1:len_pdisconnect % Loop over disconnection probabilities
        W = zeros(N_max); % Initialize connectivity matrix
        % Update connectivity matrix based on patterns and disconnection probability
        for pati = 1:M
            W = W + (1/N_max).*patterns(:,pati)*patterns(:,pati)';
        end
        cut_W = (rand(N_max,N_max) >= pdisconnect(cut));
        W = W.*cut_W;
        s0 = patterns(:,1); % Initialize initial state with the first pattern

        H(cut,1) = (-1/2).*s0'*W*s0; % Compute initial energy

        for i = 1:cycles % Run all cycles

            for n = 1:N_max % Update neuron states
                sk = W(n,:)*s0;
                if sk >= 0
                    sk = 1;
                else
                    sk = -1;
                end
                s0(n) = sk;
            end

            % Energy at each cycle
            H(cut,i+1) = (-1/2).*s0'*W*s0;
        end

        % Overlap
        overlap = dot(patterns(:,1),s0)/N_max; % Normalized overlap; ranges between -1 and 1

        % Stability
        if overlap >= 0.98
            % Although up to 1% of flipped neurons is tolerated, the threshold is 0.98 because each flipped neuron lowers the overlap by 2/N (the overlap ranges from -1 to 1, not from 0 to 1)
            stability(cut) = stability(cut) + 1;
        end
    end
end

% Normalize stability by the number of realizations
stability = stability./montecarlo;

% Plot the results
figure;
subplot(1,3,1);
plot(pdisconnect,stability); grid on;
title('Memory storage stability against network size');
ylabel('Ratio of stable patterns'); xlabel('Disconnection Probability p');

subplot(1,3,2);
for prob = 1:10:length(pdisconnect)
    plot(1:cycles+1,H(prob,:),'-o'); grid on; hold on;
end
hold off;
title('Energy over time in connectivity decay'); % (Example MC realization)
xlabel('Time (cycles)'); ylabel('Energy of the network');
legend('p=0','p=0.1','p=0.2','p=0.3','p=0.4','p=0.5','p=0.6','p=0.7','p=0.8','p=0.9','p=1');

subplot(1,3,3);
plot(pdisconnect,H(:,end)); grid on;
title('Attractor Energy during connectivity decay'); % (Example MC realization)
xlabel('Disconnection Probability p'); ylabel('Energy in the last cycle');

%% CONNECTIVITY DEGENERATION - Memory Retrieval

N_max = 100; % Number of neurons in the network


cycles = 10; % Number of complete cycles
montecarlo = 100; % Number of Monte Carlo simulations
M = 10; % Number of patterns
patterns = zeros(N_max,M); % Array to store patterns
pdisconnect = 0:0.01:1; % Disconnection probability vector
len_pdisconnect = length(pdisconnect); % Length of the probability vector
pflip = 0:0.01:0.5; % Flipping probability vector
len_pflip = length(pflip); % Length of the Flipping probability vector
overlaps = zeros(len_pdisconnect,len_pflip); % Array to store overlaps
retrieval = zeros(len_pdisconnect,len_pflip); % Array to store number of retrievals

% Loop for Monte Carlo simulations


for realization = 1:montecarlo; % Loop over realizations
for pati = 1:M; % Loop over patterns
patterns(:,pati) = 2*randi([0 1],N_max,1)-1; % Generate random patterns
end

% Loop over disconnection probabilities


for cut = 1:len_pdisconnect;
W = zeros(N_max);
for pati = 1:M;
% Update connectivity matrix with Hebbian rule
W = W + (1/N_max).*patterns(:,pati)*patterns(:,pati)';
end
cut_W = rand(N_max,N_max) >= pdisconnect(cut); % Dilution mask: each entry kept with probability 1-p
W = W.*cut_W; % Remove the selected connections

% Loop over patterns


for ptt = 1:M
% Loop over initial state flip probabilities
for prob = 1:len_pflip;
s0 = patterns(:,ptt);
flip = -(2*(rand(N_max,1) <= pflip(prob))-1); % -1 where the neuron is flipped, +1 elsewhere
s0 = s0.*flip; % Perturb the initial state

% Update neuron states over cycles


for i=1:cycles; % run all cycles
for n = 1:N_max;
sk = W(n,:)*s0;
if sk >= 0;
sk = 1;
else
sk = -1;
end
s0(n)=sk;
end
end

% Average Overlap (between the retrieved and the intended state)


overlap = dot(patterns(:,ptt),s0)/N_max; % Normalized overlap, in [-1,1]
overlaps(cut,prob) = overlaps(cut,prob) + overlap/M; % Accumulate average over patterns

% Number of retrieved patterns


retrieval(cut,prob) = retrieval(cut,prob) + (overlap >= 0.95);
end
end
end
end

% Normalize overlap by the number of realizations


overlaps = overlaps./montecarlo;
% Normalize number of retrievals by the number of realizations and patterns
retrieval = retrieval./montecarlo./M;

% Plot results
figure;
subplot(1,2,1);
for c=1:10:len_pflip % Rows 1,11,...,51 correspond to pdisconnect = 0,0.1,...,0.5
plot(pflip,overlaps(c,:)); grid on; hold on;
end
hold off;
title('Overlap between retrieved and intended states for different disconnection probabilities');
xlabel('Flip probability in the initial state'); ylabel('Average overlap of the patterns');
legend('p=0','p=0.1','p=0.2','p=0.3','p=0.4','p=0.5');

subplot(1,2,2);
plot(pdisconnect,retrieval(:,11)); grid on; % Column 11 corresponds to pflip = 0.1
title('Fraction of well-recalled memories');
xlabel('Disconnection Probability p'); ylabel('Average ratio of retrieved memories');
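
%% SKETCH (assumption): storage load sanity check
% A classical result for the Hopfield model is that retrieval degrades
% once the load alpha = M/N exceeds the critical value alpha_c ~ 0.138.
% With N_max = 100 and M = 10 the undamaged network sits at alpha = 0.1,
% just below this limit, which is consistent with the near-perfect
% retrieval observed at p = 0 above. Shown only as an illustration.
alpha = M/N_max; % Storage load of the undamaged network
fprintf('Load alpha = M/N = %.2f (critical alpha_c ~ 0.138)\n', alpha);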

%% NEURONAL DEGENERATION - Memory Storage


N_max = 150; % Number of neurons in the network
stability = zeros(1,N_max+1); % Average ratio of stable patterns
cycles = 10; % Number of complete cycles
montecarlo = 400; % Number of Monte Carlo simulations
M = 10; % Number of patterns
patterns = zeros(N_max,M); % Array to store patterns
H = zeros(N_max+1,cycles+1); % Energy trace (holds the last MC realization)

% Monte Carlo simulations


for realization = 1:montecarlo; % Loop over realizations
for pati = 1:M; % Loop over patterns
patterns(:,pati) = 2*randi([0 1],N_max,1)-1; % Generate random patterns
end
neuron_index = 1:N_max; % Indices of the neurons still alive
kill_index = []; % Index of the neuron selected to be killed next

for death = 0:N_max; % Loop over the number of neurons to kill


if death ~= 0 % Skip the kill on the first iteration (no neurons removed yet)
kill_index = randi([1 length(neuron_index)],1,1); % Randomly select a surviving neuron to kill
end

patterns(neuron_index(kill_index),:)=0; % Kill the selected neuron (zero its entries in every pattern)

neuron_index(kill_index)=[]; % Remove the killed neuron from the index vector
s0 = patterns(:,1); % Initialize the state with the first pattern
W = zeros(N_max); % Define the connectivity matrix

for pati = 1:M; % Loop over patterns
W = W + (1/N_max).*patterns(:,pati)*patterns(:,pati)'; % Update the connectivity matrix
end

H(death+1,1) = (-1/2).*s0'*W*s0; % Compute initial energy

for i=1:cycles; % Run all cycles


for n = 1:N_max; % Update neuron states
if s0(n) ~= 0 % Dead neurons are skipped and stay at 0
sk = W(n,:)*s0;
if sk >= 0;
sk = 1;
else
sk = -1;
end
s0(n)=sk; % Update the state of neuron n
end
end

H(death+1,i+1) = (-1/2).*s0'*W*s0; % Energy at each cycle


end

% Overlap
overlap = dot(patterns(:,1),s0)/(N_max-death); % Normalized overlap over surviving neurons, in [-1,1]

% Stability
if overlap >= 0.98;
% Up to 1% flipped neurons is tolerated; the threshold is 0.98 rather
% than 0.99 because the overlap ranges from -1 to 1, so each flip
% costs 2/N instead of 1/N.
stability(death+1) = stability(death+1) + 1;
end
end
end

% Normalize stability by the number of realizations


stability = stability./montecarlo;

% Plot the results


figure;
subplot(1,3,1);
plot((0:N_max-1)/N_max,stability(1:end-1)'); grid on; % Last point (all neurons dead) excluded
title('Memory storage stability against neural death probability');
ylabel('Ratio of stable patterns'); xlabel('Neural Death Probability p');

subplot(1,3,2);

plot(1:cycles+1,H(1:15:end,:),'-o'); grid on; % Rows 1,16,...,151 correspond to p = 0,0.1,...,1


title('Energy of the network in neural degeneration' ); %(Example MC realization)
xlabel('Time (cycles)'); ylabel('Energy of the network');
legend('p=0','p=0.1','p=0.2','p=0.3','p=0.4','p=0.5','p=0.6','p=0.7','p=0.8','p=0.9','p=1');

subplot(1,3,3);
plot(linspace(0,1,N_max+1),H(:,end)); grid on;
title('Attractor Energy during Neural Death'); % (Example MC realization)
xlabel('Neural Death Probability p'); ylabel('Energy in the last cycle');
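
%% SKETCH (assumption): reusable threshold helper
% The 'if sk >= 0' branch repeated in every experiment is a sign
% function with the convention sign(0) = +1 (MATLAB's built-in sign
% returns 0 at 0, so it cannot be used directly). An equivalent inline
% helper, shown only as an illustration with the hypothetical name
% 'threshold', could replace the branch:
threshold = @(h) 2*(h >= 0) - 1; % Maps h >= 0 to +1 and h < 0 to -1
% e.g. inside an update sweep: s0(n) = threshold(W(n,:)*s0);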

%% NEURONAL DEGENERATION - Memory Retrieval


N_max = 100; % Number of neurons in the network
cycles = 10; % Number of complete cycles
montecarlo = 100; % Number of Monte Carlo simulations
M = 10; % Number of patterns
patterns = zeros(N_max,M); % Array to store patterns
pflip = 0:0.01:0.5; % Flipping probability vector
len_pflip = length(pflip); % Length of the flipping probability vector
overlaps = zeros(N_max+1,len_pflip); % Array to store overlaps
retrieval = zeros(N_max+1,len_pflip); % Array to store number of retrievals

% Monte Carlo simulations


for realization = 1:montecarlo; % Loop over realizations
for pati = 1:M; % Loop over patterns
patterns(:,pati) = 2*randi([0 1],N_max,1)-1; % Generate random patterns
end
neuron_index = 1:N_max; % Indices of the neurons still alive
kill_index = []; % Index of the neuron selected to be killed next

for death = 0:N_max; % Loop over the number of neurons to kill


if death ~= 0; % Skip the kill on the first iteration (no neurons removed yet)
kill_index = randi([1 length(neuron_index)],1,1); % Randomly select a surviving neuron to kill
end
patterns(neuron_index(kill_index),:)=0; % Kill the selected neuron (zero its entries in every pattern)
neuron_index(kill_index)=[]; % Remove the killed neuron from the index vector
s0 = patterns(:,1); % Initialize the state with the first pattern
W = zeros(N_max); % Define the connectivity matrix
for pati = 1:M; % Loop over patterns
W = W + (1/N_max).*patterns(:,pati)*patterns(:,pati)'; % Update the connectivity matrix
end

% Loop over patterns


for ptt = 1:M
for prob = 1:len_pflip; % Loop over flipping probabilities
s0 = patterns(:,ptt); % Set the initial state equal to the selected pattern
flip = -(2*(rand(N_max,1) <= pflip(prob))-1); % -1 where the neuron is flipped, +1 elsewhere
s0 = s0.*flip; % Perturb the initial state

for i=1:cycles; % Run all cycles


for n = 1:N_max; % Update neuron states
if s0(n) ~= 0 % Dead neurons are skipped and stay at 0
sk = W(n,:)*s0;
if sk >= 0;
sk = 1;
else
sk = -1;
end
s0(n)=sk; % Update the state of neuron n
end
end
end

% Average Overlap (between the retrieved and the intended state)


overlap = dot(patterns(:,ptt),s0)/(N_max-death); % Normalized overlap over surviving neurons, in [-1,1]
overlaps(death+1,prob) = overlaps(death+1,prob) + overlap/M; % Accumulate average over patterns

% Number of retrieved patterns


retrieval(death+1,prob) = retrieval(death+1,prob) + (overlap >= 0.95);
end
end
end
end

% Normalize overlap by the number of realizations


overlaps = overlaps./montecarlo;
% Normalize number of retrievals by the number of realizations and patterns
retrieval = retrieval./montecarlo./M;

% Plot the results


figure;
subplot(1,2,1);
for c=1:10:len_pflip % Rows 1,11,...,51 correspond to 0,10,...,50 dead neurons (p = 0,...,0.5)
plot(pflip,overlaps(c,:)); grid on; hold on;
end
hold off;
title('Overlap between retrieved and intended states for different neural death probabilities');
xlabel('Flip probability in the initial state'); ylabel('Average overlap of the patterns');
legend('p=0','p=0.1','p=0.2','p=0.3','p=0.4','p=0.5');

subplot(1,2,2);
plot(linspace(0,1,N_max+1),retrieval(:,11)); grid on; % Column 11 corresponds to pflip = 0.1
title('Fraction of well-recalled memories');
xlabel('Neural Death Probability p'); ylabel('Average ratio of recalled memories');
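
%% SKETCH (assumption): effective storage load under neuronal death
% Killing d neurons at fixed M raises the effective load of the
% surviving sub-network, alpha_eff = M/(N - d). Under the classical
% capacity estimate alpha_c ~ 0.138, retrieval would be expected to
% break down once alpha_eff crosses alpha_c, i.e. around
% d ~ N - M/0.138 (roughly 28 dead neurons for N = 100, M = 10).
% This is a back-of-the-envelope comparison, not a result of the study.
d = 0:N_max-1; % Avoid d = N_max, where the load diverges
alpha_eff = M./(N_max - d);
figure;
plot(d/N_max,alpha_eff); grid on; yline(0.138,'--');
title('Effective load under neuronal death (sketch)');
xlabel('Neural Death Probability p'); ylabel('\alpha_{eff} = M/(N-d)');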
