
PHYSICAL REVIEW E 97, 032312 (2018)

Critical neural networks with short- and long-term plasticity

L. Michiels van Kessenich,1,* M. Luković,1 L. de Arcangelis,2 and H. J. Herrmann1,3


1 ETH Zürich, Computational Physics for Engineering Materials, Institute for Building Materials, Wolfgang-Pauli-Strasse 27, HIT, CH-8093 Zürich, Switzerland
2 Department of Engineering, University of Campania Luigi Vanvitelli, 81031 Aversa (CE), Italy and INFN sez. Naples, Gr. Coll., Salerno, Italy
3 Departamento de Fisica, Universidade Federal do Ceará, 60451-970 Fortaleza, Ceará, Brasil

(Received 13 December 2017; published 23 March 2018)

In recent years self-organized critical neuronal models have provided insights regarding the origin of the experimentally observed avalanching behavior of neuronal systems. It has been shown that dynamical synapses, as a form of short-term plasticity, can cause critical neuronal dynamics, whereas long-term plasticity, such as Hebbian or activity-dependent plasticity, has a crucial role in shaping the network structure and endowing neural systems with learning abilities. In this work we provide a model which combines both plasticity mechanisms, acting on two different time scales. The measured avalanche statistics are compatible with experimental results for both the avalanche size and duration distributions, with biologically observed percentages of inhibitory neurons. The time series of neuronal activity exhibits temporal bursts leading to a 1/f decay in the power spectrum. The presence of long-term plasticity gives the system the ability to learn binary rules such as XOR, providing the foundation for future research on more complicated tasks such as pattern recognition.

DOI: 10.1103/PhysRevE.97.032312

I. INTRODUCTION

Scale-invariant neuronal dynamics, in the form of avalanches, have been reported by a multitude of experiments measuring spontaneous neuronal activity in vitro and in vivo [1–8]. These observations suggest that neuronal systems operate near a critical point. The measured size and duration distributions follow a power-law behavior with exponents 1.5 and 2.0 respectively, which characterize the mean-field self-organized branching process [9]. The experimentally observed scale-invariant dynamics was first reproduced numerically by neuronal models closely related to self-organized criticality (SOC) [10–14]. SOC provides a mechanism by which the critical point is the attractor of the system's dynamics [15]. It therefore presents an elegant explanation as to why these scaling exponents are observed consistently in both in vitro and in vivo experiments, ranging from dissociated neurons [16] to MEG measurements on human patients [17]. A neuronal system is subject to plastic adaptation which follows the principles of Hebbian plasticity [18] or activity-dependent plasticity, i.e., synapses between neurons whose activity is correlated are strengthened and inactive synapses are weakened. This is a form of long-term plasticity which is believed to be the basis of learning mechanisms [19,20]. By this activity-dependent adaptation the synaptic structure of the network is modified in an attempt to solve problems or classification tasks. Conversely, short-term plasticity originates from mechanisms which operate on shorter time scales. One of these mechanisms is synaptic depression or fatigue [21]. Repeated activations of neurons lead to the depletion of the vesicles which contain the neurotransmitter available at the synapses [22]. The inclusion of synaptic fatigue in critical neuronal dynamics was proposed by Levina et al. [23–25].

Here we present a model which includes both short- and long-term plasticity. In addition to plastic adaptation, other neurobiological mechanisms influence the dynamics. These include inhibitory neurons and the neuronal refractory time. An action potential originating from an inhibitory neuron causes the release of inhibitory neurotransmitters, such as gamma-aminobutyric acid (GABA) [26]. This hyperpolarizes the postsynaptic neurons and reduces their firing probability. Regarding the neuronal dynamics, inhibitory neurons can be seen as sinks for the activity, since the activation of an inhibitory neuron leads to the reduction of activity in connected neurons. Moreover, the firing rate of real neurons is limited by the refractory period, i.e., the brief period after the generation of an action potential during which a second action potential cannot be triggered [27].

The scale-invariant behavior of SOC models relies heavily on conservative dynamics [28]. More precisely, one distinguishes between bulk and boundary dissipation [15]. In traditional SOC models, such as the Bak-Tang-Wiesenfeld model [29], the bulk dynamics is energy conserving whereas the boundary dissipation vanishes in the thermodynamic limit N → ∞ [28,30]. If a system has nonconservative interactions between the individual units it suffers from bulk dissipation. This implies that the dissipation does not vanish in the thermodynamic limit. It is well known that conservation plays a crucial role in SOC models, yet it is worth mentioning that several models with nonconservative dynamics do show apparent scale-invariant behavior [15]. These systems, such as the OFC earthquake model [31] or the Drossel-Schwabl forest-fire model [32], require a loading mechanism to compensate the dissipative elements. Ultimately, these models rely on the careful tuning of the recharging rate to observe scale-invariant

*laurensm@ethz.ch

2470-0045/2018/97(3)/032312(7) 032312-1 ©2018 American Physical Society



dynamics. Bonachela et al. [28,33] provided a detailed study of scale-invariant dynamics observed in nonconservative SOC models. The model presented here features nonconserving neuronal dynamics and also relies on a tuning parameter. In this way scale-invariant behavior is obtained even in the presence of dissipation.

The article starts with a presentation of the model and the description of how plastic adaptation operating on two different time scales is implemented. Then we present results regarding the avalanche statistics and power spectrum and compare with experimental findings. In the final section the learning capacities of the model are examined by training the system to learn the XOR rule.

II. NEURONAL MODEL


Consider a directed neuronal network consisting of N neurons connected by synapses. Each neuron is characterized by a membrane potential vi. To model both long- and short-term plasticity, each synapse is represented by the two variables Wij and wij. The synaptic strength on short time scales is represented by wij, whereas Wij will determine wij on longer time scales. The initial values for the potentials vi and the short-term synaptic strengths wij are not important since the system will evolve towards a steady state, independent of the initial condition. The Wij are chosen homogeneously in the interval [0, Wmax], where Wmax is a parameter of the model. Other distributions can be chosen for Wij but, as we will show, the relevant parameter is the average long-term synaptic strength ⟨W⟩. If the potential in a neuron surpasses a threshold, vi ≥ vc, it causes the generation of an action potential which travels along the axon towards the synapses. The release of neurotransmitter then causes a change in the potential of the connected postsynaptic neurons j according to

vj(t + 1) = vj(t) ± vi(t) u wij(t),   (1)

wij(t + 1) = wij(t)(1 − u),   (2)

vi → 0,   (3)

where the + and − stand for excitatory and inhibitory signals, respectively. We set a certain percentage pin of the neurons to be inhibitory, which are chosen randomly. After a neuron fires it enters a refractory state for tr time steps. During this period it does not respond to stimulations from other neurons and will not produce further action potentials. wij represents the available neurotransmitter at the synapse ij and u is the fraction of neurotransmitter released when the synapse activates. The vesicles at a synapse which are ready for immediate release upon an incoming action potential are called the readily releasable pool (RRP) [34,35]. It has been shown that the RRP at a synapse is of the order of 5% of the total available neurotransmitter [22,36]. We therefore set u = 0.05. When the potentials of all neurons are below vc, activity is triggered by adding a small stimulation δv = 0.1 to random neurons until one of them reaches the threshold. If the action potential causes a postsynaptic neuron to surpass vc, neuronal activity propagates until all neurons have potentials vi < vc. The size s of an avalanche is given by the total number of active neurons and its duration is the number of time steps the avalanche lasts. Since during the propagation of an avalanche the available neurotransmitter decreases according to Eq. (2), a recovery mechanism is needed to sustain future activity. The recovery of synapses takes on the order of seconds [37–39] whereas avalanches occur on the order of milliseconds [2,40]. Therefore, after each avalanche the available neurotransmitter recovers as wij = wij + Wij. wij therefore fluctuates on short time scales and Wij determines the value around which wij fluctuates, so that a higher Wij will lead to a higher average wij. The effect of a synapse on the postsynaptic neuron is proportional to u wij, according to Eq. (1). We therefore call u wij the "short-term synaptic strength" and ⟨wij⟩ the "long-term synaptic strength." The short-term synaptic efficacy is therefore smaller than the long-term synaptic strength ⟨wij⟩, similarly to other models with dynamical synapses such as the Tsodyks-Markram model [41,42]. The long-term synaptic strength ⟨wij⟩ is determined by Wij, and a key parameter is then the average ⟨W⟩ = Σij Wij / Ns, where Ns is the number of synapses. A higher ⟨W⟩ will lead to a higher ⟨wij⟩ and the system will exhibit more neuronal activity, and vice versa. The value of ⟨W⟩ is set by choosing the initial values Wij homogeneously in the interval Wij ∈ [0, Wmax], leading to ⟨W⟩ = Wmax/2. Therefore, in order to modify ⟨W⟩ it is sufficient to change Wmax. The model can be implemented on any network topology. Since it is difficult to directly measure the morphological synaptic connectivity in cortical systems, numerical studies have often implemented the structure of functional networks, as measured experimentally [43–45]. Therefore, the presented results are obtained on a three-dimensional scale-free network where the degree distribution follows a power-law decay, P(k) ∝ k^(−α) with α = 2 [46]. We assign to each neuron an outgoing degree kout ∈ [2,100], which is then connected to other neurons with a distance-dependent probability P(r) ∝ e^(−r/r0). A configuration with an example avalanche is shown in Fig. 1.

FIG. 1. An avalanche on a scale-free network. Neurons and synapses which participated in the avalanche are shown in red. Inactive neurons and synapses are blue.
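The avalanche dynamics of Eqs. (1)–(3) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the network here is an arbitrary directed graph given as adjacency lists, rather than the three-dimensional scale-free network used in the paper, and all helper names are chosen for this sketch.

```python
def run_avalanche(out_nbrs, v, w, inhibitory, vc=1.0, u=0.05, tr=1):
    """One avalanche following Eqs. (1)-(3): every neuron with v_i >= v_c fires,
    changes its postsynaptic potentials by +/- v_i * u * w_ij, depletes a
    fraction u of its neurotransmitter, and resets to zero.
    Returns (avalanche size, avalanche duration)."""
    refr = {i: 0 for i in v}                  # remaining refractory steps per neuron
    size, duration = 0, 0
    active = [i for i in v if v[i] >= vc]
    while active:
        duration += 1
        for i in active:
            size += 1
            vi = v[i]
            v[i] = 0.0                        # Eq. (3): potential reset after firing
            refr[i] = tr + 1                  # neuron enters the refractory state
            sign = -1.0 if inhibitory[i] else 1.0
            for j in out_nbrs[i]:
                if refr[j] == 0:              # refractory neurons ignore input
                    v[j] += sign * vi * u * w[(i, j)]   # Eq. (1)
                w[(i, j)] *= 1.0 - u                    # Eq. (2): vesicle depletion
        for i in refr:                        # advance the refractory clocks
            refr[i] = max(0, refr[i] - 1)
        active = [i for i in v if v[i] >= vc and refr[i] == 0]
    return size, duration

# toy chain 0 -> 1 -> 2, all excitatory, neurons 1 and 2 close to threshold
size, duration = run_avalanche({0: [1], 1: [2], 2: []},
                               {0: 1.0, 1: 0.96, 2: 0.96},
                               {(0, 1): 1.0, (1, 2): 1.0},
                               {0: False, 1: False, 2: False})
```

After each avalanche the recovery step wij = wij + Wij described in the text would be applied before the next stimulation.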


FIG. 2. Avalanche size distribution on a scale-free network. The distributions are obtained with pin = 0.2 and tr = 1. The value of ⟨W⟩ for each system size changes as shown in Fig. 4. The inset shows a data collapse of the rescaled distributions.

FIG. 3. Avalanche duration distribution on a scale-free network. The distributions shown are obtained with pin = 0.2 and tr = 1. The value of ⟨W⟩ for each curve changes with the system size (see Fig. 4). The inset shows a data collapse of the rescaled distributions.

Plastic adaptation governs how the structure of the network evolves and is a form of long-term plasticity. It follows the principles of Hebbian plasticity, which strengthens the synapses between neurons whose activity is correlated and weakens inactive synapses [18]. Due to short-term plasticity the synaptic weight wij is a constantly changing quantity and is unsuitable to describe long-term plasticity. Therefore, in order to implement long-term plasticity Wij is modified. More precisely, each time a synapse causes a firing event in a postsynaptic neuron, Wij is increased proportionally to the potential variation in the postsynaptic neuron j,

Wij(t + 1) = Wij(t) + δWij(t),   (4)

δWij(t) = α[vj(t + 1) − vj(t)],   (5)

where α is a parameter which determines the strength of plastic adaptation. It represents all possible biomolecular mechanisms which might influence the long-term adaptation of synapses. After an avalanche comes to an end, all Wij are reduced by the average increase in synaptic strength,

ΔW = (1/Ns) Σij δWij,   (6)

where Ns is the total number of connections in the system. If a synapse is not used repeatedly it will progressively weaken, and if its strength decreases below a minimal value, Wij < 10^(−4), the synapse is pruned. The weakening of synapses is a form of long-term depression. The combination of this form of activity-dependent plasticity with the refractory time has been shown to be important in determining the structure of a neuronal network since it leads to the removal of loops [47,48]. For the following simulations, plastic adaptation sculpts the long-term synaptic strengths of the initial scale-free network until the first synapse is pruned, so as not to alter the scale-free degree distribution. After reaching this state long-term plasticity is suspended and new stimulations are then applied to measure the avalanche statistics.

FIG. 4. ⟨W⟩c scales as ⟨W⟩c ∝ N^(−1/2) in order to have a cutoff scaling with the system size (see Fig. 2). This is observed for different values of pin. The inset shows how changing the control parameter can lead to sub-, super-, or critical behavior for a network of 32 000 neurons and pin = 0.2 with ⟨W⟩ = {10^(−4), 4 × 10^(−4), 10^(−3)}.

III. RESULTS

Figure 2 shows the distribution of avalanche sizes with 20% inhibitory neurons, pin = 0.2, and a refractory time of one time step, tr = 1. The distribution follows a power-law decay with an exponent of 1.5 and an exponential cutoff. When the system size is increased the cutoff moves towards larger avalanches. The inset in Fig. 2 shows the data collapse after rescaling the distributions, confirming that the cutoff scales ∝ N as suggested by the scale-invariant behavior of the system. The corresponding duration distributions are shown in Fig. 3 and follow a power-law decay with exponent 2.0. The average long-term synaptic strength ⟨W⟩ is the control parameter which determines whether the network exhibits sub-, super-, or critical behavior (see the inset in Fig. 4). We find that as the system size increases, ⟨W⟩ needs to decrease proportionally to ⟨W⟩c ∝ N^(−1/2) to maintain a scaling cutoff. The fact that the critical value of the control parameter changes with the system size has also been observed in Ref. [23]. Interestingly, as the percentage of inhibitory neurons increases, an increase in ⟨W⟩


is sufficient to maintain criticality and the scaling with system size remains ⟨W⟩c ∝ N^(−1/2), independent of pin. The model can in principle also include higher fractions of inhibitory neurons, pin ≥ 0.5. Such large fractions of inhibition are not biologically plausible [49], however, and in those cases the largest avalanches tend to become much smaller than the system size.
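The long-term adaptation of Eqs. (4)–(6), including the pruning of weak synapses, can be sketched as follows. This is a schematic implementation under the assumption that the potential variations vj(t + 1) − vj(t) caused by each firing synapse have been collected during the avalanche; the helper names are hypothetical.

```python
def adapt_long_term(W, delta_v, alpha=0.03, w_min=1e-4):
    """Hebbian long-term adaptation after one avalanche, Eqs. (4)-(6).
    W maps synapses (i, j) to long-term strengths W_ij; delta_v maps each
    synapse that triggered a postsynaptic firing to the potential variation
    v_j(t+1) - v_j(t) it caused."""
    n_syn = len(W)                                     # Ns, number of synapses
    dW = {s: alpha * dv for s, dv in delta_v.items()}  # Eq. (5)
    for s, inc in dW.items():
        W[s] += inc                                    # Eq. (4): strengthen used synapses
    mean_inc = sum(dW.values()) / n_syn                # Eq. (6): average increase
    for s in list(W):
        W[s] -= mean_inc                               # uniform long-term depression
        if W[s] < w_min:
            del W[s]                                   # prune synapses below 10^-4
    return W
```

With only one of three synapses active, that synapse gains α·δv while all synapses lose the mean increase, so persistently inactive synapses drift toward the pruning threshold.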
The scaling behavior can be motivated by considering that as the system size increases so does the number of synapses, Ns ∝ N. Each synapse recovers over time according to ⟨W⟩, and the total amount of neurotransmitter recovered, Rtot, in the entire system is proportional to the number of synapses and therefore Rtot ∝ N⟨W⟩. On the other hand, the usage of neurotransmitter U is proportional to the average avalanche size ⟨s⟩. Larger avalanches have more firing events and, according to Eq. (2), each firing event uses neurotransmitter. If the avalanche size distribution follows a power law with exponent α, then the average avalanche size scales as ⟨s⟩ = ∫_{smin}^{N} s^(−α+1) ds ∝ N^(−α+2) [50]. With α = 1.5 one finds that the total amount of neurotransmitter used scales as U ∝ N^(1/2). The ratio Rtot/U = N^(1/2)⟨W⟩ determines the energy balance of the system. Therefore, to maintain the energy balance as N increases, ⟨W⟩ needs to scale as ⟨W⟩ ∝ N^(−1/2). If the recovery mechanism is modified, different scaling behaviors can be obtained. If, for example, only synapses which participated in an avalanche recover, then Rtot ∝ ⟨s⟩ and the ratio Rtot/U becomes independent of the system size.

It is important to note that the resulting model is not self-organizing due to the presence of the control parameter ⟨W⟩. The critical value of the control parameter tends to zero in the thermodynamic limit, N → ∞. In finite-size systems ⟨W⟩ needs to be carefully tuned.

IV. POWER SPECTRUM

The power spectral density (PSD) analysis is frequently employed in the investigation of neuronal activity to characterize both oscillations and temporal correlations [51]. Magnetoencephalographic (MEG) and electroencephalographic (EEG) measurements as well as local field potentials of ongoing cortical activity have reported PSDs that decay with a power law 1/f^β [52–57]. Measurements of human eyes-closed and eyes-open resting EEG yielded exponents of β = 1.32 and β = 1.27 [55]. The measurements of β varied over different brain regions and the standard error of estimate was in the interval [0.26,0.28]. A similar exponent, β = 1.33 ± 0.19, has been measured by Dehghani et al. [54], who also reported varying exponents across different brain regions, between β = 1 and β = 2. Novikov et al. analyzed the power spectrum of MEG measurements and reported exponents of 0.98 for one subject and 1.28 for another [52]. To summarize, many experiments report temporal correlations, and measurements of the PSD frequently report a power-law decay 1/f^β with exponents in the interval [0.8,1.5] [51]. Different exponents have been found in the neural activity of epileptic patients. In the awake state β was measured in the range [2.2,2.44] and in slow-wave sleep β was in the range [1.6,2.87] [58]. In the following we analyze the PSD obtained from the presented neuronal model. The power spectrum of a temporal sequence is given by S(f) = â(f)â*(f), where â(f) is the discrete Fourier transform of a(t),

â(f) = Σ_{t=0}^{T−1} a(t) e^(−2iπf t/T).

To measure the power spectrum of the activity in the model, the sequence a1(t) is recorded, where a1(t) is the total number of firing neurons at time t. During quiescent times neurons are stimulated by adding δv = 0.1 to a random neuron. Each stimulation counts as one time step. Additionally we study another sequence defined as

a2(t) = 1 if a1(t) ≥ 1, and a2(t) = 0 if a1(t) = 0.   (7)

FIG. 5. The activity sequences a1 and a2 (a). The corresponding power spectra (b). The dashed lines show power laws with exponents β = 2 and β = 1, respectively.

An example of a1(t) and a2(t) is shown in Fig. 5. These sequences are obtained for a network of 32 000 neurons (pin = 0.2, tr = 1). The PSDs for the sequences a1(t) and a2(t) with 10^6 time steps are shown in the lower panel of Fig. 5. The power spectra differ significantly. For the sequence a1(t) the PSD decays as 1/f^2 before transitioning to white noise at low frequency. The 1/f^2 power spectrum has also been observed in other critical systems such as the Bak-Tang-Wiesenfeld model [59]. Conversely, a2(t) yields a PSD with a 1/f decay before it transitions to white noise. The crossover to white noise occurs, for both a1(t) and a2(t), approximately at the frequency corresponding to the cutoff of the avalanche duration distribution (see Fig. 3). Another study of the power spectrum [51] has shown that increasing the fraction of inhibitory neurons pin in a fully self-organized critical model leads to a PSD which decays with 1/f for the sequence a1(t). Therefore, the balance between excitation and inhibition has been considered as the origin of the 1/f power spectrum. The authors also report that the cutoff of the avalanche size distribution scales with pin, and therefore avalanches as large as the system size become less probable for large pin [51]. In the model presented in this article these results are confirmed, since an increase in inhibition (without adjusting ⟨W⟩) also leads to a 1/f power spectrum for sequence a1(t); see Fig. 6. In this case the amplitude of very large avalanches becomes


smaller in size due to the dissipative role of inhibitory neurons, and the system moves away from the critical point ⟨W⟩c. These results suggest that avalanches with a size comparable to the system size affect the temporal correlations measured by the power spectrum. This is confirmed by the spectrum of the series a2(t) in Fig. 5, in which case the avalanches all have the same amplitude over time and the temporal correlations are revealed, yielding a 1/f power spectrum.

FIG. 6. Increasing the percentage of inhibitory neurons while keeping ⟨W⟩ constant leads to a 1/f power spectrum for pin = 0.3 on a scale-free network of 16 000 neurons, tr = 1.

FIG. 7. Learning performance vs the number of presented binary patterns of the XOR rule. The system consists of 3000 neurons with pin = 0.2, tr = 1. ⟨W⟩ is initialized to ⟨W⟩ = 2 × 10^(−3). The different curves correspond to different numbers of input neurons.

V. LEARNING

So far we have shown that the introduction of both long- and short-term plasticity is compatible with experimental data regarding avalanche statistics, branching ratio, and power spectra. However, the main role of long-term plasticity is its ability to sculpt the network over time to allow the system to learn. Here we study the system's ability to learn binary rules. The network can learn any binary rule, but we focus here on the XOR rule, as it is not linearly separable, which makes the task more complex [60]. The learning mechanism is inspired by previous work [61,62] and is based on the release of chemical signals, for example dopamine, which are known to mediate plasticity [63–65]. The learning mechanism operates as follows. Two input neurons are defined at one side of the network which will provide the input to the system. On the other side of the network an output neuron is defined. All input and output neurons are assigned the maximal degree k = 100 to ensure that they are well connected to the network (for the output neuron it is the incoming degree kin = 100). The two input neurons are triggered according to the possible binary combinations, i.e., one of the input neurons fires while the other remains inactive, {0,1} and {1,0}, or both neurons fire at the same time, {1,1}. The binary input where both neurons are inactive, {0,0}, is omitted as it won't cause any activity. The response of the system is monitored at the output neuron. If the output neuron fires during an avalanche the response is 1, otherwise it is 0. The output neuron should therefore fire only if one of the input neurons is activated, but not if both inputs fire simultaneously. To amplify the input signal, additional input neurons can be defined which fire together. For example, if four input neurons are defined, the input {1,0} triggers four neurons and {0,1} triggers four other neurons. This still implements the XOR rule but amplifies the neuronal signal. As we will show, amplifying the input signal speeds up the convergence of the learning task. The learning scheme follows the principles of supervised learning and negative feedback [62]. The possible binary inputs are presented to the network in a random order and each time an avalanche is triggered. The activity propagates through the network and either activates the output neuron (1) or not (0). If the result is incorrect, a feedback signal is released from the output neuron which modifies all synapses which participated in the previous avalanche, according to

Wij → Wij + αE f(djo),   (8)

where E ∈ {−1,0,1} is the committed error. E = −1 signifies that the output neuron fired but should not have, and E = 1 means that the output neuron remained inactive but its activation was desired. Therefore, depending on the error committed, synapses are either enhanced or weakened. E = 0 corresponds to the correct output and consequently does not lead to any modification of synaptic strengths. α is the global learning strength and has the same meaning as in Eq. (5). Here α is set to α = 0.3; modifying α does not qualitatively change the results presented here. f(djo) is a function of the distance djo between the synapse and the output neuron o. We consider the position of a synapse ij to be the same as that of the postsynaptic neuron j to which it connects. For the function f(djo) we use an exponential decay, f(djo) = e^(−djo/d0). The relevance of the spatial extent of this learning mechanism has been studied in detail in Ref. [65]. How the learning rule modifies a synapse, therefore, depends on the distance djo from the source of the chemical signal, the plasticity rate α, and the error E. We define the learning performance as P = Icorrect/Itotal, where Itotal is the total number of input signals presented to the network and Icorrect is the number of times the system gave the correct response. Figure 7 shows the performance of the system versus the number of binary patterns presented to the network. Results were obtained on a network of 3000 neurons with pin = 0.2. The different curves correspond to different numbers of input neurons, and results show that the network is able to learn XOR with a faster convergence when the input is amplified.
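The feedback rule of Eq. (8) can be sketched as below. This is a minimal illustration: the set of synapses active in the last avalanche and the distances djo are assumed to have been tracked during propagation, the value of d0 is a free parameter of the sketch, and the function name is hypothetical.

```python
import math

def feedback_update(W, avalanche_synapses, dist_to_output, E, alpha=0.3, d0=5.0):
    """Distance-dependent feedback of Eq. (8): W_ij -> W_ij + alpha * E * f(d_jo),
    applied to every synapse that took part in the previous avalanche.
    E = -1: output fired but should not have; E = +1: output stayed silent
    although its activation was desired; E = 0: correct response, no change."""
    if E == 0:
        return W
    for (i, j) in avalanche_synapses:
        f = math.exp(-dist_to_output[j] / d0)  # f(d_jo) = exp(-d_jo / d0)
        W[(i, j)] += alpha * E * f             # strengthen (E=+1) or weaken (E=-1)
    return W
```

Synapses near the output neuron (small djo) receive the largest correction, so the feedback signal acts most strongly where the chemical signal would be most concentrated.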


VI. DISCUSSION

The presented model includes several neurobiological ingredients, such as both short- and long-term plasticity, as well as the refractory time and inhibitory neurons. The introduction of short-term plasticity implies that the synaptic weights wij are a constantly changing quantity. Therefore long-term plasticity cannot operate on the same synaptic weights, as it would be dominated by the short-term fluctuations of wij. We therefore propose a long-term component of the synaptic strength, Wij, for each synapse. Stronger synapses have a larger Wij, which in turn will lead to a higher average short-term synaptic strength ⟨wij⟩. Since critical dynamics relies on energy conservation, an important question is how such an energy balance is maintained in neural systems. Synaptic interactions are of a chemical nature and the individual firing events are nonconserving. This is especially true for inhibitory connections. Here we make use of dynamical synapses and show that increasing the amount of inhibition can be counteracted by increasing the long-term synaptic strengths. Furthermore, in the thermodynamic limit the tuning parameter tends to zero, ⟨W⟩c ∝ N^(−1/2), independently of the percentage of inhibitory neurons. It has to be noted that the presence of a tuning parameter implies that the presented dynamics is not self-organizing. However, it is the tuning parameter which leads to scaling in the avalanche statistics even with large fractions of inhibitory neurons. The measured exponents are 1.5 for the avalanche size and 2.0 for the avalanche duration distribution, consistent with experimental studies [1,2].

The spectral analysis of the neuronal activity yields a power spectrum decaying as 1/f^2 for the sequence a1(t), which has also been observed in other critical systems [59]. Conversely, the analysis of the sequence a2(t) results in a power spectrum with a 1/f decay. Recent results from a fully self-organized critical model [51] show that an increase of inhibition leads to the 1/f power spectrum also in the sequence a1(t). We confirm these results: indeed, increasing the fraction of inhibitory neurons pin while keeping ⟨W⟩ constant leads to a decrease in the exponent of the power spectrum. As pin increases, larger avalanches become less probable, leading to a 1/f power spectrum for pin = 0.3, in accordance with several experimental studies [52–57]. Finally, we present results regarding the learning capabilities of the model. With distance-dependent feedback signals the system is able to learn binary rules. We show the example of the XOR rule, which is known to be difficult to learn as it is not linearly separable. The ability to learn XOR encourages the investigation of more complex tasks such as pattern recognition. This provides a foundation for future research on the effects of the combination of short- and long-term plasticity on the learning capabilities of neuronal systems.

ACKNOWLEDGMENT

The authors would like to thank D. Berger for helpful discussions.

[1] D. Plenz and H. G. Schuster, Criticality in Neural Systems (Wiley, Weinheim, 2014).
[2] J. M. Beggs and D. Plenz, J. Neurosci. 23, 11167 (2003).
[3] J. M. Beggs and D. Plenz, J. Neurosci. 24, 5216 (2004).
[4] D. Plenz and T. C. Thiagarajan, Trends Neurosci. 30, 101 (2007).
[5] A. Mazzoni, F. D. Broccard, E. Garcia-Perez, P. Bonifazi, M. E. Ruaro, and V. Torre, PLoS One 2, e439 (2007).
[6] T. Petermann, T. C. Thiagarajan, M. A. Lebedev, M. A. Nicolelis, D. R. Chialvo, and D. Plenz, Proc. Natl. Acad. Sci. USA 106, 15921 (2009).
[7] E. D. Gireesh and D. Plenz, Proc. Natl. Acad. Sci. USA 105, 7576 (2008).
[8] P. Massobrio, L. de Arcangelis, V. Pasquale, H. J. Jensen, and D. Plenz, Front. Syst. Neurosci. 9, 22 (2015).
[9] S. Zapperi, K. B. Lauritsen, and H. E. Stanley, Phys. Rev. Lett. 75, 4071 (1995).
[10] L. de Arcangelis, C. Perrone-Capano, and H. J. Herrmann, Phys. Rev. Lett. 96, 028107 (2006).
[11] L. de Arcangelis, F. Lombardi, and H. Herrmann, J. Stat. Mech.: Theor. Exp. (2014) P03026.
[12] L. Cocchi, L. L. Gollo, A. Zalesky, and M. Breakspear, Prog. Neurobiol. 158, 132 (2017).
[13] L. de Arcangelis, Eur. Phys. J.: Spec. Top. 205, 243 (2012).
[14] E. Miranda and H. Herrmann, Physica A 175, 339 (1991).
[15] G. Pruessner, Self-organised Criticality: Theory, Models and Characterisation (Cambridge University Press, Cambridge, England, 2012).
[16] V. Pasquale, P. Massobrio, L. Bologna, M. Chiappalone, and S. Martinoia, Neuroscience 153, 1354 (2008).
[17] O. Shriki, J. Alstott, F. Carver, T. Holroyd, R. N. Henson, M. L. Smith, R. Coppola, E. Bullmore, and D. Plenz, J. Neurosci. 33, 7079 (2013).
[18] D. O. Hebb, The Organization of Behavior: A Neuropsychological Approach (John Wiley & Sons, New York, 1949).
[19] S. J. Martin, P. D. Grimwood, and R. G. Morris, Annu. Rev. Neurosci. 23, 649 (2000).
[20] N. Caporale and Y. Dan, Annu. Rev. Neurosci. 31, 25 (2008).
[21] N. S. Simons-Weidenmaier, M. Weber, C. F. Plappert, P. K. Pilz, and S. Schmid, BMC Neurosci. 7, 38 (2006).
[22] K. Ikeda and J. M. Bekkers, Proc. Natl. Acad. Sci. USA 106, 2945 (2009).
[23] A. Levina, J. M. Herrmann, and T. Geisel, Nat. Phys. 3, 857 (2007).
[24] A. Levina, J. M. Herrmann, and T. Geisel, Phys. Rev. Lett. 102, 118110 (2009).
[25] A. de Andrade Costa, M. Copelli, and O. Kinouchi, J. Stat. Mech.: Theor. Exp. (2015) P06004.
[26] M. Watanabe, K. Maemura, K. Kanbara, T. Tamayama, and H. Hayasaki, Int. Rev. Cytol. 213, 1 (2002).
[27] J. G. Nicholls, A. R. Martin, B. G. Wallace, and P. A. Fuchs, From Neuron to Brain (Sinauer Associates, Sunderland, MA, 2001), Vol. 271.
[28] J. A. Bonachela and M. A. Muñoz, J. Stat. Mech.: Theor. Exp. (2009) P09009.
[29] P. Bak, C. Tang, and K. Wiesenfeld, Phys. Rev. Lett. 59, 381 (1987).
[30] O. Malcai, Y. Shilo, and O. Biham, Phys. Rev. E 73, 056125 (2006).
[31] Z. Olami, H. J. S. Feder, and K. Christensen, Phys. Rev. Lett. 68, 1244 (1992).
[32] B. Drossel and F. Schwabl, Phys. Rev. Lett. 69, 1629 (1992).
[33] J. A. Bonachela, S. De Franciscis, J. J. Torres, and M. A. Munoz, J. Stat. Mech.: Theor. Exp. (2010) P02015.
[34] P. S. Kaeser and W. G. Regehr, Curr. Opin. Neurobiol. 43, 63 (2017).
[35] C. Rosenmund and C. F. Stevens, Neuron 16, 1197 (1996).
[36] S. O. Rizzoli and W. J. Betz, Nat. Rev. Neurosci. 6, 57 (2005).
[37] H. Markram and M. Tsodyks, Nature (London) 382, 807 (1996).
[38] T. Abrahamsson, B. Gustafsson, and E. Hanse, J. Physiol. 569, 737 (2005).
[39] M. Armbruster and T. A. Ryan, Nat. Neurosci. 14, 824 (2011).
[40] F. Lombardi, H. J. Herrmann, C. Perrone-Capano, D. Plenz, and L. de Arcangelis, Phys. Rev. Lett. 108, 228703 (2012).
[41] M. V. Tsodyks and H. Markram, Proc. Natl. Acad. Sci. USA 94, 719 (1997).
[42] M. Tsodyks, K. Pawelzik, and H. Markram, Neural Comput. 10, 821 (1998).
[43] S. Pajevic and D. Plenz, PLoS Comput. Biol. 5, e1000271 (2009).
[44] P. Massobrio, V. Pasquale, and S. Martinoia, Sci. Rep. 5, 10578 (2015).
[45] O. Shefi, I. Golding, R. Segev, E. Ben-Jacob, and A. Ayali, Phys. Rev. E 66, 021905 (2002).
[46] V. M. Eguiluz, D. R. Chialvo, G. A. Cecchi, M. Baliki, and A. V. Apkarian, Phys. Rev. Lett. 94, 018102 (2005).
[47] L. Michiels van Kessenich, L. de Arcangelis, and H. Herrmann, Sci. Rep. 6, 32071 (2016).
[48] J. Kozloski and G. A. Cecchi, Front. Neural Circuits 4, 7 (2010).
[49] S. Sahara, Y. Yanagawa, D. D. O'Leary, and C. F. Stevens, J. Neurosci. 32, 4755 (2012).
[50] M. E. Newman, Contemp. Phys. 46, 323 (2005).
[51] F. Lombardi, H. J. Herrmann, and L. de Arcangelis, Chaos 27, 047402 (2017).
[52] E. Novikov, A. Novikov, D. Shannahoff-Khalsa, B. Schwartz, and J. Wright, Phys. Rev. E 56, R2387(R) (1997).
[53] C. Bédard, H. Kröger, and A. Destexhe, Phys. Rev. Lett. 97, 118102 (2006).
[54] N. Dehghani, C. Bédard, S. S. Cash, E. Halgren, and A. Destexhe, J. Comput. Neurosci. 29, 405 (2010).
[55] W. S. Pritchard, Int. J. Neurosci. 66, 119 (1992).
[56] E. Zarahn, G. K. Aguirre, and M. D'Esposito, Neuroimage 5, 179 (1997).
[57] K. Linkenkaer-Hansen, V. V. Nikouline, J. M. Palva, and R. J. Ilmoniemi, J. Neurosci. 21, 1370 (2001).
[58] B. J. He, J. M. Zempel, A. Z. Snyder, and M. E. Raichle, Neuron 66, 353 (2010).
[59] H. J. Jensen, K. Christensen, and H. C. Fogedby, Phys. Rev. B 40, 7425 (1989).
[60] D. Kriesel, A Brief Introduction to Neural Networks (2007), http://www.dkriesel.com.
[61] L. de Arcangelis and H. J. Herrmann, Proc. Natl. Acad. Sci. USA 107, 3977 (2010).
[62] P. Bak and D. R. Chialvo, Phys. Rev. E 63, 031912 (2001).
[63] F. Liu, Q. Wan, Z. B. Pristupa, X.-M. Yu, Y. T. Wang, and H. B. Niznik, Nature (London) 403, 274 (2000).
[64] J. N. Reynolds and J. R. Wickens, Neural Networks 15, 507 (2002).
[65] D. L. Berger, L. de Arcangelis, and H. J. Herrmann, Sci. Rep. 7, 11016 (2017).
