In recent years, self-organized critical neuronal models have provided insights into the origin of the experimentally observed avalanching behavior of neuronal systems. It has been shown that dynamical synapses, as a form of short-term plasticity, can cause critical neuronal dynamics. Long-term plasticity, on the other hand, such as Hebbian or activity-dependent plasticity, plays a crucial role in shaping the network structure and endowing neural systems with learning abilities. In this work we provide a model which combines both plasticity mechanisms, acting on two different time scales. The measured avalanche statistics are compatible with experimental results for both the avalanche size and duration distributions at biologically observed percentages of inhibitory neurons. The time series of neuronal activity exhibits temporal bursts leading to 1/f decay in the power spectrum. The presence of long-term plasticity gives the system the ability to learn binary rules such as XOR, providing the foundation for future research on more complicated tasks such as pattern recognition.
DOI: 10.1103/PhysRevE.97.032312
CRITICAL NEURAL NETWORKS WITH SHORT- AND … PHYSICAL REVIEW E 97, 032312 (2018)
FIG. 2. Avalanche size distribution on a scale-free network. The distributions are obtained with pin = 0.2 and tr = 1. The value of W for each system size changes as shown in Fig. 4. The inset shows a data collapse of the rescaled distributions.

FIG. 3. Avalanche duration distribution on a scale-free network. The distributions shown are obtained with pin = 0.2 and tr = 1. The value of W for each curve changes with the system size (see Fig. 4). The inset shows a data collapse of the rescaled distributions.
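The data collapse in the insets of Figs. 2 and 3 amounts to rescaling each distribution by its system-size-dependent cutoff. A minimal sketch of this rescaling, using the size exponent 1.5 and the cutoff ∝ N reported in the text (the synthetic distribution below is purely illustrative, not the model's output):

```python
import numpy as np

def rescale_for_collapse(sizes, probs, N, tau=1.5):
    """Rescale an avalanche size distribution P(s) ~ s^(-tau) g(s/N) so that
    curves for different system sizes N fall onto a single function g:
    x = s / N and y = s^tau * P(s)."""
    x = sizes / N                 # the cutoff scales proportionally to N
    y = sizes**tau * probs        # compensate the power-law decay
    return x, y

# illustrative synthetic distribution: exponent 1.5, exponential cutoff at s ~ N
N = 1000
s = np.arange(1.0, 5 * N)
p = s**-1.5 * np.exp(-s / N)
p /= p.sum()                      # normalize to a probability distribution
x, y = rescale_for_collapse(s, p, N)
```

Plotting y against x for several values of N would superimpose the curves, as in the insets.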
Plastic adaptation governs how the structure of the network evolves and is a form of long-term plasticity. It follows the principles of Hebbian plasticity, which strengthens the synapses between neurons whose activity is correlated and weakens inactive synapses [18]. Due to short-term plasticity the synaptic weight wij is a constantly changing quantity and is therefore unsuitable to describe long-term plasticity. Instead, long-term plasticity is implemented by modifying Wij. More precisely, each time a synapse causes a firing event in a postsynaptic neuron, Wij is increased proportionally to the potential variation in the postsynaptic neuron j,

Wij(t + 1) = Wij(t) + δWij(t),   (4)

δWij(t) = α[vj(t + 1) − vj(t)],   (5)

where α is a parameter which determines the strength of plastic adaptation. It represents all possible biomolecular mechanisms which might influence the long-term adaptation of synapses. After an avalanche comes to an end, all Wij are reduced by the average increase in synaptic strength,

W̄ = (1/Ns) Σ δWij,   (6)

where Ns is the number of synapses.

III. RESULTS

Figure 2 shows the distribution of avalanche sizes with 20% inhibitory neurons, pin = 0.2, and a refractory time of one time step, tr = 1. The distribution follows a power-law decay with an exponent of 1.5 and an exponential cutoff. When the system size is increased, the cutoff moves towards larger avalanches. The inset in Fig. 2 shows the data collapse after rescaling the distributions, confirming that the cutoff scales ∝ N, as suggested by the scale-invariant behavior of the system. The corresponding duration distributions are shown in Fig. 3 and follow a power-law decay with exponent 2.0. The average long-term synaptic strength W is the control parameter which determines whether the network exhibits sub-, super-, or critical behavior (see the inset in Fig. 4). We find that as the system size increases, W needs to decrease as Wc ∝ N−1/2 to maintain a scaling cutoff. The fact that the critical value of the control parameter changes with the system size has also been observed in Ref. [23]. Interestingly, as the percentage of inhibitory neurons increases, an increase in W
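The long-term update of Eqs. (4)–(6) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the weight matrix, the potential variations dv, and the set of synapses that caused firing events are placeholder inputs.

```python
import numpy as np

def long_term_update(W, causal_synapses, dv, alpha=0.03):
    """Eqs. (4)-(5): every synapse (i, j) that caused a firing event in its
    postsynaptic neuron j is strengthened proportionally to the potential
    variation dv[j] = v_j(t+1) - v_j(t)."""
    dW = np.zeros_like(W)
    for i, j in causal_synapses:
        dW[i, j] = alpha * dv[j]
    return W + dW, dW

def end_of_avalanche(W, dW, n_synapses):
    """Eq. (6): once the avalanche ends, reduce all existing synapses by the
    average increase in synaptic strength accumulated during the avalanche."""
    mean_increase = dW.sum() / n_synapses
    return np.where(W > 0, W - mean_increase, W)
```

When every synapse is present and positive, the reduction in Eq. (6) exactly compensates the Hebbian increase, so the total long-term strength is conserved across an avalanche.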
L. MICHIELS VAN KESSENICH et al. PHYSICAL REVIEW E 97, 032312 (2018)
smaller in size due to the dissipative role of inhibitory neurons, and the system moves away from the critical point Wc. These results suggest that avalanches with a size comparable to the system size affect the temporal correlations measured by the power spectrum. This is confirmed by the spectrum of the series a2(t) in Fig. 5, in which case the avalanches all have the same amplitude over time and the temporal correlations are revealed, yielding a 1/f power spectrum.

V. LEARNING

So far we have shown that the introduction of both long- and short-term plasticity is compatible with experimental data regarding avalanche statistics, branching ratio, and power spectra. However, the main role of long-term plasticity is its ability to sculpt the network over time to allow the system to learn. Here we study the system's ability to learn binary rules. The network can learn any binary rule, but we focus here on the XOR rule as it is not linearly separable, which makes the task more complex [60]. The learning mechanism is inspired by previous work [61,62] and is based on the release of chemical signals, for example dopamine, which are known to mediate plasticity [63–65]. The learning mechanism operates as follows. Two input neurons are defined at one side of the network which will provide the input to the system. On the other side of the network an output neuron is defined. All input and output neurons are assigned the maximal degree k = 100 to ensure that they are well connected to the network (for the output neuron it is the incoming degree kin = 100). The two input neurons are triggered according to the possible binary combinations, i.e., one of the input neurons fires while the other remains inactive, {0,1} and {1,0}, or both neurons fire at the same time, {1,1}. The binary input where both neurons are inactive, {0,0}, is omitted as it does not cause any activity. The response of the system is monitored at the output neuron. If the output neuron fires during an avalanche the response is 1, otherwise it is 0. The output neuron should therefore fire only if one of the input neurons is activated but not if both inputs fire simultaneously. To amplify the input signal, additional input neurons can be defined which fire together. For example, if four input neurons are defined, the input {1,0} triggers four neurons and {0,1} triggers four other neurons. This still implements the XOR rule but amplifies the neuronal signal. As we will show, amplifying the input signal speeds up the convergence of the learning task.

The learning scheme follows the principles of supervised learning and negative feedback [62]. The possible binary inputs are presented to the network in a random order, and each presentation triggers an avalanche. The activity propagates through the network and either activates the output neuron (1) or not (0). If the result is incorrect, a feedback signal is released from the output neuron which modifies all synapses which participated in the previous avalanche, according to

Wij → Wij + αE f(djo),   (8)

where E ∈ {−1,0,1} is the committed error. E = −1 signifies that the output neuron fired but should not have, and E = 1 means that the output neuron remained inactive but its activation was desired. Therefore, depending on the error committed, synapses are either enhanced or weakened. E = 0 corresponds to the correct output and consequently does not lead to any modification of synaptic strengths. α is the global learning strength and has the same meaning as in Eq. (5). Here α is set to α = 0.3, and modifying α does not qualitatively change the results presented here. f(djo) is a function of the distance djo between the synapse and the output neuron o. We consider the position of a synapse ij to be the same as that of the postsynaptic neuron j to which it connects. For the function f(djo) we use an exponential decay, f(djo) = exp(−djo/d0). The relevance of the spatial extent of this learning mechanism has been studied in detail in Ref. [65]. How the learning rule modifies a synapse ij therefore depends on the distance djo from the source of the chemical signal, the plasticity rate α, and the error E. We define the learning performance as P = Icorrect/Itotal, where Itotal is the total number of input signals presented to the network and Icorrect is the number of times the system gave the correct response. Figure 7 shows the performance of the system versus the number of binary patterns presented to the network. Results were obtained on a network of 3000 neurons with pin = 0.2. The different curves correspond to different numbers of input neurons, and the results show that the network is able to learn XOR with faster convergence when the input is amplified.

FIG. 7. Learning performance vs the number of presented binary patterns of the XOR rule. The system consists of 3000 neurons with pin = 0.2 and tr = 1. W is initialized to W = 2 × 10−3. The different curves correspond to different numbers of input neurons.
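The feedback rule of Eq. (8) can be sketched as follows. This is a minimal illustration, not the authors' code: the set of participating synapses and the distances to the output neuron are placeholder inputs, and the avalanche propagation itself is assumed to happen elsewhere.

```python
import math

def feedback_update(W, participating, dist_to_output, E, alpha=0.3, d0=1.0):
    """Eq. (8): W_ij -> W_ij + alpha * E * f(d_jo), applied to every synapse
    (i, j) that participated in the previous avalanche.
    E = -1: the output neuron fired but should not have (synapses weakened).
    E = +1: it stayed silent although its activation was desired (strengthened).
    E =  0: correct response, no modification.
    f(d_jo) = exp(-d_jo / d0) decays with the distance between the
    postsynaptic neuron j and the output neuron o."""
    if E == 0:
        return W
    for i, j in participating:
        W[(i, j)] += alpha * E * math.exp(-dist_to_output[j] / d0)
    return W
```

Synapses close to the output neuron receive the strongest correction, reproducing the distance-dependent action of the chemical feedback signal.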
VI. DISCUSSION

The presented model includes several neurobiological ingredients, such as both short- and long-term plasticity, as well as the refractory time and inhibitory neurons. The introduction of short-term plasticity implies that the synaptic weights wij are a constantly changing quantity. Therefore, long-term plasticity cannot operate on the same synaptic weights, as it would be dominated by the short-term fluctuations of wij. We therefore propose a long-term component of the synaptic strength, Wij, for each synapse. Stronger synapses have a larger Wij, which in turn will lead to a higher average short-term synaptic strength wij. Since critical dynamics relies on energy conservation, an important question is how such an energy balance is maintained in neural systems. Synaptic interactions are of chemical nature and the individual firing events are nonconserving. This is especially true for inhibitory connections. Here we make use of dynamical synapses and show that increasing the amount of inhibition can be counteracted by increasing the long-term synaptic strengths. Furthermore, in the thermodynamic limit the tuning parameter tends to zero, Wc ∝ N−1/2, independently of the percentage of inhibitory neurons. It has to be noted that the presence of a tuning parameter implies that the presented dynamics is not self-organizing. However, it is the tuning parameter which leads to scaling in the avalanche statistics even with large fractions of inhibitory neurons. The measured exponents are 1.5 for the avalanche size and 2.0 for the avalanche duration distribution, consistent with experimental studies [1,2].

The spectral analysis of the neuronal activity yields a power spectrum decaying as 1/f^2 for the sequence a1(t), which has also been observed in other critical systems [59]. Conversely, the analysis of the sequence a2(t) results in a power spectrum with a 1/f decay. Recent results from a fully self-organized critical model [51] show that an increase of inhibition leads to the 1/f power spectrum also in the sequence a1(t). We confirm these results: indeed, increasing the fraction of inhibitory neurons pin while keeping W constant leads to a decrease in the exponent of the power spectrum. As pin increases, larger avalanches become less probable, leading to a 1/f power spectrum for pin = 0.3, in accordance with several experimental studies [52–57]. Finally, we present results regarding the learning capabilities of the model. With distance-dependent feedback signals the system is able to learn binary rules. We show the example of the XOR rule, which is known to be difficult to learn as it is not linearly separable. The ability to learn XOR encourages the investigation of more complex tasks such as pattern recognition. This provides a foundation for future research on the effects of the combination of short- and long-term plasticity on the learning capabilities of neuronal systems.

ACKNOWLEDGMENT

The authors would like to thank D. Berger for helpful discussions.
[1] D. Plenz and H. G. Schuster, Criticality in Neural Systems (Wiley, Weinheim, 2014).
[2] J. M. Beggs and D. Plenz, J. Neurosci. 23, 11167 (2003).
[3] J. M. Beggs and D. Plenz, J. Neurosci. 24, 5216 (2004).
[4] D. Plenz and T. C. Thiagarajan, Trends Neurosci. 30, 101 (2007).
[5] A. Mazzoni, F. D. Broccard, E. Garcia-Perez, P. Bonifazi, M. E. Ruaro, and V. Torre, PLoS One 2, e439 (2007).
[6] T. Petermann, T. C. Thiagarajan, M. A. Lebedev, M. A. Nicolelis, D. R. Chialvo, and D. Plenz, Proc. Natl. Acad. Sci. USA 106, 15921 (2009).
[7] E. D. Gireesh and D. Plenz, Proc. Natl. Acad. Sci. USA 105, 7576 (2008).
[8] P. Massobrio, L. de Arcangelis, V. Pasquale, H. J. Jensen, and D. Plenz, Front. Syst. Neurosci. 9, 22 (2015).
[9] S. Zapperi, K. B. Lauritsen, and H. E. Stanley, Phys. Rev. Lett. 75, 4071 (1995).
[10] L. de Arcangelis, C. Perrone-Capano, and H. J. Herrmann, Phys. Rev. Lett. 96, 028107 (2006).
[11] L. de Arcangelis, F. Lombardi, and H. Herrmann, J. Stat. Mech.: Theor. Exp. (2014) P03026.
[12] L. Cocchi, L. L. Gollo, A. Zalesky, and M. Breakspear, Prog. Neurobiol. 158, 132 (2017).
[13] L. de Arcangelis, Eur. Phys. J.: Spec. Top. 205, 243 (2012).
[14] E. Miranda and H. Herrmann, Physica A 175, 339 (1991).
[15] G. Pruessner, Self-organised Criticality: Theory, Models and Characterisation (Cambridge University Press, Cambridge, England, 2012).
[16] V. Pasquale, P. Massobrio, L. Bologna, M. Chiappalone, and S. Martinoia, Neuroscience 153, 1354 (2008).
[17] O. Shriki, J. Alstott, F. Carver, T. Holroyd, R. N. Henson, M. L. Smith, R. Coppola, E. Bullmore, and D. Plenz, J. Neurosci. 33, 7079 (2013).
[18] D. O. Hebb, The Organization of Behavior: A Neuropsychological Approach (John Wiley & Sons, New York, 1949).
[19] S. J. Martin, P. D. Grimwood, and R. G. Morris, Annu. Rev. Neurosci. 23, 649 (2000).
[20] N. Caporale and Y. Dan, Annu. Rev. Neurosci. 31, 25 (2008).
[21] N. S. Simons-Weidenmaier, M. Weber, C. F. Plappert, P. K. Pilz, and S. Schmid, BMC Neurosci. 7, 38 (2006).
[22] K. Ikeda and J. M. Bekkers, Proc. Natl. Acad. Sci. USA 106, 2945 (2009).
[23] A. Levina, J. M. Herrmann, and T. Geisel, Nat. Phys. 3, 857 (2007).
[24] A. Levina, J. M. Herrmann, and T. Geisel, Phys. Rev. Lett. 102, 118110 (2009).
[25] A. de Andrade Costa, M. Copelli, and O. Kinouchi, J. Stat. Mech.: Theor. Exp. (2015) P06004.
[26] M. Watanabe, K. Maemura, K. Kanbara, T. Tamayama, and H. Hayasaki, Int. Rev. Cytol. 213, 1 (2002).
[27] J. G. Nicholls, A. R. Martin, B. G. Wallace, and P. A. Fuchs, From Neuron to Brain (Sinauer Associates, Sunderland, MA, 2001), Vol. 271.
[28] J. A. Bonachela and M. A. Muñoz, J. Stat. Mech.: Theor. Exp. (2009) P09009.
[29] P. Bak, C. Tang, and K. Wiesenfeld, Phys. Rev. Lett. 59, 381 (1987).
[30] O. Malcai, Y. Shilo, and O. Biham, Phys. Rev. E 73, 056125 (2006).
[31] Z. Olami, H. J. S. Feder, and K. Christensen, Phys. Rev. Lett. 68, 1244 (1992).
[32] B. Drossel and F. Schwabl, Phys. Rev. Lett. 69, 1629 (1992).
[33] J. A. Bonachela, S. De Franciscis, J. J. Torres, and M. A. Munoz, J. Stat. Mech.: Theor. Exp. (2010) P02015.
[34] P. S. Kaeser and W. G. Regehr, Curr. Opin. Neurobiol. 43, 63 (2017).
[35] C. Rosenmund and C. F. Stevens, Neuron 16, 1197 (1996).
[36] S. O. Rizzoli and W. J. Betz, Nat. Rev. Neurosci. 6, 57 (2005).
[37] H. Markram and M. Tsodyks, Nature (London) 382, 807 (1996).
[38] T. Abrahamsson, B. Gustafsson, and E. Hanse, J. Physiol. 569, 737 (2005).
[39] M. Armbruster and T. A. Ryan, Nat. Neurosci. 14, 824 (2011).
[40] F. Lombardi, H. J. Herrmann, C. Perrone-Capano, D. Plenz, and L. de Arcangelis, Phys. Rev. Lett. 108, 228703 (2012).
[41] M. V. Tsodyks and H. Markram, Proc. Natl. Acad. Sci. USA 94, 719 (1997).
[42] M. Tsodyks, K. Pawelzik, and H. Markram, Neural Comput. 10, 821 (1998).
[43] S. Pajevic and D. Plenz, PLoS Comput. Biol. 5, e1000271 (2009).
[44] P. Massobrio, V. Pasquale, and S. Martinoia, Sci. Rep. 5, 10578 (2015).
[45] O. Shefi, I. Golding, R. Segev, E. Ben-Jacob, and A. Ayali, Phys. Rev. E 66, 021905 (2002).
[46] V. M. Eguiluz, D. R. Chialvo, G. A. Cecchi, M. Baliki, and A. V. Apkarian, Phys. Rev. Lett. 94, 018102 (2005).
[47] L. Michiels van Kessenich, L. de Arcangelis, and H. Herrmann, Sci. Rep. 6, 32071 (2016).
[48] J. Kozloski and G. A. Cecchi, Front. Neural Circuits 4, 7 (2010).
[49] S. Sahara, Y. Yanagawa, D. D. O'Leary, and C. F. Stevens, J. Neurosci. 32, 4755 (2012).
[50] M. E. Newman, Contemp. Phys. 46, 323 (2005).
[51] F. Lombardi, H. J. Herrmann, and L. de Arcangelis, Chaos 27, 047402 (2017).
[52] E. Novikov, A. Novikov, D. Shannahoff-Khalsa, B. Schwartz, and J. Wright, Phys. Rev. E 56, R2387(R) (1997).
[53] C. Bédard, H. Kröger, and A. Destexhe, Phys. Rev. Lett. 97, 118102 (2006).
[54] N. Dehghani, C. Bédard, S. S. Cash, E. Halgren, and A. Destexhe, J. Comput. Neurosci. 29, 405 (2010).
[55] W. S. Pritchard, Int. J. Neurosci. 66, 119 (1992).
[56] E. Zarahn, G. K. Aguirre, and M. D'Esposito, Neuroimage 5, 179 (1997).
[57] K. Linkenkaer-Hansen, V. V. Nikouline, J. M. Palva, and R. J. Ilmoniemi, J. Neurosci. 21, 1370 (2001).
[58] B. J. He, J. M. Zempel, A. Z. Snyder, and M. E. Raichle, Neuron 66, 353 (2010).
[59] H. J. Jensen, K. Christensen, and H. C. Fogedby, Phys. Rev. B 40, 7425 (1989).
[60] D. Kriesel, A Brief Introduction to Neural Networks (2007), http://www.dkriesel.com.
[61] L. de Arcangelis and H. J. Herrmann, Proc. Natl. Acad. Sci. USA 107, 3977 (2010).
[62] P. Bak and D. R. Chialvo, Phys. Rev. E 63, 031912 (2001).
[63] F. Liu, Q. Wan, Z. B. Pristupa, X.-M. Yu, Y. T. Wang, and H. B. Niznik, Nature (London) 403, 274 (2000).
[64] J. N. Reynolds and J. R. Wickens, Neural Networks 15, 507 (2002).
[65] D. L. Berger, L. de Arcangelis, and H. J. Herrmann, Sci. Rep. 7, 11016 (2017).