
Monograph

OPTIMAL DYNAMICAL RANGE OF EXCITABLE


NETWORKS AT CRITICALITY

Student: Edward Iraita Salcedo

Professor: Diogo O. Soares Pinto

Instituto de Física de São Carlos

Universidade de São Paulo

São Carlos

October 2018
INDEX

ABSTRACT

INTRODUCTION

1. NEURONAL AVALANCHE

1.2 DEFINITION OF NEURONAL AVALANCHE

1.2.1 Power Law Distribution

1.2.2 Repeating Avalanche Patterns

1.2.3 Models of Avalanches

2. KINOUCHI-COPELLI MODEL

2.1 MODEL DYNAMICS

2.1.1 Greenberg-Hastings Cellular Automaton

2.1.2 Kinouchi-Copelli Cellular Automaton

2.1.3 Kinouchi-Copelli Model

2.1.4 Statistical Properties

2.1.4.1 Control Parameter

2.1.4.2 Order Parameter

3. RESPONSE CURVES AND DYNAMIC RANGE

3.1 Mean Field


ABSTRACT

This monograph presents a general study of the work of Osame Kinouchi and Mauro Copelli, entitled "Optimal dynamical range of excitable networks at criticality". To this end we analyze the behavior of neuronal avalanches in the KC model, whose purpose is to explain how sensory systems process information. We also discuss the dynamics of the KC model, including its statistical properties, parameters, and update rules. The monograph also draws on the work of Camilo Akimushkin Valencia, among others.
INTRODUCTION

The capacity of a sensory system to efficiently detect stimuli is usually quantified by the dynamic range, a simple measure of the range of stimulus intensities over which the network is sufficiently sensitive. Biological systems often exhibit large dynamic ranges, covering many orders of magnitude. There is no easy explanation for this, since isolated individual neurons have a very short dynamic range. Arguments based on sequential recruitment are doomed to failure, since the corresponding arrangement of the units' thresholds over many orders of magnitude is unrealistic.

The Kinouchi-Copelli model strongly suggested that the large dynamic range is a collective effect of the sensory neurons. The KC model is a network of stochastic excitable elements coupled as a random graph. The model showed that the spontaneous activity of the network signals an order-disorder nonequilibrium phase transition, and that the dynamic range exhibits an optimum precisely at the critical point.
1. NEURONAL AVALANCHE

1.2 Definition of a Neuronal Avalanche

A neuronal avalanche is a cascade of bursts of activity in a neuronal network whose size distribution can be approximated by a power law, as in critical sandpile models. Neuronal avalanches are seen in cultured and acute cortical slices (Beggs and Plenz, 2003; 2004). Activity in these slices of neocortex is characterized by brief bursts lasting tens of milliseconds, separated by periods of quiescence lasting several seconds. When observed with a multielectrode array, the number of electrodes driven over threshold during a burst is distributed approximately like a power law. [7]

Figure 1: Schematic of data representation. Local field potentials (LFPs) that exceed three standard deviations are represented by black squares.

1.2.1 Power law size distribution

Multi-channel data can be broken down into frames in which there is no activity and frames in which there is at least one active electrode, which may pick up the activity of several neurons. A sequence of consecutively active frames, bracketed by inactive frames, is called an avalanche. The example avalanche shown has a size of 9, because this is the total number of electrodes that were driven over threshold. Avalanche sizes are distributed in a manner that is nearly fit by a power law. Due to the limited number of electrodes in the array, the power law bends downward in a cutoff well before the array size of 60; with larger electrode arrays, however, the power law is seen to extend much further. [7]
Figure 2: Example of an avalanche. Seven frames are shown, where each frame
represents activity on the electrode array during one 4 ms time step. An avalanche is a
series of consecutively active frames that is preceded by and terminated by blank frames.
Avalanche size is given by the total number of active electrodes. The avalanche shown
here has a size of 9.

The equation of a power law is

P(S) = k S^(−α),

where P(S) is the probability of observing an avalanche of size S, α is the exponent that gives the slope of the power law in a log-log graph, and k is a proportionality constant. For experiments with slice cultures, the size distribution of avalanches of local field potentials has an exponent α ≈ 1.5, but in recordings of spikes from a different array the exponent is α ≈ 2.1. The reasons behind this difference in exponents are still being explored. It is important to note that a power law distribution is not what would be expected if activity at each electrode were driven independently: neuronal avalanches emerge from collective processes in a distributed network.
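As an illustration of how such an exponent can be estimated from data, the sketch below applies a standard maximum-likelihood estimator (the continuous approximation of Clauset et al.) to a hypothetical list of avalanche sizes; the data and variable names are illustrative assumptions, not values from the experiments cited above.

```python
import numpy as np

# Hypothetical avalanche sizes (electrodes driven over threshold per avalanche)
sizes = np.array([1, 1, 2, 1, 3, 9, 2, 5, 1, 14, 2, 1, 4, 7, 1, 2, 35, 3, 1, 2])

# Maximum-likelihood estimate of the power-law exponent alpha for sizes >= s_min,
# using the continuous approximation for discrete data (offset of 0.5).
s_min = 1
s = sizes[sizes >= s_min]
alpha_hat = 1.0 + len(s) / np.sum(np.log(s / (s_min - 0.5)))
print(f"estimated exponent alpha ~ {alpha_hat:.2f}")
```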
Figure 3: Avalanche size distributions. A, Distribution of sizes from acute slice LFPs recorded with a 60-electrode array, plotted in log-log space. Actual data are shown in black, while the output of a Poisson model is shown in red. In the Poisson model, each electrode fires at the same rate as in the actual data, but independently of all the other electrodes. Note the large difference between the two curves. The actual data follow a nearly straight line for sizes from 1 to 35; after this point there is a cutoff induced by the electrode array size. The straight line is indicative of a power law, suggesting that the network is operating near the critical point (unpublished data recorded by W. Chen, C. Haldeman, S. Wang, A. Tang, J.M. Beggs). B, The avalanche size distribution for spikes can be approximated by a straight line over three orders of magnitude in probability, without the sharp cutoff seen in panel A. Data were collected with a 512-electrode array from an acute cortical slice bathed in high potassium and zero magnesium (unpublished work of A. Litke, S. Sher, M. Grivich, D. Petrusca, S. Kachiguine, J.M. Beggs). Spikes were thresholded at −3 standard deviations and were not sorted. Data were binned at 1.2 ms to match the short interelectrode distance of 60 μm. Results similar to A and B are also obtained from cortical slice cultures recorded in culture medium. [7]

1.2.2 Repeating avalanche patterns

While avalanches in critical sandpile models are stochastic in the patterns they form, avalanches of local field potentials occur in spatio-temporal patterns that repeat more often than expected by chance (Beggs and Plenz, 2004). The figure shows several such patterns from an acute cortical slice. These patterns are reproducible over periods as long as 10 hours and have a temporal precision of 4 ms (Beggs and Plenz, 2004). The stability and precision of these patterns suggest that neuronal avalanches could be used by neuronal networks as a substrate for storing information. In this sense, avalanches appear to be similar to the sequences of action potentials observed in vivo while animals perform cognitive tasks. It is unclear at present whether the repeating activity patterns formed in vivo are also avalanches. [7]
Figure 4: Families of repeating avalanches from an acute slice. Each family (1-4) shows
a group of three similar avalanches. Repeating avalanches are stable for 10 hrs and have a
temporal precision of 4 ms, suggesting that they could serve as a substrate for storing
information in neural networks.

1.2.3 Models of Avalanches

Models that explicitly predicted avalanches of neural activity include the work of Herz and Hopfield (1995), which connects the reverberations in a neural network to the power law distribution of earthquake sizes. Also notable is the work of Eurich, Herrmann and Ernst (2002), which predicted that the avalanche size distribution of a network of globally coupled nonlinear threshold elements should have an exponent of α ≈ 1.5. Remarkably, this exponent turned out to match the one reported experimentally (Beggs and Plenz, 2003). In this model, a processing unit that is active at one time step will produce, on average, activity in σ processing units at the next time step. The number σ is called the branching parameter and can be thought of as the expected value of the ratio

σ = descendants / ancestors,

where "ancestors" is the number of processing units active at time step t and "descendants" is the number of processing units active at time step t+1. There are three general regimes for σ, as shown in the figure.

Figure 5: The three regimes of a branching process. Top, when the branching
parameter, σ , is less than unity, the system is subcritical and activity dies out over time.
Middle, when the branching parameter is equal to unity, the system is critical and activity
is approximately sustained. In actuality, activity will die out very slowly with a power
law tail. Bottom, when the branching parameter is greater than unity, the system is
supercritical and activity increases over time.[7]

At the level of a single processing unit in the network, the branching parameter σ is set by the following relationship:

σ_i = Σ_{j=1}^{N} p_ij,

where σ_i is the expected number of descendant processing units activated by unit i, N is the number of units that unit i connects to, and p_ij is the probability that activity in unit i will be transmitted to unit j. Because some transmission probabilities are greater than others, preferred paths of transmission may occur, leading to reproducible avalanche patterns. Both the power law distribution of avalanche sizes and the repeating avalanches are qualitatively captured by this model when σ is tuned to the critical point σ = 1.

When the model is tuned moderately above (σ > 1) or below (σ < 1) the critical point, it fails to produce a power law distribution of avalanche sizes. This phenomenological model does not explicitly state the cellular or synaptic mechanisms that may underlie the branching process, and many of its predictions remain to be tested. [7]
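To make the three regimes concrete, the following minimal sketch (our own construction, not code from the cited works) simulates a branching process in which each active unit produces a Poisson-distributed number of descendants with mean σ:

```python
import math
import random

def sample_poisson(lam, rng=random):
    """Knuth's algorithm; adequate for the small means used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def avalanche_size(sigma, cap=10_000):
    """Total number of activations in one avalanche of a branching process
    with Poisson(sigma) descendants per active unit (capped so that
    supercritical runs terminate)."""
    active, total = 1, 1
    while active and total < cap:
        active = sum(sample_poisson(sigma) for _ in range(active))
        total += active
    return total

for sigma in (0.8, 1.0, 1.2):  # subcritical, critical, supercritical
    sizes = [avalanche_size(sigma) for _ in range(1000)]
    print(f"sigma={sigma}: mean size {sum(sizes)/len(sizes):.1f}, max {max(sizes)}")
```

Only near σ = 1 do the sampled sizes spread over many scales, which is the signature of the power law discussed in section 1.2.1.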

2. Kinouchi-Copelli Model

The Brazilian researchers Osame Kinouchi and Mauro Copelli presented in 2006 the first model (hereafter KC 2006) that fully explains the wide dynamic ranges observed [1]. As further independent evidence in favor of the model, its response to weak stimuli agrees with Stevens' psychophysical law.

The models used consist of a dynamics on a network: the dynamics is the set of rules that defines the temporal evolution of the states of the elements from their states at the previous instant, while the network represents the connections between the elements, which of course also shape the evolution of the system.

There are models consisting of coupled differential equations, which therefore represent continuous variables evolving in continuous time; there are also models of discrete variables in discrete time. The latter are the most convenient to treat computationally, due to their simplicity and speed.

Connections are incorporated into the models simply by adding a coupling term to the dynamics of each pair of connected neurons; the choice of which pairs of neurons to connect is defined by the network topology.

2.1 MODEL DYNAMICS


2.1.1 Greenberg-Hastings Cellular Automaton

As in a typical two-dimensional cellular automaton, consider a rectangular grid, or checkerboard pattern, of "cells". It can be finite or infinite in extent. Each cell has a set of "neighbors". In the simplest case, each cell has four neighbors: the cells directly above, below, to the left, and to the right of the given cell.

These discrete-variable, discrete-time models are known as cellular automata. Cellular automata are usually defined on a regular two-dimensional lattice, where the state of each element evolves in time following simple rules that depend on the element's own state and on the states of its neighboring elements at the previous time step. We can update all the elements of the system synchronously, or asynchronously, for example by updating the state of a single randomly selected element at a time.

The dynamics of KC2006 is a modification of the Greenberg-Hastings cellular automaton, an automaton on a regular two-dimensional lattice in which each element interacts with its four nearest neighbors.
Figure 6: Pattern of self-sustained activity in the Greenberg-Hastings cellular automaton (GHCA) with one refractory state. Each element in the excited state (black squares) excites, at the next instant, all of its nearest neighbors in the quiescent state (white squares), and then passes into the refractory state (gray squares). The configuration shown is stable over time, since after four time steps the system returns to the same state; this is due to the four elements at the center of each spiral, highlighted with frames.
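As a sketch of these rules (a minimal implementation of our own, assuming periodic boundaries for simplicity), one synchronous update of the GHCA with a single refractory state can be written as follows:

```python
import numpy as np

Q, E, R = 0, 1, 2  # quiescent, excited, refractory

def gh_step(grid):
    """One synchronous update of the Greenberg-Hastings automaton with a
    single refractory state and four nearest neighbors (periodic borders)."""
    new = np.full_like(grid, Q)
    new[grid == E] = R          # excited -> refractory
    new[grid == R] = Q          # refractory -> quiescent
    excited = (grid == E)
    # A quiescent cell becomes excited if any of its four neighbors is excited
    has_excited_neighbor = (np.roll(excited, 1, 0) | np.roll(excited, -1, 0) |
                            np.roll(excited, 1, 1) | np.roll(excited, -1, 1))
    new[(grid == Q) & has_excited_neighbor] = E
    return new

grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = E                # seed a single excitation
for t in range(10):
    grid = gh_step(grid)
    print(t, int((grid == E).sum()), "excited cells")
```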

2.1.2 Kinouchi-Copelli Cellular Automaton

The dynamics of an element in the GHCA can represent the action potential of a neuron, as shown in Figure 7: if a quiescent neuron receives a strong enough stimulus, it depolarizes and enters the excited state; later, while the slow conductances are activated, the neuron travels successively through the refractory states until it returns to the quiescent state. While the internal dynamics of each element can be modeled by the GHCA, the interactions between neighbors must be modified if we seek a realistic model: a neural network, of course, is not square, and the number of neighbors is not the same for all neurons. On the other hand, deterministic connections between neurons prove to be excessively strong. One way to adjust the excitation power of the synapses in the model is to introduce a probability that an excited neuron excites each of its quiescent neighbors.

Figure 7: Dynamics of a neuron in the KC2006 model. The neuron remains indefinitely in the quiescent state x_i = 0. Eventually it is excited (the potential exceeds a certain threshold), which defines the stay in the state x_i = 1, and it then successively traverses the refractory states x_i = 2, ..., n−1 until it returns to the quiescent state.
2.1.3 Kinouchi Copelli Model

Consider an undirected weighted Erdős-Rényi random graph with N nodes. Each node represents a neuron, i.e. an excitable unit whose possible states will be described below. Given a desired average connectivity K (mean degree of a node) for the graph, each of the NK/2 edges is assigned to a randomly chosen pair of nodes. Let V_j be the neighborhood of node j, i.e. the set of nodes connected to j by an edge. The strength of a synapse in this neuronal network is represented by the weight of the corresponding edge in the graph and is drawn from a uniform probability density in the interval [0, p_max], where 0 ≤ p_max ≤ 1. Representing the absence of a synapse as a null weight, we can define a (symmetric) adjacency matrix A whose element A_jk is the weight of the edge between nodes j and k [1].

Let X_j(t) be a random variable representing the state of the j-th neuron at instant t. For all j and t, X_j(t) takes values in the set {0, 1, 2, ..., m−1}. The state 0 is called the quiescent (or rest) state, the state 1 is the excited state, and all other states are called refractory states. The full dynamics of the system consists of the temporal evolution of the family {X_j(t)} in discrete time, with synchronous updating, according to the following rules:

 If 1 ≤ X_j(t) ≤ m−2, then X_j(t+1) = X_j(t) + 1;

 If X_j(t) = m−1, then X_j(t+1) = 0;

 If X_j(t) = 0, then X_j(t+1) will be either 0 or 1, and the total excitation probability of neuron j depends on independent contributions from:

- an external stimulus, with probability η;

- each of its excited neighbors, say k, with probability A_jk.

Explicitly, "independent contributions" means that each of the numbers η and {A_jk} is meaningful as an excitation probability only in isolation (in the absence of all other contributions). Notice also that the refractory period of a neuron equals m−1 time steps, starting right after the neuron gets excited; its evolution during that period is deterministic. The only probabilistic state transition occurs from the quiescent state to the excited one. In the KC model, η = 1 − e^(−r·δt), where δt is an arbitrary continuous-time interval (usually δt ≈ 1 ms) and r is the probability rate of a Poisson process. In the olfactory intraglomerular neuronal network (a biological system where the KC model may be applicable), r would be directly related to the concentration of an odorant capable of exciting the neurons.
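The rules above translate directly into code. The sketch below is a minimal implementation of our own (the name kc_step and the vectorized form are ours); A is the weighted adjacency matrix of section 2.1.3, eta the per-step external stimulus probability, and m the number of states:

```python
import numpy as np

rng = np.random.default_rng(0)

def kc_step(x, A, eta, m):
    """One synchronous update of the Kinouchi-Copelli automaton.
    x: array of states in {0, ..., m-1}; A[j, k]: probability that an
    excited neuron k excites a quiescent neuron j; eta: external stimulus
    probability per time step."""
    excited = (x == 1)
    # Probability that a quiescent neuron j is NOT excited by any source:
    # (1 - eta) times the product over excited neighbors k of (1 - A[j, k]).
    p_not = (1.0 - eta) * np.prod(1.0 - A[:, excited], axis=1)
    return np.where(x == 0,
                    (rng.random(x.size) < 1.0 - p_not).astype(int),  # stochastic excitation
                    (x + 1) % m)                                     # deterministic cycle 1 -> ... -> m-1 -> 0
```

The relation η = 1 − e^(−r·δt) given above converts a stimulus rate r into the per-step probability eta used here.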

The main observables of the KC model are ρ(t), the density of excited neurons at the t-th time step (the fraction of the population of neurons composed of excited units), and its temporal average, the activity of the network,

(1) F = (1/T) Σ_{t=1}^{T} ρ(t).

For large enough values of the observation window T (≈ 10^3 ms), such that a dynamical equilibrium is reached, the precise value of T does not have relevant effects on the behaviour of F. Critical behavior is revealed when the dynamic range, the range of values of r over which F exhibits "significant" variation, is seen as a function of the average branching ratio σ, defined as the mean value (averaged over all neurons) of the local branching ratio σ_j of the j-th node,

(2) σ_j = Σ_{k∈V_j} A_jk.

Indeed, the dynamic range turns out to be optimal when σ = 1. The role of control parameter is thus performed by σ, which is a measure of how much activity can be directly generated by an excited unit of the network stimulating a resting neighborhood.
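Putting the pieces together, here is a sketch of how F and σ could be measured in a simulation, reusing kc_step (and its module-level rng) from the previous example; the graph construction and parameter values are illustrative assumptions:

```python
import numpy as np

def er_weighted_graph(N, K, p_max):
    """Undirected Erdos-Renyi graph with NK/2 edges and weights ~ U[0, p_max]."""
    A = np.zeros((N, N))
    edges = 0
    while edges < N * K // 2:
        j, k = rng.integers(0, N, size=2)
        if j != k and A[j, k] == 0.0:
            A[j, k] = A[k, j] = rng.uniform(0.0, p_max)
            edges += 1
    return A

def network_activity(A, eta, m=5, T=1000, transient=100):
    """F = (1/T) sum_t rho(t): temporal average of the density of
    excited neurons, measured after discarding a transient."""
    x = np.zeros(A.shape[0], dtype=int)
    rho = []
    for t in range(transient + T):
        x = kc_step(x, A, eta, m)
        if t >= transient:
            rho.append(np.mean(x == 1))
    return float(np.mean(rho))

A = er_weighted_graph(N=1000, K=10, p_max=0.2)  # expected sigma = K*p_max/2 = 1
sigma = A.sum(axis=1).mean()                    # eq. (2), averaged over nodes
print("sigma ~", round(sigma, 3), "  F ~", network_activity(A, eta=1e-3))
```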

2.1.4 Statistical Properties

Experimentally, several behaviors of the activity in biological neural networks have been recorded that can be compared with the simulations. The discovery of neuronal avalanches in vitro was particularly illuminating for the dynamic range problem and, more generally, for the study of neural network complexity [3]. The activity observed in cortical tissue using an electrode array has shown that neurons fire together according to spatio-temporal patterns distributed in the form of power laws over a large region, which suggests that the system is near a critical point. To characterize the network, the branching ratio σ was defined as the ratio between the number of active neurons (or electrodes) at two consecutive instants of time following a period without recorded activity [3]:

(3) σ = ρ(t+1) / ρ(t), with ρ(t) ≠ 0.

It was found that the branching ratio of the electrode array assumes values around σ = 1.04 ± 0.19 in the unmodified network [3]. In addition, the network can be modified with substances that change the chemical synapses and therefore the value of the branching ratio. In this case it is observed that for values larger than unity the number of activated neurons increases from the first to the second instant at which activity is recorded in the tissue, leading later to the activation of a large part of the electrodes. At the other extreme, for σ < 1, the number of excited neurons decreases with time and the activity rapidly disappears.

2.1.4.1 Control Parameter

The definition (3) is not the most convenient choice to characterize our networks, since it depends on individual realizations in time, and therefore several experimental measurements are required before converging to an average value. However, our probabilistic description in terms of the connection weights makes it possible to characterize the network completely before running any particular simulation. Thus, for probabilistic networks, we will use the definition of [4], equivalent to expression (3): the local branching ratio of a neuron i is defined as the sum, over its K_i neighbors, of the excitation probabilities from that element,

(4) σ_i = Σ_{j=1}^{K_i} p_ij,

which can be interpreted as the average number of neurons that will be excited at the next instant if the i-th neuron is excited and all its K_i neighbors are quiescent.
The mean value of the branching ratio of the network is obtained from (4) as

(5) σ = ⟨σ_i⟩ = K ⟨p_ij⟩ = K p_max / 2,

where K = ⟨K_i⟩ is called the mean network connectivity. The branching ratio (5) turns out to be the control parameter of the model. In the simulations, this parameter has a predefined value that determines the value of p_max and therefore of the randomly chosen weights. In the model of [4], the value of each weight is adjusted to guarantee the default value of σ for each of the neurons. In contrast, in the automaton presented here the weights are independent random variables, and in each realization the actual value of σ approximates the default value better in larger networks.
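In practice a target σ can be dialed in by inverting (5), i.e. setting p_max = 2σ/K; a short sketch, reusing er_weighted_graph from the earlier example:

```python
# Choosing the control parameter: given a desired sigma and mean degree K,
# set p_max = 2*sigma/K (inverting eq. (5)) and check the realized value.
K, sigma_target = 10, 1.0
p_max = 2.0 * sigma_target / K
A = er_weighted_graph(N=2000, K=K, p_max=p_max)
sigma_realized = A.sum(axis=1).mean()
print(sigma_realized)  # close to 1.0; fluctuations shrink as N grows
```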

2.1.4.2 The Order Parameter

The probabilistic excitation of the neighbors is essential to obtain the critical point, and it generates the nontrivial behavior shown in Figure 8, where the temporal evolution of the instantaneous activity is presented in the presence of a stimulus that is constant in time and vanishingly weak (but essential to sustain the activity), for different values of the branching ratio σ. For this figure the random network was used, in which, in principle, any neuron can be connected to any other. Starting from the same initial condition, with all states equally occupied, the activity initially decreases to imperceptible values regardless of the value of σ. For weakly connected networks, σ < 1, the network tends to remain quiescent for all time: any excitation due to the external stimulus dies out quickly, spreading at most to a few neighboring neurons (blue line in Figure 8). When the value reaches σ = 1, a qualitative change occurs: the total number of activated neurons and the time during which the network remains active after a single external excitation acquire an enormous variability, which can generate both a brief episode of activity and a mass activation limited only by the size of the entire network (red line in Figure 8).
Figure 8: Instantaneous activity ρ(t) in the random network in the presence of a weak stimulus. Simulations done with networks of neurons with branching ratios ranging from the smallest value (blue line) to the largest (red line) in steps of 0.1 (lines in a gradient from red to green).

For larger branching ratios, σ > 1, the initial excitation can certainly activate a large portion of the network's neurons (lines in a gradient from red to green in Figure 8). The activity grows until it fluctuates around a constant value that depends on the branching ratio σ. For slightly supercritical networks, the activity stays around a low value and has an apparently random profile. For networks with larger branching ratios, the activity begins to take the form of oscillations around the fixed value, modulated by a random envelope.

This instantaneous activity of the supercritical networks is interpreted as a "self-organized" process in which neurons fire together, generating collective oscillations [5,6]; but for our problem of the dynamic range in sensory networks this behavior is not the best. Finally, for networks with the highest branching ratios, there are strong oscillations (with increasing amplitude) within an increasingly regular and less random envelope; these oscillations may momentarily drive the instantaneous activity to zero, but still allow a well-defined mean value to be distinguished.
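A sketch of how such temporal profiles could be generated, again reusing kc_step, er_weighted_graph, and rng from the earlier examples (the parameter values are illustrative):

```python
import numpy as np

def rho_trace(sigma, N=2000, K=10, eta=1e-4, m=5, T=500):
    """Time series rho(t) for one network realization with branching ratio sigma."""
    A = er_weighted_graph(N=N, K=K, p_max=2.0 * sigma / K)
    x = rng.integers(0, m, size=N)   # all states equally occupied at t = 0
    trace = []
    for _ in range(T):
        x = kc_step(x, A, eta, m)
        trace.append(float(np.mean(x == 1)))
    return trace

# Subcritical traces die out; supercritical ones settle around a
# sigma-dependent level, oscillating for the largest sigma values.
traces = {s: rho_trace(s) for s in (0.8, 1.0, 1.2, 1.6)}
```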

3. Response Curves and Dynamic Range

3.1 Mean Field

To obtain the graph below, we need to introduce some equations obtained for the KC model:

(6) h_i(t) = 1 − (1 − η) Π_{j∈V_i} [1 − p_ij δ(x_j(t), 1)],

where h_i(t) is the conditional probability of exciting the i-th neuron in the quiescent state; it depends on the external stimulus and also on the interaction with its possible neighbors. We also need to introduce another equation obtained for the KC model, relating the mean activity of the model, f, the temporal mean of the instantaneous activity, to the excitation probability:

(7) f = [1 − (n−1) f] h_i(t).

This relation is later used in the limit of decoupled neurons and in the mean-field approximation. Note that h_i(t) may depend, in general, on the activity of the neighboring neurons and therefore on f.

In the random network we can easily obtain the mean-field approximation expressions from the "stationary" activity (7) and the general expression for the excitation probability (6), as in [1]. The mean-field excitation probability is obtained by assuming that the fraction of active neighbors equals the average activity f and that the probability of excitation equals its mean value,

(8) p_ij ≈ ⟨p⟩ = p_max / 2 = σ / K.

We must also consider the same number of neighbors for each neuron, K_i = K. Substituting in equation (6) and then in (7), we obtain the following transcendental equation for the activity in the steady state:

(9) f = [1 − (n−1) f][1 − (1 − σ f / K)^K (1 − η)].

We verified that the values obtained from the roots of equation (9) fit the simulation values well, as shown by the stimulus-response curves of Figure 9.
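Equation (9) can be solved numerically for f at each stimulus level. The sketch below uses bisection (our own choice; any standard root finder would do) with illustrative parameter values:

```python
import numpy as np

def mean_field_f(eta, sigma, K=10, n=5, tol=1e-12):
    """Bisection for the stationary activity f solving eq. (9):
    f = [1 - (n-1) f] [1 - (1 - sigma*f/K)^K (1 - eta)]."""
    def residual(f):
        return f - (1.0 - (n - 1) * f) * (1.0 - (1.0 - sigma * f / K) ** K * (1.0 - eta))
    lo, hi = 0.0, 1.0 / (n - 1)   # residual(lo) <= 0 <= residual(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rates = np.logspace(-3, 2, 6)           # stimulus rates r
for sigma in (0.5, 1.0, 1.5):
    etas = 1.0 - np.exp(-rates * 1.0)   # eta = 1 - e^(-r*dt), dt = 1 ms
    print(sigma, [round(mean_field_f(e, sigma), 4) for e in etas])
```

Plotting f against r for several values of σ reproduces the shape of the response curves in Figure 9.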
Figure 9: Mean activity as a function of the external stimulus for different values of σ in the Erdős-Rényi random network, on a linear vertical scale.

REFERENCES

[1] KINOUCHI, O.; COPELLI, M. Optimal dynamical range of excitable networks at criticality. Nature Physics, v. 2, p. 348-351, 2006.

[2] GREENBERG, J. M.; HASTINGS, S. P. Spatial patterns for discrete models of diffusion in excitable media. SIAM Journal on Applied Mathematics, v. 34, n. 3, p. 515-523, 1978.

[3] STROGATZ, S. H. Nonlinear dynamics and chaos. Cambridge, Massachusetts: Westview Press, 2008.

[4] HALDEMAN, C.; BEGGS, J. M. Critical branching captures activity in living neural networks and maximizes the number of metastable states. Physical Review Letters, v. 94, 058101, 2005.

[5] LEWIS, T. J.; RINZEL, J. Self-organized synchronous oscillations in a network of excitable cells coupled by gap junctions. Network: Computation in Neural Systems, v. 11, p. 299-320, 2000.

[6] LEWIS, T. J.; RINZEL, J. Topological target patterns and population oscillations in a network with random gap junctional coupling. Neurocomputing, v. 38-40, p. 763-768, 2001.

[7] BEGGS, J. M. Neuronal avalanche. Scholarpedia, v. 2, n. 1, 1344, 2007.
