
Neuromorphic Photonics: Introduction to Spiking and Excitability
Rama Chaudhary
Dayalbagh Educational Institute, Department of Physics and Computer Science, Agra, Uttar Pradesh, India

Abstract—The human brain is believed to be the most complex system in the universe and sets the standard for information processing. Brain-inspired computing systems could break performance limitations inherent in the traditional von Neumann architecture for particular classes of problems. We study this field, which explores the merging of the principles of neuroscience and photonics. The special class of artificial neurons known as spiking neurons is discussed. The methodology of finding suitable systems for spike processing is also discussed.

I. INTRODUCTION

The field of brain-inspired computing promises breakthroughs in information processing. It uses terms like nonlinear node (neuron), interconnection (network), and information representation (coding scheme). The most elementary illustration of a neuron is shown in Fig. 1. Neurons are networked in a weighted directed graph, in which the connections are called synapses. The input into a particular neuron is a linear combination, also referred to as a weighted addition, of the outputs of other neurons. The neuron integrates the weighted signals over time and produces a nonlinear response. This nonlinear response is represented by an activation function, which is usually bounded and monotonic. The neuron's output is then broadcast to all connected nodes in the network. These connections can be weighted with negative or positive values, respectively called inhibitory and excitatory synapses. Each weight is therefore represented as a real number, and the entire interconnection network can be expressed as a matrix. The coding scheme is a mapping of how real-valued variables are represented by spiking signals[1].

Three generations of neural networks have been studied in computational neuroscience. The first is based on the McCulloch-Pitts neuron model, often referred to as a perceptron[3], which consists of a linear combiner followed by a step-like activation function: the synaptic weighted sum is compared to a threshold and, accordingly, the neuron produces a binary output. Artificial neural networks based on perceptrons are boolean-complete; they have the ability to simulate any boolean circuit and are said to be universal for digital computations.

The second generation of neuron models introduces a continuous monotonic activation function in place of the hard limiter, and therefore represents inputs and outputs as analog quantities. Neural networks of the second generation are universal for analog computations in the sense that they can approximate arbitrarily well any continuous function on a compact domain.

First- and second-generation neural networks are powerful constructs present in state-of-the-art machine intelligence; however, they cannot explain how fast analog computations are performed by neurons in the cortex. For example, neuroscientists demonstrated in the 1990s that a single cortical area in the macaque monkey is capable of analyzing and classifying visual patterns in just 30 ms, in spite of the fact that these neurons' firing rates are usually below 100 Hz, i.e. fewer than 3 spikes in 30 ms, which directly challenges the assumptions of rate coding. In parallel, more evidence was found that biological neurons use the precise timing of these spikes to encode information, which led to the investigation of a third generation of neural networks based on the spiking neuron[1].

II. SPIKING NEURAL NETWORKS
Spiking neural models replace the nonlinear activation func-
tion with a nonlinear dynamical system. The most important
example of a spiking neuron is the threshold-fire model. It can
be summarized with the following five properties:

A. Weighted Addition

The neuron must have the ability to sum both positive and negative weighted inputs. The weights model the synaptic junctions, and they are useful in terms of adaptability: the network's ability to accommodate changing environmental and system conditions. Adaptation of network parameters typically occurs on time scales much slower than the spiking dynamics and can be separated into supervised or unsupervised learning[1].

Fig. 1. Nonlinear model of a neuron. Notice the set of synapses, or connecting links; an adder performing weighted addition; and a nonlinear activation function[1].
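The weighted addition performed at the synapses can be sketched in a few lines; the weights and inputs below are purely illustrative values, not taken from the text:

```python
# Weighted addition of the outputs of upstream neurons. Positive weights
# model excitatory synapses, negative weights inhibitory ones.
# All values here are illustrative.
inputs = [1.0, 0.5, 2.0]      # outputs of three upstream neurons
weights = [0.8, -1.2, 0.3]    # one signed weight per synapse

# The input to the downstream neuron is the weighted sum of these outputs.
weighted_sum = sum(w * x for w, x in zip(weights, inputs))
print(round(weighted_sum, 6))   # 0.8*1.0 - 1.2*0.5 + 0.3*2.0 = 0.8
```

In a full network the weight vector becomes one row of the interconnection matrix described in the Introduction.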
B. Integration

The neuron is required to be capable of integrating the weighted sum over time. Integration is a temporal generalization of summation in the older perceptron-based models. Spikes with varying amplitudes and delays arrive at the integrator, changing its state by an amount proportional to their input amplitudes. Excitatory inputs increase its state, while inhibitory inputs deplete it. The integrator is especially sensitive to spikes that are closely clustered in time or have high amplitudes, the basic functionality of temporal summation. Eventually, without any inputs, the state variable decays to its equilibrium value. Both amplitude and timing play an important role in integration[1].

C. Thresholding

Thresholding determines whether the output of the integrator is above or below a predetermined threshold value, T. The neuron makes a decision based on the state of the integrator, firing if the integration state is above the threshold. Thresholding is the center of nonlinear decision making in a spiking system; it reduces the dimensionality of incoming information in a useful fashion and plays a critical role in cleaning up amplitude noise that would otherwise cause the breakdown of analog computation[1].

D. Reset

The reset condition returns the state of the integrator to a low rest value immediately after the spike processor fires, causing a refractory period in which it is impossible or difficult to make the neuron fire again. It plays the same role in time as the thresholder does for amplitude: it cleans up timing jitter and prevents the temporal spreading of excitatory activity, while putting a bandwidth cap on the output of a given spiking unit. It is a necessary component for simulating the rate model of neural computation with spiking neurons[1].

E. Pulse Generation

Pulse generation refers to the ability of a system to spontaneously generate pulses. If pulses are not regenerated as they travel through systems, they will eventually be lost in noise. A system with this property can asynchronously generate pulses without needing to be triggered by input pulses, which is crucial for distributed processing[1].

TABLE I: Spiking Neuron Properties
  Integration       Temporally sum weighted inputs
  Thresholding      Fire one spike if the state exceeds the threshold
  Reset             Return the state to its reset value immediately after a spike
  Pulse Generation  Introduce a new spike into the network
  Adaptability      Modify parameters based on input statistics

F. Need of Spiking Neurons

The information transformed by the spiking neuron is often characterized by the firing rate of spikes (rate coding) or by the timing of individual spikes (temporal coding). Maass demonstrated that this third generation is more than a generalization of the first two in terms of functionality and computational power; for some computational tasks, a single spiking neuron can replace large numbers of conventional neurons and still be robust to noise.

In the nervous system, firing a spike costs a neuron an amount of energy proportional to how far the spike must travel down the axon, because the transmission medium is dispersive and lossy. There are clear advantages to representing information in spikes traveling through lossy, dispersive passive media: the bit of information is not destroyed by pulse spreading or amplitude attenuation, since it is contained in the timing of the spike, and it can therefore be regenerated by intermediate neurons. Active mechanisms, such as intermediate neurons, restore the stereotypical shape and amplitude of the spike as it travels through a signaling pathway, mitigating noise accumulation at every stage of the network. Thus, so long as random processes do not interfere irreversibly with spike timings, information can travel indefinitely unaltered, at the expense of energy consumed at each intermediate node[1].

III. SPIKING NEURON MODEL

The leaky integrate-and-fire (LIF) model is the most effective spiking model for describing a variety of biologically observed phenomena. Signals are ideally represented by series of delta functions, where the inputs and outputs, for spike times tj, take the form

x(t) = Σ_{j=1}^{n} δ(t − tj)    (1)

Individual units perform a small set of basic operations (delaying, weighting, spatial summation, temporal integration, and thresholding) that are integrated into a single device capable of implementing a variety of processing tasks, including binary classification, adaptive feedback, and temporal logic. The five properties listed in Table I all play important roles in the emulation of the LIF neuron[2].

According to the standard LIF model, neurons are treated as an equivalent electrical circuit. The membrane potential Vm(t), the voltage difference across the membrane, acts as the primary internal (activation) state variable. Ions that flow across the membrane experience a resistance R = Rm and capacitance C = Cm associated with the membrane. The soma is effectively a first-order low-pass filter, or leaky integrator, with an integration time constant τm = Rm Cm that determines the exponential decay rate of the impulse response function. The leakage current through Rm drives the membrane voltage Vm(t) to 0, but an active membrane pumping current counteracts it and maintains a resting membrane voltage at a value of Vm(t) = VL.

The basic biological structure of a LIF neuron is depicted in Fig. 2(a). It consists of a dendritic tree that collects and sums inputs from other neurons, a soma that acts as a low-pass filter and integrates signals over time, and an axon that carries an action potential, or spike, when the integrated signal exceeds a threshold. Neurons are connected to each other via synapses, or extracellular gaps, across which chemical signals are transmitted. The axon, dendrite, and synapse all play an important role in the weighting and delaying of spike signals[2].

Fig. 2. (a) Schematic and (b) operation of a leaky integrate-and-fire neuron[2].

Fig. 2(b) shows the standard LIF neuron model. A neuron has: (1) N inputs, which represent induced currents through input synapses xj(t) and are continuous time series consisting either of spikes or of continuous analog values; (2) an internal activation state Vm(t); and (3) a single output state y(t). Each input is independently weighted by wj and delayed by τj, resulting in a time series that is spatially summed (summed pointwise). This aggregate input induces, between adjacent neurons, an electrical current

Iapp(t) = Σ_{j=1}^{n} wj xj(t − τj)    (2)

The result is then temporally integrated using an exponentially decaying impulse response function, yielding the activation state

Vm(t) = VL (1 − e^{−(t−t0)/τm}) + (1/Cm) ∫_0^{t−t0} Iapp(t − s) e^{−s/τm} ds    (3)

where t0 is the last time the neuron spiked. The parameters determining the behavior of the device are the weights wj, the delays τj, the threshold Vthresh, the resting potential VL, and the integration time constant τm.

There are three influences on Vm(t): passive leakage of current, an active pumping current, and external inputs generating time-varying membrane conductance changes. Including a set of digital conditions, we arrive at the typical LIF model for an individual neuron, in which these three influences are the three terms contributing to the differential equation describing Vm(t).

The dynamics of an LIF neuron are illustrated in Fig. 3. If Vm(t) ≥ Vthresh, the neuron outputs a spike, which takes the form y(t) = δ(t − tf), where tf is the time of spike firing, and Vm(t) is set to Vreset. This is followed by a relative refractory period, during which Vm(t) recovers from Vreset to the resting potential VL and it is difficult, but possible, to induce the firing of a spike. Consequently, the output of the neuron consists of a continuous time series of spikes, y(t) = Σ_i δ(t − ti), for spike firing times ti.

Fig. 3. An illustration of spiking dynamics in an LIF neuron. Spikes arriving from inputs xj(t) that are inhibitory (red arrows) reduce the membrane voltage Vm(t), while those that are excitatory (green arrows) increase Vm(t). Enough excitatory activity pushes Vm(t) above Vthresh, releasing a delta-function spike in yk(t), followed by a refractory period during which Vm(t) recovers to its resting potential VL[2].

This LIF neuron model is believed to be more computationally powerful than either the rate or the earlier perceptron models, and can serve as the baseline unit for many modern cortical algorithms. From the standpoint of computability and complexity theory, LIF neurons are powerful and efficient computational primitives that are capable of simulating both boolean-complete logic and traditional sigmoidal neural networks[1].

IV. EXCITABILITY MECHANISMS

This section explains what kind of physical system can give rise to leaky-integration and excitable dynamics. Dynamical lasers can be modeled by partial differential equations and analyzed using the framework of dynamical systems. Solving a set of differential equations analytically is not an easy task, but there are some frameworks in which their behavior can be studied without exactly solving them.
One such framework is bifurcation analysis. In this section, an introduction to bifurcation is first given without any complex mathematics; the relevance of this framework to spike processing is then discussed.

A. Bifurcation in simple terms

When the behavior of a system changes qualitatively under a smooth change of its parameters, we say a bifurcation has occurred. A typical one-dimensional ODE is written as

ẋ = f(x, α)    (4)

where α is a system parameter. To evaluate the equilibrium points of the system one puts ẋ = 0. For example:

ẋ = x(α − x)    (5)

This equation has two equilibrium points, x = 0 and x = α, of which one is stable and the other unstable. The stability can be checked from the derivative f′(x) = α − 2x. For α < 0 the equilibrium x = 0 is stable, while for α > 0 it is unstable. For the non-trivial equilibrium x = α the opposite is true. If α is smoothly varied through zero, the two equilibria exchange stability, as shown in Fig. 4(a). This is known as a transcritical bifurcation. Fig. 4(b) shows a tangent bifurcation, in which two equilibria merge into a single equilibrium. Thus, a qualitative change in behavior under a smooth change of parameters is called a bifurcation[13].

Fig. 4. Example of (a) transcritical bifurcation and (b) tangent bifurcation[13].

B. Relevance to spike processing

Unlike the one-dimensional case, two-dimensional systems have three types of equilibrium points. Next to the stable and unstable equilibria, there is the saddle equilibrium. A two-dimensional stable equilibrium is attracting in two directions, while a two-dimensional unstable equilibrium is repelling in two directions. A saddle point is attracting in one direction and repelling in the other. A second remark is that the dynamics around the equilibria can also differ: the attracting or repelling can occur via straight orbits (a node) or via spiraling orbits (a spiral, or focus).

For spike processing, we are interested in a special bifurcation in which a stable or unstable node meets a saddle point and they annihilate each other. This is known as a saddle-node bifurcation (see Fig. 5).

Fig. 5. An example of a saddle-node bifurcation[1].

Some dynamical systems have periodic solutions (X(t) = X(t + T)). Such a periodic orbit is called a limit cycle if the system is attracted back to it upon a small perturbation. We are interested in systems in which these periodic excursions are fast and represent action potentials, or spikes. In such systems, a saddle-node bifurcation can occur on a limit cycle; this is referred to as a saddle-node on limit cycle (SNLC) bifurcation (see Fig. 6). When this happens, the local bifurcation affects the global behavior of the system: if the system is on the right side of the saddle point, it travels along the path of the limit cycle and returns to the attractor node. The qualitative behavior of the system thus changes from periodic to excitable[5].

Fig. 6. Saddle-node bifurcation on a limit cycle. Upon a small perturbation the state of the system is attracted towards the limit cycle, generating a spike; the system thus becomes excitable[1].

For spike processing devices we need systems in which:
(1) a limit cycle is present whose periodic orbit is a spike output;
(2) a saddle-node bifurcation is possible on this limit cycle.

Many such systems have been studied and utilized by different groups in recent years[6]-[9]. The list includes two-section gain and saturable absorber lasers, vertical-cavity surface-emitting lasers, semiconductor lasers, and resonant tunneling diode photodetector and laser diode systems[10]-[12]. These different approaches will be studied in the future.
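The stability exchange in the transcritical example can be checked numerically. The sketch below uses f(x) = x(α − x), the form consistent with the derivative f′(x) = α − 2x quoted in the text, and applies the usual linear-stability rule (an equilibrium is stable when f′ < 0 there):

```python
# Stability of the equilibria of dx/dt = f(x, alpha) = x * (alpha - x),
# the transcritical normal form, whose derivative is f'(x) = alpha - 2x.
def f_prime(x, alpha):
    return alpha - 2.0 * x

def stability(alpha):
    """Label the equilibria x = 0 and x = alpha: an equilibrium is
    stable when f'(x) < 0 there, unstable when f'(x) > 0."""
    label = lambda fp: "stable" if fp < 0 else "unstable"
    return label(f_prime(0.0, alpha)), label(f_prime(alpha, alpha))

# As alpha is varied smoothly through zero, the two equilibria exchange
# stability -- the transcritical bifurcation of Fig. 4(a).
print(stability(-1.0))   # ('stable', 'unstable')
print(stability(+1.0))   # ('unstable', 'stable')
```

The same linearization test, applied to a two-dimensional system's Jacobian eigenvalues, distinguishes the node, saddle, and focus equilibria discussed above.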
REFERENCES
[1] P.R. Prucnal and B.J. Shastri. Neuromorphic Photonics. CRC Press, Taylor & Francis Group, 2017.
[2] M.A. Nahmias, B.J. Shastri, A.N. Tait, and P.R. Prucnal. A Leaky Integrate-and-Fire Laser Neuron for Ultrafast Cognitive Computing. IEEE J. Sel. Top. Quantum Electron., vol. 19(5), 2013.
[3] S. Haykin. Neural Networks and Learning Machines. 3rd ed., 2009.
[4] W. Maass and C.M. Bishop. Pulsed Neural Networks. MIT Press, 2001.
[5] F.C. Hoppensteadt and E.M. Izhikevich. Weakly Connected Neural Networks. Springer-Verlag, 1997.
[6] J.L.A. Dubbeldam and B. Krauskopf. Self-pulsations of lasers with saturable absorber: dynamics and bifurcations. Optics Communications, vol. 159(4-6), pp. 325-338, 1999.
[7] J.L.A. Dubbeldam, B. Krauskopf, and D. Lenstra. Excitability and coherence resonance in lasers with saturable absorber. Phys. Rev. E, vol. 60, 1999.
[8] M.A. Larotonda, A. Hnilo, J.M. Mendez, and A.M. Yacomotti. Experimental investigation on excitability in a laser with a saturable absorber. Physical Review A, vol. 65, 2002.
[9] S. Barbay, R. Kuszelewicz, and A.M. Yacomotti. Excitability in a semiconductor laser with saturable absorber. Optics Letters, vol. 36, no. 23, 2011.
[10] B.J. Shastri, M.A. Nahmias, A.N. Tait, B. Wu, and P.R. Prucnal. SIMPEL: Circuit model for photonic spike processing laser neurons. Optics Express, vol. 23(6), 2015.
[11] B.J. Shastri, M.A. Nahmias, A.N. Tait, A.W. Rodriguez, B. Wu, and P.R. Prucnal. Spike processing with a graphene excitable laser. Scientific Reports, vol. 6, 2016.
[12] K. Alexander, T. Van Vaerenbergh, M. Fiers, P. Mechet, J. Dambre, and P. Bienstman. Excitability in optically injected microdisk lasers with phase controlled excitatory and inhibitory response. Optics Express, vol. 21(22), 2013.
[13] G.A.K. van Voorn. PhD mini course: Introduction to Bifurcation Analysis. Vrije Universiteit, Amsterdam, 2006.
