
Neural

Engineering
2016-2017
Generation of the Resting
Potential

Cristiano De Marchis
cristiano.demarchis@uniroma3.it

1
Neuron

• It is the basic unit of the nervous system. It "carries" an electrical signal produced by electrochemical processes at the level of the cell membrane.

2
Neuron

• From a functional point of view, it is an excitable cell: this means that it can be activated (or excited).
• The activation process corresponds to a change in the potential
difference between the internal and external part of the cell
membrane

3
Neuron
• Cell membrane has a natural polarization, with a potential difference typically in the range −100 mV to −50 mV (the minus sign indicates that the inner part is negative with respect to the external part).
• This natural polarization is due to differences in concentration for specific ionic species, especially Na+, K+ and Cl−.
• The activation process (or depolarization), can propagate along the cell membrane.
This is the basic process through which the electrical pulses are transmitted in the
excitable cells

4
Neuron
• A depolarization occurs as a consequence of excitation. The duration of the depolarization depends on the involved cell. For a neuron, the typical duration is around 1 ms; it increases slightly for skeletal muscle cells, and it goes up to 300-400 ms for cardiac muscle cells.

• We define the action potential as the potential difference across the membrane during excitation and depolarization of the excitable cell.

5
Action Potential

• During the depolarization phase, the trans-membrane potential rises until it reaches positive peak values. After the peak, it goes back to the resting value. The whole process lasts around 1 ms.
• The complete process is called depolarization / repolarization.
• For a certain period after the repolarization, the excitable cell cannot be excited again. This is called the Absolute Refractory Period.
• There is also a Relative Refractory Period, during which the cell has a lower excitability.
• The whole duration of these two phases is typically around 2 ms for neurons.

6
Resting Potential
• The resting potential depends on the following
factors:
– The presence of electrically charged intracellular
proteins, that cannot cross the membrane;
– The presence of ionic species at constant
concentration;
– Selective permeability of the cell membrane;
– The presence of active ion pumps;

• These 4 factors contribute to the generation of the resting potential.
• We analyze the single contributions by hypothesizing that the cell is a compartment containing charged molecules and ions in water.

7
Resting Potential
• Two different mechanisms determine the resting potential:
– Concentration gradient, determining the diffusion mechanism.
– Electric field, determining the migration mechanism.
• Both processes cause ions to move → a current is present.
• Under resting conditions, the resulting current should be zero…
• …this does not happen, and an active process is required, which actively transports ionic species across the membrane (with energy consumption).
Resting Potential: Diffusion
• Given an ionic species k, a flux jdiff,k is present, directly proportional to the concentration gradient ∇Ck for that species, according to the following law:

jdiff,k = - Dk ∇Ck

• This is Fick's first law of diffusion (Dk is Fick's diffusion constant for the ion k, and depends on the ion size).

• The negative sign in the law means that the flux goes from higher to lower concentration zones.

9
Resting Potential: Migration

• In this case, the flux jmigr,k is directly proportional to the electric field −∇Φ (where Φ is the electric potential), to the ionic mobility µk, and to the concentration of the k-th ionic species Ck, according to the following:

jmigr,k = - µk Ck ∇Φ sign(k)

• where sign(k) can assume the values +1 or −1, for a cation or an anion, respectively. Positive ions move towards lower potentials, and vice versa.

10
Resting Potential
• We can assume that the two constants, µk and Dk, both depend on ion size, and they can be linked through the following:

$$D_k = \frac{\mu_k R T}{z_k F}$$

• Where T is the temperature, zk is the ion's valence, R is the gas constant, and F is the Faraday constant.
• RT/F = 26.7 mV at 37 °C.

11
Resting Potential
• Combining the two fluxes, we obtain:

$$j_k = j_{diff,k} + j_{migr,k} = -D_k\left(\nabla C_k + \frac{F z_k C_k}{RT}\,\nabla\Phi\right)$$

• Where we used sign(k) = zk/|zk|.

• In order to pass from the ionic flux jk to the ionic current Jk, we have Jk = jk F zk.

• We can further simplify our model by hypothesizing that all the variations are in a direction x orthogonal to the cell membrane. We can thus project the above equation along the x axis, and the gradients become derivatives with respect to x.
12
Nernst-Planck Equation
• We obtain:

$$\frac{\partial C_k(x)}{\partial x} + \frac{z_k F}{RT}\, C_k(x)\, \frac{\partial \Phi(x)}{\partial x} = -\frac{1}{D_k}\, j_k(x)$$

• Which is the Nernst-Planck equation:
– It is valid for all the considered ionic species k = 1, 2, ..., N.
– We can further simplify the model: hypothesizing that the membrane is permeable to only one ion (e.g., K+), and hypothesizing that we are in stationary conditions, jK(x) = 0, we obtain:

$$\frac{\partial \Phi(x)}{\partial x} = -\frac{RT}{F z_K}\, \frac{\partial C_K(x)}{\partial x}\, \frac{1}{C_K(x)}$$
13
Nernst-Planck Equation

$$\frac{\partial \Phi(x)}{\partial x} = -\frac{RT}{F z_K}\, \frac{\partial C_K(x)}{\partial x}\, \frac{1}{C_K(x)}$$

• Integrating the equation between the two extremes i and e of the membrane, we can calculate the potential difference across the membrane itself:

$$\Phi_i - \Phi_e = \frac{RT}{F z_K}\, \ln\frac{C_{K,e}}{C_{K,i}}$$

• For the specific case of potassium K+, its intracellular concentration is higher than the extracellular one, and the resulting potential difference is negative.
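A quick numerical check of the Nernst relation helps fix the orders of magnitude. The sketch below is a minimal Python example: the K+ concentrations are assumed, illustrative values (not taken from the slides), while R, F and T = 310 K reproduce the RT/F ≈ 26.7 mV quoted above.

```python
import numpy as np

# Nernst potential: phi_i - phi_e = (RT / (z F)) * ln(C_e / C_i)
R = 8.314      # gas constant, J/(mol K)
F = 96485.0    # Faraday constant, C/mol
T = 310.0      # K (37 degrees C), so RT/F is about 26.7 mV as in the slides

def nernst(C_e, C_i, z):
    """Equilibrium trans-membrane potential (volts) for one ionic species."""
    return (R * T) / (z * F) * np.log(C_e / C_i)

# Illustrative intra/extracellular K+ concentrations (mM); assumed values.
E_K = nernst(C_e=20.0, C_i=400.0, z=+1)
print(f"Nernst potential for K+: {E_K*1e3:.1f} mV")  # negative, as stated in the slide
```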

14
Resting Potential
$$\Phi_i - \Phi_e = \frac{RT}{F z_K}\, \ln\frac{C_{K,e}}{C_{K,i}}$$

• This is valid under the simplifying assumption that the membrane is permeable to one ionic species. When we repeat the derivation for only Na+ or only Cl−, the equation is the same (with inverted sign for Cl−).

• Since under equilibrium conditions Φi − Φe must be the same, we can express the relationships among the ratios of concentrations across the membrane for the different ionic species:

$$\frac{C_{K,e}}{C_{K,i}} = \frac{C_{Na,e}}{C_{Na,i}} = \frac{C_{Cl,i}}{C_{Cl,e}}$$

• This is called the Donnan Equilibrium.


15
Resting Potential
• At least theoretically, the concentration ratios for the single ionic species between the inside and the outside of the cell should be equal.

• In practice, this doesn't happen. Experimental data show the following concentration ratios, measured in the giant axon of the squid:

$$\frac{C_{K,e}}{C_{K,i}} = 0.037, \qquad \frac{C_{Na,e}}{C_{Na,i}} = 10, \qquad \frac{C_{Cl,i}}{C_{Cl,e}} = 14$$

• This means that, for excitable cells, including neurons and motor neurons, the Donnan Equilibrium is not valid.

• Our simplified model is not valid, and an active process that makes jk(x) ≠ 0 must exist, even in resting conditions.

16
Resting Potential: Na+-K+ Pump
• The active mechanism is the sodium-potassium pump, which
moves 3 sodium ions out and moves 2 potassium ions in, with a
3:2 ratio.

• Other active pumps are also present, but their contribution to the membrane potential is negligible in neural cells.

• We have to find a solution to the previous problem under non-equilibrium conditions, finding the jk(x) due to the active mechanism.

• In order to solve this problem, we need Maxwell's equations, which help us define a very simplified yet experimentally verifiable model.
17
Resting Potential: electrical considerations
• From the charge conservation law in resistive materials, we have:

$$\nabla \cdot \mathbf{J} + \frac{\partial \rho(t)}{\partial t} = 0$$

• This means that in resistive materials the current generated in a volume equals the total current flowing out of the volume through its surface (conduction current).
• If Js is the impressed current and Jc is the conduction current, we can write:

$$\nabla \cdot (\mathbf{J}_c + \mathbf{J}_s) = 0$$

• In an ohmic conductor, the conduction current can be expressed as:

$$\mathbf{J}_c = \sigma \mathbf{E}$$
18
Resting Potential: electrical considerations
• Substituting, and considering that:

$$\mathbf{E} = -\nabla \Phi$$

• We have:

$$\nabla \cdot (\mathbf{J}_c + \mathbf{J}_s) = 0 \;\;\Rightarrow\;\; \nabla \cdot (\sigma \nabla \Phi) = \nabla \cdot \mathbf{J}_s$$

• This equation tells us that, in order to obtain information on the trans-membrane potential, we must consider:
– Active mechanisms (right hand side)
– Conductive properties of the medium (left hand side)

19
Resting Potential: electrical considerations

• If we hypothesize the absence of fixed charges in the membrane (which is not strictly true!), and the negligibility of the external surface charges, we can assume that the electric field is constant across the membrane:

$$\frac{\partial \Phi}{\partial x} = -\frac{V_m}{h}$$

• Where Vm is the trans-membrane potential, and h is the membrane thickness.

20
Derivation of the GHK Equation

• Recalling the previous equations:

$$j_k = -D_k\left(\nabla C_k + \frac{F z_k C_k}{RT}\,\nabla\Phi\right), \qquad \frac{\partial \Phi}{\partial x} = -\frac{V_m}{h}$$

• In the mono-dimensional case the flux can be written as:

$$j_k = -D_k\left(\frac{dC_k}{dx} - \frac{z_k F}{RT}\frac{V_m}{h}\, C_k\right)$$

21
Derivation of the GHK Equation
• By using the technique of separation of variables, we can reorganize the previous equation in the following way:

$$\frac{dC_k}{-\dfrac{j_k}{D_k} + \dfrac{z_k F}{RT}\dfrac{V_m}{h}\, C_k} = dz$$

• And we can integrate both sides between the two extremes of the membrane, z = 0 and z = h. Considering that the total current density is zero:

$$J_{TOT} = \sum_k J_k = 0$$

• Making some calculations, and introducing the permeability Pk = Dk/h, we obtain the Goldman-Hodgkin-Katz equation:

$$V_m = \frac{RT}{F}\,\ln\frac{P_K C_{K,e} + P_{Na} C_{Na,e} + P_{Cl} C_{Cl,i}}{P_K C_{K,i} + P_{Na} C_{Na,i} + P_{Cl} C_{Cl,e}}$$
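The sketch below evaluates the Goldman-Hodgkin-Katz equation numerically. The relative permeabilities and the concentrations are assumed, squid-axon-like illustrative values, not figures taken from the slides.

```python
import numpy as np

RT_F = 26.7e-3  # volts at body temperature, as in the slides

def ghk_voltage(P, C_out, C_in):
    """GHK voltage for K+, Na+, Cl-; note the inverted role of the Cl- concentrations."""
    num = P['K'] * C_out['K'] + P['Na'] * C_out['Na'] + P['Cl'] * C_in['Cl']
    den = P['K'] * C_in['K'] + P['Na'] * C_in['Na'] + P['Cl'] * C_out['Cl']
    return RT_F * np.log(num / den)

# Assumed relative permeabilities and concentrations (mM), for illustration only.
P     = {'K': 1.0,   'Na': 0.04,  'Cl': 0.45}
C_out = {'K': 20.0,  'Na': 440.0, 'Cl': 560.0}
C_in  = {'K': 400.0, 'Na': 50.0,  'Cl': 52.0}

Vm = ghk_voltage(P, C_out, C_in)
print(f"GHK resting potential: {Vm*1e3:.1f} mV")  # close to the K+ Nernst value, as noted below
```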
22
Resting Potential
$$V_m = \frac{RT}{F}\,\ln\frac{P_K C_{K,e} + P_{Na} C_{Na,e} + P_{Cl} C_{Cl,i}}{P_K C_{K,i} + P_{Na} C_{Na,i} + P_{Cl} C_{Cl,e}}$$

• If we know the concentrations of Na+, K+ and Cl−, in resting conditions the calculated potential resembles the experimentally obtained one, even though we made strong assumptions.

• It is important to note that not only the concentrations, but also the permeabilities are key for the trans-membrane potential calculation. Given that, under resting conditions, the membrane has its highest permeability to K+, the calculated resting value is very similar to the one obtained if we consider only K+.

(figure: resting potential compared with the Nernst potentials of Na+ and K+)
23
Action Potential

• In the presence of excitation, the simplifying assumptions made under equilibrium conditions are not valid (they are only valid if the net current crossing the membrane is close to zero).

• This happens in neurons only at the peak of the action potential, where the sodium channels are totally open and the membrane is almost only permeable to Na+. In this particular case, the trans-membrane potential is close to the Nernst potential of sodium.

(figure: action potential compared with the Nernst potentials of Na+ and K+)

24
Neural
Engineering
2016-2017

L03 – Passive Neuron Model:


subthreshold phenomena under internal
stimulation

Cristiano De Marchis
cristiano.demarchis@uniroma3.it
1
Passive Model and Action Potential

• It is now clear that excitable cells are subject to active processes (requiring energy consumption), so they should not be treated as electrically passive objects.

• However, before the excitability threshold is reached, the membrane can be represented as a passive object (like a simple RC circuit).

• This is important to study the neural excitation processes, both internally generated (i.e. coming from pulses of neighboring cells) and externally generated (like artificial pulses of NMES and DBS).

2
Passive Model
• The simplest passive model for an excitable cell is that of a passive spherical
membrane.

• Hypothesizing that a virtual stimulation electrode is able to generate a current Is of duration T in the center of the cell, and considering a reference electrode far from the cell, a spherically symmetric electric potential would be generated as a consequence of the stimulation, as represented in the figure.

• In this case, it would be possible to model the membrane as simple RC, with
resistive and capacitive values Rm and Cm.

3
Passive Model
• Given the circuit model in the figure, the total applied current Is can be divided into a capacitive displacement current and a current through the membrane resistive branch:

$$I_s = I_c + I_R = C_m \frac{dV_m}{dt} + \frac{V_m}{R_m}$$

4
Passive Model
• In the presence of the Is indicated in the figure, we can calculate the trans-membrane potential Vm(t):

$$V_m(t) = I_s R_m \left(1 - e^{-t/(R_m C_m)}\right), \qquad 0 \le t \le T$$

• We have to remember that each excitable cell has a threshold voltage Vth, above which the membrane depolarizes.
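A minimal numerical sketch of the step response above; Rm, Cm, Is and T are assumed illustrative values, not parameters from the slides.

```python
import numpy as np

# Step response of the passive RC membrane: Vm(t) = Is*Rm*(1 - exp(-t/(Rm*Cm)))
Rm = 10e6     # membrane resistance (ohm), assumed
Cm = 100e-12  # membrane capacitance (farad), assumed -> tau = 1 ms
Is = 1e-9     # stimulus current (ampere), assumed
T  = 5e-3     # stimulus duration (s)

t  = np.linspace(0.0, T, 500)
Vm = Is * Rm * (1.0 - np.exp(-t / (Rm * Cm)))

print(f"tau = {Rm*Cm*1e3:.2f} ms, asymptotic Vm = {Is*Rm*1e3:.1f} mV")
print(f"Vm at t = tau: {Vm[np.searchsorted(t, Rm*Cm)]*1e3:.1f} mV (about 63% of the asymptote)")
```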

5
Passive Model Parameters
• We define the Rheobase, Irh, as the minimum current needed to generate excitation (i.e. to bring Vm to Vth) in the specific case of an infinite stimulus duration, T = ∞. We thus consider the previous equation for t → ∞, obtaining:

$$\lim_{t\to\infty} V_m(t) = V_{th} = \lim_{t\to\infty} I_{rh} R_m \left(1 - e^{-t/(R_m C_m)}\right) = I_{rh} R_m$$

• From which Irh = Vth/Rm. The general relationship between Ith and Vth is:

$$V_{th} = I_{th} R_m \left(1 - e^{-T/(R_m C_m)}\right)$$

• From which we can represent the relationship between stimulus duration and threshold current according to the following:

$$I_{S,th} = \frac{I_{rh}}{1 - e^{-T/(R_m C_m)}}$$
Passive Model Parameters
$$I_{S,th} = \frac{I_{rh}}{1 - e^{-T/(R_m C_m)}}$$

• This equation expresses the threshold current as a function of the pulse duration T, and it is defined as the strength-duration curve.

(figure: strength-duration curve, threshold current vs. pulse duration in ms, with asymptote Irh)

7
Chronaxie and Rheobase
$$I_{S,th} = \frac{I_{rh}}{1 - e^{-T/(R_m C_m)}}$$

(figure: strength-duration curve with the rheobase Irh, 2·Irh and the chronaxie Tchr marked on the axes)

• For each excitable cell, under the assumptions of this simple model, we can characterize the membrane behavior based on its parameters: the Rheobase Irh and the time constant τ = RmCm. The latter is often expressed through another parameter called the Chronaxie Tchr, defined as the pulse duration needed to generate a depolarization when the current is 2·Irh.

8
Chronaxie and Rheobase
$$I_{S,th} = \frac{I_{rh}}{1 - e^{-T/(R_m C_m)}}$$

(figure: strength-duration curve with 2·Irh and Tchr marked)

• We can easily obtain the relationship between the Chronaxie Tchr and the time constant τ:

$$T_{chr} = R_m C_m \ln 2 \approx 0.7\, R_m C_m$$
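The relationship between chronaxie, time constant and the strength-duration curve can be checked numerically with the sketch below (τ and Irh are assumed values).

```python
import numpy as np

tau  = 1e-3   # Rm*Cm (s), assumed
I_rh = 1e-9   # rheobase current (A), assumed

def threshold_current(T):
    """Threshold current for a rectangular pulse of duration T (strength-duration curve)."""
    return I_rh / (1.0 - np.exp(-T / tau))

# Chronaxie: the pulse duration at which the threshold is twice the rheobase.
T_chr = tau * np.log(2.0)
print(f"chronaxie = {T_chr*1e3:.3f} ms (= tau*ln2 ~ 0.7*tau)")
print(f"I_th(T_chr)/I_rh = {threshold_current(T_chr)/I_rh:.2f}")  # should print 2.00

for T in (0.1e-3, 0.5e-3, 1e-3, 5e-3):
    print(f"T = {T*1e3:4.1f} ms -> I_th = {threshold_current(T)*1e9:6.2f} nA")
```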

9
Chronaxie and Rheobase
• As a function of what do Tchr and Irh change in an
excitable cell?

– If we hypothesize a spherical excitable cell, whose membrane is made of a specific material, with a defined membrane thickness, we can state that Rm is inversely proportional to the square of the cell radius: Rm = c/r².

– Consequently, the Rheobase Irh will be directly proportional to the square of the cell radius (as Rm is in the denominator).

– This theoretically means that bigger cells need higher currents to be excited.
10
Chronaxie and Rheobase
• …As a function of what do Tchr and Irh change in an excitable cell?

– If we consider the Chronaxie, it depends on both Rm and Cm. Typically, Cm is directly proportional to the square of the cell radius.

– The Chronaxie is therefore theoretically not dependent on cell size (while it depends on other cell parameters).

11
Passive Model: limitations and considerations
• All the previous considerations come from very strong assumptions
and simplifications. What happens if we make it a little bit “more
complex”? :
– Current was applied in the center of the sphere. If we apply the current
elsewhere, things would not change that much, as intracellular fluid has a
high conductivity with respect to the membrane (The resulting electric
potential does not change).

– What happens in case of natural stimulation (i.e. current injected through a synaptic connection)? Almost nothing. In this case, the injected current is inversely proportional to the impedance seen from the synapse, which is higher in the case of bigger cells, and vice versa.

– This consideration confirms our previous results on Irh, also in case of synaptic transmission: bigger cells need higher currents to be excited. Higher currents mean a higher number of synaptic inputs.

12
Spherical Passive Model: limitations


• The spherical model, compared to the real shape of a neuron
(the axon most of all, which is far from having a spherical
geometry), is not anatomically plausible! However, we can
easily assume the circular symmetry in the section of the
axon. In this way we can obtain a slightly more complex cell
passive model, but the mathematical description of such a
model is easy enough!

13
Axon Passive Model with cylindrical symmetry
• The previous spherical assumption is also not functionally adequate, because it cannot explain the propagation of the electric field along the axon's membrane (every membrane point of the previous spherical model has the same distance from the current injection point, and this is not true in a real neuron).

• This mechanism is very important for describing all the processes that include propagation along the cell.

• We use the cylindrical model. In this case the axon is modeled as a cylinder with a length much larger than its diameter (anatomically, the diameter of an axon is in the range 1-100 µm, while the length is in the range 1-100 mm for CNS neurons, and it is even higher for peripheral motor neurons of the PNS), and the current is injected at a point along the axis of the cylinder (or along a virtual direction along the same axis).

14
Axon Passive Model with cylindrical symmetry
• In this cylindrical case, we can collapse the anatomical structure of the axon into a mono-dimensional cable, composed of resistors and capacitors with specific electrical properties: a membrane resistance and capacitance, Rm and Cm (expressed as per-unit-length properties, Rm in Ω·m and Cm in F/m), for the transverse section (trans-membrane properties), and the resistances per unit length of the extracellular and intracellular media, re and ri.

(figure: cable model with the membrane Rm, Cm between the extracellular and intracellular media)

15
Axon Passive Model with cylindrical symmetry

• In this model, starting from the per-unit-length electrical properties, we can write the differential equations governing the behavior of the trans-membrane potential Vm(x,t) as a function of time and space (x is the axial abscissa along the axon):

$$V_m(x,t) = \Phi_i(x,t) - \Phi_e(x,t)$$
$$\frac{\partial \Phi_i(x,t)}{\partial x} = -r_i\, i_i, \qquad \frac{\partial \Phi_e(x,t)}{\partial x} = -r_e\, i_e$$
$$i_m = -\frac{\partial i_i}{\partial x} = \frac{\partial i_e}{\partial x} = \frac{V_m}{R_m} + C_m \frac{\partial V_m}{\partial t}$$
16
Axon Passive Model with cylindrical symmetry

• Manipulating the first equation and using the other equations:

$$V_m(x,t) = \Phi_i(x,t) - \Phi_e(x,t), \qquad \frac{\partial \Phi_i}{\partial x} = -r_i i_i, \qquad \frac{\partial \Phi_e}{\partial x} = -r_e i_e, \qquad i_m = -\frac{\partial i_i}{\partial x} = \frac{\partial i_e}{\partial x} = \frac{V_m}{R_m} + C_m\frac{\partial V_m}{\partial t}$$

• We can reorganize as follows:

$$\frac{\partial^2 V_m(x,t)}{\partial x^2} = \frac{\partial^2}{\partial x^2}\left[\Phi_i(x,t) - \Phi_e(x,t)\right] = \frac{\partial}{\partial x}\left(-r_i i_i + r_e i_e\right) = \frac{r_e+r_i}{R_m}\,V_m + (r_e+r_i)\,C_m\frac{\partial V_m}{\partial t}$$
17
Axon Passive Model with cylindrical symmetry

$$\frac{\partial^2 V_m(x,t)}{\partial x^2} = \frac{r_e+r_i}{R_m}\,V_m + (r_e+r_i)\,C_m\frac{\partial V_m}{\partial t}$$

• Reorganizing, we have:

$$\lambda^2 \frac{\partial^2 V_m}{\partial x^2} - \tau_m \frac{\partial V_m}{\partial t} - V_m = 0$$

with

$$\lambda^2 = \frac{R_m}{r_e + r_i}, \qquad \tau_m = R_m C_m$$

• where λ is the length constant and τm is the time constant. Analogously to τm, λ represents the length at which the trans-membrane potential is reduced by a factor 1/e with respect to its value at the origin.
18
Axon Passive Model with cylindrical symmetry

$$\lambda^2 \frac{\partial^2 V_m}{\partial x^2} - \tau_m \frac{\partial V_m}{\partial t} - V_m = 0$$

• In practice, τm and λ regulate the temporal and spatial variations of the trans-membrane potential.
• In the specific case Vm(t) = Vm0, which corresponds to a boundary condition imposing a trans-membrane potential that is constant and not dependent on time, we obtain:

$$V_m(x) = V_{m0}\, e^{-x/\lambda}$$

• Alternatively, hypothesizing a trans-membrane potential not dependent on the abscissa x, we have:

$$V_m(t) = V_m(0)\, e^{-t/\tau_m}$$
19
Axon Passive Model with cylindrical symmetry

• If we exclude the previous two extremely simple conditions, the general solution to the differential equation is far more complex.

• In particular, we are often interested in the case where an impulsive current I0 is injected at time t0 at the abscissa x0: Is(x,t) = I0 δ(x−x0) δ(t−t0).

• From this we can obtain the impulse response of our system, the trans-membrane potential (written here for x0 = 0, t0 = 0):

$$V_{m,I_s}(x,t) = \frac{r_m I_0}{2\lambda\sqrt{\pi\,\tau_m\, t}}\; \exp\!\left(-\frac{\tau_m\, x^2}{4\lambda^2 t} - \frac{t}{\tau_m}\right)$$
20
Axon Passive Model with cylindrical symmetry

$$V_{m,I_s}(x,t) = \frac{r_m I_0}{2\lambda\sqrt{\pi\,\tau_m\, t}}\; \exp\!\left(-\frac{\tau_m\, x^2}{4\lambda^2 t} - \frac{t}{\tau_m}\right)$$

• Starting from this impulse response, we can analytically calculate the trans-membrane potential in response to any kind of spatial or temporal excitation, through the double convolution (in space and time) between the impulse response and the considered excitation:

$$V_m(x,t) = \int_{-\infty}^{+\infty}\!\!\int_{0}^{t} I_s(\chi,\theta)\; V_{m,I_s}(x-\chi,\, t-\theta)\; d\theta\, d\chi$$

21
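The following sketch evaluates the impulse response in the form reconstructed above. Note that both the prefactor of the expression and all parameter values (rm, I0, λ, τm) are assumptions used only for illustration.

```python
import numpy as np

lam = 1e-3    # length constant (m), assumed
tau = 1e-3    # time constant (s), assumed
rm  = 1e6     # membrane resistance for a unit length of cable (ohm*m), assumed
I0  = 1e-13   # strength of the injected impulse (A*s), assumed

def vm_impulse(x, t):
    """Trans-membrane potential at distance x (m) and time t (s) after the impulse."""
    return (rm * I0 / (2.0 * lam * np.sqrt(np.pi * tau * t))
            * np.exp(-tau * x**2 / (4.0 * lam**2 * t) - t / tau))

# The response spreads in space and decays in time: compare two distances at a few times.
for t in (0.1e-3, 0.5e-3, 1e-3):
    print(f"t = {t*1e3:.1f} ms: Vm(0) = {vm_impulse(0.0, t)*1e3:7.2f} mV, "
          f"Vm(lambda) = {vm_impulse(lam, t)*1e3:6.2f} mV")
```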
Axon Passive Model with cylindrical symmetry

• Going back to the length constant λ and the time constant τm:
– τm = RmCm is largely independent of the axon's diameter, since Rm is inversely proportional to the axon's diameter and Cm is directly proportional to it.
– λ is directly proportional to the square root of the axon's diameter (ri is inversely proportional to the square of the diameter, and re is negligible). As a consequence, larger axons have higher length constants. This means that in larger axons Vm varies less along the axon.

• Typical numerical values are τm on the order of milliseconds, and λ ∈ [0.1, 1] mm.
22
Axon Passive Model with cylindrical symmetry

• Myelinated and Non-Myelinated Axons:
– Up till now we have considered a continuous, uniform fiber. This hypothesis is valid only for non-myelinated fibers. Many neural excitable cells are covered with myelin sheaths, which are interrupted at specific sections, called Nodes of Ranvier.
– In the myelinated case we can discretize the mathematical formulation, creating a model with discrete components, considering the resistances (extracellular and intracellular, re and ri) between two consecutive nodes of Ranvier. The trans-membrane formulation can be discretized in the same way (Rm and Cm).

23
Nodes of Ranvier
• In myelinated axons, the myelin sheath is interrupted at these nodes. Myelin works as an electrical insulator in those zones where it is present, avoiding charge loss.
• This means that the trans-membrane potential travels almost without changes along the myelinated parts of the axon, and the ionic exchange across the cell membrane takes place only at the nodes of Ranvier.
• There is an advantage: through this mechanism, the conduction of the potential is "saltatory", and the resulting conduction velocity is orders of magnitude higher (from a few m/s up to 100-200 m/s).
• The corresponding equivalent electric circuit is simpler, considering discrete resistances along the myelinated internodes (cytoplasmic Ri and extracellular Re), and discrete trans-membrane components at the nodes of Ranvier (Rm and Cm).

(figure: discrete equivalent circuit with Re and Ri between nodes and Rm, Cm at each node of Ranvier)
24
Model of the Node of Ranvier
(figure: discrete equivalent circuit with Re, Ri between nodes and Rm, Cm at each node)

• We can take the equation of the non-myelinated continuous model and specialize it for the discrete spatial coordinate n (where n indicates the n-th node of Ranvier starting from the excitation point). Starting from the equation:

$$\lambda^2 \frac{\partial^2 V_m}{\partial x^2} - \tau_m \frac{\partial V_m}{\partial t} - V_m = 0$$

• We obtain the discrete equivalent model:

$$\lambda^2 \left(V_{m,n-1} - 2V_{m,n} + V_{m,n+1}\right) - \tau_m \frac{\partial V_{m,n}}{\partial t} - V_{m,n} = 0$$
t
25
Model of the Node of Ranvier
$$\lambda^2 \left(V_{m,n-1} - 2V_{m,n} + V_{m,n+1}\right) - \tau_m \frac{\partial V_{m,n}}{\partial t} - V_{m,n} = 0$$

• We have defined effective resistances and capacitances (not per unit length), from which we can derive the length constant and the time constant for the discrete case:

$$\lambda^2 = \frac{R_m}{R_i + R_e}, \qquad \tau_m = R_m C_m$$

26
Excitable Cells: constants and typical values

$$\lambda^2 = \frac{R_m}{R_i + R_e}, \qquad \tau_m = R_m C_m$$

• Some indications:
– Rm is the nodal membrane resistance, and it is inversely proportional to the excitable cell diameter;
– Cm is the nodal membrane capacitance, and it is directly proportional to the excitable cell diameter;
– Ri is the inter-nodal cytoplasmic resistance, and it is inversely proportional to the excitable cell diameter;
– Re is the inter-nodal extracellular resistance; it does not depend on the cell size, and it is negligible with respect to Ri.
27
Excitable Cells: constants and typical values
• Assumption: in most cases, the distance between two consecutive nodes of
Ranvier is directly proportional to the excitable cell diameter.
• Under these conditions, both λ and τm are largely independent from the axon
diameter, meaning that the trans-membrane potential is approximately
independent from the axon’s diameter.
• Theoretically, the propagation velocity is the ratio between the length constant λ and the time constant τm, and (expressed in nodes per unit time) it is independent of the axon diameter:

$$v_c = \frac{\lambda}{\tau_m}$$

• However, the conduction velocity in m/s increases linearly with the cell diameter. This is a natural consequence of the increased inter-nodal distance as the diameter increases, confirming the advantage of saltatory conduction with respect to classical continuous conduction.

28
Membrane Electrical
Properties
Neural Engineering 2016-2017
Cristiano De Marchis
Conductance and Capacitance per Unit Area
• Let's suppose that we know the electrical properties of a membrane:
• conductance per unit area g, measured in S/m²
• capacitance per unit area c, measured in F/m²

(figure: 1 m² patch of membrane)
Rm and Cm for a Spherical Cell
• In the case of a spherical cell, the total surface will be:

$$A = 4\pi r^2$$

• So the total capacitance and resistance of the spherical membrane are:

$$C_m = cA = c\,4\pi r^2, \qquad G_m = gA = g\,4\pi r^2, \qquad R_m = \frac{1}{G_m} = \frac{1}{g\,4\pi r^2}$$

• And they are proportional to the square of the radius (directly and inversely proportional, respectively, for Cm and Rm).
Rm and Cm for a Cylindrical Cell

• In the case of a cylindrical cell, we consider a piece of membrane corresponding to the lateral surface of the cylinder over a length L. The total surface of such a piece of membrane is:

$$A = 2\pi r L$$

• So the total capacitance and resistance of the cylindrical membrane are:

$$C_m = cA = c\,2\pi r L, \qquad G_m = gA = g\,2\pi r L, \qquad R_m = \frac{1}{G_m} = \frac{1}{g\,2\pi r L}$$

• And they are proportional to the radius, not to its square!! (directly and inversely proportional, respectively, for Cm and Rm).
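A small numerical sketch of these geometric scalings; the per-unit-area properties g and c are assumed values.

```python
import numpy as np

g = 1.0      # membrane conductance per unit area (S/m^2), assumed
c = 1e-2     # membrane capacitance per unit area (F/m^2), i.e. ~1 uF/cm^2, assumed

def sphere(r):
    """Total Cm (F) and Rm (ohm) for a spherical cell of radius r (m)."""
    A = 4.0 * np.pi * r**2
    return c * A, 1.0 / (g * A)

def cylinder(r, L):
    """Total Cm (F) and Rm (ohm) for a cylindrical membrane patch of radius r, length L."""
    A = 2.0 * np.pi * r * L
    return c * A, 1.0 / (g * A)

for r in (5e-6, 10e-6):   # doubling the radius: Cm scales with r^2, Rm with 1/r^2
    Cm, Rm = sphere(r)
    print(f"sphere   r = {r*1e6:4.0f} um: Cm = {Cm*1e12:6.2f} pF, Rm = {Rm/1e6:7.1f} Mohm")

Cc, Rc = cylinder(5e-6, 100e-6)   # for the cylinder the scaling is with r, not r^2
print(f"cylinder r = 5 um, L = 100 um: Cm = {Cc*1e12:.2f} pF, Rm = {Rc/1e6:.1f} Mohm")
```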
ri for a Cylindrical Cell

• In the case of a cylindrical cell, the axial internal resistance depends on the cross-section of the cylinder, which is:

$$S = \pi r^2$$

• And the axial resistance is that of a generic cable:

$$r_i = \rho \frac{L}{S} = \rho \frac{L}{\pi r^2}$$

• Meaning that the axial resistance of the intracellular medium is inversely proportional to the square of the radius. re is negligible, and we will not consider it.
Length Constant λ and speed v
• From the previous considerations, we get the expected behavior for the length constant:

$$\lambda^2 = \frac{R_m}{r_i} \propto r$$

• and for the speed of propagation along the cylindrical axon, since τ does not depend on cell size:

$$v = \frac{\lambda}{\tau} \propto \sqrt{r}$$
(figure: myelinated axon, with myelin internodes of length L and radius r between consecutive Nodes of Ranvier)

For the myelinated axon:
• The expression for the length constant is again:

$$\lambda^2 = \frac{R_m}{R_i}$$

• But since Rm and Ri are compact electrical properties (measured in Ω), lambda is not expressed in meters, but in nodes. From an anatomical point of view, the distance between consecutive nodes is proportional to the radius, through a factor of about 100, so that, theoretically, we have:

$$R_i = \rho \frac{L(r)}{S} = \rho \frac{100\, r}{\pi r^2} \propto \frac{1}{r}$$
• The speed of propagation is thus theoretically constant in terms of “number of
nodes per second”, but increases linearly with radius when measured in m/s,
because the distance between two consecutive nodes of Ranvier is higher.
Neural
Engineering
2016-2017
L04 – Passive Neuron Model:
subthreshold phenomena under external
stimulation
Cristiano De Marchis
cristiano.demarchis@uniroma3.it

1
External Stimulation
• Up till now we have considered an internally injected/generated current. This situation is actually valid for in vitro testing or for needle stimulation. The most realistic case is externally induced stimulation.
• For intra-cellular stimulation, we have found that the excitation threshold is directly proportional to cell size. Vice versa, for external stimulation, larger cells have a lower excitation threshold.

2
External Stimulation

• We step back to the simple spherical case, with a very high membrane resistance and a very low internal resistance.
• We hypothesize that this cell is immersed in an external electric field E as in the figure. The cell will be depolarized on one side and hyperpolarized on the opposite side. We can calculate the membrane potential as a function of the cell radius and of the orientation ϑ with respect to the electric field, according to the following:

$$V_m = \frac{3}{2}\, E\, r \cos\vartheta$$

• Vm linearly increases with r, so a lower electric field is needed to excite larger cells. This result is opposite to the previous one, obtained with intracellular stimulation.

3
Stimulation: differences

• Intracellular stimulation:
– The excitability threshold is directly proportional to the square of the spherical cell radius.

• External stimulation:
– The excitability threshold is inversely proportional to the spherical cell radius.

4
External stimulation of the axon
• We hypothesize a cylindrical model, where the stimulation derives from an external electric field E with field lines orthogonal to the cell membrane.
• We obtain the following final result:

$$V_m = 2\, E\, r \cos\vartheta$$

• In this case, the hypothesis of a homogeneous field along an orthogonal direction cannot be accepted. We have a non-homogeneous electric field, and this implies that the potential varies along the axon. The previous equation is therefore not valid, or it is valid only for a single section/abscissa of the axon.

5
External stimulation of the axon: model
• In the presence of an external stimulation, we use the McNeal model, which fixes the relation between the externally generated extracellular potential and the trans-membrane potential.
• The externally generated potential, at the nodes of Ranvier, corresponds to a voltage source, as in the figure:

(figure: equivalent circuit of one node, with the extracellular potential Ve,n as a voltage source, the membrane Rm and Cm across Vm,n, and the intracellular potential Vi,n connected to the neighboring nodes through Ri)
6
External stimulation of the axon: model
(figure: node equivalent circuit as above)

$$V_{m,n} = V_{i,n} - V_{e,n}$$

• Ve,n is the known potential caused by the external stimulation.
• We neglect the extracellular resistance, as we did for the case of intracellular stimulation.
• We can calculate the membrane current Im,n at the n-th node of Ranvier (current equilibrium at the n-th node):

$$I_{m,n} = \frac{V_{i,n-1} - V_{i,n}}{R_i} + \frac{V_{i,n+1} - V_{i,n}}{R_i} = \frac{1}{R_i}\left(V_{i,n-1} - 2V_{i,n} + V_{i,n+1}\right)$$
7
External stimulation of the axon: model

Vi,n1  Vi,n Vi,n  Vi,n1 1


I m,n    Vi,n1  2Vi,n  Vi,n1 
Ri Ri Ri
• Knowing the basic equations of the membrane:
1 Vm,n
I m,n  Vm,n  C
Rm t
Ve,n Extracellular
medium

- Cm
Rm Vm,n membrane
+

Vi,n Ri Intracellular
medium

8
External stimulation of the axon: model
• And combining the previous equations, we get:

$$\lambda^2 \left(V_{m,n-1} - 2V_{m,n} + V_{m,n+1}\right) - \tau_m \frac{\partial V_{m,n}}{\partial t} - V_{m,n} = -\lambda^2 \left(V_{e,n-1} - 2V_{e,n} + V_{e,n+1}\right)$$

• This equation links the externally imposed potentials at the nodes n−1, n, n+1 to the corresponding trans-membrane potentials Vm,n−1, Vm,n, Vm,n+1.
• λ and τm have the same values and meaning as in the previous case.
• The above equation is valid for myelinated fibers/cells.
• It is worth noting that this is the non-homogeneous equivalent of the previous situation, where the right hand side term is related to the external excitatory source.
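A minimal forward-Euler sketch of this discrete equation, driven by the extracellular potential of a point cathode placed above the central node. Geometry, time constant and stimulus amplitude are all assumed values; the example only illustrates the depolarization under the cathode and the weaker flanking hyperpolarization discussed in the next slides.

```python
import numpy as np

# tau*dVm_n/dt = lam^2*(Vm_{n-1}-2Vm_n+Vm_{n+1}) - Vm_n + lam^2*(Ve_{n-1}-2Ve_n+Ve_{n+1})
n_nodes = 21
lam2    = 1.0        # lambda^2, lambda expressed in nodes (dimensionless), assumed
tau     = 1e-4       # membrane time constant (s), assumed
dt      = 1e-6       # integration step (s)
dur     = 0.2e-3     # cathodic pulse duration (s), assumed

# Extracellular potential of a point cathode above the central node (assumed geometry).
h  = 1e-3                                 # electrode-fiber distance (m)
dx = 1e-3                                 # internodal distance (m)
x  = (np.arange(n_nodes) - n_nodes // 2) * dx
Ve = -1.0 / np.sqrt(x**2 + h**2)          # cathodic (negative) potential profile
Ve *= 10e-3 / np.abs(Ve).max()            # scaled so the peak is -10 mV, illustrative

Vm = np.zeros(n_nodes)
for _ in range(int(dur / dt)):
    d2Vm = Vm[:-2] - 2.0 * Vm[1:-1] + Vm[2:]
    d2Ve = Ve[:-2] - 2.0 * Ve[1:-1] + Ve[2:]
    Vm[1:-1] += dt / tau * (lam2 * (d2Vm + d2Ve) - Vm[1:-1])   # end nodes held at 0 mV

print(f"peak depolarization:    {Vm.max()*1e3:6.2f} mV at node {Vm.argmax()}")
print(f"peak hyperpolarization: {Vm.min()*1e3:6.2f} mV at node {Vm.argmin()}")
```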
9
External stimulation of the axon: model
• In the case of non-myelinated fibers/cells, the discrete node equation is replaced by a continuous function of the abscissa x:

$$\lambda^2 \frac{\partial^2 V_m}{\partial x^2} - \tau_m \frac{\partial V_m}{\partial t} - V_m = -\lambda^2 \frac{\partial^2 V_e}{\partial x^2}$$

• And this is the natural non-homogeneous equivalent of:

$$\lambda^2 \frac{\partial^2 V_m}{\partial x^2} - \tau_m \frac{\partial V_m}{\partial t} - V_m = 0$$

• previously obtained without external stimulation.


10
External stimulation of the axon: model

$$\lambda^2 \frac{\partial^2 V_m}{\partial x^2} - \tau_m \frac{\partial V_m}{\partial t} - V_m = -\lambda^2 \frac{\partial^2 V_e}{\partial x^2}$$

• The known term ∂²Ve/∂x² in the equation is defined as the Rattay activating function, and it is useful when we want to study what happens under external stimulation.
• Let's suppose we have an external rectangular stimulus (with respect to time), with initial condition Vm(0) = 0. Ve will be constant for the duration of the stimulus pulse, and zero before and after, and so will the activating function. Specializing for t = 0, we have:

$$\tau_m \left.\frac{\partial V_m}{\partial t}\right|_{t=0} = \lambda^2 \frac{\partial^2 V_e}{\partial x^2}$$

• We can solve this differential equation for small t, obtaining:

$$V_m(t) = \frac{\lambda^2}{\tau_m}\, \frac{\partial^2 V_e}{\partial x^2}\; t$$

11
External stimulation of the axon: model

$$V_m(t) = \frac{\lambda^2}{\tau_m}\, \frac{\partial^2 V_e}{\partial x^2}\; t$$

• This equation tells us that the membrane will hyperpolarize where the activating function is negative, and it will depolarize where the activating function is positive.
• Now, we hypothesize a long excitable cell and a homogeneous medium. We can calculate Ve(x) as a function of the injected current Is(t), hypothesizing that the current is injected at a point at distance h from the excitable cell. The extracellular potential at the fiber will be:

$$V_e(x,t) = \frac{\rho_e\, I_s(t)}{4\pi\sqrt{x^2 + h^2}}$$

12
External stimulation of the axon: model
$$V_e(x,t) = \frac{\rho_e\, I_s(t)}{4\pi\sqrt{x^2 + h^2}}$$

• From which we can calculate the activating function, as a second order derivative with respect to space. We obtain:

$$\frac{\partial^2 V_e(x,t)}{\partial x^2} = \frac{\rho_e\, I_s(t)}{4\pi}\; \frac{2x^2 - h^2}{(x^2 + h^2)^{5/2}}$$

• We hypothesized that Is(t) is a point source at x = 0, and temporally rectangular with a pulse duration much shorter than the time constant τm.
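The spatial profile of the activating function for a point source can be evaluated directly from the two expressions above. In the sketch below, ρe, Is and h are assumed values used only for illustration.

```python
import numpy as np

rho_e = 3.0      # extracellular resistivity (ohm*m), assumed
Is    = -1e-3    # cathodic (negative) stimulus current (A), assumed
h     = 1e-3     # electrode-fiber distance (m), assumed

x = np.linspace(-5e-3, 5e-3, 1001)
# activating function = second spatial derivative of Ve along the fiber
activating = rho_e * Is / (4.0 * np.pi) * (2.0 * x**2 - h**2) / (x**2 + h**2) ** 2.5

print(f"most depolarizing drive at x = {x[activating.argmax()]*1e3:.2f} mm (under the cathode)")
print(f"strongest hyperpolarizing drive near x = {x[activating.argmin()]*1e3:.2f} mm")
```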

13
External stimulation of the axon: model

$$V_e(x,t) = \frac{\rho_e\, I_s(t)}{4\pi\sqrt{x^2 + h^2}}, \qquad \frac{\partial^2 V_e(x,t)}{\partial x^2} = \frac{\rho_e\, I_s(t)}{4\pi}\; \frac{2x^2 - h^2}{(x^2 + h^2)^{5/2}}$$

$$\lambda^2 \frac{\partial^2 V_m}{\partial x^2} - \tau_m \frac{\partial V_m}{\partial t} - V_m = -\lambda^2 \frac{\partial^2 V_e}{\partial x^2}$$

The high-density current leaves the fiber close to the cathode, strongly depolarizing the membrane, while the current is more diffuse where it enters the fiber.
14
External stimulation of the axon: model
• If we exclude multiplicative constants, and we consider a negative Is (a cathode at x = 0), we have a negative Ve, approaching zero as we move away from the stimulating electrode/position.
• The corresponding activating function is positive at the stimulation point, becomes slightly negative at short distances, and then goes to zero.
• The membrane strongly depolarizes near the stimulation point, and weakly hyperpolarizes in the nearby surroundings.
15
External stimulation of the axon: considerations
• In the case of anodic current, we have the opposite situation: a hyperpolarization near the stimulation point, and a weak depolarization in the surroundings. This is the reason why, in the case of external stimulation, an active cathode is most often used: it induces stimulation with lower currents, and the depolarization is induced at the stimulation point.

• In the case of a temporally longer excitation pulse, the shape of the activation function tends to become broader, and so does the spatial profile of the trans-membrane potential.

16
Rheobase and Chronaxie in external stimulation
• In case of external stimulation of an axon, many of the simple hypotheses are not valid. However, the strength-duration curve has a similar behavior. It is more convenient to experimentally obtain the Chronaxie and Rheobase values in these cases.

• Remember that the Rheobase is influenced by the following factors:
– Cell size
– Distance from the membrane
– Type of stimulated tissue
– Cell orientation with respect to the stimulation

• While the Chronaxie is mainly influenced by the stimulated tissue, as indicated in the table:

17
Rheobase and Chronaxie in external stimulation

• Simulations show that for extracellular stimulation of the stylized axon as described previously,
chronaxie depends on, for example, electrode-fiber distance, in such a way that a point source close to
the fiber gives the lowest value for the chronaxie which monotonically increases by up to a factor two
for increasing electrode-fiber distance.

• Of course, rheobase values are dependent on the size of the target cell, the electrode-cell distance, the electrode configuration, the surrounding tissue, and the cell orientation. The value of the rheobase can vary over several orders of magnitude for different situations. Chronaxie is much less variable, and, even though chronaxie is not completely independent of the stimulation conditions, it makes sense to give chronaxie values to classify various tissues.

• From these values we can establish the correct criteria for stimulating neural tissues. For example, if we stimulate smooth muscle cells, it is not correct to provide pulses lasting hundreds of microseconds, while it makes a lot of sense for myelinated nerve fibers.

18
Electrical Properties
• Here we report some typical values for the electrical
properties of the previously analyzed models.

19
Neural
Engineering
2016-2017
L05 - Neuron Active
Models:
Hodgkin-Huxley
Cristiano De Marchis
cristiano.demarchis@uniroma3.it

1
Active neuron models
• We know that excitable cells are subject to intrinsically active processes/mechanisms needing energy consumption. This means that we cannot describe/model their behavior as passive electric objects.

• We now go beyond the passive models, and we introduce into our models the concept of ion-channel selectivity, which is the main active mechanism of ion transfer across the membrane.

• From an electric circuit point of view, this means having a conductance which varies as a function of the ionic channel state (high conductance for open channels, low conductance for closed channels).

2
Active neuron models
• The Hodgkin–Huxley model, or conductance-based model, is a
mathematical model that describes how action potentials in
neurons are initiated and propagated. It is a set of nonlinear
differential equations that approximates the electrical
characteristics of excitable cells such as neurons, and hence it is
a continuous time model.

• Alan Lloyd Hodgkin and Andrew Fielding Huxley described the


model in 1952 to explain the ionic mechanisms underlying the
initiation and propagation of action potentials in the squid giant
axon. They received the 1963 Nobel Prize in Physiology or
Medicine for this work.

3
Active neuron models
• This was possible thanks to the voltage clamp method for measuring ionic currents (K. Cole). In the previous models we took into account current clamping, in which a fixed current was injected and the membrane potential was measured. With the voltage clamp, the voltage is controlled and kept fixed, and the ionic current is analyzed. This ionic current is generated by the two main ionic species in the excitable cell membrane, Na+ and K+.

• It is thanks to this technique that HH made their huge discovery.

4
Active and Passive Membrane

(figure: membrane equivalent circuit with Na, K and leakage (L) branches between the extracellular medium and the cytoplasm)

• Active mechanisms across the membrane are determined by the selective opening of ion-specific channels. This is equivalent to having a membrane conductance varying as a function of the ionic channel openness (low conductance for closed channels, high conductance for open channels).
• This non-linear mechanism can be schematized as the electric circuit model in the figure, considering the Nernst potentials for each ion and the non-linear trans-membrane resistances for each ionic channel.
5
Active and Passive Membrane

(figure: membrane equivalent circuit with Na, K and leakage (L) branches)

• From an electrical point of view, this can be modeled by separately considering the non-linear contribution of the ionic channels and the linear contribution L of the membrane. It is also convenient to compact the circuit design on Na and K as the major contributors.
• We can identify the different contributions: the capacitive effect, the linear resistive effect with the resting potential, and the non-linear effects due to the dynamic behavior of the Na and K channels.
6
Currents in the active membrane
• We analyze the circuit:

(figure: membrane equivalent circuit with the external current Iext, and Na, K and leakage branches between the extracellular medium and the cytoplasm)

7
Currents in the active membrane

• In this equation, the linear conductance gL is accompanied by the two non-linear conductances gNa and gK, which vary as a function of the potential and of time.
• We now detail the contributions of gNa and gK, as a function of the physiological processes linked to the sodium and potassium channels.

8
Currents in the active membrane: Potassium
• The K channel is composed of 4 identical modules, each of which can be either open or closed, with a probability of being open equal to n.

• The channel is open if and only if all 4 modules are open: this probability is n⁴.

(figure: a single module open with probability n; the whole channel open with probability n⁴)

9
Currents in the active membrane: Potassium
• We can thus express ḡK (the conductance of the membrane related to
the potassium channel), when the channel is open (it has been proven
that it is stable in time and independent from voltage):

10
Module dynamics in the Potassium channel

• n depends on two factors, α and β, which in turn depend on the membrane voltage.

• In particular, α is the rate of transition from the closed to the open state, while β is the rate of the opposite transition.

• We can obtain n as a function of α and β.

11
Module dynamics in the Potassium channel
• Defining the time constant τn as indicated, we can consider the
stationary case n∞

12
Potassium channel is voltage-dependent!
• Knowing the behavior of α and β as a function of membrane potential, we
can calculate the probability of opening the channel as a function of voltage.
For potassium, in resting conditions (Vm ~ -65 mV), this probability is 0.1,
meaning that almost all the potassium channels are closed.

13
Potassium channel is voltage-dependent!
• The other important factor is the time constant, which tells us the switching speed when passing from the closed to the open channel condition. It is also voltage dependent.

14
Currents in the active membrane: Sodium
• The situation is slightly more complex for sodium, as the channel is composed of 3 identical fast modules and a single slow module.

• We define the two probabilities m (for the fast modules) and h (for the slow module) of the corresponding module being in the open state.

• The probability of having an open sodium channel is m³h.

15
Currents in the active membrane: Sodium
• We can thus express ḡNa (the conductance of the membrane related to
the sodium channel), when the channel is open (also in this case it has
been proven that it is stable in time and independent from voltage):

16
Module dynamics in the Sodium channel

Fast
Activation
Modules

Slow
Inactivation
Module

17
Sodium channel is voltage dependent !
• Given the behavior shown in the figure, the fast modules are closed at rest, while the slow module is open. Combining, the channel is closed at rest.
• It is worth noting the huge difference between the time constants.

18
Hodgkin-Huxley Model
• We can now substitute all the previous expressions into the general model, with the corresponding probabilities and conductances.
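Since the equations of the complete model are shown as figures in the original slides, the sketch below integrates a standard Hodgkin-Huxley formulation (squid-axon rate functions, voltages in mV, rest near −65 mV) with forward Euler. Conductances, reversal potentials and the stimulus amplitude follow commonly published values and should be read as an illustrative assumption, not as the exact parameters used in the lecture.

```python
import numpy as np

C_m  = 1.0                              # membrane capacitance (uF/cm^2), assumed
g_Na, g_K, g_L = 120.0, 36.0, 0.3       # maximal conductances (mS/cm^2), standard values
E_Na, E_K, E_L = 50.0, -77.0, -54.4     # reversal potentials (mV), standard values

# Standard squid-axon rate functions (1/ms), voltages in mV
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

dt, t_end = 0.01, 50.0                           # ms
V, n, m, h = -65.0, 0.317, 0.053, 0.596          # approximate resting state
spikes = 0
for step in range(int(t_end / dt)):
    I_ext = 10.0 if 5.0 <= step * dt <= 45.0 else 0.0   # step current (uA/cm^2), assumed
    I_Na = g_Na * m**3 * h * (V - E_Na)                  # transient sodium current
    I_K  = g_K * n**4 * (V - E_K)                        # persistent potassium current
    I_L  = g_L * (V - E_L)                               # leakage current
    V_new = V + dt * (I_ext - I_Na - I_K - I_L) / C_m
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    if V < 0.0 <= V_new:                                 # upward zero crossing = one spike
        spikes += 1
    V = V_new

print(f"action potentials generated during the current step: {spikes}")
```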

19
Comparison among modules
• Probabilities and time constants of all the studied modules for K+ and Na+

20
Action Potential and HH parameters

21
Generation of the action potential and excitability
– We understood the mechanisms underlying AP generation

– However, we know that there is a threshold for excitability,


and if trans-membrane potential does not exceed this
threshold, there is no AP.

– Why?
• We must take into account the dynamics of the single ionic species

22
Hodgkin-Huxley model dynamics
• Let's hypothesize that an injected current causes the membrane potential to increase. When the potential reaches about −50 mV, the sodium activation variable m goes from near 0 towards 1, while the slow inactivation module h holds at around 0.6. As a result, there is a huge influx of Na+ ions from the extracellular medium (an inward, negative current).
• This increasing current brings the membrane potential up to about +50 mV. This increase determines the activation of the K+ module n, and the net current changes direction.
• The potassium current persists longer than the sodium current, and there is a hyperpolarization below the resting potential level.
• Then, the potassium channels close and the potential goes back to its resting value.

23
Neural Engineering 2016-2017
Cristiano De Marchis

Exercise 6: Muscle Synergies during pedaling under different conditions


The file EMG_pedal_spring.mat contains eight EMG signals recorded from the following lower
limb muscles of a healthy subject during a pedaling task:
- Gluteus Maximus
- Biceps Femoris
- Gastrocnemius Medialis
- Soleus
- Rectus Femoris
- Vastus Medialis
- Vastus Lateralis
- Tibialis Anterior
EMG signals are sampled at 1 kHz. The file also contains the biomechanical reference from the
pedal angle, recorded at 1 kHz. The first part of the task contains EMG signals from a self-
selected speed pedaling task, while the last 8 seconds contain the EMG activity of an all-out
sprint.
1. Organize the dataset to obtain two matrices Mnormal and Msprint. Mnormal must be
selected from the middle of the normal pedaling condition, taking a number of cycles
comparable to those executed during the 8s all out sprint.
2. Identify the modular control structures of the two pedaling tasks, decomposing the
matrices Mnormal and Msprint through NNMF.
3. Compare the modular organization identified from the two tasks, answering the
following questions:

a. Is there any change in the dimensionality of motor control (i.e. number of


synergies)?
b. Are the synergy vectors comparable between tasks? (suggestion: use the
normalized scalar product to compare synergy vectors).
c. Is there any adaptation in the synergy activation coefficients H?
d. Can the synergies extracted from Mnormal represent the muscle coordination of
Msprint? In order to answer this question, you will need a modification of the NNMF
algorithm: NNMF must be applied keeping Wnormal fixed, and letting only H
update with the iterative update rules. Write a function called Nonnegative
Reconstruction implementing this slight change to the NNMF algorithm.
(Suggestion: check the VAF obtained from the Nonnegative Reconstruction to
verify whether the reconstruction is feasible.)

Please note that you will need a proper normalization of the EMG signals across the two tasks,
so that the activity of the same muscle can be compared across conditions.
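A possible Python sketch of the "Nonnegative Reconstruction" step requested in point 3d: the multiplicative update of standard NNMF is applied to H only, while W is kept fixed. The function name, the random test data and the VAF definition (1 − SSE/SST) are assumptions for illustration; the exercise itself refers to MATLAB-style .mat data.

```python
import numpy as np

def nonnegative_reconstruction(M, W, n_iter=500, eps=1e-12):
    """Reconstruct M (muscles x samples) with a FIXED synergy matrix W (muscles x synergies),
    updating only the activations H (synergies x samples) with the standard multiplicative
    rule for the Frobenius cost; returns H and the VAF of the reconstruction."""
    H = np.random.rand(W.shape[1], M.shape[1])
    for _ in range(n_iter):
        H *= (W.T @ M) / (W.T @ W @ H + eps)   # only H is updated, W stays fixed
    M_hat = W @ H
    vaf = 1.0 - np.sum((M - M_hat) ** 2) / np.sum(M ** 2)
    return H, vaf

# Minimal usage example with random data (stand-ins for Msprint and Wnormal)
rng = np.random.default_rng(0)
W_true, H_true = rng.random((8, 4)), rng.random((4, 1000))
M = W_true @ H_true
H_fit, vaf = nonnegative_reconstruction(M, W_true)
print(f"VAF of the nonnegative reconstruction: {vaf:.3f}")
```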
Neural
Engineering
2016-2017
L06 - Neuron Active Models:
Channel Gating and AP propagation in the
Hodgkin Huxley Model
Cristiano De Marchis
cristiano.demarchis@uniroma3.it

1
Voltage clamp
• We studied how the membrane potential affects ionic conductances and
currents, assuming that the potential is fixed at a certain value Vc controlled by
an experimenter. To maintain a membrane potential constant (clamped), one
injects a current proportional to the difference Vc-V (voltage-clamp). In stationary
conditions dV/dt=0, it follows that the injected current I equals the net current
generated by the membrane conductances
Voltage clamp
• In a typical voltage clamp experiment, the membrane potential is held at a certain resting value Vc, and then set to a new value Vs. The injected current needed to stabilize the potential at the new value is a function of t, Vc and Vs. The current initially changes to accommodate the instantaneous voltage change to Vs. The amplitude of the current jump is: I_jump = g (Vs − Vc).

• Then, time- and voltage-dependent processes start to occur, and the current decreases and then increases. The value at the negative peak depends only on Vc and Vs, and it is called the instantaneous current-voltage (I-V) relation I0(Vc,Vs). The asymptotic value for t → ∞ depends only on Vs and it is called the steady-state current-voltage (I-V) relation, indicated as I∞(Vs).
Conductances
• Electrical conductances of individual channels might be controlled by
gates, which move the channels between open and closed states. The
gates may be dependent on different factors:
• Membrane voltage
• Other agents (such as neurotransmitters or neuromodulators)

• Transitions between open and closed states of individual channels are


stochastic. However, the net current generated by an ensemble of
identical channels can be described by:

𝐼 = 𝑔 𝑝 (𝑉 − 𝐸)
Conductances
𝐼 = 𝑔 𝑝 (𝑉 − 𝐸)

• g is the maximal conductance of the population


• p is the average proportion of channels in the open state
• E is the reverse potential of the current, i.e. the potential at which the
current reverses its direction.

• If the channels are selective to a single ionic species, then E equals


the Nernst equilibrium potential for that ionic species
Channel gating: general definitions
• When the gates are sensitive to the membrane potential, the channels are defined as voltage-gated. There are two types of gates: those that activate or open the channels, and those that inactivate or close the channels. We call m the generic probability of an activation gate being in the open state, and h the generic probability of an inactivation gate being in the open state. The proportion of open channels in a large population is:

p = m^a h^b

• Where a is the number of activation gates and b is the number of inactivation gates per channel.
Channel gating: general definitions
𝑝 = 𝑚𝑎 ℎ 𝑏
• A channel can be:
• Partially activated (0<m<1)
• Completely activated (m=1)
• Not activated or deactivated (m=0)
• Inactivated (h=0)
• Released from inactivation(0<h<1)
• Completely de-inactivated (h=1)

• Some channels do not have inactivation gates (b = 0), so their open probability is described by p = m^a.
• Channels that do not inactivate (b = 0) result in persistent currents (e.g. potassium in the HH model of the squid's giant axon).
• Channels that inactivate (b ≠ 0) result in transient currents (e.g. sodium in the HH model of the squid's giant axon).
Channel gating: activation of persistent
currents
• The dynamics of an activation variable m can be described by a first order differential equation of the form τ(V) dm/dt = m∞(V) − m.

• Where
• m∞ is the steady state activation function, and has a sigmoidal shape
• τ is the time constant, and has a unimodal shape

• they can be measured experimentally, and smaller values of τ indicate faster dynamics of m.
Channel gating: measurement of persistent
currents
• How can we experimentally obtain
m∞ and τ for a channel that has no
inactivation?

• Initially we keep the membrane at a


hyperpolarized value Vo such that
all the activation gates are closed
and I = 0.

• Then we step increase V to a


greater value Vs, and wait until the
current reaches the asymptotic
value
Channel gating: measurement of persistent
currents
• We repeat the experiment for different stepping voltages Vs, determining the corresponding steady-state currents, and obtaining the steady-state I-V relation. Knowing that:

I∞(V) = g m∞(V) (V − E)

• We can obtain the steady-state activation curve m∞ by dividing I∞(V) by (V − E) and normalizing so that max m∞(V) = 1.
Channel gating: measurement of inactivation
of transient currents
• The dynamics of an inactivation variable h can be described by a first order differential equation of the form τ(V) dh/dt = h∞(V) − h.

• Where
• h∞ is the steady state inactivation function, and has a sigmoidal shape
• τ is the time constant, and has a unimodal shape

• They can be measured experimentally.

Channel gating: measurement of inactivation
of transient currents
• We have to measure h in the presence of an activation gate m. The measurement is based on the particularly large time constant of inactivation.
• First we hold the membrane potential at a fixed Vs, so that both m and h reach their steady state (which has yet to be determined).
• Then we step V up to a sufficiently high value V0 such that m saturates to 1 during the first few milliseconds. h continues to be near its asymptotic value, which can be found from the peak value of the current:

I_s = g · 1 · h_s (V0 − E)
Hodgkin Huxley Model

The Hodgkin-Huxley model is the most accepted model for describing the kinetics of the channels in all kinds of biological neurons.
• A small pulse I(t) produces a small positive perturbation of V (depolarization), which results in a small net current that drives V back to rest (repolarization).
• An intermediate size pulse of current produces a perturbation that is amplified significantly because the membrane conductances depend on V. Such a non-linear amplification causes V to deviate considerably from Vrest → an action potential or spike.
• Strong depolarization increases the activation variables m and n and decreases the inactivation variable h. Since τm(V) is relatively small, the variable m is relatively fast. Fast activation of the Na+ conductance drives V toward ENa, resulting in further depolarization and further activation of gNa. This positive feedback loop results in the upstroke of V.
• While V moves toward ENa, the slower gating variables intervene. Variable h → 0, causing inactivation of the Na+ current, and variable n → 1, causing slow activation of the outward K+ current. The latter repolarizes the membrane potential toward Vrest.
• When V is near Vrest, the voltage-sensitive time constants τn(V) and τh(V) are relatively large. Therefore, the recovery of the variables n and h is slow. In particular, the outward K+ current continues to be activated (n is large) even after the action potential downstroke, thereby causing V to go below Vrest toward EK: a phenomenon known as after-hyperpolarization.
• In addition, the Na+ current continues to be inactivated (h is small) and is not available for any regenerative function. The Hodgkin-Huxley system cannot generate another action potential during this absolute refractory period. While the current de-inactivates, the system becomes able to generate an action potential provided that the stimulus is relatively strong (relative refractory period).
Action Potential Propagation
(figure: discretized cable with membrane current Im and longitudinal currents Il(x), Il(x+Δx) between V(x) and V(x+Δx))

• From the cable equation we have:

• We now insert the HH model for calculating Im, and we obtain:


Action Potential Propagation

• Combining the equation for trans-membrane potential propagation along a cable and the HH model, we obtain a plausible physiological example of reaction-diffusion, according to the following equation:
Action Potential Propagation
• HH showed that the trans-membrane potential propagates along the axon with constant shape and velocity u:

• From the previous, replacing in the following

• We obtain a second order differential equation:

$$\frac{1}{r_a C_m u^2}\, \frac{d^2 V}{dt^2} = \frac{dV}{dt} + \frac{I_{Na} + I_K + I_L}{C_m}$$

Action Potential Propagation

$$\frac{1}{r_a C_m u^2}\, \frac{d^2 V}{dt^2} = \frac{dV}{dt} + \frac{I_{Na} + I_K + I_L}{C_m}$$

• where the constant 1/(ra Cm u²) contains the unknown propagation velocity u.
HH in myelinated fibers

• Equivalent circuit:

(figure: HH-type nodes with Na and leakage (L) branches at the nodes of Ranvier, connected through the cytoplasm along the myelinated internodes)
Neural
Engineering
2016-2017
L07 - Neuron Active Models:
FitzHugh-Nagumo model
Cristiano De Marchis
cristiano.demarchis@uniroma3.it

1
An alternative model: FitzHugh-Nagumo
• An active model, conceptually alternative to the one proposed by HH, is the one proposed by FitzHugh (1961) and implemented as an electronic circuit by Nagumo (1962).

• The idea is that of creating an electrically plausible model based on the experimental evidence, but without considering the complex electrochemical processes that characterize the HH model.

• We don't have to create circuit branches that simulate the selective opening of the ion channels; instead, we must create a circuit model that is able, as a whole, to respond electrically just as the membrane does.

2
An alternative model: FitzHugh-Nagumo
• The electric circuit model represented in the figure has the following features:

– A variable that is proportional to the potential, which contains a cubic non-linearity that is responsible for the spike generation through positive feedback;

– A recovery variable working through a slower feedback mechanism;

– These two variables can be somehow related (not directly!) to the opening of the Na+ channels (positive feedback, spike generation) and to the opening of the K+ channels together with the de-inactivation of h (slower negative feedback going back to the resting conditions, with slight hyperpolarization).

3
FitzHugh-Nagumo: system equations

• Conceptually, the situation can be modeled as in the circuit: considering the nodal current equilibrium, the injected current splits into:
– the capacitive branch (C dV/dt),
– the active linear branch W,
– the non-linear branch F(V) = V − V³/3, which can be modeled by a tunnel diode.

• The dynamics of the circuit can be described by the equations:

$$C\dot V = I - F(V) - W$$
$$L\dot W = E - RW + V$$
4
FitzHugh-Nagumo: system equations
• The generic differential equations governing the circuit can be expressed as a general pair of differential equations involving two variables:

$$\dot V = f(V) - W + I$$
$$\dot W = \Phi\,(V + a - bW)$$

• This model should reproduce the basic experimental evidence on the action potential.

• We have neglected the biological principles of the membrane, preserving only the electrical properties.

5
FitzHugh-Nagumo: system equations
• FitzHugh and Nagumo extracted the numerical values of the constants, as reported in the equations on the right:

$$\dot V = f(V) - W + I \;\;\longrightarrow\;\; \dot V = V - \frac{V^3}{3} - W + I$$
$$\dot W = \Phi\,(V + a - bW) \;\;\longrightarrow\;\; \dot W = 0.08\,(V + 0.7 - 0.8\,W)$$

• In general, it is a two-dimensional dynamical system.

• To describe such a system, we use the typical representations of non-linear systems.
6
System equations: dynamical systems

$$\dot V = V - \frac{V^3}{3} - W + I$$
$$\dot W = 0.08\,(V + 0.7 - 0.8\,W)$$

• The pair of differential equations of the FH-N model can be considered as an instance of the following general model:

$$\dot v = f(v,w), \qquad \dot w = g(v,w)$$

• The two functions represent the evolution of the system in the two-dimensional space of the considered variables.

7
System equations: dynamical systems
$$\dot v = f(v,w), \qquad \dot w = g(v,w)$$

• At every generic point (ṽ,w̃) we can define a vector (f(ṽ,w̃), g(ṽ,w̃)) representing the evolution of the system in the v-w plane.

• If we define this vector for every point of the (v,w) plane, we obtain the so-called phase plane representation of the dynamical system.

(figure: phase plane with the vector field (f(v,w), g(v,w)) plotted over the v-w axes)
8
System equations: dynamical systems
$$\dot v = f(v,w), \qquad \dot w = g(v,w)$$

• The FH-N model is an example of a relaxation oscillator: if an external stimulus exceeds a threshold, the system will describe a trajectory in the phase space before going back to the resting state.

• This behavior is typical of spikes in the nervous system.

(figure: phase plane with the vector field and an example trajectory)
9
System equations: dynamical systems

• This representation indicates the direction of the system evolution once we know the
state of the system at the point of the plane with coordinates (v, w).

• By using this representation in the phase plane, we can observe and predict the
evolution of the whole system.

[figure: phase plane with axes v and w]

10
Dynamical systems: definitions
• From the phase plane representation we can identify some particular points: those
points at which the vector tends to have zero amplitude.

• In particular, we define the nullcline for the function g or f as the set of all the points
(ṽ, w̃) in which g(ṽ, w̃) = 0 or f(ṽ, w̃) = 0, respectively.

11
Dynamical systems: definitions
• In two-dimensional systems we have curves along which f(ṽ, w̃) = 0 and curves along
which g(ṽ, w̃) = 0: these curves are the nullclines. Graphically, this means that the
vector field is vertical or horizontal at the points of a nullcline.
• The points at which the two nullclines intersect are equilibrium points of the
dynamical system (both f(ṽ, w̃) and g(ṽ, w̃) are zero there).
• Let's identify the nullclines of the FitzHugh-Nagumo model, for the value I = 0:

dV/dt = V − V^3/3 − W + I
dW/dt = 0.08(V + 0.7 − 0.8W)
12
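As a worked sketch (assuming the constants above and I = 0): setting dV/dt = 0 gives the V-nullcline W = V − V^3/3 + I, and setting dW/dt = 0 gives the W-nullcline W = (V + 0.7)/0.8. The MATLAB snippet below plots the two curves and locates their intersection, i.e. the equilibrium point:

% Nullclines and equilibrium point of the FitzHugh-Nagumo model for I = 0 (illustrative sketch)
I = 0;
v = linspace(-2.5, 2.5, 500);
w_vnull = v - v.^3/3 + I;            % V-nullcline: dV/dt = 0
w_wnull = (v + 0.7)/0.8;             % W-nullcline: dW/dt = 0
plot(v, w_vnull, v, w_wnull); xlabel('v'); ylabel('w'); legend('V-nullcline', 'W-nullcline');
% equilibrium point: root of the difference between the two nullclines
veq = fzero(@(x) (x - x.^3/3 + I) - (x + 0.7)/0.8, -1);
weq = (veq + 0.7)/0.8;               % for I = 0, approximately (veq, weq) = (-1.20, -0.62)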
Dynamical systems: definitions
• Given the initial state of the system (v0,w0), we define a trajectory as a curve that is
everywhere tangent to the vector field in the phase plane.

• That trajectory defines the evolution of the system, once the initial conditions (v0,w0)
are known.

• If this trajectory is a closed curve, we define it a limit cycle, and the variables will
assume a periodic behavior.

• More specifically, in biological systems the equilibrium points represent those points
toward which the system tends when no external excitation is present.

13
FH-N model as a dynamical system
• In the specific case of the FitzHugh-Nagumo model, the two state variables are v (trans-membrane
potential) and w (recovery variable, related to channel activation)

• The phase plane is reported in figure, using the constant values experimentally obtained by
FitzHugh.

14
FH-N model as a dynamical system
• The two nullcline
trajectories for v
and w are shown in
figure. They cross
in correspondence
of the equilibrium
point.
• This point typically corresponds to resting conditions, when the membrane potential
equals -70 mV and the amount of channel activation is close to zero

15
FH-N model as a dynamical system
• When we perturb the
system from its
equilibrium condition,
the evolution of the
system will vary as a
function of its
position with respect
to the V nullcline

• If the trans-
membrane potential
increases and
overcomes the
nullcline border, its
trajectory will be
similar to the one
dashed in figure, and
it corresponds to the
generation of an
action potential.

16
FH-N model as a dynamical system
• It is worth noting that this model theoretically explains the presence of the refractory
period (absolute and relative refractory periods).

• If we are close to the red circle (i.e. a membrane potential value close to its resting
value) and we try to inject a current, it has no effect.

• This happens because the evolution of the system would bring the state variables to
the limit cycle anyway.

17
FH-N model as a dynamical system
• The second interesting zone is the relative refractory period: if we are in that zone, we
will need an increase of the membrane potential generated by a strong injection of
current in order to induce a new spike.

• This means that passing from zone a to zone d is possible, but we need a lot of
additional energy compared to the one needed in resting conditions.

18
FH-N: low currents

• Trajectories described when, in equilibrium conditions,


the injected current is too weak. We don’t get out of
zones (b) and (c), and no spike is generated.

19
FH-N: low currents

• Trajectories described when, in equilibrium conditions, the


injected current is too weak. We don’t get out of zones (b) and
(c), and no spike is generated.
• Analyzing in detail the behavior of the two state variables, the
recovery variable w does not vary substantially.
20
FH-N: generation of the action potential

• When, from the equilibrium conditions, the injected


current is enough to enter the (a) zone, an action
potential is generated and the trajectory is the one
described in figure
21
FH-N: generation of the action potential

• When, from the equilibrium conditions, the injected current is


enough to enter the (a) zone, an action potential is generated
and the trajectory is the one described in figure
• In this case, the recovery variable w substantially changes its
value.
22
FH-N: Refractory period
• Here we represent the
absolute refractory
period. In that period,
even if we inject a
huge amount of
current, the dynamical
system doesn’t
respond, and it goes
back to zone (c),
without generating
new action potentials

23
FH-N: response to constant excitation
• What happens to the model if we inject a constant
current?
• Given the initial equations, g(v,w) does not change, while
f(v,w) changes, moving the v frontier (the v-nullcline)
up/down in case of positive/negative current.
• The interesting thing is that, if this value is high enough, the system has no stable
equilibrium points, and the neuron fires self-sustained action potentials, with a
repetition frequency proportional to the excess of current with respect to the
instability threshold current.

24
FH-N: response to constant excitation

25
Active models: general considerations
• The HH model (and its variations) is an instance of biologically-driven models, i.e.
models which propose an electric equivalent based on the real, physiologically-driven
electrochemical phenomena
– The advantage is that these models, if accurately tuned with the obtained experimental
data, will behave as the original.
– However, redundancy complicates the solution of the analytical problem, and uncertainty
on the model parameters propagates in a non-controllable way

26
Active models: general considerations
• The FH-N (FitzHugh-Nagumo) model and its variations are instances of reductionist
models, i.e. models which neglect the physiological processes underlying the
phenomenon, and only seek to reproduce the experimentally observed behavior.
– The advantage is that these models are simple (few parameters, simple functions), have a
low computational cost, and the propagation of uncertainties is limited
– The model also explains some particular phenomena related to neuron excitation
(accommodation, anodal break excitation)
– However, by neglecting the underlying physiology, some experimental effects that are
observed only in specific situations cannot be modeled. These particular cases might be of
huge importance, such as modifications due to pathologies.
– Another limitation of the FitzHugh-Nagumo model is the absence of an excitability
threshold voltage of the neuron, and the absence of the all-or-none behavior
27
All-or-none

FH-N has no all-or-none behavior: the amplitude of AP


depends on stimulus intensity

28
Accommodation

FH-N can explain accommodation: if you slowly increase


the stimulus, you don’t have an AP. You need a fast
increase

29
Anodal break excitation

If you apply an anodal current, and suddenly remove it, you


get an AP.

30
Excitation block
If you ramp up the stimulus intensity, you get action potentials until the oscillation is
blocked by the excitation itself (when the w-nullcline intersects the descending
unstable branch of the V-nullcline)

31
Integrate and Fire model
• To conclude, a tribute to the oldest model of the neuron, introduced by Lapicque at
the beginning of the XX century (when action potential generation and propagation
were not known)
• Lapicque hypothesized that the neural transmission mechanism takes place through an
all-or-none process determined by a threshold. A capacitance integrates the injected
current, increasing the voltage, and the neuron fires when the voltage overcomes a
specific threshold.
• Even though it is a really simplified model, the idea was correct and it correctly
reproduces the considerations on the firing frequency.

32
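A minimal sketch of a leaky integrate-and-fire neuron in MATLAB (all parameter values are illustrative assumptions, not taken from the slides): the capacitance integrates the injected current and, when the threshold is crossed, a spike is registered and the voltage is reset.

% Leaky integrate-and-fire neuron (illustrative sketch, arbitrary parameters)
dt = 0.1e-3;  T = 0.5;  t = 0:dt:T;       % time base [s]
C = 200e-12;  R = 100e6;                  % membrane capacitance [F] and resistance [Ohm]
Vrest = -70e-3;  Vth = -50e-3;  Vreset = -70e-3;
I = 0.3e-9;                               % constant injected current [A]
V = Vrest*ones(size(t));  spikes = [];
for n = 1:numel(t)-1
    dV = (-(V(n) - Vrest)/R + I) / C;     % C dV/dt = -(V - Vrest)/R + I
    V(n+1) = V(n) + dt*dV;
    if V(n+1) >= Vth                      % threshold crossing: fire and reset
        spikes(end+1) = t(n+1);           %#ok<AGROW>
        V(n+1) = Vreset;
    end
end
firing_rate = numel(spikes)/T;            % the firing rate grows with the injected current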
Integrate and Fire model
• This simple model is often used in neuronal network modeling
(connectivity models), where we are not interested in the propagation
of the action potential along the axon, but we are only interested in
the transmission of information among different neurons.

• This model is also often used to verify the behavior of the firing rate
of single neurons, both with simulations and with experimental data

33
Neural
Engineering
2016-2017
L08 – Principles of Neural Recording
and Introduction to Neural Signal
Processing
Cristiano De Marchis
cristiano.demarchis@uniroma3.it

1
Recording Neural Activity
• Systems for recording neural activity are not different
from the generic solution for recording physiological
signals and biopotentials. They constitute the
microscopic counterpart of these recording systems, and
they aim at recording the electrochemical potentials
introduced so far.
– However, these systems have some peculiarities: the activity to
be recorded is immersed in an inhomogeneous tissue, where
the recording point is not easy to localize, and the orientation
with respect to the membrane is not always controllable.
– The recording interface needs a specific electrode configuration
and an amplification system with a high input impedance and
low noise, in order to make the recorded signals suitable for
interpretation and analysis.
2
Recording Neural Activity: ideal intracellular case

• The ideal method for recording the trans-membrane potential Vm, and obtaining the
features of the action potential, is having two sensing micro-electrodes, placed inside
and outside the membrane, respectively. The external electrode can be on the
membrane or in proximity of the membrane. From this recording scheme we can
obtain the behavior of Vm by amplifying the signal.

• Alternatively, we can collect the current Im flowing across the membrane by using a
patch amplifier (Vout = −Rpatch·Im).

3
Recording Neural Activity: ideal intracellular case

• One might need to electrically manipulate in vitro the membrane in order to


generate an action potential. We can use, for example, voltage clamping
techniques.

• In this configuration, a current is injected proportional to the difference between the
trans-membrane potential and a reference potential.
• In this way, we can keep the trans-membrane potential at a constant value, and
measure the corresponding ionic current.

4
Recording Neural Activity: extracellular case

• In real conditions, excitable cells’ sizes make it really difficult


to record inside the cell. In these cases we must accept
extracellular recordings:

– The sensing electrode is placed in proximity of the neuron; the dimension
of the electrode edge must be sufficiently small (≈ μm)

5
Electrodes

Electrodes: Recording / Stimulation; Electrode Arrays

– Surface Electrodes: Metal, Suction, Conductive gel
– Invasive Electrodes: Needle, Wire
– Microelectrodes: Solid Metal, Supported Metal, Glass Micropipette

6
Electrodes and uElectrodes
• We will focus on the recording of neural activity (reminding that we can also
stimulate). We can make the following considerations:
– In the case of surface or subcutaneous recording of cortical activity (EEG or ECoG), we
have electrodes with a significant size (we can “see” them, and they vary in size from a
few mm to some cm);
– When we record in proximity of the neural cell (and in the case of intracellular
recording) we deal with microelectrodes (uElectrodes) that must be developed with
specific features, based on the environment where they have to record. The typical
impedances are in the order of hundreds of kΩ.
– In the case of EEG electrodes, they are metal electrodes with an electrolyte conductive
gel. The electrode-electrolyte interface can be modeled as an RC component. It is thus
necessary for EEG and ECoG that the electrode impedance is as low as possible (typically
lower than 5 kΩ).
– In the case of subcutaneous ECoG electrodes, given the reduced size, the impedance
tends to be higher, and this can imply a higher noise on the recorded data. However,
we are closer to the signal source, so the attenuation is lower.

7
uElectrodes

• The original structure of a microelectrode is that of the glass micropipette, filled with
an electrolyte solution to improve conductivity (decrease the input impedance). Metal
conductive wires can also be used instead of the electrolyte solution. Typically, input
impedances higher than 200 kΩ are reached. Micropipettes are typically used for
intracellular recordings

• Microelectrodes with metal conductors are used in the case of extracellular recordings.
In this situation we are less interested in the waveform shape and more interested in
the timing of the potentials

8
Neural Signal Processing: context
• We have seen that recording a membrane potential is not conceptually different from a
classical acquisition system for a generic biopotential.
• We have only quantitative differences, and the features of the acquisition system depend
on the environment where the recording is performed.
• The conceptual model is reported in figure, and it refers to the recording of neural activity
from one neuron (or from a small group of neurons) through extracellular recording
electrodes:

9
Neural Signal Processing: context
• Taking a look at the figure, we notice a first element which is particularly important: in the case
of extracellular recording (with a uelectrode placed in proximity of the target neuron), the
contribution to the recorded potential coming from nearby neurons is often (if not always)
present. These contributions can come from bigger neurons, with different orientations, and with
different cellular and functional features.
• The identification of the action potential, in an environment where other neurons are firing, is a
phase of fundamental importance for obtaining reliable information.

10
Neural Signal Processing: context
• We will use the term spike to define the generation of an action potential as recorded with an extracellular
uelectrode placed in proximity of the neuron/s of interest and a uelectrode placed in a reference (neurally
inactive) zone. In this configuration, the neural activity at rest is zero, while it changes when the propagation of one
or more action potentials is “felt” by the active uelectrode.
• Thus, a spike is not an action potential, although it is temporally linked to the AP, and its features will depend on the
morphological features of the action potential.
• From an amplitude point of view, we are dealing with potentials that are two orders of magnitude lower than the
AP recorded across the membrane. If we have big neurons, and the uelectrode is placed really close to the neuron,
we can reach amplitudes of some mV; however, in most cases, we work with tens or hundreds of uV.

11
Neural Signal Processing: context

• By using extracellular recordings, working with the acquisition system schematized in figure, it is
common to have situations like the one presented in the plot. This situation is far from what we expect
from a single action potential. In particular:
– Are those spikes coming from action potentials of more than one neuron?
– If so, how can we identify them (i.e., how can we state that a certain waveform is related to a specific neuron x
and not to another neuron y)?
– Are we able to classify in the presence of a substantial background noise?
– And, if we have superposition among action potentials, how can we decompose the single contributions?

12
Starting point: The spiking model
• If we consider the spiking activity APk(t) caused by the action
potentials of a single neuron k, it can be represented by the
following ideal model:
AP_k(t) = Σ_{i=1}^{N_k} AP_ki(t − IPI_ki)

• That is, a temporal superposition of the i-th instances of the spike coming from
the k-th neuron. The temporal distance between two consecutive spikes is IPI_ki,
which varies as a function of time and the type of neuron. IPI is the acronym
for Inter Pulse Interval; alternatively, the acronym ISI (Inter Spike Interval) is
frequently used.

13
The spiking model
AP_k(t) = Σ_{i=1}^{N_k} AP_ki(t − IPI_ki)
• When we move from ideal to real conditions (noise of the surrounding environment,
attenuation due to extracellular medium), the activity measured from a uelectrode placed
at a certain distance from a set of neurons can be expressed according to the following
model, hypothesizing that the recorded neural activity is generated by M spiking neurons:
V_extracell(t) = Σ_{k=1}^{M} α_k AP_k(t) + n(t) = Σ_{k=1}^{M} α_k Σ_{i=1}^{N_k} AP_ki(t − IPI_ki) + n(t)

• Where αk is the attenuation introduced by the distance between the uelectrode


and the k-th spiking neuron. In this model we hypothesize that the position of the
spiking neurons with respect to the recording electrode does not vary over time.
• The second hypothesis is that, on the generic k-th neuron, the AP waveform
repeats always in the same way (same shape, same duration), so that we have
repetitions of the same waveforms at a temporal distance IPIki.
• In this model we also add a realization of noise n(t), caused by environmental noise
and other generic activity not correlated with the neural spiking activity.

14
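To make the model concrete, the sketch below (illustrative MATLAB code, not taken from the course material) synthesizes an extracellular trace: each of M neurons is represented by a pulse train with random inter-pulse intervals, convolved with its own spike waveform AP_k, scaled by α_k and summed with additive noise n(t).

% Synthetic extracellular signal from M spiking neurons (illustrative sketch)
Fs = 20e3;  T = 2;  Ns = round(Fs*T);           % sampling rate [Hz] and duration [s]
M = 3;  alpha = [1.0 0.6 0.4];  rate = [10 20 15];   % attenuation factors and firing rates [spikes/s]
tt = (0:round(1.5e-3*Fs)-1)/Fs;                 % 1.5 ms support for the spike waveforms
x = zeros(1, Ns);
for k = 1:M
    ap = sin(2*pi*tt/max(tt)) .* exp(-tt/(0.4e-3*k));         % biphasic waveform, slightly different per neuron
    ipi = cumsum(-log(rand(1, round(2*rate(k)*T)))/rate(k));  % random inter-pulse intervals [s]
    train = zeros(1, Ns);
    train(floor(ipi(ipi < T)*Fs) + 1) = 1;      % Dirac pulse train of neuron k
    x = x + alpha(k) * conv(train, ap, 'same'); % LTI filtering with the spike waveform
end
x = x + 0.05*randn(1, Ns);                      % additive background noise n(t)
plot((0:Ns-1)/Fs, x); xlabel('time (s)'); ylabel('amplitude (a.u.)');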
The spiking model

• Conceptually, the behavior of a generic k-th neuron can be modeled as a pulse train
(Dirac's pulses), with a temporal distance dependent on IPI_ki, passing through an LTI
filter with impulse response AP_k(t).

• V_extracell(t) will be given by the summation of these pulse trains, each
amplitude-modulated by the attenuation factor α_k.

[figure: block diagram with M branches AP_1(t)…AP_M(t), gains α_1…α_M, and additive noise n(t)]

15
The spiking model

[figure: block diagram of the spiking model, with pulse trains filtered by AP_1(t)…AP_M(t), scaled by α_1…α_M and summed with n(t)]

16
The spiking model
[figure: block diagram of the spiking model, as above]

• This kind of model is not conceptually different from the one well known in the
literature for the generation of the EMG signal. The substantial difference is only due
to quantitative aspects: if we use needle electrodes or uelectrodes, the number of
neurons that we can record is only a very small set (few units), while with the EMG
we record the superposition of tens of motor units, generating the interference
pattern.

• Moreover, we don't have the low-pass filtering effect of fat and tissues, meaning that
the analyzed signal will be substantially different, and so will be the parameters
which characterize the recorded signals.

17
The spiking model: considerations
• Differently from EMG analysis, where the parameters of
interest come from statistical considerations on the
characteristics of the underlying process, in the case of
neural spiking activity analysis we will be interested in a
number of deterministic parameters:
– Spike features (representative of the AP)
– Firing rate (Inter Spike Interval, ISI): it is a time-varying quantity,
and we study its ensemble features (mean frequency, dispersion,
etc…)
– Associative features between spikes of different neurons (to
quantify their connectivity): among these, starting from the ISI we
can obtain measures of coherence and correlation

18
Neural Signal Processing: steps
• The situation is schematized
in figure: starting from the
recorded signals (a), a phase
of fundamental importance is
the spike sorting (b), which
implies the execution of two
consecutive steps:
– Spike detection: revealing the
generic presence of a spike
from potential changes in the
recorded signal

– Classification (spike sorting),


where we ascribe the change
in the recorded signal to a
specific neuron, mainly based
on shape features,
establishing the number of
neurons that can be recorded.

19
Neural Signal Processing: steps
• Once we have sorted the detected spikes,
we pass to parameters extraction:

1. Analysis of the temporal and ensemble features of the firing rate of the single
neurons. This allows us to characterize the behavior of each single neuron.

2. Analysis of associative measures, using


quantities such as cross-correlation (c) or
coherence (f), and other ad-hoc measures.
These measures allow us to understand the
association among different neurons, thus
providing information related to
connectivity: if we have association among
nearby neurons, this could mean that they
fire based on a law which connects them.
We can identify a way of propagation
between them, i.e. information transfer
among neurons

20
Neural
Engineering
2016-2017
L09 – Neural Signal
Processing: Spike
Detection
Cristiano De Marchis
cristiano.demarchis@uniroma3.it

1
Spike detection
• The steps for spike detection are quite simple, particularly for excitable cells
of the Central Nervous System (CNS), while for peripheral excitable cells the
situation is slightly different
• This happens because the Signal to Noise Ratio (SNR), when the electrodes are
correctly positioned, is quite high. The amplitude of the noise is around a few uV, while
amplitude ranges for spikes are tens or hundreds of uV (usually, SNR between 6 dB and
20-25 dB).
• The aim of spike detection is ONLY to detect the presence of a spike,
neglecting the association to the k-th neuron (in this first step…).
• The traditional technique is based on a threshold approach (threshold-based
spike detection):
– Given the digital neural signal x(n), we detect the presence of a spike if x(n) > Th.
– Before applying the threshold-based detection, we can apply some preprocessing to
the recorded signal, such as taking the absolute value.

2
Typical neural spikes
• Here we have some typical morphologies for neural spikes:
they have similarities.

3
Output of spike detection
• Here we have a representation of the output of the spike
detection phase, obtained from an array of uelectrodes.

[figure: detected spikes on five uelectrode channels uE1-uE5; scale bars: 20 uV, 4 ms]
4
Spike Sorting output
• Here we have a representation of the output of the
spike sorting phase of the same signal

[figure: spike sorting output of the same signal; scale bars: 20 uV, 4 ms]
5
Threshold-based spike detection

Th
0

• The simplest technique for spike detection is the threshold-based approach: a spike is detected
when the recorded signal overcomes a specific voltage threshold.
• This technique can be preceded by preprocessing steps (absolute value, raising to an even
power). This can be done to detect spikes with visible excursions even if they are negative.
• Typically, an important part of threshold-based algorithms is the post-processing phase,
defining a “refractory period”: once we detect a spike, we establish a period during which we
cannot detect another spike (a sketch of this approach follows below).

6
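A minimal MATLAB sketch of threshold-based detection with a refractory period (function name, inputs and the suggested usage values are illustrative assumptions):

function spike_idx = detect_threshold(x, Th, refr_samples)
% Threshold-based spike detection with refractory period (illustrative sketch)
% x            : neural signal (possibly pre-processed, e.g. abs(x))
% Th           : detection threshold (same units as x)
% refr_samples : refractory period expressed in samples
spike_idx = [];
n = 1;
while n <= numel(x)
    if x(n) > Th
        spike_idx(end+1) = n;      %#ok<AGROW> spike detected at sample n
        n = n + refr_samples;      % skip the refractory period
    else
        n = n + 1;
    end
end
end

For example, detect_threshold(abs(x), 4*std(x), round(1e-3*Fs)) would detect rectified excursions above four standard deviations while enforcing a 1 ms refractory period.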
Threshold-based spike detection

Th
0

• We have two degrees of freedom:
– 1) threshold value: we have to remember that, in the case of extracellular recording, we will have (nearly) zero
activity in the absence of a spike, and tens or hundreds of uV in the presence of a spike. The threshold can be
higher, leading to a better specificity (it reveals only the spikes from the closest/biggest neurons), or lower,
leading to a higher sensitivity (it collects the spikes from more neurons).

– Obviously, the threshold will determine the complexity of the upcoming spike sorting steps (if we have more
spikes, it is more likely that we collected spikes from a larger number of neurons)

7
Threshold-based spike detection

Th
0

• The other degree of freedom:
– 2) length of the refractory period: increasing the refractory period leads to a lower
sensitivity, and a higher probability of detecting distinct spikes from closer/bigger
neurons; decreasing the refractory period leads to a higher sensitivity, but it could lead to
the identification of two different spikes that are actually part of the same spike from the
same neuron.
– The choice of the refractory period depends on the temporal features of the spikes,
which typically die out after a few ms (duration of the action potentials).

8
Threshold-based spike detection
• Alternative solutions in threshold based spike
detection algorithms are based on the calculation of
time series coming from the derivatives of the original
neural time series x(n):
1. Nonlinear energy operator:

y(n) = x²(n) − x(n − δ)·x(n + δ)

• In this case we add another degree of freedom, δ, which makes the behavior of the
operator smoother or sharper. These energy operators behave like “peak highlighters”.
Typical values of δ are 0.1 - 0.5 ms.
2. Smoothed nonlinear energy operator:
• In this case we add a smoothing function to the previous operator, to decrease the
effect of “excessively peaked” situations (such as in complex spikes). A sketch of both
operators follows below.

9
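As a sketch (assuming x is the recorded signal stored as a row vector, Fs the sampling frequency, and δ expressed in samples), the two operators can be computed in MATLAB as:

% Nonlinear energy operator y(n) = x^2(n) - x(n-delta)*x(n+delta) (illustrative sketch)
delta = round(0.3e-3*Fs);                 % e.g. delta = 0.3 ms expressed in samples
y = zeros(size(x));
idx = (1+delta):(numel(x)-delta);
y(idx) = x(idx).^2 - x(idx-delta).*x(idx+delta);
% smoothed version: convolution with a short window (here a 1 ms moving average)
L = round(1e-3*Fs);
y_s = conv(y, ones(1, L)/L, 'same');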
Effect of threshold

• Let’s imagine that we have recordings allowing the detection of two neurons (1 and 2 in
figure). Hypothesizing that the peak values distribute as in figure, the choice of the
threshold is of fundamental importance for establishing the sensitivity and the specificity
of the system: threshold B reveals only spikes from neuron 1 (not all of them). In this case
we have high specificity, low sensitivity, and the identification phase is easy. Threshold A
detects all the spikes from neuron 1, but also some spikes from neuron 2. In this case we
have high sensitivity, low specificity, and the identification phase is more complex because
we have more than one neuron.

10
The superposition problem
• We have hypothesized that spikes can vary in shape and amplitude,
depending on the considered neuron, and they are immersed in Gaussian
noise.
• The detection problem does not only consist in optimizing the number of
detected spikes with respect to background noise. Detection of
superposed spikes is a really important issue. We have this problem when
two different neurons fire approximately in the same time instant.
• Which is the probability of this situation to happen? It depends on three
different fundamental parameters:
1. The firing rate of all the neurons composing the spiking activity
2. The duration of the spikes composing the spiking activity (the negative phase in
particular)
3. The number of neurons that we are able to record/detect.

11
The superposition problem


• In particular, the relation among the number of undetected


spikes due to superposition, the firing rate and the spike
duration (of the negative phase) is given by the product between
two terms r (firing rate) and d (duration): for example, with a
firing rate of 20 spike/s, and a duration of the negative phase of
0.5 ms, the percentage of undetected spikes is 100*(20*0.5E-3)%
= 1%.
12
The superposition problem


• A complementary, yet really important, situation is when two independently spiking neurons
sum their amplitudes, leading to the detection of a single spike (or two consecutive spikes)
• In this case, the frequency with which the two independent spiking activities overlap is
given by r1·r2·d, where, hypothesizing similar durations, the two neurons fire with independent
firing rates r1 and r2.
• These rules tend to underestimate the effect of superposition, because the hypothesis of
independence between the firing rates of two nearby neurons is not always satisfied. For connectivity
reasons, two nearby neurons tend to fire with correlated frequencies (common drive), and this
correlation is often studied.

13
Shape-based spike detection
• Alternative techniques for spike detection are based on shapes:
– Tuned filters: we have a template of the spike, and we numerically convolve
the spiking activity with the template. We thus obtain an output sequence
which highlights the positions where the original signal is more similar to the
template.
• If we have an experimentally obtained template, we can use it as a reference: for
example, we can use a threshold detection to isolate a waveform, and then use it as a
template for the tuned filter.
• If we don't have a template, a standard shape from the literature can be used.
• Tuned filters strongly depend on the reference shape (if we have more than one spiking
neuron, the tuned filter is not sensitive to the other neurons, and does not detect their
activity).
• Degrees of freedom of these filters are the threshold on the output sequence, and the
reference template.

14
Shape-based spike detection

• Other techniques are based on morphology:


– Cross-correlation technique: the concept is similar to the previous one,
but here we use the (normalized) cross correlation function as output
sequence. A threshold is then applied to this output sequence. We have
the same considerations that we made for the convolution-based tuned
filters

• These techniques can be applied both to original spiking series,


and to derived series, such as NLEO or power. In this case the
reference template will be the derived one, not the original spike
(e.g. the power of the spike, not the spike itself)

15
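An illustrative MATLAB sketch of a tuned (matched) filter: the signal is convolved with a time-reversed template, which is equivalent to a sliding correlation, and a threshold is applied to the output sequence; the template templ, the threshold value and the refractory period are assumptions of the example.

% Template-based ("tuned" filter) spike detection (illustrative sketch)
% x     : recorded spiking signal (row vector);  Fs : sampling frequency
% templ : reference spike template (row vector), e.g. isolated with a threshold detector
y = conv(x, fliplr(templ), 'same');       % convolution with the time-reversed template = sliding correlation
y = y / norm(templ)^2;                    % rough normalization: y close to 1 where the signal matches the template
Th = 0.7;                                 % threshold on the output sequence
cand = find(y > Th);                      % candidate spike locations
refr = round(1e-3*Fs);                    % refractory period, e.g. 1 ms
spike_idx = cand;
if ~isempty(cand)
    spike_idx = cand([true, diff(cand) > refr]);  % drop candidates closer than the refractory period
end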
Assessment of Accuracy
• The criteria used to evaluate the performance of a spike detection approach
come from estimation theory:
– We use the variables sensitivity and specificity (i.e. detection probability DP and
probability of false alarm PFA).
• DP: True Positives / (True Positives + False Negatives)
• PFA: False Positives / (False Positives + True Positives)
• Detection accuracy PD: (True Positives + True Negatives) / (2*(False Positives+True Positives)
– We can build the ROC curve starting from the obtained DP-PFA by varying the
detection threshold.
– We measure the area under the curve, and we compare different algorithms.

16
Cost Functions
• For each algorithm, we can obtain parameters to measure the detection performance:
what is the cost?
• Typically, if an algorithm is more “complex”, it will also have a greater accuracy. When do
we have to stop increasing the complexity of an algorithm?
• This is of extreme importance for on-line applications, where the computational complexity
plays a fundamental role. On-line information on neural activity is needed when it is used
to control external devices (such as neuro-prosthetics, robotic devices...).
• A possible criterion is reported in the following parameter:

• In this equation we have the detection accuracy PD , the number of FA per second nFa, the
time C(pp) needed for the CPU to provide the output value for the algorithm PP, rms which
is the maximum firing rate for the neuron, the maximum number of neurons m from which
we can record spiking activity, number of bytes of the recording system b, the available
bandwidth BW to transmit information, Fs sampling frequency, FC system clock frequency.

17
Cost Functions

• In this formula, three factors are weighted:


1. The accuracy of the detection system PD, through w1;
2. The transmission cost with respect to the available BW,
through w2 < 1;
3. Computational power of the available HW, through the factor
w3 (for real time, w3 < 1).

18
Neural
Engineering
2016-2017
L10 – Neural Signal
Processing: Introduction
to Spike Sorting
Cristiano De Marchis
cristiano.demarchis@uniroma3.it

1
Processing Steps
• Once the spikes have
been detected, the
following step is the
association of the single
spikes (waveforms) with
the corresponding firing
neuron.

• The output of this step


is schematized in figure
b).

2
Input to the spike sorting phase
• Here, the output of the spike detection phase is reported: spikes
have been detected (and segmented based on duration), but they
have not been identified yet.

3
Output of the spike sorting phase
• The output will lead to the association of the single
waveforms and the corresponding firing neurons.

4
Spike sorting: supervised techniques
• As we have previously seen, spikes coming from different neurons can be
distinguished based on their shape features. In particular:
– amplitude (depending on the neuron size, distance from the electrode,
orientation with respect to the electrode, and intrinsic properties of the neural
cell)
– shape (depending on the presence of tissue between neuron and recording
electrode, and intrinsic properties of the neural cell)
– duration
• The basic technique is based on an amplitude classification: one or two
amplitude features are extracted (typically, peak amplitude and peak-to-
peak amplitude) and the classification is carried out based on these
parameters:
– With a single feature (e.g. peak amplitude) it would correspond to a slightly
more complex version of the classic spike detection technique.
– This technique is computationally efficient, and it is reliable for real-time spike
sorting applications
– The main drawback is that it cannot distinguish spikes having a similar/same peak
amplitude but a different shape, and its performance dramatically decreases if
many spiking neurons are present
5
Threshold-based sorting

Thsort
Thdet

• In this simple case, two spikes with different amplitudes are present, and a threshold is
chosen to discriminate the red spike from the green spike based on a priori knowledge of
the spike amplitudes
6
Threshold-based sorting: limitations

• Here, a more complex situation is shown. If a simple amplitude threshold technique is
used, the classification would be really difficult. If we add another feature (peak-to-peak
amplitude), the situation wouldn't be different.
7
Window based discrimination
• Window-based discriminators are the natural evolution of the amplitude-based
features: reference templates are provided, corresponding to the activation of
different spikes, and duration-amplitude windows are defined where the spike
must have a typical behavior (as shown in figure, with separate windows for
Spike 1 and Spike 2). These windows cannot overlap.

8
Window based discrimination
• This technique combines a good computational efficiency with a good
classification performance.

• The main drawback is that, in the presence of many spiking neurons, the
number of admissible non-overlapping windows is low.

• Moreover, it needs a manual phase of window identification, which is not quick.

• Finally, it cannot adequately manage the superposition problem.

[figure: amplitude-duration windows for Spike 1 and Spike 2]

9
Discrimination based on distance measures
• In this case, a number of reference templates are selected, and a distance measure
between the current waveform and the set of considered templates is defined. The
waveform is assigned to the neuron whose template has the minimum distance:

neur_î = argmin over neur_i ∈ N of (DM_i)

• Distance measures DM are typically based on a norm (Manhattan, Euclidean, …). The
Manhattan and Euclidean distances, for example, are defined as follows (a
computational sketch is given below):

DM_i^Manh = (1/N_samp) · Σ_{j=1}^{N_samp} |Templ_i(j) − Curr_wave(j)|

DM_i^Eucl = sqrt( (1/N_samp) · Σ_{j=1}^{N_samp} [Templ_i(j) − Curr_wave(j)]² )
10
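A minimal MATLAB sketch of this distance-based discrimination (the variable names templates and curr_wave are illustrative assumptions): given one aligned template per candidate neuron and the current waveform, the average Manhattan and Euclidean distances are computed and the waveform is assigned to the closest template.

% Distance-based spike classification (illustrative sketch)
% templates : [Nneur x Nsamp] matrix, one reference template per neuron
% curr_wave : [1 x Nsamp] detected waveform, aligned to the templates
diffs   = templates - repmat(curr_wave, size(templates, 1), 1);
DM_manh = mean(abs(diffs), 2);        % Manhattan-type distance for each template
DM_eucl = sqrt(mean(diffs.^2, 2));    % Euclidean-type distance for each template
[~, neuron_hat] = min(DM_eucl);       % index of the closest template (assigned neuron)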
Discrimination based on distance measures
+ The main advantage of distance measures is their robustness with respect to
noise-induced shape variations: they are integral measures, and the effect of
noise can often be neglected;

+ Moreover, they have a low computational cost.

- A manual phase of template selection/generation is required to calculate the DM;

- Detection timing is of extreme importance: small delays, for such peaked
waveforms, can lead to altered distance measures; discrimination without
alignment suffers from this limitation

11
The Alignment issue

DM = 3 mV                    DM = 11 mV

• We have calculated the Euclidean distance between the same spike and a reference
template. In the example on the right, a two-sample shift is present, and the DM is
dramatically altered.

12
Supervised spike sorting: Considerations
• Both the techniques based on feature extraction and those based on distance measures
require the selection of a reference template (extracted from the literature, or from a
first analysis of a portion of the recorded signal)
• Other features can be time-based, combined with amplitude (such as the integral of the
waveform, variance, integral of the energy), or discrete numeric features of interest
(e.g. number of zero-crossings, number of peaks composing the spike, number of positive
peaks, number of negative peaks). These features are compared with those of the
reference spike to obtain a similarity measure with the template.
• All these examples belong to supervised spike sorting, where the classification is based
on the a priori knowledge of the template spike features.
• The alternative technique is unsupervised spike sorting, in which the classification is
based on the features themselves, not on a reference template.

13
Unsupervised Spike Sorting
• In Unsupervised Spike Sorting, features are extracted from spike events, and these
features are used to establish which is the actual spiking neuron. In this case we don't
use distance measures with respect to a template, but we use features of the waveform.
• Starting from the detected and extracted waveform, Unsupervised Spike Sorting is
composed of a feature extraction step, followed by a clustering step.
1. Feature extraction
• It leads to a dimensionality reduction for the upcoming clustering step: typically, starting
from the N samples of the detected and extracted waveform, we obtain a number of
features M << N, and we use them for clustering (association of the detected spike to a
specific neuron)
2. Clustering
• Starting from the extracted features, we use specific criteria to distinguish different
spiking neurons and associate each single waveform to a specific neuron

14
Feature extraction
• In order to extract features with an unsupervised approach, we can
proceed in two different ways:
a. We can choose “a priori” some specific features that we consider useful
for the upcoming clustering phase: amplitude, peak-to-peak amplitude,
energy, frequency features, features in transformed domains (e.g.
wavelet coefficients). In this case, the set of features is defined a priori,
and we will use it for the upcoming clustering step. This approach is
defined as “knowledge-driven”.
b. We can choose an objective criterion of dimensionality reduction, by
using well known techniques such as PCA and ICA (Principal Component
Analysis and Independent Component Analysis, respectively). In this case
the set of extracted features is driven by the available data-set, without a
priori assumptions (they are defined “data-driven” approaches).
Clustering will be carried out on this data-driven set (a PCA-based sketch follows below).

15
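As an illustrative data-driven example (assuming spikes is a matrix with one detected and aligned waveform per row), a few principal components can serve as the M << N features for the subsequent clustering; the sketch uses a plain SVD so that no toolbox is required.

% PCA-based feature extraction for spike sorting (illustrative sketch)
% spikes : [Nspikes x Nsamp] matrix of detected, aligned waveforms
Xc = spikes - repmat(mean(spikes, 1), size(spikes, 1), 1);   % remove the mean waveform
[~, ~, V] = svd(Xc, 'econ');              % principal directions in the columns of V
nFeat = 3;                                % number of retained features (M << N)
features = Xc * V(:, 1:nFeat);            % projection of each spike onto the first components
plot(features(:,1), features(:,2), '.');  % first two features: clusters of spiking neurons may appear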
Unsupervised Spike Sorting
• Independently from the data-driven or knowledge-driven
approach, we will have a set of t features for each spike (i.e. a
set of t-dimensional vectors), that we use for associating each
spike to the corresponding neuron among the different k
spiking neurons.
• This is a classical clustering problem (or unsupervised
classification), where, given a set of vectors, we look for
associations among similar vectors to obtain k classes
(corresponding to the k spiking neurons)
• Differently from the supervised case, here we don't have any information regarding
the characteristics of each class (the only information that we may have is k, the
number of classes)

16
Cluster analysis
• Clustering is the grouping process of a set of physical or
abstract objects into classes of similar objects.
• A cluster is a collection of objects that are similar
among them, and at the same time different from
objects belonging to other clusters.
• We talk about “Distance-based clustering” when we
introduce a measure of similarity (or, alternatively, a
measure of distance) starting from a set of features
characterizing different items.

17
[figure: items in the feature space, before and after clustering]
18
The cluster analysis problem
- In terms of analytical definition, we can say that, given a set of t-
dimensional vectors D={t1,t2,…,tN}, and an integer number k, the
problem cluster analysis wants to solve reduces to the definition
of a mapping function f:D-->{1,..,k} according to which each t-
dimensional vector is assigned to one (and only one) cluster Kj,
1<=j<=k.

- A Cluster, Kj, contains the set of vectors that have been


associated to it

- Clusters are NOT defined a priori (as it happens in supervised


spike sorting).
Neural
Engineering
2016-2017
L11 – Neural Signal
Processing: Cluster
Analysis
Cristiano De Marchis
cristiano.demarchis@uniroma3.it
Cluster Analysis Techniques
- We call hierarchical clustering those techniques in which
the set of generated clusters is increased at each step,
starting from the clusters generated at the previous steps.

- We call partitional clustering those techniques in which


each cluster is generated through a partition of the whole
initial dataset.

- A clustering technique can be either incremental (in which


each t-dimensional vector is managed sequentially) or
simultaneous, in which all the dataset is managed
simultaneously.
Parameters of the Cluster Analysis
• In order to obtain distance measures, we start from a
set of parameters typical of cluster analysis.
• Given a cluster composed of N elements, we define the position of the cluster
centroid (first-order moment in the t-dimensional space), its radius and its diameter,
according to the following equations:
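A standard formulation of these three quantities (given here as an assumption, in LaTeX notation) for a cluster K = {t_1, …, t_N} is:

C = \frac{1}{N}\sum_{i=1}^{N} t_i, \qquad
R = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \lVert t_i - C \rVert^2}, \qquad
D = \sqrt{\frac{1}{N(N-1)}\sum_{i=1}^{N}\sum_{j=1}^{N} \lVert t_i - t_j \rVert^2}

where C is the centroid, R the radius (average distance of the items from the centroid) and D the diameter (average pairwise distance between items).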

3
Parameters of the Cluster Analysis: distances

• Taking into account a pair of clusters, we can define a


set of distance measures between the two clusters:
– Single Link: the minimum distance between two points belonging to different clusters
– Complete Link: the maximum distance between two points belonging to different
clusters
– Average Link: the average distance between points belonging to different clusters
– Centroid distance: the distance between the two centroids (not necessarily equal to
the average distance)

4
Hierarchical Clustering
• In hierarchical clustering, clusters are generated in levels,
generating a set of clusters at each level. In particular:
– In agglomerative hierarchical clustering, we have the following properties:
• In the first step, each item belongs to its own cluster (containing only that item)
• Using an iterative process, clusters are merged according to distance criteria
• It is a bottom-up approach: we start from N clusters, each composed of a single item, and
arrive at the number k of requested clusters.
– In divisive hierarchical clustering, we have the following properties:
• In the first step all the items belong to a single starting cluster
• Using an iterative process based on distance criteria, the items are taken out of the original
cluster, thus forming new smaller clusters
• It is a top-down approach: we start from a single cluster containing the whole dataset, and we
divide it until we reach the number k of requested clusters.
Criteria for agglomerative clustering
• The criteria can be based on the previously introduced
distance measures, thus determining different
algorithms based on:
– Single Link
– Complete Link
– Average Link

6
Agglomerative clustering: Dendrogram
• Dendrogram is a tree structure
characterizing hierarchical
clustering techniques.
• At each level we have the set of
clusters for that level.
– In agglomerative clustering, we call leaves the initial level of clusters, each
composed of a single item
– We call root the final level, in which all the items belong to a single cluster.
• A cluster at the generic i-th level
is composed of the union of
clusters at the level i+1.

7
Clustering Levels

We pass from a) to e) in the case of agglomerative clustering, and vice versa for
divisive clustering
An example of agglomerative clustering
• Let's hypothesize we have extracted all the distance measures among the five
considered items (see the matrix below): starting from the initial level, in which we
have five clusters, we establish a threshold distance for agglomeration for the first
level (distance 1, for example): in this case we obtain 3 clusters: A-B, C-D, E. Working
with maximal distances, at threshold distance = 4 we agglomerate A-B-C-D and,
finally, with threshold = 7, we agglomerate the whole dataset in a single cluster.

      A  B  C  D  E
  A   0  1  2  2  6
  B   1  0  2  4  7
  C   2  2  0  1  5
  D   2  4  1  0  3
  E   6  7  5  3  0

[figure: dendrogram over items A, B, C, D, E, with the agglomeration thresholds marked on the distance axis]
9
Implementation of the agglomerative clustering

10
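As an illustrative sketch (not the pseudo-code of the original slide), the agglomeration of the five items in the previous example can be reproduced in MATLAB with the linkage/dendrogram/cluster functions of the Statistics and Machine Learning Toolbox, starting from the distance matrix:

% Agglomerative (complete-link) clustering of items A..E (illustrative sketch)
D = [0 1 2 2 6;
     1 0 2 4 7;
     2 2 0 1 5;
     2 4 1 0 3;
     6 7 5 3 0];
Z = linkage(squareform(D), 'complete');            % 'complete' = maximal distances between clusters
dendrogram(Z, 'Labels', {'A','B','C','D','E'});    % tree structure of the agglomeration
labels = cluster(Z, 'cutoff', 1.5, 'criterion', 'distance');  % cut just above 1: clusters A-B, C-D, E

With complete-link distances the merges occur at heights 1, 1, 4 and 7, consistently with the thresholds mentioned in the example.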
Hierarchical clustering
      A  B  C  D  E
  A   0  1  2  2  3
  B   1  0  2  4  3
  C   2  2  0  1  5
  D   2  4  1  0  3
  E   3  3  5  3  0

• Here we have examples of different dendrograms generated using different distance
measures (min, max, mean): as we can see, we obtain different results depending on
the kind of distance measure used.

11
Partitional Clustering
• In partitional clustering, differently from hierarchical clustering, clusters are
generated in a single step.
• This does not mean that the process is not iterative: it only means that the
number of clusters is fixed at each step, while the composition of the clusters
changes.
• While hierarchical clustering does not necessarily need the number of
clusters as input (we can stop the process at each level of the dendrogram),
in partitional clustering the number k of clusters must be specified a priori.
• There are different partitional clustering techniques, such as:
– square errors
– k-means
– nearest neighbor
– PAM

12
Partitional Clustering: square error
• SE (squared error) algorithm is iterative, and aims at
minimizing the squared error:
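A standard form of the squared-error criterion (given here as an assumption, in LaTeX notation), for clusters K_1, …, K_k with centroids C_1, …, C_k:

SE = \sum_{j=1}^{k} \sum_{t \in K_j} \lVert t - C_j \rVert^2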

• We randomly initialize the clusters, and we calculate the initial random


centroids.
• Each item is then associated to the cluster having the centroid with the
lowest distance to it. This process will change the composition of the
clusters.
• At this point we calculate the squared error of the clusters.
• This process is repeated until we go below a specified threshold of SE
(or, alternatively, if the cluster composition does not change)
13
Squared Error Algorithm
K-Means
• This algorithm requires an initialization, in which each
cluster is randomly generated.
• The items are then moved from one cluster to another,
based on an average cluster criterion, until
convergence.
• The cluster mean is the simple arithmetic mean of the items in the cluster,
i.e. given the i-th cluster Ki = {ti1,ti2,…,tim}, the cluster mean is
mi = (1/m)(ti1+…+tim).

15
K- means steps
• In particular:
– Given the number k of required clusters and N items, the process is initialized
generating k items as centroids (first guess)
– Each item is associated to the cluster having the closest
centroid;
– After assignment, centroids are recalculated.
– The previous step is repeated using the new centroids.
– The iterative process ends when the centroids do not change
(we have no additional migration from one cluster to
another)

16
Example: 1-dimensional k-means
• We have the dataset {2,4,10,12,3,20,30,11,25} to be clustered in 2 groups
• First guess random centroids are 3 (for K1) and 4 (for K2);
• At the first step we obtain K1={2,3} and K2={4,10,12,20,30,11,25}, from
which we re-calculate the centroids, m1=2.5 and m2=16, respectively;
• According to the new updated centroids, we assign the items to the two
clusters, obtaining K1={2,3,4} K2={10,12,20,30,11,25}, from which we re-
calculate the centroids, m1=3, m2=18;
• Third step: K1={2,3,4,10}, K2={12,20,30,11,25}, from which m1=4.75,
m2=19.6;
• Fourth step K1={2,3,4,10,11,12}, K2={20,30,25}, from which m1=7,m2=25;
• At the fifth step we don’t have additional changes in cluster composition,
and the algorithm ends.

17
K-Means Algorithm
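A compact MATLAB sketch of the iteration described above, applied to the one-dimensional example of the previous slide (illustrative; the Statistics and Machine Learning Toolbox also provides a ready-made kmeans function):

% Basic k-means iteration on the 1-D example (illustrative sketch)
items = [2 4 10 12 3 20 30 11 25];
k = 2;
centroids = [3 4];                          % first-guess centroids
assign_old = [];
while true
    % assign each item to the cluster with the closest centroid
    [~, assign] = min(abs(repmat(items', 1, k) - repmat(centroids, numel(items), 1)), [], 2);
    if isequal(assign, assign_old), break, end   % stop when the composition no longer changes
    assign_old = assign;
    for j = 1:k
        centroids(j) = mean(items(assign == j)); % recompute the centroids
    end
end
% converges to centroids [7 25], i.e. clusters {2,3,4,10,11,12} and {20,30,25}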

18
Partitioning Around Medoids (PAM, or k-medoids)
• It is conceptually the same as k-means, with the only difference that centroids
(arithmetic mean of the items of a cluster) are replaced with medoids (median
instead of mean), i.e. the item which is at the minimum average distance from all
the other items.
• The process is similar to the steps of the k-means:
– The initialization takes k random items and considers them as medoids.
– Each item will be assigned to a cluster, based on a distance measure.
– Now that the clusters have been updated, they will have a new medoid, calculated
minimizing the average distance.
– Once the new medoids have been calculated, cluster association is repeated
– The algorithm ends when the composition of the clusters does not change anymore,
and the medoids are fixed.

19
Nearest Neighbour
• In this algorithm, the items are iteratively associated to the closest
clusters;
• It is an incremental algorithm, in which a threshold criterion
establishes if the item must be assigned to an existing cluster (if the
distance is lower than the threshold), or if a new cluster must be
created (if the distance is higher than the threshold)
• The algorithm is initialized by assigning the first item to the first cluster.
• Then we take an item and we calculate the distance between the item and the
closest one among the others. If the distance is lower than the threshold, the
item is assigned to the cluster containing the nearest neighbor; otherwise the
item forms a new cluster by itself.
• The presence of the threshold makes this algorithm independent from
the number of hypothesized clusters

20
Nearest Neighbor Algorithm
Neural
Engineering
2016-2017
L12 – Modularity of the motor system:
inferring neural control strategies from
muscle coordination using multi-muscle
EMG

Cristiano De Marchis
cristiano.demarchis@uniroma3.it
Modularity as a Simplifying Control Strategy

• Redundancy of degrees of freedom and actuators:


complex mechanisms act at the level of the central
nervous system for the control of human movement.
• Many joints
• Many muscles

• Simplifying explanation to the traditional problem of


motor control: paradigms based on dimensionality
reduction of data related to the CNS

• Synergy: set of functional or anatomic elements which act


together in order to
• simplify control
• stabilize a performance variable
Redundancy

• Muscles are innervated by


motorneurons in the spinal cord.

• In order to produce any kind of movement, the CNS sends commands
to individual muscles.

• Common movements such as


upper limb reaching, walking,
cycling, etc…, require the
coordinated activity of a huge
number of muscles.
Redundancy

• Muscles act across joints with the


aim of producing torque for
movement generation/endpoint
force production.

• More muscles than needed act


across the joints. The neuro-
muscular system is highly redundant.

• Main Question: how does the CNS


send commands and coordinate such
a redundant set of actuators for the
production of many different kinds of
movements?
Muscle Synergies
• Muscle synergies: modules of muscle co-activation which linearly
combined through amplitude scaling and time shifting mechanisms can
represent the vast repertoire of muscle activation during movement
execution

Cheung et al. 2009

• Recording surface EMG signals from a high number of muscles involved


in task execution
• Decomposing EMG signals into fixed spatial patterns of muscle co-
activation (synergy vectors or modules) and time-varying patterns
(time-varying synergy activation coefficients) through the use of matrix
factorization algorithms.
Muscle Synergies

• Neurophysiological hypothesis: motor modules reflect


connections among interneurons/motorneurons in the
spinal cord, and these spinal circuits are activated through
specific temporal patterns (involving central commands and
sensorial feedback), generating a large repertoire of muscle
activations.

• The CNS doesn't control each muscle independently, but controls
groups of muscles, activating them according to the
biomechanical demands of a specific motor task.

• Muscle synergies constitute the building blocks upon which


the CNS develops the internal models in order to simplify
motor control.
Muscle Synergies
• Control of movement based on the temporal recruitment of different
modules/muscle synergies through neural commands during the period
of execution of the motor task.

• Few control signals can account for a complex pattern of muscle


coordination.
Muscle Synergies
Modular motor control has been studied for a variety of motor tasks:
reaching movements of the upper limb, walking, running, pedaling,
perturbed posture...

Evidence for modularity: set of synergies specific for each motor task and
shared among different subjects.

Muscle synergies (or motor modules) seem to accurately represent the


neural strategies used by the CNS.
EMG Recording
• Our aim is the identification of
such modular control structures
(i.e. muscle synergies/motor
modules)
• We can do this by recording multi-
muscle EMG
The Muscle Synergies Model

• The coordinated activity of N muscles through the use of K modules


can be mathematically expressed as follows:

M(t) = Σ_{i=1}^{K} H_i(t) · W_i + ε(t)

• M(t) is the coordinated activity of N muscles involved in movement


execution
• Hi(t) is the temporal activation of the i-th module/synergy
• Wi is the i-th module/synergy
• ε(t) represents noise within the data
• K is the number of modules used for movement production
The Muscle Synergies Model
M(t) = Σ_{i=1}^{K} H_i(t) · W_i + ε(t)

An example with two modules activating four muscles. The problem to be solved is an
inverse problem, in which we want to identify W and H starting from the measurement of M.

[figure: two temporal activations H1 and H2 combined with the synergy vectors W to reconstruct four muscle activities]
Methodological Aspects
• Decomposition of surface EMG signals recorded from a large number of
muscles involved in movement execution:
• Calculation of the EMG amplitude envelope
• Preprocessing and M matrix organization
• Identification of the number of modules
• Matrix factorization algorithms

• Commonly used factorization techniques:


• Independent Component Analysis (ICA)
• Principal Component Analysis (PCA)
• Factor Analysis (FA)
• Nonnegative Matrix Factorization (NMF) (Lee & Seung 1999)
• MNxS ≈ WNxK * HKxS
EMG Preprocessing

• Band-Pass Filtering
• Amplitude Envelope Calculation
• Time Scale Normalization based on a Biomechanical Reference
• Amplitude Normalization
• Matrix Organization
• Number of Synergies Identification
• Factorization for Synergies Extraction
EMG Preprocessing and Synergies Extraction

Raw EMG → Band-Pass Filtering → Full-Wave Rectification → Low-Pass Filtering → Time-Scale Normalization → Amplitude Normalization → M Matrix Organization → Identification of the Number K of Synergies → Factorization of matrix M with K modules → W, H
Envelope Calculation
• Band Pass filtering (typically 20Hz –
400Hz) to preserve the useful
spectral content of the EMG signal,
rejecting any motion artefact and
high frequency noise

• Full-Wave rectification, i.e. the


absolute value of the EMG signal

• Low-Pass Filtering of the Full-Wave


rectified EMG to obtain the
amplitude Envelope. Cut-off
frequency depends on the kind of
task that is being analyzed. Typical
values used in previous studies vary
between 4Hz and 10Hz.
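A minimal MATLAB sketch of this envelope computation (filter orders, cut-off values, the sampling frequency and the variable raw_emg are illustrative assumptions; butter and filtfilt require the Signal Processing Toolbox):

% EMG amplitude envelope (illustrative sketch)
Fs = 1000;                                         % sampling frequency [Hz] (assumed)
[bh, ah] = butter(4, [20 400]/(Fs/2), 'bandpass'); % band-pass filter, 20-400 Hz
emg_bp   = filtfilt(bh, ah, raw_emg);              % zero-phase band-pass filtering of the raw EMG
emg_rect = abs(emg_bp);                            % full-wave rectification
[bl, al] = butter(4, 5/(Fs/2), 'low');             % low-pass filter, cut-off e.g. 5 Hz (typically 4-10 Hz)
envelope = filtfilt(bl, al, emg_rect);             % amplitude envelope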
Time-Scale Normalization

• Time-Scale normalization: usually we deal with multiple events of an


analyzed motor task. For example, when analyzing walking, we deal
with more than one stride. It is often useful to compare different
strides, and obtain an average behavior of muscle activation.

• Muscle activation in different events can be compared only if they


are represented on a coherent time scale (as different events have a
different duration). Given the beginning and the end of an event (e.g.
a gait stride defined between two consecutive heel strikes), muscle
activation within that event is resampled on a fixed number of points
(e.g. 100 points representing the percentage of the walking cycle)

• A number of consecutive events are concatenated to obtain an


adequate number of events for the analysis of the motor task
Time-Scale Normalization

An example of three consecutive


pedaling cycles, each of them
resampled on 360 points representative
of pedal degrees of rotation. The
events are then concatenated to obtain
a unique signal
Amplitude Normalization

• Amplitude Normalization: activity of a muscle is measured in V (as


from the envelope), but we cannot quantify how active a muscle is.
We can use normalization approaches based on:
• Reference contractions normalization (i.e. Maximum Voluntary Contraction)
• Task based normalization (the reference normalization value is extracted
from the available data, without a reference contraction)

• In repetitive tasks, such as walking, usually the peak value from each
cycle is kept to obtain a vector of peak values for each muscle. Then,
each muscle activity is normalized to the mean/median peak value
• When we analyze the same task under different conditions (e.g.
walking at different speeds), each muscle activity can be normalized
to a reference condition, or to the maximum activity measured
across conditions
EMG Matrix Factorization

• Different Algorithms can be used to identify modular organization from


myoelectric activity.
• Among them, Non Negative Matrix Factorization (NNMF) is the most used. It
seeks to factorize the matrix M containing the pre-processed EMG signals in the
form:

• M ≈ WH with W, H ≥ 0

• MNxS is the matrix containing the activity of N muscles across S samples


• WNxK is the synergy matrix of nonnegative elements, where each of the K
modules contains the contribution of the N muscles to that module (i.e. the
relative contribution of a muscle to each module, spatial information)
• HKxS is the matrix of the nonnegative synergy activation coefficients. Each of the
K activation coefficients contains the amplitude and temporal information
regarding the recruitment of a synergy along the motor task execution.
• The quality of the approximation M≈WH is measured through the Euclidean
distance ||M – WH||
Non Negative Matrix Factorization

• We want to solve the following problem: minimize ||M − WH|| under the constraint W ≥ 0 and H ≥ 0.

• Theorem (Lee & Seung, 2001): the Euclidean distance ||M − WH|| is non-increasing under the following multiplicative update rules:

  H_{ks} ← H_{ks} (W^T M)_{ks} / (W^T W H)_{ks}
  W_{nk} ← W_{nk} (M H^T)_{nk} / (W H H^T)_{nk}

• The Euclidean distance is invariant under these updates if and only if W and H are at a stationary point of the distance.

• W and H are randomly initialized and, according to the theorem, the solution converges to a (local) minimum. It is an iterative procedure: we stop when the distance falls below a threshold or when it is no longer varying (see the sketch below).
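A minimal MATLAB sketch of the multiplicative update procedure (the function name, the fixed number of iterations and the use of eps in the denominators to avoid division by zero are illustrative assumptions):

    function [W, H] = nnmf_multiplicative(M, K, maxiter)
    % M: N x S nonnegative EMG matrix, K: number of synergies to extract
    [N, S] = size(M);
    W = rand(N, K);                              % random nonnegative initialization
    H = rand(K, S);
    for it = 1:maxiter
        H = H .* (W'*M) ./ (W'*W*H + eps);       % update activation coefficients
        W = W .* (M*H') ./ (W*H*H' + eps);       % update synergy vectors
    end
    end

The fixed iteration count can be replaced by the stopping criterion on ||M − WH|| mentioned above.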
Number of Synergies Identification
• The number of modules K is usually chosen based on the variance explained by the model with K synergies.
Number of Synergies Identification

• Given the initial matrix M (N×S), we start by extracting 1 synergy, so that W is N×1 and H is 1×S.

• Then, we extract a number of synergies going from 1 to N, and we calculate the Variance Accounted For (VAF) for each number of extracted synergies, as a measure of the quality of the approximation M ≈ WH, according to the following:

  VAF_K = 1 − [ Σ_{n=1}^{N} Σ_{s=1}^{S} (M_{ns} − R_{ns})² ] / [ Σ_{n=1}^{N} Σ_{s=1}^{S} (M_{ns})² ],  with R = WH

• The lowest number K for which VAF_K exceeds a specified threshold (typically 0.9) is set as the number of synergies underlying the data (see the sketch below).
• Other criteria can be used to select the number of synergies:
  • Elbow of the VAF curve
  • Comparison with surrogate data
  • Information criteria (such as the Akaike Information Criterion)
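A minimal MATLAB sketch of the VAF-based selection of K (it reuses the hypothetical nnmf_multiplicative function sketched above; the 0.9 threshold and the 1000 iterations are illustrative assumptions, and M is the N×S EMG matrix):

    N = size(M, 1);
    VAF = zeros(1, N);
    for K = 1:N
        [W, H] = nnmf_multiplicative(M, K, 1000);
        R = W*H;                                   % reconstructed EMG matrix
        VAF(K) = 1 - sum((M(:) - R(:)).^2) / sum(M(:).^2);
    end
    numSynergies = find(VAF >= 0.9, 1, 'first');   % lowest K above the threshold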
Example
• Once the number of synergies has been identified, the whole muscle coordination is expressed as a spatial component W and a temporal component H.

• In the example, the decomposition of the activity of 8 muscles during pedaling is represented through the activation of 4 muscle synergies (each color represents a different subject).

De Marchis et al. 2013, Hum Mov Sci
Use of Muscle Synergies in Human
Movement Analysis
• Biomechanical sub-functions can be explained through the activation of muscle synergies.
• Depending on the spatial composition of a muscle synergy and its activation along the movement execution, a synergy is responsible for a specific biomechanical function.
• Here, an example on walking studied through musculo-skeletal modeling.

Neptune et al. 2009, J Biomech
Use of Muscle Synergies in Neuro-
Rehabilitation
• Reorganization of muscle coordination after stroke: the brain is no longer able to send independent control signals to the different synergies. There is widespread co-contraction of muscles, caused by the simultaneous recruitment of different healthy modules, which eventually results in an impaired movement.

• The main observation is a reduction in the number of modules, correlated with the level of impairment measured by clinical scales.

• Reorganization of muscle coordination post-stroke has been studied in different tasks. Studying muscle coordination after stroke might improve the planning of neuro-rehabilitation strategies.
Neural Engineering 2016-2017
Cristiano De Marchis

Exercise 1: Single Spiking Neuron

The file Spiking_Neuron_Activity_Exercise1.mat contains the spiking activity x(n) from a single
spiking neuron.

1. Write a function detect_spikes.m which implements a threshold-based spike detection technique and provides the estimated spike time instants as output.

2. Plot the original neural signal together with the estimated spikes, as in the following
example.

3. Plot the average time-course of the estimated spikes (average spike profile)

4. Calculate the mean firing rate FR of the spiking neuron and its standard deviation
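A minimal MATLAB sketch of one possible threshold-based detection strategy (the threshold rule, the refractory period and the variable names are illustrative assumptions, not the required solution):

    function spike_idx = detect_spikes(x, fs, thr, refract_s)
    % x: neural signal, fs: sampling rate [Hz], thr: amplitude threshold,
    % refract_s: refractory period [s] during which no new spike is accepted
    above = find(x > thr);                 % samples exceeding the threshold
    refract = round(refract_s * fs);
    spike_idx = [];
    last = -inf;
    for k = above(:)'
        if k - last > refract              % enforce the refractory period
            spike_idx(end+1) = k;          % keep this sample as a spike onset
            last = k;
        end
    end
    % spike time instants in seconds, if needed: spike_idx / fs
    end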
Neural Engineering 2016-2017
Cristiano De Marchis

Exercise 2: Accuracy Evaluation

The file Two_spiking_neurons.mat contains the spiking activity from two spiking neurons, together with the time instants corresponding to the real firing onsets of the neurons. The sampling rate is 20 kHz.

1. Write a function evaluate_accuracy.m which requires the following input parameters:
   a. Estimated Firing Onsets
   b. Actual Firing Onsets
   c. Threshold Lag, which is the time window for searching True Positives, False Positives and False Negatives
   and provides the following output variables:
   d. True Positive Rate
   e. False Discovery Rate
   f. Root mean square of the delay between real and estimated firing onsets

2. Apply the threshold based algorithm to the neural signal in the workspace, choosing one
pre-processing technique (and fixing the refractory period), iteratively changing the
threshold.

3. Determine the best threshold (i.e. the one providing the best compromise among TPR,
FDR and RMS)
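A minimal MATLAB sketch of one way to match estimated and actual onsets within the threshold lag (the matching strategy and variable names are illustrative assumptions, not the required solution):

    function [TPR, FDR, RMSdelay] = evaluate_accuracy(est, act, lag)
    % est, act: estimated and actual firing onsets (same units, e.g. samples)
    % lag: maximum distance for an estimated onset to count as a True Positive
    matched = false(size(est));
    delays  = [];
    for i = 1:numel(act)
        [d, j] = min(abs(est - act(i)));       % closest estimated onset
        if d <= lag && ~matched(j)
            matched(j) = true;                 % True Positive
            delays(end+1) = est(j) - act(i);   % delay of the matched pair
        end
    end
    TP = sum(matched);
    FP = numel(est) - TP;                      % unmatched estimated onsets
    FN = numel(act) - TP;                      % missed actual onsets
    TPR = TP / (TP + FN);
    FDR = FP / (TP + FP);
    RMSdelay = sqrt(mean(delays.^2));
    end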
Neural Engineering 2016-2017
Cristiano De Marchis

Exercise 3: Spike Segmentation and Feature Extraction

The file Two_spiking_neurons.mat (from Exercitation 2) contains the spiking activity from two spiking neurons. The sampling rate is 20 kHz.

1. Write a function extract_features.m which requires the following input parameters:

   a. Estimated Firing Onsets
   b. Neural Signal
   c. Sampling Frequency
   and provides as output an N×M matrix of features, in which the i-th row contains the M features extracted from the i-th detected spike (N is the total number of detected spikes, i.e. the length of the Estimated Firing Onsets input vector)

Suggestion: you can select the features that you prefer. Try to extract at least the
following features:
- Peak Amplitude
- Peak-to-Peak Amplitude
- Energy
- Variance

2. Select a pair of features so as to obtain two N-dimensional vectors, and plot them one against the other (using the '.' or 'o' markers). Do you get any relevant information from this graphical representation?
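A minimal MATLAB sketch of the per-spike feature computation (the 2 ms window after each onset and the variable names are illustrative assumptions, not the required solution):

    function F = extract_features(onsets, x, fs)
    % onsets: estimated firing onsets [samples], x: neural signal, fs: sampling rate [Hz]
    win = round(2e-3 * fs);                    % assumed 2 ms analysis window
    N = numel(onsets);
    F = zeros(N, 4);                           % [peak, peak-to-peak, energy, variance]
    for i = 1:N
        seg = x(onsets(i):min(onsets(i)+win, numel(x)));
        F(i, :) = [max(seg), max(seg)-min(seg), sum(seg.^2), var(seg)];
    end
    end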
Neuron Passive Models
Exercises

Neural Engineering 2016-2017


Cristiano De Marchis
Exercise 1: membrane resting potential
• An excitable cell in resting conditions is characterized by the following concentrations for the single ionic species inside
and outside the cell membrane.
Na+
• IN 12 mmol/l
• OUT 145 mmol/l
K+
• IN 155 mmol/l
• OUT 2.5 mmol/l
Cl-
• IN 4 mmol/l
• OUT 120 mmol/l

R = 8.314 J K-1 mol-1 (Gas constant)
F = 96485 J V-1 mol-1 (Faraday constant)

• 1. Calculate the Nernst potential for the three ionic species at 37 °C
• 2. Calculate the value of the membrane resting potential at 37 °C
• 3. Make some considerations regarding the direction of the ionic fluxes if the membrane resting potential is Vr = −90 mV
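As a reminder, the Nernst potential for a species with valence z is E = (RT/zF) ln([C]out/[C]in). A minimal MATLAB sketch for one species, using the constants and the K+ concentrations listed above (the choice of K+ is only an example):

    R = 8.314;                     % gas constant [J K^-1 mol^-1]
    F = 96485;                     % Faraday constant [C mol^-1]
    T = 273.15 + 37;               % absolute temperature [K]
    z = 1;                         % valence of K+
    C_in  = 155;                   % intracellular K+ concentration [mmol/l]
    C_out = 2.5;                   % extracellular K+ concentration [mmol/l]
    E_K = (R*T/(z*F)) * log(C_out/C_in);   % Nernst potential [V], roughly -0.11 V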
Exercise 2: excitable spherical cell
• A neural excitable spherical cell with radius r = 10 μm is stimulated with an internal electrode placed at the point with coordinates (x = 2 μm, y = 3 μm) with respect to the center of the cell. The cell membrane is characterized by the following electrical properties:
• G = 5 S/m2
• C = 0.1 F/m2

• The resting potential of the cell is -70mV. The threshold voltage is -50mV
• 1. Calculate the stimulation threshold if the applied stimulus has a duration of 320 μs
• 2. Calculate the Chronaxie Tchr of the cell
• 3. Is there an action potential if the stimulus in the figure is applied?
[Figure: rectangular stimulus pulse of 1 mA amplitude and 150 μs duration]
Exercise 3: length constant
• A neuron is stimulated with a current injected at the point with axial coordinate x0. As a consequence of the applied current, the trans-membrane potential shows an increase of 40 mV.

1. Assuming that the length constant is λ = 0.2 mm, calculate the distance at which the increase in trans-membrane potential reduces to 10 mV.

2. Assuming that the resting potential is Vr = −80 mV and the threshold potential is Vth = −50 mV, calculate the maximum distance between two consecutive nodes of Ranvier that still preserves propagation of the action potential.
Exercise 4: Rattay activating function
• A cathodic current is externally applied to an axon at a distance d = 3mm
from the coordinate x = 0 of the membrane. The applied stimulation
generates the following electric potential in the surrounding tissues:

V(r) = c/r

• 1. Calculate the spatial distribution of the external potential Ve along the membrane in the axial direction.
• 2. Determine the polarization of the membrane at the coordinate x = 1 mm (depolarization or hyperpolarization), and determine the coordinate at which the polarization direction changes.
• 3. What is the measurement unit of the Rattay activating function?
Exercise 5: choose the best configuration
• A nerve fiber has a Rheobase Irh = 1.2 mA. We have an old electrical stimulator that, unfortunately, can only apply rectangular pulses from the following reduced set of configurations.

[Figure: four available rectangular pulse configurations, with amplitudes of 4 mA, 2 mA, 1.5 mA and 1 mA and durations of 150 μs, 200 μs, 75 μs and 3 ms]

• Which pulse configuration would you use to (try to) generate a single
action potential on the nerve fiber? Why?
Why did I say “try to” ?
Neural Engineering 2016-2017
Cristiano De Marchis

Exercise 4: Clustering

The file spike_train_ex4.mat (from Exercitation 2) contains a neural signal recorded at 10 kHz.
1. Write a function implementing the k-means algorithm, requiring the following input parameters:
   a. Features Matrix, an N×M matrix defined from N detected spikes out of which M features have been extracted
   b. K, the number of desired classes
   and providing as output an N-element vector classes, in which each i-th element contains the class the i-th spike belongs to (i.e. the corresponding firing neuron). The algorithm also provides the cluster centroids and radii as output.
2. Write a function implementing the k-medoids algorithm (it is quite similar to the
implementation of the k-means)

3. Apply a clustering technique to the features extracted from the spike_train signal.
Suggestion: first, visually inspect the output of the spike detection algorithm to ensure
that most spikes have been detected. After feature extraction, you can select the
features that you prefer or you consider meaningful. Before applying the clustering, try
to plot one feature against another (using the ‘.’ or ‘o’ marker), to have a first guess on
the possible number of clusters.

4. After clustering, calculate and plot the average time profile of the spikes belonging to each cluster, and calculate the firing properties of each neuron (average μFR and standard deviation σFR of the firing rate).
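A minimal MATLAB sketch of the k-means iteration on a feature matrix (random centroid initialization and a fixed number of iterations are illustrative assumptions, not the required solution; the cluster radii are omitted for brevity):

    function [classes, centroids] = kmeans_simple(F, K, maxiter)
    % F: N x M feature matrix, K: number of clusters
    [N, ~] = size(F);
    centroids = F(randperm(N, K), :);            % K random spikes as initial centroids
    classes = zeros(N, 1);
    for it = 1:maxiter
        for i = 1:N                              % assignment step: nearest centroid
            [~, classes(i)] = min(sum(bsxfun(@minus, centroids, F(i, :)).^2, 2));
        end
        for k = 1:K                              % update step: mean of assigned points
            if any(classes == k)
                centroids(k, :) = mean(F(classes == k, :), 1);
            end
        end
    end
    end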
Neural Engineering 2016-2017
Cristiano De Marchis

Exercise 5: Muscle Synergies

The file EMG_signal_exercitation5.mat contains eight EMG signals recorded from the following
lower limb muscles of a healthy subject during a pedaling task:
- Gluteus Maximus
- Biceps Femoris
- Gastrocnemius Medialis
- Soleus
- Rectus Femoris
- Vastus Medialis
- Vastus Lateralis
- Tibialis Anterior
EMG signals are sampled at 1 kHz. The file also contains the biomechanical reference from the
pedal angle, recorded at 1 kHz.
1. Organize the matrix M for subsequent synergy analysis, resampling the EMG envelope
from each cycle on 100 points, representative of the integer percentages of the pedaling
cycle, and normalize each EMG envelope to the median peak value across all the
consecutive cycles.

2. Write a function implementing the Nonnegative Matrix Factorization Algorithm with the
Lee&Seung multiplicative update rules, requiring the following input parameters:

   a. M, the previously organized 8×(100×NCYCLES) matrix
   b. K, the number of synergies to be extracted
   c. maxiter, the maximum number of iterations before stopping the algorithm
   and provides the following variables as output:
   d. a matrix W (8×K) containing the K synergy vectors
   e. a matrix H (K×(100×NCYCLES)) containing the K synergy activation coefficients
   f. VAF, indicating the Variance Accounted For by the decomposition

   The iterative procedure must automatically stop if the norm ||M − WH|| does not change by more than 0.001 over 10 consecutive iterations.

3. Identify the correct number of synergies underlying the data based on a 90% threshold
on the VAF curve, and graphically represent the identified motor modules and the
average synergy activation coefficients (suggestion: use a bar plot for the synergy
vectors and a normal plot for the synergy activation coefficients).
