
Categories of Learning


Online and Offline Learning:

In off-line learning, all the given patterns are used together to determine the weights. In on-line learning, on the other hand, the information in each new pattern is incorporated into the network by incrementally adjusting the weights.
On-line learning allows the neural network to update the stored information continuously as new patterns arrive. However, off-line learning provides better solutions than on-line learning, since the information is extracted using all the training samples at once.
The training patterns can be considered as samples of random processes, so the activation and output states can also be considered as samples of random processes. Randomness in the output state can also result if the output function is implemented in a probabilistic rather than a deterministic manner. These input, activation and output variables may also be viewed as fuzzy quantities instead of crisp quantities. Thus the learning process can be classified as deterministic, stochastic, fuzzy, or a combination of these characteristics.
In the implementation of the learning methods, the variables may be discrete or continuous. Likewise, the update of weight values may be in discrete steps or in continuous time. All these factors influence not only the convergence of weights, but also the ability of the network to learn from the training samples.
Learning Methods

Hebbian Learning:
The basis for the class of Hebbian learning is that the change in the synaptic strength is proportional to the correlation between the firing of the post- and pre-synaptic neurons.
The synaptic dynamics equation is given by a decay term (-wij(t)) and a correlation term (si sj) as

    dwij(t)/dt = -wij(t) + si(t) sj(t)

where si sj is the product of the post-synaptic and pre-synaptic neuronal variables for the ith and jth units. These variables could be activation values (si sj = xi(t) xj(t)), or an activation value and an external input, or an output signal and an external input, or output signals from two units, or some other parameters related to the post-synaptic and pre-synaptic activity.
The solution of the above equation is given by

    wij(t) = wij(0) e^(-t) + ∫₀ᵗ e^(-(t-τ)) si(τ) sj(τ) dτ
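The continuous-time law above can be approximated in discrete time with a simple Euler step. The following sketch (with an assumed learning-rate parameter eta and constant illustrative signals, not values from the text) shows the weight relaxing toward the correlation si*sj:

```python
def hebbian_step(w, s_i, s_j, eta=0.1):
    """One Euler step of dw/dt = -w + s_i * s_j.

    The weight decays toward the running correlation of the
    post-synaptic signal s_i and pre-synaptic signal s_j.
    """
    return w + eta * (-w + s_i * s_j)

# With constant signals, the weight converges to s_i * s_j = 0.4.
w = 0.0
for _ in range(200):
    w = hebbian_step(w, s_i=0.8, s_j=0.5)
print(round(w, 3))
```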

Differential Hebbian Learning
The deterministic differential Hebbian learning is described by

    dwij(t)/dt = -wij(t) + (dsi(t)/dt)(dsj(t)/dt)

That is, the differential equation consists of a passive decay term -wij(t) and a correlation term (dsi/dt)(dsj/dt), which results from the changes in the post- and pre-synaptic neuronal activations.

The stochastic versions of these laws are approximated by adding a noise term to the right-hand side of these differential Hebbian learning equations.
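A discrete sketch of this law, in which the signal derivatives are replaced by finite differences of successive signal values (the step size eta and the signal changes are illustrative assumptions):

```python
def diff_hebbian_step(w, ds_i, ds_j, eta=0.05):
    """One Euler step of dw/dt = -w + (ds_i/dt)(ds_j/dt).

    ds_i and ds_j stand in for the signal derivatives, e.g. the
    finite differences s(t) - s(t-1) of the two neuronal signals.
    """
    return w + eta * (-w + ds_i * ds_j)

# Signals changing in the same direction strengthen the weight;
# a weight left unchanged by static signals (ds = 0) only decays.
w = diff_hebbian_step(0.0, ds_i=0.3, ds_j=0.3)   # correlated change
print(w > 0.0)
```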

Competitive Learning

Learning laws which modulate the difference between the output signal and the synaptic weight belong to the category of competitive learning. The general form of competitive learning is given by the following expression for the synaptic dynamics:

    dwij(t)/dt = si(t) [sj(t) - wij(t)]

where si = fi(xi(t)) is the output signal of the unit i, and sj = fj(xj(t)) is the output signal of the unit j. This is also called the deterministic competitive learning law. It can be written as

    dwij(t)/dt = si(t) sj(t) - si(t) wij(t)

Differential Competitive Learning
Differential competition means that learning takes place only if there is a change in the post-synaptic neuronal activation. The deterministic differential competitive learning is described by

    dwij(t)/dt = (dsi(t)/dt) [sj(t) - wij(t)]

This combines competitive learning and differential Hebbian learning.
The linear differential competitive learning law, in which the output function of the unit j is linear (sj = xj), is described by

    dwij(t)/dt = (dsi(t)/dt) [xj(t) - wij(t)]
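A discrete sketch of the deterministic competitive law above (the learning rate and signal values are illustrative assumptions). It shows the key property of competition: only a unit with nonzero output, the winner, moves its weight toward the pre-synaptic signal, while a losing unit's weight is frozen:

```python
def competitive_step(w, s_i, s_j, eta=0.1):
    """One Euler step of dw/dt = s_i * (s_j - w).

    The post-synaptic output s_i gates the update: the weight
    moves toward the pre-synaptic signal s_j only when s_i != 0.
    """
    return w + eta * s_i * (s_j - w)

# The winner's weight tracks s_j = 0.7; the loser (s_i = 0) never moves.
w_winner, w_loser = 0.0, 0.0
for _ in range(200):
    w_winner = competitive_step(w_winner, s_i=1.0, s_j=0.7)
    w_loser = competitive_step(w_loser, s_i=0.0, s_j=0.7)
print(round(w_winner, 3), w_loser)
```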
Stochastic Learning
● Stochastic learning involves adjustment of weights of a neural network in a probabilistic manner. The
adjustment uses a probability law, which in turn depends on the error. The error for a network is a
positive scalar defined in terms of the external input, desired output and the weights connecting the
units.
● In the learning process, a random weight change is made and the resulting change in the error is determined. If the resulting error is lower, the random weight change is accepted. If the resulting error is not lower, the random weight change is accepted with a predetermined probability distribution.
● The acceptance of a random change of weights despite an increase in the error allows the network to escape local minima in the search for the global minimum of the error surface.
● Boltzmann learning uses stochastic learning along with simulated annealing to determine the weights of a feedback network to store a given set of patterns.
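The accept/reject rule described above can be sketched with the Metropolis criterion used in simulated annealing. The quadratic error function, step size, and cooling schedule below are illustrative assumptions, not from the text:

```python
import math
import random

def stochastic_update(w, error_fn, step=0.5, T=1.0):
    """Propose a random weight change; accept it if the error drops,
    otherwise accept it with probability exp(-delta_error / T)."""
    w_new = w + random.uniform(-step, step)
    delta = error_fn(w_new) - error_fn(w)
    if delta < 0 or random.random() < math.exp(-delta / T):
        return w_new   # accept (always when the error decreased)
    return w           # reject: keep the old weight

# Illustrative error surface with its global minimum at w = 2.
error = lambda w: (w - 2.0) ** 2

random.seed(0)
w = 0.0
for t in range(1, 2001):
    w = stochastic_update(w, error, T=1.0 / t)  # simple cooling schedule
print(round(error(w), 4))
```

As the temperature T falls, uphill moves become increasingly unlikely, so the search settles near the minimum instead of wandering forever.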
Stability and Convergence
● Stability refers to the equilibrium behaviour of the activation state of a neural network, whereas
convergence refers to the adjustment behaviour of the weights during learning, which will eventually
lead to minimization of error between the desired and actual outputs.
● Convergence is typically associated with supervised learning, although it is relevant in all cases of learning, both supervised and unsupervised.
● The objective of any learning law is that it should eventually lead to a set of weights which will
capture the pattern information in the training set data.
Recall

During learning, the weights are adjusted to store the information in a given pattern or a pattern pair. However, during performance, the weight changes are suppressed, and the input to the network determines the output activation xi or the signal value si. This operation is called recall of the stored information. The recall techniques are different for feedforward and feedback networks.
The simplest feedforward network uses the following equation to compute the output signal si from the input data vector a:

    si = fi( Σj wij aj )

where fi(.) is the output function of the ith unit.
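A minimal sketch of this feedforward recall, computing all units at once with a weight matrix; the weights, input vector, and the choice of the signum output function are illustrative assumptions:

```python
import numpy as np

def recall_feedforward(W, a, f=np.sign):
    """Compute s_i = f(sum_j w_ij * a_j) for every unit.

    W is the weight matrix (one row per unit), a is the input
    data vector, and f is the output function of the units.
    """
    return f(W @ a)

# Illustrative weights and input (assumed values, not from the text).
W = np.array([[0.5, -0.2],
              [-0.4, 0.9]])
a = np.array([1.0, -1.0])
print(recall_feedforward(W, a))  # [ 1. -1.]
```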
A recall equation for a network with feedback connections is given by the following additive model for activation dynamics:

    xi(t+1) = α xi(t) + β Σj fj(xj(t)) wij + ai

where xi(t+1) is the activation of the ith unit of a single-layer feedback network at time (t+1), fj(.) is the nonlinear output function of the jth unit, α (< 1) is a positive constant that regulates the amount of decay the unit has during the update interval, β is a positive constant that regulates the amount of feedback the other units provide to the ith unit, and ai is the external input to the ith unit.
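A minimal sketch of this additive recall model, iterated for a fixed number of steps until the activation state settles; the weights, input, the constants α and β, and the tanh output function are illustrative assumptions:

```python
import numpy as np

def recall_feedback(W, a, alpha=0.5, beta=0.5, steps=50, f=np.tanh):
    """Iterate x(t+1) = alpha*x(t) + beta*(W @ f(x(t))) + a.

    alpha (< 1) controls decay, beta scales the feedback from the
    other units, and a is the external input vector.
    """
    x = np.zeros_like(a)
    for _ in range(steps):
        x = alpha * x + beta * (W @ f(x)) + a
    return x

# Illustrative 2-unit network (assumed weights and input, not from the text).
W = np.array([[0.0, 0.3],
              [0.3, 0.0]])
a = np.array([0.2, -0.1])
x = recall_feedback(W, a)
print(np.round(x, 3))
```

Because α < 1 and the feedback is bounded, the iteration is a contraction here and x converges to a fixed point of the update equation.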
