Module 1
Contents
Introduction: Biological Neuron – Day 1
Artificial Neural Model – Types of activation functions – Day 2
Architecture: Feedforward and Feedback – Day 3
Convex Sets, Convex Hull and Linear Separability – Day 3
Non-Linear Separable Problem – Day 4
XOR Problem – Day 4
Multilayer Networks – Day 5
Learning: Learning Algorithms – Day 5
Error correction and Gradient Descent Rules – Day 6
Learning objective of TLNs – Day 6
Perceptron Learning Algorithm – Day 7
Perceptron Convergence Theorem – Day 8
∙ All the neuron signal functions introduced thus far are deterministic in the sense that the signal
value is completely determined by the instantaneous activation value that the neuron acquires, in
conjunction with the neuron signal function. There is no randomness.
∙ However, synaptic transmission in biological neurons is essentially a noisy process brought about
through random fluctuations resulting from the release of neurotransmitters, and numerous other
uncertain causes.
∙ Such randomness, generated through synaptic noise, should be taken into account within a
mathematically tractable framework.
∙ Stochasticity can be introduced into the neuron by assuming that its activation-to-signal update is
no longer deterministic.
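A common way to make the activation-to-signal update stochastic is to treat the signal function as a firing probability rather than a deterministic output. The sketch below is one minimal illustration of this idea (the sigmoid firing probability and the ±1 signal values are assumptions, not prescribed by the notes):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_neuron(activation, rng=rng):
    """Emit +1 with probability sigmoid(activation), else -1.

    The sigmoid here plays the role of a firing probability,
    so the same activation can yield different signals on
    different presentations.
    """
    p = 1.0 / (1.0 + np.exp(-activation))  # P(signal = +1)
    return 1 if rng.random() < p else -1
```

For strongly positive activations the neuron fires +1 almost always; for strongly negative activations it almost always emits -1, with genuine randomness near zero activation.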
∙ For example, if a 5-neuron input layer feeds its signals to a 4-neuron output layer, an input vector from a
five-dimensional space is transformed, or mapped, to a signal vector in a four-dimensional space.
∙ Neural networks thus generate mappings from one space to another, or sometimes from the input space back
to itself.
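The 5-to-4 mapping described above can be sketched in a few lines. The random weights and the choice of tanh as the signal function are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 5))   # 4 output neurons, each with 5 input weights

x = rng.standard_normal(5)        # an input vector in R^5
y = np.tanh(W @ x)                # its image: a signal vector in R^4
```

Each row of `W` holds one output neuron's weights, so the matrix-vector product computes all four activations at once before the signal function is applied componentwise.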
∙ In recurrent neural networks, network activations and signals are in a flux of change until (and unless) they
settle down to a steady state.
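The settling behaviour of a recurrent network can be illustrated by iterating the activation update until the signals stop changing. This is a minimal sketch assuming tanh units and a small, hypothetical symmetric weight matrix chosen so the iteration contracts to a fixed point:

```python
import numpy as np

# Hypothetical symmetric recurrent weights (|entries| < 1 aids settling).
W = np.array([[0.0, 0.5],
              [0.5, 0.0]])
x = np.array([1.0, -1.0])          # initial network signals

# Iterate the signal update until it reaches a steady state.
for _ in range(100):
    x_new = np.tanh(W @ x)
    if np.max(np.abs(x_new - x)) < 1e-6:
        break                      # signals have settled
    x = x_new
```

With these weights the signals oscillate with shrinking amplitude and settle at the fixed point near the origin; with other weight choices a recurrent network may settle elsewhere or not at all.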
• The threshold logic neurons can be connected into multiple layers in a feedforward fashion.
• It is not uncommon to have more than one hidden layer when solving a classification
problem.
• Such a situation arises when a number of disjoint regions in the pattern space are
assigned to a single class.
1. Each neuron in the first hidden layer forms a hyperplane in the input pattern space.
2. A neuron in the second hidden layer can form a hyper-region from the outputs of the first layer neurons by
performing an AND operation on the hyperplanes. These neurons can thus approximate the boundaries
between pattern classes.
3. The output layer neurons can then combine the disjoint regions formed by the neurons in the second
hidden layer into decision regions by performing logical OR operations.
No more than three layers of binary threshold neurons are required in a feedforward network to form
arbitrarily complex decision regions.
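The three-step hyperplane/AND/OR construction can be sketched with unit-step neurons. The example below is a hypothetical class consisting of two disjoint unit squares, A = (0,1)×(0,1) and B = (2,3)×(0,1), in a two-dimensional pattern space:

```python
import numpy as np

def step(a):
    return (a > 0).astype(float)       # unit-step threshold function

def hidden1(p):
    # Layer 1: six hyperplanes (half-planes) bounding the two squares.
    x, y = p
    return step(np.array([x, 1 - x, x - 2, 3 - x, y, 1 - y]))

def hidden2(h):
    # Layer 2: AND the four hyperplanes bounding each square.
    # Threshold of 3.5 fires only when all four inputs are 1.
    in_A = step(np.array([h[0] + h[1] + h[4] + h[5] - 3.5]))
    in_B = step(np.array([h[2] + h[3] + h[4] + h[5] - 3.5]))
    return np.concatenate([in_A, in_B])

def output(r):
    # Layer 3: OR the disjoint regions (threshold 0.5 fires on any 1).
    return step(np.array([r[0] + r[1] - 0.5]))[0]

def classify(p):
    return output(hidden2(hidden1(p)))
```

A point inside either square is classified as 1 and a point between them as 0, showing how three layers of threshold neurons carve out a disconnected decision region.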
1. Error correction rules that alter the weights of a network using a linear error measure to reduce the error in
the output generated in response to the present input pattern.
2. Gradient rules that alter the weights of a network during each pattern presentation by employing gradient
information with the objective of reducing the mean squared error (usually averaged over all training
patterns).
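The LMS algorithm mentioned further below is a standard instance of such a per-pattern gradient rule: each presentation takes a step along the negative gradient of the squared linear error. This sketch fits a hypothetical linear target d = 2·x1 − x2 from noiseless samples:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training set: samples of the linear target d = 2*x1 - x2.
X = rng.standard_normal((100, 2))
d = 2 * X[:, 0] - X[:, 1]

w = np.zeros(2)
eta = 0.05                          # learning rate (an assumed value)

# LMS: per-pattern gradient step on the squared error e^2,
# since -d(e^2)/dw = 2*e*x (the factor 2 is absorbed into eta).
for epoch in range(50):
    for x, target in zip(X, d):
        e = target - w @ x          # linear error for this pattern
        w += eta * e * x            # move weights to reduce e^2
```

Because the error measure is linear in the weights, repeated small steps drive the weight vector toward the target solution [2, −1].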
• A TLN is actually a linear neuron whose output is directed into a unit step or signum function.
• The neuron is adaptive when its weights are allowed to change in accordance with a well-defined learning
law.
• Commonly used adaptive algorithms for such threshold logic neurons are the Perceptron learning algorithm
and the least-mean-square (LMS) algorithm.
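The perceptron learning algorithm for such a TLN can be sketched as follows; the AND-style toy data and the specific learning rate are illustrative assumptions:

```python
import numpy as np

def perceptron_train(X, y, epochs=20, eta=1.0):
    """Perceptron learning rule for a TLN with signum output.

    X: patterns of shape (n_samples, n_features); y: labels in {-1, +1}.
    A bias is folded in as an extra constant input of 1.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for x, t in zip(Xb, y):
            if t * (w @ x) <= 0:    # pattern misclassified (or on boundary)
                w += eta * t * x    # rotate the hyperplane toward the pattern
    return w

# Linearly separable toy problem: labels follow logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w = perceptron_train(X, y)
```

Since the classes are linearly separable, the convergence theorem guarantees the updates stop after finitely many corrections, at which point every pattern satisfies t·(w·x) > 0.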