
1.6 Topology
Artificial neural networks are useful only when the processing units are organised in a suitable manner to accomplish a given pattern recognition task.
The arrangement of the processing units, connections, and pattern input/output is referred to as the topology.
Artificial neural networks are normally organized into layers of processing units. Connections may be:
1. Interlayer: between units in different layers
2. Intralayer: between units within the same layer
a. Feed forward: information flows in only one direction (a minimal sketch follows this list)
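To make the feed-forward case concrete, here is a minimal NumPy sketch (an illustration, not taken from the text): the layer sizes, the choice of tanh as the output function, and the name feedforward are all assumptions.

```python
import numpy as np

def feedforward(x, layers):
    """Propagate the input x through successive layers.

    layers: a list of (W, b) pairs, one per interlayer connection;
    information flows in one direction only, from input to output.
    """
    a = x
    for W, b in layers:
        a = np.tanh(W @ a + b)  # tanh used here as an example output function
    return a

# Example topology (assumed): 3 inputs -> 4 hidden units -> 2 output units
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),
          (rng.standard_normal((2, 4)), np.zeros(2))]
print(feedforward(np.array([1.0, -1.0, 0.5]), layers))
```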
1.6.1 Basic structures of ANN
1.7 Basic Learning Laws:

The operation of a neural network is governed by neuronal dynamics. Neuronal dynamics consists of two parts: one corresponding to the dynamics of the activation state and the other corresponding to the dynamics of the synaptic weights.
A model of synaptic dynamics is described in terms of expressions for the first derivative of the weights. They are called learning equations.
Learning laws describe the weight vector for the ith processing unit at time instant (t + 1) in terms of the weight vector at time instant (t) as follows:

w_i(t + 1) = w_i(t) + Δw_i(t)

where Δw_i(t) is the change in the weight vector.
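As a small illustration (names assumed, not from the text), each law below only has to supply Δw_i(t); the update itself is a single addition:

```python
import numpy as np

def apply_learning_law(w, delta_w):
    """One discrete-time update: w(t + 1) = w(t) + Δw(t)."""
    return np.asarray(w) + np.asarray(delta_w)
```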
1.7.1 Hebb’s Law

The law states that the weight increment is proportional to the product of the
input data and the resulting output signal of the unit.
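A minimal sketch of this rule, assuming an input vector a, a learning rate eta, and tanh as the output function (illustrative choices, not fixed by the text):

```python
import numpy as np

def hebb_delta(w, a, eta=0.1, f=np.tanh):
    """Hebb's law: Δw = η f(wᵀa) a, i.e. the weight change is proportional
    to the product of the input a and the unit's output signal f(wᵀa)."""
    s = f(w @ a)      # resulting output signal of the unit
    return eta * s * a
```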
1.7.2 Perceptron Learning Law

The change in the weight vector is given by:

Δw_i = η [b_i − sgn(w_i^T a)] a

where sgn(x) is the sign of x, η is the learning rate, a is the input vector, and b_i is the desired output. Therefore, we have

Δw_i = η [b_i − s_i] a

where s_i = sgn(w_i^T a) is the actual output. This law is applicable only for bipolar output functions f(.). It is also called the discrete perceptron learning law. The expression for Δw_i shows that the weights are adjusted only if the actual output s_i is incorrect, since the term in the square brackets is zero for the correct output.
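A minimal sketch of the discrete perceptron law, using the same assumed names (input a, desired bipolar output b, learning rate eta):

```python
import numpy as np

def perceptron_delta(w, a, b, eta=0.1):
    """Discrete perceptron law: Δw = η [b - sgn(wᵀa)] a.
    If the actual output sgn(wᵀa) already equals the desired bipolar
    output b, the bracketed term is zero and the weights stay unchanged."""
    s = np.sign(w @ a)   # actual bipolar output
    return eta * (b - s) * a
```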


1.7.3 Delta Learning Law

The change in the weight vector is given by:

Δw_i = η [b_i − f(w_i^T a)] f'(w_i^T a) a

where f'(x) is the derivative of f(x) with respect to x. Hence,

Δw_i = η [b_i − s_i] f'(w_i^T a) a

where s_i = f(w_i^T a) is the actual output. This law is valid only for a differentiable output function, as it depends on the derivative of the output function f(.). It is a supervised learning law, since the change in the weight is based on the error between the desired and the actual output values for a given input. The delta learning law can also be viewed as a continuous perceptron learning law.
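A minimal sketch of the delta law, taking tanh as the differentiable output function so that f'(x) = 1 - tanh(x)^2 (an illustrative assumption):

```python
import numpy as np

def delta_law_delta(w, a, b, eta=0.1):
    """Delta law: Δw = η [b - f(wᵀa)] f'(wᵀa) a, with f = tanh."""
    s = np.tanh(w @ a)                         # actual continuous output f(wᵀa)
    return eta * (b - s) * (1.0 - s ** 2) * a  # (1 - s²) is f'(wᵀa) for tanh
```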
1.7.4 Widrow and Hoff LMS Learning Law

The change in the weight vector is given by:

Δw_i = η [b_i − w_i^T a] a

Hence,

Δw_i = η [b_i − s_i] a

where s_i = w_i^T a, since the output function is linear. In this case the change in the weight is made proportional to the negative gradient of the squared error between the desired output and the continuous activation value, which is also the continuous output signal due to the linearity of the output function. Hence, this is also called the Least Mean Squared (LMS) error learning law.
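A minimal sketch of the LMS step, under the same assumed names; because the output function is linear, the activation wᵀa is itself the output:

```python
import numpy as np

def lms_delta(w, a, b, eta=0.1):
    """Widrow-Hoff LMS law: Δw = η [b - wᵀa] a.
    This follows the negative gradient of the squared error
    (b - wᵀa)² / 2 with respect to the weights."""
    return eta * (b - w @ a) * a
```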
