
Introduction to Artificial Neural Networks
Dr. Rakesh R Warier
Neural Networks and Artificial Neural Networks
• Neural Networks: networks of neurons, found in real (i.e. biological) brains
• Artificial Neurons
• crude approximations of biological neurons
• may be physical devices, or purely mathematical constructs
• Networks of artificial neurons form Artificial Neural Networks (ANNs)
• ANNs are crude approximations to parts of real brains.
Advantages of NNs
• They are extremely powerful computational devices.
• Massive parallelism makes them very efficient.
• They can learn and generalize from training data, so there is no need for enormous feats of programming.
• They are particularly fault tolerant – the equivalent of the graceful degradation found in biological systems: 'you could shoot every tenth neuron in the brain and not even notice it'.
• They are very noise tolerant, so they can cope with situations where normal systems would have difficulty.
Neuron to Node
History
Models of a Neuron
• A set of connecting links (weights)
• An adder
• An activation function
• The threshold as a special kind of weight (bias)
Examples: AND gate

Table 1: Dataset
x1  x2 | y
 0   0 | 0
 0   1 | 0
 1   0 | 0
 1   1 | 1
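The AND gate above can be computed by a single threshold unit. A minimal sketch in Python (the weights w1 = w2 = 1 and threshold theta = 1.5 are one valid choice, not prescribed by the slides):

```python
def threshold_unit(x1, x2, w1=1.0, w2=1.0, theta=1.5):
    """Fire (output 1) iff the weighted sum reaches the threshold theta."""
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

# Only the input (1, 1) pushes the weighted sum (= 2) past theta = 1.5,
# so the unit reproduces the AND truth table.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", threshold_unit(x1, x2))
```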
Finding Weights Analytically
Decision Boundaries
The decision boundary is the surface at which the output of the unit is precisely equal to the threshold: on one side of this surface the output Y will be 0, and on the other side it will be 1.
In two dimensions the surface is w1x1 + w2x2 = θ, which we can rewrite as x2 = (-w1/w2)x1 + θ/w2, the equation of a line with slope -w1/w2 and intercept θ/w2.

Example: w1 = 1, w2 = 2 gives slope -1/2 and intercept θ/2.
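A quick sketch of the boundary-line algebra above (theta = 1.5 is an assumed value; the slide fixes only w1 = 1, w2 = 2):

```python
# Decision boundary: w1*x1 + w2*x2 = theta  rearranges to
# x2 = -(w1/w2)*x1 + theta/w2.
w1, w2, theta = 1.0, 2.0, 1.5

slope = -w1 / w2          # -0.5
intercept = theta / w2    # 0.75

# Any point on this line yields a weighted sum exactly at the threshold:
x1 = 0.5
x2 = slope * x1 + intercept
assert abs(w1 * x1 + w2 * x2 - theta) < 1e-12
```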
Decision Boundaries for AND and OR

We can see that the gradient of the decision surface should be negative, and the intercept should be between 0 and 1 to implement OR logic (and greater than 1 for AND logic).
Example : Iris flowers
Feature vectors
Feature space
Staircase Light Logic
Decision Boundaries for XOR

We need two straight lines to separate the different outputs/decisions.
There are two obvious remedies: either change the transfer function so that it has more than one decision boundary, or use a more complex network that is able to generate more complex decision boundaries.
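The second remedy can be sketched directly: a hidden layer of two threshold units gives XOR. The weights below are one standard choice (h1 computes OR, h2 computes AND, and the output fires when h1 is on but h2 is off); they are illustrative, not taken from the slides:

```python
def step(s):
    """Threshold (step) activation: 1 if s >= 0, else 0."""
    return 1 if s >= 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # hidden unit 1: OR(x1, x2)
    h2 = step(x1 + x2 - 1.5)    # hidden unit 2: AND(x1, x2)
    return step(h1 - h2 - 0.5)  # output: h1 AND NOT h2 = XOR(x1, x2)
```

Each hidden unit contributes one straight decision boundary; the output unit combines the two half-planes into the XOR region that no single line can carve out.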
Adding hidden neuron
• Adding a hidden layer allows more target functions to be represented.

What is the capability of the following network? We added one more layer with
one neuron.
[Figure: a two-layer network with inputs x1, x2 and bias input x3 = 1, one hidden layer, and the connection weights shown on the diagram]
This is a two-layer network with one hidden layer.
Illustration of some possible decision boundaries that can be generated by networks with various numbers of layers, when the activation function is a step (threshold) function.
Activation functions
Activation functions in MATLAB
n = -5:0.1:5;
a = hardlims(n);   % symmetric hard limit: outputs -1 or +1
plot(n,a)

n = -5:0.1:5;
a = hardlim(n);    % hard limit: outputs 0 or 1
plot(n,a)

• The hard-limiting threshold function corresponds to the biological paradigm: a neuron either fires or it does not.
a = logsig(n) = 1 / (1 + exp(-n))

n = -5:0.1:5;
a = logsig(n);     % log-sigmoid: maps (-inf, inf) to (0, 1)
plot(n,a)

a = tansig(n) = 2/(1+exp(-2*n)) - 1

n = -5:0.1:5;
a = tansig(n);     % tan-sigmoid: maps (-inf, inf) to (-1, 1)
plot(n,a)
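For readers without MATLAB, the same four activation functions can be sketched in Python/NumPy (the function names mirror the MATLAB ones; the formulas match those given above):

```python
import numpy as np

def hardlims(n):
    """Symmetric hard limit: -1 or +1 (MATLAB hardlims)."""
    return np.where(n >= 0, 1.0, -1.0)

def hardlim(n):
    """Hard limit: 0 or 1 (MATLAB hardlim)."""
    return np.where(n >= 0, 1.0, 0.0)

def logsig(n):
    """Log-sigmoid: 1 / (1 + exp(-n)), range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-n))

def tansig(n):
    """Tan-sigmoid: 2 / (1 + exp(-2n)) - 1, range (-1, 1)."""
    return 2.0 / (1.0 + np.exp(-2.0 * n)) - 1.0

n = np.arange(-5, 5.1, 0.1)   # same grid as the MATLAB examples
print(logsig(0.0))   # 0.5
print(tansig(0.0))   # 0.0
```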
Two class problem: Geometrical Observation
Type nnd4db in MATLAB command window.
Type nnd4pr
Network architecture for a perceptron network with several output classes.