
McCulloch-Pitts neuron model (1943)
(McCulloch & Pitts, Bulletin of Mathematical Biophysics, 5, 115-133)

MATLAB block

Activation function
Networks of McCulloch-Pitts Neurons
Perceptron
Implementation of Logical NOT, AND, and OR Gates using a Perceptron
Finding Weights Analytically
XOR Gate
Homework
Is it possible to realize the following truth table using a perceptron? If yes, realize it.
Perceptron

Frank Rosenblatt devised the Perceptron, which operated much in the same way as the human mind.
The Perceptron could "learn", and that was the breakthrough needed to pioneer today's neural networks.
The adjusting of weights to produce a particular output is called the "training" of the network; it is the mechanism that allows the network to learn.

Rosenblatt, Frank (1958). The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Cornell Aeronautical Laboratory. Psychological Review, 65(6), 386-408.

A Perceptron cannot realize an XOR gate. We need a more complex network or different transfer functions.
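The gates listed earlier (NOT, AND, OR) can all be realized by a single threshold unit. A minimal sketch, with hand-picked illustrative weights and thresholds (the specific values are assumptions, not taken from the slides):

```python
# A single McCulloch-Pitts/Perceptron-style unit with a hard threshold.
# The weight and threshold values below are illustrative choices.

def perceptron(inputs, weights, theta):
    """Fire (output 1) when the weighted sum reaches the threshold theta."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= theta else 0

def NOT(x):
    return perceptron([x], [-1], theta=0)   # 0 -> 1, 1 -> 0

def AND(a, b):
    return perceptron([a, b], [1, 1], theta=2)  # fires only for (1, 1)

def OR(a, b):
    return perceptron([a, b], [1, 1], theta=1)  # fires unless (0, 0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```

No single choice of weights and threshold works for XOR, since its positive outputs (0,1) and (1,0) cannot be separated from (0,0) and (1,1) by one line.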
Decision Boundaries
The decision boundary is the surface at which the output of the unit is precisely equal to the threshold θ.
On one side of this surface the output y will be 0, and on the other side it will be 1.
In two dimensions the surface is the line w1x1 + w2x2 = θ, which we can rewrite as x2 = (-w1/w2)x1 + θ/w2: the equation of a line with slope -w1/w2 and intercept θ/w2.

[Figure: decision boundary for w1 = 1, w2 = 2, θ = 2: the line x2 = (-1/2)x1 + 1, with slope -w1/w2 = -1/2, crossing the x2 axis at 1 and the x1 axis at 2.]
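As a quick check of the algebra, the slope and intercepts of the boundary for the example values w1 = 1, w2 = 2, θ = 2 can be computed directly:

```python
# Slope and intercepts of the decision boundary w1*x1 + w2*x2 = theta,
# using the example values from the figure (w1 = 1, w2 = 2, theta = 2).
w1, w2, theta = 1.0, 2.0, 2.0

slope = -w1 / w2            # slope of x2 = (-w1/w2)*x1 + theta/w2
x2_intercept = theta / w2   # where the line crosses the x2 axis (x1 = 0)
x1_intercept = theta / w1   # where the line crosses the x1 axis (x2 = 0)

print(slope, x2_intercept, x1_intercept)  # -0.5 1.0 2.0
```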
Learning and Generalization
Classification: The ability to assign an input observation to a
category
Learning is a relatively permanent change in behaviour brought
about by experience.
Learning: The network must learn decision surfaces from a set of training patterns so that these training patterns are classified correctly.
Generalization: After training, the network must also be able to generalize, i.e. correctly classify test patterns it has never seen before.
Usually we want our neural networks to learn well, and also to
generalize well.
Sometimes, the training data may contain errors (e.g. noise in the
experimental determination of the input values, or incorrect
classifications). In this case, learning the training data perfectly
may make the generalization worse. There is an important tradeoff
between learning and generalization that arises quite generally.
Blue lines show the decision boundaries of two neural networks to solve a
two class classification problem. Which is a better neural network?

[Figure: two candidate decision boundaries, 1 and 2, separating the two classes in the (x1, x2) plane.]
Generalization in Classification
Generalization in Function Approximation
Training a Neural Network

Whether our neural network is a simple Perceptron or a much more complicated multilayer network with special activation functions, we need a systematic procedure for determining appropriate connection weights.
The general procedure is to have the network learn the
appropriate weights from a representative set of training data.
In all but the simplest cases, however, direct computation of the
weights is intractable.
Instead, we usually start off with random initial weights and adjust
them in small steps until the required outputs are produced.
Trained networks are expected to generalize, i.e. deal
appropriately with input data they were not trained on.
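The "random initial weights, adjusted in small steps" procedure can be sketched with the Perceptron learning rule on the AND truth table. The learning rate, epoch limit, and random seed here are illustrative assumptions, not values from the slides:

```python
# Sketch of Perceptron training: start from random weights and nudge them
# in small steps until every training pattern is classified correctly.
# Target function: AND (linearly separable, so training converges).
import random

random.seed(0)
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
t = [0, 0, 0, 1]                       # AND targets

w = [random.uniform(-1, 1) for _ in range(2)]   # random initial weights
theta = random.uniform(-1, 1)                   # random initial threshold
eta = 0.1                                       # learning rate (step size)

for epoch in range(500):
    errors = 0
    for (x1, x2), target in zip(X, t):
        y = 1 if w[0] * x1 + w[1] * x2 >= theta else 0
        err = target - y
        if err != 0:
            errors += 1
            # adjust each weight a small step toward the desired output
            w[0] += eta * err * x1
            w[1] += eta * err * x2
            theta -= eta * err        # threshold moves opposite to weights
    if errors == 0:                   # all training patterns correct
        break

print([1 if w[0] * a + w[1] * b >= theta else 0 for a, b in X])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the Perceptron convergence theorem guarantees this loop terminates with all four patterns classified correctly.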
