
PERCEPTRON

Prof. Dr. Sarjon Defit, S.Kom, MSc

The Perceptron (Rosenblatt 1958)

Based on the McCulloch-Pitts neural model.

Based on a model of the retina.

Includes a training algorithm that provided the first procedure for training a simple ANN.

Its simplest form consists of a single neuron with adjustable synaptic weights and a hard limiter (a sign or step function).

Single-layer two-input perceptron

[Figure: a single-layer two-input perceptron. Inputs x1 and x2, weighted by w1 and w2, feed a linear combiner; a hard limiter with threshold θ then produces the output Y.]

Perceptron’s training algorithm

Step 1: Initialisation
Set the initial weights w1, w2, …, wn and the threshold θ to random numbers in the range [-0.5, 0.5].
If the error, e(p), is positive, we need to increase the perceptron output Y(p); if it is negative, we need to decrease Y(p).
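
A minimal sketch of this initialisation in Python (the variable names here are illustrative, not from the slides):

```python
import random

# Step 1 sketch: draw each weight and the threshold uniformly
# from [-0.5, 0.5], as prescribed above.
n = 2  # two-input perceptron
weights = [random.uniform(-0.5, 0.5) for _ in range(n)]
theta = random.uniform(-0.5, 0.5)
```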

Perceptron’s training algorithm

Step 2: Activation
Activate the perceptron by applying the inputs x1(p), x2(p), …, xn(p) and the desired output Yd(p). Calculate the actual output at iteration p = 1:

$$Y(p) = \mathrm{step}\left[\sum_{i=1}^{n} x_i(p)\, w_i(p) - \theta\right]$$

where n is the number of perceptron inputs, and step is a step activation function.
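
A sketch of this activation step in Python. It assumes the hard limiter outputs 1 when the net input reaches the threshold exactly, i.e. step(0) = 1, which is consistent with the worked AND example later in the deck:

```python
def step(v):
    # Hard limiter: fires (1) when the net input is non-negative.
    # step(0) = 1 matches the table below, where a weighted sum
    # exactly equal to theta produces Y = 1.
    return 1 if v >= 0 else 0

def activate(inputs, weights, theta):
    # Y(p) = step( sum_i x_i(p) * w_i(p) - theta )
    net = sum(x * w for x, w in zip(inputs, weights))
    return step(net - theta)
```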

Perceptron’s training algorithm
Step 3: Weight training
Update the weights of the perceptron:

$$w_i(p+1) = w_i(p) + \Delta w_i(p)$$

where Δw_i(p) is the weight correction at iteration p.

The weight correction is computed by the delta rule:

$$\Delta w_i(p) = \alpha \cdot x_i(p) \cdot e(p)$$

where α is the learning rate and e(p) = Yd(p) - Y(p) is the error at iteration p.
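
The delta rule as a sketch, with e(p) = Yd(p) - Y(p) passed in as `error`:

```python
def update_weights(weights, inputs, error, alpha):
    # Delta rule: w_i(p+1) = w_i(p) + alpha * x_i(p) * e(p)
    return [w + alpha * x * error for w, x in zip(weights, inputs)]
```
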
Perceptron’s training algorithm

Step 4: Iteration
Increase iteration p by 1, go back to Step 2, and repeat the process until convergence.
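
Putting Steps 1-4 together, a minimal training loop might look like the following. It reuses the activate and update_weights sketches above; max_epochs is an assumed safety cap, since the slides only say "until convergence":

```python
def train_perceptron(samples, weights, theta, alpha, max_epochs=100):
    # Steps 2-4: activate, compute the error, apply the delta rule,
    # and sweep the training set until a full epoch yields no errors.
    for _ in range(max_epochs):
        converged = True
        for inputs, desired in samples:
            error = desired - activate(inputs, weights, theta)  # e(p)
            if error != 0:
                converged = False
                weights = update_weights(weights, inputs, error, alpha)
        if converged:
            break
    return weights
```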

Perceptron learning: Logical AND

| Epoch | x1 | x2 | Yd | Initial w1 | Initial w2 | Y | e  | Final w1 | Final w2 |
|-------|----|----|----|------------|------------|---|----|----------|----------|
| 1     | 0  | 0  | 0  | 0.3        | -0.1       | 0 | 0  | 0.3      | -0.1     |
|       | 0  | 1  | 0  | 0.3        | -0.1       | 0 | 0  | 0.3      | -0.1     |
|       | 1  | 0  | 0  | 0.3        | -0.1       | 1 | -1 | 0.2      | -0.1     |
|       | 1  | 1  | 1  | 0.2        | -0.1       | 0 | 1  | 0.3      | 0.0      |
| 2     | 0  | 0  | 0  | 0.3        | 0.0        | 0 | 0  | 0.3      | 0.0      |
|       | 0  | 1  | 0  | 0.3        | 0.0        | 0 | 0  | 0.3      | 0.0      |
|       | 1  | 0  | 0  | 0.3        | 0.0        | 1 | -1 | 0.2      | 0.0      |
|       | 1  | 1  | 1  | 0.2        | 0.0        | 1 | 0  | 0.2      | 0.0      |
| 3     | 0  | 0  | 0  | 0.2        | 0.0        | 0 | 0  | 0.2      | 0.0      |
|       | 0  | 1  | 0  | 0.2        | 0.0        | 0 | 0  | 0.2      | 0.0      |
|       | 1  | 0  | 0  | 0.2        | 0.0        | 1 | -1 | 0.1      | 0.0      |
|       | 1  | 1  | 1  | 0.1        | 0.0        | 0 | 1  | 0.2      | 0.1      |
| 4     | 0  | 0  | 0  | 0.2        | 0.1        | 0 | 0  | 0.2      | 0.1      |
|       | 0  | 1  | 0  | 0.2        | 0.1        | 0 | 0  | 0.2      | 0.1      |
|       | 1  | 0  | 0  | 0.2        | 0.1        | 1 | -1 | 0.1      | 0.1      |
|       | 1  | 1  | 1  | 0.1        | 0.1        | 1 | 0  | 0.1      | 0.1      |
| 5     | 0  | 0  | 0  | 0.1        | 0.1        | 0 | 0  | 0.1      | 0.1      |
|       | 0  | 1  | 0  | 0.1        | 0.1        | 0 | 0  | 0.1      | 0.1      |
|       | 1  | 0  | 0  | 0.1        | 0.1        | 0 | 0  | 0.1      | 0.1      |
|       | 1  | 1  | 1  | 0.1        | 0.1        | 1 | 0  | 0.1      | 0.1      |

Threshold: θ = 0.2; learning rate: α = 0.1.
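
Running the sketches above with the slide's settings reproduces this table's outcome. Exact fractions are used instead of floats because the table relies on the weighted sum hitting θ exactly (e.g. 0.2 + 0.0 - 0.2 = 0 gives Y = 1), a tie that floating-point rounding would miss:

```python
from fractions import Fraction as F

# Logical AND training set: ((x1, x2), Yd)
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Slide's settings: theta = 0.2, alpha = 0.1, initial weights (0.3, -0.1).
final = train_perceptron(and_samples, [F(3, 10), F(-1, 10)],
                         theta=F(1, 5), alpha=F(1, 10))
print([float(w) for w in final])  # [0.1, 0.1], matching epoch 5

# The trained perceptron now computes logical AND.
for inputs, desired in and_samples:
    assert activate(inputs, final, F(1, 5)) == desired
```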
