
Perceptron Algorithm

Lili Ayu Wulandhari, Ph.D.

Introduction to Perceptron
• The perceptron was the first algorithmically described neural
network, invented by Rosenblatt in 1958.
• It is the simplest neural network algorithm and is used to solve
linearly separable problems.
• The perceptron has a single-layer architecture: it receives signals
from multiple input neurons and processes them to produce an output.
• The architecture of the perceptron is given in Figure 2.1.
• The algorithm is used to adjust the free parameters of this neural
network (the weights and the bias). Rosenblatt proved that if the
patterns (vectors) used to train the perceptron are drawn from two
linearly separable classes, then the perceptron algorithm converges
and positions the decision surface in the form of a hyperplane
between the two classes (Figure 2.2).
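In the notation of Figure 2.1 (inputs xi, weights wi, bias b), this hyperplane decision surface is simply the set of inputs for which the weighted sum is zero; written out in LaTeX:

\[
  \sum_{i=1}^{n} w_i x_i + b = 0
\]

Inputs for which the sum is positive fall on one side of the hyperplane (class 1), inputs for which it is negative on the other side (class 2).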
Introduction to Perceptron

Input Output
Layer Layer

Where:

xi : The ith input neuron


Figure 2.1 Perceptron Architecture y : The output neuron
wi : Weights from ith input neuron
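With this notation, the perceptron's output is obtained by passing the weighted sum of the inputs, plus the bias b, through an activation function f; a minimal statement of this standard computation (the symbol f is not on the slide itself):

\[
  y = f\left( \sum_{i=1}^{n} w_i x_i + b \right)
\]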
Introduction to Perceptron
[Figure 2.2: Decision boundary for a linearly separable problem (a) and a non-linearly separable problem (b); each panel shows class 1 and class 2 points, and a single straight decision boundary separates them only in (a)]
Perceptron Algorithm

[Figure: single-layer perceptron, input layer connected directly to the output layer]
Perceptron Algorithm

• Binary/hard-limit function
• Bipolar function
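The formulas for these activation functions appear on the original slide only as an image; a common reconstruction, thresholding the net input at zero, is:

\[
  f_{\text{binary}}(net) =
  \begin{cases}
    1 & \text{if } net \ge 0 \\
    0 & \text{if } net < 0
  \end{cases}
  \qquad
  f_{\text{bipolar}}(net) =
  \begin{cases}
    1  & \text{if } net \ge 0 \\
    -1 & \text{if } net < 0
  \end{cases}
\]

Some textbooks instead use a threshold θ and treat the band −θ < net < θ as an undecided region; the zero-threshold form above is the simpler convention.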

Perceptron Algorithm
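The training steps on this slide are given as an image; the Python sketch below shows the standard perceptron learning procedure under the usual assumptions (bipolar targets, weights updated only on misclassified patterns). The function and variable names are illustrative, not taken from the slides.

import numpy as np

def train_perceptron(X, t, alpha=1.0, max_epochs=100):
    """Train a single perceptron with the standard learning rule.

    X     : array of shape (n_samples, n_features), training inputs
    t     : array of shape (n_samples,), bipolar targets (+1 or -1)
    alpha : learning rate
    """
    n_features = X.shape[1]
    w = np.zeros(n_features)   # weights, initialised to zero
    b = 0.0                    # bias, initialised to zero

    for epoch in range(max_epochs):
        changed = False
        for x_i, t_i in zip(X, t):
            net = np.dot(w, x_i) + b      # weighted sum plus bias
            y = 1 if net >= 0 else -1     # bipolar activation
            if y != t_i:                  # update only on a mistake
                w = w + alpha * t_i * x_i # perceptron weight update
                b = b + alpha * t_i       # bias update
                changed = True
        if not changed:                   # no updates in a full pass: converged
            break
    return w, b

# Hypothetical usage: learn the logical AND function in bipolar encoding.
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
t = np.array([1, -1, -1, -1])
w, b = train_perceptron(X, t)   # e.g. w = [1, 1], b = -1

For linearly separable data this loop stops after a finite number of epochs, which is exactly the convergence result discussed later in the deck.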

Perceptron Algorithm

• The learning rate is kept constant throughout a single training
process.
• It controls how fast learning takes place.
• A small learning rate means slower weight adjustment and a longer
time to complete the training process.
• A larger learning rate speeds up the weight adjustment, but it can
lead to instability of the training performance. The role of the
learning rate in the weight update is sketched below.
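The slide gives no explicit formula here; one standard way to write the role of the learning rate α in the perceptron update, assuming bipolar targets t in {+1, -1}, is:

\[
  w_i^{\text{new}} = w_i^{\text{old}} + \alpha \, t \, x_i,
  \qquad
  b^{\text{new}} = b^{\text{old}} + \alpha \, t
\]

applied only to training patterns that the current weights misclassify.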

Hand Calculation Example of
Perceptron Algorithm

Solution:
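The original worked example on these slides is given only as images and its specific numbers are not recoverable, so the following is a substitute hand calculation on hypothetical data: the logical AND function with bipolar inputs and targets, learning rate α = 1, weights and bias initialised to zero, bipolar activation (y = 1 if net ≥ 0, else -1), and an update wi ← wi + α t xi, b ← b + α t only when y ≠ t.

Epoch 1
  (x1, x2) = (1, 1),  t = 1:  net = 0,  y = 1  → correct, no update
  (1, -1),  t = -1:           net = 0,  y = 1  → wrong,  w = (-1, 1), b = -1
  (-1, 1),  t = -1:           net = 1,  y = 1  → wrong,  w = (0, 0),  b = -2
  (-1, -1), t = -1:           net = -2, y = -1 → correct, no update

Epoch 2
  (1, 1),   t = 1:            net = -2, y = -1 → wrong,  w = (1, 1),  b = -1
  the remaining three patterns are then classified correctly, with no update

Epoch 3
  all four patterns are classified correctly, so training stops with
  w = (1, 1), b = -1, i.e. the decision boundary x1 + x2 - 1 = 0.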

The Perceptron Convergence Theorem
• The theorem states that for any data set which is linearly
separable, the perceptron learning rule is guaranteed to find a
solution in a finite number of iterations.

• This theorem was proven by Rosenblatt; the linearly separable
situation it applies to is the one illustrated in Figure 2.2(a).

• This finite number of iterations corresponds to the training
process that finds appropriate weights for the whole data set; a
standard bound on it is sketched below.
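The slides do not state the bound explicitly; one standard form of the convergence result (Novikoff's theorem, with the bias absorbed into the weight vector via a constant input) is: if every training vector satisfies ||x|| ≤ R and some unit-length weight vector separates the two classes with margin γ > 0, then the number of weight updates k made by the perceptron learning rule satisfies

\[
  k \;\le\; \frac{R^2}{\gamma^2}
\]

so training terminates after finitely many mistakes, regardless of the order in which the patterns are presented.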

References
• Simon S. Haykin. Neural Networks and Learning Machines, 3rd
edition. Pearson Education, Upper Saddle River, 2009.
• Sandhya Samarasinghe. Neural Networks for Applied Sciences and
Engineering: From Fundamentals to Complex Pattern Recognition. CRC
Press, 2006.

