
Single layer perceptron

• The perceptron is the simplest form of a neural network. It is used for the classification of a special type of patterns: linearly separable patterns.
• A single layer perceptron consists of a single neuron with adjustable synaptic weights and a threshold, as shown in the figure.
• By expanding the output layer of the perceptron to include more than one neuron, we may form classifications with more than two classes, provided the classes are linearly separable.
• The resulting sum from the summing node is applied to a hard limiter.
• The neuron therefore produces an output equal to +1 if the hard limiter input is positive, and -1 if the hard limiter input is negative.

• The purpose is to classify the inputs into one of two classes, C1 and C2.


The decision rule is to assign the point represented by the inputs x1, x2, …, xm to class C1 if the perceptron output is +1, and to class C2 if the perceptron output is -1.
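As a concrete illustration, here is a minimal sketch of this forward pass in Python; the weight values, threshold, and input are invented for the example, and the sign convention (output +1 when the hard limiter input is positive) follows the rule above.

import numpy as np

def perceptron_output(x, w, b):
    # Induced local field: weighted sum of the inputs minus the threshold.
    v = np.dot(w, x) - b
    # Hard limiter: +1 (class C1) if v is positive, -1 (class C2) otherwise.
    return 1 if v > 0 else -1

# Invented example: two inputs with fixed weights and threshold.
w = np.array([0.5, -0.3])
b = 0.1
x = np.array([1.0, 2.0])
print(perceptron_output(x, w, b))  # prints -1, i.e. class C2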
The two decision regions are separated by a hyperplane defined by $\sum_{i=1}^{m} w_i x_i - b = 0$.
• If we have only two inputs, the decision boundary takes the form of a straight line, as shown (and as worked out below).
• Above the line the decision is class C1, and below the line the decision is class C2.
• The effect of the threshold b is to shift the boundary away from the origin.
• The weights may be fixed or adapted on an iteration-by-iteration basis.
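For the two-input case, solving the boundary equation for $x_2$ (a short worked rearrangement, assuming $w_2 \neq 0$) makes the straight line explicit:

$w_1 x_1 + w_2 x_2 - b = 0 \;\Longrightarrow\; x_2 = \frac{b - w_1 x_1}{w_2}$

This is a line with slope $-w_1/w_2$ and intercept $b/w_2$, so increasing the threshold b indeed shifts the boundary away from the origin.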
The Perceptron Convergence Theorem
• The argument n denotes the iteration number, so b(n) is the threshold at the nth iteration.
• $X(n) = [-1, x_1(n), x_2(n), \ldots, x_m(n)]^T$
• $W(n) = [b(n), w_1(n), w_2(n), \ldots, w_m(n)]^T$
• $v(n) = W^T(n)\,X(n)$
• $W^T X \geq 0$ for every input vector X belonging to C1
• $W^T X < 0$ for every input vector X belonging to C2
• The training problem is to find a weight vector W such that the two inequalities above are satisfied; a short sketch of the augmented notation follows.
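Here is a minimal sketch of the augmented notation in Python (the numeric values are invented for illustration). Folding the threshold into the weight vector lets the entire decision be computed as a single inner product:

import numpy as np

# Invented example values for a two-input perceptron.
b = 0.1
w = np.array([0.5, -0.3])
x = np.array([1.0, 2.0])

# Augmented vectors: X leads with -1 and W leads with the threshold b,
# so that v = W^T X = w.x - b in one inner product.
X = np.concatenate(([-1.0], x))
W = np.concatenate(([b], w))
print(W @ X, np.dot(w, x) - b)  # both print -0.2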
• Parameters and variables:
• X(n): the input vector
• W(n): the weight vector
• b(n): the threshold
• y(n): the actual response
• d(n): the desired response
• η: the learning-rate parameter, between 0 and 1
• Step 1 Initialization:
Set W(0)=0, then perform the following computations for n=1,2,…
• Step 2 Activation:
At time n activate the perceptron by applying x(n) and d(n)
• Step 3 Computation of actual response:
$y(n) = \operatorname{sgn}\big(W^T(n)\,X(n)\big)$, where
$\operatorname{sgn}(v) = \begin{cases} +1 & \text{if } v > 0 \\ -1 & \text{if } v < 0 \end{cases}$
• Step 4 Adaptation of weight vector:
$W(n+1) = W(n) + \eta\,[d(n) - y(n)]\,X(n)$, where
$d(n) = \begin{cases} +1 & \text{if } X(n) \text{ belongs to } C_1 \\ -1 & \text{if } X(n) \text{ belongs to } C_2 \end{cases}$
• Step 5: Increment time n by 1 unit and go back to step 2
• The error signal is d(n) − y(n); a complete sketch of the loop is given below.
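Putting steps 1 through 5 together, here is a minimal training-loop sketch in Python. The function name, the epoch cap, and the toy dataset are invented for illustration; the update rule and sign conventions follow the steps above.

import numpy as np

def sgn(v):
    # Hard limiter: +1 for positive input, -1 otherwise.
    return 1 if v > 0 else -1

def train_perceptron(inputs, labels, eta=0.5, max_epochs=100):
    # inputs: array of shape (N, m); labels: +1 for C1, -1 for C2.
    # Step 1: initialization, W(0) = 0 (augmented weight vector [b, w1, ..., wm]).
    W = np.zeros(inputs.shape[1] + 1)
    for _ in range(max_epochs):
        errors = 0
        for x, d in zip(inputs, labels):
            X = np.concatenate(([-1.0], x))   # Step 2: activation with x(n), d(n)
            y = sgn(W @ X)                    # Step 3: actual response
            if y != d:
                W = W + eta * (d - y) * X     # Step 4: adaptation of the weights
                errors += 1
        if errors == 0:                       # every point classified correctly,
            break                             # so stop (converged)
    return W

# Toy linearly separable data (invented): AND-like labeling, C1 = +1, C2 = -1.
inputs = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
labels = np.array([-1, -1, -1, 1])
print(train_perceptron(inputs, labels))  # augmented weights [b, w1, w2]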
Multi-Layer Perceptron
Back-Propagation Algorithm
Summary
