
Deep learning

• Course Code:
• Unit 1
Introduction to Deep learning
• Lecture 2
Perceptron
Human Brain: Biological Neuron

• Neurons: cells forming the human nervous system
• Dendrites: take input from other neurons in the network in the form of electrical impulses
• Cell Body: generates inferences from the dendrite inputs and decides what action to take
• Axon Terminals: transmit outputs in the form of electrical impulses to the next neuron
• Synapses: allow weighted transmission of signals between axons and dendrites to build up large neural networks

On average, the human brain contains about 100 billion neurons!

Amity Centre for Artificial Intelligence, Amity University, Noida, India


Biological Inspiration behind Neural Networks

[Figure: an artificial neuron with inputs weighted by w1, w2, w3]

Biological neuron → artificial neuron terminology:

Biological Neural Network → Artificial Neural Network
Neuron → Node / Unit
Stimulus → Input Layer
Dendrite → Links
Soma → Weighted Sum of Inputs
Synapse → Activation Function
Axon → Output


Perceptron

• The perceptron is a building block of an Artificial Neural Network
• It is a neural network comprising a single neuron

[Figure: inputs x0 = 1, x1, …, xn with weights w0, w1, …, wn feed a summation unit computing Σ_{i=0}^{n} w_i x_i, followed by an activation that produces the output y]



Perceptron

• The perceptron is a building block of an Artificial Neural Network
• It is a neural network comprising a single neuron

[Figure: input layer x0 = 1, x1, …, xn with weights w0, …, wn, a summation unit Σ_{i=0}^{n} w_i x_i, an activation, and the output layer producing y]

• Input Nodes (Input Layer): accept the initial data into the system
• Weights: represent the strength of the connection between units and decide the output based on the strength of the associated input neuron
• Output Nodes (Output Layer): process the inputs and generate the output
How does a Perceptron work?

A 2-step process:

Step 1: Multiply all input values by their corresponding weights and add them up to obtain the weighted sum. Mathematically:

    z = Σ_{i=0}^{n} w_i x_i = w_0 x_0 + w_1 x_1 + … + w_n x_n

where x_0 = 1 is a dummy input that introduces the bias term w_0.

Step 2: The activation function is applied to the weighted sum, giving an output that is either binary or a continuous value. The activation function decides whether the neuron will fire.
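The two steps above can be sketched in Python (a minimal illustration, assuming a hard-limit step activation; the function name is ours):

```python
def perceptron(x, w):
    """Single perceptron: weighted sum followed by a hard-limit activation."""
    # Step 1: weighted sum over all inputs, prepending the dummy input
    # x0 = 1 that carries the bias weight w0.
    z = sum(wi * xi for wi, xi in zip(w, [1.0] + list(x)))
    # Step 2: the activation (here a threshold at 0) decides if the neuron fires.
    return 1 if z >= 0 else 0
```

With weights w = [-1.5, 1, 1], for example, this unit fires only when both inputs are 1.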



How does a Perceptron work?

[Figure: the 2-step process illustrated on a perceptron with inputs x0 = 1, x1, x2, weights w0, w1, w2, and output y]



Perceptron can Learn Simple Logic: AND

• AND logic, with dummy input x0 = 1 and weights w0 = -1.5, w1 = 1, w2 = 1:

    y = g(w0 x0 + w1 x1 + w2 x2),  where g is a thresholding function

Truth table:
    x1  x2 | y
     0   0 | 0
     0   1 | 0
     1   0 | 0
     1   1 | 1
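Plugging the slide's weights into a step-activated unit verifies the AND truth table (a minimal sketch; the function name is ours):

```python
def perceptron_and(x1, x2):
    # Slide's weights: w0 = -1.5 (bias on dummy input x0 = 1), w1 = w2 = 1.
    z = -1.5 * 1 + 1 * x1 + 1 * x2
    # Thresholding activation g: fire (output 1) only when z >= 0.
    return 1 if z >= 0 else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", perceptron_and(x1, x2))  # only (1, 1) gives 1
```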



Perceptron can Learn Simple Logic: AND

• AND logic, with weights w0 = -1.5, w1 = 1, w2 = 1 and activation (thresholding) function g:

    y = g(w0 x0 + w1 x1 + w2 x2)

AND in 2D feature space:
• Data points (0, 0), (0, 1), (1, 0) have output 0; the point (1, 1) has output 1
• The shown data points are linearly separable
Perceptron can Learn Simple Logic: AND

• With w0 = -1.5, w1 = w2 = 1, the decision boundary is w0 + w1 x1 + w2 x2 = 0, i.e. x1 + x2 = 1.5
• We need to know only 2 points to plot a straight line on a 2D plane: the boundary passes through (1.5, 0) and (0, 1.5)
• Plot in the 2D feature space: (0, 0), (0, 1), (1, 0) fall on one side of the line, (1, 1) on the other
Perceptron can Learn Simple Logic: AND

• y = g(w0 x0 + w1 x1 + w2 x2): the perceptron acts as a linear binary classifier
• The line through (1.5, 0) and (0, 1.5) splits the feature space into Class 0, containing (0, 0), (0, 1), (1, 0), and Class 1, containing (1, 1)
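The boundary's axis intercepts can be read straight off the weights; a small check (variable names are ours):

```python
# Decision boundary of the AND perceptron: w0 + w1*x1 + w2*x2 = 0.
# With w0 = -1.5 and w1 = w2 = 1 this is the line x1 + x2 = 1.5.
w0, w1, w2 = -1.5, 1.0, 1.0
x1_intercept = -w0 / w1  # boundary point on the x1 axis (set x2 = 0)
x2_intercept = -w0 / w2  # boundary point on the x2 axis (set x1 = 0)
print((x1_intercept, 0.0), (0.0, x2_intercept))  # (1.5, 0.0) (0.0, 1.5)
```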
Q. Which logic does this perceptron learn?

Given: dummy input x0 = 1, weights w0 = -0.5, w1 = 1, w2 = 1, a thresholding activation function, and inputs x1, x2 that are either 0 or 1.

a) AND
b) OR
c) NAND
d) NOR



Q. Which logic does this perceptron learn?

Given: w0 = -0.5, w1 = 1, w2 = 1; inputs x1, x2 are either 0 or 1. Fill in the truth table:

    x1  x2 | y
     0   0 | ?
     0   1 | ?
     1   0 | ?
     1   1 | ?

a) AND   b) OR   c) NAND   d) NOR



Q. Which logic does this perceptron learn?

Given: w0 = -0.5, w1 = 1, w2 = 1; inputs x1, x2 are either 0 or 1.

    x1  x2 | y
     0   0 | 0
     0   1 | 1
     1   0 | 1
     1   1 | 1

Answer: b) OR logic
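Evaluating the quiz perceptron on all four input pairs confirms the answer (a minimal sketch; names are ours):

```python
def step(z):
    return 1 if z >= 0 else 0

def quiz_perceptron(x1, x2):
    # Quiz weights: w0 = -0.5 on the dummy input, w1 = w2 = 1.
    return step(-0.5 + x1 + x2)

table = {(x1, x2): quiz_perceptron(x1, x2) for x1 in (0, 1) for x2 in (0, 1)}
print(table)  # {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1} -- the OR truth table
```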
Weights cause Learning!

[Figure: 2D feature-space plots for AND, OR, NAND, and NOR — each gate's classes are separated by a different straight line, determined entirely by the weights]
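One set of weights per gate reproduces all four plots' behaviour. The slides give the AND and OR weights; the NAND and NOR triples below are illustrative choices of ours (the sign-flipped counterparts), not from the slides:

```python
def step(z):
    return 1 if z >= 0 else 0

# (w0, w1, w2) per gate. AND and OR come from the slides; NAND and NOR are
# sign-flipped counterparts (an assumption for illustration).
GATES = {
    "AND":  (-1.5,  1,  1),
    "OR":   (-0.5,  1,  1),
    "NAND": ( 1.5, -1, -1),
    "NOR":  ( 0.5, -1, -1),
}

def gate_output(name, x1, x2):
    w0, w1, w2 = GATES[name]
    return step(w0 + w1 * x1 + w2 * x2)

for name in GATES:
    # Outputs for inputs (0,0), (0,1), (1,0), (1,1)
    print(name, [gate_output(name, a, b) for a in (0, 1) for b in (0, 1)])
```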
Can these be classified using a Linear Classifier?

XOR:                      XNOR:
    x1  x2 | y                x1  x2 | y
     0   0 | 0                 0   0 | 1
     0   1 | 1                 0   1 | 0
     1   0 | 1                 1   0 | 0
     1   1 | 0                 1   1 | 1

[Figure: XOR and XNOR data points in 2D feature space — the two classes cannot be split by a single straight line]
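A brute-force search over a coarse weight grid illustrates the point: no step-activated perceptron on the grid matches XOR. (A grid search is only illustrative, not a proof; in fact no real-valued weights work, since the four XOR constraints are mutually contradictory.)

```python
def step(z):
    return 1 if z >= 0 else 0

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Try every (w0, w1, w2) on a coarse grid from -4.0 to 4.0 in steps of 0.5.
grid = [i / 2 for i in range(-8, 9)]
found = any(
    all(step(w0 + w1 * x1 + w2 * x2) == y for (x1, x2), y in XOR.items())
    for w0 in grid for w1 in grid for w2 in grid
)
print(found)  # False: XOR is not linearly separable
```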


• Linearly Separable: AND, OR
• Not Linearly Separable: XOR, XNOR

[Figure: 2D feature-space plots — a single straight line separates the classes for AND and OR; no straight line can do so for XOR or XNOR]



Summary

• The perceptron is the simplest type of artificial neural network: it takes inputs and their weights, computes the weighted sum of all inputs, and applies an activation function to produce the output.
• Perceptrons are the fundamental building blocks of artificial neural networks, which let computers tackle complex problems across machine-learning applications.
• Limitations:
  • The output of a perceptron can only be a binary number (0 or 1) due to the hard-limit transfer function.
  • A perceptron can only classify linearly separable sets of input vectors. If the input vectors are not linearly separable, it cannot classify them correctly.
