
Deep learning

• Course Code:
• Unit 1
Introduction to Deep learning
• Lecture 3
Multi-layer Perceptron
Types of Perceptron?

• Single Layer
• Multi-Layer


Single Layer
• A neural network comprising a single neuron
• A single-layer neural network
• An artificial neuron using the Heaviside step function as the activation function

• A single-layer perceptron can learn only linearly separable patterns (a minimal sketch follows below)
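To make the single-neuron picture concrete, here is a minimal sketch (not from the slides) of a perceptron with a Heaviside step activation; the hand-picked weights w = [1, 1], b = −1.5 are illustrative assumptions and realise the linearly separable AND function.

```python
import numpy as np

def heaviside(z):
    # Heaviside step: 1 if z >= 0, else 0
    return (z >= 0).astype(int)

def perceptron(X, w, b):
    # Single neuron: weighted sum of the inputs followed by the step activation
    return heaviside(X @ w + b)

# Illustrative, hand-picked parameters that realise AND (linearly separable)
w = np.array([1.0, 1.0])
b = -1.5
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(perceptron(X, w, b))  # [0 0 0 1]
```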



Example: 1D data points that are not linearly separable; a non-linear separator is required to split the classes.



Multi Layer Perceptron (MLP)
• MLP is a feedforward artificial neural network
• Data flows in the forward direction only
• An MLP has at least three layers: an input layer, a hidden layer, and an output layer
• The layers are fully connected; each node in one layer connects with a weight to every node in the next layer
• Can model both linear and non-linear patterns, and can also implement logic functions such as AND, OR, NOT, NAND, NOR, XOR and XNOR (a forward-pass sketch follows below)
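A hedged sketch of one fully connected forward pass, to make "each node connects with a weight to every node in the next layer" concrete; the layer sizes (2 inputs, 3 hidden nodes, 1 output) and the random weights are assumptions for illustration only.

```python
import numpy as np

def step(z):
    # Heaviside step activation, as on the earlier slides
    return (z >= 0).astype(int)

def mlp_forward(x, W1, b1, W2, b2):
    h = step(W1 @ x + b1)   # hidden layer: every input feeds every hidden node
    y = step(W2 @ h + b2)   # output layer: every hidden node feeds every output node
    return y

# Assumed sizes: 2 inputs -> 3 hidden nodes -> 1 output, random weights
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)
print(mlp_forward(np.array([1.0, 0.0]), W1, b1, W2, b2))
```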



Learning XOR Logic: Multi-layer Perceptron

Map the inputs (x1, x2) to hidden features (h1, h2):
XOR = ( OR ) AND ( NAND ), i.e. x1 XOR x2 = (x1 OR x2) AND (x1 NAND x2)

x1 x2 | OR NAND | XOR
 0  0 |  0   1  |  0
 0  1 |  1   1  |  1
 1  0 |  1   1  |  1
 1  1 |  1   0  |  0
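A quick check of the decomposition in the table, using plain Boolean operations (no network yet); the loop simply reproduces the four rows.

```python
# Verify XOR = (OR) AND (NAND) over all four input pairs
for x1 in (0, 1):
    for x2 in (0, 1):
        nand = 1 - (x1 & x2)
        print(x1, x2, x1 | x2, nand, (x1 | x2) & nand, x1 ^ x2)
        # columns: x1, x2, OR, NAND, (OR) AND (NAND), XOR; the last two always agree
```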



Learning XOR Logic: Multi-layer Perceptron
Network diagram (x0 = 1 and h0 = 1 are bias units):
  h1 (OR): bias weight −0.5, input weights 1 (from x1) and 1 (from x2)
  h2 (NAND): bias weight 1.5, input weights −1 (from x1) and −1 (from x2)
  Output (AND of h1, h2): bias weight −1.5, weights 1 (from h1) and 1 (from h2)
Input Layer → Hidden Layer → Output Layer

x1 x2 | OR NAND | XOR
 0  0 |  0   1  |  0
 0  1 |  1   1  |  1
 1  0 |  1   1  |  1
 1  1 |  1   0  |  0





Learning XOR Logic: Multi-layer Perceptron
The diagram has two weight layers: Layer 1 (Hidden Layer) and Layer 2 (Output Layer), with bias units x0 = 1 and h0 = 1.

Layer 1 (Hidden Layer):
The input matrix stacks the four inputs 00, 01, 10, 11 as columns. Its first row is all 1s and represents the bias terms; the second and third rows hold the inputs x1 and x2.
Hidden-layer weights from the diagram (ordered as bias, x1, x2): OR unit h1: [−0.5, 1, 1]; NAND unit h2: [1.5, −1, −1].
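A sketch of the Layer 1 computation just described, assuming NumPy and the step activation; the input matrix and the OR/NAND weight rows are the ones from the slide.

```python
import numpy as np

step = lambda z: (z >= 0).astype(int)

# Input matrix: first row = bias terms (all 1s); rows 2-3 = x1, x2.
# Columns are the four inputs 00, 01, 10, 11.
X = np.array([[1, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 1, 0, 1]])

# Layer 1 weights: row 1 = OR unit, row 2 = NAND unit (bias, x1, x2)
W1 = np.array([[-0.5,  1,  1],
               [ 1.5, -1, -1]])

H = step(W1 @ X)
print(H)  # [[0 1 1 1]  -> h1 = OR
          #  [1 1 1 0]] -> h2 = NAND
```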



Learning XOR Logic: Multi-layer Perceptron
Layer 1 (Hidden Layer): hidden activations for the four input columns 00, 01, 10, 11 (weights as on the previous slide):
  h1 (OR)   = [0 1 1 1]
  h2 (NAND) = [1 1 1 0]

Layer 2 (Output Layer): apply the output weights [−1.5 1 1] to the bias row stacked on the hidden activations:

  [−1.5 1 1] · [ 1 1 1 1
                 0 1 1 1
                 1 1 1 0 ] = [−0.5 0.5 0.5 −0.5] → step → [0 1 1 0] = XOR
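And the Layer 2 computation from the matrices above, again as a NumPy sketch: the output weights [−1.5, 1, 1] act on the bias row stacked on the hidden activations, and the step function recovers XOR.

```python
import numpy as np

step = lambda z: (z >= 0).astype(int)

# Bias row of 1s stacked on the hidden activations h1 (OR) and h2 (NAND)
H = np.array([[1, 1, 1, 1],
              [0, 1, 1, 1],
              [1, 1, 1, 0]])

W2 = np.array([-1.5, 1, 1])  # output (AND) unit: bias, h1, h2

print(W2 @ H)        # [-0.5  0.5  0.5 -0.5]
print(step(W2 @ H))  # [0 1 1 0]  -> XOR of 00, 01, 10, 11
```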
Try it Yourself: XNOR Logic
x1 x2 | NOR AND | XNOR
 0  0 |  1   0  |  1
 0  1 |  0   0  |  0
 1  0 |  0   0  |  0
 1  1 |  0   1  |  1

Network diagram: bias inputs x0 = 1 and h0 = 1; hidden units h1, h2; output y; unknown weights w^1 into the hidden layer and w^2 into the output layer.
Determine the weights. Here w^l_ij denotes the weight of the connection from node i in one layer to node j in the next layer (weight layer l).



Learning XNOR Logic: Multi-layer Perceptron
Solution network (x0 = 1 and h0 = 1 are bias units):
  h1 (NOR): bias weight 0.5, input weights −1 (from x1) and −1 (from x2)
  h2 (AND): bias weight −1.5, input weights 1 (from x1) and 1 (from x2)
  y (OR of h1, h2): bias weight −0.5, weights 1 (from h1) and 1 (from h2)
Input Layer → Hidden Layer → Output Layer: a 2-layer NN (two weight layers)

x1 x2 | NOR AND | XNOR
 0  0 |  1   0  |  1
 0  1 |  0   0  |  0
 1  0 |  0   0  |  0
 1  1 |  0   1  |  1
Learning XNOR Logic: Multi-layer Perceptron
Layer-wise weights (ordered as bias, then the inputs to that layer):
  Layer 1 (Hidden Layer): NOR unit h1: [0.5, −1, −1]; AND unit h2: [−1.5, 1, 1]
  Layer 2 (Output Layer): OR unit y: [−0.5, 1, 1]
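A sketch that checks the XNOR weights above in the same matrix form as the XOR example (NumPy and the step activation are assumed, as before).

```python
import numpy as np

step = lambda z: (z >= 0).astype(int)

# Inputs 00, 01, 10, 11 as columns, with a bias row of 1s
X = np.array([[1, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 1, 0, 1]])

W1 = np.array([[ 0.5, -1, -1],   # h1 = NOR (bias, x1, x2)
               [-1.5,  1,  1]])  # h2 = AND
W2 = np.array([-0.5, 1, 1])      # y  = OR  (bias, h1, h2)

H = step(W1 @ X)
y = step(W2 @ np.vstack([np.ones((1, 4), dtype=int), H]))
print(y)  # [1 0 0 1]  -> XNOR
```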
Neural Network Playground
https://playground.tensorflow.org

1. Select an input dataset
2. Select the number of hidden layers and neurons per layer (build simple networks)
3. Start training

Neural Network Playground
https://playground.tensorflow.org

4. Stop when the network starts to converge, or when the epoch count grows large without convergence
5. Observe the functions learned by each neuron
6. Change the network and observe the difference in the output
7. Change the input data and build more networks
Multi-Output Perceptron



Deep Neural Network



Summary

Which one to choose: Single Layer or Multi-Layer?


Summary

The single-layer perceptron is more suitable for simple, linear problems.


Summary

The multi-layer perceptron is more suitable for complex, non-linear problems.


Summary
• MLPs are neural networks with at least three layers, while DNNs are neural networks with additional, deeper layers.

• DNNs and MLPs are both capable of performing complex tasks that traditional machine learning algorithms struggle with.

