Introduction
Understanding this network helps us gain insight into the underlying mechanisms of more advanced Deep Learning models. The Multilayer Perceptron (MLP) is commonly used in simple regression problems; however, MLPs are not ideal for processing patterns in sequential and multidimensional data.
A multilayer perceptron tries to memorize patterns in sequential data; because of this, it requires a large number of parameters to process multidimensional data.
For sequential data, RNNs are the favorites, because their recurrent structure allows the network to discover dependencies on historical data, which is very useful for prediction. For data such as images and videos, CNNs excel at extracting feature maps for classification, segmentation, and other tasks.
In some cases, a CNN in the form of Conv1D is also used for networks with sequential input data. However, in most Deep Learning models, MLP, CNN, or RNN layers are combined to make the most of each.
MLP, CNN, and RNN don’t do everything…
Much of a model's success comes from identifying its objective and from a good choice of parameters, such as the loss function, optimizer, and regularizer.
We also have data from outside the training environment. The role of the Regularizer is to ensure that the trained model
generalizes to new data.
The MNIST Dataset
Suppose our goal is to create a network to identify numbers in images of handwritten digits. For example, when the input to the network is an image of the number 8, the corresponding prediction must also be 8.
https://www.analyticsvidhya.com/blog/2020/12/mlp-multilayer-perceptron-simple-overview/ 1/8
9/6/2021 A Simple Overview of Multilayer Perceptron (MLP) Deep Learning
MNIST is a collection of handwritten digits ranging from 0 to 9. It has a training set of 60,000 images and a test set of 10,000 images, classified into 10 categories.
import numpy as np
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

The mnist.load_data() method is convenient, as there is no need to manually download and label all 70,000 images.
Before building the Multilayer Perceptron classifier, it is essential to keep in mind that, although the MNIST data consists of two-dimensional tensors, it must be reshaped according to the type of input layer.

A 3×3 grayscale image is reshaped differently for the MLP, CNN, and RNN input layers:
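The three reshapes can be sketched with numpy alone (a minimal illustration; the variable names are hypothetical, not from the original article):

```python
import numpy as np

# A hypothetical 3x3 grayscale image (one sample).
image = np.arange(9, dtype="float32").reshape(3, 3)

# MLP input layer: flatten each image into a 1D vector of 9 features.
mlp_input = image.reshape(1, 9)          # shape: (samples, features)

# CNN input layer: keep the 2D grid and add a channels axis.
cnn_input = image.reshape(1, 3, 3, 1)    # shape: (samples, rows, cols, channels)

# RNN input layer: treat each row as one timestep of a sequence.
rnn_input = image.reshape(1, 3, 3)       # shape: (samples, timesteps, features)

print(mlp_input.shape, cnn_input.shape, rnn_input.shape)
```

The same nine pixel values are preserved in every case; only the arrangement of the axes changes to match what each input layer expects.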
num_labels = len(np.unique(y_train))
print("total labels:\t{}".format(num_labels))
print("labels:\t\t\t{0}".format(np.unique(y_train)))
⚠️ This representation is not suitable for the prediction layer, which generates probabilities per class. The most suitable format is one-hot: a 10-dimensional vector with all values set to 0 except at the class index. For example, if the label is 4, the equivalent vector is [0,0,0,0,1,0,0,0,0,0].
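The one-hot encoding described above can be reproduced by hand in a few lines (a sketch; the helper name `one_hot` is ours, not part of Keras):

```python
import numpy as np

def one_hot(label, num_classes=10):
    """Return a vector of zeros with a 1 at the class index."""
    vec = np.zeros(num_classes, dtype=int)
    vec[label] = 1
    return vec

print(one_hot(4))  # -> [0 0 0 0 1 0 0 0 0 0]
```

In practice we let Keras do this for us with to_categorical, as shown next.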
In Deep Learning, data is stored in a tensor. The term tensor applies to a scalar (0D tensor), a vector (1D tensor), a matrix (2D tensor), and multidimensional tensors.
from tensorflow.keras.utils import to_categorical

# convert labels to one-hot vectors
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
Our model is an MLP, so its inputs must be 1D tensors. As such, x_train and x_test must be reshaped to [60000, 28*28] and [10000, 28*28], respectively.

In numpy, a size of -1 means letting the library calculate the correct dimension. In the case of x_train, that is 60,000.
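The -1 behavior is easy to verify on a small stand-in array (a sketch; 5 samples instead of 60,000):

```python
import numpy as np

# A tiny stand-in for a batch of MNIST-style 28x28 images.
batch = np.zeros((5, 28, 28))

# -1 tells numpy to infer that dimension (here: 5) from the total size.
flat = batch.reshape(-1, 28 * 28)
print(flat.shape)  # -> (5, 784)
```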
image_size = x_train.shape[1]
input_size = image_size * image_size

# flatten each 28x28 image into a 784-dimensional vector
x_train = np.reshape(x_train, [-1, input_size])
x_test = np.reshape(x_test, [-1, input_size])

print("x_train:\t{}".format(x_train.shape))
print("x_test:\t\t{}".format(x_test.shape))

OUTPUT:

x_train:	(60000, 784)
x_test:		(10000, 784)
Our model consists of three Multilayer Perceptron (Dense) layers. The first and second are identical, each followed by a Rectified Linear Unit (ReLU) activation function and Dropout.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout

# Parameters
batch_size = 128    # sample size of inputs to be processed at each training step
hidden_units = 256  # units in each hidden Dense layer
dropout = 0.45      # fraction of units randomly dropped during training

model = Sequential()
model.add(Dense(hidden_units, input_dim=input_size))
model.add(Activation('relu'))
model.add(Dropout(dropout))
model.add(Dense(hidden_units))
model.add(Activation('relu'))
model.add(Dropout(dropout))
model.add(Dense(num_labels))
Regularization
A neural network tends to memorize its training data, especially if it has more than enough capacity. In that case, the network fails catastrophically when subjected to new data. This is the classic case of a network that fails to generalize (overfitting). To avoid this tendency, the model uses a regularizing layer: Dropout.
The idea of Dropout is simple. Given a discard rate (in our model, we set it to 0.45), the layer randomly removes that fraction of units. For example, if the first layer has 256 units, after Dropout(0.45) is applied, only (1 − 0.45) × 256 ≈ 141 units will participate in the next layer.
Dropout makes neural networks more robust to unforeseen input data, because the network is trained to predict correctly even when some units are absent.
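The random masking can be simulated with numpy (a sketch of the idea only; real Dropout layers also rescale the surviving activations by 1/(1 − rate) during training):

```python
import numpy as np

rng = np.random.default_rng(0)
dropout = 0.45
units = 256

# Bernoulli mask: each unit survives with probability 1 - dropout.
mask = rng.random(units) >= dropout
print(mask.sum())  # roughly (1 - 0.45) * 256 ≈ 141 units remain
```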
The output layer has 10 units, followed by a softmax activation function. The 10 units correspond to the 10 possible labels,
classes or categories.
The softmax activation can be expressed mathematically by the following equation:

softmax(x_i) = exp(x_i) / Σ_j exp(x_j)
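As a quick numeric check, softmax can be computed directly (a standalone numpy sketch with made-up logits):

```python
import numpy as np

def softmax(x):
    # Subtract the max for numerical stability; the result is unchanged.
    e = np.exp(x - np.max(x))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs, probs.sum())  # probabilities are positive and sum to 1
```

Whatever the input values, the outputs form a valid probability distribution over the classes, which is exactly what the prediction layer needs.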
model.add(Activation('softmax'))
model.summary()
OUTPUT:
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 256)               200960
_________________________________________________________________
activation (Activation)      (None, 256)               0
_________________________________________________________________
dropout (Dropout)            (None, 256)               0
_________________________________________________________________
dense_1 (Dense)              (None, 256)               65792
_________________________________________________________________
activation_1 (Activation)    (None, 256)               0
_________________________________________________________________
dropout_1 (Dropout)          (None, 256)               0
_________________________________________________________________
dense_2 (Dense)              (None, 10)                2570
_________________________________________________________________
activation_2 (Activation)    (None, 10)                0
=================================================================
Total params: 269,322
Trainable params: 269,322
Non-trainable params: 0
_________________________________________________________________
Model visualization
Optimization
The purpose of optimization is to minimize the loss function. The idea is that if the loss is reduced to an acceptable level, the model has indirectly learned the function that maps inputs to outputs. Performance metrics are used to determine whether your model has learned.
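To make the loss concrete, categorical cross-entropy for a one-hot label reduces to the negative log of the probability assigned to the true class. A numpy sketch with hypothetical predicted probabilities (not output from our model):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    # For a one-hot y_true this reduces to -log(probability of the true class).
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.sum(y_true * np.log(y_pred))

y_true = np.array([0, 0, 0, 0, 1, 0, 0, 0, 0, 0])   # label 4, one-hot
confident = np.full(10, 0.01)
confident[4] = 0.91                                  # 91% on the true class

print(categorical_crossentropy(y_true, confident))   # -log(0.91) ≈ 0.094
```

The more probability mass the model puts on the correct class, the smaller the loss, so minimizing it pushes predictions toward the one-hot labels.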
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=20, batch_size=batch_size)
OUTPUT:
Epoch 1/20
....
Epoch 20/20
Evaluation
At this point, our MNIST digit classifier model is complete. Its performance evaluation is the next step in determining whether the trained model presents a suboptimal solution.
_, acc = model.evaluate(x_test,
                        y_test,
                        batch_size=batch_size,
                        verbose=0)
print("Accuracy: %.1f%%" % (100.0 * acc))
OUTPUT:
Accuracy: 98.4%
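The accuracy metric itself is just the fraction of samples where the argmax of the predicted probabilities matches the argmax of the one-hot label. A toy numpy sketch (the probabilities below are made up, not real model output):

```python
import numpy as np

# Toy stand-ins for one-hot test labels and model probabilities (3 classes).
y_true = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 0]])
y_prob = np.array([[0.1, 0.8, 0.1],
                   [0.2, 0.7, 0.1],   # wrong: predicts class 1, truth is 0
                   [0.1, 0.2, 0.7],
                   [0.1, 0.6, 0.3]])

accuracy = np.mean(y_prob.argmax(axis=1) == y_true.argmax(axis=1))
print(accuracy)  # 3 of 4 correct -> 0.75
```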
to be continued…