
Madaline (Multiple Adaptive Linear Neurons):

● The Madaline (supervised learning) model consists of many Adalines in parallel with a single output unit.
● The Adaline layer sits between the input layer and the Madaline output layer, so the Adaline layer is a hidden layer.
● The weights between the input layer and the hidden layer are adjustable, while the weights between the hidden layer and the output layer are fixed.
● The output unit may use the majority-vote rule, so the final output is either true or false. Each Adaline and Madaline neuron has a bias of ‘1’ connected to it. The use of multiple Adalines helps counter the problem of non-linear separability, as the sketch below illustrates.
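To make the architecture concrete, here is a minimal NumPy sketch of a Madaline forward pass with a fixed majority-vote output unit. The weight values, biases, and the two-Adaline layout are illustrative assumptions, not a trained network.

import numpy as np

def adaline_output(x, w, b):
    # One Adaline unit: weighted sum plus bias, then a bipolar step.
    net = np.dot(w, x) + b
    return 1 if net >= 0 else -1

def madaline_predict(x, W, b):
    # Hidden Adaline layer (weights W, biases b would be adjusted in training);
    # the output unit is fixed and takes a majority vote over the Adalines.
    hidden = [adaline_output(x, W[i], b[i]) for i in range(len(W))]
    return 1 if sum(hidden) >= 0 else -1

# Illustrative (untrained) weights for two hidden Adalines.
W = np.array([[1.0, 1.0], [-1.0, -1.0]])
b = np.array([-0.5, 1.5])
print(madaline_predict(np.array([1, -1]), W, b))   # prints 1 or -1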

Architecture:

[Figure: Madaline architecture]
There are three layers in a Madaline network. The first is the input layer, which contains all the input neurons; the second is the hidden layer, which consists of the Adaline units, and the weights between the input and hidden layers are adjustable; the third is the output layer, and the weights between the hidden and output layers are fixed and not adjustable.

Introduction of MLP
The multi-layer perceptron (MLP) is a foundational architecture of artificial neural networks, formed from multiple layers of perceptrons. TensorFlow, a popular deep learning framework released by Google, can build such a network with a few library calls, but to understand what a multi-layer perceptron really is, it helps to develop one from scratch using NumPy, as the sketch below illustrates.
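As a concrete starting point, here is a minimal NumPy sketch of a two-layer MLP forward pass. The layer sizes, random weight initialization, and sigmoid activation are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 4, 2          # arbitrary layer sizes for the sketch

W1 = rng.normal(size=(n_in, n_hidden))   # input -> hidden weights (adjustable)
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_out))  # hidden -> output weights
b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # Propagate one input vector through the hidden and output layers.
    h = sigmoid(x @ W1 + b1)   # non-linear hidden activations
    return sigmoid(h @ W2 + b2)

print(forward(np.array([0.5, -1.0, 2.0])))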

A pictorial representation of multi-layer perceptron learning is shown below.


MLP networks are used for supervised learning. A typical learning algorithm for MLP networks is the backpropagation algorithm.

A multilayer perceptron (MLP) is a feedforward artificial neural network that generates a set of outputs from a set of inputs. An MLP is characterized by several layers of nodes connected as a directed graph between the input and output layers. An MLP uses backpropagation for training the network. The MLP is a deep learning method.

Activation Functions:
● The activation function decides whether a neuron should be activated or not by calculating the weighted sum of the neuron's inputs and adding a bias to it. The purpose of the activation function is to introduce non-linearity into the output of a neuron.

● A neural network without an activation function is essentially just a linear regression model. The activation function applies a non-linear transformation to the input, making the network capable of learning and performing more complex tasks; see the sketch after this list.
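For reference, here is a NumPy sketch of a few commonly used activation functions; the particular selection shown is an illustrative assumption, not an exhaustive list.

import numpy as np

def sigmoid(z):
    # Squashes any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Squashes any real input into (-1, 1).
    return np.tanh(z)

def relu(z):
    # Zero for negative input, identity for positive input.
    return np.maximum(0.0, z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), tanh(z), relu(z))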
Error Back Propagation Algorithm
It is an error-reducing algorithm used in artificial neural networks. Artificial neural networks are networks modeled on the human nervous system. These networks contain well-defined sets of inputs and outputs, and the network is used to describe the complex relationship between its inputs and outputs. The name reflects this inspiration, since the human nervous system is itself highly complex.
We can categorize error propagation algorithms into two types:
1. Forward propagation algorithms
2. Back propagation algorithms
Our focus here is on back propagation algorithms. In this method, the error is propagated from the output node back toward the input nodes, which is why it is called the back propagation algorithm. At each node, the gradient of the error with respect to the incoming weights is computed, and the weights are adjusted in the direction that most reduces the error. Such gradient descent may settle into a local minimum of the error surface; the minimum with the lowest error overall is called the global minimum. Once the error has been driven to a (global) minimum, the propagation process comes to an end. A minimal sketch follows.
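To make the gradient-descent picture concrete, here is a minimal NumPy sketch of backpropagation for a single-hidden-layer network with a squared-error loss, trained on XOR. The layer sizes, learning rate, epoch count, and random seed are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)    # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)    # hidden -> output
lr = 0.5                                          # illustrative learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass: input -> hidden -> output.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # Backward pass: propagate error gradients from output toward input.
    dY = (Y - T) * Y * (1 - Y)        # output-layer delta
    dH = (dY @ W2.T) * H * (1 - H)    # hidden-layer delta

    # Gradient-descent updates: move each weight against its gradient.
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

print(np.round(Y, 2))   # outputs should approach the XOR targets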
