Typically, the network consists of a set of sensory units (source nodes/input nodes)
that constitute the input layer, one or more hidden layers of computation nodes,
and an output layer of computation nodes.
The input signal propagates through the network in a forward direction, on a layer by
layer basis. These networks are collectively known as multilayer perceptrons (MLPs),
which represent a generalization of the single-layer perceptron.
[Source: javatpoint.com]
Note: The nodes of the hidden layers and the output layer are computation nodes,
while the input-layer nodes only propagate the input signal forward.
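As an illustration, a forward pass through such a network can be sketched in a few lines. The layer sizes, weights, and the tanh nonlinearity below are arbitrary assumptions for the sketch, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 3 input nodes, 4 hidden neurons, 2 output neurons.
W1 = rng.normal(size=(4, 3)); b1 = np.zeros(4)   # input -> hidden weights
W2 = rng.normal(size=(2, 4)); b2 = np.zeros(2)   # hidden -> output weights

def forward(x):
    # The input layer only propagates the signal; the hidden and output
    # layers are computation nodes (weighted sum, then a nonlinearity).
    h = np.tanh(W1 @ x + b1)   # hidden layer activations
    y = W2 @ h + b2            # output layer (linear here for simplicity)
    return y

y = forward(np.array([1.0, 0.5, -0.2]))
```

The signal flows layer by layer in one direction, which is exactly the forward propagation described above.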
[Figure: a single neuron receiving inputs x_i and x_j through synaptic weights w_i and w_j]
A multilayer perceptron has three distinctive characteristics:
1. The model of each neuron in the network includes a nonlinear activation function.
2. The network contains one or more layers of hidden neurons that are not part of
the input or output of the network. These hidden neurons enable the network to
learn complex tasks by extracting progressively more meaningful features from the
input patterns.
3. The network exhibits a high degree of connectivity, wherein every node in one
layer is connected to every node in the following layer. A change in the network's
connectivity requires a change in the population of synaptic connections or their
weights.
It is through the combination of these characteristics, together with the ability
of the network to learn from experience through training, that the MLP derives its
computing power.
[Figure: an MLP with an input layer, one hidden layer, and an output layer producing the actual output]
Target output (Height, Weight) pairs:
A = (1, 1.75)
B = (1.5, 2.25)
C = (3, 4.8)
[Plot: Weight vs. Height, with the fitted line Weight = Intercept + Slope * Height]
Let slope = 0.5 and intercept = 0.
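Using the three (height, weight) pairs above with the assumed line (slope = 0.5, intercept = 0), the sum of squared residuals can be evaluated directly; this sketch just applies the formula:

```python
# Data points A, B, C as (height, weight) pairs from the example.
data = [(1.0, 1.75), (1.5, 2.25), (3.0, 4.8)]
slope, intercept = 0.5, 0.0

predicted = [intercept + slope * h for h, _ in data]
residuals = [w - p for (_, w), p in zip(data, predicted)]
ssr = sum(r * r for r in residuals)   # sum of squared residuals, ~14.70
```

Plotting ssr for a range of intercept values traces out the bowl-shaped curve whose minimum gradient descent searches for.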
[Plot: sum of squared errors (SSR) as a function of the intercept]
d(SSR)/dI = d/dI (1.75 - (I + 0.5*1))^2 + d/dI (2.25 - (I + 0.5*1.5))^2 + d/dI (4.8 - (I + 0.5*3))^2
          = -2*(1.75 - (I + 0.5)) - 2*(2.25 - (I + 0.75)) - 2*(4.8 - (I + 1.5))
Putting I = 0:
d(SSR)/dI = -2.5 - 3 - 6.6 = -12.1
This is the slope of the SSR curve at I = 0, which gradient descent uses to update the intercept.
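The derivative above can be checked numerically, and the same expression drives a plain gradient descent loop on the intercept. The learning rate and iteration count below are arbitrary choices for the sketch:

```python
data = [(1.0, 1.75), (1.5, 2.25), (3.0, 4.8)]
slope = 0.5

def dSSR_dI(I):
    # Derivative of the sum of squared residuals with respect to
    # the intercept I, with the slope held fixed at 0.5.
    return sum(-2 * (w - (I + slope * h)) for h, w in data)

g0 = dSSR_dI(0.0)   # matches the hand calculation: ~-12.1

# Gradient descent on the intercept alone (learning rate is an assumption).
I, lr = 0.0, 0.1
for _ in range(100):
    I -= lr * dSSR_dI(I)
# With the slope fixed, the optimum has a closed form:
# I* = mean(w - slope * h) over the data points.
```

At convergence the derivative is essentially zero, so the intercept stops moving, which is the stopping intuition behind gradient descent.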
youtube.com/watch?v=SDv4f4s2SB8