
United College of Engineering and Research

Department of Computer Science and Engineering

Machine Learning Technique

Presented By
Dr. Anita
OUTLINE

 ARTIFICIAL NEURAL NETWORKS
– Perceptrons
– Multilayer perceptron
– Gradient descent and the Delta rule

INTRODUCTION
ARTIFICIAL NEURAL NETWORKS (ANN)

 Neural networks reflect the behavior of the human brain, allowing computer programs to recognize patterns and solve common problems in the fields of AI, machine learning, and deep learning.

 Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a subset of machine learning and are at the heart of deep learning algorithms.

 Their name and structure are inspired by the
human brain, mimicking the way that biological
neurons signal to one another.

 The term "artificial neural network" refers to a biologically inspired sub-field of artificial intelligence modeled after the brain.

 An artificial neural network is a computational network inspired by the biological neural networks that form the structure of the human brain.
 Just as the human brain has neurons interconnected with one another, artificial neural networks have neurons that are linked to each other in the various layers of the network.
 These neurons are known as nodes.

Figure. Biological Neural Network

Figure. Artificial Neural Network


 Dendrites in the biological neural network represent inputs in artificial neural networks, the cell nucleus represents nodes, synapses represent weights, and the axon represents the output.
 There are around 100 billion neurons in the human brain.
 Each neuron forms somewhere between 1,000 and 100,000 connection points (synapses) with other neurons.

 Consider the example of a digital logic gate that takes inputs and gives an output.
 An "OR" gate takes two inputs.
 If one or both inputs are "On," the output is "On."
 If both inputs are "Off," the output is "Off."
 Here the output is a fixed function of the input (see the sketch below).
 Our brain does not work this way.
 The relationship between outputs and inputs keeps changing because the neurons in our brain are "learning."
 The architecture of an artificial neural network:
 A neural network consists of a large number of artificial neurons, termed units, arranged in a sequence of layers.
 An artificial neural network primarily consists of three layers:

 Input Layer:
 As the name suggests, it accepts inputs in several different formats provided by the programmer.
 Hidden Layer:
 The hidden layer sits between the input and output layers. It performs all the calculations needed to find hidden features and patterns.
 Output Layer:
 The input goes through a series of transformations in the hidden layer, finally resulting in output that is conveyed using this layer.
 The artificial neural network takes the inputs, computes the weighted sum of the inputs, and includes a bias.
 This computation is represented in the form of a transfer function: ∑ wixi + b.
 The weighted total is then passed as input to an activation function to produce the output.
 Activation functions decide whether a node should fire or not.
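 As a rough illustration (the variable names and the sigmoid choice are mine, not from the slides), the computation a single node performs, a weighted sum plus bias passed through an activation, can be sketched in Python:

```python
import math

def node_output(inputs, weights, bias):
    # Transfer function: weighted sum of the inputs plus the bias.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Sigmoid activation squashes the total into (0, 1); a step or
    # sign activation could be used here instead.
    return 1.0 / (1.0 + math.exp(-total))

print(node_output([0.5, 0.3], [0.4, -0.7], 0.1))  # ~0.52
```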
Perceptron
 The Perceptron was introduced by Frank Rosenblatt in 1957.
 It is a supervised learning algorithm used for binary classification.
 There are two types of Perceptrons: single layer and multilayer.
 Single layer - single layer perceptrons can learn only linearly separable patterns.
 Multilayer - multilayer perceptrons, or feedforward neural networks with two or more layers, have greater processing power.

 The Perceptron algorithm learns the weights for the input signals in order to draw a linear decision boundary.

Figure. Perceptron Learning Rule
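 As a hedged sketch of the classic rule w ← w + η(y − ŷ)x (the learning rate, epoch count, and training data are illustrative choices, not from the slides):

```python
def predict(x, w, b):
    # Unit step activation: output 1 if the weighted sum exceeds 0.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train_perceptron(data, lr=0.1, epochs=10):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            error = y - predict(x, w, b)   # zero when the prediction is correct
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error                # bias updated like a weight on input 1
    return w, b

# Learn the linearly separable OR function from the earlier example.
or_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(or_data)
print([predict(x, w, b) for x, _ in or_data])  # expected: [0, 1, 1, 1]
```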

 The Perceptron receives multiple input signals; if the sum of the input signals exceeds a certain threshold, it either outputs a signal or does not return an output.
 In the context of supervised learning and classification, this can then be used to predict the class of a sample.
 Next up, let us focus on the Perceptron function.
 Perceptron Function
 The Perceptron is a function that maps its input “x,” multiplied by the learned weight coefficients, to an output value “f(x)”:

f(x) = 1 if w · x + b > 0, and 0 otherwise,
where w · x = w1x1 + w2x2 + … + wmxm

 In the equation given above:
 “w” = vector of real-valued weights
 “b” = bias (an element that adjusts the boundary away from the origin without any dependence on the input value)
 “x” = vector of input x values

 “m” = number of inputs to the Perceptron
 The output can be represented as “1” or “0.” It can also be represented as “1” or “-1,” depending on which activation function is used.
 Inputs of a Perceptron

 A Perceptron accepts inputs, moderates them with certain weight values, then applies the transformation function to output the final result.
 A Boolean output is based on inputs such as salaried, married, age, past credit profile, etc.
 It has only two values: Yes and No, or True and False.
 The summation function “∑” multiplies all inputs “x” by their weights “w” and then adds them up as follows:
∑ = x1w1 + x2w2 + … + xnwn
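 As a purely illustrative sketch using the Boolean inputs named above (the weights and threshold are invented for the example, not taken from the slides):

```python
# Hypothetical credit-approval perceptron over Boolean inputs.
def approve(salaried, married, good_credit):
    inputs  = [salaried, married, good_credit]   # each 0 or 1
    weights = [0.4, 0.2, 0.6]                    # illustrative weights "w"
    total = sum(w * x for w, x in zip(weights, inputs))  # summation
    return "Yes" if total > 0.5 else "No"        # Boolean output

print(approve(1, 0, 1))  # Yes: salaried with a good credit profile
print(approve(0, 1, 0))  # No
```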
 Activation Functions of Perceptron
 The activation function represents the internal state of a neuron.
 It is used to convert the input signal at a node of the ANN to an output signal.
 The activation function applies a step rule (converting the numerical output into +1 or -1) to check whether the output of the weighting function is greater than zero or not.

 For example:
 If ∑ wixi > 0, then the final output “o” = 1;
 else, the final output is -1.
 The step function is triggered above a certain value of the neuron output; otherwise it outputs zero.
 The sign function outputs +1 or -1 depending on whether the neuron output is greater than zero or not.
 The sigmoid is an S-shaped curve and outputs a value between 0 and 1.
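 A minimal Python sketch of the three activation functions just described (the threshold default is my own choice):

```python
import math

def step(z, threshold=0.0):
    # Step function: triggers (1) above the threshold, else outputs 0.
    return 1 if z > threshold else 0

def sign(z):
    # Sign function: +1 if the neuron output is greater than zero, else -1.
    return 1 if z > 0 else -1

def sigmoid(z):
    # Sigmoid: the S-curve, mapping any real z to a value between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

for z in (-2.0, 0.5):
    print(step(z), sign(z), round(sigmoid(z), 3))
```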
 Output of Perceptron
 Perceptron with a Boolean output:
 Inputs: x1…xn
 Output: o(x1…xn)
 Weights: wi => contribution of input xi to the Perceptron output
 w0 => bias or threshold
 If ∑ w·x > 0, the output is +1; else, -1. The neuron gets triggered only when the weighted input reaches a certain threshold value.


 An output of +1 specifies that the neuron is triggered. An output of -1 specifies that the neuron did not get triggered.
 “sgn” stands for the sign function, with output +1 or -1, so the output can be written as o(x) = sgn(w · x).
 Multi-Layer Perceptrons (Feedforward Neural Networks)
 The field of artificial neural networks is often just called neural networks or multi-layer perceptrons, after perhaps the most useful type of neural network.
 A perceptron is a single-neuron model that was a precursor to larger neural networks.

 How do artificial neural networks work?
 An artificial neural network can best be represented as a weighted directed graph, where the artificial neurons form the nodes.
 The associations between neuron outputs and neuron inputs can be viewed as directed edges with weights.
 The artificial neural network receives the input signal from an external source in the form of a pattern or image, represented as a vector.
 These inputs are then mathematically denoted x(n) for each of the n inputs.
Figure. Perceptron
 Afterward, each input is multiplied by its corresponding weight (these weights are the details utilized by the artificial neural network to solve a specific problem).
 In general terms, these weights represent the strength of the interconnections between neurons inside the artificial neural network.
 All the weighted inputs are summed inside the computing unit.

 If the weighted sum is equal to zero, a bias is added to make the output non-zero, or to otherwise scale up the system's response.
 The bias behaves like an extra input fixed at 1 with its own weight. Here the total of the weighted inputs can be in the range of 0 to positive infinity.
 To keep the response within the limits of the desired value, a certain maximum value is benchmarked, and the total of the weighted inputs is passed through the activation function.
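 Putting the pieces together, a minimal forward-pass sketch (the layer sizes, weights, and biases are illustrative values, not from the slides):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each row of `weights` holds one neuron's weights; every neuron
    # computes its weighted sum plus bias, then applies the activation.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, 0.9]                                             # input vector x(n)
hidden = layer(x, [[0.8, -0.2], [0.3, 0.5]], [0.1, -0.1])  # hidden layer
output = layer(hidden, [[1.0, -1.0]], [0.0])               # output layer
print(output)
```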

 The activation function refers to the set of transfer functions used to achieve the desired output.
 There are different kinds of activation functions, but they are primarily either linear or non-linear sets of functions.
 Some of the commonly used activation functions are the binary, linear, and hyperbolic tangent (tanh) sigmoidal activation functions.

THANK YOU!
