The function used inside a neuron is generally termed an activation function. Five major
activation functions have been tried to date: step, sigmoid, tanh, ReLU, and leaky ReLU.
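The five activation functions listed above can be written as plain NumPy functions. This is a minimal sketch; the `alpha` slope for leaky ReLU is a common default, not something specified in the notes.

```python
import numpy as np

def step(z):
    # Outputs 1 for non-negative inputs, 0 otherwise.
    return np.where(z >= 0, 1.0, 0.0)

def sigmoid(z):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Squashes any real number into the range (-1, 1).
    return np.tanh(z)

def relu(z):
    # Passes positive inputs through, zeroes out negatives.
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    # Like ReLU, but negative inputs keep a small slope (alpha is assumed).
    return np.where(z >= 0, z, alpha * z)
```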
“Each individual neuron in the layer acts like its own model”.
NN: Neural networks are multi-layer networks of neurons that we use to classify things, make
predictions, etc.
The arrows that connect the dots show how the neurons are interconnected and how data travels
from the input layer all the way through to the output layer.
We have a set of inputs and a set of target values — and we are trying to get predictions that match
those target values as closely as possible.
By repeatedly calculating Z (the weighted sum of the inputs plus the bias) and applying the
activation function to it for each successive layer, we can move from input to output. This
process is known as forward propagation: moving forward from input to output by computing Z
and the resulting activation at each neuron in each layer.
Weights: a weight decides how much influence the input will have on the output.
Bias: Bias is just like the intercept added in a linear equation. It is an additional parameter in the
neural network used to adjust the output along with the weighted sum of the inputs to the
neuron. The bias value allows you to shift the activation function to the right or left.
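Putting forward propagation, weights, and bias together: here is a minimal two-layer forward pass in NumPy. The layer sizes (3 inputs, 4 hidden units, 2 outputs) and the random initialization are assumptions for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(3,))                        # input vector

W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)    # hidden-layer weights and bias
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)    # output-layer weights and bias

z1 = W1 @ x + b1    # Z for the hidden layer: weighted sum of inputs plus bias
a1 = sigmoid(z1)    # apply the activation function -> hidden activations
z2 = W2 @ a1 + b2   # Z for the output layer
a2 = sigmoid(z2)    # network output, one value per output neuron
```

Each weight in `W1` and `W2` decides how much influence its input has on the neuron's output, and each bias shifts that neuron's activation.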
Cost function: A cost function is a measure of error between what value your model predicts and
what the value actually is.
The purpose of a cost function is to be minimized; the returned value is usually called the
cost, loss, or error. The goal is to find the values of the model parameters for which the cost
function returns as small a number as possible.
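As a concrete example, mean squared error is a common cost function (a standard choice, assumed here): the average of the squared gaps between predicted and actual values.

```python
import numpy as np

def mse(y_pred, y_true):
    # Average squared difference between predictions and targets.
    return float(np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2))
```

Perfect predictions give a cost of 0; the worse the predictions, the larger the returned number.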
Gradient Descent:
The gradient is a numeric calculation that tells us how to adjust the parameters of
a network so that its output error is minimized.
The gradient of a function is the vector whose elements are its partial derivatives with respect to
each parameter.
So each element of the gradient tells us how the cost function would change if we applied a small
change to that particular parameter — so we know what to tweak and by how much.
1. Compute the gradient of our “current location” (calculate the gradient using our current
parameter values).
2. Modify each parameter by an amount proportional to its gradient element and in the opposite
direction of its gradient element. For example, if the partial derivative of our cost function with
respect to B0 is positive but tiny and the partial derivative with respect to B1 is negative and
large, then we want to decrease B0 by a tiny amount and increase B1 by a large amount to lower
our cost function.
3. Recompute the gradient using the newly tweaked parameter values and repeat the previous steps
until we arrive at a minimum.
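The three steps above can be sketched on a toy linear model y = B0 + B1·x with an MSE cost, matching the B0/B1 example in step 2. The data, learning rate, and iteration count are assumptions for illustration.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x          # targets generated from true B0 = 1, B1 = 2

B0, B1 = 0.0, 0.0          # start from an arbitrary "current location"
lr = 0.05                  # learning rate (assumed)

for _ in range(2000):
    err = (B0 + B1 * x) - y
    # Step 1: compute the gradient at the current parameter values
    # (partial derivatives of the MSE cost with respect to B0 and B1).
    dB0 = 2.0 * np.mean(err)
    dB1 = 2.0 * np.mean(err * x)
    # Step 2: move each parameter opposite to its gradient element,
    # by an amount proportional to that element.
    B0 -= lr * dB0
    B1 -= lr * dB1
    # Step 3: the loop repeats with the tweaked parameters.
```

After enough iterations B0 and B1 settle near the values that generated the targets.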
Back propagation: Back propagation allows us to calculate the error attributable to
each neuron, which in turn allows us to calculate the partial derivatives and ultimately the
gradient, so that we can use gradient descent.
In my opinion, these are the three key takeaways for back propagation:
1. It is the process of shifting the error backwards layer by layer and attributing the correct amount
of error to each neuron in the neural network.
2. The error attributable to a particular neuron is a good approximation for how changing that
neuron’s weights (from the connections leading into the neuron) and bias will affect the cost
function.
3. When looking backwards, the more active neurons (the non-lazy ones) are the ones that get
blamed and tweaked by the back propagation process.
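The three takeaways can be seen in a manual backward pass for a tiny one-hidden-layer network with sigmoid activations and a squared-error loss. The layer sizes, data, and learning rate are assumptions; the point is how the error deltas are shifted backwards layer by layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
x = rng.normal(size=(2,))                        # one training input (assumed)
y = np.array([1.0])                              # its target value

W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)    # hidden layer (3 neurons)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)    # output layer (1 neuron)
lr = 0.5

for _ in range(500):
    # Forward propagation.
    z1 = W1 @ x + b1
    a1 = sigmoid(z1)
    z2 = W2 @ a1 + b2
    a2 = sigmoid(z2)

    # Backward pass: attribute error to each neuron, output layer first.
    delta2 = (a2 - y) * a2 * (1 - a2)             # error at the output neuron
    delta1 = (W2.T @ delta2) * a1 * (1 - a1)      # error shifted back to hidden neurons

    # The deltas give the partial derivatives for every weight and bias,
    # which gradient descent then uses. Note that a more active hidden
    # neuron (larger a1) receives a larger weight update.
    W2 -= lr * np.outer(delta2, a1)
    b2 -= lr * delta2
    W1 -= lr * np.outer(delta1, x)
    b1 -= lr * delta1

loss = float((a2 - y) ** 2)   # should shrink as training proceeds
```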
Link: https://towardsdatascience.com/understanding-neural-networks-19020b758230
The basic steps to build an image classification model using a neural network are:
1. Flatten the input image dimensions to 1D (width pixels x height pixels)
2. Normalize the image pixel values (divide by 255)
3. One-Hot Encode the categorical column
4. Build a model architecture (Sequential) with Dense layers
5. Train the model and make predictions
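Steps 1-3 above can be sketched in NumPy on a fake batch of 28x28 grayscale images. The shapes and the 10-class setup mirror MNIST-style examples; the pixel data and labels here are random placeholders, and the model-building and training steps (4-5) are left to the linked tutorials.

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(5, 28, 28))  # 5 fake grayscale images
labels = np.array([3, 0, 7, 1, 9])               # fake class labels

# 1. Flatten each image to a 1-D vector of width x height pixels.
flat = images.reshape(len(images), -1)           # shape (5, 784)

# 2. Normalize pixel values into [0, 1] by dividing by 255.
norm = flat / 255.0

# 3. One-hot encode the categorical label column (10 classes).
one_hot = np.eye(10)[labels]                     # shape (5, 10)
```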
https://towardsdatascience.com/covolutional-neural-network-cb0883dd6529
https://medium.com/@RaghavPrabhu/understanding-of-convolutional-neural-network-cnn-deep-
learning-99760835f148#:~:text=Convolution%20is%20the%20first%20layer,and%20a%20filter
%20or%20kernel.
https://towardsdatascience.com/classify-your-images-using-convolutional-neural-network-
4b54989d93dd
https://medium.com/@ksusorokina/image-classification-with-convolutional-neural-networks-
496815db12a8
https://www.geeksforgeeks.org/image-classifier-using-cnn/
https://analyticsindiamag.com/deep-learning-image-classification-with-cnn-an-overview/
https://ip.cadence.com/uploads/901/cnn_wp-pdf