
 Neuron: its only job is to produce an output by applying a function to the inputs it receives.

The function used in a neuron is generally termed an activation function. Five major
activation functions have been widely used to date: step, sigmoid, tanh, ReLU, and leaky ReLU.
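
For reference, here is a minimal NumPy sketch of those five functions (the 0.01 slope used for leaky ReLU is a common default, an assumption rather than something stated above):

import numpy as np

def step(z):
    return np.where(z >= 0, 1.0, 0.0)     # hard threshold at zero

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))       # squashes to (0, 1)

def tanh(z):
    return np.tanh(z)                     # squashes to (-1, 1)

def relu(z):
    return np.maximum(0.0, z)             # passes positives, zeroes negatives

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)  # small slope instead of zero

print(relu(np.array([-2.0, 0.5, 2.0])))   # -> [0.  0.5 2. ]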

“Each individual neuron in the layer acts like its own model”.

 NN: Neural networks are multi-layer networks of neurons that we use to classify things, make
predictions, etc. 

Starting from the left, we have:

1. The input layer of our model in orange.
2. Our first hidden layer of neurons in blue.
3. Our second hidden layer of neurons in magenta.
4. The output layer (a.k.a. the prediction) of our model in green.

The arrows that connect the dots show how all the neurons are interconnected and how data travels
from the input layer all the way through to the output layer.

We have a set of inputs and a set of target values — and we are trying to get predictions that match
those target values as closely as possible.

By repeatedly calculating Z, i.e. the weighted sum of the inputs plus the bias, and applying the
activation function to it for each successive layer, we can move from input to output. This process
is known as forward propagation: moving forward from input to output by making computations and
calculating activations at each neuron in each layer.
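
As a minimal sketch of forward propagation (the layer sizes, random weights, and choice of sigmoid below are illustrative assumptions, not taken from the text):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Illustrative sizes: 3 inputs -> two hidden layers of 4 -> 1 output.
sizes = [3, 4, 4, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal((m, 1)) for m in sizes[1:]]

def forward(x):
    a = x
    for W, b in zip(weights, biases):
        z = W @ a + b    # Z: weighted sum of the inputs plus the bias
        a = sigmoid(z)   # activation computed at each neuron in the layer
    return a             # the final activation is the prediction

x = rng.standard_normal((3, 1))   # one input example as a column vector
print(forward(x))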

 Weights: a weight decides how much influence the input will have on the output.
 Bias: Bias is just like the intercept added in a linear equation. It is an additional parameter in the
neural network which is used to adjust the output along with the weighted sum of the inputs to
the neuron. The bias value allows you to shift the activation function to the right or left.
 Cost function: A cost function is a measure of the error between the value your model predicts and
the actual value.

The purpose of the cost function is to be minimized; the returned value is usually called cost,
loss, or error. The goal is to find the values of the model parameters for which the cost function
returns as small a number as possible.
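
For instance, with mean squared error (one common choice of cost function; using it here is an assumption, not something named above):

import numpy as np

def mse(predictions, targets):
    # Average of the squared differences between what the model
    # predicts and what the values actually are.
    return np.mean((predictions - targets) ** 2)

print(mse(np.array([2.5, 0.0, 2.0]), np.array([3.0, -0.5, 2.0])))   # -> 0.1666...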

 Gradient Descent:
The gradient is a numeric calculation that tells us how to adjust the parameters of
a network in such a way that its output deviation is minimized.

Gradient descent is an optimization algorithm used to minimize some function by iteratively
moving in the direction of steepest descent, as defined by the negative of the gradient. In machine
learning, we use gradient descent to update the parameters of our model.

The gradient of a function is the vector whose elements are its partial derivatives with respect to
each parameter. 

So each element of the gradient tells us how the cost function would change if we applied a small
change to that particular parameter — so we know what to tweak and by how much.

To summarize, we can march towards the minimum by following these steps (sketched in code after the list):

1. Compute the gradient of our “current location” (calculate the gradient using our current
parameter values).
2. Modify each parameter by an amount proportional to its gradient element and in the opposite
direction of its gradient element. For example, if the partial derivative of our cost function with
respect to B0 is positive but tiny and the partial derivative with respect to B1 is negative and
large, then we want to decrease B0 by a tiny amount and increase B1 by a large amount to lower
our cost function.
3. Recompute the gradient using our new, tweaked parameter values and repeat the previous steps
until we arrive at the minimum.
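
Here is a minimal sketch of those three steps for a one-feature linear model with parameters B0 (intercept) and B1 (slope), minimizing mean squared error (the data, learning rate, and iteration count are illustrative assumptions):

import numpy as np

# Illustrative data that roughly follows y = 1 + 2x.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])

B0, B1 = 0.0, 0.0   # our starting "location" in parameter space
lr = 0.05           # step size (learning rate)

for _ in range(500):
    pred = B0 + B1 * x
    # Step 1: compute the gradient at the current parameter values
    # (partial derivatives of MSE with respect to B0 and B1).
    dB0 = 2 * np.mean(pred - y)
    dB1 = 2 * np.mean((pred - y) * x)
    # Step 2: move each parameter opposite to its gradient element,
    # by an amount proportional to that element.
    B0 -= lr * dB0
    B1 -= lr * dB1
    # Step 3: the loop recomputes the gradient with the tweaked values.

print(B0, B1)   # approaches roughly 1 and 2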
 Back propagation: back propagation allows us to calculate the error attributable to
each neuron, and that in turn allows us to calculate the partial derivatives and ultimately the
gradient, so that we can utilize gradient descent.
In my opinion, these are the three key takeaways for back propagation (sketched in code after the list):
1. It is the process of shifting the error backwards layer by layer and attributing the correct amount
of error to each neuron in the neural network.
2. The error attributable to a particular neuron is a good approximation for how changing that
neuron’s weights (from the connections leading into the neuron) and bias will affect the cost
function.
3. When looking backwards, the more active neurons (the non-lazy ones) are the ones that get
blamed and tweaked by the back propagation process.
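
A compact NumPy sketch of takeaway 1 for a one-hidden-layer network (the sigmoid activations, squared-error loss, and layer sizes are illustrative assumptions): the error is shifted backwards layer by layer as "deltas", which then give the partial derivatives that gradient descent needs.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((4, 3)), np.zeros((4, 1))   # hidden layer
W2, b2 = rng.standard_normal((1, 4)), np.zeros((1, 1))   # output layer

x = rng.standard_normal((3, 1))   # one input example
y = np.array([[1.0]])             # its target value

# Forward pass, keeping activations around for the backward pass.
z1 = W1 @ x + b1
a1 = sigmoid(z1)
z2 = W2 @ a1 + b2
a2 = sigmoid(z2)

# Backward pass: shift the error backwards layer by layer.
delta2 = (a2 - y) * a2 * (1 - a2)          # error attributed to the output neuron
delta1 = (W2.T @ delta2) * a1 * (1 - a1)   # error attributed to each hidden neuron

# The deltas yield the partial derivatives (the gradient) for each layer.
dW2, db2 = delta2 @ a1.T, delta2
dW1, db1 = delta1 @ x.T, delta1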

Link: https://towardsdatascience.com/understanding-neural-networks-19020b758230

 Annotation: An annotation is extra information associated with a particular point in a document
or other piece of information.
 CNN: In simple words, what a CNN does is extract the features of an image and convert them into a
lower dimension without losing their characteristics, as sketched below.
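
A minimal Keras sketch makes the shrinking visible (the 28x28 input size and filter count are illustrative assumptions):

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),                      # grayscale image
    layers.Conv2D(8, kernel_size=3, activation="relu"),   # extract features
    layers.MaxPooling2D(pool_size=2),                     # reduce dimension
])
model.summary()   # shapes shrink: (28, 28, 1) -> (26, 26, 8) -> (13, 13, 8)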

 Back propagation in CNN:

This is where the CNN collects feedback and improves itself.
1. After a prediction, each layer receives feedback from the layer ahead of it, since the error flows
backwards through the network. The feedback is in the form of the losses incurred at each layer
during prediction.
2. The aim of the CNN algorithm is to arrive at the optimal (minimum) loss. We call this a local minimum.
3. Based on the feedback, the network updates the weights of its kernels.
4. This makes the output of the convolutions better the next time a forward pass happens.
5. When the next forward pass happens, the loss comes down. Again we do back prop, the
network continues to adjust, the loss comes down further, and the process repeats.
6. This forward pass followed by back prop keeps happening for however many iterations we choose to
train our model. We call these epochs.

 The basic steps to build an image classification model using a neural network are (sketched in code after the list):
1. Flatten the input image dimensions to 1D (width pixels x height pixels)
2. Normalize the image pixel values (divide by 255)
3. One-Hot Encode the categorical column
4. Build a model architecture (Sequential) with Dense layers
5. Train the model and make predictions
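
Here is a minimal sketch of those five steps with Keras, assuming MNIST-style 28x28 grayscale digits with 10 classes (the layer sizes and epoch count are illustrative):

from tensorflow import keras
from tensorflow.keras import layers

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# 1. Flatten the 28x28 images to 1D vectors of 784 pixels.
x_train = x_train.reshape(-1, 28 * 28)
x_test = x_test.reshape(-1, 28 * 28)

# 2. Normalize the pixel values by dividing by 255.
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255

# 3. One-hot encode the categorical label column (10 classes).
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

# 4. Build a Sequential architecture with Dense layers.
model = keras.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# 5. Train the model, then make predictions.
model.fit(x_train, y_train, epochs=5, batch_size=32)
predictions = model.predict(x_test)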
https://towardsdatascience.com/covolutional-neural-network-cb0883dd6529

https://medium.com/@RaghavPrabhu/understanding-of-convolutional-neural-network-cnn-deep-learning-99760835f148#:~:text=Convolution%20is%20the%20first%20layer,and%20a%20filter%20or%20kernel.

https://towardsdatascience.com/classify-your-images-using-convolutional-neural-network-4b54989d93dd

https://medium.com/@ksusorokina/image-classification-with-convolutional-neural-networks-496815db12a8

https://www.geeksforgeeks.org/image-classifier-using-cnn/

https://analyticsindiamag.com/deep-learning-image-classification-with-cnn-an-overview/

https://ip.cadence.com/uploads/901/cnn_wp-pdf
