Introduction to Neural Network

Basic Terms used

 Neuron: A building block of an ANN. It is responsible for accepting input data, performing calculations, and producing output.
 Input data: Information or data provided to the neurons.
 Artificial Neural Network (ANN): A computational system inspired by the way
biological neural networks in the human brain process information.
 Deep Neural Network: An ANN with many layers placed between the input layer
and the output layer.
 Weights: The strength of the connection between two neurons. Weights determine
what impact the input will have on the output.
 Bias: An additional parameter used along with the sum of the product of weights and
inputs to produce an output.
 Activation Function: Determines the output of a neuron by transforming the weighted sum of its inputs and the bias (illustrated in the sketch below).
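
To make these terms concrete, here is a minimal sketch of a single neuron in Python, assuming NumPy is available; the input, weight, and bias values are made up for illustration, and a sigmoid is assumed as the activation function.

import numpy as np

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoid activation function."""
    z = np.dot(weights, inputs) + bias      # weights scale the impact of each input
    return 1.0 / (1.0 + np.exp(-z))         # sigmoid activation produces the output

x = np.array([0.5, 0.3, 0.2])               # input data
w = np.array([0.4, 0.7, -0.2])              # weights (connection strengths)
b = 0.1                                     # bias
print(neuron(x, w, b))                      # a single output value between 0 and 1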

Neural Networks Overview

A neural network is made of artificial neurons that receive and process input data. Data is
passed through the input layer, the hidden layer, and the output layer.
A neural network process starts when input data is fed to it. Data is then processed via its
layers to provide the desired output.

A neural network learns from structured data and produces an output. Learning within neural networks falls into three categories:

 Supervised Learning - labeled data, consisting of inputs and their corresponding outputs, is fed to the algorithm, which is trained on how to interpret the data and then predicts the desired result.
 Unsupervised Learning - the ANN learns with no human intervention. There is no labeled data, and the output is determined according to patterns identified within the input data.
 Reinforcement Learning - the network learns depending on the feedback you give it.

The essential building block of a neural network is the perceptron, or neuron. It uses the supervised learning method to learn and classify data.

How Neural Networks work

Neural networks are complex systems built from artificial neurons.

An artificial neuron, or perceptron, consists of:

 Input
 Weight
 Bias
 Activation Function
 Output
Each neuron receives many inputs and produces a single output.

Neural networks are composed of layers of neurons.

These layers consist of the following:

 Input layer
 Multiple hidden layers
 Output layer

The input layer receives data represented by a numeric value. Hidden layers perform the most
computations required by the network. Finally, the output layer predicts the output.

In a neural network, the neurons are connected to and influence one another. Each layer is made of neurons. Once the input layer receives data, it is passed on to the hidden layer. Each input is assigned a weight.

A weight is a value that scales the input data as it passes into the network’s hidden layers. Starting at the input layer, each input value is multiplied by its weight.

This produces the values fed to the first hidden layer. The hidden layers transform the data and pass it on to the next layer, and the output layer produces the desired output.
The inputs and weights are multiplied, and their sum is sent to the neurons in the hidden layer. A bias is added in each neuron. Each neuron sums the inputs it receives, and this value then passes through the activation function.

The outcome of the activation function then decides whether a neuron is activated or not. An activated neuron passes information on to the next layer. In this way, the data flows through the network until it reaches the output layer.

Another name for this is forward propagation. Feed-forward propagation is the process of feeding data into the input nodes and obtaining the output through the output nodes.

Feed-forward propagation takes place when a hidden layer accepts the input data, processes it with the activation function, and passes it on towards the output. The neuron in the output layer with the highest probability then determines the predicted result.
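
As an illustration, below is a minimal sketch of forward propagation through one hidden layer and one output layer, assuming NumPy; the layer sizes and random initial weights are arbitrary choices for the example.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """Forward propagation: input layer -> hidden layer -> output layer."""
    hidden = sigmoid(W1 @ x + b1)      # weighted sums plus bias, then activation
    output = sigmoid(W2 @ hidden + b2)
    return output

rng = np.random.default_rng(0)
x  = np.array([0.2, 0.8])                        # two input features
W1 = rng.normal(size=(3, 2)); b1 = np.zeros(3)   # 2 inputs -> 3 hidden neurons
W2 = rng.normal(size=(1, 3)); b2 = np.zeros(1)   # 3 hidden neurons -> 1 output
print(forward(x, W1, b1, W2, b2))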

If the output is wrong, backpropagation takes place. When a neural network is designed, weights are initialized for each input. Backpropagation means re-adjusting those weights to minimize the error, resulting in a more accurate output.
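
As a hedged illustration of this idea, the sketch below trains a single sigmoid neuron with gradient descent on one example; the training values and learning rate are made up, and a squared-error loss is assumed.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, target = np.array([0.5, 0.3]), 1.0          # one training example and its label
w, b, lr  = np.array([0.1, -0.2]), 0.0, 0.5    # initial weights, bias, learning rate

for step in range(100):
    y = sigmoid(np.dot(w, x) + b)       # forward pass
    error = y - target                  # how wrong the output is
    grad = error * y * (1 - y)          # backpropagation: chain rule through the sigmoid
    w -= lr * grad * x                  # re-adjust the weights to reduce the error
    b -= lr * grad

print(sigmoid(np.dot(w, x) + b))        # the prediction moves closer to the target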

Types of Neural Networks

Neural networks are categorized by the mathematical operations and principles they use to determine the output. Below we go over the different types of neural networks.

Perceptron

Frank Rosenblatt proposed the Perceptron model (a single-layer neural network), describing it as modeled after how the human brain functions.

It is one of the simplest models that can learn to classify data. A perceptron is also called an artificial neuron.

A perceptron network is composed of two layers:

 Input Layer
 Output Layer

The input layer computes the weighted sum of the inputs for every node, and the activation function is then applied to produce the output.
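
As an illustrative sketch (not taken from the text), the code below trains a perceptron with a step activation on the logical AND function, assuming NumPy; the learning rate and epoch count are arbitrary.

import numpy as np

def perceptron_predict(x, w, b):
    """Perceptron: weighted sum of inputs plus bias, then a step activation."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Train on the logical AND function with the perceptron learning rule
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b, lr = np.zeros(2), 0.0, 0.1

for epoch in range(20):
    for xi, yi in zip(X, y):
        error = yi - perceptron_predict(xi, w, b)
        w += lr * error * xi        # adjust weights only when the prediction is wrong
        b += lr * error

print([perceptron_predict(xi, w, b) for xi in X])   # [0, 0, 0, 1]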

Feed Forward Neural Network

In a feed-forward network, data moves in a single direction. It enters via the input nodes and
leaves through output nodes.

This is called forward propagation.

Because data moves in only one direction, there are no feedback connections. During training, the backpropagation algorithm calculates the gradient of the loss function with respect to the weights in the network.

The weighted sum of the inputs is computed, and the result is passed on towards the output (see the sketch after the list below).

A couple of feed-forward neural network applications are:

 Speech Recognition
 Facial Recognition
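
As a minimal sketch of a feed-forward classifier in practice, assuming scikit-learn is installed, the example below fits a small network on toy XOR data; the hidden layer size, solver, and data are illustrative choices only.

from sklearn.neural_network import MLPClassifier
import numpy as np

# Toy data: two input features, two classes (XOR needs a hidden layer)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# One hidden layer of 8 neurons; data flows input -> hidden -> output on each pass
clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs", max_iter=2000,
                    random_state=1)
clf.fit(X, y)
print(clf.predict(X))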

Radial Basis Function Neural Network

Radial Basis Function Neural Networks (RBFs) are composed of three layers:

 Input Layer
 Hidden Layer
 Output Layer

RBF networks classify data based on its distance from a set of centre points, using interpolation.

Interpolation is the same technique used, for example, to resize images. Classification is carried out by comparing the input data with the point that each neuron stores. RBF networks look for similar data points and group them together.

The weighted sum of the hidden layer’s outputs is sent to the output layer to form the network’s outputs.
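
To illustrate, here is a minimal sketch of an RBF forward pass, assuming NumPy; the centre points, kernel width (gamma), and output weights are made-up values, since in practice they would be learned from data.

import numpy as np

def rbf_forward(x, centers, gamma, weights):
    """RBF network: hidden units measure the distance from the input to stored
    centre points with a Gaussian kernel; the output is their weighted sum."""
    dists = np.linalg.norm(centers - x, axis=1)   # distance to each centre point
    hidden = np.exp(-gamma * dists ** 2)          # radial basis activations
    return hidden @ weights                       # output layer: weighted sum

centers = np.array([[0.0, 0.0], [1.0, 1.0]])      # illustrative centre points
weights = np.array([0.5, -0.5])                   # illustrative output weights
print(rbf_forward(np.array([0.2, 0.1]), centers, gamma=1.0, weights=weights))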
Recurrent Neural Network

Neural networks such as feed-forward networks move data in one direction only. This type of network has the disadvantage of not remembering past inputs, which is where RNNs come into play. RNNs do not work like standard feed-forward networks.

A Recurrent Neural Network (RNN) is a network that is good at modeling sequential data. Sequential data is data that follows a particular order, in which one element follows another.

In an RNN, the output of the previous step is fed back in as an input to the current step, which makes the RNN a feedback neural network.

Saving this output helps the network make later decisions.

In RNNs, data runs through a loop, so that each node remembers data from the previous step.

RNNs have a memory that helps the network recall what happened earlier in the sequence. While carrying out operations, the neurons also act as memory cells.
RNNs are used to solve problems in stock prediction, text data, and audio data.

In other words, they are used to solve problems such as text-to-speech conversion and language translation.
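
As an illustration, below is a minimal sketch of a simple recurrent step, assuming NumPy; the sizes and random values are arbitrary, and tanh is assumed as the activation.

import numpy as np

def rnn_step(x_t, h_prev, Wx, Wh, b):
    """One recurrent step: the new hidden state depends on the current input
    and on the previous hidden state (the network's memory)."""
    return np.tanh(Wx @ x_t + Wh @ h_prev + b)

rng = np.random.default_rng(0)
Wx = rng.normal(size=(4, 3))        # input -> hidden weights (3 features, 4 hidden units)
Wh = rng.normal(size=(4, 4))        # hidden -> hidden (feedback) weights
b  = np.zeros(4)

h = np.zeros(4)                     # the memory starts out empty
sequence = rng.normal(size=(5, 3))  # a sequence of 5 time steps
for x_t in sequence:                # data runs through a loop over the sequence
    h = rnn_step(x_t, h, Wx, Wh, b)
print(h)                            # the final hidden state summarizes the sequence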

Convolutional Neural Network

Convolutional Neural Networks (CNNs) are commonly used for image recognition. CNNs contain a three-dimensional arrangement of neurons.

The first stage is the convolutional layer. Neurons in a convolutional layer only process information from a small part of the visual field (the image). Input features are extracted from the image in small patches.

The second stage is pooling. It reduces the dimensions of the features while, at the same time, preserving valuable information.

CNNs move into the third stage (a fully connected neural network) when the features reach the right level of granularity.

At the final stage, the output probabilities are analysed to decide which class the image belongs to.

This type of network understands the image in parts and repeats the operations many times to process the complete image.
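
To make the convolution and pooling stages concrete, here is a minimal sketch assuming NumPy; the image, the edge-detecting kernel, and the pooling size are illustrative, and a real CNN would learn its filters during training.

import numpy as np

def convolve2d(image, kernel):
    """Convolutional layer (one filter): each output value is computed from a
    small patch of the image rather than the whole visual field."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Pooling layer: shrinks the feature map while keeping the strongest responses."""
    h, w = feature_map.shape[0] // size, feature_map.shape[1] // size
    trimmed = feature_map[:h * size, :w * size]
    return trimmed.reshape(h, size, w, size).max(axis=(1, 3))

image  = np.random.default_rng(0).random((8, 8))          # a tiny grey-scale "image"
kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]])    # a vertical edge detector
features = max_pool(np.maximum(convolve2d(image, kernel), 0))  # conv -> ReLU -> pool
print(features.shape)   # (3, 3): a reduced feature map passed on to the dense layers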

Image processing often involves conversion from RGB to grey-scale. After the image is processed, changes in pixel value help identify the edges, and the images are then grouped into different classes.
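
For example, a common way to convert RGB to grey-scale (assumed here, not specified in the text) is the luminosity method, sketched below with NumPy on a made-up image.

import numpy as np

rgb = np.random.default_rng(0).integers(0, 256, size=(4, 4, 3))  # a tiny RGB image
# Luminosity method: weight each channel by how sensitive the eye is to that colour
grey = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
print(grey.shape)   # (4, 4): one value per pixel instead of three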

CNNs are mainly used in signal and image processing.



Advantages of Neural Networks

Fault tolerance

In a neural network, even if a few neurons are not working properly, that does not prevent the network from generating outputs.

Real-time Operations

Neural networks can learn in real time and easily adapt to their changing environments.

Adaptive Learning

Neural networks can learn how to work on different tasks based on the data they are given, producing the right output.

Parallel processing capacity

Neural networks have the strength and ability to perform multiple jobs simultaneously.

Disadvantages of Neural Networks

Unexplained behaviour of the network

Neural networks can provide a solution to a problem, but due to the complexity of the network they do not provide the reasoning behind why and how the decisions were made. As a result, trust in the network may be reduced.

Determination of appropriate network structure


There is no specified rule (or rule of thumb) for choosing a neural network structure. A proper network structure is found by trial and error, a process that involves gradual refinement.

Hardware dependence

The components of a neural network depend on one another. By this we mean that neural networks require (or are highly dependent on) processors with adequate processing capacity.
