
Department of IR&D

COURSE NAME: ANN
COURSE CODE: 22AIP3204R

Session 11
AIM OF THE
SESSION

To model complex functions by propagating input data through multiple layers to produce an output.

INSTRUCTIONAL OBJECTIVES

This session is designed to: define the architecture and function of feed-forward neural networks,
describe how weights and biases are adjusted during training, and demonstrate how a trained
network maps inputs to outputs.

LEARNING OUTCOMES

At the end of this session, you should be able to: design and implement a neural network model to
solve real-world problems.
FEED-FORWARD NEURAL NETWORK (FFNN)

• Feed-forward neural networks (FFNNs) are a type of artificial neural network that can
be used for tasks such as pattern association, pattern classification, and pattern mapping. In
these tasks, the network is trained to recognize patterns in input data and map them to a
corresponding output.
• Pattern classification is the process of assigning input data to one of several pre-defined
categories or classes. For example, an FFNN could be trained to classify images of animals into
categories such as "cat," "dog," or "bird." During training, the network is presented with a set
of input images and their corresponding categories, and it learns to map each image to its
correct category. Once trained, the network can be used to classify new images into these
categories.
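The classification idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not a trained model: the weights are random placeholders, the layer sizes (4 inputs, 3 hidden units, 3 classes) are arbitrary choices, and the class names follow the cat/dog/bird example.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

CLASSES = ["cat", "dog", "bird"]

# Illustrative placeholder weights (a real FFNN would learn these):
# 4 input features -> 3 hidden units -> 3 output classes
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 3)), np.zeros(3)

def classify(x):
    h = np.tanh(W1 @ x + b1)         # hidden-layer activation
    probs = softmax(W2 @ h + b2)     # class probabilities
    return CLASSES[int(np.argmax(probs))]

label = classify(np.array([0.2, -0.1, 0.5, 0.3]))
print(label)                         # one of "cat", "dog", "bird"
```

The point of the sketch is the data flow: the input is pushed forward through each layer, and the largest output score decides the class.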

TRAINING A FEED-FORWARD NEURAL NETWORK

The process of training a feed-forward neural network involves the following steps:
1. Initialization: The weights and biases of the network are randomly initialized to small values.
2. Forward Propagation: The input data is fed through the network, and each neuron in each layer
calculates a weighted sum of its inputs, applies an activation function to this sum, and passes the
result on to the next layer.
3. Error Calculation: The difference between the actual output and the desired output is calculated,
and this error is used to adjust the weights and biases in the network.
4. Backward Propagation: The error is propagated backwards through the network, and the weights
and biases of each neuron are adjusted to minimize the error.
5. Repeat: Steps 2-4 are repeated many times until the network reaches a state where the error is
minimized, and the network is able to accurately map inputs to outputs.

REFERENCES FOR FURTHER LEARNING OF
THE SESSION

Reference Books:
1. "Deep Learning" by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
2. "Introduction to Artificial Neural Systems" by Jacek M. Zurada
3. "Artificial Intelligence: A Modern Approach" by Stuart Russell and Peter Norvig.
Sites and Web links:
4. Website: http://www.deeplearningbook.org/
5. Website: https://www.wiley.com/en-us/Introduction+to+Artificial+Neural+Systems%2C+Second+Edition-p-9780471551616
6. Website: http://aima.cs.berkeley.edu/
THANK YOU

Team – Artificial Neural Network
