
# Neural Networks and Their Applications

Presented By:
Ahmed Hashmi
Chinmoy Das

## What is a neural network?

An Artificial Neural Network (ANN) is an information-processing paradigm inspired by biological nervous systems. It is composed of a large number of highly interconnected processing elements called neurons. An ANN is configured for a specific application, such as pattern recognition or data classification.

## Why use neural networks?

- Ability to derive meaning from complicated or imprecise data.
- Extraction of patterns and detection of trends that are too complex to be noticed by either humans or other computer techniques.
- Real-time operation.

## Neural Networks vs. Conventional Computers

Conventional computers use an algorithmic approach, whereas neural networks work in a way similar to the human brain and learn by example.

## Inspiration from Neurobiology

A neuron is a many-inputs / one-output unit. The output can be excited or not excited; incoming signals from other neurons determine whether the neuron will excite ("fire"). The output is subject to attenuation in the synapses, which are the junction parts of the neuron.

## A simple neuron

A simple neuron takes the inputs, calculates the summation of the inputs, and compares it with the threshold set during the learning stage.
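As a rough sketch (not code from the slides), such a unit can be written in a few lines of Python; the weights and threshold below are illustrative placeholders:

```python
def simple_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of the inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Illustrative values only: two inputs, equal weights, threshold 1.0
print(simple_neuron([1, 0], [0.6, 0.6], 1.0))  # 0 -> does not fire
print(simple_neuron([1, 1], [0.6, 0.6], 1.0))  # 1 -> fires
```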

## Firing Rules

A firing rule determines how one calculates whether a neuron should fire for any input pattern. Some input patterns cause it to fire (the 1-taught set of patterns) and others prevent it from doing so (the 0-taught set).

## Example

For example, a 3-input neuron is taught to output 1 when the input (X1, X2, X3) is 111 or 101, and to output 0 when the input is 000 or 001. Before the firing rule is applied, the truth table is (0/1 marks inputs for which the output is not yet defined):

| X1: | 0 | 0 | 0   | 0   | 1   | 1 | 1   | 1 |
|-----|---|---|-----|-----|-----|---|-----|---|
| X2: | 0 | 0 | 1   | 1   | 0   | 0 | 1   | 1 |
| X3: | 0 | 1 | 0   | 1   | 0   | 1 | 0   | 1 |
| OUT:| 0 | 0 | 0/1 | 0/1 | 0/1 | 1 | 0/1 | 1 |

## Example

Take the pattern 010. It differs from 000 in 1 element, from 001 in 2 elements, from 101 in 3 elements and from 111 in 2 elements. Therefore, the 'nearest' pattern is 000, which belongs to the 0-taught set. Thus the firing rule requires that the neuron should not fire when the input is 010. On the other hand, 011 is equally distant from two taught patterns that have different outputs, so the output stays undefined (0/1). Applying the firing rule to every pattern gives the following truth table:

| X1: | 0 | 0 | 0 | 0   | 1   | 1 | 1 | 1 |
|-----|---|---|---|-----|-----|---|---|---|
| X2: | 0 | 0 | 1 | 1   | 0   | 0 | 1 | 1 |
| X3: | 0 | 1 | 0 | 1   | 0   | 1 | 0 | 1 |
| OUT:| 0 | 0 | 0 | 0/1 | 0/1 | 1 | 1 | 1 |
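As an illustrative sketch only (not code from the slides), this nearest-taught-pattern firing rule can be implemented with Hamming distance:

```python
def firing_rule(pattern, taught_1, taught_0):
    """Return 1, 0, or '0/1' (undefined) using the nearest taught pattern by Hamming distance."""
    hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
    d1 = min(hamming(pattern, t) for t in taught_1)  # distance to nearest 1-taught pattern
    d0 = min(hamming(pattern, t) for t in taught_0)  # distance to nearest 0-taught pattern
    if d1 < d0:
        return 1
    if d0 < d1:
        return 0
    return "0/1"  # equally distant: output stays undefined

taught_1 = ["111", "101"]
taught_0 = ["000", "001"]
for p in ["000", "001", "010", "011", "100", "101", "110", "111"]:
    print(p, firing_rule(p, taught_1, taught_0))  # reproduces the table above
```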

## Types of neural network

fixed networksin which the weights cannot
be changed, ie dW/dt=0. In such networks,
the weights are fixed a priori according to
the problem to solve.
change their weights, ie dW/dt not= 0.

## The Learning Process

Associative mapping, in which the network learns to produce a particular pattern on the set of output units whenever another particular pattern is applied on the set of input units. The associative mapping can generally be broken down into two mechanisms:

- Nearest-neighbour recall, where the output pattern produced corresponds to the stored input pattern that is closest to the pattern presented.
- Interpolative recall, where the output pattern is a similarity-dependent interpolation of the stored patterns corresponding to the pattern presented.

Yet another paradigm, which is a variant of associative mapping, is classification, i.e. when there is a fixed set of categories into which the input patterns are to be classified.

## Supervised Learning

Supervised learning incorporates an external teacher, so that each output unit is told what its desired response to input signals ought to be. During the learning process, global information may be required. Paradigms of supervised learning include error-correction learning, reinforcement learning and stochastic learning.

An important issue concerning supervised learning is the problem of error convergence, i.e. the minimisation of the error between the desired and computed unit values. The aim is to determine a set of weights which minimises the error. One well-known method, common to many learning paradigms, is least mean square (LMS) convergence.
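As an illustrative sketch (not from the slides), the LMS idea of minimising the error between desired and computed values can be written as the delta rule for a single linear unit; the learning rate and toy data below are placeholders:

```python
def lms_train(samples, n_inputs, eta=0.1, epochs=50):
    """Least-mean-square (delta rule) training of a single linear unit."""
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, d in samples:                                 # x: inputs, d: desired output
            y = sum(wi * xi for wi, xi in zip(w, x)) + b     # computed output
            err = d - y                                      # the error the rule minimises
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
            b += eta * err
    return w, b

# Toy data: desired output is roughly 2*x1 - x2 (placeholder values)
data = [((1, 0), 2), ((0, 1), -1), ((1, 1), 1), ((0, 0), 0)]
print(lms_train(data, n_inputs=2))
```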

## Unsupervised Learning

Unsupervised learning uses no external teacher and is based only upon local information. It is also referred to as self-organisation, in the sense that it self-organises the data presented to the network and detects their emergent collective properties.

Another aspect of learning concerns whether or not there is a separate phase during which the network is trained, followed by an operation phase. We say that a neural network learns off-line if the learning phase and the operation phase are distinct. A neural network learns on-line if it learns and operates at the same time. Usually, supervised learning is performed off-line, whereas unsupervised learning is performed on-line.

## Back-propagation Algorithm

The algorithm calculates how the error changes as each weight is increased or decreased slightly. It computes each EW (the rate at which the error changes as a weight is changed) by first computing EA, the rate at which the error changes as the activity level of a unit is changed. For output units, EA is simply the difference between the actual and the desired output.
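Purely as an illustrative sketch (the slides give no code), here is a minimal NumPy back-propagation step for a small one-hidden-layer network with sigmoid units; the layer sizes and learning rate are assumptions, and `ea_out`/`ea_hid` correspond to the EA quantities described above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, target, W1, W2, eta=0.5):
    """One forward + backward pass; returns updated weight matrices."""
    # Forward pass
    h = sigmoid(W1 @ x)                 # hidden activities
    y = sigmoid(W2 @ h)                 # output activities
    # EA for output units: difference between actual and desired output
    ea_out = y - target
    # Convert EA into the effect on each unit's total input via the sigmoid slope
    ei_out = ea_out * y * (1 - y)
    # EA for hidden units is obtained by back-propagating through the output weights
    ea_hid = W2.T @ ei_out
    ei_hid = ea_hid * h * (1 - h)
    # EW is the rate of error change per weight; adjust weights against it
    W2 = W2 - eta * np.outer(ei_out, h)
    W1 = W1 - eta * np.outer(ei_hid, x)
    return W1, W2

# Illustrative shapes: 3 inputs, 2 hidden units, 1 output
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(2, 3)), rng.normal(size=(1, 2))
W1, W2 = backprop_step(np.array([1.0, 0.0, 1.0]), np.array([1.0]), W1, W2)
```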

## Transfer Function

The behaviour of an ANN (Artificial Neural Network) depends on both the weights and the input-output function (transfer function) that is specified for the units. This function typically falls into one of three categories:

- linear (or ramp)
- threshold
- sigmoid

For linear units, the output activity is proportional to the total weighted input. For threshold units, the output is set at one of two levels, depending on whether the total input is greater than or less than some threshold value. For sigmoid units, the output varies continuously but not linearly as the input changes. Sigmoid units bear a greater resemblance to real neurons than do linear or threshold units, but all three must be considered rough approximations.
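As a small illustration (assumed, not taken from the slides), the three kinds of transfer function can be written as:

```python
import math

def linear(total_input, slope=1.0):
    """Linear (ramp) unit: output proportional to the total weighted input."""
    return slope * total_input

def threshold(total_input, theta=0.0):
    """Threshold unit: one of two levels depending on the threshold theta."""
    return 1.0 if total_input > theta else 0.0

def sigmoid(total_input):
    """Sigmoid unit: output varies continuously but not linearly with the input."""
    return 1.0 / (1.0 + math.exp(-total_input))

for f in (linear, threshold, sigmoid):
    print(f.__name__, [round(f(x), 3) for x in (-2.0, 0.0, 2.0)])
```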

## Application

- Introduction
- Features of fingerprints
- Fingerprint recognition system
- Why neural networks?
- Goal of the system
- Preprocessing system
- Feature extraction using neural networks
- Classification
- Result

## Features of fingerprints

Fingerprints are the unique pattern of ridges and valleys on every person's fingers. Their patterns are permanent and unchangeable for the whole life of a person. They are unique: the probability that two fingerprints are alike is only 1 in 1.9x10^15. This uniqueness is used for the identification of a person.

## Fingerprint recognition system

Image acquisition -> edge detection -> ridge extraction -> thinning -> feature extraction -> classification

Image acquisition: the acquired image is digitized into a 512x512 image, with each pixel assigned a particular gray scale value (raster image).

Edge detection and thinning: these are preprocessing steps that remove noise and enhance the image.

## Fingerprint recognition system

Feature extraction: this is the step where we locate features such as ridge bifurcations and ridge endings of the fingerprint with the help of a neural network.

Classification: here a class label is assigned to the image depending on the extracted features.
## Why use neural networks?

Neural networks enable us to find solutions where algorithmic methods are computationally intensive or do not exist. There is no need to program neural networks: they learn from examples. Neural networks offer significant advantages over conventional techniques.

## Preprocessing system

The first phase of fingerprint recognition is to capture an image. The image is captured using total internal reflection of light (TIR). The image is stored as a two-dimensional array of size 512x512, each element of the array representing a pixel and assigned a gray scale value from 256 gray scale levels.

## Preprocessing system

After the image is captured, noise is removed using edge detection, ridge extraction and thinning.

Edge detection: an edge of the image is defined where the gray scale level changes greatly. Also, the orientation of ridges is determined for each 32x32 block of pixels.

Ridge extraction: ridges are extracted using the fact that the gray scale value of pixels is maximum along the direction normal to the ridge orientation.

## Preprocessing system

Thinning: the extracted ridges are converted into a skeletal structure in which ridges are only one pixel wide. Thinning should not:

- remove isolated as well as surrounded pixels,
- break connectedness,
- make the image shorter.
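As an illustrative sketch only (the slides do not name a library, so the filter choices here are assumptions), the edge-detection and thinning steps could look like this with scikit-image:

```python
import numpy as np
from skimage import filters, morphology

def preprocess(gray):
    """gray: 512x512 uint8 array. Returns edge map and a one-pixel-wide ridge skeleton."""
    edges = filters.sobel(gray)                    # edges where the gray level changes greatly
    ridges = gray < filters.threshold_otsu(gray)   # binary ridge mask (assumption: ridges are dark)
    skeleton = morphology.skeletonize(ridges)      # thinning: ridges reduced to one pixel wide
    return edges, skeleton

# Placeholder image instead of a captured fingerprint
edges, skeleton = preprocess(np.random.randint(0, 256, (512, 512), dtype=np.uint8))
```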

## Feature extraction using neural networks

A multilayer perceptron network of three layers is trained to detect minutiae in the thinned image:

- The first layer has nine perceptrons.
- The hidden layer has five perceptrons.
- The output layer has one perceptron.

The network is trained to output 1 when the input window is centered on a minutia and to output 0 when no minutiae are present.
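As a hedged sketch (the slides do not specify a library), a 9-5-1 multilayer perceptron of this kind could be set up with scikit-learn; the training data below is a random placeholder:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each sample is a 3x3 window of the thinned image, flattened to 9 binary inputs.
X = np.random.randint(0, 2, size=(200, 9))   # placeholder windows
y = np.random.randint(0, 2, size=200)        # 1 = window centered on a minutia, 0 = not

# 9 inputs -> 5 hidden units -> 1 output unit (logistic, i.e. sigmoid-style, activation)
net = MLPClassifier(hidden_layer_sizes=(5,), activation="logistic", max_iter=1000)
net.fit(X, y)
print(net.predict(X[:5]))
```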

## Feature extraction using neural networks

The trained neural network is used to analyze the image by scanning it with a 3x3 window. To avoid falsely reported features due to noise, the size of the scanning window is increased to 5x5. If minutiae are too close to each other, we ignore all of them.
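A minimal sliding-window scan could look like the sketch below; this is purely illustrative, and `net` is assumed to be a trained 3x3 minutiae detector such as the one sketched above:

```python
def scan_for_minutiae(skeleton, net, window=3):
    """Slide a window over the thinned image and collect positions the network flags as minutiae."""
    half = window // 2
    found = []
    h, w = skeleton.shape
    for r in range(half, h - half):
        for c in range(half, w - half):
            patch = skeleton[r - half:r + half + 1, c - half:c + half + 1]
            if net.predict(patch.reshape(1, -1))[0] == 1:
                found.append((r, c))
    return found
```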

## Classification

Fingerprints can be classified mainly into four classes depending upon their general pattern:

- Arch
- Tented arch
- Right loop
- Left loop

## Applications of Fingerprint Recognition

A fingerprint recognition system can be easily embedded in any system. It is used for:

- recognition of criminals by law enforcement bodies,
- providing security for cars, lockers, banks and shops,
- differentiating between people who have and have not voted in government elections,
- counting individuals.

## Neural Network Toolbox in MATLAB

Neural Network Toolbox provides tools for designing, implementing, visualizing, and simulating neural networks. Neural networks are used for applications where formal analysis would be difficult or impossible, such as pattern recognition and nonlinear system identification and control. Neural Network Toolbox supports dynamic networks, self-organizing maps, and other proven network paradigms.

## Key Features

- Neural network design, training, and simulation
- Pattern recognition, clustering, and data-fitting tools
- Supervised networks including feedforward, radial basis, LVQ, time delay, nonlinear autoregressive (NARX), and layer-recurrent
- Unsupervised networks including self-organizing maps and competitive layers
- Preprocessing and postprocessing for improving the efficiency of network training and assessing network performance
- Modular network representation for managing and visualizing networks of arbitrary size
- Routines for improving generalization to prevent overfitting
- Simulink blocks for building and evaluating neural networks, and advanced blocks for control systems applications

## Working with Neural Network Toolbox

Like its counterpart in the biological nervous system, a neural network can learn and therefore can be trained to find solutions, recognize patterns, classify data, and forecast future events. The behavior of a neural network is defined by the way its individual computing elements are connected and by the strength of those connections, or weights. The weights are automatically adjusted by training the network according to a specified learning rule until it performs the desired task correctly.

Neural Network Toolbox includes command-line functions and graphical tools for creating, training, and simulating neural networks. Graphical tools make it easy to develop neural networks for tasks such as data fitting (including time-series data), pattern recognition, and clustering. After creating your networks in these tools, you can automatically generate MATLAB code to capture your work and automate tasks.

## Network Architectures

Neural Network Toolbox supports a variety of supervised and unsupervised network architectures. With the toolbox's modular approach to building networks, you can develop custom architectures for your specific problem. You can view the network architecture including all inputs, layers, outputs, and interconnections.

## Supervised Networks

Supervised neural networks are trained to produce desired outputs in response to sample inputs, making them particularly well-suited to modeling and controlling dynamic systems, classifying noisy data, and predicting future events. Neural Network Toolbox supports four types of supervised networks:

- Feedforward networks have one-way connections from input to output layers. They are most commonly used for prediction, pattern recognition, and nonlinear function fitting. Supported feedforward networks include feedforward backpropagation, cascade-forward backpropagation, feedforward input-delay backpropagation, linear, and perceptron networks.
- Radial basis networks provide an alternative, fast method for designing nonlinear feedforward networks. Supported variations include generalized regression and probabilistic neural networks.
- Dynamic networks use memory and recurrent feedback connections to recognize spatial and temporal patterns in data. They are commonly used for time-series prediction, nonlinear dynamic system modeling, and control systems applications. Prebuilt dynamic networks in the toolbox include focused and distributed time-delay, nonlinear autoregressive (NARX), layer-recurrent, Elman, and Hopfield networks. The toolbox also supports dynamic training of custom networks with arbitrary connections.
- Learning vector quantization (LVQ) is a powerful method for classifying patterns that are not linearly separable. LVQ lets you specify class boundaries and the granularity of classification.

## Unsupervised Networks

Unsupervised neural networks are trained by letting the network continually adjust itself to new inputs. They find relationships within data and can automatically define classification schemes. Neural Network Toolbox supports two types of self-organizing, unsupervised networks:

- Competitive layers recognize and group similar input vectors, enabling them to automatically sort inputs into categories. Competitive layers are commonly used for classification and pattern recognition.
- Self-organizing maps learn to classify input vectors according to similarity. Like competitive layers, they are used for classification and pattern recognition tasks; however, they differ from competitive layers because they are able to preserve the topology of the input vectors, assigning nearby inputs to nearby categories.

## Training and Learning Functions

Training and learning functions are mathematical procedures used to automatically adjust the network's weights and biases. The training function dictates a global algorithm that affects all the weights and biases of a given network. The learning function can be applied to individual weights and biases within a network.

Neural Network Toolbox supports a variety of training algorithms, including gradient descent methods, the Levenberg-Marquardt algorithm (LM), and the resilient backpropagation algorithm (Rprop). The toolbox's modular framework lets you quickly develop custom training algorithms that can be integrated with built-in algorithms. While training your neural network, you can use error weights to define the relative importance of desired outputs, which can be prioritized in terms of sample, timestep (for time-series problems), output element, or any combination of these. You can access training algorithms from the command line or via a graphical tool that shows a diagram of the network being trained and reports the progress of the training process.

## Improving Generalization

Improving the network's ability to generalize helps prevent overfitting, a common problem in neural network design. Overfitting occurs when a network has memorized the training set but has not learned to generalize to new inputs. Overfitting produces a relatively small error on the training set but a much larger error when new data is presented to the network.

Neural Network Toolbox provides two solutions to improve generalization:

- Regularization modifies the network's performance function (the measure of error that the training process minimizes). By including the sizes of the weights and biases, regularization produces a network that performs well with the training data and exhibits smoother behavior when presented with new data.
- Early stopping uses two different data sets: the training set, to update the weights and biases, and the validation set, to stop training when the network begins to overfit the data (see the sketch after this list).
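As a hedged, library-agnostic sketch of early stopping (not toolbox code; all names are illustrative), training stops once the validation error has not improved for a set number of epochs:

```python
def train_with_early_stopping(net, train_step, val_error, max_epochs=500, patience=6):
    """Stop training when validation error has not improved for `patience` epochs."""
    best_err, best_net, waited = float("inf"), net, 0
    for epoch in range(max_epochs):
        net = train_step(net)          # update weights and biases on the training set
        err = val_error(net)           # error on the held-out validation set
        if err < best_err:
            best_err, best_net, waited = err, net, 0
        else:
            waited += 1
            if waited >= patience:     # validation error keeps rising: the network is overfitting
                break
    return best_net
```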

## Applications of Neural Networks

- Character Recognition: the idea of character recognition has become very important as handheld devices like the Palm Pilot are becoming increasingly popular. Neural networks can be used to recognize handwritten characters.
- Image Compression: neural networks can receive and process vast amounts of information at once, making them useful in image compression. With the Internet explosion and more sites using more images, using neural networks for image compression is worth a look.

- Stock Market Prediction: the day-to-day business of the stock market is extremely complicated. Many factors weigh in whether a given stock will go up or down on any given day. Since neural networks can examine a lot of information quickly and sort it all out, they can be used to predict stock prices.
- Traveling Salesman Problem: interestingly enough, neural networks can solve the traveling salesman problem, but only to a certain degree of approximation.
- Medicine, Electronic Nose, Security, and Loan Applications: these are some applications that are in their proof-of-concept stage, with the exception of a neural network that decides whether or not to grant a loan, something that has already been used more successfully than many humans.
- Miscellaneous Applications: these are some very interesting (albeit at times a little absurd) applications of neural networks.

## Application principles

The solution of a problem must be simple. If a problem can be solved with a small look-up table that can be easily calculated, that is preferable to a complex neural network with many layers that learns with back-propagation.

## Application principles

Speed is crucial for computer game applications. If possible, on-line neural network solutions should be avoided, because they consume a lot of time. Preferably, neural networks should be applied off-line, so that the learning phase does not happen during game-playing time.

## Application principles

On-line neural network solutions should be very simple. Using many-layer neural networks should be avoided if possible, and complex learning algorithms should be avoided. If possible, a priori knowledge should be used to set the initial parameters so that only very short training is needed for optimal performance.

## Application principles

All the available data about the problem should be collected. Having redundant data is usually a smaller problem than not having the necessary data. The data should be partitioned into training, validation and testing data.

## Application principles

The neural network solution of a problem should be selected from a large enough pool of potential solutions. Because of the nature of neural networks, it is likely that a single solution built in isolation will not be the optimal one. If a pool of potential solutions is generated and trained, it is more likely that one close to the optimal solution is found.

## Problem

Problem analysis:

- variables
- modularisation into sub-problems
- objectives
- data collection

## Neural network solution

Data collection and organization: the data is partitioned into training, validation and testing data sets (a small partitioning sketch follows below). For example:

- Training set: ~75% of the data
- Validation set: ~10% of the data
- Testing set: ~5% of the data
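As an illustrative sketch (the percentages are the slide's example figures, the rest is assumed), a random partition into the three sets might look like:

```python
import random

def partition(data, train=0.75, val=0.10, test=0.05, seed=0):
    """Shuffle and split the data into training, validation and testing sets."""
    shuffled = data[:]
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train, n_val, n_test = int(train * n), int(val * n), int(test * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:n_train + n_val + n_test])

train_set, val_set, test_set = partition(list(range(100)))
print(len(train_set), len(val_set), len(test_set))  # 75 10 5
```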


## Neural network solution

Neural network solution selection: each candidate solution (e.g. Network 4, Network 7, Network 11) is tested with the validation data, and the best-performing network is selected.
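A minimal selection loop, as a sketch only (the names `trained_networks` and `validation_error` are assumptions):

```python
def select_best(candidates, validation_error):
    """Pick the candidate network with the lowest error on the validation data."""
    return min(candidates, key=validation_error)

# Usage sketch: `trained_networks` is a pool of trained candidate networks,
# `validation_error` evaluates a network on the validation set.
# best = select_best(trained_networks, validation_error)
```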

## Neural network solution

Choosing a solution representation:

- the solution can be represented directly as a neural network, by specifying the parameters of the neurons;
- alternatively, the solution can be represented as a multi-dimensional look-up table;
- the representation should allow fast use of the solution within the application.

## Summary

- Neural network solutions should be kept as simple as possible.
- For the sake of gaming speed, neural networks should preferably be applied off-line.
- A large data set should be collected and divided into training, validation, and testing data.
- Neural networks are well suited as solutions to complex problems.
- A pool of candidate solutions should be generated, and the best candidate solution should be selected using the validation data.
- The solution should be represented in a way that allows fast application.