
SUBMITTED BY - Nisha    SUBMITTED TO - Er. DILIP KUMAR

An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by biological nervous systems.
* It is composed of a large number of highly interconnected processing elements called neurons.
* An ANN is configured for a specific application, such as pattern recognition or data classification.
* Conventional computers use an algorithmic approach, but neural networks work similarly to the human brain and learn by example.
* A neuron is a many-inputs, one-output unit.
* Its output can be excited or not excited; the axon carries signals away from the cell.
* Incoming signals from other neurons determine whether the neuron shall excite ("fire").
* The output is subject to attenuation in the synapses, which are junction parts of the neuron; synapse size changes in response to learning.
During training the network is taught whether a neuron should fire for a given input pattern:
* some sets of patterns cause it to fire (the 1-taught set) and others prevent it from doing so (the 0-taught set).

* Associative mapping, in which the network learns to produce a particular pattern on the set of output units whenever another particular pattern is applied on the set of input units. The associative mapping can generally be broken down into two mechanisms:
Nearest-neighbour recall, where the output pattern produced corresponds to the input pattern stored which is closest to the pattern presented, and

Interpolative recall, where the output pattern is a similarity-dependent interpolation of the patterns stored corresponding to the pattern presented. Yet another paradigm, which is a variant of associative mapping, is classification, i.e. when there is a fixed set of categories into which the input patterns are to be classified.
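Nearest-neighbour recall can be sketched as a lookup over stored input-output pairs. This is a minimal illustration, not from the original slides; the binary patterns and the Hamming-distance metric below are assumptions:

```python
# Sketch of nearest-neighbour recall over stored binary patterns.
# Patterns and the Hamming-distance metric are illustrative assumptions.

def hamming(a, b):
    """Number of positions where two binary patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def nearest_neighbour_recall(stored, presented):
    """Return the output pattern paired with the stored input pattern
    closest to the presented pattern."""
    best_input = min(stored, key=lambda inp: hamming(inp, presented))
    return stored[best_input]

# Stored input -> output associations (hypothetical patterns).
stored = {
    (1, 0, 1, 0): (1, 1),
    (0, 1, 0, 1): (0, 0),
}
recalled = nearest_neighbour_recall(stored, (1, 0, 1, 1))
```

Presenting (1, 0, 1, 1) recalls the output taught for (1, 0, 1, 0), since that stored pattern is closest.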
During the learning process global information may be required. Paradigms of supervised learning include error-correction learning, reinforcement learning and stochastic learning.
An important issue concerning supervised learning is the problem of error convergence, i.e. the minimisation of error between the desired and computed unit values. The aim is to determine a set of weights which minimises the error.
One well-known method, which is common to many learning paradigms, is the least mean square (LMS) convergence algorithm.
* Unsupervised learning uses no external teacher and is based upon only local information. It is also referred to as self-organisation, in the sense that it self-organises data presented to the network and detects their emergent collective properties.
Another aspect of learning concerns the distinction or not of a separate learning phase, during which the network is trained, and a subsequent operation phase. We say that a neural network learns off-line if the learning phase and the operation phase are distinct. A neural network learns on-line if it learns and operates at the same time. Usually, supervised learning is performed off-line, whereas unsupervised learning is performed on-line.
IT CALCULATES HOW THE ERROR CHANGES AS EACH WEIGHT IS INCREASED OR DECREASED SLIGHTLY.
* The algorithm computes each EW by first computing the EA, the rate at which the error changes as the activity level of a unit is changed. For output units, the EA is simply the difference between the actual and the desired output.
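The relationship between EA and EW for an output unit can be sketched as below. This is an illustrative example, not from the slides; the linear output unit and all numeric values are assumptions. For such a unit, the EW of a weight is the unit's EA times the activity flowing through that connection:

```python
# Sketch of computing EA (error derivative w.r.t. activity) and
# EW (error derivative w.r.t. weight) for a single linear output unit.
# All numeric values are illustrative assumptions.

hidden_activities = [0.5, 0.8, 0.1]   # activities feeding the output unit
weights = [0.4, -0.2, 0.7]
desired = 1.0

# Forward pass: output activity of a linear unit.
actual = sum(w * a for w, a in zip(weights, hidden_activities))

# EA for an output unit: difference between actual and desired output.
ea = actual - desired

# EW for each incoming weight: EA scaled by the activity on that connection.
ews = [ea * a for a in hidden_activities]
```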

the weights and the input-output function (transfer function) that is specified for the units. This function typically falls into one of three categories:
* linear (or ramp)
* threshold
* sigmoid
For linear units, the output activity is proportional to the total weighted input.
For threshold units, the output is set at one of two levels, depending on whether the total input is greater than or less than some threshold value.
For sigmoid units, the output varies continuously but not linearly as the input changes. Sigmoid units bear a greater resemblance to real neurones than do linear or threshold units, but all three must be considered rough approximations.
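The three categories can be sketched as simple functions. This is an illustrative sketch; the particular threshold value, slope, and output levels chosen below are assumptions:

```python
import math

# Sketches of the three common unit input-output (transfer) functions.
# Threshold value, slope, and output levels are illustrative assumptions.

def linear(total_input, slope=1.0):
    """Output activity proportional to the total weighted input."""
    return slope * total_input

def threshold(total_input, theta=0.0):
    """Output at one of two levels, depending on the threshold theta."""
    return 1.0 if total_input > theta else 0.0

def sigmoid(total_input):
    """Output varies continuously but not linearly with the input."""
    return 1.0 / (1.0 + math.exp(-total_input))
```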
* Features of finger prints
* Finger print recognition system
* Why neural networks?
* Goal of the system
* Preprocessing system
* Feature extraction using neural network

* Finger prints are the unique pattern of ridges and valleys in every person's fingers.
* Their patterns are permanent and unchangeable for the whole life of a person.
* They are unique: the probability that two fingerprints are alike is only 1 in 1.9x10^15.
* Their uniqueness is used for identification of a person.
Image acquisition -> Edge detection -> Ridge extraction -> Thinning -> Feature extraction -> Classification

* Image acquisition: the acquired image is digitised into a 512x512 image with each pixel assigned a particular gray scale value (raster image).
* Edge detection and thinning: these are preprocessing steps that remove noise and enhance the image.
* Feature extraction: this is the step where we point out features such as ridge bifurcations and ridge endings of the finger print, with the help of a neural network.
* Classification: here a class label is assigned to the image depending on the extracted features.
* Neural networks enable us to find solutions where algorithmic methods are computationally intensive or do not exist.
* There is no need to program neural networks; they learn from examples.
* Neural networks offer a significant speed advantage over conventional techniques.
THE FIRST PHASE OF FINGER PRINT RECOGNITION IS TO CAPTURE AN IMAGE.
The image is captured using total internal reflection of light (TIR). The image is stored as a two-dimensional array of 512x512 size, each element of the array representing a pixel and assigned a gray scale value from 256 gray scale levels.
After the image is captured, noise is removed using edge detection, ridge extraction and thinning.
* Edge detection: the edge of the image is defined where the gray scale levels change greatly. Also, the orientation of ridges is determined for each 32x32 block of pixels using the gray scale gradient.
* Ridge extraction: ridges are extracted using the fact that the gray scale values of pixels are maximum along the direction normal to the ridge orientation.
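The per-block orientation estimate described above is commonly computed by averaging gray scale gradients. The least-squares doubled-angle formula below is a standard choice, not stated in the slides, so treat it as an assumption:

```python
import math

# Sketch: estimate the dominant gradient direction of one pixel block from
# its per-pixel gray scale gradients; the ridge runs normal to this
# direction. The doubled-angle averaging formula is a standard choice and
# an assumption beyond the slides.

def block_orientation(gx, gy):
    """gx, gy: per-pixel gradient components for one block (e.g. 32x32).
    Returns the dominant gradient direction in radians."""
    gxy = sum(x * y for x, y in zip(gx, gy))
    gxx_minus_gyy = sum(x * x - y * y for x, y in zip(gx, gy))
    # Doubling the angle makes opposite gradient directions reinforce
    # rather than cancel when averaged.
    return 0.5 * math.atan2(2.0 * gxy, gxx_minus_gyy)
```

For example, a block whose gradients all point at 45 degrees yields an angle of pi/4, so the ridge orientation there is normal to pi/4.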
* Thinning: the extracted ridges are converted into a skeletal structure in which ridges are only one pixel wide. Thinning should not:
* remove isolated as well as surrounded pixels,
* break connectedness,
* make the image shorter.
* A multilayer perceptron network of three layers is trained to detect minutiae in the thinned image.
* The first layer has nine perceptrons.
* The hidden layer has five perceptrons.
* The output layer has one perceptron.
The network is trained to output '1' when the input window is centered at a minutia, and it outputs '0' when minutiae are not present.
(Figure: three-layer network with input, hidden and output layers.)
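The 9-5-1 network described above can be sketched as a forward pass. This is an illustrative sketch only: the slides do not specify the transfer function or the trained weights, so the sigmoid units and the random placeholder weights below are assumptions:

```python
import math
import random

# Sketch of a 9-5-1 multilayer perceptron forward pass for minutiae
# detection: 9 inputs (a 3x3 pixel window), 5 hidden units, 1 output.
# Weights are random placeholders; a trained network would learn them.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer of sigmoid units."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)
w_hidden = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(5)]
b_hidden = [0.0] * 5
w_out = [[random.uniform(-1, 1) for _ in range(5)]]
b_out = [0.0]

window = [1, 0, 1, 0, 1, 0, 0, 1, 0]  # hypothetical 3x3 binary pixel window
hidden = layer(window, w_hidden, b_hidden)
output = layer(hidden, w_out, b_out)[0]  # near 1 would indicate a minutia
```

After training, sliding this window across the thinned image and thresholding the output would mark the minutiae locations.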
•Neural network solutions should be kept as simple as possible.

•For the sake of speed, neural networks should preferably be applied off-line.

•A large data set should be collected and divided into training, validation, and testing data.

•Neural networks fit as solutions of complex problems.

•A pool of candidate solutions should be generated, and the best candidate solution should be selected using the validation data.

•The solution should be represented to allow fast application.
