
Fingerprint Matching Using Hough Transform and Latent Prints

ABSTRACT
Fingerprint is considered the most robust biometric in the sense that it can be acquired even
without the active cooperation of the subject. Fingerprints are unique in terms of the location and
direction of the minutiae points they contain.
Latents are partial fingerprints that are usually smudgy, cover a small area, and contain large
distortion. Due to these characteristics, latents have a significantly smaller number of minutiae points
compared to full (rolled or plain) fingerprints. The small number of minutiae and the noise characteristic of
latents make it extremely difficult to automatically match latents to their mated full prints that are stored in
law enforcement databases.
Further, existing matchers often rely on features that are not easy to extract from poor-quality latents. In this
paper, we propose a new fingerprint matching algorithm which is especially designed for matching latents.
The proposed algorithm uses a robust alignment algorithm (descriptor-based Hough transform) to align
fingerprints and measures similarity between fingerprints by considering both minutiae and orientation field
information.

1. INTRODUCTION
Fingerprints are a widely used form of biometric identification and a reliable means of identifying a person.
Fingerprint recognition has forensic applications such as criminal investigation and finding missing children; government
applications such as social security, border control, passport control, and driving licenses; and commercial applications such as
e-commerce, internet access, ATMs, and credit cards [1]. Because of their uniqueness and stability over time,
fingerprints have been used for identification and verification for over a century.
There are essentially three types of fingerprints in law enforcement applications (see Fig. 1):
(i) rolled, which is obtained by rolling the finger nail-to-nail either on a paper (in this case
ink is first applied to the finger surface) or the platen of a scanner; (ii) plain, which is obtained by placing the finger
flat on a paper or the platen of a scanner without rolling; and (iii) latents, which are lifted from surfaces of objects that
are unconsciously touched or handled by a person typically at crime scenes. Lifting of latents may involve a
complicated process, and it can range from simply photographing the print to more complex dusting or chemical
processing [2].
A fingerprint is believed to be unique to each person (and each finger); even the fingerprints of
identical twins are different. The pattern is quite stable throughout our lifetime: in case of a cut, the same
pattern will grow back. The features of a fingerprint depend on the nerve growth in the skin's surface. This
growth is determined by genetic factors and environmental factors such as nutrients, oxygen levels and
blood flow, which are unique for every individual [19]. Fingerprints are one of the most mature
biometric technologies and are used as evidence in courts of law all over the world. Fingerprints
are, therefore, used in forensic divisions worldwide for criminal investigation.
The performance of matching techniques relies partly on the quality of the input fingerprint. In
practice, due to skin conditions, sensor noise, incorrect finger pressure and poor-quality fingers (e.g., of
elderly people and manual workers), a significant percentage of fingerprint images is of poor quality. This
makes it quite difficult to compare fingerprints.

The central question that will be answered in this report is:

How are fingerprints compared to each other?

Based on this, the following questions can be set up:
- Which steps are to be taken before the actual matching can take place?
- Which techniques are there to match fingerprints and which one is most used in practice?
- How does this most-used matching technique work?

In chapter 2 the whole process of fingerprint matching is globally described. It consists of five steps, all
further explained in the succeeding chapters. Chapter 3 contains the first step of that process: preprocessing
the image. A few enhancement techniques are discussed. After that, there is a classification step, described
in chapter 4. In chapter 5 the actual matching is explained. The method specified is minutiae-based
matching, a method that is well known and widely used. Finally, the conclusions can be found in chapter 6.
Law enforcement agencies have been using fingerprint recognition technology to identify suspects
since the early 20th century [2]. Nowadays, the automated fingerprint identification
system (AFIS) has become an indispensable tool for law enforcement agencies.
Rolled prints contain the largest amount of information about the ridge structure on a fingerprint
since they capture the largest finger surface area; latents usually contain the least amount of information for
matching or identification because of their size and inherent noise. Compared to rolled or plain fingerprints,
latents are smudgy and blurred, capture only a small finger area, and have large nonlinear distortion due to
pressure variations.
Due to their poor quality and small area, latents have a significantly smaller number of minutiae
compared to rolled or plain prints (the average number of minutiae in NIST Special Database 27 (NIST
SD27) [3] images is 21 for latents versus 106 for their mated rolled prints). These characteristics make the
latent fingerprint matching problem very challenging.
Fingerprint examiners who perform manual latent fingerprint identification follow a procedure
referred to as ACE-V (analysis, comparison, evaluation and verification) [4]. Because the ACE-V
procedure is quite tedious and time consuming for latent examiners, latents are usually matched against full
prints of a small number of suspects identified by other means, such as eyewitness descriptions or M.O.
(modus operandi). With the availability of AFIS, fingerprint examiners are able to match latents against a
large fingerprint database using a semiautomatic procedure that consists of the following stages: (i) manually
mark the features (minutiae and singular points) in the latent, (ii) launch an AFIS search, and (iii) visually
verify the top-N (N is typically 50) candidate fingerprints returned by AFIS. The accuracy and speed of this
latent matching procedure is still not satisfactory. It certainly does not meet the lights-out mode of operation
desired by the FBI and included in the Next Generation Identification.
For fingerprint matching, there are two major problems which need to be solved. The first is to align
the two fingerprints to be compared and the second is to compute a match score between the two
fingerprints. Alignment between a latent and a rolled print is a challenging problem because latents often
contain a small number of minutiae and undergo large skin distortion. To deal with these two problems, we
propose the descriptor-based
Hough transform (DBHT), which is a combination of the generalized Hough transform and a local
minutiae descriptor, called Minutia Cylinder Code (MCC) [6]. The MCC descriptor improves the
distinctiveness of minutiae while the Hough transform method can accumulate evidence as well as improve
the robustness against distortion. Match score computation between a latent and a rolled print is also
challenging because the number of mated minutiae is usually small. To address this issue, we further
consider the orientation field as a factor in computing the match score. Since we only have manually marked
minutiae for latents, a reconstruction algorithm is used to obtain the orientation field from minutiae.
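To make the voting idea behind the descriptor-based Hough transform concrete, the following is a toy sketch, not the authors' implementation: the pairwise descriptor similarity matrix `sim`, the bin sizes, and the simple peak pick are all illustrative assumptions.

```python
import numpy as np

def dbht_align(lat, rol, sim, bin_xy=10, bin_th=np.pi / 18):
    """Toy descriptor-based Hough transform: every latent/rolled minutia
    pair votes for a rigid transform (dx, dy, dtheta), and each vote is
    weighted by the local-descriptor similarity of that pair.  Minutiae
    are rows of (x, y, theta); the accumulator peak is returned."""
    votes = {}
    for i, (xl, yl, tl) in enumerate(lat):
        for j, (xr, yr, tr) in enumerate(rol):
            dth = (tr - tl) % (2 * np.pi)
            c, s = np.cos(dth), np.sin(dth)
            # rotate the latent minutia by dtheta, then solve for the translation
            dx = xr - (c * xl - s * yl)
            dy = yr - (s * xl + c * yl)
            key = (round(dx / bin_xy), round(dy / bin_xy), round(dth / bin_th))
            votes[key] = votes.get(key, 0.0) + sim[i, j]
    bx, by, bt = max(votes, key=votes.get)
    return bx * bin_xy, by * bin_xy, bt * bin_th

# Two minutiae sets related by a pure translation of (30, -20).
lat = np.array([[10.0, 10.0, 0.5], [40.0, 25.0, 1.0], [70.0, 60.0, 2.0]])
rol = lat + np.array([30.0, -20.0, 0.0])
sim = np.eye(3)  # pretend descriptors match only the true correspondences
dx, dy, dth = dbht_align(lat, rol, sim)
```

Weighting each vote by descriptor similarity is what distinguishes this from a plain generalized Hough transform: spurious minutia pairs contribute little to the accumulator.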

PREVIOUS METHODS:
IMAGE BINARIZATION
The problem of image binarization has been widely studied in the field of image processing. Otsu
(1979) [3] proposed an optimum global thresholding method based on the idea of maximizing the between-
class variance. Moayer and Fu (1986) [6] proposed a binarization technique based on an iterative
application of the Laplacian operator and a pair of dynamic thresholds. During this phase the gray scale image is
transformed into a binary image by computing a global adaptive threshold.
This thresholding approach gives a particular threshold value for each image, which we consider
for our simulation and testing phase. In this way each pixel of the core region is transferred to one of two
intensity levels, as compared to the 256 intensity levels of the gray scale image, so processing the
binary image is easier. Figure 5 shows the binarized image of the corresponding original fingerprint image. A
disadvantage of binarization is that a ridge termination near the boundary is considered a minutia
even though it is not an actual minutia. This problem of binarization is eliminated in the thinning process.
A similar method was proposed by Xiao and Raafat (1991) [7], in which a local threshold is used after
the convolution step to deal with regions with different contrast. Verma, Majumdar, and Chatterjee (1987)
[8] proposed a fuzzy approach that uses an adaptive threshold to preserve the same number of 1- and 0-valued
pixels for each neighbourhood. Ratha, Chen, and Jain (1995) [9] introduced a binarization approach based
on peak detection in the gray-level profiles along sections orthogonal to the ridge orientation.
Among all these approaches, Otsu's thresholding has been widely used. The reason for its popularity
is that it works directly on the gray scale histogram and hence is a very fast method once the histogram is
computed. Otsu's thresholding works on the assumption that the image has only two ranges of intensities,
i.e., the image histogram is mostly bi-modal, and it selects the threshold in between the two peaks. But this may not
always be true, as in the case of fingerprints.
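Otsu's criterion can be sketched in a few lines; the function and the synthetic bimodal test image below are illustrative, not part of the described system.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the gray level that maximizes the between-class
    variance of the histogram (equivalently, minimizes the within-class
    variance)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))    # cumulative mean up to level t
    mu_t = mu[-1]                         # global mean
    # between-class variance for every candidate threshold t
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

# Clearly bimodal toy "image": dark ridges near 40, light background near 200.
img = np.concatenate([np.full(500, 40), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(img)
```

Because it needs only the 256-bin histogram, the whole search over thresholds is essentially free, which is the speed advantage mentioned above.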

Figure 5: A fingerprint image and its histogram (x-axis: image intensity values; y-axis: frequency).

Fingerprint Image Enhancement

Fingerprint image enhancement aims to make the image clearer for subsequent operations. Since the
fingerprint images acquired from sensors or other media are not assured of perfect quality, enhancement
methods that increase the contrast between ridges and furrows, and that reconnect falsely
broken ridge points caused by an insufficient amount of ink, are very useful for maintaining a high accuracy in
fingerprint recognition.

Histogram Equalization:

Histogram equalization expands the pixel value distribution of an image so as to
increase the perceptional information. The original histogram of a fingerprint image is of the
bimodal type; after histogram equalization the histogram occupies the full range from 0 to 255, and the
visualization effect is enhanced.
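The standard cumulative-histogram mapping can be sketched as follows; the low-contrast synthetic image is an illustrative stand-in for a fingerprint.

```python
import numpy as np

def hist_equalize(img):
    """Map gray levels through the normalized cumulative histogram so that
    the output occupies (approximately) the full 0..255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)         # level -> new level
    return lut[img]

# Low-contrast image squeezed into the range [100, 130].
img = np.random.default_rng(0).integers(100, 131, size=(64, 64)).astype(np.uint8)
out = hist_equalize(img)
```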


Figure 6.1: The original histogram of a fingerprint image. Figure 6.2: Histogram after histogram equalization.

Figure 6.3: Original image. Figure 6.4: Enhanced image after histogram equalization.




Fingerprint Image Binarization
Fingerprint image binarization transforms an 8-bit gray fingerprint image into a 1-bit
image with 0-values for ridges and 1-values for furrows. After the operation, ridges in the
fingerprint are highlighted in black while furrows are white.
A locally adaptive binarization method is performed to binarize the fingerprint image.
The method derives its name from the mechanism of transforming a pixel value to 1 if the value
is larger than the mean intensity value of the current block (16x16) to which the pixel belongs
[Figure 7.1 and Figure 7.2].

Figure 7.1: Enhanced image. Figure 7.2: Image after adaptive binarization.
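The block-mean rule described above can be sketched directly; the block size follows the text, while the striped test image is an illustrative assumption.

```python
import numpy as np

def adaptive_binarize(img, block=16):
    """Locally adaptive binarization: a pixel becomes 1 (furrow) when it
    exceeds the mean of its block, and 0 (ridge) otherwise, matching the
    0-for-ridge convention above."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = (tile > tile.mean()).astype(np.uint8)
    return out

# Synthetic stripes: dark vertical "ridges" in every other column.
img = np.tile(np.array([[60, 180]], dtype=np.uint8), (32, 16))
b = adaptive_binarize(img)
```

Using a per-block mean rather than a single global threshold is what lets this method cope with regions of different contrast.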
THINNING:
Thinning is the process of reducing the thickness of each line of the fingerprint pattern to a single pixel
width. Here we consider templates of (3x3) windows for thinning and trimming of unwanted pixels. In
previous thinning processes, the applied algorithm takes the whole image as its object and processes
its operation on it, so it may not be possible to get an exactly thinned image, i.e., a line image of single pixel
width. In our process, however, we consider each pixel as the object space, apply rules at the pixel
level, and try to remove singular pixels as well as other unwanted pixels from the fingerprint. These rules are
purely observational, and we consider all possibilities to eliminate unnecessary pixels from the image and
reduce the image to single pixel width. This thinning process acts
iteratively on the fingerprint.
Exact thinning is ensured when the ridge is only one pixel wide everywhere. However,
this is not always the case: there are locations where the ridge is two pixels wide at some erroneous pixels.
An erroneous pixel is defined as one with more than two 4-connected neighbors. Hence, before
minutiae extraction, an algorithm is needed to eliminate the erroneous pixels while
preserving the pattern connectivity. For this purpose an enhanced thinning algorithm is applied after the
thinning process.
Enhanced thinning algorithm
Step 1: Scan the fingerprint pattern row wise from top to bottom. Check if the pixel is 1.
Step 2: Count its 4-connected neighbors.
Step 3: If the count is greater than two, mark the pixel as erroneous.
Step 4: Remove the erroneous pixel.
Step 5: Repeat steps 1-4 until the whole image is scanned and the erroneous pixels are removed.
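The five steps above can be sketched as follows; the plus-shaped test skeleton is an illustrative assumption.

```python
import numpy as np

def remove_erroneous(skel):
    """Enhanced-thinning cleanup (steps 1-5 above): repeatedly delete any
    ridge pixel with more than two 4-connected ridge neighbors."""
    skel = skel.copy()
    changed = True
    while changed:
        changed = False
        for y in range(1, skel.shape[0] - 1):
            for x in range(1, skel.shape[1] - 1):
                if skel[y, x] == 1:
                    n4 = skel[y - 1, x] + skel[y + 1, x] + skel[y, x - 1] + skel[y, x + 1]
                    if n4 > 2:          # erroneous pixel
                        skel[y, x] = 0
                        changed = True
    return skel

# A plus-shaped junction: the center pixel has four 4-connected neighbors.
skel = np.zeros((5, 5), dtype=np.uint8)
skel[2, 1:4] = 1
skel[1:4, 2] = 1
clean = remove_erroneous(skel)
```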

FEATURE EXTRACTION:
Minutiae are the characteristics of a fingerprint used for identification. These are
points along the ridges such as ridge endings, bifurcations, and short ridges. Many
automatic recognition systems consider only ridge endings and bifurcations as minutiae.
A reason for this is that all other structures, like bridge and island structures, are considered false
minutiae. An advantage of this assumption is that it does not differentiate between these two minutia types:
a real bifurcation may be mistakenly separated into an ending, or an end point may mistakenly join a ridge and
form a bifurcation.
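A common way to detect ridge endings and bifurcations on a thinned image (not necessarily the method used here) is the crossing number; the sketch below, with its tiny test skeleton, is an illustrative assumption.

```python
import numpy as np

def crossing_number(skel, y, x):
    """Half the sum of absolute differences around the 8-neighborhood of a
    ridge pixel: CN = 1 -> ridge ending, CN = 2 -> interior ridge pixel,
    CN = 3 -> bifurcation."""
    # 8-neighbors in circular order, with the first repeated to close the loop
    ring = [skel[y - 1, x], skel[y - 1, x + 1], skel[y, x + 1], skel[y + 1, x + 1],
            skel[y + 1, x], skel[y + 1, x - 1], skel[y, x - 1], skel[y - 1, x - 1]]
    ring.append(ring[0])
    return sum(abs(int(ring[k + 1]) - int(ring[k])) for k in range(8)) // 2

# A short horizontal ridge: its leftmost pixel is an ending (CN = 1).
skel = np.zeros((5, 5), dtype=np.uint8)
skel[2, 1:4] = 1
cn = crossing_number(skel, 2, 1)
```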
Image Binarization based on Rough Set
Before the proposed rough-set-based binarization process is described, it is useful for the
reader to have some idea of rough sets; for this purpose a short note on rough sets is presented here.
Details of rough set theory can be found in [4].
Rough Set Theory
The concept of rough-set theory was introduced by Pawlak [4]. Rough-set theory has become a popular
tool for generating logical rules for classification and prediction. It is a mathematical tool to deal with
vagueness and uncertainty. The focus of rough-set theory is on the ambiguity of objects. A rough set is
a set with uncertainties in it, i.e., some elements may or may not be part of the set. Rough-set theory uses
approximations for these rough sets: a rough set can be defined in terms of lower and upper approximations.
The uncertainties at object boundaries can be handled by describing different objects as rough sets
with lower (inner) and upper (outer) approximations. Figure 8 shows an object along with its lower
and upper approximations. Consider an image as a collection of pixels. This collection can be partitioned
into different blocks (granules), which are used to find the object and background regions. From the
roughness of these regions, rough entropy is computed to obtain the threshold. The details of these concepts
are now discussed.

Figure 8: An example of object lower and upper approximations and background lower and upper
approximations.

A. Granulation:

Granulation involves decomposition of a whole into parts. In this step the image is divided into blocks of
different sizes; these blocks are termed granules. For this decomposition a quad-tree representation of the
image is used. Quad-trees are most often used to partition a two-dimensional space by recursively
subdividing it into four quadrants or regions, where each region is subdivided on the basis of some condition. In
this case the following condition has been used for the decomposition.
For a block B, if 90% or more of its pixels are either greater than or less than some predefined
threshold, the block is not split. Let x_i, i = 1, 2, ..., n, be the pixel values belonging to block B. The block B is not
split if 90% of the x_i >= T or 90% of the x_i <= T, where T is a pre-specified threshold; otherwise the block is split.
Figure 9 shows the quad-tree granules obtained from a fingerprint image.
This condition is used because if 90% of the pixels fall in the same range the block is considered
homogeneous; otherwise the variation in intensity values forces the block to be split, until only
homogeneous blocks are left.

Figure 9: Fingerprint Image and granules obtained by quad tree decomposition.
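The 90% split condition can be sketched as a small recursive function; the square power-of-two image, the minimum block size, and the toy test image are illustrative assumptions.

```python
import numpy as np

def quadtree_granules(img, T, min_size=2):
    """Quad-tree granulation: keep a block whole when at least 90% of its
    pixels lie on one side of the threshold T, otherwise split it into
    four quadrants.  Assumes a square image with power-of-two side."""
    def split(y, x, size):
        blk = img[y:y + size, x:x + size]
        frac_low = np.mean(blk <= T)
        # homogeneous enough (>= 90% on one side of T) or too small to split
        if frac_low >= 0.9 or frac_low <= 0.1 or size <= min_size:
            return [(y, x, size)]
        h = size // 2
        return (split(y, x, h) + split(y, x + h, h) +
                split(y + h, x, h) + split(y + h, x + h, h))
    return split(0, 0, img.shape[0])

# Homogeneous dark half, bright half with a little dark noise.
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 255
img[0, 4] = 0   # the noisy pixel stays below the 10% tolerance
granules = quadtree_granules(img, T=128)
```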

B. Object-Background Approximation
Once the image is split into granules, the next task is to identify each granule as either object or
background. The image is divided into granules based on the criteria of Section A. Let G be the total
number of granules; the problem is then to classify each granule as either object or background. At this point we
also need to know the dynamic range of the image. Let [0, L] be the dynamic range of the image. Our final
aim is to find a threshold T, 0 <= T <= L-1, so that the image can be binarized based on T.
Granules with pixel values less than T characterize the object, while granules with values greater than T
characterize the background. Given this separation, object and background can be approximated by two
sets as follows.
The lower approximation of the object (background): of all blocks belonging to the object (background),
the blocks all of whose pixels fall in the object (background) intensity range form the lower (inner)
approximation of the object (background).
The upper approximation of the object (background): all blocks belonging, even partially, to the object
(background) form the upper (outer) approximation of the object (background). Once we have the two above-mentioned
approximations, the roughness of each region needs to be computed.
C. Roughness of Object and Background
Roughness of the object and background is computed as described in [5]. We only state the definitions
here; a detailed explanation can be obtained from [5].
The roughness of the object is

    R_o = 1 - |O_lower| / |O_upper|

where |O_upper| and |O_lower| are the cardinalities of the object's upper and lower approximations,
respectively. Similarly, the roughness of the background is defined as

    R_b = 1 - |B_lower| / |B_upper|

where |B_upper| and |B_lower| are the cardinalities of the background's upper and lower approximations,
respectively.
Roughness is a measure of the uncertainty in the object or background. It can be interpreted as the fraction
of granules, out of the total number of granules, that are not certainly members of the object or background.
Thus the value of the roughness depends on the threshold value T used to obtain the lower and upper
approximations (see Section B) of the object or background.
The rough entropy is now measured on the basis of these two roughness values.


D. Rough Entropy Measure
A measure called rough entropy, based on the concept of image granules, is used as described in [5].
Rough entropy is computed from the object and background roughness as

    RE(T) = -(e/2) [ R_o ln(R_o / e) + R_b ln(R_b / e) ]

where R_o is the object roughness and R_b is the background roughness. Maximization of this rough
entropy measure minimizes the uncertainty, i.e., the roughness of the object and background. The optimum
threshold is the one that maximizes the rough entropy.

E. Algorithm for binarization
Images are processed stepwise, as described in Sections A-D, to get their binarized form.
Following are the steps for the binarization of an image as proposed in this article.

1. Represent the image in the form of a quad-tree decomposition.
2. For a threshold value T, 0 < T <= 255, separate the blocks obtained from the decomposition into object and
background.
3. Find the lower and upper approximations of the object and background.
4. Compute the object and background roughness and from them the rough entropy.
5. Repeat steps 2 to 4 for all values of T, i.e., from T = 1 to 255.
6. Select the value of T for which the rough entropy is maximum as the threshold for binarization.
7. Binarize the image using the optimum threshold obtained.
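The whole threshold search can be sketched end-to-end as follows. For brevity this sketch uses fixed-size granules instead of quad-tree leaves, and the rough-entropy formula follows the form given in [5]; both choices, like the toy test image, are assumptions of this illustration.

```python
import numpy as np

def rough_entropy_threshold(img, block=4):
    """Pick the threshold maximizing rough entropy.  A granule is in the
    object's lower approximation when every pixel <= T and in its upper
    approximation when any pixel <= T (conversely for the background)."""
    e = np.e
    tiles = [img[y:y + block, x:x + block]
             for y in range(0, img.shape[0], block)
             for x in range(0, img.shape[1], block)]
    best_T, best_re = 0, -np.inf
    for T in range(1, 256):
        ol = sum(t.max() <= T for t in tiles)   # object lower approximation
        ou = sum(t.min() <= T for t in tiles)   # object upper approximation
        bl = sum(t.min() > T for t in tiles)    # background lower approximation
        bu = sum(t.max() > T for t in tiles)    # background upper approximation
        ro = 1 - ol / ou if ou else 0.0
        rb = 1 - bl / bu if bu else 0.0
        term = lambda r: r * np.log(r / e) if r > 0 else 0.0
        re_val = -(e / 2) * (term(ro) + term(rb))
        if re_val > best_re:
            best_T, best_re = T, re_val
    return best_T

# Dark object on a bright background with one ambiguous boundary column.
img = np.full((16, 16), 200, dtype=np.uint8)
img[4:12, 4:12] = 40
img[4:12, 12] = 120
T = rough_entropy_threshold(img)
```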

PROPOSED METHOD
1. Latent Fingerprint Matching
Recent research and development efforts on latent fingerprints can be classified into three streams
according to the manual input required from fingerprint examiners: consistent with existing practice,
increasing manual input, or reducing manual input. Because of the large variations in latent fingerprint quality
and the specific requirements of practical applications (crime scenes, border crossing points, battlefields), each
of the three streams has its value.
Improved latent matching accuracy has been reported by using extended features, which are
manually marked for latents [15]-[18]. However, marking extended features (orientation field, ridge
skeleton, etc.) in poor quality latents is very time-consuming and might be feasible only in rare cases. Thus,
some studies have concentrated on latent matching using a reduced amount of manual input, such as
manually marked regions of interest (ROI) and singular points [19], [20].
However, only a small portion of latents can be correctly identified using this approach. Hence our
proposed matcher takes manually marked minutiae as input and is therefore consistent with existing
practice. There have also been some studies on the fusion of multiple matchers [21] or multiple latent prints
[22].

Evaluation of Latent Fingerprint Technologies
NIST has been conducting a multiphase project on Evaluation of Latent Fingerprint Technologies
(ELFT) to evaluate latent feature extraction and matching techniques [23]. Since all participating algorithms
in ELFT are proprietary, we have no information on the details of these algorithms. The purpose of ELFT-
Phase I was to assess the feasibility of latent fingerprint identification systems using Automated Feature
Extraction and Matching (AFEM), while the purpose of ELFT-Phase II was to measure the
performance of state-of-the-art AFEM technology and to evaluate whether it was viable to put those systems
into operational use to reduce the amount of time needed by latent examiners to manually mark latents,
thereby increasing throughput.
In Phase I, latent images were selected from both operational and nonoperational scenarios. The most
accurate system showed a rank-1 accuracy of 80% (100 latents against 10,000 rolled prints) [24]. In Phase
II, latent images were selected from only operational environments. The rank-1 accuracy of the most
accurate system was 97.2% (835 latents against 100,000 rolled prints) [25]. These accuracies cannot be
directly compared since the Phase I and Phase II evaluations used different latent databases. Furthermore,
the quality of latents used in Phase II is better compared to Phase I. As shown in Fig. 2, the quality of
latents varies significantly.
The impressive matching accuracy reported in ELFT does not imply that the current practice of
manually marking minutiae in latents should be changed. Although latents in Phase II were selected from
operational scenarios, they represent successful identifications in actual case examinations using existing
AFIS technology. In the ACE-V process, when the examiner analyzes the latent image he/she decides
whether the latent has value for exclusion only, value for individualization or no value. If a latent is
classified as of no value, no comparison is performed. If the latent is classified in one of the other two
categories, then comparisons are performed and the examiners can make an individualization, an exclusion,
or determine the comparison to be inconclusive. Thus the latents which are successfully identified constitute
only a small part of all latents, namely those of reasonable quality. For this reason, in the ELFT-Phase II report
[25] the authors concluded that only a limited class of latents can
benefit from AFEM technology. NIST has conducted another evaluation of latent fingerprint technologies
using extended feature sets manually marked by latent examiners [26]. In this evaluation, the purpose was
to investigate the matching accuracy when (i) latent images and/or
(ii) sets of manually marked features were provided. This evaluation suggested that the highest accuracy
was obtained when the input included both the latent image and manually marked features.
Evaluation of Latent Examiners
A latent examiner can be viewed as a slow but very accurate matcher. Because examiners are much
slower than automatic matchers, quantitatively estimating their accuracy is not easy.
Hence the numbers of fingerprint pairs used in several black box tests of latent examiners are not large
[27]-[29]. Although the exact numbers reported in these studies may not reflect real practice, the
qualitative conclusions are very useful. It was found that latent examiners' conclusions are not always in
agreement, especially in the case of poor quality latents [27]. In addition, the same examiner can change
his/her conclusion on the same fingerprint pair at a later time [28].
These inconsistencies may increase under bias [29]. The issues associated with including latent
examiners in the latent identification process will only be resolved when automatic matchers can
outperform latent examiners in accuracy.
No matter how successful the application of automatic fingerprint recognition technology might be,
we cannot say fingerprint matching is a solved problem until automatic matchers reach the goal of
outperforming latent examiners.

LATENT MATCHING APPROACH
Given a latent fingerprint (with manually marked minutiae) and a rolled fingerprint, we extract
additional features from both prints, align them in the same coordinate system, and compute a match score
between them. These three steps are described in the following subsections. An overview of the proposed
algorithm is shown in Fig. 3.
A. Feature Extraction
The proposed matching approach uses minutiae and orientation field from both latent and rolled
prints. Minutiae are manually marked by latent examiners in the latent, and automatically extracted
from the rolled print using commercial matchers.
Based on minutiae, local minutiae descriptors are built and used in the proposed descriptor-based alignment
and scoring algorithms.
Orientation field is reconstructed from minutiae location and direction for the latents as proposed in
[30], and orientation field is automatically extracted from the rolled print images by using a gradient-based
method. Local minutia descriptors and orientation field reconstruction are presented in the following
subsections.
1) Local Minutia Descriptor: Local descriptors have been widely used in fingerprint matching (e.g. [6], [8],
[11], [12]). Feng and Zhou [31] evaluated the performance of local descriptors associated with fingerprint
matching in four categories of fingerprints: good quality, poor quality, small common region, and large
plastic distortion. They also coarsely classified local descriptors as image-based, texture-based, and
minutiae-based descriptors. Their results show that the minutiae-based descriptor, Minutia Cylinder Code
(MCC) [6], performs better in three of the four categories, while a texture-based descriptor performs better for
the small common region category.
A minutia cylinder records the neighborhood information of a minutia as a 3-D function. A cylinder
contains several layers, and each layer represents the density of neighboring minutiae
along the corresponding direction. The cylinder can be concatenated into a vector, and therefore the similarity
between two minutia cylinders can be computed efficiently.
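Once cylinders are flattened to vectors, their comparison reduces to a vector distance; the sketch below uses the normalized-Euclidean form of the MCC similarity and is a simplification (the full MCC also handles invalid cells and rotation), with the random vector standing in for a real cylinder.

```python
import numpy as np

def mcc_similarity(c1, c2):
    """MCC-style similarity between two flattened cylinder vectors:
    one minus the normalized Euclidean distance.  Returns a value in
    [0, 1], with 1 for identical cylinders."""
    denom = np.linalg.norm(c1) + np.linalg.norm(c2)
    if denom == 0:
        return 0.0
    return 1.0 - np.linalg.norm(c1 - c2) / denom

# e.g. 3 directional layers of 16x16 cells, flattened into one vector
c = np.random.default_rng(1).random(3 * 16 * 16)
```

Because the comparison is a single vector operation, millions of cylinder pairs can be scored quickly, which is what makes MCC practical inside a Hough-style voting loop.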


Figure 3: Overview of the proposed approach.

The match scores could be further improved by fusing them with other matching scores, or by enhancing
the images to extract more reliable features.
2) Orientation Field Reconstruction: Orientation field estimation using gradient-based methods is very
reliable in good quality images [7].
However, when the image contains noise, this estimation becomes very challenging. A few model-based
orientation field estimation methods have been proposed ([32]-[34]) that use singular points as input to the
model. In the latent fingerprint matching case, it is very challenging to estimate the orientation field based
only on the image due to the poor quality and small area of the latent. Moreover, if singular points are to be
used, they need to be manually marked (and they are not always present) in the latent fingerprint image.
Figure 5: A latent fingerprint in NIST SD27 and the reconstructed orientation field overlaid on the latent.
Hence, we use the minutiae-based orientation field reconstruction algorithm proposed in [30], which
takes the manually marked minutiae in the latent as input and outputs an orientation field. This approach estimates
the local ridge orientation in a block by averaging the directions of neighboring minutiae. The orientation
field is reconstructed only inside the convex hull of the minutiae.
Since the direction of manually marked minutiae is very reliable, the orientation field reconstructed
using this approach is quite accurate, except in areas without minutiae or very close to singular points (see
Fig. 5 for an example).
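The averaging step can be sketched as follows. This is a simplified version of the reconstruction in [30]: the k-nearest-neighbor choice and the block grid are illustrative assumptions, but averaging in doubled-angle space is the standard way to average orientations, since orientation is defined modulo pi.

```python
import numpy as np

def reconstruct_orientation(minutiae, grid, k=3):
    """Estimate the local ridge orientation at each block center as the
    average direction of the k nearest minutiae.  Minutiae are rows of
    (x, y, direction); directions are folded to orientations mod pi and
    averaged in the doubled-angle space to avoid wraparound problems."""
    pts = minutiae[:, :2]
    theta = minutiae[:, 2] % np.pi            # direction -> orientation
    field = np.empty(len(grid))
    for g, (gx, gy) in enumerate(grid):
        d = np.hypot(pts[:, 0] - gx, pts[:, 1] - gy)
        near = np.argsort(d)[:k]
        # vector average of doubled angles, then halve the result
        s = np.sin(2 * theta[near]).mean()
        c = np.cos(2 * theta[near]).mean()
        field[g] = (np.arctan2(s, c) / 2) % np.pi
    return field

# Three minutiae whose directions all correspond to orientation pi/4
# (one points the opposite way along the same ridge).
minutiae = np.array([[0.0, 0.0, np.pi / 4],
                     [10.0, 0.0, np.pi / 4 + np.pi],
                     [5.0, 8.0, np.pi / 4]])
field = reconstruct_orientation(minutiae, grid=[(5.0, 4.0)])
```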

The process of matching

There are multiple approaches in the literature to match fingerprint images. The one that is well
known and often used is called minutiae-based matching. This chapter first describes the process of
fingerprint matching in the case where minutiae-based matching is used.
Fingerprints are unique; no two individuals have exactly the same pattern. In principle,
every finger is suitable to give prints for authentication purposes. However, there are differences between
the ten fingers, and there is no clear evidence as to which specific finger should be used for identification. The
thumb provides a bigger surface area, but there is not much association of the thumb with criminality.
Forefingers have typically been used in civilian applications. In most cases one can assume that the index
finger gives the best performance. Since the majority of people are right-handed, the best choice would
be to take the right hand index finger [25, 26].
After capturing a fingerprint, for example at a crime scene, it is compared to other fingerprints in the
database to find a matching pair. Nowadays this whole process runs automatically via identification marks.
A fingerprint has various identification marks. At the global level there are the ridges that make up a particular
pattern; moreover, there are singular points to detect, like the delta and the core. At the local level one finds
minutiae details, the two most common being the ridge ending and the bifurcation (splitting of a ridge). At the
very fine level there are sweat pores. These can only be used in images of very high quality and are not
discussed in this paper.
The comparison of two prints can only take place if the fingerprints are of reasonable quality and
have values that are measured the same way and mean the same thing. A few preparations must therefore
precede the actual matching; these steps are called preprocessing. Preprocessing improves the
quality of the fingerprint (mostly a digital image) and removes scars and other noise from the image. It also
makes the image more readable for the matching step, for instance by making the image black and white.
The matching of fingerprints has been studied extensively, resulting in multiple approaches. One of these
approaches is minutiae matching. It is the most well known and most often used method, and it makes use of small
details in the fingerprint, called minutiae, such as ridge endings and split points. Each minutia has its own
information, like its angle and position. Extracting this information is one step of the minutiae
matching approach; matching the extracted minutiae is another problem. This matching of minutiae can
be seen as a point matching problem. A suitable algorithm to solve this problem is the Hough transform-
based algorithm: it calculates the optimal transformation for matching minutiae. If a matching
fingerprint exists in the database, the template with the most matching minutiae is probably the same as the input.
Between the extraction of minutiae points and the matching, there is often a post-processing step to
filter out false minutiae, caused by scars, sweat, dirt or even by the preprocessing step. In the end, using the
minutiae can lead to a unique match of fingerprints.
The procedure described above can be extended with one important time-saving step, namely
classification. Classification is used when the database of fingerprints is so large that
matching a fingerprint against all the images would be too time-consuming. Fingerprints can be categorized by their
patterns. The classification is based on the following patterns: loops, whorls and arches. In this way an input
fingerprint does not have to be matched against all the fingerprints in the database, but only against a part of them.
The steps of the whole process of matching an input fingerprint with one of the templates
from the database are depicted in figure 2-1. The assumption is made that the templates in the database have
already gone through this process, so only the input fingerprint has to be prepared for matching.









Figure 2-1: The process of matching an input fingerprint with a template from the database (preprocessing, classification, minutiae extraction, post-processing, minutiae matching)
3. Preprocessing
When a fingerprint image is captured, nowadays through a scan, it contains a lot of redundant
information. Problems with scars, too dry or too moist fingers, or incorrect pressure must also be overcome
to get an acceptable image. Therefore preprocessing, consisting of enhancement and segmentation, is
applied to the image.
It is widely acknowledged that at least two to five percent of the target population has fingerprints of
poor quality; these fingerprints cannot be reliably processed using automatic image processing
methods. This fraction is even higher when the target population consists of older people, people doing
manual work, people living in dry weather conditions or having skin problems, and people who have poor
fingerprints due to their genetic and racial attributes. [13]
A fingerprint can contain regions of different quality:
- a well-defined region, where ridges are clearly differentiated from one another;
- a recoverable region, where ridges are corrupted by a small amount of gaps, creases and smudges, but
they are still visible and the neighboring regions provide sufficient information about their true
structure;
- an unrecoverable region, where ridges are corrupted by such a severe amount of noise and distortion
that no ridges are visible and the neighboring regions do not allow them to be reconstructed. [17]
3.1 Steps
A critical step in automatic fingerprint matching is to automatically and reliably extract minutiae from
the input fingerprint images. However, the performance of a minutiae extraction algorithm relies heavily on
the quality of the input fingerprint images. To ensure that the performance of an automatic fingerprint
identification system will be robust with respect to the quality of the fingerprint images, it is essential to
implement a fingerprint enhancement algorithm in the minutiae extraction module. [28]
In the literature there are several methods to improve the quality of an image and make it ready for
matching details. The steps that are present in almost every process are:
1) normalization,
2) filtering,
3) binarization,
4) skeletonization.

In the first step the input image from the sensor is normalized. This is important since image parameters
may differ significantly with varying sensors, fingers and finger conditions. By normalizing an image, its
gray values are spread evenly throughout the gray scale. The filtering step is the one that varies
the most between methods. There are many filters to smooth the ridges and remove scars, noise and irrelevant
segments; low-pass filters are used, such as Gaussian masks, Gabor filters or orientation filters. The third step is
binarization: transforming the gray scale image into a binary image (black and white). In the final step the ridges are
thinned from five to eight pixels in width down to one pixel, for precise location of endings and
bifurcations.

3.1.1 Normalization
Normalization is a good first step for improving image quality. Normalizing an image spreads
the gray values evenly over the whole available gray scale instead of over just a part of it, see figure 3-1.
The usual way to plot the distribution of pixels with a certain intensity is a histogram. To be able
to normalize an image, the range to normalize within has to be known, so it is necessary to find the highest
and the lowest pixel value of the current image. Every pixel value is then spread out evenly along this
scale. Equation (1) represents the normalization process.
I_norm(x, y) = (I(x, y) − I_min) / (I_max − I_min) × M,   (1)

where I(x, y) is the intensity (gray level) of the pixel with coordinates x and y in the original image, I_min is
the lowest pixel value found in the image and I_max is the highest one. M represents the new maximum value
of the scale, mostly M = 255, resulting in 256 different gray levels, including black (0) and white (255).
I_norm(x, y) is the normalized value of the pixel. When images have been normalized it is much easier
to compare and determine quality since the spread now has the same scale. Without the normalization it
would not be possible to use a global method for comparing quality. [2, 8]

Figure 3-1: The normalization step (raw image from the sensor → normalized image)
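The normalization of equation (1) can be sketched in a few lines of NumPy. This is a minimal illustration under our own naming (the function `normalize` is not from the paper):

```python
import numpy as np

def normalize(img, M=255):
    """Spread the gray levels of img evenly over [0, M], as in equation (1)."""
    img = img.astype(float)
    i_min, i_max = img.min(), img.max()
    # I_norm(x, y) = (I(x, y) - I_min) / (I_max - I_min) * M
    return (img - i_min) / (i_max - i_min) * M
```

After this step the darkest pixel maps to 0 and the lightest to M, so histograms from different sensors become directly comparable.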

3.1.2 Filtering
It is important to filter out image noise coming from finger condition and sensor noise. For that
purpose the orientation of the ridges can be determined, so that the image can be filtered exactly in the
direction of the ridges. In figure 3-2 an orientation field overlaid on a fingerprint is shown.



Figure 3-2: An orientation field overlaid on a fingerprint

By this filter method the ridge noise is greatly reduced without affecting the ridge structure itself, see figure
3-3. One approach to ridge orientation estimation relies on the local image gradient. A gray scale gradient is
a vector whose orientation indicates the direction of the steepest change in the gray values and whose
magnitude depends upon the amount of change of the gray values in the direction of the gradient. The local
orientation in a block can be determined from the pixel gradient orientations of the block. [3, 13, 27]



Figure 3-3: The filtering step, in this case an orientation field filter (normalized image → directionally filtered image)
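The gradient-based block orientation estimate described above can be sketched as follows (an illustrative NumPy implementation under our own naming; averaging the doubled gradient angles is the standard trick that keeps opposite gradient vectors from cancelling each other out):

```python
import numpy as np

def block_orientation(img, block=16):
    """Estimate the local ridge orientation per block from image gradients."""
    gy, gx = np.gradient(img.astype(float))   # row- and column-direction gradients
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            sx = gx[i*block:(i+1)*block, j*block:(j+1)*block]
            sy = gy[i*block:(i+1)*block, j*block:(j+1)*block]
            # sum the doubled-angle components so opposite gradients reinforce
            vx = np.sum(2 * sx * sy)
            vy = np.sum(sx**2 - sy**2)
            # ridge orientation is perpendicular to the dominant gradient direction
            theta[i, j] = 0.5 * np.arctan2(vx, vy) + np.pi / 2
    return theta
```

For a synthetic image of vertical stripes the dominant gradient is horizontal, so every block comes out with ridge orientation π/2 (vertical).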

3.1.3 Binarization
Binarization can be seen as the separation of the object and background. It turns a gray scale picture into a
binary picture. A binary picture has only two different values. The values 0 and 1 are represented by the
colors black and white, respectively. Refer to figure 3-4 for a binarized image. To perform binarization on
an image, a threshold value in the gray scale image is picked. Everything darker (lower in value) than this
threshold value is converted to black and everything lighter (higher in value) is converted to white. This
process is performed to facilitate finding identification marks in the fingerprints such as singularity points
or minutiae (see chapter 4 and 5).
The difficulty with binarization lies in finding the right threshold value, so that unimportant
information is removed and the important information enhanced. It is impossible to find a single global
threshold value that works on every image: the variation between fingerprint images can be so large that
the background in one image is darker than the print in another. Therefore, algorithms to
find the optimal value must be applied separately to each image to get a functional binarization. There are a
number of algorithms to perform this; the simplest one uses the mean or the median of the pixel
values in the image. This algorithm is based on a global threshold.
Nowadays local thresholds are often used instead. The image is separated into smaller parts and
threshold values are then calculated for each of these parts. This enables adjustments that are not possible
with global calculations. Local thresholds demand a lot more calculation, but mostly compensate for it with a
better result. [2, 8, 27]


Figure 3-4: The binarization step (directionally filtered image → binarized image)
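The local-threshold idea can be sketched as below. This is a simple per-block mean threshold of our own, following the text's convention that pixels lighter than the threshold become white (1); real systems use more robust local statistics:

```python
import numpy as np

def binarize_local(img, block=8):
    """Binarize using a separate mean threshold per block (local thresholding)."""
    out = np.zeros_like(img, dtype=np.uint8)
    h, w = img.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            patch = img[r:r+block, c:c+block]
            t = patch.mean()                        # per-block threshold
            out[r:r+block, c:c+block] = (patch > t).astype(np.uint8)
    return out
```

On an image whose left half is dark and right half is bright, a single global mean would turn one half entirely black, while the per-block thresholds recover the ridge pattern in both halves.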

3.1.4 Skeleton modeling
One way to make a skeleton is with thinning algorithms. The technique takes a binary image of a
fingerprint and makes the ridges that appear in the print just one pixel wide, without changing the overall
pattern or leaving gaps in the ridges, creating a sort of skeleton of the image. See figure 3-5 for an
example of skeletonization. A cross-shaped structuring element can be used, consisting of five blocks that each
represent a pixel. The pixel in the center of that element is called the origin. When the structuring element
completely overlays object pixels, only the pixel at the origin remains; the others are deleted.

Figure 3-5: An example of skeletonization
Skeleton modeling makes it easier to find minutiae and removes a lot of redundant data, which would have
resulted in longer process time and sometimes different results. There are a lot of different algorithms for
skeleton modeling that differ slightly. The result of a skeletonized (or thinned) fingerprint is shown in figure
3-6. [2, 8, 27]

Figure 3-6: The skeletonizing step (binarized image → skeletonized image)
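As an illustration of thinning, here is a sketch of the classic Zhang-Suen algorithm, a widely used alternative to the structuring-element method described above (not the paper's exact method; `1` marks ridge pixels and the border is assumed to be background):

```python
import numpy as np

def zhang_suen_thin(img):
    """Thin a binary image (1 = ridge) towards a one-pixel-wide skeleton."""
    img = img.astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):                 # the two sub-iterations of Zhang-Suen
            to_delete = []
            rows, cols = img.shape
            for r in range(1, rows - 1):
                for c in range(1, cols - 1):
                    if img[r, c] != 1:
                        continue
                    # the 8 neighbours in clockwise order starting at north (P2..P9)
                    p = [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
                         img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]
                    b = sum(p)              # number of ridge neighbours
                    # number of 0 -> 1 transitions around the pixel
                    a = sum(p[i] == 0 and p[(i+1) % 8] == 1 for i in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue            # keep endpoints and keep connectivity
                    if step == 0:
                        if p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0:
                            to_delete.append((r, c))
                    else:
                        if p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0:
                            to_delete.append((r, c))
            for r, c in to_delete:          # delete after the scan, not during it
                img[r, c] = 0
                changed = True
    return img
```

Applied to a ridge several pixels thick, the algorithm peels boundary pixels from alternating sides until only a one-pixel-wide centre line remains, preserving connectivity and line endpoints.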

3.2 Discussion
Binarization reduces the information in the image from gray scale to black and white. While this
makes it simpler for further algorithms to decode the fingerprint image into information useful for identification, it
also reduces the complexity of the image and removes information that might have been necessary for the
identification of the fingerprint. Some authors have proposed minutiae extraction approaches that work
directly on the gray-scale images without binarization and skeleton modeling. This choice is motivated by
the following considerations [17]:
- a significant amount of information may be lost during the binarization process;
- binarization and thinning are time consuming;
- thinning may introduce a large number of spurious minutiae;
- in the absence of an a priori enhancement step, most of the binarization techniques do not provide
satisfactory results when applied to low-quality images.
A reduction of information is not necessarily a negative thing, though.
Binarization remains a necessary step for many of the algorithms used for minutiae analysis; advanced
algorithms like skeletonization will only work if this process is performed.

The thinning performed in skeleton modeling enables point identification via simple counting of the
nearby pixels. When this process has been performed, a mapping of available minutiae in the image is
made, used for minutiae-based matching. It is now comparable to earlier stored templates.








4. Classification
Fingerprints have traditionally been classified into categories based on information in the global pattern
of ridges. A recognition procedure consists of retrieving one or more fingerprints in a large database
corresponding to a given fingerprint, whereas a classification procedure consists of assigning a fingerprint
to a pre-defined class. This chapter describes how this works.
4.1 Why classification
Fingerprint recognition is the basic task of the identification systems of the most famous policy agencies. If
all the fingerprints within the database are a priori classified, the recognition procedure can be performed
more efficiently, since the given fingerprint has to be compared only with the database items belonging to
the same class.
The first (semi-)automatic systems for fingerprint recognition were developed in the 1970s by the US
Federal Bureau of Investigation (FBI) in collaboration with the National Bureau of Standards, Cornell
Aeronautical Laboratory and Rockwell International Corporation. Since then the volumes of the fingerprint
databases and the number of identification requests have increased constantly, so that it soon became
necessary to classify the fingerprints to improve the recognition efficiency. The FBI now has about forty to fifty
million fingerprints stored in its database.
Although the first approaches to automatic fingerprint classification were proposed many years ago [9, 10],
police agencies still perform classification manually at a great expense of time. On the other
hand, the problem of fingerprint classification is very hard: a good classification system should be very
reliable (it must not misclassify fingerprints), it should be selective (the fingerprint database has to be
partitioned into a number of non-overlapping classes with about the same properties) and it should be
efficient (each fingerprint must be processed in a short time). [16]
4.2 Patterns
A fingerprint can be looked at from different levels; the global level, the local level, and the very fine level.
At the global level, you find the singularity points, called core and delta points, see figure 4-1. These
singularity points are very important for fingerprint classification, but they are not sufficient for accurate
matching.


Figure 4-1: Core and delta points marked on sketches of the two fingerprint patterns loop and whorl

The core is the inner point, normally in the middle of the print, around which whorls, loops, or arches
center. It is frequently characterized by a ridge ending and several curved ridges. Deltas are the points,
normally at the lower left and right hand of the fingerprint, around which a triangular series of ridges center.
[28]

Fingerprint classification nowadays is usually based on the Galton-Henry classification scheme. Galton
divided the fingerprints into three major classes (arches, loops and whorls) and further divided each
category into subcategories [7]. Edward Henry refined Galton's classification by increasing the number of
classes [11]. The five most common classes of the Galton-Henry classification scheme are: plain arches,
tented arches, left loop, right loop, and whorl. Of all fingerprint patterns 65% consist of loops, whorls make
up about 30%, and arches the remaining 5%.
Loops
The loop is the most common fingerprint pattern. They are usually separated into right loops and left loops.
The difference between these two is the direction that the ridges turn to. If the ridges turn to the left it is a
left loop and vice versa. Figure 4-2 shows a right loop.

In a loop pattern, one or more of the ridges enter on either side of the fingerprint, recurve, touch or cross the
line running from the delta to the core and terminate or tend to terminate on or toward the same side of the
fingerprint from which such ridge or ridges entered. There is one delta. [2, 23]






Figure 4-2: A right loop
Whorls
Whorls are the second most common pattern. Here the ridges form circular patterns around the core. Most
often they form spirals, but they can also appear as concentric circles.

In a whorl some of the ridges make a turn through at least one circuit. There are two loops (or a whorl) and
two deltas, see the red arrows in figure 4-3.

Figure 4-3: A whorl
Arches
Arches are more uncommon than loops and whorls. In the arch type the ridges run from one side to the
other, making no backward turn. Usually arches are classified into plain (simple) and tented (narrow)
arches, see figure 4-4. A plain arch does not have loops or deltas. The tented arch often has a loop with a
delta point below it. [2, 23]

Figure 4-4: A plain arch and a tented arch

4.3 Classification techniques
It is important to note that the distribution of fingers into the five classes is highly skewed. A fingerprint
classification system should take that into account and should be invariant to rotation, translation and elastic
distortion of the skin.

Although a wide variety of classification algorithms has been developed for this problem, a relatively small
number of features extracted from fingerprint images have been used by most of the authors. In particular,
almost all the methods are based on one or more of the following features: ridge line flow, orientation
image, singular points and Gabor filter responses. Ridge line flow is usually represented as a set of curves
running parallel to the ridge lines. An orientation image is used in most classification approaches because it
contains all the information required for the classification. Usually the orientation image is registered with
respect to the core point, one of the singular points, explained in paragraph 4.2. A Gabor filter is a common
directional filter which has both frequency-selective and orientation-selective properties.

Several approaches have been developed for automatic fingerprint classification. The best approaches can
be broadly categorized into the following categories [17]:

- Rule-based
These approaches mainly depend on the number and position of the singular points of the fingerprint.
This is the approach commonly used by human experts for manual classification. A plain arch has no
singular points. A tented arch, left loop and right loop have one loop and one delta. A whorl has two
loops (or a whorl) and two deltas. The result of this technique is a decision scheme that tells which
class the input image belongs to. [12, 13, 15]
- Structural
Structural approaches are based on the relational organization in the structure of the fingerprints. They
are often based on the orientation field. Cappelli et al. [4] proposed partitioning the orientation image
into homogeneous regions and subsequently classifying the image based on the pattern of the lines between
the regions. See also Maio and Maltoni [16].
- Statistical
In statistical approaches, a fixed-size numerical feature vector is derived from each fingerprint and a
general-purpose statistical classifier is used for the classification. One of the most widely adopted
statistical classifiers is the k-nearest neighbor. Many approaches directly use the orientation image as a
feature vector. [14]
- Neural network-based
Most of the proposed neural network approaches are based on multilayer perceptrons and use the
elements of the orientation image as input features. The success of neural networks was mainly in the
1990s. [14]
- Multi-classifier
Different classifiers offer complementary information about the patterns to be classified, which may
improve performance. This approach is especially studied in recent years. For example Jain, Prabhakar
and Hong [14] adopt a two-stage classification strategy: a k-nearest neighbor classifier is used to find
the two most likely classes. Then a specific neural network, trained to distinguish between the two
classes, is exploited to obtain the final decision.

The techniques have an error rate between 6.5 and 11.9% when fingerprints are assigned to five
different classes [17]. Many errors are due to the misclassification of tented arch fingerprints as plain arch.
That is why some authors use only four classes, merging tented arch and plain arch into one. The
classification errors then drop to between 5.1 and 9.4%.
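The rule-based scheme above maps directly onto a small decision function. This is a hypothetical sketch: the singular point counts are assumed to come from an upstream detector, and separating tented arches from left and right loops needs extra information (here an optional `loop_direction` argument; without it we default to "tented arch"):

```python
def classify_by_singularities(num_loops, num_deltas, loop_direction=None):
    """Rule-based class assignment from singular point counts (illustrative)."""
    if num_loops == 0 and num_deltas == 0:
        return "plain arch"                 # no singular points at all
    if num_loops == 2 and num_deltas == 2:
        return "whorl"                      # two loops (or a whorl) and two deltas
    if num_loops == 1 and num_deltas == 1:
        # tented arch, left loop and right loop all share this count;
        # ridge-direction information is needed to tell them apart
        if loop_direction == "left":
            return "left loop"
        if loop_direction == "right":
            return "right loop"
        return "tented arch"
    return "unknown"
```

Such a scheme mirrors how human experts classify manually, but it inherits the fragility of singular point detection in noisy images.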


5. Fingerprint matching
This chapter describes how an input fingerprint is compared with one of the template fingerprints
stored in a database. In the first paragraph some methods for fingerprint matching are described.
Subsequently, the best known of these methods, minutiae-based matching, is described in detail.
5.1 Methods
Many algorithms have been developed to match two different fingerprints and they can be divided into the
following groups: [17, 28]
Correlation-based matching: Two fingerprint images are laid on top of each other and the
correlation between corresponding pixels is computed for different alignments (various displacements and
rotations). This technique has some disadvantages: correlation-based techniques require the precise location
of a registration point and are affected by non-linear distortion.

Minutiae-based matching: As described in paragraph 5.1.1, a fingerprint pattern is full of minutiae
points, which characterize the fingerprint. In minutiae-based matching, these points are extracted from the
image, stored as sets of points in the two-dimensional plane, and then compared with the points
extracted from the template. With this technique, it is difficult to extract the minutiae points accurately
when the fingerprint is of low quality.

Ridge feature-based matching: This matching can be conceived as a superfamily of minutiae-based
matching and correlation-based matching, since the pixel intensity and the minutiae positions are
themselves features of the finger ridge pattern. The matching method uses features like local orientation and
frequency, ridge shape, and structure information. Even though minutiae-based matching is considered
more reliable, there are cases where ridge feature-based matching is better to use. In very low-quality
fingerprint images, it can be difficult to extract the minutiae points, and using the ridge pattern for matching
is then preferred.

Minutiae-based matching is certainly the best-known and most widely used method for fingerprint
matching, thanks to its strict analogy with the way forensic experts compare fingerprints and its acceptance
as a proof of identity in the courts of law in almost all countries. For this reason specific attention is paid to
this method.

5.1.1 Minutiae extraction
At the local level, a total of 150 different local ridge characteristics, called minutiae details, have been
identified. Most of them depend heavily on the impression conditions and quality of fingerprints and are
rarely observed in fingerprints. The seven most prominent ridge characteristics are shown in figure 5.1. [17]


Figure 5.1: Minutiae details
The measured fingerprint area contains on average about thirty to sixty minutiae points, depending
on the finger and on the sensor area. These can be extracted from the image after the image processing step
(and possibly the classification step) is performed. The point at which a ridge ends, and the point where a
bifurcation begins, are the most rudimentary minutiae, and are used in most applications. Once the thinned
ridge map is available, the ridge pixels with three ridge pixel neighbors are identified as ridge bifurcations,
and those with one ridge pixel neighbor as ridge endings. However, not all detected minutiae are genuine,
because of image processing artifacts and the noise in the fingerprint image. For each extracted minutia
a couple of features are stored: the absolute position (x, y), the direction (θ), and if necessary the scale (s).
[3]
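The neighbor-counting rule on the thinned ridge map can be sketched as follows (illustrative NumPy code under our own naming; `1` marks ridge pixels of the skeleton):

```python
import numpy as np

def find_minutiae(skel):
    """Find ridge endings (1 neighbour) and bifurcations (3 neighbours)."""
    endings, bifurcations = [], []
    rows, cols = skel.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if skel[r, c] != 1:
                continue
            # ridge neighbours in the 3x3 window, excluding the pixel itself
            n = skel[r-1:r+2, c-1:c+2].sum() - 1
            if n == 1:
                endings.append((r, c))
            elif n == 3:
                bifurcations.append((r, c))
    return endings, bifurcations
```

On a small Y-shaped skeleton this reports the three tips as endings and the junction as a bifurcation; in practice the output still has to go through the post-processing of paragraph 5.1.2 to discard false minutiae.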
The location of a minutia is commonly indicated by its distance from the core, with the core
serving as the origin (0,0) of an x,y-axis system. Some authors use the far left and bottom boundaries of the
image as the axes, correcting for misplacement by locating and adjusting from the core. In addition to the
placement of the minutia, the angle of the minutia is normally used. When a ridge ends, its direction at the
point of termination establishes the angle. This angle is measured from a horizontal line extending rightward
from the core.
At the very fine level, intra-ridge details can be detected. These are essentially the finger sweat
pores whose position and shape are considered highly distinctive. However, extracting pores is usable only
in high-resolution fingerprint images of good quality and therefore this kind of representation is not
practical for most applications. [17]

5.1.2 Post-processing
Minutiae localization begins with a preprocessed image. At this point, even a very precise image will have
distortions and false minutiae that need to be filtered out. For example, an algorithm may search the image
and eliminate one of two adjacent minutiae, since minutiae are very rarely adjacent. Irregularities caused by
scars, sweat or dirt appear as false minutiae, and algorithms locate any points or patterns that do not make
sense, such as a spur on an island (probably false) or a ridge crossing at right angles to two or three others
(probably a scar or dirt). A large percentage of false minutiae are discarded in this post-processing stage.
[28]

In figure 5-2 several examples of false minutiae can be observed. In clockwise order: interrupted ridges,
forks, spurs, structure ladders, triangles and bridges are depicted in the figure. Interrupted ridges are
two very close lines with the same direction. Two lines connected by a noisy line compose a fork. Spurs
are short lines whose direction is orthogonal to the ridge direction. Structure ladders are pseudo-rectangles
between two ridges. Triangles are formed by a real bifurcation with a noisy line between two ridges.
Finally, a bridge is a noisy line between two ridges. All these characteristics generate several false
minutiae. The removal algorithm is divided into several steps, executed in a pre-arranged order: elimination
of the spurs, union of the endpoints, elimination of the bridges, elimination of the triangles, and elimination
of the structure ladders. [6, 24]

Figure 5-2: False minutiae: interrupted ridges, forks, spurs, structure ladders, triangles and bridges, in
clockwise order

5.1.3 Minutiae matching
Minutiae-based techniques first find minutiae points and then map their relative placement on the finger in
order to match the minutiae with the template fingerprint minutiae. A general approach for matching
minutiae is described in paragraph 5.3.1 and an algorithm for transformation-based minutiae matching is
subsequently described in paragraph 5.3.2. The minutiae matching is mainly based on the book by D.
Maltoni, D. Maio, A. K. Jain and S. Prabhakar [17].

5.1.4 General approach
Let T and I be the representation of the template and input fingerprint, respectively. This representation is a
feature vector whose elements are the fingerprint minutiae:

T = {m_1, m_2, …, m_m}
I = {m'_1, m'_2, …, m'_n},

where m and n denote the number of minutiae in T and I, respectively.

Each minutia may be described by a number of attributes, including its location in the fingerprint image,
orientation, type (e.g. termination or bifurcation), a weight based on the quality of the fingerprint image in
the neighborhood of the minutia, and so on. Most common minutiae matching algorithms consider each
minutia as a triplet m = {x, y, θ} that indicates the coordinates (x, y) of the absolute location of the minutia
and the minutia angle θ:

m_i = {x_i, y_i, θ_i},   i = 1..m
m'_j = {x'_j, y'_j, θ'_j},   j = 1..n

A minutia m'_j in I and a minutia m_i in T are considered matching if the spatial distance between them is
smaller than a given tolerance r_0 and the direction difference between them is smaller than an angular
tolerance θ_0:

spatial_distance(m'_j, m_i) = sqrt((x'_j − x_i)² + (y'_j − y_i)²) ≤ r_0   (1)

and

direction_distance(m'_j, m_i) = min(|θ'_j − θ_i|, 360° − |θ'_j − θ_i|) ≤ θ_0.   (2)

This last equation takes the minimum of the two because of the circularity of angles (the difference between
angles of 2° and 358° is only 4°). The tolerances r_0 and θ_0 are necessary to compensate for the
unavoidable errors made by feature extraction algorithms.
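Equations (1) and (2) translate directly into a small predicate. A sketch with minutiae as (x, y, θ°) triplets and hypothetical tolerance defaults r_0 = 10 pixels and θ_0 = 20°:

```python
import math

def is_match(mj, mi, r0=10.0, theta0=20.0):
    """Equations (1) and (2): spatial and angular tolerance test for two minutiae."""
    xj, yj, tj = mj
    xi, yi, ti = mi
    spatial = math.hypot(xj - xi, yj - yi)     # Euclidean distance, equation (1)
    d = abs(tj - ti)
    direction = min(d, 360.0 - d)              # circular angle difference, equation (2)
    return spatial <= r0 and direction <= theta0
```

Note how the circular difference handles the wrap-around case: angles of 2° and 358° differ by only 4°, so two such minutiae still match.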

In order to match the fingerprints, a displacement and a rotation, and possibly other geometrical
transformations, have to be applied. When the fingerprints come from two different scanners, the resolution
may vary, so the scale has to be considered as well. The prints can also be damaged or affected by distortions.
A mapping function is needed to deal with these problems. See figure 5-3 for the transformation of a
minutia point between two fingerprints. [21]






Figure 5-3: A minutia point in the input fingerprint and the corresponding minutia point in the template fingerprint

In the absence of noise and other deformation, the rotation and displacement between two images can be
completely determined using two corresponding point pairs. In the ideal scenario, the true alignment can be
estimated by testing all possible pairs of minutiae for correspondence and then selecting the best
correspondence. [18]

Let map() be the function that maps a minutia m'_j (from I) into m''_j according to a given geometrical
transformation. For example, considering a displacement of [Δx, Δy] and a counterclockwise rotation θ
around the origin:

map_{Δx, Δy, θ}(m'_j = {x'_j, y'_j, θ'_j}) = m''_j = {x''_j, y''_j, θ'_j + θ}, where

[x''_j]   [cos θ   −sin θ] [x'_j]   [Δx]
[y''_j] = [sin θ    cos θ] [y'_j] + [Δy].   (3)
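The map() function of equation (3) is just a rotation about the origin followed by a translation; a minimal sketch with angles in degrees:

```python
import math

def map_minutia(m, dx, dy, theta_deg):
    """Equation (3): rotate a minutia counterclockwise about the origin, then translate."""
    x, y, t = m
    th = math.radians(theta_deg)
    xr = math.cos(th) * x - math.sin(th) * y + dx
    yr = math.sin(th) * x + math.cos(th) * y + dy
    return (xr, yr, (t + theta_deg) % 360.0)   # the minutia angle rotates too
```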

A pair of fingerprints that are most alike will have the maximum number of matching minutiae. Let mm()
be an indicator function that returns 1 in the case where the minutiae m''_j and m_i match according to the
spatial distance and direction distance:

mm(m''_j, m_i) = 1 if spatial_distance(m''_j, m_i) ≤ r_0 and direction_distance(m''_j, m_i) ≤ θ_0,
mm(m''_j, m_i) = 0 otherwise.   (4)

Then the problem can be formulated as:

max over Δx, Δy, θ, P of   Σ_{i=1..m} mm(map_{Δx, Δy, θ}(m'_{P(i)}), m_i),   (5)

which indicates the maximum number of matched minutiae. The function P(i) determines the pairing
between I and T minutiae. In particular, each minutia has either exactly one mate in the other fingerprint or
no mate at all. See figure 5-4 for an example of pairing. If m_1 were mated with m'_2 (the closest
minutia), m_2 would remain unmated. However, pairing m_1 with m'_1 allows m_2 to be mated with m'_2, thus
maximizing equation (5).

Figure 5-4: An example of pairing minutiae

The point-matching problem has been studied extensively, producing many approaches. The Hough transform-
based approach [1, 21] is quite popular and is described in the next paragraph.

5.2 The Hough transform-based algorithm
Many transformations for minutiae matching are based on the Hough transform. It is an
algorithm with fingerprint alignment embedded in the minutiae matching stage, as proposed by Ratha et al. in 1996
[21]. It discretizes the set of all allowed transformations and, for each transformation, computes the matching
score. The transformation with the maximal score is believed to be the correct one. The algorithm consists of three
major steps:

1) Estimate the transformation parameters Δx, Δy, θ, and s between the two representations, where Δx
and Δy are translations along the x- and y-directions, respectively, θ is the rotation angle, and s is the
scaling factor.
2) Align the two sets of minutiae points with the estimated parameters and count the matched pairs within
a bounding box.
3) Repeat the previous two steps for the set of discretized allowed transformations. The transformation that
results in the highest matching score is believed to be the correct one.

The space of transformations consists of quadruples (Δx, Δy, θ, s), where each parameter is discretized
(denoted by the superscript +) into a finite set of values:

Δx⁺ ∈ {Δx⁺_1, Δx⁺_2, …, Δx⁺_a}     θ⁺ ∈ {θ⁺_1, θ⁺_2, …, θ⁺_c}
Δy⁺ ∈ {Δy⁺_1, Δy⁺_2, …, Δy⁺_b}     s⁺ ∈ {s⁺_1, s⁺_2, …, s⁺_d}

A four-dimensional accumulator array A, with one entry for each combination of discretized parameters, is
initialized to zero and the following algorithm is executed:

  For each mi, i = 1..m                              // for each template minutia point
    For each mj', j = 1..n                           // for each input minutia point
      For each θ+ ∈ {θ1+, θ2+, ..., θc+}             // for each discretized rotation
        If direction_distance(θj' + θ+, θi) < θ0     // if the direction difference after rotation is small
          For each s+ ∈ {s1+, s2+, ..., sd+}         // for each discretized scale
          {
            // compute the displacement:
            //   (Δx)   (xi)        (cos θ+   -sin θ+) (xj')
            //   (Δy) = (yi) - s+ * (sin θ+    cos θ+) (yj')
            Δx+, Δy+ = quantization of Δx, Δy to the nearest bins
            A[Δx+, Δy+, θ+, s+] = A[Δx+, Δy+, θ+, s+] + 1   // count the matched pair
          }

  (Δx*, Δy*, θ*, s*) = arg max A[Δx+, Δy+, θ+, s+]
  // the entry with the highest count of matched pairs gives the optimal displacement, rotation and scale
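The voting loop can be sketched compactly as follows. This is an illustrative Python reduction, not the paper's implementation: scale is fixed at 1, the candidate rotation set, the bin size and the threshold theta0 are arbitrary choices, and minutiae are simple (x, y, direction) triples.

```python
import math
from collections import defaultdict

def hough_align(template, query, thetas=(0.0, math.pi / 4), bin_size=5.0, theta0=0.3):
    """Accumulate votes over (dx, dy, theta) for every minutia pair;
    the winning bin is the estimated alignment. Scale is fixed at 1."""
    acc = defaultdict(int)
    for (xi, yi, di) in template:
        for (xj, yj, dj) in query:
            for th in thetas:
                # only rotations that make the two directions agree may vote
                diff = (dj + th - di + math.pi) % (2 * math.pi) - math.pi
                if abs(diff) >= theta0:
                    continue
                # displacement mapping the rotated query minutia onto the template one
                dx = xi - (math.cos(th) * xj - math.sin(th) * yj)
                dy = yi - (math.sin(th) * xj + math.cos(th) * yj)
                acc[(round(dx / bin_size), round(dy / bin_size), th)] += 1
    return max(acc.items(), key=lambda kv: kv[1])  # ((dx_bin, dy_bin, theta), votes)

# toy example: the query is the template shifted by (-10, 0)
template = [(0.0, 0.0, 0.0), (20.0, 5.0, 1.0), (7.0, 30.0, 2.0)]
query = [(x - 10.0, y, d) for (x, y, d) in template]
best, votes = hough_align(template, query)
```

On this toy input the zero-rotation bin at displacement (10, 0) collects one vote per genuine pair and wins, which is exactly the arg max step above.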

This maximum gives the transformation that is believed to be the right one. If there exists a matching
fingerprint in the database, the template with the most matching minutiae is probably the same as the input.
[17, 20, 21]

5.2.1 Pre-alignment
An intuitive and arguably more logical choice is to pre-align all the templates in the database and the input
image before the matching procedure. In this way the alignment takes place only once for every image,
which significantly reduces the computational time. Pre-alignment cannot compare images to one another,
so it depends only on the properties of the image itself. The most common pre-alignment technique
translates the fingerprint according to the position of the core point. Unfortunately, reliable detection of the
core is very difficult in noisy images and in arch-type patterns. For adjusting the rotation, the shape of the
silhouette, the orientation of the core-delta segment, the average orientation in some regions around the
core, and the orientations of the singularities can be used. But this remains a complex problem and often
causes errors in the matching procedure. That is the main reason why alignment embedded in the minutiae
matching stage is often used. [17]
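A minimal sketch of such a pre-alignment, assuming the core point and a reference orientation have already been estimated by some other means; the function and parameter names here are illustrative, not from the cited work.

```python
import math

def pre_align(minutiae, core, ref_angle):
    """Translate so the (separately detected) core lands at the origin,
    then rotate by -ref_angle so the reference orientation becomes zero.
    Minutiae are (x, y, direction) triples; angles are in radians."""
    c, s = math.cos(-ref_angle), math.sin(-ref_angle)
    aligned = []
    for (x, y, d) in minutiae:
        tx, ty = x - core[0], y - core[1]      # remove the translation
        aligned.append((c * tx - s * ty,       # remove the rotation
                        s * tx + c * ty,
                        d - ref_angle))
    return aligned
```

Because every print is normalized independently, two pre-aligned prints can be compared directly, at the cost of inheriting any error in the core or orientation estimate.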















RESULTS:

Fig1: Binarization using Rough set Theory




Fig: Input Image Fig: Thinned Image




Fig: Minutiae Points Extraction Fig(a) Minutiae points (b) Thinned and minutiae points



Fig: Orientation



Tabulation Results:

            ROUGH-SET   LATENT
  Image 1   105         113
  Image 2   112         115
  Image 3   126         124
  Image 4   116         129











CONCLUSION












The process described in this paper and shown in figure 2-1 (page 5) consists of the main steps that an
automatic fingerprint identification system should take when performing minutiae-based matching.
However, some experts repeat steps or speed the process up by carrying out multiple steps at a time. The
preprocessing and classification steps, for instance, can partly be executed at the same time, because some
preprocessing steps are necessary for the classification and some are necessary for minutiae extraction. As
said before, whether to carry out the classification step depends on the size of the database and the time in
which the identification has to be completed.
Almost every author has his own preprocessing steps; especially the filtering step varies per author.
Moreover, as said in section 3.2, some steps of the preprocessing can be skipped to shorten the processing
time and, in some cases, even to improve quality.
For other matching algorithms, like correlation-based and ridge feature-based matching, the
processes can be different. The preprocessing is tailored to the kind of matching that takes place further in
the process, in this paper minutiae-based matching.
Improvement
There is still a lot to improve in fingerprint identification technology. The improvement in performance is
mainly to be achieved in the preprocessing step. Fingerprints of poor quality, as shown in figure 6-1, are
still difficult to classify and match. Most of the misclassified or mismatched fingerprints are the result of
bad quality. Preprocessing can improve the quality, but it can also discard useful information.
Improvements are still being achieved in this area.


Figure 6-1: Fingerprint image of poor quality

Furthermore, there can be improvements in speed. A user does not want to wait for his results;
identification has to be real-time when going through security at an airport. Recently a new way of
classification has been developed that speeds up the process considerably. Instead of using the classes of
the Galton-Henry scheme, the images are classified based on the angle of the ridges. Each image is divided
into four main directions: 0, 45, 90 and 135 degrees. This yields four scatter plots of the fingerprint, where
the gray scale indicates the concentration of the present angles. A white spot on the 45-degree plot indicates
only angles of 45 degrees; a black spot indicates no angles of 45 degrees. The result is a very fast
classification: about 100 times faster than previously possible, with an error percentage smaller than
0.5%. [31]
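The idea of binning ridge angles into the four main directions can be sketched roughly as follows. This is an illustrative Python toy, not the cited method: the gradient operator, the bin-assignment rule and the test image are all simplifying assumptions.

```python
import math

def direction_histogram(img):
    """Quantize each interior pixel's gradient direction into the four
    bins 0, 45, 90 and 135 degrees and count occurrences per bin."""
    counts = {0: 0, 45: 0, 90: 0, 135: 0}
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            if gx == 0 and gy == 0:
                continue                          # flat pixel: nothing to vote with
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            # nearest bin under the 180-degree wrap-around
            nearest = min(counts, key=lambda b: min(abs(ang - b), 180 - abs(ang - b)))
            counts[nearest] += 1
    return counts

# a horizontal intensity ramp: every gradient points along x (0 degrees)
ramp = [[10 * x for x in range(6)] for _ in range(6)]
hist = direction_histogram(ramp)
```

A real system would build such a histogram (or the per-direction scatter plots described above) per image region and use it as a fast classification signature.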
Future of fingerprint identification
Dirty skin, scars, sweat and bruises can easily distort fingerprint recognition and can result in a false
match. These factors play no role in iris recognition, a very reliable form of biometrics. Experts say that iris
scans are therefore superior to fingerprint recognition systems, also because an iris scan produces results
more quickly than a scan of a fingerprint: a check against 100,000 iris codes in a database takes two
seconds, whereas a fingerprint search takes fifteen seconds to perform the same task [30]. On the other
hand, the fingerprint is the most practical biometric recognition method. It only takes a simple sensor and a
smart link to a database, and these necessities can easily be built into small equipment like PDAs and
mobile phones. Iris detection, on the contrary, needs a high-resolution camera. Besides, fingerprints are
used in criminology, where iris scans cannot be used. Fingerprint identification will therefore remain
indispensable in the future.











REFERENCES

[1] A. A. Paulino, J. Feng, and A. K. Jain, "Latent fingerprint matching using descriptor-based Hough
transform," in Proc. Int. Joint Conf. Biometrics, Oct. 2011, pp. 1-7.
[2] C. Champod, C. Lennard, P. Margot, and M. Stoilovic, Fingerprints and Other Ridge Skin Impressions.
Boca Raton, FL: CRC Press, 2004.
[3] NIST Special Database 27, Fingerprint Minutiae From Latent and Matching Tenprint Images [Online].
Available: http://www.nist.gov/srd/nistsd27.cfm
[4] D. R. Ashbaugh, Quantitative-Qualitative Friction Ridge Analysis. Boca Raton, FL: CRC Press, 1999.
[5] FBI, Next Generation Identification [Online].
Available: http://www.fbi.gov/aboutus/cjis/fingerprints_biometrics/ngi
[6] R. Cappelli, M. Ferrara, and D. Maltoni, "Minutia cylinder-code: A new representation and matching
technique for fingerprint recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 12,
pp. 2128-2141, Dec. 2010.
[7] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition, 2nd ed.
New York: Springer-Verlag, 2009.















INTRODUCTION TO MATLAB
What Is MATLAB?
MATLAB is a high-performance language for technical computing. It integrates computation,
visualization, and programming in an easy-to-use environment where problems and solutions are expressed
in familiar mathematical notation. Typical uses include
Math and computation
Algorithm development
Data acquisition
Modeling, simulation, and prototyping
Data analysis, exploration, and visualization
Scientific and engineering graphics
Application development, including graphical user interface building.
MATLAB is an interactive system whose basic data element is an array that does not require
dimensioning. This allows you to solve many technical computing problems, especially those with matrix
and vector formulations, in a fraction of the time it would take to write a program in a scalar non
interactive language such as C or FORTRAN.
The name MATLAB stands for matrix laboratory. MATLAB was originally written to provide easy
access to matrix software developed by the LINPACK and EISPACK projects. Today, MATLAB engines
incorporate the LAPACK and BLAS libraries, embedding the state of the art in software for matrix
computation.
MATLAB has evolved over a period of years with input from many users. In university
environments, it is the standard instructional tool for introductory and advanced courses in mathematics,
engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research,
development, and analysis.
MATLAB features a family of add-on application-specific solutions called toolboxes. Very important
to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are
comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve
particular classes of problems. Areas in which toolboxes are available include signal processing, control
systems, neural networks, fuzzy logic, wavelets, simulation, and many others.
The MATLAB System:
The MATLAB system consists of five main parts:
Development Environment:
This is the set of tools and facilities that help you use MATLAB functions and files. Many of these
tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command
history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path.
The MATLAB Mathematical Function:
This is a vast collection of computational algorithms ranging from elementary functions, like sum,
sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix
eigenvalues, Bessel functions, and fast Fourier transforms.
The MATLAB Language:
This is a high-level matrix/array language with control flow statements, functions, data structures,
input/output, and object-oriented programming features. It allows both "programming in the small" to
rapidly create quick and dirty throw-away programs, and "programming in the large" to create complete
large and complex application programs.
Graphics:
MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as
annotating and printing these graphs. It includes high-level functions for two-dimensional and three-
dimensional data visualization, image processing, animation, and presentation graphics. It also includes
low-level functions that allow you to fully customize the appearance of graphics as well as to build
complete graphical user interfaces on your MATLAB applications.

The MATLAB Application Program Interface (API):
This is a library that allows you to write C and Fortran programs that interact with MATLAB. It
includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a
computational engine, and for reading and writing MAT-files.
MATLAB WORKING ENVIRONMENT:
MATLAB DESKTOP:-
Matlab Desktop is the main Matlab application window. The desktop contains five sub windows, the
command window, the workspace browser, the current directory window, the command history window,
and one or more figure windows, which are shown only when the user displays a graphic.
The command window is where the user types MATLAB commands and expressions at the prompt
(>>) and where the output of those commands is displayed. MATLAB defines the workspace as the set of
variables that the user creates in a work session. The workspace browser shows these variables and some
information about them. Double clicking on a variable in the workspace browser launches the Array Editor,
which can be used to obtain information and in some instances edit certain properties of the variable.
The current Directory tab above the workspace tab shows the contents of the current directory, whose
path is shown in the current directory window. For example, in the windows operating system the path
might be as follows: C:\MATLAB\Work, indicating that directory Work is a subdirectory of the main
directory MATLAB, which is installed in drive C. Clicking on the arrow in the current directory window
shows a list of recently used paths. Clicking on the button to the right of the window allows the user to
change the current directory.
MATLAB uses a search path to find M-files and other MATLAB-related files, which are organized in
directories in the computer file system. Any file run in MATLAB must reside in the current directory or in
a directory that is on the search path. By default, the files supplied with MATLAB and MathWorks
toolboxes are included in the search path. The easiest way to see which directories are on the search path,
or to add or modify the search path, is to select Set Path from the File menu on the desktop, and then use
the Set Path dialog box. It is good practice to add any commonly used directories to the search path to
avoid repeatedly having to change the current directory.

The Command History Window contains a record of the commands a user has entered in the
command window, including both current and previous MATLAB sessions. Previously entered MATLAB
commands can be selected and re-executed from the command history window by right clicking on a
command or sequence of commands. This action launches a menu from which to select various options in
addition to executing the commands. This is a useful feature when experimenting with various commands
in a work session.
Using the MATLAB Editor to create M-Files:
The MATLAB editor is both a text editor specialized for creating M-files and a graphical MATLAB
debugger. The editor can appear in a window by itself, or it can be a sub window in the desktop. M-files are
denoted by the extension .m, as in pixelup.m. The MATLAB editor window has numerous pull-down
menus for tasks such as saving, viewing, and debugging files. Because it performs some simple checks and
also uses color to differentiate between various elements of code, this text editor is recommended as the
tool of choice for writing and editing M-functions. Typing edit filename at the prompt opens the M-file
filename.m in an editor window, ready for editing. As noted earlier, the file must be in the current
directory, or in a directory on the search path.
Getting Help:
The principal way to get help online is to use the MATLAB Help Browser, opened as a separate
window either by clicking on the question mark symbol (?) on the desktop toolbar, or by typing
helpbrowser at the prompt in the command window. The Help Browser is a web browser integrated into
the MATLAB desktop that displays Hypertext Markup Language (HTML) documents. The Help Browser
consists of two panes: the help navigator pane, used to find information, and the display pane, used to view
the information. Self-explanatory tabs in the navigator pane are used to perform a search.















DIGITAL IMAGE PROCESSING










Digital image processing
Background:
Digital image processing is an area characterized by the need for extensive experimental work to
establish the viability of proposed solutions to a given problem. An important characteristic underlying the
design of image processing systems is the significant level of testing & experimentation that normally is
required before arriving at an acceptable solution. This characteristic implies that the ability to formulate
approaches & quickly prototype candidate solutions generally plays a major role in reducing the cost & time
required to arrive at a viable system implementation.
What is DIP
An image may be defined as a two-dimensional function f(x, y), where x & y are spatial
coordinates, & the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the
image at that point. When x, y & the amplitude values of f are all finite discrete quantities, we call the
image a digital image. The field of DIP refers to processing digital image by means of digital computer.
Digital image is composed of a finite number of elements, each of which has a particular location & value.
The elements are called pixels.
Vision is the most advanced of our senses, so it is not surprising that images play the single most
important role in human perception. However, unlike humans, who are limited to the visual band of the EM
spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves.
They can also operate on images generated by sources that humans are not accustomed to associating with
images.
There is no general agreement among authors regarding where image processing stops & other related
areas such as image analysis & computer vision start. Sometimes a distinction is made by defining image
processing as a discipline in which both the input & output of a process are images. This is a limiting &
somewhat artificial boundary. The area of image analysis (image understanding) is in between image
processing & computer vision.
There are no clear-cut boundaries in the continuum from image processing at one end to computer
vision at the other. However, one useful paradigm is to consider three types of computerized processes in
this continuum: low-, mid-, & high-level processes. Low-level processes involve primitive operations such
as noise reduction, contrast enhancement & image sharpening. A low-level process is characterized by the
fact that both its inputs & outputs are images.

Mid-level processes on images involve tasks such as segmentation, description of the resulting objects to
reduce them to a form suitable for computer processing, & classification of individual objects. A mid-level
process is characterized by the fact that its inputs generally are images but its outputs are attributes
extracted from those images. Finally, higher-level processing involves making sense of an ensemble of
recognized objects, as in image analysis, & at the far end of the continuum, performing the cognitive
functions normally associated with human vision.
Digital image processing, as already defined is used successfully in a broad range of areas of
exceptional social & economic value.
What is an image?
An image is represented as a two dimensional function f(x, y) where x and y are spatial co-ordinates
and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point.
Gray scale image:
A grayscale image is a function I(x, y) of the two spatial coordinates of the image plane:
I(x, y) is the intensity of the image at the point (x, y) on the image plane.
I(x, y) takes non-negative values; assuming the image is bounded by a rectangle [0, a] x [0, b], we have
I: [0, a] x [0, b] -> [0, inf).
Color image:
It can be represented by three functions, R(x, y) for red, G(x, y) for green and B(x, y) for blue.
An image may be continuous with respect to the x and y coordinates and also in amplitude.
Converting such an image to digital form requires that the coordinates as well as the amplitude to be
digitized. Digitizing the coordinates values is called sampling. Digitizing the amplitude values is called
quantization.
Coordinate convention:
The result of sampling and quantization is a matrix of real numbers. We use two principal ways to
represent digital images. Assume that an image f(x, y) is sampled so that the resulting image has M rows
and N columns. We say that the image is of size M x N. The values of the coordinates (x, y) are discrete
quantities. For notational clarity and convenience, we use integer values for these discrete coordinates.
In many image processing books, the image origin is defined to be at (x, y) = (0, 0). The next
coordinate values along the first row of the image are (x, y) = (0, 1). It is important to keep in mind that the
notation (0, 1) is used to signify the second sample along the first row. It does not mean that these are the
actual values of physical coordinates when the image was sampled. The following figure shows the
coordinate convention. Note that x ranges from 0 to M-1 and y from 0 to N-1 in integer increments.
The coordinate convention used in the toolbox to denote arrays is different from the preceding
paragraph in two minor ways. First, instead of using (x, y), the toolbox uses the notation (r, c) to indicate
rows and columns. Note, however, that the order of coordinates is the same as the order discussed in the
previous paragraph, in the sense that the first element of a coordinate tuple, (a, b), refers to a row and the
second to a column. The other difference is that the origin of the coordinate system is at (r, c) = (1, 1); thus,
r ranges from 1 to M and c from 1 to N in integer increments. IPT documentation refers to the coordinates
(r, c) as pixel coordinates. Less frequently, the toolbox also employs another coordinate convention called
spatial coordinates, which uses x to refer to columns and y to refer to rows. This is the opposite of our use
of the variables x and y.
Image as Matrices:
The preceding discussion leads to the following representation for a digitized image function:

            f(0,0)      f(0,1)     ...   f(0,N-1)
  f(x,y) =  f(1,0)      f(1,1)     ...   f(1,N-1)
            ...         ...              ...
            f(M-1,0)    f(M-1,1)   ...   f(M-1,N-1)

The right side of this equation is a digital image by definition. Each element of this array is called an
image element, picture element, pixel, or pel. The terms image and pixel are used throughout the rest of our
discussion to denote a digital image and its elements.





A digital image can be represented naturally as a MATLAB matrix:

        f(1,1)   f(1,2)   ...   f(1,N)
  f =   f(2,1)   f(2,2)   ...   f(2,N)
        ...      ...            ...
        f(M,1)   f(M,2)   ...   f(M,N)

where f(1,1) = f(0,0) (note the use of a monospace font to denote MATLAB quantities). Clearly the
two representations are identical, except for the shift in origin. The notation f(p, q) denotes the element
located in row p and column q. For example, f(6,2) is the element in the sixth row and second column of
the matrix f. Typically we use the letters M and N respectively to denote the number of rows and columns
in a matrix. A 1xN matrix is called a row vector, whereas an Mx1 matrix is called a column vector. A 1x1
matrix is a scalar.
Matrices in MATLAB are stored in variables with names such as A, a, RGB, real_array and so on.
Variables must begin with a letter and contain only letters, numerals and underscores. As noted in the
previous paragraph, all MATLAB quantities are written using monospace characters. We use conventional
Roman italic notation, such as f(x, y), for mathematical expressions.
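The shift in origin between the mathematical 0-based convention and MATLAB's 1-based convention can be illustrated with a small sketch; Python is used here for illustration, and the helper names f_math and f_matlab are made up.

```python
# a 3x3 digital image as a nested list of intensities
img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]

def f_math(x, y):
    """0-based convention f(x, y): origin at (0, 0), x indexes rows."""
    return img[x][y]

def f_matlab(p, q):
    """MATLAB-style 1-based convention f(p, q): origin at (1, 1)."""
    return img[p - 1][q - 1]
```

The two accessors read the same data; only the index of the origin differs, exactly as in the two matrices above.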

Reading Images:
Images are read into the MATLAB environment using function imread, whose syntax is

  f = imread('filename')

  Format name   Description                        Recognized extensions
  TIFF          Tagged Image File Format           .tif, .tiff
  JPEG          Joint Photographic Experts Group   .jpg, .jpeg
  GIF           Graphics Interchange Format        .gif
  BMP           Windows Bitmap                     .bmp
  PNG           Portable Network Graphics          .png
  XWD           X Window Dump                      .xwd

Here filename is a string containing the complete name of the image file (including any applicable
extension). For example, the command line

  >> f = imread('chestxray.jpg');

reads the JPEG image chestxray (see the table above) into image array f. Note the use of single quotes (')
to delimit the string filename. The semicolon at the end of a command line is used by MATLAB for
suppressing output; if a semicolon is not included, MATLAB displays the results of the operation(s)
specified in that line. The prompt symbol (>>) designates the beginning of a command line, as it appears in
the MATLAB command window.
Data Classes:
Although we work with integer coordinates, the values of the pixels themselves are not restricted to be
integers in MATLAB. The table below lists the various data classes supported by MATLAB and IPT for
representing pixel values. The first eight entries in the table are referred to as numeric data classes. The
ninth entry is the char class and, as shown, the last entry is referred to as the logical data class.
All numeric computations in MATLAB are done in double quantities, so this is also a frequent data
class encountered in image processing applications. Class uint8 also is encountered frequently, especially
when reading data from storage devices, as 8-bit images are the most common representations found in
practice. These two data classes, class logical, and, to a lesser degree, class uint16 constitute the primary
data classes on which we focus. Many IPT functions, however, support all the data classes listed in the
table. Data class double requires 8 bytes to represent a number; uint8 and int8 require one byte each;
uint16 and int16 require 2 bytes each; and uint32, int32 and single require 4 bytes each.
  Name      Description
  double    Double-precision floating-point numbers (8 bytes per element).
  uint8     Unsigned 8-bit integers in the range [0, 255] (1 byte per element).
  uint16    Unsigned 16-bit integers in the range [0, 65535] (2 bytes per element).
  uint32    Unsigned 32-bit integers in the range [0, 4294967295] (4 bytes per element).
  int8      Signed 8-bit integers in the range [-128, 127] (1 byte per element).
  int16     Signed 16-bit integers in the range [-32768, 32767] (2 bytes per element).
  int32     Signed 32-bit integers in the range [-2147483648, 2147483647] (4 bytes per element).
  single    Single-precision floating-point numbers (4 bytes per element).
  char      Characters (2 bytes per element).
  logical   Values are 0 or 1 (1 byte per element).
The char data class holds characters in Unicode representation. A character string is merely a 1xn array of
characters. A logical array contains only the values 0 and 1, with each element stored in one byte; logical
arrays are created using the function logical or by using relational operators.
Image Types:
The toolbox supports four types of images:
1 .Intensity images;
2. Binary images;
3. Indexed images;
4. R G B images.
Most monochrome image processing operations are carried out using binary or intensity images, so
our initial focus is on these two image types. Indexed and RGB colour images are discussed below.
Intensity Images:
An intensity image is a data matrix whose values have been scaled to represent intensities. When the
elements of an intensity image are of class uint8 or class uint16, they have integer values in the range
[0, 255] and [0, 65535], respectively. If the image is of class double, the values are floating-point numbers.
Values of scaled, double intensity images are in the range [0, 1] by convention.




Binary Images:
Binary images have a very specific meaning in MATLAB. A binary image is a logical array of 0s
and 1s. Thus, an array of 0s and 1s whose values are of a numeric data class, say uint8, is not considered a
binary image in MATLAB. A numeric array is converted to binary using function logical. Thus, if A is a
numeric array consisting of 0s and 1s, we create a logical array B using the statement

  B = logical(A)

If A contains elements other than 0s and 1s, use of the logical function converts all nonzero
quantities to logical 1s and all entries with value 0 to logical 0s.
Using relational and logical operators also creates logical arrays.
To test whether an array is logical, we use the islogical function: islogical(C).
If C is a logical array, this function returns a 1; otherwise it returns a 0. Logical arrays can be converted
to numeric arrays using the data class conversion functions.
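The behaviour of the logical conversion can be sketched outside MATLAB as well; this is an illustrative Python analogue, and to_logical is a made-up helper, not a MATLAB function.

```python
def to_logical(a):
    """Mimic MATLAB's logical(): every nonzero entry maps to True (1),
    zero maps to False (0)."""
    return [[bool(v) for v in row] for row in a]

A = [[0, 2, 0],
     [5, 0, 1]]
B = to_logical(A)
```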

Indexed Images:
An indexed image has two components:
  a data matrix of integers, x
  a colormap matrix, map
Matrix map is an m x 3 array of class double containing floating-point values in the range [0, 1]. The
length m of the map is equal to the number of colors it defines. Each row of map specifies the red, green
and blue components of a single color. An indexed image uses direct mapping of pixel intensity values to
colormap values: the color of each pixel is determined by using the corresponding value of the integer
matrix x as a pointer into map. If x is of class double, then all of its components with values less than or
equal to 1 point to the first row in map, all components with value 2 point to the second row, and so on. If x
is of class uint8 or uint16, then all components with value 0 point to the first row in map, all components
with value 1 point to the second, and so on.
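The pointer-into-map lookup can be sketched as follows; this is illustrative Python with a made-up three-color map, using the 1-based pointer convention described above for class double data.

```python
# a 3-row colormap (class-double style, values in [0, 1]); one RGB color per row
cmap = [[0.0, 0.0, 0.0],   # row 1: black
        [1.0, 0.0, 0.0],   # row 2: red
        [1.0, 1.0, 1.0]]   # row 3: white

# integer data matrix x: each entry is a 1-based pointer into cmap
x = [[1, 2],
     [3, 1]]

# direct mapping: replace each pointer by the colormap row it points to
rgb = [[cmap[v - 1] for v in row] for row in x]
```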




RGB Image:

An RGB color image is an M x N x 3 array of color pixels, where each color pixel is a triplet
corresponding to the red, green and blue components of an RGB image at a specific spatial location. An
RGB image may be viewed as a stack of three gray-scale images that, when fed into the red, green and blue
inputs of a color monitor, produce a color image on the screen. By convention, the three images forming an
RGB color image are referred to as the red, green and blue component images. The data class of the
component images determines their range of values. If an RGB image is of class double, the range of
values is [0, 1]. Similarly, the range of values is [0, 255] or [0, 65535] for RGB images of class uint8 or
uint16, respectively. The number of bits used to represent the pixel values of the component images
determines the bit depth of an RGB image. For example, if each component image is an 8-bit image, the
corresponding RGB image is said to be 24 bits deep.
Generally, the number of bits in all component images is the same. In this case the number of
possible colors in an RGB image is (2^b)^3, where b is the number of bits in each component image. For
the 8-bit case the number is 16,777,216 colors.
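The color count formula can be checked with a one-liner; rgb_color_count is a made-up illustrative helper, written in Python here.

```python
def rgb_color_count(bits_per_component):
    """Number of representable colors in an RGB image: (2^b)^3."""
    return (2 ** bits_per_component) ** 3

# 8 bits per component -> a 24-bit-deep image
n = rgb_color_count(8)
```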
























CHAPTER 8
CONCLUSION










Conclusion
This article has introduced a roughest based binarization for fingerprint images. The rough set based
method avoids assumption that images are mostly bimodal whereas the same assumption is the basis for
widely used binarization technique such as Otsus binarization method.
The results obtained appear to be equivalent, if not better than the existing Otsus technique. Ideally
the method can be compared with the other fuzzy based approaches [15] [16], frequency based approaches
[17], but it has not been included here as Otsus method for binarization is considered as widely used
method.
One limitation of the proposed method is that it may not lead to a good binarization in case the
fingerprint image quality is very poor in the sense of over inking or under inking. A good noise cleaning
algorithm should then be preceded by the process of binarization. Note that the same is true even if one uses
Otsus method.
The method proposed for binarization could be extended to classify the image pixels in more than
two classes as oppose to binarization which is classifying the image pixels in to two classes.

Table 1: Threshold values obtained from the rough-set based method and Otsu's method for natural and
fingerprint images.

REFERENCES
[1] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition, 2nd ed.
Springer, 2005.
[2] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed. Pearson, 2008.
[3] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems,
Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, Jan. 1979.
[4] Z. Pawlak, J. Grzymala-Busse, R. Slowinski, and W. Ziarko, "Rough sets," Communications of the
ACM, vol. 38, no. 11, pp. 89-95, Nov. 1995.
[5] S. K. Pal, B. Uma Shankar, and P. Mitra, "Granular computing, rough entropy and object extraction,"
Pattern Recognition Letters, vol. 26, pp. 2509-2517, 2005.
[6] B. Moayer and K. Fu, "A tree system approach for fingerprint pattern recognition," IEEE Transactions
on Pattern Analysis and Machine Intelligence, vol. 8, no. 3, pp. 376-388, 1986.
[7] Q. Xiao and H. Raafat, "Fingerprint image post-processing: A combined statistical and structural
approach," Pattern Recognition, vol. 24, no. 10, pp. 985-992, 1991.
[8] M. R. Verma, A. K. Majumdar, and B. Chatterjee, "Edge detection in fingerprints," Pattern
Recognition, vol. 20, pp. 513-523, 1987.
[9] N. K. Ratha, S. Y. Chen, and A. K. Jain, "Adaptive flow orientation-based feature extraction in
fingerprint images," Pattern Recognition, vol. 28, no. 11, pp. 1657-1672, 1995.
[10] N. Yager and A. Amin, "Fingerprint verification based on minutiae features: a review," Pattern
Analysis and Applications, vol. 7, pp. 94-113, Nov. 2004.
[11] F. Zhao and X. Tang, "Preprocessing for skeleton-based minutiae extraction," in Proc. Conference on
Imaging Science, Systems, and Technology, pp. 742-745, 2007.
[12] T. Y. Zhang and C. Y. Suen, "A fast parallel algorithm for thinning digital patterns," Communications
of the ACM, vol. 27, no. 3, pp. 236-239, March 1984.
[13] D. Rutovitz, "Pattern recognition," Journal of the Royal Statistical Society, vol. 129, ser. A,
pp. 504-530, May 1966.
[14] H. Hwang, J. Shin, S. Chien, and J. Lee, "Run representation based minutiae extraction in fingerprint
image," in Proc. IAPR Workshop on Machine Vision Applications, pp. 64-67, Dec. 2002.
[15] T. C. Raja Kumar, S. Arumuga Perumal, N. Krishnan, and S. Kother Mohideen, "Fuzzy based
histogram modeling for fingerprint image binarization," International Journal of Research and Reviews in
Computer Science, vol. 2, no. 5, pp. 1151-1154, Oct. 2011.
[16] Y. H. Yun, "A study on fuzzy binarization method," Korea Intelligent Information Systems Society,
vol. 1, no. 1, pp. 510-513, 2002.
[17] T. Kim, "Soft decision histogram-based image binarization for enhanced ID recognition," in Proc.
Information, Communications and Signal Processing, pp. 1-4, Dec. 2007.






















INTRODUCTION TO MATLAB
What Is MATLAB?
MATLAB is a high-performance language for technical computing. It integrates
computation, visualization, and programming in an easy-to-use environment where problems and
solutions are expressed in familiar mathematical notation. Typical uses include:
Math and computation
Algorithm development
Data acquisition
Modeling, simulation, and prototyping
Data analysis, exploration, and visualization
Scientific and engineering graphics
Application development, including graphical user interface building.
MATLAB is an interactive system whose basic data element is an array that does not
require dimensioning. This allows you to solve many technical computing problems, especially
those with matrix and vector formulations, in a fraction of the time it would take to write a
program in a scalar noninteractive language such as C or FORTRAN.
The name MATLAB stands for matrix laboratory. MATLAB was originally written to
provide easy access to matrix software developed by the LINPACK and EISPACK projects.
Today, MATLAB engines incorporate the LAPACK and BLAS libraries, embedding the state of
the art in software for matrix computation.
MATLAB has evolved over a period of years with input from many users. In university
environments, it is the standard instructional tool for introductory and advanced courses in
mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-
productivity research, development, and analysis.
MATLAB features a family of add-on application-specific solutions called toolboxes.
Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized
technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that
extend the MATLAB environment to solve particular classes of problems. Areas in which
toolboxes are available include signal processing, control systems, neural networks, fuzzy logic,
wavelets, simulation, and many others.
The MATLAB System:
The MATLAB system consists of five main parts:
Development Environment:
This is the set of tools and facilities that help you use MATLAB functions and files. Many
of these tools are graphical user interfaces. It includes the MATLAB desktop and Command
Window, a command history, an editor and debugger, and browsers for viewing help, the
workspace, files, and the search path.
The MATLAB Mathematical Function:
This is a vast collection of computational algorithms ranging from elementary functions
like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix
inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.
The MATLAB Language:
This is a high-level matrix/array language with control flow statements, functions, data
structures, input/output, and object-oriented programming features. It allows both "programming
in the small" to rapidly create quick and dirty throw-away programs, and "programming in the
large" to create complete large and complex application programs.
Graphics:
MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as
annotating and printing these graphs. It includes high-level functions for two-dimensional and
three-dimensional data visualization, image processing, animation, and presentation graphics. It
also includes low-level functions that allow you to fully customize the appearance of graphics as
well as to build complete graphical user interfaces on your MATLAB applications.
The MATLAB Application Program Interface (API):
This is a library that allows you to write C and Fortran programs that interact with
MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling
MATLAB as a computational engine, and for reading and writing MAT-files.
MATLAB WORKING ENVIRONMENT:
MATLAB DESKTOP:-
The MATLAB Desktop is the main MATLAB application window. The desktop contains five sub-
windows: the command window, the workspace browser, the current directory window, the
command history window, and one or more figure windows, which are shown only when the
user displays a graphic.
The command window is where the user types MATLAB commands and expressions at
the prompt (>>) and where the output of those commands is displayed. MATLAB defines the
workspace as the set of variables that the user creates in a work session. The workspace browser
shows these variables and some information about them. Double-clicking on a variable in the
workspace browser launches the Array Editor, which can be used to obtain information and,
in some instances, edit certain properties of the variable.
The Current Directory tab above the workspace tab shows the contents of the current
directory, whose path is shown in the current directory window. For example, in the Windows
operating system the path might be as follows: C:\MATLAB\Work, indicating that directory
work is a subdirectory of the main directory MATLAB, which is installed in drive C.
Clicking on the arrow in the current directory window shows a list of recently used
paths. Clicking on the button to the right of the window allows the user to change the current
directory.
MATLAB uses a search path to find M-files and other MATLAB-related files, which are
organized in directories in the computer file system. Any file run in MATLAB must reside in the
current directory or in a directory that is on the search path. By default, the files supplied with
MATLAB and MathWorks toolboxes are included in the search path. The easiest way to see
which directories are on the search path, or to add or modify the search path, is to select
Set Path from the File menu of the desktop, and then use the Set Path dialog box. It is good
practice to add any commonly used directories to the search path to avoid repeatedly having
to change the current directory.
The Command History Window contains a record of the commands a user has entered in
the command window, including both current and previous MATLAB sessions. Previously
entered MATLAB commands can be selected and re-executed from the command history
window by right-clicking on a command or sequence of commands. This action launches a
menu from which to select various options in addition to executing the commands. This is a
useful feature when experimenting with various commands in a work session.
Using the MATLAB Editor to create M-Files:
The MATLAB editor is both a text editor specialized for creating M-files and a graphical
MATLAB debugger. The editor can appear in a window by itself, or it can be a sub window in
the desktop. M-files are denoted by the extension .m, as in pixelup.m. The MATLAB editor
window has numerous pull-down menus for tasks such as saving, viewing, and debugging files.
Because it performs some simple checks and also uses color to differentiate between various
elements of code, this text editor is recommended as the tool of choice for writing and editing M-
functions. Typing edit filename at the prompt opens the M-file filename.m in an editor
window, ready for editing. As noted earlier, the file must be in the current directory, or in a
directory in the search path.
Getting Help:
The principal way to get help online is to use the MATLAB help browser, opened as a
separate window either by clicking on the question mark symbol (?) on the desktop toolbar, or by
typing helpbrowser at the prompt in the command window. The Help Browser is a web browser
integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML)
documents. The Help Browser consists of two panes: the help navigator pane, used to find
information, and the display pane, used to view the information. Self-explanatory tabs in the
navigator pane are used to perform a search.











DIGITAL IMAGE PROCESSING

BACKGROUND:
Digital image processing is an area characterized by the need for extensive experimental
work to establish the viability of proposed solutions to a given problem. An important
characteristic underlying the design of image processing systems is the significant level of
testing & experimentation that normally is required before arriving at an acceptable solution.
This characteristic implies that the ability to formulate approaches &quickly prototype candidate
solutions generally plays a major role in reducing the cost & time required to arrive at a viable
system implementation.
What is DIP?
An image may be defined as a two-dimensional function f(x, y), where x & y are spatial
coordinates, & the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray
level of the image at that point. When x, y & the amplitude values of f are all finite discrete
quantities, we call the image a digital image. The field of DIP refers to processing digital image
by means of digital computer. Digital image is composed of a finite number of elements, each of
which has a particular location & value. The elements are called pixels.
Vision is the most advanced of our senses, so it is not surprising that images play the
single most important role in human perception. However, unlike humans, who are limited to the
visual band of the EM spectrum, imaging machines cover almost the entire EM spectrum, ranging
from gamma rays to radio waves. They can also operate on images generated by sources that
humans are not accustomed to associating with images.
There is no general agreement among authors regarding where image processing stops &
other related areas such as image analysis & computer vision start. Sometimes a distinction is
made by defining image processing as a discipline in which both the input & output of a process
are images. This is a limiting & somewhat artificial boundary. The area of image analysis (image
understanding) is in between image processing & computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to
complete vision at the other. However, one useful paradigm is to consider three types of
computerized processes in this continuum: low-, mid-, & high-level processes. A low-level
process involves primitive operations such as image preprocessing to reduce noise, contrast
enhancement & image sharpening. A low-level process is characterized by the fact that both its
inputs & outputs are images. A mid-level process on images involves tasks such as segmentation,
description of those objects to reduce them to a form suitable for computer processing, &
classification of individual objects. A mid-level process is characterized by the fact that its inputs
generally are images but its outputs are attributes extracted from those images. Finally, higher-
level processing involves making sense of an ensemble of recognized objects, as in image
analysis, & at the far end of the continuum, performing the cognitive functions normally
associated with human vision.
Digital image processing, as already defined is used successfully in a broad range of
areas of exceptional social & economic value.
What is an image?
An image is represented as a two dimensional function f(x, y) where x and y are spatial
co-ordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the
image at that point.
Gray scale image:
A grayscale image is a function I(x, y) of the two spatial coordinates of the image
plane.
I(x, y) is the intensity of the image at the point (x, y) on the image plane.
I(x, y) takes non-negative values; assuming the image is bounded by a rectangle [0, a] x [0, b],
we have I: [0, a] x [0, b] -> [0, inf).
Color image:
It can be represented by three functions, R(x, y) for red, G(x, y) for green and B(x, y)
for blue.
An image may be continuous with respect to the x and y coordinates and also in
amplitude. Converting such an image to digital form requires that the coordinates as well as the
amplitude to be digitized. Digitizing the coordinates values is called sampling. Digitizing the
amplitude values is called quantization.
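Sampling and quantization can be illustrated with a small sketch, here in plain Python as a neutral stand-in (the example signal and level count are hypothetical):

```python
def quantize(value, levels, vmax=1.0):
    """Map a continuous amplitude in [0, vmax] to one of `levels`
    discrete gray levels (uniform quantization)."""
    q = int(value / vmax * levels)
    return min(q, levels - 1)   # clamp the vmax endpoint into the top level

# Sampling: evaluate a continuous 1-D "image" f(x) = x/4 at 5 discrete points.
samples = [x / 4 for x in range(5)]          # 0.0, 0.25, 0.5, 0.75, 1.0
# Quantization: map each sampled amplitude to one of 4 gray levels.
digital = [quantize(v, 4) for v in samples]
print(digital)   # [0, 1, 2, 3, 3]
```

Together the two steps turn a continuous function into a finite matrix of integer gray levels, which is exactly what the coordinate-convention discussion below assumes.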
Coordinate convention:
The result of sampling and quantization is a matrix of real numbers. We use two
principal ways to represent digital images. Assume that an image f(x, y) is sampled so that the
resulting image has M rows and N columns. We say that the image is of size M X N. The values
of the coordinates (x, y) are discrete quantities. For notational clarity and convenience, we use
integer values for these discrete coordinates. In many image processing books, the image origin
is defined to be at (x, y) = (0,0). The next coordinate values along the first row of the image are
(x, y) = (0,1). It is important to keep in mind that the notation (0,1) is used to signify the second
sample along the first row. It does not mean that these are the actual values of physical
coordinates when the image was sampled. The following figure shows the coordinate convention.
Note that x ranges from 0 to M-1 and y from 0 to N-1 in integer increments.
The coordinate convention used in the toolbox to denote arrays is different from the
preceding paragraph in two minor ways. First, instead of using (x, y), the toolbox uses the
notation (r, c) to indicate rows and columns. Note, however, that the order of coordinates is the
same as the order discussed in the previous paragraph, in the sense that the first element of a
coordinate tuple, (a, b), refers to a row and the second to a column. The other difference is that
the origin of the coordinate system is at (r, c) = (1, 1); thus, r ranges from 1 to M and c from 1 to
N in integer increments. The IPT documentation refers to these as pixel coordinates. Less
frequently the toolbox also employs another coordinate convention, called spatial coordinates,
which uses x to refer to columns and y to refer to rows. This is the opposite of our use of the
variables x and y.
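Because the 0-based (x, y) convention and the toolbox's 1-based (r, c) convention are easy to mix up, the following plain-Python sketch (hypothetical helper names, used only for illustration) makes the mapping explicit:

```python
def xy_to_rc(x, y):
    """Convert 0-based book coordinates (x, y), where x indexes rows,
    to the toolbox's 1-based (r, c) pixel coordinates."""
    return (x + 1, y + 1)

def rc_to_xy(r, c):
    """Inverse mapping: 1-based (r, c) back to 0-based (x, y)."""
    return (r - 1, c - 1)

# The origin (0, 0) in the book's convention is pixel (1, 1) in the toolbox
print(xy_to_rc(0, 0))   # (1, 1)
print(rc_to_xy(1, 1))   # (0, 0)
```

The two conversions are inverses, so round-tripping a coordinate pair returns it unchanged.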
Image as Matrices:
The preceding discussion leads to the following representation for a digitized image
function:

f(x, y) = [ f(0,0)     f(0,1)     ...  f(0,N-1)
            f(1,0)     f(1,1)     ...  f(1,N-1)
            ...        ...             ...
            f(M-1,0)   f(M-1,1)   ...  f(M-1,N-1) ]

The right side of this equation is a digital image by definition. Each element of this array
is called an image element, picture element, pixel or pel. The terms image and pixel are used
throughout the rest of our discussions to denote a digital image and its elements.
A digital image can be represented naturally as a MATLAB matrix:

f = [ f(1,1)   f(1,2)   ...  f(1,N)
      f(2,1)   f(2,2)   ...  f(2,N)
      ...      ...           ...
      f(M,1)   f(M,2)   ...  f(M,N) ]

where f(1,1) = f(0,0) (note the use of a monospace font to denote MATLAB quantities).
Clearly the two representations are identical, except for the shift in origin. The notation f(p, q)
denotes the element located in row p and column q. For example, f(6,2) is the element in the
sixth row and second column of the matrix f. Typically we use the letters M and N respectively
to denote the number of rows and columns in a matrix. A 1xN matrix is called a row vector,
whereas an Mx1 matrix is called a column vector. A 1x1 matrix is a scalar.
Matrices in MATLAB are stored in variables with names such as A, a, RGB, real array
and so on. Variables must begin with a letter and contain only letters, numerals and underscores.
As noted in the previous paragraph, all MATLAB quantities are written using mono-scope
characters. We use conventional Roman, italic notation such as f(x ,y), for mathematical
expressions
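The origin shift between the two matrix representations above can be sketched in plain Python (a stand-in for MATLAB here; the helper name and sample values are hypothetical):

```python
# A 3x4 digital image stored as a list of rows; f[x][y] uses the 0-based
# book convention, so f[0][0] corresponds to MATLAB's f(1,1).
M, N = 3, 4
f = [[10 * x + y for y in range(N)] for x in range(M)]

def f_matlab(p, q):
    """Emulate MATLAB's 1-based f(p, q) indexing on a 0-based array."""
    return f[p - 1][q - 1]

print(f[0][0])          # 0  -- pixel at the 0-based origin
print(f_matlab(1, 1))   # 0  -- the same pixel under 1-based indexing
print(f_matlab(2, 3))   # 12 -- row 2, column 3 in MATLAB terms
```

The two index conventions address the same array; only the origin differs.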

Reading Images:
Images are read into the MATLAB environment using function imread, whose syntax is
imread('filename')
Format name Description Recognized extensions
TIFF Tagged Image File Format .tif, .tiff
JPEG Joint Photographic Experts Group .jpg, .jpeg
GIF Graphics Interchange Format .gif
BMP Windows Bitmap .bmp
PNG Portable Network Graphics .png
XWD X Window Dump .xwd
Here filename is a string containing the complete name of the image file (including any
applicable extension). For example, the command line
>> f = imread('chestxray.jpg');
reads the JPEG (see the table above) image chestxray into image array f. Note the use of single
quotes (') to delimit the string filename. The semicolon at the end of a command line is used by
MATLAB for suppressing output; if a semicolon is not included, MATLAB displays the results
of the operation(s) specified in that line. The prompt symbol (>>) designates the beginning of a
command line, as it appears in the MATLAB command window.
Data Classes:
Although we work with integer coordinates, the values of the pixels themselves are not
restricted to be integers in MATLAB. The table below lists the various data classes supported by
MATLAB and IPT for representing pixel values. The first eight entries in the table are referred
to as numeric data classes. The ninth entry is the char class and, as shown, the last entry is
referred to as the logical data class.
All numeric computations in MATLAB are done in double quantities, so this is also a
frequent data class encountered in image processing applications. Class uint8 also is encountered
frequently, especially when reading data from storage devices, as 8-bit images are the most
common representations found in practice. These two data classes, class logical, and, to a
lesser degree, class uint16 constitute the primary data classes on which we focus. Many IPT
functions, however, support all the data classes listed in the table. Data class double requires 8
bytes to represent a number; uint8 and int8 require one byte each; uint16 and int16 require 2
bytes each; and uint32 requires 4 bytes.
Name Description
double Double-precision, floating-point numbers in the approximate range -10^308 to 10^308 (8 bytes per element).
uint8 Unsigned 8-bit integers in the range [0, 255] (1 byte per element).
uint16 Unsigned 16-bit integers in the range [0, 65535] (2 bytes per element).
uint32 Unsigned 32-bit integers in the range [0, 4294967295] (4 bytes per element).
int8 Signed 8-bit integers in the range [-128, 127] (1 byte per element).
int16 Signed 16-bit integers in the range [-32768, 32767] (2 bytes per element).
int32 Signed 32-bit integers in the range [-2147483648, 2147483647] (4 bytes per element).
single Single-precision floating-point numbers in the approximate range -10^38 to 10^38 (4 bytes per element).
char Characters (2 bytes per element).
logical Values are 0 or 1 (1 byte per element).
Classes int32 and single also require 4 bytes each. The char data class holds characters in
Unicode representation; a character string is merely a 1xn array of characters. A logical array
contains only the values 0 and 1, with each element stored in one byte; logical arrays are created
using the function logical or by using relational operators.
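The integer ranges in the table all follow one rule: a b-bit unsigned class spans [0, 2^b - 1] and a b-bit signed class spans [-2^(b-1), 2^(b-1) - 1]. A small plain-Python sketch (hypothetical helper, for illustration only) reproduces the table's entries:

```python
def int_range(bits, signed):
    """Representable range of a `bits`-bit integer class,
    e.g. uint8 -> (0, 255), int16 -> (-32768, 32767)."""
    if signed:
        return (-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return (0, 2 ** bits - 1)

print(int_range(8, signed=False))    # uint8:  (0, 255)
print(int_range(16, signed=False))   # uint16: (0, 65535)
print(int_range(16, signed=True))    # int16:  (-32768, 32767)
print(int_range(32, signed=True))    # int32:  (-2147483648, 2147483647)
```

Checking new entries against this rule is an easy way to catch transcription errors in range tables like the one above.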
Image Types:
The toolbox supports four types of images:
1. Intensity images;
2. Binary images;
3. Indexed images;
4. RGB images.
Most monochrome image processing operations are carried out using binary or intensity
images, so our initial focus is on these two image types. Indexed and RGB color images are
discussed afterwards.
Intensity Images:
An intensity image is a data matrix whose values have been scaled to represent
intensities. When the elements of an intensity image are of class uint8 or class uint16, they have
integer values in the range [0, 255] and [0, 65535], respectively. If the image is of class double,
the values are floating-point numbers. Values of scaled, class-double intensity images are in the
range [0, 1] by convention.
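The [0, 1] convention for class-double intensity images amounts to dividing integer pixel values by the class maximum. A plain-Python sketch (hypothetical helper name, tiny made-up image):

```python
def to_scaled_double(img_uint8):
    """Rescale a uint8-style intensity image (values 0..255) to the
    [0, 1] range conventionally used for class-double images."""
    return [[v / 255 for v in row] for row in img_uint8]

img = [[0, 128, 255]]       # darkest, mid-gray, brightest
print(to_scaled_double(img))   # 0.0, ~0.502, 1.0
```

For uint16 data the same idea applies with 65535 as the divisor.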

Binary Images:
Binary images have a very specific meaning in MATLAB. A binary image is a logical
array of 0s and 1s. Thus, an array of 0s and 1s whose values are of a numeric data class, say
uint8, is not considered a binary image in MATLAB. A numeric array is converted to binary
using function logical. Thus, if A is a numeric array consisting of 0s and 1s, we create a logical
array B using the statement
B = logical(A)
If A contains elements other than 0s and 1s, use of the logical function converts all
nonzero quantities to logical 1s and all entries with value 0 to logical 0s.
Using relational and logical operators also creates logical arrays.
To test whether an array is logical, we use the islogical function: islogical(C).
If C is a logical array, this function returns a 1; otherwise it returns a 0. Logical arrays can be
converted to numeric arrays using the data class conversion functions.
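The nonzero-to-1 behavior of MATLAB's logical function can be mimicked in plain Python (hypothetical helper, illustration only):

```python
def to_logical(a):
    """Emulate MATLAB's logical(): every nonzero entry becomes 1 (true),
    every zero entry becomes 0 (false)."""
    return [[1 if v != 0 else 0 for v in row] for row in a]

A = [[0, 3, 0],
     [7, 0, 1]]
print(to_logical(A))   # [[0, 1, 0], [1, 0, 1]]
```

Note that the value 3 and the value 1 both map to logical 1; only exact zeros survive as 0.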

Indexed Images:
An indexed image has two components:
A data matrix of integers, x
A colormap matrix, map
Matrix map is an m x 3 array of class double containing floating-point values in the range
[0, 1]. The length m of the map is equal to the number of colors it defines. Each row of map
specifies the red, green and blue components of a single color. An indexed image uses direct
mapping of pixel intensity values to colormap values. The color of each pixel is determined by
using the corresponding value of the integer matrix x as a pointer into map. If x is of class
double, then all of its components with values less than or equal to 1 point to the first row in
map, all components with value 2 point to the second row, and so on. If x is of class uint8 or
uint16, then all components with value 0 point to the first row in map, all components with
value 1 point to the second row, and so on.
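The pointer-into-map lookup can be sketched in plain Python, here assuming the uint8-style convention where index value 0 selects the first colormap row (helper name and colormap are hypothetical):

```python
def index_to_rgb(x, cmap):
    """Resolve an indexed image through an m x 3 colormap; the index
    matrix is assumed 0-based (MATLAB's uint8 convention, where value 0
    points to the first colormap row)."""
    return [[cmap[v] for v in row] for row in x]

cmap = [(0.0, 0.0, 0.0),   # row 1: black
        (1.0, 0.0, 0.0),   # row 2: red
        (1.0, 1.0, 1.0)]   # row 3: white
x = [[0, 1],
     [2, 0]]
print(index_to_rgb(x, cmap))
```

For a class-double index matrix the same lookup would use cmap[v - 1], since value 1 points to the first row under that convention.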
RGB Image:

An RGB color image is an M x N x 3 array of color pixels, where each color pixel is a
triplet corresponding to the red, green and blue components of an RGB image at a specific
spatial location. An RGB image may be viewed as a stack of three gray-scale images that, when
fed into the red, green and blue inputs of a color monitor, produce a color image on the screen.
By convention, the three images forming an RGB color image are referred to as the red, green
and blue component images. The data class of the component images determines their range of
values. If an RGB image is of class double, the range of values is [0, 1].
Similarly, the range of values is [0, 255] or [0, 65535] for RGB images of class uint8 or
uint16, respectively. The number of bits used to represent the pixel values of the component
images determines the bit depth of an RGB image. For example, if each component image is an
8-bit image, the corresponding RGB image is said to be 24 bits deep.
Generally, the number of bits in all component images is the same. In this case the
number of possible colors in an RGB image is (2^b)^3, where b is the number of bits in each
component image. For the 8-bit case the number is 16,777,216 colors.
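The (2^b)^3 color count is easy to verify with a one-line plain-Python sketch (hypothetical helper name):

```python
def rgb_color_count(bits_per_component):
    """Number of distinct colors in an RGB image with b bits per
    component image: (2**b) ** 3, one factor per color channel."""
    return (2 ** bits_per_component) ** 3

print(rgb_color_count(8))   # 16777216 -- the 24-bit-deep case above
```

Each component contributes 2^b independent levels, and the three channels combine multiplicatively, which is where the cube comes from.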