Many types of security verification exist, yet many systems still lack a properly secured mechanism. The aim of deep-learning-based feature extraction and recovery for finger-vein verification is therefore to extract and recover vein features using limited a priori knowledge: to automatically discard ambiguous regions, to label the pixels of clear regions as foreground or background, and to recover missing finger-vein patterns in the segmented image. Such a scheme avoids heavy manual labeling and may also reduce labeling errors, especially for ambiguous pixels. The model is thus able to extract vein patterns from raw images in a robust way, which leads to a significant improvement in finger-vein verification accuracy. We perform a rigorous experimental analysis showing that our scheme succeeds in recovering missing patterns, which further improves verification performance.
CONTENTS
4 Proposed system
4.1 Modules
4.2 Acquisition of an Infra-red image of the finger
4.3 Normalization of the image
4.4 Extraction of Finger Vein patterns
4.5 Matching
5 System Requirements
5.1 Hardware Requirements
5.2 Software Requirements
5.3 Software Environment
5.4 Different modes of programming in Python
5.5 Flask framework
6 System Study
6.1 Feasibility Study
6.2 System Testing
6.3 Types of Test
6.4 System Design
6.5 Data flow diagram
6.6 Use Case Diagram
7 Conclusion
Reference
LIST OF FIGURES
FIGURE NO  TITLE
1.1  Categories of Biometrics
1.2  Machine Learning Algorithm
2.1  Finger Vein Image Acquiring Method

LIST OF ABBREVIATIONS
CHAPTER - 1
INTRODUCTION
1.1 INTRODUCTION TO FINGER-VEIN VERIFICATION
Finger veins are hidden under the skin where red blood cells are flowing. In biometrics,
the term vein does not entirely correspond to the terminology of medical science. Its network
patterns are used for authenticating the identity of a person, in which the approximately 0.3 to
1.0 mm thick vein is visible by near infrared rays. In this definition, the term finger includes
not only the index, middle, ring, and little fingers, but also the thumb. Moreover, finger vein
systems have some very powerful advantages. First, there is no property of latency: the vein
patterns in fingers stay where they belong, where no one can see them, in the fingers.
This is a huge privacy consideration.
Vein recognition is generally used in high-risk deployments where space is not an issue.
Physical access control and corporate banking are verticals where the modality is more
common. But, like all biometrics, advances in mobile technology and innovations in design
and manufacturing are broadening the areas of application for the vascular modality.
Mobile devices are beginning to surface with vein recognition capabilities. Fujitsu's
vein-scanning tablet can authenticate users via palm vein, and multiple smartphones from
ZTE ship with Eyeprint ID software, which uses a smartphone camera to capture vein
patterns in a user's eye. There is even a rumor circulating, based on a recent patent filing
by Apple, that the Apple Watch 2 will sport some sort of vein recognition.
1.2.1 SECURE
As finger vein patterns are found internally within the body, forgery is extremely
difficult. Dryness or roughness on the surface of the skin also has no effect on the accuracy of
vein pattern authentication.
1.2.2 ACCURATE
Rates for acceptance of false users or rejection of true users are among the lowest for
biometric technologies, making finger vein authentication a reliable security solution.
1.2.3 FAST
Vein pattern matching is completed within the blink of an eye, affording users a
speedy authentication experience without the hassle and without the wait.
1.2.4 SMALL
Finger vein authentication devices are compact and therefore applicable as embedded
devices in a variety of applications.
1.2.5 USER-FRIENDLY
The vein patterns of each finger are unique, so each individual can register multiple
fingers as "back-up" for authentication purposes. Registration is possible even for sweaty,
oily or dirty fingers.
1.3.1 Emerging Trend in Biometric Processing
However, fingerprint-based systems are more exposed to forgery, as fingerprints are
exposed to others, and sweat or dryness can hinder obtaining a clear image. This degrades
system performance, and fingerprints can even be replicated with little effort and
technology. To overcome this drawback, an emerging process was identified that uses the
finger vein pattern to identify individuals.
1.4.2 SECURITY
For decades, passwords have been composed of numbers, letters, symbols, etc., and are
easy to crack. Many hacking incidents happen every year, constantly resulting in loss of
information, money, and privacy.
With the implementation of biometric identification, the system becomes far harder to
hack than password-based systems, which is of great help, specifically for business
communities and the financial sector, who fight security complications day in and day out
over an extensive period of time.
1.4.3 ACCURACY
Traditional security systems consume huge amounts of money, time, and valuable
resources. We may also note that systems secured by passwords, smart cards, or personal
identification numbers were not always accurate and tend to suffer from hacking and
misuse. An identification system based on biometrics, however, which uses physical
traits such as vein patterns, retina scans, or fingerprints, will always provide better,
highly accurate results anytime, anywhere.
1.4.4 ACCOUNTABILITY
With a traditional security system, codes and passwords can be broken by anybody
and personal information hacked; this risk is continuous. In a biometric security system,
the persons using it can be tracked and held accountable for any misuse or abuse, thus
allowing full accountability for the safety of information.
1.4.5 CONVENIENT
With traditional security systems, individuals routinely forget passwords, secret
codes, etc., in their day-to-day life and tend to undergo nerve-wracking, cumbersome
procedures to retrieve them. Even though we follow handy methods to remember codes
and passwords, a biometric system always poses a more convenient and reliable solution,
as the credentials used are with the individual forever and require no memorization or
maintenance.
1.4.6 SCALABILITY
Whatever the need for which a biometric system is implemented, it can be scaled
without losing its credentials and used for diversified applications such as
government-scheme projects, bank security systems, workforce management, etc. This
scalability is not available with traditional security systems.
1.4.7 ROI
Compared to traditional security systems, biometric solutions provide the best
return on investment. The capital expenditure incurred on maintenance and monitoring of
traditional systems is much higher, and the gestation period is longer. A modern
biometric system requires a smaller investment and has a quicker gestation period.
1.4.8 FLEXIBILITY
Modern biometric systems are far more flexible than traditional security systems.
The system is based on our own natural traits, which removes the difficulty of memorizing
the numbers, letters, and symbols used in traditional systems.
1.4.9 TRUSTABLE
The modern, handy technology used in biometric systems makes them user-friendly
and trustworthy. Biometric access-control systems have become more trusted by financial,
security, and social organizations for their operations.
The existing databases available at various social organizations can be enhanced to
provide accurate details about individuals at any given point of time. This helps
corporates and organizations avoid spending huge capital on maintaining the biometric
details of their employees.
Considering the above advantages, we can expect that, with the development of
biometric technology and processes, there will be great advances in personal
identification, clearly outdating traditional recognition methods. Many developed and
developing countries have already implemented biometric identification for regular
business and social applications to gain an advantage over traditional methods.
Prominent researchers have noted that "great power brings with it greater
responsibility", and this applies to biometric identification technology as well. Even
though biometric technologies have much in their favor and are considered a positive
development, bringing many benefits for individuals and society as a whole, they also
have a darker side with some disadvantages of their own.
Biometric data, such as fingerprints or facial scans, raises privacy issues as it is highly
personal. If compromised, individuals may be at risk of identity theft.
Biometric systems are not foolproof. Factors like environmental conditions, quality of
sensors, and variations in individuals' biometric features can lead to false positives or
negatives.
1.5.3 COST
Implementing biometric systems can be expensive due to the need for specialized
hardware and software. This cost can be a barrier for widespread adoption, especially for
smaller organizations.
1.5.4 INHERENT BIASES
Some biometric technologies may exhibit biases based on race, ethnicity, or gender.
This can lead to unfair outcomes, potentially discriminating against certain demographic
groups.
Some people are uncomfortable with the idea of having their biometric data stored
and used, fearing misuse or unauthorized access. This lack of social acceptance can impede
the successful implementation of biometric systems.
The legal landscape surrounding biometric data is complex and varies across
jurisdictions. Organizations must navigate through regulatory requirements, which can be
challenging and time-consuming.
Biometric modalities work ideally with physical characteristics of human beings
such as finger veins, fingerprints, iris, palm, etc., which cannot be changed for the
comfort of individuals, nor by ageing, climate, or environment.
Passwords can be reset if cracked, but physical traits such as fingerprints or the retina
are fixed and cannot be changed. Ideally, the biometric data collected are stored in
secure databases or with organizations designed to provide such services. Recent
incidents of data being shared with third parties without the consent of individuals
impact the security of information collected and stored by biometric systems, and even
if individuals want to change their credentials, it is not possible.
1.5.13 COST
The devices used, the software to be developed, and the hardware to be procured
and maintained involve huge costs compared to traditional security systems. A huge
amount of money must be spent to build and maintain the infrastructure required for a
foolproof biometric system.
1.5.14 DELAY
Where a large number of employees need to enter and exit every day, a biometric system
will cause considerable delay in the access and identification process.
1.5.15 COMPLEXITY
The biometric systems deployed today for identification are complex and highly
technical in nature, which is an added disadvantage.
A new entrant will always have difficulty understanding the operation of the whole
system, and the organization needs to employ highly technical, trained professionals for
the development and implementation of biometric identification systems, which adds
further to the complexity.
1.5.17 DISTRIBUTION
Ideally, the biometric systems deployed to identify a person rely on contact with
the equipment, which creates an unhygienic environment. Individuals with infectious
diseases may spread them to other individuals or employees using the same biometric
devices. This is a most disturbing disadvantage of biometric systems and needs to be
addressed.
a much harder time for these individuals to pass the biometric identification process
every time.
Any new technology is prone to its own criticism, and state-of-the-art biometric
systems are no exception. Every process or system has its own advantages and
disadvantages, and their magnitude decides the implementation. Given the stated
advantages, we cannot conclude that biometric identification is foolproof and
user-friendly; likewise, considering only the disadvantages, we should not conclude that
the system is unusable. Weighing its merits and demerits, we should evolve toward an
ideal, hassle-free, and user-friendly use of modern biometric identification systems.
Even though finger-vein-based authentication is an eye-opener in the field of
personal identification, it does have some bottlenecks: difficulties in capturing vein
images due to IR light parameters, outer skin appearance (skin disease, dust,
uncleanliness, etc.), uneven illumination, pressure, environmental conditions, and so on.
Moreover, the captured vein images also tend to be affected by noise, distortion,
scaling, etc. (Sprawls 1993). Despite these constraints and bottlenecks, finger-vein
biometric authentication remains the most promising solution for the near future.
Over and above the external factors, we can also infer that factors relating to the
collection of vein images, such as translation, orientation, rotation, finger pressure,
image scale, and collection posture, can affect its features. Due to these factors, we
need robust ROI localization, which is decisive for a finger-vein-based personal
identification system.
A finger-vein-based identification method, as a biometric tool, has various
advantages over traditional ones, as vein patterns are unique and cannot be forged. It
is very hard to replace the vein pattern of a finger with a forged one due to its complex
internal features. This tool has high accuracy ratings, and the equipment used is much
smaller than that used for fingerprint or palm-print recognition. We can acquire finger
vein patterns using near-infrared (NIR) light and a camera device. From a hygienic point
of view, this method is much safer, as there is no physical contact between the sensor
and the finger. Finger vein technology has provided significant performance improvements
in personal identification over a span of many years. Based on existing research, we can
infer that not only external factors such as sprinkling, patchy illumination, environment,
and temperature can affect its features, but also factors relating to the collection of
vein images, such as translation, orientation, rotation, finger pressure, image scale,
and collection posture. Due to these factors, we need robust ROI localization, which is
decisive for a finger-vein-based personal identification system.
1.7.2 TRADITIONAL PROGRAMMING
Machine learning is meant to overcome the limitation of traditional programming: the
machine learns how the input and output data are correlated and writes the rule itself.
Programmers do not need to write new rules each time there is new data; the algorithms
adapt in response to new data and experience to improve efficacy over time.
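As a minimal illustration of this contrast (with invented data and a hypothetical `learn_threshold` helper, not any particular library), a traditional program hard-codes its rule, while a learning program derives the rule from labeled examples:

```python
def learn_threshold(examples):
    """Derive a decision boundary from (value, label) pairs instead of
    hard-coding it. Assumes the two classes are separable on one value."""
    pos = [v for v, label in examples if label]
    neg = [v for v, label in examples if not label]
    # midpoint between the largest negative and smallest positive example
    return (max(neg) + min(pos)) / 2

# Traditional programming: the rule is written by hand.
def is_tall_hardcoded(height_cm):
    return height_cm > 180  # the programmer picked 180

# Machine learning: the rule comes from the data.
data = [(150, False), (165, False), (172, False), (185, True), (190, True)]
boundary = learn_threshold(data)  # 178.5 for this toy data

def is_tall_learned(height_cm):
    return height_cm > boundary
```

With new data, `learn_threshold` simply produces a new boundary; no rule has to be rewritten by hand.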
The core objectives of machine learning are learning and inference. First, the
machine learns through the discovery of patterns.
This discovery is made thanks to data, and one crucial part of the data scientist's job
is to choose carefully which data to provide to the machine. The list of attributes used
to solve a problem is called a feature vector; you can think of a feature vector as a
subset of the data used to tackle the problem. The machine uses algorithms to simplify
reality and transform this discovery into a model. The learning stage is therefore used
to describe the data and summarize it into a model.
For instance, suppose the machine is trying to understand the relationship between an
individual's wage and the likelihood of going to a fancy restaurant. If it finds a
positive relationship between wage and going to a high-end restaurant, that relationship
is the model.
Inferring
When the model is built, it is possible to test how powerful it is on never-before-seen
data. The new data are transformed into a feature vector, passed through the model, and a
prediction is produced. This is the beautiful part of machine learning: there is no need
to update the rules or retrain the model. You can use the previously trained model to
make inferences on new data.
The life of a machine learning program is straightforward and can be summarized in the
following points:
1. Define a question
2. Collect data
3. Visualize data
4. Train the algorithm
5. Test the algorithm
6. Collect feedback
7. Refine the algorithm
8. Loop over steps 4-7 until the results are satisfying
9. Use the model to make a prediction
Once the algorithm gets good at drawing the right conclusions, it applies that knowledge to
new sets of data.
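The numbered steps above can be sketched end to end in a few lines; the nearest-mean model, the toy numbers, and the function names here are illustrative assumptions, not a reference implementation:

```python
def train(data):
    """Step 4: fit a one-dimensional nearest-mean model."""
    groups = {}
    for x, y in data:
        groups.setdefault(y, []).append(x)
    return {label: sum(xs) / len(xs) for label, xs in groups.items()}

def predict(model, x):
    """Step 9: use the model to label new data."""
    return min(model, key=lambda label: abs(x - model[label]))

def accuracy(model, data):
    """Step 5: test the algorithm on held-out examples."""
    return sum(predict(model, x) == y for x, y in data) / len(data)

# Steps 2-3: collect (and, in a real project, visualize) the data.
train_data = [(1.0, "a"), (1.2, "a"), (0.9, "a"), (3.0, "b"), (3.3, "b")]
test_data = [(1.1, "a"), (3.1, "b")]

model = train(train_data)      # step 4
score = accuracy(model, test_data)  # step 5; refine and loop if too low
```

Steps 6-8 would wrap the last two lines in a loop that adjusts the model or data until `score` is satisfying.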
Machine Learning Algorithms and Where They Are Used
1.9 CLASSIFICATION
Imagine you want to predict the gender of a customer for a commercial. You will start
gathering data on height, weight, job, salary, purchasing basket, etc. from your customer
database. You know the gender of each of your customers; it can only be male or female.
The objective of the classifier is to assign a probability of being male or female (i.e.,
the label) based on the information (i.e., the features) you have collected.
When the model has learned how to recognize male or female, you can use new data to make
a prediction. For instance, you have just received information from an unknown customer
and want to know whether it is a male or a female. If the classifier predicts male = 70%,
the algorithm is 70% sure this customer is a male and 30% that it is a female.
The label can have two or more classes. The machine learning example above has only two
classes, but if a classifier needs to predict objects, it may have dozens of classes
(e.g., glass, table, shoes; each object represents a class).
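A rough sketch of such a probabilistic classifier (the centroid approach, feature choices, and numbers are invented for illustration; a real system would use logistic regression or similar):

```python
import math

def fit_centroids(rows, labels):
    """Average the feature vectors of each class ("male" / "female")."""
    sums, counts = {}, {}
    for row, label in zip(rows, labels):
        counts[label] = counts.get(label, 0) + 1
        prev = sums.get(label, [0.0] * len(row))
        sums[label] = [a + b for a, b in zip(prev, row)]
    return {label: [v / counts[label] for v in sums[label]] for label in sums}

def predict_proba(centroids, row):
    """Turn negative distances to each centroid into class probabilities."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    scores = {label: math.exp(-dist(row, c)) for label, c in centroids.items()}
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}

# Toy features: (height_cm, weight_kg) -- purely illustrative numbers.
rows = [[180, 80], [175, 78], [162, 55], [158, 52]]
labels = ["male", "male", "female", "female"]
centroids = fit_centroids(rows, labels)
probs = predict_proba(centroids, [178, 79])  # probabilities summing to 1
```

The returned dictionary plays the role of the "male = 70%" output described above.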
1.9.1 REGRESSION
When the output is a continuous value, the task is regression. For instance, a
financial analyst may need to forecast the value of a stock based on a range of features
such as equity, previous stock performance, and macroeconomic indices. The system is
trained to estimate the price of the stocks with the lowest possible error.
Unsupervised learning
In unsupervised learning, an algorithm explores input data without being given an
explicit output variable (e.g., it explores customer demographic data to identify
patterns). You can use it when you do not know how to classify the data and want the
algorithm to find patterns and classify the data for you.
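The regression case can be sketched with ordinary least squares on one hypothetical feature (the index and price values below are invented):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a * x + b with a single feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical stock values against a single macroeconomic index.
index = [1.0, 2.0, 3.0, 4.0]
price = [10.0, 12.0, 14.0, 16.0]
a, b = fit_line(index, price)
predicted = a * 5.0 + b  # forecast for an unseen index value
```

Minimizing the squared error is what "trained to estimate the price with the lowest possible error" means for this model.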
Machine learning can work entirely autonomously in many fields without the need
for human intervention, for example, robots performing essential process steps in
manufacturing plants.
1.10.3 FINANCE INDUSTRY
• Machine learning is growing in popularity in the finance industry. Banks mainly
use ML to find patterns in the data and to prevent fraud.
1.10.4 GOVERNMENT ORGANIZATION
The government makes use of ML to manage public safety and utilities. Take the
example of China with its massive face recognition; the government uses artificial
intelligence to prevent jaywalking.
1.10.5 HEALTHCARE INDUSTRY
Healthcare was one of the first industries to use machine learning, with image detection.
1.10.6 MARKETING
AI is used broadly in marketing thanks to abundant access to data. Before the
age of mass data, researchers developed advanced mathematical tools such as Bayesian
analysis to estimate the value of a customer. With the boom of data, marketing
departments rely on AI to optimize customer relationships and marketing campaigns.
In terms of sales, this means an increase of 2 to 3% due to the potential reduction in
inventory costs.
1.11.2 EXAMPLE OF MACHINE LEARNING GOOGLE CAR
For example, everybody knows the Google car. The car is full of lasers on the roof which
tell it where it is relative to the surrounding area. It has radar in the front, which
informs the car of the speed and motion of all the cars around it. It uses all of that
data to figure out not only how to drive the car but also to predict what the drivers
around it are going to do. What's impressive is that the car processes almost a gigabyte
of data a second.
1.11.3 WHY IS MACHINE LEARNING IMPORTANT?
Machine learning is the best tool so far to analyze, understand and identify a pattern in the
data. One of the main ideas behind machine learning is that the computer can be trained to
automate tasks that would be exhaustive or impossible for a human being. The clear breach
from the traditional analysis is that machine learning can take decisions with minimal human
intervention.
Take the following example for this ML tutorial: a real estate agent can estimate the
price of a house based on his own experience and his knowledge of the market.
A machine can be trained to translate the knowledge of such an expert into features: all
the characteristics of the house, neighborhood, economic environment, etc. that make the
price difference. It probably took the expert years to master the art of estimating
house prices, and his expertise gets better after each sale.
The machine, by contrast, needs millions of examples to master this art. At the very
beginning of its learning, the machine makes mistakes, somewhat like a junior salesman.
Once it has seen all the examples, it has enough knowledge to make its estimations with
remarkable accuracy, and it is also able to correct its mistakes accordingly.
Most of the big companies have understood the value of machine learning and of holding
data. McKinsey has estimated that the value of analytics ranges from $9.5 trillion to
$15.4 trillion, of which $5 to $7 trillion can be attributed to the most advanced AI
techniques.
Machine learning (ML) is the study of computer algorithms that improve automatically
through experience. It is seen as a part of artificial intelligence. Machine learning algorithms
build a model based on sample data, known as "training data", in order to make predictions or
decisions without being explicitly programmed to do so. Machine learning algorithms are
used in a wide variety of applications, such as email filtering and computer vision, where it is
difficult or unfeasible to develop conventional algorithms to perform the needed tasks.
A subset of machine learning is closely related to computational statistics, which focuses on
making predictions using computers; but not all machine learning is statistical learning. The
study of mathematical optimization delivers methods, theory and application domains to the
field of machine learning. Data mining is a related field of study, focusing on exploratory data
analysis through unsupervised learning. In its application across business problems, machine
learning is also referred to as predictive analytics.
1.11.4 OVERVIEW
Machine learning involves computers discovering how they can perform tasks without being
explicitly programmed to do so. It involves computers learning from data provided so that
they carry out certain tasks. For simple tasks assigned to computers, it is possible to program
algorithms telling the machine how to execute all steps required to solve the problem at hand;
on the computer's part, no learning is needed. For more advanced tasks, it can be challenging
for a human to manually create the needed algorithms. In practice, it can turn out to be more
effective to help the machine develop its own algorithm, rather than having human
programmers specify every needed step.
The discipline of machine learning employs various approaches to teach computers to
accomplish tasks where no fully satisfactory algorithm is available. In cases where vast
numbers of potential answers exist, one approach is to label some of the correct answers as
valid. This can then be used as training data for the computer to improve the algorithm(s) it
uses to determine correct answers. For example, to train a system for the task of digital
character recognition, the MNIST dataset of handwritten digits has often been used.
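In the same spirit as MNIST, though on deliberately tiny made-up patterns rather than the real dataset, a nearest-neighbour classifier can label a "digit" by its closest training example:

```python
def hamming(a, b):
    """Count the positions where two binary patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def nearest_neighbour(train, pixels):
    """Label a pattern with the class of its closest training example."""
    return min(train, key=lambda example: hamming(example[0], pixels))[1]

# 3x3 binary stand-ins for handwritten strokes, flattened row by row.
train = [
    ((0, 1, 0, 0, 1, 0, 0, 1, 0), "1"),  # vertical bar
    ((1, 1, 1, 0, 0, 1, 0, 0, 1), "7"),  # top bar plus right column
]
query = (0, 1, 0, 0, 1, 0, 0, 1, 1)  # a slightly distorted "1"
label = nearest_neighbour(train, query)
```

Real MNIST training works on 28x28 grey-level images with far stronger models, but the labeled-examples-to-rule idea is the same.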
Supervised learning: The computer is presented with example inputs and their desired
outputs, and the goal is to learn a general rule that maps inputs to outputs.
Unsupervised learning: No labels are given to the learning algorithm, leaving it on its
own to find structure in its input. Unsupervised learning can be a goal in itself
(discovering hidden patterns in data) or a means towards an end (feature learning).
Reinforcement learning: A computer program interacts with a dynamic environment in
which it must perform a certain goal (such as driving a vehicle or playing a game against an
opponent). As it navigates its problem space, the program is provided feedback that's
analogous to rewards, which it tries to maximize.
Other approaches have been developed which don't fit neatly into this three-fold
categorization, and sometimes more than one is used by the same machine learning system,
for example topic modeling, dimensionality reduction, or meta-learning.
As of 2020, deep learning has become the dominant approach for much ongoing work in the
field of machine learning.
Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms
studied in the machine learning field: "A computer program is said to learn from
experience E with respect to some class of tasks T and performance measure P if its
performance at tasks in T, as measured by P, improves with experience E." This definition
of the tasks with which machine learning is concerned offers a fundamentally operational
definition rather than defining the field in cognitive terms. This follows Alan Turing's
proposal in his paper "Computing Machinery and Intelligence", in which the question "Can
machines think?" is replaced with the question "Can machines do what we (as thinking
entities) can do?"
Modern-day machine learning has two objectives: one is to classify data based on models
which have been developed; the other is to make predictions for future outcomes based on
these models. A hypothetical classification algorithm may use computer vision of moles
coupled with supervised learning to train it to classify cancerous moles, whereas a
machine learning algorithm for stock trading may inform the trader of potential future
movements.
The question of the difference between ML and AI is answered by Judea Pearl in The
Book of Why: ML learns and predicts based on passive observations, whereas AI implies an
agent interacting with the environment to learn and take actions that maximize its chance
of successfully achieving its goals.
CHAPTER 2
LITERATURE SURVEY
David Mulyono proposes a methodology covering both off-line and on-line acquisition of
finger vein images. Images acquired in real time are called on-line images; images
acquired from an already existing source, such as a database or historical records, are
called off-line images. On-line images can be captured with equipment such as a web
camera or a device designed around light-transmission technology. Light reflection and
light transmission are the two prominent acquisition methods, and the key difference
between them is where the near-infrared light source is positioned. In the
light-reflection method, the finger vein pattern is acquired from near-infrared light
reflected from the palmar surface; in the light-transmission method, the light penetrates
the finger and the transmitted light is captured to reveal the vein pattern. Comparing
the two, the transmission method can acquire vein images with higher contrast, and most
devices apply this methodology.
Figure 2.1: Finger Vein image acquiring methods
Ton et al. (2013) presented a distinctive device using the light-transmission method for
acquiring finger vein images (Huang 2017). The device is designed around infrared light
and can capture vein patterns more distinctly and accurately.
Yin et al. (2011) proposed the first homologous multimodal database of its kind,
named SDUMLA-FV, constructed by Shandong University (Wang 2012). Alongside this database,
Ajay and Zhou published another finger vein database, considered an extension of it,
called the HKPU-FV database. Subsequently, another database was formed by the University
of Twente, named the UTFV database. For practical research purposes, the two finger vein
databases made available by Tsinghua University and Chonbuk National University are used:
the first is called the THU-FV database, which is part of the homologous multimodal
database, and the second is termed the MMCBNU_6000 database.
All these databases were acquired with light-transmission technology and differ in size,
contrast, background, and quality.
In finger-vein-based biometric systems, various pre-processing tasks are performed,
retrieving edge information, enhancing contrast and brightness, removing noise,
sharpening images, etc., in order to augment the quality of the captured image.
These pre-processing steps improve image quality, which is used as input in later stages
to obtain more relevant information for authentication. Indeed, the better the quality of
the image, the better the accuracy that can be gained, which helps improve the
authenticity of the biometric system. Primarily, the pre-processing activity involves
segmentation and alignment of finger vein images, denoising, detection of the Region of
Interest (ROI), normalization of image size, and image augmentation.
Zhi Liu (2012) proposed a method for identifying finger vein patterns on mobile devices, in
which alignment and segmentation of the vein image are carried out first.
Bi-cubic interpolation is used for resizing, and enhancement is performed after
segmentation and alignment. To enhance the grey-level contrast of the image, the principle
of histogram equalization is applied. The image segmentation function involves three
steps: (a) the finger edge is detected by applying the Canny edge detection algorithm,
(b) the broken edges of the finger are joined to achieve edge smoothing by applying a
morphological dilation algorithm, and (c) the region inside the finger edges is filled
with white pixels to form the finger mask.
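The three-step segmentation above (edge detection, edge dilation, region filling) can be sketched as follows. This is an illustrative pure-Python version: a simple gradient threshold stands in for the Canny detector, and the image values and threshold are assumptions for the example, not parameters from the cited paper.

```python
# Minimal sketch of finger segmentation: detect edges, dilate them to
# close gaps, then fill each row between the outermost edge pixels.

def detect_edges(img, thresh=50):
    """Mark pixels whose horizontal intensity jump exceeds thresh."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w):
            if abs(img[y][x] - img[y][x - 1]) > thresh:
                edges[y][x] = 1
    return edges

def dilate(mask):
    """3x3 morphological dilation to join broken edge segments."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]:
                        out[y][x] = 1
    return out

def fill_between_edges(edges):
    """Fill the region between the outermost edges of each row."""
    h, w = len(edges), len(edges[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        cols = [x for x in range(w) if edges[y][x]]
        if cols:
            for x in range(cols[0], cols[-1] + 1):
                mask[y][x] = 1
    return mask

# Tiny synthetic IR image: dark background (10), bright finger band (200).
img = [[200 if 2 <= x <= 6 else 10 for x in range(10)] for _ in range(5)]
finger_mask = fill_between_edges(dilate(detect_edges(img)))
```

In a real system the gradient step would be replaced by Canny edge detection on the infrared image, but the dilate-then-fill structure is the same.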
2.3 HUMAN IDENTIFICATION USING FINGER VEIN
Fernando C. Monteiro (2015) proposed a novel technique for segmenting finger vein
images based on edge information, obtained using a morphological watershed
algorithm and a spectral method. R. V. Patil (2010) claims that better results can be
provided by K-means image segmentation if the number of clusters is
estimated accurately. Edge detection is considered the major step
in estimating the number of clusters accurately. To detect edges and find clusters, phase
congruency was proposed by the author. The clusters are formed and identified based on
thresholding and Euclidean distance computation, and K-means is used to segment the
image. Experimental results on nine different images show that the clusters identified by
the proposed method were accurate and optimal.
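The K-means segmentation idea above can be sketched on pixel intensities. This is a minimal 1-D version with k = 2; the intensity values and cluster count are example assumptions, not values from the cited paper.

```python
# Minimal 1-D K-means: cluster pixel intensities into k groups and use
# the cluster assignment as a segmentation label.

def kmeans_1d(values, k=2, iters=20):
    # Initialize centers spread evenly across the value range.
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value goes to its nearest center.
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        # Update step: move each center to the mean of its members.
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers

# Dark background pixels vs. bright pixels.
pixels = [12, 15, 10, 200, 210, 14, 205]
labels, centers = kmeans_1d(pixels)
```

In practice the cluster count would be estimated first (e.g. via the phase-congruency edge analysis the author proposes) rather than fixed at 2.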
Weihong Cui and Yi Zhang (2010) proposed a new multi-scale segmentation method based
on edge-based automatic threshold selection. For the calculation of edge weights, two
prominent measures, band weight and the Normalized Difference Vegetation Index
(NDVI), are used in the segmentation of images. In addition, an edge-based
thresholding technique is also used for image segmentation. Based on
experiments on multi-scale resolution images, the proposed methodology proves
to keep the boundaries and object information intact while segmenting them.
Anna Fabijańska (2011) presented a state-of-the-art technique for image segmentation
using a variance filter, which helps to identify the edge positions in an image. In
the proposed technique, the edge information is extracted and compared using a Sobel
gradient filter with K-means. The method is appropriate when a 9 x 9 filtering window is
used to extract the edge information, which helps to match the shape of the
image object accurately; a smaller filtering window can be used for larger, more detailed images.
Mohammed J. Islam (2011) found that, in the pharmaceutical industry, computer vision is
the best method for real-time inspection of images. The author proposed a new system for
quality inspection of images by applying an edge-based image segmentation technique,
in which edges are detected with the Sobel edge detector owing to its
noise-suppression property.
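The Sobel operator used by both of the works above can be sketched directly: two 3x3 kernels estimate the horizontal and vertical gradients, and their magnitude marks edge pixels. The test image below is an illustrative assumption.

```python
# Sobel gradient magnitude in pure Python.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Return the gradient magnitude at each interior pixel."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            mag[y][x] = (gx * gx + gy * gy) ** 0.5
    return mag

# Vertical step edge between columns 2 and 3: magnitude is high near
# the step and zero in the flat regions.
img = [[0, 0, 0, 255, 255, 255] for _ in range(5)]
mag = sobel_magnitude(img)
```

The weighted center row/column of each kernel is what gives Sobel its mild noise-suppression property relative to a plain difference filter.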
2.3.1 STATEMENT OF MD KHALED HASAN (2021)
Md Khaled Hasan (2021) presented a detailed comparison among the algorithms, both
overall and under sub-branches, providing the strengths and limitations of these
algorithms together with a novel literature survey.
Gang Chen (2009) observed that, in real-time image processing, extracting image
information as fast as required is a problem for a given image, and the segmentation
process is very time consuming if region-based methods are used.
To overcome this bottleneck, a new region-based method, the Least Square Method, was
proposed, which can detect objects quickly and accurately by using a weight matrix and
taking local image information into account. The proposed technique provides fast,
near-optimal segmentation results compared with other traditional techniques, and
extracts image features more effectively and accurately in a given situation.
Zhen Hua and Yewei Li (2010) proposed an innovative method for image segmentation
based on two approaches, region growing and the watershed method, to improve
visual attention. Using Gauss-Laplace and Gabor filters, the edges and grey
values of the image are extracted, and subsequently an ANN is used for ROI extraction.
Experiments were conducted on natural images and the results were compared
with other methods, showing that the newly developed algorithm performs well for image
segmentation and also helps to retain the salient edges of the images.
Tiancan Mei (2011) notes that, in the process of image segmentation, the Markov
Random Field (MRF) suffers from long-range interactions. An improved version,
based on a region-based multi-scale segmentation method, was proposed to overcome
this bottleneck. Using a dataset of natural scene images and a multi-scale MRF model
over regions as the parameterization, this algorithm is considered a better option than
other image segmentation techniques. From the derived results it was concluded that,
compared to MSAP, the RSMAP algorithm provides improved image segmentation results.
In order to extract finger vein patterns from non-uniform images robustly, the proposed
method should repeatedly track the dark lines present in an image. The extraction is
defined based on the number of times the tracking lines pass through each point. A few
papers published by eminent researchers, describing processes and modalities for
extracting finger vein images, are discussed in this thesis work.
N. Miura, A. Nagasaka, and T. Miyatake (2004) proposed a new method for feature
extraction based on a repeated line tracking technique. This method extracts the pattern
by counting the number of times the tracking lines pass through each point [63]. It
identifies local dark lines by starting line tracking at various positions and moving
along the lines pixel by pixel. During this process, if a dark line is no longer
detectable, a new tracking operation is started from another position, until all the
dark lines present in the image have been identified. This process is carried out by
repeatedly executing local line tracking operations, and the vein features are extracted
from the locations where the tracked lines overlap. Once this process is completed, the
next stage is minutiae matching, which involves three stages. The literature survey
carried out on papers published by various authors on these processes is evaluated and
presented in this thesis work.
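The repeated line tracking idea above can be illustrated with a much-simplified sketch: trackers start at random seed pixels, repeatedly step to the darkest neighbouring pixel, and a locus-space counter records how often each pixel is visited; frequently visited pixels are taken as vein pixels. The step rules, seed count and synthetic image are illustrative assumptions, far simpler than the actual Miura algorithm.

```python
# Simplified dark-line tracking with a locus-space accumulator.
import random

def track_dark_lines(img, seeds=200, steps=15, rng=None):
    rng = rng or random.Random(0)
    h, w = len(img), len(img[0])
    locus = [[0] * w for _ in range(h)]
    for _ in range(seeds):
        y, x = rng.randrange(h), rng.randrange(w)
        for _ in range(steps):
            locus[y][x] += 1
            # Candidate moves: 4-neighbourhood inside the image.
            cand = [(y + dy, x + dx)
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            # Step to the darkest neighbour (veins appear dark in IR).
            y, x = min(cand, key=lambda p: img[p[0]][p[1]])
    return locus

# Synthetic image: dark vein line (value 20) on bright tissue (200).
img = [[20 if y == 3 else 200 for x in range(8)] for y in range(7)]
locus = track_dark_lines(img)
vein_votes = sum(locus[3])    # visits accumulated on the dark line
other_votes = sum(locus[0])   # visits on a background row
```

In the real method the tracker also uses a moving direction, a curvature-based depth check and probabilistic restarts; this sketch only shows why repeated tracking concentrates votes on the dark vein lines.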
Fei Liu et al. (2014) proposed a new technique for minutiae pairing called
Singular Value Decomposition (SVD). False-pair removal is performed
using the Local Extensive Binary Pattern (LEBP), which applies rich local
characteristics for minutiae representation; LEBP is the combination of the local
directional binary pattern and the local multilayer binary pattern. For image
classification and matching, a Support Vector Machine (SVM) is widely used.
Jian-Da Wu et al. (2011) proposed a new method for finding the hyperplane classifier of a
Support Vector Machine. The SVM was chosen for its wide applicability and its
capacity to handle nonlinearly separable data. This classifier is robust in nature
and can operate in much less time.
A. Kumar et al. (2012) proposed a new technique for finger vein matching based
on the finger vein and its dorsal texture. By combining these two modalities, a
score-level combination is calculated, which gives a better decision level than
feature-level combination. The user is declared an impostor when the input image does
not match the combination of the finger vein and its dorsal texture. To calculate the
score level, holistic and nonlinear fusions are considered. In holistic fusion, the
prior knowledge available in the dynamic combination process of matched scores is
utilized; subsequently, in nonlinear fusion, the combined score is adjusted based on
the degree of agreement between the two matching scores.
Wonseok Song et al. (2011) proposed a novel methodology based on mean curvature for a
finger vein verification system. The two-dimensional points of the finger vein pattern
are matched using the Hausdorff distance. The Multiple Pixel Ratio (MPR) is defined as
the ratio of the number of matched pixels to the total number of pixels in the finger
vein patterns. This method achieves an EER of 0.761% [75]. Pixel matching cannot be
achieved in the presence of rotation or translation; in that case the EER of the
mean-curvature method is reported as 0.0025%.
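The Hausdorff distance used above to compare two sets of 2-D vein points can be written out directly. The two point sets below are small made-up examples.

```python
# Symmetric Hausdorff distance between two 2-D point sets.

def directed_hausdorff(a, b):
    """Max over points of a of the distance to the nearest point of b."""
    return max(min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                   for bx, by in b)
               for ax, ay in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance: the worse of the two directions."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

pattern_a = [(0, 0), (1, 0), (2, 0)]
pattern_b = [(0, 1), (1, 1), (2, 1)]   # same shape shifted by 1
d = hausdorff(pattern_a, pattern_b)    # -> 1.0
```

Because it measures the worst-case nearest-point distance, the Hausdorff distance is sensitive to the rotation and translation effects mentioned above, which is why alignment is performed beforehand.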
CHAPTER-3
EXISTING SYSTEM
Existing approaches extract vein patterns by assuming that the veins generate distinctive
distributions such as valleys and line segments. These approaches can be broadly
classified into two categories.
In the proposed approach, a Convolutional Neural Network (CNN) is trained on the resulting
dataset to predict the probability of each pixel being foreground, given a patch centered
on it. The CNN learns what a finger-vein pattern is by learning the difference between
vein patches and background patches. As another new and original contribution, a Fully
Convolutional Network (FCN) is developed and investigated to recover missing finger-vein
patterns in the segmented image. An automatic scheme is proposed to label pixels as
belonging to vein regions or background regions, given very limited human knowledge.
Several baseline approaches are employed to extract (segment) the vein network from an
image, and their combined output is used to automatically assign a label to each pixel.
Such a scheme avoids heavy manual labeling and may also reduce label errors, especially
for ambiguous pixels. As a finger-vein image consists of clear regions and ambiguous
regions, several baselines are employed to automatically label pixels as vein or
background in the clear regions, thus avoiding tedious and error-prone manual labeling.
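The automatic labeling scheme above can be sketched as a consensus rule: a pixel is labeled vein only when every baseline marks it foreground, background only when every baseline marks it background, and ambiguous (discarded from training) otherwise. The three tiny baseline maps are made-up examples.

```python
# Consensus labeling from several baseline segmentation maps.

VEIN, BACKGROUND, AMBIGUOUS = 1, 0, -1

def consensus_labels(baseline_maps):
    h, w = len(baseline_maps[0]), len(baseline_maps[0][0])
    labels = [[AMBIGUOUS] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            votes = [m[y][x] for m in baseline_maps]
            if all(v == 1 for v in votes):
                labels[y][x] = VEIN        # all baselines agree: vein
            elif all(v == 0 for v in votes):
                labels[y][x] = BACKGROUND  # all agree: background
    return labels

# Three baselines on a 1x3 image: unanimous vein, unanimous
# background, and one disagreement.
maps = [
    [[1, 0, 1]],
    [[1, 0, 0]],
    [[1, 0, 1]],
]
labels = consensus_labels(maps)   # -> [[1, 0, -1]]
```

Only the unanimously labeled pixels would then seed the training patches, which is what keeps the automatic labels reliable in the clear regions.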
A CNN-based scheme is employed to automatically learn features from raw pixels for
finger-vein verification. First, a dataset is constructed from patches centered on the
labeled pixels, and these patches are taken as input for CNN training. Second, in the
test phase, the patch around each pixel is input to the CNN, whose output is taken as the
probability of the pixel belonging to a vein pattern. The vein patterns are then
segmented using a probability threshold of 0.5. Compared to existing approaches, the CNN
automatically learns robust attributes for finger-vein representation. This work also
investigates a new approach for recovering vein patterns in the extracted finger-vein image.
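The test-phase pipeline above can be sketched as follows: for each pixel, the patch centered on it is fed to a classifier that returns a vein probability, and the probability map is thresholded at 0.5. The trained CNN is replaced here by a stub based on the darkest pixel in the patch, purely so the sketch is runnable; the image values are also made up.

```python
# Patch-wise pixel classification with a 0.5 probability threshold.

def extract_patch(img, y, x, size=3):
    """Square patch centered on (y, x), clamped at the image borders."""
    h, w = len(img), len(img[0])
    r = size // 2
    return [[img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
             for dx in range(-r, r + 1)]
            for dy in range(-r, r + 1)]

def stub_cnn(patch):
    """Stand-in for the trained CNN: patches containing very dark
    pixels get a high vein probability."""
    flat = [v for row in patch for v in row]
    return 1.0 - min(flat) / 255.0

def segment(img, threshold=0.5):
    h, w = len(img), len(img[0])
    return [[1 if stub_cnn(extract_patch(img, y, x)) > threshold else 0
             for x in range(w)]
            for y in range(h)]

# Dark vein column (20) on a bright background (230).
img = [[20 if x == 2 else 230 for x in range(5)] for _ in range(5)]
mask = segment(img)
```

Swapping `stub_cnn` for a real trained network leaves the surrounding patch-extraction and thresholding logic unchanged, which is the structure the scheme above describes.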
As finger-vein patterns may be missing due to corruption during the imaging stage and
inaccurate parameter estimation during the preprocessing stage (i.e. alignment and
feature extraction), a robust finger-vein feature recovering scheme based on a Fully
Convolutional Network (FCN) is developed.
In this context, a rigorous experimental analysis shows that the scheme does succeed in
recovering missing patterns, which further improves the verification performance.
Finger vein authentication can be performed using the vascular pattern on the back of a
hand or of a finger. The finger vein pattern is complex and covers a wide area, and
because the finger has no hair, it is easier to photograph its vascular pattern.
1. The finger also has no significant variations in skin colour compared with the
back of the hand, where the colour can darken in certain areas. A fusion of
two technologies, finger vein and fingerprint, can also be used, which is
more complex and more reliable, but costly.
2. The FRR and FAR are very low in comparison to other biometric technologies,
so it is more secure and reliable.
3. The finger vein pattern of an individual cannot be stolen, and since
acquisition is contactless, privacy is not invaded.
4. The completely contactless nature of the device makes it suitable for use
when high levels of hygiene are required. It also eliminates any hesitation
people might have about coming into contact with something that other people
have already touched.
CHAPTER 4
PROPOSED SYSTEM
4.1 MODULES
Based on the exploration of the various literature published in the field of finger vein
authentication systems, we can assess the truthfulness of these methods for extracting
finger vein patterns. We need to determine whether the proposed method overcomes the
drawbacks of the various existing methods with respect to robustness, repeatability,
accountability and effectiveness. The experimental results obtained with conventional
biometric methods and with the new methods are validated and compared for qualitative
effectiveness and cost effectiveness. Traditional methods use a matched filtering
technique to compare biometric parameters to identify a person, whereas a profile
matching technique is used in the finger vein based identification system.
After careful analysis of the literature, we propose a finger vein based recognition
framework, based on the stages of acquisition, normalization, extraction, matching and
output identification of the captured image.
4.4 EXTRACTION OF FINGER-VEIN PATTERNS:
The finger-vein pattern is extracted from the normalized infrared image of the finger
and used for matching.
4.5 MATCHING:
The correlation between the input pattern and the registered pattern is computed, and
identification is carried out based on this correlation for further processing by the
system.
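The correlation-based matching above can be sketched as a normalized cross-correlation between the input and registered binary patterns, accepting the claim when the correlation clears a threshold. The patterns and the 0.8 threshold are illustrative assumptions.

```python
# Normalized cross-correlation matching of two binary vein patterns.

def normalized_correlation(a, b):
    """Pearson correlation between two equal-size binary patterns."""
    xs = [v for row in a for v in row]
    ys = [v for row in b for v in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    dx = sum((x - mx) ** 2 for x in xs) ** 0.5
    dy = sum((y - my) ** 2 for y in ys) ** 0.5
    return num / (dx * dy) if dx and dy else 0.0

def match(input_pattern, registered_pattern, threshold=0.8):
    """Accept the identity claim when correlation exceeds threshold."""
    return normalized_correlation(input_pattern, registered_pattern) > threshold

registered = [[1, 0, 1], [0, 1, 0]]
same = match(registered, registered)                    # identical
different = match(registered, [[0, 1, 0], [1, 0, 1]])   # inverted
```

A production matcher would also search over small translations of the input pattern before taking the best correlation, to tolerate placement differences between enrolment and verification.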
CHAPTER -5
SYSTEM REQUIREMENTS
5.3.1 PYTHON:
Python is Interactive − You can actually sit at a Python prompt and interact with the
interpreter directly to write your programs.
Python is a Beginner's Language − Python is a great language for the beginner-level
programmers and supports the development of a wide range of applications from simple
text processing to WWW browsers to games.
Python was developed by Guido van Rossum in the late eighties and early nineties at the
National Research Institute for Mathematics and Computer Science in the Netherlands.
Python is derived from many other languages, including ABC, Modula-3, C, C++, Algol-68,
SmallTalk, and Unix shell and other scripting languages.
Python is copyrighted. Like Perl, Python source code is available under an open-source
license (it is now distributed under the GPL-compatible Python Software Foundation License).
Python is now maintained by a core development team at the institute, although Guido van
Rossum still holds a vital role in directing its progress.
Easy-to-learn − Python has few keywords, simple structure, and a clearly defined
syntax. This allows the student to pick up the language quickly.
Easy-to-read − Python code is more clearly defined and visible to the eyes.
Easy-to-maintain − Python's source code is fairly easy-to-maintain.
A broad standard library − Python's bulk of the library is very portable and
cross-platform compatible on UNIX, Windows, and Macintosh.
Interactive Mode − Python has support for an interactive mode which allows
interactive testing and debugging of snippets of code.
Portable − Python can run on a wide variety of hardware platforms and has the
same interface on all platforms.
Extendable − You can add low-level modules to the Python interpreter. These
modules enable programmers to add to or customize their tools to be more
efficient.
Databases − Python provides interfaces to all major commercial databases.
GUI Programming − Python supports GUI applications that can be created and
ported to many system calls, libraries and windows systems, such as Windows
MFC, Macintosh, and the X Window system of Unix.
Scalable − Python provides a better structure and support for large programs than
shell scripting.
Apart from the features mentioned above, Python has a long list of other good features.
Python is available on a wide variety of platforms including Linux and Mac OS X. Let's
understand how to set up our Python environment.
CHAPTER - 6
SYSTEM STUDY
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
6.1.3 SOCIAL FEASIBILITY
The aspect of study is to check the level of acceptance of the system by the user. This
includes the process of training the user to use the system efficiently. The user must not feel
threatened by the system, instead must accept it as a necessity. The level of acceptance by the
users solely depends on the methods that are employed to educate the user about the system and
to make him familiar with it. His level of confidence must be raised so that he is also able to
make some constructive criticism, which is welcomed, as he is the final user of the system.
Operational feasibility is the measure of how well a proposed system solves the problems, and
takes advantage of the opportunities identified during scope definition and how it satisfies the
requirements identified in the requirements analysis phase of system development.
The operational feasibility assessment focuses on the degree to which the proposed development
project fits in with the existing business environment and objectives with regard to development
schedule, delivery date, corporate culture and existing business processes.
To ensure success, desired operational outcomes must be imparted during design and
development. These include such design-dependent parameters as reliability, maintainability,
supportability, usability, producibility, disposability, sustainability, affordability and others.
These parameters are required to be considered at the early stages of design if desired operational
behaviours are to be realised. A system design and development requires appropriate and timely
application of engineering and management efforts to meet the previously mentioned parameters.
A system may serve its intended purpose most effectively when its technical and operating
characteristics are engineered into the design. Therefore, operational feasibility is a critical
aspect of systems engineering that needs to be an integral part of the early design phases.
6.2 SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality of
components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising
software with the intent of ensuring that the Software system meets its requirements and user
expectations and does not fail in an unacceptable manner. There are various types of test. Each
test type addresses a specific testing requirement.
6.3 TYPES OF TESTS
6.3.1 UNIT TESTING
Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly, and that program inputs produce valid outputs. All decision branches and
internal code flow should be validated. It is the testing of individual software units of the
application, and it is done after the completion of an individual unit and before integration.
This is structural testing that relies on knowledge of the unit's construction and is
invasive. Unit tests perform
basic tests at component level and test a specific business process, application, and/or system
configuration. Unit tests ensure that each unique path of a business process performs accurately
to the documented specifications and contains clearly defined inputs and expected results.
Unit testing is usually conducted as part of a combined code and unit test phase of the software
lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct
phases.
TEST OBJECTIVES
FEATURES TO BE TESTED
6.3.4 SYSTEM TEST
System testing ensures that the entire integrated software system meets requirements. It tests a
configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process descriptions
and flows, emphasizing pre-driven process links and integration points.
The DFD is also called a bubble chart. It is a simple graphical formalism that can be
used to represent a system in terms of the input data to the system, the various
processing carried out on this data, and the output data generated by the system.
The data flow diagram (DFD) is one of the most important modeling tools. It is used to
model the system components. These components are the system process, the data used
by the process, an external entity that interacts with the system and the information flows
in the system.
DFD shows how the information moves through the system and how it is modified by a
series of transformations. It is a graphical technique that depicts information flow and the
transformations that are applied as data moves from input to output.
A DFD may be used to represent a system at any level of abstraction, and may be
partitioned into levels that represent increasing information flow and functional detail.
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a Use-case analysis. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals (represented as
use cases), and any dependencies between those use cases. The main purpose of a use case
diagram is to show what system functions are performed for which actor. Roles of the actors in
the system can be depicted.
6.6.1 CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of
static structure diagram that describes the structure of a system by showing the system's classes,
their attributes, operations (or methods), and the relationships among the classes. It
shows which class contains which information.
(Class diagram: Input → Feature extraction → Output.)
6.6.3 ACTIVITY DIAGRAM:
Activity diagrams are graphical representations of workflows of stepwise activities and actions
with support for choice, iteration and concurrency. In the Unified Modeling Language, activity
diagrams can be used to describe the business and operational step-by-step workflows of
components in a system. An activity diagram shows the overall flow of control.
CHAPTER 7
CONCLUSION
In conclusion, finger vein verification based on deep learning offers a robust and secure
biometric authentication method. Its ability to leverage intricate patterns within the veins ensures
a high level of accuracy and resistance to fraudulent attempts. As technology advances, further
refinements in deep learning models are likely to enhance the efficiency and reliability of finger
vein verification, making it a promising solution for secure identity authentication in various
applications. However, challenges and considerations persist. Privacy concerns associated with
biometric data storage and potential vulnerabilities, such as spoofing attempts, demand careful
implementation and adherence to stringent security measures. Additionally, the system's success
relies on overcoming issues like environmental variations and ensuring social acceptance.
REFERENCE
• M. A. Turk and A. P. Pentland, “Face recognition using eigenfaces,” CVPR, pp.
586–591, 1991.
• A. Jain, L. Hong, and R. Bolle, “On-line fingerprint verification,” IEEE Transactions
on Pattern Analysis and Machine Intelligence, vol. 19, no. 4, pp. 302–314, 1997.
• J. Daugman, “How iris recognition works,” IEEE Transactions on Circuits and
Systems for Video Technology, vol. 14, no. 1, pp. 21–30, 2004.
• A. Kumar and Y. Zhou, “Human identification using finger images,” IEEE
Transactions on Image Processing, vol. 21, no. 4, pp. 2228–2244, 2012.
• A. Kumar and K. V. Prathyusha, “Personal authentication using hand vein
triangulation and knuckle shape,” IEEE Transactions on Image Processing, vol. 18,
no. 9, pp. 2127–2136, 2009.