
Materials Today: Proceedings xxx (xxxx) xxx

Contents lists available at ScienceDirect

Materials Today: Proceedings


journal homepage: www.elsevier.com/locate/matpr

Finger veins recognition using machine learning techniques


Ashraf Tahseen Ali *, Hasanen S. Abdullah, Mohammad N. Fadhil
Department of Computer Science, University of Technology, Baghdad, Iraq

a r t i c l e   i n f o

Article history:
Received 30 March 2021
Accepted 5 April 2021
Available online xxxx

Keywords:
Machine learning
Veins
Imposter
Naïve Bayes
k-Nearest Neighbor
Random forest

a b s t r a c t

Finger veins are a unique feature of the human being that differs from other biometric traits: because the veins lie under the skin, it is nearly impossible for an imposter to discover them and penetrate the system, and the trait is distinguished by its flexibility, since a person has more than one finger from which a sample can be taken. In this paper, we present imposter detection using seven machine-learning techniques, preceded by a description of the preprocessing and feature extraction. These steps were implemented on two datasets. The best-performing classifier was naive Bayes, followed by random forest, while the lowest classification accuracy was obtained by JRip.

© 2021 Elsevier Ltd. All rights reserved.
Selection and peer-review under responsibility of the scientific committee of the Emerging Trends in Materials Science, Technology and Engineering.

1. Introduction

Finger veins are attracting more researchers than in previous decades because they are easy to reach, highly accurate, and impossible to replicate [1]. Most finger vein identification methods suffer from drawbacks related to the extraction of important features because of poor-quality pictures and weak permeability of the finger veins. Poor-quality pictures can result from low control of infrared radiation, low lighting conditions, and light scattering in the tissues covering the vein structure to be captured [2]. Image collection, image preprocessing, feature extraction, and feature matching are the four phases of a biometric method based on finger veins. Near-infrared light is applied to acquire images in two different ways: light reflection and light refraction [3]. The use of a good image acquisition system is essential; otherwise, there would be too much preprocessing. With a tidy and clean picture, several current finger vein identification models function well [4]. Even if the image is not clear and the finger location is distorted, adjustments are needed. After obtaining a vein image, it is important to preprocess it in order to improve its quality [5]. Feature extraction from images can be carried out in two ways: the first is to treat the image as a general image and apply mature feature extraction algorithms from the image-processing field, which can be called image-level feature extraction; the second is to separate the vein patterns from the image and then extract features from the pure vein patterns, which can be called vein-level feature extraction [6]. Table 1 shows the survey of existing biometric characteristics.

2. Literature survey

Since the beginning of the current century, finger vein identification methods have been widely developed. In this section, some of the previous work related to this research that presented vein recognition using machine learning classifiers is reviewed.

In 2021, an efficient finger vein recognition system was achieved, consisting of hybrid Local Phase Quantization (LPQ) for strong feature extraction and Grey Wolf Optimization-based SVM (GWO-SVM) to calculate the best parameter combination of the SVM for optimal binary classification decisions. GWO-SVM is used for classification in order to maximize the classification accuracy by determining the optimal SVM parameters; the effectiveness of the suggested framework was demonstrated on four tested finger vein datasets, achieving a recognition accuracy of 98% [7]. In 2021, an adaptive k-nearest centroid neighbour (akNCN) classifier was introduced as an enhancement of the kNCN classifier. Two new rules were added to adaptively choose the neighbourhood size of the test sample: the neighbourhood size for the test sample is modified, and the size of the neighbourhood is adaptively changed to j. Experimental results on the Finger Vein (FV-USM) image database demonstrate promising results, in which the classification accuracy of the akNCN classifier is 85.64% [8].

* Corresponding author.
E-mail addresses: cs.19.12@grad.uotechnology.edu.iq (A. Tahseen Ali), 110014@uotechnology.edu.iq (H.S. Abdullah), mohammad.n.fadhil@uotechnology.edu.iq (M.N. Fadhil).

https://doi.org/10.1016/j.matpr.2021.04.076
2214-7853/© 2021 Elsevier Ltd. All rights reserved.


Table 1
The survey of existing biometric characteristics [3].

Biometric characteristic | Key advantage | Disadvantage | Safety level | Cost
Voice | Natural and comfortable | Noises | Ordinary | Low
Face | Remote capturing | Illumination requirements | Ordinary | Low
Iris | High accuracy | Glasses | Very good | High
Fingerprint | Extensively implemented | Skin | Good | Low
Finger veins | Large safety level | Disease | Very good | Low

In 2020, finger vein improvement for the purposes of identification and verification was achieved based on fuzzy histogram equalization. A mixture of Hierarchical Centroid and Gradient Histograms was applied to extract features. Both enhancement stages were evaluated using 6-fold stratified cross-validation. The results were tested with KNN and also with SVM; KNN's predictions on the test data proved considerably more accurate. Employing the stratified 6-fold investigation on all fingers of both hands in the SDUMLA dataset, a greatest accuracy of 98.11% was achieved [2]. In 2019, finger vein recognition was achieved using discriminant analysis and KNN (the k-nearest neighbor algorithm), which was used to test and measure the accuracy of the features of the finger vein images. The accuracy obtained from KNN was 55.84%, whereas discriminant analysis achieved 92.21%, using the SDUMLA-HMT dataset [9].

3. The proposed system architecture

The proposed system uses biometrics to recognize finger veins based on machine learning. In general, it consists of finger vein records, dataset description, pre-processing, feature extraction and classification stages, and a post-processing stage. The proposed system architecture is shown in Figs. 1 and 2.

3.1. Database description

The details of the datasets are shown in Table 2.


Fig. 1. The proposed finger vein recognition system architecture.
3.2. Pre-processing

The preprocessing stage's main advantage is that it organizes the data, making the recognition task easier. All operations relating to the image are referred to as "preprocessing".

a) Increase lustre using Gamma Correction
After capture, the finger image is dark and dim, so the following step is to increase its lustre with gamma modification, which increases the dynamic range of pixel intensities by adjusting the pixels in a nonlinear way. It is accomplished by

Fig. 2. Image Gamma Correction.


Table 2
System's finger vein datasets [1].

Dataset name | No. of images | No. of persons | Fingers per person | Images per finger | Image resolution | Format
SDMULA-HMT | 3816 | 106 | 6 (index, ring, middle of both hands) | 6 | 320 × 240 px | .BMP
UTFV | 1440 | 60 | 6 (index, ring, middle of both hands) | 4 | 672 × 380 px | 8-bit gray scale .PNG

diminishing the co-occurrence matrix homogeneity. This aims to preserve the essential distinctions of the surface and make them more noticeable. It significantly enhances image quality by adjusting the normal image brightness in an excellent way.

b) Color to gray level using Grayscale Image
Because only one channel is dealt with in the grayscale domain instead of three as in RGB (Red, Green, and Blue), converting a color image to the grayscale domain reduces data and improves processing speed. A weighted average method was used to convert the RGB colored image to a gray image; this method was chosen because it produces a purer image in which the colors are not equally weighted. Because pure green is lighter than pure blue and pure red, it has a heavier weight. As can be seen in Eq. (1), pure blue is the darkest of the three and thus gets the least weight [10]:

Grayscale = 0.2989 Red + 0.5870 Green + 0.1140 Blue   (1)

c) Contrast enhancement using Histogram Equalization
The principal purpose of HE is to level the probability intensity function of the input image and remap the grey levels to create a treated image with enhanced contrast. It meaningfully changes the average brightness of the processed image with regard to the initial image. It can also introduce noise and density saturation effects, which appear as a loss of image details and give the processed image an unnatural appearance [11].

d) Resizing and Cropping (R&C)
Resizing and cropping are the basic image editing functions. Both must be carefully considered because they have the potential to degrade image quality. Resizing an image alters its dimensions, affecting file size (and thereby image quality). Cropping involves removing a portion of the original image, resulting in the loss of some pixels.

3.3. Feature extraction using LDA

LDA is a conventional analytical technique. It depends on linear combinations of variables to distinguish among classes, which results in linear decision boundaries. The technique searches for a linear transformation that maximizes class separability in a reduced dimensional space [12]. Distinctive essentials that merge linear features and determine the axes that maximize the variance enable dimension reduction, calculated according to Eq. (2) below:

y = k1 + a·x1 + b·x2 + … + n·xn   (2)

3.4. Proposed system classifiers

Machine learning classifiers, including feature extraction techniques, are critical in assessing the overall effectiveness of the recognition model. This is a classification problem, since we want to classify finger vein images and determine the person to whom each belongs. As a result, the following supervised classification machine learning algorithms will be used:

a) Naïve Bayes (NB)
The Naïve Bayes classifier is a simple probabilistic classifier based on Bayes' Theorem and strict independence assumptions, as shown in Eq. (3), with all features being equally independent [13]. Using the NB classifier, from the prior probability that a feature belongs to a class, the feature is assigned to a class according to the posterior probability; the class with the highest posterior likelihood is the prediction output. This classifier accurately and quickly predicts the test data set's class, and it also performs well in multi-class prediction [13].

p(c|x) = p(x|c) p(c) / p(x)   (3)

where
p(c|x): the posterior probability of class (c, target) given predictor (x, attributes).
p(c): the prior probability of the class.
p(x|c): the likelihood, which is the probability of the predictor given the class.
p(x): the prior probability of the predictor.

b) k-Nearest Neighbor (KNN)
KNN is a supervised learning system that allows machines to classify objects, problems, or situations using data that has already been fed into them [14]. An unlabeled vector (a query or test point) is classified by assigning the label that appears most frequently among the k training samples closest to that query point, where k is a user-defined constant in the classification process. For continuous variables, Euclidean distance was used as the distance metric, measured using Eq. (4) below [14]:

d(x, y) = sqrt( Σ_{i=1..n} (a_i(x) − a_i(y))² )   (4)

c) Random Forest (RF)
Random Forest Classification (RFC) is a decision tree-based supervised classification technique for machine learning [15]. Each tree in the collection is built by randomly selecting a small group of features to split on for each node, and then deciding the best split based on these features in the training set. Each tree gives the particular feature vector a vote, and the forest selects the class with the most votes for each feature vector. It is easy to build and use for forecasting, runs quickly on large datasets, estimates missing data quickly, and maintains accuracy even when a large percentage of data is missing.
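The preprocessing chain of Section 3.2 (gamma correction, weighted-average grayscale conversion per Eq. (1), histogram equalization, and cropping/resizing) can be sketched in plain NumPy. The function names, the toy 4 × 4 image, the gamma value, and the crop box below are illustrative assumptions, not the paper's actual implementation or parameters.

```python
import numpy as np

def gamma_correct(img, gamma=0.6):
    """Brighten a dark image with a power-law (gamma) transform."""
    norm = img.astype(np.float64) / 255.0
    return (norm ** gamma * 255.0).astype(np.uint8)

def to_gray(rgb):
    """Weighted-average RGB -> gray conversion using the Eq. (1) weights."""
    w = np.array([0.2989, 0.5870, 0.1140])
    return (rgb.astype(np.float64) @ w).astype(np.uint8)

def hist_equalize(gray):
    """Classic histogram equalization: remap grey levels via the CDF."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    return (cdf[gray] * 255.0).astype(np.uint8)

def crop_resize(gray, box, size):
    """Crop to a (top, bottom, left, right) box, then nearest-neighbor resize."""
    t, b, l, r = box
    roi = gray[t:b, l:r]
    rows = np.linspace(0, roi.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, roi.shape[1] - 1, size[1]).astype(int)
    return roi[np.ix_(rows, cols)]

# Toy 4x4 "dark finger image" standing in for a real NIR capture
rgb = np.random.default_rng(0).integers(0, 80, size=(4, 4, 3), dtype=np.uint8)
gray = hist_equalize(to_gray(gamma_correct(rgb)))
patch = crop_resize(gray, box=(0, 4, 0, 4), size=(2, 2))
print(patch.shape)  # (2, 2)
```

In a real pipeline the same four steps would run on each 320 × 240 or 672 × 380 dataset image before LDA feature extraction.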

d) J48
Missing values, rule derivation, continuous attribute value ranges, and decision tree pruning are only a few of J48's features. Pruning against overfitting can be used as a precision device if necessary [16]. This algorithm is used to establish the rules that give the data a unique identity. The aim of using this classifier is to progressively mainstream the decision tree until a balance of versatility and accuracy is achieved.

e) PART
PART is a separate-and-conquer rule learner. "Decision lists", which are pre-determined sets of rules, are generated by the algorithm [17]. The algorithm creates a decision list, i.e., a set of rules in a logical order. New data is compared against each rule in the list and allocated to the category of the rule with the best match.

f) JRip
JRip is one of the most common and widely used algorithms. As classes grow larger, they are evaluated, and an initial set of rules for the class is generated based on decreasing error rates [18]. This algorithm, which is used to create the rules, gives the data a unique identity. The aim of using this classifier is to progressively mainstream the decision tree until a balance of versatility and accuracy is achieved.

g) REP Tree (RT)
The REPTree is an ensemble model that combines decision tree (DT) and reduced error pruning (REP) algorithms to solve classification and regression problems [19]. This algorithm builds a decision tree using information gain and prunes it using reduced-error pruning. REP removes uncertainty by eliminating leaves and branches from the DT structure, which can lead to overfitting and reduced model interpretability.

4. The proposed system implementation

This system was implemented in two phases as follows:

4.1. Training phase

The proposed system's first step is the training of the two datasets using cross-validation, where the largest part of the data enters the training phase frequently and randomly, and the remainder of the data is passed to the testing phase (Sec. 4.2). The dataset is preprocessed using gamma correction, after which the images are converted from RGB to gray level; these images are then preprocessed using histogram equalization and afterwards cropped and resized. The features extracted using LDA are prepared for the classification algorithms. The mixed features are saved as reference models during the training process. These models are then compared to the finger veins that have been entered.

4.2. Testing phase

The proposed system's testing phase is the second phase. As mentioned above, the remainder of the data is tested after applying the same pre-processing steps that were applied to the data in the training phase. The proposed system architecture is shown in Algorithm (1) below.
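Two of the classifiers above can be written directly from the paper's equations: a Gaussian naive Bayes scored in log form per Eq. (3), and a k-nearest-neighbor majority vote using the Euclidean distance of Eq. (4). Below is a minimal NumPy sketch; the synthetic two-class feature vectors stand in for LDA features of vein images and are illustrative only, not the paper's data or code.

```python
import numpy as np

def gaussian_nb_fit(X, y):
    """Per-class means, variances, and priors for a Gaussian naive Bayes model."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return stats

def gaussian_nb_predict(stats, x):
    """Pick the class maximizing log p(x|c) + log p(c) (Eq. (3) in log form)."""
    best, best_lp = None, -np.inf
    for c, (mu, var, prior) in stats.items():
        lp = np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

def knn_predict(X, y, x, k=3):
    """Majority vote among the k Euclidean-nearest training samples (Eq. (4))."""
    d = np.sqrt(((X - x) ** 2).sum(axis=1))
    nearest = y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[counts.argmax()]

# Toy 2-class feature vectors (stand-ins for LDA features of vein images)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(3, 1, (20, 4))])
y = np.array([0] * 20 + [1] * 20)

stats = gaussian_nb_fit(X, y)
print(gaussian_nb_predict(stats, np.full(4, 3.0)))  # expect class 1
print(knn_predict(X, y, np.zeros(4)))               # expect class 0
```

The same fit/predict pattern, repeated per fold of the cross-validation described in Section 4.1, yields the per-classifier results reported later in Table 3.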

Algorithm (1): The proposed system

Input: Finger veins (SDMULA-HMT, UTFV)
Output: –
Begin
    Pre-processing phase
    Feature extraction phase
    …
End


5. Proposed system evaluation

For evaluating a model's performance, certain parameters are used to determine its behavior. The results are influenced by the size of the training data, the quality of the images, and, most importantly, the type of machine-learning algorithm used. The following criteria are used to assess the models' efficacy [20]:

 Accuracy: the percentage of examples correctly categorized out of all given examples. It is calculated as [20]:

Accuracy = (tp + tn) / (tp + tn + fp + fn)   (5)

 Precision: the percentage of true x-class instances among all those labeled as class x. It is calculated as [20]:

Precision = tp / (tp + fp)   (6)

 Recall: the percentage of examples labeled as class x among all examples of class x. It is calculated as [20]:

Recall = tp / (tp + fn)   (7)

 F-measure: the harmonic mean of precision and recall. It is calculated as [20]:

F1 = 2 · (precision · recall) / (precision + recall)   (8)

where
tp = true positives: number of examples predicted positive that are actually positive
fp = false positives: number of examples predicted positive that are actually negative
tn = true negatives: number of examples predicted negative that are actually negative
fn = false negatives: number of examples predicted negative that are actually positive

 Error rate: an error is simply a misclassification: the classifier is presented with a case and classifies it incorrectly, as shown in Eq. (9) below [20]:

ErrorRate = 1 − accuracy   (9)

 Specificity: measures the ability of a test to be negative when the condition is actually not present [20]:

Specificity = TN / (TN + FP) × 100%   (10)

Table 3
Results of Machine Learning Classifiers.

Classifier | Dataset | Total instances | Total correct | Total incorrect | Accuracy | Precision | Recall | F-measure | Error rate | Specificity | KAPPA
PART | SDMULA-HMT | 3816 | 2274 | 1542 | 0.596 | 0.607 | 0.595 | 0.597 | 0.404 | 0.996 | 0.592
PART | UTFV | 1440 | 950 | 490 | 0.66 | 0.672 | 0.66 | 0.659 | 0.34 | 0.994 | 0.654
JRip | SDMULA-HMT | 3816 | 1753 | 2063 | 0.459 | 0.496 | 0.46 | 0.454 | 0.58 | 0.994 | 0.64
JRip | UTFV | 1440 | 766 | 674 | 0.532 | 0.613 | 0.531 | 0.554 | 0.468 | 0.992 | 0.524
NB | SDMULA-HMT | 3816 | 3710 | 106 | 0.972 | 0.974 | 0.972 | 0.972 | 0.028 | 0.999 | 0.971
NB | UTFV | 1440 | 1431 | 9 | 0.993 | 0.994 | 0.993 | 0.994 | 0.007 | 0.991 | 0.993
RT | SDMULA-HMT | 3816 | 1996 | 1820 | 0.477 | 0.92 | 0.476 | 0.471 | 0.523 | 0.995 | 0.472
RT | UTFV | 1440 | 842 | 616 | 0.572 | 0.581 | 0.572 | 0.566 | 0.428 | 0.993 | 0.565
J48 | SDMULA-HMT | 3816 | 2409 | 539 | 0.631 | 0.633 | 0.63 | 0.63 | 0.369 | 0.996 | 0.627
J48 | UTFV | 1440 | 946 | 494 | 0.657 | 0.666 | 0.656 | 0.658 | 0.343 | 0.994 | 0.651
RF | SDMULA-HMT | 3816 | 3558 | 258 | 0.932 | 0.931 | 0.932 | 0.931 | 0.068 | 0.999 | 0.931
RF | UTFV | 1440 | 1413 | 27 | 0.981 | 0.982 | 0.981 | 0.981 | 0.019 | 0.999 | 0.98
KNN | SDMULA-HMT | 3816 | 3232 | 584 | 0.847 | 0.926 | 0.846 | 0.864 | 0.153 | 0.998 | 0.845
KNN | UTFV | 1440 | 1400 | 40 | 0.972 | 0.974 | 0.972 | 0.972 | 0.028 | 0.999 | 0.971
 KAPPA: Cohen's kappa coefficient can be used for assessing agreement between two regular nominal classifications. If one uses Cohen's kappa to quantify agreement between the classifications, the distances between all categories are considered equal, and this makes sense if all nominal categories reflect different types of 'presence' [21,22]. The weighted kappa coefficient is defined as [23,24]:

K = (O − E) / (1 − E)   (11)

Fig. 3. Image Color to Gray Level.
Fig. 4. Image Histogram Equalization.
Fig. 5. Image Resizing and Cropping.
Fig. 6. Accuracy measured for Classifiers.
Fig. 7. Precision measured for Classifiers.


Fig. 8. Recall measured for Classifiers.
Fig. 9. F-measure measured for Classifiers.
Fig. 12. KAPPA measured for Classifiers.
Fig. 13. Results Comparison Figure.
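The evaluation measures of Eqs. (5)-(11) follow directly from the four confusion counts and the predicted labels. A minimal sketch is shown below; the counts 90/85/15/10 and the toy label vectors are invented examples, not values from Table 3.

```python
import numpy as np

def metrics(tp, tn, fp, fn):
    """Evaluation measures of Eqs. (5)-(10) from the four confusion counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)   # Eq. (5)
    prec = tp / (tp + fp)                   # Eq. (6)
    rec = tp / (tp + fn)                    # Eq. (7)
    f1 = 2 * prec * rec / (prec + rec)      # Eq. (8)
    err = 1 - acc                           # Eq. (9)
    spec = tn / (tn + fp)                   # Eq. (10)
    return acc, prec, rec, f1, err, spec

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    observed = (y_true == y_pred).mean()
    expected = sum((y_true == c).mean() * (y_pred == c).mean()
                   for c in np.unique(np.concatenate([y_true, y_pred])))
    return (observed - expected) / (1 - expected)

acc, prec, rec, f1, err, spec = metrics(tp=90, tn=85, fp=15, fn=10)
print(round(acc, 3), round(err, 3))  # 0.875 0.125
```

Applied to each classifier's per-fold predictions, these functions reproduce the rows of Table 3.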

Table 4
Results Comparison.

Method | RF | NB | J48 | Ref. [7] | Ref. [8] | Ref. [2] | Ref. [9]
Accuracy | 95.7% | 98.25% | 90.9% | 98% | 85.64% | 98.11% | 92.21%

6. Experimental results

In this experiment, we test our datasets with the PART, JRip, NB, RT, J48, RF, and KNN classifiers. The results of using the finger vein datasets (SDMULA-HMT and UTFV) as input are illustrated in Table 3. From Table 3 and Figs. 3–9, it can be noticed for the SDMULA-HMT and UTFV datasets that the NB classifier gives the best accuracy, precision, recall, F-measure, and specificity, while the JRip classifier gives the worst accuracy, precision, recall, F-measure, and specificity. As for the error rate, the results were the contrary: the PART classifier produced the highest error rate and NB produced the lowest error rate among the classifiers (Figs. 10–13).

Fig. 10. Error Rate measured for Classifiers.

7. Results comparison

Table 4 compares the best three performances of the proposed system's algorithms with the related work. The average accuracy of the proposed system for the three classifiers (RF, J48, NB) is calculated by Eq. (12):

ave. accuracy = (accuracy of dataset 1 + accuracy of dataset 2) / 2   (12)

For example, for NB the average is (97.2% + 99.3%) / 2 = 98.25%. The results indicate that the proposed system showed superiority using the NB algorithm in terms of the average accuracy of finger vein recognition over all the literature survey techniques mentioned in Table 4.

Fig. 11. Specificity measured for Classifiers.

8. Conclusions

It turns out that finger veins are one of the biometric features that are most difficult for an imposter to obtain and evade. Two datasets in different formats were chosen. Preprocessing was applied to the images to remove noise, enhance quality, and resize and crop them. Feature extraction is an important phase; the LDA method was used for feature extraction. Training and testing were performed using cross-validation. The machine-learning techniques used achieved a range of performances: NB achieved the highest average accuracy, 98.25%, followed by RF with 95.7%.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

[1] K. Syazana-Itqan, A.R. Syafeeza, N.M. Saad, N.A. Hamid, W.H.B.M. Saad, A review of finger-vein biometrics identification approaches, Indian J. Sci. Technol. 9 (32) (2016) 1–9.
[2] K.A. Zidan, S.S. Jumaa, A new finger vein verification method focused on the protection of the template, in: IOP Conference Series: Materials Science and Engineering, vol. 993, no. 1, p. 012108, IOP Publishing, 2020.
[3] H.G. Hong, M.B. Lee, K.R. Park, Convolutional neural network-based finger-vein recognition using NIR image sensors, Sensors 17 (6) (2017) 1297.
[4] Y. Liu, J. Ling, Z. Liu, J. Shen, C. Gao, Finger vein secure biometric template generation based on deep learning, Soft Comput. 22 (7) (2018) 2257–2265.
[5] F. Tagkalakis, D. Vlachakis, V. Megalooikonomou, A. Skodras, A novel approach to finger vein authentication, in: 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pp. 659–662, IEEE, 2017.
[6] Q. Yao, D. Song, X. Xiang, K. Zou, A novel finger vein recognition method based on aggregation of radon-like features, Sensors 21 (5) (2021) 1885.
[7] B.A. Rosdi, N. Mukahar, N.T. Han, Finger vein recognition using principle component analysis and adaptive k-nearest centroid neighbor classifier, Int. J. Integr. Eng. 13 (1) (2021) 177–187.
[8] K. Kapoor, S. Rani, M. Kumar, V. Chopra, G.S. Brar, Hybrid local phase quantization and grey wolf optimization based SVM for finger vein recognition, Multimedia Tools Appl. (2021) 1–39.
[9] R. Khanam, R. Khan, R. Ranjan, Analysis of finger vein feature extraction and recognition using DA and KNN methods, in: 2019 Amity International Conference on Artificial Intelligence (AICAI), pp. 477–483, IEEE, 2019.
[10] K. Padmavathi, K. Thangadurai, Implementation of RGB and grayscale images in plant leaves disease detection – comparative study, Indian J. Sci. Technol. 9 (6) (2016) 1–6.
[11] X. Zhu, X. Xiao, T. Tjahjadi, Z. Wu, J. Tang, Image enhancement using fuzzy intensity measure and adaptive clipping histogram equalization, arXiv preprint arXiv:2101.05922 (2021).
[12] T. Zarra, G.K. Mark, F.C. Galang, V.B. Ballesteros, V. Naddeo, Instrumental odour monitoring system classification performance optimization by analysis of different pattern-recognition and feature extraction techniques, Sensors 21 (1) (2021) 114.
[13] P. Dhakal, P. Damacharla, A.Y. Javaid, V. Devabhaktuni, A near real-time automatic speaker recognition architecture for voice-based user interface, Machine Learn. Knowledge Extraction 1 (1) (2019) 504–520.
[14] A. Bombatkar, G. Bhoyar, K. Morjani, S. Gautam, V. Gupta, Emotion recognition using speech processing using k-nearest neighbor algorithm, Int. J. Eng. Res. Appl. (IJERA) (2014) 2248–9622, ISSN 2014.
[15] J.T. Senders, M.M. Zaki, A.V. Karhade, B. Chang, W.B. Gormley, M.L. Broekman, T.R. Smith, O. Arnaout, An introduction and overview of machine learning in neurosurgical care, Acta Neurochirurgica 160 (1) (2018) 29–38.
[16] P. Song, W. Zheng, Feature selection based transfer subspace learning for speech emotion recognition, IEEE Trans. Affective Comput. (3) (2018) 373–382.
[17] A.K. Sandhu, R.S. Batth, Software reuse analytics using integrated random forest and gradient boosting machine learning algorithm, Software: Pract. Experience 51 (4) (2021) 735–747.
[18] P. Dhakal, P. Damacharla, A.Y. Javaid, V. Devabhaktuni, Detection and identification of background sounds to improvise voice interface in critical environments, in: 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), pp. 078–083, IEEE, 2018.
[19] B. Alhayani, R. Milind, Face recognition system by image processing, Int. J. Electron. Commun. Eng. Technol. (IJCIET) 5 (5) (2014) 80–90.
[20] A.H. Meftah, Y.A. Alotaibi, S.-A. Selouani, Evaluation of an Arabic speech corpus of emotions: a perceptual and statistical analysis, IEEE Access 6 (2018) 72845–72861.
[21] M.J. Warrens, Kappa coefficients for dichotomous-nominal classifications, Adv. Data Anal. Classification (2020) 1–16.
[22] B. Al Hayan, H. Ilhan, Visual sensor intelligent module based image transmission in industrial manufacturing for monitoring and manipulation problems, J. Intell. Manuf. 32 (2021) 597–610.
[23] B. Al-Hayani, H. Ilhan, Efficient cooperative image transmission in one-way multi-hop sensor network, Int. J. Electr. Eng. Educ. 57 (4) (2020) 321–339.
[24] B. Alhayani, A. Abdallah, Manufacturing intelligent corvus corone module for a secured two way image transmission under WSN, Eng. Comput. 37 (2020) 1–17.
