Received February 3, 2019, accepted February 13, 2019, date of publication February 28, 2019, date of current version March 18, 2019.
Digital Object Identifier 10.1109/ACCESS.2019.2902133
ABSTRACT In terms of biometrics, a human finger itself carries trimodal traits, namely the fingerprint, finger-vein, and finger-knuckle-print, which provides convenience and practicality for finger trimodal fusion recognition. The scale inconsistency of finger trimodal images is an important factor hindering effective fusion. It is therefore very important to develop a theory giving a unified expression of the finger trimodal features. In this paper, a graph-based feature extraction method for finger biometric images is proposed. The feature expression based on the graph structure can well solve the problem of feature space mismatch among the finger's three modalities. We provide two fusion frameworks to integrate the finger trimodal graph features together: serial fusion and coding fusion. The research results can not only promote the advancement of finger multimodal biometrics technology but also provide a scientific solution framework for other multimodal feature fusion problems. The experimental results show that the proposed graph fusion recognition approach obtains better and more effective recognition performance in finger biometrics.
INDEX TERMS Finger biometrics, multimodal feature, fusion recognition, graph theory.
2169-3536 © 2019 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
VOLUME 7, 2019
H. Zhang et al.: Graph Fusion for Finger Multimodal Biometrics
ability of fusion features, and presents much more stable and effective fusion performance [4], [9].

Recently, some related research works about the fusion of finger biometrics have been reported. Alajlan et al. [14] proposed a biometric identification method based on the fusion of FP and heartbeat at the decision level. Reference [19] introduced a biometric system that integrates face and FP. Evangelin and Fred [20] proposed a fusion identification frame of FP, palm print and FKP on the feature level. In 2016, Khellat-Kihel et al. [12] proposed a recognition framework for the fusion of the trimodal biometrics of the finger on the feature and decision levels. Reference [21] developed a magnitude-preserved competitive coding feature extraction method for FV and FKP images, and obtained robust recognition performance after fusion of the two biometric features. In addition, some feature extraction methods can promote efficient fusion of finger trimodal images on the feature layer, such as the GOM method [39] and the MRRID method [38]. It is worth mentioning that in our previous work, we discussed the granulation fusion method of finger trimodal biological features, and solved the basic theoretical problems of fusion by using granular computing as a methodology [13], [22]. The existing fusion algorithms have some shortcomings:

♦ Unreasonable image scale consistency. Finger trimodal scale inconsistency is still an important factor affecting effective fusion. Straightforward scale normalization can easily destroy image structure information, and stable feature fusion cannot be achieved.

♦ Large storage space and long recognition time. Practicality is a factor that should be considered in finger trimodal fusion recognition.

The scale inconsistency or incompatibility of finger trimodal images is the most critical problem that causes fusion difficulties. It is very important to develop a theory giving a unified expression of the trimodal features of the finger [23]. The rapid development of graph-based image analysis technology motivates us to extract graph structure features for the finger trimodalities. The related research on graph theory, an important branch of mathematics, began in 1736 [24], [25]. Graphs are often used to describe a certain connection between objects, or a specific relationship between various parts of a corresponding object. As an image description method, the graph can well describe the various structural relationships in an image. In the past decades, many graph-based methods have been proposed for dealing with pattern recognition problems [26]. Luo et al. [27] employed the graph embedding method for 3D image clustering. Some graph-based coding operators have been well applied in feature coding, like the LGS and WLGS operators [28], [29]. Reference [30] overviews the recent graph spectral techniques in graph signal processing (GSP) specifically for image and video processing, including image compression, image restoration, image filtering and image segmentation.

In this paper, we explore the description of finger trimodal features based on graph theory, and propose a reasonable fusion framework. First, a directional weighted graph is established for each modality of the finger, where a block-wise operation is done to generate the vertices. The steerable filter is applied to extract the oriented energy distribution (OED) of each block, and the weighted function of the vertices and edges of the graph is calculated. Based on different block partitioning strategies, the trimodal graph features can solve the problem of dimensional incompatibility, which lays the foundation for trimodal fusion. Then we propose two finger trimodal feature fusion frameworks, named serial and coding fusion, which can integrate the trimodal features of the finger together. The serial fusion refers to splicing the graph features of the finger's three modalities, while in the coding fusion framework, the competitive coding way is applied to combine the three graph features into a comprehensive feature. In addition, any well-known classifier connected after the fusion feature can be used for pattern recognition.

In order to demonstrate the validity of the proposed fusion recognition frameworks for finger trimodal biometrics, we simultaneously design a finger trimodal image acquisition device and construct a homemade finger trimodal database. Simulation results have demonstrated the effective and satisfactory performance of the proposed finger biometric fusion recognition frameworks.

II. TRIMODAL IMAGE ACQUISITION
To obtain FP, FV and FKP images, we have designed a homemade finger trimodal imaging system [31], as shown in Fig. 1. The proposed imaging system, which can simultaneously collect FP, FV and FKP images, integrates modal acquisition, image storage, image processing and recognition modules. With it, we constructed a finger trimodal database.

FIGURE 1. The schematic of the finger trimodal imaging device.

In order to reduce finger pose variations, a fixed collection groove is designed in the image acquisition device. It avoids image distortion problems due to finger offset and rotation as much as possible. In the proposed imaging system, the collection of the FP image is relatively simple: it can be obtained through the FP collector at the bottom. For FV modality imaging, the array luminaire used is made of near-infrared (NIR) light-emitting diodes (LEDs) with a wavelength of 850 nm. The light source, which can penetrate a finger, is on the palm side of the finger, while the imaging lens is on the back side of the finger. Based on the principle of reflection of a visible light source on the skin surface of the finger, the FKP modality is imaged by reflected light.

By a software interface, as shown in Fig. 2, the finger trimodal image acquisition task can be implemented conveniently. From the left of Fig. 2, we can see that the captured image contains not only the useful region but also some uninformative parts. Hence, the region of interest (ROI) localization for the original trimodal images is critical. The ROI localization strategy for the three modalities is presented in the appendix. In addition, the original images are often blurred due to the influence of light and noise. Image enhancement can highlight more texture information for the finger's three modality images, which is very helpful to improve recognition accuracy. The image enhancement method and some results are presented in Sec. III during the introduction of the graph establishment.

… often heavily influenced by humans and lacking robustness when the image quality is poor. Here, for the graph establishment, we divide the image into blocks and select the features of the blocks as the vertices of the graph, as shown in Fig. 3. For an image of size M × N, we design a sliding window of size h × w, which traverses the entire image. The striding steps along the right and down directions are recorded as r and d respectively. Then one can get the number of vertices of the graph as

Num = round((M − h + r)/r + 1) × round((N − w + d)/d + 1)    (1)

where round(·) is the rounding operator, which means that if (M − h + r)/r is not an integer, it is rounded up to the nearest integer.

In order to reliably strengthen the finger texture information, the finger's three modality images need to be enhanced effectively. Here, a bank of Gabor filters with 8 orientations [33] and the Weber's Law Descriptor (WLD) [34] are combined for image enhancement and light attenuation elimination. In order to extract reliable finger skeletons, a multi-threshold method [35] is used to obtain the binary results of the enhanced finger image. A thinning algorithm proposed in [36] is used to extract the finger skeletons of the binary images; some results are shown in Fig. 5(a)(b)(c).

… number base filters. k_j(θ) is the interpolation function, which is only related to θ. I(x, y) represents the corresponding image region. E(θ) is the convolution energy in the direction of θ. Fig. 5(e) illustrates an obtained energy map corresponding to the block located at Fig. 5(d).

The maximum energy feature is selected as the vertex weight, while the similarity of the OEDs of two connected vertices is treated as the weighted function, such that

V(I) = max_θ (OED | I)    (6)

S(V_i, V_j) = exp(− Σ_k (E_i(k) − E_j(k))² / (2σ²))    (7)

The adjacency matrix can be used to describe the graph structure of the finger modalities, where the elements in the adjacency matrix represent the weighted functions. Then any classifier can perform pattern recognition based on the adjacency matrix expression of the finger modalities.
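As a rough illustration of the graph construction described above, the following Python sketch slides a block window over an image, assigns each vertex the maximum oriented energy in the spirit of Eq. (6), and weights edges between neighbouring blocks by the OED similarity of Eq. (7). The `oriented_energy` helper is a hypothetical stand-in for the paper's steerable-filter OED, built from simple gradient projections; all function names and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

def block_vertices(img, h, w, r, d):
    """Slide an h x w window over img with strides r (down) and d (right),
    returning the top-left corner of each block (one graph vertex per block)."""
    M, N = img.shape
    return [(y, x) for y in range(0, M - h + 1, r)
                   for x in range(0, N - w + 1, d)]

def oriented_energy(block, n_orient=8):
    """Toy stand-in for the steerable-filter OED: convolution energy of the
    block under simple oriented gradient projections, one per direction."""
    thetas = np.linspace(0.0, np.pi, n_orient, endpoint=False)
    gy, gx = np.gradient(block.astype(float))
    return np.array([np.sum((np.cos(t) * gx + np.sin(t) * gy) ** 2)
                     for t in thetas])

def build_graph(img, h=16, w=16, r=8, d=8, sigma=1.0):
    """Vertex weight = max oriented energy (cf. Eq. (6)); edge weight between
    4-adjacent blocks = Gaussian similarity of their OEDs (cf. Eq. (7))."""
    coords = block_vertices(img, h, w, r, d)
    oeds = [oriented_energy(img[y:y + h, x:x + w]) for y, x in coords]
    V = np.array([e.max() for e in oeds])   # vertex weights, Eq. (6)
    n = len(coords)
    S = np.zeros((n, n))                    # weighted adjacency matrix
    for a in range(n):
        for b in range(n):
            ya, xa = coords[a]
            yb, xb = coords[b]
            if abs(ya - yb) + abs(xa - xb) in (r, d):  # 4-neighbour blocks
                diff = oeds[a] - oeds[b]
                S[a, b] = np.exp(-np.sum(diff ** 2) / (2 * sigma ** 2))  # Eq. (7)
    return V, S
```

The resulting adjacency matrix `S` plays the role of the graph feature expression used by the classifiers discussed above.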
B. FEATURE FUSION
In the above subsection, we presented a novel feature extraction frame based on the graph structure for the finger biometric images. The following presents two fusion frameworks for the three modalities of the finger.

1) SERIAL FUSION STRATEGY
The serial fusion strategy refers to the tandem splicing of the trimodal CSR vectors to get a fused vector. Fig. 7 presents the serial fusion framework. A_p, A_v and A_kp are feature vectors extracted from the three adjacency matrixes after graph feature extraction. We then apply the serial fusion strategy to integrate the three feature vectors into one fused feature vector, such that V = [A_p A_v A_kp]. The adjacency matrix based fusion framework is simple. Under the premise of preserving complete features, the matrix dimension is reduced and the computational efficiency is improved. For the feature learning and matching process, one can apply well-known classification algorithms from the machine learning field, such as the Support Vector Machine (SVM), the Extreme Learning Machine (ELM) and the Softmax classifier.

FIGURE 7. Serial fusion of the trimodal graph features of finger. (a) ROI of trimodal images of finger. (b) Graph structure. (c) Adjacency matrixes. (d) CSR vector expression. (e) Serial fusion.

2) CODING FUSION STRATEGY
Another fusion framework for trimodal graph features, based on competitive coding, is presented here. When establishing the graph, the block segmentation operation is applied to generate the vertex expression. By properly setting the segmentation sizes and sliding steps, it is easy to normalize the trimodal feature graphs, so we obtain three adjacency matrixes of the same size. The competitive coding fusion method is then applied to combine the three adjacency matrixes into one integrated matrix.

Fig. 8 presents the fusion strategy based on competitive coding. For the target pixels in the same position of the trimodal adjacency matrixes, we employ a seven-bit binary code ([c1, c2, c3, c4, c5, c6, c7]) to represent the feature information. Considering the (i, j)th pixels of the trimodal graphs, c4 represents the comprehensive feature of FV_{i,j}, FP_{i,j} and FKP_{i,j}, while (c3, c5), (c2, c6), (c1, c7) are the competitive coding results of the pixels at the left and right of the target pixel for the FP, FV and FKP features respectively. For example, if FV_{i,j−1} > FV_{i,j+1}, then (c3, c5) = (1, 0). It is worth mentioning that the trimodal image pixels should be normalized before coding fusion. A general coding way for trimodal fusion is as follows:

(c_i, c_{8−i}) = (1, 0) if F_{i,j−1} > F_{i,j+1}; (0, 1) if F_{i,j−1} < F_{i,j+1}; (1, 1) if F_{i,j−1} = F_{i,j+1}    (8)

c_4 = 1 if FV_{i,j} ≥ (FP_{i,j} + FKP_{i,j})/2; 0 if FV_{i,j} < (FP_{i,j} + FKP_{i,j})/2    (9)

where F represents the trimodal biometric images (FP, FV, FKP) and i = 1, 2, 3.

Through the proposed fusion strategy based on competitive coding, we integrate the three adjacency matrices together. For the feature matching process, one may apply a traditional classifier, or the similarity of two matrices (named the Matrix Matching method). The computational formula for the similarity of two matrices is as follows:

Ms = Σ_{p=1}^{n} Σ_{q=1}^{n} (A(p,q) − Ā)(B(p,q) − B̄) / sqrt( [Σ_{p=1}^{n} Σ_{q=1}^{n} (A(p,q) − Ā)²] [Σ_{p=1}^{n} Σ_{q=1}^{n} (B(p,q) − B̄)²] )    (10)

where Ā and B̄ represent the mean values of the fusion adjacency matrices A and B respectively.

The matrix matching method is easy to operate and to understand. However, the recognition efficiency of the matrix matching method is related to the dimension of the matrix. When the scale of the graph increases, the dimension of the adjacency matrix rises sharply, which greatly increases the matching computational burden.

IV. SIMULATION RESULTS
In this section, we present the simulation results for the unimodal recognition and trimodal fusion recognition of fingers. In our homemade finger trimodal database, there are 17550 images in total from 585 fingers. For one finger of one category, we collect 10 FP, FV and FKP images respectively.
A. UNIMODAL RECOGNITION
The graph features of the finger's three modalities can be used for identification separately. Here, the simulation results of finger unimodal recognition are presented. Before presenting the results, we give necessary explanations of some common statistical variables. FAR, short for false acceptance rate, measures the rate at which an individual is incorrectly accepted, while FRR, the abbreviated form of false rejection rate, indicates the rate at which an individual is incorrectly denied. The receiver operating characteristic (ROC) curve …

FIGURE 8. Coding fusion of the trimodal graph features of finger.

FIGURE 11. Testing time for the finger single-modal recognition based on different classifiers.
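The FAR and FRR defined above can be computed from genuine and impostor matching scores at a given decision threshold, and sweeping the threshold traces the ROC curve. A small illustrative sketch (the function names and the score convention, higher means more similar, are assumptions, not the authors' code):

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR = fraction of impostor comparisons wrongly accepted (score >= t);
    FRR = fraction of genuine comparisons wrongly rejected (score < t)."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    far = np.mean(impostor >= threshold)
    frr = np.mean(genuine < threshold)
    return far, frr

def roc_points(genuine_scores, impostor_scores, n=100):
    """Sweep the decision threshold over the observed score range to trace
    the ROC curve as a list of (FAR, FRR) pairs."""
    scores = np.concatenate([genuine_scores, impostor_scores])
    thresholds = np.linspace(scores.min(), scores.max(), n)
    return [far_frr(genuine_scores, impostor_scores, t) for t in thresholds]
```

The equal error rate often reported alongside ROC curves is simply the operating point where the two rates computed here coincide.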
… on competitive coding. Finger trimodal fusion recognition can improve recognition accuracy and robustness. In the experiment, a homemade finger trimodal dataset has been applied to demonstrate the better performance of the proposed finger trimodal fusion recognition frameworks.

Future Work: Extracting the graph features of the finger trimodal images solves the problem of scale inconsistency, which is the key to effective and stable fusion recognition for finger biometrics. This attempt is not only effective for finger multimodal fusion, but also universal for other biomodalities. The related future work is as follows:

(1) Finger biometric image enhancement is the premise of extracting graph features. Inappropriate filter parameters or a filter structure without generalization can damage the finger biometric images, and texture breaks are very common. Thus, in future work, we will establish an image repainting model in order to improve graph feature accuracy.

(2) Recently, Graph Convolutional Neural Networks (GCNNs) have shown satisfactory results in processing graph-structured data, which motivates us to employ GCNNs for recognition based on the graph features of finger trimodal images. In addition, establishing a deep fusion model based on GCNNs for the finger trimodalities is also future work.

APPENDIX
ROI LOCALIZATION
In the appendix, we present the ROI localization method for the finger trimodal images. FP image collection is relatively mature and the collection area is highly targeted. One can directly use the acquired image as the ROI. For FV and FKP images, we design the corresponding ROI localization strategy. Fig. 13 presents the main results of ROI localization for FV and FKP images, where (a) presents the FV images, (b) shows the outlines of the FV pattern, and (c) shows the FKP images. The joint cavity of the finger is filled with biological tissue effusion, where the light transmittance is higher than that through the surrounding tissue. We apply this difference in light transmission to conduct ROI localization of the FV image. Because the finger blocks the light-transmitting plate, it is easy to localize the left and right outlines of the finger. After determining the outlines of the FV image, we rotate the image to keep the correct direction. Additionally, the ROI can also be acquired from the outlines of the image.

The FKP images have no obvious characteristic marks, so it is difficult to locate the ROI in the vertical direction. According to the biological structure of the finger, the ROI of FKP is located on the back side of the proximal joint. During the ROI localization of FV, we have obtained the position of the proximal joint region. Through spatial mapping, the position of the proximal joint can be mapped from the FV imaging space to the knuckle imaging space.

FIGURE 13. ROI localization for the FV and FKP images.

REFERENCES
[1] A. K. Jain, A. Ross, and S. Prabhakar, "An introduction to biometric recognition," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, pp. 1–29, 2004.
[2] Q. Zhao, L. Zhang, D. Zhang, N. Luo, and J. Bao, "Adaptive pore model for fingerprint pore extraction," in Proc. 19th Int. Conf. Pattern Recognit., Dec. 2008, pp. 1–4.
[3] J. Yang and Y. Shi, "Finger-vein ROI localization and vein ridge enhancement," Pattern Recognit. Lett., vol. 33, no. 12, pp. 1569–1579, 2012.
[4] Z. Ma, A. E. Teschendorff, A. Leijon, Y. Qiao, H. Zhang, and J. Guo, "Variational Bayesian matrix factorization for bounded support data," IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 4, pp. 876–889, Apr. 2015.
[5] N. K. Ratha, K. Karu, S. Chen, and A. K. Jain, "A real-time matching system for large fingerprint databases," IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, no. 8, pp. 799–813, Aug. 1996.
[6] L. Coetzee and E. C. Botha, "Fingerprint recognition in low quality images," Pattern Recognit., vol. 26, no. 10, pp. 1441–1460, 1993.
[7] F. Zeng, S. Hu, and K. Xiao, "Research on partial fingerprint recognition algorithm based on deep learning," Neural Comput. Appl., 2018. doi: 10.1007/s00521-018-3609-8.
[8] T. Ojala, M. Pietikäinen, and T. Mäenpää, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 971–987, Jul. 2002.
[9] M. Faundez-Zanuy, "Data fusion in biometrics," IEEE Aerosp. Electron. Syst. Mag., vol. 20, no. 1, pp. 34–38, Jan. 2005.
[10] J. Han, D. Zhang, G. Cheng, L. Guo, and J. Ren, "Object detection in optical remote sensing images based on weakly supervised learning and high-level feature learning," IEEE Trans. Geosci. Remote Sens., vol. 53, no. 6, pp. 3325–3337, Aug. 2015.
[11] M. Nageshkumar, P. Mahesh, and M. N. S. Swamy, "An efficient secure multimodal biometric fusion using palmprint and face image," Int. J. Comput. Sci. Issues, vol. 2, no. 1, pp. 49–53, 2009.
[12] S. Khellat-Kihel, R. Abrishambaf, J. L. Monteiro, and M. Benyettou, "Multimodal fusion of the finger vein, fingerprint and the finger-knuckle-print using Kernel Fisher analysis," Appl. Soft Comput., vol. 42, pp. 439–447, May 2016.
[13] J. Yang, Z. Zhong, G. Jia, and Y. Li, "Spatial circular granulation method based on multimodal finger feature," J. Elect. Comput. Eng., vol. 2016, Mar. 2016, Art. no. 7913170.
[14] N. Alajlan, M. S. Islam, and N. Ammour, "Fusion of fingerprint and heartbeat biometrics using fuzzy adaptive genetic algorithm," in Proc. World Congr. Internet Secur. (WorldCIS), Dec. 2013, pp. 76–81.
[15] Z. Ma, Y. Lai, W. B. Kleijn, Y.-Z. Song, L. Wang, and J. Guo, "Variational Bayesian learning for Dirichlet process mixture of inverted Dirichlet distributions in non-Gaussian image feature modeling," IEEE Trans. Neural Netw. Learn. Syst., vol. 30, no. 2, pp. 449–463, Feb. 2019.
[16] D. Zhang, J. Han, C. Li, J. Wang, and X. Li, "Detection of co-salient objects by looking deep and wide," Int. J. Comput. Vis., vol. 120, no. 2, pp. 215–232, 2016.
[17] D. Zhang, D. Meng, and J. Han, "Co-saliency detection via a self-paced multiple-instance learning framework," IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 5, pp. 865–878, May 2017.
[18] Z. Ma, H. Yu, W. Chen, and J. Guo, "Short utterance based speech language identification in intelligent vehicles with time-scale modifications and deep bottleneck features," IEEE Trans. Veh. Technol., vol. 68, no. 1, pp. 121–128, Jan. 2019.
[19] L. Hong and A. Jain, "Integrating faces and fingerprints for personal identification," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 12, pp. 1295–1307, Dec. 1998.
[20] L. N. Evangelin and A. L. Fred, "Feature level fusion approach for personal authentication in multimodal biometrics," in Proc. 3rd Int. Conf. Technol. Eng. Manage., Mar. 2017, pp. 148–151.
[21] W. Yang, X. Huang, F. Zhou, and Q. Liao, "Comparative competitive coding for personal identification by using finger vein and finger dorsal texture fusion," Inf. Sci., vol. 268, pp. 20–32, Jun. 2014.
[22] J. Peng, Y. Li, R. Li, G. Jia, and J. Yang, "Multimodal finger feature fusion and recognition based on Delaunay triangular granulation," in Pattern Recognition (Communications in Computer and Information Science), vol. 484. Berlin, Germany: Springer, 2014.
[23] T. Sim, S. Zhang, R. Janakiraman, and S. Kumar, "Continuous verification using multimodal biometrics," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 687–700, Apr. 2007.
[24] R. Diestel, Graph Theory. Berlin, Germany: Springer-Verlag, 2016.
[25] G. Cheng, P. Zhou, and J. Han, "Learning rotation-invariant convolutional neural networks for object detection in VHR optical remote sensing images," IEEE Trans. Geosci. Remote Sens., vol. 54, no. 12, pp. 7405–7415, Dec. 2016.
[26] P. Foggia, G. Percannella, and M. Vento, "Graph matching and learning in pattern recognition in the last 10 years," Int. J. Pattern Recognit. Artif. Intell., vol. 28, no. 1, 2014, Art. no. 1450001.
[27] B. Luo, R. C. Wilson, and E. R. Hancock, "Spectral embedding of graphs," Pattern Recognit., vol. 36, no. 10, pp. 2213–2230, 2003.
[28] E. E. A. Abusham and H. K. Bashir, "Face recognition using local graph structure (LGS)," Lect. Notes Comput. Sci., vol. 6762, no. 1, pp. 169–175, 2011.
[29] S. Dong et al., "Finger vein recognition based on multi-orientation weighted symmetric local graph structure," KSII Trans. Internet Inf. Syst., vol. 9, no. 10, pp. 4126–4142, 2015.
[30] G. Cheung, E. Magli, Y. Tanaka, and M. K. Ng, "Graph spectral image processing," Proc. IEEE, vol. 106, no. 5, pp. 907–930, May 2018.
[31] J. Yang, J. Wei, and Y. Shi, "Accurate ROI localization and hierarchical hyper-sphere model for finger-vein recognition," Neurocomputing, vol. 328, pp. 171–181, Feb. 2019.
[32] Z. Ye, J. Yang, and J. H. Palancar, "Weighted graph based description for finger-vein recognition," in Proc. China Conf. Biometric Recognition, 2017, pp. 370–379.
[33] J. Yang, J. Yang, and Y. Shi, "Finger-vein segmentation based on multi-channel even-symmetric Gabor filters," in Proc. IEEE Int. Conf. Intell. Comput. Intell. Syst., Nov. 2009, pp. 500–503.
[34] J. Chen et al., "WLD: A robust local image descriptor," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 9, pp. 1705–1720, Sep. 2010.
[35] M. Vlachos and E. Dermatas, "Vein segmentation in infrared images using compound enhancing and crisp clustering," in Proc. 6th Int. Conf. Comput. Vis. Syst. (ICVS), Santorini, Greece: Springer-Verlag, May 2008, pp. 393–402.
[36] C. B. Yu, H. F. Qin, and Y. Z. Cui, "Finger-vein image recognition combining modified Hausdorff distance with minutiae feature matching," Interdiscipl. Sci. Comput. Life Sci., vol. 1, no. 4, pp. 280–289, 2009.
[37] H. G. Zhang, S. Zhang, and Y. Yin, "Online sequential ELM algorithm with forgetting factor for real applications," Neurocomputing, vol. 261, pp. 144–152, Oct. 2017.
[38] C.-W. Hsu and C.-J. Lin, "A comparison of methods for multiclass support vector machines," IEEE Trans. Neural Netw., vol. 13, no. 2, pp. 415–425, Mar. 2002.
[39] A. A. Mohammed and V. Umaashankar, "Effectiveness of hierarchical softmax in large scale classification tasks," in Proc. Int. Conf. Adv. Comput., Commun. Inform. (ICACCI), Bangalore, India, Sep. 2018, pp. 1090–1094.
[40] B. Fan, F. Wu, and Z. Hu, "Rotationally invariant descriptors using intensity order pooling," IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 10, pp. 2031–2045, Oct. 2012.
[41] Z. Chai, Z. Sun, H. Méndez-Vázquez, R. He, and T. Tan, "Gabor ordinal measures for face recognition," IEEE Trans. Inf. Forensics Security, vol. 9, no. 1, pp. 14–26, Jan. 2014.
[42] Z. Ma, J.-H. Xue, A. Leijon, Z.-H. Tan, Z. Yang, and J. Guo, "Decorrelation of neutral vector variables: Theory and applications," IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 1, pp. 129–143, Jan. 2018.

HAIGANG ZHANG received the B.S. degree from the University of Science and Technology, Liaoning, in 2012, and the Ph.D. degree from the School of Automation and Electrical Engineering, University of Science and Technology Beijing, in 2018. He is currently with the College of Electronic Information and Automation, Civil Aviation University of China. His research interests include machine learning, image processing, and biometrics recognition.

SHUYI LI received the B.S. degree in electronic information engineering from the Changchun University of Technology, in 2016. She is currently pursuing the master's degree with the Civil Aviation University of China. Her main research interests include image processing and biometrics recognition.

YIHUA SHI received the B.Sc. degree from Inner Mongolia Normal University, China, in 1993, and the M.Sc. degree from Zhengzhou University, China, in 2006. She is currently an Associate Professor with the Civil Aviation University of China. Her major research interests include machine learning, pattern recognition, computer vision, multimedia processing, and data mining.

JINFENG YANG received the B.Sc. degree from the Zhengzhou University of Light Industry, China, in 1994, the M.Sc. degree from Zhengzhou University, China, in 2001, and the Ph.D. degree in pattern recognition and intelligent system from the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, China, in 2005. He is currently a Professor with the Civil Aviation University of China. His major research interests include machine learning, pattern recognition, computer vision, multimedia processing, and data mining. He regularly serves as a PC member/referee for international journals and top conferences, including IEEE TIP, IVC, PR, PRL, ICCV, CVPR, and ECCV. He currently serves as a syndic of the Chinese Society of Image and Graphics and the Chinese Association for Artificial Intelligence.