
Personal Identification For Single Sample Using

Finger Vein Location and Direction Coding

Wenming Yang, Qing Rao, Qingmin Liao
Visual Information Processing Laboratory, Department of Electronic Engineering, Tsinghua University
Tsinghua-PolyU Biometric Joint Laboratory, Graduate School at Shenzhen, Tsinghua University
Shenzhen, China, 518055

Abstract- Recent years have seen plenty of personal identification methods using different biometrics, such as fingerprint, face, palm-print and vein. The majority of these methods rely on complex image projections and transforms in the Fourier, wavelet or other domains, which usually bring a heavy computational load and are difficult to interpret intuitively. Moreover, these methods are oriented to multiple-sample learning, which usually restricts their application. Among these biometrics, the vein, as a living feature with high anti-counterfeiting capability, has attracted considerable attention. In this paper, we propose a structured personal identification approach using finger vein Location and Direction Coding (LDC). First, we design a finger vein imaging device with a near-infrared (NIR) light source, with which a database of finger vein images is established. Subsequently, we use the brightness difference in the finger vein image to extract the vein pattern. Furthermore, finger vein LDC is proposed and performed, which creates a structured feature image for each finger vein. Finally, the structured feature image is used to conduct personal identification on our finger vein image database, which includes 440 vein images from 220 different fingers. The equal error rate of our method on this database is 0.44%.

Keywords- Biometrics; Finger vein; Personal identification; Feature extraction; Location and direction coding



I. INTRODUCTION

Biometric personal identification is automatic identification based on digital information about people's physiological characteristics or behavior patterns, such as the face, fingerprint, palm-print, iris, vein, voice and gait. These biometric characteristics are naturally attached to people, are nearly invariant over time, and vary significantly between individuals. All of this promises a good application prospect for the technology. Successful examples include the widely used face recognition and fingerprint recognition systems.
There are many distinguishable characteristics on the hand, such as the fingerprint, palm-print, finger-knuckle-print, hand shape, hand vein and finger vein. All these hand-based characteristics carry rich personal information, which can generally be used for identification. Moreover, the small size and high flexibility of the hand make hand-based biometrics easy to sample with small instruments and a simple procedure. Therefore, hand-based biometrics have drawn wide attention and become a hot issue in this area.
Finger vein recognition is one of the hottest topics in hand-based biometric identification. The vein image can be sampled with near-infrared light, and the small size and high flexibility of the finger make the finger vein image easy to sample. Since the finger vein image is only available on a living body, it is nearly free from forgery. There has been some research on finger vein based recognition, where the major technical issues are vein shape extraction and feature extraction. Methods such as local thresholding and repeated line tracking have been applied to vein extraction [1-3]. Methods such as template matching [1], structured features [4] and wavelet transforms [5-6] have been applied to feature recognition. Other hand-based biometrics such as the palmprint [4], palm vein [7] and finger-knuckle-print [8], as well as their combinations [9], have been studied for many years. Most of these methods are oriented to multiple-sample learning, which usually restricts their application. By comparison, methods based on single-sample learning are promising for future applications.
In this paper, we propose a novel approach based on single-sample finger vein recognition. Our method simultaneously conducts the segmentation of the vein image and the extraction of vein features: we utilize the brightness difference to segment the vein area (location features) while extracting the vein direction information (direction features). Finally, the vein location and direction features are coded for identification.
This paper is organized as follows. Section II introduces our finger vein image sampling system. The preprocessing of the vein image, including finger segmentation, image enhancement, denoising and normalization, is described in Section III. Section IV describes our algorithm for vein segmentation, finger vein location and direction coding, and identification. The experimental results are shown in Section V, and Section VI concludes the paper.

978-1-4577-0490-1/11/$26.00 2011 IEEE



II. IMAGE ACQUISITION SYSTEM

Since there is no finger vein image database open to the public, we have established a sampling system to collect near-infrared vein images. The system is shown in Figure 1.

Figure 1. Acquisition system

Two arrays of NIR LEDs are used as the light source. They are fixed to the upper left and upper right of the finger, and the wavelength of the LEDs is 890 nm. A grey-scale CCD camera with an 850 nm high-pass filter collects the near-infrared image, which is sent to a PC. Veins appear dark in the image because most of the near-infrared light is absorbed by the hemoglobin in the blood. A sample raw image is shown in Figure 2.

Figure 2. Raw image

III. PREPROCESSING

In this section, preprocessing is conducted on the raw image. Preprocessing includes background elimination, denoising, image enhancement, size normalization and brightness normalization.

A. Finger shape segmentation

The finger is in the middle of the vein image, and there is an obvious brightness jump in the vertical direction at its edges. Therefore, we use the gradient operator (shown in Figure 3) to detect the vertical lines as the finger edges. Moving the mask from the middle of the image toward the left (right), an edge is detected when the gradient exceeds a threshold. The left (right) edge is obtained by repeating this process for each row.

Figure 3. Gradient operator

One constraint is introduced to guarantee the continuity of the edge: the distance between two neighboring points on the edge should be no larger than several pixels. With $d = Line(i+1) - Line(i)$, we adjust the edge as follows:

$$Line(i+1) = \begin{cases} Line(i+1) & |d| \le 3 \\ Line(i) + 3\,\mathrm{sgn}(d) & 3 < |d| \le 6 \\ Line(i) & |d| > 6 \end{cases}$$

where $Line(i)$ is the edge point of the $i$-th row. This operation is performed on both edges. The area between the two lines is the finger, and the area outside is eliminated as background.

B. Normalization

Size and brightness normalization is necessary for feature extraction and final matching, so we normalize the size and brightness of the finger image.

Every row in the finger area is normalized to the same length using linear interpolation, so that all finger areas are normalized to the same rectangular size (200*100 pixels).

We normalize the brightness of the image as follows:

$$I'(i, j) = m' + (I(i, j) - m)\,\frac{\sigma'}{\sigma}$$

where $I(i, j)$, $m$ and $\sigma$ are the brightness, mean and deviation of the image before normalization, and $I'(i, j)$, $m'$ and $\sigma'$ are the corresponding brightness, mean and deviation after normalization. The processed images have similar means and variances. The result is shown in Figure 4.

Figure 4. Normalization. (a) Image before normalization. (b) Image after normalization.

IV. VEIN SEGMENTATION, LDC AND IDENTIFICATION

Vein recognition consists of vein segmentation, vein location and direction coding, and identification. Here we perform vein segmentation and feature extraction to obtain feature maps, which are then used for identification.

A. Vein valley characteristics analysis

In the vein image, the areas with veins are relatively dark, while the others are bright. Literature [1] mentions the valley-shaped characteristic of the brightness in vein areas: the cross-sectional brightness goes from high to low, and then back to high, as shown in Figure 5.

Figure 5. Cross-sectional profile of a vein. (a) Cross-sectional profile. (b) Position of the cross section.

Figure 6 shows how to calculate the valley depth of a point, using the following equation:

$$G_\theta(i, j) = I(i - 2r\sin\theta,\, j - 2r\cos\theta) + I(i - r\sin\theta,\, j - r\cos\theta) + I(i + r\sin\theta,\, j + r\cos\theta) + I(i + 2r\sin\theta,\, j + 2r\cos\theta) - 4\,I(i, j)$$

Figure 6. Valley depth of a point

Here $I(i, j)$ is the brightness, $4r$ is the estimated vein width, and $\theta$ is the vein direction. We set $r = 2$, because we assume that the vein width is about 8 pixels (for a 200*100-pixel image). Since the vein direction cannot be estimated directly, we evaluate the depth in several directions and pick the deepest one as the direction of the point. We restrict the search to eight directions, 0, 22.5, 45, 67.5, 90, 112.5, 135 and 157.5 degrees, as shown in Figure 7. The direction with the largest valley depth is taken as the direction of the point, and that depth is taken as the depth value of the point. Thus we obtain the valley depth image $G$ and the direction map $\theta$ of the vein image:

$$G(i, j) = \max\{G_0(i, j), G_{22.5}(i, j), \ldots, G_{157.5}(i, j)\}$$
$$\theta(i, j) = \arg\max_\theta\{G_0(i, j), G_{22.5}(i, j), \ldots, G_{157.5}(i, j)\}$$

Figure 7. The eight directions

The valley depth of a pixel reflects the possibility that the point belongs to a vein: a point has a high probability of being in the vein area when it lies in a valley, and vice versa.
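As an illustration, the directional valley-depth search described above can be sketched as follows. This is a minimal sketch, not the authors' code: the function name, the array layout, and the clipping of samples at the image border are our own assumptions (the paper does not specify border handling).

```python
import numpy as np

def valley_depth_maps(img, r=2,
                      angles_deg=(0, 22.5, 45, 67.5, 90, 112.5, 135, 157.5)):
    """Per-pixel valley depth G and the index of the deepest direction.

    img is a 2-D grey image as a float array; the assumed vein width
    is 4*r pixels.
    """
    h, w = img.shape
    ii, jj = np.mgrid[0:h, 0:w]
    G = np.full((h, w), -np.inf)
    theta = np.zeros((h, w), dtype=int)
    for k, a in enumerate(angles_deg):
        rad = np.deg2rad(a)
        di, dj = np.sin(rad), np.cos(rad)
        total = np.zeros((h, w))
        # sum the four samples at +-r and +-2r along direction a
        for m in (-2, -1, 1, 2):
            si = np.clip(np.round(ii + m * r * di).astype(int), 0, h - 1)
            sj = np.clip(np.round(jj + m * r * dj).astype(int), 0, w - 1)
            total += img[si, sj]
        depth = total - 4.0 * img   # G_theta(i, j)
        deeper = depth > G
        G[deeper] = depth[deeper]
        theta[deeper] = k
    return G, theta
```

Note that for a vertical dark vein, the deepest cross-section is sampled horizontally, i.e. at angle index 0 in this parameterization.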

B. Vein Location Extraction

In this part, we obtain the binary vein image (location map) from the valley depth image.

Step 1. The valley depth image gives, for each point, the probability of being in the vein area: brighter points have a higher probability of lying in the vein area, while darker ones have a higher probability of lying in the non-vein area. We use two thresholds, a high threshold $T_h$ and a low threshold $T_l$, for the segmentation. Points with depth larger than $T_h$ are assumed to be in the vein area, and those with depth smaller than $T_l$ in the non-vein area. Points with depth between the two thresholds are ambiguous and are left for further segmentation:

$$FMap(i, j) = \begin{cases} 1 & G(i, j) \ge T_h \\ G(i, j)/T_h & T_l < G(i, j) < T_h \\ 0 & G(i, j) \le T_l \end{cases}$$

$$T_h = k \cdot G_{mean}$$

where $G_{mean}$ is the mean of the non-zero elements in the depth image, $k$ is set to 1.2, and $T_l$ is set to 5.
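A possible reading of Step 1 as code is sketched below. The explicit values 1 and 0 for the certain vein and non-vein regions follow from the text; the function name and the fallback when the depth image has no positive entries are our own assumptions.

```python
import numpy as np

def location_fmap(G, k=1.2, Tl=5.0):
    """Three-way split of the valley-depth image G (Step 1).

    Values: 1 -> vein, 0 -> background, fractional G/Th -> ambiguous,
    left for the local-threshold refinement of Step 2.
    """
    nz = G[G > 0]
    Th = k * nz.mean() if nz.size else Tl  # Th = k * mean of non-zero depths
    fmap = np.zeros_like(G, dtype=float)
    fmap[G >= Th] = 1.0
    amb = (G > Tl) & (G < Th)
    fmap[amb] = G[amb] / Th
    return fmap, Th
```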
Step 2. We use a local threshold method for the further segmentation of the ambiguous area. For each ambiguous point we consider its neighborhood: we calculate the mean and deviation of the neighborhood and set the segmentation threshold using the following equations:

$$w_m(i, j) = \frac{1}{(2b+1)^2}\sum_{x=i-b}^{i+b}\sum_{y=j-b}^{j+b} FMap(x, y)$$

$$w_\sigma(i, j) = \sqrt{\frac{1}{(2b+1)^2}\sum_{x=i-b}^{i+b}\sum_{y=j-b}^{j+b}\bigl(FMap(x, y) - w_m(i, j)\bigr)^2}$$

$$th(i, j) = w_m(i, j) + k \cdot w_\sigma(i, j)$$

Step 3. Smooth the image with a median filter, fill the small holes with morphological operations, and eliminate the noise points. The result is shown in Figure 8.

Figure 8. Segmentation result. (a) Image after normalization. (b) Valley depth image. (c) Vein image with ambiguous area. (d) Result.

C. Vein Location and Direction Coding

We have obtained the binary vein location map and the direction map. Since the majority of finger veins are distributed vertically in the image, the discrimination in the vertical direction should be greater than that in the horizontal direction. Therefore, we code directions 4, 5 and 6 into one, simplifying the original 8 directions into 6 directions, as shown in Figure 9.

Figure 9. (a) 8 directions. (b) 6 directions.

Overlapping the direction map onto the binary vein location map yields the vein feature map with direction information. Thus, the finger vein location and direction information is coded into a feature map with 7 different values (0-6): 0 represents the non-vein (background) area, and 1-6 indicate the vein area (location) and its direction. Figure 10 shows several feature maps after finger vein LDC.

Figure 10. LDC result. (a) Image after normalization. (b) Feature map.
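The coding step can be sketched as a simple table lookup. Which three of the eight directions Figure 9 merges depends on its numbering, which we cannot recover from the text; as an illustration we assume the three around 90 degrees (indices 3, 4 and 5) share one code, and the function name is ours.

```python
import numpy as np

# Assumed mapping from the 8 direction indices (0..7 for 0, 22.5, ...,
# 157.5 degrees) to the 6 LDC codes; indices 3-5 are merged into one
# code, an assumption standing in for Figure 9's exact numbering.
DIR_TO_CODE = np.array([1, 2, 3, 4, 4, 4, 5, 6])

def ldc_feature_map(location, direction):
    """Combine a binary location map (0/1) and a direction-index map
    (0..7) into the 7-valued LDC feature map: 0 = background,
    1..6 = vein pixel with its direction code."""
    return np.where(location > 0, DIR_TO_CODE[direction], 0)
```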

D. Matching

Based on the finger vein LDC, we use a template matching method for recognition. Corresponding points in two feature maps A and B are considered matched if they are non-zero and equal. Counting the total number of matched points, we define the similarity between A and B as

$$R(A, B) = \frac{2\,\Phi(A, B)}{S(A) + S(B)}$$

$$\Phi(A, B) = \sum_{i}\sum_{j} f(A(i, j), B(i, j)), \qquad f(s, t) = \begin{cases} 1 & s = t,\ s > 0,\ t > 0 \\ 0 & \text{otherwise} \end{cases}$$

$$S(A) = \sum_{i}\sum_{j} g(A(i, j)), \qquad g(x) = \begin{cases} 1 & x > 0 \\ 0 & x = 0 \end{cases}$$

$R(A, B)$ is a similarity function with values in the interval $[0, 1]$: the larger the similarity, the higher the probability that the two samples belong to the same class. $\Phi(A, B)$ is the total number of matched points between the two samples, and $S(A)$ is the number of non-zero elements in the image, which is also the size of the vein area.

To account for possible shifts of the finger position in the image, we perform a shift correction when the similarity is calculated. We shift the feature map of the test sample within ±6 pixels horizontally and ±12 pixels vertically (moving 2 pixels per step in the vertical shifts), calculate the similarity at each shift, and take the highest similarity as the final result:

$$\Phi_{x,y}(A, B) = \sum_{i}\sum_{j} f(A(i + x, j + y), B(i, j))$$

$$R(A, B) = \max_{x, y}\{R_{x,y}(A, B)\}, \quad x = -6, -5, \ldots, 5, 6; \quad y = -12, -10, \ldots, 10, 12$$

V. EXPERIMENTAL RESULTS

Since there is no public finger vein image database available, we established a database of finger veins containing 440 images from 220 different fingers. For each finger, one image is used for learning and the other for testing. After applying our algorithm to obtain the feature maps, we sub-sample the feature maps and save them at 90*40 pixels. Experimentally, feature maps of this size retain sufficient discrimination, and the reduced size lowers the storage and computational cost. For recognition, we compare the test sample (image) with all the classes in the database and perform the classification using the similarity defined above. We use the maximal similarity principle, which assigns a sample to the class with the highest similarity. The identification accuracy of our approach is 100%.

To evaluate these results quantitatively, we calculate the equal error rate (EER). Figure 11 shows the receiver operating characteristic (ROC) curves, which indicate the relationship between the false acceptance rate (FAR) and the false rejection rate (FRR). The EER of our method is 0.44%, while the EER of Miura's method [1] is 2.72% and that of Yang's method [11] is 3.19%. The experimental results show that our method outperforms the others.

Figure 11. ROC curves
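The matching score and its shift correction can be sketched as follows. This is a sketch under our assumptions: the function names are ours, and `np.roll` wraps pixels around the border, a simplification of the paper's shift, which presumably discards shifted-out pixels.

```python
import numpy as np

def similarity(A, B):
    """R(A, B) = 2*Phi / (S(A) + S(B)) on two LDC feature maps."""
    phi = np.count_nonzero((A == B) & (A > 0))      # matched points
    s = np.count_nonzero(A) + np.count_nonzero(B)   # vein-area sizes
    return 2.0 * phi / s if s else 0.0

def similarity_with_shift(A, B, dx=6, dy=12, ystep=2):
    """Best similarity over horizontal shifts of +-dx pixels and
    vertical shifts of +-dy pixels (step ystep), applied to the
    test-sample map A, as in the paper's shift correction."""
    best = 0.0
    for x in range(-dx, dx + 1):
        for y in range(-dy, dy + 1, ystep):
            # x moves columns (horizontal), y moves rows (vertical);
            # np.roll wraps around, an assumption of this sketch
            shifted = np.roll(np.roll(A, x, axis=1), y, axis=0)
            best = max(best, similarity(shifted, B))
    return best
```

A map compared with itself scores 1.0, and a map shifted by one column is recovered by the shift search.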



VI. CONCLUSION

In this paper, we propose a new approach based on single-sample finger vein recognition. The approach extracts vein features for recognition from near-infrared images, and it exhibits high robustness and a high recognition rate in our experiments. A method based on a single sample is convenient for sampling and easy to apply, and therefore has a promising prospect for future applications.

Our future work will focus on the following issues:
1) Improving the imaging device: enhancing the lighting to make imaging more stable, and reducing the size of the imaging device by implementing it on an embedded system.
2) Expanding the database for further verification.
3) Exploring multi-biometric fusion, such as merging the finger vein image and the finger-knuckle-print, which can already be sampled with our current finger vein imaging system.

ACKNOWLEDGMENT

The authors would like to acknowledge the support of the Basic Nature Research Funding from Shenzhen, China.






REFERENCES

[1] Naoto Miura, Akio Nagasaka and Takafumi Miyatake, "Feature extraction of finger-vein patterns based on repeated line tracking and its application to personal identification," Machine Vision and Applications, Oct. 2004.
[2] Yu Chengbo, Qing Huafeng, Zhang Lian, "A Research on Extracting Low Quality Human Finger Vein Pattern Characteristics," 2nd International Conference on Bioinformatics and Biomedical Engineering (ICBBE 2008), pp. 1876-1879, 16-18 May 2008.
[3] Xiang Yu, Wenming Yang, Qingmin Liao, et al., "A Novel Finger Vein Pattern Extraction Approach for Near-Infrared Image," 2nd International Congress on Image and Signal Processing (CISP '09), pp. 1-5, 17-19 Oct. 2009.
[4] Wu X., Zhang D., Wang K., Palmprint Recognition, Beijing: Science Press, Oct. 2006.
[5] Fengxu Guan, Kejun Wang, Hongwei Mo, Hui Ma, et al., "Research of Finger Vein Recognition Based on Fusion of Wavelet Moment and Horizontal and Vertical 2DPCA," 2nd International Congress on Image and Signal Processing (CISP '09), pp. 1-5, 17-19 Oct. 2009.
[6] Zhong Bo Zhang, Dan Yang Wu, Si Liang Ma, et al., "Multiscale Feature Extraction of Finger-Vein Patterns Based on Wavelet and Local Interconnection Structure Neural Network," International Conference on Neural Networks and Brain (ICNN&B '05), vol. 2, pp. 1081-1084, 13-15 Oct. 2005.
[7] Yingbo Zhou, A. Kumar, "Contactless palm vein identification using multiple representations," Fourth IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), pp. 1-6, 27-29 Sept. 2010.
[8] Lin Zhang, Lei Zhang, David Zhang, et al., "Online finger-knuckle-print verification for personal authentication," Pattern Recognition, vol. 43, no. 7, pp. 2560-2571, July 2010.
[9] Wenming Yang, Xiang Yu, Qingmin Liao, "Personal Authentication Using Finger Vein Pattern and Finger-Dorsa Texture Fusion," ACM International Conference on Multimedia, Beijing, China, 19-24 Oct. 2009.
[10] Naoto Miura, Akio Nagasaka, Takafumi Miyatake, "Extraction of finger-vein patterns using maximum curvature points in image profiles," IEICE Transactions on Information and Systems, vol. E90-D, no. 8, pp. 1185-1194, 2007.
[11] Jinfeng Yang, Xu Li, "Efficient Finger Vein Localization and Recognition," 20th International Conference on Pattern Recognition (ICPR 2010), pp. 1148-1151, 23-26 Aug. 2010.
[12] Jinfeng Yang, Jinli Yang, "Multi-Channel Gabor Filter Design for Finger-Vein Image Enhancement," Fifth International Conference on Image and Graphics (ICIG '09), pp. 87-91, 20-23 Sept. 2009.