
SECURITY & PRIVACY

Palmprint Verification
for Controlling Access
to Shared Computing
Resources
You can build an effective palmprint verification system using a
combination of mostly off-the-shelf components and techniques.

Maylor K.H. Leung, A.C.M. Fong, and Siu Cheung Hui
Nanyang Technological University

Access security is an important aspect of pervasive computing systems. It offers the system developer and end users a certain degree of trust in the use of shared computing resources. Biometrics verification offers many advantages over the username-plus-password approach for access control. Users don't have to memorize any codes or passwords, and biometric systems are more reliable because biometric characteristics can't easily be duplicated, lost, or stolen.

Researchers have studied such biometric characteristics as faces, fingerprints, irises, voices, and palmprints.1 Facial appearance and features change with age. Fingerprints can be affected by surface abrasions or otherwise compromised.2,3 Capturing iris images is relatively difficult, and iris scans can be intrusive. Voices are susceptible to noise corruption and can be easily copied and manipulated. Palmprints are potentially a good choice for biometric applications because they're stable for a given person, easy to capture, and difficult to duplicate. They offer greater security than fingerprints because palm veins are more complex than finger veins. However, compared to other biometric characteristics, they have perhaps seen less research (see the "Related Work in Palmprint Biometrics" sidebar). This provides a big opportunity for advancing palmprint technology and applications.

We've developed an effective prototype palmprint verification system using a combination of mostly off-the-shelf (and therefore tried and tested) components and techniques. Such an approach should make palmprint verification an appealing proposition.

Palmprint verification
Figure 1 shows the system overview. The system verifies palmprints in four stages: image acquisition, palm positioning, feature extraction, and palmprint matching.

Image acquisition
During image acquisition, the system captures an image of the user's hand via a camera and stores it as a grayscale TIFF file. We use a general-purpose digital camera (a Pulnix America TMC-6). The image quality is acceptable for palmprint verification, assuming that the background is illuminated uniformly. This is reasonable, given that lighting variations within such a small area as a palm can be minimized.

Palm positioning
Normally, in the raw image, the palm's location and orientation aren't fixed. To solve this problem, the system must establish a coordinate system so that all the palms are properly aligned and normalized. Another problem is that the raw image consists of the palm, fingers, wrist, and some background. The system must trim away the unwanted portions to reduce the computation required in subsequent processing. The palm-positioning stage handles these tasks, ultimately producing a subimage. Figure 2 shows the four-step process.

Figure 1. The palmprint verification system. Image acquisition produces a grayscale TIFF file; palm positioning produces a grayscale subimage; feature extraction produces a line edge map, which is either registered as a model in the database or, during verification, matched against the registered model to reach a decision.

Boundary extraction and edge thinning. To extract the boundaries, we use edge detection (for a tutorial, see www.pages.drexel.edu/~weg22/edge.html). Edge pixels should lie on regions with a sharp gray-level transition. Edge detection involves two steps:

1. Compute the gradient magnitude of each pixel in the image using the set of Sobel masks for detecting horizontal, vertical, and diagonal edges.
2. Threshold the image on the basis of gradient magnitude.

In step 1, the system performs convolution of the raw image (see figure 2a) with the Sobel kernels for each pixel. This step selects each pixel's strongest response to the four different masks to represent that pixel's gradient magnitude. Step 2 uses adaptive thresholding based on the computed gradient magnitudes. First, we omit the highest 5 percent of the computed gradient magnitudes to avoid biasing by outlier values. Second, for the remaining gradient magnitudes, using the highest value as reference Gr, we compute the threshold value as T_Gradient = Gr × Ratio_Gradient, where Ratio_Gradient is a predetermined constant between 0 and 1. We set Ratio_Gradient at 0.3. This relatively low value avoids broken edges.

With this approach, the threshold value will be low if the input image's edge response is weak, or it will be high if the edge response is strong. After the system computes the threshold value, it maps the input image into a binary image, with white pixels representing the edges and black pixels representing the rest.
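As an illustration of these two steps, here is a minimal Python sketch. The original prototype was written in Visual C++, so NumPy and SciPy are stand-ins; the four Sobel masks, the 5 percent trimming, and Ratio_Gradient = 0.3 follow the text, while the array layout and function names are our own assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel masks for horizontal, vertical, and the two diagonal directions.
SOBEL_MASKS = [
    np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]),   # horizontal edges
    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),   # vertical edges
    np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]]),   # diagonal (45 degrees)
    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]),   # diagonal (135 degrees)
]

def extract_edges(gray, ratio_gradient=0.3, trim_fraction=0.05):
    """Binary edge image via four-direction Sobel responses and an adaptive threshold."""
    gray = gray.astype(float)
    # Each pixel's gradient magnitude is its strongest response to the four masks.
    responses = np.stack([np.abs(convolve(gray, mask)) for mask in SOBEL_MASKS])
    gradient = responses.max(axis=0)

    # Omit the top 5 percent of magnitudes, then use the remaining maximum as Gr.
    g_r = np.quantile(gradient, 1.0 - trim_fraction)
    t_gradient = g_r * ratio_gradient            # T_Gradient = Gr x Ratio_Gradient

    return gradient >= t_gradient                # True (white) marks edge pixels
```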

The binary image isn't immediately suitable for extracting feature points. The system first passes it to the edge-thinning substage to produce a refined edge image, to facilitate boundary tracing.

For edge thinning, a morphological-thinning algorithm4 removes selected pixels from the binary image to reduce all lines in the image to a single pixel width. We calculate the thinned image by first translating the structuring element's origin to each possible pixel position in the original image. (A structuring element defines the shape and form of morphological operations.) We then compare the structuring element's pixel pattern with the underlying part of the image. At each position, if the pixel patterns match, the image pixel underneath the structuring element's origin is set as background. In other words, we remove a redundant foreground pixel. If the structuring element's pixel pattern doesn't match, the corresponding pixel remains unchanged. The thinned-edge image (see figure 2b) makes the analysis of boundary information much easier, which helps to extract the feature points more efficiently.
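The pattern-matching description above is the classic hit-or-miss formulation of thinning. The sketch below is a generic implementation of that idea, using SciPy's binary_hit_or_miss with a standard pair of 3 × 3 structuring elements and their rotations; it is not the authors' code, and the cited algorithm is the one in Gonzalez and Woods.4

```python
import numpy as np
from scipy.ndimage import binary_hit_or_miss

def morphological_thin(edges):
    """Iteratively thin a binary edge image down to single-pixel-wide lines.

    A pixel whose 3x3 neighbourhood matches one of eight structuring-element
    patterns (two base patterns and their 90-degree rotations) is redundant
    and is set to background; the sweep repeats until nothing changes.
    """
    img = edges.astype(bool)
    fg1 = np.array([[0, 0, 0], [0, 1, 0], [1, 1, 1]], bool)   # foreground pattern
    bg1 = np.array([[1, 1, 1], [0, 0, 0], [0, 0, 0]], bool)   # background pattern
    fg2 = np.array([[0, 0, 0], [1, 1, 0], [0, 1, 0]], bool)
    bg2 = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]], bool)
    elements = [(np.rot90(fg, k), np.rot90(bg, k))
                for fg, bg in ((fg1, bg1), (fg2, bg2)) for k in range(4)]

    while True:
        before = img.copy()
        for fg, bg in elements:
            # Pixels whose neighbourhood matches the pattern are removed.
            img &= ~binary_hit_or_miss(img, structure1=fg, structure2=bg)
        if np.array_equal(img, before):
            return img
```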
Figure 2. Palm positioning: the (a) initial image, (b) thinned edge image, (c) located feature points, (d) coordinate system, and (e) resulting subimage.

Feature-point location. To set up the coordinate system, we require the locations of three feature points. These key points lie on the bottom of the valleys between fingers. By observing the boundary image's line pattern, we see that the bottom of a valley is a short curve joining the edges of adjacent fingers. The key points are best represented as those curves' midpoints.

To locate the midpoint, one method is to first find the line (Lm) dividing the interfinger space into halves. The intersection point between Lm and the bottom curve of the valley is one of the desired key points. Usually, the edges of two adjacent fingers form a V-shape. We can establish an angle by extending the V-shaped edges until they intersect. To find Lm, we calculate this angle's bisector. Figure 2c illustrates this method.

Generally, directly locating the V-shaped edges isn't easy. To solve this problem, we first find each finger's parallel edges.


Then, for every two adjacent fingers, we select an appropriate edge from each finger and use those two edges to form a V-shaped pair. This approach is feasible because detecting parallel-line pairs is much easier than detecting V-shaped line pairs directly. So, locating the three key points takes four steps:

1. Extract the straight lines. Find the straight lines in the thinned edge image, and select the long ones that could be finger edges.
2. Group the parallel lines. Group the extracted long lines into parallel pairs, each representing the two edges of a finger.
3. Group the V-shaped lines. Reorder the parallel pairs, and group the lines into V-shaped pairs, each representing the edges of two adjacent fingers.
4. Locate the key points. Form an angle for each V-shaped pair, and calculate each angle's bisector. Find the intersection points between the boundaries of the interfinger valleys and the bisectors. The intersecting points represent the desired key points.

The nature of this algorithm imposes two constraints on the input image. First, background objects can't contain parallel-line pairs; these extra pairs would confuse the system. Second, the input image must contain sufficient lengths of the fingers; otherwise, the system won't be able to identify the finger edges.
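Step 4 reduces to a little analytic geometry: intersect the two extended finger edges, bisect the angle, and walk along the bisector (Lm) until it meets the valley's bottom curve. The sketch below assumes each finger edge is given as a point plus a unit direction pointing from fingertip toward palm and that the boundary image is indexed by (row, column); those conventions, and the simple ray walk, are our own assumptions rather than the authors' method.

```python
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Intersection of two lines, each given as a point and a direction vector."""
    p1, d1, p2, d2 = (np.asarray(v, float) for v in (p1, d1, p2, d2))
    s, _ = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + s * d1

def valley_key_point(edge_a, edge_b, boundary, max_steps=200):
    """Locate one key point from a V-shaped pair of finger edges.

    edge_a, edge_b: (point, unit_direction) for two adjacent finger edges,
    with each direction pointing from the fingertip toward the palm.
    boundary: thinned binary boundary image indexed as boundary[row, col].
    """
    (pa, da), (pb, db) = edge_a, edge_b
    apex = line_intersection(pa, da, pb, db)         # where the extended V closes
    bisector = np.asarray(da, float) + np.asarray(db, float)
    bisector /= np.linalg.norm(bisector)             # direction of the line Lm

    # The apex lies past the valley, inside the palm; walking back along Lm
    # toward the fingertips, the first boundary pixel met lies on the bottom
    # curve of the valley and is taken as the key point.
    for step in range(max_steps):
        x, y = apex - step * bisector
        row, col = int(round(y)), int(round(x))
        if 0 <= row < boundary.shape[0] and 0 <= col < boundary.shape[1]:
            if boundary[row, col]:
                return np.array([x, y])
    return None
```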


Establishment of the coordinate system. We denote the three located key points as K1, K2, and K3. The coordinate system consists of two axes, x and y (see figure 2d). The x-axis passes through K1 and K3. The y-axis is perpendicular to the x-axis and passes through K2. The intersection of the two axes determines the coordinate system's origin.

With this coordinate system, we can create a rectangular cutting window outlining the subimage. We determine the window's location and size such that most of the distinctive palmprint features are in the window. That is, the window must cover most of the prominent lines in the palm's inner area. Here are the rectangle's specifications:

1. The rectangle's sides are parallel to the x-axis or y-axis.
2. The rectangle is symmetric with respect to the y-axis.
3. The distance between the x-axis and the rectangle's nearest side is RefLength × 0.25, where RefLength is the distance between K1 and K3.
4. The rectangle's sides have the length of RefLength.

We can uniquely determine the rectangle's orientation, location, and size. The window is invariant to shift and rotation, which ensures that the system extracts the same part of the palm, no matter how the hand is positioned or oriented in the input image.

Subimage normalization. Now, we cut out a subimage containing the most significant palmprint lines. Generally, we need to rotate the image so that the rectangle is aligned with the normal coordinate system, with a horizontal x-axis and a vertical y-axis. Next comes a simple scaling operation to normalize the image to a standard size (we chose 160 × 160 pixels). Figure 2e shows the results.
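For the window-and-normalization step, the following sketch builds the coordinate system from K1, K2, and K3 and maps the cutting window to a 160 × 160 subimage in a single affine warp. The 0.25 · RefLength offset, the RefLength side length, and the 160 × 160 target come from the text; the OpenCV calls and the assumption that the fingers point up in the image (used to pick the y direction) are ours.

```python
import numpy as np
import cv2

def extract_palm_subimage(gray, k1, k2, k3, out_size=160):
    """Cut out and normalize the palm subimage defined by key points K1, K2, K3.

    The x-axis runs through K1 and K3; the y-axis is perpendicular to it and
    passes through K2. The cutting window is symmetric about the y-axis, starts
    RefLength * 0.25 from the x-axis, and has sides of length RefLength.
    Key points are (x, y) pixel coordinates.
    """
    k1, k2, k3 = (np.asarray(p, float) for p in (k1, k2, k3))
    ref_length = np.linalg.norm(k3 - k1)
    x_dir = (k3 - k1) / ref_length                    # unit vector along the x-axis
    y_dir = np.array([-x_dir[1], x_dir[0]])           # perpendicular unit vector
    if y_dir[1] < 0:                                  # assume fingers point up, so the
        y_dir = -y_dir                                # palm lies toward larger image y

    # Origin of the palm coordinate system: projection of K2 onto the K1-K3 line.
    origin = k1 + np.dot(k2 - k1, x_dir) * x_dir

    # Three corners of the cutting window (x in [-L/2, L/2], y in [0.25L, 1.25L]).
    top_left = origin - 0.5 * ref_length * x_dir + 0.25 * ref_length * y_dir
    top_right = origin + 0.5 * ref_length * x_dir + 0.25 * ref_length * y_dir
    bottom_left = top_left + ref_length * y_dir

    src = np.float32([top_left, top_right, bottom_left])
    dst = np.float32([[0, 0], [out_size - 1, 0], [0, out_size - 1]])
    warp = cv2.getAffineTransform(src, dst)           # rotation + scaling + shift
    return cv2.warpAffine(gray, warp, (out_size, out_size))
```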

Related Work in Palmprint Biometrics

In accordance with common usage, we define "palm" as the human hand's inner surface, extending from the base of the fingers to the wrist. A palmprint is the line pattern on a palm. The three most noticeable palmprint lines are the lifeline, head line, and heart line. Six attributes make the palmprint lines on one hand distinguishable from those on another: color, clarity, length, position, continuity, and variation in thickness.1 Researchers have reported that palmprint lines are stable features and that no two palms have the same line patterns.2,3 This makes palmprints a candidate for personal identification.

No standardized method exists for palmprint identification in a biometrics computing system. Nicolae Duta, Anil Jain, and Kanti Mardia proposed matching based on extracted feature points along the main palmprint lines and the associated line orientation.4 Their method simply subsamples those feature points from the extracted line pixels. Wageeh Boles and S.Y.T. Chu applied a Hough transform to detect straight lines of palmprints, using the detected lines' lengths and angles to match palms.5

Jun Chen, Changshui Zhang, and Gang Rong explored another method using palm line features.6 In this method, palmprint matching is based on creases, a kind of wrinkle on the palm's skin. Because creases are less optically identifiable, crease extraction requires scanning a piece of paper with an ink palmprint. This requirement limits this method's practical application.

These studies have shown positive results for using palmprints as biometric features for personal identification. Most of these projects are ongoing, and few commercial palmprint identification systems are available. To our knowledge, only one commercial system has been reportedly deployed.7 However, details on the techniques that the system employs haven't been disclosed, owing to proprietary concerns.8

REFERENCES

1. D. Warren-Davis, The Hand Reveals, Element Books, 1993.
2. D. Zhang, Automated Biometrics: Technologies and Systems, Kluwer Academic Publishers, 2000.
3. D. Zhang and W. Shu, "Two Novel Characteristics in Palmprint Verification: Datum Point Invariance and Line Feature Matching," Pattern Recognition, vol. 33, no. 4, 1999, pp. 691–702.
4. N. Duta, A.K. Jain, and K.V. Mardia, "Matching of Palmprints," Pattern Recognition Letters, vol. 23, no. 4, 2002, pp. 477–485.
5. W.W. Boles and S.Y.T. Chu, "Personal Identification Using Images of the Human Palm," Proc. IEEE TENCON '97: Speech and Image Technologies for Computing and Telecommunications, vol. 1, IEEE Press, 1997, pp. 295–298.
6. J. Chen, C. Zhang, and G. Rong, "Palmprint Recognition Using Crease," Proc. Int'l Conf. Image Processing, vol. 3, IEEE Press, 2001, pp. 234–237.
7. "Fujitsu Announces Global Launch of Its Contactless Palm Vein Authentication Technology," press release, Fujitsu Corp., 30 June 2005; www.fujitsu.com/global/news/pr/archives/month/2005/20050630-01.html.
8. L. Sherriff, "Japanese Banks Deploy Biometric Palm Scanners," The Register, 27 Aug. 2004; www.theregister.co.uk/2004/08/27/palm_biometrics.

Feature extraction
During feature extraction, the system extracts the palmprint's line patterns from the subimage. This stage involves four steps: image preprocessing, line detection, image thresholding, and line thinning.

Image preprocessing. Palmprint lines are subtle features easily affected by noise caused during image capture. Even a minor noisy pixel might break a palmprint line in the image. So, we apply image preprocessing to remove the noise and make line extraction more accurate. To do this, we use a 3 × 3 averaging mask, which smooths the image and minimizes the noise's impact.

Figure 3. Line detection using a standard Sobel edge detector: the palmprint subimage (left) and the result of line detection (right).

Line detection. We use suitable line-detection masks for line extraction. Any standard technique, such as Sobel or Canny edge detection, will serve. For this iteration of our implementation, we've applied the standard Sobel edge detector and thresholding on edge magnitude. This step produces an image such as in figure 3. Other standard edge detectors should give similar results.


Image thresholding. After line detection, each pixel's value represents the strength of its response to convolution of the image with the masks. We use image thresholding to convert the 8-bit image into a binary image, with white representing pixels on the lines and black otherwise.

The number of extracted lines depends on the threshold value. With a high threshold, only those pixels that respond strongly to line detection will remain, so the number of extracted lines will be small. With a low threshold, we can extract more lines, because the weaker ones are also included. Palmprint matching is based on comparing the line segments of two images. The same number of lines must appear in both images, or the matching result might not be accurate.

Images captured in different lighting environments have a different strength of response to the line detection masks. So, using a fixed threshold, or a percentage of the strongest response value as a threshold, doesn't guarantee that the system will extract the same number of lines from the different images. A better approach is to calculate the threshold value on the basis of a percentage of the image area. Using this approach, we calculate the threshold value so that the number of remaining pixels is always 5 percent of the number of image pixels. Because our system normalizes the subimage's size, the number of resulting foreground pixels after thresholding will always be the same.

Figure 4 shows two results of thresholding. The two sample images are the palmprint of the same person, captured under different lighting conditions. The number of extracted line pixels from both samples is identical.

Figure 4. Thresholding palmprint lines. The two sample images are the palmprint of the same person, captured under different lighting conditions. The number of extracted line pixels from both samples is identical.
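The area-based rule amounts to a percentile cut. A minimal sketch follows; for the 160 × 160 subimage, 5 percent is 1,280 pixels, though ties at the cutoff can shift the exact count slightly.

```python
import numpy as np

def threshold_by_area(response, keep_fraction=0.05):
    """Keep the strongest keep_fraction of response pixels, independent of lighting."""
    cutoff = np.quantile(response, 1.0 - keep_fraction)
    return response >= cutoff        # True marks line pixels (about 1,280 of 25,600)
```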
Line thinning. A morphological-thinning algorithm processes the binary image, as we described in the section "Boundary extraction and edge thinning." The resulting image contains lines of only a single pixel width. Then, we apply contour tracing and the Dynamic Two-Strip (DYN2S) algorithm5 to establish a set of straight line segments that approximate the extracted palmprint lines.

Figure 5 shows the results of thinning and straight-line approximation. The DYN2S algorithm generates a line edge map that's a more compact representation of the palmprint line segments. It saves both storage space and the computation time for feature matching.

Figure 5. The results of (a) thinning and (b) straight-line approximation.
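DYN2S is the authors' published curve-fitting algorithm,5 and we don't reproduce it here. Purely as a readily available stand-in that also turns a thinned binary image into a set of straight segments, the sketch below uses OpenCV's probabilistic Hough transform; the parameters are illustrative only and are not from the paper.

```python
import numpy as np
import cv2

def approximate_line_segments(thinned, min_length=10):
    """Approximate thinned palmprint lines with straight segments (stand-in for DYN2S).

    Returns an array of segments, one (x1, y1, x2, y2) row per segment.
    """
    lines = cv2.HoughLinesP(thinned.astype(np.uint8) * 255,
                            rho=1, theta=np.pi / 180, threshold=10,
                            minLineLength=min_length, maxLineGap=3)
    return np.empty((0, 4), int) if lines is None else lines[:, 0, :]
```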
Palmprint matching
The feature extraction stage passes the information of the extracted line segments to the palmprint-matching stage. This stage's main task is to measure the dissimilarity between the sets of line segments extracted from different images. To do this, we apply the line segment Hausdorff distance (LHD).6

Line segment Hausdorff distance. The original Hausdorff distance is a shape comparison metric based on binary images.7 To measure dissimilarity, the LHD also considers line orientation and line-point association. A number of applications, such as face recognition and logo recognition, employ the LHD.6,8 Unlike shape comparison methods that build a one-to-one correspondence between a model and a test image, the LHD can be calculated without explicit line correspondence, to deal with the broken lines due to segmentation errors.8

The LHD is based on the distance d(m, t) between two line segments m and t:

$$d(m, t) = \sqrt{W_a^2 \, d_\theta^2(m, t) + d_\parallel^2(m, t) + d_\perp^2(m, t)}$$

where dθ(m, t) is the angle distance, represented by the tangent function with respect to the smallest angle between m and t, and Wa is the predetermined weight of the angle distance. d∥(m, t) is the parallel displacement—that is, the minimum displacement to align either the left end points or the right end points of m and t (see figure 6). d⊥(m, t) is the perpendicular displacement—that is, the distance between the two lines. Before computing the distance, we rotate the shorter line around its center until the two lines become parallel.

Figure 6. The parallel-displacement measure: d∥(m, t) = min(l∥1, l∥2).

We define the directed LHD (hs) and the undirected LHD (Hs) between two sets of line segments M and T as

$$h_s(M, T) = \frac{1}{\sum_{m_i \in M} l_{m_i}} \sum_{m_i \in M} l_{m_i} \cdot \min_{t_j \in T} d(m_i, t_j)$$

$$H_s(M, T) = \max\bigl(h_s(M, T), \; h_s(T, M)\bigr)$$

where l_{m_i} is the length of line segment m_i. Hs represents the degree of dissimilarity between the two sets of line segments.
Hs represents the degree of dissimilarity if the global maximum intrapalm distance sample image was 380  285 pixels.
between the two sets of line segments. is larger than the minimum interpalm dis-
tance. In this case, verification won’t be Performance analysis
Decision making. LHD matching pro- 100 percent accurate. The system successfully detected the
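Both decision rules are only a few lines once an lhd() function like the sketch above is available. The database layout here, a dict mapping user IDs to each model's line segments, is just an assumption for illustration.

```python
def verify_with_threshold(test_segments, model_segments, threshold):
    """Method 1: accept if the LHD to the claimed user's model is below a fixed threshold."""
    return lhd(test_segments, model_segments) < threshold

def verify_by_nearest_model(test_segments, claimed_user_id, database):
    """Method 2: match against every registered model; accept only if the
    claimed identity owns the model with the smallest matching score."""
    best_match = min(database, key=lambda user_id: lhd(test_segments, database[user_id]))
    return best_match == claimed_user_id
```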
Experimental results
We implemented our prototype using Microsoft Visual C++ 6.0. To test the system's performance, we captured 90 palmprint images—three images each from 30 persons. We randomly selected one of the three images per person to set up the database, so the database contained 30 models. We used the remaining 60 images as testing images to match against the models. So, testing involved 1,800 matching operations. Each sample image was 380 × 285 pixels.

Performance analysis
The system successfully detected the key points for all 90 images and correctly cut out the subimages.


Figure 7. Test results for the palmprint-matching system: matching score versus test image ID, plotting the minimum interpalm distance and the intrapalm distance for each testing image.

It then compared the line segments extracted from the testing images with those of the models. Figure 7 shows the matching scores.

The system matched each testing image against the 30 models in the database, including one intrapalm-matching model and 29 interpalm-matching models. The upper curve in figure 7 shows the minimum interpalm matching score for each testing image; the lower curve shows the intrapalm matching scores. The upper curve's lowest value (8.91) is greater than the lower curve's highest value (8.22), which means that all intrapalm matching scores are lower than the interpalm matching scores. For the sample testing images, 100 percent accurate identification is possible if the threshold value is between the two boundary points. However, the distance between those points is small, and selecting a suitable threshold isn't easy in practice. So, in general, using a predetermined threshold value to make verification decisions might generate errors.

We don't report the accuracy of verification using a threshold value here because it depends on the selected value. The second verification method (described in the section "Decision making") produced 100 percent accurate identification because, for each testing image, the intrapalm distance was smaller than the corresponding minimum interpalm distance. The average interpalm distance was 10.1, which is approximately twice the average intrapalm distance (5.9). These results show that the LHD can effectively distinguish between interpalm matches and intrapalm matches.

Comparison with fingerprint technology
To gauge our approach's effectiveness, we performed additional tests to compare it with fingerprint authentication. We chose fingerprint recognition because it's the most mature and produces false-rejection (FR) and false-acceptance (FA) rates that compare favorably to those of techniques such as voice and face recognition.9

Our procedure essentially followed two previous approaches.9,10 First, we captured two palmprints each from 10 users, giving 20 different palmprint images. We considered the 20 images independently of each other. These images corresponded to 20 individuals attempting to access a computer system. So, we considered the 20 images to be 20 "registered users," each with his or her own computer account. Each registered user then attempted to access all 20 computer accounts in round-robin fashion. This entailed 400 tests, of which 20 should successfully gain access and the rest should be denied.

Using this procedure, we determined the FR and FA rates. Because a trade-off exists between FR and FA,1,11 we quote the ranges of their values as follows. In our tests, FR ranged from 0 to 3.5 percent and FA ranged from 0 to 0.5 percent. These values compare favorably against known data from fingerprint technology.9,11

As our experiments showed, the system works well on images with a uniform background. We can further extend this system to handle images with arbitrary backgrounds, because the algorithm for locating and aligning the palmprint is based on line detection instead of simple segmentation. This can make the system more robust and suitable for security applications with outdoor cameras.

REFERENCES

1. Y. Gao, S.C. Hui, and A.C.M. Fong, "A MultiView Facial Analysis Technique for Identity Authentication," IEEE Pervasive Computing, vol. 2, no. 1, 2003, pp. 38–45.
2. "Biometric Enrollment Errors a Problem, but Not Fatal," ID Newswire, 9 July 2003, pp. 1–3; www.itl.nist.gov/iad/Articles/7903IDNewswire.pdf.
3. M. Kothavale, R. Markworth, and P. Sandhu, "Computer Security SS3: Biometric Authentication," 2004; www.cs.bham.ac.uk/~mdr/teaching/modules03/security/students/SS3/handout.
4. R.C. Gonzalez and R.E. Woods, Digital Image Processing, 2nd ed., Prentice-Hall, 2002.
5. M.K. Leung and Y.H. Yang, "Dynamic Two-Strip Algorithm in Curve Fitting," Pattern Recognition, vol. 23, nos. 1–2, 1990, pp. 69–79.
6. Y. Gao and M.K. Leung, "Face Recognition Using Line Edge Map," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 6, 2002, pp. 764–779.
7. W.J. Rucklidge, "Efficiently Locating Objects Using the Hausdorff Distance," Int'l J. Computer Vision, vol. 24, no. 3, 1997, pp. 251–270.
8. C. Du, G. Su, and X. Lin, "Face Recognition Using a Modified Line Segment Hausdorff Distance," Proc. 2003 Int'l Conf. Machine Learning and Cybernetics, vol. 5, IEEE Press, 2003, pp. 3016–3021.
9. P.J. Phillips et al., "An Introduction to Evaluating Biometric Systems," Computer, vol. 33, no. 2, 2000, pp. 56–63.
10. C.L. Wilson and R.M. McCabe, Simple Test Procedure for Image-Based Biometric Verification Systems, tech. report NISTIR 6336, Nat'l Inst. Standards and Technology, 1999.
11. V. Matyas Jr. and Z. Riha, "Toward Reliable User Authentication through Biometrics," IEEE Security & Privacy, vol. 1, no. 3, 2003, pp. 45–49.

the AUTHORS

Maylor K.H. Leung is an associate professor in Nanyang Technological University's School of Computer Engineering. His research interests include computer vision, pattern recognition, and image processing, with current focuses on video surveillance, face recognition, and shape indexing. He received his PhD in computer science from the University of Saskatchewan. Contact him at Blk. N4, #2A-32, School of Computer Eng., Nanyang Technological Univ., Singapore 639798; asmkleung@ntu.edu.sg.

A.C.M. Fong is an associate professor in computing systems at Nanyang Technological University's School of Computer Engineering. His research interests include the Internet and multimedia, as well as digital communications and software engineering. He received his PhD in electrical and electronic engineering from the University of Auckland. Contact him at Blk. N4, #2A-32, School of Computer Eng., Nanyang Technological Univ., Singapore 639798; acmfong@gmail.com.

Siu Cheung Hui is an associate professor in the Division of Information Systems at Nanyang Technological University's School of Computer Engineering. His research interests include data mining, Web mining, the Semantic Web, intelligent systems, information retrieval, intelligent tutoring systems, and timetabling and scheduling. He received his D.Phil. in computer science from the University of Sussex. Contact him at Blk. N4, #2A-32, School of Computer Eng., Nanyang Technological Univ., Singapore 639798; asschui@ntu.edu.sg.
