Abstract— In this paper, an efficient feature extraction method called Orthogonal Weighted Locally Linear Discriminant Embedding (OWLLDE) is proposed for face recognition. The OWLLDE algorithm is motivated by the locally linear embedding (LLE) algorithm, the modified maximizing margin criterion (MMMC), and the cam weighted distance. In OWLLDE, the LLE algorithm is modified based on the cam weighted distance measurement to select more suitable neighbors for each data point. In this way, the feature extraction performance of OWLLDE is improved for deformed distributed data. Moreover, OWLLDE preserves the local geometric structure of the data based on the modified LLE, and it also makes full use of class information to improve discriminant ability through a vector translation and rescaling model. Finally, to improve recognition accuracy, we use Gram–Schmidt orthogonalization to obtain orthogonal basis vectors. The results of experiments on the ORL and YALE databases show the superior performance of OWLLDE.

Keywords— cam weighted distance; feature extraction; locally linear discriminant embedding; manifold learning.

I. INTRODUCTION

In recent decades, face recognition has received a lot of attention in various applications such as video coding, human-computer interfaces, and surveillance [1], [2]. Appearance-based face recognition has been studied widely since the 1990s.

Two central issues in appearance-based face recognition are feature extraction for face representation and image classification. In appearance-based techniques, an m-by-n pixel face image can be viewed as an mn-dimensional vector. This approach encounters problems when only a small number of high-dimensional samples are available. Feature extraction methods can address this problem by reducing the dimensionality. The aim of feature extraction methods is to project the high-dimensional data into a low-dimensional feature space. These methods can be classified into two categories based on whether or not they use class information: supervised and unsupervised methods. They can also be divided into linear and nonlinear methods. Linear methods obtain the low-dimensional space from the high-dimensional one using a linear transformation. The most well-known linear methods are Principal Component Analysis (PCA) [3] and Linear Discriminant Analysis (LDA) [4].

Linear methods fail in cases where the data have a nonlinear distribution. To overcome this problem, nonlinear methods such as kernel-based and manifold-based methods were introduced for feature extraction. In kernel-based methods, the data are first mapped into a higher-dimensional space, and linear methods are then used for dimension reduction in this new space. Kernel PCA (KPCA) [5] and kernel LDA (KLDA) [6] are two well-known methods that can be considered kernel versions of PCA and LDA.

Unlike kernel-based methods, manifold learning-based methods are based on the idea that the data points are actually samples from a low-dimensional manifold embedded in a high-dimensional space. Isometric feature mapping (ISOMAP) [7], Locally Linear Embedding (LLE) [8], [9], Laplacian Eigenmaps (LE) [10], [11], and Local Tangent Space Alignment (LTSA) [12] can be counted among the most well-known manifold-based methods.

LLE is designed to maintain the local linear reconstruction relationships among neighboring points in the low-dimensional space. However, LLE has limitations that make it unsuitable for face recognition as it stands. It only produces an embedding of the training data points and does not provide a mapping for new data points outside the training set, which is the well-known out-of-sample problem. Another limitation is that LLE is an unsupervised algorithm, and this characteristic can impair recognition accuracy. Moreover, the LLE algorithm depends on a distance measure, so its performance relies on the choice of an appropriate measure. To address this problem, the cam weighted distance was proposed in [13] to improve nearest neighbor finding. The cam weighted distance produces deflective, cam-shaped equal-distance contours in classification, as shown in [13, Fig. 1]. Since the samples are not isolated instances, the inter-prototype relationships should not be neglected. Consequently, to improve the distance measure globally, we should consider, for each sample, both its variance along its own orientation and its discrimination with respect to its surroundings.

Bo Li et al. proposed locally linear discriminant embedding
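As an aside, the local reconstruction step of LLE discussed in the introduction can be sketched in a few lines. This is an illustrative NumPy version of the standard constrained least-squares weight solve from [8], not code from the paper; the regularization term `reg` is an assumed stabilizer for the local Gram matrix.

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Reconstruction weights of point x from its k nearest neighbors.

    Solves min ||x - sum_j w_j n_j||^2 subject to sum_j w_j = 1,
    using the regularized local Gram matrix (Roweis & Saul, 2000).
    """
    Z = neighbors - x                          # shift neighbors so x is the origin
    G = Z @ Z.T                                # local Gram matrix (k x k)
    G += reg * np.trace(G) * np.eye(len(G))    # regularize for numerical stability
    w = np.linalg.solve(G, np.ones(len(G)))    # unnormalized solution of G w = 1
    return w / w.sum()                         # enforce the sum-to-one constraint

# toy check: a point that is the midpoint of its two neighbors
x = np.array([1.0, 1.0])
N = np.array([[0.0, 0.0], [2.0, 2.0]])
w = lle_weights(x, N)
print(w)  # approximately [0.5, 0.5]
```

By symmetry, the midpoint is reconstructed with equal weights, and `w @ N` recovers `x` exactly; in LLE proper these weights are then held fixed while the low-dimensional coordinates are found.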
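The Gram–Schmidt orthogonalization step that OWLLDE applies to its basis vectors can be sketched as follows. This is an illustrative NumPy version of the textbook (modified) Gram–Schmidt procedure, not the authors' implementation; in OWLLDE it would be applied to the columns of the learned projection matrix.

```python
import numpy as np

def gram_schmidt(V, tol=1e-10):
    """Orthonormalize the columns of V via modified Gram-Schmidt.

    Returns a matrix whose columns are orthonormal and span the same
    subspace as the (linearly independent) columns of V.
    """
    Q = []
    for v in V.T:
        u = v.astype(float).copy()
        for q in Q:                  # remove components along earlier directions
            u -= (q @ u) * q
        norm = np.linalg.norm(u)
        if norm > tol:               # drop (near-)linearly-dependent columns
            Q.append(u / norm)
    return np.column_stack(Q)

V = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [0.0, 1.0]])
Q = gram_schmidt(V)
print(np.round(Q.T @ Q, 6))  # identity matrix: columns are orthonormal
```

The modified variant (projecting the running residual `u` rather than the original column) is used here because it is numerically more stable than classical Gram–Schmidt.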
From the experimental results, we can see that OWLLDE achieves the highest recognition rate compared with the existing methods.

C. Recognition Rate with Different Numbers of Training Samples

In this section, we investigate the effect of the number of training samples on the recognition rates obtained by the different methods. First, 3, 4, or 5 images of each person from the ORL face database are randomly chosen to form the ORL training set. From the Yale face database, we randomly selected 3, 4, 5, or 6 images of each person for training. The rest of each database is used as the test set. We repeated the experiments 20 times and calculated the maximal average recognition rates, the corresponding dimensions, and the standard deviations for each method. The experimental results on the ORL and Yale databases are shown in Tables II and III, respectively.

As can be seen, the performance of all methods improves significantly as the number of training samples increases. It is also easy to see that our proposed algorithm outperforms the other existing methods.

TABLE II. THE MAXIMAL AVERAGE RECOGNITION RATES (%) AND THE CORRESPONDING STANDARD DEVIATIONS WITH THE REDUCED DIMENSIONS ON THE ORL DATABASE.

Method    3 Train            4 Train             5 Train
PCA       84.71±2.15 (80)    89.5±2.0052 (115)   92.7±1.7 (152)
LLE       81.54±3.13 (69)    85.69±5.35 (105)    91.7±2.36 (117)
LPP       67.84±2.23 (79)    72.71±3.21 (92)     77.5±2.76 (110)
NPE       59.77±5.28 (67)    60.79±5.58 (120)    64.37±5.08 (160)
LDE       75.07±5.62 (54)    87.87±3.05 (57)     92.35±1.6 (66)
LLDE      83.7±2.65 (40)     88.31±2.15 (40)     90.47±1.47 (39)
OWLLDE    90.7±1.56 (52)     93.1±1.75 (56)      94.82±1.33 (52)

VI. CONCLUSIONS

In this paper, we presented a new dimensionality reduction method for face recognition called OWLLDE, which optimizes the distance measure to find more suitable neighbors, especially for deformed distributed data. It uses Gram–Schmidt orthogonalization to obtain orthogonal basis vectors. As a result, it improves the performance of dimension reduction, especially for deformed distributed data. Experimental results on the ORL and Yale face databases demonstrate the effectiveness of the proposed method.

REFERENCES

[1] X. Chen and J. Zhang, "A novel maximum margin neighborhood preserving embedding for face recognition," Future Generation Computer Systems, vol. 28, no. 1, pp. 212-217, Jan. 2012.
[2] B. Li, C. H. Zheng, and D. S. Huang, "Locally linear discriminant embedding: An efficient method for face recognition," Pattern Recognition, vol. 41, no. 12, pp. 3813-3821, Dec. 2008.
[3] M. A. Turk and A. P. Pentland, "Face recognition using eigenfaces," in Proc. IEEE Computer Vision and Pattern Recognition Conf., 1991, pp. 586-591.
[4] A. M. Martinez and A. C. Kak, "PCA versus LDA," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 228-233, Feb. 2001.
[5] K. I. Kim, K. Jung, and H. J. Kim, "Face recognition using kernel principal component analysis," IEEE Signal Processing Letters, vol. 9, no. 2, pp. 40-42, 2002.
[6] M. H. Yang, "Kernel eigenfaces vs. kernel Fisherfaces: Face recognition using kernel methods," in Proc. IEEE Int. Conf. Automatic Face and Gesture Recognition, 2002, pp. 215-220.
[7] J. B. Tenenbaum, V. De Silva, and J. C. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, no. 5500, pp. 2319-2323, 2000.
[8] S. T. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323-2326, 2000.
[9] L. K. Saul and S. T. Roweis, "Think globally, fit locally: Unsupervised learning of low dimensional manifolds," Journal of Machine Learning Research, vol. 4, pp. 119-155, 2003.
[10] M. Belkin and P. Niyogi, "Laplacian eigenmaps and spectral techniques for embedding and clustering," Advances in Neural Information Processing Systems, vol. 14, pp. 585-591, 2001.
[11] M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Computation, vol. 15, no. 6, pp. 1373-1396, 2003.
[12] Z. Zhang and H. Zha, "Principal manifolds and nonlinear dimensionality reduction via tangent space alignment," SIAM J. Sci. Comput., vol. 26, pp. 313-338, 2004.
[13] C. Y. Zhou and Y. Q. Chen, "Improving nearest neighbor classification with cam weighted distance," Pattern Recognition, vol. 39, no. 4, pp. 635-645, Apr. 2006.
[14] H. Li, T. Jiang, and K. Zhang, "Efficient and robust feature extraction by maximum margin criterion," IEEE Trans. Neural Networks, vol. 17, no. 1, pp. 157-165, 2006.
[15] W. Zheng, C. Zou, and L. Zhao, "Weighted maximum margin discriminant analysis with kernels," Neurocomputing, vol. 67, pp. 357-362, Aug. 2005.
[16] Y. Pan, S. S. Ge, and A. Al Mamun, "Weighted locally linear embedding for dimension reduction," Pattern Recognition, vol. 42, no. 5, pp. 798-811, May 2009.
[17] P. Niyogi, "Locality preserving projections," Advances in Neural Information Processing Systems, vol. 16, pp. 153-160, 2004.
[18] X. He, S. Yan, Y. Hu, P. Niyogi, and H. J. Zhang, "Face recognition using Laplacianfaces," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 328-340, 2005.
[19] X. He, D. Cai, S. Yan, and H. J. Zhang, "Neighborhood preserving embedding," in Proc. IEEE Conf. Computer Vision, Beijing, 2005, pp. 1208-1213.
[20] H. T. Chen, H. W. Chang, and T. L. Liu, "Local discriminant embedding and its variants," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2005, pp. 846-853.
[21] The ORL Face Database, Cambridge, U.K.: AT&T (Olivetti) Research Laboratories. [Online]. Available: http://www.uk.research.att.com/facedatabase.html
[22] Yale Univ. Face Database, http://cvc.yale.edu/projects/yalefaces/yalefaces.html, 2002.