
Performance Improvement for 2-D Face Recognition

Using Multi-Classifier and BPN


Mohannad Abuzneid and Ausif Mahmood
Department of Computer Science and Engineering, University of Bridgeport, USA
Mohannad.abuzneid@gmail.com, Mahmood@bridgeport.edu

Abstract— Human face recognition and detection has become a very interesting field for researchers, and this interest is motivated by the huge demand for extensive applications such as real-time surveillance systems and static matching systems like DMV licenses, port authorities, and banking systems. Image processing, neural networks, and computer vision are among the most active research areas, and many papers and approaches have been introduced in the past decades. It is difficult to design an efficient computational system for accurate human face recognition because of the complex visual appearance of the human face, which changes dramatically under varying conditions. In this paper, we propose a new framework for 2D face recognition using five different distance algorithms and a Backpropagation Neural Network (BPN) to improve the conventional Principal Component Analysis (PCA) approach. The performance of the new framework is compared with the performance of the K Nearest-Neighbor (KNN) classifier using five different distance algorithms; we then combine them using the least-square-root algorithm to achieve higher accuracy. Our experimental results on the AT&T (formerly ORL) and Yale face databases show that the proposed method improves the overall performance of the face recognition system with respect to existing approaches. The results show that high accuracy can be achieved using 15% of the image features, which increases the recognition performance rate at a lower computational cost than existing methods.

Index Terms— Principal component analysis (PCA); K Nearest-Neighbor (KNN); Backpropagation neural network (BPN).

I. INTRODUCTION

The challenge in 2D face recognition and detection is due to the fact that human face images are variable. The reasons for this variability include individual appearance, facial expression, pose variation, and poor illumination, as shown in Fig. 1 [1]. The lighting variability includes intensity, number of light sources, direction, and camera location, as shown in Fig. 2. The variations between face images of the same person due to pose and illumination are usually larger than the variations due to changes in face identity [2]. This variability makes 2D face recognition an interesting and challenging problem, and designing high-performance algorithms for automatic face recognition systems is a demanding task for real-time computer vision and pattern recognition applications.

Fig. 1. Same person with pose variation and facial expression.

Fig. 2. The same person can look like a different person under variant lighting environments. In the left image, the light source is in front of the head; in the right image, the light source is from the left side.

A human face recognition system requires a huge amount of similarity-computation time when powerful classification methods are used. Reducing the image dimension improves the recognition and classification processing time. One of the popular approaches used for data re-representation and feature extraction is PCA, also known as the Karhunen-Loeve expansion [3]. Most of the variation in the image data is retained after reducing the dimensionality of the image, providing a solid representation of the human face. The concept of PCA is to translate the human face into a smaller set of feature data while keeping the variation in the image data; these features are called Eigen-Faces, and they are the principal components of the initial training set of face images. In the recognition testing process, an unknown face image is projected into the reduced-dimension face space obtained from the Eigen-Faces and then classified by distance classifiers or a statistical method.

Most face recognition methodologies have three phases, as shown in Fig. 3: (1) Preprocessing, which consists of face normalization, scaling, resizing, detection, elimination of background noise, and exclusion of parts of the face that may affect the recognition rate. (2) Feature extraction and dimension reduction, by applying powerful transformation methods; one of these statistical transformations is PCA [3] [4]. PCA represents face images as a subset of their eigenvectors, called Eigen-Faces. Many methods have been proposed in this field, such as Diagonal PCA [5] and Curvelet-based PCA [6]. Yang et al. [7] proposed Kernel PCA and Kernel FLD for face recognition, which they called the Kernel Fisher-face and Kernel Eigen-face methods.
The modular PCA [8] approach achieved the high accuracy of PCA even under extreme changes in pose, illumination, and expression.

Bartlett et al. [9] and Draper et al. [10] introduced a generalization of PCA called Independent Component Analysis (ICA), assuming that a better basis for face images may be found by methods that are sensitive to high-order statistics. On the other hand, Moghaddam [11] showed that the ICA-based approach does not provide a significant advantage over the PCA-based method. Yang [12] applied Kernel PCA for face feature extraction and recognition and showed that the Kernel PCA method outperforms classical PCA. However, Kernel PCA and ICA are both computationally more expensive than PCA.

Yang et al. [13] introduced a new approach, two-dimensional principal component analysis (2DPCA), for image feature extraction and representation, which has numerous advantages over conventional PCA. It is more straightforward than PCA for face image extraction because 2DPCA operates directly on the image matrix. According to Yang's method, 2DPCA is better than conventional PCA in terms of recognition accuracy, is computationally more efficient, and can significantly improve the processing time of image feature extraction. On the other hand, conventional PCA-based image representation is more efficient than 2DPCA-based representation in terms of storage space, because 2DPCA needs more coefficients for face representation.

Lu et al. [14] introduced a new feature extraction method for face recognition using Linear Discriminant Analysis (LDA). Numerous kernel-based classifier methods [15]-[18] have been developed to handle nonlinear problems. The new method safely eliminates the null space of the between-class scatter matrix by utilizing a variant of D-LDA, and improves the discriminatory power of the obtained D-LDA feature space by applying a partial-step LDA scheme. Many researchers claim an advantage of LDA over PCA, but Martinez and Kak [19] illustrate why PCA performs better than LDA when the training dataset is non-uniformly sampled or the number of samples per person is small. (3) Classification, by employing powerful classifiers such as neural networks [23], [24], support vector machines (SVM) [26], the Euclidean distance classifier [21], the Mahalanobis distance classifier [22], Hidden Markov Models (HMMs) [25], and the extreme learning machine (ELM) [27].

In our proposed work, we apply a few preprocessing algorithms to reduce the computational processing time. For feature extraction and face representation we use conventional PCA, considering only the highest 30 eigenvalues and their corresponding eigenvectors. We use five classification methods: the Correlation, Euclidean, Canberra, Manhattan, and Mahalanobis distance classifiers. A backpropagation neural network is used to train the face recognition system. The proposed method provides higher accuracy than existing methods, and we applied it to two benchmark databases, the ORL database and the Yale database.

Fig. 3. Face recognition systems methodology.

This paper is organized as follows. Section II gives an overview of conventional PCA, the classification methods, and the backpropagation neural network. The details of the proposed framework are discussed in Section III. Experimental results are discussed in Section IV. Section V presents the conclusion of this paper and future work.

II. CONVENTIONAL PCA, CLASSIFICATION METHODS AND BACKPROPAGATION NEURAL NETWORK

A. Principal Component Analysis

It is typical to perform feature reduction and face representation, since correlation methods require huge amounts of storage and expensive computation time. PCA is a holistic statistical transform: it maps the face data to a new coordinate system by projection, so that the greatest variance of the face data lies along the first coordinate, the second greatest variance along the next coordinate, and the lowest variance along the last coordinate. We reduce the face data dimension by applying the PCA method, which finds the principal components of the training face database and then uses the K highest eigenvalues and their corresponding eigenvectors to represent each face image in a lower-dimensional face space with fewer points [28]. Because the PCA method relies on the covariance matrix, the dimension of the face images matters less than the number of face images in the training dataset. PCA extracts the new data set efficiently based on the related information between the face images and reduces the description dimension by projecting the data onto the axes of the principal components, where these orthonormal axes lie along the directions of maximum covariance.

The first step in PCA is to obtain same-size face images I1, I2, …, IM and convert each face to a vector. The training set of M faces can be written as I = (I1, I2, …, IM), and the average image A is found by

A = (1/M) Σ_{n=1}^{M} I_n   (1)
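The eigenface construction used in this subsection can be illustrated with a short NumPy sketch. This is a minimal illustration under our own assumptions, not the authors' implementation: the function name and the random stand-in data are ours, and it uses the standard small M×M covariance trick (eigenvectors of the small matrix are mapped back to image space) so the eigendecomposition stays cheap:

```python
import numpy as np

def eigenfaces(images, k):
    """Compute the top-k eigenfaces from a stack of flattened face images.

    images: (M, P) array, one flattened face per row.
    Returns the mean face (P,) and the k leading eigenfaces (k, P).
    """
    M, P = images.shape
    mean_face = images.mean(axis=0)               # average image A, eq. (1)
    phi = images - mean_face                      # centered difference vectors
    # Covariance trick: eigenvectors of the small (M, M) matrix phi @ phi.T
    # map back to eigenvectors of the full (P, P) covariance matrix.
    small_cov = phi @ phi.T / M
    eigvals, eigvecs = np.linalg.eigh(small_cov)  # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]         # keep the k largest
    faces = phi.T @ eigvecs[:, order]             # back to image space, (P, k)
    faces /= np.linalg.norm(faces, axis=0)        # unit-length eigenfaces
    return mean_face, faces.T

# Example: 20 random 32x32 "faces", keep the top 5 eigenfaces.
rng = np.random.default_rng(0)
mean_face, faces = eigenfaces(rng.random((20, 1024)), k=5)
print(faces.shape)  # (5, 1024)
```

Projecting a test image is then just `weights = faces @ (test_vec - mean_face)`, giving the low-dimensional feature vector that the distance classifiers compare.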
The next step is to center each face image by finding the difference vector Φ_n = I_n − A. The covariance matrix C is then obtained from the difference images using:

C = (1/M) Σ_{n=1}^{M} Φ_n Φ_nᵀ = (1/M) Q Qᵀ   (2)

where Q is the matrix of difference vectors:

Q = [Φ_1 Φ_2 … Φ_M]   (3)

The eigenvalues λ_i and their corresponding eigenvectors v_i of the covariance matrix are computed; they are related by:

C v_i = λ_i v_i   (4)

We sort the eigenvalues and their corresponding eigenvectors and consider only the top K of them to reduce the face domain. Approximately 90% of the total variance is contained in the first 15% of the face space, which means that the eigenvalues decrease exponentially. PCA produces, for each original face image, a new image that appears as a ghostly face, called an Eigen-Face, as shown in Fig. 4.

Fig. 4. Original images and the corresponding Eigen-Faces.

B. Backpropagation neural network

The backpropagation neural network is one of the most commonly used neural networks, since the BPN offers great generalization ability and is straightforward to implement, even though it may be harder to determine the best network weights and configuration. The BPN used here consists of: 1) a training input layer, 2) one or more hidden layers, and 3) an expected output layer [1], as shown in Fig. 5.

Fig. 5. Schematic diagram of a three-layer backpropagation (BP) neural network.

The common backpropagation algorithm can be described as:

1. Randomly initialize the weights w^[l] and the thresholds b^[l]; let n = 1.
2. Feed the input training image pattern and the corresponding target output pattern through the network, then calculate the output of all layers according to:

   y_j^[l] = f( Σ_i w_ij^[l] y_i^[l−1] + b_j^[l] )   (9)

3. Compute the error term in each layer as follows. In the output layer:

   δ^[L] = f′(net^[L]) (d − y^[L])   (10)

   In the l-th hidden layer (l = L−1, L−2, …, 1):

   δ^[l] = f′(net^[l]) Σ δ^[l+1] w^[l+1]   (11)

4. Update the thresholds and weights by applying the following formulas:

   b^[l](n+1) = b^[l](n) + η · δ^[l]   (12)

   w^[l](n+1) = w^[l](n) + η · δ^[l] · y^[l−1]   (13)

5. Calculate the mean-squared error; if the value is less than the threshold, stop and output the weights, else go back to step 2.

The sigmoidal function defined in (14) is used as the neuron activation function in the backpropagation algorithm:

s(x) = 1 / (1 + e^(−x))   (14)

Its derivative is:

s′(x) = s(x)(1 − s(x))   (15)

The mean-squared error can reach its minimum by finding suitable weights using the sigmoidal activation function; for a small neural network this means that all input image patterns are correctly classified, but in general this will not be true for larger networks.
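Steps 1 through 5 above, with the sigmoid of (14) and its derivative (15), can be sketched as a small NumPy training loop. This is an illustrative single-hidden-layer version, not the paper's C# implementation; the layer sizes, the XOR toy data, and the learning rate are example choices of ours:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))          # eq. (14)

def train_bpn(X, T, hidden=8, eta=0.3, epochs=500, seed=0):
    """Minimal one-hidden-layer backpropagation loop (steps 1-5)."""
    rng = np.random.default_rng(seed)
    # Step 1: random weights and thresholds (biases).
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, T.shape[1])); b2 = np.zeros(T.shape[1])
    for _ in range(epochs):
        # Step 2: forward pass, eq. (9).
        h = sigmoid(X @ W1 + b1)
        y = sigmoid(h @ W2 + b2)
        # Step 3: output-layer error, eq. (10), with s'(x) = s(x)(1 - s(x)), eq. (15).
        d2 = y * (1 - y) * (T - y)
        # Hidden-layer error, eq. (11).
        d1 = h * (1 - h) * (d2 @ W2.T)
        # Step 4: threshold and weight updates, eqs. (12)-(13).
        W2 += eta * h.T @ d2; b2 += eta * d2.sum(axis=0)
        W1 += eta * X.T @ d1; b1 += eta * d1.sum(axis=0)
        # Step 5: stop when the mean-squared error is small enough.
        if np.mean((T - y) ** 2) < 1e-3:
            break
    return W1, b1, W2, b2

# Toy example: learn XOR, a pattern a single-layer network cannot separate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1, W2, b2 = train_bpn(X, T, epochs=20000)
y = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print("final MSE:", np.mean((T - y) ** 2))
```

Batch gradient descent on the mean-squared error is used here for brevity; the pattern-by-pattern update described in step 2 differs only in feeding one image at a time.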
Therefore, J. Toms proposed in 1990 an improved backpropagation algorithm using another activation function, called the hybrid neuron, which is defined as:

g(x) = λ · s(x) + (1 − λ) · h(x)   (16)

where s(x) is the sigmoidal function (14) and h(x) is the hard-limiting function, defined as:

h(x) = 1 if x ≥ 0, and h(x) = 0 if x < 0   (17)

and 0 ≤ λ ≤ 1. When λ = 0 the neuron becomes a discrete hard-limiting one, and when λ = 1 the neuron is a purely analog sigmoidal one. For intermediate values of λ, the BPN is a hybrid, with a hidden-unit activation function that is differentiable everywhere except at x = 0:

g′(x) = λ · s′(x),  x ≠ 0   (18)

Even with J. Toms' simple modification, the BPN often gets stuck in a local minimum, and since λ is linearly decreased to zero, the learning speed is also affected by updating this coefficient according to:

λ(n) = SSE / …   (19)

where SSE is the Sum-Squared-Error. The second method is to use the following sigmoidal neuron activation function with sloping factor α in the hidden layers:

s(x) = 1 / (1 + e^(−αx))   (20)

Its derivative is:

s′(x) = α · s(x)(1 − s(x))   (21)

The coefficient α is related to the steepness of the sigmoidal function. These modifications make the BPN fast to converge and easy to implement, and the final training error is essentially zero.

III. PROPOSED FRAMEWORK

In this paper, we present and discuss face recognition techniques using two different classification approaches. We use conventional PCA as the feature extraction method with five different distance measurement methods to achieve an extraordinary performance for the face recognition system. We use conventional PCA to extract the features by reducing the face space using the top K eigenvectors (K is approximately 15%) corresponding to the K highest eigenvalues. The two approaches used in this paper are the K Nearest-Neighbor (KNN) classifier and the Backpropagation neural network (BPN) classifier. We apply five different distance measurement methods and then combine them to show the improvement from each method. Different scenarios are proposed in this paper to compare our results with other proposed systems; Fig. 9 and Fig. 12 show the overall view of the proposed face recognition system.

First, we start by preprocessing the input images using different methods where necessary. This includes cropping the face, resizing the image, translating the image to grayscale, histogram equalization, and finally normalizing the image between 0 and 255 to decrease the effect of the image background, noise, and local illumination, which allows us to perform robust face recognition even under severe illumination variations. Fig. 6 shows some of the preprocessing methods.

Fig. 6. Example of the adopted preprocessing steps in the proposed work.

Then, conventional PCA is applied as a facial representation on the preprocessed images to obtain the Eigen-Faces database, using the highest 30 eigenvalues with their corresponding eigenvectors. Finally, we use the five different distance measurement methods and feed their outputs to the classifiers to test the system and demonstrate the improvement under different experimental scenarios.

IV. EXPERIMENTS

In order to compare our results with other proposed face recognition systems, we applied our framework on two databases, the Yale database and the ORL database, using the experimental scenarios of the other proposed systems. We randomly divided each face database into two sets: one set for training and the second for testing. Finally, we calculated the recognition rate (RR) after the system completed training and was tested on the testing image set, using:

RR (%) = (number of correctly recognized test images / total number of test images) × 100   (22)

The simulation environment is Microsoft Visual Studio C#, running on a personal laptop with an Intel Core i7 CPU @ 3 GHz.

A. Human Face Databases

1) Olivetti Research Laboratory Human Face Database (ORL) [30]: The ORL database has a total of 400 face images belonging to 40 persons, both female and male, with 10 different images for each of them. The images are in grayscale with size 92x112 pixels. Fig. 7 shows a sample set for one person from the ORL database.

2) Yale Face Database [31]: The Yale face database contains images of 15 people, male and female.
Fig. 7. The ORL database covers 40 persons and contains 400 face images taken under 10 different conditions.

Fig. 8. Sample images of the Yale Face Database (11 poses per person).

There are 165 images in total, divided into 11 images per person. All these images are also in the grayscale domain, with size 92x112 pixels after we cropped and resized them. Fig. 8 shows sample Yale images.

B. Experiments Using the KNN Classifier

Our face recognition system using the KNN classifier is shown in Fig. 9. In this experiment we applied two different scenarios on the Yale database and the ORL database. First, an experiment was performed leaving one image per person out for testing (15 test images for Yale and 40 for ORL); then, as a second scenario, we used five random face image samples per person for training the system (leaving 200 test images in ORL and 90 in Yale). In both scenarios we start with dimensionality reduction using PCA, then we find the Eigen-Faces used to evaluate the testing images. We considered only the top 30 eigenvectors to reduce the computation time. Then we used different dissimilarity measures to measure the similarity between the test images and the Eigen-Faces. The KNN classifier provides the best K matches, and we take the lowest distance as the best match. We applied this scenario with five different distance methods, the Correlation, Euclidean, Canberra, Manhattan, and Mahalanobis distances, and then combined them with the square root of the sum of squares, equation (23), to achieve a higher recognition rate:

D = √( Σ_{i=1}^{5} d_i² )   (23)

where d_i are the five individual distances.

Fig. 9. Face recognition system using the KNN classifier methods.

We consider the face recognition successful only when the first (lowest-distance) match corresponds to the test image, as shown in Fig. 10, and we consider the recognition failed even if the second-lowest distance matches the test image, as shown in Fig. 11.

Fig. 10. Example of a matched image using KNN with Mahalanobis distance. (a) Test image (b) first lowest distance (c) second lowest distance (d) third lowest distance.
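As an illustration of the five dissimilarity measures and the combination rule (23), the following NumPy sketch can be used. The function names are ours, the inputs are assumed to be float eigenface-space feature vectors, and the Mahalanobis form shown uses a diagonal-variance simplification (a common shortcut in eigenspace) rather than a full covariance inverse:

```python
import numpy as np

def correlation_dist(a, b):
    # 1 - Pearson correlation; assumes neither vector is constant.
    return 1.0 - np.corrcoef(a, b)[0, 1]

def euclidean_dist(a, b):
    return np.sqrt(np.sum((a - b) ** 2))

def canberra_dist(a, b):
    diff, denom = np.abs(a - b), np.abs(a) + np.abs(b)
    # Terms with a zero denominator contribute zero by convention.
    return np.sum(np.divide(diff, denom, out=np.zeros_like(diff), where=denom > 0))

def manhattan_dist(a, b):
    return np.sum(np.abs(a - b))

def mahalanobis_dist(a, b, var):
    # Diagonal approximation: weight each component by its variance.
    return np.sqrt(np.sum((a - b) ** 2 / var))

def combined_dist(a, b, var):
    # Square root of the sum of squares of the five distances, eq. (23).
    d = np.array([
        correlation_dist(a, b), euclidean_dist(a, b), canberra_dist(a, b),
        manhattan_dist(a, b), mahalanobis_dist(a, b, var),
    ])
    return np.sqrt(np.sum(d ** 2))

def nearest_neighbor(test_vec, gallery, var):
    # 1-NN: the gallery index with the lowest combined distance wins.
    scores = [combined_dist(test_vec, g, var) for g in gallery]
    return int(np.argmin(scores))
```

For example, `nearest_neighbor(test_vec, gallery, var)` returns the row of `gallery` whose combined distance to `test_vec` is smallest, which is the "first lowest distance" match described above.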
Fig. 11. Example of a mismatched image using KNN with Mahalanobis distance. (a) Test image (b) first lowest distance (c) second lowest distance (d) third lowest distance.

C. Experiments Using the BPN Classifier

The same two scenarios used in the KNN experiment were applied again using the BPN classifier. The experiment was performed by leaving one image per person out for testing (15 test images for Yale and 40 for ORL); we then used five random image samples per person for training the system (leaving 200 test images for ORL and 90 for Yale). Fig. 12 shows the proposed face recognition system, which is divided into three subsystems.

Fig. 12. Proposed recognition system using the BPN classifier method.

The first subsystem includes preprocessing and feature extraction. After preprocessing the training images as mentioned earlier, applying some of the translation methods to reduce external effects, we extract the most significant features using the PCA method. Next, we build the Eigen-Faces database from the highest K eigenvectors, which is used in the classification between the training and test image sets.

The second subsystem trains the proposed system by feeding the training dataset to the backpropagation neural network. Using the Eigen-Faces found in the first subsystem, we find the lowest distance between each training image and the Eigen-Faces, and add the lowest distance for each training image to the training dataset. After that, we feed the training dataset to the BPN with a 0.3 training rate and one hidden layer.

The third subsystem examines the recognition system by testing the test images and finding the face recognition rate. First, we find the Eigen-Face representation of the test image, then find the distance between the test image and the Eigen-Faces database using the five different methods. Finally, we feed these distances to the BPN and determine whether the test image matches the right person or not.

D. Results and Analysis

In the first experiment, using PCA and KNN with 50% of the dataset for training and 50% for testing, we achieved varying accuracy depending on which distance method was applied. The Canberra distance achieved 96.6% accuracy, with only 3 mismatches, on the Yale database. The Mahalanobis distance achieved 95% accuracy, with only 10 mismatches, on the ORL database. The accuracy increased to 100% on the Yale database and 97.5% on the ORL database when applying the second scenario, leaving only one face image out for testing. Table I shows these results.

In the proposed method, which uses PCA, KNN, and BPN, we achieved higher accuracy by combining the distance methods using equation (23): 96.7% accuracy on the Yale database with only 3 mismatches, and 96.5% on the ORL database with 7 mismatches out of 200 testing images. Table II shows the results in detail.

TABLE I
Experimental results using PCA and KNN

Strategy                   Method                  Yale RR (%)   ORL RR (%)
Five random images         Canberra Distance       96.6          95
per person for training    Euclidean Distance      87.8          90.5
                           Correlation Distance    90            91
                           Manhattan Distance      91.1          95
                           Mahalanobis Distance    94.4          95
Leave-one-out method       Canberra Distance       93.3          90
                           Euclidean Distance      93.3          97.5
                           Correlation Distance    100           97.5
                           Manhattan Distance      100           97.5
                           Mahalanobis Distance    100           97.5

TABLE II
Experimental results using PCA and BPN

Strategy                   Method                  Yale RR (%)   ORL RR (%)
Five random images         Canberra Distance       88.9          80
per person for training    Euclidean Distance      75.6          79.5
                           Correlation Distance    89.9          85.5
                           Manhattan Distance      87.8          89.5
                           Mahalanobis Distance    94.4          95
Proposed: D = √(Σ d_i²)                            96.7          96.5

V. CONCLUSION

In this paper we proposed an improved framework for face recognition using PCA and BPN, combining five distance measurement methods. Because each distance method has advantages over the others, combining them allowed us to achieve higher accuracy than existing methods. By using the BPN we reduced the computation time and improved the face recognition rate. In future work we plan to apply this framework with different feature extraction methods, such as SVM and 2D-PCA, which have proven advantages over conventional PCA.

REFERENCES

[1] R. Chellappa, C. L. Wilson, and S. Sirohey, "Human and machine recognition of faces: A survey," Proc. IEEE, vol. 83, pp. 705-740, 1995.
[2] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces versus fisherfaces: Recognition using class specific linear projection," IEEE Trans. Pattern Anal. Machine Intell., vol. 19, pp. 711-720, 1997.
[3] M. A. Turk and A. P. Pentland, "Eigenfaces for Recognition," J. Cognitive Neuroscience, vol. 3, pp. 71-86, 1991.
[4] L. Sirovich and M. Kirby, "A Low-Dimensional Procedure for the Characterization of Human Faces," J. Optical Soc. Am. A, vol. 4, no. 3, pp. 519-524, 1987.
[5] D. Zhang, Z. Zhou, and S. Chen, "Diagonal Principal Component Analysis for Face Recognition," Pattern Recognition, vol. 39, pp. 140-142, 2006.
[6] T. Mandal, Q. M. J. Wu, and Y. Yuan, "Curvelet based face recognition via dimension reduction," Signal Processing, vol. 89, no. 3, pp. 2345-2353, 2009.
[7] M. H. Yang, "Kernel Eigenfaces vs. Kernel Fisherfaces: Face Recognition Using Kernel Methods," Proc. Fifth IEEE Int'l Conf. Automatic Face and Gesture Recognition (FGR'02), pp. 215-220, May 2002.
[8] R. Gottumukkal and V. K. Asari, "An improved face recognition technique based on modular PCA approach," Pattern Recognition Letters, vol. 25, pp. 429-436, 2004.
[9] M. S. Bartlett, J. R. Movellan, and T. J. Sejnowski, "Face Recognition by Independent Component Analysis," IEEE Trans. Neural Networks, vol. 13, no. 6, pp. 1450-1464, 2002.
[10] B. A. Draper, K. Baek, M. S. Bartlett, and J. R. Beveridge, "Recognizing Faces with PCA and ICA," Computer Vision and Image Understanding: special issue on face recognition, in press.
[11] B. Moghaddam, "Principal manifolds and Bayesian subspaces for visual recognition," Proc. Seventh IEEE Int'l Conf. Computer Vision, pp. 1131-1136, 1999.
[12] P. C. Yuen and J. H. Lai, "Face representation using Independent Component Analysis," Pattern Recognition, vol. 35, no. 6, pp. 1247-1257, 2002.
[13] J. Yang, D. Zhang, A. F. Frangi, and J. Y. Yang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131-137, Jan. 2004.
[14] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face Recognition using LDA based Algorithms," IEEE Trans. Neural Networks, vol. 14, no. 1, pp. 195-200, 2003.
[15] W. Zheng, L. Zhao, and C. Zou, "A modified algorithm for generalized discriminant analysis," Neural Computation, vol. 16, no. 6, pp. 1283-1297, 2004.
[16] J. Yang, A. F. Frangi, J. Y. Yang, D. Zhang, and Z. Jin, "KPCA Plus LDA: A complete kernel Fisher discriminant framework for feature extraction and recognition," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 2, pp. 230-244, 2005.
[17] W. S. Chen and P. C. Yuen, "Interpolatory Mercer kernel construction for kernel direct LDA on face recognition," submitted to the 2009 IEEE Int'l Conf. Acoustics, Speech, and Signal Processing (ICASSP 2009).
[18] J. Huang, P. C. Yuen, W. S. Chen, and J. H. Lai, "Face Recognition Using Kernel Subspace LDA Algorithm," Proc. Asian Conf. Computer Vision (ACCV04), vol. 1, pp. 61-66, 2004.
[19] A. Martinez and A. Kak, "PCA versus LDA," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 228-233, Feb. 2001.
[20] J. R. Solar and P. Navarreto, "Eigenspace-based face recognition: A comparative study of different approaches," IEEE Trans. Systems, Man, and Cybernetics, Part C: Applications, vol. 35, no. 3, 2005.
[21] R. Duda, P. Hart, and D. Stork, Pattern Classification, 2nd ed., Wiley-Interscience, New York, 2001.
[22] N. A. Mahmon, N. Ya'acob, and A. L. Yusof, "Differences of Image Classification Techniques for Land Use and Land Cover Classification," IEEE CSPA, pp. 90-94, 2015.
[23] M. J. Er, W. Chen, and S. Wu, "High speed face recognition based on discrete cosine transform and RBF neural network," IEEE Trans. Neural Networks, vol. 16, no. 3, pp. 679-691, 2005.
[24] M. Er, S. Wu, J. Lu, and L. H. Toh, "Face recognition with radial basis function (RBF) neural networks," IEEE Trans. Neural Networks, vol. 13, no. 3, pp. 697-710, 2003.
[25] H. Othman and T. Aboulnasr, "A separable low complexity 2D HMM with application to face recognition," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 10, pp. 1229-1238, 2003.
[26] K. Lee, Y. Chung, and H. Byun, "SVM based face verification with feature set of small size," Electronics Letters, vol. 38, no. 15, pp. 787-789, 2002.
[27] A. A. Mohammed, R. Minhas, Q. M. Jonathan Wu, and M. A. Sid-Ahmed, "Human face recognition based on multidimensional PCA and extreme learning machine," Pattern Recognition, vol. 44, pp. 2588-2597, 2011.
[28] M. Kirby and L. Sirovich, "Application of the Karhunen-Loeve procedure for the characterization of human faces," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 103-108, Jan. 1990.
[29] K. Perumal and R. Bhaskaran, "Supervised classification performance of multispectral images," Journal of Computing, vol. 2, no. 2, arXiv:1002.4046v1, 2010.
[30] ORL database: http://www.camorl.co.uk.
[31] Yale Face Database: http://cvc.yale.edu/projects/yalefaces/
