
(IJCNS) International Journal of Computer and Network Security, Vol. 2, No. 3, March 2010

New Illumination Compensation Method for Face Recognition

Heng Fui Liau¹ and Dino Isa²

¹ Faculty of Engineering, School of Electrical and Electronic Engineering, University of Nottingham Malaysia Campus,
Jalan Broga, 43500 Semenyih, Selangor, Malaysia.
eyx6lhf@nottingham.edu.my

² Faculty of Engineering, School of Electrical and Electronic Engineering, University of Nottingham Malaysia Campus,
Jalan Broga, 43500 Semenyih, Selangor, Malaysia.
Dino.Isa@nottingham.edu.my

Abstract: This paper proposes a method for implementing illumination invariant face recognition based on the discrete cosine transform (DCT). This is done to address the effect of varying illumination on the performance of appearance-based face recognition systems. The proposed method aims to correct the illumination variation rather than to simply discard it. In the proposed method, illumination variation, which lies mainly in the low frequency band, is normalized in the DCT domain. Other effects of illumination variation, which manifest themselves in the formation of shadows and specular defects, are corrected by manipulating the properties of the odd and even components of the DCT. The proposed method possesses several advantages. First, it does not require the use of training images or an illumination model and can be applied directly to the test image. Second is the simplicity of the system: it needs only one parameter to be determined, making it simple to implement. Experimental results on the Yale face database B using PCA- and support vector machine (SVM)-based face recognition algorithms showed that the proposed method gave comparable performance to other well-known but more complicated methods.

Keywords: face recognition, illumination invariant, discrete cosine transform, biometrics.

1. Introduction

Face recognition has gained much attention in the past two decades, particularly for its potential role in information and forensic security. Principal component analysis (PCA) [1] and linear discriminant analysis (LDA) [2] are the two most well-known and widely used appearance-based methods. Existing face recognition systems, including PCA and LDA, do not perform well in the presence of illumination variation. For example, Adini et al. [3] stated that the variation between images of the same person due to illumination inconsistencies is always larger than the image variation due to a change in face identity. Because of the importance of this issue, recent research has focused on developing robust face recognition systems. However, illumination variation still remains a challenging problem [4].

The problems that illumination variation causes for facial images are mainly due to the different appearances of the 3D shape of human faces under lighting from different directions. Early work showed that the variability of images of a Lambertian surface in a fixed pose, under variable lighting and without shadowing, is a 3D linear subspace [5]-[7]. One of the approaches used to solve the problem is building a 3D illumination model. Basri et al. [5] proposed a spherical harmonic model in which illuminated images are represented in a low-dimensional subspace. Lee et al. [8] proposed a 9D linear subspace approach using nine images captured under nine different lighting directions. Face images obtained under different lighting conditions are regarded as a convex Lambertian surface and can be approximated well by a 9D linear subspace [8]. The illumination convex cone method [9]-[11] showed that the set of images of an object in a fixed pose under all possible illumination conditions is a convex cone in the image space. This method requires a number of training images of each face taken under different lighting directions to build the generative model. Unfortunately, the illumination cone is extremely complex to build. Several simpler models that approximate it as well as possible with minimum complexity are presented in [10], such as the low-dimensional subspace model, and the cones-attached and cones-cast models using extreme rays. Gross et al. [12] proposed a 3D method based on light-field estimation. It operates by estimating a representation of the light field of the subject's head. The light field is then used as the set of features for recognition. Recently, Zhang et al. [13] proposed a 3D Spherical Harmonic Basis Morphable Model. They showed that any face under an arbitrary unknown lighting condition can be represented simply by three low-dimensional vectors which correspond to shape, spherical harmonics and illumination. One of the major drawbacks of the 3D illumination model-based approaches is that a number of training images of the subject under varying lighting conditions, or information about the 3D shape, are needed during the training phase. Moreover, their application range is limited by the 3D face model, and they have a significant drawback for real-time systems due to their expensive computation requirements.

Every single pixel of an illuminated face image can be regarded as the product of the reflectance and the luminance at that point. Luminance varies slowly while reflectance can change abruptly. In other words, luminance, which corresponds to the illumination variation, is located mainly in the low frequency band, whereas reflectance, which corresponds to the stable facial features under illumination variation, is located in the higher frequency band. Land's "retinex" model [14] approximates the reflectance as the ratio of the image to its low-pass version, which serves as an estimate of the luminance.

Chen et al. [15] discard the low frequency DCT components in the logarithm domain to achieve illumination invariance. They first expand the pixel intensities in the dark regions using a logarithm transform. Then, the discrete cosine transform is applied to the image in the logarithm domain, and the low frequency components are discarded by setting the corresponding coefficients to zero. Nanni and Lumini [16] investigated discrete wavelet transform features under different lighting conditions. They found that the most useful sub-bands are the first level of decomposition obtained with the Biorthogonal wavelet and the horizontal details of the third level of decomposition obtained with the Reverse Biorthogonal wavelet, which are robust against illumination variation. Franco and Nanni [17] proposed a classifier fusion scheme. PCA is applied to extract and reduce the dimensionality of the useful features from the DWT domain [16] and the DCT domain [15]. The individual classifier for each feature is a Kernel Laplacian Eigenmap, and the outputs of the classifiers are fused using the sum rule.

Generally, human faces are similar in shape, having two eyes, a mouth and a nose. Each of these components casts different, distinctive shadows and specularities depending on the direction of the lighting in a fixed pose. Using this characteristic, the lighting direction can be estimated and the illumination variation can be compensated. The quotient image (QI) [18] is an effective method for handling the illumination problem. This method is very practical because it only requires one training image for each person. The authors of [19] and [20] further improved the original QI method and proposed the self-quotient image (SQI) method. The luminance is estimated as the smoothed version of the image obtained by Gaussian filtering, and the illumination variation is eliminated by dividing by the estimated luminance. However, the parameter selection for the weighted Gaussian filter is empirical and complicated. Shan et al. [21] proposed the quotient illumination relighting (QIR) method, which synthesizes images under a predefined normal lighting condition from face images captured under non-uniform lighting conditions. Zhao et al. [22] proposed the illumination ratio image. One training image for each lighting condition is required to simulate the distribution of the images under varying lighting conditions. Liu et al. [23] proposed an illumination restoration method. The illuminated image is restored to an image under a frontal light source using a ratio image between the face image under different lighting conditions and a face image under a frontal light source, both of which are blurred by a Gaussian filter. The image is further enhanced through an iterative algorithm in which Gaussian filters of different sizes are applied to different regions of the face image. Vucini et al. [24] adopted Liu et al.'s method to restore and enhance the illuminated image. LDA is employed as the feature extractor, and an image synthesis method based on QI is proposed to generate training images and overcome the small-sample-size problem. Xie and Lam [25] normalize the lighting effect using a local normalization technique. The face image is partitioned into blocks of fixed size, and for each block the pixel intensities are normalized to zero mean and unit variance. The advantage of the method is its low computational complexity.

This paper proposes a new illumination compensation method for face recognition. The proposed method aims to correct the illumination variation rather than to simply discard it. Compared to existing methods based on the DCT, the proposed method does not discard the low frequency components, which represent most of the effects of illumination variation. This allows us to retain more information, which might be useful for face recognition. Furthermore, some of the effects of illumination variation lie in the higher frequency band, at approximately the same frequencies as those occupied by some important facial features. As previously mentioned, these kinds of illumination variation create shadows and specularities in the image, so removing the low frequency components does not help in this case. The proposed method manipulates the odd and even DCT components to remove these artifacts. Using the DCT approach, the complexity of the previously described methods is avoided while a comparable performance is retained [8]-[10], [15], [21]-[24]. The proposed method is based solely on 2D images and does not need to estimate the 3D shape, so it requires much less computational resource than the 3D model-based methods described above. Besides that, it does not require any prior information about the illumination model or any training images. Furthermore, the parameter selection is simple: only one parameter, the cut-off frequency of the filter, needs to be determined. Experimental results on the Yale face database B [11] using PCA- and support vector machine (SVM)-based face recognition algorithms showed that the proposed method achieves good performance compared to some of the more complicated methods described above.

2. Method

As mentioned above, an illuminated face image, I(x,y), can be regarded as the product of reflectance, R(x,y), and luminance, L(x,y), as shown in (1). Taking the logarithm transform of (1), we obtain (2).

    I(x,y) = R(x,y) · L(x,y)                        (1)

    ln I(x,y) = ln R(x,y) + ln L(x,y)               (2)

The logarithm transform turns (1) into a linear equation in which the logarithm of the illuminated image is the sum of the logarithm of the reflectance and the logarithm of the luminance, as shown in (2). In the image processing field, the logarithm transform is often employed to expand the values of the dark pixels. Fig. 1 gives a comparison between the original image and the same image following the logarithm transform. The brightness of the right image is spread more uniformly.

Figure 1. On the left is the original face image. On the right is the output of the logarithm transform.
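For concreteness, the model in (1) and (2) can be applied as in the short sketch below. NumPy is assumed; the function name and the small eps offset used to avoid log(0) are our own additions, not part of the original method.

```python
import numpy as np

def log_transform(image: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Logarithm transform of an illuminated image I = R * L, cf. (1)-(2).

    In the log domain the multiplicative model (1) becomes additive, (2):
    ln I(x,y) = ln R(x,y) + ln L(x,y).  The eps offset (our addition)
    avoids log(0) for completely dark pixels.
    """
    image = image.astype(np.float64)
    if image.max() > 1.0:          # scale 8-bit images to [0, 1]
        image = image / 255.0
    return np.log(image + eps)
```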

The luminance component lies mainly in the low frequency band while the reflectance component lies mainly in the higher frequency band. The luminance component can be approximated as the 'low-pass version' of the illuminated image. The reflectance component of an illuminated face image represents the stable facial features under varying lighting conditions. Thus, the illumination variations can be compensated for, provided we can estimate them relatively well.

Let L(x,y) be the incident luminance and L_u(x,y) be the uniform luminance. A uniformly illuminated image, I_u(x,y), can be expressed as (3). In order to normalize the brightness across all of the images, the mean value of I_u(x,y) is set to be the same for all images.

    I_u(x,y) = R(x,y) · L_u(x,y)                    (3)

The luminance component corresponds to the illumination variation. As previously described, the illumination variation can be well approximated by the low frequency components of the illuminated face image. In section 2.1, an illumination compensation method based on the low frequency DCT components is presented. In fact, illumination variation cannot be perfectly separated from the original image in the frequency domain. Under large and complex illumination variations, shadows and specularities are cast on the face due to the unique 3D shape of the human face. Some shadows and specularities lie in the same frequency band as the reflectance, which corresponds to the facial features. In this case, a compensation technique based on the low frequency components will be unable to remove the artifacts. In section 2.2, an illumination compensation method based on the properties of the odd and even DCT components in the horizontal direction, which has the capability to remove artifacts in the high frequency band, is presented.

2.1 Illumination normalization using low frequency DCT components

The DCT has been widely used in modern image and video compression. The DCT possesses some fine properties, such as de-correlation and energy compaction. The DCT is chosen because it gives only real values, unlike the Discrete Fourier Transform (DFT), which gives complex numbers. Furthermore, the DCT has lower computational complexity than the Discrete Wavelet Transform. For an N×N image, the 2D DCT is given by (5), for u, v = 0, 1, 2, ..., N-1, where u and v represent the horizontal and vertical frequency components respectively and f(x,y) is the value of the pixel at coordinate (x,y). For the 2D DCT, the frequency of a component is computed as the square root of the sum of u² and v². The inverse 2D DCT is expressed as (6).

    C(u,v) = α(u) α(v) Σ_{x=0..N-1} Σ_{y=0..N-1} f(x,y) cos[(2x+1)uπ/2N] cos[(2y+1)vπ/2N]        (5)

    f(x,y) = Σ_{u=0..N-1} Σ_{v=0..N-1} α(u) α(v) C(u,v) cos[(2x+1)uπ/2N] cos[(2y+1)vπ/2N]        (6)

where α(0) = √(1/N) and α(u) = √(2/N) for u ≠ 0.
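A minimal sketch of (5) and (6) using SciPy's separable type-II DCT with orthonormal scaling is given below; scipy.fft.dctn and idctn compute exactly this transform, and the wrapper names are ours.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct2(image: np.ndarray) -> np.ndarray:
    """Forward 2D DCT as in (5): coefficients C(u, v) of an image f(x, y)."""
    return dctn(image, type=2, norm='ortho')

def idct2(coeffs: np.ndarray) -> np.ndarray:
    """Inverse 2D DCT as in (6): recovers f(x, y) from C(u, v)."""
    return idctn(coeffs, type=2, norm='ortho')

# Round-trip check on a random 64x55 "face image" (the crop size used later).
img = np.random.rand(64, 55)
assert np.allclose(idct2(dct2(img)), img)
```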
From (6), a uniformly illuminated component can be obtained from the original image by adding a compensation term which compensates for the non-uniform illumination. Generally, human faces are almost symmetrical: the left half of the face is almost identical to the right half in most cases. For non-uniformly illuminated images, part of the face is brightened and the rest is darkened. The luminance component is approximated as the 'low-pass version' of the image with a certain cut-off frequency. The compensation term can be estimated as follows. First, the mean of an image reconstructed from the low frequency DCT components is computed according to (7). Then, the mean is subtracted from each pixel of this low-pass reconstruction as shown in (8); a negative value indicates that the pixel is 'dark' and a positive value indicates that the corresponding pixel is 'bright'. Each pixel is then adjusted by halving the difference between the pixel value and the mean value, as shown in (8). In other words, the intensity of each pixel is adjusted according to its difference from the mean intensity of the image reconstructed using only the low frequency components. The illumination variations, which lie mainly in the low frequency band, are reduced while the important facial features are preserved. In other words, the low-pass version of the image in the DCT domain serves as an estimate of the luminance component. Fig. 2 shows the effect of the illumination normalization of the low frequency DCT components on a face image. The left-most image is the original image. The second, third and fourth images are the processed images with the cut-off frequency set at 2, 4 and 6 respectively. As can be seen, the shadow effects on the left side of the face are reduced as the value of the cut-off frequency increases, while the facial features are unaffected. Unlike other similar methods, only one parameter needs to be determined, namely the cut-off frequency of the low-pass filter. However, the normalization may cause some information to be lost in the process due to wrong estimations of the 'dark' and 'bright' zones.
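The exact forms of (7) and (8) are not reproduced in this copy, so the sketch below is one plausible reading of the description above: reconstruct a low-pass image from the DCT coefficients whose frequency sqrt(u² + v²) lies below the cut-off, take its mean as in (7), and move each pixel by half of the deviation of the low-pass image from that mean, as in (8). All names, and the exact update rule, are our interpretation rather than the authors' code.

```python
import numpy as np
from scipy.fft import dctn, idctn

def normalize_low_freq(log_img: np.ndarray, cutoff: float = 6.0) -> np.ndarray:
    """Illumination normalization in the low-frequency DCT band (section 2.1).

    The low-pass reconstruction serves as the luminance estimate; each pixel
    is shifted by half of the difference between that estimate and its mean,
    so 'bright' regions are darkened and 'dark' regions are brightened.
    """
    coeffs = dctn(log_img, type=2, norm='ortho')
    u = np.arange(coeffs.shape[1])[None, :]        # horizontal frequency index
    v = np.arange(coeffs.shape[0])[:, None]        # vertical frequency index
    low_mask = np.sqrt(u ** 2 + v ** 2) <= cutoff
    low_pass = idctn(np.where(low_mask, coeffs, 0.0), type=2, norm='ortho')
    mean_low = low_pass.mean()                     # (7): mean of the low-pass image
    return log_img - 0.5 * (low_pass - mean_low)   # (8): halve the deviation
```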

Figure 2. Normalization of the low frequency DCT components. The left-most image is the original image.

2.2 Illumination Correction Using Odd and Even DCT Components in the Horizontal Direction

Under large and complex illumination variation, the effect of the illumination variation occurs across almost the entire frequency band. Some variations, such as shadows and specularities, lie in the same frequency band as some important facial features. In this section, we propose an illumination correction method which uses the properties of the odd and even DCT components to eliminate the shadows and specularities and to restore the original features.

Figure 3. 2D-DCT basis functions

In Fig. 3, different images have been produced by varying the DCT components in the horizontal direction. As shown in Fig. 3, the second and fourth images have been reconstructed from different even DCT components in the horizontal direction, and their symmetry is evident. The first and third images, reconstructed from odd DCT components in the horizontal direction, are asymmetrical. Non-uniform illumination gives rise to the odd DCT components and affects the magnitude of the even DCT components, so (5) can be rewritten accordingly.

Odd DCT components can be regarded as noise or luminance components due to the illumination variation, while the even DCT components represent the wanted information, which may or may not be slightly corrupted by the non-uniform luminance. Under such a model, the luminance components can be estimated using the odd DCT components, thereby also giving an estimate of the correction term. Fig. 4 shows the difference between the odd DCT components of an image under variable illumination and those of an image of the same person under uniform illumination. As shown in the top of Fig. 4, the odd components of the illuminated image fluctuate greatly, with magnitudes greater than one. The bottom of Fig. 4 also shows that the illumination variation occurs across the entire frequency band. Note that the signal strength of the uniformly illuminated image is always below 0.5. For the non-uniformly illuminated image, the strong signals (greater than one) are located in the low frequency band; however, the middle and high frequency bands also contain points with strength greater than 0.5. This shows that the illumination variation also corrupts the high frequency components, which correspond to the reflectance. Hence, no features are robust enough against the illumination variation.

Figure 4. The top and bottom are the odd DCT components of the original image and the illuminated image respectively.
Since the human face generally has the symmetry property, we can easily compensate for the illumination variation in the face based on the properties of the odd and even components of the DCT. Here, an illumination compensation method based on manipulating the even and odd DCT components is proposed. First, two new images are reconstructed from the odd and the even DCT components in the horizontal direction of the original image. The pixels on the left half are then compared with the pixels on the right half. If the left-side pixel is negative and the corresponding right-side pixel is positive, the values of both pixels are adjusted according to (10) and (11), which involve a compensation term. Fig. 5 shows an illuminated image and the corresponding odd and even reconstructions.

Figure 5. From left to right: the illuminated image and the images reconstructed from its odd and even DCT components.

If the left-side pixel is positive and the corresponding right-side pixel is negative, the values of both pixels are adjusted according to (12) and (13).
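The reconstruction of the two auxiliary images can be sketched as below, under our reading that "odd/even DCT components in the horizontal direction" means the coefficients with odd/even horizontal frequency index, whose basis functions are respectively antisymmetric and symmetric about the vertical midline. The update equations (10)-(13) are not reproduced in this copy, so the sketch stops at the decomposition; names are ours.

```python
import numpy as np
from scipy.fft import dctn, idctn

def split_even_odd_horizontal(img: np.ndarray):
    """Reconstruct two images from the even and odd horizontal DCT components.

    Even horizontal basis functions are symmetric about the vertical midline,
    odd ones are antisymmetric, so under the facial-symmetry assumption the
    odd part mostly carries one-sided shadows and specularities.
    """
    coeffs = dctn(img, type=2, norm='ortho')
    even = np.zeros_like(coeffs)
    odd = np.zeros_like(coeffs)
    even[:, 0::2] = coeffs[:, 0::2]   # even horizontal frequencies
    odd[:, 1::2] = coeffs[:, 1::2]    # odd horizontal frequencies
    img_even = idctn(even, type=2, norm='ortho')
    img_odd = idctn(odd, type=2, norm='ortho')
    return img_even, img_odd
```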

Figure 6. The sum of the correction term and the even DCT components gives the illumination-corrected image.

The new pixel intensity for the compensated image can then be computed using (14). As shown in Fig. 6, the shadow created by the nose has been eliminated. Fig. 7 shows illumination-corrected images produced by the proposed method. The illumination variations are largely reduced while the important facial features are preserved. Here, we would like to emphasize that the proposed method is able to recover facial features as long as the specularities or shadow regions occur on only one side of the face. The output image looks different from the original image because the proposed method corrects the illumination variation based on the symmetry of the human face: the output image is adjusted in such a way that the left half is identical to the right half. Due to some subtle variations between the two halves of every face, a face image is not exactly symmetrical; for example, the two eyes have slightly different sizes, there may be a mole on only one side of the cheek, etc. Therefore, the output images may show some adverse effects. All of the images are normalized to a mean of 0.5.

Figure 7. Illumination normalization and compensation based on the proposed method.

The flow of the proposed illumination invariant face recognition system can be summarized as follows (a code sketch of the flow is given after the list):
1. Perform the logarithm transform on the input images to expand the values of the dark pixels.
2. Correct the non-uniform luminance in the low frequency domain using the proposed method in section 2.1.
3. Eliminate the specularities and shadows that lie in the same frequency band as the reflectance using the proposed method in section 2.2.
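The three steps can be chained as in the sketch below, which reuses the helper functions sketched in section 2 (log_transform, normalize_low_freq and split_even_odd_horizontal). Because equations (10)-(14) are not reproduced in this copy, the final odd/even correction is only stubbed and the even-component reconstruction stands in for the corrected image.

```python
import numpy as np

def illumination_invariant_preprocess(img: np.ndarray, cutoff: float = 6.0) -> np.ndarray:
    """Driver for the three-step flow above (a sketch, not the authors' code)."""
    log_img = log_transform(img)                              # step 1
    norm_img = normalize_low_freq(log_img, cutoff)            # step 2
    img_even, img_odd = split_even_odd_horizontal(norm_img)   # step 3 (decomposition only)
    corrected = img_even                                      # placeholder for (10)-(14)
    return corrected - corrected.mean() + 0.5                 # output normalized to mean 0.5
```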

3. Experiment results on the Yale face database B

The Yale face database B is commonly used to evaluate the performance of illumination invariant face recognition. The database contains 10 individuals in nine different poses. For each pose, there are 64 different illumination conditions. All images were manually aligned, and the face images were cropped to 64×55 pixels. The face images were divided into four subsets according to [11], with the angle between the direction of the incident light and the camera axis increasing from subset 1 to subset 4.

3.1 Face recognition using principal component analysis

PCA is a classical face recognition method. PCA seeks a set of projection vectors which project the input data in such a way that the covariance of the training images is maximized. Six images in subset 1 were selected to serve as training images to build the covariance matrix for PCA, as in [13] and [17]. Eigenvalue decomposition was applied to the covariance matrix to find the eigenvectors that project the data in the directions that maximize the covariance. The nearest neighbour rule was used as the classifier with the Euclidean distance as the distance metric. Fig. 8 shows that the largest thirty principal components were sufficient to give the best result and to represent the face image; therefore, thirty principal components were used to form the projection matrix. Fig. 9 and Fig. 10 show the performance of the proposed method for subsets 3 and 4 respectively under different values of the cut-off frequency. Setting the cut-off frequency at six gave the best results for both subsets.
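A minimal eigenface sketch consistent with the description above is shown below; it uses the SVD of the centered training matrix as a numerically convenient route to the eigenvectors of the covariance matrix, and all names and shapes are ours.

```python
import numpy as np

def train_pca(train_imgs: np.ndarray, n_components: int = 30):
    """train_imgs: (n_samples, h*w) row-vectorized, preprocessed face images."""
    mean = train_imgs.mean(axis=0)
    # Right singular vectors of the centered data = eigenvectors of the covariance matrix.
    _, _, vt = np.linalg.svd(train_imgs - mean, full_matrices=False)
    return mean, vt[:n_components]                 # (mean, projection matrix)

def nearest_neighbour(probe: np.ndarray, mean, projection,
                      gallery_feats: np.ndarray, gallery_labels: np.ndarray):
    """Classify a probe image by Euclidean distance in the PCA subspace."""
    feat = projection @ (probe - mean)
    return gallery_labels[np.argmin(np.linalg.norm(gallery_feats - feat, axis=1))]
```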


Figure 8. Recognition rate for subsets 3 and 4 versus the number of principal components.

Figure 9. Recognition rate for subset 3 versus the cut-off frequency.

Figure 10. Recognition rate for subset 4 versus the cut-off frequency.

Table 1: Face recognition using PCA

    Method                    Subset 3    Subset 4
    Histogram equalization    58.4%       15.6%
    Proposed method           100%        92.1%

Without the illumination normalization in the low frequency band, the proposed method was only able to achieve 96.3% and 64.1% recognition rates in subsets 3 and 4 respectively. The poor results were due to the large variation in the low frequency band, with which the illumination correction algorithm was unable to cope. As we increased the value of the cut-off frequency, the performance improved steadily, although there were some small fluctuations. The method gave its best performance when the cut-off frequency was set at 6 (100% for subset 3). For subset 4, the best performance was 92.1%, obtained when the cut-off frequency was 6, 7 or 9. Beyond that, the performance decreased because some important features might have been lost in the illumination normalization process. Table 1 shows that our method achieves a better performance level than histogram equalization alone.

3.2 Face recognition using support vector machine

The support vector machine (SVM) [26]-[28] is a popular technique for classification motivated by results from statistical learning theory. Unlike traditional methods such as neural networks, which minimize the empirical training error, the SVM is designed on the structural risk minimization principle. The SVM maps the training data into a higher dimensional feature space using the kernel trick, in which an optimal hyperplane with a large separating margin between two classes of the labeled data is constructed. The training data for the SVM are the PCA feature set described in the previous section. In this paper, several types of kernel, such as the linear kernel, wavelet kernel, polynomial kernel and radial basis function (RBF) kernel, were studied. The RBF kernel gives the best result. The RBF function is defined as

    K(x, y) = exp( -||x - y||² / (2σ²) )

where σ is the width of the RBF.
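An RBF-kernel SVM on the PCA features can be set up as in the sketch below. scikit-learn is assumed; its gamma parameter plays the role of 1/(2σ²), and the C value and the randomly generated features here are illustrative stand-ins rather than the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-ins for the 30-dimensional PCA features of section 3.1 (10 subjects).
pca_train, train_labels = rng.normal(size=(60, 30)), np.repeat(np.arange(10), 6)
pca_test, test_labels = rng.normal(size=(20, 30)), np.repeat(np.arange(10), 2)

# RBF kernel K(x, y) = exp(-||x - y||^2 / (2 * sigma^2)); gamma = 1 / (2 * sigma^2).
clf = SVC(kernel='rbf', C=10.0, gamma='scale')   # illustrative hyperparameters
clf.fit(pca_train, train_labels)
print(clf.score(pca_test, test_labels))
```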
Table 2 shows the performance of the SVM with the different preprocessing methods. Our algorithm improved the performance of the SVM tremendously in subset 4. Our method scored a 100% recognition rate in subset 3 and 96.4% in subset 4.

Table 2: Face recognition using SVM

    Method                    Subset 3    Subset 4
    Original                  13.6%       55%
    Histogram equalization    100%        61.4%
    Proposed method           100%        96.4%

3.3 Comparison with other methods

The results for subsets 3 and 4, along with those of other well-known methods, are presented in Table 3. Table 3 shows that the proposed method gave results comparable with other well-known but more complicated methods; in contrast, the proposed method is much simpler to implement, with a much smaller computational resource requirement. The baseline system only scored 58.4% and 15.6% in subset 3 and subset 4 respectively. The proposed method outperformed the linear subspace method, the cones-attached method, the illumination ratio image method and the QIR method. As mentioned above, these methods require training images to build the illumination model or to estimate the lighting conditions. The authors of [15] discard the low frequency components, keeping only the high frequency ones, which correspond to the reflectance component. Fig. 11 shows the difference between the proposed method and [15]: the shadows and specular defects remained on the image because they lie in the same frequency band as the reflectance component. Discarding the low frequency components might also remove important facial features that lie in the low frequency band and degrade the performance of the face recognition system when the number of classes is large. The proposed method aims to correct the illumination variation rather than to simply discard it in order to achieve illumination invariance. However, it was unable to completely remove artifacts that appeared on both sides of the face. This caused the proposed method to have a rather unimpressive performance in subset 4 compared to [15]. Our method is robust against illumination variation and is easy to implement. It can be a pre-processing stage for other face recognition algorithms.

Table 3: Recognition rate comparison with other methods

    Method                                    Subset 3 (%)    Subset 4 (%)
    Linear subspace [10]                      100             85
    Cones-attached [10]                       100             91.4
    Cone-cast [10]                            100             100
    Illumination ratio image [22]             96.7            81.4
    Quotient illumination relighting [21]     100             90.6
    Illumination restoration [23]             98.3            96.4
    DCT in logarithm domain [15]              100             99.8
    Vucini et al.'s method [24]               100             95
    Our method + PCA                          100             92.1
    Our method + SVM                          100             96.4

Figure 11. Comparison between the proposed method (bottom) and [10] (top).

3.4 Computation time

An efficient and simple illumination compensation method in the DCT domain has been presented. All experiments were conducted using Matlab 2008a on a Core 2 Duo E6750 CPU with 2 GB of RAM. It takes 2.050 seconds and 2.311 seconds to process all of the face images in subset 3 and subset 4 respectively. The mean computation time for each image is 0.0167 second. The complexity is low, and the method can be implemented in a real-time system.
4. Conclusion

This paper proposes a novel illumination compensation method in the DCT domain. The illumination is normalized based on the low frequency components of the DCT. Subsequently, the illumination variations which create shadows and specularities are further corrected by the proposed method, which uses the properties of the odd and even components of the DCT. The proposed method is simple in terms of design: there is only one parameter that needs to be determined, namely the cut-off frequency of the filter for the illumination normalization process. The proposed method gives results comparable with those of other well-known but more complicated methods on the Yale face database B. The proposed method has a rather unimpressive performance in subset 4 because large illumination variation creates shadows and specularities that occur on both sides of the face. Our method is robust against illumination variation and is easy to implement. It can be a pre-processing stage for other face recognition algorithms.
References

[1] M.A. Turk, A.P. Pentland, "Eigenfaces for recognition", Journal of Cognitive Neuroscience, 3, pp. 71-86, 1991.
[2] P.N. Belhumeur, J.P. Hespanha, D.J. Kriegman, "Eigenfaces versus Fisherfaces", IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), pp. 711-720, 1997.
[3] Y. Adini, Y. Moses, S. Ullman, "Face recognition: the problem of compensating for changes in illumination direction", IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), pp. 721-732, 1997.
[4] P.J. Phillips, W.T. Scruggs, A.J. O'Toole, P.J. Flynn, K.W. Bowyer, C.L. Schott, M. Sharpe, "FRVT 2006 and ICE 2006 large-scale results", National Institute of Standards and Technology Report.
[5] R. Basri, D.W. Jacobs, "Lambertian reflectance and linear subspaces", IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(2), pp. 218-233, 2003.
[6] S. Nayar, H. Murase, "Dimensionality of illumination manifold in eigenspace", Technical Report CUCS-021-94, Columbia University.
[7] A. Shashua, "Photometric issues in 3D visual recognition from a single 2D image", International Journal of Computer Vision, 21, pp. 99-122, 1997.
[8] K.C. Lee, J. Ho, D.J. Kriegman, "Nine points of light: acquiring subspaces for face recognition under variable lighting", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 519-526, 2001.
[9] P.N. Belhumeur, D.J. Kriegman, "What is the set of images of an object under all possible illumination conditions?", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 270-277, 1996.
[10] A.S. Georghiades, P.N. Belhumeur, D.J. Kriegman, "From few to many: illumination cone models for face recognition under variable lighting and pose", IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6), pp. 643-660, 2001.
[11] H.F. Chen, P.N. Belhumeur, D.J. Kriegman, "In search of illumination invariants", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 13-15, 2000.
[12] R. Gross, I. Matthews, S. Baker, "Appearance-based face recognition and light-fields", IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(4), pp. 449-465, 2004.
[13] L. Zhang, S. Wang, D. Samaras, "Face synthesis and recognition from a single image under arbitrary unknown lighting using a spherical harmonic basis morphable model", In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 209-216, 2005.
[14] E.H. Land, J.J. McCann, "Lightness and retinex theory", Journal of the Optical Society of America, 61(1), pp. 1-11, 1971.
[15] W. Chen, M.J. Er, S. Wu, "Illumination compensation and normalization for robust face recognition using discrete cosine transform in logarithm domain", IEEE Transactions on Systems, Man, and Cybernetics, Part B, 36(2), pp. 458-466, 2006.
[16] L. Nanni, A. Lumini, "Wavelet decomposition tree selection for palm and face authentication", Pattern Recognition Letters, 29, pp. 343-353, 2008.
[17] A. Franco, L. Nanni, "Fusion of classifiers for illumination robust face recognition", Expert Systems with Applications, 36, pp. 8946-8954, 2009.
[18] A. Shashua, T. Riklin-Raviv, "The quotient image: class-based re-rendering and recognition with varying illuminations", IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2), pp. 129-139, 2001.
[19] H. Wang, S.Z. Li, Y. Wang, "Generalized quotient image", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 498-505, 2004.

[20] H. Wang, S.Z. Li, Y. Wang, "Face recognition under varying lighting conditions using self quotient image", In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, pp. 819-824, 2004.
[21] S. Shan, W. Gao, B. Gao, D. Zhao, "Illumination normalization for robust face recognition against varying lighting conditions", In Proceedings of the IEEE Workshop on AMFG, pp. 157-164, 2003.
[22] J. Zhao, Y. Su, D. Wang, S. Luo, "Illumination ratio image: synthesizing and recognition with varying illuminations", Pattern Recognition Letters, 24, pp. 2703-2710, 2003.
[23] D.H. Liu, K.M. Lam, L.S. Shen, "Illumination invariant face recognition", Pattern Recognition, 38, pp. 1705-1716, 2005.
[24] E. Vucini, M. Gökmen, E. Gröller, "Face recognition under varying illumination", In Proceedings of the 15th Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, pp. 57-64, 2007.
[25] X. Xie, K.M. Lam, "An efficient illumination normalization method for face recognition", Pattern Recognition Letters, 27, pp. 609-617, 2006.
[26] V. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995.
[27] B.E. Boser, I. Guyon, V. Vapnik, "A training algorithm for optimal margin classifiers", In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp. 144-152, 1992.
[28] C. Cortes, V. Vapnik, "Support-vector networks", Machine Learning, 20, pp. 273-297, 1995.
