(IJCSIS) International Journal of Computer Science and Information Security, Vol. 11, No. 12, December 2013
Comparative Study of Person Identification System with Facial Images Using PCA and KPCA Computing Techniques
Md. Kamal Uddin, Abul Kalam Azad, Md. Amran Hossen Bhuiyan
Department of Computer Science & Telecommunication Engineering, Noakhali Science & Technology University, Noakhali-3814, Bangladesh
Abstract — Face recognition is one of the most successful areas of research in computer vision for the application of image analysis and understanding. It has received considerable attention in recent years from both industry and the research community. However, face recognition is susceptible to variations in pose, light intensity, expression, etc. In this paper, a comparative study of linear (PCA) and nonlinear (KPCA) approaches for person identification is presented. Principal Component Analysis (PCA) is one of the most well-recognized feature extraction tools used in face recognition. Kernel Principal Component Analysis (KPCA) was proposed as a nonlinear extension of PCA; its basic idea is to map the input space into a feature space via a nonlinear mapping and then compute the principal components in that feature space. In this paper, facial images are classified using the Euclidean distance and the performance of both feature extraction tools is analysed.
Keywords — Face recognition; Eigenface; Principal component analysis; Kernel principal component analysis.
I. INTRODUCTION
Modern civilization heavily depends on person authentication for several purposes. Face recognition has always been a major focus of research because of its non-invasive nature and because it is people's primary method of person identification. Identifying a person interacting with computers is an important task for automatic systems in areas such as information retrieval, automatic banking and control of access to security areas. Here, an effort has been made to develop person identification systems with facial images using Principal Component Analysis (PCA) and Kernel Principal Component Analysis (KPCA) computing techniques, and finally the performance of the two computing techniques has been compared.
 
II. RELATED WORKS
Face recognition is an active area of research with applications ranging from static, controlled mug-shot verification to dynamic, uncontrolled face identification against a cluttered background [1]. In the context of personal identification, face recognition usually refers to static, controlled full-frontal portrait recognition [10]. Static means that the facial portraits used by the face recognition system are still facial images (intensity or range). Controlled means that the type of background, illumination, resolution of the acquisition devices, the distance between the acquisition devices and the faces, etc. are essentially fixed during the image acquisition process. Obviously, in such a controlled situation, the segmentation task is relatively simple and the intra-class variations are small. Over the past three decades, a substantial amount of research effort has been devoted to face recognition [1], [10]. In the 1970s, face recognition was mainly based on measured facial attributes such as eyes, eyebrows, nose, lips, chin shape, etc. [1]. Due to the lack of computational resources and the brittleness of feature extraction algorithms, only a very limited number of tests were conducted and the recognition performance of face recognition systems was far from desirable [1]. After the dormant 1980s, there was a resurgence of face recognition research in the early 1990s. In addition to continuing efforts on attribute-based techniques [11], a number of new face recognition techniques were proposed, including:
 
Principal Component Analysis (PCA) [1], [11], [12]
Linear Discriminant Analysis (LDA) [13]
A variety of neural network based techniques [14]
Kernel Principal Component Analysis (KPCA) [15]
 
III. FACE FEATURE EXTRACTION
PCA is a powerful technique for extracting structure from potentially high-dimensional data sets; it corresponds to extracting the eigenvectors associated with the largest eigenvalues of the input distribution. For faster computation of the eigenvectors and eigenvalues, singular value decomposition (SVD) is used. This eigenvector analysis has already been widely used in face processing [1], [2]. Kernel PCA, proposed as a nonlinear extension of PCA [3]–[5], computes the principal components in a high-dimensional feature space which is nonlinearly related to the input space. Kernel PCA is based on the principle that, since a PCA in feature space can be formulated in terms of dot products in feature space, the same formulation can also be carried out using kernel functions (the dot product of two data points in feature space) without explicitly working in the feature space. In this section the two methods used in this study are explained briefly.
A. Principal Component Analysis (PCA)
 
A 2-D facial image can be represented as a 1-D vector by concatenating each row (or column) into a long thin vector. Suppose there are M vectors of size N (= rows of image × columns of image) representing a set of sampled images, where the $p_j$'s represent the pixel values:

$x_i = [p_1 \, p_2 \, \ldots \, p_N]^T, \quad i = 1, \ldots, M$   (1)
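As a concrete illustration of (1), here is a minimal NumPy sketch of stacking M flattened images into a data matrix. The image size, array names and the random stand-in data are assumptions for illustration, not part of the paper.

```python
import numpy as np

# Hypothetical example: M = 4 grayscale face images of size 64 x 64 (N = 4096).
# In practice these would be loaded from a face database.
M, rows, cols = 4, 64, 64
images = np.random.rand(M, rows, cols)   # stand-in for real face images

# Eq. (1): each 2-D image becomes a 1-D vector x_i = [p_1 ... p_N] of length N.
N = rows * cols
X = images.reshape(M, N)                 # one image per row
print(X.shape)                           # (4, 4096)
```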
 
The images are mean-centred by subtracting the mean image from each image vector. Let m represent the mean image,

$m = \frac{1}{M} \sum_{i=1}^{M} x_i$   (2)

and let $w_i$ be defined as the mean-centred image

$w_i = x_i - m$   (3)

The primary goal is to find a set of $e_i$'s which have the largest possible projection onto each of the $w_i$'s, i.e., to find a set of M orthonormal vectors $e_i$ for which the quantity

$\lambda_i = \frac{1}{M} \sum_{n=1}^{M} (e_i^T w_n)^2$   (4)

is maximized, subject to the orthonormality constraint

$e_l^T e_k = \delta_{lk}$   (5)

It has been shown that the $e_i$'s and $\lambda_i$'s are given by the eigenvectors and eigenvalues of the covariance matrix

$C = W W^T$   (6)

where W is the matrix composed of the column vectors $w_i$ placed side by side. The size of C is N × N, which could be enormous. For example, images of size 64 × 64 create a covariance matrix of size 4096 × 4096. It is not practical to solve for the eigenvectors of C directly. A common theorem in linear algebra states that the vectors $e_i$ and scalars $\lambda_i$ can be obtained by solving for the eigenvectors and eigenvalues of the M × M matrix $W^T W$. Let $d_i$ and $\mu_i$ be the eigenvectors and eigenvalues of $W^T W$, respectively:

$W^T W d_i = \mu_i d_i$   (7)

Multiplying both sides on the left by W gives

$W W^T (W d_i) = \mu_i (W d_i)$   (8)

which means that the first M − 1 eigenvectors $e_i$ and eigenvalues $\lambda_i$ of $W W^T$ are given by $W d_i$ and $\mu_i$, respectively. $W d_i$ needs to be normalized in order to be equal to $e_i$. Since only a finite number, M, of image vectors are summed up, the rank of the covariance matrix cannot exceed M − 1 (the −1 comes from the subtraction of the mean vector m). The eigenvectors corresponding to nonzero eigenvalues of the covariance matrix produce an orthonormal basis for the subspace within which most image data can be represented with a small amount of error. The eigenvectors are sorted from high to low according to their corresponding eigenvalues. The eigenvector associated with the largest eigenvalue is the one that reflects the greatest variance in the image; conversely, the smallest eigenvalue is associated with the eigenvector that captures the least variance. The eigenvalues decrease roughly exponentially, so that about 90% of the total variance is contained in the first 5% to 10% of the dimensions. A facial image can be projected onto $M'$ ($\ll M$) dimensions by computing

$\Omega = [v_1 \, v_2 \, \ldots \, v_{M'}]^T$   (9)

where $v_i = e_i^T w$. Here $v_i$ is the i-th coordinate of the facial image in the new space, which comes to be the i-th principal component. The vectors $e_i$ are also images, the so-called eigenimages, or eigenfaces in our case, as first named in [6]. They can be viewed as images and indeed look like faces.
Fig 1. Eigenfaces for the example image set
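The derivation in (2)-(9) can be sketched compactly in NumPy. The snippet below is an illustrative sketch, not the authors' implementation; the names X (the image matrix from the earlier snippet) and n_components (M′) are assumptions. It works with the small M × M matrix $W^T W$ exactly as described above, avoiding the N × N covariance matrix.

```python
import numpy as np

def eigenfaces(X, n_components):
    """PCA eigenfaces via the small M x M matrix W^T W (eqs. 2-9).

    X            : (M, N) matrix with one flattened, same-sized face image per row.
    n_components : M' (<< N), number of eigenfaces to keep.
    Returns the mean image m, the eigenfaces E with shape (N, M'),
    and the training projections Omega with shape (M, M').
    """
    M, N = X.shape
    m = X.mean(axis=0)               # eq. (2): mean image
    W = (X - m).T                    # eq. (3): mean-centred images as columns, shape (N, M)

    # Eigen-decomposition of the small M x M matrix W^T W (eq. 7)
    # instead of the huge N x N covariance matrix C = W W^T (eq. 6).
    mu, D = np.linalg.eigh(W.T @ W)  # eigenvalues in ascending order
    order = np.argsort(mu)[::-1][:n_components]
    mu, D = mu[order], D[:, order]

    # eq. (8): e_i = W d_i, normalized to unit length.
    E = W @ D
    E /= np.linalg.norm(E, axis=0)

    Omega = (X - m) @ E              # eq. (9): v_i = e_i^T w for every training image
    return m, E, Omega
```

Each column of E can be reshaped back to the original image size and displayed, which is how eigenface images such as those in Fig. 1 are produced.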
So, $\Omega$ describes the contribution of each eigenface in representing the facial image, treating the eigenfaces as a basis set for facial images. The simplest method for determining which face class provides the best description of an input facial image is to find the face class k that minimizes the Euclidean distance

$\varepsilon_k = \lVert \Omega - \Omega_k \rVert$   (10)

where $\Omega_k$ is the vector describing the k-th face class. If $\varepsilon_k$ is less than some predefined threshold $\theta_\varepsilon$, the face is classified as belonging to class k.
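A minimal sketch of the matching rule in (10), assuming Omega_train holds one stored weight vector per known face class and theta is a hand-chosen threshold (both names are illustrative, not from the paper):

```python
import numpy as np

def classify(omega, Omega_train, theta):
    """Return the index of the nearest face class, or None if no class
    is closer than the threshold theta (eq. 10)."""
    dists = np.linalg.norm(Omega_train - omega, axis=1)   # epsilon_k for every class k
    k = int(np.argmin(dists))
    return k if dists[k] < theta else None
```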
B. Kernel Principal Component Analysis
The basic idea of kernel PCA is to first map the input data x into a feature space F via a nonlinear mapping $\Phi$ and then perform a linear PCA in F. Assuming that the mapped data are centred, i.e., $\sum_{i=1}^{M} \Phi(x_i) = 0$, where M is the number of input data (the centring method in F can be found in [7] and [8]), kernel PCA diagonalizes the estimate of the covariance matrix of the mapped data $\Phi(x_i)$, defined as

$\bar{C} = \frac{1}{M} \sum_{i=1}^{M} \Phi(x_i) \Phi(x_i)^T$   (11)

To do this, the eigenvalue equation $\lambda v = \bar{C} v$ must be solved for eigenvalues $\lambda \geq 0$ and eigenvectors $v \in F \setminus \{0\}$. As $\bar{C} v = \frac{1}{M} \sum_{i=1}^{M} (\Phi(x_i) \cdot v) \Phi(x_i)$, all solutions v with $\lambda \neq 0$ lie within the span of $\Phi(x_1), \ldots, \Phi(x_M)$, i.e., coefficients $\alpha_i$ ($i = 1, \ldots, M$) exist such that

$v = \sum_{i=1}^{M} \alpha_i \Phi(x_i)$   (12)

Then the following set of equations can be considered:

$\lambda (\Phi(x_k) \cdot v) = (\Phi(x_k) \cdot \bar{C} v)$ for all $k = 1, \ldots, M$   (13)

The substitution of (11) and (12) into (13) and the definition of an M × M matrix K by $K_{ij} \equiv (\Phi(x_i) \cdot \Phi(x_j))$ produce an eigenvalue problem which can be expressed entirely in terms of the dot products of the two mappings: solve

$M \lambda \alpha = K \alpha$

for nonzero eigenvalues $\lambda_k$ and eigenvectors $\alpha^k = (\alpha_1^k, \ldots, \alpha_M^k)^T$, subject to the normalization condition $\lambda_k (\alpha^k \cdot \alpha^k) = 1$. For the purpose of principal component extraction, the projections of x onto the eigenvectors $v^k$ in F are computed.
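A compact NumPy sketch of the eigenvalue problem $M\lambda\alpha = K\alpha$, using a polynomial kernel as in the discussion below. This is an illustrative reimplementation under the usual KPCA conventions (including the Gram-matrix centring from [7], [8]), not the authors' code; the function and variable names are assumptions. Projecting a new test image additionally requires centring its cross-kernel with the training statistics, which is omitted here for brevity.

```python
import numpy as np

def kernel_pca(X, n_components, degree=2):
    """Kernel PCA with a polynomial kernel k(x, y) = (x . y)^degree.

    X : (M, N) matrix of flattened face images, one per row.
    Returns the first n_components nonlinear principal components of the
    training data, shape (M, n_components).
    """
    M = X.shape[0]
    K = (X @ X.T) ** degree                  # K_ij = (Phi(x_i) . Phi(x_j))

    # Centre the mapped data in feature space (centring of K, see [7], [8]).
    one = np.ones((M, M)) / M
    Kc = K - one @ K - K @ one + one @ K @ one

    # Solve M*lambda*alpha = Kc*alpha; eigh returns eigenvalues in ascending order.
    eigvals, alphas = np.linalg.eigh(Kc)
    order = np.argsort(eigvals)[::-1][:n_components]
    eigvals, alphas = eigvals[order], alphas[:, order]

    # Normalize each alpha^k so that the corresponding v^k in F has unit norm,
    # i.e. the condition lambda_k (alpha^k . alpha^k) = 1.
    alphas = alphas / np.sqrt(np.maximum(eigvals, 1e-12))

    # Projections of the training points onto v^k:
    # (v^k . Phi(x_j)) = sum_i alpha_i^k K(x_i, x_j).
    return Kc @ alphas
```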
Face feature extraction using kernel PCA involves three layers with entirely different roles. The input layer is made up of source nodes that connect the kernel PCA to its environment; its activation comes from the gray-level values of the face image. The hidden layer applies the nonlinear mapping $\Phi$ from the input space to the feature space F, where the inner products are computed. These two operations are in practice performed in one single step using the kernel k. The outputs are then linearly combined using the weights $\alpha_i^l$, resulting in the l-th nonlinear principal component corresponding to $\Phi$. Thereafter, the first q principal components (assuming that the eigenvectors are sorted in descending order of their eigenvalue size) constitute the q-dimensional feature vector for a face pattern.

By selecting proper kernels, various mappings $\Phi$ can be indirectly induced. One of these mappings can be achieved by taking the d-th order correlations between the entries $x_i$ of the input vector x. Since x represents a face pattern with $x_i$ as a pixel value, a PCA in F computes the d-th order correlations of the input pixels, and more precisely the most important q of the d-th order cumulants. Note that these features cannot be extracted by simply computing all the correlations and performing a PCA on such pre-processed data, since the required computation is prohibitive when d is not small (d > 2): for N-dimensional input patterns, the dimensionality of the feature space F is $(N + d - 1)! / (d!(N - 1)!)$. However, this is facilitated by the introduction of a polynomial kernel, as a polynomial kernel with degree d, $k(x, y) = (x \cdot y)^d$, corresponds to the dot product of two monomial mappings $\Phi_d$ [7], [9]:

$\Phi_d(x) \cdot \Phi_d(y) = \sum_{i_1, \ldots, i_d = 1}^{N} x_{i_1} \cdots x_{i_d} \, y_{i_1} \cdots y_{i_d} = \left( \sum_{i=1}^{N} x_i y_i \right)^{d} = (x \cdot y)^{d}$
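A small numerical check of the identity above (illustrative only; the variable names and the chosen N and d are arbitrary): the dot product of the explicit degree-d monomial features equals $(x \cdot y)^d$.

```python
import numpy as np
from itertools import product

d, N = 2, 3
rng = np.random.default_rng(0)
x, y = rng.random(N), rng.random(N)

# Explicit monomial mapping Phi_d: all ordered products x_{i1} * ... * x_{id}.
def phi(v):
    return np.array([np.prod(v[list(idx)]) for idx in product(range(N), repeat=d)])

lhs = phi(x) @ phi(y)        # Phi_d(x) . Phi_d(y)
rhs = (x @ y) ** d           # polynomial kernel (x . y)^d
print(np.isclose(lhs, rhs))  # True
```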
IV. FACE RECOGNITION
PCA and kernel PCA compute the basis of a space which is represented by its training vectors. These basis vectors, actually eigenvectors, computed by PCA and KPCA lie in the directions of the largest variance of the training vectors and were referred to above as eigenfaces. Each eigenface can be viewed as a feature. When a particular face is projected onto the face space, its projection describes the importance of each of those features in that face. The face is expressed in the face space by its eigenface coefficients (or weights). It is possible to handle a large input vector, a facial image, only by taking its small weight vector in the face space. This also means that it is possible to reconstruct the original face with some error, since the dimensionality of the image space is much larger than that of the face space. Each face in the training set is transformed into the face space and its components are stored in memory; the face space is thus populated with these known faces. When an input face is given to the system, it is projected onto the face space and the system computes its distance from all the stored faces.
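To make this procedure concrete, here is a hedged end-to-end sketch of the enrol-and-identify flow. It uses scikit-learn's PCA and KernelPCA in place of the hand-written routines above purely for brevity, random arrays as stand-ins for a real face database, and a nearest-neighbour rule with the Euclidean distance; none of the names, parameter values or data are from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

def identify(feat_train, labels_train, feat_test):
    """Nearest-neighbour identification using the Euclidean distance."""
    dists = np.linalg.norm(feat_test[:, None, :] - feat_train[None, :, :], axis=2)
    return labels_train[np.argmin(dists, axis=1)]

# Placeholders for flattened face images and identity labels
# (a real experiment would load a face database here).
rng = np.random.default_rng(0)
X_train, X_test = rng.random((40, 4096)), rng.random((10, 4096))
y_train, y_test = np.repeat(np.arange(10), 4), np.arange(10)

for name, model in [("PCA", PCA(n_components=20)),
                    ("KPCA", KernelPCA(n_components=20, kernel="poly", degree=2))]:
    feat_train = model.fit_transform(X_train)   # enrol: project training faces
    feat_test = model.transform(X_test)         # identify: project probe faces
    acc = np.mean(identify(feat_train, y_train, feat_test) == y_test)
    print(f"{name}: recognition rate = {acc:.2f}")
```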