2. Represent every image I_i as a vector Γ_i. This Γ is an N^2 × 1 vector, corresponding to an N × N face image I.

3. Compute the average face vector Ψ:

   Ψ = (1/M) Σ_{i=1}^{M} Γ_i

4. Subtract the mean face:

   Φ_i = Γ_i − Ψ

5. Compute the covariance matrix C:

   C = (1/M) Σ_{n=1}^{M} Φ_n Φ_n^T = AA^T   (an N^2 × N^2 matrix)

6. Compute the eigenvectors u_i of AA^T. However, notice that AA^T is a very large N^2 × N^2 matrix (A itself is N^2 × M), and finding its eigenvectors is not practical in near-real-time. Therefore we consider a simpler approach:

   (a) Compute the eigenvectors v_i of the M × M matrix A^T A:

       A^T A v_i = µ_i v_i

   The relationship between the eigenvectors u_i and v_i is given by:

       A^T A v_i = µ_i v_i
       A A^T A v_i = µ_i A v_i
       A A^T u_i = µ_i u_i   (u_i = A v_i)

   AA^T and A^T A have the same eigenvalues, and their eigenvectors are related by u_i = A v_i. This reduces the burden of finding N^2 eigenvalues and eigenvectors down to only M eigenvalues and eigenvectors. These M eigenvectors …

In this K basis, each face (minus the mean) in the training set can be represented as a linear combination of the best K eigenvectors:

   Φ̂_i = Σ_{j=1}^{K} w_j u_j,   where w_j = u_j^T Φ_i

The u_j's, or eigenvectors, are also referred to as the eigenfaces. Each normalized training face Φ_i can be represented in this K basis by the vector:

   Ω_i = [w_1^i, w_2^i, ..., w_K^i]^T,   i = 1, 2, ..., M

Given an unknown image Γ, we want to know whether or not it is a face. To do so, we calculate the distance from face space, d:

1. Compute: Φ = Γ − Ψ
2. Compute: Φ̂ = Σ_{i=1}^{K} w_i u_i, where w_i = u_i^T Φ
3. Compute: d = ||Φ − Φ̂||
4. Define a detection threshold T_d. If d < T_d, then Γ is a face.

2.1.3 Face Recognition

Given an unknown face image Γ, we want to know whether or not it is in the training database. We first crop and center the face the same way as the training images. Then, we define a distance within the face space, r. The following procedures are taken:

1. Compute: Φ = Γ − Ψ
2. Project the normalized face onto the face space (eigenspace):

   Φ̂ = Σ_{i=1}^{K} w_i u_i,   where w_i = u_i^T Φ
3. Form the weight vector Ω = [w_1, w_2, ..., w_K]^T.

4. Compute r = min_l ||Ω − Ω_l||, the minimum distance between Ω and the training weight vectors Ω_l.

5. Define a recognition threshold T_r. If r < T_r, then Γ is recognized as face l from the training set.

It is generally applicable to view the areas with high contrast (within each eigenface) as the "features" represented.
Note that when a new image is presented and analyzed …

3 Our Attempt

3.1 Preface

3.2 Singular Value Decomposition

[Figure 1: Singular values from AT&T Facedatabase ("Eigenfaces singular values")]
Face Detection | Face Recognition | Implication
d < T_d        | r < T_r          | Image is a face; the face is recognized
d < T_d        | r > T_r          | Image is a face; the face is unrecognized
d > T_d        | r < T_r          | Image isn't a face; recognition returns a false positive
d > T_d        | r > T_r          | Image isn't a face; the face is unrecognized, as expected
[Figure: eigenfaces computed from the training set, shown as greyscale images (panels "Eigenface 5" through "Eigenface 16")]

…faces [1]. A proposed remedy to this problem is to discard the largest three principal components after performing PCA. However, there is no way to be sure that, for every training database presented, the top three principal components correspond to these large yet insignificant variances. Furthermore, it is very likely that we would lose other important "features" in the meantime.

…as a real-time adaptive system, but more suitable as a system with a static database that only performs recognition.

[Figure: nearest matches for a query face, with dist: 0.000000, 0.013918, 0.024913]

…method. While PCA maximizes the scatter between …
4.1 C++ Implementation

In the source code attached we have also implemented the aforementioned Eigenfaces method, along with the edits and improvements, in C++, using the fully templated "Eigen3" and "RedSVD" libraries. The goal in doing this is to increase the cross-platform compatibility of the program. However, we discovered that by foregoing the overhead of Matlab's built-in matrix manipulation, we see a runtime speed improvement of 50% on the AT&T database, from an average of 0.8 s down to 0.4 s. Another interesting phenomenon we observed was the sign ambiguity in singular value decomposition: when we calculated the SVD in our C++ implementation, we found that while the singular values and the absolute values of the eigenvectors are consistent, the signs of the eigenvectors are often flipped between the Matlab and the C++ solutions. As a result, the greyscale plots of the eigenfaces are sometimes inverted in "color". Since the following analysis only requires the distance measurement, the sign ambiguity is unimportant. If desired, the C++ code can be modified to follow the sign of the majority for each eigenvector, as that is the convention Matlab uses. The Matlab and C++ code is available at: https://github.com/Lauszus/Eigenfaces.

References

[1] Y. Adini, Y. Moses, and S. Ullman. Face recognition: The problem of compensating for changes in illumination direction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):721–732, 07 1997. ISSN 0162-8828. doi: 10.1109/34.598229.

[2] Philipp Wegner. Eigenfaces. http://www.bytefish.de/blog/eigenfaces, 07 2011. Visited: 2016-10-17.

[3] Matthew Turk and Alex Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71–86, 01 1991. ISSN 0898-929X. doi: 10.1162/jocn.1991.3.1.71.
[Figure: nearest matches for a second query face, with dist: 0.000000, 0.031590, 0.034310]