REPORT
(DOCUMENTATION WORK)
ON
PROJECT
(ONLY STUDY PURPOSE)
FACE RECOGNITION
USING BIOMETRICS
ON JAVA
GUIDED BY: MR. RAJENDRA SINGH
PROJECT BY: ARPIT KR. SHARMA
PROJECT
1. MINOR PART
2. MAJOR PART
CONTENTS
Abstract
1. Introduction
4. Face Recognition
5. Feed Forward Neural Network
6. Results
7. Performance Analysis and Discussions
8. Conclusion
References
Appendix
ABSTRACT
INTRODUCTION
Face recognition has been studied extensively for more than 20 years now. Since
the beginning of the 1990s the subject has become a major research topic, mainly
due to its important real-world applications in areas such as video surveillance,
smart cards, database security, and internet and intranet access.
The face plays a major role in our social interaction, conveying identity and
emotion. The human ability to recognize faces is remarkable. We can recognize
thousands of faces learned throughout our lifetime and identify familiar faces at a
glance even after years of separation. The skill is quite robust, despite large
changes in the visual stimulus due to viewing conditions, expression, aging, and
distractions such as glasses or changes in hairstyle.
Computational models of faces have been an active area of research since the late
1980s, for they can contribute not only to theoretical insights but also to practical
applications, such as criminal identification, security systems, image and film
processing, and human-computer interaction. However, developing a
computational model of face recognition is quite difficult, because faces are
complex, multidimensional, and subject to change over time. The basic task, given
as input the visual image of a face, is to compare the input face against models of
faces stored in a library and report a match if one is found. The problem of locating
the face, distinguishing it from a cluttered background, is usually avoided by
imaging the face against a uniform background.
Face recognition is difficult for two major reasons. First, faces form a class of
similar objects: all faces consist of the same facial features in roughly the same
geometrical configuration, which makes the recognition problem a fine
discrimination task. The second source of difficulty lies in the wide variation in
the appearance of a particular face due to changes in pose, lighting, and facial
expression.
Face representation falls into two categories. The first category is the global or
appearance-based approach, which uses holistic texture features applied to the
whole face or a specific region of it. The second category is the feature-based
or component-based approach, which uses the geometric relationships among
facial features such as the mouth, nose, and eyes. Wiskott et al. (1997)
implemented a feature-based approach by modeling the geometry of a face with a
2-D elastic graph.
The principal components analysis (PCA) method (Sirovich & Kirby, 1987; Kirby &
Sirovich, 1990), also called eigenfaces (Turk & Pentland, 1991; Pentland &
Moghaddam, 1994), is an appearance-based technique widely used for
dimensionality reduction, and it has recorded great performance in face recognition.
In the training phase, an eigenspace is established from the training samples using
PCA and the training face images are mapped to the eigenspace for classification.
In the classification phase, an input face is projected to the same eigenspace and
classified by an appropriate classifier.
CHAPTER:2
PCA can perform prediction, redundancy removal, feature extraction, data
compression, and so on. Because PCA is a classical technique that operates in the
linear domain, applications having linear models are suitable for it, such as signal
processing, image processing, system and control theory, and communications.
1) Mean:-
The mean of n data values x1, ..., xn is
x̄ = (1/n) Σ xi , where n = no. of data points.
2) Standard Deviation:-
If the random variable X takes on N values x1, ..., xN (which are real
numbers) with equal probability, then its standard deviation σ can be
calculated as follows:
σ = sqrt( (1/N) Σ (xi − x̄)² )
3) Variance:-
It is almost identical to the standard deviation; the variance is simply its square:
σ² = (1/N) Σ (xi − x̄)²
4) Covariance:-
The covariance between two real-valued random variables X and Y, with
expected values E[X] = μX and E[Y] = μY, is
cov(X, Y) = E[(X − μX)(Y − μY)]
5) Covariance Matrix:-
The covariance matrix is a matrix of covariances between elements of
a random vector. It is the natural generalization to higher dimensions of the
concept of the variance of a scalar-valued random variable.
For a square matrix A, a nonzero vector x satisfying
Ax = λx
is called an eigenvector of A associated with the eigenvalue λ.
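These definitions can be checked with a small numeric example. Below is a Python sketch (the project's own code is in MATLAB; the numbers are illustrative and not taken from the face database):

```python
# Worked numeric example of the statistics defined above.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)

mean = sum(data) / n                                # x-bar
variance = sum((x - mean) ** 2 for x in data) / n   # sigma^2 (equal-probability form)
std_dev = variance ** 0.5                           # sigma

# Covariance between two variables X and Y sampled together.
ys = [1.0, 3.0, 2.0, 5.0, 4.0, 6.0, 6.0, 9.0]
mean_y = sum(ys) / n
cov_xy = sum((x - mean) * (y - mean_y) for x, y in zip(data, ys)) / n

print(mean, variance, std_dev)   # 5.0 4.0 2.0
print(cov_xy)

# Eigenvector check: for A = [[2, 0], [0, 3]], x = (1, 0) satisfies A x = 2 x,
# so x is an eigenvector of A with eigenvalue 2.
A = [[2.0, 0.0], [0.0, 3.0]]
x = (1.0, 0.0)
Ax = (A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1])
print(Ax == (2.0 * x[0], 2.0 * x[1]))  # True
```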
CHAPTER:3
1) Get some data:- First of all we get some data. We use two dimensions here,
so that plots of the data can show what the PCA analysis is doing at each step.
2) Subtract the mean:- For PCA to work properly, we have to subtract the
mean from each of the data dimensions. The mean subtracted is the average across
each dimension. So, all the x values have x̄ (the mean of the x values of all the data
points) subtracted, and all the y values have ȳ subtracted from them. This produces
a data set whose mean is zero.
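The mean-subtraction step can be sketched in Python (the project's code is MATLAB; the ten 2-D points below are made up for illustration):

```python
# Step 2 in code: subtract the per-dimension mean so the data set becomes
# zero-mean. The 2-D points are illustrative only.
points = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2), (3.1, 3.0),
          (2.3, 2.7), (2.0, 1.6), (1.0, 1.1), (1.5, 1.6), (1.1, 0.9)]
n = len(points)
mean_x = sum(x for x, _ in points) / n   # average of the x values
mean_y = sum(y for _, y in points) / n   # average of the y values
centered = [(x - mean_x, y - mean_y) for x, y in points]

# Each dimension of the centered data now sums to (numerically) zero.
print(abs(sum(x for x, _ in centered)) < 1e-9)  # True
print(abs(sum(y for _, y in centered)) < 1e-9)  # True
```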
This will give us the original data solely in terms of the vectors we chose. Our
original data set had two axes, x and y, so our data was expressed in terms of
them. It is possible to express data in terms of any two axes that we like. If these
axes are perpendicular, then the expression is the most efficient; this is why it is
important that eigenvectors are always perpendicular to each other. We have
changed our data from being in terms of the axes x and y into being in terms of
our two eigenvectors. When the new data set has reduced dimensionality, i.e. we
have left some of the eigenvectors out, the new data is only in terms of the vectors
that we decided to keep.
We have transformed our data so that it is expressed in terms of the patterns
between the points, where the patterns are the lines that most closely describe the
relationships in the data. This is helpful because we have now classified each data
point as a combination of the contributions from each of those lines. Initially we
had the simple x and y axes. This is fine, but the x and y values of each data point
don't really tell us exactly how that point relates to the rest of the data. Now the
values of the data points tell us exactly where (i.e. above or below) the trend lines
each data point sits. In the case of the transformation using both eigenvectors, we
have simply altered the data so that it is in terms of those eigenvectors instead of
the usual axes.
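The whole chain of steps, from raw 2-D data to data expressed in terms of the principal eigenvector, can be sketched in Python. For a 2x2 covariance matrix the eigen-decomposition has a simple closed form, so no linear-algebra library is needed; the data points are made up for illustration:

```python
import math

# Minimal 2-D PCA following the steps in this chapter.
points = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2), (3.1, 3.0),
          (2.3, 2.7), (2.0, 1.6), (1.0, 1.1), (1.5, 1.6), (1.1, 0.9)]
n = len(points)

# Step 1: subtract the mean of each dimension.
mx = sum(x for x, _ in points) / n
my = sum(y for _, y in points) / n
centered = [(x - mx, y - my) for x, y in points]

# Step 2: covariance matrix [[a, b], [b, d]] (sample covariance, divisor n-1).
a = sum(x * x for x, _ in centered) / (n - 1)
b = sum(x * y for x, y in centered) / (n - 1)
d = sum(y * y for _, y in centered) / (n - 1)

# Step 3: eigenvalues of the symmetric 2x2 matrix (closed form).
disc = math.sqrt((a - d) ** 2 + 4 * b * b)
lam1 = (a + d + disc) / 2        # largest eigenvalue -> principal component
lam2 = (a + d - disc) / 2

# Step 4: unit eigenvector for lam1; (b, lam1 - a) solves (A - lam1*I)v = 0.
norm = math.hypot(b, lam1 - a)
e1 = (b / norm, (lam1 - a) / norm)

# Step 5: express the data in terms of the principal eigenvector only
# (dimensionality reduced from 2 to 1).
projected = [x * e1[0] + y * e1[1] for x, y in centered]
print(round(lam1, 4), round(lam2, 4))
```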
[Figure: recognition pipeline. The face database is split into a training set and a
testing set; the training set defines the PCA projection; each test image is
projected onto it (feature extraction), passed to a classifier (Euclidean distance),
and a decision is made.]
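The classifier stage of this pipeline can be sketched as a nearest-neighbour search under Euclidean distance. The 3-D feature vectors and person labels below are made up; real vectors would be PCA projection weights:

```python
import math

# Nearest-neighbour classification by Euclidean distance.
def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def classify(test_vec, training_set):
    # Return the label of the closest training vector and its distance.
    return min(((label, euclidean(test_vec, vec))
                for label, vec in training_set.items()),
               key=lambda pair: pair[1])

training_set = {"person_a": [0.9, 0.1, 0.3],
                "person_b": [0.2, 0.8, 0.5]}
label, dist = classify([0.85, 0.2, 0.25], training_set)
print(label)  # person_a
```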
CHAPTER:4
FACE RECOGNITION
A small database is created with images, each m pixels high and n pixels wide.
For each image in the database an image vector is created, and these vectors are
put in matrix form, which gives a starting point for PCA. The covariance is found
from the matrix of images, and from the covariance the eigenvectors are found for
the original set of images. The algorithm works by treating face recognition as a
two-dimensional recognition problem, taking advantage of the fact that faces are
normally upright and thus may be described by a small set of 2-D characteristic
views. Face images are projected onto a feature space ('face space') that best
encodes the variation among known face images. The face space is defined by the
eigenfaces, which are the eigenvectors of the set of faces; they do not necessarily
correspond to isolated features such as eyes, ears, and noses. So when a new
image is passed from the blob-detected image, the algorithm measures the
difference between the new image and the original images, not along the original
axes, but along the new axes derived from the PCA analysis. It turns out that these
axes work much better for recognizing faces, because the PCA analysis has given
us the original images in terms of the differences and similarities between them.
The eigenfaces approach for face recognition involves the following initialization
operations:
1. Acquire a set of training images.
2. Calculate the eigenfaces from the training set, keeping only the M
eigenfaces that correspond to the highest eigenvalues. These M eigenfaces
define the "face space". As new faces are experienced, the eigenfaces can be
updated.
Having initialized the system, the following steps are used to recognize new face
images:
1. Given an image to be recognized, calculate a set of weights for the M
eigenfaces by projecting it onto each of the eigenfaces.
2. Determine if the image is a face at all by checking to see if the image is
sufficiently close to the face space.
3. If it is a face, classify the weight pattern as either a known person or as
unknown.
4. (Optional) Update the eigenfaces and/or weight patterns.
5. (Optional) Calculate the characteristic weight pattern of the new face image,
and incorporate into the known faces.
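The recognition steps above can be sketched as follows. The "eigenfaces" here are two orthonormal toy vectors in a 4-pixel image space, assumed to be already computed from a training set, and the thresholds are arbitrary illustrative values; real eigenfaces would live in an m*n-pixel space:

```python
import math

mean_face = [0.5, 0.5, 0.5, 0.5]
eigenfaces = [[0.5, 0.5, -0.5, -0.5],   # u1 (unit length)
              [0.5, -0.5, 0.5, -0.5]]   # u2 (unit length, orthogonal to u1)
known_classes = {"alice": [0.8, 0.1], "bob": [-0.6, 0.4]}  # stored weight patterns

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def recognize(image, face_threshold=0.5, class_threshold=0.5):
    # Step 1: project the mean-subtracted image onto each eigenface.
    phi = [p - m for p, m in zip(image, mean_face)]
    weights = [dot(phi, u) for u in eigenfaces]
    # Step 2: distance to face space = the part of phi the eigenfaces
    # cannot reconstruct; if it is too large, the image is not a face.
    recon = [sum(w * u[i] for w, u in zip(weights, eigenfaces))
             for i in range(len(phi))]
    dist_face_space = math.sqrt(sum((a - b) ** 2 for a, b in zip(phi, recon)))
    if dist_face_space > face_threshold:
        return "not a face"
    # Step 3: nearest known weight pattern, or "unknown" if all are too far.
    name, d = min(((n, math.sqrt(sum((w - c) ** 2 for w, c in zip(weights, cw))))
                   for n, cw in known_classes.items()), key=lambda p: p[1])
    return name if d <= class_threshold else "unknown"

# An image whose weights are close to alice's stored pattern:
print(recognize([0.925, 0.825, 0.175, 0.075]))  # alice
```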
The two systems consist of two phases: the PCA feature extraction phase and the
neural network classification phase. The introduced systems provide an
improvement in recognition performance over conventional PCA face recognition
systems.
Neural networks are among the most successful decision-making systems; they
can be trained to perform complex functions in various fields of application,
including pattern recognition, optimization, identification, classification, speech,
vision, and control systems.
CHAPTER:5
FEED FORWARD NEURAL NETWORK
The FFNN is a suitable structure for nonlinearly separable input data. In the FFNN
model the neurons are organized in the form of layers. The FFNN requires a
training procedure in which the weights connecting the neurons in consecutive
layers are calculated from the training samples and target classes. The neurons in
a layer get input from the previous layer and feed their output to the next layer.
After generating the eigenvectors using the PCA method, the projection vectors of
the face images in the training set are calculated and then used to train the neural
network. This architecture is called PCA-NN for eigenfaces. In this type of
network, connections to neurons in the same or previous layers are not permitted.
[Figure: FFNN architecture. Input layer with n neurons, hidden layer with m
neurons, output layer with k neurons; each layer feeds forward into the next.]
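A minimal forward pass for such a layered network might look like this in Python (the weights are made up, and the training procedure, e.g. backpropagation, is omitted):

```python
import math

# Forward pass for a small FFNN: input layer (n = 3), one hidden layer
# (m = 2), and an output layer (k = 2), with sigmoid activations.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron takes input only from the previous layer (no connections
    # within a layer or back to earlier layers).
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

w_hidden = [[0.2, -0.4, 0.1], [0.5, 0.3, -0.2]]   # 2 hidden neurons x 3 inputs
b_hidden = [0.0, 0.1]
w_out = [[1.0, -1.0], [0.5, 0.5]]                 # 2 output neurons x 2 hidden
b_out = [0.0, 0.0]

x = [0.9, 0.1, 0.4]                                # e.g. a PCA projection vector
hidden = layer(x, w_hidden, b_hidden)
output = layer(hidden, w_out, b_out)
print(len(hidden), len(output))  # 2 2
```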
CHAPTER:6
RESULTS
The face recognition system was tested using a set of face images. All the training
and testing images are grayscale images of size 120x128. There are 16 persons in
the face image database, each having 27 distinct pictures taken under different
conditions (illuminance, head tilt, and head scale).
The training images are chosen to be those of full head scale, with head-on
lighting, and upright head tilt. The initial training set consists of 12 face images of
12 individuals, i.e. one image for one individual (M=12). These training images are
shown in Figure 1. Figure 2 is the average image of the training set.
Figure 3. Eigenfaces
Figure 4. Training image and test images with different head tilts.
a. training image; b. test image 1; c. test image 2
If the system correctly relates the test image with its correspondence in the training
set, we say it conducts a true-positive identification (Figures 5 and 6); if the system
relates the test image with a wrong person (Figure 7), or if the test image is from
an unknown individual while the system recognizes it as one of the persons in the
database, a false-positive identification is performed; if the system identifies the
test image as unknown while there does exist a correspondence between the test
image and one of the training images, the system conducts a false-negative
detection.
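These outcomes can be expressed as a small decision rule. The person names are illustrative, and "unknown" marks a test image with no correspondence in the training set:

```python
# Classify one recognition result against the true identity.
def outcome(true_identity, system_answer):
    if system_answer == "unknown":
        # System says unknown although a correspondence exists -> false negative.
        return "true-negative" if true_identity == "unknown" else "false-negative"
    if system_answer == true_identity:
        return "true-positive"
    # Wrong person returned, or a match claimed for an unknown individual.
    return "false-positive"

print(outcome("irfan", "irfan"))     # true-positive
print(outcome("foof", "irfan"))      # false-positive
print(outcome("unknown", "david"))   # false-positive
print(outcome("stan", "unknown"))    # false-negative
```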
The experiment results are illustrated in Table 1:
Figure 5 (irfan). Recognition with different head tilts—success!
a. test image 1; b. test image 2; c. training image
Figure 6 (david). Recognition with different head tilts—success!
a. test image; b. training image
Figure 7 (foof). Recognition with different head tilts—false!
a. test image 1; b. training image (irfan) returned by the face recognition system;
c. test image 2; d. training image (stephen) returned by the program
Recognition with varying illuminance:
Each training image (with head-on lighting) has two corresponding test images—
one with light moved by 45 degrees and the other with light moved by 90 degrees.
Other conditions, such as head scale and tilt, remain the same as in the training
image. The experiment results are shown in Table 2.
Table 2: Recognition with varying illuminance
Number of test images 24
Number of true-positive identifications 21
Number of false-positive identifications 3
Number of false-negative identifications 0
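From Table 2, the recognition rate under varying illuminance can be computed directly:

```python
# Recognition rate implied by Table 2: 21 of 24 test images identified correctly.
true_pos, false_pos, false_neg = 21, 3, 0
total = true_pos + false_pos + false_neg
rate = 100.0 * true_pos / total
print(total, rate)  # 24 87.5
```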
Figure 8 shows the difference between the training image and test images.
Figure 9 (stan). Recognition with varying illuminance—success!
a. test image, light moved by 45 degrees; b. training image, head-on lighting
Figure 10 (ming). Recognition with varying illuminance—false!
a. test image, light moved by 90 degrees; b. training image (foof) returned by the system, head-on
lighting
Figure 11. Training image and test images with varying head scale.
a. training image; b. test image 1: medium head scale; c. test image 2: small head scale
Figures 12 and 13 illustrate a true-positive example and a false-positive one
respectively.
Figure 12 (stan). Recognition with varying head scale—success!
a. test image 1, medium scale; b. test image 2, small scale; c. training image, full scale
Figure 13 (pascal). Recognition with varying head scale—false!
a. test image, medium scale; b. training image (robert), full scale
CHAPTER:7
CONCLUSIONS
An eigenfaces-based face recognition approach was implemented in MATLAB.
This method represents a face by projecting the original images onto a
low-dimensional linear subspace (the 'face space') defined by the eigenfaces. A
new face is compared to the known face classes by computing the distance
between their projections onto the face space.
(1) To reduce the false-positive rate, we can make the system return a number
of candidates from the existing face classes instead of a single face class, and
leave the remaining work to a human operator.
(2) Regarding the pattern vector representing a face class, we can make each
face class consist of several pattern vectors, each constructed from a face image
of the same individual under a certain condition, rather than taking the average
of these vectors to represent the face class.
CODING PART
All the coding was done in MATLAB 7, which already provides many predefined
functions.