DECLARATION
We hereby declare that the project work entitled “STUDENT TRACKING SYSTEM IN LPU” is an authentic record of our own work, carried out as a requirement of the Capstone Project for the award of the B.Tech degree in CSE from Lovely Professional University, Phagwara, under the guidance of Mr Gurpreet Singh, during January to April 2014. All the information furnished in this capstone project report is based on our own intensive work and is genuine.
CERTIFICATE
This is to certify that the declaration statement made by this group of students is correct to the best of my knowledge and belief. They have completed this Capstone Project under my guidance and supervision. The present work is the result of their original investigation, effort and study. No part of the work has ever been submitted for any other degree at any university. The Capstone Project is fit for submission and for partial fulfillment of the conditions for the award of the B.Tech degree in CSE from Lovely Professional University, Phagwara.
Designation
Date :
INDEX
1. Introduction
1.1 Face Recognition
1.2 ID Card Image Processing
3. Existing System
3.1 Introduction
3.2 Existing Software
7. Testing
7.1 Functional Testing
7.2 Structural Testing
7.3 Level of Testing
7.4 Integration Testing
7.5 Smoke Testing
7.6 Testing the Project
9. Project Legacy
9.1 Current Status of the Project
9.2 Remaining Areas of Concern
9.3 Services Provided
9.4 Technical Lessons
9.5 Managerial Lessons
12. Bibliography
1. Introduction
1.1 Face Recognition
Face recognition is part of the wider field of pattern recognition. Recognition, and face recognition in particular, covers a range of activities from many walks of life. Face recognition is something humans are particularly good at, and science and technology have brought many similar tasks within reach of machines. Face recognition in general, and the recognition of moving people in natural scenes in particular, requires a set of visual tasks to be performed robustly. The process comprises three main tasks: acquisition, normalization and recognition. Acquisition is the detection and tracking of face-like image patches in a dynamic scene; normalization is the segmentation, alignment and normalization of the face images; and recognition is the representation and modeling of face images as identities, and the association of novel face images with known models.
Given the requirement for determining people's identity, the obvious question is: what technology is best suited to supply this information? Humans can identify each other in many ways, and the same holds for machines. There are many different identification technologies available, many of which have been in commercial use for years. The most common person verification and identification methods today are password/PIN (Personal Identification Number) systems. The problem with these and similar techniques is that they are not unique, and it is possible to forget or lose them, or to have them stolen. To overcome these problems, considerable interest has developed in "biometric" identification systems, which use pattern recognition techniques to identify people by their physical characteristics; examples include fingerprint, retina and iris recognition. These techniques are not always easy to use, however. For example, in bank transactions and entry into secure areas, such technologies have the disadvantage of being intrusive both physically and socially: the user must position the body relative to the sensor and then pause for a second to declare himself or herself. That does not mean face recognition needs no specific positioning; as we analyze later, the pose and appearance of the captured image are very important. While pause-and-present interaction is useful in high-security settings, it is exactly the opposite of what is required when building a store that recognizes its best customers, an information kiosk that remembers you, or a house that knows the people who live there. Face recognition from video and voice recognition have a natural place in these next-generation smart environments: they are unobtrusive, usually passive, do not restrict user movement, and are now both low-power and inexpensive. Perhaps most important, however, is that humans identify other people by face and voice, and are therefore likely to be comfortable with systems that use face and voice recognition.
1.2 ID CARD IMAGE PROCESSING
As we mentioned in the preface, human beings are predominantly visual creatures: we rely heavily on our vision to make sense of the world around us. We not only look at things to identify and classify them; we can scan for differences and obtain an overall rough feeling for a scene with a quick glance. Humans have evolved very precise visual skills: we can identify a face in an instant, and we can process a large amount of visual information very quickly. However, the world is in constant motion: stare at something for long enough and it will change in some way. Even a large solid structure, like a building or a mountain, will change its appearance depending on the time of day (day or night), the amount of sunlight (clear or cloudy), or the various shadows falling upon it. We are concerned here with single images: snapshots, if you like, of a visual scene. Although image processing can deal with changing scenes, we shall not discuss that in any detail in this text. For our purposes, an image is a single picture which represents something. It may be a picture of a person, of people or animals, of an outdoor scene, a microphotograph of an electronic component, or the result of medical imaging. Image processing involves changing the nature of an image in order to either
(1) improve its pictorial information for human interpretation, or
(2) render it more suitable for autonomous machine perception.
We shall be concerned with digital image processing, which involves using a computer to change the nature of a digital image. It is necessary to realize that these are two separate but equally important aspects of image processing. A procedure which satisfies condition (1), making an image look better, may be the very worst procedure for satisfying condition (2): humans like their images to be sharp, clear and detailed; machines prefer their images to be simple and uncluttered.
2. Profile of the Problem: Rationale/Scope of the Study (Problem Statement)
The basic problem with the existing system was that people were entering the university without verification by any automated system. ID cards were checked manually by the security staff, which allowed outsiders to enter the university and potentially cause damage. This method of verification is also not fruitful, as it carries all the demerits of inspection by the naked eye.
In secured systems, many kinds of credentials are used to access the premises: cards, tokens, keys and the like can be misplaced, forgotten, purloined or duplicated, and magnetic access cards can become corrupted and unreadable. A face recognition system is more secure because the facial image itself serves as the ID; it also helps to avoid any duplicated identification.
A full record will be kept of the official number of students in the university.
3. Existing System
3.1 Introduction
Our university has about 40,000 students on a single campus. The university posts many security guards at the main gate to check whether each person entering is genuine, i.e., whether he or she belongs to the university. The guards check every student's ID card, which is irritating to the students, and checking each student individually takes time. But why does the university check every student? Because the campus is full of students, and the security of those students is the university's responsibility. We therefore propose a new security system for the university: the Student Tracking System.
4. Problem Analysis
The problem of face recognition begins with face detection. This is a fact that seems quite bizarre to new researchers in this area. However, before face recognition is possible, one must be able to reliably find a face and its landmarks. This is essentially a segmentation problem, and in practical systems most of the effort goes into solving this task; the actual recognition based on features extracted from these facial landmarks is only a minor final step.
A successful face detection in an image with a frontal view of a human face
Most face detection systems attempt to extract a fraction of the whole face, thereby eliminating most of the background and other areas of an individual's head, such as hair, that are not necessary for the face recognition task. With static images, this is often done by running a 'window' across the image. The face detection system then judges whether a face is present inside the window (Brunelli and Poggio, 1993). Unfortunately, with static images there is a very large search space of possible locations of a face: faces may be large or small and be positioned anywhere from the upper left to the lower right of the image.
Most face detection systems use an example-based learning approach to decide whether or not a face is present in the window at a given instant (Sung and Poggio, 1994; Sung, 1995). A neural network or some other classifier is trained using supervised learning with 'face' and 'non-face' examples, thereby enabling it to classify an image (the window, in a face detection system) as 'face' or 'non-face'. Unfortunately, while it is relatively easy to find face examples, how would one find a representative sample of images which represent non-faces (Rowley et al., 1996)? Face detection systems using example-based learning therefore need thousands of 'face' and 'non-face' images for effective training: Rowley, Baluja and Kanade (Rowley et al., 1996) used 1025 face images and 8000 non-face images (generated from 146,212,178 sub-images) for their training set.
Another technique for determining whether there is a face inside the face detection system's window is template matching. The difference between a fixed target pattern (a face) and the window is computed and thresholded; if the window contains a pattern close to the target pattern, the window is judged to contain a face. One implementation of template matching, correlation templates, uses a whole bank of fixed-size templates to detect facial features in an image; by using several templates of different (fixed) sizes, faces of different scales are detected. The other implementation uses a deformable (non-rigid) template, changing the size of the template in the hope of detecting a face in the image.
A face detection scheme related to template matching is image invariants. Here the fact that the local ordinal structure of the brightness distribution of a face remains largely unchanged under different illumination conditions (Sinha, 1994) is used to construct a spatial template of the face which closely corresponds to facial features. In other words, the average grey-scale intensities in human faces are used as a basis for face detection. For example, an individual's eye region is almost always darker than his forehead or nose, so an image matches the template if it satisfies these 'darker than' and 'brighter than' relationships.
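The 'darker than' relationship above can be illustrated with a toy check on mean region intensities. This is a Python sketch, not the project's code; the region boxes are assumptions for illustration.

```python
import numpy as np

def matches_ordinal_template(gray):
    """Toy image-invariant check: eye band darker than forehead and nose.
    `gray` is a 2D array of grey levels for a roughly aligned face crop;
    the region boxes below are illustrative, not from the report."""
    h, w = gray.shape
    forehead = gray[0:h // 4, :].mean()
    eyes     = gray[h // 4:h // 2, :].mean()
    nose     = gray[h // 2:3 * h // 4, w // 3:2 * w // 3].mean()
    return eyes < forehead and eyes < nose

# A synthetic "face": bright forehead and nose, dark eye band.
face = np.full((40, 40), 200.0)
face[10:20, :] = 50.0  # dark eye region
print(matches_ordinal_template(face))  # True for this synthetic input
```

A real detector would combine many such ordinal relations and scan the template across the image at several scales.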
4.1.2 Real-Time Face Detection
Real-time face detection involves detecting a face in a series of frames from a video capture device. While the hardware requirements for such a system are far more stringent, from a computer vision standpoint real-time face detection is actually a far simpler process than detecting a face in a static image. This is because, unlike most of our surrounding environment, people are continually moving: we walk around, blink, fidget, wave our hands about, and so on.
Since in real-time face detection the system is presented with a series of frames in which to detect a face, spatiotemporal filtering (finding the difference between subsequent frames) can identify the area of the frame that has changed, and the individual can then be detected using two simple rules:
1) The head is the small blob above a larger blob (the body).
2) Head motion must be reasonably slow and contiguous; heads won't jump around erratically.
Real-time face detection has therefore become a relatively simple problem and is possible even in unstructured and uncontrolled environments using these very simple image processing techniques and reasoning rules.
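The frame-differencing step described above can be sketched in a few lines; this is an illustrative Python version, with an arbitrary threshold, rather than the project's implementation.

```python
import numpy as np

def motion_mask(prev_frame, frame, thresh=25):
    """Spatiotemporal filtering sketch: mark pixels whose grey level
    changed by more than `thresh` between consecutive frames.
    The threshold value is illustrative."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > thresh

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1:3, 1:3] = 200            # a small "moving blob"
mask = motion_mask(prev, curr)
print(mask.sum())               # 4 changed pixels
```

The connected regions of the mask would then be grouped into blobs so the head/body rules above can be applied.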
Over the last few decades many techniques have been proposed for face recognition. Many of the techniques proposed during the early stages of computer vision cannot be considered successful, but almost all of the recent approaches to the face recognition problem have been creditable. According to the research by Brunelli and Poggio, all approaches to human face recognition can be divided into two strategies: geometrical features and template matching.
4.2.1 Face Recognition using Geometrical Features
This technique involves computing a set of geometrical features, such as nose width and length, mouth position and chin shape, from the picture of the face we want to recognize. This set of features is then matched with the features of known individuals; a suitable metric, such as Euclidean distance (finding the closest vector), can be used to find the closest match. Most pioneering work in face recognition was done using geometric features (Kanade), although Craw et al. did relatively recent work in this area.
Geometrical features (white) which could be used for face recognition
The advantage of using geometrical features as a basis for face recognition is that recognition is possible even at very low resolutions and with noisy images (images with many disorderly pixel intensities): although the face cannot be viewed in detail, its overall geometrical configuration can still be extracted. The technique's main disadvantage is that automated extraction of the facial geometrical features is very hard. Automated geometrical-feature extraction is also very sensitive to the scaling and rotation of a face in the image plane (Brunelli and Poggio, 1993). This is apparent in Kanade's (1973) results, where he reported a recognition rate of between 45% and 75% with a database of only 20 people. However, if these features are extracted manually, as in Goldstein et al. (1971) and Kaya and Kobayashi (1972), satisfactory results may be obtained.
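The Euclidean nearest-neighbour matching described above can be sketched as follows; the feature values and gallery are invented purely for illustration.

```python
import numpy as np

# Hypothetical gallery: each row is a geometrical feature vector
# (e.g. nose width, nose length, mouth position, a chin-shape measure).
gallery = np.array([
    [3.1, 5.0, 7.2, 4.4],   # person 0
    [2.4, 4.1, 6.8, 5.0],   # person 1
    [3.6, 5.5, 7.0, 3.9],   # person 2
])

def closest_match(probe, gallery):
    """Return the index of the gallery vector nearest to `probe`
    under Euclidean distance, plus that distance."""
    dists = np.linalg.norm(gallery - probe, axis=1)
    idx = int(np.argmin(dists))
    return idx, float(dists[idx])

probe = np.array([2.5, 4.0, 6.9, 5.1])
idx, d = closest_match(probe, gallery)
print(idx)  # 1
```

In practice a distance threshold would also be applied, so that probes far from every gallery entry are rejected rather than assigned to the least-bad match.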
4.2.2 Face Recognition using Template Matching
This is similar to the template matching technique used in face detection, except that here we are not trying to classify an image as 'face' or 'non-face' but are trying to recognize a face.
Whole face, eyes, nose and mouth regions which could be used in a template matching strategy
The basis of the template matching strategy is to extract whole facial regions (matrices of pixels) and compare these with the stored images of known individuals. Once again, Euclidean distance can be used to find the closest match. The simple technique of comparing grey-scale intensity values for face recognition was used by Baron (1981).
However, there are far more sophisticated methods of template matching for face recognition, involving extensive pre-processing and transformation of the extracted grey-level intensity values. For example, Turk and Pentland (1991a) used Principal Component Analysis, sometimes known as the eigenfaces approach, to pre-process the grey levels, and Wiskott et al. (1997) used elastic graphs encoded using Gabor filters to pre-process the extracted regions.
An investigation of geometrical features versus template matching for face recognition by Brunelli and Poggio (1993) came to the conclusion that although a feature-based strategy may offer higher recognition speed and smaller memory requirements, template-based techniques offer superior recognition accuracy.
4.4 Disadvantages of the Present System
Not user friendly: The existing system is not user friendly because retrieval of data is very slow and data is not maintained efficiently.
Manual control: All calculations for generating reports are done manually, so there is a greater chance of errors.
Lots of paperwork: The existing system requires a lot of paperwork. The loss of even a single register or record leads to a difficult situation, because all the papers are needed to generate the reports.
Time consuming: Every task is done manually, so reports cannot be generated in the middle of the session or on demand; doing so is very time consuming.
Advantages of the Proposed System
User friendly: The proposed system is user friendly because retrieval and storing of data is fast and data is maintained efficiently. Moreover, the proposed system provides a graphical user interface, which allows the user to deal with the system very easily.
Reports are easily generated: Reports can be easily generated in the proposed system, so the user can generate a report as required (e.g. monthly) or in the middle of the session, and can give notice to students so that they become regular.
Very little paperwork: The proposed system requires very little paperwork. All the data is fed into the computer immediately and reports can be generated through the computer; work becomes very easy because there is no need to keep data on paper.
Computer operator control: A computer operator will be in control, so there is little chance of errors. Moreover, storing and retrieving information is easy, so work can be done speedily and on time.
5. Software Requirement Analysis
MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in one environment. Furthermore, MATLAB is a modern programming language environment: it has sophisticated data structures, contains built-in editing and debugging tools, and supports object-oriented programming. These factors make MATLAB an excellent tool for teaching and research.
MATLAB has many advantages over conventional computer languages (e.g., C, FORTRAN) for solving technical problems. MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. The software package has been commercially available since 1984 and is now considered a standard tool at most universities and in industry worldwide. It has powerful built-in routines that enable a very wide variety of computations, and easy-to-use graphics commands that make visualization of results immediately available. Specific applications are collected in packages referred to as toolboxes; there are toolboxes for signal processing, symbolic computation, control theory, simulation, optimization, and several other fields of applied science and engineering.
The order in which MATLAB performs arithmetic operations is exactly that taught in high school algebra courses: exponentiations are done first, followed by multiplications and divisions, and finally by additions and subtractions. To make the evaluation of expressions unambiguous, MATLAB follows a series of precedence rules:
First, the contents of all parentheses are evaluated, starting from the innermost parentheses and working outward.
Second, all exponentiations are evaluated, working from left to right.
Third, all multiplications and divisions are evaluated, working from left to right.
Fourth, all additions and subtractions are evaluated, working from left to right.
Parentheses can always be used to overrule this priority, and their use is recommended in complex expressions to avoid ambiguity.
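Python happens to share MATLAB's precedence for these basic operations (parentheses, then powers, then multiplication/division, then addition/subtraction), though not for every operator, so the rules can be demonstrated with a short sketch:

```python
# The same expressions evaluate identically in MATLAB for these operators.
a = 2 + 3 * 4        # multiplication before addition -> 14
b = (2 + 3) * 4      # parentheses overrule priority  -> 20
c = 2 ** 3 + 1       # exponentiation before addition -> 9
d = 8 / 4 / 2        # divisions left to right        -> 1.0
print(a, b, c, d)
```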
The name MATLAB stands for MATrix LABoratory; it was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. Today the MATLAB engine incorporates the LAPACK and BLAS libraries, embedding the state of the art in software for matrix computation. MATLAB is an interactive, matrix-based system for scientific and engineering numeric computation and visualization. Its basic data element is an array that does not require dimensioning. It is used to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar, non-interactive language such as C or FORTRAN.
Programming and developing algorithms is faster with MATLAB than with traditional languages because MATLAB supports interactive development without the need to perform low-level administrative tasks such as declaring variables and allocating memory. Thousands of engineering and mathematical functions are available, eliminating the need to code and test them yourself. At the same time, MATLAB provides all the features of a traditional programming language, including arithmetic operators, flow control, data structures, data types, object-oriented programming, and debugging features.
MATLAB helps you better understand and apply concepts in a wide range of engineering, science, and
mathematics applications, including signal and image processing, communications, control design, test and
measurement, financial modeling and analysis, and computational biology. Add-on toolboxes, which are
collections of task- and application-specific MATLAB functions, add to the MATLAB environment to solve
particular classes of problems in these application areas.
With over one million users, MATLAB is recognized as a standard tool for increasing the productivity of
engineers and scientists. Employers worldwide consistently report the advantages of being MATLAB
proficient.
Differences in MATLAB Student
MATLAB Student provides all the features and capabilities of the professional version of MATLAB software, with no limitations. There are only a few small differences between the MATLAB Student interface and the professional version of MATLAB.
5.1 System Requirements
Operating systems: Windows 8.1, Windows 8, Windows 7 Service Pack 1, Windows Vista Service Pack 2, Windows XP Service Pack 3
Processor: Any Intel or AMD x86 processor supporting the SSE2 instruction set
Disk space: 1 GB for MATLAB only; 3–4 GB for a typical installation
RAM: 1024 MB (at least 2048 MB recommended)
6. Design
6.2 Pseudo code
Pseudo code for creating the database
Step 1: START
Step 2.2: Assign the respective serial number to the ID card image and the student image
Step 3: END
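The surviving pseudo-code steps are fragmentary. A minimal sketch of the database-creation flow they imply might look like the following; the folder layout, file names and `enroll` function are all hypothetical, and the real project stores images through MATLAB rather than by copying files.

```python
import shutil
from pathlib import Path

def enroll(student_photo, id_card_photo, train_dir="TrainDatabase"):
    """Sketch of the database-creation step: store a student photo and
    the matching ID card photo under the same serial number, so that
    recognition can later map an index back to a person."""
    train = Path(train_dir)
    train.mkdir(exist_ok=True)
    serial = len(list(train.glob("*_face.jpg"))) + 1   # next serial number
    shutil.copy(student_photo, train / f"{serial}_face.jpg")
    shutil.copy(id_card_photo, train / f"{serial}_id.jpg")
    return serial
```

Keying both images to one serial number mirrors Step 2.2 above: the face image and the ID card image of a student must share an index for matching to be possible.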
Pseudo code for facial recognition
Step 4: END
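The facial recognition pseudo-code did not survive extraction, but judging from the eigenface source code in section 11 the flow is roughly: project the captured image onto the stored eigenfaces, find the minimum Euclidean distance to the projected training images, and threshold it. A Python sketch under that assumption (threshold and data are illustrative):

```python
import numpy as np

def recognize(capture, mean_face, eigenfaces, projected, threshold=0.8e8):
    """Eigenface recognition sketch mirroring the MATLAB code in section 11.
    capture: flattened grey-scale image vector; mean_face: training mean;
    eigenfaces: columns are eigenface basis vectors; projected: columns are
    training-image weights. Returns the best index, or None if no match."""
    weights = eigenfaces.T @ (capture.astype(float) - mean_face)
    dists = np.linalg.norm(projected - weights[:, None], axis=0)
    idx = int(np.argmin(dists))
    return idx if dists[idx] < threshold else None

# Tiny synthetic example: 2 training images in a 2-D eigenspace.
eigenfaces = np.eye(4)[:, :2]                    # trivial basis, illustration only
mean_face = np.zeros(4)
projected = np.array([[1.0, 5.0], [1.0, 5.0]])   # weights of the 2 training images
result = recognize(np.array([1.1, 1.0, 0.0, 0.0]),
                   mean_face, eigenfaces, projected, threshold=1.0)
print(result)  # 0
```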
7. TESTING
Knowing the specific functions that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while simultaneously searching for errors in each function. This approach is known as black-box testing.
Knowing the internal workings of a product, tests can be conducted to ensure that the internal operation performs according to specification and that all internal components have been adequately exercised. This approach is known as white-box testing.
Black-box tests are designed to uncover errors. They are used to demonstrate that software functions are operational, that input is properly accepted and output is correctly produced, and that the integrity of external information (e.g., data files) is maintained. A black-box test examines some fundamental aspects of a system with little regard for the internal logical structure of the software.
White-box testing of software is predicated on close examination of procedural detail. Test cases that exercise specific sets of conditions and loops test the logical paths through the software. The "state of the program" may be examined at various points to determine whether the expected or asserted status corresponds to the actual status.
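As a concrete illustration, white-box test cases aim to exercise every logical path. `classify_distance` below is a hypothetical helper modelled on the two-threshold decision in the section 11 source code, with one assertion per path:

```python
def classify_distance(d, match_thresh=0.8e8, known_thresh=0.35e8):
    """Hypothetical helper mirroring the report's two-threshold logic:
    below known_thresh -> known user; below match_thresh -> a match but
    the name is unknown; otherwise no match."""
    if d < match_thresh:
        return "known" if d < known_thresh else "unknown-match"
    return "no-match"

# White-box tests: one case per logical path through the function.
assert classify_distance(0.1e8) == "known"
assert classify_distance(0.5e8) == "unknown-match"
assert classify_distance(1.0e8) == "no-match"
```

A black-box test of the same system would instead present a face image and check only the displayed result, without reference to the thresholds inside.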
Regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.
In top-down integration testing, the main control module is used as a test driver, and stubs (dummy modules) are substituted for all components directly subordinate to the main control module. The system is also tested to ensure that new errors have not been introduced. In a well-factored program structure, decision-making occurs at the upper levels of the hierarchy and is therefore encountered first; if major control problems exist, early recognition is essential.
Bottom-up integration testing begins construction and testing with atomic modules. Because components are integrated from the bottom up, the processing required for components subordinate to a given level is always available, and the need for stubs is eliminated. It proceeds as follows:
Low-level components are combined into clusters that perform a specific software function.
A driver (a control program for testing) is written to coordinate test case input and output.
The cluster is tested.
Drivers are removed and clusters are combined, moving upward in the program structure.
Each time a new module is added as part of integration testing, the software changes: new data flow paths are established, new I/O can occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, successful tests result in the discovery of errors, and errors must be corrected; when software is corrected, some aspect of the software configuration is changed.
7.5 Smoke Testing
Smoke testing is an integration testing approach commonly used when "shrink-wrapped" software products are being developed. It is designed as a pacing mechanism for time-critical projects, allowing the team to assess the project on a frequent basis. It consists of the following steps:
Software components that have been translated into code are integrated into a "build". A build includes all data files, libraries, reusable modules and engineered components.
A series of tests is designed to expose errors that will keep the build from properly performing its function.
The build is integrated with other builds, and the entire product is smoke tested daily.
Validation testing is conducted prior to delivery to the customer by the validation team, to validate the product against the customer requirement specifications and the user documentation.
7.6 Testing the Project
The best approach is to test each subsystem separately, as we have done in our project. It is best to test a system during the implementation stage in small sub-steps rather than in large chunks. We tested each module separately, i.e., we completed unit testing first, and system testing was done after combining and linking all the different modules with their different options, followed by thorough testing. Once each lowest-level unit had been tested, units were combined with related units and retested in combination. This proceeded hierarchically, bottom-up, until the entire system was tested as a whole. Hence we used the bottom-up approach for testing our system.
8. IMPLEMENTATION OF IMAGE RECOGNITION TECHNOLOGY
The implementation of image recognition technology includes the following stages:
Data acquisition
Input processing
Image classification and decision making
The proposed method was carried out using a picture database. The database was built from several photographs of each person at different expressions. These expressions can be classified into discrete classes such as happy, angry, disgusted, sad and neutral; the absence of any expression is the "neutral" expression. The database is kept in the train folder, which contains all the photographs of each person.
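The train-folder database described above ultimately becomes a matrix whose columns are flattened grey-scale images. A Python sketch under that assumption (a real version would read the JPEG files from the train folder):

```python
import numpy as np

def build_train_matrix(images):
    """Assemble the training set: each grey-scale image (2D array) becomes
    one column, flattened row-wise to match reshape(InputImage', m*n, 1)
    in the MATLAB source of section 11. Returns the mean-centred data and
    the mean face, ready for an eigenface (PCA) computation."""
    cols = [img.astype(float).flatten() for img in images]  # row-wise flatten
    T = np.column_stack(cols)
    mean_face = T.mean(axis=1, keepdims=True)
    return T - mean_face, mean_face

# Three tiny synthetic "photographs" of uniform brightness.
imgs = [np.full((2, 3), v) for v in (10.0, 20.0, 30.0)]
A, m = build_train_matrix(imgs)
print(A.shape)  # (6, 3): 6 pixels per image, 3 images
```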
A pre-processing module locates the eye positions and accounts for the surrounding lighting conditions and colour variance. First, the presence of a face (or faces) in the scene must be detected. Once the face is detected, it must be localized, and a normalization process may be required to bring the dimensions of the live facial sample into alignment with the one on the template. Some facial recognition approaches use the whole face, while others concentrate on facial components and/or regions (such as the lips and eyes). The appearance of the face can change considerably during speech and due to facial expressions.
8.3 Image Classification and Decision Making
Synergetic computers are used to classify the optical and audio features, respectively. A synergetic computer is a set of algorithms that simulates synergetic phenomena. In the training phase, the BIOID system creates a prototype called a face print for each person. A newly recorded pattern is pre-processed and compared with each face print stored in the database. As comparisons are made, the system assigns a value to each comparison on a scale of one to ten; if a score is above a predetermined threshold, a match is declared. From the image of the face, particular traits are extracted: the system may measure various nodal points of the face, such as the distance between the eyes and the width of the nose. These are fed to a synergetic computer, which consists of algorithms to capture and process the sample and compare it with the ones stored in the database.
We can also track lip movement, which is likewise fed to the synergetic computer. By observing the likelihood of each sample matching the one stored in the database, we can accept or reject the sample.
9. Project Legacy
9.1 Current Status of the Project
The project is completed and ready to be deployed on campus to monitor the entry of students.
10. User manual
1. For creating the database
1.1
Run MATLAB R2013a.
Open main.m.
Open the face.m file.
1.2
Run the face.m file by clicking the Run button.
Take an image of the student's face by clicking the Start button and then the Capture button.
2.3
Click the Start button; the camera will become active.
Place the ID card in front of the camera.
Click the Match button; it will match this image against the database.
If the user is valid, "user is authenticated" is displayed.
If the user is not valid, "you are not allowed to the system" is displayed.
Reset button: resets the database.
Delete Database button: deletes the database.
Create Database button: to update the database, enter the number of images in the white rectangle box, then click the Create Database button.
Exit button: exits the GUI.
3. Facial Recognition
3.1
Open face.m and run it by clicking the Run button.
3.2
The GUI for the Face Recognition System opens.
The camera will become active.
11. Source Code
end
if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
% --- Executes just before main is made visible.
function main_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% varargin command line arguments to main (see VARARGIN)
handles.output = hObject;
% Update handles structure
guidata(hObject, handles);
% --- Outputs from this function are returned to the command line.
function varargout = main_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
%handles.im= imagesc(getsnapshot(handles.video));
% saveas(gca,['ArialPic_' int2str(i)],'jpg') ;
% set(handles.startAcquisition,'Enable','on');
%set(handles.captureImage,'Enable','on');
else
% Camera is on. Stop camera and change button string.
set(handles.pushbutton1,'String','Start')
stop(handles.video)
% set(handles.startAcquisition,'Enable','off');
%set(handles.captureImage,'Enable','off');
end
imwrite(handles.im,'Capture.jpg');
load Eigenface.mat;
%saveas(handles.axis2,['1'],'jpg') ;
%Algo
% input='1.jpg';
% %imageno='1.jpg';
%uu;
% outputimage=Recognition(T, m1, Eigenfaces, ProjectedImages, imageno);
% if(imageno==1)
% set(handles.edit2,'String','2');
% end
MeanInputImage=[];
% for non real time
%[fname pname]=uigetfile('*.jpg','Select the input image for recognition');
% For Real Time
fname='Capture.jpg';
%end
InputImage=imread(fname);
InputImage=rgb2gray(InputImage);
InputImage=imresize(InputImage,[200 180],'bilinear'); % resize the input image (image preprocessing step)
[m n]=size(InputImage);
imshow(InputImage);
Imagevector=reshape(InputImage',m*n,1);%to get elements along rows as we take InputImage'
MeanInputImage=double(Imagevector)-m1;
ProjectInputImage=Eigenfaces'*MeanInputImage; % the weights of the input image with respect to our eigenfaces
% Next we compute the Euclidean distance between the input image's weights
% and each projection in our image space; a match is decided against a
% threshold chosen by trial and error.
Euclideandistance=[];
for i=1:T
temp=ProjectedImages(:,i)-ProjectInputImage;
Euclideandistance=[Euclideandistance temp];
end
% The loop above builds a matrix of difference vectors; we take the norm of
% each column and then find the minimum Euclidean distance.
tem=[];
for i=1:size(Euclideandistance,2)
k=Euclideandistance(:,i);
tem(i)=sqrt(sum(k.^2));
end
% We now apply threshold values to decide whether the image is a match and,
% if so, whether it is a known image. The thresholds were chosen by trial
% and error.
[MinEuclid, index]=min(tem);
if(MinEuclid<0.8e008)
if(MinEuclid<0.35e008)
outputimage=(strcat(int2str(index),'.jpg'));
[cdata] = imread('icon.jpg');
% figure,imshow(outputimage);
switch index % the names of the persons are entered in the code itself;
% there is no provision for entering a name in real time
case 1
%str1;
case 2
case 3
case 4
case 5
case 6
case 7
case 8
case 9
case 11
case 12
case 13
case 14
case 15
case 16
case 17
case 18
case 19
msgbox('Operation Completed - User is Authenticated',...
'Success','custom',cdata);
case 20
otherwise
msgbox('Image in database but name unknown')
%h = msgbox('Not Allowed');
%set(handles.edit2,'String','2');
end
else
msgbox('No matches found');
msgbox('You are not allowed to enter this system', 'Error','error');
outputimage=0;
end
else
msgbox('No matches found');
msgbox('You are not allowed to enter this system', 'Error','error');
%disp('Hello');
outputimage=0;
end
save test2.mat % save the workspace variables so they can be reused later
global a;
global b;
global c;
set(handles.edit2,'String',a);
set(handles.edit3,'String',c);
set(handles.edit4,'String',b);
a=0;
b=0;
c=0;
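The matching logic above (mean subtraction, projection onto the eigenfaces, Euclidean distance to each training projection, and thresholding) can be sketched in Python/NumPy as follows. The array names and shapes are illustrative assumptions, not the project's actual variables:

```python
# Hedged sketch of the eigenface matching performed in the MATLAB code above.
# Variable names and shapes are assumptions for illustration only.
import numpy as np

def recognize(input_vec, mean_face, eigenfaces, projected_images,
              t_match=0.35e8):
    """Return ('match', index) or ('no_match', None).

    input_vec        : flattened grayscale image, shape (m*n,)
    mean_face        : mean training vector (m1 above), shape (m*n,)
    eigenfaces       : shape (m*n, k)
    projected_images : shape (k, T), one column per training image
    t_match          : acceptance threshold (0.35e8 in the MATLAB code)
    """
    # Subtract the mean face and project into eigenface space
    weights = eigenfaces.T @ (input_vec.astype(float) - mean_face)
    # Euclidean distance to every training projection
    dists = np.linalg.norm(projected_images - weights[:, None], axis=0)
    idx = int(np.argmin(dists))
    if dists[idx] < t_match:
        return ('match', idx + 1)  # +1 because MATLAB indices are 1-based
    return ('no_match', None)
```

Note that the MATLAB code also checks a second, looser threshold (0.8e8), but both intermediate branches report "No matches found", so in effect only the 0.35e8 threshold decides acceptance; the sketch reflects that effective behavior.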
% --- Executes on button press in pushbutton8.
function pushbutton8_Callback(hObject, eventdata, handles)
% hObject handle to pushbutton8 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
handles.output = hObject;
% --- Outputs from this function are returned to the command line.
function varargout = Face_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% Get default command line output from handles structure
varargout{1} = handles.output;
%handles.im= imagesc(getsnapshot(handles.video));
% saveas(gca,['ArialPic_' int2str(i)],'jpg') ;
% set(handles.startAcquisition,'Enable','on');
%set(handles.captureImage,'Enable','on');
else
% Camera is on. Stop camera and change button string.
set(handles.pushbutton1,'String','Start')
stop(handles.video)
% set(handles.startAcquisition,'Enable','off');
%set(handles.captureImage,'Enable','off');
end
imwrite(handles.im,'Capture.jpg');
load Eigenface.mat;
%saveas(handles.axis2,['1'],'jpg') ;
%Algo
% input='1.jpg';
% %imageno='1.jpg';
%uu;
% outputimage=Recognition(T, m1, Eigenfaces, ProjectedImages, imageno);
% if(imageno==1)
% set(handles.edit2,'String','2');
% end
MeanInputImage=[];
% for non real time
%[fname pname]=uigetfile('*.jpg','Select the input image for recognition');
% For Real Time
fname='Capture.jpg';
%end
InputImage=imread(fname);
InputImage=rgb2gray(InputImage);
InputImage=imresize(InputImage,[200 180],'bilinear'); % resize the input image (image preprocessing step)
[m n]=size(InputImage);
imshow(InputImage);
Imagevector=reshape(InputImage',m*n,1);%to get elements along rows as we take InputImage'
MeanInputImage=double(Imagevector)-m1;
ProjectInputImage=Eigenfaces'*MeanInputImage; % the weights of the input image with respect to our eigenfaces
% Next we compute the Euclidean distance between the input image's weights
% and each projection in our image space; a match is decided against a
% threshold chosen by trial and error.
Euclideandistance=[];
for i=1:T
temp=ProjectedImages(:,i)-ProjectInputImage;
Euclideandistance=[Euclideandistance temp];
end
% The loop above builds a matrix of difference vectors; we take the norm of
% each column and then find the minimum Euclidean distance.
tem=[];
for i=1:size(Euclideandistance,2)
k=Euclideandistance(:,i);
tem(i)=sqrt(sum(k.^2));
end
% We now apply threshold values to decide whether the image is a match and,
% if so, whether it is a known image. The thresholds were chosen by trial
% and error.
[MinEuclid, index]=min(tem);
if(MinEuclid<0.8e008)
if(MinEuclid<0.35e008)
outputimage=(strcat(int2str(index),'.jpg'));
[cdata] = imread('icon.jpg');
% figure,imshow(outputimage);
switch index % the names of the persons are entered in the code itself;
% there is no provision for entering a name in real time
case 1
%str1;
case 2
case 3
case 4
case 5
case 6
case 7
case 8
case 9
case 11
case 12
case 13
case 14
case 15
case 16
case 17
case 18
case 19
msgbox('Operation Completed - User is Authenticated',...
'Success','custom',cdata);
case 20
otherwise
msgbox('Image in database but name unknown')
%h = msgbox('Not Allowed');
%set(handles.edit2,'String','2');
end
else
msgbox('No matches found');
msgbox('You are not allowed to enter this system', 'Error','error');
outputimage=0;
end
else
msgbox('No matches found');
msgbox('You are not allowed to enter this system', 'Error','error');
%disp('Hello');
outputimage=0;
end
save test2.mat % save the workspace variables so they can be reused later
set(handles.edit2,'String',a);
set(handles.edit3,'String',c);
set(handles.edit4,'String',b);
a=0;
b=0;
c=0;
imwrite(handles.im,'Demo.jpg');
close;
12. Bibliography
AT&T Laboratories, Cambridge, UK, "The ORL Database of Faces" (now AT&T "The Database of Faces"). Available online: http://www.cl.cam.ac.uk/Research/DTG/attarchive/pub/data/att_faces.zip (last accessed 15 September 2007).
M. Turk and A. Pentland, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, March 1991.
M. A. Turk and A. P. Pentland, "Face Recognition Using Eigenfaces", in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 586-591, June 1991.
Y. Vijaya Lata, Chandra Kiran Bharadwaj Tungathurthi, H. Ram Mohan Rao, A. Govardhan and L. P. Reddy, "Facial Recognition using Eigenfaces by PCA", International Journal of Recent Trends in Engineering, vol. 1, no. 1, May 2009, pp. 587-590.
H. B. Kekre, Sudeep D. Thepade and Akshay Maloo, "Performance Comparison for Face Recognition using PCA, DCT & Walsh Transform of Row Mean and Column Mean", ICGST International Journal on Graphics, Vision and Image Processing (GVIP), vol. 10, issue II, June 2010, pp. 9-18. Available online: http://209.61.248.177/gvip/Volume10/Issue2/P1181012028.pdf.
D. L. Swets and J. Weng, "Using Discriminant Eigenfeatures for Image Retrieval", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, pp. 831-836, 1996.
C. Liu and H. Wechsler, "Evolutionary Pursuit and Its Application to Face Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 6, pp. 570-582, June 2000.
A. M. Martínez and A. C. Kak, "PCA versus LDA", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 228-233, 2001.
Webopedia, "False Rejection". Available online: http://www.webopedia.com/TERM/F/false_rejection.html.
M. Firdaus et al., "Dimensions Reductions for Face Recognition Using Principal Component Analysis", in Proc. 11th International Symposium on Artificial Life and Robotics (AROB 11th '06), 2006. CD-ROM.