
2021

BEng (Hons) Electrical and Electronics Engineering

Final Year Project


Face Recognition Based Attendance Monitoring System

Student Name:

University Number:

Supervisor Name:

A FINAL YEAR PROJECT REPORT SUBMISSION IN PARTIAL FULFILMENT OF THE REGULATIONS FOR THE AWARD OF BENG (HONS) IN ELECTRICAL & ELECTRONIC ENGINEERING AT THE UNIVERSITY OF SUNDERLAND, 2021
Chapter 01: Introduction

1.1 Rationale of the Topic


Student attendance at educational institutions is widely regarded as a key part of administrative processes. A proper attendance monitoring system contributes to the success of both the students and the institution.

Manually signing an attendance sheet during a lecture is the most common way of tracking attendance in the classroom, and it causes several problems. It can disturb the discipline of the class, and it is difficult for students to pay attention to the lecture while signing. Students can also sign on behalf of absentees, so the accuracy of manually kept attendance sheets is low compared with an automated system. Because the records exist only as hard copies, there is also a risk of losing the data. In addition, shared attendance sheets have become a concern in many countries during the current pandemic, so an automated, contactless recording system is highly useful for minimizing the spread of infection.

Besides, a manual attendance monitoring system puts pressure on administrative staff, who must calculate each student's attendance percentage for every subject at the end of the semester. Doing this by hand is difficult and time-consuming, and it carries a high risk of error. It is also difficult to maintain a database in a manual system.

An improper attendance monitoring system can degrade the quality of education as well as the performance of the students. It introduces inefficiencies that reduce the reliability of the data, and data manipulation is common in manual, paper-based systems. Recording data, keeping the records, and processing them into information manually is time-consuming and less efficient than an automated system, which produces more reliable data in a shorter time.

This project therefore proposes a solution to the attendance monitoring problem in an institute using image processing technology. The proposed system provides a more secure and efficient way of monitoring attendance.

1.2 Problem Identification
An improper attendance monitoring system degrades the quality of education and the performance of the students, and considerable manpower is required to calculate attendance percentages at the end of the semester.

1.3 Aim and Objectives

1.3.1 Aim
This project aims to design and implement a fully automated, low-cost attendance monitoring system with the help of image processing technology.

1.3.2 Objectives
• Conduct a literature survey on attendance monitoring systems.
• Propose conceptual designs based on the literature survey.
• Analyse the conceptual designs and select the most suitable one.
• Design and implement a prototype of the system.
• Test the results.

1.4 Scope and Limitation

1.4.1 Scope
The project focuses on computerizing the traditional way of monitoring attendance and developing the system as a desktop application for a particular institute.

1.4.2 Limitation
• The system only monitors the attendance of previously registered students.
• A well-trained person is required to operate the system.
• An Internet connection is needed to update the database.
• Students' faces can only be recognized from a limited distance.

Chapter 02: Literature Review

This chapter focuses on published and relevant work regarding the Face Recognition Based Attendance Monitoring System. The content of the logical blocks of the system diagram is presented thematically below. The project was targeted at demonstrating competence in the design and implementation of a Face Recognition Based Attendance Monitoring System.

2.1 Literature Mapping

Figure 2.1: Literature mapping. The Face Recognition Based Attendance Monitoring System is mapped into the following themes: ANN training, face detection (including face tracking), feature extraction (including facial feature tracking), face recognition, and the database (face table and attendance table).

2.1.1 ANN Training

According to Manisha M. Kasar et al. (2016), Artificial Neural Networks (ANNs) have been analysed for face recognition. Initially, neural networks were used only for face detection; later they were developed for facial recognition. An ANN must be designed and trained in such a way that a given set of input data produces the desired output. Face detection and identification have been carried out using several ANN architectures and models. An ANN consists of a large number of basic units called neurons or nodes, which are connected to transfer signals from the input layer to the output layer. Previous research has discussed several forms of ANN, including the Adaline network, perceptron network, Hebb network, probabilistic network, and back-propagation artificial neural network (BP-ANN). BP-ANN is a supervised multilayer feed-forward algorithm; Paul Werbos initially proposed it in 1970, and Rumelhart and McClelland rediscovered it in 1986.

According to Chandramouli et al. (2002), the neural network is a very effective and reliable classification method that can model unknown as well as known data, and it works well for both linearly and nonlinearly separable datasets. In recent years, ANNs have been widely used in image processing (compression, detection, and encryption) and pattern recognition. An ANN is made up of "nodes", forming a network of artificial neurons.

According to Heisele et al. (2001), two parameters must be tuned during ANN training: the learning rate and the number of neurons in the hidden layer. The learning rate takes a value greater than 0 and less than or equal to 1. To recognize and classify objects, the nodes function somewhat like a human brain, which makes face detection and recognition straightforward. These nodes are interconnected, and each connection is assigned a value depending on its strength, from inhibition (down to -1.0) to excitation (up to +1.0); a high value means a strong connection. For face recognition, neurons are arranged into three kinds of layers: input nodes, hidden nodes, and output nodes. Weights are assigned to input nodes based on their influence, and the ANN adjusts its weights in response to the variations it encounters during training.
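
To make the role of these two training parameters concrete, the following minimal sketch (an illustration only, not the implementation described by Heisele et al.) uses scikit-learn's MLPClassifier, where the hidden-layer size and initial learning rate are passed directly as hyperparameters; the face vectors and labels are randomly generated stand-ins.

```python
# A minimal sketch of back-propagation ANN training with scikit-learn,
# exposing the two parameters discussed above: learning rate and the
# number of neurons in the hidden layer. Data here is illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: each row is a flattened, pre-processed face
# image; each label identifies the student the face belongs to.
X_train = np.random.rand(40, 100 * 100)   # 40 face vectors of 100x100 pixels
y_train = np.repeat(np.arange(10), 4)     # 10 students, 4 images each

model = MLPClassifier(
    hidden_layer_sizes=(64,),   # number of neurons in the hidden layer
    learning_rate_init=0.01,    # learning rate, chosen in (0, 1]
    max_iter=500,
    random_state=0,
)
model.fit(X_train, y_train)

# Predict the identity of an unseen (here, random) face vector.
print(model.predict(np.random.rand(1, 100 * 100)))
```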

ANN-based human face recognition

According to Maryam (2005), a survey of ANN-based human face recognition was conducted using 10 people. The approach consisted of four stages: detection and pre-processing, training, ANN implementation, and testing. The face region is identified from the entire picture, and the proposed pre-processing techniques include face cropping, resizing, normalizing, and filtering. These operations are applied in the detection and pre-processing stage to prepare the data for the training stage. Principal component analysis (PCA) is the most frequently used approach for extracting features; it was used to extract the key characteristics from a facial image and convert them to vector format.
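
The pre-processing steps listed above (cropping, resizing, normalizing, and filtering) can be sketched with OpenCV as follows; the file name and bounding-box coordinates are assumptions for illustration, not values from the cited survey.

```python
# A minimal pre-processing sketch, assuming OpenCV is available and that
# "face.jpg" and the bounding box (x, y, w, h) come from an earlier
# detection step; the values used here are illustrative only.
import cv2
import numpy as np

img = cv2.imread("face.jpg")                      # hypothetical input image
x, y, w, h = 60, 40, 120, 120                     # assumed detection result

face = img[y:y + h, x:x + w]                      # 1. crop the face region
face = cv2.resize(face, (100, 100))               # 2. resize to a fixed size
gray = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)     # 3. convert to grayscale
gray = cv2.GaussianBlur(gray, (3, 3), 0)          # 4. filter noise
normalized = gray.astype(np.float32) / 255.0      # 5. normalize to [0, 1]

# The normalized 100x100 patch can now be flattened into a vector for
# feature extraction (e.g. PCA) and ANN training.
vector = normalized.flatten()
```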

According to Dmitry (2002), neural networks have a variety of benefits, including the ability to model dynamic, nonlinear relationships between dependent and independent variables without specifying them explicitly, the ability to detect all potential interactions between predictor variables, and the availability of several training algorithms. ANN-based solutions have yielded outstanding results in very complex tasks such as planning, data processing, job management, rejection of unauthorized people, and resource utilization optimization, and ANN classifiers are robust with respect to false acceptance and false rejection errors.

2.1.2 Face Detection

According to Shireesha Chintalapati (2013), the primary goal of this process is to detect a face in a captured picture or an image chosen from the database. The face detection method locates the facial part of a given image and rejects the other objects in the picture. Face detection is difficult because there are so many variables in picture appearance, such as pose (front, profile), occlusion, image orientation, lighting conditions, and facial expression. Capturing photos, processing them, detecting faces, and extracting expression data are all parts of a face detection system. The key difficulties of autonomous face detection are identifying faces against a cluttered background, extracting facial features, and recognizing faces.

According to Marko Arsenovic et al. (2017), HOG and SURF features are detected and extracted from the identified faces. The SURF features of every dataset image are detected, extracted, merged, and analysed, and SVM classifiers are used to classify the data. Finally, the characteristics of the identified and test faces are compared, and the result of the classifier is displayed.

Face detection is required to begin face tracking, and facial feature extraction is required to recognize human emotion, which is critical in Human-Computer Interface (HCI) systems. Due to its efficiency, the Viola-Jones method is well suited to detecting faces. After detecting one or more faces in a video frame, the system cuts the faces out of the picture and converts them to grayscale images scaled to a size of 100x100 pixels. Next, overlapping and non-facial images are eliminated, since the Viola-Jones algorithm sometimes mistakes non-facial regions for faces. Face detection is a fairly expensive procedure, and inaccuracies at this stage can contribute to higher costs. To improve the performance of face detection, the acquired RGB frame should be down-sampled and then normalized using Gaussian filtering before non-facial and overlapping detections are removed.
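
A minimal sketch of this detection pipeline, assuming OpenCV and its bundled frontal-face Haar cascade, is shown below; the frame file name is hypothetical, and parameter values such as scaleFactor and minNeighbors are typical defaults rather than values prescribed by the text.

```python
# A minimal sketch of Viola-Jones detection with OpenCV's bundled Haar
# cascade, following the steps described above (grayscale conversion,
# Gaussian smoothing, 100x100 face crops).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("classroom_frame.jpg")            # hypothetical video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)             # smooth/normalize first

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

crops = []
for (x, y, w, h) in faces:
    # Cut each detected face out of the frame and scale it to 100x100,
    # ready for the recognition stage.
    crops.append(cv2.resize(gray[y:y + h, x:x + w], (100, 100)))

print(f"{len(crops)} face(s) detected")
```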

(a) Face Tracking

According to Wei-Lun Chao (2007), the distinction between face detection and face recognition is often muddled. Face detection identifies only the face segment or area within an image, while face recognition identifies the person the facial image belongs to. Wei-Lun Chao (2007) also presented several reasons why face detection and recognition are difficult: the background, lighting, pose, gesture, occlusion, rotation, scaling, and translation all have to be considered in face tracking.

According to S. Aanjanadevi et al. (2017), face detection using skin colour detection in colour images is a common and useful technique, since the colour of a person's face is very distinctive. Using skin colour as a feature for tracking a face has some benefits: processing colour takes considerably less time than processing other facial characteristics. In the skin colour detection process, each pixel is labelled as skin or non-skin based on its colour components.
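
A minimal sketch of per-pixel skin labelling, assuming fixed thresholds in the YCrCb colour space with OpenCV, is given below; the threshold values are commonly used approximations and not those of S. Aanjanadevi et al.

```python
# A minimal sketch of labelling each pixel as skin or non-skin using
# approximate Cr/Cb thresholds. Input file name is hypothetical.
import cv2
import numpy as np

img = cv2.imread("frame.jpg")                        # hypothetical input frame
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)

# Commonly used (approximate) skin ranges for the Cr and Cb channels.
lower = np.array([0, 133, 77], dtype=np.uint8)
upper = np.array([255, 173, 127], dtype=np.uint8)

skin_mask = cv2.inRange(ycrcb, lower, upper)         # 255 = skin, 0 = non-skin
skin_only = cv2.bitwise_and(img, img, mask=skin_mask)

cv2.imwrite("skin_mask.png", skin_mask)
```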

Paul Viola and Michael Jones (2001) proposed the Viola-Jones object detection framework, which provides efficient and effective real-time object detection. It was motivated primarily by the challenge of face detection, although it can be trained to detect a number of object types. They concluded that, among approaches such as face geometry-based methods, feature invariant methods, and machine learning-based methods, the Viola-Jones algorithm is not only fast and stable but also has a high detection rate and performs well in a variety of situations. Different ANN architectures and models have also been used for face detection in recent years. Rowley, Baluja, and Kanade (1998) proposed a face detection method based on a retinally connected neural network that examines small windows of an image to determine whether each window contains a face.

2.1.3 Feature Extraction

As noted in Section 2.1.2, Shireesha Chintalapati (2013) describes face detection as the stage that decides whether a given picture contains a face before features are extracted; pose, occlusion, image orientation, lighting conditions, and facial expression all make this stage, and the subsequent extraction of expression data, difficult.

According to Ahmed et al. (2016), faces were intuitively split into regions such as the eyes, nose, and mouth, which corresponded to hidden Markov model states.

According to Bhuvaneshwari et al. (2017), there are several feature extraction methods. HMMs need a one-dimensional observation sequence, while pictures are two-dimensional, so images must be transformed into 1D temporal or 1D spatial sequences. Using a band-sampling approach, a spatial observation sequence is recovered from a facial image: each facial image is represented by a 1D sequence of pixel observation vectors, where each observation vector is a block of L lines with an overlap of M lines between consecutive blocks. An observation sequence is first sampled from an unknown test picture and then compared against the HMMs in the model face database (each HMM represents a distinct subject). The best match is the one with the highest likelihood, and the corresponding model discloses the test face's identity.
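
The band-sampling idea can be sketched as follows; the block height L and overlap M are illustrative values, and the random image stands in for a normalized face.

```python
# A minimal sketch of band sampling: a 2-D face image is turned into a 1-D
# sequence of overlapping horizontal blocks of L lines with M lines of
# overlap, ready to be fed to an HMM as an observation sequence.
import numpy as np

def band_sample(image: np.ndarray, L: int = 10, M: int = 6) -> np.ndarray:
    """Return a sequence of observation vectors, one per band of L rows."""
    step = L - M                       # rows advanced between consecutive bands
    bands = []
    for top in range(0, image.shape[0] - L + 1, step):
        block = image[top:top + L, :]  # block of L consecutive lines
        bands.append(block.flatten())  # one observation vector per block
    return np.stack(bands)

face = np.random.rand(100, 100)        # hypothetical normalized face image
observations = band_sample(face)
print(observations.shape)              # (number_of_bands, L * image_width)
```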

According to Visar Shehu et al. (2015), many feature extraction approaches may be utilized to extract characteristics from a signature image for accurate verification. Euclidean distance, Support Vector Machines (SVM), and Artificial Neural Networks (ANN) are some of the most often used pattern classifiers for offline signature verification. Several researchers have experimented with various feature extraction approaches and combinations of approaches in order to discover an effective set of characteristics; they tried HOG, SURF, LBP, and the HOG-SURF and HOG-SURF-LBP combinations. Of these, HOG-SURF produced the best results.

According to Ahmed et al. (2016), each dataset image's HOG and SURF features were extracted, merged, and stored in a 2D array, which is then fed to an SVM classifier. The HOG and SURF characteristics of the detected face are also extracted; finally, the characteristics of the dataset pictures and the detected face are compared, and the best match is displayed. HOG is a reliable feature extraction approach that is mostly used for object recognition in image processing; it divides the picture into very small connected regions called cells (Visar Shehu et al., 2015).
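
A minimal sketch of the HOG-then-SVM pipeline, assuming scikit-image and scikit-learn, is shown below. SURF is omitted because it is patent-encumbered and absent from default OpenCV builds, so this is a simplification of the cited HOG-SURF combination; all data is randomly generated for illustration.

```python
# A minimal HOG + SVM sketch: extract HOG descriptors from 100x100 face
# crops, train a linear SVM, and classify a new face.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(face_100x100: np.ndarray) -> np.ndarray:
    # Divide the image into small cells and describe each by its gradient
    # orientation histogram, as HOG does.
    return hog(face_100x100, orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Hypothetical dataset: 20 grayscale faces of 5 students (4 images each).
faces = np.random.rand(20, 100, 100)
labels = np.repeat(np.arange(5), 4)

X = np.array([hog_features(f) for f in faces])
clf = SVC(kernel="linear").fit(X, labels)

# Classify a newly detected (here, random) face crop.
print(clf.predict(hog_features(np.random.rand(100, 100)).reshape(1, -1)))
```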

(a) Facial Feature Tracking

As mentioned by Liton Chandra Paul (2012), it is important to remove background information when tracking facial features. Face detection is aided by biometrics and facial features. The geometrical form and location of facial features are affected by facial expression, and geometrical feature matching algorithms work by calculating a collection of geometrical features from a facial image.

According to A. Majumdar et al. (2016), face recognition would be simplified if unnecessary information such as noise, non-face parts, and background were removed. Feature tracking is the process of extracting essential features such as the eyes, nose, mouth, lips, chin, and eyebrows in order to monitor them. Face matching and detection use a trained collection of pictures from a database to match a face, and in most face recognition systems the input image is passed to the machine for pre-processing. Feature extraction is also essential for facial emotion animation and identification.

D. Nithya (2015) used Principal Component Analysis (PCA) for a facial recognition-based student attendance scheme. PCA preserves the variance of the data while removing unwanted correlations among the original features, and it subtracts the average value from each picture so that the data is centred. PCA is a method for reducing the number of dimensions in a dataset: each facial image, represented by a matrix, is compressed into a single column vector, and trained facial images are used in the PCA feature extraction process.
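
A minimal eigenface-style sketch with scikit-learn's PCA, assuming randomly generated stand-in face images, illustrates the flattening, mean removal, and dimensionality reduction described above.

```python
# A minimal PCA (eigenface-style) sketch: each face image is flattened into
# a single vector, the mean face is removed, and the data is projected onto
# a small number of principal components.
import numpy as np
from sklearn.decomposition import PCA

faces = np.random.rand(50, 100, 100)          # hypothetical trained face images
X = faces.reshape(len(faces), -1)             # each image -> one row vector

pca = PCA(n_components=20, whiten=True)       # keep 20 principal components
X_reduced = pca.fit_transform(X)              # mean removal (centring) is built in

print(X.shape, "->", X_reduced.shape)         # (50, 10000) -> (50, 20)
```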

As mentioned by Divyarajsinh (2013), face tracking can be divided into Feature-based
methods, Holistic-based methods, and Hybrid methods.

Feature-based approach

In a feature-based method, local characteristics such as the nose and eyes are segmented and used as input data for face detection, which makes face identification easier. This category includes pure geometry, dynamic link architecture, and hidden Markov model approaches.

Holistic-based approach

Holistic-based approaches are also known as appearance-based methods; they use knowledge of the entire face patch and apply a transformation to it to obtain a representation used for identification. The full face is used as the input to the face detection system to conduct face identification. For face recognition, the holistic technique is the most extensively used approach.

Hybrid approach

The hybrid method combines feature-based and holistic approaches; the input to the face detection system is both local features and the full face. Typically, 3D images are used, i.e., the person's face is captured in 3D, which helps the system capture the curves of the eye sockets or the facial curvature, among other things.

2.1.4 Face Recognition

According to B. Mehta (2013), face recognition is a biometric technique that uses automated methods to confirm or detect a person's identity based on physiological features. Face recognition begins by detecting face shapes in often cluttered scenes, and then normalizes the faces to adjust for geometric and lighting variations. Face recognition is one of the biometric methods that combine high precision with minimal intrusiveness: it can be as precise as biochemical methods while being a non-intrusive, passive technology that verifies personal identity in a "natural" and courteous manner.

According to Malallah et al. (2015), biometric devices in general can be described by the following three-step approach.

(1) A sensor makes an observation. The type of sensor and what it observes depend on the biometric device being used. This observation provides the person's "Biometric Signature."
(2) The biometric signature is "normalized" to match the signatures in the system's database, using a computer program that enforces the same format (size, resolution, view, etc.). Normalizing the biometric signature produces the person's "Normalized Signature."
(3) A matcher compares the individual's normalized signature to the set (or subset) of normalized signatures in the system's database and generates a "similarity score" for each comparison (a minimal sketch of this step is given after this list).
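
A minimal sketch of the matching step is given below; cosine similarity is an assumption chosen for illustration, since the cited description does not prescribe a particular similarity measure, and the signature vectors are random stand-ins.

```python
# A minimal sketch of step (3): comparing one normalized "biometric
# signature" against every signature stored in the database and producing
# a similarity score for each comparison.
import numpy as np

def similarity_scores(query: np.ndarray, database: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and each database row."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    return db @ q

db_signatures = np.random.rand(100, 128)       # 100 enrolled signatures
probe = np.random.rand(128)                    # signature from the sensor

scores = similarity_scores(probe, db_signatures)
print("best match:", int(np.argmax(scores)), "score:", float(scores.max()))
```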

As Pradeep Kumar et al. (2013) mention, the extracted characteristics are passed to a distance classifier that calculates the distance between the test and training images: the greater the correlation between the test image and a training image, the smaller the difference between the input feature points and the trained feature points. Face recognition then decides whether a recognized face is a positive or negative match based on feature matching and classification against a reference facial image.

According to Al Singh (2010), recognizing someone's face is not simple; face recognition requires the implementation of a number of methods and algorithms. Techniques such as the Support Vector Machine (SVM) and Local Binary Pattern (LBP) are used to recognize a person's face by comparing photographs saved in the database to photos captured by a camera inside the lecture hall. A lecture attendance system can use a new approach called continuous monitoring, in which a camera snaps pictures of the students in the class and their attendance is recorded automatically.
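
As an illustration of the LBP-based recognition mentioned above, the following sketch uses OpenCV's LBPH face recognizer (available in the opencv-contrib-python package); the gallery images, student IDs, and probe image are random stand-ins rather than real camera data.

```python
# A minimal LBPH (Local Binary Pattern Histogram) recognition sketch,
# requiring opencv-contrib-python. Data and labels are illustrative only.
import cv2
import numpy as np

# Hypothetical gallery: grayscale 100x100 face crops with student IDs.
gallery = [np.random.randint(0, 256, (100, 100), dtype=np.uint8) for _ in range(8)]
ids = np.array([1, 1, 2, 2, 3, 3, 4, 4])

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(gallery, ids)

# A face cropped from the lecture-hall camera (random stand-in here).
probe = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
student_id, distance = recognizer.predict(probe)   # lower distance = closer match
print(student_id, distance)
```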

Figure 2.4: Configuration of a generic face recognition (Ricardo Ribeiro, 2018)

According to Deepesh Raj (2011), both 2D and 3D images are used for facial recognition. Several distance classifiers are used for face recognition, including Euclidean distance, city block distance, and Mahalanobis distance; the chi-square metric was used as a distance classifier by Md. Abdur Rahim et al. (2013). Face recognition analyses patterns in photographs and compares them to identify one or more faces in a picture: algorithms extract features and compare them to a database to find a match. A key requirement of automated face recognition is that it must be able to cope with variations of the same face caused by changes in the environment.
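
The distance classifiers named above can be sketched as simple nearest-neighbour matching; Mahalanobis distance is omitted here because it additionally requires a covariance estimate, and all feature vectors are random stand-ins.

```python
# A minimal sketch of distance-based classification: the test feature vector
# is assigned the label of the nearest training vector under a chosen metric.
import numpy as np

def euclidean(a, b):  return np.sqrt(np.sum((a - b) ** 2))
def city_block(a, b): return np.sum(np.abs(a - b))
def chi_square(a, b): return np.sum((a - b) ** 2 / (a + b + 1e-10))

def nearest(test, train_feats, train_labels, metric=euclidean):
    distances = [metric(test, t) for t in train_feats]
    return train_labels[int(np.argmin(distances))]

train = np.abs(np.random.rand(30, 64))        # 30 training feature vectors
labels = np.repeat(np.arange(10), 3)          # 10 identities, 3 samples each
probe = np.abs(np.random.rand(64))

print(nearest(probe, train, labels, metric=chi_square))
```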

According to Lenc and Král (2014, pp. 759-769), face recognition involves two primary tasks: verification and identification. Verification is a one-to-one matching process that compares an unfamiliar face against a claimed identity to determine whether the person is who they claim to be. Identification is a one-to-many matching process that compares the image against a collection of images of known individuals to determine the person's identity. Psychologists describe the facial expressions under investigation as a collection of six basic expressions (anger, disgust, fear, pleasure, sorrow, and surprise). Capture, extraction, comparison, and matching are the main stages of the facial recognition process. During enrolment, the system first captures a photograph of the person. The second stage, extraction, locates and extracts a specific set of features from the face. In the comparison stage, the new data is compared against the enrolment image in the database. In the final stage, using the extraction and comparison results, the system decides whether the new face matches the registered face or not.

According to Abhishek Singh (2012), there are several observed parameters such as pose, illumination, expression, motion, facial hair, glasses, and the background of an image. Face detection techniques for still images can be divided into three categories: the holistic approach, the feature-based approach, and the hybrid approach. In the holistic approach, the entire face region is taken as the input to the face detection system. In the feature-based approach, local characteristics of the face, such as the nose and eyes, are segmented and used as input data for structural classifiers. The hybrid approach mirrors the way the human visual system perceives both holistic and local features.

According to Beymer and Poggio (1995), the aim of developing face recognition technology is to take advantage of its benefits in everyday life through easy-to-use, natural, non-intrusive, and embedded computing solutions. The prime aims of face recognition are to understand the image, recover the image structure, and know what it represents.

2.1.5 Database

According to Jafri and Arabnia (2009), face identification is a one-to-many matching mechanism that determines the identity of a query face by comparing it to all of the template faces in a face database. The test image is identified by looking for the image in the database with the greatest resemblance to it. The identification procedure is a closed-set examination, which means that the sensor records an impression of an individual who is already in the database. Every comparison of the test's normalized features to the other features in the system's database yields a similarity score.

According to Tripathi et al. (2007), when an unknown face is used as the input for identification, the algorithm returns the determined identity from a database of known persons; if authentication is required instead, the machine must confirm or deny the input's claimed identity. The "top match performance" refers to the proportion of cases in which the highest similarity score is the correct match. Because Eigenface and neural-net techniques require images of the same size and viewing angle, the databases were "trimmed", so that all images in each trimmed database are frontal and of a similar scale. The Eigenface, auto-association and classification nets, and elastic matching algorithms were run on each of the four separate databases as well as on the composite database.

According to Zhao et al. (1999), a relational database is used to create, update, and maintain tables for each course/subject taken by students, as well as enrolment and other information, which is useful for presenting a summary attendance report. The facial picture of a person is entered into a database and is typically obtained under various conditions. When maintaining a large data set such as student attendance, it is necessary to keep the records in a database. Keeping the data in a database has many advantages, such as improved data security, increased consistency, reduced data redundancy, and easier data entry and storage.
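
A minimal sketch of such a relational schema, using Python's built-in sqlite3 module, is given below; the table and column names are assumptions chosen for illustration rather than the schema used by Zhao et al.

```python
# A minimal sketch of relational tables for students, subjects, and
# attendance records, using the standard-library sqlite3 module.
import sqlite3

conn = sqlite3.connect("attendance.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS students (
    student_id   TEXT PRIMARY KEY,
    name         TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS subjects (
    subject_id   TEXT PRIMARY KEY,
    title        TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS attendance (
    student_id   TEXT REFERENCES students(student_id),
    subject_id   TEXT REFERENCES subjects(subject_id),
    taken_at     TEXT NOT NULL,            -- date/time the face was recognised
    status       TEXT DEFAULT 'absent',    -- 'present' or 'absent'
    PRIMARY KEY (student_id, subject_id, taken_at)
);
""")
conn.commit()
```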

(a) Face Table

According to Kalocsai (2008), a face image is identified by looking for the image in the database with the greatest resemblance to it. When the data is held in the dataset, the security of the information is high. HOG features and PCA algorithms are used to recognize faces.

According to E. Varadharajan et al. (2011), face recognition is developing rapidly because of a number of factors, including active algorithm research, the availability of vast libraries of facial photos, and mechanisms for measuring the efficacy of face recognition algorithms. The student must stand in front of a camera and allow the camera to scan them; the scanned photograph is compared with the student data in the database, and the attendance record is updated accordingly. This reduces the paper-and-pen burden on the institute's faculty members, decreases the likelihood of proxies in the classroom, and helps to keep student records accurate and safe. It is a wireless biometric technology that addresses the issue of bogus attendance as well as the difficulty of setting up the necessary network.

According to Bruce (1996), pose, illumination, expression, motion, and facial hair are considered when analysing face images. Face verification compares a query face image to a reference face image whose identity is being claimed, in a one-to-one contest. Face recognition uses one-to-many matching to determine the identity of a query face by comparing it to all of the reference faces in the database's face table.

(b) Attendance Table

According to Gottfried (2015), maintaining attendance is required in every institution for evaluating student achievement. Each organization employs its own methods, and in most cases the standard way of taking attendance is a physical attendance sheet. Biometrics helps to keep track of attendance in a smart and efficient manner, and the attendance system can be automated using face recognition. In general, a manual or conventional method of tracking attendance is time-intensive and can cause distractions during the teaching period. A further disadvantage of a manual attendance sheet is that students can sign on behalf of their friends.

According to Al Singh (2010), a student attendance tracking system gives a better headcount of the lecture room than a manually maintained attendance sheet. It helps to monitor students' attendance more effectively and efficiently, which in turn contributes to raising students' pass rates. The dates and times of students' attendance can be retrieved from the attendance table; all of the participants are initially marked absent. The attendance table is useful for collecting accurate data on students' participation in each subject before they sit their exams, and it makes it easy to identify the students whose attendance exceeds the 70% threshold required to sit an exam. Reliable decisions can therefore be made within a short time, with less labour and effort. In brief, a face recognition-based automated attendance system is helpful in every respect: given images of a scene, it recognizes or verifies one or more individuals in the scene using the records stored in the attendance table in the database, and it collects more accurate data than a manual attendance sheet.
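
A minimal sketch of how such an attendance table could be queried for exam eligibility is given below; it reuses the hypothetical sqlite3 schema sketched in Section 2.1.5, and the subject code and 70% threshold are illustrative assumptions.

```python
# A minimal sketch computing each student's attendance percentage for a
# subject and flagging those who meet an assumed 70% eligibility threshold.
import sqlite3

THRESHOLD = 70.0  # assumed minimum attendance percentage to sit the exam

conn = sqlite3.connect("attendance.db")
rows = conn.execute("""
    SELECT student_id,
           100.0 * SUM(status = 'present') / COUNT(*) AS percentage
    FROM attendance
    WHERE subject_id = ?
    GROUP BY student_id
""", ("EEE301",)).fetchall()                 # hypothetical subject code

for student_id, percentage in rows:
    eligible = "eligible" if percentage >= THRESHOLD else "not eligible"
    print(f"{student_id}: {percentage:.1f}% attendance ({eligible})")
```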

References
• A Review Paper on Attendance Marking System based on Face Recognition (2017). Bali 2017 International Conference Proceeding (IPCET-17, CM3E-17, ABEMS-17, ICEEE-17, ICCEE-17, ICASE-17, ICAFE-17, EBHRM-17, LHHSS-17 & LEBHM-17). [online] Available at: http://eirai.org/images/proceedings_pdf/IAE1017211.pdf [Accessed 15 Jun. 2020].

• Howard, C., Tunku, U. and Rahman, A. (n.d.). Face Recognition Based Automated Student Attendance System. [online] Available at: http://eprints.utar.edu.my/2832/1/EE-2018-1303261-1.pdf [Accessed 10 Apr. 2021].

• Kadam, V. and Ganakwar, D. (2007). International Journal of Innovative Research in Science, Engineering and Technology (An ISO 3297 Certified Organization). [online] Available at: http://www.ijirset.com/upload/2017/july/92_Face.pdf [Accessed 29 Apr. 2021].

• Kaur, P., Krishan, K., Sharma, S.K. and Kanchan, T. (2020). Facial-recognition algorithms: A literature review. Medicine, Science and the Law, p.002580241989316.

• Kumar, A. (n.d.). Literature Review on Textual Feature and Classification Techniques for Face Detection for Attendance System. [online] Available at: https://www.journalijcar.org/sites/default/files/issue-files/5677-A-2018.pdf [Accessed 24 May 2021].

• ResearchGate (n.d.). Face Recognition: A Literature Review. [online] Available at: https://www.researchgate.net/publication/233864740_Face_Recognition_A_Literature_Review [Accessed 29 Apr. 2021].

• Shariful, M. and Ferdous, I. (2008). Literature Survey of Automatic Face Recognition System and Eigenface Based Implementation. [online] Available at: https://core.ac.uk/download/pdf/61800074.pdf [Accessed 24 Mar. 2020].

• Tolba, A.S., El-Baz, A.H. and El-Harby, A.A. (2008). Face Recognition: A Literature Review. International Journal of Computer and Information Engineering, [online] 2(7), pp.2556–2571. Available at: https://publications.waset.org/7912/face-recognition-a-literature-review [Accessed 29 Apr. 2021].

• Zarkasyi, M.I., Hidayatullah, M.R. and Zamzami, E.M. (2020). Literature Review: Implementation of Facial Recognition in Society. Journal of Physics: Conference Series, 1566, p.012069.

• Zhao, W., Chellappa, R., Phillips, P. and Rosenfeld, A. (2003). Face Recognition: A Literature Survey. ACM Computing Surveys, [online] 35(4), pp.399–458. Available at: https://inc.ucsd.edu/~marni/Igert/Zhao_2003.pdf [Accessed 22 Nov. 2019].
