
Chapter 1

INTRODUCTION
The main objective of this project is to develop a face recognition based automated student attendance system. To achieve better performance, the test and training images of the proposed approach are limited to frontal, upright facial images that contain a single face only. The test and training images have to be captured with the same device to ensure there is no difference in quality. In addition, students have to be registered in the database in order to be recognized. Enrolment can be done on the spot through a user-friendly interface.
1.1 Background:
Face recognition is crucial in daily life for identifying family, friends or anyone we are familiar with. We might not realize that several steps are actually taken in order to identify human faces. Human intelligence allows us to receive information and interpret it during the recognition process. We receive information through the image projected into our eyes, specifically onto the retina, in the form of light. Light is a form of electromagnetic wave radiated from a source onto an object and projected into human vision. Robinson-Riegler, G., & Robinson-Riegler, B. (2008) mentioned that after visual processing by the human visual system, we classify the shape, size, contour and texture of the object in order to analyse the information. The analysed information is then compared with representations of objects and faces stored in our memory in order to recognize them. In fact, it is a hard challenge to build an automated system with the same face recognition capability as a human. Humans also need a large memory to recognize different faces; for example, in a university there are many students of different races and genders, and it is impossible to remember every individual's face without making mistakes. To overcome these human limitations, computers with almost limitless memory and high processing speed and power are used in face recognition systems.
The human face is a unique representation of individual identity. Thus, face recognition is defined as a biometric method in which an individual is identified by comparing a real-time captured image with the stored images of that person in a database (Margaret Rouse, 2012). Nowadays, face recognition systems are prevalent due to their simplicity and strong performance. For instance, airport security systems and the FBI use face recognition in criminal investigations to track suspects, missing children and drug activities (Robert Silk, 2017). Apart from that, Facebook, a popular social networking website, implements face recognition to allow users to tag their friends in photos for entertainment purposes (Sidney Fussell, 2018). Furthermore, Intel allows users to access their online accounts using face recognition (Reichert, C., 2017), and Apple allows users to unlock their mobile phone, the iPhone X, using face recognition (deAgonia, M., 2017).
Work on face recognition began in the 1960s. Woody Bledsoe, Helen Chan Wolf and Charles Bisson introduced a system which required an administrator to locate the eyes, ears, nose and mouth in images. The distances and ratios between the located features and common reference points were then calculated and compared. These studies were further enhanced by Goldstein, Harmon and Lesk in the 1970s, who used additional features such as hair colour and lip thickness to automate the recognition. In 1988, Kirby and Sirovich first suggested principal component analysis (PCA) to solve the face recognition problem. Many studies on face recognition have been conducted continuously since then (Ashley DuVal, 2012).
1.2 Problem Statement:
Traditional student attendance marking techniques often face a lot of trouble. The face recognition student attendance system emphasizes simplicity by eliminating classical marking techniques such as calling student names or checking identification cards. Such techniques not only disturb the teaching process but also distract students during exam sessions. Apart from calling names, an attendance sheet may be passed around the classroom during lecture sessions, and a class with a large number of students might find it difficult to have the sheet passed around. Thus, a face recognition student attendance system is proposed to replace the manual signing of student presence, which is burdensome and distracts students who must sign for their attendance. Furthermore, the face recognition based automated student attendance system is able to overcome fraudulent sign-ins, and lecturers do not have to count the number of students several times to confirm their presence.
The paper by Zhao, W. et al. (2003) lists the difficulties of facial identification, one of which is distinguishing between known and unknown images. In addition, Pooja G.R. et al. (2010) found that the training process for a face recognition student attendance system is slow and time-consuming, and Priyanka Wagh et al. (2015) mentioned that varying lighting and head poses are problems that often degrade the performance of face recognition based student attendance systems.

Hence, there is a need to develop a real-time student attendance system, meaning that the identification process must be completed within defined time constraints to prevent omissions. The features extracted from the facial images, which represent the identity of the students, have to be consistent under changes in background, illumination, pose and expression. High accuracy and fast computation time are the evaluation criteria for performance.
1.3 Aims and Objectives:
The objective of this project is to develop a face recognition based automated student attendance system. The expected achievements that fulfil this objective are:
 To detect the face segment from the video frame.
 To extract the useful features from the face detected.
 To classify the features in order to recognize the face detected.
 To record the attendance of the identified student.
1.4 Existing Attendance System:
Attendance is of prime importance for both the teacher and the student in an educational organization, so it is very important to keep a record of attendance. There are various attendance management systems that vary in complexity and feasibility. We have divided them into three categories, namely basic, moderate and advanced.
1.4.1 Basic Attendance System:
a. Manual Attendance System:
The Manual Attendance System involves the faculty calling out the roll. If the student is present in the class, the student physically acknowledges the roll call and says that he/she is present. In all other cases, the faculty marks the student absent.
b. Paper Based Attendance System:
The Paper Based Attendance System is a part of the manual attendance system, though it can be used with other attendance systems as well. Attendance is recorded on paper by writing down either only the absentees or only those present. Usually faculty write the roll numbers of the students that are absent or present, as per convenience.
c. Timesheet Attendance System:
The Timesheet Attendance System involves recording attendance in a timesheet. A timesheet is a physical or virtual tool that allows you to record and keep track of worked time; in this case, the number of hours the student attends.

d. Token Based Attendance:

The Token Based Attendance System involves presenting a security token on demand in order to record attendance. A security token (sometimes called an authentication token) is a small hardware device that the owner carries to authorize access to a network service. The device may be in the form of a smart card or may be embedded in a commonly used object such as a key fob. In the context of students, the token is usually their identity card.
1.4.2 Moderate Attendance Systems:
a. Biometric Attendance System:
The biometric attendance system works on two basic principles. First, it captures an image of a finger; the scanner then saves the characteristics of each unique fingerprint in the form of a biometric key. In fact, a fingerprint scanner never saves images of a finger, only a series of binary code for verification purposes. Second, the biometric attendance system determines whether the pattern of ridges and valleys in this image matches the pattern of ridges and valleys in pre-scanned images.
b. Badge Monitoring Attendance System:
The Badge Monitoring Attendance System is most commonly used in places where people work with radioactive materials, such as X-ray labs and nuclear centres. The radioactive badge is worn by the person somewhere between the neck and the waist such that the front faces the source of radiation.
c. Swipe Card Attendance:
The Swipe Card Attendance System works by the person swiping his or her card at entry and exit of the gate, at which point the attendance is recorded. A swipe card must come in contact with the corresponding card reader before any transaction can take place. The transaction becomes active when the magnetic stripe on the card is moved through a console at the gate.
d. Access Card Punching Attendance System:
A punch card is a flat, stiff piece of paper with notches cut into it that contains digital information. In a punch card attendance system, students use this punch or proximity card for clocking in and/or out. To use a punch card, students just need to wave the card near a reader, which then verifies whether the correct person is logging in or out.

1.4.3 Advanced Attendance Systems:
a. Retinal Scan-based Attendance System:
The Retinal Scan-based Attendance System makes use of retinal features and marks attendance on retinal recognition. An eye scan or retinal scan is a biometric system that identifies a person by using the unique patterns of the retina. The human retina contains a complex pattern of blood vessels (retinal veins) through which an eye scanner device can easily identify a person and can even differentiate identical twins. To scan the human retina, a retinal scanner uses the reflection of light absorbed by the retinal veins.
b. Gait Recognition Attendance System:
The Gait Recognition Attendance System records and recognizes an individual by examining the way the individual walks, saunters, swaggers or sashays, with up to 90 percent accuracy.
c. Facial Recognition Attendance System:
The Facial Recognition Attendance System makes use of facial features such as distance
between the eyes, width of the nose, depth of the eye sockets, the shape of the cheekbones, the
length of the jaw line, etc. to recognize and mark attendance.
d. Sensor Detection Attendance System:
The Sensor Detection Attendance System uses RFID (Radio Frequency Identification) to identify individuals. A radio frequency identification reader (RFID reader) is a device used to gather information from an RFID tag, which is used to track individual objects. Radio waves are used to transfer data from the tag to the reader. RFID is a technology similar in principle to bar codes.
1.4.4 Advantage Of Existing System:

a. Increased Productivity:
Organizations that use a manual roll call process spend several hours each day collecting time cards, re-entering illegible data by hand, faxing, phoning and processing the roll call. With an automated time and attendance system, the roll call process takes just minutes each period.
b. Accuracy:
With an automated system, human error is eliminated. When you manually track your students' time, the students typically report their hours after attending, which increases the likelihood of inaccurate reporting. A student may not intend to misrepresent his hours; he may simply forget his actual in and out times. Or, if a student has illegible handwriting, it could be difficult for the roll taker to determine the actual hours attended. With manual reporting, the organization is basically relying on the honour system. This system can be abused, which can lead to time theft.
c. Savings:
With an automated system, you'll save roll call processing hours and eliminate time theft, which means your bottom line will improve.
d. Regulatory Compliance:
While an automated time and attendance system will not guarantee compliance with all student-related laws, the data collected through the system can ensure you have the information at your fingertips needed to comply with all labour regulations. With an automated timekeeping system, you'll have the ability to pull up reports quickly, providing all the information you'll need if you're ever subject to an audit.
1.5 PROPOSED SYSTEM:
The proposed system, "Face Recognition Digital Attendance Management System", overcomes the problems of the existing systems mentioned previously. It mainly incorporates facial recognition to mark students' attendance in the database.
1.5.1 Salient Features of the Proposed System:
a. Face-mapping:
Facial features of the student such as distance between the eyes, width of the nose, depth
of the eye sockets, the shape of the cheekbones, the length of the jaw line, etc. are registered into
the database. Students are recognized based on these stored facial features, and if a match is found,
the student is marked present and the same is updated into the database. In all other cases, the
student is recorded absent in the database.
b. Complete Automation:
The system is automated to its full potential. The algorithm runs for the first few minutes of every hour, captures the attendance of students present in class, and restarts at the beginning of the next hour until the end of the day.
c. Immediate Update:
Once the algorithm successfully recognizes the student, the attendance for the corresponding student is updated in the database automatically, without any human intervention.

d. Three-step Management:
The entire system is efficiently split into three components. First, the facial features are stored and the model is trained. Second, the student is recognized by the "Haarcascade" facial recognition algorithm. Third, the attendance is automatically updated in the database.
e. Excel Sheet Generation:
After attendance is taken, an Excel sheet is generated containing the ID, NAME, DATE and TIME.

Chapter 2

REVIEW OF LITERATURE

“A literature review shows readers that you have an in-depth grasp of your
subject and that you understand where your own research fits into and adds to an
existing body of agreed knowledge”
2.1 Student Attendance System:
Arun Katara et al. (2017) mentioned the disadvantages of RFID (Radio Frequency Identification) card systems, fingerprint systems and iris recognition systems. The RFID card system is often implemented due to its simplicity, but users tend to help their friends check in as long as they have their friends' ID cards. The fingerprint system is effective but not efficient, because the verification process takes time and users have to line up and verify one by one. For face recognition, however, the human face is always exposed and contains less information compared to the iris; an iris recognition system, which captures more detail, might invade the privacy of the user. Voice recognition is available, but it is less accurate than the other methods. Hence, a face recognition system is suggested for implementation in the student attendance system.
2.2 Face Detection:
The difference between face detection and face recognition is often misunderstood. Face detection determines only the face segment or face region in an image, whereas face recognition identifies the owner of the facial image. S. Aanjanadevi et al. (2017) and Wei-Lun Chao (2007) presented a few factors which cause face detection and face recognition to encounter difficulties. These factors include background, illumination, pose, expression, occlusion, rotation, scaling and translation.
2.3 Viola-Jones Algorithm:
The Viola-Jones algorithm, introduced by P. Viola and M. J. Jones (2001), is the most popular algorithm for localizing the face segment in static images or video frames. The algorithm consists of four parts: the first is the Haar feature, the second is the creation of the integral image, followed by the application of AdaBoost in the third part, and lastly the cascading process.
The Viola-Jones algorithm analyses a given image using Haar features consisting of multiple rectangles (Mekha Joseph et al., 2016). The features act as window functions mapped onto the image.

A single value representing each feature can be computed by subtracting the sum of the pixels under the white rectangle(s) from the sum under the black rectangle(s) (Mekha Joseph et al., 2016).
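As an illustrative sketch (not the Viola-Jones authors' implementation), a two-rectangle Haar-like feature can be evaluated in constant time with an integral image. The function names below are hypothetical, and the sign convention (white minus black) varies between descriptions:

```python
import numpy as np

def integral_image(img):
    # Cumulative sum over rows then columns gives the summed-area table.
    return img.cumsum(axis=0).cumsum(axis=1)

def region_sum(ii, top, left, h, w):
    # Sum of pixels in a rectangle using at most four integral-image lookups.
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def two_rect_feature(ii, top, left, h, w):
    # A two-rectangle Haar feature: white (left half) minus black (right half).
    white = region_sum(ii, top, left, h, w // 2)
    black = region_sum(ii, top, left + w // 2, h, w // 2)
    return white - black

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
print(two_rect_feature(ii, 0, 0, 4, 4))  # -16 for this toy image
```

Because each rectangle sum costs only four lookups regardless of its size, thousands of such features can be evaluated per detection window, which is what makes the cascade fast.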
2.4 Pre-Processing:
Subhi Singh et al. (2015) suggested cropping the detected face and converting the colour image to grayscale for pre-processing. They also proposed applying an affine transform to align the facial image based on the coordinates of the middle of the eyes, and scaling the image. Arun Katara et al. (2017), Akshara Jadhav et al. (2017), and Shireesha Chintalapati and M. V. Raghunadh (2013) all proposed applying histogram equalization to the facial image and scaling the images during pre-processing.
Pre-processing enhances the performance of the system and plays an essential role in improving the accuracy of face recognition. Scaling is one of the important pre-processing steps, used to manipulate the size of the image. Scaling down an image increases processing speed by reducing the number of computations, since the number of pixels is reduced. The size and pixels of an image carry spatial information. Gonzalez, R. C. and Woods (2008) mentioned that spatial information is a measure of the smallest discernible detail in an image. Hence, spatial information has to be manipulated carefully to avoid distorting the image and to prevent the checkerboard effect. The size should be the same for all images for normalization and standardization purposes. Subhi Singh et al. (2015) proposed PCA (Principal Component Analysis) to extract features from facial images; since equal image length and width are preferred, images were scaled to 120 × 120 pixels. Fig. 2.4.1 shows the representation.

Fig.2.4.1: Images Show Checkerboard Effect Significantly Increasing from Left to Right
(Gonzalez, R. C., & Woods, 2008)

Besides scaling, colour images are usually converted to grayscale for pre-processing. Grayscale images are believed to be less sensitive to illumination conditions and take less computational time. A grayscale image is an 8-bit image whose pixels range from 0 to 255, whereas a colour image is a 24-bit image whose pixels can take 16,777,216 values.

Hence, a colour image requires more storage space and more computational power than a grayscale image (Kanan and Cottrell, 2012). If colour is not necessary for the computation, it can be considered noise. In addition, pre-processing is important for enhancing the contrast of images. Pratiksha M. Patel (2016) mentioned that histogram equalization is one of the pre-processing methods for improving the contrast of an image. It provides a uniform distribution of intensities over the intensity axis, which also reduces the effect of uneven illumination. Fig. 2.4.2 shows facial images in grayscale.

Fig.2.4.2: Facial Images Were Converted To Grayscale, Histogram Equalization


Was Applied and Images Were Resized to 100x100
(Shireesha Chintalapati and M.V. Raghunadh, 2013)
There are a few methods to improve the contrast of images other than Histogram Equalization.
Neethu M. Sasi and V. K. Jayasree (2013) studied Histogram Equalization and Contrast Limited
Adaptive Histogram Equalization (CLAHE) in order to enhance myocardial perfusion images.
Aliaa A. A. Youssif (2006) studied contrast enhancement together with illumination equalization methods to segment retinal vasculature. In addition, A., I. and E.Z., F. (2016) studied image contrast enhancement techniques and their performance. Unlike histogram equalization, which operates on the data of the entire image, CLAHE operates on small regions throughout the image. Hence, Contrast Limited Adaptive Histogram Equalization is believed to outperform conventional histogram equalization.
2.5 Feature Extraction
A feature is a set of data that represents the information in an image. Extraction of facial features is essential for face recognition. However, the selection of features can be an arduous task. A feature extraction algorithm has to be consistent and stable over a variety of changes in order to give highly accurate results.

There are a few feature extraction methods for face recognition. Bhuvaneshwari et al. (2017), Abhishek Singh and Saurabh Kumar (2012), and Liton Chandra Paul and Abdulla Al Sumam (2012) proposed PCA for face recognition. D. Nithya (2015) also used PCA in a face recognition based student attendance system.
PCA is known for its robustness and high-speed computation. Basically, PCA retains the variation in the data and removes unnecessary correlations among the original features. PCA is essentially a dimension reduction algorithm: it compresses each facial image, represented as a matrix, into a single column vector. Furthermore, PCA removes the average value from each image to centre the image data. The principal components of the distribution of facial images are known as eigenfaces. Every facial image in the training set contributes to the eigenfaces, so the eigenfaces encode the largest variation among the known facial images. Training images and test images are then projected onto the eigenface space to obtain the projected training images and the projected test image respectively. The Euclidean distance between the projected training images and the projected test image is computed to perform the recognition. The PCA feature extraction process includes all trained facial images; hence, the extracted features contain correlations between the facial images in the training set, and the recognition result of PCA depends strongly on the training set images.
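The pipeline above (centre, find eigenfaces, project, compare by Euclidean distance) can be sketched on toy data with NumPy. Real eigenface systems use actual face images; here random vectors merely stand in for flattened images, and the fifth-component cutoff is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training set": 10 flattened images of 64 pixels each, one per column.
train = rng.normal(size=(64, 10))

# Centre the data by removing the mean face.
mean_face = train.mean(axis=1, keepdims=True)
centred = train - mean_face

# Eigenfaces are the leading left singular vectors of the centred data.
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = U[:, :5]                       # keep 5 principal components

# Project training images and a test image onto eigenface space.
train_proj = eigenfaces.T @ centred
test = train[:, [3]] + 0.01 * rng.normal(size=(64, 1))  # noisy copy of image 3
test_proj = eigenfaces.T @ (test - mean_face)

# Recognition: nearest projected training image by Euclidean distance.
dists = np.linalg.norm(train_proj - test_proj, axis=0)
print(int(np.argmin(dists)))  # 3: the noisy probe matches training image 3
```

The SVD route is equivalent to the eigendecomposition of the covariance matrix but is numerically better behaved when the number of pixels far exceeds the number of images.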

Fig. 2.5.1: PCA Dimension Reduction


(Liton Chandra Paul and Abdulla Al Sumam, 2012)
LDA (Linear Discriminant Analysis), also known as the Fisherface method, is another popular algorithm for face recognition. In the paper by Suman Kumar Bhattacharyya and Kumar Rahul (2013), LDA was proposed for face recognition. LDA extracts features by grouping images of the same class and separating images of different classes, and it is able to perform well even with different facial expressions, illumination and pose, due to this class separation characteristic. The same class is defined by facial images of the same individual with different facial expressions, varying lighting or pose, whereas facial images of persons with different identities are categorized as different classes. Same-class images yield the within-class scatter matrix, while different-class images yield the between-class scatter matrix.
LDA maximizes the ratio of the determinant of the between-class scatter matrix to the determinant of the within-class scatter matrix. LDA is believed to have lower error rates than PCA only when many samples per class are trained and the number of distinct classes is small.
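The two scatter matrices and the resulting Fisher direction can be computed directly on a toy two-class example; the 2-D feature vectors below are made-up stand-ins for extracted facial features:

```python
import numpy as np

# Toy features: two "students" (classes) with 3 samples each, 2-D features.
X = np.array([[1.0, 2.0], [1.2, 1.9], [0.9, 2.2],
              [4.0, 0.5], [4.2, 0.4], [3.8, 0.7]])
y = np.array([0, 0, 0, 1, 1, 1])

overall_mean = X.mean(axis=0)
Sw = np.zeros((2, 2))   # within-class scatter
Sb = np.zeros((2, 2))   # between-class scatter
for c in np.unique(y):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    d = (mc - overall_mean).reshape(-1, 1)
    Sb += len(Xc) * (d @ d.T)

# The Fisher direction maximizes between-class over within-class scatter:
# the leading eigenvector of inv(Sw) @ Sb.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
w = eigvecs[:, np.argmax(eigvals.real)].real
print(np.round(w, 3))
```

Projecting the samples onto w separates the two classes far more than any single raw feature does, which is the class separation property the text describes.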

Fig. 2.5.2: Class Separation in LDA


(Suman Kumar Bhattacharyya and Kumar Rahul, 2013)
The original LBP (Local Binary Patterns) operator was introduced in the paper of Timo Ojala et al. (2002). In the paper by Md. Abdur Rahim et al. (2013), LBP was proposed to extract both texture details and contours to represent facial images. LBP divides each facial image into smaller regions and extracts a histogram from each region. The histograms of all regions are concatenated into a single feature vector, which is the representation of the facial image; the chi-square statistic is used to measure similarity between facial images. The smallest window size for each region is 3 by 3. The operator thresholds each pixel in a window against the middle pixel: neighbours larger than the threshold value are assigned 1, whereas neighbours smaller than the threshold value are assigned 0. The resulting binary pixels then form a byte value representing the centre pixel.

Fig.2.5.3: LBP Operator


(Md. Abdur Rahim et.al, 2013)

LBP has a few advantages which make it popular to implement. It has high tolerance to monotonic illumination changes and is able to deal with a variety of facial expressions, image rotation and ageing of persons. These characteristics make LBP prevalent in real-time applications.
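The 3-by-3 thresholding step described above can be sketched as follows. Note that the bit ordering around the centre pixel and the handling of ties (neighbour equal to the centre) vary between LBP formulations; this sketch starts at the top-left corner, proceeds clockwise, and counts ties as 1:

```python
import numpy as np

def lbp_value(window):
    # window: 3x3 grayscale patch; the centre pixel is the threshold.
    centre = window[1, 1]
    # Neighbours visited clockwise from the top-left corner.
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2),
               (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(offsets):
        # Neighbours >= centre contribute a 1 bit, others a 0 bit.
        if window[r, c] >= centre:
            code |= 1 << bit
    return code

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_value(patch))  # 241: bits 0, 4, 5, 6, 7 are set
```

Sliding this operator over every pixel of a region and histogramming the resulting byte values yields the per-region histograms that are concatenated into the face descriptor.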
Neural networks were initially used only for face detection, and were then further studied for face recognition. In the paper by Manisha M. Kasar et al. (2016), an Artificial Neural Network (ANN) was studied for face recognition. An ANN consists of a network of artificial neurons known as "nodes", which act like the human brain in order to perform recognition and classification. These nodes are interconnected, and values are assigned to determine the strength of their connections; a high value indicates a strong connection. Neurons are categorized into three types of nodes or layers: input nodes, hidden nodes and output nodes. Inputs are weighted based on their impact. Hidden nodes consist of mathematical and thresholding functions that compute predictions or probabilities, determining and blocking unnecessary inputs, and the result is yielded at the output nodes. There can be more than one layer of hidden nodes. Multiple inputs generate one output at the output node. Fig. 2.5.4 represents its conceptual view.

Fig. 2.5.4: Artificial Neural Network (ANN)


(Manisha M. Kasar et al., 2016)

The Convolutional Neural Network (CNN) is another neural network algorithm for face recognition. Similar to an ANN, a CNN consists of an input layer, hidden layers and an output layer. The hidden layers of a CNN consist of multiple layer types: convolutional layers, pooling layers, fully connected layers and normalization layers. However, thousands or millions of facial images have to be trained for a CNN to work accurately, and training takes a long time; an example is DeepFace, introduced by Facebook. Fig. 2.5.5 represents a CNN.

Fig. 2.5.5: Deepface Architecture by Facebook
(Yaniv Taigman et al, 2014)

2.6 Types of Feature Extraction:


According to Divyarajsinh N. Parmar and Brijesh B. Mehta (2013), face recognition systems can be categorized into holistic-based methods, feature-based methods and hybrid methods. Holistic-based methods, also known as appearance-based methods, use the entire information of a face patch, performing a transformation to obtain a complex representation for recognition. Examples of holistic-based methods are PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). On the other hand, feature-based methods directly extract detail from specific points, especially facial features such as the eyes, nose and lips, while other information considered redundant is discarded. An example of a feature-based method is LBP (Local Binary Patterns). These methods are sometimes combined into a hybrid method, for example a holistic-based method combined with a feature-based method, in order to increase efficiency.

2.7 Feature Classification And Face Recognition:


Classification is the process of identifying the face. A distance classifier computes the distance between the test image and the training images based on the extracted features. The smaller the distance between the input feature points and the trained feature points, the higher the similarity between the test image and the training image. In other words, the facial image with the smallest distance is classified as the same person. Deepesh Raj (2011) mentioned several types of distance classifiers, such as Euclidean distance, city block distance and Mahalanobis distance, for face recognition. Md. Abdur Rahim et al. (2013) implemented the chi-square statistic as the distance classifier for the LBP operator.
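The nearest-neighbour classification described above can be sketched with three of the cited distance measures; the enrolled and probe feature vectors below are made-up illustrations, and the small epsilon in the chi-square formula is added only to avoid division by zero:

```python
import numpy as np

def euclidean(a, b):
    return np.sqrt(((a - b) ** 2).sum())

def city_block(a, b):
    return np.abs(a - b).sum()

def chi_square(a, b, eps=1e-10):
    # Chi-square statistic, commonly paired with LBP histograms.
    return (((a - b) ** 2) / (a + b + eps)).sum()

train = np.array([[0.2, 0.5, 0.3],
                  [0.6, 0.1, 0.3]])    # two enrolled feature vectors
test = np.array([0.25, 0.45, 0.30])    # probe feature vector

# Classify the probe as the enrolled identity with the smallest distance.
dists = [chi_square(t, test) for t in train]
print(int(np.argmin(dists)))  # 0: the probe is closest to the first identity
```

Swapping `chi_square` for `euclidean` or `city_block` changes only the distance function, not the nearest-neighbour decision rule itself.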

Chapter 3

REQUIREMENT AND METHODOLOGY

3.1 Requirements Specifications:


 Software Requirements:
Operating System: Windows environments
Browser: Any efficient, fast-paced modern-day browser
 Hardware Requirements:
Processor: Pentium IV processor or higher
Hard Disk: 400 MB minimum hard disk storage
RAM: 512 MB or more
 Developer Requirements:
Operating System: Windows environments
Language: Python
Package Installer: OpenCV 3.0, NumPy, python-csv, Pillow
Server Base: Local host
RAM: 2 GB minimum
Processor: Pentium IV processor or higher
3.2 Methodology:
The approach implements a face recognition based student attendance system. The methodology flow begins with the capture of an image using a simple and handy interface, followed by pre-processing of the captured facial images, then feature extraction from the facial images, subjective selection, and lastly classification of the facial images to be recognized.
The entire system, "Face Recognition Digital Attendance Management System", is efficiently divided into four design modules, namely Feature Registration, Facial Recognition, Attendance Update and Excel Sheet Generation.
3.2.1 Feature Registration:
The facial features, such as the distance between the eyes, the width of the nose, the depth of the eye sockets, the shape of the cheekbones and the length of the jaw line, are registered into the database folder by using the facial recognition XML file from the 'OpenCV' Python library. Once the features are registered, the model is trained using the gathered features. Fig. 3.2.1 represents a full conceptual view of feature registration.

Fig.3.2.1: Feature Registration

3.2.2 Facial Recognition:


The facial features are fetched from the database and the face of the student is recognized by comparison with the existing values in the database. Facial recognition is done using the 'OpenCV' Python library, particularly using the 'Haarcascade' code. Fig. 3.2.2 gives a diagrammatic view of facial recognition.

Fig.3.2.2: Facial Recognition

3.2.3 Attendance Update:


Once facial recognition is done successfully, the attendance for the corresponding student is updated in the database automatically, without any human intervention. If a match is found in the existing database, the student is marked present; in all other cases the student is marked absent. Fig. 3.2.3 illustrates the attendance update.
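A minimal sketch of the update step, assuming a CSV file as the attendance store with the ID, NAME, DATE and TIME fields mentioned earlier; the function name and file path are illustrative, not the project's actual code:

```python
import csv
from datetime import datetime

def mark_attendance(student_id, name, path="attendance.csv"):
    # Append one row per recognized student: ID, NAME, DATE, TIME.
    now = datetime.now()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([student_id, name,
                         now.strftime("%Y-%m-%d"),
                         now.strftime("%H:%M:%S")])

mark_attendance("S001", "Alice")
```

Appending a timestamped row per recognition keeps the update atomic per student, and the resulting CSV opens directly in Excel, which also covers the Excel sheet generation module.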

Fig. 3.2.3: Attendance Update

3.2.4 Activity Diagram:

Fig. 3.2.4: Activity Diagram – Decision Making and Implementation

Fig. 3.2.4 signifies the activity flow in terms of decisions and how their implementation drives the application's activity. The flow starts with capturing an input image. The image capturing process runs in a loop until a proper input image suitable for face detection and face recognition is captured. The captured image is pre-processed, and extraneous details and background noise are removed. Specific facial features, such as the distance between the eyes, the width of the nose, the depth of the eye sockets, the shape of the cheekbones and the length of the jaw line, are analysed and stored.
The captured input image, after being pre-processed and having its facial features extracted, is then verified against the existing database, which consists of facial features recorded at the time of registration of students' details into the system. If a match is found, the corresponding student is marked.
3.2.5 Sequence Diagram:

Fig. 3.2.5: Sequence Diagram - Object Interaction and Process Execution

Fig. 3.2.5 indicates the interaction amongst the application's objects. There are three
objects: User, Face Detector, and System. The 'User', i.e. the student, registers
himself/herself with the 'Face Detector'. This data is stored in the 'System'. The 'Face
Detector' retrieves data from the 'System' and accesses the input image from the 'User'.
The 'User' sends the input value to the 'Face Detector', and the 'Face Detector' searches for
a match in the 'System'. If a match is found, the 'User' is marked present, with NAME, ID and
TIME recorded.

Table 3.3: Test Cases and Outputs

1. Starting Up and Running Server
   Expected Output: Runs successfully and system checks take place properly without errors.
   Actual Output: System checks report syntax errors in code and import crashes.
   Remarks: Inefficient code loops and improper library imports.

2. Server Start-up and Execution
   Expected Output: Server starts up successfully and the user gives requirement-based inputs.
   Actual Output: The server takes a gap of a few milliseconds to start and run.
   Remarks: Make sure the server is not started abruptly.

3. Feature Registration
   Expected Output: Registration successful with the help of UIDs, stored within the dataset folder.
   Actual Output: No features registered and hence no dataset available for detection.
   Remarks: Register faces in a well-lit environment for maximum accuracy.

4. Face Detection
   Expected Output: Successful detection of faces by tallying against images of the given dataset.
   Actual Output: Inefficient face detection and improper results shown.
   Remarks: Train the system with a larger dataset.

5. Model Training
   Expected Output: Efficient training completed within the specified time period.
   Actual Output: Improper UID entered while registering the face, leading to inefficient results.
   Remarks: Make sure inputs are given correctly before registration takes place.

6. Python Setup
   Expected Output: All required specifications are followed as specified.
   Actual Output: Improper path details and the server command does not execute.
   Remarks: System paths differ between machines; define them accordingly.

7. Database Update
   Expected Output: Student's attendance successfully appended to the Excel sheet.
   Actual Output: Database overload due to improper appending of data.
   Remarks: Group and arrange the database on a timely basis.

8. OpenCV Extraction
   Expected Output: OpenCV installed successfully and properly imported.
   Actual Output: Version of OpenCV differs from that of NumPy and the code crashes.
   Remarks: All required libraries have to be installed with compatible versions.

Chapter 4

RESULT AND DISCUSSION

4.1 Result:
In this proposed approach, a face recognition student attendance system with a user-friendly
interface is designed using a Python Tkinter GUI (Graphical User Interface). Several buttons
are designed into the interface, each providing a specific function: the start button
initializes the camera and performs face recognition automatically on the detected face; the
register button allows enrolment or registration of students; and the update button trains
the latest images that have been registered in the database. Lastly, the browse and recognize
buttons browse facial images from the selected database and recognize the selected image,
respectively, to test the functionality of the system.
4.1.1 Feature Registration:

Fig. 4.1.1: Feature Registration Screenshot

Fig. 4.1.1 demonstrates the process of feature registration. The camera window opens for a
few seconds, captures 100 images from the video frames, and stores them in the database with
the respective student id. OpenCV uses the Haarcascade algorithm (Abhish Ijari, Anand
Mannikeri, Vinod Kumar Gulmikar) to extract features.
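Each captured frame is tied back to its student through the image file name. As a small illustration (the `serial_from_filename` helper name is ours), the appendix's getImagesAndLabels routine recovers the label from the second dot-separated field of names of the form `name.serial.id.sample.jpg`:

```python
import os

def serial_from_filename(path):
    """Recover the numeric serial that labels a stored training image.

    Mirrors the parsing in the appendix: frames are saved as
    "name.serial.id.sample.jpg", so the serial is field [1].
    """
    return int(os.path.split(path)[-1].split(".")[1])
```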
4.1.2 Face Recognition:

Fig. 4.1.2: Face Recognition Screenshot


Fig. 4.1.2 shows the outcome of facial recognition. Python libraries including NumPy help
the system recognize faces by tallying against the dataset. OpenCV's detectMultiScale()
function detects the faces seen through the camera, and the system recognises them using the
existing dataset.

4.1.3 Attendance Update:

Fig. 4.1.3: Digital Attendance Sheet

Fig. 4.1.3 displays the Excel sheet where, upon recognition of a face, attendance is marked
for the respective student for that corresponding hour automatically, without any human
intervention. The sheet acts as the database in this case.
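The update itself reduces to appending one row per recognized student. A minimal standard-library sketch (the `update_attendance` helper name is ours; the file-per-day layout and columns follow the appendix code):

```python
import csv
import datetime
import os

def update_attendance(student_id, name, path="Attendance.csv"):
    """Append one present-mark with the current date and time."""
    now = datetime.datetime.now()
    new_file = not os.path.isfile(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:  # write the header once, on first use
            writer.writerow(["Id", "Name", "Date", "Time"])
        writer.writerow([student_id, name,
                         now.strftime("%d-%m-%Y"), now.strftime("%H:%M:%S")])
```

The resulting CSV opens directly in Excel, which is why the report refers to it as the Excel sheet.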

4.2 Discussion:
This proposed approach provides a method to perform face recognition for a student attendance
system, based on texture-based features of facial images. Face recognition is the
identification of an individual by comparing his/her real-time captured image with that
person's stored images in the database. Thus, the training set has to be chosen based on the
latest appearance of an individual, while also taking important factors such as illumination
into consideration.
The proposed approach is trained and tested on different datasets. The Yale face database,
which consists of one hundred images of fifteen individuals under multiple conditions, is
used. However, this database consists of only grayscale images. Hence, our own database of
colour images is also used, further categorized into a high-quality set and a low-quality
set, as the images differ in quality: some are blurred while others are clearer.
The Viola-Jones object detection framework is applied in this approach to detect and localize
the face in a given facial image or video frame. From the detected face, an algorithm is
designed to extract the important features needed for face recognition.
Some pre-processing steps are performed on the input facial image before the features are
extracted. Median filtering is used because it preserves the edges of the image while
removing noise. The facial image is scaled to a suitable size for standardization and, if
necessary, converted to grayscale, because the LBP operator works on grayscale images.
The recognition rate of the LBP operator with different radii is then computed on our own
database. However, varying the radius does not give significantly different results, because
no critical illumination problem exists in the images of our database. Hence, the pixels of
the good-quality images are modified to generate illumination effects, in order to determine
the impact of different LBP operator sizes.
The database with good-quality colour images achieves the highest accuracy (100%) whether one
image or two images per individual are trained, whereas the database with poor-quality colour
images has an average accuracy of 86.54% when only one image per individual is trained and
88.46% when two images per individual are trained. It can be said that the approach works
best with good-quality images; poor-quality images degrade the performance of the algorithm.
The poor-quality images were captured using a laptop camera and include relatively dark
images, blurred images, and images with too much unwanted noise. Unwanted noise can be
reduced by applying median filtering.

Chapter 5

CONCLUSION AND SUMMARY

5.1 Conclusion:
The Face Recognition Digital Attendance Management System has the main goal of automating the
process of managing attendance. The system involves three main steps: registration,
authentication, and update. Its fundamental aim is to reduce time consumption and the need to
maintain paperwork. Since the advent of technology, humans have evolved and adapted to
changes for their convenience. Bringing this idea to practice helps people progress
effectively and efficiently, and this system helps the whole organization achieve what is
necessary by eliminating tedious and repetitive tasks.
Future Scope:
The system is built on a combination of several technologies and overcomes most manual flaws,
and thus stands apart from existing systems. However, database management remains an area of
concern: the database needs to be cleaned on a cyclic and timely basis to avoid data
overflow. Furthermore, since the system is built upon machine learning, effective training
and efficient procedures must be followed to approach 100% accuracy. Finally, the system can
be integrated with the attendance systems currently used in the organization in order to
achieve maximum efficiency.
5.2 Summary:
Initially, the facial features of the student are captured based on convolutional neural
networks (Kewen Yan, Shaohui Huang, Yaoxian Song, Wei Liu, Neng Fan), extracted, and
registered into the database. Then, at the time of taking attendance, the facial features are
compared against the existing database images. If a match is found, attendance for the
respective student for that corresponding hour is marked in the database.

REFERENCES

 Robinson-Riegler, G., & Robinson-Riegler, B. (2008). Cognitive psychology: applying the


science of the mind. Boston, Pearson/Allyn and Bacon.
 Margaret Rouse. (2012). What is facial recognition? - Definition from WhatIs.com. [online]
Available at: http://whatis.techtarget.com/definition/facial-recognition
 Robert Silk. (2017). Biometrics: Facial recognition tech coming to an airport near you:
Travel Weekly. Available at:http://www.travelweekly.com/Travel-News/Airline-
News/Biometrics-Facialrecognition-tech-coming-airport-near-you.
 Sidney Fussell. (2018). NEWS Facebook's New Face Recognition Features: What We Do
(and Don't) Know. [online] Available at: https://gizmodo.com/facebooks-new-face-
recognition-features-what-we-do-an1823359911 [Accessed 25 Mar. 2022].
 deAgonia, M. (2017). Apple's Face ID [The iPhone X's facial recognition tech explained].
https://www.computerworld.com/article/3235140/apple-ios/apples-face-id-theiphone-xs-
facial-recognition-tech-explained.html [Accessed 25 Mar. 2022].
 Jesse Davis West. (2017). History of Face Recognition - Facial recognition software.
[online].
 Reichert, C. (2017). Intel demos 5G facial-recognition payment technology. [online] ZDNet.
Available at: http://www.zdnet.com/article/intel-demos-5g-facial-recognition-paymenttechnology/
[Accessed 25 Mar. 2022].
 Zhao, W., Chellappa, R., Phillips, P. and Rosenfeld, A. (2003). Face recognition. ACM
Computing Surveys, 35(4), pp.399-458.
 Pooja G.R, et al. (2010). An automated Attendance System Using Image Processing.
 Wagh, P., Thakare, R., Chaudhari, J. and Patil, S. (2015). Attendance system based on face
recognition using eigen face and PCA algorithms. International Conference on Green
Computing and Internet of Things.
 S. Aanjanadevi, V. Palanisamy and R. Anandha Jothi. (2017). A Study on Secure Online Voting
System using Biometrics Face Detection and Recognition Algorithms. International Journal for
Modern Trends in Science and Technology, V3(8).
 Wei-Lun Chao. (2007). Face Recognition, GICE, National Taiwan University.
 Akshara Jadhav. (2017). Automated Attendance System Using Face Recognition. International
Research Journal of Engineering and Technology, V4(1).
 P. Arun Mozhi Devan et al., (2017). Smart Attendance System Using Face Recognition.
Advances in Natural and Applied Sciences. 11(7), Pages: 139-144
 Rahul V. Patil and S. B.Bangar. (2017). Video Surveillance Based Attendance system.
IJARCCE, 6(3), pp.708-713.
 Mrunmayee Shirodkar. (2015). Automated Attendance Management System using Face
Recognition. International Journal of Computer Applications and International Conference
and Workshop on Emerging Trends in Technology.
 Naveed Khan Balcoh. (2012). Algorithm for Efficient Attendance Management: Face Recognition
based approach. International Journal of Computer Science Issues, V9(4), No. 1.
 Varsha Gupta, Dipesh Sharma. (2014), “A Study of Various Face Detection Methods”,
International Journal of Advanced Research in Computer and Communication
Engineering), vol.3, no. 5.
 P. Viola, M. J. Jones. (2004), “Robust Real-Time Face Detection”, International Journal
of Computer Vision 57(2), 137–154.
 Mekha Joseph et al. (2016). Children's Transportation Safety System Using Real Time Face
Recognition. International Journal of Advanced Research in Computer and
Communication Engineering V5 (3).
 Srushti Girhe et al. (2015). Computer Vision Based Semi-automatic Algorithm for face
detection. International Journal on Recent and Innovation Trends in Computing and
Communication V3(2).
 Burak Ozen. (2017).Introduction to Boosting Methodology & Adaboost algorithm. [online]
Available at: https://www.linkedin.com/pulse/introduction-boostingmethodology-
adaboost-algorithm-burak-ozen [Accessed 12 Apr. 2018].
 Chris McCormick. (2013). Adaboost Tutorial. [online] Available at:
http://mccormickml.com/2013/12/13/adaboost-tutorial/ [Accessed 12 Apr. 2018].
 Kihwan Kim. (2011).Rapid Object Detection using a Boosted Cascade of Simple Feature
and Fast Face Detection via Morphology-Based Pre-processing.
 Subhi Singh. (2015). Automatic Lecture Attendance System Using Face
Reorganizationmatrix. Academic International Online Journal of Engineering and
Technology.V3 (1).
 Shireesha Chintalapati, M.V. Raghunadh. (2014), “Automated Attendance Management System
Based On Face Recognition Algorithms”, IEEE International Conference on Computational
Intelligence and Computing Research.
 Gonzalez, R. C., & Woods, R. E. (2002). Digital image processing. Upper Saddle River,
N.J., Prentice Hall.
 Kanan, C. and Cottrell, G. (2012). Color-to-Grayscale: Does the Method Matter in Image
Recognition?. PLoS ONE, 7(1), p.e29740.
 Pratiksha M. Patel (2016). Contrast Enhancement of Images and videos using Histogram
Equalization. International Journal on Recent and Innovation Trends in Computing and
Communication.V4 (11).
 Sasi, N. and Jayasree, V. (2013). Contrast Limited Adaptive Histogram Equalization for
Qualitative Enhancement of Myocardial Perfusion Images. Engineering, 05(10), pp.326-
331.
 Aliaa A. A. Youssif, and Atef Z. (2006).Comparative Study of Contrast Enhancement and
Illumination Equalization Methods for Retinal Vasculature Segmentation. Proc. Cairo
International Biomedical Engineering Conference.
 A., I. and E.Z., F. (2016). Image Contrast Enhancement Techniques: A Comparative Study
of Performance. International Journal of Computer Applications, 137(13), pp.43-48.
 Bhuvaneshwari, K., Abirami, A. and Sripriya, N. (2017). Face Recognition Using PCA.
International Journal Of Engineering And Computer Science, 6(4).
 Abhishek Singh and Saurabh Kumar. (2012). Face Recognition Using PCA and Eigen Face
Approach. [online] Available at: http://ethesis.nitrkl.ac.in/3814/1/Thesis.pdf [Accessed 10
Apr. 2022].
 LC Paul and Abdulla Al Sumam. (2012). Face Recognition Using Principal Component
Analysis Method. IJARCET, V1 (9).
 D. Nithya (2015). Automated Class Attendance System based on Face Recognition using
PCA Algorithm. International Journal of Engineering Research and, V4 (12).
 Suman Kumar Bhattacharyya & Kumar Rahul. (2013), “Face Recognition by Linear
Discriminant Analysis”, International Journal of Communication Network Security, V2(2),
pp 31-35.
 Ojala, T., Pietikainen, M. and Maenpaa, T. (2002). Multiresolution gray-scale and rotation
invariant texture classification with local binary patterns. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 24(7), pp.971-987.
 Md. Abdur Rahim. (2013). Face Recognition Using Local Binary Patterns. Global Journal of
Computer Science and Technology Graphics & Vision, V13(4), Version 1.0.
 Kasar, M., Bhattacharyya, D. and Kim, T. (2016). Face Recognition Using Neural Network: A
Review. International Journal of Security and Its Applications, 10(3), pp.81-100.
 Taigman, Y. et al. (2014). DeepFace: Closing the Gap to Human-Level Performance in Face
Verification. IEEE Conference on Computer Vision and Pattern Recognition.
 Divyarajsinh N. Parmar et al. (2013). Face Recognition Methods and Applications. Int. J.
Computer Technology & Applications, Vol 4(1), pp.84-86.

APPENDIX – SOURCE CODE:

#############################################IMPORTING######################
import tkinter as tk
from tkinter import ttk
from tkinter import messagebox as mess
import tkinter.simpledialog as tsd
import cv2
import os
import csv
import numpy as np
from PIL import Image
import pandas as pd
import datetime
import time

#############################################FUNCTIONS######################

def assure_path_exists(path):
    dir = os.path.dirname(path)
    if not os.path.exists(dir):
        os.makedirs(dir)

##############################################################################

def tick():
    time_string = time.strftime('%H:%M:%S')
    clock.config(text=time_string)
    clock.after(200, tick)

##############################################################################

def contact():
    mess._show(title='Contact us', message="Please contact us on : 'chaudharyabhishek6307044617@gmail.com' ")

##############################################################################

def check_haarcascadefile():
    exists = os.path.isfile("haarcascade_frontalface_default.xml")
    if exists:
        pass
    else:
        mess._show(title='Some file missing', message='Please contact us for help')
        window.destroy()

##############################################################################

def save_pass():
    assure_path_exists("TrainingImageLabel/")
    exists1 = os.path.isfile("TrainingImageLabel\psd.txt")
    if exists1:
        tf = open("TrainingImageLabel\psd.txt", "r")
        key = tf.read()
    else:
        master.destroy()
        new_pas = tsd.askstring('Old Password not found', 'Please enter a new password below', show='*')
        if new_pas == None:
            mess._show(title='No Password Entered', message='Password not set!! Please try again')
        else:
            tf = open("TrainingImageLabel\psd.txt", "w")
            tf.write(new_pas)
            mess._show(title='Password Registered', message='New password was registered successfully!!')
        return
    op = (old.get())
    newp = (new.get())
    nnewp = (nnew.get())
    if (op == key):
        if (newp == nnewp):
            txf = open("TrainingImageLabel\psd.txt", "w")
            txf.write(newp)
        else:
            mess._show(title='Error', message='Confirm new password again!!!')
            return
    else:
        mess._show(title='Wrong Password', message='Please enter correct old password.')
        return
    mess._show(title='Password Changed', message='Password changed successfully!!')
    master.destroy()

#############################################################################

def change_pass():
    global master
    master = tk.Tk()
    master.geometry("400x160")
    master.resizable(False, False)
    master.title("Change Password")
    master.configure(background="white")
    lbl4 = tk.Label(master, text='Enter Old Password', bg='white', font=('comic', 12, ' bold '))
    lbl4.place(x=10, y=10)
    global old
    old = tk.Entry(master, width=25, fg="black", relief='solid', font=('comic', 12, ' bold '), show='*')
    old.place(x=180, y=10)
    lbl5 = tk.Label(master, text='Enter New Password', bg='white', font=('comic', 12, ' bold '))
    lbl5.place(x=10, y=45)
    global new
    new = tk.Entry(master, width=25, fg="black", relief='solid', font=('comic', 12, ' bold '), show='*')
    new.place(x=180, y=45)
    lbl6 = tk.Label(master, text='Confirm New Password', bg='white', font=('comic', 12, ' bold '))
    lbl6.place(x=10, y=80)
    global nnew
    nnew = tk.Entry(master, width=25, fg="black", relief='solid', font=('comic', 12, ' bold '), show='*')
    nnew.place(x=180, y=80)
    cancel = tk.Button(master, text="Cancel", command=master.destroy, fg="black", bg="red", height=1, width=25, activebackground="white", font=('comic', 10, ' bold '))
    cancel.place(x=200, y=120)
    save1 = tk.Button(master, text="Save", command=save_pass, fg="black", bg="#00fcca", height=1, width=25, activebackground="white", font=('comic', 10, ' bold '))
    save1.place(x=10, y=120)
    master.mainloop()

##############################################################################

def psw():
    assure_path_exists("TrainingImageLabel/")
    exists1 = os.path.isfile("TrainingImageLabel\psd.txt")
    if exists1:
        tf = open("TrainingImageLabel\psd.txt", "r")
        key = tf.read()
    else:
        new_pas = tsd.askstring('Old Password not found', 'Please enter a new password below', show='*')
        if new_pas == None:
            mess._show(title='No Password Entered', message='Password not set!! Please try again')
        else:
            tf = open("TrainingImageLabel\psd.txt", "w")
            tf.write(new_pas)
            mess._show(title='Password Registered', message='New password was registered successfully!!')
        return
    password = tsd.askstring('Password', 'Enter Password', show='*')
    if (password == key):
        TrainImages()
    elif (password == None):
        pass
    else:
        mess._show(title='Wrong Password', message='You have entered wrong password')

##############################################################################

def clear():
    txt.delete(0, 'end')
    res = "1)Take Images  >>>  2)Save Profile"
    message1.configure(text=res)


def clear2():
    txt2.delete(0, 'end')
    res = "1)Take Images  >>>  2)Save Profile"
    message1.configure(text=res)

##############################################################################

def TakeImages():
    check_haarcascadefile()
    columns = ['SERIAL NO.', '', 'ID', '', 'NAME']
    assure_path_exists("StudentDetails/")
    assure_path_exists("TrainingImage/")
    serial = 0
    exists = os.path.isfile("StudentDetails\StudentDetails.csv")
    if exists:
        with open("StudentDetails\StudentDetails.csv", 'r') as csvFile1:
            reader1 = csv.reader(csvFile1)
            for l in reader1:
                serial = serial + 1
            serial = (serial // 2)
        csvFile1.close()
    else:
        with open("StudentDetails\StudentDetails.csv", 'a+') as csvFile1:
            writer = csv.writer(csvFile1)
            writer.writerow(columns)
            serial = 1
        csvFile1.close()
    Id = (txt.get())
    name = (txt2.get())
    if ((name.isalpha()) or (' ' in name)):
        cam = cv2.VideoCapture(0)
        harcascadePath = "haarcascade_frontalface_default.xml"
        detector = cv2.CascadeClassifier(harcascadePath)
        sampleNum = 0
        while (True):
            ret, img = cam.read()
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, 1.3, 5)
            for (x, y, w, h) in faces:
                cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
                # incrementing sample number
                sampleNum = sampleNum + 1
                # saving the captured face in the dataset folder TrainingImage
                cv2.imwrite("TrainingImage\ " + name + "." + str(serial) + "." + Id + '.' + str(sampleNum) + ".jpg",
                            gray[y:y + h, x:x + w])
                # display the frame
                cv2.imshow('Taking Images', img)
            # wait for 100 milliseconds
            if cv2.waitKey(100) & 0xFF == ord('q'):
                break
            # break if the sample number is more than 100
            elif sampleNum > 100:
                break
        cam.release()
        cv2.destroyAllWindows()
        res = "Images Taken for ID : " + Id
        row = [serial, '', Id, '', name]
        with open('StudentDetails\StudentDetails.csv', 'a+') as csvFile:
            writer = csv.writer(csvFile)
            writer.writerow(row)
        csvFile.close()
        message1.configure(text=res)
    else:
        if (name.isalpha() == False):
            res = "Enter Correct name"
            message.configure(text=res)

##############################################################################

def TrainImages():
    check_haarcascadefile()
    assure_path_exists("TrainingImageLabel/")
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    harcascadePath = "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(harcascadePath)
    faces, ID = getImagesAndLabels("TrainingImage")
    try:
        recognizer.train(faces, np.array(ID))
    except:
        mess._show(title='No Registrations', message='Please Register someone first!!!')
        return
    recognizer.save("TrainingImageLabel\Trainner.yml")
    res = "Profile Saved Successfully"
    message1.configure(text=res)
    message.configure(text='Total Registrations till now : ' + str(ID[0]))

##############################################################################

def getImagesAndLabels(path):
    # get the path of all the files in the folder
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    # create empty face list
    faces = []
    # create empty ID list
    Ids = []
    # now looping through all the image paths and loading the Ids and the images
    for imagePath in imagePaths:
        # loading the image and converting it to gray scale
        pilImage = Image.open(imagePath).convert('L')
        # converting the PIL image into a numpy array
        imageNp = np.array(pilImage, 'uint8')
        # getting the Id from the image file name
        ID = int(os.path.split(imagePath)[-1].split(".")[1])
        # collect the face sample and its label
        faces.append(imageNp)
        Ids.append(ID)
    return faces, Ids

##############################################################################

def TrackImages():
    check_haarcascadefile()
    assure_path_exists("Attendance/")
    assure_path_exists("StudentDetails/")
    for k in tv.get_children():
        tv.delete(k)
    msg = ''
    i = 0
    j = 0
    recognizer = cv2.face.LBPHFaceRecognizer_create()  # cv2.createLBPHFaceRecognizer()
    exists3 = os.path.isfile("TrainingImageLabel\Trainner.yml")
    if exists3:
        recognizer.read("TrainingImageLabel\Trainner.yml")
    else:
        mess._show(title='Data Missing', message='Please click on Save Profile to reset data!!')
        return
    harcascadePath = "haarcascade_frontalface_default.xml"
    faceCascade = cv2.CascadeClassifier(harcascadePath)

    cam = cv2.VideoCapture(0)
    font = cv2.FONT_HERSHEY_SIMPLEX
    col_names = ['Id', '', 'Name', '', 'Date', '', 'Time']
    exists1 = os.path.isfile("StudentDetails\StudentDetails.csv")
    if exists1:
        df = pd.read_csv("StudentDetails\StudentDetails.csv")
    else:
        mess._show(title='Details Missing', message='Students details are missing, please check!')
        cam.release()
        cv2.destroyAllWindows()
        window.destroy()
    while True:
        ret, im = cam.read()
        gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
        faces = faceCascade.detectMultiScale(gray, 1.2, 5)
        for (x, y, w, h) in faces:
            cv2.rectangle(im, (x, y), (x + w, y + h), (225, 0, 0), 2)
            serial, conf = recognizer.predict(gray[y:y + h, x:x + w])
            if (conf < 50):
                ts = time.time()
                date = datetime.datetime.fromtimestamp(ts).strftime('%d-%m-%Y')
                timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
                aa = df.loc[df['SERIAL NO.'] == serial]['NAME'].values
                ID = df.loc[df['SERIAL NO.'] == serial]['ID'].values
                ID = str(ID)
                ID = ID[1:-1]
                bb = str(aa)
                bb = bb[2:-2]
                attendance = [str(ID), '', bb, '', str(date), '', str(timeStamp)]
            else:
                Id = 'Unknown'
                bb = str(Id)
            cv2.putText(im, str(bb), (x, y + h), font, 1, (255, 255, 255), 2)
        cv2.imshow('Taking Attendance', im)
        if (cv2.waitKey(1) == ord('q')):
            break
    ts = time.time()
    date = datetime.datetime.fromtimestamp(ts).strftime('%d-%m-%Y')
    exists = os.path.isfile("Attendance\Attendance_" + date + ".csv")
    if exists:
        with open("Attendance\Attendance_" + date + ".csv", 'a+') as csvFile1:
            writer = csv.writer(csvFile1)
            writer.writerow(attendance)
        csvFile1.close()
    else:
        with open("Attendance\Attendance_" + date + ".csv", 'a+') as csvFile1:
            writer = csv.writer(csvFile1)
            writer.writerow(col_names)
            writer.writerow(attendance)
        csvFile1.close()
    with open("Attendance\Attendance_" + date + ".csv", 'r') as csvFile1:
        reader1 = csv.reader(csvFile1)
        for lines in reader1:
            i = i + 1
            if (i > 1):
                if (i % 2 != 0):
                    iidd = str(lines[0]) + '   '
                    tv.insert('', 0, text=iidd, values=(str(lines[2]), str(lines[4]), str(lines[6])))
    csvFile1.close()
    cam.release()
    cv2.destroyAllWindows()

########################################USEDSTUFFS##########################

global key
key = ''

ts = time.time()
date = datetime.datetime.fromtimestamp(ts).strftime('%d-%m-%Y')
day, month, year = date.split("-")
mont = {'01': 'January',
        '02': 'February',
        '03': 'March',
        '04': 'April',
        '05': 'May',
        '06': 'June',
        '07': 'July',
        '08': 'August',
        '09': 'September',
        '10': 'October',
        '11': 'November',
        '12': 'December'
        }

########################################GUIFRONTEND########################

window = tk.Tk()
window.geometry("1280x720")
window.resizable(True, False)
window.title("Attendance System")
window.configure(background='#2d420a')

frame1 = tk.Frame(window, bg="#c79cff")
frame1.place(relx=0.11, rely=0.17, relwidth=0.39, relheight=0.80)

frame2 = tk.Frame(window, bg="#c79cff")
frame2.place(relx=0.51, rely=0.17, relwidth=0.38, relheight=0.80)

message3 = tk.Label(window, text="Face Recognition Digital Attendance Management System",
                    fg="white", bg="#2d420a", width=55, height=1, font=('comic', 29, ' bold '))
message3.place(x=10, y=10)

frame3 = tk.Frame(window, bg="#c4c6ce")
frame3.place(relx=0.52, rely=0.09, relwidth=0.09, relheight=0.07)

frame4 = tk.Frame(window, bg="#c4c6ce")
frame4.place(relx=0.36, rely=0.09, relwidth=0.16, relheight=0.07)

datef = tk.Label(frame4, text=day + "-" + mont[month] + "-" + year + " | ",
                 fg="#ff61e5", bg="#2d420a", width=55, height=1, font=('comic', 22, ' bold '))
datef.pack(fill='both', expand=1)

clock = tk.Label(frame3, fg="#ff61e5", bg="#2d420a", width=55, height=1, font=('comic', 22, ' bold '))
clock.pack(fill='both', expand=1)
tick()

head2 = tk.Label(frame2, text=" For New Registrations ", fg="black", bg="#00fcca",
                 font=('comic', 17, ' bold '))
head2.grid(row=0, column=0)

head1 = tk.Label(frame1, text=" For Already Registered ", fg="black", bg="#00fcca",
                 font=('comic', 17, ' bold '))
head1.place(x=0, y=0)

lbl = tk.Label(frame2, text="Enter ID", width=20, height=1, fg="black", bg="#c79cff",
               font=('comic', 17, ' bold '))
lbl.place(x=80, y=55)

txt = tk.Entry(frame2, width=32, fg="black", font=('comic', 15, ' bold '))
txt.place(x=30, y=88)

lbl2 = tk.Label(frame2, text="Enter Name", width=20, fg="black", bg="#c79cff",
                font=('comic', 17, ' bold '))
lbl2.place(x=80, y=140)

txt2 = tk.Entry(frame2, width=32, fg="black", font=('comic', 15, ' bold '))
txt2.place(x=30, y=173)

message1 = tk.Label(frame2, text="1)Take Images  >>>  2)Save Profile", bg="#c79cff",
                    fg="black", width=39, height=1, activebackground="#3ffc00", font=('comic', 15, ' bold '))
message1.place(x=7, y=230)

message = tk.Label(frame2, text="", bg="#c79cff", fg="black", width=39, height=1,
                   activebackground="#3ffc00", font=('comic', 16, ' bold '))
message.place(x=7, y=450)

lbl3 = tk.Label(frame1, text="Attendance", width=20, fg="black", bg="#c79cff", height=1,
                font=('comic', 17, ' bold '))
lbl3.place(x=100, y=115)

res = 0
exists = os.path.isfile("StudentDetails\StudentDetails.csv")
if exists:
    with open("StudentDetails\StudentDetails.csv", 'r') as csvFile1:
        reader1 = csv.reader(csvFile1)
        for l in reader1:
            res = res + 1
        res = (res // 2) - 1
    csvFile1.close()
else:
    res = 0
message.configure(text='Total Registrations till now : ' + str(res))

##################### MENUBAR #################################

menubar = tk.Menu(window, relief='ridge')
filemenu = tk.Menu(menubar, tearoff=0)
filemenu.add_command(label='Change Password', command=change_pass)
filemenu.add_command(label='Contact Us', command=contact)
filemenu.add_command(label='Exit', command=window.destroy)
menubar.add_cascade(label='Help', font=('comic', 29, ' bold '), menu=filemenu)

################## TREEVIEW ATTENDANCE TABLE ####################

tv = ttk.Treeview(frame1, height=13, columns=('name', 'date', 'time'))
tv.column('#0', width=82)
tv.column('name', width=130)
tv.column('date', width=140)
tv.column('time', width=133)
tv.grid(row=2, column=0, padx=(0, 0), pady=(150, 0), columnspan=4)
tv.heading('#0', text='ID')
tv.heading('name', text='NAME')
tv.heading('date', text='DATE')
tv.heading('time', text='TIME')

###################### SCROLLBAR ################################

scroll = ttk.Scrollbar(frame1, orient='vertical', command=tv.yview)
scroll.grid(row=2, column=4, padx=(0, 100), pady=(150, 0), sticky='ns')
tv.configure(yscrollcommand=scroll.set)

###################### BUTTONS ##################################

clearButton = tk.Button(frame2, text="Clear", command=clear, fg="black", bg="#ff7221",
                        width=11, activebackground="white", font=('comic', 11, ' bold '))
clearButton.place(x=335, y=86)
clearButton2 = tk.Button(frame2, text="Clear", command=clear2, fg="black", bg="#ff7221",
                         width=11, activebackground="white", font=('comic', 11, ' bold '))
clearButton2.place(x=335, y=172)
takeImg = tk.Button(frame2, text="Take Images", command=TakeImages, fg="white",
                    bg="#6d00fc", width=34, height=1, activebackground="white", font=('comic', 15, ' bold '))
takeImg.place(x=30, y=300)
trainImg = tk.Button(frame2, text="Save Profile", command=psw, fg="white", bg="#6d00fc",
                     width=34, height=1, activebackground="white", font=('comic', 15, ' bold '))
trainImg.place(x=30, y=380)
trackImg = tk.Button(frame1, text="Take Attendance", command=TrackImages, fg="black",
                     bg="#3ffc00", width=35, height=1, activebackground="white", font=('comic', 15, ' bold '))
trackImg.place(x=30, y=50)
quitWindow = tk.Button(frame1, text="Quit", command=window.destroy, fg="black",
                       bg="#eb4600", width=35, height=1, activebackground="white", font=('comic', 15, ' bold '))
quitWindow.place(x=30, y=450)

##################### END ######################################

window.configure(menu=menubar)
window.mainloop()

ABSTRACT

With the advancement of modern technologies in robotics and computer vision, real-time image
processing has become a major area of interest. This project presents an approach for capturing
images from a camera in a real-time environment and processing them as required. It describes a
machine-learning approach to face recognition that achieves fast processing with high
identification rates using OpenCV. The project implements a face detection system based on the
Haar cascade algorithm, running on small, low-cost hardware, and is programmed in Python. The
objective of face recognition is to detect faces and their spatial locations in images or video.
The proposed system detects faces in both greyscale and colour images. The project focuses on
implementing a face detection system for human identification built on the OpenCV library with
Python. The identification pipeline is organised into separate modules: a dataset generator, a
trainer, and a detector. The effectiveness of the system is evaluated by computing the face
recognition rate for each of the databases. The results show that the proposed system can detect
faces even in low-quality images and delivers a high level of performance.
Finally, the data displayed alongside each recognized photograph is stored in a database. This
concept has wide scope in security and surveillance applications.

GROUP MEMBERS
(COMPUTER SCIENCE AND ENGINEERING)
ABHISHEK CHAUDHARY (E-10467/18)
ADITYA KUMAR CHAURASIYA (E-10468/18)
ARYAN YADAV (E-10469/18)
MANOJ KUMAR PAL (E-10473/18)
