
FACE RECOGNITION BASED ATTENDANCE SYSTEM
A Project Report Submitted to

ISLAMIAH COLLEGE (AUTONOMOUS), VANIYAMBADI

Affiliated to the

THIRUVALLUVAR UNIVERSITY, VELLORE


in partial fulfillment of the requirements
for the award of the Degree of

BACHELOR OF COMPUTER APPLICATIONS

BY

P. MOHAMMED SABEEL

Register Number: 31120U09020

Under The Guidance Of

Prof. A. VISWANATHAN, M.C.A, M.Phil.

Department of Computer Science and Applications


Islamiah College (Autonomous), Vaniyambadi-635 752
April – 2023
BONAFIDE CERTIFICATE
FACE RECOGNITION BASED ATTENDANCE SYSTEM
Being submitted to the
ISLAMIAH COLLEGE (AUTONOMOUS)
VANIYAMBADI.
By
P. MOHAMMED SABEEL

Register Number: 31120U09020

in partial fulfillment of the requirements for the award of the degree of

BACHELOR OF COMPUTER APPLICATIONS


is a bonafide record of work carried out by him under the
guidance and supervision of

….…..……………………………. ...………………………………….
(Project Guide) (Head of the Department)
Prof. A. Viswanathan, M.C.A, M.Phil. Prof P. Magizhan, M.Sc., M.Phil., PGDCA.
Assistant Professor Associate Professor & Head
DEPT OF COMPUTER SCIENCE DEPT OF COMPUTER SCIENCE
Islamiah College (Autonomous) Islamiah College (Autonomous)
Vaniyambadi. Vaniyambadi.

The viva-voce examination of this project work was held on……………………….

EXAMINERS:

1.
2.
ACKNOWLEDGEMENT

With profound gratitude I thank Almighty GOD for all blessings showered on
me for completing my course and project work successfully in time.

I take this opportunity to express my gratitude to all those who contributed to
this project work. First of all, I would like to offer my thanks to the Principal,
Dr. T. MOHAMED ILYAS, M.Com., M.B.A., M.Phil., Ph.D., for the facilities and
assistance provided by him during my study.

My sincere thanks to Prof. P. MAGIZHAN, M.Sc., M.Phil., PGDCA., Head of
the Department of Computer Science, Islamiah College (Autonomous), Vaniyambadi,
for sharing his ideas with me and for his full support.

I owe my sincere and enormous gratitude to my venerated faculty
guide, Prof. A. Viswanathan, M.C.A., M.Phil., Assistant Professor, Department of
Computer Science, Islamiah College (Autonomous), Vaniyambadi, for his unfailing
support and valuable suggestions for the successful completion of this project.

I render my thanks to all the faculty members and programmers for their precious help,
direct and indirect, in completing my project successfully.

Last but not least, I consider it my privilege to express my respect to all who guided,
inspired and helped me in the completion of this project.

~P. MOHAMMED SABEEL


FACE RECOGNITION BASED ATTENDANCE SYSTEM

ABSTRACT

In colleges, universities, organizations, schools, and offices, taking attendance is one of the
most important tasks that must be done on a daily basis. Most of the time it is done manually,
such as by calling out names or roll numbers. The main goal of this project is to create a face
recognition based attendance system that turns this manual process into an automated one. The
project meets the requirements for bringing modernization to the way attendance is handled, as
well as the criteria for time management. The device is installed in the classroom, where each
student's information, such as name, roll number, class, section, and photographs, is used to train
the system. The images are extracted using OpenCV. Before the start of the corresponding class,
the student can approach the machine, which will begin taking pictures and comparing them to the
trained dataset. The image is processed as follows: first, faces are detected using a Haar cascade
classifier; then faces are recognized using the LBPH (Local Binary Pattern Histogram) algorithm,
whose histogram data is checked against the established dataset, and the device automatically
marks attendance. An Excel sheet is generated, and it is updated every hour with the information
for the respective class instructor.

Keywords: Face Detection, Face Recognition, Haar Cascade Classifier, Attendance.

i
CONTENTS

ABSTRACT i

LIST OF FIGURES iv

LIST OF TABLES v

CHAPTER 1 INTRODUCTION 1-7


1.1 Project Objective 2

1.2 Background 3

1.3 Problem Statement 4

1.4 Aims and Objective 5

1.5 Flow chart 6

1.6 Scope of the project 7

1.7 System Specification 7

CHAPTER 2 LITERATURE REVIEW 8-18

2.1 Student Attendance System 9

2.2 Digital Image Processing 10

2.3 Steps in Digital Image Processing 10

2.4 Face Detection 11

2.5 Face Recognition 11

2.6 Difference between Face Detection and Face Recognition 12

2.7 Local Binary Pattern Histogram 14

ii
CHAPTER 3 MODEL IMPLEMENTATION AND ANALYSIS 19 – 28

3.1 Introduction 20

3.2 Model Implementation 21

3.3 Design Requirements 22

3.4 Software Implementation 22

3.5 Webcam 24

3.6 Experimental Results 25

CHAPTER 4 SOFTWARE TESTING 29-31

4.1 Unit testing 30

4.2 Integration testing 30

4.3 Validation testing 30

4.4 Verification testing 31

4.5 Maintenance 31

CHAPTER 5 CODING 32-63

5.1 Main.py 33

5.2 Output Images 57

CONCLUSION 64

BIBLIOGRAPHY 65

iii
LIST OF FIGURES:

CONTENTS PAGE NO.


Figure 1.1 Project outline 6
Figure 2.1 A diagram showing the steps in digital image processing 11

Figure 2.2 Haar Feature 13

Figure 2.3 Integral of Image 14

Figure 2.4 The LBP Operation Radius Change 17

Figure 3.1 Model Implementation 21

Figure 3.2 Installing OpenCV 23

Figure 3.3 Web cam 24

Figure 3.4 Dataset Sample 28


Figure 5.1 Home Page 57
Figure 5.2 Registration 58
Figure 5.3 Admin password 59
Figure 5.4 Taking Attendance 60
Figure 5.5 Change Password 61
Figure 5.6 Contact Us 62
Figure 5.7 Attendance Sheet 63

iv
LIST OF TABLES:

CONTENTS PAGE NO.

Table 2.1 Advantages and Disadvantages of Different Biometric Systems 9

Table 2.2 Advantages and Disadvantages of Face Detection Methods 12

Table 3.1 Experimental Results-1 26

Table 3.2 Experimental Results-2 27

v
CHAPTER-1
INTRODUCTION

1
1.1 Project Objective:

Attendance is of prime importance for both the teacher and the student of an educational
organization, so it is very important to keep a record of attendance. The problem arises when we
consider the traditional process of taking attendance in the classroom. Calling out the name or roll
number of each student is not only time-consuming but also demands effort. An automatic
attendance system can therefore solve all of the above problems.

There are some automatic attendance-marking systems currently used by many institutions,
such as biometric (fingerprint) and RFID systems. Although these are automatic and a step ahead
of the traditional method, they fail to meet the time constraint: the student has to wait in a queue
to give attendance, which is time-consuming.

This project introduces an automatic attendance marking system that does not interfere in any
way with the normal teaching procedure. The system can also be used during exam sessions or
other teaching activities where attendance is highly essential. It eliminates classical student
identification, such as calling the name of the student or checking respective identification cards,
which can not only interfere with the ongoing teaching process but can also be stressful for students
during examination sessions. In addition, students have to register in the database to be recognized;
the enrolment can be done on the spot through the user-friendly interface.

2
1.2 Background:

Face recognition is crucial in daily life in order to identify family, friends or people
we are familiar with. We might not perceive that several steps have actually been taken in
order to identify human faces. Human intelligence allows us to receive information and interpret
it in the recognition process. We receive information through the image projected onto our eyes,
specifically onto the retina, in the form of light. Light is a form of electromagnetic wave which is
radiated from a source onto an object and projected to human vision. Robinson-Riegler, G.
mentioned that after visual processing by the human visual system, we classify the shape, size,
contour, and texture of the object in order to analyse the information. The analysed information is
then compared to other representations of objects or faces that exist in our memory for recognition.
In fact, it is a hard challenge to build an automated system with the same capability as a human to
recognize faces. Moreover, a large memory is needed to recognize many different faces; for
example, in a university there are many students of different races and genders, and it is impossible
to remember every individual face without making mistakes. In order to overcome human
limitations, computers with almost limitless memory and high processing speed and power are
used in face recognition systems.

The human face is a unique representation of individual identity. Thus, face recognition
is defined as a biometric method in which identification of an individual is performed by
comparing a real-time captured image with the stored images of that person in the database
(Margaret Rouse).

Nowadays, face recognition systems are prevalent due to their simplicity and strong
performance. For instance, airport protection systems and the FBI use face recognition for criminal
investigations, tracking suspects, missing children and drug activities. Apart from that, Facebook,
a popular social networking website, implements face recognition to allow users to tag their friends
in photos for entertainment purposes.

3
Furthermore, Intel allows users to use face recognition to get access to their online accounts,
and Apple allows users to unlock their mobile phone, the iPhone X, by using face recognition.

Work on face recognition began in the 1960s. Woody Bledsoe, Helen Chan Wolf and
Charles Bisson introduced a system which required the administrator to locate the eyes, ears,
nose and mouth in images. The distances and ratios between the located features and common
reference points were then calculated and compared. The studies were further enhanced by
Goldstein, Harmon, and Lesk in 1970 by using other features such as hair colour and lip thickness
to automate the recognition. In 1988, Kirby and Sirovich first suggested principal component
analysis (PCA) to solve the face recognition problem. Many studies on face recognition have been
conducted continuously since then.

1.3 Problem Statement:

The traditional student attendance marking technique often faces a lot of trouble. The
face recognition student attendance system emphasizes simplicity by eliminating classical
attendance marking techniques such as calling student names or checking respective identification
cards. These not only disturb the teaching process but also cause distraction for students during
exam sessions. Apart from calling names, an attendance sheet is often passed around the classroom
during lecture sessions. A lecture class, especially one with a large number of students, might find
it difficult to have the attendance sheet passed around the class. Thus, a face recognition attendance
system is proposed to replace the manual signing of student presence, which is burdensome and
causes students to get distracted in order to sign for their attendance. Furthermore, the face
recognition based automated student attendance system is able to overcome the problem of
fraudulent attendance, and lecturers do not have to count the number of students several times to
ensure their presence.

4
Hence, there is a need to develop a real-time student attendance system, which
means the identification process must be done within defined time constraints to prevent
omission. The features extracted from the facial images, which represent the identity of the
students, have to be consistent under changes in background, illumination, pose and expression.
High accuracy and fast computation time will be the evaluation points of the performance.

1.4 Aims and Objectives:

The objective of this project is to develop a face recognition attendance system. The expected
achievements in order to fulfil this objective are:

● To detect the face segment from the video frame.

● To extract the useful features from the face detected.

● To classify the features in order to recognize the face detected.

● To record the attendance of the identified student.

5
1.5 Flow chart:

Figure 1.1: Project Outline

6
1.6 Scope of the project:

We set out to design a system comprising two modules. The first module (face
detector) is a mobile component, basically a camera application that captures student
faces and stores them in a file using computer vision face detection algorithms and face
extraction techniques. The second module is a desktop application that performs face recognition
on the captured images (faces) in the file, marks the student register and then stores the results in
a database for future analysis.

1.7 System Specification:

1.7.1 Hardware requirements:

The hardware used for the development of the project is:

Processor : Intel Core i5
RAM : 8 GB
Camera : 720p camera (minimum)
Hard disk : 150 GB

1.7.2 Software Requirements:

The software used for the development of the project is:

Operating system : Windows 10


Language : Python 3.11 (latest version at the time of development)
Front-end : Tkinter (for the whole GUI)

7
CHAPTER-2
LITERATURE REVIEW

8
2.1 Student Attendance System:

Arun Katara (2017) mentioned the disadvantages of the RFID (Radio Frequency Identification)
card system, the fingerprint system and the iris recognition system. The RFID card system is
implemented due to its simplicity; however, users tend to help their friends check in as long as
they have their friend's ID card. The fingerprint system is effective but not efficient, because
verification takes time and users have to line up and perform the verification one by one. The
human face, in contrast, is always exposed and contains less information compared to the iris; an
iris recognition system, which captures more detail, might invade the privacy of the user. Voice
recognition is available, but it is less accurate compared to other methods. Hence, a face
recognition system is suggested for implementation in the student attendance system.

System Type | Advantages | Disadvantages
RFID card system | Simple | Fraudulent usage
Fingerprint system | Accurate | Time-consuming
Voice recognition system | Simple | Less accurate compared to others
Iris recognition system | Accurate | Privacy invasion

Table 2.1: Advantages & Disadvantages of Different Biometric Systems

9
2.2 Digital Image Processing:

Digital Image Processing is the processing of images which are digital in nature by a
digital computer. Digital image processing techniques are motivated by three major
applications:

● Improvement of pictorial information for human perception


● Image processing for autonomous machine application
● Efficient storage and transmission

2.3 Steps in Digital Image Processing:

Digital image processing involves the following basic tasks:

● Image Acquisition – An imaging sensor and the capability to digitize the signal
produced by the sensor.
● Pre-processing – Enhances the image quality through filtering, contrast
enhancement, etc.
● Segmentation – Partitions an input image into its constituent parts or objects.
● Description/Feature Selection – Extracts descriptions of image objects
suitable for further computer processing.
● Recognition and Interpretation – Assigns a label to an object based on the
information provided by its descriptors; interpretation assigns meaning to a set
of labelled objects.
● Knowledge Base – Supports efficient processing as well as inter-module
co-operation.
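As a concrete illustration of the first few stages, here is a minimal sketch (assuming OpenCV is installed and an image file named sample.jpg exists in the working directory; the file name is only an example) covering acquisition, pre-processing and a simple threshold-based segmentation:

import cv2

img = cv2.imread("sample.jpg")                        # image acquisition
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # pre-processing: grayscale conversion
gray = cv2.equalizeHist(gray)                         # pre-processing: contrast enhancement
_, mask = cv2.threshold(gray, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # segmentation by Otsu thresholding
cv2.imwrite("segmented.jpg", mask)                    # store the segmented result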

10
Figure 2.1: A diagram showing the steps in digital image processing

2.4 Face Detection:

Face detection is the process of identifying and locating all the faces present in
a single image or video, regardless of their position, scale, orientation, age and expression.
Furthermore, the detection should be independent of extraneous illumination conditions
and of the image and video content.

2.5 Face Recognition:

Face recognition is a visual pattern recognition problem in which the face,
represented as a three-dimensional object subject to varying illumination, pose and
other factors, needs to be identified based on acquired images. Face recognition is
therefore simply the task of identifying an already detected face as a known or unknown
face, and in more advanced cases, telling exactly whose face it is.

11
2.6 Difference between Face Detection and Face Recognition:

Face detection answers the question, where is the face? It identifies an object as
a "face" and locates it in the input image. Face recognition, on the other hand,
answers the question, who is this, or whose face is it? It decides whether the detected
face belongs to someone known. Face detection's output (the detected face) is therefore
the input to the face recognizer, and face recognition's output is the final decision,
i.e. face known or face unknown.

Face Detection Method | Advantages | Disadvantages
Viola-Jones Algorithm | High detection speed. High accuracy. | 1. Long training time. 2. Limited head pose. 3. Not able to detect dark faces.
Local Binary Pattern Histogram (LBPH) | 1. Simple computation. 2. High tolerance against monotonic illumination changes. | 1. Only used for binary and grey images. 2. Overall performance is inaccurate compared to the Viola-Jones algorithm.
AdaBoost Algorithm | Does not need any prior knowledge about face structure. | The result highly depends on the training data and is affected by weak classifiers.
Neural Network | High accuracy only if a large set of images is trained. | 1. Detection process is slow and computation is complex. 2. Overall performance is weaker than the Viola-Jones algorithm.

Table 2.2: Advantages & Disadvantages of Face Detection Methods

12
The Viola-Jones algorithm, which was introduced by P. Viola and M. J. Jones (2001), is the most
popular algorithm for localizing the face segment in static images or video frames. Basically, the
Viola-Jones algorithm consists of four parts: the first part is known as the Haar feature, in the
second part an integral image is created, the third part implements AdaBoost, and the last part is
the cascading process.

Figure 2.2: Haar Feature

The Viola-Jones algorithm analyses a given image using Haar features consisting of multiple
rectangles; the figure above shows several types of Haar features. The features act as window
functions mapped onto the image. A single value representing each feature can be computed by
subtracting the sum of the pixels under the white rectangle(s) from the sum of the pixels under the
black rectangle(s).
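As an illustration of how the trained Viola-Jones cascade is used in practice, the following is a minimal sketch assuming OpenCV and its pre-trained haarcascade_frontalface_default.xml file are available in the working directory (frame.jpg is a placeholder input image):

import cv2

# Load the pre-trained Viola-Jones (Haar cascade) frontal face model
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# scaleFactor and minNeighbors mirror the values used later in Main.py (1.3 and 5)
faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imwrite("detected.jpg", frame)

Each detection is returned as a bounding box (x, y, width, height), which is exactly the output the face recognizer consumes.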

13
Figure 2.3: Integral of Image

2.7 Local Binary Pattern Histogram:

Local Binary Pattern (LBP) is a simple yet very efficient texture operator which
labels the pixels of an image by thresholding the neighbourhood of each pixel and
considering the result as a binary number.

It was first described in 1994 and has since been found to be a powerful
feature for texture classification. It has further been determined that when LBP is
combined with the histogram of oriented gradients (HOG) descriptor, it improves
detection performance considerably on some datasets. Using LBP combined with
histograms, we can represent face images with a simple data vector.

14
The LBPH algorithm works in four steps:

1. Parameters: The LBPH uses 4 parameters:

● Radius: the radius is used to build the circular local binary pattern
and represents the radius around the central pixel. It is usually set to
one.
● Neighbors: the number of sample points to build the circular local
binary pattern. Keep in mind: the more sample points you include,
the higher the computational cost. It is usually set to 8.
● Grid X: the number of cells in the horizontal direction. The more
cells, the finer the grid, the higher the dimensionality of the resulting
feature vector. It is usually set to 8.
● Grid Y: the number of cells in the vertical direction. The more cells,
the finer the grid, the higher the dimensionality of the resulting
feature vector. It is usually set to 8.
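In the opencv-contrib-python bindings these four parameters map directly onto the arguments of LBPHFaceRecognizer_create; a minimal sketch with the usual defaults described above would be:

import cv2

recognizer = cv2.face.LBPHFaceRecognizer_create(
    radius=1,     # radius of the circular local binary pattern
    neighbors=8,  # number of sample points around the central pixel
    grid_x=8,     # number of cells in the horizontal direction
    grid_y=8)     # number of cells in the vertical direction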

2. Training the Algorithm: First, we need to train the algorithm. To do
so, we need a dataset with facial images of the people we want to recognize.
We also need to set an ID (it may be a number or the name of the person)
for each image, so the algorithm will use this information to recognize an
input image and give an output. Images of the same person must have the
same ID. With the training set already constructed, let's look at the LBPH
computational steps.

3. Applying the LBP operation: The first computational step of the LBPH
is to create an intermediate image that describes the original image in a
better way, by highlighting the facial characteristics.

15
To do so, the algorithm uses a concept of a sliding window, based on the
parameters radius and neighbours.

● Suppose we have a facial image in grayscale.

● We can get part of this image as a window of 3x3 pixels.


● It can also be represented as a 3x3 matrix containing the intensity of each
pixel (0~255).
● Then, we need to take the central value of the matrix to be used as the
threshold.
● This value will be used to define the new values from the 8 neighbours.

● For each neighbour of the central value (threshold), we set a new binary
value. We set 1 for values equal or higher than the threshold and 0 for values
lower than the threshold.
● Now, the matrix will contain only binary values (ignoring the central value).
We need to concatenate each binary value from each position from the
matrix line by line into a new binary value (e.g. 10001101). Note: some
authors use other approaches to concatenate the binary values (e.g.
clockwise direction), but the final result will be the same.
● Then, we convert this binary value to a decimal value and set it to the central
value of the matrix, which is actually a pixel from the original image.
● At the end of this procedure (the LBP procedure), we have a new image which
better represents the characteristics of the original image.
● The same procedure can be applied with a larger radius and a different number
of neighbours (circular LBP) by using bilinear interpolation: if a sample point
falls between the pixels, the values of the 4 nearest pixels (2x2) are used to
estimate the value of the new data point.
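A minimal NumPy sketch of the basic 3x3 LBP operation described above (no interpolation, neighbours read clockwise from the top-left corner; the function name is only illustrative) could look like this:

import numpy as np

def lbp_3x3(gray):
    # gray is a 2-D uint8 array; border pixels are left as zero
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # offsets of the 8 neighbours, read clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = gray[y, x]
            code = 0
            for dy, dx in offsets:
                # threshold each neighbour against the central pixel
                code = (code << 1) | (1 if gray[y + dy, x + dx] >= centre else 0)
            out[y, x] = code
    return out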

16
Figure 2.4: The LBP operation Radius Change

4. Extracting the Histograms: Now, using the image generated in the last step, we can use
the Grid X and Grid Y parameters to divide the image into multiple grids.

● As we have an image in grayscale, each histogram (from each grid) will


contain only 256 positions (0~255) representing the occurrences of each
pixel intensity.
● Then, we need to concatenate each histogram to create a new, bigger histogram.
Supposing we have 8x8 grids, we will have 8x8x256 = 16,384 positions in the
final histogram. The final histogram represents the characteristics of the
original image.
● To perform face recognition, the algorithm must already be trained; each
histogram created is used to represent one image from the training dataset.
Given an input image, we perform the steps again for this new image and create
a histogram which represents it.

● So to find the image that matches the input image we just need to compare
two histograms and return the image with the closest histogram.

17
● We can use various approaches to compare the histograms (i.e. calculate the
distance between two histograms), for example Euclidean distance, chi-square,
absolute value, etc.
● The algorithm output is the ID of the image with the closest histogram. The
algorithm should also return the calculated distance, which can be used as a
'confidence' measurement.
● We can then apply a threshold to the 'confidence' to automatically estimate
whether the algorithm has correctly recognized the image: we can assume the
recognition is successful if the confidence is lower than the defined threshold.
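The grid histograms and the distance-based matching described above can be sketched as follows (assuming the lbp_3x3 helper from the previous sketch and 8x8 grids; the function names are illustrative only):

import numpy as np

def lbph_descriptor(lbp_img, grid_x=8, grid_y=8):
    # Split the LBP image into grid_x * grid_y cells and concatenate their histograms
    h, w = lbp_img.shape
    hists = []
    for gy in range(grid_y):
        for gx in range(grid_x):
            cell = lbp_img[gy * h // grid_y:(gy + 1) * h // grid_y,
                           gx * w // grid_x:(gx + 1) * w // grid_x]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            hists.append(hist)
    return np.concatenate(hists).astype(np.float64)  # 8 x 8 x 256 = 16,384 positions

def chi_square(h1, h2, eps=1e-10):
    # Chi-square distance between two concatenated histograms; lower means a closer match
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))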

18
CHAPTER-3
MODEL IMPLEMENTATION
AND
ANALYSIS

19
3.1 Introduction:

Face detection involves separating image windows into two classes: one containing
faces and the other containing the background (clutter). It is difficult because, although
commonalities exist between faces, they can vary considerably in terms of age, skin colour
and facial expression. The problem is further complicated by differing lighting conditions,
image qualities and geometries, as well as the possibility of partial occlusion and disguise.
An ideal face detector would therefore be able to detect the presence of any face under any
set of lighting conditions, upon any background. The face detection task can be broken down
into two steps. The first step is a classification task that takes some arbitrary image as input
and outputs a binary value of yes or no, indicating whether there are any faces present in the
image. The second step is the face localization task, which takes an image as input and
outputs the location of any face or faces within that image as a bounding box (x, y, width,
height). After taking the picture, the system compares it with the pictures in its database and
gives the most closely related result.

We will use an HD webcam and the OpenCV platform, and we will do the coding in the Python language.

20
3.2 Model Implementation:

Figure 3.1: Model Implementation

The main component used in the implementation approach is the open source computer
vision library (OpenCV). One of OpenCV's goals is to provide a simple-to-use computer vision
infrastructure that helps people build fairly sophisticated vision applications quickly. The OpenCV
library contains over 500 functions that span many areas of vision. The primary technology behind
face recognition here is OpenCV. The user stands in front of the camera, keeping a minimum
distance of 50 cm, and his image is taken as input. The frontal face is extracted from the image,
then converted to grayscale and stored.

21
Principal Component Analysis (PCA) is performed on the images and the eigenvalues are stored
in an XML file. When a user requests recognition, the frontal face is extracted from the video
frame captured through the camera. The eigenvalue is re-calculated for the test face and matched
against the stored data to find the closest neighbour.
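Although the shipped Main.py (Chapter 5) trains an LBPH recognizer, the PCA step described here can be sketched with OpenCV's eigenfaces recognizer from opencv-contrib-python; faces is assumed to be a list of equal-sized grayscale crops, labels their integer IDs, and the model file name is only an example:

import cv2
import numpy as np

def train_pca(faces, labels, model_path="eigen_model.xml"):
    # PCA-based (eigenfaces) recognizer; the eigen data is written to an XML file
    recognizer = cv2.face.EigenFaceRecognizer_create()
    recognizer.train(faces, np.array(labels))
    recognizer.write(model_path)

def recognize_pca(test_face, model_path="eigen_model.xml"):
    # Project the test face and return the closest stored neighbour and its distance
    recognizer = cv2.face.EigenFaceRecognizer_create()
    recognizer.read(model_path)
    label, distance = recognizer.predict(test_face)
    return label, distance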

3.3 Design Requirements:

We used several tools to build the system; without the help of these tools it would not
have been possible to get it done. Here we discuss the most important ones.

3.4 Software Implementation:

1. OpenCV: We used the OpenCV 3 dependency for Python 3. OpenCV is a library in which
lots of image processing functions are available, and it is very useful for image
processing; one can often get the expected outcome with very little code. The library
is cross-platform and free for use under the open-source BSD license. Examples of
supported functionality are given below:

● Derivation: Gradient/Laplacian computing, contours delimitation.

● Hough transforms: lines, segments, circles, and geometrical shapes


detection.
● Histograms: computing, equalization, and object localization with back
projection algorithm.
● Segmentation: thresholding, distance transform, foreground/background
detection, watershed segmentation.

● Filtering: linear and nonlinear filters, morphological operations.

22
● Cascade detectors: detection of face, eye, car plates.

● Interest points: detection and matching.

● Video processing: optical flow, background subtraction, CAMShift (object
tracking).
● Photography: panorama stitching, high dynamic range (HDR) imaging,
image inpainting.

It was therefore very important to install OpenCV. How we installed it is shown below:

Figure 3.2: Installing OpenCV

pip install opencv-python
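A quick way to confirm the installation succeeded (assuming the pip command above completed without errors) is to import the module and print its version:

import cv2
print(cv2.__version__)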

23
2. Python IDE: There are lots of IDEs for Python, such as PyCharm, Thonny,
Ninja-IDE and Spyder. Ninja-IDE and Spyder are both excellent and free, but
we used Spyder as it is more feature-rich than Ninja-IDE. Spyder is a little
heavier than Ninja-IDE but still much lighter than PyCharm. It can also be run
on a Raspberry Pi with its GUI displayed on your PC.

3.5 Webcam:

Figure 3.3 Web Camera

Specifications:

• Logitech C270 Web Camera (960-000694).


• The C270 HD Webcam gives you sharp, smooth conference calls (720p/30fps)
in a widescreen format. Automatic light correction shows you in lifelike, natural
colours.

24
3.6 Experimental Results:
The steps of the experimental process are given below:

Face Detection:
Start capturing images through the web camera on the client side:
Begin:
● Pre-process the captured image and extract the face image.
● Calculate the eigenvalue of the captured face image and compare it with the
eigenvalues of the existing faces in the database.
● If the eigenvalue does not match any existing one, save the new face image
information to the face database (XML file).
● If the eigenvalue matches an existing one, the recognition step is performed.
End.

Face Recognition:

Using the PCA algorithm, the following steps are performed for face recognition:

Begin:
● Find the face information of the matched face image in the database.
● Update the log table with the corresponding face image and the system time,
which completes the attendance marking for an individual student.
End.

This section presents the results of the experiment conducted to capture the face
as a greyscale image of 50x50 pixels.

25
Test data | Expected Result | Observed | Pass/Fail
Open CAM_CV() | Connects with the installed camera and starts playing | Camera started | Pass
LoadHaarClassifier() | Loads the Haar classifier cascade files for the frontal face | Gets ready for extraction | Pass
Extract Face() | Initiates the Paul-Viola face extracting framework | Face extracted | Pass
Learn() | Starts the PCA algorithm | Updates the face data.xml | Pass
Recognize() | Compares the input face with the saved faces | Nearest face | Pass

Table 3.1: Experimental Results-1

26
Face Orientations Detection Rate Recognition Rate

0º (Frontal face) 98.7 % 95%

18º 80.0 % 78%

54º 59.2 % 58%

72º 0.00 % 0.00%

90º (Profile face) 0.00 % 0.00%

Table 3.2 Experimental Results-2

We performed a set of experiments to demonstrate the efficiency of the
proposed method. 30 different images of 10 persons were used in the training set.
Figure 3.4 shows a sample of the dataset detected by the Extract Face() function
using the Paul-Viola face extraction framework.

27
Sample Data Set:

Figure 3.4: Dataset sample

28
CHAPTER 4
SYSTEM TESTING
AND
MAINTENANCE

29
4.1 Unit Testing:

Procedure-level testing is done first. By giving improper inputs, the errors that
occur are noted and eliminated. Then the form-level testing is done, for example
checking that data is stored in the table in the correct manner.

Dates are entered in the wrong format and checked; wrong e-mail IDs and website
URLs (Uniform Resource Locator) are given and checked.

4.2 Integration Testing:

Testing is done for each module. After testing all the modules, the modules are
integrated and testing of the final system is done with test data specially designed to
show that the system will operate successfully under all conditions. Thus the system
testing is a confirmation that all is correct and an opportunity to show the user that
the system works.

4.3 Validation Testing:

The final step involves validation testing, which determines whether the software
functions as the user expects. The end user rather than the system developer conducts this
test; most software developers use a process called "alpha and beta testing" to uncover
defects that only the end user seems able to find.

The completion of the entire project is based on the full satisfaction of the end users.
In this project, validation testing is carried out in various forms: in the registration form,
the e-mail ID, the phone number and the other mandatory fields entered by the user are verified.

30
4.4 Verification Testing:

Verification is a fundamental concept in software design. It is the bridge between
customer requirements and an implementation that satisfies those requirements. A design is
verifiable if it can be demonstrated that testing will result in an implementation that
satisfies the customer requirements.

Inadequate testing or non-testing leads to errors that may appear a few months later.
This creates two problems:

● A time delay between the cause and the appearance of the problem.

● The effect of the system errors on files and records within the system.

4.5 Maintenance:

The objective of this maintenance work is to make sure that the system stays in
working order at all times without any bugs. Provision must be made for environmental
changes which may affect the computer or software system; this is called maintenance of
the system. Nowadays there is rapid change in the software world, and due to this rapid
change the system should be capable of adapting to these changes. In our project, new
processes can be added without affecting other parts of the system. Maintenance plays a
vital role; the system is liable to accept any modification after its implementation. This
system has been designed to accommodate all new changes, and doing so will not affect the
system's performance or its accuracy.

31
CHAPTER-5
CODING

32
5.1 Main.py:

All work will be done here

################################# IMPORTING ################################

import tkinter as tk

from tkinter import ttk

from tkinter import messagebox as mess

import tkinter.simpledialog as tsd

import cv2,os

import csv

import numpy as np

from PIL import Image

import pandas as pd

import datetime

import time

############################ FUNCTIONS ######################################

def assure_path_exists(path):

dir = os.path.dirname(path)

if not os.path.exists(dir):

33
os.makedirs(dir)

##############################################################################

def tick():

time_string = time.strftime('%H:%M:%S')

clock.config(text=time_string)

clock.after(200,tick)

##############################################################################

def contact():

mess._show(title='Contact us', message="Please contact us on E-mail: 'mohammedsabeel3181@gmail.com' ")

##############################################################################

def check_haarcascadefile():

exists = os.path.isfile("haarcascade_frontalface_default.xml")

if exists:

34
pass

else:

mess._show(title='Some file missing', message='Please contact us for help')

window.destroy()

##############################################################################

def save_pass():

assure_path_exists("TrainingImageLabel/")

exists1 = os.path.isfile("TrainingImageLabel\psd.txt")

if exists1:

tf = open("TrainingImageLabel\psd.txt", "r")

key = tf.read()

else:

master.destroy()

new_pas = tsd.askstring('Old Password not found', 'Please enter a new password below',
show='*')

if new_pas == None:

35
mess._show(title='No Password Entered', message='Password not set!! Please try again')

else:

tf = open("TrainingImageLabel\psd.txt", "w")

tf.write(new_pas)

mess._show(title='Password Registered', message='New password was registered successfully!!')

return

op = (old.get())

newp= (new.get())

nnewp = (nnew.get())

if (op == key):

if(newp == nnewp):

txf = open("TrainingImageLabel\psd.txt", "w")

txf.write(newp)

else:

mess._show(title='Error', message='Confirm new password again!!!')

return

else:

mess._show(title='Wrong Password', message='Please enter correct old password.')

return

mess._show(title='Password Changed', message='Password changed successfully!!')

36
master.destroy()

#############################################################################

def change_pass():

global master

master = tk.Tk()

master.geometry("400x160")

master.resizable(False,False)

master.title("Change Password")

master.configure(background="white")

lbl4 = tk.Label(master,text=' Enter Old Password',bg='white',font=('comic', 12, ' bold '))

lbl4.place(x=10,y=10)

global old

old=tk.Entry(master,width=25 ,fg="black",relief='solid',font=('comic', 12, ' bold '),show='*')

old.place(x=180,y=10)

lbl5 = tk.Label(master, text=' Enter New Password', bg='white', font=('comic', 12, ' bold '))

lbl5.place(x=10, y=45)

global new

new = tk.Entry(master, width=25, fg="black",relief='solid', font=('comic', 12, ' bold '),show='*')

new.place(x=180, y=45)

37
lbl6 = tk.Label(master, text='Confirm New Password', bg='white', font=('comic', 12, ' bold '))

lbl6.place(x=10, y=80)

global nnew

nnew = tk.Entry(master, width=25, fg="black", relief='solid', font=('comic', 12, ' bold '), show='*')

nnew.place(x=180, y=80)

cancel=tk.Button(master,text="Cancel", command=master.destroy ,fg="black" ,bg="red"


,height=1,width=25 , activebackground = "white" ,font=('comic', 10, ' bold '))

cancel.place(x=200, y=120)

save1 = tk.Button(master, text="Save", command=save_pass, fg="black", bg="#00fcca", height


= 1,width=25, activebackground="white", font=('comic', 10, ' bold '))

save1.place(x=10, y=120)

master.mainloop()

#############################################################################

def psw():

assure_path_exists("TrainingImageLabel/")

exists1 = os.path.isfile("TrainingImageLabel\psd.txt")

if exists1:

tf = open("TrainingImageLabel\psd.txt", "r")

key = tf.read()

38
else:

new_pas = tsd.askstring('Old Password not found', 'Please enter a new password below',
show='*')

if new_pas == None:

mess._show(title='No Password Entered', message='Password not set!! Please try again')

else:

tf = open("TrainingImageLabel\psd.txt", "w")

tf.write(new_pas)

mess._show(title='Password Registered', message='New password was registered successfully!!')

return

password = tsd.askstring('Password', 'Enter Password', show='*')

if (password == key):

TrainImages()

elif (password == None):

pass

else:

mess._show(title='Wrong Password', message='You have entered wrong password')

#############################################################################

39
def clear():

txt.delete(0, 'end')

res = "1)Take Images >>> 2)Save Profile"

message1.configure(text=res)

def clear2():

txt2.delete(0, 'end')

res = "1)Take Images >>> 2)Save Profile"

message1.configure(text=res)

##############################################################################

def TakeImages():

check_haarcascadefile()

columns = ['SERIAL NO.', '', 'ID', '', 'NAME']

assure_path_exists("StudentDetails/")

assure_path_exists("TrainingImage/")

serial = 0

exists = os.path.isfile("StudentDetails\StudentDetails.csv")

if exists:

40
with open("StudentDetails\StudentDetails.csv", 'r') as csvFile1:

reader1 = csv.reader(csvFile1)

for l in reader1:

serial = serial + 1

serial = (serial // 2)

csvFile1.close()

else:

with open("StudentDetails\StudentDetails.csv", 'a+') as csvFile1:

writer = csv.writer(csvFile1)

writer.writerow(columns)

serial = 1

csvFile1.close()

Id = (txt.get())

name = (txt2.get())

if ((name.isalpha()) or (' ' in name)):

cam = cv2.VideoCapture(0)

harcascadePath = "haarcascade_frontalface_default.xml"

detector = cv2.CascadeClassifier(harcascadePath)

sampleNum = 0

while (True):

ret, img = cam.read()

41
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = detector.detectMultiScale(gray, 1.3, 5)

for (x, y, w, h) in faces:

cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

# incrementing sample number

sampleNum = sampleNum + 1

# saving the captured face in the dataset folder TrainingImage

cv2.imwrite("TrainingImage\ " + name + "." + str(serial) + "." + Id + '.' + str(sampleNum)


+ ".jpg",

gray[y:y + h, x:x + w])

# display the frame

cv2.imshow('Taking Images', img)

# wait for 100 miliseconds

if cv2.waitKey(100) & 0xFF == ord('q'):

break

# break if the sample number is morethan 100

elif sampleNum > 100:

break

cam.release()

cv2.destroyAllWindows()

res = "Images Taken for ID : " + Id

42
row = [serial, '', Id, '', name]

with open('StudentDetails\StudentDetails.csv', 'a+') as csvFile:

writer = csv.writer(csvFile)

writer.writerow(row)

csvFile.close()

message1.configure(text=res)

else:

if (name.isalpha() == False):

res = "Enter Correct name"

message.configure(text=res)

#############################################################################

def TrainImages():

check_haarcascadefile()

assure_path_exists("TrainingImageLabel/")

recognizer = cv2.face.LBPHFaceRecognizer_create()

harcascadePath = "haarcascade_frontalface_default.xml"

detector = cv2.CascadeClassifier(harcascadePath)

faces, ID = getImagesAndLabels("TrainingImage")

try:

43
recognizer.train(faces, np.array(ID))

except:

mess._show(title='No Registrations', message='Please Register someone first!!!')

return

recognizer.save("TrainingImageLabel\Trainner.yml")

res = "Profile Saved Successfully"

message1.configure(text=res)

message.configure(text='Total Registrations till now : ' + str(ID[0]))

#############################################################################

def getImagesAndLabels(path):

# get the path of all the files in the folder

imagePaths = [os.path.join(path, f) for f in os.listdir(path)]

# create empth face list

faces = []

# create empty ID list

Ids = []

# now looping through all the image paths and loading the Ids and the images

for imagePath in imagePaths:

# loading the image and converting it to gray scale

44
pilImage = Image.open(imagePath).convert('L')

# Now we are converting the PIL image into numpy array

imageNp = np.array(pilImage, 'uint8')

# getting the Id from the image

ID = int(os.path.split(imagePath)[-1].split(".")[1])

# extract the face from the training image sample

faces.append(imageNp)

Ids.append(ID)

return faces, Ids

#############################################################################

def TrackImages():

check_haarcascadefile()

assure_path_exists("Attendance/")

assure_path_exists("StudentDetails/")

for k in tv.get_children():

tv.delete(k)

msg = ''

i=0

j=0

45
recognizer = cv2.face.LBPHFaceRecognizer_create() # cv2.createLBPHFaceRecognizer()

exists3 = os.path.isfile("TrainingImageLabel\Trainner.yml")

if exists3:

recognizer.read("TrainingImageLabel\Trainner.yml")

else:

mess._show(title='Data Missing', message='Please click on Save Profile to reset data!!')

return

harcascadePath = "haarcascade_frontalface_default.xml"

faceCascade = cv2.CascadeClassifier(harcascadePath);

cam = cv2.VideoCapture(0)

font = cv2.FONT_HERSHEY_SIMPLEX

col_names = ['Id', '', 'Name', '', 'Date', '', 'Time']

exists1 = os.path.isfile("StudentDetails\StudentDetails.csv")

if exists1:

df = pd.read_csv("StudentDetails\StudentDetails.csv")

else:

mess._show(title='Details Missing', message='Students details are missing, please check!')

cam.release()

cv2.destroyAllWindows()

window.destroy()

46
while True:

ret, im = cam.read()

gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)

faces = faceCascade.detectMultiScale(gray, 1.2, 5)

for (x, y, w, h) in faces:

cv2.rectangle(im, (x, y), (x + w, y + h), (225, 0, 0), 2)

serial, conf = recognizer.predict(gray[y:y + h, x:x + w])

if (conf < 50):

ts = time.time()

date = datetime.datetime.fromtimestamp(ts).strftime('%d-%m-%Y')

timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')

aa = df.loc[df['SERIAL NO.'] == serial]['NAME'].values

ID = df.loc[df['SERIAL NO.'] == serial]['ID'].values

ID = str(ID)

ID = ID[1:-1]

bb = str(aa)

bb = bb[2:-2]

attendance = [str(ID), '', bb, '', str(date), '', str(timeStamp)]

else:

Id = 'Unknown'

47
bb = str(Id)

cv2.putText(im, str(bb), (x, y + h), font, 1, (255, 255, 255), 2)

cv2.imshow('Taking Attendance', im)

if (cv2.waitKey(1) == ord('q')):

break

ts = time.time()

date = datetime.datetime.fromtimestamp(ts).strftime('%d-%m-%Y')

exists = os.path.isfile("Attendance\Attendance_" + date + ".csv")

if exists:

with open("Attendance\Attendance_" + date + ".csv", 'a+') as csvFile1:

writer = csv.writer(csvFile1)

writer.writerow(attendance)

csvFile1.close()

else:

with open("Attendance\Attendance_" + date + ".csv", 'a+') as csvFile1:

writer = csv.writer(csvFile1)

writer.writerow(col_names)

writer.writerow(attendance)

csvFile1.close()

with open("Attendance\Attendance_" + date + ".csv", 'r') as csvFile1:

reader1 = csv.reader(csvFile1)

48
for lines in reader1:

i=i+1

if (i > 1):

if (i % 2 != 0):

iidd = str(lines[0]) + ' '

tv.insert('', 0, text=iidd, values=(str(lines[2]), str(lines[4]), str(lines[6])))

csvFile1.close()

cam.release()

cv2.destroyAllWindows()

########################### USED STUFFS ####################################

global key

key = ''

ts = time.time()

date = datetime.datetime.fromtimestamp(ts).strftime('%d-%m-%Y')

day,month,year=date.split("-")

mont={'01':'January',

'02':'February',

49
'03':'March',

'04':'April',

'05':'May',

'06':'June',

'07':'July',

'08':'August',

'09':'September',

'10':'October',

'11':'November',

'12':'December'}

########################## GUI FRONT-END ####################################

window = tk.Tk()

window.geometry("1280x720")

window.resizable(True,False)

window.title("Islamiah College Lab Attendance System")

window.configure(background='#002223')

frame1 = tk.Frame(window, bg="#c79cff")

50
frame1.place(relx=0.11, rely=0.17, relwidth=0.39, relheight=0.80)

frame2 = tk.Frame(window, bg="#c79cff")

frame2.place(relx=0.51, rely=0.17, relwidth=0.38, relheight=0.80)

message3 = tk.Label(window, text="Islamiah College Face Recognition Attendance Monitoring System", fg="white", bg="#1d2951", width=55, height=1, font=('rockwell', 29, ' bold '))

message3.place(x=10, y=10)

frame3 = tk.Frame(window, bg="#c4c6ce")

frame3.place(relx=0.52, rely=0.09, relwidth=0.09, relheight=0.07)

frame4 = tk.Frame(window, bg="#c4c6ce")

frame4.place(relx=0.36, rely=0.09, relwidth=0.16, relheight=0.07)

datef = tk.Label(frame4, text = day+"-"+mont[month]+"-"+year+" | ", fg="white",bg="#1d2951"


,width=55 ,height=1,font=('comic', 16, ' bold '))

datef.pack(fill='both',expand=1)

clock = tk.Label(frame3, fg="white", bg="#1d2951", width=55, height=1, font=('comic', 18, ' bold '))

clock.pack(fill='both',expand=1)

51
tick()

head2 = tk.Label(frame2, text=" For New Registrations ",


fg="black",bg="#3bbdc2" ,font=('comic', 17, ' bold ') )

head2.grid(row=0,column=0)

head1 = tk.Label(frame1, text=" For Already Registered ",


fg="black",bg="#3bbdc2" ,font=('comic', 17, ' bold ') )

head1.place(x=0,y=0)

lbl = tk.Label(frame2, text="Enter ID",width=20 ,height=1 ,fg="black" ,bg="#c79cff"


,font=('comic', 17, ' bold ') )

lbl.place(x=80, y=55)

txt = tk.Entry(frame2,width=32 ,fg="black",font=('comic', 15, ' bold '))

txt.place(x=30, y=88)

lbl2 = tk.Label(frame2, text="Enter Name",width=20 ,fg="black" ,bg="#c79cff" ,font=('comic',


17, ' bold '))

lbl2.place(x=80, y=140)

txt2 = tk.Entry(frame2,width=32 ,fg="black",font=('comic', 15, ' bold ') )

52
txt2.place(x=30, y=173)

message1 = tk.Label(frame2, text="1)Take Images >>> 2)Save Profile" ,bg="#c79cff"


,fg="black" ,width=39 ,height=1, activebackground = "#3ffc00" ,font=('comic', 15, ' bold '))

message1.place(x=7, y=230)

message = tk.Label(frame2, text="" ,bg="#c79cff" ,fg="black" ,width=39,height=1,


activebackground = "#3ffc00" ,font=('comic', 16, ' bold '))

message.place(x=7, y=450)

lbl3 = tk.Label(frame1, text="Attendance",width=20 ,fg="black" ,bg="#c79cff" ,height=1


,font=('comic', 17, ' bold '))

lbl3.place(x=100, y=115)

res=0

exists = os.path.isfile("StudentDetails\StudentDetails.csv")

if exists:

with open("StudentDetails\StudentDetails.csv", 'r') as csvFile1:

reader1 = csv.reader(csvFile1)

for l in reader1:

res = res + 1

res = (res // 2) - 1

53
csvFile1.close()

else:

res = 0

message.configure(text='Total Registrations till now : '+str(res))

##################### MENUBAR #################################

menubar = tk.Menu(window,relief='ridge')

filemenu = tk.Menu(menubar,tearoff=0)

filemenu.add_command(label='Change Password', command = change_pass)

filemenu.add_command(label='Contact Us', command = contact)

filemenu.add_command(label='Exit',command = window.destroy)

menubar.add_cascade(label='Help',font=('comic', 29, ' bold '),menu=filemenu)

################## TREEVIEW ATTENDANCE TABLE ####################

tv= ttk.Treeview(frame1,height =13,columns = ('name','date','time'))

tv.column('#0',width=82)

tv.column('name',width=130)

tv.column('date',width=133)

tv.column('time',width=133)

54
tv.grid(row=2,column=0,padx=(0,0),pady=(150,0),columnspan=4)

tv.heading('#0',text ='ID')

tv.heading('name',text ='NAME')

tv.heading('date',text ='DATE')

tv.heading('time',text ='TIME')

###################### SCROLLBAR ################################

scroll=ttk.Scrollbar(frame1,orient='vertical',command=tv.yview)

scroll.grid(row=2,column=4,padx=(0,100),pady=(150,0),sticky='ns')

tv.configure(yscrollcommand=scroll.set)

###################### BUTTONS ##################################

clearButton = tk.Button(frame2, text="Clear", command=clear ,fg="black" ,bg="#eb3c62"


,width=11 ,activebackground = "white" ,font=('comic', 11, ' bold '))

clearButton.place(x=335, y=86)

clearButton2 = tk.Button(frame2, text="Clear", command=clear2 ,fg="black" ,bg="#eb3c62"


,width=11 , activebackground = "white" ,font=('comic', 11, ' bold '))

clearButton2.place(x=335, y=172)

takeImg = tk.Button(frame2, text="Take Images", command=TakeImages ,fg="white"


,bg="#6d00fc" ,width=34 ,height=1, activebackground = "white" ,font=('comic', 15, ' bold '))

55
takeImg.place(x=30, y=300)

trainImg = tk.Button(frame2, text="Save Profile", command=psw ,fg="white" ,bg="#6d00fc"


,width=34 ,height=1, activebackground = "white" ,font=('comic', 15, ' bold '))

trainImg.place(x=30, y=380)

trackImg = tk.Button(frame1, text="<< Take Attendance >>", command=TrackImages


,fg="black" ,bg="#76ba1b" ,width=35 ,height=1, activebackground = "white" ,font=('comic', 15,
' bold '))

trackImg.place(x=30,y=50)

quitWindow = tk.Button(frame1, text="Quit", command=window.destroy ,fg="black"


,bg="#eb3c62" ,width=35 ,height=1, activebackground = "white" ,font=('comic', 15, ' bold '))

quitWindow.place(x=30, y=450)

##################### END ######################################

window.configure(menu=menubar)

window.mainloop()

56
5.2 Output Images:

Figure 5.1 Home page

57
Figure 5.2 Registration

58
Figure 5.3 Admin Password

59
Figure 5.4 Taking Attendance

60
Figure 5.5 Change Password

61
Figure 5.6 Contact Us

62
Figure 5.7 Attendance sheet

63
CONCLUSION

Face recognition systems are part of facial image processing applications, and their
significance as a research area has been increasing recently. Implementations of such systems
include crime prevention, video surveillance, person verification, and similar security activities.

A face recognition system implementation can be part of a university. The Face Recognition
Based Attendance System has been envisioned for the purpose of reducing the errors that occur in
the traditional (manual) attendance taking system.

The aim is to automate and build a system that is useful to an organization such as an
institute: an efficient and accurate method of attendance in the office environment that can replace
the old manual methods.

This method is sufficiently secure, reliable and available for use. The proposed algorithm is
capable of detecting multiple faces, and the performance of the system shows acceptably good results.

64
BIBLIOGRAPHY

References:

• GitHub - shubhamkumar27/Face_recognition_based_attendance_system: A Python GUI integrated attendance system using face recognition to take attendance.

• www.geeksforgeeks.org/opencv-overview

• https://en.wikipedia.org/wiki/Local_binary_patterns

• www.geeksforgeeks.org/python-haar-cascades-for-object-detection

• face-recognition-attendance-system · GitHub Topics · GitHub

• www.analyticsvidhya.com/blog/2021/11/build-face-recognition-attendance-system-using-python/

• Face Recognition-Based Attendance System with source code - Flask App - With GUI - 2023 - Machine Learning Projects

• Face Recognition Attendance System Project (nevonprojects.com)

________________________________________________________________

65
