
A Project Report
on
Smart Attendance System Using Image Processing

Submitted in partial fulfillment of the requirement for the award of the degree
of
Bachelor of Technology in Computer Science and Engineering

Under the Supervision of
Dr. Ashish Kumar Srivastava
Assistant Professor

Submitted By:
Ayush Choudhary
21SCSE1180141

SCHOOL OF COMPUTING SCIENCE AND ENGINEERING
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
GALGOTIAS UNIVERSITY, GREATER NOIDA
INDIA
JUNE, 2023
SCHOOL OF COMPUTING SCIENCE AND ENGINEERING
GALGOTIAS UNIVERSITY, GREATER NOIDA

CANDIDATE’S DECLARATION

We hereby certify that the work which is being presented in the project, entitled "SMART ATTENDANCE SYSTEM USING IMAGE PROCESSING", in partial fulfillment of the requirements for the award of the Bachelor of Technology in Computer Science and Engineering, submitted in the School of Computing Science and Engineering of Galgotias University, Greater Noida, is an original work carried out during the period of February 2023 to June 2023 under the supervision of Dr. Ashish Kumar Srivastava, Assistant Professor, Department of Computer Science and Engineering, School of Computing Science and Engineering, Galgotias University, Greater Noida.

The matter presented in the project has not been submitted by us for the award of any other degree of this or any other institution.

Ayush Choudhary
(21SCSE1180141)

This is to certify that the above statement made by the candidates is correct to the best of my knowledge.

Supervisor Name

Designation
CERTIFICATE

The Final Project Viva-Voce examination of Ayush Choudhary (21SCSE1180141) has been held on __________ and his work is recommended for the award of Bachelor of Technology in Computer Science and Engineering.

Signature of Examiner(s) Signature of Supervisor(s)

Signature of Program Chair Signature of Dean

Date: June, 2023

Place: Greater Noida


Abstract
Proper attendance management is crucial for academic institutions to deliver and ensure quality education to every student. We build a model of an automated attendance system that eliminates the manual effort of recording data and reduces the chances of fraud.

This project provides real-time attendance using image processing. The concept is to deliver real-time attendance of the students in a class to the faculty's database. For this we make use of image processing: a camera situated directly above the class entrance captures an image of its surroundings, detects only the facial region of that image, and sends this image information to our processing system. According to the algorithm we use for image processing and detection, the captured image undergoes various filtering and masking techniques in order to obtain a suitable image format. Our algorithm then compares the input image with the reference points, identifies the student, and sends that student's attendance to the faculty's database via IoT. The Smart Attendance System keeps an authentic record of every registered student and largely eliminates the tedious traditional task. Moreover, the system keeps the data of every student registered for a particular course in the attendance log and provides the necessary information on demand.
Table of Contents

Candidate's Declaration
Acknowledgement
Abstract
Contents
Chapter 1  Introduction
Chapter 2  Literature Survey/Project Design
Chapter 3  Functionality/Working of Project
Chapter 4  Results and Discussion
Chapter 5  Conclusion and Future Scope
References
CHAPTER-1

Introduction
Attendance is a necessary parameter required in most schools and colleges. This attendance is typically taken to obtain an accurate count of the people or students sitting in a particular classroom or practice area. In the traditional method, the lecturer or teacher manually records each student along with the required data such as the candidate's name, serial number, and status. This process is very time consuming and the collected data is difficult to maintain. We can therefore digitize the process of taking attendance and make it simple and accurate. Our project digitizes attendance collection by using image processing and automatically updates the real-time data in the faculty's database via IoT. Since this project is software based, we need only a few hardware components. We use a stationary camera situated above the classroom entrance so that we can scan every student's face without interruption.

The camera provides the captured real-time image as an input to computer node 1, which acts as the master processor. We use MATLAB as the platform for processing the input image. The image is processed and compared with a preloaded image; as soon as the image is identified, our algorithm instructs the master processor to transmit the identified person's data via IoT, with the help of Wi-Fi, to computer node 2, which is the faculty's database. As the process is digitized, real-time attendance is updated continuously in the faculty's database.
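
A minimal capture-loop sketch of this pipeline, given here in Python with OpenCV purely for illustration (the report itself names MATLAB as the processing platform; the camera index and window handling are assumptions):

    import cv2

    camera = cv2.VideoCapture(0)     # stationary camera above the classroom entrance
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    while True:
        ok, frame = camera.read()    # real-time input arriving at computer node 1
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5):
            # Each detected face is compared with the preloaded images; on a match the
            # identified person's data is transmitted to node 2 (detailed in Chapter 3).
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("Entrance camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to stop the loop
            break

    camera.release()
    cv2.destroyAllWindows()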

Flow chart:
CHAPTER-2

Literature Survey/Project Design

[1]
With the constant advancement of network technology, a variety of useful network technologies have emerged and human reliance on them has risen, highlighting the relevance of network information security challenges. As industrialization progresses, the application of sensors is becoming more and more widespread, which brings problems of its own, for example security flaws in the operating system itself. Early intelligent recognition systems scanned and compared the unique patterns of finger and palm lines; however, these methods were limited by weather and skin-texture constraints. This paper uses face detection and recognition technology to create a novel computer-vision-based system. The face detection component is mostly built on the OpenCV approach, while the SeetaFace and YouTu methods have enhanced face recognition in practical applications. The detection and identification rates under three different conditions, side-face detection, occlusion detection, and exaggerated facial expression, are compared in a contrast experiment, and the accuracy of each approach is improved. The advantages and weaknesses of the algorithm effectively verify the method's effectiveness.

[2]
Face and facial feature detection, as well as face recognition (or rather, face comparison), are two of the most essential parts of the broad research framework of computerized face recognition. Elastic matching with jets produces the best reported results for the mug-shot face recognition challenge; in this method, overall face detection, facial feature localization, and face comparison are all done in one step. This report details the authors' efforts in developing a new technique for facial recognition. On the one hand, they describe a visual learning technique and its application to accurate facial feature detection and tracking in complex backgrounds. On the other hand, a fast approach to 2D template matching is presented, along with its application to face recognition. Finally, a real-time, automatic facial recognition system is provided.
[3]
Face detection has received a great deal of attention since it has many uses in computer vision, communication, and autonomous control systems. Face detection is a technique for detecting a face in a picture with several features. Research is needed in face detection, expression identification, face tracking, and pose estimation. The challenge is to detect the face from a single photograph. Face identification is a difficult task since faces are not static and vary in size, shape, color, and other characteristics. When the image is not clear or is obstructed, for example by poor lighting or a subject not facing the camera, face detection becomes even more difficult. In this work, the Viola-Jones algorithm is used to detect faces, while principal component analysis is utilized to recognize them.

[4]
Face recognition systems are used in practically every industry in this digital age. Despite having lower accuracy than iris and fingerprint identification, face recognition is extensively utilized due to its contactless and non-invasive nature. Face recognition systems can also be used to track attendance at schools, colleges, and offices, among other places. Because the current manual attendance system is time consuming, difficult to maintain, and open to proxy attendance, this work intends to create a class attendance system that employs facial recognition. The system has four stages: database construction, face detection, face recognition, and attendance updating. The database is built from photos of the students in the class. The Haar cascade classifier and the Local Binary Pattern Histogram technique are used for face detection and recognition, respectively. Faces are detected and recognized from the live streaming camera of the classroom, and at the end of the session attendance is mailed to the relevant professors.
CHAPTER-3

Functionality/Working of Project

WORKING PRINCIPLE:-
As seen above, our master processor has two inputs: one from the camera, which is real-time data, and another from the pre-stored image data. When the camera sends input to the master processor, the face detection algorithm starts functioning. The face detection algorithm is designed so that it crops only the facial section of the input image. This facial section is identified with the help of matching coordinates such as the eyes, nose, and mouth, which serve as references while cropping the image. The facial section of the image is then stored as a face registration. Since each face has its own identity in the form of a distinctive eye pattern and nose and mouth shape, our algorithm tags each input face with a different label. Simultaneously, the master processor carries out another process, feature extraction from the pre-stored images. The extracted features go through a machine learning stage where the machine learns which face belongs to which tag or label. Using these parameters, both inputs are compared in a classifier; if the features match within a particular threshold value, the algorithm indicates that the face has been identified. The desired data such as the name, serial number, and status of that identity is then transmitted via IoT to another computer, which is assumed to be the faculty's computer. This is how we get real-time attendance.
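
A minimal sketch of this compare-and-transmit step, assuming an LBPH model already trained on the registered faces (see the LBPH section below), a roster dictionary mapping labels to student details, and a hypothetical HTTP endpoint standing in for the IoT link to the faculty computer:

    import cv2        # cv2.face needs the opencv-contrib-python package
    import requests   # stands in here for the Wi-Fi/IoT link to computer node 2

    FACULTY_DB_URL = "http://192.168.1.50:5000/attendance"   # hypothetical faculty node
    MATCH_THRESHOLD = 70        # LBPH distance; lower values mean a closer match

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.read("trained_model.yml")          # produced by the training stage

    def mark_attendance(frame, roster):
        """Detect faces in one frame, identify them, and push records to the faculty node."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5):
            label, distance = recognizer.predict(gray[y:y + h, x:x + w])
            if distance < MATCH_THRESHOLD:        # features match within the threshold
                record = {"name": roster[label]["name"],
                          "serial_no": roster[label]["serial_no"],
                          "status": "Present"}
                requests.post(FACULTY_DB_URL, json=record, timeout=5)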
Basic working principle of facial recognition:-

Viola-Jones algorithm on MATLAB: The basic principle of the Viola-Jones algorithm is to scan a sub-window capable of detecting faces across a given input image. The standard image processing approach would be to rescale the input image to different sizes and then run a fixed-size detector through these images. This approach turns out to be rather time consuming due to the calculation of the different-sized images. Contrary to the standard approach, Viola-Jones rescales the detector instead of the input image and runs the detector many times through the image, each time with a different size. At first one might suspect both approaches to be equally time consuming, but Viola-Jones devised a scale-invariant detector that requires the same number of calculations whatever its size. This detector is constructed using a so-called integral image and some simple rectangular features reminiscent of Haar wavelets.

The first step of the Viola-Jones face detection algorithm is to turn the input image into an integral image. This is done by making each pixel equal to the sum of all pixels above and to the left of it. This allows the sum of all pixels inside any given rectangle to be calculated using only four values: the pixels in the integral image that coincide with the corners of the rectangle in the input image. The sum of pixels within rectangles of arbitrary size can therefore be calculated in constant time.
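
A small NumPy sketch of the integral image and the four-value rectangle sum just described (array contents and function names are illustrative):

    import numpy as np

    def integral_image(img):
        """Each entry holds the sum of all pixels above and to the left (inclusive)."""
        return img.cumsum(axis=0).cumsum(axis=1)

    def rect_sum(ii, top, left, bottom, right):
        """Sum of img[top:bottom+1, left:right+1] using at most four lookups."""
        total = ii[bottom, right]
        if top > 0:
            total -= ii[top - 1, right]
        if left > 0:
            total -= ii[bottom, left - 1]
        if top > 0 and left > 0:
            total += ii[top - 1, left - 1]
        return total

    # Example: a two-rectangle Haar-like feature is the difference between the
    # sums of its two halves, each obtained in constant time.
    img = np.arange(16, dtype=np.int64).reshape(4, 4)
    ii = integral_image(img)
    feature = rect_sum(ii, 0, 0, 3, 1) - rect_sum(ii, 0, 2, 3, 3)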
The Viola-Jones face detector analyzes a given sub-window using features consisting of two or more rectangles. Each feature results in a single value, calculated by subtracting the sum of the white rectangle(s) from the sum of the black rectangle(s). Viola-Jones empirically found that a detector with a base resolution of 24x24 pixels gives satisfactory results. Allowing for all possible sizes and positions of the features shown in Figure 4, a total of approximately 160,000 different features can be constructed. Thus, the number of possible features vastly outnumbers the 576 pixels contained in the detector at base resolution. These features may seem overly simple to perform such an advanced task as face detection, but what the features lack in complexity they certainly make up for in computational efficiency. One could understand the features as the computer's way of perceiving an input image, the hope being that some features will yield large values when positioned on top of a face. Of course, operations could also be carried out directly on the raw pixels, but the variation due to different poses and individual characteristics would be expected to hamper this approach. The goal is now to construct a mesh of features capable of detecting faces, and this is the topic of the next section.
The cascaded classifier: The basic principle of the Viola-Jones face detection algorithm is to scan the detector many times through the same image, each time with a new size. Even if an image contains one or more faces, it is obvious that an excessively large number of the evaluated sub-windows will still be negatives (non-faces). This realization leads to a different formulation of the problem: instead of finding faces, the algorithm should discard non-faces. The thought behind this statement is that it is faster to discard a non-face than to find a face. With this in mind, a detector consisting of only one (strong) classifier suddenly seems inefficient, since its evaluation time is constant no matter the input. Hence the need for a cascaded classifier arises.

The cascaded classifier is composed of stages, each containing a strong classifier. The job of each stage is to determine whether a given sub-window is definitely not a face or maybe a face. When a sub-window is classified as a non-face by a given stage, it is immediately discarded. Conversely, a sub-window classified as a maybe-face is passed on to the next stage in the cascade. It follows that the more stages a given sub-window passes, the higher the chance that it actually contains a face. In a single-stage classifier, one would normally accept false negatives in order to reduce the false positive rate. However, for the first stages of the staged classifier, false positives are not considered a problem, since the succeeding stages are expected to sort them out. Viola-Jones therefore prescribe accepting many false positives in the initial stages. Consequently, the number of false negatives in the final staged classifier is expected to be very small. Viola-Jones also refer to the cascaded classifier as an attentional cascade; this name implies that more attention (computing power) is directed towards the regions of the image suspected to contain faces. It follows that when training a given stage, say n, the negative examples should of course be the false positives generated by stage n-1.
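
A schematic sketch of this early-rejection idea (not the actual Viola-Jones training code): each stage is a cheap test, and a sub-window survives only if every stage accepts it. The toy stages below are assumptions made purely for illustration; real stages are boosted sums of rectangle features.

    import numpy as np

    def is_face(sub_window, stages):
        """Attentional cascade: discard a sub-window as soon as any stage rejects it."""
        for stage in stages:                  # stages ordered from cheapest to most complex
            if not stage(sub_window):         # "definitely not a face": stop immediately
                return False
        return True                           # passed every stage: very likely a face

    stages = [
        lambda w: w.mean() > 20,              # reject nearly black sub-windows cheaply
        lambda w: w.std() > 10,               # reject flat, textureless sub-windows
    ]

    window = np.random.randint(0, 256, (24, 24))   # a 24x24 sub-window, as in the text
    print(is_face(window, stages))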

Introduction to the LBPH (Local Binary Pattern Histogram) algorithm:-

1. LBPH is a simple approach that labels the pixels of an image by thresholding the neighborhood of each pixel. In other words, LBPH summarizes the local structure in an image by comparing each pixel with its neighbors, and the result is converted into a binary number. The LBP operator was first defined in 1994 and has since been found to be a powerful algorithm for texture classification. The algorithm is focused on extracting local features from images: the basic idea is not to look at the whole image as a high-dimensional vector, but only at the local features of an object.
2. The first step is to train the algorithm:
Training requires a dataset with facial images of the people we want to recognize. A unique ID (it may be a number or the name of the person) should be provided with each image, and all images of a particular person must have the same ID. The algorithm then uses this information to recognize an input image and give you the output. Let's understand the LBPH computation in the next step.
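
A minimal training sketch using OpenCV's LBPH recognizer (this requires the opencv-contrib-python package; the folder layout and file names are assumptions for illustration):

    import os
    import cv2
    import numpy as np

    # Assumed layout: dataset/<numeric_id>/<image>.jpg, containing grayscale face crops.
    def load_dataset(root="dataset"):
        images, labels = [], []
        for person_id in os.listdir(root):
            for file_name in os.listdir(os.path.join(root, person_id)):
                img = cv2.imread(os.path.join(root, person_id, file_name),
                                 cv2.IMREAD_GRAYSCALE)
                if img is not None:
                    images.append(img)
                    labels.append(int(person_id))   # every image of a person shares one ID
        return images, np.array(labels)

    images, labels = load_dataset()
    recognizer = cv2.face.LBPHFaceRecognizer_create(radius=1, neighbors=8,
                                                    grid_x=8, grid_y=8)
    recognizer.train(images, labels)
    recognizer.write("trained_model.yml")           # reloaded later at recognition time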

3. LBP computation:
In this step, an intermediate image is created that describes the original image in a specific way by highlighting its facial characteristics. The algorithm uses the concept of a sliding window, based on the parameters radius and neighbors: each pixel is compared with its neighbors inside the window, and the comparison results are combined into a binary pattern.
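
A small illustrative sketch of the basic 3x3 LBP code for a single pixel (radius 1, 8 neighbors), independent of any library; the sample values are arbitrary:

    import numpy as np

    def lbp_code(patch):
        """LBP code of the centre pixel of a 3x3 patch: threshold the 8 neighbours
        against the centre, then combine the resulting bits into a number (0-255)."""
        center = patch[1, 1]
        # Neighbours in clockwise order starting at the top-left corner.
        neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                      patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
        bits = [1 if n >= center else 0 for n in neighbours]
        return sum(b << i for i, b in enumerate(bits))

    patch = np.array([[90, 80, 70],
                      [75, 60, 50],
                      [40, 95, 65]])
    code = lbp_code(patch)   # the per-pixel codes are then histogrammed per grid cell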
OpenCV is a huge open-source library for computer vision, machine learning, and image processing, and it now plays a major role in the real-time operation that is so important in today's systems. Using it, one can process images and videos to identify objects, faces, or even human handwriting. When integrated with libraries such as NumPy, Python is capable of processing the OpenCV array structure for analysis. To identify an image pattern and its various features, we treat them as vectors and perform mathematical operations on these features.
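
For example, an image loaded with OpenCV is simply a NumPy array, so ordinary array and vector operations apply to it directly (the file name below is illustrative):

    import cv2
    import numpy as np

    img = cv2.imread("student.jpg")            # example file name; any image will do
    assert img is not None, "place a sample image at this path first"
    print(type(img), img.shape, img.dtype)     # numpy.ndarray, e.g. (480, 640, 3), uint8

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    vector = gray.astype(np.float32).flatten() # the image seen as a feature vector
    vector /= np.linalg.norm(vector)           # an ordinary mathematical operation on it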

Flowchart of the proposed attendance system:-


CHAPTER-4

Results and Discussion

Source Code :-

Output
[Screenshots of the attendance system GUI: the enrollment screen with "Take Images" and "Save Profile" options and the "Take Attendance" screen, showing the registered students and attendance recorded on 04-06-2023.]
CHAPTER-5

Conclusion and Future Scope

This smart attendance system will help save teachers' time and energy in taking attendance. It also creates transparency for the students, since there will not be any kind of human error in recording attendance. This system is surely better than a biometric system, and it is very efficient to use and implement. It goes without saying that our proposed model has the potential to replace the manual attendance system because it is efficient and convenient. Our model is more user friendly and provides the most accurate and organized data. With just a few modifications, we can use our system in any secured facility.

References

[1] Rafael C. Gonzalez, University of Tennessee.
[2] Richard E. Woods, MedData Interactive.
[3] P. T. Pham-Ngoc and Q. L. Huynh, "Robust face detection under challenges of rotation, pose and occlusion," Department of Biomedical Engineering, Faculty of Applied Science, Hochiminh University of Technology, 2010.
[4] E. Hjelmås and B. K. Low, "Face detection: A survey," Computer Vision and Image Understanding, vol. 83, no. 3, pp. 236-274, 2001.
[5] Kamaruddin Mamat and Farok Azmat, "Mobile Learning Application for Basic Router and Switch Configuration on Android Platform," Sixth International Conference on University Learning and Teaching (InCULT 2012), 1877-0428, 2013.
