Chapter 1
INTRODUCTION
The main objective of this project is to develop a face recognition based automated student attendance system. In order to achieve better performance, the test images and training images of this proposed approach are limited to frontal, upright facial images that contain a single face only. The test images and training images have to be captured with the same device to ensure there is no quality difference. In addition, students have to be registered in the database in order to be recognized. Enrolment can be done on the spot through the user-friendly interface.
1.1 Background:
Face recognition is crucial in daily life in order to identify family, friends or anyone we are familiar with. We might not realize that several steps actually take place when we identify human faces. Human intelligence allows us to receive information and interpret it during the recognition process. We receive information through the image projected onto our eyes, specifically onto the retina, in the form of light. Light is a form of electromagnetic wave that is radiated from a source onto an object and projected into human vision. Robinson-Riegler, G., & Robinson-Riegler, B. (2008) mentioned that after the visual processing done by the human visual system, we classify the shape, size, contour and texture of the object in order to analyse the information. The analysed information is then compared to other representations of objects or faces stored in our memory in order to recognize them. In fact, it is a hard challenge to build an automated system with the same face recognition capability as a human. Moreover, recognizing many different faces requires a large memory: in universities, for example, there are many students of different races and genders, and it is impossible to remember every individual face without making mistakes. In order to overcome these human limitations, computers with almost limitless memory and high processing speed and power are used in face recognition systems.
The human face is a unique representation of individual identity. Thus, face recognition is defined as a biometric method in which an individual is identified by comparing a real-time captured image with the stored images of that person in a database (Margaret Rouse, 2012). Nowadays, face recognition systems are prevalent due to their simplicity and strong performance. For instance, airport protection systems and the FBI use face recognition for criminal investigations by tracking suspects, missing children and drug activities (Robert Silk, 2017). Apart from that, Facebook, a popular social networking website, implements face recognition to allow users to tag their friends in photos for entertainment purposes (Sidney Fussell, 2018). Furthermore, Intel allows users to use face recognition to access their online accounts (Reichert, C., 2017), and Apple allows users to unlock their mobile phone, the iPhone X, using face recognition (deAgonia, M., 2017).
Work on face recognition began in the 1960s. Woody Bledsoe, Helen Chan Wolf and Charles Bisson introduced a system which required an administrator to locate the eyes, ears, nose and mouth in images. The distances and ratios between the located features and common reference points were then calculated and compared. These studies were further enhanced by Goldstein, Harmon, and Lesk in 1970, who used additional features such as hair colour and lip thickness to automate the recognition. In 1988, Kirby and Sirovich first suggested principal component analysis (PCA) to solve the face recognition problem. Many studies on face recognition have been conducted continuously since then (Ashley DuVal, 2012).
1.2 Problem Statement:
The traditional student attendance marking technique often faces a lot of trouble. The face recognition student attendance system emphasizes its simplicity by eliminating classical attendance marking techniques such as calling student names or checking identification cards. These techniques not only disturb the teaching process but also distract students during exam sessions. Apart from calling names, an attendance sheet may be passed around the classroom during lecture sessions. Classes with a large number of students might find it difficult to have the attendance sheet passed around the whole class. Thus, a face recognition student attendance system is proposed to replace the manual signing of attendance, which is burdensome and distracts students who must sign for their attendance. Furthermore, the face recognition based automated student attendance system is able to overcome fraudulent approaches, and lecturers do not have to count the number of students several times to ensure their presence.
The paper by Zhao, W. et al. (2003) lists the difficulties of facial identification. One of these difficulties is distinguishing between known and unknown images. In addition, the paper by Pooja G.R. et al. (2010) found that the training process for a face recognition student attendance system is slow and time-consuming. The paper by Priyanka Wagh et al. (2015) also mentioned that varying lighting and head poses are problems that often degrade the performance of face recognition based student attendance systems.
Hence, there is a need to develop a real-time student attendance system, meaning the identification process must be completed within defined time constraints to prevent omission. The features extracted from facial images, which represent the identity of the students, have to be consistent under changes in background, illumination, pose and expression. High accuracy and fast computation time will be the evaluation criteria for performance.
1.3 Aims and Objective:
The objective of this project is to develop a face recognition based automated student attendance system. The expected achievements in order to fulfill this objective are:
To detect the face segment from the video frame.
To extract the useful features from the face detected.
To classify the features in order to recognize the face detected.
To record the attendance of the identified student.
1.4 Existing Attendance System:
Attendance is of prime importance for both the teacher and the student of an educational organization, so it is very important to keep a record of attendance. There are various attendance management systems that vary in complexity and feasibility. We have divided them into three categories, namely basic, moderate, and advanced.
1.4.1 Basic Attendance System:
a. Manual Attendance System:
The Manual Attendance System involves the faculty calling out the roll. If the student is present in the class, the student physically acknowledges the roll call and says that he/she is present. In all other cases, the faculty marks the student absent.
b. Paper Based Attendance System:
The Paper Based Attendance System is part of the manual attendance system, though it can be combined with any other attendance system as well. Attendance is taken in any form and recorded on paper by writing down either the absentees or only those present. Usually faculties write down the roll numbers of the students that are absent or present, as per convenience.
c. Timesheet Attendance System:
The Timesheet Attendance System involves recording attendance in a timesheet. A timesheet is a physical or virtual tool that allows you to record and keep track of worked time; in this case, it records the number of hours the student attends.
d. Token Based Attendance:
Token based attendance involves displaying a security token when demanded in order to record attendance. A security token (sometimes called an authentication token) is a small hardware device that the owner carries to authorize access to a network service. The device may be in the form of a smart card or may be embedded in a commonly used object such as a key fob. In the context of students, the token is usually their identity card.
1.4.2 Moderate Attendance Systems:
a. Biometric Attendance System:
The biometric attendance system works on two basic principles. First, it takes an image of a finger; the finger scanner captures the characteristics of each unique finger and saves them in the form of a biometric key. In fact, a fingerprint scanner never saves images of a finger, only a series of binary code for verification purposes. Second, the biometric attendance system determines whether the pattern of ridges and valleys in this image matches the pattern of ridges and valleys in pre-scanned images.
b. Badge Monitoring Attendance System:
The Badge Monitoring Attendance System is most commonly used in places where people work with radioactive materials, such as X-ray labs, nuclear centers, etc. The radioactive badge is worn by the person somewhere between the neck and the waist such that the front faces the source of radiation.
c. Swipe Card Attendance:
The Swipe Card Attendance System works by the person swiping his or her card at the entry and exit gates, and the attendance is recorded. A swipe card must come into contact with the corresponding card reader before any transaction can take place. The transaction becomes active when the magnetic stripe on the card is moved through a console at a gate.
d. Access Card Punching Attendance System:
A punch card is a flat, stiff piece of paper with notches cut in it that contains digital information. In a punch card attendance system, students use this punch or proximity card to record going in and/or out. To use a punch card, students just need to wave the card near a reader, which then verifies whether the correct person is logging in or out.
1.4.3 Advanced Attendance Systems:
a. Retinal Scan-based Attendance System:
The Retinal Scan-based Attendance System makes use of retinal features and marks attendance based on retinal recognition. An eye scan or retinal scan is a biometric system that identifies a person by the unique patterns of the retina. The human retina contains a complex pattern of blood vessels (retinal veins) by which an eye scanner device can easily identify a person and can even differentiate identical twins. To scan a human retina, the retinal scanner uses the reflection of light that is absorbed by the retinal veins.
b. Gait Recognition Attendance System:
The Gait Recognition Attendance System records and recognizes an individual by examining the way the individual walks, saunters, swaggers, or sashays, with up to 90-percent accuracy.
c. Facial Recognition Attendance System:
The Facial Recognition Attendance System makes use of facial features such as distance
between the eyes, width of the nose, depth of the eye sockets, the shape of the cheekbones, the
length of the jaw line, etc. to recognize and mark attendance.
d. Sensor Detection Attendance System:
The Sensor Detection Attendance System uses RFID (Radio Frequency Identification) to identify individuals. A radio frequency identification reader (RFID reader) is a device used to gather information from an RFID tag, which is used to track individual objects. Radio waves are used to transfer data from the tag to the reader. RFID is a technology similar in theory to barcodes.
1.4.4 Advantage Of Existing System:
a. Increased Productivity:
Organizations that use a manual roll call process spend several hours each day collecting time cards, re-entering illegible data by hand, faxing, phoning, and processing the roll call. With an automated time and attendance system, the roll call process takes just minutes each period.
b. Accuracy:
With an automated system, there is no human error. When you manually track your students' time, your students typically report their hours after the fact, which increases the likelihood of inaccurate reporting. A student may not intend to misrepresent his hours; he may just forget what his actual in and out times were. Or, if a student has illegible handwriting, it could be difficult to determine the actual hours attended from the roll list. With manual reporting, the organization is basically relying on the honor system. This system can be abused, which can lead to time theft.
c. Savings:
With an automated system, you’ll save roll call processing hours and eliminate time theft
which means your bottom line will improve.
d. Regulatory Compliance:
While an automated time and attendance system will not guarantee that you are compliant with all student laws, the data collected through the system ensures you have the information at your fingertips needed to comply with labour regulations. With an automated timekeeping system, you have the ability to pull up reports quickly, which provides all the information you need if you are ever subject to an audit.
1.5 PROPOSED SYSTEM:
The proposed system “Face Recognition Digital Attendance Management System”
overcomes the problems of the existing systems mentioned previously. It mainly incorporates facial recognition to mark students' attendance in the database.
1.5.1 Salient Features of the Proposed System:
a. Face-mapping:
Facial features of the student such as distance between the eyes, width of the nose, depth
of the eye sockets, the shape of the cheekbones, the length of the jaw line, etc. are registered into
the database. Students are recognized based on these stored facial features, and if a match is found,
the student is marked present and the same is updated into the database. In all other cases, the
student is recorded absent in the database.
b. Complete Automation:
The system is automated to its full potential. The algorithm runs for the first few minutes of every hour, captures the attendance of students present in class, and restarts at the beginning of the next hour until the end of the day.
c. Immediate Update:
Once the algorithm successfully recognizes a student, the attendance for the corresponding student is updated in the database automatically, without any human intervention.
d. Three-step Management:
The entire system is efficiently split into three components. First, the facial features are stored and the model is trained. Second, the student is recognized by the facial recognition algorithm "Haarcascade". Third, the attendance is automatically updated in the database.
e. Excel Sheet Generation:
After taking attendance, an Excel sheet is generated containing ID, NAME, DATE and TIME.
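The sheet generation step can be sketched with Python's standard csv module (the file name, column order and function name here are illustrative assumptions, not the project's actual code; a CSV file opens directly in Excel):

```python
import csv
from datetime import datetime

def record_attendance(student_id, name, path="attendance.csv"):
    """Append one recognized student to the attendance sheet with the
    current date and time (columns: ID, NAME, DATE, TIME)."""
    now = datetime.now()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([student_id, name,
                         now.strftime("%Y-%m-%d"), now.strftime("%H:%M:%S")])

record_attendance("S001", "Alice")
```

Appending one row per recognition event keeps the write cheap enough to run inside the recognition loop without slowing it down.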
Chapter 2
REVIEW OF LITERATURE
“A literature review shows readers that you have an in-depth grasp of your
subject and that you understand where your own research fits into and adds to an
existing body of agreed knowledge”
2.1 Student Attendance System:
Arun Katara et al. (2017) mentioned the disadvantages of RFID (Radio Frequency Identification) card systems, fingerprint systems and iris recognition systems. The RFID card system is often implemented due to its simplicity, but users tend to check in for their friends as long as they have their friends' ID cards. The fingerprint system is effective but not efficient, because verification takes time and users have to line up and verify one by one. The iris recognition system, which captures more detail, might invade the privacy of the user, whereas the human face is always exposed and contains less information than the iris. Voice recognition is available, but it is less accurate than other methods. Hence, a face recognition system is suggested for implementation in the student attendance system.
2.2 Face Detection:
The difference between face detection and face recognition is often misunderstood. Face detection determines only the face segment or face region in an image, whereas face recognition identifies the owner of the facial image. S. Aanjanadevi et al. (2017) and Wei-Lun Chao (2007) presented a few factors which cause face detection and face recognition to encounter difficulties. These factors include background, illumination, pose, expression, occlusion, rotation, scaling and translation.
2.3 Viola-Jones Algorithm:
The Viola-Jones algorithm, introduced by P. Viola and M. J. Jones (2001), is the most popular algorithm for localizing the face segment in static images or video frames. The Viola-Jones algorithm consists of four parts: Haar features, the creation of the integral image, the application of Adaboost, and finally the cascading process.
The Viola-Jones algorithm analyses a given image using Haar features consisting of multiple rectangles (Mekha Joseph et al., 2016). The features act as window functions mapped onto the image. A single value representing each feature can be computed by subtracting the sum of the pixels under the white rectangle(s) from the sum of the pixels under the black rectangle(s) (Mekha Joseph et al., 2016).
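The integral image and the rectangle-sum trick that make this computation fast can be sketched in a few lines (a minimal illustration of the idea, not the Viola-Jones implementation; the function names are my own):

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows and columns: ii[y, x] holds the sum of all
    pixels above and to the left of (y, x), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def region_sum(ii, top, left, h, w):
    """Sum of any rectangle in constant time using four integral-image lookups."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def two_rect_haar_feature(ii, top, left, h, w):
    """Two-rectangle Haar feature: sum under the black (left) half minus
    the sum under the white (right) half."""
    half = w // 2
    black = region_sum(ii, top, left, h, half)
    white = region_sum(ii, top, left + half, h, half)
    return black - white
```

Because every rectangle sum costs only four lookups, thousands of Haar features can be evaluated per window in real time, which is what makes the cascade practical.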
2.4 Pre-Processing:
Subhi Singh et al. (2015) suggested cropping the detected face and converting the colour image to grayscale for pre-processing. They also proposed applying an affine transform to align the facial image based on the coordinates of the middle of the eyes, and scaling of the image. Arun Katara et al. (2017), Akshara Jadhav et al. (2017), and Shireesha Chintalapati and M.V. Raghunadh (2013) all proposed applying histogram equalization to the facial image, together with scaling of images, for pre-processing.
Pre-processing enhances the performance of the system and plays an essential role in improving the accuracy of face recognition. Scaling is one of the important pre-processing steps for manipulating the size of the image. Scaling down an image increases the processing speed by reducing the system computations, since the number of pixels is reduced. The size and pixels of the image carry spatial information. Gonzalez, R. C. and Woods (2008) describe spatial information as a measure of the smallest discernible detail in an image. Hence, spatial information has to be manipulated carefully to avoid distortion of images and to prevent the checkerboard effect. The size should be the same for all images for normalization and standardization purposes. Subhi Singh et al. (2015) proposed PCA (Principal Component Analysis) to extract features from facial images; since equal length and width are preferred, images were scaled to 120 × 120 pixels. Fig. 2.4.1 shows the representation.
Fig. 2.4.1: Images Show Checkerboard Effect Significantly Increasing from Left to Right (Gonzalez, R. C., & Woods, 2008)
Besides scaling, a colour image is usually converted to a grayscale image for pre-processing. Grayscale images are believed to be less sensitive to illumination conditions and take less computational time. A grayscale image is an 8-bit image in which each pixel ranges from 0 to 255, whereas a colour image is a 24-bit image in which each pixel can take 16,777,216 values. Hence, a colour image requires more storage space and more computational power than a grayscale image (Kanan and Cottrell, 2012). If colour is not necessary for computation, it is considered noise. In addition, pre-processing is important to enhance the contrast of images.
In the paper by Pratiksha M. Patel (2016), histogram equalization is mentioned as one of the pre-processing methods for improving the contrast of an image. It provides a uniform distribution of intensities over the intensity axis, which is also able to reduce the effects of uneven illumination. Fig. 2.4.2 shows facial images in grayscale.
There are a few feature extraction methods for face recognition. In the papers by Bhuvaneshwari et al. (2017), Abhishek Singh and Saurabh Kumar (2012), and Liton Chandra Paul and Abdulla Al Sumam (2012), PCA was proposed for face recognition. D. Nithya (2015) also used PCA in a face recognition based student attendance system.
PCA is well known for its robustness and high-speed computation. Basically, PCA retains the variation in the data and removes unnecessary correlations among the original features. PCA is essentially a dimension reduction algorithm: it compresses each facial image, represented as a matrix, into a single column vector. Furthermore, PCA removes the average value from each image to centre the image data. The principal components of the distribution of facial images are known as eigenfaces. Every facial image in the training set contributes to the eigenfaces, so the eigenfaces encode the best variation among the known facial images. Training images and test images are then projected onto the eigenface space to obtain the projected training images and projected test image respectively. The Euclidean distance between the projected training images and the projected test image is then computed to perform the recognition. The PCA feature extraction process includes all trained facial images; hence, the extracted features contain correlations between the facial images in the training set, and the recognition result of PCA depends strongly on the training set images.
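The eigenface pipeline described above can be sketched with an SVD over a matrix of flattened faces (illustrative only; a production system would first crop, align and normalize the faces, and the function names are my own):

```python
import numpy as np

def train_eigenfaces(images, n_components):
    """images: (n_samples, h*w) matrix, one flattened face per row.
    Returns the mean face and the top principal axes (eigenfaces)."""
    mean = images.mean(axis=0)
    centered = images - mean  # remove the average face to centre the data
    # SVD of the centered data yields the principal axes directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, eigenfaces):
    """Project a flattened face onto the eigenface space."""
    return eigenfaces @ (face - mean)

def recognize(test_face, train_images, labels, mean, eigenfaces):
    """Nearest neighbour by Euclidean distance in the projected space."""
    test_proj = project(test_face, mean, eigenfaces)
    train_proj = np.array([project(f, mean, eigenfaces) for f in train_images])
    dists = np.linalg.norm(train_proj - test_proj, axis=1)
    return labels[int(np.argmin(dists))]
```

Using SVD on the centered data avoids forming the large covariance matrix explicitly, which matters when each face has thousands of pixels.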
LBP has a few advantages which make it popular to implement. It has high tolerance to monotonic illumination changes, and it is able to deal with a variety of facial expressions, image rotation and ageing. These characteristics make LBP prevalent in real-time applications.
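The basic 3 × 3 LBP operator and its tolerance to monotonic illumination changes can be illustrated as follows (a simple sketch of the original operator, not the variable-radius variants):

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 LBP: threshold the 8 neighbours against the centre pixel
    and read them off as an 8-bit code."""
    center = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, p in enumerate(neighbours) if p >= center)

def lbp_image(gray):
    """LBP code for every interior pixel of a grayscale image."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = lbp_code(gray[y:y + 3, x:x + 3])
    return out
```

Because each code depends only on whether neighbours are brighter or darker than the centre, adding a constant brightness to the whole image (a monotonic change) leaves every code unchanged, which is exactly the illumination tolerance claimed above.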
Neural networks were initially used only in face detection, and were later studied for face recognition. In the paper by Manisha M. Kasar et al. (2016), an Artificial Neural Network (ANN) was studied for face recognition. An ANN consists of a network of artificial neurons known as "nodes", which act like a human brain in order to perform recognition and classification. These nodes are interconnected, and values are assigned to determine the strength of their connections; a high value indicates a strong connection. The neurons are categorized into three types of nodes or layers: input nodes, hidden nodes, and output nodes. Input nodes are given weights based on their impact. Hidden nodes consist of mathematical functions and thresholding functions that compute predictions or probabilities and block unnecessary inputs, and the result is yielded at the output nodes. There can be more than one layer of hidden nodes. Multiple inputs generate one output at the output node. Fig. 2.5.4 represents its conceptual view.
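The node-and-connection structure described above can be sketched as a single forward pass (the weights here are random placeholders purely to show the computation; a real network would learn them during training):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One hidden layer: weighted sums squashed by a thresholding
    (sigmoid) function, then combined into a single output."""
    hidden = sigmoid(w_hidden @ x + b_hidden)
    return sigmoid(w_out @ hidden + b_out)

# Illustrative sizes: 4 input nodes, 3 hidden nodes, 1 output node.
rng = np.random.default_rng(0)
x = rng.normal(size=4)              # e.g. 4 extracted facial features
w_hidden = rng.normal(size=(3, 4))  # connection strengths, input -> hidden
b_hidden = rng.normal(size=3)
w_out = rng.normal(size=(1, 3))     # connection strengths, hidden -> output
b_out = rng.normal(size=1)

y = forward(x, w_hidden, b_hidden, w_out, b_out)
```

The connection-strength matrices are exactly the "values assigned to determine the strength of their connections" in the text, and the sigmoid plays the role of the thresholding function at each node.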
A Convolutional Neural Network (CNN) is another neural network algorithm for face recognition. Similar to an ANN, a CNN consists of an input layer, hidden layers and an output layer. The hidden layers of a CNN consist of multiple kinds of layers: convolutional layers, pooling layers, fully connected layers and normalization layers. However, thousands or millions of facial images have to be used to train a CNN for it to work accurately, and training takes a long time; an example is DeepFace, introduced by Facebook. Fig. 2.5.5 represents a CNN.
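The core arithmetic of a convolutional layer followed by pooling can be sketched as follows (an illustration of the operations only; real CNNs learn the kernel values and stack many such layers):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution as used in CNN layers: slide the kernel over
    the image and take the sum of elementwise products at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Max pooling: keep only the strongest response in each block,
    shrinking the feature map."""
    h = feature_map.shape[0] // size
    w = feature_map.shape[1] // size
    trimmed = feature_map[:h * size, :w * size]
    return trimmed.reshape(h, size, w, size).max(axis=(1, 3))
```

Pooling after convolution is what gives CNNs some tolerance to small shifts of the face within the frame, since only the strongest local response survives.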
Fig. 2.5.5: Deepface Architecture by Facebook
(Yaniv Taigman et al, 2014)
Chapter 3
Fig.3.2.1: Feature Registration
Fig.3.2.2: Facial Recognition
Fig. 3.2.3: Attendance Update
3.2.4 Activity Diagram:
Fig. 3.2.4 signifies the activity flow in terms of decisions and how their implementation drives the application's activity. The activity flow starts with capturing an input image. The image capturing process runs in a loop until a proper input image, suitable for facial detection and facial recognition, is captured. The captured image is pre-processed, and extraneous details and background noise are removed. The specific facial features, such as the distance between the eyes, the width of the nose, the depth of the eye sockets, the shape of the cheekbones, the length of the jaw line, etc., are analyzed and stored.
The captured input image, after being pre-processed and having its facial features extracted, is verified against the existing database, which consists of the facial features recorded at the time of registration of students' details into the system. If a match is found, the corresponding student is marked present.
3.2.5 Sequence Diagram:
Fig. 3.2.5 indicates the interaction among the application's objects. There are three objects: User, Face Detector, and System. The 'User', i.e. the student, registers himself/herself with the 'Face Detector', and this data is stored in the 'System'. The 'Face Detector' retrieves data from the 'System' and accesses the input image from the 'User'. The 'User' sends the input to the 'Face Detector', and the 'Face Detector' searches for a match in the 'System'. If a match is found, the 'User' is marked present and the NAME, ID and TIME are recorded.
Table 3.3: Test Cases and Outputs:
2. Server Start-up and Execution: the server successfully starts up and the user gives requirement-based inputs; there is a time gap of a few milliseconds for the server to start and run; make sure you do not start the server abruptly.
Chapter 4
4.1 Result:
In this proposed approach, a face recognition student attendance system with a user-friendly interface is designed using the MATLAB GUI (Graphical User Interface). A few buttons are designed in the interface, each providing a specific function: the start button initializes the camera and performs face recognition automatically on the detected face; the register button allows enrolment or registration of students; and the update button trains on the latest images that have been registered in the database. Lastly, the browse button and recognize button are used to browse facial images from a selected database and to recognize the selected image, respectively, in order to test the functionality of the system.
4.1.1 Feature Registration:
Fig. 4.1.1 demonstrates the process of feature registration. The camera window opens for a few seconds, captures 100 images from the video frames and stores them in the database with the respective student ID. OpenCV's Haar cascade algorithm (Abhish Ijari, Anand Mannikeri, Vinod Kumar Gulmikar) is used to extract features.
4.1.2 Face Recognition:
4.1.3 Attendance Update:
Fig. 4.1.3 displays the Excel sheet where, upon recognition of faces, attendance is marked for the respective student for the corresponding hour automatically, without any human intervention; the Excel sheet acts as the database in this case.
4.2 Discussion:
This proposed approach provides a method to perform face recognition for a student attendance system based on the texture-based features of facial images. Face recognition is the identification of an individual by comparing his/her real-time captured image with the stored images of that person in a database. Thus, the training set has to be chosen based on the latest appearance of an individual, in addition to taking important factors such as illumination into consideration.
The proposed approach was trained and tested on different datasets. The Yale face database, which consists of one hundred images of fifteen individuals under multiple conditions, was used. However, this database consists of only grayscale images. Hence, our own database of colour images was also used; it is further categorized into a high quality set and a low quality set, as the images differ in quality: some images are blurred while some are clearer.
The Viola-Jones object detection framework is applied in this approach to detect and localize the face in a facial image or video frame. From the detected face, an algorithm is designed to extract the important features for face recognition. Some pre-processing steps are performed on the input facial image before the features are extracted. Median filtering is used because it preserves the edges of the image while removing image noise. The facial image is scaled to a suitable size for standardization and converted to grayscale if it is not already grayscale, because the LBP operator works on grayscale images.
The recognition rate of the LBP operator with different radii was then computed using our own database. However, varying the LBP radius did not give significantly different results, because no critical illumination problem exists in the images of our own database. Hence, the pixels of the good quality images in our database were modified to generate illumination effects in order to determine the impact of different LBP operator sizes.
The database with good quality colour images achieves the highest accuracy (100%) whether one or two images per individual are trained, whereas the database with poor quality colour images has an average accuracy of 86.54% when only one image per individual is trained and an average accuracy of 88.46% when two images per individual are trained. It can be said that the approach works best with good quality images, and poor quality images can degrade the performance of the algorithm. The poor quality images were captured using a laptop camera; they include relatively dark images, blurred images in which the face is blurred out, and images with too much unwanted noise. Unwanted noise can be reduced by applying median filtering.
Chapter 5
5.1 Conclusion:
The Face Recognition Digital Attendance Management System has the main goal of automating the process of managing attendance, and revolves around the fulcrum of automation. The system mainly involves three steps: registration, authentication and update. Its fundamental goal is to reduce time consumption and the need to maintain paperwork. Since the advent of technology, humans have progressed to evolve and adapt to changes based on their convenience. Bringing this idea to practicality helps the common man progress effectively and efficiently, and this system helps the whole organization evolve and achieve what is necessary by eliminating tedious and repetitive tasks.
Future Scope:
The system is built on a combination of several technologies and has overcome most manual
flaws and thus stands apart from the existing systems. However, the database management is an
area of concern and needs to be filtered out on a cyclic and timely basis to avoid data overflow.
Furthermore, since the system is built upon the features of Machine learning, effective training
and efficient procedures must be followed to achieve 100% accuracy. Finally, we can build the
system as an integration of the current attendance systems being used in the organization in order
to achieve maximum efficiency.
5.2 SUMMARY:
Initially, the facial features of the student are captured based on convolutional neural networks (Kewen Yan, Shaohui Huang, Yaoxian Song, Wei Liu, Neng Fan), extracted and registered into the database. Then, at the time of taking attendance, the facial features are compared with the existing database images. If a match is found, the attendance for the respective student for that corresponding hour is marked in the database.
REFERENCES
International Research Journal of Engineering and Technology.V4 (1).
P. Arun Mozhi Devan et al., (2017). Smart Attendance System Using Face Recognition.
Advances in Natural and Applied Sciences. 11(7), Pages: 139-144
Rahul V. Patil and S. B.Bangar. (2017). Video Surveillance Based Attendance system.
IJARCCE, 6(3), pp.708-713.
Mrunmayee Shirodkar. (2015). Automated Attendance Management System using Face
Recognition. International Journal of Computer Applications and International Conference
and Workshop on Emerging Trends in Technology.
Naveed Khan Balcoh. (2012). Algorithm for Efficient Attendance Management: Face Recognition based approach. International Journal of Computer Science Issues, V9 (4), No 1.
Varsha Gupta, Dipesh Sharma. (2014), “A Study of Various Face Detection Methods”,
International Journal of Advanced Research in Computer and Communication
Engineering), vol.3, no. 5.
P. Viola, M. J. Jones. (2004), “Robust Real-Time Face Detection”, International Journal
of Computer Vision 57(2), 137–154.
Mekha Joseph et al. (2016). Children's Transportation Safety System Using Real Time Face
Recognition. International Journal of Advanced Research in Computer and
Communication Engineering V5 (3).
Srushti Girhe et al. (2015). Computer Vision Based Semi-automatic Algorithm for face
detection. International Journal on Recent and Innovation Trends in Computing and
Communication V3(2).
Burak Ozen. (2017). Introduction to Boosting Methodology & Adaboost Algorithm. [online]
Available at: https://www.linkedin.com/pulse/introduction-boostingmethodology-adaboost-algorithm-burak-ozen
[Accessed 12 Apr. 2018].
Chris McCormick. (2013). Adaboost Tutorial. [online] Available at:
http://mccormickml.com/2013/12/13/adaboost-tutorial/ [Accessed 12 Apr. 2018].
Kihwan Kim. (2011). Rapid Object Detection using a Boosted Cascade of Simple Features
and Fast Face Detection via Morphology-Based Pre-processing.
Subhi Singh. (2015). Automatic Lecture Attendance System Using Face
Reorganization Matrix. Academic International Online Journal of Engineering and
Technology, V3(1).
Shireesha Chintalapati, M.V. Raghunadh. (2014), “Automated Attendance Management
System Based On Face Recognition Algorithms”, IEEE International Conference on
Computational Intelligence and Computing Research.
Gonzalez, R. C., & Woods, R. E. (2002). Digital image processing. Upper Saddle River,
N.J., Prentice Hall.
Kanan, C. and Cottrell, G. (2012). Color-to-Grayscale: Does the Method Matter in Image
Recognition?. PLoS ONE, 7(1), p.e29740.
Pratiksha M. Patel (2016). Contrast Enhancement of Images and videos using Histogram
Equalization. International Journal on Recent and Innovation Trends in Computing and
Communication.V4 (11).
Sasi, N. and Jayasree, V. (2013). Contrast Limited Adaptive Histogram Equalization for
Qualitative Enhancement of Myocardial Perfusion Images. Engineering, 05(10), pp.326-
331.
Aliaa A. A. Youssif, and Atef Z. (2006). Comparative Study of Contrast Enhancement and
Illumination Equalization Methods for Retinal Vasculature Segmentation. Proc. Cairo
International Biomedical Engineering Conference.
A., I. and E.Z., F. (2016). Image Contrast Enhancement Techniques: A Comparative Study
of Performance. International Journal of Computer Applications, 137(13), pp.43-48.
Bhuvaneshwari, K., Abirami, A. and Sripriya, N. (2017). Face Recognition Using PCA.
International Journal Of Engineering And Computer Science, 6(4).
Abhishek Singh and Saurabh Kumar. (2012). Face Recognition Using PCA and Eigen Face
Approach. [online] Available at: http://ethesis.nitrkl.ac.in/3814/1/Thesis.pdf [Accessed 10
Apr. 2022].
LC Paul and Abdulla Al Sumam. (2012). Face Recognition Using Principal Component
Analysis Method. IJARCET, V1 (9).
D. Nithya (2015). Automated Class Attendance System based on Face Recognition using
PCA Algorithm. International Journal of Engineering Research and, V4 (12).
Suman Kumar Bhattacharyya & Kumar Rahul. (2013), “Face Recognition by Linear
Discriminant Analysis”, International Journal of Communication Network Security, V2(2),
pp 31-35.
Ojala, T., Pietikainen, M. and Maenpaa, T. (2002). Multiresolution gray-scale and rotation
invariant texture classification with local binary patterns. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 24(7), pp.971-987.
Md. Abdur Rahim (2013), Face Recognition Using Local Binary Patterns. Global
Journal of Computer Science and Technology Graphics & Vision V13 (4) Version 1.0.
Kasar, M., Bhattacharyya, D. and Kim, T. (2016). Face Recognition Using Neural
Network: A Review. International Journal of Security and Its Applications, 10(3), pp.81-100.
Taigman, Y. et al., 2014. DeepFace: Closing the Gap to Human-Level Performance in Face
Verification. IEEE Conference on Computer Vision and Pattern Recognition.
Divyarajsinh N. Parmar et al. (2013). Face Recognition Methods and Applications.
Int. J. Computer Technology & Applications, Vol 4(1), 84-86.
APPENDIX – SOURCE CODE:
#############################################IMPORTING######################
import tkinter as tk
from tkinter import ttk
from tkinter import messagebox as mess
import tkinter.simpledialog as tsd
import cv2,os
import csv
import numpy as np
from PIL import Image
import pandas as pd
import datetime
import time
#############################################FUNCTIONS######################
def assure_path_exists(path):
    dir = os.path.dirname(path)
    if not os.path.exists(dir):
        os.makedirs(dir)
##############################################################################
def tick():
    time_string = time.strftime('%H:%M:%S')
    clock.config(text=time_string)
    clock.after(200, tick)
##############################################################################
def contact():
    mess._show(title='Contact us', message="Please contact us on : 'chaudharyabhishek6307044617@gmail.com' ")
##############################################################################
def check_haarcascadefile():
    exists = os.path.isfile("haarcascade_frontalface_default.xml")
    if exists:
        pass
    else:
        mess._show(title='Some file missing', message='Please contact us for help')
        window.destroy()
##############################################################################
def save_pass():
    assure_path_exists("TrainingImageLabel/")
    exists1 = os.path.isfile("TrainingImageLabel\psd.txt")
    if exists1:
        tf = open("TrainingImageLabel\psd.txt", "r")
        key = tf.read()
    else:
        master.destroy()
        new_pas = tsd.askstring('Old Password not found', 'Please enter a new password below', show='*')
        if new_pas == None:
            mess._show(title='No Password Entered', message='Password not set!! Please try again')
        else:
            tf = open("TrainingImageLabel\psd.txt", "w")
            tf.write(new_pas)
            mess._show(title='Password Registered', message='New password was registered successfully!!')
        return
    op = (old.get())
    newp = (new.get())
    nnewp = (nnew.get())
    if (op == key):
        if (newp == nnewp):
            txf = open("TrainingImageLabel\psd.txt", "w")
            txf.write(newp)
        else:
            mess._show(title='Error', message='Confirm new password again!!!')
            return
    else:
        mess._show(title='Wrong Password', message='Please enter correct old password.')
        return
    mess._show(title='Password Changed', message='Password changed successfully!!')
    master.destroy()
#############################################################################
def change_pass():
    global master
    master = tk.Tk()
    master.geometry("400x160")
    master.resizable(False, False)
    master.title("Change Password")
    master.configure(background="white")
    lbl4 = tk.Label(master, text=' Enter Old Password', bg='white', font=('comic', 12, ' bold '))
    lbl4.place(x=10, y=10)
    global old
    old = tk.Entry(master, width=25, fg="black", relief='solid', font=('comic', 12, ' bold '), show='*')
    old.place(x=180, y=10)
    lbl5 = tk.Label(master, text=' Enter New Password', bg='white', font=('comic', 12, ' bold '))
    lbl5.place(x=10, y=45)
    global new
    new = tk.Entry(master, width=25, fg="black", relief='solid', font=('comic', 12, ' bold '), show='*')
    new.place(x=180, y=45)
    lbl6 = tk.Label(master, text='Confirm New Password', bg='white', font=('comic', 12, ' bold '))
    lbl6.place(x=10, y=80)
    global nnew
    nnew = tk.Entry(master, width=25, fg="black", relief='solid', font=('comic', 12, ' bold '), show='*')
    nnew.place(x=180, y=80)
    cancel = tk.Button(master, text="Cancel", command=master.destroy, fg="black", bg="red", height=1, width=25, activebackground="white", font=('comic', 10, ' bold '))
    cancel.place(x=200, y=120)
    save1 = tk.Button(master, text="Save", command=save_pass, fg="black", bg="#00fcca", height=1, width=25, activebackground="white", font=('comic', 10, ' bold '))
    save1.place(x=10, y=120)
    master.mainloop()
##############################################################################
def psw():
    assure_path_exists("TrainingImageLabel/")
    exists1 = os.path.isfile("TrainingImageLabel\psd.txt")
    if exists1:
        tf = open("TrainingImageLabel\psd.txt", "r")
        key = tf.read()
    else:
        new_pas = tsd.askstring('Old Password not found', 'Please enter a new password below', show='*')
        if new_pas == None:
            mess._show(title='No Password Entered', message='Password not set!! Please try again')
        else:
            tf = open("TrainingImageLabel\psd.txt", "w")
            tf.write(new_pas)
            mess._show(title='Password Registered', message='New password was registered successfully!!')
        return
    password = tsd.askstring('Password', 'Enter Password', show='*')
    if (password == key):
        TrainImages()
    elif (password == None):
        pass
    else:
        mess._show(title='Wrong Password', message='You have entered wrong password')
##############################################################################
def clear():
    txt.delete(0, 'end')
    res = "1)Take Images >>> 2)Save Profile"
    message1.configure(text=res)

def clear2():
    txt2.delete(0, 'end')
    res = "1)Take Images >>> 2)Save Profile"
    message1.configure(text=res)
##############################################################################
def TakeImages():
    check_haarcascadefile()
    columns = ['SERIAL NO.', '', 'ID', '', 'NAME']
    assure_path_exists("StudentDetails/")
    assure_path_exists("TrainingImage/")
    serial = 0
    exists = os.path.isfile("StudentDetails\StudentDetails.csv")
    if exists:
        with open("StudentDetails\StudentDetails.csv", 'r') as csvFile1:
            reader1 = csv.reader(csvFile1)
            for l in reader1:
                serial = serial + 1
        serial = (serial // 2)
        csvFile1.close()
    else:
        with open("StudentDetails\StudentDetails.csv", 'a+') as csvFile1:
            writer = csv.writer(csvFile1)
            writer.writerow(columns)
            serial = 1
        csvFile1.close()
    Id = (txt.get())
    name = (txt2.get())
    if ((name.isalpha()) or (' ' in name)):
        cam = cv2.VideoCapture(0)
        harcascadePath = "haarcascade_frontalface_default.xml"
        detector = cv2.CascadeClassifier(harcascadePath)
        sampleNum = 0
        while (True):
            ret, img = cam.read()
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, 1.3, 5)
            for (x, y, w, h) in faces:
                cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
                # incrementing sample number
                sampleNum = sampleNum + 1
                # saving the captured face in the dataset folder TrainingImage
                cv2.imwrite("TrainingImage\ " + name + "." + str(serial) + "." + Id + '.' + str(sampleNum) + ".jpg", gray[y:y + h, x:x + w])
                # display the frame
                cv2.imshow('Taking Images', img)
            # wait for 100 milliseconds
            if cv2.waitKey(100) & 0xFF == ord('q'):
                break
            # break if the sample number is more than 100
            elif sampleNum > 100:
                break
        cam.release()
        cv2.destroyAllWindows()
        res = "Images Taken for ID : " + Id
        row = [serial, '', Id, '', name]
        with open('StudentDetails\StudentDetails.csv', 'a+') as csvFile:
            writer = csv.writer(csvFile)
            writer.writerow(row)
        csvFile.close()
        message1.configure(text=res)
    else:
        if (name.isalpha() == False):
            res = "Enter Correct name"
            message.configure(text=res)
##############################################################################
def TrainImages():
    check_haarcascadefile()
    assure_path_exists("TrainingImageLabel/")
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    harcascadePath = "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(harcascadePath)
    faces, ID = getImagesAndLabels("TrainingImage")
    try:
        recognizer.train(faces, np.array(ID))
    except:
        mess._show(title='No Registrations', message='Please Register someone first!!!')
        return
    recognizer.save("TrainingImageLabel\Trainner.yml")
    res = "Profile Saved Successfully"
    message1.configure(text=res)
    message.configure(text='Total Registrations till now : ' + str(ID[0]))
##############################################################################
def getImagesAndLabels(path):
    # get the path of all the files in the folder
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    # create empty face list
    faces = []
    # create empty ID list
    Ids = []
    # now looping through all the image paths and loading the Ids and the images
    for imagePath in imagePaths:
        # loading the image and converting it to grey scale
        pilImage = Image.open(imagePath).convert('L')
        # converting the PIL image into a numpy array
        imageNp = np.array(pilImage, 'uint8')
        # getting the Id from the image file name
        ID = int(os.path.split(imagePath)[-1].split(".")[1])
        # extract the face from the training image sample
        faces.append(imageNp)
        Ids.append(ID)
    return faces, Ids
##############################################################################
def TrackImages():
    check_haarcascadefile()
    assure_path_exists("Attendance/")
    assure_path_exists("StudentDetails/")
    for k in tv.get_children():
        tv.delete(k)
    msg = ''
    i = 0
    j = 0
    attendance = None  # holds the last recognized student's row
    recognizer = cv2.face.LBPHFaceRecognizer_create()  # cv2.createLBPHFaceRecognizer()
    exists3 = os.path.isfile("TrainingImageLabel\Trainner.yml")
    if exists3:
        recognizer.read("TrainingImageLabel\Trainner.yml")
    else:
        mess._show(title='Data Missing', message='Please click on Save Profile to reset data!!')
        return
    harcascadePath = "haarcascade_frontalface_default.xml"
    faceCascade = cv2.CascadeClassifier(harcascadePath)
    cam = cv2.VideoCapture(0)
    font = cv2.FONT_HERSHEY_SIMPLEX
    col_names = ['Id', '', 'Name', '', 'Date', '', 'Time']
    exists1 = os.path.isfile("StudentDetails\StudentDetails.csv")
    if exists1:
        df = pd.read_csv("StudentDetails\StudentDetails.csv")
    else:
        mess._show(title='Details Missing', message='Students details are missing, please check!')
        cam.release()
        cv2.destroyAllWindows()
        window.destroy()
        return  # avoid using an undefined DataFrame below
    while True:
        ret, im = cam.read()
        gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
        faces = faceCascade.detectMultiScale(gray, 1.2, 5)
        for (x, y, w, h) in faces:
            cv2.rectangle(im, (x, y), (x + w, y + h), (225, 0, 0), 2)
            serial, conf = recognizer.predict(gray[y:y + h, x:x + w])
            if (conf < 50):
                ts = time.time()
                date = datetime.datetime.fromtimestamp(ts).strftime('%d-%m-%Y')
                timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
                aa = df.loc[df['SERIAL NO.'] == serial]['NAME'].values
                ID = df.loc[df['SERIAL NO.'] == serial]['ID'].values
                ID = str(ID)
                ID = ID[1:-1]
                bb = str(aa)
                bb = bb[2:-2]
                attendance = [str(ID), '', bb, '', str(date), '', str(timeStamp)]
            else:
                Id = 'Unknown'
                bb = str(Id)
            cv2.putText(im, str(bb), (x, y + h), font, 1, (255, 255, 255), 2)
        cv2.imshow('Taking Attendance', im)
        if (cv2.waitKey(1) == ord('q')):
            break
    ts = time.time()
    date = datetime.datetime.fromtimestamp(ts).strftime('%d-%m-%Y')
    if attendance is not None:  # only write a row if someone was recognized
        exists = os.path.isfile("Attendance\Attendance_" + date + ".csv")
        if exists:
            with open("Attendance\Attendance_" + date + ".csv", 'a+') as csvFile1:
                writer = csv.writer(csvFile1)
                writer.writerow(attendance)
            csvFile1.close()
        else:
            with open("Attendance\Attendance_" + date + ".csv", 'a+') as csvFile1:
                writer = csv.writer(csvFile1)
                writer.writerow(col_names)
                writer.writerow(attendance)
            csvFile1.close()
    if os.path.isfile("Attendance\Attendance_" + date + ".csv"):
        with open("Attendance\Attendance_" + date + ".csv", 'r') as csvFile1:
            reader1 = csv.reader(csvFile1)
            for lines in reader1:
                i = i + 1
                if (i > 1):
                    if (i % 2 != 0):
                        iidd = str(lines[0]) + ' '
                        tv.insert('', 0, text=iidd, values=(str(lines[2]), str(lines[4]), str(lines[6])))
        csvFile1.close()
    cam.release()
    cv2.destroyAllWindows()
########################################USEDSTUFFS##########################
global key
key = ''
ts = time.time()
date = datetime.datetime.fromtimestamp(ts).strftime('%d-%m-%Y')
day, month, year = date.split("-")
mont = {'01': 'January',
        '02': 'February',
        '03': 'March',
        '04': 'April',
        '05': 'May',
        '06': 'June',
        '07': 'July',
        '08': 'August',
        '09': 'September',
        '10': 'October',
        '11': 'November',
        '12': 'December'}
########################################GUIFRONTEND########################
window = tk.Tk()
window.geometry("1280x720")
window.resizable(True, False)
window.title("Attendance System")
window.configure(background='#2d420a')
res = 0
exists = os.path.isfile("StudentDetails\StudentDetails.csv")
if exists:
    with open("StudentDetails\StudentDetails.csv", 'r') as csvFile1:
        reader1 = csv.reader(csvFile1)
        for l in reader1:
            res = res + 1
    res = (res // 2) - 1
    csvFile1.close()
else:
    res = 0
message.configure(text='Total Registrations till now : ' + str(res))
menubar = tk.Menu(window, relief='ridge')
filemenu = tk.Menu(menubar, tearoff=0)
filemenu.add_command(label='Change Password', command=change_pass)
filemenu.add_command(label='Contact Us', command=contact)
filemenu.add_command(label='Exit', command=window.destroy)
menubar.add_cascade(label='Help', font=('comic', 29, ' bold '), menu=filemenu)
# NOTE: the constructions of frame1, tv, message, message1, txt, txt2 and clock
# are missing from this listing (a page of widget-definition code was lost).
scroll = ttk.Scrollbar(frame1, orient='vertical', command=tv.yview)
scroll.grid(row=2, column=4, padx=(0, 100), pady=(150, 0), sticky='ns')
tv.configure(yscrollcommand=scroll.set)
window.configure(menu=menubar)
window.mainloop()
ABSTRACT
With the advancement of modern technologies in robotics and computer vision, real-time image
processing has become a major technology under consideration. This project attempts a novel
approach: capturing images from the camera in a real-time environment and processing them as
required. It portrays a machine learning approach to face recognition that accomplishes the
process quickly, with high identification rates, using OpenCV. The project describes a basic and
simple implementation of a face detection system using the Haar cascade algorithm, which is
computationally inexpensive and low-cost. The framework is programmed in the Python
language. The objectives of face recognition are to detect faces and their spatial locations in any
image or video. The proposed system detects the faces present in both greyscale and colour
images. The project centres on the implementation of a face detection system for human
identification based on the OpenCV library with Python. The idea of identification is built up by
writing separate code for the dataset generator, the trainer and the detector. The effectiveness of
the framework is examined by calculating the face recognition rate for each of the databases. The
results reveal that the proposed system can be used for face detection even on low-quality images
and shows a good level of performance.
Finally, the data to be displayed alongside the recognized photograph is stored in a database. This
concept has wide scope in security and surveillance projects and various other operations.
GROUP MEMBER
(COMPUTER SCIENCE AND ENGINEERING)
ABHISHEK CHAUDHARY (E-10467/18)
ADITYA KUMAR CHAURASIYA (E-10468/18)
ARYAN YADAV (E-10469/18)
MANOJ KUMAR PAL (E-10473/18)