
FACE RECOGNITION BASED ATTENDANCE SYSTEM USING OPENCV (CNN)

Abstract:

Automatic face recognition (AFR) technologies have improved considerably in recent years. Smart
attendance using real-time face recognition is a real-world solution for the day-to-day task of
handling student attendance. A face recognition-based attendance system recognizes students'
faces and marks their attendance using face biometrics, based on high-definition video and other
information technology. In this face recognition project, a computer system finds and recognizes
human faces quickly and precisely in images or videos captured through a surveillance camera.
Numerous algorithms and techniques have been developed to improve the performance of face
recognition; the concept implemented here is Deep Learning. The system converts the frames of
the video into images so that each student's face can easily be recognized for attendance, and the
attendance database is updated automatically.

Keywords: Face recognition, Face detection, Deep Learning, Convolutional Neural Network.
Chapter-1
INTRODUCTION OF DOMAIN
Machine learning (ML)
Machine learning is the scientific study of algorithms and statistical models that computer
systems use to perform a specific task without using explicit instructions, relying on patterns
and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms
build a mathematical model based on sample data, known as "training data", in order to make
predictions or decisions without being explicitly programmed to perform the task. Machine
learning algorithms are used in a wide variety of applications, such as email
filtering and computer vision, where it is difficult or infeasible to develop a conventional
algorithm for effectively performing the task.
Machine learning is closely related to computational statistics, which focuses on making
predictions using computers. The study of mathematical optimization delivers methods, theory
and application domains to the field of machine learning. Data mining is a field of study within
machine learning, and focuses on exploratory data analysis through unsupervised learning. In its
application across business problems, machine learning is also referred to as predictive analytics.
The name machine learning was coined in 1959 by Arthur Samuel. Tom M. Mitchell provided a
widely quoted, more formal definition of the algorithms studied in the machine learning field: "A
computer program is said to learn from experience E with respect to some class of tasks T and
performance measure P if its performance at tasks in T, as measured by P, improves with
experience E." This definition of the tasks with which machine learning is concerned offers a
fundamentally operational definition rather than defining the field in cognitive terms. This
follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which
the question "Can machines think?" is replaced with the question "Can machines do what we (as
thinking entities) can do?" In Turing's proposal, the various characteristics that could be possessed
by a thinking machine and the various implications of constructing one are exposed.

Machine learning uses data to detect various patterns in a given dataset.


1. It can learn from past data and improve automatically.
2. It is a data-driven technology.
3. Machine learning is similar to data mining, as it also deals with huge amounts of data.

How does Machine Learning Work?


A machine learning system learns from historical data, builds prediction models, and, whenever
it receives new data, predicts the output for it. The accuracy of the predicted output depends on
the amount of data: a larger amount of data helps to build a better model that predicts the output
more accurately.
Machine learning tasks are classified into several broad categories. In supervised learning, the
algorithm builds a mathematical model from a set of data that contains both the inputs and the
desired outputs. For example, if the task were determining whether an image contained a certain
object, the training data for a supervised learning algorithm would include images with and
without that object (the input), and each image would have a label (the output) designating
whether it contained the object. In special cases, the input may be only partially available, or
restricted to special feedback. Semi-supervised learning algorithms develop mathematical models
from incomplete training data, where a portion of the sample inputs do not have labels.
Classification algorithms and regression algorithms are types of supervised learning.
Classification algorithms are used when the outputs are restricted to a limited set of values. For a
classification algorithm that filters emails, the input would be an incoming email, and the output
would be the name of the folder in which to file the email. For an algorithm that identifies spam
emails, the output would be the prediction of either "spam" or "not spam", represented by
the Boolean values true and false. Regression algorithms are named for their continuous outputs,
meaning they may have any value within a range. Examples of a continuous value are the
temperature, length, or price of an object.
In unsupervised learning, the algorithm builds a mathematical model from a set of data that
contains only inputs and no desired output labels. Unsupervised learning algorithms are used to
find structure in the data, like grouping or clustering of data points. Unsupervised learning can
discover patterns in the data, and can group the inputs into categories, as in feature
learning. Dimensionality reduction is the process of reducing the number of features, or inputs,
in a set of data.
Active learning algorithms access the desired outputs (training labels) for a limited set of inputs
based on a budget and optimize the choice of inputs for which they will acquire training labels.
When used interactively, these can be presented to a human user for labeling. Reinforcement
learning algorithms are given feedback in the form of positive or negative reinforcement in a
dynamic environment and are used in autonomous vehicles or in learning to play a game against
a human opponent. Other specialized algorithms in machine learning include topic modeling,
where the computer program is given a set of natural language documents and finds other
documents that cover similar topics. Machine learning algorithms can be used to find the
unobservable probability density function in density estimation problems. Meta
learning algorithms learn their own inductive bias based on previous experience.
In developmental robotics, robot learning algorithms generate their own sequences of learning
experiences, also known as a curriculum, to cumulatively acquire new skills through self-guided
exploration and social interaction with humans. These robots use guidance mechanisms such as
active learning, maturation, motor synergies, and imitation.

Relation to data mining


Machine learning and data mining often employ the same methods and overlap significantly, but
while machine learning focuses on prediction, based on known properties learned from the
training data, data mining focuses on the discovery of (previously) unknown properties in the
data (this is the analysis step of knowledge discovery in databases). Data mining uses many
machine learning methods, but with different goals; on the other hand, machine learning also
employs data mining methods as unsupervised learning or as a preprocessing step to improve
learner accuracy. Much of the confusion between these two research communities (which do
often have separate conferences and separate journals, ECML PKDD being a major exception)
comes from the basic assumptions they work with: in machine learning, performance is usually
evaluated with respect to the ability to reproduce known knowledge, while in knowledge
discovery and data mining (KDD) the key task is the discovery of
previously unknown knowledge. Evaluated with respect to known knowledge, an uninformed
(unsupervised) method will easily be outperformed by other supervised methods, while in a
typical KDD task supervised methods cannot be used due to the unavailability of training data.
Relation to statistics
Machine learning and statistics are closely related fields in terms of methods, but distinct in their
principal goal: statistics draws population inferences from a sample, while machine learning
finds generalizable predictive patterns. According to Michael I. Jordan, the ideas of machine
learning, from methodological principles to theoretical tools, have had a long pre-history in
statistics. He also suggested the term data science as a placeholder to call the overall field.
Leo Breiman distinguished two statistical modeling paradigms: the data model and the algorithmic
model, wherein "algorithmic model" means more or less the machine learning algorithms
like random forests.
Some statisticians have adopted methods from machine learning, leading to a combined field that
they call statistical learning.

Types of learning algorithms


The types of machine learning algorithms differ in their approach, the type of data they input and
output, and the type of task or problem that they are intended to solve.
Supervised learning
Unsupervised learning
Reinforcement learning

Supervised learning
Supervised learning algorithms build a mathematical model of a set of data that contains both the
inputs and the desired outputs. The data is known as training data, and consists of a set of
training examples. Each training example has one or more inputs and the desired output, also
known as a supervisory signal. In the mathematical model, each training example is represented
by an array or vector, sometimes called a feature vector, and the training data is represented by
a matrix. Through iterative optimization of an objective function, supervised learning algorithms
learn a function that can be used to predict the output associated with new inputs. An optimal
function will allow the algorithm to correctly determine the output for inputs that were not a part
of the training data. An algorithm that improves the accuracy of its outputs or predictions over
time is said to have learned to perform that task.
Supervised learning algorithms include classification and regression. Classification algorithms
are used when the outputs are restricted to a limited set of values, and regression algorithms are
used when the outputs may have any numerical value within a range. Similarity learning is an
area of supervised machine learning closely related to regression and classification, but the goal
is to learn from examples using a similarity function that measures how similar or related two
objects are. It has applications in ranking, recommendation systems, visual identity tracking, face
verification, and speaker verification.
Supervised learning can be grouped further in two categories of algorithms:
1.Classification
2.Regression

Unsupervised learning
Unsupervised learning algorithms take a set of data that contains only inputs, and find structure
in the data, like grouping or clustering of data points. The algorithms, therefore, learn from test
data that has not been labeled, classified or categorized. Instead of responding to feedback,
unsupervised learning algorithms identify commonalities in the data and react based on the
presence or absence of such commonalities in each new piece of data. A central application of
unsupervised learning is in the field of density estimation in statistics, though unsupervised
learning encompasses other domains involving summarizing and explaining data features.
Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that
observations within the same cluster are similar according to one or more predesignated criteria,
while observations drawn from different clusters are dissimilar. Different clustering techniques
make different assumptions on the structure of the data, often defined by some similarity
metric and evaluated, for example, by internal compactness, or the similarity between members
of the same cluster, and separation, the difference between clusters. Other methods are based
on estimated density and graph connectivity.
Unsupervised learning can be further classified into two categories of algorithms (a brief
clustering sketch is given after this list):
1. Clustering
2. Association
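As a minimal illustration of clustering (a sketch only, not part of the attendance project), the lines below group a handful of hypothetical 2-D points with scikit-learn's KMeans; the data and the choice of three clusters are assumptions made purely for this example.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 2-D points; no labels are given to the algorithm
points = np.array([[1, 2], [1, 4], [1, 0],
                   [10, 2], [10, 4], [10, 0],
                   [5, 9], [6, 8], [5, 8]])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)  # group the points into 3 clusters
labels = kmeans.fit_predict(points)

print(labels)                   # cluster index assigned to each point
print(kmeans.cluster_centers_)  # coordinates of the learned cluster centres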
Reinforcement learning
Reinforcement learning is an area of machine learning concerned with how software
agents ought to take actions in an environment so as to maximize some notion of cumulative
reward. Due to its generality, the field is studied in many other disciplines, such as game
theory, control theory, operations research, information theory, simulation-based
optimization, multi-agent systems, swarm intelligence, statistics and genetic algorithms. In
machine learning, the environment is typically represented as a Markov Decision
Process (MDP). Many reinforcement learning algorithms use dynamic
programming techniques. Reinforcement learning algorithms do not assume knowledge of an
exact mathematical model of the MDP, and are used when exact models are infeasible.
Reinforcement learning algorithms are used in autonomous vehicles or in learning to play a game
against a human opponent.

Prerequisites
Before learning machine learning, you must have basic knowledge of the following so that you
can easily understand the concepts of machine learning:

1. Fundamental knowledge of probability and linear algebra.
2. The ability to code in a programming language, especially Python.
3. Knowledge of calculus, especially derivatives of single-variable and multivariate functions.

Linear Regression in Machine Learning


Linear regression is one of the easiest and most popular machine learning algorithms. It is a
statistical method used for predictive analysis. Linear regression makes predictions for
continuous/real or numeric variables such as sales, salary, age, product price, etc. The linear
regression algorithm models a linear relationship between a dependent variable (y) and one or
more independent variables (x), hence the name linear regression. Because the relationship is
linear, the model describes how the value of the dependent variable changes with the value of the
independent variable, and this relationship is represented by a sloped straight line.
Linear regression can be further divided into two types of algorithm:
Simple Linear Regression:
If a single independent variable is used to predict the value of a numerical dependent variable,
then such a Linear Regression algorithm is called Simple Linear Regression.

Multiple Linear regression:


If more than one independent variable is used to predict the value of a numerical dependent
variable, then such a Linear Regression algorithm is called Multiple Linear Regression.
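As a minimal sketch (separate from the attendance project), the lines below fit a simple linear regression with scikit-learn on a few hypothetical experience-versus-salary values; the numbers are invented purely for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: years of experience (x) and salary (y)
x = np.array([[1], [2], [3], [4], [5]])
y = np.array([30000, 35000, 41000, 45000, 52000])

model = LinearRegression()
model.fit(x, y)                       # learn the slope and intercept of the line
print(model.coef_, model.intercept_)  # fitted slope and intercept
print(model.predict([[6]]))           # predicted salary for 6 years of experience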

What is the Classification Algorithm?


The Classification algorithm is a supervised learning technique that is used to identify the
category of new observations on the basis of training data. In classification, a program learns
from the given dataset or observations and then classifies new observations into a number of
classes or groups, such as Yes or No, 0 or 1, Spam or Not Spam, cat or dog, etc. Classes can be
called targets/labels or categories. Unlike regression, the output variable of classification is a
category, not a value, such as "Green or Blue", "fruit or animal", etc. Since the classification
algorithm is a supervised learning technique, it takes labeled input data, which means the data
contains inputs with the corresponding outputs.

Types of ML Classification Algorithms:


Classification algorithms can be mainly divided into two categories:

Linear Models
Logistic Regression
Support Vector Machines

Non-linear Models
K-Nearest Neighbours
Kernel SVM
Naïve Bayes
Decision Tree Classification
Random Forest Classification
Logistic Regression in Machine Learning
Logistic regression is one of the most popular Machine Learning algorithms, which comes under
the Supervised Learning technique. It is used for predicting the categorical dependent variable
using a given set of independent variables.
Logistic regression predicts the output of a categorical dependent variable. Therefore the
outcome must be a categorical or discrete value. It can be either Yes or No, 0 or 1, true or False,
etc. but instead of giving the exact value as 0 and 1, it gives the probabilistic values which lie
between 0 and 1.
Logistic regression is quite similar to linear regression except in how it is used: linear regression
is used for solving regression problems, whereas logistic regression is used for solving
classification problems.
In Logistic regression, instead of fitting a regression line, we fit an "S" shaped logistic function,
which predicts two maximum values (0 or 1).
The curve from the logistic function indicates the likelihood of something such as whether the
cells are cancerous or not, a mouse is obese or not based on its weight, etc.
Logistic Regression is a significant machine learning algorithm because it has the ability to
provide probabilities and classify new data using continuous and discrete datasets.
Logistic regression can be used to classify observations using different types of data and can
easily determine the most effective variables for the classification. The logistic (sigmoid)
function maps any real input value to a probability between 0 and 1 (a small numerical sketch
follows the list of logistic regression types below).

Assumptions for Logistic Regression:


The dependent variable must be categorical in nature.
The independent variable should not have multi-collinearity.

Binomial: In binomial Logistic regression, there can be only two possible types of the dependent
variables, such as 0 or 1, Pass or Fail, etc.
Multinomial: In multinomial Logistic regression, there can be 3 or more possible unordered
types of the dependent variable, such as "cat", "dogs", or "sheep"
Ordinal: In ordinal Logistic regression, there can be 3 or more possible ordered types of
dependent variables, such as "low", "Medium", or "High".
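As a small numerical sketch of the 'S'-shaped logistic function described above (illustration only, not project code), the following computes the sigmoid for a few values and shows that every output lies between 0 and 1:

import numpy as np

def sigmoid(z):
    # logistic function: maps any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

for z in [-5, -1, 0, 1, 5]:
    print(z, sigmoid(z))  # outputs approach 0 for large negative z and 1 for large positive z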
K-Nearest Neighbor (KNN) Algorithm for Machine Learning
K-Nearest Neighbour is one of the simplest machine learning algorithms, based on the supervised
learning technique. The K-NN algorithm assumes similarity between the new case/data and the
available cases and puts the new case into the category that is most similar to the available
categories. K-NN stores all the available data and classifies a new data point based on similarity.
This means that when new data appears, it can easily be classified into a well-suited category by
using the K-NN algorithm.
The K-NN algorithm can be used for regression as well as classification, but it is mostly used for
classification problems.
K-NN is a non-parametric algorithm, which means it does not make any assumption about the
underlying data. It is also called a lazy learner algorithm because it does not learn from the
training set immediately; instead it stores the dataset and performs an action on it at classification
time. At the training phase, KNN just stores the dataset, and when it gets new data it classifies
that data into the category most similar to the new data.
Example: Suppose we have an image of a creature that looks similar to both a cat and a dog, but
we want to know whether it is a cat or a dog. For this identification we can use the KNN
algorithm, as it works on a similarity measure. The KNN model will find the features of the new
image that are similar to the cat and dog images and, based on the most similar features, place it
in either the cat or the dog category.
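A minimal K-NN sketch with scikit-learn is shown below; the four 2-D points, their labels and the choice of k = 3 are assumptions made for illustration only.

from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training data: two features per sample, two classes (0 and 1)
X_train = [[1, 1], [1, 2], [6, 6], [7, 7]]
y_train = [0, 0, 1, 1]

knn = KNeighborsClassifier(n_neighbors=3)  # classify by majority vote of the 3 nearest points
knn.fit(X_train, y_train)
print(knn.predict([[2, 2], [6, 5]]))       # new points are assigned to the most similar class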

Support Vector Machine Algorithm


Support Vector Machine or SVM is one of the most popular Supervised Learning algorithms,
which is used for Classification as well as Regression problems. However, primarily, it is used
for Classification problems in Machine Learning.
The goal of the SVM algorithm is to create the best line or decision boundary that can segregate
n-dimensional space into classes so that we can easily put a new data point in the correct
category in the future. This best decision boundary is called a hyperplane.
SVM chooses the extreme points/vectors that help in creating the hyperplane. These extreme
cases are called support vectors, and hence the algorithm is termed a Support Vector Machine. A
typical diagram of SVM shows two different categories separated by such a decision boundary
or hyperplane.
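The following is a minimal SVM sketch using scikit-learn's SVC with a linear kernel; the toy points are assumptions for illustration only.

from sklearn.svm import SVC

# Two linearly separable classes in 2-D (hypothetical data)
X_train = [[0, 0], [1, 1], [4, 5], [5, 5]]
y_train = [0, 0, 1, 1]

svm = SVC(kernel='linear')   # find the maximum-margin hyperplane
svm.fit(X_train, y_train)
print(svm.support_vectors_)  # the extreme points that define the hyperplane
print(svm.predict([[0.5, 0.5], [5, 4]]))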
Naïve Bayes Classifier Algorithm
The Naïve Bayes algorithm is a supervised learning algorithm based on Bayes' theorem and used
for solving classification problems.
It is mainly used in text classification with high-dimensional training datasets.
The Naïve Bayes classifier is one of the simplest and most effective classification algorithms, and
it helps in building fast machine learning models that can make quick predictions.
It is a probabilistic classifier, which means it predicts on the basis of the probability of an object.
Some popular applications of the Naïve Bayes algorithm are spam filtering, sentiment analysis,
and classifying articles.
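A small Gaussian Naïve Bayes sketch with scikit-learn is given below; the feature values and class labels are hypothetical and chosen only to show the probabilistic output.

from sklearn.naive_bayes import GaussianNB

# Hypothetical continuous features with two classes
X_train = [[1.0, 2.1], [1.2, 1.9], [3.8, 4.0], [4.1, 3.9]]
y_train = [0, 0, 1, 1]

nb = GaussianNB()
nb.fit(X_train, y_train)
print(nb.predict([[1.1, 2.0]]))        # most probable class for the new point
print(nb.predict_proba([[1.1, 2.0]]))  # class probabilities estimated via Bayes' theorem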

Decision Tree Classification Algorithm


Decision Tree is a Supervised learning technique that can be used for both classification and
Regression problems, but mostly it is preferred for solving Classification problems. It is a tree-
structured classifier, where internal nodes represent the features of a dataset, branches represent
the decision rules and each leaf node represents the outcome.
In a decision tree, there are two types of nodes: decision nodes and leaf nodes. Decision
nodes are used to make decisions and have multiple branches, whereas leaf nodes are the
outputs of those decisions and do not contain any further branches.
The decisions or the test are performed on the basis of features of the given dataset.
It is a graphical representation for getting all the possible solutions to a problem/decision based
on given conditions.
It is called a decision tree because, similar to a tree, it starts with the root node, which expands
on further branches and constructs a tree-like structure.
In order to build a tree, we use the CART algorithm, which stands for Classification and
Regression Tree algorithm. A decision tree simply asks a question and, based on the answer
(Yes/No), further splits the tree into subtrees.
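A minimal decision tree sketch with scikit-learn (whose DecisionTreeClassifier uses an optimised version of CART) is shown below; the [age, salary] values and labels are hypothetical.

from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical data: [age, estimated salary] with a purchased (1) / not purchased (0) label
X_train = [[22, 20000], [25, 32000], [47, 90000], [52, 110000]]
y_train = [0, 0, 1, 1]

tree = DecisionTreeClassifier(criterion='gini', random_state=0)
tree.fit(X_train, y_train)
print(export_text(tree, feature_names=['age', 'salary']))  # the learned yes/no questions
print(tree.predict([[30, 40000]]))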

Random Forest Algorithm


Random Forest is a popular machine learning algorithm that belongs to the supervised learning
technique. It can be used for both Classification and Regression problems in ML. It is based on
the concept of ensemble learning, which is a process of combining multiple classifiers to solve a
complex problem and to improve the performance of the model.
As the name suggests, "Random forest is a classifier that contains a number of decision trees on
various subsets of the given dataset and takes the average to improve the predictive accuracy of
that dataset." Instead of relying on one decision tree, the random forest takes the prediction from
each tree and, based on the majority vote of those predictions, predicts the final output.
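Below is a minimal random forest sketch with scikit-learn; the number of trees (n_estimators=10) and the toy data, reused from the decision tree sketch above, are assumptions for illustration.

from sklearn.ensemble import RandomForestClassifier

# Hypothetical [age, estimated salary] data with a purchased (1) / not purchased (0) label
X_train = [[22, 20000], [25, 32000], [47, 90000], [52, 110000]]
y_train = [0, 0, 1, 1]

forest = RandomForestClassifier(n_estimators=10, random_state=0)  # ensemble of 10 decision trees
forest.fit(X_train, y_train)
print(forest.predict([[30, 40000]]))  # final output is the majority vote of the individual trees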

Syntax for algorithm implementation:


Including the required packages

import numpy as nm

import matplotlib.pyplot as mtp

import pandas as pd

importing datasets to the program

data_set= pd.read_csv('user_data.csv')

Here, 'user_data.csv' should be replaced with the path to the dataset file.

Extracting Independent and dependent Variable

x= data_set.iloc[:, [2,3]].values

y= data_set.iloc[:, 4].values

Splitting the dataset into training and test set.

from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test= train_test_split(x, y, test_size= 0.25, random_state=0)


feature Scaling

from sklearn.preprocessing import StandardScaler

st_x= StandardScaler()

x_train= st_x.fit_transform(x_train)

x_test= st_x.transform(x_test)

Fitting required algorithm to the training set

from sklearn.linear_model import LogisticRegression

classifier= LogisticRegression(random_state=0)

classifier.fit(x_train, y_train)

(The fit step returns the fitted LogisticRegression estimator; printing it displays its parameters,
such as the regularization strength C, the penalty, and the solver.)

Predicting the test set result

y_pred= classifier.predict(x_test)

Creating the Confusion matrix

from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, y_pred)
Visualizing the training set result

from matplotlib.colors import ListedColormap

x_set, y_set = x_train, y_train

x1, x2 = nm.meshgrid(nm.arange(start=x_set[:, 0].min() - 1, stop=x_set[:, 0].max() + 1, step=0.01),
                     nm.arange(start=x_set[:, 1].min() - 1, stop=x_set[:, 1].max() + 1, step=0.01))

mtp.contourf(x1, x2, classifier.predict(nm.array([x1.ravel(), x2.ravel()]).T).reshape(x1.shape),
             alpha=0.75, cmap=ListedColormap(('purple', 'green')))

mtp.xlim(x1.min(), x1.max())

mtp.ylim(x2.min(), x2.max())

for i, j in enumerate(nm.unique(y_set)):
    mtp.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1],
                c=ListedColormap(('purple', 'green'))(i), label=j)

mtp.title('Logistic Regression (Training set)')

mtp.xlabel('Age')

mtp.ylabel('Estimated Salary')

mtp.legend()

mtp.show()

The mtp (matplotlib.pyplot) package is used to plot the analysis of the model's predictions and to
preview the data.
Applications:

There are many applications for machine learning

 Agriculture
 Anatomy
 Adaptive websites
 Affective computing
 Banking
 Bioinformatics
 Brain–machine interfaces
 Cheminformatics
 Citizen science
 Computer networks
 Computer vision
 Credit-card fraud detection
 Data quality
 DNA sequence classification
 Economics
 Financial market analysis
 General game playing
 Handwriting recognition
 Information retrieval
 Insurance
 Internet fraud detection
 Linguistics
 Machine learning control
 Machine perception
 Machine translation
 Marketing
 Medical diagnosis
 Natural language processing
 Natural language understanding
 Online advertising
 Optimization
 Recommender systems
 Robot locomotion
 Search engines
 Sentiment analysis
 Sequence mining
 Software engineering
 Speech recognition
 Structural health monitoring
 Syntactic pattern recognition
 Telecommunication
 Theorem proving
 Time series forecasting
 User behavior analytics
INTRODUCTION TO DEEP LEARNING

Deep learning

Deep learning is a class of machine learning algorithms that uses multiple layers to progressively


extract higher level features from the raw input. For example, in image processing, lower layers
may identify edges, while higher layers may identify the concepts relevant to a human such as
digits or letters or faces.

Deep learning (also known as deep structured learning or differentiable programming) is part of a


broader family of machine learning methods based on artificial neural
networks with representation learning. Learning can be supervised, semi-
supervised or unsupervised.

Deep learning architectures such as deep neural networks, deep belief networks, recurrent neural
networks and convolutional neural networks have been applied to fields including computer
vision, speech recognition, natural language processing, audio recognition, social network
filtering, machine translation, bioinformatics, drug design, medical image analysis, material
inspection and board game programs, where they have produced results comparable to and in
some cases surpassing human expert performance.

Artificial neural networks (ANNs) were inspired by information processing and distributed


communication nodes in biological systems. ANNs have various differences from
biological brains. Specifically, neural networks tend to be static and symbolic, while the
biological brain of most living organisms is dynamic (plastic) and analog.

Deep learning is a subset of machine learning in artificial intelligence (AI) that has networks
capable of learning, in an unsupervised manner, from data that is unstructured or unlabelled. It is
also known as deep neural learning or deep neural networks.
A CNN is a feed-forward neural network that is generally used for image recognition and object
classification. In an RNN, by contrast, the previous state is fed as input to the current state of the
network, so RNNs can be used in NLP, time series prediction, machine translation, etc.

Convolutional Neural Network (CNN)

Convolutional Neural Network is one of the main categories to do image classification and
image recognition in neural networks. Scene labeling, objects detections, and face recognition,
etc., are some of the areas where convolutional neural networks are widely used.

A CNN takes an image as input, processes it, and classifies it under a certain category such as
dog, cat, lion, tiger, etc. The computer sees an image as an array of pixels whose size depends on
the resolution of the image. Based on the image resolution, it sees h * w * d, where h = height,
w = width and d = depth (number of channels). For example, an RGB image is a 6 * 6 * 3 array of
the matrix, and a grayscale image is a 4 * 4 * 1 array of the matrix.

In a CNN, each input image passes through a sequence of convolution layers with filters (also
known as kernels), pooling layers and fully connected layers. After that, we apply the softmax
function to classify the object with probabilistic values between 0 and 1.

Convolution Layer

The convolution layer is the first layer used to extract features from an input image. By learning
image features using small squares of input data, the convolution layer preserves the relationship
between pixels. It is a mathematical operation that takes two inputs: the image matrix and a
kernel or filter.

Strides

Stride is the number of pixels by which the filter is shifted over the input matrix. When the stride
is equal to 1, we move the filter 1 pixel at a time; similarly, when the stride is equal to 2, we
move the filter 2 pixels at a time. With a stride of 2, the filter therefore skips every other position
of the input.
Padding

Padding plays a crucial role in building a convolutional neural network. Without padding, the
image shrinks after every convolution, and in a network with hundreds of layers we would be
left with a very small image at the end. Padding adds a border of extra pixels (usually zeros)
around the input so that the spatial size of the output can be preserved.

Pooling Layer

The pooling layer plays an important role in pre-processing within the network. It reduces the
number of parameters when the images are too large. Pooling is a "downscaling" of the image
obtained from the previous layers; it can be compared to shrinking an image to reduce its pixel
density. Spatial pooling, also called downsampling or subsampling, reduces the dimensionality of
each feature map but retains the important information. Spatial pooling can be of three types:

max pooling

average pooling

sum pooling

Fully Connected Layer

The fully connected layer is a layer in which the input from the other layers is flattened into a
vector and passed on. It transforms the output into the desired number of classes defined by the
network.
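Putting the layers described above together, the following is a minimal Keras sketch of a small CNN, a sketch under assumed settings rather than the project's final architecture: a convolution layer with stride and padding, a max-pooling layer, a flattening step, a fully connected layer, and a softmax output. The 64 x 64 x 3 input size and the 10 output classes are assumptions chosen only for illustration.

from tensorflow.keras import layers, models

model = models.Sequential([
    # convolution layer: 32 filters of size 3x3, stride 1, zero padding keeps the spatial size
    layers.Conv2D(32, (3, 3), strides=1, padding='same', activation='relu',
                  input_shape=(64, 64, 3)),
    # pooling layer: 2x2 max pooling halves the width and height
    layers.MaxPooling2D(pool_size=(2, 2)),
    # flatten the feature maps into a single vector
    layers.Flatten(),
    # fully connected layer
    layers.Dense(128, activation='relu'),
    # softmax output: probabilities over the (assumed) 10 classes
    layers.Dense(10, activation='softmax'),
])

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()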

Recurrent Neural Network (RNN)

A recurrent neural network (RNN) is a kind of artificial neural network mainly used in speech
recognition and natural language processing (NLP). RNNs are used in deep learning and in the
development of models that imitate the activity of neurons in the human brain.

Recurrent networks are designed to recognise patterns in sequences of data, such as text,
genomes, handwriting, the spoken word, and numerical time series data emanating from sensors,
stock markets, and government agencies.
A recurrent neural network looks similar to a traditional neural network except that a memory
state is added to the neurons, so the computation includes a simple memory.

The recurrent neural network is a deep-learning-oriented algorithm that follows a sequential
approach. In a traditional neural network we assume that inputs and outputs are independent of
one another, whereas a recurrent network carries information from one step to the next; these
networks are called recurrent because they perform their mathematical computations sequentially.
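As a brief sketch only (the sequence length, feature size and number of units are assumed values), a simple recurrent layer that keeps a memory state across time steps can be wired up in Keras as follows:

from tensorflow.keras import layers, models

# A tiny RNN: sequences of 20 time steps with 8 features per step, one output value
rnn = models.Sequential([
    layers.SimpleRNN(16, input_shape=(20, 8)),  # the 16-unit hidden state acts as the memory
    layers.Dense(1),
])

rnn.compile(optimizer='adam', loss='mse')
rnn.summary()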

Applications:

There are many applications for deep learning

 Automatic speech recognition


 Image recognition
 Visual art processing
 Natural language processing
 Drug discovery and toxicology
 Customer relationship management
 Recommendation systems
 Bioinformatics
 Medical Image Analysis
 Mobile advertising
 Image restoration
 Financial fraud detection
 Military
INTRODUCTION TO PYTHON
Python:

Python is an interpreted, high-level, general-purpose programming language. Created by Guido


van Rossum and first released in 1991, Python's design philosophy emphasizes code
readability with its notable use of significant whitespace. Its language constructs and object-
oriented approach aim to help programmers write clear, logical code for small and large-scale
projects.

Python is dynamically typed and garbage-collected. It supports multiple programming


paradigms, including structured (particularly procedural), object-oriented, and functional
programming. Python is often described as a "batteries included" language due to its
comprehensive standard library.

Python was conceived in the late 1980s as a successor to the ABC language. Python 2.0, released
in 2000, introduced features like list comprehensions and a garbage collection system capable of
collecting reference cycles. Python 3.0, released in 2008, was a major revision of the language
that is not completely backward-compatible, and much Python 2 code does not run unmodified
on Python 3.

The Python 2 language, i.e. Python 2.7.x, was officially discontinued on 1 January 2020 (the end
of life was first planned for 2015), after which security patches and other improvements are no
longer released for it. With Python 2's end of life, only Python 3.5.x and later are supported.

Python interpreters are available for many operating systems. A global community of


programmers develops and maintains CPython, an open source reference implementation. A non-
profit organization, the Python Software Foundation, manages and directs resources for Python
and CPython development.

Python is used for:


 web development (server-side),
 software development,
 mathematics,
 system scripting.

What can Python do?:

 Python can be used on a server to create web applications.


 Python can be used alongside software to create workflows.
 Python can connect to database systems. It can also read and modify files.
 Python can be used to handle big data and perform complex mathematics.
 Python can be used for rapid prototyping, or for production-ready software development.

Why Python?:

 Python works on different platforms (Windows, Mac, Linux, Raspberry Pi, etc).
 Python has a simple syntax similar to the English language.
 Python has syntax that allows developers to write programs with fewer lines than some
other programming languages.
 Python runs on an interpreter system, meaning that code can be executed as soon as it is
written. This means that prototyping can be very quick.
 Python can be treated in a procedural way, an object-oriented way or a functional way.

Python compared to other programming languages

 Python was designed for readability, and has some similarities to the English language
with influence from mathematics.
 Python uses new lines to complete a command, as opposed to other programming
languages which often use semicolons or parentheses.
 Python relies on indentation, using whitespace, to define scope; such as the scope of
loops, functions and classes. Other programming languages often use curly-brackets for
this purpose.
Python installation procedure:

Windows Based

It is highly unlikely that your Windows system shipped with Python already installed. Windows
systems typically do not. Fortunately, installing does not involve much more than downloading
the Python installer from the python.org website and running it. Let’s take a look at how to
install Python 3 on Windows:

Step 1: Download the Python 3 Installer

1. Open a browser window and navigate to the Download page for Windows at python.org.


2. Underneath the heading at the top that says Python Releases for Windows, click on the
link for the Latest Python 3 Release - Python 3.x.x. (As of this writing, the latest is
Python 3.6.5.)
3. Scroll to the bottom and select either Windows x86-64 executable installer for 64-bit
or Windows x86 executable installer for 32-bit. (See below.)

Sidebar: 32-bit or 64-bit Python?


For Windows, you can choose either the 32-bit or 64-bit installer. Here’s what the difference
between the two comes down to:

 If your system has a 32-bit processor, then you should choose the 32-bit installer.
 On a 64-bit system, either installer will actually work for most purposes. The 32-bit
version will generally use less memory, but the 64-bit version performs better for
applications with intensive computation.
 If you’re unsure which version to pick, go with the 64-bit version.

Note: Remember that if you get this choice “wrong” and would like to switch to another version
of Python, you can just uninstall Python and then re-install it by downloading another installer
from python.org.

Step 2: Run the Installer


Once you have chosen and downloaded an installer, simply run it by double-clicking on the
downloaded file. A dialog should appear that looks something like this:

Important: You want to be sure to check the box that says Add Python 3.x to PATH as shown
to ensure that the interpreter will be placed in your execution path.
Then just click Install Now. That should be all there is to it. A few minutes later you should
have a working Python 3 installation on your system.

Mac OS based

While current versions of macOS (previously known as “Mac OS X”) include a version of
Python 2, it is likely out of date by a few months. Also, this tutorial series uses Python 3, so let’s
get you upgraded to that.
The best way we found to install Python 3 on macOS is through the Homebrew package
manager. This approach is also recommended by community guides like The Hitchhiker’s Guide
to Python.

Step 1: Install Homebrew (Part 1)

To get started, you first want to install Homebrew:


1. Open a browser and navigate to http://brew.sh/. After the page has finished
loading, select the Homebrew bootstrap code under “Install Homebrew”. Then hit
cmd+c  to copy it to the clipboard. Make sure you’ve captured the text of the complete
command because otherwise the installation will fail.
2. Now you need to open a Terminal app window, paste the Homebrew bootstrap code,
and then hit Enter. This will begin the Homebrew installation.
3. If you’re doing this on a fresh install of macOS, you may get a pop up alert asking you to
install Apple’s “command line developer tools”. You’ll need those to continue with the
installation, so please confirm the dialog box by clicking on “Install”.

At this point, you’re likely waiting for the command line developer tools to finish installing, and
that’s going to take a few minutes. Time to grab a coffee or tea!

Step 2: Install Homebrew (Part 2)

You can continue installing Homebrew and then Python after the command line developer tools
installation is complete:

1. Confirm the “The software was installed” dialog from the developer tools installer.
2. Back in the terminal, hit Enter to continue with the Homebrew installation.
3. Homebrew asks you to enter your password so it can finalize the installation. Enter your
user account password and hit Enter to continue.
4. Depending on your internet connection, Homebrew will take a few minutes to download
its required files. Once the installation is complete, you’ll end up back at the command
prompt in your terminal window.

Whew! Now that the Homebrew package manager is set up, let’s continue on with installing
Python 3 on your system.

Step 3: Install Python

Once Homebrew has finished installing, return to your terminal and run the following
command:
$ brew install python3
Note: When you copy this command, be sure you don’t include the $ character at the beginning.
That’s just an indicator that this is a console command.
This will download and install the latest version of Python. After the Homebrew brew
install command finishes, Python 3 should be installed on your system.

You can make sure everything went correctly by testing if Python can be accessed from the
terminal:

1. Open the terminal by launching Terminal app.


2. Type pip3 and hit Enter.
3. You should see the help text from Python’s “Pip” package manager. If you get an error
message running pip3, go through the Python install steps again to make sure you have a
working Python installation.

Assuming everything went well and you saw the output from Pip in your command prompt
window…congratulations! You just installed Python on your system, and you’re all set to
continue with the next section in this tutorial.

Packages need for python based programming:

 Numpy
NumPy is a Python package which stands for 'Numerical Python'. It is the core library for
scientific computing; it contains a powerful n-dimensional array object and provides tools for
integrating C, C++, etc. It is also useful for linear algebra, random number generation, etc.
 Pandas
Pandas is a high-level data manipulation tool developed by Wes McKinney. It is built on the
Numpy package and its key data structure is called the DataFrame. DataFrames allow you to
store and manipulate tabular data in rows of observations and columns of variables.
 Keras
Keras is a high-level neural networks API, written in Python and capable of running on top
of TensorFlow, CNTK, or Theano. Use Keras if you need a deep learning library that:
Allows for easy and fast prototyping (through user friendliness, modularity, and
extensibility).
 Sklearn
Scikit-learn is a free machine learning library for Python. It features various algorithms like
support vector machines, random forests, and k-nearest neighbours, and it also supports Python
numerical and scientific libraries like NumPy and SciPy.
 Scipy
SciPy is an open-source Python library which is used to solve scientific and mathematical
problems. It is built on the NumPy extension and allows the user to manipulate and visualize
data with a wide range of high-level commands.
 Tensorflow
TensorFlow is a Python library for fast numerical computing created and released by Google.
It is a foundation library that can be used to create Deep Learning models directly or by using
wrapper libraries that simplify the process built on top of TensorFlow.
 Django
Django is a high-level Python Web framework that encourages rapid development and clean,
pragmatic design. Built by experienced developers, it takes care of much of the hassle of
Web development, so you can focus on writing your app without needing to reinvent the
wheel. It's free and open source.
 Pyodbc
pyodbc is an open source Python module that makes accessing ODBC databases simple. It
implements the DB API 2.0 specification but is packed with even more Pythonic
convenience. Precompiled binary wheels are provided for most Python versions on Windows
and macOS. On other operating systems this will build from source.
 Matplotlib
Matplotlib is an amazing visualization library in Python for 2D plots of arrays. Matplotlib is
a multi-platform data visualization library built on NumPy arrays and designed to work with
the broader SciPy stack. It was introduced by John Hunter in the year 2002.
 Opencv
OpenCV-Python is a library of Python bindings designed to solve computer vision
problems. Python is a general purpose programming language started by Guido van Rossum
that became very popular very quickly, mainly because of its simplicity and code readability.
 Nltk
NLTK is one of the leading platforms for working with human language data in Python; the
NLTK module is used for natural language processing. NLTK is literally an acronym for Natural
Language Toolkit. It can, for example, tokenize data by words and by sentences.
 SQLAlchemy
SQLAlchemy is a library that facilitates the communication between Python programs and
databases. Most of the times, this library is used as an Object Relational Mapper (ORM) tool
that translates Python classes to tables on relational databases and automatically converts
function calls to SQL statements.
 Urllib
urllib is a Python module that can be used for opening URLs. It defines functions and classes
to help in URL actions. With Python you can also access and retrieve data from the internet
like XML, HTML, JSON, etc. You can also use Python to work with this data directly.

Installation of packages:

The basic syntax for installing packages from the cmd terminal is as follows:

Step 1: First check that the pip command is available.

Step 2: Run pip list to see the list of packages already installed, then install the required ones
with the following command.

Step 3: Run pip install <package-name>.

The package name should match the requirement (for example, pip install opencv-python).


INTRODUCTION TO OPENCV

Open cv:

OpenCV was started at Intel in 1999 by Gary Bradsky and the first release came out in 2000.
Vadim Pisarevsky joined Gary Bradsky to manage Intel’s Russian software OpenCV team. In
2005, OpenCV was used on Stanley, the vehicle that won the 2005 DARPA Grand Challenge. Later
its active development continued under the support of Willow Garage, with Gary Bradsky and
Vadim Pisarevsky leading the project. Right now, OpenCV supports a lot of algorithms related to
Computer Vision and Machine Learning and it is expanding day-by-day. Currently OpenCV
supports a wide variety of programming languages like C++, Python, Java etc and is available on
different platforms including Windows, Linux, OS X, Android, iOS etc. Also, interfaces based
on CUDA and OpenCL are also under active development for high-speed GPU operations.
OpenCV-Python is the Python API of OpenCV. It combines the best qualities of OpenCV C++
API and Python language.

OpenCV-Python

Python is a general-purpose programming language started by Guido van Rossum which became
very popular in a short time, mainly because of its simplicity and code readability. It enables the
programmer to express ideas in fewer lines of code without reducing readability. Compared to
languages like C/C++, Python is slower, but an important feature of Python is that it can be
easily extended with C/C++. This helps us write computationally intensive code in C/C++ and
create Python wrappers for it, so that we can use these wrappers as Python modules. This gives us
two advantages: first, our code is as fast as the original C/C++ code (since it is the actual C++
code working in the background) and second, it is very easy to code in Python. This is how
OpenCV-Python works: it is a Python wrapper around the original C++ implementation. The
support of Numpy makes the task easier. Numpy is a highly optimized library for numerical
operations with MATLAB-style syntax. All the OpenCV array structures are converted to and
from Numpy arrays, so whatever operations you can do in Numpy, you can combine with
OpenCV. Besides that, several other libraries that support Numpy, such as SciPy and

Matplotlib, can be used with it. OpenCV-Python is therefore an appropriate tool for fast
prototyping of computer vision problems.

Since OpenCV is an open source initiative, all are welcome to make contributions to this library,
and it is the same for this tutorial. So, if you find any mistake in this tutorial (whether it be a
small spelling mistake or a big error in code or concepts), feel free to correct it; that is a good
task for freshers who begin to contribute to open source projects. Just fork OpenCV on GitHub,
make the necessary corrections and send a pull request to OpenCV.

OpenCV developers will check your pull request, give you important feedback and, once it passes
the approval of the reviewer, it will be merged into OpenCV. Then you become an open source
contributor. The same applies to other tutorials, documentation, etc. As new modules are added to
OpenCV-Python, this tutorial will have to be expanded, so those who know about a particular
algorithm can write up a tutorial which includes the basic theory of the algorithm and code
showing its basic usage, and submit it to OpenCV. Remember, together we can make this project
a great success! Below is the list of contributors who submitted tutorials to OpenCV-Python.

1. Alexander Mordvintsev (GSoC-2013 mentor)

2. Abid Rahman K. (GSoC-2013 intern)

Additional Resources

1. A Quick guide to Python - A Byte of Python

2. Basic Numpy Tutorials


3. Numpy Examples List

4. OpenCV Documentation

5. OpenCV Forum

Install OpenCV-Python in Windows

Goals In this tutorial

We will learn to set up OpenCV-Python on your Windows system. The steps below were tested
on a Windows 7 64-bit machine with Visual Studio 2010 and Visual Studio 2012; the screenshots
show VS2012.

Installing OpenCV from prebuilt binaries

1. Below Python packages are to be downloaded and installed to their default locations.

1.1. Python-2.7.x.

1.2. Numpy.

1.3. Matplotlib (Matplotlib is optional, but recommended since we use it a lot in our tutorials).

2. Install all packages into their default locations. Python will be installed to C:/Python27/.

3. After installation, open Python IDLE. Enter import numpy and make sure Numpy is working
fine.

4. Download latest OpenCV release from source forge site and double-click to extract it.

5. Goto opencv/build/python/2.7 folder.

6. Copy cv2.pyd to C:/Python27/lib/site-packages.

7. Open Python IDLE and type the following code in the Python terminal.


>>> import cv2

>>> print(cv2.__version__)

If the results are printed out without any errors, congratulations! You have installed OpenCV-
Python successfully.

Download and install necessary Python packages to their default locations

1. Python 3.6.8.x

2. Numpy

3. Matplotlib (Matplotlib is optional, but recommended since we use it a lot in our tutorials.)

Make sure Python and Numpy are working fine.

4. Download OpenCV source. It can be from Source forge (for official release version) or from
Github (for latest source).

5. Extract it to a folder, opencv and create a new folder build in it.

6. Open CMake-gui (Start > All Programs > CMake-gui)

7. Fill the fields as follows (see the image below):

7.1. Click on Browse Source... and locate the opencv folder.

7.2. Click on Browse Build... and locate the build folder we created.

7.3. Click on Configure.

7.4. It will open a new window to select the compiler. Choose appropriate compiler
(here, Visual Studio 11) and click Finish.

7.5. Wait until analysis is finished.


8. You will see all the fields are marked in red. Click on the WITH field to expand it. It decides
what extra features you need. So mark appropriate fields. See the below image:

9. Now click on BUILD field to expand it. First few fields configure the build method. See the
below image:

10. Remaining fields specify what modules are to be built. Since GPU modules are not yet
supported by OpenCV-Python, you can completely avoid them to save time (but if you work with
them, keep them there). See the image below:

11. Now click on ENABLE field to expand it. Make sure ENABLE_SOLUTION_FOLDERS is
unchecked (Solution folders are not supported by Visual Studio Express edition). See the image
below:

12. Also make sure that in the PYTHON field, everything is filled. (Ignore
PYTHON_DEBUG_LIBRARY). See image below:

13. Finally click the Generate button.

14. Now go to our opencv/build folder. There you will find OpenCV.sln file. Open it with Visual
Studio.

15. Check build mode as Release instead of Debug.

16. In the solution explorer, right-click on the Solution (or ALL_BUILD) and build it. It will
take some time to finish.

17. Again, right-click on INSTALL and build it. Now OpenCV-Python will be installed.

18. Open Python IDLE and enter import cv2. If no error, it is installed correctly

Using OpenCV: Read an image


Use the function cv2.imread() to read an image. The image should be in the working directory
or a full path to the image should be given. The second argument is a flag which specifies the
way the image should be read.

cv2.IMREAD_COLOR : Loads a color image. Any transparency of image will be neglected. It is


the default flag.

cv2.IMREAD_GRAYSCALE : Loads image in gray scale mode

cv2.IMREAD_UNCHANGED : Loads image as such including alpha channel

See the code below:

 import numpy as np

 import cv2

 # Load a color image in grayscale

 img = cv2.imread('messi5.jpg',0)

Warning: Even if the image path is wrong, it won’t throw any error, but print img will give you
None

Display an image: Use the function cv2.imshow() to display an image in a window. The window
automatically fits to the image size. The first argument is a window name, which is a string; the
second argument is our image. You can create as many windows as you wish, but with different
window names.

 cv2.imshow('image', img)

 cv2.waitKey(0)

cv2.waitKey() is a keyboard binding function. Its argument is the time in milliseconds; the
function waits that long for a key event (0 means it waits indefinitely for a key stroke).

 cv2.destroyAllWindows()
cv2.destroyAllWindows() simply destroys all the windows we created.

Write an image

Use the function cv2.imwrite() to save an image. The first argument is the file name and the
second argument is the image you want to save.

 cv2.imwrite('messigray.png',img)

This will save the image in PNG format in the working directory

The program below loads an image in grayscale, displays it, saves the image and exits if you
press 's', or simply exits without saving if you press the ESC key.

import numpy as np
import cv2

img = cv2.imread('messi5.jpg', 0)
cv2.imshow('image', img)
k = cv2.waitKey(0)

if k == 27:              # wait for ESC key to exit
    cv2.destroyAllWindows()
elif k == ord('s'):      # wait for 's' key to save and exit
    cv2.imwrite('messigray.png', img)
    cv2.destroyAllWindows()
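Since the attendance system works on frames captured from a camera, the following is a minimal sketch (the camera index and window name are assumptions) showing how OpenCV reads frames from a webcam and converts them to grayscale images, which can then be passed on to face detection and recognition:

import cv2

cap = cv2.VideoCapture(0)  # 0 = default camera; a video file path can be given instead

while True:
    ret, frame = cap.read()  # grab one frame from the video stream
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # face detection usually works on grayscale
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to stop
        break

cap.release()
cv2.destroyAllWindows()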
Chapter-2
INTRODUCTION OF FACE RECOGNITION

Technology today aims to impart knowledge-oriented technical innovations. Deep learning is one
of the interesting domains that enables a machine to train itself using datasets provided as input
and to produce an appropriate output during testing by applying different learning algorithms.
Nowadays attendance is considered an important factor for both the student and the teacher of an
educational organization. With the advancement of deep learning technology, the machine
automatically detects the attendance of the students and maintains a record of the collected data.

In general, the attendance system of the student can be maintained in two different forms
namely,
 Manual Attendance System (MAS)
 Automated Attendance System (AAS).

A Manual Attendance System is a process in which the teacher of a particular subject calls out
the students' names and marks attendance manually. Manual attendance may be considered a
time-consuming process; sometimes the teacher may miss someone, or students may answer
multiple times on behalf of their absent friends. The problem therefore arises with the traditional
process of taking attendance in the classroom. To solve all these issues, we go with an Automated
Attendance System (AAS).
Automated Attendance System (AAS) is a process to automatically estimate the presence or the
absence of the student in the classroom by using face recognition technology. It is also possible
to recognize whether the student is sleeping or awake during the lecture and it can also be
implemented in exam sessions to ensure the presence of the student. The presence of the students
can be determined by capturing their faces in a high-definition video stream, which makes it
highly reliable for the machine to verify the presence of all the students in the classroom.

The feature-based approach, also known as the local face recognition approach, is used to
pinpoint the key features of the face such as the eyes, ears, nose, mouth, edges, etc., whereas the
brightness-based approach, also termed the global face recognition approach, is used to recognise
all parts of the image.
Over the past decade, face detection and recognition have transcended from esoteric to popular
areas of research in computer vision, and they are among the better and more successful
applications of image analysis and algorithm-based understanding. Because of the intrinsic nature
of the problem, computer vision is not only a computer science area of research, but also the
object of neuro-scientific and psychological studies, mainly because of the general opinion that
advances in computer image processing and understanding research will provide insights into
how our brain works, and vice versa.

A general statement of the face recognition problem (in computer vision) can be formulated as
follows: given still or video images of a scene, identify or verify one or more persons in the scene
using a stored database of faces.

Facial recognition generally involves two stages:
Journal of Mobile, Embedded and Distributed Systems, vol. IV, no. 1, 2012
ISSN 2067 – 4074

39

is processed to crop and extract the


person’s face for easier recognition.
Face Recognition where that detected
and processed face is compared to a
database of known faces, to decide who
that person is.
Since 2002, face detection can be
performed fairly easily and reliably with
Intel’s open source framework called
OpenCV [1]. This framework has an in-
built Face Detector that works in roughly
90-95% of clear photos of a person
looking forward at the camera. However,
detecting a person’s face when that
person is viewed from an angle is
usually harder, sometimes requiring 3D
Head Pose Estimation. Also, lack of
proper brightness of an image can
greatly increase the difficulty of
detecting a face, or increased contrast in
shadows on the face, or maybe the
picture is blurry, or the person is
wearing glasses, etc.

Face recognition however is much less


reliable than face detection, with an
accuracy of 30-70% in general. Face
recognition has been a strong field of
research since the 1990s, but is still a
far way away from a reliable method of
user authentication. More and more
techniques are being developed each
year. The Eigenface technique is
considered the simplest method of
accurate face recognition, but many
other (much more complicated) methods
or combinations of multiple methods are
slightly more accurate.

OpenCV was started at Intel in 1999 by


Gary Bradski for the purposes of
accelerating research in and commercial
applications of computer vision in the
world and, for Intel, creating a demand
for ever more powerful computers by
such applications. Vadim Pisarevsky
joined Gary to manage Intel's Russian
software OpenCV team. Over time the
OpenCV team moved on to other
companies and other Research. Several
of the original team eventually ended up
working in robotics and found their way
to Willow Garage. In 2008, Willow
Garage saw the need to rapidly advance
robotic perception capabilities in an open
way that leverages the entire research
and commercial community and began
actively supporting OpenCV, with Gary
and Vadim once again leading the effort
[2].
Intel's open-source computer-vision
library can greatly simplify computer-
vision programming. It includes
advanced capabilities - face detection,
face tracking, face recognition, Kalman
filtering, and a variety of artificial-
intelligence (AI) methods - in ready-to-
use form. In addition, it provides many
basic computer-vision algorithms via its
lower-level APIs.
OpenCV has the advantage of being a
multi-platform framework; it supports
both Windows and Linux, and more
recently, Mac OS X.
OpenCV has so many capabilities it can
seem overwhelming at first. A good
understanding of how these methods
work is the key to getting good results
when using OpenCV. Fortunately, only a
select few need to be known beforehand
to get started.
OpenCV's functionality that will be used
for facial recognition is contained within
several modules. Following is a short
description of the key namespaces:

Over the past decade face detection and


recognition have transcended from
esoteric to popular areas of research in
computer vision and one of the better
and successful applications of image
analysis and algorithm based
understanding. Because of the intrinsic
nature of the problem, computer vision
is not only a computer science area of
research, but also the object of neuro-
scientific and psychological studies also,
mainly because of the general opinion
that advances in computer image
processing and understanding research
will provide insights into how our brain
work and vice versa.
A general statement of the face
recognition problem (in computer vision)
can be formulated as follows: given still
or video images of a scene, identify or
verify one or more persons in the scene
using a stored database of faces.
Facial recognition generally involves two
stages:
Face Detection where a photo is
searched to find a face, then the image
Journal of Mobile, Embedded and Distributed Systems, vol. IV, no. 1, 2012
ISSN 2067 – 4074

39

is processed to crop and extract the


person’s face for easier recognition.
Face Recognition where that detected
and processed face is compared to a
database of known faces, to decide who
that person is.
Since 2002, face detection can be
performed fairly easily and reliably with
Intel’s open source framework called
OpenCV [1]. This framework has an in-
built Face Detector that works in roughly
90-95% of clear photos of a person
looking forward at the camera. However,
detecting a person’s face when that
person is viewed from an angle is
usually harder, sometimes requiring 3D
Head Pose Estimation. Also, lack of
proper brightness of an image can
greatly increase the difficulty of
detecting a face, or increased contrast in
shadows on the face, or maybe the
picture is blurry, or the person is
wearing glasses, etc.

Face recognition however is much less


reliable than face detection, with an
accuracy of 30-70% in general. Face
recognition has been a strong field of
research since the 1990s, but is still a
far way away from a reliable method of
user authentication. More and more
techniques are being developed each
year. The Eigenface technique is
considered the simplest method of
accurate face recognition, but many
other (much more complicated) methods
or combinations of multiple methods are
slightly more accurate.

OpenCV was started at Intel in 1999 by


Gary Bradski for the purposes of
accelerating research in and commercial
applications of computer vision in the
world and, for Intel, creating a demand
for ever more powerful computers by
such applications. Vadim Pisarevsky
joined Gary to manage Intel's Russian
software OpenCV team. Over time the
OpenCV team moved on to other
companies and other Research. Several
of the original team eventually ended up
working in robotics and found their way
to Willow Garage. In 2008, Willow
Garage saw the need to rapidly advance
robotic perception capabilities in an open
way that leverages the entire research
and commercial community and began
actively supporting OpenCV, with Gary
and Vadim once again leading the effort
[2].
Intel's open-source computer-vision
library can greatly simplify computer-
vision programming. It includes
advanced capabilities - face detection,
face tracking, face recognition, Kalman
filtering, and a variety of artificial-
intelligence (AI) methods - in ready-to-
use form. In addition, it provides many
basic computer-vision algorithms via its
lower-level APIs.
OpenCV has the advantage of being a
multi-platform framework; it supports
both Windows and Linux, and more
recently, Mac OS X.
OpenCV has so many capabilities it can
seem overwhelming at first. A good
understanding of how these methods
work is the key to getting good results
when using OpenCV. Fortunately, only a
select few need to be known beforehand
to get started.
OpenCV's functionality that will be used
for facial recognition is contained within
several modules. Following is a short
description of the key namespaces:
Over the past decade, face detection and recognition have moved from esoteric to popular areas of research in computer vision, and they are among the more successful applications of image analysis and algorithm-based understanding. Because of the intrinsic nature of the problem, computer vision is not only a computer science area of research but also the object of neuroscientific and psychological studies, mainly because of the general opinion that advances in computer image processing and understanding research will provide insights into how our brain works, and vice versa. A general statement of the face recognition problem (in computer vision) can be formulated as follows: given still or video images of a scene, identify or verify one or more persons in the scene using a stored database of faces.
Facial recognition generally involves two stages:
 Face Detection, where a photo is searched to find a face, and the image is then processed to crop and extract the person's face for easier recognition.
 Face Recognition, where the detected and processed face is compared to a database of known faces to decide who that person is.
Since 2002, face detection can be performed fairly easily and reliably with Intel's open source framework called OpenCV.

This framework has an inbuilt face detector that works in roughly 90-95% of clear photos of a person looking forward at the camera. However, detecting a person's face when that person is viewed from an angle is usually harder, sometimes requiring 3D head pose estimation. A lack of proper brightness in the image, increased contrast in shadows on the face, a blurry picture, or the person wearing glasses can also greatly increase the difficulty of detecting a face.
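
As an illustration of this built-in detector, the following minimal sketch uses the Haar cascade classifier shipped with OpenCV; the sample image path is only a placeholder.

import cv2

# load the frontal-face Haar cascade shipped with opencv-python
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

img = cv2.imread('class_photo.jpg')              # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns a list of (x, y, w, h) rectangles, one per detected face
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

cv2.imshow('detected faces', img)
cv2.waitKey(0)
cv2.destroyAllWindows()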

Face recognition, however, is much less reliable than face detection, with an accuracy of 30-70% in general. Face recognition has been a strong field of research since the 1990s but is still far from being a reliable method of user authentication. More and more techniques are being developed each year. The Eigenface technique is considered the simplest method of accurate face recognition, but many other (much more complicated) methods, or combinations of multiple methods, are slightly more accurate.
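
For reference, the Eigenface method mentioned above is exposed through OpenCV's contrib module (cv2.face). The sketch below is only illustrative: it assumes the opencv-contrib-python package is installed and that a small set of equally sized grayscale face crops with integer student labels is available; the file names and labels are placeholders.

import cv2
import numpy as np

# hypothetical training data: equally sized grayscale face crops with integer labels
train_faces = [cv2.imread(f'faces/{i}.png', cv2.IMREAD_GRAYSCALE) for i in range(1, 5)]
train_labels = np.array([0, 0, 1, 1])

# EigenFaceRecognizer comes from the opencv-contrib package; all images must be the same size
model = cv2.face.EigenFaceRecognizer_create()
model.train(train_faces, train_labels)

# predict returns the closest label and a distance-like confidence (lower means a closer match)
test_face = cv2.imread('faces/test.png', cv2.IMREAD_GRAYSCALE)
label, confidence = model.predict(test_face)
print(label, confidence)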
OpenCV was started at Intel in 1999 by Gary Bradski for the purposes of accelerating research in
and commercial applications of computer vision in the world and, for Intel, creating a demand
for ever more powerful computers by such applications. Vadim Pisarevsky joined Gary to
manage Intel's Russian software OpenCV team. Over time the OpenCV team moved on to other
companies and other research. Several of the original team eventually ended up working in
robotics and found their way to Willow Garage. In 2008, Willow Garage saw the need to rapidly
advance robotic perception capabilities in an open way that leverages the entire research and
commercial community and began actively supporting OpenCV, with Gary and Vadim once
again leading the effort.
Intel's open-source computer-vision library can greatly simplify computer vision programming.
It includes advanced capabilities - face detection, face tracking, face recognition, Kalman
filtering, and a variety of artificial intelligence (AI) methods - in ready-to-use form. In addition, it
provides many basic computer-vision algorithms via its lower-level APIs.

OpenCV has the advantage of being a multi-platform framework; it supports both Windows and
Linux, and more recently, Mac OS X. OpenCV has so many capabilities it can seem
overwhelming at first. A good understanding of how these methods work is the key to getting
good results when using OpenCV. Fortunately, only a select few need to be known beforehand to
get started.

Chapter-3
LITERATURE SURVEY

A Counterpart Approach to Attendance and Feedback System using Machine Learning Techniques:
In this paper, two functions, namely student attendance and a feedback system, are implemented with a machine learning approach. The system automatically detects student performance and maintains records such as attendance and feedback on subjects like Science, English, etc. The attendance of a student is therefore made available by recognizing the face; on recognition, the attendance details and details about the marks of the student are obtained as feedback.

Automated Attendance System Using Face Recognition:
This paper proposes a system based on face detection and recognition algorithms which automatically detects a student's face when he/she enters the class and marks the attendance by recognizing the student. The Viola-Jones algorithm is used for face detection, which detects the human face using a cascade classifier, with the PCA algorithm for feature selection and SVM for classification. Compared to traditional attendance marking, this system saves time and also helps to monitor the students.

Student Attendance System Using Iris Detection:
In this proposed system the student is requested to stand in front of the camera so that the iris can be detected and recognized and the system can mark attendance for the student. Algorithms such as grayscale conversion, six-segment rectangular filter and skin pixel detection are used to detect the iris. This helps prevent proxy issues and maintains the attendance of the students in an effective manner, but it is a time-consuming process because each student or staff member has to wait until the previous person has finished.

Face Recognition-based Lecture Attendance System:

This paper proposes a system that takes attendance automatically using face recognition obtained by continuous observation. Continuous observation helps in estimating and improving the performance of attendance marking. To obtain the attendance, the positions and face images of the students present in the classroom are captured. Through continuous observation and recording, the system estimates the seating position and location of each student for attendance marking. The work focuses on a method to obtain different weights for each seat according to its location. The effectiveness of the picture is also discussed to enable faster recognition of the image.
Chapter-4

EXISTING RECOGNITION SYSTEMS:

4.1 Fingerprint Based recognition system:

In the fingerprint-based attendance system, a portable fingerprint device needs to be configured with the students' fingerprints beforehand. Later, either during or before the lecture hours, the student needs to place a finger on the configured device to record attendance for the day. The problem with this approach is that it may distract the students' attention during lecture time.

4.2 RFID (Radio Frequency Identification) Based recognition system:

In the RFID-based system, the student needs to carry a Radio Frequency Identity Card and place it on the card reader to record their presence for the day. The system can connect to RS232 and record the attendance to the saved database. However, there are possibilities of fraudulent access: students may use another student's ID to register presence when that student is absent, or they may otherwise misuse it.

4.3 Iris Based Recognition System:

In the iris-based student attendance system, the student needs to stand in front of a camera so that the camera can scan the student's iris. The scanned iris is matched with the student data stored in the database and the attendance is updated accordingly. This reduces the pen-and-paper workload of the faculty members of the institute. It also reduces the chances of proxies in the class and helps in keeping student records safe. It is a wireless biometric technique that solves the problem of spurious attendance and the trouble of laying the corresponding network.

4.4 Face Based Recognition System using MATLAB:

Facial recognition technology can be used to record attendance through a high-resolution digital camera that detects and recognizes the faces of the students, after which the machine compares the recognized face with the students' face images stored in the database. Once the face of the student is matched with a stored image, the attendance is marked in the attendance database for further calculation. If the captured image doesn't match any student's face present in the database, the image is stored as a new image in the database. In this system, there are possibilities that the camera does not capture the image properly or misses some of the students.
PROPOSED SYSTEM

4.5 Face Based Recognition System using Python OpenCV:

The task of the proposed system is to capture the face of each student and to store it in the database for attendance. The face of the student needs to be captured in such a manner that all the features of the student's face are detected, and even the seating and posture of the student are recognized. There is no need for the teacher to take attendance manually in the class, because the system records a video and, through further processing steps, recognizes the faces and updates the attendance database.
Chapter-5

INTRODUCTION OF TKINTER

Tkinter:

Tkinter is a Python binding to the Tk GUI toolkit. It is the standard Python interface to the Tk GUI toolkit and is included with standard Linux, Microsoft Windows and Mac OS X installs of Python.
The name Tkinter comes from Tk interface. Tkinter was written by Fredrik Lundh and is free software released under a Python license.

As with most other modern Tk bindings, Tkinter is implemented as a Python wrapper around a
complete Tcl interpreter embedded in the Python interpreter. Tkinter calls are translated into Tcl
commands which are fed to this embedded interpreter, thus making it possible to mix Python and
Tcl in a single application.

There are several popular GUI library alternatives available, such as:
 Tkinter − Tkinter is the Python interface to the Tk GUI toolkit shipped with Python. We will look at this option in this chapter.

 wxPython − This is an open-source Python interface for wxWindows (http://wxpython.org).

 JPython − JPython is a Python port for Java which gives Python scripts seamless access to Java class libraries on the local machine (http://www.jython.org).

There are many other interfaces available, which you can find on the net.

Tkinter Programming

Tkinter is the standard GUI library for Python. Python when combined with Tkinter provides a
fast and easy way to create GUI applications. Tkinter provides a powerful object-oriented
interface to the Tk GUI toolkit.

Creating a GUI application using Tkinter is an easy task. All you need to do is perform the
following steps −

 Import the Tkinter module.

 Create the GUI application main window.

 Add one or more of the widgets described below to the GUI application.

 Enter the main event loop to take action against each event triggered by the user.

Tkinter has 15+ widgets for creating the GUI application:


 Button
The Button widget is used to display buttons in your application
 Canvas
The Canvas widget is used to draw shapes, such as lines, ovals, polygons and rectangles, in your
application.
 Check button
The Check button widget is used to display a number of options as checkboxes. The user can
select multiple options at a time.
 Entry
The Entry widget is used to display a single-line text field for accepting values from a user.
 Frame
The Frame widget is used as a container widget to organize other widgets.
 Label
The Label widget is used to provide a single-line caption for other widgets. It can also contain
images.
 Listbox
The Listbox widget is used to provide a list of options to a user.
 Menubutton
The Menubutton widget is used to display menus in your application.
 Menu
The Menu widget is used to provide various commands to a user. These commands are contained inside the Menubutton.
 Message
The Message widget is used to display multi-line text messages to the user; the text cannot be edited.

 Radiobutton
The Radio button widget is used to display a number of options as radio buttons. The user
can select only one option at a time.
 Scale
The Scale widget is used to provide a slider widget.
 Scrollbar
The Scrollbar widget is used to add scrolling capability to various widgets, such as list boxes.
 Text
The Text widget is used to display text in multiple lines.
 Toplevel
The Toplevel widget is used to provide a separate window container.
 Spinbox
The Spinbox widget is a variant of the standard Tkinter Entry widget, which can be used to
select from a fixed number of values.
 PanedWindow
A PanedWindow is a container widget that may contain any number of panes, arranged
horizontally or vertically.
 LabelFrame
A LabelFrame is a simple container widget. Its primary purpose is to act as a spacer or
container for complex window layouts.
 tkMessageBox
This module is used to display message boxes in your applications.

By using these widget modules we create the GUI of the Python application.

The application looks like a console/form application which includes buttons, entry labels, text labels, message boxes, message information, check buttons and an analysis preview.

Example
#!/usr/bin/python3
import tkinter

top = tkinter.Tk()
# Code to add widgets will go here...
top.mainloop()

Creating the application using Tkinter to capture images, recognize faces and add attendance to a CSV database file

Python provides the standard library Tkinter for creating the graphical user interface for desktop
based applications.

Developing desktop based applications with python Tkinter is not a complex task. An empty
Tkinter top-level window can be created by using the following steps.

1. import the Tkinter module.


2. Create the main application window.
3. Add the widgets like labels, buttons, frames, etc. to the window.
4. Call the main event loop so that the actions can take place on the user's computer screen.

Syntax:
#!/usr/bin/python3
from tkinter import *

# creating the application main window
top = Tk()

# entering the event main loop
top.mainloop()

This syntax will create the Tkinter window on the screen.

Output:

Then we create buttons on the window.


 The button widget is used to add various types of buttons to the python application. Python
allows us to configure the look of the button according to our requirements. Various options
can be set or reset depending upon the requirements.

We can also associate a method or function with a button which is called when the button is
pressed.

The syntax to use the button widget is given below.

Syntax:
from tkinter import messagebox   # needed for showinfo; assumes top = Tk() already exists

def fun():
    messagebox.showinfo("Hello", "Button clicked")

b = Button(top, text="Red", command=fun, activeforeground="red",
           activebackground="pink", pady=10)
b.pack(side=TOP)

 The Entry widget is used to provide a single-line text box to the user to accept a value from
the user. We can use the Entry widget to accept text strings from the user. It can only be
used for one line of text; for multiple lines of text, we must use the Text widget.

The syntax to use the Entry widget is given below.

Syntax:
entry_label = Label(top, text="Entry")
entry_label.place(x=30, y=50)
e1 = Entry(top)
e1.place(x=80, y=50)

 Python Tkinter Frame widget is used to organize the group of widgets. It acts like a container
which can be used to hold the other widgets. The rectangular areas of the screen are used to
organize the widgets to the python application.
The syntax to use the Frame widget is given below.
Syntax:
frame = Frame(top)
frame.pack()

lframe = Frame(top)
lframe.pack(side=LEFT)   # side can be LEFT, RIGHT, TOP or BOTTOM

 The Label is used to specify the container box where we can place the text or images. This
widget is used to provide the message to the user about other widgets used in the python
application.
There are various options which can be specified to configure the text or the part of the text
shown in the Label.
The syntax to use the Label is given below.
Syntax:
#creating label  
uname = Label(top, text = "Username").place(x = 30,y = 50) 

 The Message widget is used to show the message to the user regarding the behaviour of the
python application. The message widget shows the text messages to the user which can not
be edited.
The message text can contain more than one line; however, the message can only be shown in a
single font.

The syntax to use the Message widget is given below.


Syntax:
var = StringVar()  
msg = Message( top, text = "Welcome to tkinter")  
msg.pack()  

Finally, we combine these widgets to create the face recognition based attendance application
using OpenCV.
Working and recognition:

When the Tkinter application is executed in Python, it displays a Tkinter window on the screen. In that window we have four buttons, three entry inputs, four labels and one output label, as shown in the figure.
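
Since the referenced figure is not reproduced here, the following is a rough, hypothetical sketch of such a window; the button texts, entry fields, callbacks and layout are assumptions rather than the exact project code.

from tkinter import *

window = Tk()
window.title("Face Recognition Based Attendance System")

# three labelled entry inputs (field names are assumptions)
Label(window, text="Enrollment").grid(row=0, column=0)
Label(window, text="Name").grid(row=1, column=0)
Label(window, text="Subject").grid(row=2, column=0)
Entry(window).grid(row=0, column=1)
Entry(window).grid(row=1, column=1)
Entry(window).grid(row=2, column=1)

# output label used to show status / notifications
Label(window, text="Notification").grid(row=3, column=0)
status = Label(window, text="", width=30)
status.grid(row=3, column=1)

# four buttons wired to placeholder callbacks
Button(window, text="Take Images",
       command=lambda: status.config(text="capturing...")).grid(row=4, column=0)
Button(window, text="Train Images",
       command=lambda: status.config(text="training...")).grid(row=4, column=1)
Button(window, text="Track Images",
       command=lambda: status.config(text="tracking...")).grid(row=5, column=0)
Button(window, text="Quit", command=window.destroy).grid(row=5, column=1)

window.mainloop()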

The main working principle of the project is that the captured video data is converted into images in order to detect and recognize the faces. The recognized student is then marked present; otherwise the system marks the student as absent in the database.

Capture video:
The camera is fixed at a specific distance inside the classroom to capture video of the frontal images of all the students in the class.

Separate as frames from the video:
The captured video needs to be split into frames for easier detection and recognition of the students.
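
A minimal sketch of this capture-and-split step is shown below, assuming the default camera at index 0 and an existing frames/ directory for the saved images.

import cv2

# open the fixed classroom camera (device index 0 is an assumption)
cap = cv2.VideoCapture(0)

frame_count = 0
while frame_count < 50:                 # capture a limited number of frames for this sketch
    ret, frame = cap.read()             # ret is False when no frame could be read
    if not ret:
        break
    cv2.imwrite(f'frames/frame_{frame_count}.jpg', frame)   # save each frame as an image
    frame_count += 1

cap.release()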

Face Detection:
Face detection is the process where the image given as input (picture) is searched to find any face; after a face is found, image processing cleans up the facial image for easier recognition. A CNN-based algorithm can be implemented to detect the faces.
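
One commonly used CNN-style detector available through OpenCV's DNN module is the ResNet-10 SSD face model distributed with the OpenCV samples. The sketch below assumes the two model files have already been downloaded locally and that a captured frame is available on disk; file names and the confidence threshold are assumptions.

import cv2
import numpy as np

# assumes the Caffe model files from the OpenCV DNN face-detector samples are present locally
net = cv2.dnn.readNetFromCaffe('deploy.prototxt',
                               'res10_300x300_ssd_iter_140000.caffemodel')

frame = cv2.imread('frames/frame_0.jpg')        # placeholder frame from the captured video
h, w = frame.shape[:2]

# the network expects a 300x300 BGR input with the mean values below subtracted
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0,
                             (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:                        # keep reasonably confident detections
        x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * np.array([w, h, w, h])).astype(int)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)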

Face Recognition:
After the completion of detecting and processing the face, it is compared to the faces present in
the students' database to update the attendance of the students.

Post-Processing:
The post-processing mechanism involves updating the names of the students into an Excel or CSV sheet. The sheet can be maintained on a weekly or monthly basis to record the students' attendance. This attendance record can be sent to the parents or guardians of the students to report the student's performance.
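
A minimal sketch of this step, assuming a hypothetical Attendance.csv file and an illustrative recognized_students list produced by the recognition stage, might look like this.

import csv
from datetime import datetime

# hypothetical output of the recognition stage: (student id, name) pairs
recognized_students = [("101", "StudentA"), ("102", "StudentB")]

# append one row per recognized student with the current date and time
with open('Attendance.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    for student_id, name in recognized_students:
        writer.writerow([student_id, name,
                         datetime.now().strftime('%Y-%m-%d'),
                         datetime.now().strftime('%H:%M:%S')])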
Source coding:
Results:
Conclusion:
Thus, the aim of this work is to capture video of the students, convert it into frames, relate it to the database to determine their presence or absence, and mark attendance for each student to maintain the record. The automated classroom attendance system helps to increase accuracy and speed, ultimately achieving high-precision real-time attendance to meet the need for automatic classroom evaluation.

FUTURE ENHANCEMENTS:
 The Automated Attendance System can be implemented in larger areas, like a seminar hall, where it helps in sensing the presence of many people.
 Sometimes poor lighting conditions in the classroom may affect image quality, which indirectly degrades system performance; this can be overcome at a later stage by improving the quality of the video or by using image enhancement algorithms.

REFERENCES:
1. N. Sudhakar Reddy, M. V. Sumanth, S. Suresh Babu, "A Counterpart Approach to Attendance and Feedback System using Machine Learning Techniques", Journal of Emerging Technologies and Innovative Research (JETIR), Volume 5, Issue 12, Dec 2018.

2. Dan Wang, Rong Fu, Zuying Luo, "Classroom Attendance Auto-management Based on Deep Learning", Advances in Social Science, Education and Humanities Research, Volume 123, ICESAME 2017.

3. Akshara Jadhav, Akshay Jadhav, Tushar Ladhe, Krishna Yeolekar, "Automated Attendance System Using Face Recognition", International Research Journal of Engineering and Technology (IRJET), Volume 4, Issue 1, Jan 2017.

4. B. Prabhavathi, V. Tanuja, V. Madhu Viswanatham and M. Rajashekhara Babu, "A smart technique for attendance system to recognize faces through parallelism", IOP Conf. Series: Materials Science and Engineering 263, 2017.

5. Prajakta Lad, Sonali More, Simran Parkhe, Priyanka Nikam, Dipalee Chaudhari, "Student Attendance System Using Iris Detection", IJARIIE, ISSN(O)-2395-4396, Vol-3, Issue-2, 2017.

6. Samuel Lukas, Aditya Rama Mitra, Ririn Ikana Desanti, Dion Krisnadi, "Student Attendance System in Classroom Using Face Recognition Technique", Conference Paper, DOI: 10.1109/ICTC.2016.7763360, Oct 2016.

7. K. Senthamil Selvi, P. Chitrakala, A. Antony Jenitha, "Face Recognition Based Attendance Marking System", IJCSMC, Vol. 3, Issue 2, February 2014.

8. Yohei Kawaguchi, Tetsuo Shoji, Weijane Lin, Koh Kakusho, Michihiko Minoh, "Face Recognition-based Lecture Attendance System", Oct 2014.

9. Shireesha Chintalapati, M. V. Raghunadh, "Automated Attendance Management System Based On Face Recognition Algorithms", IEEE International Conference on Computational Intelligence and Computing Research, 2013.

10. B. K. Mohamed and C. Raghu, "Fingerprint attendance system for classroom needs", India Conference (INDICON), Annual IEEE, pp. 433–438, 2012.
