
Technical seminar

Synopsis

Name: Yash Uday Kulkarni


Roll no: 202010020
Batch: B1

Topic: Artificial Intelligence – Facial recognition

1. Introduction
Artificial intelligence is one of the most intuitive and actively researched areas of
technology today. It has contributed to many fields such as automation, robotics,
and security. One of its most important contributions is in the field of facial
recognition, which has become one of the most interesting areas of research in
human-computer interaction. Face images are used throughout the globe to
identify people for citizenship identification, identity cards, social security cards,
intrusion detection, and more.
The process of face recognition includes segmentation, isolation, and validation of
facial features such as nose size, brow width, and forehead area. Face recognition
is now actively used in smartphones such as Samsung devices and the iPhone. The
process comprises two major steps: feature extraction and classification. Raw
pictures often take a long time to process because they contain a large number of
pixels, so it is necessary to reduce the dimensionality of the pictures to reduce
processing time.
2. Literature review

There have been quite a few contributions to the field of face recognition over the
past decades. Z. B. Lahaw introduced a method using linear discriminant analysis,
principal component analysis, and support vector machine algorithms. He carried
out an experiment on the AT&T database, which comprises 400 face pictures of 40
different people, with 10 pictures per person taken under different conditions, such
as varying angles and subjects wearing shades. These pictures were grayscale with
dimensions of 112 x 92. The author was able to achieve an accuracy of 96%.
N. Sabri presented work that compared four different ML algorithms, including
Naïve Bayes, MLP, and SVM. He concluded that the Naïve Bayes approach
outperforms the other three, with an accuracy of 91% to 93%.
Sujata G. Bhele and V. H. Mankar have attempted to examine a significant number
of papers covering the latest developments in the area of face recognition. Their
study demonstrates that new algorithms need to be developed using hybrid
soft-computing techniques, such as ANN, SVM, and SOM, that produce better
output for face recognition. The analysis examines these techniques against
criteria relevant to face recognition problems such as lighting, pose variation, and
facial expressions. PCA, also known as the Karhunen-Loeve technique, is among
the most common techniques for selecting features and reducing dimensionality.
The associated recognition technique, known as the eigenface technique,
describes a facial feature space that reduces the dimensionality of the existing
data. Another dominant technique for facial recognition is linear discriminant
analysis (LDA). It produces a representation that linearly transforms the existing
data space into a low-dimensional feature domain where the data is well
separated. Support vector machines (SVM) are among the most valuable methods
for classification problems, and face recognition is one clear example. The added
benefit of the SVM classifier over a standard neural network is that SVMs can
attain better generalization precision.
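To make the relationship between these techniques concrete, the following is a
minimal sketch (not the exact setup of any cited paper) that reduces the AT&T
face images with PCA and then classifies them with LDA and a linear SVM using
scikit-learn; the 50-component projection and the 80/20 split are illustrative
assumptions, and scikit-learn ships the AT&T set as the Olivetti faces.

# Sketch: PCA dimensionality reduction followed by LDA and SVM classifiers
# on the AT&T (Olivetti) face dataset, using scikit-learn.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

# 400 grayscale images of 40 subjects (10 each), downscaled to 64 x 64 in this copy.
faces = fetch_olivetti_faces()
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.2, stratify=faces.target, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("Linear SVM", SVC(kernel="linear"))]:
    # Project onto 50 principal components, then classify the projections.
    model = make_pipeline(PCA(n_components=50, whiten=True), clf)
    model.fit(X_train, y_train)
    print(name, "accuracy:", model.score(X_test, y_test))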

Bartlett, Movellan, and Sejnowski also proposed a face recognition algorithm
based on Independent Component Analysis (ICA). The PCA algorithm is based on
the idea that the important information of an image is contained in the pairwise
relationships between pixels, whereas ICA is based on the idea that some
important information may be contained in the higher-order statistics. Maryam
Mollaee and Mohammad Hossein Moattar have proposed a face recognition system
using a modified ICA for better accuracy.
Dhanaseely, Himavati, and Srinivasan proposed a face recognition system that
uses PCA to reduce dimensionality and a neural network for classification. A
neural-network-based face recognition system is biologically inspired and behaves
like the neurons of human beings, which carry signals from one place to another.
Just like a neuron, a perceptron calculates a weighted sum of its numerical inputs
and determines whether a person is recognized or not. Using a neural network
requires a lot of computational work.
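As a small illustration of that weighted-sum idea, the following sketch shows a
perceptron-style decision in Python; the feature values, weights, and bias are
purely hypothetical numbers chosen for illustration.

import numpy as np

# Minimal perceptron-style decision: weighted sum of inputs plus a bias,
# thresholded to decide "recognized" (1) or "not recognized" (0).
def perceptron_decision(features, weights, bias):
    weighted_sum = np.dot(features, weights) + bias
    return 1 if weighted_sum > 0 else 0

# Hypothetical values purely for illustration.
features = np.array([0.8, 0.1, 0.5])   # e.g. three extracted face features
weights = np.array([1.2, -0.4, 0.7])
print(perceptron_decision(features, weights, bias=-0.5))  # prints 1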
3. Problem statement

Now that we have taken a look at what facial recognition is and the different
algorithms used to recognize and detect faces, we can move to the problem
statement: how can we design an efficient face detection system, using one or a
combination of the given algorithms, to achieve maximum accuracy?
An example would be the use of facial detection to track attendance and also to
verify the presence of the said person while attempting an examination. The
recent events related to the pandemic have shifted most academic activities to an
online mode. We can implement a facial recognition algorithm and develop an
application to track attendance and store the data as a CSV or any other
spreadsheet format.
The most common and easiest way to implement a face detection system is to
make use of well-researched libraries such as the Open Computer Vision
(OpenCV) library for Python, the face-recognition module for Python, and the dlib
library for C/C++. Although not necessary, a Raspberry Pi 4B and a USB camera
can be used to reduce the processing load on the computer and its processor.
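As a rough sketch of the attendance idea, the Python face-recognition module can
be combined with a CSV log roughly as follows; the "known_faces" folder with one
reference photo per person, the "snapshot.jpg" capture, and the "attendance.csv"
file name are assumptions made only for illustration.

# Sketch: match a captured frame against known faces and log attendance to CSV.
import csv
import os
from datetime import datetime
import face_recognition

known_encodings, known_names = [], []
for filename in os.listdir("known_faces"):    # assumed: one reference photo per person
    image = face_recognition.load_image_file(os.path.join("known_faces", filename))
    encodings = face_recognition.face_encodings(image)
    if encodings:
        known_encodings.append(encodings[0])
        known_names.append(os.path.splitext(filename)[0])

# Compare a newly captured frame (e.g. a webcam snapshot) against the known faces.
frame = face_recognition.load_image_file("snapshot.jpg")
with open("attendance.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for encoding in face_recognition.face_encodings(frame):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        if True in matches:
            name = known_names[matches.index(True)]
            writer.writerow([name, datetime.now().isoformat()])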
4. Proposed system

The model proposed here aims to provide an efficient and accurate way of
performing facial recognition using a combination of a couple of existing
algorithms. We will make use of the OpenCV library for Python and Java to capture
faces in real time and, if needed, convert the images to grayscale. The OpenCV
library also has built-in functions for feature extraction.
The model follows these steps:
1. Image Input
2. Image processing
3. Extract features from image
4. Provide the extracted features as input for classification
5. Classification
6. Identify a face.
1. Image Input
In the first step, we capture an image in real time or make use of a dataset such
as the AT&T image dataset as input. We can either use an IoT-based solution,
making use of a Raspberry Pi 4B and its USB camera, or simply use the front
camera of our device. The OpenCV library is very efficient at capturing and
detecting a face, as shown in the sketch below. This first step can also be called
"face detection".
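A minimal sketch of this step with OpenCV follows; camera index 0 and the
default frontal-face Haar cascade shipped with OpenCV are assumptions.

# Sketch of step 1: capture one frame from the default camera and detect faces.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

camera = cv2.VideoCapture(0)   # assumed: default webcam at index 0
ok, frame = camera.read()
camera.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print("Faces detected:", len(faces))   # each entry is an (x, y, w, h) box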

2. Image processing
The captured images often contain a lot of pixels and thus have many
"dimensions". The OpenCV library does provide methods to convert images to
grayscale, but in this model we will use PCA. PCA has shown an accuracy of 96%
in past work and is a very commonly used dimensionality reduction technique.
Converting an image to grayscale reduces the image's dimensions and makes it
easier to extract features.
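A short sketch of this preprocessing step is given below, assuming `images` is a
list of captured BGR frames; the 92 x 112 target size (matching the AT&T images)
and the 50-component PCA are illustrative choices.

# Sketch of step 2: grayscale conversion, flattening, and PCA reduction.
import cv2
import numpy as np
from sklearn.decomposition import PCA

def preprocess(images, size=(92, 112)):
    flat = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, size)                      # uniform 92 x 112 images
        flat.append(gray.flatten().astype(np.float32) / 255.0)
    return np.array(flat)

# X = preprocess(images)                                   # images: captured frames
# pca = PCA(n_components=50).fit(X)                        # keep the top 50 components
# X_reduced = pca.transform(X)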
3. Extract features from image
The processed grayscale images can then be used for feature extraction. Steps 2
and 3 are often done together, as in both cases we are making use of PCA. The
dataset is divided into three configurations, A, B, and C. In configuration A, the
dataset is split into two parts, with 60% of the data used for learning and 40% for
evaluation. In configuration B, 80% is used for learning and 20% for evaluation.
In configuration C, 90% is used for learning and 10% for evaluation. After the
extraction of features, a machine-learning algorithm is applied to classify the
faces, as sketched below.
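The three evaluation splits can be sketched as follows, assuming `X` holds the
PCA-reduced features and `y` the subject labels from the previous steps.

# Sketch of the three configurations: A (60/40), B (80/20), C (90/10).
from sklearn.model_selection import train_test_split

configurations = {"A": 0.40, "B": 0.20, "C": 0.10}   # fraction held out for evaluation

def split_dataset(X, y, config):
    test_fraction = configurations[config]
    return train_test_split(X, y, test_size=test_fraction,
                            stratify=y, random_state=0)

# X_train, X_test, y_train, y_test = split_dataset(X, y, "B")   # 80% learn / 20% evaluate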

4. Extracted Features
This is the output of the feature extraction step. The extracted features are given
as input to the classifier.

5. Classification
Once the facial data and features are extracted, we apply a classifier to the
dataset. A machine-learning algorithm is used as the classifier; experiments have
been carried out with linear discriminant analysis, multilayer perceptron, naive
Bayes, and support vector machine classifiers. In this model we will make use of
LDA, as it has shown the most accurate results in past studies. A combination of
the other techniques will also be used to figure out the most efficient classifier.
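A small sketch of this comparison follows, assuming the splits from step 3 are
available as X_train, X_test, y_train, and y_test; the specific classifier settings are
illustrative defaults, not tuned values.

# Sketch of step 5: train LDA and compare it with the other candidate classifiers.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "Naive Bayes": GaussianNB(),
    "MLP": MLPClassifier(max_iter=1000),
    "SVM": SVC(kernel="linear"),
}

# for name, clf in classifiers.items():
#     clf.fit(X_train, y_train)
#     print(name, "accuracy:", clf.score(X_test, y_test))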

6. Facial identification
The final step, after face detection and recognition, is to determine whether the
given grayscale image contains a face or not. This concludes the proposed face
recognition approach.
5. References

• https://ieeexplore.ieee.org/document/9137850

• Face Detection and Face Recognition in Android Mobile Applications, article from Informatica, 2019.

• Face Detection and Recognition System for Enhancing Security Measures Using Artificial Intelligence System, Indian Journal of Science and Technology.

• https://ieeexplore.ieee.org/abstract/document/9402553

• https://ieeexplore.ieee.org/abstract/document/9145558

• https://ieeexplore.ieee.org/document/8282685
