Synopsis
1. Introduction
Artificial intelligence is one of the most innovative and actively researched areas of
technology today. It has contributed to many fields such as automation,
robotics, and security. One of its most important contributions is in the field of
facial recognition, which has become one of the most
interesting areas of research in human-computer interaction. Face images are
used throughout the globe to identify persons for citizenship documents,
identification cards, social security cards, intrusion detection, and more.
The process of face recognition includes the segmentation, isolation, and validation of
facial features such as nose size, brow width, and forehead area. Face
recognition is now actively used in smartphones such as Samsung and Apple
devices. The process comprises two major steps: feature
extraction and classification. Raw images often take a long
time to process because they contain a large number of pixels, so it is necessary
to reduce the dimensionality of the images to reduce processing time.
2. Literature review
2. Image processing
The captured images often contain a large number of pixels and thus have many
"dimensions". The OpenCV library provides methods to convert
images to grayscale, which reduces the image's dimensions and makes it
easier to extract features; for further dimensionality reduction, this model
uses PCA, a very commonly used technique that has shown an accuracy of
96% in past work.
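The preprocessing described above can be sketched as follows, using scikit-learn's PCA. This is a minimal illustration, not the project's actual pipeline: the random array stands in for a batch of grayscale face images (in practice these would come from `cv2.imread` plus `cv2.cvtColor`), and the component count of 50 is an illustrative choice.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for 100 grayscale face images of size 64x64.
rng = np.random.default_rng(0)
images = rng.random((100, 64, 64))

# Flatten each image into a 4096-dimensional vector.
X = images.reshape(len(images), -1)

# Reduce each image to 50 principal components ("eigenfaces").
pca = PCA(n_components=50)
X_reduced = pca.fit_transform(X)

print(X.shape)          # 4096 dimensions per image before PCA
print(X_reduced.shape)  # 50 dimensions per image after PCA
```

Each face is now represented by 50 numbers instead of 4096, which is what makes later classification fast.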
3. Extract features from image
The processed grayscale images can then be used for feature extraction. Steps
2 and 3 are often performed together, as both make use of
PCA. The dataset is divided into three configurations, A, B, and C, each
splitting the data into a learning part and an evaluation part:
• Configuration A: 60% of the data for learning, 40% for evaluation.
• Configuration B: 80% of the data for learning, 20% for evaluation.
• Configuration C: 90% of the data for learning, 10% for evaluation.
After the extraction of features, a machine-learning algorithm is applied
to classify the faces.
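The three configurations can be sketched with scikit-learn's `train_test_split`. The feature matrix and labels below are synthetic placeholders for the PCA-reduced face data; only the split ratios come from the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 50))          # 200 samples, 50 PCA features (placeholder)
y = rng.integers(0, 10, size=200)  # 10 subject identities (placeholder)

# Evaluation fractions for configurations A, B, and C.
configs = {"A": 0.40, "B": 0.20, "C": 0.10}

sizes = {}
for name, test_frac in configs.items():
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_frac, random_state=0)
    sizes[name] = (len(X_tr), len(X_te))
    print(f"Configuration {name}: {len(X_tr)} learning, {len(X_te)} evaluation")
```

With 200 samples this yields 120/80, 160/40, and 180/20 learning/evaluation splits for A, B, and C respectively.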
4. Extracted Features
This is the output of the feature-extraction step. The extracted features
are given as input to the classifier.
5. Classification
Once the facial data and features are extracted, we apply a classifier to
the dataset; a machine-learning algorithm serves as the classifier. Prior
work has experimented with linear discriminant analysis (LDA), multilayer
perceptron, naive Bayes, and support vector machines. In this model we
will make use of LDA, as it has shown the most accurate results in past
studies. Combinations of other techniques will also be tried to determine
the most efficient classifier.
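A hedged sketch of the PCA + LDA pipeline described above, using scikit-learn. Since no face dataset is specified here, scikit-learn's bundled digits dataset is used purely as an offline stand-in for the image data; the 80/20 split mirrors configuration B, and the 30-component PCA is an illustrative choice.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Stand-in image data: 8x8 digit images flattened to 64 features.
X, y = load_digits(return_X_y=True)

# Configuration B: 80% learning, 20% evaluation.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.20, random_state=0)

# Dimensionality reduction (PCA) followed by the LDA classifier.
clf = make_pipeline(PCA(n_components=30), LinearDiscriminantAnalysis())
clf.fit(X_tr, y_tr)

acc = clf.score(X_te, y_te)
print(f"LDA accuracy on held-out data: {acc:.2f}")
```

Swapping `LinearDiscriminantAnalysis()` for `MLPClassifier`, `GaussianNB`, or `SVC` in the pipeline gives the comparison against the other classifiers the text mentions.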
6. Facial identification
The final step, after face detection and recognition, is to determine whether a
given grayscale image contains a face. This concludes the
proposed face recognition approach.
3. References
• https://ieeexplore.ieee.org/document/9137850
• https://ieeexplore.ieee.org/abstract/document/9402553
• https://ieeexplore.ieee.org/abstract/document/9145558
• https://ieeexplore.ieee.org/document/8282685