
Innovative Computing Review (ICR)

Volume 2 Issue 2, Fall 2022


ISSN(P): 2791-0024 ISSN(E): 2791-0032
Homepage: https://journals.umt.edu.pk/index.php/ICR


Title: Feature Based Techniques for Face Recognition using Supervised Learning Algorithms Based on Fixed Monocular Video Camera

Author (s): Isra Anwar, Syed Farooq Ali, Jameel Ahmad, Sumaira Zafar, Malik Tahir Hassan

Affiliation (s): University of Management and Technology, Lahore, Pakistan

DOI: https://doi.org/10.32350/icr.22.05

History: Received: September 9, 2022, Revised: November 29, 2022, Accepted: December 10, 2022

Citation: I. Anwar, S. F. Ali, J. Ahmad, S. Zafar, and M. T. Hassan, “Feature based techniques for a face recognition using supervised learning algorithms based on fixed monocular video camera,” UMT Artif. Intell. Rev., vol. 2, no. 2, pp. 82-110, 2022, doi: https://doi.org/10.32350/icr.22.05

Copyright: © The Authors


Licensing: This article is open access and is distributed under the terms of the Creative Commons Attribution 4.0 International License

Conflict of Interest: Author(s) declared no conflict of interest

A publication of
School of Systems and Technology
University of Management and Technology, Lahore, Pakistan
Feature Based Techniques for Face Recognition using
Supervised Learning Algorithms Based on Fixed
Monocular Video Camera
Isra Anwar1, Syed Farooq Ali 2, Jameel Ahmad3, Sumaira Zafar4*, and
Malik Tahir Hassan5
School of Systems and Technology, University of Management and Technology,
Lahore, Pakistan
Abstract- Automatic face recognition has ample significance in biometric research. Recent decades have witnessed enormous growth in this research area. Face-based identification is always considered more expedient as compared to other biometric authentications owing to its uniqueness and wide acceptance. The major contribution of this work is twofold; firstly, it comprises an extension of a manual thresholding feature-based face recognition approach to an automatic feature-based supervised learning face recognition. Secondly, various new feature sets are proposed and tested on several classifiers for 2, 3, 4, and 5 persons. In addition, the use of slope features of facial components, such as the nose, right eye, left eye, and lips, along with other conventional features for face recognition is also a unique contribution of this research. Multiple experiments were performed on the UMT face database. The results demonstrated a comparison of 5 different sets of feature-based approaches on 7 classifiers using the metrics of time efficiency and accuracy. They also depicted that the proposed approaches achieve an accuracy of up to 95.5%.

Index Terms- face identification, face recognition, geometric features, supervised learning, Support Vector Machine (SVM)

I. Introduction

Despite the fact that numerous algorithms have been proposed in the literature, face recognition in unconstrained scenes remains a challenging problem due to changes in appearance caused by variations in pose and expression. Still, face recognition has achieved mammoth popularity during the last few decades in computer vision, pattern recognition, neural networks, neuroscience, cognitive science, image processing, psychology, and physiology, for good reasons.

The major reasons for its wide acceptance are its simplicity, security, and authenticity. Face authentication works regardless of

* Corresponding Author: sumairazafar.sz@gmail.com

any user input in the form of passwords, PIN codes, plastic cards, keys, tokens, and smart card authentication, which makes it simple. As far as security is concerned, all other methods which involve user input are no match for face recognition, as user input in any aforementioned form always comes with the risk of its misuse. This is exemplified by the use of passwords and PINs, which can be guessed or stolen.

Moreover, the solicitude of enhanced physical security systems has enjoyed rapid growth in recent years due to a dramatic increase in the crime rate. Face recognition plays a pivotal role in this regard. It is widely used in numerous applications, such as automatic attendance systems, physical access control, surveillance control, people tagging, gaming, image search, and many more. This work proposes several feature-based approaches of face recognition containing some new features, namely the slope table, along with random projection and regional properties.

There are three major contributions of the current study. Firstly, it proposes a new approach for face recognition which extends the manual thresholding system [1] to a supervised learning algorithm. Secondly, five different sets of new features are suggested for feature matching. Thirdly, a detailed comparison of these approaches is provided using 7 existing classifiers on a dataset of 2, 3, 4, and 5 persons.

The proposed work uses the slopes of different fiducial landmarks of facial components (nose, right eye, left eye, and lips) to build a slope table that is merged with other geometric and holistic features. Finally, features are fed to the classifiers for training and matching. The rest of the paper is organized as follows. In Section 2, a review of the related literature is presented. Section 3 gives an overview of the data set used for experiments. It also incorporates the proposed framework. Experimental evaluation is presented in Section 4. Finally, conclusions are drawn in Section 5.

II. Literature Review

Face recognition has been an active area of research for the last few decades. Initially, a semi-automated system was proposed for face recognition and only major features, such as ears, eyes, mouth, and nose were used for this purpose [2]. The algorithms of face recognition can be divided into two broad categories, namely feature-based techniques and holistic approaches [3]. Feature-based approaches initially identify and

extract the unique facial features and then compute geometric relationships among these distinctive features. Ultimately, some standard statistical pattern recognition techniques are applied to these computed geometric features for matching.

Liu et al. [4] proposed a groundbreaking Gabor-Fisher Classifier (GFC) for face identification with 100% accuracy in face recognition using only 62 features. The Gabor-Fisher Classifier is vigorous to illumination and facial expression variations. Its feasibility was tested on the FERET database containing 600 FERET frontal facial images, corresponding to 200 subjects, captured in manifold illuminations and distinct facial expressions. These features have been categorized as among the most successful face representations, highly rated for rapid extraction and accurate results.

Shen et al. [5] proposed a framework of Gabor wavelets and general discriminant analysis (GDA) which extracts features from the whole face image. This algorithm was also tested on the same FERET database, as well as on the BANCA face database, with a 97.5% recognition rate and 5.96% verification error rate, respectively.

Afterwards, in some variations of these approaches, Gabor features were replaced by a graph-matching strategy [6]–[8] and the histograms of oriented gradients [9]. Campadelli et al. [10] proposed an elastic bunch graph-based technique with automatic fiducial point localization. This technique computes 16 facial fiducial points and the head pose. Their proposed technique normalizes the image and characterizes facial fiducial points with jet vectors to extract the peculiar texture around the jets. Finally, they measured the similarity among manifold jets to recognize the image. This approach was tested on a database of 2500 face foreground images, yielding an accuracy of 93% for face recognition. They claimed to obtain the same performance as the existing elastic bunch graph approaches with manual graph placement.

The discriminant elastic graph matching (DEGM) algorithm by Zafeiriou et al. [7] uses discriminant techniques at all phases of elastic graph matching for face verification. This conventional EGM was extended to a generalized EGM (G-EGM) in order to provide improved performance on globally warped faces [8]. It improved the robustness of node descriptors to globally misaligned faces and introduced warping-compensated

edges, with an incredible accuracy rate for face identification. Despite great accuracy, the technique has not gained adequate attention in many suitable real-time applications due to its expensive computational cost. Albiol et al. [9] applied HOG descriptors instead of Gabor features, which are more robust to illumination changes and insignificant displacements. As a result, higher accuracy was achieved for face verification in comparison to the existing Gabor-EBGM based approaches.

Yang and Zhang [10] employed sparse representation-based classification (SRC) for face recognition. This algorithm codes the input testing images as a sparse linear combination of training samples. Local Gabor features of images were used for SRC and an algorithm was introduced to compute the associated Gabor occlusion dictionary which handles occluded face images with much higher recognition rates. Moreover, the computational cost of sparse coding was intensely reduced by compacting the occlusion dictionary. The major contribution of this work was to achieve high accuracy against the variations in lighting, expression, pose, and occlusion.

Nevertheless, Gabor transformation is considered impractical for numerous real-time face recognition applications owing to its unusual time and space complexity. Yang et al. proposed a new and swift technique called monogenic binary coding for local feature extraction to combat this issue [11]. In this scheme, the original signal is decomposed into three harmonious elements, namely amplitude, orientation, and phase. Monogenic variation and features are encoded in each local region and pixel, respectively.

Another variation of Gabor transformation was proposed by Zhenhua et al., which deals with inter-person and intra-person variations in face images by using the robustness of ordinal measures, along with the distinctiveness of Gabor features [12]. The above authors also computed the histogram and initially generated feature vectors by concatenating statistical distributions, which were further refined by linear discriminant analysis. Finally, the face recognition classifier was trained by practicing cascade learning and greedy block selection methods.

In holistic approaches, global representations of the entire face are used for recognition instead of using some selective local features, such

as Eigenfaces [13], principal component analysis (PCA), linear discriminant analysis (LDA) [14], and independent component analysis-based approaches. These approaches take all images as matrices of intensity values. For every input image, a direct correlation comparison of this image is performed with all other faces in the database for face recognition. Holistic approaches can be further subdivided into two main categories, that is, statistical approaches and artificial intelligence (AI) based approaches. The simplest statistical approach uses intensity values of all facial pixels and performs correlation comparisons with all faces stored in the database. This approach is considered impractical because it is not only computationally expensive but works only in a restricted view, such as equal illumination, face orientation, and size [13]. Another shortcoming of such approaches is that they usually classify data in a space of very high dimensionality. However, several other approaches have been proposed to combat the dimensionality problem by reducing the statistical dimensionality and procuring only meaningful feature dimensions for comparisons. Lu et al. also presented an unsupervised approach for automatic feature learning from raw pixels of facial images to learn hierarchical feature representation, instead of using Gabor features or any other conventional descriptor [14]. They used multiple feature dictionaries to represent different physical characteristics of the unique face region and computed multiple related features from them.

Sirovich et al. [15] represented facial images economically by utilizing PCA [16], [17]. They used Eigen-pictures coordinate space to represent a face efficiently and reconstructed that face using a small collection of Eigen-pictures and their projections. Jian et al. [18] introduced the Spectroface method with 98% accuracy on the Yale and Olivetti face databases. This method combines the Fourier transform and wavelet transform and proves that low-frequency images, decomposed using the wavelet transform, are less sensitive to the variations in facial expression. This method uses transformed images, invariant to scale, translation, and on-the-plane rotation. The outcome of the study showed that the wavelet transform delivered robust representations under illumination variations and captured substantial facial features with low computational complexity.

Following these considerations, an efficient scheme for face

recognition by applying wavelet sub-band representation was proposed by Zhang et al. [19]. Their method used two-dimensional (2D) wavelet sub-band coefficients to represent the face images. They also used a personalized classification method for recognition based on kernel associative memory (KAM) models. This method was tested on three standard face identification datasets, namely XM2VTS, FERET, and ORL. The results showed that this method outperformed the existing approaches in terms of accuracy. They also utilised KAM in combination with the Gabor wavelet network to construct a unified structure of Gabor wavelet associative memory (GWAM) [20]. This method was tested using three famous databases and reported a very high performance as compared to other techniques.

Kwak et al. [21] systematically formed an enhancement of generic independent component analysis called FICA by supplementing it with Fisher LDA and presenting it along with its underlying architecture. This method improved well-separated classification rates significantly and also dealt with illumination and facial expression invariants.

It is generally accepted that low-resolution representation of face images reduces the performance of face recognition. To combat this issue, Huang et al. [22] proposed a super-resolution approach that maps coherent features nonlinearly. It improves the recognition of nearest-neighbor classifiers for low-resolution face images. Initially, coherent subspaces are constructed by applying canonical correlation analysis among low-resolution face images and PCA-based high-resolution features. Then, radial basis functions are used to build the nonlinear mapping between high and low-resolution features. Lastly, super-resolved coherent features are constructed and fed to a simple NN classifier for face identification.

The computational complexity of these techniques was further reduced by Azad et al. [23]. In their proposed face recognition scheme, instead of taking all pixels of the whole image, only the face was identified and the PCA technique was applied to that particular region in order to extract the feature vectors. Then, multiclass support vector machine (SVM) classifiers were applied to these features for identification and classification. This technique achieved a 98.45% accuracy rate on the FEI database.


Zhang et al. also presented a novel and efficient low-resolution face recognition algorithm known as coupled marginal discriminant mappings (CMDM) [24]. CMDM takes data points in the low-resolution and the original high-resolution feature spaces and maps them into a unified space. It constructs samples closely from the same class and separately from a distinct class with a great margin. This method avoids the dimensional mismatch problem and fills the data gap of different resolutions. CMDM achieved outstanding performance on the AR and FERET face databases as compared to other up-to-date low-resolution face recognition techniques.

In 2016, Yihang et al. [25] proposed an approach that deals with multimodal face and ear recognition in unconstrained scenes by using the fusion of both spherical depth map and spherical texture map characteristics. They initially used sparse representation for recognition and then the final results were refined using Bayesian decision-level fusion. Recently, Yongjie et al. [26] investigated low-resolution face recognition with one image per person. A cluster-based regularized simultaneous discriminant analysis (C-RSDA) method was proposed based on SDA. C-RSDA computes cluster-based scatter matrices from unsupervised clustering. The between-class scatter matrices are regularized with inter-cluster scatter matrices and the within-class matrices with intra-cluster scatter matrices. As this approach allows exploiting more variations from the narrow training samples, the overfitting and singularity problems are dealt with effectively. The effectiveness of this algorithm has been proven by extensive testing on low-resolution face images captured in both controlled and uncontrolled environments.

Feature-based approaches using classifiers show promising results and the research community remains convinced that these learning-based techniques can provide better results in the future. This paper is an extension of the existing manual threshold feature-based approach to an automatic supervised learning-based approach for face recognition [1]. In this work, different sets of new features are suggested for feature matching. Each approach was tested on seven existing classifiers and the results are reported in Section 5. These classifiers include neural network (Multilayer Perceptron (MLP)), Naive Bayes, J48, SVM, AdaBoost, Decision Tree, Decision Table, and Nearest-Neighbor-like Algorithm (NNge) [27]–[29]. Features calculated in Approach 5

outperformed all other approaches in terms of accuracy with more than 90% true positive results. Sagonas et al. [30] explored the problem of joint and facial contaminated data. They proposed the JIVE and Robust JIVE techniques where the 1-norm was employed on the data. They performed extensive 2D and 3D face age progression experiments where they found the RJIVE method to be the most reliable technique, among others.

Wang et al. [31] offered a recurrent face aging (RFA) GRU framework with triple layers. The GRU triple layer determines face identity better than a bi-layer combination of RNNs. However, during their testing, there was a lack of age input in their experiments, which may be catered to in future work. Singhal and Srivastava [32] carried out a survey in which different databases and approaches were explored to rectify the problems faced during the face identification process. They explored many facial databases, real-time images, and videos, so that effective machine-learning approaches could be applied to improve facial identification processes. Zhu et al. [33] proposed a cascaded CNN approach instead of using traditional methods. They used a face profiling approach to obtain abundant samples during training and a cost function, OWPDC, was also introduced to improve the parameters' priority. Mollahosseini et al. [34] trained neural networks with multiple scenarios to differentiate their performance between labelled and non-labelled data. As per their observation, noise estimation may apply to a few expressions.

Hang et al. proposed an approach in which detection traced faces in a frame, where each face is aligned according to the canonical view to crop it to the normal pixel size [35].

Govind et al. observed that maximum performance could be gained by using masked faces in training classifiers. Based on empirical analysis, it can be improved with new values and variables [36]. Marcus et al. also conducted an analysis on facial recognition applications [37].

III. Methodology

The proposed approaches were evaluated on a self-generated UMT face dataset¹ consisting of 5 subjects. The UMT face database (DB) contains 150 face images, with

¹ https://sites.google.com/site/farooq1us/dataset

30 face images of each subject, marking the variations in pose and expression. This data was used with 10-fold cross-validation. Fig. 1 and Fig. 2 show different frames that contain various expressions and different poses.

Fig. 1. Key frames from UMT database

BoRMaN was used to detect face images and compute the coordinates of 22 facial points which act as inputs to the proposed algorithm [38]. Table I provides a vivid description of the dataset. The current authors assumed a single camera setup to capture static colored images of 500*375 resolution with various poses and expressions, although illumination remained constant.

In this study, various new feature sets are proposed and used with existing classifiers for face recognition, instead of using a manual threshold system for matching [1]. The proposed framework works in two major phases, namely training and testing. Each phase consists of a sequence of four steps. The initial three steps of both phases are similar, starting from the identification and localization of the face region in the

2D face image. The second step is the extraction of distinctive facial components, such as mouth, eyes, and nose, as well as other facial fiducial points calculated by using BoRMaN. In the third step, the geometrical relationship among these facial points is estimated. Thus, a vector of geometric features from the input facial image is computed as shown below.

Fig. 2. UMT database with pose and expression variations

FV(Iᵢ) = FeatureExtraction(If(Iᵢ))  (1)

Here, If(Iᵢ) denotes the face pixels of the i-th training image for Approach 1, Approach 2, and Approach 3, and FV(Iᵢ) is the feature vector of the i-th training image. The proposed methodology uses five different approaches to calculate features and these features are used with 7 classifiers including Neural Network (Multilayer Perceptron (MLP)), Naive Bayes, SVM, J48, AdaBoost, Decision Tree, Decision Table, and Nearest-Neighbor-like Algorithm (NNge) [39], [40]. In the final step of the training phase, these extracted features are employed by the classifiers for training, as shown below.

Trained Classifier = Classifier(FV(Iᵢ))  (2)

where 1 ≤ i ≤ N, N = 5060

Table I
UMT Face Dataset Description

Properties                   Description
No. of subjects              5
No. of images/videos         150
Videos/Static                Static
Single/Multiple face         Single
Gray Scale/Color             Color
Pose/Various Pose            Various Pose
Facial/Various Expression    Various Expression
Illumination                 Constant Illumination
Resolution                   500*375

On the other hand, in the final step of the face recognition phase, the feature vector (FV) of a given test image I_test is employed by the trained classifier for matching. The trained classifier categorizes the image into 6 categories, namely Person 1, Person 2, Person 3, Person 4, Person 5, and no match.

Category_j (1 ≤ j ≤ 6) = Trained Classifier(FV(I_test))  (3)

Section 5 delineates the discussion regarding the results of all classifiers. All aforementioned strides are well explained in the sub-sections below. Fig. 3 shows the block diagram of the proposed algorithm.

Fig. 3. Block diagram of proposed framework
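The two-phase flow of Eqs. 1-3 can be sketched in a few lines of Python. This is a minimal, hypothetical stand-in: `feature_extraction` below is an illustrative placeholder rather than the paper's BoRMaN-based features, and a nearest-centroid matcher with an assumed distance threshold stands in for the seven trained classifiers.

```python
import math

def feature_extraction(face_pixels):
    # Eq. 1 stand-in: map the face pixels of an image to a feature vector.
    # These three summary statistics are hypothetical example features.
    return [sum(face_pixels) / len(face_pixels),
            max(face_pixels), min(face_pixels)]

def train_classifier(training_images, labels):
    # Eq. 2: train a classifier on FV(I_i) for all training images.
    # Stand-in model: one feature-space centroid per person.
    grouped = {}
    for img, label in zip(training_images, labels):
        grouped.setdefault(label, []).append(feature_extraction(img))
    return {label: [sum(col) / len(col) for col in zip(*vecs)]
            for label, vecs in grouped.items()}

def categorize(model, test_image, threshold=10.0):
    # Eq. 3: assign FV(I_test) to one of Person 1..5, or "no match" when
    # no centroid lies within the (assumed) distance threshold.
    fv = feature_extraction(test_image)
    best, best_dist = "no match", threshold
    for label, centroid in model.items():
        d = math.dist(fv, centroid)
        if d < best_dist:
            best, best_dist = label, d
    return best
```

With 30 images per subject, as in the UMT dataset, the same calls would simply be wrapped in the 10-fold cross-validation loop described above.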


A. Facial Fiducial Points Detection

The proposed framework employed BoRMaN to detect face images and extract the facial fiducial points for each frame. BoRMaN iteratively uses support vector regression (SVR) and local appearance-based features to provide an initial prediction of 22 points of facial components, including nose, lips, chin, mouth, eyebrows, and eyes. The distribution of these 22 facial fiducial points is shown in Fig. 4. Ten points are used to mark both eyes, three collectively for the nose, four for lips, four points represent eyebrows, and one point is used to spot the chin. Then, it computes ratios, angles, and areas of different invariant features using these facial fiducial points and formulates several approaches to build various novel sets of features used in the testing and training of classifiers.

Fig. 4. Facial fiducial points are detected and shown on the facial image at the right side by applying BoRMaN on the left image

B. Features Reckoning

Facial fiducial points computed in the previous phase were used to calculate different features, using several approaches, which were given to the classifiers for training and testing. Identification labels of these features were passed as input to a classifier for training and testing purposes. Face recognition is based on the output of the classifiers. If the values of the features determine the difference as below the threshold during the testing phase, then as per the matching frame, the classifiers are automatically classified. The proposed five different methods to extract features from 2D frames are explained below. The first four techniques differ by feature collection, whereas the fifth approach is Approach 4 in combination with the slope table.

1) Approach 1: Random projection features (RPF)

This is a holistic feature extraction approach that is time efficient and based on features extracted from the facial image and a sparse matrix. Initially, features are extracted effectively using a sparse measurement matrix and then multiplied with the image features. This model employs random projection, which preserves the structure of an image. A random matrix Z ∈ ℝ^(n×m) whose rows have unit length projects data from the

high-dimensional image feature space x ∈ ℝ^m to a low-dimensional space v ∈ ℝ^n, v = Zx. Fig. 5 illustrates the feature-based face recognition model using random projection.

z_ij = −1 if val < 0; +1 if val > 0; 0 if val = 0  (4)

In this work, a sparse random measurement matrix adopted for efficient dimensionality reduction has entries as shown in Eq. 4.

2) Approach 2: Histogram of oriented gradient features (HOGF)

This appearance and shape-based approach calculates the features based on the gradients of the given facial image. The gradient is a generalization of the usual concept of derivative to a function of two variables; for the given image it is represented as F(f_x, f_y), whose components are the derivatives in the horizontal and vertical directions. It is, thus, a vector-valued function. Initially, the gradient vector of the given image is calculated as shown in Eq. 5.

∇F(x, y) = (∂F/∂x, ∂F/∂y)  (5)

The gradient vector was used to compute the magnitude of gradients using Eq. 6.

M = √(f_x² + f_y²)  (6)

Fig. 5. Explaining random projection that converts image features into the compressed vector using sparse measurement matrix
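The sign-quantized projection matrix of Eq. 4 and the mapping v = Zx can be sketched as follows. This is a sketch under assumptions: the uniform distribution of `val` is an illustrative choice (the extract does not specify it), and rows are normalized afterwards to satisfy the stated unit-length property.

```python
import math
import random

def sparse_random_matrix(n, m, seed=0):
    # Eq. 4: draw a value and keep only its sign, giving entries in {-1, 0, +1}.
    rng = random.Random(seed)
    z = []
    for _ in range(n):
        row = []
        for _ in range(m):
            val = rng.uniform(-1.0, 1.0)
            row.append(-1 if val < 0 else (1 if val > 0 else 0))
        # The text states that the rows of Z have unit length, so normalize.
        norm = math.sqrt(sum(e * e for e in row)) or 1.0
        z.append([e / norm for e in row])
    return z

def project(z, x):
    # v = Zx: project an m-dimensional feature vector down to n dimensions.
    return [sum(z_ij * x_j for z_ij, x_j in zip(row, x)) for row in z]
```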
In the next step, a histogram of this magnitude is calculated to divide the entire range of values into a series of intervals. This reduces the feature vector size based on frequency. The histogram of magnitude is also considered in stating distinctive values for each image. The overall process of feature calculation using a histogram of the oriented gradient is illustrated in Fig. 6.

Fig. 6. Feature extraction by computing histogram of oriented gradients using a facial image as input
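A minimal sketch of the Approach 2 pipeline (Eqs. 5-6): forward-difference gradients, their magnitude, and a fixed-bin histogram. The forward-difference scheme and the bin count are illustrative assumptions; this extract does not specify them.

```python
import math

def gradient_magnitude(image):
    # Eqs. 5-6: forward-difference gradients f_x, f_y of a grayscale image
    # (nested list of intensities, at least 2x2) and their magnitude M.
    h, w = len(image), len(image[0])
    mags = []
    for y in range(h - 1):
        for x in range(w - 1):
            fx = image[y][x + 1] - image[y][x]  # horizontal derivative
            fy = image[y + 1][x] - image[y][x]  # vertical derivative
            mags.append(math.sqrt(fx * fx + fy * fy))  # Eq. 6
    return mags

def magnitude_histogram(mags, bins=8, max_val=None):
    # Divide the range of magnitudes into a series of intervals; the bin
    # counts form the reduced feature vector described for Approach 2.
    if max_val is None:
        max_val = max(mags) or 1.0
    hist = [0] * bins
    for m in mags:
        idx = min(int(bins * m / max_val), bins - 1)
        hist[idx] += 1
    return hist
```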
3) Approach 3: Regional properties features (RePF)

In this approach, the regional properties template is applied over image pixel values. A regional property template is a set of different scalar regional properties, such as area, Euler number, eccentricity, extent, major axis length, orientation, solidity, equiv-diameter, max intensity, minor axis length, mean intensity, and min intensity. Here, area identifies the actual number of pixels in the region and the Euler number specifies the total number of objects in the image minus the total number of holes. The proposed regional property template uses 8-connectivity to compute the Euler number measurement. Equiv-diameter determines the diameter of a circle with the same area as the region and is calculated using Eq. 7.

EquivDiameter = √(4 · Area / π)  (7)

The eccentricity property is used to measure the eccentricity of the ellipse that has the same second moments as the region. Eccentricity is the ratio of the distance between the foci of the ellipse to its major axis length. Its value is between 0 and 1. In the regional properties template, the ratio of pixels in the region to pixels in the total bounding box is computed using Eq. 8.

Extent = Area / Area(bounding box)  (8)
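The scalar properties defined by Eqs. 7 and 8 can be computed directly from a binary region mask. A minimal sketch, assuming a non-empty 0/1 nested-list mask; the Euler number, eccentricity, and intensity-based properties of the full template are omitted for brevity.

```python
import math

def region_properties(mask):
    # mask: nested list of 0/1 pixels marking a non-empty region.
    area = sum(sum(row) for row in mask)  # number of pixels in the region
    ys = [y for y, row in enumerate(mask) for v in row if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    # Bounding-box area from the extreme row/column indices.
    bbox = (max(ys) - min(ys) + 1) * (max(xs) - min(xs) + 1)
    return {
        "area": area,
        "equiv_diameter": math.sqrt(4 * area / math.pi),  # Eq. 7
        "extent": area / bbox,                            # Eq. 8
    }
```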

The major and minor axis lengths specify the lengths of the major and minor axes of the ellipse that has the same normalized second central moments as the region, respectively. Orientation states the angle between the x-axis and the major axis of the ellipse that has the same second moments as the region. The solidity of the region is calculated as the proportion of the pixels in the convex hull that are also in the region, as shown below in Eq. 9.

Solidity = Area / ConvexArea  (9)

Fig. 7. Feature extraction through regional properties

4) Approach 4: Ratio and angles of facial image (RAF)

In this approach, a geometric feature set is constructed as a combination of the ratios of length to width, angles among triangles formed from the facial points, and the ratios of the areas of triangles, as given by BoRMaN. After the detection of these facial points, the ratio of length to width of lips and eyes, the ratios of the areas of triangles, and the angles of different triangles are computed for face recognition. The details of these features are presented in this section.

The ratios of length to width of these components (left and right eyes and upper and lower lips) are computed instead of using the length or width of lips, left eye, and right eye, to ensure that the scale is invariant. The length of the left eye and right eye or the width of the left eye and right eye alone could be taken as a feature but could not be considered scale constant. This is why the ratio of the length and width of lips and eyes (both left and right) is considered as a feature, as shown in Eq. 10.

Ratio = Lip_length / Lip_width  (10)

Here, lip length is calculated using the Euclidean distance formula between points 5 (x5, y5) and 6 (x6, y6) marked in Fig. 8(a), as shown in Eq. 11.

Lip_length = √((x6 − x5)² + (y6 − y5)²)  (11)

Similarly, lip width is calculated using the Euclidean distance formula between points 7 (x7, y7) and 8 (x8, y8) marked in Fig. 8(a), as shown in Eq. 12.

Lip_width = √((x8 − x7)² + (y8 − y7)²)  (12)
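The distance-based ratio features of Eqs. 10-12 reduce to a few lines. A minimal sketch, with fiducial points stored in a dict keyed by the 1-based numbering of Fig. 8(a); the example coordinates are hypothetical.

```python
import math

def lip_ratio(points):
    # points: dict mapping fiducial point numbers to (x, y) coordinates.
    lip_length = math.dist(points[5], points[6])  # Eq. 11
    lip_width = math.dist(points[7], points[8])   # Eq. 12
    return lip_length / lip_width                 # Eq. 10, scale invariant
```

The eye ratios are computed the same way with points 1, 2 (length) and 3, 4 (width).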

The ratio of length to width of the left eye in Fig. 8(a) is computed from the eye's length and width, which are calculated with the Euclidean distance formula using points 1 (x1, y1), 2 (x2, y2) and points 3 (x3, y3), 4 (x4, y4), respectively. The ratio of length to width of the right eye is computed in a similar manner.

Fig. 8. Ratios of angles of triangles


The proposed method uses the ratios of the areas of triangles, since a ratio is scale-invariant. The three triangles marked in Fig. 8(b), ∆2,6,17, ∆2,6,21, and ∆18,20,17, are used to calculate these ratios with the formula given in Eq. 13.

Ratio_AreaOfTriangles = Area_Triangle2 / Area_Triangle1 (13)

The area of each triangle is computed using Heron's formula, as shown in Eq. 14.

Area_Triangle = √(s(s − a)(s − b)(s − c)) (14)

where a, b, and c are the lengths of the sides of the triangle, computed by the Euclidean distance, and s is the semi-perimeter of the triangle, calculated in Eq. 15.

s = (a + b + c) / 2 (15)

The values of the angles of the different triangles marked in Fig. 8(b) are also used as features for face recognition.
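The triangle area features (Heron's formula, Eqs. 14-15) and the interior angles computed next (law of cosines, Eqs. 16-18) can be sketched as follows; the function names are illustrative, not the paper's.

```python
import math

def triangle_area(a, b, c):
    """Heron's formula (Eq. 14) with the semi-perimeter s (Eq. 15)."""
    s = (a + b + c) / 2.0
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def triangle_angles(a, b, c):
    """Interior angles in degrees via the law of cosines (Eqs. 16-18)."""
    alpha = math.degrees(math.acos((b * b + c * c - a * a) / (2 * b * c)))
    beta  = math.degrees(math.acos((a * a + c * c - b * b) / (2 * a * c)))
    gamma = math.degrees(math.acos((a * a + b * b - c * c) / (2 * a * b)))
    return alpha, beta, gamma

area = triangle_area(3.0, 4.0, 5.0)
alpha, beta, gamma = triangle_angles(3.0, 4.0, 5.0)
```

For the 3-4-5 right triangle, the area is 6 and the angle opposite the longest side is 90°; the three angles always sum to 180°.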

These angles are calculated using the fundamental cosine formula, as described below in Eqs. 16-18.

α = cos⁻¹((b² + c² − a²) / 2bc) (16)

β = cos⁻¹((a² + c² − b²) / 2ac) (17)

γ = cos⁻¹((a² + b² − c²) / 2ab) (18)

The angles of eight triangles (∠2,6,17, ∠2,6,21, ∠2,6,22, ∠17,20,22, ∠6,2,17, ∠6,2,21, ∠6,2,22, and ∠17,18,22) among the points marked in Fig. 8(a, b) are computed using the formulas in Eqs. 16-18.

5) Approach 5: Ratios, angles, and slopes of facial image (RASF)

In this approach, the same geometric features are used as in RAF, discussed in Section 3.2.4. However, this approach also uses the slope features of a facial image along with the ratios of length to width, the areas, and the angles. The ratios and angles are computed using Eqs. 13-18. For the slope features, the proposed framework introduces a novel technique in which a slope table is used as a feature alongside the other features for face recognition. A slope table is constructed by calculating the slopes of the different fiducial landmarks of the facial components (left eye, right eye, nose, and lips). Initially, the centroid of each facial component (obtained from BoRMaN) is computed. Then, the Euclidean distance is computed between each centroid and the rest of the component's points. Finally, slope tables are computed for the left and right eyes by calculating the slope of each point of the left eye and right eye, respectively. Similarly, slope tables of the lips and nose are computed from their respective fiducial points. Slope computation is shown in Fig. 9. The centroid of a facial component is calculated using Eq. 19.

C = (ΣXi / n, ΣYi / n) (19)

In the above equation, (Xi, Yi) are the coordinates of the i-th point of a facial component. Px and Py are the Euclidean distances from a boundary point to the centroid along the x-axis and y-axis, respectively. P represents the line from the boundary point to the centroid, so that Px is the x-component and Py is the y-component of the line P. Px and Py are used to compute the slope, as shown in Eq. 20.

m = tan θ = Py / Px (20)

Fig. 9. Slope table computation
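Assuming each facial component is given as a list of (x, y) fiducial points, the slope-table construction (Eqs. 19-20) can be sketched as below. Returning the angle θ rather than tan θ is a small deviation chosen here to avoid division by zero for points directly above or below the centroid.

```python
import numpy as np

def slope_table(points):
    """Slope table for one facial component: centroid (Eq. 19),
    then the slope of each fiducial point about it (Eq. 20)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)      # C = (sum(Xi)/n, sum(Yi)/n)
    px = pts[:, 0] - centroid[0]     # x-component of the line P
    py = pts[:, 1] - centroid[1]     # y-component of the line P
    # m = tan(theta) = Py / Px; return theta in degrees instead of the raw ratio
    return np.degrees(np.arctan2(py, px))

# Four illustrative eye landmarks placed symmetrically around their centroid
angles = slope_table([(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)])
```

For these symmetric points the centroid is the origin and the table reads 0°, 90°, 180°, and −90°; in practice one such table would be built per component (left eye, right eye, nose, lips).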


Several classifiers are trained and tested using these facial features.

C. Training and Testing

All the features calculated in Section 4.2 were employed as input to train seven classifiers. Then, the features of the test image were fed to the trained classifiers for face matching. These classifiers included SVM, AdaBoost, a Nearest-Neighbor-like algorithm (NNge), Naive Bayes, Decision Table, J48, and a Neural Network (Multilayer Perceptron, MLP) [38]. In the matching phase, the classifiers automatically categorized the input image into, in the extreme case (the 5-person problem), 6 categories: person 1, 2, 3, 4, or 5, or not matched. Similarly, for the 4-person problem, the classifiers categorized the input into 5 categories, person 1, 2, 3, or 4, or not matched, and so on.

IV. Results and Discussion

The proposed approach was tested with seven classifiers for accuracy and scalability on a wide variety of images using the five feature-based approaches. The results, in terms of percentage accuracy (with a 90:10 training:testing percentage split) and time efficiency, are shown in Table II and Table III, respectively.
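The train/match phase above can be sketched end to end. The snippet below is not the paper's setup: it substitutes a minimal 1-nearest-neighbor matcher for the seven classifiers and uses synthetic feature vectors in place of the real geometric features, purely to show the 90:10 split and the matching step.

```python
import numpy as np

def nn_match(train_X, train_y, test_X):
    """Assign each test feature vector the label of its nearest
    training vector (Euclidean distance) -- a stand-in matcher."""
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    return train_y[np.argmin(d, axis=1)]

# Synthetic 8-dimensional "geometric" feature vectors for a 2-person problem
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=0.0, scale=0.1, size=(50, 8)),   # person 1
               rng.normal(loc=5.0, scale=0.1, size=(50, 8))])  # person 2
y = np.repeat([1, 2], 50)

idx = rng.permutation(len(X))
split = int(0.9 * len(X))            # 90:10 training:testing split
train, test = idx[:split], idx[split:]

pred = nn_match(X[train], y[train], X[test])
accuracy = float(np.mean(pred == y[test]))
```

With such well-separated synthetic clusters the matcher is perfectly accurate; the paper's actual accuracies for the real classifiers are those reported in Table II.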

Table II and Table III show the results for the features computed using all the approaches elaborated above, applied to different numbers of persons. To test these approaches, the dataset was divided into sub-parts. Hence, all the approaches were tested on 2 persons, then 3, 4, and 5 persons, which helped in understanding the dimensionality and generalization of the approaches.

Table II shows that, in terms of percentage accuracy, Approach 5 (RASF) with NNge is more effective than the remaining four techniques and six classifiers for 2 persons. However, for 3 and 5 persons, RASF with the SVM classifier performs better, while for 4 persons the best approach remains RASF, but with the MLP classifier. Moreover, the average percentage accuracy of Approach 4 (RAF) over all classifiers for 2 and 5 persons is remarkably better than that of the remaining approaches, while the average percentage accuracy of RASF is best for 3 and 4 persons. Hence, it can be summarized that RAF and RASF show a clearly generalized behavior that outperforms all the other approaches, irrespective of the number of persons. A possible reason for the better performance of RAF and RASF is that both approaches use geometrical features and the motion vector-based interpolation technique. RASF performs best because of the addition of the slope features to the RAF feature set, so it can be concluded that the slope table features are very useful for face recognition, achieving the highest accuracy on all sub-parts of the dataset.

The average percentage accuracy of the classifiers over all approaches shows that the Support Vector Machine (SVM) classifier outperforms the rest, including the Neural Network (Multilayer Perceptron, MLP), Decision Table, AdaBoost, Naive Bayes, J48, and NNge. This is because SVM has a regularization parameter that helps avoid overfitting and uses the kernel trick, through which expert knowledge about the problem can be built in by engineering the kernel. The second- and third-best average percentage accuracies are achieved by the Neural Network (MLP) and Naive Bayes classifiers, respectively (see Table II). A likely ground for the improved percentage accuracy of these classifiers is their ability to handle missing values, known as the completeness property. This property assures the scrutiny of all possible

combinations of condition values. Similarly, RAF showed the second-best percentage accuracy for the 3- and 4-person problems, and RASF reported the second-best results for the 2- and 5-person problems, on average over all classifiers.

Fig. 10(a-e) shows pictorial representations of the results of the applied approaches, namely Random Projection (RPF), Histogram of Oriented Gradients (HOGF), Regional Properties (RePF), Ratio and Angles (RAF), and Ratio, Angles, and Slope (RASF), in terms of percentage accuracy for the 2-person, 3-person, 4-person, and 5-person problems.
Table II
Results of Random Projection (RPF), Histogram of Oriented Gradients (HOGF), Regional Properties (RePF), Ratio and Angles (RAF), and Ratio, Angles, and Slope (RASF) in terms of Percentage Accuracy with 10-fold Cross-Validation

Approach  Person  SVM    AdaBoost  NNge   N-B    D-Table  J48    MLP    Average
RPF       1       71.6   75        71.6   76     78       78     71.6   74.5
RPF       2       68.8   67.5      60.7   63.7   75       76.7   68     68.6
RPF       3       56.3   61.5      58.6   59.6   68       70.2   64     62.6
RPF       4       52.8   58.7      51.6   52.8   64       68.6   56     57.7
HOGF      1       80.3   81.6      81.6   79.7   81.7     85     81.6   81.6
HOGF      2       78.5   76.6      76.5   70.8   72.2     72.2   72.2   74.1
HOGF      3       75.5   64.7      62.5   68.5   68       66.7   67.7   67.6
HOGF      4       63.3   58        58     62.7   64       58     59.3   60.4
RePF      1       86.6   81.7      80     81.67  83.3     80     81.6   82.1
RePF      2       79     76.6      73.5   74.5   68.9     78     73.3   74.8
RePF      3       63.8   68.5      68.6   67.5   55.8     68     68.7   65.8
RePF      4       58.6   60        58.3   62.7   53.3     61     56.6   58.6
RAF       1       90.6   91.6      91.6   95     91.6     81.6   93.3   90.7
RAF       2       91.1   86.6      93.3   94.4   78.8     86.6   94.4   89.3
RAF       3       85     75.8      85     84.1   67.5     75     88.3   80.1
RAF       4       90.6   63.6      82.6   82     62       68     91.3   77.1
RASF      1       90.3   91.6      93.3   91.6   91.6     81.6   88.3   89.7
RASF      2       95.5   86.6      91.1   87.7   82.8     86.6   94.4   89.3
RASF      3       90     75.7      88.6   78.3   70.3     79.6   90.8   81.9
RASF      4       92     68.8      79.3   75.3   63.6     68     90.6   76.8
Average           78     73.5      75.3   75.4   72       74.4   77.6

Naive Bayes = N-B, Decision Table = D-Table


Table III shows the time taken to build the model for the features computed and tested using the above-mentioned approaches. It is clear from the results that Naive Bayes is the most time-efficient classifier and AdaBoost the second most time-efficient. On the other hand, the Neural Network (MLP) consumes the most time of all the classifiers, whereas SVM overtakes all the other classifiers by achieving the highest accuracy without compromising time efficiency.
Table III
Time Taken to Build a Model for Random Projection (RPF), Histogram of Oriented Gradients (HOGF), Regional Properties (RePF), Ratio and Angles (RAF), and Ratio, Angles, and Slope (RASF) with Different Classifiers

Approach  Person  SVM   AdaBoost  NNge   N-B   D-Table  J48    MLP
RPF       1       10    30        10     0     40       10     3680
RPF       2       30    10        30     0     40       40     5330
RPF       3       40    10        30     0     60       50     7390
RPF       4       80    10        60     10    80       80     10050
HOGF      1       10    10        10     0     10       0      90
HOGF      2       20    0         0      0     10       0      150
HOGF      3       40    0         10     0     20       10     620
HOGF      4       50    0         0      0     20       0      350
RePF      1       10    10        10     10    40       10     2150
RePF      2       20    30        10     0     60       10     3080
RePF      3       30    10        30     0     60       20     4090
RePF      4       60    10        40     0     110      20     5830
RAF       1       10    30        10     0     60       10     4470
RAF       2       20    50        20     0     120      10     6440
RAF       3       40    10        40     0     160      30     8760
RAF       4       70    10        60     0     270      50     9940
RASF      1       60    130       220    40    330      80     2190
RASF      2       30    40        20     0     80       20     3030
RASF      3       40    10        30     0     100      20     4110
RASF      4       70    10        30     0     140      80     5540
Average           37    21        33.5   3     90.5     27.5   4364.5

Naive Bayes = N-B, Decision Table = D-Table
Fig. 11(a-e) shows pictorial representations of the results of the applied approaches, namely Random Projection (RPF), Histogram of Oriented Gradients (HOGF), Regional Properties (RePF), Ratio and Angles (RAF), and Ratio, Angles, and Slope (RASF), in terms of execution time for the 2-person, 3-person, 4-person, and 5-person problems on the seven aforementioned classifiers.

Fig. 10. Graph showing the results of Random Projection (RPF), Histogram
of Oriented Gradients (HOGF), Regional Properties (RePF), Ratio and
Angles (RAF), Ratio, Angles, and Slope (RASF) in terms of percentage
accuracy with 10-fold cross-validation

Fig. 11. Graph showing the pictorial representations of the results of Random Projection (RPF), Histogram of Oriented Gradients (HOGF), Regional Properties (RePF), Ratio and Angles (RAF), and Ratio, Angles, and Slope (RASF) in terms of execution time for the 2-person, 3-person, 4-person, and 5-person problems on the seven aforementioned classifiers


A. Conclusion and Future Work

The proposed framework experiments with various features for the face identification problem and suggests the use of BoRMaN and slopes. The results clearly show that feature-based face identification using BoRMaN and the slopes is the best among all the discussed approaches. BoRMaN returns 22 fixed facial points that are used to calculate features such as ratios and angles. The framework is scale-invariant; however, it does not perform well in dim light. In the future, a system should be developed that also works in gloomy light. Besides other extensions, the availability of a large dataset repository is an important issue; hence, more repositories should also be developed.

Acknowledgements

We would like to thank Touqir Attique, Mohsin Ijaz, Khawaja Ubaid Ur Rehman, Junaid Jabbar Faizi, Hassaam Zahid, Muhammad Maaz Aslam, and Wasiq Maqsood, who voluntarily carried out the data generation for this paper. Our gratitude goes to all the anonymous reviewers for their effective input. This research work was supported by the National ICT R&D under grant no. NICTDF/NGIRI/2013-14/Crops/2, and by the University of Management and Technology, Lahore, Pakistan.

References

[1] I. Anwar, S. Nawaz, G. Kibria, S. F. Ali, M. T. Hassan, and J.-B. Kim, "Feature based face recognition using slopes," in 2014 Int. Conf. Control, Automation and Information Sciences (ICCAIS), IEEE, Gwangju, Korea (South), Dec. 2–5, 2014, pp. 200–205, doi: https://doi.org/10.1109/ICCAIS.2014.7020558

[2] W. W. Bledsoe, "Man-machine facial recognition," Rep. PRi, vol. 22, 1966.

[3] S. B. Ahmed, S. F. Ali, J. Ahmad, M. Adnan, and M. M. Fraz, "On the frontiers of pose invariant face recognition: A review," Artif. Intell. Rev., vol. 53, no. 4, pp. 2571–2634, Apr. 2020, doi: https://doi.org/10.1007/s10462-019-09742-3

[4] C. Liu and H. Wechsler, "Gabor feature based classification using the enhanced fisher linear discriminant model for face recognition," IEEE Trans. Image Process., vol. 11, no. 4, pp. 467–476, Apr. 2002, doi: https://doi.org/10.1109/TIP.2002.999679

[5] L. Shen, L. Bai, and M. Fairhurst, "Gabor wavelets and general discriminant analysis for

School of System and Technology


105
Volume 2 Issue 2, Fall 2022
Feature Based Techniques for...

face identification and verification," Image Vision Comput., vol. 25, no. 5, pp. 553–563, May 2007, doi: https://doi.org/10.1016/j.imavis.2006.05.002

[6] S. Arca, P. Campadelli, and R. Lanzarotti, "A face recognition system based on local feature analysis," in Int. Conf. Audio- and Video-Based Biometric Person Authentication. Springer, 2003, pp. 182–189.

[7] S. Zafeiriou, A. Tefas, and I. Pitas, "The discriminant elastic graph matching algorithm applied to frontal face verification," Pattern Recognition, vol. 40, no. 6, pp. 2798–2810, 2007, doi: https://doi.org/10.1016/j.patcog.2007.01.026

[8] H. Shin, S.-D. Kim, and H.-C. Choi, "Generalized elastic graph matching for face recognition," Pattern Recog. Lett., vol. 28, no. 7, pp. 1077–1082, 2007, doi: https://doi.org/10.1016/j.patrec.2007.01.003

[9] A. Albiol, D. Monzo, A. Martin, J. Sastre, and A. Albiol, "Face recognition using HOG–EBGM," Pattern Recog. Lett., vol. 29, no. 8, pp. 1537–1543, July 2008, doi: https://doi.org/10.1016/j.patrec.2008.03.017

[10] M. Yang and L. Zhang, "Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary," in Eur. Conf. Comput. Vision. Berlin, Heidelberg: Springer, 2010, pp. 448–461, doi: https://doi.org/10.1007/978-3-642-15567-3_33

[11] M. Yang, L. Zhang, S. C.-K. Shiu, and D. Zhang, "Monogenic binary coding: An efficient local feature extraction approach to face recognition," IEEE Trans. Inform. Forens. Secur., vol. 7, no. 9, pp. 1738–1751, Sep. 2012, doi: https://doi.org/10.1109/TIFS.2012.2217332

[12] Z. Chai, Z. Sun, H. Mendez-Vazquez, R. He, and T. Tan, "Gabor ordinal measures for face recognition," IEEE Trans. Inform. Forens. Secur., vol. 9, no. 10, pp. 14–26, Nov. 2014, doi: https://doi.org/10.1109/TIFS.2013.2290064

[13] R. J. Baron, "Mechanisms of human facial recognition," Int. J. Man-Machine Stud., vol. 15, no. 13, pp. 137–178, Aug. 1981, doi: https://doi.org/10.1016/S0020-7373(81)80001-6

[14] S. Satonkar Suhas, B. Kurhe Ajay, and B. Prakash Khanale,


"Face recognition using principal component analysis and linear discriminant analysis on holistic approach in facial images database," Int. Organ. Sci. Res., vol. 2, no. 12, pp. 15–23, 2012.

[15] L. Sirovich and M. Kirby, "Low-dimensional procedure for the characterization of human faces," J. Opt. Soc. Am. A, vol. 4, no. 14, pp. 519–524, 1987, doi: https://doi.org/10.1364/JOSAA.4.000519

[16] A. K. Jain and R. C. Dubes, Algorithms for Clustering Data. Prentice-Hall, Inc., 1988.

[17] J. H. Lai, P. C. Yuen, and G. C. Feng, "Face recognition using holistic Fourier invariant features," Pattern Recog., vol. 34, no. 17, pp. 95–109, 2001, doi: https://doi.org/10.1016/S0031-3203(99)00200-9

[18] B.-L. Zhang, H. Zhang, and S. S. Ge, "Face recognition by applying wavelet subband representation and kernel associative memory," IEEE Trans. Neural Netw., vol. 15, no. 19, pp. 166–177, 2004, doi: https://doi.org/10.1109/TNN.2003.820673

[19] H. Zhang, B. Zhang, W. Huang, and Q. Tian, "Gabor wavelet associative memory for face recognition," IEEE Trans. Neural Netw., vol. 16, no. 20, pp. 275–278, 2005, doi: https://doi.org/10.1109/TNN.2004.841811

[20] K.-C. Kwak and W. Pedrycz, "Face recognition using an enhanced independent component analysis approach," IEEE Trans. Neural Netw., vol. 18, no. 21, pp. 530–541, Mar. 2007, doi: https://doi.org/10.1109/TNN.2006.885436

[21] H. Huang and H. He, "Super-resolution method for face recognition using nonlinear mappings on coherent features," IEEE Trans. Neural Netw., vol. 22, no. 22, pp. 121–130, Nov. 2011, doi: https://doi.org/10.1109/TNN.2010.2089470

[22] R. Azad, B. Azad et al., "Optimized method for real-time face recognition system based on PCA and multiclass support vector machine," Adv. Comput. Sci., vol. 2, no. 23, pp. 126–132, 2013.

[23] P. Zhang, X. Ben, W. Jiang, R. Yan, and Y. Zhang, "Coupled marginal discriminant mappings for low-resolution face recognition," Optik-Int. J. Light Electron Optics, vol. 126, no. 24, pp. 4352–4357, Dec. 2015, doi:


https://doi.org/10.1016/j.ijleo.2015.08.138

[24] Y. Li, Z. Mu, and T. Zhang, "Local feature extraction and recognition under expression variations based on multimodal face and ear spherical map," in 9th Int. Cong. Image Signal Process., BioMedical Eng. Informat. (CISP-BMEI), Datong, China, Oct. 15–17, 2016, pp. 286–290, doi: https://doi.org/10.1109/CISP-BMEI.2016.7852723

[25] Y. Chu, T. Ahmad, G. Bebis, and L. Zhao, "Low-resolution face recognition with single sample per person," Signal Process., vol. 141, pp. 144–157, 2017, doi: https://doi.org/10.1016/j.sigpro.2017.05.012

[26] S. Aftab, S. F. Ali, A. Mahmood, and U. Suleman, "A boosting framework for human posture recognition using spatio-temporal features along with radon transform," Multimed. Tools Appl., vol. 18, pp. 42325–42351, Aug. 2022, doi: https://doi.org/10.1007/s11042-022-13536-1

[27] S. F. Ali, R. Khan, A. Mahmood, M. T. Hassan, and M. Jeon, "Using temporal covariance of motion and geometric features via boosting for human fall detection," Sensors, vol. 18, no. 6, Art. no. 1918, 2018, doi: https://doi.org/10.3390/s18061918

[28] M. Kirby and L. Sirovich, "Application of the Karhunen-Loeve procedure for the characterization of human faces," IEEE Trans. Pattern Anal. Mach. Intell., vol. 12, no. 25, pp. 103–108, Jan. 1990, doi: https://doi.org/10.1109/34.41390

[29] C. Sagonas, E. Ververas, Y. Panagakis, and S. Zafeiriou, "Recovering joint and individual components in facial data," IEEE Trans. Pattern Anal. Mach. Intell., vol. 40, no. 26, pp. 2668–2681, Dec. 2017, doi: https://doi.org/10.1109/TPAMI.2017.2784421

[30] W. Wang, Y. Yan, Z. Cui, J. Feng, S. Yan, and N. Sebe, "Recurrent face aging with hierarchical autoregressive memory," IEEE Trans. Pattern Anal. Mach. Intell., vol. 40, no. 27, pp. 654–668, Feb. 2018, doi: https://doi.org/10.1109/TPAMI.2018.2803166

[31] P. Singhal, P. K. Srivastava, A. K. Tiwari, and R. K. Shukla, "A survey: Approaches to facial detection and recognition with machine learning techniques,"


in Proc. Second Doct. Symp. Comput. Intell., vol. 40, no. 28, 2021, pp. 654–668, doi: https://doi.org/10.1007/978-981-16-3346-1_9

[32] X. Zhu, X. Liu, Z. Lei, and S. Z. Li, "Face alignment in full pose range: A 3D total solution," IEEE Trans. Pattern Anal. Mach. Intell., vol. 41, no. 29, pp. 78–92, Nov. 2017, doi: https://doi.org/10.1109/TPAMI.2017.2778152

[33] A. Mollahosseini, B. Hasani, M. J. Salvador, H. Abdollahi, D. Chan, and M. H. Mahoor, "Facial expression recognition from world wild web," in Proc. IEEE Conf. Comput. Vision Pattern Recog. Workshops, vol. 42, no. 29, 2016, pp. 78–92.

[34] M. Valstar, B. Martinez, X. Binefa, and M. Pantic, "Facial point detection using boosted regression and graph models," in 2010 IEEE Conf. Comput. Vision Pattern Recog., June 13–18, 2010, pp. 2729–2736, doi: https://doi.org/10.1109/CVPR.2010.5539996

[35] S. F. Ali, A. S. Aslam, M. J. Awan, A. Yasin, and R. Damaševičius, "Pose estimation of driver's head panning based on interpolation and motion vectors under a boosting framework," Appl. Sci., vol. 11, no. 24, Art. no. 11600, Dec. 2021, doi: https://doi.org/10.3390/app112411600

[36] S. F. Ali and M. T. Hassan, "Feature based techniques for a driver's distraction detection using supervised learning algorithms based on fixed monocular video camera," KSII Trans. Internet Info. Syst., vol. 12, no. 8, pp. 3820–3841, Aug. 2018.

[37] S. F. Ali, M. Muaz, A. Fatima, F. Idrees, and N. Nazar, "Human fall detection," in INMIC. IEEE, 2013, pp. 101–105.

[38] H. Du, H. Shi, D. Zeng, X.-P. Zhang, and T. Mei, "The elements of end-to-end deep face recognition: A survey of recent advances," ACM Comput. Surveys (CSUR), vol. 54, no. 10s, pp. 1–42, 2022, doi: https://doi.org/10.1145/3507902

[39] G. Jeevan, G. C. Zacharias, M. S. Nair, and J. Rajan, "An empirical study of the impact of masks on face recognition," Pattern Recognition, vol. 122, Art. no. 108308, Feb. 2022, doi: https://doi.org/10.1016/j.patcog.2021.108308

[40] M. Smith and S. Miller, "The ethical application of biometric facial recognition


technology," AI & Society, vol. 37, no. 1, pp. 167–175, Apr. 2022, doi: https://doi.org/10.1007/s00146-021-01199-9
