





This paper presents a method for an effective face recognition system in video sequences. Minimizing the complexity of the face recognition framework is the principal aim of this paper. In the proposed pose-invariant face recognition technique, the database video clip is first separated into individual frames. Preprocessing with Gaussian filtering is performed on every frame to remove noise. The face is detected from the preprocessed image using the Viola-Jones algorithm, and features are then extracted from the detected face. Next, the extracted feature values are presented as input to train the ANFIS classifier. The parameters of the ANFIS are optimized by the artificial bee colony algorithm to achieve high recognition accuracy during training. Likewise, the features of a query image are applied during testing to analyze the performance of the proposed recognition system, and an image is accepted as recognized with respect to a threshold value. The method is implemented in the MATLAB working platform, and the outcome is examined and compared with prevailing techniques to demonstrate the performance of the proposed video face recognition method.

Keywords: Gaussian filtering, Face detection, Feature extraction, Optimization, Classification.


Face recognition has attracted extensive attention from researchers owing to its wide variety of applications, such as law enforcement, video surveillance, homeland security, and identity management. Thanks to advances in image sensing, the face recognition task can be executed non-intrusively, without the user's awareness or explicit cooperation. Over the previous decades the performance of face recognition has steadily improved, but the problem of large pose variation remains unsolved [1]. Face recognition is a biometric technology for recognizing an individual from an image or a video using statistical, frequency-domain, or spatial geometric features. The human brain can identify faces with little or no effort [2][3]. The best-known face recognition techniques are Fisherfaces and Eigenfaces, which are regarded as global-feature-based approaches. However, when the face pose of the input image diverges from that of the sample images, the accuracy of these approaches drops dramatically [4]. Moreover, many current face recognition approaches are quite sensitive to pose, occlusion, lighting, and aging. The varying appearance of a person under different poses is a bottleneck for most recent face recognition technologies. Hence, conventional appearance-based techniques such as Eigenfaces [5] degrade dramatically when non-frontal probes are matched against enrolled frontal faces.

A face recognition system mainly involves image acquisition, preprocessing, face region detection and extraction, feature extraction, and classification against trained images. The proposed system incorporates the extra feature of pose evaluation. Face recognition is classified according to feature type into appearance-based features and geometric-based features [6]: appearance-based features describe the face texture, while geometric-based features describe its shape. The recognition rate of a face recognition method is mainly influenced by expression, illumination, and pose variations, and occluded faces yield deficient recognition rates [7][8]. When face recognition is based on video, the pose problem is also significant, as are the challenges of expression, facial hair, aging, cosmetics, and facial paraphernalia. Occlusion is among the most vital issues in face recognition: handling partial occlusion is one of the most demanding problems associated with a face recognition system, and occlusion by objects such as sunglasses, masks, or scarves is prominent [9][10].

Various algorithms have been developed for face recognition from fixed viewpoints; however, little effort has been devoted to solving the combined variation of illumination, pose, expression, etc. [11]. Many specific techniques have been suggested recently. Among these, one may cite neural networks [12], elastic template matching [13], Karhunen-Loeve expansion, algebraic moments, principal component analysis [14], discriminant analysis, local binary patterns, and higher-order derivative patterns [15]. Conventionally, the comparison of these methods using standard benchmarks, and hence their relative performance, is not well documented [16]. Compared with holistic methods, feature-based techniques have numerous benefits: they are robust to dissimilarities in pose, illumination, expression, occlusion, and localization errors [17].

Face recognition is mainly used for person identification; in the common scenario a person is identified by his or her face. Presenting a face to a machine, however, is a complicated task. Recognition algorithms are generally used in many computer vision applications to classify images with notable structured properties [20]: such images hold similar facial feature components, similar distances between facial landmarks, and the same eye alignment. Normal images are used in recognition applications. Detection algorithms identify faces and separate the face regions including eyes, eyebrows, nose, and mouth; thus the overall algorithm is more complicated than a single detection or recognition step [18]. The key benefit of face recognition has been the non-intrusiveness of recognition: the system can identify even an uncooperative face in unconstrained conditions without the person's knowledge. But every face recognition algorithm suffers a performance drop whenever the face appearance changes owing to factors such as occlusion, expression, illumination, accessories, pose, or aging [19].

The remainder of the paper is organized as follows: Section 2 reviews related work with respect to the proposed method, Section 3 presents a brief discussion of the proposed methodology, Section 4 analyzes the experimental results, and Section 5 concludes the paper.


Reecha Sharma [21] presented an efficacious pose-invariant face recognition technique using PCA and ANFIS (PCA-ANFIS). The aspects of the image under test are extracted using PCA, and the neuro-fuzzy method ANFIS is used for recognition. The suggested system recognizes face images under a variety of pose conditions by using ANFIS: the PCA technique processes the training face image dataset to compute score values that are later used in the recognition procedure. The suggested neuro-fuzzy face recognition technique recognizes input face images with a high recognition ratio.

The authors of [22] suggested a facial alignment algorithm that jointly handles the presence of facial pose variation, partial occlusion of the face, and varying illumination and expressions. Their method proceeds from sparse to dense landmarking steps using a series of models trained to best account for the texture and shape variation expressed by facial landmarks and facial shapes across poses and various expressions. They also suggested a novel l1-regularized least squares technique integrated into their shape model, an improvement over the shape model used by several prior Active Shape Model (ASM) based facial landmark localization algorithms.

The authors of [23] suggested an innovative generalized space-curve representation based on the Frenet frame for 3-D pose-invariant face and facial expression recognition and classification. Three-dimensional facial curves are elicited from frontal or synthetically posed 3-D facial data to derive the suggested Frenet-frame-based features. The efficiency of the method was estimated on two recognition tasks, 3-D face recognition (3D-FR) and 3-D facial expression recognition (3D-FER), using benchmark 3-D datasets. The framework attains a 96% rank-1 recognition rate for 3D-FR and 91.4% area under the ROC curve for the six basic expressions in 3D-FER.

Jian Yang [24] presented an error model based on the two-dimensional image matrix, named nuclear norm based matrix regression (NMR), for face representation and classification. NMR uses the minimal nuclear norm of the representation error image as a criterion and the alternating direction method of multipliers (ADMM) to compute the regression coefficients. They additionally developed a fast ADMM algorithm for solving the approximate NMR model and showed that it has a quadratic rate of convergence. They experimented on five famous face image databases: the Extended Yale B, AR, EURECOM, Multi-PIE, and FRGC.

Ying Tai [25] presented the orthogonal Procrustes problem (OPP) as a technique for handling pose variations present in 2-D face images. OPP seeks an optimal linear transformation between two images with different poses so that the transformed image best fits the other. They incorporated OPP into a regression model and suggested the orthogonal Procrustes regression (OPR) system. To address the unsuitability of a linear transformation for dealing with highly non-linear pose variation, they further adopted a progressive strategy and suggested the stacked OPR. As a practical framework, OPR can manage face alignment, pose correction, and face representation concurrently.

The authors of [26] suggested an innovative geometric framework for evaluating 3-D faces with the particular aims of comparing, matching, and averaging their shapes. Representing the facial surfaces by radial curves emanating from the tip of the nose, they used elastic shape analysis of these curves to develop a Riemannian framework for analyzing the shapes of complete facial surfaces. With the elastic Riemannian metric, this representation proves natural for measuring facial deformations and is robust to challenges such as large facial expressions (particularly those with open mouths), missing parts, partial occlusions, and large pose variations due to glasses, hair, and so on. The framework was shown to be promising from both empirical and theoretical perspectives.


The suggested posture-invariant face recognition method proceeds through the stages mentioned below.

Posture invariant face recognition

 Preprocessing
 Gaussian filter
 Face detection
 Viola-Jones algorithm
 Haar-like feature selection
 Creating the integral image
 AdaBoost training algorithm
 Cascaded classifiers
 Feature extraction
 Classification
 ANFIS classifier
 Optimization
o ABC algorithm

Fig. 1 Posture invariant face recognition stages

Initially, the input video is converted into a number of image frames. The image frames are then pre-processed to remove noise; for this, Gaussian filtering is applied in the preprocessing stage. From the pre-processed image, the face is detected using the Viola-Jones algorithm. Once face detection is done, image features such as contrast, energy, correlation, homogeneity, maximum probability, entropy, cluster shade, cluster prominence, local homogeneity, sum of squares (variance), dissimilarity, autocorrelation, and inverse difference moment are extracted. The block diagram in Figure 2 depicts the presented work.

[Fig. 2 shows the pipeline: Input video → Frames → Preprocessing (Gaussian filter) → Face detection (Viola-Jones algorithm) → Feature extraction → ANFIS classifier (parameters tuned by the ABC optimization algorithm) → Recognized / Non-recognized.]

Fig. 2 Block schematic of posture invariant face recognition


Initially, the input image is processed to extract features that demonstrate its contents. The processing comprises filtering to remove noise; in the suggested work, preprocessing of the input image is done by applying a Gaussian filter.

3.1.1 Gaussian Filtering

A Gaussian filter is used to eliminate Gaussian noise. The Gaussian smoothing operator computes a weighted average of the surrounding pixels based on the Gaussian distribution. The weights give higher importance to pixels near the center of the window, which minimizes edge blurring. The degree of smoothing is governed by σ (a bigger σ gives more intensive smoothing): sigma signifies the amount of blurring. A radius parameter restricts how large the template is; large values of sigma only yield heavy blurring for large template sizes. Thus, Gaussian filtering is used to blur images and to eliminate noise along with fine detail. The Gaussian function is:

g(a, b) = (1 / (2πσ²)) e^(−(a² + b²) / (2σ²))    (1)

in which σ is the standard deviation of the Gaussian distribution, presumed to have a mean of 0. Gaussian smoothing is very efficient for removing Gaussian noise.
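As a minimal illustrative sketch (in Python/NumPy rather than the MATLAB used in this work), the smoothing of equation (1) can be implemented by sampling the Gaussian on a square window, normalizing the weights to sum to 1, and sliding the window over an edge-padded image. The function names are our own, not from any particular library.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Sampled 2-D Gaussian of equation (1), normalized so the weights sum to 1."""
    half = size // 2
    a = np.arange(-half, half + 1)
    aa, bb = np.meshgrid(a, a)
    g = np.exp(-(aa**2 + bb**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()

def gaussian_filter(image, size=5, sigma=1.0):
    """Smooth a grayscale image by convolving with the Gaussian kernel
    (borders handled by edge replication)."""
    k = gaussian_kernel(size, sigma)
    half = size // 2
    padded = np.pad(image.astype(float), half, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```

Because the kernel is normalized, a constant image passes through the filter unchanged, which is a quick sanity check of the implementation.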


The preprocessed image obtained from Gaussian filtering next undergoes object detection, in which the human face is the object to be recognized. Object detection is implemented by means of the Viola-Jones method.

Viola and Jones offer a fast and effective way of detecting a face in a given image. It relies on Haar-like features and a cascaded AdaBoost classifier. This was the first face detection framework capable of real-time performance; hence many image processing applications that need faces as input are built using this algorithm, and it remains the most widely used face detection algorithm. The algorithm mainly comprises the following stages.

 Haar feature selection

 Creating the integral image
 AdaBoost training algorithm
 Cascaded classifiers

We introduce these ideas briefly below and explain them elaborately in the following subsections.

3.2.1 Haar like features

Haar-like features are rectangular digital image features that derive their name from their similarity with Haar wavelets. The value of a two-rectangle feature is the difference between the sums of the pixels within two rectangular regions; the regions have the same size and shape and are horizontally or vertically adjacent (see Figure 3). A three-rectangle feature computes the sum within two outside rectangles subtracted from the sum in a center rectangle. Finally, a four-rectangle feature computes the difference between diagonal pairs of rectangles.

The rectangular feature value can be estimated as

V_RF = Σ P_A(black) − Σ P_A(white)    (2)

where V_RF is the rectangular feature value, P_A(black) denotes the pixels in the black area, and P_A(white) the pixels in the white area.


Fig. 3 Various Haar-like features.
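Equation (2) amounts to a single subtraction of two region sums; an illustrative Python sketch (the function name is ours, for illustration only):

```python
import numpy as np

def rectangle_feature_value(black_region, white_region):
    """Equation (2): V_RF = sum of pixels in the black area minus
    sum of pixels in the white area."""
    return float(np.sum(black_region) - np.sum(white_region))
```

For adjacent same-size regions, swapping the two arguments flips the sign of the feature, which matches the intuition that the feature responds to the direction of an intensity edge.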

3.2.2 Feature selection and analysis

The Viola-Jones technique uses a variant of the AdaBoost algorithm, created by Freund and Schapire, to select a minimal set of crucial features and create an efficient classifier. The training data must contain images spanning a range of lighting conditions and facial properties for the best performance.

3.2.3 Integral image

An innovative image representation called the integral image permits very fast feature evaluation. The detection system does not work with image intensities directly; instead, the object detection process classifies images according to the values of simple features. The integral image is computed from an image using only a few operations per pixel, and any of the Haar-like features can then be computed at any location or scale in constant time.

The integral image at location (x, y) contains the sum of the pixels above and to the left of (x, y):

I_i(x, y) = Σ_{x′ ≤ x, y′ ≤ y} i(x′, y′)    (3)

in which I_i(x, y) is the integral image and i(x, y) is the actual image intensity.


Fig. 4 I_i(x, y) = sum of image intensities in the shaded area
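An illustrative Python sketch of equation (3): the integral image is a double cumulative sum, after which any rectangle sum, and hence any Haar-like feature, costs only four array lookups. The helper names are ours, not from a specific library.

```python
import numpy as np

def integral_image(img):
    """I(x, y) = sum of all pixels above and to the left of (x, y), inclusive.
    A zero row/column is prepended so rectangle sums need no bounds checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, top, left, h, w):
    """Sum of any h-by-w rectangle using four lookups in the integral image."""
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])

def two_rect_feature(ii, top, left, h, w):
    """Two-rectangle Haar-like feature: left half ('black') minus
    right half ('white'); w must be even."""
    return (rect_sum(ii, top, left, h, w // 2)
            - rect_sum(ii, top, left + w // 2, h, w // 2))
```

The constant-time property follows directly: `rect_sum` does the same four lookups regardless of the rectangle's size or position.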

3.2.4 Adaboostalgorithm
The object detection framework employs the AdaBoost learning algorithm both to select the best features and to train the classifiers that use them. The algorithm builds a powerful classifier as a weighted linear combination of simple weak classifiers:

h(x) = sign( Σ_j α_j h_j(x) )    (4)

Every weak classifier is a threshold function based on a feature F_j. A weak classifier h_j(x) comprises a feature F_j, a threshold θ_j, and a parity indicating the direction of the inequality sign:

h_j(x) = T_j   if F_j < θ_j
h_j(x) = −T_j  otherwise    (5)

in which the threshold θ_j, the outputs ±T_j, and the coefficients α_j are determined during training, and x is a 24-by-24 sub-window of an image.
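Equations (4) and (5) can be sketched directly in Python (an illustrative simplification with fixed parity, not the trained detector itself):

```python
def weak_classifier(feature_value, theta, t=1.0):
    """Threshold weak classifier of equation (5): returns +t when the
    Haar feature value is below the threshold theta, -t otherwise."""
    return t if feature_value < theta else -t

def strong_classifier(feature_values, alphas, thetas):
    """Equation (4): sign of the alpha-weighted sum of the weak classifiers.
    +1 is read as 'face', -1 as 'non-face'."""
    total = sum(a * weak_classifier(f, th)
                for f, a, th in zip(feature_values, alphas, thetas))
    return 1 if total >= 0 else -1
```

In training, AdaBoost chooses each θ_j to minimize the weighted error on the training set and sets α_j from that error, so accurate weak classifiers get larger votes in the sum.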

3.2.5 Cascade architecture

Viola and Jones used a series of increasingly complex classifiers, called a cascade, to improve computational efficiency and minimize the false positive rate. An input window is evaluated on the first classifier in the cascade; if that classifier returns false, computation on that window ends and the detector returns false. If the classifier returns true, the window is passed to the next classifier of the cascade. Thus, if the window passes every classifier, with each returning true, the detector returns true for that window. The more a window looks like a face, the more classifiers evaluate it and the longer it takes to classify. Since most windows of an image do not resemble faces, most are rapidly rejected as non-faces. The classification process in the Viola-Jones algorithm is illustrated in Figure 5.

[Fig. 5 shows an image sub-window passing through cascade stages 1, 2, …, N: each "Pass" forwards the window to the next stage, while a "Fail" at any stage immediately reports "no face found".]

Fig.5 Classification process in Viola-Jones algorithm.
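The early-reject behavior described above is the whole trick of the cascade; a minimal Python sketch (stage functions here stand in for the trained stage classifiers):

```python
def cascade_detect(window, stages):
    """Attentional cascade: each stage is a function returning True/False.
    A window is declared a face only if every stage accepts it; the first
    rejection stops evaluation, which is what makes the cascade fast on
    the many non-face windows of an image."""
    for stage in stages:
        if not stage(window):
            return False   # early reject: no face found
    return True            # all stages passed
```

Because typical images contain vastly more non-face windows than face windows, most windows are rejected by the first one or two cheap stages, and the expensive later stages run only on the few promising windows.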


Feature extraction transforms the detected face image into a set of features. The features contrast, correlation, entropy, energy, local homogeneity, maximum probability, cluster shade, sum of squares, homogeneity, cluster prominence, variance, dissimilarity, autocorrelation, and inverse difference moment are used to describe the image content.

3.3.1 Contrast (C): Returns a measure of the intensity contrast between a pixel and its neighbor over the whole image; contrast is 0 for a constant image.

C = Σ_{a,b} |a − b|² P(a, b)    (6)

in which C is the contrast and P(a, b) is the co-occurrence value at location (a, b).

3.3.2 Cluster shade and Cluster prominence

Cluster shade is a measure of the asymmetry of the matrix and is expected to gauge the perceptual notion of uniformity.

Shade = Σ_{a=0}^{G−1} Σ_{b=0}^{G−1} (a + b − μ_x − μ_y)³ p(a, b)    (7)

Cluster prominence is also an asymmetry measure; when the cluster prominence value is high, the image is less symmetric.

Prom = Σ_{a=0}^{G−1} Σ_{b=0}^{G−1} (a + b − μ_x − μ_y)⁴ p(a, b)    (8)

3.3.3 Correlation and autocorrelation

Correlation measures the linear dependency between the grey levels of adjacent pixels. Autocorrelation is the correlation between elements of a series and other elements of the same series separated from them by a given interval.

3.3.4 Homogeneity (H): Returns a value that measures the closeness of the distribution of elements in the GLCM to the GLCM diagonal.

H = Σ_{a,b} P(a, b) / (1 + |a − b|)    (9)

3.3.5 Inverse difference moment (local homogeneity)

The inverse difference moment (IDM) measures local homogeneity. It is high when the local grey level is uniform; the IDM weight is the inverse of the contrast weight.

IDM = Σ_{a=0}^{G−1} Σ_{b=0}^{G−1} p(a, b) / (1 + (a − b)²)    (10)

3.3.6 Sum of squares

The sum of squares is a mathematical method to determine the dispersion of data points. In an investigation, the aim is to determine how well a data series fits a function that may help explain how the data series was generated.

3.3.7 Energy (E): Returns the sum of squared elements in the GLCM. Energy is 1 for a constant image.

E = Σ_{a,b} P(a, b)²    (11)

3.3.8 Dissimilarity

A numerical measure of how different two data objects are, ranging from 0 (objects are alike) to ∞ (objects are different).

3.3.9 Maximum probability

Probability is measured as a number between 0 and 1 (where 0 denotes impossibility and 1 signifies certainty). The maximum probability feature is the largest co-occurrence entry, i.e. the pixel pairing that is most certain to occur.
3.3.10 Entropy (e): Entropy is a measure of randomness.

e = −Σ_{a,b} P(a, b) log₂ P(a, b)    (12)

in which the sum runs over the M distinct values that pixels may adopt.

3.3.11 Variance (V): A measure signifying how much the grey levels vary from the mean μ.

V = Σ_{a,b} (a − μ)² P(a, b)    (13)
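Several of the GLCM features above can be computed in a few lines. The following Python sketch follows the standard Haralick-style definitions (contrast, energy, homogeneity, entropy); it assumes a pre-computed, normalized co-occurrence matrix P and is illustrative only:

```python
import numpy as np

def glcm_features(P):
    """Texture features from a grey-level co-occurrence matrix P
    (normalized so its entries sum to 1)."""
    P = P / P.sum()
    a, b = np.indices(P.shape)       # grey-level index grids
    nz = P[P > 0]                    # avoid log(0) in the entropy term
    return {
        "contrast": np.sum(((a - b) ** 2) * P),
        "energy": np.sum(P ** 2),
        "homogeneity": np.sum(P / (1.0 + np.abs(a - b))),
        "entropy": -np.sum(nz * np.log2(nz)),
    }
```

For a constant image the GLCM collapses to a single diagonal entry, giving contrast 0, energy 1, homogeneity 1, and entropy 0, exactly as stated in the feature definitions above.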


The features extracted from the preprocessed images are optimized using the artificial bee colony (ABC) algorithm. The algorithm comprises three groups of bees: employed bees, onlooker bees, and scouts. Correspondingly, in the optimization framework, the number of food sources in the ABC algorithm represents the number of solutions in the population: the position of a good food source signifies a promising solution to the optimization problem, and the quality of the food source represents the fitness cost of the associated solution. The phases of the ABC optimization algorithm are explained below.

Initialization Phase: The food sources, whose population size is S, are randomly created by scout bees. Every food source, denoted q(x_n), is an input vector for the optimization problem; x_n contains d variables, where d is the dimension of the search space of the objective function being optimized. The initial food sources are randomly produced through the expression

q(x_n) = L_i + rand(0, 1) × (U_i − L_i)    (14)

in which U_i and L_i are the upper and lower bounds of the solution space of the objective function, and rand(0, 1) is a random number within the range [0, 1].

Fitness calculation

The fitness of the food sources is significant for finding the global optimum. The fitness is computed by formula (15) below, after which a greedy selection is made between the current food source and its neighbor.

Fit_m(q(x_n)) = 1 / (1 + f_m(q(x_n)))   if f_m(q(x_n)) ≥ 0
Fit_m(q(x_n)) = 1 + |f_m(q(x_n))|       if f_m(q(x_n)) < 0    (15)

in which f_m(q(x_n)) is the objective function value of q(x_n).

Employed bee phase

An employed bee flies to a food source and then searches for a new food source within the neighborhood of that source. The employed bees memorize the food source with the greater quantity, and the food source information saved by an employed bee is shared with the onlooker bees. A neighbor food source N_mi is generated and computed by the equation

N_mi = q(x_mi) + φ_mi (q(x_mi) − q(x_ki))    (16)

in which i is a randomly chosen parameter index, q(x_k) is a randomly chosen food source, and φ_mi is a random number in the range [−1, 1]. This parameter range may be adjusted for particular problems.

Onlooker Bee Phase: Onlooker bees compute the profitability of the food sources by observing the waggle dance in the dance area and then probabilistically choose a more profitable food source. An onlooker bee then searches randomly in the neighborhood of the chosen food source. The selection probability of a food source is estimated from its profitability relative to the profitability of all food sources; p_m is determined by the following formula:

probability(p_m) = Fit_m(q(x_n)) / Σ_{m=1}^{S} Fit_m(q(x_n))    (17)

in which Fit_m(q(x_n)) is the fitness of q(x_n).

Onlooker bees examine the neighborhood of the chosen food source by the same expression as in the employed bee phase:

N_mi = q(x_mi) + φ_mi (q(x_mi) − q(x_ki))
Scout Phase: If the profitability of a food source cannot be improved and the number of unchanged trials exceeds a predetermined number called the "limit", the solution is abandoned by the scout bees. The scout bees then search randomly for new solutions. The scout discovers a new solution q(x_n) by applying the expression

q(x_n) = L_i + rand(0, 1) × (U_i − L_i)    (18)

where rand(0, 1) is a random number within the range [0, 1], and U_i and L_i are the upper and lower bounds of the solution space of the objective function. The flow diagram of the ABC algorithm is presented in Figure 6.

[Fig. 6 shows the ABC loop: Initialization → Compute the fitness function → Employed bee phase → Onlooker bee phase → Scout bee phase; when the maximum iteration is reached, the optimal solution is saved and the algorithm stops.]

Fig. 6 Flow diagram of the ABC algorithm
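The phases above can be sketched as a compact Python routine, shown here minimizing a toy objective rather than the paper's ANFIS training error. This is an illustrative, simplified sketch of ABC (one perturbed coordinate per move, a common box-constrained variant); all names and defaults are our own choices.

```python
import random

def abc_minimize(f, dim, lower, upper, n_sources=10, limit=20, iters=100, seed=1):
    """Minimal artificial bee colony sketch for minimizing f over a box:
    fitness as in the fitness-calculation phase, one-coordinate neighborhood
    moves for employed/onlooker bees, and scout re-seeding of stale sources."""
    rng = random.Random(seed)

    def new_source():
        return [lower + rng.random() * (upper - lower) for _ in range(dim)]

    def fitness(x):
        v = f(x)
        return 1.0 / (1.0 + v) if v >= 0 else 1.0 + abs(v)

    sources = [new_source() for _ in range(n_sources)]
    trials = [0] * n_sources

    def neighbor(m):
        k = rng.randrange(n_sources)
        while k == m:
            k = rng.randrange(n_sources)
        i = rng.randrange(dim)
        cand = sources[m][:]
        phi = rng.uniform(-1.0, 1.0)
        cand[i] = min(max(cand[i] + phi * (cand[i] - sources[k][i]), lower), upper)
        return cand

    def greedy(m, cand):
        """Keep the candidate only if it improves the source's fitness."""
        if fitness(cand) > fitness(sources[m]):
            sources[m], trials[m] = cand, 0
        else:
            trials[m] += 1

    best = min(sources, key=f)
    for _ in range(iters):
        for m in range(n_sources):                  # employed bee phase
            greedy(m, neighbor(m))
        fits = [fitness(s) for s in sources]
        total = sum(fits)
        for _ in range(n_sources):                  # onlooker bee phase
            r, acc, m = rng.random() * total, 0.0, 0
            for m, ft in enumerate(fits):           # roulette-wheel selection
                acc += ft
                if acc >= r:
                    break
            greedy(m, neighbor(m))
        for m in range(n_sources):                  # scout bee phase
            if trials[m] > limit:
                sources[m], trials[m] = new_source(), 0
        best = min(best, min(sources, key=f), key=f)
    return best
```

On a simple sphere objective the colony converges toward the origin, which is a convenient way to check the greedy-selection and neighborhood logic before plugging in a real objective such as a classifier's training error.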


The data set is divided into two categories: training data and testing data. The training data set comprises images of all the types. In the testing process, the optimized features are extracted and compared with the best available solution.

The optimized features q(x_1), q(x_2), …, q(x_n) derived from the ABC optimization are classified using the well-known ANFIS classifier, which comprises five layers of nodes. Of the five layers, the first and fourth layers contain adaptive nodes, whereas the second, third, and fifth layers contain fixed nodes. The ANFIS architecture is provided in figure 7, and the features are classified using it. The rule base of the ANFIS is of the form:

If q(x_1) is A_i, q(x_2) is B_i, and q(x_n) is C_i, then

Rules_i = a_i · q(x_1) + b_i · q(x_2) + c_i · q(x_n) + f_i    (20)

in which q(x_1), q(x_2), and q(x_n) are the inputs; A_i, B_i, and C_i are the fuzzy sets; Rules_i is the output within the fuzzy region specified by the fuzzy rule; and a_i, b_i, c_i, and f_i are the design parameters determined by the training method.
[Fig. 7 shows the five-layer ANFIS network: the inputs q(x_1), q(x_2), …, q(x_n) feed membership nodes A_1, A_2, B_1, … in layer 1; layer 2 (Π nodes) produces the firing strengths wt_1, wt_2; layer 3 (N nodes) normalizes them; layer 4 applies the rule consequents; and layer 5 (Σ) sums the weighted rule outputs to give Y.]

Fig. 7 Architecture of ANFIS

Layer-1: Each node i in this layer is a square node with a node function

O_{1,i} = μ_{A_i}(q(x_1)),  O_{1,i} = μ_{B_i}(q(x_2)),  O_{1,i} = μ_{C_i}(q(x_n))    (21)

μ_{A_i}(q(x_1)), μ_{B_i}(q(x_2)), and μ_{C_i}(q(x_n)) are chosen to be bell-shaped with maximum equal to 1 and minimum equal to 0, defined as

μ_{A_i}(x) = 1 / (1 + |(x − o_i) / p_i|^{2q_i})    (22)

where {o_i, p_i, q_i} is the parameter set. The parameters in this layer are referred to as premise parameters.

Layer-2: Every node of this layer is a circle node labeled Π, which multiplies the incoming signals and sends the product out. For example,

O_{2,i} = wt_i = μ_{A_i}(q(x_1)) · μ_{B_i}(q(x_2)) · μ_{C_i}(q(x_n)),  i = 1, 2    (23)

Each node output represents the firing strength of a rule.

Layer-3: Each node in this layer is a circle node labeled N. The i-th node calculates the ratio of the i-th rule's firing strength to the sum of all rules' firing strengths:

O_{3,i} = w̄t_i = wt_i / (wt_1 + wt_2),  i = 1, 2    (24)

Layer-4: Every node i in this layer is a square node with the node function

O_{4,i} = w̄t_i · Rules_i,  i = 1, 2    (25)

where w̄t_i is the normalized firing strength from layer 3 and a_i, b_i, c_i, f_i are the parameter set. Parameters in this layer are referred to as consequent parameters.

Layer-5: The single node in this layer is a circle node labeled Σ that computes the overall output as the summation of all incoming signals:

O_{5,i} = Σ_i w̄t_i · Rules_i = (Σ_i wt_i · Rules_i) / (Σ_i wt_i)    (26)

Y = (wt_1 · Rules_1 + wt_2 · Rules_2) / (wt_1 + wt_2)    (27)

n(O_i^f) = w̄t_1 · Rules_1 + w̄t_2 · Rules_2    (28)

Accordingly, the obtained feature is classified by the ANFIS, and the classified output is denoted n(O_i^f). The network output Y is then compared against a predefined threshold value: if Y is greater than the threshold, the provided input image is acknowledged as recognized; if Y is less than the threshold, the image is not acknowledged.
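The five layers above can be sketched as a single forward pass. The following illustrative Python sketch (not the paper's MATLAB implementation) uses two inputs and two rules, the generalized bell membership for layer 1, and first-order rule consequents; all parameter layouts and names are our own assumptions.

```python
def bell(x, o, p, q):
    """Generalized bell membership: mu(x) = 1 / (1 + |(x - o) / p|^(2q))."""
    return 1.0 / (1.0 + abs((x - o) / p) ** (2 * q))

def anfis_forward(x1, x2, premise, consequent):
    """Forward pass of a two-input, two-rule first-order Sugeno ANFIS.
    `premise` holds one (o, p, q) bell parameter set per input per rule;
    `consequent` holds (a, b, f) per rule."""
    weights = []
    outputs = []
    for (mf1, mf2), (a, b, f0) in zip(premise, consequent):
        wt = bell(x1, *mf1) * bell(x2, *mf2)    # layers 1-2: firing strength
        weights.append(wt)
        outputs.append(a * x1 + b * x2 + f0)    # layer 4: rule consequent
    total = sum(weights)                        # layer 3: normalization
    # layer 5: weighted average of the rule outputs
    return sum(w * r for w, r in zip(weights, outputs)) / total
```

A useful sanity check: if every rule produces the same consequent value, the normalized weighted average must return exactly that value, regardless of the premise parameters.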


The face recognition system is implemented in MATLAB R2014a and executed on a personal computer with an Intel(R) Core(TM) i3 processor (2.40 GHz CPU) and 4 GB of RAM. The face recognition procedure is analyzed with different video frames, and the outcome of the intended system is illustrated below.


Fig. 8 Training Input Dataset for recognized and non-recognized images

In Figure 8, the training phase acquires as input a dataset of multiple objects for recognized and non-recognized images; the input dataset is transformed into frames to undergo preprocessing.


Fig. 9 Preprocessed recognized and non-recognized training images

First the videos are converted into frames; preprocessing then performs RGB-to-grey conversion for the recognized and non-recognized images with the help of a Gaussian filter, as illustrated in Figure 9(b). The frontal faces are then extracted from the preprocessed images by means of the Viola-Jones algorithm.

(a) (b)

Fig. 10 Frontal faces of (a) recognized and (b) non-recognized training images


Fig. 11 Input Testing Dataset for recognized (a) and non-recognized (b) images
In Figure 11, the testing phase acquires as input a dataset of multiple objects for recognized and non-recognized images; the input dataset is then transformed into frames for preprocessing.


Fig. 12 Preprocessing for recognized (a) and non-recognized (b) testing images

First the videos are converted to frames; preprocessing then carries out RGB-to-grey conversion for the recognized and non-recognized images by means of the Gaussian filter, as illustrated in Figure 12(b). The frontal faces are then extracted from the preprocessed RGB frames by means of the Viola-Jones algorithm.

(a) (b)

Fig. 13 Frontal faces of (a) recognized and (b) non-recognized testing images using Viola-Jones

The Viola and Jones face detector is applied locally on every candidate bounding box around the associated pixel regions of an image. The frontal faces are identified as recognized or non-recognized images by means of the Viola-Jones algorithm, as illustrated in the figure. The features are then extracted for classification by the intended ANFIS, evaluated on precision, recall, accuracy, sensitivity, specificity, FM, FDR, FNR, FPR, FAR, FRR, and MCC. The performance of our proposed ANFIS classification technique is compared with the conventional neural network and KNN.


5.1 Performance analysis

The performance of the intended face recognition technique is analyzed using the statistical measures illustrated below.


Precision is the fraction of recognized images that are relevant to the query image.

precision = TP / (TP + FP)    (29)


Recall ascertains the fraction of images relevant to the query images that are effectively recognized.

recall = TP / (TP + FN)    (30)


The weighted harmonic mean of precision and recall is termed the F-measure.

F-measure = 2 × (precision × recall) / (precision + recall)    (31)


Accuracy computes the closeness of the recognized image to the query image.

Accuracy = (TP + TN) / (TP + FP + TN + FN)    (32)


Sensitivity measures the proportion of images relevant to the query images that are effectively recognized.

Sensitivity = TP / (TP + FN)    (33)


Specificity measures the proportion of non-relevant images that are correctly rejected.

Specificity = TN / (FP + TN)    (34)

False Discovery Rate

The false discovery rate is the expected proportion of false positives among all the images reported as recognized.

FDR = FP / (FP + TP)    (35)

False Negative Rate

It is the ratio of the number of positive events wrongly categorized as negatives to the total number of actual positive events.

False Negative Rate = FN / (FN + TP)    (36)

False positive rate

It is the ratio of the number of negative events wrongly categorized as positives to the total number of actual negative events.

False Positive Rate = FP / (FP + TN)    (37)


False acceptance rate and false rejection rate is defined as false positives and false negatives

FRR  FN (38)

Mathew's correlation coefficient

MCC considers the true and false positives and negatives and is regarded as a balanced measure that can be employed even when the classes are of very dissimilar sizes.

MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))    (39)

The object detection and recognition process is analyzed with different video frames, and the outcomes of the intended system are tabulated below.

Performance measure | Proposed ANFIS | Neural Network | KNN
Precision           | 95             | 81             | 94
Recall              | 100            | 95.3           | 98.9
F measure           | 97.4           | 87.6           | 96.4
Accuracy            | 96.9           | 85.8           | 95.7
Sensitivity         | 100            | 95.3           | 98.9
Specificity         | 92.5           | 75.3           | 91
FDR                 | 5              | 19             | 6
FNR                 | 0              | 4.71           | 1.05
FPR                 | 7.46           | 24.7           | 8.96
FAR                 | 5              | 19             | 6
FRR                 | 0              | 4              | 1
MCC                 | 93.8           | 72.6           | 91.2

Table 1: Performance table for classification

The comparison table above illustrates that our intended ANFIS outperforms the conventional classification approaches, Neural Network and KNN, in terms of precision, recall, accuracy, sensitivity, specificity, F-measure, FDR, FNR, FPR, FAR, FRR and MCC. The comparison graph is illustrated below.
Fig. 14 Comparison graph of classification performance measures

In figure 14, the comparison graph illustrates that our intended ANFIS approach performs better than the conventional classification approaches, Neural Network and KNN. Our proposed technique yields a high precision of 95%, recall and sensitivity of 100%, accuracy of 96.9%, specificity of 92.5%, F-measure of 97.4%, FDR of 5%, FNR of 0%, FPR of 7.46%, FAR of 7.46%, FRR of 0% and MCC of 93.8%.

In the proposed pose invariant face recognition technique, preprocessing is done utilizing a Gaussian filter, and the face is detected effectively via the Viola-Jones algorithm. The ABC-optimized ANFIS classifier is utilized to classify each image as recognized or non-recognized. In the performance analysis, our suggested recognition method is investigated utilizing different performance metrics, namely precision (95%), recall (100%), F-measure (97.4%), sensitivity (100%), specificity (92.5%), accuracy (96.9%), false discovery rate (5%), false negative rate (0%), false positive rate (7.46%), false acceptance rate (7.46%), false rejection rate (0%) and Matthews correlation coefficient (93.8%). The comparison results prove that our suggested ANFIS gives better results than the prevailing Neural Network and KNN.

[1] Sang, Gaoli, Jing Li, and Qijun Zhao, "Pose-invariant face recognition via RGB-D images", Computational Intelligence and Neuroscience, No.13, 2016.
[2] Patil, Hemprasad, Ashwin Kothari, and Kishor Bhurchandi, "Expression invariant face recognition using semi-decimated DWT, Patch-LDSMT, feature and score level fusion", Applied Intelligence, Vol.44, No.4, pp.913-930, 2016.
[3] Chen, Qiu, Koji Kotani, Feifei Lee, and Tadahiro Ohmi, "An Improved Face Recognition Algorithm Using Histogram-Based Features in Spatial and Frequency Domains", World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering, Vol.10, No.2, pp.360-364, 2016.
[4] Nitin Sharma, Ranjith Kaur, "Review of Face Recognition Techniques", International Journal of Advanced Research in Computer Science and Software Engineering, Vol.6, No.7, 2016.
[5] Zhang, Jian, Jinxiang Zhang, and Rui Sun, "Pose-invariant face recognition via SIFT feature extraction and manifold projection with Hausdorff distance metric", In Security, Pattern Analysis, and Cybernetics (SPAC), pp.294-298, 2014.
[6] Ali, Asem M, "A 3D-based pose invariant face recognition at a distance framework", IEEE Transactions on Information Forensics and Security, Vol.9, No.12, pp.2158-2169, 2014.
[7] Muruganantham, S., and T. Jebarajan, "An Efficient Face Recognition System Based On the Combination of Pose Invariant and Illumination Factors", International Journal of Computer Applications, Vol.50, No.2, 2012.
[8] Azeem, Aisha, Muhammad Sharif, Mudassar Raza, and Marryam Murtaza, "A survey: face recognition techniques under partial occlusion", Int. Arab J. Inf. Technol, Vol.11, No.1, pp.1-10.
[9] Khadatkar, Ashwin, Roshni Khedgaonkar, and Patnaik, "Occlusion invariant face recognition system", In Futuristic Trends in Research and Innovation for Social Welfare (Startup Conclave), pp.1-4, 2016.
[10] Sindhuja, A., S. Devi Mahalakshmi, and K. Vijayalakshmi, "Age invariant face recognition with occlusion", In Advanced Communication Control and Computing Technologies (ICACCCT), IEEE, pp.83-87, 2012.
[11] Sharma, Poonam, Ram N. Yadav, and Karmveer V. Arya, "Pose-invariant face recognition using curvelet neural network", IET Biometrics, Vol.3, No.3, pp.128-138, 2014.
[12] Ji, Shuiwang, Wei Xu, Ming Yang, and Kai Yu, "3D convolutional neural networks for human action recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.35, No.1, pp.221-231, 2013.
[13] Drira, Hassen, Boulbaba Ben Amor, Anuj Srivastava, Mohamed Daoudi, and Rim Slama, "3D face recognition under expressions, occlusions, and pose variations", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.35, No.9, pp.2270-2283, 2013.
[14] Sharma, Reecha, and M. S. Patterh, "A New Hybrid Approach Using PCA for Pose Invariant Face Recognition", Wireless Personal Communications, Vol.85, No.3, pp.1561-1571.
[15] Agrawal, Amrit Kumar, and Yogendra Narain Singh, "Evaluation of Face Recognition Methods in Unconstrained Environments", Computer Science, Vol.48, pp.644-651, 2015.
[16] Beham, M. Parisa, and S. Mohamed Mansoor Roomi, "Face recognition using appearance based approach: A literature survey", In Proceedings of International Conference & Workshop on Recent Trends in Technology, Mumbai, Maharashtra, India, Vol.2425, p.1621, 2012.
[17] Meena, K., A. Suruliandi, and R. Reena Rose, "An Illumination Invariant Texture Based Face Recognition", ICTACT Journal on Image and Video Processing, Vol.4, No.2, 2013.
[18] Makwana, Kaushik R., "A Survey on Face Recognition Eigen face and PCA method", International Journal, Vol.2, No.2, 2014.
[19] Shermina, "Impact of locally linear regression and fisher linear discriminant analysis in pose invariant face recognition", International Journal of Computer Science and Network Security, Vol.10, No.10, 2010.
[20] Rath, Subrat Kumar, and Siddharth Swarup Rautaray, "A Survey on Face Detection and Recognition Techniques in Different Application Domain", International Journal of Modern Education and Computer Science, Vol.6, No.8, 2014.
[21] Sharma, Reecha, and M. S. Patterh, "A new pose invariant face recognition system using PCA and ANFIS", Optik-International Journal for Light and Electron Optics, Vol.126, No.23, pp.3483-3487, 2015.
[22] Seshadri, Keshav, and Marios Savvides, "Towards a Unified Framework for Pose, Expression, and Occlusion Tolerant Automatic Facial Alignment", Vol.38, No.10, pp.2110-2122, 2015.
[23] Amad, Iftekharuddin, "Frenet Frame-Based Generalized Space Curve Representation for Pose-Invariant Classification and Recognition of 3-D Face", IEEE Transactions on Human-Machine Systems, Vol.46, No.4, pp.522-533, 2016.
[24] Yang, J., L. Luo, J. Qian, Y. Tai, F. Zhang, and Y. Xu, "Nuclear Norm based Matrix Regression with Applications to Face Recognition with Occlusion and Illumination Changes", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
[25] Tai, Y., J. Yang, Y. Zhang, L. Luo, J. Qian, and Y. Chen, "Face Recognition With Pose Variations and Misalignment via Orthogonal Procrustes Regression", IEEE Transactions on Image Processing, Vol.25, No.6, pp.2673-2683, 2016.
[26] Drira, H., B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D Face Recognition under Expressions, Occlusions, and Pose Variations", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.35, No.9, pp.2270-2283, 2013.