
Patch-based Face Recognition under Plastic Surgery

Roshni Khedgaonkar, M. M. Raghuwanshi
Department of Information Technology, YCCE, Nagpur, India
roshni.ycce@gmail.com, m_raghuwanshi@rediffmail.com

K. R. Singh
Department of Computer Technology, YCCE, Nagpur, India
singhkavita19@yahoo.co.in

Abstract—Even though face recognition has been a widely explored area for many decades, real-time face recognition is still a challenging issue due to variations in pose, illumination, occlusion, aging and expression. In addition, with the advancement of technology in the medical field, variation in faces due to plastic surgery has emerged as a new challenge to face recognition. Furthermore, compared to the other challenges mentioned, such as pose and illumination, facial plastic surgery changes some of the facial features permanently, making face recognition under plastic surgery an even more complex problem. In this paper, we propose patch-based face recognition using a k-NN classifier for plastic-surgery faces. Experimentation is carried out on the IIIT-D plastic surgery face database for three different facial surgeries, namely rhinoplasty, blepharoplasty and lip-augmentation, to evaluate the performance of the proposed approach.

Keywords—Face recognition, plastic surgery, patch-based

I. INTRODUCTION

For the last several decades, face recognition has been one of the most explored areas of research. Moreover, due to advancements in the digital world, face recognition has established itself as a biometric method for authentication at various places such as offices, residences, old-age homes, child-care centers and academic institutions, to name a few. However, despite being a mature area of research, face recognition still struggles with performance issues due to various challenges such as pose variations [1], illumination variations [2], expression variations [3], aging [4] and occlusion [5]. In addition to the above-mentioned issues, with the advancement of technology in the medical field, plastic surgery has emerged as a new challenge to face recognition. Facial plastic surgery is broadly categorized into global plastic surgery and local plastic surgery. Global plastic surgery can be performed to completely change the facial structure and is recommended for cases where functional damage has to be cured. Global plastic surgery may also be used to entirely change the facial appearance, skin texture and other facial geometry, making it difficult for any face recognition system to recognize faces before and after surgery. Apart from global plastic surgery, local plastic surgery is the process of transforming some of the facial features to remove scars, marks, birthmarks and injuries caused by accidents or fatalities; it is also used for cosmetic and aesthetic purposes. Different types of local plastic surgery are rhinoplasty (nose surgery), blepharoplasty (eyelid surgery), lip augmentation, cheek implant, otoplasty and cranioplasty [6]. These facial transformations often lead to key challenges, as they change some facial features permanently. The change in a facial feature further changes the facial appearance of the subject under consideration, which makes face recognition a more challenging task. For example, Fig. 1 depicts facial images captured before and after plastic surgery for two different subjects from the PSD face database of IIIT-D [7]. Fig. 1(a) depicts the facial image of a person after rhinoplasty, a type of local plastic surgery mentioned earlier, and Fig. 1(b) shows the facial image of the same person before plastic surgery. These two images highlight the large degree of variation in the appearance of the facial images of the same person. Similarly, Fig. 1(c) and Fig. 1(d) depict the face of another person after and before local plastic surgery (blepharoplasty) and the variations in facial appearance involved. From Fig. 1 it is very clear that such facial transformations change the appearance drastically, as some facial features are changed permanently by the varying effects of plastic surgery. This also highlights the complexity involved in recognizing a particular person from these pre-surgery and post-surgery facial images.

This challenging issue of large variation in the facial appearance of a person due to plastic surgery motivates us to propose a patch-based approach for face recognition under plastic surgery. The proposed approach first divides the facial image into patches. Further, for each patch of the face of the subject under consideration, facial features are extracted for recognition, and nearest matching is performed using a k-NN classifier. To evaluate the proposed approach, experimentation has been performed on the IIIT-D facial database [7].

The paper is organized as follows: Section II presents a summary of the face feature extraction methods used for local features and the various classifiers that have been used. Section III describes the proposed approach for face recognition under plastic surgery. Lastly, Section IV presents the experimental setup and results, followed by the conclusion.

Fig. 1. Images showing the degree to which the appearance of a human face can be modified by plastic surgery. Top row: (a) after and (b) before local plastic surgery (rhinoplasty). Bottom row: (c) after and (d) before local plastic surgery (blepharoplasty) [7]
II. FEATURE EXTRACTION AND CLASSIFICATION TECHNIQUES

Plastic-surgery-based face recognition is a newly explored face recognition problem, first addressed by Richa Singh et al. in [8]. Recently, many researchers have proposed a variety of approaches to face recognition. In this section, we present a brief summary of a few recently used feature extraction methods and classifiers that are applied to face recognition in general and can be used for plastic-surgery face recognition in particular. The different feature extraction methods and classifiers used for multiple patches of a single facial image are listed in Table I.

TABLE I. VARIOUS TECHNIQUES AVAILABLE FOR FACE RECOGNITION FOR MULTIPLE PATCHES

Feature extraction technique | Classifier | Database / performance (recognition rate in %)
DCT (2008, [9]) | LS-SVM | ORL, 96.1%
LBP+GLBP (2014, [10]) | DCT | LFW, 92.84%
PCA+LDA (2015, [11]) | Nearest neighbor | PSD, 86.4%
LBP (2016, [12]) | Base classifiers | ORL, Yale, Extended Yale B and CMU PIE
PCA (2017, [13]) | Artificial neural network | ORL, AR (mse 1.7)
LBP+KPCA (2017, [14]) | SVM | AR, 98%

Zhang in [9] proposed a face recognition model based on a DCT sub-image feature extraction method and a least-squares support vector machine (LS-SVM) classifier, which is effective and has a fast recognition speed. The experimentation was performed on the ORL face database with 96.1% accuracy. The advantage of the DCT is that it is independent of the characteristics of the data and can be implemented using a fast algorithm. The motivation behind the LS-SVM was to use the DCT as an approximation tool, since the DCT has high generalization capability; in addition, it can provide a global solution in minimal time. However, using only the DCT for face recognition as proposed in [9] has not shown effective performance. To overcome this limitation, the authors in [10] proposed the integration of DCT hashing with a descriptive reference face graph that covers a variety of face images with different pose, expression, scale and illumination. Extensive empirical experiments on several publicly available databases demonstrated that the proposed method outperformed state-of-the-art methods with similar feature descriptor types and achieved an accuracy of 91.1%. Although the overall performance of the approach proposed in [10] is not better than that of [9], the authors in [10] dealt with a large degree of variation in pose, expression, scale and illumination. Later, the authors in [11] proposed region-based strategies for addressing the problem of face recognition after facial plastic surgery by applying a nearest-neighbor algorithm to features extracted with PCA and LDA. The authors performed experimentation on the PSD database with a recognition rate of 86.4%, which yielded much better performance than state-of-the-art algorithms from both the holistic and region-based categories. Subsequently, the authors in [12] proposed an LBP texture feature in which random feature sampling was performed on sub-images in combination with the random subspace method, and base classifiers were trained on the randomly sampled features to build an ensemble. The performance of this method was evaluated on the ORL, Yale, Extended Yale-B and CMU PIE face databases with high accuracy. Further, in [13] face recognition on surgically altered faces was presented using principal component analysis along with an Artificial Neural Network (ANN). The ANN paradigm used was a Multilayer Feed-forward Neural Network (MFNN), which aids in minimizing the squared-error function. The experimentation was performed on the ORL and AR face databases to achieve a minimum squared error, which helped in achieving a better recognition rate. From the recent literature, it has been observed that Principal Component Analysis (PCA) [15], Independent Component Analysis (ICA) [16] and Linear Discriminant Analysis (LDA) [17] are the linear techniques most widely used by the research community. However, these methods are not efficient due to the non-linear characteristics of the face. Non-linear kernel-based techniques, namely Kernel PCA (KPCA), have been proposed in [18] to solve the problem of non-linearity. In addition, taking advantage of KPCA, the researchers in [14] proposed a combined approach of LBP and KPCA for feature extraction and dimensionality reduction; they specifically focused on a random patch-sampling technique to build multiple sub-SVM classifiers. The experimentation was performed on the AR face database with recognition rates between 92% and 98%.
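To make the DCT sub-image features discussed for [9] concrete, the short Python sketch below keeps only the low-frequency block of the 2-D DCT of a sub-image as a compact feature vector. This is a generic illustration rather than the exact configuration of [9]; the block size k and the use of SciPy's DCT routine are assumptions.

import numpy as np
from scipy.fft import dctn

def dct_features(sub_image, k=8):
    """Keep the k x k lowest-frequency 2-D DCT coefficients of a
    grayscale sub-image as a flat feature vector (k is an assumed
    block size, not taken from [9])."""
    coeffs = dctn(sub_image.astype(float), type=2, norm="ortho")
    return coeffs[:k, :k].ravel()

# Example with a synthetic grayscale sub-image.
rng = np.random.default_rng(0)
sub_image = rng.integers(0, 256, size=(64, 64))
print(dct_features(sub_image).shape)  # (64,)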
III. PROPOSED APPROACH

For plastic surgery face recognition, we propose an approach, namely patch-based face recognition. The proposed patch-based approach consists of two major modules, which are common to any face recognition system: feature extraction and the recognizer. In the first module, we divide the complete face image into three different patches covering the eye region, the nose region and the lip region respectively. Further, statistical features are extracted from each patch separately. In the second module, these extracted features are used to find the nearest match using a k-NN classifier, which gives the correct classification or misclassification response. The overall architecture of the proposed approach is depicted in Fig. 2. Fig. 3 shows the partitioning of a face image into three patches, namely the eye patch, the nose patch and the mouth patch; the statistical features are extracted from each of these three patches. The systematic working of the proposed patch-based approach for train and test images is summarized in Algorithm 1.
Fig. 2. Overall architecture of the proposed approach

Fig. 3. Three patches of a facial image

Fig. 4. Training (the top row) and testing (the bottom row) sample facial images for rhinoplasty surgery from the PSD database [7]

Fig. 5. Training (the top row) and testing (the bottom row) sample facial images for blepharoplasty surgery from the PSD database [7]

Algorithm 1: Patch-based method
Input: Train images X = {x1, x2, ..., xN}, where N is the total number of training samples; test images Y = {y1, y2, ..., yM}, where M is the total number of test samples
Output: Matched image {R}
Step 1: Partition each train/test image into three horizontal patches:
    xi = {xi1, xi2, xi3} (three patches of the i-th training sample, represented as xij, where i: 1 to N, j: 1 to 3)
    yi = {yi1, yi2, yi3} (three patches of the i-th test sample, represented as yij, where i: 1 to M, j: 1 to 3)
Step 2: Compute the statistical features of all xi and yi as B = {bi | i = 1 to 3} = {mean value; standard deviation; entropy}
Step 3: Save all B for the train images {xi} ∈ X in a look-up table for classification
Step 4: Build the k-NN classifier:
    for i = 1 to M do
        compare all B in the look-up table for X with the B of test image {yi}
    end
Step 5: Return the nearest matched image {R}
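A minimal Python sketch of Algorithm 1 is given below. The 273 × 238 image size and 91 × 238 patch size follow Section IV; the entropy definition (Shannon entropy of the gray-level histogram), the Euclidean distance and k = 1 for the k-NN step are not specified in the paper and are therefore assumptions.

import numpy as np

def partition(img):
    """Step 1: split a 273 x 238 grayscale image into three
    91 x 238 horizontal patches (eye, nose and mouth regions)."""
    return [img[i * 91:(i + 1) * 91, :] for i in range(3)]

def patch_features(patch):
    """Step 2: B = {mean, standard deviation, entropy} for one patch.
    Entropy is assumed to be the Shannon entropy of the gray-level
    histogram; the paper does not define it explicitly."""
    hist, _ = np.histogram(patch, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    entropy = -np.sum(hist * np.log2(hist))
    return np.array([patch.mean(), patch.std(), entropy])

def image_features(img):
    """Concatenate the per-patch features into one 9-D vector."""
    return np.concatenate([patch_features(p) for p in partition(img)])

def build_lookup_table(train_images):
    """Step 3: store the features of all training images."""
    return np.stack([image_features(x) for x in train_images])

def nearest_match(lookup_table, test_image):
    """Steps 4-5: index of the nearest training image
    (1-NN with Euclidean distance, both assumed choices)."""
    f = image_features(test_image)
    distances = np.linalg.norm(lookup_table - f, axis=1)
    return int(np.argmin(distances))

# Toy usage: five synthetic "training" images with distinct brightness
# levels stand in for pre-surgery PSD samples.
rng = np.random.default_rng(1)
train = [40 * (i + 1) + rng.integers(0, 20, size=(273, 238))
         for i in range(5)]
test = train[2] + rng.normal(0.0, 2.0, size=(273, 238))  # noisy copy
table = build_lookup_table(train)
print(nearest_match(table, test))  # 2 for these synthetic images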

IV. EXPERIMENTATION AND RESULTS

In this section, we illustrate the experimentation and results of the proposed work. The objective of the proposed work, as stated earlier, is to tackle three different types of plastic-surgery faces. Therefore, the proposed work is evaluated for three different types of facial plastic surgery, namely blepharoplasty (eyelid surgery), lip-augmentation and rhinoplasty (nose surgery). For experimentation, a subset of the plastic surgery face database (PSD) [7] has been used. This subset consists of 197 samples of blepharoplasty (eyelid surgery), 15 samples of lip-augmentation and 329 samples of rhinoplasty (nose surgery), with pre-surgical and post-surgical facial images. Each facial image of an individual subject is a cropped image of 273 × 238 pixels. Fig. 4, Fig. 5 and Fig. 6 present a few samples of rhinoplasty, blepharoplasty and lip-augmentation, respectively, from the PSD database [7]. In our experiment, for each surgery, five images from the top rows of Fig. 4, Fig. 5 and Fig. 6 are chosen to create the training set. Likewise, five images from the bottom rows of Fig. 4, Fig. 5 and Fig. 6 are chosen to generate the testing set.

Fig. 6. Training (the top row) and testing (the bottom row) sample facial images for lip-augmentation surgery from the PSD database [7]

Fig. 7. Patch-based partitioning of a facial image

Further, each train and test image is partitioned into three horizontal patches, each of size 91 × 238 pixels, as shown in Fig. 7. Next, the statistical features of each patch are computed as discussed earlier. Let B represent the set of statistical parameters that we derive from a given patch; the values obtained for the train and test images are illustrated in Table II and Table III respectively. Each feature is represented as b1, b2 and b3, as given in (1):

B = {bi | i = 1 to 3} = {mean value; standard deviation; entropy}    (1)
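The entropy feature in (1) is not defined explicitly in the paper; a common choice, assumed here, is the Shannon entropy of the patch's gray-level histogram. With n pixels per patch, gray values I(x, y) and pk the relative frequency of gray level k, the three features can be written as b1 = (1/n) Σ I(x, y), b2 = sqrt((1/n) Σ (I(x, y) − b1)²) and b3 = −Σ pk log2 pk.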
The extracted features are further used as input to the k-NN classifier for recognition. The results of the proposed approach for a few test samples are tabulated in Table IV for the three types of facial plastic surgery under consideration, namely rhinoplasty, blepharoplasty and lip-augmentation. In Table IV (A), Table IV (B) and Table IV (C), the first column gives the post-plastic-surgery test image number (TIN) for the different test samples. The second column (TF) gives the target face, i.e., the actual pre-surgical facial image of the test sample from the training dataset. The third column represents the post-plastic-surgery test image, shown here through its patch feature values. Lastly, the fourth column (PMC) gives the pre-surgical facial image predicted by the k-NN classifier for each post-surgical facial test image.

TABLE II. INFORMATION TABLE FOR TRAIN IMAGES FROM PSD DATASET

Image | Image patch | b1 | b2 | b3
I1 | I1P1 | 141.51 | 0.42 | 0.11
I1 | I1P2 | 157.24 | 0.09 | 0.18
I1 | I1P3 | 150.00 | 0.31 | 0.12
I2 | I2P1 | 129.03 | 1.26 | 0.11
I2 | I2P2 | 143.25 | 1.14 | 0.17
I2 | I2P3 | 107.74 | 2.38 | 0.12
I3 | I3P1 | 178.06 | 0.22 | 0.11
I3 | I3P2 | 194.78 | 0.33 | 0.17
I3 | I3P3 | 191.32 | 1.75 | 0.11
I4 | I4P1 | 74.23 | 1.82 | 0.11
I4 | I4P2 | 93.26 | 1.15 | 0.16
I4 | I4P3 | 81.52 | 1.00 | 0.10
I5 | I5P1 | 92.36 | 2.41 | 0.11
I5 | I5P2 | 95.96 | 0.46 | 0.16
I5 | I5P3 | 71.32 | 0.64 | 0.17

TABLE III. INFORMATION TABLE FOR TEST IMAGES FROM PSD DATASET

Image | Image patch | b1 | b2 | b3
I1 | I1P1 | 147.12 | 0.26 | 0.11
I1 | I1P2 | 151.05 | 0.60 | 0.18
I1 | I1P3 | 139.38 | 0.85 | 0.11
I2 | I2P1 | 123.97 | 0.84 | 0.12
I2 | I2P2 | 137.87 | 1.53 | 0.18
I2 | I2P3 | 95.63 | 2.50 | 0.14
I3 | I3P1 | 168.63 | 0.19 | 0.11
I3 | I3P2 | 183.29 | 0.47 | 0.17
I3 | I3P3 | 182.42 | 1.79 | 0.11
I4 | I4P1 | 63.84 | 2.30 | 0.13
I4 | I4P2 | 82.35 | 0.68 | 0.17
I4 | I4P3 | 73.62 | 1.83 | 0.14
I5 | I5P1 | 77.34 | 3.36 | 0.14
I5 | I5P2 | 77.33 | 2.66 | 0.18
I5 | I5P3 | 64.95 | 1.43 | 0.22

TABLE IV. RESULTANT TABLE OF K-NN FOR RHINOPLASTY, BLEPHAROPLASTY AND LIP-AUGMENTATION FROM PSD DATASET

(A) RHINOPLASTY (NOSE SURGERY)

TIN | TF | Test image patch features (b1, b2, b3) | PMC
I1 | 1 | 147.1, 0.3, 0.1; 151.1, 0.6, 0.2; 139.4, 0.8, 0.1 | 1
I2 | 2 | 124.0, 0.8, 0.1; 137.9, 1.5, 0.2; 95.6, 2.5, 0.1 | 2
I3 | 3 | 168.6, 0.2, 0.1; 183.3, 0.5, 0.2; 182.4, 1.8, 0.1 | 3
I4 | 4 | 63.8, 2.3, 0.1; 82.3, 0.7, 0.2; 73.6, 1.8, 0.1 | 4
I5 | 5 | 77.3, 3.4, 0.1; 77.3, 2.7, 0.2; 65.0, 1.4, 0.2 | 4

(B) BLEPHAROPLASTY (EYELID SURGERY)

TIN | TF | Test image patch features (b1, b2, b3) | PMC
I1 | 1 | 28.0, 2.0, 0.6; 85.4, 1.6, 0.6; 73.3, 1.2, 0.5 | 1
I2 | 2 | 63.0, 1.8, 0.4; 109.5, 2.0, 0.3; 89.6, 1.5, 0.4 | 2
I3 | 3 | 119.7, 1.3, 0.1; 102.3, 2.1, 0.2; 96.5, 3.4, 0.1 | 3
I4 | 4 | 95.1, 1.3, 0.1; 107.7, 2.1, 0.2; 121.9, 2.4, 0.1 | 4
I5 | 5 | 108.4, 0.6, 0.1; 83.3, 2.5, 0.2; 82.6, 2.9, 0.1 | 1

(C) LIP-AUGMENTATION

TIN | TF | Test image patch features (b1, b2, b3) | PMC
I1 | 1 | 75.0, 0.5, 0.1; 129.6, 0.7, 0.1; 98.3, 2.0, 0.1 | 1
I2 | 2 | 87.7, 2.9, 0.1; 130.3, 0.9, 0.1; 88.4, 1.8, 0.1 | 1
I3 | 3 | 56.7, 7.2, 0.1; 68.6, 3.3, 0.2; 43.3, 1.3, 0.2 | 3
I4 | 4 | 120.4, 2.9, 0.1; 124.8, 2.3, 0.1; 86.9, 1.1, 0.1 | 2
I5 | 5 | 78.2, 0.4, 0.1; 85.6, 2.5, 0.1; 71.7, 0.1, 0.1 | 3

From Table IV (A), Table IV (B) and Table IV (C), the value predicted by k-NN for each test image can be compared with the target image value in the second column. This comparison helps in deciding whether a test image has been successfully matched or not. For instance, from Table IV (A) it is clear that test image I1 has target face number 1, and from the fourth column of Table IV (A) it is equally clear that I1 has been correctly classified, i.e., the predicted face number is 1.
On the other hand, from Table IV (C), test image I5, whose target face number is 5, has been misclassified as target face number 3. Similarly, the overall recognition rate of the k-NN classifier is evaluated.

The recognition rates for the three different types of plastic surgery are depicted in Table V. From the results tabulated in Table V, it can be observed that rhinoplasty achieves a recognition rate of 70%, which is the lowest among the three types of facial plastic surgery. On the other hand, a recognition rate of 81% is achieved for blepharoplasty, and lip-augmentation achieves a recognition rate of 77%. From the results obtained, it can be observed that blepharoplasty has less impact on the change in facial appearance than the other two facial plastic surgeries under consideration; therefore, the proposed approach performs best for blepharoplasty. The nose is at the center of the facial image and is an important part of the facial structure, so any change in the structure of the nose through rhinoplasty changes the facial appearance to a great extent, and due to this the performance of the proposed approach deteriorates to 70%. Similarly, lip-augmentation also has a great impact on the appearance of the facial image, and any change in the structure of the lips due to plastic surgery greatly affects the performance.

TABLE V. COMPARISON OF THE RECOGNITION RATES OF THREE DIFFERENT SURGERIES ON THE PSD FACE DATASET

Type of local plastic surgery | Rhinoplasty (nose surgery) | Blepharoplasty (eyelid surgery) | Lip-augmentation
Recognition rate | 70% | 81% | 77%
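The recognition rates in Table V are the fraction of post-surgery test images whose predicted face number matches the target face number. A minimal sketch, assuming one predicted face number per test image (as in the PMC column of Table IV):

def recognition_rate(predicted, target):
    """Percentage of test images whose predicted face number (PMC)
    equals the target face number (TF)."""
    correct = sum(p == t for p, t in zip(predicted, target))
    return 100.0 * correct / len(target)

# e.g. recognition_rate([1, 2, 3, 4, 1], [1, 2, 3, 4, 5]) -> 80.0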
V. CONCLUSION

In this paper, we proposed a patch-based approach to plastic surgery face recognition for three specific types of facial plastic surgery, viz. rhinoplasty, blepharoplasty and lip-augmentation. First, we presented different methods used in the literature for region-based recognition. Secondly, the proposed patch-based approach was implemented using statistical features and a k-NN classifier, and experimentation was performed on the PSD dataset. From the results obtained, it has been observed that the proposed approach is robust against the three types of facial plastic surgery and that it performs best for blepharoplasty. It has also been concluded that rhinoplasty and lip-augmentation have a great impact on the change in facial appearance due to plastic surgery. The proposed approach shows an average performance of 76%.

Currently, the proposed approach has been tested on a limited number of test samples. In future, the proposed patch-based approach will be evaluated on a larger number of test images, and its results will be compared with the performance of the approaches proposed in the literature. In addition, the patch-based approach can be combined with better feature extraction techniques to improve the performance of plastic surgery face recognition.

REFERENCES

[1] X. Liu and T. Chen, “Pose-robust face recognition using geometry assisted probabilistic modeling,” in Proceedings of the International Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 502–509, 2005.
[2] S. Li, R. Chu, S. Liao, and L. Zhang, “Illumination invariant face recognition using near-infrared images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 627–639, 2007.
[3] P. Michel and R. Kaliouby, “Real time facial expression recognition in video using support vector machines,” in Proceedings of the 5th International Conference on Multimodal Interfaces, USA, pp. 258–264, 2003.
[4] A. Lanitis, C. J. Taylor, and T. F. Cootes, “Toward automatic simulation of aging effects on face images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 4, pp. 442–450, 2002.
[5] G. Nirmala Priya and R. S. Wahida Banu, “Occlusion invariant face recognition using mean based weight matrix and support vector machine,” Indian Academy of Sciences, vol. 7, no. 2, pp. 154–160, April 2014.
[6] R. Singh, M. Vatsa, H. S. Bhatt, S. Bharadwaj, A. Noore, and S. S. Nooreyezdan, “Plastic surgery: a new dimension to face recognition,” IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 441–448, Sept. 2010.
[7] “Plastic Surgery Face Database,” accessed August 2018. [Online].
[8] R. Singh, M. Vatsa, H. S. Bhatt, S. Bharadwaj, A. Noore, and S. S. Nooreyezdan, “Plastic surgery: a new dimension to face recognition,” IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 441–448, 2010.
[9] X. Zhang and J. Zou, “Face recognition based on sub-image feature extraction and LS-SVM,” in 2008 International Conference on Computer Science and Software Engineering, Hubei, pp. 772–775, 2008.
[10] M. Kafai and B. Bhanu, “Reference face graph for face recognition,” IEEE Transactions on Information Forensics and Security, vol. 9, no. 12, 2014.
[11] M. De Marsico, M. Nappi, D. Riccio, and H. Wechsler, “Robust face recognition after plastic surgery using region-based approaches,” Pattern Recognition, vol. 48, pp. 1261–1276, 2015.
[12] J. Wang, X. Xie, M. Hu, X. Deng, and Q. Chen, “Face recognition based on LBP preprocessing and sub-image feature sampling,” vol. 37, pp. 85–91, 2016.
[13] G. George, R. Boben, B. Radhakrishnan, and L. P. Suresh, “Face recognition on surgically altered faces using principal component analysis,” in International Conference on Circuit, Power and Computing Technologies (ICCPCT), Kollam, pp. 1–6, 2017.
[14] Cheheb, N. Al-Maadeed, S. Al-Madeed, A. Bouridane, and R. Jiang, “Random sampling for patch-based face recognition,” in 2017 5th International Workshop on Biometrics and Forensics (IWBF), Coventry, pp. 1–5, 2017.
[15] T. F. Karim, M. S. H. Lipu, M. L. Rahman, and F. Sultana, “Face recognition using PCA-based method,” in 2010 IEEE International Conference on Advanced Management Science (ICAMS 2010), vol. 3, pp. 158–162, July 2010.
[16] M. S. Bartlett, J. R. Movellan, and T. J. Sejnowski, “Face recognition by independent component analysis,” IEEE Transactions on Neural Networks, vol. 13, no. 6, pp. 1450–1464, Nov. 2002.
[17] Yang, H. Yu, and W. Kunz, “An efficient LDA algorithm for face recognition,” in Proceedings of the International Conference on Automation, Robotics, and Computer Vision (ICARCV 2000), pp. 34–47, 2000.
[18] Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, “Face recognition using kernel direct discriminant analysis algorithms,” IEEE Transactions on Neural Networks, vol. 14, no. 1, pp. 117–126, 2003.
