JOURNAL OF INFORMATION AND COMMUNICATION TECHNOLOGIES, VOLUME 2, ISSUE 9, OCTOBER 2012

Real time Face Recognition using Curvelet
Transform and Complete Local Binary Pattern
Sirshendu Arosh, Subhasis Chand and G.N.Rathna
Abstract—In this paper, we propose a novel method of face recognition. The method combines the curvelet transform, the complete local binary pattern (CLBP), PCA and SVM. The curvelet transform is a multi-resolution and directional method that efficiently represents curve singularities and edges in an image. An image is decomposed into curvelet subbands at three different resolutions, resulting in a total of nine output images. CLBP is then applied to extract descriptive feature sets, and non-linear, uncorrelated feature sets are obtained after applying PCA. The dataset is trained using SVM to obtain feature face vectors. The method was tested on the ORL, Faces94 and Grimace databases and found to be about 98.5%, 99.74% and 99.45% accurate respectively. A real-time implementation is carried out on a PC and on a Beagle Board using a webcam.
Index Terms—Curvelet transform, Complete Local Binary Pattern, PCA, SVM.
1 INTRODUCTION
FACE recognition is a very effective identification technique in public security systems and for checking or tracking a person in a crowd. Studies of the human visual system and image statistics reveal that image representations should satisfy multi-resolution, localization, critical sampling, directionality and anisotropy. The wavelet transform is the well-known multi-resolution tool, but with good localization and critical sampling quality only. The next-generation multi-resolution techniques, the ridgelet, contourlet and curvelet transforms, have proved efficient for directionality and anisotropy as well.
The curvelet transform, developed by Candes and Donoho [1], [2], yields a sparser representation of an image. For an image, the curvelet transform gives three output parameters: a scale parameter, an orientation parameter, and spatial location parameters. The curvelet transform has already been used for different image processing purposes such as image denoising, image fusion, image compression and texture classification. A face recognition problem using curvelet and PCA is proposed by Mandal and Wu [3]. It outperforms the eigenfaces technique and wavelet
based PCA. A method combining curvelet and LDA is proposed in [4]; it works well with insufficient data and achieves higher accuracy, but it suffers when the number of training samples available for each subject is smaller than the dimensionality of the samples and when face patterns are subject to large variations in viewpoint or illumination. An algorithm combining curvelet and SVM
for vehicle recognition was designed in [5]. A numeral
recognition algorithm using curvelet transform was pro-
posed in [6]. In [7], a curvelet based face recognition sys-
tem by fusing results from multiple SVM classifiers
trained with curvelet coefficients from images is pro-
posed. This technique appears to be robust to changes in facial expression, but its efficiency reduces with changes in illumination and with increases in database size.
The proposed method is efficient under changing lighting conditions. A real-time implementation is done in OpenCV [8] and is ported to the Beagle Board [9], a low-power open-source single-board computer produced by Texas Instruments in association with Digi-Key.
In this paper, a novel method of face recognition is
proposed using curvelet transform and complete local
binary pattern (CLBP) [10], a generalized local binary pattern that gives higher accuracy in texture classification than the normal LBP. The paper is structured as follows: Section 2 discusses the curvelet transform. The complete local binary pattern is explained in section 3. The general concepts of PCA and SVM are given in sections 4 and 5. In section 6, the method is explained with flowcharts, and the results are discussed in section 7. In section 8, conclusions and future work are discussed.
2 CURVELET TRANSFORM
The wavelet transform works well on piecewise smooth functions in one direction, but it cannot handle curve discontinuities. To overcome this drawback, Candes and Donoho [11] first proposed the ridgelet method for dealing with line singularities in 2-D. Next, the curvelet transform was proposed, which represents curve singularities and hyperplane singularities in higher dimensions. The curvelet transform takes image edges as the basic representation element, and it is anisotropic since it has both variable length and width. The difference between the general wavelet and
————————————————
- Mr. Sirshendu Arosh is a student at the Indian Institute of Science, Bangalore, India.
- Mr. Subhasis Chand is a student at the National Institute of Technology, Rourkela, Orissa, India.
- Dr. G.N. Rathna is Principal Research Scientist at the Indian Institute of Science, Bangalore, India.


© 2012 JICT
www.jict.co.uk

curvelet has been shown in figure 1. The first-generation curvelet transform is the ridgelet transform, which works on line


Fig. 1. Edge representation by wavelets and curvelets (a) wavelet;
(b) curvelet [21].

discontinuities. The second-generation curvelet transform consists of four steps: sub-band decomposition, smooth partitioning, renormalization and ridgelet transform. This new-generation curvelet transform is discrete, and it is faster and less redundant than the first-generation curvelet transform on digital images.
There are two different implementations of the second-generation curvelet transform: curvelet via USFFT (Unequally Spaced Fast Fourier Transform) and curvelet via wrapping. They differ in the spatial grid used to translate the curvelets at each scale and angle. Curvelet via wrapping is faster than curvelet via USFFT, and it is the implementation used in this paper.
In wrapping based curvelet transform, a decimated
rectangular grid is aligned with the input image. Curvelet
transform using wrapping takes a 2-D image f[m, n], with 0 ≤ m < M and 0 ≤ n < N, as input and generates a set of curvelet coefficients indexed by a scale j, an orientation l, and two spatial location parameters (k1, k2) as output. The discrete curvelet coefficients can be defined by

    C^D(j, l, k1, k2) = Σ_{0 ≤ m < M} Σ_{0 ≤ n < N} f[m, n] φ^D_{j,l,k1,k2}[m, n]        (1)

where each φ^D_{j,l,k1,k2}[m, n] is a digital curvelet waveform. This approach implements the effective parabolic scaling law on the sub-bands in the frequency domain to capture curved edges within the image more effectively. The wrapping-based curvelet transform is thus a multi-scale transform with a pyramid structure consisting of many orientations at each scale, as shown in figure 2.

Fig. 2. 5-level curvelet digital pyramid structure of an image [21].

Fig. 3. Complete Local Binary Pattern algorithm [10].


Fig. 4. Central pixel and its P circularly and evenly spaced neighbors
with radius R [10].


Fig. 5. Support Vector Machine.






Fig. 6. Current approach vs. approach in proposed method.



Fig. 7. Face Detection Algorithm.


Fig. 8. Face Recognition Algorithm: Training.



Fig. 9. Face Recognition Algorithm: Testing.

The discrete curvelet transform can be implemented in four steps:
1. A 2-D fast Fourier transform of the input image, known as sub-band decomposition.
2. Multiplication by a window U_j for each scale and angle, referred to as smooth partitioning.
3. Wrapping of this product around the origin, which performs the renormalization.
4. Finally, an inverse 2-D Fourier transform to obtain the curvelet coefficients.
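The four steps above can be sketched, very schematically, for a single (scale, angle) band. This is a hedged illustration only: the real curvelet windows U_j are anisotropic wedge-shaped windows in the frequency plane, and here a hypothetical isotropic Gaussian stand-in is used, so the code shows the FFT → window → wrap → inverse-FFT flow rather than a faithful curvelet implementation (the function name and window are invented for this sketch).

```python
import numpy as np

def toy_curvelet_band(img, scale_sigma):
    """Schematic sketch of the four wrapping-transform steps on ONE
    (scale, angle) band; the true curvelet wedge window is replaced
    by an isotropic Gaussian stand-in for brevity."""
    M, N = img.shape
    # 1. Sub-band decomposition: 2-D FFT of the input image.
    F = np.fft.fft2(img)
    # 2. Smooth partitioning: multiply by a frequency window U_j
    #    (hypothetical Gaussian stand-in for the real curvelet wedge).
    fy = np.fft.fftfreq(M)[:, None]
    fx = np.fft.fftfreq(N)[None, :]
    U = np.exp(-(fx**2 + fy**2) / (2 * scale_sigma**2))
    FW = F * U
    # 3. Wrapping: re-index the windowed spectrum around the origin
    #    onto a smaller rectangular grid (periodic tiling).
    m, n = M // 2, N // 2
    wrapped = FW[np.arange(m)[:, None] % M, np.arange(n)[None, :] % N]
    # 4. Inverse 2-D FFT gives the (complex) coefficients of this band.
    return np.fft.ifft2(wrapped)

coeffs = toy_curvelet_band(np.random.rand(64, 64), 0.1)
print(coeffs.shape)  # (32, 32)
```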


3 COMPLETE LOCAL BINARY PATTERN (CLBP)
In order to enhance the information from curvelet trans-
form, Complete Local Binary Pattern (CLBP) is used.
CLBP is a generalized version of LBP which is introduced
by Guo et al [10] and it has proved to be effective on tex-
ture classification.
In CLBP, a local region is represented by its center pixel and by the local differences between the neighboring pixels and the center, decomposed into sign and magnitude components; this decomposition is called the local difference sign-magnitude transform (LDSMT). CLBP has three components: CLBP_S encodes the sign (positive or negative) of the difference between each neighboring pixel and the center pixel, CLBP_M encodes the magnitude of that difference, and CLBP_C encodes the center pixel value thresholded against the average gray level of the whole image. CLBP_S is nothing but the normal LBP. The algorithm is illustrated in figure 3. Mathematically,
    CLBP_S_{P,R} = Σ_{p=0}^{P−1} s(g_p − g_c) 2^p,   s(x) = 1 if x ≥ 0, 0 if x < 0        (2)

where g_c is the gray value of the center pixel and g_p is the value of its neighbors, P is the total number of involved neighbors, and R is the radius of the neighborhood, as illustrated in figure 4.
CLBP-M is calculated as same as CLBP-S but it deals
with the difference of the magnitude. Mathematically,
    CLBP_M_{P,R} = Σ_{p=0}^{P−1} t(m_p, c) 2^p,   t(x, c) = 1 if x ≥ c, 0 if x < c        (3)

where m_p = |g_p − g_c| is the magnitude of the local difference.
Here c is a threshold that can be determined adaptively. The central image pixel also has discriminant information; it is coded by the operator CLBP_C, given mathematically by

    CLBP_C_{P,R} = t(g_c, c_I)        (4)

where t is defined in equation (3) and the threshold c_I is the average gray level of the whole image. The three operators CLBP_S, CLBP_M and CLBP_C can be combined in two ways, jointly or hybridly. In the first way, a 3-D joint histogram is built. In the hybrid way, a 2-D joint histogram CLBP_S/C or CLBP_M/C is computed first, converted to a 1-D histogram, and then concatenated with CLBP_M or CLBP_S to generate the joint histogram. By this method, the CLBP feature map is obtained from the curvelet output images.
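A minimal sketch of the three operators, assuming P = 8, R = 1 with the eight immediate neighbors (no circular interpolation) and the mean local difference magnitude as the adaptive threshold c; the function name and these choices are illustrative, not the paper's exact implementation:

```python
import numpy as np

def clbp_codes(img):
    """Sketch of CLBP_S, CLBP_M and CLBP_C for P = 8, R = 1 (the eight
    immediate neighbours, no interpolation), on interior pixels only."""
    img = img.astype(float)
    gc = img[1:-1, 1:-1]                          # centre pixels g_c
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    # Local differences d_p = g_p - g_c for every neighbour p.
    d = np.stack([img[1 + dy:img.shape[0] - 1 + dy,
                      1 + dx:img.shape[1] - 1 + dx] - gc
                  for dy, dx in offsets])
    c = np.abs(d).mean()                          # adaptive threshold for CLBP_M
    weights = (1 << np.arange(8))[:, None, None]  # 2^p bit weights
    clbp_s = ((d >= 0) * weights).sum(axis=0)            # sign (= normal LBP)
    clbp_m = ((np.abs(d) >= c) * weights).sum(axis=0)    # magnitude
    clbp_c = (gc >= img.mean()).astype(int)       # centre vs. global average
    return clbp_s, clbp_m, clbp_c

s, m, c = clbp_codes(np.ones((5, 5)))
print(s[0, 0])  # 255  (all local differences are zero, so every sign bit is set)
```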
4 PCA (PRINCIPAL COMPONENT ANALYSIS)
Principal component analysis (PCA) is a standard tool in modern data analysis that provides a process to reduce a complex data set to a lower dimension and reveal its sometimes hidden, simplified structures. The basic idea of principal component analysis is to identify the most useful basis in which to re-express a data set; this basis filters out the noise and reveals the hidden structure in the data. The algorithm for calculating PCA is as follows:
1. The mean is computed from the given dataset and subtracted from every sample to obtain a zero-mean dataset.
2. Next, the covariance matrix of the dataset is calculated.
3. Since the covariance matrix is square, its eigenvectors and eigenvalues are calculated.
4. The eigenvector with the highest eigenvalue is the principal component of the data set; the number of components kept determines the reduced dimensionality of the dataset.
5. The final PCA output is the multiplication of the mean-adjusted dataset by the eigenvectors chosen in the previous step.
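The five steps above can be sketched directly with NumPy (the function name is illustrative; each sample is a row of X):

```python
import numpy as np

def pca_reduce(X, k):
    """Sketch of the five PCA steps; X holds one sample per row."""
    # 1. Subtract the mean to obtain a zero-mean dataset.
    Xc = X - X.mean(axis=0)
    # 2. Covariance matrix of the centred data.
    C = np.cov(Xc, rowvar=False)
    # 3. Eigenvectors/eigenvalues of the (square, symmetric) covariance.
    vals, vecs = np.linalg.eigh(C)
    # 4. Keep the k eigenvectors with the largest eigenvalues.
    order = np.argsort(vals)[::-1][:k]
    W = vecs[:, order]
    # 5. Project the mean-adjusted data onto the chosen eigenvectors.
    return Xc @ W

X = np.random.rand(50, 10)
Y = pca_reduce(X, 3)
print(Y.shape)  # (50, 3)
```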
5 SUPPORT VECTOR MACHINE (SVM)
A support vector machine (SVM) is a concept in statistics and machine learning for analyzing data and recognizing patterns, used for classification and regression analysis. The standard SVM takes a set of input data and predicts, for each given input, which of two possible classes the input belongs to, making the SVM a non-probabilistic binary linear classifier.
Suppose some given data points each belong to one of two classes, and the goal is to decide the correct class for a new data point. A data point is viewed as a p-dimensional vector, and the aim is to separate such points with a (p−1)-dimensional hyperplane; this is called a linear classifier. There are many hyperplanes that might classify the data; one reasonable choice for the best hyperplane is the one that represents the largest separation between the two classes.
Given a finite sample of patterns {(x_i, y_i), i = 1, 2, ..., l}, where x_i ∈ R^n and y_i ∈ {−1, +1}, which is linearly separable, there exist w ∈ R^n and b ∈ R such that, for i = 1, 2, ..., l,

    w^T x_i + b > 0   for all i such that y_i = +1        (5)

    w^T x_i + b < 0   for all i such that y_i = −1        (6)

The hyperplane in R^n described by (w, b) that satisfies the above equations is called the separating hyperplane and is given by

    w^T x + b = 0.        (7)

After rescaling (w, b), the two conditions above can be written in the combined form

    y_i [w^T x_i + b] ≥ 1,   i = 1, 2, ..., l.        (8)

Since the classes are linearly separable, there exist infinitely many separating hyperplanes. The distance between the two bounding hyperplanes y_i [w^T x_i + b] = 1 is 2 / ||w||. To maximize this margin, w^T w needs to be minimized, so the constrained optimization problem can be stated as

    min (1/2) w^T w

    subject to y_i [w^T x_i + b] ≥ 1,   i = 1, 2, ..., l        (9)
Introducing Lagrange multipliers, this problem can be solved by standard quadratic programming (QP) techniques to find the optimal hyperplane for the two-class problem. The parameters of the maximum-margin hyperplane are derived by solving this optimization. Several algorithms exist for quickly solving the QP problem that arises from SVMs, mostly relying on heuristics for breaking the problem down into smaller, more manageable chunks. One popular approach is an interior point method that uses Newton-like iterations to find a solution of the Karush–Kuhn–Tucker conditions of the primal and dual problems. To avoid solving a linear system involving the large kernel matrix, such algorithms often use a low-rank approximation to the matrix in the kernel trick.
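The constraints above can be made concrete with a tiny hand-worked check on hypothetical 2-D data: given a candidate hyperplane (w, b) already scaled so the closest points satisfy y_i (w^T x_i + b) = 1, the code verifies constraint (8) for every sample and evaluates the margin 2 / ||w||. The data and hyperplane are invented for illustration:

```python
import math

# Toy linearly separable 2-D data (hypothetical), labels in {-1, +1}.
points = [((2.0, 2.0), +1), ((3.0, 3.0), +1),
          ((-2.0, -2.0), -1), ((-3.0, -3.0), -1)]

# Candidate separating hyperplane w^T x + b = 0, scaled so the closest
# points (the support vectors) satisfy y_i (w^T x_i + b) = 1.
w = (0.25, 0.25)
b = 0.0

# Constraint (8): every sample must satisfy y_i (w^T x_i + b) >= 1.
ok = all(y * (w[0] * x[0] + w[1] * x[1] + b) >= 1.0 for (x, y) in points)
print(ok)  # True

# The geometric margin between the two classes is 2 / ||w||.
margin = 2.0 / math.hypot(w[0], w[1])
print(round(margin, 3))  # 5.657
```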
6 PROPOSED REAL-TIME FACE RECOGNITION
METHOD
6.1 Contribution made in this paper
Curvelet transform with PCA has been used for face recognition in recent methods. Figure 6 contrasts the proposed method with the existing approach. The real-time face detection and recognition algorithm is explained here. The system comprises two sections:
1. Face detection using a webcam, with frames taken from the camera. For an unknown person who is not in the database, a password-protected training set is generated. For testing purposes, 5 frames are taken after detection of the face and saved in the training folder.
2. After detection, the face recognition system starts. When a person comes in front of the camera, if he is not in the database, a face-detected frame is taken for testing. The face recognition algorithm then extracts the features, and the machine learning algorithm classifies the particular face from the others.

The whole algorithm is shown in the flowcharts in figures 7, 8 and 9.

6.2 Explanation of the algorithm
1. Preprocessing phase: After face detection, the face area is extracted and normalized to 112 × 92 pixels. Input faces are divided into training faces and testing faces. In the training stage, a set of known faces (training faces) is used to create the training database or feature set. In the classification stage, an unknown facial image (testing face) is matched against the previously trained feature set by comparing features. The input images are histogram equalized, which completes the preprocessing phase.
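A hedged sketch of this preprocessing step (the paper uses OpenCV; this NumPy stand-in with nearest-neighbor resizing only illustrates the normalize-then-equalize idea, and the function name is invented):

```python
import numpy as np

def preprocess_face(face, size=(112, 92)):
    """Nearest-neighbour resize of an 8-bit grayscale face crop to
    112x92, followed by histogram equalisation via the normalised CDF."""
    h, w = face.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    resized = face[rows[:, None], cols[None, :]]
    # Histogram equalisation: map each gray level through the CDF.
    hist = np.bincount(resized.ravel(), minlength=256)
    cdf = hist.cumsum()
    lut = (255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[resized]

face = (np.random.rand(200, 180) * 255).astype(np.uint8)
out = preprocess_face(face)
print(out.shape)  # (112, 92)
```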
2. Curvelet transform on the normalized image: Curvelet decomposition is performed on the preprocessed 112 × 92 images with scale 2 (coarse and fine) and 8 orientations. The output is one approximate coefficient image of size 75 × 61 and eight detailed coefficient images, four of size 66 × 123 and the rest of size 149 × 54. All the curvelet coefficients are converted to a vector row by row.
3. Generating the feature vector using the complete local binary pattern: The approximate sub-band obtained from the curvelet transform is divided into k regions, each of m × n pixels. From each of these k regions, the CLBP histogram of 255 labels is calculated.
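This step can be sketched as follows, assuming for illustration a 4 × 4 region grid and 256 histogram bins (the paper's k, m, n and its 255-label histogram may differ):

```python
import numpy as np

def regional_histograms(code_map, k_rows=4, k_cols=4, bins=256):
    """Split a CLBP code map into k = k_rows * k_cols regions and
    concatenate the per-region histograms into one feature vector."""
    H, W = code_map.shape
    feats = []
    for i in range(k_rows):
        for j in range(k_cols):
            block = code_map[i * H // k_rows:(i + 1) * H // k_rows,
                             j * W // k_cols:(j + 1) * W // k_cols]
            feats.append(np.bincount(block.ravel(), minlength=bins)[:bins])
    return np.concatenate(feats)

codes = np.random.randint(0, 256, size=(72, 60))
vec = regional_histograms(codes)
print(vec.shape)  # (4096,)
```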
4. Classification using PCA and SVM: The row input patterns from the curvelet sub-bands and the corresponding target patterns are used to train a classifier until it can approximate the target function. Here a PCA-based SVM is used for the training.
7 RESULTS
The face recognition system is divided into two phases: (1) the training phase and (2) the testing phase. The ORL, Faces94 and Grimace [14] face databases have been used to test the algorithm. The whole algorithm is implemented in OpenCV, and a total of 200 images from the ORL database are used for training. The ORL database comprises 40 different persons with 10 face images each; different poses, lighting conditions and expressions, and the presence or absence of spectacles and mustaches, are all covered. Table 1 lists the results on the ORL [12] database and table 2 the results on the Faces94 [13] database.


TABLE 1
Comparison of accuracy results for the different algorithms on
ORL Database.

Recognition Method Recognition Rate (%)
Curvefaces[15] 92.6
Waveletface+PCA[16] 94.5
Curveletface+PCA[17] 96.6
Curvelet+LDA[4] 98.0
SGCT+KPCA[18] 97.0
Proposed Method 98.5

TABLE 2
Comparison of accuracy results for the different algorithms on
Faces94 Database.

Recognition Method Recognition Rate (%)
PCA [19] 98.0
Waveletface+PCA[20] 99.26
Curveletface+PCA [17] 99.30
Proposed Method 99.74

The accuracy obtained by testing the proposed algorithm on the Grimace face database was 99.45%. During this testing, 5 arbitrary images of each person were taken.

During real-time testing, the average recognition time was recorded to be 0.07 second, making the system fast enough for practical use. With a good face detection algorithm, this can be implemented for almost all real-time purposes.
8 CONCLUSION AND FUTURE WORK
In this paper, a face recognition method is proposed that uses curvelet features to enhance the recognition rate while decreasing the computational cost and the time required for recognition. The introduced technique is robust to changes in facial expression, pose variations and lighting conditions, which have been the main challenges for face recognition algorithms so far, and it shows good results on the different standard databases. The whole detection and recognition system described here is implemented in OpenCV, and real-time testing is carried out on a PC. With a Logitech webcam under normal lighting conditions, the algorithm gives good accuracy and can handle multiple faces at the same time and recognize them in real time. Future work is suggested towards experimenting with different lighting conditions, such as high-intensity light on the face or near-darkness, and towards testing noisy images to check the robustness of the system.
9 REFERENCES
[1] E. Candes and D. Donoho. “Curvelets: A surprisingly effective
nonadaptive representation for objects with edges,” Technical
report, DTIC Document, 2000.
[2] E. Candes and D. Donoho. “Continuous curvelet transform: I.
resolution of the wavefront set,” Applied and Computational
Harmonic Analysis, 19(2): pp. 162-197, 2005.
[3] T. Mandal and Q. Wu. “Face recognition using curvelet based
pca,” Pattern Recognition, 2008. ICPR 2008. pp. 1-4. IEEE, 2008.
[4] M. El Aroussi, S. Ghouzali, M. El Hassouni, M. Rziza, and D.
Aboutajdine. “Curvelet-based feature extraction with b-lda for
face recognition,” Computer Systems and Applications, 2009.
AICCSA 2009. pp. 444-448. IEEE, 2009.
[5] F. Kazemi, S. Samadi, H. Poorreza, and M. Akbarzadeh-T. “Ve-
hicle recognition using curvelet transform and svm,” Infor-
mation Technology, 2007. ITNG'07. pp 516-521. IEEE, 2007.
[6] F. Kazemi, J. Izadian, R. Moravejian, and E. Kazemi. “Numeral
recognition using curvelet transform. In Computer Systems and
Applications,” 2008. AICCSA 2008. IEEE/ACS I. pages 606-612.
IEEE, 2008.
[7] T. Mandal, A. Majumdar, and Q. Wu. “Face recognition by
curvelet based feature extraction,” Image Analysis and Recogni-
tion, pages 806-817, 2007.
[8] http://docs.opencv.org/doc/tutorials/tutorials.html
[9] http://beagleboard.org/static/bbsrm latest.pdf
[10] Z. Guo, L. Zhang, and D. Zhang. “A completed modeling of
local binary pattern operator for texture classification,” Image
Processing, IEEE Transactions on, 19(6):1657-1663, 2010.
[11] E. Candes and D. Donoho. “Ridgelets: A key to higher-
dimensional intermittency?” Philosophical Transactions of the
Royal Society of London. Series A: Mathematical, Physical and
Engineering Sciences, 357(1760): pp. 2495-2509, 1999.
[12] http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.h
tml
[13] http://cswww.essex.ac.uk/mv/allfaces/faces94.html
[14] http://cswww.essex.ac.uk/mv/allfaces/grimace.html
[15] J. Zhang, Z. Zhang, W. Huang, Y. Lu, and Y. Wang. “Face
recognition based on curvefaces,” Natural Computation, 2007.
ICNC 2007. Volume 2, pp. 627-631. IEEE, 2007.
[16] C. Liu and H. Wechsler. “Independent component analysis of
gabor features for face recognition,” Neural Networks, IEEE
Transactions on, 14(4): pp. 919-928, 2003.
[17] T. Mandal, Q. Jonathan Wu, and Y. Yuan. “Curvelet based face
recognition via dimension reduction” Signal Processing, 89(12):
pp. 2345-2353, 2009.
[18] P. Shi and X. Li. “Face recognition based on second generation
of curvelet transform and kernel principal component analy-
sis,” Image and Signal Processing (CISP), 2011. Volume 3, pp.
1513-1516. IEEE, 2011.
[19] M. Kirby and L. Sirovich. “Application of the karhunen-loeve
procedure for the characterization of human faces,” Pattern
Analysis and Machine Intelligence, IEEE Transactions on, 12(1):
pp. 103-108, 1990.
[20] G. Feng, P. Yuen, and D. Dai. “Human face recognisation using
PCA on wavelet subband,” Journal of Electronic Imaging, 9:226,
2000.
[21] S. AlZubi, N. Islam, and M. Abbod. “Multiresolution analysis
using wavelet, ridgelet and curvelet transforms for medical im-
age segmentation,” Journal of Biomedical Imaging, 2011:4, 2011.


Mr. Sirshendu Arosh received his B.Tech from HIT(K), Kolkata, India, in 2010. He completed his M.E. at the Indian Institute of Science, Bangalore, India, in 2012.

Mr. Subhasis Chand is currently pursuing his B.Tech at the National Institute of Technology, Rourkela, Orissa, India, in the department of Electronics & Communication Engineering.

Dr. G.N. Rathna is Principal Research Scientist at the Indian Institute of Science, Bangalore, India.