(IJCSIS) International Journal of Computer Science and Information Security, Vol. 9, No. 6, June 2011
AUTOMATED FACE DETECTION AND FEATURE EXTRACTION USING COLOR FERET IMAGE DATABASE
Dewi Agushinta R.¹, Fitria Handayani S.²
¹Information System, ²Informatics
Gunadarma University
Jl. Margonda Raya 100 Pondok Cina, Depok 16424, Indonesia
¹dewiar@staff.gunadarma.ac.id, ²dearest_v3chan@student.gunadarma.ac.id
Abstract—Detecting the location of human faces and then extracting the facial features in an image is an important ability with a wide range of applications, such as human face recognition, surveillance systems, human-computer interfacing, and biometric identification. Face detection methods and face feature extraction methods have each been reported by researchers as separate processes in the field of face recognition. They need to be connected by adapting the face detection results to serve as the input face for the extraction process, tuning the minimum face size produced by the detection process and the face cropping method of the extraction process. The identification and recognition of human face features developed in this research combines the face feature detection and extraction processes on 150 frontal single still face images from the color FERET facial image database, with additional extracted face features and face feature distances.
 
Keywords—face detection, face extraction, face features, face recognition, FERET
I. INTRODUCTION
Biometrics encompasses automated methods of recognizing an individual based on measurable biological (anatomical and physiological) and behavioral characteristics. As a biometric, facial recognition is a form of computer vision that uses faces to attempt to identify a person or verify a person's claimed identity [1]. Facial recognition systems are more widely used than just a few years ago. Most of the pioneering work in face recognition was based on the geometric features of a human face [2]. Detecting the location of human faces and then extracting the human face features in an image is an important ability with a wide range of applications, such as human face recognition, surveillance systems, human-computer interfacing, video-conferencing, etc. [3].

Detecting human faces and extracting the facial features in an unconstrained image is a challenging process. Several variables affect detection performance, such as differences in skin color, gender, and facial expression. One line of research in face recognition is face image detection from a single still image, where the face image part is detected by skin color model analysis [4], [5]. The determination of the face region in that research was not complete, because some parts of the face were not included in the extraction process. If one face feature component is detected, the positions of the other components can be obtained and those components can be extracted. Other research on face recognition systems covers face feature extraction, which has obtained the eye and mouth regions and the distances between the eyes and mouth [6]. In this paper, based on those studies, we develop a system that separates a face image into face components, and then extracts the components in the regions of the eyes, nose, mouth, and face boundary from frontal single still face images from the color FERET facial image database, with additional extracted face features and face feature distances.

II. METHODOLOGY
The image processing techniques used in face recognition vary [7], yet they share the same three basic stages, as illustrated in Fig. 1. First, the system receives an input image. Then, the system detects the face region. After that, it extracts the face image in order to get the face features, for instance the eyes, nose, and mouth. Finally, the system uses the detected face image to obtain the information of each extracted face feature.
Figure 1. Configuration of a generic face recognition system.
A. Face Detection Process
In this paper, 150 frontal face images from the color FERET facial image database are processed by the automated face recognition process. Fig. 2 shows a frontal single still face image from the color FERET facial image database. The first step is face detection based on a skin color model. A skin color model representing human skin color is built for the segmentation of skin and non-skin regions in color images. The input face images used in this research are 300×350 pixels in JPG format. Each image in RGB format is converted to YCbCr format in order to eliminate lighting effects; decreasing the luminance level is achieved by this conversion from RGB to YCbCr, or chromatic color. After the Cb and Cr values are obtained, a low-pass filter is applied to the image in order to reduce noise. The reshape function is then applied to the Cb and Cr values, turning them into row vectors. Ninety row vectors are formed from each of the Cb and Cr values. All Cb components become elements of the Cb vector, and all Cr components become elements of the Cr vector. The resulting Cb and Cr vectors are used to find the Cb average value, the Cr average value, and the covariance of Cb and Cr.
Figure 2. The frontal single still face image.
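As an illustration of this step, the following sketch builds the skin color statistics in Python with OpenCV and NumPy; the paper names no implementation language, so the libraries, the 5×5 averaging kernel standing in for the low-pass filter, and the function name are all assumptions:

    import cv2
    import numpy as np

    def build_skin_model(skin_patches):
        # Estimate the Cb/Cr mean vector and covariance matrix from a list
        # of skin sample images (BGR arrays, as returned by cv2.imread).
        cb_all, cr_all = [], []
        kernel = np.ones((5, 5), np.float32) / 25.0   # assumed low-pass filter
        for patch in skin_patches:
            # OpenCV's YCrCb conversion: channel 0 = Y, 1 = Cr, 2 = Cb.
            ycrcb = cv2.cvtColor(patch, cv2.COLOR_BGR2YCrCb)
            cr = cv2.filter2D(ycrcb[:, :, 1].astype(np.float32), -1, kernel)
            cb = cv2.filter2D(ycrcb[:, :, 2].astype(np.float32), -1, kernel)
            # Reshape each filtered channel into a row vector, as in the paper.
            cb_all.append(cb.reshape(-1))
            cr_all.append(cr.reshape(-1))
        cb_vec = np.concatenate(cb_all)
        cr_vec = np.concatenate(cr_all)
        mean = np.array([cb_vec.mean(), cr_vec.mean()])
        cov = np.cov(np.vstack([cb_vec, cr_vec]))    # 2x2 covariance of (Cb, Cr)
        return mean, cov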
The face detection process begins with the skin model detection process, applying a threshold value. Thresholding is the process of separating pixels with different gray levels: the value of a pixel whose gray level is lower than the boundary value is changed to 0, and the value of a pixel with a higher gray level is changed to 1. Thresholding is needed to transform the pixels of the test images into binary images.
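Continuing the sketch above, each pixel's (Cb, Cr) pair can be scored under the Gaussian skin model and binarized; the boundary value of 0.4 here is an illustrative assumption, not a figure from the paper:

    import cv2
    import numpy as np

    def skin_binary_mask(image, mean, cov, boundary=0.4):
        # Score every pixel under the skin model and binarize: a likelihood
        # below the boundary value becomes 0, otherwise 1.
        ycrcb = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb).astype(np.float32)
        cb = ycrcb[:, :, 2] - mean[0]
        cr = ycrcb[:, :, 1] - mean[1]
        inv = np.linalg.inv(cov)
        # Squared Mahalanobis distance from the skin color mean.
        m2 = cb * cb * inv[0, 0] + 2.0 * cb * cr * inv[0, 1] + cr * cr * inv[1, 1]
        likelihood = np.exp(-0.5 * m2)
        return (likelihood >= boundary).astype(np.uint8)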
B. Face Cropping Process in a Single Still Image
The binary image obtained from the threshold process is further processed to locate and crop the face region of the image. The face image is the region in white, i.e., with pixel value = 1. This process goes through the following steps [2]:
1) Separating the face skin region from the non-face skin regions, such as the neck.
2) Determining the holes of the face, as there must be one hole in the image that belongs to the face region. The number of holes in the face region is computed with the following equation:
E = C − H    (1)

where E is the Euler number, C is the number of connected components, and H is the number of holes in the region. Using this equation with a single connected component (C = 1), the value of H can be found by calculating H = 1 − E, to obtain the skin region of the face.
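This count can be computed from a labeled binary mask; the sketch below uses scikit-image, whose regionprops exposes exactly this Euler number (the choice of library, and taking the largest region as the face candidate, are assumptions of this illustration):

    from skimage.measure import label, regionprops

    def count_face_holes(binary_mask):
        # Label the connected skin regions and take the largest one as the
        # face candidate (an assumption of this sketch, not the paper's rule).
        regions = regionprops(label(binary_mask))
        if not regions:
            return 0
        face = max(regions, key=lambda r: r.area)
        # For a single region C = 1, so Eq. (1) gives H = 1 - E.
        return 1 - face.euler_number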
3) Finding the statistics of the color values between the hole area of the picture (which indicates the face area) and the face template picture, after the hole that represents the face region has been determined. The following equations are used to find the center of mass when determining the position of the face part of the picture:
x̄ = (1 / pic_area) · Σ (row_el × pic_el)    (2)

ȳ = (1 / pic_area) · Σ (col_el × pic_el)    (3)

where pic_area is the number of pixels in the region, row_el and col_el are a pixel's row and column indices, and pic_el is its binary pixel value.
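A direct implementation of Eqs. (2) and (3), assuming the region is given as a binary NumPy mask:

    import numpy as np

    def center_of_mass(binary_mask):
        # Eqs. (2)-(3): sum the row and column indices of every pixel whose
        # value is 1 and divide by pic_area, the number of such pixels.
        rows, cols = np.nonzero(binary_mask)
        pic_area = rows.size
        if pic_area == 0:
            return None                      # no skin pixels found
        return rows.sum() / pic_area, cols.sum() / pic_area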
This process uses a different face template from the one in Rademacher's research. Rademacher used the face template shown in Fig. 3, while the improved template used in this research is shown in Fig. 4. The fundamental differences between these two templates are the position of the face within the template and the size of the black region around the image. In the previous template, the forehead and chin regions were eliminated; this research uses those regions to determine the success of the feature extraction. Moreover, the face position in Rademacher's template was not symmetric, which can affect the final result because it reduces the symmetric face region that has been obtained. The difference in face skin color in the template has no significant effect on the face result obtained through this process.
Figure 3. Rademacher's template.
Figure 4. Improved face template.
 
4) The previous process yields an image that contains a human face, has at least one hole after the analysis process, or has a width-to-height ratio of approximately 1. The orientation angle of the region that contains the face is determined using the center of mass of the face object's position. The following equation is used to find this angle:
θ = 0.5 · tan⁻¹( b / (a − c) )    (4)

where

a = Σ Σ (x′)² · pic_el
b = 2 · Σ Σ (x′ · y′) · pic_el
c = Σ Σ (y′)² · pic_el
x′ = x − x̄
y′ = y − ȳ
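These moments translate directly into code; the sketch below computes a, b, and c over the skin pixels of the binary mask (np.arctan2 replaces the plain arctangent of Eq. (4) for numerical safety, a small deviation from the formula as printed):

    import numpy as np

    def face_orientation(binary_mask):
        # Second-order central moments of the skin pixels, as in Eq. (4).
        rows, cols = np.nonzero(binary_mask)
        x_p = rows - rows.mean()          # x' = x - x_bar
        y_p = cols - cols.mean()          # y' = y - y_bar
        a = np.sum(x_p * x_p)
        b = 2.0 * np.sum(x_p * y_p)
        c = np.sum(y_p * y_p)
        # arctan2 handles a == c safely; Eq. (4) writes this as tan^-1(b/(a-c)).
        return 0.5 * np.arctan2(b, a - c)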
5) Every step is conducted based on numregion, the number of regions with a color value of 1 in the binary image. If a width-to-height ratio between 0.6 and 1.2 is obtained, the segmented area is assumed to be a face region, and its coordinates are saved in a row vector.
6) The coordinate values are used to form a rectangle surrounding the face (bounding box) in the image.
7) The bounding box, which is the face region, is separated from the original image using the cropping process. The result of the face cropping process can be seen in Fig. 5.
Figure 5. The result of the face cropping process.
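Steps 5) through 7) can be sketched as follows, again assuming scikit-image as the implementation library; the 0.6-1.2 ratio test and the bounding-box crop follow the description above:

    import numpy as np
    from skimage.measure import label, regionprops

    def crop_face_candidates(binary_mask, image):
        # Keep regions whose width-to-height ratio lies between 0.6 and 1.2,
        # form their bounding boxes, and crop them out of the original image.
        crops = []
        for region in regionprops(label(binary_mask)):
            min_r, min_c, max_r, max_c = region.bbox
            height, width = max_r - min_r, max_c - min_c
            if height > 0 and 0.6 <= width / height <= 1.2:
                crops.append(image[min_r:max_r, min_c:max_c])
        return crops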
C. Face Extraction Process and Measurement of Face Features' Distances
In this paper, the face region image resulting from the face detection process on the 150 frontal face images from the color FERET facial image database is further processed to obtain the face components and the distances between them. This is done by extracting the eye, nose, and mouth components. The extraction determines the components' locations and is performed in the YCbCr color space, separating the luminance and chrominance components in order to reduce the lighting effect. Eight component distances are measured [10].

The face extraction process in this research is conducted in three stages:

• Face division.
• Face feature detection and extraction.
• Measurement/calculation of the distances between face features.

The face image from which the face features will be extracted is divided into regions in order to narrow down the detection area, so that the extraction result is more accurate and the probability of detecting other face features is minimized. The detection is conducted by computing the color space components in the regions assumed to contain the face features. After that, the regions assumed to contain the face features are extracted to determine the features' locations. In this research, the extraction of face features takes four steps: first the face division, second the extraction of the eye features, third the extraction of the nose features, and last the extraction of the mouth feature.
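The eight distances themselves are not enumerated in this preview, so as a hedged illustration the sketch below simply computes pairwise Euclidean distances between whatever feature centroids have been found; the dictionary layout and the function name are assumptions:

    import math

    def feature_distances(points):
        # 'points' maps feature names to (row, col) centroids, e.g.
        # {"left_eye": (52, 40), "right_eye": (52, 88), "nose": (80, 64)}.
        # All pairwise Euclidean distances are returned here, since the
        # preview does not list which eight distances the paper measures.
        names = sorted(points)
        return {(p, q): math.dist(points[p], points[q])
                for i, p in enumerate(names)
                for q in names[i + 1:]}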
1) Face Division in the Face Image: The face is divided into three parts: the face region, the eyes region, and the mouth region. A face image that can be processed must contain the forehead and chin regions as a minimum requirement, and at most the neck region in addition. The face division is meant to separate the detection areas of the eye and mouth features. It is based on the following assumptions:

• Both eye regions always reside in the upper half of the face region.
• The mouth region always resides in the lower half of the face region.

This division is conducted on a face image assumed to be the result of the face detection process. Next, the eye region resulting from the face division is divided further. The aim of this second division is to narrow the possible areas where the right and left eyes reside and then to separate them, based on the following assumptions:

• The upper half of the eye area is the eyebrow/forehead.
• The lower half of the eye area is the location of the right and left eyes.
• The eye positions are assumed to be symmetric: the right eye resides in the right half and the left eye in the left half of the face.

In this research, some improvements are made to the mouth region to get a better result than the previous research. The former research divided the mouth region as illustrated in Fig. 6.
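Before turning to the mouth-region refinement below, the division just described can be sketched as follows (the function name, the NumPy-array input, and the integer halving are assumptions of this illustration):

    def divide_face(face):
        # Split a cropped face (an H x W NumPy array) per the assumptions
        # above: eyes in the upper half, mouth in the lower half, and the
        # two eyes separated at the vertical midline (the paper's convention
        # puts the right eye in the right half).
        h, w = face.shape[:2]
        eye_region = face[: h // 2]                         # upper half
        mouth_region = face[h // 2 :]                       # lower half
        eye_band = eye_region[eye_region.shape[0] // 2 :]   # below eyebrows
        right_eye = eye_band[:, w // 2 :]
        left_eye = eye_band[:, : w // 2]
        return right_eye, left_eye, mouth_region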
Figure 6. Previous mouth region division.
An approximate position of the mouth was determined as the centre of the region, vertically and horizontally. The mouth feature in this research, however, is not always located vertically at the centre of the region, as there exists a neck region that affects the position of the mouth feature within the mouth region. This is