(IJCSIS) International Journal of Computer Science and Information Security, Vol. 9, No. 7, July 2011
their chrominance. Human faces have a distinctive texture that can be used to separate them from other objects. Facial-feature methods depend on detecting features of the face. Some approaches use edges to detect the features of the face and then group the edges; others use blocks and streaks instead of edges. For example, one face model consists of two dark blocks and three light blocks to represent the eyes, cheekbones, and nose, and uses streaks to represent the outlines of the face, such as the eyebrows and lips. Multiple-feature methods combine several facial features to locate or detect faces: first, face candidates are found using features such as skin color, size, and shape, and these candidates are then verified using detailed features such as the eyebrows, nose, and hair.
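The chrominance-based candidate search described above can be sketched as a simple per-pixel test in YCrCb space. The conversion coefficients are the standard ITU-R BT.601 ones, but the Cr/Cb bounds below are illustrative values commonly seen in skin-detection tutorials, not thresholds taken from this paper:

```python
import numpy as np

def skin_mask(rgb):
    """Flag pixels whose chrominance falls inside an illustrative skin range.

    `rgb` is an (H, W, 3) uint8 array.  Only the Cr/Cb chrominance planes
    are tested, so the mask is largely insensitive to brightness.
    """
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    # ITU-R BT.601 RGB -> YCrCb chrominance planes.
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    # Assumed (hypothetical) skin bounds; real systems tune these on data.
    return (cr > 135) & (cr < 180) & (cb > 85) & (cb < 135)
```

Connected regions of the resulting mask would then be filtered by size and shape before the detailed-feature verification step.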
Machine learning methods:
Machine learning methods [13, 14] use techniques from statistical analysis and machine learning to find the relevant characteristics of faces and non-faces. We now give a definition of face detection: given an arbitrary image, the goal of face detection is to determine whether or not there are any faces in the image and, if present, to return the image location and extent of each face. The challenges associated with face detection can be attributed to the following factors:
Pose: The images of a face vary due to the relative face pose (frontal, 45 degree, profile, upside down), and some facial features such as an eye or the nose may become partially or wholly occluded.
Presence or absence of structural components: Facial features such as beards, mustaches, and glasses may or may not be present, and there is a great deal of variability among these components, including shape, color, and size.
Facial expression: The appearance of faces is directly affected by a person's facial expression.
Occlusion: Faces may be partially occluded by other objects. In an image with a group of people, some faces may partially occlude other faces.
Image orientation: Face images vary directly with different rotations about the camera's optical axis.
Imaging conditions: When the image is formed, factors such as lighting (spectra, source distribution, and intensity) and camera characteristics (sensor response, lenses) affect the appearance of a face.
There are many problems closely related to face detection. Face localization aims to determine the image position of a single face; this is a simplified detection problem under the assumption that an input image contains only one face. The goal of facial feature detection is to detect the presence and location of features such as the eyes, nose, nostrils, eyebrows, mouth, lips, and ears, with the assumption that there is only one face in the image.
Face Detection Using AdaBoost
Viola and Jones proposed a face detection algorithm in which a set of Haar-like features is used to construct a classifier. Every weak classifier applies a simple threshold to one of the extracted features. AdaBoost is then used to choose a small number of important features and combine them in a cascade structure to decide whether an image is a face or a non-face. AdaBoost, short for Adaptive Boosting, is a machine learning algorithm formulated by Yoav Freund and Robert Schapire. It is a meta-algorithm and can be used in conjunction with many other learning algorithms to improve their performance. AdaBoost is adaptive in the sense that subsequent classifiers are tweaked in favor of those instances misclassified by previous classifiers. AdaBoost is sensitive to noisy data and outliers; on the other hand, it is less susceptible to the overfitting problem than most learning algorithms. In the work of Lang Li Yang, a new algorithm was presented that effectively combines optimized rect-features with a weak-classifier learning algorithm, which can largely improve the hit rate and decrease the training time. Optimizing the rect-features means that, when searching for rect-features, a growth step length can be established for the rect-feature so as to reduce the number of features. The new classifier training method seeks the weak-classifier error rate directly, which avoids iterative training, the static probability distribution, and other time-consuming processes. This approach reduces the training-time cost compared with the conventional AdaBoost algorithm and can improve detection speed while maintaining high detection accuracy.
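The adaptive reweighting described above can be illustrated with a minimal AdaBoost trainer over one-dimensional threshold stumps. This is a sketch of the reweighting idea only, not the Viola-Jones trainer; the data, threshold search, and round count are all illustrative:

```python
import numpy as np

def adaboost_stumps(x, y, rounds=5):
    """Tiny AdaBoost over threshold stumps on 1-D data; y is in {-1, +1}."""
    n = len(x)
    w = np.full(n, 1.0 / n)          # start with uniform example weights
    ensemble = []                     # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None
        for t in x:                   # candidate thresholds from the data
            for pol in (1, -1):
                pred = np.where(pol * (x - t) >= 0, 1, -1)
                err = w[pred != y].sum()      # weighted training error
                if best is None or err < best[0]:
                    best = (err, t, pol, pred)
        err, t, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)     # guard log of 0
        alpha = 0.5 * np.log((1 - err) / err)     # log-inverse error weight
        w *= np.exp(-alpha * y * pred)            # emphasize the mistakes
        w /= w.sum()                              # renormalize
        ensemble.append((alpha, t, pol))
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps; sign of the score is the label."""
    x = np.asarray(x, dtype=float)
    score = sum(a * np.where(p * (x - t) >= 0, 1, -1)
                for a, t, p in ensemble)
    return np.where(score >= 0, 1, -1)
```

The weight update `w *= exp(-alpha * y * pred)` is exactly the "tweaked in favor of misclassified instances" step: correctly classified examples shrink in weight while mistakes grow, so the next round's best stump must handle the cases earlier stumps got wrong.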
A set of Haar-like features, used as the input features to the cascade classifier, is shown in Fig. 1. Computation of Haar-like features can be accelerated using an intermediate image representation called the integral image. The integral image is defined, at each pixel, as the sum of all pixel values (in an image) above and to the left, including the pixel itself.
Figure 1. Examples of Haar-like features
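The definition above can be sketched directly: the integral image is a double cumulative sum, after which the sum of any rectangle needs only four array lookups, independent of the rectangle's size. This is a minimal illustration, not the paper's implementation:

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[:y+1, :x+1] (all pixels above-left, inclusive)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y0, x0, y1, x1):
    """Sum of the rectangle img[y0:y1+1, x0:x1+1] in four lookups."""
    total = ii[y1, x1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]      # strip above the rectangle
    if x0 > 0:
        total -= ii[y1, x0 - 1]      # strip left of the rectangle
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]  # corner was subtracted twice
    return total
```

A two-rectangle Haar-like feature is then just the difference of two `box_sum` calls, which is why every feature can be evaluated in constant time at any scale.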
AdaBoost is an algorithm for constructing a composite classifier by sequentially training classifiers while putting more and more emphasis on certain patterns. A weak classifier is defined by applying each feature, one by one, to the images in the training set. To reduce the size of the feature set, a limited number of best features can be selected that discriminate faces from non-faces and also complement each other. The AdaBoost algorithm changes the weights used in computing the classification error of a weak classifier: the examples that the chosen feature misclassified are now weighted more, which ensures that the first best feature, and any other feature similar to it, will not be chosen as the second best feature. This second best feature ideally complements the first best feature in the sense that it succeeds at classifying faces that the first best feature failed on. This process is repeated, T times for example, to find as many best features as desired. Each feature, as a weak classifier, votes on whether or not an input test image is likely to be a face. Each feature's vote is weighted in log-inverse proportion to the error of that feature, so a feature with a smaller error gets a heavier weighted vote.
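The cascade structure mentioned earlier chains these weighted-vote stages so that cheap early stages reject most non-face windows before the expensive stages run. A minimal sketch, in which each stage is represented abstractly as a score function plus a threshold (the stage contents are assumptions for illustration, not the trained stages of any real cascade):

```python
def cascade_classify(stages, window):
    """Evaluate a Viola-Jones-style attentional cascade (sketch).

    `stages` is a list of (score_fn, threshold) pairs; each score_fn maps
    an image window to that stage's weighted vote of boosted features.
    A window must pass every stage to be accepted as a face; most
    non-face windows are discarded by the cheap early stages.
    """
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False   # rejected early: not a face
    return True            # survived all stages: likely a face
```

Because the vast majority of scanned windows in a real image contain no face, this early-exit structure is what makes exhaustive multi-scale scanning affordable.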