
CHAPTER 4

FACE RECOGNITION DESIGN AND ANALYSIS

As explained previously in the scope, this thesis also includes a prototype of a face recognition system. A face recognition system consists of several modules that work together to make the system run smoothly; however, the prototype does not cover all of them. It covers only the modules up to and including face detection.

The sub-chapters below present the general design of a face recognition system. In addition to the general design, I will also explain how the prototype is built.

4.1 General Phases in Face Recognition

There are five general phases in a face recognition system. The system must execute all of them before arriving at the expected result. The phases are [29]:

1. Capture image

2. Detect faces in the image

3. Feature Extraction

4. Template comparison

5. Declaration of matching template

Figure 13 - Facial recognition steps [29]
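These five phases can be wired together as a minimal pipeline skeleton. Every type and function name below is a hypothetical placeholder (not VeriLook's or OpenCV's API), with dummy values standing in for real processing:

```cpp
#include <string>
#include <vector>

// Hypothetical placeholder types; each function stands in for one phase.
struct Image {};
struct FaceRegion {};
using Template = std::vector<double>;

Image captureImage() { return {}; }                                 // phase 1
std::vector<FaceRegion> detectFaces(const Image&) { return {{}}; }  // phase 2
Template extractFeatures(const FaceRegion&) { return {0.1, 0.2}; }  // phase 3
double compareToEnrolled(const Template&) { return 0.93; }          // phase 4 (dummy score)
std::string declareResult(double score) {                           // phase 5
    return score >= 0.8 ? "match" : "no match";
}

// Wire the five phases together in order.
std::string runPipeline() {
    Image img = captureImage();
    std::vector<FaceRegion> faces = detectFaces(img);
    Template t = extractFeatures(faces.front());
    return declareResult(compareToEnrolled(t));
}
```

A real system would replace each stub with the corresponding module described in the sub-chapters below.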

4.1.1 Image Capturing

Face images can be acquired by digitally scanning an existing photograph or by using an electro-optical camera. To capture a real-time situation, such as in a public place, acquisition can be accomplished with a CCTV camera or any other surveillance camera.



Figure 14 - CCTV camera and Surveillance Camera

Furthermore, acquisition for the prototype can be done with a web camera or by manually loading the desired image into the prototype system.

Figure 15 - Web Camera



4.1.2 Face Detection

The function of this module is to determine where in an image a face is located. The face detection module works by scanning an image at different scales and looking for simple patterns that denote the presence of a face. Once the system has detected a face, it produces a sub-image (image chip) scaled so that the face appears centred and at a uniform size.

OpenCV already provides an algorithm to locate faces in still images and videos, called the Haar-based Cascade Classifier, or simply the Haar Classifier. [30] This algorithm scans an image and returns a bounding box for each detected face.



Figure 16 - Haar Classifier Result Example

The square bounding boxes indicate the faces detected in the image; they are the output of the Haar Classifier. The classifier is declared by loading a trained cascade file, which can then be applied to each image. [31]



4.1.3 Feature Extraction

Another phase in face recognition is feature extraction. In this phase the system localizes the characteristic face components (i.e. eyes, mouth, nose, etc.) in an image. In other words, feature extraction is the step in which the system locates certain points on the face, such as the corners and centres of the eyes, the tip of the nose, and the mouth, and analyzes the spatial geometry of the distinguishing features of the face. The result of this analysis is a template generated for each face, consisting of a reduced set of data that represents the uniqueness of the enrolled face's features.

Two algorithms that can be used for this module are:

1. PCA (Principal Component Analysis) [13]

PCA represents face images as vectors in which each element corresponds to a pixel value in the image. PCA is the most popular algorithm in use, and it can be combined with neural networks and local feature analysis to enhance its performance.

Figure 17 - PCA example



PCA compares the resulting template with the template images stored in the database and finds the closest matching image.

2. EBGM (Elastic Bunch Graph Matching) [17]

This is another algorithm for extracting face features. The EBGM algorithm operates in three phases. First, important landmarks on the face are located by comparing Gabor jets extracted from the new image with Gabor jets taken from training imagery. Second, each face image is processed into a smaller description of that face called a FaceGraph. The last phase computes the similarity between FaceGraphs by computing the similarity of their Gabor jet features. [bolme]



Figure 18 - The Basic Steps of EBGM
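The closest-match comparison that PCA performs can be sketched in plain C++ if we assume the eigenface projection has already been done, so that each template is simply a vector of PCA coefficients (a deliberate simplification of the full algorithm; all names are illustrative):

```cpp
#include <vector>
#include <cstddef>

// A PCA template: the face image projected onto the eigenface basis.
using Template = std::vector<double>;

// Squared Euclidean distance between two PCA coefficient vectors.
double distance2(const Template& a, const Template& b) {
    double d = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double diff = a[i] - b[i];
        d += diff * diff;
    }
    return d;
}

// Find the enrolled template closest to the probe (nearest-neighbour search).
std::size_t closestMatch(const Template& probe, const std::vector<Template>& enrolled) {
    std::size_t best = 0;
    double bestDist = distance2(probe, enrolled[0]);
    for (std::size_t i = 1; i < enrolled.size(); ++i) {
        double d = distance2(probe, enrolled[i]);
        if (d < bestDist) { bestDist = d; best = i; }
    }
    return best;
}
```

Production systems typically use more discriminative distance measures (e.g. Mahalanobis distance) over the PCA coefficients, but the nearest-match structure is the same.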

4.1.4 Template Comparison

The fourth phase of face recognition is to compare the template generated in the previous phase with the templates in the database (enrolled templates). [29] There are two ways of comparing templates, depending on the purpose of the application. In an identification application, the template is matched against all templates in the database and the closest match is returned (1:N). In a verification application, the generated template is compared only with the single database entry belonging to the claimed identity (1:1).

4.1.5 Declare Matches

The final phase of face recognition is to declare the highest matching score produced in the previous step. [29] The strictness of this declaration is governed by rules based on the configuration set up by the end users; this configuration determines how the application behaves given the desired security and operational considerations.
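The two comparison modes and the threshold-based declaration described above can be sketched together. The similarity measure and all names are illustrative, and `threshold` stands in for the end-user security configuration:

```cpp
#include <vector>
#include <cstddef>
#include <utility>

using Template = std::vector<double>;

// Similarity score: higher means more alike (negated squared distance here).
double similarity(const Template& a, const Template& b) {
    double d = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double diff = a[i] - b[i];
        d += diff * diff;
    }
    return -d;
}

// Identification (1:N): score the probe against every enrolled template.
std::pair<std::size_t, double> identify(const Template& probe,
                                        const std::vector<Template>& db) {
    std::size_t best = 0;
    double bestScore = similarity(probe, db[0]);
    for (std::size_t i = 1; i < db.size(); ++i) {
        double s = similarity(probe, db[i]);
        if (s > bestScore) { bestScore = s; best = i; }
    }
    return {best, bestScore};
}

// Verification (1:1): score only against the claimed identity's template.
double verify(const Template& probe, const Template& claimed) {
    return similarity(probe, claimed);
}

// Declare a match only when the score clears the configured threshold;
// a stricter threshold trades convenience for security.
bool declareMatch(double score, double threshold) {
    return score >= threshold;
}
```

Raising `threshold` lowers the false-accept rate at the cost of more false rejections, which is exactly the security/operational trade-off left to the end user.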

4.2 Face Recognition Application Design

The face recognition application used in this thesis is a freely downloadable version of VeriLook Face Identification Technology, provided by Neurotechnology. This company provides algorithms and software development products for biometric (fingerprint and face recognition) systems, computer-vision-based systems, and object recognition. [32]



This application can capture images using cameras (web-cams) as its external video source and can also read captured images from external files (png, bmp, jpg, tif, and gif).

The application has two execution modes:

1. Enrolment

The enrolment mode is active when users register their face images. The system checks whether a face is present, performs feature extraction, and writes the extraction result into the template database.

2. Matching

This mode runs the matching algorithm to compare a newly acquired face with the enrolled face images in the template database.



4.2.1 Data Flow Diagram

Enroll User → 1.0 Image Capturing → 2.0 Face Detection → 3.0 Feature Extraction → 4.0 Template Comparison → 5.0 Declare Matches. The template produced by 3.0 Feature Extraction is stored in the Face Images Database, and 4.0 Template Comparison loads the enrolled templates back from that database.

Figure 19 - Data Flow Diagram of Face Recognition System



4.2.2 Flowchart

Figure 20 - Flowchart of face recognition system



The face recognition implementation is mainly divided into two sub-tasks: face enrolling and the face recognition process itself.

1. Face Enrolling

Face enrolling happens when a candidate (recognition subject) first uses the system. Enrolling a face image means the system takes the subject's picture using the installed camera, automatically detects the subject's face, and extracts its features using the preferred algorithm (PCA or EBGM). The result is a template image, which is stored in the face database.

2. Face Recognition

Face recognition works once the subject has enrolled their face image in the system. The system again captures the subject's face through the camera, detects the face, and applies the feature extraction functions to obtain a template. The newly generated template is then compared with the enrolled templates in the database.

4.3 Prototype Design

As stated previously, the prototype covers only the image acquisition and face detection phases.



4.3.1 Image Acquisition

Image acquisition is the first step performed in this prototype. The prototype uses a Logitech web-cam as its capture hardware.

Figure 21 - Logitech webcam for Notebook

The OpenCV library provides a function to capture images from a specified camera. I use this function to capture images from the webcam, so the prototype captures images in real time.



4.3.2 Face Detection

The main purpose of this prototype is to detect faces in the captured image. As described earlier in this chapter, face detection determines where in an image a face is located; it works by scanning an image at different scales and looking for simple patterns that identify the presence of a face.

The prototype is built using the Haar-like features (Haar Classifier) function from OpenCV. Haar classifier detection works by creating a search window that slides through the image and checks whether a given region looks like a face or not, quickly discarding the regions that do not.

The basis of this method is the Haar-like features together with a large set of very simple weak classifiers, each of which uses a single feature to label a given image region as face or non-face. Each feature is described by its template (the shape of the feature), its coordinates relative to the search window origin, and its size (scale factor). [34]
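Underlying this, each Haar-like feature reduces to a difference of rectangle sums, which an integral image makes computable in constant time. A plain-C++ sketch of that computation (not OpenCV's internal implementation):

```cpp
#include <vector>
#include <cstddef>

// Integral image: ii[y][x] = sum of all pixels above and to the left of (x, y).
std::vector<std::vector<long>> integralImage(const std::vector<std::vector<int>>& img) {
    std::size_t h = img.size(), w = img[0].size();
    std::vector<std::vector<long>> ii(h + 1, std::vector<long>(w + 1, 0));
    for (std::size_t y = 0; y < h; ++y)
        for (std::size_t x = 0; x < w; ++x)
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x];
    return ii;
}

// Sum of pixels inside the rectangle [x, x+w) x [y, y+h), in O(1).
long rectSum(const std::vector<std::vector<long>>& ii,
             std::size_t x, std::size_t y, std::size_t w, std::size_t h) {
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x];
}

// A two-rectangle Haar-like feature: left half minus right half.
long twoRectFeature(const std::vector<std::vector<long>>& ii,
                    std::size_t x, std::size_t y, std::size_t w, std::size_t h) {
    return rectSum(ii, x, y, w / 2, h) - rectSum(ii, x + w / 2, y, w / 2, h);
}
```

Because every rectangle sum costs four array lookups regardless of its size, scaling the search window does not make feature evaluation more expensive.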



Figure 22 - Haar-like features

Figure 23 - Search window



The search window is first evaluated by the first classifier in the cascade. If that classifier returns false, computation on that window ends with no detected face. If it returns true, the window is passed down to the next classifier in the cascade, which does the same thing. Only when every classifier returns true for a window is the result declared true for that window (face detected).

Figure 24 - Decision tree based on haar-like features (cascade of classifiers)

The more a window looks like a face, the more classifiers must be computed and the longer it takes to classify that window. Conversely, if the window is not a face, the cascade quickly rejects it after considering only a small fraction of its features.

Figure 25 - Removal concept in cascade classifier
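This early-rejection behaviour can be sketched as a generic cascade loop; the stage classifiers here are arbitrary placeholder predicates rather than trained Haar stages:

```cpp
#include <functional>
#include <vector>

struct Window { int x, y, size; };

// Run a window through the cascade: reject on the first stage that fails,
// accept (face detected) only if every stage returns true.
bool classifyWindow(const Window& win,
                    const std::vector<std::function<bool(const Window&)>>& stages,
                    int* stagesEvaluated = nullptr) {
    int evaluated = 0;
    for (const auto& stage : stages) {
        ++evaluated;
        if (!stage(win)) {                 // early rejection: stop immediately
            if (stagesEvaluated) *stagesEvaluated = evaluated;
            return false;
        }
    }
    if (stagesEvaluated) *stagesEvaluated = evaluated;
    return true;                           // all stages passed: face detected
}
```

Most windows in a typical image fail within the first stage or two, which is what makes scanning every position and scale affordable.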



4.4 Design Diagram

4.4.1 Data Flow Diagram

Enroll User → 1.0 Image Capturing → 2.0 Face Detection → 3.0 Display detected face by creating a bounding box.

Figure 26 - Prototype data flow diagram



4.4.2 Simple Haar Classifier Flow Diagram

Input image → Sum pixel calculation → Rectangular node selection → Haar-like features calculation (in parallel, the Haar-like features in the database are scaled to the search window) → Haar-like features comparison → Face detection.

Figure 27 - Haar-like features flow diagram



4.4.3 Flowchart

Start → Image acquisition → Haar classifier image calculation process → Face detected? If no, the process returns to the Haar classifier calculation; if yes, the detected face is displayed using a bounding box → Finish.

Figure 28 - Face detection flowchart



4.5 Data Dictionary

4.5.1 Data Flow Diagram (DFD) Data Dictionary

Symbol           Description

Process          An activity or function performed for a specific reason; used to represent the things done by the system.

Data Store       An inventory of data, such as a database.

External Agent   A person, organizational unit, or other organization that lies outside the scope of the project but interacts with the system.

Data Flow        An input of data to a process, or the output of data from a process; may also represent the creation, reading, deletion, or updating of data in a data store.

Table 5 - DFD Data Dictionary



4.5.2 System Flow Chart Data Dictionary

Symbol         Description

Terminal       Tells where the flowchart begins and ends; marks the entry and exit points of the chart.

Arrow          Determines the flow (sequence) through the chart.

Process        Shows a process, task, action, or operation; something that has to be done or an action that has to be taken.

Decision       Asks a question; the answer determines which arrow to follow out of the decision shape.

Input/Output   A parallelogram used to show input or output, for example receiving information from the patient table or data from the doctor database.

Table 6 - System Flowchart Data Dictionary

4.6 System Solution

The system solution is the list of tools needed to develop the prototype, divided into implementation tools, development tools, and hardware and software requirements.



4.6.1 Implementation Tools

The implementation tool is the programming language used to build the prototype. The prototype is built in C++, a middle-level language that combines both high-level and low-level language features.

Besides C++ as the base programming language, the prototype also uses OpenCV, a library of programming functions aimed mainly at real-time computer vision. In this prototype, the OpenCV library is used to support the image processing.

4.6.2 Development Tools

Development tools are the software used to build the prototype.

1. Microsoft Visual Studio 2008

This software provides access to C++ and its libraries. Moreover, the OpenCV library can also be included and used in this software.

2. Adobe Photoshop CS

I use this software to edit (crop) the images to get a better view of them.

4.6.3 Hardware and Software Requirements

The hardware requirement is the computer specification needed to run the system smoothly. The system needs a camera (web-cam), a monitor, and a CPU with the minimum specifications listed below:

Operating System : Microsoft Windows XP Professional

Processor : 1GHz 32-bit

Memory : 128MB RAM

Hard Disk Space : 10 GB

Besides hardware, the system also needs software installed on the client's machine.

Software required : Microsoft Visual Studio 2008 with

OpenCV Library installed.
