ABSTRACT

Face recognition is a wide area currently studied by many researchers and vendors, yet some parts of it remain largely unexplored. Partial face recognition can be considered one of these areas. A few years ago, researchers regarded partial face recognition as an unrealistic approach because a partial face carries less information with which to recognize a person. However, research such as that conducted by Sato et al. (1998) has shown that partial face regions also contain a useful amount of information for recognition.

Through this project, I investigated how far partial face regions can contribute to recognizing an individual. The artefact allows users to input probe images, and the system then determines the individual to whom that particular face region belongs. Even though it appears to be a simple process, the system must perform an intensive amount of work to achieve it.

Domain research was conducted to identify and study the problem, related face recognition work and similar systems. By analysing it, I decided which areas the system research should concentrate on.

During the system research, the researcher studied the fundamentals of image processing, face detection techniques and face recognition techniques. After analysing them, appropriate techniques were selected to extract partial face regions and recognize individuals.

The document contains an introduction, domain research, system research, a project development plan, a requirement specification and a technical investigation, which together detail the above-mentioned areas.

Finally, appropriate image processing, face detection and face recognition techniques were selected and justified, along with the choice of project development methodology and development platform.

ACKNOWLEDGEMENT

First, I would like to express my appreciation to my supervisor, Ms Oshini Jayatunga, and my assessor, Mr Gamindu Hemachandra, for the guidance, suggestions and support they gave me during my project. Without their direction I would not have been able to accomplish this project.

Special thanks to my family; without their help, guidance and love I would not have been able to make it this far. I would also like to thank my colleagues for their advice and guidance on the project. Special thanks go to my friends Tharindu Roshantha and Andrew Jebaraj for being good friends and for helping me proofread my document and correct my mistakes.

Finally, I would like to thank APIIT Sri Lanka for providing the necessary educational support: a well-qualified lecturing staff, and the library and laboratory facilities needed to make this project a success. Special thanks to the library for helping me obtain the reading materials needed for my project research.

TABLE OF CONTENTS

ABSTRACT .................................................................................................................ii

ACKNOWLEDGEMENT ......................................................................................... iii

TABLE OF CONTENTS .......................................................................................... iv

LIST OF FIGURES .................................................................................................... ix

LIST OF TABLES ...................................................................................................... xi

LIST OF ABBREVIATIONS ....................................................................................xii

CHAPTER 1 ................................................................................................................ 1

1 INTRODUCTION ................................................................................................... 1

1.1 Project Background ............................................................................................ 1

1.2 Understanding the Problem ................................................................................ 2

1.3 Sub Problems ...................................................................................................... 3

1.4 Project Objectives ............................................................................................... 4

1.5 Proposed Solution ............................................................................................... 4

1.6 Project Scope ...................................................................................................... 5

1.6.1 Functionality of the Artefact ....................................................................... 5

1.7 Constraints ........................................................................................................ 7

1.8 Assumptions ....................................................................................................... 7

CHAPTER 2 ................................................................................................................ 8

2 Domain Investigation .............................................................................................. 8

2.1 Outline of the generic face recognition system .................................................. 8

2.2 Similar systems ................................................................................................... 9

2.2.1 Google Picasa .............................................................................................. 9

2.2.2 Apple iPhoto .............................................................................................. 10

2.2.3 Face.com automatic Facebook photo tagger .............................................. 11

2.3 Problems of face recognition systems ............................................................... 12

2.3.1 Pose variance ............................................................................................. 12

2.3.2 Lighting/Illumination variance .................................................................. 13

2.3.3 Different facial expression......................................................................... 13

2.3.4 Different facial occlusions ......................................................................... 13

2.3.5 Individual characteristics ........................................................................... 14

2.3.6 Scale Invariance and robustness ................................................................ 14

CHAPTER 3 .............................................................................................................. 15

3 System research and investigation ........................................................................ 15

3.1 Fundamentals of Image processing .................................................................. 15

3.1.1 Image classifications.................................................................................. 15

3.1.2 Image segmentation ................................................................................... 16

3.1.3 Edge detection ........................................................................................... 19

3.2 Face recognition approaches............................................................................. 24

3.2.1 Three dimensional face recognition .......................................................... 24

3.2.2 Two dimensional face recognition ............................................................ 25

3.2.3 Geometric based approaches ..................................................................... 25

3.2.4 Appearance based approach ...................................................................... 26

3.3 Face region extraction approaches.................................................................... 28

3.3.1 Template matching .................................................................................... 28

3.3.2 Haar like features - Viola-Jones face detector ........................................... 32

3.3.3 Facial feature detection using AdaBoost with shape constraints .............. 35

3.3.4 Justification on selected technologies ....................................................... 37

3.4 Face recognition methods ................................................................................. 38

3.4.1 Elastic bunch graph matching approach .................................................... 38

3.4.2 Appearance based subspace methods ........................................................ 41

3.4.3 Support vector machine kernel correlation feature analysis method ......... 42

3.4.4 Eigenface Approach .................................................................................. 43

3.4.5 Justification on selected technique ............................................................ 56

3.5 Approach to the solution ................................................................................... 59

3.5.1 Approach to face region extraction ........................................................... 59

3.5.2 Approach to face region identification ...................................................... 59

3.5.3 Approach to face region recognition ......................................................... 59

Chapter 4 .................................................................................................................... 60

4 Requirement Specification .................................................................................... 60

4.1 System requirements ......................................................................................... 60

4.1.1 Functional requirements ............................................................................ 60

4.1.2 Non Functional requirements .................................................................... 60

4.2 Resource requirements...................................................................................... 61

4.2.1 Hardware Requirements ............................................................................ 61

4.2.2 Software Requirements.............................................................................. 62

CHAPTER 5 .............................................................................................................. 64

5 Development methodologies ................................................................................. 64

5.1 Waterfall ........................................................................................................... 64

5.2 Incremental Model ............................................................................................ 65

5.3 Hybrid model .................................................................................................... 66

5.4 Prototyping ....................................................................................................... 67

5.5 Justification for selection method ..................................................................... 69

5.6 Developing stages ............................................................................................. 69

CHAPTER 6 .............................................................................................................. 72

6 Technical research and investigation .................................................................... 72

6.1 Developing platforms ....................................................................................... 72

6.1.1 Dot.Net ...................................................................................................... 72

6.1.2 Sun Java ..................................................................................................... 76

6.1.3 MATLAB – MathWorks ........................................................................... 77

6.2 Image processing APIs ..................................................................................... 78

6.2.1 OpenCV ..................................................................................................... 78

6.2.2 EmguCV .................................................................................................... 78

6.2.3 Aforge.Net ................................................................................................. 78

6.3 Math Related Libraries ..................................................................................... 79

6.4 Developing Approaches.................................................................................... 81

6.4.1 Approach One ............................................................................................ 81

6.4.2 Approach Two ........................................................................................... 81

6.4.3 Approach Three ......................................................................................... 82

6.4.4 Approach Four ........................................................................................... 82

6.5 Justification for selection of programming environment.................................. 82

CHAPTER 7 .............................................................................................................. 85

7 System Design ....................................................................................................... 85

7.1 Design Approach .............................................................................................. 86

7.1.1 Programming Approach – Object Oriented Approach .............................. 86

7.1.2 Design architecture .................................................................................... 86

7.2 Use Case Diagram ............................................................................................ 87

7.3 Use case descriptions ........................................................................................ 88

7.3.1 Add New Record ....................................................................................... 88

7.3.2 View Record .............................................................................................. 89

7.3.3 Train Image Set ......................................................................................... 90

7.3.4 Find Match ................................................................................................. 91

7.3.5 Set Configuration ....................................................................................... 92

7.3.6 Modify Record ........................................................................................... 93

7.4 System Overview .............................................................................................. 95

7.5 Activity Diagrams ............................................................................................. 96

7.5.1 Add Record ................................................................................................ 96

7.5.2 View Record .............................................................................................. 97

7.5.3 Training Image Space ................................................................................ 98

7.5.4 Find Match ................................................................................................. 99

7.5.5 Automatic Face Region Verification ....................................................... 100

7.5.6 Pre-processing/Normalization ................................................................. 100

7.6 Class Diagrams ............................................................................................... 101

7.7 Classes Description ......................................................................................... 102

7.7.1 Data management related classes ............................................................ 102

7.7.2 Application logic related classes ............................................................. 102

7.7.3 GUI Related Classes ................................................................................ 103

7.7.4 Other supportive classes .......................................................................... 103

CHAPTER 8 .............................................................................................................. 105

8 TESTING AND EVALUATION ........................................................................ 105

8.1 Unit testing...................................................................................................... 106

8.2 Integration testing ........................................................................................... 107

8.3 System Integration testing .............................................................................. 107

8.4 Performance testing ........................................................................................

8.5 Accuracy testing .............................................................................................

8.6 Sample Unit Testing Test plans for eye region extraction module .................

REFERENCE ........................................................................................................... 132

APPENDICES .............................................................................................................. i

A. Image Processing Techniques .................................................................................ii

B. Eigenface Step by Step (Formulas) ........................................................................ vi

LIST OF FIGURES
Figure 1.1: Altered faces ............................................................................................. 3
Figure 1.2: Normal face recognition process ............................................................... 3
Figure 1.3: Proposed solution overview ...................................................................... 4
Figure 2.1: Configuration of generic face recognition ................................................. 8
Figure 2.2: Google Picasa ............................................................................................ 9
Figure 2.3: Apple iPhoto face recognition .................................................................. 10
Figure 2.4: Face.com Photo Tagging ......................................................................... 11
Figure 2.5: Facebook Photo Tagging ......................................................................... 11
Figure 2.6: Different poses ........................................................................................ 12
Figure 2.7: Different illuminations ............................................................................ 13
Figure 2.8: Different facial expression....................................................................... 13
Figure 2.9: Different facial occlusions ....................................................................... 13
Figure 3.1: Image representation in plane, As a matrix ............................................. 15
Figure 3.2: Image segmentation approach ................................................................. 16
Figure 3.3: Low noise object/background image histogram ...................................... 18
Figure 3.4: Canny edge detection process.................................................................. 21
Figure 3.5: Masks ....................................................................................................... 22
Figure 3.6: Sobel Template ........................................................................................ 22
Figure 3.7: Geometrical features ................................................................................ 26
Figure 3.8: Template matching strategy..................................................................... 27
Figure 3.9: Template matching .................................................................................. 28
Figure 3.10: Template ................................................................................................ 29
Figure 3.11: Test one Grey-Level Template matching .............................................. 29
Figure 3.12: Test two Grey-Level Template matching .............................................. 30
Figure 3.13: Test three Grey-Level Template matching ............................................ 30
Figure 3.14: Test four Grey-Level Template matching ............................................. 30
Figure 3.15: Test five Grey-Level Template matching .............................................. 31
Figure 3.16: Test five, template not available ............................................................ 31
Figure 3.17: Rectangle Features ................................................................................. 33
Figure 3.18: The features selected by AdaBoost ....................................................... 33
Figure 3.19: Cascade of classifiers............................................................................. 34
Figure 3.20: Features selected by AdaBoost .............................................................. 36

Figure 3.21: Elastic bunch graph matching face recognition..................................... 39
Figure 3.22: Face image transformation .................................................................... 44
Figure 3.23: Distribution of faces in image space...................................................... 44
Figure 3.24: Faces in face space ................................................................................ 45
Figure 3.25: Overview of the Eigenface Approach ................................................... 46
Figure 3.27: Representation of A x = b ...................................................................... 47
Figure 3.26: Vector .................................................................................................... 47
Figure 3.28: Representation of A v = λ v ................................................................... 48
Figure 3.29: Training Images ..................................................................................... 50
Figure 3.30: Example images from the ORL database .............................................. 53
Figure 3.31: Mean face obtained from the ORL database ......................................... 53
Figure 3.32: Eigenfaces .............................................................................................. 54
Figure 5.1: SDLC Model ........................................................................................... 65
Figure 5.2: Incremental model ................................................................................... 66
Figure 5.3: Hybrid of waterfall and incremental model ............................................. 66
Figure 5.4: Prototyping model ................................................................................... 67
Figure 5.5: Evolutionary Prototyping ........................................................................ 68
Figure 6.1: Dot net Framework .................................................................................. 73
Figure 6.2: Dot net Code Execution process .............................................................. 73
Figure 6.3: Java Architecture ..................................................................................... 77
Figure 7.1: Use Case Diagram ................................................................................... 87
Figure 7.2: System Architecture ................................................................................ 95
Figure 7.3: Activity Diagram – Add Record.............................................................. 96
Figure 7.4: Activity Diagram – View Record ............................................................ 97
Figure 7.5: Activity Diagram – Training Image Space .............................................. 98
Figure 7.6: Activity Diagram – Find Match............................................................... 99
Figure 7.7: Activity Diagram – Automatic Face Region Verification ..................... 100
Figure 7.8: Activity Diagram – Pre-processing/Normalization ................................ 100
Figure 7.9: Class Diagram ....................................................................................... 101

LIST OF TABLES

Table 3.1: Detection rates for various numbers of false positives on the test set ...... 34
Table 3.2: Comparison of face detection approaches ................................................. 38
Table 3.3: Recognition results between different galleries using EBGM .................. 40
Table 3.4: Recognize and Detect Faces ..................................................................... 56
Table 6.1: Comparison of programming languages ................................................... 76
Table 6.2: Comparison of matrix libraries ................................................................. 81
Table 7.1: Use case description – Add New User ...................................................... 89
Table 7.2: Use case description – View Record ........................................................ 90
Table 7.3: Use case description – Train Image Set .................................................... 91
Table 7.4: Use case description – Find Match ........................................................... 92
Table 7.5: Use case description – Set Configuration ................................................. 93
Table 7.6: Use case description – Modify Record ..................................................... 94
Table 8.1: Test Case: Eye region detection ................................................................
Table 8.2: Test Name: Eye region Extraction ............................................................

LIST OF ABBREVIATIONS

1D One Dimensional
2D Two Dimensional
3D Three Dimensional
ADO ActiveX Data Objects
API Application Programming Interface
ASP Active Server Pages
CFA Correlation Feature Analysis
CPU Central Processing Unit
EBGM Elastic Bunch Graph Matching
GUI Graphical User Interface
KCFA Kernel Correlation Feature Analysis
LDA Linear Discriminant Analysis
LINQ Language-Integrated Query
MACF Minimum Average Correlation Filter
MSE Mean Square Error
NSTC National Science and Technology Council
OOP Object Oriented Programming
PC Personal Computer
PCA Principal Component Analysis
SDLC System Development Life Cycle
SQL Structured Query Language
SVM Support Vector Machine
VB Visual Basic

CHAPTER 1

1 INTRODUCTION

1.1 Project Background

In the modern world, security is one of the main concerns. Threats to society are rising significantly with the increasing rate of crime and terrorist activity. Because of that, surveillance systems are widely used to protect the lives and property of citizens.

There are different ways of identifying a particular person. Biometric identification approaches have attracted huge interest because the biometric traits of a person are unique, giving highly accurate results. Among biometric approaches such as fingerprint recognition, palm recognition, iris recognition and voice recognition, face recognition plays an especially important role because it does not require people's cooperation: people do not need to look into an iris scanner, place their hands on a fingerprint reader, or speak into a nearby microphone. Hence, face recognition can be very useful on footage captured by surveillance and security applications.

Because of that, the use of face recognition in surveillance systems has increased significantly in places such as buildings with restricted access, airports and banks, to strengthen security.

Automated face recognition and computer vision is one of the favourite areas of researchers because it relates not only to image processing but also to artificial intelligence, which is considered the next generation of computing. Humans can identify familiar faces regardless of variations in viewing conditions, expression, ageing and distractions such as glasses or masks, but a computer cannot understand what it sees through a camera and recognize faces in the same way. However, as mentioned by Zhang et al. (2007), several approaches to face recognition have been invented by researchers and vendors. Still, there is much room for improvement in automated face recognition.

1.2 Understanding the Problem

However, in the real world it is not always possible to capture a full frontal picture of a face in uncontrolled environments. Even though many face recognition systems are available, most of them work only under optimal conditions; in particular, without a full frontal face these systems fail to recognise a face. As a result, most systems cannot give an accurate face match, and there can be many complications in identifying a person in an image.

As an example, a news article by Willing (2003) reports the failure of face recognition systems deployed at Logan Airport in the USA. Willing (2003) also stated that similar projects had to be shut down because they were incapable of giving accurate face matches, for various reasons.

One of the reasons identified was that criminals intentionally alter their appearance with disguises to deceive law enforcement and the public. Also, because of cultural influences and environmental factors, it is sometimes not possible to expose the full face. In those situations, face recognition approaches normally fail to give accurate results.

There are various reasons for the failings of current systems. One is that they cannot match disguised faces against full faces: current approaches are not capable of identifying individuals from partial face regions, which carry fewer characteristics than full faces.
Figure 1.1: Altered faces.

(a) Muslim girl wearing a veil [Source: Chopra, 2007]. (b) Women wearing masks [Source: Tarlow, 2007]. (c) Person wearing sunglasses [Source: University of Buenos Aires, n.d.].

As mentioned before, most face recognition solutions cannot recognise faces that are intentionally occluded. Therefore, an effective and robust face recognition system should be capable of identifying partially occluded faces in addition to regular ones.

1.3 Sub Problems

Automated face recognition is a sequence of process. As mentioned in


MyHeritage.com (2006) the following diagram shows how face recognition systems
are functioning.

Acquire images from video or still camera → Detect and extract the face area from the image → Match the extracted face against images in the database

Figure ‎1.2: Normal face recognition process

As the above shows, many sub-steps are performed within these main steps. Although there are many approaches for full-face recognition, the proposed solution deviates from the regular ones: the overall face recognition process still applies, but the internal sequence of processing differs from regular face recognition.

1.4 Project Objectives

This face recognition system will identify individuals based on the characteristics of separate face segments. The objectives of the project are as follows.
 Investigate the unique facial features of the eye, nose and mouth regions for recognising individuals.
When it comes to separate face regions, there are fewer unique features available to identify individuals; identifying such unique features is pursued throughout this project.
 Improve the capability of detecting features in local segments of the face.
It is necessary to find an efficient algorithm to extract features from the face segments.
 Implement a robust, efficient face recognition system based on the findings of the research.
Thorough research is carried out on face recognition techniques and the algorithms available for partial face recognition, in order to choose an appropriate method and implement a face recognition system based on it.

1.5 Proposed Solution

The proposed solution takes one segment of a face (eye, nose or mouth region) at a time and identifies which region has been submitted. Based on the input region, it extracts the features that are unique to that region. It then extracts the same face region from the faces in the database. After that, it matches the features extracted from the two face regions.

[Figure: the face region identifier classifies the submitted image, the feature extractor produces the submitted face-region features, the face region extractor pulls the corresponding region and its features from each database face, and the face matcher compares the two sets of features to produce the results.]

Figure 1.3: Proposed solution overview
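To make this flow concrete, the following is a minimal structural sketch of the proposed components, assuming a Python implementation; every class, method and variable name here is hypothetical, chosen only to mirror the diagram above, and is not the final design.

class FaceRegionIdentifier:
    """Decides whether a probe image is an eye, nose or mouth region."""
    def identify(self, probe_image):
        ...

class FeatureExtractor:
    """Extracts the features that are unique to a given face region."""
    def extract(self, region_image):
        ...

class FaceRegionExtractor:
    """Cuts the matching region out of each full face in the database."""
    def extract_region(self, face_image, region_type):
        ...

class FaceMatcher:
    """Compares probe features against features from a database face."""
    def match(self, probe_features, gallery_features):
        ...

def recognise(probe_image, database_faces):
    # Identify the submitted region, extract its features, then match it
    # against the same region extracted from every face in the database.
    region_type = FaceRegionIdentifier().identify(probe_image)
    probe_features = FeatureExtractor().extract(probe_image)
    extractor, matcher = FaceRegionExtractor(), FaceMatcher()
    scores = []
    for face in database_faces:
        region = extractor.extract_region(face, region_type)
        scores.append(matcher.match(probe_features,
                                    FeatureExtractor().extract(region)))
    return scores  # the best score identifies the individual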

1.6 Project Scope

The scope of this project is to develop a system that can recognise individuals using partial regions of the face. Initially the project is based on recognising individuals from submitted face regions.

The scope is as follows:

 This project focuses on identifying individuals using only the eye region; as extra functionality, it will attempt to extend this to the mouth region.
 All images are taken at a 0-degree camera angle, meaning they are frontal view images taken in a controlled environment.
 Eyes should be open and mouths closed in all faces, in both the input image and the database images. All faces should have a neutral expression.
 It is assumed that the input image of the system (the image to be recognised) is pre-scaled, so no rescaling is necessary.

Furthermore, the solution will not work in the following situations:

 Disguised faces from which at least one of the eye, nose or mouth regions cannot be extracted.
 Different poses of face regions (taken at different angles).
 Low-quality images (low resolution, etc.).
 Different facial expressions.

1.6.1 Functionality of the Artefact

1.6.1.1 Core Functionality

The core functionality of the artefact can be identified as follows. The main functionality of the solution is to "recognize individuals using partial face segments".

This research and implementation focus on face recognition using a person's eye region; hence, whatever is referred to as partial face recognition from here onwards means partial face recognition using the eye region.

Furthermore, it can be divided as follows.

Input identification

This identifies whether the input region is a face region and, if so, which face region it is.

For example, when a user inputs an eye region, the system first checks whether it is a face region at all; if it is, it identifies which face region it is.

Face region detection

This detects the particular face region in the full frontal faces in the database and extracts that region from each face.

Face match

This matches the submitted face region against the face regions extracted from the database and provides the match result.

1.6.1.2 Extra functionality

 Face recognition using the mouth region

 Full frontal face recognition
Detect and extract the different face regions from a submitted full frontal face and recognize the individual based on those partial face regions.

The proposed artefact comprises a standalone application that runs on a PC, together with a project report that details the project.

1.7 Constraints

1. Camera view
2. Inadequate resolution of the images
3. Lighting condition
4. Scale of the image
5. Distance between camera and person
6. Facial expressions
7. Occlusion
8. Ageing

1.8 Assumptions

1. The input images submitted to the system and the images in the database are at the same scale.
2. The input image has been extracted manually to the given dimensions.
3. Images are taken in a controlled environment, which avoids differences in lighting, viewpoint and camera-to-person distance.
4. All images have the same resolution.
5. All faces are captured with a neutral facial expression.
6. The face areas used for recognition are not occluded.
7. All images of a person are assumed to be taken at the same age, so the distances between the facial features do not change.

CHAPTER 2

2 DOMAIN INVESTIGATION

2.1 Outline of the generic face recognition system

Typical face recognition systems recognize faces based on various factors and technologies. Although different systems use different approaches, most of them perform a few key steps that are common to most face recognition approaches.

As mentioned by Zhao et al. (2003), a generic face recognition system consists of three main processing steps: face detection, which detects and separates the face region from the submitted face image or video (an image sequence); feature extraction, which identifies and extracts features from the submitted images, where a feature can be "local features such as lines or fiducial points, or facial features such as eyes, nose, and mouth" (Zhao et al., 2003); and finally face recognition, which matches the input image against the faces in the database.

Input images and video → Face detection → Feature extraction → Face recognition

Figure 2.1: Configuration of generic face recognition
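As a rough illustration of these three steps, the sketch below uses OpenCV's Python bindings (an assumed tooling choice, not the system described in this report; the file name is a placeholder, and the bundled Haar cascade is assumed from the opencv-python package) to acquire an image and detect the face region, after which feature extraction and recognition would operate on the detected region.

import cv2

# 1. Acquire the input image (a frame from video or a still camera).
image = cv2.imread("probe.jpg")
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# 2. Face detection: locate and separate the face region.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)

# 3. Feature extraction and recognition would operate on each face region,
#    e.g. projecting it into a subspace and matching it against a gallery.
for (x, y, w, h) in faces:
    face_region = grey[y:y + h, x:x + w]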

2.2 Similar systems

Face recognition systems date back over five decades, starting in the 1960s. At present, face recognition is highly developed and has gained attention from areas such as surveillance and biometric identification.

Because of that, many systems have been developed using face recognition technology, but during the research the researcher could not find a system capable of partial face recognition.

2.2.1 Google Picasa

Google Picasa is free photo organizing and editing software from Google. One of the newest features of Picasa is face recognition. According to Baker (2006), Google Picasa uses a face recognition approach called "Neven Vision" to recognise faces.

Baker (2006) also mentioned that Neven Vision is a very advanced technology covered by over 15 patents. Furthermore, Neven Vision is used not only for face recognition but also for object recognition.

Figure ‎2.2: Google Picasa

[Source: Waltercedric, n.d. ]

2.2.2 Apple iPhoto

Apple iPhoto can be considered an alternative to Google Picasa in the Mac OS environment, and it has similar features. The Apple iPhoto product page describes its face recognition feature as follows.

Apple Inc (2010) stated that “iPhoto introduces Faces: a new feature that
automatically detects and even recognizes faces in your photos. iPhoto uses face
detection to identify faces of people in your photos and face recognition to match
faces that look like the same person”.

Figure 2.3: Apple iPhoto face recognition

[Source: Lee, 2009]

2.2.3 Face.com automatic Facebook photo tagger

According to Bil (2010), Face.com provides automatic face tagging for Facebook photo albums. It checks all the photos in a user's Facebook collection and tags the user and their friends.

Figure ‎2.4: Face.com Photo Tagging

[Source: Bil, 2010]

Figure 2.5: Facebook Photo Tagging

During experimentation, the researcher identified that this application gives poor results, especially when it tries to identify images taken under different lighting conditions.

2.3 Problems of face recognition systems

Even thought face recognition is five decades old. In study filed of face recognition
system there are lot of unsolved problems and biometric identification area. At times
most of the approaches of the face recognition, which is currently in use and in
experimenting stages, can only give solutions for few problems in face recognition.
During the research, the researcher could not find a robust face recognition
algorithm, which gives 100 % accurate results.

Generally, the following situations can be categorized as problems in face recognition.

2.3.1 Pose variance

Pose variance means differences in the camera angle at which photos (images) are taken.

Figure 2.6: Different poses

[Source: Audio Visual Technologies Group, n.d.]

2.3.2 Lighting/Illumination variance

Different lighting conditions affect recognition.

Figure ‎2.7: Different illuminations

[Source: Audio Visual Technologies Group, n.d.]

2.3.3 Different facial expression

Figure ‎2.8: Different facial expression

[Source: Audio Visual Technologies Group, n.d.]

2.3.4 Different facial occlusions

Figure ‎2.9: Different facial occlusions

[Source: Audio Visual Technologies Group, n.d.]

2.3.5 Individual characteristics

Face recognition systems might depend on particular facial characteristics such as skin colour. A system designed for Caucasian faces might not be capable of handling faces of other ethnicities.

In addition, gender can be another factor: a system tuned, for example, to identify female lips might not work for males.

2.3.6 Scale Invariance and robustness

The scale of the face depends on the distance between the person and the camera; therefore, a system may not be capable of handling faces captured at different distances.

Apart from that, face recognition requires a lot of computational power, which must be considered: high-end computers capable of handling such processing exist, but they are fairly expensive.

CHAPTER 3

3 SYSTEM RESEARCH AND INVESTIGATION

3.1 Fundamentals of Image processing

Digital image processing can be considered one of the most interesting areas of computer vision. Image processing is based on mathematical concepts. The following section covers the fundamentals of image processing.

3.1.1 Image classifications

According to Jähne (2005, p. 32) there are entirely different ways to represent an image. The most popular is a rectangular grid. Gonzalez (2004, p. 1) describes this further: a digital image can be represented as a matrix, i.e. as a 2D function f(x, y) where x and y give the coordinates in the geometric plane.

Figure ‎3.1: Image representation in plane, As a matrix

[Source: Gonzalez, 2004, pp 01] & [Source: Jähne , 2005 , pp32]
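As a brief illustration of this representation, a greyscale image loaded with OpenCV (assumed here purely for demonstration; the file name is a placeholder) is exactly such a matrix of intensity values:

import cv2

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name
print(img.shape)   # (rows, columns) of the matrix, e.g. (112, 92)
print(img[0, 0])   # f(0, 0): the intensity of the top-left pixel, 0-255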

According to Gonzalez (2004, p. 1), images can be categorized as follows.

3.1.1.1 Raster Image

As defined by Busselle (2010), raster images are known as "bitmap" images. They are made up of a collection of dots called pixels. Pixels are tiny squares that carry the colour information for a particular location in the image. These images are resolution dependent.

3.1.1.2 Vector Image

As defined by Busselle (2010), vector images are collections of connected lines and curves, each defined by a mathematical formula. These images are therefore resolution independent, since the formulas can be re-evaluated at any scale.

3.1.1.3 Binary Image

Binary images represent an image using only 0s and 1s: object pixels are represented as "1" and background pixels as "0". By thresholding, a digital colour image can be converted into a binary image. Thresholding is a method used to segment an image.

3.1.2 Image segmentation

According to Grady & Schwartz (2006), "Image segmentation has often been defined as the problem of localizing regions of an image relative to content".

According to Biswas (2008), image segmentation approaches can be divided into two categories, as follows.

[Diagram: image segmentation divides into discontinuity-based and region-based approaches]

Figure 3.2: Image segmentation approach

Furthermore, Biswas (2008) and Yang (2004) describe discontinuity-based image segmentation as focusing on the detection of isolated points, lines and edges, while the region-based (similarity-based) approach focuses on thresholding.

Spann (n.d.) mentioned that the grey level histogram can be considered one of the most popular image segmentation methods.

3.1.2.1 Grey level histogram-based segmentation

Image histogram is a “graph which shows the size of the area of the image that is
captured for each tonal variation that the camera is capable of recording” (Sutton,
n.d).

Spann (n.d.) mentioned that thresholding and clustering are image segmentation techniques based on grey level histograms.

Noise is unwanted external information in an image. Noise should first be filtered out of the image, or else accounted for when processing the image.

Please refer to the appendix for identifying noise from the histogram.
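For illustration, the grey level histogram that the following segmentation methods operate on can be computed as below (a minimal sketch assuming numpy and OpenCV; the file name is a placeholder):

import cv2
import numpy as np

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
# One bin per grey level; hist[g] = number of pixels with grey value g.
hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
# A valley between the object peak and the background peak of hist
# suggests a suitable threshold for the methods described below.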

3.1.2.2 Thresholding

“The operation known as “simple thresholding” consists in using zero for all pixels whose level of grey is below a certain value (called the threshold) and the maximum value for all the pixels with a higher value” (kioskea.net, 2008).

As mentioned previously, this is the most popular method of image segmentation. As described by kioskea.net (2008), it classifies an image based on its pixel values: it checks whether each grey value is greater or less than the threshold, and converts the image to binary by assigning 1 or 0 to every pixel.

3.1.2.2.1 Grey level thresholding
Grey level thresholding decides whether a particular grey pixel belongs to the object or to the background. Using this method, it is possible to separate an object from the background.

Figure ‎3.3: Low noise object/background image histogram

[Source: Spann ,n.d]

Spann (n.d.) defined the grey level thresholding algorithm as follows:

if greylevel(p) <= T then
    pixel p is an object pixel
else
    pixel p is a background pixel
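Spann's rule translates directly into a vectorized operation; the following is a minimal sketch assuming numpy:

import numpy as np

def grey_level_threshold(img, T):
    # Pixels at or below the threshold T are labelled object (1),
    # all remaining pixels are labelled background (0).
    return (img <= T).astype(np.uint8)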

3.1.2.2.2 Determine threshold
Spann (n.d.) mentioned that there are several approaches to determining the threshold, of which the following are of most interest.

 Interactive threshold
 Adaptive threshold
 Minimisation method

Please refer to the appendix for a detailed explanation of the approaches to determining the threshold.
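As an illustration, OpenCV (an assumed library; the file name is a placeholder) provides ready-made versions of two of these ideas: Otsu's method, a minimisation approach that picks a single global threshold from the histogram, and adaptive thresholding, which recomputes the threshold per neighbourhood:

import cv2

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)

# Global threshold chosen automatically by Otsu's minimisation method.
T, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Local (adaptive) threshold: T varies across the image, which helps
# under uneven lighting; the block size (11) and offset (2) are
# illustrative values.
adaptive = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 11, 2)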

3.1.3 Edge detection

Neoh and Hazanchuk (2005) mention that "edge detection is a fundamental tool used in most image processing applications to obtain information from the frames as a precursor step to feature extraction and object segmentation". They further explain that "this process detects outlines of an object and boundaries between objects and the background in the image". According to Green (2002a), "edge detecting an image significantly reduces the amount of data and filters out useless information, while preserving the important structural properties in an image".

Green (2002a) categorized edge detection approaches into two main categories: gradient and Laplacian. According to Green (2002a), the "gradient method detects the edges by looking for the maximum and minimum in the first derivative of the image", while "the Laplacian method searches for zero crossings in the second derivative of the image to find edges".

Edge detection is related to thresholding: once the gradient image is computed, a threshold is applied to it to decide whether an edge is present at each point. If the threshold is low, more edges are found; a high threshold may cause edges to be lost.

3.1.3.1 Canny’s edge detection

According to Fisher et al. (2003), Canny's edge detection is an algorithm regarded as an optimal edge detector. Furthermore, Fisher et al. (2003) mention that "it takes as input a gray scale image, and produces as output an image showing the positions of tracked intensity discontinuities".

According to Green (2002b), there are a few criteria that should be considered.

 The detector should not respond to non-edges, and real edges should not be missed: the algorithm must identify (mark) all the real edges in the image.
 The edge points must be well localized: the distance between the detected edge pixels and the true edge in the real image must be minimal.
 Noisy images must be identified before applying the edge detection algorithm, and filtered first using appropriate filters.

Green (2002b) applies the following steps to detect edges using Canny's edge detection.

Step 1: Filter out any noise in the original image.

Green (2002b) uses a Gaussian filter to remove the noise.

Step 2: Find the edge strength.

Green (2002b) achieves this by finding the gradient of the image, performing the Sobel operator's 2-D spatial gradient measurement on the image.

Step 3: Find the edge direction.

Green (2002b) states that this is a trivial task once the gradients are known: the direction is found from the x and y gradients as arctan(Gy/Gx).

Step 4: "Once the edge direction is known, the next step is to relate the edge direction to a direction that can be traced in an image" (Green, 2002b).

Step 5: After the edge directions are known, non-maximum suppression is applied (Green, 2002b).

Step 6: Hysteresis is then used to determine the final edges, by suppressing all edge pixels that are not connected to a selected strong edge (Green, 2002b).
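In practice these steps condense into a couple of calls; the sketch below assumes OpenCV, whose cv2.Canny performs the gradient, non-maximum suppression and hysteresis stages internally, so only the noise filtering of step 1 is explicit (the file name and threshold values are illustrative):

import cv2

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)

# Step 1: remove noise with a Gaussian filter.
smoothed = cv2.GaussianBlur(img, (5, 5), 1.4)

# Steps 2-6: gradient, direction, non-maximum suppression and hysteresis,
# with 50/150 as the low/high hysteresis thresholds.
edges = cv2.Canny(smoothed, 50, 150)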

Figure ‎3.4: Canny edge detection process

[Source: szepesvair, 2007, pp19]

3.1.3.2 Sobel Edge Detection

According to Green (2002a), "The Sobel operator performs a 2-D spatial gradient measurement on an image. Typically, it is used to find the approximate absolute gradient magnitude at each point in an input greyscale image".

Green (2002a) and Biswas (2008) describe the Sobel detector further as an algorithm that performs two-dimensional gradient measurements on an image. The technique works by calculating the estimated gradient magnitude at each point in a grey scale image.

In his article, Green (2002a) mentions that the Sobel edge detector uses a pair of 3 x 3 convolution masks, one estimating the gradient along the x-axis and the other along the y-axis. The mask is smaller than the actual image, so it is slid over the image, manipulating one square of pixels at a time.

The following figures show the masks and an example of a Sobel template.

Figure ‎3.5: Masks

[Source: szepesvair, 2007, pp14]

Figure 3.6: Sobel Template

[Source: szepesvair, 2007, pp14]

Green (2002a) presents the following formulas.

The magnitude of the gradient is $|G| = \sqrt{G_x^2 + G_y^2}$, and the approximate magnitude commonly used in practice is $|G| \approx |G_x| + |G_y|$.
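A short sketch of the operator, assuming OpenCV and numpy (the file name is a placeholder); cv2.Sobel convolves the image with the 3 x 3 masks shown above:

import cv2
import numpy as np

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)

gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)  # gradient along the x-axis
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)  # gradient along the y-axis

magnitude = np.sqrt(gx ** 2 + gy ** 2)   # |G| = sqrt(Gx^2 + Gy^2)
approx = np.abs(gx) + np.abs(gy)         # |G| ≈ |Gx| + |Gy|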

It was identified that, by using grey-level histogram segmentation and applying an edge detector such as the Canny detector, it is possible to extract face regions. In the researcher's view, this method can be useful for extracting face regions from the face, but it might not give accurate results, because the histogram threshold value may depend on gender and ethnicity.

3.2 Face recognition approaches

The first face recognition approach was introduced by Sir Francis Galton in 1888 (Kepenekci, 2001) and was based on measuring four characteristics of French prisoners. Galton's attempt to identify persons in a more scientific way laid the foundation of biometric recognition, which in turn led to the development of face recognition.

As mentioned before, there are different approaches to face recognition. These can be divided into two main groups based on image type: two-dimensional and three-dimensional face recognition.

3.2.1 Three dimensional face recognition

3D model based face recognition can be considered the latest trend in face recognition. As mentioned by Bonsor & Johnson (2001), this approach uses distinctive features of the face, such as the depth of facial features and the curves of the eye sockets, nose and chin, to recognise a face.

Akarun, Gökberk & Salah (2005) state that three dimensional face recognition has a higher accuracy rate than traditional two dimensional face recognition, but it also has a few disadvantages compared with the traditional approach, such as implementation cost, unfamiliarity of the technology and a lack of hardware, especially camera equipment. Because of that, 2D face recognition is still in higher demand.

As mentioned by Gökberk (2006), different approaches are used in 3D face recognition. Point cloud-based, depth image-based, curve-based, differential geometry-based, facial feature-based geometrical and shape descriptor-based approaches can be considered the most popular approaches in 3D face recognition.

2D (two-dimensional) face recognition, which has developed over 50 years, can be considered a good alternative to 3D face recognition. The following section describes the 2D face recognition approach.

3.2.2 Two dimensional face recognition

Two dimensional face recognition can be considered the oldest and most popular method currently used in the field. This approach processes 2D images taken by regular cameras (typically the cameras in security systems) and identifies faces using various techniques. As mentioned by the NSTC Subcommittee on Biometrics (2006), some algorithms use facial landmarks to identify faces, while others use normalized face data to identify the probe image.

Based on the explanations of the NSTC Subcommittee on Biometrics (2006), 2D face recognition approaches can be categorized as follows.

The geometric based approach, in other words the feature based approach, considers facial features such as the nose, mouth and eyes to identify faces.

The photometric based approach, or view based approach, treats images as sets of pixels and compares the characteristics of those pixels in both images.

3.2.3 Geometric based approaches

Geometric based face recognition techniques involve processing and computing facial landmarks; in other words, they use features such as the relative positions of the nose, mouth and eyes, the distances between landmarks, etc. to identify individuals.

As stated by Brunelli and Poggio (1993), the idea behind geometric face recognition is to "extract relative position and other parameters of distinctive features such as eyes". Once the features are found, they are matched against the feature details of known individuals to find the closest match. According to Kanade (1973), this research achieved a 45-75% recognition rate over 20 test cases. According to Brunelli and Poggio (1993), the disadvantage of geometric face recognition compared with view based face recognition is the difficulty of extracting the facial features.

The following image shows some of the facial features that can be used to recognize faces.

Figure ‎3.7: Geometrical features

[Source: Brunelli & Poggio , 1993]

However, with modern techniques and approaches, facial feature detection has improved greatly compared with the old days. As mentioned by Bagherian, Rahmat and Udzir (2009), feature extraction can be done by different approaches, such as colour segmentation, or the template based approach, which uses pre-defined images to detect facial features and their relative locations.

With the development of appearance based face recognition, the geometric approach has drifted away from face recognition itself, and its techniques have been applied to other areas such as facial expression detection.

3.2.4 Appearance based approach

The second face recognition approach is appearance based face recognition, which is quite popular among researchers and industries related to machine vision. In this approach, the probe face is taken as a set of pixels in a matrix and compared with the known individuals. Brunelli and Poggio (1993) describe it as follows: "the image, which is represented as a bi dimensional array of intensity values, is compared using a suitable metric with a single template representing the whole face". This is only an overall picture of the appearance based approach; there are more sophisticated ways of performing appearance based face recognition.

Figure ‎3.8: Template matching strategy

[Source: Brunelli & Poggio ,1993]

Based on the NSTC Subcommittee on Biometrics (2006), statistical face recognition approaches can be categorised as follows: principal component analysis (PCA), linear discriminant analysis (LDA) and elastic bunch graph matching.

The different techniques use different approaches and algorithms in the recognition process. As mentioned by Zhao et al. (2003), there are also hybrid methods such as modular eigenfaces, hybrid LFA, shape-normalized flexible appearance models and component-based methods. The conclusion, then, is that face recognition admits many different approaches, including novel ones that combine several techniques.

During this project, it considers only 2D face recognition because it is not faceable to
find enough resources to do 3D face recognition. In 2D face recognition, it would
constraint on view based face recognition because by using geo metric face
recognition the data (distance between features) which can use to recognize is not
enough.

For example, if the eye region is taken, only a few measures are available, such as the distance between the eye corners and the width of the eyelid. These measures can vary, so this method is not a reliable measure.

3.3 Face region extraction approaches

The author identified several approaches to extracting face features during the research; among them, the following techniques show promising results.

3.3.1 Template matching

According to Latecki (n.d.), template matching compares two images (portions of images) and finds the similarity between them. Nashruddin (2008) elaborated as follows: “Template matching is a technique for finding small parts of an image which match a template image. It slides the template from the top left to the bottom right of the image, and compare for the best match with template. The template dimension should be equal or smaller than the reference image”. He also mentioned that there are many methods for template matching.

Latecki (n.d.) explained template matching using the following diagrams.

Figure 3.9: Template matching

[Source: Latecki, n.d.]

According to Latecki (n.d.), the matching process shifts the template image to every possible position in the source image. It then computes a numerical index that indicates how well the template matches the image at that position. The match is done on a pixel-by-pixel basis.

Latecki (n.d.) described two types of template matching.

3.3.1.1 Bi-Level Image template matching

Bi-level template matching is used to identify whether a particular object is present in a particular source image. Latecki (n.d.) stated that this method is only suitable for giving a yes or no answer (only for checking the availability of an image).

3.3.1.2 Grey-Level Image Template Matching

Unlike bi-level template matching, grey-level template matching checks the degree of difference between the images; correlation can be used to measure this.

Latecki (n.d.) conducted an experiment on five data sets and obtained the following correlation maps.

Figure 3.10: Template

[Source: Latecki, n.d.]

Figure 3.11: Test one Grey-Level Template matching

[Source: Latecki, n.d.]

Figure 3.12: Test two Grey-Level Template matching

[Source: Latecki, n.d.]

Figure 3.13: Test three Grey-Level Template matching

[Source: Latecki, n.d.]

Figure 3.14: Test four Grey-Level Template matching

[Source: Latecki, n.d.]

Figure 3.15: Test five Grey-Level Template matching

[Source: Latecki, n.d.]

Figure 3.16: Test five, template not present

[Source: Latecki, n.d.]

Latecki (n.d.) used the following formula to calculate the correlation:

 ( xi  x )   yi  y 
N 1

cor  i 0
N 1 N 1

 xi  x     yi  y 
2 2

i 0 i 0

x is the template gray level image x is the average grey level in the
template image
y is the source image section ȳ is the average grey level in the source
image
N is the number of pixels in the section (N= template image size = columns *
image rows)

The value cor is between –1 and +1, with larger values representing a stronger
relationship between the two images.

If the correlation value is lower than the correlation values obtained for the template, the template image is not present in the source.

By using grey-level template matching it is possible to match face regions, but it might not give accurate results, because the intensity values of faces can depend on race and gender, and the computational time can be high because template matching uses pixel based calculations. However, by limiting the variance it is possible to apply this approach to face region extraction.
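To make the idea concrete, the following minimal sketch (not taken from the cited works; the file names and the 0.7 threshold are assumptions, and a reasonably recent OpenCV build is assumed) shows how grey-level template matching with the normalised correlation coefficient could be performed:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Load the source image and the template as grey-level images (placeholder names).
    cv::Mat source = cv::imread("face.png", cv::IMREAD_GRAYSCALE);
    cv::Mat templ  = cv::imread("eye_template.png", cv::IMREAD_GRAYSCALE);

    // Slide the template over the source and compute the normalised
    // correlation coefficient (the cor formula above) at every position.
    cv::Mat result;
    cv::matchTemplate(source, templ, result, cv::TM_CCOEFF_NORMED);

    // The best match is the position with the largest correlation value.
    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

    // Reject the match if the peak correlation is below a chosen threshold.
    const double threshold = 0.7; // assumption: must be tuned on a real data set
    if (maxVal >= threshold)
        std::cout << "Region found at (" << maxLoc.x << ", " << maxLoc.y
                  << "), cor = " << maxVal << "\n";
    else
        std::cout << "Template not present in the source image\n";
    return 0;
}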

As an alternative to pixel based calculation, Viola & Jones (2001) proposed a feature based approach, which the following section describes.

3.3.2 Haar-like features - the Viola-Jones face detector

Viola & Jones (2001) proposed an approach to detect objects using a boosted cascade of simple features. They introduced a new image representation called the “integral image”, which, according to Viola & Jones (2001), allows the detector to work on precomputed data rather than on raw images. They then used the learning algorithm AdaBoost to select visual features from a larger set and yield extremely efficient classifiers. Finally, they combined successively more complex classifiers in a cascade structure, which dramatically increases the speed of the detector by focusing attention on promising regions of the image.

In the Viola & Jones (2001) approach, they used features generated by Haar basis functions. According to Viola & Jones (2001), the reason for using features is “that features can act to encode ad-hoc domain knowledge that is difficult to learn using a finite quantity of training data.” Three types of features were used: two-rectangle, three-rectangle and four-rectangle features, which respectively compute the difference between the sums of the pixels within two rectangular regions, the sum within two outside rectangles subtracted from the sum in a centre rectangle, and the difference between diagonal pairs of rectangles.

Figure 3.17: Rectangle features

[Source: Viola & Jones, 2001]

According to Viola & Jones (2001), the reason for using the above-mentioned integral image is that rectangle features can be computed very rapidly from it.
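To spell this out (these are the standard definitions rather than a direct quotation): the integral image ii at location (x, y) contains the sum of the pixels above and to the left of (x, y),

ii(x, y) = \sum_{x' \le x,\, y' \le y} i(x', y')

and once it has been computed, the sum of the pixels inside any rectangle needs only four array references. For a rectangle with top-left corner TL, top-right corner TR, bottom-left corner BL and bottom-right corner BR,

\text{sum} = ii(BR) - ii(TR) - ii(BL) + ii(TL)

which is why rectangle features can be evaluated in constant time regardless of their size.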

AdaBoost is trained to select the features using a number of negative images (which do not contain faces) and positive images (which do contain faces).

Figure 3.18: The features selected by AdaBoost

[Source: Viola & Jones, 2001]

“The two features are shown in the top row and then over layed on a typical training
face in the bottom row. The first feature measures the difference in intensity between
the region of the eyes and a region across the upper cheeks. The feature capitalizes
on the observation that the eye region is often darker than the cheeks. The second
feature compares the intensities in the eye regions to the intensity across the bridge
of the nose” (Viola & Jones, 2001).

In the end, they formed the cascade, which can be viewed as a decision tree in which a network of classifiers filters out negative results and provides an accurate result. The following diagram shows the structure of a cascade.

Figure 3.19: Cascade of classifiers

[Source: Viola & Jones, 2001]

The approach proposed by Viola & Jones (2001) has 38 stages with over 6000 features. They trained each classifier of the cascade with 4916 faces and 10,000 non-faces; all images are 24 x 24 pixels in size.

Table 3.1: Detection rates for various numbers of false positives on the test set

[Source: Viola & Jones, 2001]

This method can be considered a good approach to face region extraction, but identifying individual face regions can be difficult because a face can contain many representations of rectangular features. Therefore, Cristinacce & Cootes (2003) proposed an approach that can be used to detect facial features.
3.3.3 Facial feature detection using AdaBoost with shape constraints

This can be considered an extension of the face detection method proposed by Viola & Jones (2001).

Cristinacce & Cootes (2003) proposed a method for facial feature extraction using AdaBoost with shape constraints to locate the eye, nose and mouth corners in frontal face images.

Their approach can be divided into two main stages: face detection and local feature detection.

3.3.3.1 Face Detection

Cristinacce & Cootes (2003) used the face detection method of Viola & Jones, described previously, to detect faces for feature selection. Cristinacce & Cootes (2003) described it as follows: “The output of the face detector is a image region containing the face, which is then examined to predict the location of the internal face features”.

Cristinacce & Cootes (2003) deviated from the Viola and Jones approach in the selection of negative and positive images when building the AdaBoost template using Haar-like features. Viola & Jones used human faces as positive examples and regions known not to contain a human face as negative examples. According to Cristinacce & Cootes (2003), in their approach the positive examples are image patches centred on a particular facial feature and the negative examples are image patches randomly displaced a small distance from the same facial feature.

3.3.3.2 Local feature detection

Cristinacce & Cootes (2003) applied the same algorithm to locate the local features. The following figure shows a few of the features selected by AdaBoost for the right eye.

Figure 3.20: Features selected by AdaBoost

[Source: Cristinacce & Cootes, 2003]

The other way in which the approach taken by Cristinacce & Cootes (2003) extends that of Viola & Jones (2001) is its use of shape constraints to check candidate feature points. Cristinacce & Cootes (2003) described it as follows: “Firstly a shape model is fitted to the set of points and the likelihood of the shape assessed. Secondly, limits are set on the orientation, scale and position of a set of candidate feature points relative to the orientation, scale and position implied by the global face detector”.

The shape model they used is designed in a similar way to that proposed by Dryden and Mardia (1998, cited in Cristinacce & Cootes, 2003). Cristinacce & Cootes aligned the points into a common co-ordinate frame; according to Cristinacce & Cootes (2003), the resulting distribution is of a multi-variate Gaussian type. They then estimated the probability ps(x) of a given shape and obtained a threshold T from the training data set, so a candidate set of points is accepted as a shape if its probability is greater than the threshold.

Cristinacce & Cootes (2003) also analysed the “range of variation in position of the features relative to the bounding box found by the full face detection”. Furthermore, the feature detectors implemented in their application return a list of candidate points whose shape probability is greater than the threshold.

During their research, they achieved the following recognition rates. According to Cristinacce & Cootes (2003), the maximum time the entire process took was less than 0.5 seconds, and they achieved an 88.8% detection rate: “Feature distance is within 5% of the eye separation for 65% of faces, 10% for 85% of faces and 15% for 90% of faces” (Cristinacce & Cootes, 2003).

This method shows promising results, but implementing it requires a lot of time and effort because of the AdaBoost training. During the technical investigation, however, a method was found to overcome this problem; the implementation method is described later.

3.3.4 Justification on selected technologies

The research discussed the Viola-Jones face detection approach using Haar-like features, the template matching approach, and face detection and facial feature detection using AdaBoost with shape constraints. In the Viola-Jones (Viola & Jones, 2001) approach, features generated by Haar basis functions were used; the features are rectangular areas that represent the binary intensity values of the faces. This method achieved a 76.1% - 96.9% face detection rate. Cristinacce & Cootes (2003) proposed an extended version of the Viola-Jones (Viola & Jones, 2001) approach to detect local features of faces, and achieved better results than the Viola-Jones approach. Kuo & Hannah (2005) proposed a template matching approach for eye feature extraction and achieved 94% for iris extraction and 88% for eye corner and eyelid extraction.

Both the Viola-Jones (Viola & Jones, 2001) and Cristinacce-Cootes (Cristinacce & Cootes, 2003) approaches use AdaBoost to train the Haar cascades, whereas the Kuo & Hannah (2005) approach does not use any neural network style training. Training AdaBoost requires a huge number of positive and negative images, which takes a lot of time and effort, but once the cascade files are created they can be reused for different sets of data.

Kuo & Hannah (2005) mentioned that their approach is relatively inefficient because the algorithm does not work on occluded faces or under pose variance. In addition, the algorithm is capable of identifying only eyes.

The above details can be summarised as follows.

Viola-Jones approach: high detection rate; detects only the face; ANN (AdaBoost) training required.
Cristinacce-Cootes approach: high detection rate; improved to detect the face, eyes, nose and mouth; ANN (AdaBoost) training required.
Template based approach: high detection rate; eye region only; no ANN training required.

Table 3.2: Comparison of face detection approaches

The researcher has identified that there are pre-trained Haar-like feature cascade files that can be used with the Viola-Jones and Cristinacce-Cootes approaches, so AdaBoost training is not required.

Considering the above facts, it was decided to use Haar-like features to detect the face and the face regions, because the comparison of the three techniques above shows a high accuracy rate and efficiency for this approach. The reason for rejecting the template based approach is that the accuracy of template matching depends on the template and the testing set.
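As a minimal sketch of this decision (assuming a reasonably recent OpenCV build; the cascade and image file names are placeholders), a pre-trained Haar cascade can be loaded and applied without any AdaBoost training:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Load a pre-trained Haar cascade (placeholder file name; OpenCV
    // distributes trained cascade files, so no AdaBoost training is needed).
    cv::CascadeClassifier detector;
    if (!detector.load("haarcascade_eye.xml")) {
        std::cerr << "Could not load the cascade file\n";
        return 1;
    }

    cv::Mat face = cv::imread("face.png", cv::IMREAD_GRAYSCALE);
    cv::equalizeHist(face, face); // normalise lighting before detection

    // Run the cascade over the image at multiple scales.
    std::vector<cv::Rect> regions;
    detector.detectMultiScale(face, regions, 1.1, 3);

    // Each rectangle is a detected region that can be cropped and passed
    // to the recognition module.
    for (const cv::Rect& r : regions)
        std::cout << "Region at (" << r.x << ", " << r.y << "), size "
                  << r.width << "x" << r.height << "\n";
    return 0;
}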

3.4 Face recognition methods

3.4.1 Elastic bunch graph matching approach

EBGM (elastic bunch graph matching) is a simple dynamic link architecture implementation. As mentioned by the NSTC Subcommittee on Biometrics (2006), face images have non-linear characteristics that are not addressed by linear methods such as PCA and LDA, which are described next. By using EBGM it is possible to avoid or minimise the impact of non-linear characteristics such as pose, expression and lighting variations in face recognition.

Wiskott et al (1997) presented a geometric, local feature based approach for face recognition known as elastic bunch graph matching. In this approach, faces are represented as labelled graphs. The concept behind the face representation is the Gabor wavelet transformation proposed by Daugman (1989, cited in Wiskott et al, 1997).

In the face graph, nodes represent fiducial points (eyes, nose, etc.) and edges represent the distances between the fiducial points. The points themselves are described by sets of wavelet components (jets).

Figure 3.21: Elastic bunch graph matching face recognition

[Source: Wiskott et al, 1997]

According to Wiskott et al (1997), the image graph of a new face is extracted using an elastic graph matching process that represents the face. It is vital to have accurate node positioning to obtain accurate recognition results, so phase information is used to position the nodes precisely. Object-adapted graphs, which use the fiducial point representation, handle rotations of objects (in this case faces) in depth.

The face graph is extracted based on the bunch graph, which is a combined representation of all model graphs covering a wider variance than a single model does. Wiskott et al (1997) also mentioned the use of separate individual models to cover variations such as differently shaped eyes, mouths or noses, different types of beards, and variations due to sex, age and race.

According to Wiskott et al (1997), the matching process involves a few manually supervised phases to build up the bunch graph. Individuals can then be recognised by the learned system by comparing the image graph with the model graphs obtained from the gallery images in the database; the graph with the highest similarity value is taken.

The method proposed by Wiskott et al (1997) is not specific to human face recognition; they mentioned that it is possible to use this system for other object recognition as well. Furthermore, the method works well under considerable variance caused by size, expression, position and pose changes.

In the experiment they conducted using the FERET database, they achieved a high recognition rate (98%) for frontal-frontal face recognition with and without expression. They achieved a 57% - 81% recognition rate when comparing the half-profile left side of the face with the half-profile right side.

The following table summarises the results achieved by their approach.

Model Gallery        Probe Images       First Rank                First 10 Ranks
                                        Successes   % Success     Successes   % Success
250 fa               250 fb             245         98            248         99
250 hr               181 hl             103         57            147         81
250 pr               250 pl             210         84            236         94
249 fa + 1 fb        171 hl + 79 hr     44          18            111         44
171 hl + 79 hr       249 fa + 1 fb      42          17            95          38
170 hl + 80 hr       217 pl + 33 pr     22          9             67          27
217 pl + 33 pr       170 hl + 80 hr     31          12            80          32

Table 3.3: Recognition results between different galleries using EBGM

[Source: Wiskott et al, 1997]

f: frontal views; a, b: expression; h: half-profiles; p: profiles; l, r: left and right
It has been identified that this method is suitable for multi-view face recognition, but like the geometric approach it cannot be applied to partial face recognition, because there is not enough information for recognition.

3.4.2 Appearance based subspace methods

As stated by Delac, Grgic and Liatsis (2005), several statistical (appearance based) methods have been proposed; among them PCA and LDA play major roles. According to Navarrete and Ruiz-del-Solar (2001), in subspace methods “it is project the input faces onto a reduced dimensional space where the recognition is carried out, performing a holistic analysis of the faces”.

Furthermore, Navarrete and Ruiz-del-Solar (2001) stated that PCA and LDA can be considered projection methods that reduce a high dimensional image space to a low dimensional one. Heseltine (2005) elaborated further: PCA, LDA and other methods can be used for “image subspace projection in order to compare face images by calculating image separation in a reduced dimensionality coordinate space”. He also mentioned that they use “a training set of face images in order to compute a coordinate space in which face images are compressed to fewer dimensions, whilst maintaining maximum variance across each orthogonal subspace dimension” (Heseltine, 2005).

3.4.2.1 What is Subspace

According to Moghaddam (1999), “visual data can be represented as points in a high-dimensional vector space. For example, a m-by-n pixel 2D image can be mapped to x ∈ R^{mn}, a vector, by lexicographic ordering of the pixel elements”. This means an image of m by n pixels can be represented as a matrix / vector with m x n dimensions.

According to Yambor (2000), the training images are projected into a subspace, and the subspaces of all training images are combined into one; the test (probe) image is then projected into that subspace. Each test image is compared with the training images using a similarity or distance measure, and the most similar or closest image is identified as the match.

3.4.3 Support vector machine kernel correlation feature analysis method

Savvides et al (2006) proposed a method to recognise partial and holistic faces based on kernel correlation feature analysis (KCFA) and a support vector machine.

They used correlation filters to extract features from the face images and face regions. The minimum average correlation energy (MACE) filter, which is designed to minimise the average correlation plane energy resulting from the training images (Xie, 2005, cited in Savvides et al, 2006), is used in this approach to produce the correlation peaks needed to match the face and the face regions to be extracted.

However, as Savvides et al (2006) stated, a generic data set is necessary to build a correlation filter, and up to this stage they had not used one. Therefore, they proposed the use of a novel method called class dependent feature analysis to overcome the limitation of building a generic data set.

According to Savvides et al (2006), class dependent feature analysis (CFA) is a method used for dimensionality reduction and feature extraction, similar to PCA (principal component analysis). They further stated that their proposed CFA method achieved a higher success rate than traditional PCA. As mentioned by Savvides et al (2006), PCA addresses the total number of images while CFA addresses the number of classes (here a class means the set of images of the same individual).

According to Savvides et al (2006), they designed one MACE correlation filter and trained the MACE set using 12,776 images of 222 different classes. Because of that, they obtained a 222-dimensional correlation feature vector after projecting the input images onto the correlation filters generated by MACE.

Based on the description provided by Savvides et al (2006), because of non-linear attributes of face images such as lighting conditions and pose, linear subspace methods (PCA and LDA) cannot perform well; non-linear subspace methods instead map the non-linear attributes into a higher dimensional feature space. However, calculations in a higher dimensional feature space cost a lot of computational power, so kernel trick methods are used to compute the higher dimensional feature mapping.

Savvides et al (2006) stated that kernel correlation filters can extend linear advanced correlation filters by applying the kernel trick; they extended their CFA method with the kernel trick and applied it over all the images.

According to Savvides et al (2006), they used a support vector machine as a decision classifier on a distance measure in the KCFA projection space, inputting the KCFA projection coefficients to train the SVM.

By examining distance and similarity threshold values between the training images and the probe image (in KCFA projection coefficients), they identified the best match; that is, they recognised the individuals using this method, which is similar to PCA.

They gained promising results with this approach: the eye region yields a verification rate of 83.1%, compared with 53% obtained using the mouth region and 50% with the nose region.

3.4.4 Eigenface Approach

According to Fladsrud (2005), the eigenface approach can be considered one of the most popular face recognition approaches. It was proposed by Sirovich & Kirby, and the method was refined by Matthew Turk and Alex Pentland, who added pre-processing and a face detection procedure.

As mentioned before, an image can be represented as a vector: if the image width is w and the height is h, then the image can be represented as a w x h dimensional vector.

Figure 3.22: Face image transformation

[Source: Fladsrud, 2005]

X = [x1, x2, x3, ..., xn]^T

where n is the total number of pixels in the image.

The rows of the image are placed one after another to form a vector. This vector belongs to an image space containing all images whose dimensions are w by h; by placing the rows one after another, each such image can be represented as a one dimensional vector, as shown in figure 3.22.

When considering faces, all faces look alike because every face has a mouth, a nose, a pair of eyes, etc., located at approximately the same relative positions. As mentioned by Fladsrud (2005), because of this property all faces (face vectors) are located in a very small region of the image space. Figure 3.23 represents the distribution of faces (face vectors) in image space.

Figure 3.23: Distribution of faces in image space

[Source: O'Toole et al, 1993]

From this it is possible to understand that representing faces in the full image space is a waste of space. O'Toole et al (1993) proposed the use of PCA to find the specific vectors that best capture the distribution of face images. These vectors define the “face space”, which is a subspace of the image space. Furthermore, O'Toole et al (1993) stated that “Face space will be a better representation for face images than image space which is the space which containing all possible images since there will be increased variation between the faces in face space”.

Projecting the faces into face space gives the following distribution of faces in face space.

Figure 3.24: Faces in face space

[Source: O'Toole et al, 1993]

According to O'Toole et al (1993), the vectors of the face space are known as eigenfaces. The simplest method of comparing two images is pixel by pixel, but a 100 pixel by 100 pixel image contains 10^4 pixels, and comparing that many pixels is time consuming and inefficient. As a solution, Kirby & Sirovich (1990) used the Karhunen-Loève expansion, popularly known as principal component analysis (PCA). Kirby & Sirovich (1991) also stated that the main idea behind applying PCA to an image is to find the weights (vectors) that are responsible for the distribution of the face space within the image space.

Because of the application of PCA, it is possible to keep the image data in a more compact way, so comparing two image vectors is less time consuming and more efficient than matching images pixel by pixel.

Figure 3.25: Overview of the eigenface approach

The above figure represents the overall idea behind the eigenface algorithm; the diagram is based on Turk & Pentland (1991). The training set of known images is transformed into a set of eigenvectors (eigenfaces) E, and the weights W of the training set with respect to E are calculated. The weight vector of a new face X is then compared with the weights of the training set. D is the average distance between the weight vectors, e is the maximum allowable distance from any face class, and δ is the Euclidean (L2 norm) distance of the projection. Faces are then identified based on D, δ and e.

3.4.4.1 Variance

Variance is a measure of variability, closely related to the standard deviation; both the standard deviation and the variance measure the spread of the data in a data set.
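In symbols (the standard definition, added for completeness), for a data set x_1, ..., x_N with mean \bar{x}:

\sigma^2 = \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2

and the standard deviation is its square root, \sigma.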

3.4.4.2 Covariance Matrix

Covariance measures the variability of two data sets together. As further described by Weisstein (2010a), “Covariance provides a measure of the strength of the correlation between two or more sets of random variants”.

A covariance matrix is a compact way to represent the covariances between vector elements. In the eigenface method, the covariance matrix is used to extract the eigenvalues and eigenvectors.
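As a standard definition (added for completeness rather than quoted from the source), the covariance of two variables X and Y is cov(X, Y) = E[(X - \mu_X)(Y - \mu_Y)], and for a set of M data vectors v_i with mean \bar{v} the covariance matrix is

C = \frac{1}{M} \sum_{i=1}^{M} (v_i - \bar{v})(v_i - \bar{v})^T

whose eigenvalues and eigenvectors are the quantities used in the eigenface method.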

3.4.4.3 Eigenvector

A vector is a quantity that has both magnitude and direction; it can be pictured as an arrow whose length represents the magnitude and whose head represents the direction.

Figure 3.26: Vector

In normal matrix-vector multiplication it acts as follows. Let A be an n×n matrix, where A is a linear operator on vectors in R^n:

Ax = b

where x and b are n×1 vectors.

Figure 3.27: Representation of Ax = b

[Source: Baraniuk, 2009]

An eigenvector has special properties that a normal vector does not have. As described by Marcus & Minc (1988, p. 144, cited in Weisstein, 2010b), “Eigenvectors are a special set of vectors associated with a linear system of equations (i.e., a matrix equation) that are sometimes also known as characteristic vectors, proper vectors, or latent vectors”.

Baraniuk (2009) further defines an eigenvector as follows: an eigenvector of A is a vector v ∈ R^n such that

Av = λv

where A is the matrix, v is the eigenvector and λ is the corresponding eigenvalue. A changes only the length of v, not its direction.

Figure 3.28: Representation of Av = λv

[Source: Baraniuk, 2009]

3.4.4.4 Eigen value

As noted in the previous section, the eigenvalue is the scale factor of an eigenvector. Based on Reedstrom (2006), the following derivation is presented.

Av=λv

This can be rearranged as

(A − λI)v = 0

A non-trivial eigenvector v exists only if the matrix (A − λI) is singular, i.e. if

det(A − λI) = 0

Solving this characteristic equation gives the eigenvalues of A.
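As a small worked example (added for illustration), take the 2×2 matrix A with rows (2, 1) and (1, 2). Then det(A − λI) = (2 − λ)^2 − 1 = 0 gives λ = 1 and λ = 3, with eigenvectors (1, −1)^T and (1, 1)^T respectively; multiplying A by either vector only scales it by its eigenvalue.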

3.4.4.5 Principal Component Analysis

Principal component analysis (PCA) is also known as the Karhunen-Loève expansion or the Hotelling transformation in different fields. In the image processing field, PCA is an efficient method for feature extraction and dimensionality reduction.

According to Turk and Pentland (1991), using PCA the dimensionality of an image can be reduced with minimal loss of quality, meaning that the mean square error (MSE) between the reduced representation and the original image is minimal. Turk and Pentland (1991) also mentioned that performing PCA on a set of data identifies the patterns of the data and presents them in a way that highlights their similarities and differences.

3.4.4.6 Steps of calculating eigenfaces

The following steps are based on Turk and Pentland (1991, cited in Gül, 2003, and Bebis, 2003).

Computation of eigenfaces

Step 1: Obtaining training faces

Important: the sizes of all the face images must be the same.

Figure 3.29: Training images

[Source: Bebis, 2003]

Step 2: Represent all the 2D faces (training set) as 1D vectors

As shown in figure 3.22, each image should be converted into a 1D vector:

Training set Γ = [Γ1 Γ2 ... ΓMt]

where Γi is the transformed image (2D to 1D). Take an image matrix of size (Nx x Ny) pixels; the image is transformed into a new image column vector Γi of size (P x 1), where P = Nx x Ny. As mentioned in the image space section, this is done by placing each column one after the other.

Step 3: Create mean / average face

After transforming the training images into 1D vectors, the average of the faces is calculated; this average is called the mean face. The mean face is the average of all the training image vectors at each pixel position, and its size is (P x 1).
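In symbols, following Turk & Pentland (1991):

\Psi = \frac{1}{M_t} \sum_{i=1}^{M_t} \Gamma_i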

Step 4: Subtract the mean face

After creating the mean face, the difference between the mean face and each image is calculated; this makes it possible to identify each training set image uniquely. A mean subtracted image is the difference of a training image from the mean image, and its size is (P x 1).

Each subtracted mean face (the difference between the mean face and a training image) is calculated in this way, and the outputs are collected into a matrix A holding all the subtracted mean faces. The size of A is N² x Mt (that is, P x Mt).
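Using the same notation (Turk & Pentland, 1991), each mean subtracted face is \Phi_i = \Gamma_i - \Psi, and these are collected column by column into

A = [\Phi_1 \ \Phi_2 \ \cdots \ \Phi_{M_t}]

of size (P x Mt).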

Step 5: Principle component analysis

After finding the differences (subtracted mean faces), principal component analysis is applied to this set of vectors to find the orthonormal vectors (eigenvectors) and their eigenvalues (λ) that best describe the distribution of the difference data.

Step 6: Covariance matrix calculation

After applying PCA, the corresponding eigenvalues and eigenvectors must be calculated, and to do so the covariance matrix must be calculated first. Following Turk & Pentland (1991), the covariance matrix is built from the difference matrix A and its transpose Aᵀ.

Step 7: Compute Eigenvectors

For a face image of size Nx by Ny pixels, the covariance matrix has size (P x P), where P = Nx x Ny. Processing a covariance matrix of that size consumes a lot of time and processing power, so using it directly in this form is not a good choice.

Therefore, according to Turk & Pentland (1991), if the number of data points M in the image space is less than the dimension of the space (M < N²), there are only M − 1 meaningful eigenvectors. These can be found by solving an M x M eigenvalue problem, which is reasonably efficient compared with solving for the N² x N² matrix, and the calculation is done using linear combinations of the face images.

Turk & Pentland (1991) stated that it is possible to construct the M x M matrix L = AᵀA, which gives M eigenvectors v_l. They also mentioned that the eigenfaces u_l, where l = 1...M, are determined as linear combinations of the M training set images from these eigenvectors; here u_l are the eigenfaces, Φ_k is the difference of the kth image (k = 1...M), and v_lk are the components of the eigenvectors of L.

Turk & Pentland (1991) stated that because of this approach the calculations that need to be done are greatly reduced.

They also described the reduction as going “from the order of the number of pixels in images (N²) to the order of the number of images in the training set (M)”, and in practice the training set of face images is relatively small.
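Written out (following Turk & Pentland, 1991):

C = \frac{1}{M} \sum_{k=1}^{M} \Phi_k \Phi_k^T = A A^T \quad (P \times P)

L = A^T A \quad (M \times M), \qquad L v_l = \mu_l v_l

u_l = \sum_{k=1}^{M} v_{lk} \Phi_k = A v_l, \quad l = 1, \ldots, M

so the eigenfaces u_l of the large matrix C are obtained from the eigenvectors v_l of the small matrix L.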

Step 8: Face recognition using Eigenface

Figures 3.30 and 3.31 show example images taken from the ORL database and the mean image calculated from the ORL faces.

Figure 3.30: Example images from the ORL database

[Source: Gül, 2003]

Figure 3.31: Mean face obtained from the ORL database

[Source: Gül, 2003]

Furthermore, Gül (2003) stated that by using fewer eigenfaces than the total number of faces for the eigenface projection, it is possible to eliminate some of the eigenvectors with small eigenvalues, which contribute less variance to the data.

The following image shows some eigenfaces.

Figure 3.32: Eigenfaces

[Source: Gül, 2003]

Based on Turk & Pentland (1991) and Turk & Pentland (1990), Gül (2003) explained that larger eigenvalues indicate larger variance; therefore the eigenvector with the largest eigenvalue is taken as the first eigenvector, so that the most generalising eigenvectors come first in the eigenvector matrix.

Step 8-1: Calculate the distance of difference

Let Γ be the probe image (which needs to be recognised). The image should be the same size as the training images and should have been taken under the same lighting conditions. As mentioned by Turk & Pentland (1991), the image should be normalised before it is transformed into the eigenface space. Assuming Ψ holds the average value of the training set, the distance of difference from the average value is

Φ = Γ − Ψ

where Φ is the distance of difference, Γ is the captured image and Ψ is the average (mean face) value.

Step 8-2: Projection

The projection is calculated for the probe image against the eigenfaces computed from the training images; following Turk & Pentland (1991), each weight is ω_k = u_k^T Φ.

Step 8-3: Store weights in a matrix

The weights obtained from the projection are collected into the vector Ω = [ω_1, ω_2, ..., ω_M']^T.

Step 8-4: Calculating the Euclidean distance

The Euclidean distance is used to measure the distance between the projection of the input image and the projections (eigenface weights) of the training set; the face with the minimum Euclidean distance is identified as the match:

ε_i = ||Ω − Ω_i||

where ε_i is the Euclidean distance, Ω is the projection value of the probe image and Ω_i is the projection value of the ith training image. The minimum ε_i gives the match.
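Steps 1-8 can be sketched compactly with OpenCV's cv::PCA class, which performs the mean subtraction and eigenvector computation internally. This is a minimal sketch assuming a reasonably recent OpenCV build; the file names and the component count are placeholders, and all images must have the same size:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    // Steps 1-2: load equally sized training faces and flatten each to a 1 x P row.
    std::vector<cv::String> files = {"face1.png", "face2.png", "face3.png"};
    cv::Mat data;
    for (const auto& f : files) {
        cv::Mat img = cv::imread(f, cv::IMREAD_GRAYSCALE), row;
        img.convertTo(row, CV_32F);
        data.push_back(row.reshape(1, 1));
    }

    // Steps 3-7: PCA computes the mean face and the leading eigenfaces.
    const int components = 2; // fewer eigenfaces than images, as Gül (2003) notes
    cv::PCA pca(data, cv::Mat(), cv::PCA::DATA_AS_ROW, components);

    // Step 8: project the probe image and find the nearest training projection.
    cv::Mat probe = cv::imread("probe.png", cv::IMREAD_GRAYSCALE), probeRow;
    probe.convertTo(probeRow, CV_32F);
    cv::Mat w = pca.project(probeRow.reshape(1, 1));

    double best = 1e30; int bestIdx = -1;
    for (int i = 0; i < data.rows; ++i) {
        double e = cv::norm(w, pca.project(data.row(i)), cv::NORM_L2);
        if (e < best) { best = e; bestIdx = i; } // minimum Euclidean distance
    }
    std::cout << "Closest match: training image " << bestIdx
              << " (distance " << best << ")\n";
    return 0;
}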

Step 9: Classify the face image

Using the eigenface approach it is also possible to verify whether an image is a face or not.

Step 9-1: Calculate the distance of difference

First the image has to be normalised and transformed into a 1D vector; then the distance of difference from the mean face is calculated.

Step 9-2: Projection

The projection value is calculated for the probe image against the values calculated from the training images.

Step 9-3: Calculating the Euclidean distance

The Euclidean distance is used to measure the distance between the projection values (eigenfaces of the training set) and the input image; the minimum Euclidean distance identifies the matching face.

Step 9-4: Calculate the threshold

The distance threshold Θ defines the maximum allowable distance from any face class; it is equal to half the distance between the two most distant classes.

“Classification procedure of the Eigenface method ensures that face image vectors should fall close to their reconstructions, whereas non-face image vectors should fall far away.” (Gül, 2003)

The distance measure ε is the distance between the mean subtracted image and its reconstruction from the eigenfaces. Based on Turk & Pentland (1991), an image can be recognised by knowing ε and the class distances ε_i.

If ε > Θ: the image is not a face.
If ε < Θ and ε_i > Θ for all i: the image is an unknown face.
If ε < Θ and ε_i < Θ for some i: the image matches training image i.

Table 3.4: Recognising and detecting faces

ε_i = Euclidean distance to face class i; ε = distance from face space; Θ = distance threshold
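A minimal sketch of this decision logic (the names are illustrative, not taken from any library):

#include <vector>
#include <algorithm>
#include <string>

// Classify a probe image given epsilon (its distance from face space),
// epsilonI (its distance to each known face class) and the threshold theta,
// implementing the three conditions of Table 3.4.
std::string classify(double epsilon, const std::vector<double>& epsilonI,
                     double theta, int& matchIndex) {
    matchIndex = -1;
    if (epsilon > theta) return "not a face";
    auto it = std::min_element(epsilonI.begin(), epsilonI.end());
    if (it == epsilonI.end() || *it > theta) return "unknown face";
    matchIndex = static_cast<int>(it - epsilonI.begin());
    return "known face";
}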

3.4.5 Justification on selected technique

During this research, three approaches to face recognition were discussed: EBGM, the support vector machine based kernel correlation feature analysis method, and the popular eigenface method. Each takes a different approach to face recognition, and each has achieved good results under different conditions.

The EBGM technique treats the face as a graph that represents facial features as nodes and the distances between them as the weights of the connecting edges. As mentioned before, it is capable of identifying human faces under different poses, facial expressions and image scales. According to Wiskott et al (1997), they achieved 12% to 98% recognition rates under different face poses, lighting conditions and facial expressions. Wiskott et al (1997) also mentioned that the EBGM technique can be used to recognise objects other than faces.

The support vector machine based kernel correlation feature analysis method is similar to principal component analysis. Savvides et al (2006) suggested a method to recognise partial face regions (eyes, mouth, nose) using a support vector machine with KCFA. In that method they used class dependent feature analysis (CFA) for dimensionality reduction and feature extraction of the given images, together with techniques such as the kernel trick and kernel CFA. According to Savvides et al (2006), the eye region yields a verification rate of 83.1%, compared with 53% obtained using the mouth region and 50% with the nose region.

The eigenface method can be considered one of the most popular face recognition techniques used in the field. It treats faces as eigenvectors and performs PCA for feature detection and dimensionality reduction, then checks the similarity between the eigenfaces and a new probe image by projecting the probe into the eigenface space. Compared with the other techniques, the eigenface approach is simple, and many sources and development platforms support its implementation. Campos, Feris & Cesar (2000) adapted the eigenface approach to recognise faces using “eigeneyes”; according to Campos, Feris & Cesar (2000), they achieved a 25.00% - 62.50% face recognition rate. Yuen et al (2009) proposed a method of extracting facial features for template matching based face recognition; in their face recognition module they achieved recognition rates of 79.17% for the eye and 51.39% for the mouth. Considering the results obtained by Savvides et al (2006), Campos, Feris & Cesar (2000) and Yuen et al (2009), the following conclusions can be made.

The eye and mouth regions are distinctive enough to be used for recognition, and template matching, kernel correlation feature analysis and the eigenface approach can all be used to recognise faces from partial face regions.

Considering the above analysis, it was decided to use eigenfaces for face recognition. When comparing the eigenface approach with EBGM, the researcher could not find any articles relating EBGM to partial face recognition; moreover, detecting EBGM nodes in partial face regions such as the eye and mouth is not practical, as there would not be enough data to compare faces.

In addition, the EBGM approach is very complex compared with the eigenface approach, so it would take a lot of time and effort to implement; within the allocated time duration it would not be feasible to use EBGM.

Comparing the eigenface method with the support vector machine based kernel correlation feature analysis method, the latter can be considered a novel method while the eigenface method is more stable. The SVM based KCFA method also uses a support vector machine for the distance measure, which takes more computational power than the eigenface approach, and the researcher understood that developing and training a support vector machine within the given time duration is not feasible. Therefore, between the support vector machine based kernel correlation feature analysis method and the eigenface approach, the eigenface approach was selected.

3.5 Approach to the solution

As mentioned above, the solution consists of three main modules: face region extraction, face region identification and face matching.

3.5.1 Approach to face region extraction

To implement this module, Haar-like features will be used to detect the particular face region, which will then be extracted from the face image and processed to create the eigen image.

3.5.2 Approach to face region identification

This is done by projecting the probe image into the eigen image space and measuring the distance and similarity values. The module then decides whether the submitted probe image is a face region or not using the conditions in Table 3.4.

3.5.3 Approach to face region recognition

After identifying the face region, it is projected into the eigen space to determine the best match.
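A compact sketch of how the three modules could fit together (assuming a reasonably recent OpenCV build; the cascade file, image name and region size are placeholders):

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // 3.5.1 Face region extraction: a Haar cascade finds the face region.
    cv::CascadeClassifier cascade;
    if (!cascade.load("haarcascade_eye.xml")) return 1; // placeholder cascade
    cv::Mat probe = cv::imread("probe.png", cv::IMREAD_GRAYSCALE);
    std::vector<cv::Rect> found;
    cascade.detectMultiScale(probe, found, 1.1, 3);
    if (found.empty()) { std::cout << "No face region detected\n"; return 0; }

    // Crop and normalise the region to the size used by the eigen space.
    cv::Mat region;
    cv::resize(probe(found[0]), region, cv::Size(64, 32));

    // 3.5.2 / 3.5.3: `region` would now be projected into the eigen image
    // space built from the registered users (see the cv::PCA sketch in
    // section 3.4.4) and classified using the conditions of Table 3.4.
    std::cout << "Region extracted, ready for recognition\n";
    return 0;
}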

CHAPTER 4

4 REQUIREMENT SPECIFICATION

4.1 System requirements

Since this system is about identifying individuals using partial face regions, it will have simple interfaces for interaction with users. The functional requirements section shows the procedures of the proposed system and what the functional outcomes of the system will be. The non-functional requirements section then describes the performance, usability and security of the system.

4.1.1 Functional requirements

Registration

The registration module will allow the user to input a new record into the system. It will then call the face region extraction module to extract the relevant face segments and process them as input to the face recognition module.

Probe Image input selection

Using this module, the user will be able to input a probe image to search the database.

Image recognition

This feature recognises the input image against the images in the database and identifies the individual based on the input image.

4.1.2 Non-functional requirements

Accessibility

Since this is developed as a PC client application, it will be accessible to only a single user at a time.

Availability

The user can access the system via an executable file or a shortcut.

Accuracy

The system is required to achieve over 50% accuracy for eye pair detection and over 40% accuracy for eye pair recognition.

Response Time

The response time will depend on the detection, extraction and recognition modules. Based on the research, a response time of 4 - 5 s is expected.

Security

Since the target users of this application are law enforcement agencies and surveillance monitoring systems, and the application handles sensitive data, it requires a high security level. Therefore, a login module will be implemented for user login.

4.2 Resource requirements

This part is based on the technical research and investigation described in chapter 6.

The hardware requirements section considers the minimum hardware needed to develop this application and run the development software.

The software requirements section covers the software needed to implement this solution.

4.2.1 Hardware Requirements

MATLAB R2009a requires at least an Intel Pentium 4 or AMD Athlon CPU with 680 MB of hard disk space and 512 MB of RAM.

Visual studio 2008 requires at least 1.6 GHz CPU, 384 MB RAM, 1024x768 display,
5400 RPM hard disk.

The MATLAB image processing toolkit also requires MATLAB, with extra hard disk space.

Emgu CV requires Visual Studio 2008 with a Windows operating system as its minimum requirements.

OpenCV requires any C++ or C programming environment, so Visual C++ is considered as the programming environment.

Considering the above facts, and since this is an image processing project that may require more processing power and disk capacity, the following minimum hardware configuration is suggested.

 CPU 32 bit Intel Core Duo P IV processor with 1.86GHz or similar


 RAM 2 GB RAM and 5 GB virtual memory
 HDD 10 GB minimum free space with 5400 RPM
 Operating system Windows XP, Windows 7*
 Screen 1024x768 display
 Graphics card Intel GMA 950 or higher

* Hardware requirements can vary based on the operating system.

It is assumed that the default components of a PC are available.

4.2.2 Software Requirements

The following minimum software requirements are needed for the implementation of this solution.

 Visual Studio 2008 Professional with Visual C ++ 2008


 OpenCV 2.0.0.a
 Dot net Framework 3.5 or higher

 Windows XP or Windows 7 operating system (Home or Professional edition).

CHAPTER 5

5 DEVELOPMENT METHODOLOGIES

There are different approaches to developing a particular solution; the approach should be selected based on the characteristics and requirements of the project.

The characteristics of the project can be listed as follows:

 The project is short term and will be developed by an individual.
 The features of the artefact can be categorised based on modules.
 It might be necessary to change or enhance some functionalities and features.
 Some modules depend on each other's functionality, hence without completing and testing one module it is not possible to continue with the others.

The next sections of the chapter briefly explain some development approaches and analyse them to select the best approach for the project.

5.1 Waterfall

The waterfall development methodology (system development life cycle) is considered one of the oldest and simplest development methodologies. According to Costulis (2004), the waterfall method consists of 7 phases.

It is a sequential method: the phases start one after another, so the success of one phase directly affects the next. In addition, every phase should be well documented.

Figure 5.1: SDLC model (phases: software concept, requirements analysis, architectural design, detailed design, coding & debugging, system testing)

Costulis (2004) mentioned that it has the following disadvantages:

 Problems are not discovered until system testing.


 Requirements must be fixed before the system is designed - requirements
evolution makes the development method unstable.
 Design and code work often turn up requirements inconsistencies, missing
system components, and unexpected development needs.
 System performance cannot be tested until the system is almost coded; under
capacity may be difficult to correct.

5.2 Incremental Model

The incremental model can be considered an iterative model of software development: each phase is performed incrementally. This model can be considered a hybrid of waterfall and prototyping.

According to SoftDevTeam (2010), the incremental model is suited to the following types of projects:

 Software requirements are well defined, but realisation may be delayed.
 The basic software functionality is required early.

Figure 5.2: Incremental model

[Source: SoftDevTeam, 2010]

5.3 Hybrid model

A hybrid model is a combination of two models: it adopts the required characteristics and procedures of each that meet the requirements of the new project. The following diagram shows a hybrid of the waterfall and incremental software development models.

Figure 5.3: Hybrid of waterfall and incremental model

[Source: SoftDevTeam, 2010]

5.4 Prototyping

Prototyping is one of the most popular software development approaches in the software industry. The prototyping method produces a prototype at the end of each development phase. This helps the development team to identify the customer's requirements better than the initial data gathering alone, and it gives the customer an idea of the development plan, so both the development team and the client have a good understanding of the requirements and the development process.

According to Sommerville (2000, p. 4), prototyping is considered a rapid development methodology. Prototyping also reduces project risk because of the customer and development team interactions mentioned above: at the end of each phase, the customer and other qualified persons evaluate the prototype, identify deviations from the initial project requirement specification, and analyse them, after which suggestions are made. Because of this, the error rate is low and the accuracy of the system is high.

Prototyping mainly consists of three phases:

 Gathering user requirements
 Developing a prototype system
 Obtaining user feedback

Figure 5.4: Prototyping model

According to Albert, Yeung & Hall (2007, p. 352), the prototyping approach has the following variations.

 Evolutionary prototyping

In this approach, one prototype is built and refined until it reaches an appropriate state.

Figure 5.5: Evolutionary prototyping (requirement gathering → quick design → build prototype → evaluate and refine requirements → engineer product)

 Throw-away prototyping

In this approach, a prototype is developed and evaluated to find out the weak points, then the prototype is put away and a new prototype is developed from the beginning. This iterates until a prototype meets the required specifications.

 Incremental prototyping

As mentioned by Albert, Yeung & Hall (2007, p. 352), “In incremental prototyping applications are developed and delivered incremental after establishing the overall architecture of the system. During the prototyping process, requirements and specifications for each Increment are developed and refine... When all the increments are developed, a rigorous integration process is undertaken to ensure that all increments and related software modules are functionally operational with one another.”

5.5 Justification for the selected method

Considering the project characteristics and facts, I decided to use a hybrid method: a combined approach of the prototyping and waterfall models in which the coding and debugging phase of the waterfall model is replaced with an iterative evolutionary prototyping phase.

The reason for selecting this approach is that this project requires good documentation, and the waterfall approach makes it possible to produce rich documentation about the research. This is important because the implementation and accuracy of the solution depend on the research. In addition, this project is conducted by an individual, so it is not practical to work on parallel phases; since waterfall is sequential, it suits this project.

As an alternative to the coding and development stage of the waterfall model, the prototyping approach is suggested because, as mentioned before, this system is based on interconnected modules, and sometimes it is not possible to continue with one module until another is completed. Moreover, the allocated development duration is short, so a rapid development approach is required.

5.6 Developing stages

Phase 1 Academic Research (Week 4 – Week 14)

As mentioned before, good research should be done before going to implementation. The research is divided into 3 main sections.

Domain research (Week 4 – Week 5)

The domain research considers the typical face recognition system structure and similar systems, and identifies the approaches taken by similar systems. This section of the research also studies the problems related to face recognition.

System research (Week 5 – Week 12)

This section studies the image processing and face recognition approaches that can be used to develop the solution.

Technical research (Week 11 – Week 13)

Technical research focuses on the available technologies that can be used for the implementation of the solution.

Phase 2 Analysing (Week 15 - 17)

During this stage, all the details gathered by the research are analysed to identify the technologies required to implement this system.

The functional and non-functional requirements are then defined based on the analysed research details, the project specification, and the feasibility of implementation.

Phase 3 Designing (Week 16 - 18)

The system design is produced in this stage to better understand and manage the system. First, the modules of the system are designed: the logical design gives the overall idea and theory behind the system logic, shaped by comparison with other approaches.

Then the physical design of the approach is produced.

Phase 4 Coding and Debugging

The actual system implementation happens here; at each increment a prototype is delivered.

Phase 4.1 Development Increment 1: Face Pre-processing Module (Week 19 – Week 23)

Phase 4.2 Development Increment 2: Face Region Detection and Extraction Module (Week 23 – Week 25)

Phase 4.3 Development Increment 3: Face Match Module (Week 25 – Week 29)

Phase 4.4 Development Increment 4: Final Integration (Week 29 – Week 30)

Phase 5 Final Testing, Evaluation and correcting (Week 29 – Week 31)

During this stage, tests are performed on each module and on the whole system to check whether the system meets the desired results.

Phase 6 Finalising Documentation and Submission (Week 31 – Week 32)

The final documentation will be prepared during this stage, and the system and documentation will be delivered to the APIIT project board for evaluation.

CHAPTER 6

6 TECHNICAL RESEARCH AND INVESTIGATION

There are different development platforms and APIs for face recognition. Finding an appropriate platform is very important, so this chapter briefly analyses development platforms and APIs.

6.1 Developing platforms

6.1.1 Dot.Net

The Microsoft Dot.Net environment can be considered one of the most popular platforms available in the world. The Dot.Net platform is rich with powerful object oriented languages, functional programming languages, a virtual machine environment and code libraries.

VB.Net, C#.Net and Visual C++.Net are considered the main Dot.Net languages, while WPF (Windows Presentation Foundation) and Silverlight can be considered frameworks that help increase the functionality of programmes. ASP.Net is the Dot.Net platform that enables developers to create web applications, and ADO.Net, the ADO.Net Entity Framework and ADO.Net Data Services provide flexible data access with databases.

Apart from that, technologies like Microsoft SQL, Windows Communication Foundation, Windows Workflow Foundation, Azure, LINQ and AJAX work with the Dot.Net platform.

Dot.Net Framework 3.5 can be considered the most reliable and stable Dot.Net framework currently available; Dot.Net Framework 4.0 is the next version, which is supposed to be released this year.

6.1.1.1 Dot.Net Architecture

Dot.Net runs on top of the Windows API (operating system); the following diagrams show the architecture of Dot.Net and its code execution process.

Figure 6.1: Dot.Net framework

[Source: Northrup, 2009]

Figure 6.2: Dot.Net code execution process

[Source: Northrup, 2009]

6.1.1.2 Languages

VB.Net, C#.Net and Visual C++.Net are the most popular languages in the Dot.Net framework; other than these, J#, F#, IronPython, etc. can also be considered Dot.Net languages.

Visual Basic .Net

VB.Net is an object-oriented language that inherits from VB 6 (Visual Basic 6). VB.Net supports many OOP concepts and operators such as “IsNot”.

According to Microsoft (2010), Visual Basic .Net first appeared in 2001; up to the Visual Studio 2008 release there have been 4 stable versions, and 1 beta version is supposed to be released in 2010.

Visual C++ .Net

Visual C++ is the Microsoft implementation of the C++ language; it fully supports OOP concepts and is supported by a huge number of APIs.

Visual C++ can be considered one of the most mature products in the Microsoft development family. Microsoft Visual C++ includes support for the Windows API and the .Net Framework, which other C++ variants do not have.

Compared with other Microsoft languages like Visual C# and VB.Net, it fully supports pointer based development, meaning it allows direct access to the memory location of a particular variable. This feature makes it possible to increase performance.

Visual C sharp .Net

C#.Net (C sharp) is a popular programming language similar to VB.Net, but C# has more object oriented capability than VB.Net and supports a huge variety of APIs.

According to Căleanu and Botoca (2007), C# is a language based on Java, VB.Net, C and C++. They also mentioned that C# inherits:

- from C, the high performance;
- from C++, the object-oriented structure;
- from Java, the garbage collection and high security;
- from Visual Basic, the rapid development.

Experiments carried out during this research identified that C#.Net supports
pointer-based (by-reference) development in unsafe mode.
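
The fragment below is a minimal illustrative sketch of this feature (it is not
taken from the project code): it swaps two integers through raw pointers and
must be compiled with the /unsafe compiler option.

    // Minimal illustration of C# pointer support in unsafe mode.
    // Compile with: csc /unsafe UnsafeSwap.cs
    using System;

    class UnsafeSwap
    {
        // Swaps two integers through their memory addresses.
        static unsafe void Swap(int* a, int* b)
        {
            int tmp = *a;
            *a = *b;
            *b = tmp;
        }

        static unsafe void Main()
        {
            int x = 1, y = 2;
            Swap(&x, &y);                             // pass addresses directly
            Console.WriteLine("x={0}, y={1}", x, y);  // prints x=2, y=1
        }
    }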

The following table shows a comparison of the above languages, based on an
article written by Voegele (n.d.).

Feature | Java | Visual C sharp.Net | Visual C++.Net | Visual Basic.Net
Object-Orientation | Hybrid | Hybrid / Multi-Paradigm | Hybrid | Partial support
Generic Classes | Yes | Yes | Yes | No
Garbage Collection | Mark and Sweep or Generational | Mark and Sweep or Generational | No | No
Class Variables / Methods | Yes | Yes | Yes | Partial support
Multithreading | Yes | Yes | Libraries | No
Pointer Arithmetic | No | Yes | Yes | No
Language Integration | None | All .Net Languages | All .Net Languages | All .Net Languages

Table 6.1: Comparison of programming languages

6.1.1.3 Developing environment

There are several software products that can be used to develop Dot.Net
applications: Visual Studio.Net, Phrogram and Delphi are commercial products,
while programming environments such as MonoDevelop and SharpDevelop are free
development environments. The speciality of MonoDevelop is that it is the only
development environment capable of running on Linux and Mac as well as Windows.

6.1.2 Sun Java

Sun Java is a software platform built on the Java Virtual Machine and developed
by Sun Microsystems. In the modern world Java is considered a powerful
development platform. One reason for its popularity is that Java is a free
platform, and because of that most developers have adopted Java and extended
the platform.

The Java Virtual Machine supports several third-party languages in addition to
Java, which is the main and fully supported language of the platform. Clojure,
Groovy, JRuby, Rhino, Jython and Scala are the most popular compilers and
interpreters supported by the Java platform.

Java is operating-system independent and capable of running on many operating
systems such as Windows, Linux, Mac and Solaris.

6.1.2.1 Java Architecture

Figure 6.3: Java Architecture [Source: Sun.com, n.d.]

Java is the flagship language of the Java platform and is object oriented. The
technologies and APIs described in the diagram above are compatible with Java.

The J2SE software application development layer is where application
development starts. The relevant components are embedded in that environment,
and by using the development platform mentioned above it is possible to develop
enterprise-level applications.

6.1.2.2 Developing environments

Since Java is free, many development environments have been created for
Java-based development; among them Eclipse and NetBeans are distinctive. Both
allow GUI-based development and integrate with interpreters and compilers for
other languages.

6.1.3 MATLAB – MathWorks

MATLAB was originally developed for numerical computation. However, at present
MATLAB also supports ANNs, image processing and more. MATLAB is both a
programming environment and a language; it is a fourth-generation language
(like SQL). It is possible to integrate MATLAB with Dot.Net, C++ and Java.

6.2 Image processing APIs

An API (Application Programming Interface) is intermediate software designed to
let applications communicate, exchange data and process it. An API consists of
a set of libraries which provide the functionality to perform processing and to
communicate between software components.

In the image processing field the widely used APIs are AForge.NET, OpenCV and
EmguCV.

6.2.1 OpenCV

OpenCV (Open Source Computer Vision) is "a library of programming functions for
real time computer vision" (OpenCV n.d.). OpenCV is one of the most popular
library sets for image processing and ANN training, and it can be integrated
with C, C++ and Python. OpenCV has over 500 optimized algorithms for image
processing, face detection, face recognition, OCR, object tracking, ANN
training, etc.

6.2.2 EmguCV

One of the main disadvantages of OpenCV is that it is not easy to integrate
with the Dot.Net environment. As a solution, the EmguCV wrapper was developed
to support integration between OpenCV and Dot.Net. EmguCV supports Dot.Net
languages such as VB.Net, C#.Net and C++.Net.

6.2.3 Aforge.Net

"Aforge.Net is a C# framework designed for developers and researchers in the
fields of Computer Vision and Artificial Intelligence - image processing,
neural networks, genetic algorithms, machine learning, and robotics"
(Aforge.Net, 2009).

AForge.NET works only with C#, which is one of the drawbacks of this framework.

6.3 Math Related Libraries

The Eigenface algorithm is based on matrix calculations, so a linear-algebra
mathematics library is essential. This section reviews mathematics libraries
that enable matrix calculations.
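
As a reminder of exactly which matrix operations are needed, the core Eigenface
computations, in the standard Turk and Pentland formulation (the symbols below
are the usual textbook notation, not taken from the project code), are:

    \Psi   = (1/M) \sum_{i=1}^{M} \Gamma_i        (mean face of M training vectors)
    \Phi_i = \Gamma_i - \Psi                      (mean-subtracted faces)
    A = [\Phi_1 \; \Phi_2 \; \dots \; \Phi_M],    C = A A^T  (N x N covariance matrix)

In practice the eigenvectors v_i of the much smaller M x M matrix A^T A are
computed instead, and the eigenfaces follow as u_i = A v_i (normalized). A probe
image \Gamma is then described by its projection weights
\omega_k = u_k^T (\Gamma - \Psi), and recognition compares the Euclidean
distance \epsilon_j = ||\Omega - \Omega_j|| between the probe's weight vector
and each stored one.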

System.Math

This is the default Math class that comes with the Dot.Net framework. According
to Microsoft (2010) it supports many mathematical functions, but it does not
support matrix-related calculations. Therefore another alternative has to be
considered to implement the solution.

Math.Net

Math.Net is a free and open-source project developed for advanced mathematical
calculations which System.Math is not capable of handling. According to
Math.NET (2010), "It is a mathematical open source toolkit written in C# for
the Microsoft .Net platform. Math.NET aims to provide a self-contained clean
framework for both numerical scientific and symbolic algebraic computations."

During the experiments it was identified that the Math.Net toolkit provides the
Eigen calculations needed for Eigenface, which can be considered a plus point
of this toolkit. However, compared with other third-party libraries it is
bulkier and more time consuming.

Extreme Optimization Numerical Libraries for .NET

The Extreme Optimization Numerical Libraries for .NET are "a collection of
general-purpose mathematical and statistical classes built for the Microsoft
.NET framework" (Extreme Optimization, 2010).

This library set supports linear algebra, but it is a commercial library which
needs to be purchased. Even though a 60-day trial version is provided, it is
risky to use this type of trial library to implement the product.

CSML – C sharp Matrix Library

CSML is a linear algebra library developed for C# matrix calculations.
According to an article on Code Project (2007), CSML is "a compact and
lightweight package for numerical linear algebra. Many matrix operations known
from Matlab, Scilab and Co. are implemented."

Although CSML supports Eigen value calculation, it is not capable of
calculating Eigen vectors (Code Project, 2007). It is a free library, but it
does not come with source code.

Advanced Matrix Library in C#. NET

This library is one of the most recent libraries, developed by Anas for his
master's thesis. It is a C#-based library. Anas (2010) describes it as follows:
"Matrix Library .NET v2.0 is a totally free advanced matrix library. This
library contains class Matrix which provides many static methods for making
various matrix operations on objects derived from the class or on arrays
defined as double of any dimension."

This library is free and open source, and the author provides e-mail and forum
support on Code Project. Its plus point is that it provides simple methods for
Eigen value and Eigen vector calculations.

The following table shows a comparison of the above-mentioned libraries.

Library | System.Math | Math.Net | EONL | CSML | AMLC
Type (1 - FOSS, 2 - Commercial, 3 - Free) | 2 | 1 | 2 | 3 | 1
Matrix supported | No | Yes | Yes | Yes | Yes
Eigen calculation | No | Yes | Yes | Yes* | Yes

(* Eigen values only; Eigen vectors are not supported.)

Table 6.2: Comparison of matrix libraries

6.4 Developing Approaches

During the research, the following development approaches were identified.

6.4.1 Approach One

Extract the face region using EmguCV and do the recognition process using
MATLAB. In this approach both platforms would have to be integrated with Visual
Studio and a development language such as C#.Net in order to exchange data.
There are many ways to integrate MATLAB and EmguCV separately, but during the
research no way was found to integrate both technologies at the same time. In
addition, EmguCV does not provide face recognition ability using the Eigenface
method.

6.4.2 Approach Two

Extract face regions and do the recognition using the OpenCV API. OpenCV is an
ideal candidate here because the OpenCV API provides Haar-like feature object
detection. C or C++ would have to be used to interact with OpenCV, and Visual
Studio can integrate with OpenCV as the development platform.

6.4.3 Approach Three

The third approach is to implement the entire application using MATLAB.
However, as mentioned before, implementing Haar-like object detection from
scratch is beyond this project's scope and duration, so this approach had to be
reconsidered.

6.4.4 Approach Four

In this approach EmguCV is used for face detection, because only EmguCV and
OpenCV support Haar-based face region detection, and AMLC is used for the
Eigen-related matrix calculations. Since AMLC supports C#, C# is used as the
programming language.
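
A minimal sketch of the detection step in this approach is shown below. It
assumes the EmguCV 2.x-style API; the image path, cascade file name, sizes and
detection parameters are illustrative, not the project's actual values.

    // Haar-based face detection with EmguCV (2.x-style API) - illustrative sketch.
    using System;
    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Structure;

    class FaceDetectDemo
    {
        static void Main()
        {
            // Load the probe image and convert it to greyscale for the detector.
            Image<Bgr, byte> input = new Image<Bgr, byte>("probe.jpg");
            Image<Gray, byte> gray = input.Convert<Gray, byte>();

            // Haar cascade trained for frontal faces (shipped with OpenCV).
            HaarCascade faceCascade =
                new HaarCascade("haarcascade_frontalface_default.xml");

            // Scale factor 1.1, at least 3 neighbouring detections, min 20x20 px.
            MCvAvgComp[] faces = gray.DetectHaarCascade(
                faceCascade, 1.1, 3,
                Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
                new Size(20, 20))[0];

            foreach (MCvAvgComp face in faces)
            {
                // face.rect is the bounding box; crop it for the recognition stage.
                input.Copy(face.rect).Save("face_" + Guid.NewGuid() + ".jpg");
            }
        }
    }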

6.5 Justification for selection of programming environment

The Dot.Net and Java development platforms are competitive platforms offering
similar features and performance. For Windows application development the
Dot.Net environment is more compatible with Microsoft technologies, because
Microsoft developed both Dot.Net and Windows. For non-Windows application
development, however, Java is the better fit, because the Dot.Net architecture
can only be implemented in non-Windows environments using Mono technology,
which is still not stable.

The Dot.Net development environment, Visual Studio, provides a genuinely
advanced and complete IDE, whereas Java development requires third-party
development tools. The Java environment interacts with the operating system via
the Java Virtual Machine, while Dot.Net interacts with the operating system
more directly; because of that, Dot.Net provides better response times than the
Java environment.

Considering API support, Dot.Net is supported by more image processing APIs
than the Java environment.

By considering the above facts, Dot.Net was selected as the development
platform and Visual Studio 2008 as the development software.

As mentioned above, the Visual Studio environment supports different
programming languages; C#.Net, Visual C++.Net, VB.Net and J#.Net are the most
popular among them.

As mentioned before, C++ shows a higher efficiency rate and has broad API
support, including OpenCV and EmguCV. Therefore C++ offers strong support for
image processing.

Visual C#.Net and Visual Basic.Net are simpler languages than C++, and both
have less low-level capability for implementing image processing. However, that
does not mean it is impossible to implement an image-processing application
with them. Since VB.Net, C#.Net and Visual C++ all run on the Dot.Net platform,
all three languages can use the platform's features and therefore combine well
with it.

When considering ease of coding and maintenance, C# can be considered one of
the most efficient languages. As mentioned above, C# combines capabilities of
both C++ and Java; because of that, C# can be identified as a flexible language
that can be adopted to develop this project.

When considering the maths libraries, AMLC attracts the most attention even
though it is the newest. It is the simplest library that suits this project, it
is lightweight, and since it is free and open source no installation is
required. By comparison, the Math.Net library is more complex and heavyweight,
which might increase the processing time of the application. EONL is not a free
library and can therefore be considered unsuitable: the free EONL licence is a
60-day trial which might stop working after 60 days, and because of that the
whole project might fail. Even though CSML is a free library, it does not
contain the Eigen vector calculation which is essential for the success of this
project; although it would be possible to write a function to calculate
eigenvectors, that would be an unnecessary risk since CSML does not provide its
source code.

By considering the above facts, development approach four was selected for this
project, with development approach two as the alternative.

CHAPTER 7

7 SYSTEM DESIGN

This chapter provides the logical design of the system using UML (Unified
Modelling Language). A use case diagram, class diagram and activity diagrams
have been used to represent the logical design. The design is based on
analysing and concluding the facts found during the research phase.

This phase focused on the functional and non-functional requirements of the
project and of the system, identified by investigating the information gathered
in previous steps. This phase took a lot of attention and time, because it is
very important to design the system before it is actually implemented in the
real world. A use case diagram and activity diagrams were drawn to understand
and identify the behaviour of the system and to gain a better understanding of
the functional requirements, and a class diagram was drawn to understand and
identify the structure of the application.

The first section of this chapter discusses functionality, then the overall
architecture of the system. After that, activity diagrams are used to discuss
the processing steps of the system, and finally a class diagram is used to
discuss its structure.

7.1 Design Approach

7.1.1 Programming Approach – Object Oriented Approach

At present, the Object Oriented design paradigm can be identified as one of the
most popular design paradigms. The OOP (Object Oriented Programming) approach
has been used throughout this project. As discussed in the implementation
chapter, the Dot.Net Framework was used to develop this system.

7.1.2 Design architecture

The design of this system is based on the MVC compound design pattern. The
system can basically be divided into three main sections, as follows.

Model

As discussed in the next sections, the model contains all database-related and
application-logic-related classes. All image processing requests, including
image pre-processing, detection, matching, etc., are handled here.

View

This is where the user interacts with the system; the GUI and other
user-interaction elements belong here. This section contains all the
GUI-related classes of the application.

Controller

The controller contains all the communication logic of the application. It
takes input from the user (view) and routes it to the relevant parties in the
model.

How this design architecture is applied to the system is explained in section
cccc.
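
To make the division concrete, the following is a minimal sketch of the pattern
as applied here; the class and method names are illustrative only, not the
project's actual classes.

    // Minimal MVC wiring sketch - names are illustrative only.
    using System;

    // Model: owns the application logic (e.g. matching a probe image).
    class RecognitionModel
    {
        public string FindMatch(string probeImagePath)
        {
            // ... pre-process, project into the face space, compare distances ...
            return "record-42";   // id of the best-matching record
        }
    }

    // View: collects input and displays results; knows nothing about the logic.
    class ConsoleView
    {
        public string AskForProbeImage()
        {
            Console.Write("Probe image: ");
            return Console.ReadLine();
        }

        public void ShowResult(string id)
        {
            Console.WriteLine("Best match: " + id);
        }
    }

    // Controller: routes requests from the view to the model and back.
    class RecognitionController
    {
        private readonly RecognitionModel model = new RecognitionModel();
        private readonly ConsoleView view = new ConsoleView();

        public void Run()
        {
            string probe = view.AskForProbeImage();
            view.ShowResult(model.FindMatch(probe));
        }
    }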

7.2 Use Case Diagram

This diagram has been designed to identify the functional requirements and the
users of the system. There are six use cases, which extend from two main use
cases. The following use case diagram shows the overall view of the
functionality expected from the system.

[Use case diagram: the User actor is connected to the Add Record, View Record,
Train Image Set, Find Match and Set Configuration use cases.]

Figure 7.1: Use Case Diagram

Since this project does not focus on the real-time case, a record contains only
the unique name of the individual, the face images of the individual and the
face regions of the individual. From here onwards, only the above-mentioned
attributes are considered as a record.

7.3 Use case descriptions

7.3.1 Add New Record

Use Case: Add new record
Description: It has been identified that adding a new record to the database is
a critical activity. This use case allows new records to be added to the
database and processed for storage.
Actor: User
Assumptions: The operator has logged into the system.
Main Flow:
1. Fill the relevant form(s) and upload the image to the system.
2. Verify that the submitted image is a face.
3. Detect the face / face regions.
4. Crop the face regions.
5. Store the captured details in the database.
6. Display a "Saved" message.
Alternative Flow: The input image does not contain any face or face region -
discard the process and prompt an error message.
Alternative Flow: The input image contains a face, but it is so occluded that
it is not possible to detect a face or face regions - discard the process and
prompt an error message.

Table 7.1: Use case description - Add New Record

7.3.2 View Record

Use Case: View Record
Description: Used to view the record of a particular person.
Actor: User
Assumptions: If the record is available, it will be shown.
Main Flow:
1. Ask to search / browse user records.
2. Select the record to view.
3. Show the record.
Alternative Flow: If the record is not available, show a "Not Available"
message.

Table 7.2: Use case description - View Record

7.3.3 Train Image Set

Use Case: Train Image Set
Description: It is necessary to create a face space (image space). This feature
allows the user to create the face space and store it in the database for
future use.
Actor: User
Assumptions: There are enough images to create the face space (image space).
Main Flow:
1. The user enters the number of Eigen vectors to create.
2. The user selects the image space type.
3. The system creates the image space.
4. Store it in the database.
5. Prompt a "successful" message.
Alternative Flow: If the database is not accessible, display a "Database Error"
message.

Table 7.3: Use case description - Train Image Set

7.3.4 Find Match

Use Case: Recognise Face
Description: Compare face regions with the stored records to find a match.
Actor: Operator
Assumptions: Accurate results are provided when the relevant faces/face regions
exist.
Main Flow:
1. Submit the face region.
2. Display the face region.
3. Identify the submitted face region.
4. Verify the identified face region.
5. Match/compare it with the images stored in the database.
6. Display the relevant record details.
Alternative Flow: If the submitted image is not in the database, provide a
message indicating that the face is not registered in the database and there
are no matches for the face region.
Alternative Flow: If the submitted image is not a face region, provide an error
message.
Alternative Flow: If the submitted image contains a face region but it is not
possible to detect it, provide an error message.

Table 7.4: Use case description - Find Match

7.3.5 Set Configuration

Use Case: Set Configuration
Description: Allows the user to set the threshold values and the number of
Eigen vectors.
Actor: User
Assumptions: -
Main Flow:
1. Ask the user to enter the relevant values.
2. Update the database.
3. Prompt a "successfully saved" message.
Alternative Flow: If the user has not filled the relevant fields correctly,
show an "Error" message.

Table 7.5: Use case description - Set Configuration

7.3.6 Modify Record

Use Case: Modify Record
Description: It is necessary to be able to modify a record in the system.
Actor: Operator
Assumptions: The record will be modified correctly.
Main Flow:
1. Ask to search / browse user records.
2. Select the record to modify.
3. Modify the record.
4. Update the database.
5. Prompt a "successfully saved" message.
Alternative Flow: If the user has not filled the relevant fields correctly,
show an error message.
Alternative Flow: If the record is not available, show a "Not Available"
message.

Table 7.6: Use case description - Modify Record

System Overview

The following diagram illustrates the relationships between the components of
the system. These relationships should be identified and understood properly
before starting implementation of the system.

[Figure: block diagram of the system. A GUI layer (Add Record, View Record)
sits above the Face Region Acquire and Pre-Processing/Normalization modules
(resize, greyscale conversion, noise removal, brightness adjustment), which
feed the Find Match, Training and Automated Face Region Verification modules.
These reshape images between 2D and 1D, compute the mean (average) face and
the difference from it, load the covariance matrix, calculate eigenvectors,
eigenvalues and Eigen images, project the dataset, and compare Euclidean
distances against thresholds, exchanging face and face-region values with the
database.]

Figure 7.2: System Architecture

7.4 Activity Diagrams

Activity diagrams give an overall idea of the flow of the system and its
functions: basically, the steps the system takes to achieve its goals. The
following diagrams illustrate the functionality of the system.

7.4.1 Add Record

[Activity diagram: Upload Images -> Validate Uploaded Image -> (show error
message if not validated) -> Extract Regions -> (show error message if not
extractable) -> Generate FaceID -> Save Image.]

Figure 7.3: Activity Diagram - Add Record

This is the flow for adding faces to the system. First it checks whether the
uploaded image is a face image; if it is, it checks whether the face regions
are extractable. If the submitted image does not contain any face, it prompts a
message and ends the process. If the face regions can be extracted, it extracts
the face and the face regions from the image and saves them in the database.
7.4.2 View Record

[Activity diagram: show record id and name list -> select record -> show record
-> repeat until exit is clicked.]

Figure 7.4: Activity Diagram - View Record

This flow is used to view the added records; it shows all the details that were
entered during the add record process.
7.4.3 Training Image Space

[Activity diagram: get number of Eigen image vectors -> get type of image space
-> load training image set -> (show error message if the training set does not
load) -> create image set -> get average face -> get subtracted face set -> get
covariance matrix -> get Eigen vectors and values -> calculate Eigen images ->
compute image space -> update training set.]

Figure 7.5: Activity Diagram - Training Image Space

According to the Eigenface algorithm, an image space must be computed so that
new images can be projected into it for recognition. First the system asks the
user which image space needs to be created, then it takes the relevant images
from the database and creates the image (face/eye) space. During this process
the user is prompted to enter the number of Eigen image vectors, which must be
less than or equal to the number of relevant images in the database. When
matching, the image space is used to project the probe image and calculate
distances.
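
The following is a compact, self-contained sketch of these training steps on
plain double arrays (all names are illustrative; a real implementation would
delegate the eigen-decomposition in step 4 to the chosen matrix library,
whereas here a simple power-iteration solver stands in for it):

    // Eigenface training steps - illustrative, self-contained sketch.
    using System;

    static class EigenfaceTraining
    {
        // images[i] is a training image already reshaped from 2D to a 1D vector.
        public static double[][] BuildFaceSpace(double[][] images, int numVectors)
        {
            int m = images.Length, n = images[0].Length;

            // 1. Mean (average) face.
            double[] mean = new double[n];
            foreach (double[] img in images)
                for (int j = 0; j < n; j++) mean[j] += img[j] / m;

            // 2. Mean-subtracted face set (rows of A, so A here is m x n).
            double[][] A = new double[m][];
            for (int i = 0; i < m; i++)
            {
                A[i] = new double[n];
                for (int j = 0; j < n; j++) A[i][j] = images[i][j] - mean[j];
            }

            // 3. Small m x m surrogate covariance matrix L = A A^T
            //    (instead of the huge n x n covariance matrix).
            double[,] L = new double[m, m];
            for (int i = 0; i < m; i++)
                for (int k = 0; k < m; k++)
                    for (int j = 0; j < n; j++) L[i, k] += A[i][j] * A[k][j];

            // 4. Leading eigenvectors of L (a library call in practice).
            double[][] v = TopEigenvectors(L, numVectors);

            // 5. Eigen images u_k = sum_i v_k[i] * Phi_i, spanning the face space.
            double[][] eigenfaces = new double[numVectors][];
            for (int k = 0; k < numVectors; k++)
            {
                eigenfaces[k] = new double[n];
                for (int i = 0; i < m; i++)
                    for (int j = 0; j < n; j++)
                        eigenfaces[k][j] += v[k][i] * A[i][j];
            }
            return eigenfaces;
        }

        // Power iteration with deflation: top-k eigenvectors of a symmetric matrix.
        static double[][] TopEigenvectors(double[,] s, int k)
        {
            int m = s.GetLength(0);
            double[,] b = (double[,])s.Clone();
            double[][] result = new double[k][];
            Random rnd = new Random(1);
            for (int e = 0; e < k; e++)
            {
                double[] v = new double[m];
                for (int i = 0; i < m; i++) v[i] = rnd.NextDouble() + 0.1;
                double lambda = 0;
                for (int iter = 0; iter < 500; iter++)
                {
                    double[] w = new double[m];
                    double norm = 0;
                    for (int i = 0; i < m; i++)
                    {
                        for (int j = 0; j < m; j++) w[i] += b[i, j] * v[j];
                        norm += w[i] * w[i];
                    }
                    norm = Math.Sqrt(norm);
                    for (int i = 0; i < m; i++) v[i] = w[i] / norm;
                    lambda = norm;   // converged Rayleigh estimate for PSD matrices
                }
                result[e] = v;
                for (int i = 0; i < m; i++)          // deflate: b -= lambda * v v^T
                    for (int j = 0; j < m; j++) b[i, j] -= lambda * v[i] * v[j];
            }
            return result;
        }
    }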

7.4.4 Find Match

[Activity diagram: get image -> (show error message if no image is uploaded) ->
pre-process -> get column vector -> get subtracted image -> project into image
space -> get minimum distance. If the image-space difference exceeds the
threshold, output "Not a Face"; otherwise output "No Match Found" or the
matching image ID depending on whether the minimum distance is in range.]

Figure 7.6: Activity Diagram - Find Match

First the system takes the image which needs to be recognized. It pre-processes
the submitted image, isolates the regions to be recognized, and projects the
image into the image space. From this it finds the minimum Euclidean distance
and the image-space difference, which allow it to decide whether the submitted
image is a relevant image (face/eye) or not. If it is not, it shows an error
and discards the process; if it is, it simply finds the best-match distance and
returns the id of the matching image.
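
A sketch of this matching step is shown below (the names are illustrative; it
assumes the eigenfaces and the stored weight vectors come from a training step
such as the one sketched earlier):

    // Nearest-neighbour matching in the eigenface space - illustrative sketch.
    using System;

    static class Matcher
    {
        // Project a mean-subtracted probe vector onto the eigenfaces.
        static double[] Project(double[][] eigenfaces, double[] phi)
        {
            double[] w = new double[eigenfaces.Length];
            for (int k = 0; k < eigenfaces.Length; k++)
                for (int j = 0; j < phi.Length; j++)
                    w[k] += eigenfaces[k][j] * phi[j];
            return w;
        }

        // Returns the index of the stored record with the smallest Euclidean
        // distance, or -1 when that distance exceeds the recognition threshold.
        public static int FindMatch(double[][] eigenfaces, double[][] storedWeights,
                                    double[] probePhi, double threshold)
        {
            double[] w = Project(eigenfaces, probePhi);
            int best = -1;
            double bestDist = double.MaxValue;
            for (int r = 0; r < storedWeights.Length; r++)
            {
                double d = 0;
                for (int k = 0; k < w.Length; k++)
                {
                    double diff = w[k] - storedWeights[r][k];
                    d += diff * diff;
                }
                d = Math.Sqrt(d);
                if (d < bestDist) { bestDist = d; best = r; }
            }
            return bestDist <= threshold ? best : -1;
        }
    }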

7.4.5 Automatic Face Region Verification

[Activity diagram: get pre-processed/normalized image -> reshape image to 1D ->
calculate difference from image space -> get threshold value -> display "Region
Found" if the difference is below the threshold, otherwise display "Region not
Found".]

Figure 7.7: Activity Diagram - Automatic Face Region Verification

This process is used to verify whether a submitted image is a face region or
not. It is used both when adding a record to the database and when finding a
match.

7.4.6 Pre-processing/Normalization

[Activity diagram: Resize -> Normalize -> Greyscale image.]

Figure 7.8: Activity Diagram - Pre-processing/Normalization

To avoid mismatches and to increase the efficiency of the system, all images in
the system should follow a common standard. The pre-processing step resizes
images to a pre-specified size, then normalizes them, and finally converts them
to greyscale.
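
A minimal sketch of this chain with EmguCV (2.x-style API) is shown below; the
128 x 128 target size follows the test plans later in this document, and the
median filter is one common choice for the normalization step:

    // Pre-processing chain: resize -> noise removal -> greyscale. Illustrative sketch.
    using Emgu.CV;
    using Emgu.CV.Structure;

    static class PreProcessingDemo
    {
        public static Image<Gray, byte> Process(string path)
        {
            Image<Bgr, byte> input = new Image<Bgr, byte>(path);

            return input
                .Resize(128, 128, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC) // standard size
                .SmoothMedian(3)                                       // 3x3 median filter removes noise
                .Convert<Gray, byte>();                                // 8-bit greyscale
        }
    }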

7.5 Class Diagrams

The structural view of the classes of the system can be identified as follows.

[Class diagram: the GUI classes reference RecognitonManager, Detector,
DatabaseDriver, ImageRecord and DummyRecord. Detector is specialized by
FaceDetector and EyeDetector; RecognitonManager uses TrainBundle and
PreProcessor; TrainBundle aggregates ColomnVector and Matrix objects.]

Figure 7.9: Class Diagram

7.6 Classes Description

The class diagram illustrated above shows the structural design of the system.
The classes of the system can be categorized as follows, based on their
duties/behaviour.

7.6.1 Data management related classes

These classes manage database interaction and exchange data between the other
entities of the system.

Class: DatabaseDriver

This class is the intermediary between the other classes and the system's
database. It interacts with both the database and the other related classes,
and it is responsible for converting system data to database-compatible
formats.

7.6.2 Application logic related classes

These classes handle the control logic of the system. Image matching,
pre-processing, image verification, and region detection and extraction
functionality are handled by these classes.

1. Class : PreProcessor

"PreProcessor" is the class that handles the image pre-processing functionality
required by the Eigenface algorithm and the Haar detection algorithm.

The main methods of the class are as follows:

- converToGray: converts colour images to greyscale.
- Normalize: normalizes images.
- ReSize: resizes images to the given standard size.

2. Class : Detector, EyeDetector and FaceDetector

Detector is an abstract class containing methods for detecting face regions in
face images and cropping them. EyeDetector and FaceDetector are specialized
classes that inherit from Detector.

The main methods of these classes are as follows (a skeletal sketch follows
this list):

- CropImage: crops an image.
- Detect: detects a face region in a face image.
- IsItDetectable: identifies whether the given face area is detectable or not.
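
The sketch below shows this hierarchy skeletally; the signatures are
illustrative only, and the real Detect implementations would wrap Haar
detection calls such as the EmguCV one sketched in chapter 6.

    // Skeleton of the Detector hierarchy - illustrative signatures only.
    using System.Drawing;

    abstract class Detector
    {
        // Each specialization supplies its own Haar cascade file.
        protected abstract string CascadeFile { get; }

        // Finds the bounding boxes of the target region(s) in a face image.
        public abstract Rectangle[] Detect(Bitmap faceImage);

        // True when at least one region can be detected in the image.
        public bool IsItDetectable(Bitmap faceImage)
        {
            return Detect(faceImage).Length > 0;
        }

        // Crops one detected region out of the source image.
        public Bitmap CropImage(Bitmap source, Rectangle region)
        {
            return source.Clone(region, source.PixelFormat);
        }
    }

    class FaceDetector : Detector
    {
        protected override string CascadeFile
        {
            get { return "haarcascade_frontalface_default.xml"; }
        }

        public override Rectangle[] Detect(Bitmap faceImage)
        {
            return new Rectangle[0];  // placeholder for the Haar detection call
        }
    }

    class EyeDetector : Detector
    {
        protected override string CascadeFile
        {
            get { return "haarcascade_eye.xml"; }
        }

        public override Rectangle[] Detect(Bitmap faceImage)
        {
            return new Rectangle[0];  // placeholder for the Haar detection call
        }
    }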

7.6.3 GUI Related Classes

These classes are mainly Windows Forms used to input and output the data of the
system. There are separate classes for adding a new record, viewing a record,
finding a match and creating an image space.

7.6.4 Other supportive classes

1. Class : ImageRecord

This class works as a temporary container for record data (including image
data). It mainly holds data retrieved from the database and data to be written
to the database. It contains only fields and does not process the data itself.

2. Class : DummyRecord

This is also a temporary storage class, like the previous "ImageRecord" class;
it works at the GUI level to display records.

3. Class : TrainBundle

This class is used to store the data extracted from the database to create an
image space.
4. Class : Matrix

The Eigenface algorithm is essentially linear-algebra matrix calculation. This
class holds a 2D matrix and the related functionality.

5. Class : ColumnVector

In linear algebra a 1D matrix (column vector) is considered a special entity.
This class has the attributes and methods related to 1D matrix calculations.

CHAPTER 8

8 TESTING AND EVALUATION

Software testing can be identified as a key step of a software development
project. Pan (2009) states that software testing is any activity that focuses
on "evaluating an attribute or capability of a program or system" and checking
that the system meets the predefined criteria it should meet. For this system
different aspects are checked, but the main focus is on the face region
detection and recognition criteria.

Testing is expected to allow measuring the system, evaluating the system and
finding its bugs (logical, syntactic, etc.); the quality of the system can then
be improved based on the test results. In this project it helps to identify
errors in the application and also in the research areas.

The project's implementation focuses on image processing, computer vision and
the face recognition algorithm, so the GUI is of less importance. Because of
that, the tests concentrate on the areas mentioned above.

Since this project uses an evolutionary-prototyping-based hybrid approach, the
test plans were developed in an incremental manner for ease of testing: they
test the outcome of each development phase and, finally, the outcome of the
integration step.

As mentioned in the previous chapter, each increment of the application builds
on the modules implemented before it, so every module must be checked and
verified before the final integration. Therefore there are separate test plans
for each increment of the project.

8.1 Testing Strategy

Based on the report by Luo (n.d.), the following testing methods were
identified, among the large number of testing approaches, as suitable for this
application.

http://www.cs.cmu.edu/~luluo/Courses/17939Report.pdf

The testing approaches can be described as follows.

Unit testing

Unit testing checks each module/component of the system right after its
implementation, mainly its functionality, and is carried out while developing
the system. In this project there is unit testing for each component of the
main phases.

The accuracy level of each unit varies with its exceptions; there are different
test cases for each unit, and the unit tests are carried out individually.

The following core units were identified to be tested:

Pre-Processing module
- Image greyscale unit
- Image resize unit
- Image normalization unit

Face region extraction module
- Face region detection unit for full face
- Face region detection unit for eye region
- Face region crop unit

Matching module
- Face verification unit
- Eye region verification unit
- Face match unit
- Eye match unit

Integration testing

Integration testing checks whether a particular module meets its required
functionality after all of its units are integrated. By doing that, it can be
verified that the communication between the units happens properly.

The following modules are checked by integration testing:

- Pre-Processing module
- Face region extraction module
- Matching module

System Integration testing

System integration testing is performed once, after all the modules have been
integrated, to ensure that all modules of the system communicate properly
within the system.

Testing has been organized based on the development/implementation phases.

8.2 Test Data

It is important to define the test data before beginning the tests. Randomly
selected images from the Essex face database (the faces94 dataset) are used as
face images, and randomly selected pictures are used as non-faces.

The following information is given for the Essex face dataset on its web site.

Acquisition conditions: The subjects sit at a fixed distance from the camera
and are asked to speak while a sequence of images is taken. The speech is used
to introduce facial expression variation.

Database description: Number of individuals: 153. Image resolution: 180 by 200
pixels (portrait format). Directories: female (20), male (113), malestaff (20).
Contains images of male and female subjects in separate directories.

Variation of individuals' images: Backgrounds: plain green. Head scale: none.
Head turn, tilt and slant: very minor variation in these attributes. Position
of face in image: minor changes. Image lighting variation: none. Expression
variation: considerable expression changes. Additional comment: there is no
individual hairstyle variation, as the images were taken in a single session.

Table 8.1: Test data set details

Spacek (2007)
http://cswww.essex.ac.uk/mv/allfaces/faces94.html

8.3 Testing for developing phase one - Pre-processing module

As mentioned before, the testing of the application is based on the development
increments. This section tests the pre-processing module: basically the
resizing, greyscaling and normalizing functionality.

8.3.1 Unit testing for developing phase one

8.3.1.1 Developing phase one - unit test - image resize unit

This section tests whether the unit is capable of resizing a given image. The
output of this unit becomes the input of the next unit.

Test Case: 01
Test Type: Unit test
Test Unit: Image resize unit
Description: To ensure conversion of images of any given size into 128 x 128
images.
Test plan: Compare input and output after processing, using the width and
height of the images.
Input: Colour PNG/JPG/BMP images of various sizes
Success criteria: Colour 128 x 128 image
Test Summary:
No of images tested: 10
No of successful results: 10
Success rate: 100%    Unsuccessful rate: 0%

Table 8.2: Developing phase one - unit test - image resize unit
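
As an illustration of how such a unit test can be automated, here is a minimal
NUnit-style sketch; it assumes a PreProcessor.ReSize method as named in the
class design, so the exact signature is hypothetical:

    // NUnit-style unit test for the image resize unit - illustrative sketch.
    using System.Drawing;
    using NUnit.Framework;

    [TestFixture]
    public class ImageResizeUnitTests
    {
        [Test]
        public void ReSize_AnyInputSize_Returns128By128()
        {
            // Arrange: an arbitrary 640x480 input image.
            using (Bitmap input = new Bitmap(640, 480))
            {
                // Act: hypothetical project method resizing to the standard size.
                Bitmap output = PreProcessor.ReSize(input, 128, 128);

                // Assert: compare the width and height of the result.
                Assert.AreEqual(128, output.Width);
                Assert.AreEqual(128, output.Height);
            }
        }
    }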

8.3.1.2 Developing phase one - unit test - Image normalization unit

This section checks whether the unit is capable of normalizing an image:
basically, the unit is expected to apply a median filter to remove noise and
clean the image. The input of this unit is the output of the unit above (the
resized image).

Test Case: 02
Test Type: Unit test
Test Unit: Image normalization unit
Description: To ensure removal of noise (disturbances in the colour intensity
values) from images, cleaning them.
Test plan: Compare input and output after processing
Input: 128 x 128 RGB colour image
Success criteria: 128 x 128 RGB colour image which is noise free
Test Summary:
No of images tested: 10
No of successful results: 10
Success rate: 100%    Unsuccessful rate: 0%

Table 8.3: Developing phase one - unit test - image normalization unit

8.3.1.3 Developing phase one - unit test - Image greyscale unit

This unit test checks whether the unit is capable of greyscaling resized colour
images. The input of this unit is the output of the unit above (the
normalization unit).

Test Case: 03
Test Type: Unit test
Test Unit: Image greyscale unit
Description: To ensure conversion of RGB (colour) images to 8-bit greyscale
images (pixel values 0-255)
Test plan: Compare input and output after processing; check that the pixel
values of each image lie between 0 and 255.
Input: 128 x 128 noise-removed RGB JPEG/PNG/BMP image
Success criteria: 8-bit greyscale image
Test Summary:
No of images tested: 10
No of successful results: 10
Success rate: 100%    Unsuccessful rate: 0%

Table 8.4: Developing phase one - unit test - image greyscale unit

8.3.2 Integration testing for developing phase one

After integrating the three units above (the resize, normalization and
greyscale units), the following test was performed on the entire module.

Test Case: 01
Test Type: Integration testing
Test Unit: Pre-Processing module
Description: To ensure that, after integrating all units of the pre-processing
module, pre-processing functions well.
Test plan: Compare input and output after processing; check pixel values.
Input: Multi-size RGB JPEG/PNG/BMP images
Success criteria: Resized, normalized, 8-bit greyscale image
Test Summary:
No of images tested: 10
No of successful results: 10
Success rate: 100%    Unsuccessful rate: 0%

Table 8.5: Integration testing for developing phase one

For testing purposes a GUI was implemented for this module and the above tests
were performed. The results were initially identified by visual inspection, but
it was understood that this method of measuring results is neither scientific
nor accurate, so a mathematical method was adopted instead: for verification,
the pixel (RGB) values of randomly selected pixels in each image were checked.
The values varied with the pixel intensity of the image, and in the final
result the pixel values lay between 0 and 255.

8.4 Testing for developing phase two - Face region extraction
module

These tests were performed to ensure that the face region extraction module
meets the predefined criteria. This phase uses the pre-processing module.

8.4.1 Unit testing for developing phase two

8.4.1.1 Developing phase two - unit test - Face region detection unit for full face

This unit test checks whether the unit is capable of detecting the face in
given images. Randomly selected images from the Essex dataset and some randomly
selected face images from the internet were used.

Test Case: 04
Test Type: Unit test
Test Unit: Face region detection unit for full face
Description: Ensure that the unit is capable of detecting the face in a given
image which contains only one face.
Input: For the positive test, pre-processed RGB colour images of different
sizes which contain faces; for the negative test, normalized RGB colour images
of different sizes which are non-face images
Success criteria: Detect the face in face images, do not detect faces in
non-face images, and draw a rectangle around the face area.
Test Summary:
No of images tested (positive set): 30
No of successful results (positive set): 28
No of images tested (negative set): 10
No of successful results (negative set): 10
Success rate: 95%    Unsuccessful rate: 5%

Table 8.6: Developing phase two - unit test - Face region detection unit for full face

8.4.1.2 Developing phase two - unit test - Face region detection unit for eye
region

This unit test checks whether the unit is capable of detecting eye regions in
given images. Randomly selected images from the Essex dataset and some randomly
selected face images from the internet were used.

Test Case: 05
Test Type: Unit test
Test Unit: Face region detection unit for eye region
Description: Ensure that the unit is capable of detecting the eye region in a
given image of an extracted face area.
Input: For the positive test, normalized RGB colour images of different sizes
containing face areas extracted by the face detection unit; for the negative
test, normalized RGB colour images which are non-face images (containing no
eye region)
Success criteria: Detect the eyes in face images, do not detect eyes in
non-face images, and draw a rectangle around the eye area.
Test Summary:
No of images tested (positive set): 28
No of successful results (positive set): 22
No of images tested (negative set): 10
No of successful results (negative set): 10
Success rate: 84.21%    Unsuccessful rate: 15.78%

Table 8.7: Developing phase two - unit test - Face region detection unit for eye region

8.4.1.3 Developing phase two - unit test - Face Region Crop unit

This unit test checks whether the unit is capable of cropping the detected face
areas and eye regions in given images. Randomly selected images from the Essex
dataset and some randomly selected face images from the internet were used.

Test Case: 06
Test Type: Unit test
Test Unit: Face region crop unit
Description: Ensure that selected face areas, and the eye regions within them,
are cropped.
Input: Detected face images and detected eye images
Success criteria: Crop the detected areas and save each as a separate file.
Test Summary:
No of face images tested: 28
No of eye images tested: 22
No of successful results: 50
Success rate: 100%    Unsuccessful rate: 0%

Table 8.8: Developing phase two - unit test - Face region crop unit

8.4.2 Integration testing for developing phase two

This test was performed after integrating the three units above (the face
region detection unit for full face, the face region detection unit for eye
region, and the face region crop unit). A simple GUI was developed for testing
the module. Face images from the Essex dataset and the internet were used,
along with non-face images as negative test data, to verify that the module
functions well.

Test Case: 01
Test Type: Integration testing
Test Unit: Face region extraction module
Description: To ensure that, after integrating all of its units, the face
region extraction module functions well.
Test plan: Compare input and output after processing
Input: For the positive test, normalized RGB colour images of different sizes
containing faces and eye regions; for the negative test, normalized RGB colour
images which are non-face images
Success criteria: Cropped faces/face regions should be created in the given
location; if the face region or eye region is not available or not detectable,
show an error message.
Test Summary:
No of images tested (positive set): 30
No of successful results (positive set): 22 (for eyes), 28 (for faces)
No of images tested (negative set): 20
No of successful results (negative set): 20
Success rate: 96.66% (faces), 84.00% (eyes)    Error rate: 15.78% (faces),
16.00% (eyes)

Table 8.9: Integration testing for developing phase two

The following table shows the summarized results of the test.

Inputted images: 30
No of face detections: 28 (success rate 96.66%, error rate 3.33%)
No of eye detections: 22 (success rate 84.00%, error rate 16.00%)

Table 8.10: Integration testing for developing phase two - summarized results

Unlike the other modules, this module does not give 100% accuracy.

8.5 Testing for developing phase three - Matching module

8.5.1 Unit testing for developing phase three

8.5.1.1 Developing phase three - unit test - Face verification unit

This unit test was performed to check whether the face verification unit
functions well. Only images from the Essex dataset were used as positive
images, and some randomly selected non-face images were used as the negative
data set.

Test Case: 07
Test Type: Unit test
Test Unit: Face verification unit
Description: Checks whether the input image is a face image or not
Input: Pre-processed and extracted face images as positive test images, and
non-face images as negative images
Success criteria: Return true if the image is a face image, else return false
Test Summary:
No of images tested (positive): 30
No of images tested (negative): 10
No of successful results:
Success rate:    Unsuccessful rate:
Table ‎8.11: Developing phase three - unit test - Face verification unit

8.5.1.2 Developing phase three - unit test - Eye region verification unit

This unit test was performed to check whether the eye region verification unit
functions well. Only eye regions extracted from Essex dataset images were used
as positive images, and some randomly selected non-face images were used as the
negative data set.

Test Case: 08
Test Type: Unit test
Test Unit: Eye region verification unit
Description: Checks whether the input image is an eye image or not
Input: Pre-processed and extracted eye images as positive images, and non-face
images as negative images
Success criteria: Return true if the image is an eye image, else return false
Test Summary:
No of images tested (positive): 30
No of images tested (negative):
No of successful results:
Success rate:    Unsuccessful rate:

Table ‎8.12 : Developing phase three - unit test - Eye region verification unit

8.5.1.3 Developing phase three - unit test - Face match unit

This unit test was performed to check whether the face match unit functions
well. Only face regions extracted from Essex dataset images were used as
positive images, and some randomly selected face images not stored in the
database were used as the negative data set.

Test Case: 09
Test Type: Unit test
Test Unit: Face match unit
Description: Checks whether the database has an image matching the given probe
(test) image
Input: Pre-processed and extracted face images as positive images, and non-face
images as negative images
Success criteria: Return true and the id of the matching image if there is a
matching record, else return false
Test Summary:
No of images tested (positive): 30
No of images tested (negative): 10
No of successful results:
Success rate:    Unsuccessful rate:

Table ‎8.13: Developing phase three - unit test - Face match unit

8.5.1.4 Developing phase three - unit test - Eye match unit

This unit test was performed to check whether the eye match unit functions
well. Only eye regions extracted from Essex dataset images were used as
positive images, and some randomly selected eye region images not stored in the
database were used as the negative data set.

Test Case: 10
Test Type: Unit test
Test Unit: Eye match unit
Description: Checks whether the database has an image matching the given probe
(test) image
Input: Pre-processed and extracted eye images as positive images, and non-face
images as negative images
Success criteria: Return true and the id of the matching image if there is a
matching record, else return false
Test Summary:
No of images tested (positive): 30
No of images tested (negative): 10
No of successful results:
Success rate:    Unsuccessful rate:

Table ‎8.14: Developing phase three - unit test - Eye match unit

8.5.2 Integration testing for developing phase three

This checks whether the integrated solution works after combining the units
above. Eye images and face images extracted from the Essex database were used:
four images per individual were stored as training images, and one image not in
the database was used as the test image of the positive dataset. Other face/eye
images not stored in the database were used as negative images.

Test Case: 01
Test Type: Integration testing
Test Unit: Matching module
Description: To ensure that, after integrating all of its units, the matching
module functions well.
Test plan: Compare input and output after processing
Input: Pre-processed and extracted test face images and eye regions as positive
test images, and non-face images as negative images
Success criteria: Return true and the id of the matching image if there is a
matching record; else return false, or show an error message if the input image
is not a face or eye image.
Test Summary:
No of images tested (positive set): 60
No of successful results (positive set):
No of images tested (negative set): 20
No of successful results (negative set):
Success rate:

Table 8.15: Integration testing for developing phase three

8.6 Testing for developing phase four - System integration test

After integrating the modules mentioned above, tests were performed on the
overall system. For ease of measurement, the overall functionality of the
system was divided into three main parts, and testing was performed on each.

8.6.1 Developing phase four - system integration testing - Adding new record

Test Case: 01
Test Type: System integration testing
Test Unit/Functionality: Adding new record
Description: Checks that when the input image is a face image, the face areas
and eye regions are extracted, displayed and finally stored in the database; if
the image does not contain any face, an error is shown and the user is prompted
to upload a new image.
Test plan: Check the relevant output; check the database table to verify
Input: Face images and non-face images (multiple formats)
Success criteria: Display the extracted face regions; after saving, display the
saved images
Test Summary:
No of images tested (positive set):
No of successful results (positive set):
No of images tested (negative set): 20
No of successful results (negative set):
Success rate:

Table 8.16: Developing phase four - system integration testing - Adding new record

8.6.2 Developing phase four - system integration testing - Match face

Test Case: 01
Test Type: System integration testing
Test Unit/Functionality: Matching face image
Description: Checks that when the input image is a face image, the face areas
are extracted and matched against the images in the database, and the best
match for that particular image is shown. If an image of that person is not in
the database, a message is shown; if the image is not a face image, an error
message is shown.
Test plan: Check the relevant output; check the messages
Input: Face images and non-face images (multiple formats)
Success criteria: Show the matching image record
Test Summary:
No of images tested (positive set):
No of successful results (positive set):
No of images tested (negative set): 20
No of successful results (negative set):
Success rate:

Table 8.17: Developing phase four - system integration testing - Match face

8.6.3 Developing phase four - system integration testing - Match eye region

Test Case: 01
Test Type: System integration testing
Test Unit/Functionality: Matching eye image
Description: Checks that when the input image is an eye image, it is matched
against the images in the database and the best match for that particular image
is shown. If an image of that person is not in the database, a message is
shown; if the image is not an eye image, an error message is shown.
Test plan: Check the relevant output; check the messages
Input: Eye region images and non-eye-region images (multiple formats)
Success criteria: Show the matching image record
Test Summary:
No of images tested (positive set):
No of successful results (positive set):
No of images tested (negative set): 20
No of successful results (negative set):
Success rate:

Table 8.18: Developing phase four - system integration testing - Match eye region

The following table shows a summary of all three tests.

Test | Test cases | Success rate | Error rate
Add new image | | |
Match face | | |
Match eye | | |

Table 8.19: Developing phase four - system integration testing - Summary

REFERENCE

Aforge.Net. (2009). AForge.NET :: Computer Vision, Artificial Intelligence,


Robotics. [Online]. Available from : http://www.aforgenet.com/. [Accessed: 1st
March 2010]

Akarun,L. G¨okberk . B. Salah, A. (2005).3D Face Recognition for Biometric


Applications .[Online]. Available from -
http://homepages.cwi.nl/~salah/eusipco05.pdf . [Accessed: 20th January 2010]

Albert, K. W. Yeung, G & Hall, B.(2007).Spatial database systems: design,


implementation and project management.[Online] Dordrecht: Springer. Available
from-
http://books.google.com/books?id=AMX44TGkvWkC&dq=incremental+prototypin
g&source=gbs_navlinks_s. [Accessed: 24/01/2010]

Anas ,S.A. (2010).Advanced Matrix Library in C#. NET. [Online]. Available from -
http://www.codeproject.com/KB/recipes/AdvancedMatrixLibrary.aspx.[Accessed:
05th July 2010].

Apple Inc. (2010). Apple - iPhoto - Organize, edit, and share photos on the web or
in a book. [Online]. Available from : http://www.apple.com/ilife/iphoto/.[Accessed:
08 March 2008]

Audio Visual Technologies Group. (n.d.). GTAV Face Database. [Online]. Available
from: http://gps-
tsc.upc.es/GTAV/ResearchAreas/UPCFaceDatabase/GTAVFaceDatabase.htm.
[Accessed: 30 January 2010]

Baker, L. (2006). Google, Neven Vision & Image Recognition | Search Engine
Journal. [Online]. Available from : http://www.searchenginejournal.com/google-
neven-vision-image-recognition/3728/. [Accessed: 08 March 2008]

Bagherian, E., Rahmat, R.W. & Udzir, N.I. (2009).Extract of Facial Feature Point,
IJCSNS International Journal of Computer Science and Network Security.9(1) p. 49
- 53.

Bebis, G. (2003). CS4/791Y: Mathematical Methods for Computer Vision. [Online].


Available from : http://www.cse.unr.edu/~bebis/MathMethods/. [Accessed: 23
February 2010]

Biswas ,P.K. (2008). Image Segmentation - I. Lecture - 29. [Online video]. October
16th. Available from : http://www.youtube.com/watch?v=3qJej6wgezA. [Accessed:
March 4th 2010]

Bonsor, K. & Johnson, R. (2001) How Facial Recognition Systems Work ,


04September , [Online], Available:
http://electronics.howstuffworks.com/gadgets/high-tech-gadgets/facial-
recognition3.htm# [25 February 2010]

Brunelli, R. & Poggio, T. (1993). Face Recognition: Features versus Templates.


IEEE Transactions on Pattern Analysis and Machine Intelligence. [Online] 15.
(10).p. 1024 - 1052. Available from :
http://ieeexplore.ieee.org/Xplore/login.jsp?url=http%3A%2F%2Fieeexplore.ieee.org
%2Fiel1%2F34%2F6467%2F00254061.pdf%3Farnumber%3D254061&authDecisio
n=-203. [Accessed: 21/01/2010]

Busselle, J. (2010).Raster Images versus Vector Images. [Online]. Available from :


http://www.signindustry.com/computers/articles/2004-11-30-
DASvector_v_raster.php3. [Accessed: 2 March 2010]

C˘aleanu ,D.C & Botoca.C. (2007). C# Solutions for a Face Detection and Recognition
System .[Online]. April 2007. Available from -
http://factaee.elfak.ni.ac.rs/fu2k71/9caleanu.pdf. [Accessed: 05th July 2010].

Chopra,M. (2007). Two Muslim Girls. [Online]. June 24th 2007. Available from:
http://www.intentblog.com/archives/2007/06/two_muslim_girls.html. [Accessed:
08th March 2010]

Codeproject. (2007). C# Matrix Library. [Online]. Available from -


http://www.codeproject.com/KB/cs/CSML.aspx. [Accessed: 05th July 2010].

Costulis, P.K. (2004). The Standard Waterfall Model for Systems Development.
[Online]. Available from : http://web.archive.org/web/20050310133243/http://asd-
www.larc.nasa.gov/barkstrom/public/The_Standard_Waterfall_Model_For_Systems
_Development.htm [Accessed: 28 February 2010]

Cristinacce,D & Cootes,T (2003). Facial feature detection using AdaBoost with
shape constraints.[Online]. Available from
citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.119.8459. [Accessed: 20th
January 2010]

Delac,K. Grgic,M. Liatsis,P. (2005). Appearance-based Statistical Methods for Face


Recognition. Croatia and 8th - 10th June 2005 . Croatia: IEEE Xplore Digital Library.
Pp151-158.

Extreme Optimization. (2010).Extreme Optimization Numerical Libraries for .NET


Build financial, engineering and scientific applications faster. [Online]. Available
from - http://www.extremeoptimization.com/. [Accessed: 05th July 2010].

Fisher,R, et al (2003).Feature Detectors - Canny Edge Detector.[Online]. Available


from : http://homepages.inf.ed.ac.uk/rbf/HIPR2/canny.htm. [Accessed: 2 March
2010]

Fladsrud, T. (2005). Face Recognition in a border control environment:Non-zero


Effort Attacks’ Effect on False Acceptance Rate. In partial fulfilment of the
requirements for the degree of Master of Science. Gjøvik: Gjøvik University College

Grady, L & Schwartz , E.L. (2006). Isoperimetric Graph Partitioning for Image
Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence.

[Online] 28. (3). P.469 - 475. Available from:
http://ieeexplore.ieee.org/Xplore/login.jsp?url=http%3A%2F%2Fieeexplore.ieee.org
%2Fiel5%2F34%2F33380%2F01580491.pdf%3Farnumber%3D1580491&authDeci
sion=-203. [Accessed: 2 March 2010].

Grill. (2010). face.com blog. [Online]. Available from : http://blog.face.com/.


[Accessed: 08 March 2008]

Gaokberk, B. (2006).Three Dimensional Face Recognition.partial fulfilment of the


requirements for the degree of Doctor of Philosophy.Istanbul: Bogazici University.

Gonzalez, R.C & Woods, R.E.(2004).Digital Image Processing. 2nd ed. New Jersey:
Prentice Hall.

GPWiki. (n.d.). Game Programming Wiki. [Online]. Available from :


http://gpwiki.org/index.php/.NET_Development_Environments. [28th February
2010]

Green,B.(2002a). Edge Detection Tutorial.[Online]. Available from :


http://www.pages.drexel.edu/~weg22/edge.html . [Accessed: 2 March 2010].

Green,B.(2002b). Canny Edge Detection Tutorial.[Online]. Available from :


http://www.pages.drexel.edu/~weg22/can_tut.html . [Accessed: 2 March 2010].

Gül,A.B. (2003). Holistic Face Recognition By Dimension Reduction. In Partial


Fulfilment of the Requirements For The Degree of Master of Science. Ankara: The
Graduate School of Natural and Applied Sciences of the Middle East Technical
University

Heseltine,T.D. (2005). Face Recognition: Two-Dimensional and Three-Dimensional


Techniques. In Partial Fulfilment of the Requirements For the Degree of PHD.
Heslington york: The University of York

Jähne,B. (2005). Digital Image Processing. [Online ] Berlin: Springer. Available


from :

http://books.google.com/books?id=qUeecNvfn0oC&dq=J%C3%A4hne++%2B+ima
ge+processing&source=gbs_navlinks_s. [Accessed: 1 March 2010]

Kanade, T. (1973). Picture Processing by Computer Complex and Recognition of Human Faces. Doctoral dissertation. Kyoto: Kyoto University.

Kepenekci, B. (2001). Face Recognition Using Gabor Wavelet Transform. In partial fulfilment of the requirements for the degree of Master of Science. Ankara: The Graduate School of Natural Sciences of the Middle East Technical University.

kioskea.net (2008). Image Processing. [Online]. Available from: http://en.kioskea.net/contents/video/traitimg.php3 [Accessed: 2 March 2010]

Kirby, M. & Sirovich, L. (1990). Application of the Karhunen-Loeve Procedure for the Characterization of Human Faces. IEEE Transactions on Pattern Analysis and Machine Intelligence. [Online] 12(1). pp. 103-108. Available from: http://portal.acm.org/citation.cfm?id=81077.81096 [Accessed: 2 March 2010]

Kosecka, J. (n.d.). Thresholding. [Online]. Available from: http://cs.gmu.edu/~kosecka/cs682/cs682-thresholding.pdf [Accessed: 20 January 2010]

Latecki, L.J. (2005). Template Matching. [Online]. October 2005. Available from: http://www.cis.temple.edu/~latecki/Courses/CIS601-03/Lectures/TemplateMatching03.ppt [Accessed: 3 March 2010]

Math.NET. (n.d.). Math.NET: About & Features. [Online]. Available from: http://www.mathdotnet.com/About.aspx/ [Accessed: 5 July 2010]

Moghaddam, B. (2002). Principal Manifolds and Probabilistic Subspaces for Visual Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. [Online] 24(6). pp. 780-788. Available from: http://www.google.lk/url?sa=t&source=web&ct=res&cd=1&ved=0CAsQFjAA&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F34%2F21743%2F01008384.pdf%3Farnumber%3D1008384&rct=j&q=Principal+manifolds+and+probabilistic+subspaces+for+visual+recognition&ei=Op6US5-BPcy1rAfquLWVCw&usg=AFQjCNH3x83cEeBg10AF_z8XesTIBjSAug [Accessed: 21 February 2010]

Microsoft. (2010). Math Class (System). [Online]. Available from: http://msdn.microsoft.com/en-us/library/4zfefwz9.aspx [Accessed: 5 July 2010]

MyHeritage.com (2006). How Face Recognition Works. [Online]. Available from: http://www.myheritage.com/facial-recognition-technology [Accessed: 30 January 2010]

Nashruddin (2008). Template Matching with OpenCV. [Online]. Available from: http://nashruddin.com/template-matching-in-opencv-with-example.html [Accessed: 3 March 2010]

Navarrete, P. & Ruiz-del-Solar, J. (2001). Eigenspace-Based Recognition of Faces: Comparisons and a New Approach. Proceedings of the 11th International Conference on Image Analysis and Processing. [Online] pp. 875-878. Available from: http://portal.acm.org/citation.cfm?id=876879.879503&coll=&dl= [Accessed: 21 January 2010]

Neoh, S. & Hazanchuk, N.A. (2005). Adaptive Edge Detection for Real-Time Video Processing using FPGAs. [Online]. Available from: http://www.altera.com/literature/cp/gspx/edge-detection.pdf [Accessed: 2 March 2010]

Northrup, T. (2009). Microsoft Framework Application Development Fundamentals. 2nd ed. Delhi: PHI Private Limited.

NSTC Subcommittee on Biometrics. (2006). Face Recognition. [Online]. Available from: http://www.biometrics.gov/Documents/FaceRec.pdf [Accessed: 20 January 2010]

OpenCV. (n.d.). OpenCV Wiki. [Online]. Available from: http://opencv.willowgarage.com/wiki/ [Accessed: 28 February 2010]

O'Toole, A. et al. (1993). Low-dimensional Representation of Faces in Higher Dimensions of the Face Space. Journal of the Optical Society of America A: Optics and Image Science. [Online] 10(3). pp. 405-411. Available from: http://www.opticsinfobase.org/viewmedia.cfm?uri=josaa-10-3-405&seq=0 [Accessed: 21 January 2010]

Rudolf, K.B. (1998). Template Matching. [Online]. Available from: http://rkb.home.cern.ch/rkb/AN16pp/node283.html#SECTION0002830000000000000000 [Accessed: 3 March 2010]

Savvides, L. et al. (2006). Partial & Holistic Face Recognition on FRGC-II Data Using Support Vector Machine. CVPRW: IEEE Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop. [Online] p. 48. Available from: http://portal.acm.org/citation.cfm?id=1153824 [Accessed: 21 February 2010]

SoftDevTeam. (2010). Incremental Lifecycle Model. [Online]. Available from: http://www.softdevteam.com/Incremental-lifecycle.asp [Accessed: 28 February 2010]

Sommerville, I. (2009). Software Engineering. [Online]. Delhi: Pearson Education. Available from: http://books.google.com/books?id=VbdGIoK0ZWgC&dq=Sommerville+%2B+Software+prototyping&source=gbs_navlinks_s [Accessed: 3 March 2010]

Spann, M. (n.d.). Image Segmentation. [Online]. Available from: http://www.eee.bham.ac.uk/spannm/Teaching%20docs/Computer%20Vision%20Course/Image%20Segmentation.ppt [Accessed: 4 March 2010]

Stats for Students. (2007). Measures of Spread. [Online]. Available from: http://www.stats4students.com/Essentials/Measures-Of-Spread/Overview_3.php [Accessed: 21 February 2010]

Sun.com. (n.d.). JDK 6 Documentation. [Online]. Available from: http://java.sun.com/javase/6/docs [Accessed: 1 March 2010]

Sutton, E. (n.d.). Zone System & Histograms. [Online]. Available from: http://www.illustratedphotography.com/photography-tips/basic/contrast [Accessed: 4 March 2010]

Szepesvári, C. (2007). Image Processing: Low-level Feature Extraction. [Online]. Available from: http://webdocs.cs.ualberta.ca/~szepesva/CMPUT412/ip2.pdf [Accessed: 2 March 2010]

Turk, M.A. & Pentland, A.P. (1991). Face Recognition Using Eigenfaces. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. [Online] pp. 586-591. Available from: http://ieeexplore.ieee.org/Xplore/login.jsp?url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel2%2F340%2F3774%2F00139758.pdf&authDecision=-203 [Accessed: 21 February 2010]

University of Buenos Aires (n.d.). University of Buenos Aires. [Online]. Available from: www.uba.ar/ingles/about/scienceandtechnique/imagepr/db [Accessed: 21 January 2010]

Viola, P. & Jones, M. (2001). Robust Real-time Object Detection. [Online]. Available from: www.hpl.hp.com/techreports/Compaq-DEC/CRL-2001-1.pdf [Accessed: 20 February 2010]

Voegele, J. (n.d.). Programming Language Comparison. [Online]. Available from: http://www.jvoegele.com/software/langcomp.html [Accessed: 5 July 2010]
Waltercedric. (n.d.). Picasa 3.5 Available with Face Recognition / Easy Geotagging. [Online]. Available from: http://www.waltercedric.com/component/content/article/238-software/1653-picasa-35-available-with-face-recognition--easy-geotagging.html [Accessed: 8 March 2010]

Weisstein, E.W. (2010a). Covariance. [Online]. Available from: http://mathworld.wolfram.com/Covariance.html [Accessed: 21 February 2010]

Weisstein, E.W. (2010b). Eigenvalue. [Online]. Available from: http://mathworld.wolfram.com/Eigenvalue.html [Accessed: 21 February 2010]

Willing, R. (2003). Airport Anti-terror Systems Flub Tests; Face-recognition Technology Fails to Flag 'Suspects'. 2 September. [Online]. Available from: http://usatoday.printthis.clickability.com/pt/cpt?action=cpt&expire=&urlID=7387802&fb=Y&partnerID=1664 [Accessed: 12 February 2010]

Wiskott, L. et al. (1997). Face Recognition by Elastic Bunch Graph Matching. IEEE Transactions on Pattern Analysis and Machine Intelligence. [Online] 19(7). pp. 775-779. Available from: http://www.face-rec.org/algorithms/EBGM/WisFelKrue99-FaceRecognition-JainBook.pdf [Accessed: 21 January 2010]

Yambor, W.S. (2000). Analysis of PCA-Based and Fisher Discriminant-Based Image Recognition Algorithms. In partial fulfilment of the requirements for the degree of Master of Science. Colorado: Colorado State University.

Yang, X. (2004). Image Segmentation. [Online]. Available from: http://www.img.cs.titech.ac.jp/ipcv/seminar/2004-2/Segmentation.pdf [Accessed: 4 March 2010]

Zhang, W. et al. (2007). Local Gabor Binary Patterns Based on Kullback-Leibler Divergence for Partially Occluded Face Recognition. IEEE Signal Processing Letters. [Online] 14(11). pp. 875-878. Available from: http://ieeexplore.ieee.org/Xplore/login.jsp?url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F97%2F4351936%2F04351969.pdf%3Farnumber%3D4351969&authDecision=-203 [Accessed: 21 January 2010]

Zhao, W. et al. (2003). Face Recognition: A Literature Survey. ACM Computing Surveys (CSUR). 35(4). pp. 399-458.

APPENDICES

APPENDIX A

A. IMAGE PROCESSING TECHNIQUES

1. Image Processing Techniques

1.1 Image Segmentation

1.1.1 Grey level histogram-based segmentation

1.1.1.1 Identifying noise from the histogram

Spann (n.d.) used three images and obtained the following histograms from them.

Figure 1.1: Binary Images

[Source: Spann, n.d.]

Figure 1.2: Histograms

[Source: Spann, n.d.]

Spann (n.d.) noted that, using these histograms, the three images can be characterised as follows:

Noise free

“For the noise free image, it’s simply two spikes at i=100, i=150” (Spann, n.d.)

Low Noise

“There are two clear peaks centred on i=100, i=150” (Spann, n.d.)

High Noise

“There is a single peak – two grey level populations corresponding to object and background have merged” (Spann, n.d.)

1.2 Approaches for determining the threshold

1.2.1 Minimisation method


According to Haralick and Shapiro (2000, p. 20, cited in Spann, n.d.), “Any threshold separates the histogram in to two different groups and each group got its own mean and variance.” Furthermore, “The similarity of each group is measured by the within group variance. The best threshold is the threshold that minimizes the within group variance thus maximizing the homogeneity of each group.” (Haralick and Shapiro, 2000, p. 20, cited in Spann, n.d.)

Spann (n.d.) presented the following calculation for finding the prior probabilities of the object and the background.

p_o(T) = Σ_{i=0}^{T} P(i)

p_b(T) = Σ_{i=T+1}^{255} P(i)

P(i) = h(i) / N

- Let group o (object) be those pixels with greylevel <= T
- Let group b (background) be those pixels with greylevel > T
- The prior probability of group o is p_o(T)
- The prior probability of group b is p_b(T)
- where h(i) is the histogram of an N-pixel image

By using these probabilities, it is possible to calculate the mean and variance of each group. The optimum threshold is then the one that minimises the within-group variance. Note that the object and background populations can overlap; this happens when both have similar grey-level characteristics.
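To make the minimisation concrete, the following is a minimal Python/NumPy sketch of an exhaustive search for the threshold that minimises the within-group variance. It is illustrative only: the function name find_threshold and the 256-bin greyscale histogram are assumptions, not part of Spann's material or of the artefact's actual implementation.

import numpy as np

def find_threshold(image):
    # h(i): histogram of the N-pixel greyscale image; P(i) = h(i) / N
    h, _ = np.histogram(image, bins=256, range=(0, 256))
    P = h / h.sum()
    grey = np.arange(256)

    best_T, best_within = 0, np.inf
    for T in range(1, 255):
        po = P[:T + 1].sum()              # prior probability of group o (object)
        pb = P[T + 1:].sum()              # prior probability of group b (background)
        if po == 0 or pb == 0:
            continue
        mo = (grey[:T + 1] * P[:T + 1]).sum() / po                  # group means
        mb = (grey[T + 1:] * P[T + 1:]).sum() / pb
        vo = (((grey[:T + 1] - mo) ** 2) * P[:T + 1]).sum() / po    # group variances
        vb = (((grey[T + 1:] - mb) ** 2) * P[T + 1:]).sum() / pb
        within = po * vo + pb * vb        # within-group variance for this threshold
        if within < best_within:
            best_T, best_within = T, within
    return best_T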

APPENDIX B

B. EIGENFACE STEP BY STEP (FORMULAS)

2. Eigenface Step by Step (Formulas)

2.1 Eigenface Step by Step (Formulas)

2.1.1 Steps of calculating the eigenfaces

The following steps are based on Turk and Pentland (1991), as cited in Gül (2003) and Bebis (2003).

Computation of eigenfaces

Step 1: Obtaining training faces

Important: the faces must all be the same size.

Figure 2.1: Training Images

[Source: Bebis, 2003]

Step 2: Represent all the 2D faces (training set) as 1D vectors

As shown in figure 4.8, each image should be converted into a 1D vector.

Training set: Γ = [Γ_1 Γ_2 ... Γ_{M_t}]

Γ_i = transformed image (2D to 1D)

Consider an image matrix of (N_x × N_y) pixels. The image is transformed into a new image vector Γ_i of size (P × 1), where P = N_x × N_y, and Γ_i is a column vector. As mentioned in the image space section, this is done by stacking each column one after the other.
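As a small illustration, assuming each face is held as an (N_x × N_y) NumPy array, the column-wise stacking can be sketched as follows (the helper name to_column_vector is hypothetical):

import numpy as np

def to_column_vector(face):
    # Stack the image columns one after the other: (Nx, Ny) -> (P, 1), P = Nx * Ny
    return face.reshape(-1, 1, order='F')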

Step 3: Create the mean / average face

After transforming the training images into 1D vectors, the average of the faces should be calculated. This average is called the mean face:

Ψ = (1/M_t) Σ_{i=1}^{M_t} Γ_i

Ψ = mean face / average value
M_t = number of training images
Γ_i = i-th training image vector

The mean face is the average of all training image vectors at each pixel position. The size of the mean face is (P × 1).

Step 4: Subtract the mean face

After creating the mean face, the difference between each training image and the mean face should be calculated. By doing this it is possible to identify each training set image uniquely:

Φ_i = Γ_i − Ψ

Φ_i = difference / mean-subtracted face
Γ_i = current training image
Ψ = mean face

The mean-subtracted image is the difference of the training image from the mean image. Its size is (P × 1).

In the same way, every mean-subtracted face (the difference between a training image and the mean face) should be calculated. The output is a matrix holding all the mean-subtracted faces:

Difference matrix: A = [Φ_1 Φ_2 ... Φ_{M_t}]

The size of A will be N² × M_t.

Step 5: Principal component analysis

After finding the mean-subtracted faces, principal component analysis is applied to this set of vectors. PCA seeks the set of orthonormal vectors u_k that best describe the distribution of the data; the k-th eigenvalue λ_k is the variance captured along u_k:

λ_k = (1/M_t) Σ_{n=1}^{M_t} (u_k^T Φ_n)²

λ_k = k-th eigenvalue
u_k = k-th orthonormal vector (eigenvector)
Φ_n = mean-subtracted face

Step 6: Covariance matrix calculation

After applying PCA, the corresponding eigenvalues and eigenvectors should be calculated. To do this, the covariance matrix must be computed:

C = (1/M_t) Σ_{n=1}^{M_t} Φ_n Φ_n^T = A A^T

C = covariance matrix
Φ_n = difference / mean-subtracted face
Φ_n^T = transpose of the difference
A = difference matrix

Here Φ_n^T denotes the transpose of the difference vector Φ_n, and correspondingly A^T is the transposed difference matrix.

Step 7: Compute Eigenvectors

For a face image of N_x by N_y pixels, the covariance matrix has size (P × P), where P = N_x × N_y. Processing such a matrix consumes a great deal of time and processing power, so using it directly in this form is not a good choice.

Therefore, according to Turk & Pentland (1991), if the number of data points in the image space M_t is less than the dimension of the space (M_t < N²), there will be only M_t − 1 meaningful eigenvectors. These can be found by solving an M_t × M_t problem, which is far more tractable than solving for N² × N² dimensional vectors; the eigenvectors of A A^T are then obtained as linear combinations of the face images.

Turk & Pentland (1991) stated that, using the following construction, one can instead work with the M_t × M_t matrix L = A^T A and compute its M_t eigenvectors v_l.

They also mentioned that these eigenvectors determine linear combinations of the M_t training set images that form the eigenfaces u_l, where l = 1, ..., M_t:

u_l = Σ_{k=1}^{M_t} v_{lk} Φ_k

u_l = l-th eigenface
v_{lk} = k-th component of the l-th eigenvector of L
Φ_k = mean-subtracted face of the k-th image

Turk & Pentland (1991) stated that with this approach the amount of calculation required is greatly reduced, “from the order of the number of pixels in images (N²) to the order of the number of images in the training set (M)”, and in practice the training set of face images will be relatively small.
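The following Python/NumPy sketch pulls steps 3 to 7 together, using the reduced M_t × M_t computation described above. The names train_eigenfaces and Gamma are illustrative assumptions; this is a sketch of the technique, not the project's actual implementation.

import numpy as np

def train_eigenfaces(Gamma):
    # Gamma: (P, Mt) matrix whose columns are the Mt training face vectors (step 2)
    Psi = Gamma.mean(axis=1, keepdims=True)   # step 3: mean face, shape (P, 1)
    A = Gamma - Psi                           # step 4: difference matrix, (P, Mt)

    L = A.T @ A                               # step 7: Mt x Mt matrix instead of P x P
    eigvals, V = np.linalg.eigh(L)            # eigenvectors v_l of L (steps 5-6)
    order = np.argsort(eigvals)[::-1]         # largest eigenvalue (variance) first
    V = V[:, order]

    U = A @ V                                 # u_l = sum_k v_lk * Phi_k (step 7)
    U /= np.linalg.norm(U, axis=0)            # normalise each eigenface column
    return Psi, U, A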

Step 8: Face recognition using Eigenface

Figures 2.2 and 2.3 show example images taken from the ORL database and the mean face calculated from those faces.

Figure 2.2: Example images from the ORL database

[Source: Gül, 2003]

Figure 2.3 Mean face obtained from the ORL database

[Source: Gül, 2003]

Furthermore, Gül (2003) stated that by using fewer eigenfaces than the total number of faces for the eigenface projection, it is possible to eliminate the eigenvectors with small eigenvalues, which contribute little variance to the data.

The following image shows some eigenfaces.

Figure 2.4: Eigenfaces

[Source: Turk & Pentland, 1991]

Based on Turk & Pentland (1991) and Turk & Pentland (1990), Gül (2003) described that larger eigenvalues indicate larger variance; therefore the eigenvector with the largest eigenvalue is taken as the first eigenvector, so that the most generalising eigenvector comes first in the eigenvector matrix.

Step 8-1: Calculate the distance of difference

Let Γ be the probe image (the image that needs to be recognised). It should be the same size as the training images and should have been taken under the same lighting conditions. As mentioned by Turk & Pentland (1991), the image should be normalised before it is transformed into the eigenface space. Assuming Ψ holds the average of the training set, the following formula gives the probe's distance from the average:

Φ = Γ − Ψ

Φ = distance of difference
Γ = captured image value
Ψ = average value (mean face)

Step 8-2: Projection

The projection values should be calculated from the probe image and the eigenfaces obtained from the training images:

ω_k = u_k^T Φ, for k = 1, ..., M'

ω_k = projection value onto the k-th eigenface
M' = number of eigenfaces used
u_k = k-th eigenface
Φ = distance of difference

Step 8-3: Store the weights in a matrix

Ω^T = [ω_1 ω_2 ... ω_{M'}]

Ω = weight matrix
ω_k = weight of the image / projection of the image

Ω describes the contribution of each training eigenface to representing the input image.

Step 8-4: Calculate the Euclidean distance

The Euclidean distance is used to measure the distance between the projection of the input image and the projection values of the faces in the training set:

ε_i = ||Ω − Ω_i||

ε_i = Euclidean distance to face class i
Ω = projection of the probe image
Ω_i = projection of the i-th training image

The matching face is identified as the one with the minimum Euclidean distance ε_i.
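Continuing the training sketch above, steps 8-1 to 8-4 might look as follows (the function name recognise is hypothetical; Psi, U and A are the outputs of train_eigenfaces):

import numpy as np

def recognise(probe, Psi, U, A):
    # probe: (P, 1) probe image vector, same size/lighting as the training images
    Phi = probe - Psi                     # step 8-1: distance of difference
    Omega = U.T @ Phi                     # step 8-2: projection onto the eigenfaces
    Omega_train = U.T @ A                 # step 8-3: weight matrix of training faces
    e = np.linalg.norm(Omega_train - Omega, axis=0)   # step 8-4: Euclidean distances
    return int(np.argmin(e)), float(e.min())          # best match and its distance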

Step 9: Classify the face image

Using the eigenface approach it is also possible to verify whether an image is a face at all.

Step 9-1: Calculate the distance of difference

First the image has to be normalised and transformed into a 1D vector. Then the distance of difference is calculated:

Φ = Γ − Ψ

Φ = distance of difference
Γ = captured image value
Ψ = average value (mean face)

Step 9-2: Projection

The projection values should be calculated from the probe image and the eigenfaces obtained from the training images, exactly as in step 8-2:

ω_k = u_k^T Φ, for k = 1, ..., M'

(ω_k, u_k, Φ and M' are as defined in step 8-2)

Step 9-3: Calculate the Euclidean distance

As in step 8-4, the Euclidean distance between the projection of the input image and each training projection is computed, and the minimum distance identifies the closest face class:

ε_i = ||Ω − Ω_i||

ε_i = Euclidean distance to face class i
Ω = projection of the probe image
Ω_i = projection of the i-th training image

Step 9-4: Calculate the threshold

The distance threshold Θ defines the maximum allowable distance from any face class. It is equal to half the distance between the two most distant face classes:

Θ = (1/2) max_{j,k} ||Ω_j − Ω_k||, for j, k = 1, ..., M_t

Θ = distance threshold
Ω_j, Ω_k = projections of the j-th and k-th training images
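A brief sketch of this calculation, under the same NumPy assumptions as the earlier sketches (the name distance_threshold is illustrative):

import numpy as np

def distance_threshold(Omega_train):
    # Omega_train: (M', Mt) weight matrix of the training images (step 8-3)
    # Theta is half of the largest pairwise distance between face classes
    d = np.linalg.norm(Omega_train[:, :, None] - Omega_train[:, None, :], axis=0)
    return 0.5 * d.max()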

“Classification procedure of the Eigenface method ensures that face image vectors should fall close to their reconstructions, whereas non-face image vectors should fall far away.” (Gül, 2003)

The distance ε from face space is the distance between the mean-subtracted image and its reconstruction from the eigenfaces:

ε² = ||Φ − Φ_f||²

ε = distance from face space
Φ = distance of difference (mean-subtracted image)
Φ_f = reconstruction of Φ from the eigenfaces, Φ_f = Σ_{k=1}^{M'} ω_k u_k

Based on Turk & Pentland (1991), an image can be recognised by knowing the per-class distances ε_i and the face-space distance ε:

If ε ≥ Θ: the image is not a face
If ε < Θ and ε_i ≥ Θ for all i: the image is an unknown face
If ε < Θ and ε_i < Θ for some i: the image matches training image i

Table 2.5: Eigenface Recognition
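A hedged sketch of this decision rule, continuing the earlier NumPy sketches (classify is an illustrative name; e is the vector of per-class distances ε_i from step 9-3):

import numpy as np

def classify(Phi, Omega, U, e, theta):
    # Phi: mean-subtracted probe (step 9-1); Omega: its projection (step 9-2)
    # e: per-class Euclidean distances (step 9-3); theta: threshold (step 9-4)
    Phi_f = U @ Omega                    # reconstruction of Phi from the eigenfaces
    eps = np.linalg.norm(Phi - Phi_f)    # distance from face space
    if eps >= theta:
        return "not a face"
    if e.min() >= theta:
        return "unknown face"
    return "matches training image %d" % int(np.argmin(e))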

Figure 2.5: Pixel misclassification

[Source: Spann, n.d.]

As the researcher understood it, noise reduction should be performed on the image to minimise this anomaly.
