
KABARAK UNIVERSITY

NAME: AFFAXERD CHERUIYOT

COURSE: COMPUTER SCIENCE

REG NO: CS/MG/1677/05/18

UNIT: PROJECT 1

COURSE CODE: COMP 411

LECTURER: DR. CHRISPUS ALUKWE

i. DECLARATION

I hereby declare that this project is based on my original work, except for citations and quotations which have been duly acknowledged. I also declare that it has not been previously or concurrently submitted for any other degree or award at Kabarak University.

Name:

Signature:

Date:

ii. RECOMMENDATION

I, the supervisor, do hereby certify that this is a true report of the project undertaken by the above-named student under my supervision and that it has been submitted to Kabarak University with my approval.

Signature……………………………………………. Date………………………

iii. ACKNOWLEDGEMENT PAGE

My deepest gratitude goes to Almighty God, who is the source of strength, knowledge and wisdom. I also wish to show my appreciation to my supervisor, Dr. Chrispus Alukwe, who played an important role in approving my research and offered step-by-step guidance until the completion of my project proposal. Secondly, I would like to pass my sincere appreciation to the Kabarak library staff for the good services they offered me to ensure smooth research work in terms of library resources, that is, a stable internet connection and reading materials. Finally, I would like to appreciate the emotional and financial support from friends and family, who consistently supported me throughout the research process.

iv. DEDICATION PAGE.

This work is dedicated to my parents, who have been my main source of spiritual, emotional and financial support. Their daily inspiration is what has kept me going through the whole research.

Secondly, I dedicate this work to my lecturers, who have had a very positive impact on my life through the knowledge and skills I gained in class, which formed the basis of this research.

v. Abbreviations

OpenCV: Open Source Computer Vision (library)

DNA: Deoxyribonucleic acid

CCTV: Closed Circuit Television

CV: Computer vision

CVCAM: computer vision camera

I/O: Input/output

XML: Extensible markup language

vi. List of figures
a) Face detection and recognition steps…………………………………………………22
b) Overall system design………………………………………………………………….23
c) Face detection steps………………………………………………………………….....24
d) Face recognition steps………………………………………………………………….25

Table of contents

i. Declaration………………………………………………………………………………2
ii. Recommendation………………………………………………………………………..3
iii. Acknowledgement page…………………………………………………………………4
iv. Dedications……………………………………………………………………………….5
v. Abbreviations……………………………………………………………………………6
vi. List of figures…………………………………………………………………………….7

Abstract…………………………………………………………………………………………10

CHAPTER 1

1.0 INTRODUCTION………………………………………………………………...11

1.2 BACKGROUND OF STUDY……………………………………………………11

1.3 STATEMENT OF THE PROBLEM………………………………………13

1.4 OBJECTIVE OF STUDY…..…………………………………………………….14

1.5 SIGNIFICANCE OF STUDY……………………………………………………...14

1.6 SCOPE OF STUDY………………………………………………………………...15

1.7 LIMITATION OF STUDY………………………………………………………...15

CHAPTER 2 ……………………………………………………………………………………17

2.0 LITERATURE REVIEW………………………………………………………………….17


2.1 INTRODUCTION…………………...…………………………………………......18
2.2 FACE RECOGNITION TECHNIQUES……...…………………………………..18
2.2.1 EIGENFACES……………………………………………………..18
2.2.2 NEURAL NETWORKS………………………………………………..19
2.2.3 GRAPH MATCHING………………………………………………….19
2.2.4 HIDDEN MARKOV MODELS……………………………………….20
2.3 RECENT TECHNIQUES…………………………………………………………20
2.3.1 LINE EDGE MAP……………………………………………………..20
2.3.2 SUPPORT VECTOR MACHINE…………………………………….21
2.4 COMPARISON OF FACE DATABASES……………………………………….21
2.5 KNOWLEDGE GAP………………………………………………………………22
2.6 DESIGN FRAMEWORK....……………………………………………………….22
2.6.1 INTRODUCTION…………………………………………………….22
2.6.2 DESIGN OF FACE RECOGNITION SYSTEM……………………23
2.6.2.1 INPUT PART…………………………………………………23
2.6.2.2 FACE DETECTION…………………………………………24
2.6.2.3 FACE TRAINING……………………………………………24
2.6.2.4 FACE RECOGNITION……………………………………...24

3.0 CHAPTER 3……………………………………………………………………………..…26

3.0 SYSTEM ANALYSIS AND DESIGN………………………………………..….26

3.0.1 INPUT PART…………………………………………………………..…27

3.0.2 FACE DETECTION PART………………………………………...……27

3.0.3 FACE TRAINING PART………………………………………..………27

3.0.4 FACE RECOGNITION PART……………………………….…………28

3.1 FUNCTIONAL AND NON FUNCTIONAL REQUIREMENTS………………28

3.1.0 FUNCTIONAL REQUIREMENTS……………………………………..28

3.1.1 NON FUNCTIONAL REQUIREMENTS………………………………28

3.2 METHODOLOGY………………………………………………………………...29

REFERENCES………………………………………………………………………….30

ABSTRACT

Face detection and recognition is one of the greatest inventions of our time. The paradigm involves detecting a person’s face using a web camera, CCTV or any other camera that captures faces, processing the image and assigning it a numerical ID, and then recognizing the face whenever it is presented to the system again. In this research I attempt to apply this paradigm to create a digital gate system in which students do not have to queue and show their identity cards to the security personnel, but simply stand in front of a camera. The trainer will first have to be given access to student photos along with their admission numbers, since these are unique. The system works in such a way that if a person entering the gate does not belong to Kabarak University, whether a student, a lecturer or anyone else, the system presents an “unknown” alert and that person is not granted access. If the person proceeds to force their way in, they can be tracked using the CCTV cameras around the school.

CHAPTER 1:

1.0) INTRODUCTION

The face is one of the important biometric features of a human being, in the same category as fingerprints and DNA. This feature has enabled us to build a security system that grants access only to selected individuals who have been authorized using their faces. With such a system, faces can be scanned and identified based on different features and environmental surroundings, so the system can be reliable regardless of the time of day. Humans can distinguish faces in low light or in the dark, when a face has undergone slight deformation, or when facial features have changed a little, such as through the growth or shaving of facial hair. With my face recognition system I have tried to approach this accuracy by building a data frame of many facial features for a single individual and storing them in a database, so that the machine can identify the same individual under different parameters; even so, it does not beat human judgment in recognizing faces.

1.2) BACKGROUND OF THE STUDY

In modern times, industries and institutions have embraced technology in their day-to-day operations, and it has proven to be cost effective, safe, accurate and easy to work with. Kenya, my main area of focus, is one of the African countries that has invested heavily in and embraced technology over the past two decades; one of the most iconic moments was in February 2015, when the government ordered the migration of TV broadcasting to digital media. In 2014, however, the Kenyan government faced many security challenges which affected both public and private enterprises negatively. One of the main challenges during that time was insecurity caused by terror attacks on the Kenyan border and in major towns such as Nairobi, Garissa, Mombasa, Malindi, Nakuru and Eldoret, among others. After much brainstorming among the stakeholders, a solution was reached: a countrywide installation of CCTV would commence with immediate effect. The government, desperate to improve security in the country, was ready to spend over 162 million Kenyan shillings to fund the project, which saw all Kenyan towns and the Somali border put under surveillance using CCTV cameras. The contract was given to Safaricom, one of the network providers in Kenya. After completion of the first phase of the project, the results were promising. This prompted the government to extend the project to other parts of the country, and it has been one of the most significant moves in beefing up security, as criminals can easily be found by following them on the surveillance cameras.

I have, however, realized that there is a gap in this solution which my project seeks to fill. CCTV can only be used to monitor; the challenge is that it depends on human knowledge to identify the person to be monitored. This project will therefore be able to detect and recognize each person within the CCTV’s field of view. As a prototype, my project will focus on user and machine interaction, using any regular camera through a series of steps. The main aim of this part is to collect the facial features of each and every individual, considering every detail, including environmental conditions such as humidity, room brightness, face angles, temperature and facial make-up, among others. More importantly, this system employs a machine learning technique whereby I can train my model on all of those features, and in doing so it raises its accuracy level.

As a starting point, I will deploy this system at my institution, where, instead of using regular login information such as names or code numbers, the system will build a data frame for all the employees of the institution, or any member, according to their ranks. This data frame will then be used to log in anywhere, be it the server rooms, the labs or even the school’s main gate.

This project is a game changer when it comes to security. It will enable security personnel to easily review the day’s activities in case of any security issue that is not identified as it occurs, which means it is a contingency plan in itself. It can also be used in access control, where it is more effective, accurate and faster, reducing traffic and human interaction at gate systems. In the case of cyber security, face detection and recognition helps to avoid unauthorized file access in an institution where login names and passwords are used; here the login names and passwords are replaced by a monitor and a camera, and only the user’s face is required, which makes it more secure.

1.3) STATEMENT OF THE PROBLEM

My system is for checking in and out at the institution’s gates. It complements the traditional checking of gate passes by security personnel at the gates, which depends largely on human judgment. This can be compromised because of an underlying loophole in which an unauthorized individual can bypass the security personnel at the gate, for example by producing a fake gate pass, especially when a large number of employees is being handled. To curb this problem I have designed a system that uses the faces of the employees as the gate pass. It does, however, require a little education on the procedure and how to interact with the system.

An institution with porous security measures has a lower chance of survival. An institution without an elaborate security system faces the risk of asset loss, destruction of property, compromised work flow, and the risk of its confidential information landing in the hands of unauthorized persons. An additional advantage of the system is that it can take a register of every employee who reported to work on a given day, which can be used for supervision of work and other follow-ups.

This system will incorporate Intel’s image processing framework, OpenCV, which has a built-in face detector with an accuracy of about 95 percent and a false detection rate of about 3 percent.

OpenCV: This is an open source computer vision library with advanced capabilities for recognizing faces. It provides a variety of algorithms via its lower-level application programming interface that will be used in face detection and recognition. Its functionality is organized into several modules, as follows:

(a) The CXCORE namespace contains the basic data type definitions, linear algebra and statistics methods, the persistence methods, error handling and the graphics functions.

(b) The CV namespace contains image processing and camera calibration methods. The computational geometry functions are also located here.

(c) The CVAUX namespace contains obsolete and experimental code, with its simplest interfaces contained in this module. The ML namespace contains the machine learning interfaces, while HighGUI (the high-level graphical user interface) contains the basic I/O interfaces and multi-platform windowing capabilities.

(d) The CVCAM namespace contains interfaces for video access through DirectX on 32-bit Windows platforms.

(e) Eigenfaces is the simplest yet reasonably accurate recognition method, though it works better when combined with other methods.

1.4) OBJECTIVE OF STUDY

The main aim of this project is to build a face detection and recognition system to be used at the gates as a university gate entry pass, distinguishing authorized persons in the institution from intruders. The system uses OpenCV and requires the user to stand in front of the camera and be scanned so that the individual can be verified. The system can also serve secondary uses, such as keeping track of the number of students who came to school on a given day and, if need be, identifying exactly who has been to school and who has not.

1.5) SIGNIFICANCE OF THE STUDY

Image quality is a special consideration when it comes to face detection and recognition. If the model was trained in a dark environment in order to detect faces in the dark, then I have to stick with that environment, just as a model trained with sufficient lighting should be used under sufficient lighting.

In terms of image positioning, images should also be consistent; that is, the eyes should be at the same pixel coordinates, and size, rotation angle, hair and make-up, emotion and position of lights should all be consistent. This implies that I should provide consistent pixels, which means doing away with non-permanent objects on the face, such as a mask or glasses, in order to capture the inner face.

To solve most of these problems, I standardize the lighting in order to eliminate bias. I also employ additional processing stages such as edge enhancement, motion detection and many others.

The Haar cascade classifier is another OpenCV feature which is used to detect faces in a scene or a video. Its only job is to decide whether a region is a “face” or “not a face”. This is done by standardizing the detection window; if it is 50 by 50 pixels, the classifier only evaluates regions of that size, and if the image is bigger than that, the window has to move over the image repeatedly so that faces can be detected by the classifier. The classifier uses data stored in XML files to decide how to classify each image location. The OpenCV download includes four flavours of XML data for frontal face detection and one for profile faces. It also includes three non-face XML files: one for full body detection, one for the upper body and one for the lower body. You need to tell the classifier where to find the data file you want it to use.
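
As a hedged illustration of this step (a minimal sketch assuming the opencv-python package, which ships the frontal-face XML under cv2.data.haarcascades, and an illustrative image file students.jpg), face detection on a single image could look like this:

    import cv2

    # Load the bundled frontal-face Haar cascade (path assumes the opencv-python package).
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_cascade = cv2.CascadeClassifier(cascade_path)

    image = cv2.imread("students.jpg")               # illustrative file name
    if image is None:
        raise SystemExit("Could not read students.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # the detector works on grayscale

    # Slide the detection window over the image at several scales.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                          minSize=(50, 50))

    for (x, y, w, h) in faces:                       # draw a box around each detected face
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("students_detected.jpg", image)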

It is a good idea to locate the XML file you want to use and make sure the path to it is correct before you code the rest of your face detection program. It is also very easy to use a webcam stream as input to the face recognition system instead of a file list. Basically, you grab frames from a camera instead of from files, and you run until the user wants to quit instead of running until the file list has run out.
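
A minimal sketch of that loop (assuming the default webcam at index 0 and reusing the cascade loaded above; both are assumptions rather than part of the original design) might be:

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)          # default webcam; the index is an assumption
    while True:
        ok, frame = cap.read()         # grab a frame instead of reading a file
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("Face detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # run until the user presses 'q'
            break

    cap.release()
    cv2.destroyAllWindows()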

1.6) SCOPE OF THE STUDY

This project will be carried out in Nakuru County, specifically at Kabarak University. I will take the faces of a small sample of students to build a data frame, store their faces in a database, and then use a regular computer camera to scan their faces to determine whether the system can recognize them.

1.7) LIMITATION OF STUDY

During the research process I encountered several major challenges: the background of the faces is supposed to be constant at all times; the angle of the face was a significant factor in detecting faces, and sometimes it was hard to choose the angle, since most circumstances required a 3D head pose; and the model required sufficient light in the room to detect the image. The main difficulties here were shadows, the presence of glasses on the face, and make-up, all of which affect the face detection and recognition process. One might actually have to repeat the process several times for a person to be identified.

Another limitation I encountered, which might ultimately pose a challenge when the system is in use, is that it cannot recognize a person wearing a face mask. This is significant because current government regulations and medical recommendations state that everyone should wear a mask to prevent exposure to the coronavirus, which is currently a global pandemic. This is one of my major challenges for now, but I leave it as a gap and intend to address it and provide the feature in a future update.

CHAPTER 2:

2.0) LITERATURE REVIEW.

2.1) INTRODUCTION

Face detection and recognition is becoming one of the major key players in every field, especially when it comes to security in the current world. From detection at crime scenes to biometric security in sensitive areas such as banks, and even phone locking, the face detection and recognition paradigm is proving to be one of the most essential security features to have. In my effort to improve this security feature, I have based the foundation of my work on previous well-researched and well-written works.

In one of the earliest publications on the subject, F. Galton (1888) first proposed a method of classifying faces. The features to be collected were to be unique to individuals; among the features laid down were facial curves, whose norms were to be studied and from which derivatives for classifying these norms would be drawn. With rapid technological advancement, however, face detection and recognition took a step forward as researchers demonstrated their systems in real-world settings, thanks to the introduction of various factors that steered the whole field in a new direction. These factors include the availability of large databases to store the trained images, the introduction of new, powerful algorithms, and methods for evaluating those algorithms.

In the literature, the face recognition problem is formulated either as recognition of a static or video image of a face in a scene or as verification of a person in a scene by comparing their image to one in the collected dataset. When comparing a face against a given dataset, there are some important aspects to consider. The first is the client, who is assumed to be cooperative and who makes the identity claim. This means it is not necessary to consult the whole database of stored images in order to complete the verification: each incoming image is compared only with the model images belonging to the person whose identity is to be verified. The second consideration is that the system must work at near real time to be acceptable to the client. Finally, in recognition experiments, only images of people that were used in model training are used to test the system, plus an additional image from outside the trained images to validate it.

Face recognition is a biometric approach that uses automated methods to verify the identities of people from their physiological characteristics. Generally, as a biometric system, a face recognition device works through a series of three procedures. First, the sensor takes an observation; in my case, the observation is face detection, since the system must first detect a face before it can recognize it. Secondly, an algorithm normalizes the biometric signature of the observed image into the required format. Finally, a matcher compares the normalized signature with the ones in the database in order to produce a similarity index between the two images and either verifies the identity or rejects it.

2.2 FACE RECOGNITION TECHNIQUES

In this section, I take a look at the major face recognition techniques along with their advantages and disadvantages. The techniques include neural networks, eigenfaces, dynamic link architecture, geometrical feature matching, hidden Markov models and template matching. This analysis is based on the facial representations used.

2.2.1) EIGENFACES

This face recognition technique argues that images of faces can be reconstructed from a small collection of weights for each face together with a standard face picture. The weights for each face are obtained by projecting the image of the face onto the eigenpictures. Mathematically, eigenfaces are the principal components of the distribution of faces. The eigenvectors are ordered so that they each represent a different amount of variation among faces, and every face can be represented by a linear combination of the eigenfaces.

The limitation of this technique is that, because of the large amount of background data, the background has a huge influence on the result.
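
As a hedged sketch of this idea (assuming the opencv-contrib-python package, which provides an eigenface recognizer, and assuming equal-sized grayscale face images with integer labels have already been collected; the file names, sizes and labels are illustrative assumptions rather than details from the cited work):

    import cv2
    import numpy as np

    # Illustrative training data: equal-sized grayscale face images plus one
    # integer label per image (e.g. an index tied to an admission number).
    paths = ["face_1.jpg", "face_2.jpg"]
    faces = [cv2.resize(cv2.imread(p, cv2.IMREAD_GRAYSCALE), (100, 100)) for p in paths]
    labels = np.array([1, 2])

    # Eigenfaces: project each face onto the principal components of the training set.
    recognizer = cv2.face.EigenFaceRecognizer_create()
    recognizer.train(faces, labels)

    # Predict the identity of a new probe image (same size, grayscale).
    probe = cv2.resize(cv2.imread("probe.jpg", cv2.IMREAD_GRAYSCALE), (100, 100))
    label, distance = recognizer.predict(probe)
    print("Predicted label:", label, "distance:", distance)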

2.2.2) NEURAL NETWORKS

This technique is mostly liked for its non-linearity. The first artificial neural network technique used in face recognition was a single-layer adaptive network known as WISARD, which uses a separate network for each stored individual. For successful recognition, the way the neural network structure is created is key, as it depends on the intended application. Multilayer perceptrons and convolutional neural networks are usually used for face detection, while a multi-resolution pyramid structure is often used for verification.

The main advantage of this technique is that it can achieve high recognition rates of about 96%, even on databases of more than 200 images. Its disadvantage, on the other hand, is that the problems with the system grow as the number of classes increases. This means the technique is not suitable for a recognition test involving a single model image, since it requires multiple model images per person for optimal parameter setting.

2.2.3) GRAPH MATCHING

This approach uses a dynamic link structure for recognizing distorted images, employing elastic graph matching to find the closest stored graph. Trained objects are represented in the database by graphs whose vertices are labelled with multi-resolution descriptions, particularly in terms of their power spectra, and whose edges are labelled with geometrically measured distance vectors. Recognition is formulated as elastic graph matching, with stochastic optimization used for the cost of the matching function.

The main advantage of this technique is that it is superior to other face recognition techniques in that it can recognize even distorted images, meaning it may be able to operate in a poorly lit area. Its limitation, on the other hand, is that it is quite an expensive technique.

2.2.4) Hidden Markov Models (HMM)

Because of its success in speech recognition, this technique was introduced in order to determine whether similar results could be achieved in face recognition. Here, faces are partitioned into regions that can be associated with states, such as the nose, mouth and eyes. Since this technique requires a one-dimensional model while images are two-dimensional, the images have to be converted into 1D temporal or spatial sequences.

The advantage of this technique is that its recognition rates are quite high; one of the sources cited 87% accuracy on the ORL database, which consists of 400 images. Its main limitation is that the images must always be converted into one-dimensional sequences first.

2.3) Recent Techniques

2.3.1) Line Edge Map (LEM)

This is one of the most widely used techniques in the face recognition field, as it represents the edge information of the object. The accuracy of this approach is about the same as that of the most commonly used grayscale systems, at roughly 92%. The approach extracts face edges as the features to be recognized in a person’s face, then integrates this information with the spatial information of the face image by grouping the pixels of the face’s edge map into line segments. Once thinning of the edge map has been carried out, a polygonal line fitting process is applied to generate the line edge map of the image.

The major benefit of this technique is that it is superior to the eigenface method in identifying faces, as it considers only the facial structure, which can be picked out even in a relatively dark room. Another advantage is that it is less affected by pose variations than the eigenface method, although it can be affected by large changes in facial expression.

2.3.2) Support Vector Machine (SVM)

This technique has been deemed one of the most effective for general pattern recognition in software, owing to its high generalization performance without the need for any additional knowledge. Given a set of points, each belonging to one of two classes, an SVM finds the hyperplane that keeps the largest possible fraction of points of the same class on the same side while maximizing the distance of either class from the hyperplane. This hyperplane is often known as the optimal separating hyperplane, as it minimizes the risk of misclassifying both the examples in the training set and the examples in the test set.

The main advantages of this approach are its ability to reject non-members and its robustness in coping with variations such as different group sizes or different group members. Its accuracy rates are also high: studies show that it can reach between 97% and 98.5% accuracy when there is no variation in group members within the same group size. One limitation associated with this technique is that accuracy drops whenever the group size is small, that is, fewer than 20 members, because of the limited training data; nonetheless, it is generally agreed that suitable results can be achieved with groups of about 50 members.
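
A hedged sketch of using an SVM to classify flattened face images (assuming scikit-learn and assuming that faces have already been detected, cropped to a common size and labelled; the data below is randomly generated purely as a placeholder):

    import numpy as np
    from sklearn.svm import SVC

    # Illustrative placeholder data: each row stands for a face resized to 32x32
    # and flattened into a 1024-element vector; y holds one identity label per row.
    X_train = np.random.rand(100, 32 * 32)
    y_train = np.random.randint(0, 5, size=100)

    # A linear SVM separates identities with maximum-margin hyperplanes.
    clf = SVC(kernel="linear")
    clf.fit(X_train, y_train)

    probe = np.random.rand(1, 32 * 32)       # a new flattened face image
    print("Predicted identity label:", clf.predict(probe)[0])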

2.4) Comparison of face databases

In the previous sections, a number of face recognition techniques have been discussed along with their descriptions and limitations. Although there are many face databases consisting of different images in the current world of face recognition, with variation in poses, face occlusions, gestures, illuminant colours and illumination angles, these images have not been annotated to a single common standard and are therefore not directly useful for evaluating the face detection and recognition techniques discussed.

For us to be able to compare the different face recognition techniques given, the databases should be comprehensive and systematic, with face images captured at different face angles, different illumination angles and the most commonly encountered colour illumination temperatures. The types of illumination that should be used when creating such a database include fluorescent, daylight, incandescent and skylight.

2.5) Knowledge gap

Although these face detection and recognition techniques cover almost every aspect of the face recognition field, some factors that have arisen in the recent past were not considered when the techniques were designed. This has left a wide knowledge gap, necessitating new research and the design of new techniques to solve the problem. One of the main gaps witnessed recently is the government and medical recommendation that people wear face masks. Given that most techniques work with the physiological structure of the face, it will clearly not be possible to recognize a face behind a mask, and even if it were possible, the accuracy would be low, since most of the physiological structures, such as the mouth, nose and lower face edges, are concealed behind the mask.

Therefore, a new study should be initiated to look at ways of focusing primarily on the upper part of the face for recognition, considering factors such as the ability of a technique to detect the eye structure and to provide consistent results both when a test subject wears goggles and when they do not.

2.6) DESIGN FRAMEWORK

2.6.1) Introduction

Face detection and recognition has become a major key player in every field in today’s world. With the need to improve security in every field, each sector is aiming to provide the most unique security feature, one possessed only by a single authorized individual. The feature that has been settled on is biometrics, where face recognition has proven to be the most reliable and most comfortable option. Face recognition systems have been used in various activities such as video surveillance, person verification and crime prevention.

Face recognition is a complex system to design, as it has to work in real time and cope with complicating effects such as imaging conditions, occlusions and illumination in order to achieve the desired results. The detection part is used to find the faces within a given area, while the recognition part classifies the detected faces according to their properties, as is commonly done in computer vision applications. The first step of this system is to acquire an image from the camera. The second step is to detect the face in the acquired image, while the third step is face recognition, which takes the detection’s output as its input. The final step is the identification of the recognized person. Below is a diagrammatic representation of the series of steps:

Image acquisition -> Face detection -> Face recognition -> Identification

Fig1. Face detection and recognition steps

2.6.2) Design of the face recognition system

A thorough study has shown that various methods can be used to create a face recognition system. In my project, however, I use a knowledge-based approach for face detection and a neural network approach for face recognition. The main reason for using this approach is to come up with a reliable and robust system. The diagram below represents my overall system design:

Input (image acquisition) -> Face detection -> Face training -> Face recognition

Fig2. Overall system design

2.6.2.1) Input part

This is the prerequisite, or basically the starting point, of my system. It is effectively an external feature, as it works with the camera in order to capture an image.

2.6.2.2) Face detection

This part takes the image from the camera and detects the area to be used for recognition. For a face to be detected, a number of operations are applied to the image, such as white balance correction, skin segmentation and morphological operations. The figure below shows the steps followed until detection is established.

Acquired image -> White balance correction -> Skin segmentation -> Morphological operations -> Face candidate search -> Facial feature extraction -> Extraction of the face image

Fig3. Face detection steps
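
As a hedged sketch of the skin segmentation and morphological operations named above (the colour space, threshold values, area cut-off and file name are my own illustrative assumptions, not values specified in this report):

    import cv2

    image = cv2.imread("frame.jpg")                       # illustrative input frame

    # Skin segmentation: threshold the chroma channels in the YCrCb colour space.
    ycrcb = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Morphological operations: remove small speckles and close holes in the mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_OPEN, kernel)
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_CLOSE, kernel)

    # Face candidate search: keep connected regions large enough to be a face.
    contours, _ = cv2.findContours(skin_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 1000]
    print("Face candidate regions:", candidates)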

2.6.2.3) Face training

The face identified by the face detection part is then trained by being given a unique numerical identification. Once the system is started and a face is detected, the system prompts the user for a unique numerical ID. Once the ID is entered, the trained images are stored in a dataset folder within the project’s main folder. The ID is later linked with a name that will be shown alongside the image.

2.6.2.4) Face recognition

Once the image has been trained and stored in the dataset, the face can be recognized any time it appears. Face recognition is performed through a series of steps, which include image matrix vectorization, database generation and, finally, classification.

Face image -> Histogram equalization -> Resize -> Vectorize -> Classifier -> Person’s name

Fig4: Face recognition steps

CHAPTER 3:

3.0) SYSTEM ANALYSIS AND DESIGN

This system is one of the biometric processes; its application and operation are easier compared to fingerprint, iris scanning and signature methods. The system uses a combination of two techniques: face detection, which is achieved through live facial scanning to generate a dataset of images in the system, and face recognition, which checks the fed image and gives an output. When a new image is fed into the system, it is first checked against the faces found in the system’s dataset.

I decided to use a knowledge-based approach for the face detection part and a neural network approach for the face recognition part. My face recognition system was structured as follows.

Input (image acquisition) -> Face detection (skin segmentation, face candidate search) -> Face training (face image preprocessing) -> Face recognition (classification, person’s name)

3.0.1. Input part

This is an important part of the face recognition system: the image is scanned and converted into digital data, which is saved in the system’s dataset for image processing computation. The saved images are then sent to the face detection algorithm.

3.0.2 Face detection part

This algorithm is implemented using MATLAB and is capable of testing many faces. In this part only the important features of the face are used, i.e. the eyebrows, eyes, mouth, nose, nose tip and cheeks; all of these assist in capturing the unique features of the candidate.

3.0.3 Face recognition part

The modified face image obtained is classified in order to identify the candidate, or person, in the dataset. Before classification the image is first preprocessed, for example by resizing it to the recommended number of pixels, equalizing the histogram of the rescaled image and vectorizing the image matrix. My face recognition algorithm is given below.

Face image -> Histogram equalization -> Resize to 32-by-32 pixels -> Vectorize -> Classifier -> Person name

Algorithm of face recognition
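
A hedged sketch of this preprocessing and classification chain (the 32-by-32 size comes from the pipeline above; the nearest-neighbour classifier, file names and stored database format are illustrative assumptions standing in for the classifier stage):

    import cv2
    import numpy as np

    def preprocess(path):
        """Histogram-equalize, resize to 32x32 and flatten a face image into a vector."""
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        equalized = cv2.equalizeHist(gray)
        small = cv2.resize(equalized, (32, 32))
        return small.flatten().astype(np.float32)

    # Database generation: one vector per trained face, keyed by the person's name.
    database = {
        "student_1": preprocess("dataset/User.1.1.jpg"),   # illustrative entries
        "student_2": preprocess("dataset/User.2.1.jpg"),
    }

    # Classification: assign the probe image to the nearest stored vector.
    probe = preprocess("probe.jpg")
    name = min(database, key=lambda k: np.linalg.norm(database[k] - probe))
    print("Recognized person:", name)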

3.0.4 Face recognition system

Detection and recognition are finally merged into the implementation of the face recognition system. Many experiments were performed on live images, and the combined face detection and recognition produced the best results. The network can classify a face correctly when one or both eyes are closed, when the eyebrows are moved, and when the face is smiling or showing teeth. The number of people in the dataset can also be increased, and the system will most probably still classify faces correctly.

3.1) FUNCTIONAL AND NON-FUNCTIONAL REQUIREMENTS

3.1.0.) Functional requirements

The system should be able to handle ‘jpeg’ images, as mentioned in the objectives.

3.1.1) Non-functional requirements

My project is intended to meet the following non-functional requirements: the face recognition software should be readily available on the internet for users to download and use at any time, and the program should be platform independent.

3.2) METHODOLOGY

In this research I employed a number of methods. The main method, however, was observation: I visited the university gate and keenly observed all the activities taking place there. I then drew up a list of objectives that I would satisfy at the end of the study. The main limitation of this method is that I did not get information from outside sources, only from my own judgement.

To solve the problem that lay with observation, I decided to also use interviews and questionnaires. I went around the university asking students about the challenges they faced with the current gate system. They laid down a number of challenges, which I cross-checked against those I had observed on my own; this strengthened my objectives and helped me come up with the main objective.

As a method of analysis, I employed data analysis, where I collect data, model it and analyse it by extracting insights which then support decision making. This project bases its analysis on two types of data analysis: exploratory analysis and diagnostic analysis. Exploratory analysis aims to explore and does not involve any prior knowledge. Diagnostic analysis, on the other hand, is a type of data analysis which seeks to help the researcher understand something, for instance, in our case, why there has been a rise in cases of insecurity in the country.

REFERENCES

1. L. Zhi-fang, Y. Zhi-sheng, A. K. Jain and W. Yun-qiong, 2003, “Face Detection And Facial Feature Extraction In Color Image”, Proc. The Fifth International Conference on Computational Intelligence and Multimedia Applications (ICCIMA’03), pp. 126-130, Xi’an, China.
2. C. Lin, 2005, “Face Detection By Color And Multilayer Feedforward Neural Network”,
Proc. 2005 IEEE International Conference on Information Acquisition, pp.518-523, Hong
Kong and Macau, China.
3. S. Kherchaoui and A. Houacine, 2010, “Face Detection Based On A Model Of The Skin
Color With Constraints And Template Matching”, Proc. 2010 International Conference
on Machine and Web Intelligence, pp. 469 - 472, Algiers, Algeria.
4. P. Peer, J. Kovac and F. Solina, 2003, “Robust Human Face Detection in Complicated
Color Images”, Proc. 2010 The 2nd IEEE International Conference on Information
Management and Engineering (ICIME), pp. 218 – 221, Chengdu, China.
5. M. Ş. Bayhan and M. Gökmen, 2008, “Scale And Pose Invariant Real-Time Face
Detection And Tracking”, Proc. 23rd International Symposium on Computer and
Information Sciences ISCIS '08, pp.1-6, Istanbul, Turkey.

6. C.C. Tsai, W.C. Cheng, J.S. Taur and C.W. Tao, 2006, “Face Detection Using Eigenface
And Neural Network”, Proc. 2006 IEEE International Conference on Systems, Man, and
Cybernetics, pp.4343-4347, Taipei, Taiwan.
7. X. Liu, G. Geng and X. Wang, 2010, “Automatically Face Detection Based On BP
Neural Network And Bayesian Decision”, Proc. 2010 Sixth International Conference on
Natural Computation (ICNC 2010), pp.1590-1594, Shandong, China.
8. M. Tayyab and M. F. Zafar, 2009, “Face Detection Using 2D-Discrete Cosine Transform
And Back Propagation Neural Network”, Proc. 2009 International Conference on
Emerging Technologies, pp.35-39, Islamabad, Pakistan
