
FACE RECOGNITION SYSTEM BASED ON FRONT-END

FACIAL FEATURE EXTRACTION USING FPGA

-----------------------------------

A Thesis Proposal

Presented to the
Faculty of the College of Engineering

De La Salle University

------------------------------------

In Partial Fulfillment of

The Requirements for the Degree of

Master of Science in

Electronics and Communications Engineering

-------------------------------------

By

PAO-CHUNG, CHAO

November 2009

Abstract

Face recognition has received significant attention in recent years, and much research has been done on face detection and recognition. However, only a few works have been implemented in hardware and tested in a real environment. In this research, face detection and facial feature extraction are developed in hardware on an FPGA. Well-known algorithms for moving object detection, face detection, pose estimation, illumination normalization, and facial feature extraction and recognition are selected and implemented. The performance and accuracy of the system are evaluated through software simulation and hardware implementation. For face recognition, the system requires only one image per person in the face database. Finally, the system will be tested in a real environment using a video camera.


TABLE OF CONTENTS

Chapter I: Introduction...........................................................................................................5

1.1 Background of the Study ...........................................................................................5

1.2 Problem Statement.....................................................................................................6

1.3 Objectives of the Study ..............................................................................................7

1.3.1 General Objectives..........................................................................................7

1.3.2 Specific Objectives ...........................................................................7

1.4 Significance of the Study ...........................................................................................8

1.5 Scope and Delimitations ............................................................................................9

1.5.1 Hardware Requirement..................................................................................9

1.5.2 Software Requirement....................................................................................9

1.5.3 Prospect of the System....................................................................................9

Chapter II: Literature Review..............................................................................................10

2.1 Reviews of Face Recognition Technology ..............................................................10

2.1.1 Issues on Face Recognition...........................................................................11

2.1.2 Face Detection ...............................................................................................12

2.1.3 Facial Feature Extraction and Recognition................................................18

2.1.4 Face Database................................................................................................21


2.2 A Review of Hardware Based Face Recognition ...................................................22

Chapter III: Theoretical Framework...................................................................................29

3.1 System Architecture.................................................................................................29

3.2 Image Processing Unit .............................................................................................30

3.3 Face Recognition Unit..............................................................................................31

Chapter IV: Methodology......................................................................................................32

4.1 Development .............................................................................................................32

4.1.1 Image Processing Unit ..................................................................................32

4.1.2 Face Recognition Unit...................................................................................37

4.1.3 Peripheral Interfacing ..................................................................................38

4.2 Experimentation and Evaluation ...........................................................................38

4.2.1 Software Simulation......................................................................................39

4.2.2 Hardware Evaluation ...................................................................................39

Chapter V: Summary.............................................................................................................40

References...............................................................................................................................41


LIST OF ABBREVIATIONS
FPGA (Field Programmable Gate Array)

RFID (Radio-Frequency Identification)

PDMs (Point Distribution Models)

PCA (Principal Component Analysis)

LDA (Linear Discriminant Analysis)

FA (Factor Analysis)

SVM (Support Vector Machine)

LDA/FLD (Linear Discriminant Analysis / Fisher Linear Discriminant)

ICA (Independent Component Analysis)

PDBNN (Probabilistic Decision-Based Neural Networks)

GA (Genetic Algorithm)

HMM (Hidden Markov Model)

CNN-SOM (Convolutional Neural Network with Self-Organizing Map)

ANN (Artificial Neural Networks)

mLBP (Modified Local Binary Pattern)

PCNN (Principal Component Neural Network)

ZISC (Zero Instruction Set Computer)

DSP (Digital Signal Processor)

LUT (Look Up Table)


Chapter I: Introduction

1.1 Background of the Study

For many reasons, such as authority management and the security concerns raised since the 9/11 terrorist attacks, sophisticated security systems have become more important in our daily life, especially for person identification. Nowadays, there are various ways to identify a person, and they can be classified into two categories: biometric and non-biometric methods [10]. On the one hand, non-biometric identification systems use a set of personal IDs and passwords to identify a person, such as ID cards (student/citizen), credit cards, system passwords, and other IDs used to represent a person. However, a non-biometric ID can be forged or taken by another person, and the system cannot verify the validity of the ID holder.

On the other hand, there are successful biometric recognition systems that use the iris, fingerprint, voice, palm geometry, signature, and facial features to identify a person, and these systems achieve high recognition accuracy [12]. However, some biometric systems, such as iris and fingerprint scanning, are intrusive: they require the cooperation of the user, and the equipment needed is usually expensive [12]. In contrast, in a face recognition system, cameras are installed at a surveillance site, and the system can capture all subjects in real time without being noticed. As a result, face recognition systems have received significant attention.


Moreover, much research has been done on face detection and recognition [10][11][12] in recent years, and these papers report high accuracy and excellent performance. However, most of the studies are based on certain preconditions, and only a few works have been tested in a real environment because of performance issues. In this research, software- and hardware-based face recognition techniques are studied. The face recognition system is developed as a hardware implementation and tested in a real environment.

1.2 Problem Statement

In the past few years, face recognition has attracted great interest from many researchers, and many well-known algorithms have been created in recent years [12]. A basic face recognition pipeline goes through image data retrieval, face detection, facial feature extraction, and face recognition. However, some studies focus on only part of this pipeline, such as face detection, face recognition, or algorithms dealing with specific issues. Moreover, in a real environment, images come from natural scenes instead of pre-collected images, and natural scenes introduce many varying factors, such as complex background noise, lighting conditions, pose variation, and others [10]; most studies deal with only some of these issues.

Furthermore, most experiments are performed by software simulation on a computer. However, for image sequences, the image processing tasks are computationally expensive [11], so such implementations cannot be tested in a real environment. As a result, a hardware implementation is conducted in this research.

1.3 Objectives of the Study

1.3.1 General Objectives

The main objective is to design a face recognition system in which the image processing unit is implemented in hardware on an FPGA. In addition, the system will be tested in a real environment.

1.3.2 Specific Objectives

1.3.2.1 Accelerate Computational Tasks

• FPGA Hardware Implementation

Most of the computational tasks in this system are implemented on an Altera FPGA (Field Programmable Gate Array) to achieve the performance required for real-time recognition.

• Implementation of Well-Known Algorithms

Efficient, accurate, and feasible algorithms are adopted and implemented in this system.

1.3.2.2 Reduce Laborious Tasks

• Use One Image per Person to Reduce Data Collection and Training Time

This system requires only one image per person, whereas many algorithms require extensive training data to reach high accuracy [13]. The advantage of using one image per person is not only to reduce the data collection effort, but also to reduce the training time while building the face database.

1.3.2.3 Design a Flexible Architecture for the Recognition System

• Replaceable Algorithms

In face recognition, new algorithms are presented every year. In this research, a flexible architecture is designed so that new algorithms can be adopted.

• Hardware/Software Co-design

A hardware and software co-design (Co-SW/HW) is implemented in this research. The advantage of Co-SW/HW is to combine the performance of hardware with the flexibility of software [14].

1.4 Significance of the Study

Nowadays, non-biometric identification systems are commonly used for school and building security. For example, students need an ID card, an RFID (Radio-Frequency Identification) card, to pass the entrance gate at De La Salle University. However, the ID card could be used by someone other than the person registered to it. To deal with this, security guards visually compare the picture on the screen with the person in front of them. In such situations, this system could help them do that work. Also, this system can easily be integrated with an existing face database in the school, because the system needs only one image per person in the database.

1.5 Scope and Delimitations

In this research, some hardware equipment and software tools are needed for the experiments. In addition, there are some limitations on the system's functions.

1.5.1 Hardware Requirement

In this research, the key equipment used comprises computers, an Altera Nios Stratix II FPGA kit, a CMOS image sensor, and a monitor.

1.5.2 Software Requirement

Several software tools are needed in this research: a C/C++ compiler and IDE, the Quartus II design software, the Nios II IDE, and ModelSim.

1.5.3 Prospect of the System

This system is not expected to solve all the issues in face recognition, such as extreme facial expressions, items worn on the face, large age discrepancies, extreme lighting conditions, or the absence of frontal face information. In the system, front-end facial feature extraction is implemented in hardware on the FPGA, and face recognition is software based; the recognition unit runs either on an embedded system on the FPGA or on a PC, depending on the capacity of the FPGA used.


Chapter II: Literature Review

2.1 Reviews of Face Recognition Technology

Several researchers have published reviews of face recognition technology. In a review by Andrea F. Abate et al. (2007) [12], the authors study recent face recognition trends in 2D imagery and 3D model-based algorithms. The article discusses the most important 2D and 3D face databases (see subsection 2.1.4), issues in face recognition (see subsection 2.1.1), old and new face recognition methods, and experimental results.

Next, in a survey by W. Zhao et al. (2003) [10], the authors conduct a contemporary survey of still-image-based and video-based face recognition research (see subsection 2.1.3), and list some available commercial face recognition systems. Besides the general issues in face recognition [12], the psychophysics and neuroscience issues relevant to face recognition are also discussed, for example whether face recognition is a dedicated process, whether face perception is holistic, and the ranking of the significance of facial features. The methods for the key steps prior to recognition, such as face detection and feature extraction, are also discussed. Finally, protocols for evaluating face recognition systems are described.

Next, in a survey by Erik Hjelmas and Boon Kee Low (2001) [11], the authors present a comprehensive and critical survey of face detection algorithms, which constitute the first step in face recognition systems. The technical approaches and performance of the algorithms are discussed in the paper (see subsection 2.1.2).

Next, in a survey by J. Kittler et al. (2005) [16], the authors study 3D imaging, modeling, and recognition approaches. The review covers 3D face modeling, 3D-to-3D and 3D-to-2D registration, 3D-based recognition, and 3D-assisted 2D recognition (see subsection 2.1.3.3).

Finally, subsection 2.2 presents some research on face recognition based on hardware implementations.

2.1.1 Issues on Face Recognition

• Illumination issue

Illumination variation can be caused by environmental lighting conditions, by the reflectance properties of skin (different races have different complexions), and by camera settings such as white balance and sensitivity. Because automatic face recognition can be seen as a pattern recognition problem [12], its result is degraded when illumination changes alter the image patterns dramatically.

• Pose issue

Pose variations, such as head pans and tilts, can influence the recognition result. Some studies use a multi-view approach, which needs images of different poses as training images. Another approach is 3D face recognition, which is more robust to pose variation [12][16].

• Occlusion issue

The occlusion problem is closely related to the expression issue. For some algorithms, such as appearance-based approaches, occlusion of the eyes and mouth dramatically reduces recognition accuracy. One way to deal with this issue is a local approach, which divides the face into different parts and finds the best match [12].

• Other issues

There are other issues, such as items worn on the face, complex backgrounds, and aging, which can make recognition more complicated.

2.1.2 Face Detection

2.1.2.1 Local/Feature Based

• Low-level analysis

In human vision, some visual features, such as edges, color, and motion, are derived in the inner retina prior to high-level visual activities in the brain.


- Edges

In early face detection, some approaches use edge features within the head outline to identify the face, using the shape and position information of the face. In an edge-detection-based approach, edges need to be labeled and matched to a face model in order to verify correct face detections.

- Gray levels

Dark areas within a face outline can also be used as features. Facial features such as eyebrows, pupils, and lips generally appear darker than other facial regions. The extraction of dark features is implemented by low-level gray-scale thresholding.

- Color

In face detection, many researchers use skin color to detect the face, and several skin color models have been built in various color spaces, such as RGB, HSI, YIQ, HSV, YUV, and others.

- Motion

For video sequences, moving objects can be located as possible face regions. Each such region is then analyzed by searching for facial features.

- Generalized measures

Facial features are symmetric, which can be used as a criterion to verify face detection. A symmetry operator and a symmetry magnitude map are built as a model to verify the measurement.

• Feature analysis

The results generated by low-level analysis are likely to be ambiguous. For example, after finding a possible face region using a skin color model or motion, the detected region still needs to be verified.

- Feature searching

Feature searching is based on the prominent facial features: the eyes, nose, and mouth. A pair of eyes is the feature most commonly used in facial feature searching; after finding a possible eye pair, the algorithm moves on to search for a nose or a mouth. In addition, a face sits on top of the shoulders, so a small area on top of a larger area may indicate a head.


- Constellation analysis

In face detection, some algorithms are able to deal with missing features and with problems caused by translation, rotation, scale, pose, eyeglasses, and complex backgrounds. For different poses, probabilistic face models based on multiple face appearances have been proposed, in which faces are classified under different viewpoints. A Bayesian network, reported by Yow and Cipolla on 100 lab scene images, is able to cope with small variations in scale, orientation, and viewpoint, the presence of eyeglasses, and missing features.

• Active shape models

Low edge contrast of the features makes the edge detection process problematic, owing to lighting conditions or skin color blending with the background color. For facial feature extraction, an active shape model interacts with local image features, such as edges and illumination, and gradually deforms to take the shape of the feature.

- Snakes

Active contours, or snakes, are commonly used to form the shape of a boundary. The evolution of a snake is achieved by minimizing an energy function E_snake = E_internal + E_external, where E_internal and E_external are the internal and external energy functions, respectively.

- Deformable templates

A deformable template uses 11 parameters to describe the salient features. The mechanism involves the external energies of valley, edge, peak, and image brightness (E_v, E_e, E_p, E_i), given by

E = E_v + E_e + E_p + E_i + E_internal

E_internal = (k_1/2)(x_e - x_c)^2 + (k_2/2)(p_1 + {r + b})^2 + (k_2/2)(p_2 + {r + b})^2 + (k_3/2)(b - 2r)^2

where the coefficients k_1, k_2, and k_3 control the course of the deformation of the template.

- Point distribution models (PDMs)

The contour of a PDM is a set of discrete labeled points. These points are parameterized over a training set containing objects of different sizes and poses. The model is denoted as

x = x̄ + P v

where x represents the point set of the PDM, x̄ is the mean of the points over the training set, P = [p_1 p_2 ... p_t] is the matrix of the t most significant variation vectors of the covariance of deviations, and v is the weight vector for each mode.
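To make the PDM formulation concrete, the following minimal C++ sketch generates a shape instance from a mean shape, a set of variation vectors, and mode weights. The data layout (one flattened coordinate vector per mode) and all dimensions are assumptions of this illustration, not details taken from a specific trained model.

```cpp
// Minimal sketch of generating a shape instance from a point distribution
// model: x = x_mean + P * v. Dimensions and values are illustrative only.
#include <vector>
#include <cstddef>

// Generates one shape vector (flattened point coordinates) from the mean
// shape, the variation vectors P (one flattened vector per mode), and the
// mode weights v.
std::vector<double> pdmInstance(const std::vector<double>& meanShape,
                                const std::vector<std::vector<double>>& P,
                                const std::vector<double>& v)
{
    std::vector<double> shape = meanShape;            // start from the mean shape
    for (std::size_t mode = 0; mode < P.size(); ++mode)
        for (std::size_t i = 0; i < shape.size(); ++i)
            shape[i] += P[mode][i] * v[mode];          // add weighted variation mode
    return shape;
}
```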


2.1.2.2 Holistic/Image Based

• Linear subspace methods

Subspaces of human face images can be extracted from the overall image space, and several approaches use such subspaces to represent a human face. The algorithms include principal component analysis (PCA), linear discriminant analysis (LDA), and factor analysis (FA). Some examples of eigenfaces (PCA) can be seen in Fig. 2.1.2.2-1; a short code sketch of the projection step shared by these methods is given at the end of this subsection.

Fig. 2.1.2.2-1 The number below each image indicates the principal component number, ordered according to eigenvalues.


• Neural networks

Neural networks have become popular in recent years for pattern recognition, including face detection and recognition. Besides the simple MLP network, other architectures have been proposed, such as modular architectures, committee/ensemble classification, complex learning algorithms, auto-associative and compression networks, and networks evolved or pruned with genetic algorithms.

• Statistical approaches

Unlike the other approaches, statistical approaches, such as systems based on information theory, support vector machines, and Bayes' decision rule, have also been proposed for face detection and recognition. Each statistical approach provides different training and learning algorithms.
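As noted under the linear subspace methods above, the following C++ sketch shows only the projection step common to eigenface-style approaches: a mean-subtracted image is projected onto precomputed basis vectors to obtain a low-dimensional representation. The basis (eigenfaces) is assumed to come from an offline training step that is not shown here.

```cpp
// Projection of a face image onto a precomputed linear subspace (e.g.,
// eigenfaces from PCA). Computing the basis itself is omitted; it is assumed
// to be available from offline training.
#include <vector>
#include <cstddef>

std::vector<double> projectOntoSubspace(const std::vector<double>& image,      // flattened pixels
                                        const std::vector<double>& meanImage,  // mean of training images
                                        const std::vector<std::vector<double>>& basis)
{
    std::vector<double> weights(basis.size(), 0.0);
    for (std::size_t k = 0; k < basis.size(); ++k)
        for (std::size_t p = 0; p < image.size(); ++p)
            weights[k] += basis[k][p] * (image[p] - meanImage[p]);  // w_k = e_k^T (x - mean)
    return weights;  // low-dimensional representation used for detection/recognition
}
```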

2.1.3 Facial Feature Extraction and Recognition

2.1.3.1 Categorization of Still Face Recognition Techniques

2.1.3.1.1 Holistic/Image Based

• Principal component analysis (PCA): eigenfaces, a direct application of PCA [Craw and Cameron 1996; Kirby and Sirovich 1990; Turk and Pentland 1991]
• Probabilistic eigenfaces: two-class problem with a probabilistic measure [Moghaddam and Pentland 1997]
• Fisherfaces/subspace LDA: FLD on eigenspace [Belhumeur et al. 1997; Swets and Weng 1996b; Zhao et al. 1998]
• SVM: two-class problem based on SVM [Phillips 1998]
• Evolution pursuit: enhanced GA learning [Liu and Wechsler 2000a]
• Feature lines: point-to-line distance based [Li and Lu 1999]
• ICA: ICA-based feature analysis [Bartlett et al. 1998]
• LDA/FLD: LDA/FLD on raw images [Etemad and Chellappa 1997]
• PDBNN: probabilistic decision-based NN [Lin et al. 1997]

2.1.3.1.2 Local/Feature Based

• Pure geometry methods: earlier methods [Kanade 1973; Kelly 1970]; recent methods [Cox et al. 1996; Manjunath et al. 1992]
• Dynamic link architecture: graph matching methods [Okada et al. 1998; Wiskott et al. 1997]
• Hidden Markov model: HMM methods [Nefian and Hayes 1998; Samaria 1994; Samaria and Young 1994]
• Convolutional neural network: SOM-learning-based CNN methods [Lawrence et al. 1997]

2.1.3.1.3 Hybrid Methods

• Modular eigenfaces: eigenfaces and eigenmodules [Pentland et al. 1994]
• Hybrid LFA: local feature method [Penev and Atick 1996]
• Shape-normalized: flexible appearance models [Lanitis et al. 1995]
• Component-based: face region and components [Huang et al. 2003]


2.1.3.2 Categorization of Video-based Face Recognition Techniques

• Still-image methods: basic methods [Turk and Pentland 1991; Lin et al. 1997; Moghaddam and Pentland 1997; Okada et al. 1998; Penev and Atick 1996; Wechsler et al. 1997; Wiskott et al. 1997]; tracking-enhanced methods [Edwards et al. 1998; McKenna and Gong 1997, 1998; Steffens et al. 1998]
• Multimodal methods: video- and audio-based [Bigun et al. 1998; Choudhury et al. 1999]
• Spatiotemporal methods: feature-trajectory-based [Li and Chellappa 2001; Li et al. 2001a]; video-to-video methods [Zhou et al. 2003]

2.1.3.3 3D Face Recognition

In 3D face recognition, the first step is to construct a 3D model by 3D sensing for facial biometrics. Visual reconstruction techniques fall into two categories, active and passive sensing, where passive sensing reconstructs facial appearance directly from images or video. After the raw facial information has been acquired, 3D face models, such as simple models, biomechanical models, and morphable models, are used to build the 3D face database. Automatic face registration is either 3D-to-3D or 3D-to-2D registration. Recognition then uses 3D shape only, 3D shape and texture, or 3D-shape-assisted 2D recognition.


2.1.4 Face Database

Refer to papers [10][12] for the details of 2D/3D face databases.

Table 2.1.4-1 Important 2D face databases

For each database, the entries below list the color format, image size, number of people, images per person, image conditions, availability, and URL.

- AR Face Database*: RGB, 576x768, 126 people (70M/67F), 26 images/person, conditions i,e,o,t, available: yes, http://rvl1.ecn.purdue.edu/~aleix/aleix_face_DB.html
- Richard's MIT database: RGB, 480x640, 154 people (82M/74F), 6 images/person, conditions p,o, available: yes
- CVL Database: RGB, 640x480, 114 people (108M/6F), 7 images/person, conditions p,e, available: yes, http://www.lrv.fri.uni-lj.si/facedb.html
- The Yale Face Database B*: grayscale, 640x480, 10 people, 576 images/person, conditions p,i, available: yes, http://cvc.yale.edu/projects/yalefacesB/yalefacesB.html
- The Yale Face Database*: grayscale, 320x243, 15 people (14M/1F), 11 images/person, conditions i,e, available: yes, http://cvc.yale.edu/projects/yalefaces/yalefaces.html
- PIE Database*: RGB, 640x486, 68 people, ~608 images/person, conditions p,i,e, available: yes, http://www.ri.cmu.edu/projects/project_418.html
- FERET*: grayscale/RGB, 256x384, 30,000 images, conditions p,i,e,i/o,t, available: yes, http://www.itl.nist.gov/iad/humanid/feret/

The '*' marks the most widely used databases. Image variations are indicated by (i) illumination, (p) pose, (e) expression, (o) occlusion, (i/o) indoor/outdoor conditions, and (t) time delay.


2.2 A Review of Hardware Based Face Recognition

Much research on face recognition is based on software implementations; only a few systems have been implemented in hardware. In recent years, FPGA-based face recognition has become popular among researchers because of the FPGA's high performance and reconfigurability. In a paper by T. Nakano et al. (2003) [17], coarse region segmentation implemented on an FPGA is used to detect face regions in an image. Video-rate processing at 64×64 pixels is achieved by the FPGA implementation, while template matching based on a dynamic-link architecture is performed on the PC. The architecture can be seen in Fig. 2.2-1.

Fig. 2.2-1 Face/object recognition system

Next, in a research by Xiaoguang Li and Shawki Areibi (2004) [18], face recognition uses an Artificial Neural Network (ANN) with a hardware update module (HUM). Both a processor and the HUM are implemented on an FPGA chip. The results report only the FPGA resource usage of the system and the time consumed by the feed-forward calculation, backward calculation, and weight updating.

Next, in a research by Y. M. Mustafah et al. (2007) [19], a smart camera is developed, based on a PicoBlaze microcontroller on an FPGA, to extract regions of interest (ROI). The high-resolution face image is then sent to a client PC to perform the face recognition task. A HW/SW co-design architecture is implemented in this work. Fig. 2.2-2 shows the system architecture and Fig. 2.2-3 shows face extraction at different resolutions.

Fig. 2.2-2 System Architecture

Fig. 2.2-3 Overall scene (a); ROI extracted from the scene at resolutions of 7MP (b), 5MP (c), 3MP (d), 1MP (e), and VGA (f).


Next, in a research by Mohan A. R. et al. (2008) [20], a Principal Component Neural Network (PCNN) face recognition system is designed on an FPGA, aimed at real-time video surveillance and high-speed access control.

Next, in a research by I. Sajid and M. M. Ahmed (2008) [21], the paper presents a fixed-point technique with software/hardware co-design, because the floating-point operations in eigenvalue algorithms are costly and complex in hardware. The system architecture can be seen in Fig. 2.2-4. The results show that the implementation of the Householder (HH) algorithm saves power at the cost of losing no more than 0.008 percent precision in the diagonal values of the computed matrix.

Fig. 2.2-4 High level description


Next, in chapter eight of a book contribution by Ginhac Dominique et al. (2007) [22], three different hardware platforms dedicated to face recognition are presented: an FPGA, a zero-instruction-set-computer (ZISC) chip, and a digital signal processor (DSP), the TMS320C62. The reported success rates for face tracking and identification are 92%, 85%, and 98.2% for the FPGA, ZISC, and DSP implementations, respectively, with processing speeds of 14, 25, and 4.8 images/s for an image size of 288 x 352.

Fig. 2.2-5 Implementation based on FPGA

Fig. 2.2-6 Implementation based on ZISC Chip

Fig. 2.2-7 Implementation based on DSP


Next, in a research by P. Rauschert et al. (2002) [23], an FPGA-based face recognition system using template matching in the frequency domain is presented. The hardware structure can be seen in Fig. 2.2-8. The results show an identification rate of around 90%, with FPGA utilization of 37.84% of the logic resources and 96.875% of the block RAMs on a Xilinx XCV1000 FPGA.

Fig. 2.2-8 Identification Flow

Next, in a research by Nasim Shams et al. (2006) [24], a face recognition system based on Daubechies wavelets is implemented on a 300K-gate Spartan-II FPGA, as can be seen in Fig. 2.2-9. The results show that the system achieves a high frame rate.

Fig. 2.2-9 Floorplan on XC2S300E-6PQ20 using Xilinx PACE.


Next, in a research by Gonzalo Carvajal et al. (2007) [25], an analog-VLSI neural network for face recognition based on subspace methods is proposed. As can be seen in Fig. 2.2-10, dimensionality reduction is based on principal component analysis (PCA) and linear discriminant analysis (LDA), whose coefficients can be either programmed or learned on-chip.

Fig. 2.2-10 System Architecture

Next, in a research by Rajkiran Gottumukkal and Vijayan K. Asari (2003) [26], a fast parallel architecture for face recognition based on composite PCA is proposed, as can be seen in Fig. 2.2-11. The architecture covers both feature extraction and classification. The results show that the system is able to identify a person from a database of 110 images of 10 individuals in approximately 4 ms.

Fig 2.2-11 Block diagram of (a) PE1, (b) PE2.


Next, in a research by G. F. Zaki et al. (2007) [14], a hardware/software co-design face recognition system on an FPGA is proposed. As can be seen in Fig. 2.2-12, the system uses a hardware acceleration module and a processor on the FPGA. In this work, face detection is performed against a uniform, simple background, and feature extraction uses PCA implemented in the hardware acceleration module. The results show that the proposed architecture is 1.7 times faster than the same system without hardware acceleration, and the recognition rate reaches 85.3%.

Fig. 2.2-12 System Block Diagram

To sum up, only a few face recognition systems have been developed in hardware. Among the hardware-based works reviewed here, some [20][21][23][24] are tested only with still-image input instead of video from natural scenes. In addition, the studies use different input image sources, and some of them do not report recognition accuracy.


Chapter III: Theoretical Framework

3.1 System Architecture

In this research, the face recognition system is composed of three units: an input unit, a processing unit, and an output unit. The processing unit handles image processing, face recognition, and data transmission, as can be seen in Fig. 3-1. A face image, i.e., raw data received from a CMOS image sensor, is input to the system; the image processing unit analyzes the input data and outputs the result, facial feature vectors, to the next stage. The face recognition unit then identifies the person by comparing the input vectors with the facial feature vectors in the face database and finally outputs the most likely identity to a monitor.

Fig. 3-1 System Architecture Block Diagram

The processing unit is implemented on an Altera Stratix II FPGA development kit. The CMOS image sensor is connected to the I/O pins, and the image sensor interface is implemented on the FPGA.


3.2 Image Processing Unit

The major computational tasks are in the image processing unit. The unit includes six procedures: moving object filtering, face detection, frontal face estimation, facial feature localization, illumination normalization, and facial feature extraction, as can be seen in Fig. 3-2. Taking advantage of video sequences, moving object detection and segmentation are used to extract foreground objects from the video images. After the foreground objects are extracted, complex background noise is eliminated, so less effort is needed in the next phase. For face detection, the foreground objects are analyzed to find face regions. Once face regions are found, face pose estimation is executed to find the frontal faces needed in the next procedure. Next, the key region of each face is extracted and passed to illumination normalization. Finally, facial features are extracted and transformed for recognition.

Fig. 3-2 Image Processing Unit Block Diagram


3.3 Face Recognition Unit

In the face recognition unit, there are two data stores and an identification unit, as can be seen in Fig. 3-3. The facial feature data are stored in a face database, and recently matched face vectors are kept in a look-up table (LUT). Once the image processing unit outputs the test face features, the face identification unit first compares the test image with the data in the LUT, which shortens the search time, before comparing it with the data in the database. A pre-computed threshold is used to verify the closest match between the test image and the images in the database or LUT.

Fig. 3-3 Face Recognition Unit Block Diagram


Chapter IV: Methodology

4.1 Development

In this research, the system is implemented as a software and hardware co-design. The image processing unit, the Nios MCU, and the other peripheral interfaces are developed in the VHDL hardware description language on the FPGA. For face recognition, the applications are developed in Altera's software development IDE and run on an embedded operating system on the Nios MCU.

4.1.1 Image Processing Unit

The image processing unit includes six procedures: moving object filtering, face detection, frontal face estimation, facial feature localization, illumination normalization, and facial feature extraction.

4.1.1.1 Moving Object Filter

For video sequence images, the foreground objects can be extracted before detecting the facial regions. Doing so eliminates background noise and narrows the processing area. There is a variety of object segmentation algorithms, both motion-based and spatio-temporal [2]. However, each algorithm has advantages and disadvantages [3], and each method deals with different issues, such as illumination, complex backgrounds, blending, and others. In order to deal with global illumination changes, a Kalman-filter-based background updating algorithm [1] is selected for this research. The algorithm is robust to gradual and sharp illumination changes; a result can be seen in Fig. 4.1.1.1-1, and an illustrative sketch of the per-pixel update step follows the figure.

Fig 4.1.1.1-1 Three background images and one foreground result
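The following simplified C++ sketch illustrates the general idea of Kalman-style per-pixel background updating used by the moving object filter; the gains and the foreground threshold are illustrative values chosen for this sketch, not the parameters of the algorithm in [1].

```cpp
// Simplified per-pixel background update in the spirit of a Kalman-filter
// based scheme: each background pixel is treated as a state corrected by the
// incoming frame, with a larger gain where the pixel is classified as
// background. Gains and threshold are illustrative, not those of [1].
#include <vector>
#include <cmath>
#include <cstdint>

struct BackgroundModel {
    std::vector<double> bg;        // current background estimate, one value per pixel
    double gainBg = 0.1;           // correction gain for background pixels
    double gainFg = 0.01;          // small gain so foreground leaks in only slowly
    double fgThreshold = 25.0;     // |frame - background| above this => foreground

    // Updates the model with a new grayscale frame and returns the foreground mask.
    std::vector<uint8_t> update(const std::vector<uint8_t>& frame) {
        if (bg.empty()) bg.assign(frame.begin(), frame.end());   // initialize from first frame
        std::vector<uint8_t> mask(frame.size(), 0);
        for (std::size_t i = 0; i < frame.size(); ++i) {
            double diff = frame[i] - bg[i];
            bool isFg = std::fabs(diff) > fgThreshold;
            mask[i] = isFg ? 255 : 0;
            bg[i] += (isFg ? gainFg : gainBg) * diff;            // Kalman-style correction step
        }
        return mask;
    }
};
```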

4.1.1.2 Face Detection

For face detection, two approaches, color-based and feature-based detection, are studied in this research. Many studies use color-based algorithms to detect the skin color of faces. However, skin-color models vary with natural conditions [4], such as the lighting, the white balance of the camera, and the skin color of different races. Component-based face detection [5] is adopted in this research, taking advantage of the moving object extraction, which removes the complex background from the image. The component-based face detection uses the facial features and their geometry (see Fig. 4.1.1.2-1) to verify a candidate face region; a sketch of this kind of geometric check is given after the figure.


Fig 4.1.1.2-1 Schematic template and geometry relations
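As mentioned above, the C++ sketch below illustrates the kind of geometric verification used in component-based face detection: candidate eye and mouth locations are accepted only if simple symmetry and proportion constraints hold. The tolerance values are hypothetical and are not taken from [5].

```cpp
// Illustrative geometric verification of a candidate face: the detected eye
// and mouth positions must satisfy rough symmetry and proportion constraints.
// All tolerances are hypothetical placeholders.
#include <cmath>

struct Point { double x, y; };

bool plausibleFaceGeometry(Point leftEye, Point rightEye, Point mouth)
{
    double eyeDist = std::hypot(rightEye.x - leftEye.x, rightEye.y - leftEye.y);
    if (eyeDist <= 0.0) return false;

    Point eyeMid{ (leftEye.x + rightEye.x) / 2.0, (leftEye.y + rightEye.y) / 2.0 };
    double mouthDrop = mouth.y - eyeMid.y;                  // mouth should lie below the eyes
    double lateralOffset = std::fabs(mouth.x - eyeMid.x);   // and roughly on the symmetry axis

    bool eyesLevel  = std::fabs(rightEye.y - leftEye.y) < 0.25 * eyeDist;
    bool mouthBelow = mouthDrop > 0.6 * eyeDist && mouthDrop < 1.6 * eyeDist;
    bool symmetric  = lateralOffset < 0.3 * eyeDist;
    return eyesLevel && mouthBelow && symmetric;
}
```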

4.1.1.3 Frontal Face Estimation

Once a face region is located, the face pose is estimated in order to find a frontal face. Instead of learning-based face pose estimation, an asymmetry- and geometry-based method [6] is used, since the human face follows a fairly regular geometry. As a result, geometric models for different face poses can be built beforehand, as can be seen in Fig. 4.1.1.3-1.

Fig 4.1.1.3-1 Face Pose Model


4.1.1.4 Facial Feature Localization

After a face is detected, a key region of the face is extracted from the face image. One extraction method uses the common facial proportions of the eyes and mouth (see Fig. 4.1.1.4-1); a simple sketch of this idea follows the figure.

Fig 4.1.1.4-1 Facial Feature Localization
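The C++ sketch below illustrates the proportion-based idea referenced above: a key region containing the eyes and mouth is cut from a detected face bounding box using fixed fractions of its width and height. The fractions are illustrative assumptions, not measured facial proportions from this study.

```cpp
// Proportion-based localization of the key facial region inside a detected
// face bounding box. The fractions used here are illustrative assumptions.
struct Rect { int x, y, w, h; };

Rect keyFacialRegion(const Rect& face)
{
    Rect region;
    region.x = face.x + static_cast<int>(0.15 * face.w);   // trim cheeks/ears
    region.w = static_cast<int>(0.70 * face.w);
    region.y = face.y + static_cast<int>(0.20 * face.h);   // start near the eyebrows
    region.h = static_cast<int>(0.65 * face.h);             // end just below the mouth
    return region;
}
```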

4.1.1.5 Illumination Normalization

In many face recognition systems, accuracy drops sharply under varying illumination conditions. In this research, illumination normalization is used instead of a large variety of training images to adapt to illumination. A modified local binary pattern (mLBP) is applied to compensate for illumination; mLBP preprocessing has shown a high recognition rate in eigenspace-based face recognition [7]. A result is shown in Fig. 4.1.1.5-1, and an illustrative sketch of LBP-style preprocessing follows the figure.



Fig. 4.1.1.5-1 A. Original image from the Yale B database B. Image pre-processed by mLBP
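For illustration, the following C++ sketch computes a standard 3x3 local binary pattern, which captures the basic illumination-insensitive property that mLBP builds on; the specific modification used by mLBP in [7] is not reproduced here.

```cpp
// Standard 3x3 local binary pattern: each pixel is replaced by an 8-bit code
// that depends only on the ordering of its neighborhood, making the result
// largely invariant to monotonic illumination changes.
#include <vector>
#include <cstdint>
#include <cstddef>

std::vector<uint8_t> lbpImage(const std::vector<uint8_t>& img, int width, int height)
{
    std::vector<uint8_t> out(img.size(), 0);
    const int dx[8] = {-1, 0, 1, 1, 1, 0, -1, -1};
    const int dy[8] = {-1, -1, -1, 0, 1, 1, 1, 0};
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            uint8_t center = img[y * width + x];
            uint8_t code = 0;
            for (int n = 0; n < 8; ++n) {
                uint8_t neighbor = img[(y + dy[n]) * width + (x + dx[n])];
                code = static_cast<uint8_t>((code << 1) | (neighbor >= center ? 1 : 0));
            }
            out[y * width + x] = code;   // illumination-insensitive texture code
        }
    }
    return out;
}
```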

4.1.1.6 Facial Feature Extraction

Eigenface-based recognition, i.e., Principal Component Analysis (PCA), has been improved effectively for face recognition by many researchers. In addition, a modular PCA algorithm has been implemented successfully on an FPGA with a multi-lane architecture [8]. In this research, modular-PCA-based recognition and mLBP illumination normalization are combined to perform one-image-per-person face recognition.

Eq. 4.1.1.6-1:  I_ijk(m, n) = I_i( (L/N)(j − 1) + m, (L/N)(k − 1) + n ),  ∀ i, j, k, m, n

where i = 1 to M (M is the number of images in the training set), j, k = 1 to N (N^2 is the number of sub-images), and m, n = 1 to L/N.

Eq. 4.1.1.6-2:  A = (1 / (M N^2)) Σ_{i=1}^{M} Σ_{j=1}^{N} Σ_{k=1}^{N} I_ijk

Eq. 4.1.1.6-3:  W_test,jkr = E_r^T ( I_test,jk − A ),  ∀ j, k, r


Each L×L face image is divided into N^2 smaller blocks, so the size of each sub-image is L^2 / N^2. The sub-images I_ijk are expressed as a function of the original image I_i (see Eq. 4.1.1.6-1). The mean image of all training sub-images is computed as in Eq. 4.1.1.6-2. The weight vector of the test image is then computed for each sub-image using the eigenvectors, as in Eq. 4.1.1.6-3.
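The following C++ sketch illustrates the sub-image decomposition of Eq. 4.1.1.6-1 and the mean sub-image of Eq. 4.1.1.6-2 for the modular PCA scheme. The eigenvector computation and the projection of Eq. 4.1.1.6-3 are omitted, and the row-major image layout is an assumption of the sketch.

```cpp
// Sub-image decomposition (Eq. 4.1.1.6-1) and mean sub-image (Eq. 4.1.1.6-2)
// for modular PCA: each L x L face is split into N*N blocks of size (L/N) x (L/N).
#include <vector>
#include <cstddef>

using Image = std::vector<double>;   // flattened L x L image, row-major

// Extracts block (j, k) of size B x B (B = L/N) from image I, 0-based indices.
Image subImage(const Image& I, int L, int B, int j, int k)
{
    Image block(static_cast<std::size_t>(B) * B);
    for (int m = 0; m < B; ++m)
        for (int n = 0; n < B; ++n)
            block[m * B + n] = I[(j * B + m) * L + (k * B + n)];
    return block;
}

// Mean sub-image A over M training images, each split into N*N blocks.
Image meanSubImage(const std::vector<Image>& trainingSet, int L, int N)
{
    int B = L / N;
    Image A(static_cast<std::size_t>(B) * B, 0.0);
    for (const Image& I : trainingSet)
        for (int j = 0; j < N; ++j)
            for (int k = 0; k < N; ++k) {
                Image block = subImage(I, L, B, j, k);
                for (std::size_t p = 0; p < A.size(); ++p) A[p] += block[p];
            }
    double count = static_cast<double>(trainingSet.size()) * N * N;
    for (double& v : A) v /= count;   // A = (1 / (M * N^2)) * sum over all sub-images
    return A;
}
```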

4.1.2 Face Recognition Unit

Every test image from the image processing unit is first compared with the face entries in the look-up table before the face database is searched.

4.1.2.1 Face Identification

D_ijk (see Eq. 4.2.1-1) represents the distance between the test image's weight vector and the weight vector of the i-th database image for sub-image (j, k). The minimum value of D_i (see Eq. 4.2.1-2) means that the i-th face image in the database is the closest match to the test face image.

Eq. 4.2.1-1:  D_ijk = (1 / M') Σ_{r=1}^{M'} | W_test,jkr − W_ijkr |,  ∀ i, j, k

Eq. 4.2.1-2:  D_i = (1 / N^2) Σ_{j=1}^{N} Σ_{k=1}^{N} D_ijk,  ∀ i

The minimum distance is compared with a pre-defined threshold value; if the minimum distance exceeds the threshold, the test image is rejected as not matching any face in the database.
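A minimal C++ sketch of this matching step is given below: it averages the absolute weight differences as in Eq. 4.2.1-1 and Eq. 4.2.1-2 and rejects the best match if its distance exceeds the threshold. The weight storage layout is an assumption of the sketch.

```cpp
// Matching step of Eq. 4.2.1-1 and Eq. 4.2.1-2: mean absolute difference of
// weight vectors, averaged over the N*N sub-images, with threshold rejection.
#include <vector>
#include <cmath>
#include <limits>
#include <cstddef>

using Weights = std::vector<std::vector<double>>;  // [sub-image][eigen coefficient]

double distance(const Weights& test, const Weights& entry)
{
    double total = 0.0;
    for (std::size_t jk = 0; jk < test.size(); ++jk) {
        double d = 0.0;
        for (std::size_t r = 0; r < test[jk].size(); ++r)
            d += std::fabs(test[jk][r] - entry[jk][r]);        // Eq. 4.2.1-1, sum over r
        total += d / test[jk].size();                          // average over M' coefficients
    }
    return total / test.size();                                // Eq. 4.2.1-2, average over N^2 blocks
}

// Returns the index of the best match, or -1 if the face is rejected.
int identify(const Weights& test, const std::vector<Weights>& database, double threshold)
{
    int best = -1;
    double bestDist = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < database.size(); ++i) {
        double d = distance(test, database[i]);
        if (d < bestDist) { bestDist = d; best = static_cast<int>(i); }
    }
    return (bestDist <= threshold) ? best : -1;                // reject if above threshold
}
```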


4.1.2.2 Look-Up Table

In order to shorten the search time in the face database, the ten most recently matched test images are stored in a look-up table (LUT). The LUT stores the distances of the matched images and indexes linking back to the face database.
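The C++ sketch below illustrates one possible form of such a look-up table: a small list of the ten most recently matched entries that is consulted before the full database search. The capacity of ten follows the text above; the drop-the-oldest eviction policy is an assumption of this sketch.

```cpp
// Small cache of recently matched faces, consulted before the full database
// search. Capacity of ten follows the text; eviction policy is an assumption.
#include <deque>
#include <cstddef>

struct LutEntry {
    int    databaseIndex;   // link back to the face database
    double distance;        // distance recorded when the entry was last matched
};

class RecentMatchLut {
public:
    explicit RecentMatchLut(std::size_t capacity = 10) : capacity_(capacity) {}

    void record(int databaseIndex, double distance) {
        entries_.push_front({databaseIndex, distance});
        if (entries_.size() > capacity_) entries_.pop_back();   // evict the oldest hit
    }

    const std::deque<LutEntry>& entries() const { return entries_; }

private:
    std::size_t capacity_;
    std::deque<LutEntry> entries_;
};
```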

4.1.2.3 Face Database

One frontal face image per person is required by the system. The D_i and personal ID of the training face images will be stored in the face database.

4.1.3 Peripheral Interfacing

4.1.3.1 Interface to Image CMOS Sensor

Raw data are received from the CMOS image sensor in order to reduce the computational load. The sensor interface, e.g., USB, I2C, or a CAN bus, depends on the manufacturer's model, and the appropriate interface is implemented on the FPGA.

4.1.3.2 Interface to Monitor

A VGA interface is developed for communication between the FPGA and a monitor.

4.2 Experimentation and Evaluation

In this research, the selected or newly created algorithms are tested before the hardware evaluation. In the early experiments, software programs are created for each procedure to verify its effect. Hardware evaluation is then carried out on an FPGA.


4.2.1 Software Simulation

The proposed system is first tested on a PC by means of software programming before the hardware experiment is conducted. The algorithms are implemented in the C/C++ programming language. In this stage, the emphasis is on recognition accuracy rather than on system performance.

4.2.2 Hardware Evaluation

For hardware evaluation, an Altera Nios Stratix II FPGA kit [9] (see Fig. 4.2.2-1), a monitor, and a CMOS image sensor are needed. VHDL and Altera Quartus II are used for hardware design and simulation. The system evaluation covers real-scene experiments, recognition accuracy, and recognition performance, and the corresponding criteria, such as variations of the source images, recognition rate, and processing frames per second, are measured.

Fig. 4.2.2-1 Altera NIOS StratixII


Chapter V: Summary

In a face recognition system, face detection and facial feature extraction are key steps before face identification. For video-based face recognition, foreground and background objects can be separated by detecting moving objects, and eliminating the complex background facilitates face detection and recognition. However, face recognition still has issues, such as illumination and pose variation, that must be resolved to improve accuracy. To deal with these problems, several algorithms are adopted in this research, such as illumination normalization and pose estimation. In addition, face recognition in this system requires only one image per person in the face database, since gathering multiple images under various poses or illumination conditions is laborious. Furthermore, a flexible architecture is designed in order to integrate the algorithms selected for this system. The system will be simulated in software, implemented in hardware on an FPGA, and finally tested in a real environment.


References
[1] Stefano Messelodi, Carla Maria Modena, Nicola Segata, and Michele Zanin (2005), Kalman

filter based background updating algorithm robust to sharp illumination changes, Lecture

notes in computer science, 2005 - Springer

[2] Dengsheng Zhang and Guojun Lu (2001), Segmentation of Moving Objects in Image

Sequence: A Review, Circuits, Systems, and Signal Processing, 2001 - Springer

[3] D. Gutchess, M. Trajković, E. Cohen-Solal, D. Lyons, and A. K. Jain (2001), A Background Model Initialization Algorithm for Video Surveillance, Conference on Computer Vision, Vancouver, Canada, 2001, IEEE Computer Society

[4] Shinjiro Kawato and Jun Ohya (2000), Automatic Skin-color Distribution Extraction for Face

Detection and Tracking, 2000 IEEE

[5] Kyoung-Mi Lee (2007), Component-based face detection and verification, 2007 Elsevier B.V.

[6] Yuxiao Hu, Longbin Chen, Yi Zhou, Hongjiang Zhang (2004), Estimating Face Pose by

Facial Asymmetry and Geometry, 2004 IEEE

[7] Javier Ruiz-del-Solar, Julio Quinteros (2008), Illumination compensation and normalization in

eigenspace-based face recognition: A comparative study of different pre-processing

approaches, 2008 Elsevier B.V.

[8] Rajkiran Gottumukkal, Hau T. Ngo, Vijayan K. Asari (2005), Multi-lane architecture for

eigenface based real-time face recognition, 2005 Elsevier B.V

[9] ALTERA, Nios Development Board Reference Manual, Stratix II Edition, Nov 1, 2009

http://www.altera.com

[10] W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld (2003), Face Recognition: A Literature Survey, ACM Computing Surveys, Vol. 35, No. 4, December 2003,


pp. 399–458.

[11] Erik Hjelmas and Boon Kee Low (2001), Face Detection: A Survey, Computer Vision and Image Understanding, Academic Press, 2001

[12] Andrea F. Abate, Michele Nappi, Daniel Riccio, Gabriele Sabatino (2007), 2D and 3D face

recognition: A survey, 2007 Elsevier B.V.

[13] Xiaoyang Tan, Songcan Chen, Zhi-Hua Zhou, Fuyan Zhang (2006), Face recognition from a single image per person: A survey, 2006 Elsevier B.V.

[14] G. F. Zaki, R. A. Girgis, W. W. Moussa, W. R. Gabran (2007), Using the Hardware/Software

Co-design Methodology to Implement an Embedded Face Recognition/Verification System

on an FPGA, 2007 IEEE

[15] Elham Bagherian, Rahmita Wirza O.K. Rahmat (2008), Facial feature extraction for face

recognition: a review, 2008 IEEE

[16] J. Kittler, A. Hilton, M. Hamouz, J. Illingworth (2005), 3D Assisted Face Recognition: A

Survey of 3D Imaging, Modelling and Recognition Approaches, 2005 IEEE

[17] T. Nakano, T. Morie, and A. Iwata (2003), A Face/Object Recognition System Using FPGA

Implementation of Coarse Region Segmentation, 2003 SICE

[18] Xiaoguang Li and Shawki Areibi (2004), A Hardware/Software Co-design Approach for Face Recognition, 2004 IEEE

[19] Y. M. Mustafah, A. W. Azman, A. Bigdeli, B. C. Lovell (2007), An Automated Face Recognition System for Intelligence Surveillance: Smart Camera Recognizing Faces in the Crowd, 2007 IEEE

[20] A. R. Mohan, N. Sudha, and Pramod K. Meher (2008), An Embedded Face Recognition System on a VLSI Array Architecture and its FPGA Implementation, 2008 IEEE

[21] I. Sajid, M. M. Ahmed, I. Taj, M. Humayun, and F. Hameed (2008), Design of High

Performance FPGA Based Face Recognition System, PIERS Proceedings, Cambridge, USA,


July 2-6, 2008

[22] Ginhac Dominique, Yang Fan and Paindavoine Michel (2007), Design, Implementation and

Evaluation of Hardware Vision Systems dedicated to Real-Time Face Recognition, Face

Recognition, Book edited by: Kresimir Delac and Mislav Grgic, ISBN 978-3-902613-03-5,

pp.558, I-Tech, Vienna, Austria, June 2007

[23] P. Rauschert, A. Kummert, M. Krips, and Y. Klimets (2002), Face Recognition By Means of

Template Matching in Frequency Domain – A Hardware Based Approach, 2002 IEEE

[24] Nasim Shams, Iraj Hosseini, Mohammad Sadegh Sadri, Ehsan Azarnasab (2006), Low Cost FPGA-Based Highly Accurate Face Recognition System Using Combined Wavelets with Subspace Methods, 2006 IEEE

[25] Gonzalo Carvajal, Waldo Valenzuela and Miguel Figueroa (2007), Subspace-Based Face

Recognition in Analog VLSI, Nov 1, 2009, books.nips.cc

[26] Rajkiran Gottumukkal, Vijayan K. Asari (2003), System Level Design of Real Time Face

Recognition Architecture Based on Composite PCA, 2003 ACM
