2015 Online International Conference on Green Engineering and Technologies (IC-GET)

Face Detection System Using FPGA


Saumyarup Rana, M Prasanna Deepu, Sivanantham S* and Sivasankaran K
School of Electronics Engineering
VIT University
Vellore - 632014, India
ssivanantham@vit.ac.in

Abstract— In recent times, face recognition has been developing rapidly owing to its many applications in human interfaces, biometrics, robotics, security, surveillance and other commercial uses. Biometric-based technologies include identification based on biological characteristics (fingerprint, retina scanning) and behavioural traits (mood detection). Among the available recognition sensors, facial image recognition is highly recommended because of its ease of use (cameras), reasonable cost, precise measurement and other features. In this paper, we read various facial images and store them. Image test benches are read in our Verilog program and stored in memories. We compare images bit by bit and check whether there is any mismatch. If the image is matched we display “Match found”, otherwise “No match found”. As a further step, we extract specific features of the face such as the lip or eye portion. We subtract test images from stored images and compare the subtracted value with a threshold limit for detection.

Keywords— Biometrics, Verilog, threshold limit, FPGA, image processing

I. INTRODUCTION

Face detection finds its applications mainly in video surveillance, criminal investigation, teleconferencing, robotics and similar areas, and it removes the need to allocate time for a thumb impression. Other biometric sensors, such as retina scanners, are considered harmful when used for long periods of time. Detecting an entire face is a challenge and requires complex algorithms [1][2][3]. Instead of considering the whole face, it is advisable to extract portions of the face; this is what we call hemi-facial extraction [5], for example the lip region or the eye region. In our project we use a simple way of creating a face detector which compares images and tries to identify a person across various expressions. It can also differentiate between two different persons [6][7].

II. METHODOLOGY

Through this work we read an image in MATLAB and crop the desired feature of the image, such as the lip or eye portion of the face [4]. After extracting the desired portion, we convert the image into grey scale. We then generate the histogram equivalence of the image (pixel-to-intensity equivalence) and, in the next step, convert the histogram values into binary. The binary data is then read into the Verilog HDL program, and the same procedure is followed for the other images. These images are compared through our Verilog program: the image pixel difference is calculated and a threshold limit is set. If the image falls within this value we detect the image and display that a match is found [8].

Fig. 1. Block diagram of the face detection system.

The algorithm for face detection, covering both the software and the hardware implementation, is described as follows:

A. Software implementation:
Requirement: real-time images of various persons.
• Read the captured images of the various persons.
• Crop and extract the lip region from the images obtained.
• Convert the cropped image into gray-scale.
• Generate the histogram equivalence.
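As an illustration, these pre-processing steps can be sketched in MATLAB as follows; the file name and crop rectangle below are placeholders rather than the actual values used in our experiments.

    % Read a captured face image and extract the lip region (placeholder values)
    img  = imread('person1_smile.jpg');    % captured image of a person
    lip  = imcrop(img, [120 200 80 40]);   % assumed [x y width height] of the lip region
    gray = rgb2gray(lip);                  % convert the cropped image to gray-scale
    h    = imhist(gray);                   % histogram equivalence (256 bins)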
B. Hardware Implementation:
• Create binary values of the data.
• Copy the binary test-bench file into the Verilog program.
• Repeat for all k:
• Perform an XOR operation between the two binary image files.
• Add the differences and store them in a variable.
• End for, until the size of k ≤ 255.
• Check whether the value of the variable is less than the set threshold limit.
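The comparison itself runs in Verilog under ModelSim; purely as an illustration, the MATLAB fragment below models one reading of this XOR-and-accumulate check against the threshold. The file names are placeholders; the threshold of 5500 follows Table 1.

    % Software model of the hardware comparison step (not the Verilog code itself)
    threshold = 5500;                                        % threshold limit from Table 1
    hStored = imhist(rgb2gray(imread('stored_lip.png')));    % reference lip histogram
    hTest   = imhist(rgb2gray(imread('test_lip.png')));      % test lip histogram
    diffSum = sum(bitxor(uint32(hStored), uint32(hTest)));   % XOR each binary word, add the results
    if diffSum < threshold
        disp('Match found');
    else
        disp('No match found');
    end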

Below we display the real-time images of a person with different expressions and of another person. We extract the lip region of these images for comparison.

Fig. 2. (a)-(d) Face of a person with different expressions; (e) face of another person.

Fig. 3. Extracted lips from the original faces for comparison.

A. Conversion of coloured RGB image to gray-scale image
To reduce the complexity of operating on a coloured image, we convert the image into black and white, in other words gray-scale [9]. After conversion the bit-map image comprises 256 shades of gray, from 0 to 255, where 0 indicates black, 255 indicates white and 127 is a mid-gray.
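For instance, a quick MATLAB check of this conversion (the file name is assumed):

    rgb  = imread('face_a.jpg');     % original coloured (RGB) image
    gray = rgb2gray(rgb);            % single 8-bit channel with 0-255 shades of gray
    [min(gray(:)) max(gray(:))]      % values lie between 0 (black) and 255 (white)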
Fig. 4. The histogram plots of the extracted lips of Fig. 2.

B. Image Histogram

imhist(I) is a MATLAB operation which is used in our project. It generates the histogram associated with the image I and displays it on screen [10]. The type of image determines the number of bins used by the histogram: a gray-scale picture has 256 bins, whereas a binary picture uses only 2 bins. imhist(I) can thus be used to plot the number of pixels versus intensity.
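A small illustration of this step (the lip image file name is a placeholder):

    lipGray = rgb2gray(imread('lip_a.png'));   % gray-scale lip image
    [counts, bins] = imhist(lipGray);          % counts: 256x1 vector, bins: 0..255
    imhist(lipGray);                           % plots the number of pixels vs. intensity
    sum(counts) == numel(lipGray)              % every pixel falls into exactly one bin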

Fig. 5. The histogram plot of the lip obtained from the second person.

From the above plots we can already see how different images generate different pixel intensities; the second person's histogram differs greatly from the first. The decimal-to-binary-vector conversion of the imhist(I) output then translates the histogram equivalence into binary data (256 words for a gray-scale image).
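As an illustration, this conversion and the hand-off to the Verilog test bench can be sketched in MATLAB as follows; the output file name and the 16-bit word width are assumptions, not the exact format used.

    counts = imhist(rgb2gray(imread('lip_a.png')));   % 256 decimal bin counts
    words  = dec2bin(counts, 16);                     % 256 x 16 char array of binary words
    fid = fopen('test_lip.txt', 'w');                 % file later read by the Verilog test bench
    for k = 1:size(words, 1)
        fprintf(fid, '%s\n', words(k, :));            % one binary word per line
    end
    fclose(fid);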
III. RESULTS AND DISCUSSION

TABLE 1. Detection of images when they are close to each other, i.e., within the given threshold value

    Image       Value of difference    Threshold limit    Detection result
    Fig. 2(a)   0                      5500               Match found
    Fig. 2(b)   4488                   5500               Match found
    Fig. 2(c)   5264                   5500               Match found
    Fig. 2(d)   5848                   5500               No match found
    Fig. 2(e)   8744                   5500               No match found

It is seen that our algorithm detects only the first three images given in Fig. 2. The fourth image, Fig. 2(d), is of the same person but with a very different expression, and the pixel difference value obtained for this face exceeds the threshold. The final image, Fig. 2(e), is of an entirely different person; hence the difference value is large and our program does not identify him as the first person.

Fig. 6. Simulation window in ModelSim software.

Fig. 7. Displaying "No match found" in the console window.

Fig. 8. Displaying "Match found" in the console window.

IV. CONCLUSION

This paper described the implementation of a face recognition system that captures facial images and recognizes a face. The image extraction and processing are performed in MATLAB Simulink R2014 software, and the image comparison is performed using ModelSim software (Verilog program). Our program is capable of detecting slight expressional changes in the test image. Further, it differentiates between two persons after reading their lip images. Since dissimilar persons may have similar lip features, in our future work we intend to compare the eye region of various persons along with the lip region.

REFERENCES

[1] D. Nguyen, D. Halupka, P. Aarabi, and A. Sheikholeslami, "Real-time face detection and lip feature extraction using field-programmable gate arrays," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 36, no. 4, pp. 902-912, Aug. 2006.

[2] D. Nguyen, D. Halupka, P. Aarabi, and A. Sheikholeslami, "Real-time face detection and lip feature extraction using field-programmable gate arrays," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 36, no. 4, pp. 902-912, Aug. 2006.


[3] S. Sivanantham, N. Nitin Paul, and R. Suraj Iyer, "Object tracking algorithm implementation for security applications," Far East Journal of Electronics and Communications, vol. 16, no. 1, 2016.

[4] Y. Lee and S.-B. Ko, "FPGA implementation of a face detector using neural networks," in Canadian Conference on Electrical and Computer Engineering (CCECE '06), pp. 1914-1917, May 2006.

[5] C. He, A. Papakonstantinou, and D. Chen, "A novel SoC architecture on FPGA for ultra fast face detection," in IEEE International Conference on Computer Design (ICCD 2009), pp. 412-418, Oct. 2009.

[6] G. Potamianos and C. Neti, "Improved ROI and within-frame discriminant features for lipreading," in Proc. Int. Conf. Image Processing, Thessaloniki, Greece, 2001, pp. 250-253.

[7] G. Potamianos, H. P. Graf, and E. Cosatto, "An image transform approach for HMM based automatic lipreading," in Proc. Int. Conf. Image Processing, Chicago, IL, 1998, pp. 173-177.

[8] Yang and A. Waibel, "A real-time face tracker," in Proc. 3rd IEEE Workshop on Applications of Computer Vision, Sarasota, FL, 1996, pp. 142-147.

[9] M. Hunke and A. Waibel, "Face locating and tracking for human-computer interaction," in Proc. 28th Asilomar Conf. Signals, Systems and Computers, Pacific Grove, CA, 1994, pp. 1277-1281.

[10] R. Hsu, M. Abdel-Mottaleb, and A. K. Jain, "Face detection in color images," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 5, pp. 696-706, May 2002.
