
CHAPTER 1

INTRODUCTION

1.1 GENERAL INTRODUCTION

The face is a complex, multidimensional structure, and recognizing it requires good
computing techniques. The face is our primary and first focus of attention in social life,
playing an important role in establishing an individual's identity. We can recognize a large
number of faces learned throughout our lifespan and identify familiar faces at a glance,
even after years.

Faces may vary due to aging and to distractions such as a beard, glasses, or a change of
hairstyle. Facial features are extracted and implemented through efficient algorithms, and
some modifications are made to improve the existing algorithm models.

Computers that detect and recognize faces can be applied to a wide variety of practical
applications, including criminal identification, security systems, and identity verification.
Face detection and recognition are used in many places nowadays, such as websites hosting
images and social networking sites, and can be achieved using technologies from computer
science.

Features extracted from a face are processed and compared with similarly processed faces
stored in a database. If a face is recognized, it is reported as known (or the system may show
the most similar face in the database); otherwise it is reported as unknown. In a surveillance
system, if an unknown face appears more than once, it is stored in the database for further
recognition. These steps are very useful in criminal identification. In general, face
recognition techniques can be divided into two groups based on the face representation they
use: appearance-based techniques, which use holistic texture features and are applied either
to the whole face or to specific regions of a face

FACE RECOGNITION SYSTEM 1


image, and feature-based techniques, which use geometric facial features (mouth, eyes,
brows, cheeks, etc.) and the geometric relationships between them.

The proposed technique is based on coding and decoding of face images, with emphasis on
the significant local and global features of the face. In this method, the relevant information
in a face image is extracted as features, encoded, and then compared with a database of face
models before being classified as known or unknown.
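The feature extraction, encoding, and comparison steps just described can be sketched with a PCA (eigenface) pipeline. The following is an illustrative Python/NumPy sketch, not the report's MATLAB implementation; the toy 8x8 "faces" and function names are hypothetical.

```python
import numpy as np

def train_eigenfaces(faces, k):
    """Learn a k-dimensional eigenface basis: the mean face plus the top-k
    principal components of the training set (one flattened image per row)."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:k]

def encode(image, mean, eigenfaces):
    """Feature extraction: project a flattened image onto the eigenface basis."""
    return eigenfaces @ (image - mean)

# Toy gallery: 6 random stand-ins for flattened 8x8 face images.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(6, 64))
mean, basis = train_eigenfaces(gallery, k=3)
codes = np.array([encode(f, mean, basis) for f in gallery])

# A probe identical to gallery face 2 lands nearest to code 2.
probe = encode(gallery[2], mean, basis)
dists = np.linalg.norm(codes - probe, axis=1)
print(int(np.argmin(dists)))  # → 2
```

Here the gallery codes would be computed once at enrollment time, and only the probe needs to be encoded at recognition time.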

1.2 OBJECTIVE:

 The goal is to implement a system (model) that identifies a particular face and
distinguishes it from a large number of stored faces, allowing for some real-time
variations as well.

 To propose a method that is independent of any judgment of individual features, such
as open or closed eyes, different facial expressions, and the presence or absence of
glasses.

1.3 SCOPE:

The face recognition system will be capable of:

 Application of real-time image capturing.

 Detecting human faces in digital images.

 Determining the threshold for Euclidean distance classification.

 Reporting whether a face belongs to a known or an unknown person.
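The thresholded Euclidean distance decision listed above can be written in a few lines. This is a minimal sketch assuming faces have already been encoded as feature vectors; the labels and threshold value are invented for illustration, not the report's tuned parameters.

```python
import numpy as np

def classify(probe_code, gallery_codes, labels, threshold):
    """Nearest-neighbour matching with a Euclidean distance threshold:
    return the closest gallery label if it is near enough, else "unknown"."""
    dists = np.linalg.norm(gallery_codes - probe_code, axis=1)
    i = int(np.argmin(dists))
    return labels[i] if dists[i] <= threshold else "unknown"

# Hypothetical 2-D feature codes for two enrolled persons.
gallery = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = ["alice", "bob"]
print(classify(np.array([0.2, -0.1]), gallery, labels, threshold=1.0))   # → alice
print(classify(np.array([10.0, 10.0]), gallery, labels, threshold=1.0))  # → unknown
```

In practice the threshold is chosen empirically from the distributions of intra-person and inter-person distances on the training set.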



1.4 Features:
 The face recognition system is simple and insensitive to small or gradual changes on the
face.
 Face recognition can handle slight changes in illumination.

CHAPTER 2

PRELIMINARY STUDY

2.1 Problem Specifications:

2.1.1 Technical Problems:

 The accuracy of the face detection system should not be compromised even by
slight changes in the face.

 Changes in the background should not affect the efficiency of the face recognition
system.

 Illumination conditions should not affect the face detection process and thereby
limit its efficiency.



2.1.2 Time Limit:

The time taken for training and classification of images depends on the size of
the database. In this project we work with a database of limited size for the
recognition process, but the time taken increases with the size and complexity
of the images.

2.1.3 Data Set:

We prepared our own dataset, comprising images taken under different
conditions, to verify the proper working of the face recognition system.

2.1.4 File Maintenance:

Our project involves the creation of a database, which must be maintained
efficiently and securely, since loss of the data set might cause the system to
fail to classify and recognize the faces in the images.



2.2 Literature Review:

2.2.1 GENERAL SURVEY OF FACE RECOGNITION

This chapter provides a detailed survey of face recognition research. There are two
underlying motivations to present this survey: the first is to provide an up-to-date review
of the existing literature, and the second is to offer some insights into the studies of
machine recognition of faces. To provide a comprehensive survey, existing recognition
techniques of face recognition are categorized and detailed descriptions of representative
methods within each category are presented. In addition, relevant topics such as
psychophysical studies, system evaluation, issues of illumination and pose variation are
covered.

Automated face recognition was first developed in the 1960s. The first semi-automated
system for face recognition required the administrator to locate features such as the eyes,
ears, nose, and mouth on the photographs before it calculated distances and ratios to a
common reference point, which were then compared with the reference data. Into the
1970s, the problem with these early solutions was that the measurements and locations
were computed manually. In 1990, Kirby and Sirovich applied Principal Component
Analysis (PCA), a standard linear algebraic technique, to the face recognition problem;
this was considered a milestone. They showed that fewer than one hundred values were
required to accurately code a suitably aligned and normalized face image.
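Kirby and Sirovich's observation can be demonstrated numerically: projecting an aligned image onto a small number of principal components yields a compact code from which the image is approximately reconstructed. The sketch below uses random stand-in data in place of real face images and is illustrative only.

```python
import numpy as np

# 20 random stand-ins for flattened 16x16 face images.
rng = np.random.default_rng(1)
faces = rng.normal(size=(20, 256))
mean = faces.mean(axis=0)
_, _, vt = np.linalg.svd(faces - mean, full_matrices=False)  # principal components

k = 10                                       # far fewer values than 256 pixels
coeffs = vt[:k] @ (faces[0] - mean)          # the k-value code for one face
approx = mean + coeffs @ vt[:k]              # reconstruction from the code

full_err = np.linalg.norm(faces[0] - mean)   # error of the mean-only "code"
k_err = np.linalg.norm(faces[0] - approx)    # error with k coefficients
print(k_err < full_err)  # → True
```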

As a result of many studies, scientists concluded that face recognition is not like other
object recognition. Face recognition is one of the few biometric methods that possess
both high accuracy and low intrusiveness: it has the accuracy of a physiological approach
without being intrusive. For this reason, since the early seventies face recognition has
drawn the attention of researchers in fields ranging from security to psychology and
image processing.



Early face recognition algorithms used simple geometric models, but the recognition
process has now matured into a science of sophisticated mathematical representations
and matching processes. Since the early 1950s, when digital computers were born and
the world gained significant processing power, computer scientists have endeavored to
bring thought and the senses to the computer. During the 1980s, work on face
recognition remained largely dormant. Plagued by the fears expressed in George
Orwell’s 1984, most members of society were very concerned about the use of a computer
system capable of recognizing them wherever they go. Since the 1990s, research
interest in face recognition has grown significantly as a result of the following
factors:

 The increase in emphasis on civilian/commercial research projects, and the
re-emergence of neural network classifiers with emphasis on real-time computation
and adaptation.

 The availability of real-time hardware.

 The increasing need for surveillance-related applications due to terrorist and drug
trafficking activities, etc.

In 1991, Turk and Pentland discovered that, when using the eigenfaces technique, the
residual error could be used to detect faces in images, a discovery that enabled reliable
real-time automated face recognition systems. This demonstration initiated much-needed
analysis of how to use the technology to support national needs while remaining
considerate of the public’s social and privacy concerns.
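Turk and Pentland's residual-error idea can be sketched as follows: project a patch onto the learned eigenface basis, reconstruct it, and measure the reconstruction error (the "distance from face space"). Face-like patches reconstruct well; arbitrary patches do not. The data below are synthetic placeholders, illustrative only.

```python
import numpy as np

def face_space_residual(patch, mean, eigenfaces):
    """Reconstruction error after projecting a patch onto the eigenface
    basis; small for face-like patches, large for non-face patches."""
    code = eigenfaces @ (patch - mean)
    recon = mean + code @ eigenfaces
    return float(np.linalg.norm(patch - recon))

# Toy "faces": points lying in a hidden 4-D subspace of a 64-pixel space.
rng = np.random.default_rng(2)
directions = rng.normal(size=(4, 64))
faces = rng.normal(size=(12, 4)) @ directions
mean = faces.mean(axis=0)
_, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
basis = vt[:4]                          # the learned "face space"

face_like = faces[0]                    # reconstructs almost perfectly
non_face = rng.normal(size=64)          # generic patch: large residual
print(face_space_residual(face_like, mean, basis) <
      face_space_residual(non_face, mean, basis))  # → True
```

A detector would slide this residual test over every window of the image and threshold it.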

Critics of the technology complain that the London Borough of Newham scheme has, as
of 2004, never recognized a single criminal, despite several criminals in the system's
database living in the Borough and the system having been running for several years.



"Not once, as far as the police know, has Newham's automatic facial recognition system
spotted a live target." This information seems to conflict with claims that the system was
credited with a 34% reduction in crime, which better explains why the system was then
rolled out to Birmingham also.

In 2006, the performance of the latest face recognition algorithms was evaluated in the
Face Recognition Grand Challenge (FRGC). High-resolution face images, 3-D face
scans and iris images were used in the tests. The results indicated that the new algorithms
are 10 times more accurate than the face recognition algorithms of 2002 and 100 times
more accurate than those of 1995. Some of the algorithms were able to outperform
human participants in recognizing faces and could identify even identical twins.

Tolba et al (2006) have reported an up-to-date review of major human face recognition
research in “Face recognition: a literature review, methods and technologies of face
recognition”. A literature review of the most recent face recognition techniques is
presented, and descriptions and limitations of the face databases used to test the
performance of these face recognition algorithms are given.

The face recognition problem is made difficult by the great variability in head rotation
and tilt, lighting intensity and angle, facial expression, aging, etc. Other attempts at
facial recognition by machine have allowed for little or no variability in these quantities.
Yet the method of correlation, or pattern matching, of unprocessed data, which is often
used by some researchers, is certain to fail in cases where the variability is great. In
particular, the correlation between two pictures of the same person with two different
head rotations is very low.

Modern face recognition has reached an identification rate greater than 90% under well-
controlled pose and illumination conditions. The task of recognizing faces has attracted
much attention from both neuroscientists and computer vision scientists. While network
security and access control are its most widely discussed applications, face recognition
has also proven useful in other multimedia information processing areas.



2.2.2 SURVEY OF PCA AND LDA BASED FACE RECOGNITION

Numerous algorithms have been proposed for face recognition. Chellappa et al (1995),
Zhang et al (1997) and Chan et al (1998) use face recognition techniques to browse video
databases to find shots of particular people. Haibo Li et al (1993) code face images with
a compact parameterized facial model for low-bandwidth communication applications
such as videophone and teleconferencing. Recently, as the technology has matured,
commercial products have appeared on the market.

Turk et al (1991) developed the Principal Component Analysis (PCA) technique for face
recognition, representing a set of faces using eigenfaces. Rama Chellappa et al (2003)
have dealt with feature-based methods using statistical, structural and neural classifiers
for human and machine recognition of faces. Krishnaswamy et al (1998) proposed
automatic face recognition using Linear Discriminant Analysis (LDA) of human faces.
Chengjun Liu and Harry Wechsler (2002) presented new coding schemes, the
Probabilistic Reasoning Models (PRM) and Enhanced Fisher linear discriminant Models
(EFM), for indexing and retrieval from large image databases. Michael Bromby (2003)
has presented a new form of forensic identification, facial biometrics, using
computerized identification.

Joss Beveridge et al (2003) provided the PCA and LDA algorithms for face recognition.
A detailed literature survey of face recognition and reconstruction techniques was given
by Roger Zhang and Henry Chang (2005).

Vytautas Perlibakas (2004) has reported a method in “Face Recognition Using Principal
Component Analysis and Wavelet Packet Decomposition” which allows PCA-based
face recognition to be used with a large number of training images and performs training
much faster than the traditional PCA-based method.

Kyungnam Kim (1998) has proposed, in “Face Recognition using Principle Component
Analysis”, using PCA to reduce the large dimensionality of the data space (observed
variables) to the smaller intrinsic dimensionality of the feature space (independent
variables), which is needed to describe the data economically. The original face is reconstructed


with some error, since the dimensionality of the image space is much larger than that of
face space.

Jun-Ying et al (2005) have combined the characteristics of PCA with LDA. This
improved method is based on normalization of the within-class average face image,
which has the advantage of enlarging the classification distance between samples of
different classes. Experiments were done on the ORL (Olivetti Research Laboratory)
face database; the results show that a correct recognition rate of 98% and better
efficiency can be achieved by the improved PCA method.
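The LDA step that such PCA+LDA methods apply after dimensionality reduction can be illustrated with the classical two-class Fisher discriminant. This is the textbook formulation on synthetic data, not Jun-Ying et al's specific within-class normalization.

```python
import numpy as np

def fisher_direction(class1, class2):
    """Two-class Fisher linear discriminant, w = Sw^-1 (m1 - m2): the
    projection direction maximising between-class separation relative
    to within-class scatter."""
    m1, m2 = class1.mean(axis=0), class2.mean(axis=0)
    sw = np.cov(class1, rowvar=False) + np.cov(class2, rowvar=False)
    return np.linalg.solve(sw, m1 - m2)

# Two synthetic 2-D classes separated along the first axis.
rng = np.random.default_rng(3)
a = rng.normal(size=(50, 2))
b = rng.normal(size=(50, 2)) + np.array([4.0, 0.0])
w = fisher_direction(a, b)

# Classify by thresholding the 1-D projection at the midpoint of the means.
mid = ((a @ w).mean() + (b @ w).mean()) / 2.0
accuracy = ((a @ w > mid).mean() + (b @ w < mid).mean()) / 2.0
print(accuracy > 0.9)  # → True
```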

El-Bakry (2007) has proposed, in “New Fast Principal Component Analysis for Face
Detection”, a new PCA implementation for fast face detection based on cross-correlation
in the frequency domain between the entire input image and the eigenvectors (weights).
This increases detection speed over the normal PCA implementation in the spatial
domain.
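The frequency-domain trick can be shown in one dimension: cross-correlation with a template becomes an element-wise product of Fourier transforms, so one FFT pass replaces a sliding dot product at every offset. A minimal sketch of the principle (not El-Bakry's actual detector):

```python
import numpy as np

def xcorr_fft(signal, template):
    """Circular cross-correlation via the frequency domain:
    corr = IFFT( FFT(signal) * conj(FFT(template)) )."""
    n = len(signal)
    f = np.fft.fft(signal)
    g = np.fft.fft(template, n)        # zero-pad the template to length n
    return np.real(np.fft.ifft(f * np.conj(g)))

signal = np.zeros(32)
template = np.array([1.0, 2.0, 3.0])
signal[10:13] = template               # embed the pattern at offset 10
corr = xcorr_fft(signal, template)
print(int(np.argmax(corr)))  # → 10
```

In the fast-PCA setting the "template" is each eigenvector and the "signal" is the whole input image, so every window's projection is obtained from a single frequency-domain product.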

Wangmeng Zuo et al (2006) have described, in “Combination of two novel LDA-based
methods for face recognition”, a combination of two LDA methods that perform LDA
on distinctly different subspaces, which may be effective in further improving the
recognition performance. The Fisherface technique uses a 2D Gaussian filter to smooth
classical Fisherfaces.

Xiaoxun Zhang and Yunde Jia (2007), in “A linear discriminant analysis framework
based on random subspace for face recognition” (Pattern Recognition), have explained
how the principal subspace, the optimal reduced dimension of the face samples, is used
to construct a random subspace in which all the discriminative information in the face
space is distributed across the two principal subspaces of the within-class and
between-class matrices.

Moshe Butman and Jacob Goldberger (2008) have introduced, in “Face Recognition
Using Classification-Based Linear Projections”, a face recognition algorithm based
on a linear subspace projection. The subspace is found by utilizing a variant of the
neighborhood component analysis (NCA) algorithm, a recently introduced supervised
dimensionality reduction method.



Changjun Zhou et al (2010) have introduced a feature fusion method for face
recognition based on Fisher’s Linear Discriminant (FLD) in “Features Fusion Based on
FLD for Face Recognition”. The method extracts features by employing two-dimensional
principal component analysis (2DPCA) and Gabor wavelets, and then fuses the
extracted features using FLD.

Hui Kong, Lei Wang et al (2005) have explained in their paper, “Framework of 2D Fisher
Discriminant Analysis: Application to Face Recognition with Small Number of Training
Samples”, that 2D Fisher Discriminant Analysis (2D-FDA) differs from the 1D-LDA
based approaches. 2D-FDA is based on 2D image matrices rather than column vectors,
so the image matrix does not need to be transformed into a long vector before feature
extraction; the framework contains unilateral and bilateral 2D-FDA.
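The matrix-based idea can be illustrated with a 2DPCA-style projection, a simplified relative of 2D-FDA that also skips vectorization. The images below are random placeholders.

```python
import numpy as np

# 10 random stand-ins for 8x6-pixel images, kept as matrices throughout.
rng = np.random.default_rng(4)
images = rng.normal(size=(10, 8, 6))
mean = images.mean(axis=0)

# Image covariance built directly from matrices: sum of (A - mean)^T (A - mean).
g = sum((a - mean).T @ (a - mean) for a in images)
_, vecs = np.linalg.eigh(g)            # eigenvectors, ascending eigenvalues
proj = vecs[:, -2:]                    # top-2 projection axes (6x2)

features = images[0] @ proj            # an 8x2 feature matrix, not a long vector
print(features.shape)  # → (8, 2)
```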

Yanwei Pang et al (2004) have proposed “A Novel Gabor-LDA Based Face Recognition
Method”, in which a face recognition method based on Gabor wavelets with Linear
Discriminant Analysis (LDA) is presented. These are used to determine salient local
features, whose positions are specified by the discriminant pixels. Because the number
of discriminant pixels is much smaller than the number of pixels in the whole image,
the number of Gabor wavelet coefficients is decreased.

Xiang et al (2004) have reported in “Face Recognition using recursive Fisher Linear
Discriminant with Gabor wavelet coding” that the constraint on the total number of
features available from the Fisher Linear Discriminant (FLD) has seriously limited its
application to a large class of problems. In order to overcome this disadvantage of FLD,
a recursive procedure for calculating the discriminant features is suggested. Work is
currently in progress to study the various design issues of face recognition, and the
objective is to achieve a 99% accuracy rate for identity recognition on all the widely used
databases, and at least 80% accuracy for facial expression recognition on the Yale database.

Juwei Lu et al (2003) have shown in “Regularization Studies on LDA for Face
Recognition” that the applicability of Linear Discriminant Analysis (LDA) to high-
dimensional pattern classification tasks such as face recognition (FR) often suffers from
the so-called small sample size (SSS) problem, arising from the small number of available



training samples compared to the dimensionality of the sample space. The effectiveness
of the proposed method has been demonstrated through experiments using the FERET
database.

Chengjun Liu and Harry Wechsler (2002) have reported in “Gabor Feature Based
Classification (GFC) using the Enhanced Fisher Linear Discriminant Model for Face
Recognition” that the feasibility of the proposed GFC method has been successfully
tested on face recognition using a data set from the FERET database, a standard
testbed for face recognition technologies.

2.2.3 SURVEY OF NEURO AND FUZZY BASED FACE RECOGNITION

Rowley et al (1998) have provided a neural network-based upright frontal face detection
system in “Neural Network-Based Face Detection”. To collect negative examples, a
bootstrap algorithm is used, which adds false detections into the training set as training
progresses.

Jianming Lu et al (2007) have presented a new method of face recognition using fuzzy
clustering and parallel neural networks, based on a neuro-fuzzy system. The face
patterns are divided among several small-scale parallel neural networks according to
fuzzy clustering, and their outputs are combined to obtain the recognition result.

Yu et al (2001) have discussed multiple Fisher classifier combination for face
recognition based on grouping AdaBoosted Gabor features. The key issue in using Gabor
features is how to efficiently reduce their high dimensionality: the Gabor-based
representation remains very high-dimensional even after being selected by some feature
selection methods. In order to increase the total dimension of the FDA subspace, the
AdaBoosted Gabor features are regrouped into smaller feature subsets.



Hongzhou Zhang et al (2007) have implemented face recognition under different poses
by reconstructing frontal-view features using a linear transformation in “Face
Recognition Using Feature Transformation”. Fei Zuo and Peter (2008) have introduced
a fast face detector with an efficient architecture in “Cascaded face detection using
neural network ensembles”, based on a hierarchical cascade of neural network ensembles
with which enhanced detection accuracy and efficiency are achieved.

Dmitry Bryliuk and Valery Starovoitov (2002), in “Access Control by Face Recognition
Using Neural Networks”, have considered a multilayer perceptron neural network (NN)
for access control based on face image recognition. The robustness of NN classifiers
with respect to False Acceptance and False Rejection errors is studied, and a new
thresholding approach for rejecting unauthorized persons is proposed.

Keun-Chang Kwak et al (2007) employed Fisher-based fuzzy integral and wavelet
decomposition methods for face recognition at the University of Alberta. Shiguang Shan
et al (2004) dealt with Gabor wavelets for face recognition from the angle of their
robustness to misalignment.

Jun Zhang et al (1997) have compared three recently proposed algorithms for face
recognition, namely eigenfaces, autoassociation and classification neural nets, and
elastic matching, in “Face Recognition: Eigenfaces, Elastic Matching, and Neural Nets”.

Smach et al (2005) have implemented a classifier based on MLP (multilayer perceptron)
neural networks for face detection in “Design of a Neural Networks Classifier for Face
Detection”. The MLP is used to classify face and non-face patterns. A hardware
implementation is then achieved using a VHDL-based methodology: the system was
implemented in VHDL and synthesized using the Leonardo synthesis tool. The model’s
robustness was obtained with a backpropagation learning algorithm.

According to Kakarwal et al (2009), it is important to select invariant facial features,
especially for faces with various pose and expression changes, in Information Theory and


Neural Network Based Approach for Face Recognition. This work presents some novel
feature extraction techniques, such as entropy and mutual information. For classification,
a feed-forward neural network is used, which is better than traditional methods for
accurately recognizing faces.

Gaile (1992) has explored the use of morphological operators for feature extraction from
range images and curvature maps of the human face in “Application of Morphology to
Feature Extraction for Face Recognition”. The paper describes general procedures
for locating features defined by the configuration of extrema in principal curvature. A
novel connection technique based on the concept of a constrained skeleton is also
introduced; being based on a proximity rule defined by a structuring element, it can be
used successfully for a variety of applications.

Sushmita Mitra and Sankar (2005) have explained in “Fuzzy sets in pattern recognition
and machine intelligence” that fuzzy sets are well suited to modeling the different forms
of uncertainty and ambiguity often encountered in real life. Fuzzy set theory is the
oldest and most widely reported component of present-day soft computing, which deals
with the design of flexible information processing systems.

Vonesch et al (2005) have illustrated the flexibility of the proposed design method in
“Generalized bi-orthogonal Daubechies wavelets”. Most importantly, it is possible to
incorporate a priori knowledge of the characteristics of the signals to be analyzed into
the approximation spaces, via the exponential parameters.

Alaa Eleyan and Hasan Demirel (2007) have proposed PCA- and LDA-based neural
networks for human face recognition. Lekshmi and Sasikumar (2009) have analysed
both global and local information for facial expression recognition.

Fatma et al (2008) have discussed in “Comparison between Haar and Daubechies
Wavelet Transformations on FPGA Technology” that the Daubechies wavelet is more
complicated than the Haar wavelet. Daubechies wavelets are continuous and thus more
computationally expensive to use than the Haar wavelet. This wavelet type has balanced
frequency responses but non-linear phase responses. Daubechies wavelets use



overlapping windows, so the high-frequency coefficient spectrum reflects all high-
frequency changes.
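The contrast is easy to see in the Haar case: one Haar analysis level is just non-overlapping pairwise averages and differences, whereas Daubechies filters are longer and overlap neighbouring pairs. A minimal sketch with unnormalized Haar filters, illustrative only:

```python
import numpy as np

def haar_level(x):
    """One level of the (unnormalized) Haar transform: pairwise averages
    (approximation) and pairwise differences (detail). Each input sample
    contributes to exactly one output pair: the windows do not overlap."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / 2.0
    detail = (x[0::2] - x[1::2]) / 2.0
    return approx, detail

a, d = haar_level([4.0, 2.0, 5.0, 5.0])
print(a.tolist(), d.tolist())  # → [3.0, 5.0] [1.0, 0.0]
```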

Manjunath and Ma (1996) have suggested, in “Texture Features for Browsing and
Retrieval of Image Data”, a novel adaptive filter selection strategy for Gabor wavelets
that reduces the image processing computations while maintaining a reasonable level of
retrieval performance.

Hossein Sahoolizadeh et al (2008) have proposed a new hybrid method of Gabor wavelet
faces using an extended NFS classifier in “Face Detection using Gabor Wavelets and
Neural Networks”. Using down-sampled Gabor wavelet transforms of face images as
features in a subspace approach is superior to the pixel-value approach.

Yousra Ben Jemaa and Sana Khanfir (2009) have discussed in “Automatic local Gabor
features extraction for face recognition” that a face is represented by its own Gabor
coefficients computed at the fiducial points (the points of the eyes, mouth and nose). The
first feature vector is composed of geometrical distances automatically extracted between
the fiducial points, the second of the responses of Gabor wavelets applied at the fiducial
points, and the third of the combined information of the previous two vectors.

From the literature review, it is found that in the 1970s, typical pattern classification
techniques were used to measure attributes of features in faces or face profiles.
During the eighties, work on face recognition remained largely dormant. Since the early
nineties, research interest in face recognition technology has grown very significantly.

Over the last ten years, increased activity has been seen in tackling problems such as
segmentation and location of a face in a given image and extraction of features such as
the eyes, mouth, etc. Numerous advances have also been made in the design of
statistical and neural network classifiers for face recognition. Many methods have been
proposed in the literature for the facial recognition task. However, all of them still have
disadvantages, such as an incomplete reflection of face structure and


face texture. Therefore, a combination of different algorithms that can integrate this
complementary information should improve the efficiency of the entire system.

The development of face recognition over the past years allows recognition algorithms
to be organized into three types, namely frontal, profile, and view-tolerant recognition,
depending on the kind of images and the recognition algorithms. While frontal
recognition is certainly the classical approach, view-tolerant algorithms usually
perform recognition in a more sophisticated fashion, taking into consideration some
of the underlying physics, geometry and statistics. Profile schemes as stand-alone
systems have rather marginal significance for identification. However, they are very
practical either for fast, coarse pre-searches of large face databases to reduce the
computational load for a subsequent sophisticated algorithm, or as part of a hybrid
recognition scheme.

The following observations were made after surveying the research literature:

 Some features of the face or image subspace may be simultaneously invariant to all the
variations that a face image may exhibit.

 Given more training images, almost any technique will do better, and the number of
test images will decide the performance. These two factors are the major reasons why
face recognition is not widely used in real-world applications.

The commercial applications range from static matching of photographs on credit cards,
ATM cards, passports, driver's licenses and photo ID to real-time matching with still images
or video image sequences for access control.



CHAPTER 3

REQUIREMENT ANALYSIS

3.1 HARDWARE REQUIREMENTS:-

 Windows 10 / Windows 8.1 / Windows 7 Service Pack 1

 Processor : Intel Core i3 or equivalent

 Memory : 4 GB / 8 GB

 Disk space : 2 GB of HDD space for MATLAB only, 4-6 GB for a typical installation

 macOS High Sierra (10.13) / macOS Sierra (10.12) / macOS El Capitan (10.11)

 Processor : Any Intel or AMD x86-64 processor

 Memory : 4 GB / 8 GB

 Disk space : 2.5 GB of HDD space for MATLAB only, 4-6 GB for a typical installation

 Ubuntu 17.10 / Ubuntu 16.04 LTS / Ubuntu 14.04 LTS

3.2 SOFTWARE REQUIREMENTS:-

 Operating System: Ubuntu 17.10 / Windows 10


 Software: MATLAB



3.3 FUNCTIONAL REQUIREMENTS:-

The basic functional requirement of this project is to provide a platform for a person
to perform a recognition search using a given test image.

 Required functional image data sets.

 Notification of a successful or unsuccessful search.

3.4 NON – FUNCTIONAL REQUIREMENTS:-

3.4.1 Performance Requirements:-

 The search must run fast enough that analysis of the database and any changes
made to the system are reflected immediately.

3.4.2 Security Requirements:-

 The input database shall not be modified in any way during processing. Only an
authenticated person can alter the database as per the requirements.
 The face images of the dataset are available only to authorized users.

3.4.3 Usability:-

Even a naïve user can easily go through the system and learn to handle its
functionalities.



3.4.4 Reliability:-

The major concern of the project is to ensure user satisfaction.

3.4.5 Availability:-

The project can be used at any time on the minimum specified requirements.

3.5 TOOLS OF DEVELOPMENT:-

3.5.1 WINDOWS 7:

Windows 7 is a personal computer operating system developed by Microsoft as
part of the Windows NT family of operating systems. It was primarily intended
to be an incremental upgrade, addressing Windows Vista's poor critical
reception while maintaining hardware and software compatibility. Windows 7
continued improvements to Windows Aero (the user interface introduced in
Windows Vista) with the addition of a redesigned taskbar that allows
applications to be "pinned" to it, and new window management features. Other
new features were added to the operating system, including libraries, the new
file-sharing system HomeGroup, and support for multitouch input. A new
"Action Center" interface was also added to provide an overview of system
security and maintenance information, and tweaks were made to the User
Account Control system to make it less intrusive. Windows 7 also shipped with
updated versions of several stock applications, including Internet Explorer 8,
Windows Media Player, and Windows Media Center. In contrast to Windows
Vista, Windows 7 was generally praised by critics, who considered the
operating system a major improvement over its predecessor due to its
increased performance, its more intuitive interface (with particular praise devoted
to the new taskbar), fewer User Account Control popups, and other improvements
made across the platform.

3.5.2 MATLAB:

MATLAB (matrix laboratory) is a multi-paradigm numerical computing
environment and a proprietary programming language developed by MathWorks.
MATLAB allows matrix manipulations, plotting of functions and data,
implementation of algorithms, creation of user interfaces, and interfacing with
programs written in other languages, including C, C++, C#, Java, Fortran and
Python. Although MATLAB is intended primarily for numerical computing, an
optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic
computing abilities. An additional package, Simulink, adds graphical multi-domain
simulation and model-based design for dynamic and embedded systems. MATLAB
supports developing applications with graphical user interface (GUI) features and
includes GUIDE (GUI development environment) for graphically designing GUIs.
It also has tightly integrated graph-plotting features.

3.5.2.1 Structures

MATLAB has structure data types. Since all variables in MATLAB are arrays, a more accurate name is "structure array", where each element of the array has the same field names. In addition, MATLAB supports dynamic field names (field look-ups by name, field manipulations, etc.). However, the MATLAB JIT compiler does not accelerate code that uses structures, so even a simple bundling of various variables into a structure comes at a performance cost.

3.5.2.2 Functions



When creating a MATLAB function, the name of the file should match the name of the first function in the file. Valid function names begin with an alphabetic character and can contain letters, numbers, or underscores. Function names are case sensitive.

3.5.2.3 Function handles

MATLAB supports elements of lambda calculus through function handles, or function references, which are implemented either in .m files or as anonymous/nested functions.

3.5.2.4 Classes and object-oriented programming

MATLAB supports object-oriented programming, including classes, inheritance, virtual dispatch, packages, pass-by-value semantics, and pass-by-reference semantics. However, the syntax and calling conventions differ significantly from other languages. MATLAB has value classes and reference classes, depending on whether the class has handle as a super-class (for reference classes) or not (for value classes).



CHAPTER 4
MODULES

One of the simplest and most effective PCA approaches, the eigenface approach, is used in this Face Recognition System. This approach transforms faces into a small set of essential characteristics, eigenfaces, which are the principal components of the initial set of learning images (the training set). Recognition is done by projecting a new image into the eigenface subspace, after which the person is classified by comparing its position in eigenface space with the positions of known individuals. The advantage of this approach over other face recognition systems lies in its simplicity, speed and insensitivity to small or gradual changes in the face. The approach does, however, constrain the images that can be used: they must be vertical frontal views of human faces. The whole recognition process involves two steps, namely initialization and recognition. The initialization process involves the following operations:

i) Acquire the initial set of face images, called the training set.

ii) Calculate the eigenfaces from the training set, keeping only the M images that correspond to the highest eigenvalues. These M images define the face space.

iii) Calculate the distribution in this M-dimensional space for each known person by projecting his or her face images onto the face space.

Having initialized the system, the recognition process involves the following steps:



i). Calculate a set of weights in the M-dimensional face space by projecting the input image onto each of the eigenfaces.

ii). Determine whether the image is a face at all (known or unknown) by checking whether the image is sufficiently close to the face space.

iii). If it is a face, classify the face image as either a known person or unknown.

Figure.1 Process of face recognition using PCA
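The two-stage process above can be sketched in Python with NumPy. This is a hedged sketch, not the project's MATLAB implementation: the matrix sizes, the threshold and the small-matrix eigenvector trick are illustrative assumptions.

```python
import numpy as np

def train_eigenfaces(faces, M):
    """faces: N x D matrix, one flattened training image per row."""
    mean_face = faces.mean(axis=0)
    A = faces - mean_face                        # centre the training set
    # Eigenvectors of the small N x N matrix A A^T (the Turk-Pentland trick),
    # mapped back into image space to obtain the eigenfaces.
    evals, V = np.linalg.eigh(A @ A.T)
    order = np.argsort(evals)[::-1][:M]          # keep the M largest eigenvalues
    eigenfaces = (A.T @ V[:, order]).T           # M x D
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    return mean_face, eigenfaces

def project(image, mean_face, eigenfaces):
    """Weight vector of an image in the M-dimensional face space."""
    return eigenfaces @ (image - mean_face)

def classify(image, mean_face, eigenfaces, known_weights, threshold):
    """Nearest known person in face space, or 'unknown' beyond the threshold."""
    w = project(image, mean_face, eigenfaces)
    name, d = min(((n, np.linalg.norm(w - wk)) for n, wk in known_weights.items()),
                  key=lambda t: t[1])
    return name if d < threshold else "unknown"
```

Here `classify` returns the nearest known person only when the distance in face space is below the chosen threshold, mirroring the known/unknown decision described above.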

4.1 Image Acquisition



The first stage of any vision system is the image acquisition stage. Digital image
acquisition is the creation of photographic images, such as of a physical scene or of the
interior structure of an object. The term is often assumed to imply or include the
processing, compression, storage, printing, and display of such images. High-resolution
images enable us to capture the finer details of the face. Facial marks are defined as visible changes in the skin; they differ in texture, shape and color from the surrounding skin and appear at random positions on the face. By extracting different facial mark features we aim to differentiate between the images.

4.2 Preprocessing

The aim of data pretreatment (transformation and preprocessing) before PCA or other multivariate analysis is to mathematically remove sources of unwanted variation that cannot be removed naturally during the data analysis itself.

The Principal Component Analysis (PCA) is a method based on spectral analysis of the
matrix of coefficients of linear correlation. The principal components are linear
combinations of the original variables of the data table analyzed. This descriptive method
has been developed for the detection of linear relations between variables.

However, if the relationships between the variables analyzed are not linear, the values of
correlation coefficients can be lower. Thus, it is sometimes useful to transform the
original variables prior to the Principal Component Analysis to "linearize" these
relationships.

For a fair comparison, data in PCA need to be "dimensionally homogeneous", i.e. measured in the same units. This is not the case for most data sets, but various data transformations can be employed to satisfy this requirement. For example, "standardization" (or "scaling") within variables expresses each observation relative to its position in the distribution for that variable. If, for each observation of a variable, we subtract the variable's mean and then divide by the variable's standard deviation, we have scaled the distribution to have zero mean and unit variance. If we apply this to all variables, they are effectively all in the same "units" and can therefore be compared in PCA. There are various other ways to standardize (e.g. subtract the minimum value and divide by the range, i.e. the maximum minus the minimum), but they all behave similarly: they are all linear transformations. Standardizing in this way is effectively the same as running PCA on the correlation matrix of the data.
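The z-score standardization just described can be illustrated as follows (a Python sketch; the sample values are made up):

```python
import numpy as np

def standardize(X):
    """Z-score each variable (column): subtract its mean, divide by its std."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Two variables measured in very different units (made-up sample values).
X = np.array([[170.0, 65000.0],
              [160.0, 48000.0],
              [180.0, 52000.0],
              [175.0, 61000.0]])
Z = standardize(X)
# Every column of Z now has zero mean and unit variance.
```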

4.3 Feature extraction

After pre-processing, the normalized face image is given as input to the feature extraction module to find the key features that will be used for classification. The module composes a feature vector that represents the face image well enough for classification.

4.3.1 Eigenfaces

• Eigenvectors resemble ghostly facial images and are therefore called eigenfaces.

• An eigenface corresponds to each face in the face space; faces whose eigenvalue is zero are discarded, reducing the number of eigenfaces.

• The eigenfaces are ranked according to their usefulness in characterizing the variation among the images.

• After ranking, the least significant eigenfaces can be removed.

4.4 Classification



With the help of a pattern classifier, the PCA-extracted features of the face image are compared with those stored in the face database using the Euclidean distance between feature vectors. The face image is then classified as either known or unknown.

4.4.1 Principal Component Analysis (PCA)

• PCA stands for Principal Component Analysis.

• Images are high-dimensional, correlated data.

• The goal of PCA is to reduce the dimensionality of the data while retaining as much of the variation in the original data set as possible.

• The simplest way would be to keep one variable and discard all the others, which is not reasonable; instead we reduce dimensionality by combining features.

• As an intermediate benefit, the reduced representation produced by PCA can be used for visualization.

4.4.2 Euclidean Distance

 It is a distance measure between a pair of samples p and q in an n-dimensional feature space, d(p, q) = sqrt((p1 − q1)² + … + (pn − qn)²); for example, picture it as the straight line connecting two points in a 2D plane.

Fig 2

 The Euclidean distance is often the "default" distance used in, e.g., k-nearest neighbours (classification) or k-means (clustering) to find the k closest points to a particular sample point. Another prominent example is hierarchical agglomerative clustering (complete and single linkage), where you want to find the distance between clusters.
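A minimal sketch of the distance measure, together with the known/unknown decision it supports in this project (Python; the gallery names, feature vectors and threshold are illustrative):

```python
import math

def euclidean(p, q):
    """d(p, q) = sqrt(sum_i (p_i - q_i)^2) in n-dimensional feature space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Hypothetical gallery of stored feature vectors for known persons.
gallery = {"alice": [1.0, 2.0, 0.5], "bob": [4.0, 0.0, 3.0]}

def nearest(probe, gallery, threshold):
    """Nearest gallery entry, classified 'unknown' past the distance threshold."""
    name, d = min(((n, euclidean(probe, v)) for n, v in gallery.items()),
                  key=lambda t: t[1])
    return name if d <= threshold else "unknown"
```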

4.5 Face Database creation

4.5.1 Face Database

A total of 190 images are used for training the system. The images are 100×100 pixels and were taken of people in five different poses. The images have a similar, plain background; the illumination and lighting conditions were manually controlled during image capture. To ensure that the images are all of the same size, they are also cropped to a fixed size during database creation. The face database is assembled manually with the utmost care.
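With images cropped to a common size as described, the training set can be stacked into one data matrix for PCA. This is a Python sketch using random stand-in arrays rather than the real 190-face database (actual code would load the image files):

```python
import numpy as np

IMG_SIZE = (100, 100)  # every database image is cropped to this fixed size

def build_training_matrix(images):
    """Stack each 100x100 image as one flattened row of the data matrix."""
    for img in images:
        assert img.shape == IMG_SIZE, "all database images must share one size"
    return np.stack([img.ravel() for img in images])

# Random stand-ins for the 190 database images.
rng = np.random.default_rng(1)
images = [rng.integers(0, 256, IMG_SIZE) for _ in range(190)]
X = build_training_matrix(images)   # shape: (190, 10000)
```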



Sample Dataset

4.5.2 Dimensionality Reduction

The sheer size of data in the modern age is not only a challenge for computer hardware but also a main bottleneck for the performance of many machine learning algorithms. The main goal of a PCA analysis is to identify patterns in data; PCA aims to detect the correlation between variables, and the attempt to reduce the dimensionality only makes sense if a strong correlation between variables exists. In a nutshell, this is what PCA is all about: finding the directions of maximum variance in high-dimensional data and projecting it onto a smaller-dimensional subspace while retaining most of the information. Often, the desired goal is to reduce the dimensions of a d-dimensional dataset by projecting it onto a k-dimensional subspace (where k < d) in order to increase computational efficiency while retaining most of the information.

For dimensionality reduction in principal component analysis the technique is simple: it involves a linear mapping of the image data to a lower-dimensional space in such a way that the variance of the data in the lower-dimensional space is maximized, as this forms the basis for better and more accurate feature extraction. With the reduction in dimensions the representation not only becomes compact but the features suitable for extraction are also emphasized.
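A hedged sketch of this projection using the singular value decomposition (Python; the data and the choice k = 2 are illustrative, not taken from the project):

```python
import numpy as np

def pca_project(X, k):
    """Project the d-dimensional rows of X onto the k directions of max variance."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, ordered by decreasing variance.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                      # n x k reduced representation
    retained = (S[:k] ** 2).sum() / (S ** 2).sum()
    return scores, retained

# Correlated 3-D data that is essentially 2-D: the third variable is a
# linear mix of the first two, so k = 2 loses almost nothing.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 2))
X = np.column_stack([base, base @ np.array([0.7, 0.3])])
scores, retained = pca_project(X, 2)
```

Because the third variable is an exact linear combination of the other two, the two retained components capture essentially all of the variance.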
CHAPTER: 5
Software Development Life Cycle

5.1 SDLC
The SDLC is a software development process used to design, develop and test high-quality software. It is a framework defining the tasks performed at each step of the software development process. The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates.



In this project, the evolutionary SDLC model is used. This model is based on the idea of rapidly developing an initial software implementation from very abstract specifications and modifying it according to customer appraisal. Each program version inherits the best features from earlier versions, and each version is refined based upon feedback to produce a system that satisfies the customer's needs. At that point the system may be delivered, or it may be reimplemented using a more structured approach to enhance robustness and maintainability. Specification, development and validation activities are concurrent, with strong feedback between them.

The evolutionary model attacks this problem in a slightly different way: it suggests breaking the work down into smaller chunks, prioritizing them and then delivering those chunks to the customer one by one. The number of chunks, and hence the number of deliveries to the customer, is large. The main advantage is that the customer's confidence increases, since from the beginning of the project he constantly receives deliverables against which to validate and verify his requirements. The model allows for changing requirements, as all work is broken down into maintainable chunks.

There are two types of evolutionary development model:

1. Exploratory programming
Here the objective of the process is to work with the customer to explore their requirements and deliver a final system. The development starts with the better-understood components of the system, and the software evolves by adding new features as they are proposed.

2. Throwaway prototyping
Here the purpose of the evolutionary development process is to understand the project requirements and thus develop a better requirements definition for the system. The prototype concentrates on experimenting with those components of the requirements which are poorly understood.



The reason for using this model in this project is that it is the only method appropriate for situations where a detailed system specification is unavailable. It is effective for rapidly producing small systems, software with short life spans, and sub-components of larger systems.

Outline description → Specification → Development → Validation (producing the initial, intermediate and final versions)
Figure 3: Evolutionary model Software development Life Cycle Process

5.1.1 Outline Description

In the outline description we are mainly concerned with a detailed description of the problem statement. The problem statement should be described well enough that every requirement term is clearly understood; that is, there is no ambiguity in the description of the problem statement. This is an important phase



because if the problem statement is not clearly understood by the team members then the software product will not be up to the mark.

5.1.2 Specification

After the detailed description of the problem statement, the next step is specification of the tools, both hardware and software, for the product. In this step, according to the problem statement, we specify the details of the tools, their versions, the language used and the platforms on which the product is to be developed. If hardware is involved in the product then the hardware specification should also be given. This is also a crucial phase because product development depends entirely on the specifications laid down in this stage of the software development cycle.

5.1.3 Development

Once the tools, the language to be used and the platform on which the product is to be developed are decided, the development phase begins. This step is mainly concerned with coding: the coder writes the code so that the desired output is obtained, and in this phase a working product can be seen. The coder also tests the product before the validation phase, which is known as unit testing.

5.1.4 Validation

The validation phase is mainly concerned with testing. Testing is done by the tester to check whether the product works properly and whether there are any bugs in the software. All the relevant testing methods are applied by the tester. The final product should be bug-free, so that the client is satisfied with the product.

5.1.5 Initial Version

At this stage of software development, the initial version of the software is developed.
This initial version includes the basic modules and basic functionalities for which the
software is mainly developed.

5.1.6 Intermediate Version

At this stage of software development, successive versions of the software are refined until the final version is produced. This final version includes the overall developed software as the final product, with all the modifications implemented that were suggested by the customer after testing the software.

5.2 FLOWCHART

Flowchart is a collective term for a diagram representing a flow or set of dynamic relationships in a system. The term flow diagram is also used as a synonym of flowchart, and sometimes as a counterpart of it.

Flowchart is a diagram that visually displays interrelated information such as events, steps
in a process, functions, etc., in an organized fashion, such as sequentially or
chronologically. There are different types of flow diagram like control flow diagram, data
flow diagram, product flow diagram, information flow diagram etc.

5.2.1-SYMBOLS USED IN A FLOW CHART



• Start/End

• Input/Output

• Assignment/Processing box

• Arrow

• Connector

• Decision box

• Loop

Hence the flowchart of this project is as follows:-

Figure 4: Flow Chart

5.3 Components of DFD

Figure 5. Components of DFD

• Entity: A source or destination of information data. Entities are represented by rectangles with their respective names.



• Process: Activities and actions taken on data are represented by circles.

• Data Storage: There are two variants of data storage - it can either be represented as a
rectangle with the absence of both smaller sides or as an open-sided rectangle with only
one side missing.

• Data Flow: Movement of data is shown by pointed arrows. Data movement is shown from
the base of the arrow as its source towards the head of the arrow as a destination.

5.3.1 Level-0 DFD

The context-level data flow diagram comes first; it shows the interaction between the system and the external agents that act as data sources and data sinks. In the context diagram (also known as the level 0 DFD) the system's interactions with the outside world are modeled purely in terms of data flows across the system boundary.



Figure 6:- Level 0 DFD

5.3.2 LEVEL-1 DFD

The level 1 DFD elaborates the interaction between the system and agents, with lower-level functions decomposed from the major functions of the system.



Figure 7:- LEVEL 1 DFD

5.4 USECASE DIAGRAM

5.4.1 Actor

You can picture an actor as a user of the IT system, for example, Mr. Steel or Mrs. Smith from check-in. Because individual persons are irrelevant for the model, they are abstracted, so the actors are called "check-in employee" or "passenger".



5.4.2 Use Case

Use cases describe the interactions that take place between actors and IT systems during the execution of business processes.

5.4.3 Association

An association is a connection between an actor and a use case. An association indicates that an actor can carry out a use case. Several actors on one use case mean that each actor can carry out the use case on his or her own, not that the actors carry out the use case together:

5.4.4 Include Relationships

An include relationship is a relationship between two use cases in which one use case always incorporates the behaviour of the other.



Figure 8:- USE CASE DIAGRAM

5.5 WEB SERVER/APPLICATION

The proposed system is an Android application/website which mainly consists of three modules. After logging in to the application, the user can choose a parking space nearest to his destination. After the user books a particular slot, the administrator updates the status of that parking slot to "RESERVED". If the user does not arrive at the parking slot within 20 minutes of the time of booking, his booking is cancelled and the status is updated to "EMPTY". The Smart Parking System is based on client-server architecture. It is economically beneficial since it does not require any heavy infrastructure, and it is neither sensitive to temperature change nor affected by extreme air turbulence.

The main objectives of the Smart Parking System application are the following:



1. An intelligent, ubiquitous, user-friendly automated parking application that minimizes the user's time and avoids traffic congestion in metropolitan cities.

2. Safe and secure parking slots within a limited area, which is the most urgent need.

5.5.1 Methodology:

The slot allocation method follows the sequence stated below:

Step 1: Initially the slot selection is made by the user from his mobile phone. He checks for the availability of a parking slot nearest to his location; if one is available he moves to the next stage, otherwise he returns to the initial state.

Step 2: The request for the parking slot is transferred from the mobile using the Android application.

Step 3: The Parking Control Unit (PCU) receives the slot number requested by the user.

Step 4: If the booking is done successfully, the requested slot is reserved in the parking area.

Step 5: After a user reserves a particular slot, the status of that slot is marked RED = RESERVED and the remaining slots are GREEN = EMPTY.

Step 6: As soon as the vehicle enters the parking slot, the timer starts and measures the total time.

Step 7: As soon as the vehicle moves out of the parking slot, the timer stops and the total cost is displayed.
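The allocation sequence above can be sketched as a small state model. This is a Python sketch: the 20-minute cancellation window and the RED/GREEN statuses come from the description, while the slot ID, tariff and method names are illustrative assumptions.

```python
from datetime import datetime, timedelta

CANCEL_AFTER = timedelta(minutes=20)   # no-show window from the description
RATE_PER_HOUR = 30.0                   # illustrative tariff, not from the report

class Slot:
    def __init__(self, slot_id):
        self.slot_id = slot_id
        self.status = "EMPTY"          # GREEN = EMPTY, RED = RESERVED
        self.booked_at = None
        self.entered_at = None

    def book(self, now):
        assert self.status == "EMPTY"
        self.status, self.booked_at = "RESERVED", now

    def tick(self, now):
        """Cancel the booking if the vehicle never arrived within the window."""
        if (self.status == "RESERVED" and self.entered_at is None
                and now - self.booked_at > CANCEL_AFTER):
            self.status, self.booked_at = "EMPTY", None

    def enter(self, now):
        self.entered_at = now          # timer ON

    def leave(self, now):
        hours = (now - self.entered_at).total_seconds() / 3600
        self.status, self.booked_at, self.entered_at = "EMPTY", None, None
        return hours * RATE_PER_HOUR   # timer OFF; total cost
```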



5.5.2 Modules:

The Smart Parking System mainly consists of two modules:

• User Module

• Booking Module

Architecture for Website/Application Module

This module of the application deals with the user interface/user experience.

• This module provides the user with the flexibility of registering, logging in and booking.
• If the user is new to the application, the user must register in the application by providing his details.
• After registration, the user logs in using the user-id and password.
• Once the user logs in, he browses the parking slots, books a parking slot and receives the booking confirmation.

Administrator Module

This is the operative module of the application.

• It works in the backend, managing the database and performing various operations on it.
• The administrator stores all the user's data in the database as soon as the user registers with the application.
• The administrator maintains the details of all parking slots (both empty and reserved), their booking prices and the user details in the database, and modification of these data can only be done by the administrator.
• The administrator also provides the confirmation message to the user.

5.5.3 Booking Module:

• This is the main module of the application and it deals with the booking of the parking slot.
• When the user is ready to book, the booking module provides the user with the necessary information for booking.
• The available slots, the cost to book a slot and the necessary processing in this regard are handled by this booking module.

5.5.4 Output Screen:

5.5.4.1 User Side:

• Initially, the user needs to install the smart parking application on his Android device, or use the website.
• After installation the smart parking icon is displayed on his Android mobile screen.

Registration and login:

• If the user is a new user he needs to register with the application by giving all his details; the data entered by the user is stored on the server.
• These details consist of user name, email, password, address etc. This registration is done only the first time.
• After successful registration he receives a unique login ID, sent to both his mail and his mobile number.
• After the user has registered with the application, the user can log in by providing his email and unique ID.



• If the user successfully logs in to the application then the user is said to be an authorized user.

5.5.5 App/Website User Interface Images:-

Initial Screen

• Smart parking providers will need to establish reliable application programming interfaces (APIs) that enable service partners to provide consumers with access to smart parking services on-line through a variety of channels, including the web, mobile phone apps and connected devices.
• The mobile app is developed using the Android bundle, with the Android Studio application platform. The application modules are registration, login, selection of date and duration, parking slot selection, price calculation and payment. The app also supports current booking and an advance booking option. If the booked vehicle does not enter the parking slot within the fifteen-minute threshold, the booking is automatically cancelled.

Login Page

• It depicts the screenshot of the Android mobile application login/register page.
• The user must have an account before login.
• Only an authorized user can log in.
• Free-slot identification is verified using infra-red (IR) sensors, one per parking slot. The IR sensor detects a vehicle from reflected infra-red waves and covers a short distance: a pulse of IR light is generated by the sensor and sent out by its emitter. The detection information is sent via a Wi-Fi module to the Arduino board, and the results are displayed on the LED screen.



Signup Page

• The user has to enter his details.
• The following details must be entered:
1. Name
2. Email ID
3. Phone number
4. Password
• After entering these details the account is created.



Booking details Page

• This page records the user's booking details, lane details and duration in hours, following the overall flow of the IoT-based smart parking management system.
• The following details need to be entered:
a. Car no.
b. Time in
c. Time out
d. Date

Available Slots

Details of available slots:-

• After selecting a slot the user needs to check the availability of that slot.
• The user can check the status of the slots with the help of green and red colour indications.
• Green indicates that the slot is empty, and red indicates that the slot is already allocated to another user.



Booking Confirmation Page

5.5.7 Check for Slot and its Status:-


• The user logs in to the application, where he can view the various parking slots in his location.
• The user selects the parking slot nearest to his destination.
• After selecting a slot the user checks the availability of that slot: green indicates the slot is empty, red indicates the slot is already allocated to another user.
• If an empty slot is available, the user can confirm the booking of his desired slot.
• After reserving a particular slot the user can proceed to the payment option, or else terminate the entire process.
• The system requires full payment in advance, either through a credit card or a debit card, so the user needs to give his card details to book his desired slot.



• After successful payment he receives a slot number, sent to both his mobile and his mail.
• After using a particular slot he can move out of the parking area by clearing his payment.
• He can check all the details in his account and can log out. The user can also leave feedback to share his experience.

5.5.8 Conclusion of Website/Application

The Smart Parking System is used to book parking slots without any great effort by the user, using an Android device. The user can check the status of the parking area and book a parking slot in advance. This overcomes many of the problems created by bad management of traffic. Mobile computing has proven to be a fruitful area of work for researchers in database and data management, so this application is built on the Android mobile OS. The application can be applied in every nook and corner owing to its easy usage and effectiveness.



CHAPTER 6

CODING

Create Connection:-

<?php

<?php
$conn = mysqli_connect("localhost", "id6458044_nvn", "Arduino@123", "id6458044_nvn");

// Check connection
if (mysqli_connect_errno()) {
    echo "Failed to connect to MySQL: " . mysqli_connect_error();
}
?>

Index.html

<?php

session_start();

if(isset($_SESSION['loggedin'])){

header("location:webpage/home.php");}

else{

session_destroy();

?>

<?php
require('webpage/conn.php');

session_start();

if (isset($_POST['user_mail'])){

$user_mail = stripslashes($_REQUEST['user_mail']);

//escapes special characters in a string

$user_mail = mysqli_real_escape_string($conn,$user_mail);

$user_password = stripslashes($_REQUEST['user_password']);

$user_password = mysqli_real_escape_string($conn,$user_password);

//Checking is user existing in the database or not

$query = "SELECT * FROM `userdetails` WHERE email='$user_mail'

and pass='".($user_password)."'";

$result = mysqli_query($conn,$query) or die(mysqli_error($conn));

$rows = mysqli_num_rows($result);

if($rows==1){

$_SESSION['user_mail'] = $user_mail;

$_SESSION['loggedin'] = true;

$sql = "SELECT phno FROM userdetails WHERE email = '$user_mail' ";

$sth = $conn->query($sql);

$r=mysqli_fetch_array($sth);

$_SESSION['phno'] = $r['phno'];

//mysqli_close($con);

// Redirect user to index.php



header("Location: webpage/home.php");

}else{

echo "

<div class='modal fade' id='myModal'>

<div class='modal-dialog'>

<div class='modal-content'>

<!-- Modal Header -->

<div class='modal-header'>

<h4 class='modal-title'>Notification</h4>

<button type='button' class='close' data-


dismiss='modal'>&times;</button>

</div>

<!-- Modal body -->

<div class='modal-body'>

Username/password is incorrect. Please try again.

</div>

<!-- Modal footer -->

<div class='modal-footer'>
<button type='button' class='btn btn-danger' data-
dismiss='modal'>Close</button>

</div>

</div>

</div>

</div>

";

}else{

?>

<!DOCTYPE html>

<html lang="en">

<head>

<title>Smart Parking</title>

<link rel="icon" type="image/ico" href="webpage/images/logo.jpg" />

<meta charset="utf-8">

<meta name="viewport" content="width=device-width, initial-scale=1">

<link rel="stylesheet"
href="https://maxcdn.bootstrapcdn.com/bootstrap/4.1.0/css/bootstrap.min.css">



<script
src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>

<script
src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.0/umd/popper.min.js"></script>

<script
src="https://maxcdn.bootstrapcdn.com/bootstrap/4.1.0/js/bootstrap.min.js"></script>

<style type="text/css">

body, html {
  height: 100%;
}

.bg {
  /* The image used */
  background-image: url("webpage/images/black_net.jpg");

  /* Full height */
  height: 127%;

  /* Center and scale the image nicely */
  background-position: center;
  background-repeat: repeat;
  background-size: cover;
}

</style>



</head>

<body class="container p-2 bg" style="height:127%">

<div class="row">

<div class="col-sm-3"></div>

<div class="col-sm-6">

<div class="container Regular shadow rounded" style=" ">

<div class="form-group ">

<img src="webpage/images/logo.jpg" class="rounded-circle mx-auto d-block


img-fluid"/>

<h1 style="text-align:center; color:white">Log In</h1>

<form action="" method="POST" name="login">

<div class="form-group">

<input type="text" class="form-control mb" placeholder="Email


Address" name="user_mail" id="phone" required>

</div>

<div class="form-group">

<input type="password" class="form-control" placeholder="Password"


name="user_password" id="pwd" required>

</div>

<button type="submit" name="submit" class="btn btn-primary mr-2 mb-2


form-control">LOG IN</button>



<a href="webpage/singup.php" class="btn btn-danger mr-2 mb-2 form-control">SIGNUP</a>

</form>

</div>

</div>

</div>

<div class="col-sm-3"></div>

</div>

</body>

<script type='text/javascript'>

$(window).on('load',function(){

$('#myModal').modal('show');

});

</script>

<script>

// Prevent the login POST from being re-submitted when the page is refreshed.
if ( window.history.replaceState ) {
  window.history.replaceState( null, null, window.location.href );
}

</script>

</html>



Arduino.txt

int LED = 13;             // Use the onboard Uno LED
int obstaclePin1 = 7;     // Input pin for the first IR obstacle sensor
int hasObstacle1 = HIGH;  // HIGH means no obstacle
int obstaclePin2 = 8;     // Input pin for the second IR obstacle sensor
int hasObstacle2 = HIGH;

void setup() {
  pinMode(LED, OUTPUT);
  pinMode(obstaclePin1, INPUT);
  pinMode(obstaclePin2, INPUT);
  Serial.begin(9600);
}

void loop() {
  // Read the obstacle sensors from digital pins 7 and 8.
  hasObstacle1 = digitalRead(obstaclePin1);
  hasObstacle2 = digitalRead(obstaclePin2);
  if ((hasObstacle1 == LOW) && (hasObstacle2 == LOW)) {
    // LOW means something is ahead of both sensors.
    Serial.println("A");
    digitalWrite(LED, HIGH);  // Illuminate the pin-13 LED
  } else if ((hasObstacle1 == LOW) && (hasObstacle2 == HIGH)) {
    // Only the first sensor detects an obstacle.
    Serial.println("B");
    digitalWrite(LED, HIGH);
  } else if ((hasObstacle1 == HIGH) && (hasObstacle2 == LOW)) {
    // Only the second sensor detects an obstacle.
    Serial.println("C");
    digitalWrite(LED, HIGH);
  } else {
    // Both HIGH: no obstacle, turn the LED off.
    Serial.println("D");
    digitalWrite(LED, LOW);
  }
  delay(200);
}
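On the host side, the single-letter codes printed over serial can be decoded into per-slot occupancy. The following Python sketch shows the mapping implied by the if/else ladder above; how the letters are read from the board is left to the caller.

```python
# Decode the one-letter status codes the Arduino prints over serial.
# The mapping mirrors the if/else ladder in the sketch above.

# Letter -> (slot 1 occupied, slot 2 occupied); LOW on a sensor means
# an obstacle (a parked car) is present in that slot.
STATUS_CODES = {
    "A": (True, True),    # both sensors LOW
    "B": (True, False),   # only sensor 1 LOW
    "C": (False, True),   # only sensor 2 LOW
    "D": (False, False),  # both sensors HIGH: both slots free
}

def decode_status(code: str):
    """Return (slot1_occupied, slot2_occupied) for one status letter."""
    letter = code.strip().upper()
    if letter not in STATUS_CODES:
        raise ValueError(f"unknown status code: {code!r}")
    return STATUS_CODES[letter]
```

With pyserial, each line read from the board at 9600 baud (the port name, e.g. /dev/ttyUSB0, is an assumption that depends on the host) would be passed to decode_status before updating the slot colours in the database.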



CHAPTER 7
TESTING OF PRODUCT

Testing basically means cross-checking whether the product generated after processing is as estimated. It is the process of evaluating a software item to verify that the actual output for a given input matches the expected output, and to assess the features of the software item. Testing ensures the quality of the product and should be carried out throughout the development process. Verification makes sure that the product satisfies the conditions imposed at the start of the development phase; validation makes sure that the product is built as per customer requirements.

7.1 TYPES OF TESTING:-

7.1.1 UNIT TESTING:-

Unit testing is a technique in which the developer tests individual modules in isolation to determine whether they contain any defects. It is concerned with the functional correctness of the standalone modules.

The main aim is to isolate each unit of the system so that defects can be identified, analysed and fixed early.

7.1.1.1 Unit Testing - Advantages:



 Reduces defects in newly developed features, and reduces bugs when changing existing functionality.

 Reduces the cost of testing, as defects are captured at a very early phase.

 Improves design and allows better refactoring of code.

 Unit tests, when integrated with the build, also indicate the quality of the build.
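As a sketch of the technique, the following unit test isolates a single hypothetical slot-booking function; both the function and its data are illustrative stand-ins, not the project's actual code.

```python
import unittest

def book_slot(slots, slot_id):
    """Mark slot_id as booked; reject unknown or already-booked slots."""
    if slot_id not in slots:
        raise KeyError(f"no such slot: {slot_id}")
    if slots[slot_id] == "booked":
        raise ValueError(f"slot already booked: {slot_id}")
    slots[slot_id] = "booked"
    return slots

class BookSlotTest(unittest.TestCase):
    # Each test exercises the unit in isolation, with no database or UI.
    def test_books_an_empty_slot(self):
        slots = {"S1": "empty", "S2": "empty"}
        self.assertEqual(book_slot(slots, "S1")["S1"], "booked")

    def test_rejects_double_booking(self):
        with self.assertRaises(ValueError):
            book_slot({"S1": "booked"}, "S1")
```

Run with `python -m unittest` to execute both cases.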

7.1.1.2 Unit Testing Lifecycle:

Figure 9

7.1.1.3 Unit Testing Techniques:

 Black Box Testing - tests the user interface, inputs and outputs.

 White Box Testing - tests the behaviour of each individual function.

 Gray Box Testing - combines elements of both; used to execute tests and to drive risk and assessment methods.



7.1.2 INTEGRATION TESTING:

Integration testing (sometimes called integration and testing, abbreviated I&T) is the
phase in software testing in which individual software modules are combined and tested
as a group. It occurs after unit testing and before validation testing. Integration testing
takes as its input modules that have been unit tested, groups them in larger aggregates,
applies tests defined in an integration test plan to those aggregates, and delivers as its
output the integrated system ready for system testing.

7.1.2.1 Purpose

Some different types of integration testing are big-bang, mixed (sandwich), risky-hardest, top-down, and bottom-up. Other integration patterns are: collaboration integration, backbone integration, layer integration, client-server integration, distributed services integration and high-frequency integration.

In the big-bang approach, most of the developed modules are coupled together to form a
complete software system or major part of the system and then used for integration
testing. This method is very effective for saving time in the integration testing process.
However, if the test cases and their results are not recorded properly, the entire integration
process will be more complicated and may prevent the testing team from achieving the
goal of integration testing.

Bottom-up testing is an approach to integrated testing where the lowest level components
are tested first, then used to facilitate the testing of higher level components. The process
is repeated until the component at the top of the hierarchy is tested.

All the bottom or low-level modules, procedures or functions are integrated and then
tested. After the integration testing of lower level integrated modules, the next level of
modules will be formed and can be used for integration testing. This approach is helpful
only when all or most of the modules of the same development level are ready. This
method also helps to determine the levels of software developed and makes it easier to
report testing progress in the form of a percentage.



Top-down testing is an approach to integrated testing where the top integrated modules
are tested and the branch of the module is tested step by step until the end of the related
module. Sandwich testing is an approach to combine top down testing with bottom up
testing.

7.1.3 FUNCTIONAL TESTING:-

Functional testing is a quality assurance (QA) process and a type of black-box testing that bases its test cases on the specifications of the software component under test. Functions are tested by feeding them input and examining the output, and internal program structure is rarely considered (unlike in white-box testing). Functional testing usually describes what the system does.

Functional testing does not imply that you are testing a function (method) of your module
or class. Functional testing tests a slice of functionality of the whole system.

Functional testing differs from system testing in that functional testing "verifies a program by checking it against ... design documents or specifications", while system testing "validates a program by checking it against the published user or system requirements".

Functional testing has many types:

 Smoke testing

 Sanity testing

 Regression testing

 Usability testing

Functional testing typically involves six steps:-

1. The identification of functions that the software is expected to perform

2. The creation of input data based on the function's specifications



3. The determination of output based on the function's specifications

4. The execution of the test case

5. The comparison of actual and expected outputs

6. A check of whether the application works as per the customer's needs.
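The six steps above can be sketched as a tiny test driver. `check_slot_status` is a hypothetical stand-in for the system function under test; the inputs and expected outputs stand in for values taken from the specification.

```python
# Minimal functional-test driver: feed specified inputs, compare outputs.
def check_slot_status(slot_state: str) -> str:
    """Hypothetical system function: map a stored state to a display colour."""
    return {"empty": "green", "booked": "red"}.get(slot_state, "unknown")

# Steps 1-3: function identified; input data and expected output taken
# from the specification (empty slots show green, booked slots show red).
test_cases = [("empty", "green"), ("booked", "red")]

# Steps 4-6: execute each case, compare actual with expected, report.
for given_input, expected in test_cases:
    actual = check_slot_status(given_input)
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"{given_input!r} -> {actual!r} (expected {expected!r}): {verdict}")
```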

7.1.4 BLACK BOX TESTING:

Black-box testing is a testing strategy that ignores the internal mechanism of a system or
component and focuses solely on outputs generated in response to selected inputs and
execution conditions. In black box testing, the structure of the program is not taken into
consideration. It takes into account functionality of the application only. It is also called
functional testing. Tester is mainly concerned with the validation of the output rather
than how the output is produced. Knowledge of programming or implementation logic
(of internal structure and working) is not required for testers. It is applicable mainly at the higher levels of testing - System Testing and Acceptance Testing.

The software into which known inputs are fed and from which known outputs are expected is termed a black box. The transformation of the known inputs to the known outputs is done by the system and is not checked in this kind of testing; this transformation process is what makes the system a black box.
In this kind of testing, the testers concentrate on functional testing, that is, on providing
a known input and check if the known output is obtained. This method is generally
followed while carrying out acceptance testing, when the end user is not a software
developer but only a user.
It is different from white box testing in the sense that in white box testing, the tester
ought to have the programming knowledge and understanding of code to test the
application whereas it may not be the case in black box testing.

Techniques that are used in black box testing are:



1. Boundary-value analysis
2. Error guessing
3. Syntax testing
4. State transition testing
5. Equivalence partitioning
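Two of these techniques - equivalence partitioning and boundary-value analysis - can be sketched against a hypothetical validator for slot numbers 1 to 10 (the range is an assumption for illustration):

```python
# The input space splits into three equivalence partitions: below range,
# in range, and above range. Boundary values sit at the partition edges.
def is_valid_slot(n: int, total_slots: int = 10) -> bool:
    """A slot number is valid when 1 <= n <= total_slots."""
    return 1 <= n <= total_slots

# Boundary values for the 1..10 range plus one representative per partition.
cases = {
    0: False,   # just below the lower boundary
    1: True,    # lower boundary
    5: True,    # representative of the valid partition
    10: True,   # upper boundary
    11: False,  # just above the upper boundary
}
for value, expected in cases.items():
    assert is_valid_slot(value) == expected
```

Testing the boundaries plus one representative per partition covers the input space without enumerating all ten valid values.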

7.1.5 SYSTEM TESTING

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black-box testing, and as such should require no knowledge of the inner design of the code or logic.

As a rule, system testing takes as its input all of the "integrated" software components that have passed integration testing, together with the software system itself integrated with any applicable hardware systems. The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and within the system as a whole.

7.1.6 ACCEPTANCE TESTING

In engineering and its various sub disciplines, acceptance testing is a test conducted to
determine if the requirements of a specification or contract are met. It may
involve chemical tests, physical tests, or performance tests. In systems engineering it
may involve black-box testing performed on a system prior to its delivery. In software
testing the ISTQB defines acceptance as: formal testing with respect to user needs,



requirements, and business processes conducted to determine whether a system satisfies
the acceptance criteria and to enable the user, customers or other authorized entity to
determine whether or not to accept the system. Acceptance testing is also known as user acceptance testing (UAT), end-user testing, operational acceptance testing (OAT), or field (acceptance) testing. A smoke test may be used as an acceptance test prior to introducing a build of software to the main testing process.

The acceptance test suite may need to be performed multiple times, as all of the test cases
may not be executed within a single test iteration.

The acceptance test suite is run using predefined acceptance test procedures to direct the
testers which data to use, the step-by-step processes to follow and the expected result
following execution. The actual results are retained for comparison with the expected
results. If the actual results match the expected results for each test case, the test case is
said to pass. If the quantity of non-passing test cases does not breach the project's
predetermined threshold, the test suite is said to pass. If it does, the system may either be
rejected or accepted on conditions previously agreed between the sponsor and the
manufacturer.

The anticipated result of a successful test execution:

 test cases are executed, using predetermined data

 actual results are recorded

 actual and expected results are compared, and

 test results are determined.
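The suite-level pass rule described above - the suite passes when the number of non-passing cases stays within the project's predetermined threshold - can be sketched as follows (the threshold value is whatever the sponsor and manufacturer agreed):

```python
# Suite-level acceptance decision: tolerate at most max_failures failing cases.
def suite_passes(results, max_failures=0):
    """results: iterable of booleans, one per test case (True = passed)."""
    failures = sum(1 for passed in results if not passed)
    return failures <= max_failures

assert suite_passes([True, True, True]) is True
assert suite_passes([True, False], max_failures=0) is False
assert suite_passes([True, False], max_failures=1) is True
```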

7.2 Testing

Eclipse is used for coding. A coloured slot shows whether each slot is empty or booked; a slot is booked by applying computational techniques on the web host.



Fig: All Empty Slots

Every slot in the database is empty until a user books it; once a slot is booked, its colour changes from green to red.

The slot images appear in different colours whenever we book a slot, so booked and empty slots can be recognised at a glance.

Booked/empty slot image



CHAPTER: 8

FUTURE SCOPE

 We can add a payment gateway for an online payment service.

 The system can be scaled to multi-level and multiple parking areas by making suitable changes to the hardware setup.

 SMS sent through the Android application can be secured by applying encryption algorithms. Also, for security purposes, a login facility can be provided to the users.

 For security purposes we can also add digital image processing (DIP) to recognise the car number from the number plate and create a record.

 Conveyor belts can be used to move cars horizontally without the need for a driver, with the car engine off, the car in parking mode, and the doors locked.



CHAPTER 9
Snapshots

S1: Selecting the webpage folder



S1.1: Selecting the image (Normal)

S1.2: Authentication image



S1.3: Slot booking image (45°)

S1.4: Slot confirmation image



S1.5: Connection image

S1.6: Credential image



S1.7: Homepage image (Normal)

S1.8: Homepage Connectivity image



S1.9: Logout image

S1.10: Signup image



S6.1: Selecting the image (45°)

S6.2: Equivalent image



S7.1: Selecting the image (45°)

S7.2: Equivalent image



S8.1: Selecting the image (Normal)

S8.2: Equivalent image



S9.1: Selecting the image (180°)

S9.2: Equivalent image



CHAPTER 10

CONCLUSION

The Smart Parking System lets a user book parking slots from an Android device without any great effort. The user can check the status of the parking area and book a parking slot in advance. This will overcome many of the problems created by bad management of traffic. Mobile computing has proven to be one of the best areas of work for researchers in database and data management, so this application is built on the Android mobile OS. The application can be deployed in every nook and corner owing to its ease of use and effectiveness.

CHAPTER 11

REFERENCES



1. Ngo, H. T., Gottumukkal, R., and Asari, V. K., "A flexible and efficient hardware architecture for real-time face recognition based on eigenface," in IEEE Computer Society Annual Symposium on VLSI, 2005.

2. Boualleg, A. H., Bencheriet, Ch., and Tebbikh, H., "Automatic face recognition using neural network-PCA," in Information and Communication Technologies (ICTTA'06), vol. 1, pp. 1920-1925, IEEE, 2006.

3. Delac, K., Grgic, M., and Grgic, S., "Independent comparative analysis of PCA, ICA, and LDA on the FERET data set," International Journal of Imaging Systems and Technology, 2005.

4. Sajid, I., Ahmed, M. M., Taj, I., Humayun, M., and Hameed, F., "Design of high performance FPGA based face recognition system," in PIERS Proceedings, Cambridge, USA, July 2-6, 2008, pp. 504-510.

5. Visakhasart, S., and Chitsobhuk, O., "Multi-pipeline architecture for face recognition on FPGA," in International Conference on Digital Image Processing, 2009, pp. 152-156.

6. Karim, T. F., Lipu, M. S. H., Rahman, M. L., and Sultana, F., "Face recognition using PCA-based method," in Advanced Management Science (ICAMS), 2010 IEEE International Conference on, vol. 3, pp. 158-162.

7. Matai, J., Irturk, A., and Kastner, R., "Design and implementation of an FPGA-based real-time face recognition system," in IEEE International Symposium on Field-Programmable Custom Computing Machines, 2011, pp. 97-100.

8. Mohod, P. S., and Jondhale, K. C., "Face recognition using PCA," International Journal of Artificial Intelligence and Knowledge Discovery, vol. 1, no. 1, 2011, pp. 25-28.

9. Ebied, R. M., "Feature extraction using PCA and Kernel-PCA for face recognition," in The 8th International Conference on INFOrmatics and Systems (INFOS), Computational Intelligence and Multimedia Computing Track, 2012, pp. MM-72-MM-77.

10. Abdullah, M., Wazzan, M., and Bo-saeed, S., "Optimizing face recognition using PCA," arXiv preprint arXiv:1206.1515, 2012.

11. Yuille, A. L., Hallinan, P. W., and Cohen, D. S., "Feature extraction from faces using deformable templates," International Journal of Computer Vision, vol. 8, no. 2, 1992, pp. 99-111.

12. Kirby, M., and Sirovich, L., "Application of the Karhunen-Loeve procedure for the characterization of human faces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 1, 1990, pp. 103-108.

