1.1 General Introduction: Face Recognition System
INTRODUCTION
The face is a complex multidimensional structure and needs good computing techniques for recognition. It is our primary and first focus of attention in social life, playing an important role in conveying an individual's identity. We can recognize a large number of faces learned throughout our lifespan and identify familiar faces at a glance, even after years.
Faces may vary due to aging and distractions such as a beard, glasses or a change of hairstyle. Facial features are extracted and processed by efficient algorithms, and modifications are made to improve the existing algorithm models.
Computers that detect and recognize faces can be applied to a wide variety of practical applications, including criminal identification, security systems and identity verification. Face detection and recognition are used in many places nowadays, such as image-hosting websites and social networking sites, and can be achieved using techniques from computer science.
Features extracted from a face are processed and compared with similarly processed faces present in the database. If a face is recognized, it is known (or the system may show a similar face existing in the database); otherwise it is unknown. In a surveillance system, if an unknown face appears more than once it is stored in the database for further recognition. These steps are very useful in criminal identification. In general, face recognition techniques can be divided into two groups based on the face representation they use: appearance-based methods, which use holistic texture features applied to either the whole face or specific regions of it, and feature-based methods, which use geometric facial features and their relations.
The proposed technique is based on coding and decoding of face images, with emphasis on the significance of local and global facial features. In the proposed method, the relevant information in a face image is extracted, encoded and compared with a database of face models, and then classified as known or unknown.
1.2 OBJECTIVE:
The goal is to implement a system (model) that recognizes a particular face and distinguishes it from a large number of stored faces, handling some real-time variations as well.
1.3 SCOPE:
The accuracy of the face detection system should not be compromised even with slight changes in the face.
Illumination conditions should not affect the face detection process or limit its efficiency.
The time taken for training and classification of images depends upon the size of the database. In this project we work with a database of limited size for the recognition process, but the time taken increases with the size and complexity of the images.
We prepared our own dataset, comprising images taken under different conditions, to verify the proper working of the face recognition system.
CHAPTER 2
PRELIMINARY STUDY
This chapter provides a detailed survey of face recognition research. There are two
underlying motivations to present this survey: the first is to provide an up-to-date review
of the existing literature, and the second is to offer some insights into the studies of
machine recognition of faces. To provide a comprehensive survey, existing recognition
techniques of face recognition are categorized and detailed descriptions of representative
methods within each category are presented. In addition, relevant topics such as
psychophysical studies, system evaluation, issues of illumination and pose variation are
covered.
Automated face recognition was developed in the 1960s. The first semi-automated system for face recognition required the administrator to locate features such as the eyes, ears, nose and mouth on the photographs before it calculated the distances and ratios to a common reference point, which were then compared to the reference data. In the 1970s, the problem with both of these early solutions was that the measurements and locations were computed manually. In 1990, Kirby and Sirovich applied Principal Component Analysis, a standard linear algebraic technique, to the face recognition problem; this was considered a milestone. They showed that fewer than one hundred values were required to accurately code a suitably aligned and normalized face image.
As a result of many studies, scientists concluded that face recognition is not like other object recognition. Face recognition is one of the few biometric methods that possess the merits of both high accuracy and low intrusiveness: it has the accuracy of a physiological approach without being intrusive. For this reason, since the early seventies face recognition has drawn the attention of researchers in fields ranging from security and psychology to image processing. A further driver is the increasing need for surveillance-related applications due to terrorist and drug-trafficking activities.
In 1991, Turk and Pentland discovered that, while using the eigenfaces technique, the residual error could be used to detect faces in images, a discovery that enabled reliable real-time automated face recognition systems. This demonstration initiated much-needed analysis of how to use the technology to support national needs while being considerate of the public's social and privacy concerns.
Critics of the technology complain that the London Borough of Newham scheme has, as
of 2004, never recognized a single criminal, despite several criminals in the system's
database living in the Borough and the system having been running for several years.
In 2006, the performance of the latest face recognition algorithms was evaluated in the
Face Recognition Grand Challenge (FRGC). High-resolution face images, 3-D face
scans and iris images were used in the tests. The results indicated that the new algorithms
are 10 times more accurate than the face recognition algorithms of 2002 and 100 times
more accurate than those of 1995. Some of the algorithms were able to outperform
human participants in recognizing faces and could identify even identical twins.
Tolba et al (2006) have reported an up-to-date review of major human face recognition
research in “Face recognition: a literature review, methods and technologies of face
recognition”. A literature review of the most recent face recognition techniques is
presented. Description and limitations of face databases which are used to test the
performance of these face recognition algorithms are given.
This face recognition problem is made difficult by the great variability in head rotation
and tilt, lighting intensity, angle, facial expression, aging etc. Some other attempts at
facial recognition by machine have allowed for little or no variability in these quantities.
Yet, the method of correlation or pattern matching of unprocessed data, which is often
used by some researchers, is certain to fail in cases where the variability is great. In
particular, the correlation is very low between two pictures of the same person with two
different head rotations.
Modern face recognition has reached an identification rate greater than 90% under well-controlled pose and illumination conditions. The task of recognizing faces has attracted much attention both from neuroscientists and from computer vision scientists. While network security and access control are its most widely discussed applications, face recognition has also proven useful in other multimedia information processing areas.
Numerous algorithms have been proposed for face recognition; Chellappa et al (1995),
Zhang et al (1997) and Chan et al (1998) use face recognition techniques to browse video
database to find out shots of particular people. Haibo Li et al (1993) code the face images
with a compact parameterized facial model for low-bandwidth communication
applications such as videophone and teleconferencing. Recently, as the technology has
matured, commercial products have appeared on the market.
Turk et al (1991) developed Principal Component Analysis (PCA) technique for Face
recognition to solve a set of faces using Eigen values. Rama Chellappa et al (2003) have
dealt with the feature based method using statistical, structural and neural classifiers for
Human and Machine Recognition of Faces. Krishnaswamy et al (1998) proposed
automatic face recognition using Linear Discriminant Analysis (LDA) of Human faces.
Chengjun Liu and Harry Wechsler (2002) presented new coding schemes, the
Probabilistic Reasoning Modes (PRM) and Enhanced Fisher linear discriminant Models
(EFM) for indexing and retrieval from large image databases. Michael Bromby (2003)
has presented a new form of Forensic identification-facial biometrics, using
computerized identification.
Joss Beveridge et al (2003) provided the PCA and LDA algorithms for face recognition.
A detailed Literature Survey of Face Recognition and Reconstruction Techniques were
given by Roger Zhang and Henry Chang (2005).
Vytautas Perlibakas (2004) has reported a method in “Face Recognition Using Principal
Component Analysis and Wavelet Packet Decomposition” which allows using PCA
based face recognition with a large number of training images and performing training
much faster than using the traditional PCA based method.
Kyungnam Kim (1998) has proposed PCA to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables) needed to describe the data economically, in "Face Recognition using Principal Component Analysis". The original face can then be reconstructed from a weighted combination of the eigenfaces.
Jun-Ying et al (2005) have combined the characteristics of PCA with LDA. This
improved method is based on normalization of within-class average face image, which
has the advantages of enlarging classification distance between different-class samples.
Experiments were done on ORL (Olivetti Research Laboratory) face database. Results
show that 98% of correct recognition rate can be acquired and a better efficiency can be
achieved by the improved PCA method.
El-Bakry (2007) has proposed a new PCA implementation for fast face detection based
on the cross-correlation in the frequency domain between the input image and
eigenvectors (weights) in “New Fast Principal Component Analysis for Face Detection”.
This search is realized using cross-correlation in the frequency domain between the
entire input image and eigenvectors. This increases detection speed over normal PCA
algorithm implementation in the spatial domain.
Xiaoxun Zhang and Yunde Jia (2007) have explained the principal subspace, the optimal
reduced dimension of the face sample in “A linear Discriminant analysis framework
based on random subspace for face recognition Pattern Recognition” to construct a
random subspace where all the discriminative information in the face space is distributed
in the two principal subspaces of the within-class and between-class matrices.
Moshe Butman and Jacob Goldberger (2008) have introduced a face recognition
algorithm in “Face Recognition Using Classification-Based Linear Projections” based
on a linear subspace projection. The subspace is found via utilizing a variant of the
neighborhood component analysis (NCA) algorithm which is a supervised
dimensionality reduction method that has been recently introduced.
Hui Kong Lei Wang et al (2005) have explained in their paper, “Framework of 2D Fisher
Discriminant Analysis: Application to Face Recognition with Small Number of Training
Samples” that 2D Fisher Discriminant Analysis (2D-FDA) is different from the 1D-LDA
based approaches. 2D-FDA is based on 2D image matrices rather than column vectors
so the image matrix does not need to be transformed into a long vector before feature
extraction which contains unilateral and bilateral 2D-FDA.
Yanwei Pang et al (2004) have proposed “A Novel Gabor-LDA Based Face Recognition
Method” in which face recognition method based on Gabor-wavelet with linear
Discriminant analysis (LDA) is presented. These are used to determine salient local
features, the positions of which are specified by the Discriminant pixels. Because the
numbers of discriminant pixels are much less than those of the whole image, the amount
of Gabor Wavelet coefficients is decreased.
Xiang et al (2004) have reported in “Face Recognition using recursive Fisher Linear
Discriminant with Gabor wavelet coding” that the constraint on the total number of
features available from Fisher Linear Discriminant (FLD) has seriously limited its
application to a large class of problems. In order to overcome this disadvantage of FLD, a recursive procedure for calculating the discriminant features is suggested. Work is currently in progress to study the various design issues of face recognition; the objective is to achieve a 99% accuracy rate for identity recognition on all the widely used databases, and at least 80% accuracy for facial expression recognition on the Yale database.
Chengjun Liu and Harry Wechsler (2002) have reported in “Gabor Feature Based
Classification (GFC) using the Enhanced Fisher Linear Discriminant Model for Face
Recognition” that the feasibility of the proposed GFC method has been successfully
tested on face recognition using a data set from the FERET database, which is a standard
testbed for face recognition technologies.
Rowley et al (1998) have provided a neural network-based upright frontal face detection
system in “Neural Network-Based Face Detection”. To collect negative examples, a
bootstrap algorithm is used, which adds false detections into the training set, as training
progresses.
Dmitry Bryliuk and Valery Starovoitov (2002) in “Access Control by Face Recognition
Using Neural Networks” have considered a Multilayer Perceptrons Neural Network
(NN) for access control based on face image recognition. The robustness of NN
classifiers with respect to the False Acceptance and False Rejection errors is studied. A
new thresholding approach for rejection of unauthorized persons is proposed.
Keun-Chang Kwak et al (2007) employed Fisher-based fuzzy integral and wavelet decomposition methods for face recognition at the University of Alberta. Shiguang Shan et al (2004) dealt with Gabor wavelets for face recognition from the angle of their robustness to misalignment.
Jun Zhang et al (1997) have compared three recently proposed algorithms for face
recognition: eigenfaces, auto association and classification neural nets, and elastic
matching in “Face Recognition: Eigenfaces, Elastic Matching, and Neural Nets”.
Smach et al (2005) have implemented a classifier based on MLP (Multi-Layer Perceptron) neural networks for face detection in "Design of a Neural Networks Classifier for Face Detection". The MLP is used to classify face and non-face patterns, and a hardware implementation is then achieved using a VHDL-based methodology. The system was implemented in VHDL and synthesized using the Leonardo synthesis tool. The model's robustness was obtained with a back-propagation learning algorithm.
Gaile (1992) has explored the use of morphological operators for feature extraction in
range images and curvature maps of the human face in “Application of Morphology to
Feature Extraction for Face Recognition”. This paper has described general procedures
for locating features defined by the configuration of extrema in principal curvature. A
novel connection technique based on the concept of constrained skeleton was also
introduced in this paper. This technique being based on a proximity rule defined by a
structuring element, it could be used successfully for a variety of applications.
Sushmita Mitra and Sankar (2005) have explained that Fuzzy sets are well-suited to
modeling different forms of uncertainties and ambiguities, often encountered in real life
in “Fuzzy sets in pattern recognition and machine intelligence”. Fuzzy set theory is the
oldest and most widely reported component of present day soft computing, which deals
with the design of flexible information processing systems.
Vonesch et al (2005) have illustrated the flexibility of the proposed design method in
“Generalized bi-orthogonal Daubechies wavelets”. Most importantly, it is possible to
incorporate a priori knowledge on the characteristics of the signals to be analyzed into
the approximation spaces, via the exponential parameters.
Alaa Eleyan and Hasan Demirel (2007) have proposed PCA- and LDA-based neural networks for human face recognition. Lekshmi and Sasikumar (2009) have analysed both global and local information for facial expression recognition.
Manjunath and Ma (1996) have suggested, in "Texture Features for Browsing and Retrieval of Image Data", a novel adaptive filter selection strategy for Gabor wavelets that reduces the image processing computations while maintaining a reasonable level of retrieval performance.
Hossein Sahoolizadeh et al (2008) have proposed a new hybrid method of Gabor wavelet faces using an extended NFS classifier in "Face Detection using Gabor Wavelets and Neural Networks". Down-sampled Gabor wavelet transforms of face images are used as features; for face recognition, the subspace approach is superior to the pixel-value approach.
Yousra Ben Jemaa and Sana Khanfir (2009) have discussed in "Automatic local Gabor features extraction for face recognition" that a face is represented by its own Gabor coefficients computed at fiducial points (the points of the eyes, mouth and nose). Three feature vectors are used: the first is composed of geometrical distances automatically extracted between the fiducial points, the second of the responses of Gabor wavelets applied at the fiducial points, and the third of the combined information of the previous vectors.
From the literature review, it is found that in 1970s, typical pattern classification
techniques were used to measure attributes between features in faces or face profiles.
During the eighties, work on face recognition remained largely dormant. Since the early
nineties, research interest in Face Recognition Technology has grown very significantly.
Over the last ten years, increased activity has been seen in tackling problems such as
segmentation and location of a face in a given image and extraction of features such as
eyes, mouth, etc. Also, numerous advancements have been made in the design of
statistical and neural network classifiers for face recognition. There are many methods
that have been proposed in the literature for the facial recognition task. However, all of them still have disadvantages, such as an incomplete reflection of the face structure and sensitivity to real-world variations. Development of face recognition over the past years allows an organization into three types of recognition algorithms, namely frontal, profile, and view-tolerant recognition, depending on the kind of images and the recognition algorithms. While
frontal recognition certainly is the classical approach, view-tolerant algorithms usually
perform recognition in a more sophisticated fashion by taking into consideration some
of the underlying physics, geometry and statistics. Profile schemes as stand-alone
systems have a rather marginal significance for identification. However, they are very
practical either for fast coarse pre-searches of large face database to reduce the
computational load for a subsequent sophisticated algorithm, or as part of a hybrid
recognition scheme.
Literature:
Some features of the face or image subspace may be simultaneously invariant to all the variations that a face image may exhibit.
Given more training images, almost any technique will do better, and the number of test images will decide the performance. These two factors are the major reasons why face recognition is not widely used in real-world applications.
The commercial applications range from static matching of photographs on credit cards,
ATM cards, passports, driver's licenses and photo ID to real-time matching with still images
or video image sequences for access control.
REQUIREMENT ANALYSIS
Memory : 4 GB / 8 GB
Disk space : 2 GB of HDD space for MATLAB only, 4-6 GB for a typical installation

macOS High Sierra (10.13) / macOS Sierra (10.12) / macOS El Capitan (10.11)
Memory : 4 GB / 8 GB
Disk space : 2.5 GB of HDD space for MATLAB only, 4-6 GB for a typical installation

Ubuntu 17.10 / Ubuntu 16.04 LTS / Ubuntu 14.04 LTS
The basic functional requirement of this project is to provide a platform for a person to perform a recognition search using a given test image.
The search must run fast enough that analysis of the database and any changes made in the system are reflected immediately.
The input database shall not be modified in any aspect during processing.
Only an authenticated person can modify the database as per the requirement.
The face images of the dataset are available only to authorized users.
3.4.3 Usability:-
Even a naïve user can easily go through the system and learn to handle its functionalities.
3.4.5 Availability:-
The project could be used at any time with minimum specified requirements.
3.5.1 WINDOWS 7:
3.5.2 MATLAB:
3.5.2.1 Structures
MATLAB has structure data types. Since all variables in MATLAB are arrays, a
more adequate name is "structure array", where each element of the array has the
same field names. In addition, MATLAB supports dynamic field names (field look-
ups by name, field manipulations, etc.). Unfortunately, MATLAB JIT does not
support MATLAB structures, therefore just a simple bundling of various variables
into a structure will come at a cost.
3.5.2.2 Functions
One of the simplest and most effective PCA approaches is used in this face recognition system: the eigenface approach. This approach transforms faces into a small set of essential characteristics, eigenfaces, which are the principal components of the initial set of learning images (the training set). Recognition is done by projecting a new image into the eigenface subspace, after which the person is classified by comparing his or her position in eigenface space with the positions of known individuals. The advantage of this approach over other face recognition systems is its simplicity, speed and insensitivity to small or gradual changes in the face. The approach is limited in the images it can use to recognize faces: the images must be vertical frontal views of human faces. The whole recognition process involves two steps, namely initialization and recognition. The initialization process involves the following operations:
i) Acquire the initial set of face images, called the training set.
ii) Calculate the eigenfaces from the training set, keeping only the M eigenfaces with the highest eigenvalues. These M images define the face space.
iii) Calculate the distribution in this M-dimensional space for each known person by projecting his or her face images onto the face space.
Having initialized the system, recognition involves the following steps:
i) Calculate a set of weights by projecting the new input image onto each of the eigenfaces.
ii) Determine whether the image is a face at all (known or unknown) by checking whether it lies sufficiently close to the face space.
iii) If it is a face, classify the face image as either a known person or unknown.
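The initialization and recognition steps can be sketched in Python/NumPy as an illustration. The project itself is implemented in MATLAB; the function names and the distance threshold below are illustrative assumptions, not the project's code:

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_images, n_pixels) matrix; keep the k eigenfaces with largest eigenvalues."""
    mean_face = faces.mean(axis=0)
    A = faces - mean_face                      # centre the training set
    # Small-sample trick: eigendecompose A A^T (n x n) instead of A^T A (pixels x pixels)
    eigvals, V = np.linalg.eigh(A @ A.T)
    order = np.argsort(eigvals)[::-1][:k]      # largest eigenvalues first
    eigenfaces = (A.T @ V[:, order]).T         # map back to pixel space
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    weights = A @ eigenfaces.T                 # each known face projected into face space
    return mean_face, eigenfaces, weights

def recognize(image, mean_face, eigenfaces, weights, threshold):
    w = (image - mean_face) @ eigenfaces.T     # step i: project the probe into face space
    dists = np.linalg.norm(weights - w, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None   # None means unknown face
```

Classification here is a simple nearest-neighbour comparison of weight vectors; a real deployment would tune the threshold on held-out images.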
4.2 Preprocessing
The aim of data pretreatment (transformation and preprocessing) before PCA or other multivariate analysis is to mathematically remove the sources of unwanted variation; these variations cannot be removed naturally during the data analysis.
The Principal Component Analysis (PCA) is a method based on spectral analysis of the
matrix of coefficients of linear correlation. The principal components are linear
combinations of the original variables of the data table analyzed. This descriptive method
has been developed for the detection of linear relations between variables.
However, if the relationships between the variables analyzed are not linear, the values of
correlation coefficients can be lower. Thus, it is sometimes useful to transform the
original variables prior to the Principal Component Analysis to "linearize" these
relationships.
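The effect of such a linearizing transform can be illustrated with a small Python/NumPy sketch (an illustrative example, not part of the project): a variable that depends exponentially on another has a weaker linear correlation with it than its log-transformed version does.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 5.0, size=500)
y = np.exp(x)                             # strongly related to x, but not linearly

r_raw = np.corrcoef(x, y)[0, 1]           # linear correlation on the raw variable
r_log = np.corrcoef(x, np.log(y))[0, 1]   # after a log transform the relation is linear

# r_log is (near-)perfect, so PCA on the transformed data captures
# the relationship in a single principal component.
```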
After pre-processing, the normalized face image is given as input to the feature extraction module to find the key features that will be used for classification. The module composes a feature vector that represents the face image well enough for classification.
4.3.1 Eigenfaces
• Eigenvectors resemble facial images; they look ghostly and are called eigenfaces.
• Eigenfaces correspond to each face in the face space; the faces for which the eigenvalue is zero are discarded, thus reducing the number of eigenfaces to an extent.
• The eigenfaces are ranked according to their usefulness in characterizing the variation among the images.
4.4 Classification
• The simplest way is to keep one variable and discard all others, which is not reasonable. Alternatively, we can reduce dimensionality by combining features.
Fig 2
A total of 190 images are used for training the system. The images are in the format of 100x100 pixels and are taken of people in five different poses. The images have a similar, plain background. The illumination and lighting conditions are manually configured during image capture. To ensure that the images are of the same size, they are also cropped to a fixed size during database creation. The face database is composed with utmost accuracy in manual effort.
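Before PCA, the 190 training images of 100x100 pixels are stacked into a single data matrix, one flattened image per row. A Python/NumPy sketch of this step (the project uses MATLAB; the helper name and [0, 1] scaling are illustrative choices):

```python
import numpy as np

def build_data_matrix(images):
    """Stack 100x100 grayscale images into an (n_images, 10000) matrix for PCA.
    Each row is one flattened face; pixel values are scaled to [0, 1]."""
    rows = [np.asarray(img, dtype=np.float64).reshape(-1) / 255.0 for img in images]
    return np.vstack(rows)

# e.g. 190 images of 100x100 pixels produce a 190 x 10000 data matrix
```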
The sheer size of data in the modern age is not only a challenge for computer hardware
but also a main bottleneck for the performance of many machine learning algorithms.
The main goal of a PCA analysis is to identify patterns in data; PCA aims to detect the correlation between variables, and the attempt to reduce the dimensionality only makes sense if a strong correlation between variables exists. In a nutshell, this is what PCA is all about: finding the directions of maximum variance in high-dimensional data and projecting it onto a smaller-dimensional subspace while retaining most of the information. Often, the desired goal is to reduce the dimensions of a d-dimensional dataset by projecting it onto a k-dimensional subspace (where k < d) in order to increase the computational efficiency while retaining most of the information.
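Choosing k can be done by keeping the smallest number of principal components that retain a target fraction of the total variance. A Python/NumPy sketch of this idea (illustrative only; the 95% target is an assumed default, not a project setting):

```python
import numpy as np

def choose_k(X, retain=0.95):
    """Smallest k whose top-k principal components keep `retain` of the total variance."""
    Xc = X - X.mean(axis=0)                      # centre the data
    s = np.linalg.svd(Xc, compute_uv=False)      # singular values of centred data
    var = s ** 2                                 # component variances (up to a constant)
    ratio = np.cumsum(var) / var.sum()           # cumulative explained-variance ratio
    return int(np.searchsorted(ratio, retain) + 1)
```

For face data this is how the M eigenfaces that "define the face space" are typically selected.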
5.1 SDLC
The SDLC is a software development process used to design, develop and test high-quality software. It is a framework defining the tasks performed at each step of the software development process, and it aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates.
1. Exploratory programming
Here the objective of the process is to work with customers to explore their requirements and deliver a final system. Development starts with the better-understood components of the system, and the software evolves by adding new features as they are proposed.
2. Throwaway prototyping
Here the purpose of the evolutionary development process is to understand the project requirements and thus develop a better requirements definition for the system. The prototype concentrates on experimenting with those components of the requirements which are poorly understood.
Fig: Evolutionary development model (outline description; concurrent specification, development and validation; producing initial, intermediate and final versions)
5.1.1 Outline Description
In the outline description we are mainly concerned with a detailed description of the problem statement. The problem statement should be described so well that each and every requirement term is clearly understood; in other words, there is no ambiguity in the description of the problem statement. This is an important phase.
5.1.2 Specification
After the detailed description of the problem statement, the next step is the specification of the tools, either hardware or software, for the product. In this step, according to the problem statement, we specify the details of the tools, their versions, the language used, and the platforms on which the product is going to be developed. If there is a hardware part involved in the product, then the hardware specification should also be given. This step is also a crucial phase because product development is totally dependent on the specifications laid down in the software development cycle.
5.1.3 Development
When the tools, the language to be used and the platform on which the product is developed have been decided, the development phase begins. This step is mainly concerned with the coding part: the coder writes the code so that the desired output is obtained, and a working product can be seen in this phase. The coder also tests the product from his side before the validation phase, which is known as unit testing.
5.1.4 Validation
The validation phase is mainly concerned with testing. Testing is done by the tester in order to check whether the product works properly and whether there are any bugs in the software. All the testing methods are applied by the tester. The final product should be bug-free, so that the client is satisfied with it.
At this stage of software development, the initial version of the software is developed.
This initial version includes the basic modules and basic functionalities for which the
software is mainly developed.
At this stage of software development, the final version of the software is produced. This final version includes the overall developed software as the final product, with all the modifications implemented that were suggested by the customer after testing the software.
5.2 FLOWCHART
A flowchart is a diagram that visually displays interrelated information such as events, steps in a process and functions in an organized fashion, for example sequentially or chronologically. There are different types of flow diagrams, such as the control flow diagram, data flow diagram, product flow diagram and information flow diagram.
INPUT/OUTPUT
ASSIGNMENT/PROCESSING BOX
ARROW
CONNECTOR
DECISION BOX
• Data Storage: There are two variants of data storage - it can either be represented as a
rectangle with the absence of both smaller sides or as an open-sided rectangle with only
one side missing.
• Data Flow: Movement of data is shown by pointed arrows. Data movement is shown from
the base of the arrow as its source towards the head of the arrow as a destination.
This is the context-level data flow diagram, which shows the interaction between the system and the external agents that act as data sources and data sinks. On the context diagram (also known as the level-0 DFD), the system's interactions with the outside world are modeled purely in terms of data flows across the system boundary.
The level-1 DFD elaborates the interaction between the system and the agents, with lower-level functions decomposed from the major functions of the system.
5.4.1 Actor
You can picture an actor as a user of the IT system, for example Mr. Steel or Mrs. Smith from check-in. Because individual persons are irrelevant to the model, they are abstracted, so the actors are called "check-in employee" or "passenger".
Use cases describe the interactions that take place between actors and IT systems during
the execution of business processes:
USE CASE
5.4.3 Association
The main objectives of the Smart Parking System application are to provide the following:
2. To ensure safe and secure parking slots within a limited area, which is the most urgent need.
5.5.1 Methodology:
Step1:
Initially, the slot selection is made by the user from his mobile phone. He checks for the availability of the parking slot nearest to his location. If one is available, he moves to the next stage; otherwise he returns to the initial state.
Step2:
Transfers request for parking slot from the mobile using Android application.
Step3:
The Parking Control Unit (PCU) gets the slot number requested by the user.
Step4:
If the booking is done successfully, then the requested slot is reserved in the parking area.
Step5:
After a particular slot has been reserved by the user, the status of that slot is marked RED = RESERVED and the remaining slots are marked GREEN = EMPTY.
Step6:
As soon as the vehicle enters the parking slot, the timer starts and measures the total time.
Step7:
As soon as the vehicle moves out of the parking slot, the timer stops and the total cost is displayed.
Modules
Smart Parking System mainly consists of two modules:
User Module
This module provides the user with the flexibility of registering, logging in, and booking.
If the user is new to the application, the user must register by providing his details.
After registration, the user logs in using the user ID and password.
Booking Module
This is the main module of the application; it deals with the booking of the parking slot.
When the user is ready to book, the booking module provides the user with the necessary information for booking.
The available slots, the cost to book a slot, and the related processing are handled by this module.
Initially, the user needs to install the Smart Parking application on his Android device or access it through the website.
After installation, the Smart Parking icon is displayed on his Android mobile screen.
Registration and login:
If the user is new, he needs to register with the application by giving all his details. The data entered by the user is stored on the server.
These details consist of user name, email, password, address, etc. This registration is done only once.
After successful registration, the user receives a unique login ID on both his mail and his mobile number.
Once registered, the user can log in by providing his email and the unique ID.
Initial Screen
Login Page
This is a screenshot of the Android mobile application's login/register page.
The user must have an account before logging in; only authorized users can log in.
Free-slot identification is verified using infrared (IR) sensors, with one IR sensor per parking slot. The sensor detects a vehicle from the reflected infrared waves and covers a short distance: a pulse of IR light is generated by the sensor and emitted by its emitter. The detected information is sent via the Wi-Fi module to the Arduino board, and the results are displayed on the LED screen. The server also keeps the user booking details, the lane details, and the duration in hours.
Overall flow diagram of the IoT-based smart parking management system.
The following details need to be entered:
a. Car No.
b. Time in
c. Time out
d. Date
Available Slots
After selecting a slot, the user needs to check the availability of that slot.
The user can check the status of the slots with the help of green and red colour indications, where green indicates that the respective slot is empty and red indicates that the slot is already allocated to another user.
CODING
Create Connection:-
<?php
$conn = mysqli_connect("localhost","id6458044_nvn","Arduino@123","id6458044_nvn");
// Check connection
if (mysqli_connect_errno()) {
    echo "Failed to connect to MySQL: " . mysqli_connect_error();
    exit();
}
?>
Index.html
<?php
session_start();
if (isset($_SESSION['loggedin'])) {
    header("location:webpage/home.php");
} else {
    session_destroy();
}
?>
<?php
require('webpage/conn.php');
session_start();
if (isset($_POST['user_mail'])){
    $user_mail = stripslashes($_REQUEST['user_mail']);
    $user_mail = mysqli_real_escape_string($conn,$user_mail);
    $user_password = stripslashes($_REQUEST['user_password']);
    $user_password = mysqli_real_escape_string($conn,$user_password);
    // table and column names are assumed; the original query line was truncated
    $sql = "SELECT * FROM users WHERE mail='".$user_mail."' and pass='".$user_password."'";
    $result = mysqli_query($conn,$sql);
    $rows = mysqli_num_rows($result);
    if($rows==1){
        $_SESSION['user_mail'] = $user_mail;
        $_SESSION['loggedin'] = true;
        $sth = $conn->query($sql);
        $r = mysqli_fetch_array($sth);
        $_SESSION['phno'] = $r['phno'];
        //mysqli_close($conn);
}else{
echo "
<div class='modal-dialog'>
<div class='modal-content'>
<div class='modal-header'>
<h4 class='modal-title'>Notification</h4>
</div>
<div class='modal-body'>
</div>
<div class='modal-footer'>
<button type='button' class='btn btn-danger' data-
dismiss='modal'>Close</button>
</div>
</div>
</div>
</div>
";
}
}else{
?>
<!DOCTYPE html>
<html lang="en">
<head>
<title>Smart Parking</title>
<meta charset="utf-8">
<link rel="stylesheet"
href="https://maxcdn.bootstrapcdn.com/bootstrap/4.1.0/css/bootstrap.min.css">
<script
src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.0/umd/popper.min.js"></script>
<script
src="https://maxcdn.bootstrapcdn.com/bootstrap/4.1.0/js/bootstrap.min.js"></script>
<style type="text/css">
body, html {
  height: 100%;
}
.bg {
  background-image: url("webpage/images/black_net.jpg");
  /* Full height */
  height: 127%;
  background-position: center;
  background-repeat: repeat;
  background-size: cover;
}
</style>
</head>
<body>
<div class="row">
  <div class="col-sm-3"></div>
  <div class="col-sm-6">
    <form>
      <div class="form-group">
      </div>
      <div class="form-group">
      </div>
    </form>
  </div>
  <div class="col-sm-3"></div>
</div>
</body>
<script type='text/javascript'>
$(window).on('load',function(){
$('#myModal').modal('show');
});
</script>
<script>
// prevent the login form from being resubmitted on page refresh
if ( window.history.replaceState ) {
  window.history.replaceState( null, null, window.location.href );
}
</script>
</html>
<?php } ?>
Arduino Code:-
// Pin numbers follow the comments in the original sketch; the pin of the
// second sensor is assumed.
const int LED = 13;          // status LED on digital pin 13
const int obstaclePin1 = 7;  // IR sensor for slot 1 on digital pin 7
const int obstaclePin2 = 8;  // IR sensor for slot 2 (pin assumed)
int hasObstacle1;
int hasObstacle2;

void setup() {
  pinMode(LED, OUTPUT);
  pinMode(obstaclePin1, INPUT);
  pinMode(obstaclePin2, INPUT);
  Serial.begin(9600);
}

void loop() {
  // Read the IR obstacle sensors; LOW means a vehicle is detected ahead
  hasObstacle1 = digitalRead(obstaclePin1);
  hasObstacle2 = digitalRead(obstaclePin2);
  if ((hasObstacle1 == LOW) && (hasObstacle2 == LOW)) {
    Serial.println("A");      // both slots occupied
    digitalWrite(LED, HIGH);  // illuminate the pin-13 LED
  }
  else if ((hasObstacle1 == LOW) && (hasObstacle2 == HIGH)) {
    Serial.println("B");      // only slot 1 occupied
    digitalWrite(LED, HIGH);
  }
  else if ((hasObstacle1 == HIGH) && (hasObstacle2 == LOW)) {
    Serial.println("C");      // only slot 2 occupied
    digitalWrite(LED, HIGH);
  }
  else {
    Serial.println("D");      // both slots empty
    digitalWrite(LED, LOW);
  }
  delay(200);
}
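The sketch prints a single letter per reading over the serial/Wi-Fi link. A host-side script could decode that letter back into per-slot occupancy; the mapping below mirrors the four branches of loop() (LOW, i.e. "something detected", corresponds to an occupied slot) and is a sketch, not part of the actual server code.

```python
def decode_status(letter):
    """Map the serial letter to (slot1_occupied, slot2_occupied)."""
    mapping = {
        "A": (True, True),    # both sensors LOW
        "B": (True, False),   # only sensor 1 LOW
        "C": (False, True),   # only sensor 2 LOW
        "D": (False, False),  # both HIGH, LED off
    }
    return mapping[letter]

assert decode_status("A") == (True, True)
assert decode_status("D") == (False, False)
```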
Testing means cross-checking whether the product generated after processing is as expected. It is the process of evaluating a software item to verify differences between given input and expected output, and to assess its features. Testing ensures the quality of the product and should be carried out throughout the development process. Verification makes sure that the product satisfies the conditions imposed at the start of the development phase; validation makes sure that the product is built as per customer requirements.
Unit testing is a technique in which individual modules are tested by the developer to determine whether there are any issues. It is concerned with the functional correctness of standalone modules.
The main aim is to isolate each unit of the system to identify, analyze and fix defects. Unit tests, when integrated with the build, also indicate the quality of the build.
Figure 9
Black Box Testing - the user interface, inputs and outputs are tested.
White Box Testing - the behaviour of each individual function is tested.
Gray Box Testing - combines both, executing tests and risk assessments with partial knowledge of the internal structure.
Integration testing (sometimes called integration and testing, abbreviated I&T) is the
phase in software testing in which individual software modules are combined and tested
as a group. It occurs after unit testing and before validation testing. Integration testing
takes as its input modules that have been unit tested, groups them in larger aggregates,
applies tests defined in an integration test plan to those aggregates, and delivers as its
output the integrated system ready for system testing.
7.1.2.1 Purpose
Some different types of integration testing are big-bang, mixed (sandwich), risky-
hardest, top-down, and bottom-up. Other Integration Patterns are: collaboration
integration, backbone integration, layer integration, client-server integration, distributed
services integration and high-frequency integration.
In the big-bang approach, most of the developed modules are coupled together to form a
complete software system or major part of the system and then used for integration
testing. This method is very effective for saving time in the integration testing process.
However, if the test cases and their results are not recorded properly, the entire integration
process will be more complicated and may prevent the testing team from achieving the
goal of integration testing.
Bottom-up testing is an approach to integrated testing where the lowest level components
are tested first, then used to facilitate the testing of higher level components. The process
is repeated until the component at the top of the hierarchy is tested.
All the bottom or low-level modules, procedures or functions are integrated and then
tested. After the integration testing of lower level integrated modules, the next level of
modules will be formed and can be used for integration testing. This approach is helpful
only when all or most of the modules of the same development level are ready. This
method also helps to determine the levels of software developed and makes it easier to
report testing progress in the form of a percentage.
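The bottom-up approach described above can be sketched as follows: two low-level units (sensor decoding and fee calculation, both hypothetical) are assumed already unit-tested, and the higher-level function that combines them is then tested as an aggregate.

```python
def slot_is_free(sensor_reading):
    """Low-level unit 1: HIGH from the IR sensor means nothing is detected."""
    return sensor_reading == "HIGH"

def parking_fee(hours, rate=10):
    """Low-level unit 2: flat-rate fee (rate assumed)."""
    return hours * rate

def check_out(sensor_reading, hours):
    """Higher-level aggregate: charge only once the car has left the slot."""
    if not slot_is_free(sensor_reading):
        return None  # car still present, keep the timer running
    return parking_fee(hours)

# Integration test over the aggregate, exercising both lower-level units
assert check_out("LOW", 2) is None
assert check_out("HIGH", 2) == 20
```

Only after the two leaf functions pass their own tests does this aggregate-level test run, mirroring the level-by-level progress reporting mentioned above.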
Functional testing does not imply that you are testing a function (method) of your module
or class. Functional testing tests a slice of functionality of the whole system.
Functional testing differs from system testing in that functional testing “verifies a
program by checking it against ... design documents or specifications", while system
testing "validates a program by checking it against the published user or system
requirements”.
Smoke testing
Sanity testing
Regression testing
Usability testing
Black-box testing is a testing strategy that ignores the internal mechanism of a system or
component and focuses solely on outputs generated in response to selected inputs and
execution conditions. In black box testing, the structure of the program is not taken into
consideration. It takes into account functionality of the application only. It is also called
functional testing. The tester is mainly concerned with validating the output rather than how the output is produced. Knowledge of programming or implementation logic (of the internal structure and working) is not required of testers. It is applicable mainly at the higher levels of testing: acceptance testing and system testing.
The software into which known inputs are fed and where known outputs are expected is
termed a black box. The transformation of the known inputs to the known outputs is
done via the system and is not checked in this kind of testing. This transformation
process system is called the black box.
In this kind of testing, the testers concentrate on functional testing, that is, on providing
a known input and check if the known output is obtained. This method is generally
followed while carrying out acceptance testing, when the end user is not a software
developer but only a user.
It is different from white box testing in the sense that in white box testing, the tester
ought to have the programming knowledge and understanding of code to test the
application whereas it may not be the case in black box testing.
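In black-box style, a test exercises the unit purely through its inputs and outputs. The slot-allocation function below is hypothetical; the test never refers to how it chooses a slot internally, only to properties the output must satisfy.

```python
def allocate_slot(free_slots):
    """Opaque unit under test: returns some free slot, or None if none exist.
    (The selection strategy here is an arbitrary stand-in.)"""
    return min(free_slots) if free_slots else None

# The tester only checks observable behaviour:
result = allocate_slot({3, 1, 2})
assert result in {3, 1, 2}           # output must be one of the free slots
assert allocate_slot(set()) is None  # empty input -> no allocation
```

A white-box test of the same function would instead inspect the selection logic itself, which is precisely the distinction drawn above.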
As a rule, system testing takes, as its input, all of the "integrated" software components
that have passed integration testing and also the software system itself integrated with any
applicable hardware systems. The purpose of integration testing is to detect any
inconsistencies between the software units that are integrated together
(called assemblages) or between any of the assemblages and the hardware. System
testing is a more limited type of testing; it seeks to detect defects both within the "inter-
assemblages" and also within the system as a whole.
In engineering and its various sub disciplines, acceptance testing is a test conducted to
determine if the requirements of a specification or contract are met. It may
involve chemical tests, physical tests, or performance tests. In systems engineering it
may involve black-box testing performed on a system prior to its delivery. In software
testing the ISTQB defines acceptance testing as: formal testing with respect to user needs, requirements, and business processes, conducted to determine whether a system satisfies the acceptance criteria.
The acceptance test suite may need to be performed multiple times, as all of the test cases
may not be executed within a single test iteration.
The acceptance test suite is run using predefined acceptance test procedures to direct the
testers which data to use, the step-by-step processes to follow and the expected result
following execution. The actual results are retained for comparison with the expected
results. If the actual results match the expected results for each test case, the test case is
said to pass. If the quantity of non-passing test cases does not breach the project's
predetermined threshold, the test suite is said to pass. If it does, the system may either be
rejected or accepted on conditions previously agreed between the sponsor and the
manufacturer.
7.2 Testing
Eclipse is used for coding. A coloured slot shows whether a slot is empty or occupied; a slot is booked through the web host using computational techniques.
All slots in the database are empty until a user books one; once a slot is booked, its colour changes from green to red.
CONCLUSION
Smart Parking System is used to book parking slots without any great effort by the user, using an Android device. The user can check the status of the parking area and book a parking slot in advance. This helps overcome many problems created by bad traffic management. Mobile computing has proven to be a fruitful area of work for researchers in database and data management, so this application is implemented on the Android mobile OS. The application can be applied anywhere owing to its ease of use and effectiveness.