
Missing Person Identification Using Face

Recognition
Simran Arora#1, Deepesh Kumar Rai*2, Gaurav Vishwakarma#3, Susanto Mandal*4
#Computer Science Department, Dr. A.P.J. Abdul Kalam Technical University, India

Abstract—In the world, a countless number of people go missing every day, including kids, teens, the mentally challenged, and old-aged people with Alzheimer's. Most of them remain untraced. This paper proposes a system that helps the police and the public by accelerating the search process using face recognition. When a person goes missing, the people related to that person, or the police, can upload a picture of the person, which is stored in the database. When members of the public encounter a suspicious person, they can capture a picture of that person and upload it to our portal. The face recognition model in our system then tries to find a match in the database with the help of face encodings, comparing the face encodings of the uploaded image with the face encodings of the images in the database. If a match is found, the police and the people related to that person are notified, along with the location. The system also informs the police authorities and the missing person's relatives through their specified e-mail addresses. In addition, there is a feature to detect the missing person in video, which can be applied to CCTV and video surveillance systems to broaden the system's reach.

I. INTRODUCTION

A missing individual is typically a child or an adult who is lost, intentionally or involuntarily. There are different classifications of missing cases, and the reasons are known for just 43% of them: 99% of these are adolescent runaways, 2,500 cases are on account of family issues, and around 500 cases are abductions by strangers (covering both teenagers and adults).

"In India, there are no budgets allocated to finding missing people", an official source has claimed. The media can be used to find missing people, for instance through newspapers. Media appeals may be the quickest and most effective way of raising awareness of a missing person and helping the continuing search for him or her.

Nevertheless, not everyone feels comfortable using the media. Different newspapers and magazines have different interviewing techniques and styles. While many journalists will be sympathetic, others may appear forceful, cold or aggressive, or behave in other ways that seem insensitive to what the family is going through. Some people do not trust the media or do not want their circumstances made public; others feel overwhelmed by the thought of dealing with journalists and being asked probing, personal questions about their missing friend or relative.

Additionally, publicity may put already vulnerable people at greater risk by driving them further away if they do not wish to be found.

Kidnappers, who follow the media, can also continue to victimize their victims. The use of facial recognition, however, makes it easier to find missing people and caters for all of these disadvantages of using the media.

The objective of this project is to identify/recognise the missing person based on their previously available data. The system can be used by the police to recognise a missing person easily, and, given video footage, it can also identify a missing person in a crowd. The objective of this study is two-fold:
• Matching a face against the available database accurately.
• Applying principal component analysis to find distinguishable features across many images and compute the similarity for the target image.

We have developed a system that can be used by the police or an investigation department to recognize missing persons from their faces. The face recognition method used is fast, robust, reasonably simple and accurate, with relatively simple and easy-to-understand algorithms and techniques.

When a suspicious person is found, the picture taken at that instant is compared, through the face recognition model, with the images uploaded by the guardian or the police department at the time of the disappearance. If a match is found, the police and the guardian are notified by an alert e-mail along with the location where the person was found. If no match is found, a new record is created in the database with the uploaded picture. In this way, the system decreases the time taken to retrieve a person's details after he or she is found.

Sometimes, the person has been missing for a long period of time. The age gap is reflected in the image, as ageing affects the structure of the face, including its shape, texture, etc. The appearance of the person can also vary due to filters, pose, lighting, etc. All these factors were considered before choosing the face recognition algorithm.

II. LITERATURE SURVEY

A. The Eigenface Method
Kirby and Sirovich first demonstrated the eigenface representation, and Turk and Pentland improved on this research by employing an eigenface method based on Principal Component Analysis for recognition. The eigenfaces are the principal components of a distribution of faces or, equivalently, the eigenvectors of the covariance matrix of the set of face images, where an image with N pixels is considered a point (or vector) in N-dimensional space.
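The eigenface construction described above can be sketched in a few lines of NumPy. This is a minimal illustration rather than the system's implementation: the random array stands in for a real set of flattened face images, and the SVD is used to obtain the eigenvectors of the covariance matrix without forming that matrix explicitly.

```python
import numpy as np

# Synthetic stand-in for a face dataset: 20 flattened images of 16x16 = 256 pixels.
rng = np.random.default_rng(0)
faces = rng.random((20, 256))

# Eigenfaces are the eigenvectors of the covariance matrix of the face set,
# so first centre the data around the mean face.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# The SVD of the centred data yields those eigenvectors (the rows of vt)
# without ever building the 256 x 256 covariance matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]  # keep the first 10 principal components

# A face (a point in N-dimensional pixel space) is described by its weights
# along the eigenfaces and can be approximately reconstructed from them.
weights = eigenfaces @ (faces[0] - mean_face)
reconstruction = mean_face + weights @ eigenfaces
```

Matching then reduces to comparing weight vectors: two images of the same person should have nearby projections in the eigenface space.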
B. Fisherface Method
Image recognition using the Fisherface method is based on reducing the dimension of the face space with the Principal Component Analysis (PCA) method and then applying Fisher's Linear Discriminant (FLD), also known as Linear Discriminant Analysis (LDA), to obtain the characteristic features of the image.

C. Face Recognition Through Geometric Features
In the first phase, a set of fiducial points is located in each face and geometric facts, such as the distances between these points, are extracted; the image closest to the query face is then nominated.

D. Hidden Markov Model (HMM)
HMMs are generally employed on images with variations due to lighting, orientation and facial expression, which gives them an advantage over other approaches. When treating images with an HMM, sequences of observations over the image are considered. The procedure is named a Hidden Markov Model because the states are hidden; only the output is visible to the external observer.

E. Active Appearance Model (AAM)
Faces are highly distinct and deformable 2D objects: depending on pose, expression and lighting, a face can have many different appearances in images. Cootes, Taylor, and Edwards [56] presented the Active Appearance Model, which is strongly capable of explaining the view of a face in a set of model parameters. The AAM is an integrated statistical model, built from a training set of labelled images.

III. METHODOLOGY

The proposed system makes use of face recognition for missing persons' identification. The architecture of our framework is presented in Figure 1.

Figure 1. The Architecture of the proposed People Identification System

Here, a member of the public or the police who finds a suspicious person (a child, a mentally challenged person, etc.) on the road uploads a picture of that person to the portal. Our algorithm extracts the face encodings of the image, as shown in Figure 2, and compares them with the face encodings of the images already in the database. If a match is found, an alert e-mail is sent to both the concerned police officer and the parent/guardian of the person in the image. If a match is not found, the user is given the option of registering that face as a new entry in our database, along with remarks.

Whenever the public or police upload an image, the face encodings of the image are extracted and then compared with the face encodings of the images stored in the database. If the distance between the encoding of the uploaded image and the encoding of an image in the database is less than or equal to a threshold, the faces in the two images belong to the same person. In that case, the user is notified that a match is found, along with the matching picture from the database. If the distance between the encodings is greater than the threshold, the faces in the images are not of the same person. In this way, our proposed system helps in identifying missing people.

Figure 2. Comparing face encodings of two images

The model we have used involves five main steps to perform face recognition.

A. Preprocessing
To reduce the variability in the faces, the images are processed before they are fed into the network. All positive examples, i.e. the face images, are obtained by cropping images with frontal faces so that only the front view is included. All the cropped images are then corrected for lighting through standard algorithms.

A face recognition system based on computing the distance between unprocessed gray-level images fails to recognize all the faces in the database and will confuse faces. The approaches to this problem can be classified into three main categories:
• Illumination normalization: face images are preprocessed to normalize the illumination. Gamma correction, logarithmic transforms and histogram equalization are a few of the methods used here.
• Invariant feature extraction: this approach attempts to extract facial features that are invariant to illumination variations. Edge maps, derivatives of the gray level, Gabor-like filters, Fisherfaces, etc. are a few of the applicable methods.
• Face modeling: illumination variations are mainly due to the 3D shape of human faces under lighting from different directions. Researchers are trying to construct a generative 3D face model that can render a face image in different poses and under varying lighting conditions.

We have focused only on the first category. The process, called illumination normalization, attempts to transform an image with an arbitrary lighting condition into an image with a standard lighting condition. Accordingly, the Gamma Intensity Correction method, the Logarithm Transform method, the Discrete Cosine Transform method and the Histogram Equalization method (global and local) were analysed for normalizing lighting variations in face images.

Figure 3. Convolutional neural networks (CNN)
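Two of the illumination normalization methods mentioned above, Gamma Intensity Correction and global Histogram Equalization, can be sketched as follows. This is a minimal NumPy illustration assuming 8-bit grayscale input, not the exact routines analysed in the system; the random patch stands in for a real face image.

```python
import numpy as np

def gamma_correct(img, gamma=0.5):
    """Gamma Intensity Correction for an 8-bit grayscale image."""
    normalized = img.astype(np.float64) / 255.0
    return (255.0 * normalized ** gamma).astype(np.uint8)

def histogram_equalize(img):
    """Global Histogram Equalization: spread the intensities over [0, 255]."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf = 255.0 * (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)  # remap each pixel through the CDF

# A synthetic dark, low-contrast "face" patch.
rng = np.random.default_rng(1)
dark = rng.integers(10, 60, size=(32, 32), dtype=np.uint8)

brighter = gamma_correct(dark)        # lifts dark pixels toward mid-range
equalized = histogram_equalize(dark)  # stretches the histogram to full range
```

A gamma below 1 brightens shadowed regions, while histogram equalization redistributes the intensities so that faces captured under very different lighting end up with comparable contrast before matching.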
B. Classification
Neural networks are implemented to classify images as faces or non-faces by training on labelled examples. We use both our own implementation of the neural network and the MATLAB neural network toolbox for this task. Different network configurations were experimented with to optimize the results.

Image classification refers to labelling images into one of a number of predefined classes. There are potentially n classes into which a given image can be classified. Manually checking and classifying images is a tedious task, especially when the images are massive in number (say 10,000), so it is very useful to automate the entire process using computer vision.

The features are extracted using a convolutional neural network, since deep learning models have achieved state-of-the-art results in feature extraction.

Convolutional neural networks (CNNs) are a special architecture of artificial neural networks. CNNs mimic some features of the visual cortex and have therefore achieved state-of-the-art results in computer vision tasks. Convolutional neural networks are composed of two very simple elements, namely convolutional layers and pooling layers.

Although these elements are simple and relatively straightforward to understand, there are near-infinite ways to arrange the layers for a given computer vision problem. The challenging part of using convolutional neural networks in practice is designing model architectures that make the best use of these simple elements.

C. Localization
The trained neural network is then used to search for faces in an image and, if present, localize them in a bounding box. The facial properties on which this work operates are position, scale, orientation and illumination.

Faces are localized with the HAAR-like-features object recognition algorithm, which describes objects by their changes in lightness.

What lightness changes describe a general face? The forehead is lighter than the area of the eyes. The bridge of the nose is lighter than the wings of the nose. The mouth is darker than the chin.

HAAR-like features describe these changes in lightness using four types of features:

Figure 4. HAAR Features

Green stands for more lightness, red for less lightness. To describe a face, several of these features (in fact, several thousand) are grouped together to form a classifier:

Figure 5. HAAR Features grouped together

As can be seen, the features can also be inverted.
• To detect a face in an image, the image is scanned using the classifier.
• To localize tilted heads, the classifier has to be rotated.
• To calculate the lightness differences quickly, integral images should be used.
• To optimize the scanning process, it is suggested to first scan roughly and then close in on the responsive areas.

D. Data
The data is stored in a NumPy matrix, an n x n dimensional matrix used to represent the pixels of an image. It is capable of storing both grayscale and colour images.

The data stored in the NumPy matrix is compared with the data the neural net was trained on, and the output is produced depending on this comparison.

Figure 6. Face is stored in n x n matrix

E. Functionalities
The functionalities performed by the system on the input attributes to give the desired output are as follows:

1) Image Acquisition (Input Image): Facial-scan technology can acquire faces from almost any static camera or video system that generates images of sufficient quality and resolution. Enrolment is essential for verification and identification; enrolment images define the facial characteristics to be used in all future authentication events.

2) Image Processing (Face Detection): Images are cropped so that the ovoid facial image remains, and colour images are normally converted to black and white to facilitate the initial comparisons based on grayscale characteristics. First, the presence of a face or faces in a scene must be detected. Once a face is detected, it must be localized, and a normalization process may be required to bring the dimensions of the live facial sample into alignment with the one on the template.

3) Distinctive Character Location: All facial-scan systems attempt to match visible facial features in a fashion similar to the way people recognize one another. The features most often utilized in facial-scan systems are those least likely to change significantly over time: the upper ridges of the eye sockets, the areas around the cheekbones, the sides of the mouth, the shape of the nose, and the position of major features relative to each other. Behavioural changes such as altering a hairstyle, changing makeup, growing or shaving facial hair, and adding or removing eyeglasses impact the ability of facial-scan systems to locate distinctive features; facial-scan systems are not yet developed to the point where they can overcome such variables.

4) Template Creation: Enrolment templates are normally created from a multiplicity of processed facial images. These templates can vary in size from less than 100 bytes, generated by certain vendors, to over 3 KB. The 3 KB template is by far the largest among technologies considered physiological biometrics; larger templates are normally associated with behavioural biometrics.

Figure 7. Image Recognition Flow

IV. RESULTS AND DISCUSSION
After implementation, the following results were summarized based on our observations:
• The system has a reasonably fast response, and the template matching step, which takes little time relative to the total response time, is the only step that depends on the size of the dataset. Fortunately, it is only logarithmically proportional to the dataset size, which implies that the system scales to large datasets and will respond quickly on big data.
• Increasing the volume of data hurt neither the system's response time nor its accuracy in detecting faces.
• Finally, the system determines whether each scanned face is identified as a missing person or as a normal citizen.

These are the salient features based on our discussion of the system:
• All identification and authentication technologies operate using the four-stage method.
• Capturing the physical and behavioural sample during enrolment.
• Extracting the unique data from the sample; a template is created and the data is stored in a NumPy matrix.
• Comparing the template with the new sample.
• The system then checks whether the features extracted in the matrix are a match or a non-match.
• Our project consists of two comparisons:
a. Verification: the system compares the given individual with who they say they are and gives a yes or no decision.
b. Identification: the system compares the given individual with all the other individuals in the database and gives a ranked list of matches.

V. CONCLUSION

Our system replaces the manual method of scanning through the database for each picture to check for a match with an efficient face recognition method that finishes the work on time. It can also approximately match a human face at different angles using optical flow. In the future, we plan to extend this system further by increasing its reach to the public.

Our system does have a small limitation: when the person's age is between 0 and 10, the accuracy drops. This is due to the incomplete growth of facial features at that age. We look forward to overcoming this limitation in the future.

REFERENCES
[1] G. B. Huang and E. G. Learned-Miller, "Labeled Faces in the Wild: Updates and New Reporting Procedures", Department of Computer Science, University of Massachusetts Amherst, Amherst, MA, USA, Tech. Report 14-003, 2014.
[2] P. S. Chandran, B. Balakrishnan, D. Rajasekharan, K. N. Nishakumari, P. Devanand and P. M. Sasi, "Missing Child Identification System Using Deep Learning and Multiclass SVM", 2018, pp. 113-116, doi: 10.1109/RAICS.2018.8635054.
[3] R. Satle, V. Poojary, J. Abraham and S. Wakode, "Missing Child Identification Using Face Recognition System", Vol. 3, Issue 1, July-August 2016.
[4] S. B. Arniker et al., "RFID based missing person identification system", International Conference on Informatics, Electronics & Vision (ICIEV), Dhaka, 2014, pp. 1-4.
[5] B. Hetal, "Android Based Application - Missing Person Finder", Iconic Research and Engineering Journals, Vol. 1, Issue 12, June 2018.
[6] T. M. Omweri, "Using a Mobile Based Web Service to Search for Missing People - A Case Study of Kenya", International Journal of Computer Applications Technology and Research, Vol. 4, Issue 7, pp. 507-511, 2015.
[7] S. Pate, "Robust Face Recognition System for E-Crime Alert", International Journal for Research in Engineering Application and Management, Issue 1, March 2016.
[8] P. Muyambo, "An Investigation on the Use of LBPH Algorithm for Face Recognition to Find Missing People in Zimbabwe", International Journal of Engineering Research & Technology (IJERT), Vol. 7, Issue 7, July 2018.
