Keywords- Deep Learning, Machine Learning, Python, Generative Adversarial Networks, Computer Science
© 2022 IJSRET
1925
International Journal of Scientific Research & Engineering Trends
Volume 8, Issue 4, July-Aug-2022, ISSN (Online): 2395-566X
● With the help of this data, a neural network model is trained that can detect patterns in the sketch to draw real-life colored images of the sketches.
● The predicted sketches of the model will be sent back to our front end and will be displayed to the user in different canvas spaces.
● The user can then make the required changes to the original sketch to add details to the output image.

II. LITERATURE REVIEW

Existing face sketch–photo synthesis methods can be divided into three main categories [1]:
1. Subspace learning-based
2. Sparse representation-based
3. Bayesian inference-based approaches

The subspace learning-based methods include the linear subspace method, based on principal component analysis [21], and the nonlinear subspace method, based on locally linear embedding (LLE) [12]. Tang and Wang [10], [21] first treated the face-sketch synthesis procedure as a linear process and proposed an Eigen sketch transformation method. The input photo was projected onto the training photo set to obtain projection coefficients, and the target sketch was then synthesized by linearly combining the corresponding training sketches weighted by those projection coefficients.

… recognition problems [21]. Considering the large amount of 2-D spatial information in face images, Gao et al. [12] employed EHMM to model the nonlinear relationship between sketches and their photo counterparts.

Deep learning-based methods: One of the earliest and most popular deep-learning approaches is the AlexNet Deep Convolutional Neural Network (DCNN) architecture [18], which was trained for the task of object classification. Several stronger approaches that rely on deep learning have since been introduced, along with new methods to improve network performance. Of particular interest in this project are face recognition methods such as Facebook's DeepFace [14], the DeepID series [21]–[12], Google's FaceNet [15], and VGG-Face [16]. These have provided valuable observations: superior performance is generally obtained by using more layers [12]; a large amount of training data is beneficial (especially for "deeper" networks with more trainable parameters) [15], [16]; multiple DCNNs can be combined [19]; and a "triplet-based" objective function can decrease the distance between features of the same subject while increasing the distance between features of different subjects [15], [16].

Table 1. Previous Approaches

III. DATASET DESCRIPTION

The dataset used is custom designed from images and sketches. The images are taken from a variety of datasets, capturing as many face shapes as possible from regions around the globe. More than 10,000 images are used to train the model. Free-hand sketches are created for a set of 2,000 photos, and a machine learning model trained on these 2,000 face-sketch pairs generates sketches for the remaining photos for the model to learn from. These sketches and images are then divided into eight batches, with images assigned to a batch based on facial structure. A model is trained on these batches using Generative Adversarial Networks.
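The adversarial objective behind this training setup is the one introduced by Goodfellow et al. [21]. As a rough illustration only (not the paper's actual TensorFlow model), the following NumPy sketch computes the discriminator and non-saturating generator losses for one toy batch; the linear generator, logistic discriminator, 1-D data, and all shapes here are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for one batch: "real" plays the role of training photos,
# and the generator maps noise vectors to fake samples.
real = rng.normal(5.0, 1.0, size=(64, 1))   # real data batch
z = rng.normal(size=(64, 1))                # noise batch

w_g, b_g = 1.0, 0.0   # linear generator parameters
w_d, b_d = 0.1, 0.0   # logistic discriminator parameters

def G(z):
    return z * w_g + b_g

def D(x):
    # Sigmoid probability that x is a real sample.
    return 1.0 / (1.0 + np.exp(-(x * w_d + b_d)))

fake = G(z)
eps = 1e-8   # numerical floor inside the logs

# Discriminator maximizes log D(real) + log(1 - D(fake)); written as a loss:
d_loss = -np.mean(np.log(D(real) + eps) + np.log(1.0 - D(fake) + eps))
# Non-saturating generator loss: maximize log D(fake).
g_loss = -np.mean(np.log(D(fake) + eps))
```

In the paper's setting, G would be the sketch-to-photo generator and D the convolutional discriminator, with these two losses minimized alternately over the eight batches.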
V. INTERFACE DESIGN

We are developing a web interface that will interact with the user to take sketches and process those sketches with the help of our custom-trained model to generate facial output. At the backend of the project, we propose to develop an API that will interact with our front end to take sketch images from users. The proposed model is developed using TensorFlow, and the data is collected through various channels over the internet. With the help of this data, a neural network model is trained that can detect patterns in the sketch to draw real-life colored images of the sketches. The predicted sketches of the model will be sent back to our front end and displayed to users in different canvas spaces. The user can then make the required changes to the original sketch to add details to the output image.

Fig. 2 Neural Net Model

Face sketch recognition with discriminator network: The basic structure of the discriminator network consists of five convolutional layers with kernel size 3, padding 1, and stride 2, as shown in Fig. 1. The numbers of filters are 32, 32, 64, 64, and 128, respectively. After each CNN layer, a condition is checked to determine whether the data is training data. If it is training data, the CNN is trained and the process continues; if not, predictions are generated by the Sketch Face Model and we obtain the desired output.
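The discriminator's layer arithmetic can be checked directly: with kernel 3, stride 2, and padding 1, each convolution halves the spatial resolution. A minimal sketch, assuming a 256×256 input (the input resolution is not stated in the text):

```python
def conv_out(n, kernel=3, stride=2, padding=1):
    """Output spatial size of one conv layer (standard conv formula)."""
    return (n + 2 * padding - kernel) // stride + 1

filters = [32, 32, 64, 64, 128]   # per the discriminator description above
size = 256                        # assumed square input resolution
shapes = []
for f in filters:
    size = conv_out(size)
    shapes.append((f, size, size))
# Each layer halves the spatial size: 256 -> 128 -> 64 -> 32 -> 16 -> 8,
# so the final feature map has 128 channels at 8x8.
```

This halving behavior is why kernel 3 / stride 2 / padding 1 is a common downsampling block in GAN discriminators.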
REFERENCES

[1] B. Klare, Z. Li, and A. K. Jain, "Matching forensic sketches to mug shot photos," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 3, pp. 639–646, 2011.
[2] B. Klare and A. K. Jain, "Heterogeneous face recognition using kernel prototype similarities," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 6, pp. 1410–1422, Jun. 2013.
[3] S. Klum, H. Han, A. K. Jain, and B. Klare, "Sketch based face recognition: Forensic vs. composite sketches," in Int. Conf. Biometrics (ICB), 2013, pp. 1–8.
[4] H. Han, B. F. Klare, K. Bonnen, and A. K. Jain, "Matching composite sketches to face photos: A component-based approach," IEEE Trans. Inf. Forensics Security, vol. 8, no. 1, pp. 191–204, Jan. 2013.
[5] C. Galea and R. A. Farrugia, "A large-scale software-generated face composite sketch database," in Int. Conf. Biometrics Special Interest Group (BIOSIG), Sep. 2016, pp. 1–5.
[6] Identi-kit, identi-kit solutions. http://www.identikit.net/.
[7] VisionMetric, About EFIT-V. http://www.visionmetric.com/products/aboute-fit/.
[8] D. McQuiston, L. Topp, and R. Malpass, "Use of facial composite systems in US law enforcement agencies," Psychology, Crime and Law, vol. 12, no. 5, pp. 505–517, 2006.
[9] H. Galoogahi and T. Sim, "Inter-modality face sketch recognition," in IEEE Int. Conf. Multimedia and Expo (ICME), July 2012, pp. 224–229.
[10] N. Wang, D. Tao, X. Gao, X. Li, and J. Li, "A comprehensive survey to face hallucination," Int. J. Comput. Vision, vol. 106, no. 1, pp. 9–30, 2014.
[11] C. Galea and R. A. Farrugia, "Fusion of intra- and inter-modality algorithms for face-sketch recognition," in Computer Analysis of Images and Patterns, vol. 9257, 2015, pp. 700–711.
[12] C. Galea and R. A. Farrugia, "Face photo-sketch recognition using local and global texture descriptors," in European Signal Processing Conference (EUSIPCO), Budapest, Hungary, Aug. 2016.
[13] W. Zhang, X. Wang, and X. Tang, "Coupled information-theoretic encoding for face photo-sketch recognition," in IEEE Conf. Comput. Vision Pattern Recog., 2011, pp. 513–520.
[14] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, "DeepFace: Closing the gap to human-level performance in face verification," in IEEE Conf. Comput. Vision Pattern Recog., June 2014, pp. 1701–1708.
[15] C. Peng, N. Wang, X. Gao, and J. Li, "Face recognition from multiple stylistic sketches: Scenarios, datasets, and evaluation," Pattern Recognition, vol. 84, pp. 262–272, 2018.
[16] N. Wang, X. Gao, J. Sun, and J. Li, "Anchored neighborhood index for face sketch synthesis," IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 9, pp. 2154–2163, 2018.
[17] X. Wang and X. Tang, "Face photo-sketch synthesis and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 11, pp. 1955–1967, 2009.
[18] H. Zhou, Z. Kuang, and K. Wong, "Markov weight fields for face sketch synthesis," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 1091–1097.
[19] N. Wang, X. Gao, and J. Li, "Random sampling for fast face sketch synthesis," Pattern Recognition, vol. 76, pp. 215–227, 2018.
[20] L. Zhang, L. Lin, X. Wu, S. Ding, and L. Zhang, "End-to-end photo-sketch generation via fully convolutional representation learning," in Proceedings of the 5th ACM International Conference on Multimedia Retrieval, 2015, pp. 627–634.
[21] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.