
A Comparative Study of Machine Learning Approaches for Handwriter Identification


Amal Durou
Department of Computer and Information Sciences
Northumbria University
Newcastle Upon Tyne, UK
amal.durou@northumbria.ac.uk

Ibrahim Aref
Department of Computer and Information Sciences
Northumbria University
Newcastle Upon Tyne, UK
ibrahim.aref@northumbria.ac.uk

Mosa Elbendak
Department of Computer and Information Sciences
Northumbria University
Newcastle Upon Tyne, UK
mosa.elbendak@northumbria.ac.uk

Ahmed Bouridane
Department of Computer and Information Sciences
Northumbria University
Newcastle Upon Tyne, UK
ahmed.bouridane@northumbria.ac.uk

Somaya Al-Maadeed
Department of Computer Science and Engineering
Qatar University
Doha, Qatar
s_alali@qu.edu.qa

Abstract— During the past few years, writer identification has attracted significant interest due to its real-life applications, including document analysis, forensics, etc. Machine learning algorithms have played an important role in the development of writer identification systems, demonstrating very effective performance. Recently, the emergence of deep learning has led to advances in various computer vision and pattern recognition applications. Therefore, this work aims to assess and compare the performance of one of the deep learning models, AlexNet, against two of the most effective machine learning classification approaches: the Support Vector Machine (SVM) and the K-Nearest-Neighbour (KNN). The evaluation has been conducted using both the IAM dataset for English handwriting and the ICFHR 2012 dataset for Arabic handwriting.

Keywords— convolutional neural network, writer identification, feature extraction, machine learning

I. INTRODUCTION

With the emergence of digitization, there has been an increasing interest in using electronic documents, since they inherently provide more security than paper records and have their own considerable advantages, such as ease of retrieval and access. Several studies have revealed that different people have different handwriting styles, meaning that a handwritten document carries information about the personality of the author who wrote it [1]. Although there exists a great degree of similarity in the writing styles of people, it is also possible to distinguish the writer using various methods. During the past few years, writer identification, which is the process of identifying the author of a specific document among different candidates, has been an active research area and has recently been used in a large variety of real-life applications such as data security in different services, financial activity, and forensic science, including access control and the analysis of ancient documents [2], [3]. Although writer identification has been studied over the last few decades, it still remains one of the most widely researched topics due to recent developments in the fields of pattern recognition and computer vision. Typically, any writer identification model consists of two essential stages: feature extraction and classification [4], [5]. The extraction of the most discriminative features from handwritten scripts is one of the most important steps in this process. In addition, an effective classifier would further enhance the performance.

There exist a number of approaches for writer identification, and most try to analyse the characters and use their properties to extract features and create the feature vector. During the last few years, several approaches were proposed [6]–[8] using local features extracted from handwritten scripts. In general, the performance of writer identification highly depends on the selected features and the classifier used. Therefore, robust and efficient approaches are required to determine and extract the most discriminative features. Due to the extensive variations of the data from one writer to another, conventional machine learning approaches are not yet able to provide the most reliable identification performance. Recently, Deep Learning (DL) has emerged as an attractive concept capable of analysing complex tasks and thus achieving much higher performance. This paper aims to investigate the power of DL for offline writer identification. To achieve this, we carry out a comparative study of the performance achieved against two well-known and proven machine learning algorithms to demonstrate the usefulness of DL.

Deep learning, which was proposed in 2006 by Geoffrey Hinton, has become a powerful method to deal with various computer vision and pattern recognition problems [9]. The main idea of DL is to use a number of hidden layers of a neural network in order to create a more effective high-level representation of the data from a group of low-level features. Moreover, a Deep Neural Network (DNN) can be implemented using various architectures, including Convolutional Neural Networks (CNNs), Autoencoders, Deep Belief Networks (DBNs, which are similar to a stack of Restricted Boltzmann Machines, RBMs), Recurrent Neural Networks (RNNs) and more [10], [11], [27].


CNNs, which are one of the most common DNN architectures, use convolutional layers with pooling layers that reflect the translation-invariant nature of the processed images [27]. In fact, a CNN architecture can be seen as a powerful model of deep learning and has been successfully used to recognise and classify images effectively under various distortions [9], [11], [12]. As shown in Fig. 1, this paper aims to investigate DL for the writer identification task and to evaluate it through an analysis against traditional machine learning. To achieve this, the AlexNet architecture, which is one of the CNN models, has been selected and compared against two of the most effective machine learning classification approaches: the Support Vector Machine (SVM) and the K-Nearest-Neighbour (KNN). For the traditional machine learning approaches, a combination of Bag of Words (BoW) features extracted using the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) is used with SVM and KNN as classifiers. Typically, AlexNet is used as an automated identification process and can be used to select and extract features and to construct new ones. All the experiments have been conducted on the IAM and ICFHR-2012 datasets that we used before in [4], in order to allow comparison with that work.

Fig. 1. An approach for performing offline writer identification

This paper is organised as follows. Section 2 gives an overview of the related work on machine learning algorithms for writer identification. The main idea of the machine learning workflow is presented in Section 3. Section 4 then describes writer identification using the AlexNet model. The datasets used in this work are described in Section 5. The experiments and results are presented in Sections 6 and 7. The conclusion of this work is presented in Section 8.

II. RELATED WORK

The motivation of this work is to investigate the reliability of the writer identification task through a comparison between some of the machine learning techniques and deep learning based algorithms, in order to show how CNN models can be used to manage complex issues in the computer vision area. This section provides the fundamental knowledge and the state-of-the-art methods concerning deep learning for the writer identification process. In recent years, deep learning has attracted increased interest in computer vision applications and especially in writer identification [4], [13], [14]. Many studies have used different models of deep neural networks, which are among the most effective supervised machine learning approaches. Due to the importance of writer identification as a significant task for the automatic processing of handwritten documents, several attempts have been made to automatically identify the writer in the past few years. Prior to the emergence of deep learning techniques, most handwriting identification studies were based on SVM as a classifier, but some image features had to be extracted first.

Some attractive works have been presented in [4] and [5], where several statistical and model-based features were proposed. For example, the authors in [5] proposed an approach to improve a statistical feature, the edge hinge distribution, by exploring a combination of the features with a codebook of graphemes generated using a model-based feature extraction approach. A writer identification method was presented in [4] using the concept of Oriented Basic Image feature extraction and its combination with the grapheme codebook method. In addition, numerous studies have attempted to evaluate several feature extraction approaches and learning algorithms in order to enhance the performance of writer identification [1], [2], [5], [9]. However, it was very difficult to make a comparison between these studies because of the wide variety of feature extraction methods and the different types of datasets used in these works.

During the past few years, a variety of sophisticated methods based on DL have been applied to the writer identification problem. Each has its advantages and drawbacks, but the concept has been used to obtain the best deep learning architecture. CNNs have been widely used by several researchers as an effective approach for tasks such as document analysis, image classification and object recognition. In most cases, DCNNs must be trained on large datasets to achieve high performance, resulting in a very high computational cost, which is a big issue for such networks. The authors in [14] use a DCNN as a generic feature extractor and reuse a previously trained network in order to manage this problem. At the start, a DCNN model was trained on the ILSVRC-12 dataset. Then, the learned neurons (kernels) are preserved and only the classification section is retrained on different datasets. A similar strategy to overcome this issue has been used in [15] for the task of object recognition. The authors used a pre-trained CNN with pre-processed depth images; the features extracted by the CNN are then used by an SVM in order to determine the objects. Their proposed model has been evaluated using the Washington RGB-D Objects dataset. In the same context, the authors in [27] tried to treat some deficiencies of CNNs: the training data has a large impact on the network performance, and they proposed an approach that investigates the statistics of CNN image recognition methods against the Ising model. Moreover, a machine learning model has been proposed in [28]; the model is based on a CNN that is used to learn a matching function, and the training process is carried out on labelled sonar data.
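To make this transfer-learning strategy concrete, the following is a minimal, hypothetical MATLAB sketch (not the cited authors' code) of reusing a pretrained AlexNet: the convolutional stack is kept and only the classification head is replaced and retrained on a new image datastore. The names imdsTrain and numClasses are placeholders for the target data and its number of classes.

    % Hypothetical sketch of the strategy in [14], [15]: reuse a pretrained network,
    % replace only the classification head, and retrain it on the new dataset.
    net = alexnet;                                    % convolutional kernels learned on ILSVRC-12
    inSize = net.Layers(1).InputSize(1:2);
    newLayers = [
        net.Layers(1:end-3)                           % keep the pretrained feature-extraction stack
        fullyConnectedLayer(numClasses)               % new head for the target classes (placeholder count)
        softmaxLayer
        classificationLayer];
    % (To freeze the kept layers completely, their WeightLearnRateFactor and
    %  BiasLearnRateFactor properties would additionally be set to zero.)
    augTrain = augmentedImageDatastore(inSize, imdsTrain, 'ColorPreprocessing', 'gray2rgb');
    opts = trainingOptions('sgdm', 'InitialLearnRate', 1e-4, 'MaxEpochs', 10);
    tunedNet = trainNetwork(augTrain, newLayers, opts);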
Offline writer identification can be classified into two levels of analysis: texture-based and allograph-based. The latter approach depends on local descriptors calculated from tiny letter pieces (allographs). In this context, the authors in [13] introduced a method for writer identification using features extracted from CNNs as local descriptors; a global descriptor was created through GMM supervector encoding. The authors also compared their proposed method to traditional local descriptors such as SIFT and SURF. The proposed approach was evaluated using two datasets, the ICDAR 2013 and the CVL.

Another interesting CNN-based method is the model proposed in [16], where a CNN architecture is employed for writer identification and retrieval. A feature vector is created for each writer and the identification is carried out using a dataset containing pre-calculated feature vectors. Their approach has been evaluated using the ICDAR 2013, ICDAR 2011 and CVL datasets.

III. MACHINE LEARNING WORKFLOW

In the case of writer identification, a machine learning algorithm can be used to assign a handwriting script sample to one author among many available in a dataset [1], [4]. At the learning stage, the algorithm extracts discriminative features from several writers and generates a feature vector for each writer. Then, given a handwritten sample from a subject, the algorithm extracts the corresponding feature vector before comparing it against all the learned feature vectors stored at the learning stage. Using a distance metric, the closest feature vector identifies the most likely writer. Various feature extraction methods with various classifiers such as SVM and KNN can be used [4], [14].
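As an illustration of this matching step only (an assumed snippet, not the exact implementation of this paper), the following MATLAB sketch performs nearest-feature-vector identification, assuming a matrix featTrain with one row per enrolled sample, its writer labels labelsTrain, and a matrix featTest of query feature vectors.

    % Hypothetical sketch: identify each query by its closest enrolled feature vector
    D = pdist2(featTest, featTrain, 'euclidean');   % pairwise distances, queries x references
    [~, idx] = min(D, [], 2);                       % index of the nearest reference sample
    predictedWriter = labelsTrain(idx);             % the closest feature vector names the writer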
In this paper, the concept of BoW has been adopted for feature extraction. In addition, the SURF and SIFT algorithms have been selected as extraction methods in the BoW model and are then combined with SVM and KNN as classifiers. In the literature, SURF and SIFT have been used in a number of previous works as efficient algorithms to solve many problems. In [25], the authors used manipulated colour images in order to detect a colour object; SURF features are extracted from the image and then integrated, and the results showed that an efficient approach had been derived to detect the SURF features. Another approach, in [26], proposed a Computer-Assisted Detection (CAD) system to detect cerebral aneurysms (CA) in DSA angiographic images; different algorithms, such as Maximally Stable Extremal Regions (MSER), SURF and SIFT, which have previously been used as robust feature/interest-point detectors, were combined, and the proposed approach gave good, very encouraging results.

This approach has been adopted here due to its effective performance as demonstrated in writer identification. Usually, the concept of BoW consists of three main steps: feature detection, feature description and codebook generation.

• Feature Detection: an operation used to process an image at low level in order to determine low-level features.

• Feature Description: after the feature detection step, each image is abstracted by several local patches, which are then represented as numerical vectors called feature descriptors. A descriptor is usually able to cope with intensity, rotation, scale and affine variations to some extent. As mentioned above, in this work SIFT and SURF, which are effective extraction techniques, have been used as feature descriptors to extract distinguishing features and descriptors from the images.

• Codebook Generation: in the last step, the BoW model is utilised to convert the vector-represented patches into codebooks.

Then, at the last stage, well-known classifiers such as SVM and KNN are used to classify the test images. A minimal sketch of this overall pipeline is given below.
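The sketch below is a hypothetical MATLAB outline of the BoW pipeline with a SURF-based visual vocabulary and SVM/KNN classification (the paper reports using the Matlab environment, but this is not the authors' exact code). It assumes an imageDatastore built from a folder 'writers' containing one sub-folder of handwriting images per writer; the folder name and 50/50 split are placeholders, whereas the experiments in Section VI use the IAM and ICFHR 2012 splits described in Section V.

    % Hypothetical sketch: BoW (bag of visual words) + SVM / KNN writer classification
    imds = imageDatastore('writers', 'IncludeSubfolders', true, 'LabelSource', 'foldernames');
    [trainSet, testSet] = splitEachLabel(imds, 0.5);        % half of each writer's samples for training

    bag = bagOfFeatures(trainSet);                          % SURF-based codebook (visual vocabulary)

    % SVM route: multiclass image category classifier on the BoW histograms
    svmModel = trainImageCategoryClassifier(trainSet, bag);
    confMat  = evaluate(svmModel, testSet);                 % confusion matrix; diagonal = per-writer accuracy

    % KNN route: encode each image as a visual-word histogram, then k-nearest neighbours
    histTrain = encode(bag, trainSet);
    histTest  = encode(bag, testSet);
    knnModel  = fitcknn(histTrain, trainSet.Labels, 'NumNeighbors', 3);
    knnAcc    = mean(predict(knnModel, histTest) == testSet.Labels);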
IV. WRITER IDENTIFICATION USING CNNS

A CNN architecture comprises five different types of layer: input, convolution, pooling, fully-connected and output layers [17]. The image input layer specifies the size of the input images. Then, in the next layer, the input patch is convolved with different learned kernels using shared weights. Next, the pooling layer tries to minimise the size of the image while preserving the basic information; this can be achieved by applying max pooling over image parts of size 2x2 or 3x3 [14]. Together, these two layers compose the feature extraction phase. After that, the extracted features are processed by allocating weighting values and are then merged in the fully-connected layer. This step represents the classification phase of the convolutional network [13], [14].
However, to make a CNN work efficiently and provide improved results, it is essential to use a large dataset. Therefore, in this paper, a well-known pre-trained CNN model, AlexNet, has been used in order to tackle the high computational cost of the training phase. The first part of the network is used to extract features, while the classification part is performed with an SVM. In addition, an efficient GPU, an NVIDIA GeForce GTX 770 with 16 GB of RAM, has been used in order to make training faster during the convolution operations. Although there exist various CNN architectures [17], [18], including LeNet, ZF Net, GoogLeNet, VGGNet and ResNet (2015), AlexNet has been chosen; it comprises a number of hidden layers: an input layer, five convolutional layers, three pooling layers, three fully-connected layers and, finally, an output layer. The main advantages of the model are its ability to attain outstanding performance with a reduced number of training parameters and its strong qualitative robustness.

However, the AlexNet model requires the selection of suitable techniques in order to make the training process faster and also to avoid over-fitting caused by its complicated structure, large number of training parameters and the vast amount of data. Consequently, the model is constructed with Rectified Linear Units (ReLU) inserted in its structure. In addition, dropout is employed in the fully connected layers (FC6, FC7 and FC8) to enhance robustness, so that the learning process of the hidden layers remains independent of the features extracted from the upper layer [19].
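The following hedged MATLAB sketch illustrates this arrangement, with AlexNet as a fixed feature extractor and an SVM as the classifier, under the same assumed trainSet/testSet datastores as in the earlier bag-of-words sketch; the layer name 'fc7' is one of the three fully-connected layers examined later in Section VII.

    % Hypothetical sketch: pretrained AlexNet features + SVM classification
    net = alexnet;                                             % pretrained network used as feature extractor
    inSize = net.Layers(1).InputSize(1:2);                     % 227x227 input expected by AlexNet
    augTrain = augmentedImageDatastore(inSize, trainSet, 'ColorPreprocessing', 'gray2rgb');
    augTest  = augmentedImageDatastore(inSize, testSet,  'ColorPreprocessing', 'gray2rgb');
    featTrain = activations(net, augTrain, 'fc7', 'OutputAs', 'rows');   % 4096-D vector per image
    featTest  = activations(net, augTest,  'fc7', 'OutputAs', 'rows');
    svmModel  = fitcecoc(featTrain, trainSet.Labels);          % multiclass SVM (one-vs-one ECOC)
    top1      = mean(predict(svmModel, featTest) == testSet.Labels);     % Top-1 identification rate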
V. DATASET

The developed system has been evaluated using the IAM dataset for English handwriting and the ICFHR 2012 dataset for Arabic handwriting. The IAM dataset is one of the most popular English handwriting datasets for writer identification and verification. The IAM test set consists of a total of 1314 handwritten samples, with two samples per writer. The training part contains only the third and fourth samples; these samples were provided by 127 writers, each of whom provided at least four samples. On the other hand, the data used for testing is taken from the 1st and 2nd samples of all 657 writers.

The ICFHR 2012 dataset uses the Arabic language. The documents were scanned and provided as grey-scale PNG images with varied text content. Over 200 writers were asked to write three paragraphs of varied content in Arabic.

VI. EXPERIMENTS

In this section, the experimental procedures required to achieve the main aim of this work are carried out. We conduct two major experiments to assess and compare the performance of a deep learning algorithm, the AlexNet model, against two of the most effective machine learning classification approaches, KNN and SVM. The work has been developed using the Matlab environment. In the next subsections, the feature extraction algorithms are listed together with the types of classifiers and their parameter selection, and the reliability of the writer identification task is investigated.

A. Feature Selection and Extraction

As mentioned above, there are two main phases in the writer identification process: feature extraction and classification. In the feature extraction phase, two techniques are used to obtain a feature vector for each document image (script) in the dataset. The first is to use hand-crafted features of an image, that is, machine learning techniques such as SURF and bag of words. The other technique is to use a pre-trained CNN such as the AlexNet model. A brief description of the feature vectors used in this work is given below.

Bag-of-Words (BoW): the BoW model is a simple algorithm used in the computer vision field, also known as Bag-of-Features (BoF) or Bag-of-Visual-Words (BoVW). It was originally used to extract features from text documents and provides significant advantages in image analysis processes such as image retrieval [20]. The main idea of this approach is to start from an input image; algorithms such as the Harris-Laplace corner detector or a dense sampling strategy, also referred to as detectors, can then be used to find the interest points, which are used to choose local image patches. Once an image patch has been detected, one of the image descriptors is applied; in this experiment, SIFT has been selected as the image descriptor to extract the visual information. Finally, the low-level features obtained are transformed using the BoW approach.

Speeded Up Robust Features (SURF): SURF is a feature detector and descriptor. It has proven to be a very effective algorithm in object detection and pattern recognition applications, although its main disadvantage is its large computational complexity when used in real-time applications [21]. Nevertheless, it is used in several applications because of its powerful attributes, such as scale invariance, translation invariance, lighting invariance, contrast invariance and rotation invariance. Moreover, the algorithm can be used to detect objects in images taken under different extrinsic and intrinsic conditions. The algorithm is designed around four main parts, as shown in Fig. 2 (a minimal detection/description sketch follows the figure):

1) Integral image generation,
2) Fast-Hessian detector (interest point detection),
3) Descriptor orientation assignment (optional),
4) Descriptor generation.

Fig. 2. Flow of the SURF algorithm
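As an illustrative example only (an assumed snippet, not the paper's code), the detector and descriptor stages of SURF map directly onto MATLAB's Computer Vision Toolbox functions; the file name is a placeholder.

    % Hypothetical sketch: SURF interest-point detection and description on one handwriting image
    I = imread('sample_script.png');                         % placeholder file name
    if size(I, 3) == 3, I = rgb2gray(I); end                 % SURF operates on grey-scale images
    points = detectSURFFeatures(I);                          % Fast-Hessian interest points (steps 1-2)
    [descriptors, validPts] = extractFeatures(I, points);    % orientation + 64-D descriptors (steps 3-4)

These raw descriptors are what the BoW codebook quantises into visual words.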
AlexNet Model: this is the second technique used to extract features. It is a pre-trained CNN model and provides an efficient way to obtain the feature vector for both the training and the test datasets. To obtain the features using the AlexNet model, the fully connected layers FC6, FC7 and FC8 are used to obtain learned feature vectors of size 4096, 4096 and 1000 neurons, respectively.

B. Classification Methods

The classification phase is essential because it is used to allocate a sample (or form) to a predefined class, based on a relation between the processed data objects and a pre-defined class label [22]. In this work, we select two of the most efficient classifiers, KNN and SVM, which are used with the machine learning techniques. On the other hand, the softmax layer of the AlexNet model is used as the classifier for the deep features.

K-Nearest Neighbours (KNN): classification can be performed by examining the data and finding the most similar data points in the training data. KNN is a classification approach that usually needs a vector space model in order to classify data points [23]. It mainly relies on the fact that data within the same class have high similarity; therefore, by calculating the similarity to samples of known class, the class of an unknown sample can be estimated. Since the performance of the whole process changes with the choice of the number of neighbours (k), we need to examine multiple values of k and then select the k that increases the test accuracy and also reduces over-fitting [24].
Support Vector Machines (SVM): SVM is a classifier that conducts classification tasks by constructing hyperplanes in a multidimensional space to separate cases of different class labels. The SVM approach is a supervised learning model that supports both regression and classification tasks and can handle multiple continuous and categorical variables [24].

Softmax: softmax is the output layer of the AlexNet model and is used here as a classifier. It looks at the output of the previous layer, which represents the activation maps of high-level features, and determines which features most correlate with a particular class.
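For reference, the softmax layer turns the raw class scores (logits) z_1, ..., z_C produced for the C writers into a probability distribution, and the predicted writer is the class with the highest probability; in LaTeX notation,

    P(i \mid \mathbf{z}) = \frac{e^{z_i}}{\sum_{j=1}^{C} e^{z_j}}, \qquad \hat{y} = \arg\max_i \, P(i \mid \mathbf{z}).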
Therefore, as shown in Fig. 3, this work investigates the identification performance under several scenarios.

VII. RESULTS AND ANALYSIS

In this section, a collection of experiments and results are presented and discussed.
As mentioned above, there are two main parts to the experiments: the first part evaluates the performance of the machine learning algorithms, using the bag of words technique with SURF for feature extraction and then SVM and KNN for classification; the second part investigates the system performance using the AlexNet model. All the experiments have been carried out on the datasets mentioned above.

Fig. 3. The Experiment Scheme

A. Evaluation of Performance based on Machine Learning Model

In this experiment, the image classification analysis has been carried out using the SURF and SIFT descriptors, where the classification employs SVM and KNN to classify the test images. The identification performance has been evaluated and is shown in Table I and Table II for our datasets, IAM and ICFHR-2012. The obtained results, shown in Fig. 4 and Fig. 5, indicate that the SVM classifier slightly outperforms the KNN classifier in both cases.

TABLE I.  SYSTEM PERFORMANCE (TOP-1) BASED ON SURF

  Classifier     KNN     SVM
  IAM            90.0    90.5
  ICFHR-2012     94.0    95.0

Fig. 4. Performance of machine learning model based on SURF

B. Deep Learning

In order to assess the DCNN approach using the AlexNet model, the identification performance is evaluated using the image features extracted from the fully connected layers of the AlexNet model (FC6, FC7 or FC8). We then determine which fully connected layer provides the best identification performance. In this part, the softmax classifier is used. In the first step, we extract the image features from one of the fully connected layers of the AlexNet model, FC6, FC7 or FC8 (one at a time). These image features are then classified using softmax, and the layers are examined individually with respect to their respective performances. The identification performance of our proposed approach is shown in Table III. The obtained results, shown in Fig. 6, indicate that the best performance is provided by FC7, while in the next layer, FC8, the recognition accuracy falls. This happens because the higher FC8 layer of the AlexNet model may discriminate less well between writers, since it captures only abstract, high-level information, whereas the mid-level features in the FC7 layer have additional discriminative power for same-class recognition.
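A hedged MATLAB sketch of this per-layer experiment is given below, reusing the assumed datastores and activations calls from Section VI and training a small softmax classifier on top of each set of fully-connected activations; this is one plausible reading of the procedure, not the authors' exact code, and featureInputLayer requires a recent MATLAB release.

    % Hypothetical sketch: compare FC6 / FC7 / FC8 activations with a softmax classifier on top
    numWriters = numel(categories(trainSet.Labels));
    for c = {'fc6', 'fc7', 'fc8'}
        layerName = c{1};
        ftr = activations(net, augTrain, layerName, 'OutputAs', 'rows');   % training features
        fte = activations(net, augTest,  layerName, 'OutputAs', 'rows');   % test features
        smLayers = [featureInputLayer(size(ftr, 2))
                    fullyConnectedLayer(numWriters)
                    softmaxLayer
                    classificationLayer];                                  % softmax classifier on the activations
        smNet = trainNetwork(ftr, trainSet.Labels, smLayers, trainingOptions('adam', 'Verbose', false));
        acc = mean(classify(smNet, fte) == testSet.Labels);
        fprintf('%s: Top-1 = %.1f%%\n', layerName, 100 * acc);
    end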
To simplify the comparison between the two approaches above, the average performance of the CNN model (AlexNet) has been compared against the two machine learning approaches, SVM and KNN. Although we noticed that SVM and KNN achieved good results, the CNN model achieved better results over the same datasets.

TABLE II.  SYSTEM PERFORMANCE (TOP-1) BASED ON SIFT

  Classifier     KNN     SVM
  IAM            89.0    90.0
  ICFHR-2012     93.0    94.0

Fig. 5. Performance of machine learning model based on SIFT

VIII. CONCLUSION

In this paper, the writer identification performance of machine learning algorithms, using the SURF and SIFT approaches for feature extraction with SVM and KNN classifiers, has been compared against a deep learning algorithm, the CNN (AlexNet model). The analysis was carried out as a comparative study using the IAM dataset for English handwriting and the ICFHR 2012 dataset for Arabic handwriting; these two datasets are widely used databases for writer identification. The obtained results show that deep learning outperforms the machine learning algorithms.

TABLE III.  IDENTIFICATION PERFORMANCE BASED ON FC LAYERS

  Dataset        FC6 (%)   FC7 (%)   FC8 (%)   Average (%)
  IAM            91.0      93.0      92.3      92.1
  ICFHR-2012     98.6      99.5      95.6      97.9

ACKNOWLEDGMENT

This work is supported by the Qatar National Research Fund through National Priority Research Program (NPRP) No. 7-442-1-082. The contents of this publication are solely the responsibility of the authors and do not necessarily represent the official views of the Qatar National Research Fund or Qatar University.

Fig. 6. Identification performance using AlexNet model

REFERENCES

[1] S. M and S. Mary Idicula, "A Survey on Writer Identification Schemes," Int. J. Comput. Appl., vol. 26, no. 2, pp. 23–33, 2011.
[2] S. M. Awaida and S. A. Mahmoud, "Writer identification of Arabic text using statistical and structural features," Cybern. Syst., vol. 44, no. 1, pp. 57–76, 2013.
[3] S. M. Awaida, "State of the art in off-line writer identification of handwritten text and survey of writer identification of Arabic text," Educ. Res. Rev., vol. 7, no. 20, 2012.
[4] A. Durou, I. Aref, S. Al-Maadeed, A. Bouridane, and E. Benkhelifa, "Writer identification approach based on bag of words with OBI features," Inf. Process. Manag., pp. 1–14, 2017.
[5] D. Paraskevas, G. Stefanos, and K. Ergina, "Writer Identification Using a Statistical and Model Based Approach," 2014 14th Int. Conf. Frontiers in Handwriting Recognition, pp. 589–594, 2014.
[6] V. Christlein, D. Bernecker, F. Hönig, and E. Angelopoulou, "Writer Identification and Verification Using GMM Supervectors," Proc. IEEE Winter Conf. Appl. Comput. Vis., pp. 998–1005, 2014.
[7] S. Fiel and R. Sablatnig, "Writer identification and writer retrieval using the fisher vector on visual vocabularies," Proc. Int. Conf. Doc. Anal. Recognition (ICDAR), pp. 545–549, 2013.
[8] S. Fiel and R. Sablatnig, "Writer Retrieval and Writer Identification Using Local Features," 2012 10th IAPR Int. Workshop Doc. Anal. Syst., pp. 145–149, 2012.
[9] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: A Simple Way to Prevent Neural Networks from Overfitting," J. Mach. Learn. Res., vol. 15, pp. 1929–1958, 2014.
[10] S. Stober, A. Sternin, A. M. Owen, and J. A. Grahn, "Deep Feature Learning for EEG Recordings," 2015.
[11] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, 2017.
[12] X. Glorot, A. Bordes, and Y. Bengio, "Deep sparse rectifier neural networks," Proc. 14th Int. Conf. Artif. Intell. Stat. (AISTATS), vol. 15, pp. 315–323, 2011.
[13] V. Christlein, D. Bernecker, A. Maier, and E. Angelopoulou, "Offline Writer Identification Using Convolutional Neural Network Activation Features," Proc. Ger. Conf. Pattern Recognit., vol. 9358, pp. 540–552, 2015.
[14] S. Loussaief and A. Abdelkrim, "Machine learning framework for image classification," 2016 7th Int. Conf. Sci. Electron. Technol. Inf. Telecommun., pp. 58–61, 2016.
[15] M. Schwarz, H. Schulz, and S. Behnke, "RGB-D object recognition and pose estimation based on pre-trained convolutional neural network features," 2015 IEEE Int. Conf. Robot. Autom., pp. 1329–1335, 2015.
[16] S. Fiel and R. Sablatnig, "Writer Identification and Retrieval using a Convolutional Neural Network," Proc. Int. Conf. Comput. Anal. Images Patterns, pp. 26–37, 2015.
[17] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 3431–3440, 2015.
[18] "Application of Deep Learning to Computer Vision: A Comprehensive Study," 2016 5th Int. Conf. Informatics, Electron. Vis. (ICIEV), pp. 592–597, 2016.
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks," Adv. Neural Inf. Process. Syst., pp. 1–9, 2012.
[20] H. Liu, H. Tang, W. Xiao, Z. Guo, L. Tian, and Y. Gao, "Sequential Bag-of-Words model for human action classification," CAAI Trans. Intell. Technol., vol. 1, no. 2, pp. 125–136, 2016.
[21] E. Karami, S. Prasad, and M. Shehata, "Image Matching Using SIFT, SURF, BRIEF and ORB: Performance Comparison for Distorted Images," 2015.
[22] B. V. Ramana Reddy, A. Suresh, M. Radhika Mani, and V. Vijaya Kumar, "Classification of Textures Based on Features Extracted from Preprocessing Images on Random Windows," Int. J. Adv. Sci. Technol., vol. 9, pp. 15–18, Aug. 2009.
[23] J. Ahmed, "Comparison of machine learning and deep learning classifiers for traffic sign image classification."
[24] X. Niu, "Fusions of CNN and SVM Classifiers for Recognizing Handwritten Characters," Apr. 2011.
[25] M. Muthugnanambika and S. Padmavathi, "Feature detection for color images using SURF," 4th Int. Conf. Advanced Computing and Communication Systems (ICACCS), Jan. 2017.
[26] I. Rahmany, B. Arfaoui, N. Khlifa, and H. Megdiche, "Cerebral Aneurysm Computer-Aided Detection System by combining MSER, SURF and SIFT descriptors," 5th Int. Conf. Control, Decision and Information Technologies (CoDIT), Apr. 2018.
[27] A. Gavrilov, A. Jordache, M. Vasdani, and J. Deng, "Convolutional Neural Networks: Estimating Relations in the Ising Model on Overfitting," IEEE 17th Int. Conf. Cognitive Informatics & Cognitive Computing (ICCI*CC), Jul. 2018.
[28] M. Valdenegro-Toro, "Improving sonar image patch matching via deep learning," European Conf. Mobile Robots (ECMR), Sep. 2017.
