Journal Pre-proof
Manabu Okawa
PII: S0031-3203(20)30502-1
DOI: https://doi.org/10.1016/j.patcog.2020.107699
Reference: PR 107699
Please cite this article as: Manabu Okawa, Time-series averaging and local stability-weighted
dynamic time warping for online signature verification, Pattern Recognition (2020), doi:
https://doi.org/10.1016/j.patcog.2020.107699
This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition
of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of
record. This version will undergo additional copyediting, typesetting and review before it is published
in its final form, but we are providing this version to give early visibility of the article. Please note that,
during the production process, errors may be discovered which could affect the content, and all legal
disclaimers that apply to the journal pertain.
The LS-DTW is calculated by incorporating the local stability when matching two signatures.
The proposed method achieves higher performance than existing signature verification systems.
Time-series averaging and local stability-weighted
dynamic time warping for online signature verification
Manabu Okawa∗
Forensic Science Laboratory, Metropolitan Police Department, Tokyo 100-8929, Japan
Abstract
To meet the recent demands for automated security systems, this study proposes
a novel single-template strategy that uses mean templates and local stability-
weighted dynamic time warping (LS-DTW) to simultaneously improve the speed
and accuracy of online signature verification. Specifically, we adopt a recent
time-series averaging method, called Euclidean barycenter-based DTW barycen-
ter averaging (EB-DBA), to obtain an effective mean template set for each feature
while preserving intra-user variability among reference samples. We then esti-
mate the local stability of the mean template set by using direct matching points
that represent stable signature regions in the DTW warping paths between the
mean template set and the references. Subsequently, we boost the discriminative
power in the verification phase using the LS-DTW distance that incorporates the
local stability sequence as the weights for the DTW cost function between the
mean template set and a query signature. Finally, we use the public SVC2004
Task2/MCYT-100 online signature datasets and the recent 3DAirSig in-air signa-
ture dataset to conduct experiments, whose results confirm the effectiveness of the proposed method.
∗Corresponding author
Email address: m_okawa.lab1@m.ieice.org (Manabu Okawa)
1. Introduction
approach uses global information, such as signature duration and number of pen-
ups [12, 13], whereas the function-based approach uses a signature time series of
functions, such as pen position trajectory and pressure [14–19]. Systems using
function-based approaches generally display better verification performance than
those using parameter-based approaches [12–14].
Two main approaches are used for matching: model- and distance-based meth-
ods. Model-based approaches describe data distribution by employing genera-
tive (e.g., hidden Markov models (HMMs) [14] and Gaussian models [15]) and
discriminative (e.g., convolutional neural networks (CNNs) [20], recurrent neu-
ral networks (RNNs) [21], and support vector machines (SVMs) [22]) models.
Distance-based approaches match query signatures with reference sets by employ-
ing distance measures, such as dynamic time warping (DTW) [23]. The distance-
based approaches are superior in forensic situations with limited data available
for the enrollment phase because a model-based approach would face overfitting
problems.
Among various distance-based systems, template matching has been widely
adopted for online signature verification [15–19]. Template matching includes
single- and multiple-template strategies [15, 18]. The single-template strategy
uses a representative sample selected directly from or a mean template created
from an original reference set. The multiple-template strategy calculates the dis-
tances between each of the reference samples and a query signature, then com-
pares them using various measures (e.g., min, max, mean, or median). Meanwhile,
the single-template strategy has advantages, such as speed, security, and tolerance [16–19],
which are in high demand in the current digital era; however,
it does not perform as well as the multiple-template strategy for function-based
approaches [15].
A recent study [18] proposed an effective single-template strategy that uses
mean templates created by a novel time-series averaging method, which is called
the Euclidean barycenter-based DTW barycenter averaging (EB-DBA). The rel-
evant research [19] proposed a novel single-template strategy that uses the mean
templates generated by the EB-DBA and a weighting scheme to efficiently com-
bine the multiple DTW distances from multivariate time series. These methods
provide lower calculation complexity while exhibiting performances that are com-
petitive with that of the multiple-template strategy. These studies brought to light
the limitations of the single-template strategy. However, with the use of a standard
DTW, this strategy still has room for improvement of its performance because the
DTW is sensitive to outliers and noise often contained in a time series [24]. Ad-
ditionally, the need to calculate multiple DTW distances and to tune the proper
parameters for each user resulted in further calculation complexity.
One promising approach is the incorporation of signature stability into verifi-
cation. In forensics, writing conditions (e.g., writing instruments and the writer’s
posture and health) are not ideal and would affect the quality of a signature; there-
fore, forensic document examiners (FDEs) inspect signatures while considering
signature stability as an important indicator for writer characterization [5]. This
approach was inspired by the following basic considerations:
• The wider a person’s range of signature variation, the more susceptible sig-
natures are to forgery because a forger’s mistakes would fall more easily
within a wider range of variation.
• In contrast, the more stable a person’s range of signature variation, the more
difficult it is for forgers to imitate a signature because they must imitate the
stable parts as accurately as possible while discarding their normal hand-
writing habits.
2. Related work
Many online signature verification systems have been proposed over the past
decade [1–3]. In this study, we focused on local stability in signatures and com-
bined it into a distance-based approach for the matching process to improve the
online signature verification performance. This section summarizes the related
studies.
In forensics, writing conditions are unstable and affect the quality of a sig-
nature; therefore, signature stability is key to the precise examinations by FDEs
of writer characterization in signatures [5]. In biometrics, considering the local
stability has provided a higher signature verification performance [10, 11, 25–
29]. Methods for analyzing local stability regions can be classified into feature-,
model-, and data-based approaches [28].
With the feature-based approach, some studies have estimated the local stabil-
ity of signatures using recent computer vision-based techniques. In [30], the local
stability regions in signatures were estimated using local part-based features such
as speeded-up robust features. The features detected in local regions provided
higher discriminative power in signature verification [10, 11].
For the model-based approach, a study used a personal entropy measure com-
puted by a local density estimation based on an HMM to quantify the levels of
both complexity and variability of a signature [31]. The lowest entropy category
exhibited a better performance than the medium and high entropy categories.
In the data-based approach, optical flow analysis was used to estimate lo-
cal stability while assuming a certain deformation between two genuine signatures [27]. Other studies directly estimated stroke-oriented local stability using
the DTW between the sample points of two signature trajectories [25, 26, 28, 29].
The direct matching points (DMPs) were then mainly applied to reveal the local regions, where no significant distortion existed in the warping trajectories between both signatures, and to separate the signature strokes into segments for a stroke-oriented comparison. The
study [32] recently proposed a stability-modulated DTW (SM-DTW) to incor-
porate the most similar parts between two signatures into the distance measure.
However, most of the existing approaches need the multiple-template strategy and
parameter optimization, which result in a high computational complexity.
Additional computation is needed to determine the optimal parameters for all users
when faster techniques are used in online signature verification. To address the abovemen-
tioned issues, a recent study proposed another method of covering the calcula-
tion complexity on a sequence level by reducing the number of matches between
the reference set and a query signature [18]. The method adopted the single-
template strategy with dependent warping for the DTW instead of using the com-
mon multiple-template strategy with independent warping; thus, a smaller number
of DTW calculations (e.g., reduced in [18] to a maximum of 34 DTW calculations
for the verification phase) could contribute to rapid verification.
The other fundamental drawback of the DTW is its sensitivity to noise and
outliers because it has to pair all the elements of a sequence [24]. To compen-
sate for this drawback, a weighted DTW (WDTW), which adds a multiplicative
weight penalty based on the distances between the points in the warping path, was
proposed [36]. With this approach, the cost matrix is changed to incorporate a
modified logistic weight function (MLWF) that assigns an additional weight to
the DTW cost function between the reference and test points. As a result, time
instants with higher phase differences would be penalized more than instants near
the reference points. Sliding window dynamic time warping (SW-DTW) was also
proposed as a relevant approach [37]. The SW-DTW adopted the modified DTW
cost function using a window function to consider the context by incorporating a
weighted average of the neighboring distances. However, these modified DTW
methods systematically assigned weights to the cost function without considering
the data’s nature. Consequently, they still retained sensitivity to sequence shape
variations arising from the pairing of all elements between sequences.
Moreover, the need to tune the proper parameters before applying them resulted
in further calculation complexity.
To improve online signature verification, our study proposes a novel single-
template strategy using mean templates and DTW weighted by the local stability
sequence (i.e., LS-DTW).
Note that this study is an extension of our previous studies [38, 39]. The
current paper describes the revised method and additional experiments as outlined
below:
(1) The proposed approach updates the local stability sequence for the LS-
DTW by using modified DMPs instead of multiple matching points (MMPs),
to further incorporate detailed local stability between the mean templates
and the references.
(2) This study conducts experiments using two public online signature datasets,
SVC2004 Task2 [40] and MCYT-100 [41], in both the random- and skilled-
forgery scenarios, to further confirm the generalization performance of the
proposed method.
3.1. Outline
Figure 1 shows an outline of the proposed method. After being provided with
some samples of online signatures as a reference set, we applied preprocessing to
improve the quality of the signatures and extract features.

Fig. 1: Outline of the proposed online signature verification method.

Next, in the enrollment phase, a single-template strategy with a mean template set per feature was generated using the EB-DBA and the local stability sequence was constructed using
the reference set. In the verification phase, the dissimilarity between a test sample
and the mean template set of a purported user was calculated using the LS-DTW
distance. Finally, the proposed system provides an accept/reject result for the test
sample if the dissimilarity is below/above a certain defined threshold.
The details of the subprocesses are explained in the following sections.
3.2. Preprocessing
Signature samples contain natural fluctuations even when written by the same
user [5]. To handle these fluctuations, this study adopted a common normalization
for the horizontal and vertical pen coordinates {x(i), y(i)}:

x̂(i) = (x(i) − x_g) / (x_max − x_min),  (1)

ŷ(i) = (y(i) − y_g) / (y_max − y_min),  (2)

where (x_g, y_g) is the centroid of the signature, while {x_min, y_min} and {x_max, y_max} are the minimum and maximum values, respectively, of {x(i), y(i)} for i = 1, 2, . . . , I with an I-point-length signature.
This study adopted only the normalization method in preprocessing based on
facts and preliminary experiments [19]. The actual effectiveness of the normal-
ization was demonstrated by a recent study [18].
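As a sketch, the normalization of Eqs. (1)–(2) can be written as follows (assuming NumPy; the function name is illustrative):

```python
import numpy as np

def normalize_coordinates(x, y):
    """Eqs. (1)-(2): subtract the signature centroid (x_g, y_g) and
    divide by the coordinate range to normalize position and scale."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    x_hat = (x - x.mean()) / (x.max() - x.min())
    y_hat = (y - y.mean()) / (y.max() - y.min())
    return x_hat, y_hat
```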
Here, the derivatives of the discrete-time signals (i.e., ẋ(i), ẏ(i), θ̇(i), ν̇(i)) are calculated by a second-order regression that removes small noisy variations according to

ḟ(i) = Σ_{τ=1}^{2} τ (f(i + τ) − f(i − τ)) / (2 Σ_{τ=1}^{2} τ²).  (7)
Finally, each time sequence is normalized to a mean of zero and a unit standard
deviation to accommodate the differences in the ranges of values taken by the
seven function-based features.
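A sketch of the derivative of Eq. (7) and the final z-normalization follows (the boundary handling by endpoint replication is an assumption not specified in the text):

```python
import numpy as np

def second_order_derivative(f):
    """Eq. (7): f'(i) = sum_{tau=1..2} tau*(f(i+tau) - f(i-tau))
                        / (2 * sum_{tau=1..2} tau**2)."""
    f = np.asarray(f, dtype=float)
    g = np.pad(f, 2, mode="edge")          # replicate endpoints (assumption)
    i = np.arange(2, len(f) + 2)
    num = 1 * (g[i + 1] - g[i - 1]) + 2 * (g[i + 2] - g[i - 2])
    return num / (2 * (1 ** 2 + 2 ** 2))

def z_normalize(f):
    """Scale a feature sequence to zero mean and unit standard deviation."""
    f = np.asarray(f, dtype=float)
    return (f - f.mean()) / f.std()
```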
Fig. 2: Example of a signature image (top left) and the corresponding features (pen coordinates
“X̂” and “Ŷ,” pen pressure “P,” path-tangent angle “Ang,” path velocity magnitude “Vel,” log
curvature radius “Logcr,” and total acceleration magnitude “Tam”). Only a forged signature is
depicted here to protect the signatories’ privacy.
The single-template strategy has advantages, such as speed, security, and tol-
erance; however, it has long been established that the single-template strategy
does not perform as well as the multiple-template strategy in the function-based
approach [15].
To boost the discriminative power of template matching, we developed a novel
single-template strategy using the mean template set and LS-DTW distances. The
detailed steps are as follows:
(1) In the enrollment phase, a set of mean templates is first calculated by using a
novel time-series averaging method, called EB-DBA [18], for each feature.
(2) Next, the local stability of the mean template set is estimated by using the
DMP-based technique.
Fig. 3: Process of the proposed single-template strategy using an example of five reference time
series per feature of a user (dashed lines in different colors): (1) mean template creation with the
EB-DBA per feature (solid black lines); (2) local stability calculation of the mean template set;
and (3) LS-DTW calculation between the set of mean templates and a test sample using the local
stability sequence.
scheme:
• Step 1 computes the DTW between each individual sequence and the tem-
porary averaged sequence to find the best alignment between the averaged
sequence and the N reference sequences.
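Under the assumption that EB-DBA initializes DBA with the Euclidean barycenter of linearly resampled references, the averaging scheme can be sketched as follows (univariate case for brevity; the template length and iteration count are illustrative choices):

```python
import numpy as np

def dtw_path(a, b):
    """Standard DTW with a squared local cost; returns the optimal
    warping path as (template_index, reference_index) pairs."""
    I, J = len(a), len(b)
    D = np.full((I + 1, J + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, I + 1):
        for j in range(1, J + 1):
            D[i, j] = (a[i - 1] - b[j - 1]) ** 2 + min(
                D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    path, i, j = [], I, J
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def eb_dba(refs, n_iter=5):
    """EB-DBA sketch: Euclidean-barycenter initialization followed by
    DBA refinement (each template point becomes the mean of the
    reference points that DTW aligns to it)."""
    L = int(round(np.mean([len(r) for r in refs])))   # template length: assumption
    t = np.linspace(0.0, 1.0, L)
    resampled = [np.interp(t, np.linspace(0.0, 1.0, len(r)), r) for r in refs]
    avg = np.mean(resampled, axis=0)                  # Euclidean barycenter
    for _ in range(n_iter):
        buckets = [[] for _ in range(L)]
        for r in refs:
            for i, j in dtw_path(avg, r):
                buckets[i].append(r[j])
        avg = np.array([np.mean(b) for b in buckets])
    return avg
```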
3.4.2. Local stability estimation
This study applied the technique based on the DMPs to estimate the local
stability of the mean template set while modifying the original calculation process
of the DMPs [25, 26, 28, 29] for using the single-template strategy. Concretely,
the DMPs detected the direct matching points of the DTW trajectories where one-
to-one matching relations existed between the set of mean templates and all the
reference signatures while avoiding noise and outlier points. Finally, we used the
averaged DMPs as the local stability of the mean template set.
Assume a set of N references S^R = {S^{R(n)}}_{n=1}^{N}, each with a J_n-length multivariate time sequence, and the I-length multivariate time sequence of the mean template created from S^R,

S^M = {s^M_1, s^M_2, . . . , s^M_i, . . . , s^M_I}.
The estimation process of the local stability is described as follows (see the
example shown in Fig. 4):
(1) First, we computed the standard DTW between S^M and S^R to obtain a set of N optimal warping paths, where a K_n-length warping path is described by W^n(S^M, S^{R(n)}) = {(p^n_k, q^n_k)}_{k=1}^{K_n} with 1 ≤ p^n_k ≤ I, 1 ≤ q^n_k ≤ J_n, and max(I, J_n) ≤ K_n ≤ I + J_n − 1.
(2) Next, we calculated the N sequences of the DMPs from the set of warping paths, W(S^M, S^R). When the multiplicity of the warping relation for each component is defined as the number of consecutive occurrences of the component index appearing in W^n(S^M, S^{R(n)}), the multiplicities corresponding to the respective matching components of p^n_k and q^n_k are obtained. A point i at s^M_i, where the multiplicity simultaneously satisfies both m^n_i = 1 on the template side and 1 on the reference side (i.e., a one-to-one matching relation), is regarded as a DMP.
(3) Finally, we calculated the averaged DMP sequence from the N sequences
of the DMPs and obtained the I-length local stability sequence LS:
{ls_i}_{i=1}^{I} = { (1/N) Σ_{n=1}^{N} c^n_i }_{i=1}^{I},  (8)

where

c^n_i = 1 if a point i at s^M_i is a DMP, and c^n_i = 0 otherwise,

and ls_i satisfies 0 ≤ ls_i ≤ 1: it is 0 when all the pairs of the matching point are MMPs and 1 when all are DMPs.
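Steps (2)–(3) can be sketched as follows, where each warping path is a list of (template_index, reference_index) pairs (the function names are illustrative):

```python
from collections import Counter

def dmp_indicator(path, template_len):
    """c_i^n of Eq. (8): 1 if template point i takes part in a
    one-to-one (direct) match in the warping path, else 0."""
    mult_p = Counter(i for i, _ in path)   # multiplicity on the template side
    mult_q = Counter(j for _, j in path)   # multiplicity on the reference side
    c = [0] * template_len
    for i, j in path:
        if mult_p[i] == 1 and mult_q[j] == 1:
            c[i] = 1
    return c

def local_stability(paths, template_len):
    """ls_i of Eq. (8): average of the DMP indicators over N references."""
    cs = [dmp_indicator(p, template_len) for p in paths]
    return [sum(c[i] for c in cs) / len(paths) for i in range(template_len)]
```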
Fig. 4: Example of the estimation process using toy samples for local stability: (1) the warp-
ing relations between the mean template “MT” and five references, “R1” to “R5,” with different
lengths (“0” to “8” in each sequence revealing the indices), (2) the DMPs, and (3) the local stability
sequence.
discriminative power of the DTW distances between a set of mean templates and
a test sample.
Considering D-dimensional multivariate time sequences, a mean template S^M, and a test sample S^T, we obtain

S^M(d) = {s^M_1(d), s^M_2(d), . . . , s^M_i(d), . . . , s^M_I(d)},
An I × J cost matrix with dependent warping is then constructed with a cost
function d(·, ·) between a set of two points in the time sequences while weighting
the Euclidean distance by the local stability LS:
d(s^M_i, s^T_j) = ls_i × Σ_{d=1}^{D} (s^M_i(d) − s^T_j(d))².  (9)
with 1 ≤ p_k ≤ I and 1 ≤ q_k ≤ J from the cost matrix while satisfying the conditions set forth in [23].
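A sketch of the LS-DTW distance with the weighted local cost of Eq. (9) (dependent warping over the D dimensions; the boundary conditions follow the standard DTW recursion):

```python
import numpy as np

def ls_dtw(template, test, ls):
    """LS-DTW sketch: DTW whose local cost is the squared Euclidean
    distance weighted by the template's local stability (Eq. (9)).
    template: (I, D) array, test: (J, D) array, ls: (I,) array."""
    I, J = len(template), len(test)
    acc = np.full((I + 1, J + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, I + 1):
        for j in range(1, J + 1):
            cost = ls[i - 1] * np.sum((template[i - 1] - test[j - 1]) ** 2)
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[I, J]
```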
Figure 5 shows examples of the alignment results of a mean template and a test
sample generated by the DTW and the LS-DTW. Compared to the conventional
DTW, the proposed LS-DTW
(2) decreases the local distances that are unimportant for verification, whose alignments belong to unstable points in the mean template, and

(3) conversely penalizes/increases the local distances that are important for verification, whose alignments belong to the stable points in the mean template.
3.5. Evaluation
After the dissimilarity between the mean template set of the purported user and
a test sample is calculated by using the LS-DTW distance in the verification phase,
the system provides an accept/reject result if the dissimilarity is below/above the
designated writer-dependent threshold.
Finally, we evaluated the signature verification performance in terms of the
equal error rate (EER), where the false rejection rate was equal to the false accep-
tance rate.
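For reference, the EER can be approximated by sweeping the decision threshold over the observed dissimilarity scores (a minimal sketch; interpolation between thresholds is omitted):

```python
import numpy as np

def equal_error_rate(genuine, forgery):
    """Find the threshold where the false rejection rate (genuine
    dissimilarities above the threshold) is closest to the false
    acceptance rate (forgery dissimilarities at or below it)."""
    genuine, forgery = np.asarray(genuine), np.asarray(forgery)
    best_far, best_frr = 1.0, 0.0
    for t in np.sort(np.concatenate([genuine, forgery])):
        frr = float(np.mean(genuine > t))
        far = float(np.mean(forgery <= t))
        if abs(far - frr) < abs(best_far - best_frr):
            best_far, best_frr = far, frr
    return (best_far + best_frr) / 2.0
```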
Fig. 5: Examples of the alignment results between a mean template (upper curved line in red) and
a test sample (lower curved line in blue) generated by (a) DTW and (b) LS-DTW, respectively. The
thickness of the alignment between the matching points (straight lines in black) is in proportion to
the local stability’s value of the mean template. The alignment is depicted as a blank if the value
of the local stability is zero.
4. Experiments
4.1. Methods
Skilled-forgery detection is a challenging task even for FDEs [4–6, 9]. To
assist in this task, we attempt herein to distinguish skilled forgeries from authentic
signatures. SVC2004 Task2 [40] and MCYT-100 [41] are widely used online
signature datasets containing highly skilled forgeries, as is evident in forensic
cases; therefore, we adopted both datasets for our experiments.
The SVC2004 Task2 dataset [40] consists of 1,600 signatures, including West-
ern and Asian signatures, from 40 writers. The data include horizontal and vertical
coordinates, pressure, azimuth, inclination with time stamp, and pen up/down sta-
tus, all of which were captured through a digitizing tablet at a sampling rate of
100 Hz. For each writer, the dataset contains 20 genuine and 20 skillfully forged
signatures. To avoid privacy issues, the writers were advised to provide invented
signatures as genuine after sufficient practice. Consequently, they tend to provide
simple stylized genuine signatures. Skillfully forged signatures were collected
from at least four other contributors who were given time to ensure that the forged
20
signatures were as close as possible to the targeted genuine signatures. Following
the experiments described in the previous studies (Table 1), we randomly selected
N = 5 or N = 10 genuine signatures as the reference set in each experiment
on this dataset. The remaining 15 or 10 genuine signatures, respectively, and the 20 skillfully forged signatures in each case were used as the test samples in the verification phase.
The MCYT-100 dataset [41] consists of 5,000 Western signatures from 100
writers. The data include horizontal and vertical coordinates, pressure, azimuth,
and inclination with time stamp, all of which were captured through a digitizing
tablet at a sampling rate of 100 Hz. Each writer is represented by 25 samples of
both genuine and skillfully forged signatures. The latter were produced by five
users who observed and copied static images of genuine signatures to produce
validly acquired forgeries. In each experiment on this dataset, we randomly se-
lected N = 5 genuine signatures as the reference set according to the experiments
conducted in the previous studies (Table 2). The remaining 20 genuine signatures
and 25 skillfully forged signatures were used for the test samples in the verifica-
tion phase.
To prevent selection bias, we repeated all the experiments five times on both the SVC2004 Task2 and MCYT-100 datasets, following the protocols of the previous studies listed in Tables 1 and 2, and finally obtained the averaged EERs.
4.2. Results
4.2.1. Overall performance
To measure the effectiveness of the proposed single-template strategy, this study used the SVC2004 Task2 and MCYT-100 datasets to compare it with a method using a conventional single-template strategy with the DTW distances [18].

Fig. 6: Overall performance of the single-template strategy for the (a) SVC2004 Task2 and (b) MCYT-100 datasets.

Fig. 7: Effects of the local stability types (i.e., MMPs and DMPs) for the (a) SVC2004 Task2 and (b) MCYT-100 datasets.

Furthermore, we compared the strategy with the WDTW [36] and the SW-DTW [37] as representatives of the previous weighting methods for the DTW
cost function under the same experimental conditions to confirm the effectiveness
of the proposed LS-DTW strategy for the distance-based method. We then chose
the WDTW and SW-DTW parameters based on our preliminary experiments.
Each type of DTW is summarized below:
study [18].
• “WDTW”: applying the MLWF as the weights for the cost function [36]:
w(a) = w_max / (1 + exp(−g(a − m_c))),  (12)

where w_max is an upper bound on the weight (set to 1), m_c is the midpoint of a sequence, and g is a parameter that controls the curvature of the function.
In this study, the distance between two points in the time sequences is cal-
culated using a weight penalty w|i−j| for a warping distance of |i − j|:
d(s^M_i, s^T_j) = w_{|i−j|} × Σ_{d=1}^{D} (s^M_i(d) − s^T_j(d))².  (13)
• “LS-DTW”: the proposed LS-DTW with the applied local stability sequence
as the weights for the cost function.
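As an illustration of the MLWF in Eq. (12) (the value of g below is illustrative; such parameters are tuned empirically in practice):

```python
import math

def mlwf_weight(a, m_c, g=0.25, w_max=1.0):
    """Eq. (12): logistic weight that grows with the phase difference a,
    centered at the sequence midpoint m_c with curvature g."""
    return w_max / (1.0 + math.exp(-g * (a - m_c)))
```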
These results confirmed that the proposed method using the LS-DTW provides
an effective single-template strategy for online signature verification.
Figure 7 compares the EERs of the two types of local stability using the
proposed single-template strategy. In both the SVC2004 Task2 and MCYT-100
datasets, the DMPs provided EERs lower than those of the MMPs for the LS-
DTW.
These results confirm that the DMPs have more discriminative power than the MMPs for the local stability of the single-template strategy. Considering these findings, we adopted the DMPs in the weighting process of the LS-DTW for the single-template strategy.
Table 1: Comparison between the proposed method and other systems for the SVC2004 Task2
dataset.
Method #References EER-SFa EER-RFb
Dynamic time functions and HMM [14] 5 6.90 3.02
SVM with the longest common subsequence kernel function [22] 5 6.84 0.12
Mean templates and multiple DTW distances with gradient boosting [19] 5 2.98 -
Enhanced contextual DTW based system using vector quantization [45] 5 2.73 -
Cosine similarities from 1-D CNNs trained with synthesized signatures [20]∗ 5 2.63 -
Modified DTW with signature curve constraint [46] 5 2.60 -
DTW and warping path-based features [47] 5 2.53 -
Two-stage method using shape contexts and function features [48] 5 2.39 0.3
Proposed 5 2.08 0.11
Semi-parametric method based on discrete cosine transform and sparse representation [13] 10 3.98 0.10
Template selection and DTW [49] 10 2.84 -
DTW and warping path-based features [47] 10 2.79 -
Mean templates and multiple DTW distances with random forests [16] 10 2.2 -
Mean templates and multiple DTW distances with gradient boosting [17, 19] 10 1.80 -
Proposed 10 1.53 0.03
a EER (%) for the skilled-forgery scenario.
b EER (%) for the random-forgery scenario. The blank “-” denotes data not reported in the paper.
∗ Writer-independent approach.
Fig. 8: An example of an in-air signature from the 3DAirSig dataset. The black line shows the 3-D
signature trajectory and the colored lines drawn on the back wall, left side wall, and floor indicate
the 2-D trajectories of (x̂(i), ŷ(i)), (ŷ(i), ẑ(i)), and (x̂(i), ẑ(i)), respectively. To protect privacy,
only a forged signature is depicted.
Table 2: Comparison between the proposed method and other systems for the MCYT-100 dataset.
Method #References EER-SFa EER-RFb
Signature partitioning and the weights of importance for selected partitions [50] 5 4.88 -
Histogram-based features and Manhattan distance [12] 5 4.02 1.15
Combination of global and regional features [51] 5 3.69 1.08
DTW and sigma-lognormal analysis [52] 5 3.56 1.01
Information divergence-based matching strategy [15] 5 3.16 -
SM-DTW using distance normalization [32] 5 3.09 1.30
Interval valued symbolic representation with writer dependent parameters [53] 5 2.2 1.0
Modified DTW with signature curve constraint [46] 5 2.17 -
RNNs for representation learning in the DTW framework [21]∗ 5 1.81 0.24
Enhanced contextual DTW based system using vector quantization [45] 5 1.55 -
Mean templates and DTW [18] 5 1.34 -
Mean templates and multiple DTW distances with gradient boosting [19] 5 1.28 -
DTW and warping path-based features [47] 5 1.15 -
Cosine similarities from 1-D CNNs trained with synthesized signatures [20]∗ 5 0.93 -
Proposed 5 0.72 0.07
a EER (%) for the skilled-forgery scenario.
b EER (%) for the random-forgery scenario. The blank “-” denotes data not reported in the paper.
∗ Writer-independent approach.
Table 3: Comparison between the proposed method and other systems for the 3DAirSig dataset.
Method #References EER-SFa EER-RFb
Baseline (3DAirSig [42]) 5 0.46 -
Mean templates and DTW [18] (“DTW” in Section 4.2.1) 5 0.6 0.57
Mean templates and WDTW (“WDTW” in Section 4.2.1) 5 0.1 0.57
Mean templates and SW-DTW (“SW-DTW” in Section 4.2.1) 5 0.1 0.57
Mean templates and LS-DTW (MMPs) [38, 39] (“MMPs” in Section 4.2.2) 5 0 0.57
Proposed 5 0 0.52
a EER (%) for the skilled-forgery scenario.
b EER (%) for the random-forgery scenario. The blank “-” denotes data not reported in the paper.
Tables 1 and 2 display the EERs of the proposed method and those from other
relevant previous studies that used the SVC2004 Task2 and MCYT-100 signature
datasets. Note that some of the previous studies examined not only the skilled-
forgery scenario, but also the random-forgery scenario (i.e., zero-effort attack);
therefore, for a fair and detailed comparison, the experiments compared the results
of both scenarios. The random-forgery scores were then obtained by comparing
the target writer’s mean template set to the remaining genuine signatures from the
target writer as genuine signatures and the first genuine signature sample from
each of the remaining writers as forged signatures in the verification phase. The
tables show that, for both the SVC2004 Task2 and MCYT-100 datasets, the proposed method demonstrated a lower EER than the other conventional methods in both the random- and skilled-forgery scenarios, while adopting the single-template strategy, which allowed for a lower calculation complexity.
Thus, we can conclude that the proposed single-template strategy is effective
for online signature verification.
signatures and 25 skilled forgeries obtained from five impostors for the verifica-
tion phase. The random-forgery scores were obtained by comparing the target
writer’s mean template set to the remaining 10 genuine signatures from the target
writer as genuine signatures and all the genuine signature samples from each of
the remaining writers as forged signatures in the verification phase.
Table 3 displays the EERs of the baseline system [42] and some other methods
considered in our experiments. In the 3DAirSig dataset, the proposed method
demonstrated the lowest EERs compared to the conventional methods in both the
random- and skilled-forgery scenarios.
Thus, we can conclude that the proposed single-template strategy is effective
for human motion analysis such as in-air signature verification, where the perfor-
mance is susceptible to a change in the intra-class variability.
5. Conclusion
We conducted experiments by comparing the performances of the proposed method and those of
the other systems using the public SVC2004 Task2/MCYT-100 online signature
datasets and the recent 3DAirSig in-air signature dataset. For all these datasets, the
results showed lower EERs with the single-template strategy in both the random-
and skilled-forgery scenarios, which provided lower calculation complexity and
rendered the proposed method effective for online signature verification.
This work contributes to the broader goal of finding an effective single-template
strategy for online signature verification, which is also relevant to time-series clas-
sification/clustering [33, 34]. Therefore, the proposed method can be extended
to other time-series analysis systems that demand high speed, security, and tolerance.
Furthermore, unlike recent black-box methods, such as deep learning algorithms,
the proposed system is composed of stepwise methods that aid the interpretability
of the verification process. Hence, the proposed method is particularly useful for
forensic and security applications, for which fairness, accountability, and
transparency are needed [54].
In future work, it would be interesting to extend the proposed method to a writer-independent system using a recent dataset such as DeepSignDB with the experimental protocol in [55] to further increase its practicality in real scenarios.
Acknowledgment
This work was supported in part by JSPS KAKENHI Grant Number JP 20H01154.
References
[7] M. Okawa, K. Yoshida, Offline writer verification using pen pressure infor-
mation from infrared image, IET Biometrics 2 (4) (2013) 199–207.
[8] M. Okawa, K. Yoshida, Text and user generic model for writer verification
using combined pen pressure information from ink intensity and indented
writing on paper, IEEE Trans. Human-Mach. Syst. 45 (3) (2015) 339–349.
[10] M. Okawa, From BoVW to VLAD with KAZE features: Offline signa-
ture verification considering cognitive processes of forensic experts, Pattern
Recognit. Lett. 113 (2018) 75–82.
[13] Y. Liu, Z. Yang, L. Yang, Online signature verification based on DCT and
sparse representation, IEEE Trans. Cybern. 45 (11) (2015) 2498–2511.
[16] M. Okawa, A single-template strategy using multi-distance measures and
weighting for signature verification, in: Proc. 18th IEEE Int. Symp. Signal
Process. and Inf. Technology (ISSPIT 2018), 2018, pp. 46–51.
[18] M. Okawa, Template matching using time-series averaging and DTW with
dependent warping for online signature verification, IEEE Access 7 (2019)
81010–81019.
[21] S. Lai, L. Jin, Recurrent adaptation networks for online signature verifica-
tion, IEEE Trans. Inf. Forensics Security 14 (6) (2019) 1624–1637.
[24] T. Górecki, Using derivatives in a longest common subsequence dissimilarity
measure for time series classification, Pattern Recognit. Lett. 45 (2014) 99–
105.
[26] K. Huang, H. Yan, Stability and style-variation modeling for on-line signa-
ture verification, Pattern Recognit. 36 (10) (2003) 2253–2270.
personal entropy, in: IEEE 3rd Int. Conf. Biometrics: Theory, Applications,
and Systems (BTAS 2009), 2009, pp. 1–6.
using local stability-weighted DTW, in: Proc. 9th IEEE Global Conf. Con-
sumer Electronics (GCCE 2020), 2020, (accepted).
selection for on-line signature verification, Pattern Recognit. 74 (2018) 422–
433.
[48] Y. Jia, L. Huang, H. Chen, A two-stage method for online signature veri-
fication using shape contexts and function features, Sensors 19 (8) (2019)
1808.
[49] N. Liu, Y. Wang, Template selection for on-line signature verification, in:
Proc. 19th Int. Conf. Pattern Recognit. (ICPR 2008), 2008, pp. 1–4.
[54] M. Okawa, Pushing the limits of online signature verification in the digital
age, in: INTERPOL Digital 4N6 Pulse, Vol. VII, 2020.
Declaration of interests
☒ The authors declare that they have no known competing financial interests or personal relationships
that could have appeared to influence the work reported in this paper.
Manabu Okawa received the B.S. degree in science from Tokyo Metropolitan University, Tokyo,
Japan in 2001, the M.S. degree in engineering from Shinshu University, Nagano, Japan in 2008,
and the Ph.D. degree in engineering from the University of Tsukuba, Tsukuba, Japan in 2014.
At present, he is with the Forensic Document Section of Forensic Science Laboratory (formerly
known as Criminal Investigation Laboratory) of the Metropolitan Police Department, Tokyo,
Japan. From 2016 to 2019, he also worked as a part-time lecturer at the University of Tsukuba.
His research interests include biometrics, forensics, and security.