
Digital Signal Processing 18 (2008) 940–950

Contents lists available at ScienceDirect

Digital Signal Processing

www.elsevier.com/locate/dsp

A different approach to off-line handwritten signature verification using the optimal dynamic time warping algorithm
İnan Güler a,∗, Majid Meghdadi b
a Department of Electronic and Computer Technology, Gazi University, 06500 Teknikokullar, Ankara, Turkey
b Computer Department, Zanjan University, Zanjan, Iran

a r t i c l e   i n f o

Article history:
Available online 18 June 2008

Keywords:
Off-line handwritten signature verification
Optimal DTW algorithm
Pattern recognition

a b s t r a c t

In this paper, a method for automatic handwritten signature verification (AHSV) is described. The method relies on global features that summarize different aspects of signature shape and the dynamics of signature production. The algorithm is designed to detect the signature regardless of its thickness and size, and the results show that its detection accuracy is acceptable. In this method, the signature is first pre-processed and the noise of the sample signature is removed. The signature is then analyzed, and its specification is extracted and saved in a string for comparison. Finally, using an adapted version of the dynamic time warping algorithm, the signature is classified as genuine or forged.
© 2008 Elsevier Inc. All rights reserved.

1. Introduction

Theoretically, the problem of handwritten signature verification is a pattern recognition task used to discriminate two
classes of original and forgery signatures. A signature verification system must be able to detect forgeries and to reduce
rejection of genuine signatures simultaneously.
Automatic discrimination of a signature from similar and forged ones is very important. One of the problems of signature verification is that a person's own signature varies from time to time: the size of the signature relative to its components is not fixed, and the signature may be rotated. This study tries to be flexible with regard to the above-mentioned changes.
Human signature verification (HSV) systems are usually built on either on-line or off-line approaches, depending on the kind of data and application involved. On-line systems generally perform better than off-line ones, but they require the presence of the author during both the acquisition of the reference data and the verification process, limiting their use to certain kinds of applications.
Off-line systems generally do not require any complex apparatus other than a simple scanner, but they usually require a more complex or refined preprocessing step and also generate a greater database size [1,2]. A feature is called global if it is extracted from the whole signature (see Table 1) and local if it is extracted from each sample point (see Table 2) [3,4]. Two different approaches to signature verification are common using these two types of features. In the first approach, called the parametric approach, only some of the available values are used: a number of global feature values, called statistical features or parameters, are computed and compared [5].

* Corresponding author.
E-mail addresses: iguler@gazi.edu.tr (İ. Güler), majid_m1344@yahoo.com (M. Meghdadi).

1051-2004/$ – see front matter © 2008 Elsevier Inc. All rights reserved.
doi:10.1016/j.dsp.2008.06.005

Table 1
Important global features

Avg. pressure Avg. velocity First moment


Var. of pressure Max velocity Signature height (H)
Max pressure Avg. velocity ÷ max velocity Signature width (W )
Min pressure Avg. x velocity W to H ratio
Max pressure–min pressure Avg. y velocity Pen down samples (D)
Point of max pressure xy velocity correlation Time of max velocity ÷ D
Max acceleration Max x velocity Time of max x velocity ÷ D
Avg. acceleration Max y velocity Point max. velocity ÷ D
Var. of acceleration Var. of x velocity Avg. azimuth
Avg. x acceleration Num. pts. with positive x velocity Avg. elevation
Avg. y acceleration Var. of velocity Number of pen ups and downs
Total dots recorded RMS velocity RMS acceleration
Time of second pen down Direction at first pen down, first pen up

Table 2
Important local features

Signature length Initial direction Pen elevation


Horizontal position Total velocity x acceleration
Vertical position x velocity y acceleration
Normal pressure y velocity Log radius of curvature
Path tangent angle Total acceleration Pen azimuth

Once a set of features has been selected, there may be no need to store the reference signature itself; only the feature values of the reference signature need to be stored, and when a test signature is presented, only its feature values are needed. This saves storage, which may be at a premium if, for example, the reference signature is needed on a credit card.
In the second approach, called the functional approach, all the collected position values (local features) of the test signature and the reference signature are compared point by point, by computing a set of correlation coefficients between the two signatures. Such a comparison may require signature segmentation to be able to compare the corresponding segments.
Designing off-line systems is more difficult because, as shown in Tables 1 and 2, the number of available features decreases from on-line to off-line systems.
This study emphasizes global features and deals with the parametric approach to verify an original signature. In this algorithm, we attempt to detect the signature while ignoring its size and thickness. The specifications of the signature are extracted after analysis and noise removal, and they are saved in a string. Using the optimal dynamic time warping (ODTW) algorithm, the strings are compared with one another and the results are obtained.
Many research works on signature verification have been reported. Researchers have applied many technologies, such as
neural networks and parallel processing, to the problem of signature verification and they are continually introducing new
ideas, concepts, and algorithms. Other approaches have been proposed and evaluated in the context of random forgeries,
like 2D transforms [6], histograms of directional data [7] or curvature [8], horizontal and vertical projections of the writing
trace of the signature [9], structural approaches [10], local measurements made on the writing trace of the signature [11],
and the position of feature points located on the skeleton of the signature [12]. Hence, the shape of the signature remains
relatively the same over time when the signature is written down on a pre-established frame (context) like a bank check.
This physical constraint contributes to the relative time-invariance of the signatures, which supports using only static shape
information to verify signatures. Some other solutions in the case of random forgeries are mainly based on the use of global shape descriptors such as the shadow code, investigated by Sabourin et al. [10]. Shape envelope projections on the coordinate axes, geometric moments, or even more general global features such as area, height, and width have been widely used
[9,13]. Some studies compare off-line signature verification models based on hidden Markov models (HMM) [14], while others employ three expert systems that evaluate the signature in three different ways and judge it as genuine, forged, or rejected by a majority vote of the three experts, presenting a signature verification system with both static and dynamic features [15,16]. In a study by Munich et al. [17], the authors infer that shape similarity
and causality of signatures’ generation are more important than matching the dynamics of signing. This result indicates that
these dynamics are not stable enough to be used for signature verification since the subject is trying to reproduce a shape
rather than a temporal pattern. This is why we use only static images to verify signatures in this paper. Table 3 summarizes
the methods and results of some other researches about HSV.
The time warping technique is also applied in other works [27–30]. All methods that have used the DTW algorithm for HSV [31] are related to on-line HSV systems; here, we use an optimized version of this algorithm in an off-line HSV system.

2. DTW algorithm for the handwritten signature verification system

The DTW (dynamic time warping) algorithm is the essential part of the method which is discussed briefly here (Fig. 1).
DTW computes the optimal distance between two sequences along a Viterbi path, which is used as the exponent of a radial

Table 3
The methods and results of some researches about HSV

Year Researchers Method Results


1997 Kashi et al. [18] Hidden Markov model Equal error rate of 2.5%; at 1% false rejection, error rate is 5%.
1998 Pawlidis et al. [19] Deformable structures A 78.9% accurate system, with 18.3% inconclusive data and 2.8% false recognition.
1999 Wu et al. [20] Split-and-merge matching mechanism The system has a correct acceptance of 86.5% and a correct rejection rate of 97.2%.
2000 Lecce et al. [21] A multi-expert system The system has the following error: 3.2% false rejection, 0.55% false acceptance, with a 3.2% rejection rate.
2003 Sebastian et al. [22] Aligning curves The system performed with an accuracy rate of 98%.
2004 Aguilar et al. [23] Fusion of local and global information Relative improvements in the verification performance as high as 51% (skilled) and 78% (random) are obtained as compared to published works.
2005 Hairong et al. [24] Based on support vector machines The average error rate can reach 5%.
2006 Audet et al. [25] Virtual support vector machines The average error rate is about 14% for all cases.
2007 Ibrahim et al. [26] Graph matching An equal error rate of 26.7% and 5.6% was achieved for skilled and random forgeries, respectively.

Fig. 1. Dynamic time warping algorithm.

basis function (RBF). DTW methods define kernels which can be shown to be symmetric and to satisfy the Cauchy–Schwarz inequality. DTW is a time-series alignment algorithm developed originally for speech recognition [32], but we use it here for off-line HSV systems. The DTW algorithm is based on matrix analysis: it calculates the minimum cost of passing between opposite corners of a matrix. More thorough descriptions can be found on the related
website [33]. We have two series S and T. Let d(i, j) be the distance between points S_i and T_j. We try to find the path ω in the n × m matrix which minimizes the quotient

\[
\frac{\sum_{k=1}^{K} d(\omega_k)}{K} = \frac{\sum_{k=1}^{K} d(i_k, j_k)}{K}, \tag{1}
\]

where K is the length of the path. We thus arrive at

\[
\mathrm{DTW}(S, T) = \min_{\omega} \frac{\sum_{k=1}^{K} d(\omega_k)}{K}. \tag{2}
\]
Once the strokes have been encoded, we face the problem of aligning two encoded strings and determining a measure of their similarity. If we consider these sequences as time series, we can employ dynamic time

Fig. 2. Distance determination between two similar sequences using DTW.

warping as a technique that can be used to find an alignment which minimizes the distances by “warping” the axis of one
or more of the strings to find a better alignment between points. An example of a warping alignment between two strings
is shown in Fig. 2.
Related research applies time warping to find a similar alignment between stroke endpoints in a signature, and then uses
differences between points in that alignment to train a back propagation neural network on the average distances between
local features of those points [25]. In a similar vein, this work seeks to find a minimal alignment between stroke segments but employs a signer-specific threshold to determine a match.
The dynamic time warping algorithm proceeds in two stages. First an n × m distance matrix is created, and second,
the shortest path alignment between opposite corners of the matrix is found. To construct the matrix, one signature (or
rather the encoding of a signature) is placed across the top and the other signature across the left side of the matrix. This
matrix will represent the distance of each point in one signature to the other signature. The shortest path through this
matrix corresponds to an ideal alignment between points in the signatures. Intuitively, the distance of this path becomes a measurement of similarity between two signatures: the shorter this distance, the more similar the two signatures, and for identical signatures this distance will be zero (Fig. 1).
The DTW algorithm attempts to find a mapping between differences in the sequence. For our purposes, we consider an
alignment of stroke encodings in a sequence. To calculate the shortest path, dynamic programming is used to find the path
recursively. The shortest path from a given cell is simply the distance in the current cell along with the smallest distance of
a neighboring cell.
Dynamic programming allows us to calculate these distances efficiently and recursively by computing them in advance and caching the cumulative results. Although this algorithm is mainly used for on-line signature verification systems, we have used it for an off-line signature verification system as well.
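As an illustration (not the authors' code), the dynamic-programming recurrence behind Eqs. (1) and (2) can be sketched in a few lines of Python; the pointwise distance d and the sequences are placeholders:

```python
def dtw(s, t, d=lambda a, b: abs(a - b)):
    """Cumulative DTW cost between sequences s and t via dynamic programming.

    cost[i][j] holds the cheapest path cost ending at cell (i, j); the allowed
    predecessor cells (i-1, j), (i, j-1) and (i-1, j-1) encode the continuity
    and monotonicity of the warping path.  Eq. (2) additionally divides by the
    path length K, which this sketch omits.
    """
    n, m = len(s), len(t)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i][j] = d(s[i - 1], t[j - 1]) + min(
                cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1]
            )
    return cost[n][m]
```

Identical sequences give a cost of zero; the more the sequences differ, the larger the cumulative cost.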

3. Proposed algorithm for off-line HSV system

After pre-processing the image, the signature is converted into a long stream of its specifications. The similarities of signatures are then distinguished by comparing these streams, which is done by the ODTW algorithm. In this study, the differences between one signature and another produce a curve; the area under the curve shows the amount of difference. The larger the area under the curve, the greater the differences between the signatures.

3.1. Pre-processing

Noise consists of disturbances attached to the image during scanning; it is not part of the signature and should be removed. In this study, we assume that the background is white and the signature is made of colored points. Points that are far from other colored points, i.e., points in less accumulated places, are considered noise and are omitted.
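A minimal sketch of such a density-based filter (the neighborhood radius and the min_neighbors threshold are illustrative choices of ours, not values from the paper):

```python
def remove_noise(pixels, min_neighbors=2, radius=1):
    """Drop isolated ink pixels.

    A pixel is kept only if at least min_neighbors other ink pixels fall
    inside its (2*radius+1) x (2*radius+1) neighborhood; isolated specks
    are treated as scanning noise.  Both parameters are illustrative.
    """
    pixel_set = set(pixels)
    cleaned = []
    for (x, y) in pixels:
        neighbors = sum(
            1
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)
            if (dx or dy) and (x + dx, y + dy) in pixel_set
        )
        if neighbors >= min_neighbors:
            cleaned.append((x, y))
    return cleaned
```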

3.2. Method

For converting each signature to a stream of numbers, the necessary steps are given below:

Step 1. Determining the signature environment.


944 İ. Güler, M. Meghdadi / Digital Signal Processing 18 (2008) 940–950

Fig. 3. Determination of the sample signature margin.

Fig. 4. Gradient of pixel in the environment of the example.

Fig. 5. Four different axes considered in our algorithm for the above mentioned example.

To determine the signature environment, we use the difference in color between the signature ink and the paper. The boundary between ink and paper is considered the environment; therefore, each colored pixel that has at least one neighboring uncolored pixel belongs to the margin (Fig. 3).

Step 2. Determining the gradient of each pixel from its environment.

To determine the gradient, we defined and used four axes in this study:

(1) Vertical (V1);


(2) Horizontal (V2);
(3) First quarter bisector (V3);
(4) Second quarter bisector (V4).

We consider a 3 × 3 square centered on the mentioned point (Fig. 4).

In this square, the neighboring pixels on each axis are examined, and the number of colored pixels on each axis is denoted V_ix (the value of axis i at point x) (Fig. 5).
The axis with the maximum value is considered the gradient of the signature.
For example in Fig. 5 we see:

V1x = abs(1 − 1) = 0, (3)

Fig. 6. Method of creating S and T stream.

V2x = abs(3 − 0) = 3, (4)

V3x = abs(2 − 0) = 2, (5)

V4x = abs(2 − 0) = 2. (6)

Then

gradient(X) = max(V1x, V2x, V3x, V4x) = max(0, 3, 2, 2) = V2x = 3. (7)

Therefore, the gradient at point X is horizontal.


The set of these gradients forms numerical streams, which are explained in the next section. Following this manner, we obtain two numerical streams, S and T.
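Our reading of Eqs. (3)–(7) is that each V_ix is the absolute difference between the ink counts on the two sides of axis i within the 3 × 3 square; the sketch below follows that assumption (the axis labels and the assignment of the two diagonals to V3 and V4 are our own):

```python
def gradient_direction(patch):
    """Gradient axis of the center pixel of a 3x3 binary ink patch.

    For each of the four axes the score is the absolute difference between
    the ink counts on the two sides of that axis (our reading of Eqs.
    (3)-(7)); the axis with the maximum score is the gradient direction.
    """
    # (row, col) coordinates of the two sides of each axis within the patch;
    # which diagonal corresponds to V3 vs V4 is an assumption on our part
    axes = {
        "V1 vertical": ([(0, 0), (1, 0), (2, 0)], [(0, 2), (1, 2), (2, 2)]),
        "V2 horizontal": ([(0, 0), (0, 1), (0, 2)], [(2, 0), (2, 1), (2, 2)]),
        "V3 diagonal": ([(0, 1), (0, 2), (1, 2)], [(1, 0), (2, 0), (2, 1)]),
        "V4 anti-diagonal": ([(0, 0), (0, 1), (1, 0)], [(1, 2), (2, 1), (2, 2)]),
    }
    scores = {
        name: abs(sum(patch[r][c] for r, c in a) - sum(patch[r][c] for r, c in b))
        for name, (a, b) in axes.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]
```

A patch with an inked top row reproduces the worked example above: the scores come out (0, 3, 2, 2) and the horizontal axis wins, i.e., the gradient at that pixel is horizontal.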

Step 3. The storage of created streams.

S stream: The image is scanned from top to bottom and from left to right, and the marginalized points are stored.
T stream: The image is scanned from left to right and from top to bottom, and the marginalized points are stored (Fig. 6).

3.3. The comparison of signatures

Using ODTW algorithm, two signatures are compared as follows:

Step 1. The calculation of distinctive graph:

First of all, the two signatures are compared with one another via their S and T streams. The following steps serve this purpose:

(1) Two axes with lengths m and n are considered.

(2) Starting from the origin (1, 1), at each point A(i, j) of the defined axes, the comparison of T[dt_{i,j}] and S[ds_{i,j}] is performed (Fig. 7).

The path can advance to three points: A1, A2, and A3. For example, at A1 we compare S[ds_{i+1,j}] with T[dt_{i+1,j}]. In this comparison, we encounter three cases:

(a) No difference is observed (e.g., S[ds_{i+1,j}] = 3, T[dt_{i+1,j}] = 3). This indicates a match, and zero is inserted.
(b) The two numbers are different and the axes are not vertical, according to the definitions in Step 1 and Fig. 5 (e.g., S[ds_{i+1,j}] = 2, T[dt_{i+1,j}] = 4); one is considered.
(c) The two numbers are different and the axes are vertical (e.g., S[ds_{i+1,j}] = 3, T[dt_{i+1,j}] = 4); four is considered, which indicates the maximum difference between these values, as seen in Table 4.

In fact, cases (a), (b), and (c) describe a function of two parameters that returns the difference between two gradients. We choose the route with the minimum difference; this route goes from the bottom-left to the top-right of the rectangle, and the ODTW algorithm is used to discover it.
In this way a graph is drawn. If the two compared signatures are exactly the same (S = T), the rectangle turns into a square; in this case m = n, and the shortest route is the diagonal of the square.
946 İ. Güler, M. Meghdadi / Digital Signal Processing 18 (2008) 940–950

Fig. 7. Calculation of distinctive graph based on ODTW.

Table 4
The calculation of distinctive graph

Compare(S[ds_{i+1,j}], T[dt_{i+1,j}])   Same numbers   Different numbers, axes not vertical   Different numbers, axes vertical
Considered value                        0              1                                      4
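Table 4 can be read as a small cost function; a sketch under our own encoding of the four axes as integers, with 0 standing for the vertical axis:

```python
VERTICAL = 0  # our code for axis V1; 1, 2, 3 stand for V2, V3, V4

def gradient_cost(gs, gt):
    """Pointwise cost between two gradient codes, following Table 4:
    0 for a match, 4 when the differing codes involve the vertical axis
    (maximum difference), and 1 for any other mismatch."""
    if gs == gt:
        return 0
    if gs == VERTICAL or gt == VERTICAL:
        return 4
    return 1
```

Whether "axes are vertical" in the table means one or both codes are vertical is not fully specified in the text; the sketch treats either case as the maximum-cost one.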

We restrict the algorithm by a local constraint so that the path may go from point (i, j) to point (i, j + 1), (i + 1, j), or (i + 1, j + 1).
The set of allowable paths may be restricted by some parameters. When parameter P1 restricts the paths to set S and parameter P2 simultaneously restricts them to set T, the total set of allowable paths is the intersection of these two sets, i.e., ST = S ∩ T.
The restrictions used on the warping function for optimization in the DTW algorithm (which we call the optimal DTW algorithm) are:

• Boundary conditions (the alignment path starts at the bottom left and ends at the top right of the matrix):

  i_1 = 1, i_K = n,
  j_1 = 1, j_K = m. (8)

  This guarantees that the alignment does not consider one of the sequences only partially.

• Continuity (the alignment path does not jump):

  i_s − i_{s−1} ≤ 1, j_s − j_{s−1} ≤ 1, where 1 < s ≤ K. (9)

  This guarantees that the alignment does not omit important features.

• Monotonicity (the alignment path does not go back):

  i_{s−1} ≤ i_s, j_{s−1} ≤ j_s, where 1 < s ≤ K. (10)

  This guarantees that features are not repeated in the alignment.

• Warping window (a good alignment path is unlikely to wander too far from the diagonal):

  |i_s − j_s| ≤ r, r > 0, where 1 ≤ s ≤ K. (11)

  This guarantees that the alignment does not get stuck at similar features while skipping over different ones.

• Slope constraint (the alignment path should not be too steep or too shallow):

  (j_{s_p} − j_{s_0})/(i_{s_p} − i_{s_0}) ≤ p,
  (i_{s_p} − i_{s_0})/(j_{s_p} − j_{s_0}) ≤ q, (12)

  where q ≥ 0 is the number of steps in the x-direction and p ≥ 0 is the number of steps in the y-direction.

  This guarantees that short parts of the sequences are not matched to very long ones.
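The boundary, continuity, and monotonicity conditions (Eqs. (8)–(10)) are implicit in the standard DP recurrence, and the warping window (Eq. (11)) can be added by restricting the inner loop. A sketch (the window radius r is a free parameter; the slope constraint of Eq. (12) is omitted for brevity):

```python
def dtw_windowed(s, t, d, r):
    """DTW restricted to cells with |i - j| <= r (Eq. (11)).

    The steps (i, j+1), (i+1, j), (i+1, j+1) in the recurrence enforce
    continuity and monotonicity (Eqs. (9)-(10)); starting at (0, 0) and
    ending at (n, m) enforces the boundary conditions (Eq. (8)).
    Assumes |len(s) - len(t)| <= r so the final cell is reachable.
    """
    n, m = len(s), len(t)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - r), min(m, i + r) + 1):
            cost[i][j] = d(s[i - 1], t[j - 1]) + min(
                cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1]
            )
    return cost[n][m]
```

Besides ruling out degenerate alignments, the window reduces the work from n·m cells to roughly n·(2r + 1).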

Fig. 8. Comparison of distinctive graph after 45◦ rotation.

Step 2. The comparison of the observed graph with the original one. In the second step, the graph observed in Step 1 is compared with the original one. To do so, the observed graph is rotated 45° (Fig. 8); this rotation causes the original signature's graph to make no angle with the horizontal line.

The absolute distance of each pixel of the first graph from the axis is calculated, and the sum of these distances for each graph gives the area under the graph (Fig. 8). This area, taken as a fraction of the worst case and multiplied by 100, yields the error percentage d; the similarity percentage is then obtained as 100 − d.
(S[1] with T[1]), …, (S[m · n] with T[m · n]) are compared separately, and the mean similarity percentage of these comparisons gives the final similarity percentage between the two signatures.

3.4. Database

A sub-corpus of the larger MCYT bimodal database is used for the experiments. The MCYT database encompasses fingerprint and on-line signature data (x, y, pressure, azimuth, and altitude trajectories of the pen) of 330 contributors. Highly skilled forgeries are also available for the signatures.
To make the signatures available on paper, signature information was acquired in the MCYT database using an inking pen and paper templates over a pen tablet. Each signature is written within a 2 × 4 cm² frame; the size of the frame affects the processing time of the algorithm.
Paper templates of 50 signers and their associated skilled forgeries were randomly selected and digitized with a scanner at 600 dpi. The resulting sub-corpus comprises 1500 signature images, with 20 genuine signatures and 10 forgeries per user.

4. Results and discussion

The experimental results are given in Table 5.


Unfortunately, we cannot use Table 5 directly. Some studies show that it is not sufficient to verify the validity of a signature only by comparing its physical image [26]; it should be considered that a person's signature is not the same from time to time and differs from one occasion to another. As shown in Fig. 9, as the similarity threshold increases, the false rejection rate (FRR) increases but the false acceptance rate (FAR) decreases. The results of our examination show that, in this method, the best value for the signature similarity percentage is nearly 80.05 (SS ≈ 80%). At this point we obtain the minimum error rate (MinErrRate = min(FAR, FRR)). If we define the average error rate (AER) as

\[
\mathrm{AER} = \frac{\mathrm{FAR} + \mathrm{FRR}}{2}, \tag{13}
\]

then AER is smallest for SS ∈ [75%, 85%]; in other words, the system performs best for 75% ≤ SS ≤ 85%.
In this interval, we have the minimum value of AER (see Fig. 10). The point where the FAR and the FRR meet one another is called the equal error rate (EER), as shown in Fig. 9.
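The trade-off between the two error rates can be made concrete with a small helper; the scores in the test below are illustrative, not data from our experiments:

```python
def error_rates(genuine_scores, forgery_scores, threshold):
    """FRR, FAR and AER (Eq. (13)) at a given similarity threshold.

    A genuine signature is (falsely) rejected when its similarity falls
    below the threshold; a forgery is (falsely) accepted when its
    similarity reaches the threshold.  Raising the threshold therefore
    raises the FRR and lowers the FAR, as in Fig. 9.
    """
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in forgery_scores) / len(forgery_scores)
    return frr, far, (frr + far) / 2
```

Sweeping the threshold over a range and keeping the one with the lowest AER reproduces the selection of the operating interval described above.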

Table 5
The result of experiments

Percentage of similarity   Description
0–70% (c)                  The sample signature is not similar to the original one
70–99% (b)                 The sample signature is similar to the original one
100% (a)                   The sample signature is the original one

Fig. 9. The best value of signature similarity percentage.

Fig. 10. Interval of the best performance.

As a matter of fact, to obtain the best performance, we should consider 75% ≤ SS ≤ 85%. The fundamental result of this study is that the minimum average error is obtained not at the maximum surface similarity: if the correctness of a signature required high similarity to the original, correct signatures would be rejected because of minor differences, and this would decrease the efficiency of the system.
The method is tested using genuine and forged signatures produced by 50 subjects. An equal error rate (EER) of 25.1% and 5.5% was achieved for skilled and random forgeries, respectively. Fig. 11 displays the relationship among the FRR, the FAR for random forgeries (FAR-random), and the FAR for skilled forgeries (FAR-skilled). It is natural that the FAR-random curve is lower than the FAR-skilled curve, since in random forgeries the signer has no previous knowledge of and/or training on the signature she/he is forging.
In one study, two types of classifiers, a nearest-neighbor and a threshold classifier, were used for off-line signature verification [10]. These classifiers show total error rates below 2% and 1%, respectively, in the context of random forgeries; these rates are better than ours, which is 5.5%. For skilled forgeries, the FAR of our algorithm is similar to those of other researchers. From the results, it is obvious that the problem of signature verification becomes more difficult when passing from random to skilled forgeries.

Fig. 11. Relationship among FRR, FAR for random forgeries and FAR for skilled forgeries.

5. Conclusion and future works

5.1. Dynamic features

We propose DTW as a similarity measure for multidimensional sequence matching. A signature verification system can also use other methods with optimal alignment for intuitive similarity output and higher performance. The experimental results are encouraging, although further evaluation on large, real databases is necessary. Future work could explore the feasibility of DTW on dynamic features such as pressure, speed, etc.

5.2. Combining DTW with other methods

DTW can be used to determine the optimal alignment between two sequences of different lengths. Some methods, such as ER2 (extended R-squared), allow only one-to-one matching between two sequences, just like the Euclidean norm; if two sequences are not aligned well or have different lengths, such a method cannot be applied directly. Usually, signature sequences are neither of the same length nor aligned properly, even for the same person. Thus, by combining DTW with ER2, we can overcome the alignment problem while retaining all the advantages of ER2.
First, we can use DTW to determine the optimal alignment between the two sequences and then stretch them to the same length. If point x_i in sequence X is aligned with k (k > 1) points in sequence Y, we stretch X by duplicating point x_i (k − 1) times; sequence Y is stretched in the same way. After being stretched, the two sequences have the same length, and the ER2 method can then be used.
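The stretching step can be sketched directly from a DTW warping path; here path is assumed to be the 0-based optimal alignment [(i, j), ...] returned by any DTW routine that recovers the path:

```python
def stretch_by_alignment(x, y, path):
    """Stretch sequences x and y to a common length along a DTW path.

    Each path step (i, j) contributes x[i] and y[j]; when an index repeats
    across consecutive steps, the corresponding point is duplicated, which
    realizes the (k - 1)-fold duplication described in the text.
    """
    xs = [x[i] for i, _ in path]
    ys = [y[j] for _, j in path]
    return xs, ys
```

After stretching, both outputs have length len(path), so a strictly one-to-one method such as ER2 can be applied to them.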

Acknowledgment

This project is supported by the Gazi University Research Fund, BAP Project No. 07/2007/-07.

References

[1] R. Plamondon, G. Lorette, Automatic signature verification and writer identification the state of the art, Pattern Recogn. 22 (2) (1998) 107–131.
[2] R. Plamondon, S. Srihari, On-line and off-line handwriting recognition: A comprehensive survey, IEEE Trans. Pattern Anal. Mach. Intell. 22 (1) (2000)
63–84.
[3] J. Ortega-Garcia, J. Fierrez-Aguilar, D. Simon, J. Gonzalez, M. Faundez-Zanuy, V. Espinosa, A. Satue, I. Hernaez, J.-J. Igarza, C. Vivaracho, C. Escudero, Q.-I.
Moro, MCYT baseline corpus: A bimodal biometric database, IEEE Proc. Vision Image Signal Process. 150 (6) (2003) 395–401.
[4] J. Richiardi, H. Ketabdar, A. Drygajlo, Local and global feature selection for on-line signature verification document analysis and recognition, in: 8th
International Conference on Document Analysis and Recognition, vol. 2, 2005, pp. 625–629.
[5] N.M. Herbst, C.N. Liu, Automatic signature verification based on accelerometry, IBM J. Res. Dev. 21 (1997) 245–254.
[6] R.N. Nagel, A. Rosenfeld, Computer detection of freehand forgeries, IEEE Trans. Comput. 26 (9) (1997) 895–905.
[7] J.P. Drouhard, R. Sabourin, M. Godbout, Evaluation of a training method and of various rejection criteria for a neural network classifier used for off-line
signature verification, in: IEEE Int. Conf. Neural Networks, 1994, pp. 294–299.
[8] E.R. Brocklehurst, Computer methods of signature verification, J. Forens. Sci. Soc. 25 (6) (1995) 445–457.
[9] M. Ammar, Progress in verification of skillfully simulated handwritten signatures, Int. J. Pattern Recogn. Artif. Intell. 5 (1) (1991) 337–351.
[10] R. Sabourin, G. Genest, F.J. Preteux, Off-line signature verification by local granulometric size distributions, IEEE Trans. Pattern Anal. Mach. Intell. 19 (9)
(1997) 976–988.
[11] W.F. Nemcek, W.C. Lin, Experimental investigation of automatic signature verification, IEEE Trans. Syst. Man Cybernet. SMC-4 (1974) 121–126.
[12] Y. Qi, B.R. Hunt, Signature verification using global and grid features, Pattern Recogn. 27 (12) (1994) 1621–1629.
[13] N.A. Murshed, R. Sabourin, F. Bortolozzi, A cognitive approach to off-line signature verification, Int. J. Pattern Recogn. Artif. Intell. 11 (5) (1997) 801–825.

[14] G. Rigoli, A. Kosmala, A systematic comparison between on-line and off-line methods for signature verification with hidden Markov models, in: 14th
Int. Conf. Pattern Recogn., 1998, pp. 755–1757.
[15] G. Dimauro, S. Impedovo, G. Pirlo, A. Salizo, A multi-expert signature verification system for bank check processing, Int. J. Pattern Recogn. Artif.
Intell. 11 (5) (1997) 827–844.
[16] Q. Wu, S. Lee, I. Jou, On-line signature verification based on split-and-merge matching mechanism, Pattern Recogn. Lett. 18 (7) (1997) 665–673.
[17] M. Munich, P. Perona, Visual signature verification using affine arc-length, in: Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recogn., vol. 2,
1999, pp. 2180–2196.
[18] R.S. Kashi, J. Hu, W.L. Nelson, W. Turin, On-line handwritten signature verification using hidden Markov model features, in: Proc. of ICDAR, 1997.
[19] N.P. Pawlidis, R. Papanikolopoulos, R. Mavuduru, Signature identification through the use of deformable structures, On Aligning Curves 71 (2) (1998)
187–201.
[20] Q.Z. Wu, S.Y. Lee, I.C. Jou, Online signature verification based on split and merge matching mechanism, Pattern Recogn. Lett. 18 (7) (1997) 665–673.
[21] V.D. Lecce, G. Dimauro, A. Guerriero, S. Impedovo, G. Pirlo, A. Salzo, A multi-expert system for dynamic signature verification, in: Multiple Classifier
Systems 1857 (2000) 320–329.
[22] P. Sebastian, B. Klein, B. Kimia, On aligning curves, IEEE Trans. Pattern Anal. Mach. Intell. 25 (1) (2003) 116–125.
[23] J.F. Aguilar, N.A.G. Hermira, M. Marquez, J.O. Garcia, An off-line signature verification system based on fusion of local and global information, Lect.
Notes Comput. Sci. 3087 (2004) 295–306.
[24] L. Hairong, W. Wang, C. Wang, Q. Zhuo, Off-line Chinese signature verification based on support vector machines, Pattern Recogn. Lett. 26 (2005)
2390–2399.
[25] S. Audet, P. Bansal, S. Baskaran, Offline Signature Verification Using Virtual Support Vector Machines, ECSC 525–Artificial Intelligence, Final Project,
2006.
[26] I.A. Ibrahim, Offline signature verification using graph matching, Turk. J. Elec. Engin. TÜBİTAK 15 (1) (2007) 89–104.
[27] M. Faundez-Zanuy, On-line signature recognition based on VQ-DTW, Pattern Recogn. 40 (3) (2007) 981–992.
[28] K. Huang, H. Yan, Stability and style variation modeling for on-line signature verification, Pattern Recogn. 36 (10) (2003) 2253–2270.
[29] A.K. Jain, F.D. Griess, S.D. Connell, On-line signature verification, Pattern Recogn. 35 (1) (2002) 2963–2972.
[30] M. Wirotius, J.Y. Ramel, N. Vincent, Improving DTW for online handwritten signature verification, Image Anal. Recogn. ICIAR 2 (2004) 786–793.
[31] P. Fang, Z.C. Wu, F. Shen, Y.J. Ge, B. Fang, Improved DTW algorithm for online signature verification based on writing forces, Springer Adv. Intell.
Comput. 3644 (2005) 631–640.
[32] H. Sakoe, S. Chiba, Dynamic programming algorithm optimization for spoken word recognition, IEEE Trans. Acoust. Speech Signal Process. 26 (1) (1978)
159–165.
[33] Oulu University website, available at: http://www.ee.oulu.fi, 2008.

İnan Güler was born in Düzce, Turkey in 1956. He graduated from Erciyes University in 1981. He took the M.S.
degree from Middle East Technical University in 1985 and the Ph.D. degree from İstanbul Technical University in 1990,
both in electronic engineering. He is a Professor at Gazi University where he is a head of department. His interest
areas include biomedical instrumentation, biomedical signal processing, electronic instrumentation, neural networks,
and artificial intelligence. He has written more than 150 articles related with his interest areas.

Majid Meghdadi was born in Zanjan, Iran in 1965. He received the B.S. degree in Mathematics and Applied Com-
puter from the University of Tehran, Iran, in 1991, and the M.S. degree in Computer Engineering from the University
of Sharif, Iran, in 1994. He is currently a Ph.D. candidate in the department of Electronic and Computer Education
at Gazi University, Turkey. His current research interests include security in wireless sensor networks, image processing, and neural networks.
