
Abstract
Recognition is regarded as a basic attribute of human beings, as well as of other living organisms. Pattern recognition has many practical applications, such as signature recognition, handwriting recognition, fingerprint recognition, and face recognition. Signatures are used every day to authorize the transfer of funds for millions of people; bank checks, credit cards, and legal documents all require our signatures. In this work we propose signature recognition using neural networks. To achieve this objective, image loading is performed to read the image file, after which the input image is preprocessed. Features are then extracted by finding the frequency histograms of the on-pixels in the (50x50) image matrix along the horizontal lines, the vertical lines, the upper triangle of the main diagonal, and the lower triangle of the main diagonal. Finally, the backpropagation algorithm is used for recognition. Experimental results showed that, out of (100) test signatures, (98) signatures were correctly recognized, which amounts to a (98%) success rate, using a backpropagation neural network with (49) hidden nodes and a (0.5) learning rate within a very short training time.
1. Introduction
Since the beginning of the computer industry, users of computers
have been forced to modify their behavior to utilize these devices.
User interfaces ranged from confusing to downright hostile. As
computers became more powerful, as measured in processing speed,
user interfaces were written in a more intuitive fashion, but users still
had to change the way they normally interacted with the world. For example, until very recently, talking to a computer would not get it to accomplish a desired task, and smiling at a computer will not make it respond in a friendlier fashion. Some users may want or even need to be able to speak commands directly to the computer. Perhaps the computer should be able to read our handwriting [1].
However, as personal computers have become a standard
consumer staple, and computing power has drastically increased, the
barriers to handwriting recognition research have been eliminated.
This rekindled interest has been furthered by the realization that
Artificial Neural Networks (ANNs) have the ability to generalize,
adapt, and learn implicit concepts. These ANN properties are
particularly well suited to address the problems caused by high
variability between handwriting samples. An examination of the
signature verification process provides a nice illustration of the
usefulness of ANNs. The power of ANNs lies in their ability to
recognize general patterns. This is particularly useful because,
although there is a large amount of variability between signatures,
there will be general characteristics present in each signature [2].
2. The Objective
The aim of this paper is to use neural networks in the recognition
of handwritten signatures. To achieve this objective, the input image is first preprocessed, which includes operations such as noise removal, image scaling, image rotation, and image centralization; then feature extraction and recognition using the backpropagation algorithm are carried out.
Off-line signature recognition is the type adopted in this work.
3. Pattern Recognition
Recognition is the process of determining in which region an unknown sample falls and, consequently, to which pattern class it belongs [3].
Pattern recognition can be defined as an area of science
concerned with discriminating objects on the basis of information
available about them. Each distinct bit of information about objects is
called a feature [4].
A pattern recognition system can be considered as a two stage
device. The first stage is feature extraction and the second is
classification.
Recognition methods can be roughly classified into three major
groups: statistical, structural and syntactical, and Neural Networks
methods. Sometimes, different methods are combined, for example,
simple methods are used for pre-classification and a final decision is
made with more sophisticated methods [5].
3.1 Artificial Neural Networks (ANN)
Neural networks (NNs) can deal with classification (pattern recognition) problems with greater reliability than their human counterparts. Automatic training allows NNs to be trained before being used on real problems. Certainty factors accompany the results as a mitigation against possible errors; a low certainty factor accompanies borderline cases [6].
The field of NNs can be thought of as being related to artificial
intelligence, machine learning, parallel processing, statistics, and other
fields. The attraction of NNs is that they are best suited to solving the
problems that are the most difficult to solve by traditional
computational methods [7].
The ANN approach has many similarities with statistical pattern
recognition concerning both the data representation and the
classification principles. The practical implementation is, however,
very different. The analysis mode involves the configuration of a network of artificial neurons and the training of the net to determine how the individual neurons affect each other. The recognition mode involves sending data through the net and evaluating which class obtains the highest score [3].
4. The Proposed Signature Recognition System (SRS)
The system is divided into a set of procedures, each of which does a specific job; the result is then passed to the following procedure. In the SRS, image loading is done to read the image file and to transform it into a binary image. The next procedure, preprocessing, is performed to improve the robustness of the features to be extracted.
The feature vector is extracted from the (vertical, horizontal,
upper triangle and lower triangle of the main diagonal) histograms of
the image, which is used as an input for a backpropagation network to
recognize the signature. The SRS processing sequence is depicted in
figure (1).
Figure (1): Block structure for SRS.
Step(1) Image Loading
This procedure is responsible for loading the bitmap (BMP)
image file into memory. The dimension of the image used in this work
is (250x250) pixels.
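As an illustration only (this is not the authors' code), the following minimal Python sketch shows one plausible way to perform this step: reading a (250x250) BMP file and converting it to a binary matrix of on-pixels. The use of Pillow/NumPy and the threshold value of 128 are assumptions.

```python
# Illustrative sketch of the image-loading step (assumed libraries: Pillow, NumPy).
from PIL import Image
import numpy as np

def load_signature(path, threshold=128):
    """Read a 250x250 BMP and return a 0/1 matrix of on-pixels."""
    img = Image.open(path).convert("L")          # grayscale
    assert img.size == (250, 250), "expected a 250x250 image"
    pixels = np.asarray(img)
    # dark ink on a light background: pixels below the threshold are "on"
    return (pixels < threshold).astype(np.uint8)
```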
Step(2) Preprocessing
The signature images require some manipulation before the
application of any recognition technique. This process prepares the
image and improves its quality in order to eliminate irrelevant
information and to enhance the selection of the important features for
recognition. This is known as preprocessing.
Moreover, preprocessing steps are performed in order to reduce noise in the input images and to remove most of the variability of the handwriting.
Different people sign their signatures with different orientation, size, deviation, etc. Even the signatures of the same individual vary in the aforementioned attributes over time and under different circumstances (e.g. the size of the signing space). To minimize the variation in the final results, all signatures are normalized for duration, rotation, position and size.
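By way of illustration, the sketch below shows two of the normalization steps named above, centralization and scaling, reducing the binary signature image to the (50x50) matrix used for feature extraction. It is a plausible reading of the preprocessing stage, not the authors' exact procedure; noise removal and rotation correction are omitted.

```python
# Illustrative preprocessing sketch: crop to the signature's bounding box
# (centralization) and scale the result to a 50x50 binary matrix.
import numpy as np
from PIL import Image

def centralize_and_scale(binary, out_size=50):
    ys, xs = np.nonzero(binary)
    if xs.size == 0:                       # empty image: nothing to normalize
        return np.zeros((out_size, out_size), dtype=np.uint8)
    cropped = binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    img = Image.fromarray((cropped * 255).astype(np.uint8))
    resized = img.resize((out_size, out_size), Image.NEAREST)
    return (np.asarray(resized) > 0).astype(np.uint8)
```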
Step(3) Feature Extraction
The main objective of this procedure is to capture the most
relevant and discriminate characteristics of the signature to be
recognized. The dimension of the resulting feature vector is usually
smaller than the dimension of the original pixel images, thus
facilitating the subsequent classification processes.
Having applied the above stages to a signature, the feature extraction stage starts by finding the frequency histograms of the on-pixels in the (50x50) image matrix; a short code sketch of this computation is given after figure (5).
The extracted features in this work include four histograms that are computed in different directions. Each histogram has (50) values, and these histograms are explained as follows:
1- First histogram: this histogram is computed from the frequency of the on-pixels on the horizontal lines of the image matrix, producing a vector of (50) values; these values are presented in table (1) and illustrated as a histogram in figure (2).
Table (1): Features Extraction on Horizontal Vector.
Row No. 1 2 3 4 5 6 7 8 9 10 11 12 13
Frequency 0 0 0 0 0 0 0 0 0 0 0 0 0
Row No. 14 15 16 17 18 19 20 21 22 23 24 25 26
Frequency 0 0 17 41 17 8 7 6 6 15 19 1 1
Row No. 27 28 29 30 31 32 33 34 35 36 37 38 39
Frequency 0 0 0 0 0 0 0 0 0 0 0 0 0
Row No. 40 41 42 43 44 45 46 47 48 49 50
Frequency 0 0 0 0 0 0 0 0 0 0 0
Figure (2): Features Extraction on Horizontal Histogram.
2- Second histogram: this histogram is computed from the frequency of the on-pixels on the vertical lines of the image matrix, producing a vector of (50) values; these values are presented in table (2) and illustrated as a histogram in figure (3).
Table (2): Features Extraction on Vertical direction.
Column No. 1 2 3 4 5 6 7 8 9 10 11 12 13
Frequency 4 1 1 2 4 5 4 4 3 2 2 1 1
Column No. 14 15 16 17 18 19 20 21 22 23 24 25 26
Frequency 2 3 4 3 4 2 2 4 4 4 3 3 3
Column No. 27 28 29 30 31 32 33 34 35 36 37 38 39
Frequency 3 3 3 3 3 2 2 2 2 3 3 2 3
Column No. 40 41 42 43 44 45 46 47 48 49 50
Frequency 3 3 3 4 3 3 2 2 3 3 0
Figure (3): Features Extraction on Vertical Histogram.
3- Third histogram: this histogram is computed from the frequency of the on-pixels in the lower triangle of the main diagonal of the image matrix, producing a vector of (50) values; these values are presented in table (3) and illustrated as a histogram in figure (4).
Table (3): Features Extraction on the lower part of the main diagonal.
Lower d. No. 1 2 3 4 5 6 7 8 9 10 11 12 13
Frequency 0 0 0 1 4 2 3 4 2 3 2 2 3
Lower d. No. 14 15 16 17 18 19 20 21 22 23 24 25 26
Frequency 2 5 3 4 3 3 3 3 4 2 2 2 4
Lower d. No. 27 28 29 30 31 32 33 34 35 36 37 38 39
Frequency 7 4 3 6 2 4 2 4 2 1 3 1 2
Lower d. No. 40 41 42 43 44 45 46 47 48 49 50
Frequency 2 1 1 0 0 0 0 0 0 0 0
Figure (4): Features Extraction on the lower triangle of the main diagonal
of the image matrix.
4- Fourth histogram: this histogram is computed from the frequency of the on-pixels in the upper triangle of the main diagonal of the image matrix, producing a vector of (50) values; these values are presented in table (4) and illustrated as a histogram in figure (5).
Table (4): Features Extraction on the upper part of the main diagonal.
Upper d. No. 1 2 3 4 5 6 7 8 9 10 11 12 13
Frequency 0 0 0 1 7 6 5 5 1 2 2 2 2
Upper d. No. 14 15 16 17 18 19 20 21 22 23 24 25 26
Frequency 4 4 2 3 4 3 4 4 5 2 2 2 2
Upper d. No. 27 28 29 30 31 32 33 34 35 36 37 38 39
Frequency 2 2 2 2 4 6 3 5 3 3 3 2 0
Upper d. No. 40 41 42 43 44 45 46 47 48 49 50
Frequency 2 1 1 0 0 0 0 0 0 0 0
Figure (5): Features Extraction on the upper triangle of the main diagonal of the image matrix.
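To make the feature computation concrete, the following Python sketch derives the four (50)-value histograms and concatenates them into the (200)-value feature vector used later as the network input. It is an illustrative reconstruction; in particular, the text does not state exactly how the (50) values of each triangle histogram are grouped, so counting the on-pixels of each row restricted to the triangle is an assumption.

```python
# Illustrative sketch of the feature-extraction stage (assumed grouping for the triangles).
import numpy as np

def extract_features(binary50):
    """binary50: 50x50 matrix of 0/1 on-pixels -> feature vector of 200 values."""
    assert binary50.shape == (50, 50)
    horizontal = binary50.sum(axis=1)          # on-pixels per horizontal line (row)
    vertical = binary50.sum(axis=0)            # on-pixels per vertical line (column)
    lower = np.tril(binary50)                  # lower triangle of the main diagonal
    upper = np.triu(binary50)                  # upper triangle of the main diagonal
    lower_hist = lower.sum(axis=1)             # assumed: one value per row of the triangle
    upper_hist = upper.sum(axis=1)
    return np.concatenate([horizontal, vertical, lower_hist, upper_hist])
```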
Step(3.1) References Database
The first step in preparing the signature database is to determine the number of sample signers to be used in recognition; here the sample consists of (50) signers.
The second step is to determine the number of sample signatures to be collected from each person. Each group consists of (5) signatures per signer, and the (50) signers contributed their signatures to the database.
Each sample signature was collected as a (250x250)-pixel image and passed through preprocessing, after which the feature extraction stage begins. The result of this stage is a vector of (200) values for each signature. Since there are (5) signatures for each signer and (50) signers, there are (250) signatures in the whole database. The database also includes the values of the desired output layer: the appropriate six-bit code, which is arbitrarily assigned to each original input pattern.
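Since six bits can distinguish up to 64 classes, they are sufficient for the (50) signers. The assignment below is purely hypothetical (the paper only states that the codes are arbitrary); it simply uses the binary representation of the signer index as the target code.

```python
# Hypothetical example of assigning a six-bit target code to each signer.
def six_bit_code(signer_index):
    """signer_index in 0..49 -> six target bits, e.g. 5 -> [0, 0, 0, 1, 0, 1]."""
    return [(signer_index >> b) & 1 for b in reversed(range(6))]
```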
Step(4) Recognition Using Backpropagation Algorithm
To successfully create an effective neural network, two
fundamental procedures must occur: feed-forward propagation and
feed-backward propagation. In feed-forward propagation, one sends
an input signal through the network and receives an output.
Backpropagation allows the network to "learn" from its mistakes.
The first phase of implementing the feed-forward propagation sequence was to find an effective way to input the pattern into the network: feature vectors are extracted from the original image for use as input to the neural network.
This model consists of (200) units in the input layer, in addition to an extra unit representing the bias. The number of nodes in the hidden layer was chosen to be (49) (which was found suitable for this problem by experimentation), in addition to a bias unit as in the input layer. Finally, the network outputs the appropriate six-bit code arbitrarily assigned to each original input pattern. Figure (6) shows the net architecture.
Figure (6): Diagram of the neural network (input layer: 200 nodes, hidden layer: 49 nodes, output layer: 6 nodes).
In the implementation of the network, one value from the feature vector that represents the pattern is fed to each of the two hundred input nodes. Subsequently, each input node fires a signal to every hidden node; when a hidden node receives a value from an input node, it multiplies the signal by the weight of that presynaptic connection. The sum of the products over all presynaptic nodes is the input to the sigmoid function, which returns the activation of the new signal. The hidden nodes then propagate the signal to the output layer in the same manner.
The output nodes then perform this same process, using the activation values received from the hidden layer nodes rather than the values that the hidden layer used.
Although this architecture is all that is needed to carry out feed-forward propagation, several other design considerations were implemented. For instance, a special "bias node" that always fires a signal was added to both the input layer and the hidden layer, acting as yet another control on the flow of the network. By altering the weight between the bias and its postsynaptic nodes, the network can effectively shut down nodes that continuously fail and boost those that succeed.
The backpropagation training algorithm is an iterative gradient algorithm designed to minimize the mean square error between the actual output of the feedforward phase and the desired output. It requires a continuous, differentiable non-linearity.
Step1. Initialize weights and offsets
Set all weights and node offsets to small random values between (-1) and (1). Here, (9800) weight values are used between the input and hidden layers and (294) values between the hidden and output layers (200 × 49 = 9800 and 49 × 6 = 294). Furthermore, the biases amount to (55) values:
- (49) bias values on the input-to-hidden side (v_0j).
- (6) bias values on the hidden-to-output side (w_0k).
Step2. While stopping condition is false, do steps 3-6.
Step3. For each training pair, do steps 4-5.
Step4. Feedforward phase
Step4.1: Each input unit (x_1, ..., x_200) receives an input value and broadcasts this value to all units in the layer above (the hidden units).
Step4.2: Each hidden unit (z_j, j = 1, 2, ..., 49) sums its weighted input signals
z_in_j = v_0j + Σ_{i=1}^{200} x_i v_ij        (1)
applies its activation function to compute its output value
z_j = f(z_in_j) = 1 / (1 + exp(-σ z_in_j)),   where σ equals (1)        (2)
and sends this signal to all units in the layer above (the output units).
Step4.3: Each output unit (y_k, k = 1, ..., 6) sums its weighted input signals
y_in_k = w_0k + Σ_{j=1}^{49} z_j w_jk
and applies its activation function to compute its output value
y_k = f(y_in_k) = 1 / (1 + exp(-σ y_in_k))
Step5. Backpropagation of error phase
Step5.1: Each output unit (y_1, ..., y_6) receives a target pattern corresponding to the input training signature and computes its error information term
δ_k = (t_k - y_k) f'(y_in_k),   where f'(y_in_k) = f(y_in_k) [1 - f(y_in_k)]        (3)
calculates its weight correction term (used to update w_jk later)
Δw_jk = α δ_k z_j        (4)
and calculates its bias correction term (used to update w_0k later), where the learning rate α equals (0.5),
Δw_0k = α δ_k        (5)
then sends δ_k to the units in the layer below.
Step5.2: Each hidden unit (z_1, z_2, ..., z_49) sums its delta inputs (from the units in the layer above)
δ_in_j = Σ_{k=1}^{6} δ_k w_jk        (6)
multiplies this by the derivative of its activation function to calculate its error information term
δ_j = δ_in_j f'(z_in_j),   where f'(z_in_j) = f(z_in_j) [1 - f(z_in_j)]        (7)
calculates its weight correction term (used to update v_ij later)
Δv_ij = α δ_j x_i
and calculates its bias correction term (used to update v_0j later)
Δv_0j = α δ_j
Step5.3: Update weights and biases. Each output unit (y_k, k = 1, ..., 6) updates its bias and weights (j = 0, 1, ..., 49):
w_jk(t+1) = w_jk(t) + Δw_jk        (8)
and each hidden unit (z_1, z_2, ..., z_49) updates its bias and weights (i = 0, 1, ..., 200):
v_ij(t+1) = v_ij(t) + Δv_ij
Step6. Test the stopping condition.
An epoch is one cycle through the entire set of training vectors. Typically, many epochs are required for training a backpropagation neural network. The foregoing algorithm updates the weights after
each training signature is presented. A common variation is batch
updating, in which weight updates are accumulated over an entire
epoch (or some other number of presentations of signatures) before
being applied.
Training continued until the total squared error for (250)
signatures was less than (0.01).
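For readers who wish to experiment, the following compact Python sketch implements the training loop described above under the stated assumptions (200 inputs, 49 hidden nodes, 6 outputs, logistic activation with σ = 1, learning rate 0.5, per-pattern weight updates). It is an illustrative reconstruction, not the authors' implementation, and omits the momentum term mentioned in the results.

```python
# Illustrative backpropagation training sketch for the 200-49-6 network.
import numpy as np

rng = np.random.default_rng(0)
V  = rng.uniform(-1, 1, (200, 49))   # input-to-hidden weights v_ij
v0 = rng.uniform(-1, 1, 49)          # hidden biases v_0j
W  = rng.uniform(-1, 1, (49, 6))     # hidden-to-output weights w_jk
w0 = rng.uniform(-1, 1, 6)           # output biases w_0k
alpha = 0.5                          # learning rate

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_epoch(patterns, targets):
    """patterns: (N, 200) feature vectors; targets: (N, 6) six-bit codes."""
    global V, v0, W, w0
    total_error = 0.0
    for x, t in zip(patterns, targets):
        z = sigmoid(v0 + x @ V)                   # feedforward: hidden activations
        y = sigmoid(w0 + z @ W)                   # feedforward: output activations
        delta_k = (t - y) * y * (1 - y)           # output error terms, eq. (3)
        delta_j = (delta_k @ W.T) * z * (1 - z)   # hidden error terms, eqs. (6)-(7)
        W  += alpha * np.outer(z, delta_k)        # weight updates, eqs. (4), (8)
        w0 += alpha * delta_k
        V  += alpha * np.outer(x, delta_j)
        v0 += alpha * delta_j
        total_error += np.sum((t - y) ** 2)
    return total_error    # training stops once this drops below 0.01
```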
Step7. Testing phase
After training, a backpropagation neural net is applied by using only the feedforward phase of the training algorithm. The testing procedure is as follows:
Step7.1: Initialize the weights (from the training algorithm).
Step7.2: For each input vector, do steps 7.3-7.5.
Step7.3: For i = 1, ..., 200: set the activation of input unit x_i.
Step7.4: For j = 1, ..., 49:
z_in_j = v_0j + Σ_{i=1}^{200} x_i v_ij
z_j = f(z_in_j)
Step7.5: For k = 1, ..., 6:
y_in_k = w_0k + Σ_{j=1}^{49} z_j w_jk
y_k = f(y_in_k)
After the set of preprocessing procedures has been applied to the unknown signature and the training algorithm has been applied to the database, the unknown signature is tested by using the feedforward phase to compute its actual output and by calculating the minimum distance between the computed output and all the targets of the database signatures. A signature is taken to be recognized if the minimum distance between the computed output nodes and the original target nodes equals zero.
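The recognition step can be sketched as follows (again an illustrative reconstruction using the weights from the previous sketch; rounding the network outputs to bits before computing the distance is an assumption, since the text only requires the minimum distance to equal zero):

```python
# Illustrative recognition sketch: feedforward phase plus minimum-distance matching.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def recognize(x, codes, V, v0, W, w0):
    """x: 200-value feature vector; codes: (50, 6) six-bit codes of the signers.
    Returns the index of the matching signer, or None if no exact match."""
    z = sigmoid(v0 + x @ V)                 # feedforward phase only
    y = sigmoid(w0 + z @ W)
    bits = (y >= 0.5).astype(int)           # assumed rounding of the outputs
    distances = np.abs(codes - bits).sum(axis=1)
    best = int(np.argmin(distances))
    return best if distances[best] == 0 else None   # accept only at distance zero
```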
5. SRS Implementation
It includes all the main steps of the block structure shown in figure (1): image loading, preprocessing, feature extraction, and recognition using the backpropagation algorithm. Figure (7) represents the framework of the SRS.
In this example, the image contains one signature and the stages of recognition are demonstrated below:
Figure (7): Framework of the SRS (image scanning, preprocessing, feature extraction, and classification against the references database; if the signature is recognized, the person's name is returned, otherwise an error is reported).
5.1 Experimental Results
This section presents the testing results for selecting the best number of hidden nodes and the best learning rate, which are important for the network design in order to obtain optimal results.
5.1.1 The Best Number of Hidden Nodes
To determine the number of hidden neurons with which the network performs best, different network architectures were tried by varying the number of hidden nodes between (3) and (50).
By comparing the results of the different architectures in table (5) and the graphs shown in figures (8) and (9), it was found that the network with (49) hidden nodes produced a good result and had a smaller error than the other numbers of hidden nodes.
Out of (100) test signatures, (98) have been correctly recognized, which amounts to a (98%) success rate; the remaining (2) signatures, i.e. (2.0%), were falsely recognized by the system.
5.1.2 The Best Learning Rate
In order to increase the network's effectiveness, several experiments were carried out to select the best learning rate (α) value. The candidate values were chosen in the range (0-1) in order to find their effect on the amount by which each weight is changed for any given error calculation during network learning.
Using (100) test signatures for the backpropagation network produced the results presented in tables (6) and (7) and figures (10) and (11). They show that the learning rate (0.2) requires a long time and the largest number of iterations to adjust the weights during network learning. In contrast, (α = 0.5) needs less time and fewer iterations than (α = 0.2), and the time and iteration values for (α = 0.8) are smaller than those for (α = 0.5). The results are summarized in the following table:
Learning Rate | Hidden Nodes | Success Rate % | False Rate %
0.2 | 35 | 93 | 7
0.5 | 49 | 98 | 2
0.8 | 48, 49 | 93 | 7
Out of (100) test signatures, the network with learning rate (0.2) correctly recognized (93%), with (80300) iterations and (5) minutes, which is a poor result compared with the second setting.
The learning rate (0.5) with a momentum term of (0.4) gave the best recognition results, a (98%) correct rate, with (38800) iterations and (3.20) minutes; the remaining (2%) of the signatures were falsely recognized. The learning rate (0.8) achieved a (93%) success rate with (20600) and (23000) iterations and (1.45) and (1.58) minutes for (48) and (49) hidden nodes respectively, which is as poor as the learning rate (0.2).
The learning rate (0.5) therefore gave the best recognition results compared with the other settings and was used in the network design.
Table (5): Success and false rates (%) for the networks with learning rates 0.2, 0.5 and 0.8.
Hidden Nodes | Success % (0.2) | False % (0.2) | Success % (0.5) | False % (0.5) | Success % (0.8) | False % (0.8)
3 42 58 57 43 30 70
4 65 35 57 43 37 63
5 65 35 72 28 63 37
6 71 29 72 28 66 34
7 72 28 70 30 63 37
8 70 30 72 28 77 23
9 76 24 66 34 72 28
10 81 19 79 21 76 24
11 88 12 89 11 76 24
12 73 27 79 21 70 30
13 80 20 77 23 81 19
14 83 17 78 22 74 26
15 80 20 77 23 84 16
16 82 18 89 11 76 24
17 83 17 85 15 86 14
18 90 10 85 15 83 17
19 82 18 81 19 88 12
20 84 16 82 18 80 20
21 84 16 80 20 77 23
22 81 19 78 22 88 12
23 82 18 87 13 80 20
24 85 15 87 13 81 19
25 86 14 86 14 81 19
26 87 13 85 15 81 19
27 84 16 76 24 84 16
28 82 18 86 14 86 14
29 82 18 81 19 84 16
30 83 17 89 11 78 22
31 76 24 80 20 83 17
32 84 16 85 15 81 19
33 92 8 81 19 78 22
34 85 15 83 17 84 16
35 93 7 83 17 87 13
36 79 21 82 18 82 18
37 88 12 94 6 91 9
38 83 17 83 17 82 18
39 80 20 82 18 76 24
40 86 14 86 14 79 21
41 87 13 87 13 87 13
42 86 14 88 12 88 12
43 83 17 81 19 85 15
44 88 12 85 15 79 21
45 85 15 81 19 86 14
46 79 21 81 19 79 21
47 78 22 78 22 77 23
48 84 16 88 12 93 7
49 84 16 98 2 93 7
50 87 13 90 10 83 17
Table (6): Number of iterations for the networks with varying learning rate.
Hidden Nodes | Iterations (0.2) | Iterations (0.5) | Iterations (0.8) | Hidden Nodes | Iterations (0.2) | Iterations (0.5) | Iterations (0.8)
1 - - - 26 73100 27100 15500
2 - - - 27 64600 28900 18900
3 1800400 218600 1860900 28 87000 29000 16700
4 427600 377000 151300 29 93400 35300 17100
5 485600 95700 26200 30 70100 28800 18800
6 97900 31700 39000 31 89100 33400 17600
7 112100 45200 20600 32 87600 31700 16700
8 106200 44500 20900 33 87400 33900 22100
9 71900 29200 18400 34 77300 32700 17900
10 87700 32700 29600 35 80300 29500 19700
11 60500 27900 15100 36 92300 33300 18700
12 81700 23500 21100 37 76900 28800 18700
13 69100 25900 16700 38 106500 34900 19200
14 99500 29900 17500 39 83000 33800 21100
15 78000 28800 18300 40 87900 34500 19700
16 84200 28200 16700 41 109200 36500 23500
17 62800 24000 16300 42 95900 35900 19000
18 82100 31100 21900 43 101700 35200 19900
19 104600 29200 16200 44 91800 34500 19700
20 96600 27900 18900 45 84000 37900 20300
21 71400 40100 18600 46 95400 39300 21000
22 74600 24600 16800 47 142700 42700 21600
23 78800 27300 16700 48 121200 42000 20600
24 67100 30100 16500 49 115700 38800 23000
25 81000 34100 20100 50 100900 38000 222000
Table (7): Training time (minutes) for the networks with varying learning rate.
Hidden Nodes | Time min (0.2) | Time min (0.5) | Time min (0.8) | Hidden Nodes | Time min (0.2) | Time min (0.5) | Time min (0.8)
1 - - - 26 3.20 1.15 0.40
2 - - - 27 3.02 1.20 0.45
3 10.30 1.15 12.50 28 4.10 1.25 0.45
4 3.10 2.40 1.10 29 4.40 1.45 0.50
5 4.2 0.40 0.15 30 3.50 1.35 1.05
6 1.5 0.20 0.25 31 5.05 1.53 1
7 1.10 0.35 0.15 32 4.55 1.45 0.58
8 1.20 0.35 0.20 33 5.00 1.55 1.15
9 1.50 0.30 0.17 34 4.40 1.58 1.2
10 1.30 0.35 0.35 35 5.00 1.45 1.12
11 1.10 0.35 0.15 36 5.50 2.05 1.10
12 1.40 0.30 0.25 37 5.00 1.50 1.12
13 1.35 0.35 0.25 38 7.10 2.15 1.15
14 2.35 0.40 0.27 39 5.45 2.20 1.25
15 2.15 0.45 0.30 40 6.15 2.23 1.25
16 2.15 0.40 0.27 41 7.45 2.40 1.40
17 1.40 0.40 0.30 42 7.00 2.35 1.25
18 2.35 0.55 0.40 43 8.35 2.40 1.30
19 3.30 0.55 0.40 44 7.50 2.35 1.30
20 3.25 0.57 0.40 45 6.40 2.58 1.40
21 3.35 1.30 0.40 46 8.10 3.25 1.50
22 2.55 0.57 0.40 47 12.35 3.45 1.55
23 3.10 1.05 0.40 48 10.15 3.35 1.45
24 2.45 1.15 0.40 49 10.05 3.20 1.58
25 3.30 1.27 0.40 50 9.00 3.20 1.57
Figure (8): Success rate for the networks with varying learning rates.
Figure (9): False rate for the networks with varying learning rates.
Figure (10): Number of iterations for the network with varying number of hidden nodes and learning rate (α).
Figure (11): Time (minutes) for the network with varying number of hidden nodes and learning rate.
6. Conclusions
The main conclusions of the research work can be summarized
as follows:
1- The SRS achieved a high recognition rate, amounting to (98%). The (100) test signatures consisted of (50) signatures belonging to persons whose signatures exist in the database and another (50) signatures not included in the database.
2- The system proved its effectiveness in classification by applying NNs, especially the backpropagation network, which also proved to be effective in pattern recognition. In the experiments performed on (100) test signatures, the system accepted (98) signatures and refused (2), so the type I error is (2%).
3- By comparing the results of the different NN architectures in table (5), it was found that the network with (49) hidden nodes produced a good result. This means the number of hidden nodes has to be varied in order to determine the number of hidden neurons with which the network performs best.
4- The results of the experiments listed in the tables above have shown that the enhanced backpropagation network with a learning rate of (α = 0.5) is the most suitable for the task of off-line signature recognition.
5- The extracted features in this work comprise four histograms, which produce a good result; the same idea may be used and applied to other pattern recognition problems.
7. Recommendations for Future Research
The following recommendations for further research can be identified:
1- Structural pattern recognition based on Bezier curves or elliptic curves can provide the attributes used for recognition and for the application of the backpropagation algorithm, after solving the signature segmentation problem.
2- Since the backpropagation network was found effective in pattern recognition, it can be improved by using multiple hidden layers instead of one.
3- Other networks can be used instead of the backpropagation network, such as the Learning Vector Quantization or Neocognitron networks. These networks may give fast results.
4- There are many reasons for the significant differences among signatures of the same person, such as psychological state and age. Therefore it is necessary to find a way to minimize the differences among the signatures of the same person.
5- The SRS can also be applied to handwriting, fingerprint, and face recognition, where it may be even more effective than for signature recognition.
8. References
1- Mohr, D., Pino, D. and Saxe, D., "Comparison of Methods for Digit Recognition", Wilmington, DE, USA, 1999. Internet site: http://www.cis.udel.edu/~saxe/DaveResearch/Digit/Digitpaper.html.
2- Wilson, A. T., "Off-line Handwriting Recognition Using Artificial Neural Networks", University of Minnesota, 2000. Internet site: http://mrs.umu.edu./~lopezdr/seminar/fall2000/wilson.pdf.
3- "Introduction to Pattern Recognition". Internet site: http://cwis.auc.dk/phd/fulltext/larsen/htm1/node5.htm1.
4- Internet site: http://www.ph.tn.tude/ft.nl/Research/neural/feature-extraction/papers/thesis/node67.html.
5- Vuori, V., "Adaptation in On-line Recognition of Handwriting", thesis, Helsinki University of Technology, 1999.
6- David, C., "Neural Networks and Genetic Algorithms", Reading University, p. 1, 1998.
7- McCollum, P., "An Introduction to Back-Propagation Neural Networks", p. 1, 1998. Internet site: http://www.seattlerobotics.org/encoder/nov98/neural.html.