
Tribhuvan University

Patan Multiple Campus


Department of Computer Science and Information Technology
Patan Dhoka, Lalitpur, Nepal

Project Proposal
PATTERN RECOGNITION USING NEURAL NETWORK
(Digit Recognition)
In partial fulfillment of the Image Processing course
Date: 4/10/2011

Project Adviser: Rosa Kiran Basukala

Submitted By: Dil Prasad Kunwar (02/064)


Abstract

An Artificial Neural Network (ANN), often just called a "neural network" (NN), is
a mathematical model or computational model based on biological neural networks. It consists
of an interconnected group of artificial neurons and processes information using a
connectionist approach to computation. In most cases an ANN is an adaptive system that
changes its structure based on external or internal information that flows through the
network during the learning phase. In more practical terms, neural networks are non-linear
statistical data modeling tools. They can be used to model complex relationships between
inputs and outputs or to find patterns in data.

A Feed Forward Neural Network is an artificial neural network where connections between the
units do not form a directed cycle. This is different from recurrent neural networks. The
feed forward neural network was the first and arguably simplest type of artificial neural
network devised. In this network, the information moves in only one direction, forward,
from the input nodes, through the hidden nodes (if any) and to the output nodes. There are
no cycles or loops in the network.

The Back Propagation Algorithm is a common way of teaching artificial neural networks
how to perform a given task. It requires a teacher that knows, or can calculate, the desired
output for any given input, and it is most useful for feed-forward networks. The back
propagation algorithm learns the weights for a multilayer network, given a network with a
fixed set of units and interconnections. It employs the gradient descent rule to attempt to
minimize the squared error between the network output values and the target values for these
outputs. [1]

Fig 1: Simple Neural Networks having 3 Layers (Input layer, hidden layer and output layer)

Problem considered in NN:


Pattern recognition has become an important and interesting application area in industry. I am
planning to apply a Neural Network to recognize characters, initially digits. My research on
the internet and in other materials suggested that a Neural Network would be a good option
for recognizing characters. I am particularly interested in learning how a Neural Network can
be applied to a particular problem domain (in this case, character recognition) and why a NN
is worth applying.

Objectives

I propose to study in detail how a neural network can be applied to pattern recognition in
image processing. In this project work I will pursue the following two goals:

1. To understand the working principle of an ANN for pattern recognition.


2. To understand how the efficiency of pattern recognition can be improved using an ANN.

The Task

The learning task here involves recognizing characters (digits, to begin with). The target
function classifies a given character image as one of the digits in the target set.

Initial Input Encoding:

There should be a drawing panel where digits can be drawn and then used for either training
or testing. Each drawing is converted into a matrix of size 5 by 5, i.e. 25 pixels.
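The conversion step above can be sketched as follows. The project itself will be written in C#, so this Python fragment is only an illustration of the idea: split the drawing canvas into a 5 by 5 grid of blocks and mark a block 1 if any pixel inside it is set (the block-splitting rule and the helper names are my own assumptions, not part of the proposal).

```python
# Illustrative sketch (Python; the project targets C#): reduce a binary
# drawing to the proposed 5x5 input matrix. Assumes the canvas dimensions
# are divisible by 5; the any-pixel-set rule is an assumed thresholding.

def to_5x5(drawing):
    """drawing: list of rows of 0/1 pixels (width and height divisible by 5)."""
    rows, cols = len(drawing), len(drawing[0])
    bh, bw = rows // 5, cols // 5  # block height and width
    matrix = [[0] * 5 for _ in range(5)]
    for r in range(5):
        for c in range(5):
            block = [drawing[r * bh + i][c * bw + j]
                     for i in range(bh) for j in range(bw)]
            matrix[r][c] = 1 if any(block) else 0
    return matrix

def to_input_vector(matrix):
    """Flatten the 5x5 matrix into the 25-element network input vector."""
    return [pixel for row in matrix for pixel in row]
```

A 10x10 canvas, for example, maps each 2x2 block of pixels to one matrix cell.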

Initial Output Encoding:


I will use 10 distinct output units, since we need to classify the 10 digits [0, 1, …, 9];
each unit corresponds to one of the possible digits.

00111        00001
00001        00001
00001        00001
00001        00001
00011        00001

Fig 2: a) Input b) Converted Matrix c) Target

Initial Network Structure:

To represent the 25 matrix pixels I will use 25 input layer units, and 10 output layer units
for the 10 distinct digits. For the hidden layer, I am planning to start with four units:
three units could distinguish at most 2^3 = 8 outputs, so four hidden units (2^4 = 16) should
be enough for 10 output units. I shall change the number of hidden layer units if it turns
out to be required for better performance while testing the system.
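The proposed 25-4-10 structure can be set up as two weight matrices. A minimal Python sketch (the project will use C#; the initialization range of ±0.5 is an assumption, since random small initial weights are the usual starting point for backpropagation):

```python
# Sketch of the proposed 25-4-10 structure: two weight matrices with
# small random initial values (the +/-0.5 range is an assumed choice).
import random

N_INPUT, N_HIDDEN, N_OUTPUT = 25, 4, 10

def init_weights(n_from, n_to):
    """One row of weights per destination unit."""
    return [[random.uniform(-0.5, 0.5) for _ in range(n_from)]
            for _ in range(n_to)]

w_hidden = init_weights(N_INPUT, N_HIDDEN)   # input  -> hidden
w_output = init_weights(N_HIDDEN, N_OUTPUT)  # hidden -> output
```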

Fig 2: NN structure for the proposed digit recognition (input values → 25×1 input neuron
layer → weight matrix → 4×1 hidden neuron layer → weight matrix → 10×1 output neuron
layer → output values)

Learning Process:

Basically, the backpropagation algorithm is used to train the NN structure proposed above. I
will start with random values assigned to the weight matrices, and will gradually adjust the
weights (i.e. train the network) by performing the following procedure for all pattern pairs:

Forward pass

1. First, each unit j computes its total weighted input xj, using the formula:

xj = Σi yi * Wij

where yi is the activity level of the ith unit in the previous layer and Wij is the weight
of the connection between the ith and the jth unit.

2. Next, the unit calculates its activity yj using some function of the total weighted
input. Typically we use the sigmoid function:

yj = 1 / (1 + e^(-xj))

Note: the sigmoid function is used here to guarantee that the output returned by the
function is within the range of 0 and 1.
Once the activities of all output units have been determined, the network computes the
error E, the squared difference between the actual and target outputs. (For the backward
pass, only the per-step formulas are given below; the loop over patterns is not written out.)
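The two forward-pass steps above can be sketched directly (Python for illustration; the project itself will use C#, and the helper names are my own):

```python
# Sketch of the forward pass: each unit takes the weighted sum of the
# previous layer's activities and squashes it with the sigmoid.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights):
    """weights[j][i] = Wij, the connection from unit i to unit j."""
    return [sigmoid(sum(w_ji * y_i for w_ji, y_i in zip(row, inputs)))
            for row in weights]

def forward(x, w_hidden, w_output):
    y_hidden = layer_forward(x, w_hidden)       # input  -> hidden
    y_output = layer_forward(y_hidden, w_output)  # hidden -> output
    return y_hidden, y_output
```

With all weights zero, every unit's weighted input is 0 and its activity is sigmoid(0) = 0.5.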

Backward pass

1. Compute the output layer error using the formula:

Eo = yo(1 - yo)(yo - t)

where Eo is the vector of errors for each output neuron, yo is the output layer vector, and
t is the target (correct) activation of the output layer.
2. Compute the hidden layer error:
Eh = Wh * Eo
where Eh is the hidden layer error and Wh is the hidden layer weight.
3. Compute the input layer error:
Ei = Wi * Eh
where Ei is the input layer error and Wi is the weight of the input layer, i.e. the
pre-input layer weight.

Fig 3: Symbols used in back propagation (activations Yp, Yi, Yh, Yo; weights Wi, Wh, Wo;
errors Ei, Eh, Eo)

Now the task is to adjust the weights of each node of each layer:
4. Update the first (input) layer weight:
ΔWi = R * Ei * Yp
where R is the learning rate (the constant of proportionality) and Yp is the pre-input
activation to the input layer node.
5. Update the hidden layer weight:
ΔWh = R * Eh * Yi
where Yi is the activation of the input layer node.
6. Update the output layer weight:
ΔWo = R * Eo * Yh
where Yh is the activation of the hidden layer node and Wo is the weight of the output
layer.
Repeat all these steps (forward pass and backward pass) on all pattern pairs until the
output layer error (vector Eo) is within the specified tolerance for each pattern and each
neuron.
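One full training step can be sketched as below for the two-weight-matrix 25-4-10 structure (Python for illustration; the project will use C#). Note this sketch follows the standard textbook form rather than the symbolic rules above exactly: the hidden error includes the sigmoid derivative term, and since Eo is defined with (yo - t), the updates subtract R * E * y so that the squared error actually decreases.

```python
# Illustrative sketch of one backpropagation step for the 25-4-10 net.
# Standard sigmoid-derivative form; signs chosen so squared error falls.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights):
    """weights[j][i] = Wij, the connection from unit i to unit j."""
    return [sigmoid(sum(w * y for w, y in zip(row, inputs)))
            for row in weights]

def train_step(x, target, w_hidden, w_output, rate):
    # Forward pass.
    y_h = layer_forward(x, w_hidden)
    y_o = layer_forward(y_h, w_output)

    # Output layer error: Eo = yo(1 - yo)(yo - t).
    e_o = [y * (1 - y) * (y - t) for y, t in zip(y_o, target)]

    # Hidden layer error: propagate Eo back through the output weights,
    # with the sigmoid derivative of the hidden activities.
    e_h = [y_h[j] * (1 - y_h[j]) *
           sum(w_output[k][j] * e_o[k] for k in range(len(e_o)))
           for j in range(len(y_h))]

    # Weight updates: W -= R * E * y (descend the error gradient).
    for k in range(len(w_output)):
        for j in range(len(y_h)):
            w_output[k][j] -= rate * e_o[k] * y_h[j]
    for j in range(len(w_hidden)):
        for i in range(len(x)):
            w_hidden[j][i] -= rate * e_h[j] * x[i]

    # Report the squared error for this pattern.
    return sum((y - t) ** 2 for y, t in zip(y_o, target))
```

Repeating train_step over all pattern pairs until the reported error is within tolerance corresponds to the loop described above.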

Note: The link weights store the knowledge necessary to solve specific problems.

Implementation

I’m going to implement the project using C sharp (C#) programming language.

Probable date of completion of the project:

The approximate date of completion of the project including the final report is:

Monday, May 09, 2011

Management Plan

Fig 4: Schedule for completion of the project (Gantt chart)

Conclusion:

The architectural model for digit recognition using pattern recognition with a neural
network was proposed.

References:
1. http://www.codeproject.com/KB/cs/BackPropagationNeuralNet.aspx
2. http://www.scipub.org/fulltext/jcs/jcs56427-434.pdf
3. http://ix.cs.uoregon.edu/~raihan/NN_Digit_Recog_Raihan.pdf
4. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.27.9135&rep=rep1&type=pdf
5. http://mindsignal.org/technology/optical-character-recognition-backpropagation-nural-network/
