
1. INTRODUCTION
This project deals with the detection and recognition of hand gestures. Images of hand gestures will be captured using a personal computer, and the gestures will be recognized using image processing and a neural network. Gesture recognition is one of the essential techniques for building user-friendly interfaces. For example, a robot that can recognize hand gestures can take commands from humans, and for those who are unable to speak or hear, a robot that can recognize sign language would allow them to communicate with it. Hand gesture recognition could also help in video gaming by allowing players to interact with the game using gestures instead of a controller. However, such an algorithm needs to be far more robust, to account for the myriad of possible hand positions in three-dimensional space, and it needs to work with video rather than static images; that is beyond the scope of our project.

2. PROBLEM DEFINITION
Hand gesture recognition is an important approach to pursue because, with the development of ubiquitous computing, current user interaction approaches based on the keyboard, mouse and pen are not sufficient. Due to the limitations of these devices, the usable command set is also limited. Direct use of the hands can serve as an input device that provides natural interaction. This system can likewise smoothen communication for disabled people. Hence, to increase communication between users and machines, our system can be very helpful.

3. OBJECTIVE
The objectives of this project are:
 To enable non-verbal communication between two people using hand gestures.
 To give commands to a computer using hand gestures.

4. SCOPES AND LIMITATIONS


The different scopes of this project are:
 To create a method to recognize hand gestures using a camera and a machine learning method.
 To continue the research to improve the GCUI (Gesture Controlled User Interface) and its outcomes, and to enhance communication between machines and users.

The different limitations of this project are:
 The algorithm cannot detect a fast-moving hand.

 The system makes the assumption that the hand is the brightest and closest
object to the camera.
 The system that is built may not recognize all gestures, particularly complex ones.

5. LITERATURE REVIEWS
There are many works which concentrate on wearable devices for gesture recognition.
These works range in scope from small projects by undergraduate CSIT students, to
Master's theses, and much further into million-dollar projects such as those conducted by
NASA's JPL. What follows are a few such works that contributed to this thesis, either by
offering simple guidelines, explanations and examples, or even by helping to define the
scope.
In vision-based hand gesture recognition for human-computer interaction, the different
applications which employ hand gestures for efficient interaction have been discussed
under core and advanced application domains. This paper provided an analysis of the
literature related to gesture recognition systems for human-computer interaction by
categorizing it under different key parameters. It further discusses the advances needed
to improve present hand gesture recognition systems so that they can be widely used for
efficient human-computer interaction [1].
In Automatic Hand Gesture Recognition, the application was designed to recognize
predefined hand gestures using various computer vision and machine learning algorithms.
The Hidden Markov Model (HMM) method is used for recognition, and the forward
algorithm is used for evaluation [2].
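The forward algorithm mentioned above scores how likely an observation sequence is under a given HMM; in recognition, the gesture whose model assigns the highest likelihood wins. A minimal sketch follows; the toy two-state model and its numbers are purely illustrative and are not taken from [2]:

```python
import numpy as np

def forward(obs, pi, A, B):
    """Forward algorithm: likelihood of an observation sequence under an HMM.

    obs : sequence of observation indices
    pi  : initial state probabilities, shape (N,)
    A   : state transition matrix, shape (N, N)
    B   : emission probabilities, shape (N, M)
    """
    alpha = pi * B[:, obs[0]]          # initialise with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate one step, then emit
    return alpha.sum()                 # total probability of the sequence

# Toy two-state model: one forward() call per candidate gesture model,
# and the model giving the highest likelihood is the recognised gesture.
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])
B  = np.array([[0.9, 0.1],
               [0.2, 0.8]])
print(forward([0, 1, 0], pi, A, B))  # -> 0.10893
```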

Superpixel-Based Human Computer Interface Using Hand-Gesture Recognition is
another attempt. This application implemented hand localization and segmentation,
shape representation using joint color-depth superpixels, depth normalization, and
SP-EMD. Template matching is utilized for hand gesture recognition based on the
SP-EMD [3].

Backpropagation algorithm:
One of the most popular neural network algorithms is the backpropagation (BP) algorithm. Rojas [2005]
claimed that the BP algorithm can be broken down into four main steps. After the
weights of the network are chosen randomly, the backpropagation algorithm is used to compute the
necessary corrections. The algorithm can be decomposed into the following four steps:
 Feed-forward computation

 Backpropagation to the output layer

 Backpropagation to the hidden layer

 Weight updates

The algorithm is stopped when the value of the error function has become
sufficiently small.
The algorithm is as follows:
1. First apply the inputs to the network and work out the output. Remember that this initial
output could be anything, as the initial weights were random numbers.
2. Next work out the error for neuron B. The error is "what you want" minus "what you actually
get"; in other words: ErrorB = OutputB (1 - OutputB) (TargetB - OutputB). The "Output(1 -
Output)" term is necessary in the equation because of the sigmoid function; if only using
a threshold neuron it would just be (Target - Output).
3. Change the weight. Let W+AB be the new (trained) weight and WAB be the
initial weight.
W+AB = WAB + (ErrorB × OutputA) ……………(i)
(In practice this correction is usually scaled by a learning rate.) Notice that it is the output
of the connecting neuron (neuron A) that is used, not B's. Update all the weights in the
output layer in this way.
4. Calculate the errors for the hidden layer neurons. Unlike the output layer, these cannot be
calculated directly (because there is no target), so they are back-propagated from the output
layer (hence the name of the algorithm). This is done by taking the errors from the output
neurons and running them back through the weights to get the hidden layer errors. For
example, if neuron A is connected to B and C, take the errors from B and C
to generate an error for A: ErrorA = OutputA (1 - OutputA) (ErrorB WAB + ErrorC WAC)

Again, the factor "Output (1 - Output)" is present because of the sigmoid squashing
function.
5. Having obtained the errors for the hidden layer neurons, proceed as in step 3 to
change the hidden layer weights. By repeating this method, a network with any
number of layers can be trained.
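Steps 1-5 above can be sketched in Python for a tiny one-hidden-layer network. This is a minimal illustration, not the project's implementation; the XOR task, layer sizes, learning rate and bias terms are assumptions added only to make the sketch self-contained:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

N_IN, N_HID = 2, 3   # layer sizes chosen for this toy example
LR = 0.5             # learning rate scaling each weight correction

# Choose the weights of the network randomly:
w_ih = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
b_h  = [random.uniform(-1, 1) for _ in range(N_HID)]
w_ho = [random.uniform(-1, 1) for _ in range(N_HID)]
b_o  = random.uniform(-1, 1)

def forward(x):
    """Step 1: feed-forward computation."""
    hidden = [sigmoid(sum(w_ih[j][i] * x[i] for i in range(N_IN)) + b_h[j])
              for j in range(N_HID)]
    out = sigmoid(sum(w_ho[j] * hidden[j] for j in range(N_HID)) + b_o)
    return hidden, out

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR truth table
for _ in range(20000):
    for x, target in data:
        hidden, out = forward(x)
        # Step 2: ErrorB = OutputB (1 - OutputB) (TargetB - OutputB)
        err_out = out * (1 - out) * (target - out)
        # Step 4: hidden errors, back-propagated through the output weights
        err_hid = [hidden[j] * (1 - hidden[j]) * err_out * w_ho[j]
                   for j in range(N_HID)]
        # Steps 3 and 5: W+ = W + LR * Error * Output of the connecting neuron
        b_o += LR * err_out
        for j in range(N_HID):
            w_ho[j] += LR * err_out * hidden[j]
            b_h[j] += LR * err_hid[j]
            for i in range(N_IN):
                w_ih[j][i] += LR * err_hid[j] * x[i]

print([round(forward(x)[1], 2) for x, _ in data])
```

After training, the network's outputs typically approach the XOR targets 0, 1, 1, 0, and the error function has become sufficiently small for the algorithm to stop.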

6. FEASIBILITY STUDY
Feasibility studies are helpful when expanding, building and remodelling a system, changing
its methods, adding new products, or even merging systems. The key considerations involved
in the feasibility analysis are economic, technical, operational, and schedule feasibility.

 Technical feasibility:
The proposed model focuses on improving classification accuracy rates using the
backpropagation algorithm. This work will also be a good example of achieving better
results in data mining by mixing supervised and unsupervised learning.
 Schedule feasibility:
The web-based system can be developed within an acceptable time frame. The estimated
time to be spent in the development of this application is 4 months, scheduled as follows:

[Gantt chart spanning weeks 1-14, with the following tasks:
1. Field Study and Proposal Preparation
2. Analysis
3. Data Collection
4. Design
5. Implementation
6. Testing/Debugging
7. Report]

Figure 1: Project Schedule

 Operational feasibility:
The system we build can be used efficiently even by non-technical people.
Any person can use the system without modifying it.

7. DATA COLLECTION
Data will be collected from different human gestures; the rest will be
provided manually.

8. TOOLS
Front-End Tools:
The front end will be designed using HTML, CSS and JavaScript.
 HTML: used to define the layout of the project.
 CSS: used to style the layout of the project.
 JavaScript: used to add animation effects to the project.

Back-End Tools:
The back end will mainly be used to store and retrieve data, using Python
along with the Django framework. The database is maintained using SQLite.
 Python: used for server-side scripting and computation.
 SQLite: used as the database system in this project.
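As a minimal sketch of how gesture data might be stored and retrieved with SQLite from Python, here is a fragment using the standard-library sqlite3 module (rather than Django's ORM, which the project would use in practice); the table name and columns are hypothetical:

```python
import sqlite3

# In-memory database for illustration; the real project would use a file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gestures (id INTEGER PRIMARY KEY, label TEXT)")
conn.executemany("INSERT INTO gestures (label) VALUES (?)",
                 [("zero",), ("one",)])
conn.commit()

# Retrieve the stored gesture labels in insertion order.
labels = [row[0] for row in conn.execute("SELECT label FROM gestures ORDER BY id")]
print(labels)  # -> ['zero', 'one']
```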

9. TESTING
Unit Testing

A unit test comprises the set of tests performed by an individual programmer prior
to integration of the unit into a larger system. Unit testing will be done after
each network is created, by giving it real input and checking whether the output
is correct.
Integration Testing
Integration testing will be done by feeding the output of the previous network into
the next one and checking it against the actual desired output.
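A unit test of the kind described above might look as follows in Python's unittest framework; the classify function here is a hypothetical stand-in for the trained network, returning the index of the strongest output neuron:

```python
import unittest

def classify(scores):
    """Hypothetical stand-in for the trained network: the recognised
    digit is the index of the strongest output neuron."""
    return max(range(len(scores)), key=scores.__getitem__)

class TestGestureNetwork(unittest.TestCase):
    def test_displays_zero(self):
        # Output neuron 0 fires strongest, so the system should display '0'.
        self.assertEqual(classify([0.9, 0.1]), 0)

    def test_displays_one(self):
        # Output neuron 1 fires strongest, so the system should display '1'.
        self.assertEqual(classify([0.2, 0.8]), 1)

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestGestureNetwork))
```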
Test Case:

Table: Test Case

Test Case Name   Input Cases   Expected Result                   Test Case Status
System test                    The system should display '0'.    Successful
System test                    The system should display '1'.    Successful
10. HIGH LEVEL DESIGN OF PROPOSED SYSTEM

The high level design of the Hand Gesture Recognition system can be studied from the
following flowchart:

Figure 2: System Flowchart of Hand Gesture Recognition System

The system first takes a hand sign as input, which is detected with the help of a camera.
The camera captures RGB images, which are converted to black-and-white. The background
is subtracted from the static hand images, features are extracted, the gesture is
recognized, and the output is generated.
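The conversion and background-subtraction steps above can be sketched with NumPy. This is an illustrative fragment, not the project's implementation; the luma weights and threshold value are assumptions:

```python
import numpy as np

def preprocess(frame, background, thresh=40):
    """Grayscale conversion + background subtraction + thresholding.

    frame, background : HxWx3 uint8 RGB images (background = empty scene).
    Returns a binary mask in which hand pixels are 1.
    """
    weights = np.array([0.299, 0.587, 0.114])  # standard RGB -> luma weights
    gray_f = frame @ weights                   # grayscale of the current frame
    gray_b = background @ weights              # grayscale of the background
    diff = np.abs(gray_f - gray_b)             # subtract the static background
    return (diff > thresh).astype(np.uint8)    # keep strongly changed pixels

# Toy 4x4 scene: dark background, and a bright 2x2 "hand" in the frame.
bg = np.zeros((4, 4, 3), dtype=np.uint8)
frame = bg.copy()
frame[1:3, 1:3] = 200
mask = preprocess(frame, bg)
print(mask.sum())  # -> 4 pixels detected as hand
```

In the real system the binary mask would then be passed to the feature-extraction stage shown in the flowchart.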

11. EXPECTED OUTPUT
After the successful completion of this project, this system could define the gestures that
is trained to the computer, which is defined as machine learning and provides the assumed
output which eases the communication between human-computer interpretations. The
system extracts data from the databases and provides the outcome by the help of camera.

REFERENCES

[1] S. S. Rautaray and A. Agrawal, "Vision based hand gesture recognition for human
computer interaction: survey," Springer, vol. 43, no. 1, pp. 1-54, 2015.

[2] A. Atray, "Automatic Hand Gesture Recognition," School of Computer Science,
University of Manchester, 3rd Year Project Report, 2015.

[3] A. Maheshwari, A. Semwal and S. Wagle, "Superpixel-Based Human Computer
Interface Using Hand-Gesture Recognition," Indian Institute of Technology,
Kanpur, April 28, 2017.
