
COMPARATIVE STUDY OF HAND GESTURE

RECOGNITION SYSTEM
Rafiqul Zaman Khan¹ and Noor Adnan Ibraheem²
¹,² Department of Computer Science, A.M.U., Aligarh, India
¹ rzk32@yahoo.co.in, ² naibraheem@gmail.com

ABSTRACT
Humans imitate and engage with every detail of their surrounding environment. Hearing-impaired people gesture to each other to deliver specific messages, and this natural mode of communication has also drawn attention in human-computer interaction. Vision-based gesture recognition offers a natural, powerful, and friendly tool for supporting efficient interaction between human and machine. In this paper a review of recent hand gesture recognition systems is presented, with a description of hand gesture modeling, analysis, and recognition. A comparative study is included, focusing on the segmentation, feature extraction, and recognition tools used by different systems, and the advantages and drawbacks of each work are provided as well.

KEYWORDS
Hand posture, hand gesture, human-computer interaction (HCI), segmentation, feature extraction, Hidden Markov Model (HMM), Artificial Neural Networks (ANNs), Fuzzy C-Means.

1. INTRODUCTION

Airplane stewards and stewardesses use gestures to explain the safety instructions to passengers before the airplane takes off, since some passengers may be unable to understand the spoken language, especially aged or hearing-impaired passengers. Gestures are used for communication between people and machines as well. Building natural interaction between human and computer requires an accurate hand gesture recognition system as an interface for easy human-computer interaction (HCI), where the recognized gestures can be used for controlling a robot or conveying meaningful
information [1].

Gestures can be static (postures) or dynamic (sequences of postures). Static gestures require less computational complexity [2] than dynamic gestures, which are more complex; this lower cost makes static gestures suitable for real-time environments [2][3]. The data used by a recognition system can be obtained either from instrumented devices such as data gloves [4] or from the visual hand image; both methods are used to extract gesture features [5]. Vision-based approaches are considered easy, natural, and less costly compared with glove-based approaches, which require the user to wear a special device connected to the computer and thus hinder the naturalness of communication between the user and the computer [5]. Recently, many reviews have discussed gesture recognition systems and applications using different tools [6][7][8][9]. This work demonstrates the recent advancement of gesture recognition systems and represents a starting point for investigators in the hand gesture recognition field.

The paper is organized as follows: Section 2 explains the modeling of hand gestures. The analysis of hand gestures is explained in Section 3. Hand gesture recognition is described in Section 4, and gesture recognition challenges are discussed in Section 5. Section 6 provides a review of recent hand gesture recognition systems. Summary and conclusions are presented in Section 7.

2. HAND GESTURE MODELING

An important stage in the recognition process is how to model the hand in a manner that makes postures or gestures understandable as an interface in HCI [10]. The kinematic structure of the hand is the basis for the modeling techniques [9]. There are two types of gesture modeling, as classified in [10]: spatial and temporal. Spatial modeling considers the characteristics of the posture (the gesture shape) in the environment of HCI applications [10], whereas temporal modeling covers the dynamic characteristics of the hand gesture (its motion) in that environment [10]. Hand modeling in the spatial domain can be implemented in 2D or 3D space [11][24].

Figure 1. Hand modeling.

2D hand modeling can be based on shape, motion, or deformable templates [11]. Shape models can be categorized into geometric models [8][24][1] and non-geometric models [1][24]. Geometric models deal with location and position features of the fingertips and palm [24], while non-geometric models use other shape features to model the hand, such as color [1], silhouette and texture [1], contour [9][11], edges [11], image moments [11], and eigenvectors [11]; some of these features are used in feature extraction and analysis as well [11]. Deformable templates, or flexible models, provide a level of flexibility in changing the shape of the object [25] to allow for small variations in the hand shape [11]. Motion-based models rely on color cues to track the hand; color markers are used for tracking and estimating finger positions to model the hand shape [26]. The hand shape can also be represented using a 3D model of the hand [9][11][24], categorized into volumetric, skeletal, and geometrical models [24][11][9]. Volumetric models are complex and computationally demanding [9], and use many parameters to represent the hand shape [11]; skeletal models, on the other hand, require fewer parameters to structure the hand shape [11][24]. Geometrical models are used widely for hand animation and real applications [26]. Geometric surfaces efficiently simulate the visual hand image, but many parameters are required and the process is time consuming [26][11][24]. An alternative is to approximate the visual shape with geometric primitives such as cylinders and ellipsoids [26][24][11][9]. Polygon meshes [26] and cardboard models [26] are examples of geometrical models. Figure 1 illustrates the techniques for hand modeling, and Figure 2 shows examples of these modeling methods.

Figure 2. Various hand modeling methods to represent hand posture [27]. (a) 3D volumetric model. (b) 3D
geometric model. (c) 3D Skeleton model. (d) Colored marker based model. (e) Non-geometric shape model
(Binary silhouette [11]). (f) 2D deformable template model (contour [11]). (g) Motion based model.

3. HAND GESTURE ANALYSIS

After modeling the hand gesture, an analysis phase is required, which aims to estimate the parameters [11] needed for gesture recognition. Two tasks are addressed in gesture analysis [11], as shown in Figure 3: first, feature detection, which concerns the extraction of useful features from the input image or video [11]; and second, parameter estimation, which concerns computing the model parameters from the extracted features [11].

Feature detection requires localization of the hand in order to extract the necessary features [9][24]. Different approaches have been applied for hand localization and feature extraction [24]. The hand gesture can be localized by detecting the hand in the image [11] and segmenting the hand (the foreground object) from the background, i.e., the unwanted other objects [24]. The localization process can be categorized into two methods depending on the cue used for detection, as described in [11]: color tracking and motion tracking.

Skin color provides an effective [26] and efficient [24][26] cue for hand localization [26]. Skin-color-based segmentation is the most popular method applied for hand locating [9]. Other methods are also used for the same purpose, such as background subtraction [9][24] and adaptive background models [9]. The choice of color space plays a significant role in the segmentation process, since some color spaces are less affected by lighting conditions, such as the normalized r-g color model and the HS (hue-saturation) space [11]. However, to obtain the best results from color-based localization, some limitations have to be imposed on the system [9], such as requiring that the hand is the only skin-colored object in the scene or at least the dominant object [9], along with a static background [9] and controlled lighting conditions [9][11]. Other researchers apply colored gloves or markers to locate the hand gesture in realistic applications [11]. Locating the hand in a sequence of image frames can be achieved by tracking motion combined with the color cue, which is generally used for locating hand gestures under certain assumptions about the moving hand and the background of the ambient environment [11].

Since the hand is an articulated, structured object, analyzing hand motion is difficult [26]. Appearance-based and 3D-model-based approaches can be utilized for gesture analysis to estimate the parameters needed for feature extraction [26][11]. In a 3D hand model, two kinds of parameters are used for the analysis of the kinematic shape, as described in [11]: angular parameters (the joint angles) and linear parameters (the palm dimensions). These parameters should be evaluated properly and updated as gestures evolve in time [11]. Appearance-based models, on the other hand, use different parameters [11], such as optical flow for hand motion estimation [11], active contours [11], and deformable 2D template-based models [11]. The fusion of color and motion cues has recently been studied and can overcome some drawbacks of using these cues separately [11]. The extracted features can be finger and palm locations [24], hand position [24], and hand centroid [24]. These features are stored in the computer and used as templates for training data, to be compared with the features extracted from the input image with the help of machine learning techniques such as neural networks, HMMs, etc. [24].

Figure 3. Analysis of hand gesture and recognition steps [11].
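As a concrete illustration of the skin-color cue discussed above, the following sketch locates the hand by thresholding in HSV space and keeping the largest skin-colored blob. It assumes OpenCV 4.x and NumPy; the threshold values and the segment_hand helper are illustrative assumptions, not settings taken from any of the surveyed systems.

```python
# A minimal sketch of skin-color based hand localization (assumes OpenCV 4.x).
# The HSV bounds are illustrative, not the values used by any surveyed system.
import cv2
import numpy as np

def segment_hand(bgr_image):
    """Return the raw skin mask and a mask containing only the largest blob."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # Rough, illustrative skin range in hue/saturation/value.
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)

    # Morphological opening/closing to suppress noise in the mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Keep only the largest connected component, assuming the hand is the
    # dominant skin-colored object in the scene (a common restriction).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hand_mask = np.zeros_like(mask)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        cv2.drawContours(hand_mask, [largest], -1, 255, thickness=cv2.FILLED)
    return mask, hand_mask
```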

4. HAND GESTURE RECOGNITION

In the recognition phase, the input visual gesture images are recognized as meaningful gestures based on the gesture modeling and analysis [11]. The recognition process is affected by the proper selection of gesture parameters and features, and thus so is the accuracy of the classification [7]. For example, edge detection or contour operators alone [7] are not suitable for gesture recognition, since they might lead to misclassification [7]. Neural networks have been widely applied for extracting the hand shape [12] and for hand gesture recognition [13][14][15]. The Hidden Markov Model (HMM) has shown excellent performance as a recognition tool [16][17]. Clustering algorithms have also been used in the recognition process [3]; for example, the fuzzy c-means clustering algorithm has been applied for controlling robot motion [2].
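To make the HMM option concrete, the following sketch scores a discrete observation sequence (for example, quantized motion codewords) with the scaled forward algorithm; the toy two-state model, the symbol count, and the function name are illustrative assumptions rather than a model from any surveyed system.

```python
# A minimal sketch of HMM-based gesture scoring using the scaled forward
# algorithm for discrete observations (e.g., quantized motion codewords).
# The toy parameters below are illustrative assumptions only.
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """obs: list of symbol indices; pi: (N,) initial state probabilities;
    A: (N, N) state transition matrix; B: (N, M) emission matrix."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()          # scale to avoid numerical underflow
    for symbol in obs[1:]:
        alpha = (alpha @ A) * B[:, symbol]
        scale = alpha.sum()
        log_lik += np.log(scale)
        alpha = alpha / scale
    return log_lik

# Toy 2-state, 4-symbol model; in practice one HMM is trained per gesture
# class and the input sequence is assigned to the highest-likelihood model.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.2, 0.2, 0.1], [0.1, 0.3, 0.3, 0.3]])
print(forward_log_likelihood([0, 1, 1, 3, 2], pi, A, B))
```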

5. HAND GESTURE CHALLENGES

Hand gesture recognition systems confront many challenges, as addressed in [18]. These challenges are: variation of illumination conditions, where any change in the lighting conditions adversely affects the extracted hand skin region [18]; the rotation problem, which arises when the hand region is rotated in any direction in the scene [5]; the background problem, which refers to complex backgrounds where other objects appear in the scene with the hand [18] and may contain skin-like colors, producing misclassification; the scale problem, which appears when the hand poses have different sizes in the gesture images [5]; and finally the translation problem, where the variation of hand positions across images leads to erroneous representation of the features [19].

6. REVIEW OF GESTURE RECOGNITION SYSTEMS

Various techniques and tools have been applied for hand gesture recognition; in this section some of these methods are presented. Li [2] recognized hand gestures using the fuzzy c-means clustering algorithm for a mobile remote-control application. The input image is converted into HSV color space and segmented using a thresholding technique. After removing the noise from the image, 13 elements are extracted as the feature vector: the first element represents the aspect ratio of the bounding box of the hand, and the remaining 12 parameters represent grid cells of the image [2]. Each cell corresponds to the mean gray level of a 3 by 4 block partition of the image, where the mean value of each cell is the average brightness of its pixels. The FCM clustering algorithm is used for gesture classification. The system was implemented under a complex background and invariant lighting conditions [2]. It was trained with 120 samples, 20 samples for each of 6 gestures, and achieved a recognition accuracy of 85.83% with a recognition time of 2-4 seconds. Hasan [18] partitioned the input image into blocks of 23x23 pixels each and used these blocks as a feature vector. The system consists of a training phase and a running phase. In the training phase, feature vectors of several input gestures are stored in a database. In the testing phase, the feature vector of the input image is compared with those stored in the database and classified using the Euclidean distance. Since the segmentation operation is imperfect, a hand saturation algorithm was applied to fill the inner area of the hand gesture [18][11]. Figure 4 shows this process.

Figure 4. Hand Gesture Saturation process [18]. A: Original input image. B: Segmented gesture. C: Hand
Gesture Saturation. D: Edge boundary.
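The 13-element feature vector described for [2] can be sketched as follows: the aspect ratio of the hand bounding box followed by the mean brightness of each cell of a 3x4 partition. The function name and the normalization details are assumptions for illustration only; in [2] such vectors are then clustered with fuzzy c-means and the input is assigned to the nearest cluster.

```python
# A hedged sketch of a 13-element feature vector in the style described for [2]:
# bounding-box aspect ratio plus the mean grey level of each cell of a 3x4
# partition of the segmented hand. Names and normalization are assumptions.
import numpy as np

def li_style_features(gray_hand, mask):
    """gray_hand: 2-D grey image; mask: binary hand mask of the same size."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        raise ValueError("empty hand mask")
    x0, x1, y0, y1 = xs.min(), xs.max() + 1, ys.min(), ys.max() + 1

    # Feature 1: aspect ratio (width / height) of the hand bounding box.
    aspect_ratio = (x1 - x0) / float(y1 - y0)

    # Crop to the bounding box and zero out non-hand pixels.
    crop = np.where(mask[y0:y1, x0:x1] > 0, gray_hand[y0:y1, x0:x1], 0)

    # Features 2-13: mean brightness of each cell in a 3x4 grid.
    rows, cols = 3, 4
    r_edges = np.linspace(0, crop.shape[0], rows + 1, dtype=int)
    c_edges = np.linspace(0, crop.shape[1], cols + 1, dtype=int)
    cell_means = []
    for i in range(rows):
        for j in range(cols):
            cell = crop[r_edges[i]:r_edges[i + 1], c_edges[j]:c_edges[j + 1]]
            cell_means.append(cell.mean() if cell.size else 0.0)

    return np.array([aspect_ratio] + cell_means, dtype=float)
```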

The filling algorithm was improved in [20] to produce the Speed Border Image Initial Algorithm (SBIIA). The morphological operations for object reconstruction were modified, achieving good processing time and quality. Dynamic structuring elements for the dilation operation were employed [20] to fix the problem of unfilled objects in the previous algorithm, as shown in Figure 5.

Figure 5. Modified filling algorithm (SBIIA) [20]. A: The original input image. B: The image processed by the original version of the algorithm (BIIA). C: The image processed by the modified algorithm (SBIIA).
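For readers unfamiliar with hole filling, the following is a generic flood-fill based routine that produces a solid silhouette from a binary mask with internal gaps. It is only a stand-in to show the intended effect, not the BIIA/SBIIA algorithm of [20]; it assumes OpenCV and that the top-left image corner is a background pixel.

```python
# Generic hole filling for a binary (0/255) object mask: flood-fill the
# background from a corner, invert, and OR the holes back into the mask.
# This is illustrative only and is not the SBIIA algorithm of [20].
import cv2
import numpy as np

def fill_holes(binary_mask):
    h, w = binary_mask.shape
    flood = binary_mask.copy()
    # The mask passed to floodFill must be 2 pixels larger than the image.
    ff_mask = np.zeros((h + 2, w + 2), dtype=np.uint8)
    # Assumes pixel (0, 0) belongs to the background.
    cv2.floodFill(flood, ff_mask, (0, 0), 255)
    holes = cv2.bitwise_not(flood)          # pixels unreachable from outside
    return cv2.bitwise_or(binary_mask, holes)
```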

Stergiopoulou [12] suggested a new Self-Growing and Self-Organized Neural Gas (SGONG) network for hand shape approximation. For hand region detection, a color segmentation technique based on a skin color filter in YCbCr color space was used. Three features were extracted using a finger identification process, which determines the number of raised fingers and characteristics of the hand shape. Wysoski et al. [5] presented rotation-invariant posture recognition using boundary histograms. For segmentation, skin color detection filtering was applied, and a clustering operation was used to find the boundary of each group with an ordinary contour-tracking algorithm. The boundary was represented as a chord-size chain, which was used to form histograms. For classification, MLP neural networks and DP matching were used. Kouichi [13] presented a posture recognition system using a neural network to recognize 42 finger-alphabet symbols and a gesture recognition system to recognize 10 words. A back-propagation neural network was used for posture recognition and an Elman recurrent neural network for gesture recognition. The two systems were integrated in such a way that, after receiving the raw data, the posture system determines the sampling start time of the input; if the input is decided to be a gesture, it is sent to the gesture system. Elmezain [16] proposed a system to recognize isolated and meaningful gestures for Arabic numbers (0 to 9). A Gaussian Mixture Model (GMM) was used for skin color detection. For feature extraction, the orientation between the centroid points of the current frame and the previous frame was determined and then quantized by vector quantization. The hand motion path was recognized using different HMM topologies and the Baum-Welch (BW) algorithm. The system relied on a zero-codeword detection model to recognize the meaningful gestures within continuous gestures (see Figure 6).

Figure 6. Isolated and continuous gesture representation [16]. A: Path of the isolated number 5. B: Continuous number 32 formed from connected points of hand regions.
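The centroid-orientation feature used for hand-motion paths can be illustrated as follows: the angle between hand centroids in consecutive frames is quantized into directional codewords that can then feed an HMM. The choice of 18 codewords (20-degree bins) and the function name are illustrative assumptions, not the exact settings of [16].

```python
# A hedged sketch of centroid-orientation quantization for hand-motion paths:
# the angle between consecutive centroids is mapped to one of num_codes
# directional codewords. 18 codewords is an illustrative assumption.
import math

def orientation_codewords(centroids, num_codes=18):
    """centroids: list of (x, y) hand positions, one per frame."""
    codes = []
    for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
        theta = math.atan2(y1 - y0, x1 - x0)      # radians in (-pi, pi]
        theta = theta % (2 * math.pi)             # map to [0, 2*pi)
        # Note: with image coordinates, y grows downward, so the angular
        # sense is mirrored compared with the usual math convention.
        codes.append(int(theta / (2 * math.pi / num_codes)) % num_codes)
    return codes

# Roughly rightward motion yields codewords near the 0/17 boundary.
print(orientation_codewords([(10, 50), (20, 51), (31, 49), (42, 50)]))
```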

Trigo [21] used geometric shape descriptors for gesture recognition. A webcam was used to collect the database images, which were segmented manually. Several experiments were performed, each containing one or more of the three defined feature groups: the invariant-moments group, with seven moments; the K-curvature group, with the number of fingers, the angle between two fingers, and the distance-radius relation as features; and the geometric-shape-descriptors group, with aspect ratio, circularity, spreadness, roundness, and solidity as features. A multi-layer perceptron (MLP) was used for classification. The geometric-shape-descriptors group achieved the best classification performance. Freeman [19] presented a method for gesture recognition based on a local orientation histogram calculated for each image. The system consists of a training phase and a running phase. In the training phase, several histograms are stored in the computer as feature vectors; in the running phase, the feature vector of the input gesture is extracted and compared with all the stored feature vectors, and the Euclidean distance metric is used to recognize the gesture.
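In the spirit of [19], the following sketch builds a gradient-orientation histogram for an image and labels an input with the nearest stored template under the Euclidean distance. The bin count, the Sobel-based gradients, and the function names are assumptions for illustration, not the exact procedure of [19]; OpenCV and NumPy are assumed.

```python
# A minimal sketch of orientation-histogram matching: gradient orientations
# are accumulated (magnitude-weighted) into a normalized histogram, and the
# input is assigned the label of the nearest stored template histogram.
import cv2
import numpy as np

def orientation_histogram(gray, bins=36):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    ang = np.arctan2(gy, gx) + np.pi          # shift to [0, 2*pi]
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist   # normalize for robustness

def classify(gray, templates):
    """templates: dict mapping gesture label -> stored orientation histogram."""
    h = orientation_histogram(gray)
    return min(templates, key=lambda label: np.linalg.norm(h - templates[label]))
```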

7. SUMMARY AND CONCLUSIONS

Constructing an efficient hand gesture recognition system is an important aspect of easy interaction between human and machine. In this work we provided a comparative study of various gesture recognition systems with emphasis on the segmentation and feature detection and extraction phases, which are essential for gesture modeling and analysis. Figure 7 shows the recognition rates of the different recognition systems in the order of their appearance in the paper. Table 1 summarizes the segmentation and feature vector representation phases of example hand gesture recognition systems. Table 2 lists the three main steps of a hand gesture recognition system (segmentation, feature representation, and recognition) and the techniques used for each phase. Finally, Table 3 shows the advantages and drawbacks of some hand gesture recognition systems.

Figure 7. Recognition rates of gesture recognition systems.

Table 1. Examples of segmentation and feature vector representation of various systems.

Method | Segmentation | Features Vector Representation
[2] | A: Input image. B: Segmented image. | A: Noiseless normalized hand. B: Image partitioned into 12 blocks as the feature vector.
[5] | A: Boundary representation of the hand. B: Grid division. C: Normalized grid. | Feature vector of histograms obtained by dividing the image into N regions in a radial form.
[12] | A: Input image. B: Segmented image. | A: RC angle. B: TC angle. C: Distance from the palm center.
[19] | - | The input gesture represented by a histogram.
[23] | A: Input image. B: Segmented image. | A: Edged image. B: Features brightness division.
[23] | A: Input image. B: Segmented image. | Feature vector formed by five distances from the palm to the fingers and four angles between those distances.

Table 2. Summary of segmentation, features vector representation, and recognition of hand gesture recognition systems.

Method | Segmentation | Features Vector Representation | Classifier
[2] | HSV threshold | 13 parameters as a feature vector; the first parameter is the aspect ratio of the hand bounding box and the remaining 12 are the mean brightness values of the image grid cells | Fuzzy C-Means (FCM) algorithm
[5] | Thresholding | Three feature vectors: boundary chord-size FFT, boundary chord-size, and boundary chord-size histogram | MLP neural network / Dynamic Programming (DP) matching
[12] | YCbCr color space | Three features from the hand shape: RC angle, TC angle, and distance from the palm center | Gaussian distribution
[13] | Threshold | 13 data items for postures / 16 data items for gestures | Back-propagation network / Elman recurrent network
[16] | GMM and YCbCr space | Orientation quantization | HMM
[17] | Thresholding | Template for postures / hand location for gestures | Structural analysis for postures / HMM for gestures
[18] | Thresholding | Posture image divided into blocks of different odd sizes ((1x1), (3x3), ..., (23x23)) | Euclidean distance metric
[19] | Thresholding | Hand gesture represented as an orientation histogram | Euclidean distance metric
[21] | Segmented manually | Three different feature groups: invariant moments, K-curvature, and geometric shape descriptors | Multilayer perceptron artificial neural network
[23] | HSV and thresholding | Nine numerical features: five distances from the palm to the fingers and four angles between these distances | Learning Vector Quantization (LVQ)
Table 3. Advantages and disadvantages of hand gesture recognition systems.

Method | Advantages | Disadvantages
[2] | (1) Fast and sufficiently reliable for a recognition system. (2) Good performance with a complex background. | (1) Irrelevant objects might overlap with the hand. (2) Wrong object extraction appears if the object is larger than the hand. (3) Recognition performance decreases when the distance between the user and the camera is greater than 1.5 meters.
[5] | (1) The radial form division and the boundary histogram of each extracted region overcome the chain shifting problem and the rotation variance problem. | (1) The proposed method is susceptible to errors, especially for shapes such as squares and circles.
[12] | (1) The exact shape of the hand is obtained, leading to good feature extraction. (2) Fast and powerful results from the proposed algorithm. | (1) System limitations restrict the applications: gestures are made with the right hand only, the arm must be vertical, the palm faces the camera, and the background is plain and uniform.
[13] | (1) Simple and effective; can successfully recognize words and the finger alphabet. (2) Automatic sampling and augmented filtering of the data improved system performance. | (1) Requires a long learning time: several hours for 42 characters and four days for ten words. (2) Large difference in recognition rate between registered people (98%) and unregistered people (77%).
[16] | (1) Recognized both isolated and meaningful gestures for Arabic numbers. | (1) Recognition limited to numbers only.
[17] | (1) The system successfully recognized static and dynamic gestures. (2) Could be applied to mobile robot control. | (1) The system does not reflect the dynamic gesture characteristics.
[18] | (1) The hand saturation algorithm kept the hand object as a complete area. | (1) The edge method is less effective since some important features are lost.
[19] | (1) Simple, fast, and easy to implement. (2) Can be applied in real systems and games. | (1) Similar gestures may have different orientation histograms. (2) Different gestures may have similar orientation histograms. (3) Any object that dominates the image will be represented in the orientation histogram.
[21] | (1) Three different groups of features are examined to determine the best-performing group. | (1) Variation in scale, translation, and rotation in the database samples led to misclassification.
[23] | (1) Uses an inexpensive wool glove. | (1) The feature vector with the highest score is evaluated first by the classifier, and then the system selects that feature hypothesis.

REFERENCES
[1] G. R. S. Murthy & R. S. Jadon. (2009) A Review of Vision Based Hand Gestures Recognition,
International Journal of Information Technology and Knowledge Management, Vol. 2, No. 2, pp 405-
410.
[2] Xingyan Li. (2003) Gesture Recognition Based on Fuzzy C-Means Clustering Algorithm,
Department of Computer Science. The University of Tennessee Knoxville.
[3] S. Mitra, & T. Acharya. (2007) Gesture Recognition: A Survey IEEE Transactions on systems, Man
and Cybernetics, Part C: Applications and reviews, Vol. 37, No. 3, pp 311- 324, doi:
10.1109/TSMCC.2007.893280.
[4] Laura Dipietro, Angelo M. Sabatini & Paolo Dario (2008) Survey of Glove-Based Systems and their
applications, IEEE Transactions on systems, Man and Cybernetics, Vol. 38, No. 4, pp 461-482.
[5] Simei G. Wysoski, Marcus V. Lamar, Susumu Kuroyanagi, & Akira Iwata (2002) A Rotation
Invariant Approach On Static-Gesture Recognition Using Boundary Histograms And Neural
Networks, IEEE 9th International Conference on Neural Information Processing, Singapura.
[6] Rafiqul Zaman Khan & Noor Adnan Ibraheem, (2012) Survey on Gesture Recognition for Hand
Image Postures, International Journal of Computer and Information Science, Vol. 5, No. 3, pp 110-
121. doi:10.5539/cis.v5n3p110
[7] Joseph J. LaViola Jr. (1999) A Survey of Hand Posture and Gesture Recognition Techniques and
Technology, Master Thesis, NSF Science and Technology Center for Computer Graphics and
Scientific Visualization, USA.
[8] Thomas B. Moeslund & Erik Granum (2001) A Survey of Computer Vision-Based Human Motion
Capture, Elsevier, Computer Vision and Image Understanding, Vol. 81, pp 231-268.
[9] Ali Erol, George Bebis, Mircea Nicolescu, Richard D. Boyle & Xander Twombly, (2007) Vision-
based hand pose estimation: A review, Elsevier Computer Vision and Image Understanding, Vol.
108, pp 52-73.
[10] P. Garg, N. Aggarwal & S. Sofat, (2009). Vision Based Hand Gesture Recognition, World
Academy of Science, Engineering and Technology, Vol. 49, pp 972-977.
[11] Pavlovic, V. I., Sharma, R. & Huang, T. S. (1997) Visual Interpretation of Hand Gestures for
Human- Computer Interaction: A Review. IEEE Transactions on Pattern Analysis And Machine
Intelligence, Vol. 19, No. 7, pp 677- 695, doi; 10.1109/34.598226
[12] E. Stergiopoulou & N. Papamarkos. (2009) Hand gesture recognition using a neural network shape
fitting technique, Elsevier Engineering Applications of Artificial Intelligence, Vol. 22, No. 8, pp
1141-1158, doi: 10.1016/j.engappai.2009.03.008
[13] Kouichi M. & Hitomi T. (1991) Gesture Recognition using Recurrent Neural Networks, ACM
SIGCHI conference on Human factors in computing systems: Reaching through technology (CHI
'91), pp 237-242, doi: 10.1145/108844.108900
[14] Manar Maraqa & Raed Abu-Zaiter, (2008) Recognition of Arabic Sign Language (ArSL) Using
Recurrent Neural Networks, IEEE First International Conference on the Applications of Digital
Information and Web Technologies, (ICADIWT 2008), pp 478-48. Doi:
10.1109/ICADIWT.2008.4664396
[15] Tin Hninn H. Maung. (2009) Real-Time Hand Tracking and Gesture Recognition System Using
Neural Networks, World Academy of Science, Engineering and Technology , Vol. 50, pp 466- 470.
[16] Mahmoud E., Ayoub A., Jorg A., & Bernd M., (2008) Hidden Markov Model-Based Isolated and
Meaningful Hand Gesture Recognition, World Academy of Science, Engineering and Technology,
Vol. 41, pp 393-400.
[17] Min B., Yoon, H., Soh, J., Yang, Y., & Ejima, T. (1997) Hand Gesture Recognition Using Hidden
Markov Models. IEEE International Conference on Computational Cybernetics and Simulation, Vol. 5,
pp 4232-4235. Doi: 10.1109/ICSMC.1997.637364
Computer Science & Information Technology ( CS & IT ) 213
[18] M. M. Hasan & P. K. Mishra, (2011) Performance Evaluation of Modified Segmentation on Multi
Block For Gesture Recognition System, International Journal of Signal Processing, Image
Processing and Pattern Recognition, Vol. 4, No. 4, pp 17-28.
[19] W. T. Freeman & Michal R., (1995) Orientation Histograms for Hand Gesture Recognition, IEEE
International Workshop on Automatic Face and Gesture Recognition, Zurich, pp.
[20] M. M. Hasan , & P. K. Mishra, (2012) Improving Morphology Operation for 2D Hole Filling
Algorithm, International Journal of Image Processing (IJIP), Vol. 6, No. 1, pp. 635-646.
[21] T. R. Trigo & S. R. M. Pellegrino, (2010) An Analysis of Features for Hand-Gesture Classification,
17th International Conference on Systems, Signals and Image Processing (IWSSIP 2010), pp 412-
415.
[22] M. M. Hasan & P. K. Mishra, (2011) HSV Brightness Factor Matching for Gesture Recognition
System, International Journal of Image Processing (IJIP), Vol. 4, No. 5, pp 456-467.
[23] Luigi Lamberti & Francesco Camastra, (2011) Real-Time Hand Gesture Recognition Using a Color
Glove, Springer 16th international conference on Image analysis and processing: Part I (ICIAP'11),
pp 365-373.
[24] Mokhtar M. Hasan, and Pramod K. Mishra, (2012) Hand Gesture Modeling and Recognition using
Geometric Features: A Review, Canadian Journal on Image Processing and Computer Vision Vol. 3,
No.1.
[25] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, (1995) Active Shape Models - Their Training
and Application, Computer Vision And Image Understanding, Vol. 61, No. 1, pp 38-59.
[26] Ying Wu and Thomas S. Huang, (2001) Hand Modeling, analysis and Recognition, IEEE Signal
Processing Magazine, Vol. 18, No. 3, pp 51-60. doi: http://dx.doi.org/10.1109/79.924889
[27] Mohamed B. Kaaniche, (2009) Human Gesture Recognition PowerPoint slides.

Authors

Dr. Rafiqul Zaman Khan obtained his B.Sc. degree from M.J.P. Rohilkhand
University, Bareilly, M.Sc and M.C.A from Aligarh Muslim University Aligarh,
and his Ph.D. from Jamia Hamdard University, New Delhi. He has 18 years of rich
teaching experience of various reputed National (Pune University, Jamia Hamdard
University) & International Universities (King Fahd University of Petroleum &
Minerals, Dhahran, K.S.A.; Ittihad University, U.A.E). Presently he is working as an
Associate Professor in Department of Computer Science, Aligarh Muslim
University Aligarh (U.P), India. He worked as a Head of the Department of
Computer Science at Poona College, University of Pune. He also worked as a
Chairman of the Department of Computer Science, at Aligarh Muslim University, Aligarh, India. He is also
working as a PhD guide of several students. He has published more than 25 research papers in
International/National Journals. He is the member of Editorial Board of number of International Journals.

Noor Adnan Ibraheem received her B.Sc. and M.Sc. in computer science from
BGU in 2001 and 2005, respectively. She is currently a Ph.D. student at Aligarh
Muslim University, Aligarh, Uttar Pradesh, India. Her research interests include
computer vision, image processing, and artificial intelligence.
