
IMAGE PROCESSING FOR HAND GESTURE RECOGNITION

CHAPTER 1
INTRODUCTION
Digital image processing is the use of computer algorithms to perform image processing
on digital images. As a subcategory or field of digital signal processing, digital image
processing has many advantages over analog image processing. It allows a much wider
range of algorithms to be applied to the input data and can avoid problems such as the
build-up of noise and signal distortion during processing. Since images are defined over
two dimensions, digital image processing may be modeled in the form of multidimensional
systems. Applications include satellite imagery, wire-photo standards conversion, medical
imaging, videophone, character imaging, and photograph enhancement. The purpose of
early image processing was to improve the quality of the image: it was aimed at improving
the visual effect for human viewers. In image processing, the input is a low-quality image,
and the output is an image with improved quality. Common image processing tasks
include image enhancement, restoration, encoding, and compression.

1.1 Gesture Recognition


Gesture recognition is a topic in computer science and language technology with
the goal of interpreting human gestures via mathematical algorithms. Gestures can
originate from any bodily motion or state but commonly originate from the face or hand.
Current focuses in the field include emotion recognition from face and hand gesture
recognition. Users can use simple gestures to control or interact with devices without
physically touching them. Many approaches have been made using cameras and computer
vision algorithms to interpret sign language. However, the identification and recognition
of posture, gait, proxemics, and human behaviours is also the subject of gesture
recognition techniques. Gesture recognition can be seen as a way for computers to begin
to understand human body language, thus building a richer bridge between machines and
humans than primitive text user interfaces or even GUIs (graphical user interfaces), which
still limit the majority of input to keyboard and mouse.

Dept. of ECE, ASIET Page 1



Gesture recognition enables humans to interface with the machine and interact naturally
without any mechanical devices[1]. Using the concept of gesture recognition, it is possible
to point a finger at the computer screen so that the cursor will move accordingly. This
could potentially make conventional input devices such as mouse, keyboards and even
touch-screens redundant. Gesture recognition can be conducted with techniques from
computer vision and image processing, allowing users to interface with computers using
gestures of the human body, typically hand movements. In gesture recognition technology,
a camera reads the movements of the human body and communicates the data to a
computer that uses the gestures as input to control devices or applications. For example, a
person clapping his hands together in front of a camera can produce the sound of cymbals
being crashed together when the gesture is fed through a computer. One way gesture
recognition is being used is to help the physically impaired to interact with computers,
such as interpreting sign language. The technology also has the potential to change the
way users interact with computers by eliminating input devices such as joysticks, mouse
and keyboards and allowing the unencumbered body to give signals to the computer
through gestures such as finger pointing. Unlike haptic interfaces, gesture recognition
does not require the user to wear any special equipment or attach any devices to the body.
The gestures of the body are read by a camera instead of sensors attached to a device such
as a data glove. In addition to hand and body movement, gesture recognition technology
can also be used to read facial and speech expressions, and eye movements[2].

1.2 Hand Gesture Recognition Using Image Processing

Hand gesture recognition (HGR) provides an intelligent and natural way of human-
computer interaction (HCI). Its applications range from medical rehabilitation to
consumer electronics control (e.g. mobile phones). In order to distinguish hand gestures,
various kinds of sensing techniques are utilized to obtain signals for pattern recognition.
The HGR system can be divided into three parts according to its processing steps: hand
detection, finger identification, and gesture recognition. The system has two major
advantages. First, it is highly modularized, and each of the three steps is encapsulated
from the others; second, the edge/contour detection of the hand as well as gesture
recognition is an add-on layer, which can be easily transplanted to other applications.

Acceleration-based (ACC) and electromyogram-based (EMG) techniques are two research
branches in the field of hand gesture pattern recognition. Acceleration-based gesture
control is usually studied as a supplementary interaction modality. It is well suited to
distinguishing noticeable, larger-scale gestures with different hand trajectories of forearm
movements. With ACC-based techniques some subtle finger or hand movements may be
ignored, whereas electromyogram-based gesture recognition techniques use multi-channel
EMG signals which contain rich information about hand gestures of various size scales.
Due to some problems inherent in EMG measurement, including its separability and
reproducibility, the size of the discriminable hand gesture set is still limited to 4-8 classes.
In order to realize a natural and robust gesture-based HCI system, the selection of input
hand gestures that are well discriminable from each other is of crucial importance.
Considering the complementary features of ACC and EMG measurements, we believe that
their combination will increase the number of discriminable hand, wrist and forearm
gestures and the accuracy of the recognition system.

In this work, we detect the hand and its gestures with a simple web camera and apply
image processing techniques so that those gestures can be used, for example, to play a
game on a console. The motions are detected through a web camera, and the captured
images are then passed on for image processing. The techniques used are hand gesture
detection, edge detection, thresholding, and contour detection. Using OpenCV, which
provides a library of functions for different image processing techniques, these input
images can be processed and the corresponding key strokes generated.
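The thresholding step of this pipeline can be sketched in a few lines of pure Python. The `threshold` helper and the frame values below are our own illustration; in an actual OpenCV pipeline this role is played by library routines such as cv2.threshold, with contour detection following on the binary result.

```python
def threshold(frame, t):
    """Binarize a grayscale frame: pixels brighter than t become 1 (hand),
    everything else 0 (background)."""
    return [[1 if px > t else 0 for px in row] for row in frame]

# A tiny hypothetical 3x4 grayscale frame (0 = dark .. 255 = bright).
frame = [[12,  30, 210, 220],
         [25, 200, 230,  40],
         [10,  15,  35,  20]]

binary = threshold(frame, 127)
# Only the bright pixels survive as 1s:
# [[0, 0, 1, 1],
#  [0, 1, 1, 0],
#  [0, 0, 0, 0]]
```

The binary image produced this way is what the contour tracing algorithms of the later chapters operate on.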

1.3 Devices Used For Gesture Recognition

The ability to track a person's movements and determine what gestures they may
be performing can be achieved through various tools. Although there is a large amount of
research done in image/video based gesture recognition, there is some variation within
the tools and environments used between implementations.

1.3.1 Depth-aware cameras:- Using specialized cameras such as time-of-flight
cameras, one can generate a depth map of what is being seen through the camera
at a short range, and use this data to approximate a 3D representation.
1.3.2 Controller-based gestures:- These controllers act as an extension of the body so
that when gestures are performed, some of their motion can be conveniently
captured by software. Mouse gestures are one such example, where the motion of
the mouse is correlated to a symbol being drawn by a person's hand, as is the Wii
Remote, which can study changes in acceleration over time to represent gestures.
1.3.3 Single camera:- A normal camera can be used for gesture recognition where the
resources/environment would not be convenient for other forms of image-based
recognition. Although not necessarily as effective as stereo or depth-aware cameras,
using a single camera allows a greater possibility of accessibility to a wider audience.
1.3.4 Stereo cameras:- Using two cameras whose relations to one another are known,
a 3D representation can be approximated from the output of the cameras. To get the
cameras' relations, one can use a positioning reference such as a lexian-stripe or infrared
emitters. In combination with direct motion measurement, gestures can be detected
directly. These can be effective for detection of hand gestures due to their
short-range capabilities[2].
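As a rough sketch of why two calibrated cameras yield depth, the standard pinhole stereo model relates depth to disparity as Z = f·B/d. This model is textbook material rather than something from this report, and the rig numbers below are hypothetical.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: depth Z = f * B / d, where f is the focal
    length in pixels, B the camera baseline in metres, and d the disparity
    (pixel shift of the same scene point between the two images)."""
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: f = 700 px, baseline = 10 cm, observed disparity = 35 px.
z = depth_from_disparity(700.0, 0.10, 35.0)
# z == 2.0 metres; larger disparities mean the hand is closer to the cameras.
```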


CHAPTER 2
CONTOUR TRACING ALGORITHMS
Two of these algorithms, namely the square tracing algorithm and Moore-neighbor
tracing, are easy to implement and are therefore used frequently to trace the contour of a
given pattern. Unfortunately, both of these algorithms have a number of weaknesses
which cause them to fail to trace the contour of a large class of patterns, due to their
special kind of connectivity[3].

The following algorithms will ignore any "holes" present in the pattern. For
example, if we're given a pattern like that of Figure 2.1 below, the contour traced by the
algorithms will be similar to the one shown in Figure 2.2, where the blue pixels represent
the contour. This could be acceptable in some applications, but in other applications, like
character recognition, we would want to trace the interior of the pattern as well in order
to capture any holes which identify a certain character. Figure 2.3 below shows the
contour traced by Pavlidis' algorithm.
As a result, a "hole searching" algorithm should be used to first extract the holes in a given
pattern and then apply a contour tracing algorithm on each hole in order to extract the
complete contour.

Figure 2.1 Pattern of letter A


Figure 2.2 Contour traced

Figure 2.3 Contour traced by Pavlidis algorithm

2.1 Square Tracing Algorithm

The idea behind the square tracing algorithm is very simple; this could be
attributed to the fact that the algorithm was one of the first attempts to extract the contour
of a binary pattern. To understand how it works, you need a bit of imagination.
Given a digital pattern i.e. a group of black pixels, on a background of white pixels i.e. a
grid; locate a black pixel and declare it as your "start" pixel. Locating a "start" pixel can be
done in a number of ways; we'll start at the bottom left corner of the grid, scan each
column of pixels from the bottom going upwards -starting from the leftmost column and
proceeding to the right- until we encounter a black pixel. We'll declare that pixel as our
"start" pixel. Now, imagine that you are a bug (ladybird) standing on the start pixel as
in Figure 2.4 below. In order to extract the contour of the pattern, every time you find
yourself standing on a black pixel, turn left, and every time you find yourself standing on a
white pixel, turn right, until you encounter the start pixel again. The black pixels you
walked over will be the contour of the pattern[6].


Figure 2.4 Square tracing pattern

2.1.1 Algorithm

Input: A square tessellation, T, containing a connected component P of black cells.

Output: A sequence B (b1, b2, ..., bk) of boundary pixels, i.e. the contour.

Begin

• Set B to be empty.
• From bottom to top and left to right, scan the cells of T until a black pixel, s, of P is
found.
• Insert s in B.
• Set the current pixel, p, to be the starting pixel, s.
• Turn left, i.e. visit the left adjacent pixel of p.
• Update p, i.e. set it to be the current pixel.
• While p is not equal to s do

If the current pixel p is black:

o insert p in B and turn left (visit the left adjacent pixel of p);

o update p, i.e. set it to be the current pixel.

else:

o turn right (visit the right adjacent pixel of p);

o update p, i.e. set it to be the current pixel.

end While

End
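The pseudocode above can be sketched in Python as follows. The grid is a list of lists with 1 for black and 0 for white, out-of-bounds pixels are treated as white, and the simple "stop on reaching the start pixel" rule is used rather than Jacob's criterion; the function name and representation are our own.

```python
def square_trace(grid):
    """Square tracing of the outer contour of a blob of 1s in a 2-D grid."""
    rows, cols = len(grid), len(grid[0])

    # Locate the start pixel: scan each column bottom-to-top, left to right.
    start = None
    for x in range(cols):
        for y in range(rows - 1, -1, -1):
            if grid[y][x] == 1:
                start = (y, x)
                break
        if start:
            break
    if start is None:
        return []

    # Directions as (dy, dx); the bug initially faces "up" because the
    # start pixel was entered from below during the scan.
    up, right, down, left = (-1, 0), (0, 1), (1, 0), (0, -1)
    turn_left = {up: left, left: down, down: right, right: up}
    turn_right = {up: right, right: down, down: left, left: up}

    contour = [start]
    direction = turn_left[up]          # standing on a black pixel: turn left
    y, x = start[0] + direction[0], start[1] + direction[1]
    while (y, x) != start:
        if 0 <= y < rows and 0 <= x < cols and grid[y][x] == 1:
            contour.append((y, x))     # black: record it and turn left
            direction = turn_left[direction]
        else:
            direction = turn_right[direction]   # white: turn right
        y, x = y + direction[0], x + direction[1]
    return contour

blob = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
# The contour of this 2x2 blob is its four pixels, starting from the
# bottom-left black pixel found by the scan:
# [(2, 1), (1, 1), (1, 2), (2, 2)]
```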

As a result, when the square tracing algorithm enters the start boundary edge for a
second time, it will do so in the same direction it did when it first entered it. The reason is
that, since there are two ways to go through a boundary edge, since the algorithm
alternates between "in" and "out" of consecutive boundary edges, and since there is an
even number of boundary edges, the algorithm will go through the start boundary edge a
second time in the same manner it did the first time around. Given a 4-connected pattern
and background, the square tracing algorithm will trace the whole boundary, i.e. contour,
of the pattern and will stop after tracing the boundary once, i.e. it will not trace it again,
since when it reaches the start boundary edge for a second time, it will enter it in the same
way it did the first time around. Therefore, the square tracing algorithm, using Jacob's
stopping criterion, will correctly extract the contour of any pattern provided both the
pattern and its background are 4-connected.
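Jacob's stopping criterion as described above changes only the stopping test of square tracing: record the direction in which the start pixel was first entered, and stop only when the start pixel is re-entered in that same direction. A self-contained Python sketch (names and the 1/0 grid representation are our own):

```python
def square_trace_jacob(grid):
    """Square tracing with Jacob's stopping criterion: stop only when the
    start pixel is re-entered in the same direction it was first entered."""
    rows, cols = len(grid), len(grid[0])

    # Start pixel: scan each column bottom-to-top, left to right.
    start = None
    for x in range(cols):
        for y in range(rows - 1, -1, -1):
            if grid[y][x] == 1:
                start = (y, x)
                break
        if start:
            break
    if start is None:
        return []

    up, right, down, left = (-1, 0), (0, 1), (1, 0), (0, -1)
    turn_left = {up: left, left: down, down: right, right: up}
    turn_right = {up: right, right: down, down: left, left: up}

    contour = [start]
    entry = up                     # the start pixel was entered from below
    direction = turn_left[up]      # standing on black: turn left, then move
    y, x = start[0] + direction[0], start[1] + direction[1]
    # Jacob's criterion: position AND entry direction must both match.
    while not ((y, x) == start and direction == entry):
        if 0 <= y < rows and 0 <= x < cols and grid[y][x] == 1:
            contour.append((y, x))
            direction = turn_left[direction]
        else:
            direction = turn_right[direction]
        y, x = y + direction[0], x + direction[1]
    return contour
```

For 4-connected patterns on a 4-connected background this returns the same single pass over the boundary as the plain stopping rule; the difference shows up in patterns where the start pixel is revisited from another direction mid-trace.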

2.2 Moore Neighbour Tracing

The Moore neighborhood of a pixel, P, is the set of 8 pixels which share a vertex or
edge with that pixel. These pixels are namely pixels P1, P2, P3, P4, P5, P6, P7 and
P8 shown in Figure 2.5 below. The Moore neighborhood (also known as the 8-
neighbors or indirect neighbors) is an important concept that frequently arises in the
literature.
 

Figure 2.5 The Moore neighborhood of pixel P


Given a digital pattern i.e. a group of black pixels, on a background of white pixels
i.e. a grid; locate a black pixel and declare it as your "start" pixel. Locating a "start" pixel
can be done in a number of ways; we'll start at the bottom left corner of the grid, scan each
column of pixels from the bottom going upwards -starting from the leftmost column and
proceeding to the right- until we encounter a black pixel. We'll declare that pixel as our
"start" pixel.

Now, imagine that you are a bug (ladybird) standing on the start pixel as in Figure
2.6 below. Without loss of generality, we will extract the contour by going around the
pattern in a clockwise direction. It doesn't matter which direction you choose as long as
you stick with your choice throughout the algorithm. The general idea is: every time you
hit a black pixel, P, backtrack i.e. go back to the white pixel you were previously standing
on, then, go around pixel P in a clockwise direction, visiting each pixel in its Moore
neighborhood, until you hit a black pixel. The algorithm terminates when the start pixel is
visited for a second time. The black pixels you walked over will be the contour of the
pattern[6].

Figure 2.6 Moore neighbor tracing

2.2.1 Algorithm

Input: A square tessellation, T, containing a connected component P of black cells.

Output: A sequence B (b1, b2, ..., bk) of boundary pixels, i.e. the contour.

Begin

• Set B to be empty.
• From bottom to top and left to right, scan the cells of T until a black pixel, s, of P is
found.
• Insert s in B.
• Set the current boundary point p to s, i.e. p = s.
• Backtrack, i.e. move to the pixel from which s was entered.
• Set c to be the next clockwise pixel in M(p).
• While c is not equal to s do

If c is black:

o insert c in B
o set p = c
o backtrack (move the current pixel c to the pixel from which p was entered)

else:

o advance the current pixel c to the next clockwise pixel in M(p)

end While

End
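The steps above can be sketched in Python as follows. The grid is a list of lists with 1 for black and 0 for white, out-of-bounds pixels count as white, and the simple "stop on reaching the start pixel" rule is used (Jacob's criterion would additionally compare entry directions); the function name and representation are our own.

```python
def moore_trace(grid):
    """Moore-Neighbor tracing of the outer contour of a blob of 1s."""
    rows, cols = len(grid), len(grid[0])

    def is_black(p):
        y, x = p
        return 0 <= y < rows and 0 <= x < cols and grid[y][x] == 1

    # Locate the start pixel: scan each column bottom-to-top, left to right.
    start = None
    for x in range(cols):
        for y in range(rows - 1, -1, -1):
            if grid[y][x] == 1:
                start = (y, x)
                break
        if start:
            break
    if start is None:
        return []

    # Clockwise cycle through the Moore neighborhood M(p), beginning at the
    # pixel directly below p (the start pixel is entered from below).
    cw = [(1, 0), (1, -1), (0, -1), (-1, -1),
          (-1, 0), (-1, 1), (0, 1), (1, 1)]

    contour = [start]
    p = start
    backtrack = (start[0] + 1, start[1])
    while True:
        # Walk clockwise around p, starting just after the backtrack pixel.
        i = cw.index((backtrack[0] - p[0], backtrack[1] - p[1]))
        found = None
        for k in range(1, 9):
            cand = (p[0] + cw[(i + k) % 8][0], p[1] + cw[(i + k) % 8][1])
            if is_black(cand):
                found = cand
                break
            backtrack = cand        # remember the last white pixel visited
        if found is None or found == start:
            return contour          # isolated pixel, or back at the start
        contour.append(found)
        p = found

blob = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
# moore_trace(blob) walks the four boundary pixels clockwise:
# [(2, 1), (1, 1), (1, 2), (2, 2)]
```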

Using Jacob's stopping criterion will greatly improve the performance of Moore-
Neighbor tracing, making it the best algorithm for extracting the contour of any pattern no
matter what its connectivity. The reason for this is largely that the algorithm checks the
whole Moore neighborhood of a boundary pixel in order to find the next boundary pixel.
Unlike the square tracing algorithm, which makes either left or right turns and misses
"diagonal" pixels, Moore-Neighbor tracing will always be able to extract the outer
boundary of any connected component. The reason is that, for any 8-connected pattern,
the next boundary pixel lies within the Moore neighborhood of the current boundary
pixel. Since Moore-Neighbor tracing proceeds to check every pixel in the Moore
neighborhood of the current boundary pixel, it is bound to detect the next boundary pixel.
When Moore-Neighbor tracing visits the start pixel for a second time in the same way it
did the first time around, this means that it has traced the complete outer contour of the
pattern; if not terminated, it will trace the same contour again. This result has yet to be
proved[6].


CHAPTER 3
PAVLIDIS ALGORITHM
This algorithm is one of the more recent contour tracing algorithms and was
proposed by Theo Pavlidis, who published it in his book Algorithms for Graphics and
Image Processing in 1982. It is not as simple as the square tracing algorithm or Moore-
neighbor tracing, yet it is not complicated, a property shared by most contour tracing
algorithms. We will explain this algorithm using an approach different from the one
presented in the book. This approach is easier to comprehend and gives insight into the
general idea behind the algorithm. Without loss of generality, we have chosen to trace the
contour in a clockwise direction, in order to be consistent with the other contour tracing
algorithms discussed here. Pavlidis, on the other hand, chooses to do so in a counter-
clockwise direction. This shouldn't make any difference to the performance of the
algorithm; the only effect is on the relative direction of the movements made while
tracing the contour. Given a digital pattern, i.e. a group of black pixels, on a background of
white pixels, i.e. a grid, locate a black pixel and declare it your "start" pixel. Locating a
"start" pixel can be done in a number of ways; one is to start at the bottom left corner of
the grid and scan each column of pixels from the bottom going upwards, starting from the
leftmost column and proceeding to the right, until a black pixel is encountered. Declare
that pixel the "start" pixel. We will not necessarily follow the above method in locating a
start pixel. Instead, we will choose a start pixel satisfying the following restriction
imposed by Pavlidis' algorithm: you can actually choose any black boundary pixel to be
your start pixel, as long as your left adjacent pixel is NOT black when you are initially
standing on it. In other words, you should enter the start pixel in a direction which
ensures that its left adjacent pixel is white, where "left" is taken with respect to the
direction in which you enter the start pixel[7].
Now, imagine that you are a bug (ladybird) standing on the start pixel as in Figure
3.1. Throughout the algorithm, the pixels which interest you at any given time are the
three pixels in front of you. We define P2 to be the pixel directly in front of you, P1 the
pixel adjacent to P2 on its left, and P3 the pixel adjacent to P2 on its right.


Figure 3.1 Pattern for Pavlidis algorithm

Like in the Square Tracing algorithm, the most important thing in Pavlidis'
algorithm is your "sense of direction". The left and right turns you make are with respect
to your current positioning, which depends on the way you entered the pixel you are
standing on. Therefore, it's important to keep track of your current orientation in order to
make the right moves. But no matter what position you are standing in, pixels P1, P2 and
P3 will be defined as above. With this information, we are ready to explain the algorithm.
Every time you are standing on the current boundary pixel (which is the start pixel at
first), do the following: first, check pixel P1. If P1 is black, then declare P1 to be your
current boundary pixel and move one step forward followed by one step to your current
left to land on P1. Figure 3.2 below demonstrates this case; the path you should follow in
order to land on P1 is drawn in blue[4].

Figure 3.2 Path towards P1

Only if P1 is white do you proceed to check P2. If P2 is black, then declare P2 to be your
current boundary pixel and move one step forward to land on P2. Figure 3.3 below
demonstrates this case; the path you should follow in order to land on P2 is drawn in blue.


Figure 3.3 Path towards P2

Only if both P1 and P2 are white do you proceed to check P3. If P3 is black, then
declare P3 to be your current boundary pixel and move one step to your right followed by
one step to your current left, as demonstrated in Figure 3.4 below.

Figure 3.4 Path towards P3

3.1 Algorithm

Input: A square tessellation, T, containing a connected component P of black cells.

Output: A sequence B (b1, b2, ..., bk) of boundary pixels, i.e. the contour.

Begin

• Set B to be empty.
• From bottom to top and left to right, scan the cells of T until a black start pixel, s, of P
is found.
• Insert s in B.
• Set the current pixel, p, to be the starting pixel, s.
• Repeat the following:

If pixel P1 is black:

o Insert P1 in B
o Update p = P1
o Move one step forward followed by one step to your current left

else if P2 is black:

o Insert P2 in B
o Update p = P2
o Move one step forward

else if P3 is black:

o Insert P3 in B
o Update p = P3
o Move one step to the right, update your position, and move one step to your
current left

else if you have already rotated through 90 degrees clockwise 3 times while standing
on the same pixel p:

o Terminate the program and declare p an isolated pixel

else:

o Rotate 90 degrees clockwise while standing on the current pixel p

• Until p = s (End Repeat)
End
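The procedure above can be sketched in Python as follows. The grid is a list of lists with 1 for black and 0 for white, and out-of-bounds pixels count as white. One interpretation detail is ours: after landing on P1 the heading turns left, while landing on P2 or P3 leaves the heading unchanged, which is a common reading of the "move forward / move left / move right" descriptions above.

```python
def pavlidis_trace(grid):
    """A sketch of Pavlidis' contour tracing (clockwise variant)."""
    rows, cols = len(grid), len(grid[0])

    def is_black(y, x):
        return 0 <= y < rows and 0 <= x < cols and grid[y][x] == 1

    # Start pixel: scan columns bottom-to-top, left to right. Entering it
    # from below (facing up) guarantees its left adjacent pixel is white,
    # which satisfies the restriction on the choice of start pixel.
    start = None
    for x in range(cols):
        for y in range(rows - 1, -1, -1):
            if is_black(y, x):
                start = (y, x)
                break
        if start:
            break
    if start is None:
        return []

    dirs = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # up, right, down, left
    d = 0                                      # initially facing up
    contour = [start]
    p = start
    rotations = 0
    while True:
        fy, fx = dirs[d]
        ly, lx = dirs[(d - 1) % 4]             # "left" relative to heading
        p2 = (p[0] + fy, p[1] + fx)            # pixel directly in front
        p1 = (p2[0] + ly, p2[1] + lx)          # front-left
        p3 = (p2[0] - ly, p2[1] - lx)          # front-right
        if is_black(*p1):
            p, d = p1, (d - 1) % 4             # land on P1 and turn left
        elif is_black(*p2):
            p = p2                             # land on P2, heading unchanged
        elif is_black(*p3):
            p = p3                             # land on P3, heading unchanged
        else:
            if rotations == 3:                 # all four headings tried:
                return contour                 # p is an isolated pixel
            rotations += 1
            d = (d + 1) % 4                    # rotate 90 degrees clockwise
            continue
        rotations = 0
        if p == start:                         # back at the start: done
            return contour
        contour.append(p)
```

On the 2x2 blob used for the earlier algorithms this yields the same four boundary pixels; on a single black pixel it terminates via the isolated-pixel rule.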

3.2 Analysis

It's true that Pavlidis' algorithm is a bit more complex than, say, Moore-neighbor
tracing, which has no special cases to take care of, yet Pavlidis' algorithm fails to extract
the contour of a large family of patterns having a certain kind of connectivity. The
algorithm works very well on 4-connected patterns; its problem lies in tracing some
8-connected patterns that are not 4-connected, where it can miss a large portion of the
boundary. There are two simple ways of modifying the algorithm in order to improve its
performance dramatically.

• Change the stopping criterion

Instead of terminating the algorithm when it visits the start pixel for a second time,
make the algorithm terminate after visiting the start pixel a third or even a fourth
time. This will improve the general performance of the algorithm.

• Go to the source of the problem

There is an important restriction concerning the direction in which you enter the
start pixel. Basically, you have to enter the start pixel such that when you're standing
on it, the pixel adjacent to you from the left is white. The reason for imposing such a
restriction is that, since you always consider the 3 pixels in front of you in a certain
order, you'll tend to miss a boundary pixel lying directly to the left of the start pixel
in certain patterns. Not only is the left adjacent pixel of the start pixel at risk of
being missed, but so is the pixel directly below it. In addition, the pixel
corresponding to pixel R in Figure 3.5 below will be missed in some patterns.
Therefore, we suggest that a start pixel should be entered in a direction such that the
pixels corresponding to pixels L, W and R shown in Figure 3.5 below are white.


Figure 3.5 8-connected pattern

CHAPTER 4
APPLICATIONS OF HAND GESTURE RECOGNITION


Gesture recognition is useful for processing information from humans which is not
conveyed through speech or typing. There are also various types of gestures which can
be identified by computers.

• Sign language recognition. Just as speech recognition can transcribe speech to
text, certain types of gesture recognition software can transcribe the symbols
represented through sign language into text.
• Socially assistive robotics. By using proper sensors (accelerometers and
gyros) worn on the body of a patient and by reading the values from those
sensors, robots can assist in patient rehabilitation. The best example is stroke
rehabilitation.
• Directional indication through pointing. Pointing has a very specific purpose in
our society: to reference an object or location based on its position relative to
ourselves. The use of gesture recognition to determine where a person is pointing
is useful for identifying the context of statements or instructions. This application
is of particular interest in the field of robotics.
• Control through facial gestures. Controlling a computer through facial gestures
is a useful application of gesture recognition for users who may not physically be
able to use a mouse or keyboard. Eye tracking in particular may be of use for
controlling cursor motion or focusing on elements of a display.
• Alternative computer interfaces. Foregoing the traditional keyboard and mouse
setup to interact with a computer, strong gesture recognition could allow users to
accomplish frequent or common tasks using hand or face gestures to a camera.
• Immersive game technology. Gestures can be used to control interactions within
video games to try and make the game player's experience more interactive or
immersive.
• Virtual controllers. For systems where the act of finding or acquiring a physical
controller could require too much time, gestures can be used as an alternative
control mechanism. Controlling secondary devices in a car, or controlling a
television set, are examples of such usage.

• Affective computing. In affective computing, gesture recognition is used in the
process of identifying emotional expression through computer systems.
• Remote control. Through the use of gesture recognition, "remote control with the
wave of a hand" of various devices is possible. The signal must not only indicate
the desired response, but also which device is to be controlled[5].

CHAPTER 5
CHALLENGES


There are many challenges associated with the accuracy and usefulness of gesture
recognition software. For image-based gesture recognition there are limitations on the
equipment used and image noise. Images or video may not be under consistent lighting,
or in the same location. Items in the background or distinct features of the users may
make recognition more difficult.
The variety of implementations for image-based gesture recognition may also
cause issues for the viability of the technology for general usage. For example, an
algorithm calibrated for one camera may not work for a different camera. The amount of
background noise also causes tracking and recognition difficulties, especially when partial
and full occlusions occur. Furthermore, the distance from the camera, and the camera's
resolution and quality, also cause variations in recognition accuracy.
In order to capture human gestures by visual sensors, robust computer vision
methods are also required, for example for hand tracking and hand posture recognition or
for capturing movements of the head, facial expressions or gaze direction.

CHAPTER 6


CONCLUSION
Pavlidis' algorithm is one of the more recent contour tracing algorithms and was
proposed by Theo Pavlidis. It is not as simple as the square tracing algorithm or Moore-
neighbor tracing, yet it is not complicated, a property shared by most contour tracing
algorithms. It is a contour tracing algorithm that can be used for hand contouring, and it is
implemented in OpenCV. It is used mainly for boundary tracking and edge detection, and
its major advantage is that it is a pixel-based algorithm. It's true that this algorithm is a bit
more complex than, say, Moore-neighbor tracing, which has no special cases to take care
of, yet it fails to extract the contour of a large family of patterns having a certain kind of
connectivity. The algorithm works very well on 4-connected patterns; its problem lies in
tracing some 8-connected patterns that are not 4-connected, where it can miss a large
portion of the boundary. However, there are two simple ways of modifying the algorithm,
discussed in Chapter 3, that improve its performance dramatically. With these
modifications the algorithm's efficiency improves, which is why it is commonly used
compared to other algorithms.

REFERENCES

[1]. Sajida Fayyaz, Hafiz Ali Hamza Gondal, Rubab Bukhsh and Sidra Tahir, "Adjustment of
bed for a patient through gesture recognition: an image processing approach", [2018].
[2]. D. L. Quam, "Gesture recognition with a Data Glove". In Proceedings of the IEEE National
Aerospace and Electronics Conference, Vol. 2, [2018].
[3]. H. C. Xu, "Principal Component Analysis on Fingertips for Gesture Recognition". Master's
Thesis, Department of Applied Marine Physics & Undersea Technology, National Sun Yat-
sen University, Kaohsiung, Taiwan, [2016].
[4]. Buchmann, Volkert, Stephen Violich, Mark Billinghurst, and Andy Cockburn, "Fingertips
gesture based direct manipulation in Augmented Reality". In Proceedings of the 2nd
International Conference on Computer Graphics and Interactive Techniques in Australasia
and South East Asia, [2016].
[5]. Yeo, Hui-Shyong, Byung-Gook Lee, and Hyotaek Lim, "Hand tracking and gesture
recognition system for human-computer interaction using low-cost hardware". Multimedia
Tools and Applications, [2018], pp. 1-29.
[6]. Abeer George Ghuneim, "Pavlidis algorithm", found at:
http://www.imageprocessingplace.com/downloads_V3/root_downloads/tutorials/
contour_tracing_Abeer_George_Ghuneim/theo.html
[7]. Burns, Anne-Marie, and Barbara Mazzarino, "Finger tracking methods using EyesWeb".
In Gesture in Human-Computer Interaction and Simulation, Springer Berlin Heidelberg,
[2017].


APPENDIX
REVIEW QUESTIONS
1. What is HGR?

Gesture recognition is a topic in computer science and language technology with the
goal of interpreting human gestures via mathematical algorithms. Gestures can
originate from any bodily motion or state but commonly originate from the face or
hand. Current focuses in the field include emotion recognition from the face and hand
gesture recognition. Users can use simple gestures to control or interact with devices
without physically touching them. Many approaches have been made using cameras
and computer vision algorithms to interpret sign language.

2. What is HCI?

Human-computer interaction is a multidisciplinary field of study focusing on the
design of computer technology and, in particular, the interaction between humans and
computers. While initially concerned with computers, HCI has since expanded to cover
almost all forms of information technology design.

3. What is boundary tracking?

Boundary tracing, also known as contour tracing, of a binary digital region can be
thought of as a segmentation technique that identifies the boundary pixels of the
digital region. Boundary tracing is an important first step in the analysis of that region.

4. What is GUI?

The graphical user interface is a form of user interface that allows users to interact
with electronic devices through graphical icons and visual indicators such as
secondary notation, instead of text-based user interfaces, typed command labels or
text navigation. GUIs were introduced in reaction to the perceived steep learning
curve of command-line interfaces, which require commands to be typed on a computer
keyboard.
