Prathyusha Karreddi
Amulya Patlola
Computer Science Engineering
Padmasri Dr. B.V. Raju Institute of Technology
Narsapur, Andhra Pradesh, India
Patlola.amulya@gmail.com
Abstract: This paper presents a framework for a vision-based interface designed to instruct a humanoid robot through gestures using image processing. Image thresholding and blob detection techniques are used to extract the gestures. The images are then analyzed to recognize the gesture given by the user in front of a web camera, and an appropriate action is taken (such as taking a picture or moving the robot). The application is developed using the OpenCV (Open Source Computer Vision) libraries and Microsoft Visual C++. The gestures obtained by processing the live images are used to command a humanoid robot with simple capabilities. A commercial humanoid toy robot, Robosapien, was used as the output module of the system. The robot was interfaced to the computer by a USB-UIRT (Universal Infrared Receiver and Transmitter) module.
Keywords: OpenCV libraries; blob tracking; thresholding; USB-UIRT; image moments; Robosapien
I. INTRODUCTION
II. MOTIVATION

III. RELATED WORKS
Many systems exist for controlling robots through gestures. Some gesture recognition systems involve adaptive color segmentation [1], hand finding and labeling
IV. BLOCK DIAGRAM

V. PROPOSED SYSTEM

VI. APPROACH
A. OpenCV Libraries
OpenCV (Open Source Computer Vision Library) is a library of programming functions mainly aimed at real-time computer vision and image processing. Originally developed by Intel, it can be used to build accurate and efficient image processing systems.
B. Thresholding
Thresholding is the simplest method of image segmentation, i.e., the process of partitioning a digital image into multiple segments (sets of pixels). Thresholding can be used to create binary images from their corresponding grayscale images. With threshold T:

if f(x, y) > T then f(x, y) = 0, else f(x, y) = 255
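As a minimal illustration (not the authors' code), this rule maps directly onto OpenCV's inverse binary threshold; the input file name and the value of T below are placeholder assumptions:

// Sketch of the thresholding rule above using OpenCV (C++).
// THRESH_BINARY_INV implements: dst = 0 if src(x, y) > T, else maxval.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat frame = cv::imread("frame.png");   // placeholder input image
    if (frame.empty()) return 1;
    cv::Mat gray, binary;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    const double T = 128;                      // assumed threshold; tune per lighting
    cv::threshold(gray, binary, T, 255, cv::THRESH_BINARY_INV);
    cv::imwrite("binary.png", binary);         // white (255) marks the segmented region
    return 0;
}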
[Block diagram of the system: real-time video capture → blob detection and tracking → pattern recognition → command detection → signal generation → USB-UIRT → Robosapien]
D. USB-UIRT
As stated above, USB-UIRT stands for Universal Infrared Receiver and Transmitter, which is available as a USB module. This module has the capacity to learn the IR signals sent by most remote controls. It contains a small, flash-upgradable microcontroller; when an IR signal is detected, the module interprets and decodes it and sends it to the PC via a USB connection.
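A hedged sketch of driving the module from Win32 C++ follows. The entry points (UUIRTOpen, UUIRTTransmitIR, UUIRTClose) and the Pronto format constant follow the vendor's published uuirtdrv API, but the exact signatures should be verified against the real header, and the IR code string is a placeholder, not a real Robosapien code:

// Hedged sketch: transmitting a learned IR code through USB-UIRT.
#include <windows.h>
#include <cstdio>

typedef void* HUUHANDLE;
typedef HUUHANDLE (WINAPI *fnUUIRTOpen)(void);
typedef BOOL (WINAPI *fnUUIRTClose)(HUUHANDLE);
typedef BOOL (WINAPI *fnUUIRTTransmitIR)(HUUHANDLE, char*, int, int, int,
                                         HANDLE, void*, void*);
#define UUIRTDRV_IRFMT_PRONTO 0x0010   // Pronto hex format (per vendor docs)

int main() {
    HMODULE dll = LoadLibraryA("uuirtdrv.dll");
    if (!dll) { std::printf("uuirtdrv.dll not found\n"); return 1; }
    fnUUIRTOpen       pOpen  = (fnUUIRTOpen)GetProcAddress(dll, "UUIRTOpen");
    fnUUIRTClose      pClose = (fnUUIRTClose)GetProcAddress(dll, "UUIRTClose");
    fnUUIRTTransmitIR pTx    = (fnUUIRTTransmitIR)GetProcAddress(dll, "UUIRTTransmitIR");
    if (!pOpen || !pClose || !pTx) return 1;

    HUUHANDLE h = pOpen();
    if (h == INVALID_HANDLE_VALUE) return 1;
    // A code previously learned from the Robosapien remote would go here.
    char irCode[] = "0000 0067 0000 000D ...";   // placeholder, not a real code
    pTx(h, irCode, UUIRTDRV_IRFMT_PRONTO, 1 /*repeat*/, 0, NULL, NULL, NULL);
    pClose(h);
    return 0;
}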
Fig. 4. Flow chart of the gesture recognition
VII. IMPLEMENTATION
The entire process starts with compiling the OpenCV libraries on the desktop and configuring them with Visual C++, which requires several other dependencies to be installed as well. When the user comes in front of the camera and gives a gesture with a colored palm, the first step is to acquire each single frame from the web camera. Treating an image as a two-dimensional matrix, we apply thresholding followed by blob detection. Once the static blob is detected, its position in the image is calculated; the frames form a time series (the video), through which the blob is tracked. The blob positions and blob area are then compared across successive frames.
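A sketch of this acquisition loop is below; the camera index, threshold value, and Esc-to-quit behavior are illustrative assumptions, not the authors' settings:

// Sketch of the frame-acquisition and thresholding loop described above.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);                 // assumed camera index
    if (!cap.isOpened()) return 1;
    cv::Mat frame, gray, binary;
    for (;;) {
        cap >> frame;                        // acquire one frame
        if (frame.empty()) break;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::threshold(gray, binary, 128, 255, cv::THRESH_BINARY_INV);
        // ... blob detection / tracking on `binary` goes here ...
        cv::imshow("binary", binary);
        if (cv::waitKey(30) == 27) break;    // Esc stops the loop
    }
    return 0;
}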
The raw image moment of order (m, n) about the origin is defined for a two-dimensional function f(x, y) as

M(m, n) = ∬ x^m y^n f(x, y) dx dy
The two-dimensional function in our case is an image. For the zero-order moment, (m, n) becomes (0, 0), and since we take the moment about the origin, the equation reduces to the double integral of f(x, y), which gives the area of the image; for our binary image, this is the area of the blob.
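In OpenCV terms (a sketch, not the authors' code), the blob area is exactly the zero-order moment m00 of the binary image, and the first-order moments give the blob position used for tracking:

// Blob area and position from image moments (OpenCV C++).
#include <opencv2/opencv.hpp>

// Area of the blob: zero-order moment m00 of the binary image.
double blobArea(const cv::Mat& binary) {
    cv::Moments m = cv::moments(binary, /*binaryImage=*/true);
    return m.m00;                               // counts nonzero (blob) pixels
}

// Blob position: centroid from first-order moments, x = m10/m00, y = m01/m00.
cv::Point2d blobCentroid(const cv::Mat& binary) {
    cv::Moments m = cv::moments(binary, /*binaryImage=*/true);
    if (m.m00 == 0) return cv::Point2d(-1, -1); // no blob found
    return cv::Point2d(m.m10 / m.m00, m.m01 / m.m00);
}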
For tracking, the change between consecutive frames is summarized by:

P = CurrentX - PreviousX
Q = CurrentY - PreviousY
Ar = CurrentArea - PreviousArea
TABLE I. PATTERN MATCHING USING THE CONDITIONS AND POSING THE GESTURES TO THE ROBOT

Condition     Gesture
Ar > 3000     Move front
Ar < -3000    Move back
              Move right
              Move left
              Raise your hand
              Down your hand
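Putting Table I into code (a sketch): only the Ar thresholds of ±3000 come from the table; the displacement thresholds for the P and Q conditions were not recovered, so the value used below is an assumption:

// Maps frame-to-frame blob changes (P, Q, Ar defined above) to a gesture.
#include <string>

std::string detectCommand(double P, double Q, double Ar) {
    const double T_AREA = 3000;   // area threshold from Table I
    const double T_XY   = 50;     // assumed pixel-displacement threshold
    if (Ar > T_AREA)  return "Move front";
    if (Ar < -T_AREA) return "Move back";
    if (P > T_XY)     return "Move right";
    if (P < -T_XY)    return "Move left";
    if (Q < -T_XY)    return "Raise your hand"; // image y axis points down
    if (Q > T_XY)     return "Down your hand";
    return "No gesture";
}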
Fig. 7. Screenshot showing the robot raising its hand in response to a human gesture.
VIII. APPLICATIONS
Image processing has become increasingly powerful in recent years, largely due to its numerous applications in many fields. Gesture capture based on image processing has many potential applications.
IX. CONCLUSION

REFERENCES
[5] Gesture Controlled Robot using Kinect, http://www.eyantra.org/home/projects-wiki/item/180-gesturecontrolled-robot-usingfirebirdv-and-kinect
[6] H. K. Kaura, V. Honrao, S. Patil, and P. Shetty, "Gesture Controlled Robot using Image Processing," International Journal of Advanced Research in Artificial Intelligence, vol. 2, no. 5, 2013.
[7] X. Zabulis, H. Baltzakis, and A. Argyros, "Vision-based Hand Gesture Recognition for Human-Computer Interaction."
[8] Md. Hasanuzzaman, T. Zhang, V. Ampornaramveth, H. Gotoda, Y. Shirai, and H. Ueno, "Adaptive visual gesture recognition for human-robot interaction using a knowledge-based software platform," Robotics and Autonomous Systems, vol. 55, no. 8, pp. 643-657, Aug. 2007.
[9] M. Van den Bergh, F. Bosch, E. Koller-Meier, and L. Van Gool, "Haarlet-based hand gesture recognition for 3D interaction," in Proc. Workshop on Applications of Computer Vision (WACV), Dec. 2009.