
GESTURE BASED COMMUNICATION

A GESTURE HUMANOID
G S L K CHAND (chandugudi@gmail.com), Y PARAMESHWARAO, V MALLIKARJUN

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING, ST. MARY'S ENGINEERING COLLEGE, CHEBROLU, GUNTUR DISTRICT, ANDHRA PRADESH

Abstract. Nowadays new interaction forms are no longer limited to Graphical User Interfaces (GUIs), making Human Computer Interaction (HCI) more natural. The development of humanoid robots for natural interaction is a challenging research topic. Using a gesture-based humanoid, we can operate a system simply by gestures. Inter-human communication is very complex and offers a variety of interaction possibilities. Although speech is often seen as the primary channel of information, psychologists claim that 60% of the information is transferred non-verbally. Besides body pose and mimics, gestures like pointing or hand waving are commonly used. In this paper the gesture detection and control system of the humanoid robot CHITTI is presented using a predefined dialog situation. The whole information flow, from gesture detection to the reaction of the robot, is presented in detail.

Keywords: hand gesture, computer vision, HCI, marking menu, UBIhand

Introduction:
Ubiquitous, embedded computing, e.g. in domestic environments, requires new human-computer interaction styles that are natural, convenient and efficient. Keyboards and mice are still the most common interfaces between humans and computer systems, no matter whether the system is a desktop PC, a notebook or a robot. However, there is increasing interest in developing additional interfaces such as speech recognition, handwriting, gesture recognition and emotion detection, since these ways of interaction bring more naturalness into the human-computer interface. In addition, most people tend to feel more comfortable if they can interact with computers and robots in the same manner humans communicate with each other. Interestingly, psychologists claim that over 60% of interaction signals are transferred non-verbally [5]. This is one of the motivations for the work presented in this paper. Our main goal is to establish a communication interface, based on gesture recognition, between a human and the robot CHITTI. The robot must recognize gestures made by a human and respond by speaking or by making body movements. Traditionally, gesture recognition techniques are based either on the shape of the hand or on the movement of the human arm and hand. Also, there is no tight definition of a gesture: gestures can be viewed as non-verbal interaction and may range from simple static signals defined by hand shapes, through actions like pointing at objects or waving a hand, to more complex movements that express ideas or feelings and allow communication among people [8]. Thus, for recognizing gestures it is necessary to find a way by which computers can detect dynamic or static configurations of the human hand, arm and even other parts of the human body. Some methods use mechanical devices to estimate hand positions and arm joint angles, such as the glove-based approaches. The main drawback of this kind of interface is that, besides being expensive, the user must wear an uncomfortable glove with many cables connecting the device to the system, which restricts the workspace to a small area and limits the movements of the user. Therefore, one of the best options to overcome the disadvantages of the glove-based methods and to implement less restrictive systems is the use of computer vision to detect and track hands and arms. Basically, the computer vision approaches concentrate on recognizing static hand shapes (pose gestures) or on interpreting dynamic gestures and the motion of the hands (temporal gestures). The static gesture approaches focus on identifying a gesture by the appearance of the hand, using silhouettes, contours and 2D or 3D models, while the methods that consider dynamic gestures are concerned with motion analysis. There are also works that evaluate both pose gestures and temporal gestures.

Our approach, like many others, aims to support the interaction with a robot and to control some of its movements. Beyond that, we intend to improve the interaction and communication with the robot by allowing the combined use of gestures and dialogs. As an example of related work, hand shapes have been used to control a walking robot through gestures that indicate commands such as stop or go forward; that system has four modules: hand detection using skin segmentation, hand tracking, a hand-shape recognizer based on contours and the robot controller. In another approach, an integration of gaze and gestures is used to instruct a robot in an assembly task; pointing hands are detected through skin color and splines for contour descriptions. In [13] an attention model for humanoid robots is defined using gestures and verbal cues, with human motion and gestures captured using markers attached to the subject's body. In this work we focus on an appearance-based method for recognizing gestures that helps improve the interaction with a robot, allowing the use of gestures and dialogs. Our approach also includes a hand tracking module, so the user can make different sequences of gestures while the robot keeps looking at the user's hand, without having to process the entire image every time in order to detect a hand and then identify a gesture. Gestures are recognized not only from the hand but also from head movements such as nodding. The preliminary results are encouraging, and in future work we expect to take the motion history into account in order to recognize more complex, temporal gestures.

This paper is organized as follows. The next sections present the recognition algorithms used for detecting the hands of the user, the marking menus for gesture control, and the pie and marking menus themselves. The prototype and the integration into the control system of the robot are described next, and finally we present the conclusions and future work.
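Several of the approaches above detect the hand through skin-color segmentation before any posture classification. As a rough, minimal sketch of that idea (not the system described in this paper), the following OpenCV snippet segments skin-colored pixels and returns the bounding box of the largest blob; the HSV threshold values are illustrative assumptions that would need tuning to the fixed lighting of a real setup.

```python
import cv2
import numpy as np

def detect_hand(frame_bgr):
    """Return the bounding box of the largest skin-colored region, or None.

    A minimal illustration of skin-segmentation based hand detection;
    the HSV thresholds below are rough, illustrative values only.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)     # assumed lower skin bound
    upper = np.array([25, 180, 255], dtype=np.uint8)  # assumed upper skin bound
    mask = cv2.inRange(hsv, lower, upper)
    # Remove small noise so only coherent skin blobs remain.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)   # assume the largest blob is the hand
    return cv2.boundingRect(hand)               # (x, y, w, h)
```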

The Recognition Algorithms

The computer vision system for tracking and recognizing the hand postures that control the menus is based on a combination of multi-scale color feature detection, view-based hierarchical hand models and particle filtering. The hand postures, or states, are represented in terms of hierarchies of multi-scale color image features at different scales, with qualitative inter-relations in terms of scale, position and orientation. In each image, detection of multi-scale color features is performed. The hand postures are then simultaneously detected and tracked using particle filtering, with an extension of layered sampling referred to as hierarchical layered sampling. To improve the performance of the system, a prior on skin color is included in the particle filtering. (Figure: white ellipses show detected multi-scale features in a complex scene; the correctly recognized hand posture is superimposed in gray.)
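As an illustration of the particle-filtering component (a generic sketch, not the hierarchical layered sampling described above), the following shows one predict/resample/update cycle over hand-state particles; the `measure_likelihood` function is a hypothetical stand-in for scoring a state against the multi-scale color features and the skin-color prior.

```python
import numpy as np

def particle_filter_step(particles, weights, measure_likelihood, motion_std=5.0):
    """One resample-predict-update cycle of a basic particle filter.

    Each particle is an assumed hand-state vector (e.g. x, y, scale, orientation);
    `measure_likelihood(state)` is a hypothetical scoring function.
    """
    n = len(particles)
    # Resample particles in proportion to the previous weights.
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx]
    # Predict: diffuse particles with Gaussian noise (simple motion model).
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # Update: re-weight each particle by the measurement likelihood and normalize.
    weights = np.array([measure_likelihood(p) for p in particles])
    weights = weights / weights.sum()
    # Weighted mean gives the current hand-state estimate.
    estimate = (particles * weights[:, None]).sum(axis=0)
    return particles, weights, estimate
```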

Marking menus for gesture control

A fundamental concern in all kinds of gestural control is what the command set should be: exactly which hand postures and movements should be used? A possible strategy is to base the command set on a menu system. The language is then determined by the menu layout and organization, can be made culturally neutral and self-explanatory, and the gestures can be kept relatively simple. The assumption here is that pie and marking menus are especially well suited for this purpose, because they offer users the possibility to develop the skill to work with no feedback from the menus.

UBI HAND

UBIhand provides wireless control of home appliances via hand gestures using the humanoid. It works as outlined in the flow chart below.

(Flow chart: working of the UBIhand gesture control.)
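As a rough illustration of the control flow behind the flow chart (recognized gesture, then menu selection, then appliance command), here is a minimal sketch; the gesture names, appliance names and the `send_command` transport are hypothetical and not the actual UBIhand implementation.

```python
# Hypothetical mapping from recognized gestures to appliance commands;
# both the gesture labels and the appliance actions are illustrative only.
GESTURE_TO_COMMAND = {
    "open_hand":   ("lamp", "toggle"),
    "closed_hand": ("tv", "power"),
    "v":           ("tv", "volume_up"),
    "l":           ("tv", "volume_down"),
    "positive":    ("cd_player", "play_pause"),
}

def send_command(appliance: str, action: str) -> None:
    # Placeholder for the wireless link (e.g. an IR or RF transmitter).
    print(f"-> {appliance}: {action}")

def handle_gesture(gesture: str) -> None:
    """Dispatch a recognized gesture to the corresponding appliance command."""
    if gesture in GESTURE_TO_COMMAND:
        appliance, action = GESTURE_TO_COMMAND[gesture]
        send_command(appliance, action)
```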

Pie- and Marking Menus

Pie and marking menus are pop-up menus with the alternatives arranged radially, often used in pen-based interfaces. Because the gestures (marks) are directional, users can learn to make selections without looking at the menu items; with expert users, menus need not even be popped up. Hierarchic marking menus are a development of pie menus that allow more complex choices. The shape of the path, rather than the series of distinct menu choices, can be recognized as a selection. If the user, e.g. a novice, works slowly or hesitates, the underlying menus can be popped up to provide feedback.
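The directional-selection idea can be made concrete with a small sketch: the angle of a mark (hand movement) is quantized into one of the radial sectors, so an expert user can select without the menu being displayed. The menu items and the eight-way layout below are illustrative assumptions.

```python
import math

def select_menu_item(dx: float, dy: float, items: list[str]) -> str:
    """Map the direction of a mark (hand movement) to a radial menu item.

    The stroke angle is quantized into one of len(items) equal sectors,
    with sector 0 centered on "east" and indices counted counter-clockwise.
    """
    angle = math.atan2(-dy, dx) % (2 * math.pi)   # -dy: image y-axis points down
    sector = 2 * math.pi / len(items)
    index = int(((angle + sector / 2) % (2 * math.pi)) // sector)
    return items[index]

# Example: a hypothetical eight-item top-level menu.
menu = ["TV", "CD", "VCR", "Lamp", "Volume", "Channel", "Back", "Off"]
print(select_menu_item(10.0, -10.0, menu))  # an up-and-right stroke selects "CD"
```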

The Prototype

For the first prototype we chose a scenario that is well known to most people: remote control of appliances in a domestic environment. A hierarchic menu system for controlling the functions of a TV, a CD player, a VCR and a lamp is under development, and some initial user trials have been performed. The prototype has been set up in a home-like environment in an open lab/demo space at CID. In order to maximize speed and accuracy, gesture recognition is currently tuned to work only against a uniform background, within a limited area of approximately 0.5 by 0.65 m, at a distance of approximately 3 m and under relatively fixed lighting conditions.

Integration

The experiments with the presented approaches have been performed on the humanoid robot CHITTI [7]. The robot consists of a humanoid upper body and head with 24 degrees of freedom: 7 for the movement of the body and neck, 6 for eye movement and 11 for emotional expressions.
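As a purely illustrative sketch of how the degrees of freedom listed above might be grouped when issuing a response to a gesture (the joint names, grouping and reaction are assumptions, not CHITTI's actual control software):

```python
from dataclasses import dataclass, field

@dataclass
class JointGroup:
    """A named group of joints holding one target angle (radians) per DOF."""
    name: str
    joint_angles: list = field(default_factory=list)

# Hypothetical grouping of the 24 DOF stated in the text:
# 7 for body and neck, 6 for the eyes, 11 for emotional expressions.
body_neck = JointGroup("body_neck", [0.0] * 7)
eyes = JointGroup("eyes", [0.0] * 6)
face = JointGroup("face", [0.0] * 11)

def respond_to_gesture(gesture: str) -> None:
    """Illustrative reaction: a small nod of the neck when an open hand is seen."""
    if gesture == "open_hand":
        body_neck.joint_angles[-1] = 0.2  # assumed neck-pitch joint, radians
```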

References:
1). Bretzner, L., Laptev, I. & Lindeberg, T. (2002). Hand Gesture Recognition using Multi-Scale Colour Features, Hierarchical Models and Particle Filtering. Submitted to 5th Intl. Conf. on Automatic Face and Gesture Recognition.
2). Callahan, J., Hopkins, D., Weiser, M. & Shneiderman, B. (1988). An Empirical Comparison of Pie vs. Linear Menus. Proceedings of CHI'88, pp. 95-100.
3). Freeman, W.T. & Weissman, C.D. (1994). Television Control by Hand Gestures. 1st Intl. Conf. on Automatic Face and Gesture Recognition.
4). Guimbretière, F. & Winograd, T. (2000). FlowMenu: Combining Command, Text and Data Entry. Proceedings of UIST 2000, pp. 213-216.
5). Kurtenbach, G. & Buxton, W. The Limits of Expert Performance Using Hierarchic Marking Menus. Proceedings of CHI'94, pp. 482-487.
6). Fraden, J. Handbook of Modern Sensors: Physics, Designs, and Applications. Springer, 2010 edition.

7). Mianowski, K., Schmitz, N. & Berns, K. (2007). Mechatronics of the Humanoid Robot ROMAN. Sixth International Workshop on Robot Motion and Control (RoMoCo), Bukowy Dworek, Poland, June 11-13, 2007.
8). Pavlovic, V. I., Sharma, R. & Huang, T. S. (1997). Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, pp. 677-695.
9). Quek, F. (2000). Gesture, Speech and Gaze Cues for Discourse Segmentation. IEEE Conference on Computer Vision and Pattern Recognition, pp. 247-254.
10). Sturman, D. J. & Zeltzer, D. (1994). A Survey of Glove-Based Input. IEEE Computer Graphics and Applications, vol. 14, pp. 30-39.
11). Waldherr, S., Thrun, S. & Romero, R. (2000). A Gesture-Based Interface for Human-Robot Interaction. Autonomous Robots, vol. 9, pp. 151-173.

(Fig. 2: screenshot of the Camera group with GestureDetector and CamShift module; the image shows the tool mcabrowser, which can be used for run-time analysis of the control architecture.)

Table: recognition results for the hand postures (rows: presented posture, columns: recognized posture).

               Open Hand  Closed Hand  Positive   V      L      Unknown
Open Hand        0.917      0.00        0.00     0.00   0.00    0.083
Closed Hand      0.00       0.850       0.00     0.00   0.00    0.150
Positive         0.00       0.00        0.883    0.017  0.00    0.100
V                0.00       0.00        0.050    0.900  0.00    0.050
L                0.00       0.050       0.00     0.017  0.867   0.066

Figure 3 shows an interaction between the robot and a human during the experiments.

CONCLUSION:

Future work concerning the gesture-based interaction will focus on the integration of verbal and non-verbal interaction signals into multimodal dialog situations.

Acknowledgement: We thank our parents, who supported us not only financially but also technically. Sincere thanks also to my sister, who helped a lot with this project.
