
**3D Hand and Finger Recognition using Kinect**

F. Trapero Cerezo¹

¹Universidad de Granada (UGR), Spain

Abstract: Finger and hand tracking belong to the field of human-computer interaction (HCI). The goal of this work is to describe a robust and efficient method to track the hand and the fingers in real time using the Kinect device. This device provides a depth image instead of a normal RGB image, which gives us more information than a normal camera. This kind of image is more useful for tracking, and moreover it lets us place the relevant points in a 3D space. The final goal is to use the algorithms of this paper to create immersive virtual-environment applications.

Categories and Subject Descriptors (according to ACM CCS): I.4.8 [Image Processing and Computer Vision]: Scene Analysis—Tracking; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Virtual Reality

1. Introduction

Human-computer interaction (HCI), and in particular hand and finger tracking, depends largely on physical devices. People have worked on this problem for years, but as technology advances, new devices and techniques appear that allow us to accomplish the objective in a more efficient way. To control our computers we use devices like the mouse or the keyboard, but they feel unnatural to humans: we are used to interacting with the world with our own hands, our body or our voice. That is the reason hand and finger tracking has become more popular in recent years.

Until recently, if we wanted the computer to know where our hands were, we had to wear special gloves. These devices are really expensive, so they are only used in big virtual-reality projects. However, technology has advanced really fast, and now computers can analyze images and extract the contour of a hand in real time. Progress in this type of technology has led to the creation of devices like Kinect. Kinect is a cheap device, released at the end of 2010, with a lot of potential for HCI applications. It combines a normal camera with an infrared emitter and an infrared camera, and using the infrared hardware it is able to generate a depth image.

This paper contains the methods needed to track several hands and their components (fingertips and palm) in a 3D

space, based on the depth image that Kinect (or another device) generates.

1.1. Previous work

Before depth-capable devices existed, hand and finger recognition algorithms were based on the pigmentation of the hand, as we can see in [Mal03], to recognize the shape of the hand. This kind of algorithm has an obvious problem: it depends on the hand color. The algorithm can only be tuned for one kind of pigmentation, light or dark, so, depending on the person, the algorithm must be changed. Even so, algorithms created for this kind of hand recognition are still useful for depth images, because once you have the shape of the hand, in both cases, the same algorithms can be applied. Many of the algorithms discussed in this paper have also been used for hand recognition in normal RGB images.

There are mainly two ways to deal with this problem. The first one is to use a large set of hand images to train the system and try to find correspondences with them. The second option is to use some geometric properties of our hands to find out the important information, such as where the fingertips and the center of the hand are.

The first approach is discussed in [SMC01, Che08], and it can provide good results in model-based 3D hand tracking. Its main advantage is that we can generate a 3D model of the hand. However, to create this model we need a huge set of hand images to train the system, it is difficult for the parameters learned during training to suit a large variety of hand sizes, and it is too expensive computationally, so it is really difficult to achieve real-time hand recognition. The second option is to calculate the contour and inside points of the hand, and then look among those points for the candidates to be a fingertip or the hand center. This method is more efficient than the previous one, but the generation of a 3D hand model is less accurate and more difficult, because it is only possible to obtain the 3D coordinates of the relevant points. Besides the efficiency, there are other issues, like scalability, to take into account. We have chosen the second option because of its efficiency, which is one of our main goals: it gives us a quick and robust method to track more than one hand in real time, and it provides a good approximation of the hand in a really efficient way.

2. System overview

In order to find the fingertips and the palm center, the program has to walk through all these steps:

1. Generate the near pixels matrix
2. Decrease noise
3. Classify pixels as contour or inside pixels
4. Differentiate hands and calculate their contour
5. Allocate inside points
6. Find the center of the palm
7. Find the fingertips
8. Allocate points in a 3D space

In this section we describe different ways to achieve these goals, and some modifications of the main algorithm to improve the efficiency of the code.

2.1. Generate the near pixels matrix

The first step we need to accomplish in order to start working with the depth data is to decide which pixels we are going to take into account. We base this choice on the proximity to the Kinect. The Kinect can capture the distance of the points which are visible to the camera, between the values minDist and maxDist (Figure 1: Relative depth method representation). We can decide whether a pixel is near by one of these methods:

• Absolute depth. A pixel is near if its depth is less than a constant value; that means the pixel is between minDist and a predefined value which is greater than minDist and less than maxDist.
• Relative depth. First we calculate the minimum depth, and based on that depth we select a maximum depth (for example by adding a constant value to the minimum depth); if the depth of a pixel is between these two values, the pixel is near. In Figure 1 the minimum depth is d and we add a constant s to it, so every point between minDist and d + s is accepted as near. This method allows greater mobility, because it does not force us to stay at the same distance the whole time.

First of all, we must select the resolution of the depth image. It is possible to select from three options: 80x60, 320x240 or 640x480. The best option is obviously the 640x480 resolution, but in order to improve the efficiency of the code we can choose the 320x240 resolution, because it gives us enough definition to distinguish the contour of the hands and we lose almost no information. This choice greatly reduces the number of operations.

Once we have the depth data, we generate a matrix of the corresponding size which records whether each pixel is close enough or not, following the criterion selected at the beginning. This matrix lets us access the data in an easier way, because many of the algorithms need to know the adjacent pixels of a given one. One final improvement is to set the values of the borders of the matrix to false. That increases the efficiency of the contour algorithm, because we do not have to check each time whether a matrix access is out of bounds.
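The near-pixels matrix of section 2.1 can be sketched as follows. This is an illustrative Python/NumPy version, not the paper's original code; the slack value `s` (relative-depth criterion, depth assumed to be in millimetres) and the one-pixel border are assumptions chosen for the example.

```python
import numpy as np

def near_pixel_matrix(depth, s=150, border=1):
    """Boolean matrix of 'near' pixels using the relative-depth criterion:
    accept every pixel between the minimum observed depth d and d + s.
    Pixels with depth 0 (no reading) are ignored."""
    valid = depth > 0
    d = depth[valid].min()            # closest visible point
    near = valid & (depth <= d + s)   # everything in [d, d + s] is near
    # Force the borders to False so neighbour lookups in the later
    # algorithms never have to check matrix bounds.
    near[:border, :] = near[-border:, :] = False
    near[:, :border] = near[:, -border:] = False
    return near
```

With the absolute-depth criterion the same sketch would simply compare `depth` against a fixed threshold instead of `d + s`.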

2.2. Decrease noise

This step is optional, but sometimes it can help if we are too far from the Kinect or there is too much noise. It consists of applying one or two morphological transformations used in artificial vision: dilation and erosion. Dilation and erosion are used for expanding and contracting the shapes contained in a binary image, respectively; note that dilation is not the inverse operation of erosion. Both methods apply a mask B over a matrix A. Applying dilation using a circle as the mask rounds the contour of the hand and decreases the noise. However, if the dilation is too strong we can lose the shape of some fingers, so after the dilation we can apply erosion with the same mask; the final image then contains a rounder shape which preserves the size of the original one. These techniques are discussed in [MdPAE∗06]. These methods are not cost effective, so they are only applied when the algorithm cannot find the fingertips otherwise.

2.3. Classify pixels as contour or inside pixels

At this point we have a binary matrix which contains the shape of the hands: each cell indicates whether the pixel is part of a hand (is valid) or not. The valid pixels must be divided into contour and interior pixels. A pixel is interior if every adjacent pixel (up, down, left and right) is also valid, and contour if at least one of its adjacent pixels is not valid. This information is stored in two different lists, contour and interior.

2.4. Differentiate hands and calculate their contour

It may seem that the contour is already calculated, but we need a sorted contour for calculating the fingertips and to improve the efficiency of the calculation of the center of the palm. A sorted contour is a list of points in which the previous and next points of each element are adjacent to it in the contour of the hand, and the first and last points are also adjacent. The sorted contour is calculated by applying the "turtle algorithm" explained in [MdPAE∗06]; its pseudocode is shown in Algorithm 1. Each time a point is added to the contour of one hand, that point is marked as visited. After finishing the contour of one hand, the application looks for another unvisited contour point to calculate the sorted contour of another hand; if all the contour points have been visited, there are no more hands to analyze. We must take into account that noise can create some small shapes: if a contour is very small compared to the total number of contour points, we discard it.

Algorithm 1 Turtle algorithm

    function TurtleAlgorithm(startPoint, valid)
        currPoint := startPoint
        dir := 0
        lastPoint := Point(−1, −1)
        while True do
            if valid[currPoint.Y][currPoint.X] then      # On the shape: record the point, turn left
                if currPoint ≠ lastPoint then
                    list.add(currPoint)
                    lastPoint := currPoint
                end if
                contour[currPoint.Y][currPoint.X] := False
                dir := (dir + 1) mod 4
            else                                         # Off the shape: turn right
                dir := (dir + 4 − 1) mod 4
            end if
            # Advance one pixel in the current direction
            if dir = 0 then currPoint.X := currPoint.X + 1 end if
            if dir = 1 then currPoint.Y := currPoint.Y + 1 end if
            if dir = 2 then currPoint.X := currPoint.X − 1 end if
            if dir = 3 then currPoint.Y := currPoint.Y − 1 end if
            # Finish when it comes back to the starting point
            if currPoint = startPoint then
                break
            end if
        end while
        return list
    end function

2.5. Allocate interior points

After the previous step we have the number of hands and their sorted contours. The next step is to allocate each interior point to the correct hand; the objective of classifying the interior points into the correct hand is to find the center of the palm. One heuristic is to calculate the container box of each hand. The container box of a figure is delimited by two points which define the smallest box containing every pixel of the figure. We can calculate the container box using only the contour points, and with it we can classify very quickly whether an interior point is inside the box, and therefore in the figure; otherwise we would have to go over the whole matrix several times. There are some problems when the container boxes overlap and the same interior point lies inside more than one box: in this case the pixel may be wrongly classified. These kinds of errors do not affect the final result, because a misclassified pixel can never be the center of the palm.
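The classification of section 2.3 and the container-box heuristic of section 2.5 can be sketched like this. It is an illustrative Python version (the paper does not publish its implementation); it assumes the border cells of `valid` are already set to false, as recommended in section 2.1, so the four-neighbour lookups never leave the matrix.

```python
import numpy as np

def classify_pixels(valid):
    """Split the valid pixels into contour and interior lists: a pixel is
    interior when its four neighbours are valid, contour otherwise.
    Assumes the border cells of `valid` are False."""
    contour, interior = [], []
    for y, x in zip(*np.nonzero(valid)):
        if valid[y-1, x] and valid[y+1, x] and valid[y, x-1] and valid[y, x+1]:
            interior.append((x, y))
        else:
            contour.append((x, y))
    return contour, interior

def container_box(contour):
    """Smallest axis-aligned box containing every contour point."""
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    return (min(xs), min(ys)), (max(xs), max(ys))

def inside_box(point, box):
    """Quick test used to allocate an interior point to a hand."""
    (x0, y0), (x1, y1) = box
    return x0 <= point[0] <= x1 and y0 <= point[1] <= y1
```

In the real pipeline `inside_box` would be evaluated against the container box of each detected hand, falling back to a full search only when boxes overlap.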

Actually. we can only do the calculations for 1 in N consecutive points.P(x). so the point P(x) cannot a ﬁngertip. it is a ﬁngertip. If the distance is greater in the ﬁrst case. the next point we should deal with is not P(i+1).8. because ﬁngertips are not near to each other. and the resulting angle is equal to 180º more or less. • Due to in the list of the interior and contour points. Each time we deduce that P(i) is a ﬁngertip. so we must improve its efﬁciency.P(i) and P(i-k) . Using a small value of N greater than 1. Using the Figure 3 (obtained from [TP10]) we can understand more clearly how the algorithm works. In order to ﬁnd this point. for example. P(i) and the middle point of P(i-k) and P(i+k). The point that corresponds to that distance is the center of the hand. in other case is valley. Therefore we have to be careful and avoid showing the arm as far as possible. . One of the problems of this algorithm is that it could produce some false positive in the valley between two ﬁngers. In this case the angle is around 30º. it should be P(i+m). the ﬁrst one try to calculate the angle α that is formed by the lines P(i+k) . with values around 8 the results are quite acceptable. and indeed it is the center. Trapero Cerezo / 3D Hand and Finger Recognition using Kinect Figure 2: Graphical interpretation of the calculation of the center of the palm Figure 3: K-curvature Algorithm 2. it is ﬁngertip. But how can we extend these 2D points into 3D? The solution is really simple using Kinect.4 F.7. To avoid this false positive. we can assure that this point is not the center.P(i). The vector are formed by P(i-k) . if the current minimum is lesser than the current maximum. 2. so we have only together with the ﬁngertips. the error is negligible and the efﬁciency is improve by 1/N 2 . because they have similar properties to the ﬁngers. Moreover to improve the efﬁciency we introduce a heuristic. and the algorithm allocate the center in the arm. 
could help us to identify gestures. and from these distances we select the maximum. The value of k in this sample is too small. This algorithm consume a lot of time. we calculate the distance between the center of the hand an two points. But. In the 2obtained from [Ste] we can observe what are we looking for when we use the algorithm. could be used for calculating other relevant information which. so the idea is to ﬁnd the value k that ﬁts better with the right side of the ﬁgure. The value of m depends on the size of the contour of the hand. get the P(i-k) and P(i+k) points. If the angle is lesser than an α value. due to we have the depth of each 2D point. Allocate points in a 3D space Everything explained above is used for hand and ﬁnger recognition in 2D space. The circle in the ﬁgure is the biggest one that the hand shape can contains. usually. we calculate the minimum distances from each interior point to every contour point. and using these points generate two vectors and calculate the minimum angle they form. The more robust values found were k=22 and α = 40. so there is no need of calculating the distance to the rest of contour points for this interior point. The main idea is for each point P(i) of the contour. We have done it in two ways: • While calculating the minimum distance of an interior point to every contour point.P(x) and P(x-k) . Find the ﬁngertips The method used to ﬁnd the ﬁngertips is the k-curvature algorithm. by for example delimiting the parameters explained in section 2. around a 10% of the number of points of the contour seems to be reasonable. We have two cases. The center of the hand is. so the center of that circle should be the center.P(i) and P(i+k) P(i). and we can ﬁnd more points than we should. the center of the biggest circle that ﬁts inside the contour of the hand. Sometimes we could have problems if the arm is showed. so the point P(i) it could be a ﬁngertip. consecutive points are near in the hand. 
in the second case we calculate the angle β using he lines P(x+k) .1.
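The k-curvature test, together with the valley rejection and the skip-ahead heuristic described above, can be sketched as follows. This is an illustrative Python version; the contour is assumed to be the sorted list of (x, y) points produced by the turtle algorithm, and in practice k and α would be set to the reported values (k = 22, α = 40°).

```python
import math

def k_curvature_fingertips(contour, palm_center, k=22, alpha_deg=40.0):
    """Fingertip detection on a sorted contour. P(i) is a tip when the angle
    between P(i-k)-P(i) and P(i+k)-P(i) is below alpha AND P(i) is farther
    from the palm center than the midpoint of P(i-k) and P(i+k); the second
    test rejects the valleys between fingers. After accepting a tip we
    skip ahead m ~ 10% of the contour, since tips are never adjacent."""
    n = len(contour)
    m = max(1, n // 10)
    tips, i = [], 0
    while i < n:
        p = contour[i]
        a = contour[(i - k) % n]
        b = contour[(i + k) % n]
        v1 = (a[0] - p[0], a[1] - p[1])
        v2 = (b[0] - p[0], b[1] - p[1])
        norm = math.hypot(*v1) * math.hypot(*v2)
        if norm > 0:
            cos = max(-1.0, min(1.0, (v1[0]*v2[0] + v1[1]*v2[1]) / norm))
            if math.degrees(math.acos(cos)) < alpha_deg:
                mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
                if math.dist(p, palm_center) > math.dist(mid, palm_center):
                    tips.append(p)   # a tip, not a valley
                    i += m
                    continue
        i += 1
    return tips
```

The distance comparison against the midpoint is what distinguishes a convex fingertip from a concave valley, since both produce small angles.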

But in order to achieve it. but for calculating the 5. The formulas to do conversions are: X • X = width ∗2 − 1 Y • Y = (1 − height )∗2−1 palm center. who has helped with some graphics. It is not a big issue. The Z value is between 400 and 8000. because without our discussions about this topic I did not come up with some ideas. The idea is to be able to control any thing in the most natural way. Future work The next step of this research is to apply it in immersive virtual environments. 0). 4. but also to allocate the points in a 3D space. it oscillates around the real center because of the noise. We have used the library to create a starship simulator. Developed applications To test the performance of the algorithms and improvements proposed a test application has been developed (Figure 5). using other virtual reality devices to increase the immersion of the user. it has been proved the depth data that the Kinect can provide is useful not only to recognize the shape of the hand. But this frame rate becomes slower when the algorithm try to perform the ﬁnger and hand tracking with big objects. the vision space the Kinect can catch measured in centimeters. The data we obtain from each point is a X and Y value contained in the box whose left-upper corner is (0. The library contains the values of the parameters which has provided us the best results and assigns those values by default. We have also achieved a great level of accuracy in ﬁnger tracking. Not only the basic methods have been explained. We have designed the control to be as realistic as possible. And also to Alejandro J. . This application uses the library we have developed to perform the ﬁnger and hand tracking. and Miguel Ángel Romero Cobos. due to we want to perform the hand tracking correctly when the hands are showed. With the application we can modify the different parameters mentioned in this document. and the right-down corner is (width. You can also shoot missiles opening and closing your hand. 
this way we have been able to ﬁnd the values that have the better equilibrium between performance and accuracy. but also some improvements. this normalization is shown in the Figure 4. To normalize the points we have decided to use the same normalization that Kinect uses for skeleton tracking. we need to add more features. like if you were holding a steering wheel.F. This kind of tools could be the beginning of a new era of HCI. although Kinect is not too accurate for points further than 4 meters. like for example gesture recognition [TP10]. Conclusions This paper presented a method to track several hands and how to obtain their relevant points (ﬁngertips and palm center). Moreover. León Salas who has been the teacher responsible of this research. height). based on some geometric properties. This simulator uses the 3D position of the hands to control the starship in a 3D space. • Z = Z −400 −7600 3. Trapero Cerezo / 3D Hand and Finger Recognition using Kinect 5 Figure 4: 3D Normalization Figure 5: Test application example to normalize the values and add the depth data to each point. I would like to thank Rubén Aguilar Becerra. in which we will control our computers using only our hands and our body. This way we have the ﬁngertips and the center of the palm located in a 3D space. Final results provide us with a frame rate of 30 fps when we show the most two hands at the same time. Width and height values depends on the resolution we have selected in the section 2.1. It could be used in games or navigation systems.
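The three conversion formulas above can be written directly. This small Python helper is illustrative (the function and parameter names are my own, not the library's API); the raw depth Z is assumed to be in millimetres as delivered by the Kinect.

```python
def to_3d(x, y, z, width=320, height=240, z_min=400, z_max=8000):
    """Map a tracked 2D point plus its depth reading into the normalized
    space: X', Y' in [-1, 1] (Y flipped, since image rows grow downwards)
    and Z' in [0, 1] over the sensor's visible range."""
    nx = (x / width) * 2 - 1
    ny = (1 - y / height) * 2 - 1
    nz = (z - z_min) / (z_max - z_min)
    return nx, ny, nz
```

The default width and height correspond to the 320x240 resolution suggested in section 2.1 and would change with the selected resolution.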

R. 2006. . P. S HOTTONY J. S HARP T. V. B LAKE A...: Realtime human pose recognition in parts from single depth images. V... 17th International Conference on Systems. M. R. [Ste] S TEFAN: Center of the palm (hand tracking).. 2008. M OORE R.: Realtime human pose recognition in parts from single depth images: Supplementary material. [GSK∗ 11b] G IRSHICKY R. C RIM INISIY A....: Efﬁcient regression of generalactivity human poses from depth images. [SMC01] S TENGER B.. C OOK M. K IPMAN A.. http://blog. M ERÉ J. F INOCCHIO M.: Técnicas y algoritmos básicos de visión artiﬁcial443. C. E SPINOZA A. C IPOLLA R. R.. [MdPAE∗ 06] M ARCOS A. M. L IMAS M. S. F ITZGIBBONY A. J.. G. P ELLEGRINO S... KOHLIY P... Universidad de la Rioja. M OORE R.. E LÍAS F. KOHLIY P. CSC2503F Project Report.ch/2011/04/center-of-palm-handtracking.. G ONZÁLEZ E. [Mal03] M ALIK S...: Real-time hand tracking and ﬁnger tracking for interaction.html. C RIM INISIY A. Signals and Image Processing (2010). F ITZGIBBON A.6 F.: Model-based 3d tracking of an articulated hand. F ITZGIBBON A. Trapero Cerezo / 3D Hand and Finger Recognition using Kinect References [Che08] C HEN Q.: Efﬁcient regression of generalactivity human poses from depth images: Supplementary material. DE P ISÓN A SCACÍBAR F. A.. S HOTTONY J. M ENDONÇA P. K IPMAN A... F ITZGIBBONY A. O.. Servicio de publicaciones. [SFC∗ 11b] S HOTTON J... [GSK∗ 11a] G IRSHICKY R. F INOCCHIO M. [TP10] T RIGO T..: An analysis of features for hand-gesture classiﬁcation.. [SFC∗ 11a] S HOTTON J. PhD thesis. S HARP T.: Real-Time Vision-Based Hand Tracking and Gesture Recognition. B LAKE A. C OOK M.candescent. School of Information Technology and Engineering Faculty of Engineering University of Ottawa....
