Realtime Sign Language Detection and Recognition

Abstract—The real-time sign language recognition system is developed for recognizing the gestures of Indian Sign Language (ISL). Generally, sign languages consist of hand gestures. For recognizing the signs, the Regions of Interest (ROI) are identified and tracked using the skin segmentation feature of OpenCV. Then, using MediaPipe [1], the landmarks of the hands are captured, and the key points of the landmarks are stored in a NumPy array. The model is then trained on these key points using TensorFlow, Keras and an LSTM network. Lastly, the model can be tested in real time on a live feed from the webcam, and hence we can recognize the sign language actions.

Realtime Sign Detection and Recognition is one of the potential applications for deaf and mute people, as it helps them connect with the world. Previous approaches to sign detection and recognition used machine learning algorithms trained on images, but we now use deep learning models to enhance real-time sign detection and recognition.

Keywords—Indian Sign Language, Realtime, MediaPipe, Landmarks, Key points, OpenCV, NumPy, LSTM, TensorFlow, Keras.

The main modules of the system are:

• Palm Detection Model
• Hand Landmark Model
• Multiclass Classification
• Realtime Sign Detection and Recognition

B. Methodologies and Technologies

The approach for sign detection is as follows:

• Firstly, we detect the hands in the live feed of the webcam.
• Landmarks are collected from the different positions of the hands.
• The key points of the landmarks are stored in an array so that they can be used further in the process.
• While capturing the landmarks, the specific labels are given by the user.
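The keypoint-collection steps above can be sketched as follows. This is a minimal sketch: the landmark values are placeholders standing in for MediaPipe's per-frame output (MediaPipe Hands returns 21 (x, y, z) landmarks per detected hand), and the label and sequence length are illustrative.

```python
import numpy as np

# MediaPipe Hands yields 21 landmarks with (x, y, z) coordinates per hand,
# so one frame flattens to a 63-value keypoint vector.
NUM_LANDMARKS = 21

def extract_keypoints(landmarks):
    """Flatten a list of (x, y, z) landmark tuples into a 1-D array.

    Returns a zero vector when no hand is detected, so every frame
    contributes a fixed-size entry to the dataset.
    """
    if not landmarks:
        return np.zeros(NUM_LANDMARKS * 3, dtype=np.float32)
    return np.array(landmarks, dtype=np.float32).flatten()

# Placeholder landmarks standing in for MediaPipe's per-frame output.
frame_landmarks = [(0.1 * i, 0.2 * i, 0.0) for i in range(NUM_LANDMARKS)]

# One training sample: a sequence of frames, labeled by the user while recording.
sequence = [extract_keypoints(frame_landmarks) for _ in range(30)]
X = np.stack(sequence)   # shape (30, 63): 30 frames of 63 key-point values
y = "hello"              # label supplied by the user for this sequence

print(X.shape)  # (30, 63)
```

Stacking fixed-size keypoint vectors per frame is what makes the data directly usable as an LSTM input sequence later on.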
Authorized licensed use limited to: ANNA UNIVERSITY. Downloaded on August 24,2024 at 06:46:18 UTC from IEEE Xplore. Restrictions apply.
A. Flow Chart
B. Web Application
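The web application feeds webcam frames through the trained model. A minimal sketch of that real-time loop is given below; the model call and the class labels are placeholders (neither MediaPipe nor the trained LSTM is invoked here), and the sequence length of 30 frames is an assumption.

```python
import numpy as np

ACTIONS = ["hello", "thanks", "yes"]   # illustrative sign labels
SEQ_LEN = 30                           # frames fed to the model per prediction

def mock_predict(window):
    """Stand-in for model.predict(); returns fake class probabilities."""
    probs = np.ones(len(ACTIONS))
    probs[0] += 1.0                    # pretend "hello" is most likely
    return probs / probs.sum()

window = []                            # rolling buffer of keypoint frames
recognized = None
for frame_idx in range(60):            # stand-in for the webcam capture loop
    keypoints = np.zeros(63)           # would come from MediaPipe each frame
    window.append(keypoints)
    window = window[-SEQ_LEN:]         # keep only the latest SEQ_LEN frames
    if len(window) == SEQ_LEN:
        probs = mock_predict(np.stack(window))
        recognized = ACTIONS[int(np.argmax(probs))]

print(recognized)
```

The rolling buffer lets the application emit a prediction on every new frame once the first SEQ_LEN frames have arrived, rather than waiting for discrete clips.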
A CNN model requires a large amount of storage, as it works directly on images, but with the LSTM model we can work with the key points extracted from the images, which are much easier to store.
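The storage difference is easy to quantify. The arithmetic below is illustrative, assuming 640x480 RGB webcam frames (actual frame sizes vary) and the 63 float32 key-point values per frame described earlier.

```python
import numpy as np

# Raw pixels per frame for an assumed 640x480 RGB webcam image (uint8).
image_bytes = 640 * 480 * 3

# 21 landmarks x 3 coordinates, stored as float32 (4 bytes each).
keypoint_bytes = 21 * 3 * np.dtype(np.float32).itemsize

print(image_bytes)                  # 921600
print(keypoint_bytes)               # 252
print(image_bytes // keypoint_bytes)  # roughly 3657x smaller per frame
```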
Fig. 12. Sign Detection using Images

X. CONCLUSION

In this Python project, we have built a Realtime Sign Detection and Recognition system that can be implemented in numerous ways. We used the MediaPipe [1] library to capture the landmarks of the hands, then used an LSTM model to train on them and predict the sign language actions.

The objective of this project is partially met. The program is able to load and perform within the required time frame, and the resulting detection accuracy is acceptable. This prototype can be further tested and evaluated on various aspects of scalability and stability.

ACKNOWLEDGMENT

We would like to thank CDAC ACTS, Pune for supporting us and helping us choose this project to work on. We would also like to thank our guide Dr. Shantanu Pathak for guiding us throughout the project.

REFERENCES

[1] https://mediapipe.dev/
[2] https://ai.googleblog.com/2020/12/mediapipe-holistic-simultaneous-face.html
[3] https://developers.googleblog.com/2021/04/signall-sdk-sign-language-interface-using-mediapipe-now-available.html
[4] Arpita Halder, Akshit Tayade, "Real-time Vernacular Sign Language Recognition using MediaPipe and Machine Learning," www.ijrpr.com, ISSN 2582-7421.
[5] Murat Taskiran, Mehmet Killioglu, Nihan Kahraman, "A Real-Time System for Recognition of American Sign Language by using Deep Learning," IEEE 18044537.
[6] Brandon Garcia, Sigberto Alarcon Viesca, "Real-time American Sign Language Recognition with Convolutional Neural Networks," stanford.edu.