
Minor Project - 2

Sign Language Detection


Mid-term Presentation
In partial fulfilment of the requirements
for the course
Minor Project - 2

Under the guidance of


Dr. Neelu Jyoti Ahuja
Team Members & Roles

S.No.  Name               Sap-Id     Roll No     Specialization & Semester                     Role
1.     Harshita Mittal    500077452  R134219044  Cyber Security and Forensics (6th Sem.)       Programmer, Tester, Production, Research
2.     Astha Kumari       500076107  R110219027  Cloud Computing & Virtualization (6th Sem.)   Programmer, Tester, Production, Research
3.     Ankur Gupta        500076127  R110219018  Cloud Computing & Virtualization (6th Sem.)   Programmer, Tester, Production, Research
4.     F Rohith Immanuel  500075157  R134219041  Cyber Security and Forensics (6th Sem.)       Programmer, Tester, Production, Research
Introduction
• There have been several advancements in technology, and a great deal of research has been done to help people who are deaf and mute. Deep learning and computer vision can also be used to make an impact on this cause.

• Our project can be very helpful to deaf and mute people in communicating with others, since sign language is not known to everyone. Moreover, it can be extended to mid-air writing, where a person can write using only hand gestures.
Literature Review
• [1] "Sign Language Recognition: State of the Art" (Feb 2014) by Ashok K Sahoo, Gouri Sankar Mishra and Kiran Kumar Ravulakollu.
Journal: ARPN Journal of Engineering and Applied Sciences.
The authors note that systems should be able to distinguish the face, the hands (right/left) and other parts of the body simultaneously.

• [2] "American Sign Language Recognition System: An Optimal Approach" (2018) by Shivashankara S and Srinath S. This work can be extended to recognize rotation- and distance-invariant ASL alphabet gestures, number gestures and other complex gestures against different backgrounds.

• [3] "Sign Language Recognition" by Muskan Dhiman and Dr G.N. Rathna. The authors note that, for a user-dependent system, the user supplies a set of images to train the model so that it becomes familiar with that user.
Problem Statement

• Understanding the exact context of the symbolic expressions of deaf and mute people is a
challenging task in real life unless it is properly specified.
Motivation
• Sign language is learned by deaf and mute people, and it is usually not known to hearing people, so
communication between a hearing person and a hearing-impaired person becomes a challenge.
• This inspired us to build a bridge between hearing-impaired and hearing people to make
communication easier.
• A Sign Language Recognition (SLR) system takes an input expression from the hearing-impaired
person and gives output to the hearing person in the form of text, and vice versa.
• India has a deaf and mute population of 2.4 million, which is 20% of the world's
deaf and mute population. These people lack amenities that others take for granted, largely
because of the communication barrier: deaf people are unable to hear and mute people are
unable to speak. The image shows a survey analysis.
Objective

• The objective of this project is to develop a system that recognizes symbolic expressions from images so
that the communication gap between a hearing person and a physically impaired person can be easily
bridged.

Sub-objectives:
o To detect hand gestures in real time
o To achieve an accuracy of more than 95%
Methodology

1. Extract holistic keypoints: Using MediaPipe, we extract the landmarks of the hand, face and pose
so that we can effectively create our dataset.
2. Create the dataset: OpenCV is used to read the frames, and the OS library is used to create the
directories in which the dataset is stored.
3. Train the model: The neural network model is trained on the dataset that has been created.
4. Make real-time predictions: Using the trained model and sequences of frames read with OpenCV, the
sign or gesture is predicted in real time.
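As a sketch of step 1, the landmark coordinates returned by MediaPipe Holistic can be flattened into one fixed-length vector per frame. The counts below (33 pose landmarks with visibility, 468 face and 21 per-hand landmarks with x, y, z) follow MediaPipe Holistic's documented output; the helper name and zero-fill convention are our own illustrative assumptions:

```python
import numpy as np

# MediaPipe Holistic landmark counts: 33 pose points (x, y, z, visibility),
# 468 face points and 21 points per hand (x, y, z each).
POSE_DIM, FACE_DIM, HAND_DIM = 33 * 4, 468 * 3, 21 * 3

def extract_keypoints(pose=None, face=None, lh=None, rh=None):
    """Concatenate landmark arrays into one feature vector, zero-filling
    any part that MediaPipe failed to detect, so every frame yields a
    vector of identical length."""
    pose = np.asarray(pose).flatten() if pose is not None else np.zeros(POSE_DIM)
    face = np.asarray(face).flatten() if face is not None else np.zeros(FACE_DIM)
    lh = np.asarray(lh).flatten() if lh is not None else np.zeros(HAND_DIM)
    rh = np.asarray(rh).flatten() if rh is not None else np.zeros(HAND_DIM)
    return np.concatenate([pose, face, lh, rh])

# Even a frame with no detections produces a vector of the same length:
print(extract_keypoints().shape)  # (1662,)
```

Keeping the vector length constant across frames is what lets the sequences be stacked into a single training array in step 2.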
System Configuration
System Requirements:
Recommended operating systems:
1. Windows 7 or newer
2. Linux
3. macOS
Hardware requirements:
1. Processor: minimum 1 GHz
2. A webcam or a USB camera
3. Memory (RAM): minimum 2 GB
Other software used (for compilation of the project):
1) Python (3.8.10)
2) IDE (Jupyter Notebook)
Class Diagram/UML/Use Case

Fig.2: Use Case Diagram
Fig.3: Flow Chart Diagram
Fig.4: Sequence Diagram
Algorithm/Libraries/Data Structures
Algorithm used:
LSTM (Long Short-Term Memory): an advanced RNN model

Data Structures:
• Arrays
• Dictionary
• Graph

Libraries:
• NumPy
• OpenCV
• MediaPipe
• TensorFlow
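A minimal sketch of an LSTM classifier built with the Keras API in TensorFlow, taking sequences of the holistic keypoint vectors as input. The sequence length (30 frames), feature size (1662 keypoint values) and layer widths are illustrative assumptions, not the project's exact architecture:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQ_LEN, N_FEATURES, N_CLASSES = 30, 1662, 3  # assumed shapes, 3 example signs

model = Sequential([
    # Stacked LSTM layers read the 30-frame keypoint sequence;
    # return_sequences=True passes the full sequence to the next LSTM.
    LSTM(64, return_sequences=True, activation='relu',
         input_shape=(SEQ_LEN, N_FEATURES)),
    LSTM(128, return_sequences=True, activation='relu'),
    LSTM(64, return_sequences=False, activation='relu'),
    Dense(64, activation='relu'),
    Dense(N_CLASSES, activation='softmax'),  # one probability per sign
])
model.compile(optimizer='Adam', loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])
```

At inference time the class with the highest softmax probability is taken as the predicted sign for the current 30-frame window.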
Output Video
Pert Chart

Fig.5: Pert Chart


SWOT ANALYSIS
AREA OF APPLICATION

This system can be applied in NGOs and schools, specifically for physically impaired
people. There are also many workplaces that employ deaf and mute people, where this
system can help them communicate with others.
Reference & GIT link
• https://github.com/harshita0501/Sign-Language-Recongnition/
[1] https://www.researchgate.net/publication/262187093_Sign_language_recognition_State_of_the_art/
[2] https://www.researchgate.net/publication/326972551_American_Sign_Language_Recognition_System_An_Optimal_Approach/
[3] https://edu.authorcafe.com/academies/6813/sign-language-recognition/
Thank You
