
SIGN LANGUAGE RECOGNITION
ISHIKA JAIN (01420602019)
SATYANSH (00320602019)
INTRODUCTION
• Communication is a crucial part of every human being’s life, as it enables a person to express himself or herself.
• It takes place through various means such as speech, body language, gestures, reading, writing, or visual aids.
• However, there is a differently abled minority among us who cannot hear or speak like everybody else, which leads to a communication gap.
• Visual aids, hearing aids, or interpreters are used to communicate with them.
WHAT IS SIGN LANGUAGE?
• Sign language is a visual language. It mainly consists of three major components:
1) Fingerspelling: words are spelt out character by character.
2) Word-level sign vocabulary: the entire gesture of a word or alphabet is recognised.
3) Non-manual features: facial expressions, tongue, mouth, and body positions.
PROBLEM AND SOLUTION
• Fingerspelling is not widely used, as it is challenging to understand and difficult to use.
• Moreover, there is no universal sign language and very few people know it, which makes it an inadequate alternative for communication.
• A system for “sign language recognition”, as the project name suggests, that classifies fingerspelling can solve this problem.
• Machine learning algorithms are used to detect the hand gestures, which are then converted into text and voice form so that the other person can easily understand what the signer wants to say (a minimal sketch of this loop follows).
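
As a rough illustration of the detection loop described above, the sketch below captures webcam frames with OpenCV, crops a hand region, and passes it to a classifier. The function predict_sign, the fixed region coordinates, and the 64x64 input size are hypothetical stand-ins; the slides do not specify which machine learning model is used.

```python
# Minimal sketch of the real-time detection loop, assuming a
# pre-trained classifier is available behind predict_sign().
import cv2
import numpy as np

def predict_sign(hand_img):
    # Hypothetical placeholder: the real project would load its
    # trained model here and return the predicted letter.
    return "A"

cap = cv2.VideoCapture(0)              # open the webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:300, 100:300]      # assumed fixed hand region
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 64)).astype(np.float32) / 255.0
    letter = predict_sign(gray)        # classify the fingerspelt letter
    cv2.putText(frame, letter, (100, 90),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("Sign Language Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):   # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```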
MODULES
 Sign Up/Login Module
 Tutorial Module
 Sign Language Detection
 Text to Speech
 Feedback
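
The Text to Speech module could look like the sketch below; the slides do not name a TTS library, so the offline pyttsx3 engine is used here purely as an assumed example.

```python
# Sketch of the Text to Speech module, assuming the pyttsx3 library
# (the slides do not specify which TTS engine is used).
import pyttsx3

def speak(text):
    engine = pyttsx3.init()   # initialise the offline TTS engine
    engine.say(text)          # queue the recognised text
    engine.runAndWait()       # block until the text has been spoken

speak("HELLO")  # e.g. a word assembled from recognised letters
```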
HARDWARE REQUIREMENTS
 Operating System: Windows 10
 Processor: Intel(R) Pentium(R) CPU N3710 @ 1.60GHz
 System Type: 64-bit operating system, x64-based processor
 Installed RAM: 8 GB
 GPU: NVIDIA GeForce GTX 800 or higher
 Webcam (for real-time hand detection)
 Smartphone with camera (for using the application)
SOFTWARE REQUIREMENTS
 Front-End: Android Studio (for developing the application)
 Back-End: Python, SQL, IDE (Jupyter), NumPy (version 1.16.5), cv2 (OpenCV) (version 3.4.2)
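
A quick way to confirm that the back-end environment matches these requirements is sketched below; the printed versions should match those listed above.

```python
# Check that the back-end libraries from the requirements list are
# installed, and print their versions for comparison.
import numpy as np
import cv2

print("NumPy version:", np.__version__)    # expected: 1.16.5
print("OpenCV version:", cv2.__version__)  # expected: 3.4.2
```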
ER DIAGRAM
DFD LEVEL 0
DFD LEVEL 1
FUTURE SCOPE
 Recognition of signs that include motion.
 Recognition of the start and end of a gesture.
 Enhanced safety and security.
 Reachability to more and more customers.
 Resolving queries within 24 hours and providing 24*7 service support.
 Decreasing risks and removing bugs from time to time.
CONCLUSION
• We conclude that, using Python and OpenCV, we can create a model that detects the hand gestures of sign language and converts them into text and speech, so that deaf and mute people can communicate with other people more easily and efficiently, especially in case of an emergency.
• This application will be really helpful in the real world, where such people face a barrier in communication.
