
GST - Bengaluru Campus

Department of Computer Science and Engineering

Smart communication framework for visually impaired people
Project Phase-I (Review-3) Presentation

Under the guidance of:
Dr. Sowmya, Assistant Professor

Presented by:
N. Revanthi (322010305050)
B. Vedasya (322010306056)
R. Premitha (322010306038)
P. Ashok Reddy (322010306019)
02/05/2024
CONTENTS
• Abstract
• Introduction
• Problem definition
• Literature Survey
• Hardware Components
• Block Diagram
• Overall Workflow Diagram
• Module Implementation
• Advantages
• Conclusion
• References
ABSTRACT

• The device is tailored to meet the unique needs of visually impaired individuals.
• We have come up with an idea called the smart glove, which converts hand
movements into text and allows the visually impaired to express themselves and to
control home appliances.
• The device senses the resistance change produced by each movement of the
hand.
• This device helps speech-impaired, deaf, and visually impaired people communicate easily.
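The gesture-to-text idea above can be sketched in a few lines. This is an illustrative simulation, not the project's actual firmware: the threshold value, finger ordering, and gesture table are assumptions for demonstration.

```python
# Illustrative sketch: mapping five flex-sensor readings to a gesture label
# by simple thresholding. Threshold and gesture names are assumed values.

BENT_THRESHOLD = 512  # assumed midpoint of a 10-bit ADC; a bent finger reads higher

def fingers_bent(readings):
    """Return a tuple of booleans, one per finger (True = bent)."""
    return tuple(r > BENT_THRESHOLD for r in readings)

# Hypothetical gesture table: (thumb, index, middle, ring, pinky) -> text
GESTURES = {
    (True, True, True, True, True): "HELLO",
    (False, True, True, False, False): "WATER",
    (True, False, False, False, False): "YES",
}

def decode_gesture(readings):
    return GESTURES.get(fingers_bent(readings), "UNKNOWN")

print(decode_gesture([700, 650, 800, 600, 720]))  # all fingers bent
print(decode_gesture([100, 200, 150, 300, 250]))  # no fingers bent
```

A real glove would calibrate a per-finger threshold rather than use one fixed value.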
INTRODUCTION

• The idea behind this is to make life more secure and comfortable for impaired people.
• Blind or paralyzed people face many issues while navigating around places, as they
must always depend on others, and they are at high risk of falls, fractures, and injuries.
• To overcome these challenges, many technologies have been implemented, but
because of their drawbacks they do not reach the people who need them. So we implemented a glove
that can help blind and paralyzed people.
• Our idea extends our earlier project work: to develop a cost-effective system
that gives a voice to voiceless people with the help of a Smart Glove.
Problem Definition

• This project aims to develop a Smart Hand Talk Glove based on a
microcontroller to provide a better solution for impaired people.
Literature Survey
1. Hand Gesture Recognition using PCA

Abstract: Interacting with the physical world using expressive body movements is much easier and more effective than just speaking. Gesture recognition has become an important field in recent years. Communication through gestures has been used since early ages, not only by physically challenged persons but nowadays for many other applications.

Outcome: The system allows medical experts to pass instructions to robotic hands remotely, adding accuracy to operations. However, the proposed model cannot work with images containing hands of colors other than skin color, does not evaluate images captured under other light colors, and works only with static gestures. In future, the system can be upgraded to support dynamic gestures, and an application for controlling medical operations can be developed using the system.

Methodology:
• Image acquisition: Images are acquired using a 13-megapixel real-aperture camera against a controlled background as well as under varying lighting conditions.
• Hand segmentation: The main and basic step in hand gesture recognition is to segment the hand from the whole image so that it can be used for recognition; the proposed skin-color segmentation is applied to segment the hand.
• Conversion from RGB to YCbCr: The proposed skin-color segmentation is applied in the YCbCr color space.

Research Gap: The database collected under ideal conditions proved to be the most efficient in terms of accuracy, giving 100% accuracy; when the lighting conditions are changed, the accuracy decreases, with the system showing 91.43% on low-brightness images. The hand images were obtained for human-computer interaction with operation-theatre robots, which must understand hand language in order to take actions.

2. Real time detection and recognition of Indian and American sign language using SIFT

Abstract: A real-time sign language detector is a significant step forward in improving communication between the deaf and the general population. We showcase the creation and implementation of a sign language recognition model based on a Convolutional Neural Network (CNN); a pre-trained SSD MobileNet V2 architecture was trained on our own dataset in order to apply transfer learning to the task.

Outcome: A robust model that consistently classifies sign language in the majority of cases. Additionally, this strategy will be extremely beneficial to sign language learners for practicing sign language.

Methodology:
• Image preprocessing: The image is first preprocessed to remove noise and enhance the features of the hand.
• Hand segmentation: The hand region in the image is then segmented from the background.
• Feature extraction: SIFT features are extracted from the hand region.
• Classification: The extracted features are classified to determine the hand gesture being performed.

Research Gap: The system recognizes selected sign language signs with an accuracy of 70-80%, without a controlled background and in low light.

3. An efficient framework for Indian sign language recognition using wavelet transform (ISLR system)

Abstract: A hand gesture recognition system is considered a more intuitive and proficient human-computer interaction tool. The range of applications includes virtual prototyping, sign language analysis, and medical training.

Outcome: An automated ISLR system based on DWT is presented in this paper. DWT sub-band energies extracted from the hand gesture images are used as features, along with the area of the segmented hand gesture region; Otsu thresholding and morphological operators are used for the segmentation procedure.

Methodology:
• Image preprocessing: The input image is preprocessed to remove noise and enhance the features of the hand.
• Hand segmentation: The hand region is segmented from the background using a skin-color segmentation algorithm.
• Feature extraction: SIFT features are extracted.
• Gesture recognition: A support vector machine (SVM) classifier is trained on a labeled dataset of ISL and ASL gestures.

Research Gap: The nearest-neighbour classifier used for classification provides 99.23% accuracy while using the cosine distance metric. The proposed framework centralizes the efforts of hand-gesture and computer-vision algorithms, which in turn yields a low-cost and effective ISLR system.
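The first paper's methodology segments skin in the YCbCr color space. A minimal sketch of the standard RGB to YCbCr conversion (ITU-R BT.601 coefficients) with a commonly used Cb/Cr skin-range test; the paper's exact thresholds are not given, so the bounds below are assumptions.

```python
# RGB -> YCbCr conversion (BT.601) and a simple skin-color test on Cb/Cr.
# The skin bounds are typical literature values, assumed for illustration.

def rgb_to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173  # assumed skin bounds

print(is_skin(220, 170, 140))  # a skin-like tone
print(is_skin(30, 80, 200))    # a blue pixel
```

Working in YCbCr separates luminance (Y) from chrominance (Cb, Cr), which is why skin detection there is less sensitive to lighting changes than in RGB.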
Technical Specification
(Hardware Components)

LM353OP Amplifier
PIC Microcontroller

Flex Sensor

HC05 Bluetooth Module 16*2LCD Display
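A flex sensor is typically wired as one leg of a voltage divider read by the microcontroller's ADC. The sketch below shows how the sensor's resistance could be recovered from a 10-bit ADC reading; the supply voltage, fixed resistor value, and divider arrangement are assumptions, not the project's measured circuit.

```python
# Recovering flex-sensor resistance from an ADC reading in an assumed
# voltage divider: VCC -- flex sensor -- (ADC node) -- R_FIXED -- GND.

VCC = 5.0         # assumed supply voltage (V)
R_FIXED = 10_000  # assumed fixed divider resistor (ohms)
ADC_MAX = 1023    # 10-bit ADC, as on a typical PIC

def flex_resistance(adc_value):
    """Estimate sensor resistance from the ADC reading.

    V_adc = VCC * R_FIXED / (R_flex + R_FIXED), solved for R_flex.
    """
    v_adc = VCC * adc_value / ADC_MAX
    return R_FIXED * (VCC - v_adc) / v_adc

# Bending the sensor raises its resistance, which lowers the ADC reading.
print(round(flex_resistance(292)))
```

The amplifier stage in the block diagram would sit between the divider and the ADC to scale this small voltage swing into the converter's full range.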


Module Implementation
BLOCK DIAGRAM
WORK FLOW

• Flex sensors are used to sense the gestures.
• Through the amplifier, the sensed gestures are converted into text.
• With the help of the microcontroller, the text is converted to speech.
• The LCD displays the text, and audio is provided through speakers.
• Bluetooth must be connected for the device.
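The workflow above can be sketched end to end as a software simulation: raw sensor readings are amplified, decoded to text, and routed to the LCD and speaker outputs. All names, the gain value, and the single hypothetical gesture are assumptions for demonstration.

```python
# Illustrative pipeline: sense -> amplify -> decode -> display/speak.

GAIN = 2.0  # assumed op-amp gain applied to each raw sensor voltage

def amplify(raw_readings):
    return [GAIN * r for r in raw_readings]

def decode(readings, threshold=2.5):
    # Hypothetical one-gesture lookup: all fingers bent means "HELP".
    return "HELP" if all(r > threshold for r in readings) else ""

def run_pipeline(raw_readings):
    text = decode(amplify(raw_readings))
    if text:
        return {"lcd": text, "speaker": "speak:" + text}
    return {"lcd": "", "speaker": ""}

print(run_pipeline([1.5, 1.6, 1.4, 1.7, 1.5]))
```

On the real hardware, the "speaker" output would be a text-to-speech call and the "lcd" output a write to the 16x2 display over its parallel or I2C interface.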
Sign Language Gestures
Modules with description

1. Text-to-Braille Conversion: This module converts digital text into Braille, which can
be read on a tactile display. Visually impaired people can use text-to-Braille
conversion software to generate Braille output from speech. This allows them to
communicate with others using speech-to-text software and to receive Braille transcripts
of conversations.
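The text-to-Braille step can be illustrated with the standard six-dot Braille encoding: Unicode Braille patterns start at U+2800, with dots 1-6 mapped to bits 0-5 of the code point offset. Only a few letters are shown here for brevity.

```python
# Sketch of text-to-Braille conversion using Unicode Braille patterns.
# Dot numbers follow the standard Braille alphabet; unsupported characters
# fall back to a space.

# Raised dot numbers (1-6) for each letter (standard Braille).
DOTS = {
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
    "h": (1, 2, 5), "l": (1, 2, 3), "o": (1, 3, 5),
}

def to_braille(text):
    cells = []
    for ch in text.lower():
        dots = DOTS.get(ch)
        if dots is None:
            cells.append(" ")
            continue
        bits = 0
        for d in dots:
            bits |= 1 << (d - 1)  # dot n sets bit n-1 of the offset
        cells.append(chr(0x2800 + bits))
    return "".join(cells)

print(to_braille("hello"))
```

A refreshable Braille display would receive the same 6-bit cell patterns, one actuator per dot, rather than Unicode characters.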
2. Object Recognition: This module helps identify objects in the environment and
provides audio or tactile descriptions. It can identify persons, objects, and surroundings
for impaired people. As object recognition algorithms continue to improve, we can
expect even more innovative and helpful applications in the future.
Advantages
• Reduces the cost of assistive technologies
• Increases the availability of assistive technologies
• Empowers visually impaired people with their own solutions
• Improves their safety, independence, access to education and
employment, and social participation
Conclusion
• The project targets one of the most prominent problems of current society. Our motivation is to lessen their
hardship and make the world a bit easier for them. This system can work effectively in both indoor and outdoor
environments.
• This project uses an Arduino and flex sensors to convert hand gestures into audible speech, and an
Android application to convert speech to text, which is displayed on the LCD screen and used to control home
appliances. Messages can also be displayed in the form of text on the LCD screen.
• This project aims to lower the dependence of blind and paralyzed people on others and to help them
increase their independence and quality of life.
References
[1] J. P. Sahoo, S. Ari, and D. K. Ghosh, "Hand gesture recognition using PCA (Principal Component Analysis)," IET Image Processing, vol. 12, no. 10, pp. 1780-1787, Oct. 2018.

[2] Abhishek Swaroop, Ravindra Prasad Kishor, and Anisha, "Hand gesture recognition for dumb people using digital image processing," vol. 8, issue 9, 2021.

[3] Madhavan Suresh Anand, Nagaraju Mohan Kumar, and Angappan Kamarasan, "An efficient framework for Indian sign language recognition using wavelet transform (ISLR system)," Department of Computer Science, Anna University, vol. 7, no. 8, June 2016.
