
MARRI LAXMAN REDDY INSTITUTE OF TECHNOLOGY AND MANAGEMENT

(AUTONOMOUS)

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING (AI & ML)

REAL-TIME SIGN LANGUAGE DETECTION AND RECOGNITION USING DEEP LEARNING

Submitted by

G SHIVA KUMAR (217Y5A6605)


GINKALA VISHNU (217Y5A6606)
Under the Guidance of
Mr. CH V KRISHNA MOHAN
Assistant Professor
Abstract

 Sign language is how the hearing impaired express their feelings, contribute to a conversation, learn, and live their lives as normally as possible.
 In this work, we propose a method that uses the power of Convolutional Neural Networks to identify and recognize hand signs captured in real time through a laptop’s webcam. This work presents an image pattern recognition system that uses a neural network to identify sign language.
 The system has several stored images that show the specific symbols of this language, which are used to train a multilayer neural network.
 After training, the system is evaluated using the testing images.

Introduction

 Sign language is typically the first language and the main means of communication for
deaf individuals.
 Written communication is slow and time consuming, and it is only possible when people are sitting or stationary.
 The main aim of a hand sign recognition system is to enable interaction between a human and a CNN classifier, where the recognized signs can be used to convey meaningful information or to give inputs to a machine without touching physical knobs and dials on that machine.

Problem statement

 Most people cannot understand communication via hand signals and gestures. Hence, we need a system that can understand the precise meaning of hearing- and speech-impaired people’s symbolic gestures and convert them into an understandable language (text).
Objective:
 Develop algorithms and models that can accurately recognize and interpret various
sign language gestures.
 Enable real-time recognition of sign language gestures to facilitate immediate
communication between sign language users and non-sign language users. The
system should be capable of interpreting gestures as they are being performed,
minimizing any delays or latency.

Existing method

 In the existing systems, BSL (British Sign Language) uses a two-handed finger-spelling system, compared to the one-handed system used in ASL (American Sign Language).
 Many American Deaf signers believe that one-handed finger-spelling is faster than two-handed finger-spelling.
 Glove-based gesture recognition system:

In this system, the signer has to wear a hardware glove while the hand movements are being captured.
Proposed method

 The proposed method enables easy recognition of sign language. It uses Deep Learning for image recognition, with the images captured and preprocessed using OpenCV (cv2).
 Using this method, we recognize the gesture and predict which sign is shown, as the sketch below illustrates.
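 A minimal sketch of the real-time recognition loop described above, assuming a trained Keras model saved as "sign_model.h5", 26 letter classes, and a 64x64 grayscale input; the file name, class list, and input size are illustrative assumptions, not details taken from this work.

    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    model = load_model("sign_model.h5")              # hypothetical trained model file
    labels = [chr(ord("A") + i) for i in range(26)]  # assumed class order A-Z

    cap = cv2.VideoCapture(0)                        # laptop webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # grayscale, as in preprocessing
        gray = cv2.resize(gray, (64, 64)) / 255.0       # normalize pixels to 0-1
        pred = model.predict(gray.reshape(1, 64, 64, 1), verbose=0)
        letter = labels[int(np.argmax(pred))]
        cv2.putText(frame, letter, (30, 60), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
        cv2.imshow("Sign recognition", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):           # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()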

Block diagram

Technology Used

 CONVOLUTIONAL NEURAL NETWORK:


 Convolutional Neural Networks (CNNs) are a type of deep neural network that is efficient at extracting meaningful information from visual imagery. The role of the CNN is to reduce the images into a form that is easier to process, without losing the features that are critical for making a good prediction. A minimal example of such a network is sketched below.
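 A minimal CNN sketch in Keras; the layer sizes and the 64x64 grayscale input are illustrative assumptions, not the exact architecture used in this work.

    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),               # 64x64 grayscale input
        layers.Conv2D(32, (3, 3), activation="relu"),  # extract local features
        layers.MaxPooling2D((2, 2)),                   # downsample, keep salient features
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(26, activation="softmax"),        # one class per letter
    ])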
 TensorFlow:
 TensorFlow is an end-to-end open source platform for machine learning. It has a
comprehensive, flexible ecosystem of tools, libraries, and community resources
that lets researchers push the state-of-the-art in ML and developers easily build
and deploy ML-powered applications.

 Keras:
 Keras is a high-level library built on top of Theano or TensorFlow. It provides a scikit-learn-style API (written in Python) for building neural networks. Developers can use Keras to quickly build neural networks without worrying about the mathematical aspects of tensor algebra, numerical techniques, and optimization methods, as the short sketch below shows.
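 Illustrative only: compiling and training the CNN sketched above takes just two Keras calls; x_train and y_train here are random placeholders standing in for the real preprocessed images and one-hot labels.

    import numpy as np
    from tensorflow.keras.utils import to_categorical

    x_train = np.random.rand(100, 64, 64, 1).astype("float32")   # placeholder images
    y_train = to_categorical(np.random.randint(0, 26, 100), 26)  # placeholder labels

    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(x_train, y_train, validation_split=0.2,
                        epochs=20, batch_size=32)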

Methodology

 Data Collection:
 Data, in the form of photos of hand signs, were collected using a smartphone's camera. The photos were taken in various environments with different lighting conditions and a slightly different orientation in each photo, so that all types of real-world conditions could be simulated.
 Preprocessing:
 In image processing, normalization is a process that changes the range of pixel intensity values. Normalization is an important step that ensures each input parameter (each pixel, in this case) has a similar data distribution, which makes convergence faster while training the network. The images are also converted to grayscale, so each pixel is a single intensity value in the range 0-255 (see the sketch after this list).
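 A minimal preprocessing sketch with OpenCV, matching the grayscale conversion and normalization described above; the image path and the 64x64 target size are illustrative assumptions.

    import cv2

    img = cv2.imread("hand_sign.jpg")             # hypothetical collected photo
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # single 0-255 intensity channel
    gray = cv2.resize(gray, (64, 64))             # assumed network input size
    normalized = gray.astype("float32") / 255.0   # similar distribution per pixel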

Drawbacks of previous work

 The model proposed in this paper is trained to recognize only a few characters. The model can be refined in the future and trained to recognize all 26 letters of the sign language alphabet.
 The model’s validation accuracy is 74.08%, whereas its training accuracy is 91.90%. High training accuracy combined with low validation accuracy means the model was not able to generalize well on the given data; the sketch below shows how this gap can be read off a training run.
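 Illustrative only: reading the train/validation gap off the Keras History object returned by a model.fit call like the one sketched earlier (the 10-point threshold is an arbitrary illustrative choice).

    # assumes `history` came from model.fit(..., validation_split=0.2)
    train_acc = history.history["accuracy"][-1]
    val_acc = history.history["val_accuracy"][-1]
    print(f"training accuracy:   {train_acc:.2%}")
    print(f"validation accuracy: {val_acc:.2%}")
    if train_acc - val_acc > 0.10:  # a large gap suggests overfitting
        print("Large train/validation gap: the model may not generalize well.")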

Result

 In the recent past, research in the field of automatic sign language recognition using machine learning methods has demonstrated remarkable success and made momentous progress.
 This work investigates the impact of machine learning on the state-of-the-art literature on sign recognition and classification.
 It highlights the issues faced by present recognition systems, for which the research frontier on sign language recognition intends to provide solutions.

 Screen 1: Screen displaying the letter “A”.
 In the above screenshot, the model is trained in such a way that it displays the letter “A” when it captures a thumbs-up hand gesture.
