Good Afternoon Sir, today our group is going to present our project Sign Language Recognition Software

using AI and ML.

First of all, let me introduce my team members: Subham, Priyam, Yashaswi, and myself, Rishabh
Mathur.

Now, Yashaswi will tell you about the problem statement. Over to you, Yashaswi

Thank you, Yashaswi. Now I will tell you about our objective and methodology.

The primary issue is that deaf and hard-of-hearing people cannot communicate easily with
hearing people, since most hearing people never learn sign language. Our objective is to create
a translator that detects the sign language performed by a deaf or hard-of-hearing person,
feeds the sign to a machine-learning model based on a neural network, and translates the
recognized sign into text on a display so that a hearing person can understand what the sign
conveys.

To build the sign language translator, we will follow these steps:

Data Gathering

A separate directory is created containing a collection of sub-directories, one labelled for each
letter of the alphabet. Each sub-directory holds various images of a user's hand gesture
performing the sign for the corresponding label. We then arrange the data in this proper
layout.
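As a minimal sketch of this layout (the directory name `dataset` and the helper function are our own illustrative choices, not part of the project code), one sub-directory per letter could be created like this:

```python
from pathlib import Path
import string

# Hypothetical layout: one sub-directory per letter, each of which
# would hold gesture images for that label (e.g. dataset/A/img_001.jpg).
def create_dataset_layout(root="dataset"):
    root_dir = Path(root)
    for letter in string.ascii_uppercase:
        (root_dir / letter).mkdir(parents=True, exist_ok=True)
    # Return the label names recovered from the directory layout.
    return sorted(p.name for p in root_dir.iterdir() if p.is_dir())

labels = create_dataset_layout()
print(labels[:3])  # → ['A', 'B', 'C']
```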

Creating the Model

A model is created using TensorFlow that recognizes real-time input of the user's hand signs
and displays the corresponding letter of the sign language. The model runs on the GPU and
compares the real-time image with the data already stored in the database during the data
gathering process.
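A minimal sketch of such a TensorFlow model, assuming 64x64 grayscale input and 26 letter classes (the layer sizes here are illustrative, not the project's actual architecture):

```python
import tensorflow as tf

# Small CNN classifier sketch: two conv/pool stages, then dense layers
# that map the flattened features to 26 letter probabilities.
def build_sign_model(num_classes=26, input_shape=(64, 64, 1)):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_sign_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```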

Training Model

The model is trained with sample images for each letter from the database. This step helps the
model learn how each letter is represented by a hand gesture in sign language. The model is
trained for a number of iterations in order to achieve better accuracy when performing the
actual test with the model.
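The training step can be sketched as follows; here the image arrays are random placeholders standing in for the gathered gesture images, and the tiny model is illustrative only:

```python
import numpy as np
import tensorflow as tf

# Placeholder data: in the real project these arrays would be the
# gathered gesture images and their integer letter labels (0-25).
x_train = np.random.rand(64, 64, 64, 1).astype("float32")
y_train = np.random.randint(0, 26, size=(64,))

# Deliberately tiny model so the sketch is self-contained.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(26, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train for a fixed number of epochs; more epochs (and real data)
# would be needed for usable accuracy.
history = model.fit(x_train, y_train, epochs=2, verbose=0)
```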

Testing

After the model is trained, it can be tested to observe the actual working of the system. The
user shows a hand gesture for any letter from the sign language data that was provided to the
model and used for its training. When the user makes the hand sign for a specific letter, the
model recognizes the gesture and displays the corresponding letter in the bottom-left corner of
the console. We then analyze the output, check for errors, and try to rectify them in order to
improve accuracy and efficiency.
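The recognition step can be sketched as below: a single (placeholder) frame is passed through a model and the highest-probability class index is mapped back to its letter. The untrained model here stands in for the trained one, so the predicted letter is arbitrary:

```python
import string
import numpy as np
import tensorflow as tf

labels = list(string.ascii_uppercase)

# Stand-in for the trained model from the previous steps.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(26, activation="softmax"),
])

# One placeholder frame; in the real system this would be a captured
# and preprocessed image of the user's hand gesture.
frame = np.random.rand(1, 64, 64, 1).astype("float32")
probs = model.predict(frame, verbose=0)

# Map the argmax class index back to its letter for display.
letter = labels[int(np.argmax(probs))]
print(letter)
```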

Now, Subham will tell you about the flow charts and the system architecture. Over to you, Subham

Here are the links to the websites and some books from which the references for this project
were taken.

At last, we would like to thank our mentor, Dr. G Elangovan Sir, for supporting us and giving us
this wonderful opportunity to present such an interesting project, and I also extend my
gratitude to all the team members who cooperated and gave their best in this project.

Thank You Sir
