
GESTURESCRIPT: A SIGN LANGUAGE TRANSLATOR

A PROJECT REPORT
ON
“GESTURE SCRIPT”
Submitted

By

Sumit Shukla
(04421005840)

Aniket Chaubey
(04421005846)

TOWARDS THE PARTIAL FULFILLMENT OF


BACHELOR OF COMPUTER APPLICATIONS

INSTITUTE OF BUSINESS STUDIES AND RESEARCH


Navi Mumbai
Tilak Maharashtra Vidyapeeth, Pune
[2021-2024]


CERTIFICATE OF COMPLETION


ACKNOWLEDGEMENT

We are delighted to present the completion of our project on sign language translation, titled "GESTURE-SCRIPT." As students pursuing a Bachelor of Computer Applications (BCA), we have invested considerable time, effort, and dedication into this project to develop a comprehensive and efficient sign language translator web application.

We extend our sincere thanks to our esteemed project guide, Prof. Kamalam Sundarrajan, for their unwavering commitment and profound expertise. Their guidance, mentorship, and thoughtful insights have been instrumental in shaping the success of our project. We are truly grateful for the patience, encouragement, and constant availability that they graciously extended to us.

We would also like to extend our sincere thanks to the faculty members of the BCA department for
their constant encouragement, motivation, and for providing us with a conducive learning
environment. Their commitment to our academic growth has played a crucial role in our
development as aspiring professionals.

Furthermore, we are immensely grateful to our friends and fellow students who provided us with
invaluable suggestions, collaborative brainstorming sessions, and assistance when needed. Their
collective enthusiasm and unwavering support have fostered an environment conducive to growth
and innovation. We are truly fortunate to have such exceptional individuals within our academic
community.

We sincerely hope that our project, "GESTURE-SCRIPT", proves to be useful and beneficial to the
deaf community and contributes to the field of study for sign language. We are confident that the
skills, knowledge, and experience gained during this project will serve as a strong foundation for our
future endeavours.

Once again, we would like to express our deepest gratitude to everyone who has been a part of our
journey. Without your support, this accomplishment would not have been possible.

Yours Sincerely,

Aniket Chaubey

Sumit Shukla


PROJECT SYNOPSIS
The Sign Language Translator (SLT) is a tool for converting text or voice into sign language or signed speech. Sign language is a unique language used primarily by the deaf; it is expressed through movements of the hands and face, body postures, and other gestures. The main aim of the SLT is to create an easier communication medium between deaf persons and others: it offers a cheap and portable tool for learning sign language and converts sign language into English for the deaf person.

The Sign Language Translator Web Application for ASL holds immense potential in revolutionizing
communication for the deaf community, promoting inclusivity, and fostering understanding between
individuals who use ASL and those who do not. By providing a reliable tool for translating ASL into
written or spoken language, the application empowers deaf individuals to express themselves more
effectively and participate fully in various social and professional interactions.

The primary issue in this domain of research is the vast difference in complexity between basic Sign Supported English (SSE) or Pidgin Signed English and the more advanced British Sign Language (BSL) and American Sign Language (ASL). Many deaf persons in education primarily use signed forms of English rather than a native sign language, as they have been taught by teachers who are not fluent in sign language, and signed English does not require an additional interpreter. The syntax and grammar of SSE are the same as spoken/written English; its main feature is the use of signs in place of spoken vocabulary. Even basic translation between signing and English is a complex task, because choosing the right sign requires understanding both the meaning of a word and its context. Additional information is often conveyed through the facial expressions and body posture used throughout the signing, and this is often lost in translation, as many current systems focus only on the motion of the hands.

The project "GESTURE SCRIPT" aims to develop a real-time sign language translator: an innovative educational tool and practical translation system for converting sign language to English.

GESTURE SCRIPT will use predefined inputs in the web interface as well as computer vision to recognize sign language gestures and convert them into text in real time. With a focus on education and practical communication, this project serves as a bridge, fostering effective communication between sign language learners and those unfamiliar with sign language.


INDEX

Chapter No. CONTENT

1. INTRODUCTION
2. REQUIREMENT ANALYSIS
3. PROJECT RELATED CONCEPTS
4. GOALS
5. FEASIBILITY STUDY
6. PHASES OF DEVELOPMENT
7. DATA DICTIONARY
8. SYSTEM DESIGN AND IMPLEMENTATION
9. MODULE
10. TESTING
11. FUTURE SCOPE
12. CONCLUSION


CHAPTER 1

INTRODUCTION


People with special needs or disabilities are part of society and have the same right to interact and socialize with their surrounding environment. Persons with disabilities such as the deaf and the speech impaired often look no different from anyone else; problems arise only in communication, since the deaf cannot hear and the speech impaired cannot answer in spoken conversation. Classically, the deaf address this problem with hearing aids, while the speech impaired use sign language, communicating through hand gestures and body movements. While there are many different types of gestures, the most structured sets belong to the sign languages.

In sign language, each gesture already has an assigned meaning, and strong rules of context and grammar can be applied to make recognition tractable.

However, not all hearing people understand sign language. This raises the need for an alternative solution: a system that can overcome this communication barrier.

The project "GESTURE SCRIPT" aims to develop a real-time sign language translator: an innovative educational tool and practical translation system for converting sign language to English.

GESTURE SCRIPT will use predefined inputs in the web interface as well as computer vision to recognize sign language gestures and convert them into text in real time. With a focus on education and practical communication, this project serves as a bridge, fostering effective communication between sign language learners and those unfamiliar with sign language.

With a vision to transcend linguistic boundaries and foster seamless interactions, our
project endeavors to harness the latest advancements in computer vision, and machine
learning. By amalgamating these cutting-edge technologies, we aspire to create a
sophisticated tool that not only translates sign language into spoken language but also
preserves the nuances, emotions, and cultural richness embedded within each sign.

PROBLEM STATEMENT


Sign language serves as a vital mode of communication for the deaf and hard of hearing community,
facilitating meaningful interactions and connections. However, the barrier between sign language
and spoken language can hinder seamless communication.

The deaf and hard of hearing community often encounter challenges in effectively communicating
with individuals who do not understand sign language. This communication barrier can lead to
feelings of isolation, exclusion, and limited access to essential services and information. There is a
pressing need for a solution that facilitates seamless communication between sign language users and
those who rely on spoken language.

Effective communication is fundamental to human interaction and plays a pivotal role in fostering
inclusivity, understanding, and connection within society. By addressing the gap between sign
language and spoken language, we can empower individuals in the deaf community to express
themselves, access information, and engage more fully in various aspects of life. GESTURE
SCRIPT holds the key to breaking down communication barriers, promoting accessibility, and
enhancing the quality of life for individuals with hearing impairments.

The idea of this project is to design a system that can interpret the Indian sign language accurately so
that the less fortunate people will be able to communicate with the outside world without need of an
interpreter. Hopefully the system is able to overcome the weakness of communication with deaf and
speech impaired and easier for normal people to communicate with them.

The system promotes an interactive way to learn as well as practice in the domain of sign languages.
Moreover it allows the users to create their own models , save and retrieve it for further use. The
system allows the user to integrate a pre-trained model thus giving them complete control over the
sign-language translation system on their end. We hope this empowers the vocally-challenged and
deaf community to use this system for their benefits and to improve their communication abilities.


CHAPTER 2

REQUIREMENT ANALYSIS

Introduction


The purpose of the study is to develop a system that can assist communication between the hearing impaired and people who do not understand sign language. The application should provide a mechanism to translate sign language into text or speech. To improve the user experience, the application should be able to recognize sign language from video input and produce translated text. To ensure the successful development and implementation of the Sign Language Translator Web Application for American Sign Language (ASL), a thorough requirement analysis is essential.

This analysis focuses on identifying the key functional and non-functional requirements necessary to
meet user needs and project objectives effectively.

Functional Requirements:

• Gesture Recognition: The application must accurately interpret ASL gestures in real-time for seamless translation.
• Translation Algorithms: Advanced translation algorithms are needed to convert ASL gestures into written text or spoken language with precision.
• User Interface: An intuitive and user-friendly interface is crucial for effortless user interaction.
• Compatibility: The application should be compatible across various platforms for broad accessibility.
• Educational Resources: Interactive learning modules and resources should be included to support ASL learning and communication improvement.

Non-Functional Requirements:

• Accuracy: High translation accuracy is vital for clear and precise communication between ASL and non-ASL users.
• Performance: Real-time translations must be provided without significant delays to maintain smooth communication.
• Security: Robust security measures are necessary to safeguard user data and ensure privacy.
• Scalability: The application should be designed to scale effectively to accommodate future growth in users and features.
• Usability: User experience should be prioritized with a visually appealing and easy-to-navigate interface.

Stakeholder Requirements:

11 | P a g e
[2021-2024]
GESTURESCRIPT: A SIGN LANGUAGE TRANSLATOR

• Deaf Community: The application should be user-friendly and provide accurate ASL translations to enhance communication for the deaf community.
• Educators: Educational resources and tools should be available to support ASL learning and communication in educational settings.

By conducting a comprehensive requirement analysis that addresses functional, non-functional, and stakeholder needs, the Sign Language Translator Web Application for ASL can be developed to effectively meet user requirements and promote inclusivity and accessibility in communication.

System Requirements:

• Web browser with a stable internet connection
• Web camera


CHAPTER 3

PROJECT RELATED CONCEPTS


Sign language recognition is the machine recognition of gestures. Gesture recognition can be done in either of two ways: a device-based approach or a vision-based approach. The latter is commonly used in pattern recognition.

The frontend of the system focuses on providing a user-friendly interface. It is built using technologies such as HTML and CSS, which ensure a visually appealing and intuitive user experience. These technologies come together to create a responsive and engaging interface that allows users to interact with the system effortlessly.

On the backend, the system leverages a combination of technologies to handle complex operations.

Technologies such as PHP, XAMPP, and JavaScript play vital roles in the system's functionality. PHP, a server-side scripting language, is used for database operations and for implementing the logic that manages models and translations. XAMPP provides the necessary environment to run PHP and manage the database efficiently. JavaScript adds interactivity and dynamic features to the application, enabling validation checks and enhancing the user experience.

The database used by the system is a critical component, containing a collection of pre-existing
models based on sign language translation systems. It can also be built using database management
systems such as MySQL or PostgreSQL, ensuring efficient storage and retrieval of datasets.

The classification algorithms employed by the system play a significant role in recognizing gestures. After extracting features from images, neural network and kNN classification techniques are used to classify the signs. The nearest neighbour algorithm is a popular classification technique proposed by Fix and Hodges. The kNN classify method assigns each row of the sample data to one of the groups in the training set using the nearest-neighbour method; each element of the group vector defines the group to which the corresponding row of training data belongs. The group can be a numeric vector, a string array, or a cell array of strings.
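The kNN idea described above can be sketched in a few lines of Python. The feature vectors and sign labels below are made-up placeholders, not values from our dataset; a real system would feed in the features produced by the feature-extraction stage:

```python
import math
from collections import Counter

def knn_classify(sample, training_data, k=3):
    """Classify one feature vector by majority vote of its k nearest neighbours."""
    # Euclidean distance from the sample to every training vector.
    distances = sorted(
        (math.dist(sample, features), label) for features, label in training_data
    )
    # Majority vote among the k closest training examples.
    votes = Counter(label for _, label in distances[:k])
    return votes.most_common(1)[0][0]

# Made-up 2-D feature vectors for two hypothetical signs "A" and "B".
training = [
    ((0.10, 0.20), "A"), ((0.20, 0.10), "A"), ((0.15, 0.25), "A"),
    ((0.90, 0.80), "B"), ((0.80, 0.90), "B"), ((0.85, 0.95), "B"),
]
print(knn_classify((0.12, 0.18), training))  # → A
print(knn_classify((0.88, 0.85), training))  # → B
```

Because kNN stores the training data and defers all computation to query time, adding a new sign to the vocabulary is just a matter of appending labelled examples, which fits the system's goal of letting users build their own models.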

FRONTEND


The frontend plays a crucial role in GestureScript, as it provides the user interface through which users interact with the application. It creates an intuitive and visually appealing experience for users. By utilizing technologies such as JavaScript, HTML, and CSS, the frontend of the system ensures an engaging and user-friendly interface.

HTML, an essential technology in frontend development, is used to structure the content of the web-based components of the system. HTML provides the necessary elements and tags to define the structure of a web page. It allows developers to create headings, paragraphs, lists, tables, and other elements that are essential for presenting content in a structured manner. With HTML, developers can organize and arrange information in a logical and hierarchical manner, ensuring clarity and ease of use. HTML also provides the foundation for incorporating dynamic content and interactivity using scripting languages like JavaScript.

CSS, or Cascading Style Sheets, is employed to style the web pages of the system. It enables developers to define the visual appearance of the application, including colors, fonts, layouts, and other aesthetic aspects. CSS enhances the user experience by providing a consistent and visually pleasing design across the different pages and components of the system.

JavaScript, a versatile and dynamic programming language, serves as a cornerstone in web development, enabling interactive and engaging user experiences. From enhancing website functionality to creating responsive designs, JavaScript empowers developers to manipulate content, validate forms, animate elements, and communicate with servers asynchronously. Its flexibility and ubiquity across browsers make it a powerful tool for building dynamic web applications. JavaScript plays a pivotal role in shaping the modern web landscape, driving innovation, interactivity, and user engagement.

BACKEND


PHP, a server-side scripting language, is utilized to write scripts that enable communication between the app and the website. These scripts allow the app to interact with the website's functionalities, such as retrieving and storing data in the database, performing calculations, and handling user authentication. PHP provides a convenient and efficient way to handle server-side operations and ensure the smooth operation of the system across different platforms.

To manage the database and facilitate data storage and retrieval, the XAMPP server is used. XAMPP is a cross-platform web server solution that provides the necessary environment to run PHP scripts and manage the database effectively. It integrates Apache as the web server, MySQL as the database management system, and PHP as the server-side scripting language. With XAMPP, the app and the website can utilize the same database, ensuring data consistency and enabling seamless data sharing between the two platforms.

In conclusion, the backend of GestureScript uses PHP scripting to enable interaction between the application and the website, and the XAMPP server to manage the database. These technologies work together to handle data processing, facilitate communication, and ensure the smooth functioning of the system across different platforms. By leveraging the capabilities of these technologies, the backend enhances the efficiency and effectiveness of the sign language translation process.


CHAPTER 4

GOALS


1. Accurate Translation: The sign language translator must accurately interpret and translate sign language gestures into spoken language and vice versa to ensure clear and precise communication.

2. Real-Time Capability: The translator should have real-time functionality to facilitate seamless conversations and interactions between sign language users and individuals who rely on spoken language.

3. User-Friendly Interface: A user-friendly interface is essential for easy navigation and accessibility, catering to users of all levels of technological proficiency.

4. Multi-Language Support: The translator should support multiple sign languages and spoken languages to cater to diverse linguistic needs and promote inclusivity.

5. Gesture Recognition Accuracy: High accuracy in recognizing and interpreting sign language gestures is crucial for the translator to effectively convey the intended message without errors or misinterpretations.

6. Seamless Accessibility: The application and website provide a user-friendly and accessible platform for the vocally impaired and deaf, as well as people interested in learning sign language, enabling them to conveniently access and utilize the translation system to its full potential.

In conclusion, a comprehensive sign language translator must strive to achieve a harmonious blend of accuracy, real-time functionality, user-friendliness, and multi-language support to cater to the diverse needs of users. Through a holistic approach that encompasses these key aspects, a sign language translator can truly bridge communication gaps, empower users, and promote inclusivity in a technologically advanced and interconnected world.


CHAPTER 5
FEASIBILITY STUDY


Feasibility is an important aspect to consider when evaluating the viability and potential success of a project. In the context of the GestureScript sign language translator, discussing feasibility involves assessing the various factors that determine whether the project is practical, achievable, and beneficial. Here are the main points considered in assessing the feasibility of the application:

• Technical Feasibility: Evaluate the technical aspects of the project. Discuss whether the required technologies and tools are readily available and compatible with each other. Consider the feasibility of integrating with phpMyAdmin and MySQL, ensuring that the application can effectively interact with the database.

• Operational Feasibility: A crucial aspect we considered when evaluating the viability of our group project. Assessing operational feasibility involved analysing various factors to ensure that the application could be effectively implemented and integrated into existing operations.


TECHNICAL FEASIBILITY

Technical feasibility helps in understanding the level and kind of technology needed for a system. It includes performance issues and constraints that may affect the ability to achieve an acceptable system. It focuses on gaining an understanding of the present technical resources of the organization and their applicability to the expected needs of the proposed system. It is an evaluation of the hardware and software and how they meet the needs of the proposed system.

This aspect focuses on evaluating the technical resources and capabilities required for the project. Assess whether the team possesses the necessary skills and expertise to develop and maintain the application. Additionally, consider the compatibility of the chosen technologies, such as PHP, MySQL, and the integration with phpMyAdmin, to ensure a smooth development and deployment process.

1. Integration with PhpMyAdmin and MySQL database:

▪ We ensured smooth communication between our application and the database for accurate
data retrieval and storage.
▪ Compatibility, efficient data retrieval, storage, and robust security measures were
implemented.
2. Availability of technical resources:

▪ We assessed the availability of hardware and software resources required for efficient
application execution.
▪ Processing power, memory, and storage capacity were considered to handle the expected
workload effectively.
3. Selection of appropriate technologies:

▪ We collectively analysed and selected programming languages, frameworks, and libraries compatible with our development environment.
▪ Technologies chosen were readily accessible and supported by a strong community for easy troubleshooting and future enhancements.


OPERATIONAL FEASIBILITY

Operational feasibility is an important consideration when evaluating the practicality and effectiveness of implementing a project. In the context of the GestureScript application, operational feasibility refers to the assessment of whether the application can be successfully integrated into existing operations and processes. Here are some key points considered in the operational feasibility of the project:

1. Compatibility with Existing Systems: Evaluate the compatibility of the application with the
existing systems and processes used by the target users, such as educational institutions or
individual educators. Ensure that the application can seamlessly integrate with their
workflows and technologies without disrupting their operations.

2. User Acceptance and Training: Assess the level of user acceptance and the ease of use of the
application. Consider the learning curve for users and the availability of resources, such as
user guides or training materials, to facilitate a smooth transition. Conduct user testing and
gather feedback to refine the application's usability and address any potential challenges.

3. Technical Infrastructure Requirements: Evaluate the technical infrastructure needed to support the application. Consider factors such as server requirements, network connectivity, and device compatibility. Ensure that the required infrastructure is readily available and compatible with the application to ensure smooth operation.

4. Data Management and Security: Assess the feasibility of managing and securing the data within the application. Implement appropriate data management practices, such as regular backups and data integrity checks, to ensure the reliability and availability of the application's data. Implement robust security measures to protect user information and prevent unauthorized access.

5. Maintenance and Support: Consider the resources and support required for ongoing
maintenance and updates. Evaluate the feasibility of providing timely bug fixes, feature
enhancements, and technical support to ensure the smooth operation of the application over
time.


CHAPTER 6
PHASES OF DEVELOPMENT


METHODOLOGY
The following sections describe the various steps we went through to create our
project.

a) Database collection/Image acquisition
b) Image pre-processing
c) Feature extraction
d) Classification
e) Results and discussion

a) Database collection/Image acquisition:


The Database collection for a Sign Language Translator project involves curating a
diverse sign language corpus, annotating data for supervised learning, and implementing
quality control measures. Image acquisition methodology focuses on capturing high-
resolution images from multiple angles with consistent lighting and backgrounds to
ensure clarity and detail for accurate gesture recognition.

b) Image pre-processing:
Image pre-processing for a Sign Language Translator project involves noise reduction,
image enhancement, normalization, resizing, and feature extraction. These techniques aim
to improve image quality, enhance clarity, ensure consistency, focus on relevant
elements, and extract valuable features for accurate gesture recognition. By applying
these pre-processing steps, the project can optimize image data for the translation model.
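The resizing and normalization steps can be illustrated with a minimal pure-Python sketch. The pixel values below are invented, and the nearest-neighbour resize is a deliberately crude stand-in for what a real image library such as OpenCV would provide:

```python
def preprocess(image, size=8):
    """Nearest-neighbour resize to size x size, then scale pixels to [0, 1].

    `image` is a list of rows of grayscale values in 0-255, standing in
    for a camera frame in this illustration.
    """
    h, w = len(image), len(image[0])
    resized = [
        [image[(r * h) // size][(c * w) // size] for c in range(size)]
        for r in range(size)
    ]
    # Normalise so the classifier always sees values in a consistent range.
    return [[px / 255.0 for px in row] for row in resized]

# A tiny fake 4x4 "frame": bright blob in the top-left corner.
frame = [
    [200, 180, 10, 0],
    [190, 170,  5, 0],
    [ 10,   5,  0, 0],
    [  0,   0,  0, 0],
]
out = preprocess(frame, size=2)
print(out)  # 2x2 grid of values in [0, 1]
```

Fixing the output size and value range this way means every frame reaching the later stages has the same shape, regardless of the camera that captured it.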

c) Feature extraction:
Feature extraction involves simplifying the amount of resources required to describe a
large set of data accurately. When performing analysis of complex data one of the major
problems stems from the number of variables involved. Feature extraction is a general
term for methods of constructing combinations of the variables to get around these
problems while still describing the data with sufficient accuracy.
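As a toy example of constructing such combinations of variables, the extractor below reduces a 2-D image to per-row and per-column mean intensities. This is a deliberately simple stand-in, not the descriptors our system actually uses:

```python
def extract_features(image):
    """Reduce a 2-D grayscale image to a small feature vector.

    The vector holds the mean intensity of each row followed by the mean
    intensity of each column: 2n features instead of n*n raw pixels.
    """
    h, w = len(image), len(image[0])
    row_means = [sum(row) / w for row in image]
    col_means = [sum(image[r][c] for r in range(h)) / h for c in range(w)]
    return row_means + col_means

img = [
    [1, 0],
    [0, 1],
]
print(extract_features(img))  # → [0.5, 0.5, 0.5, 0.5]
```

Even this crude summary cuts the input size from n² pixels to 2n numbers, which is the essence of feature extraction: fewer variables that still describe the data with sufficient accuracy.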


d) Classification:

Extracted features are needed as input for classification. Classification techniques are helpful in recognizing the gestures, and a number of classification techniques are available [1]. Classification means assigning inputs to a set of classes on the basis of a training data set. In our work, kNN and neural network pattern recognition tools were used in recognizing the numeral gestures of ISL.

K-Nearest Neighbour (kNN): The k-nearest neighbour classifier classifies objects on the basis of the feature space and uses a supervised learning algorithm. The nearest neighbour algorithm is a popular classification technique proposed by Fix and Hodges. The kNN classify method assigns each row of the sample data to one of the groups in the training set using the nearest-neighbour method.

e) Results and discussion:

The data set was divided into two groups, one used for training and the other for testing. The training set consists of 70% of the aggregate data and the remaining 30% is used for testing. Accuracy is computed as the percentage of test samples classified correctly, and the error rate (prediction error) as the percentage classified incorrectly, i.e. 100% minus the accuracy.
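The 70/30 split and the accuracy and error-rate calculations can be sketched as follows. The toy samples and the rule that plays the role of the trained classifier are placeholders for illustration only:

```python
import random

def train_test_split(dataset, train_fraction=0.7, seed=42):
    """Shuffle the dataset and split it into training and test sets."""
    data = list(dataset)
    random.Random(seed).shuffle(data)  # fixed seed for reproducibility
    cut = int(len(data) * train_fraction)
    return data[:cut], data[cut:]

def accuracy(predictions, labels):
    """Percentage of test samples classified correctly."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return 100.0 * correct / len(labels)

# Toy labelled samples; the "classifier" below is just a parity rule.
samples = [(i, "even" if i % 2 == 0 else "odd") for i in range(10)]
train, test = train_test_split(samples)
print(len(train), len(test))  # → 7 3

preds = ["even" if x % 2 == 0 else "odd" for x, _ in test]
acc = accuracy(preds, [y for _, y in test])
print(acc)          # accuracy in percent
print(100.0 - acc)  # error rate (prediction error)
```

Shuffling before splitting matters: without it, a dataset recorded sign by sign would put some signs entirely in the test set, and the reported accuracy would be misleading.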


DEVELOPMENT LIFECYCLE
The Software Development Life Cycle (SDLC) is a structured approach to software development
that outlines the various stages involved in creating high-quality software. The SDLC encompasses
all the activities, processes, and methodologies that are followed from the initial conception of an
idea to the final deployment and maintenance of a software product. Here are the typical stages of the
SDLC:

1. Requirements Gathering: In this stage, the development team interacts with stakeholders,
including clients, end-users, and business analysts, to gather and document the requirements
of the software. The purpose is to understand the needs, features, and functionality expected
from the software.

2. Analysis and Planning: The gathered requirements are analysed, and the development team
identifies potential solutions and strategies to meet those requirements. The project scope,
timeline, resource allocation, and budget are determined during this phase. A detailed project
plan and design documentation are created.

3. Design: The design phase involves creating a blueprint of the software system based on the
requirements. This includes architectural design, database design, user interface design, and
any other relevant design elements. The design is usually documented in various diagrams
and models to provide a clear representation of the system's structure.

4. Implementation: In this stage, the actual coding and development of the software take place.
Programmers write the source code based on the design specifications. The coding standards
and best practices are followed, and the development team collaborates to ensure the software
is developed efficiently.

5. Testing: Once the implementation is complete, the software undergoes rigorous testing to
identify and fix any defects or issues. Different testing techniques, such as unit testing,
integration testing, system testing, and user acceptance testing, are employed to ensure the
software meets the specified requirements and functions correctly.


6. Deployment: After successful testing and quality assurance, the software is deployed to the
production environment. This involves the installation, configuration, and setup of the
software on the target system. Data migration, if required, is also performed during this stage.

7. Maintenance: Once the software is deployed, it enters the maintenance phase. This involves monitoring the software for any issues and applying bug fixes, performance optimizations, and enhancements to keep it stable and reliable.


[Img.01 - SDLC phases: Requirements → Analysis → Design → Implementation → Testing → Deployment → Maintenance]


ALGORITHMS USED
MOBILE-NET DNN:
MobileNet stands out as a series of efficient models tailored for mobile and embedded vision tasks,
distinguished by their compact design utilizing depth-wise separable convolutions to create
lightweight deep neural networks. These models introduce flexible global hyper-parameters that
strike a balance between speed and precision, offering adaptability to suit specific application
requirements.
One of the key advantages of using MobileNet for sign language translation is its speed and
efficiency. MobileNet's architecture consists of depthwise separable convolutions, which
significantly reduce the computational cost while maintaining high accuracy. This allows for
quick inference times, enabling users to receive instant translations of sign language gestures into
text or spoken language.
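
The saving from depthwise separable convolutions can be illustrated with a small back-of-the-envelope calculation. The sketch below is illustrative only; the layer sizes are hypothetical and not taken from the actual MobileNet configuration. It compares the multiply-accumulate cost of a standard convolution with its depthwise-separable counterpart.

```python
def standard_conv_cost(h, w, c_in, c_out, k):
    """Multiply-accumulate count for a standard k x k convolution."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_cost(h, w, c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1x1 pointwise convolution."""
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

# Hypothetical layer: a 112x112 feature map, 32 -> 64 channels, 3x3 kernel.
std = standard_conv_cost(112, 112, 32, 64, 3)
sep = depthwise_separable_cost(112, 112, 32, 64, 3)
print(f"standard: {std:,}  separable: {sep:,}  ratio: {std / sep:.1f}x")
```

For a 3x3 kernel and 64 output channels the separable form is roughly 8x cheaper, which is where the quick inference times come from.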

KNN:

The K-Nearest Neighbours (KNN) classifier is a simple yet effective algorithm used for pattern
recognition and classification tasks based on similarity measures. In the context of a Sign Language
Translator website, KNN can be utilized to classify sign language gestures by comparing them to a
database of labeled gestures. The KNN algorithm operates on the principle of similarity, where input
data points are classified based on the majority class of their nearest neighbours in the feature space.
In the context of sign language translation, this means that the algorithm can be trained on a dataset
of sign language gestures, assigning each gesture a unique label or meaning.

One of the key advantages of integrating a KNN classifier into a sign language translation website is
its simplicity and ease of implementation. KNN is a straightforward algorithm that does not require a
complex training process, making it ideal for applications where real-time response and user
interaction are critical.
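
The majority-vote principle described above can be sketched in a few lines. The following minimal KNN classifier is an illustrative sketch with made-up 2-D feature vectors, not the project's production code.

```python
from collections import Counter
import math

def euclidean(a, b):
    """Distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours.
    `train` is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda example: euclidean(example[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Made-up 2-D gesture features for two classes.
dataset = [
    ((0.10, 0.20), "hello"), ((0.15, 0.25), "hello"), ((0.12, 0.18), "hello"),
    ((0.90, 0.80), "thanks"), ((0.85, 0.90), "thanks"), ((0.95, 0.85), "thanks"),
]
print(knn_predict(dataset, (0.20, 0.20), k=3))  # nearest cluster wins: "hello"
```

Because there is no training step beyond storing labelled examples, adding a new gesture class is as simple as appending its examples to the dataset, which is what makes KNN attractive for interactive use.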


Sign Training Model

Img.02
Sign Training Model lets anyone build their own image classification model with no coding required.
All you need is a webcam.

The approach we're going to take is called transfer learning. This technique starts with an already
trained model and specializes it for the task at hand. This lets you train far more quickly and with
less data than if you were to train from scratch.

We bootstrap our model from a pre-trained model called MobileNet. Our system will learn to make
predictions using our own classes that were never seen by MobileNet. We do this by using the
activations produced by this pretrained model, which informally represent high-level semantic
features of the image that the model has learned.

The pretraining is so effective that we don't have to do anything fancy like train another neural
network, but instead we just use a nearest neighbors approach. What we do is feed an image through
MobileNet and find other examples in the dataset that have similar activations to this image. In
practice, this is noisy, so instead we choose the k-nearest neighbors and choose the class with the
most representation. By bootstrapping our model with MobileNet and using k-nearest neighbors, we
can train a realistic classifier in a short amount of time, with very little data, all in the browser. Doing
this fully end-to-end, from pixels to prediction, won’t require too much time and data for an
interactive application.
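
The pipeline described above, feature extraction followed by nearest-neighbour voting, can be sketched as follows. The feature extractor here is a toy stand-in for MobileNet's activations (the real system feeds each frame through the pretrained network), and the example-store-then-vote structure only loosely mirrors the in-browser classifier.

```python
def extract_features(image):
    """Toy stand-in for MobileNet activations: summarize a 2-D list of
    pixel intensities by its per-row means, giving a short embedding."""
    return [sum(row) / len(row) for row in image]

class NearestNeighbourClassifier:
    """Store (embedding, label) examples, then predict by majority vote
    among the k closest embeddings."""
    def __init__(self, k=3):
        self.k = k
        self.examples = []

    def add_example(self, image, label):
        self.examples.append((extract_features(image), label))

    def predict(self, image):
        query = extract_features(image)
        def squared_distance(example):
            return sum((a - b) ** 2 for a, b in zip(example[0], query))
        nearest = sorted(self.examples, key=squared_distance)[: self.k]
        labels = [label for _, label in nearest]
        return max(set(labels), key=labels.count)  # majority vote

clf = NearestNeighbourClassifier(k=3)
for img in ([[0, 0], [0, 1]], [[1, 0], [0, 0]], [[0, 1], [1, 0]]):
    clf.add_example(img, "A")
for img in ([[9, 9], [8, 9]], [[9, 8], [9, 9]], [[8, 9], [9, 8]]):
    clf.add_example(img, "B")
print(clf.predict([[0, 0], [0, 0]]))  # "A"
```

With a handful of examples per class added via add_example, predict returns the majority label among the closest stored embeddings, which is exactly the noisy-activation voting described above.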


Alphabets and Phrases Model

Img.03

The Practice Sign Model lets you practice the sign language skills you learned on our tutorial
page. The above model is mainly divided into three parts:

1) Dataset

2) MobileNet Feature Extraction

3) KNN Classifier

The approach we're going to take is called transfer learning. This technique starts with an already
trained model and specializes it for the task at hand. This lets you train far more quickly and with
less data than if you were to train from scratch. We bootstrap our model from a pre-trained model
called MobileNet.

Our system will learn to make predictions using our own classes that were never seen by MobileNet.
We do this by using the feature extraction technique. Feature extraction is a process of
dimensionality reduction by which an initial set of raw data is reduced to more manageable groups
for processing. A characteristic of these large data sets is a large number of variables that require a
lot of computing resources to process. The pretraining is so effective that we don't have to do
anything fancy like train another neural network; instead, we just use a nearest neighbours
approach.


CHAPTER 7
DATA DICTIONARY


User table

Sr.No  Entity    Datatype      Constraints
1      Id        int(11)       PRIMARY KEY, NOT NULL, AUTO_INCREMENT
2      Name      varchar(255)  NOT NULL
3      Username  varchar(255)  NOT NULL
4      Email     varchar(255)  NOT NULL
5      Age       varchar(10)   NOT NULL
6      Password  varchar(255)  NOT NULL
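
The user table above can be expressed as DDL. The snippet below is an illustrative sketch using Python's built-in sqlite3; the report's actual database engine may differ, and the MySQL-style Int(11) with AI is mapped here to SQLite's INTEGER PRIMARY KEY AUTOINCREMENT.

```python
import sqlite3

# In-memory database purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE user (
        id       INTEGER PRIMARY KEY AUTOINCREMENT,  -- Int(11), AI above
        name     VARCHAR(255) NOT NULL,
        username VARCHAR(255) NOT NULL,
        email    VARCHAR(255) NOT NULL,
        age      VARCHAR(10)  NOT NULL,
        password VARCHAR(255) NOT NULL
    )
""")
conn.execute(
    "INSERT INTO user (name, username, email, age, password) "
    "VALUES (?, ?, ?, ?, ?)",
    ("Asha", "asha01", "asha@example.com", "21", "hashed-password"),
)
row = conn.execute("SELECT id, username FROM user").fetchone()
print(row)  # (1, 'asha01')
```

The sample row and credentials are hypothetical; in practice the password column would hold a hash, never plain text.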


CHAPTER 8

SYSTEM DESIGN AND IMPLEMENTATION


FLOWCHART

A flowchart is a graphical representation of a process or workflow. It uses symbols and arrows to
illustrate the sequence of steps or actions involved in completing a task or achieving a specific goal.
Flowcharts are commonly used in various fields, including software development, project
management, and business processes.

The purpose of a flowchart in a project can vary depending on the context, but generally, it serves the
following key purposes:

1. Visualize the Process: Flowcharts provide a clear visual representation of the project's workflow,
making it easier to understand and communicate. By presenting the steps in a logical sequence, it
helps project team members and stakeholders gain a comprehensive overview of the entire process.

2. Identify Dependencies and Relationships: Flowcharts help in identifying dependencies and
relationships between different steps or actions in a project. By mapping out the connections, it
becomes easier to analyze how one step affects another and identify potential bottlenecks or areas for
improvement.

3. Analyze and Improve Efficiency: Flowcharts enable project managers and team members to
analyze the efficiency of a process. By visually representing each step, it becomes easier to identify
redundant or unnecessary actions, areas of delay, or opportunities for streamlining the workflow.
This analysis can lead to process optimization and improved productivity.

Img.04


Fig.01


FDD (Functional Decomposition Diagram)

A Functional Decomposition Diagram (FDD) is a graphical representation that breaks down a
complex system or project into smaller, more manageable functions or components. It is a top-down
approach to system analysis and design, where the main function is decomposed into sub-functions,
which are further decomposed into more detailed functions.
The purpose of a Functional Decomposition Diagram is to:

1. Understand System Structure: FDD helps in understanding the structure and organization
of a complex system or project. By breaking it down into smaller functions, it provides a clear
hierarchy of functions and their relationships.

2. Identify Functional Components: FDD helps identify the major functional components or
modules within a system. It provides a systematic way to identify and define the various functions or
tasks that need to be performed to achieve the system's objectives.

3. Analyze Dependencies and Interactions: FDD allows the analysis of dependencies and
interactions between functions. By visually representing the relationships between functions, it
becomes easier to identify how changes or modifications in one function may impact other functions.

4. Assign Responsibilities: FDD helps in assigning responsibilities to different individuals or
teams. Each function in the diagram can be associated with specific roles or teams responsible for its
implementation or maintenance. This clarity ensures that all functions are accounted for and properly
assigned.

5. Aid in System Design and Development: FDD provides a foundation for system design and
development. Once the functions are identified and decomposed, they can be further analyzed and
designed to determine how they will be implemented and integrated within the system.

6. Support Project Planning and Management: FDD is useful in project planning and
management. It helps project managers understand the scope of work, estimate resources, and
allocate tasks to team members. It also provides a basis for defining project milestones and tracking
progress.


Overall, the Functional Decomposition Diagram helps in understanding, analyzing, and designing
complex systems or projects. It provides a structured and hierarchical view of functions, facilitating
effective system analysis, design, and project management.


Fig.02


DFD (Dataflow Diagram)

A Data Flow Diagram (DFD) is a graphical representation of how data flows within a system or
process. It illustrates the movement of data between various components and entities within a
system, highlighting the inputs, outputs, and transformations that occur.

A DFD consists of four main components:

1. Process: Processes represent the activities or functions that transform input data into output
data. They are depicted as circles or rectangles in a DFD. Processes can range from simple
calculations or data manipulations to more complex operations. Each process in the diagram is
labelled with a unique identifier and a clear description of the function it performs.

2. Data Flow: Data flows represent the movement of data between processes, entities, or
storage locations within the system. They are depicted as arrows in a DFD, indicating the direction
of data flow. Data flows carry information in the form of inputs, outputs, or intermediate data. Each
data flow is labelled to describe the type of data being transferred.

3. Data Store: Data stores represent the repositories or storage locations where data is persisted
within the system. They can be physical storage, such as databases or files, or conceptual storage,
such as temporary memory or buffers. Data stores are depicted as rectangles with two lines parallel
to the sides.

4. External Entity: External entities represent external sources or destinations of data that
interact with the system but are outside its boundaries. They can be users, other systems, devices, or
organizations. External entities are depicted as rectangles with lines extending beyond the system
boundary.


(Level 0)

This is the highest-level DFD that provides an overview of the entire system. It depicts the main
processes or functions of the system and the interactions between them.

Fig.03


(Level 1)
The Level 1 DFD expands on the processes or functions identified in the Level 0 DFD. It breaks
down the processes into more detailed sub-processes and shows how data flows between them.

Fig.04


(Level 2)
The Level 2 DFD further decomposes the Level 1 sub-processes into even more detailed processes. It
provides a deeper understanding of how data moves within each sub-process and illustrates the data
transformations that occur.

Fig.05


ER Diagram

An Entity-Relationship Diagram (ERD) is a visual representation of the relationships between
entities in a database. It is a modelling technique used in database design to depict the structure and
organization of data and the associations between different entities.

In an ERD, entities are represented as rectangles, and relationships between entities are represented
by lines connecting them. The ERD consists of three main components:
1. Entities: Entities represent real-world objects, concepts, or things that are important to the
database. Each entity is depicted as a rectangle, and its name is written inside the rectangle. Entities
can have attributes that describe their characteristics or properties.

2. Relationships: Relationships represent the associations or connections between entities.
They illustrate how entities are related to each other. Relationships are depicted as lines connecting
the entities involved in the relationship. The lines may have symbols or annotations to indicate the
type of relationship, such as one-to-one, one-to-many, or many-to-many.

3. Attributes: Attributes are the properties or characteristics of entities. They provide additional
details about the entities. Attributes are depicted as ovals or ellipses connected to the entities. Each
attribute has a name that describes the data it represents.


Fig.06


CLASS Diagram

A Class Diagram is a type of diagram used in software engineering and object-oriented modelling to
illustrate the structure and relationships of classes within a system. It provides a static view of the
system by representing the classes, their attributes, methods, and the associations between them.
The purpose of a Class Diagram is to:

1. Visualize Class Structure: Class Diagrams provide a visual representation of the classes in a
system and their structure. Each class is depicted as a rectangle, with its name at the top, followed by
its attributes in the middle section, and its methods in the bottom section. This visualization helps
stakeholders understand the organization and composition of classes in the system.

2. Illustrate Relationships: Class Diagrams show the associations and relationships between
classes. Associations represent connections between instances of different classes, while
relationships like inheritance (generalization), aggregation, and composition indicate how classes are
related to and interact with each other. These relationships help define the behaviour and
dependencies within the system.

3. Define Attributes and Methods: Class Diagrams depict the attributes and methods of each
class. Attributes represent the properties or characteristics of an object, while methods represent the
behaviours or operations that the object can perform. This information helps in designing and
implementing the functionality of the system.


Fig.07


Activity Diagram

An Activity Diagram is a type of UML (Unified Modelling Language) diagram used to visualize the
flow of activities, actions, and decisions within a system or process. It represents the dynamic
behaviour of a system, illustrating the sequence of activities and the relationships between them.
The purposes of an Activity Diagram are as follows:

1. Visualize Process Flow: Activity Diagrams provide a graphical representation of the flow of
activities within a system or process. They show the sequence of actions, decisions, and parallel or
concurrent flows, helping stakeholders understand how the system functions and the order in which
activities are executed.

2. Model Business Processes: Activity Diagrams are commonly used to model and analyze
business processes. They help in understanding the steps involved, the decision points, and the
interactions between different roles or participants. By visualizing the process flow, stakeholders can
identify areas of improvement, inefficiencies, or bottlenecks.

3. Specify Use Case Scenarios: Activity Diagrams can depict the steps and actions involved in
specific use case scenarios. They illustrate how a user or actor interacts with the system to
accomplish a particular goal. Activity Diagrams provide a clear visualization of the user's actions,
system responses, and decision points, aiding in use case analysis and design.

4. Support System Design and Implementation: Activity Diagrams assist in system design
and implementation by providing a blueprint for developers. They help in translating the high-level
process flow into executable code by detailing the activities, their dependencies, and the conditions
for branching or looping. Developers can refer to the Activity Diagram to understand the intended
behaviour and implement the corresponding logic.

5. Identify Exception Handling and Error Paths: Activity Diagrams can depict exception
handling and error paths within a system. They illustrate how errors or exceptional situations are
identified, handled, and communicated. By visualizing these paths, stakeholders can identify
potential risks and design appropriate error handling mechanisms.


6. Facilitate Communication and Collaboration: Activity Diagrams serve as a visual
communication tool that facilitates collaboration and understanding among stakeholders. They
provide a common language and a clear representation of the system's behaviour, enabling effective
discussions, feedback, and decision-making.


Fig.08


Fig.09


STATE CHART DIAGRAM

State chart diagrams, also known as state machine diagrams, are a popular visual modeling tool used
in software engineering to represent the behavior of complex systems. A state chart diagram
describes the various states that a system can be in and the events or conditions that cause transitions
between states.

State diagrams can be used to model the dynamic behavior of any system that has a finite number of
states and state transitions. The main building blocks of a state chart diagram are the following:

1. States: States represent the conditions or situations that a system can be in at any given
time. They are represented by circles or ovals in a state diagram. Each state should be
labeled with a name or description that makes it clear what the state represents.

2. Transitions: Transitions represent the changes from one state to another in response to an
input. They are represented by arrows or lines in a state diagram. Each transition should
be labeled with the input or event that triggers the transition.

3. Inputs: Inputs represent the events or conditions that trigger a transition from one state to
another. They can be represented by labels on the arrows or lines in a state diagram.

4. Outputs: Outputs represent the actions or results that occur when a transition is made.
They are not always included in a state diagram, but can be represented by labels on the
arrows or lines, or in the states themselves.

5. Initial State: The initial state is the state in which the system starts before any inputs are
received. It is represented by an arrow pointing to the initial state circle or oval.

6. Final State: The final state is the state that the system transitions to when it has completed
its task. It is represented by a double circle or oval.
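
The building blocks listed above map directly onto a small finite-state machine. The sketch below uses hypothetical states for the translator's capture flow, not the system's actual implementation; it shows states, an initial state, input-triggered transitions, and a final state.

```python
class StateMachine:
    """Minimal finite-state machine: an initial state, a set of final
    states, and labeled transitions triggered by inputs."""
    def __init__(self, initial, finals, transitions):
        self.state = initial
        self.finals = set(finals)
        self.transitions = transitions  # {(state, input): next_state}

    def feed(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"no transition from {self.state!r} on {event!r}")
        self.state = self.transitions[key]
        return self.state

    def done(self):
        return self.state in self.finals

# Hypothetical states for the translator's capture flow.
fsm = StateMachine(
    initial="idle",
    finals=["translated"],
    transitions={
        ("idle", "grant_camera"): "capturing",
        ("capturing", "gesture_detected"): "classifying",
        ("classifying", "prediction_ready"): "translated",
    },
)
fsm.feed("grant_camera")
fsm.feed("gesture_detected")
fsm.feed("prediction_ready")
print(fsm.state, fsm.done())  # translated True
```

Each dictionary key is a (state, input) pair, so an undefined transition raises an error rather than silently changing state, mirroring how a state chart makes invalid paths explicit.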


Fig.10


Sequence Diagram
A Sequence Diagram is a type of UML (Unified Modelling Language) diagram that represents the
interactions and sequence of messages between objects or components in a system. It illustrates the
dynamic behaviour of a system over time, showing the order in which objects collaborate and the
messages they exchange.

The purpose of a Sequence Diagram is to:

1. Visualize Object Interactions: Sequence Diagrams provide a visual representation of how
objects or components interact with each other during the execution of a particular scenario or use
case. They show the sequence of messages exchanged between objects, depicting the order of
method calls and responses.

2. Model System Behaviour: Sequence Diagrams help in modelling and analyzing the
behaviour of a system from a dynamic perspective. They illustrate how objects collaborate to achieve
a specific functionality or to fulfil a use case scenario. By visualizing the sequence of interactions,
stakeholders can gain a better understanding of how the system behaves and how objects work
together.

3. Represent Time and Ordering: Sequence Diagrams explicitly represent the temporal
ordering of messages and method calls. They show the vertical ordering of objects along a timeline,
indicating when messages are sent and received. This helps stakeholders understand the flow of
control and the chronological order in which actions occur within the system.

4. Identify Collaboration and Dependencies: Sequence Diagrams highlight the collaboration
and dependencies between objects or components. They depict which objects participate in the
interaction and how they depend on each other to accomplish a task. This aids in understanding the
relationships and dependencies within the system.

5. Support System Design and Implementation: Sequence Diagrams assist in system design
and implementation by providing a detailed view of the interactions and message flows between
objects. They help developers understand the expected behaviour of the system, identify potential
design issues, and implement the necessary logic to support the sequence of messages.

6. Facilitate Communication and Collaboration: Sequence Diagrams serve as a


communication tool that promotes collaboration and understanding among stakeholders. They
provide a visual representation of the system's behaviour, allowing for effective discussions,
feedback, and clarification among developers, designers, and other project stakeholders.


Fig.11


Use case Diagram

A Use Case Diagram is a type of UML (Unified Modelling Language) diagram used to visualize the
functional requirements and interactions between actors (users, external systems, or other entities)
and the system being developed. It represents the high-level functionality of a system and the use
cases that define the desired behaviour from the user's perspective.
The purpose of a Use Case Diagram is to:

1. Identify System Functionality: Use Case Diagrams help in identifying and representing the
various functionalities and features that a system should provide. They capture the interactions
between actors (users) and the system, highlighting the specific actions or tasks that the system
needs to support.

2. Define Use Cases: Use Cases are represented as ovals or ellipses in a Use Case Diagram.
Each Use Case represents a specific functionality or task that a user can perform with the system.
Use Cases describe the interactions and the desired outcome, helping stakeholders understand how
the system will be used.

3. Visualize User-System Interactions: Use Case Diagrams illustrate the interactions between
actors and the system. Actors represent the different roles or entities that interact with the system.
They can be human users, external systems, or any other entities that communicate with the system.
Use Cases depict the desired actions or tasks performed by actors within the system.

4. Identify System Boundaries: Use Case Diagrams help in defining the scope and boundaries of
the system. They provide a clear visualization of the external entities (actors) that interact with the
system and the specific functionalities that the system supports. This aids in defining the system's
context and understanding its relationships with external entities.

5. Aid in Requirements Analysis and Validation: Use Case Diagrams serve as a basis for
requirements analysis and validation. They capture the functional requirements and desired
behaviour from a user's perspective. Stakeholders can review and validate the Use Case Diagram to
ensure that all the necessary functionalities are accounted for and to identify any missing or
conflicting requirements.

6. Support System Design and Implementation: Use Case Diagrams provide a foundation for
system design and implementation. They guide the design of system components, the definition of
user interfaces, and the identification of system interfaces with external entities. Use Cases also help
in designing test cases to verify the system's functionality.


Fig.12


CHAPTER 9

MODULES


Img.05

The above snippet shows the home page of our project. On this page, our dataset is integrated into
the system. You can select phrases from the “Most Commonly Used Phrases” section, alphabets
from the “Alphabets” section, or any phrase from the “Place, Time and Objects” section, and learn
through high-quality videos and images.

Img.06


Img.07

This is the loading screen that a user will see while switching pages. Permission to use the camera
on the hardware (i.e., laptop/desktop) is requested here. After the user grants camera access, the
page loads and the camera initializes.


This is our training model, where the user can create their own dataset to be used in the system.

Img.08

Img.09


Img.10

Type the label text for that particular class and click the “Add” button. Then, using the webcam as
input, click the “Add New Images” button and add 10-40 images per class. You will be able to
observe the real-time predictions below the video. Click the “Speak” button to convert the predicted
text into speech using your preferred voice.


Img.11

Click on the “Start Practicing” button, place your hand in front of the camera against a white
background, and start practicing your signs (alphabets). You will be able to observe the prediction
below the button.


Img.12

Img.13


Img.14

Img.15


Img.16

Img.17


CHAPTER 10
TESTING


TESTING
Software testing is a crucial process in software development that involves evaluating the
functionality, quality, and performance of a software application. The purpose of testing is to identify
defects, errors, or any other issues within the software to ensure that it meets the desired
requirements and works as expected.

Software testing is typically performed at different levels of the development process, including unit
testing, integration testing, system testing, and user acceptance testing. Each level of testing serves a
specific purpose and targets different aspects of the application.

1. Unit Testing: Unit testing focuses on testing individual units or components of the software in
isolation. It verifies the functionality of each unit and ensures that they work as intended. It helps
detect bugs and provides a foundation for further testing.

2. Integration Testing: Integration testing verifies the interaction between different modules or
components of the software. It ensures that the integrated components work together correctly,
revealing any issues that may arise due to the combination of these units.

3. User Acceptance Testing (UAT): UAT involves testing the software from an end-user
perspective. It aims to determine if the software meets the user's requirements and expectations. It is
typically performed by actual users to ensure that the software is user-friendly and meets business
needs.

4. System Testing: System testing evaluates the behaviour of a complete and integrated software
system. It tests the system, checking its compliance with functional and non-functional requirements.
System testing helps identify any discrepancies or issues that may arise in the overall system
operation.

5. Validation Testing: Validation testing ensures that the software meets the specified business
requirements and operates in its intended environment. It validates that the software meets the user's
needs and is suitable for its intended purpose. Validation testing is essential for confirming that the
software is ready for deployment.

Onto the next page, we delve into an in-depth exploration of the testing procedures executed for both
our application and website.


1. Unit Testing:

• Perform unit testing to validate the correctness and functionality of individual components
within the web application.

• Write unit tests to verify the behaviour of the classes, methods, and functions responsible for
the working of the models.

• Test scenarios such as sending and receiving data from the server, handling error conditions,
and validating server responses.
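
As an illustration of the unit-testing pattern described above (our actual tests target the web application's own components), the helper below is a hypothetical example: it validates a server prediction response and is exercised for both the happy path and error conditions.

```python
import unittest

def parse_prediction(response):
    """Hypothetical helper: validate a prediction response of the form
    {"label": str, "confidence": float} and return (label, confidence)."""
    if "label" not in response or "confidence" not in response:
        raise ValueError("malformed prediction response")
    confidence = float(response["confidence"])
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence out of range")
    return response["label"], confidence

class TestParsePrediction(unittest.TestCase):
    def test_valid_response(self):
        self.assertEqual(parse_prediction({"label": "A", "confidence": 0.93}),
                         ("A", 0.93))

    def test_missing_field_raises(self):
        with self.assertRaises(ValueError):
            parse_prediction({"label": "A"})

    def test_out_of_range_confidence_raises(self):
        with self.assertRaises(ValueError):
            parse_prediction({"label": "A", "confidence": 1.5})

# Run the suite explicitly so the example works both as a script and imported.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestParsePrediction)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test isolates one behaviour, mirroring the unit-testing goal of verifying individual components before integration.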

2. Integration Testing:

• Conduct integration testing to ensure proper interaction and compatibility between different
components within the web application.

• Test the integration between activities, fragments, services, and other components to ensure
smooth communication and functioning.

• Validate data passing between components, proper handling of callbacks and events, and
correct utilization and cooperation of the project's hardware and software components.

3. User Acceptance Testing:

• Involve real or representative users to perform user acceptance testing.

• Validate that the web application meets the user requirements, provides a satisfactory user
experience, and aligns with user expectations.

4. System Testing:

• Conduct system testing to validate the overall functionality and behaviour of the web
application.

• Test various scenarios, configurations, and user interactions to identify defects or
inconsistencies.

• Test different screen resolutions and orientations, network connectivity scenarios, user
authentication, and proper handling of data inputs and outputs.


5. Validation Testing:

• Perform validation testing to ensure that the application meets the specified requirements.

• Verify if all the required features are implemented correctly and if the application follows
defined business rules.


Test cases (Sr.No, Test Case, Test Steps, Expected, Actual, Pass/Fail):

A01: Activate Camera Functionality
  Steps:    Switch to a module that uses the camera function; grant permission for camera access when asked.
  Expected: A pop-up window asking for permission to give access to the camera system.
  Actual:   As Expected (PASS)

A02: Detect Hand Motion
  Steps:    Show your hand in front of the camera so that it is visible on the system screen (stability increases against a white background).
  Expected: The motion-detection function should successfully detect motion in the video.
  Actual:   As Expected (PASS)

A03: Pre-Process Image Functionality
  Steps:    Head to the tutorials section; click on the letters, words, or phrases available on the screen.
  Expected: The mini-screen on the tutorials page should show valid output for the selected item.
  Actual:   As Expected (PASS)

A04: Render Text/Gesture on Screen
  Steps:    Initialize the functions responsible for displaying text/gestures on screen.
  Expected: Text/gestures should be appropriately displayed on the screen.
  Actual:   As Expected (PASS)

A05: Store Displayed Data
  Steps:    Head over to the training section; create and store a new dataset.
  Expected: The newly created dataset should be parsed, converted into a JSON file, and stored at the user-specified location.
  Actual:   As Expected (PASS)


Sr.No: A06
Test Case: Display Stored Data
Test Steps: In the training section, click on "load model". Load the previously stored dataset into the system.
Expected: The stored dataset should be loaded into the system, and gesture prediction as per the model should happen.
Actual: As expected
Pass/Fail: PASS

Sr.No: A07
Test Case: Checking Search Function
Test Steps: Head over to the tutorials page. Search for a letter/phrase/word in the search bar.
Expected: The result for the searched item should be displayed from the dataset; if no such item is present, a message describing "item not found" should be shown.
Actual: As expected
Pass/Fail: PASS
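The search behaviour exercised by test case A07 can be sketched as follows (the dataset contents and function names here are illustrative, not the project's actual code):

```javascript
// Sketch of the tutorial search behaviour: return the matching entry
// from the dataset, or an "item not found" message when no match exists.
const tutorialDataset = {
  A: "letter-A.gesture",
  hello: "hello.gesture",
  "thank you": "thank-you.gesture",
};

function searchTutorial(query) {
  const key = query.trim().toLowerCase();
  // Case-insensitive lookup against the stored dataset keys.
  const match = Object.keys(tutorialDataset).find(
    (k) => k.toLowerCase() === key
  );
  return match
    ? { found: true, item: tutorialDataset[match] }
    : { found: false, message: `Item "${query}" not found` };
}
```

The key point the test verifies is the second branch: a miss must produce a descriptive message rather than an empty result or an error.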


Website Testing

1.Functionality Testing:

• Test the website's functionalities, including adding, downloading, and reloading a newly trained dataset model.

• Verify that the newly created model downloads properly and that its format is preserved when the downloaded model is integrated back into the system.

• Validate the algorithm for automatic conversion from images and videos to text, and assess its accuracy with reference to the trained model.
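The save/reload cycle being tested can be sketched as follows; the model structure (a label-to-feature-vectors map) is an assumption for illustration, not the project's actual serialization format:

```javascript
// Minimal sketch of a model save/reload cycle: a trained dataset is
// serialized to JSON for download, then parsed and shape-checked on reload.
function serializeModel(model) {
  return JSON.stringify(model);
}

function reloadModel(json) {
  const model = JSON.parse(json);
  // Basic shape check: every label must map to an array of feature vectors.
  for (const [label, vectors] of Object.entries(model)) {
    if (!Array.isArray(vectors)) {
      throw new Error(`Malformed model entry for label "${label}"`);
    }
  }
  return model;
}
```

The shape check on reload is what "proper form is maintained" amounts to in practice: a downloaded file that parses but has the wrong structure should be rejected before it reaches the classifier.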

2. Usability Testing:

• Evaluate the website's user interface (UI) for smooth transitions between pages.

• Test the user-friendliness and intuitiveness of the website's UI.

• Validate that error messages are displayed correctly, forms are validated, and user interactions are properly handled.

3. Compatibility Testing:

• Test the website across different web browsers (e.g., Chrome, Firefox, Safari, Brave) to ensure compatibility.

• Verify that the website functions correctly on various devices and screen sizes, including desktops and laptops.

4. Performance Testing:

• Test the website's performance by simulating high user loads and concurrent access.

• Verify that the website responds quickly, handles requests efficiently, and scales well under increased load.
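A simple load simulation of the kind described above might look like the following sketch, where `handleRequest` is a stand-in for the real server endpoint rather than the project's actual handler:

```javascript
// Sketch of a load test: fire many concurrent requests at a handler
// and confirm every one completes within a measured elapsed time.
async function handleRequest(id) {
  // Simulated processing delay standing in for real server work.
  await new Promise((resolve) => setTimeout(resolve, 5));
  return { id, status: 200 };
}

async function simulateLoad(concurrentUsers) {
  const start = Date.now();
  const requests = [];
  for (let i = 0; i < concurrentUsers; i++) {
    requests.push(handleRequest(i));
  }
  // All requests run concurrently; Promise.all waits for every response.
  const responses = await Promise.all(requests);
  return {
    elapsedMs: Date.now() - start,
    succeeded: responses.filter((r) => r.status === 200).length,
  };
}
```

Against a live deployment, a dedicated tool (such as a load-testing framework) would replace this sketch, but the pass criterion is the same: all concurrent requests succeed and the elapsed time stays within an acceptable bound.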


Sr.No: A08
Test Case: Load Application
Test Steps: Open the web application in a local browser.
Expected: The application should load without any errors.
Actual: As expected
Pass/Fail: PASS

Sr.No: A09
Test Case: Navigate to Different Sections
Test Steps: Click on the different sections of the application (e.g., Training, About, etc.).
Expected: The application should navigate through the sections without any errors.
Actual: As expected
Pass/Fail: PASS

Sr.No: A010
Test Case: Train Your Own Model
Test Steps: In the training section, create a class for the dataset. Capture images for the desired sign under the named class.
Expected: The application should successfully create a newly trained data model.
Actual: As expected
Pass/Fail: PASS

Sr.No: A011
Test Case: Checking Pretrained Model Functionality
Test Steps: In the "Test your own skills" section, initialize the system and check for matching gestures in the pretrained model.
Expected: The application should assess gesture accuracy based on the pretrained model.
Actual: As expected
Pass/Fail: PASS

Sr.No: A012
Test Case: Checking Link Functionality
Test Steps: Click on all available links in the application.
Expected: All links should navigate to the correct page and section.
Actual: As expected
Pass/Fail: PASS

Sr.No: A013
Test Case: System Performance
Test Steps: Use the system normally and observe the system's performance.
Expected: The system should respond to user input with minimal lag and errors.
Actual: As expected
Pass/Fail: PASS

Sr.No: A014
Test Case: Browser Compatibility
Test Steps: Initialize the application on different web browsers.
Expected: The application functions correctly in different browsers.
Actual: As expected
Pass/Fail: PASS


CHAPTER 11
FUTURE SCOPE


SCOPE

The future prospects for a sign language translating web application are extensive and promising,
offering numerous avenues for growth and improvement. One significant area for advancement
involves integrating artificial intelligence and machine learning algorithms to enhance the precision
and efficiency of translations. By harnessing these technologies, the application can become more
proficient at recognizing and interpreting intricate sign language gestures, leading to more accurate
and fluent translations.

To meet the increasing demand for accessible communication, future development could involve
integrating artificial intelligence for better translation accuracy, expanding language support to
include more sign languages, enhancing user experience through interactive features, and
collaborating with organizations to promote inclusivity and accessibility in various sectors.

Moreover, expanding the application to include additional sign languages from various regions worldwide can broaden its impact and user base, catering to a more diverse audience. This expansion would necessitate collaboration with sign language experts and communities to ensure the faithful representation of cultural and linguistic subtleties.

Another potential direction for future development is incorporating interactive functionalities like
real-time video translation and educational tools to support both sign language users and learners.

Additionally, exploring mobile app development could increase on-the-go accessibility, enhancing the tool's usability and outreach.

In essence, the future of sign language translating web applications presents significant opportunities
for advancing communication accessibility and inclusivity, contributing to a more interconnected and
empathetic society.


LIMITATIONS

The development of a sign language translating web application presents numerous challenges and
limitations that impact its functionality and effectiveness in today's world. One significant limitation
is the need for an intermediate written representation, known as gloss, which serves as the basis for
processing sign languages. This requirement poses difficulties for deaf individuals who may struggle
with written representations of signs, hindering the seamless translation process. Moreover, the lack
of comprehensive datasets for languages with limited resources presents a significant hurdle in
developing accurate translators for these languages, limiting the application's reach and inclusivity.

In the context of learning capabilities, the complexity of sign language syntax and the challenges
associated with decoupling pose classification from pose estimation present obstacles in accurately
recognizing and translating sign language gestures.

This complexity can impede the application's ability to provide real-time and accurate translations,
affecting the quality of communication between hearing-impaired and non-hearing-impaired
individuals. Additionally, the variability in hand shape, motion profile, and the position of hand,
face, and body parts contributing to each sign further complicates the recognition process,
highlighting the need for improved simplicity and accuracy in datasets for effective sign language
detection and recognition.
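The decoupling of pose classification from pose estimation mentioned above can be illustrated with a minimal nearest-neighbour classifier over keypoint vectors; the vectors and labels below are illustrative, not real landmark data, and the project's own pipeline is more involved than this sketch:

```javascript
// Minimal k=1 nearest-neighbour sketch: pose classification operating
// on already-estimated keypoint vectors, independent of how the pose
// estimator produced them.
function euclidean(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    sum += (a[i] - b[i]) ** 2;
  }
  return Math.sqrt(sum);
}

// examples: [{ label, vector }]; returns the label of the closest example.
function classify(examples, query) {
  let best = null;
  let bestDist = Infinity;
  for (const ex of examples) {
    const d = euclidean(ex.vector, query);
    if (d < bestDist) {
      bestDist = d;
      best = ex.label;
    }
  }
  return best;
}

// Illustrative training examples: two signs as flattened keypoint vectors.
const examples = [
  { label: "A", vector: [0.1, 0.2, 0.9] },
  { label: "B", vector: [0.8, 0.7, 0.1] },
];
```

Because the classifier only sees keypoint vectors, the pose estimator can be swapped or improved without retraining the classifier, which is precisely the appeal (and the difficulty) of decoupling the two stages.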

Overcoming these challenges through advancements in artificial intelligence, machine learning, and
data collection can enhance the learning capabilities and overall functionality of sign language
translating web applications, ultimately fostering better communication and understanding between
individuals who use sign language and those who do not.


CHAPTER 12
CONCLUSION


CONCLUSION

This project was undertaken to address the underlying issues faced by hearing and speech impaired people, who are often at a disadvantage in the competitive global arena because of communication hurdles and miss many opportunities in daily life to express themselves. Young children with such impairments, in particular, find it hard to learn sign language in an interactive way.

This project, however, helps to reduce the social stigma of being unable to participate in many domains and gives its users the confidence to stand upright in any field they choose.

The identified research gaps in sign language recognition research offer valuable opportunities for
refinement in the field. Addressing limitations related to datasets, enhancing model robustness, and
understanding contextual challenges can significantly contribute to the progress of sign language
recognition technologies. The acknowledged limitations in existing studies, such as the call for more
diverse sign language gestures, higher resolution cameras, and testing in low-light conditions,
provide meaningful insights for potential improvements in future research.

The use of advanced techniques such as background subtraction and 3D CNNs helps to improve the
accuracy of the model. The use of transliteration helps to make the output more accessible and user-
friendly. The recognized text is then used as a query to search engines to extract relevant search
results based on the user’s requirements.
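As a simplified illustration of the background subtraction mentioned above, the sketch below performs frame differencing on grayscale pixel values; the pixel data and threshold are illustrative, and the project's actual pipeline (and any 3D CNN stage) is far more sophisticated:

```javascript
// Simplified background subtraction by frame differencing: mark a
// pixel as foreground when it differs from the background frame by
// more than a threshold.
function subtractBackground(background, frame, threshold) {
  return frame.map((pixel, i) =>
    Math.abs(pixel - background[i]) > threshold ? 1 : 0
  );
}

// Illustrative data: a static background and a frame where a hand
// enters on the right-hand side.
const background = [10, 12, 11, 10];
const frame = [11, 12, 200, 210];
```

The resulting foreground mask isolates the moving hand from the static scene, which is what lets later stages of a recognition pipeline focus on the gesture itself.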

We extend our heartfelt gratitude to all those who contributed to the success of this project. The
insights, feedback, and encouragement we received have been integral to our growth as professionals
and individuals. We are confident that the skills and experiences gained through this project will
continue to inspire and guide us in our future endeavours.

In closing, "GESTURE SCRIPT" is more than just a project report; it is a symbol of our commitment to leveraging technology for social good. We look forward to seeing the positive impact it will have on the lives of the hearing and speech impaired community.

82 | P a g e
[2021-2024]
GESTURESCRIPT: A SIGN LANGUAGE TRANSLATOR

REFERENCES
https://www.researchgate.net/

https://learnopencv.com/deep-learning-with-opencvs-dnn-module-a-definitive-guide/

https://www.tutorialspoint.com/machine_learning_with_python/knn_algorithm_finding_nearest_neighbors.htm

https://lucid.app/

https://www.signlanguagelinguistics.org/research

https://www.youtube.com/

https://observablehq.com/@nsthorat/how-to-build-a-teachable-machine-with-tensorflow-js

https://www.npmjs.com/package/@tensorflow-models/knn-classifier


BIBLIOGRAPHY

