
Microcontroller Based American Sign Language Recognition and

Language Translation System using IoT Application

A Capstone Design Proposal


Presented to the Faculty of the
Information and Communications Technology Program
STI College of Meycauayan

Jherome Lourence M. Baisa
Jaztin R. Belga
Walter A. Borja
Jae Clark B. Dela Torre

September 22, 2023


ENDORSEMENT FORM FOR PROPOSAL DEFENSE

TITLE OF RESEARCH: Microcontroller Based American Sign Language
Recognition and Language Translation System
using IoT Application

NAME OF PROPONENTS: Jherome Lourence M. Baisa
Jaztin R. Belga
Walter A. Borja
Jae Clark B. Dela Torre

In Partial Fulfilment of the Requirements
for the degree of Bachelor of Science in Computer Engineering,
has been examined and is recommended for Proposal Defense.

ENDORSED BY:

Engr. Noel Jason O. Lusung, MIT


Capstone Design Adviser

APPROVED FOR PROPOSAL DEFENSE:

Engr. Noel Jason O. Lusung, MIT


Capstone Design Coordinator

NOTED BY:

<Program Head's Given Name MI. Family Name>


Program Head

September 22, 2023



APPROVAL SHEET

This capstone design proposal titled "Microcontroller Based American Sign Language
Recognition and Language Translation System using IoT Application," prepared and
submitted by Jherome Lourence M. Baisa, Jaztin R. Belga, Walter A. Borja, and Jae
Clark B. Dela Torre, in partial fulfilment of the requirements for the degree of Bachelor
of Science in Computer Engineering, has been examined and is recommended for
acceptance and approval.

Engr. Noel Jason O. Lusung, MIT


Capstone Design Adviser

Accepted and approved by the Capstone Design Review Panel
in partial fulfilment of the requirements for the degree of
Bachelor of Science in Computer Engineering

<Panelists' Given Name MI. Family Name> <Panelists' Given Name MI. Family Name>
Panel Member Panel Member

<Panelists' Given Name MI. Family Name>


Lead Panelist

Noted:

<Capstone Design Coordinator's Given Name MI. Family Name>
Capstone Design Coordinator

<Program Head's Given Name MI. Family Name>
Program Head

September 22, 2023



TABLE OF CONTENTS

Title Page
Endorsement Form for Proposal Defense
Approval Sheet
Table of Contents
Introduction
Background of the problem
Overview of the current state of the technology
Objectives of the study
Scope and limitations of the study



INTRODUCTION

This capstone project, titled "Microcontroller Based American Sign Language
Recognition and Language Translation System using IoT Application," addresses the
need to improve communication for individuals with hearing impairments, particularly
those who use American Sign Language (ASL). The project combines electronics,
software development, and IoT technology to create an innovative solution that
recognizes ASL gestures and translates them into spoken or written language. By
bridging the communication gap, this system aims to enhance accessibility and
inclusivity for ASL users, enabling them to communicate seamlessly with individuals
who may not be fluent in ASL. The project's methodology, system architecture,
implementation details, and potential impact on improving the lives of those with hearing
impairments will be discussed in subsequent sections. Ultimately, this endeavor aspires to
foster global communication and inclusiveness, empowering individuals with hearing
impairments in an interconnected world.

Background of the problem

Communication is a fundamental human activity, yet individuals with hearing
impairments, who rely on American Sign Language (ASL) for expression, face a
significant communication barrier when interacting with those who do not understand
ASL. This gap leads to social isolation, miscommunication, and limited access to
education and employment opportunities. While technology has revolutionized
communication, it has not adequately addressed the needs of ASL users. The scarcity of
ASL interpreters further exacerbates the issue. Thus, the project "Microcontroller Based
American Sign Language Recognition and Language Translation System using IoT
Application" is born out of the pressing need to bridge this communication divide. By
harnessing microcontroller technology and the Internet of Things (IoT), it aims to
develop a precise and accessible ASL recognition and translation system, facilitating
effective communication and promoting inclusivity in an increasingly interconnected
world.



Overview of the current state of the technology

1. ASL Recognition Systems: ASL recognition systems have made significant
progress in recent years, with the integration of computer vision and machine
learning techniques. These systems leverage cameras and sensors to track the
movements of ASL gestures. However, their accuracy and speed can vary, and
they often struggle with complex, nuanced signs.

2. Machine Translation: Machine translation technology has enabled the
conversion of ASL into spoken or written language. Yet, the translations can
sometimes lack the subtleties and context of ASL, leading to potential
misinterpretations.

3. Wearable Devices: Some wearable devices, such as smart gloves equipped with
sensors, have been developed to aid ASL communication. These devices capture
hand movements and convert them into text or speech (a minimal sensor-reading
sketch follows this list). However, their cost and accessibility remain barriers to
widespread adoption.

4. Mobile Applications: Mobile applications have emerged to facilitate ASL
learning and communication, but many lack real-time recognition and translation
capabilities.

5. IoT Integration: IoT technology has not been extensively incorporated into ASL
recognition and translation systems, leaving room for innovation in connecting
ASL users with mainstream communication channels.

6. Accessibility: While there has been progress in making technology more
accessible to individuals with hearing impairments, there is a need for solutions
that seamlessly integrate with everyday communication platforms, breaking down
barriers to effective interaction.
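
To make the wearable-device approach in item 3 concrete, the following is a minimal, hypothetical Arduino-style C++ sketch of how a glove fitted with flex sensors could feed a microcontroller. The pin assignments, threshold value, and single hard-coded gesture rule are illustrative assumptions only and do not represent the proposed system's final design, which would replace the rule with a trained classifier.

const int FLEX_PINS[5] = {A0, A1, A2, A3, A4};  // one analog flex sensor per finger (assumed wiring); index 0 = thumb
const int BENT_THRESHOLD = 600;                 // raw ADC reading treated as "finger bent" (assumed calibration)

void setup() {
  Serial.begin(9600);                           // serial link to a host, display, or speech module
}

void loop() {
  bool bent[5];
  for (int i = 0; i < 5; i++) {
    bent[i] = analogRead(FLEX_PINS[i]) > BENT_THRESHOLD;  // sample each finger
  }

  // Placeholder rule: four fingers curled with the thumb extended is reported
  // as the letter "A"; a complete system would classify many such patterns.
  if (!bent[0] && bent[1] && bent[2] && bent[3] && bent[4]) {
    Serial.println("A");
  }

  delay(200);                                   // brief pause between readings
}

Even this simple loop reflects the pipeline the reviewed systems share: sample the sensors, classify the hand shape, and emit text for a downstream translation, display, or speech stage.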



Objectives of the study

1. Develop a Robust ASL Recognition System: Create a highly accurate and
efficient American Sign Language recognition system that can identify and
interpret a wide range of ASL gestures in real-time, considering variations in
signing styles and expressions.

2. Implement Language Translation Algorithms: Develop algorithms for
translating ASL gestures into spoken or written language with a focus on
preserving the nuances and contextual meaning of ASL expressions for more
accurate and natural translations.

3. Integrate IoT Connectivity: Incorporate Internet of Things (IoT) technology to
enable seamless connectivity between the ASL recognition system and various
communication devices and platforms, allowing ASL users to communicate with
non-ASL speakers effortlessly (a connectivity sketch follows this list).

4. Ensure Accessibility and User-Friendliness: Design the system with a user-
centric approach, emphasizing accessibility for individuals with hearing
impairments. Ensure that the system is intuitive, user-friendly, and adaptable to
different user preferences and needs.

5. Optimize for Real-Time Communication: Aim for low latency in recognition
and translation processes to support real-time communication, making it suitable
for both casual conversations and more formal settings.

6. Evaluate Accuracy and Reliability: Conduct rigorous testing and evaluation to
assess the accuracy and reliability of the ASL recognition and translation system
under various conditions and with diverse user groups.



7. Raise Awareness and Accessibility: Promote awareness and accessibility by
advocating for the adoption of the system in educational institutions, workplaces,
and public spaces to foster inclusivity and improve communication for individuals
with hearing impairments.
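
As noted in Objective 3, the sketch below illustrates one plausible way for the microcontroller to push recognized text to other devices over the Internet. It assumes an ESP32-class board, the Arduino WiFi and PubSubClient (MQTT) libraries, and placeholder network credentials, broker address, and topic name; all of these are assumptions for illustration rather than finalized design decisions.

#include <WiFi.h>           // Wi-Fi support on ESP32-class boards
#include <PubSubClient.h>   // widely used Arduino MQTT client

WiFiClient wifi;
PubSubClient mqtt(wifi);

void setup() {
  WiFi.begin("HOME_SSID", "PASSWORD");          // placeholder network credentials
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);                                 // wait for the network connection
  }

  mqtt.setServer("broker.example.com", 1883);   // placeholder MQTT broker and default port
  while (!mqtt.connected()) {
    mqtt.connect("asl-translator");             // placeholder client ID for this device
    delay(500);
  }
}

// Called whenever the recognition stage produces a translated word or phrase.
void publishTranslation(const char* text) {
  mqtt.publish("asl/translation", text);        // subscribers (phones, dashboards, TTS services) receive the text
}

void loop() {
  mqtt.loop();                                  // keep the MQTT session alive
  // publishTranslation("HELLO"); would be called here after a gesture is recognized
  delay(100);
}

Publishing through a topic-based protocol such as MQTT keeps the microcontroller decoupled from any particular display: a mobile application, web dashboard, or text-to-speech service only needs to subscribe to the same topic to receive the translation in real time.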

Scope and limitations of the study

Scope:

1. Create a robust ASL recognition system capable of accurately identifying and
interpreting a wide range of ASL gestures in real-time.
2. Integrate IoT connectivity to enable ASL users to communicate with non-ASL
speakers through various communication devices and platforms.
3. Prioritize accessibility and user-friendliness in the system's design, ensuring it
caters to the needs of individuals with hearing impairments.
4. Optimize the system for real-time communication, targeting low latency to
support both casual conversations and formal interactions.
5. Conduct thorough testing and evaluation to assess the accuracy and reliability of
the ASL recognition and translation system under diverse conditions and user
groups.
6. Consider cost-effective solutions to enhance accessibility for ASL users with
limited resources.

Limitations:

1. Recognition Complexity: The system's accuracy may vary with the complexity
of ASL gestures, and it may face challenges in recognizing nuanced signs or signs
performed at high speeds.



2. Translation Nuances: The translation of ASL into spoken or written language
may not capture the full depth of ASL's expressiveness and may require
continuous refinement to enhance naturalness.
3. Hardware Constraints: The project will operate within the constraints of
available microcontroller hardware, which may impact processing power and
memory capabilities.
4. IoT Infrastructure: The system's effectiveness will depend on the availability
and reliability of IoT infrastructure, including network connectivity and device
compatibility.
5. User Learning Curve: Users may need some time to adapt to the system, and its
success may depend on user training and familiarity with technology.
6. Cost Considerations: While efforts will be made to keep the system cost-
effective, there may be limitations in achieving affordability for all potential
users.

