
SignEase: An Assistive Communication System for the Deaf and
Hearing-Impaired Learners of Malabon Elementary School

A Capstone Project Presented to the
Faculty of the College of Computer Studies
City of Malabon University
City of Malabon

In Partial Fulfillment of the Requirements for the Degree of
Bachelor of Science in Information Technology

By
Bayola, Bryle Racel I.
Garado, Jhonrie C.
Razon, Luigi C.
Sales, Joe Honey Faith C.

June 2025

Chapter I

THE PROBLEM AND THE LITERATURE REVIEW

Introduction

Communication is a basic human right, a powerful learning tool, and a
facilitator of social interaction and membership in society. Unfortunately, for
deaf and hearing-impaired students, especially in conventional classroom
environments, communication barriers still impede their academic
achievement, social growth, and general participation in school activities. In
the Philippines, inclusive education is increasingly becoming a priority, but
most schools continue to struggle to deliver effective support services to
hearing-impaired students.

Malabon Elementary School, like most public schools, is committed to
inclusive learning but struggles to close communication gaps among
hearing-impaired students, their teachers, and classmates. Sign language is
the students' main language of communication, but not all teachers or
classmates know it, resulting in exclusion and limited learning opportunities.

To address this gap, SignEase proposes a new assistive communication
system aimed at enriching the learning experience of deaf and
hearing-impaired students. It is designed to provide real-time translation
between text and/or speech and sign language, creating a more accessible
and inclusive learning environment.

Inclusive education requires that all students, with or without physical or
cognitive disabilities, be provided with quality education in a supportive
environment. Notwithstanding major advances in policy, learners with hearing
impairments continue to experience educational disparities because of a lack
of suitable communication devices and trained staff.

Teachers and students at Malabon Elementary School also report difficulties
in relating to deaf and hearing-impaired students. Inadequate sign language
skills among teachers and peers result in miscommunication and isolation,
negatively impacting these students' education and socialization.

Technological innovations in artificial intelligence, gesture recognition, and
real-time translation offer promising solutions to these challenges. With this
in mind, SignEase was envisioned as an easy-to-use assistive communication
tool designed to meet the needs of learners and instructors. It uses
technology to translate sign language into spoken or written output and to
translate spoken or written input into sign language animations or visuals.

The research aims to develop and deploy SignEase at Malabon Elementary
School, assess how effective it is in filling communication gaps, and explore
its potential for broader use in other educational institutions. In doing so, it
hopes to support a more inclusive and equitable education for all learners.

Related Studies and Literature

Accessible Computer Science for K-12 Students with Hearing Impairments

This paper introduces a project designed to support Deaf and Hard of Hearing
(DHH) K-12 students in acquiring knowledge of complex Computer Science
concepts. It discusses the motivation for the project and an early design of an
accessible block-based Computer Science curriculum to engage DHH
students in hands-on computing education.

Towards Improving the E-Learning Experience for Deaf Students: e-LUX

This work aims to support the e-learning user experience (e-LUX) for deaf
users by enhancing the accessibility of content and services. It proposes
preliminary solutions to tailor activities that can be more fruitful when
performed in one's own "native" language, represented by national Sign
Language.

Sign Language Recognition, Generation, and Translation: An Interdisciplinary Perspective

This paper presents the results of an interdisciplinary workshop, providing key
background often overlooked by computer scientists, a review of the state of
the art, a set of pressing challenges, and a call to action for the research
community in the field of sign language technologies.

Development of New Distance Learning Platform to Create and Deliver Learning Content for Deaf Students

This study suggests a new asynchronous distance learning platform for deaf
students with capabilities to assist instructors in developing and delivering
educational materials over distance, including sign language translation
videos.

Design and Implementation of a Virtual 3D Educational Environment to Improve Deaf Education

This paper presents research on the design and implementation of a virtual
3D educational environment that generates and animates sign language
sentences from semantic representations, aiming to improve education for
deaf pupils.

The Effectiveness of an Authentic E-Learning Environment for Deaf Learners

This study examines the effectiveness of authentic e-learning environments
for deaf learners, focusing on British Sign Language (BSL) acquisition and the
impact of technology integration on learner motivation and engagement.

Communication, Language, and Modality in the Education of Deaf Students

This article discusses access for deaf children in the context of early
identification and advances in hearing technologies, considering the
ramifications for communication choices and educational implications.

Video Technology-Mediated Literacy Practices in American Sign Language (ASL)

This mixed-methods study investigates ASL literacy practices among Deaf
bilingual high school students, focusing on their engagement with video texts
and the relationship between ASL and English literacy skills.

Advancing Personalized and Inclusive Education for Students with Disability Through Artificial Intelligence

This article explores how AI technologies, such as speech-to-text converters,
can offer personalized and inclusive education for students with disabilities,
including those who are deaf or hard of hearing.

Exploring Assistive Technologies for Deaf and Hard-of-Hearing Students

This article discusses various assistive technologies, including real-time
captioning and hearing aid glasses, that can support deaf and
hard-of-hearing students in academic settings.

Sign Language Transformers: Joint End-to-End Sign Language Recognition and Translation

Camgoz et al. (2020) introduced a transformer-based architecture that
simultaneously learns continuous sign language recognition and translation.
Their model achieved state-of-the-art results on the
RWTH-PHOENIX-Weather-2014T dataset, highlighting the potential of
end-to-end systems in sign language processing.

DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation

Fang et al. (2018) developed DeepASL, a deep learning-based system that
enables non-intrusive translation of American Sign Language at both the
word and sentence levels using infrared sensors and hierarchical bidirectional
recurrent neural networks.

A Comprehensive Study on Deep Learning-based Methods for Sign Language Recognition

Adaloglou et al. (2020) conducted a comparative assessment of deep neural
network methods for sign language recognition, providing insights into
mapping non-segmented video streams to glosses and introducing new
sequence training criteria.

Sign-to-Speech Model for Sign Language Understanding: A Case Study of Nigerian Sign Language

Kolawole et al. (2021) presented a model that translates Nigerian Sign
Language into text and subsequently into speech, aiming to reduce
communication barriers in sub-Saharan Africa by deploying a lightweight,
real-time application.

Using LSTM to Translate Thai Sign Language to Text in Real Time

A 2024 study employed Long Short-Term Memory (LSTM) networks combined
with MediaPipe Holistic to capture and translate Thai Sign Language gestures
into text in real time, achieving an accuracy of 86% and highlighting areas for
improvement such as dataset expansion.
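A core preprocessing step in LSTM-based approaches like the one above is slicing the continuous stream of per-frame keypoints into fixed-length sequences for the network. The sketch below illustrates that windowing step only; the window and stride values are illustrative assumptions, not taken from the study.

```python
def make_windows(frames, window=30, stride=10):
    """Split a stream of per-frame keypoint vectors into fixed-length,
    overlapping windows -- the usual input shape for an LSTM classifier.
    window/stride values here are arbitrary examples."""
    return [frames[i:i + window]
            for i in range(0, len(frames) - window + 1, stride)]

# Synthetic stream: 100 frames, each with a single keypoint coordinate.
stream = [[float(t)] for t in range(100)]
windows = make_windows(stream)
print(len(windows), len(windows[0]))  # 8 windows of 30 frames each
```

Each window would then be fed to the LSTM as one training or inference sample; overlapping strides increase the number of samples extracted from a limited recording.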

A Study Examining a Real-Time Sign Language-to-Text Interpretation System Using Crowdsourcing

This 2020 study explored a system that interprets sign language to text via
crowdsourcing, dividing live sign language video into segments for non-expert
workers to transcribe, aiming to reduce labor costs and delays in caption
provision.

A Survey of Advancements in Real-Time Sign Language Translators: Integration with IoT Technology

This survey discusses various real-time sign language translation systems,
including visual-based methods and wearable technologies, emphasizing
their integration with IoT devices to enhance communication for the hearing
impaired.

Real-Time Wearable Speech Recognition System for Deaf Persons

This 2021 study designed a wearable, vibration-based system that enables
deaf individuals to perceive speech content through vibrations, achieving 95%
accuracy and demonstrating potential for improving quality of life without the
need for hearing aids.

Real-Time Sign Language Recognition and Speech Generation

This 2020 paper presents a user-friendly and accurate sign language
recognition system trained with neural networks, capable of generating text
and speech from input gestures, aiming to facilitate communication for the
deaf and mute community.

Instantaneous Interpretation into Sign Language for the Hearing Impaired

This 2024 conference paper discusses the development of a system that
translates speech into text for the benefit of the deaf community, leveraging
recent advances in artificial intelligence to facilitate real-time communication.

Sign For EveryJUAN: An Interactive Android Application That Teaches Filipino Sign Language Using Hand Gesture Recognition

Velasco et al. (2024) developed an Android application that teaches Filipino
Sign Language (FSL) using hand gesture recognition. The app utilizes
MediaPipe for tracking and a Long Short-Term Memory (LSTM) neural
network for classification, achieving over 90% accuracy.

Speak the Sign: A Real-Time Sign Language to Text Converter Application for Basic Filipino Words and Phrases

Murillo et al. (2021) created a web-based application that converts FSL into
text in real time. The application was evaluated for content, design, and
functionality, receiving a "Very Highly Acceptable" rating from users.

KUMPAS: A Filipino Sign Language Translator of Deaf and Hearing Individuals Utilizing Computer Vision Algorithm

Bautista et al. (2023) developed "KUMPAS," a translator facilitating
communication between deaf and hearing individuals. The system uses
OpenCV and MediaPipe for gesture recognition and translates speech into
sign language via a 3D character.

SimboWika: A Mobile and Web Application to Learn Filipino Sign Language for Deaf Students in Elementary Schools

Empe et al. (2020) introduced "SimboWika," a mobile and web application
designed to teach FSL to elementary students. The app provides illustrations
and allows teachers to monitor student progress.

Senyas: A 3D Animated Filipino Sign Language Interpreter Using Speech Recognition

Alberto et al. (2022) developed "Senyas," a web application that interprets
spoken and written inputs into FSL using a 3D avatar. The system employs
speech recognition and sentiment analysis to enhance communication.

Recognizing Filipino Sign Language Video Sequences Using Deep Learning Techniques

Tupal (2023) focused on creating a high-quality FSL dataset and developing a
model using Graph Convolutional Networks and Gated Recurrent Units. The
resulting application achieved a 100% top-5 accuracy on the dataset.

Filipino Sign Language Recognition Using Long Short-Term Memory and Residual Network Architecture

This 2022 study explored the use of ResNet and LSTM models for continuous
FSL recognition. The LSTM model achieved a 94% accuracy in recognizing
15 continuous Filipino phrases.

Filipino Sign Language Alphabet Recognition Using Persistent Homology Classification Algorithm

Jetomo and De Lara (2025) applied a Persistent Homology Classification
algorithm to recognize the FSL alphabet, addressing challenges in
communication for the deaf community by improving sign language
recognition accuracy.

The Development and Assessment of Pattern Matching Algorithms Used by ZEE: A Filipino Sign Language (FSL) Dictionary and English-Learning App

Daculan and Tan (2022) developed "ZEE," an FSL dictionary and
English-learning app. The study assessed pattern matching algorithms to
enhance the app's functionality.

SignTech: A Sign Language Translator App Designed to Make Communication Easier Between Deaf and Mute Individuals and Non-Disabled People

Developed by students from the Technological Institute of the Philippines,
"SignTech" is a sign language translator app that won second place in the
Empowerment Prize Track of HooHacks Hackathon 2021.

Theoretical Framework

This study is grounded in theories related to human communication, social
inclusion, and human-computer interaction to support the development of
SignEase, a real-time sign language to text and speech translator for the
deaf and hearing-impaired. The foundational concept is that communication is
a two-way process involving the encoding, transmission, and decoding of
messages, aligned with the Shannon and Weaver Model of Communication.
This model conceptualizes SignEase as a digital intermediary in which the
camera and AI system function as the encoder, the real-time processing
system serves as the channel, and the output modules act as the decoder,
ensuring messages are clearly conveyed between deaf and hearing
individuals.
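As a minimal sketch of this encoder/channel/decoder mapping, the three roles can be expressed as composed functions. The function names and message format below are purely illustrative assumptions, not part of the actual SignEase implementation.

```python
def encode(gesture_label: str) -> dict:
    """Encoder: the camera plus recognition model turn a captured
    gesture into a message (here, a simple labeled packet)."""
    return {"message": gesture_label, "modality": "sign"}

def transmit(packet: dict) -> dict:
    """Channel: the real-time processing system carries the message.
    A real channel would add buffering, inference latency, and noise."""
    return dict(packet)

def decode(packet: dict) -> str:
    """Decoder: the output modules render the message as text/speech."""
    return f'{packet["message"]} (as text and speech)'

# One full pass through the Shannon-Weaver pipeline.
print(decode(transmit(encode("GOOD MORNING"))))
```

The value of the model is that each stage can be analyzed separately: recognition errors belong to the encoder, latency and dropped frames to the channel, and rendering quality to the decoder.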

Furthermore, the framework incorporates the Social Model of Disability,
which emphasizes that societal structures and communication barriers, not
individual impairments, are the primary obstacles to inclusion. SignEase
addresses this by utilizing assistive technologies to eliminate these barriers,
enabling equal participation in communication and promoting inclusivity in
social, educational, and professional contexts.

Additionally, this study applies principles from Human-Computer Interaction
(HCI) Theory, highlighting the importance of designing user-friendly, intuitive,
and accessible interfaces. By integrating gesture recognition and
text-to-speech systems in a seamless manner, SignEase supports effortless
real-time interactions with minimal user training, reinforcing the psychological
and functional accessibility of the platform.

This theoretical framework supports the implementation of SignEase as an
innovative assistive communication system that fosters inclusive interaction
between deaf and hearing individuals. It contributes to breaking down
communication gaps and promoting social equity through the strategic
application of AI, machine learning, and user-centered design.

Conceptual Framework

Input

1. Hardware Devices:
 Camera (for real-time gesture capture)
 Microphone and speaker (for output feedback)
 GPU (Nvidia GPU with CUDA)
2. Software Tools:
 MediaPipe Hands
3. Sign Language Dataset:
 Pretrained sign language data (e.g., ASL, FSL)
4. User Input:
 Real-time sign gestures from deaf/mute users

Process

1. Image/Video Capture: the camera detects and tracks hand/gesture movements.
2. Gesture Recognition: AI interprets signs using deep learning models.
3. Conversion: recognized gestures are translated into text or spoken language.
4. Display/Playback: output is shown on screen and/or spoken via speaker in real time.

Output

1. For Deaf Users: real-time text or speech translation of their signs.
2. For Hearing Users: clear understanding of and interaction with deaf users through readable or audible responses.
3. Enhanced Communication: seamless and inclusive two-way interaction.

Outcome

1. Improved Communication: breaks barriers between deaf and hearing individuals in daily interactions.
2. Social Inclusion: encourages equal participation in education, work, and public spaces.
3. Accessibility Empowerment: supports the independence and confidence of deaf and hearing-impaired users.
4. Technological Advancement: contributes to inclusive and assistive communication technologies.
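The Gesture Recognition step of the process can be illustrated with a toy stand-in: a nearest-centroid classifier over flattened landmark feature vectors. The labels, template vectors, and function names below are invented for illustration; the actual system would use a trained deep learning model rather than fixed templates.

```python
import math

# Hypothetical gesture templates: each sign maps to a flattened
# landmark feature vector (values are made up for illustration).
TEMPLATES = {
    "HELLO":     [0.1, 0.9, 0.2, 0.8],
    "THANK_YOU": [0.9, 0.1, 0.8, 0.2],
}

def classify(features):
    """Return the template label nearest (Euclidean) to the input."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda label: dist(features, TEMPLATES[label]))

print(classify([0.15, 0.85, 0.25, 0.75]))  # nearest to the HELLO template
```

The Conversion and Display/Playback steps would then map the predicted label to its text form and pass it to a text-to-speech engine.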

Statement of the Problem

Many deaf and hearing-impaired individuals continue to face communication
barriers in everyday situations due to the limited number of people who
understand sign language. Although sign language is their main form of
expression, the majority of the population relies on spoken language, making
it difficult for both parties to interact naturally. Specifically, this study seeks to
answer the following questions:

1. How accurate is the SignEase system when translating sign language into
text and speech in real time?

2. How quickly can the system respond and convert gestures into output?

3. Is the system easy to use for both deaf and hearing users?

4. How helpful is the system in supporting smooth communication in practical
settings?

5. What difficulties or limitations are encountered during the development and
use of the SignEase system?
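Questions 1 and 2 imply straightforward metrics. The sketch below shows how translation accuracy and average response latency could be computed from logged evaluation trials; the trial data, labels, and timings are invented purely for illustration.

```python
# Hypothetical evaluation log: (expected sign, predicted sign, latency in s).
trials = [
    ("HELLO",        "HELLO",     0.42),
    ("THANK_YOU",    "THANK_YOU", 0.51),
    ("GOOD_MORNING", "HELLO",     0.47),  # one misrecognition
    ("YES",          "YES",       0.38),
]

# Accuracy: fraction of trials where prediction matched the expected sign.
accuracy = sum(e == p for e, p, _ in trials) / len(trials)

# Responsiveness: mean end-to-end latency across trials.
avg_latency = sum(t for _, _, t in trials) / len(trials)

print(f"accuracy={accuracy:.0%} avg_latency={avg_latency:.2f}s")
```

In an actual evaluation, each trial row would come from a recorded test session with deaf and hearing participants rather than a hard-coded list.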

General Objectives

To develop and assess SignEase, a real-time translator that converts sign
language into text and speech, aiming to improve communication between
deaf individuals and people who do not understand sign language.

Specific Objectives:

1. To design and build a working system that can detect and translate sign
language into readable text and spoken words.

2. To evaluate the accuracy and speed of the system with respect to its
capability to translate.

3. To evaluate the suitability of the system for both deaf and hearing users.

4. To assess real-life use of the tool.

5. To summarize the key issues faced during the development phase and to
provide some suggestions for a better system.

Significance of the Study

The creation of SignEase: An Assistive Communication System for the Deaf
and Hearing-Impaired Students of Malabon Elementary School is of immense
importance to many stakeholders in inclusive education and technological
innovation.

To the Deaf and Hearing-Impaired Students

This research directly benefits hearing-impaired students through an
accessible tool that fills communication gaps within the classroom. With
SignEase, students are better able to grasp lessons, engage more actively in
class discussions, and communicate more effectively. This enables them to
become academically successful and gain a greater feeling of inclusion and
confidence.

To the Teachers and School Administrators

Teachers may find it difficult to communicate with sign language users,
particularly if they are not formally trained. SignEase provides an assistive
system that facilitates real-time communication, making instruction more
inclusive and effective. School administrators can also utilize the findings of
this research to adopt evidence-based practices aligned with the Department
of Education's objective of fostering inclusive learning environments.

To the Parents and Guardians

Better school communication and comprehension can result in improved
learning outcomes and social growth for deaf children. This gives parents
greater peace of mind, knowing their children are being provided with a fair
education.

To the Educational Technology Community

This research adds to the existing body of knowledge in assistive technology
by demonstrating the application of AI, sign language recognition, and
communication aids in educational environments. It also provides an opening
for further research, development, and partnership in breaking down barriers
for students with disabilities in education.

To the Future Researchers

The results of this research can be used as a basis for future development of
assistive learning systems. It also serves as a reference for researchers
investigating similar tools or developing more comprehensive solutions for
inclusive education.

Scope and Delimitation

This research focuses on the creation and implementation of "SignEase," an
assistive communication system designed to help Deaf and hearing-impaired
students engage with their hearing peers and educators at Malabon
Elementary School. The technology attempts to break down communication
barriers by transforming Filipino Sign Language (FSL) gestures into text and
speech, increasing inclusivity and improving educational opportunities for
students with hearing impairments. Key areas of focus include using gesture
recognition technology to read FSL gestures, incorporating speech-to-text
conversion to enable real-time communication, and creating a user-friendly
interface that is accessible to both Deaf and hearing people. The research is
limited to the development and evaluation of the system within the setting of
the school, excluding other institutions or communities.

SignEase will be implemented at Malabon Elementary School in conjunction
with instructors and students to ensure that the system fits Deaf learners'
specific communication needs. Through this technology, the initiative hopes
not only to improve educational achievement for Deaf students but also to
promote a more inclusive learning atmosphere that bridges the
communication gap between Deaf and hearing people. This study aims to
develop a practical, technology-driven solution that tackles the
communication issues experienced by Deaf and hearing-impaired students,
in line with national efforts to encourage the use of Filipino Sign Language in
educational institutions.

Definition of Terms

MediaPipe Hands

- MediaPipe Hands is a high-fidelity hand and finger tracking solution. It
employs machine learning (ML) to infer 21 3D landmarks of a hand from a
single frame. Whereas many state-of-the-art approaches rely on powerful
desktop environments for inference, MediaPipe Hands achieves real-time
performance on a mobile phone and can scale to tracking multiple hands.
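Assuming MediaPipe's 21-landmark convention (index 0 is the wrist), a common preprocessing step before classification is to express landmarks relative to the wrist, so a model sees hand shape rather than where the hand happens to be in the frame. The sketch below uses synthetic landmark data in place of actual MediaPipe output.

```python
def normalize_landmarks(landmarks):
    """Translate all (x, y, z) landmarks so the wrist (index 0 in
    MediaPipe's 21-point convention) sits at the origin."""
    wx, wy, wz = landmarks[0]
    return [(x - wx, y - wy, z - wz) for (x, y, z) in landmarks]

# Synthetic hand: wrist at (0.5, 0.5, 0.0), 20 more points spread along x.
hand = [(0.5, 0.5, 0.0)] + [(0.5 + i * 0.01, 0.5, 0.0) for i in range(1, 21)]
norm = normalize_landmarks(hand)
print(norm[0])  # wrist is now at the origin: (0.0, 0.0, 0.0)
```

The normalized vectors can then be flattened and fed to whatever recognizer the system uses.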

GPU

- A GPU (Graphics Processing Unit) is a specialized electronic circuit that
rapidly manipulates and alters memory to accelerate the building of images
in a frame buffer intended for output to a display. Essentially, it is a chip that
handles the rendering of graphics and images on a computer or other device.

CUDA

- CUDA (Compute Unified Device Architecture) is a parallel computing
platform and application programming interface (API) developed by NVIDIA
that allows software to utilize the processing power of NVIDIA GPUs for
general-purpose computing tasks. This enables developers to significantly
speed up compute-intensive applications by leveraging the thousands of
cores available in GPUs.

NVIDIA

- NVIDIA is a company that designs and manufactures graphics processing
units (GPUs), specialized processors used to enhance computer graphics
and accelerate AI-related tasks. Besides GPUs, NVIDIA also creates other
hardware, such as system-on-chip units, and software for data science, AI,
and high-performance computing.

ASL

- ASL is an abbreviation for American Sign Language, a visual-gestural
language used by the Deaf and Hard of Hearing community in the United
States and Canada. ASL is a natural language with its own grammar, syntax,
and vocabulary, distinct from spoken English. It relies on handshapes,
movements, and facial expressions to communicate.

FSL

- FSL refers to Filipino Sign Language, the natural language of
communication for many Deaf Filipinos. It is a visual language that uses
gestures, facial expressions, and body movements to convey meaning. FSL
is the national sign language of the Philippines, officially recognized by
Republic Act 11106.

HCI

- Human-Computer Interaction (HCI) is a multidisciplinary field that studies
how people interact with computers and other technologies. It focuses on
designing, implementing, and evaluating user interfaces and systems that
are user-friendly, efficient, and effective. HCI aims to make technology more
accessible and usable for a wide range of users.

REFERENCES

 Das, M., Marghitu, D., Jamshidi, F., Mandala, M., & Howard, A. (2020). Accessible computer science for K-12 students with hearing impairments. https://arxiv.org/abs/2007.08476
 Borgia, F., Bianchini, C., & de Marsico, M. (2019). Towards improving the e-learning experience for deaf students: e-LUX. https://arxiv.org/abs/1911.13231
 Bragg, D., Koller, O., Bellard, M., et al. (2019). Sign language recognition, generation, and translation: An interdisciplinary perspective. https://arxiv.org/abs/1908.08597
 Ahmed, M. E., & Hasegawa, S. (2022). Development of new distance learning platform to create and deliver learning content for deaf students. Education Sciences, 12(11), 826. https://www.mdpi.com/2227-7102/12/11/826
 Lakhfif, A. (2020). Design and implementation of a virtual 3D educational environment to improve deaf education. https://arxiv.org/abs/2006.00114
 Education Sciences Editorial Office. (2023). Communication, language, and modality in the education of deaf students. Education Sciences, 13(10), 1033. https://www.mdpi.com/2227-7102/13/10/1033
 Zernovoj, A. (2022). Video technology-mediated literacy practices in American Sign Language (ASL). University of California, Santa Cruz. https://escholarship.org/uc/item/2c09v6ns
 Chen, H., Zhao, Y., & Ye, Y. (2024). Advancing personalized and inclusive education for students with disability through artificial intelligence. AI, 5(2), 11. https://www.mdpi.com/2673-6470/5/2/11
 Applied Sciences Editorial Office. (2023). The effectiveness of an authentic e-learning environment for deaf learners. Applied Sciences, 15(3), 1568. https://www.mdpi.com/2076-3417/15/3/1568
 Learning Counsel. (2023). Exploring assistive technologies for deaf and hard-of-hearing students. The Learning Counsel. https://thelearningcounsel.com/articles/exploring-assistive-technologies-for-deaf-and-hard-of-hearing-students/
 Camgoz, N. C., Koller, O., Hadfield, S., & Bowden, R. (2020). Sign Language Transformers: Joint end-to-end sign language recognition and translation. arXiv preprint arXiv:2003.13830.
 Fang, B., Co, J., & Zhang, M. (2018). DeepASL: Enabling ubiquitous and non-intrusive word and sentence-level sign language translation. arXiv preprint arXiv:1802.07584.
 Adaloglou, N., Chatzis, T., Papastratis, I., Stergioulas, A., Papadopoulos, G. T., Zacharopoulou, V., Xydopoulos, G. J., Atzakas, K., Papazachariou, D., & Daras, P. (2020). A comprehensive study on deep learning-based methods for sign language recognition. arXiv preprint arXiv:2007.12530.
 Kolawole, S., Osakuade, O., Saxena, N., & Olorisade, B. K. (2021). Sign-to-speech model for sign language understanding: A case study of Nigerian Sign Language. arXiv preprint arXiv:2111.00995.
 Author(s) not specified. (2024). Using LSTM to translate Thai Sign Language to text in real time. Discover Artificial Intelligence, 4, Article 17. https://doi.org/10.1007/s44163-024-00113-8
 Author(s) not specified. (2020). A study examining a real-time sign language-to-text interpretation system using crowdsourcing. In Computers Helping People with Special Needs (pp. 186–194). Springer. https://doi.org/10.1007/978-3-030-58805-2_22
 Author(s) not specified. (Year not specified). A survey of advancements in real-time sign language translators: Integration with IoT technology. Technologies, 11(4), 83. https://doi.org/10.3390/technologies11040083
 Author(s) not specified. (2021). Real-time wearable speech recognition system for deaf persons. Computers & Electrical Engineering, 91, 107026. https://doi.org/10.1016/j.compeleceng.2021.107026
 Author(s) not specified. (2020). Real-time sign language recognition and speech generation. Journal of Innovative Image Processing, 2(2), 65–76. https://doi.org/10.36548/jiip.2020.2.001
 Author(s) not specified. (2024). Instantaneous interpretation into sign language for the hearing impaired. In Universal Threats in Expert Applications and Solutions (pp. 227–238). Springer. https://doi.org/10.1007/978-981-97-3810-6_19
 Velasco, L. U., Macatangga, R. S., Pepito, C. P., Chua, M. G., Galupo Jr, E. G., Del Mundo, M. J. R., & Dilla, E. L. (2024). Sign For EveryJUAN: An interactive Android application that teaches Filipino Sign Language using hand gesture recognition. Proceedings of the 5th Borobudur International Symposium on Humanities and Social Science 2023, 504–513. Atlantis Press. https://doi.org/10.2991/978-2-38476-273-6_55
 Murillo, S. C. M., Villanueva, M. C. A. E., Tamayo, K. I. M., Apolinario, M. J. V., Lopez, M. J. D., & Edd. (2021). Speak the Sign: A real-time sign language to text converter application for basic Filipino words and phrases. Central Asian Journal of Mathematical Theory and Computer Sciences, 2(8), 1–8. https://cajmtcs.centralasianstudies.org/index.php/CAJMTCS/article/view/92
 Bautista, D., Cruz, R., Esbieto, M., Montemayor, E., & Panganiban, J. (2023). KUMPAS: A Filipino Sign Language translator of deaf and hearing individuals utilizing computer vision algorithm. International Research Journal, 2023. https://doi.org/10.62293/IRIJ-573ct
 Empe, N. A. A., Echon, R. C. L., Vega, H. D. A., Paterno, P. L. C., Jamis, M. N., & Yabut, E. R. (2020). SimboWika: A mobile and web application to learn Filipino Sign Language for deaf students in elementary schools. Proceedings of the 2020 IEEE Region 10 Humanitarian Technology Conference (R10-HTC). https://doi.org/10.1109/R10-HTC49770.2020.9357056
 Alberto, A. A. B., Mangampo, H. M. I., Presto, M. L. V., & Herradura, T. R. (2022). Senyas: A 3D animated Filipino Sign Language interpreter using speech recognition. Philippine Computing Science Congress 2022. https://www.researchgate.net/publication/361324444_Senyas_A_3D_Animated_Filipino_Sign_Language_Interpreter_Using_Speech_Recognition
 Tupal, I. L. (2023). Recognizing Filipino Sign Language video sequences using deep learning techniques (Master's thesis). De La Salle University. https://animorepository.dlsu.edu.ph/etdm_ece/25/
 Author(s) not specified. (2022). Filipino Sign Language recognition using Long Short-Term Memory and Residual Network architecture. In Proceedings of the Seventh International Congress on Information and Communication Technology (pp. 489–497). Springer. https://doi.org/10.1007/978-981-19-2397-5_45
 Jetomo, C., & De Lara, M. L. D. (2025). Filipino Sign Language alphabet recognition using Persistent Homology Classification algorithm. PeerJ Computer Science, 11(2), e2720. https://doi.org/10.7717/peerj-cs.2720
 Daculan, M. D., & Tan, A. J. (2022). The development and assessment of pattern matching algorithms used by ZEE: A Filipino Sign Language (FSL) dictionary and English-learning app. In Proceedings of the 36th Pacific Asia Conference on Language, Information and Computation (pp. 842–852). Association for Computational Linguistics. https://aclanthology.org/2022.paclic-1.93/
 Technological Institute of the Philippines. (2021). T.I.P.ians secure win in HooHacks with sign language translator app, SignTech. https://dru.tip.edu.ph/home/t-i-p-ians-secure-win-in-hoohacks-with-sign-language-translator-app-signtech/