Hyan Jan Y. Suamina
1 Publication
Scopus ID: 105034159397
TENCON 2025 - 2025 IEEE Region 10 Conference (TENCON), (2026), pp. 1215-1219
Conference Paper | Published: February 18, 2026
Abstract
Filipino Sign Language (FSL) is an invaluable tool for communication within the deaf and mute communities, yet there is a shortage of proficient special education teachers and accessible learning materials. Current research on FSL recognition is limited to basic detection, is often invasive, and lacks comprehensive systems that provide feedback to users. Additionally, FSL incorporates distinctive static and dynamic gestures, including contractions, which set it apart from other sign languages. This study presents the development of a machine vision-based FSL tutor that leverages the MediaPipe framework: MediaPipe Hands for static gesture recognition and MediaPipe Holistic for full-body dynamic gesture tracking. Long short-term memory (LSTM) networks were used to classify dynamic gestures from sequential landmark data, capturing temporal dependencies in sign execution. The system runs as a desktop application that lets learners work through interactive modules with real-time feedback delivered through visual prompts and audio cues. It utilizes 42 static hand feature landmarks and 1,662 key points derived from hand, pose, and facial data to ensure accurate recognition and feedback. A total of 50 essential FSL gestures, aligned with the kindergarten curriculum, were modeled, covering alphabet knowledge, vocabulary development, self-introduction, and polite expressions. Performance evaluation using computer vision metrics demonstrated high recognition accuracy for both gesture types. In addition, the System Usability Scale (SUS) and statistical comparisons with traditional instruction methods confirmed the platform's effectiveness and user acceptability. The results validate the system as a comprehensive and accessible solution for FSL education, particularly suited to early learners and self-guided instruction.
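The 1,662-value figure in the abstract matches the standard flattening of MediaPipe Holistic output: 33 pose landmarks with four values each (x, y, z, visibility), 468 face landmarks, and 21 landmarks per hand with three values each. The sketch below (function and variable names are hypothetical; the paper's actual pipeline is not shown) illustrates how one video frame's landmarks could be assembled into the per-frame feature vector an LSTM would consume, with missing parts zero-filled.

```python
import numpy as np

# MediaPipe Holistic landmark counts (assumed from the library's documented
# output): 33 pose (x, y, z, visibility), 468 face (x, y, z), 21 per hand
# (x, y, z). Flattened: 33*4 + 468*3 + 2*21*3 = 1662 values per frame.
POSE, FACE, HAND = 33, 468, 21

def flatten_keypoints(pose, face, left_hand, right_hand):
    """Concatenate one frame's landmark arrays into a 1662-dim vector.

    Each argument is an (N, C) NumPy array or None; absent parts (e.g. a
    hand out of frame) are replaced with zeros so the vector length is
    constant across frames.
    """
    parts = [
        pose.reshape(-1) if pose is not None else np.zeros(POSE * 4),
        face.reshape(-1) if face is not None else np.zeros(FACE * 3),
        left_hand.reshape(-1) if left_hand is not None else np.zeros(HAND * 3),
        right_hand.reshape(-1) if right_hand is not None else np.zeros(HAND * 3),
    ]
    return np.concatenate(parts)

# A dynamic-gesture clip then becomes a (num_frames, 1662) sequence,
# which is the shape an LSTM classifier expects as input.
frame = flatten_keypoints(np.zeros((POSE, 4)), None, np.zeros((HAND, 3)), None)
print(frame.shape)  # (1662,)
```

Zero-filling absent landmarks is one common convention in landmark-sequence pipelines; it keeps the feature dimension fixed so sequences from different clips can be batched for training.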