dc.description.abstract |
Communication barriers between deaf and hearing individuals, particularly in
Philippine healthcare settings, are exacerbated by the scarcity of Filipino Sign
Language (FSL) interpreters and limited FSL knowledge among the general public.
While sign language recognition has advanced, many existing systems focus on
American or Indian Sign Language and often lack bi-directional translation or
specific applicability to FSL and its unique linguistic structures.
This study addresses these gaps by developing a real-time, bi-directional translation
system that converts FSL gestures to text and Filipino speech to text, thereby
facilitating more effective communication in critical settings such as healthcare.
The proposed system employs computer vision techniques, specifically MediaPipe
for hand tracking combined with an SVM classifier for FSL gesture recognition
(including letters, numbers, and 10 domain-expert-validated emergency FSL
phrases), and the Vosk API for Filipino speech-to-text conversion. This research
contributes a specialized FSL dataset for healthcare and an evaluation of four
recognition approaches (ResNet18, EfficientNetV2s, MobileNetV3 Small, and
MediaPipe with SVM), in which the MediaPipe-with-SVM pipeline demonstrated
superior real-time FSL recognition, achieving 98% accuracy on the 36 letter and
number classes and 100% accuracy on the 10 emergency-phrase classes. These
components are integrated into a functional application aimed at improving
healthcare accessibility and fostering inclusivity for the Filipino deaf community. |
en_US |