Sign Language Classification Using the Convolutional Neural Network Method with the MobileNet Architecture
DOI: https://doi.org/10.35960/ikomti.v6i2.1792

Keywords: Sign Language Classification, CNN, MobileNet, Data Augmentation

Abstract
Deaf individuals are people with hearing impairments, classified as either completely deaf or hard of hearing, which hinders verbal communication. According to the World Health Organization (WHO), over 430 million people worldwide, including 34 million children, experience hearing loss. In Indonesia, the most commonly used communication method among the deaf community is the Indonesian Sign System (SIBI). With advancements in technology, artificial intelligence methods such as Convolutional Neural Networks (CNN) have been widely used for image processing and pattern recognition to support communication for the hearing impaired. However, previous studies have shown limitations in data quantity, class coverage, and a lack of evaluation involving lightweight architectures such as MobileNet, particularly for SIBI recognition. This research aims to develop a sign language classification model using CNN with the MobileNet architecture. The dataset used consists of 5,720 SIBI hand gesture images, including manually collected samples for the letters "J" and "Z." Preprocessing involved image resizing and data augmentation to prevent overfitting. The model was trained for 30 epochs. Evaluation results indicate that MobileNet achieved an accuracy of 74.15%, significantly outperforming the baseline CNN model, which only reached 19%. These results demonstrate that MobileNet is more efficient in recognizing visual patterns in sign language and offers a practical solution for implementation on resource-constrained devices. Nevertheless, further improvements are needed to enhance classification performance across all alphabet letters.
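The preprocessing described above (resizing images to the network's input size, then augmenting them to curb overfitting) can be sketched as follows. This is a minimal NumPy illustration under assumed choices, not the authors' actual pipeline: the 224x224 target size is MobileNet's conventional input, and the function names (`resize_nearest`, `random_shift`) are hypothetical.

```python
import numpy as np

def resize_nearest(img, size=(224, 224)):
    """Nearest-neighbour resize; 224x224 is MobileNet's usual input size."""
    h, w = img.shape[:2]
    rows = (np.arange(size[0]) * h / size[0]).astype(int)
    cols = (np.arange(size[1]) * w / size[1]).astype(int)
    return img[rows][:, cols]

def random_shift(img, max_frac=0.1, rng=None):
    """Randomly translate the image by up to max_frac of its size,
    zero-padding the vacated border (a common augmentation step)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[:2]
    dy = int(rng.integers(-int(h * max_frac), int(h * max_frac) + 1))
    dx = int(rng.integers(-int(w * max_frac), int(w * max_frac) + 1))
    out = np.zeros_like(img)
    # Destination and source slices shifted by (dy, dx).
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

# Example: one raw gesture image resized, then augmented.
raw = np.zeros((120, 90, 3), dtype=np.uint8)
resized = resize_nearest(raw)        # (224, 224, 3)
augmented = random_shift(resized)    # same shape, randomly translated
```

In practice a framework utility (e.g. Keras's `ImageDataGenerator`) would apply several such transforms per epoch; the sketch only shows the principle of generating shape-preserving variants of each training image.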
License
Copyright (c) 2025 Ariefah Khairina, Deny Nugroho Triwibowo, Rosyid Ridlo Al Hakim

This work is licensed under a Creative Commons Attribution 4.0 International License.