e-ISSN 2231-8526
ISSN 0128-7680
Yusuke Takei and Eiichi Inohira
Pertanika Journal of Science & Technology, Volume 33, Issue S4, December 2025
DOI: https://doi.org/10.47836/pjst.33.S4.06
Keywords: Air writing, gesture recognition, multimodal dataset, wearable sensor
Published on: 2025-06-10
In recent years, research and development of wearable devices and related technologies has increased significantly. Gesture input has emerged as a promising input method well-suited to these technologies. However, developing a reliable method for recognizing continuous gestures that lack contextual relationships remains a significant challenge. This study introduces a novel approach for recognizing continuous multi-digit number gestures to address this issue. The method uses a deep learning model with sample-level dense labeling and leverages a multimodal dataset comprising inertial measurement unit (IMU) data from wearable sensors and camera data. The results of our recognition experiment show that using a multimodal dataset to produce accurate training data improves recognition accuracy by 13% compared to approaches that do not use a multimodal dataset. Additionally, with our two proposed methods, the recognition of continuous digit gestures comprising 5, 8, and 10 digits achieved a correct recognition rate exceeding 90%. These results underscore the efficacy of our proposed method in recognizing continuous air-writing character gestures.
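To illustrate the idea behind sample-level dense labeling, the sketch below assigns every sample in a sensor stream its own class label (a digit, or a "no gesture" blank) and then recovers the digit sequence by collapsing runs of identical labels. This is a minimal, hypothetical illustration of the general technique; the `BLANK` convention and function name are assumptions, not the paper's implementation.

```python
# Minimal sketch of sample-level dense labeling for continuous
# air-writing recognition: each IMU sample is labeled with a digit
# (0-9) or a "no gesture" blank, and the digit sequence is recovered
# by collapsing consecutive identical labels and dropping blanks.

BLANK = -1  # illustrative label for samples between gestures

def decode_dense_labels(per_sample_labels):
    """Collapse a per-sample label stream into a digit sequence."""
    digits = []
    prev = BLANK
    for label in per_sample_labels:
        # Emit a digit only when a new non-blank run begins.
        if label != BLANK and label != prev:
            digits.append(label)
        prev = label
    return digits

# A per-sample prediction stream for the gesture sequence "3, 7":
stream = [BLANK, 3, 3, 3, BLANK, BLANK, 7, 7, BLANK]
print(decode_dense_labels(stream))  # -> [3, 7]
```

Because every sample carries a label, no separate gesture-segmentation step is needed before classification, which is what makes this framing attractive for continuous input without contextual cues.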