Multi-Channel Spatio-Temporal Transformer for Sign Language Production
Ma, Xiaohan | Jin, Rize | Chung, Tae-Sun
- Volume:
- Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
- Address:
- Torino, Italy
- Date:
- 20–25 May 2024
- Pages:
- 11699–11712
- Publisher:
- ELRA Language Resources Association (ELRA) and the International Committee on Computational Linguistics (ICCL)
- License:
- CC BY-NC 4.0
- ACL ID:
- 2024.lrec-main.1022
- ISBN:
- 978-2-493814-10-4
Content Categories
- Languages:
- German Sign Language, Korean Sign Language
- Corpora:
- KSL-Guide, RWTH-PHOENIX-Weather 2014T
Abstract
The task of Sign Language Production (SLP) in machine learning involves converting text-based spoken language into corresponding sign language expressions. Sign language conveys meaning through the continuous movement of multiple articulators, including manual and non-manual channels. However, most current Transformer-based SLP models convert these multi-channel sign poses into a unified feature representation, ignoring the inherent structural correlations between channels. This paper introduces a novel approach called MCST-Transformer for skeletal sign language production. It employs multi-channel spatial attention to capture correlations across various channels within each frame, and temporal attention to learn sequential dependencies for each channel over time. Additionally, the paper explores and experiments with multiple fusion techniques to combine the spatial and temporal representations into naturalistic sign sequences. To validate the effectiveness of the proposed MCST-Transformer model and its constituent components, extensive experiments were conducted on two benchmark sign language datasets from diverse cultures. The results demonstrate that this new approach outperforms state-of-the-art models on both datasets.
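The two attention patterns named in the abstract can be sketched in a few lines. This is a minimal, illustrative toy, not the authors' implementation: the helper names, the toy pose features, and the fusion-by-sum step are all assumptions chosen for clarity (the paper itself compares several fusion techniques).

```python
# Toy sketch of the two attention patterns described in the abstract:
# spatial attention across channels within each frame, and temporal
# attention across frames within each channel. Pure Python, no learned
# projections; all names here are illustrative, not from the paper.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of equal-length vectors."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(d)])
    return out

# Toy pose features: x[t][c] is a d-dim vector for frame t, channel c
# (channels could be, e.g., right hand, left hand, face).
T, C, d = 4, 3, 2
x = [[[float(t + c + j) for j in range(d)] for c in range(C)]
     for t in range(T)]

# Spatial attention: within each frame, channels attend to each other.
spatial = [attention(frame, frame, frame) for frame in x]

# Temporal attention: within each channel, frames attend to each other.
temporal_by_channel = []
for c in range(C):
    seq = [x[t][c] for t in range(T)]
    temporal_by_channel.append(attention(seq, seq, seq))

# Fusion by element-wise sum -- one simple choice among the several
# fusion schemes the paper experiments with.
fused = [[[spatial[t][c][j] + temporal_by_channel[c][t][j]
           for j in range(d)] for c in range(C)] for t in range(T)]

print(len(fused), len(fused[0]), len(fused[0][0]))  # 4 3 2
```

The key structural point the abstract makes is visible here: the spatial pass mixes information only across channels (holding the frame fixed), while the temporal pass mixes only across frames (holding the channel fixed), so the channel structure is never flattened into a single unified feature vector.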
Cite as
Citation in ACL Citation Format
Xiaohan Ma, Rize Jin, Tae-Sun Chung. 2024. Multi-Channel Spatio-Temporal Transformer for Sign Language Production. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 11699–11712, Torino, Italy. ELRA Language Resources Association (ELRA) and the International Committee on Computational Linguistics (ICCL).
@inproceedings{ma-etal-2024-multichannel:lrec,
  author    = {Ma, Xiaohan and Jin, Rize and Chung, Tae-Sun},
  title     = {Multi-Channel Spatio-Temporal Transformer for Sign Language Production},
  pages     = {11699--11712},
  editor    = {Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen},
  booktitle = {2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation ({LREC-COLING} 2024)},
  publisher = {{ELRA Language Resources Association (ELRA) and the International Committee on Computational Linguistics (ICCL)}},
  address   = {Torino, Italy},
  day       = {20--25},
  month     = may,
  year      = {2024},
  isbn      = {978-2-493814-10-4},
  language  = {english},
  url       = {https://aclanthology.org/2024.lrec-main.1022}
}