Factorized Learning Assisted with Large Language Model for Gloss-free Sign Language Translation

Chen, Zhigang | Zhou, Benjia | Li, Jun | Wan, Jun | Lei, Zhen | Jiang, Ning | Lu, Quan | Zhao, Guoqing


Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Venue:
Torino, Italy
Date:
20 to 25 May 2024
Pages:
7071–7081
Publisher:
ELRA Language Resources Association (ELRA) and the International Committee on Computational Linguistics (ICCL)
License:
CC BY-NC 4.0
ACL ID:
2024.lrec-main.620
ISBN:
978-2-493814-10-4

Abstract

Previous Sign Language Translation (SLT) methods achieve superior performance by relying on gloss annotations. However, labeling high-quality glosses is a labor-intensive task, which limits the further development of SLT. Although some approaches work towards gloss-free SLT by jointly training the visual encoder and translation network, these efforts still suffer from poor performance and make inefficient use of the powerful Large Language Model (LLM). Most seriously, we find that directly introducing an LLM into SLT leads to insufficient learning of visual representations, as the LLM dominates the learning curve. To address these problems, we propose Factorized Learning assisted with Large Language Model (FLa-LLM) for gloss-free SLT. Concretely, we factorize the training process into two stages. In the visual initialing stage, we employ a lightweight translation model after the visual encoder to pre-train the visual encoder. In the LLM fine-tuning stage, we freeze the acquired knowledge in the visual encoder and integrate it with a pre-trained LLM to inspire the LLM's translation potential. This factorized training strategy proves to be highly effective, as evidenced by significant improvements across three SLT datasets, all evaluated under the gloss-free setting.
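The two-stage schedule described above can be sketched in a few lines. This is a framework-free illustration of the training-stage logic only; all names (`FlaLLM`, `Module`, `light_head`, etc.) are hypothetical stand-ins, not the authors' code.

```python
# Minimal sketch of FLa-LLM's factorized training schedule, as described
# in the abstract. Names and structure are illustrative assumptions.

class Module:
    """A stand-in for a trainable network component."""
    def __init__(self, name):
        self.name = name
        self.trainable = True

    def freeze(self):
        self.trainable = False


class FlaLLM:
    def __init__(self):
        self.visual_encoder = Module("visual_encoder")
        self.light_head = Module("light_head")  # lightweight translation model
        self.llm = Module("llm")                # pre-trained LLM

    def visual_initialing_stage(self):
        # Stage 1: pre-train the visual encoder through a lightweight
        # translation head; the LLM is not involved yet.
        return [self.visual_encoder, self.light_head]

    def llm_finetuning_stage(self):
        # Stage 2: freeze the knowledge acquired by the visual encoder,
        # then fine-tune only the LLM on top of its frozen features.
        self.visual_encoder.freeze()
        self.light_head.freeze()
        return [self.llm]
```

The key point the abstract makes is the ordering: because a jointly trained LLM dominates the learning curve, the visual encoder is given its own optimization stage first and is then frozen before the LLM is introduced.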

BibTeX Export

@inproceedings{chen-etal-2024-factorized:lrec,
  author    = {Chen, Zhigang and Zhou, Benjia and Li, Jun and Wan, Jun and Lei, Zhen and Jiang, Ning and Lu, Quan and Zhao, Guoqing},
  title     = {Factorized Learning Assisted with Large Language Model for Gloss-free Sign Language Translation},
  pages     = {7071--7081},
  editor    = {Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen},
  booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation ({LREC-COLING} 2024)},
  publisher = {{ELRA Language Resources Association (ELRA) and the International Committee on Computational Linguistics (ICCL)}},
  address   = {Torino, Italy},
  day       = {20--25},
  month     = may,
  year      = {2024},
  isbn      = {978-2-493814-10-4},
  language  = {english},
  url       = {https://aclanthology.org/2024.lrec-main.620}
}