Perceptual Validation of 3D Pose-Guided Sign Language Synthesis
Maina, Ezekiel | Wanzare, Lilian | Obuhuma, James
- Volume: Proceedings of the LREC2026 12th Workshop on the Representation and Processing of Sign Languages: Language in Motion
- Venue: Palma, Mallorca, Spain
- Date: 16 May 2026
- Pages: 315–323
- Publisher: European Language Resources Association (ELRA)
- Licence: CC BY-NC 4.0
- sign-lang ID: 26067
- ISBN: 978-2-493814-82-1
Abstract
Sign language corpora face a structural tension between open-access requirements and the irreducible biometric identity embedded in visual, gestural data. While 3D pose estimation enables signer-agnostic abstraction, the representational adequacy of pose-based modeling for preserving linguistic structure remains underexplored. This paper introduces a perceptually grounded kinematic modeling framework that formalizes 3D landmark sequences as an intermediate linguistic representation and validates their adequacy through avatar-mediated synthesis and large-scale human evaluation. Using 30,370 gloss-level Kenyan Sign Language (KSL) segments derived from the AI4KSL corpus, we construct normalized 3D motion trajectories via MediaPipe Holistic. These trajectories are retargeted to parameterized avatars through a constrained kinematic mapping that preserves non-manual marker geometry and articulatory timing. We define a dual evaluation paradigm combining geometric fidelity metrics (PCK=92.7%, OKS=0.88, PCP=91.5%, PDJ>85.3%) with perceptual constructs measured across a statistically powered Deaf participant cohort (N=384). Results demonstrate a strong predictive relationship between structural joint precision and perceived gesture clarity (r=0.76, p<.01), suggesting that linguistic adequacy is partially recoverable from normalized kinematic structure. Furthermore, representational diversity in avatar instantiation significantly increases perceived inclusivity without degrading intelligibility. These findings establish pose-based motion abstraction not merely as an anonymization technique but as a viable corpus-level modeling layer for ethically sustainable language in motion.
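The geometric fidelity metrics named in the abstract (PCK and OKS) follow standard pose-estimation definitions; the short Python sketch below illustrates how such scores can be computed from predicted versus reference 3D keypoints. It is illustrative only: the threshold fraction, normalization length, and per-keypoint sigmas are placeholder assumptions, not values taken from the paper.

# Illustrative sketch only (assumed thresholds and sigmas, not the paper's settings).
import numpy as np

def pck(pred, gt, norm_length, alpha=0.2):
    # Percentage of Correct Keypoints: a joint counts as correct when its
    # Euclidean error falls below alpha * norm_length (a body-scale reference).
    dists = np.linalg.norm(pred - gt, axis=-1)          # (frames, joints)
    return float(np.mean(dists < alpha * norm_length))

def oks(pred, gt, scale, sigmas):
    # COCO-style Object Keypoint Similarity: Gaussian falloff of squared error,
    # scaled by object size and a per-keypoint tolerance sigma.
    dists_sq = np.sum((pred - gt) ** 2, axis=-1)        # (frames, joints)
    k = 2.0 * sigmas
    return float(np.mean(np.exp(-dists_sq / (2.0 * (scale ** 2) * (k ** 2) + 1e-9))))

# Hypothetical usage with stand-in data: 10 frames, 33 joints, 3D coordinates.
rng = np.random.default_rng(0)
gt = rng.normal(size=(10, 33, 3))
pred = gt + rng.normal(scale=0.02, size=gt.shape)
print("PCK:", pck(pred, gt, norm_length=1.0))
print("OKS:", oks(pred, gt, scale=1.0, sigmas=np.full(33, 0.05)))

The PCP and PDJ scores reported in the abstract are computed analogously, over limb segments and joint-wise detection thresholds respectively.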
Cite as (ACL citation format)
Ezekiel Maina, Lilian Wanzare, James Obuhuma. 2026. Perceptual Validation of 3D Pose-Guided Sign Language Synthesis. In Proceedings of the LREC2026 12th Workshop on the Representation and Processing of Sign Languages: Language in Motion, pages 315–323, Palma, Mallorca, Spain. European Language Resources Association (ELRA).
@inproceedings{maina:26067:sign-lang:lrec,
author = {Maina, Ezekiel and Wanzare, Lilian and Obuhuma, James},
title = {Perceptual Validation of {3D} Pose-Guided Sign Language Synthesis},
pages = {315--323},
editor = {Efthimiou, Eleni and Fotinea, Stavroula-Evita and Hanke, Thomas and Hochgesang, Julie A. and Mesch, Johanna and Schulder, Marc},
booktitle = {Proceedings of the {LREC2026} 12th Workshop on the Representation and Processing of Sign Languages: Language in Motion},
maintitle = {15th International Conference on Language Resources and Evaluation ({LREC} 2026)},
publisher = {{European Language Resources Association (ELRA)}},
address = {Palma, Mallorca, Spain},
day = {16},
month = may,
year = {2026},
isbn = {978-2-493814-82-1},
language = {english},
url = {https://www.sign-lang.uni-hamburg.de/lrec/pub/26067.html}
}