Proform structures such as classifier predicates have traditionally challenged Sign Language (SL) synthesis systems, particularly with respect to producing smooth, natural motion. To address this challenge, a synthesizer must leverage both a structured linguistic model that specifies the linguistic constraints of such constructs and an animation system capable of producing natural avatar motion within those constraints. The proposed system bridges two existing technologies, relying on AZee to encode both the form and the functional linguistic aspects of proform movements, and on the Paula avatar system to produce convincing human motion. The system extends the previously established principle that more natural motion arises from leveraging knowledge of larger structures in the linguistic description.
Keywords
Future sign language technology user interfaces such as avatar technology
@inproceedings{filhol:18024:sign-lang:lrec,
author = {Filhol, Michael and McDonald, John C.},
title = {Extending the {AZee-Paula} Shortcuts to Enable Natural Proform Synthesis},
pages = {45--52},
editor = {Bono, Mayumi and Efthimiou, Eleni and Fotinea, Stavroula-Evita and Hanke, Thomas and Hochgesang, Julie A. and Kristoffersen, Jette and Mesch, Johanna and Osugi, Yutaka},
booktitle = {Proceedings of the {LREC2018} 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community},
maintitle = {11th International Conference on Language Resources and Evaluation ({LREC} 2018)},
publisher = {{European Language Resources Association (ELRA)}},
address = {Miyazaki, Japan},
day = {12},
month = may,
year = {2018},
isbn = {979-10-95546-01-6},
language = {english},
url = {https://www.sign-lang.uni-hamburg.de/lrec/pub/18024.pdf}
}