Adding American Sign Language (ASL) animation to websites can improve information access for people who are deaf and have low levels of English literacy. Given a script representing the sequence of ASL signs, we must generate an animation, but a challenge is selecting an accurate speed and timing for the resulting animation. In this work, we analyzed motion-capture data recorded from human ASL signers to model the realistic timing of ASL movements, with a focus on where to insert prosodic breaks (pauses), based on sentence syntax and other features. Our methodology includes extracting data from a pre-existing ASL corpus at our lab, selecting suitable features, and building machine-learning models to predict where to insert pauses. We evaluated our model using cross-validation and compared various subsets of features. Our model achieved 80% accuracy at predicting pause locations, outperforming a baseline model on this task.
Keywords: Future sign language technology user interfaces such as avatar technology
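The pause-prediction setup described in the abstract (boundary features in, pause/no-pause out, evaluated with cross-validation) can be sketched as a binary classifier. This is a minimal illustration, not the authors' model: the feature names, synthetic data, and choice of logistic regression are all placeholder assumptions.

```python
# Hedged sketch: treat each boundary between adjacent signs as a binary
# classification instance (insert a pause or not). The features below are
# invented placeholders, not the actual corpus features from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy feature matrix: one row per inter-sign boundary.
# Hypothetical columns: [syntactic-boundary depth, signs since last pause,
# normalized position within the sentence].
n = 200
X = rng.random((n, 3))
# Toy labels: pauses made more likely at deeper syntactic boundaries.
y = (X[:, 0] + 0.1 * rng.standard_normal(n) > 0.6).astype(int)

model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
print(f"mean CV accuracy: {scores.mean():.2f}")
```

On this synthetic data the classifier scores well above a majority-class baseline, mirroring the kind of baseline comparison the abstract reports.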
@inproceedings{alkhazraji:18013:sign-lang:lrec,
author = {Al-khazraji, Sedeeq and Kafle, Sushant and Huenerfauth, Matt},
title = {Modeling and Predicting the Location of Pauses for the Generation of Animations of {American} {Sign} {Language}},
pages = {1--6},
editor = {Bono, Mayumi and Efthimiou, Eleni and Fotinea, Stavroula-Evita and Hanke, Thomas and Hochgesang, Julie A. and Kristoffersen, Jette and Mesch, Johanna and Osugi, Yutaka},
booktitle = {Proceedings of the {LREC2018} 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community},
maintitle = {11th International Conference on Language Resources and Evaluation ({LREC} 2018)},
publisher = {{European Language Resources Association (ELRA)}},
address = {Miyazaki, Japan},
day = {12},
month = may,
year = {2018},
isbn = {979-10-95546-01-6},
language = {english},
url = {https://www.sign-lang.uni-hamburg.de/lrec/pub/18013.pdf}
}