According to the World Wide Web Consortium (W3C), localization is “the adaptation of a product, application or document content to meet the language, cultural and other requirements of a specific target market”. One requirement for localizing a sign language avatar is the capability to produce convincing mouthing. For the purposes of this inquiry we distinguish between mouthings and mouth gestures: ‘mouthings’ are mouth movements derived from words of a spoken language, whereas ‘mouth gestures’ are mouth movements not derived from a spoken language. This effort focuses on the former.

The prevalence of mouthings varies across sign languages and individual signers. Although mouthings occur regularly in most sign languages, their significance and status have been the subject of sometimes heated debate among sign linguists. Whatever theoretical viewpoint one takes on the issue, however, one must acknowledge that mouthings do occur in most, if not all, sign languages. If an avatar purports to express a sign language fully and naturally, it must be able to express all aspects of the language, which will likely include mouthings.

Although most avatar systems were created for hearing communities, several technologies have emerged to improve speech recognition for people who are hard of hearing or who find themselves in noisy environments. These were not satisfactory for Deaf communities because they did not portray sign language. Initial efforts to incorporate mouthing in sign language avatars used a mouth picture, or viseme, for each letter of the International Phonetic Alphabet (IPA), but were hampered by a reliance on blend shapes. Muscle-based avatars have the advantage of avoiding the limitations of blend shapes. This paper reports on a first step toward identifying the requirements for extending a muscle-based avatar to incorporate mouthings in multiple sign languages.
Keywords
Future sign language technology user interfaces such as avatar technology
Avatar technology as a tool in sign language corpora and corpus data feeding into advances in avatar technology
Rosalee Wolfe, Thomas Hanke, Gabriele Langer, Elena Jahn, Satu Worseck, Julian Bleicken, John C. McDonald, Sarah Johnson. 2018. Exploring Localization for Mouthings in Sign Language Avatars. In Proceedings of the LREC2018 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community, pages 207–212, Miyazaki, Japan. European Language Resources Association (ELRA).
BibTeX
@inproceedings{wolfe:18023:sign-lang:lrec,
author = {Wolfe, Rosalee and Hanke, Thomas and Langer, Gabriele and Jahn, Elena and Worseck, Satu and Bleicken, Julian and McDonald, John C. and Johnson, Sarah},
title = {Exploring Localization for Mouthings in Sign Language Avatars},
pages = {207--212},
editor = {Bono, Mayumi and Efthimiou, Eleni and Fotinea, Stavroula-Evita and Hanke, Thomas and Hochgesang, Julie A. and Kristoffersen, Jette and Mesch, Johanna and Osugi, Yutaka},
booktitle = {Proceedings of the {LREC2018} 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community},
maintitle = {11th International Conference on Language Resources and Evaluation ({LREC} 2018)},
publisher = {{European Language Resources Association (ELRA)}},
address = {Miyazaki, Japan},
day = {12},
month = may,
year = {2018},
isbn = {979-10-95546-01-6},
language = {english},
url = {https://www.sign-lang.uni-hamburg.de/lrec/pub/18023.pdf}
}