sign-lang@LREC Anthology

Exploring Localization for Mouthings in Sign Language Avatars

Wolfe, Rosalee | Hanke, Thomas | Langer, Gabriele | Jahn, Elena | Worseck, Satu | Bleicken, Julian | McDonald, John C. | Johnson, Sarah

Proceedings of the LREC2018 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community
Miyazaki, Japan
12 May 2018
European Language Resources Association (ELRA)
CC BY-NC 4.0

Content Categories

DGS Corpus project
German Sign Language


According to the World Wide Web Consortium (W3C), localization is “the adaptation of a product, application or document content to meet the language, cultural and other requirements of a specific target market”. One requirement necessary for localizing a sign language avatar is the capability to produce convincing mouthing. For the purposes of this inquiry we make a distinction between mouthings and mouth gestures. The term ‘mouthings’ refers to mouth movements derived from words of a spoken language, whereas ‘mouth gestures’ refers to mouth movements not derived from a spoken language. This effort focuses on the former. The prevalence of mouthings varies across different sign languages and individual signers. Although mouthings occur regularly in most sign languages, their significance and status have been a matter of sometimes heated discussion among sign linguists. However, no matter the theoretical viewpoint one takes on the issue of mouthing, one must acknowledge that for most if not all sign languages, mouthings do occur. If an avatar purports to fully and naturally express any sign language, it must have the capacity to express all aspects of the language, which will likely include mouthings. Although most avatar systems were created for hearing communities, several technologies have emerged to improve speech recognition for those who are hard of hearing or who find themselves in noisy environments. These were not satisfactory for Deaf communities, as they did not portray sign language. Initial efforts to incorporate mouthing in sign language avatars utilized a mouth picture or viseme for each symbol of the International Phonetic Alphabet (IPA), but were hampered by a reliance on blend shapes. Muscle-based avatars have the advantage of avoiding the limitations of blend shapes. This paper reports on a first step to identify the requirements for extending a muscle-based avatar to incorporate mouthings in multiple sign languages.



BibTeX Export

@inproceedings{wolfe-etal-2018-mouthings,
  author    = {Wolfe, Rosalee and Hanke, Thomas and Langer, Gabriele and Jahn, Elena and Worseck, Satu and Bleicken, Julian and McDonald, John C. and Johnson, Sarah},
  title     = {Exploring Localization for Mouthings in Sign Language Avatars},
  pages     = {207--212},
  editor    = {Bono, Mayumi and Efthimiou, Eleni and Fotinea, Stavroula-Evita and Hanke, Thomas and Hochgesang, Julie A. and Kristoffersen, Jette and Mesch, Johanna and Osugi, Yutaka},
  booktitle = {Proceedings of the {LREC2018} 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community},
  maintitle = {11th International Conference on Language Resources and Evaluation ({LREC} 2018)},
  publisher = {{European Language Resources Association (ELRA)}},
  address   = {Miyazaki, Japan},
  day       = {12},
  month     = may,
  year      = {2018},
  isbn      = {979-10-95546-01-6},
  language  = {english},
  url       = {}
}