sign-lang@LREC Anthology

Exploring Localization for Mouthings in Sign Language Avatars

Wolfe, Rosalee | Hanke, Thomas | Langer, Gabriele | Jahn, Elena | Worseck, Satu | Bleicken, Julian | McDonald, John C. | Johnson, Sarah


Volume:
Proceedings of the LREC2018 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community
Venue:
Miyazaki, Japan
Date:
12 May 2018
Pages:
207–212
Publisher:
European Language Resources Association (ELRA)
License:
CC BY-NC 4.0
sign-lang ID:
18023
ISBN:
979-10-95546-01-6

Content Categories

Projects:
DGS Corpus project
Languages:
German Sign Language
Avatars:
Paula

Abstract

According to the World Wide Web Consortium (W3C), localization is “the adaptation of a product, application or document content to meet the language, cultural and other requirements of a specific target market”. One requirement necessary for localizing a sign language avatar is creating a capability to produce convincing mouthing. For purposes of this inquiry we make a distinction between mouthings and mouth gesture. The term ‘mouthings’ refers to mouth movements derived from words of a spoken language whereas ‘mouth gesture’ refers to mouth movements not derived from a spoken language. This effort focuses on the former. The prevalence of mouthings varies across different sign languages and individual signers. Although mouthings occur regularly in most sign languages, their significance and status have been a matter of sometimes heated discussions among sign linguists. However, no matter the theoretical viewpoint one takes on the issue of mouthing, one must acknowledge that for most if not all sign languages, mouthings do occur. If an avatar purports to fully and naturally express any sign language, it must have the capacity to express all aspects of the language, which likely will include mouthings. Although most avatar systems were created for hearing communities, several technologies have emerged to improve speech recognition for those who are hard-of-hearing or who find themselves in noisy environments. These were not satisfactory for Deaf communities as they did not portray sign language. Initial efforts to incorporate mouthing in sign language avatars utilized a mouth picture or viseme for each letter of the International Phonetic Alphabet (IPA), but were hampered by a reliance on blend shapes. Muscle-based avatars have the advantage of avoiding the limitations of blend shapes. This paper reports on a first step to identify the requirements for extending a muscle-based avatar to incorporate mouthings in multiple sign languages.
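The abstract notes that initial efforts assigned a mouth picture (viseme) to each letter of the IPA. A minimal illustrative sketch of such a lookup follows; the table entries and viseme names here are invented for illustration and are not taken from the paper:

```python
# Hypothetical IPA-symbol-to-viseme lookup, sketching the kind of mapping
# early avatar mouthing systems used. Table contents are illustrative only.
IPA_TO_VISEME = {
    "p": "bilabial_closed", "b": "bilabial_closed", "m": "bilabial_closed",
    "f": "labiodental", "v": "labiodental",
    "a": "open_wide", "i": "spread", "u": "rounded",
}

def mouthing_sequence(ipa_symbols):
    """Return one viseme per IPA symbol, falling back to 'neutral'."""
    return [IPA_TO_VISEME.get(s, "neutral") for s in ipa_symbols]
```

For example, the IPA sequence for German "Maus" would yield a closed-lips shape, an open vowel shape, a rounded shape, and a neutral fallback for the fricative not in this toy table.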


BibTeX Export

@inproceedings{wolfe:18023:sign-lang:lrec,
  author    = {Wolfe, Rosalee and Hanke, Thomas and Langer, Gabriele and Jahn, Elena and Worseck, Satu and Bleicken, Julian and McDonald, John C. and Johnson, Sarah},
  title     = {Exploring Localization for Mouthings in Sign Language Avatars},
  pages     = {207--212},
  editor    = {Bono, Mayumi and Efthimiou, Eleni and Fotinea, Stavroula-Evita and Hanke, Thomas and Hochgesang, Julie A. and Kristoffersen, Jette and Mesch, Johanna and Osugi, Yutaka},
  booktitle = {Proceedings of the {LREC2018} 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community},
  maintitle = {11th International Conference on Language Resources and Evaluation ({LREC} 2018)},
  publisher = {{European Language Resources Association (ELRA)}},
  address   = {Miyazaki, Japan},
  day       = {12},
  month     = may,
  year      = {2018},
  isbn      = {979-10-95546-01-6},
  language  = {english},
  url       = {https://www.sign-lang.uni-hamburg.de/lrec/pub/18023.pdf}
}