
Building French Sign Language Motion Capture Corpora for Signing Avatars

Gibet, Sylvie


Volume: Proceedings of the LREC2018 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community
Venue: Miyazaki, Japan
Date: 12 May 2018
Pages: 53–58
Publisher: European Language Resources Association (ELRA)
License: CC BY-NC 4.0
sign-lang ID: 18020
ISBN: 979-10-95546-01-6

Content Categories

Projects: HuGEx, Sign3D, SignCom
Languages: French Sign Language
Corpora: Sign3D

Abstract

The design of traditional corpora for linguistic analysis aims to provide living representations of sign languages for deaf communities and linguistic researchers. Most of the time, the sign language data is video-recorded and then encoded in a standardized and homogeneous structure for open-ended analysis (statistical or phonological studies). With such structures, sign language corpora are described and annotated in terms of linguistic components, including phonology, morphology, and syntax. Conversely, motion capture (MoCap) corpora provide researchers with the data necessary to carry out finer-grained studies of movement, thus allowing precise, quantitative analysis of sign language gestures as well as sign language (SL) generation. On the one hand, motion data serves to validate and reinforce existing theories on the phonology of sign languages. By temporally aligning motion trajectories with labelled linguistic information, it becomes possible to study the influence of movement articulation on the linguistic aspects of SL, including hand configuration, hand movement, co-articulation, or synchronization within and across phonological channels. On the other hand, generation pertains to sign production using animated virtual characters, usually called signing avatars. Although MoCap technology presents exciting future directions for sign language studies, tightly interlinking language components and signals, it still requires high technical skill for recording and post-processing data, and many challenges remain unresolved, such as the need to simultaneously record body and hand motion, facial expressions, and gaze direction. Therefore, few MoCap corpora have so far been developed in the field of sign language studies. Some of them are dedicated to the analysis of articulation and prosodic aspects of sign languages, whereas recent interest in avatar technology has led to the development of corpora for data-driven synthesis. This paper describes four corpora that have been designed and built in our research team. These corpora have been recorded using MoCap and video equipment, and annotated according to multi-tier linguistic templates. Each corpus has been designed for a specific linguistic purpose and is dedicated to data-driven synthesis: replacing signs or groups of signs, composing phonetic or phonological components, or altering prosody in the produced sign language utterances.

Document Download

Paper PDF: https://www.sign-lang.uni-hamburg.de/lrec/pub/18020.pdf

BibTeX Export

@inproceedings{gibet:18020:sign-lang:lrec,
  author    = {Gibet, Sylvie},
  title     = {Building {French} {Sign} {Language} Motion Capture Corpora for Signing Avatars},
  pages     = {53--58},
  editor    = {Bono, Mayumi and Efthimiou, Eleni and Fotinea, Stavroula-Evita and Hanke, Thomas and Hochgesang, Julie A. and Kristoffersen, Jette and Mesch, Johanna and Osugi, Yutaka},
  booktitle = {Proceedings of the {LREC2018} 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community},
  maintitle = {11th International Conference on Language Resources and Evaluation ({LREC} 2018)},
  publisher = {{European Language Resources Association (ELRA)}},
  address   = {Miyazaki, Japan},
  day       = {12},
  month     = may,
  year      = {2018},
  isbn      = {979-10-95546-01-6},
  language  = {english},
  url       = {https://www.sign-lang.uni-hamburg.de/lrec/pub/18020.pdf}
}