sign-lang@LREC Anthology

How to use depth sensors in sign language corpus recordings

Jayaprakash, Rekha | Hanke, Thomas


Volume:
Proceedings of the LREC2014 6th Workshop on the Representation and Processing of Sign Languages: Beyond the Manual Channel
Venue:
Reykjavik, Iceland
Date:
31 May 2014
Pages:
77–80
Publisher:
European Language Resources Association (ELRA)
License:
CC BY-NC 4.0
sign-lang ID:
14024

Content Categories

Projects:
DGS Corpus project

Abstract

Recently, combined camera and depth sensor devices have brought substantial advances in Computer Vision that are directly applicable to automatically coding a signer's use of head movement, eye gaze, and, to some extent, facial expression. Automatic or even semi-automatic annotation of nonmanuals would mean dramatic savings in annotation time and is therefore of high interest for anyone working on sign language corpora. Optimally, these devices need to be placed directly in front of the signer's face at a distance of less than 1 m. While this might be acceptable for some experimental setups, it is definitely not an option in a corpus setting, for at least two reasons: (i) the signer looks at the device instead of into the eyes of an interlocutor, and (ii) the device is in the field of view of other cameras used to record the signer's manual and nonmanual behaviour. Here we report on experiments determining the degradation in performance when moving the devices away from their optimal positions in order to achieve a recording setup acceptable in a corpus context. For these experiments, we used two different device types (Kinect and Carmine 1.09) in combination with one mature CV software package specialised in face recognition (FaceShift). We speculate about the reasons for the asymmetries detected and how they could be resolved. We then apply the results to the studio setting used in the DGS Corpus project, show how the signers' and cameras' fields of view are influenced by introducing the new devices, and are happy to discuss the acceptability of this approach.
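To make the distance trade-off described in the abstract concrete, the short Python sketch below estimates how many depth pixels fall across a signer's face as the sensor is moved beyond its optimal sub-1 m position. The field-of-view and resolution figures are nominal values for a Kinect v1 / Carmine 1.09 class sensor, and the 0.15 m face width is an assumption chosen only for illustration; the paper itself measures the degradation empirically with FaceShift rather than from this geometric model.

import math

# Nominal depth-camera parameters (assumptions for illustration only):
# Kinect v1 and Carmine 1.09 both deliver roughly 640x480 depth frames
# with a horizontal field of view of about 57 degrees.
H_RES_PX = 640          # horizontal depth resolution in pixels
H_FOV_DEG = 57.0        # horizontal field of view in degrees
FACE_WIDTH_M = 0.15     # assumed width of the face region in metres


def face_pixels(distance_m: float) -> float:
    """Approximate number of depth pixels covering the face at a given distance."""
    # Width of the scene covered by the sensor at this distance (pinhole model).
    scene_width_m = 2.0 * distance_m * math.tan(math.radians(H_FOV_DEG) / 2.0)
    return FACE_WIDTH_M / scene_width_m * H_RES_PX


if __name__ == "__main__":
    # Compare the optimal close-up placement with positions further away,
    # as would be needed to keep the device out of the signer's eye line
    # and out of the other cameras' views.
    for d in (0.7, 1.0, 1.5, 2.0):
        print(f"{d:.1f} m -> ~{face_pixels(d):.0f} px across the face")

Under these assumed parameters, doubling the distance roughly halves the number of depth pixels sampling the face, which illustrates the kind of performance degradation the experiments quantify for the displaced sensor positions.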


BibTeX Export

@inproceedings{jayaprakash:14024:sign-lang:lrec,
  author    = {Jayaprakash, Rekha and Hanke, Thomas},
  title     = {How to use depth sensors in sign language corpus recordings},
  pages     = {77--80},
  editor    = {Crasborn, Onno and Efthimiou, Eleni and Fotinea, Stavroula-Evita and Hanke, Thomas and Kristoffersen, Jette and Mesch, Johanna},
  booktitle = {Proceedings of the {LREC2014} 6th Workshop on the Representation and Processing of Sign Languages: Beyond the Manual Channel},
  maintitle = {9th International Conference on Language Resources and Evaluation ({LREC} 2014)},
  publisher = {{European Language Resources Association (ELRA)}},
  address   = {Reykjavik, Iceland},
  day       = {31},
  month     = may,
  year      = {2014},
  language  = {english},
  url       = {https://www.sign-lang.uni-hamburg.de/lrec/pub/14024.pdf}
}