sign-lang@LREC Anthology

Semi-automatic Annotation of Sign Language Corpora

Hrúz, Marek | Campr, Pavel | Železný, Miloš


Volume:
Proceedings of the LREC2008 3rd Workshop on the Representation and Processing of Sign Languages: Construction and Exploitation of Sign Language Corpora
Venue:
Marrakech, Morocco
Date:
1 June 2008
Pages:
78–81
Publisher:
European Language Resources Association (ELRA)
License:
CC BY-NC
sign-lang ID:
08013

Abstract

The first step of automatic sign language recognition is feature extraction. It has been shown that the following features are sufficient for successful classification of a sign: hand shape, orientation of the hand in space, trajectory of the hands, and the non-manual component of the utterance (facial expression, articulation). Usually the efficiency of the feature-extraction algorithm is evaluated via the recognition rate of the whole system. This approach can be misleading, since the researcher cannot always be sure which part of the system is failing. If corpora were available with a detailed annotation of these features, the evaluation would be more precise. However, manual creation of annotation data is very time-consuming. We therefore propose a semi-automatic tool for annotating the trajectory of the head and hands and the shape of the hands.
For the purpose of extracting the hand trajectories, a tracker was developed. In our case the tracker is based on the similarity of a scalar description of objects. We describe each object by the seven Hu moments of its contour, a grayscale image (template), position, velocity, contour perimeter, bounding-box area, and contour area. For every new frame, all objects in the image are detected and filtered. Each tracker computes the similarity between the tracked object and every candidate object. As long as the tracker's certainty is above a threshold, its output is treated as ground truth: all available data are collected from the object and saved as annotation. If the level of uncertainty is high, the user is asked to verify the tracking.
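The similarity-based matching described above could be sketched as follows. This is a minimal illustration, not the authors' implementation: the descriptor fields mirror the features listed in the abstract (the grayscale template is omitted), while the distance weights, the position scale, and the `1 / (1 + d)` mapping to a similarity score are all assumptions made for the sketch.

```python
import math
from dataclasses import dataclass

@dataclass
class ObjectDescriptor:
    # Scalar description of a detected object, following the abstract:
    # seven Hu moments of the contour, position, velocity, contour
    # perimeter, bounding-box area, and contour area.
    hu: tuple          # seven Hu moments of the contour
    position: tuple    # (x, y) centroid in pixels
    velocity: tuple    # (dx, dy) displacement per frame
    perimeter: float
    bbox_area: float
    contour_area: float

def similarity(tracked: ObjectDescriptor, candidate: ObjectDescriptor,
               weights=(1.0, 1.0, 0.5, 0.5, 0.5)) -> float:
    """Similarity in (0, 1]; 1.0 means the descriptors match exactly.
    Feature distances are combined into a weighted sum d and mapped
    through 1 / (1 + d), so larger distances give lower similarity.
    The weights and normalizations here are illustrative assumptions."""
    # Predict the next position from the previous velocity estimate.
    px = tracked.position[0] + tracked.velocity[0]
    py = tracked.position[1] + tracked.velocity[1]
    d_pos = math.hypot(candidate.position[0] - px,
                       candidate.position[1] - py)

    # Compare Hu-moment vectors on a signed log scale, since the
    # moments span many orders of magnitude.
    def logmap(v):
        return [math.copysign(math.log10(abs(x)), x) if x != 0 else 0.0
                for x in v]
    d_hu = sum(abs(a - b)
               for a, b in zip(logmap(tracked.hu), logmap(candidate.hu)))

    # Relative difference for the remaining scalar features.
    def rel(a, b):
        return abs(a - b) / max(a, b, 1e-9)

    d = (weights[0] * d_hu +
         weights[1] * d_pos / 100.0 +   # assumed position scale (pixels)
         weights[2] * rel(tracked.perimeter, candidate.perimeter) +
         weights[3] * rel(tracked.bbox_area, candidate.bbox_area) +
         weights[4] * rel(tracked.contour_area, candidate.contour_area))
    return 1.0 / (1.0 + d)
```

A tracker would evaluate `similarity` against every filtered object in the new frame, adopt the best match while the score stays above a confidence threshold, and hand control to the annotator when it drops below.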
If a perfect tracker were available, all the annotation could be created automatically, but trackers usually fail when objects occlude one another. The system must therefore be able to detect occlusions and have the user verify the resulting tracking. In our system we assume that the bounding box of an occluded object becomes relatively larger in the first frame of occlusion and relatively smaller in the first frame after the occlusion, so we use the bounding-box area as the feature that signals an occlusion.
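The bounding-box heuristic above can be sketched as a simple frame-to-frame ratio test. The grow/shrink thresholds are assumed values for illustration; the paper does not specify them.

```python
def detect_occlusion_events(bbox_areas, grow=1.5, shrink=0.67):
    """Flag frames where the tracked object's bounding-box area jumps.

    A sudden relative growth suggests two objects merged into one box
    (first frame of occlusion); a sudden relative shrink suggests they
    separated again (first frame after occlusion). Returns a list of
    (frame_index, event) tuples. Thresholds are illustrative.
    """
    events = []
    for i in range(1, len(bbox_areas)):
        ratio = bbox_areas[i] / bbox_areas[i - 1]
        if ratio >= grow:
            events.append((i, "occlusion_start"))
        elif ratio <= shrink:
            events.append((i, "occlusion_end"))
    return events
```

Frames flagged this way would be the ones presented to the annotator for manual verification, while the unflagged stretches are accepted automatically.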
So far, annotation through the tracker allows us to semi-automatically obtain the trajectory of the head and hands and the shape of the hands. In the future we will extend the system to determine the orientation of the hands and combine it with a lip-reading system that we already have ready for use. The obtained parameters can then serve as ground-truth data for evaluating feature-extraction algorithms.


BibTeX Export

@inproceedings{hruz:08013:sign-lang:lrec,
  author    = {Hr{\'u}z, Marek and Campr, Pavel and {\v Z}elezn{\'y}, Milo{\v s}},
  title     = {Semi-automatic Annotation of Sign Language Corpora},
  pages     = {78--81},
  editor    = {Crasborn, Onno and Efthimiou, Eleni and Hanke, Thomas and Thoutenhoofd, Ernst D. and Zwitserlood, Inge},
  booktitle = {Proceedings of the {LREC2008} 3rd Workshop on the Representation and Processing of Sign Languages: Construction and Exploitation of Sign Language Corpora},
  maintitle = {6th International Conference on Language Resources and Evaluation ({LREC} 2008)},
  publisher = {{European Language Resources Association (ELRA)}},
  address   = {Marrakech, Morocco},
  day       = {1},
  month     = jun,
  year      = {2008},
  language  = {english},
  url       = {https://www.sign-lang.uni-hamburg.de/lrec/pub/08013.pdf}
}