Video-to-HamNoSys Automated Annotation System
Skobov, Victor | Lepage, Yves
- Volume: Proceedings of the LREC2020 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives
- Venue: Marseille, France
- Date: 16 May 2020
- Pages: 209–216
- Publisher: European Language Resources Association (ELRA)
- License: CC BY-NC 4.0
- sign-lang ID: 20001
- ACL ID: 2020.signlang-1.34
- ISBN: 979-10-95546-54-2
Content Categories
- Corpora: DGS Corpus
- Avatars: JASigning
- Writing Systems: HamNoSys
Abstract
The Hamburg Notation System (HamNoSys) was developed for movement annotation of any sign language (SL) and can be used to produce signing animations for a virtual avatar with the JASigning platform. This provides the potential to use HamNoSys, i.e., strings of characters, as a representation of an SL corpus instead of video material. Processing strings of characters instead of images can significantly contribute to sign language research. However, the complexity of HamNoSys makes it difficult to annotate without considerable time and effort. Therefore, annotation has to be automated. This work proposes a conceptually new approach to this problem. It includes a new tree representation of the HamNoSys grammar that serves as a basis for the generation of grammatical training data and for the classification of complex movements using machine learning. Our automatic annotation system relies on the HamNoSys grammar structure and can potentially be used on already existing SL corpora. It is retrainable for specific settings such as camera angles, speed, and gestures. Our approach is conceptually different from other SL recognition solutions and offers a developed methodology for future research.
Keywords
- Machine / Deep Learning – Machine Learning methods both in the visual domain and on linguistic annotation of sign language data
- Machine / Deep Learning – Human-computer interfaces to sign language data and sign language annotation profiting from Machine Learning
- Annotation and Visualization Tools
- Machine / Deep Learning – How to get along with the size of sign language resources actually existing
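The abstract's central idea of using a tree representation of the HamNoSys grammar to generate grammatical training data can be illustrated with a small sketch. The grammar fragment below is a toy stand-in: the symbol names and production rules are invented for illustration and are not the actual HamNoSys grammar or the paper's implementation; only the general technique (recursively expanding a grammar tree to sample well-formed strings) is shown.

```python
import random

# Hypothetical toy grammar fragment, as nonterminal -> list of expansions.
# These symbols and rules are invented for illustration; the real HamNoSys
# grammar tree in the paper is far larger and notation-specific.
TOY_GRAMMAR = {
    "sign": [["handshape", "location", "movement"]],
    "handshape": [["flat"], ["fist"], ["index"]],
    "location": [["head"], ["chest"], ["neutral-space"]],
    "movement": [["straight"], ["circular"], ["straight", "circular"]],
}

def generate(symbol, rng):
    """Recursively expand a symbol into a flat list of terminal tokens.

    Because every expansion follows a grammar rule, every sampled string
    is grammatical by construction -- the property that makes such samples
    usable as training data.
    """
    if symbol not in TOY_GRAMMAR:  # terminal: emit as-is
        return [symbol]
    expansion = rng.choice(TOY_GRAMMAR[symbol])
    tokens = []
    for child in expansion:
        tokens.extend(generate(child, rng))
    return tokens

rng = random.Random(0)
samples = [" ".join(generate("sign", rng)) for _ in range(3)]
for s in samples:
    print(s)
```

Each printed sample is a well-formed "sign" under the toy rules, e.g. a handshape token followed by a location and one or two movement tokens; scaling this up to the full notation grammar is the generation step the abstract describes.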
Cite as
Citation in ACL Citation Format
Victor Skobov, Yves Lepage. 2020. Video-to-HamNoSys Automated Annotation System. In Proceedings of the LREC2020 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives, pages 209–216, Marseille, France. European Language Resources Association (ELRA).
BibTeX Export
@inproceedings{skobov:20001:sign-lang:lrec,
  author    = {Skobov, Victor and Lepage, Yves},
  title     = {{Video-to-HamNoSys} Automated Annotation System},
  pages     = {209--216},
  editor    = {Efthimiou, Eleni and Fotinea, Stavroula-Evita and Hanke, Thomas and Hochgesang, Julie A. and Kristoffersen, Jette and Mesch, Johanna},
  booktitle = {Proceedings of the {LREC2020} 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives},
  maintitle = {12th International Conference on Language Resources and Evaluation ({LREC} 2020)},
  publisher = {{European Language Resources Association (ELRA)}},
  address   = {Marseille, France},
  day       = {16},
  month     = may,
  year      = {2020},
  isbn      = {979-10-95546-54-2},
  language  = {english},
  url       = {https://www.sign-lang.uni-hamburg.de/lrec/pub/20001.pdf}
}