In this paper we report on a research effort focusing on the recognition of static features of sign formation in single-sign videos. Three sequential models have been developed, for handshape, palm orientation, and location of sign formation respectively, all of which make use of key-points extracted with the OpenPose software. The models have been applied to a Danish and a Greek Sign Language dataset, yielding accuracies of around 96%. Moreover, during the reported research, a method has been developed for identifying the time frame of actual signing in a video, which makes it possible to ignore transition frames during sign recognition.
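The paper does not spell out its signing-interval detection here, but the idea of separating actual signing from transition frames can be sketched from key-point motion alone. The following is a minimal illustration, not the authors' method: it assumes per-frame wrist coordinates (e.g. taken from OpenPose output) and treats low-motion frames at the start and end of the clip as rest/transition frames; the function name and threshold are hypothetical.

```python
import numpy as np

def signing_interval(wrist_xy, threshold=0.01):
    """Return (start, end) frame indices of the active signing segment.

    wrist_xy: (T, 2) array of normalised wrist key-point coordinates,
    one row per video frame. Hypothetical sketch, not the paper's method.
    """
    # Per-frame displacement of the wrist key-point (T-1 values).
    motion = np.linalg.norm(np.diff(wrist_xy, axis=0), axis=1)
    # Frames whose displacement exceeds the threshold count as signing.
    active = np.flatnonzero(motion > threshold)
    if active.size == 0:
        return None  # no motion detected at all
    # diff index i describes the move from frame i to frame i + 1.
    return int(active[0]), int(active[-1] + 1)
```

In practice one would smooth the motion signal and combine several key-points (both wrists, fingers) before thresholding; the single-wrist version above only shows the principle.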
Keywords
Machine / Deep Learning – Machine Learning methods both in the visual domain and on linguistic annotation of sign language data
Machine / Deep Learning – Coping with the limited size of existing sign language resources
@inproceedings{koulierakis:20035:sign-lang:lrec,
author = {Koulierakis, Ioannis and Siolas, Georgios and Efthimiou, Eleni and Fotinea, Stavroula-Evita and Stafylopatis, Andreas-Georgios},
title = {Recognition of Static Features in Sign Language Using Key-Points},
pages = {123--126},
editor = {Efthimiou, Eleni and Fotinea, Stavroula-Evita and Hanke, Thomas and Hochgesang, Julie A. and Kristoffersen, Jette and Mesch, Johanna},
booktitle = {Proceedings of the {LREC2020} 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives},
maintitle = {12th International Conference on Language Resources and Evaluation ({LREC} 2020)},
publisher = {{European Language Resources Association (ELRA)}},
address = {Marseille, France},
day = {16},
month = may,
year = {2020},
isbn = {979-10-95546-54-2},
language = {english},
url = {https://www.sign-lang.uni-hamburg.de/lrec/pub/20035.pdf}
}