6th Workshop on Sign Language Translation and Avatar Technology

Hamburg, Germany, September 29, 2019

Recent advances in virtual character technology and the broader trend toward a 3D internet have the potential to make the internet fully accessible to deaf/Deaf users and to provide key tools that facilitate participation in a hearing world. Prior work on sign language translation using avatars has explored an impressive range of methods across many languages and application domains.

Future projects will profit from a joint effort by international experts to look back on the work done and to identify and specify the problems that remain unsolved, or only partially solved, in specific areas such as symbolic translation and sign language animation. The motivation is threefold: first, discussing problems in detail will reveal potential solutions; second, it will foster an exchange of ideas across sign language boundaries; third, a common agreement on the problems should improve the comparability of results and may even lead to objective benchmarks.

We focus on three main topics: symbolic translation of sign language, animation of sign language using avatars, and usability evaluation of practical translation and animation systems.

The format encourages discussion and collaboration among researchers. The program mixes oral/signed presentations with poster presentations covering both active work and proposed research, and offers opportunities to demonstrate existing systems and to present videos.

The workshop takes place directly after TISLR at the Institute of German Sign Language and Communication of the Deaf, Gorch-Fock-Wall 7, 20354 Hamburg.

Workshop languages are English and International Sign.



Topics of interest include:

  • Lexicographic and linguistic approaches
  • Use of corpora to inform translation
  • Modelling of signing space for translation
  • Handling of productive signs, classifier constructions, and constructed action
  • Incorporation of non-manual features in production
  • Consideration of emotional and prosodic aspects
  • Requirements for signing avatar technology
  • Linguistically-informed notations for gestural animation
  • Use of corpora to inform animation
  • Realistic animation of manual and bodily gestures
  • Flexibility in animation of facial gestures and mouthing
  • Evaluation results for practical translation systems
  • Evaluation methodologies for signing translation systems
  • Realism and acceptability of signing avatars
  • Efficient content creation and editing tools for signing texts

Organizers: Rosalee Wolfe (DePaul University, Chicago) and Thomas Hanke (Universität Hamburg)

Location: Gorch-Fock-Wall 7, Room A0020

Please note that from 09:00-11:00, the DGS-Korpus Release 2 presentation takes place in the same room. Please join us for breakfast and discussions! Registration also starts at 09:00 in room A0018.


11:30-11:40 Opening Remarks
11:40-12:45 Presentations
Gesture Recognition using Keypoints Detection in the Context of Sign Language Translation (Ioannis Koulierakis, Georgios Siolas, Andreas-Georgios Stafylopatis, Eleni Efthimiou and Stavroula-Evita Fotinea)
An Improved Avatar for Automatic Mouth Gesture Recognition (Ronan Johnson, Maren Brumm and Rosalee Wolfe)
Towards Automatic Sign Language Corpus Annotation Using Deep Learning (Mathieu De Coster, Mieke Van Herreweghe and Joni Dambre)
12:45-14:15 Lunch Break
14:15-15:00 Presentations
Signing Avatar Motion: Combining Naturality and Anonymity (Félix Bigand, Annelies Braffort, Elise Prigent and Bastien Berret)
A Model for Animating Adverbs of Manner in American Sign Language (Robyn Moncrief)
15:00-16:00 Posters
The Case for Avatar Makeup (Rosalee Wolfe, Elena Jahn, Ronan Johnson and John McDonald)
Evolution of a Solution for High-fidelity 3D Recording and Avatar Animation in Sign Language (Boris Dauriac and Rémi Brun)
ExTOL: Automatic Recognition of British Sign Language using the BSL Corpus (Kearsy Cormier, Neil Fox, Bencie Woll, Andrew Zisserman, Necati Cihan Camgoz and Richard Bowden)
Designing an Interface to Support the Creation of Animations of Individual ASL Signs (Spandana Jaggumantri, Sedeeq Al-Khazraji, Abraham Glasser and Matt Huenerfauth)
Characterising the Phonological Parameters of Irish Sign Language in a Linguistically Motivated Computational Model (Irene Murtagh)
16:00-16:45 Coffee Break
16:45-17:50 Presentations
Computer-assisted Sign Language translation: a study of translators’ practice to specify CAT software (Marion Kaczmarek and Michael Filhol)
Case Study: Avatar sign for Emergency Announcements: Template-based Avatar Sign Language Translation for In-train and Station Announcements (Kevin Lee and Stephen Ko)
Fine Tuning Dynamics in Contextualized Classifier Constructs from Linguistic Descriptions (John McDonald and Michael Filhol)
17:50-18:00 Closing Remarks