FAQs about digital sign language translation - research project AVASAG


FAQ AVASAG

In connection with our AVASAG webinars, you will find here frequently asked questions about sign language translation, the sign language avatar, the project partners, project participation, and similar topics relating to the research project AVASAG. Please feel free to contact us if you have any further questions.

1) What does AVASAG stand for?
► AVASAG stands for Avatar-based Speech Assistant for Automated Sign Translation.

2) What is AVASAG?
► The AVASAG project, funded by the German Federal Ministry of Education and Research (BMBF), generates data on sign animation. These data are used to train an artificial intelligence: the larger the database, the more capable the translation function becomes. Building up these versatile translation capabilities for different application areas is the basis for the sign language avatar. There is currently no usable data for German Sign Language, which is why AVASAG, together with its partners, is producing versatile data for different application scenarios in advance.

3) Who are the partners in the joint project?
► Charamel GmbH (project management and specialist for interactive avatar-based assistance systems), yomma GmbH (experts for sign language), Ergosign GmbH (pioneer for user experience design), Deutsches Forschungszentrum für Künstliche Intelligenz [DFKI] GmbH (research areas "Cognitive Assistance Systems", "Language Technology and Multilingualism"), Technische Hochschule Köln (Institute for Media and Photo Technology) and the University of Augsburg (Human Centered Multimedia, HCM) are jointly carrying out the research and development.

4) Where in Germany are the project partners located?
► The collaborating project partners are based in Cologne, Hamburg, Saarbrücken, and Augsburg. Associated partners, i.e. users who want to use the sign language avatar and support the project, are located all over Germany. In addition, we work with the sign language community, which is mainly located in the regions of the project partners; however, we are happy to hear from anyone who would like to participate.

5) What are BITV 2.0 and BFSG?
► Building on the Disability Equality Act (BGG, Behindertengleichstellungsgesetz), updated on June 2, 2021, and the Barrier-Free Information Technology Ordinance (BITV 2.0, Barrierefreie-Informationstechnik-Verordnung), updated on May 21, 2019, the Accessibility Act (BFG, Barrierefreiheitsgesetz), also known as the Barrier-Free Strengthening Act (BFSG, Barrierefreiheitsstärkungsgesetz), of July 16, 2021, passed by the German Bundestag, implements an EU directive to reduce barriers to accessing information and communication. The goal is to make everyday digital applications on computers, tablets, smartphones, ATMs, and ticket machines usable without barriers; companies and public authorities must therefore provide a digital infrastructure free of hurdles.

6) How can I meet the requirements of the BITV 2.0 and the BFSG in the case of sign language? And what is the situation here with real-time representation?
► Common solutions are pictograms, easy language, images, videos, etc. Videos of sign language interpreters are static solutions: once recorded, they have to be re-filmed whenever the content changes. Sign language avatars offer an alternative form of presentation in which artificial intelligence (AI) is used to translate content dynamically, i.e. directly, even when content changes. This means that the current status is always available to the deaf target group in real time. Our goal is real-time translation of text into signs, which is essential because only then is full digital participation possible. However, this automated real-time translation requires an enormous amount of data, and that is what we are working on.

7) Who specifically are the project partners in contact with in the sign language community? Who decides that the quality of the avatar is suitable for practical use?
► In the AVASAG project, we work with a variety of sign language experts who are involved at different stages of the project. On the one hand, employees of the project partner yomma are actively involved in the implementation. In addition, user experience requirements are reviewed, discussed, and refined in focus groups with deaf people. In the next phase, we will cooperate with associations and institutes, which will be involved in particular in assessing acceptance. Some associations already belong to the cooperation network, but we would also be pleased if associations, institutions, and interested parties with relevant expertise that are not yet involved would like to participate. Please feel free to contact us!

8) What is the attitude of deaf associations towards avatars?
► As part of the research project, deaf associations will be involved in the development. The fact is that true digital participation cannot be guaranteed by sign language interpreters alone: if all content of the approximately 16.6 million web pages (source: Statista) were to be presented digitally in sign language, the number of interpreters would not suffice. Nor is it possible to respond to current notices and announcements in real time if videos first have to be produced. We want to create a real alternative with a technical solution that makes all content easily and directly translatable. This is a clear advantage that we want to discuss with those responsible and with the community. We do not want to replace interpreters; we want real digital participation at all levels.

9) Does the avatar have to read or can it also see and hear? Can deaf people communicate with the avatar in dialog form? And are the facial expressions of the avatar understandable?
► The avatar renders text in sign language (for many deaf people, written language is a foreign language). A dialogue in which the signer communicates with the avatar, similar to a chatbot, is not part of the project. Facial expressions are essential, and we aim to represent them accurately; in many other projects this is not the case. At AVASAG, we are working on a detailed representation of facial expressions.

10) Are dialects and new emerging signs taken into account?
► German Sign Language (DGS) will be implemented. Dialects are not planned for the time being. Signs for terms that have only recently emerged will probably be taken into account only once a standard can be identified (example: for the term "Corona" there was initially no DGS sign; in the meantime, established signs exist).

11) How many signs does the avatar already know?
► Individual signs are not recorded; instead, a large number of different sentences from a topic area are captured. The artificial intelligence (AI) is then trained, via an algorithm, on the data points generated from these sentences.

12) Are contrasts and visually impaired people taken into account?
► We take into account the requirements of BITV 2.0 (Annex 2, Part 1) as well as the recommendations of the Federal Working Group of the Deafblind.

13) Does the topic of "easy language" also play a role in the project?
► Easy language is not considered in the project for the time being.

14) How can I get information about AVASAG events with sign language interpreters?
► Check our website regularly for webinar dates, write us an email or give us a call. A first insight into the research project in sign language is provided by the TV magazine "Sehen statt Hören" of the Bayerischer Rundfunk: "DGS: Mehr digitale Teilhabe durch Künstliche Intelligenz?". In addition, we post news on our social media channels, so be sure to follow us: LinkedIn, XING, Facebook, Twitter, Vimeo, YouTube, Instagram.

15) Can I become a project partner and what does project participation look like?
► Please contact us directly to discuss project participation.

16) Is there a demo so that deaf people can test the quality and try out the system? Is a mobile solution as an app also planned? And for which areas is the use of avatars currently recommended?
► Technical demos should be available online in early 2022. Individual animations are currently being reviewed with the deaf community and can then be viewed as well. At this stage, use is recommended in areas where many people have a common need, typically standardized content that changes regularly. In the travel sector this can be current announcements and information; in the pandemic, important updates that need to be directly available to everyone. Explanations that are relevant for decisions are also important, e.g. regarding shopping behavior, dealing with authorities, or complicated topics that are difficult to understand in written language. The sign language avatar should be usable at all levels. This also includes integration into apps so that content can be displayed directly in sign language there as well.


We will be happy to answer any further questions you may have.