EABM

The project

The Ergonomics for the Artificial Booth Mate (EABM) project aims to create a future-proof computer-assisted interpreting (CAI) tool. Together, Ghent University and Johannes Gutenberg University Mainz will use speech recognition technology to build a user-friendly interpreting tool and enhance interpreting performance in the booth.

Speech recognition technology enables your artificial booth mate (ABM) to accurately and rapidly display terms and numbers on a screen while a speech is being delivered. This way it provides interpreters with support when they need it most, just like a real booth mate. Unlike machine interpretation (MI), an ABM does not replace the interpreter in any way. On the contrary, an artificial booth mate works seamlessly alongside the interpreter, increasing the accuracy and productivity of the human interpretation.
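
For readers curious about what this kind of real-time support could look like under the hood, here is a minimal, purely illustrative sketch in Python: each incoming ASR transcript segment is scanned for numbers and known glossary terms, which could then be shown on the interpreter's screen. The glossary, regular expression and function names are hypothetical and are not taken from the project's actual implementation.

```python
import re

# Hypothetical glossary of domain terms an interpreter might want flagged.
GLOSSARY = {"gross domestic product": "GDP", "quantitative easing": "QE"}

# Matches integers, decimals and thousands-separated figures, e.g. "4.5", "12,000".
NUMBER_PATTERN = re.compile(r"\b\d{1,3}(?:,\d{3})*(?:\.\d+)?\b")

def extract_support_items(asr_segment: str) -> list[str]:
    """Pick out numbers and glossary terms from one ASR transcript segment."""
    items = NUMBER_PATTERN.findall(asr_segment)
    lowered = asr_segment.lower()
    for term, short_form in GLOSSARY.items():
        if term in lowered:
            items.append(f"{term} ({short_form})")
    return items

if __name__ == "__main__":
    segment = "The gross domestic product grew by 4.5 percent, reaching 12,000 billion euros."
    print(extract_support_items(segment))
    # ['4.5', '12,000', 'gross domestic product (GDP)']
```

In a real system the hard part is doing this with high precision and a latency low enough to be useful during simultaneous interpretation, which is exactly what the research below evaluates.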

Help us create a tool that fits an interpreter’s needs. To become a useful and efficient support tool, the ABM needs to be developed specifically for interpreters, by interpreters, and for that we need your input! By filling out our survey, you help us determine the optimal display formats and put them to the test. You can find the EABM survey here.

To make sure you can use your future booth mate to its full potential, we will offer free online course materials and webinars on the possibilities and limitations of this new support system. You can expect a series of videos in which we go over how the system operates, what it can do for you, and what the best practices are. Everything you need to know can be found here.

Research

An ambitious and innovative project should always be based on thorough research. In a recent research paper, Bart Defrancq (Ghent University) and Claudio Fantinuoli (Johannes Gutenberg University Mainz) evaluated the potential benefits and usefulness of automatic speech recognition (ASR) technology in the booth. In their experiment they used a promising ABM prototype, InterpretBank ASR, with a high precision rate (96%) and a latency low enough to fit within an interpreter’s ear-voice span (EVS). Read more about the experiment here.

Defrancq, B., & Fantinuoli, C. (2020). Automatic speech recognition in the booth: Assessment of system performance, interpreters’ performances and interactions in the context of numbers. Target.


The EABM project is funded by the European Commission (DG Interpretation).