Automatic Processing Pipeline for Collecting and Annotating Air-Traffic Voice Communication Data

M. Kocour, Karel Veselý, Igor Szöke, Santosh Kesiraju, Juan Zuluaga-Gómez, Alexander Blatt, Amrutha Prasad, Iuliia Nigmatulina, P. Motlíček, D. Klakow, Allan Tart, H. Atassi, Pavel Kolčárek, Honza Černocký, Claudia Cevenini, K. Choukri, M. Rigault, Fabian Landis, Saeed Sarfjoo, Chloe Salamin

Engineering Proceedings, published 2021-12-31. DOI: 10.3390/engproc2021013008
Citations: 9
Abstract
This document describes our pipeline for automatic processing of ATCO-pilot audio communication that we developed as part of the ATCO2 project. So far, we have collected two thousand hours of audio recordings, which we either preprocessed for the transcribers or used for semi-supervised training. Both uses of the collected data can further improve our pipeline, since they allow us to retrain our models. The proposed automatic processing pipeline is a cascade of standalone components: (a) segmentation, (b) volume control, (c) signal-to-noise ratio filtering, (d) diarization, (e) speech-to-text (ASR) module, (f) English language detection, (g) callsign recognition, (h) ATCO/pilot classification and (i) highlighting of commands and values. The key component of the pipeline is a speech-to-text transcription system that has to be trained with real-world ATC data; otherwise, the performance is poor. To further improve speech-to-text performance, we apply both semi-supervised training on our recordings and contextual adaptation, which uses a list of plausible callsigns from surveillance data as auxiliary information. Downstream NLP/NLU tasks are important from an application point of view. These application tasks need accurate models operating on top of the real speech-to-text output; thus, they too require more data. Creating ATC data is the main aspiration of the ATCO2 project. At the end of the project, the data will be packaged and distributed by ELDA.
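As a rough illustration of how the cascade (a)-(i) fits together, the following Python sketch chains the stages and applies the SNR and English-language filters between them. All function names, the `Segment` container, and the SNR threshold are hypothetical placeholders chosen for this example; the actual ATCO2 pipeline relies on dedicated trained models for each stage.

```python
# Minimal, hypothetical sketch of the cascade (a)-(i) from the abstract.
# Every stage below is a placeholder stub, not the project's real component.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Segment:
    """One speech segment cut from a raw ATC recording."""
    audio: List[float]                # audio samples (placeholder type)
    snr_db: float = 0.0               # estimated signal-to-noise ratio
    speaker: Optional[str] = None     # diarization label
    transcript: str = ""              # speech-to-text hypothesis
    is_english: bool = True           # language-detection decision
    callsign: Optional[str] = None    # recognized callsign, if any
    role: Optional[str] = None        # "ATCO" or "pilot"
    tags: List[str] = field(default_factory=list)  # highlighted commands/values


# --- placeholder stages; real trained models would go here -----------------
def segment_and_normalize(audio: List[float]) -> List[Segment]:
    # (a) segmentation + (b) volume control: here, a single dummy segment
    return [Segment(audio=audio, snr_db=10.0)]

def diarize(seg: Segment) -> str:
    # (d) diarization: assign a speaker label
    return "spk1"

def transcribe(seg: Segment, boost: List[str]) -> str:
    # (e) ASR with contextual adaptation: `boost` stands for the list of
    # plausible callsigns from surveillance data used to bias recognition
    return "ryanair one two three descend flight level eight zero"

def detect_english(seg: Segment) -> bool:
    # (f) English language detection
    return True

def recognize_callsign(seg: Segment, plausible: List[str]) -> Optional[str]:
    # (g) callsign recognition restricted to plausible candidates
    return plausible[0] if plausible else None

def classify_role(seg: Segment) -> str:
    # (h) ATCO vs. pilot classification
    return "ATCO"

def extract_commands(seg: Segment) -> List[str]:
    # (i) highlighting of commands and values
    return ["DESCEND FL80"]


def process_recording(audio: List[float],
                      plausible_callsigns: List[str],
                      min_snr_db: float = 5.0) -> List[Segment]:
    """Run the cascade on one recording; low-SNR and non-English segments
    are dropped before the downstream NLP/NLU stages."""
    segments = segment_and_normalize(audio)
    kept = [s for s in segments if s.snr_db >= min_snr_db]   # (c) SNR filter
    for seg in kept:
        seg.speaker = diarize(seg)
        seg.transcript = transcribe(seg, boost=plausible_callsigns)
        seg.is_english = detect_english(seg)
    english = [s for s in kept if s.is_english]
    for seg in english:
        seg.callsign = recognize_callsign(seg, plausible_callsigns)
        seg.role = classify_role(seg)
        seg.tags = extract_commands(seg)
    return english


if __name__ == "__main__":
    out = process_recording([0.0] * 16000, ["RYR123", "DLH4AB"])
    for seg in out:
        print(seg.callsign, seg.role, seg.transcript)
```

The sketch only mirrors the data flow described in the abstract: filtering happens early so that the more expensive recognition and NLP/NLU stages run only on usable English segments, and the surveillance-derived callsign list is passed to both the ASR and the callsign-recognition stages.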