{"title":"A Study of Levenshtein Transformer and Editor Transformer Models for Under-Resourced Languages","authors":"","doi":"10.1109/iSAI-NLP54397.2021.9678159","DOIUrl":null,"url":null,"abstract":"Transformers are the current state-of-the-art type of neural network model for dealing with sequences. Evidently, the most prominent application of these models is in text processing tasks, and the most prominent of these is machine translation. Recently, transformer-based models such as the Edit-Based Transformer with Repositioning (EDITOR) and Levenshtein Transformer (LevT) models have become popular in neural machine translation. To the best of our knowledge, there are no experiments for these two models using under-resourced languages. In this paper, we compared the performance and decoding time of the EDITOR model and the LevT model. We conducted the experiments for under-resourced language pairs, namely, Thai-to-English, Thai-to-Myanmar, English-to-Myanmar, and vice versa. The experimental results showed that the EDITOR model outperforms the LevT model in English-Thai, Thai-English and English-Myanmar language pairs whereas LevT achieves better score than EDITOR in Thai-Myanmar, Myanmar-Thai and Myanmar-English language pairs. Regarding the decoding time, EDITOR model is generally faster than the LevT model in the four language pairs. However, in the case of English-Myanmar and Myanmar-English pairs, the decoding time of EDITOR is slightly slower than the LevT model. At last, we investigated the system level performance of both models by means of compare-mt and word error rate (WER).","PeriodicalId":339826,"journal":{"name":"2021 16th International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 16th International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iSAI-NLP54397.2021.9678159","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Transformers are the current state-of-the-art neural network models for dealing with sequences. Their most prominent application is in text processing tasks, chief among them machine translation. Recently, edit-based transformer models such as the Edit-Based Transformer with Repositioning (EDITOR) and the Levenshtein Transformer (LevT) have become popular in neural machine translation. To the best of our knowledge, these two models have not yet been evaluated on under-resourced languages. In this paper, we compared the translation performance and decoding time of the EDITOR model and the LevT model. We conducted experiments on under-resourced language pairs, namely Thai-to-English, Thai-to-Myanmar, and English-to-Myanmar, and vice versa. The experimental results showed that the EDITOR model outperforms the LevT model on the English-Thai, Thai-English, and English-Myanmar language pairs, whereas LevT achieves better scores than EDITOR on the Thai-Myanmar, Myanmar-Thai, and Myanmar-English language pairs. Regarding decoding time, the EDITOR model is generally faster than the LevT model across the language pairs; however, for the English-Myanmar and Myanmar-English pairs, its decoding time is slightly slower than that of the LevT model. Finally, we investigated the system-level performance of both models by means of compare-mt and word error rate (WER).
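As a point of reference for the WER analysis mentioned above, the sketch below shows one common way to compute word error rate: the word-level Levenshtein edit distance between a hypothesis and a reference, normalized by the reference length. This is a minimal illustrative implementation; the function name and example sentences are assumptions for this sketch and are not taken from the paper's evaluation code.

```python
# Minimal sketch of word error rate (WER): word-level Levenshtein
# distance (insertions, deletions, substitutions) divided by the
# number of reference words. Illustrative only, not the paper's code.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn hyp[:j] into ref[:i]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


if __name__ == "__main__":
    # One substitution ("sat" -> "sit") and one deletion ("the"):
    # 2 edits over 6 reference words, WER ~= 0.33.
    print(wer("the cat sat on the mat", "the cat sit on mat"))
```

For the system-level comparison, the compare-mt tool referenced in the abstract is typically run on a reference file together with one output file per system, producing aggregate and bucketed comparisons between the two outputs.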