A Study of Levenshtein Transformer and Editor Transformer Models for Under-Resourced Languages

{"title":"A Study of Levenshtein Transformer and Editor Transformer Models for Under-Resourced Languages","authors":"","doi":"10.1109/iSAI-NLP54397.2021.9678159","DOIUrl":null,"url":null,"abstract":"Transformers are the current state-of-the-art type of neural network model for dealing with sequences. Evidently, the most prominent application of these models is in text processing tasks, and the most prominent of these is machine translation. Recently, transformer-based models such as the Edit-Based Transformer with Repositioning (EDITOR) and Levenshtein Transformer (LevT) models have become popular in neural machine translation. To the best of our knowledge, there are no experiments for these two models using under-resourced languages. In this paper, we compared the performance and decoding time of the EDITOR model and the LevT model. We conducted the experiments for under-resourced language pairs, namely, Thai-to-English, Thai-to-Myanmar, English-to-Myanmar, and vice versa. The experimental results showed that the EDITOR model outperforms the LevT model in English-Thai, Thai-English and English-Myanmar language pairs whereas LevT achieves better score than EDITOR in Thai-Myanmar, Myanmar-Thai and Myanmar-English language pairs. Regarding the decoding time, EDITOR model is generally faster than the LevT model in the four language pairs. However, in the case of English-Myanmar and Myanmar-English pairs, the decoding time of EDITOR is slightly slower than the LevT model. At last, we investigated the system level performance of both models by means of compare-mt and word error rate (WER).","PeriodicalId":339826,"journal":{"name":"2021 16th International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 16th International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iSAI-NLP54397.2021.9678159","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Transformers are the current state-of-the-art neural network architecture for sequence modeling. Their most prominent application is text processing, and the most prominent text processing task is machine translation. Recently, edit-based models such as the Edit-Based Transformer with Repositioning (EDITOR) and the Levenshtein Transformer (LevT) have become popular in neural machine translation. To the best of our knowledge, there are no published experiments evaluating these two models on under-resourced languages. In this paper, we compare the translation performance and decoding time of the EDITOR and LevT models on under-resourced language pairs, namely Thai-English, Thai-Myanmar, and English-Myanmar, in both directions. The experimental results show that EDITOR outperforms LevT on the English-Thai, Thai-English, and English-Myanmar pairs, whereas LevT achieves better scores than EDITOR on the Thai-Myanmar, Myanmar-Thai, and Myanmar-English pairs. Regarding decoding time, EDITOR is generally faster than LevT across the language pairs, although on the English-Myanmar and Myanmar-English pairs it is slightly slower. Finally, we investigate the system-level performance of both models by means of compare-mt and word error rate (WER).
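The WER metric used in the evaluation is the word-level Levenshtein edit distance (the same insertion/deletion/substitution operation set that gives LevT its name) between a hypothesis and its reference, normalized by the reference length. Below is a minimal sketch of that computation, not the paper's own evaluation code; it assumes whitespace-tokenized input, which is a simplification for unsegmented scripts such as Thai and Myanmar.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + sub)  # match/substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution over four reference words -> WER = 0.25
print(word_error_rate("the cat sat down", "the dog sat down"))
```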