{"title":"用迭代滤波和数据选择改进僧伽罗语-英语NMT的反翻译","authors":"Koshiya Epaliyana, Surangika Ranathunga, Sanath Jayasena","doi":"10.1109/MERCon52712.2021.9525800","DOIUrl":null,"url":null,"abstract":"Neural Machine Translation (NMT) requires a large amount of parallel data to achieve reasonable results. For low resource settings such as Sinhala-English where parallel data is scarce, NMT tends to give sub-optimal results. This is severe when the translation is domain-specific. One solution for the data scarcity problem is data augmentation. To augment the parallel data for low resource language pairs, commonly available large monolingual corpora can be used. A popular data augmentation technique is Back-Translation (BT). Over the years, there have been many techniques to improve Vanilla BT. Prominent ones are Iterative BT, Filtering, and Data selection. We employ these in Sinhala - English extremely low resource domain-specific translation in order to improve the performance of NMT. In particular, we move forward from previous research and show that by combining these different techniques, an even better result can be obtained. Our combined model provided a +3.0 BLEU score gain over the Vanilla NMT model and a +1.93 BLEU score gain over the Vanilla BT model for Sinhala → English translation. Furthermore, a +0.65 BLEU score gain over the Vanilla NMT model and a +2.22 BLEU score gain over the Vanilla BT model were observed for English → Sinhala translation.","PeriodicalId":6855,"journal":{"name":"2021 Moratuwa Engineering Research Conference (MERCon)","volume":"30 1","pages":"438-443"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Improving Back-Translation with Iterative Filtering and Data Selection for Sinhala-English NMT\",\"authors\":\"Koshiya Epaliyana, Surangika Ranathunga, Sanath Jayasena\",\"doi\":\"10.1109/MERCon52712.2021.9525800\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neural Machine Translation (NMT) requires a large amount of parallel data to achieve reasonable results. For low resource settings such as Sinhala-English where parallel data is scarce, NMT tends to give sub-optimal results. This is severe when the translation is domain-specific. One solution for the data scarcity problem is data augmentation. To augment the parallel data for low resource language pairs, commonly available large monolingual corpora can be used. A popular data augmentation technique is Back-Translation (BT). Over the years, there have been many techniques to improve Vanilla BT. Prominent ones are Iterative BT, Filtering, and Data selection. We employ these in Sinhala - English extremely low resource domain-specific translation in order to improve the performance of NMT. In particular, we move forward from previous research and show that by combining these different techniques, an even better result can be obtained. Our combined model provided a +3.0 BLEU score gain over the Vanilla NMT model and a +1.93 BLEU score gain over the Vanilla BT model for Sinhala → English translation. 
Furthermore, a +0.65 BLEU score gain over the Vanilla NMT model and a +2.22 BLEU score gain over the Vanilla BT model were observed for English → Sinhala translation.\",\"PeriodicalId\":6855,\"journal\":{\"name\":\"2021 Moratuwa Engineering Research Conference (MERCon)\",\"volume\":\"30 1\",\"pages\":\"438-443\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 Moratuwa Engineering Research Conference (MERCon)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MERCon52712.2021.9525800\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 Moratuwa Engineering Research Conference (MERCon)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MERCon52712.2021.9525800","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Improving Back-Translation with Iterative Filtering and Data Selection for Sinhala-English NMT
Abstract: Neural Machine Translation (NMT) requires a large amount of parallel data to achieve reasonable results. In low-resource settings such as Sinhala-English, where parallel data is scarce, NMT tends to give sub-optimal results, and the problem is more severe when the translation is domain-specific. One solution to the data scarcity problem is data augmentation: to augment the parallel data for low-resource language pairs, commonly available large monolingual corpora can be used. A popular data augmentation technique is Back-Translation (BT). Over the years, many techniques have been proposed to improve Vanilla BT; prominent ones are Iterative BT, Filtering, and Data Selection. We employ these in extremely low-resource, domain-specific Sinhala-English translation in order to improve the performance of NMT. In particular, we move beyond previous research and show that combining these different techniques yields an even better result. Our combined model provided a +3.0 BLEU score gain over the Vanilla NMT model and a +1.93 BLEU score gain over the Vanilla BT model for Sinhala → English translation. Furthermore, a +0.65 BLEU score gain over the Vanilla NMT model and a +2.22 BLEU score gain over the Vanilla BT model were observed for English → Sinhala translation.
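The abstract treats Iterative BT, Filtering, and Data Selection as building blocks layered on Vanilla back-translation. A minimal sketch of how such a loop might be wired up is given below; it is an illustration under stated assumptions, not the paper's implementation. The train_model stub, the length-ratio filter, and the toy selection score are all hypothetical stand-ins: a real pipeline would wrap an NMT toolkit such as fairseq or OpenNMT and use model-based quality scores.

```python
# Illustrative sketch of iterative back-translation (BT) with filtering
# and data selection. The model-training and scoring pieces are toy
# placeholders (assumptions), not the authors' actual NMT system.

from typing import Callable, List, Tuple

ParallelPair = Tuple[str, str]  # (source sentence, target sentence)


def train_model(pairs: List[ParallelPair]) -> Callable[[str], str]:
    """Placeholder for NMT training; real code would train or
    fine-tune an NMT model on `pairs`."""
    return lambda sentence: sentence  # identity stand-in


def length_ratio_filter(pairs: List[ParallelPair],
                        lo: float = 0.5, hi: float = 2.0) -> List[ParallelPair]:
    """Filtering step: drop synthetic pairs whose source/target token
    length ratio is extreme, a simple proxy for quality filtering."""
    kept = []
    for src, tgt in pairs:
        ratio = max(len(src.split()), 1) / max(len(tgt.split()), 1)
        if lo <= ratio <= hi:
            kept.append((src, tgt))
    return kept


def select_top(pairs: List[ParallelPair],
               score: Callable[[ParallelPair], float], k: int) -> List[ParallelPair]:
    """Data selection step: keep the k highest-scoring synthetic pairs."""
    return sorted(pairs, key=score, reverse=True)[:k]


def iterative_back_translation(parallel: List[ParallelPair],
                               mono_target: List[str],
                               rounds: int = 2,
                               keep: int = 1000) -> Callable[[str], str]:
    """One possible Iterative BT loop: each round back-translates the
    target-side monolingual corpus, filters and selects the synthetic
    pairs, and retrains on the augmented data."""
    data = list(parallel)
    for _ in range(rounds):
        # Train a target->source model to generate synthetic sources.
        back_model = train_model([(t, s) for s, t in data])
        synthetic = [(back_model(t), t) for t in mono_target]
        synthetic = length_ratio_filter(synthetic)
        # Toy score: prefer pairs with similar source/target lengths.
        synthetic = select_top(synthetic,
                               score=lambda p: -abs(len(p[0]) - len(p[1])),
                               k=keep)
        data = list(parallel) + synthetic
    # Final source->target model trained on the augmented corpus.
    return train_model(data)
```

The design choice the sketch highlights is the ordering: filtering and selection are applied to each round's synthetic pairs before retraining, so later iterations back-translate with a model trained on progressively cleaner augmented data.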