{"title":"藏汉神经机器翻译的领域自适应","authors":"Maoxian Zhou, Jia Secha, Rangjia Cai","doi":"10.1145/3446132.3446404","DOIUrl":null,"url":null,"abstract":"The meaning of the same word or sentence is likely to change in different semantic contexts, which challenges general-purpose translation system to maintain stable performance across different domains. Therefore, domain adaptation is an essential researching topic in Neural Machine Translation practice. In order to efficiently train translation models for different domains, in this work we take the Tibetan-Chinese general translation model as the parent model, and obtain two domain-specific Tibetan-Chinese translation models with small-scale in-domain data. The empirical results indicate that the method provides a positive approach for domain adaptation in low-resource scenarios, resulting in better bleu metrics as well as faster training speed over our general baseline models.","PeriodicalId":125388,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Domain Adaptation for Tibetan-Chinese Neural Machine Translation\",\"authors\":\"Maoxian Zhou, Jia Secha, Rangjia Cai\",\"doi\":\"10.1145/3446132.3446404\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The meaning of the same word or sentence is likely to change in different semantic contexts, which challenges general-purpose translation system to maintain stable performance across different domains. Therefore, domain adaptation is an essential researching topic in Neural Machine Translation practice. In order to efficiently train translation models for different domains, in this work we take the Tibetan-Chinese general translation model as the parent model, and obtain two domain-specific Tibetan-Chinese translation models with small-scale in-domain data. 
The empirical results indicate that the method provides a positive approach for domain adaptation in low-resource scenarios, resulting in better bleu metrics as well as faster training speed over our general baseline models.\",\"PeriodicalId\":125388,\"journal\":{\"name\":\"Proceedings of the 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3446132.3446404\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3446132.3446404","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Domain Adaptation for Tibetan-Chinese Neural Machine Translation
The meaning of the same word or sentence is likely to change across semantic contexts, which makes it difficult for a general-purpose translation system to maintain stable performance across domains. Domain adaptation is therefore an essential research topic in Neural Machine Translation practice. To train translation models for different domains efficiently, in this work we take a general-domain Tibetan-Chinese translation model as the parent model and use small-scale in-domain data to obtain two domain-specific Tibetan-Chinese translation models. The empirical results indicate that this method is an effective approach to domain adaptation in low-resource scenarios, yielding higher BLEU scores and faster training than our general baseline models.
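
The abstract describes a parent-child transfer setup: a general-domain parent model is adapted to each target domain with a small amount of in-domain data. The paper does not specify the framework or training details, so the following is only a minimal sketch, assuming the adaptation is continued training (fine-tuning) of the parent checkpoint in PyTorch; the model class, file names, and hyperparameters below are hypothetical, not taken from the paper.

# Minimal sketch of parent-to-child domain adaptation by continued training.
# Assumptions (not from the paper): PyTorch, a small Transformer stand-in for
# the parent model, and hypothetical names such as "parent_model.pt".
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

VOCAB_SIZE = 32000   # hypothetical shared subword vocabulary size
PAD_ID = 0

class TinyNMT(nn.Module):
    """A small Transformer encoder-decoder standing in for the parent model."""
    def __init__(self, d_model=256, nhead=4, layers=3):
        super().__init__()
        self.src_emb = nn.Embedding(VOCAB_SIZE, d_model, padding_idx=PAD_ID)
        self.tgt_emb = nn.Embedding(VOCAB_SIZE, d_model, padding_idx=PAD_ID)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=layers, num_decoder_layers=layers,
            batch_first=True)
        self.out = nn.Linear(d_model, VOCAB_SIZE)

    def forward(self, src, tgt):
        # Causal mask so the decoder cannot attend to future target tokens
        # (padding masks are omitted here for brevity).
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(
            tgt.size(1)).to(tgt.device)
        h = self.transformer(self.src_emb(src), self.tgt_emb(tgt),
                             tgt_mask=tgt_mask)
        return self.out(h)

def adapt_to_domain(parent_ckpt, in_domain_src, in_domain_tgt,
                    epochs=5, lr=1e-4):
    """Continue training the parent model on small in-domain parallel data."""
    model = TinyNMT()
    # Initialise the child (domain-specific) model from the general parent model.
    model.load_state_dict(torch.load(parent_ckpt, map_location="cpu"))
    # A smaller learning rate than in parent training helps limit forgetting.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss(ignore_index=PAD_ID)
    data = DataLoader(TensorDataset(in_domain_src, in_domain_tgt),
                      batch_size=32, shuffle=True)
    model.train()
    for _ in range(epochs):
        for src, tgt in data:
            logits = model(src, tgt[:, :-1])            # teacher forcing
            loss = loss_fn(logits.reshape(-1, VOCAB_SIZE),
                           tgt[:, 1:].reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

Because only the small in-domain corpus is used for the adaptation step, each domain-specific child model trains far faster than training a model from scratch, which is consistent with the speed advantage reported in the abstract; the specific hyperparameter choices above are illustrative only.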