{"title":"Mono and Multi-Lingual Machine Translation Using Deep Attention-based Models","authors":"M. I. Khaber, A. Moussaoui, Mohamed Saidi","doi":"10.1109/ICRAMI52622.2021.9585969","DOIUrl":null,"url":null,"abstract":"Machine translation (MT) is the automatic translation of natural language texts. The complexities and incompatibilities of natural languages make MT an arduous task facing several challenges, especially when it is to be compared to a human translation. Neural Machine Translation (NMT) has to make MT results closer to human expectations with the advent of deep-learning artificial intelligence. The newest deep learning approaches are based on Recurrent Neural Networks (RNN), transformers, complex convolutions, and employing encoder/decoder pairs. In this work, we propose a new attention-based encoder-decoder model with monolingual and multilingual for MT. The Training has been several models with single languages and one model with several languages on both of our long short-term memory (LSTM) architecture and Transformer. We show that the Transformer outperforms the LSTM within our specific neural machine translation task. These models are evaluated using IWSLT2016 datasets, which contain a training dataset for three languages, test2015 and test2016 dataset for testing. These experiments show a 93.9% accuracy, which we can estimate as a 5 BLEU point improvement over the previous studies. (metric used in MT).","PeriodicalId":440750,"journal":{"name":"2021 International Conference on Recent Advances in Mathematics and Informatics (ICRAMI)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Recent Advances in Mathematics and Informatics (ICRAMI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICRAMI52622.2021.9585969","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Machine translation (MT) is the automatic translation of natural language texts. The complexities and incompatibilities of natural languages make MT an arduous task facing several challenges, especially when its output is compared to human translation. With the advent of deep learning, Neural Machine Translation (NMT) has brought MT results closer to human expectations. The newest deep learning approaches are based on Recurrent Neural Networks (RNNs), Transformers, and convolutional architectures, typically employing encoder/decoder pairs. In this work, we propose a new attention-based encoder-decoder model for both monolingual and multilingual MT. We train several single-language models and one model covering several languages, using both our long short-term memory (LSTM) architecture and a Transformer. We show that the Transformer outperforms the LSTM on our specific neural machine translation task. These models are evaluated on the IWSLT2016 datasets, which provide training data for three languages and the test2015 and test2016 sets for testing. The experiments achieve 93.9% accuracy, which we estimate as a 5-point improvement over previous studies in BLEU, the standard evaluation metric in MT.
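
As a rough illustration of the attention-based encoder-decoder approach described in the abstract, below is a minimal PyTorch-style sketch built on the standard `nn.Transformer` module. The layer sizes, vocabulary sizes, and training setup are assumptions for illustration only and do not reflect the paper's exact configuration; positional encodings and the multilingual data handling are omitted for brevity.

```python
import torch
import torch.nn as nn

class Seq2SeqTransformer(nn.Module):
    """Minimal attention-based encoder-decoder for translation.

    Illustrative sketch only: all hyperparameters below are assumptions,
    and positional encodings are omitted for brevity.
    """
    def __init__(self, src_vocab, tgt_vocab, d_model=512, nhead=8,
                 num_layers=6, dim_ff=2048, dropout=0.1):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, d_model)
        self.tgt_embed = nn.Embedding(tgt_vocab, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            dim_feedforward=dim_ff, dropout=dropout, batch_first=True)
        self.generator = nn.Linear(d_model, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Causal mask: each target position attends only to earlier positions.
        sz = tgt_ids.size(1)
        tgt_mask = torch.triu(torch.full((sz, sz), float("-inf")), diagonal=1)
        out = self.transformer(
            self.src_embed(src_ids), self.tgt_embed(tgt_ids), tgt_mask=tgt_mask)
        return self.generator(out)  # logits over the target vocabulary

# Toy usage with random token ids (real use would tokenize IWSLT2016 text).
model = Seq2SeqTransformer(src_vocab=8000, tgt_vocab=8000)
src = torch.randint(0, 8000, (2, 12))   # batch of 2 source sentences
tgt = torch.randint(0, 8000, (2, 10))   # shifted target sentences
logits = model(src, tgt)                # shape: (2, 10, 8000)
```

An analogous LSTM-based encoder-decoder with attention could be substituted for the `nn.Transformer` core to mirror the LSTM/Transformer comparison reported in the paper.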