{"title":"应用自然语言处理技术探索英语长句翻译方法","authors":"Fengmei Shang, You Li","doi":"10.2478/amns.2023.2.01352","DOIUrl":null,"url":null,"abstract":"Abstract This paper analyzes the “encoder-decoder” framework in neural machine translation and clarifies that the task of natural language processing is sequence learning. Secondly, recurrent neural networks are used to combine the historical hidden layer output information with the current input information, which is specialized in processing sequence data to achieve good translation results. Applying the attention mechanism to the field of natural language processing, a Transformer model based on the full attention mechanism is constructed in order to achieve the purpose of translating the source language while also performing alignment operations on the target language. The evaluation and analysis of the Transformer model based on the full-attention mechanism concludes that the Transformer model has 0.0152 Pearson correlation coefficients higher than the Bilingual Expert model, which is also 2.92% higher than the Bilingual Expert model, with the participation of f feature in both models. This further proves the Transformer model’s ability to correctly and effectively translate English sentences. At the same time, it also shows that the application of natural language processing technology can improve the efficiency of English long-sentence translation and comprehensively improve the quality of long-sentence translation.","PeriodicalId":52342,"journal":{"name":"Applied Mathematics and Nonlinear Sciences","volume":"11 9","pages":""},"PeriodicalIF":3.1000,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploring English Long Sentence Translation Methods by Applying Natural Language Processing Techniques\",\"authors\":\"Fengmei Shang, You Li\",\"doi\":\"10.2478/amns.2023.2.01352\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract This paper analyzes the “encoder-decoder” framework in neural machine translation and clarifies that the task of natural language processing is sequence learning. Secondly, recurrent neural networks are used to combine the historical hidden layer output information with the current input information, which is specialized in processing sequence data to achieve good translation results. Applying the attention mechanism to the field of natural language processing, a Transformer model based on the full attention mechanism is constructed in order to achieve the purpose of translating the source language while also performing alignment operations on the target language. The evaluation and analysis of the Transformer model based on the full-attention mechanism concludes that the Transformer model has 0.0152 Pearson correlation coefficients higher than the Bilingual Expert model, which is also 2.92% higher than the Bilingual Expert model, with the participation of f feature in both models. This further proves the Transformer model’s ability to correctly and effectively translate English sentences. 
At the same time, it also shows that the application of natural language processing technology can improve the efficiency of English long-sentence translation and comprehensively improve the quality of long-sentence translation.\",\"PeriodicalId\":52342,\"journal\":{\"name\":\"Applied Mathematics and Nonlinear Sciences\",\"volume\":\"11 9\",\"pages\":\"\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2023-12-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Mathematics and Nonlinear Sciences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2478/amns.2023.2.01352\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Mathematics\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Mathematics and Nonlinear Sciences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2478/amns.2023.2.01352","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Mathematics","Score":null,"Total":0}
Exploring English Long Sentence Translation Methods by Applying Natural Language Processing Techniques
Abstract: This paper first analyzes the encoder-decoder framework used in neural machine translation and clarifies that the underlying natural language processing task is sequence learning. Second, recurrent neural networks, which specialize in processing sequence data, are used to combine the historical hidden-layer output with the current input, yielding good translation results. Applying the attention mechanism to natural language processing, a Transformer model based on the full attention mechanism is constructed so that the source language is translated while alignment operations are simultaneously performed on the target language. Evaluation of the full-attention Transformer model shows that, with the f feature included in both models, its Pearson correlation coefficient is 0.0152 higher than that of the Bilingual Expert model, a relative improvement of 2.92%. This further demonstrates that the Transformer model can translate English sentences correctly and effectively. It also shows that applying natural language processing technology can improve the efficiency of English long-sentence translation and comprehensively raise the quality of long-sentence translation.
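The recurrent step the abstract describes, combining the historical hidden-layer output with the current input, corresponds to the standard vanilla (Elman-style) recurrence h_t = tanh(x_t W_x + h_{t-1} W_h + b). The paper gives no implementation details, so the following is only a minimal NumPy sketch of that recurrence; all dimensions, weight initializations, and the toy input sequence are hypothetical choices for illustration.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    """One recurrent step: fold the previous hidden state (the history)
    into the current input, as in a vanilla Elman RNN cell."""
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

# Hypothetical sizes, not from the paper.
d_in, d_hid, seq_len = 8, 16, 5
rng = np.random.default_rng(0)
W_x = 0.1 * rng.normal(size=(d_in, d_hid))
W_h = 0.1 * rng.normal(size=(d_hid, d_hid))
b = np.zeros(d_hid)

h = np.zeros(d_hid)                           # initial hidden state
for x_t in rng.normal(size=(seq_len, d_in)):  # a toy input sequence
    h = rnn_step(x_t, h, W_x, W_h, b)         # history + current input
```

Because h is recomputed at every step from the previous h, the final hidden state summarizes the whole sequence, which is what makes the recurrence suitable for sequence learning.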
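The "full attention mechanism" at the core of the Transformer is conventionally scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, and the row of attention weights over source positions is what yields the source-target alignment behavior the abstract mentions. A self-contained sketch under that standard assumption follows; the shapes are toy values, not taken from the paper.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V: each query (target) position attends
    over all key (source) positions; the softmax weights act as a soft
    alignment between target and source tokens."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # attention-weighted values

# Toy example: 4 target positions attending over 6 source positions.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
out = scaled_dot_product_attention(Q, K, V)            # shape (4, 8)
```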