Muljono, M. Nababan, R. A. Nugroho, Kevin Djajadinata
doi: 10.12720/jait.14.4.656-667 · Journal Article · published 2023-01-01
HASumRuNNer: An Extractive Text Summarization Optimization Model Based on a Gradient-Based Algorithm
This article addresses extractive text summarization: summarizing material in a way that directly conveys the intent or message of a document. The study proposes Hierarchical Attention SumRuNNer (HASumRuNNer), an extractive text summarization model for the Indonesian language. This is novel, since very little related research exists for Indonesian, in terms of both approach and dataset. The model is built from three primary components: BiGRU, CharCNN, and word- and sentence-level hierarchical attention mechanisms. The model is optimized with a variety of gradient-based methods, and the resulting summaries are evaluated with ROUGE-N. The test results show that the Adam optimizer is the most effective gradient-based method for the HASumRuNNer model: its ROUGE-1 (70.7), ROUGE-2 (64.33), and ROUGE-L (68.14) scores exceed those of the other methods used as references. Combining BiGRU with CharCNN yields more accurate representations at the word and sentence levels. Additionally, the word- and sentence-level hierarchical attention mechanisms help prevent the loss of per-word information that long input words or sentences typically cause.
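For context on the evaluation metric, ROUGE-N measures the n-gram overlap between a generated summary and a reference summary. The following is a minimal recall-oriented sketch in plain Python, not the authors' implementation (the paper does not give one); the function names and the simple whitespace tokenization are illustrative assumptions:

```python
from collections import Counter

def ngrams(tokens, n):
    # Multiset of n-grams from a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(candidate, reference, n=1):
    # ROUGE-N recall: fraction of the reference's n-grams that also
    # appear in the candidate summary (clipped per n-gram count).
    # Tokenization here is naive whitespace splitting, for illustration.
    ref = ngrams(reference.split(), n)
    cand = ngrams(candidate.split(), n)
    total = sum(ref.values())
    if total == 0:
        return 0.0
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    return overlap / total

# Example: half of the reference unigrams appear in the candidate.
score = rouge_n_recall("the cat sat", "the cat sat on the mat", n=1)
```

ROUGE-L, also reported in the paper, is based on the longest common subsequence rather than fixed-length n-grams, so it rewards in-order matches without requiring contiguity.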