Nesterov-accelerated Adaptive Moment Estimation (NADAM-LSTM) based text summarization

P. Radhakrishnan, G. S. Kumar
DOI: 10.3233/jifs-224299
Journal: J. Intell. Fuzzy Syst.
Published: 2024-01-31 (Journal Article)
Citations: 0

Abstract

Automatic text summarization is the task of creating concise, fluent summaries without human intervention while preserving the meaning of the original document. In this paper, a novel Nesterov-accelerated Adaptive Moment Estimation optimization of a Long Short-Term Memory network (NADAM-LSTM) is proposed for text summarization. The proposed NADAM-LSTM model involves three stages: pre-processing, summary generation, and parameter tuning. First, the Gigaword corpus is pre-processed with tokenization, stop-word removal, stemming, lemmatization, and normalization to remove irrelevant data. In the summary-generation phase, the text is converted to vectors using a word-to-vector embedding and fed to an LSTM, which generates the summary. The parameters of the LSTM are then tuned using NADAM optimization. Performance of the proposed NADAM-LSTM is evaluated in terms of accuracy, specificity, recall, precision, and F1 score. The proposed NADAM-LSTM achieves an accuracy of 99.5%, improving overall accuracy by 12%, 2.5%, and 1.5% over BERT, CNN-LSTM, and RNN, respectively.
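The pre-processing stage named in the abstract (tokenization, stop-word removal, stemming, normalization) can be sketched roughly as below. This is an illustrative stand-in in plain Python, not the authors' actual pipeline: the stop-word list and suffix-stripping rules are assumptions, and a real system would use full tools such as NLTK's tokenizer and Porter stemmer.

```python
import re

# Tiny illustrative stop-word list (an assumption; the paper does not list one).
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in"}

def normalize(text: str) -> str:
    """Lowercase and replace anything but letters, digits, and spaces."""
    return re.sub(r"[^a-z0-9\s]", " ", text.lower())

def tokenize(text: str) -> list[str]:
    """Split normalized text on whitespace."""
    return normalize(text).split()

def remove_stop_words(tokens: list[str]) -> list[str]:
    return [t for t in tokens if t not in STOP_WORDS]

def stem(token: str) -> str:
    """Crude suffix-stripping stemmer standing in for a real one (e.g. Porter)."""
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text: str) -> list[str]:
    """Full sketch pipeline: normalize, tokenize, drop stop words, stem."""
    return [stem(t) for t in remove_stop_words(tokenize(text))]
```

For example, `preprocess("The cats are playing in the garden.")` reduces the sentence to its content-word stems, which is the kind of noise reduction the pre-processing stage aims at before embedding.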
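NADAM combines Adam's adaptive moment estimates with Nesterov momentum (a look-ahead applied to the first-moment term). As a rough illustration of the update rule such an optimizer applies, independent of the paper's LSTM, here is a minimal NumPy sketch; the hyper-parameter defaults are the standard Adam/Nadam ones, assumed rather than taken from the paper.

```python
import numpy as np

def nadam_step(theta, grad, m, v, t, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Nadam update: Adam moment estimates plus a Nesterov look-ahead."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (scale)
    m_hat = m / (1 - beta1 ** t)              # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    # Nesterov term: blend corrected momentum with the current gradient.
    m_nesterov = beta1 * m_hat + (1 - beta1) * grad / (1 - beta1 ** t)
    theta = theta - lr * m_nesterov / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy check: minimize f(x) = x^2 and watch the iterate descend toward 0.
theta = np.array(5.0)
m = v = np.array(0.0)
for t in range(1, 2001):
    grad = 2 * theta
    theta, m, v = nadam_step(theta, grad, m, v, t)
```

In practice one would not hand-roll the update for an LSTM; deep-learning frameworks ship a Nadam optimizer that applies this rule to every trainable weight.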
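The evaluation metrics listed in the abstract (accuracy, specificity, recall, precision, F1) all derive from the four confusion-matrix counts. A self-contained sketch of those standard formulas (the function and variable names here are illustrative, not from the paper):

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Standard confusion-matrix metrics; assumes no zero denominators."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # fraction of correct calls
    precision = tp / (tp + fp)                   # of predicted positives, correct
    recall = tp / (tp + fn)                      # of actual positives, found
    specificity = tn / (tn + fp)                 # of actual negatives, found
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1}
```

Note that F1 is the harmonic mean of precision and recall, which is why it can be much lower than accuracy on imbalanced data even when accuracy looks high.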