A parallel optimization and transfer learning approach for summarization in electrical power systems

Impact Factor: 1.7 | CAS Tier 4 (Computer Science) | JCR Q3 (Automation & Control Systems) | Automatika | Pub Date: 2023-09-11 | DOI: 10.1080/00051144.2023.2254975
V. Priya, V. Praveena, L. R. Sujithra
{"title":"A parallel optimization and transfer learning approach for summarization in electrical power systems","authors":"V. Priya, V. Praveena, L. R. Sujithra","doi":"10.1080/00051144.2023.2254975","DOIUrl":null,"url":null,"abstract":"Transfer learning approaches in natural language processing have been explored and evolved as a potential solution for solving many problems in recent days. The current research on aspect-based summarization shows unsatisfactory accuracy and low-quality generated summaries. Additionally, the potential advantages of combining language models with parallel processing have not been explored in the existing literature. This paper aims to address the problem of aspect-based extractive text summarization using a transfer learning approach and an optimization method based on map reduce. The proposed approach utilizes transfer learning with language models to extract significant aspects from the text. Subsequently, an optimization process using map reduce is employed. This optimization framework includes an in-node mapper and reducer algorithm to generate summaries for important aspects identified by the language model. This enhances the quality of the summary, leading to improved accuracy, particularly when applied to electrical power system documents. By leveraging the strengths of natural language models and parallel data processing techniques, this model presents an opportunity to achieve better text summary generation. The performance metric used is accuracy, measured with the ROUGE tool, incorporating precision, recall and f-measure. The proposed model demonstrates a 6% improvement in scores compared to state-of-the-art techniques.","PeriodicalId":55412,"journal":{"name":"Automatika","volume":null,"pages":null},"PeriodicalIF":1.7000,"publicationDate":"2023-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Automatika","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/00051144.2023.2254975","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Transfer learning approaches in natural language processing have recently emerged as a potential solution to a wide range of problems. Current research on aspect-based summarization shows unsatisfactory accuracy and low-quality generated summaries. In addition, the potential advantages of combining language models with parallel processing have not been explored in the existing literature. This paper addresses aspect-based extractive text summarization using a transfer learning approach together with a MapReduce-based optimization method. The proposed approach uses transfer learning with language models to extract significant aspects from the text. An optimization process based on MapReduce is then applied: the framework includes an in-node mapper and reducer algorithm that generates summaries for the important aspects identified by the language model. This improves summary quality and accuracy, particularly when applied to electrical power system documents. By leveraging the strengths of language models and parallel data processing techniques, the model offers a path to better text summary generation. Performance is measured as accuracy with the ROUGE tool, in terms of precision, recall and F-measure. The proposed model shows a 6% improvement in ROUGE scores over state-of-the-art techniques.
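The abstract gives no implementation details, but the pipeline it describes (a pretrained language model scoring sentences against aspects, followed by a mapper/reducer stage that assembles per-aspect extractive summaries) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the `sentence_transformers` package and the `all-MiniLM-L6-v2` checkpoint stand in for the paper's transfer-learned language model, and the example document, aspect list and `top_k` cutoff are placeholders.

```python
# Minimal sketch (not the authors' code) of aspect-based extractive summarization
# with a pretrained sentence encoder and a MapReduce-style selection step.
# Assumptions: sentence_transformers / "all-MiniLM-L6-v2" stand in for the paper's
# transfer-learned language model; document, aspects and top_k are placeholders.
from collections import defaultdict
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def mapper(sentences, aspects):
    """Map step: emit (aspect, (similarity, sentence)) pairs for every sentence."""
    sent_emb = model.encode(sentences, convert_to_tensor=True)
    asp_emb = model.encode(aspects, convert_to_tensor=True)
    sims = util.cos_sim(asp_emb, sent_emb)  # shape: (num_aspects, num_sentences)
    for a_idx, aspect in enumerate(aspects):
        for s_idx, sentence in enumerate(sentences):
            yield aspect, (float(sims[a_idx][s_idx]), sentence)

def reducer(pairs, top_k=2):
    """Reduce step: keep the top_k highest-scoring sentences for each aspect."""
    grouped = defaultdict(list)
    for aspect, scored_sentence in pairs:
        grouped[aspect].append(scored_sentence)
    return {
        aspect: [sent for _, sent in sorted(scored, reverse=True)[:top_k]]
        for aspect, scored in grouped.items()
    }

if __name__ == "__main__":
    document = [
        "The transformer was taken offline after a winding fault was detected.",
        "Load forecasting errors increased during the heat wave.",
        "Protective relays isolated the faulted feeder within two cycles.",
        "Maintenance crews replaced the damaged bushing the next day.",
    ]
    aspects = ["fault detection", "maintenance"]  # illustrative aspects only
    summary = reducer(mapper(document, aspects))
    for aspect, sentences in summary.items():
        print(aspect, "->", " ".join(sentences))
```

In an actual cluster deployment the mapper would run in parallel across nodes, with the reducer aggregating per-aspect candidates; the single-process generator above only mirrors that data flow.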
Source journal: Automatika (Automation & Control Systems; Engineering, Electrical & Electronic)
CiteScore: 4.00
Self-citation rate: 5.30%
Articles published: 65
Review time: 4.5 months
About the journal: AUTOMATIKA – Journal for Control, Measurement, Electronics, Computing and Communications is an international scientific journal that publishes scientific and professional papers in the field of automatic control, robotics, measurement, electronics, computing, communications and related areas. AUTOMATIKA has been published since 1960, and since 1991 by KoREMA – the Croatian Society for Communications, Computing, Electronics, Measurement and Control, a member of IMEKO and IFAC.
Latest articles in this journal:
Quasi Z source direct matrix converter based K- to three-phase wind energy conversion system using maximum constant boost current control modulation technique
An adaptive multistage intrusion detection and prevention system in software defined networking environment
SwinVNETR: Swin V-net Transformer with non-local block for volumetric MRI Brain Tumor Segmentation
Dynamic control method of construction cost based on fuzzy neural network
Data augmentation using a 1D-CNN model with MFCC/MFMC features for speech emotion recognition