IKDSumm: Incorporating key-phrases into BERT for extractive disaster tweet summarization

IF 3.1 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Computer Speech and Language, vol. 87, Article 101649 · Pub Date: 2024-04-16 · DOI: 10.1016/j.csl.2024.101649
Piyush Kumar Garg, Roshni Chakraborty, Srishti Gupta, Sourav Kumar Dandapat

Abstract

Online social media platforms, such as Twitter, are one of the most valuable sources of information during disaster events. Humanitarian organizations, government agencies, and volunteers rely on a concise compilation of such information for effective disaster management. Existing methods for producing such compilations are mostly generic summarization approaches that do not exploit domain knowledge. In this paper, we propose a disaster-specific tweet summarization framework, IKDSumm, which first identifies the crucial and important information in each disaster-related tweet through that tweet's key-phrases. We identify these key-phrases by utilizing domain knowledge of disasters (via an existing ontology) without any human intervention. We then utilize these key-phrases to automatically generate a summary of the tweets. Therefore, given tweets related to a disaster, IKDSumm ensures fulfillment of the key summarization objectives, such as information coverage, relevance, and diversity, without any human intervention. We evaluate IKDSumm against 8 state-of-the-art techniques on 12 disaster datasets. The evaluation results show that IKDSumm outperforms existing techniques by approximately 2-79% in terms of ROUGE-N F1-score.
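The pipeline described in the abstract (extract key-phrases first, then select tweets using them) can be illustrated with a simple greedy coverage loop. This is only an assumed sketch for intuition, not the authors' actual IKDSumm algorithm, which relies on BERT and a disaster ontology; the function names, the hard-coded data, and the scoring rule below are all hypothetical.

```python
# Illustrative sketch: greedily pick tweets that cover the most
# not-yet-covered key-phrases, loosely mirroring the stated objectives
# (coverage, relevance, diversity). NOT the paper's actual method.

def greedy_summarize(tweets, key_phrases, max_tweets=3):
    """Select up to max_tweets tweets maximizing key-phrase coverage."""
    covered = set()   # key-phrases already represented in the summary
    summary = []
    candidates = list(tweets)
    for _ in range(max_tweets):
        best, best_gain = None, 0
        for t in candidates:
            text = t.lower()
            # gain = number of new key-phrases this tweet would add
            gain = sum(1 for kp in key_phrases
                       if kp in text and kp not in covered)
            if gain > best_gain:
                best, best_gain = t, gain
        if best is None:  # no candidate adds any new key-phrase
            break
        summary.append(best)
        candidates.remove(best)
        covered.update(kp for kp in key_phrases if kp in best.lower())
    return summary

tweets = [
    "Bridge collapsed near downtown, several injured",
    "Red Cross opens shelter at city high school",
    "Traffic jam on highway 5 this morning",
    "Volunteers needed at the shelter, bring water and blankets",
]
key_phrases = ["bridge collapsed", "injured", "shelter", "volunteers", "water"]
print(greedy_summarize(tweets, key_phrases, max_tweets=2))
```

Because each round scores only uncovered key-phrases, the second pick avoids redundancy with the first, which is one simple way to encode the diversity objective.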

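The evaluation metric named in the abstract is ROUGE-N F1, the harmonic mean of n-gram precision and recall against a reference summary. A minimal hand-rolled version is sketched below for intuition only; published numbers are normally computed with a standard package such as rouge-score rather than code like this.

```python
# Minimal ROUGE-N F1 sketch: n-gram overlap between a candidate summary
# and a reference summary. For illustration only.
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_f1(candidate, reference, n=1):
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge_n_f1("the bridge collapsed near downtown",
                       "bridge collapsed downtown this morning", n=1), 3))
```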
Source journal: Computer Speech and Language (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 11.30
Self-citation rate: 4.70%
Articles per year: 80
Review time: 22.9 weeks
Journal description: Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language. The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of and experimentation with complex models of speech and language processing has become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology.
Latest articles in this journal:
Modeling correlated causal-effect structure with a hypergraph for document-level event causality identification
You Are What You Write: Author re-identification privacy attacks in the era of pre-trained language models
End-to-End Speech-to-Text Translation: A Survey
Corpus and unsupervised benchmark: Towards Tagalog grammatical error correction
TR-Net: Token Relation Inspired Table Filling Network for Joint Entity and Relation Extraction