Benchmarking the Generation of Fact Checking Explanations

IF 4.2 · CAS Tier 1 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence) · Transactions of the Association for Computational Linguistics · Pub Date: 2023-01-01 · DOI: 10.1162/tacl_a_00601
Daniel Russo, Serra Sinem Tekiroğlu, Marco Guerini
{"title":"Benchmarking the Generation of Fact Checking Explanations","authors":"Daniel Russo, Serra Sinem Tekiroğlu, Marco Guerini","doi":"10.1162/tacl_a_00601","DOIUrl":null,"url":null,"abstract":"Abstract Fighting misinformation is a challenging, yet crucial, task. Despite the growing number of experts being involved in manual fact-checking, this activity is time-consuming and cannot keep up with the ever-increasing amount of fake news produced daily. Hence, automating this process is necessary to help curb misinformation. Thus far, researchers have mainly focused on claim veracity classification. In this paper, instead, we address the generation of justifications (textual explanation of why a claim is classified as either true or false) and benchmark it with novel datasets and advanced baselines. In particular, we focus on summarization approaches over unstructured knowledge (i.e., news articles) and we experiment with several extractive and abstractive strategies. We employed two datasets with different styles and structures, in order to assess the generalizability of our findings. Results show that in justification production summarization benefits from the claim information, and, in particular, that a claim-driven extractive step improves abstractive summarization performances. Finally, we show that although cross-dataset experiments suffer from performance degradation, a unique model trained on a combination of the two datasets is able to retain style information in an efficient manner.","PeriodicalId":33559,"journal":{"name":"Transactions of the Association for Computational Linguistics","volume":"6 1","pages":"0"},"PeriodicalIF":4.2000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transactions of the Association for Computational Linguistics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1162/tacl_a_00601","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 1

Abstract

Fighting misinformation is a challenging yet crucial task. Despite the growing number of experts involved in manual fact-checking, this activity is time-consuming and cannot keep up with the ever-increasing amount of fake news produced daily. Automating this process is therefore necessary to help curb misinformation. Thus far, researchers have mainly focused on claim veracity classification. In this paper, instead, we address the generation of justifications (textual explanations of why a claim is classified as either true or false) and benchmark it with novel datasets and advanced baselines. In particular, we focus on summarization approaches over unstructured knowledge (i.e., news articles), experimenting with several extractive and abstractive strategies. We employ two datasets with different styles and structures in order to assess the generalizability of our findings. Results show that, in justification production, summarization benefits from the claim information and, in particular, that a claim-driven extractive step improves abstractive summarization performance. Finally, we show that although cross-dataset experiments suffer from performance degradation, a single model trained on a combination of the two datasets is able to retain style information in an efficient manner.
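The claim-driven "extract then abstract" strategy highlighted in the abstract lends itself to a short illustration. The sketch below is a hypothetical reconstruction, not the authors' released code: the encoder and summarizer checkpoints (all-MiniLM-L6-v2 and facebook/bart-large-cnn), the top_k cutoff, and the generation lengths are all illustrative assumptions. It ranks article sentences by cosine similarity to the claim (the extractive step), then condenses the selected sentences into a justification with an off-the-shelf abstractive summarizer.

```python
# Hypothetical sketch of a claim-driven extract-then-abstract pipeline.
# Model choices, top_k, and generation lengths are illustrative
# assumptions, not the paper's actual configuration.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

encoder = SentenceTransformer("all-MiniLM-L6-v2")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def claim_driven_justification(
    claim: str, article_sentences: list[str], top_k: int = 8
) -> str:
    # Extractive step: score each article sentence by its cosine
    # similarity to the claim, and keep the top_k most relevant ones.
    claim_emb = encoder.encode(claim, convert_to_tensor=True)
    sent_embs = encoder.encode(article_sentences, convert_to_tensor=True)
    scores = util.cos_sim(claim_emb, sent_embs)[0]
    k = min(top_k, len(article_sentences))
    top_idx = scores.topk(k=k).indices.tolist()
    # Re-sort the selected sentences into their original document order.
    extract = " ".join(article_sentences[i] for i in sorted(top_idx))
    # Abstractive step: condense the claim-relevant extract into a
    # short textual justification.
    result = summarizer(extract, max_length=128, min_length=32, do_sample=False)
    return result[0]["summary_text"]
```

Keeping the selected sentences in document order preserves discourse coherence for the abstractive model; the key point reflected here is the paper's finding that conditioning the extractive step on the claim, rather than summarizing the whole article, is what improves the final justification.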
Source Journal
CiteScore: 32.60
Self-citation rate: 4.60%
Annual publications: 58
Review time: 8 weeks
About the Journal
Transactions of the Association for Computational Linguistics is the open-access companion to the highly regarded quarterly journal Computational Linguistics. It publishes articles in all areas of natural language processing and is an important resource for academic and industry computational linguists, natural language processing experts, artificial intelligence and machine learning investigators, cognitive scientists, speech specialists, as well as linguists and philosophers. The journal disseminates work of vital relevance to these professionals in annual volumes.
Latest Articles in This Journal
- General then Personal: Decoupling and Pre-training for Personalized Headline Generation
- MissModal: Increasing Robustness to Missing Modality in Multimodal Sentiment Analysis
- Removing Backdoors in Pre-trained Models by Regularized Continual Pre-training
- Learning More from Mixed Emotions: A Label Refinement Method for Emotion Recognition in Conversations
- An Efficient Self-Supervised Cross-View Training For Sentence Embedding