Cl2sum: abstractive summarization via contrastive prompt constructed by LLMs hallucination

IF 4.6 · CAS Region 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Complex & Intelligent Systems · Pub Date: 2025-02-19 · DOI: 10.1007/s40747-025-01795-y
Xiang Huang, Qiong Nong, Xiaobo Wang, Hongcheng Zhang, Kunpeng Du, Chunlin Yin, Li Yang, Bin Yan, Xuan Zhang
{"title":"Cl2sum: abstractive summarization via contrastive prompt constructed by LLMs hallucination","authors":"Xiang Huang, Qiong Nong, Xiaobo Wang, Hongcheng Zhang, Kunpeng Du, Chunlin Yin, Li Yang, Bin Yan, Xuan Zhang","doi":"10.1007/s40747-025-01795-y","DOIUrl":null,"url":null,"abstract":"<p>The rise of Large Language Models (LLMs) has further led to the development of text summarization techniques and also brought more attention to the problem of hallucination in the research of text summarization. Existing work in current text summarization research based on LLMs typically uses In-Context Learning (ICL) to supply accurate (document-summary) pairs of samples to the model, thus allowing the model to be more explicit in predicting the target. However, in this way, models can only determine what to do, without explicitly prohibiting what models cannot do. It is highly likely to lead to increased hallucinations due to excessive model-free play. In this paper, to alleviate the problem of hallucination in text summarization based on LLMs, we propose CL2Sum, a method that combines Contrastive Learning (CL) and ICL for summarization. After analysing the generated summaries of LLMs and summarising their hallucination types, we provided the models with accurate summaries and summaries containing hallucinations as ICL instances, either automatically or artificially. It aims to guide the model to make accurate predictions according to positive samples while also avoiding hallucinations similar to those in negative samples. Finally, a series of comparative experiments were conducted on summary datasets of different lengths and languages. The results show that CL2Sum effectively alleviates the hallucination problem of text summaries while also improving the overall quality of the generated summaries. Moreover, it can be widely adapted to text summarization tasks in different scenarios with a certain degree of robustness.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"13 1","pages":""},"PeriodicalIF":4.6000,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-025-01795-y","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The rise of Large Language Models (LLMs) has further advanced text summarization techniques and has also drawn more attention to the problem of hallucination in text summarization research. Existing LLM-based summarization work typically uses In-Context Learning (ICL) to supply the model with accurate (document, summary) example pairs, making the prediction target more explicit. However, this only tells the model what to do; it never explicitly prohibits what the model must not do, and the excessive freedom this leaves the model is highly likely to increase hallucinations. In this paper, to alleviate hallucination in LLM-based text summarization, we propose CL2Sum, a method that combines Contrastive Learning (CL) with ICL for summarization. After analysing the summaries generated by LLMs and categorizing their hallucination types, we provide the models, either automatically or manually, with both accurate summaries and hallucination-containing summaries as ICL examples. The aim is to guide the model to make accurate predictions by following the positive samples while avoiding hallucinations like those in the negative samples. Finally, we conducted a series of comparative experiments on summarization datasets of different lengths and languages. The results show that CL2Sum effectively alleviates hallucination in generated summaries while also improving their overall quality. Moreover, it adapts broadly to summarization tasks in different scenarios with a degree of robustness.
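The core mechanism the abstract describes, pairing each in-context exemplar document with a faithful (positive) summary and a hallucinated (negative) summary, can be sketched as a simple prompt-construction routine. The snippet below is a minimal illustration, not the paper's actual implementation: the prompt wording, the `build_contrastive_prompt` helper, the hallucination-type label, and the `call_llm` placeholder are all assumptions made for demonstration.

```python
# Minimal sketch of a contrastive ICL prompt in the spirit of CL2Sum.
# The template text and hallucination taxonomy here are illustrative;
# the paper's own templates and categories may differ.

from typing import List, Tuple

def build_contrastive_prompt(
    exemplars: List[Tuple[str, str, str, str]],  # (document, good_summary, bad_summary, hallucination_type)
    target_document: str,
) -> str:
    """Assemble a prompt that shows, for each exemplar, both a faithful
    summary to imitate and a hallucinated summary to avoid."""
    parts = [
        "Summarize the final document. Follow the style of the GOOD summaries "
        "and avoid the errors shown in the BAD summaries.\n"
    ]
    for doc, good, bad, h_type in exemplars:
        parts.append(f"Document: {doc}")
        parts.append(f"GOOD summary (faithful): {good}")
        parts.append(f"BAD summary ({h_type} hallucination, do NOT do this): {bad}\n")
    parts.append(f"Document: {target_document}")
    parts.append("GOOD summary (faithful):")
    return "\n".join(parts)

# Hypothetical usage with one exemplar; `call_llm` stands in for any
# chat-completion client and is not part of the paper.
exemplars = [(
    "Acme Corp reported Q3 revenue of $2.1B, up 4% year over year.",
    "Acme's Q3 revenue rose 4% to $2.1B.",
    "Acme's Q3 revenue rose 40% to $21B.",  # fabricated figures
    "factual",
)]
prompt = build_contrastive_prompt(exemplars, target_document="<new article text>")
# summary = call_llm(prompt)
```

Under this reading, the negative exemplars act as explicit prohibitions inside the prompt itself, which is what distinguishes the contrastive setup from standard positive-only ICL.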

Source Journal
Complex & Intelligent Systems (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
CiteScore: 9.60
Self-citation rate: 10.30%
Articles published: 297
Journal introduction: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.
Latest articles in this journal:
FactorMoE: a framework with mixture-of-experts and attention to dynamically combine alpha factors in quantitative investment
Trustworthy AI for radar vital signs: detecting and mitigating gender bias in healthcare
A lightweight adaptive feature-domain defogging dynamic convolutional network for object detection in foggy conditions
Well path lightweight prediction model construction method for rotary steerable system based on composite knowledge distillation
Adaptive spatio-temporal graphs with self-supervised pretraining for multi-horizon weather forecasting