Understanding Knowledge Drift in LLMs through Misinformation

Alina Fastowski, Gjergji Kasneci
{"title":"Understanding Knowledge Drift in LLMs through Misinformation","authors":"Alina Fastowski, Gjergji Kasneci","doi":"arxiv-2409.07085","DOIUrl":null,"url":null,"abstract":"Large Language Models (LLMs) have revolutionized numerous applications,\nmaking them an integral part of our digital ecosystem. However, their\nreliability becomes critical, especially when these models are exposed to\nmisinformation. We primarily analyze the susceptibility of state-of-the-art\nLLMs to factual inaccuracies when they encounter false information in a QnA\nscenario, an issue that can lead to a phenomenon we refer to as *knowledge\ndrift*, which significantly undermines the trustworthiness of these models. We\nevaluate the factuality and the uncertainty of the models' responses relying on\nEntropy, Perplexity, and Token Probability metrics. Our experiments reveal that\nan LLM's uncertainty can increase up to 56.6% when the question is answered\nincorrectly due to the exposure to false information. At the same time,\nrepeated exposure to the same false information can decrease the models\nuncertainty again (-52.8% w.r.t. the answers on the untainted prompts),\npotentially manipulating the underlying model's beliefs and introducing a drift\nfrom its original knowledge. These findings provide insights into LLMs'\nrobustness and vulnerability to adversarial inputs, paving the way for\ndeveloping more reliable LLM applications across various domains. The code is\navailable at https://github.com/afastowski/knowledge_drift.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07085","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Large Language Models (LLMs) have revolutionized numerous applications, making them an integral part of our digital ecosystem. However, their reliability becomes critical, especially when these models are exposed to misinformation. We primarily analyze the susceptibility of state-of-the-art LLMs to factual inaccuracies when they encounter false information in a QnA scenario, an issue that can lead to a phenomenon we refer to as *knowledge drift*, which significantly undermines the trustworthiness of these models. We evaluate the factuality and the uncertainty of the models' responses using Entropy, Perplexity, and Token Probability metrics. Our experiments reveal that an LLM's uncertainty can increase by up to 56.6% when a question is answered incorrectly due to exposure to false information. At the same time, repeated exposure to the same false information can decrease the model's uncertainty again (-52.8% w.r.t. the answers on the untainted prompts), potentially manipulating the underlying model's beliefs and introducing a drift from its original knowledge. These findings provide insights into LLMs' robustness and vulnerability to adversarial inputs, paving the way for developing more reliable LLM applications across various domains. The code is available at https://github.com/afastowski/knowledge_drift.
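For the authors' implementation, see the linked repository. As a minimal sketch of how uncertainty metrics of this kind (entropy, perplexity, and token probability) can be computed from a causal LM's per-token output distributions, the snippet below uses a Hugging Face `transformers` model; the model choice (`gpt2`), the `uncertainty_metrics` helper, and the prompt handling are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice; the paper's experiments may use different models.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()


def uncertainty_metrics(prompt: str, answer: str) -> dict:
    """Sketch of entropy, perplexity, and mean token probability for `answer`
    conditioned on `prompt`. One common formulation; definitions may differ."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    # Assumes the prompt tokenization is a prefix of the joint tokenization
    # (typically true for GPT-2-style BPE without special tokens).
    full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(full_ids).logits  # shape: (1, seq_len, vocab_size)

    # The logit at position t predicts the token at position t + 1,
    # so shift by one to get the distributions over the answer tokens.
    start = prompt_ids.shape[1]
    pred_logits = logits[0, start - 1 : full_ids.shape[1] - 1]
    answer_ids = full_ids[0, start:]

    log_probs = F.log_softmax(pred_logits, dim=-1)
    probs = log_probs.exp()

    # Mean entropy of the predictive distribution at each answer position.
    entropy = -(probs * log_probs).sum(dim=-1).mean().item()

    # Log-probabilities of the answer tokens themselves.
    token_log_probs = log_probs.gather(1, answer_ids.unsqueeze(-1)).squeeze(-1)

    # Perplexity = exp of the average negative log-likelihood.
    perplexity = torch.exp(-token_log_probs.mean()).item()

    # Mean probability assigned to the answer tokens.
    mean_token_prob = token_log_probs.exp().mean().item()

    return {
        "entropy": entropy,
        "perplexity": perplexity,
        "token_probability": mean_token_prob,
    }


# Example usage: compare uncertainty on a clean vs. a tainted prompt.
print(uncertainty_metrics("Q: What is the capital of France?\nA:", " Paris"))
```

Intuitively, higher entropy and perplexity (and lower token probability) indicate a less confident model, which is how shifts in uncertainty after exposure to false information can be quantified.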