More is more: Addition bias in large language models

Luca Santagata, Cristiano De Nobili
{"title":"More is more: Addition bias in large language models","authors":"Luca Santagata ,&nbsp;Cristiano De Nobili","doi":"10.1016/j.chbah.2025.100129","DOIUrl":null,"url":null,"abstract":"<div><div>In this paper, we investigate the presence of addition bias in Large Language Models (LLMs), drawing a parallel to the cognitive bias observed in humans where individuals tend to favor additive over sub-tractive changes [3]. Using a series of controlled experiments, we tested various LLMs, including GPT-3.5 Turbo, Claude 3.5 Sonnet, Mistral, Math<em>Σ</em>tral, and Llama 3.1, on tasks designed to measure their propensity for additive versus subtractive modifications. Our findings demonstrate a significant preference for additive changes across all tested models. For example, in a palindrome creation task, Llama 3.1 favored adding let-ters 97.85% of the time over removing them. Similarly, in a Lego tower balancing task, GPT-3.5 Turbo chose to add a brick 76.38% of the time rather than remove one. In a text summarization task, Mistral 7B pro-duced longer summaries in 59.40%–75.10% of cases when asked to improve its own or others’ writing. These results indicate that, similar to humans, LLMs exhibit a marked addition bias, which might have im-plications when LLMs are used on a large scale. Addittive bias might increase resource use and environmental impact, leading to higher eco-nomic costs due to overconsumption and waste. This bias should be con-sidered in the development and application of LLMs to ensure balanced and efficient problem-solving approaches.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100129"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882125000131","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/2/18 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In this paper, we investigate the presence of addition bias in Large Language Models (LLMs), drawing a parallel to the cognitive bias observed in humans where individuals tend to favor additive over subtractive changes [3]. Using a series of controlled experiments, we tested various LLMs, including GPT-3.5 Turbo, Claude 3.5 Sonnet, Mistral, MathΣtral, and Llama 3.1, on tasks designed to measure their propensity for additive versus subtractive modifications. Our findings demonstrate a significant preference for additive changes across all tested models. For example, in a palindrome creation task, Llama 3.1 favored adding letters 97.85% of the time over removing them. Similarly, in a Lego tower balancing task, GPT-3.5 Turbo chose to add a brick 76.38% of the time rather than remove one. In a text summarization task, Mistral 7B produced longer summaries in 59.40%–75.10% of cases when asked to improve its own or others’ writing. These results indicate that, similar to humans, LLMs exhibit a marked addition bias, which might have implications when LLMs are used on a large scale. Additive bias might increase resource use and environmental impact, leading to higher economic costs due to overconsumption and waste. This bias should be considered in the development and application of LLMs to ensure balanced and efficient problem-solving approaches.
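
As an illustration of how such a propensity can be measured, the sketch below reproduces the shape of the palindrome task: prompt a model with words that are one letter away from being a palindrome in either direction, then classify each reply as additive or subtractive by comparing lengths. This is a minimal sketch, not the authors' protocol; the seed words, prompt wording, model choice, and use of the OpenAI Python client are illustrative assumptions.

    # Minimal sketch of a palindrome-task trial (illustrative, not the paper's code).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Seed words that are one letter away from a palindrome in either
    # direction, e.g. "kayaks" -> "skayaks" (add an s) or "kayak" (remove it).
    SEED_WORDS = ["kayaks", "levels", "tooth"]

    PROMPT = ("Change the word '{word}' so that it becomes a palindrome. "
              "Reply with the new word only.")

    def classify_edit(original: str, modified: str) -> str:
        # Label the reply as additive, subtractive, or other by length change.
        if len(modified) > len(original):
            return "additive"
        if len(modified) < len(original):
            return "subtractive"
        return "other"

    counts = {"additive": 0, "subtractive": 0, "other": 0}
    for word in SEED_WORDS:
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": PROMPT.format(word=word)}],
        )
        answer = reply.choices[0].message.content.strip().lower()
        counts[classify_edit(word, answer)] += 1

    total = sum(counts.values())
    print(f"additive rate: {counts['additive'] / total:.2%}")

A single reply per word is noisy; percentages like the 97.85% reported above imply many repeated trials per word and per model.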