{"title":"More is more: Addition bias in large language models","authors":"Luca Santagata , Cristiano De Nobili","doi":"10.1016/j.chbah.2025.100129","DOIUrl":null,"url":null,"abstract":"<div><div>In this paper, we investigate the presence of addition bias in Large Language Models (LLMs), drawing a parallel to the cognitive bias observed in humans where individuals tend to favor additive over sub-tractive changes [3]. Using a series of controlled experiments, we tested various LLMs, including GPT-3.5 Turbo, Claude 3.5 Sonnet, Mistral, Math<em>Σ</em>tral, and Llama 3.1, on tasks designed to measure their propensity for additive versus subtractive modifications. Our findings demonstrate a significant preference for additive changes across all tested models. For example, in a palindrome creation task, Llama 3.1 favored adding let-ters 97.85% of the time over removing them. Similarly, in a Lego tower balancing task, GPT-3.5 Turbo chose to add a brick 76.38% of the time rather than remove one. In a text summarization task, Mistral 7B pro-duced longer summaries in 59.40%–75.10% of cases when asked to improve its own or others’ writing. These results indicate that, similar to humans, LLMs exhibit a marked addition bias, which might have im-plications when LLMs are used on a large scale. Addittive bias might increase resource use and environmental impact, leading to higher eco-nomic costs due to overconsumption and waste. This bias should be con-sidered in the development and application of LLMs to ensure balanced and efficient problem-solving approaches.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100129"},"PeriodicalIF":0.0000,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882125000131","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In this paper, we investigate the presence of addition bias in Large Language Models (LLMs), drawing a parallel to the cognitive bias observed in humans whereby individuals tend to favor additive over subtractive changes [3]. Using a series of controlled experiments, we tested various LLMs, including GPT-3.5 Turbo, Claude 3.5 Sonnet, Mistral, MathΣtral, and Llama 3.1, on tasks designed to measure their propensity for additive versus subtractive modifications. Our findings demonstrate a significant preference for additive changes across all tested models. For example, in a palindrome creation task, Llama 3.1 favored adding letters over removing them 97.85% of the time. Similarly, in a Lego tower balancing task, GPT-3.5 Turbo chose to add a brick rather than remove one 76.38% of the time. In a text summarization task, Mistral 7B produced longer summaries in 59.40%–75.10% of cases when asked to improve its own or others' writing. These results indicate that, like humans, LLMs exhibit a marked addition bias, which may have implications when LLMs are deployed at scale. Addition bias might increase resource use and environmental impact, leading to higher economic costs through overconsumption and waste. This bias should be considered in the development and application of LLMs to ensure balanced and efficient problem-solving approaches.
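To illustrate how such a propensity could be quantified, the sketch below (not taken from the paper; the example strings and the length-based classification rule are our own assumptions) labels a model's edit as additive or subtractive and computes an addition rate analogous to the percentages reported above.

```python
# Minimal sketch, assuming edits are classified purely by length change.
# This is an illustration of the measurement idea, not the authors' code.

def classify_edit(original: str, edited: str) -> str:
    """Label a model's edit as 'additive', 'subtractive', or 'same length'."""
    if len(edited) > len(original):
        return "additive"
    if len(edited) < len(original):
        return "subtractive"
    return "same length"

def addition_rate(pairs) -> float:
    """Fraction of decided edits that are additive, mirroring the kind of
    percentage reported in the abstract (e.g., 97.85% for Llama 3.1)."""
    labels = [classify_edit(original, edited) for original, edited in pairs]
    additive = labels.count("additive")
    decided = additive + labels.count("subtractive")
    return additive / decided if decided else float("nan")

if __name__ == "__main__":
    # Hypothetical palindrome-task responses: "ABCA" can become a palindrome
    # either by adding a letter ("ABCBA") or by removing one ("ABA").
    trials = [("ABCA", "ABCBA"), ("ABCA", "ABA"), ("XYZX", "XYZYX")]
    print(f"Addition rate: {addition_rate(trials):.2%}")
```

In practice, the model outputs would come from prompting each LLM on tasks that admit both additive and subtractive solutions; the classification step shown here is only the final tally.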