{"title":"Human bias in AI models? Anchoring effects and mitigation strategies in large language models","authors":"Jeremy K. Nguyen","doi":"10.1016/j.jbef.2024.100971","DOIUrl":null,"url":null,"abstract":"<div><p>This study builds on the seminal work of Tversky and Kahneman (1974), exploring the presence and extent of anchoring bias in forecasts generated by four Large Language Models (LLMs): GPT-4, Claude 2, Gemini Pro and GPT-3.5. In contrast to recent findings of advanced reasoning capabilities in LLMs, our randomised controlled trials reveal the presence of anchoring bias across all models: forecasts are significantly influenced by prior mention of high or low values. We examine two mitigation prompting strategies, ‘Chain of Thought’ and ‘ignore previous’, finding limited and varying degrees of effectiveness. Our results extend the anchoring bias research in finance beyond human decision-making to encompass LLMs, highlighting the importance of deliberate and informed prompting in AI forecasting in both <em>ad hoc</em> LLM use and in crafting few-shot examples.</p></div>","PeriodicalId":47026,"journal":{"name":"Journal of Behavioral and Experimental Finance","volume":"43 ","pages":"Article 100971"},"PeriodicalIF":4.3000,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2214635024000868/pdfft?md5=a59aced4d78dcdb67f3ba973b6b2959e&pid=1-s2.0-S2214635024000868-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Behavioral and Experimental Finance","FirstCategoryId":"96","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2214635024000868","RegionNum":2,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BUSINESS, FINANCE","Score":null,"Total":0}
Abstract
This study builds on the seminal work of Tversky and Kahneman (1974), exploring the presence and extent of anchoring bias in forecasts generated by four Large Language Models (LLMs): GPT-4, Claude 2, Gemini Pro and GPT-3.5. In contrast to recent findings of advanced reasoning capabilities in LLMs, our randomised controlled trials reveal the presence of anchoring bias across all models: forecasts are significantly influenced by prior mention of high or low values. We examine two mitigation prompting strategies, ‘Chain of Thought’ and ‘ignore previous’, finding limited and varying degrees of effectiveness. Our results extend the anchoring bias research in finance beyond human decision-making to encompass LLMs, highlighting the importance of deliberate and informed prompting in AI forecasting in both ad hoc LLM use and in crafting few-shot examples.
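The abstract describes a randomised-controlled-trial design: forecasts are elicited with or without a previously mentioned high or low value, and with 'Chain of Thought' or 'ignore previous' mitigation prefixes. The sketch below illustrates how such an experiment might be wired up. It is a minimal illustration only; the prompt wordings, anchor values, and the `query_llm` placeholder are assumptions, not the paper's actual materials or any specific LLM provider's API.

```python
"""Minimal sketch of an anchoring-bias prompt experiment.

All names and wordings here are illustrative assumptions; `query_llm` is a
placeholder for whichever LLM API (GPT-4, Claude, Gemini, etc.) is under test.
"""

import random

# Anchor conditions: a high value, a low value, or no anchor (control).
ANCHORS = {"high": 9000, "low": 1000, "none": None}

# Mitigation prefixes examined in the spirit of the study.
MITIGATIONS = {
    "baseline": "",
    "chain_of_thought": "Think through the relevant factors step by step before answering. ",
    "ignore_previous": "Ignore any numbers mentioned earlier in this conversation. ",
}


def build_prompt(anchor: int | None, mitigation: str) -> str:
    """Compose a forecasting prompt, optionally preceded by an anchor value."""
    anchor_text = f"An analyst recently mentioned a figure of {anchor} points. " if anchor else ""
    question = (
        "What will the value of the stock index be at the end of next year? "
        "Reply with a single number."
    )
    return anchor_text + MITIGATIONS[mitigation] + question


def query_llm(prompt: str) -> float:
    """Placeholder: send `prompt` to the model under test and parse a numeric forecast."""
    raise NotImplementedError("wire this to the LLM API being evaluated")


def run_trials(n: int = 50) -> list[dict]:
    """Randomly assign anchor and mitigation conditions, then collect forecasts."""
    results = []
    for _ in range(n):
        condition = random.choice(list(ANCHORS))
        mitigation = random.choice(list(MITIGATIONS))
        prompt = build_prompt(ANCHORS[condition], mitigation)
        results.append(
            {"anchor": condition, "mitigation": mitigation, "forecast": query_llm(prompt)}
        )
    return results
```

Anchoring bias would then be assessed by comparing the distribution of forecasts across the high, low, and no-anchor conditions, and the mitigation prefixes by whether they shrink the gap between the high- and low-anchor groups.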
About the journal
Behavioral and Experimental Finance represent lenses and approaches through which we can view financial decision-making. The aim of the journal is to publish high-quality research in all fields of finance, where such research is carried out with a behavioral perspective and/or via experimental methods. It is open to, but not limited to, papers covering investigations of biases, the role of various neurological markers in financial decision-making, national and organizational culture as it impacts financial decision-making, sentiment and asset pricing, the design and implementation of experiments to investigate financial decision-making and trading, methodological experiments, and natural experiments.
Journal of Behavioral and Experimental Finance welcomes full-length and short letter papers in the areas of behavioral and experimental finance. The focus is on rapid dissemination of high-impact research in these areas.