Similarity and Consistency in Algorithm-Guided Exploration

Ludwig Danwitz, Lars Hornuf, Sebastian Fehrler, Hsuan-Yu Lin, Yongping Bao, Fabian Dvorak, Bettina von Helversen
Algorithmic advice has the potential to significantly improve human decision-making, especially in dynamic and complex tasks that require a balance between exploration and exploitation. This study examines the conditions under which individuals are willing to accept algorithmic advice in such settings, focusing on the interaction between participants' exploration preferences and those of the advising algorithm. In an online experiment, we designed reinforcement learning algorithms to prioritize either exploration or exploitation and observed participants' decision-making behavior, which we modeled with a cognitive framework analogous to the algorithm. Contrary to expectations, participants showed no preference for algorithms that matched their own exploration tendencies. Instead, they were more likely to follow the advice of exploitative, consistent algorithms, possibly interpreting consistency as a signal of competence. Although participants benefited from the exploratory algorithm's advice, their reluctance to follow it, regardless of whether they had previously ignored its recommendations, highlights a challenge for effective human-algorithm collaboration. Exploratory algorithms can promote behavioral diversification, but this effect is negated when humans disregard their advice. In such cases, algorithmic guidance can unintentionally decrease behavioral diversity by reinforcing established patterns.
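The exploration-exploitation trade-off that the study's algorithms embody is commonly operationalized with softmax (temperature-based) action selection in a bandit task. The sketch below is illustrative only, not the paper's actual implementation: a minimal value-learning agent whose temperature parameter tunes it toward exploration (high temperature, diverse choices) or exploitation (low temperature, consistent choices). All function names, parameter values, and the reward structure are assumptions.

```python
import math
import random


def softmax_choice(values, temperature, rng):
    """Sample an arm index with probability proportional to exp(value / temperature).

    High temperature -> near-uniform sampling (exploration);
    low temperature  -> almost always the highest-valued arm (exploitation).
    """
    logits = [v / temperature for v in values]
    m = max(logits)  # shift by the max logit for numerical stability
    weights = [math.exp(l - m) for l in logits]
    r = rng.random() * sum(weights)
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if r < cumulative:
            return i
    return len(values) - 1  # guard against floating-point edge cases


def run_bandit(true_means, temperature, steps=500, alpha=0.1, seed=0):
    """Run a simple value-learning bandit agent; return how often each arm was chosen.

    Illustrative parameters: Gaussian reward noise (sd = 1), incremental
    Q-value updates with learning rate alpha.
    """
    rng = random.Random(seed)
    q = [0.0] * len(true_means)      # estimated value per arm
    counts = [0] * len(true_means)
    for _ in range(steps):
        arm = softmax_choice(q, temperature, rng)
        reward = true_means[arm] + rng.gauss(0.0, 1.0)  # noisy payoff
        q[arm] += alpha * (reward - q[arm])             # incremental value update
        counts[arm] += 1
    return counts
```

An exploitative setting (e.g., temperature = 0.1) concentrates choices on a single arm, while an exploratory setting (e.g., temperature = 5.0) keeps sampling all arms, mirroring the consistent versus diversifying advice styles the study contrasts.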
Journal of Behavioral Decision Making, 38(5), 2025. DOI: 10.1002/bdm.70055 (https://doi.org/10.1002/bdm.70055). Published 12 December 2025. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/bdm.70055