Reinforcement learning models of risky choice and the promotion of risk-taking by losses disguised as wins in rats.

Andrew T Marshall, Kimberly Kirkpatrick
{"title":"Reinforcement learning models of risky choice and the promotion of risk-taking by losses disguised as wins in rats.","authors":"Andrew T Marshall,&nbsp;Kimberly Kirkpatrick","doi":"10.1037/xan0000141","DOIUrl":null,"url":null,"abstract":"<p><p>Risky decisions are inherently characterized by the potential to receive gains or incur losses, and these outcomes have distinct effects on subsequent decision-making. One important factor is that individuals engage in loss-chasing, in which the reception of a loss is followed by relatively increased risk-taking. Unfortunately, the mechanisms of loss-chasing are poorly understood, despite the potential importance for understanding pathological choice behavior. The goal of the present experiment was to illuminate the mechanisms governing individual differences in loss-chasing and risky-choice behaviors. Rats chose between a low-uncertainty outcome that always delivered a variable amount of reward and a high-uncertainty outcome that probabilistically delivered reward. Loss-processing and loss-chasing were assessed in the context of losses disguised as wins (LDWs), which are loss outcomes that are presented along with gain-related stimuli. LDWs have been suggested to interfere with adaptive decision-making in humans and thus potentially increase loss-making. Here, the rats presented with LDWs were riskier, in that they made more choices for the high-uncertainty outcome. A series of nonlinear models were fit to individual rats' data to elucidate the possible psychological mechanisms that best account for individual differences in high-uncertainty choices and loss-chasing behaviors. The models suggested that the rats presented with LDWs were more prone to showing a stay bias following high-uncertainty outcomes compared to rats not presented with LDWs. These results collectively suggest that LDWs acquire conditioned reinforcement properties that encourage continued risk-taking and increase loss-chasing following previous high-risk decisions. (PsycINFO Database Record</p>","PeriodicalId":51088,"journal":{"name":"Journal of Experimental Psychology-Animal Learning and Cognition","volume":"43 3","pages":"262-279"},"PeriodicalIF":1.3000,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5682951/pdf/nihms885038.pdf","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Experimental Psychology-Animal Learning and Cognition","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/xan0000141","RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 11

Abstract

Risky decisions are inherently characterized by the potential to receive gains or incur losses, and these outcomes have distinct effects on subsequent decision-making. One important factor is that individuals engage in loss-chasing, in which the reception of a loss is followed by relatively increased risk-taking. Unfortunately, the mechanisms of loss-chasing are poorly understood, despite the potential importance for understanding pathological choice behavior. The goal of the present experiment was to illuminate the mechanisms governing individual differences in loss-chasing and risky-choice behaviors. Rats chose between a low-uncertainty outcome that always delivered a variable amount of reward and a high-uncertainty outcome that probabilistically delivered reward. Loss-processing and loss-chasing were assessed in the context of losses disguised as wins (LDWs), which are loss outcomes that are presented along with gain-related stimuli. LDWs have been suggested to interfere with adaptive decision-making in humans and thus potentially increase loss-making. Here, the rats presented with LDWs were riskier, in that they made more choices for the high-uncertainty outcome. A series of nonlinear models were fit to individual rats' data to elucidate the possible psychological mechanisms that best account for individual differences in high-uncertainty choices and loss-chasing behaviors. The models suggested that the rats presented with LDWs were more prone to showing a stay bias following high-uncertainty outcomes compared to rats not presented with LDWs. These results collectively suggest that LDWs acquire conditioned reinforcement properties that encourage continued risk-taking and increase loss-chasing following previous high-risk decisions. (PsycINFO Database Record)
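The abstract does not reproduce the model equations, only the general approach: trial-by-trial nonlinear models of value learning plus a stay (perseveration) bias, fit to each rat's choice sequence. The sketch below is a minimal illustration of that kind of model, not the authors' actual formulation; the delta-rule update, softmax choice rule, parameter names, and the placeholder data arrays (`choices`, `rewards`) are all assumptions introduced here for clarity.

```python
# Illustrative sketch only: a simple value-learning model with a softmax choice
# rule and a stay-bias term favoring repetition of the previous choice,
# fit per rat by maximum likelihood. Not the models reported in the paper.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, choices, rewards):
    """Negative log-likelihood of one rat's trial-by-trial choices.

    choices : array of 0 (low-uncertainty) / 1 (high-uncertainty) per trial
    rewards : array of obtained reward amounts per trial
    params  : learning rate alpha, inverse temperature beta, stay bias kappa
    """
    alpha, beta, kappa = params
    q = np.zeros(2)           # learned values of the two options
    prev_choice = None
    nll = 0.0
    for c, r in zip(choices, rewards):
        # stay bias: add kappa to the option chosen on the previous trial
        bias = np.zeros(2)
        if prev_choice is not None:
            bias[prev_choice] = kappa
        logits = beta * q + bias
        logp = logits - np.logaddexp(logits[0], logits[1])  # log softmax
        nll -= logp[c]
        # delta-rule update of the chosen option's value
        q[c] += alpha * (r - q[c])
        prev_choice = c
    return nll

def fit_rat(choices, rewards):
    """Fit (alpha, beta, kappa) for one rat; returns the scipy result object."""
    x0 = np.array([0.2, 1.0, 0.0])
    bounds = [(1e-3, 1.0), (1e-3, 20.0), (-5.0, 5.0)]
    return minimize(neg_log_likelihood, x0, args=(choices, rewards),
                    bounds=bounds, method="L-BFGS-B")
```

In a model of this form, a larger fitted kappa after high-uncertainty outcomes would correspond to the stay bias the abstract attributes to the LDW group; competing models can be compared per rat with AIC or BIC.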

About the journal: The Journal of Experimental Psychology: Animal Learning and Cognition publishes experimental and theoretical studies concerning all aspects of animal behavior processes.