Human Belief State-Based Exploration and Exploitation in an Information-Selective Symmetric Reversal Bandit Task

Lilla Horvath, Stanley Colcombe, Michael Milham, Shruti Ray, Philipp Schwartenbeck, Dirk Ostwald

Computational Brain & Behavior, 4(4), 442-462 (2021). DOI: 10.1007/s42113-021-00112-3
Humans often face sequential decision-making problems, in which information about the environmental reward structure is detached from rewards for a subset of actions. In the current exploratory study, we introduce an information-selective symmetric reversal bandit task to model such situations and obtained choice data on this task from 24 participants. To arbitrate between different decision-making strategies that participants may use on this task, we developed a set of probabilistic agent-based behavioral models, including exploitative and explorative Bayesian agents, as well as heuristic control agents. Upon validating the model and parameter recovery properties of our model set and summarizing the participants' choice data in a descriptive way, we used a maximum likelihood approach to evaluate the participants' choice data from the perspective of our model set. In brief, we provide quantitative evidence that participants employ a belief state-based hybrid explorative-exploitative strategy on the information-selective symmetric reversal bandit task, lending further support to the finding that humans are guided by their subjective uncertainty when solving exploration-exploitation dilemmas.
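To make the belief-state idea concrete, here is a minimal illustrative sketch (not the authors' actual model, whose specification is in the paper itself) of a Bayesian agent on a two-armed symmetric reversal bandit. The agent maintains a belief state b = P(arm 0 is currently the high-reward arm), updates it by Bayes' rule after each outcome, and folds in a per-trial reversal (hazard) rate. The parameter values `P_HIGH` and `HAZARD` are assumptions chosen for illustration.

```python
# Illustrative sketch, not the paper's model: a belief-state Bayesian
# agent for a two-armed symmetric reversal bandit. Arm reward
# probabilities are p and 1 - p, and the arms occasionally reverse.
P_HIGH = 0.85   # reward probability of the currently high arm (assumed)
HAZARD = 0.05   # per-trial probability of a reversal (assumed)


def update_belief(b, action, reward, p_high=P_HIGH, hazard=HAZARD):
    """Return the updated belief P(arm 0 is high) after one observation.

    b      -- prior belief that arm 0 is the high-reward arm
    action -- chosen arm (0 or 1)
    reward -- observed outcome (0 or 1)
    """
    # Likelihood of the observation under each hypothesis.
    if action == 0:
        like_h0, like_h1 = (p_high, 1 - p_high) if reward else (1 - p_high, p_high)
    else:
        like_h0, like_h1 = (1 - p_high, p_high) if reward else (p_high, 1 - p_high)

    # Bayes' rule, then propagate the belief through the reversal dynamics.
    posterior = like_h0 * b / (like_h0 * b + like_h1 * (1 - b))
    return posterior * (1 - hazard) + (1 - posterior) * hazard


def expected_reward(b, action, p_high=P_HIGH):
    """Expected reward of an action under belief state b (exploitative value)."""
    if action == 0:
        return b * p_high + (1 - b) * (1 - p_high)
    return b * (1 - p_high) + (1 - b) * p_high
```

A purely exploitative agent would choose the action maximizing `expected_reward`; explorative variants instead weight actions by how much they are expected to reduce the agent's subjective uncertainty about the belief state, which is the kind of strategy the abstract refers to.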
Supplementary information: The online version contains supplementary material available at 10.1007/s42113-021-00112-3.