Pub Date: 2021-11-09 | DOI: 10.1080/13546783.2021.1999327
Investigating lay evaluations of models
P. Kane, S. Broomell
Thinking & Reasoning, pp. 569–604
Abstract: Many important decisions depend on unknown states of the world. Society is increasingly relying on statistical predictive models to make decisions in these cases. While predictive models are useful, previous research has documented that (a) individual decision makers distrust models and (b) people’s predictions are often worse than those of models. These findings indicate a lack of awareness of how to evaluate predictions generally. This includes concepts like the loss function used to aggregate errors or whether the error is training error or generalisation error. To address this gap, we present three studies testing how lay people visually evaluate the predictive accuracy of models. We found that (a) participant judgements of prediction errors were more similar to absolute error than squared error (Study 1), (b) we did not detect a difference in participant reactions to training error versus generalisation error (Study 2), and (c) participants rated complex models as more accurate when comparing two models, but rated simple models as more accurate when shown single models in isolation (Study 3). When communicating about models, researchers should be aware that the public’s visual evaluation of models may disagree with their method of measuring errors and that many may fail to recognise overfitting.
Pub Date: 2021-10-28 | DOI: 10.1080/13546783.2021.1994009
When beliefs and evidence collide: psychological and ideological predictors of motivated reasoning about climate change
Zachary A. Caddick, Gregory J. Feist
Thinking & Reasoning, pp. 428–464
Abstract: Motivated reasoning occurs when we reason differently about evidence that supports our prior beliefs than when it contradicts those beliefs. Adult participants (N = 377) from Amazon’s Mechanical Turk (MTurk) system completed written responses critically evaluating strengths and weaknesses in a vignette on the topic of anthropogenic climate change (ACC). The vignette had two fictional scientists present prototypical arguments for and against anthropogenic climate change that were constructed with equally flawed and conflicting reasoning. The current study tested and found support for three main hypotheses: cognitive style, personality, and ideology would predict both motivated reasoning and endorsement of human-caused climate change; and those who accept human-caused climate change would be less likely to engage in biased reasoning and more likely to engage in objective reasoning about climate change than those who deny human activity as a cause of climate change.
Pub Date: 2021-10-28 | DOI: 10.1080/13546783.2021.1992012
The stability of syllogistic reasoning performance over time
Hannah Dames, K. C. Klauer, Marco Ragni
Thinking & Reasoning, pp. 529–568
Abstract: How individuals reason deductively has concerned researchers for many years. Yet, it is still unclear whether, and if so how, participants’ reasoning performance changes over time. In two test sessions one week apart, we examined how the syllogistic reasoning performance of 100 participants changed within and between sessions. Participants’ reasoning performance increased during the first session. A week later, they started off at the same level of reasoning performance but did not further improve. The reported performance gains were found only for logically valid, but not for invalid, syllogisms, indicating a bias against responding that ‘no valid conclusion’ follows from the premises. Importantly, we demonstrate that participants substantially varied in the strength of the temporal performance changes and explored how individual characteristics, such as participants’ personality and cognitive ability, relate to these interindividual differences. Together, our findings contradict common assumptions that reasoning performance only reflects a stable inherent ability.
Pub Date: 2021-10-12 | DOI: 10.1080/13546783.2021.1989034
Comparing the functional benefits of counterfactual and prefactual thinking: the content-specific and content-neutral pathways
Dominic K. Fernandez, Heather H. M. Gan, Amy Y. C. Chan
Thinking & Reasoning, pp. 261–289
Abstract: We investigated the preparatory benefits of counterfactual and prefactual thinking for cognitive task performance. Experiment 1 replicated the robust finding that individuals focus more on mutating internally controllable elements when thinking prefactually about their future task performance than when thinking counterfactually about a past performance. We also replicated the finding that counterfactual thinking was associated with significant performance improvement in an anagram task. However, despite their greater focus on internally controllable thoughts, individuals who generated prefactuals showed no performance improvement. In Experiment 2, we examined the relative performance-enhancing roles of counterfactuals and prefactuals in a subsequent unrelated analytical reasoning task. Only individuals who completed a counterfactual priming task performed significantly better than those in a control group. These results corroborate extant findings of the preparatory advantage of counterfactuals. They also raise questions regarding some ways in which the preparatory functions of counterfactual and prefactual thinking have been conceptualised.
Pub Date: 2021-10-02 | DOI: 10.1080/13546783.2021.1979651
Increasing climate efficacy is not a surefire means to promoting climate commitment
Aishlyn Angill-Williams, C. Davis
Thinking & Reasoning, pp. 375–395
Abstract: People’s perception of their own efficacy is a critical precursor for adaptive behavioural responses to the threat posed by climate change. The present study investigated whether components of climate efficacy could be enhanced by short video messages. An online study (N = 161) compared groups of participants who received messages focusing on individual or collective behaviour. Relative to a control group, these groups showed increased levels of response efficacy but not self-efficacy. However, this did not translate to increased climate commitment; mediation analysis suggested that the video messages, while increasing efficacy, may also have had a counterproductive effect on behavioural intentions, possibly by reducing the perceived urgency of action. This finding reinforces the challenge faced by climate communicators seeking to craft a message that boosts efficacy and simultaneously motivates adaptive responses to the climate crisis.
Pub Date: 2021-09-28 | DOI: 10.1080/13546783.2021.1982003
A motivational systems approach to investigating opinions on climate change
D. Molden, R. Bayes, J. Druckman
Thinking & Reasoning, pp. 396–427
Abstract: Understanding how people form opinions about climate change has proven to be challenging. One of the most common approaches to studying climate change beliefs is to assume people employ motivated reasoning. We first detail how scholars in this area have applied motivated reasoning perspectives, identifying a variety of different judgment goals on which they have focused. We next argue that existing findings fail to conclusively show motivated reasoning, much less isolate which specific goals guide opinion formation about climate change. Then, we describe a novel motivational systems framework that would allow a more precise identification of the role of motivated reasoning in such opinions. Finally, we conclude by providing examples from completed and planned studies that apply this framework. Ultimately, we hope to give scholars and practitioners better tools to isolate why people hold the climate opinions they do and to develop effective communication strategies to change those opinions.
Pub Date: 2021-08-30 | DOI: 10.1080/13546783.2021.1965025
Accident and agency: a mixed methods study contrasting luck and interactivity in problem solving
Wendy Ross, F. Vallée‐Tourangeau
Thinking & Reasoning, pp. 487–528
Abstract: Problem solving in a materially rich environment requires interacting with chance. Sixty-four participants were invited to solve 5-letter anagrams presented as movable tiles in conditions that either allowed the participants to move the tiles as they wished or only allowed random shuffling (without rearranging the tiles post-shuffling), thus contrasting pure luck with an interactive model. We hypothesised that shuffling would break unhelpful mental sets and introduce beneficial unplanned problem-solving trajectories. However, participants performed significantly worse when shuffling, which suggests luck plays less of a role than has been previously suggested. Granular analysis of seven critical cases revealed arbitrary path dependency across both conditions and moments of missed luck. This analysis also questions current models of non-agentic luck and the ability to separate agent and luck. This research has implications for fostering better problem solving in an uncertain and fluid world.
Pub Date: 2021-08-27 | DOI: 10.1080/13546783.2021.1963841
How and when does syntax perpetuate stereotypes? Probing the framing effects of subject-complement statements of equality
Kevin J. Holmes, Evan M. Doherty, S. Flusberg
Thinking & Reasoning, pp. 226–260
Abstract: Although subject-complement statements like “girls are as good as boys at math” appear to express gender equality, people infer a gender difference: the group in the complement position (boys) is judged superior. We investigated (1) whether this syntactic framing effect generalizes to other socially charged inferences and (2) whether awareness of the bias implied by the syntax mitigates its influence. Across four preregistered experiments (N = 2,734), we found reliable framing effects on inferences about both math ability and terrorist behavior, but only for the small subset of participants (∼30%) who failed to identify the influence of the subject-complement statements on their judgments. Most participants did recognize this influence, and these participants showed reduced or even reversed framing effects; they were also more likely to explicitly judge subject-complement syntax as biased. Our findings suggest that this syntax perpetuates stereotypes only when people are oblivious to, or unmotivated to interrogate, its implications.
Pub Date: 2021-08-23 | DOI: 10.1080/13546783.2021.1957710
Not out of MY bank account! Science messaging when climate change policies carry personal financial costs
J. Swim, Nathaniel Geiger, Joe Guerriero
Thinking & Reasoning, pp. 346–374
Abstract: We suggest that policies will be less popular when individuals personally have to pay for them rather than when others have to pay (i.e., a Not Out of My Bank Account or NOMBA effect). Dual process models of persuasion suggest that personally having to pay would motivate scrutiny of persuasive messages, making it essential to use effective science communication tactics when using climate science to support climate change policies. A pilot experiment (N = 186) and main study (N = 758) support a NOMBA effect, with less policy support (Pilot study) and lower recommended fees (Main study) for a policy that would require participants, rather than another group, to pay a fee for community solar panels. Consistent with dual process models and suggestive of systematic processing, messages using strong (vs. weak) science communication tactics increased support for policies (Pilot study) and increased the favorability of thoughts about the policy (Main study) only when participants would have to pay the fee, and these thoughts subsequently predicted policy support (Main study). Inconsistent with propositions that information about expert sources would act as a heuristic or bolster science messages, expert consensus information did not influence thoughts or policy support in any study condition. Efforts to understand climate change policy support would benefit from attending to research on dual process models of persuasion, including understanding how different types and degrees of outcome relevance can alter how people process science information used to bolster support for climate change policies.
Pub Date: 2021-08-09 | DOI: 10.1080/13546783.2021.1961859
“If only” counterfactual thoughts about cooperative and uncooperative decisions in social dilemmas
S. Pighin, R. Byrne, K. Tentori
Thinking & Reasoning, pp. 193–225
Abstract: We examined how people think about how things could have turned out differently after they made a decision to cooperate or not in three social interactions: the Prisoner’s dilemma (Experiment 1), the Stag Hunt dilemma (Experiment 2), and the Chicken game (Experiment 3). We found that participants who took part in the game imagined the outcome would have been different if a different decision had been made by the other player, not themselves; they did so whether the outcome was good or bad for them, whether their own choice had been to cooperate or not, and whether the other player’s choice had been to cooperate or not. Participants who only read about a fictional protagonist’s game imagined changes outside the protagonist’s control (such as the other player’s decision) after a good outcome but within the protagonist’s control (such as the protagonist’s decision) after a bad outcome. The implications for theories of counterfactual thinking and moral decision-making are discussed.