Pub Date: 2022-05-01 | DOI: 10.1017/s1930297500003570
Title: Susceptibility to misinformation is consistent across question framings and response modes and better explained by myside bias and partisanship than analytical thinking
Authors: J. Roozenbeek, R. Maertens, Stefan M. Herzog, Michael Geers, R. Kurvers, Mubashir Sultan, S. van der Linden
Journal: Judgment and Decision Making

Abstract: Misinformation presents a significant societal problem. To measure individuals’ susceptibility to misinformation and study its predictors, researchers have used a broad variety of ad-hoc item sets, scales, question framings, and response modes. Because of this variety, it remains unknown whether results from different studies can be compared (e.g., in meta-analyses). In this preregistered study (US sample; N = 2,622), we compare five commonly used question framings (eliciting perceived headline accuracy, manipulativeness, reliability, trustworthiness, and whether a headline is real or fake) and three response modes (binary, 6-point and 7-point scales), using the psychometrically validated Misinformation Susceptibility Test (MIST). We test 1) whether different question framings and response modes yield similar responses for the same item set, 2) whether people’s confidence in their primary judgments is affected by question framings and response modes, and 3) which key psychological factors (myside bias, political partisanship, cognitive reflection, and numeracy skills) best predict misinformation susceptibility across assessment methods. Different response modes and question framings yield similar (but not identical) responses for both primary ratings and confidence judgments. We also find a similar nomological net across conditions, suggesting cross-study comparability. Finally, myside bias and political conservatism were strongly positively correlated with misinformation susceptibility, whereas numeracy skills and especially cognitive reflection were less important (although we note potential ceiling effects for numeracy). We thus find more support for an “integrative” account than a “classical reasoning” account of misinformation belief.
Pub Date: 2022-05-01 | DOI: 10.1017/s1930297500003582
Title: Maximize when valuable: The domain specificity of maximizing decision-making style
Authors: Minfan Zhu, J. Wang, Xiaofei Xie

Abstract: The maximizing decision-making style describes the style of one who pursues maximum utility in decision-making, in contrast to the satisficing style, which describes the style of one who is satisfied with good enough options. The current research concentrates on the within-person variation in the maximizing decision-making style and provides an explanation through three studies. Study 1 (N = 530) developed a domain-specific maximizing scale and found that individuals had different maximizing tendencies across different domains. Studies 2 (N = 162) and 3 (N = 106) further explored this mechanism from the perspective of subjective task value through questionnaires and experiments. It was found that the within-person variation of maximization in different domains is driven by the difference in the individuals’ subjective task value in the corresponding domains. People tend to maximize more in the domains they value more. Our research contributes to a comprehensive understanding of maximization and provides a new perspective for the study of the maximizing decision-making style.
Pub Date: 2022-05-01 | DOI: 10.1017/s1930297500003594
Title: Combining white box models, black box machines and human interventions for interpretable decision strategies
Authors: Gregory Gadzinski, Alessio Castello

Abstract: Granting a short-term loan is a critical decision. A great deal of research has concerned the prediction of credit default, notably through Machine Learning (ML) algorithms. However, given that their black-box nature has sometimes led to unwanted outcomes, comprehensibility in ML-guided decision-making strategies has become more important. In many domains, transparency and accountability are no longer optional. In this article, instead of pitting white-box models against black-box models, we use a multi-step procedure that combines the Fast and Frugal Tree (FFT) methodology of Martignon et al. (2005) and Phillips et al. (2017) with the extraction of post-hoc explainable information from ensemble ML models. New interpretable models are then built thanks to the inclusion of explainable ML outputs chosen by human intervention. Our methodology significantly improves the accuracy of the FFT predictions while preserving their explainable nature. We apply our approach to a dataset of short-term loans granted to borrowers in the UK, and show how complex machine learning can challenge simpler machines and help decision makers.
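The fast-and-frugal tree (FFT) methodology the authors build on can be sketched in a few lines. The following is a minimal illustration of the general FFT idea from Martignon et al. (2005), not the paper's actual model: the cue names, thresholds, and exit structure are invented for a hypothetical loan-default setting.

```python
# Minimal sketch of a fast-and-frugal tree (FFT): cues are checked in a fixed
# order, and at every node except the last, exactly one branch exits
# immediately with a decision. All cues and thresholds here are hypothetical.

def fft_predict(applicant, tree):
    """Walk the tree. Each node is (cue, threshold, decision_if_above,
    decision_if_below); a branch of None means 'continue to the next cue'.
    The final node must decide on both branches."""
    for cue, threshold, if_above, if_below in tree:
        branch = if_above if applicant[cue] > threshold else if_below
        if branch is not None:
            return branch
    raise ValueError("tree must end with a node where both branches decide")

# Illustrative tree for short-term loan decisions (invented numbers):
loan_fft = [
    ("missed_payments", 2,     "default", None),       # many missed payments: exit "default"
    ("income",          30000, None,      "default"),  # low income: exit "default"
    ("loan_amount",     5000,  "default", "repay"),    # final cue: both branches exit
]

print(fft_predict({"missed_payments": 0, "income": 45000, "loan_amount": 3000},
                  loan_fft))  # -> repay
```

The appeal of this structure, and the reason the authors preserve it, is that the whole decision strategy is readable at a glance; their contribution is to let ensemble-ML explainability outputs inform which cues and thresholds enter such a tree.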
Pub Date: 2022-03-01 | DOI: 10.1017/s1930297500009104
Title: “When in Rome”: Identifying social norms using coordination games
Authors: Erin L. Krupka, Roberto A. Weber, Rachel T. A. Crosno, H. Hoover

Abstract: Previous research in economics, social psychology, and sociology has produced compelling evidence that social norms influence behavior. In this paper we apply the Krupka and Weber (2013) norm elicitation procedure and present U.S. and non-U.S. born subjects with two scenarios for which tipping and punctuality norms are known to vary across countries. We elicit shared beliefs by having subjects match appropriateness ratings of different actions (such as arriving late or on time) to another randomly selected participant from the same university or to a participant who is born in the same country. We also elicit personal beliefs without the matching task. We test whether the responses from the coordination task can be interpreted as social norms by comparing responses from the coordination game with actual social norms (as identified using independent materials such as tipping guides for travelers). We compare responses elicited with the matching tasks to those elicited without the matching task to test whether the coordination device itself is essential for identifying social norms. We find that appropriateness ratings for different actions vary with the reference group in the matching task. Further, the ratings obtained from the matching task vary in a manner consistent with the actual social norms of that reference group. Thus, we find that shared beliefs correspond more closely to externally validated social norms compared to personal beliefs. Second, we highlight the importance that reference groups (for the coordination task) can play.
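The incentive structure of a Krupka-Weber style matching task can be sketched in miniature. This is a hedged illustration of the general coordination device, not the authors' materials: the bonus amount and the rating scale are invented, and real implementations pay out against the modal response of a specified reference group.

```python
# Sketch of a coordination-game payoff rule: each participant rates the
# appropriateness of an action, and earns a bonus only if their rating
# matches the modal rating of their reference group. The bonus value and
# the 1-4 rating scale below are hypothetical.
from collections import Counter

def coordination_payoffs(ratings, bonus=1.0):
    """ratings: appropriateness ratings from one reference group.
    Returns each participant's payoff: bonus if their rating is modal,
    zero otherwise (ties count every tied rating as modal)."""
    counts = Counter(ratings)
    top = max(counts.values())
    modal = {rating for rating, count in counts.items() if count == top}
    return [bonus if rating in modal else 0.0 for rating in ratings]

# Four participants rate "arriving 10 minutes late" on a 1-4 scale:
print(coordination_payoffs([2, 2, 3, 2]))  # -> [1.0, 1.0, 0.0, 1.0]
```

Because payment depends on matching others rather than on reporting one's own view, the task elicits shared beliefs about appropriateness; this is what lets the authors separate shared beliefs from the personal beliefs elicited without the matching incentive.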
Pub Date: 2022-03-01 | DOI: 10.1017/s1930297500009165
Title: Pseudocontingencies: Flexible contingency inferences from base rates
Authors: Tobias Vogel, Moritz Ingendahl, Linda McCaughey

Abstract: Humans are evidently able to learn contingencies from the co-occurrence of cues and outcomes. But how do humans judge contingencies when observations of cue and outcome are learned on different occasions? The pseudocontingency framework proposes that humans rely on base-rate correlations across contexts, that is, whether outcome base rates increase or decrease with cue base rates. Here, we elaborate on an alternative mechanism for pseudocontingencies that exploits base rate information within contexts. In two experiments, cue and outcome base rates varied across four contexts, but the correlation by base rates was kept constant at zero. In some contexts, cue and outcome base rates were aligned (e.g., cue and outcome base rates were both high). In other contexts, cue and outcome base rates were misaligned (e.g., cue base rate was high, but outcome base rate was low). Judged contingencies were more positive for contexts in which cue and outcome base rates were aligned than in contexts in which cue and outcome base rates were misaligned. Our findings indicate that people use the alignment of base rates to infer contingencies conditional on the context. As such, they lend support to the pseudocontingency framework, which predicts that decision makers rely on base rates to approximate contingencies. However, they challenge previous conceptions of pseudocontingencies as a uniform inference from correlated base rates. Instead, they suggest that people possess a repertoire of multiple contingency inferences that differ with regard to informational requirements and areas of applicability.
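The within-context alignment mechanism can be made concrete with a toy computation. This is only an illustrative sketch of the inference rule the abstract describes, with invented base rates: a positive contingency is inferred when cue and outcome base rates fall on the same side of 50% within a context, and a negative one when they fall on opposite sides.

```python
# Toy illustration (all base rates invented) of within-context base-rate
# alignment: cue and outcome base rates on the same side of the midpoint
# suggest a positive contingency; opposite sides suggest a negative one.

def alignment_sign(cue_rate, outcome_rate, midpoint=0.5):
    """Return +1 if both base rates are above or both below the midpoint,
    -1 if they are misaligned, and 0 if either sits exactly at the midpoint."""
    product = (cue_rate - midpoint) * (outcome_rate - midpoint)
    return (product > 0) - (product < 0)

# Four hypothetical contexts: (cue base rate, outcome base rate)
contexts = {
    "A": (0.8, 0.7),  # both high  -> aligned,   predicted positive judgment
    "B": (0.2, 0.3),  # both low   -> aligned,   predicted positive judgment
    "C": (0.8, 0.3),  # high / low -> misaligned, predicted negative judgment
    "D": (0.2, 0.7),  # low / high -> misaligned, predicted negative judgment
}

for name, (cue, outcome) in contexts.items():
    print(name, alignment_sign(cue, outcome))
```

Note that across the four contexts above the cue and outcome base rates are uncorrelated in the aggregate, which mirrors the design of the experiments: any systematic difference in judged contingency between contexts A/B and C/D must come from within-context alignment, not from a base-rate correlation across contexts.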