After an aggressive interaction, perpetrators most want to offer apologies when they have unintentionally harmed another person, and victims most want to receive an apology when another person intentionally harmed them. Perpetrators and victims also explain aggressive behaviors differently: perpetrators often explain their own aggressive behaviors by referring to beliefs they considered that led to their behaviors (i.e., “belief” explanations), whereas victims explain perpetrators’ behaviors by referring to background factors that do not mention the perpetrators’ mental deliberations (i.e., “causal history” explanations). Putting these ideas together, the current Registered Report had participants recall either a time when they intentionally harmed another person or a time when they were intentionally harmed by another person. Participants then rated several characteristics of the recalled behavior, explained why the behavior occurred, and reported their desire for an apology. As predicted, we found that perpetrators who gave “belief” explanations wanted to offer an apology much less than perpetrators who gave “causal history” explanations. However, and inconsistent with our predictions, victims’ desire to receive an apology was similar regardless of how they explained the perpetrators’ behaviors. These findings underscore how perpetrators’ explanations can emphasize (or de-emphasize) the deliberateness of their harmful behaviors and how these explanations relate to their desire to make amends.
“Perpetrators’ and Victims’ Folk Explanations of Aggressive Behaviors and Desires for Apologies” — Randy J. McCarthy, Jared P. Wilson. Collabra: Psychology, 2023. https://doi.org/10.1525/collabra.84918
When a test of attention, such as the d2 test, is repeated, performance improves. These practice benefits threaten the validity of a test because it becomes impossible to separate the respective contributions of ability and practice to a particular result. A possible solution to this dilemma would be to determine the sources of practice effects and to use this knowledge to construct tests that are less prone to them. The present study investigates the contribution of three components of a d2-like test of attention to practice benefits: targets, distractors, and stimulus configurations. In Experiment 1, we compared practice effects in a target-change condition, where targets changed between sessions, to a target-repetition condition. Similarly, in Experiment 2, we compared practice effects in a distractor-change condition to a distractor-repetition condition. Finally, in Experiment 3, we compared practice effects in a position-repetition condition, where stimulus configurations were repeated within and between tests, to a position-change condition. Results showed that repeating targets and repeating distractors contribute to practice effects, whereas repeating stimulus configurations does not. Hence, in order to reduce practice effects, one might construct tests in which target learning is prevented, for example, by using multiple targets.
“Disentangling the Contributions of Repeating Targets, Distractors, and Stimulus Positions to Practice Benefits in D2-Like Tests of Attention” — Peter Wühr, B. Wühr. Collabra: Psychology, 2023. https://doi.org/10.1525/collabra.71297
We introduce the Social Media Sexist Content (SMSC) database, an open-access online stimulus set consisting of 382 social media content items and 221 comments related to the content. The content items include 90 sexist posts and 292 neutral posts. The comment items include 75 sexist comments along with 238 neutral comments. The database covers a broad range of topics, including lifestyle, memes, and school posts. All posts were anonymized after being retrieved from publicly available sources. All content and comments were rated across two domains: degree of sexism and emotional reaction to the post. In terms of sexism, the posts were rated along three dimensions of gender bias: Hostile Sexism, Benevolent Sexism, and Objectification. Participants also provided their emotional reactions to the posts in terms of feeling Ashamed, Insecure, and/or Angry. Data were collected online in two separate studies: one rating the content and the other rating the comments. The sexism and emotion ratings were highly reliable and showed that the posts displayed either sexist or neutral content. The SMSC database is beneficial to researchers because it offers updated social media content for research use online and in the lab. The database affords researchers the ability to explore stimuli either by content or by ratings, and it is free to use for research purposes. The SMSC is available for download from hannahbuie.com.
“The Social Media Sexist Content (SMSC) Database: A Database of Content and Comments for Research Use” — Hannah S. Buie, A. Croft. Collabra: Psychology, 2023. https://doi.org/10.1525/collabra.71341
Tests of generalizability can diversify psychological science and improve theories and measurement. To this end, we conducted five studies testing the cognitive vulnerability to depression hypothesis featured in the hopelessness theory of depression: Study 1 was conducted with Honduran young adults (n = 50); Study 2 with Nepali adults (n = 34); Study 3 with Western hemisphere adults (n = 104); Study 4 with Black U.S. adults (n = 119); and Study 5 with U.S. undergraduates (n = 110). Results showed that cognitive vulnerability could be measured reliably in diverse populations and that the distribution of vulnerability scores was similar across all samples. However, the tendency to generate negative inferences about stress had different implications for depression depending on the sample; the association between cognitive vulnerability and depressive symptoms did not generalize to Honduran and Nepali participants. It is now necessary to understand why a negative cognitive style confers risk for depression in some contexts but not others (e.g., whether this reflects issues of measurement, theory, or both). The results also suggest that understanding and reducing the global burden of depression will require more than simply “translating” existing cognitive measures and theories to other countries.
“What Diverse Samples Can Teach Us About Cognitive Vulnerability to Depression” — G. Haeffel, Hugh H. Burke, Marissa Vander Missen, Lily M. Brouder. Collabra: Psychology, 2023. https://doi.org/10.1525/collabra.71346
Research has shown that in developed environmental cultures, people typically have positive attitudes towards sustainability and pro-environmental behaviour. This has been measured both explicitly, through surveys and interviews, and implicitly, through indirect measures. However, this phenomenon has not yet been extensively studied in emerging environmental cultures, such as Russia. In this study, we adapted two indirect measures, the Affect Misattribution Procedure and the Affective Priming Procedure, to examine whether people in Russia have a positive pro-environmental attitude and whether there is a relationship between this implicitly measured attitude and explicit environmental concern. To ensure reproducibility, we preregistered and conducted two similar studies, with a total sample size of 394. Our results showed that both measures converge and successfully detect the existence of a positive implicit attitude towards sustainability and pro-environmental behaviour, but there does not appear to be a relationship with environmental concern.
“Does Going Green Feel Good in Russia: Implicit Measurements With Visual Stimuli” — D. Valko. Collabra: Psychology, 2023. https://doi.org/10.1525/collabra.73637
Moral dumbfounding occurs when people defend a moral judgment without providing reasons in support of this judgment. The phenomenon has been influential in moral psychology; despite this influence, however, it remains poorly understood. Based on the notion that cognitive load enhances biases and shortcomings in human judgment when elaboration is beneficial, we hypothesized that under cognitive load, people would be less likely to provide reasons for a judgment and more likely to be dumbfounded (or to change their judgment). In a pre-registered study (N = 1686), we tested this prediction. Our findings suggest that cognitive load reduces reason-giving and increases dumbfounding (but does not lead to changes in judgments). Our results provide new insights into the phenomenon of moral dumbfounding while also advancing theory in moral psychology.
“Cognitive Load Can Reduce Reason-Giving in a Moral Dumbfounding Task” — Cillian McHugh, M. McGann, E. Igou, E. Kinsella. Collabra: Psychology, 2023. https://doi.org/10.1525/collabra.73818
Social norms can frame how typical and appropriate the choices available to individuals are, making some more difficult and others easier to make. Despite the important role of both descriptive and injunctive norms for intervention, few measures are available that distinguish these types of perceptions. Fewer still are tailored for settings where development challenges are present and behaviorally informed interventions are implemented. To address gaps in measuring social norms that impact women’s employment in India, this study was conducted with 399 adolescents aged 14-17 years to develop the Strength of Social Gender Norms (SSGN) scale. Exploratory factor analysis demonstrated a good two-factor structure. Psychometric analyses satisfied tests for internal consistency, differentiated the scale from attitudes, and found moderate test-retest reliability. Using this scale, we found that girls perceived more positive social norms overall but, relative to boys, held more negative perceptions of what others in their communities think about women working (i.e., injunctive norms). Our results confirm the ability of the SSGN scale to distinguish different aspects of social norms among low-income Indian adolescents, a population that is neglected in psychology research at large. Future research should aim to replicate these results in additional hard-to-reach samples and investigate how these norms relate to women’s actual longer-term employment outcomes.
“An Improved Measure for the Strength of Social Gender Norms (SSGN) Developed for Adolescents in Uttar Pradesh, India” — Krittika Gorur, B. Cislaghi, Patrick S. Forscher. Collabra: Psychology, 2023. https://doi.org/10.1525/collabra.75220
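The internal-consistency test reported for the SSGN scale is typically quantified with Cronbach's alpha. A minimal sketch of the computation, using hypothetical item scores rather than the SSGN data:

```python
# Cronbach's alpha: the ratio of between-respondent (total-score) variance
# to summed item variances, rescaled by k/(k-1). Illustrative only; the
# scores below are hypothetical, not from the SSGN study.

def cronbach_alpha(items):
    """items: list of per-item score lists, all of equal length
    (one score per respondent). Returns Cronbach's alpha."""
    k = len(items)        # number of items
    n = len(items[0])     # number of respondents

    def variance(xs):     # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(variance(item) for item in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical 3-item Likert scale (1-5), 5 respondents:
scores = [
    [4, 5, 3, 2, 4],
    [4, 4, 3, 2, 5],
    [5, 5, 2, 1, 4],
]
print(round(cronbach_alpha(scores), 3))  # prints 0.922
```

Items that covary strongly across respondents inflate the total-score variance relative to the item variances, pushing alpha toward 1.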
We aimed to identify effect sizes of age discrimination in recruitment based on evidence from correspondence studies and scenario experiments conducted between 2010 and 2019. To differentiate our results, we separated outcomes (i.e., callback rates and hiring/invitation-to-interview likelihood) by age group (40-49, 50-59, 60-65, 66+) and assessed age discrimination by comparing older applicants to a control group (29- to 35-year-olds). We conducted searches in PsycInfo, Web of Science, ERIC, BASE, and Google Scholar, along with backward reference searching. Study bias was assessed with a tool developed for this review, and publication bias by calculating R-index, p-curve, and funnel plots. We calculated odds ratios for callback rates, pooled the results using a random-effects meta-analysis, and calculated 95% confidence intervals. We included 13 studies from 11 articles in our review and conducted meta-analyses on the eight studies from which we were able to extract data. The majority of studies were correspondence studies (k = 10) and came largely from European countries (k = 9), with the rest being from the U.S. (k = 3) and Australia (k = 1). Seven studies had a between-participants design, and the remaining six had a within-participants design. We conducted six random-effects meta-analyses, one for each age category and type of study design, and found an average effect of age discrimination against all age groups in both study designs, with varying effect sizes (ranging from OR = 0.38, CI [0.25, 0.59] to OR = 0.89, CI [0.81, 0.97]). There was moderate to high risk of bias on certain factors (e.g., age randomization, problems with application heterogeneity). Overall, there is an effect of age discrimination, and it tends to increase with age. This has important implications for the future of the world’s workforce, given the growth of the older workforce and later retirement.
“Ageism in Hiring: A Systematic Review and Meta-analysis of Age Discrimination” — Lucija Batinovic, Marlon Howe, Samantha Sinclair, Rickard Carlsson. Collabra: Psychology, 2023. https://doi.org/10.1525/collabra.82194
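Random-effects pooling of odds ratios, as used in the meta-analysis above, is commonly done on the log-OR scale with the DerSimonian-Laird estimator of between-study variance. A sketch under that assumption (the review does not state its estimator, and the study values below are hypothetical, not the review's extracted data):

```python
import math

# DerSimonian-Laird random-effects meta-analysis on log odds ratios.
# Hypothetical inputs: per-study log-ORs and their sampling variances.

def pool_random_effects(effects, variances):
    """Returns (pooled log-OR, 95% CI lower, 95% CI upper)."""
    w = [1 / v for v in variances]                      # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)       # between-study variance
    w_re = [1 / (v + tau2) for v in variances]          # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return mu, mu - 1.96 * se, mu + 1.96 * se

# Three hypothetical studies with ORs of 0.5, 0.7, and 0.9:
log_or = [math.log(0.5), math.log(0.7), math.log(0.9)]
var = [0.04, 0.02, 0.03]
mu, lo, hi = pool_random_effects(log_or, var)
print(f"pooled OR = {math.exp(mu):.2f}, 95% CI [{math.exp(lo):.2f}, {math.exp(hi):.2f}]")
```

Exponentiating the pooled log-OR and its confidence bounds yields the pooled OR and CI on the original scale, the form in which the review reports its results (e.g., OR = 0.38, CI [0.25, 0.59]).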
Impression formation effects – such as the halo effect – and learning effects – such as evaluative or attribute conditioning effects – are often seen as separate classes of phenomena. In a recent conceptual paper, De Houwer et al. (2019) suggested that both may actually qualify as instances of feature transformation, where a source feature (e.g., attractiveness of a face; valence of an unconditioned stimulus; US) influences judgements about a target feature (e.g., social competence of a person; valence of a conditioned stimulus; CS). In halo effects, the source and target features typically differ (e.g., a person with an attractive face is judged as more socially competent) but belong to the same object. In evaluative conditioning, source and target features are the same (e.g., a neutral CS is judged as more positive after being paired with a positive US) but belong to different objects. In this paper, we highlight a phenomenon at the crossroads of the two previous effects: feature transformation where source and target features are different (as in halo studies) and belong to different objects that are paired together (as in evaluative conditioning studies). Across six pre-registered experiments (n = 1050), we obtained evidence for this phenomenon in the context of person perception (i.e., attractiveness halo) and food perception (i.e., health halo). We also show that this type of feature transformation is influenced by several known moderators of halo and conditioning effects (beliefs about the relationship between traits, memory of the pairings, and salience of the source feature).
“From Halo to Conditioning and Back Again: Exploring the Links Between Impression Formation and Learning” — M. Rougier, J. de Houwer, J. Richetin, Sean Hughes, M. Perugini. Collabra: Psychology, 2023. https://doi.org/10.1525/collabra.84560
"Machine learning mega-analysis applied to the Response Time Concealed Information Test: No evidence for advantage of model-based predictors over baseline"
Gáspár Lukács, D. Steyrl
Collabra: Psychology, 2022. DOI: https://doi.org/10.31234/osf.io/mfjx8

The response time Concealed Information Test (RT-CIT) can help to reveal whether a person is concealing knowledge of a certain detail. During the RT-CIT, the examinee is repeatedly presented with a probe, the detail in question (e.g., the murder weapon), and several irrelevants, other details that are similar to the probe (e.g., other weapons). These items all require the same keypress response, while one further item, the target, requires a different keypress response. Examinees tend to respond to the probe more slowly than to irrelevants when they recognize the former as the relevant detail. To classify examinees as having or not having recognized the probe, RT-CIT studies have almost always used the averaged difference between probe and irrelevant RTs as the single predictor variable. In the present study, we tested whether we could improve classification accuracy (recognized the probe: yes or no) by incorporating the average RTs, the accuracy rates, and the SDs of each item type (probe, irrelevant, and target). Using the data from 1,871 individual tests and incorporating various combinations of the additional variables, we built logistic regression, linear discriminant analysis, and extra trees machine learning models (26 altogether), and we compared the classification accuracy of each of the model-based predictors to that of the sole probe-irrelevant RT difference predictor as baseline. None of the models provided a significant improvement over the baseline; nominal gains in classification accuracy ranged between -1.5% and 3.1%. In each of the models, machine learning captured the probe-irrelevant RT difference as the most important contributor to successful predictions, or, when included separately, the probe RT and the irrelevant RT as the first and second most important contributors, respectively.
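The baseline predictor described in the abstract — the averaged probe-irrelevant RT difference — can be sketched as follows. This is a minimal illustration, not the authors' analysis code; the RT values and the decision threshold are hypothetical, chosen only to show the shape of the computation.

```python
# Hedged sketch of the baseline RT-CIT predictor: mean probe RT minus
# mean irrelevant RT, with a simple (hypothetical) threshold classifier.

def probe_irrelevant_diff(probe_rts, irrelevant_rts):
    """Averaged probe-irrelevant RT difference, in milliseconds."""
    return (sum(probe_rts) / len(probe_rts)
            - sum(irrelevant_rts) / len(irrelevant_rts))

def classify(diff_ms, threshold_ms=30.0):
    """Label the examinee as having recognized the probe if the
    difference exceeds the threshold (threshold is illustrative only)."""
    return diff_ms > threshold_ms

# Hypothetical response times (ms) for one examinee.
probe = [520, 540, 510, 530]
irrelevant = [470, 480, 465, 475]

diff = probe_irrelevant_diff(probe, irrelevant)
print(diff, classify(diff))  # 52.5 True
```

The study's model-based predictors extended this by feeding additional per-item-type features (mean RTs, accuracy rates, SDs) into logistic regression, linear discriminant analysis, and extra trees models, but, as reported, none significantly outperformed this single-difference baseline.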