(Why) Do Big Five Personality Traits Moderate Evaluative Conditioning? The Role of US Extremity and Pairing Memory
Moritz Ingendahl, Tobias Vogel. Collabra: Psychology (2023). https://doi.org/10.1525/collabra.74812

Evaluative conditioning (EC), the change in liking towards a stimulus due to its co-occurrence with another stimulus, is a key effect in social and cognitive psychology. Despite its prominence, research on personality differences in EC has been scarce. Initial studies found stronger EC among individuals high in Neuroticism and Agreeableness. However, it remains unclear how robust these moderations are and why they occur. In a high-powered preregistered EC experiment with a heterogeneous sample (N = 511), we found a robust moderation by Agreeableness. Individuals high in Agreeableness also showed more extreme evaluations of the unconditioned stimuli (USs) and more accurate memory for the stimulus pairings, which in combination accounted for the moderation by Agreeableness. The moderation by Neuroticism was considerably weaker and depended on the type of analysis, but was independent of US evaluations and pairing memory. Extraversion, Conscientiousness, and Openness did not moderate EC. Our findings imply that Agreeableness-based personality differences in EC reflect differences in the affective and cognitive processes presumed in current propositional and memory-based EC theories. Furthermore, they offer important insights into the Big Five and interindividual differences in stimulus evaluation, memory, and learning.
Measuring CHAOS? Evaluating the short-form Confusion, Hubbub And Order Scale.
Sally A Larsen, Kathryn Asbury, William L Coventry, Sara A Hart, Callie W Little, Stephen A Petrill. Collabra: Psychology (2023; epub 2023-06-15). https://doi.org/10.1525/collabra.77837. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10961925/pdf/

The Confusion, Hubbub and Order Scale (CHAOS) short form is a survey tool intended to capture information about home environments. It is widely used in studies of child and adolescent development and psychopathology, particularly twin studies. The original long form of the scale comprised 15 items and was validated in a sample of infants in the 1980s. The short form of the scale was developed in the late 1990s and contains six items, including four from the original scale and two new items. This short form has not been validated and is the focus of this study. We use five samples drawn from twin studies in Australia, the UK, and the USA, and examine measurement invariance of the CHAOS short form. We first compare alternate confirmatory factor models for each group; we next test between-group configural, metric, and scalar invariance; finally, we examine predictive validity of the scale under different conditions. We find evidence that a two-factor configuration of the six items is more appropriate than the commonly used one-factor model. Second, we find measurement non-invariance across groups at the metric invariance step, with items performing differently depending on the sample. We also find inconsistent results in tests of predictive validity using family-level socioeconomic status and academic achievement as criterion variables. The results caution against the continued use of the short-form CHAOS in its current form and recommend future revisions and development of the scale for use in developmental research.
The Development of Machiavellianism, Psychopathy, and Narcissism in Young Adulthood
Christian Wolff, Eunike Wetzel. Collabra: Psychology (2023). https://doi.org/10.1525/collabra.77870

The development of the personality traits Machiavellianism, psychopathy, and narcissism is poorly understood. We theorize that the well-documented maturity principle applies to these traits. Decreasing levels of Machiavellianism, psychopathy, and the antagonistic dimension “narcissistic rivalry” could be interpreted as reflecting maturation. The self-enhancing “narcissistic admiration” trait might remain unchanged. A sample of N = 926 German university students aged 18 to 30 (74% female) participated in a longitudinal study with 4 waves of measurement over 2 years, completing short and full-length measurement instruments. The preregistered analyses included latent growth curve models based on item factor analysis with partial measurement invariance. We accounted for the possibilities of contextual effects and nonlinear development and controlled the false discovery rate. All four traits showed very high rank-order stability (rs ranged from .74 to .81). In line with the maturity principle, mean levels of Machiavellianism and psychopathy decreased linearly (ds were −0.18 and −0.12). Moreover, model comparisons revealed systematic heterogeneity in Machiavellianism’s linear growth curve, indicating that young adults differ from each other in the direction or steepness of their developmental paths. We also assessed self-esteem and life satisfaction. Linear changes in Machiavellianism were inversely related to linear changes in life satisfaction (r = −.39), making the mean-level decrease in Machiavellianism appear adaptive. While findings concerning narcissism were inconclusive, this study provides incremental evidence that the maturity principle might apply to Machiavellianism and, potentially, to psychopathy.
A Metatheoretical Review of Cognitive Load Lie Detection
D. A. Neequaye. Collabra: Psychology (2023). https://doi.org/10.1525/collabra.87497

This article examines the idea that cognitive load interventions can expose lies—because lying is more demanding than truth-telling. I discuss the limitations of that hypothesis by reviewing seven of its justifications. For example, liars must suppress the truth while lying, and this handicap makes lying challenging such that one can exploit the challenge to expose lies. The theoretical fitness of each justification is variable and unknown. Those ambiguities prevent analysts from ascertaining the verisimilitude of the hypothesis. I propose research questions whose answers could assist in specifying the justifications and making cognitive load lie detection amenable to severe testing.
Cyberloafing: Investigating the Importance and Implications of New and Known Predictors
Casey Giordano, Brittany Mercado. Collabra: Psychology (2023). https://doi.org/10.1525/collabra.57391

Cyberloafing occurs when employees use technology to loaf instead of work. Despite mounting organizational concern and psychological research on cyberloafing, research provides little actionable guidance to address cyberloafing. Therefore, the present study builds on previous cyberloafing investigations in three primary ways. First, we utilize a person-situation framework to compare personological and situational construct domains. Second, we extend the cyberloafing nomological network by investigating previously unexamined, yet powerful, predictors. Third, we employ a multivariate approach to identify the most important cyberloafing antecedents. Of seven cyberloafing constructs, we found that boredom, logical reasoning, and interpersonal conflict were the most important correlates. Our results highlight novel, important predictors of cyberloafing and allow us to provide empirically based recommendations for developing cyberloafing interventions.
“Less Is Better” in Separate Evaluations Versus “More Is Better” in Joint Evaluations: Mostly Successful Close Replication and Extension of Hsee (1998)
Andrew J. Vonasch, W. Hung, Wai Yee Leung, Anna Thao Bich Nguyen, Stephanie Chan, Boley Cheng, G. Feldman. Collabra: Psychology (2023). https://doi.org/10.1525/collabra.77859

We conducted a preregistered close replication and extension of Studies 1, 2, and 4 in Hsee (1998). Hsee found that when evaluating choices jointly, people compare and judge the option higher on desirable attributes as better (“more is better”). However, when people evaluate options separately, they rely on contextual cues and reference points, sometimes resulting in evaluating the option with less as being better (“less is better”). We found support for “less is better” across all studies (N = 403; Study 1 original d = 0.70 [0.24, 1.15], replication d = 0.99 [0.72, 1.26]; Study 2 original d = 0.74 [0.12, 1.35], replication d = 0.32 [0.07, 0.56]; Study 4 original d = 0.97 [0.43, 1.50], replication d = 0.76 [0.50, 1.02]), with weaker support for “more is better” (Study 2 original d = 0.92 [0.42, 1.40], replication dz = 0.33 [0.23, 0.43]; Study 4 original d = 0.37 [0.02, 0.72], replication dz = 0.09 [−0.05, 0.23]). Some results of our exploratory extensions were surprising, leading to open questions. We discuss remaining implications and directions for theory and measurement relating to economic rationality and the evaluability hypothesis. Materials/data/code: https://osf.io/9uwns/
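The replication above reports standardized mean differences (Cohen's d) with 95% confidence intervals. As a minimal sketch of how such values are typically computed from two independent groups, the hypothetical helper below uses the pooled standard deviation and the large-sample standard-error approximation for d; it is an illustration, not the replication's actual analysis code.

```python
import math

def cohens_d_ci(m1, m2, sd1, sd2, n1, n2, z=1.96):
    """Cohen's d for two independent groups with an approximate 95% CI.

    Uses the pooled SD and the standard large-sample approximation
    for the standard error of d (a common textbook formula).
    """
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Approximate standard error of d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)
```

For example, two groups of 50 whose means differ by one pooled SD yield d = 1.0 with a CI of roughly [0.58, 1.42], comparable in width to the intervals reported above.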
Is “Neutral” Really Neutral? Mid-point Ratings in the Affective Norms English Words (ANEW) May Mask Ambivalence
Farid Anvari, Jacqueline Bachmann, J. Sanchez-Burks, I. Schneider. Collabra: Psychology (2023). https://doi.org/10.1525/collabra.82204

The Affective Norms for English Words (ANEW) is a stimulus set that provides researchers with English language words that have been pre-rated on bipolar scales for valence, dominance, and arousal. Researchers rely on these pre-ratings to ensure that the words they select accurately reflect the affective responses these words elicit. Each word has a valence rating reflecting the degree to which people experience the word as positive or negative, with midpoint ratings on this scale presumably reflecting neutrality. However, neutral words tend to vary substantially in arousal, suggesting that not all neutral words are the same. Some researchers account for this by using the bipolar valence ratings in conjunction with the arousal ratings, selecting low-arousal neutral words when neutrality is what they seek. We argue that the varying levels of arousal in neutral words are due to varying levels of ambivalence. However, the idea that midpoint valence ratings for ANEW stimuli may hide varying levels of ambivalence has not yet been examined. This article provides evidence that words in the ANEW that appear neutral actually vary markedly in the levels of ambivalence they elicit and that this is related to their levels of arousal. These findings are relevant for research, past and present, that uses the ANEW because ambivalence has different psychological consequences than neutrality, and therefore complicates the ability to draw clear inferences and maintain experimental control.
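The ANEW abstract above turns on the distinction between neutrality (low positive and low negative reactions) and ambivalence (strong positive and negative reactions that cancel out on a bipolar scale). When positivity and negativity are rated separately, a widely used way to quantify this is the Griffin similarity-intensity index, ambivalence = (P + N)/2 − |P − N|. The sketch below is illustrative and is not taken from the article's materials.

```python
def griffin_ambivalence(pos, neg):
    """Griffin similarity-intensity ambivalence index: (P + N)/2 - |P - N|.

    High when positive and negative reactions are both strong and similar
    (e.g., P = N = 5), low or negative when one reaction dominates.
    """
    return (pos + neg) / 2 - abs(pos - neg)
```

A word rated P = 5, N = 5 scores 5.0 (highly ambivalent), a word rated P = 5, N = 1 scores −1.0 (univalent), and a word rated P = 1, N = 1 scores 1.0 (truly neutral); on a bipolar valence scale, the first and third would both land at the midpoint.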
Perpetrators’ and Victims’ Folk Explanations of Aggressive Behaviors and Desires for Apologies
Randy J. McCarthy, Jared P Wilson. Collabra: Psychology (2023). https://doi.org/10.1525/collabra.84918

After an aggressive interaction, perpetrators most want to offer apologies when they have unintentionally harmed another person, and victims most want to receive an apology when another person intentionally harmed them. Perpetrators and victims also explain aggressive behaviors differently—perpetrators often explain their own aggressive behaviors by referring to beliefs they considered that led to their behaviors (i.e., “belief” explanations), whereas victims explain perpetrators’ behaviors by referring to background factors that do not mention the perpetrators’ mental deliberations (i.e., “causal history” explanations). Putting these ideas together, the current Registered Report had participants recall either a time they intentionally harmed another person or a time when they were intentionally harmed by another person. Participants then rated several characteristics of the recalled behavior, explained why the behavior occurred, and reported their desire for an apology. As predicted, we found that perpetrators who gave “belief” explanations wanted to give an apology much less than participants who gave “causal history” explanations. However, and inconsistent with our predictions, victims’ desire to receive an apology was similar regardless of how they explained the perpetrators’ behaviors. These findings underscore how perpetrators’ explanations can emphasize (or de-emphasize) the deliberateness of their harmful behaviors and how these explanations are related to their desire to make amends.
Expressive Responding in Support of Donald Trump: An Extended Replication of Schaffner and Luks (2018)
R. M. Ross, Neil Levy. Collabra: Psychology (2023). https://doi.org/10.1525/collabra.68054

There is considerable debate about whether survey respondents regularly engage in “expressive responding”—professing to believe something that they do not sincerely believe in order to show support for their in-group or hostility to an out-group. Nonetheless, there is widespread agreement that one study provides compelling evidence for a consequential level of expressive responding in a particular context. In the immediate aftermath of Donald Trump’s 2017 presidential inauguration rally, there was considerable controversy about whether this inauguration crowd was the largest ever. At this time, a study was conducted which found that Donald Trump voters were more likely than Hillary Clinton voters or non-voters to indicate that an unlabeled photo of Donald Trump’s 2017 presidential inauguration rally showed more people than an unlabeled photo of Barack Obama’s 2009 presidential inauguration rally, despite the latter photo clearly showing more people. However, this study was not pre-registered, suggesting that a replication is needed to establish the robustness of this important result. In the present study, we conducted an extended replication over two years after Donald Trump’s presidential inauguration rally. We found that despite this delay the original result replicated, albeit with a smaller magnitude. In addition, we extended the earlier study by testing several hypotheses about the characteristics of Republicans who selected the incorrect photo.
Prespecification of Structure for the Optimization of Data Collection and Analysis
M. Vowels. Collabra: Psychology (2023). https://doi.org/10.1525/collabra.71300

Data collection and research methodology represent a critical part of the research pipeline. On the one hand, it is important that we collect data in a way that maximises the validity of what we are measuring, which may involve the use of long scales with many items. On the other hand, collecting a large number of items across multiple scales results in participant fatigue and expensive, time-consuming data collection. It is therefore important that we use the available resources optimally. In this work, we consider how the representation of a theory as a causal/structural model can help us to streamline data collection and analysis procedures by not wasting time collecting data for variables which are not causally critical for answering the research question. This not only saves time and enables us to redirect resources to other variables which are more important, but also increases research transparency and the reliability of theory testing. To achieve this, we leverage structural models and the Markov conditional independency structures implicit in these models to identify the substructures which are critical for a particular research question. To demonstrate the benefits of this streamlining we review the relevant concepts and present a number of didactic examples, including a real-world example.
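The core idea in the abstract above—that the Markov conditional independencies implied by a structural model tell you which variables are informationally redundant—can be illustrated with a toy simulation (not from the article itself). In an assumed chain X → M → Y, the model implies Y is independent of X given M, so once M is measured, collecting X adds nothing for predicting Y; the simulation checks this by showing X's partial regression coefficient shrinks to near zero when M is in the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical structural model: X -> M -> Y (M fully mediates X's effect)
x = rng.normal(size=n)
m = 0.8 * x + rng.normal(size=n)
y = 0.7 * m + rng.normal(size=n)

# Regress Y on both X and M (no intercept needed: all variables are zero-mean).
# The implied conditional independence Y _||_ X | M means X's partial
# coefficient should be ~0, while M's recovers the true 0.7.
design = np.column_stack([x, m])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
b_x, b_m = coef
```

Under this (assumed) model, a researcher whose question concerns Y could drop X from the survey entirely once M is measured—exactly the kind of streamlining the article argues for.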