Validation of the Persian Version of the ICD-11 Compatible Personality Inventory for DSM-5-Brief Form Plus, Modified
Pub Date: 2024-11-05 | DOI: 10.1080/00223891.2024.2420872
Saeid Komasi, Andre Kerber, Christopher J Hopwood
Clinical assessment increasingly emphasizes six maladaptive domains of the DSM-5 and ICD-11 trait models: negative affectivity, detachment, antagonism/dissociality, disinhibition, psychoticism, and anankastia. The present study aimed to validate the Persian version of the ICD-11 compatible Personality Inventory for DSM-5-Brief Form Plus, Modified (PID5BF + M). Data from a mixed sample of 1,615 adults (community N = 1,476 and outpatient N = 139) were used to assess the latent structure, congruence coefficients, reliability, convergent validity, and criterion validity of the PID5BF + M. The results supported the six-factor structure of the PID5BF + M, whose traits are largely congruent with those from previous studies. The scale reliabilities were acceptable, and strong associations were observed with personality disorder-type symptom counts (r ranging from .15 to .59, all p < .001). PID5BF + M scales also distinguished clinical and non-clinical samples. The present results support the validity and utility of the PID5BF + M for personality psychopathology screening in the Iranian population.
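The congruence coefficients mentioned here are conventionally Tucker's coefficient computed between corresponding factor-loading vectors from two solutions. A minimal sketch follows; the function is standard, but the loading values and sample labels are invented for illustration and are not taken from the study.

```python
import numpy as np

def tucker_congruence(loadings_a: np.ndarray, loadings_b: np.ndarray) -> float:
    """Tucker's congruence coefficient between two factor-loading vectors.

    Values around .85-.94 are usually read as fair similarity and >= .95 as
    factor equivalence (Lorenzo-Seva & ten Berge, 2006).
    """
    num = np.sum(loadings_a * loadings_b)
    denom = np.sqrt(np.sum(loadings_a ** 2) * np.sum(loadings_b ** 2))
    return float(num / denom)

# Illustrative loadings for one domain in two samples; values are made up.
persian_sample = np.array([0.71, 0.65, 0.80, 0.58])
reference_sample = np.array([0.68, 0.70, 0.77, 0.61])
print(round(tucker_congruence(persian_sample, reference_sample), 3))
```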
{"title":"Validation of the Persian Version of the ICD-11 Compatible Personality Inventory for DSM-5- Brief Form Plus, Modified.","authors":"Saeid Komasi, Andre Kerber, Christopher J Hopwood","doi":"10.1080/00223891.2024.2420872","DOIUrl":"https://doi.org/10.1080/00223891.2024.2420872","url":null,"abstract":"<p><p>Clinical assessment increasingly emphasizes six maladaptive domains of the DSM-5 and ICD-11 trait models, including negative affectivity, detachment, antagonism/dissociality, disinhibition, psychoticism, and anankastia. The present study aimed to validate the Persian version of the ICD-11 compatible Personality Inventory for DSM-5-Brief Form Plus, Modified (PID5BF + M). Data from a mixed sample including 1,615 adults (community <i>N</i> = 1,476 and outpatient <i>N</i> = 139) were used to assess the latent structure, congruence coefficients, reliability, convergent validity, and criterion validity of the PID5BF + M. The results supported the six-factor structure of the PID5BF + M whose traits are largely congruent with those from previous studies. The scale reliabilities were acceptable, and strong associations were observed with personality disorder-type symptom counts (<i>r</i> ranging from .15 to .59, all <i>p</i> < .001). PID5BF + M scales also distinguished clinical and non-clinical samples. The present results support the validity and utility of the PID5BF + M for personality psychopathology screening in the Iranian population.</p>","PeriodicalId":16707,"journal":{"name":"Journal of personality assessment","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142583392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can Forced-Choice Response Format Reduce Faking of Socially Aversive Personality Traits?
Pub Date: 2024-11-01 | Epub Date: 2024-03-19 | DOI: 10.1080/00223891.2024.2326893
Amanda L Y Valone, Adam W Meade
Self-report assessments are the standard for personality measurement, but motivated respondents are able to manipulate or fake their responses to typical Likert-scale self-report measures. Although progress has been made in research seeking to reduce faking, most of it has focused on normative personality traits such as those measured by the five-factor model. The measurement of socially aversive personality (e.g., the Dark Triad) is less well researched. The negative aspects of socially aversive traits increase the opportunity and motivation of respondents to fake typical single-stimulus self-report assessments, underscoring the need for faking-resistant response formats. A possible way to reduce faking that has been explored in basic personality research is the use of the forced-choice response format. This study applied this method to socially aversive traits and illustrated best practices to create new multidimensional forced-choice and single-stimulus measures of socially aversive personality traits. Results indicated that participants were able to artificially alter their scores when asked to respond like an ideal job applicant, and counter to expectations, the forced-choice format did not decrease faking. Our results indicate that even when best practices are followed, the forced-choice format is not a panacea for respondent faking.
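For context, a multidimensional forced-choice block presents statements keyed to different traits and asks the respondent to rank them (or pick "most" and "least like me") instead of rating each statement on a Likert scale. The sketch below shows the simplest classical way to turn such rankings into trait scores; the statements, trait labels, and response are invented, and contemporary applications generally prefer model-based (Thurstonian IRT) scoring over the ipsative rank scoring shown here.

```python
from collections import defaultdict

# One multidimensional forced-choice block: statements keyed to different traits.
# The statements, trait labels, and the respondent's ranking are invented.
blocks = [
    [("I enjoy outwitting others.", "machiavellianism"),
     ("I like being the center of attention.", "narcissism"),
     ("I rarely feel guilty after hurting someone.", "psychopathy")],
]
rankings = [[2, 1, 3]]  # rank given to each item in the block (1 = "most like me")

def ipsative_scores(blocks, rankings):
    """Classical rank scoring: each item adds (block size - rank + 1) points to its
    trait. Scores are ipsative (constant sum per block), one reason model-based
    scoring is usually preferred in current research."""
    scores = defaultdict(int)
    for block, ranks in zip(blocks, rankings):
        n = len(block)
        for (statement, trait), rank in zip(block, ranks):
            scores[trait] += n - rank + 1
    return dict(scores)

print(ipsative_scores(blocks, rankings))
```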
{"title":"Can Forced-Choice Response Format Reduce Faking of Socially Aversive Personality Traits?","authors":"Amanda L Y Valone, Adam W Meade","doi":"10.1080/00223891.2024.2326893","DOIUrl":"10.1080/00223891.2024.2326893","url":null,"abstract":"<p><p>Self-report assessments are the standard for personality measurement, but motivated respondents are able to manipulate or fake their responses to typical Likert scale self-report. Although progress has been made in research seeking to reduce faking, most of it has focused on normative personality traits such as those measured by the five factor model. The measurement of socially aversive personality (e.g., the Dark Triad) is less well-researched. The negative aspects of socially aversive traits increase the opportunity and motivation of respondents to fake typical single-stimulus self-report assessments underscoring the need for faking resistant response formats. A possible way to reduce faking that has been explored in basic personality research is the use of the forced-choice response format. This study applied this method to socially aversive traits and illustrated best practices to create new multidimensional forced-choice and single-stimulus measures of socially aversive personality traits. Results indicated that participants were able to artificially alter their scores when asked to respond like an ideal job applicant, and counter to expectations, the forced-choice format did not decrease faking. Our results indicate that even when best practices are followed, forced-choice format is not a panacea for respondent faking.</p>","PeriodicalId":16707,"journal":{"name":"Journal of personality assessment","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140158294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Establishing Levels of Personality Functioning Using the Structured Interview of Personality Organization (STIPO-R): A Latent Profile Analysis
Pub Date: 2024-11-01 | Epub Date: 2024-04-02 | DOI: 10.1080/00223891.2024.2330502
Marko Biberdzic, Julia F Sowislo, Nicole Cain, Kevin B Meehan, Emanuele Preti, Rossella Di Pierro, Eve Caligor, John F Clarkin
Both the new ICD-11 and the latest Alternative DSM-5 Model for Personality Disorders focus on self and interpersonal functioning as the central feature of personality pathology, also acknowledging that personality disorders are organized along a dimensional continuum of severity. This revised understanding is in line with long-standing psychodynamic conceptualizations of personality pathology, in particular Kernberg's object relations model of personality organization. Despite existing evidence for the clinical utility of the derived Structured Interview of Personality Organization (STIPO-R), empirical support for the identification of clear cut-points between the different levels of personality functioning is missing. For this purpose, a total of 764 adult participants were recruited across two clinical (outpatient and inpatient) settings (n = 250) and two non-clinical (university students and general community) samples (n = 514). Results from the mixture modeling suggested five groups across the clinical and non-clinical samples: healthy personality functioning, maladaptive personality rigidity, and mild, moderate, and severe levels of personality pathology. All five indicators of personality organization were found to be reliable predictors of personality pathology. Of the five STIPO-R indicators, Aggression and Moral Values had the most discriminative power for differentiating between the Mild, Moderate, and Severe personality disorder groups. Implications of these findings are discussed.
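The mixture modeling described above is a latent profile analysis over the STIPO-R indicators. Below is a minimal sketch of that general workflow, using a Gaussian mixture with BIC-based selection of the number of profiles; the data, indicator labels, and model settings are placeholders, and the published analysis may have used dedicated LPA software with different constraints.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder data: rows = participants, columns = five STIPO-R indicator scores
# (two of which are Aggression and Moral Values); values are invented.
rng = np.random.default_rng(0)
X = rng.normal(size=(764, 5))

# Fit mixtures with 1-7 profiles and keep the solution with the lowest BIC.
fits = {k: GaussianMixture(n_components=k, covariance_type="diag",
                           n_init=10, random_state=0).fit(X)
        for k in range(1, 8)}
best_k, best_model = min(fits.items(), key=lambda kv: kv[1].bic(X))

profile_membership = best_model.predict(X)  # most likely profile per participant
profile_means = best_model.means_           # indicator means defining each profile
print(best_k, np.bincount(profile_membership))
```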
{"title":"Establishing Levels of Personality Functioning Using the Structured Interview of Personality Organization (STIPO-R): A Latent Profile Analysis.","authors":"Marko Biberdzic, Julia F Sowislo, Nicole Cain, Kevin B Meehan, Emanuele Preti, Rossella Di Pierro, Eve Caligor, John F Clarkin","doi":"10.1080/00223891.2024.2330502","DOIUrl":"10.1080/00223891.2024.2330502","url":null,"abstract":"<p><p>Both the new ICD-11 and the latest Alternative DSM-5 Model for Personality Disorders focus on self and interpersonal functioning as the central feature of personality pathology, also acknowledging that personality disorders are organized along a dimensional continuum of severity. This revised understanding is in line with long-standing psychodynamic conceptualisations of personality pathology, in particular Kernberg's object relations model of personality organization. Despite existing evidence for the clinical utility of the derived Structured Interview of Personality Organization (STIPO-R), empirical support for the identification of clear cut-points between the different levels of personality functioning is missing. For this purpose, a total of 764 adult participants were recruited across two clinical (outpatient and inpatient) settings (<i>n =</i> 250) and two non-clinical (university students and general community) samples (<i>n =</i> 514). Results from the mixture modeling suggested the existence of five groups across the clinical and non-clinical samples that covered: healthy personality functioning, maladaptive personality rigidity, and mild, moderate, and severe levels of personality pathology. All five indicators of personality organization were found to be reliable predictors of personality pathology. Of the five STIPO-R indicators, Aggression and Moral Values had the most discriminative power for differentiating between the Mild, Moderate, and Severe personality disorder groups. Implications of these findings are discussed.</p>","PeriodicalId":16707,"journal":{"name":"Journal of personality assessment","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140336056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Broader Issues in Test Translation and Validation: A Commentary Inspired by Macina et al. (2023)
Pub Date: 2024-11-01 | Epub Date: 2024-07-10 | DOI: 10.1080/00223891.2024.2375213
Dominick Gamache, Philippe Leclerc, Alexandre Côté, David Théberge, Claudia Savard
Macina et al. (2023) recently reported mixed results on the German translation of the Self and Interpersonal Functioning Scale (SIFS). By focusing on suboptimal indices of structural validity, they recommended choosing other available instruments over the SIFS in future research on personality impairment. Reflecting on Macina et al.'s overall conclusions inspired us to consider broader issues in the field of personality impairment assessment. In this commentary, we discuss some issues regarding test translation and validity raised by Macina et al.'s article. We advise against assuming equivalence between original and translated versions of a test and discuss some caveats regarding comparison between different instruments based on structural validity. We also call into question whether the latter should be the litmus test for judging the quality of a measure. Finally, we discuss how the proliferation of personality impairment measures can benefit the broader field. Notably, this would allow moving toward a "what works for whom" approach that considers the match between psychometric property, desired use of the instrument, and characteristics of the target population.
{"title":"Broader Issues in Test Translation and Validation: A Commentary Inspired by Macina et al. (2023).","authors":"Dominick Gamache, Philippe Leclerc, Alexandre Côté, David Théberge, Claudia Savard","doi":"10.1080/00223891.2024.2375213","DOIUrl":"10.1080/00223891.2024.2375213","url":null,"abstract":"<p><p>Macina et al. (2023) recently reported mixed results on the German translation of the Self and Interpersonal Functioning Scale (SIFS). By focusing on suboptimal indices of structural validity, they recommended choosing other available instruments over the SIFS in future research on personality impairment. Reflecting on Macina et al.'s overall conclusions inspired us to consider broader issues in the field of personality impairment assessment. In this commentary, we discuss some issues regarding test translation and validity raised by Macina et al.'s article. We advise against assuming equivalence between original and translated versions of a test and discuss some caveats regarding comparison between different instruments based on structural validity. We also call into question whether the latter should be the litmus test for judging the quality of a measure. Finally, we discuss how the proliferation of personality impairment measures can benefit the broader field. Notably, this would allow moving toward a \"what works for whom\" approach that considers the match between psychometric property, desired use of the instrument, and characteristics of the target population.</p>","PeriodicalId":16707,"journal":{"name":"Journal of personality assessment","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141580039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correction
Pub Date: 2024-11-01 | Epub Date: 2024-05-31 | DOI: 10.1080/00223891.2024.2355832
{"title":"Correction.","authors":"","doi":"10.1080/00223891.2024.2355832","DOIUrl":"10.1080/00223891.2024.2355832","url":null,"abstract":"","PeriodicalId":16707,"journal":{"name":"Journal of personality assessment","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141180015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unmasking Verbal Defensiveness: The Role of Psychological Threat in Sentence Completion Tests
Pub Date: 2024-11-01 | Epub Date: 2024-03-15 | DOI: 10.1080/00223891.2024.2326941
William B Ridgway, James J Picano, Charles A Morgan, Robert R Roland, Yaron G Rabinowitz
Shedding light on the validity of sentence completion test (SCT) verbal defensiveness as an index of defensive behavior, the current two-part study examined the relationship between psychological threat and verbal defensiveness among military security and mission-critical team candidates using SCTs. Our study showed that as the threatening nature of SCT stems increased, defensive responses also increased, substantiating the link between psychological threat and defensive behavior. In addition, expert ratings of stem content revealed moderately strong relationships with defensive responses across two different SCTs, irrespective of their structural characteristics. In contrast to previous studies using total verbal defensiveness scores, we examined specific defensive response types and their associations with stem threat ratings, finding that omissions, denial, and comments about the test were linked to stem threat levels. Lastly, our study extends the application of the SCT verbal defensiveness index beyond specialized personnel selection, finding no significant differences in verbal defensiveness based on gender or military status. Overall, these findings contribute to a comprehensive understanding of defensive behavior and its contextual variations.
{"title":"Unmasking Verbal Defensiveness: The Role of Psychological Threat in Sentence Completion Tests.","authors":"William B Ridgway, James J Picano, Charles A Morgan, Robert R Roland, Yaron G Rabinowitz","doi":"10.1080/00223891.2024.2326941","DOIUrl":"10.1080/00223891.2024.2326941","url":null,"abstract":"<p><p>Shedding light on the validity of sentence completion test (SCT) verbal defensiveness as an index of defensive behavior, the current two-part study examined the relationship between psychological threat and verbal defensiveness among military security and mission-critical team candidates using SCTs. Our study showed that as the threatening nature of SCT stems increased, defensive responses also increased, substantiating the link between psychological threat and defensive behavior. In addition, expert ratings of stem content revealed moderately strong relationships with defensive responses across two different SCTs, irrespective of their structural characteristics. In contrast to previous studies using total verbal defensiveness scores, we examined specific defensive response types and their associations with stem threat ratings, finding that omissions, denial, and comments about the test were linked to stem threat levels. Lastly, our study extends the application of the SCT verbal defensiveness index beyond specialized personnel selection, finding no significant differences in verbal defensiveness based on gender or military status. Overall, these findings contribute to a comprehensive understanding of defensive behavior and its contextual variations.</p>","PeriodicalId":16707,"journal":{"name":"Journal of personality assessment","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140136888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compilation and Validation of Two Short Forms to Measure HEXACO Dimensions in Elementary School Children
Pub Date: 2024-11-01 | Epub Date: 2024-03-21 | DOI: 10.1080/00223891.2024.2318352
Elisa Altgassen, Luc Zimny, Jessika Golle, Katharina Allgaier, Ingo Zettler, Oliver Wilhelm
Personality trait measures for children are rarely based on the HEXACO Model of Personality, although research using this framework could provide important insights into the structure and development of children's personalities. There is no HEXACO measure for elementary school children to date, and existing measures for older children seem inappropriate for this age group (e.g., due to some item content). We thus compiled two HEXACO-based short forms for measuring personality in elementary school children (8-10 years old) via parent reports. We applied a meta-heuristic item sampling algorithm (Ant Colony Optimization) in a training sample with 1,641 parent reports of 122 administered items. We selected a 54-Item Short Form comprising a latent facet structure and an 18-Item Ultra-Short Form comprising a correlated factors model for all six HEXACO dimensions but no facet structure. Both short forms showed good model fit in a holdout sample (n = 411) and sufficiently high re-test correlations after six months. Convergent and divergent validities for maximal performance measures and socio-emotional constructs (also measured six months after the initial personality assessment) were largely in line with theoretical assumptions. Overall, our study provides support for construct, re-test, and (predictive) criterion validity for the selected short forms.
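Ant Colony Optimization item selection works by repeatedly sampling candidate item subsets with probabilities proportional to accumulated "pheromone", scoring each subset against a psychometric objective, and reinforcing the items of the best subsets. The sketch below is a deliberately simplified illustration with invented data and Cronbach's alpha as a stand-in objective; studies using this approach for short-form construction typically optimize model-based criteria such as CFA fit of the intended factor structure rather than a single reliability index.

```python
import numpy as np

rng = np.random.default_rng(1)
responses = rng.normal(size=(1641, 122))  # placeholder item responses

def cronbach_alpha(data):
    k = data.shape[1]
    item_var = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def aco_select(data, n_items=18, n_ants=30, n_iter=50, evaporation=0.9):
    """Pheromone-guided subset search: ants sample item subsets with probability
    proportional to pheromone, and the best subset per iteration reinforces its items."""
    n_total = data.shape[1]
    pheromone = np.ones(n_total)
    best_subset, best_score = None, -np.inf
    for _ in range(n_iter):
        iter_best, iter_score = None, -np.inf
        for _ in range(n_ants):
            p = pheromone / pheromone.sum()
            subset = rng.choice(n_total, size=n_items, replace=False, p=p)
            score = cronbach_alpha(data[:, subset])
            if score > iter_score:
                iter_best, iter_score = subset, score
        pheromone *= evaporation                     # pheromone evaporation
        pheromone[iter_best] += max(iter_score, 0.0)  # reinforce this iteration's best subset
        if iter_score > best_score:
            best_subset, best_score = iter_best, iter_score
    return np.sort(best_subset), best_score

items, alpha = aco_select(responses)
print(items, round(float(alpha), 3))
```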
{"title":"Compilation and Validation of Two Short Forms to Measure HEXACO Dimensions in Elementary School Children.","authors":"Elisa Altgassen, Luc Zimny, Jessika Golle, Katharina Allgaier, Ingo Zettler, Oliver Wilhelm","doi":"10.1080/00223891.2024.2318352","DOIUrl":"10.1080/00223891.2024.2318352","url":null,"abstract":"<p><p>Personality trait measures for children are rarely based on the HEXACO Model of Personality, although research using this framework could provide important insights into the structure and development of children's personalities. There is no HEXACO measure for elementary school children to date, and existing measures for older children seem inappropriate for this age group (e.g., due to some item content). We thus compiled two HEXACO-based short forms for measuring personality in elementary school children (8-10 years old) via parent reports. We applied a meta-heuristic item sampling algorithm (Ant Colony Optimization) in a training sample with 1,641 parent reports of 122 administered items. We selected a 54-Item Short Form comprising a latent facet structure and an 18-Item Ultra-Short Form comprising a correlated factors model for all six HEXACO dimensions but no facet structure. Both short forms showed good model fit in a holdout sample (n = 411) and sufficiently high re-test correlations after six months. Convergent and divergent validities for maximal performance measures and socio-emotional constructs (also measured six months after the initial personality assessment) were largely in line with theoretical assumptions. Overall, our study provides support for construct, re-test, and (predictive) criterion validity for the selected short forms.</p>","PeriodicalId":16707,"journal":{"name":"Journal of personality assessment","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140184738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating the Psychometric Properties of the German Self and Interpersonal Functioning Scale (SIFS)
Pub Date: 2024-11-01 | Epub Date: 2023-11-02 | DOI: 10.1080/00223891.2023.2268199
Caroline Macina, André Kerber, Johannes Zimmermann, Ludwig Ohse, Leonie Kampe, Jil Mohr, Marc Walter, Susanne Hörz-Sagstetter, Johannes Sebastian Wrege
The Self and Interpersonal Functioning Scale (SIFS) is a 24-item self-report questionnaire assessing personality functioning according to the alternative DSM-5 model for personality disorders. We evaluated the German SIFS version in a total sample of 886 participants from Germany and Switzerland. Its factor structure was investigated with confirmatory factor analysis comparing bifactor models with two specific factors (self- and interpersonal functioning) and four specific factors (identity, self-direction, empathy, and intimacy). The SIFS sum and domain scores were tested for reliability and for convergent validity with self-report questionnaires and interviews assessing personality functioning, personality organization, personality traits, personality disorder categories, and well-being. None of the bifactor models yielded good model fit, even after excluding two items with low factor loadings and including a method factor for reverse-keyed items. Based on a shortened 22-item SIFS version, models suggested that the g-factor explained 52.9-59.6% of the common variance and that the SIFS sum score measured the g-factor with a reliability of .68-.81. Even though the SIFS sum score showed high test-retest reliability and correlated strongly with well-established self-report questionnaires and interviews, the lack of structural validity appears to be a serious disadvantage of the SIFS compared to existing self-report questionnaires of personality functioning.
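In standard bifactor terminology, the two quantities reported here correspond to explained common variance (ECV, the share of common variance attributable to the general factor) and omega hierarchical (the reliability with which the sum score reflects the general factor). Their usual definitions, written for general-factor loadings λ_gi, specific-factor loadings λ_si grouped by specific factor s, and item residual variances θ_i, are:

```latex
\mathrm{ECV} = \frac{\sum_i \lambda_{gi}^{2}}
                    {\sum_i \lambda_{gi}^{2} + \sum_s \sum_{i \in s} \lambda_{si}^{2}},
\qquad
\omega_h = \frac{\bigl(\sum_i \lambda_{gi}\bigr)^{2}}
                {\bigl(\sum_i \lambda_{gi}\bigr)^{2}
                 + \sum_s \bigl(\sum_{i \in s} \lambda_{si}\bigr)^{2}
                 + \sum_i \theta_i}
```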
{"title":"Evaluating the Psychometric Properties of the German Self and Interpersonal Functioning Scale (SIFS).","authors":"Caroline Macina, André Kerber, Johannes Zimmermann, Ludwig Ohse, Leonie Kampe, Jil Mohr, Marc Walter, Susanne Hörz-Sagstetter, Johannes Sebastian Wrege","doi":"10.1080/00223891.2023.2268199","DOIUrl":"10.1080/00223891.2023.2268199","url":null,"abstract":"<p><p>The Self and Interpersonal Functioning Scale (SIFS) is a 24-item self-report questionnaire assessing personality functioning according to the alternative DSM-5 model for personality disorders. We evaluated the German SIFS version in a total sample of 886 participants from Germany and Switzerland. Its factor structure was investigated with confirmatory factor analysis comparing bifactor models with two specific factors (self- and interpersonal functioning) and four specific factors (identity, self-direction, empathy, and intimacy). The SIFS sum and domain scores were tested for reliability and convergent validity with self-report questionnaires and interviews for personality functioning, -organization, -traits, -disorder categories, and well-being. None of the bifactor models yielded good model fit, even after excluding two items with low factor loadings and including a method factor for reverse-keyed items. Based on a shortened 22-item SIFS version, models suggested that the g-factor explained 52.9-59.6% of the common variance and that the SIFS sum score measured the g-factor with a reliability of .68-.81. Even though the SIFS sum score showed large test-retest reliability and correlated strongly with well-established self-report questionnaires and interviews, the lack of structural validity appears to be a serious disadvantage of the SIFS compared to existing self-reports questionnaires of personality functioning.</p>","PeriodicalId":16707,"journal":{"name":"Journal of personality assessment","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71424423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving the Objective Measurement of Alexithymia Using a Computer-Scored Alexithymia Provoked Response Questionnaire with an Online Sample
Pub Date: 2024-11-01 | Epub Date: 2024-02-29 | DOI: 10.1080/00223891.2024.2320417
Emma McEnaney, Christian Ryan
The study and measurement of alexithymia, a trait marked by difficulty identifying and describing feelings, can be improved by incorporating objective measures to supplement self-report scales. The Alexithymia Provoked Response Questionnaire (APRQ) is an observer-rated alexithymia tool that shows promise yet can be time-consuming to administer. The present study aimed to assess the feasibility of computer administration and scoring of the APRQ. Further, the APRQ's association with verbal IQ and emotional vocabulary use was examined, as was the relationship between the APRQ and the self-report Bermond-Vorst Alexithymia Questionnaire-B (BVAQ-B). Adult participants (n = 366), a proportion of whom were recruited through purposive sampling, completed an online study. Inter-rater reliability measures indicated that computerized scoring of the APRQ is as reliable as human scoring, making the measure scalable for use with large samples. Alexithymia levels were independent of two measures of verbal IQ. Correlational analyses indicated overlap in alexithymia as measured by the APRQ and most of the subscales of the BVAQ-B. The APRQ, as an objective measure, may capture deficits in emotional awareness independent of self-insight.
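One way to quantify agreement between computerized and human scoring is a chance-corrected statistic such as Cohen's kappa. Whether the authors used kappa, an intraclass correlation, or another index is not stated in the abstract, and the item scores below are invented for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical dichotomous item scores (1 = alexithymic response, 0 = not)
# assigned by a human rater and by the automated scoring routine.
human =    [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
computer = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(human, computer)  # chance-corrected agreement
print(f"Cohen's kappa = {kappa:.2f}")
```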
{"title":"Improving the Objective Measurement of Alexithymia Using a Computer-Scored Alexithymia Provoked Response Questionnaire with an Online Sample.","authors":"Emma McEnaney, Christian Ryan","doi":"10.1080/00223891.2024.2320417","DOIUrl":"10.1080/00223891.2024.2320417","url":null,"abstract":"<p><p>The study and measurement of alexithymia - a trait marked by difficulty identifying and describing feelings - can be improved by incorporating objective measures to supplement self-report scales. The Alexithymia Provoked Response Questionnaire (APRQ) is an observer-rated alexithymia tool that shows promise yet can be time-consuming to administer. The present study aimed to assess the feasibility of computer administration and scoring of the APRQ. Further, the APRQ's association with verbal IQ and emotional vocabulary use was examined, as was the relationship between the APRQ and the self-report Bermond-Vorst Alexithymia Questionnaire-B (BVAQ-B). Adult participants (<i>n</i> = 366), including a proportion gathered through purposive sampling, participated in an online study. Inter-rater reliability measures indicated that computerized scoring of the APRQ is as reliable as human scoring, making the measure scalable for use with large samples. Alexithymia levels were independent of two measures of verbal IQ. Correlational analyses indicated overlap in alexithymia as measured by the APRQ and most of the subscales of the BVAQ-B. The APRQ, as an objective measure, may capture deficits in emotional awareness independent of self-insight.</p>","PeriodicalId":16707,"journal":{"name":"Journal of personality assessment","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139998805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability Generalization of the Triarchic Psychopathy Measure
Pub Date: 2024-11-01 | Epub Date: 2024-03-26 | DOI: 10.1080/00223891.2024.2321956
Brianna N Davis, Rebekah Brown Spivey, Sarah Hernandez, Hadley McCartin, Tia Tourville, Laura E Drislane
The extent to which psychopathy can be reliably assessed via self-report has been debated. One step in informing this debate is examining the internal consistency of self-report psychopathy measures, such as the Triarchic Psychopathy Measure (TriPM; Patrick, 2010). Reliability generalization (RG) studies apply a meta-analytic approach to examine the internal consistency of an instrument in a more robust manner by aggregating internal consistency statistics reported across the published literature. This study conducted an RG analysis to yield the average Cronbach's alpha among published studies (k = 219) that administered the TriPM. Meta-analytic alphas were high for TriPM Total (α = .88), Boldness (α = .81), Meanness (α = .87), and Disinhibition (α = .85). Moderator analyses indicated internal consistency differed minimally as a function of study characteristics such as gender, age, or the nature of the sample (i.e., forensic or community). Subsequent RG analyses were performed for McDonald's omega (k = 40), which yielded comparable internal consistency estimates. The results of this study provide strong evidence that the TriPM measures coherent, internally consistent constructs and thus could be a viable, cost-effective mechanism for measuring psychopathy across a broad range of samples.
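As a rough illustration of the pooling step in an RG analysis, the sketch below averages study-level alphas after the cube-root transformation commonly applied to alpha (Hakstian & Whalen, 1976), weighting by sample size for simplicity. All study values are invented; formal RG analyses, including one pooling k = 219 coefficients, typically use the transformation's sampling variance for inverse-variance weights under a random-effects model.

```python
import numpy as np

# Hypothetical per-study Cronbach's alphas for one TriPM scale and study sample sizes.
alphas = np.array([0.84, 0.87, 0.81, 0.88, 0.79])
ns     = np.array([250, 410, 180, 620, 95])

# Hakstian-Whalen transformation often used in RG work: T = (1 - alpha)^(1/3).
t = (1.0 - alphas) ** (1 / 3)

# Sample-size weights for simplicity; back-transform the pooled value to the alpha metric.
t_pooled = np.average(t, weights=ns)
alpha_pooled = 1.0 - t_pooled ** 3
print(round(float(alpha_pooled), 3))
```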
{"title":"Reliability Generalization of the Triarchic Psychopathy Measure.","authors":"Brianna N Davis, Rebekah Brown Spivey, Sarah Hernandez, Hadley McCartin, Tia Tourville, Laura E Drislane","doi":"10.1080/00223891.2024.2321956","DOIUrl":"10.1080/00223891.2024.2321956","url":null,"abstract":"<p><p>The extent to which psychopathy can be reliably assessed via self-report has been debated. One step in informing this debate is examining the internal consistency of self-report psychopathy measures, such as the Triarchic Psychopathy Measure (TriPM; Patrick, 2010). Reliability generalization (RG) studies apply a meta-analytic approach to examine the internal consistency of an instrument in a more robust manner by aggregating internal consistency statistics reported across the published literature. This study conducted an RG analysis to yield the average Cronbach's alpha among published studies (<i>k</i> = 219) that administered the TriPM. Meta-analytic alphas were high for TriPM Total (α = .88) Boldness (α = .81), Meanness (α = .87), and Disinhibition (α = .85). Moderator analyses indicated internal consistency differed minimally as a function of study characteristics, like gender, age, or the nature of the sample (i.e., forensic or community). Subsequent RG analyses were performed for McDonald's omega (<i>k</i> = 40), which yielded comparable internal consistency estimates. The results of this study provide strong evidence that the TriPM measures coherent, internally consistent constructs and thus could be a viable, cost-effective mechanism for measuring psychopathy across a broad range of samples.</p>","PeriodicalId":16707,"journal":{"name":"Journal of personality assessment","volume":null,"pages":null},"PeriodicalIF":2.8,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140293821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}