Pub Date: 2024-11-01 | Epub Date: 2024-07-10 | DOI: 10.1080/00223891.2024.2375213
Dominick Gamache, Philippe Leclerc, Alexandre Côté, David Théberge, Claudia Savard
Macina et al. (2023) recently reported mixed results on the German translation of the Self and Interpersonal Functioning Scale (SIFS). By focusing on suboptimal indices of structural validity, they recommended choosing other available instruments over the SIFS in future research on personality impairment. Reflecting on Macina et al.'s overall conclusions inspired us to consider broader issues in the field of personality impairment assessment. In this commentary, we discuss some issues regarding test translation and validity raised by Macina et al.'s article. We advise against assuming equivalence between original and translated versions of a test and discuss some caveats regarding comparison between different instruments based on structural validity. We also call into question whether the latter should be the litmus test for judging the quality of a measure. Finally, we discuss how the proliferation of personality impairment measures can benefit the broader field. Notably, this would allow moving toward a "what works for whom" approach that considers the match between psychometric property, desired use of the instrument, and characteristics of the target population.
Title: Broader Issues in Test Translation and Validation: A Commentary Inspired by Macina et al. (2023). Journal of Personality Assessment, pp. 724-726.
Pub Date: 2024-11-01 | Epub Date: 2024-03-15 | DOI: 10.1080/00223891.2024.2326941
William B Ridgway, James J Picano, Charles A Morgan, Robert R Roland, Yaron G Rabinowitz
Shedding light on the validity of sentence completion test (SCT) verbal defensiveness as an index of defensive behavior, the current two-part study examined the relationship between psychological threat and verbal defensiveness among military security and mission-critical team candidates using SCTs. Our study showed that as the threatening nature of SCT stems increased, defensive responses also increased, substantiating the link between psychological threat and defensive behavior. In addition, expert ratings of stem content revealed moderately strong relationships with defensive responses across two different SCTs, irrespective of their structural characteristics. In contrast to previous studies using total verbal defensiveness scores, we examined specific defensive response types and their associations with stem threat ratings, finding that omissions, denial, and comments about the test were linked to stem threat levels. Lastly, our study extends the application of the SCT verbal defensiveness index beyond specialized personnel selection, finding no significant differences in verbal defensiveness based on gender or military status. Overall, these findings contribute to a comprehensive understanding of defensive behavior and its contextual variations.
Title: Unmasking Verbal Defensiveness: The Role of Psychological Threat in Sentence Completion Tests. Journal of Personality Assessment, pp. 810-818.
Pub Date: 2024-11-01 | Epub Date: 2024-03-21 | DOI: 10.1080/00223891.2024.2318352
Elisa Altgassen, Luc Zimny, Jessika Golle, Katharina Allgaier, Ingo Zettler, Oliver Wilhelm
Personality trait measures for children are rarely based on the HEXACO Model of Personality, although research using this framework could provide important insights into the structure and development of children's personalities. There is no HEXACO measure for elementary school children to date, and existing measures for older children seem inappropriate for this age group (e.g., due to some item content). We thus compiled two HEXACO-based short forms for measuring personality in elementary school children (8-10 years old) via parent reports. We applied a meta-heuristic item sampling algorithm (Ant Colony Optimization) in a training sample with 1,641 parent reports of 122 administered items. We selected a 54-Item Short Form comprising a latent facet structure and an 18-Item Ultra-Short Form comprising a correlated factors model for all six HEXACO dimensions but no facet structure. Both short forms showed good model fit in a holdout sample (n = 411) and sufficiently high re-test correlations after six months. Convergent and divergent validities for maximal performance measures and socio-emotional constructs (also measured six months after the initial personality assessment) were largely in line with theoretical assumptions. Overall, our study provides support for construct, re-test, and (predictive) criterion validity for the selected short forms.
Title: Compilation and Validation of Two Short Forms to Measure HEXACO Dimensions in Elementary School Children. Journal of Personality Assessment, pp. 798-809.
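The item-sampling step described in the abstract above, selecting a short form from a larger pool via Ant Colony Optimization, can be illustrated with a toy sketch. This is not the authors' implementation: the fitness function, item "qualities," and all parameters below are invented for demonstration (the actual study optimized model fit over 122 parent-report items).

```python
import random


def aco_select(fitness, n_items, k, ants=20, iters=30, evap=0.9, seed=1):
    """Minimal Ant Colony Optimization for choosing k of n_items.

    fitness: callable mapping a set of item indices to a quality score
             (in short-form construction, e.g., model fit or reliability).
    Pheromone on each item biases sampling toward items that appeared in
    good subsets; evaporation (evap) gradually forgets old trails.
    """
    rng = random.Random(seed)
    pheromone = [1.0] * n_items
    best_set, best_fit = None, float("-inf")
    for _ in range(iters):
        for _ in range(ants):
            # Sample k distinct items with probability proportional to pheromone.
            items = set()
            while len(items) < k:
                total = sum(pheromone)
                r = rng.uniform(0, total)
                acc = 0.0
                for i, p in enumerate(pheromone):
                    acc += p
                    if r <= acc:
                        items.add(i)
                        break
            f = fitness(items)
            if f > best_fit:
                best_set, best_fit = set(items), f
        # Evaporate all trails, then reinforce items in the best subset so far.
        pheromone = [p * evap for p in pheromone]
        for i in best_set:
            pheromone[i] += 1.0
    return best_set, best_fit


# Toy objective: each item has a known "quality"; pick the best 3 of 8.
quality = [0.2, 0.9, 0.1, 0.8, 0.3, 0.7, 0.4, 0.5]
subset, fit = aco_select(lambda s: sum(quality[i] for i in s), n_items=8, k=3)
print(sorted(subset), round(fit, 2))
```

With this tiny search space the algorithm reliably converges on the three highest-quality items; the appeal at realistic scale (choosing 54 of 122 items) is that it explores far fewer subsets than exhaustive search.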
Pub Date: 2024-11-01 | Epub Date: 2024-02-29 | DOI: 10.1080/00223891.2024.2320417
Emma McEnaney, Christian Ryan
The study and measurement of alexithymia, a trait marked by difficulty identifying and describing feelings, can be improved by incorporating objective measures to supplement self-report scales. The Alexithymia Provoked Response Questionnaire (APRQ) is an observer-rated alexithymia tool that shows promise yet can be time-consuming to administer. The present study aimed to assess the feasibility of computer administration and scoring of the APRQ. Further, the APRQ's association with verbal IQ and emotional vocabulary use was examined, as was the relationship between the APRQ and the self-report Bermond-Vorst Alexithymia Questionnaire-B (BVAQ-B). Adult participants (n = 366), including a proportion gathered through purposive sampling, participated in an online study. Inter-rater reliability measures indicated that computerized scoring of the APRQ is as reliable as human scoring, making the measure scalable for use with large samples. Alexithymia levels were independent of two measures of verbal IQ. Correlational analyses indicated overlap in alexithymia as measured by the APRQ and most of the subscales of the BVAQ-B. The APRQ, as an objective measure, may capture deficits in emotional awareness independent of self-insight.
Title: Improving the Objective Measurement of Alexithymia Using a Computer-Scored Alexithymia Provoked Response Questionnaire with an Online Sample. Journal of Personality Assessment, pp. 776-786.
Pub Date: 2024-11-01 | Epub Date: 2023-11-02 | DOI: 10.1080/00223891.2023.2268199
Caroline Macina, André Kerber, Johannes Zimmermann, Ludwig Ohse, Leonie Kampe, Jil Mohr, Marc Walter, Susanne Hörz-Sagstetter, Johannes Sebastian Wrege
The Self and Interpersonal Functioning Scale (SIFS) is a 24-item self-report questionnaire assessing personality functioning according to the alternative DSM-5 model for personality disorders. We evaluated the German SIFS version in a total sample of 886 participants from Germany and Switzerland. Its factor structure was investigated with confirmatory factor analysis comparing bifactor models with two specific factors (self- and interpersonal functioning) and four specific factors (identity, self-direction, empathy, and intimacy). The SIFS sum and domain scores were tested for reliability and convergent validity with self-report questionnaires and interviews for personality functioning, organization, traits, disorder categories, and well-being. None of the bifactor models yielded good model fit, even after excluding two items with low factor loadings and including a method factor for reverse-keyed items. Based on a shortened 22-item SIFS version, models suggested that the g-factor explained 52.9-59.6% of the common variance and that the SIFS sum score measured the g-factor with a reliability of .68-.81. Even though the SIFS sum score showed large test-retest reliability and correlated strongly with well-established self-report questionnaires and interviews, the lack of structural validity appears to be a serious disadvantage of the SIFS compared to existing self-report questionnaires of personality functioning.
Title: Evaluating the Psychometric Properties of the German Self and Interpersonal Functioning Scale (SIFS). Journal of Personality Assessment, pp. 711-723.
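The "52.9-59.6% of the common variance" figure reported above corresponds to the explained common variance (ECV) index used with bifactor models: the sum of squared general-factor loadings divided by the sum of all squared factor loadings. A minimal sketch with hypothetical standardized loadings (not the SIFS estimates):

```python
def ecv(general_loadings, specific_loadings):
    """Explained common variance of the general factor in a bifactor model.

    general_loadings : standardized loadings of each item on the g-factor.
    specific_loadings: loadings of each item on its specific factor.
    """
    g2 = sum(l ** 2 for l in general_loadings)
    s2 = sum(l ** 2 for l in specific_loadings)
    return g2 / (g2 + s2)


# Hypothetical loadings for a four-item toy example.
g = [0.60, 0.55, 0.70, 0.50]
s = [0.40, 0.50, 0.30, 0.45]
print(round(ecv(g, s), 3))  # 0.666
```

An ECV well above .50, as here, means the general factor dominates the common variance, which is one criterion used when deciding whether a sum score is interpretable despite bifactor misfit.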
Pub Date: 2024-11-01 | Epub Date: 2024-03-26 | DOI: 10.1080/00223891.2024.2321956
Brianna N Davis, Rebekah Brown Spivey, Sarah Hernandez, Hadley McCartin, Tia Tourville, Laura E Drislane
The extent to which psychopathy can be reliably assessed via self-report has been debated. One step in informing this debate is examining the internal consistency of self-report psychopathy measures, such as the Triarchic Psychopathy Measure (TriPM; Patrick, 2010). Reliability generalization (RG) studies apply a meta-analytic approach to examine the internal consistency of an instrument in a more robust manner by aggregating internal consistency statistics reported across the published literature. This study conducted an RG analysis to yield the average Cronbach's alpha among published studies (k = 219) that administered the TriPM. Meta-analytic alphas were high for TriPM Total (α = .88), Boldness (α = .81), Meanness (α = .87), and Disinhibition (α = .85). Moderator analyses indicated internal consistency differed minimally as a function of study characteristics, like gender, age, or the nature of the sample (i.e., forensic or community). Subsequent RG analyses were performed for McDonald's omega (k = 40), which yielded comparable internal consistency estimates. The results of this study provide strong evidence that the TriPM measures coherent, internally consistent constructs and thus could be a viable, cost-effective mechanism for measuring psychopathy across a broad range of samples.
Title: Reliability Generalization of the Triarchic Psychopathy Measure. Journal of Personality Assessment, pp. 832-842.
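The core aggregation step in a reliability generalization analysis can be sketched as a weighted mean of study-level alphas. This is a simplified fixed-effect illustration with invented values; published RG analyses typically transform the alphas first (e.g., Hakstian-Whalen) and may fit random-effects models with moderators.

```python
def pooled_alpha(alphas, ns):
    """Sample-size-weighted mean of study-level Cronbach's alphas.

    alphas: list of coefficient alphas reported by k studies.
    ns    : corresponding sample sizes, used as weights.
    """
    return sum(a * n for a, n in zip(alphas, ns)) / sum(ns)


# Hypothetical study-level alphas and sample sizes for one scale.
alphas = [0.88, 0.85, 0.90]
ns = [200, 150, 650]
print(round(pooled_alpha(alphas, ns), 3))
```

Weighting by sample size keeps a small study with an unusual alpha from dominating the pooled estimate, which is the point of aggregating across k = 219 studies rather than citing any single one.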
Pub Date: 2024-10-31 | DOI: 10.1080/00223891.2024.2413148
Jessica K Hlay, Benjamin N Johnson, Carolyn R Hodges-Simeon, Kenneth N Levy
In response to Cannon's widely accepted fight-or-flight system, Taylor et al. proposed the tend-and-befriend hypothesis to better capture variance in women's stress response behaviors. The Tend-and-Befriend Questionnaire (TBQ) measures self-reported individual differences in the use of fight, flight, tend, and befriend. Several studies have used this scale to evaluate sex differences in these behaviors, yet the scale itself has not been rigorously evaluated. Using three samples (N = 1094), we first explore the factor structure of the TBQ to produce and validate a revised measure, the TBQ-Short Form (TBQ-SF). Next, we evaluate the claim that women use tend-and-befriend more than men. Results indicated that the TBQ-SF provided both reliable subscales and largely acceptable model fit, yet the factor structure and validity varied across the three samples. While men do report more fighting than women, both men and women report using tending and befriending more than fighting or fleeing. Finally, other variables, namely attachment, capture more variance in TBQ-SF factors than sex. While the TBQ-SF does capture differences in stress reactions (fight, flight, tend/befriend), we suggest that the scale is most reliable in measuring overall stress reactivity. Therefore, future research should aim to construct a better scale specific to tend-and-befriend using alternative methodologies.
Title: A Psychometric Evaluation of the Tend-and-Befriend Questionnaire. Journal of Personality Assessment, pp. 1-15.
Pub Date: 2024-10-31 | DOI: 10.1080/00223891.2024.2420172
Abby L Mulay, Emily D Gottfried, Jared R Ruchensky, Tiffany Russell, Adam P Natoli, Christopher J Hopwood
Historically, forensic evaluators have relied heavily upon various editions of the Diagnostic and Statistical Manual of Mental Disorders when rendering psycholegal opinions. The field of mental health is increasingly moving toward dimensional models of personality and psychopathology in lieu of traditional DSM categorical models, though the domains of forensic psychology and psychiatry have been slow to make this transition. The current study therefore sought to examine forensic evaluators' familiarity with dimensional approaches to personality and psychopathology, namely the Alternative DSM-5 Model for Personality Disorders (AMPD) and the Hierarchical Taxonomy of Psychopathology (HiTOP). Forensic psychologists and psychiatrists (N = 54) completed an online survey designed to assess their familiarity with these models, as well as to determine if forensic practitioners are using these models in clinical practice. Participants endorsed greater familiarity with the AMPD, with a large majority of participants indicating they were unfamiliar with the HiTOP model. Few participants endorsed using these models in their clinical forensic practice. Implications for making the transition to dimensional models within forensic evaluation are discussed, as are paths forward for future research.
Title: The Problem No One is Talking About: Forensic Evaluators' Lack of Familiarity with Dimensional Approaches to Personality and Psychopathology. Journal of Personality Assessment, pp. 1-9.
Pub Date: 2024-10-21 | DOI: 10.1080/00223891.2024.2411557
Jared R Ruchensky, John F Edens, M Brent Donnellan
The Big Five Inventory-2 (BFI-2) is a commonly used self-report assessment of normal personality trait domains (Extraversion, Agreeableness, Conscientiousness, Negative Emotionality, Open-Mindedness) and facets. To date, however, no direct measures of response distortion have been developed for it to identify potentially invalid responses. Such distortions (e.g., careless or random responding) can adversely impact data quality. The current study developed and provided initial validation data for an inconsistent responding scale within the BFI-2 to identify careless responders using two large undergraduate samples and a community sample. To create the scale, we first identified highly correlated BFI-2 item pairs in one undergraduate sample (N = 1,461) and then computed a total score by summing the absolute differences of these item pairs. This scale, the Detection of Response Inconsistency Procedure (DRIP), differentiated randomly generated and genuine data and generally correlated as expected with personality domains and other inconsistent responding scales across samples. The DRIP also incrementally predicted random data beyond a composite of items with exceptionally high or low base rates of endorsement from the Comprehensive Infrequency/Frequency Item Repository. We provide recommendations for DRIP cut scores that can detect careless responding while balancing sensitivity and specificity.
Title: Development of an Inconsistent Responding Scale for the Big Five Inventory-2. Journal of Personality Assessment, pp. 1-8.
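The scoring rule described in the abstract above, summing absolute response differences across highly correlated item pairs, can be sketched as follows. The item pairs, responses, and any implied cut score here are hypothetical illustrations, not the published DRIP item pairs or recommended cut scores.

```python
def drip_score(responses, item_pairs):
    """Sum of absolute response differences across pre-selected item pairs.

    responses : dict mapping item number -> Likert rating (e.g., 1-5).
    item_pairs: pairs of items that correlate strongly in normative data,
                so a large gap within a pair suggests careless responding.
    """
    return sum(abs(responses[a] - responses[b]) for a, b in item_pairs)


# Hypothetical item pairs and respondents for illustration only.
PAIRS = [(3, 33), (8, 38), (12, 42)]
attentive = {3: 4, 33: 4, 8: 2, 38: 3, 12: 5, 42: 5}
careless = {3: 1, 33: 5, 8: 5, 38: 1, 12: 2, 42: 5}
print(drip_score(attentive, PAIRS))  # 1
print(drip_score(careless, PAIRS))   # 11
```

A respondent answering attentively lands near zero because paired items draw similar ratings; random responding inflates the sum, which is why a cut score on this total can flag protocols for review.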