Measuring Continuous Affect in Daily Life With Intensity Profile Drawings.
Pub Date: 2024-08-13 | DOI: 10.1177/10731911241266286
Leonie Cloos, Merijn Mestdagh, Wolf Vanpaemel, Eva Ceulemans, Peter Kuppens
We examined continuous affect drawings as an innovative measure of affective experiences over time. Intensive longitudinal data often rely on discrete assessments, leaving "blind spots" between measurements. With continuous affect drawings, participants visually depict their affect fluctuations between assessments. In an experience sampling study, participants (N = 115) rated their momentary positive and negative affect 6 times daily. From the second daily rating onward, they additionally drew their positive and negative affect changes and reported affective events between assessments. They also received one measurement burst between assessments daily. The strength of the approach is a substantial informational gain (7% on average) over linearly interpolated points between assessments. The additional information was subsequently categorized into positive and negative affect peaks and valleys, each occurring once a day per person on average. The probability of detecting peaks and valleys increased with reported events. The drawings correlated positively with momentary affect scores from the burst. Yet, the drawings predicted the bursts less well, suggesting that the momentary ratings may yield different information than the drawings. Although the timing of retrospective drawings is less precise than that of individual momentary assessments, this method provides a comprehensive understanding of affective experiences between assessments, offering a unique perspective on affect dynamics.
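As a rough illustration of the comparison described above, the sketch below quantifies how much a drawn trajectory departs from the straight line connecting two momentary ratings. The deviation-based index, the assumed 0-100 affect scale, and the simulated "drawing" are hypothetical stand-ins, not the paper's informational-gain measure.

```python
import numpy as np

def interpolation_gain(drawn, t, rating_start, rating_end, scale_range=100.0):
    """Toy index of how much a drawn affect trajectory deviates from the
    straight line connecting two momentary ratings (hypothetical metric,
    not the paper's exact informational-gain computation)."""
    # Straight line between the two discrete assessments.
    linear = np.interp(t, [t[0], t[-1]], [rating_start, rating_end])
    # Average absolute deviation of the drawing from that line,
    # expressed as a percentage of the rating scale range.
    return 100.0 * np.mean(np.abs(np.asarray(drawn) - linear)) / scale_range

# Example: a drawing with a mid-interval peak that linear interpolation misses.
t = np.linspace(0, 1, 50)                            # normalized time between beeps
drawn = 40 + 30 * np.exp(-((t - 0.5) ** 2) / 0.02)   # simulated peak around the midpoint
print(f"gain over linear interpolation: {interpolation_gain(drawn, t, 40, 40):.1f}%")
```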
{"title":"Measuring Continuous Affect in Daily Life With Intensity Profile Drawings.","authors":"Leonie Cloos, Merijn Mestdagh, Wolf Vanpaemel, Eva Ceulemans, Peter Kuppens","doi":"10.1177/10731911241266286","DOIUrl":"https://doi.org/10.1177/10731911241266286","url":null,"abstract":"<p><p>We examined continuous affect drawings as innovative measure of affective experiences over time. Intensive longitudinal data often rely on discrete assessments, containing \"blind spots\" between measurements. With continuous affect drawings participants visually depict their affect fluctuations between assessments. In an experience sampling study, participants (<i>N</i> = 115) rated their momentary positive and negative affect 6 times daily. From the second daily rating on, they additionally drew their positive and negative affect changes and reported affective events between assessments. They received one measurement burst between assessments daily. The strength of the approach is a substantial amount of informational gain (average 7%) over linearly interpolated points between assessments. The additional information was subsequently categorized into positive and negative affect peaks and valleys, each occurring once a day per person on average. The probability of detecting peaks and valleys increased with reported events. The drawings correlated positively with momentary affect scores from the burst. Yet, the drawing predicted the bursts less well suggesting that the momentary ratings may yield different information than the drawings. Although the timing of retrospective drawings is less precise than individual momentary assessments, this method provides a comprehensive understanding of affective experiences between assessments, offering a unique perspective on affect dynamics.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141974966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating When Subscores Add Value in Psychological and Health Applications.
Pub Date: 2024-08-02 | DOI: 10.1177/10731911241266293
Molly J Gardner, Michael C Edwards
Many scales used in psychological and health research are designed to yield subscores, yet it is common to see total scores reported instead. One challenge of using subscores is that they can lack adequate reliability due to their shorter length. However, methods originally developed for educational measurement have shown that augmenting subscores can improve reliability estimates. Augmented subscores blend the individual score with other sources of information. The present study sought to understand (a) the costs of ignoring subscores in favor of total scores and (b) the extent to which augmentation can help alleviate challenges encountered when using subscores. Data were simulated to examine when subscores should be preferred to total scores and the magnitude of improvement from using augmented subscores over non-augmented subscores. Results suggested that when a scale is designed to yield subscores, there is practical benefit to using them. In situations where subscore reliability is low, we recommend using augmentation.
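As a rough illustration of the augmentation idea described above, the sketch below blends a noisy observed subscore with a least-squares prediction from the rest of the test, using a reliability-based weight. This is only one simple flavor of augmentation (in the spirit of Kelley-style regressed estimates), not the specific estimator evaluated in the simulation study; the data and reliability value are hypothetical.

```python
import numpy as np

def augmented_subscore(subscore, other_info, reliability):
    """One simple flavor of subscore augmentation (hypothetical weighting):
    blend each observed subscore with a least-squares prediction from other
    information (e.g., the remaining subscores), weighting by the subscore's
    reliability. This generalizes Kelley-style shrinkage toward the mean."""
    X = np.column_stack([np.ones(len(subscore)), other_info])
    beta, *_ = np.linalg.lstsq(X, subscore, rcond=None)
    predicted = X @ beta
    return reliability * subscore + (1 - reliability) * predicted

# Hypothetical data: a short, noisy subscore and the (more reliable) rest of the test.
rng = np.random.default_rng(0)
true_score = rng.normal(50, 10, 200)
subscore = true_score + rng.normal(0, 8, 200)
rest_of_test = true_score + rng.normal(0, 4, 200)
augmented = augmented_subscore(subscore, rest_of_test, reliability=0.6)
print("r(raw subscore, true score) =", round(np.corrcoef(subscore, true_score)[0, 1], 2))
print("r(augmented, true score)    =", round(np.corrcoef(augmented, true_score)[0, 1], 2))
```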
{"title":"Evaluating When Subscores Add Value in Psychological and Health Applications.","authors":"Molly J Gardner, Michael C Edwards","doi":"10.1177/10731911241266293","DOIUrl":"https://doi.org/10.1177/10731911241266293","url":null,"abstract":"<p><p>Many scales used in psychological and health research are designed to yield subscores, yet it is common to see total scores reported instead. One challenge of using subscores is they can lack adequate reliability due to their shortened length. However, methods originally developed for educational measurement have shown that augmenting subscores can improve reliability estimates. Augmented subscores blend the individual score with other sources of information. The present study sought to understand (a) the costs of ignoring subscores in favor of total scores and (b) the extent to which augmentation can help alleviate challenges encountered when using subscores. Data were simulated to examine when subscores should be preferred to total scores and the magnitude of improvement from using augmented subscores over non-augmented subscores. Results suggested that when a scale is designed to yield subscores, there is practical benefit to using them. In situations where subscore reliability is low, we recommend using augmentation.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141874018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Portable Touchscreen Assessment of Motor Skill: A Registered Report of the Reliability and Validity of EDNA MoTap.
Pub Date: 2024-07-29 | DOI: 10.1177/10731911241266306
Thomas B McGuckian, Jade Laracas, Nadine Roseboom, Sophie Eichler, Szymon Kardas, Stefan Piantella, Michael H Cole, Ross Eldridge, Jonathan Duckworth, Bert Steenbergen, Dido Green, Peter H Wilson
Portable and flexible administration of manual dexterity assessments is necessary to monitor recovery from brain injury and the effects of interventions across clinic and home settings, especially when in-person testing is not possible or convenient. This paper aims to assess the concurrent validity and test-retest reliability of a new suite of touchscreen-based manual dexterity tests (called EDNA™ MoTap) that are designed for portable and efficient administration. A minimum sample of 49 healthy young adults will be recruited through convenience sampling. The EDNA™ MoTap tasks will be assessed for concurrent validity against standardized tools (the Box and Block Test [BBT] and the Purdue Pegboard Test) and for test-retest reliability over a 1- to 2-week interval. Correlation coefficients of r > .6 will indicate acceptable validity, and intraclass correlation coefficient (ICC) values > .75 will indicate acceptable reliability for healthy adults. The sample comprised primarily right-handed (91%) adults aged between 19 and 34 years (M = 24.93, SD = 4.21; 50% female). The MoTap tasks did not demonstrate acceptable validity, with tasks showing weak-to-moderate associations with the criterion assessments. Some outcomes demonstrated acceptable test-retest reliability; however, this was not consistent. Touchscreen-based assessments of dexterity remain relevant; however, there is a need for further development of the EDNA™ MoTap task administration.
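The two benchmarks named above (Pearson r > .60 for concurrent validity, ICC > .75 for test-retest reliability) can be computed directly from the data. The sketch below is a minimal illustration with simulated scores; the variance-components formula shown is the common two-way consistency form ICC(3,1), which may differ from the ICC variant used in the registered analysis, and all values are made up.

```python
import numpy as np

def icc_3_1(scores):
    """ICC(3,1): two-way mixed-effects, single-measure, consistency form,
    computed from ANOVA mean squares. scores: n_subjects x k_sessions."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)   # between subjects
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)   # between sessions
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical test-retest data for one touchscreen outcome (n = 49, 2 sessions).
rng = np.random.default_rng(1)
ability = rng.normal(20, 4, 49)
sessions = np.column_stack([ability + rng.normal(0, 2, 49),
                            ability + rng.normal(0, 2, 49) + 0.5])  # slight practice effect
print(f"ICC(3,1) = {icc_3_1(sessions):.2f}  (threshold: > .75)")

# Concurrent validity against a hypothetical criterion (e.g., a pegboard-style score).
criterion = ability + rng.normal(0, 3, 49)
r = np.corrcoef(sessions[:, 0], criterion)[0, 1]
print(f"Pearson r with criterion = {r:.2f}  (threshold: > .60)")
```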
{"title":"Portable Touchscreen Assessment of Motor Skill: A Registered Report of the Reliability and Validity of EDNA MoTap.","authors":"Thomas B McGuckian, Jade Laracas, Nadine Roseboom, Sophie Eichler, Szymon Kardas, Stefan Piantella, Michael H Cole, Ross Eldridge, Jonathan Duckworth, Bert Steenbergen, Dido Green, Peter H Wilson","doi":"10.1177/10731911241266306","DOIUrl":"https://doi.org/10.1177/10731911241266306","url":null,"abstract":"<p><p>Portable and flexible administration of manual dexterity assessments is necessary to monitor recovery from brain injury and the effects of interventions across clinic and home settings, especially when in-person testing is not possible or convenient. This paper aims to assess the concurrent validity and test-retest reliability of a new suite of touchscreen-based manual dexterity tests (called <i>EDNA</i>™<i>MoTap</i>) that are designed for portable and efficient administration. A minimum sample of 49 healthy young adults will be conveniently recruited. The <i>EDNA</i>™<i>MoTap</i> tasks will be assessed for concurrent validity against standardized tools (the Box and Block Test [BBT] and the Purdue Pegboard Test) and for test-retest reliability over a 1- to 2-week interval. Correlation coefficients of <i>r</i> > .6 will indicate acceptable validity, and intraclass correlation coefficient (ICC) values > .75 will indicate acceptable reliability for healthy adults. The sample were primarily right-handed (91%) adults aged 19 and 34 years (<i>M</i> = 24.93, <i>SD</i> = 4.21, 50% female). The <i>MoTap</i> tasks did not demonstrate acceptable validity, with tasks showing weak-to-moderate associations with the criterion assessments. Some outcomes demonstrated acceptable test-retest reliability; however, this was not consistent. Touchscreen-based assessments of dexterity remain relevant; however, there is a need for further development of the <i>EDNA</i>™<i>MoTap</i> task administration.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141791778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Continuous Norming Approaches: A Systematic Review and Real Data Example.
Pub Date: 2024-07-27 | DOI: 10.1177/10731911241260545
Julian Urban, Vsevolod Scherrer, Anja Strobel, Franzis Preckel
Norming of psychological tests is decisive for test score interpretation. However, conventional norming based on subgroups either results in biases or requires very large samples to obtain precise norms. Continuous norming methods, namely inferential, semi-parametric, and (simplified) parametric norming, have been proposed to solve these issues. This article provides a systematic review of continuous norming. The review includes 121 publications comprising a total of 189 studies. The main findings indicate that most studies used simplified parametric norming, that not all studies considered essential distributional assumptions, and that the evidence comparing different norming methods is inconclusive. In a real-data example using the standardization sample of the Need for Cognition-KIDS scale, we compared the precision of conventional, semi-parametric, and parametric norms. A hierarchy in terms of precision emerged, with conventional norms being least precise, followed by semi-parametric norms, and parametric norms being most precise. We discuss these results by comparing our methods and findings to those of previous studies.
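As an illustration of the semi-parametric approach reviewed here (in the spirit of rank-based continuous norming as implemented in packages such as cNORM), the sketch below converts within-age-group ranks to normal quantiles and fits a polynomial surface in raw score and age, so that a norm score can be read off for any age. The grouping, polynomial degree, and simulated data are arbitrary illustrative choices, not the procedure used in the article's real-data example.

```python
import numpy as np
from scipy.stats import norm, rankdata

def continuous_norms(raw, age, n_groups=6, degree=3):
    """Sketch of semi-parametric continuous norming: rank raw scores within
    age groups, convert ranks to z-quantiles, then fit a polynomial surface
    z = f(raw, age) so norm scores can be interpolated for any age."""
    # 1) Within-age-group percentile ranks -> normal quantiles.
    cuts = np.quantile(age, np.linspace(0, 1, n_groups + 1)[1:-1])
    groups = np.digitize(age, cuts)
    z = np.empty_like(raw, dtype=float)
    for g in np.unique(groups):
        m = groups == g
        pct = (rankdata(raw[m]) - 0.5) / m.sum()
        z[m] = norm.ppf(pct)
    # 2) Polynomial regression of z on powers of raw score and age.
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1)
             if 0 < i + j <= degree]
    X = np.column_stack([np.ones_like(raw, dtype=float)] +
                        [raw**i * age**j for i, j in terms])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)

    def predict_z(raw_new, age_new):
        feats_new = np.column_stack([np.ones_like(raw_new, dtype=float)] +
                                    [raw_new**i * age_new**j for i, j in terms])
        return feats_new @ beta
    return predict_z

# Hypothetical standardization sample: raw scores increase and spread with age.
rng = np.random.default_rng(2)
age = rng.uniform(8, 16, 2000)
raw = 2.0 * age + rng.normal(0, 3 + 0.2 * age, 2000)
norm_fn = continuous_norms(raw, age)
print(norm_fn(np.array([30.0]), np.array([11.5])))  # z-norm for raw = 30 at age 11.5
```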
{"title":"Continuous Norming Approaches: A Systematic Review and Real Data Example.","authors":"Julian Urban, Vsevolod Scherrer, Anja Strobel, Franzis Preckel","doi":"10.1177/10731911241260545","DOIUrl":"10.1177/10731911241260545","url":null,"abstract":"<p><p>Norming of psychological tests is decisive for test score interpretation. However, conventional norming based on subgroups results either in biases or require very large samples to gather precise norms. Continuous norming methods, namely inferential, semi-parametric, and (simplified) parametric norming, propose to solve those issues. This article provides a systematic review of continuous norming. The review includes 121 publications with overall 189 studies. The main findings indicate that most studies used simplified parametric norming, not all studies considered essential distributional assumptions, and the evidence comparing different norming methods is inconclusive. In a real data example, using the standardization sample of the Need for Cognition-KIDS scale, we compared the precision of conventional, semi-parametric, and parametric norms. A hierarchy in terms of precision emerged with conventional norms being least precise, followed by semi-parametric norms, and parametric norms being most precise. We discuss these findings by comparing our findings and methods to previous studies.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141765061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing and Evaluating a Situated Assessment Instrument for Trichotillomania: The SAM2 TAI.
Pub Date: 2024-07-27 | DOI: 10.1177/10731911241262140
Courtney Taylor Browne Lūka, Katie Hendry, Léo Dutriaux, Judith L Stevenson, Lawrence W Barsalou
Measuring trichotillomania is essential for understanding and treating it effectively. Using the Situated Assessment Method (SAM2), we developed a psychometric instrument to assess hair pulling in the situations where it occurs. In two studies, pullers evaluated their pulling in relevant situations, along with how much they experienced factors that potentially influence it (e.g., external triggers, reduction in negative emotion, negative self-thoughts). Individual measures of pulling, averaged across situations, exhibited high test reliability, construct validity, and content validity. Large differences in pulling between situations were observed, along with large individual-situation interactions (with limited evidence distinguishing focused versus automatic pulling subtypes). In linear regressions for individual participants, factors that influence pulling tended to correlate with pulling as predicted, explaining a median 74%-83% of its variance. By identifying factors that predict pulling for each individual across situations, the SAM2 Trichotillomania Assessment Instrument (TAI) offers a rich understanding of an individual's pulling experience, potentially supporting individualized pulling interventions.
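The within-person analysis summarized above (a separate linear regression per participant across situations, then a summary of the explained variance) can be sketched as follows. The situational factors, data, and sample sizes below are hypothetical, not the SAM2 TAI items or results.

```python
import numpy as np

def per_person_r2(pulling, predictors):
    """Fit an ordinary least-squares regression across situations for one
    participant and return R^2 (share of situation-to-situation variance
    in pulling explained by the situational factors)."""
    X = np.column_stack([np.ones(len(pulling)), predictors])
    beta, *_ = np.linalg.lstsq(X, pulling, rcond=None)
    resid = pulling - X @ beta
    return 1 - resid.var() / pulling.var()

# Hypothetical data: 40 participants x 20 situations x 3 situational factors
# (e.g., external triggers, negative-emotion reduction, negative self-thoughts).
rng = np.random.default_rng(3)
r2s = []
for _ in range(40):
    factors = rng.normal(size=(20, 3))
    weights = rng.normal(size=3)                      # person-specific weights
    pulling = factors @ weights + rng.normal(0, 0.7, 20)
    r2s.append(per_person_r2(pulling, factors))
print(f"median within-person R^2 = {np.median(r2s):.2f}")
```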
{"title":"Developing and Evaluating a Situated Assessment Instrument for Trichotillomania: The SAM<sup>2</sup> TAI.","authors":"Courtney Taylor Browne Lūka, Katie Hendry, Léo Dutriaux, Judith L Stevenson, Lawrence W Barsalou","doi":"10.1177/10731911241262140","DOIUrl":"10.1177/10731911241262140","url":null,"abstract":"<p><p>Measuring trichotillomania is essential for understanding and treating it effectively. Using the Situated Assessment Method (SAM<sup>2</sup>), we developed a psychometric instrument to assess hair pulling in situations where it occurs. In two studies, pullers evaluated their pulling in relevant situations, along with how much they experience factors that potentially influence it (e.g., external triggers, reduction in negative emotion, negative self-thoughts). Individual measures of pulling, averaged across situations, exhibited high test reliability, construct validity, and content validity. Large differences between situations in pulling were observed, along with large individual-situation interactions (with limited evidence distinguishing focused versus automatic pulling subtypes). In linear regressions for individual participants, factors that influence pulling tended to correlate with pulling as predicted, explaining a median 74%-83% of its variance. By identifying factors that predict pulling for each individual across situations, the SAM<sup>2</sup> Trichotillomania Assessment Instrument (TAI) offers a rich understanding of an individual's pulling experience, potentially supporting individualized pulling interventions.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141765062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measurement Invariance of the HEXACO-100 Across Gender Groups: A Three-Sample Study.
Pub Date: 2024-07-26 | DOI: 10.1177/10731911241259306
Jisoo Ock, Samuel T McAbee
We used exploratory structural equation modeling to examine gender-based measurement invariance (MI) in the HEXACO-100 across three samples that varied in terms of age (undergraduate students in Study 1, working adults in Studies 2 and 3) and testing context (research context in Studies 1 and 2, high-stakes selection context in Study 3). Across the three studies, we consistently found support for configural and metric invariance but not scalar invariance. However, the effect size measures of non-invariance were generally small. That said, on the Emotionality scale, for the same latent score, females scored higher than males due to measurement non-invariance (by 0.26 to 0.48 standard deviation units). Thus, the observed mean gender differences overestimated the true mean gender differences. The current study provides detailed evidence regarding gender-based MI of the HEXACO personality scales. More generally, it provides insight regarding the effect that measurement artifacts can have on understanding psychological gender differences at the latent level.
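For readers unfamiliar with the invariance hierarchy tested here, the nested models can be written compactly as below; the notation is generic (the study's actual models were specified in an ESEM framework).

```latex
% Nested multigroup measurement models for item vector x of person i in group g:
\begin{aligned}
\text{Configural: } & x_{ig} = \tau_g + \Lambda_g \xi_{ig} + \varepsilon_{ig}
  && \text{(same loading pattern; all parameters group-specific)}\\
\text{Metric: }     & x_{ig} = \tau_g + \Lambda \xi_{ig} + \varepsilon_{ig}
  && (\Lambda_g = \Lambda \text{ for all } g)\\
\text{Scalar: }     & x_{ig} = \tau + \Lambda \xi_{ig} + \varepsilon_{ig}
  && (\tau_g = \tau,\ \Lambda_g = \Lambda \text{ for all } g)
\end{aligned}
```

Only when scalar (or at least partial scalar) invariance holds can observed group mean differences be attributed to latent mean differences, which is why non-invariant Emotionality intercepts lead observed gender differences to overestimate latent ones.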
{"title":"Measurement Invariance of the HEXACO-100 Across Gender Groups: A Three-Sample Study.","authors":"Jisoo Ock, Samuel T McAbee","doi":"10.1177/10731911241259306","DOIUrl":"https://doi.org/10.1177/10731911241259306","url":null,"abstract":"<p><p>We used exploratory structural equation modeling to examine gender-based measurement invariance (MI) in the HEXACO-100 across three samples that varied in terms of age (undergraduate students in Study 1, working adults in Studies 2 and 3) and testing context (research context in Studies 1 and 2, high-stakes selection context in Study 3). Across three studies, we consistently found support for configural and metric invariance but not scalar invariance. However, the effect size measures of non-invariance were generally small. That said, in the Emotionality scale, for the same latent score, females scored higher than males due to measurement non-invariance (between 0.26 and 0.48 standard deviation units). Thus, the observed mean gender differences overestimated the true mean gender differences. The current study provides detailed evidence regarding gender-based MI of HEXACO personality scales. More generally, it provides insight regarding the effect that measurement artifacts can have on understanding psychological gender differences at the latent level.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141756829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating the factor structure and measurement invariance of the 20-item short version of the UPPS-P Impulsive Behavior Scale across multiple countries, languages, and gender identities.
Pub Date: 2024-07-26 | DOI: 10.1177/10731911241259560
Loïs Fournier, Beáta Bőthe, Zsolt Demetrovics, Mónika Koós, Shane W Kraus, Léna Nagy, Marc N Potenza, Rafael Ballester-Arnal, Dominik Batthyány, Sophie Bergeron, Peer Briken, Julius Burkauskas, Georgina Cárdenas-López, Joana Carvalho, Jesús Castro-Calvo, Lijun Chen, Giacomo Ciocca, Ornella Corazza, Rita I Csako, David P Fernandez, Hironobu Fujiwara, Elaine F Fernandez, Johannes Fuss, Roman Gabrhelík, Ateret Gewirtz-Meydan, Biljana Gjoneska, Mateusz Gola, Joshua B Grubbs, Hashim T Hashim, Md Saiful Islam, Mustafa Ismail, Martha C Jiménez-Martínez, Tanja Jurin, Ondrej Kalina, Verena Klein, András Költő, Sang-Kyu Lee, Karol Lewczuk, Chung-Ying Lin, Christine Lochner, Silvia López-Alvarado, Kateřina Lukavská, Percy Mayta-Tristán, Dan J Miller, Oľga Orosová, Gábor Orosz, Fernando P Ponce, Gonzalo R Quintana, Gabriel C Quintero Garzola, Jano Ramos-Diaz, Kévin Rigaud, Ann Rousseau, Marco De Tubino Scanavino, Marion K Schulmeyer, Pratap Sharan, Mami Shibata, Sheikh Shoib, Vera Sigre-Leirós, Luke Sniewski, Ognen Spasovski, Vesta Steibliene, Dan J Stein, Julian Strizek, Meng-Che Tsai, Berk C Ünsal, Marie-Pier Vaillancourt-Morel, Marie Claire Van Hout, Joël Billieux
The UPPS-P Impulsive Behavior Model and the various psychometric instruments developed and validated based on this model are well established in clinical and research settings. However, evidence regarding their psychometric validity, reliability, and equivalence across multiple countries of residence, languages, or gender identities, including gender-diverse individuals, is lacking to date. Using data from the International Sex Survey (N = 82,243), confirmatory factor analyses and measurement invariance analyses were performed on the preestablished five-factor structure of the 20-item short version of the UPPS-P Impulsive Behavior Scale to examine whether (a) psychometric validity and reliability and (b) psychometric equivalence hold across 34 country-of-residence-related, 22 language-related, and three gender-identity-related groups. The results of the present study extend the instrument's well-established relevance to 26 countries, 13 languages, and three gender identities. Most notably, psychometric validity and reliability were evidenced across nine novel translations included in the present study (i.e., Croatian, English, German, Hebrew, Korean, Macedonian, Polish, Portuguese-Portugal, and Spanish-Latin American), and psychometric equivalence was evidenced across all three gender identities included in the present study (i.e., women, men, and gender-diverse individuals).
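In large multigroup applications like this one, invariance decisions are typically based on changes in approximate fit indices between adjacent nested models; a ΔCFI of no more than about .01 (and sometimes a ΔRMSEA of no more than about .015) is a common, though debated, heuristic. The sketch below only tabulates and flags such changes; the fit values are placeholders, not results from the International Sex Survey.

```python
# Hypothetical fit indices for nested invariance models fit across groups
# (placeholder numbers, not estimates from this study).
fits = [
    {"model": "configural", "cfi": 0.962, "rmsea": 0.041},
    {"model": "metric",     "cfi": 0.958, "rmsea": 0.042},
    {"model": "scalar",     "cfi": 0.951, "rmsea": 0.044},
]

# Common (debated) rules of thumb: delta CFI <= .010 and delta RMSEA <= .015
# between adjacent nested models are taken as support for the added constraints.
for prev, curr in zip(fits, fits[1:]):
    d_cfi = prev["cfi"] - curr["cfi"]
    d_rmsea = curr["rmsea"] - prev["rmsea"]
    supported = d_cfi <= 0.010 and d_rmsea <= 0.015
    print(f"{prev['model']:>10} -> {curr['model']:<10} "
          f"dCFI={d_cfi:.3f} dRMSEA={d_rmsea:.3f} "
          f"{'supported' if supported else 'not supported'}")
```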
{"title":"Evaluating the factor structure and measurement invariance of the 20-item short version of the UPPS-P Impulsive Behavior Scale across multiple countries, languages, and gender identities.","authors":"Loïs Fournier, Beáta Bőthe, Zsolt Demetrovics, Mónika Koós, Shane W Kraus, Léna Nagy, Marc N Potenza, Rafael Ballester-Arnal, Dominik Batthyány, Sophie Bergeron, Peer Briken, Julius Burkauskas, Georgina Cárdenas-López, Joana Carvalho, Jesús Castro-Calvo, Lijun Chen, Giacomo Ciocca, Ornella Corazza, Rita I Csako, David P Fernandez, Hironobu Fujiwara, Elaine F Fernandez, Johannes Fuss, Roman Gabrhelík, Ateret Gewirtz-Meydan, Biljana Gjoneska, Mateusz Gola, Joshua B Grubbs, Hashim T Hashim, Md Saiful Islam, Mustafa Ismail, Martha C Jiménez-Martínez, Tanja Jurin, Ondrej Kalina, Verena Klein, András Költő, Sang-Kyu Lee, Karol Lewczuk, Chung-Ying Lin, Christine Lochner, Silvia López-Alvarado, Kateřina Lukavská, Percy Mayta-Tristán, Dan J Miller, Oľga Orosová, Gábor Orosz, Fernando P Ponce, Gonzalo R Quintana, Gabriel C Quintero Garzola, Jano Ramos-Diaz, Kévin Rigaud, Ann Rousseau, Marco De Tubino Scanavino, Marion K Schulmeyer, Pratap Sharan, Mami Shibata, Sheikh Shoib, Vera Sigre-Leirós, Luke Sniewski, Ognen Spasovski, Vesta Steibliene, Dan J Stein, Julian Strizek, Meng-Che Tsai, Berk C Ünsal, Marie-Pier Vaillancourt-Morel, Marie Claire Van Hout, Joël Billieux","doi":"10.1177/10731911241259560","DOIUrl":"https://doi.org/10.1177/10731911241259560","url":null,"abstract":"<p><p>The UPPS-P Impulsive Behavior Model and the various psychometric instruments developed and validated based on this model are well established in clinical and research settings. However, evidence regarding the psychometric validity, reliability, and equivalence across multiple countries of residence, languages, or gender identities, including gender-diverse individuals, is lacking to date. Using data from the International Sex Survey (<i>N</i> = 82,243), confirmatory factor analyses and measurement invariance analyses were performed on the preestablished five-factor structure of the 20-item short version of the UPPS-P Impulsive Behavior Scale to examine whether (a) psychometric validity and reliability and (b) psychometric equivalence hold across 34 country-of-residence-related, 22 language-related, and three gender-identity-related groups. The results of the present study extend the latter psychometric instrument's well-established relevance to 26 countries, 13 languages, and three gender identities. Most notably, psychometric validity and reliability were evidenced across nine novel translations included in the present study (i.e., Croatian, English, German, Hebrew, Korean, Macedonian, Polish, Portuguese-Portugal, and Spanish-Latin American) and psychometric equivalence was evidenced across all three gender identities included in the present study (i.e., women, men, and gender-diverse individuals).</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141756885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining the Criterion and Incremental Validity of the MMPI-3 Impulsivity Scale.
Pub Date: 2024-07-26 | DOI: 10.1177/10731911241260209
Hannah L Lane, Andrew J Kremyar, Yossef S Ben-Porath, Martin Sellbom
The Minnesota Multiphasic Personality Inventory-3 (MMPI-3) includes a new Impulsivity (IMP) scale, added to broaden the utility of the instrument, that is designed to assess poor impulse control and non-planful behavior. The current study aimed to examine the criterion and incremental validity of the IMP scale. A university student sample (n = 1,440) and a community sample oversampled for externalizing tendencies (n = 231) were used for this purpose, and IMP scores were compared to scores on various well-validated criterion measures of impulsivity and externalizing psychopathology. To examine the scale's incremental validity, hierarchical multiple regression analyses were conducted to determine whether IMP adds to other MMPI-3 Specific Problem (SP) scales in the prediction of relevant criteria. The IMP scale primarily showed meaningful correlations with the Negative Urgency and Positive Urgency scales of the UPPS-P. Significant correlations were also observed with the cognitive, behavioral, disinhibition, and lifestyle domains of various psychopathy measures, as well as measures of antisocial personality disorder and substance use. The IMP scale scores accounted for incremental variance in most of the directly relevant criterion measures above and beyond scores on other MMPI-3 SP scales. Several important caveats, limitations, and future directions are discussed.
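Incremental validity of the kind reported here is usually quantified as the change in R² when the focal scale joins a baseline regression model, tested with an F statistic for the increment. The sketch below does this with simulated scores and hand-computed statistics; the variable names and effect sizes are hypothetical, not MMPI-3 results.

```python
import numpy as np
from scipy.stats import f as f_dist

def r_squared(y, X):
    """R^2 from an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

def delta_r2_test(y, X_base, X_added):
    """F-test for the R^2 increment when X_added joins a baseline model."""
    n = len(y)
    r2_base = r_squared(y, X_base)
    r2_full = r_squared(y, np.column_stack([X_base, X_added]))
    k_base, k_add = X_base.shape[1], X_added.shape[1]
    df2 = n - k_base - k_add - 1
    F = ((r2_full - r2_base) / k_add) / ((1 - r2_full) / df2)
    return r2_full - r2_base, F, f_dist.sf(F, k_add, df2)

# Hypothetical data: criterion = an external impulsivity measure;
# baseline = other Specific Problem scale scores; added = IMP scores.
rng = np.random.default_rng(4)
n = 500
base_scales = rng.normal(size=(n, 3))
imp = 0.5 * base_scales[:, 0] + rng.normal(size=n)
criterion = base_scales @ np.array([0.3, 0.2, 0.1]) + 0.4 * imp + rng.normal(size=n)
d_r2, F, p = delta_r2_test(criterion, base_scales, imp[:, None])
print(f"delta R^2 = {d_r2:.3f}, F = {F:.2f}, p = {p:.4f}")
```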
{"title":"Examining the Criterion and Incremental Validity of the MMPI-3 Impulsivity Scale.","authors":"Hannah L Lane, Andrew J Kremyar, Yossef S Ben-Porath, Martin Sellbom","doi":"10.1177/10731911241260209","DOIUrl":"https://doi.org/10.1177/10731911241260209","url":null,"abstract":"<p><p>The Minnesota Multiphasic Personality Inventory-3 (MMPI-3) includes a new Impulsivity (IMP) scale designed to assess for poor impulse-control and non-planful behavior, which was added to broaden the utility of the instrument. The current study aimed to examine the criterion and incremental validity of the IMP scale. A university student sample (<i>n</i> = 1,440) and a community sample oversampled for externalizing tendencies (<i>n</i> = 231) were used for this purpose, and IMP scores were compared to scores on various well-validated criterion measures of impulsivity and externalizing psychopathology. To examine the scale's incremental validity, hierarchical multiple regression analyses were conducted to determine whether IMP adds to other MMPI-3 Specific Problem (SP) scales in the prediction of relevant criteria. The IMP scale primarily showed meaningful correlations with the Negative Urgency and Positive Urgency on the UPPS-P. Significant correlations were also observed with the cognitive, behavioral, disinhibition, and lifestyle domains of various psychopathy measures, as well as measures of antisocial personality disorder and substance use. The IMP scale scores accounted for incremental variance in most of the directly relevant criterion measures above and beyond scores of other MMPI-3 SP scales. Several important caveats, limitations, and future directions are discussed.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141756828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Imperfect Yet Valuable Difficulties in Emotion Regulation Scale: Factor Structure, Dimensionality, and Possible Cutoff Score.
Pub Date: 2024-07-26 | DOI: 10.1177/10731911241261168
Chen Erez, Ilanit Gordon
The Difficulties in Emotion Regulation Scale (DERS) is frequently used to assess emotion regulation (ER) capabilities. Although it was originally designed as a multidimensional scale, many researchers use its total score, without clear recommendations for doing so. We aimed to explore the DERS's structure, dimensionality, and utility and provide clinicians and researchers with clear guidelines. Self-report data on ER, personality, psychopathology, and life satisfaction were collected from 502 adults. Seventy of them also participated in a lab study evaluating group interactions, which included additional self-report measures and physiological monitoring. Findings favored the correlated-traits and bifactor models, with the latter excelling in direct comparisons. The total score was found reliable and valid, explaining 53.3% of the variance, although a distinct emotional awareness subfactor suggested that the solution is not purely unidimensional. A cutoff score of 95 identified significant ER difficulties, linked to psychopathology. We thus recommend using the DERS total score with a cutoff of 95, while calling for further validation in diverse and clinical samples.
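When weighing a total score against subscale scores for a bifactor solution such as the one favored here, researchers often report omega hierarchical and explained common variance computed from the standardized loadings. The sketch below shows those two standard formulas with illustrative loadings; the numbers are not the DERS estimates from this study.

```python
import numpy as np

def omega_hierarchical(general, specifics, uniquenesses):
    """Omega hierarchical: proportion of total-score variance attributable
    to the general factor in a bifactor model (standardized loadings)."""
    total_var = (np.sum(general) ** 2
                 + sum(np.sum(sp) ** 2 for sp in specifics)
                 + np.sum(uniquenesses))
    return np.sum(general) ** 2 / total_var

def explained_common_variance(general, specifics):
    """ECV: share of common variance carried by the general factor."""
    common = np.sum(general ** 2) + sum(np.sum(sp ** 2) for sp in specifics)
    return np.sum(general ** 2) / common

# Illustrative bifactor loadings: 9 items, one general factor, 3 specific factors.
g = np.full(9, 0.6)
s = [np.array([0.4, 0.4, 0.4]), np.array([0.3, 0.3, 0.3]), np.array([0.5, 0.5, 0.5])]
u = 1 - g**2 - np.concatenate([sp**2 for sp in s])   # item uniquenesses
print(f"omega_h = {omega_hierarchical(g, s, u):.2f}, "
      f"ECV = {explained_common_variance(g, s):.2f}")
```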
{"title":"The Imperfect Yet Valuable Difficulties in Emotion Regulation Scale: Factor Structure, Dimensionality, and Possible Cutoff Score.","authors":"Chen Erez, Ilanit Gordon","doi":"10.1177/10731911241261168","DOIUrl":"https://doi.org/10.1177/10731911241261168","url":null,"abstract":"<p><p>The Difficulties in Emotion Regulation Scale (DERS) is frequently used to assess emotion regulation (ER) capabilities. Originally a multidimensional scale, many utilize its total score, without clear recommendations. We aimed to explore the DERS's structure, dimensionality, and utility and provide clinicians and researchers with clear guidelines. Self-report data on ER, personality, psychopathology, and life satisfaction were collected from 502 adults. Seventy also participated in a lab study evaluating group interactions, which included additional self-report and physiological monitoring. Findings suggested favoring the correlated-traits and bifactor models, the latter excelling in direct comparisons. The total score was found reliable and valid, explaining 53.3% of the variance, with a distinct emotional awareness subfactor, suggesting a non-pure unidimensional solution. A cutoff score of 95 identified significant ER difficulties, linked to psychopathology. We thus recommend using the DERS's total score and 95 as its cutoff, while calling for further validation in diverse and clinical samples.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141756830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development and Validation of the Ease of Imagery Questionnaire.
Pub Date: 2024-07-26 | DOI: 10.1177/10731911241260233
Sarah E Williams, Thomas A Fergus, Annie T Ginty
The present series of studies aimed to develop and provide initial validation of the Ease of Imagery Questionnaire (EIQ), a measure assessing the ease of imaging different positive and negative imagery content, reflecting valence and engaging or disengaging in adverse situations. Five studies were conducted to collectively examine the questionnaire's factor structure and concurrent validity. Study 1 (N = 336) and Study 2 (N = 207) informed the development of the 16 EIQ items, with a four-factor structure supported in Studies 3 (N = 219), 4 (N = 135), and 5 (N = 184) using confirmatory factor analysis. Study 3 also supported concurrent validity through significant bivariate correlations (p < .05) with the conceptually similar Sport Imagery Ability Questionnaire subscales, while Studies 4 and 5 demonstrated criterion validity in the EIQ's prediction of challenge and threat appraisal tendencies, perceived stress, stress mindset, and anxiety and depressive symptoms. Overall, the EIQ demonstrates a replicable four-factor structure and appears to assess the ability to image content associated with positive and negative emotions as well as demanding stress-evoking situations.
{"title":"Development and Validation of the Ease of Imagery Questionnaire.","authors":"Sarah E Williams, Thomas A Fergus, Annie T Ginty","doi":"10.1177/10731911241260233","DOIUrl":"https://doi.org/10.1177/10731911241260233","url":null,"abstract":"<p><p>The present series of studies aimed to develop and provide initial validation of the Ease of Imagery Questionnaire (EIQ)-a measure assessing ease of imaging different positive and negative imagery content reflective of valence and engaging or disengaging in adverse situations. Five studies were conducted to collectively examine the questionnaire's factor structure and concurrent validity. Study 1 (<i>N</i> = 336) and Study 2 (<i>N</i> = 207) informed the development of 16 items of the EIQ, with a four-factor structure supported in Studies 3 (<i>N</i> = 219), 4 (<i>N</i> = 135), and 5 (<i>N</i> = 184) using confirmatory factor analysis. Study 3 also supported concurrent validity with significant bivariate correlations (<i>p</i> < .05) with the similar Sport Imagery Ability Questionnaire subscales, while studies 4 and 5 demonstrated criterion validity in the EIQ's prediction of challenge and threat appraisal tendencies, perceived stress, stress mindset, and anxiety and depressive symptoms. Overall, the EIQ demonstrates a replicable four-factor structure and appears to assess ability to image content associated with positive and negative emotions as well as demanding stress-evoking situations.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141756884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}