Development and Validation of the Motives for Using Substances for Trauma Coping (MUST-Cope) Questionnaire: A Novel Multidimensional Scale to Assess Trauma-Specific Substance Use Coping Motives
Pub Date: 2025-12-26 | DOI: 10.1177/10731911251403897
Kelly E Dixon, Andrew Lac
The present study sought to develop and validate a novel multidimensional assessment of substance use (SU) coping motives to manage trauma symptoms. In Study 1 (N = 326 trauma-exposed adults recruited from several online platforms), a set of questionnaire items was created and administered, and exploratory factor analysis was performed. A correlated four-factor structure emerged, comprising cognitive-affective motives, physiological motives, sleep motives, and social motives. In Study 2 (N = 261 trauma-exposed adults recruited from ResearchMatch), confirmatory factor analysis cross-validated the correlated four-factor structure and additionally tested a five-factor higher-order structure. In tests of convergent, discriminant, and criterion validity, the subscales demonstrated differential correlations with previously validated measures of SU motives and correlated positively with PTSD symptom severity, functional impairment, and alcohol and drug use severity. The final 31-item Motives for Using Substances for Trauma Coping (MUST-Cope) Questionnaire offers a novel multifactorial measurement instrument to help researchers and clinicians assess and identify functional coping motives for SU that can be targeted in psychosocial treatment.
{"title":"Development and Validation of the Motives for Using Substances for Trauma Coping (MUST-Cope) Questionnaire: A Novel Multidimensional Scale to Assess Trauma-Specific Substance Use Coping Motives.","authors":"Kelly E Dixon, Andrew Lac","doi":"10.1177/10731911251403897","DOIUrl":"https://doi.org/10.1177/10731911251403897","url":null,"abstract":"<p><p>The present study sought to develop and validate a novel multidimensional assessment of substance use (SU) coping motives to manage trauma symptoms. In Study 1 (<i>N</i> = 326 trauma-exposed adults recruited from several online platforms), a set of questionnaire items was created and administered, and exploratory factor analysis was performed. A correlated four-factor structure represented by cognitive-affective motives, physiological motives, sleep motives, and social motives emerged. In Study 2 (<i>N</i> = 261 trauma-exposed adults recruited from ResearchMatch), confirmatory factor analysis cross-validated the correlated four-factor structure and additionally tested a five-factor higher-order structure. In tests of convergent, discriminant, and criterion validities, the subscales demonstrated differential correlations with previously validated measures of SU motives and positively correlated with higher PTSD symptom severity, functional impairment, and alcohol and drug use severity. The final 31-item Motives for Using Substances for Trauma Coping (MUST-Cope) Questionnaire offers a novel multifactorial measurement instrument to help researchers and clinicians assess and identify functional coping motives for SU that can be targeted in psychosocial treatment.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":" ","pages":"10731911251403897"},"PeriodicalIF":3.4,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145832998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One Construct or Many? Clarifying the Structure and Meaning of Measures of Psychological and Cognitive Flexibility and Their Components in a Community and Chronic Pain Sample
Pub Date: 2025-12-26 | DOI: 10.1177/10731911251399030
Jayden Lucas, Jeffery M Lackner, Gregory Gudleski, Andrew H Rogers, Rodrigo Becerra, Kristin Naragon-Gainey
There is a plethora of "flexibility" constructs and measures in psychology, but the extent to which they assess the same or different constructs, and whether flexibility and inflexibility are separate constructs (vs. extremes of the same bipolar continuum), remains underexplored. We examined the distinctiveness of seven self-report measures of psychological (in)flexibility and cognitive flexibility using an online community sample (N = 465) and a chronic pain sample (N = 445). We analyzed the latent structure of these questionnaires using item-level exploratory structural equation modeling that controlled for measure-specific variance, and we tested these factors in relation to a range of mental health outcomes (concurrent validity) and discriminant validity measures. Findings indicate that psychological and cognitive flexibility questionnaires can be characterized at multiple levels, including six lower-order components that span individual measures and global factors that account for their shared variance. The six factors were broadly and uniquely associated with clinically relevant variables, including symptoms and well-being. We also found support for the notion that flexibility and inflexibility lie on a single bipolar continuum rather than constituting separate constructs. Implications for clinical assessment in research and intervention settings are discussed.
{"title":"One Construct or Many? Clarifying the Structure and Meaning of Measures of Psychological and Cognitive Flexibility and Their Components in a Community and Chronic Pain Sample.","authors":"Jayden Lucas, Jeffery M Lackner, Gregory Gudleski, Andrew H Rogers, Rodrigo Becerra, Kristin Naragon-Gainey","doi":"10.1177/10731911251399030","DOIUrl":"https://doi.org/10.1177/10731911251399030","url":null,"abstract":"<p><p>There are a plethora of \"flexibility\" constructs and measures in psychology, but the extent to which they assess the same or different constructs, and whether flexibility and inflexibility are separate constructs (vs. extremes of the same bipolar continuum), remains underexplored. We examined the distinctiveness of seven different self-report measures of psychological (in)flexibility and cognitive flexibility using an online community (<i>N</i> = 465) and a chronic pain sample (<i>N</i> = 445). We analyzed the latent structure of these questionnaires using item-level exploratory structural equation modeling that controlled for measure-specific variance, and we tested these factors in relation to a range of mental health outcomes (concurrent validity) and discriminant validity measures. Findings indicate that psychological and cognitive flexibility questionnaires can be characterized at multiple levels, including six lower-order components that span individual measures and global factors that account for their shared variance. The six factors were broadly and uniquely associated with clinically relevant variables, including symptoms and well-being. We also found support for the notion that flexibility and inflexibility exist on a single bipolar continuum, rather than being characterized as separate. Implications for clinical assessment in research and intervention settings are discussed.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":" ","pages":"10731911251399030"},"PeriodicalIF":3.4,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145833058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Psychometric Analysis of the Military Stigma Scale
Pub Date: 2025-12-25 | DOI: 10.1177/10731911251398039
Samantha Cacace, Robert J Cramer, Max Stivers, Raymond P Tucker, Marcus VanSickle
U.S. military populations experience higher rates of mental health concerns, including post-traumatic stress disorder, clinical depression, and suicide, than their civilian counterparts, yet tend to access mental health services at a lower rate. Military health scholars have noted that stigma against mental health help-seeking has multiple sources, including professional, personal, and social components, though these components are rarely separated when examining why military service members avoid clinical help. Valid measurement of these stigma components is necessary to address rising clinical needs. The current study replicates and extends prior work applying a bifactor model to the Military Stigma Scale (MSS). In a sample of n = 1,832 Army National Guard members, a bifactor model showed acceptable fit, though invariance testing by rank and education indicated that differing experiences of military service act as deviating influences. Specifically, Private Stigma was significantly lower among higher-paygrade service members and those with a college degree, while Public Stigma was higher. Results call into question the theoretical viability of a bifactor model of the MSS, especially in light of the Expected Common Variance and specific-factor reliability estimates.
{"title":"A Psychometric Analysis of the Military Stigma Scale.","authors":"Samantha Cacace, Robert J Cramer, Max Stivers, Raymond P Tucker, Marcus VanSickle","doi":"10.1177/10731911251398039","DOIUrl":"https://doi.org/10.1177/10731911251398039","url":null,"abstract":"<p><p>U.S. military populations experience a high level of mental health concerns, including post-traumatic stress disorder, clinical depression, and suicide, when compared with their civilian counterparts, and tend to access mental health services at a lower rate. Military health scholars have noted that stigma against mental health help-seeking has multiple sources, including professional, personal, and social components, though these components are rarely separated in examining why military service members avoid clinical help. Valid measurement of these factors is necessary to examine the heart of rising clinical needs. The current study replicates and extends prior work applying a bifactor model to the Military Stigma Scale (MSS). In a sample of <i>n</i> = 1,832 Army National Guard members, a bifactor model presented acceptable fit, though invariance testing by rank and education indicates disparate experiences with military service as deviating influences. Specifically, <i>Private Stigma</i> was significantly lower in higher paygrade service members and those with a college degree, while <i>Public Stigma</i> was higher. Results call into question the theoretical viability of a bifactor model of the MSS, especially in the evaluation of Expected Common Variance and specific factor reliability.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":" ","pages":"10731911251398039"},"PeriodicalIF":3.4,"publicationDate":"2025-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145832978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Embedded Validity Scales to Examine Caregiver Response Styles When Measuring Infant/Toddler Developmental Status
Renee Lajiness-O'Neill, Michelle Lobermeier, Angela D Staples, Annette Richard, Alissa Huth-Bocks, Seth Warschausky, H Gerry Taylor, Natasha Lang, Angela Lukomski, Laszlo Erdodi
Pub Date: 2025-12-22 | DOI: 10.1177/10731911251391563
Validity of caregiver responses to a developmental screening instrument was examined in 571 caregivers (51.7% identifying as ethnic minority) of infants/toddlers (48% female) assessed longitudinally from birth to 18 months. Three embedded validity scales were designed to detect atypical (ATP), negative (NRS), and positive (PRS) response styles. We examined rates of responding on the ATP, NRS, and PRS scales relative to established validity measures, the temporal stability (test-retest reliability) of the scales, and relations between response styles and maternal education. Response bias was low; however, significant differences due to maternal education were evident. More variable scores (ATP) and more advanced development (PRS) were consistently reported by caregivers with lower education, whereas caregivers with higher education reported their infants' development as less advanced (NRS). Base rates of uncommon responding ranged from 11.6% to 14.4% and from 5.8% to 9.1% at liberal and conservative cut scores, respectively. Preliminary analysis of additional social-contextual sources of variation in response styles (e.g., caregiver mental health) suggests the need for complex modeling of multiple sources of bias in caregiver-reported developmental outcomes. These are the first embedded validity scales designed within a caregiver-reported instrument of infant/toddler development.
Psychometric Properties of the Child and Adolescent PsychProfiler: Self-Report Form
Pub Date: 2025-12-16 | DOI: 10.1177/10731911251398037
Rapson Gomez, Shane Langsford, Stephen Houghton, Leila Karimi
The Child and Adolescent PsychProfiler version 5 (CAPP v5) is a comprehensive multi-informant screening measure encompassing 17 symptom scales that map onto 14 Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5) disorders. The self-report form (CAPP-SRF) has not previously undergone a comprehensive psychometric evaluation. The objective of the study was to analyze the internal structure (Independent Clusters Model of Confirmatory Factor Analysis [ICM-CFA]), reliability (α, ω), and validity evidence (discriminant, convergent, criterion-related) of the CAPP-SRF. Study 1 examined the 17-factor model within a community sample of 790 adolescents (M = 14.48 years). Study 2 evaluated convergent, criterion-related, and discriminant validity in a clinic-referred sample of 173 adolescents (M = 14.50 years) using the Conners 3-SR, Beck Youth Inventories, Second Edition (BYI-2), Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V), and Wechsler Individual Achievement Test, Third Edition (WIAT-III). Independent-samples t tests compared CAPP-SRF means across samples. The ICM-CFA analysis confirmed the 17-factor structure (χ²/df = 3.02; standardized root mean square residual [SRMR] = .076). Scale reliability was acceptable (ω = .79-.89). Clinic participants scored significantly higher than community participants on 15 of the 17 scales (all p < .001; d = .55-1.20), supporting criterion validity. Convergent and discriminant patterns with external measures were as hypothesized (|r| = .32-.68; R² = .10-.46). The CAPP-SRF demonstrates robust psychometric properties and complements the parent- and teacher-report forms as an effective adolescent self-report screener for common DSM-5 disorders.
{"title":"Psychometric Properties of the Child and Adolescent PsychProfiler: Self-Report Form.","authors":"Rapson Gomez, Shane Langsford, Stephen Houghton, Leila Karimi","doi":"10.1177/10731911251398037","DOIUrl":"https://doi.org/10.1177/10731911251398037","url":null,"abstract":"<p><p>The Child and Adolescent PsychProfiler version 5 (CAPP v5) is a comprehensive multi‑informant screening measure encompassing 17 symptom scales that map onto 14 <i>Diagnostic and Statistical Manual of Mental Disorders</i> (5th ed.; <i>DSM-5</i>) disorders. The self‑report form (CAPP‑SRF) has not previously undergone a comprehensive psychometric evaluation. The objective of the study is to analyze the internal structure (Independent Clusters Model of Confirmatory Factor Analysis [ICM‑CFA]), reliability (α, ω), and validity evidence (discriminant, convergent, criterion‑related) of the CAPP‑SRF. Study 1 examined the 17‑factor model within a community sample of 790 adolescents (<i>M</i> = 14.48 years). Study 2 evaluated convergent, criterion‑related, and discriminant validity in a clinic‑referred sample of 173 adolescents (<i>M</i> = 14.50 years) utilizing the Conners 3‑SR, Beck Youth Inventories, Second Edition (BYI‑2), Wechsler Intelligence Scale for Children-Fifth Edition (WISC‑V), and Wechsler Individual Achievement Test, Third Edition (WIAT‑III). Independent‑samples <i>t</i> tests compared CAPP‑SRF means across samples. The ICM‑CFA analysis confirmed the 17‑factor structure (χ²/<i>df</i> = 3.02; standardized root mean squared error [SRMR] = .076). Scale reliability was acceptable (ω = .79-.89). Clinic participants scored significantly higher than community participants on 15 of the 17 scales (all <i>p</i> < .001; <i>d</i> = .55-1.20), supporting criterion validity. Convergent and discriminant patterns with external measures were as hypothesized (|<i>r</i>| = .32-.68; <i>R</i><sup>2</sup> = .10-.46). The CAPP‑SRF demonstrates robust psychometric properties and complements the parent‑ and teacher‑report forms as an effective adolescent self‑report screener for common <i>DSM‑5</i> disorders.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":" ","pages":"10731911251398037"},"PeriodicalIF":3.4,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145766845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cultural Validity of the Mental Help Seeking Attitudes Scale Among Three Racially Minoritized Groups With Chronic Pain in the United States
Pub Date: 2025-12-09 | DOI: 10.1177/10731911251394290
Jessica L Fossum, Melissa-Ann Lagunas, Emi Ichimura, Cammy Widman, Elizabeth Mateer, Koji Tohmon, Joel Jin
The Mental Help Seeking Attitudes Scale (MHSAS) is commonly used in psychological research. This nine-item unidimensional scale was designed to measure how favorable or unfavorable respondents' attitudes are toward seeking help from a mental health professional and was originally validated in a primarily White sample. To address the potential limitations of using this scale cross-culturally, we recruited participants with chronic pain who identified as Asian American (n = 161), Black American (n = 259), and Latine American (n = 259) to complete the MHSAS, and we then ran confirmatory factor analyses. The original measure validation demonstrated excellent overall model fit; however, all three of our non-White samples showed only adequate overall model fit and factor loadings. An exploratory bifactor analysis nonetheless supported the unidimensional structure of the scale. These findings suggest that the MHSAS should be used cautiously in cross-cultural contexts with racially minoritized groups experiencing chronic pain.
{"title":"Cultural Validity of the Mental Help Seeking Attitudes Scale Among Three Racially Minoritized Groups With Chronic Pain in the United States.","authors":"Jessica L Fossum, Melissa-Ann Lagunas, Emi Ichimura, Cammy Widman, Elizabeth Mateer, Koji Tohmon, Joel Jin","doi":"10.1177/10731911251394290","DOIUrl":"https://doi.org/10.1177/10731911251394290","url":null,"abstract":"<p><p>The Mental Help Seeking Attitudes Scale (MHSAS) is commonly used in psychological research. This nine-item unidimensional scale was designed to measure how favorable or unfavorable respondents' attitudes are toward seeking help from a mental health professional and was originally validated using a primarily White sample. To address the potential limitations of using this scale cross-culturally, we recruited participants who identified as Asian American (<i>n</i> = 161), Black American (<i>n</i> = 259), and Latine American (<i>n</i> = 259) to take the MHSAS, and then we ran confirmatory factor analyses. Our samples also all consisted of individuals with chronic pain. The original measure validation demonstrated excellent overall model fit; however, all three of our non-White samples had only adequate overall model fit and factor loading values. An exploratory bifactor analysis still confirmed the unidimensional structure of the scale. These findings suggest that the MHSAS should be used cautiously in cross-cultural contexts with racially minoritized groups experiencing chronic pain.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":" ","pages":"10731911251394290"},"PeriodicalIF":3.4,"publicationDate":"2025-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145707109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Item Response Theory Analysis and Differential Item Functioning of the Social Appearance Anxiety Scale
Pub Date: 2025-12-01 | Epub Date: 2024-12-23 | DOI: 10.1177/10731911241306370
Tapan A Patel, Morgan Robison, Jesse R Cougle
This study examined the item- and scale-level functioning of the Social Appearance Anxiety Scale (SAAS), as well as differential functioning by gender, using an item response theory (IRT) analysis. SAAS data collected from 840 college students were analyzed. A graded response model was used to analyze the 16 items comprising the SAAS. The measure was found to be unidimensional in its factor structure, and every item demonstrated high to very high ability to differentiate respondents varying in levels of the underlying trait (i.e., appearance concerns). In addition, we found evidence of differential item functioning (DIF) by gender for four items, corresponding to small effect sizes. Two of these items reflected internal experiences of appearance concerns (e.g., nervousness and discomfort when a flaw is noticed by others) and were more likely to be endorsed by women, and two reflected external evaluative experiences related to appearance (e.g., missing opportunities and life being more difficult) and were more likely to be endorsed by men. Overall, the IRT and DIF results suggest that the SAAS effectively identifies individuals across low to very high levels of appearance concerns.
{"title":"Item Response Theory Analysis and Differential Item Functioning of the Social Appearance Anxiety Scale.","authors":"Tapan A Patel, Morgan Robison, Jesse R Cougle","doi":"10.1177/10731911241306370","DOIUrl":"10.1177/10731911241306370","url":null,"abstract":"<p><p>This study examined the item- and scale-level functioning of the Social Appearance Anxiety Scale (SAAS) as well as differential functioning by gender using an item response theory (IRT) analysis. SAAS data collected from 840 college students were analyzed. A graded response model was used to analyze the 16 items comprising the SAAS. The measure was found to be unidimensional in its factor structure, and every item demonstrated high to very high ability to differentiate respondents varying in levels of the underlying trait (i.e., appearance concerns). In addition, we found evidence of differential item functioning (DIF) by gender for four items, corresponding to small effect sizes. Two of these items were related to internal experiences of appearance concerns (e.g., nervousness and discomfort when a flaw is noticed by others) that were more likely to be endorsed by women, and two of the items were related to external evaluative experiences related to appearance (e.g., missing opportunities and life being more difficult) that were more likely to be endorsed by men. Overall, the IRT and DIF results suggest that the SAAS effectively identifies appearance concerns among individuals with low to very high appearance concerns.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":" ","pages":"1293-1305"},"PeriodicalIF":3.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12183318/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142876050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Meaning in Life Questionnaire: Revisiting the Evidence of Validity and Measurement Invariance Using the Exploratory Structural Equation Modeling
Pub Date: 2025-12-01 | Epub Date: 2024-12-26 | DOI: 10.1177/10731911241304223
Veljko Jovanović, Mihajlo Ilić, Dušana Šakan, Ingrid Brdar
The Meaning in Life Questionnaire (MLQ) assesses two distinct dimensions of meaning in life: presence of meaning and search for meaning. The MLQ is the most widely used instrument for measuring meaning in life, yet validity evidence for the originally proposed two-factor confirmatory factor analysis (CFA) solution remains limited in scope. In this light, the present research examined, across five studies (total N = 3,205), several aspects of the MLQ's validity and tested cross-gender and cross-national measurement invariance. We also examined the usefulness of an exploratory structural equation model (ESEM) of the MLQ as an alternative to the standard CFA model. The results provide evidence for: (a) the validity (structural, convergent, concurrent, and incremental) of the MLQ ESEM factors; (b) full scalar invariance of the MLQ ESEM model across gender and partial measurement invariance across four countries; and (c) similar cross-national relationships between MLQ ESEM factors and measures of depression and life satisfaction. The present research supports the value of the ESEM framework for overcoming limitations of the CFA model when examining evidence on the MLQ's validity.
{"title":"The Meaning in Life Questionnaire: Revisiting the Evidence of Validity and Measurement Invariance Using the Exploratory Structural Equation Modeling.","authors":"Veljko Jovanović, Mihajlo Ilić, Dušana Šakan, Ingrid Brdar","doi":"10.1177/10731911241304223","DOIUrl":"10.1177/10731911241304223","url":null,"abstract":"<p><p>The Meaning in Life Questionnaire (MLQ) assesses two distinct dimensions of meaning in life: presence of meaning and search for meaning. The MLQ is the most widely used instrument for measuring meaning in life, yet there is a limited variety of validity evidence on the originally proposed two-factor confirmatory factor analysis (CFA) solution. In this light, the present research examined, across five studies (total <i>N</i> = 3,205), several aspects of the MLQ's validity and tested cross-gender and cross-national measurement invariance. We also examined the usefulness of the exploratory structural equation model (ESEM) of the MLQ as an alternative to the standard CFA model. The results obtained provide evidence for: (a) the validity (structural, convergent, concurrent, and incremental) of the MLQ ESEM factors; (b) full scalar invariance of the MLQ ESEM model across gender and partial measurement invariance across four countries; and (c) similar cross-national relationships between MLQ ESEM factors and measures of depression and life satisfaction. The present research provides support for the value of applying the ESEM framework in overcoming limitations of the CFA model when examining evidence on the MLQ's validity.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":" ","pages":"1274-1292"},"PeriodicalIF":3.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142891519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Taxometric Analysis and External Validation of the Latent Structure of Student Risks and Needs
Pub Date: 2025-12-01 | Epub Date: 2024-12-07 | DOI: 10.1177/10731911241299499
David K Marcus, Paul S Strand, Brian F French
The present study applied taxometric analyses to the Washington Assessment of Risks and Needs of Students (WARNS), an instrument designed to assess multiple domains of functioning related to justice system involvement arising from school disengagement, a trajectory referred to as the school-to-prison pipeline. Previous taxometric studies of constructs related to juvenile justice system involvement found dimensional rather than taxonic (dichotomous) latent structures. Participants were 5,008 students from 89 Washington school districts who completed the WARNS as part of standard educational practices. The results were uniformly consistent with a dimensional latent structure. Also supporting a dimensional latent structure, dichotomized WARNS scores were significantly less strongly associated with student arrests, school suspensions, and school skip days than continuous WARNS scores. These findings support the dimensionality of risk and needs and have implications for assessments undertaken to improve school and social outcomes for at-risk youth.
{"title":"A Taxometric Analysis and External Validation of the Latent Structure of Student Risks and Needs.","authors":"David K Marcus, Paul S Strand, Brian F French","doi":"10.1177/10731911241299499","DOIUrl":"10.1177/10731911241299499","url":null,"abstract":"<p><p>The present study applied taxometric analyses to the Washington Assessment of Risks and Needs of Students (WARNS)-an instrument designed to assess multiple domains of functioning related to justice system involvement arising from school disengagement-a trajectory referred to as <i>the school to prison pipeline</i>. Previous taxometric studies of constructs related to juvenile justice system involvement found dimensional rather than taxonic (dichotomous) latent structures. Participants were 5008 students from 89 Washington school districts who completed the WARNS as part of standard educational practices. The results were uniformly consistent with a dimensional latent structure. Also supporting a dimensional latent structure, dichotomized WARNS scores were significantly less strongly associated with student arrests, school suspensions, and school skip days than continuous WARNS scores. These findings support the dimensionality of risk and needs and have implications for assessments undertaken to improve school and social outcomes for at-risk youth.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":" ","pages":"1265-1273"},"PeriodicalIF":3.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142791010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Completion Rates of Smart Technology Ecological Momentary Assessment (EMA) in Populations With a Higher Likelihood of Cognitive Impairment: A Systematic Review and Meta-Analysis
Pub Date: 2025-12-01 | Epub Date: 2025-01-03 | DOI: 10.1177/10731911241306364
Kate Fifield, Kanyakorn Veerakanjana, John Hodsoll, Jonna Kuntsi, Charlotte Tye, Sara Simblett
Ecological momentary assessment using smartphone technology (smart EMA) has grown substantially over the last decade. However, little is known about the factors associated with completion rates in populations with a higher likelihood of cognitive impairment. A systematic review of smart EMA studies in such populations was carried out (PROSPERO ref. CRD42022375829). Smartphone EMA studies in neurological, neurodevelopmental, and neurogenetic conditions were included. Six databases were searched, and bias was assessed using Egger's test. Completion rates and moderators were analyzed using meta-regression. Fifty-five cohorts were included, 18 of which reported confirmed cognitive impairment. In the overall sample, the completion rate was 74.4%, and EMA protocol characteristics moderated completion rates. Participants with cognitive impairment had significantly lower completion rates than those without (p = .021). There were no significant moderators within the cognitive impairment group. Limitations included significant methodological issues in the reporting of completion rates, sample characteristics, and associations with completion and dropout rates. These findings indicate that smart EMA is feasible for people with cognitive impairment. Future research should focus on the efficacy of using smart EMA within populations with cognitive impairment to develop an appropriate methodological evidence base.
{"title":"Completion Rates of Smart Technology Ecological Momentary Assessment (EMA) in Populations With a Higher Likelihood of Cognitive Impairment: A Systematic Review and Meta-Analysis.","authors":"Kate Fifield, Kanyakorn Veerakanjana, John Hodsoll, Jonna Kuntsi, Charlotte Tye, Sara Simblett","doi":"10.1177/10731911241306364","DOIUrl":"10.1177/10731911241306364","url":null,"abstract":"<p><p>Ecological Momentary Assessment using smartphone technology (smart EMA) has grown substantially over the last decade. However, little is known about the factors associated with completion rates in populations who have a higher likelihood of cognitive impairment. A systematic review of Smart EMA studies in populations who have a higher likelihood of cognitive impairment was carried out (PROSPERO; ref no CRD42022375829). Smartphone EMA studies in neurological, neurodevelopmental and neurogenetic conditions were included. Six databases were searched, and bias was assessed using Egger's test. Completion rates and moderators were analyzed using meta-regression. Fifty-five cohorts were included with 18 cohorts reporting confirmed cognitive impairment. In the overall cohort, the completion rate was 74.4% and EMA protocol characteristics moderated completion rates. Participants with cognitive impairment had significantly lower completion rates compared with those without (<i>p</i> = .021). There were no significant moderators in the cognitive impairment group. Limitations included significant methodological issues in reporting of completion rates, sample characteristics, and associations with completion and dropout rates. These findings conclude that smart EMA is feasible for people with cognitive impairment. Future research should focus on the efficacy of using smart EMA within populations with cognitive impairment to develop an appropriate methodological evidence base.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":" ","pages":"1175-1194"},"PeriodicalIF":3.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12579720/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142920421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}