Pub Date: 2026-03-01 | Epub Date: 2025-03-21 | DOI: 10.1177/10731911251320050
Anna Turek, Marcin Zajenkowski, Radosław Rogoza, Marta Rogoza, Gilles E Gignac
Recent advancements in the theory of narcissism emphasize that it is a multidimensional construct with three distinct facets: agentic, antagonistic, and neurotic. Although this model has been extensively studied and supported in adults, there is a lack of instruments assessing the multidimensional structure of narcissism in children. In response to this gap in the literature, we aimed to introduce a new measure of three-dimensional narcissism in children. In three studies among children aged 8 to 10 years (N = 189, N = 235, N = 163), we found support for the three-factor structure of narcissism, and the new measure showed respectable reliability and validity. Agentic narcissism correlated positively with self-enhancement values, agentic attributes, and self-esteem. Neurotic narcissism correlated negatively with self-esteem. Finally, antagonistic narcissism was associated negatively with self-transcendence values and positively with self-enhancement values. In conclusion, we propose a 12-item measure distinguishing agentic, antagonistic, and neurotic narcissism in children.
Title: Three-Dimensional Narcissism Scale for Children: Structure, Reliability, and Construct Validity. (Assessment, pp. 275-286)
Pub Date: 2026-03-01 | Epub Date: 2025-03-26 | DOI: 10.1177/10731911251326379
Keefe J Maccarone, Andrew J Kremyar, Martin Sellbom, Yossef S Ben-Porath
In the current literature on compulsivity, it is unclear whether this construct is best conceptualized as an internalizing disorder, a fear disorder, a thought disorder, or some combination of the three. The Compulsivity (CMP) scale introduced with the MMPI-3 assesses compulsive behaviors. To address the question of compulsivity's placement within a hierarchical psychopathology structure, the current study used confirmatory factor analyses to examine the degree to which CMP scores share variance with internalizing, fear, and thought dysfunction factors. Results indicated that a model in which CMP scores cross-loaded onto latent fear and thought dysfunction factors fit better than a model in which CMP scores cross-loaded onto a higher-order internalizing factor and a thought dysfunction factor. Constraining the cross-loadings of CMP scores onto the fear and thought dysfunction factors to equality caused no significant decrement in fit. These findings indicate that the MMPI-3 CMP scale measures both fear and thought dysfunction. Implications and limitations of these findings and future research directions are discussed.
Title: The Placement of the MMPI-3 Compulsivity (CMP) Scale Within a Hierarchical Structure of Psychopathology. (Assessment, pp. 191-203)
Pub Date: 2026-03-01 | Epub Date: 2025-03-12 | DOI: 10.1177/10731911251321929
Benjamin C Darnell, Maya Bina N Vannini, Antonio Morgan-López, Stephanie E Brown, Breanna Grunthal, Willie J Hale, Stacey Young-McCaughan, Peter T Fox, Donald D McGeary, Patricia A Resick, Denise M Sloan, Daniel J Taylor, Richard P Schobitz, Christian C Schrader, Jeffrey S Yarvis, Terence M Keane, Alan L Peterson, Brett T Litz
The posttraumatic stress disorder (PTSD) Checklist for Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5; PCL-5) was designed and validated to track symptoms over the past month (PCL-5-M), yet an untested ad hoc weekly version (PCL-5-W) is commonly used to track changes during treatment. We used archival data from clinical trials of PTSD treatment in veterans to assess the construct validity of the PCL-5-W. Both the PCL-5-M and PCL-5-W showed configural measurement invariance across four consecutive administrations. The results also indicated at least partial metric and scalar invariance for each version. The reliability estimates of the PCL-5-M and PCL-5-W at each time point were equivalent. However, we found a discrepancy with regard to concurrent validity; correlations with the nine-item Patient Health Questionnaire may be meaningfully different between the PCL-5-M and PCL-5-W. Nevertheless, overall, the results suggest that the PCL-5-W can be validly used to assess PTSD symptoms over time, but factor scores may need to be tracked alongside total scores to address validity concerns.
Title: Psychometric Evaluation of the Weekly Version of the PTSD Checklist for DSM-5. (Assessment, pp. 221-240)
Pub Date: 2026-03-01 | Epub Date: 2025-03-29 | DOI: 10.1177/10731911251326371
Adam P Natoli
The validity and utility of translated instruments (psychological measures) depend on the quality of their translation, and differences in key linguistic characteristics could introduce bias. Likewise, linguistic differences between instruments designed to measure analogous constructs might contribute to similar instruments possessing dissimilar psychometrics. This article introduces and demonstrates the use of natural language processing (NLP), a subfield of artificial intelligence, to linguistically analyze 13 translations of two psychological measures previously translated into numerous languages. NLP was used to generate estimates reflecting specific linguistic characteristics of test items (emotional tone/intensity, sentiment, valence, arousal, and dominance), which were then compared across translations at both the test- and item-level, as well as between the two instruments. Results revealed that key linguistic characteristics can profoundly vary both within and between tests. Following a discussion of results, the current limitations of this approach are summarized and strategies for advancing this methodology are proposed.
Title: Leveraging Artificial Intelligence to Linguistically Compare Test Translations: A Methodological Introduction and Demonstration. (Assessment, pp. 163-177)
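The comparison described in this abstract reduces, at its core, to scoring each item on a linguistic dimension and comparing the scores across translations. A minimal sketch of that idea follows; the word-level valence lexicon and the example items are invented placeholders, and the study itself used NLP models rather than a fixed word list:

```python
# Toy stand-in for NLP-derived linguistic estimates: score items on one
# characteristic (valence) and compare two translations item by item.
# The VALENCE lexicon and all items below are hypothetical illustrations.
VALENCE = {"happy": 0.9, "sad": 0.1, "calm": 0.7, "angry": 0.15, "fine": 0.6}

def mean_valence(item: str) -> float:
    """Average valence of the item's words that appear in the lexicon."""
    scores = [VALENCE[w] for w in item.lower().split() if w in VALENCE]
    return sum(scores) / len(scores) if scores else 0.5  # neutral fallback

translation_a = ["I feel happy and calm", "I am often sad"]
translation_b = ["I feel fine", "I am often angry"]  # hypothetical rendering

# Item-level comparison; test-level comparison would average these gaps.
for a, b in zip(translation_a, translation_b):
    print(f"{mean_valence(a):.2f} vs {mean_valence(b):.2f}")
```

In practice the per-item scores would come from a sentiment or emotion model rather than a lexicon lookup, but the test- and item-level aggregation logic is the same.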
Pub Date: 2026-03-01 | Epub Date: 2025-03-29 | DOI: 10.1177/10731911251328604
Wendy C Higgins, Victoria Savalei, Vince Polito, Robert M Ross
The Reading the Mind in the Eyes Test (RMET) is widely used in clinical and non-clinical research. However, the structural properties of RMET scores have yet to be rigorously examined. We analyzed the structural properties of RMET scores in nine existing datasets comprising non-clinical samples ranging from 558 to 9,267 (median = 1,112) participants each. We used confirmatory factor analysis to assess two theoretically derived factor models, exploratory factor analysis to identify possible alternative factor models, and reliability estimates to assess internal consistency. Neither of the theoretically derived models was a good fit for any of the nine datasets, and we were unable to identify any better fitting multidimensional models. Internal consistency metrics were acceptable in six of the nine datasets, but these metrics are difficult to interpret given the uncertain factor structures. Our findings contribute to a growing body of evidence questioning the reliability and validity of RMET scores.
Title: Reading the Mind in the Eyes Test Scores Demonstrate Poor Structural Properties in Nine Large Non-Clinical Samples. (Assessment, pp. 204-220; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12824027/pdf/)
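The internal consistency estimates reported in this abstract are typically Cronbach's alpha, which can be computed directly from a respondents-by-items score matrix. A self-contained sketch on simulated binary RMET-style responses (the data are synthetic, not the study's):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulate 500 respondents on 36 binary items driven by one latent trait.
rng = np.random.default_rng(42)
ability = rng.normal(size=500)
noise = rng.normal(size=(500, 36))
responses = ((ability[:, None] + noise) > 0).astype(float)

print(round(cronbach_alpha(responses), 3))
```

As the abstract notes, an acceptable alpha by itself is hard to interpret when the factor structure is uncertain: alpha summarizes inter-item covariance but does not establish unidimensionality.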
Pub Date: 2026-03-01 | Epub Date: 2025-03-25 | DOI: 10.1177/10731911251326399
Allison Peipert, Sydney Adams, Lorenzo Lorenzo-Luaces
Quality of life (QOL) broadly encompasses constructs including health, well-being, life satisfaction, and psychosocial functioning. Depression, a major cause of global disability, is linked to lower QOL. Despite the rise of measurement-based care and patient-reported outcomes, there is no consensus on QOL definitions or models, resulting in varied assessments. This study describes the item content overlap among commonly used QOL measures in depression research. We analyzed 10 QOL measures from a meta-analysis, calculating Jaccard indices to quantify overlap, and used two coding approaches: one for similarly worded items and another for exact word matches. We also categorized items into broader themes. At most, the average Jaccard similarity was M = 0.14 (SD = 0.12), indicating considerable heterogeneity among QOL measures in depression. This suggests that QOL outcomes may not be reproducible across different scales. Future research should examine the relationships between the content assessed by various QOL measures.
Title: Heterogeneity in Item Content of Quality of Life Assessments Used in Depression Psychotherapy Research. (Assessment, pp. 241-253)
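The overlap metric used in this study, the Jaccard index, is the size of the intersection of two item-content sets divided by the size of their union. A minimal sketch (the content codes assigned to the two scales are invented for illustration):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union| of two sets."""
    if not a and not b:
        return 0.0  # convention: two empty sets share no content
    return len(a & b) / len(a | b)

# Hypothetical content codes for items of two QOL scales.
scale_a = {"sleep", "mood", "energy", "social", "work"}
scale_b = {"mood", "pain", "social", "mobility"}

print(jaccard(scale_a, scale_b))  # 2 shared codes / 7 total codes ≈ 0.286
```

An average pairwise Jaccard of 0.14, as reported above, means that for a typical pair of QOL scales only about one content code in seven is shared.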
Pub Date: 2026-03-01 | Epub Date: 2025-03-27 | DOI: 10.1177/10731911251329992
Abdullah Alrubaian
Parents of children with dyslexia play an important role in detection and in treatment success. However, standard scales in this context are not suitable for use by parents. The main aim of the current study was to identify the most important indicators of dyslexia according to parents' reports and statements. First, a list of parent-reported dyslexia indicators was developed. Then, according to DSM-5 criteria (applied by clinicians), children were divided into two categories: children with dyslexia and healthy controls. Four machine learning (ML) approaches (logistic regression, random forest, extreme gradient boosting [XGBoost], and an ensemble of these methods) were then used to extract the most relevant predictors: recursive feature elimination selected the five most important of the 35 parent-reported items. The models were fit in RStudio, and the ensemble model performed best. The most important indicators were "Word Guessing," "Letter Confusion," "Letter-Sound Association," "Slow Reading," and "Letter Order Reversal." The study revealed that ML models can accurately identify dyslexia by analyzing parent-reported indicators, and these five key predictors provide essential information for detecting dyslexia early.
Title: Using Advanced Machine Learning Models for Detection of Dyslexia Among Children By Parents: A Study from Screening to Diagnosis. (Assessment, pp. 178-190)
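The feature-selection step described above, recursive feature elimination down to five predictors wrapped around a logistic regression, can be sketched with scikit-learn. The data below are synthetic and the informative columns are invented; the study used 35 parent-reported items and fit its models in R:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for 300 children rated on 35 parent-reported items,
# where 5 hypothetical columns actually drive the dyslexia label.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 35))
signal = X[:, [0, 3, 7, 12, 20]].sum(axis=1)
y = (signal + rng.normal(scale=0.5, size=300) > 0).astype(int)

# Recursively drop the weakest item until five remain.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)
selector.fit(X, y)
print(np.flatnonzero(selector.support_))  # indices of the retained items
```

The same selected-feature mask would then feed the random forest, XGBoost, and ensemble models for the final comparison.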
Pub Date: 2026-03-01 | Epub Date: 2025-03-18 | DOI: 10.1177/10731911241312757
Léna Nagy, Mónika Koós, Shane W Kraus, Zsolt Demetrovics, Marc N Potenza, Rafael Ballester-Arnal, Dominik Batthyány, Sophie Bergeron, Joël Billieux, Peer Briken, Julius Burkauskas, Georgina Cárdenas-López, Joana Carvalho, Jesús Castro-Calvo, Lijun Chen, Ji-Kang Chen, Giacomo Ciocca, Ornella Corazza, Rita Csako, David P Fernandez, Hironobu Fujiwara, Elaine F Fernandez, Johannes Fuss, Roman Gabrhelík, Ateret Gewirtz-Meydan, Biljana Gjoneska, Mateusz Gola, Joshua B Grubbs, Hashim T Hashim, Md Saiful Islam, Mustafa Ismail, Martha C Jiménez-Martínez, Tanja Jurin, Ondrej Kalina, Verena Klein, András Költő, Sang-Kyu Lee, Karol Lewczuk, Chung-Ying Lin, Christine Lochner, Silvia López-Alvarado, Kateřina Lukavská, Percy Mayta-Tristán, Dan J Miller, Ol̆ga Orosová, Gábor Orosz, Fernando P Ponce, Gonzalo R Quintana, Gabriel C Quintero Garzola, Jano Ramos-Diaz, Kévin Rigaud, Ann Rousseau, Scanavino Marco De Tubino, Marion K Schulmeyer, Pratap Sharan, Mami Shibata, Sheikh Shoib, Vera Sigre-Leirós, Luke Sniewski, Ognen Spasovski, Vesta Steibliene, Dan J Stein, Julian Strizek, Aleksandar Štulhofer, Banu C Ünsal, Marie-Pier Vaillancourt-Morel, Marie Claire Van Hout, Beáta Bőthe
Sexual assertiveness (SA) is an important concept in understanding sexual well-being and decision-making. However, psychometric evaluation of existing measures of SA in diverse populations is largely lacking, hindering cross-cultural and comparative studies. This study validated the short version of the Sexual Assertiveness Questionnaire (SAQ-9) and examined its measurement invariance across several languages, countries, genders, sexual orientations, and relationship statuses among 65,448 sexually-active adults (Mage = 32.98 years, SD = 12.08, 58% women, 2.74% gender-diverse individuals) taking part in the International Sex Survey. The scale demonstrated adequate psychometric properties. Measurement invariance tests indicated that the SAQ-9 is suitable for comparing individuals from different cultures, genders, sexual orientations, and relationship statuses, and significant group differences were also noted (e.g., gender-diverse individuals reported the highest levels of SA). Findings suggest that the SAQ-9 is a reliable and valid measure of SA and appropriate for use in diverse populations, with specific populations exhibiting varying levels of SA.
Title: Sexual Assertiveness Across Cultures, Genders, and Sexual Orientations: Validation of the Short Sexual Assertiveness Questionnaire (SAQ-9). (Assessment, pp. 254-274)
Pub Date: 2026-02-24 | DOI: 10.1177/10731911251407473
Harrison G Boynton, Matthew S Fontanese, Michael D Barnett
This study explored the diagnostic accuracy of a virtual reality-based cooking task compared to traditional memory tests for neurocognitive disorders (NCDs). Older adults (N = 127) were administered the Virtual Kitchen Protocol for Learning and Memory (VKP-LM) Immediate and Delayed Recall (VKP-IR and VKP-DR), the California Verbal Learning Test Short and Long Delay (CVLT-SD and CVLT-LD), and the Wechsler Memory Scale-Visual Reproduction subtests (WMS-VR1 and WMS-VR2). A hierarchical logistic regression identified the VKP-DR as an independent predictor of NCDs (Exp[B] = 0.10, p < .001) among all study variables. Further analyses revealed that the VKP-DR had the highest area under the curve (AUC), reflecting the strongest classification performance. The CVLT-LD also showed a good AUC, while the WMS-VR2 demonstrated fair performance. The VKP-DR likely emerged with the highest diagnostic accuracy because (a) delayed memory measures are consistently shown to be more accurate in classifying those with NCDs and (b) NCD diagnostic criteria emphasize not only deficits in neurocognitive domains but also demonstrable impairment in everyday functioning. By embedding memory demands within a realistic, functionally relevant task (e.g., meal preparation), the VKP-DR may better approximate instrumental activities of daily living and thus capture the functional component required for diagnosis. This alignment with the Diagnostic and Statistical Manual of Mental Disorders (5th ed., text rev.; DSM-5-TR) criteria likely enhanced its predictive accuracy.
{"title":"A Comparison of Virtual Reality and Traditional Measures of Memory in the Diagnostic Discriminability of Neurocognitive Disorders: A Virtual Kitchen Protocol Study.","authors":"Harrison G Boynton, Matthew S Fontanese, Michael D Barnett","doi":"10.1177/10731911251407473","DOIUrl":"https://doi.org/10.1177/10731911251407473","url":null,"abstract":"<p><p>This study explored the diagnostic accuracy of a virtual reality-based cooking task compared to traditional memory tests for neurocognitive disorders (NCDs). Older adults (<i>N</i> = 127) were administered the Virtual Kitchen Protocol for Learning and Memory (VKP-LM) Immediate and Delayed Recall (VKP-IR and DR), the California Verbal Learning Test Short and Long Delay (CVLT-SD and LD), and the Wechsler Memory Scale-Visual Reproduction subtests (WMS-VR1 and VR2). A hierarchical logistic regression showed the VKP-DR as an independent predictor of NCDs (<i>Exp[B]</i> = 0.10, <i>p</i> < .001) among all study variables. Further analyses revealed that the VKP-DR had the highest area under the curve (AUC), reflecting the strongest classification performance. The CVLT-LD also showed good AUCs, while the WMS-VR2 demonstrated fair performance. The VKP-DR likely emerged with the highest diagnostic accuracy due to (a) delayed memory measures are consistently shown to be more accurate when classifying those with NCDs and (b) because NCD diagnostic criteria emphasize not only deficits in neurocognitive domains but also demonstrable impairment in everyday functioning. By embedding memory demands within a realistic, functionally relevant task (e.g., meal preparation), the VKP-DR may better approximate instrumental activities of daily living and thus capture the functional component required for diagnosis. 
This alignment with the <i>Diagnostic and Statistical Manual of Mental Disorders</i> (5th ed., text rev.; DSM-5-TR) criteria likely enhanced its predictive accuracy.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":" ","pages":"10731911251407473"},"PeriodicalIF":3.4,"publicationDate":"2026-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147282051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
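The abstract above ranks the memory measures by area under the ROC curve (AUC). As a minimal illustration of what that comparison computes (not the authors' analysis code, and using made-up scores rather than study data), AUC can be obtained from raw predictor scores via the rank-based Mann-Whitney formula, with average ranks for ties:

```python
def auc(scores, labels):
    """Rank-based (Mann-Whitney) AUC: the probability that a randomly
    chosen positive case (label 1) scores higher than a randomly chosen
    negative case (label 0). Ties receive average ranks."""
    pairs = sorted(zip(scores, labels))
    n = len(pairs)
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        # find the run of tied scores and assign each its average rank
        while j + 1 < n and pairs[j + 1][0] == pairs[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank of the tie run
        for k in range(i, j + 1):
            ranks[k] = avg_rank
        i = j + 1
    n_pos = sum(1 for _, y in pairs if y == 1)
    n_neg = n - n_pos
    pos_rank_sum = sum(r for r, (_, y) in zip(ranks, pairs) if y == 1)
    # Mann-Whitney U of the positive group, normalized to [0, 1]
    return (pos_rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)


# Hypothetical delayed-recall scores: lower recall should predict NCD,
# so the score is negated to make higher values indicate impairment.
recall = [18, 15, 12, 9, 7, 6, 5, 4]
has_ncd = [0, 0, 0, 0, 1, 1, 1, 1]
print(auc([-s for s in recall], has_ncd))  # → 1.0 (perfect separation in this toy data)
```

An AUC of 1.0 means perfect discrimination and 0.5 means chance; "good" versus "fair" in the abstract refers to conventional AUC benchmark bands.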
Pub Date : 2026-02-16DOI: 10.1177/10731911251412219
Gary L Canivez, Marley W Watkins, Ryan J McGill, Stefan C Dombrowski
The Wechsler Adult Intelligence Scale-Fifth Edition (WAIS-5) latent factor structure was assessed using complementary hierarchical exploratory factor analyses (EFA) with the Schmid and Leiman procedure and confirmatory factor analyses (CFA) using the standardization sample (N = 2,020) correlation matrix and descriptive statistics of the 20 primary and secondary WAIS-5 subtests. The WAIS-5 Technical and Interpretive Manual did not include EFA, CFA with fewer than five first-order (group) factors, CFA with rival bifactor models, or model-based reliability and dimensionality estimates; thus, the present independent structural validity assessment corrects this evidential lacuna to help guide ethical and evidence-based interpretation. EFA results did not support five latent factors with separate Visual Spatial and Fluid Reasoning factors. Instead, the supported model had four factors, with Visual Spatial and Fluid Reasoning merged into the former Perceptual Reasoning factor and measurement dominated by a general intelligence (g) factor, similar to the WAIS-IV structure. CFA results indicated that a bifactor model with four group factors provided the best fit, consistent with the EFA findings. Overall, the EFA and CFA results did not support the purported WAIS-5 structure and instead replicated findings from independent assessments of the WISC-V with standardization and clinical samples, which indicated primary, if not exclusive, interpretation of the FSIQ as an estimate of psychometric g.
{"title":"Construct Validity of the WAIS-5: Complementary Exploratory and Confirmatory Factor Analyses of the 20 Primary and Secondary Subtests.","authors":"Gary L Canivez, Marley W Watkins, Ryan J McGill, Stefan C Dombrowski","doi":"10.1177/10731911251412219","DOIUrl":"https://doi.org/10.1177/10731911251412219","url":null,"abstract":"<p><p>The Wechsler Adult Intelligence Scale-Fifth Edition (WAIS-5) latent factor structure was assessed using complementary hierarchical exploratory factor analyses (EFA) with the Schmid and Leiman procedure and confirmatory factor analyses (CFA) using the standardization sample (<i>N</i> = 2,020) correlation matrix and descriptive statistics of the 20 primary and secondary WAIS-5 subtests. The WAIS-5 Technical and Interpretive Manual did not include EFA, CFA with fewer than five first-order (group) factors, CFA with rival bifactor models, or model-based reliability and dimensionality estimates; thus, the present independent structural validity assessment corrects this evidential lacuna to help guide ethical and evidence-based interpretation. EFA results did not support five latent factors with separate Visual Spatial and Fluid Reasoning factors. Instead, the supported model had four factors, with Visual Spatial and Fluid Reasoning merged into the former Perceptual Reasoning factor and measurement dominated by a general intelligence (<i>g</i>) factor, similar to the WAIS-IV structure. CFA results indicated that a bifactor model with four group factors provided the best fit, consistent with the EFA findings. 
Overall, the EFA and CFA results did not support the purported WAIS-5 structure and instead replicated findings from independent assessments of the WISC-V with standardization and clinical samples, which indicated primary, if not exclusive, interpretation of the FSIQ as an estimate of psychometric <i>g</i>.</p>","PeriodicalId":8577,"journal":{"name":"Assessment","volume":" ","pages":"10731911251412219"},"PeriodicalIF":3.4,"publicationDate":"2026-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146199997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
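The Schmid-Leiman procedure named in the abstract above orthogonalizes a higher-order factor solution into a general (g) factor plus residualized group factors, from which model-based reliability such as omega-hierarchical can be computed. The sketch below uses toy loadings, not WAIS-5 estimates, and is only an illustration of the transformation itself:

```python
def schmid_leiman(first_order, higher_order):
    """Schmid-Leiman orthogonalization of a higher-order solution.
    first_order: rows of subtest loadings on the group factors.
    higher_order: loading of each group factor on g.
    Returns (g_loadings, residualized group loadings)."""
    g = [sum(f * h for f, h in zip(row, higher_order)) for row in first_order]
    resid = [[f * (1 - h * h) ** 0.5 for f, h in zip(row, higher_order)]
             for row in first_order]
    return g, resid


def omega_hierarchical(g_loadings, resid_loadings, uniquenesses):
    """Proportion of total-score variance attributable to g (omega-h)."""
    gen = sum(g_loadings) ** 2
    grp = sum(sum(col) ** 2 for col in zip(*resid_loadings))
    return gen / (gen + grp + sum(uniquenesses))


# Toy example: 4 subtests, 2 group factors each loading 0.6 on g.
first_order = [[0.8, 0.0], [0.8, 0.0], [0.0, 0.8], [0.0, 0.8]]
higher_order = [0.6, 0.6]
g, resid = schmid_leiman(first_order, higher_order)
print(g[0], resid[0][0])  # → 0.48 0.64 (g and residual group loading)

uniq = [1 - 0.8 ** 2] * 4  # uniqueness per subtest in this toy model
print(round(omega_hierarchical(g, resid, uniq), 3))
```

The transformation preserves each subtest's communality (here 0.48² + 0.64² = 0.8²), which is what lets the authors compare variance due to g against variance due to the group factors.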