Assessing Conspiracist Ideation Reliably, Validly, and Efficiently: A Psychometric Comparison of Five Short-Form Measures
Cameron S Kay, Paul Slovic
Assessment, pp. 287-302 | Pub Date: 2026-03-01 | Epub Date: 2025-03-12 | DOI: 10.1177/10731911251319933

Choosing a short-form measure of conspiracist ideation (i.e., the tendency to believe in conspiracy theories) is fraught. Despite there being numerous scales to choose from, little work has been done to compare their psychometric properties. To address this shortcoming, we compared the internal consistency, 2-week test-retest reliability, criterion validity, and construct validity of five short-form conspiracist ideation measures: the Generic Conspiracist Beliefs Scale-5 (GCB-5), the Conspiracy Mentality Questionnaire (CMQ), the General Measure of Conspiracism (GMC), the American Conspiracy Thinking Scale (ACTS), and the One-Item Conspiracy Measure (1CM). The results of our investigation indicated that all five scales are reliable and valid measures of conspiracist ideation. That said, the GCB-5 tended to perform the best, while the 1CM tended to perform the worst. We conclude our investigation by discussing trade-offs among the five scales, as well as providing recommendations for future research.
Development of the Short Form for Chronic Hepatitis B Quality of Life Instrument (CHBQOL-SF) Using Delphi Method and Rasch Analysis
Renjun Gu, Lin Zhu, Jingxia Kong, Li Zhang, Mengna Song, Xiao Cheng, Donald L Patrick, Hongmei Wang
Assessment, pp. 303-319 | Pub Date: 2026-03-01 | Epub Date: 2025-03-19 | DOI: 10.1177/10731911251321922

To refine the 23-item chronic hepatitis B quality of life instrument (CHBQOL) using the modified Delphi method and Rasch model analysis, this study conducted a secondary analysis of data from 578 chronic hepatitis B (CHB) patients. A preliminary evaluation of the importance of the original CHBQOL items and a final review of the short form (CHBQOL-SF) were collected via the Delphi method. A bi-factor model was estimated, and Rasch analysis with the partial credit model was performed on each domain of the CHBQOL. Six items were recommended for removal based on the Delphi results. The fit of the bi-factor model was acceptable (RMSEA = 0.040; CFI = 0.983; TLI = 0.965). Disordered thresholds were initially found for three of the five items in Somatic Symptoms and four of the six items in Social Stigma. Uniform differential item functioning was observed for three items across age groups, two items across genders, and one item each across ALT levels and HBV-DNA levels. The final 10-item CHBQOL-SF retained the four-dimensional structure of the original instrument; the 10 items fit the Rasch model well, and the response options were set reasonably. The 10-item CHBQOL-SF offers a brief, easily administered CHB-specific patient-reported outcome measure for use in clinical practice and population studies.
Three-Dimensional Narcissism Scale for Children: Structure, Reliability, and Construct Validity
Anna Turek, Marcin Zajenkowski, Radosław Rogoza, Marta Rogoza, Gilles E Gignac
Assessment, pp. 275-286 | Pub Date: 2026-03-01 | Epub Date: 2025-03-21 | DOI: 10.1177/10731911251320050

Recent advancements in the theory of narcissism emphasize that it is a multidimensional construct with three distinct facets: agentic, antagonistic, and neurotic. Although this model has been extensively studied and supported in adults, there is a lack of instruments assessing the multidimensional structure of narcissism in children. To address this gap in the literature, we aimed to introduce a new measure of three-dimensional narcissism in children. In three studies of children aged 8 to 10 years (N = 189, N = 235, N = 163), we found support for the three-factor structure of narcissism, and we identified respectable reliability and validity for the new measure. Agentic narcissism correlated positively with self-enhancement values, agentic attributes, and self-esteem. Neurotic narcissism correlated negatively with self-esteem. Finally, antagonistic narcissism was associated negatively with self-transcendence values and positively with self-enhancement values. In conclusion, we propose a 12-item measure distinguishing agentic, antagonistic, and neurotic narcissism in children.
The Placement of the MMPI-3 Compulsivity (CMP) Scale Within a Hierarchical Structure of Psychopathology
Keefe J Maccarone, Andrew J Kremyar, Martin Sellbom, Yossef S Ben-Porath
Assessment, pp. 191-203 | Pub Date: 2026-03-01 | Epub Date: 2025-03-26 | DOI: 10.1177/10731911251326379

In the current literature on compulsivity, it is unclear whether this construct is best conceptualized as an internalizing disorder, a fear disorder, a thought disorder, or some combination of the three. The Compulsivity (CMP) scale introduced with the MMPI-3 assesses compulsive behaviors. To address the question of compulsivity's placement within a hierarchical psychopathology structure, the current study used confirmatory factor analyses to examine the degree to which CMP scores share variance with internalizing, fear, and thought dysfunction factors. A model in which CMP scores cross-loaded onto latent fear and thought dysfunction factors fit better than a model in which CMP scores cross-loaded onto a higher-order internalizing factor and a thought dysfunction factor. Constraining the cross-loadings of CMP scores onto the fear and thought dysfunction factors to equality produced no significant decrement in fit. These findings indicate that the MMPI-3 CMP scale measures both fear and thought dysfunction. Implications and limitations of these findings and future research directions are discussed.
Psychometric Evaluation of the Weekly Version of the PTSD Checklist for DSM-5
Benjamin C Darnell, Maya Bina N Vannini, Antonio Morgan-López, Stephanie E Brown, Breanna Grunthal, Willie J Hale, Stacey Young-McCaughan, Peter T Fox, Donald D McGeary, Patricia A Resick, Denise M Sloan, Daniel J Taylor, Richard P Schobitz, Christian C Schrader, Jeffrey S Yarvis, Terence M Keane, Alan L Peterson, Brett T Litz
Assessment, pp. 221-240 | Pub Date: 2026-03-01 | Epub Date: 2025-03-12 | DOI: 10.1177/10731911251321929

The posttraumatic stress disorder (PTSD) Checklist for Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5; PCL-5) was designed and validated to track symptoms over the past month (PCL-5-M), yet an untested ad hoc weekly version (PCL-5-W) is commonly used to track changes during treatment. We used archival data from clinical trials for the treatment of PTSD in veterans to assess the construct validity of the PCL-5-W. Both the PCL-5-M and PCL-5-W were found to have configural measurement invariance across four consecutive administrations. The results also indicated at least partial metric and scalar invariance for each version. The reliability estimates of the PCL-5-M and PCL-5-W at each time point were equivalent. However, we found a discrepancy with regard to concurrent validity; correlations with the nine-item Patient Health Questionnaire may be meaningfully different between the PCL-5-M and PCL-5-W. Nevertheless, overall, the results suggest that the PCL-5-W can be validly used to assess PTSD symptoms over time, but factor scores may need to be tracked alongside total scores to address validity concerns.
Leveraging Artificial Intelligence to Linguistically Compare Test Translations: A Methodological Introduction and Demonstration
Adam P Natoli
Assessment, pp. 163-177 | Pub Date: 2026-03-01 | Epub Date: 2025-03-29 | DOI: 10.1177/10731911251326371

The validity and utility of translated instruments (psychological measures) depend on the quality of their translation, and differences in key linguistic characteristics could introduce bias. Likewise, linguistic differences between instruments designed to measure analogous constructs might contribute to similar instruments possessing dissimilar psychometrics. This article introduces and demonstrates the use of natural language processing (NLP), a subfield of artificial intelligence, to linguistically analyze 13 translations of two psychological measures previously translated into numerous languages. NLP was used to generate estimates reflecting specific linguistic characteristics of test items (emotional tone/intensity, sentiment, valence, arousal, and dominance), which were then compared across translations at both the test and item level, as well as between the two instruments. Results revealed that key linguistic characteristics can vary profoundly both within and between tests. Following a discussion of results, the current limitations of this approach are summarized and strategies for advancing this methodology are proposed.
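The kind of item-level comparison the abstract describes can be illustrated with a toy sketch. Everything here is an invented assumption: the five-word valence lexicon and the two "translations" are hypothetical, and a real analysis would rely on validated affective norms and proper NLP tooling rather than word matching.

```python
# Toy sketch (not the article's pipeline): compare the mean valence of items
# across two hypothetical translations of a test, using an invented lexicon.
# Real work would use validated valence/arousal/dominance norms.
VALENCE = {"happy": 8.2, "sad": 2.1, "calm": 6.9, "angry": 2.5, "fine": 6.0}

def mean_item_valence(items: list[str]) -> float:
    """Average valence over all lexicon words appearing in a test's items."""
    scores = [VALENCE[w] for item in items
              for w in item.lower().split() if w in VALENCE]
    return sum(scores) / len(scores) if scores else float("nan")

translation_a = ["I feel happy most days", "I am calm under pressure"]
translation_b = ["I feel sad most days", "I get angry under pressure"]

# Nominally "equivalent" translations can carry very different affective tone.
print(mean_item_valence(translation_a))  # → 7.55
print(mean_item_valence(translation_b))  # → 2.3
```

A gap of this size between two versions of the "same" items is the sort of linguistic non-equivalence the article argues can bias cross-language comparisons.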
Reading the Mind in the Eyes Test Scores Demonstrate Poor Structural Properties in Nine Large Non-Clinical Samples
Wendy C Higgins, Victoria Savalei, Vince Polito, Robert M Ross
Assessment, pp. 204-220 | Pub Date: 2026-03-01 | Epub Date: 2025-03-29 | DOI: 10.1177/10731911251328604
Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12824027/pdf/

The Reading the Mind in the Eyes Test (RMET) is widely used in clinical and non-clinical research. However, the structural properties of RMET scores have yet to be rigorously examined. We analyzed the structural properties of RMET scores in nine existing datasets comprising non-clinical samples ranging from 558 to 9,267 (median = 1,112) participants each. We used confirmatory factor analysis to assess two theoretically derived factor models, exploratory factor analysis to identify possible alternative factor models, and reliability estimates to assess internal consistency. Neither of the theoretically derived models was a good fit for any of the nine datasets, and we were unable to identify any better fitting multidimensional models. Internal consistency metrics were acceptable in six of the nine datasets, but these metrics are difficult to interpret given the uncertain factor structures. Our findings contribute to a growing body of evidence questioning the reliability and validity of RMET scores.
Heterogeneity in Item Content of Quality of Life Assessments Used in Depression Psychotherapy Research
Allison Peipert, Sydney Adams, Lorenzo Lorenzo-Luaces
Assessment, pp. 241-253 | Pub Date: 2026-03-01 | Epub Date: 2025-03-25 | DOI: 10.1177/10731911251326399

Quality of life (QOL) broadly encompasses constructs including health, well-being, life satisfaction, and psychosocial functioning. Depression, a major cause of global disability, is linked to lower QOL. Despite the rise of measurement-based care and patient-reported outcomes, there is no consensus on QOL definitions or models, resulting in varied assessments. This study aims to describe the item content overlap among commonly used QOL measures in depression research. We analyzed 10 QOL measures drawn from a meta-analysis, calculating Jaccard indices to quantify overlap, and used two coding approaches: one for similarly worded items and another for exact word matches. We also categorized items into broader themes. Even under the more liberal coding, the average Jaccard similarity was only M = 0.14 (SD = 0.12), indicating substantial heterogeneity among QOL measures in depression. This suggests that QOL outcomes may not be reproducible across different scales. Future research should examine the relationships between the content assessed by various QOL measures.
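The Jaccard index used above to quantify item-content overlap is the size of the intersection of two item sets divided by the size of their union. A minimal sketch, with hypothetical coded item themes standing in for the study's actual codings:

```python
# Jaccard index between two sets of coded item content.
# The example themes below are invented, not the study's actual codings.
def jaccard(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|: 1.0 = identical item content, 0.0 = none shared."""
    if not a and not b:
        return 1.0  # two empty sets are conventionally treated as identical
    return len(a & b) / len(a | b)

scale_a = {"mood", "energy", "sleep", "social"}
scale_b = {"mood", "pain", "social", "work"}

print(jaccard(scale_a, scale_b))  # 2 shared / 6 total ≈ 0.33
```

An average index of 0.14, as reported above, means two randomly chosen QOL measures share only a small fraction of their combined item content.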
Using Advanced Machine Learning Models for Detection of Dyslexia Among Children By Parents: A Study from Screening to Diagnosis
Abdullah Alrubaian
Assessment, pp. 178-190 | Pub Date: 2026-03-01 | Epub Date: 2025-03-27 | DOI: 10.1177/10731911251329992

Parents play an important role in the detection of dyslexia and in the treatment success of their children. However, standard scales in this context are not suitable for use by parents. The main aim of the current study was to identify the most important indicators of dyslexia according to parents' reports and statements. First, a list of parent reports on dyslexia was developed. Then, clinicians classified children according to DSM-5 criteria into two categories: children with dyslexia and healthy controls. Recursive feature elimination selected the five most important variables from 35 parent-reported items, and four machine learning (ML) models (Logistic Regression, Random Forest, Extreme Gradient Boosting [XGBoost], and an ensemble) were fit in RStudio to extract the most relevant predictors. The ensemble model performed best. The most important indicators were "Word Guessing," "Letter Confusion," "Letter-Sound Association," "Slow Reading," and "Letter Order Reversal." The study revealed that ML models can accurately identify dyslexia by analyzing parent-reported indicators, and these five key predictors provide essential information for detecting dyslexia early.
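The selection step described above, recursive feature elimination (RFE) reducing 35 parent-reported items to the 5 strongest predictors before classification, can be sketched as follows. The synthetic data and scikit-learn estimators are illustrative assumptions; the study itself worked in RStudio with its own parent-report data.

```python
# Hedged sketch of RFE-based item selection (synthetic data, not the study's).
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Stand-in for 35 parent-reported items, 5 of which are truly informative.
X, y = make_classification(n_samples=300, n_features=35,
                           n_informative=5, random_state=0)

# RFE repeatedly fits the estimator and drops the weakest feature
# until only the requested number of predictors remains.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)
rfe.fit(X, y)

selected = [i for i, keep in enumerate(rfe.support_) if keep]
print(selected)  # column indices of the 5 retained predictors
```

In the study, the retained items (rather than anonymous column indices) are what give the screening tool its interpretability for parents and clinicians.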
Sexual Assertiveness Across Cultures, Genders, and Sexual Orientations: Validation of the Short Sexual Assertiveness Questionnaire (SAQ-9)
Léna Nagy, Mónika Koós, Shane W Kraus, Zsolt Demetrovics, Marc N Potenza, Rafael Ballester-Arnal, Dominik Batthyány, Sophie Bergeron, Joël Billieux, Peer Briken, Julius Burkauskas, Georgina Cárdenas-López, Joana Carvalho, Jesús Castro-Calvo, Lijun Chen, Ji-Kang Chen, Giacomo Ciocca, Ornella Corazza, Rita Csako, David P Fernandez, Hironobu Fujiwara, Elaine F Fernandez, Johannes Fuss, Roman Gabrhelík, Ateret Gewirtz-Meydan, Biljana Gjoneska, Mateusz Gola, Joshua B Grubbs, Hashim T Hashim, Md Saiful Islam, Mustafa Ismail, Martha C Jiménez-Martínez, Tanja Jurin, Ondrej Kalina, Verena Klein, András Költő, Sang-Kyu Lee, Karol Lewczuk, Chung-Ying Lin, Christine Lochner, Silvia López-Alvarado, Kateřina Lukavská, Percy Mayta-Tristán, Dan J Miller, Ol̆ga Orosová, Gábor Orosz, Fernando P Ponce, Gonzalo R Quintana, Gabriel C Quintero Garzola, Jano Ramos-Diaz, Kévin Rigaud, Ann Rousseau, Scanavino Marco De Tubino, Marion K Schulmeyer, Pratap Sharan, Mami Shibata, Sheikh Shoib, Vera Sigre-Leirós, Luke Sniewski, Ognen Spasovski, Vesta Steibliene, Dan J Stein, Julian Strizek, Aleksandar Štulhofer, Banu C Ünsal, Marie-Pier Vaillancourt-Morel, Marie Claire Van Hout, Beáta Bőthe
Assessment, pp. 254-274 | Pub Date: 2026-03-01 | Epub Date: 2025-03-18 | DOI: 10.1177/10731911241312757

Sexual assertiveness (SA) is an important concept in understanding sexual well-being and decision-making. However, psychometric evaluation of existing measures of SA in diverse populations is largely lacking, hindering cross-cultural and comparative studies. This study validated the short version of the Sexual Assertiveness Questionnaire (SAQ-9) and examined its measurement invariance across several languages, countries, genders, sexual orientations, and relationship statuses among 65,448 sexually active adults (Mage = 32.98 years, SD = 12.08; 58% women, 2.74% gender-diverse individuals) taking part in the International Sex Survey. The scale demonstrated adequate psychometric properties. Measurement invariance tests indicated that the SAQ-9 is suitable for comparing individuals from different cultures, genders, sexual orientations, and relationship statuses, and significant group differences were also noted (e.g., gender-diverse individuals reported the highest levels of SA). Findings suggest that the SAQ-9 is a reliable and valid measure of SA and appropriate for use in diverse populations, with specific populations exhibiting varying levels of SA.