The Impact of Adverse Childhood Experiences on Symptom and Performance Validity Tests Among a Multiracial Sample Presenting for ADHD Evaluation
Christopher Gonzalez, John-Christopher A Finley, Elmma Khalid, Karen S Basurto, Hannah B VanLandingham, Lauren A Frick, Julia M Brooks, Rachael L Ellison, Devin M Ulrich, Jason R Soble, Zachary J Resch
Objective: Adverse childhood experiences (ACEs) are commonly reported in individuals presenting for attention-deficit hyperactivity disorder (ADHD) evaluation. Performance validity tests (PVTs) and symptom validity tests (SVTs) are essential to ADHD evaluations in young adults, but extant research suggests that those who report ACEs may be inaccurately classified as invalid on these measures. The current study aimed to assess the degree to which ACE exposure differentiated PVT and SVT performance and ADHD symptom reporting in a multi-racial sample of adults presenting for ADHD evaluation.
Method: This study included 170 adults referred for outpatient neuropsychological ADHD evaluation who completed the ACE Checklist and a neurocognitive battery that included multiple PVTs and SVTs. Analysis of variance was used to examine differences in PVT and SVT performance among those with high (≥4) and low (≤3) reported ACEs.
Results: A main effect of ACE group was observed, such that the high ACE group demonstrated higher scores on SVTs assessing ADHD symptom over-reporting and on Minnesota Multiphasic Personality Inventory-2-Restructured Form scales assessing infrequent psychiatric and somatic symptoms. Conversely, no significant differences emerged in total PVT failures across ACE groups.
Conclusions: Those with high ACE exposure were more likely to have higher scores on SVTs assessing over-reporting and infrequent responses. In contrast, ACE exposure did not affect PVT performance. Thus, ACE exposure should be considered specifically when evaluating SVT performance in the context of ADHD evaluations, and more work is needed to understand factors that contribute to different patterns of symptom reporting as a function of ACE exposure.
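The group contrast described in the Method is a standard one-way ANOVA over high (≥4) versus low (≤3) ACE groups. A minimal sketch of that comparison in Python follows; the ACE totals and SVT scores below are hypothetical placeholders, not study data:

```python
# Sketch of the high- vs. low-ACE comparison described in the Method.
# The >=4 / <=3 grouping and the ANOVA come from the abstract; the
# ACE totals and SVT scores below are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ace_total = rng.integers(0, 11, size=170)      # hypothetical ACE Checklist totals (0-10)
svt_score = rng.normal(60, 10, size=170)       # hypothetical SVT T-scores

high = svt_score[ace_total >= 4]               # high-ACE group
low = svt_score[ace_total <= 3]                # low-ACE group

f_stat, p_value = stats.f_oneway(high, low)    # one-way ANOVA (F = t^2 with two groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```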
{"title":"The Impact of Adverse Childhood Experiences on Symptom and Performance Validity Tests Among a Multiracial Sample Presenting for ADHD Evaluation.","authors":"Christopher Gonzalez, John-Christopher A Finley, Elmma Khalid, Karen S Basurto, Hannah B VanLandingham, Lauren A Frick, Julia M Brooks, Rachael L Ellison, Devin M Ulrich, Jason R Soble, Zachary J Resch","doi":"10.1093/arclin/acae006","DOIUrl":"10.1093/arclin/acae006","url":null,"abstract":"<p><strong>Objective: </strong>Adverse childhood experiences (ACEs) are commonly reported in individuals presenting for attention-deficit hyperactivity disorder (ADHD) evaluation. Performance validity tests (PVTs) and symptom validity tests (SVTs) are essential to ADHD evaluations in young adults, but extant research suggests that those who report ACEs may be inaccurately classified as invalid on these measures. The current study aimed to assess the degree to which ACE exposure differentiated PVT and SVT performance and ADHD symptom reporting in a multi-racial sample of adults presenting for ADHD evaluation.</p><p><strong>Method: </strong>This study included 170 adults referred for outpatient neuropsychological ADHD evaluation who completed the ACE Checklist and a neurocognitive battery that included multiple PVTs and SVTs. Analysis of variance was used to examine differences in PVT and SVT performance among those with high (≥4) and low (≤3) reported ACEs.</p><p><strong>Results: </strong>Main effects of the ACE group were observed, such that high ACE group reporting demonstrated higher scores on SVTs assessing ADHD symptom over-reporting and infrequent psychiatric and somatic symptoms on the Minnesota Multiphasic Personality Inventory-2-Restructured Form. Conversely, no significant differences emerged in total PVT failures across ACE groups.</p><p><strong>Conclusions: </strong>Those with high ACE exposure were more likely to have higher scores on SVTs assessing over-reporting and infrequent responses. In contrast, ACE exposure did not affect PVT performance. Thus, ACE exposure should be considered specifically when evaluating SVT performance in the context of ADHD evaluations, and more work is needed to understand factors that contribute to different patterns of symptom reporting as a function of ACE exposure.</p>","PeriodicalId":8176,"journal":{"name":"Archives of Clinical Neuropsychology","volume":" ","pages":"692-701"},"PeriodicalIF":2.1,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139745954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development and Validity of Norms for Cognitive Dispersion on the Uniform Data Set 3.0 Neuropsychological Battery
Andrew M Kiselica, Alyssa N Kaser, Daniel S Weitzner, Cynthia M Mikula, Anna Boone, Steven Paul Woods, Timothy J Wolf, Troy A Webber
Objective: Cognitive dispersion indexes intraindividual variability in performance across a battery of neuropsychological tests. Measures of dispersion show promise as markers of cognitive dyscontrol and everyday functioning difficulties; however, they have limited practical applicability due to a lack of normative data. This study aimed to develop and evaluate normed scores for cognitive dispersion among older adults.
Method: We analyzed data from 4,283 cognitively normal participants aged ≥50 years from the Uniform Data Set (UDS) 3.0. We describe methods for calculating intraindividual standard deviation (ISD) and coefficient of variation (CoV), as well as associated unadjusted scaled scores and demographically adjusted z-scores. We also examined the ability of ISD and CoV scores to differentiate between cognitively normal individuals (n = 4,283) and those with cognitive impairment due to Lewy body disease (n = 282).
Results: We generated normative tables to map raw ISD and CoV scores onto a normal distribution of scaled scores. Cognitive dispersion indices were associated with age, education, and race/ethnicity but not sex. Regression equations were used to develop a freely accessible Excel calculator for deriving demographically adjusted normed scores for ISD and CoV. All measures of dispersion demonstrated excellent diagnostic utility when evaluated by the area under the curve produced from receiver operating characteristic curves.
Conclusions: Results of this study provide evidence for the clinical utility of sample-based and demographically adjusted normative standards for cognitive dispersion on the UDS 3.0. These standards can be used to guide interpretation of intraindividual variability among older adults in clinical and research settings.
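Both dispersion indices in the Method have simple closed forms: ISD is the standard deviation of one examinee's scores across the battery, and CoV divides that by the examinee's mean score. A minimal sketch follows; the score profile is a hypothetical placeholder, and the demographic adjustments from the published calculator are not reproduced:

```python
# Sketch of the intraindividual dispersion indices described in the Method.
# The ten scaled scores below are hypothetical; a real profile would come
# from an examinee's UDS 3.0 battery.
import numpy as np

scaled_scores = np.array([9, 11, 8, 12, 10, 7, 13, 10, 9, 11], dtype=float)

isd = scaled_scores.std(ddof=1)          # intraindividual standard deviation
cov = isd / scaled_scores.mean()         # coefficient of variation (ISD / mean)

print(f"ISD = {isd:.2f}, CoV = {cov:.3f}")
```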
{"title":"Development and Validity of Norms for Cognitive Dispersion on the Uniform Data Set 3.0 Neuropsychological Battery.","authors":"Andrew M Kiselica, Alyssa N Kaser, Daniel S Weitzner, Cynthia M Mikula, Anna Boone, Steven Paul Woods, Timothy J Wolf, Troy A Webber","doi":"10.1093/arclin/acae005","DOIUrl":"10.1093/arclin/acae005","url":null,"abstract":"<p><strong>Objective: </strong>Cognitive dispersion indexes intraindividual variability in performance across a battery of neuropsychological tests. Measures of dispersion show promise as markers of cognitive dyscontrol and everyday functioning difficulties; however, they have limited practical applicability due to a lack of normative data. This study aimed to develop and evaluate normed scores for cognitive dispersion among older adults.</p><p><strong>Method: </strong>We analyzed data from 4,283 cognitively normal participants aged ≥50 years from the Uniform Data Set (UDS) 3.0. We describe methods for calculating intraindividual standard deviation (ISD) and coefficient of variation (CoV), as well as associated unadjusted scaled scores and demographically adjusted z-scores. We also examined the ability of ISD and CoV scores to differentiate between cognitively normal individuals (n = 4,283) and those with cognitive impairment due to Lewy body disease (n = 282).</p><p><strong>Results: </strong>We generated normative tables to map raw ISD and CoV scores onto a normal distribution of scaled scores. Cognitive dispersion indices were associated with age, education, and race/ethnicity but not sex. Regression equations were used to develop a freely accessible Excel calculator for deriving demographically adjusted normed scores for ISD and CoV. All measures of dispersion demonstrated excellent diagnostic utility when evaluated by the area under the curve produced from receiver operating characteristic curves.</p><p><strong>Conclusions: </strong>Results of this study provide evidence for the clinical utility of sample-based and demographically adjusted normative standards for cognitive dispersion on the UDS 3.0. These standards can be used to guide interpretation of intraindividual variability among older adults in clinical and research settings.</p>","PeriodicalId":8176,"journal":{"name":"Archives of Clinical Neuropsychology","volume":" ","pages":"732-746"},"PeriodicalIF":2.1,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11345113/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139745951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In Response to Finsterer: Before Diagnosing SARS-CoV-2 Vaccination-Associated Immune Encephalitis Alternative Aetiologies Must be Ruled Out
Marialaura Di Tella, Ylenia Camassa Nahi, Gabriella Paglia, Giuliano Carlo Geminiani
{"title":"In Response to Finsterer: Before Diagnosing SARS-CoV-2 Vaccination-Associated Immune Encephalitis Alternative Aetiologies Must be Ruled Out.","authors":"Marialaura Di Tella, Ylenia Camassa Nahi, Gabriella Paglia, Giuliano Carlo Geminiani","doi":"10.1093/arclin/acae057","DOIUrl":"10.1093/arclin/acae057","url":null,"abstract":"","PeriodicalId":8176,"journal":{"name":"Archives of Clinical Neuropsychology","volume":" ","pages":"784-785"},"PeriodicalIF":2.1,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141756738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a Rasch-calibrated emotion recognition video test for patients with schizophrenia
Kuan-Wei Chen, Shih-Chieh Lee, Frank Huang-Chih Chou, Hsin-Yu Chiang, I-Ping Hsueh, Po-Hsi Chen, San-Ping Wang, Yu-Jeng Ju, Ching-Lin Hsieh
Patients with schizophrenia tend to have deficits in emotion recognition (ER) that affect their social function. However, commonly used ER measures appear to lack comprehensiveness, reliability, and validity, making it difficult to evaluate ER thoroughly. The purpose of this study was to develop the Computerized Emotion Recognition Video Test (CERVT) to evaluate ER ability in patients with schizophrenia. The study proceeded in two phases. First, we selected candidate CERVT items/videos covering 8 basic emotion domains from a published database. Second, we validated the selected CERVT items using Rasch analysis; in total, 269 patients and 177 healthy adults were recruited to ensure that participants had a wide range of ability. After removal of 21 misfit items (infit or outfit mean square > 1.4) and adjustment of the item difficulties of the 26 items with severe differential item functioning, the remaining 217 items were finalized as the CERVT items. All CERVT items showed good model fit, with small eigenvalues (≤ 2) in the residual-based principal components analysis for each domain, supporting the unidimensionality of these items. The 8 domains of the CERVT had good to excellent reliability (average Rasch reliabilities = 0.84-0.93). The CERVT contains items for the 8 basic emotions with individualized scores. Moreover, it showed acceptable reliability and validity, and its scores were not affected by examinees' gender. Thus, the CERVT has the potential to provide a comprehensive, reliable, valid, and gender-unbiased assessment of ER for patients with schizophrenia.
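The misfit rule above (infit or outfit mean square > 1.4) rests on standard Rasch residual statistics: for a dichotomous Rasch model, outfit is the unweighted mean of squared standardized residuals and infit is the information-weighted version. A minimal sketch of those statistics for a single item follows; the abilities, difficulty, and responses are hypothetical, and the CERVT's actual calibration is not reproduced:

```python
# Sketch of dichotomous Rasch infit/outfit mean-square statistics for one
# item, using the standard formulas; the person abilities, item difficulty,
# and responses are hypothetical (real values would come from calibration).
import numpy as np

theta = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5])   # person abilities
b = 0.2                                              # item difficulty
x = np.array([0, 1, 0, 1, 1, 1], dtype=float)        # observed responses

p = 1.0 / (1.0 + np.exp(-(theta - b)))               # expected scores
w = p * (1.0 - p)                                    # response variances
z2 = (x - p) ** 2 / w                                # squared standardized residuals

outfit = z2.mean()                                   # unweighted mean square
infit = np.sum(w * z2) / np.sum(w)                   # information-weighted mean square
print(f"outfit MSQ = {outfit:.2f}, infit MSQ = {infit:.2f}")
```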
{"title":"Development of a Rasch-calibrated emotion recognition video test for patients with schizophrenia.","authors":"Kuan-Wei Chen, Shih-Chieh Lee, Frank Huang-Chih Chou, Hsin-Yu Chiang, I-Ping Hsueh, Po-Hsi Chen, San-Ping Wang, Yu-Jeng Ju, Ching-Lin Hsieh","doi":"10.1093/arclin/acad098","DOIUrl":"10.1093/arclin/acad098","url":null,"abstract":"<p><p>Patients with schizophrenia tend to have deficits in emotion recognition (ER) that affect their social function. However, the commonly-used ER measures appear incomprehensive, unreliable and invalid, making it difficult to comprehensively evaluate ER. The purposes of this study were to develop the Computerized Emotion Recognition Video Test (CERVT) evaluating ER ability in patients with schizophrenia. This study was divided into two phases. First, we selected candidate CERVT items/videos of 8 basic emotion domains from a published database. Second, we validated the selected CERVT items using Rasch analysis. Finally, the 269 patients and 177 healthy adults were recruited to ensure the participants had diverse abilities. After the removal of 21 misfit (infit or outfit mean square > 1.4) items and adjustment of the item difficulties of the 26 items with severe differential item functioning, the remaining 217 items were finalized as the CERVT items. All the CERVT items showed good model fits with small eigenvalues (≤ 2) based on the residual-based principal components analysis for each domain, supporting the unidimensionality of these items. The 8 domains of the CERVT had good to excellent reliabilities (average Rasch reliabilities = 0.84-0.93). The CERVT contains items of the 8 basic emotions with individualized scores. Moreover, the CERVT showed acceptable reliability and validity, and the scores were not affected by examinees' gender. Thus, the CERVT has the potential to provide a comprehensive, reliable, valid, and gender-unbiased assessment of ER for patients with schizophrenia.</p>","PeriodicalId":8176,"journal":{"name":"Archives of Clinical Neuropsychology","volume":" ","pages":"724-731"},"PeriodicalIF":2.1,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139073205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction to: Validation of and Demographically Adjusted Normative Data for the Learning Ratio Derived from the RAVLT in Robustly Intact Older Adults.","authors":"","doi":"10.1093/arclin/acae024","DOIUrl":"10.1093/arclin/acae024","url":null,"abstract":"","PeriodicalId":8176,"journal":{"name":"Archives of Clinical Neuropsychology","volume":" ","pages":"786"},"PeriodicalIF":2.1,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11345110/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140130592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Test-Retest Reliability and Reliable Change on the NIH Toolbox Cognition Battery
Justin E Karr, Eric O Ingram, Cristina N Pinheiro, Sheliza Ali, Grant L Iverson
Objective: Researchers and practitioners can detect cognitive improvement or decline within a single examinee by applying a reliable change methodology. This study examined reliable change through test-retest data from the English-language National Institutes of Health Toolbox Cognition Battery (NIHTB-CB) normative sample.
Method: Participants included adults (n = 138; age: M ± SD = 54.8 ± 20.0, range: 18-85; 51.4% men; 68.1% White) who completed test-retest assessments about a week apart on five fluid cognition tests, providing raw scores, age-adjusted standard scores (SS), and demographic-adjusted T-scores (T).
Results: The Fluid Cognition Composite (SS: ICC = 0.87; T-score: ICC = 0.84) and the five fluid cognition tests had good test-retest reliability (SS: ICC range = 0.66-0.85; T-score: ICC range = 0.64-0.86). The lower and upper bounds of 70%, 80%, and 90% confidence intervals (CIs) were calculated around change scores; these bounds serve as cutoffs for determining reliable change. Using T-scores, a 90% CI, and adjustment for practice effects, 32.3% of participants declined on one or more tests, 9.7% declined on two or more tests, 36.6% improved on one or more tests, and 5.4% improved on two or more tests.
Conclusions: It was common for participants to show reliable change on at least one test score, but not two or more test scores. Per an 80% CI, test-retest difference scores beyond these cutoffs would indicate reliable change: Dimensional Change Card Sort (SS ≥ 14/T ≥ 10), Flanker (SS ≥ 12/T ≥ 8), List Sorting (SS ≥ 14/T ≥ 10), Picture Sequence Memory (SS ≥ 19/T ≥ 13), Pattern Comparison (SS ≥ 11/T ≥ 8), and Fluid Cognition Composite (SS ≥ 10/T ≥ 7). The reliable change cutoffs could be applied in research or practice to detect within-person change in fluid cognition at the individual level.
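At the individual level, the cutoffs above reduce to comparing a practice-adjusted test-retest difference against the CI bound for that test. A minimal sketch using the 80% CI standard-score (SS) cutoffs reported in the Conclusions follows; the examinee's scores and the practice-effect value are hypothetical:

```python
# Sketch of applying the 80% CI standard-score (SS) cutoffs reported above.
# The cutoffs come from the abstract; this examinee's test-retest scores
# and the mean practice effect are hypothetical placeholders.
SS_CUTOFFS_80 = {
    "Dimensional Change Card Sort": 14,
    "Flanker": 12,
    "List Sorting": 14,
    "Picture Sequence Memory": 19,
    "Pattern Comparison": 11,
    "Fluid Cognition Composite": 10,
}

def classify_change(test: str, baseline_ss: float, retest_ss: float,
                    practice_effect: float = 0.0) -> str:
    """Label a practice-adjusted test-retest difference as reliable or not."""
    diff = (retest_ss - baseline_ss) - practice_effect
    cutoff = SS_CUTOFFS_80[test]
    if diff >= cutoff:
        return "reliable improvement"
    if diff <= -cutoff:
        return "reliable decline"
    return "no reliable change"

print(classify_change("Flanker", baseline_ss=100, retest_ss=115, practice_effect=2))
```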
{"title":"Test-Retest Reliability and Reliable Change on the NIH Toolbox Cognition Battery.","authors":"Justin E Karr, Eric O Ingram, Cristina N Pinheiro, Sheliza Ali, Grant L Iverson","doi":"10.1093/arclin/acae011","DOIUrl":"10.1093/arclin/acae011","url":null,"abstract":"<p><strong>Objective: </strong>Researchers and practitioners can detect cognitive improvement or decline within a single examinee by applying a reliable change methodology. This study examined reliable change through test-retest data from the English-language National Institutes of Health Toolbox Cognition Battery (NIHTB-CB) normative sample.</p><p><strong>Method: </strong>Participants included adults (n = 138; age: M ± SD = 54.8 ± 20.0, range: 18-85; 51.4% men; 68.1% White) who completed test-retest assessments about a week apart on five fluid cognition tests, providing raw scores, age-adjusted standard scores (SS), and demographic-adjusted T-scores (T).</p><p><strong>Results: </strong>The Fluid Cognition Composite (SS: ICC = 0.87; T-score: ICC = 0.84) and the five fluid cognition tests had good test-retest reliability (SS: ICC range = 0.66-0.85; T-score: ICC range = 0.64-0.86). The lower and upper bounds of 70%, 80%, and 90% confidence intervals (CIs) were calculated around change scores, which serve as cutoffs for determining reliable change. Using T-scores, 90% CI, and adjustment for practice effects, 32.3% declined on one or more tests, 9.7% declined on two or more tests, 36.6% improved on one or more tests, and 5.4% improved on two or more tests.</p><p><strong>Conclusions: </strong>It was common for participants to show reliable change on at least one test score, but not two or more test scores. Per an 80% CI, test-retest difference scores beyond these cutoffs would indicate reliable change: Dimensional Change Card Sort (SS ≥ 14/T ≥ 10), Flanker (SS ≥ 12/T ≥ 8), List Sorting (SS ≥ 14/T ≥ 10), Picture Sequence Memory (SS ≥ 19/T ≥ 13), Pattern Comparison (SS ≥ 11/T ≥ 8), and Fluid Cognition Composite (SS ≥ 10/T ≥ 7). The reliable change cutoffs could be applied in research or practice to detect within-person change in fluid cognition at the individual level.</p>","PeriodicalId":8176,"journal":{"name":"Archives of Clinical Neuropsychology","volume":" ","pages":"702-713"},"PeriodicalIF":2.1,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11345114/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139943799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development and Validation of a Vocabulary Measure in the Mobile Toolbox
Stephanie Ruth Young, Elizabeth M Dworak, Aaron J Kaat, Hubert Adam, Miriam A Novack, Jerry Slotkin, Jordan Stoeger, Cindy J Nowinski, Zahra Hosseinian, Saki Amagai, Sarah Pila, Maria Varela Diaz, Anyelo Almonte Correa, Keith Alperin, Larsson Omberg, Michael Kellen, Monica R Camacho, Bernard Landavazo, Rachel L Nosheny, Michael W Weiner, Richard M Gershon
Objective: We describe the development of a new computer-adaptive vocabulary test, Mobile Toolbox (MTB) Word Meaning, and validity evidence from three studies.
Method: Word Meaning was designed to be a multiple-choice synonym test optimized for self-administration on a personal smartphone. The items were first calibrated online in a sample of 7,525 participants to create the computer-adaptive test algorithm for the Word Meaning measure within the MTB app. In Study 1, 92 participants self-administered Word Meaning on study-provided smartphones in the lab and were administered external measures by trained examiners. In Study 2, 1,021 participants completed the external measures in the lab and Word Meaning was self-administered remotely on their personal smartphones. In Study 3, 141 participants self-administered Word Meaning remotely twice with a 2-week delay on personal iPhones.
Results: The final bank included 1,363 items. Internal consistency was adequate to good across samples (ρxx = 0.78-0.81, p < .001). Test-retest reliability was good (ICC = 0.65, p < .001), and the mean theta score was not significantly different upon the second administration. Correlations were moderate to large with measures of similar constructs (ρ = 0.67-0.75, p < .001) and non-significant with measures of dissimilar constructs. Scores demonstrated small to moderate correlations with age (ρ = 0.35-0.45, p < .001) and education (ρ = 0.26, p < .001).
Conclusion: The MTB Word Meaning measure demonstrated evidence of reliability and validity in three samples. Further validation studies in clinical samples are necessary.
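Computer-adaptive tests of the kind described in the Method typically administer, at each step, the unanswered item with maximum Fisher information at the current ability estimate. A minimal sketch of that selection rule for a two-parameter logistic (2PL) bank follows; the item parameters are hypothetical, and the actual MTB Word Meaning algorithm is not specified in the abstract:

```python
# Sketch of maximum-information item selection, the usual engine of a
# computer-adaptive test. The 2PL item parameters and the current
# ability estimate are hypothetical.
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.0, 2.0])    # discriminations
b = np.array([-1.0, 0.0, 0.5, 1.2, 0.3])   # difficulties
administered = {0}                          # items already given

def next_item(theta: float) -> int:
    """Return the index of the most informative unadministered item."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL response probabilities
    info = a**2 * p * (1.0 - p)                  # Fisher information per item
    info[list(administered)] = -np.inf           # mask used items
    return int(np.argmax(info))

print(next_item(theta=0.4))
```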
{"title":"Development and Validation of a Vocabulary Measure in the Mobile Toolbox.","authors":"Stephanie Ruth Young, Elizabeth M Dworak, Aaron J Kaat, Hubert Adam, Miriam A Novack, Jerry Slotkin, Jordan Stoeger, Cindy J Nowinski, Zahra Hosseinian, Saki Amagai, Sarah Pila, Maria Varela Diaz, Anyelo Almonte Correa, Keith Alperin, Larsson Omberg, Michael Kellen, Monica R Camacho, Bernard Landavazo, Rachel L Nosheny, Michael W Weiner, Richard M Gershon","doi":"10.1093/arclin/acae010","DOIUrl":"10.1093/arclin/acae010","url":null,"abstract":"<p><strong>Objective: </strong>We describe the development of a new computer adaptive vocabulary test, Mobile Toolbox (MTB) Word Meaning, and validity evidence from 3 studies.</p><p><strong>Method: </strong>Word Meaning was designed to be a multiple-choice synonym test optimized for self-administration on a personal smartphone. The items were first calibrated online in a sample of 7,525 participants to create the computer-adaptive test algorithm for the Word Meaning measure within the MTB app. In Study 1, 92 participants self-administered Word Meaning on study-provided smartphones in the lab and were administered external measures by trained examiners. In Study 2, 1,021 participants completed the external measures in the lab and Word Meaning was self-administered remotely on their personal smartphones. In Study 3, 141 participants self-administered Word Meaning remotely twice with a 2-week delay on personal iPhones.</p><p><strong>Results: </strong>The final bank included 1363 items. Internal consistency was adequate to good across samples (ρxx = 0.78 to 0.81, p < .001). Test-retest reliability was good (ICC = 0.65, p < .001), and the mean theta score was not significantly different upon the second administration. Correlations were moderate to large with measures of similar constructs (ρ = 0.67-0.75, p < .001) and non-significant with measures of dissimilar constructs. Scores demonstrated small to moderate correlations with age (ρ = 0.35 to 0.45, p < .001) and education (ρ = 0.26, p < .001).</p><p><strong>Conclusion: </strong>The MTB Word Meaning measure demonstrated evidence of reliability and validity in three samples. Further validation studies in clinical samples are necessary.</p>","PeriodicalId":8176,"journal":{"name":"Archives of Clinical Neuropsychology","volume":" ","pages":"714-723"},"PeriodicalIF":2.1,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139982139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Utility of a Short-Form Phonemic Fluency Task
Jack R Kaufman, Hudaisa Fatima, Laura H Lacritz, C Munro Cullum
Objective: To establish a proof-of-concept and ascertain the reliability of an abbreviated 30-second (30s) phonemic fluency measure as a cognitive screening tool in older adults.
Methods: In all, 201 English-speaking individuals with normal cognition (NC; n = 119) or cognitive impairment (CI; mild CI or dementia; n = 82) were administered a standard 60s phonemic fluency task (FAS/CFL) with discrete 30s intervals denoted.
Results: For all letters, 30s trial scores significantly predicted 60s scores for the same letter, R2 = .7-.9, F(1, 200) = 850-915, p < .001. As with 60s total scores, 30s cumulative scores (across all three trials) differed significantly between NC and CI groups (p < .001). Receiver operating characteristic analyses showed that 30s total scores distinguished NC and CI groups as effectively (AUC = .675) as 60s total scores (AUC = .658).
Conclusions: These findings support the utility and reliability of a short-form phonemic fluency paradigm, as 30s performance reliably predicted 60s trial totals and was equally accurate in distinguishing impaired from non-impaired groups.
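The Results rest on two routine analyses: per-letter regression of 60s totals on 30s scores, and ROC curves comparing how well each total separates NC from CI. A minimal sketch follows, with hypothetical data standing in for the fluency scores and diagnostic labels:

```python
# Sketch of the two analyses reported above: predicting 60s fluency totals
# from 30s scores, and comparing group discrimination via ROC AUC.
# All data below are hypothetical placeholders.
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
score_30s = rng.poisson(8, size=201).astype(float)   # 30s letter totals
score_60s = score_30s + rng.poisson(4, size=201)     # 60s letter totals
impaired = rng.integers(0, 2, size=201)              # 1 = CI, 0 = NC

slope, intercept, r, p, se = stats.linregress(score_30s, score_60s)
print(f"R^2 = {r**2:.2f}, p = {p:.3g}")              # 30s predicting 60s

# Lower fluency should indicate impairment, so score the negated totals.
print(f"AUC (30s) = {roc_auc_score(impaired, -score_30s):.3f}")
print(f"AUC (60s) = {roc_auc_score(impaired, -score_60s):.3f}")
```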
{"title":"Utility of a Short-Form Phonemic Fluency Task.","authors":"Jack R Kaufman, Hudaisa Fatima, Laura H Lacritz, C Munro Cullum","doi":"10.1093/arclin/acae022","DOIUrl":"10.1093/arclin/acae022","url":null,"abstract":"<p><strong>Objective: </strong>to establish a proof-of-concept and ascertain the reliability of an abbreviated 30-second (30s) phonemic fluency measure as a cognitive screening tool in older adults.</p><p><strong>Methods: </strong>in all, 201 English-speaking individuals with normal cognition (NC; n = 119) or cognitive impairment (CI; mild CI or dementia; n = 82) were administered a standard 60s phonemic fluency task (FAS/CFL) with discrete 30s intervals denoted.</p><p><strong>Results: </strong>for all letters, 30s trial scores significantly predicted 60s scores for the same letter, R2 = .7-.9, F(1, 200) = 850-915, p < .001. As with 60s total scores, 30s cumulative scores (for all three trials) were significantly different between NC and CI groups (p < .001). Receiver operating characteristic analyses showed that 30s total scores distinguished NC and CI groups as effectively (AUC = .675) as 60s total scores (AUC = .658).</p><p><strong>Conclusions: </strong>these findings support the utility and reliability of a short-form phonemic fluency paradigm, as 30s performance reliably predicted 60s/trial totals and was equally accurate in distinguishing impaired/non-impaired groups.</p>","PeriodicalId":8176,"journal":{"name":"Archives of Clinical Neuropsychology","volume":" ","pages":"770-774"},"PeriodicalIF":2.1,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11345109/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140183563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anosognosia in Alzheimer's Pathology: Validation of a New Measure
Christian Terry, Len Lecci
Objective: Two studies were performed to validate a brief measure of cognitive insight and to compare it to an empirical model, the Cognitive Awareness Model (CAM).
Method: A pilot study included 31 patients (52% male; Mage = 69.42) from an outpatient neuropsychological assessment clinic. Seven patients were diagnosed with likely Alzheimer's dementia (AD), 15 with mild cognitive impairment (MCI), and 9 with no diagnosis (i.e., cognitively normal; CN). The Cognitive Coding Form (CCF) and several other measures were administered. Study 2 entailed archival data extraction for 240 patients (80 CN, 80 MCI, and 80 AD; 53.3% female; Mage = 72.8) to examine whether the CCF predicts memory (Wechsler Memory Scale-IV) and executive functioning (Trail-Making Test B).
Results: The pilot study found preliminary evidence of convergent and discriminant validity for the 8-item CCF. Study 2 confirmed that both patient-reported cognitive concerns (F(2,237) = 10.40, p < .001, ω2 = .07, power = .99) and, more strongly, CCF informant-patient discrepancy scores (F(2,237) = 24.52, p < .001, ω2 = .16, power = .99) can distinguish CN individuals from those with MCI and AD. A regression indicated that depression (5.5%; β = -.38, p < .001) and TMT-B (13%; β = -.43, p < .001) together accounted for 18.5% of the variance in insight (R2 = .19, F(2,219) = 26.10, p < .001), supporting the CAM.
Conclusions: These studies establish an efficient measure of insight with high clinical utility and inform the literature on the role of insight in predicting performance in those with Alzheimer's pathology.
{"title":"Anosognosia in Alzheimer's Pathology: Validation of a New Measure.","authors":"Christian Terry, Len Lecci","doi":"10.1093/arclin/acae020","DOIUrl":"10.1093/arclin/acae020","url":null,"abstract":"<p><strong>Objective: </strong>Two studies were performed to validate a brief measure of cognitive insight and compare it to an empirical model - the Cognitive Awareness Model (CAM).</p><p><strong>Method: </strong>A pilot study included 31 (52% male; Mage = 69.42) patients from an outpatient neuropsychological assessment clinic. Seven patients were diagnosed with likely Alzheimer's dementia (AD), 15 mild cognitive impairment (MCI), and 9 no diagnosis (i.e., cognitively normal; CN). The Cognitive Coding Form (CCF) and several other measures were administered. Study 2 entailed archival data extraction of 240 patients (80 CN, 80 MCI, and 80 AD; 53.3% female; Mage = 72.8) to examine whether the CCF predicts memory (Wechsler Memory Scale - IV) and executive functioning (Trail-Making Test B).</p><p><strong>Results: </strong>The pilot study found preliminary evidence of convergent and discriminant validity for the 8-item CCF. Study 2 confirmed that both patient-reported cognitive concerns (F(2,237) = 10.40, p < .001, ω2 = .07, power = .99) and, more strongly, CCF informant-patient discrepancy scores (F(2,237) = 24.52, p < .001, ω2 = .16, power = .99) can distinguish CNs from those with MCI and AD. A regression indicated that depression (5.5%; β = -.38, p < .001) and TMT-B (13%; β = -.43, p < .001), together accounted for 18.5% of the variance in insight (R2 = .19, F(2,219) = 26.10, p < .001), supporting the CAM.</p><p><strong>Conclusions: </strong>These studies establish an efficient measure of insight with high clinical utility and inform the literature on the role of insight in predicting performance in those with Alzheimer's pathology.</p>","PeriodicalId":8176,"journal":{"name":"Archives of Clinical Neuropsychology","volume":" ","pages":"669-682"},"PeriodicalIF":2.1,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140100921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Montreal Cognitive Assessment: Norms and Reliable Change Indices for Standard and MoCA-22 Administrations
Lauren N Ratcliffe, Andrew C Hale, Taylor McDonald, Kelsey C Hewitt, Christopher M Nguyen, Robert J Spencer, David W Loring
Objective: The Montreal Cognitive Assessment (MoCA) is among the most frequently administered cognitive screening tests, yet demographically diverse normative data are needed for repeated administrations.
Method: Data were obtained from 18,410 participants in the National Alzheimer's Coordinating Center Uniform Data Set. We developed regression-based norms using Tobit regression to account for ceiling effects, examined test-retest reliability of total and domain scores stratified by age and diagnosis using Cronbach's alpha, and reported cumulative change frequencies for individuals with serial MoCA administrations to gauge expected change.
Results: Strong ceiling effects and negative skew were observed at the total score, domain, and item levels for the cognitively normal group, and performances became more normally distributed as the degree of cognitive impairment increased. In regression models, years of education was associated with higher MoCA scores, whereas older age, male sex, Black and American Indian or Alaska Native race, and Hispanic ethnicity were associated with lower predicted scores. Temporal stability was adequate and good at the total score level for the cognitively normal and cognitive disorders groups, respectively, but fell short of reliability standards at the domain level.
Conclusions: MoCA total scores are adequately reproducible among those with cognitive diagnoses, but domain scores are unstable. Robust regression-based norms should be used to adjust for demographic performance differences, and the limited reliability, along with the ceiling effects and negative skew, should be considered when interpreting MoCA scores.
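Tobit regression, as used in the Method, treats scores at the test maximum (30 for the MoCA) as right-censored: censored observations contribute a survival probability to the likelihood rather than a density. A minimal sketch of that likelihood fitted with scipy follows; the single age predictor and the simulated data are hypothetical, and the published norms used the NACC sample with more covariates:

```python
# Sketch of a right-censored Tobit likelihood for ceiling-limited scores.
# Data and the single predictor are hypothetical placeholders.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(2)
age = rng.uniform(60, 90, size=500)
latent = 34 - 0.1 * age + rng.normal(0, 2, size=500)   # latent (uncensored) score
y = np.minimum(latent, 30.0)                           # MoCA ceiling at 30
censored = y >= 30.0

def neg_loglik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * age
    # Uncensored points use the normal density; ceiling points use P(latent >= 30).
    ll_obs = stats.norm.logpdf(y[~censored], mu[~censored], sigma)
    ll_cens = stats.norm.logsf(30.0, mu[censored], sigma)
    return -(ll_obs.sum() + ll_cens.sum())

fit = minimize(neg_loglik, x0=np.array([30.0, 0.0, 0.0]), method="Nelder-Mead")
print(fit.x)   # intercept, age slope, log(residual SD)
```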
{"title":"The Montreal Cognitive Assessment: Norms and Reliable Change Indices for Standard and MoCA-22 Administrations.","authors":"Lauren N Ratcliffe, Andrew C Hale, Taylor McDonald, Kelsey C Hewitt, Christopher M Nguyen, Robert J Spencer, David W Loring","doi":"10.1093/arclin/acae013","DOIUrl":"10.1093/arclin/acae013","url":null,"abstract":"<p><strong>Objective: </strong>The Montreal Cognitive Assessment (MoCA) is among the most frequently administered cognitive screening tests, yet demographically diverse normative data are needed for repeated administrations.</p><p><strong>Method: </strong>Data were obtained from 18,410 participants using the National Alzheimer's Coordinating Center Uniform Data Set. We developed regression-based norms using Tobit regression to account for ceiling effects, explored test-retest reliability of total scores and by domain stratified by age and diagnosis with Cronbach's alpha, and reported the cumulative change frequencies for individuals with serial MoCA administrations to gage expected change.</p><p><strong>Results: </strong>Strong ceiling effects and negative skew were observed at the total score, domain, and item levels for the cognitively normal group, and performances became more normally distributed as the degree of cognitive impairment increased. In regression models, years of education was associated with higher MoCA scores, whereas older age, male sex, Black and American Indian or Alaska Native race, and Hispanic ethnicity were associated with lower predicted scores. Temporal stability was adequate and good at the total score level for the cognitively normal and cognitive disorders groups, respectively, but fell short of reliability standards at the domain level.</p><p><strong>Conclusions: </strong>MoCA total scores are adequately reproducible among those with cognitive diagnoses, but domain scores are unstable. Robust regression-based norms should be used to adjust for demographic performance differences, and the limited reliability, along with the ceiling effects and negative skew, should be considered when interpreting MoCA scores.</p>","PeriodicalId":8176,"journal":{"name":"Archives of Clinical Neuropsychology","volume":" ","pages":"747-765"},"PeriodicalIF":2.1,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11345112/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140027239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}