Pub Date: 2024-12-01 | Epub Date: 2024-11-23 | DOI: 10.1080/13803395.2024.2432655 | Pages: 913-922
Initial expressed emotion during neuropsychological assessment: investigating motivational dimensions of approach and avoidance.
Karlee Patrick, Erin Burke, John Gunstad, Mary Beth Spitznagel
Objective: Prior work indicates that discrete emotions are linked to performance across multiple domains of cognitive function and thus have the potential to impact cognitive profiles in neuropsychological assessment. However, reported presence and magnitude of the relationships between emotion and cognitive test performance are inconsistent. Variable findings in this regard could be due to failure to consider motivations associated with expressed emotion. To better understand the potential impact of expressed emotion on neuropsychological test performance, it may be beneficial to consider approach and avoidance motivation during assessment.
Method: The current cross-sectional study examined associations between cognitive performance and digitally phenotyped facial expressions of discrete emotions on dimensions of approach (i.e. joy, sadness, anger) and avoidance (i.e. fear, disgust) in the context of virtual neuropsychological assessment in 104 adults (ages 55-90).
Results: Initial facial expressions categorized as anger and joy predicted reduced performance later within the virtual session on aspects of memory and executive function, respectively. Test performance was not associated with sadness or with the avoidance emotions (i.e. disgust or fear).
Conclusions: Results of the current study did not strongly align with approach/avoidance explanations for links between emotion and cognitive performance; however, results might support an arousal-based explanation, as joy and anger are both high arousal emotions. Additional investigation is needed to understand the intersection of emotion motivation and physiological arousal in the context of neuropsychological assessment.
{"title":"Initial expressed emotion during neuropsychological assessment: investigating motivational dimensions of approach and avoidance.","authors":"Karlee Patrick, Erin Burke, John Gunstad, Mary Beth Spitznagel","doi":"10.1080/13803395.2024.2432655","DOIUrl":"10.1080/13803395.2024.2432655","url":null,"abstract":"<p><strong>Objective: </strong>Prior work indicates that discrete emotions are linked to performance across multiple domains of cognitive function and thus have the potential to impact cognitive profiles in neuropsychological assessment. However, reported presence and magnitude of the relationships between emotion and cognitive test performance are inconsistent. Variable findings in this regard could be due to failure to consider motivations associated with expressed emotion. To better understand the potential impact of expressed emotion on neuropsychological test performance, it may be beneficial to consider approach and avoidance motivation during assessment.</p><p><strong>Method: </strong>The current cross-sectional study examined associations between cognitive performance and digitally phenotyped facial expressions of discrete emotions on dimensions of approach (i.e. joy, sadness, anger) and avoidance (i.e. fear, disgust) in the context of virtual neuropsychological assessment in 104 adults (ages 55-90).</p><p><strong>Results: </strong>Initial facial expressions categorized as anger and joy predicted later reduced cognitive performance in aspects of memory and executive function within the virtual session, respectively. Test performance was associated neither with sadness nor with avoidance emotions (i.e. disgust or fear).</p><p><strong>Conclusions: </strong>Results of the current study did not strongly align with approach/avoidance explanations for links between emotion and cognitive performance; however, results might support an arousal-based explanation, as joy and anger are both high arousal emotions. Additional investigation is needed to understand the intersection of emotion motivation and physiological arousal in the context of neuropsychological assessment.</p>","PeriodicalId":15382,"journal":{"name":"Journal of clinical and experimental neuropsychology","volume":" ","pages":"913-922"},"PeriodicalIF":1.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11802313/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142695520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2024-12-19 | DOI: 10.1080/13803395.2024.2441704 | Pages: 923-942
Social cognition in acquired brain injury: adaptation and validation of the Brief Assessment of Social Skills (BASS).
Kimberley Wallis, Linda Elisabet Campbell, Skye McDonald, Michelle Kelly
Background: Acquired brain injury (ABI) is associated with social cognitive impairments, yet these impairments are often overlooked during clinical assessments. There are few validated and clinically appropriate measures of social cognition in ABI. The current study examined the validity of the Brief Assessment of Social Skills (BASS) in measuring social cognition following ABI.
Method: Twenty-eight people with ABI were recruited from local brain injury rehabilitation and support services and completed measures of social cognition, general intellectual ability, and social functioning. Twenty-eight controls demographically matched for age, gender, and years of education also performed these measures.
Results: A diagnosis of ABI was significantly associated with poorer performance on five subtests of the BASS. The BASS showed moderate correlations with established measures of social cognition and measured characteristics that are distinguishable from general cognition. There was minimal evidence of a relationship between performance on the BASS and social functioning, apart from a significant relationship between one BASS subscale and informant-reported living skills and total social functioning. A series of case studies highlighted the clinical utility of the BASS by revealing unique social cognitive profiles across individuals with ABI, including impairments in areas that were not significant at the group level.
Discussion: The BASS is a brief yet comprehensive measure that can detect social cognition impairments in people with ABI. Given the prevalence of impairment in social cognition following ABI and the implications of these abilities for social functioning, this measure can be used in comprehensive neuropsychological assessment to guide and monitor progress toward rehabilitation goals.
{"title":"Social cognition in acquired brain injury: adaptation and validation of the Brief Assessment of Social Skills (BASS).","authors":"Kimberley Wallis, Linda Elisabet Campbell, Skye McDonald, Michelle Kelly","doi":"10.1080/13803395.2024.2441704","DOIUrl":"10.1080/13803395.2024.2441704","url":null,"abstract":"<p><strong>Background: </strong>Acquired brain injury (ABI) is associated with social cognitive impairments, yet these impairments are often overlooked during clinical assessments. There are few validated and clinically appropriate measures of social cognition in ABI. The current study examined the validity of the Brief Assessment of Social Skills (BASS) in measuring social cognition following ABI.</p><p><strong>Method: </strong>Twenty-eight people with ABI were recruited from local brain injury rehabilitation and support services and completed measures of social cognition, general intellectual ability, and social functioning. Twenty-eight controls demographically matched for age, gender, and years of education also performed these measures.</p><p><strong>Results: </strong>A diagnosis of ABI was significantly associated with poorer performance on five subtests of the BASS. The BASS had moderate correlations with established measures of social cognition and measures characteristics that are distinguishable from general cognition. There was minimal evidence of a relationship between performance on the BASS and social functioning, with a significant relationship between a BASS subscale and informant-reported living skills and total social functioning. Using a series of case studies, the clinical utility of the BASS was emphasized by the development of unique social cognitive profiles across ABI individuals, including impairments in areas not significant at a group level.</p><p><strong>Discussion: </strong>The BASS is a brief and comprehensive measure that is able to detect social cognition impairments in ABI patients. Given the prevalence of impairment in social cognition following ABI and the implications of these abilities on social functioning, this measure can be used in comprehensive neuropsychological assessment to guide and monitor progress toward rehabilitation goals.</p>","PeriodicalId":15382,"journal":{"name":"Journal of clinical and experimental neuropsychology","volume":" ","pages":"923-942"},"PeriodicalIF":1.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142852932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2025-01-02 | DOI: 10.1080/13803395.2024.2447263 | Pages: 943-965
Inspecting the external world: Memory capacity, but not memory self-efficacy, predicts offloading in working memory.
Sanne Böing, Antonia F Ten Brink, Carla Ruis, Zoë A Schielen, Esther Van den Berg, J Matthijs Biesbroek, Tanja C W Nijboer, Stefan Van der Stigchel
Individuals with memory impairments may often need to rely on the external world (i.e. offloading). By memorizing only a fraction of the items at hand and repeatedly looking back at the remaining items (i.e. inspecting), they can avoid relying on frail or effortful memory. However, individuals with subjective concerns may also prefer to rely on the external world even though their capacity is intact. Crucially, capacity assessment fails to recognize offloading strategies, while inspection assessment may reveal how people choose to deploy memory in everyday life. To disentangle the relative contributions of memory capacity and memory self-efficacy to offloading behavior, we recruited 29 individuals who were referred to a memory clinic and 38 age-matched individuals. We assessed memory capacity using neuropsychological measures, and memory self-efficacy using questionnaires. Inspection behavior was assessed in a copy task that allowed participants to store information up to their preferred load or to rely on the external world. Referred individuals had lower capacity scores and lower memory self-efficacy. They inspected as often as controls, but used longer inspections and performed worse. Across all participants, memory capacity - but not memory self-efficacy - explained inspection frequency and duration, with higher capacity associated with fewer and shorter inspections. Capacity measures thus translate to how people choose to deploy their memory in tasks that do not force full capacity use. However, people generally avoided remembering more than two items per inspection, and thus avoided using their full capacity. Inspection behavior was not further explained by memory self-efficacy, suggesting that inspections are not a sensitive measure of the constraints experienced in everyday life. Although we provide support for the predictive value of capacity tasks in tasks with more degrees of freedom, capacity tasks overlook the offloading behavior that individuals may employ to avoid using their full memory capacity in everyday life.
{"title":"Inspecting the external world: Memory capacity, but not memory self-efficacy, predicts offloading in working memory.","authors":"Sanne Böing, Antonia F Ten Brink, Carla Ruis, Zoë A Schielen, Esther Van den Berg, J Matthijs Biesbroek, Tanja C W Nijboer, Stefan Van der Stigchel","doi":"10.1080/13803395.2024.2447263","DOIUrl":"10.1080/13803395.2024.2447263","url":null,"abstract":"<p><p>Individuals with memory impairments may need to rely often on the external world (i.e. offloading). By memorizing only a fraction of the items at hand, and repeatedly looking back to the remainder of items (i.e. inspecting), they can avoid frailty or effortful memory use. However, individuals with subjective concerns may also prefer to rely on the external world even though their capacity is intact. Crucially, capacity assessment fails to recognize offloading strategies, while inspection assessment may reveal how people choose to deploy memory in everyday life. To disentangle the relative contributions of memory capacity and memory self-efficacy to offloading behavior, we recruited 29 individuals who were referred to a memory clinic and 38 age-matched individuals. We assessed memory capacity using neuropsychological measures, and memory self-efficacy using questionnaires. Inspection behavior was assessed in a copy task that allowed participants to store information to their preferred load or to rely on the external world. Referred individuals had lower capacity scores and lower memory self-efficacy. They inspected as often as controls, but used longer inspections and performed worse. Across all subjects, memory capacity - but not memory self-efficacy - explained inspection frequency and duration, with higher capacity associated with fewer and shorter inspections. Capacity measures thus translate to how people choose to deploy their memory in tasks that do not force full capacity use. However, people generally avoided remembering more than two items per inspection, and thus avoided using their full capacity. Inspection behavior was not further explained by memory self-efficacy, suggesting that inspections are not a sensitive measure of constraints experienced in everyday life. Although we provide support for the predictive value of capacity tasks in tasks with more degrees of freedom, capacity tasks overlook offloading behavior that individuals may employ to avoid using their full memory capacity in everyday life.</p>","PeriodicalId":15382,"journal":{"name":"Journal of clinical and experimental neuropsychology","volume":" ","pages":"943-965"},"PeriodicalIF":1.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142921812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2025-01-20 | DOI: 10.1080/13803395.2025.2455074 | Pages: 989-1000
Analysis of skew, examination of intercorrelations, and determining the optimal threshold for performance invalidity when 10 performance validity tests are administered during a neuropsychological evaluation.
Mira I Leese, John-Christopher A Finley, Karen S Basurto, Hannah B VanLandingham, Justyna Piszczor, Joseph M Bianco, Matthew S Phillips, Brian M Cerny, Ryan W Schroeder, Jason R Soble
Introduction: This study cross-validates and expands upon previous research by examining the optimal number of performance validity test (PVT) failures necessary to determine invalid performance when 10 PVTs are administered during a neuropsychological evaluation. Additionally, the study assessed the degree of skewness of individual PVTs and PVT intercorrelations for the overall sample and by validity group.
Method: Participants were 283 adult neuropsychology outpatients evaluated at an academic medical center. Participants were initially classified as having valid (≤1 PVT failure; n = 225) or invalid (≥2 PVT failures; n = 58; base rate of 20% performance invalidity) performance based on four independent criterion PVTs. Failure rates of 10 additional PVTs were then compared, and sensitivity and specificity were calculated at different thresholds (e.g. ≥1, ≥2, ≥3, ≥4 PVT failures) to determine the optimal threshold for detecting invalid performance while maintaining ≥ 90% specificity.
Results: Findings indicate that failing ≥ 2 PVTs yielded 86% sensitivity/76% specificity, failing ≥ 3 PVTs yielded 69% sensitivity/92% specificity, failing ≥ 4 PVTs yielded 57% sensitivity/96% specificity, failing ≥ 5 PVTs yielded 29% sensitivity/99% specificity, and failing ≥ 6 PVTs yielded 22% sensitivity/100% specificity. PVT intercorrelations were generally small for the overall sample and by validity group. As expected, data were more highly skewed for patients with valid performance.
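To make the threshold logic concrete, here is a minimal sketch (not the authors' code; the sample, base rate, and failure distributions are simulated assumptions) of how sensitivity and specificity can be tabulated for each candidate failure-count cutoff against a criterion classification of invalid performance.

```python
# Hedged illustration: sensitivity/specificity of "number of PVT failures" cutoffs.
# All data below are simulated; only the general procedure is being shown.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 283
# Hypothetical criterion status (True = invalid per independent criterion PVTs)
invalid = rng.random(n) < 0.20
# Hypothetical counts of failures across the 10 additional PVTs
failures = np.where(invalid,
                    rng.binomial(10, 0.45, n),   # invalid cases fail more PVTs
                    rng.binomial(10, 0.08, n))   # valid cases fail few PVTs

rows = []
for threshold in range(1, 7):                    # >=1 through >=6 PVT failures
    flagged = failures >= threshold
    sensitivity = (flagged & invalid).sum() / invalid.sum()
    specificity = (~flagged & ~invalid).sum() / (~invalid).sum()
    rows.append({"threshold": f">={threshold}",
                 "sensitivity": round(sensitivity, 2),
                 "specificity": round(specificity, 2)})

print(pd.DataFrame(rows))
```

In a table like this, the recommended cutoff is typically the lowest failure count that keeps specificity at or above .90, which is the logic behind the three-failure threshold reported above.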
Conclusions: Findings were consistent with previous research and demonstrate that the three-failure threshold optimally detects invalid performance when 10 PVTs are administered. These findings inform the use of multiple PVTs in clinical settings and aid in the interpretation of PVT results.
{"title":"Analysis of skew, examination of intercorrelations, and determining the optimal threshold for performance invalidity when 10 performance validity tests are administered during a neuropsychological evaluation.","authors":"Mira I Leese, John-Christopher A Finley, Karen S Basurto, Hannah B VanLandingham, Justyna Piszczor, Joseph M Bianco, Matthew S Phillips, Brian M Cerny, Ryan W Schroeder, Jason R Soble","doi":"10.1080/13803395.2025.2455074","DOIUrl":"10.1080/13803395.2025.2455074","url":null,"abstract":"<p><strong>Introduction: </strong>This study cross-validates and expands upon previous research by examining the optimal number of PVT failures necessary to determine invalid performance when 10 PVTs are administered during a neuropsychological evaluation. Additionally, the study assessed the degree of skewness of individual PVTs and PVT intercorrelations for the overall sample and by validity group.</p><p><strong>Method: </strong>Participants were 283 adult neuropsychology outpatients evaluated at an academic medical center. Participants were initially classified as having valid (≤1 PVT failure; <i>n</i> = 225) or invalid (≥2 PVT failures; <i>n</i> = 58; base rate of 20% performance invalidity) performance based on four independent criterion PVTs. Failure rates of 10 additional PVTs were then compared, and sensitivity and specificity were calculated at different thresholds (e.g. ≥1, ≥2, ≥3, ≥4 PVT failures) to determine the optimal threshold for detecting invalid performance while maintaining ≥ 90% specificity.</p><p><strong>Results: </strong>Findings indicate that failing ≥ 2 PVTs yielded 86% sensitivity/76% specificity, failing ≥ 3 PVTs yielded 69% sensitivity/92% specificity, failing ≥ 4 PVTs yielded 57% sensitivity/96% specificity, failing ≥ 5 PVTs yielded 29% sensitivity/99% specificity, and failing ≥ 6 PVTs yielded 22% sensitivity/100% specificity. PVT intercorrelations were generally small for the overall sample and by validity group. As expected, data were more highly skewed for patients with valid performance.</p><p><strong>Conclusions: </strong>Findings were consistent with previous research and demonstrate that the three-failure threshold optimally detects invalid performance when 10 PVTs are administered. These findings inform the use of multiple PVTs in clinical settings and aid in the interpretation of PVT results.</p>","PeriodicalId":15382,"journal":{"name":"Journal of clinical and experimental neuropsychology","volume":" ","pages":"989-1000"},"PeriodicalIF":1.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143006111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2025-01-15 | DOI: 10.1080/13803395.2025.2451320 | Pages: 978-988
Non-language neuropsychological measures increase sensitivity of identifying language reorganization in patients with epilepsy: a pilot study.
Marielle Nagele, Zerrin Yetkin, Kenneth Chase Bailey, David Denney, Thomas O'neil, Sasha Alick, Roderick McColl, Jason A D Smith
Objective: To examine neuropsychological characteristic differences between typical and atypical language dominance in adult persons with epilepsy (PWE) and mesial temporal sclerosis (MTS), including exploring the impact of selected clinical variables on detection of atypical language and neuropsychological performance.
Methods: Adults with intractable epilepsy and MTS (n = 39) underwent comprehensive, pre-surgical evaluation including fMRI and neuropsychological assessment. Participants with concordant lateralization of MTS and seizure onset were included. Participants were grouped by dichotomized typical or atypical language lateralization based on fMRI results. Neuropsychological performance and other relevant clinical variables of the aforementioned groups were then compared.
Results: Those with atypical language lateralization demonstrated poorer performance across neuropsychological tasks compared to those with typical language lateralization, although the language measures typically used to evaluate lateralization were not among those that differed significantly between groups. Differences in neuropsychological performance were particularly pronounced on TMT A, TMT B, Stroop (Color), GPB (Dominant), and GPB (Non-Dominant). ROC curves were provided to evaluate reproducibility at different thresholds.
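As a rough illustration of the kind of ROC analysis mentioned above, the sketch below uses hypothetical data and an assumed predictor (TMT B completion time); it is not the authors' analysis, only a sketch of how sensitivity and specificity are read off an ROC curve at different cutoffs.

```python
# Hedged illustration: ROC curve for separating atypical from typical language
# lateralization with a single non-language score. Data are simulated.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
atypical = np.r_[np.ones(12), np.zeros(27)]        # 1 = atypical lateralization (hypothetical)
tmt_b_time = np.r_[rng.normal(110, 25, 12),        # assumed slower in the atypical group
                   rng.normal(80, 20, 27)]

fpr, tpr, thresholds = roc_curve(atypical, tmt_b_time)
for f, t, thr in zip(fpr, tpr, thresholds):
    print(f"cutoff {thr:6.1f}s -> sensitivity {t:.2f}, specificity {1 - f:.2f}")
print(f"AUC = {roc_auc_score(atypical, tmt_b_time):.2f}")
```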
Conclusion: This pilot study revealed that those with atypical language lateralization demonstrated greater cognitive dysfunction across neuropsychological tasks than those with typical language lateralization. Neuropsychological measures outside the language domain detected subtle changes reflecting functional neuroanatomical reorganization, whereas language-domain tasks revealed no significant differences between these groups in the pre-surgical evaluation of PWE. While these preliminary results require further replication, they carry important implications for diagnostic and prognostic evaluation.
{"title":"Non-language neuropsychological measures increase sensitivity of identifying language reorganization in patients with epilepsy: a pilot study.","authors":"Marielle Nagele, Zerrin Yetkin, Kenneth Chase Bailey, David Denney, Thomas O'neil, Sasha Alick, Roderick McColl, Jason A D Smith","doi":"10.1080/13803395.2025.2451320","DOIUrl":"10.1080/13803395.2025.2451320","url":null,"abstract":"<p><strong>Objective: </strong>To examine neuropsychological characteristic differences between typical and atypical language dominance in adult persons with epilepsy (PWE) and mesial temporal sclerosis (MTS), including exploring the impact of selected clinical variables on detection of atypical language and neuropsychological performance.</p><p><strong>Methods: </strong>Adults with intractable epilepsy and MTS (<i>n</i> = 39) underwent comprehensive, pre-surgical evaluation including fMRI and neuropsychological assessment. Participants with concordant lateralization of MTS and seizure onset were included. Participants were grouped by dichotomized typical or atypical language lateralization based on fMRI results. Neuropsychological performance and other relevant clinical variables of the aforementioned groups were then compared.</p><p><strong>Results: </strong>Those with atypical language demonstrated poorer performance across neuropsychological tasks as compared to those with typical language lateralization. Although, typical neuropsychological measures used to evaluate language lateralization were not among those significantly different between the groups. Differences in neuropsychological performance were particularly pronounced on TMT A, TMT B, Stroop (Color), GPB (Dominant), and GPB (Non-Dominant). ROC Curve was provided to evaluate reproducibility at different thresholds.</p><p><strong>Conclusion: </strong>This pilot study revealed those with atypical language lateralization demonstrated greater cognitive dysfunction across neuropsychological tasks than those with typical language lateralization. Neuropsychological measures outside of the domain of language tests detected subtle changes of functional neuroanatomical reorganization while language domain tasks revealed no significant differences between aforementioned groups in pre-surgical evaluation of PWE. While these preliminary results require further replication, these are important implications for diagnostic and prognostic evaluation.</p>","PeriodicalId":15382,"journal":{"name":"Journal of clinical and experimental neuropsychology","volume":" ","pages":"978-988"},"PeriodicalIF":1.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143006113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2025-01-20 | DOI: 10.1080/13803395.2025.2451315 | Pages: 966-977
The relationship between self-monitoring and cognitive strategy use in midlife and older adults.
Nicole Whiteley, Brooke F Beech, Maureen Schmitter-Edgecombe
Introduction: Self-monitoring abilities, both in-the-moment (online) monitoring of one's errors and general self-knowledge of them (offline), are crucial for implementing task modifications that support healthy, independent aging. Cognitive strategies (CS) aid functional, physical, and cognitive abilities, but without recognizing the need for them, individuals may struggle to complete daily tasks. The current study examined whether higher levels of self-monitoring would predict greater use and higher quality of real-world cognitive strategies in older adults.
Methods: Participants included 80 community-dwelling midlife and older adults. Participants completed a remote battery of neuropsychological tasks, including a computerized go/no-go task that evaluated online self-monitoring, and a self-report questionnaire measuring offline self-monitoring (Cognitive Self-Efficacy Questionnaire). To assess CS, a count score (CS Quantity) and a utility score (CS Quality) were computed based on the strategies used to complete real-world prospective memory tasks.
Results: Online self-monitoring was not significantly related to offline self-monitoring (r(77) = -.07, p = .52). A hierarchical regression revealed that while offline self-monitoring significantly predicted 7% of the variance in CS Quality, above and beyond age, global cognition, and premorbid functioning (ΔR2 = .07, ΔF = 6.23, p = .02), the addition of online self-monitoring did not contribute significant incremental validity (ΔR2 = .001, ΔF = 0.12, p = .73). The second hierarchical regression revealed that neither online nor offline self-monitoring significantly predicted CS Quantity, after controlling for sex (ΔR2 = .004, ΔF = 0.29, p = .60).
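For readers unfamiliar with the ΔR²/ΔF notation, the following minimal sketch (simulated data; variable names such as offline_sm and cs_quality are hypothetical, not the study's) reproduces the logic of a two-step hierarchical regression testing incremental validity.

```python
# Hedged illustration: hierarchical regression with an incremental R^2 (delta R^2)
# and delta F test for the added predictor. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(2)
n = 80
df = pd.DataFrame({
    "age": rng.normal(68, 8, n),
    "global_cognition": rng.normal(0, 1, n),
    "premorbid": rng.normal(0, 1, n),
    "offline_sm": rng.normal(0, 1, n),
})
df["cs_quality"] = 0.3 * df["offline_sm"] + 0.2 * df["global_cognition"] + rng.normal(0, 1, n)

step1 = smf.ols("cs_quality ~ age + global_cognition + premorbid", data=df).fit()
step2 = smf.ols("cs_quality ~ age + global_cognition + premorbid + offline_sm", data=df).fit()

delta_r2 = step2.rsquared - step1.rsquared
df_num = step2.df_model - step1.df_model             # predictors added at step 2
delta_f = (delta_r2 / df_num) / ((1 - step2.rsquared) / step2.df_resid)
p_value = stats.f.sf(delta_f, df_num, step2.df_resid)
print(f"delta R^2 = {delta_r2:.3f}, delta F = {delta_f:.2f}, p = {p_value:.3f}")
```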
Conclusion: The results support the distinction between online and offline self-monitoring concepts and their assessment. For community-dwelling midlife and older adults without dementia, clinicians may consider an individual's perceptions of their ability to self-monitor when working to facilitate the use of cognitive strategies.
{"title":"The relationship between self-monitoring and cognitive strategy use in midlife and older adults.","authors":"Nicole Whiteley, Brooke F Beech, Maureen Schmitter-Edgecombe","doi":"10.1080/13803395.2025.2451315","DOIUrl":"10.1080/13803395.2025.2451315","url":null,"abstract":"<p><strong>Introduction: </strong>Self-monitoring abilities, both in the moment (online) and general self-knowledge (offline) of one's errors, are crucial to implementing modification to tasks to support healthy, independent aging. Cognitive strategies (CS) aid in functional, physical, and cognitive abilities, but without recognition of their need, individuals may struggle to complete daily tasks. The current study examined whether higher levels of self-monitoring would predict higher use and quality of real-world cognitive strategies in older adults.</p><p><strong>Methods: </strong>Participants included 80 community-dwelling midlife and older adults. Participants completed a remote battery of neuropsychological tasks, including a computerized go-no-go task that evaluated online self-monitoring, and a self-reported questionnaire to measure offline self-monitoring (Cognitive Self-Efficacy Questionnaire). To assess CS, a count score (CS Quantity) and utility score (CS Quality) were computed based on strategies utilized in completion of real-world prospective memory tasks.</p><p><strong>Results: </strong>Online self-monitoring was not significantly related to offline self-monitoring (<i>r</i>(77) = -.07, <i>p</i> = .52). A hierarchical regression revealed that while offline self-monitoring significantly predicted 7% of the variance in CS Quality, above and beyond age, global cognition, and premorbid functioning (Δ<i>R</i><sup>2</sup> = .07, Δ<i>F</i> = 6.23, <i>p</i> = .02), the addition of online self-monitoring did not contribute significant incremental validity (Δ<i>R</i><sup>2</sup> = .001, Δ<i>F</i> = 0.12, <i>p</i> = .73). The second hierarchical regression revealed that neither online nor offline self-monitoring significantly predicted CS Quantity, after controlling for sex (Δ<i>R</i><sup>2</sup> = .004, Δ<i>F</i> = 0.29, <i>p</i> = .60).</p><p><strong>Conclusion: </strong>The results support the distinction between online and offline self-monitoring concepts and their assessment. For community-dwelling midlife and older adults without dementia, clinicians may consider an individual's perceptions of their ability to self-monitor when working to facilitate the use of cognitive strategies.</p>","PeriodicalId":15382,"journal":{"name":"Journal of clinical and experimental neuropsychology","volume":" ","pages":"966-977"},"PeriodicalIF":1.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11802286/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143006115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2025-01-25 | DOI: 10.1080/13803395.2025.2458547 | Pages: 1015-1025
Detecting noncredible symptomology in ADHD evaluations using machine learning.
John-Christopher A Finley, Matthew S Phillips, Jason R Soble, Violeta J Rodriguez
Introduction: Diagnostic evaluations for attention-deficit/hyperactivity disorder (ADHD) are becoming increasingly complicated by the number of adults who fabricate or exaggerate symptoms. Novel methods are needed to improve the assessment process required to detect these noncredible symptoms. The present study investigated whether unsupervised machine learning (ML) could serve as one such method, and detect noncredible symptom reporting in adults undergoing ADHD evaluations.
Method: Participants were 623 adults who underwent outpatient ADHD evaluations. Patients' scores from symptom validity tests embedded in two self-report questionnaires were examined in an unsupervised ML model. The model, called "sidClustering," is based on a clustering and random forest algorithm. The model synthesized the raw scores (without cutoffs) from the symptom validity tests into an unspecified number of groups. The groups were then compared to predetermined ratings of credible versus noncredible symptom reporting. Noncredible symptom reporting was defined in two ways: by two or more, or by three or more, symptom validity test elevations.
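The sidClustering implementation itself is not described beyond "a clustering and random forest algorithm," so the sketch below is only a generic stand-in illustrating one common way to do random-forest-based unsupervised clustering (real-versus-shuffled classification, proximity-derived distances, then hierarchical clustering); every dataset, score, and parameter in it is hypothetical.

```python
# Hedged stand-in (not sidClustering): unsupervised clustering via random-forest
# proximities. Simulated "symptom validity scores" with an elevated subgroup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))          # hypothetical raw validity-scale scores
X[:60] += 1.5                          # a subgroup with elevated (overreported) scores

# Synthetic data: permute each column independently to destroy the joint structure.
X_synth = np.column_stack([rng.permutation(col) for col in X.T])
X_all = np.vstack([X, X_synth])
y_all = np.r_[np.ones(len(X)), np.zeros(len(X_synth))]

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_all, y_all)

# Proximity: fraction of trees in which two real observations share a leaf.
leaves = forest.apply(X)                                    # (n_samples, n_trees)
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
distance = 1.0 - prox

# Hierarchical clustering on the condensed proximity-derived distances; two groups.
condensed = distance[np.triu_indices_from(distance, k=1)]
labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
print(np.bincount(labels))                                  # group sizes
```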
Results: The model identified two groups that were significantly (p < .001) and meaningfully associated with the predetermined ratings of credible or noncredible symptom reporting, regardless of the number of elevations used to define noncredible reporting. The validity test assessing overreporting of various types of psychiatric symptoms was most influential in determining group membership, although symptom validity tests regarding ADHD-specific symptoms were also contributory.
Conclusion: These findings suggest that unsupervised ML can effectively identify noncredible symptom reporting using scores from multiple symptom validity tests without predetermined cutoffs. The ML-derived groups also support the use of two validity test elevations to identify noncredible symptom reporting. Collectively, these findings serve as a proof of concept that unsupervised ML can improve the process of detecting noncredible symptoms during ADHD evaluations. With additional research, unsupervised ML may become a useful supplementary tool for quickly and accurately detecting noncredible symptoms during these evaluations.
{"title":"Detecting noncredible symptomology in ADHD evaluations using machine learning.","authors":"John-Christopher A Finley, Matthew S Phillips, Jason R Soble, Violeta J Rodriguez","doi":"10.1080/13803395.2025.2458547","DOIUrl":"10.1080/13803395.2025.2458547","url":null,"abstract":"<p><strong>Introduction: </strong>Diagnostic evaluations for attention-deficit/hyperactivity disorder (ADHD) are becoming increasingly complicated by the number of adults who fabricate or exaggerate symptoms. Novel methods are needed to improve the assessment process required to detect these noncredible symptoms. The present study investigated whether unsupervised machine learning (ML) could serve as one such method, and detect noncredible symptom reporting in adults undergoing ADHD evaluations.</p><p><strong>Method: </strong>Participants were 623 adults who underwent outpatient ADHD evaluations. Patients' scores from symptom validity tests embedded in two self-report questionnaires were examined in an unsupervised ML model. The model, called \"sidClustering,\" is based on a clustering and random forest algorithm. The model synthesized the raw scores (without cutoffs) from the symptom validity tests into an unspecified number of groups. The groups were then compared to predetermined ratings of credible versus noncredible symptom reporting. The noncredible symptom ratings were defined by either two or three or more symptom validity test elevations.</p><p><strong>Results: </strong>The model identified two groups that were significantly (<i>p</i> < .001) and meaningfully associated with the predetermined ratings of credible or noncredible symptom reporting, regardless of the number of elevations used to define noncredible reporting. The validity test assessing overreporting of various types of psychiatric symptoms was most influential in determining group membership; but symptom validity tests regarding ADHD-specific symptoms were also contributory.</p><p><strong>Conclusion: </strong>These findings suggest that unsupervised ML can effectively identify noncredible symptom reporting using scores from multiple symptom validity tests without predetermined cutoffs. The ML-derived groups also support the use of two validity test elevations to identify noncredible symptom reporting. Collectively, these findings serve as a proof of concept that unsupervised ML can improve the process of detecting noncredible symptoms during ADHD evaluations. With additional research, unsupervised ML may become a useful supplementary tool for quickly and accurately detecting noncredible symptoms during these evaluations.</p>","PeriodicalId":15382,"journal":{"name":"Journal of clinical and experimental neuropsychology","volume":" ","pages":"1015-1025"},"PeriodicalIF":1.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143039026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2025-01-27 | DOI: 10.1080/13803395.2025.2458539 | Pages: 1001-1014
The impact of noise exposure, time pressure, and cognitive load on objective task performance and subjective sensory overload and fatigue.
Marilien C Marzolla, Lex Borghans, Juliëtte Ebus, Martyna Gwiazda, Caroline van Heugten, Petra Hurks
Introduction: Sensory hypersensitivity (SHS) refers to an increased sensitivity to sensory stimuli, often leading to sensory overload and adversely affecting daily functioning and well-being. This study examined the effects of three situational triggers - noise, time pressure, and cognitive load - on task performance, sensory overload, and fatigue. Additionally, we sought to explore the associations between these effects and SHS, while accounting for other influencing factors such as personality, coping mechanisms, and anxiety.
Method: We experimentally tested 105 university students, employing a visuospatial task (the Paper Folding Test, PFT) under eight different conditions, manipulating the three situational triggers. The measured outcomes included task accuracy, average response time, sensory overload, and fatigue. Participants also completed several questionnaires: Highly Sensitive Person Scale (HSPS), Multi-Modal Evaluation of Sensory Sensitivity (MESSY), State and Trait Anxiety Index, Big Five Inventory, and COPE Easy.
Results: Our findings indicated that sensory overload increased as more situational triggers were introduced, with noise having the most significant impact. However, this increase in sensory overload did not correspond to changes in objective performance measures, such as accuracy and average response time on the PFT, which were primarily influenced by cognitive load (i.e. easy versus difficult items). Additionally, individuals with higher levels of SHS (HSPS and MESSY) reported greater overall sensory overload and fatigue. Nonetheless, the impact of the triggers on sensory overload and fatigue was not exclusive to those with high SHS, and neuroticism, conscientiousness, openness, and trait anxiety were significant predictors of SHS, more so than task-related outcomes.
Conclusions: Feelings of sensory overload may not necessarily impair cognitive performance, and the impact of situational triggers can be similar for individuals with and without SHS. This implies that the burden of SHS and overall sensory overload may be influenced by other underlying factors leading to an elevation of baseline sensory overload, warranting further investigation.
{"title":"The impact of noise exposure, time pressure, and cognitive load on objective task performance and subjective sensory overload and fatigue.","authors":"Marilien C Marzolla, Lex Borghans, Juliëtte Ebus, Martyna Gwiazda, Caroline van Heugten, Petra Hurks","doi":"10.1080/13803395.2025.2458539","DOIUrl":"10.1080/13803395.2025.2458539","url":null,"abstract":"<p><strong>Introduction: </strong>Sensory hypersensitivity (SHS) refers to an increased sensitivity to sensory stimuli, often leading to sensory overload and adversely affecting daily functioning and well-being. This study examined the effects of three situational triggers - noise, time pressure, and cognitive load - on task performance, sensory overload, and fatigue. Additionally, we sought to explore the associations between these effects and SHS, while accounting for other influencing factors such as personality, coping mechanisms, and anxiety.</p><p><strong>Method: </strong>We experimentally tested 105 university students, employing a visuospatial task (the Paper Folding Test, PFT) under eight different conditions, manipulating the three situational triggers. The measured outcomes included task accuracy, average response time, sensory overload, and fatigue. Participants also completed several questionnaires: Highly Sensitive Person Scale (HSPS), Multi-Modal Evaluation of Sensory Sensitivity (MESSY), State and Trait Anxiety Index, Big Five Inventory, and COPE Easy.</p><p><strong>Results: </strong>Our findings indicated that sensory overload increased as more situational triggers were introduced, with noise having the most significant impact. However, this increase in sensory overload did not correspond to changes in objective performance measures, such as accuracy and average response time on the PFT, which were primarily influenced by cognitive load (i.e. easy versus difficult items). Additionally, individuals with higher levels of SHS (HSPS and MESSY) reported greater overall sensory overload and fatigue. Nonetheless, the impact of the triggers on sensory overload and fatigue was not exclusive to those with high SHS, and neuroticism, conscientiousness, openness, and trait anxiety were significant predictors of SHS, more so than task-related outcomes.</p><p><strong>Conclusions: </strong>Feelings of sensory overload may not necessarily impair cognitive performance, and the impact of situational triggers can be similar for individuals with and without SHS. This implies that the burden of SHS and overall sensory overload may be influenced by other underlying factors leading to an elevation of baseline sensory overload, warranting further investigation.</p>","PeriodicalId":15382,"journal":{"name":"Journal of clinical and experimental neuropsychology","volume":" ","pages":"1001-1014"},"PeriodicalIF":1.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143046496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | Epub Date: 2024-11-27 | DOI: 10.1080/13803395.2024.2427421 | Pages: 891-912
Impairments of attention in RRMS patients: the role of disease duration.
Devrim Kalkan, Murat Kurt
Introduction: The extent to which different types of attention are affected in relapsing-remitting multiple sclerosis (RRMS) as a function of disease duration has not been extensively analyzed. Therefore, the aim of this study was to determine whether, in a homogeneous sample of RRMS patients, patients differ from healthy individuals across types of attention, and in which year of the disease attention deficits begin. A further aim was to examine the effect of MS duration and stimulus onset asynchrony on dual-task performance.
Methods: The sample consisted of RRMS patients (n = 53) and healthy participants (n = 30) between the ages of 20 and 49, all of whom had completed at least primary school. Healthy participants in the comparison group were recruited using a snowball sampling technique. The Stroop Test, Cancellation Test, Paced Auditory Serial Addition Test, Coding Test, and the WMS-R Digit Span and Visual Memory Span subtests were administered to assess attention. Divided attention was assessed with a dual task developed from the psychological refractory period paradigm.
Results: The results show that there is a significant difference between RRMS patients and healthy participants in terms of different types of attention (p < 0.05). Focused, sustained and divided attention of RRMS patients and the ability to resist interference showed a significant decline from the 7th year of the disease (p < 0.05); no significant difference was found between healthy participants and patients with 1-6 years of RRMS.
Conclusions: Although the results are consistent with the literature showing that attention deficits develop in MS, they are important in demonstrating that attention deficits change with disease duration. Focused attention, sustained attention, interference resistance, and divided attention in RRMS patients showed a significant decline after the 7th year of the disease.
{"title":"Impairments of attention in RRMS patients: the role of disease duration.","authors":"Devrim Kalkan, Murat Kurt","doi":"10.1080/13803395.2024.2427421","DOIUrl":"10.1080/13803395.2024.2427421","url":null,"abstract":"<p><strong>Introduction: </strong>The extent to which different types of attention are affected in RRMS based on disease duration has not been extensively analyzed. Therefore, the aim of this study was to determine whether MS patients differ compared to healthy individuals in a homogeneous sample of RRMS patients in terms of attention types and from which year of MS attention deficit starts. Another aim of the study was to examine the effect of MS duration and stimulus onset asynchrony on dual task performance.</p><p><strong>Methods: </strong>The sample consisted of RRMS patients (<i>n</i> = 53) and healthy participants (<i>n</i> = 30) between the ages of 20-49, who were at least primary school graduates. Healthy participants in the comparison group were reached by snowball sampling technique. Stroop Test, Cancellation Test, Paced Auditory Serial Addition Test, Coding Test, WMS-R Digit Span and Visual Memory Span subtests were administered to assess attention. Divided attention performance was assessed with a dual task developed based on psychological refractory period paradigm.</p><p><strong>Results: </strong>The results show that there is a significant difference between RRMS patients and healthy participants in terms of different types of attention (<i>p</i> < 0.05). Focused, sustained and divided attention of RRMS patients and the ability to resist interference showed a significant decline from the 7th year of the disease (<i>p</i> < 0.05); no significant difference was found between healthy participants and patients with 1-6 years of RRMS.</p><p><strong>Conclusions: </strong>Although the results of the study are consistent with the literature which show that attention deficit develops in MS, it is important in terms of showing that attention deficit changes depending on the duration of the disease. Focused attention, sustained attention, interference resistance and divided attention performance of RRMS patients showed a significant decline after the 7th year of the disease.</p>","PeriodicalId":15382,"journal":{"name":"Journal of clinical and experimental neuropsychology","volume":" ","pages":"891-912"},"PeriodicalIF":1.8,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142728967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | Epub Date: 2024-11-10 | DOI: 10.1080/13803395.2024.2425004 | Pages: 840-847
A tale of two constructs: confirmatory factor analysis of performance and symptom validity tests.
Michael R Basso, Savanna M Tierney, Brad L Roper, Douglas M Whiteside, Dennis R Combs, Eduardo Estevis
Background: Performance validity (PV) and symptom validity (SV) tests assess biased responding that can impact scores on neuropsychological tests. The extent to which PV and SV represent overlapping or unique constructs remains incompletely defined, especially among psychiatric patients in a non-forensic setting. The current study investigated this question using confirmatory factor analysis.
Method: Eighty-two inpatients with mood disorders were administered the Word Memory Test, and its primary indices formed a latent PV variable. From the Minnesota Multiphasic Personality Inventory-2 (MMPI-2), the Fake Bad Scale (FBS), Response Bias Scale (RBS), and Henry-Heilbronner Index (HHI) were employed as indicators of a latent SV variable. Two models of the relationship between PV and SV were compared: one freely estimated the shared variance between the SV and PV latent constructs; the other assumed the relationship between SV and PV was homogeneous and fixed their covariance to 1.0.
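Schematically, the two competing models can be written as follows (notation ours, not the authors'; the WMT primary indices are denoted generically because the abstract does not name them):

```latex
\begin{align*}
\mathrm{WMT}_j &= \lambda^{\mathrm{PV}}_{j}\,\mathrm{PV} + \varepsilon_j, \qquad j = 1, 2, 3,\\
x_k &= \lambda^{\mathrm{SV}}_{k}\,\mathrm{SV} + \delta_k, \qquad x_k \in \{\mathrm{FBS},\ \mathrm{RBS},\ \mathrm{HHI}\},\\
\text{Model 1 (free):}\quad & \operatorname{Cov}(\mathrm{PV},\mathrm{SV}) = \phi \ \text{estimated from the data},\\
\text{Model 2 (fixed):}\quad & \operatorname{Cov}(\mathrm{PV},\mathrm{SV}) = 1.
\end{align*}
```

Comparing the fit of the two models tests whether PV and SV behave as a single homogeneous construct (Model 2) or as distinct, modestly related constructs (Model 1).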
Results: In the freely estimated model, the covariance between PV and SV was -0.18, and model fit was excellent (CFI = 0.98; TLI = 0.96; SRMR = 0.08). For the fixed model, the RBS, HHI, and FBS achieved low loadings on the SV construct, and model fit was poor (CFI = 0.66; TLI = 0.43; SRMR = 0.42).
Conclusions: PV as indexed by the WMT and SV measured by the MMPI-2 are not overlapping constructs among inpatients with mood disorders. These data imply that PV and SV represent distinct constructs in this population. Implications for practice are discussed.
{"title":"A tale of two constructs: confirmatory factor analysis of performance and symptom validity tests.","authors":"Michael R Basso, Savanna M Tierney, Brad L Roper, Douglas M Whiteside, Dennis R Combs, Eduardo Estevis","doi":"10.1080/13803395.2024.2425004","DOIUrl":"10.1080/13803395.2024.2425004","url":null,"abstract":"<p><strong>Background: </strong>Performance validity (PV) and symptom validity (SV) tests assess biased responding that impact scores on neuropsychological tests. The extent to which PV and SV represent overlapping or unique constructs remains incompletely defined, especially among psychiatric patients in a non-forensic setting. The current study investigated this question using confirmatory factor analysis.</p><p><strong>Method: </strong>Eighty-two inpatients with mood disorders were administered the Word Memory Test, and its primary indices formed a latent variable of PV. From the Minnesota Multiphasic Personality Inventory-2 the Fake Bad Scale (FBS), Response Bias Scale (RBS), and Henry-Heilbronner Index (HHI) were employed as a latent SV variable. Two models of the relationship between PV and SV were compared. One freely estimated the shared variance between SV and PV latent constructs. The other assumed the relationship between SV and PV was homogeneous, and covariance was fixed to 1.0.</p><p><strong>Results: </strong>In the freely estimated model, covariance between PV and SV was -0.18, and model fit was excellent (CFI = 0.098; TLI = 0.096; SRMR = 0.08). For the fixed model, the RBS, HHI, and FBS achieved low loadings on the SV construct, and model fit was poor (CFI = 0.66; TLI = 0.43; SRMR = 0.42).</p><p><strong>Conclusions: </strong>PV as indexed by the WMT and SV measured by the MMPI-2 are not overlapping constructs among inpatients with mood disorders. These data imply that PV and SV represent distinct constructs in this population. Implications for practice are discussed.</p>","PeriodicalId":15382,"journal":{"name":"Journal of clinical and experimental neuropsychology","volume":" ","pages":"840-847"},"PeriodicalIF":1.8,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142621239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}