Fabian Herold, Paula Theobald, Thomas Gronwald, Navin Kaushal, Liye Zou, Eling D de Bruin, Louis Bherer, Notger G Müller
A healthy lifestyle can be an important prerequisite to prevent or at least delay the onset of dementia. However, the large number of physically inactive adults underscores the need for developing and evaluating intervention approaches aimed at improving adherence to a physically active lifestyle. In this regard, hybrid physical training, which usually combines center- and home-based physical exercise sessions and has proven successful in rehabilitative settings, could offer a promising approach to preserving cognitive health in the aging population. Despite its potential, research in this area is limited, as hybrid physical training interventions have been underused in promoting healthy cognitive aging. Furthermore, the absence of a universally accepted definition or classification framework for hybrid physical training interventions poses a challenge to future progress in this direction. To address this gap, this article informs the reader about hybrid physical training by providing a definition and a classification approach for its different types, discussing their specific advantages and disadvantages, and offering recommendations for future research. Specifically, we focus on applying digital technologies to deliver home-based exercises, as their use holds significant potential for reaching underserved and marginalized groups, such as older adults with mobility impairments living in rural areas.
{"title":"The Best of Two Worlds to Promote Healthy Cognitive Aging: Definition and Classification Approach of Hybrid Physical Training Interventions.","authors":"Fabian Herold, Paula Theobald, Thomas Gronwald, Navin Kaushal, Liye Zou, Eling D de Bruin, Louis Bherer, Notger G Müller","doi":"10.2196/56433","DOIUrl":"10.2196/56433","url":null,"abstract":"<p><p>A healthy lifestyle can be an important prerequisite to prevent or at least delay the onset of dementia. However, the large number of physically inactive adults underscores the need for developing and evaluating intervention approaches aimed at improving adherence to a physically active lifestyle. In this regard, hybrid physical training, which usually combines center- and home-based physical exercise sessions and has proven successful in rehabilitative settings, could offer a promising approach to preserving cognitive health in the aging population. Despite its potential, research in this area is limited as hybrid physical training interventions have been underused in promoting healthy cognitive aging. Furthermore, the absence of a universally accepted definition or a classification framework for hybrid physical training interventions poses a challenge to future progress in this direction. To address this gap, this article informs the reader about hybrid physical training by providing a definition and classification approach of different types, discussing their specific advantages and disadvantages, and offering recommendations for future research. 
Specifically, we focus on applying digital technologies to deliver home-based exercises, as their use holds significant potential for reaching underserved and marginalized groups, such as older adults with mobility impairments living in rural areas.</p>","PeriodicalId":36245,"journal":{"name":"JMIR Aging","volume":"7 ","pages":"e56433"},"PeriodicalIF":5.0,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11325123/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141856695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chang Liu, Kai Zhang, Xiaodong Yang, Bingbing Meng, Jingsheng Lou, Yanhong Liu, Jiangbei Cao, Kexuan Liu, Weidong Mi, Hao Li
Background: Myocardial injury after noncardiac surgery (MINS) is an easily overlooked complication that is closely related to postoperative cardiovascular adverse outcomes; early diagnosis and prediction are therefore particularly important.
Objective: We aimed to develop and validate an explainable machine learning (ML) model for predicting MINS among older patients undergoing noncardiac surgery.
Methods: This retrospective cohort study included older patients who underwent noncardiac surgery at 1 northern center and 1 southern center in China. The data set from center 1 was divided into a training set and an internal validation set; the data set from center 2 was used as an external validation set. Before modeling, the least absolute shrinkage and selection operator and recursive feature elimination methods were used to reduce the dimensionality of the data and select key features from all variables. Prediction models were developed from the extracted features using several ML algorithms: category boosting (CatBoost), random forest, logistic regression, naïve Bayes, light gradient boosting machine, extreme gradient boosting, support vector machine, and decision tree. Prediction performance was assessed by the area under the receiver operating characteristic curve (AUROC) as the main evaluation metric for selecting the best algorithm. Model performance was verified on the internal and external validation data sets with the best algorithm and compared to the Revised Cardiac Risk Index. The Shapley Additive Explanations (SHAP) method was applied to calculate a value for each feature, representing its contribution to the predicted complication risk, and to generate personalized explanations.
Results: A total of 19,463 eligible patients were included: 12,464 patients from center 1 formed the training set, 4754 patients from center 1 the internal validation set, and 2245 patients from center 2 the external validation set. The best-performing model was the CatBoost algorithm, achieving the highest AUROC of 0.805 (95% CI 0.778-0.831) in the training set, with AUROCs of 0.780 in the internal validation set and 0.70 in the external validation set. Additionally, CatBoost demonstrated superior performance compared to the Revised Cardiac Risk Index (AUROC 0.636; P<.001). The SHAP values ranked the importance of each variable, with preoperative serum creatinine concentration, red blood cell distribution width, and age as the top three. The SHAP method flags events with positive values and nonevents with negative values, providing an explicit explanation of individualized risk predictions.
Conclusions: The ML models can provide a personalized and fairly accurate risk prediction of MINS, and the explainable perspective can help identify pot
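The model comparison above hinges on the AUROC. As a self-contained illustration (not the study's code), the metric can be computed directly from its probabilistic definition: the probability that a randomly chosen positive case (here, MINS) receives a higher risk score than a randomly chosen negative case, with ties counted as one-half.

```python
# Illustrative only (not the study's pipeline): rank-based AUROC,
# equivalent to a scaled Mann-Whitney U statistic.

def auroc(labels, scores):
    """P(score of random positive > score of random negative), ties = 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example: 2 events and 3 nonevents scored by some risk model.
labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.4, 0.5, 0.2, 0.1]
print(auroc(labels, scores))  # 5 of 6 pairs ranked correctly -> ~0.833
```

The O(pos×neg) double loop is fine for a sketch; production code would sort once and use rank sums.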
{"title":"Development and Validation of an Explainable Machine Learning Model for Predicting Myocardial Injury After Noncardiac Surgery in Two Centers in China: Retrospective Study.","authors":"Chang Liu, Kai Zhang, Xiaodong Yang, Bingbing Meng, Jingsheng Lou, Yanhong Liu, Jiangbei Cao, Kexuan Liu, Weidong Mi, Hao Li","doi":"10.2196/54872","DOIUrl":"10.2196/54872","url":null,"abstract":"<p><strong>Background: </strong>Myocardial injury after noncardiac surgery (MINS) is an easily overlooked complication but closely related to postoperative cardiovascular adverse outcomes; therefore, the early diagnosis and prediction are particularly important.</p><p><strong>Objective: </strong>We aimed to develop and validate an explainable machine learning (ML) model for predicting MINS among older patients undergoing noncardiac surgery.</p><p><strong>Methods: </strong>The retrospective cohort study included older patients who had noncardiac surgery from 1 northern center and 1 southern center in China. The data sets from center 1 were divided into a training set and an internal validation set. The data set from center 2 was used as an external validation set. Before modeling, the least absolute shrinkage and selection operator and recursive feature elimination methods were used to reduce dimensions of data and select key features from all variables. Prediction models were developed based on the extracted features using several ML algorithms, including category boosting, random forest, logistic regression, naïve Bayes, light gradient boosting machine, extreme gradient boosting, support vector machine, and decision tree. Prediction performance was assessed by the area under the receiver operating characteristic (AUROC) curve as the main evaluation metric to select the best algorithms. The model performance was verified by internal and external validation data sets with the best algorithm and compared to the Revised Cardiac Risk Index. 
The Shapley Additive Explanations (SHAP) method was applied to calculate values for each feature, representing the contribution to the predicted risk of complication, and generate personalized explanations.</p><p><strong>Results: </strong>A total of 19,463 eligible patients were included; among those, 12,464 patients in center 1 were included as the training set; 4754 patients in center 1 were included as the internal validation set; and 2245 in center 2 were included as the external validation set. The best-performing model for prediction was the CatBoost algorithm, achieving the highest AUROC of 0.805 (95% CI 0.778-0.831) in the training set, validating with an AUROC of 0.780 in the internal validation set and 0.70 in external validation set. Additionally, CatBoost demonstrated superior performance compared to the Revised Cardiac Risk Index (AUROC 0.636; P<.001). The SHAP values indicated the ranking of the level of importance of each variable, with preoperative serum creatinine concentration, red blood cell distribution width, and age accounting for the top three. 
The results from the SHAP method can predict events with positive values or nonevents with negative values, providing an explicit explanation of individualized risk predictions.</p><p><strong>Conclusions: </strong>The ML models can provide a personalized and fairly accurate risk prediction of MINS, and the explainable perspective can help identify pot","PeriodicalId":36245,"journal":{"name":"JMIR Aging","volume":"7 ","pages":"e54872"},"PeriodicalIF":5.0,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294761/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141861142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Angélique Roquet, Paolo Martinelli, Charikleia Lampraki, Daniela S Jopp
Background: Internet use has increased dramatically worldwide, with over two-thirds of the world's population using it, including the older adult population. Technical resources such as internet use have been shown to positively influence psychological processes such as stress. According to Hobfoll's Conservation of Resources theory, the experience of stress largely depends on individuals' personal resources and changes in those resources. While personal resource loss has been shown to lead to stress, little is known about the role technical resources may play in the relationship between personal resources and stress.
Objective: This study aims to investigate the moderating effect of technical resources (internet use) on the relationship between personal resources and stress in younger and older adults.
Methods: A total of 275 younger adults (aged 18 to 30 years) and 224 older adults (aged ≥65 years) indicated their levels of stress; change in personal resources (ie, cognitive, social, and self-efficacy resource loss and gain); and internet use. Variance analyses, multiple regression, and moderation analyses were performed to investigate the correlates of stress.
Results: Older adults, despite experiencing greater resource loss (questionnaire scores: 1.82 vs 1.54; P<.001) and less resource gain (questionnaire scores: 1.82 vs 2.31; P<.001), were less stressed than younger adults (questionnaire scores: 1.99 vs 2.47; P<.001). The relationship among resource loss, resource gain, and stress in older adults was moderated by their level of internet use (β=.09; P=.05). Specifically, older adults who used the internet more frequently were less stressed when they experienced high levels of both loss and gain than counterparts who used the internet less under the same conditions. Furthermore, older adults with low resource gain and high resource loss expressed less stress when they used the internet more often compared with those with low internet use.
Conclusions: These findings highlight the importance of internet use in mitigating stress among older adults experiencing resource loss and gain, emphasizing the potential of digital interventions to promote mental health in this population.
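The moderation effect reported above boils down to an interaction term in a regression: internet use moderates the loss-stress relationship if the coefficient on loss × internet is nonzero. A minimal sketch with invented data, solved by ordinary least squares via the normal equations (not the study's analysis):

```python
# Hypothetical sketch of a moderation analysis: stress regressed on resource
# loss, internet use, and their product. A nonzero loss*internet coefficient
# is what "internet use moderates the loss-stress relationship" means.
# Data, variable names, and solver are illustrative.

def solve(A, b):
    """Gaussian elimination with partial pivoting (adequate for 4x4)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols_moderation(loss, internet, stress):
    # Design matrix columns: intercept, loss, internet, loss*internet.
    X = [[1.0, l, i, l * i] for l, i in zip(loss, internet)]
    XtX = [[sum(row[a] * row[b] for row in X) for b in range(4)] for a in range(4)]
    Xty = [sum(row[a] * y for row, y in zip(X, stress)) for a in range(4)]
    return solve(XtX, Xty)  # normal equations: (X'X) beta = X'y

# Toy data built so stress = 2 + 1.0*loss - 0.5*loss*internet: frequent
# internet use halves the slope of loss on stress (a buffering effect).
loss = [0, 1, 2, 3, 0, 1, 2, 3]
internet = [0, 0, 0, 0, 1, 1, 1, 1]
stress = [2 + 1.0 * l - 0.5 * l * i for l, i in zip(loss, internet)]
b0, b_loss, b_net, b_inter = ols_moderation(loss, internet, stress)
print(round(b_inter, 2))  # recovers the -0.5 interaction (up to fp error)
```

In practice one would use a statistics package that also reports standard errors and p values for the interaction term; the point here is only where the moderation effect lives in the model.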
{"title":"Internet Use as a Moderator of the Relationship Between Personal Resources and Stress in Older Adults: Cross-Sectional Study.","authors":"Angélique Roquet, Paolo Martinelli, Charikleia Lampraki, Daniela S Jopp","doi":"10.2196/52555","DOIUrl":"10.2196/52555","url":null,"abstract":"<p><strong>Background: </strong>Internet use has dramatically increased worldwide, with over two-thirds of the world's population using it, including the older adult population. Technical resources such as internet use have been shown to influence psychological processes such as stress positively. Following the Conservation of Resources theory by Hobfoll, stress experience largely depends on individuals' personal resources and the changes in these resources. While personal resource loss has been shown to lead to stress, we know little regarding the role that technical resources may play on the relationship between personal resources and stress.</p><p><strong>Objective: </strong>This study aims to investigate the moderating effect of technical resources (internet use) on the relationship between personal resources and stress in younger and older adults.</p><p><strong>Methods: </strong>A total of 275 younger adults (aged 18 to 30 years) and 224 older adults (aged ≥65 years) indicated their levels of stress; change in personal resources (ie, cognitive, social, and self-efficacy resource loss and gain); and internet use. Variance analyses, multiple regression, and moderation analyses were performed to investigate the correlates of stress.</p><p><strong>Results: </strong>Results showed that older adults, despite experiencing higher levels of resource loss (questionnaire scores: 1.82 vs 1.54; P<.001) and less resource gain (questionnaire scores: 1.82 vs 2.31; P<.001), were less stressed than younger adults (questionnaire scores: 1.99 vs 2.47; P<.001). 
We observed that the relationship among resource loss, resource gain, and stress in older adults was moderated by their level of internet use (β=.09; P=.05). Specifically, older adults who used the internet more frequently were less stressed when they experienced high levels of both loss and gain compared to their counterparts who used internet the less in the same conditions. Furthermore, older adults with low resource gain and high resource loss expressed less stress when they used the internet more often compared to those with low internet use.</p><p><strong>Conclusions: </strong>These findings highlight the importance of internet use in mitigating stress among older adults experiencing resource loss and gain, emphasizing the potential of digital interventions to promote mental health in this population.</p>","PeriodicalId":36245,"journal":{"name":"JMIR Aging","volume":"7 ","pages":"e52555"},"PeriodicalIF":5.0,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11297370/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141724615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anthony L Teano, Ashley Scott, Cassandra Gipson, Marilyn Albert, Corinne Pettigrew
Background: Social media may be a useful method for research centers to deliver health messages, increase their visibility in the local community, and recruit study participants. Sharing examples of social media-based community outreach and educational programs, and evaluating their outcomes in this setting, is important for understanding whether these efforts have a measurable impact.
Objective: The aim of this study is to describe one center's social media activities for community education on topics related to aging, memory loss, and Alzheimer disease and related dementias, and provide metrics related to recruitment into clinical research studies.
Methods: Several social media platforms were used, including Facebook, X (formerly Twitter), and YouTube. Objective assessments, quantified monthly from each platform's native dashboard, included the number of followers, number of posts, post reach and engagement, post impressions, and video views. The number of participants volunteering for research during this period was additionally tracked in a secure database. Educational material posted to social media most frequently comprised content developed by center staff, content from partner organizations, and news articles or resources featuring center researchers. Multiple educational programs were developed, including social media series, web-based talks, Twitter chats, and webinars. In more recent years, Facebook content was occasionally boosted to increase visibility in the local geographic region.
Results: Up to 4 years of page metrics demonstrated continuing growth in reaching social media audiences, as indicated by increases over time in the numbers of likes or followers on Facebook and X/Twitter and views of YouTube videos (growth trajectories). While Facebook reach and X/Twitter impression rates were reasonable, Facebook engagement rates were more modest. Months that included boosted Facebook posts resulted in a greater change in page followers and page likes, and higher reach and engagement rates (all P≤.002). Recruitment of participants into center-affiliated research studies increased during this time frame, particularly in response to boosted Facebook posts.
Conclusions: These data demonstrate that social media activities can provide meaningful community educational opportunities focused on Alzheimer disease and related dementias and have a measurable impact on the recruitment of participants into research studies. Additionally, this study highlights the importance of tracking outreach program outcomes for evaluating return on investment.
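A comparison like "months with boosted posts had higher reach and engagement (all P≤.002)" can be run as a nonparametric two-sample test on the monthly metrics. The sketch below uses a Mann-Whitney U test with a normal approximation on made-up monthly reach figures; the abstract does not name the exact test used by the authors.

```python
# Hypothetical sketch: comparing monthly page reach between months with and
# without boosted Facebook posts via a two-sided Mann-Whitney U test
# (normal approximation, average ranks for ties). All numbers are invented.
import math

def mann_whitney_u(a, b):
    n1, n2 = len(a), len(b)
    pooled = sorted(a + b)
    def avg_rank(v):
        lo = pooled.index(v) + 1              # first 1-based position of v
        return lo + (pooled.count(v) - 1) / 2  # average rank across ties
    r1 = sum(avg_rank(v) for v in a)           # rank sum of group a
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))       # two-sided p, normal approx.
    return u1, z, p

boosted = [210, 180, 250, 195]       # reach in boosted months (toy data)
organic = [90, 120, 60, 105, 80, 70]  # reach in non-boosted months
u, z, p = mann_whitney_u(boosted, organic)
print(u, round(p, 4))
```

With such small samples an exact test would be preferable; the normal approximation keeps the sketch short.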
{"title":"Social Media Programs for Outreach and Recruitment Supporting Aging and Alzheimer Disease and Related Dementias Research: Longitudinal Descriptive Study.","authors":"Anthony L Teano, Ashley Scott, Cassandra Gipson, Marilyn Albert, Corinne Pettigrew","doi":"10.2196/51520","DOIUrl":"10.2196/51520","url":null,"abstract":"<p><strong>Background: </strong>Social media may be a useful method for research centers to deliver health messages, increase their visibility in the local community, and recruit study participants. Sharing examples of social media-based community outreach and educational programs, and evaluating their outcomes in this setting, is important for understanding whether these efforts have a measurable impact.</p><p><strong>Objective: </strong>The aim of this study is to describe one center's social media activities for community education on topics related to aging, memory loss, and Alzheimer disease and related dementias, and provide metrics related to recruitment into clinical research studies.</p><p><strong>Methods: </strong>Several social media platforms were used, including Facebook, X (formerly Twitter), and YouTube. Objective assessments quantified monthly, based on each platform's native dashboard, included the number of followers, number of posts, post reach and engagement, post impressions, and video views. The number of participants volunteering for research during this period was additionally tracked using a secure database. Educational material posted to social media most frequently included content developed by center staff, content from partner organizations, and news articles or resources featuring center researchers. Multiple educational programs were developed, including social media series, web-based talks, Twitter chats, and webinars. 
In more recent years, Facebook content was occasionally boosted to increase visibility in the local geographical region.</p><p><strong>Results: </strong>Up to 4 years of page metrics demonstrated continuing growth in reaching social media audiences, as indicated by increases over time in the numbers of likes or followers on Facebook and X/Twitter and views of YouTube videos (growth trajectories). While Facebook reach and X/Twitter impression rates were reasonable, Facebook engagement rates were more modest. Months that included boosted Facebook posts resulted in a greater change in page followers and page likes, and higher reach and engagement rates (all P≤.002). Recruitment of participants into center-affiliated research studies increased during this time frame, particularly in response to boosted Facebook posts.</p><p><strong>Conclusions: </strong>These data demonstrate that social media activities can provide meaningful community educational opportunities focused on Alzheimer disease and related dementias and have a measurable impact on the recruitment of participants into research studies. Additionally, this study highlights the importance of tracking outreach program outcomes for evaluating return on investment.</p>","PeriodicalId":36245,"journal":{"name":"JMIR Aging","volume":"7 ","pages":"e51520"},"PeriodicalIF":5.0,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11267090/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141564716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xinyue Hu, Zenan Sun, Yi Nian, Yichen Wang, Yifang Dang, Fang Li, Jingna Feng, Evan Yu, Cui Tao
Background: Alzheimer disease and related dementias (ADRD) rank as the sixth leading cause of death in the United States, underlining the importance of accurate ADRD risk prediction. While recent advancements in ADRD risk prediction have primarily relied on imaging analysis, not all patients undergo medical imaging before an ADRD diagnosis. Merging machine learning with claims data can reveal additional risk factors and uncover interconnections among diverse medical codes.
Objective: The study aims to use graph neural networks (GNNs) with claims data for ADRD risk prediction. Addressing the lack of human-interpretable reasons behind these predictions, we introduce an innovative, self-explainable method to evaluate relationship importance and its influence on ADRD risk prediction.
Methods: We used a variationally regularized encoder-decoder GNN (variational GNN [VGNN]) integrated with our proposed relation importance method for estimating ADRD likelihood. This self-explainable method can provide a feature-importance explanation in the context of ADRD risk prediction, leveraging relational information within a graph. Three scenarios, with 1-year, 2-year, and 3-year prediction windows, were created to assess the model's performance. Random forest (RF) and light gradient boosting machine (LGBM) were used as baselines. Using this method, we further clarify the key relationships for ADRD risk prediction.
Results: In scenario 1, the VGNN model achieved area under the receiver operating characteristic curve (AUROC) scores of 0.7272 and 0.7480 on the small subset and the matched cohort data set, outperforming RF and LGBM by 10.6% and 9.1% on average, respectively. In scenario 2, it achieved AUROC scores of 0.7125 and 0.7281, surpassing the other models by 10.5% and 8.9%, respectively. Similarly, in scenario 3, it obtained AUROC scores of 0.7001 and 0.7187, exceeding the baseline models by 10.1% and 8.5%, respectively. These results demonstrate the clear superiority of the graph-based approach over the tree-based models (RF and LGBM) in predicting ADRD. Furthermore, integrating the VGNN model with our relation importance interpretation can provide valuable insight into paired factors that may contribute to or delay ADRD progression.
Conclusions: Using our innovative self-explainable method with claims data enhances ADRD risk prediction and provides insights into the impact of interconnected medical code relationships. This methodology not only enables ADRD risk modeling but also shows potential for other image analysis predictions using claims data.
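The core operation of GNNs such as the VGNN described above is message passing: each node (here, a medical code from claims) updates its embedding by aggregating its neighbors' embeddings. A minimal sketch with invented codes, edges, and features; the actual VGNN adds a variational encoder-decoder and learned weights over relations.

```python
# Hypothetical sketch of one message-passing round on a tiny claims graph:
# each node's new embedding is the mean of its own and its neighbors'
# embeddings. Codes, edges, and feature vectors are invented.

def message_pass(adj, feats):
    """One round of mean aggregation over each node's closed neighborhood."""
    out = {}
    for node, nbrs in adj.items():
        group = [node] + list(nbrs)
        dim = len(feats[node])
        out[node] = [sum(feats[g][d] for g in group) / len(group)
                     for d in range(dim)]
    return out

# Toy claims graph: codes linked when they co-occur in a patient's claims.
adj = {
    "E11.9": ["I10", "N18.3"],  # type 2 diabetes <-> hypertension, CKD
    "I10": ["E11.9"],
    "N18.3": ["E11.9"],
}
feats = {"E11.9": [1.0, 0.0], "I10": [0.0, 1.0], "N18.3": [0.5, 0.5]}
h1 = message_pass(adj, feats)
print(h1["E11.9"])
```

Stacking several such rounds lets information from multi-hop neighbors (interconnected medical codes) flow into each node's representation before a readout layer predicts risk.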
{"title":"Self-Explainable Graph Neural Network for Alzheimer Disease and Related Dementias Risk Prediction: Algorithm Development and Validation Study.","authors":"Xinyue Hu, Zenan Sun, Yi Nian, Yichen Wang, Yifang Dang, Fang Li, Jingna Feng, Evan Yu, Cui Tao","doi":"10.2196/54748","DOIUrl":"10.2196/54748","url":null,"abstract":"<p><strong>Background: </strong>Alzheimer disease and related dementias (ADRD) rank as the sixth leading cause of death in the United States, underlining the importance of accurate ADRD risk prediction. While recent advancements in ADRD risk prediction have primarily relied on imaging analysis, not all patients undergo medical imaging before an ADRD diagnosis. Merging machine learning with claims data can reveal additional risk factors and uncover interconnections among diverse medical codes.</p><p><strong>Objective: </strong>The study aims to use graph neural networks (GNNs) with claim data for ADRD risk prediction. Addressing the lack of human-interpretable reasons behind these predictions, we introduce an innovative, self-explainable method to evaluate relationship importance and its influence on ADRD risk prediction.</p><p><strong>Methods: </strong>We used a variationally regularized encoder-decoder GNN (variational GNN [VGNN]) integrated with our proposed relation importance method for estimating ADRD likelihood. This self-explainable method can provide a feature-important explanation in the context of ADRD risk prediction, leveraging relational information within a graph. Three scenarios with 1-year, 2-year, and 3-year prediction windows were created to assess the model's efficiency, respectively. Random forest (RF) and light gradient boost machine (LGBM) were used as baselines. 
By using this method, we further clarify the key relationships for ADRD risk prediction.</p><p><strong>Results: </strong>In scenario 1, the VGNN model showed area under the receiver operating characteristic (AUROC) scores of 0.7272 and 0.7480 for the small subset and the matched cohort data set. It outperforms RF and LGBM by 10.6% and 9.1%, respectively, on average. In scenario 2, it achieved AUROC scores of 0.7125 and 0.7281, surpassing the other models by 10.5% and 8.9%, respectively. Similarly, in scenario 3, AUROC scores of 0.7001 and 0.7187 were obtained, exceeding 10.1% and 8.5% than the baseline models, respectively. These results clearly demonstrate the significant superiority of the graph-based approach over the tree-based models (RF and LGBM) in predicting ADRD. Furthermore, the integration of the VGNN model and our relation importance interpretation could provide valuable insight into paired factors that may contribute to or delay ADRD progression.</p><p><strong>Conclusions: </strong>Using our innovative self-explainable method with claims data enhances ADRD risk prediction and provides insights into the impact of interconnected medical code relationships. This methodology not only enables ADRD risk modeling but also shows potential for other image analysis predictions using claims data.</p>","PeriodicalId":36245,"journal":{"name":"JMIR Aging","volume":"7 ","pages":"e54748"},"PeriodicalIF":5.0,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11263893/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141559973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ana Inés Ansaldo, Michèle Masson-Trottier, Barbara Delacourt, Jade Dubuc, Catherine Dubé
Background: Persons living with dementia experience autonomy loss and require caregiver support on a daily basis. Dementia involves a gradual decline in communication skills, leading to fewer interactions and isolation for both people living with dementia and their caregivers, negatively impacting the quality of life for both members of the dyad. The resulting stress and burden on caregivers make them particularly susceptible to burnout.
Objective: This study aims to examine the efficacy of Communication Proches Aidants (COMPAs), an app designed following the principles of person-centered and emotional communication, which is intended to improve well-being in persons living with dementia and caregivers and reduce caregiver burden.
Methods: In this implementation study, volunteer caregivers in 2 long-term care facilities (n=17) were trained in using COMPAs and strategies to improve communication with persons living with dementia. Qualitative and quantitative analyses, semistructured interviews, and questionnaires were completed before and after 8 weeks of intervention with COMPAs.
Results: Semistructured interviews revealed that all caregivers perceived a positive impact following COMPAs interventions, namely, improved quality of communication and quality of life among persons living with dementia and caregivers. Improved quality of life was also supported by a statistically significant reduction in General Health Questionnaire-12 scores (caregivers who improved: 9/17, 53%; z=2.537; P=.01). COMPAs interventions were also associated with a statistically significant increase in feelings of personal accomplishment (caregivers improved: 11/17, 65%; t15=2.430; P=.03; d=0.61 [medium effect size]).
Conclusions: The COMPAs intervention improved well-being in persons living with dementia and their caregivers by fostering person-centered communication within the dyad, increasing empathy, and reducing caregiver burden, even though most caregivers were unfamiliar with the technology. The results hold promise for COMPAs interventions in long-term care settings. Larger controlled studies with different populations, in different contexts, and at different stages of dementia will provide a clearer picture of the benefits of COMPAs interventions.
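The pre/post comparison behind a result like "t15=2.430; P=.03" is a paired t test on before/after questionnaire scores. A self-contained sketch on invented scores (with n pairs, the statistic has n-1 degrees of freedom):

```python
# Hypothetical sketch of a paired pre/post comparison: paired t statistic
# on before/after questionnaire scores. Scores below are invented and do
# not reproduce the study's data.
import math

def paired_t(before, after):
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1              # (t, df)

# Toy pre/post personal-accomplishment scores for 6 caregivers.
pre = [20, 25, 22, 30, 18, 24]
post = [26, 27, 25, 33, 21, 24]
t, df = paired_t(pre, post)
print(round(t, 2), df)
```

The p value would then come from the t distribution with df degrees of freedom; a statistics library handles that step in practice.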
{"title":"Efficacy of COMPAs, an App Designed to Support Communication Between Persons Living With Dementia in Long-Term Care Settings and Their Caregivers: Mixed Methods Implementation Study.","authors":"Ana Inés Ansaldo, Michèle Masson-Trottier, Barbara Delacourt, Jade Dubuc, Catherine Dubé","doi":"10.2196/47565","DOIUrl":"10.2196/47565","url":null,"abstract":"<p><strong>Background: </strong>Persons living with dementia experience autonomy loss and require caregiver support on a daily basis. Dementia involves a gradual decline in communication skills, leading to fewer interactions and isolation for both people living with dementia and their caregivers, negatively impacting the quality of life for both members of the dyad. The resulting stress and burden on caregivers make them particularly susceptible to burnout.</p><p><strong>Objective: </strong>This study aims to examine the efficacy of Communication Proches Aidants (COMPAs), an app designed following the principles of person-centered and emotional communication, which is intended to improve well-being in persons living with dementia and caregivers and reduce caregiver burden.</p><p><strong>Methods: </strong>In this implementation study, volunteer caregivers in 2 long-term care facilities (n=17) were trained in using COMPAs and strategies to improve communication with persons living with dementia. Qualitative and quantitative analyses, semistructured interviews, and questionnaires were completed before and after 8 weeks of intervention with COMPAs.</p><p><strong>Results: </strong>Semistructured interviews revealed that all caregivers perceived a positive impact following COMPAs interventions, namely, improved quality of communication and quality of life among persons living with dementia and caregivers. Improved quality of life was also supported by a statistically significant reduction in the General Health Questionnaire-12 scores (caregivers who improved: 9/17, 53%; z=2.537; P=.01). 
COMPAs interventions were also associated with a statistically significant increased feeling of personal accomplishment (caregivers improved: 11/17, 65%; t<sub>15</sub>=2.430; P=.03; d=0.61 [medium effect size]).</p><p><strong>Conclusions: </strong>COMPAs intervention improved well-being in persons living with dementia and their caregivers by developing person-centered communication within the dyad, increasing empathy, and reducing burden in caregivers although most caregivers were unfamiliar with technology. The results hold promise for COMPAs interventions in long-term care settings. Larger group-controlled studies with different populations, in different contexts, and at different stages of dementia will provide a clearer picture of the benefits of COMPAs interventions.</p>","PeriodicalId":36245,"journal":{"name":"JMIR Aging","volume":"7 ","pages":"e47565"},"PeriodicalIF":5.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11258517/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141499177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Usability is a key indicator of the quality of technology products. In tandem with technological advancements, potential use by individuals with dementia is increasing. However, defining the usability of technology for individuals with dementia remains an ongoing challenge. The diverse and progressive nature of dementia adds complexity to the creation of universal usability criteria, highlighting the need for focused deliberations. Technological interventions offer potential benefits for people living with dementia and caregivers. Amid COVID-19, technology's role in health care access is growing, especially among older adults. Enabling the diverse population of people living with dementia to enjoy the benefits of technologies requires particular attention to their needs, desires, capabilities, and vulnerabilities to potential harm from technologies. Successful technological interventions for dementia require meticulous consideration of technology usability.
Objective: This concept analysis aims to examine the usability of technology in the context of individuals living with dementia to establish a clear definition for usability within this specific demographic.
Methods: The framework by Walker and Avant was used to guide this concept analysis. We conducted a literature review spanning 1984 to 2024, exploring technology usability for people with dementia through the PubMed, Web of Science, and Google Scholar databases using the keywords "technology usability" and "dementia." We also incorporated clinical definitions and integrated interview data from 29 dyads comprising individuals with mild Alzheimer dementia and their respective care partners, resulting in a total of 58 older adults. This approach aimed to offer a more comprehensive portrayal of the usability needs of individuals living with dementia, emphasizing practical application.
Results: The evidence from the literature review unveiled that usability encompasses attributes such as acceptable learnability, efficiency, and satisfaction. The clinical perspective on dementia stages, subtypes, and symptoms underscores the importance of tailored technology usability assessment. Feedback from the 29 dyads also emphasized the value of simplicity, clear navigation, age-sensitive design, personalized features, and audio support. Thus, design should prioritize personalized assistance for individuals living with dementia, moving away from standardized technological approaches. Synthesized from various sources, the defined usability attributes for individuals living with dementia not only encompass the general usability properties of effectiveness, efficiency, and satisfaction but also include other key factors: adaptability, personalization, intuitiveness, and simplicity, to ensure that technology is supportive and yields tangible benefits for this demographic.
Conclusions: Usabilit
{"title":"Technology Usability for People Living With Dementia: Concept Analysis.","authors":"Shao-Yun Chien, Oleg Zaslavsky, Clara Berridge","doi":"10.2196/51987","DOIUrl":"10.2196/51987","url":null,"abstract":"<p><strong>Background: </strong>Usability is a key indicator of the quality of technology products. In tandem with technological advancements, potential use by individuals with dementia is increasing. However, defining the usability of technology for individuals with dementia remains an ongoing challenge. The diverse and progressive nature of dementia adds complexity to the creation of universal usability criteria, highlighting the need for focused deliberations. Technological interventions offer potential benefits for people living with dementia and caregivers. Amid COVID-19, technology's role in health care access is growing, especially among older adults. Enabling the diverse population of people living with dementia to enjoy the benefits of technologies requires particular attention to their needs, desires, capabilities, and vulnerabilities to potential harm from technologies. Successful technological interventions for dementia require meticulous consideration of technology usability.</p><p><strong>Objective: </strong>This concept analysis aims to examine the usability of technology in the context of individuals living with dementia to establish a clear definition for usability within this specific demographic.</p><p><strong>Methods: </strong>The framework by Walker and Avant was used to guide this concept analysis. 
We conducted a literature review spanning 1984 to 2024, exploring technology usability for people with dementia through the PubMed, Web of Science, and Google Scholar databases using the keywords \"technology usability\" and \"dementia.\" We also incorporated clinical definitions and integrated interview data from 29 dyads comprising individuals with mild Alzheimer dementia and their respective care partners, resulting in a total of 58 older adults. This approach aimed to offer a more comprehensive portrayal of the usability needs of individuals living with dementia, emphasizing practical application.</p><p><strong>Results: </strong>The evidence from the literature review unveiled that usability encompasses attributes such as acceptable learnability, efficiency, and satisfaction. The clinical perspective on dementia stages, subtypes, and symptoms underscores the importance of tailored technology usability assessment. Feedback from 29 dyads also emphasized the value of simplicity, clear navigation, age-sensitive design, personalized features, and audio support. Thus, design should prioritize personalized assistance for individuals living with dementia, moving away from standardized technological approaches. 
Synthesized from various sources, the defined usability attributes for individuals living with dementia not only encompass the general usability properties of effectiveness, efficiency, and satisfaction but also include other key factors: adaptability, personalization, intuitiveness, and simplicity, to ensure that technology is supportive and yields tangible benefits for this demographic.</p><p><strong>Conclusions: </strong>Usabilit","PeriodicalId":36245,"journal":{"name":"JMIR Aging","volume":"7 ","pages":"e51987"},"PeriodicalIF":5.0,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11255540/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141493814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leanne Greene, Miia Rahja, Kate Laver, Vun Vun Wong, Chris Leung, Maria Crotty
Background: Over the past decade, the adoption of virtual wards has surged. Virtual wards aim to prevent unnecessary hospital admissions, expedite home discharge, and enhance patient satisfaction; these benefits are particularly valuable for older adults, who face heightened risks associated with hospitalization. Consequently, substantial investments are being made in virtual rehabilitation wards (VRWs), despite evidence of varying levels of success in their implementation. However, the facilitators and barriers experienced by virtual ward staff during the rapid implementation of these innovative care models remain poorly understood.
Objective: This paper presents insights from hospital staff working on an Australian VRW established in response to the growing demand for programs aimed at preventing hospital admissions. We explore staff perspectives on the facilitators and barriers of the VRW, shedding light on service setup and delivery.
Methods: Qualitative interviews were conducted with 21 VRW staff using the Nonadoption, Abandonment, Scale-up, Spread, and Sustainability (NASSS) framework. The data were analyzed using framework analysis across the 7 domains of the NASSS framework.
Results: The results were mapped onto the 7 domains of the NASSS framework. (1) Condition: Managing certain conditions, especially those involving comorbidities and sociocultural factors, can be challenging. (2) Technology: The VRW demonstrated suitability for technologically engaged patients without cognitive impairment, offering advantages in clinical decision-making through remote monitoring and video calls. However, interoperability issues and equipment malfunctions caused staff frustration, highlighting the importance of promptly addressing technical challenges. (3) Value proposition: The VRW empowered patients to choose their care location, extending access to care for rural communities and enabling home-based treatment for older adults. (4) Adopters and (5) organizations: Despite these benefits, the cultural shift from in-person to remote treatment introduced uncertainties in workflows, professional responsibilities, resource allocation, and intake processes. (6) Wider system and (7) embedding: As the service continues to develop to address gaps in hospital capacity, it is imperative to prioritize ongoing adaptation. This includes refining the process of smoothly transferring patients back to the hospital, addressing technical aspects, ensuring seamless continuity of care, and thoughtfully considering how the burden of care may shift to patients and their families.
Conclusions: In this qualitative study exploring health care staff's experience of an innovative VRW, we identified several drivers and challenges to implementation and acceptability. The findings have implications for future services considering implementing VRWs for older adults in terms of servi
{"title":"Hospital Staff Perspectives on the Drivers and Challenges in Implementing a Virtual Rehabilitation Ward: Qualitative Study.","authors":"Leanne Greene, Miia Rahja, Kate Laver, Vun Vun Wong, Chris Leung, Maria Crotty","doi":"10.2196/54774","DOIUrl":"10.2196/54774","url":null,"abstract":"<p><strong>Background: </strong>Over the past decade, the adoption of virtual wards has surged. Virtual wards aim to prevent unnecessary hospital admissions, expedite home discharge, and enhance patient satisfaction, which are particularly beneficial for the older adult population who faces risks associated with hospitalization. Consequently, substantial investments are being made in virtual rehabilitation wards (VRWs), despite evidence of varying levels of success in their implementation. However, the facilitators and barriers experienced by virtual ward staff for the rapid implementation of these innovative care models remain poorly understood.</p><p><strong>Objective: </strong>This paper presents insights from hospital staff working on an Australian VRW in response to the growing demand for programs aimed at preventing hospital admissions. We explore staff's perspectives on the facilitators and barriers of the VRW, shedding light on service setup and delivery.</p><p><strong>Methods: </strong>Qualitative interviews were conducted with 21 VRW staff using the Nonadoption, Abandonment, Scale-up, Spread, and Sustainability (NASSS) framework. The analysis of data was performed using framework analysis and the 7 domains of the NASSS framework.</p><p><strong>Results: </strong>The results were mapped onto the 7 domains of the NASSS framework. (1) Condition: Managing certain conditions, especially those involving comorbidities and sociocultural factors, can be challenging. (2) Technology: The VRW demonstrated suitability for technologically engaged patients without cognitive impairment, offering advantages in clinical decision-making through remote monitoring and video calls. 
However, interoperability issues and equipment malfunctions caused staff frustration, highlighting the importance of promptly addressing technical challenges. (3) Value proposition: The VRW empowered patients to choose their care location, extending access to care for rural communities and enabling home-based treatment for older adults. (4) Adopters and (5) organizations: Despite these benefits, the cultural shift from in-person to remote treatment introduced uncertainties in workflows, professional responsibilities, resource allocation, and intake processes. (6) Wider system and (7) embedding: As the service continues to develop to address gaps in hospital capacity, it is imperative to prioritize ongoing adaptation. This includes refining the process of smoothly transferring patients back to the hospital, addressing technical aspects, ensuring seamless continuity of care, and thoughtfully considering how the burden of care may shift to patients and their families.</p><p><strong>Conclusions: </strong>In this qualitative study exploring health care staff's experience of an innovative VRW, we identified several drivers and challenges to implementation and acceptability. The findings have implications for future services considering implementing VRWs for older adults in terms of servi","PeriodicalId":36245,"journal":{"name":"JMIR Aging","volume":"7 ","pages":"e54774"},"PeriodicalIF":5.0,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11220728/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141477567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Javad Razjouyan, Ariela R Orkaby, Molly J Horstman, Parag Goyal, Orna Intrator, Aanand D Naik
{"title":"The Frailty Trajectory's Additional Edge Over the Frailty Index: Retrospective Cohort Study of Veterans With Heart Failure.","authors":"Javad Razjouyan, Ariela R Orkaby, Molly J Horstman, Parag Goyal, Orna Intrator, Aanand D Naik","doi":"10.2196/56345","DOIUrl":"10.2196/56345","url":null,"abstract":"","PeriodicalId":36245,"journal":{"name":"JMIR Aging","volume":"7 ","pages":"e56345"},"PeriodicalIF":5.0,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11220725/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141477568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emily W Paolillo, Kaitlin B Casaletto, Annie L Clark, Jack C Taylor, Hilary W Heuer, Amy B Wise, Sreya Dhanam, Mark Sanderson-Cimino, Rowan Saloner, Joel H Kramer, John Kornak, Walter Kremers, Leah Forsberg, Brian Appleby, Ece Bayram, Andrea Bozoki, Danielle Brushaber, R Ryan Darby, Gregory S Day, Bradford C Dickerson, Kimiko Domoto-Reilly, Fanny Elahi, Julie A Fields, Nupur Ghoshal, Neill Graff-Radford, Matthew G H Hall, Lawrence S Honig, Edward D Huey, Maria I Lapid, Irene Litvan, Ian R Mackenzie, Joseph C Masdeu, Mario F Mendez, Carly Mester, Toji Miyagawa, Georges Naasan, Belen Pascual, Peter Pressman, Eliana Marisa Ramos, Katherine P Rankin, Jessica Rexach, Julio C Rojas, Lawren VandeVrede, Bonnie Wong, Zbigniew K Wszolek, Bradley F Boeve, Howard J Rosen, Adam L Boxer, Adam M Staffaroni
Background: Frontotemporal lobar degeneration (FTLD) is a leading cause of dementia in individuals aged <65 years. Several challenges to conducting in-person evaluations in FTLD underscore an urgent need to develop remote, accessible, and low-burden assessment techniques. Studies of unobtrusive monitoring of at-home computer use in older adults with mild cognitive impairment show that declining function is reflected in reduced computer use; however, associations with smartphone use are unknown.
Objective: This study aims to characterize daily trajectories in smartphone battery use, a proxy for smartphone use, and examine relationships with clinical indicators of severity in FTLD.
Methods: Participants were 231 adults (mean age 52.5, SD 14.9 years; n=94, 40.7% men; n=223, 96.5% non-Hispanic White) enrolled in the Advancing Research and Treatment of Frontotemporal Lobar Degeneration (ARTFL study) and Longitudinal Evaluation of Familial Frontotemporal Dementia Subjects (LEFFTDS study) Longitudinal Frontotemporal Lobar Degeneration (ALLFTD) Mobile App study, including 49 (21.2%) with mild neurobehavioral changes and no functional impairment (ie, prodromal FTLD), 43 (18.6%) with neurobehavioral changes and functional impairment (ie, symptomatic FTLD), and 139 (60.2%) clinically normal adults, of whom 55 (39.6%) harbored heterozygous pathogenic or likely pathogenic variants in an autosomal dominant FTLD gene. Participants completed the Clinical Dementia Rating plus National Alzheimer's Coordinating Center Frontotemporal Lobar Degeneration Behavior and Language Domains (CDR+NACC FTLD) scale; a neuropsychological battery; the Neuropsychiatric Inventory; and brain magnetic resonance imaging. The ALLFTD Mobile App was installed on participants' smartphones for remote, passive, and continuous monitoring of smartphone use. Battery percentage was collected every 15 minutes over an average of 28 (SD 4.2; range 14-30) days. To determine whether temporal patterns of battery percentage varied as a function of disease severity, linear mixed effects models examined linear, quadratic, and cubic effects of the time of day and their interactions with each measure of disease severity on battery percentage. Models covaried for age, sex, smartphone type, and estimated smartphone age.
Results: The CDR+NACC FTLD global score interacted with time on battery percentage such that participants with prodromal or symptomatic FTLD demonstrated less change in battery percentage throughout the day (a proxy for less smartphone use) than clinically normal participants (P<.001 in both cases). Additional models showed that worse performance in all cognitive domains assessed (ie, executive functioning, memory, language, and visuospatial skills), more neuropsychiatric symptoms, and smaller brain volumes also associated with less battery use throughout the day (P<.001 in all cases).
Conc
{"title":"Examining Associations Between Smartphone Use and Clinical Severity in Frontotemporal Dementia: Proof-of-Concept Study.","authors":"Emily W Paolillo, Kaitlin B Casaletto, Annie L Clark, Jack C Taylor, Hilary W Heuer, Amy B Wise, Sreya Dhanam, Mark Sanderson-Cimino, Rowan Saloner, Joel H Kramer, John Kornak, Walter Kremers, Leah Forsberg, Brian Appleby, Ece Bayram, Andrea Bozoki, Danielle Brushaber, R Ryan Darby, Gregory S Day, Bradford C Dickerson, Kimiko Domoto-Reilly, Fanny Elahi, Julie A Fields, Nupur Ghoshal, Neill Graff-Radford, Matthew G H Hall, Lawrence S Honig, Edward D Huey, Maria I Lapid, Irene Litvan, Ian R Mackenzie, Joseph C Masdeu, Mario F Mendez, Carly Mester, Toji Miyagawa, Georges Naasan, Belen Pascual, Peter Pressman, Eliana Marisa Ramos, Katherine P Rankin, Jessica Rexach, Julio C Rojas, Lawren VandeVrede, Bonnie Wong, Zbigniew K Wszolek, Bradley F Boeve, Howard J Rosen, Adam L Boxer, Adam M Staffaroni","doi":"10.2196/52831","DOIUrl":"10.2196/52831","url":null,"abstract":"<p><strong>Background: </strong>Frontotemporal lobar degeneration (FTLD) is a leading cause of dementia in individuals aged <65 years. Several challenges to conducting in-person evaluations in FTLD illustrate an urgent need to develop remote, accessible, and low-burden assessment techniques. 
Studies of unobtrusive monitoring of at-home computer use in older adults with mild cognitive impairment show that declining function is reflected in reduced computer use; however, associations with smartphone use are unknown.</p><p><strong>Objective: </strong>This study aims to characterize daily trajectories in smartphone battery use, a proxy for smartphone use, and examine relationships with clinical indicators of severity in FTLD.</p><p><strong>Methods: </strong>Participants were 231 adults (mean age 52.5, SD 14.9 years; n=94, 40.7% men; n=223, 96.5% non-Hispanic White) enrolled in the Advancing Research and Treatment of Frontotemporal Lobar Degeneration (ARTFL study) and Longitudinal Evaluation of Familial Frontotemporal Dementia Subjects (LEFFTDS study) Longitudinal Frontotemporal Lobar Degeneration (ALLFTD) Mobile App study, including 49 (21.2%) with mild neurobehavioral changes and no functional impairment (ie, prodromal FTLD), 43 (18.6%) with neurobehavioral changes and functional impairment (ie, symptomatic FTLD), and 139 (60.2%) clinically normal adults, of whom 55 (39.6%) harbored heterozygous pathogenic or likely pathogenic variants in an autosomal dominant FTLD gene. Participants completed the Clinical Dementia Rating plus National Alzheimer's Coordinating Center Frontotemporal Lobar Degeneration Behavior and Language Domains (CDR+NACC FTLD) scale, a neuropsychological battery; the Neuropsychiatric Inventory; and brain magnetic resonance imaging. The ALLFTD Mobile App was installed on participants' smartphones for remote, passive, and continuous monitoring of smartphone use. Battery percentage was collected every 15 minutes over an average of 28 (SD 4.2; range 14-30) days. To determine whether temporal patterns of battery percentage varied as a function of disease severity, linear mixed effects models examined linear, quadratic, and cubic effects of the time of day and their interactions with each measure of disease severity on battery percentage. 
Models covaried for age, sex, smartphone type, and estimated smartphone age.</p><p><strong>Results: </strong>The CDR+NACC FTLD global score interacted with time on battery percentage such that participants with prodromal or symptomatic FTLD demonstrated less change in battery percentage throughout the day (a proxy for less smartphone use) than clinically normal participants (P<.001 in both cases). Additional models showed that worse performance in all cognitive domains assessed (ie, executive functioning, memory, language, and visuospatial skills), more neuropsychiatric symptoms, and smaller brain volumes also associated with less battery use throughout the day (P<.001 in all cases).</p><p><strong>Conc","PeriodicalId":36245,"journal":{"name":"JMIR Aging","volume":"7 ","pages":"e52831"},"PeriodicalIF":5.0,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11237775/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141451762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
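The core idea of the FTLD analysis above — that a flatter within-day battery trajectory indicates less smartphone use — can be illustrated with a simplified sketch. The study fit linear mixed effects models with linear, quadratic, and cubic time-of-day effects; the NumPy-only version below instead fits a cubic polynomial per simulated participant-day and compares the fitted daily battery range between two hypothetical groups. The simulated data, group labels, and drain rates are all invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(0, 24, 0.25)  # battery sampled every 15 minutes

def simulate_day(drain_rate):
    # Battery starts full, drains steadily during waking hours (after ~7 AM),
    # with a little sensor noise; heavier use = faster drain.
    battery = 100 - drain_rate * np.clip(hours - 7, 0, None)
    return np.clip(battery + rng.normal(0, 2, hours.size), 0, 100)

def daily_battery_range(battery):
    # Fit a cubic in time of day (mirroring the linear/quadratic/cubic time
    # effects described above) and report the fitted range: a larger range
    # means more battery change, ie, more smartphone use that day.
    coefs = np.polyfit(hours, battery, deg=3)
    fitted = np.polyval(coefs, hours)
    return fitted.max() - fitted.min()

# Hypothetical groups: clinically normal users drain more battery per day.
normal = [daily_battery_range(simulate_day(3.0)) for _ in range(20)]
symptomatic = [daily_battery_range(simulate_day(1.0)) for _ in range(20)]

print(np.mean(normal) > np.mean(symptomatic))  # expect True
```

A per-participant polynomial fit ignores the random effects and covariates (age, sex, smartphone type and age) of the actual models, but it shows why the time-by-severity interaction is the quantity of interest: the groups differ in the shape of the daily trajectory, not just its mean.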