Lamiece Hassan, Alyssa Milton, Chelsea Sawyer, Alexander J Casson, John Torous, Alan Davies, Bernalyn Ruiz-Yu, Joseph Firth
Background: Digital wearable devices, worn on or close to the body, have potential for passively detecting mental and physical health symptoms among people with severe mental illness (SMI); however, the roles of consumer-grade devices are not well understood.
Objective: This study aims to examine the utility of data from consumer-grade digital wearable devices (including smartphones or wrist-worn devices) for remotely monitoring or predicting changes in mental or physical health among adults with schizophrenia or bipolar disorder. Studies were included that passively collected physiological data (including sleep duration, heart rate, sleep and wake patterns, or physical activity) for at least 3 days. Research-grade actigraphy methods and physically obtrusive devices were excluded.
Methods: We conducted a systematic review of the following databases: Cochrane Central Register of Controlled Trials, Technology Assessment, AMED (Allied and Complementary Medicine), APA PsycINFO, Embase, MEDLINE, and IEEE Xplore. Searches were completed in May 2024. Results were synthesized narratively due to study heterogeneity and divided into the following phenotypes: physical activity, sleep and circadian rhythm, and heart rate.
Results: Overall, 23 papers were included, reporting data from 12 distinct studies, mostly using smartphones and centered on relapse prevention. Only 1 study explicitly aimed to address physical health outcomes among people with SMI. In total, data were included from over 500 participants with SMI, predominantly from high-income countries. Most commonly, papers presented physical activity data (n=18), followed by sleep and circadian rhythm data (n=14) and heart rate data (n=6). The use of smartwatches to support data collection was reported in 8 papers; the rest used only smartphones. There was some evidence that lower levels of activity, higher heart rates, and later and irregular sleep onset times were associated with psychiatric diagnoses or poorer symptoms. However, heterogeneity in devices, measures, sampling, and statistical approaches complicated interpretation.
Conclusions: Consumer-grade wearables can passively detect digital markers indicative of psychiatric symptoms or mental health status among people with SMI, but few studies currently use these data to address physical health inequalities. The digital phenotyping field in psychiatry would benefit from moving toward agreed standards for data descriptions and outcome measures and from ensuring that the valuable temporal data provided by wearables are fully exploited.
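As an illustration of the kind of digital markers the review describes, the sketch below derives two of them, mean daily activity and sleep-onset regularity, from hypothetical per-day wearable records. The record format, values, and function name are invented for illustration and do not come from any of the included studies.

```python
# Illustrative sketch (not from the review): deriving two simple digital
# markers discussed in this literature, mean daily activity and sleep-onset
# regularity, from hypothetical consumer-wearable records.
from datetime import date
from statistics import mean, stdev

# Hypothetical per-day records as a device export might provide them:
# (date, total step count, sleep onset in minutes after the previous midnight)
records = [
    (date(2024, 5, 1), 4200, 23 * 60 + 30),  # onset 23:30
    (date(2024, 5, 2), 3900, 24 * 60 + 15),  # onset 00:15, coded as >24 h
    (date(2024, 5, 3), 5100, 23 * 60 + 50),  # onset 23:50
]

def activity_and_regularity(records):
    """Return (mean daily steps, SD of sleep-onset minutes).

    A higher onset SD indicates a more irregular sleep schedule, one of
    the candidate markers of symptom change noted in the review.
    """
    steps = [r[1] for r in records]
    onsets = [r[2] for r in records]
    return mean(steps), stdev(onsets)

mean_steps, onset_sd = activity_and_regularity(records)
```

Coding onset times as minutes past a fixed midnight (allowing values above 24 hours) keeps late-night onsets on a continuous scale, so the standard deviation is not distorted by the midnight wrap-around.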
Utility of Consumer-Grade Wearable Devices for Inferring Physical and Mental Health Outcomes in Severe Mental Illness: Systematic Review. JMIR Mental Health. 2025;12:e65143. Published January 7, 2025. doi:10.2196/65143. Trial registration: PROSPERO CRD42022382267; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=382267
Lauryn Gar-Mun Cheung, Pamela Carien Thomas, Eva Brvar, Sarah Rowe
Background: Digital interventions typically involve using smartphones or PCs to access online or downloadable self-help and may offer a more accessible and convenient option than face-to-face interventions for some people with mild to moderate eating disorders. They have been shown to substantially reduce eating disorder symptoms, but treatment dropout rates are higher than for face-to-face interventions. We need to understand user experiences and preferences for digital interventions to support the design and development of user-centered digital interventions that are engaging and meet users' needs.
Objective: This study aims to understand user experiences and user preferences for digital interventions that aim to reduce mild to moderate eating disorder symptoms in adults.
Methods: We conducted a metasynthesis of qualitative studies. We searched 6 databases for published and unpublished literature from 2013 to 2024. We searched for studies conducted in naturalistic or outpatient settings, using primarily unguided digital self-help interventions designed to reduce eating disorder symptoms in adults with mild to moderate eating disorders. We conducted a thematic synthesis using line-by-line coding of the results and findings from each study to generate themes.
Results: A total of 8 studies were included after screening 3695 search results. Overall, 7 metathemes were identified: the appeal of digital interventions, the role of digital interventions in treatment, the value of support in treatment, communication at the right level, the importance of engagement, shaping knowledge to improve eating disorder behaviors, and the design of the digital intervention. Users had positive experiences with digital interventions and perceived them as helpful for self-reflection and mindfulness. Users found digital interventions convenient and flexible and felt that they fit their lifestyles. Overall, users noticed reduced eating disorder thoughts and behaviors. However, digital interventions were generally not perceived as a sufficient treatment that could replace traditional face-to-face treatment. Users have individual needs, so an ideal intervention would offer personalized content and functions.
Conclusions: Users found digital interventions for eating disorders practical and effective but stressed the need for interventions to address the full range of symptoms, severity, and individual needs. Future digital interventions should be cocreated with users and offer more personalization. Further research is needed to determine the appropriate balance of professional and peer support and whether these interventions should serve as the first step in the stepped care model.
User Experiences of and Preferences for Self-Guided Digital Interventions for the Treatment of Mild to Moderate Eating Disorders: Systematic Review and Metasynthesis. JMIR Mental Health. 2025;12:e57795. Published January 3, 2025. doi:10.2196/57795. Trial registration: PROSPERO CRD42023426932; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=426932
Julia Tartaglia, Brendan Jaghab, Mohamed Ismail, Katrin Hänsel, Anna Van Meter, Michael Kirschenbaum, Michael Sobolev, John M Kane, Sunny X Tang
Background: Digital health technologies are increasingly being integrated into mental health care. However, the adoption of these technologies can be influenced by patients' digital literacy and attitudes, which may vary with sociodemographic factors. This variability necessitates a better understanding of patient digital literacy and attitudes to prevent a digital divide, which can worsen existing health care disparities.
Objective: This study aimed to assess digital literacy and attitudes toward digital health technologies among a diverse psychiatric outpatient population. In addition, the study sought to identify clusters of patients based on their digital literacy and attitudes and to compare sociodemographic characteristics among these clusters.
Methods: A survey was distributed to adult psychiatric patients with various diagnoses in an urban outpatient psychiatry program. The survey included a demographic questionnaire, a digital literacy questionnaire, and a digital health attitudes questionnaire. Multiple linear regression analyses were used to identify predictors of digital literacy and attitudes. Cluster analysis was performed to categorize patients based on their responses. Pairwise comparisons and one-way ANOVA were conducted to analyze differences between clusters.
Results: A total of 256 patients were included in the analysis. The mean age of participants was 32 (SD 12.6, range 16-70) years. The sample was racially and ethnically diverse: White (100/256, 38.9%), Black (39/256, 15.2%), Latinx (44/256, 17.2%), Asian (59/256, 23%), and other races and ethnicities (15/256, 5.7%). Digital literacy was high for technologies such as smartphones, videoconferencing, and social media (>75% [193/256] of participants reporting at least some use) but lower for health apps, mental health apps, wearables, and virtual reality (<42% [108/256] reporting at least some use). Attitudes toward using technology in clinical care were generally positive (9 of 10 items received a >75% positive score), particularly for communication with providers and health data sharing. Older age (P<.001) and lower educational attainment (P<.001) negatively predicted digital literacy scores, but no demographic variables predicted attitude scores. Cluster analysis identified 3 patient groups. Relative to the other clusters, cluster 1 (n=30) had lower digital literacy and intermediate acceptance of digital technology. Cluster 2 (n=50) had higher literacy and lower acceptance. Cluster 3 (n=176) displayed both higher literacy and acceptance. Significant between-cluster differences were observed in mean age and education level (P<.001), with cluster 1 participants being older and having lower levels of formal education.
Conclusions: High digital literacy and acceptance of digital technologies were observed among our patients.
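To make the clustering step concrete, the sketch below groups invented (literacy, attitude) score pairs into 3 clusters with a minimal k-means, mirroring the study's 3-group solution. The scores, scale, and starting centers are all hypothetical; the study's actual cluster-analysis procedure is not specified here.

```python
# Hypothetical sketch of the clustering step: grouping patients by
# (digital literacy, attitude) scores with a plain k-means. The 3-cluster
# structure mirrors the paper; the data points themselves are invented.
def kmeans(points, centers, iters=10):
    """Plain k-means on 2D points; returns final centers and labels."""
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance)
        labels = [min(range(len(centers)),
                      key=lambda c: (p[0] - centers[c][0]) ** 2 +
                                    (p[1] - centers[c][1]) ** 2)
                  for p in points]
        # Move each center to the mean of its assigned points
        for c in range(len(centers)):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = (sum(m[0] for m in members) / len(members),
                              sum(m[1] for m in members) / len(members))
    return centers, labels

# Invented (literacy, attitude) scores on a 0-10 scale, shaped like the
# paper's groups: lower literacy, higher literacy/lower acceptance, high/high
pts = [(2, 5), (3, 4), (8, 2), (9, 3), (8, 9), (9, 8)]
centers, labels = kmeans(pts, centers=[(2.0, 5.0), (8.0, 2.0), (9.0, 9.0)])
```

In practice, scores would be standardized first and the starting centers chosen by a scheme such as k-means++ rather than fixed by hand.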
Assessing Health Technology Literacy and Attitudes of Patients in an Urban Outpatient Psychiatry Clinic: Cross-Sectional Survey Study. JMIR Mental Health. 2024;11:e63034. Published December 30, 2024. doi:10.2196/63034
Lisa-Marie Hartnagel, Daniel Emden, Jerome C Foo, Fabian Streit, Stephanie H Witt, Josef Frank, Matthias F Limberger, Sara E Schmitz, Maria Gilles, Marcella Rietschel, Tim Hahn, Ulrich W Ebner-Priemer, Lea Sirignano
Background: Mobile devices for remote monitoring are indispensable tools to support treatment and patient care, especially in recurrent diseases such as major depressive disorder. The aim of this study was to learn if machine learning (ML) models based on longitudinal speech data are helpful in predicting momentary depression severity. Data analyses were based on a dataset including 30 inpatients during an acute depressive episode receiving sleep deprivation therapy in inpatient care, an intervention inducing a rapid change in depressive symptoms. Using an ambulatory assessment approach, we captured speech samples and assessed concomitant depression severity via self-report questionnaire over the course of 3 weeks (before, during, and after therapy). We extracted 89 speech features from the speech samples using the Extended Geneva Minimalistic Acoustic Parameter Set from the Open-Source Speech and Music Interpretation by Large-Space Extraction (audEERING) toolkit and the additional parameter speech rate.
Objective: We aimed to understand whether a multiparameter ML approach would significantly improve prediction compared with previous statistical analyses and, in addition, which mechanism for splitting training and test data was most successful, with a particular focus on personalized prediction.
Methods: To do so, we trained and evaluated a set of >500 ML pipelines including random forest, linear regression, support vector regression, and Extreme Gradient Boosting regression models and tested them on 5 different train-test split scenarios: a group 5-fold nested cross-validation at the subject level, a leave-one-subject-out approach, a chronological split, an odd-even split, and a random split.
Results: In the 5-fold cross-validation, the leave-one-subject-out, and the chronological split approaches, none of the models were statistically different from random chance. The other 2 approaches produced significant results for at least one of the models tested, with similar performance. Overall, the best-performing model was an Extreme Gradient Boosting regressor in the odd-even split approach (R²=0.339, mean absolute error=0.38; both P<.001), indicating that 33.9% of the variance in depression severity could be predicted by the speech features.
Conclusions: Overall, our analyses highlight that ML failed to predict depression scores for unseen patients, although prediction performance increased markedly compared with our previous multilevel-model analyses. We conclude that future personalized ML models might improve prediction performance even more, leading to better patient management and care.
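The contrast between the split schemes, which drives the headline result, can be sketched as follows. The subject IDs and sample counts are invented: an odd-even split lets the model train on other samples from the same patient, while leave-one-subject-out never does, which is why the former approximates personalized prediction and the latter tests generalization to unseen patients.

```python
# Sketch of two of the train-test split schemes compared in the study,
# applied to hypothetical (subject_id, sample_index) speech samples.
samples = [(subj, i) for subj in ("A", "B", "C") for i in range(4)]

def odd_even_split(samples):
    """Alternate samples between train and test, mixing every subject
    across both sets (personalized prediction)."""
    train = [s for idx, s in enumerate(samples) if idx % 2 == 0]
    test = [s for idx, s in enumerate(samples) if idx % 2 == 1]
    return train, test

def leave_one_subject_out(samples, held_out):
    """Hold out all samples of one subject, so the model never sees
    the test patient during training."""
    train = [s for s in samples if s[0] != held_out]
    test = [s for s in samples if s[0] == held_out]
    return train, test

tr_oe, te_oe = odd_even_split(samples)
tr_loso, te_loso = leave_one_subject_out(samples, "A")
```

That every subject appears on both sides of the odd-even split, but never both sides of the leave-one-subject-out split, is exactly the information leak that makes within-patient prediction easier than predicting for unseen patients.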
Momentary Depression Severity Prediction in Patients With Acute Depression Who Undergo Sleep Deprivation Therapy: Speech-Based Machine Learning Approach. JMIR Mental Health. 2024;11:e64578. Published December 23, 2024. doi:10.2196/64578
Background: The rise of wearable sensors marks a significant development in the era of affective computing. Their popularity is continuously increasing, and they have the potential to improve our understanding of human stress. A fundamental aspect within this domain is the ability to recognize perceived stress through these unobtrusive devices.
Objective: This study aims to enhance the performance of emotion recognition using multitask learning (MTL), a technique extensively explored across various machine learning tasks, including affective computing. By leveraging the shared information among related tasks, we seek to augment the accuracy of emotion recognition while confronting the privacy threats inherent in the physiological data captured by these sensors.
Methods: To address the privacy concerns associated with the sensitive data collected by wearable sensors, we proposed a novel framework that integrates differential privacy and federated learning approaches with MTL. This framework was designed to efficiently identify mental stress while preserving private identity information. Through this approach, we aimed to enhance the performance of emotion recognition tasks while preserving user privacy.
Results: Comprehensive evaluations of our framework were conducted using 2 prominent public datasets. The results demonstrate a significant improvement in emotion recognition accuracy, achieving a rate of 90%. Furthermore, our approach effectively mitigates privacy risks, as evidenced by reidentification accuracy being limited to 47%.
Conclusions: This study presents a promising approach to advancing emotion recognition capabilities while addressing privacy concerns in the context of empathetic sensors. By integrating MTL with differential privacy and federated learning, we have demonstrated the potential to achieve high levels of accuracy in emotion recognition while ensuring the protection of user privacy. This research contributes to the ongoing efforts to use affective computing in a privacy-aware and ethical manner.
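A minimal sketch of the privacy mechanism described above, assuming a DP-SGD-style design in which each client clips its model update and adds Gaussian noise before the server averages them. The function name, clipping norm, noise level, and update values are invented; the paper's actual implementation may differ.

```python
# Hypothetical sketch of differential privacy added to federated
# aggregation: clients clip their model updates and add Gaussian noise,
# and only the noisy updates reach the server. All parameters invented.
import random

def dp_federated_average(client_updates, clip=1.0, noise_std=0.1, seed=0):
    rng = random.Random(seed)
    noisy = []
    for update in client_updates:
        # Clip the update to bound any one client's influence
        norm = sum(w * w for w in update) ** 0.5
        scale = min(1.0, clip / norm) if norm > 0 else 1.0
        # Add calibrated Gaussian noise to mask individual contributions
        noisy.append([w * scale + rng.gauss(0, noise_std) for w in update])
    # Server averages noisy updates; raw physiology never leaves the clients
    dims = len(client_updates[0])
    return [sum(u[d] for u in noisy) / len(noisy) for d in range(dims)]

avg = dp_federated_average([[0.5, -0.2], [0.3, 0.1], [-0.4, 0.6]])
```

The clipping bound and noise standard deviation jointly determine the privacy budget; in a real deployment they would be chosen to meet a target (epsilon, delta) guarantee rather than fixed ad hoc.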
Mohamed Benouis, Elisabeth Andre, Yekta Said Can. Balancing Between Privacy and Utility for Affect Recognition Using Multitask Learning in Differential Privacy-Added Federated Learning Settings: Quantitative Study. JMIR Mental Health. 2024;11:e60003. Published December 23, 2024. doi:10.2196/60003
Sonia Baee, Jeremy W Eberle, Anna N Baglione, Tyler Spears, Elijah Lewis, Hongning Wang, Daniel H Funk, Bethany Teachman, Laura E Barnes
<p><strong>Background: </strong>Digital mental health is a promising paradigm for individualized, patient-driven health care. For example, cognitive bias modification programs that target interpretation biases (cognitive bias modification for interpretation [CBM-I]) can provide practice thinking about ambiguous situations in less threatening ways on the web without requiring a therapist. However, digital mental health interventions, including CBM-I, are often plagued with lack of sustained engagement and high attrition rates. New attrition detection and mitigation strategies are needed to improve these interventions.</p><p><strong>Objective: </strong>This paper aims to identify participants at a high risk of dropout during the early stages of 3 web-based trials of multisession CBM-I and to investigate which self-reported and passively detected feature sets computed from the participants interacting with the intervention and assessments were most informative in making this prediction.</p><p><strong>Methods: </strong>The participants analyzed in this paper were community adults with traits such as anxiety or negative thinking about the future (Study 1: n=252, Study 2: n=326, Study 3: n=699) who had been assigned to CBM-I conditions in 3 efficacy-effectiveness trials on our team's public research website. To identify participants at a high risk of dropout, we created 4 unique feature sets: self-reported baseline user characteristics (eg, demographics), self-reported user context and reactions to the program (eg, state affect), self-reported user clinical functioning (eg, mental health symptoms), and passively detected user behavior on the website (eg, time spent on a web page of CBM-I training exercises, time of day during which the exercises were completed, latency of completing the assessments, and type of device used). 
Then, we investigated the feature sets as potential predictors of which participants were at high risk of not starting the second training session of a given program using well-known machine learning algorithms.</p><p><strong>Results: </strong>The extreme gradient boosting algorithm performed the best and identified participants at high risk with macro-F<sub>1</sub>-scores of .832 (Study 1 with 146 features), .770 (Study 2 with 87 features), and .917 (Study 3 with 127 features). Features involving passive detection of user behavior contributed the most to the prediction relative to other features. The mean Gini importance scores for the passive features were as follows: .033 (95% CI .019-.047) in Study 1; .029 (95% CI .023-.035) in Study 2; and .045 (95% CI .039-.051) in Study 3. However, using all features extracted from a given study led to the best predictive performance.</p><p><strong>Conclusions: </strong>These results suggest that using passive indicators of user behavior, alongside self-reported measures, can improve the accuracy of prediction of participants at a high risk of dropout early during multisession CBM-I programs. Furthermore, our analyses highlight the challenge of generalizability in digital health intervention research and the need for more personalized attrition prevention strategies.</p>
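The prediction pipeline described above can be sketched in miniature. The study used extreme gradient boosting (XGBoost); this sketch substitutes scikit-learn's GradientBoostingClassifier as a stand-in, and the synthetic feature matrix, column meanings, and thresholds are all invented for illustration, not taken from the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 600
# Synthetic stand-ins for the paper's feature sets: columns 0-1 play the
# role of passive behavior features (eg, time on page, hour of day),
# columns 2-3 the role of self-reported baseline measures.
X = rng.normal(size=(n, 4))
# Simulated dropout risk loosely driven by the "passive" columns.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Macro-F1 averages per-class F1 scores, so both "dropout" and "retained"
# classes count equally regardless of imbalance.
macro_f1 = f1_score(y_te, clf.predict(X_te), average="macro")
importances = clf.feature_importances_   # impurity-based (Gini-style) importances
```

Comparing the importance mass on the "passive" columns against the "self-reported" columns mirrors the paper's feature-set comparison in spirit.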
{"title":"Early Attrition Prediction for Web-Based Interpretation Bias Modification to Reduce Anxious Thinking: A Machine Learning Study.","authors":"Sonia Baee, Jeremy W Eberle, Anna N Baglione, Tyler Spears, Elijah Lewis, Hongning Wang, Daniel H Funk, Bethany Teachman, Laura E Barnes","doi":"10.2196/51567","DOIUrl":"10.2196/51567","url":null,"abstract":"<p><strong>Background: </strong>Digital mental health is a promising paradigm for individualized, patient-driven health care. For example, cognitive bias modification programs that target interpretation biases (cognitive bias modification for interpretation [CBM-I]) can provide practice thinking about ambiguous situations in less threatening ways on the web without requiring a therapist. However, digital mental health interventions, including CBM-I, are often plagued with lack of sustained engagement and high attrition rates. New attrition detection and mitigation strategies are needed to improve these interventions.</p><p><strong>Objective: </strong>This paper aims to identify participants at a high risk of dropout during the early stages of 3 web-based trials of multisession CBM-I and to investigate which self-reported and passively detected feature sets computed from the participants interacting with the intervention and assessments were most informative in making this prediction.</p><p><strong>Methods: </strong>The participants analyzed in this paper were community adults with traits such as anxiety or negative thinking about the future (Study 1: n=252, Study 2: n=326, Study 3: n=699) who had been assigned to CBM-I conditions in 3 efficacy-effectiveness trials on our team's public research website. 
To identify participants at a high risk of dropout, we created 4 unique feature sets: self-reported baseline user characteristics (eg, demographics), self-reported user context and reactions to the program (eg, state affect), self-reported user clinical functioning (eg, mental health symptoms), and passively detected user behavior on the website (eg, time spent on a web page of CBM-I training exercises, time of day during which the exercises were completed, latency of completing the assessments, and type of device used). Then, we investigated the feature sets as potential predictors of which participants were at high risk of not starting the second training session of a given program using well-known machine learning algorithms.</p><p><strong>Results: </strong>The extreme gradient boosting algorithm performed the best and identified participants at high risk with macro-F<sub>1</sub>-scores of .832 (Study 1 with 146 features), .770 (Study 2 with 87 features), and .917 (Study 3 with 127 features). Features involving passive detection of user behavior contributed the most to the prediction relative to other features. The mean Gini importance scores for the passive features were as follows: .033 (95% CI .019-.047) in Study 1; .029 (95% CI .023-.035) in Study 2; and .045 (95% CI .039-.051) in Study 3. 
However, using all features extracted from a given study led to the best predictive performance.</p><p><strong>Conclusions: </strong>These results suggest that using passive indicators of user behavior, alongside self-reported measures, can improve the accuracy of prediction of participants at a high risk of dropout early during multisession CBM-I programs. Furthermore, our analyses highlight the challenge of generalizability in digital health intervention research and the need for more personalized attrition prevention strategies.</p>","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":"11 ","pages":"e51567"},"PeriodicalIF":4.8,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11699492/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142865942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
<p><strong>Background: </strong>Sleep-wake patterns are important behavioral biomarkers for patients with serious mental illness (SMI), providing insight into their well-being. The gold standard for monitoring sleep is polysomnography (PSG), which requires a sleep lab facility; however, advances in wearable sensor technology allow for real-world sleep-wake monitoring.</p><p><strong>Objective: </strong>The goal of this study was to develop a PSG-validated sleep algorithm using accelerometer (ACC) and electrocardiogram (ECG) data from a wearable patch to accurately quantify sleep in a real-world setting.</p><p><strong>Methods: </strong>In this noninterventional, nonsignificant-risk, abbreviated investigational device exemption, single-site study, participants wore the reusable wearable sensor version 2 (RW2) patch. The RW2 patch is part of a digital medicine system (aripiprazole with sensor) designed to provide objective records of medication ingestion for patients with schizophrenia, bipolar I disorder, and major depressive disorder. This study developed a sleep algorithm from patch data and did not contain any study-related or digitized medication. Patch-acquired ACC and ECG data were compared against PSG data to build machine learning classification models to distinguish periods of wake from sleep. The PSG data provided sleep stage classifications at 30-second intervals, which were combined into 5-minute windows and labeled as sleep or wake based on the majority of sleep stages within the window. ACC and ECG features were derived for each 5-minute window. The algorithm that most accurately predicted sleep parameters against PSG data was compared to commercially available wearable devices to further benchmark model performance.</p><p><strong>Results: </strong>Of 80 participants enrolled, 60 had at least 1 night of analyzable ACC and ECG data (25 healthy volunteers and 35 participants with diagnosed SMI). 
Overall, 10,574 valid 5-minute windows were identified (5854 from participants with SMI), and 84% (n=8830) were classified as greater than half sleep. Of the 3 models tested, the conditional random field algorithm provided the most robust sleep-wake classification. Performance was comparable to the middle 50% of commercial devices evaluated in a recent publication, providing a sleep detection performance of 0.93 (sensitivity) and wake detection performance of 0.60 (specificity) at a prediction probability threshold of 0.75. The conditional random field algorithm retained this performance for individual sleep parameters, including total sleep time, sleep efficiency, and wake after sleep onset (within the middle 50% to top 25% of the assessed devices). The only parameter where the model performance was lower was sleep onset latency (within the bottom 25% of all comparator devices).</p><p><strong>Conclusions: </strong>Using industry-best practices, we developed a sleep algorithm for use with the RW2 patch that can accurately detect sleep and wake wi
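The windowing and evaluation steps described above (collapsing 30-second PSG epochs into majority-labeled 5-minute windows, then reporting sleep detection as sensitivity and wake detection as specificity) can be sketched as follows. The function names and toy data are illustrative, not from the study.

```python
import numpy as np

EPOCH_SEC, WINDOW_SEC = 30, 300               # 30-second PSG epochs, 5-minute windows
EPOCHS_PER_WINDOW = WINDOW_SEC // EPOCH_SEC   # 10 epochs per window

def label_windows(epoch_is_sleep):
    """Collapse 30-second sleep/wake epoch labels into 5-minute windows,
    labeling a window as sleep when more than half of its epochs are sleep
    (mirroring the 'greater than half sleep' majority rule in the abstract)."""
    e = np.asarray(epoch_is_sleep, dtype=int)
    n_windows = len(e) // EPOCHS_PER_WINDOW
    windows = e[: n_windows * EPOCHS_PER_WINDOW].reshape(n_windows, EPOCHS_PER_WINDOW)
    return (windows.sum(axis=1) > EPOCHS_PER_WINDOW / 2).astype(int)

def sensitivity_specificity(y_true, y_pred):
    """Sleep detection performance = sensitivity (sleep windows found);
    wake detection performance = specificity (wake windows found)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    return tp / max(1, np.sum(y_true == 1)), tn / max(1, np.sum(y_true == 0))
```

The study's classifier itself was a conditional random field over ACC- and ECG-derived window features; this sketch covers only the label construction and the two headline metrics.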
{"title":"Developing a Sleep Algxorithm to Support a Digital Medicine System: Noninterventional, Observational Sleep Study.","authors":"Jeffrey M Cochran","doi":"10.2196/62959","DOIUrl":"10.2196/62959","url":null,"abstract":"<p><strong>Background: </strong>Sleep-wake patterns are important behavioral biomarkers for patients with serious mental illness (SMI), providing insight into their well-being. The gold standard for monitoring sleep is polysomnography (PSG), which requires a sleep lab facility; however, advances in wearable sensor technology allow for real-world sleep-wake monitoring.</p><p><strong>Objective: </strong>The goal of this study was to develop a PSG-validated sleep algorithm using accelerometer (ACC) and electrocardiogram (ECG) data from a wearable patch to accurately quantify sleep in a real-world setting.</p><p><strong>Methods: </strong>In this noninterventional, nonsignificant-risk, abbreviated investigational device exemption, single-site study, participants wore the reusable wearable sensor version 2 (RW2) patch. The RW2 patch is part of a digital medicine system (aripiprazole with sensor) designed to provide objective records of medication ingestion for patients with schizophrenia, bipolar I disorder, and major depressive disorder. This study developed a sleep algorithm from patch data and did not contain any study-related or digitized medication. Patch-acquired ACC and ECG data were compared against PSG data to build machine learning classification models to distinguish periods of wake from sleep. The PSG data provided sleep stage classifications at 30-second intervals, which were combined into 5-minute windows and labeled as sleep or wake based on the majority of sleep stages within the window. ACC and ECG features were derived for each 5-minute window. 
The algorithm that most accurately predicted sleep parameters against PSG data was compared to commercially available wearable devices to further benchmark model performance.</p><p><strong>Results: </strong>Of 80 participants enrolled, 60 had at least 1 night of analyzable ACC and ECG data (25 healthy volunteers and 35 participants with diagnosed SMI). Overall, 10,574 valid 5-minute windows were identified (5854 from participants with SMI), and 84% (n=8830) were classified as greater than half sleep. Of the 3 models tested, the conditional random field algorithm provided the most robust sleep-wake classification. Performance was comparable to the middle 50% of commercial devices evaluated in a recent publication, providing a sleep detection performance of 0.93 (sensitivity) and wake detection performance of 0.60 (specificity) at a prediction probability threshold of 0.75. The conditional random field algorithm retained this performance for individual sleep parameters, including total sleep time, sleep efficiency, and wake after sleep onset (within the middle 50% to top 25% of the assessed devices). 
The only parameter where the model performance was lower was sleep onset latency (within the bottom 25% of all comparator devices).</p><p><strong>Conclusions: </strong>Using industry-best practices, we developed a sleep algorithm for use with the RW2 patch that can accurately detect sleep and wake wi","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":"11 ","pages":"e62959"},"PeriodicalIF":4.8,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11683743/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142898934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rowdy de Groot, Frank van der Graaff, Daniël van der Doelen, Michiel Luijten, Ronald De Meyer, Hekmat Alrouh, Hedy van Oers, Jacintha Tieskens, Josjan Zijlmans, Meike Bartels, Arne Popma, Nicolette de Keizer, Ronald Cornet, Tinca J C Polderman
Background: The FAIR (Findable, Accessible, Interoperable, Reusable) data principles are a guideline to improve the reusability of data. However, properly implementing these principles is challenging due to a wide range of barriers.
Objectives: To further the field of FAIR data, this study aimed to systematically identify barriers regarding implementing the FAIR principles in the area of child and adolescent mental health research, define the most challenging barriers, and provide recommendations for these barriers.
Methods: Three sources were used as input to identify barriers: (1) evaluation of the implementation process of the Observational Medical Outcomes Partnership Common Data Model by 3 data managers; (2) interviews with experts on mental health research, reusable health data, and data quality; and (3) a rapid literature review. All barriers were categorized according to type as described previously, the affected FAIR principle, a category to add detail about the origin of the barrier, and whether a barrier was mental health specific. The barriers were assessed and ranked on impact with the data managers using the Delphi method.
Results: Thirteen barriers were identified by the data managers, 7 were identified by the experts, and 30 barriers were extracted from the literature. This resulted in 45 unique barriers. The characteristics most frequently assigned to the barriers were external type (n=32/45; eg, organizational policy preventing the use of required software), the tooling category (n=19/45; ie, software and databases), all FAIR principles (n=15/45), and not mental health specific (n=43/45). Consensus on ranking the scores of the barriers was reached after 2 rounds of the Delphi method. The most important recommendations to overcome the barriers are adding a FAIR data steward to the research team, providing accessible step-by-step guides, and ensuring sustainable funding for the implementation and long-term use of FAIR data.
Conclusions: By systematically listing these barriers and providing recommendations, we intend to enhance the awareness of researchers and grant providers that making data FAIR demands specific expertise, available tooling, and proper investments.
{"title":"Implementing Findable, Accessible, Interoperable, Reusable (FAIR) Principles in Child and Adolescent Mental Health Research: Mixed Methods Approach.","authors":"Rowdy de Groot, Frank van der Graaff, Daniël van der Doelen, Michiel Luijten, Ronald De Meyer, Hekmat Alrouh, Hedy van Oers, Jacintha Tieskens, Josjan Zijlmans, Meike Bartels, Arne Popma, Nicolette de Keizer, Ronald Cornet, Tinca J C Polderman","doi":"10.2196/59113","DOIUrl":"10.2196/59113","url":null,"abstract":"<p><strong>Background: </strong>The FAIR (Findable, Accessible, Interoperable, Reusable) data principles are a guideline to improve the reusability of data. However, properly implementing these principles is challenging due to a wide range of barriers.</p><p><strong>Objectives: </strong>To further the field of FAIR data, this study aimed to systematically identify barriers regarding implementing the FAIR principles in the area of child and adolescent mental health research, define the most challenging barriers, and provide recommendations for these barriers.</p><p><strong>Methods: </strong>Three sources were used as input to identify barriers: (1) evaluation of the implementation process of the Observational Medical Outcomes Partnership Common Data Model by 3 data managers; (2) interviews with experts on mental health research, reusable health data, and data quality; and (3) a rapid literature review. All barriers were categorized according to type as described previously, the affected FAIR principle, a category to add detail about the origin of the barrier, and whether a barrier was mental health specific. The barriers were assessed and ranked on impact with the data managers using the Delphi method.</p><p><strong>Results: </strong>Thirteen barriers were identified by the data managers, 7 were identified by the experts, and 30 barriers were extracted from the literature. This resulted in 45 unique barriers. 
The characteristics that were most assigned to the barriers were, respectively, external type (n=32/45; eg, organizational policy preventing the use of required software), tooling category (n=19/45; ie, software and databases), all FAIR principles (n=15/45), and not mental health specific (n=43/45). Consensus on ranking the scores of the barriers was reached after 2 rounds of the Delphi method. The most important recommendations to overcome the barriers are adding a FAIR data steward to the research team, accessible step-by-step guides, and ensuring sustainable funding for the implementation and long-term use of FAIR data.</p><p><strong>Conclusions: </strong>By systematically listing these barriers and providing recommendations, we intend to enhance the awareness of researchers and grant providers that making data FAIR demands specific expertise, available tooling, and proper investments.</p>","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":"11 ","pages":"e59113"},"PeriodicalIF":4.8,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11683739/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142898937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lauren Southwick, Meghana Sharma, Sunny Rai, Rinad S Beidas, David S Mandell, David A Asch, Brenda Curtis, Sharath Chandra Guntuku, Raina M Merchant
Background: Therapists and their patients increasingly discuss digital data from social media, smartphone sensors, and other online engagements within the context of psychotherapy.
Objective: We examined patients' and mental health therapists' experiences and perceptions following a randomized controlled trial in which they both received regular summaries of patients' digital data (eg, dashboard) to review and discuss in session. The dashboard included data that patients consented to share from their social media posts, phone usage, and online searches.
Methods: Following the randomized controlled trial, patient (n=56) and therapist (n=44) participants completed a debriefing survey after their study completion (from December 2021 to January 2022). Participants were asked about their experience receiving a digital data dashboard in psychotherapy via closed- and open-ended questions. We calculated descriptive statistics for closed-ended questions and conducted qualitative coding via NVivo (version 10; Lumivero) and natural language processing using the machine learning tool latent Dirichlet allocation to analyze open-ended questions.
Results: Of 100 participants, nearly half (n=48, 49%) described their experience with the dashboard as "positive," while the other half noted a "neutral" experience. Responses to the open-ended questions resulted in three thematic areas (nine subcategories): (1) dashboard experience (positive, neutral or negative, and comfortable); (2) perception of the dashboard's impact on enhancing therapy (accountability, increased awareness over time, and objectivity); and (3) dashboard refinements (additional sources, tailored content, and privacy).
Conclusions: Patients reported that receiving their digital data helped them stay "accountable," while therapists indicated that the dashboard helped "tailor treatment plans." Patient and therapist surveys provided important feedback on their experience regularly discussing dashboards in psychotherapy.
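The topic-modeling step described in this study (latent Dirichlet allocation over open-ended survey responses) can be sketched with scikit-learn's LatentDirichletAllocation. The example responses below are invented stand-ins for participants' answers, and the choice of 3 components simply mirrors the three thematic areas reported above.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical stand-ins for open-ended survey responses.
responses = [
    "the dashboard kept me accountable between sessions",
    "seeing my phone usage data raised my awareness over time",
    "i would like more privacy controls and tailored content",
    "discussing the dashboard felt comfortable and objective",
    "accountability and awareness helped tailor my treatment plan",
    "add more data sources but keep the privacy settings clear",
]

# Bag-of-words counts, dropping common English stop words.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(responses)

# One component per thematic area (experience, impact, refinements).
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(doc_term)
topic_mix = lda.transform(doc_term)   # per-response topic distribution, rows sum to 1
```

Inspecting the highest-weight terms per component (via `lda.components_` and `vectorizer.get_feature_names_out()`) is the usual way such topics are read and named by researchers.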
{"title":"Integrating Patient-Generated Digital Data Into Mental Health Therapy: Mixed Methods Analysis of User Experience.","authors":"Lauren Southwick, Meghana Sharma, Sunny Rai, Rinad S Beidas, David S Mandell, David A Asch, Brenda Curtis, Sharath Chandra Guntuku, Raina M Merchant","doi":"10.2196/59785","DOIUrl":"10.2196/59785","url":null,"abstract":"<p><strong>Background: </strong>Therapists and their patients increasingly discuss digital data from social media, smartphone sensors, and other online engagements within the context of psychotherapy.</p><p><strong>Objective: </strong>We examined patients' and mental health therapists' experiences and perceptions following a randomized controlled trial in which they both received regular summaries of patients' digital data (eg, dashboard) to review and discuss in session. The dashboard included data that patients consented to share from their social media posts, phone usage, and online searches.</p><p><strong>Methods: </strong>Following the randomized controlled trial, patient (n=56) and therapist (n=44) participants completed a debriefing survey after their study completion (from December 2021 to January 2022). Participants were asked about their experience receiving a digital data dashboard in psychotherapy via closed- and open-ended questions. We calculated descriptive statistics for closed-ended questions and conducted qualitative coding via NVivo (version 10; Lumivero) and natural language processing using the machine learning tool latent Dirichlet allocation to analyze open-ended questions.</p><p><strong>Results: </strong>Of 100 participants, nearly half (n=48, 49%) described their experience with the dashboard as \"positive,\" while the other half noted a \"neutral\" experience. 
Responses to the open-ended questions resulted in three thematic areas (nine subcategories): (1) dashboard experience (positive, neutral or negative, and comfortable); (2) perception of the dashboard's impact on enhancing therapy (accountability, increased awareness over time, and objectivity); and (3) dashboard refinements (additional sources, tailored content, and privacy).</p><p><strong>Conclusions: </strong>Patients reported that receiving their digital data helped them stay \"accountable,\" while therapists indicated that the dashboard helped \"tailor treatment plans.\" Patient and therapist surveys provided important feedback on their experience regularly discussing dashboards in psychotherapy.</p>","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":"11 ","pages":"e59785"},"PeriodicalIF":4.8,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11683510/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142856229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Philip Harvey, Rosie Curiel-Cid, Peter Kallestrup, Annalee Mueller, Andrea Rivera-Molina, Sara Czaja, Elizabeth Crocco, David Loewenstein
<p><strong>Background: </strong>The early detection of mild cognitive impairment (MCI) is crucial for providing treatment before further decline. Cognitive challenge tests such as the Loewenstein-Acevedo Scales for Semantic Interference and Learning (LASSI-L™) can identify individuals at highest risk for cognitive deterioration. Performance on elements of the LASSI-L, particularly proactive interference, correlates with the presence of critical Alzheimer's Disease (AD) biomarkers. However, in-person paper tests require skilled testers and are not practical in many community settings or for large-scale screening in prevention.</p><p><strong>Objective: </strong>This paper reports on the development and initial validation of a self-administered computerized version of the LASSI, the LASSI-D™. The migrated assessment was a fully remotely deliverable digital version with an AI-generated avatar assistant.</p><p><strong>Methods: </strong>Cloud-based software was developed, using voice recognition technology, for English and Spanish versions of the LASSI-D. Participants were assessed with either the LASSI-L or LASSI-D first, in a sequential assessment study. Participants with amnestic Mild Cognitive Impairment (aMCI; n=54) or normal cognition (NC; n=58) were also tested with traditional measures such as the ADAS-Cog. We examined group differences in performance across the legacy and digital versions of the LASSI, as well as correlations between LASSI performance and other measures across the versions.</p><p><strong>Results: </strong>Differences on recall and intrusion variables between aMCI and NC samples on both versions were all statistically significant (all p<.001), with at least medium effect sizes (d>.68). There were no statistically significant performance differences in these variables between legacy and digital administration in either sample (all p<.13). 
There were no language differences in any variables, p>.10, and correlations between LASSI variables and other cognitive variables were statistically significant (all p<.01). The most predictive legacy variables, Proactive Interference (PI) and Failure to recover from Proactive Interference (frPI), were identical across legacy and migrated versions within groups and were identical to results of previous studies with the legacy LASSI-L. Classification accuracy was 88% for NC and 78% for aMCI participants.</p><p><strong>Conclusions: </strong>The results for the digital migration of the LASSI-D were highly convergent with the legacy LASSI-L. Across all indices of similarity, including sensitivity, criterion validity, classification accuracy, and performance, the versions converged across languages. Future papers will present additional validation data, including correlations with blood-based AD biomarkers and alternative forms. The current data provide convincing evidence of the utility of a fully self-administered digitally migrated cognitive challenge test.</p><p><strong>Clinicaltrial: </strong
{"title":"Digital Migration of a Validated Cognitive Challenge Test in Mild Cognitive Impairment: Convergence of the Loewenstein-Acevedo Scales for Semantic Interference and Learning (LASSI-L) and the Digital LASSI (LASSI-D) in older Participants with Amnestic MCI and Normal Cognition.","authors":"Philip Harvey, Rosie Curiel-Cid, Peter Kallestrup, Annalee Mueller, Andrea Rivera-Molina, Sara Czaja, Elizabeth Crocco, David Loewenstein","doi":"10.2196/64716","DOIUrl":"10.2196/64716","url":null,"abstract":"<p><strong>Background: </strong>The early detection of mild cognitive impairment (MCI) is crucial for providing treatment before further decline. Cognitive challenge tests such as the Loewenstein-Acevedo Scales for Semantic Interference and Learning (LASSI-L™) can identify individuals at highest risk for cognitive deterioration. Performance on elements of the LASSI-L, particularly proactive interference, correlate with the presence of critical Alzheimer's Disease (AD) biomarkers. However, in person paper tests require skilled testers and are not practical in many community settings or for large-scale screening in prevention.</p><p><strong>Objective: </strong>This paper reports on the development and initial validation of a self-administered computerized version of the LASSI, the LASSI-D™. A fully remotely deliverable digital version, with an AI generated avatar assistant, was the migrated assessment.</p><p><strong>Methods: </strong>Cloud-based software was developed, using voice recognition technology, for English and Spanish versions of the LASSI-D. Participants were assessed with either the LASSI-L or LASSI-D first, in a sequential assessment study. Participants with amnestic Mild Cognitive Impairment (aMCI; n=54) or normal cognition (NC;n=58) were also tested with traditional measures such as the ADAS-Cog. 
We examined group differences in performance across the legacy and digital versions of the LASSI, as well as correlations between LASSI performance and other measures across the versions.</p><p><strong>Results: </strong>Differences on recall and intrusion variables between aMCI and NC samples on both versions were all statistically significant (all p<.001), with at least medium effect sizes (d>.68). There were no statistically significant performance differences in these variables between legacy and digital administration in either sample, (all p<.13). There were no language differences in any variables, p>.10, and correlations between LASSI variables and other cognitive variables were statistically significant (all p<.01). The most predictive legacy variables, Proactive Interference (PI) and Failure to recover from Proactive Interference (frPI), were identical across legacy and migrated versions within groups and were identical to results of previous studies with the legacy LASSI-L. Classification accuracy was 88% for NC and 78% for aMCI participants.</p><p><strong>Conclusions: </strong>The results for the digital migration of the LASSI-D were highly convergent with the legacy LASSI-L. Across all indices of similarity, including sensitivity, criterion validity, classification accuracy, and performance, the versions converged across languages. Future papers will present additional validation data, including correlations with blood-based AD biomarkers and alternative forms. 
The current data provide convincing evidence of the utility of a fully self-administered digitally migrated cognitive challenge test.</p><p><strong>Clinicaltrial: </strong","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":" ","pages":""},"PeriodicalIF":4.8,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142814696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}