An Ethical Perspective on the Democratization of Mental Health With Generative AI
Zohar Elyoseph, Tamar Gur, Yuval Haber, Tomer Simon, Tal Angert, Yuval Navon, Amir Tal, Oren Asman
Knowledge has become more open and accessible to a large audience with the "democratization of information" facilitated by technology. This paper provides a sociohistorical perspective for the theme issue "Responsible Design, Integration, and Use of Generative AI in Mental Health." It evaluates ethical considerations in using generative artificial intelligence (GenAI) for the democratization of mental health knowledge and practice. It explores the historical context of democratizing information, transitioning from restricted access to widespread availability due to the internet, open-source movements, and most recently, GenAI technologies such as large language models. The paper highlights why GenAI technologies represent a new phase in the democratization movement, offering unparalleled access to highly advanced technology as well as information. In the realm of mental health, this requires delicate and nuanced ethical deliberation. Including GenAI in mental health may allow, among other things, improved accessibility to mental health care, personalized responses, and conceptual flexibility, and could facilitate a flattening of traditional hierarchies between health care providers and patients. At the same time, it also entails significant risks and challenges that must be carefully addressed. To navigate these complexities, the paper proposes a strategic questionnaire for assessing artificial intelligence-based mental health applications. This tool evaluates both the benefits and the risks, emphasizing the need for a balanced and ethical approach to GenAI integration in mental health. The paper calls for a cautious yet positive approach to GenAI in mental health, advocating for the active engagement of mental health professionals in guiding GenAI development. It emphasizes the importance of ensuring that GenAI advancements are not only technologically sound but also ethically grounded and patient-centered.
{"title":"An Ethical Perspective on the Democratization of Mental Health With Generative AI.","authors":"Zohar Elyoseph, Tamar Gur, Yuval Haber, Tomer Simon, Tal Angert, Yuval Navon, Amir Tal, Oren Asman","doi":"10.2196/58011","DOIUrl":"10.2196/58011","url":null,"abstract":"<p><strong>Unlabelled: </strong>Knowledge has become more open and accessible to a large audience with the \"democratization of information\" facilitated by technology. This paper provides a sociohistorical perspective for the theme issue \"Responsible Design, Integration, and Use of Generative AI in Mental Health.\" It evaluates ethical considerations in using generative artificial intelligence (GenAI) for the democratization of mental health knowledge and practice. It explores the historical context of democratizing information, transitioning from restricted access to widespread availability due to the internet, open-source movements, and most recently, GenAI technologies such as large language models. The paper highlights why GenAI technologies represent a new phase in the democratization movement, offering unparalleled access to highly advanced technology as well as information. In the realm of mental health, this requires delicate and nuanced ethical deliberation. Including GenAI in mental health may allow, among other things, improved accessibility to mental health care, personalized responses, and conceptual flexibility, and could facilitate a flattening of traditional hierarchies between health care providers and patients. At the same time, it also entails significant risks and challenges that must be carefully addressed. To navigate these complexities, the paper proposes a strategic questionnaire for assessing artificial intelligence-based mental health applications. This tool evaluates both the benefits and the risks, emphasizing the need for a balanced and ethical approach to GenAI integration in mental health. The paper calls for a cautious yet positive approach to GenAI in mental health, advocating for the active engagement of mental health professionals in guiding GenAI development. It emphasizes the importance of ensuring that GenAI advancements are not only technologically sound but also ethically grounded and patient-centered.</p>","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":"11 ","pages":"e58011"},"PeriodicalIF":4.8,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11500620/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142478187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correction: Digital Mental Health Interventions for Alleviating Depression and Anxiety During Psychotherapy Waiting Lists: Systematic Review
Sijia Huang, Yiyue Wang, Gen Li, Brian J Hall, Thomas J Nyman
[This corrects the article DOI: 10.2196/56650.].
{"title":"Correction: Digital Mental Health Interventions for Alleviating Depression and Anxiety During Psychotherapy Waiting Lists: Systematic Review.","authors":"Sijia Huang, Yiyue Wang, Gen Li, Brian J Hall, Thomas J Nyman","doi":"10.2196/67281","DOIUrl":"10.2196/67281","url":null,"abstract":"<p><p>[This corrects the article DOI: 10.2196/56650.].</p>","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":"11 ","pages":"e67281"},"PeriodicalIF":4.8,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11525075/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142478189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Use of AI in Mental Health Care: Community and Mental Health Professionals Survey
Shane Cross, Imogen Bell, Jennifer Nicholas, Lee Valentine, Shaminka Mangelsdorf, Simon Baker, Nick Titov, Mario Alvarez-Jimenez
Background: Artificial intelligence (AI) has been increasingly recognized as a potential solution to address mental health service challenges by automating tasks and providing new forms of support.
Objective: This study, the first in a series, aims to estimate current rates of AI technology use, as well as the perceived benefits, harms, and risks experienced by community members (CMs) and mental health professionals (MHPs).
Methods: This study involved 2 web-based surveys conducted in Australia. The surveys collected data on demographics, technology comfort, attitudes toward AI, specific AI use cases, and experiences of benefits and harms from AI use. Descriptive statistics were calculated, and a thematic analysis of open-ended responses was conducted.
Results: The final sample consisted of 107 CMs and 86 MHPs. General attitudes toward AI varied, with CMs reporting neutral and MHPs reporting more positive attitudes. Regarding AI usage, 28% (30/107) of CMs used AI, primarily for quick support (18/30, 60%) and as a personal therapist (14/30, 47%). Among MHPs, 43% (37/86) used AI, mostly for research (24/37, 65%) and report writing (20/37, 54%). While the majority found AI to be generally beneficial (23/30, 77% of CMs and 34/37, 92% of MHPs), specific harms and concerns were experienced by 47% (14/30) of CMs and 51% (19/37) of MHPs. Open feedback contained an equal mix of positive and negative sentiment toward the future of AI in mental health care.
Conclusions: Commercial AI tools are increasingly being used by CMs and MHPs. Respondents believe AI will offer future advantages for mental health care in terms of accessibility, cost reduction, personalization, and work efficiency. However, they were equally concerned about reduced human connection, ethics, privacy and regulation, medical errors, potential for misuse, and data security. Despite the immense potential, integration into mental health systems must be approached with caution, addressing legal and ethical concerns while developing safeguards to mitigate potential harms. Future surveys are planned to track the use and acceptability of AI and associated issues over time.
{"title":"Use of AI in Mental Health Care: Community and Mental Health Professionals Survey.","authors":"Shane Cross, Imogen Bell, Jennifer Nicholas, Lee Valentine, Shaminka Mangelsdorf, Simon Baker, Nick Titov, Mario Alvarez-Jimenez","doi":"10.2196/60589","DOIUrl":"10.2196/60589","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) has been increasingly recognized as a potential solution to address mental health service challenges by automating tasks and providing new forms of support.</p><p><strong>Objective: </strong>This study is the first in a series which aims to estimate the current rates of AI technology use as well as perceived benefits, harms, and risks experienced by community members (CMs) and mental health professionals (MHPs).</p><p><strong>Methods: </strong>This study involved 2 web-based surveys conducted in Australia. The surveys collected data on demographics, technology comfort, attitudes toward AI, specific AI use cases, and experiences of benefits and harms from AI use. Descriptive statistics were calculated, and thematic analysis of open-ended responses were conducted.</p><p><strong>Results: </strong>The final sample consisted of 107 CMs and 86 MHPs. General attitudes toward AI varied, with CMs reporting neutral and MHPs reporting more positive attitudes. Regarding AI usage, 28% (30/108) of CMs used AI, primarily for quick support (18/30, 60%) and as a personal therapist (14/30, 47%). Among MHPs, 43% (37/86) used AI; mostly for research (24/37, 65%) and report writing (20/37, 54%). While the majority found AI to be generally beneficial (23/30, 77% of CMs and 34/37, 92% of MHPs), specific harms and concerns were experienced by 47% (14/30) of CMs and 51% (19/37) of MHPs. There was an equal mix of positive and negative sentiment toward the future of AI in mental health care in open feedback.</p><p><strong>Conclusions: </strong>Commercial AI tools are increasingly being used by CMs and MHPs. Respondents believe AI will offer future advantages for mental health care in terms of accessibility, cost reduction, personalization, and work efficiency. However, they were equally concerned about reducing human connection, ethics, privacy and regulation, medical errors, potential for misuse, and data security. Despite the immense potential, integration into mental health systems must be approached with caution, addressing legal and ethical concerns while developing safeguards to mitigate potential harms. Future surveys are planned to track use and acceptability of AI and associated issues over time.</p>","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":"11 ","pages":"e60589"},"PeriodicalIF":4.8,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11488652/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142407063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Biosensor Devices and Ecological Momentary Assessment to Measure Emotion Regulation Processes: Pilot Observational Study With Dialectical Behavior Therapy
Shireen L Rizvi, Allison K Ruork, Qingqing Yin, April Yeager, Madison E Taylor, Evan M Kleiman
Background: Novel technologies, such as ecological momentary assessment (EMA) and wearable biosensor wristwatches, are increasingly being used to assess outcomes and mechanisms of change in psychological treatments. However, there is still a dearth of information on the feasibility and acceptability of these technologies and whether they can be reliably used to measure variables of interest.
Objective: Our objectives were to assess the feasibility and acceptability of incorporating these technologies into dialectical behavior therapy and conduct a pilot evaluation of whether these technologies can be used to assess emotion regulation processes and associated problems over the course of treatment.
Methods: A total of 20 adults with borderline personality disorder were enrolled in a 6-month course of dialectical behavior therapy. For 1 week out of every treatment month, participants were asked to complete EMA 6 times a day and to wear a biosensor watch. Each EMA assessment included measures of several negative emotional states and suicidal thinking, among other items. We used multilevel correlations to assess the contemporaneous association between electrodermal activity and 11 negative emotional states reported via EMA. A multilevel regression was conducted in which changes in composite ratings of suicidal thinking were regressed onto changes in negative affect.
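A multilevel regression of this kind can be sketched with a standard mixed-effects model. Below is a minimal illustration (not the authors' code) using statsmodels with a random intercept per participant; the data file and column names (participant_id, na_change, si_now, si_next) are hypothetical.

```python
# Minimal sketch of a multilevel regression like the one described above,
# assuming a long-format EMA dataset with hypothetical column names.
import pandas as pd
import statsmodels.formula.api as smf

ema = pd.read_csv("ema_ratings.csv")  # one row per EMA prompt per participant

# Random-intercept model: change in negative affect predicting suicidal
# thinking at the next timepoint, controlling for current suicidal thinking.
model = smf.mixedlm(
    "si_next ~ na_change + si_now",  # fixed effects
    data=ema,
    groups=ema["participant_id"],    # random intercept per participant
)
result = model.fit()
print(result.summary())
```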
Results: On average, participants completed 54.39% (SD 33.1%) of all EMA (range 4.7%-92.4%). They also wore the device for an average of 9.52 (SD 6.47) hours per day and for 92.6% of all days. Importantly, no associations were found between emotional state and electrodermal activity, whether examining a composite of all high-arousal negative emotions or individual emotional states (within-person r ranged from -0.026 to -0.109). Smaller changes in negative affect composite scores were associated with greater suicidal thinking ratings at the subsequent timepoint, beyond the effect of suicidal thinking at the initial timepoint.
Conclusions: Results indicated moderate overall compliance with EMA and wearing the watch; however, there was no concurrence between EMA and wristwatch data on emotions. This pilot study raises questions about the reliability and validity of these technologies incorporated into treatment studies to evaluate emotion regulation mechanisms.
{"title":"Using Biosensor Devices and Ecological Momentary Assessment to Measure Emotion Regulation Processes: Pilot Observational Study With Dialectical Behavior Therapy.","authors":"Shireen L Rizvi, Allison K Ruork, Qingqing Yin, April Yeager, Madison E Taylor, Evan M Kleiman","doi":"10.2196/60035","DOIUrl":"10.2196/60035","url":null,"abstract":"<p><strong>Background: </strong>Novel technologies, such as ecological momentary assessment (EMA) and wearable biosensor wristwatches, are increasingly being used to assess outcomes and mechanisms of change in psychological treatments. However, there is still a dearth of information on the feasibility and acceptability of these technologies and whether they can be reliably used to measure variables of interest.</p><p><strong>Objective: </strong>Our objectives were to assess the feasibility and acceptability of incorporating these technologies into dialectical behavior therapy and conduct a pilot evaluation of whether these technologies can be used to assess emotion regulation processes and associated problems over the course of treatment.</p><p><strong>Methods: </strong>A total of 20 adults with borderline personality disorder were enrolled in a 6-month course of dialectical behavior therapy. For 1 week out of every treatment month, participants were asked to complete EMA 6 times a day and to wear a biosensor watch. Each EMA assessment included measures of several negative affect and suicidal thinking, among other items. We used multilevel correlations to assess the contemporaneous association between electrodermal activity and 11 negative emotional states reported via EMA. A multilevel regression was conducted in which changes in composite ratings of suicidal thinking were regressed onto changes in negative affect.</p><p><strong>Results: </strong>On average, participants completed 54.39% (SD 33.1%) of all EMA (range 4.7%-92.4%). They also wore the device for an average of 9.52 (SD 6.47) hours per day and for 92.6% of all days. Importantly, no associations were found between emotional state and electrodermal activity, whether examining a composite of all high-arousal negative emotions or individual emotional states (within-person r ranged from -0.026 to -0.109). Smaller changes in negative affect composite scores were associated with greater suicidal thinking ratings at the subsequent timepoint, beyond the effect of suicidal thinking at the initial timepoint.</p><p><strong>Conclusions: </strong>Results indicated moderate overall compliance with EMA and wearing the watch; however, there was no concurrence between EMA and wristwatch data on emotions. This pilot study raises questions about the reliability and validity of these technologies incorporated into treatment studies to evaluate emotion regulation mechanisms.</p>","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":"11 ","pages":"e60035"},"PeriodicalIF":4.8,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11482737/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142394354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clinical Use of Mental Health Digital Therapeutics in a Large Health Care Delivery System: Retrospective Patient Cohort Study and Provider Survey
Samuel J Ridout, Kathryn K Ridout, Teresa Y Lin, Cynthia I Campbell
Background: While the number of digital therapeutics (DTx) has proliferated, there is little real-world research on the characteristics of providers recommending DTx, their recommendation behaviors, or the characteristics of patients receiving recommendations in the clinical setting.
Objective: The aim of this study was to characterize the clinical and demographic characteristics of patients receiving DTx recommendations and describe provider characteristics and behaviors regarding DTx.
Methods: This retrospective cohort study used electronic health record data from a large, integrated health care delivery system. Demographic and clinical characteristics of adult patients recommended versus not recommended DTx by a mental health provider between May 2020 and December 2021 were examined. A cross-sectional survey of mental health providers providing these recommendations was conducted in December 2022 to assess the characteristics of providers and recommendation behaviors related to DTx. Parametric and nonparametric tests were used to examine statistical significance between groups.
Results: Of 335,250 patients with a mental health appointment, 53,546 (16%) received a DTx recommendation. Compared to those without a DTx recommendation, patients recommended DTx were younger, were of Asian or Hispanic race or ethnicity, were female, were without medical comorbidities, and had commercial insurance (P<.001). More patients receiving a DTx recommendation had anxiety or adjustment disorder diagnoses, but fewer had depression, bipolar, or psychotic disorder diagnoses (P<.001) versus matched controls not recommended DTx. Overall, depression and anxiety symptom scores were lower in patients recommended DTx compared to matched controls not receiving a recommendation, although female patients had a higher proportion of severe depression and anxiety scores compared to male patients. Provider survey results indicated that a higher proportion of nonprescribers than prescribers recommended DTx to patients (P=.008). Of all providers, 29.4% (45/153) reported using the suggested internal electronic health record-based tools (eg, smart text) to recommend DTx, and of providers recommending DTx resources to patients, 64.1% (98/153) reported following up with patients to inquire about DTx benefits. Only 38.4% (58/151) of respondents reported recommending specific DTx modules, and of those, 58.6% (34/58) reported following up on the impact of these specific modules.
Conclusions: DTx use in mental health was modest and varied by patient and provider characteristics. Providers do not appear to actively engage with these tools and integrate them into treatment plans. Providers, while expressing interest in potential benefits from DTx, may view DTx as a passive strategy to augment traditional treatment for select patients.
{"title":"Clinical Use of Mental Health Digital Therapeutics in a Large Health Care Delivery System: Retrospective Patient Cohort Study and Provider Survey.","authors":"Samuel J Ridout, Kathryn K Ridout, Teresa Y Lin, Cynthia I Campbell","doi":"10.2196/56574","DOIUrl":"10.2196/56574","url":null,"abstract":"<p><strong>Background: </strong>While the number of digital therapeutics (DTx) has proliferated, there is little real-world research on the characteristics of providers recommending DTx, their recommendation behaviors, or the characteristics of patients receiving recommendations in the clinical setting.</p><p><strong>Objective: </strong>The aim of this study was to characterize the clinical and demographic characteristics of patients receiving DTx recommendations and describe provider characteristics and behaviors regarding DTx.</p><p><strong>Methods: </strong>This retrospective cohort study used electronic health record data from a large, integrated health care delivery system. Demographic and clinical characteristics of adult patients recommended versus not recommended DTx by a mental health provider between May 2020 and December 2021 were examined. A cross-sectional survey of mental health providers providing these recommendations was conducted in December 2022 to assess the characteristics of providers and recommendation behaviors related to DTx. Parametric and nonparametric tests were used to examine statistical significance between groups.</p><p><strong>Results: </strong>Of 335,250 patients with a mental health appointment, 53,546 (16%) received a DTx recommendation. Patients recommended to DTx were younger, were of Asian or Hispanic race or ethnicity, were female, were without medical comorbidities, and had commercial insurance compared to those without a DTx recommendation (P<.001). More patients receiving a DTx recommendation had anxiety or adjustment disorder diagnoses, but less had depression, bipolar, or psychotic disorder diagnoses (P<.001) versus matched controls not recommended to DTx. Overall, depression and anxiety symptom scores were lower in patients recommended to DTx compared to matched controls not receiving a recommendation, although female patients had a higher proportion of severe depression and anxiety scores compared to male patients. Provider survey results indicated a higher proportion of nonprescribers recommended DTx to patients compared to prescribers (P=.008). Of all providers, 29.4% (45/153) reported using the suggested internal electronic health record-based tools (eg, smart text) to recommend DTx, and of providers recommending DTx resources to patients, 64.1% (98/153) reported they follow up with patients to inquire on DTx benefits. Only 38.4% (58/151) of respondents report recommending specific DTx modules, and of those, 58.6% (34/58) report following up on the impact of these specific modules.</p><p><strong>Conclusions: </strong>DTx use in mental health was modest and varied by patient and provider characteristics. Providers do not appear to actively engage with these tools and integrate them into treatment plans. 
Providers, while expressing interest in potential benefits from DTx, may view DTx as a passive strategy to augment traditional treatment for select patients.</p>","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":"11 ","pages":"e56574"},"PeriodicalIF":4.8,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11463191/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142362357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital Psychotherapies for Adults Experiencing Depressive Symptoms: Systematic Review and Meta-Analysis
Joanna Omylinska-Thurston, Supritha Aithal, Shaun Liverpool, Rebecca Clark, Zoe Moula, January Wood, Laura Viliardos, Edgar Rodríguez-Dorans, Fleur Farish-Edwards, Ailsa Parsons, Mia Eisenstadt, Marcus Bull, Linda Dubrow-Marshall, Scott Thurston, Vicky Karkou
<p><strong>Background: </strong>Depression affects 5% of adults and it is a major cause of disability worldwide. Digital psychotherapies offer an accessible solution addressing this issue. This systematic review examines a spectrum of digital psychotherapies for depression, considering both their effectiveness and user perspectives.</p><p><strong>Objective: </strong>This review focuses on identifying (1) the most common types of digital psychotherapies, (2) clients' and practitioners' perspectives on helpful and unhelpful aspects, and (3) the effectiveness of digital psychotherapies for adults with depression.</p><p><strong>Methods: </strong>A mixed methods protocol was developed using PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. The search strategy used the Population, Intervention, Comparison, Outcomes, and Study Design (PICOS) framework covering 2010 to 2024 and 7 databases were searched. Overall, 13 authors extracted data, and all aspects of the review were checked by >1 reviewer to minimize biases. Quality appraisal was conducted for all studies. The clients' and therapists' perceptions on helpful and unhelpful factors were identified using qualitative narrative synthesis. Meta-analyses of depression outcomes were conducted using the standardized mean difference (calculated as Hedges g) of the postintervention change between digital psychotherapy and control groups.</p><p><strong>Results: </strong>Of 3303 initial records, 186 records (5.63%; 160 studies) were included in the review. Quantitative studies (131/160, 81.8%) with a randomized controlled trial design (88/160, 55%) were most common. The overall sample size included 70,720 participants (female: n=51,677, 73.07%; male: n=16,779, 23.73%). Digital interventions included "stand-alone" or non-human contact interventions (58/160, 36.2%), "human contact" interventions (11/160, 6.8%), and "blended" including stand-alone and human contact interventions (91/160, 56.8%). What clients and practitioners perceived as helpful in digital interventions included support with motivation and accessibility, explanation of task reminders, resources, and learning skills to manage symptoms. What was perceived as unhelpful included problems with usability and a lack of direction or explanation. A total of 80 studies with 16,072 participants were included in the meta-analysis, revealing a moderate to large effect in favor of digital psychotherapies for depression (Hedges g=-0.61, 95% CI -0.75 to -0.47; Z=-8.58; P<.001). Subgroup analyses of the studies with different intervention delivery formats and session frequency did not have a statistically significant effect on the results (P=.48 and P=.97, respectively). However, blended approaches revealed a large effect size (Hedges g=-0.793), while interventions involving human contact (Hedges g=-0.42) or no human contact (Hedges g=-0.40) had slightly smaller effect sizes.</p><p><strong>Conclusions: </strong>Digital inter
{"title":"Digital Psychotherapies for Adults Experiencing Depressive Symptoms: Systematic Review and Meta-Analysis.","authors":"Joanna Omylinska-Thurston, Supritha Aithal, Shaun Liverpool, Rebecca Clark, Zoe Moula, January Wood, Laura Viliardos, Edgar Rodríguez-Dorans, Fleur Farish-Edwards, Ailsa Parsons, Mia Eisenstadt, Marcus Bull, Linda Dubrow-Marshall, Scott Thurston, Vicky Karkou","doi":"10.2196/55500","DOIUrl":"10.2196/55500","url":null,"abstract":"<p><strong>Background: </strong>Depression affects 5% of adults and it is a major cause of disability worldwide. Digital psychotherapies offer an accessible solution addressing this issue. This systematic review examines a spectrum of digital psychotherapies for depression, considering both their effectiveness and user perspectives.</p><p><strong>Objective: </strong>This review focuses on identifying (1) the most common types of digital psychotherapies, (2) clients' and practitioners' perspectives on helpful and unhelpful aspects, and (3) the effectiveness of digital psychotherapies for adults with depression.</p><p><strong>Methods: </strong>A mixed methods protocol was developed using PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. The search strategy used the Population, Intervention, Comparison, Outcomes, and Study Design (PICOS) framework covering 2010 to 2024 and 7 databases were searched. Overall, 13 authors extracted data, and all aspects of the review were checked by >1 reviewer to minimize biases. Quality appraisal was conducted for all studies. The clients' and therapists' perceptions on helpful and unhelpful factors were identified using qualitative narrative synthesis. Meta-analyses of depression outcomes were conducted using the standardized mean difference (calculated as Hedges g) of the postintervention change between digital psychotherapy and control groups.</p><p><strong>Results: </strong>Of 3303 initial records, 186 records (5.63%; 160 studies) were included in the review. Quantitative studies (131/160, 81.8%) with a randomized controlled trial design (88/160, 55%) were most common. The overall sample size included 70,720 participants (female: n=51,677, 73.07%; male: n=16,779, 23.73%). Digital interventions included \"stand-alone\" or non-human contact interventions (58/160, 36.2%), \"human contact\" interventions (11/160, 6.8%), and \"blended\" including stand-alone and human contact interventions (91/160, 56.8%). What clients and practitioners perceived as helpful in digital interventions included support with motivation and accessibility, explanation of task reminders, resources, and learning skills to manage symptoms. What was perceived as unhelpful included problems with usability and a lack of direction or explanation. A total of 80 studies with 16,072 participants were included in the meta-analysis, revealing a moderate to large effect in favor of digital psychotherapies for depression (Hedges g=-0.61, 95% CI -0.75 to -0.47; Z=-8.58; P<.001). Subgroup analyses of the studies with different intervention delivery formats and session frequency did not have a statistically significant effect on the results (P=.48 and P=.97, respectively). 
However, blended approaches revealed a large effect size (Hedges g=-0.793), while interventions involving human contact (Hedges g=-0.42) or no human contact (Hedges g=-0.40) had slightly smaller effect sizes.</p><p><strong>Conclusions: </strong>Digital inter","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":"11 ","pages":"e55500"},"PeriodicalIF":4.8,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11474132/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142356371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leveraging Personal Technologies in the Treatment of Schizophrenia Spectrum Disorders: Scoping Review
Jessica D'Arcey, John Torous, Toni-Rose Asuncion, Leah Tackaberry-Giddens, Aqsa Zahid, Mira Ishak, George Foussias, Sean Kidd
<p><strong>Background: </strong>Digital mental health is a rapidly growing field with an increasing evidence base due to its potential scalability and impacts on access to mental health care. Further, within underfunded service systems, leveraging personal technologies to deliver or support specialized service delivery has garnered attention as a feasible and cost-effective means of improving access. Digital health relevance has also improved as technology ownership in individuals with schizophrenia has improved and is comparable to that of the general population. However, less digital health research has been conducted in groups with schizophrenia spectrum disorders compared to other mental health conditions, and overall feasibility, efficacy, and clinical integration remain largely unknown.</p><p><strong>Objective: </strong>This review aims to describe the available literature investigating the use of personal technologies (ie, phone, computer, tablet, and wearables) to deliver or support specialized care for schizophrenia and examine opportunities and barriers to integrating this technology into care.</p><p><strong>Methods: </strong>Given the size of this review, we used scoping review methods. We searched 3 major databases with search teams related to schizophrenia spectrum disorders, various personal technologies, and intervention outcomes related to recovery. We included studies from the full spectrum of methodologies, from development papers to implementation trials. Methods and reporting follow the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines.</p><p><strong>Results: </strong>This search resulted in 999 studies, which, through review by at least 2 reviewers, included 92 publications. Included studies were published from 2010 to 2023. Most studies examined multitechnology interventions (40/92, 43%) or smartphone apps (25/92, 27%), followed by SMS text messaging (16/92, 17%) and internet-based interventions (11/92, 12%). No studies used wearable technology on its own to deliver an intervention. Regarding the stage of research in the field, the largest number of publications were pilot studies (32/92, 35%), followed by randomized control trials (RCTs; 20/92, 22%), secondary analyses (16/92, 17%), RCT protocols (16/92, 17%), development papers (5/92, 5%), and nonrandomized or quasi-experimental trials (3/92, 3%). Most studies did not report on safety indices (55/92, 60%) or privacy precautions (64/92, 70%). Included studies tend to report consistent positive user feedback regarding the usability, acceptability, and satisfaction with technology; however, engagement metrics are highly variable and report mixed outcomes. Furthermore, efficacy at both the pilot and RCT levels report mixed findings on primary outcomes.</p><p><strong>Conclusions: </strong>Overall, the findings of this review highlight the discrepancy between the high levels of acceptability and usability of these digital interventions, mixed
{"title":"Leveraging Personal Technologies in the Treatment of Schizophrenia Spectrum Disorders: Scoping Review.","authors":"Jessica D'Arcey, John Torous, Toni-Rose Asuncion, Leah Tackaberry-Giddens, Aqsa Zahid, Mira Ishak, George Foussias, Sean Kidd","doi":"10.2196/57150","DOIUrl":"10.2196/57150","url":null,"abstract":"<p><strong>Background: </strong>Digital mental health is a rapidly growing field with an increasing evidence base due to its potential scalability and impacts on access to mental health care. Further, within underfunded service systems, leveraging personal technologies to deliver or support specialized service delivery has garnered attention as a feasible and cost-effective means of improving access. Digital health relevance has also improved as technology ownership in individuals with schizophrenia has improved and is comparable to that of the general population. However, less digital health research has been conducted in groups with schizophrenia spectrum disorders compared to other mental health conditions, and overall feasibility, efficacy, and clinical integration remain largely unknown.</p><p><strong>Objective: </strong>This review aims to describe the available literature investigating the use of personal technologies (ie, phone, computer, tablet, and wearables) to deliver or support specialized care for schizophrenia and examine opportunities and barriers to integrating this technology into care.</p><p><strong>Methods: </strong>Given the size of this review, we used scoping review methods. We searched 3 major databases with search teams related to schizophrenia spectrum disorders, various personal technologies, and intervention outcomes related to recovery. We included studies from the full spectrum of methodologies, from development papers to implementation trials. Methods and reporting follow the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines.</p><p><strong>Results: </strong>This search resulted in 999 studies, which, through review by at least 2 reviewers, included 92 publications. Included studies were published from 2010 to 2023. Most studies examined multitechnology interventions (40/92, 43%) or smartphone apps (25/92, 27%), followed by SMS text messaging (16/92, 17%) and internet-based interventions (11/92, 12%). No studies used wearable technology on its own to deliver an intervention. Regarding the stage of research in the field, the largest number of publications were pilot studies (32/92, 35%), followed by randomized control trials (RCTs; 20/92, 22%), secondary analyses (16/92, 17%), RCT protocols (16/92, 17%), development papers (5/92, 5%), and nonrandomized or quasi-experimental trials (3/92, 3%). Most studies did not report on safety indices (55/92, 60%) or privacy precautions (64/92, 70%). Included studies tend to report consistent positive user feedback regarding the usability, acceptability, and satisfaction with technology; however, engagement metrics are highly variable and report mixed outcomes. 
Furthermore, efficacy at both the pilot and RCT levels report mixed findings on primary outcomes.</p><p><strong>Conclusions: </strong>Overall, the findings of this review highlight the discrepancy between the high levels of acceptability and usability of these digital interventions, mixed","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":"11 ","pages":"e57150"},"PeriodicalIF":4.8,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11474131/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142356374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generation of Backward-Looking Complex Reflections for a Motivational Interviewing-Based Smoking Cessation Chatbot Using GPT-4: Algorithm Development and Validation
Ash Tanuj Kumar, Cindy Wang, Alec Dong, Jonathan Rose
<p><strong>Background: </strong>Motivational interviewing (MI) is a therapeutic technique that has been successful in helping smokers reduce smoking but has limited accessibility due to the high cost and low availability of clinicians. To address this, the MIBot project has sought to develop a chatbot that emulates an MI session with a client with the specific goal of moving an ambivalent smoker toward the direction of quitting. One key element of an MI conversation is reflective listening, where a therapist expresses their understanding of what the client has said by uttering a reflection that encourages the client to continue their thought process. Complex reflections link the client's responses to relevant ideas and facts to enhance this contemplation. Backward-looking complex reflections (BLCRs) link the client's most recent response to a relevant selection of the client's previous statements. Our current chatbot can generate complex reflections-but not BLCRs-using large language models (LLMs) such as GPT-2, which allows the generation of unique, human-like messages customized to client responses. Recent advancements in these models, such as the introduction of GPT-4, provide a novel way to generate complex text by feeding the models instructions and conversational history directly, making this a promising approach to generate BLCRs.</p><p><strong>Objective: </strong>This study aims to develop a method to generate BLCRs for an MI-based smoking cessation chatbot and to measure the method's effectiveness.</p><p><strong>Methods: </strong>LLMs such as GPT-4 can be stimulated to produce specific types of responses to their inputs by "asking" them with an English-based description of the desired output. These descriptions are called prompts, and the goal of writing a description that causes an LLM to generate the required output is termed prompt engineering. We evolved an instruction to prompt GPT-4 to generate a BLCR, given the portions of the transcript of the conversation up to the point where the reflection was needed. The approach was tested on 50 previously collected MIBot transcripts of conversations with smokers and was used to generate a total of 150 reflections. The quality of the reflections was rated on a 4-point scale by 3 independent raters to determine whether they met specific criteria for acceptability.</p><p><strong>Results: </strong>Of the 150 generated reflections, 132 (88%) met the level of acceptability. The remaining 18 (12%) had one or more flaws that made them inappropriate as BLCRs. The 3 raters had pairwise agreement on 80% to 88% of these scores.</p><p><strong>Conclusions: </strong>The method presented to generate BLCRs is good enough to be used as one source of reflections in an MI-style conversation but would need an automatic checker to eliminate the unacceptable ones. This work illustrates the power of the new LLMs to generate therapeutic client-specific responses under the command of a language-based specification.<
{"title":"Generation of Backward-Looking Complex Reflections for a Motivational Interviewing-Based Smoking Cessation Chatbot Using GPT-4: Algorithm Development and Validation.","authors":"Ash Tanuj Kumar, Cindy Wang, Alec Dong, Jonathan Rose","doi":"10.2196/53778","DOIUrl":"10.2196/53778","url":null,"abstract":"<p><strong>Background: </strong>Motivational interviewing (MI) is a therapeutic technique that has been successful in helping smokers reduce smoking but has limited accessibility due to the high cost and low availability of clinicians. To address this, the MIBot project has sought to develop a chatbot that emulates an MI session with a client with the specific goal of moving an ambivalent smoker toward the direction of quitting. One key element of an MI conversation is reflective listening, where a therapist expresses their understanding of what the client has said by uttering a reflection that encourages the client to continue their thought process. Complex reflections link the client's responses to relevant ideas and facts to enhance this contemplation. Backward-looking complex reflections (BLCRs) link the client's most recent response to a relevant selection of the client's previous statements. Our current chatbot can generate complex reflections-but not BLCRs-using large language models (LLMs) such as GPT-2, which allows the generation of unique, human-like messages customized to client responses. Recent advancements in these models, such as the introduction of GPT-4, provide a novel way to generate complex text by feeding the models instructions and conversational history directly, making this a promising approach to generate BLCRs.</p><p><strong>Objective: </strong>This study aims to develop a method to generate BLCRs for an MI-based smoking cessation chatbot and to measure the method's effectiveness.</p><p><strong>Methods: </strong>LLMs such as GPT-4 can be stimulated to produce specific types of responses to their inputs by \"asking\" them with an English-based description of the desired output. These descriptions are called prompts, and the goal of writing a description that causes an LLM to generate the required output is termed prompt engineering. We evolved an instruction to prompt GPT-4 to generate a BLCR, given the portions of the transcript of the conversation up to the point where the reflection was needed. The approach was tested on 50 previously collected MIBot transcripts of conversations with smokers and was used to generate a total of 150 reflections. The quality of the reflections was rated on a 4-point scale by 3 independent raters to determine whether they met specific criteria for acceptability.</p><p><strong>Results: </strong>Of the 150 generated reflections, 132 (88%) met the level of acceptability. The remaining 18 (12%) had one or more flaws that made them inappropriate as BLCRs. The 3 raters had pairwise agreement on 80% to 88% of these scores.</p><p><strong>Conclusions: </strong>The method presented to generate BLCRs is good enough to be used as one source of reflections in an MI-style conversation but would need an automatic checker to eliminate the unacceptable ones. 
This work illustrates the power of the new LLMs to generate therapeutic client-specific responses under the command of a language-based specification.<","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":"11 ","pages":"e53778"},"PeriodicalIF":4.8,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11448290/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142356373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Most Effective Interventions for Classification Model Development to Predict Chat Outcomes Based on the Conversation Content in Online Suicide Prevention Chats: Machine Learning Approach
Salim Salmi, Saskia Mérelle, Renske Gilissen, Rob van der Mei, Sandjai Bhulai
Background: For the provision of optimal care in a suicide prevention helpline, it is important to know what contributes to positive or negative effects on help seekers. Helplines can often be contacted through text-based chat services, which produce large amounts of text data for use in large-scale analysis.
Objective: We trained a machine learning classification model to predict chat outcomes based on the content of the chat conversations in suicide helplines and identified the counselor utterances that had the most impact on its outputs.
Methods: From August 2021 until January 2023, help seekers (N=6903) scored themselves on factors known to be associated with suicidality (eg, hopelessness, feeling entrapped, will to live) before and after a chat conversation with the suicide prevention helpline in the Netherlands (113 Suicide Prevention). Machine learning text analysis was used to predict help seeker scores on these factors. Using 2 approaches for interpreting machine learning models, we identified text messages from helpers in a chat that contributed the most to the prediction of the model.
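The abstract does not specify the model or interpretation methods used, so the sketch below is a simplified stand-in: it trains a linear model on TF-IDF features of counselor messages and reads its coefficients as per-phrase contributions to the predicted outcome. The data columns are hypothetical.

```python
# Simplified stand-in for the pipeline described above (not the authors'
# actual model or interpretation methods): a TF-IDF + linear model whose
# coefficients indicate which counselor phrases push predicted outcome
# scores up or down. Column names are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

chats = pd.read_csv("chats.csv")  # counselor_text, score_change per chat

vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=5)
X = vectorizer.fit_transform(chats["counselor_text"])
model = Ridge().fit(X, chats["score_change"])

# Rank n-grams by coefficient: most positive ~ associated with improvement,
# most negative ~ associated with worsening scores.
terms = vectorizer.get_feature_names_out()
ranked = sorted(zip(model.coef_, terms), reverse=True)
print("Most positive:", ranked[:10])
print("Most negative:", ranked[-10:])
```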
Results: According to the machine learning model, helpers' positive affirmations and expressing involvement contributed to improved scores of the help seekers. Use of macros and ending the chat prematurely due to the help seeker being in an unsafe situation had negative effects on help seekers.
Conclusions: This study reveals insights for improving helpline chats, emphasizing the value of an evocative style with questions, positive affirmations, and practical advice. It also underscores the potential of machine learning in helpline chat analysis.
{"title":"The Most Effective Interventions for Classification Model Development to Predict Chat Outcomes Based on the Conversation Content in Online Suicide Prevention Chats: Machine Learning Approach.","authors":"Salim Salmi, Saskia Mérelle, Renske Gilissen, Rob van der Mei, Sandjai Bhulai","doi":"10.2196/57362","DOIUrl":"10.2196/57362","url":null,"abstract":"<p><strong>Background: </strong>For the provision of optimal care in a suicide prevention helpline, it is important to know what contributes to positive or negative effects on help seekers. Helplines can often be contacted through text-based chat services, which produce large amounts of text data for use in large-scale analysis.</p><p><strong>Objective: </strong>We trained a machine learning classification model to predict chat outcomes based on the content of the chat conversations in suicide helplines and identified the counsellor utterances that had the most impact on its outputs.</p><p><strong>Methods: </strong>From August 2021 until January 2023, help seekers (N=6903) scored themselves on factors known to be associated with suicidality (eg, hopelessness, feeling entrapped, will to live) before and after a chat conversation with the suicide prevention helpline in the Netherlands (113 Suicide Prevention). Machine learning text analysis was used to predict help seeker scores on these factors. Using 2 approaches for interpreting machine learning models, we identified text messages from helpers in a chat that contributed the most to the prediction of the model.</p><p><strong>Results: </strong>According to the machine learning model, helpers' positive affirmations and expressing involvement contributed to improved scores of the help seekers. Use of macros and ending the chat prematurely due to the help seeker being in an unsafe situation had negative effects on help seekers.</p><p><strong>Conclusions: </strong>This study reveals insights for improving helpline chats, emphasizing the value of an evocative style with questions, positive affirmations, and practical advice. It also underscores the potential of machine learning in helpline chat analysis.</p>","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":"11 ","pages":"e57362"},"PeriodicalIF":4.8,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11467604/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142356376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empathy Toward Artificial Intelligence Versus Human Experiences and the Role of Transparency in Mental Health and Social Support Chatbot Design: Comparative Study
Jocelyn Shen, Daniella DiPaola, Safinah Ali, Maarten Sap, Hae Won Park, Cynthia Breazeal
Background: Empathy is a driving force in our connection to others, our mental well-being, and resilience to challenges. With the rise of generative artificial intelligence (AI) systems, mental health chatbots, and AI social support companions, it is important to understand how empathy unfolds toward stories from human versus AI narrators and how transparency plays a role in user emotions.
Objective: We aim to understand how empathy shifts across human-written versus AI-written stories, and how these findings inform ethical implications and human-centered design of using mental health chatbots as objects of empathy.
Methods: We conducted crowdsourced studies with 985 participants, each of whom wrote a personal story and then rated their empathy toward 2 retrieved stories, one written by a language model and the other written by a human. Our studies varied whether the story was disclosed as written by a human or an AI system, to examine how transparency about the author affects empathy toward the narrator. We conducted mixed methods analyses: through statistical tests, we compared users' self-reported state empathy toward the stories across conditions. In addition, we qualitatively coded open-ended feedback about reactions to the stories to understand how and why transparency affects empathy toward human versus AI storytellers.
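A minimal sketch of the kind of comparison reported below (an independent-samples t test with a Cohen d effect size) using SciPy; the scores here are simulated stand-ins, not study data.

```python
# Hedged sketch: comparing empathy ratings toward human- vs AI-written
# stories with an independent-samples t test and Cohen d. The rating
# arrays are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
human_scores = rng.normal(5.2, 1.0, 200)  # hypothetical 1-7 Likert ratings
ai_scores = rng.normal(4.6, 1.0, 200)

t_stat, p_value = stats.ttest_ind(human_scores, ai_scores)

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Standardized mean difference with pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

print(f"t={t_stat:.2f}, p={p_value:.4g}, d={cohens_d(human_scores, ai_scores):.2f}")
```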
Results: We found that participants empathized significantly more with human-written than with AI-written stories in almost all conditions, regardless of whether they were aware (t196=7.07, P<.001, Cohen d=0.60) or not aware (t298=3.46, P<.001, Cohen d=0.24) that an AI system wrote the story. We also found that participants reported greater willingness to empathize with AI-written stories when there was transparency about the story author (t494=-5.49, P<.001, Cohen d=0.36).
Conclusions: Our work sheds light on how empathy toward AI or human narrators is tied to the way the text is presented, thus informing ethical considerations of empathetic artificial social support or mental health chatbots.
{"title":"Empathy Toward Artificial Intelligence Versus Human Experiences and the Role of Transparency in Mental Health and Social Support Chatbot Design: Comparative Study.","authors":"Jocelyn Shen, Daniella DiPaola, Safinah Ali, Maarten Sap, Hae Won Park, Cynthia Breazeal","doi":"10.2196/62679","DOIUrl":"10.2196/62679","url":null,"abstract":"<p><strong>Background: </strong>Empathy is a driving force in our connection to others, our mental well-being, and resilience to challenges. With the rise of generative artificial intelligence (AI) systems, mental health chatbots, and AI social support companions, it is important to understand how empathy unfolds toward stories from human versus AI narrators and how transparency plays a role in user emotions.</p><p><strong>Objective: </strong>We aim to understand how empathy shifts across human-written versus AI-written stories, and how these findings inform ethical implications and human-centered design of using mental health chatbots as objects of empathy.</p><p><strong>Methods: </strong>We conducted crowd-sourced studies with 985 participants who each wrote a personal story and then rated empathy toward 2 retrieved stories, where one was written by a language model, and another was written by a human. Our studies varied disclosing whether a story was written by a human or an AI system to see how transparent author information affects empathy toward the narrator. We conducted mixed methods analyses: through statistical tests, we compared user's self-reported state empathy toward the stories across different conditions. In addition, we qualitatively coded open-ended feedback about reactions to the stories to understand how and why transparency affects empathy toward human versus AI storytellers.</p><p><strong>Results: </strong>We found that participants significantly empathized with human-written over AI-written stories in almost all conditions, regardless of whether they are aware (t<sub>196</sub>=7.07, P<.001, Cohen d=0.60) or not aware (t<sub>298</sub>=3.46, P<.001, Cohen d=0.24) that an AI system wrote the story. We also found that participants reported greater willingness to empathize with AI-written stories when there was transparency about the story author (t<sub>494</sub>=-5.49, P<.001, Cohen d=0.36).</p><p><strong>Conclusions: </strong>Our work sheds light on how empathy toward AI or human narrators is tied to the way the text is presented, thus informing ethical considerations of empathetic artificial social support or mental health chatbots.</p>","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":"11 ","pages":"e62679"},"PeriodicalIF":4.8,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11464935/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142356372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}