Background: The therapeutic relationship is a professional partnership between clinicians and patients that supports open communication and clinical decision-making. This relationship is critical to the delivery of effective mental health care. The integration of artificial intelligence (AI) into mental health care has the potential to support accessibility and personalized care; however, little is known about how AI might affect the dynamics of the therapeutic relationship.
Objective: This study aimed to ascertain how physicians anticipate AI tools will impact the therapeutic relationship in mental health care.
Methods: We conducted 42 in-depth interviews with psychiatrists and family medicine practitioners to investigate physician perceptions regarding the impact of AI on mental health care.
Results: Physicians identified several ways AI use could disrupt care, noting that these tools could affect the patient-physician dyad both positively and negatively. The main themes that emerged included potential disruptions to the therapeutic relationship, shifts in shared decision-making dynamics, and the importance of transparent AI use. Participants suggested that AI tools could create efficiencies that free up time for relationship building and help avoid miscommunication during psychotherapeutic interactions. However, they also expressed concerns that AI tools might not adequately capture aspects of the therapeutic relationship, such as empathy, that are vital to mental health care. Physicians also raised concerns about how AI tools might affect their ability to build and maintain relationships with patients.
Conclusions: As AI applications become increasingly integrated into mental health care, it is crucial to assess how this integration may support or disrupt the therapeutic relationship. Physician acceptance of emerging AI tools may be highly dependent on how well the human elements of mental health care are preserved.
Unlabelled: We propose the Stanford Brainstorm Social Media Safety Plan (SMS) as a user-friendly, collaborative, and effective tool to mitigate the imminent dangers and risks to mental health associated with social media use by children, adolescents, and young adults. This tool is informed and inspired by suicide safety plans, which have long shaped the standard of care for psychiatric discharges from inpatient units, emergency rooms, and comprehensive psychiatric emergency programs, as well as longitudinal outpatient care following occurrences of suicidal ideation or suicide attempts. In many systems, including those of the Veterans Health Administration, such plans are an absolute requirement prior to patient discharge. This social media safety plan is to be used proactively, in times of normalcy as well as crisis. While there are parental controls for digital devices and online platforms, official legal age requirements for online accounts, and individual parenting approaches, there is a dearth of practical tools that youth, families, schools, and communities can use to shape and alter social media use parameters, rules, and habits. Furthermore, providers in psychiatry, child and adolescent psychiatry, and mental health at large are often confronted with behaviors and issues related to social media use during time- and resource-limited appointments, presenting a massive opportunity for interventions that are harm reduction-oriented and easy to disseminate. While the plan has not been studied in a clinical trial, we have used it extensively with patients and families and presented it to larger audiences at mental health and technology conferences over the past two years. The responses and feedback we have received, as well as reported anecdotal experiences with its use, have been overwhelmingly positive. An already unfolding child and adolescent mental health epidemic in the United States has been deepened in part by easy access to social media (and digital screen time) with inadequate safeguards and monitoring in place. Social media's impacts and related interventions require a multitiered biopsychosocial and cultural approach: at the level of the individual child, the family, the school, the state, the market, and the nation. At the level of youth and their parents or caregivers, practical tools are desperately needed. We propose the SMS as one such significant tool.
Background: As mental health challenges continue to rise globally, there is increasing interest in the use of GPT models, such as ChatGPT, in mental health care. A few months after its release, tens of thousands of users interacted with GPT-based therapy bots, with mental health support identified as the primary use case. ChatGPT offers scalable and immediate support through natural language processing capabilities, but its clinical applicability, safety, and effectiveness remain underexplored.
Objective: This scoping review aims to provide a comprehensive overview of the main clinical applications of ChatGPT in mental health care, along with the existing empirical evidence for its performance.
Methods: A systematic search of 8 electronic databases was conducted in April 2025 to identify primary studies. Eligible studies were primary research reporting on the evaluation of a ChatGPT clinical application implemented for a mental health care-specific purpose.
Results: In total, 60 studies were included in this scoping review. Most applications used generic ChatGPT and focused on the detection of mental health problems and on counseling and treatment, while only a minority of studies investigated ChatGPT use in clinical decision facilitation and prognosis tasks. Most studies were prompt experiments, in which standardized text inputs (designed to mimic clinical scenarios, patient descriptions, or practitioner queries) were submitted to ChatGPT to evaluate its performance in mental health-related tasks. In terms of performance, ChatGPT showed good accuracy in binary diagnostic classification and differential diagnosis and performed well in simulating therapeutic conversation, providing psychoeducation, and conducting specific therapeutic strategies. However, ChatGPT has significant limitations, particularly with more complex clinical presentations and overly pessimistic prognostic outputs. Overall, when compared with mental health experts or other artificial intelligence models, ChatGPT approximated or surpassed their performance in various clinical tasks. Finally, custom ChatGPT use was associated with better performance, especially in counseling and treatment tasks.
Conclusions: While ChatGPT offers promising capabilities for mental health screening, psychoeducation, and structured therapeutic interactions, its current limitations highlight the need for caution in clinical adoption. These limitations also underscore the need for rigorous evaluation frameworks, model refinement, and safety protocols before broader clinical integration. Moreover, the variability in performance across versions, tasks, and diagnostic categories invites a more nuanced reflection on the conditions under which ChatGPT can be safely and effectively integrated into mental health settings.
Unlabelled: Generative artificial intelligence (AI) is reshaping mental health, but the direction of that change remains unclear. In this commentary, we examine recent evidence and trends in mental health AI to identify where AI can provide value for the field while avoiding the pitfalls that have challenged the smartphone app and virtual reality (VR) space. While AI technology will continue to improve, those advances alone are not enough to move AI from mental wellness to psychiatric tools; a new generation of clinical investigation, integration, and leadership will be needed to unlock the full value of AI.
Background: Attention-deficit/hyperactivity disorder (ADHD) is a neurodevelopmental disorder characterized by difficulties in attention, impulsivity, and hyperactivity. These difficulties can result in pervasive and longstanding psychological distress and social, academic, and occupational impairments.
Objective: This systematic review aims to investigate the effectiveness and user experience (ie, safety, usability, acceptability, and attrition) outcomes of immersive virtual reality (VR) interventions for cognitive rehabilitation in people with ADHD and identify research gaps and avenues for future research in this domain.
Methods: Peer-reviewed journal articles that appraised the treatment impact of any immersive VR-based intervention on cognitive abilities in people of all ages with ADHD were eligible for inclusion. The following databases were searched up to November 2024: Cochrane Library, IEEE Xplore Digital Library, PsycINFO, PubMed, Scopus, and Web of Science. Records were screened on title and abstract information after deduplication, and the remaining records underwent full-text appraisal. Findings from eligible articles were extracted into a standardized coding sheet before being tabulated and reported in a narrative synthesis.
Results: Out of 1046 records identified, 15 articles met the inclusion criteria. Immersive VR-based interventions for people with ADHD were generally effective in improving cognitive abilities, such as attention, memory, and executive functioning. User experience outcomes were also generally positive, with low levels of simulator sickness and minimal attrition reported during VR-based treatment.
Conclusions: Immersive VR-based interventions hold promise for effectively, safely, and rapidly treating cognitive deficits in children and adults with ADHD. However, more studies are required to examine their longitudinal impact beyond treatment cessation.
Background: Many youth rely on direct-to-consumer generative artificial intelligence (GenAI) chatbots for mental health support, yet the quality of the psychotherapeutic capabilities of these chatbots is understudied.
Objective: This study aimed to comprehensively evaluate and compare the quality of widely used GenAI chatbots with psychotherapeutic capabilities using the Conversational Agent for Psychotherapy Evaluation II (CAPE-II) framework.
Methods: In this cross-sectional study, trained raters used the CAPE-II framework to rate the quality of 5 chatbots from GenAI platforms widely used by youth. Raters role-played personas of youth with mental health challenges to prompt the chatbots and facilitate conversations. Chatbot responses were generated from August to October 2024. The primary outcomes were rated scores in 9 sections. The proportion of high-quality ratings (binary rating of 1) in each section was compared between chatbots using Bonferroni-corrected chi-square tests.
Results: While GenAI chatbots were found to be accessible (104/120 high-quality ratings, 86.7%) and avoid harmful statements and misinformation (71/80, 89%), they performed poorly in their therapeutic approach (14/45, 31%) and their ability to monitor and assess risk (31/80, 39%). Privacy policies were difficult to understand, and information on chatbot model training and knowledge was unavailable, resulting in low scores. Bonferroni-corrected chi-square tests showed statistically significant differences in chatbot quality in the background, therapeutic approach, and monitoring and risk evaluation sections. Qualitatively, raters perceived most chatbots as having strong conversational abilities but found them plagued by various issues, including fabricated content and poor handling of crisis situations.
Conclusions: Direct-to-consumer GenAI chatbots are unsafe for the millions of youth who use them. While they demonstrate strengths in accessibility and conversational capabilities, they pose unacceptable risks through improper crisis handling and a lack of transparency regarding privacy and model training. Immediate reforms, including the use of standardized audits of quality, such as the CAPE-II framework, are needed. These findings provide actionable targets for platforms, regulators, and policymakers to protect youth seeking mental health support.
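As a minimal illustration of the between-chatbot comparison described in the Methods of the preceding abstract, the sketch below runs a chi-square test on counts of high- versus low-quality ratings per chatbot within a CAPE-II section and applies a Bonferroni correction across the 9 sections. The counts, chatbot labels, and use of scipy.stats.chi2_contingency are assumptions made for illustration only; they are not the study's data or analysis code.

    # Hedged sketch: chi-square comparison of high- vs low-quality rating counts
    # across chatbots within a CAPE-II section, with a Bonferroni-adjusted alpha.
    # All counts and chatbot names below are hypothetical.
    from scipy.stats import chi2_contingency

    # (high-quality, low-quality) rating counts per chatbot, per section
    sections = {
        "therapeutic_approach": {"bot_a": (5, 4), "bot_b": (2, 7), "bot_c": (7, 2)},
        "monitoring_and_risk":  {"bot_a": (8, 8), "bot_b": (3, 13), "bot_c": (10, 6)},
    }

    n_sections = 9                 # CAPE-II sections over which tests are corrected
    alpha = 0.05 / n_sections      # Bonferroni-adjusted significance threshold

    for name, counts in sections.items():
        table = [list(pair) for pair in counts.values()]   # chatbots x (high, low)
        chi2, p, dof, _ = chi2_contingency(table)
        verdict = "significant" if p < alpha else "not significant"
        print(f"{name}: chi2={chi2:.2f}, df={dof}, p={p:.4f} "
              f"({verdict} at corrected alpha={alpha:.4f})")

The design choice sketched here, dividing alpha by the number of sections rather than adjusting the p values themselves, is one common way to apply a Bonferroni correction; the published analysis may have implemented it differently.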

