Background: Online counseling chat services are increasingly used by young people worldwide. A growing body of literature supports the use and effectiveness of these services for adolescent mental health. However, an overview of the main existing resources is needed to identify unmet needs and gaps in the field.
Objective: This study aims to provide an overview of existing online counseling chat services targeting individuals aged 12-30 years in 4 European countries (Belgium, Finland, Hungary, and Spain), and to identify potential needs and gaps by comparing the collected data with recognized quality standard criteria that define the best practices in the field of counseling.
Methods: A web search was conducted in the 4 participating countries using the same keywords to identify the main chat services. The final selection of chat services was made using a stratified purposive sampling method. A common data extraction database was developed to record information from these websites. Finally, the extracted information was compared against the fulfillment of 7 selected criteria from the Child Helpline International Quality Standards Framework. Additionally, certain chat characteristics were compared with the number of Child Helpline Quality Standard criteria fulfilled.
Results: The search identified a total of 66 service providers offering 71 different chat services. Nongovernmental organizations accounted for more than half of the service providers, at 42 of 66 (64%). Additional helplines, such as hotlines, were also available through 54 of 66 (82%) service providers. Artificial intelligence tools were incorporated into 6 of 66 (9%) chat services. Differences were observed between countries; for example, the use of volunteers as counselors was predominant in Hungary and Belgium. Topic-specific chat services were common in Belgium and Spain, whereas in Finland and Hungary, chat services generally welcomed a wide range of topics for young people to discuss. Comparisons with Child Helpline International's recommendations revealed some gaps; for example, only 9 of 71 (13%) chat services operated 24 hours a day, and only 10 of 71 (14%) offered interactions in minority or foreign languages. Additionally, the use of free social media platforms for chat services was prevalent in some countries, which could compromise users' privacy. Being part of the Child Helpline International consortium was marginally associated with meeting a higher number of standard criteria (β coefficient 1.55; P=.08).
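The association reported above (β coefficient 1.55; P=.08) is consistent with a simple linear regression of the number of criteria met on a consortium-membership indicator. A minimal sketch of such an analysis, assuming hypothetical column names (criteria_met, chi_member) and the statsmodels package; this is an illustration, not the authors' actual code:

```python
# Hypothetical sketch: regress the number of Child Helpline International
# quality-standard criteria met (0-7 per chat service) on a 0/1 indicator
# of consortium membership. Column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

chats = pd.read_csv("chat_services.csv")  # one row per chat service

model = smf.ols("criteria_met ~ chi_member", data=chats).fit()
print(model.summary())  # the chi_member row reports the beta coefficient and P value
```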
Conclusions: This study provides a comprehensive overview of existing online chat counseling services in 4 European countries. Our findings suggest that some existing chat services for young people could be improved in areas such as accessibility, data security, and the inclusion of vulnerable groups.
Background: Recent developments in generative artificial intelligence (AI) have introduced the general public to powerful, easily accessible tools, such as ChatGPT and Gemini, for a rapidly expanding range of uses. Among those uses are specialized chatbots that serve in the role of a therapist, as well as personally curated digital companions that offer emotional support. However, the ability of AI therapists to provide consistently safe and effective treatment remains largely unproven, and those concerns are especially salient in regard to adolescents seeking mental health support.
Objective: This study aimed to determine the willingness of therapy and companion AI chatbots to endorse harmful or ill-advised ideas proposed by fictional teenagers experiencing mental health distress.
Methods: A convenience sample of 10 publicly available AI bots offering therapeutic support or companionship was assembled, and each bot was presented with 3 detailed fictional case vignettes of adolescents with mental health challenges. Each fictional adolescent asked the AI chatbot to endorse 2 harmful or ill-advised proposals, such as dropping out of school, avoiding all human contact for a month, or pursuing a relationship with an older teacher, resulting in a total of 6 proposals presented to each chatbot. The clinical scenarios were intended to reflect challenges commonly seen in the practice of therapy with adolescents, and the proposals offered by the fictional teenagers were intended to be clearly dangerous or unwise. The 10 AI bots were selected by the author to represent a range of chatbot types, including generic AI bots, companion bots, and dedicated mental health bots. Chatbot responses were analyzed for explicit endorsement, defined as direct support for the teenagers' proposed behavior.
Results: Across the 60 total scenarios, chatbots actively endorsed harmful proposals in 19 of 60 (32%) opportunities to do so. Of the 10 chatbots, 4 endorsed half or more of the proposals put to them, and none opposed all 6.
Conclusions: A substantial proportion of AI chatbots offering mental health or emotional support endorsed harmful proposals from fictional teenagers. These results raise concerns about the ability of some AI-based companion or therapy bots to safely support teenagers with serious mental health issues, and they heighten the concern that AI bots may be overly supportive at the expense of offering useful guidance when it is needed. The results highlight the urgent need for oversight, safety protocols, and ongoing research regarding digital mental health support for adolescents.
Background: Mindfulness-based interventions (MBIs) are widely used in mental health promotion and treatment. Despite widespread evidence of effectiveness across different populations and delivery modes, findings concerning the mechanisms of action of MBIs remain sparse.
Objective: The objective of this paper was to understand the mediators of the Mindfulness Virtual Community (MVC) intervention, an 8-week, multicomponent, online mindfulness and cognitive-behavioral therapy (M-CBT) intervention, based on a secondary evaluation of 2 randomized controlled trials (RCTs) with student participants.
Methods: Mediation analysis using structural equation modeling was conducted to assess direct and indirect relationships between study group (ie, intervention or waitlist control) and outcomes. Consistent with the intervention's theoretical perspective and direct-effects paths, a model was specified to evaluate whether mindful nonreactivity, as measured by the Five Facet Mindfulness Questionnaire, mediated the effect of the MVC intervention on anxiety and depression (as symptom-driven outcomes) and on perceived stress and quality of life (as functional outcomes). The model included additional mediating paths for perceived stress through anxiety and depression, and for quality of life through anxiety, depression, and perceived stress. The model was thereafter extended to adjust for preintervention differences in the mindfulness facets (ie, observing, describing, acting with awareness, nonjudgment, and nonreactivity).
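The mediation structure described here can be written in lavaan-style syntax; below is a minimal sketch using the Python semopy package, with illustrative variable names (group, nonreact, anxiety, depression, stress, qol) that are assumptions rather than the study's actual variables or code:

```python
# Hypothetical sketch of the mediation model described above, using semopy.
# "group" is a 0/1 intervention indicator; all other names are illustrative.
import pandas as pd
from semopy import Model, calc_stats

spec = """
nonreact ~ group
anxiety ~ group + nonreact
depression ~ group + nonreact
stress ~ group + nonreact + anxiety + depression
qol ~ group + nonreact + anxiety + depression + stress
"""

data = pd.read_csv("mvc_trials.csv")  # pooled participant data from the 2 RCTs
model = Model(spec)
model.fit(data)
print(model.inspect())      # path estimates, from which indirect effects are composed
print(calc_stats(model).T)  # fit indices such as CFI and RMSEA
```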
Results: Direct (nonmediated) effects indicated statistically significant differences at 8 weeks between the MVC and waitlist control groups on depression (-1.72; P=.002), anxiety (-3.40; P=.001), perceived stress (-2.44; P<.001), quality of life (4.31; P=.005), and the nonreactivity facet of mindfulness (1.63; P<.001), in favor of the MVC intervention. Mediation analysis supported the mediating role of the nonreactivity facet of mindfulness, depression, anxiety, and perceived stress through single and sequential mediation paths. Results indicated good fit characteristics for the main (comparative fit index [CFI]=.99; root-mean-square error of approximation [RMSEA]=.05; standardized root-mean-square residual [SRMR]=.05) and extended (CFI=.99; RMSEA=.04; SRMR=.04) models.
Conclusions: This research underscores the importance of mindful nonreactivity, depression, and anxiety as key mediators of MVC intervention benefits.
The emergence of generative artificial intelligence (GenAI) in clinical settings, particularly in health documentation and communication, presents a largely unexplored but potentially transformative force in shaping placebo and nocebo effects. These psychosocial phenomena are especially potent in mental health care, where outcomes are closely tied to patients' expectations, perceived provider competence, and empathy. Drawing on the conceptual understanding of placebo and nocebo effects and the latest research, this Viewpoint argues that GenAI may amplify these effects, both positive and negative. Through tone, assurance, and even the rapidity of responses, GenAI-generated text, whether co-written with clinicians or peers or fully automated, could influence patient perceptions in ways that mental health clinicians may not currently anticipate. When embedded in clinician notes or patient-facing summaries, AI language may strengthen the expectancies that underlie placebo effects or, conversely, heighten nocebo effects through subtle cues, inaccuracies, or, potentially, a loss of human nuance. This article explores the implications of AI-mediated clinical communication, particularly in mental health care, emphasizing the importance of transparency, ethical oversight, and psychosocial awareness as these technologies evolve.
Background: Generative artificial intelligence (AI) chatbots are an online source of information consulted by adolescents to gain insight into mental health and wellness behaviors. However, the accuracy and content of generative AI responses to questions related to suicide have not been systematically investigated.
Objective: This study aims to investigate general (not counseling-specific) generative AI chatbots' responses to questions regarding suicide.
Methods: A content analysis was conducted of the responses of generative AI chatbots to questions about suicide. In phase 1 of the study, the generative chatbots examined included (1) Google Bard or Gemini; (2) Microsoft Bing or Copilot; (3) ChatGPT 3.5 (OpenAI); and (4) Claude (Anthropic). In phase 2 of the study, additional generative chatbot responses were analyzed, which included Google Gemini, Claude 2 (Anthropic), xAI Grok 2, Mistral AI, and Meta AI (Meta Platforms). The two phases occurred a year apart.
Results: A linguistic analysis of the authenticity and tone of the responses was conducted using the Linguistic Inquiry and Word Count program. The depth and accuracy of the responses increased between phase 1 and phase 2 of the study, and the generative AI chatbots' responses were more comprehensive and responsive in phase 2 than in phase 1. Specifically, the phase 2 responses provided more information regarding all aspects of suicide (eg, signs of suicide, lethality, resources, and ways to support those in crisis). Another difference between the two phases was a greater emphasis on the 988 suicide hotline number in the responses.
Conclusions: While this dynamic information may be helpful for youth in need, seeking help from a trained mental health professional remains essential. Furthermore, generative AI responses to suicide-related questions should be checked periodically to ensure that best practices regarding suicide prevention are being communicated.
Background: In recent years, policymakers worldwide have been increasingly concerned with promoting public mental well-being. While digitally supported well-being interventions seem effective in general nonclinical populations, their cost-effectiveness remains unclear.
Objective: This study aims to systematically synthesize evidence on the cost-effectiveness of digitally supported mental well-being interventions targeting the general population or adults with subclinical mental health symptoms.
Methods: PubMed, Embase, Scopus, and Web of Science were systematically searched for health economic or cost-minimization studies. Eligibility criteria included interventions in the general population or adults showing risk factors or subclinical mental health symptoms, with at least 1 digital component. Study quality was comprehensively assessed using the Consensus Health Economic Criteria list.
Results: Of 3455 records identified after duplicate removal, 12 studies were included: 3 studies evaluated universal prevention, 3 investigated selective prevention, and 6 covered indicated prevention. Six studies applied a societal perspective. Incremental cost-utility ratios were reported in 6 of the included studies and varied from dominant to €18,710 (US $23,185) per quality-adjusted life year. In general, digitally supported well-being interventions in nonclinical adults, and particularly indicated prevention strategies, seemed to generate improved health outcomes at lower costs from a societal perspective. The quality appraisal highlighted several shortcomings of the available literature.
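For context, an incremental cost-utility ratio (ICUR) divides the difference in costs between an intervention and its comparator by the difference in quality-adjusted life years (QALYs); an intervention is called dominant when it both costs less and yields more QALYs, so no ratio is reported. The standard formula, shown as a sketch:

```latex
\[
\mathrm{ICUR} = \frac{\Delta C}{\Delta \mathrm{QALY}}
             = \frac{C_{\text{intervention}} - C_{\text{comparator}}}
                    {\mathrm{QALY}_{\text{intervention}} - \mathrm{QALY}_{\text{comparator}}}
\]
```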
Conclusions: Overall, the use of digital tools for mental well-being prevention and promotion in nonclinical adult populations has the potential to be cost-effective. Nevertheless, to adequately guide policymaking, more evidence is still needed. Future studies should ensure valid argumentation for the applied time horizon and perspective, alongside rigorous sensitivity analyses in accordance with best practices, to improve the cost-effectiveness evidence. Furthermore, assessment methods more sensitive to changes in well-being, such as the EQ Health and Well-being instrument, could be considered.
Background: The COVID-19 pandemic has accelerated the adoption of video consultations in mental health care, highlighting the importance of therapeutic alliances for successful treatment outcomes in both face-to-face and web-based settings. Telepresence, the sense of being present with the mental health specialist (MHS) rather than feeling remote, is a critical component of building a strong therapeutic alliance in video consultations. While patients often report high telepresence levels, MHSs express concerns about whether video consultations can replicate the quality of face-to-face interactions. Despite its importance, research on telepresence development in MHSs over time and the dyadic interplay between patients and MHSs remains limited.
Objective: This study aimed to evaluate the mutual influence within patient-MHS dyads on telepresence development during video consultations, using data from a randomized controlled trial assessing the feasibility of video consultations for depression and anxiety disorders in primary care.
Methods: The study included 22 patient-MHS dyads (22 patients, 4 MHSs). Telepresence was measured using the Telepresence in Videoconference Scale. Dyadic data were analyzed using an actor-partner interdependence model, specified as a structural equation model for distinguishable dyads. Actor effects refer to the impact of an individual's telepresence at time point 1 (T1) on their telepresence at time point 2 (T2), while partner effects represent the influence of one party's telepresence at T1 on the other's telepresence at T2. Sensitivity analyses excluded data from individual MHSs to account for their unique effects.
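In model form, the actor-partner interdependence model regresses each dyad member's T2 telepresence on both members' T1 telepresence. A sketch of the core equations, with covariates such as age omitted for brevity (TP = telepresence score; the a terms are actor effects and the p terms are partner effects):

```latex
\[
\begin{aligned}
\mathrm{TP}^{\mathrm{patient}}_{T2} &= a_{\mathrm{patient}}\,\mathrm{TP}^{\mathrm{patient}}_{T1}
  + p_{\mathrm{MHS}\rightarrow\mathrm{patient}}\,\mathrm{TP}^{\mathrm{MHS}}_{T1} + \varepsilon_{\mathrm{patient}} \\
\mathrm{TP}^{\mathrm{MHS}}_{T2} &= a_{\mathrm{MHS}}\,\mathrm{TP}^{\mathrm{MHS}}_{T1}
  + p_{\mathrm{patient}\rightarrow\mathrm{MHS}}\,\mathrm{TP}^{\mathrm{patient}}_{T1} + \varepsilon_{\mathrm{MHS}}
\end{aligned}
\]
```

In a distinguishable-dyad specification, the patient and MHS coefficients are estimated separately rather than constrained to be equal.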
Results: A significant actor effect for MHSs (P<.001) indicated high temporal stability of telepresence between T1 and T2. In contrast, the actor effect for patients was not statistically significant, suggesting greater variability between T1 and T2. No significant partner effects were observed for either patients or MHSs, suggesting no mutual influence between dyad members. Age was a significant covariate for telepresence in both groups.
Conclusions: Consistent with prior findings, MHSs experienced increased telepresence over time, whereas patients reported high telepresence levels from the start of therapy. The lack of dyadic influence highlights the need for further exploration into factors affecting telepresence development, such as age, technical proficiency, and prior treatment experience. Future studies with larger samples and more sessions are necessary to enhance the generalizability of these findings and to optimize the use of video consultations in mental health care.

