Background: eHealth interventions can favorably impact health outcomes and encourage health-promoting behaviors in children. More insight is needed from the perspective of children and their families regarding eHealth interventions, including features influencing program effectiveness.
Objective: This review aimed to explore families' experiences with family-focused web-based interventions for improving health.
Methods: Five databases were searched on October 26, 2022 (search updated on October 24, 2023) for studies reporting qualitative data on the experiences of participating children or their caregivers with web-based programs. Study identification was performed in duplicate, and studies were independently appraised for quality. Thematic synthesis was undertaken on qualitative data extracted from the results section of each included article.
Results: Of 5524 articles identified, 28 were included. The studies examined the experiences of school-aged children (aged 5-18 years) and their caregivers (mostly mothers) with 26 web-based interventions that were developed to manage 17 different health conditions or influence health-supporting behaviors. Six themes describing families' experiences were identified: connecting with others, agency of learning, program reputability or credibility, program flexibility, meeting participants' needs regarding program content or delivery, and impact on lifestyle.
Conclusions: Families positively perceived family-focused web-based interventions, finding value in quality connections and experiencing social support; intervention features aligned with behavioral and self-management principles. Key considerations were highlighted for program developers and health care professionals on ways to adapt eHealth elements to meet families' health-related needs. Continued research examining families' experiences with eHealth interventions is needed, including the experiences of families from diverse populations and distinguishing the perspectives of children, their caregivers, and other family members, to inform the expansion of family-focused eHealth interventions in health care systems.
Trial registration: PROSPERO CRD42022363874; https://tinyurl.com/3xxa8enz.
Background: Large language model (LLM) artificial intelligence chatbots using generative language can offer smoking cessation information and advice. However, little is known about the reliability of the information provided to users.
Objective: This study aims to examine whether 3 ChatGPT chatbots (the World Health Organization's Sarah, BeFreeGPT, and BasicGPT) provide reliable information on how to quit smoking.
Methods: A list of 12 quit-smoking queries was generated from frequent Google searches related to "how to quit smoking." Each query was given to each chatbot, and responses were analyzed for adherence to an index developed from the US Preventive Services Task Force public health guidelines for quitting smoking and counseling principles. Responses were independently coded by 2 reviewers, and differences were resolved by a third coder.
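For illustration, the sketch below shows one way the dual-coding workflow could be implemented in Python: computing inter-rater agreement before disagreements are resolved, and scoring adherence as the share of index items satisfied. The item names, codes, and the resolve_disagreements helper are hypothetical assumptions, not the study's actual instrument.

```python
# Minimal sketch of dual coding against an adherence index (illustrative data).
from sklearn.metrics import cohen_kappa_score

ADHERENCE_ITEMS = [
    "clear_language",
    "recommend_counseling",
    "recommend_nrt",
    "recommend_social_support",
    "craving_management",
]

# 1 = the chatbot response adheres to the item, 0 = it does not (hypothetical codes).
coder_1 = [1, 1, 0, 1, 0]
coder_2 = [1, 1, 1, 1, 0]

# Inter-rater agreement before a third coder resolves differences.
kappa = cohen_kappa_score(coder_1, coder_2)

def resolve_disagreements(c1, c2, tiebreaker):
    """Keep agreed codes; defer disagreements to a third coder."""
    return [a if a == b else t for a, b, t in zip(c1, c2, tiebreaker)]

third_coder = [1, 1, 0, 1, 0]
final_codes = resolve_disagreements(coder_1, coder_2, third_coder)

# Adherence = percentage of index items the response satisfies.
adherence_pct = 100 * sum(final_codes) / len(ADHERENCE_ITEMS)
print(f"kappa={kappa:.2f}, adherence={adherence_pct:.1f}%")
```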
Results: Across chatbots and queries, on average, chatbot responses were rated as adherent to 57.1% of the items on the adherence index. Sarah's adherence (72.2%) was significantly higher than that of BeFreeGPT (50%) and BasicGPT (47.8%; P<.001). The majority of chatbot responses had clear language (97.3%) and included a recommendation to seek out professional counseling (80.3%). About half of the responses included the recommendation to consider using nicotine replacement therapy (52.7%), the recommendation to seek out social support from friends and family (55.6%), and information on how to deal with cravings when quitting smoking (44.4%). The least common element was information about considering the use of non-nicotine replacement therapy prescription drugs (14.1%). Finally, some types of misinformation were present in 22% of responses. Specific queries that were most challenging for the chatbots included "how to quit smoking cold turkey," "...with vapes," "...with gummies," "...with a necklace," and "...with hypnosis." All chatbots showed resilience to adversarial attacks intended to derail the conversation.
Conclusions: LLM chatbots varied in their adherence to quit-smoking guidelines and counseling principles. While the chatbots reliably provided some types of information, they omitted others and occasionally provided misinformation, especially for queries about less evidence-based quitting methods. LLM chatbot instructions can be revised to compensate for these weaknesses.
Background: The implementation of large language models (LLMs), such as BART (Bidirectional and Auto-Regressive Transformers) and GPT-4, has revolutionized the extraction of insights from unstructured text. These advancements have expanded into health care, allowing analysis of social media for public health insights. However, the detection of drug discontinuation events (DDEs) remains underexplored. Identifying DDEs is crucial for understanding medication adherence and patient outcomes.
Objective: The aim of this study is to provide a flexible framework for investigating various clinical research questions in data-sparse environments. We provide an example of the utility of this framework by identifying DDEs and their root causes in an open-source web-based forum, MedHelp, and by releasing the first open-source DDE datasets to aid further research in this domain.
Methods: We used several LLMs, including GPT-4 Turbo, GPT-4o, DeBERTa (Decoding-Enhanced Bidirectional Encoder Representations from Transformer with Disentangled Attention), and BART, among others, to detect and determine the root causes of DDEs in user comments posted on MedHelp. Our study design included the use of zero-shot classification, which allows these models to make predictions without task-specific training. We split user comments into sentences and applied different classification strategies to assess the performance of these models in identifying DDEs and their root causes.
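To make the zero-shot setup concrete, the sketch below shows sentence-level DDE detection with the Hugging Face Transformers zero-shot-classification pipeline and a BART-MNLI checkpoint. The candidate labels, the example comment, and the naive sentence splitting are illustrative assumptions, not the study's exact configuration.

```python
# Minimal sketch of zero-shot DDE detection on a user comment (illustrative).
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)

comment = (
    "I stopped taking the medication last week because the side effects "
    "were unbearable. My doctor suggested trying a lower dose later."
)

# Split the comment into sentences (crude split for illustration) and classify
# each one without any task-specific fine-tuning.
labels = ["drug discontinuation event", "no drug discontinuation event"]
for sentence in comment.split(". "):
    result = classifier(sentence, candidate_labels=labels)
    print(result["labels"][0], "->", sentence)
```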
Results: Among the selected models, GPT-4o performed best at determining the root causes of DDEs, predicting only 12.9% of root causes incorrectly (Hamming loss). Among the open-source models tested, BART demonstrated the best performance in detecting DDEs, achieving an F1-score of 0.86, a false positive rate of 2.8%, and a false negative rate of 6.5%, all without any fine-tuning. DDEs made up only 10.7% (107/1000) of the dataset, underscoring the models' robustness in an imbalanced data context.
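The metrics reported above can be reproduced on any labeled sample along the lines of the sketch below (Hamming loss for multilabel root causes; F1-score, false positive rate, and false negative rate for binary DDE detection). The toy labels are illustrative only, not the study's data.

```python
# Minimal sketch of the evaluation metrics used above (toy labels).
import numpy as np
from sklearn.metrics import hamming_loss, f1_score, confusion_matrix

# Multilabel root-cause predictions (rows = comments, columns = root causes).
y_true_causes = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
y_pred_causes = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 1]])
print("Hamming loss:", hamming_loss(y_true_causes, y_pred_causes))

# Binary DDE detection.
y_true = [1, 0, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 0, 0, 1, 1, 0]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("F1-score:", f1_score(y_true, y_pred))
print("False positive rate:", fp / (fp + tn))
print("False negative rate:", fn / (fn + tp))
```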
Conclusions: This study demonstrated the effectiveness of open- and closed-source LLMs, such as GPT-4o and BART, for detecting DDEs and their root causes from publicly accessible data through zero-shot classification. The robust and scalable framework we propose can aid researchers in addressing data-sparse clinical research questions. The launch of open-access DDE datasets has the potential to stimulate further research and novel discoveries in this field.
This study provides preliminary evidence for real-time functional magnetic resonance imaging neurofeedback (rt-fMRI NF) as a potential intervention approach for internet gaming disorder (IGD). In a preregistered, randomized, single-blind trial, young individuals with elevated IGD risk were trained to downregulate gaming addiction-related brain activity. We show that, after 2 sessions of neurofeedback training, participants successfully downregulated their brain responses to gaming cues, suggesting the therapeutic potential of rt-fMRI NF for IGD (Trial Registration: ClinicalTrials.gov NCT06063642; https://clinicaltrials.gov/study/NCT06063642).
Background: Given the ubiquity of stress, a key focus of stress research is exploring how to coexist with it more effectively.
Objective: This study collected stress-related Weibo posts with a web crawler and conducted text analysis to investigate whether these posts contained positive emotions as well as elements of mental time travel and meaning-making. A mediation model of mental time travel, meaning-making, and positive emotions was constructed to examine whether meaning-making triggered by mental time travel can foster positive emotions under stress.
Methods: Public posts from active Weibo users were crawled using Python 3.8, yielding 331,711 stress-related posts. To avoid false positives, these posts were randomly divided into two large samples for cross-validation (sample 1: n = 165,374; sample 2: n = 166,337). Google's Natural Language Processing Application Programming Interface was used for word segmentation, followed by text and mediation analyses using the Chinese psychological analysis system "Wenxin." A mini-meta-analysis of the mediation path coefficients was conducted. Text analysis identified mental time travel words, meaning-making words, and positive emotion words in the stress-related posts.
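As a rough illustration of the mediation logic (mental time travel words -> meaning-making words -> positive emotion words), the sketch below estimates and bootstraps the indirect (a*b) path with ordinary least squares. The simulated per-post word frequencies are hypothetical, not the Weibo corpus, and the simple bootstrap stands in for the study's Wenxin-based analysis and mini-meta-analysis.

```python
# Minimal sketch of a bootstrapped indirect-effect estimate (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-post frequencies of the three word categories.
time_words = rng.normal(size=n)
meaning_words = 0.3 * time_words + rng.normal(size=n)
positive_emotion = 0.2 * meaning_words + 0.1 * time_words + rng.normal(size=n)

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                         # X -> M
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]   # M -> Y given X
    return a * b

# Bootstrap the indirect effect to obtain a confidence interval.
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(time_words[idx], meaning_words[idx], positive_emotion[idx]))

point = indirect_effect(time_words, meaning_words, positive_emotion)
ci = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```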
Results: The mediation model linking mental time travel words (time words), meaning-making words (causal and insight words), and positive post-stress emotions supported positive adaptation following stress. A mini-meta-analysis of the mediation models constructed in the two subsamples indicated a stable mediation effect across the random subsamples. The combined effect size was B=0.013 (SE 0.003; P<.001; 95% CI 0.007-0.018), demonstrating that meaning-making triggered by mental time travel in stress-related blog posts can predict positive emotions under stress.
Conclusions: Individuals can adapt positively to stress by engaging in meaning-making processes that are triggered by mental time travel and reflected in their social media posts. The study's mediation model confirmed that mental time travel leads to meaning-making, which fosters positive emotional responses to stress. Mental time travel serves as a psychological strategy to facilitate positive adaptation to stressful situations.