Background: Diabetes mellitus (DM) remains a critical public health issue in Hong Kong. Although self-care behaviors help promote health among patients with DM, adherence remains suboptimal. With the development of modern technologies, more attention should be paid to eHealth literacy.
Objective: This study aims to assess the level of eHealth literacy among patients with DM and examine its association with self-care and health outcomes.
Methods: A cross-sectional study was conducted among patients with type 2 DM from the DM clinic of a public hospital in Hong Kong. Data on eHealth literacy, self-care, self-care self-efficacy, diabetes distress, glycated hemoglobin (HbA1c) control, and sociodemographic information were collected. Multivariable regression analyses were performed, adjusting for relevant sociodemographic and medical variables.
Results: Among the 427 patients with DM recruited, around two-thirds (65.1%) were classified as having a high level of eHealth literacy. Compared to those with lower eHealth literacy, participants with higher eHealth literacy demonstrated significantly higher levels of self-care (P<.001) and self-care self-efficacy (P<.001) and lower levels of diabetes distress (P<.001). Higher eHealth literacy was also associated with greater odds of achieving ideal HbA1c control (<7%) in unadjusted analyses (odds ratio 1.90, 95% CI 1.15-2.81); however, this association was not statistically significant after adjustment for sociodemographic and medical covariates (adjusted odds ratio 1.57, 95% CI 0.99-2.52; P=.07).
Conclusions: This study evaluated eHealth literacy levels among patients with DM and examined the associations between eHealth literacy and health outcomes (eg, self-care, self-care self-efficacy, diabetes distress, and HbA1c control). Assessing eHealth literacy in patients with DM could help identify those who are vulnerable to poorer health outcomes. Promoting eHealth literacy among patients with DM may therefore be an important component of diabetes care.
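For readers interpreting adjusted odds ratios such as those reported above, a logistic regression coefficient β with standard error SE converts to an odds ratio exp(β) with 95% CI exp(β ± 1.96·SE). A minimal sketch with hypothetical numbers (β=0.45 and SE=0.24 are illustrative values, not taken from this study):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic regression coefficient and its standard
    error into an odds ratio with a 95% confidence interval."""
    or_point = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return or_point, lower, upper

# Hypothetical coefficient and standard error (illustrative only)
or_point, lo, hi = odds_ratio_ci(0.45, 0.24)
# or_point ≈ 1.57, with 95% CI spanning ≈ 0.98 to ≈ 2.51
```

A CI that crosses 1 on the odds-ratio scale corresponds to a coefficient CI that crosses 0, which is why such an association is not statistically significant at the 5% level.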
Unlabelled: In the context of digital health, just-in-time adaptive interventions (JITAIs) are nascent precision medicine systems that can extend personalized health care support to everyday life. A challenge in designing JITAIs is that personalized support often involves sophisticated decision-making algorithms. These decision-making algorithms can require numerous nontrivial design decisions that must be made between successive JITAI deployments (eg, hyperparameter selection for an artificial intelligence algorithm). Making design decisions between deployments, rather than during deployment, ensures intervention fidelity and enhances the ability to replicate results. Yet, each deployment can be costly, precluding the use of A/B testing for every design decision. How should design decisions be made strategically between JITAI deployments? This paper introduces "digital twins for just-in-time adaptive interventions (JITAI-Twins)" to address this question. JITAI-Twins are "digital twins of a subpopulation" (term used in the 2023 National Academies workshop proceedings on digital twins). JITAI-Twins are used to virtually simulate the potential outcomes of a JITAI's design decisions for an upcoming deployment. Based on simulation results, design decisions are made for the deployed JITAI. To continually improve the JITAI, data collected during deployment are used to update the JITAI-Twin, and this bidirectional feedback between deployments and simulation environments continues. JITAI-Twins are thus "fit-for-purpose" (term used in the National Academies 2024 consensus report on digital twins) instantiations of the digital twin concept. In this paper, we elucidate the specifics and design process of JITAI-Twins, with examples of prior use in clinical settings.
JITAI-Twins highlight continuity over the course of a JITAI's optimization and continual improvement, emphasizing the need for bidirectional feedback between versions of a simulation environment and a JITAI's deployments.
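The simulate-deploy-update cycle described above can be sketched schematically. Everything in this sketch is a hypothetical illustration (the twin model, the update rule, and the design names are assumptions for exposition, not part of the JITAI-Twins proposal):

```python
def twin_predict(twin, design):
    # Hypothetical twin: predicted outcome of deploying a candidate design
    return twin["baseline"] + twin["effect"].get(design, 0.0)

def choose_design(twin, candidates):
    # Simulate each candidate design in the twin; pick the best predicted one
    return max(candidates, key=lambda d: twin_predict(twin, d))

def update_twin(twin, design, observed, rate=0.5):
    # Feed deployment data back: nudge the twin toward the observed outcome
    error = observed - twin_predict(twin, design)
    twin["effect"][design] = twin["effect"].get(design, 0.0) + rate * error

# One simulate -> deploy -> update cycle with made-up numbers
twin = {"baseline": 0.0, "effect": {"A": 0.2, "B": 0.1}}
chosen = choose_design(twin, ["A", "B"])   # twin predicts "A" performs best
update_twin(twin, chosen, observed=0.6)    # deployment outcome feeds back
```

Repeating this loop across deployments is what gives the bidirectional feedback between versions of the simulation environment and the deployed JITAI.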
Background: Artificial intelligence (AI) has the potential to support medicines information services. However, a comprehensive mapping of its use, particularly within pharmacy practice and in the context of digital health inequalities, is lacking.
Objective: This scoping review mapped existing evidence on AI-driven medicines information, focusing on the accuracy and completeness of AI-generated content, the role of health care professionals (HCPs), particularly pharmacists, and the impact of digital health inequalities on AI adoption.
Methods: This scoping review was informed by the methodological framework proposed by Levac et al, which includes modifications to the original Arksey and O'Malley scoping review framework. A systematic search was conducted across MEDLINE (Ovid), PubMed Central, Cochrane Library, CINAHL Plus (EBSCOhost), International Pharmaceutical Abstracts (IPA), Web of Science, and Google Scholar from inception to January 2025 (the search cutoff date). Peer-reviewed studies in English evaluating the role of AI in medicines information across any health care setting (including patient homes) were included. The results are reported in accordance with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines.
Results: A total of 1911 citations were identified, with 14 studies meeting the inclusion criteria. AI tools showed promise in supporting medicines information services but were found to have limitations in accuracy, particularly when applied to complex clinical queries. Pharmacists were the most engaged HCPs in the evaluation of AI-generated content. Only 3 studies explored digital health inequalities in the context of AI and access to medicines information. Reported barriers included misinformation risks, regulatory gaps, and digital health inequalities, particularly infrastructure limitations and disparities in digital literacy, which affected AI adoption.
Conclusions: AI-driven tools show promise in supporting medicines information services, but concerns remain. HCPs, particularly pharmacists, play a critical role in AI evaluation and validation, yet their involvement remains ill-defined. Addressing digital health inequalities is essential for effective AI integration. Future research should focus on identifying and minimizing digital health inequalities, as well as evidence-informed AI implementation in medicines information services.
Background: Surveys show that many people are willing to use generative artificial intelligence (AI) for health questions. Prior research has largely focused on chatbot accuracy, with some studies finding that both physicians and consumers overwhelmingly prefer chatbot-generated text over physician responses.
Objective: This study aimed to characterize and compare the emotional content of responses from physicians and 2 AI chatbots (OpenAI's ChatGPT and Google's Gemini) and to assess differences in reading level and use of medical disclaimers.
Methods: A public, patient-deidentified telehealth website was used to compile 100 physician-answered questions. The same questions were posed to both chatbots between May 18 and 19, 2025. Two coders classified the emotional content of each sentence using a predefined codebook and reviewed for agreement. Emotions were ranked as primary, secondary, and tertiary by the proportion of sentences classified as each emotion per response. Multinomial logistic regression compared emotional rankings using physician responses as the reference. Word count, Flesch Reading Ease, and Flesch-Kincaid Grade Level were analyzed via ANOVA with the Tukey honestly significant difference test. Disclaimer use was compared between chatbots using a χ² test.
Results: Primary emotions were overwhelmingly neutral, except for one response from each chatbot in which anger was primary. For secondary emotions, the odds ratio of hope was 80.28% (95% CI 37.71%-93.76%) lower for ChatGPT, while the odds ratio of fear was 3.29 (95% CI 1.44-7.49) times higher for Gemini. For tertiary emotions, the odds ratio of compassion was 1.94 (95% CI 1.06-3.54) times higher, and the odds ratio of having no tertiary emotion was 84.33% (95% CI 64.72%-93.04%) lower for Gemini. Gemini responses averaged 889.1 (SD 305.7) words, ChatGPT 476.5 (SD 109.5), and physicians 193.5 (SD 113.6). Gemini had the lowest average Flesch Reading Ease score at 39.9 (SD 8.8), followed by ChatGPT at 45.8 (SD 12.8), while physicians had the highest at 51.9 (SD 13.6). Gemini had the highest average Flesch-Kincaid Grade Level at 11.3 (SD 1.5), followed by ChatGPT at 9.9 (SD 1.9), and physicians at 9.2 (SD 2.4). Gemini was significantly more likely to include a disclaimer than ChatGPT (χ²₁=49.2; P<.001).
Conclusions: Chatbot responses were significantly (P<.001) longer and more difficult to read than physician responses and contained a wider range of emotions. Qualitatively, chatbot responses were more varied in their presentation as well as in the breadth of the emotions themselves. The findings of this study could be used to inform more emotionally connected physician responses to patient message queries.
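For reference, the two readability metrics compared in this study are closed-form functions of word, sentence, and syllable counts. The formulas below are the standard published Flesch and Flesch-Kincaid coefficients; the example counts are hypothetical:

```python
def flesch_reading_ease(words, sentences, syllables):
    # Higher scores indicate easier text (standard Flesch formula)
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    # Maps the same counts onto a US school-grade level
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hypothetical passage: 100 words, 5 sentences, 150 syllables
fre = flesch_reading_ease(100, 5, 150)    # ≈ 59.6 (fairly easy)
fkgl = flesch_kincaid_grade(100, 5, 150)  # ≈ 9.9 (about 10th grade)
```

In practice the hard part is syllable counting, which published tools approximate heuristically; the formulas themselves are trivial once the counts exist.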
[This corrects the article DOI: 10.2196/71188.]
Background: Digital addiction, including internet, smartphone, and gaming addiction, has emerged as a significant global health concern. Although a wide range of interventions has been evaluated, the fragmented and siloed nature of existing meta-analyses limits a clear understanding of the comparative effectiveness of different interventions across addiction subtypes.
Objective: This umbrella review and meta-meta-analysis aimed to estimate the overall effectiveness of interventions for digital addiction and examine differential effects according to addiction subtype, intervention modality, study design, and control condition.
Methods: A systematic search of 5 electronic databases (PubMed, Web of Science, Scopus, APA PsycInfo, and the Cochrane Library) was conducted from inception to June 24, 2025. Eligible studies were systematic reviews with meta-analyses evaluating interventions for internet, smartphone, or gaming addiction. Random-effects models were applied to synthesize standardized mean differences (SMDs). Methodological quality and certainty of evidence were assessed using A Measurement Tool to Assess Systematic Reviews 2 and the Grading of Recommendations Assessment, Development, and Evaluation framework.
Results: A total of 29 meta-analyses, comprising 52 effect sizes and 66,530 participants, were included (I²=95.13%). Overall, interventions demonstrated a large and statistically significant effect in reducing digital addiction symptoms (SMD=-1.44, 95% CI -1.67 to -1.21; P=.003). Subgroup analyses indicated that the largest effects were observed for internet addiction (SMD=-1.70, 95% CI -1.99 to -1.42), followed by gaming addiction (SMD=-0.82, 95% CI -1.09 to -0.56) and smartphone addiction (SMD=-0.80, 95% CI -1.39 to -0.21). Exercise-based interventions, particularly those integrated with psychological approaches, showed large effect sizes (SMD=-3.14, 95% CI -4.30 to -1.97); however, this finding was based on a very limited number of effect sizes and should be interpreted cautiously. In addition, randomized controlled trials yielded larger effects than mixed study designs, and no-intervention controls were associated with larger effect sizes than mixed control conditions. The certainty of evidence was generally low.
Conclusions: Interventions for digital addiction are effective, although their magnitude of benefit varies by addiction subtype and intervention modality. These findings support the use of tailored and multimodal intervention strategies while highlighting the need for more rigorous, high-quality, and balanced evidence across different forms of digital addiction.
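The random-effects pooling used in meta-analyses like the one above can be illustrated with the classic DerSimonian-Laird estimator, which is one common choice (the review does not specify its estimator, and the two effect sizes and variances below are made up for the example):

```python
def dersimonian_laird(effects, variances):
    """Pool effect sizes (eg, SMDs) under a DerSimonian-Laird
    random-effects model; returns (pooled estimate, tau^2, I^2%)."""
    # Fixed-effect inverse-variance weights and weighted mean
    w = [1 / v for v in variances]
    sw = sum(w)
    mean_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sw

    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (yi - mean_fe) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights add tau^2; I^2 summarizes heterogeneity
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, tau2, i2

# Two hypothetical SMDs with equal within-study variances
pooled, tau2, i2 = dersimonian_laird([-1.0, -2.0], [0.1, 0.1])
# pooled = -1.5, tau2 = 0.4, I^2 = 80%
```

A high I² (such as the 95.13% reported above) means most observed variability reflects between-study heterogeneity rather than sampling error, which is why subgroup analyses by addiction subtype and modality matter.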