Background: At the end of 2023, Bayer AG launched its own internal large language model (LLM), MyGenAssist, based on ChatGPT technology, to overcome data privacy concerns. Such a tool may reduce the tedium of repetitive and recurrent tasks and free up time that could then be dedicated to activities with higher added value. Although there is ongoing worldwide debate on whether artificial intelligence should be integrated into pharmacovigilance, the medical literature provides little data on LLMs and their daily application in such a setting. Here, we studied how this tool could improve the case documentation process, which is an obligation for marketing authorization holders under European and French good vigilance practices.
Objective: The aim of the study is to test whether the use of an LLM could improve the pharmacovigilance documentation process.
Methods: MyGenAssist was trained to draft templates for case documentation letters meant to be sent to the reporters. The information provided within each template varies by case; these data come from a table sent to the LLM. We then measured the time spent on each case over a period of 4 months (2 months before using the tool and 2 months after its implementation). A multiple linear regression model was built with the time spent on each case as the dependent variable and, as explanatory variables, all parameters that could influence this time (use of MyGenAssist, type of recipient, number of questions, and user). To test whether the tool affected the process itself, we compared the recipients' response rates with and without the use of MyGenAssist.
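The regression described above can be sketched in a few lines of Python. The variable names and synthetic data below are illustrative assumptions, not the study's actual dataset, and the per-user dummy variables are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical explanatory variables mirroring the model in the study:
# tool use (0/1), recipient type (0 = patient, 1 = physician), number of questions.
uses_tool = rng.integers(0, 2, n)
physician = rng.integers(0, 2, n)
n_questions = rng.integers(1, 6, n)

# Synthetic dependent variable: baseline handling time (minutes) minus a
# saving when the tool is used, plus noise.
time_spent = 30 - 7 * uses_tool + 3 * physician + 2 * n_questions + rng.normal(0, 1, n)

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones(n), uses_tool, physician, n_questions])
beta, *_ = np.linalg.lstsq(X, time_spent, rcond=None)
# beta[1] estimates the time saving attributable to tool use.
```

In practice a statistics package (e.g., statsmodels) would also report the P values and adjusted R2 cited in the Results; the least-squares fit above only recovers the coefficients.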
Results: MyGenAssist yielded an average time saving of 23.3% (95% CI 13.8%-32.8%) per case (P<.001; adjusted R2=0.286), which could represent an average of 10.7 (SD 3.6) working days saved each year. The response rate was not modified by the use of MyGenAssist (20/48, 42% vs 27/74, 36%; P=.57), whether the recipient was a physician or a patient. No significant difference was found in the time recipients took to answer (mean 2.20, SD 3.27 days vs mean 2.65, SD 3.30 days after the last contact attempt; P=.64). Implementing MyGenAssist for this activity required only a 2-hour training session for the pharmacovigilance team.
Conclusions: Our study is the first to show that a ChatGPT-based tool can improve the efficiency of a good practice activity without requiring lengthy training for the staff involved. These first encouraging results could be an incentive for implementing LLMs in other processes.
Background: eHealth literacy has increasingly emerged as a critical determinant of health, highlighting the importance of identifying its influencing factors; however, these factors remain unclear. Numerous studies have explored this concept across various populations, presenting an opportunity for a systematic review and synthesis of the existing evidence to better understand eHealth literacy and its key determinants.
Objective: This study aimed to provide a systematic review of factors influencing eHealth literacy and to examine their impact across different populations.
Methods: We conducted a comprehensive search of the PubMed, CNKI, Embase, Web of Science, Cochrane Library, CINAHL, and MEDLINE databases from inception to April 11, 2023. We included all studies that reported eHealth literacy status measured with the eHealth Literacy Scale (eHEALS). Methodological validity was assessed with the standardized Joanna Briggs Institute (JBI) critical appraisal tool for cross-sectional studies. Meta-analytic techniques were used to calculate the pooled standardized β coefficients with 95% CIs, while heterogeneity was assessed using I2, the Q test, and τ2. Meta-regressions were used to explore potential moderators, including participants' characteristics, internet use (measured by time or frequency), and country development status. Predictors of eHealth literacy were integrated according to the Literacy and Health Conceptual Framework and the Technology Acceptance Model (TAM).
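The pooling arithmetic behind these meta-analytic techniques (inverse-variance weighting, Cochran's Q, I2, and the DerSimonian-Laird τ2 estimate) can be sketched as follows; the effect sizes and standard errors are made-up illustrative values, not data from the review:

```python
import numpy as np

# Hypothetical standardized β coefficients and standard errors from 4 studies.
beta = np.array([0.05, 0.25, 0.10, 0.30])
se = np.array([0.03, 0.04, 0.05, 0.03])

w = 1 / se**2                               # inverse-variance (fixed-effect) weights
pooled_fixed = np.sum(w * beta) / np.sum(w)

# Cochran's Q and I2 quantify between-study heterogeneity.
q = np.sum(w * (beta - pooled_fixed) ** 2)
df = len(beta) - 1
i2 = max(0.0, (q - df) / q) * 100

# DerSimonian-Laird estimate of tau2 feeds the random-effects weights.
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)
w_re = 1 / (se**2 + tau2)
pooled_random = np.sum(w_re * beta) / np.sum(w_re)
```

With heterogeneous inputs like these, the random-effects pooled estimate weights the studies more evenly than the fixed-effect one, which is why the two can differ noticeably when I2 is high.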
Results: In total, 17 studies met the inclusion criteria for the meta-analysis. Key factors influencing higher eHealth literacy were identified and classified into 3 themes: (1) actions (internet usage: β=0.14, 95% CI 0.102-0.182, I2=80.4%), (2) determinants (age: β=-0.042, 95% CI -0.071 to -0.020, I2=80.3%; ethnicity: β=-2.613, 95% CI -4.114 to -1.112, I2=80.2%; income: β=0.206, 95% CI 0.059-0.354, I2=64.6%; employment status: β=-1.629, 95% CI -2.323 to -0.953, I2=99.7%; education: β=0.154, 95% CI 0.101-0.208, I2=58.2%; perceived usefulness: β=0.832, 95% CI 0.131-1.522, I2=68.3%; and self-efficacy: β=0.239, 95% CI 0.129-0.349, I2=0.0%), and (3) health status factor (disease: β=-0.177, 95% CI -0.298 to -0.055, I2=26.9%).
Conclusions: This systematic review, guided by the Literacy and Health Conceptual Framework model, identified key factors influencing eHealth literacy across 3 dimensions: actions (internet usage), determinants (age, ethnicity, income, employment status, education, perceived usefulness, and self-efficacy), and health status (disease). These findings provide valuable guidance for designing interventions to enhance eHealth literacy.
Trial registration: PROSPERO CRD42022383384; https://www.crd.york.ac.uk/PROSPERO/view/CRD42022383384.
Digital maturity assessments can inform strategic decision-making. However, national approaches to assessing the digital maturity of health systems are in their infancy, and there is limited insight into the context and processes associated with such assessments. This viewpoint article describes and compares national approaches to assessing the digital maturity of hospitals. We reviewed 5 national approaches to assessing the digital maturity of hospitals in Queensland (Australia), Germany, the Netherlands, Norway, and Scotland, exploring context, drivers, and approaches to measuring digital maturity in each country. We observed a common focus on interoperability, and assessment findings were used to shape national digital health strategies. Indicators were broadly aligned, but 4 of 5 countries developed their own tailored indicator sets. Key topic areas across countries included interoperability, capabilities, leadership, governance, and infrastructure. Analysis of indicators was centralized, but data were shared with participating organizations. Only 1 setting conducted an academic evaluation. Major challenges of digital maturity assessment included the high cost and time required for data collection, questions about measurement accuracy, difficulties in consistent long-term tracking of indicators, and potential biases due to self-reporting. We also observed tensions between the practical feasibility of the process and the depth and breadth demanded by the complexity of the topic, as well as between national and local data needs. Several key challenges in assessing hospitals' digital maturity nationally influence the validity and reliability of the output. These need to be explicitly acknowledged when making decisions informed by such assessments and monitored over time.
Background: A meta-analysis is a formal, quantitative study design used in epidemiology and clinical medicine to systematically integrate and synthesize findings from multiple independent studies. This approach not only enhances statistical power but also enables the exploration of effects across diverse populations and helps resolve controversies arising from conflicting studies.
Objective: This study aims to develop and implement a user-friendly tool for conducting meta-analyses, addressing the need for an accessible platform that simplifies the complex statistical procedures required for evidence synthesis while maintaining methodological rigor.
Methods: The platform available at MetaAnalysisOnline.com enables comprehensive meta-analyses through an intuitive web interface, requiring no programming expertise or command-line operations. The system accommodates diverse data types, including binary (total and event numbers), continuous (mean and SD), and time-to-event data (hazard ratios with CIs), while implementing both fixed-effect and random-effects models using established statistical approaches such as DerSimonian-Laird, Mantel-Haenszel, and inverse variance methods for effect size estimation and heterogeneity assessment.
Results: In addition to statistical tests, graphical representations can be generated, including forest, funnel, and z score plots. A forest plot is highly effective in illustrating heterogeneity and pooled results; a funnel plot can reveal the risk of publication bias; and a z score plot provides a visual assessment of whether more research is needed to reach a reliable conclusion. All the discussed models and visualization options are integrated into the registration-free web-based portal. To demonstrate MetaAnalysisOnline.com's capabilities, we examined treatment-related adverse events in patients with cancer receiving perioperative anti-PD-1 immunotherapy through a systematic review encompassing 10 studies with 8099 total participants. The meta-analysis revealed that anti-PD-1 therapy doubled the risk of adverse events (risk ratio 2.15, 95% CI 1.39-3.32), with significant between-study heterogeneity (I2=95%) and publication bias detected by the Egger test (P=.02). While these findings suggest increased toxicity associated with anti-PD-1 treatment, the z score analysis indicated that additional studies are needed for definitive conclusions.
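The risk ratio with its 95% CI, as reported above for a pooled result, is computed per study on the log scale, where the sampling distribution is approximately normal. A minimal sketch with hypothetical 2x2 counts (not the trial data) follows:

```python
import math

# Hypothetical counts for one study: events / totals in each arm.
events_trt, n_trt = 40, 100
events_ctl, n_ctl = 20, 100

# Risk ratio: ratio of the event proportions in the two arms.
rr = (events_trt / n_trt) / (events_ctl / n_ctl)

# The CI is built on the log scale using the standard error of log(RR),
# then exponentiated back to the ratio scale.
log_rr = math.log(rr)
se = math.sqrt(1 / events_trt - 1 / n_trt + 1 / events_ctl - 1 / n_ctl)
ci_low = math.exp(log_rr - 1.96 * se)
ci_high = math.exp(log_rr + 1.96 * se)
```

Tools such as MetaAnalysisOnline.com then pool these per-study log risk ratios (e.g., by inverse-variance or Mantel-Haenszel weighting) to obtain the summary estimate shown in a forest plot.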
Conclusions: In summary, this web-based tool aims to fill a gap for clinical and life science researchers by offering a user-friendly platform for swift and reproducible meta-analyses of clinical and epidemiological studies.