Background: Clinical champions are often engaged to implement new evidence-based practices in health care settings. Previous research suggests that the mere presence of a champion does not guarantee successful implementation; therefore, we aimed to identify specific champion attributes and site-level factors that contribute to evidence-based practice adoption. During a Department of Veterans Affairs (VA) quality improvement program, we engaged site champions to implement an advance care planning evidence-based practice with seriously ill Veterans in VA home-based primary care (HBPC) and community nursing homes (CNHs).
Methods: We conducted interviews (N = 99) with champions and leadership at 11 HBPC programs and 6 VA CNH programs. Guided by the Tailored Implementation in Chronic Diseases (TICD) framework and Shea's conceptual model of champion impact, we analyzed interview data to examine champion characteristics and site factors associated with successful adoption of the evidence-based practice. Additionally, we categorized sites as successful or unsuccessful in adopting the evidence-based practice and used a Matrixed Multiple Case Study approach to compare champion characteristics and site factors between these groups.
Results: Eight HBPC programs (73%) and four CNH programs (67%) were successful. Champions at successful sites believed in the importance of eliciting and documenting Veterans' goals of care, were motivated and committed to participating in the project, and were dedicated to serving as champions. Champions at successful sites also engaged in champion activities beyond attending coaching calls, including supporting and educating peers. The degree of leadership support and the relative priority of the project varied; unsuccessful sites cited competing priorities and lower levels of leadership engagement.
Conclusion: Results suggest that champion belief in the importance of the evidence-based practice, commitment to the program, motivation to serve as a champion, and engagement with peers are characteristics common among champions at sites that successfully adopted the evidence-based practice. At the site level, the degree of leadership engagement and the priority given to implementing the evidence-based practice emerged as factors influencing success. These findings can help future health care interventions and programs identify clinical champions for implementing evidence-based practices.
Background: About one-third of U.S. youth are overweight, and most have at least one risk factor that increases their chance of developing cardiovascular or other chronic diseases. School- and research-based physical activity and healthy eating programs can reduce obesity and improve health outcomes; however, schools face many implementation challenges. Healthy School Recognized Campus (HSRC) bundles school- and research-based programs to improve their implementation and student health outcomes. This paper describes the protocol for a hybrid type 2 implementation-effectiveness, cluster dual randomized controlled trial that evaluates (aim 1) the effectiveness of the HSRC initiative for improving health behaviors and (aim 2) the impact of an implementation strategy - school-to-school mentoring - on HSRC's delivery.
Methods: Students in 4th through 9th grade (n = 500) at public schools (n = 20) across East and Central Texas will be randomized at the school level to determine the effectiveness of HSRC (vs. a waitlist control condition) on BMI z-score (primary outcome), physical activity measured via accelerometer, and skin carotenoids, a marker of fruit and vegetable intake (secondary outcomes). Assessments will occur at the start and end of one school year. Program implementers (n = 200) at schools will be randomized to assess the impact of the school-to-school mentoring strategy (vs. standard implementation) on HSRC's acceptability, appropriateness, and feasibility (co-primary outcomes). Assessments will occur at the start, middle, and end of one school year. The assessment at the end of the school year will also include a concurrent mixed-methods approach (QUAL + QUAN), guided by the Consolidated Framework for Implementation Research (CFIR), to evaluate the school-to-school mentoring strategy. For quantitative outcomes, a generalized linear model framework will be used to evaluate HSRC and the school-to-school mentoring strategy.
Discussion: This study's innovative dual randomized design allows for rigorous assessment of HSRC on effectiveness outcomes and evaluation of a school-to-school mentoring implementation strategy on implementation outcomes. If both HSRC and the school-to-school mentoring strategy have their hypothesized effects, we will be well positioned to address cardiovascular and other chronic disease risk factors among youth using a scalable, widely used approach within one of the largest health educator networks in the country.
Trial registration: ClinicalTrials.gov NCT07079995. Registered July 1, 2025.
Background: Learning Health Systems (LHSs) link research and health service delivery by generating evidence to guide decision-making and continuous improvement. Although various LHS frameworks exist, there is limited practical guidance on how LHSs can improve implementation. This systematic review aimed to consolidate existing guidance to identify the infrastructure (pillars) and improvement processes (steps) required to support an LHS cycle that improves the implementation (including scale-up or sustainment) of health programs, policies, or practices.
Methods: We searched five databases and grey literature for documents describing LHSs for improving implementation, scale-up, or sustainment of health interventions. Title, abstract, and full-text screening were conducted independently by two reviewers. Data for pillars and steps were synthesised separately: framework synthesis, informed by an existing LHS framework and refined iteratively, identified the pillars and steps, while thematic synthesis explored patterns within each.
Findings: From 12,151 records and 25 websites, 96 guidance documents were included. Six 'Pillars' were identified as important for operationalising LHS improvement processes: (1) interest holder engagement, (2) workforce development and capacity, (3) evidence surveillance and synthesis, (4) data collection and management, (5) governance and organisational processes, and (6) cross-cutting infrastructure. The improvement process comprised 10 'Steps' across three LHS phases: Phase 1 (Knowledge to Practice) - Identify and understand the problem; Decide and plan for action; Assess and build capacity; Pilot; Phase 2 (Practice to Data) - Execute the action; Collect data; Monitor and respond; Phase 3 (Data to Knowledge) - Analyse and evaluate; Disseminate; and Decide (continue, adapt, or cease improvement efforts). Despite the diversity in purpose and context across included documents, the consolidated steps and pillars were conceptually consistent, suggesting a shared foundation. Some contextual variation in emphasis and operationalisation was noted, particularly among guidance focused on scale-up or sustainment.
Conclusions: This review consolidated LHS pillars and improvement steps for better implementing, scaling, or sustaining health interventions. The findings provide a structured yet adaptable approach for operationalising implementation-focused learning cycles within LHSs, inform forthcoming WHO guidance, and support more systematic, responsive use of evidence in health systems.
Trial registration: The review protocol was prospectively registered on Open Science Framework (https://doi.org/10.17605/OSF.IO/V4JRC).
Background: Young adults (18-39 years) with type 2 diabetes have a greater loss of life expectancy and a higher risk of complications such as retinopathy, sexual health problems and foot disease than people diagnosed with type 2 diabetes later in life. Globally, the number of young adults with type 2 diabetes is increasing. Evidence describes both care practices (for example, prescribing) and improvement practices (for example, case management) that improve outcomes for people with type 2 diabetes. The National Diabetes Audit (NDA) provides feedback describing variation in both care and outcomes in young adults. Feedback facilitation can increase the effectiveness of audit feedback. Working collaboratively across researchers, audit providers, patients, clinicians and policy-makers, we have developed two feedback facilitation interventions deliverable at scale across England. We will evaluate whether theory-informed virtual educational materials with email support (low-intensity intervention) and/or virtual workshops (medium-intensity intervention) improve outcomes for young adults with type 2 diabetes.
Methods: An efficient, pragmatic cluster randomised controlled trial using routine data, with a theory-informed process evaluation and an economic evaluation. The interventions will be delivered alongside the NDA to primary care networks (small groups of general practices) across England. Our primary outcome will be HbA1c level at 16 months post-randomisation in young adults with type 2 diabetes and baseline HbA1c ≥ 58 mmol/mol. Secondary outcomes assess the proportion with an HbA1c below recommended thresholds, prescribing consistent with recommendations and delivery of recommended care processes. We will investigate impacts on equity. We will explore implementation, engagement and fidelity through interviews, observations, documentary analysis and surveys. An economic evaluation will estimate cost-effectiveness and budget impact.
Discussion: Our study embeds a further evaluation within the NDA, strengthening its role as a national diabetes learning health system. Our findings will have implications for intervention providers and funders seeking improvement in care and outcomes, and for our understanding of large-scale implementation strategies.
Trial registration: ISRCTN52205353. Registered 12 March 2025. https://www.isrctn.com/ISRCTN52205353.
Background: In Sweden, childhood overweight and obesity rates have risen significantly over recent decades, necessitating scalable interventions. The evidence-based Healthy School Start (HSS) program integrates school and family components to promote healthy habits and prevent overweight and obesity among children. The IMPROVE trial aimed to compare the effect of two tailored implementation strategy bundles (Basic and Enhanced) on fidelity to the HSS program.
Methods: A hybrid type III cluster-randomized trial with two parallel arms was conducted in 45 schools (clusters) in three municipalities in Stockholm, Sweden, from August 2021 to June 2024. The program was implemented in two consecutive cohorts over two academic school years. Fidelity was measured with an adherence score (0-4) and parents' responsiveness (1-5) to the four intervention components (health brochure, motivational interviewing health talk, classroom module and type 2 diabetes risk test). Data were analyzed using mixed-effects linear and logistic regression models.
Key findings: A total of 946 parents and 655 children participated. Overall fidelity, assessed as an adherence score, was around 75%, with most components implemented as expected. The adherence score in the Basic bundle did not differ significantly from that in the Enhanced implementation strategy bundle (β = 0.01, p = 0.95, 95% CI: -0.24, 0.25). Two of the four Enhanced implementation strategies, educational outreach visits and networking between schools and primary health care, were not delivered, mainly due to lack of interest and time among personnel. Parents born in the Nordic countries had twice the odds (p < 0.001, 95% CI: 1.14-3.43) of completing the motivational interviewing health talk compared with those born outside the Nordic region.
Discussion: Enhancing the Basic implementation bundle with additional strategies did not consistently improve adherence or responsiveness. However, improvements observed over time underscore the importance of targeted support during the initial implementation year. Additional motivational actions may be needed in schools with a high proportion of children whose parents were born outside the Nordic region. These findings highlight the complex interplay between context and implementation success, emphasizing the need to adapt strategies over time to optimize their effectiveness rather than merely adding more. Moreover, the essentially null findings point to broader methodological challenges in implementation science, particularly how to prioritize among determinants and how to select and tailor strategies.
Trial registration: ClinicalTrials.gov, Unique Protocol ID: NCT04984421. Registered July 30, 2021, https://register.clinicaltrials.gov/.
Background: Process evaluations are considered an essential component of conducting and reporting complex interventions, such as those studied in randomised controlled trials (RCTs) of implementation interventions, because they help explain intervention effects. Given the growth of RCTs of implementation interventions with embedded process evaluations, it is timely to review the explanatory insights generated to date. This scoping review examines how process evaluations of RCTs of implementation interventions are conducted and what they reveal about how and why implementation interventions achieve (or fail to achieve) their intended impacts.
Methods: The scoping review was conducted in accordance with the JBI methodology. MEDLINE, CINAHL, Scopus, Web of Science and PsycINFO were searched. Articles were screened and data were extracted by two independent reviewers.
Results: Of the 5857 studies screened, 81 process evaluations were included. Two process evaluations reported on the same trial, resulting in a final sample of n = 80 independent studies. Nearly half of the studies (48%) reported on implementation trials with no demonstrated effect on the primary outcome (null trials), while 32 (40%) reported on trials where the intervention group demonstrated positive changes in the primary outcome (positive trials). Seven studies (9%) had mixed findings and three (4%) had no reported trial outcomes. When comparing process evaluation findings from positive and null trials, few discernible patterns clearly explained the differences in outcomes. Education and training was the most common strategy used in implementation interventions, yet one of the most commonly reported implementation barriers related to knowledge and self-efficacy, which could indicate a misalignment. Availability of resources was the most prominent barrier for both positive and null trials, and there was little evidence that implementation interventions were tailored to context despite prominent barriers and enablers at the inner and outer setting levels.
Conclusions: Process evaluation studies embedded in RCTs of implementation interventions are recommended as an important method to explain whether and how interventions produce their intended effect. This review suggests a need to further optimise the design and evaluation of implementation interventions, including the conduct and reporting of process evaluations, to continue advancing the science and practice of implementation.
Trial registration: Protocol published on the Open Science Framework, May 10, 2022 (Collyer et al., Process evaluations in randomised trials of implementation interventions in health care: a scoping review protocol. Open Science Framework, 2022).
Background: Policymakers need research-informed guidance on leveraging national government funding to promote evidence-based practice (EBP) implementation; however, empirical studies of policy financing strategies in implementation science remain limited. Major investments are already being made. Starting in 2012, the U.S. Substance Abuse and Mental Health Services Administration (SAMHSA) funded state substance use service agencies to implement EBPs for youth substance use. We examined 19 states funded to implement the Adolescent Community Reinforcement Approach (A-CRA), an exemplar EBP selected by most states. Using the Exploration, Preparation, Implementation, Sustainment (EPIS) Framework, we sought to explain state-level variation in A-CRA reach (defined as the proportion of A-CRA-certified providers) and to identify policy implications for improving EBP financing strategies.
Methods: We conducted an explanatory sequential mixed-method (QUAN→QUAL) comparative case study, treating each state as a case. States were categorized as achieving high, medium, or low reach during their grant periods using A-CRA certification records and state demographic data. We then synthesized available data (i.e., interviews with 33 state agency administrators, grant administrative records, and other documents describing A-CRA implementation) to summarize the grant activities completed and their quality, as well as factors potentially influencing reach in each state. Finally, we compared and contrasted state cases to identify policy implications using pattern matching techniques.
Results: We characterized the 19 states' reach levels as high (n = 7), medium (n = 5), and low (n = 7) and identified an average of five grant-related activities completed per state, the most common being A-CRA training for treatment organizations. Six states were case anomalies (e.g., achieving high reach despite low quantity and quality of activities). Most notably, we found that high-reach states had more specific, intentional, and explicit A-CRA implementation requirements for treatment organizations than did low- and medium-reach states. States were also more successful in achieving A-CRA reach when they reported proactively addressing implementation barriers related to provider turnover and state leadership buy-in and support.
Conclusions: Our mixed-method comparative case study advances policy-focused implementation research related to EBP financing strategies, demonstrating how examination of large-scale real-world funding initiatives can produce generalizable lessons. Our findings have implications for how future funding initiatives can facilitate EBP delivery to maximize reach.

