Background: Electronic health record (EHR) systems have been used to support the implementation of evidence-based care. Growing evidence suggests that EHR systems can also support the de-implementation of low-value care; however, this literature has not yet been reviewed. This scoping review aimed to: 1) summarize how EHR-based interventions have been used in primary care settings to de-implement low-value care, 2) summarize the effectiveness of these interventions, 3) describe the de-implementation strategies and outcome measures that have been used, and 4) describe the facilitators and barriers that influence EHR-based de-implementation interventions.
Methods: We searched MEDLINE, CINAHL, Embase, and Web of Science on January 19, 2024, for peer-reviewed papers on EHRs and de-implementation in primary care. We inductively developed themes describing how the EHR was used to support de-implementation. We mapped de-implementation strategies to a previously published taxonomy of implementation strategies, de-implementation outcomes to a previously published taxonomy of such outcomes, and facilitators and barriers to the Consolidated Framework for Implementation Research. We stratified study findings by EHR intervention type.
Results: We included 50 studies. EHRs supported de-implementation through four intervention types: 1) EHR alerts, 2) order sets and preference lists, 3) documentation templates, and 4) communication tools among the care team. The proportion of studies showing favorable effectiveness in reducing low-value care ranged from 16.7% (communication tools) to 50.0% (documentation templates). Common strategies supporting EHR-based de-implementation interventions included auditing and providing feedback, conducting educational meetings, and distributing educational materials. Twenty-two studies reported some assessment of de-implementation outcomes. Numerous multi-level facilitators and barriers were identified for most EHR intervention types.
Conclusions: This scoping review identified multiple EHR-based interventions that health systems use to support de-implementation and their effectiveness. Although promising, the evidence base is limited by the general lack of frameworks used for intervention development and de-implementation, unclear theoretical rationale to support the use of selected de-implementation strategies, and the unclear validity of de-implementation outcomes used. Additional research is needed to develop and validate frameworks and outcomes for de-implementation to strengthen the evidence base.
Trial registration: None.
Background: Relationships are foundational to successful implementation of innovations in healthcare. In genomic medicine, multidisciplinary teams with good communication are most effective at providing safe genomic care; however, working together can be challenging due to the distinct work cultures, worldviews, and clinical approaches held by different professional groups. In this paper, we explored the various strategies used to build relationships and foster collaboration as part of a Change program that supported the use of genomic testing and counselling in specialty areas.
Methods: Qualitative interviews were conducted with 36 participants across 3 professional categories (genetic counsellors, medical specialists, and nurses/allied health workers) to explore their experiences of working together in innovative models of genomic care across 7 clinical specialties. Data were analysed through a two-stage inductive and deductive coding process: first identifying categories based on the attributes of the Relational Theory, and then coding against the Theoretical Model for Trusting Relationships and Implementation (the 'Model').
Results: Eight of the nine relationship-building and relationship-strengthening strategies described in the 'Model' were identified in the interview data, comprising three technical strategies and five relational strategies. Interconnections were present between relational and technical strategies, as well as within the relational category, with some strategies serving to reinforce one another. Two additional strategies used at the inter-professional level, negotiating boundary work and accepting differences, emerged from the interview data but were not included in the 'Model.' Specifically, genetic counsellors either reconstructed the professional boundary by taking on tasks beyond their role or adopted a boundary-preserving strategy to balance the social order within the team.
Conclusions: Our study highlights how relationship-building strategies can be leveraged in genomic multidisciplinary teams and can inform decisions about creating conditions that promote positive relationships and relational competence, ultimately leading to successful implementation of innovations into organisations/systems.
Background: The Normalization Process Theory (NPT) is increasingly used for evaluating and understanding implementation processes of complex care interventions. One key tool for applying the NPT in research and practice is the NoMAD questionnaire, which offers a structured approach to examining the four constructs that, according to the NPT, are central to implementation and normalisation processes. We aimed to evaluate the psychometric properties of the Swedish version, the S-NoMAD.
Methods: Secondary analysis was performed on pooled S-NoMAD survey data from six implementation studies in different health and social care contexts. The NPT factor structure was tested by confirmatory factor analysis (CFA). Internal construct reliability was tested using Cronbach's alpha. Validity was assessed by evaluating CFA model fit using the Comparative Fit Index, the Tucker-Lewis Index, the root mean square error of approximation, and the standardised root mean square residual. Pearson correlations between the latent constructs and general questions about the intervention were calculated.
Results: The CFA estimation results indicated that the four-factor model implied by the NPT fit the data reasonably well. The factor loadings were of good size, and the fit indices did not suggest a mis-specified model. Good internal construct validity was demonstrated, as indicated by good model fit to the NPT four-construct model and acceptable to good internal reliability. External validity was also demonstrated.
Conclusions: The CFA results indicate that the S-NoMAD has good psychometric properties for capturing the perceptions of people involved in various Swedish implementation studies conducted in both health and social care contexts, demonstrating its general applicability. They show that the S-NoMAD, unlike the majority of instruments for evaluating implementation processes, is not context- and intervention-specific. The findings highlight the utility of the S-NoMAD and show that it meets some important criteria for pragmatic measures. Further studies of different interventions implemented in diverse contexts are warranted to clarify the meaning of the magnitude of NoMAD scores, and thus the instrument's potential value as a tool for assessing implementation strategies, and to examine psychometric properties beyond construct validity and internal construct reliability, such as test-retest reliability and responsiveness in longitudinal studies.
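For reference, two of the statistics underpinning the reliability and fit evaluation reported above are defined below; these are the standard textbook formulas, not quantities specific to the S-NoMAD data. Cronbach's alpha for a k-item scale is
\[
  \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{i}}{\sigma^{2}_{\mathrm{total}}}\right),
\]
where \(\sigma^{2}_{i}\) is the variance of item \(i\) and \(\sigma^{2}_{\mathrm{total}}\) is the variance of the summed scale score. The root mean square error of approximation for a CFA model with test statistic \(\chi^{2}\), degrees of freedom \(df\), and sample size \(N\) is
\[
  \mathrm{RMSEA} = \sqrt{\frac{\max(\chi^{2} - df,\, 0)}{df\,(N-1)}},
\]
with values near or below 0.06 conventionally read as good fit.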
Background: According to phasic models of implementation, a Preparation phase designed to enhance the implementation climate should be completed prior to the Implementation phase. Yet preparatory activities and outcomes are rarely reported or assessed in implementation research. Project MIMIC (Maximizing Implementation of Motivational Incentives in Clinics) was a hybrid type 3 effectiveness-implementation trial that compared two multi-component, phasic strategies to implement contingency management (CM) in opioid treatment programs. The current secondary analysis assessed the comparative effectiveness of the two strategies on 5-month Preparation phase outcomes: attainment of knowledge and fidelity benchmarks, implementation climate at the end of the Preparation phase, and time required for providers to complete the final preparatory/pre-implementation activity of enrolling and scheduling their first CM patient.
Methods: Twenty-eight opioid treatment programs and 186 staff were cluster-randomized to receive the Addiction Technology Transfer Center (ATTC) control strategy (didactic workshop + performance feedback + consultation) or the theory-driven Enhanced-ATTC (E-ATTC) experimental strategy. During the Preparation phase, the E-ATTC strategy consisted of the ATTC strategy plus monthly Implementation Sustainment Facilitation sessions rooted in the principles of team-based motivational interviewing, designed to cultivate a strong implementation climate and accelerate successful completion of the Preparation phase.
Results: Across the 28 OTPs and 186 staff, attainment of knowledge and fidelity benchmarks favored E-ATTC but did not differ significantly by condition. Implementation climate ratings after the Preparation phase were high in both conditions, with no differences between conditions. Providers randomized to E-ATTC completed their final preparatory activity at significantly higher rates than those randomized to ATTC. Cox regression revealed that receipt of the E-ATTC strategy was also associated with significantly faster completion of the final Preparation activity.
Conclusions: Consistent with hypotheses, the theory-driven implementation strategy was associated with higher levels of and faster time to completion of preparatory activities, a key indicator of readiness for implementation. Counter to expectations, this was not driven by differences in implementation climate. High ratings of implementation climate at baseline limited our ability to detect change over time, highlighting a need for alternate strategies to measure putative mechanisms of change. This analysis adds to the scant literature reporting Preparation phase strategies and outcomes, which are strong predictors of successful implementation.
Trial registration: This study is registered in Clinicaltrials.gov (NCT03931174).
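For orientation, the Cox regression mentioned above models the time to completion of the final Preparation activity. A minimal form of such a model, with strategy condition as the only illustrative covariate (the study's actual model may have included additional covariates and clustering adjustments), is
\[
  h(t \mid x) = h_{0}(t)\,\exp(\beta x), \qquad x = 1 \text{ for E-ATTC and } 0 \text{ for ATTC},
\]
where \(h_{0}(t)\) is the baseline hazard of completing the activity at time \(t\); a hazard ratio \(\exp(\beta) > 1\) corresponds to faster completion under E-ATTC, consistent with the reported result.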
Background: Assessing implementation fidelity, the degree to which a program is implemented as intended, is essential to understand whether poor outcomes are due to implementation problems or to the design of an intervention. Few studies in health research have documented the association between implementation fidelity and effectiveness. Integrated District Evidence-to-Action (IDEAs) is a multicomponent audit and feedback strategy designed to improve the implementation of maternal and child clinical guidelines in Mozambique. In a previous study, we found mixed results regarding the effectiveness of IDEAs. The objective of the present study is to understand how implementation fidelity may have influenced the effectiveness of the strategy.
Methods: IDEAs was implemented in 154 health facilities across 12 districts in Manica and Sofala provinces, Mozambique, between 2016 and 2020. We used the conceptual framework for implementation fidelity to guide a descriptive analysis of adherence to IDEAs. Regression modeling was used to examine patterns in the direction of association between measures of fidelity and effectiveness for ten service delivery outcomes and five service readiness outcomes.
Results: We describe adherence for 15 fidelity measures, of which 12 showed high fidelity. Poor fidelity was found in conducting facility service readiness assessments and in completing micro-interventions from action plans. Service delivery measures tended to be positively associated with participation and with the degree of micro-intervention completion, and negatively associated with a higher number of action plans elaborated by participating teams. For the service readiness outcomes, delivery of essential care was positively associated with participation and micro-intervention completion, and staff availability was negatively associated with supervision.
Conclusion: Participation in audit and feedback meetings, the number of action plans elaborated, and the degree of completion of micro-interventions seem to be related to the effectiveness results. IDEAs should be adapted to reduce the number of action plans elaborated and to promote better micro-intervention completion. Additionally, combining audit and feedback strategies with other strategies might enhance effectiveness in service outcomes. This study examines how to analyze the link between the fidelity and effectiveness of a strategy to inform better design and recommend context-specific improvements.
Background: Saturation is a common criterion for determining qualitative sample size adequacy and analytic completeness. The dynamic and fast-paced implementation research environment poses unique challenges for investigators conducting qualitative studies that seek to reach saturation. Saturated studies require an iterative, often lengthy and labor-intensive process of data collection and analysis, which is frequently at odds with implementation science's focus on rapid turnaround times for translating knowledge into practice. Moreover, despite its common usage, uncertainty around saturation's meaning and application remains. To date, there has been no systematic attempt to understand how the concept of saturation is defined and deployed specifically in the context of qualitative implementation research, and no guidance on how to adapt the saturation concept in response to field-specific needs.
Methods: A concept synthesis was conducted to establish baseline knowledge that would inform field-specific guidance for assessing sample adequacy and analytic completeness in qualitative implementation research. Three leading implementation science journals were searched. Eligible studies (a) described empirical research, (b) discussed the saturation concept in the context of qualitative methodology, and (c) mentioned saturation in the body of the manuscript. Articles were systematically read and coded to identify meaningful content and patterns of interpretation.
Results: Of 207 studies identified, 158 met eligibility for full-text review, and 146 were included in the final analysis. Findings show cursory treatment of the saturation concept. Various saturation-related terms and definitions were identified, as were prevailing interview sample sizes and citation patterns. Studies rarely explained how analytic completeness was determined, and discussion of saturation leading to theory or concept generation was sparse. These findings informed development of the 3S Continuum as an alternative approach for assessing qualitative sample adequacy and analytic completeness.
Conclusions: In implementation research, saturation as an analytic benchmark is seldom explained and difficult to attain. We propose a practical approach for reconceptualizing saturation as part of a larger continuum for assessing sample adequacy and analytic completeness. We aim to help implementation researchers navigate decisions about qualitative sample adequacy and analytic completeness in pragmatic and transparent ways.
Background: Clinical performance feedback (CPF) is widely used to support physician development and improve care. Yet, its impact remains limited by low voluntary engagement. This study sought to: (1) develop a theory-informed, report-agnostic model outlining the key beliefs that shape physician engagement with CPF; (2) explore patterns of feedback orientation across physicians; and (3) understand how individual perceptions influence engagement with CPF.
Methods: We used a cross-sectional, multi-method approach combining a survey and qualitative interviews with primary care physicians in Ontario, Canada. We validated a conceptual model using path analysis, explored heterogeneity in feedback orientation using latent profile analysis, and qualitatively examined how perceptions of CPF influenced engagement.
Results: Survey results (n = 206) supported a model in which engagement with CPF is shaped by five recipient characteristics: perceived need for change (change discrepancy), perceived value of CPF, confidence to act on feedback (feedback self-efficacy), belief that feedback is useful (feedback utility), and sense of responsibility to act (feedback accountability). Perceived utility mediated the effects of self-efficacy and value on accountability, and perceived need for change influenced value. Latent profile analysis identified three groups: physicians with high and balanced feedback orientation (n = 32), moderate and balanced (n = 143), and low feedback orientation with low self-efficacy (n = 31). Interview findings (n = 9) revealed two mindsets: physicians who saw value in CPF despite its limitations (engagers), and those who dismissed its relevance (non-engagers). These mindsets aligned with differences in value, utility, and accountability scores from the survey.
Conclusions: Engagement with CPF is not one-size-fits-all. Physicians differ in how they appraise and act on feedback based on their beliefs about its relevance, usefulness, and their ability to act. CPF initiatives should explicitly link feedback to improved patient outcomes, focus on future actions, and provide clear, actionable guidance. Designing CPF that accounts for recipient heterogeneity is essential to realizing its full potential as an improvement strategy.
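The mediation structure described in the survey results can be sketched as a set of linear structural equations; the coefficients \(\beta\) and error terms \(\varepsilon\) below are illustrative placeholders rather than estimates from the study, and the fitted path model likely included additional paths:
\[
\begin{aligned}
  \text{value} &= \beta_{1}\,(\text{change discrepancy}) + \varepsilon_{1},\\
  \text{utility} &= \beta_{2}\,(\text{self-efficacy}) + \beta_{3}\,(\text{value}) + \varepsilon_{2},\\
  \text{accountability} &= \beta_{4}\,(\text{utility}) + \varepsilon_{3},
\end{aligned}
\]
so that self-efficacy and value affect accountability indirectly through perceived utility, and perceived need for change (change discrepancy) shapes value, as reported above.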

