Pub Date: 2025-12-18 | DOI: 10.1186/s13012-025-01477-w
Christiaan Vis, Leti van Bodegom-Vos, Bethany Hipple-Walters, Byron J Powell, Erwin Ista, Femke van Nassau
Background: Tailored implementation addresses the inherent dynamic complexity and heterogeneous nature of implementation practice. In general, tailored implementation involves setting implementation objectives, identifying determinants, matching strategies to those determinants, and developing an evaluation plan. How matching a specific implementation strategy to a determinant is done remains largely unknown. This study aimed to provide an overview of methods for matching strategies that have been applied in research and practice.
Methods: A scoping review of scientific and grey literature was conducted. A Rapid Assessment Procedure approach guided the design and analysis. Five online scientific bibliographic databases and various Dutch websites were searched for scientific and grey literature reporting applied methods for matching strategies to determinants. In addition, fifteen implementation practitioners in the Netherlands were interviewed to gain insights into how matching is conducted in daily practice. Findings were iteratively triangulated.
Results: Fifty-eight scientific articles and ten grey literature documents were included in the review. All identified methods for matching implementation strategies followed a stepped approach and recommended involving both implementation experts and stakeholders at various stages. Almost all methods were based on existing theories, models, and frameworks, such as Intervention Mapping, Expert Recommendations for Implementing Change, and Behaviour Change Wheel. Nevertheless, detailed instructions for matching strategies to determinants were lacking. Similarly, guidance on identifying and involving stakeholders remained superficial. Interviews indicated that in practice, strategy matching is generally based on previous experience and is non-systematic.
Conclusions: Various methods for matching implementation strategies to determinants are reported in the literature and used in practice. However, specific and detailed instructions for matching remain lacking. Methods that balance specificity, flexibility, and pragmatism are needed.
Title: Applied methods for matching implementation strategies to determinants: a scoping review of scientific and grey literature, and qualitative exploration of practice experiences.
Pub Date: 2025-12-17 | DOI: 10.1186/s13012-025-01476-x
Jodi Summers Holtrop, Brooke Dorsey-Holliman, Alison B Hamilton
Qualitative methods are critical to the conduct of Dissemination and Implementation (D&I) research because they illuminate processes, relationships, contexts, and other phenomena known to influence implementation and dissemination. Given the multitude of methods available, choosing appropriate and feasible methods can be challenging, leading many to rely on a limited set of methods. Navigational assistance with methods decision-making, including choosing to use less common methods, is lacking. This paper outlines how to select study methods, beginning with the research goal and the type of research question(s), and presents methods options based on key characteristics of the research. Decision pathways and considerations important to decision-making are featured as well as brief descriptions of the main methods available. Examples are also presented for instructional purposes. This paper supports the field of D&I by addressing a gap in the existing literature about how to conduct qualitative methods D&I research from a methodological perspective.
Title: Navigating qualitative methods choices in dissemination and implementation research.
Pub Date: 2025-12-15 | DOI: 10.1186/s13012-025-01478-9
Geoffrey D Barnes, Seo Youn Choi, Michael Sm Lanham, Michael P Dorsch, Joshua Errickson, Morris Fabbri, Anish Saraswat, F Jacob Seagull, Shawna N Smith
Background: Inappropriate prescribing of Direct Oral Anticoagulants (DOACs) is a leading cause of adverse outcomes. Electronic health record (EHR)-based notification strategies may support evidence-based prescribing and reduce adverse events. Engaging clinical pharmacists (vs. prescribers) through EHR-based notifications that review inappropriate DOAC prescribing may be an effective strategy for ensuring evidence-based medication prescribing.
Methods: We conducted a pragmatic, single-center, parallel-group, randomized implementation trial using asynchronous EHR-based notifications to prompt correction of inappropriate DOAC prescriptions that had arisen after the initial prescription (e.g., due to changes in patient condition). Notifications were sent for adult ambulatory patients with DOAC prescriptions not adhering to package insert instructions or having significant drug-drug interactions. Notifications were directed either to the prescribing clinician or to the clinical anticoagulation pharmacist, randomized at the prescriber level. The primary outcome was the proportion of notifications resulting in any prescription change within 7 days. Moderator analyses examined the influence of prescriber, patient, and prescription characteristics.
Results: From May 2023 to December 2024, 388 notifications for potentially inappropriate DOAC prescriptions among 183 prescribers were analyzed. Overall, 23.2% of notifications led to a prescription change within 7 days: 26% of prescriber-directed and 21% of pharmacist-directed notifications (p = 0.36). Nearly all (97.8%) changes were clinically appropriate and aligned with notification recommendations. Subgroup and moderator analyses showed that pharmacists made more changes than prescribers when errors were further from dosing cutoffs, and managed cases with polypharmacy or complex thresholds more consistently. Clinical pharmacists spent an average of 7.9 min per notification.
Conclusions: Prescribers and clinical pharmacists both responded similarly and consistently to correct inappropriate DOAC prescriptions in response to EHR asynchronous notifications. While pharmacists did not outperform prescribers overall, they demonstrated more nuanced application of medication prescribing guidelines in complex cases. Engaging clinical pharmacists directly may be an efficient implementation strategy for addressing medication prescribing issues. Optimal EHR-based implementation strategies for complex prescribing guidelines should consider both workflow integration and recipient expertise.
Title: Implementing prescriber-pharmacist collaboration to improve evidence-based medication prescribing using asynchronous, non-interruptive electronic health record notifications.
Trial registration: ClinicalTrials.gov NCT05351749.
Pub Date: 2025-12-15 | DOI: 10.1186/s13012-025-01466-z
Aubyn C Stahmer, Anna S Lau, Scott Roesch, Elizabeth Rangel, Gregory A Aarons, Lauren Brookman-Frazee
Background: Understanding the effectiveness of implementation strategies to support uptake of evidence-based interventions (EBIs) requires examining activation of the mechanisms targeted by those strategies. This study uses data from the TEAMS (Translating Evidence-Based Interventions for Autism) hybrid type III implementation-effectiveness trial to examine whether leader-level and provider-level implementation strategies, when paired with provider training in AIM HI (An Individualized Mental Health Intervention for Autism) in mental health programs (Study 1) and CPRT (Classroom Pivotal Response Teaching) in schools (Study 2), successfully activated the proposed implementation mechanisms (three for the leader-level strategy and two for the provider-level strategy). We also examined whether any of the identified mechanisms associated with the leader-level strategy mediated the previously reported effect of the strategy on implementation and child outcomes.
Methods: Organizations were randomized to receive a leader-level strategy (TEAMS Leadership Institute [TLI]), a provider-level strategy (TIPS), both strategies, or neither strategy (EBI provider training only). Leader participants were recruited from enrolled programs/districts and then supported recruitment of provider/child dyads. Children ranged in age from 3 to 13 years. The combined sample included 65 programs/districts, 95 TLI leaders, and 385 provider/child dyads. Multi-level modeling was used to test hypotheses. The hypothesized mechanisms were implementation leadership, implementation climate, and implementation support strategies for TLI, and EBI attitudes and motivation for training for TIPS.
Results: The leader-level strategy engaged the most proximal of the three hypothesized mechanisms (implementation support strategies). The provider-level intervention did not engage any of the hypothesized mechanisms. There was an interaction between the leader-level and provider-level strategies on implementation climate and provider motivation mechanisms favoring groups that received both implementation strategies compared to those that only received the provider-level strategy. No mechanisms significantly mediated the effect of the leader-level strategy on implementation or clinical outcomes.
Conclusions: This study provides support that a brief implementation leadership and climate training, TLI, increases leaders' use of specific actions to promote autism EBIs across two public service systems: children's mental health and public education. However, this mechanism does not fully account for the strategy's effects on fidelity or clinical outcomes. Findings advance the study of implementation mechanisms by examining how leadership training might work and by identifying a clear need to focus on leader-level implementation strategies in these systems of care.
Title: Understanding mechanisms of multi-level implementation strategies for autism interventions in a randomized trial across service systems.
Trial registration: ClinicalTrials.gov Identifier: NCT03380078.
Pub Date: 2025-12-12 | DOI: 10.1186/s13012-025-01469-w
Vera Yakovchenko, Chaeryon Kang, Brittney Neely, Carolyn Lamorte, Heather McCurdy, Dawn Scott, Anna Nobbe, Gwen Robins, Nsikak R Ekanem, Monica Merante, Sandra Gibson, Patrick Spoutz, Linda Chia, Rachel I Gonzalez, Matthew J Chinman, David Ross, Maggie Chartier, Lauren A Beste, Jasmohan S Bajaj, Tamar Taddei, Timothy R Morgan, Shari S Rogal
Background: While guidelines recommend twice-yearly liver cancer (hepatocellular carcinoma, HCC) surveillance for people with cirrhosis, adherence to these guidelines remains variable. We aimed to empirically identify and apply successful implementation strategies through Getting to Implementation (GTI), a manualized facilitation approach.
Methods: A hybrid type III, stepped-wedge, cluster-randomized trial was conducted at 12 underperforming Veterans Health Administration (VA) sites between October 2020 and October 2022. GTI included a stepwise approach to guide sites to detail their current state, set implementation goals, identify implementation barriers, select implementation strategies, make a work plan, conduct an evaluation, and sustain their work. Outcomes were defined using the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework.
Results: Facilitators supported site teams with an average of 20±6 facilitation hours over a 12-month period. Ten of 12 sites (83%) adopted GTI and applied a median of five strategies (e.g., dashboard use, small tests of change, direct patient outreach). Reach, the primary outcome, increased from a mean of 29.1% to 38.8% of at-risk Veterans receiving HCC surveillance from pre- to post-intervention, and increased further to 41.3% in the sustainment period. In both unadjusted and adjusted models, the odds of HCC surveillance were significantly higher during the intervention (adjusted odds ratio, aOR=1.67, 95% CI: 1.59, 1.75) and during sustainment (aOR=1.69, 95% CI: 1.60, 1.78) compared with baseline, with no significant difference between the active and sustainment periods, indicating sustained improvement after active facilitation ended.
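As a back-of-envelope check (not the authors' analysis), the unadjusted odds ratios implied by the reported mean surveillance proportions can be computed directly; the paper's aORs (1.67, 1.69) come from adjusted models and so differ somewhat from these crude values:

```python
# Illustrative arithmetic only: crude odds ratios from the reported
# mean surveillance rates (29.1% pre, 38.8% intervention, 41.3% sustainment).
def odds(p):
    """Convert a proportion to odds."""
    return p / (1 - p)

pre, post, sustain = 0.291, 0.388, 0.413
or_post = odds(post) / odds(pre)        # intervention vs. baseline
or_sustain = odds(sustain) / odds(pre)  # sustainment vs. baseline
print(f"post vs pre OR = {or_post:.2f}, sustainment vs pre OR = {or_sustain:.2f}")
# → post vs pre OR = 1.54, sustainment vs pre OR = 1.71
```

The crude estimates sit below the reported adjusted values, which is expected when models adjust for site and patient covariates.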
Conclusions: GTI sustainably improved HCC surveillance, suggesting that applying data-driven implementation strategies within a manualized facilitation approach can improve care.
Title: Getting to Implementation: applying data-driven implementation strategies to improve guideline concordant surveillance for hepatocellular carcinoma.
Trial registration: ClinicalTrials.gov NCT04178096.
Pub Date: 2025-12-11 | DOI: 10.1186/s13012-025-01465-0
C Hendricks Brown, J D Smith, Tamara Haegerich, Gregory Simon, Ian Cero, Gregory Aarons, Guillermo Prado, Peter Wyman, John Kane, Delbert Robinson, Theresa L Walunas, Lindsey Zimmerman, Wouter Vermeer, Lia Chin-Purcell, Moira McNulty, Katerina A Christopoulos, Bryan Garner, Mark McGovern
Background: Randomized rollout trial designs, including stepped wedge designs, are commonly used to examine how well an evidence-based intervention or package is being implemented in community or healthcare settings. The multitude of implementation research questions and specific hypotheses suggests the need for diverse randomized rollout implementation trial designs, assignment principles and procedures, and statistical modeling.
Methods: We separate key research questions and identify mixed effects models for randomized implementation rollout trials involving 1) a single implementation strategy, testing how its effects vary over time and/or with the resources allocated, 2) a comparison of two distinct implementation strategies, and 3) three distinct strategies or components tested in a single trial. Appropriate rollout designs, optimal assignment methods, and other design and analysis considerations are discussed for trials of up to three distinct implementation strategies.
Results: To examine improvement in implementation outcomes, we present a Fixed-Length Staggered Rollout Trial Design, which can also examine how well a sustainment period continues to produce outcomes; the Rollout Implementation Optimization (ROIO) methodology illustrates testing for quality improvement. For comparing an existing strategy to a new one, we focus on a Stepped Wedge design, and for comparing two new strategies we describe a Head-to-Head Rollout trial design. To test for synergy between two components, we introduce a Head-to-Head Rollout trial design, and for testing an existing strategy against a new one followed by a sustainment period, we recommend a Three-Phase Sequential Rollout Implementation trial design. Modeling choices are described, including options for specifying random effects that capture variation across sites and clustering. We discuss comparisons of superiority versus non-inferiority testing and multiple contrasts. To support use of these six designs and analyses, we provide computational code.
Conclusions: The large class of randomized rollout implementation trial designs provides rich opportunities to address research questions posed by implementation scientists. Balance in assigning sites to cohorts is important before random assignment to time of transition to a new implementation occurs. Specific hypotheses are tested with mixed effects models where fixed effects include comparisons of implementation conditions and random effects that account for variation in sites and clustering.
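The mixed effects structure the abstract describes — a fixed effect comparing implementation conditions plus random effects for site — can be sketched generically. This is not the authors' code; it is a minimal illustration on simulated data, assuming `statsmodels` is available, with all variable names (`site`, `period`, `condition`, `outcome`) and the staggered switch rule chosen for the example:

```python
# Hypothetical sketch of a mixed-effects analysis for a staggered rollout:
# fixed effects for implementation condition and time, random intercept per site.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sites, n_periods, n_per_cell = 12, 6, 20
rows = []
for site in range(n_sites):
    site_effect = rng.normal(0, 0.5)          # between-site variation
    switch = 1 + site % (n_periods - 1)       # staggered rollout: sites switch at different times
    for period in range(n_periods):
        cond = int(period >= switch)          # 0 = usual care, 1 = new implementation strategy
        for _ in range(n_per_cell):
            y = 0.3 * cond + 0.05 * period + site_effect + rng.normal(0, 1)
            rows.append((site, period, cond, y))
df = pd.DataFrame(rows, columns=["site", "period", "condition", "outcome"])

# Fixed effects: condition contrast and secular time trend; random intercept by site.
model = smf.mixedlm("outcome ~ condition + period", df, groups=df["site"])
result = model.fit()
print(result.params["condition"])  # estimated strategy effect (true value here: 0.3)
```

Separating the condition effect from the secular `period` trend is the key design point the abstract makes: in a rollout, time and condition are confounded within a site, and only the staggered transitions across sites identify the strategy effect.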
{"title":"What scientific inferences can be made with randomized implementation rollout trials.","authors":"C Hendricks Brown, J D Smith, Tamara Haegerich, Gregory Simon, Ian Cero, Gregory Aarons, Guillermo Prado, Peter Wyman, John Kane, Delbert Robinson, Theresa L Walunas, Lindsey Zimmerman, Wouter Vermeer, Lia Chin-Purcell, Moira McNulty, Katerina A Christopoulos, Bryan Garner, Mark McGovern","doi":"10.1186/s13012-025-01465-0","DOIUrl":"10.1186/s13012-025-01465-0","url":null,"abstract":"<p><strong>Background: </strong>Randomized rollout trial designs, including stepped wedge designs, are commonly used to examine how well an evidence-based intervention or package is being implemented in community or healthcare settings. The multitude of implementation research questions and specific hypotheses suggest the need for diverse randomized rollout implementation trial designs, assignment principles and procedureds, and statistical modeling.</p><p><strong>Methods: </strong>We separate key research questions and identify mixed effect models for randomized implementation rollout trials involving 1) a single implementation strategy that tests how this strategy varies over time and/or resources that are allocated, 2) comparison of two distinct implementation strategies, and 3) three distinct strategies or components tested in a single trial. Appropriate rollout designs, optimal assignment methods, and other design and analysis considerations are discussed for trials of up to three distinct implementation strategies.</p><p><strong>Results: </strong>To examine improvement in implementation outcomes we present a Fixed-Length Staggered Rollout Trial Design to examine how well a sustainment period continues to produce outcomes, The Rollout Implementation Optimization (ROIO) methodology illustrates testing for quality improvement. 
For comparing an existing to new strategy, we focus on a Stepped Wedge design, and for comparing two new strategies we describe a Head-to-Head Rollout trial design. To test for synergy between two components, we introduce a Head-to-Head Rollout trial design, and for testing an existing strategy to a new one followed by a sustainment period, we recommend using a Three-Phase Sequential Rollout Implementation trial design. Modeling choices are described, including options for specifying random effects that capture variations in site and clustering. We discuss comparisons of superiority versus non-inferiority testing and multiple contrasts. To support uses of these six designs and analyses, we provide computational code.</p><p><strong>Conclusions: </strong>The large class of randomized rollout implementation trial designs provides rich opportunities to address research questions posed by implementation scientists. Balance in assigning sites to cohorts is important before random assignment to time of transition to a new implementation occurs. Specific hypotheses are tested with mixed effects models where fixed effects include comparisons of implementation conditions and random effects that account for variation in sites and clustering.</p>","PeriodicalId":54995,"journal":{"name":"Implementation Science","volume":" ","pages":"7"},"PeriodicalIF":13.4,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12831406/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145745356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
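The conclusion's point about balancing sites across cohorts before randomizing transition times can be sketched as a covariate-constrained randomization. This is an illustrative approach, not the authors' code: the site volumes, the number of cohorts, and the pairing rule are all assumptions.

```python
import random

random.seed(7)

# Hypothetical sites with a balancing covariate (e.g., annual patient volume).
sites = {f"site_{i}": vol for i, vol in enumerate(
    [120, 450, 300, 90, 510, 200, 330, 150, 480, 260, 60, 400], start=1)}

n_cohorts = 4  # each cohort transitions to the new strategy at a different period

# Balance first: order sites by volume, then deal them into blocks of size
# n_cohorts so every block spans the volume range.
ordered = sorted(sites, key=sites.get)
blocks = [ordered[i:i + n_cohorts] for i in range(0, len(ordered), n_cohorts)]

# Randomize second: within each block, shuffle and send one site to each cohort.
cohorts = {c: [] for c in range(1, n_cohorts + 1)}
for block in blocks:
    random.shuffle(block)
    for c, site in enumerate(block, start=1):
        cohorts[c].append(site)

# Each cohort now has the same number of sites and a comparable volume mix;
# cohort c crosses over to the new implementation at period c + 1.
for c, members in cohorts.items():
    mean_vol = sum(sites[s] for s in members) / len(members)
    print(f"cohort {c} (transition at period {c + 1}): mean volume {mean_vol:.0f}")
```

Because each block contributes exactly one site to every cohort, the cohorts end up balanced on the covariate while the order of transition remains random.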
Pub Date : 2025-12-09DOI: 10.1186/s13012-025-01473-0
Sara J Becker, Tim Janssen, Tim Souza, Bryan Hartzler, Carla J Rash, Kira DiClemente-Bosco, Bryan R Garner
Background: Contingency management (CM), a behavioral treatment that incentivizes patients for attaining treatment goals, is a highly effective adjunct to medication for opioid use disorder. However, CM is rarely offered in opioid treatment programs in the United States. In a prior pilot trial, the implementation strategy (didactic workshop + feedback + consultation) delivered by the Addiction Technology Transfer Centers (ATTC strategy) promoted CM adoption more effectively than didactic training, but the speed and duration of implementation were sub-optimal. This 28-site type 3 hybrid trial tested the comparative effectiveness of the ATTC strategy versus an Enhanced-ATTC (E-ATTC) strategy that contained two theory-driven techniques targeting implementation climate to improve acceleration and sustainment, respectively: a provider-focused incentivization strategy and a team-focused facilitation strategy. We hypothesized that the E-ATTC strategy would be associated with superior implementation and patient outcomes.
Methods: Twenty-eight opioid treatment programs, 186 providers, and 592 patients were cluster-randomized to receive either the ATTC or E-ATTC strategy. Providers logged their CM sessions in an online CM Tracker and submitted audio-recorded CM sessions, and patients completed surveys about their opioid use at three timepoints. Intention-to-treat analyses examined impacts of the two multi-level strategies on implementation outcomes (CM Exposure, CM Competence, CM Sustainment) and patient outcomes (Opioid Abstinence, Opioid Related Problems).
Results: The pattern of results was identical across unadjusted, propensity score-adjusted, and covariate-adjusted general linear mixed models, though significance varied slightly. Relative to providers receiving the ATTC strategy, those receiving the E-ATTC strategy had significantly higher odds of CM Exposure (covariate adjusted OR = 3.21, p < 0.05) and of attaining the Excellent CM Competence benchmark (propensity-adjusted OR = 4.07, p < 0.05). Patients at the E-ATTC sites had significantly greater likelihood of Opioid Abstinence over time (OR = 2.04, p < 0.05). There were no significant conditional differences in CM Sustainment, though data were measured at the program-level, which limited power to detect differences.
Conclusions: The theory-driven E-ATTC strategy, which targeted implementation climate via facilitation and incentivization, had superior implementation and patient outcomes relative to the ATTC strategy. Results of this study can help inform ongoing CM implementation efforts across the United States.
Trial registration: This study was registered at ClinicalTrials.gov (NCT03931174) on April 23, 2019.
Title: "Project MIMIC (Maximizing Implementation of Motivational Incentives in Clinics): results of a 28-site cluster-randomized type 3 hybrid trial." (Implementation Science, p. 9)
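The odds ratios above come from adjusted general linear mixed models; as a much simpler, hedged illustration of the underlying quantity, here is an unadjusted odds ratio with a Woolf 95% confidence interval computed from a hypothetical 2×2 table (the counts are invented, not study data).

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Woolf (log-scale) 95% CI.

    a, b: outcome present/absent in the enhanced arm
    c, d: outcome present/absent in the comparison arm
    """
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 60/30 providers delivering CM in the enhanced arm
# versus 40/55 in the comparison arm (illustrative numbers only).
or_, lo, hi = odds_ratio_ci(60, 30, 40, 55)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")  # → OR = 2.75, 95% CI [1.51, 5.00]
```

A mixed model adds covariate adjustment and random site intercepts on top of this, which is why the study's reported ORs are not simple 2×2 ratios.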
Pub Date : 2025-12-04DOI: 10.1186/s13012-025-01474-z
Andrea L Nevedal, Christine P Kowalski, Erin P Finley, Gemmae M Fix, Alison B Hamilton, Christopher J Koenig
Background: Qualitative methods are central to implementation research. Qualitative research provides rich contextual insight into lived experiences of health and illness, healthcare systems and care delivery, and complex implementation processes. However, quantitative methods have historically been favored by editors and reviewers who serve as gatekeepers to scientific knowledge. Thus, we underscore that editors and reviewers must be familiar with the underlying principles and strengths of qualitative methods to avoid perpetuating inappropriate evaluation criteria that hinder qualitative research dissemination and funding opportunities. We aim to help authors and researchers provide sufficient detail to dispel these misperceptions, and to help editors and reviewers better evaluate studies that use qualitative methods, maximizing the dissemination of high-impact implementation research.
Methods: We convened a panel of six researchers with extensive experience in: designing, conducting, and reporting on qualitative research in implementation science and other healthcare research; training and mentoring others on qualitative methods; and serving as journal editors and manuscript/grant peer reviewers. We reviewed existing literature, published and unpublished reviewer critiques of qualitative grants and manuscripts, and discussed challenges facing qualitative methodologists when disseminating findings. Over the course of one year, we identified candidate topics, ranked each by priority, and used a consensus-based process to finalize the inventory and develop written guidance for handling each topic.
Results: We identified and dispelled 10 common misperceptions that limit the impact of qualitative methods in implementation research. Five misperceptions were associated with the application of inappropriate quantitative evaluation standards (subjectivity, sampling, generalizability, numbers/statistics, interrater reliability). Five misperceptions were associated with overly prescribed qualitative evaluation standards (saturation, member checking, coding, themes, qualitative data analysis software). For each misperception, we provide guidance on key considerations, responses to common critiques, and citations to appropriate literature.
Conclusions: Unaddressed misperceptions can impede the contributions of qualitative methods in implementation research. We offer a resource for editors, reviewers, authors, and researchers to clarify misunderstandings and promote more nuanced and appropriate evaluation of qualitative methods in manuscripts and grant proposals. This article encourages a balanced assessment of the strengths of qualitative methods to enhance understandings of key problems in implementation research, and, ultimately, to strengthen the impact of qualitative findings.
Title: "Optimizing qualitative methods in implementation research: a resource for editors, reviewers, authors, and researchers to dispel ten common misperceptions about qualitative research methods." (Implementation Science, p. 4)
Pub Date : 2025-12-02DOI: 10.1186/s13012-025-01471-2
Alex R Dopp, Michelle Bongard, Bing Han, Grace M Hindmarch, Mekdes Shiferaw, Sapna J Mendon-Plasek, Baji Tumendemberel, George Timmins, Kendal Reeder, Philip Pantoja, Danielle Schlang, Lora L Passetti, Mark D Godley, Sarah B Hunter
Background: Over the past decade, implementation researchers have empirically identified factors influencing long-term sustainment of evidence-based practices (EBPs) to target in implementation efforts. We examined progress toward promoting sustainment by conducting a conceptual replication of a prior study (Hunter et al., 2015, Implementation Science) that measured sustainment of an exemplar EBP for youth substance use, the Adolescent Community Reinforcement Approach (A-CRA).
Method: Data were collected 1-5 years after initial implementation funding ended (M = 3.3 years) through interviews and surveys with clinicians and supervisors from service organizations that implemented A-CRA (n = 66). Using survival analysis, we calculated the probability of A-CRA sustainment (dichotomously reported [yes/no] in interviews) over time and examined associations with contextual factors across the multilevel domains of the Consolidated Framework for Implementation Research (CFIR). We also combined our data with Hunter et al. (n = 68) to test if sustainment status or interactions with contextual factors differed by sample, and used rapid qualitative analysis of interviews to further explore patterns in the quantitative findings.
Results: In our sample, A-CRA sustainment probability decreased over time; 71% of organizations were sustaining A-CRA when funding ended, whereas only 33% were sustaining 5 years later; this survival curve did not statistically differ from Hunter et al. Sustainment was significantly associated with factors across CFIR domains: we replicated associations found by Hunter et al. (with e.g., funding stability, available clinicians, intervention complexity) and found unique associations (with e.g., program evaluation and strategic planning capacities, available supervisors, and perceived advantages and success of A-CRA). One association from the prior sample did not fully replicate (p < .10), but there were no significant interactions between contextual factors and sample. Qualitative findings further contextualized these results with service organization perspectives on factors influencing sustainment.
Conclusions: Our findings suggest that work over the past decade promoting sustainment of EBPs for youth substance use may not have produced measurable impacts. Future work needs to better incorporate growing knowledge on sustainment predictors into the development and testing of robust, multilevel implementation strategies and system-level supports. This study also provides a useful illustration of a replication study in implementation science; such studies are important but rare.
Title: "Minimal progress toward sustainment: 10-year replication of substance use EBP sustainment trajectories and associations with implementation characteristics." (Implementation Science, p. 3)
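The survival analysis described in the Method section can be illustrated with a minimal product-limit (Kaplan-Meier) estimator. The observations below are invented for the sketch; right-censoring (event=False) marks organizations still sustaining the EBP at last follow-up.

```python
# Hypothetical data: (years since funding ended, whether the organization
# stopped sustaining the EBP). event=False means right-censored, i.e. the
# site was still sustaining when last observed. Not the study's data.
data = [(1, True), (2, True), (2, False), (3, True), (3, True),
        (4, False), (4, True), (5, False), (5, True), (5, False)]

def kaplan_meier(observations):
    """Return [(time, estimated survival probability)] at each event time."""
    surv, curve = 1.0, []
    for t in sorted({t for t, event in observations if event}):
        at_risk = sum(1 for ti, _ in observations if ti >= t)
        events = sum(1 for ti, ev in observations if ti == t and ev)
        surv *= 1 - events / at_risk  # product-limit update
        curve.append((t, surv))
    return curve

for t, s in kaplan_meier(data):
    print(f"year {t}: estimated sustainment probability {s:.2f}")
```

The censored sites contribute to the at-risk denominators for as long as they are observed, which is what distinguishes this estimate from a naive proportion at each year.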
Pub Date : 2025-12-02DOI: 10.1186/s13012-025-01475-y
Justin D Smith, Katy Bedjeti, Nicola Lancki, Elizabeth A Sloss, James L Merle, Sheetal Kircher, Ava Coughlin, Susan Metzger, Kimberly A Webster, Mary O'Connor, September Cahue, Ann Marie Flores, Quan Mai, Betina Yanez, Michael Bass, Roxanne E Jensen, Ashley Wilder Smith, Allison J Carroll, Cynthia Barnard, Christopher M George, Dean G Tsarwhas, Kimberly Richardson, Frank J Penedo, Karla Hemming, Sofia F Garcia, Denise M Scholtens, David Cella
Objective: To test a package of clinician- and system-level implementation strategies on the adoption and reach of an electronic health record (EHR)-integrated cancer symptom assessment and management program, called cPRO, within a large academic healthcare system.
Methods: This hybrid type 2 effectiveness-implementation study used a cluster randomized stepped-wedge trial design to test a package of strategies targeting system operations, clinician practices, and patient experience to support implementation of cPRO. Six clusters, comprising 26 oncology clinic sites, were randomly allocated to one of six sequences that dictated when each cluster underwent a 6-month implementation preparation period followed by the transition to the post-implementation phase, in which 46 discrete implementation strategies were deployed. The primary implementation outcome was patient adoption of cPRO, measured by the proportion of patients completing cPRO assessments. Secondary outcomes included the reach of patient enrollment in the cPRO system and clinician adoption of referrals using an EHR "dot phrase" (snippets of text that can be quickly inserted into patient charts for referrals, orders, etc.) triggered by elevated cPRO scores. Data were analyzed using a cluster-period level analysis (generalized least squares linear regression with fixed cluster effects and adjustment for calendar time).
Results: The study included 34,643 unique outpatients receiving cancer treatment at 26 clinics between October 2020 and March 2024. The primary analysis showed no significant difference between the pre- and post-implementation periods in the proportion of patients who completed the assessments (25% vs. 40%). Secondary outcomes indicated that the implementation strategy package did not significantly improve the reach of cPRO enrollment among patients (RR = 1.00, CI: 0.78 to 1.27). Clinician adoption of referrals in response to elevated cPRO symptom scores showed a marginal positive, albeit non-statistically significant, association with the implementation strategy package (RR = 1.66, CI: 0.79 to 3.48), although this varied over time.
Conclusions: The implementation strategies tested did not significantly alter patient adoption rates of cPRO when comparing pre- and post-implementation periods, but might improve clinician adoption of the EHR dot phrase function. Future studies should explore strategies to enhance the integration of digital symptom management systems into routine cancer care to improve patient outcomes.
Trial registration: ClinicalTrials.gov NCT03988543; registered 8 May 2019 https://clinicaltrials.gov/study/NCT03988543?term=NCT03988543&rank=1 .
Title: "Implementation outcomes of a symptom management intervention in ambulatory oncology practices evaluated using a cluster randomized stepped-wedge trial design." (Implementation Science, p. 10)
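The cluster-period analysis described in the Methods (regression of cluster-period outcomes on a treatment indicator with fixed cluster effects and calendar-time adjustment) can be sketched on simulated data. Everything here is hypothetical: the transition schedule, effect size, and noise are invented, and plain OLS via `lstsq` stands in for the study's generalized least squares (the two coincide under homoscedastic, uncorrelated errors).

```python
import numpy as np

rng = np.random.default_rng(0)

n_clusters, n_periods = 6, 7
effect = 0.10  # assumed true post-implementation bump in completion proportion

# Stepped-wedge exposure: cluster c switches to the implementation phase at
# period c + 1 (hypothetical schedule; period 0 is all-control).
treated = np.array([[1 if p >= c + 1 else 0 for p in range(n_periods)]
                    for c in range(n_clusters)], dtype=float)

# Simulated cluster-period outcomes: cluster effect + calendar-time effect
# + treatment effect + noise.
cluster_fx = rng.normal(0.30, 0.05, n_clusters)
period_fx = np.linspace(0.0, 0.06, n_periods)
y = (cluster_fx[:, None] + period_fx[None, :] + effect * treated
     + rng.normal(0, 0.01, (n_clusters, n_periods))).ravel()

# Design matrix: cluster dummies (fixed cluster effects), period dummies
# (calendar-time adjustment, first period as reference), treatment indicator.
rows = []
for c in range(n_clusters):
    for p in range(n_periods):
        cluster_d = np.eye(n_clusters)[c]
        period_d = np.eye(n_periods)[p][1:]  # drop reference period
        rows.append(np.concatenate([cluster_d, period_d, [treated[c, p]]]))
X = np.array(rows)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated implementation effect: {beta[-1]:.3f} (true {effect})")
```

Adjusting for calendar time is essential in stepped-wedge designs because later periods are both more exposed to the intervention and subject to secular trends; without the period dummies, the trend would be absorbed into the treatment estimate.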