Pub Date: 2024-07-15 | DOI: 10.1186/s13012-024-01379-3
Virginia R McKay, Alithia Zamantakis, Ana Michaela Pachicano, James L Merle, Morgan R Purrier, McKenzie Swan, Dennis H Li, Brian Mustanski, Justin D Smith, Lisa R Hirschhorn, Nanette Benbow
Background: There are no criteria specifically for evaluating the quality of implementation research and recommending to practitioners implementation strategies likely to have impact. We describe the development and application of the Best Practices Tool, a set of criteria to evaluate the evidence supporting HIV-specific implementation strategies.
Methods: We developed the Best Practices Tool from 2022-2023 in three phases. (1) We developed a draft tool and criteria based on a literature review and key informant interviews. We purposively selected interview participants representing a mix of expertise in HIV service delivery, quality improvement, and implementation science, and recruited them by email. (2) The tool was then informed and revised through two e-Delphi rounds using a survey delivered online through Qualtrics. The first- and second-round Delphi surveys consisted of 71 and 52 open- and closed-ended questions, respectively, asking participants to evaluate, confirm, and make suggestions on different aspects of the rubric. After each survey round, data were analyzed and synthesized as appropriate, and the tool and criteria were revised. (3) We then applied the tool to a set of research studies assessing implementation strategies designed to promote the adoption and uptake of evidence-based HIV interventions, to assess reliable application of the tool and criteria.
Results: Our initial literature review yielded existing tools for evaluating intervention-level evidence. For a strategy-level tool, additions emerged from interviews, for example, a need to consider the context and specification of strategies. Revisions were made after both Delphi rounds, resulting in the confirmation of five evaluation domains (research design, implementation outcomes, limitations and rigor, strategy specification, and equity) and four evidence levels (best, promising, more evidence needed, and harmful). For most domains, criteria were specified at each evidence level. After an initial pilot round to develop an application process and provide training, we achieved 98% reliability when applying the criteria to 18 implementation strategies.
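To illustrate the kind of reliability figure reported above, a minimal sketch of percent agreement between two independent raters follows; the evidence-level ratings, rater data, and helper function are hypothetical illustrations, not the authors' code or data.

```python
# Minimal sketch (not the authors' code): percent agreement between two
# independent raters applying evidence-level criteria to a set of strategies.
# All ratings below are hypothetical.
from typing import List

EVIDENCE_LEVELS = ["best", "promising", "more evidence needed", "harmful"]

def percent_agreement(rater_a: List[str], rater_b: List[str]) -> float:
    """Share of items on which two raters assigned the same evidence level."""
    assert len(rater_a) == len(rater_b), "raters must score the same items"
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

# Hypothetical ratings for 18 implementation strategies.
rater_a = ["best"] * 10 + ["promising"] * 6 + ["more evidence needed"] * 2
rater_b = list(rater_a)
rater_b[4] = "promising"  # one disagreement

print(f"Agreement: {percent_agreement(rater_a, rater_b):.0f}%")
```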
Conclusions: We developed a tool to evaluate the evidence supporting implementation strategies for HIV services. Although specific to HIV in the US, this tool is adaptable for evaluating strategies in other health areas.
{"title":"Establishing evidence criteria for implementation strategies in the US: a Delphi study for HIV services.","authors":"Virginia R McKay, Alithia Zamantakis, Ana Michaela Pachicano, James L Merle, Morgan R Purrier, McKenzie Swan, Dennis H Li, Brian Mustanski, Justin D Smith, Lisa R Hirschhorn, Nanette Benbow","doi":"10.1186/s13012-024-01379-3","DOIUrl":"10.1186/s13012-024-01379-3","url":null,"abstract":"<p><strong>Background: </strong>There are no criteria specifically for evaluating the quality of implementation research and recommending implementation strategies likely to have impact to practitioners. We describe the development and application of the Best Practices Tool, a set of criteria to evaluate the evidence supporting HIV-specific implementation strategies.</p><p><strong>Methods: </strong>We developed the Best Practices Tool from 2022-2023 in three phases. (1) We developed a draft tool and criteria based on a literature review and key informant interviews. We purposively selected and recruited by email interview participants representing a mix of expertise in HIV service delivery, quality improvement, and implementation science. (2) The tool was then informed and revised through two e-Delphi rounds using a survey delivered online through Qualtrics. The first and second round Delphi surveys consisted of 71 and 52 open and close-ended questions, respectively, asking participants to evaluate, confirm, and make suggestions on different aspects of the rubric. After each survey round, data were analyzed and synthesized as appropriate; and the tool and criteria were revised. (3) We then applied the tool to a set of research studies assessing implementation strategies designed to promote the adoption and uptake of evidence-based HIV interventions to assess reliable application of the tool and criteria.</p><p><strong>Results: </strong>Our initial literature review yielded existing tools for evaluating intervention-level evidence. For a strategy-level tool, additions emerged from interviews, for example, a need to consider the context and specification of strategies. Revisions were made after both Delphi rounds resulting in the confirmation of five evaluation domains - research design, implementation outcomes, limitations and rigor, strategy specification, and equity - and four evidence levels - best, promising, more evidence needed, and harmful. For most domains, criteria were specified at each evidence level. After an initial pilot round to develop an application process and provide training, we achieved 98% reliability when applying the criteria to 18 implementation strategies.</p><p><strong>Conclusions: </strong>We developed a tool to evaluate the evidence supporting implementation strategies for HIV services. Although specific to HIV in the US, this tool is adaptable for evaluating strategies in other health areas.</p>","PeriodicalId":54995,"journal":{"name":"Implementation Science","volume":"19 1","pages":"50"},"PeriodicalIF":8.8,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11251241/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141621805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-15 | DOI: 10.1186/s13012-024-01381-9
Elizabeth A McGuier, David J Kolko, Gregory A Aarons, Allison Schachter, Mary Lou Klem, Matthew A Diabes, Laurie R Weingart, Eduardo Salas, Courtney Benjamin Wolk
Background: Implementation of new practices in team-based settings requires teams to work together to respond to new demands and changing expectations. However, team constructs and team-based implementation approaches have received little attention in the implementation science literature. This systematic review summarizes empirical research examining associations between teamwork and implementation outcomes when evidence-based practices and other innovations are implemented in healthcare and human service settings.
Methods: We searched MEDLINE, CINAHL, APA PsycINFO, and ERIC for peer-reviewed empirical articles published from January 2000 to March 2022. Additional articles were identified by searches of reference lists and a cited reference search for included articles (completed in February 2023). We selected studies using quantitative, qualitative, or mixed methods to examine associations between team constructs and implementation outcomes in healthcare and human service settings. We used the Mixed Methods Appraisal Tool to assess methodological quality/risk of bias and conducted a narrative synthesis of included studies. GRADE and GRADE-CERQual were used to assess the strength of the body of evidence.
Results: Searches identified 10,489 results. After review, 58 articles representing 55 studies were included. Relevant studies increased over time; 71% of articles were published after 2016. We were unable to generate estimates of effects for any quantitative associations because of very limited overlap in the reported associations between team variables and implementation outcomes. Qualitative findings with high confidence were: 1) staffing shortages and turnover hinder implementation; 2) adaptive team functioning (i.e., positive affective states, effective behavioral processes, shared cognitive states) facilitates implementation and is associated with better implementation outcomes, whereas problems in team functioning (i.e., negative affective states, problematic behavioral processes, lack of shared cognitive states) act as barriers to implementation and are associated with poor implementation outcomes; and 3) open, ongoing, and effective communication within teams facilitates implementation of new practices, while poor communication is a barrier.
Conclusions: Teamwork matters for implementation. However, both team constructs and implementation outcomes were often poorly specified, and there was little overlap between the team constructs and implementation outcomes studied in quantitative studies. Greater specificity and rigor are needed to understand how teamwork influences implementation processes and outcomes. We provide recommendations for improving the conceptualization, description, assessment, analysis, and interpretation of research on teams implementing innovations.
Trial registration: This systematic review was registered in PROSPERO, the international prospective register of systematic reviews.
{"title":"Teamwork and implementation of innovations in healthcare and human service settings: a systematic review.","authors":"Elizabeth A McGuier, David J Kolko, Gregory A Aarons, Allison Schachter, Mary Lou Klem, Matthew A Diabes, Laurie R Weingart, Eduardo Salas, Courtney Benjamin Wolk","doi":"10.1186/s13012-024-01381-9","DOIUrl":"10.1186/s13012-024-01381-9","url":null,"abstract":"<p><strong>Background: </strong>Implementation of new practices in team-based settings requires teams to work together to respond to new demands and changing expectations. However, team constructs and team-based implementation approaches have received little attention in the implementation science literature. This systematic review summarizes empirical research examining associations between teamwork and implementation outcomes when evidence-based practices and other innovations are implemented in healthcare and human service settings.</p><p><strong>Methods: </strong>We searched MEDLINE, CINAHL, APA PsycINFO and ERIC for peer-reviewed empirical articles published from January 2000 to March 2022. Additional articles were identified by searches of reference lists and a cited reference search for included articles (completed in February 2023). We selected studies using quantitative, qualitative, or mixed methods to examine associations between team constructs and implementation outcomes in healthcare and human service settings. We used the Mixed Methods Appraisal Tool to assess methodological quality/risk of bias and conducted a narrative synthesis of included studies. GRADE and GRADE-CERQual were used to assess the strength of the body of evidence.</p><p><strong>Results: </strong>Searches identified 10,489 results. After review, 58 articles representing 55 studies were included. Relevant studies increased over time; 71% of articles were published after 2016. We were unable to generate estimates of effects for any quantitative associations because of very limited overlap in the reported associations between team variables and implementation outcomes. Qualitative findings with high confidence were: 1) Staffing shortages and turnover hinder implementation; 2) Adaptive team functioning (i.e., positive affective states, effective behavior processes, shared cognitive states) facilitates implementation and is associated with better implementation outcomes; Problems in team functioning (i.e., negative affective states, problematic behavioral processes, lack of shared cognitive states) act as barriers to implementation and are associated with poor implementation outcomes; and 3) Open, ongoing, and effective communication within teams facilitates implementation of new practices; poor communication is a barrier.</p><p><strong>Conclusions: </strong>Teamwork matters for implementation. However, both team constructs and implementation outcomes were often poorly specified, and there was little overlap of team constructs and implementation outcomes studied in quantitative studies. Greater specificity and rigor are needed to understand how teamwork influences implementation processes and outcomes. 
We provide recommendations for improving the conceptualization, description, assessment, analysis, and interpretation of research on teams implementing innovations.</p><p><strong>Trial registration: </strong>This systematic review was registered in PROSPERO, the internati","PeriodicalId":54995,"journal":{"name":"Implementation Science","volume":"19 1","pages":"49"},"PeriodicalIF":8.8,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11247800/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141621806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-11 | DOI: 10.1186/s13012-024-01380-w
Tracy L Finch, Sebastian Potthoff, Carl R May, Melissa Girling, Neil Perkins, Christiaan Vis, Leah Bührmann, Anne Etzelmueller, Claire Rosalie van Genugten, Josien Schuurmans, Jordi Piera-Jiménez, Tim Rapley
Background: The process of tailored implementation is ill-defined and under-explored. The ItFits-toolkit was developed and subsequently tested as a self-guided online platform to facilitate implementation of tailored strategies for internet-based cognitive behavioural therapy (iCBT) services. In the ImpleMentAll trial, the ItFits-toolkit had a small but positive effect on the primary outcome of iCBT normalisation. This paper investigates, from a qualitative perspective, how implementation teams developed and undertook tailored implementation using the toolkit within the trial.
Methods: Implementation teams in thirteen sites from nine countries (Europe and Australia) used the ItFits-toolkit for a minimum of six months, consistent with the trial protocol. A qualitative process evaluation was conducted. Descriptive data regarding goals, barriers, strategies, and implementation plans collected within the toolkit informed qualitative data collection in real time. Qualitative data included remote longitudinal interviews (n = 55) with implementation team members (n = 30) and observations of support calls (n = 19) with study sites. Qualitative data were analysed thematically, using a team-based approach.
Results: Implementation teams developed and executed tailored implementation projects across all steps in the toolkit process. Working in a structured way but with room for flexibility, decisions were shaped by team members' ideas and goals, iterative stakeholder engagement, internal and external influences, and the context of the ImpleMentAll project. Although teams reported some positive impacts of their projects, 'time', both for undertaking the work, and for seeing project impacts, was described as a key factor in decisions about implementation strategies and assessments of success.
Conclusion: This study responds directly to McHugh et al.'s (2022) call for empirical description of what implementation tailoring looks like in action in service settings. Self-guided facilitation of tailored implementation enables implementers in service settings to undertake tailoring within their organisations. Implementation tailoring takes considerable time and involves detailed work, but it can be supported through the provision of implementation science-informed guidance and materials, iterative and ongoing stakeholder engagement, and working reflectively in response to external influencing factors. Directions for advancement of tailored implementation are suggested.
{"title":"How is tailored implementation undertaken using a self-guided toolkit? Qualitative study of the ItFits-toolkit in the ImpleMentAll project.","authors":"Tracy L Finch, Sebastian Potthoff, Carl R May, Melissa Girling, Neil Perkins, Christiaan Vis, Leah Bührmann, Anne Etzelmueller, Claire Rosalie van Genugten, Josien Schuurmans, Jordi Piera-Jiménez, Tim Rapley","doi":"10.1186/s13012-024-01380-w","DOIUrl":"10.1186/s13012-024-01380-w","url":null,"abstract":"<p><strong>Background: </strong>The process of tailored implementation is ill-defined and under-explored. The ItFits-toolkit was developed and subsequently tested as a self-guided online platform to facilitate implementation of tailored strategies for internet-based cognitive behavioural therapy (iCBT) services. In ImpleMentAll, ItFits-toolkit had a small but positive effect on the primary outcome of iCBT normalisation. This paper investigates, from a qualitative perspective, how implementation teams developed and undertook tailored implementation using the toolkit within the trial.</p><p><strong>Methods: </strong>Implementation teams in thirteen sites from nine countries (Europe and Australia) used the ItFits-toolkit for six months minimum, consistent with the trial protocol. A qualitative process evaluation was conducted. Descriptive data regarding goals, barriers, strategies, and implementation plans collected within the toolkit informed qualitative data collection in real time. Qualitative data included remote longitudinal interviews (n = 55) with implementation team members (n = 30) and observations of support calls (n = 19) with study sites. Qualitative data were analysed thematically, using a team-based approach.</p><p><strong>Results: </strong>Implementation teams developed and executed tailored implementation projects across all steps in the toolkit process. Working in a structured way but with room for flexibility, decisions were shaped by team members' ideas and goals, iterative stakeholder engagement, internal and external influences, and the context of the ImpleMentAll project. Although teams reported some positive impacts of their projects, 'time', both for undertaking the work, and for seeing project impacts, was described as a key factor in decisions about implementation strategies and assessments of success.</p><p><strong>Conclusion: </strong>This study responds directly to McHugh et al.'s (2022) call for empirical description of what implementation tailoring looks like in action, in service settings. Self-guided facilitation of tailored implementation enables implementers in service settings to undertake tailoring within their organisations. Implementation tailoring takes considerable time and involves detailed work but can be supported through the provision of implementation science informed guidance and materials, iterative and ongoing stakeholder engagement, and working reflectively in response to external influencing factors. 
Directions for advancement of tailored implementation are suggested.</p>","PeriodicalId":54995,"journal":{"name":"Implementation Science","volume":"19 1","pages":"48"},"PeriodicalIF":8.8,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11241992/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141592196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-04 | DOI: 10.1186/s13012-024-01373-9
Sarah Kate Bearman, Paul Rohde, Sydney Pauling, Jeff M Gau, Heather Shaw, Eric Stice
Background: Despite ongoing efforts to introduce evidence-based interventions (EBIs) into mental health care settings, little research has focused on the sustainability of EBIs in these settings. College campuses are a natural place to intervene with young adults who are at high risk for mental health disorders, including eating disorders. The current study tested the effect of three levels of implementation support on the sustainability of an evidence-based group eating disorder prevention program, the Body Project, delivered by peer educators. We also tested whether intervention, contextual, or implementation process factors predicted sustainability.
Methods: We recruited 63 colleges with peer educator programs and randomly assigned them to (a) a 2-day Train-the-Trainer (TTT) training in which peer educators were trained to implement the Body Project and supervisors were taught how to train future peer educators (TTT), (b) TTT training plus a technical assistance (TA) workshop (TTT + TA), or (c) TTT plus the TA workshop and quality assurance (QA) consultations over 1 year (TTT + TA + QA). Using logistic regression models, we tested whether implementation support strategies, perceived characteristics of the intervention and attitudes towards evidence-based interventions at baseline, and the proportion of implementation activities completed during the implementation year predicted three school-level dichotomous sustainability outcomes (offering Body Project groups, training peer educators, and training supervisors) over the subsequent two-year sustainability period.
Results: Implementation support strategies did not significantly predict any sustainability outcomes, although a trend suggested that colleges randomized to the TTT + TA + QA strategy were more likely to train new supervisors (OR = 5.46, 95% CI [0.89-33.38]). Colleges that completed a greater proportion of implementation activities were more likely to offer Body Project groups (OR = 1.53, 95% CI [1.19-1.98]) and train new peer educators during the sustainability phase (OR = 1.39, 95% CI [1.10-1.74]). Perceived positive characteristics of the Body Project predicted training new peer educators (OR = 18.42, 95% CI [1.48-299.66]), which may be critical for sustainability in routine settings with high provider turnover.
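For readers unfamiliar with how odds ratios of this kind are derived, a minimal sketch of a logistic regression whose coefficients are exponentiated into odds ratios with 95% confidence intervals follows; the simulated data, variable names, and effect sizes are hypothetical, not the study's analysis code or data.

```python
# Minimal sketch (not the study's code): logistic regression predicting a
# dichotomous sustainability outcome, reported as odds ratios with 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 63  # number of colleges in the trial

# Hypothetical predictors: implementation activities completed and perceived
# characteristics of the intervention.
df = pd.DataFrame({
    "prop_activities": rng.uniform(0, 20, n),
    "perceived_chars": rng.normal(3.5, 0.5, n),
})
logit = -3 + 0.4 * df["prop_activities"] / 5 + 0.8 * (df["perceived_chars"] - 3.5)
df["offered_groups"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["prop_activities", "perceived_chars"]])
res = sm.Logit(df["offered_groups"], X).fit(disp=False)

odds_ratios = pd.DataFrame({
    "OR": np.exp(res.params),            # exponentiated coefficients
    "CI_low": np.exp(res.conf_int()[0]),
    "CI_high": np.exp(res.conf_int()[1]),
})
print(odds_ratios.round(2))
```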
Conclusions: Helping schools complete more implementation activities and increasing the perceived positive characteristics of a prevention program may result in greater sustainment of prevention program implementation.
Trial registration: This study was preregistered on 12/07/17 with ClinicalTrials.gov, ID NCT03409809, https://clinicaltrials.gov/ct2/show/NCT03409809 .
{"title":"Predictors of the sustainability for an evidence-based eating disorder prevention program delivered by college peer educators.","authors":"Sarah Kate Bearman, Paul Rohde, Sydney Pauling, Jeff M Gau, Heather Shaw, Eric Stice","doi":"10.1186/s13012-024-01373-9","DOIUrl":"10.1186/s13012-024-01373-9","url":null,"abstract":"<p><strong>Background: </strong>Despite ongoing efforts to introduce evidence-based interventions (EBIs) into mental health care settings, little research has focused on the sustainability of EBIs in these settings. College campuses are a natural place to intervene with young adults who are at high risk for mental health disorders, including eating disorders. The current study tested the effect of three levels of implementation support on the sustainability of an evidence-based group eating disorder prevention program, the Body Project, delivered by peer educators. We also tested whether intervention, contextual, or implementation process factors predicted sustainability.</p><p><strong>Methods: </strong>We recruited 63 colleges with peer educator programs and randomly assigned them to (a) receive a 2-day Train-the-Trainer (TTT) training in which peer educators were trained to implement the Body Project and supervisors were taught how to train future peer educators (TTT), (b) TTT training plus a technical assistance (TA) workshop (TTT + TA), or (c) TTT plus the TA workshop and quality assurance (QA) consultations over 1-year (TTT + TA + QA). We tested whether implementation support strategies, perceived characteristics of the intervention and attitudes towards evidence-based interventions at baseline and the proportion of completed implementation activities during the implementation year predicted three school-level dichotomous sustainability outcomes (offering Body Project groups, training peer educators, training supervisors) over the subsequent two-year sustainability period using logistic regression models.</p><p><strong>Results: </strong>Implementation support strategies did not significantly predict any sustainability outcomes, although a trend suggested that colleges randomized to the TTT + TA + QA strategy were more likely to train new supervisors (OR = 5.46, 95% CI [0.89-33.38]). Colleges that completed a greater proportion of implementation activities were more likely to offer Body Project groups (OR = 1.53, 95% CI [1.19-1.98]) and train new peer educators during the sustainability phase (OR = 1.39, 95% CI [1.10-1.74]). 
Perceived positive characteristics of the Body Project predicted training new peer educators (OR = 18.42, 95% CI [1.48-299.66]), which may be critical for sustainability in routine settings with high provider turnover.</p><p><strong>Conclusions: </strong>Helping schools complete more implementation activities and increasing the perceived positive characteristics of a prevention program may result in greater sustainment of prevention program implementation.</p><p><strong>Trial registration: </strong>This study was preregistered on 12/07/17 with ClinicalTrials.gov, ID NCT03409809, https://clinicaltrials.gov/ct2/show/NCT03409809 .</p>","PeriodicalId":54995,"journal":{"name":"Implementation Science","volume":"19 1","pages":"47"},"PeriodicalIF":8.8,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11225113/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141535988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-03 | DOI: 10.1186/s13012-024-01371-x
Ross C Brownson, Leopoldo J Cabassa, Bettina F Drake, Rachel C Shelton
In their article on "Navigating the Field of Implementation Science Towards Maturity: Challenges and Opportunities," Chambers and Emmons describe the rapid growth of implementation science along with remaining challenges. A significant gap remains in training and capacity building. Formats for capacity building include university degree programs, summer training institutes, workshops, and conferences. In this letter, we describe and amplify on five key areas, including the need to (1) identify advanced competencies, (2) increase the volume and reach of trainings, (3) sustain trainings, (4) build equity-focused trainings, and (5) develop global capacity. We hope that the areas we highlight will aid in addressing several key challenges to prioritize in future efforts to build greater capacity in implementation science.
{"title":"Closing the gap: advancing implementation science through training and capacity building.","authors":"Ross C Brownson, Leopoldo J Cabassa, Bettina F Drake, Rachel C Shelton","doi":"10.1186/s13012-024-01371-x","DOIUrl":"10.1186/s13012-024-01371-x","url":null,"abstract":"<p><p>In their article on \"Navigating the Field of Implementation Science Towards Maturity: Challenges and Opportunities,\" Chambers and Emmons describe the rapid growth of implementation science along with remaining challenges. A significant gap remains in training and capacity building. Formats for capacity building include university degree programs, summer training institutes, workshops, and conferences. In this letter, we describe and amplify on five key areas, including the need to (1) identify advanced competencies, (2) increase the volume and reach of trainings, (3) sustain trainings, (4) build equity focused trainings, and (5) develop global capacity. We hope that the areas we highlight will aid in addressing several key challenges to prioritize in future efforts to build greater capacity in implementation science.</p>","PeriodicalId":54995,"journal":{"name":"Implementation Science","volume":"19 1","pages":"46"},"PeriodicalIF":8.8,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11223366/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141499682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-02 | DOI: 10.1186/s13012-024-01376-6
Anshula Ambasta, Jayna M Holroyd-Leduc, Surakshya Pokharel, Pamela Mathura, Andrew Wei-Yeh Shih, Henry T Stelfox, Irene Ma, Mark Harrison, Braden Manns, Peter Faris, Tyler Williamson, Caley Shukalek, Maria Santana, Onyebuchi Omodon, Deirdre McCaughey, Narmin Kassam, Chris Naugler
Background: Laboratory test overuse in hospitals is a form of healthcare waste that also harms patients. Developing and evaluating interventions to reduce this form of healthcare waste is critical. We detail the protocol for our study, which aims to implement and evaluate the impact of an evidence-based, multicomponent intervention bundle on repetitive use of routine laboratory testing in hospitalized medical patients across adult hospitals in the province of British Columbia, Canada.
Methods: We have designed a stepped-wedge cluster randomized trial to assess the impact of a multicomponent intervention bundle across 16 hospitals in the province of British Columbia, Canada. We will use the Knowledge to Action cycle to guide implementation and the RE-AIM framework to guide evaluation of the intervention bundle. The primary outcome will be the number of routine laboratory tests ordered per patient-day in the intervention versus control periods. Secondary outcome measures will assess implementation fidelity, the number of all common laboratory tests used, impact on healthcare costs, and safety outcomes. The study will include patients admitted to adult medical wards (internal medicine or family medicine) and healthcare providers working in these wards within the participating hospitals. After a baseline period of 24 weeks, we will conduct a 16-week pilot at one hospital site. A new cluster (containing approximately 2-3 hospitals) will receive the intervention every 12 weeks. We will evaluate the sustainability of implementation at 24 weeks post-implementation of the final cluster. Analyses will follow the intention-to-treat principle, using generalized linear mixed models to evaluate the impact of the intervention on outcomes.
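A minimal sketch of how such a stepped-wedge rollout can be laid out in code follows; the hospital groupings, cluster size of three, and exact crossover weeks are illustrative assumptions, not the trial's actual randomisation schedule.

```python
# Minimal sketch (assumed schedule, not the trial's): clusters of hospitals
# cross from control to intervention at fixed intervals after a baseline
# period and a single-site pilot.
from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    start_week: int
    hospitals: List[str]

BASELINE_WEEKS = 24
PILOT_WEEKS = 16
STEP_INTERVAL = 12

hospitals = [f"H{i:02d}" for i in range(1, 17)]  # 16 hospitals

# One pilot site, then clusters of 3 hospitals crossing over every 12 weeks.
steps = [Step(BASELINE_WEEKS, hospitals[:1])]
remaining = hospitals[1:]
week = BASELINE_WEEKS + PILOT_WEEKS
for i in range(0, len(remaining), 3):
    steps.append(Step(week, remaining[i:i + 3]))
    week += STEP_INTERVAL

def arm_at(week: int, hospital: str) -> str:
    """Return 'intervention' if the hospital's cluster has crossed over by this week."""
    for step in steps:
        if hospital in step.hospitals:
            return "intervention" if week >= step.start_week else "control"
    return "control"

for step in steps:
    print(f"week {step.start_week:3d}: {', '.join(step.hospitals)} start intervention")
print(arm_at(60, "H05"))
```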
Discussion: The study builds upon a multicomponent intervention bundle that has previously demonstrated effectiveness. The elements of the intervention bundle are easily adaptable to other settings, facilitating future adoption in wider contexts. The study outputs are expected to have a positive impact as they will reduce usage of repetitive laboratory tests and provide empirically supported measures and tools for accomplishing this work.
Trial registration: This study was prospectively registered on April 8, 2024, via the ClinicalTrials.gov Protocols Registration and Results System (NCT06359587). https://classic.clinicaltrials.gov/ct2/show/NCT06359587?term=NCT06359587&recrs=ab&draw=2&rank=1
{"title":"Re-Purposing the Ordering of Routine Laboratory Tests in Hospitalized Medical Patients (RePORT): protocol for a multicenter stepped-wedge cluster randomised trial to evaluate the impact of a multicomponent intervention bundle to reduce laboratory test over-utilization.","authors":"Anshula Ambasta, Jayna M Holroyd-Leduc, Surakshya Pokharel, Pamela Mathura, Andrew Wei-Yeh Shih, Henry T Stelfox, Irene Ma, Mark Harrison, Braden Manns, Peter Faris, Tyler Williamson, Caley Shukalek, Maria Santana, Onyebuchi Omodon, Deirdre McCaughey, Narmin Kassam, Chris Naugler","doi":"10.1186/s13012-024-01376-6","DOIUrl":"10.1186/s13012-024-01376-6","url":null,"abstract":"<p><strong>Background: </strong>Laboratory test overuse in hospitals is a form of healthcare waste that also harms patients. Developing and evaluating interventions to reduce this form of healthcare waste is critical. We detail the protocol for our study which aims to implement and evaluate the impact of an evidence-based, multicomponent intervention bundle on repetitive use of routine laboratory testing in hospitalized medical patients across adult hospitals in the province of British Columbia, Canada.</p><p><strong>Methods: </strong>We have designed a stepped-wedge cluster randomized trial to assess the impact of a multicomponent intervention bundle across 16 hospitals in the province of British Columbia in Canada. We will use the Knowledge to Action cycle to guide implementation and the RE-AIM framework to guide evaluation of the intervention bundle. The primary outcome will be the number of routine laboratory tests ordered per patient-day in the intervention versus control periods. Secondary outcome measures will assess implementation fidelity, number of all common laboratory tests used, impact on healthcare costs, and safety outcomes. The study will include patients admitted to adult medical wards (internal medicine or family medicine) and healthcare providers working in these wards within the participating hospitals. After a baseline period of 24 weeks, we will conduct a 16-week pilot at one hospital site. A new cluster (containing approximately 2-3 hospitals) will receive the intervention every 12 weeks. We will evaluate the sustainability of implementation at 24 weeks post implementation of the final cluster. Using intention to treat, we will use generalized linear mixed models for analysis to evaluate the impact of the intervention on outcomes.</p><p><strong>Discussion: </strong>The study builds upon a multicomponent intervention bundle that has previously demonstrated effectiveness. The elements of the intervention bundle are easily adaptable to other settings, facilitating future adoption in wider contexts. The study outputs are expected to have a positive impact as they will reduce usage of repetitive laboratory tests and provide empirically supported measures and tools for accomplishing this work.</p><p><strong>Trial registration: </strong>This study was prospectively registered on April 8, 2024, via ClinicalTrials.gov Protocols Registration and Results System (NCT06359587). 
https://classic.</p><p><strong>Clinicaltrials: </strong>gov/ct2/show/NCT06359587?term=NCT06359587&recrs=ab&draw=2&rank=1.</p>","PeriodicalId":54995,"journal":{"name":"Implementation Science","volume":"19 1","pages":"45"},"PeriodicalIF":8.8,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11221016/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141494235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-27 | DOI: 10.1186/s13012-024-01370-y
{"title":"Proceedings of the 16<sup>th</sup> Annual Conference on the Science of Dissemination and Implementation in Health.","authors":"","doi":"10.1186/s13012-024-01370-y","DOIUrl":"10.1186/s13012-024-01370-y","url":null,"abstract":"","PeriodicalId":54995,"journal":{"name":"Implementation Science","volume":"19 Suppl 2","pages":"42"},"PeriodicalIF":8.8,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11209993/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141460823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-26 | DOI: 10.1186/s13012-024-01374-8
Maria Jose Mendieta, Geert Goderis, Andreas Zeller, Olivia Yip, Flaka Siqeca, Franziska Zúñiga, Leah L Zullig, Sabina M De Geest, Mieke Deschodt, Johan Flamaing, Suzanne Dhaini
Background: In Northwestern Switzerland, recent legislation addresses the needs of community-dwelling older adults by creating Information and Advice Centers (IACs). IACs are a new community-based service that aims to assess needs and provide information on age-related issues to community-dwelling older adults and their families. Previous studies reported difficulties in reaching community-dwelling older adults for community-based programs. We aimed to: 1) systematically identify implementation strategies to promote the IAC among community care providers, older adults, and informal caregivers; 2) monitor the delivery of these strategies by the IAC management; and 3) describe the impact of those strategies on the reach of the IAC among community-dwelling older adults. This study was conducted as part of the TRANS-SENIOR project.
Methods: As part of the INSPIRE feasibility assessment, we conducted a pre-test post-test study between March and September 2022. The sample included 8,840 older adults aged 65+ who visited, called, or were referred to the IAC for the first time. Implementation strategies were selected using implementation mapping and organized in bundles for each group of community care providers and for older adults/caregivers. Our evaluation included: estimation of fidelity to the delivery of implementation strategies and bundles by the IAC management, and their coverage; the referral source of older adults to the IAC; and the impact of the strategies on the reach of the IAC among the 65+ population living in the care region. Adaptations to the strategies were documented using the FRAME-IS. Descriptive statistics were calculated and reported.
Results: Seven implementation strategies were selected and organized in bundles for each group of community care providers and for older adults and their caregivers. The lowest fidelity score was found for the implementation strategies selected for nursing homes, whereas the highest score corresponded to strategies targeting older adults and caregivers. "Informational visits" was the strategy with the lowest coverage (2.5% for nursing homes and 10.5% for hospitals and specialized clinics). The main referral sources were self-referrals and referrals by caregivers, followed by nursing homes. The IAC reach among the 65+ population was 5.4%.
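A minimal sketch of how reach, fidelity, and coverage percentages of this kind are computed follows; only the 8,840 first-time contacts figure comes from the abstract, while the denominators and bundle counts below are assumptions made for illustration.

```python
# Minimal sketch: RE-AIM-style reach plus fidelity and coverage of a strategy
# bundle. Hypothetical numbers except where noted.
first_time_contacts = 8_840      # older adults 65+ reaching the IAC (from the abstract)
eligible_population = 163_000    # assumed size of the 65+ population in the care region

reach = 100 * first_time_contacts / eligible_population
print(f"Reach: {reach:.1f}%")    # ~5.4% with this assumed denominator

planned_activities = 20          # activities planned in one strategy bundle (assumed)
delivered_activities = 14        # activities actually delivered (assumed)
fidelity = 100 * delivered_activities / planned_activities
print(f"Fidelity to the bundle: {fidelity:.0f}%")

target_providers = 40            # e.g., nursing homes targeted for informational visits (assumed)
visited_providers = 1            # providers actually visited (assumed)
coverage = 100 * visited_providers / target_providers
print(f"Coverage: {coverage:.1f}%")
```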
Conclusion: We demonstrated the use of implementation mapping to select implementation strategies to reach community-dwelling older adults. The reach was low, suggesting that higher fidelity to the delivery of the strategies and reflection on the causal pathway of the implementation strategies might be needed.
{"title":"Mapping implementation strategies to reach community-dwelling older adults in Northwest Switzerland.","authors":"Maria Jose Mendieta, Geert Goderis, Andreas Zeller, Olivia Yip, Flaka Siqeca, Franziska Zúñiga, Leah L Zullig, Sabina M De Geest, Mieke Deschodt, Johan Flamaing, Suzanne Dhaini","doi":"10.1186/s13012-024-01374-8","DOIUrl":"10.1186/s13012-024-01374-8","url":null,"abstract":"<p><strong>Background: </strong>In Northwestern Switzerland, recent legislation tackles the needs of community-dwelling older adults by creating Information and Advice Centers (IACs). IACs are a new service in the community that aims to assess the needs and provide information on age-related issues to community-dwelling older adults and their families. Previous studies reported difficulties in reaching community-dwelling older adults for community-based programs. We aimed to: 1) systematically identify implementation strategies to promote the IAC among community care providers, older adults and informal caregivers; 2) monitor the delivery of these strategies by the IAC management; and 3) describe the impact of those strategies on reach of community-dwelling older adults. This study was conducted as part of the TRANS-SENIOR project.</p><p><strong>Methods: </strong>As part of the INSPIRE feasibility assessment, we conducted a pre-test post-test study between March and September 2022. The sample included 8,840 older adults aged 65 + visiting/calling or being referred to the IAC for the first time. Implementation strategies were selected using implementation mapping and organized in bundles for each group of community care providers and older adults/caregivers. Our evaluation included: estimation of fidelity to the delivery of implementation strategies and bundles by the IAC management and their coverage; referral source of older adults to the IAC; and impact of the strategies on reach of the IAC on the 65 + population living in the care region. Adaptations to the strategies were documented using the FRAME-IS. Descriptive statistics were calculated and reported.</p><p><strong>Results: </strong>Seven implementation strategies were selected and organized in bundles for each community care provider and older adults and their caregivers. The lowest fidelity score was found in implementation strategies selected for nursing homes whereas the highest score corresponded to strategies targeting older adults and caregivers. \"Informational visits\" was the strategy with the lowest coverage (2.5% for nursing homes and 10.5% for hospitals and specialized clinics). The main referral sources were self-referrals and referrals by caregivers, followed by nursing homes. The IAC reach among the 65 + population was 5.4%.</p><p><strong>Conclusion: </strong>We demonstrated the use of implementation mapping to select implementation strategies to reach community-dwelling older adults. 
The reach was low suggesting that higher fidelity to the delivery of the strategies, and reflection on the causal pathway of the implementation strategies might be needed.</p>","PeriodicalId":54995,"journal":{"name":"Implementation Science","volume":"19 1","pages":"44"},"PeriodicalIF":8.8,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11210125/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141460822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-24 | DOI: 10.1186/s13012-024-01369-5
Laura Ellen Ashcraft, David E Goodrich, Joachim Hero, Angela Phares, Rachel L Bachrach, Deirdre A Quinn, Nabeel Qureshi, Natalie C Ernecoff, Lisa G Lederer, Leslie Page Scheunemann, Shari S Rogal, Matthew J Chinman
Background: Studies of implementation strategies range in rigor, design, and evaluated outcomes, presenting interpretation challenges for practitioners and researchers. This systematic review aimed to describe the body of research evidence testing implementation strategies across diverse settings and domains, using the Expert Recommendations for Implementing Change (ERIC) taxonomy to classify strategies and the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework to classify outcomes.
Methods: We conducted a systematic review of studies examining implementation strategies from 2010-2022, registered with PROSPERO (CRD42021235592). We searched databases using the terms "implementation strategy", "intervention", "bundle", "support", and their variants. We also solicited study recommendations from implementation science experts and mined existing systematic reviews. We included studies that quantitatively assessed the impact of at least one implementation strategy to improve health or health care using an outcome that could be mapped to the five evaluation dimensions of RE-AIM. Only studies meeting prespecified methodologic standards were included. We described the characteristics of studies and the frequency of implementation strategy use across study arms. We also examined common strategy pairings and their co-occurrence with significant outcomes.
Findings: Our search resulted in 16,605 studies; 129 met inclusion criteria. Studies tested an average of 6.73 strategies (0-20 range). The most assessed outcomes were Effectiveness (n=82; 64%) and Implementation (n=73; 56%). The implementation strategies most frequently occurring in the experimental arm were Distribute Educational Materials (n=99), Conduct Educational Meetings (n=96), Audit and Provide Feedback (n=76), and External Facilitation (n=59). These strategies were often used in combination. Nineteen implementation strategies were frequently tested and associated with significantly improved outcomes. However, many strategies were not tested sufficiently to draw conclusions.
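A minimal sketch of the kind of strategy-frequency and pairing count described above follows; the study-arm records are fabricated for illustration and are not the review's extraction data.

```python
# Minimal sketch (not the review's code): counting how often ERIC strategies
# occur and co-occur within the same study arm. Records below are fabricated.
from collections import Counter
from itertools import combinations

study_arms = [
    ["Distribute Educational Materials", "Conduct Educational Meetings", "Audit and Provide Feedback"],
    ["Conduct Educational Meetings", "External Facilitation"],
    ["Distribute Educational Materials", "Conduct Educational Meetings"],
    ["Audit and Provide Feedback", "External Facilitation", "Conduct Educational Meetings"],
]

strategy_counts = Counter(s for arm in study_arms for s in arm)
pair_counts = Counter(
    pair for arm in study_arms for pair in combinations(sorted(set(arm)), 2)
)

print("Most frequent strategies:", strategy_counts.most_common(2))
print("Most frequent pairings:", pair_counts.most_common(2))
```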
Conclusion: This review of 129 methodologically rigorous studies built upon prior implementation science data syntheses to identify implementation strategies that had been experimentally tested and summarized their impact on outcomes across diverse outcomes and clinical settings. We present recommendations for improving future similar efforts.
{"title":"A systematic review of experimentally tested implementation strategies across health and human service settings: evidence from 2010-2022.","authors":"Laura Ellen Ashcraft, David E Goodrich, Joachim Hero, Angela Phares, Rachel L Bachrach, Deirdre A Quinn, Nabeel Qureshi, Natalie C Ernecoff, Lisa G Lederer, Leslie Page Scheunemann, Shari S Rogal, Matthew J Chinman","doi":"10.1186/s13012-024-01369-5","DOIUrl":"10.1186/s13012-024-01369-5","url":null,"abstract":"<p><strong>Background: </strong>Studies of implementation strategies range in rigor, design, and evaluated outcomes, presenting interpretation challenges for practitioners and researchers. This systematic review aimed to describe the body of research evidence testing implementation strategies across diverse settings and domains, using the Expert Recommendations for Implementing Change (ERIC) taxonomy to classify strategies and the Reach Effectiveness Adoption Implementation and Maintenance (RE-AIM) framework to classify outcomes.</p><p><strong>Methods: </strong>We conducted a systematic review of studies examining implementation strategies from 2010-2022 and registered with PROSPERO (CRD42021235592). We searched databases using terms \"implementation strategy\", \"intervention\", \"bundle\", \"support\", and their variants. We also solicited study recommendations from implementation science experts and mined existing systematic reviews. We included studies that quantitatively assessed the impact of at least one implementation strategy to improve health or health care using an outcome that could be mapped to the five evaluation dimensions of RE-AIM. Only studies meeting prespecified methodologic standards were included. We described the characteristics of studies and frequency of implementation strategy use across study arms. We also examined common strategy pairings and cooccurrence with significant outcomes.</p><p><strong>Findings: </strong>Our search resulted in 16,605 studies; 129 met inclusion criteria. Studies tested an average of 6.73 strategies (0-20 range). The most assessed outcomes were Effectiveness (n=82; 64%) and Implementation (n=73; 56%). The implementation strategies most frequently occurring in the experimental arm were Distribute Educational Materials (n=99), Conduct Educational Meetings (n=96), Audit and Provide Feedback (n=76), and External Facilitation (n=59). These strategies were often used in combination. Nineteen implementation strategies were frequently tested and associated with significantly improved outcomes. However, many strategies were not tested sufficiently to draw conclusions.</p><p><strong>Conclusion: </strong>This review of 129 methodologically rigorous studies built upon prior implementation science data syntheses to identify implementation strategies that had been experimentally tested and summarized their impact on outcomes across diverse outcomes and clinical settings. 
We present recommendations for improving future similar efforts.</p>","PeriodicalId":54995,"journal":{"name":"Implementation Science","volume":"19 1","pages":"43"},"PeriodicalIF":8.8,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11194895/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141447615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-20 | DOI: 10.1186/s13012-024-01372-w
Adam Shoesmith, Nicole Nathan, Melanie Lum, Serene Yoong, Erin Nolan, Luke Wolfenden, Rachel C Shelton, Brittany Cooper, Cassandra Lane, Alice Grady, Noor Imad, Edward Riley-Gibson, Nicole McCarthy, Nicole Pearson, Alix Hall
Background: There is a need for valid and reliable measures of determinants of sustainability of public health interventions in early childhood education and care (ECEC) settings. This study aimed to develop and evaluate the psychometric and pragmatic properties of such a measure - the Integrated Measure of PRogram Element SuStainability in Childcare Settings (IMPRESS-C).
Methods: We undertook a two-phase process guided by the COnsensus-based Standards for the selection of health status Measurement INstruments checklist (COSMIN) and Psychometric and Pragmatic Evidence Rating Scale (PAPERS). Phase 1 involved measure development; i.e., determining items and scales through an iterative process and assessment of face and content validity. Phase 2 involved the evaluation of psychometric and pragmatic properties. The 29-item measure completed by service executives (directors and nominated supervisors) was embedded in a larger survey from a national sample of Australian ECEC services assessing their implementation of nutrition and physical activity programs. Structural validity, concurrent validity, known groups validity, internal consistency, floor and ceiling effects, norms, and pragmatic qualities of the measure were assessed according to the PAPERS criteria.
Results: The final measure contained 26 items, with respondents reporting how strongly they agreed or disagreed on a five-point Likert scale. Phase 1 assessments confirmed the relevance, and face and content validity of the scale. In Phase 2, we obtained 482 completed surveys, of which 84% (n = 405) completed the entire measure across 405 ECEC settings (one executive per service). Three of the four fit indices for the confirmatory factor analysis met the pre-specified criteria (SRMR = 0.056, CFI = 0.993, RMSEA = 0.067) indicating 'good' structural validity. The IMPRESS-C illustrated: 'good' internal consistency, with Cronbach's alpha values from 0.53 to 0.92; 'emerging' concurrent validity; 'poor' known groups validity; 'good' norms; and 'good' overall pragmatic qualities (cost, readability, length, and assessor burden).
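A minimal sketch of how Cronbach's alpha is computed from respondent-by-item data follows; the simulated Likert responses and scale size are hypothetical, and only the formula itself is standard.

```python
# Minimal sketch (not the validation study's code): Cronbach's alpha for one
# subscale, computed from rows = respondents, columns = items.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Standard formula: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-item Likert subscale answered by 405 respondents, driven by
# one shared respondent-level factor plus noise.
rng = np.random.default_rng(42)
latent = rng.normal(size=(405, 1))
responses = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(405, 5))), 1, 5)

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```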
Conclusions: The IMPRESS-C possesses strong psychometric and pragmatic qualities for assessing service executive-level perceptions of determinants influencing sustainment of public health interventions within ECEC settings. To achieve a full range of perspectives in this setting, future work should be directed to also develop and test measures of sustainability determinants at the implementer level (e.g., among individual educators and staff).
{"title":"Integrated Measure of PRogram Element SuStainability in Childcare Settings (IMPRESS-C): development and psychometric evaluation of a measure of sustainability determinants in the early childhood education and care setting.","authors":"Adam Shoesmith, Nicole Nathan, Melanie Lum, Serene Yoong, Erin Nolan, Luke Wolfenden, Rachel C Shelton, Brittany Cooper, Cassandra Lane, Alice Grady, Noor Imad, Edward Riley-Gibson, Nicole McCarthy, Nicole Pearson, Alix Hall","doi":"10.1186/s13012-024-01372-w","DOIUrl":"10.1186/s13012-024-01372-w","url":null,"abstract":"<p><strong>Background: </strong>There is a need for valid and reliable measures of determinants of sustainability of public health interventions in early childhood education and care (ECEC) settings. This study aimed to develop and evaluate the psychometric and pragmatic properties of such a measure - the Integrated Measure of PRogram Element SuStainability in Childcare Settings (IMPRESS-C).</p><p><strong>Methods: </strong>We undertook a two-phase process guided by the COnsensus-based Standards for the selection of health status Measurement INstruments checklist (COSMIN) and Psychometric and Pragmatic Evidence Rating Scale (PAPERS). Phase 1 involved measure development; i.e., determining items and scales through an iterative process and assessment of face and content validity. Phase 2 involved the evaluation of psychometric and pragmatic properties. The 29-item measure completed by service executives (directors and nominated supervisors) was embedded in a larger survey from a national sample of Australian ECEC services assessing their implementation of nutrition and physical activity programs. Structural validity, concurrent validity, known groups validity, internal consistency, floor and ceiling effects, norms, and pragmatic qualities of the measure were assessed according to the PAPERS criteria.</p><p><strong>Results: </strong>The final measure contained 26 items, with respondents reporting how strongly they agreed or disagreed on a five-point Likert scale. Phase 1 assessments confirmed the relevance, and face and content validity of the scale. In Phase 2, we obtained 482 completed surveys, of which 84% (n = 405) completed the entire measure across 405 ECEC settings (one executive per service). Three of the four fit indices for the confirmatory factor analysis met the pre-specified criteria (SRMR = 0.056, CFI = 0.993, RMSEA = 0.067) indicating 'good' structural validity. The IMPRESS-C illustrated: 'good' internal consistency, with Cronbach's alpha values from 0.53 to 0.92; 'emerging' concurrent validity; 'poor' known groups validity; 'good' norms; and 'good' overall pragmatic qualities (cost, readability, length, and assessor burden).</p><p><strong>Conclusions: </strong>The IMPRESS-C possesses strong psychometric and pragmatic qualities for assessing service executive-level perceptions of determinants influencing sustainment of public health interventions within ECEC settings. 
To achieve a full range of perspectives in this setting, future work should be directed to also develop and test measures of sustainability determinants at the implementer level (e.g., among individual educators and staff).</p>","PeriodicalId":54995,"journal":{"name":"Implementation Science","volume":"19 1","pages":"41"},"PeriodicalIF":8.8,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11188265/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141433425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}