Introduction: Declining participant engagement threatens human subjects research. Participant feedback systems (PFS) may combat this decline by empowering participants to evaluate their research experiences and share that feedback with researchers to identify targets for improvement. PFS signal that participant experiences are prioritized, making the request for feedback itself an intervention. PFS design work remains largely confined to clinical research. This exploratory study investigates the design parameters of extending PFS to nonclinical research. We conducted focus groups with nonclinical stakeholders: experienced research participants (ERP) and research team members (RTM).
Methods: ERP focus groups were organized by affinity (LGBTQIA+, BIPOC, persons with disabilities, neurodivergent, and a general group). RTM focus groups were organized by unit within the University of Michigan. Transcripts were analyzed using inductive thematic analysis.
Results: Ten focus groups (ERP: 5, n = 25; RTM: 5, n = 26) identified key PFS design considerations: (1) motivations for feedback, (2) feedback collection, and (3) feedback delivery. ERP and RTM collectively preferred anonymous web-based surveys with six potential topic areas: communication, respect, being valued, receiving value, burden, and safety. Feedback delivery faced two key design tensions: balancing institutional standardization with study-specific insights and aligning leadership's preference for high-level summaries with frontline staff's need for detailed, real-time feedback.
Conclusion: Expanding PFS to nonclinical research requires balancing centralization and study-specific flexibility. While centralization enhances consistency, the diversity of nonclinical studies necessitates adaptable implementation. A hybrid model is proposed to optimize feasibility. Future research should refine and test this model.
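To make the proposed hybrid concrete, a modular feedback instrument could pair a centralized core item bank (covering the six topic areas named above) with optional study-specific modules. The sketch below is a minimal illustration of that idea only; the item wording, module names, and structure are hypothetical and are not the paper's instrument.

```python
# Hypothetical sketch of a "hybrid" participant feedback survey: standardized
# core items (the six topic areas from the abstract) plus study-specific add-ons.
# All item wording and module names are illustrative, not the study's instrument.
CORE_TOPICS = ["communication", "respect", "being valued",
               "receiving value", "burden", "safety"]

CORE_ITEMS = {t: f"How would you rate this study on {t}? (1-5)" for t in CORE_TOPICS}

def build_survey(study_specific_items=None):
    """Merge the centralized core items with any study-specific additions."""
    survey = dict(CORE_ITEMS)
    survey.update(study_specific_items or {})
    return survey

# Example: a focus-group study adds one tailored question of its own.
survey = build_survey({"moderator": "Did the moderator make it easy to share your views? (1-5)"})
for key, prompt in survey.items():
    print(f"[{key}] {prompt}")
```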
{"title":"Extending participant feedback beyond clinical studies: A modular system designed to connect researchers and participants.","authors":"Alicia Giordimaina Carmichael, Donna Walter, Brandon Patric Labbree, Boluwatife Dogari, Natalie Leonard, Kathryn Ward, Xiaoya Geng, Medha Raju, Jess Francis-Levin, Richard Gonzalez","doi":"10.1017/cts.2025.10184","DOIUrl":"10.1017/cts.2025.10184","url":null,"abstract":"<p><strong>Introduction: </strong>Declining participant engagement threatens human subjects research. Participant feedback systems (PFS) may combat this decline by empowering participants to evaluate their research experiences and share that feedback with researchers to identify targets for improvement. PFS signal that participant experiences are prioritized, making the request for feedback itself an intervention. PFS design work remains largely confined to clinical research. This exploratory study investigates the design parameters of extending PFS to nonclinical research. We conducted focus groups with nonclinical stakeholders: Experienced research participants (ERP) and research team members (RTM).</p><p><strong>Methods: </strong>ERP focus groups were organized by affinity (LGBTQIA+, BIPOC, persons with disabilities, neurodivergent, and a general group). RTM focus groups were organized by unit within the University of Michigan. Transcripts were analyzed using inductive thematic analysis.</p><p><strong>Results: </strong>Ten focus groups (ERP: 5, <i>n</i> = 25; RTM: 5, <i>n</i> = 26) identified key PFS design considerations: (1) motivations for feedback, (2) feedback collection, and (3) feedback delivery. ERP and RTM collectively preferred anonymous web-based surveys with six potential topic areas: communication, respect, being valued, receiving value, burden, and safety. Feedback delivery faced two key design tensions: balancing institutional standardization with study-specific insights and aligning leadership's preference for high-level summaries with frontline staff's need for detailed, real-time feedback.</p><p><strong>Conclusion: </strong>Expanding PFS to nonclinical research requires balancing centralization and study-specific flexibility. While centralization enhances consistency, the diversity of nonclinical studies necessitates adaptable implementation. A hybrid model is proposed to optimize feasibility. Future research should refine and test this model.</p>","PeriodicalId":15529,"journal":{"name":"Journal of Clinical and Translational Science","volume":"9 1","pages":"e258"},"PeriodicalIF":2.0,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12766513/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145911784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-29 | eCollection Date: 2025-01-01 | DOI: 10.1017/cts.2025.10191
L Gayani Tillekeratne, Nicholas O'Grady, Maria D Iglesias-Ussel, Jack Anderson, Alana Brown, Armstrong Obale, Christina Nix, Champica K Bodinayake, Ajith Nagahawatte, Robert Rolfe, E Wilbur Woodhouse, Gaya B Wijayaratne, Senali Weerasinghe, U H B Y Dilshan, Jayani Gamage, Ruvini Kurukulasooriya, Madureka Premamali, Himali S Jayasinghearachchi, Bradly P Nicholson, Emily R Ko, Ephraim L Tsalik, Micah T McClain, Rachel A Myers, Christopher W Woods, Thomas W Burke
Introduction: Distinguishing viral versus bacterial lower respiratory tract infection (LRTI) is challenging. We previously developed a rapid, host response-based test (Biomeme HR-B/V assay) using peripheral blood samples to identify viral versus bacterial infection. We assessed the performance of this assay when using nasopharyngeal (NP) samples.
Methods: Patients with LRTI were enrolled, and an NP swab sample was run using the HR-B/V assay (assessing 24 gene targets) on the Franklin™ platform. The performance of the prior classifier at identifying viral versus bacterial infection was assessed. A novel predictive model was generated for NP samples using the same 24 targets. Results were validated using external datasets with nasal/NP RNA sequence data.
Results: Nineteen patients (median age 62 years, 52.1% male) were included. When using the prior HR-B/V classifier on NP samples of 19 patients with LRTI (12 viral, 7 bacterial), the area under the receiver operator curve (AUC) for viral versus bacterial infection was 0.786 (0.524-1), with accuracy 0.79 (95% CI 0.57-0.91), positive percent agreement (PPA) 0.43 (95% CI 0.16-0.75), and negative percent agreement (NPA) 1.00 (95% CI 0.76-1). The novel model had AUC 0.881 (95% CI 0.726-1), accuracy 0.84 (95% CI 0.62-0.94), PPA 0.86 (95% CI 0.49-0.97), and NPA 0.83 (95% CI 0.55-0.95) for bacterial infection. Validation in two external datasets showed AUC of 0.932 (95% CI 0.90-0.96) and 0.915 (95% CI 0.88-0.95).
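The agreement metrics reported above (accuracy, positive percent agreement, negative percent agreement) and the AUC can all be computed from predicted versus reference labels. The following is a minimal sketch with invented labels, not the study data; "positive" is taken to mean bacterial infection, mirroring the PPA/NPA framing in the abstract.

```python
# Illustrative only: hypothetical labels and scores, not the patient-level data.
from sklearn.metrics import roc_auc_score

def percent_agreement(y_true, y_pred):
    """Return accuracy, positive percent agreement (PPA, sensitivity against the
    reference standard), and negative percent agreement (NPA, specificity)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    ppa = tp / (tp + fn) if (tp + fn) else float("nan")
    npa = tn / (tn + fp) if (tn + fp) else float("nan")
    return accuracy, ppa, npa

# Hypothetical example: 1 = bacterial, 0 = viral; scores are model probabilities.
y_true  = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
y_score = [0.9, 0.7, 0.4, 0.2, 0.1, 0.3, 0.6, 0.8, 0.2, 0.5]
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]

auc = roc_auc_score(y_true, y_score)
acc, ppa, npa = percent_agreement(y_true, y_pred)
print(f"AUC={auc:.3f}  accuracy={acc:.2f}  PPA={ppa:.2f}  NPA={npa:.2f}")
```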
Conclusions: We show that host response in the nasopharynx can distinguish viral versus bacterial LRTI. These findings need to be replicated in larger cohorts with diverse LRTI etiologies.
{"title":"Host gene expression in the Nasopharynx can discriminate microbiologically confirmed viral and bacterial lower respiratory tract infection.","authors":"L Gayani Tillekeratne, Nicholas O'Grady, Maria D Iglesias-Ussel, Jack Anderson, Alana Brown, Armstrong Obale, Christina Nix, Champica K Bodinayake, Ajith Nagahawatte, Robert Rolfe, E Wilbur Woodhouse, Gaya B Wijayaratne, Senali Weerasinghe, U H B Y Dilshan, Jayani Gamage, Ruvini Kurukulasooriya, Madureka Premamali, Himali S Jayasinghearachchi, Bradly P Nicholson, Emily R Ko, Ephraim L Tsalik, Micah T McClain, Rachel A Myers, Christopher W Woods, Thomas W Burke","doi":"10.1017/cts.2025.10191","DOIUrl":"10.1017/cts.2025.10191","url":null,"abstract":"<p><strong>Introduction: </strong>Distinguishing viral versus bacterial lower respiratory tract infection (LRTI) is challenging. We previously developed a rapid, host response-based test (Biomeme HR-B/V assay) using peripheral blood samples to identify viral versus bacterial infection. We assessed the performance of this assay when using nasopharyngeal (NP) samples.</p><p><strong>Methods: </strong>Patients with LRTI were enrolled, and a NP swab sample was run using the HR-B/V assay (assessing 24 gene targets) on the Franklin<sup>TM</sup> platform. The performance of the prior classifier at identifying viral versus bacterial infection was assessed. A novel predictive model was generated for NP samples using the same 24 targets. Results were validated using external datasets with nasal/NP RNA sequence data.</p><p><strong>Results: </strong>Nineteen patients (median age 62 years, 52.1% male) were included. When using the prior HR-B/V classifier on NP samples of 19 patients with LRTI (12 viral, 7 bacterial), the area under the receiver operator curve (AUC) for viral versus bacterial infection was 0.786 (0.524-1), with accuracy 0.79 (95% CI 0.57-0.91), positive percent agreement (PPA) 0.43 (95% CI 0.16-0.75), and negative percent agreement (NPA) 1.00 (95% CI 0.76-1). The novel model had AUC 0.881 (95% CI 0.726-1), accuracy 0.84 (95% CI 0.62-0.94), PPA 0.86 (95% CI 0.49-0.97), and NPA 0.83 (95% CI 0.55-0.95) for bacterial infection. Validation in two external datasets showed AUC of 0.932 (95% CI 0.90-0.96) and 0.915 (95% CI 0.88-0.95).</p><p><strong>Conclusions: </strong>We show that host response in the nasopharynx can distinguish viral versus bacterial LRTI. These findings need to be replicated in larger cohorts with diverse LRTI etiologies.</p>","PeriodicalId":15529,"journal":{"name":"Journal of Clinical and Translational Science","volume":"9 1","pages":"e257"},"PeriodicalIF":2.0,"publicationDate":"2025-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12766521/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145911815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-29 | eCollection Date: 2025-01-01 | DOI: 10.1017/cts.2025.10190
Brian Do-Golden, Nicole Wolfe, Nicole M G Maccalla, James Settles, Michele D Kipke
Introduction: Community engagement (CE) is essential in Clinical and Translational Science (CTS), yet its evaluation remains inconsistent and often lacks standardization. The RE-AIM framework (Reach, Effectiveness, Adoption, Implementation, Maintenance) offers a promising structure for evaluating CE efforts, but its application in dynamic, community-based contexts is often limited by data variability and implementation complexity.
Methods: We developed and applied a seven-step, structured, and replicable approach to operationalizing RE-AIM for program evaluation. This method includes the use of tailored RE-AIM subdomains, standardized scoring systems, and visual analytics through Net Effects Diagrams.
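As one illustration of what the standardized scoring step might look like in practice, survey items can be rescaled to a common range and averaged within each RE-AIM dimension. The sketch below is hypothetical; the subdomain names, item scales, and items are illustrative and do not reproduce the paper's instrument.

```python
# Hypothetical sketch of standardized RE-AIM scoring; subdomain names, item
# scales, and items are illustrative, not the study's actual instrument.
RAW_SCALES = {"attendance_rate": (0, 1), "satisfaction": (1, 5), "knowledge_gain": (0, 10)}

SUBDOMAIN_ITEMS = {
    "Reach": ["attendance_rate"],
    "Effectiveness": ["satisfaction", "knowledge_gain"],
}

def rescale(value, lo, hi):
    """Map a raw item value onto a 0-100 scale."""
    return 100 * (value - lo) / (hi - lo)

def score_subdomains(responses):
    """Average rescaled items within each RE-AIM subdomain."""
    scores = {}
    for subdomain, items in SUBDOMAIN_ITEMS.items():
        vals = [rescale(responses[i], *RAW_SCALES[i]) for i in items if i in responses]
        scores[subdomain] = sum(vals) / len(vals) if vals else None
    return scores

# Example: one workshop's aggregated survey data (hypothetical numbers).
workshop = {"attendance_rate": 0.82, "satisfaction": 4.4, "knowledge_gain": 7.0}
print(score_subdomains(workshop))  # e.g. {'Reach': 82.0, 'Effectiveness': 77.5}
```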
Results: We applied this framework to our community-based health education workshops delivered in English and Spanish across Los Angeles, using participant surveys and facilitator feedback data. The operationalized framework enabled consistent assessment and comparison between language groups. Spanish-language workshops outperformed English-language workshops (ELWs) in measures of attendance, participant satisfaction, and short-term effectiveness. Visualizations using Net Effects Diagrams facilitated collaboration among stakeholders to interpret program outputs and outcomes, supporting actionable insights for program adaptation. Differences between workshop groups will inform changes to recruitment and content delivery strategies in ELWs.
Conclusions: This approach offers a transparent, scalable, and context-sensitive method for assessing CE programs. It supports data-driven decision-making, continuous program improvement, and stakeholder engagement. While developed for CE initiatives, the method is broadly adaptable to other community and public health programs. Future efforts will include expanded outcome tracking, integration into dashboards, and dissemination as a toolkit for broader adoption within and beyond the CTS Award network.
{"title":"Operationalizing community engagement evaluation: A structured and scalable approach using the RE-AIM framework and net effects diagrams.","authors":"Brian Do-Golden, Nicole Wolfe, Nicole M G Maccalla, James Settles, Michele D Kipke","doi":"10.1017/cts.2025.10190","DOIUrl":"10.1017/cts.2025.10190","url":null,"abstract":"<p><strong>Introduction: </strong>Community engagement (CE) is essential in Clinical and Translational Science (CTS), yet its evaluation remains inconsistent and often lacks standardization. The RE-AIM framework (Reach, Effectiveness, Adoption, Implementation, Maintenance) offers a promising structure for evaluating CE efforts, but its application in dynamic, community-based contexts is often limited by data variability and implementation complexity.</p><p><strong>Methods: </strong>We developed and applied a seven-step, structured, and replicable approach to operationalizing RE-AIM for program evaluation. This method includes the use of tailored RE-AIM subdomains, standardized scoring systems, and visual analytics through Net Effects Diagrams.</p><p><strong>Results: </strong>We applied this framework to our community-based health education workshops delivered in English and Spanish across Los Angeles, using participant surveys and facilitator feedback data. The operationalized framework enabled consistent assessment and comparison between language groups. Spanish-language workshops outperformed English-language workshops (ELWs) in measures of attendance, participant satisfaction, and short-term effectiveness. Visualizations using Net Effects Diagrams facilitated collaboration among stakeholders to interpret program outputs and outcomes, supporting actionable insights for program adaptation. Differences between workshop groups will inform changes to recruitment and content delivery strategies in ELWs.</p><p><strong>Conclusions: </strong>This approach offers a transparent, scalable, and context-sensitive method for assessing CE programs. It supports data-driven decision-making, continuous program improvement, and stakeholder engagement. While developed for CE initiatives, the method is broadly adaptable to other community and public health programs. Future efforts will include expanded outcome tracking, integration into dashboards, and dissemination as a toolkit for broader adoption within and beyond the CTS Award network.</p>","PeriodicalId":15529,"journal":{"name":"Journal of Clinical and Translational Science","volume":"9 1","pages":"e255"},"PeriodicalIF":2.0,"publicationDate":"2025-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12766515/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145911780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-29 | eCollection Date: 2025-01-01 | DOI: 10.1017/cts.2025.10188
Kimberly McGhee, Matthew Greseth, Tammy Loucks, Paula Traktman
In January 2023, the South Carolina Science Writing Initiative for Trainees (SC-SWIFT), an internship in the College of Graduate Studies at the Medical University of South Carolina, began offering tiered digital badges in science communications. The badges' purpose was to encourage graduate students and postdoctoral fellows to engage in extracurricular science writing opportunities available through SC-SWIFT and to document acquired communications skills for employers. The badges have been well received, with 18 interns earning the beginner badge in the first two years of the program. In March 2025, SC-SWIFT queried 25 interns who had earned a beginner badge or completed half the requirements for doing so in 2023-2024 to gauge how important they considered the badges to their engagement in science communications and how valuable they would be in a job search. All 14 respondents found the badges important in engaging them in science communications, and 86% either strongly agreed or agreed that digital badges would be an asset when job searching. Eleven of 12 respondents (92%) thought that their confidence in telling their own research story had increased. These initial results suggest that digital badges could be useful tools for documenting science communications skills acquired during extracurricular, experiential learning.
{"title":"Badged up for success: Digital badges enable graduate students to become confident communicators via real-world opportunities and to document their skills for employers.","authors":"Kimberly McGhee, Matthew Greseth, Tammy Loucks, Paula Traktman","doi":"10.1017/cts.2025.10188","DOIUrl":"10.1017/cts.2025.10188","url":null,"abstract":"<p><p>In January 2023, the South Carolina Science Writing Initiative for Trainees (SC-SWIFT), an internship in the College of Graduate Studies at the Medical University of South Carolina, began offering tiered digital badges in science communications. The badges' purpose was to encourage graduate students and postdoctoral fellows to engage in extracurricular science writing opportunities available through SC-SWIFT and to document acquired communications skills for employers. The badges have been well received, with 18 interns earning the beginner badge in the first two years of the program. In March 2025, SC-SWIFT queried 25 interns who had earned a beginner badge or completed half the requirements for doing so in 2023-2024 to gauge how important they considered the badges to their engagement in science communications and how valuable they would be in a job search. All 14 respondents found the badges important in engaging them in science communications, and 86% either strongly agreed or agreed that digital badges would be an asset when job searching. Eleven of 12 respondents (92%) thought that their confidence in telling their own research story had increased. These initial results suggest that digital badges could be useful tools for documenting science communications skills acquired during extracurricular, experiential learning.</p>","PeriodicalId":15529,"journal":{"name":"Journal of Clinical and Translational Science","volume":"9 1","pages":"e254"},"PeriodicalIF":2.0,"publicationDate":"2025-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12766514/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145911823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-28 | eCollection Date: 2025-01-01 | DOI: 10.1017/cts.2025.10179
Nynikka R Palmer, Michael B Potter, Saji Mansur, Cecilia Hurtado, Maria Carbajal, Gary Bossier, Maria Echaveste, Paula Fleisher, Carlos Guerra-Sanchez, Stutee Khandelwal, Gena Lewis, Lali Moheno, Tung Nguyen, David Ofman, Kerrington Osborne, James D Harrison
Background: Community health centers (CHCs) and those most burdened by disease are important partners in setting research agendas to address the needs of people who are medically underserved.
Objectives: Identify and prioritize health equity-focused research priorities using a collaborative approach to community engagement of key informants.
Methods: We used five stepwise phases from January 2021 to February 2023 to formulate and prioritize a set of health equity-focused research topics among CHC staff (leaders, clinicians), their key advisors (patients and community members), and researchers from academic medical centers in California. Phases included: (1) community advisory board formation, (2) key informant identification, (3) individual/small group interview guide development and administration, (4) initial health equity-focused topic categorization, and (5) in-person meeting with community advisors for final topic prioritization using nominal group technique.
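The final phase relies on nominal group technique, in which each advisor independently ranks a short list of preferred topics and the ranks are tallied into a group priority order. The sketch below illustrates that tallying step only; the ballots, topic labels, and point scheme are hypothetical and are not the study's data.

```python
# Hypothetical sketch of the vote-tallying step in a nominal group technique
# session: each advisor ranks their top 3 topics; higher ranks earn more points.
from collections import Counter

def tally_ngt_votes(ballots, points=(3, 2, 1)):
    """ballots: list of ordered topic lists (most- to least-preferred).
    Returns topics sorted by total points, descending."""
    totals = Counter()
    for ballot in ballots:
        for rank, topic in enumerate(ballot[: len(points)]):
            totals[topic] += points[rank]
    return totals.most_common()

# Illustrative ballots (topic labels are invented, not the study's final list).
ballots = [
    ["mental health", "trust in science", "access models"],
    ["misinformation", "mental health", "trust in science"],
    ["mental health", "access models", "misinformation"],
]
print(tally_ngt_votes(ballots))
```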
Results: Twenty individual or small group interviews were completed with 44 diverse participants, along with engagement from our community advisory board, which resulted in an initial list of 11 health equity-focused research topics. Ninety advisors, including diverse community members, CHC staff/leaders, and researchers, prioritized six overarching research topics. The final prioritized health equity-focused research topics were addressing mental health challenges, improving the public's trust in healthcare and science, developing healthcare delivery models that increase access and utilization, building and sustaining an anti-racist healthcare system, developing strategies and interventions to address health misinformation, and continuing and sustaining policies based on lessons learned from COVID-19.
Conclusions: Results offer future direction for community-engaged research agendas to advance health equity among medically underserved and vulnerable patient populations.
{"title":"Engaging community health center advisors to identify research priorities for health equity.","authors":"Nynikka R Palmer, Michael B Potter, Saji Mansur, Cecilia Hurtado, Maria Carbajal, Gary Bossier, Maria Echaveste, Paula Fleisher, Carlos Guerra-Sanchez, Stutee Khandelwal, Gena Lewis, Lali Moheno, Tung Nguyen, David Ofman, Kerrington Osborne, James D Harrison","doi":"10.1017/cts.2025.10179","DOIUrl":"10.1017/cts.2025.10179","url":null,"abstract":"<p><strong>Background: </strong>Community health centers (CHCs) and those most burdened by disease are important partners in setting research agendas to address the needs of people who are medically underserved.</p><p><strong>Objectives: </strong>Identify and prioritize health equity-focused research priorities using a collaborative approach to community engagement of key informants.</p><p><strong>Methods: </strong>We used five stepwise phases from January 2021 to February 2023 to formulate and prioritize a set of health equity-focused research topics among CHC staff (leaders, clinicians), their key advisors (patients and community members), and researchers from academic medical centers in California. Phases included: (1) community advisory board formation, (2) key informant identification, (3) individual/small group interview guide development and administration, (4) initial health equity-focused topic categorization, and (5) in-person meeting with community advisors for final topic prioritization using nominal group technique.</p><p><strong>Results: </strong>Twenty individual or small group interviews were completed with 44 diverse participants, along with engagement from our community advisory board, which resulted in an initial list of 11 health equity-focused research topics. Ninety advisors including diverse community members, CHC staff/leaders, and researchers prioritized six overarching research topics. Final prioritized health-equity focused research topics include addressing mental health challenges, improving public's trust in healthcare and science, healthcare delivery models to increase access and utilization, build and sustain an anti-racist healthcare system, strategies and interventions to address health misinformation, and continuing and sustaining polices based on lessons learned from COVID-19.</p><p><strong>Conclusions: </strong>Results offer future direction for community-engaged research agendas to advance health equity among medically underserved and vulnerable patient populations.</p>","PeriodicalId":15529,"journal":{"name":"Journal of Clinical and Translational Science","volume":"9 1","pages":"e253"},"PeriodicalIF":2.0,"publicationDate":"2025-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12766519/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145911870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-28 | eCollection Date: 2025-01-01 | DOI: 10.1017/cts.2025.10183
Kostiantyn Botnar, Justin T Nguyen, Madison G Farnswort, George Golovko, Kamil Khanipov
[This corrects the article DOI: 10.1017/cts.2025.55.].
{"title":"Erratum: EHRchitect: An open-source software tool for medical event sequences data extraction from Electronic Health Records - CORRIGENDUM.","authors":"Kostiantyn Botnar, Justin T Nguyen, Madison G Farnswort, George Golovko, Kamil Khanipov","doi":"10.1017/cts.2025.10183","DOIUrl":"10.1017/cts.2025.10183","url":null,"abstract":"<p><p>[This corrects the article DOI: 10.1017/cts.2025.55.].</p>","PeriodicalId":15529,"journal":{"name":"Journal of Clinical and Translational Science","volume":"9 1","pages":"e242"},"PeriodicalIF":2.0,"publicationDate":"2025-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12695488/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145756198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-28 | eCollection Date: 2025-01-01 | DOI: 10.1017/cts.2025.10168
Christine Pfund, Christine Sorkness, David Asai, Marcus Lambert, Emma Anne Meagher, Audrey J Murrell, Nancy Schwartz, Joel Tsevat
{"title":"Advancing the science and practice of effective mentorship.","authors":"Christine Pfund, Christine Sorkness, David Asai, Marcus Lambert, Emma Anne Meagher, Audrey J Murrell, Nancy Schwartz, Joel Tsevat","doi":"10.1017/cts.2025.10168","DOIUrl":"10.1017/cts.2025.10168","url":null,"abstract":"","PeriodicalId":15529,"journal":{"name":"Journal of Clinical and Translational Science","volume":"9 1","pages":"e238"},"PeriodicalIF":2.0,"publicationDate":"2025-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12695491/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145756934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-28 | eCollection Date: 2025-01-01 | DOI: 10.1017/cts.2025.10177
Michele Allen, Yasamin Graff, Caroline Carlin, Antonia Apolinario-Wilcoxon, Paulette Baukol, Kristin Boman, LaPrincess Brewer, Roli Dwivedi, Milton Eder, Susan Gust, Mikow Hang, Walter Novillo, Luis Ortega, Shannon Pergament, Chris Pulley, Rebecca Shirley, Sida Ly-Xiong
Introduction: While evaluation approaches for community-academic research groups are established, few tools exist for academic institutional advisory groups across multi-core centers and research, education, and clinical care missions. Institutional advisory group evaluation should consider group processes and their impact on community-centered outcomes. This study describes the community-engaged development of a mixed-method evaluation approach to address this gap and presents pilot outcomes across an NIH-funded center.
Methods: We utilized a Community of Practice model to co-develop a survey with 14 community and academic representatives of four advisory groups. The final survey included five categories of group process and four categories of outcomes. Storytelling sessions with community partners explored areas where the survey identified discrepancies in perspectives between community and academic team members, as well as areas with lower scores.
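Identifying where community and academic perspectives diverge, or where scores are simply low, is a straightforward data step once the survey is scored. The sketch below shows one possible way to flag such categories for follow-up discussion; the category names, scale, and thresholds are hypothetical, not the study's survey.

```python
# Hypothetical sketch of flagging survey categories for follow-up discussion:
# categories with a large community-academic gap or a low overall mean.
# Category names, the 1-5 scale, and thresholds are illustrative only.
def flag_categories(scores, gap_threshold=0.5, low_threshold=3.5):
    """scores: {category: {"community": [..], "academic": [..]}} on a 1-5 scale."""
    flags = {}
    for cat, groups in scores.items():
        c_mean = sum(groups["community"]) / len(groups["community"])
        a_mean = sum(groups["academic"]) / len(groups["academic"])
        overall = (c_mean + a_mean) / 2
        reasons = []
        if abs(c_mean - a_mean) >= gap_threshold:
            reasons.append(f"community-academic gap {c_mean - a_mean:+.1f}")
        if overall < low_threshold:
            reasons.append(f"low overall mean {overall:.1f}")
        if reasons:
            flags[cat] = reasons
    return flags

# Invented example data for two categories.
example = {
    "decision-making": {"community": [4, 3, 4], "academic": [5, 5, 4]},
    "funding equity": {"community": [3, 2, 3], "academic": [3, 3, 4]},
}
print(flag_categories(example))
```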
Results: Nine community and 14 academic (staff and faculty) partners completed the survey. Respondents positively assessed group process outcomes (shared values, leadership, community-centeredness, and decision-making) and gave slightly less positive assessments of institutional outcomes. Storytelling sessions confirmed the overall satisfaction of community partners but highlighted actionable concerns around power-sharing, decision-making, funding equity, and trust-building.
Conclusions: The results of this equity-centered evaluation suggest the utility and importance of participatory, mixed-methods approaches to evaluating community-academic institutional advisory groups.
{"title":"Development and pilot of a tool evaluating community-engaged group processes and community-centered impact for institutional level advisory boards.","authors":"Michele Allen, Yasamin Graff, Caroline Carlin, Antonia Apolinario-Wilcoxon, Paulette Baukol, Kristin Boman, LaPrincess Brewer, Roli Dwivedi, Milton Eder, Susan Gust, Mikow Hang, Walter Novillo, Luis Ortega, Shannon Pergament, Chris Pulley, Rebecca Shirley, Sida Ly-Xiong","doi":"10.1017/cts.2025.10177","DOIUrl":"10.1017/cts.2025.10177","url":null,"abstract":"<p><strong>Introduction: </strong>While evaluation approaches for community-academic research groups are established, few tools exist for academic institutional advisory groups across multi-core centers and research, education, and clinical care missions. Institutional advisory group evaluation should consider group processes and their impact on community-centered outcomes. This study describes the community-engaged development of a mixed-method evaluation approach to address this gap and presents pilot outcomes across an NIH-funded center.</p><p><strong>Methods: </strong>We utilized a Community of Practice model to co-develop a survey with 14 community and academic representatives of four advisory groups. The final survey included five categories of group process and four categories of outcomes. Storytelling sessions with community partners explored areas where the survey identified discrepancies in perspectives between community and academic team members, as well as areas with lower scores.</p><p><strong>Results: </strong>Nine community and 14 academic (staff and faculty) partners completed the survey. Respondents positively assessed group process outcomes (shared values, leadership, community-centeredness, and decision-making), and slightly less positive assessments of institutional outcomes. Storytelling sessions confirmed the overall satisfaction of community partners but highlighted actionable concerns within power-sharing, decision-making, funding equity, and trust-building.</p><p><strong>Conclusions: </strong>The results of this equity-centered evaluation suggest the utility and importance of participatory, mixed-methods approaches to evaluating community-academic institutional advisory groups.</p>","PeriodicalId":15529,"journal":{"name":"Journal of Clinical and Translational Science","volume":"9 1","pages":"e261"},"PeriodicalIF":2.0,"publicationDate":"2025-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12766504/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145911873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-24 | eCollection Date: 2025-01-01 | DOI: 10.1017/cts.2025.10176
Gaylen E Fronk, Larry W Hawk, Andrew Cates, John Clark, Noelle Natale, Jennifer Dahne
Decentralized clinical trials (DCTs) have the potential to increase the pace and reach of recruitment and to improve sample representation compared to traditional in-person clinical trials. However, concerns linger regarding data integrity in DCTs due to threats of fraud and sampling bias. The purpose of this report is to describe two tools that we have developed and successfully implemented to combat these threats. Cheatblocker and QuotaConfig are two external modules that we have made publicly available within the REDCap data capture system to target fraud and sampling bias, respectively. We describe the modules, present two case examples in which we used the modules successfully, and discuss the potential impact of tools such as these on data integrity in DCTs. We situate this discussion within the broader landscape of translational science, wherein we strive to improve research rigor and efficiency to maximize public health benefit.
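The specific logic of Cheatblocker and QuotaConfig is not reproduced here. As a rough, standalone illustration of the two classes of check such modules perform (duplicate/fraud screening and quota enforcement), the sketch below uses hypothetical field names, thresholds, and quota cells and does not reflect either module's implementation or the REDCap API.

```python
# Rough illustration of two enrollment screening checks for a decentralized
# trial. This is NOT the Cheatblocker or QuotaConfig implementation; all field
# names, quota cells, and rules are hypothetical.
def looks_like_duplicate(new_record, existing_records):
    """Flag likely duplicate or fraudulent submissions by matching identifying fields."""
    keys = ("email", "phone", "ip_address")
    for old in existing_records:
        if any(new_record.get(k) and new_record.get(k) == old.get(k) for k in keys):
            return True
    return False

def quota_open(new_record, enrolled, quotas):
    """Enforce sampling quotas, e.g., cap enrollment per demographic cell."""
    cell = (new_record["age_group"], new_record["gender"])
    count = sum(1 for r in enrolled if (r["age_group"], r["gender"]) == cell)
    return count < quotas.get(cell, 0)

# Invented example records and quotas.
enrolled = [{"email": "a@x.org", "phone": "555-0100", "ip_address": "10.0.0.1",
             "age_group": "18-34", "gender": "F"}]
quotas = {("18-34", "F"): 2, ("35-54", "M"): 2}
candidate = {"email": "b@x.org", "phone": "555-0199", "ip_address": "10.0.0.2",
             "age_group": "18-34", "gender": "F"}
if not looks_like_duplicate(candidate, enrolled) and quota_open(candidate, enrolled, quotas):
    print("eligible to enroll")
```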
{"title":"Advancing translational science through trial integrity: REDCap-based approaches to mitigating fraud and bias.","authors":"Gaylen E Fronk, Larry W Hawk, Andrew Cates, John Clark, Noelle Natale, Jennifer Dahne","doi":"10.1017/cts.2025.10176","DOIUrl":"10.1017/cts.2025.10176","url":null,"abstract":"<p><p>Decentralized clinical trials (DCTs) have the potential to increase pace and reach of recruitment as well as to improve sample representation, compared to traditional in-person clinical trials. However, concerns linger regarding data integrity in DCTs due to threats of fraud and sampling bias. The purpose of this report is to describe two tools that we have developed and successfully implemented to combat these threats. Cheatblocker and QuotaConfig are two external modules that we have made publicly available within the REDCap data capture system to target fraud and sampling bias, respectively. We describe the modules, present two case examples in which we used the modules successfully, and discuss the potential impact of tools such as these on data integrity in DCTs. We situate this discussion within the broader landscape of translational science wherein we strive to improve research rigor and efficiency to maximize public health benefit.</p>","PeriodicalId":15529,"journal":{"name":"Journal of Clinical and Translational Science","volume":"9 1","pages":"e252"},"PeriodicalIF":2.0,"publicationDate":"2025-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12766503/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145911709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-24 | eCollection Date: 2025-01-01 | DOI: 10.1017/cts.2025.10180
Shannon Hillery, Ryan Majkowski, Ying Wang, Bradley Barney, Lindsay Eyzaguirre, Andrew Mould, Nichol McBee, Esther Woo, Elizabeth Holthouse, Kenneth Wiley, Salina P Waddy, Daniel Ford, Daniel F Hanley, Karen Lane
Background: Operational roadblocks and organizational delays in multicenter clinical trials have been evident for decades, with the start-up cycle being especially notorious for setbacks. To address these challenges and improve multicenter clinical trial execution, we developed an accelerated start-up (ASU) management strategy - a structured site onboarding approach based on lean management principles.
Methods: Three elements were integrated into the strategy: a standardized workflow, a dedicated site navigator (SN), and an electronic tracking system. We examined the range, central tendencies, and distribution of site activation times among differing combinations of these three elements. To determine how these combinations affected individual start-up milestones, we fit mixed models to compare percent achievement of predetermined milestone benchmarks and time to completion.
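A mixed model of this kind typically treats the ASU element combination as a fixed effect and the trial as a random intercept, since site activations are clustered within trials. The sketch below is a minimal illustration with invented data and column names; it does not reproduce the study's model specification or results.

```python
# Hypothetical sketch of a linear mixed model comparing time to site activation
# across ASU element combinations, with a random intercept per trial.
# Data, column names, and the reference level are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: one row per site activation, clustered within trials (invented numbers).
df = pd.DataFrame({
    "trial_id": ["T1", "T1", "T2", "T2", "T3", "T3", "T4", "T4", "T5", "T5", "T6", "T6"],
    "combination": ["all_three"] * 4 + ["no_navigator"] * 4 + ["no_workflow"] * 4,
    "days_to_activation": [120, 140, 130, 150, 180, 200, 190, 210, 260, 290, 270, 300],
})

# Random intercept for trial accounts for clustering of site activations within trials.
model = smf.mixedlm(
    "days_to_activation ~ C(combination, Treatment(reference='all_three'))",
    data=df,
    groups=df["trial_id"],
)
result = model.fit()
print(result.summary())
```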
Results: Thirteen consecutive trials (n = 308 site activations) employed three distinct combinations of the three ASU elements. Trials using all three elements (n = 6) had 160 total site activations in a median of 133 days. Three trials without the SN element had 52 total site activations in a median of 191 days. Four trials without the standardized workflow element had 96 total site activations in a median of 277 days. Significant differences between combinations included times to sIRB submission (p = 0.004), training/certificates completion (p = 0.03), and site activation (p = 0.003). Results suggest sites activated faster and achieved predetermined benchmarks for every milestone more often when all three elements were employed.
Conclusion: These trial start-up data show that sites can meet ambitious timelines, underscoring the strategy's potential to streamline workflows and improve site team performance.
{"title":"Accelerating start-up cycles in investigator-initiated multicenter clinical trials.","authors":"Shannon Hillery, Ryan Majkowski, Ying Wang, Bradley Barney, Lindsay Eyzaguirre, Andrew Mould, Nichol McBee, Esther Woo, Elizabeth Holthouse, Kenneth Wiley, Salina P Waddy, Daniel Ford, Daniel F Hanley, Karen Lane","doi":"10.1017/cts.2025.10180","DOIUrl":"10.1017/cts.2025.10180","url":null,"abstract":"<p><strong>Background: </strong>Operational roadblocks and organizational delays in multicenter clinical trials have been evident for decades, with the start-up cycle being especially notorious for setbacks. To address these challenges and improve multicenter clinical trial execution, we developed an accelerated start-up (ASU) management strategy - a structured site onboarding approach based on lean management principles.</p><p><strong>Methods: </strong>Three elements were integrated into the strategy: a standardized workflow, a dedicated site navigator (SN), and an electronic tracking system. We examined the range, central tendencies, and distribution of site activation times among differing combinations of these three elements. To determine how these combinations affected individual start-up milestones, we fit mixed models to compare percent achievement of predetermined milestone benchmarks and time to completion.</p><p><strong>Results: </strong>Thirteen consecutive trials (<i>n</i> = 308 site activations) employed three distinct combinations of the three ASU elements. Trials using all three elements (<i>n</i> = 6) had 160 total site activations in a median of 133 days. Three trials without the SN element had 52 total site activations in a median of 191 days. Four trials without the standardized workflow element had 96 total site activations in a median of 277 days. Significant differences between combinations included times to sIRB submission (<i>p</i> = 0.004), training/certificates completion (<i>p</i> = 0.03), and site activation (<i>p</i> = 0.003). Results suggest sites activated faster and achieved predetermined benchmarks for every milestone more often when three elements were employed.</p><p><strong>Conclusion: </strong>This sample trial start-up data supports that sites can meet ambitious timelines, underscoring the strategy's potential to streamline workflows and improve site team performance.</p>","PeriodicalId":15529,"journal":{"name":"Journal of Clinical and Translational Science","volume":"9 1","pages":"e249"},"PeriodicalIF":2.0,"publicationDate":"2025-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12695510/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145756919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}