Practical inference for a complier average causal effect in cluster randomised trials with a binary outcome
Pub Date: 2025-10-16; DOI: 10.1177/17407745251378407
Tansy Edwards, Jennifer Thompson, Charles Opondo, Elizabeth Allen
Background: Individual non-compliance with an intervention can occur in cluster randomised trials, and estimating the intervention effect according to intention-to-treat (ITT) ignores non-compliance and underestimates efficacy. The effect of the intervention among compliers (the complier average causal effect, CACE) provides an unbiased estimate of efficacy, but inference can be complex in cluster randomised trials.
Methods: We evaluated the performance of a pragmatic bootstrapping approach that accounts for clustering to obtain a 95% confidence interval (CI) for the CACE in cluster randomised trials with monotonicity and one-sided non-compliance. We investigated a variety of scenarios for correlated cluster-level prevalence of a binary outcome and non-compliance (5%, 10%, 20%, 30%, 40%). Cluster randomised trials were simulated with the minimum number of clusters needed to provide at least 80% and at least 90% power to detect an ITT odds ratio (OR) of 0.5 with 100 individuals per cluster.
Results: Under all non-compliance scenarios (5%-40%), there was negligible bias for the CACE. In the worst case of bias, a true OR of 0.18 was estimated as 0.15 for the rarest outcome (5%) and highest non-compliance (40%). There was no under-coverage of bootstrap CIs. CIs were the correct width for an outcome prevalence of 20%-40% but too wide for less common outcomes. Loss of power for a CACE bootstrap analysis versus an ITT regression analysis increased as the prevalence of the outcome decreased across all non-compliance scenarios, particularly for an outcome prevalence of less than 20%.
Conclusions: Our bootstrapping approach provides an accessible and computationally simple method to evaluate efficacy in support of ITT analyses in cluster randomised trials.
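The bootstrap procedure itself is not spelled out in the abstract, so the sketch below is only a rough illustration of the general idea: resample whole clusters with replacement (separately within each arm, to preserve the randomised allocation) and take percentile limits of the re-estimated effect. It uses a simple Bloom-type CACE estimator on the risk-difference scale rather than the authors' odds ratio estimator, and the data, column names, and parameter values are hypothetical.

```python
# Minimal sketch of a cluster bootstrap CI for a CACE under one-sided non-compliance.
# Not the authors' exact procedure; all data and column names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

def cace_risk_difference(df):
    """Bloom-type estimator: ITT risk difference divided by the compliance
    proportion in the intervention arm (valid under one-sided non-compliance)."""
    itt = df.loc[df.arm == 1, "outcome"].mean() - df.loc[df.arm == 0, "outcome"].mean()
    return itt / df.loc[df.arm == 1, "complied"].mean()

def cluster_bootstrap_ci(df, n_boot=1000, alpha=0.05):
    """Percentile CI from resampling whole clusters with replacement within each arm,
    so both the clustering and the randomised allocation are preserved."""
    estimates = []
    arm_clusters = {a: df.loc[df.arm == a, "cluster"].unique() for a in (0, 1)}
    for _ in range(n_boot):
        sampled = np.concatenate([rng.choice(c, size=len(c), replace=True)
                                  for c in arm_clusters.values()])
        boot_df = pd.concat([df[df.cluster == c] for c in sampled], ignore_index=True)
        estimates.append(cace_risk_difference(boot_df))
    lower, upper = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return cace_risk_difference(df), (lower, upper)

# Hypothetical trial: 20 clusters of 100 individuals, ~20% control-arm outcome
# prevalence, ~80% compliance in the intervention arm, no access in the control arm.
frames = []
for c in range(20):
    arm = c % 2
    complied = rng.binomial(1, 0.8, 100) if arm == 1 else np.zeros(100, dtype=int)
    outcome = rng.binomial(1, 0.20 - 0.08 * complied)  # compliers benefit
    frames.append(pd.DataFrame({"cluster": c, "arm": arm,
                                "complied": complied, "outcome": outcome}))
data = pd.concat(frames, ignore_index=True)

estimate, (lo, hi) = cluster_bootstrap_ci(data)
print(f"CACE (risk difference) = {estimate:.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```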
{"title":"Practical inference for a complier average causal effect in cluster randomised trials with a binary outcome.","authors":"Tansy Edwards, Jennifer Thompson, Charles Opondo, Elizabeth Allen","doi":"10.1177/17407745251378407","DOIUrl":"https://doi.org/10.1177/17407745251378407","url":null,"abstract":"<p><strong>Background: </strong>Individual non-compliance with an intervention in cluster randomised trials can occur and estimating an intervention effect according to intention-to-treat ignores non-compliance and underestimates efficacy. The effect of the intervention among compliers (the complier average causal effect) provides an unbiased estimate of efficacy but inference can be complex in cluster randomised trials.</p><p><strong>Methods: </strong>We evaluated the performance of a pragmatic bootstrapping approach accounting for clustering to obtain a 95% confidence interval (CI) for a CACE for cluster randomised trials with monotonicity and one-sided non-compliance. We investigated a variety of scenarios for correlated cluster-level prevalence of a binary outcome and non-compliance (5%, 10%, 20%, 30%, 40%). Cluster randomised trials were simulated with the minimum number of clusters to provide at least 80% and at least 90% power, to detect an ITT odds ratio (OR) of 0.5 with 100 individuals per cluster.</p><p><strong>Results: </strong>Under all non-compliance scenarios (5%-40%), there was negligible bias for the CACE. In the worst-case of bias, a true OR of 0.18 was estimated as 0.15 for the rarest outcome (5%) and highest non-compliance (40%). There was no under-coverage of bootstrap CIs. CIs were the correct width for an outcome prevalence of 20%-40% but too wide for a less common outcome. Loss of power for a CACE bootstrap analysis versus ITT regression analysis increased as the prevalence of the outcome decreased across all non-compliance scenarios, particularly for an outcome prevalence of less than 20%.</p><p><strong>Conclusions: </strong>Our bootstrapping approach provides an accessible and computationally simple method to evaluate efficacy in support of ITT analyses in cluster randomised trials.</p>","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"17407745251378407"},"PeriodicalIF":2.2,"publicationDate":"2025-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145298963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Meta-analytic evaluation of surrogate endpoints at multiple time points in randomized controlled trials with time-to-event endpoints
Pub Date: 2025-10-16; DOI: 10.1177/17407745251377734
Xiaoyu Tang, Ludovic Trinquart
Background: Valid surrogate endpoints are of great interest for efficient evaluation of novel therapies. With surrogate and true time-to-event endpoints, meta-analytic approaches for surrogacy validation commonly rely on the hazard ratio, ignore that randomized trials may contribute to the meta-analysis with different follow-up durations, overlook the importance of the time lag between surrogate and true endpoints in determining surrogate utility, and assume that treatment effects and the strength of surrogacy remain constant over time. In this context, we introduce a novel two-stage meta-analytic model to evaluate trial-level surrogacy.
Methods: Our model employs restricted mean survival time (RMST) differences to quantify treatment effects at the first stage. At the second stage, the model uses the between-study covariance matrix of RMSTs and RMST differences to assess surrogacy through coefficients of determination at multiple time points. This framework integrates estimates from each component randomized controlled trial without extrapolation beyond the trial-specific time support, can explicitly model a time lag between endpoints, and remains valid under non-proportional hazards.
Results: Simulation studies indicate that our model yields unbiased and precise estimates of the coefficient of determination. In an individual patient data meta-analysis in gastric cancer, estimates of coefficients of determination from our model reflect the temporal lag between endpoints and reveal dynamic changes in surrogacy strength over time compared to the Clayton survival copula model, a widely used reference method in surrogate endpoint validation for time-to-event outcomes.
Conclusion: Our new meta-analytic model to evaluate trial-level surrogacy, using the difference in RMST as the measure of treatment effect, does not require the proportional hazards assumption, captures the strength of surrogacy at multiple time points, and can evaluate surrogacy with a time lag between surrogate and true endpoints. The proposed method enhances the rigor and practicality of surrogate endpoint validation in time-to-event settings.
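As a concrete illustration of the first-stage quantity only, the sketch below computes an RMST difference for a single trial by integrating the Kaplan-Meier curve up to a truncation time tau. The second-stage surrogacy model (between-study covariance of RMST differences, coefficients of determination) is not shown, and the data and column names are hypothetical.

```python
# Minimal sketch: restricted mean survival time (RMST) difference between arms at tau.
import numpy as np

def kaplan_meier(time, event):
    """Return event times and the Kaplan-Meier survival estimate at those times."""
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    uniq = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in uniq:
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return uniq, np.asarray(surv)

def rmst(time, event, tau):
    """Area under the Kaplan-Meier curve from 0 to tau (step-function integration)."""
    t, s = kaplan_meier(time, event)
    grid = np.concatenate(([0.0], t[t < tau], [tau]))   # survival is 1 before the first event
    steps = np.concatenate(([1.0], s[t < tau]))
    return float(np.sum(steps * np.diff(grid)))

def rmst_difference(time, event, arm, tau):
    """RMST(arm=1) - RMST(arm=0): the trial-level treatment effect used at stage 1."""
    time, event, arm = map(np.asarray, (time, event, arm))
    return rmst(time[arm == 1], event[arm == 1], tau) - rmst(time[arm == 0], event[arm == 0], tau)

# Hypothetical single-trial example with exponential survival times, tau = 24 months.
rng = np.random.default_rng(0)
n = 300
arm = rng.integers(0, 2, n)
t_true = rng.exponential(scale=np.where(arm == 1, 30.0, 20.0))
censor = rng.uniform(0, 36, n)
time = np.minimum(t_true, censor)
event = (t_true <= censor).astype(int)
print(f"RMST difference at 24 months: {rmst_difference(time, event, arm, tau=24.0):.2f} months")
```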
{"title":"Meta-analytic evaluation of surrogate endpoints at multiple time points in randomized controlled trials with time-to-event endpoints.","authors":"Xiaoyu Tang, Ludovic Trinquart","doi":"10.1177/17407745251377734","DOIUrl":"10.1177/17407745251377734","url":null,"abstract":"<p><strong>Background: </strong>Valid surrogate endpoints are of great interest for efficient evaluation of novel therapies. With surrogate and true time-to-event endpoints, meta-analytic approaches for surrogacy validation commonly rely on the hazard ratio, ignore that randomized trials possibly contribute to the meta-analysis for different follow-up durations, overlook the importance of the time lag between surrogate and true endpoints in determining surrogate utility, and assume that treatment effects and the strength of surrogacy remain constant over time. In this context, we introduce a novel two-stage meta-analytic model to evaluate trial-level surrogacy.</p><p><strong>Methods: </strong>Our model employs restricted mean survival time (RMST) differences to quantify treatment effects at the first stage. At the second stage, the model is based on the between-study covariance matrix of RMSTs and differences in RMST to assess surrogacy through coefficients of determination at multiple timepoints. This framework integrates estimates from each component RCT without extrapolation beyond the trial-specific time support, can explicitly model a time lag between endpoints, and remains valid under non-proportional hazards.</p><p><strong>Results: </strong>Simulation studies indicate that our model yields unbiased and precise estimates of the coefficient of determination. In an individual patient data meta-analysis in gastric cancer, estimates of coefficients of determination from our model reflect the temporal lag between endpoints and reveal dynamic changes in surrogacy strength over time compared to the Clayton survival copula model, a widely used reference method in surrogate endpoint validation for time-to-event outcomes.</p><p><strong>Conclusion: </strong>Our new meta-analytic model to evaluate trial-level surrogacy using the difference in RMST as the measure of treatment effect does not require the proportional hazard assumption, captures the strength of surrogacy at multiple time points, and can evaluate surrogacy with a time lag between surrogate and true endpoints. The proposed method enhances the rigor and practicality of surrogate endpoint validation in time-to-event settings.</p>","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"17407745251377734"},"PeriodicalIF":2.2,"publicationDate":"2025-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12614294/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145298965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Patient notification about pragmatic clinical trials conducted with a waiver of consent: A qualitative study
Pub Date: 2025-10-16; DOI: 10.1177/17407745251377730
Stephanie R Morain, Abigail Brickler, Matthew W Semler, Jonathan D Casey
Background: Some scholars have proposed that investigators and health systems should notify patients about their enrollment in pragmatic clinical trials conducted with a waiver of consent. However, others argue that decision-making about notification requires judgment, and reports suggest considerable heterogeneity in whether, when, and how individuals enrolled in pragmatic clinical trials with a waiver of consent are notified about that enrollment. Empirical data can inform this decision-making.
Methods: We conducted semi-structured interviews with knowledgeable stakeholders involved in conducting and/or overseeing pragmatic clinical trials conducted with waivers of consent, including investigators, those charged with the oversight of human subjects research, and operational leadership. Interviews were conducted via video conference from September to December 2024 and were audio-recorded and professionally transcribed. Data were qualitatively analyzed using an integrated approach, including both a priori codes drawn from the interview guide and emergent, inductive codes.
Results: Twenty-three of 28 experts invited to participate completed interviews. Respondents described rationales both for and against notification. Rationales for notification included appeals to moral values (respect for persons, respect for autonomy, and transparency) as well as instrumental goals (promoting understanding of and/or support for research, avoiding downstream surprise, and supporting buy-in). Rationales against notification included preserving scientific validity, perceiving notification to lack value, and concerns that notification might be burdensome for patient-subjects or undermine trust and/or clinical or public health goals. Decision-making about notification was context-specific and reflected features related to the study design, the health system setting, the patient population, the clinical condition, and the intervention(s) being evaluated. While some factors were consistently described as weighing against notification, including scientific validity or decisions for which a patient would not be offered a choice outside the research context, other factors resulted in divergent decisions across different pragmatic clinical trials (or even across different sites for the same trial).
Conclusions: While several rationales support notification about enrollment in pragmatic clinical trials conducted with waivers of consent, the relative value and practicability of notification are context-dependent. Some features, such as the need to preserve scientific validity, may appropriately weigh in favor of forgoing notification. However, evidence of divergent decision-making for similar trials suggests the need for a framework to guide future notification decisions. These data can be an important input to inform future framework development.
{"title":"Patient notification about pragmatic clinical trials conducted with a waiver of consent: A qualitative study.","authors":"Stephanie R Morain, Abigail Brickler, Matthew W Semler, Jonathan D Casey","doi":"10.1177/17407745251377730","DOIUrl":"10.1177/17407745251377730","url":null,"abstract":"<p><strong>Background: </strong>Some scholars have proposed that investigators and health systems should notify patients about their enrollment in pragmatic clinical trials conducted with a waiver of consent. However, others argue that decision-making about notification requires judgment, and reports suggest considerable heterogeneity about whether, when, and how individuals enrolled in pragmatic clinical trials with a waiver of consent are notified about that enrollment. Empirical data can inform this decision-making.</p><p><strong>Methods: </strong>We conducted semi-structured interviews with knowledgeable stakeholders involved in conducting and/or overseeing pragmatic clinical trials conducted with waivers of consent, including investigators, those charged with the oversight of human subjects research, and operational leadership. Interviews were conducted via video conference from September to December 2024 and were audio-recorded and professionally transcribed. Data were qualitatively analyzed using an integrated approach, including both a priori codes drawn from the interview guide and emergent, inductive codes.</p><p><strong>Results: </strong>Twenty-three of 28 experts invited to participate completed interviews. Respondents described rationales both for and against notification. Rationales for notification included both appeals to moral values (respect for persons, respect for autonomy, and transparency), as well as instrumental goals (promoting understanding of and/or support for research, avoiding downstream surprise, and supporting buy-in). Rationales against notification included preserving scientific validity, perceiving notification to lack value, and concerns that notification might be burdensome for patient-subjects or undermine trust and/or clinical or public health goals. Decision-making about notification was context-specific and reflected features related to the study design, the health system setting, the patient population, the clinical condition, and the intervention(s) being evaluated. While some factors were consistently described as weighing against notification, including scientific validity or decisions for which a patient would not be offered a choice outside the research context, other factors resulted in divergent decisions across different pragmatic clinical trials (or even across different sites for the same trial).</p><p><strong>Conclusions: </strong>While several rationales support notification about enrollment in pragmatic clinical trials conducted with waivers of consent, the relative value and practicability of notification is context-dependent. Some features, such as the need to preserve scientific validity, may appropriately weigh in favor of forgoing notification. However, evidence of divergent decision-making for similar trials suggests the need for a framework to guide future notification decisions. 
These data can be an important input to inform future framework development.</p>","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"17407745251377730"},"PeriodicalIF":2.2,"publicationDate":"2025-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12614351/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145298955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Confirmatory evidence supporting single pivotal trial new drug approvals by the Food and Drug Administration, 2015 through 2023
Pub Date: 2025-10-15; DOI: 10.1177/17407745251376620
Carla Barile Godoy, Reshma Ramachandran, Pradyumna Sapre, Joseph S Ross
Background/aims: To secure market authorization, the Food and Drug Administration requires that drug manufacturers demonstrate product safety and efficacy for an indicated use on the basis of two adequate and well-controlled studies, known as pivotal clinical trials. However, a single pivotal trial may be sufficient for product approval if safety and efficacy are clearly and convincingly demonstrated, or if it is accompanied by confirmatory evidence. We examined all original drug and biologic indication approvals by the Food and Drug Administration between 2015 and 2023 to determine what proportion of those approved on the basis of a single pivotal trial were accompanied by confirmatory evidence, the type and strength of this evidence, and whether confirmatory evidence was cited more frequently after December 2019, when the Food and Drug Administration released draft guidance clarifying issues related to confirmatory evidence.
Methods: Information was extracted from publicly available Food and Drug Administration documents, and we used descriptive statistics to characterize the sample and chi-square tests to compare the frequency with which confirmatory evidence was cited before and after December 2019.
Results: Overall, the Food and Drug Administration approved 441 original drug and biologic indications between 2015 and 2023, of which 40 were excluded. Of the remaining 401, 181 (41%) were based on 2 or more pivotal trials, 35 (7.9%) on a single pivotal trial with at least one clinical primary efficacy endpoint and without orphan designation, and 185 (42%) on a single pivotal trial. Among the final category of approvals, the Food and Drug Administration explicitly referenced confirmatory evidence for 36 (19.5%) single pivotal trial approvals and implicitly referenced confirmatory evidence for 4 (2.2%) others. These 40 approvals referenced 99 unique sources of confirmatory evidence, most commonly pharmacodynamic/mechanistic (n = 49) and other (n = 32). Reference to confirmatory evidence was more frequent after the Food and Drug Administration issued clarifying guidance in December 2019 (pre: 7% vs post: 34%; p < 0.0001).
Conclusions: Given the rising number of Food and Drug Administration approvals based on a single pivotal trial, greater clarity on confirmatory evidence standards and communication of their use could be considered.
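As a worked illustration of the pre/post comparison reported in the Results, the sketch below runs a chi-square test on a 2x2 table. The per-period counts are hypothetical values chosen only to roughly match the reported 7% and 34%, since the abstract does not give the actual denominators for each period.

```python
# Illustrative chi-square comparison of confirmatory evidence citation before vs after
# the December 2019 draft guidance. Counts are hypothetical, not the study's data.
from scipy.stats import chi2_contingency

#          cited confirmatory evidence, did not cite
table = [[6, 79],    # pre-guidance approvals  (~7%)
         [34, 66]]   # post-guidance approvals (~34%)
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, p = {p_value:.2g}")
```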
{"title":"Confirmatory evidence supporting single pivotal trial new drug approvals by the Food and Drug Administration, 2015 through 2023.","authors":"Carla Barile Godoy, Reshma Ramachandran, Pradyumna Sapre, Joseph S Ross","doi":"10.1177/17407745251376620","DOIUrl":"https://doi.org/10.1177/17407745251376620","url":null,"abstract":"<p><strong>Backgrounds/aims: </strong>To secure market authorization, the Food and Drug Administration requires that drug manufacturers demonstrate product safety and efficacy for an indicated use based on two adequate and well-controlled studies, known as pivotal clinical trials. A single pivotal trial may also be sufficient for product approval, however, if safety and efficacy is clearly and convincingly demonstrated, or if accompanied by confirmatory evidence. We examined all original drug and biologic indication approvals by the Food and Drug Administration between 2015 and 2023 to determine what proportion of those approved on the basis of a single pivotal trial were accompanied by confirmatory evidence, the type and strength of this evidence, and whether confirmatory evidence was cited more frequently after December 2019, when the Food and Drug Administration released draft guidance clarifying issues related to confirmatory evidence.</p><p><strong>Methods: </strong>Information was extracted from publicly available Food and Drug Administration documents, and we used descriptive statistics to characterize the sample and chi-square tests to compare the frequency with which confirmatory evidence was cited before and after December 2019.</p><p><strong>Results: </strong>Overall, the Food and Drug Administration approved 441 original drug and biologic indications between 2015 and 2023; 40 of which were excluded. Of the remaining, 181 (41%) were based on 2 or more pivotal trials, 35 (7.9%) on a single pivotal trial with at least one clinical primary efficacy endpoint without orphan designation, and 185 (42%) on a single pivotal trial. Among the final category of approvals, the Food and Drug Administration explicitly referenced confirmatory evidence for 36 (19.5%) single pivotal trial approvals and implicitly referenced confirmatory evidence for 4 (2.2%) others. These 40 approvals referenced 99 unique sources of confirmatory evidence, most commonly pharmacodynamic/mechanistic (n = 49) and other (n = 32). Reference to confirmatory evidence was greater after the Food and Drug Administration issued clarifying guidance in December 2019 (pre: 7% vs post: 34%; p < 0.0001).</p><p><strong>Conclusions: </strong>Given the rising number of the Food and Drug Administration approvals based on a single pivotal trial, greater clarity on confirmatory evidence standards and communication of its use could be considered.</p>","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"17407745251376620"},"PeriodicalIF":2.2,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145291408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How is missing data handled in cluster randomized controlled trials? A review of trials published in the NIHR Journals Library 1997-2024
Pub Date: 2025-10-04; DOI: 10.1177/17407745251378117
Siqi Wu, Richard M Jacques, Stephen J Walters
Background: Cluster randomized controlled trials are increasingly used to evaluate the effectiveness of interventions in clinical and public health research. However, missing data in cluster randomized controlled trials can lead to biased results and reduce statistical power if not handled appropriately. This study aimed to review, describe and summarize how missing primary outcome data are handled in reports of publicly funded cluster randomized controlled trials.
Methods: This study reviewed the handling of missing data in cluster randomized controlled trials published in the UK National Institute for Health and Care Research Journals Library from 1 January 1997 to 31 December 2024. Data extraction focused on trial design, missing data mechanisms, handling methods in primary analyses and sensitivity analyses.
Results: Among the 110 identified cluster randomized controlled trials, 45% (50/110) did not report or take any action on missing data in either the primary analysis or a sensitivity analysis. In total, 75% (82/110) did not impute missing values in their primary analysis. Advanced methods such as multiple imputation were applied in only 15% (16/110) of primary analyses and 28% (31/110) of sensitivity analyses. Nevertheless, the review highlighted that missing data handling methods have evolved over time, with increasing adoption of multiple imputation since 2017. Overall, the reporting of how missing data are handled in cluster randomized controlled trials has improved in recent years, but a large proportion of trials still lack transparency in reporting missing data, with essential information such as the assumed missingness mechanism not extractable from the reports.
Conclusion: Despite progress in adopting multiple imputation, inconsistent reporting and reliance on simplistic methods (e.g. complete case analysis) undermine the credibility of cluster randomized controlled trials. Recommendations include stricter adherence to CONSORT guidelines, routine sensitivity analyses under different missingness mechanisms and enhanced training in advanced imputation techniques. This review provides updated insights into how missing data are handled in cluster randomized controlled trials and highlights the urgency of methodological transparency to ensure robust evidence generation in clustered trial designs.
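As an illustration of the kind of analysis the review recommends over complete case analysis, the sketch below runs chained-equations multiple imputation on a hypothetical cluster trial dataset. Including cluster indicator variables in the imputation and analysis models is a simple, imperfect way to reflect clustering (a fully multilevel multiple imputation model would generally be preferable), and all data, column names, and model choices are assumptions.

```python
# Minimal sketch of multiple imputation for a cluster RCT outcome (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

rng = np.random.default_rng(1)

# Hypothetical trial: 10 clusters of 50, continuous outcome y with ~20% missingness.
n_clusters, m = 10, 50
cluster = np.repeat(np.arange(n_clusters), m)
arm = np.repeat(np.arange(n_clusters) % 2, m)
u = np.repeat(rng.normal(0, 0.5, n_clusters), m)            # cluster-level random effect
y = 1.0 + 0.5 * arm + u + rng.normal(0, 1, n_clusters * m)
y[rng.random(y.size) < 0.2] = np.nan                         # outcome missing at random

df = pd.DataFrame({"y": y, "arm": arm})
df = df.join(pd.get_dummies(cluster, prefix="cl", drop_first=True).astype(float))

imp = mice.MICEData(df)                                      # chained-equations imputation
formula = "y ~ arm + " + " + ".join(c for c in df.columns if c.startswith("cl_"))
analysis = mice.MICE(formula, sm.OLS, imp)
results = analysis.fit(n_burnin=10, n_imputations=20)        # pooled with Rubin's rules
print(results.summary())
```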
{"title":"How is missing data handled in cluster randomized controlled trials? A review of trials published in the NIHR Journals Library 1997-2024.","authors":"Siqi Wu, Richard M Jacques, Stephen J Walters","doi":"10.1177/17407745251378117","DOIUrl":"https://doi.org/10.1177/17407745251378117","url":null,"abstract":"<p><strong>Background: </strong>Cluster randomized controlled trials are increasingly used to evaluate the effectiveness of interventions in clinical and public health research. However, missing data in cluster randomized controlled trials can lead to biased results and reduce statistical power if not handled appropriately. This study aimed to review, describe and summarize how missing primary outcome data are handled in reports of publicly funded cluster randomized controlled trials.</p><p><strong>Methods: </strong>This study reviewed the handling of missing data in cluster randomized controlled trials published in the UK National Institute for Health and Care Research Journals Library from 1 January 1997 to 31 December 2024. Data extraction focused on trial design, missing data mechanisms, handling methods in primary analyses and sensitivity analyses.</p><p><strong>Results: </strong>Among the 110 identified cluster randomized controlled trials, 45% (50/110) did not report or take any action on missing data in either primary analysis or sensitivity analysis. In total, 75% (82/110) of the identified cluster randomized controlled trials did not impute missing values in their primary analysis. Advanced methods like multiple imputation were applied in only 15% (16/110) of primary analyses and 28% (31/110) of sensitivity analyses. On the contrary, the review highlighted that missing data handling methods have evolved over time, with an increasing adoption of multiple imputation since 2017. Overall, the reporting of how missing data is handled in cluster randomized controlled trials has improved in recent years, but there are still a large proportion of cluster randomized controlled trials lack of transparency in reporting missing data, where essential information such as the assumed missing mechanism could not be extracted from the reports.</p><p><strong>Conclusion: </strong>Despite progress in adopting multiple imputation, inconsistent reporting and reliance on simplistic methods (e.g. complete case analysis) undermine cluster randomized controlled trial credibility. Recommendations include stricter adherence to CONSORT guidelines, routine sensitivity analyses for different missing mechanisms and enhanced training in advanced imputation techniques. This review provides updated insights into how missing data are handled in cluster randomized controlled trials and highlight the urgency for methodological transparency to ensure robust evidence generation in clustered trial designs.</p>","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"17407745251378117"},"PeriodicalIF":2.2,"publicationDate":"2025-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145225050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Characterization of studies considered and required under Medicare's coverage with evidence development program
Pub Date: 2025-10-01; Epub Date: 2025-02-08; DOI: 10.1177/17407745251313979
Maryam Mooghali, Osman Moneer, Guneet Janda, Joseph S Ross, Sanket S Dhruva, Reshma Ramachandran
Introduction: In 2005, the Centers for Medicare and Medicaid Services introduced the Coverage with Evidence Development program for items and services with limited evidence of benefit and harm for Medicare beneficiaries, aiming to generate evidence to determine whether they meet the statutory "reasonable and necessary" criteria for coverage. Coverage with Evidence Development requires participation in clinical studies approved by the Centers for Medicare and Medicaid Services (i.e. Coverage with Evidence Development-approved studies) as a condition of coverage. We examined the quality of evidence generated by Coverage with Evidence Development-approved studies compared with the studies that informed the Centers for Medicare and Medicaid Services' initial Coverage with Evidence Development decisions (i.e. National Coverage Determination studies).
Methods: Using the Centers for Medicare and Medicaid Services' webpage, we identified all items and services covered under Coverage with Evidence Development and their Coverage with Evidence Development-approved studies. Through searches of PubMed and Google Scholar, we identified original research articles that reported results for primary endpoints of Coverage with Evidence Development-approved studies. We then reviewed the initial Coverage with Evidence Development decision memos and identified National Coverage Determination studies that were original research. We characterized and compared Coverage with Evidence Development-approved studies and National Coverage Determination studies.
Results: From 2005 to 2023, 26 items and services were covered under the Coverage with Evidence Development program, associated with 196 National Coverage Determination studies (170 (86.7%) clinical trials and 26 (13.3%) registries) and 116 unique Coverage with Evidence Development-approved studies (86 (74.1%) clinical trials, 23 (19.8%) registries, 4 (3.4%) claims-based studies, and 3 (2.6%) expanded access studies). Among clinical trial studies, National Coverage Determination studies and Coverage with Evidence Development-approved studies did not differ with respect to multi-arm design (59.4% vs 68.6%; p = 0.15). However, among multi-arm clinical trial studies, National Coverage Determination studies were less likely than Coverage with Evidence Development-approved studies to be randomized (52.5% vs 93.2%; p < 0.001). Overall, National Coverage Determination studies less frequently had ≥1 primary endpoint focused on a clinical outcome measure (65.8% vs 87.9%; p = 0.006) and less frequently exclusively enrolled Medicare beneficiaries (3.1% vs 25.9%; p < 0.001). In addition, National Coverage Determination studies had smaller population sizes than Coverage with Evidence Development-approved studies (median 100 (interquartile range, 45-414) vs 302 (interquartile range, 93-1000) patients; p = 0.002). Among Coverage with Evidence Development-approved studies, 59 (50.9%) had not yet publicly reported results for their primary endpoints.
Discussion: Compared with the studies used for initial National Coverage Determinations, the studies required under Medicare's Coverage with Evidence Development program more often used randomized designs, had larger patient populations, enrolled Medicare beneficiaries, and had clinical outcomes as primary endpoints. However, not all Coverage with Evidence Development-approved studies have reported results, which may leave patients, clinicians, and payers uncertain about the clinical benefit of covered items and services.
Conclusions: The Centers for Medicare and Medicaid Services' Coverage with Evidence Development program has successfully promoted the generation of more robustly designed clinical studies that can better inform clinical, regulatory, and coverage decisions than the studies that informed initial coverage decisions. However, opportunities remain to further strengthen the design and dissemination of the studies required under this program.
{"title":"Characterization of studies considered and required under Medicare's coverage with evidence development program.","authors":"Maryam Mooghali, Osman Moneer, Guneet Janda, Joseph S Ross, Sanket S Dhruva, Reshma Ramachandran","doi":"10.1177/17407745251313979","DOIUrl":"10.1177/17407745251313979","url":null,"abstract":"<p><p>IntroductionIn 2005, the Centers for Medicare and Medicaid Services introduced the Coverage with Evidence Development program for items and services with limited evidence of benefit and harm for Medicare beneficiaries, aiming to generate evidence to determine whether they meet the statutory \"reasonable and necessary\" criteria for coverage. Coverage with Evidence Development requires participation in clinical studies approved by the Centers for Medicare and Medicaid Services (i.e. Coverage with Evidence Development-approved studies) as a condition of coverage. We examined the quality of evidence generated by Coverage with Evidence Development-approved studies compared with those that informed Centers for Medicare and Medicaid Services' initial Coverage with Evidence Development decisions (i.e. National Coverage Determination studies).MethodsUsing Centers for Medicare and Medicaid Services' webpage, we identified all items and services covered under Coverage with Evidence Development and their Coverage with Evidence Development-approved studies. Through searching PubMed and Google Scholar, we identified original research articles that reported results for primary endpoints of Coverage with Evidence Development-approved studies. We then reviewed the initial Coverage with Evidence Development decision memos and identified National Coverage Determination studies that were original research.We characterized and compared Coverage with Evidence Development-approved studies and National Coverage Determination studies.ResultsFrom 2005 to 2023, 26 items and services were covered under the Coverage with Evidence Development program, associated with 196 National Coverage Determination studies (170 (86.7%) clinical trials and 26 (13.3%) registries) and 116 unique Coverage with Evidence Development-approved studies (86 (74.1%) clinical trials, 23 (19.8%) registries, 4 (3.4%) claims-based studies, and 3 (2.6%) expanded access studies). Among clinical trial studies, National Coverage Determination studies and Coverage with Evidence Development-approved studies did not differ with respect to multi-arm design (59.4% vs 68.6%; <i>p</i> = 0.15). However, among multi-arm clinical trial studies, National Coverage Determination studies were less likely than Coverage with Evidence Development-approved studies to be randomized (52.5% vs 93.2%; <i>p</i> < 0.001). Overall, National Coverage Determination studies less frequently had ≥ 1 primary endpoint focused on a clinical outcome measure (65.8% vs 87.9%; <i>p</i> = 0.006) and less frequently exclusively enrolled Medicare beneficiaries (3.1% vs 25.9%; <i>p</i> < 0.001). In addition, National Coverage Determination studies had smaller population sizes than Coverage with Evidence Development-approved studies (median 100 (interquartile range, 45-414) vs 302 (interquartile range, 93-1000) patients; <i>p</i> = 0.002). 
Among Coverage with Evidence Development-approved studies, 59 (50.9%) had not yet publicly reported resul","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"619-625"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143374119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Precision medicine evaluation of heterogeneity of treatment effect for a time-to-event outcome with application in a trial of initial treatment for people living with HIV
Pub Date: 2025-10-01; Epub Date: 2025-05-22; DOI: 10.1177/17407745251338558
Yu Zheng, Judy S Currier, Michael D Hughes
Background: Evaluation of heterogeneity of treatment effect among participants in large randomized clinical trials may provide insights as to the value of individualizing clinical decisions. The effect modeling approach to predictive heterogeneity of treatment effect analysis offers a promising framework for heterogeneity of treatment effect estimation by simultaneously considering multiple patient characteristics and their interactions with treatment to predict differences in outcomes between randomized treatments. However, its implementation in clinical research remains limited, and so we provide a detailed example of its application in a randomized trial that compared raltegravir-based vs darunavir/ritonavir-based therapy as initial antiretroviral treatments for people living with HIV.
Methods: The heterogeneity of treatment effect analysis used a two-step procedure, in which a working proportional hazards model was first selected to construct an index score ranking the treatment difference for individuals, and a second calibration step then used a non-parametric kernel approach to estimate the true treatment difference for participants with similar index scores. Sensitivity and supplemental analyses were conducted to evaluate the robustness of the results. We further explored the impact of covariates on heterogeneity of treatment effect and the choice between treatments.
Results: The heterogeneity of treatment effect analysis showed that while there is a clear trend favoring raltegravir-based therapy over darunavir/ritonavir-based therapy for the vast majority of the target population, there was a small subset of patients, characterized by more advanced HIV disease status, for whom the choice between the two treatments might be equivocal.
Conclusions: Through this example, we illustrate how an exploratory heterogeneity of treatment effect analysis might provide further insights into the comparative efficacy of treatments evaluated in a randomized trial. We also highlight some of the issues in implementing and interpreting effect modeling analyses in randomized trials.
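The abstract does not give the model specification, so the sketch below illustrates only the first step in a generic way: fit a working Cox proportional hazards model with treatment-by-covariate interactions and use the fitted coefficients to build a per-patient index score for the predicted treatment difference (here, the predicted log hazard ratio). The kernel calibration step is omitted, and the data, covariates, and column names are hypothetical.

```python
# Minimal sketch of the first step of an effect modeling HTE analysis (hypothetical data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),
    "cd4": rng.normal(350, 100, n),        # hypothetical baseline covariates
    "viral_load": rng.normal(4.5, 0.8, n),
})
# Simulate a time-to-event outcome whose treatment effect varies with baseline CD4 count.
lin = 0.002 * df.cd4 + df.treat * (-0.6 + 0.004 * (df.cd4 - 350))
t_true = rng.exponential(1.0 / np.exp(lin - 1.0))
censor = rng.uniform(0, 3, n)
df["time"] = np.minimum(t_true, censor)
df["event"] = (t_true <= censor).astype(int)

# Working model with treatment-by-covariate interaction terms.
df["treat_x_cd4"] = df.treat * df.cd4
df["treat_x_vl"] = df.treat * df.viral_load
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")

# Index score: predicted log hazard ratio (treated vs untreated) for each patient,
# used to rank individuals by their predicted treatment difference.
b = cph.params_
df["index_score"] = b["treat"] + b["treat_x_cd4"] * df.cd4 + b["treat_x_vl"] * df.viral_load
print(df["index_score"].describe())
```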
{"title":"Precision medicine evaluation of heterogeneity of treatment effect for a time-to-event outcome with application in a trial of Initial treatment for people living with HIV.","authors":"Yu Zheng, Judy S Currier, Michael D Hughes","doi":"10.1177/17407745251338558","DOIUrl":"10.1177/17407745251338558","url":null,"abstract":"<p><p>BackgroundEvaluation of heterogeneity of treatment effect among participants in large randomized clinical trials may provide insights as to the value of individualizing clinical decisions. The effect modeling approach to predictive heterogeneity of treatment effect analysis offers a promising framework for heterogeneity of treatment effect estimation by simultaneously considering multiple patient characteristics and their interactions with treatment to predict differences in outcomes between randomized treatments. However, its implementation in clinical research remains limited and so we provide a detailed example of its application in a randomized trial that compared raltegravir-based vs darunavir/ritonavir-based therapy as initial antiretroviral treatments for people living with HIV.MethodsThe heterogeneity of treatment effect analysis used a two-step procedure, in which a working proportional hazards model was first selected to construct an index score for ranking the treatment difference for individuals, and then a second calibration step used a non-parametric kernel approach to estimate the true treatment difference for participants with similar index scores. Sensitivity and supplemental analyses were conducted to evaluate the robustness of the results. We further explored the impact of covariates on heterogeneity of treatment effect and the choice between treatments.ResultsThe heterogeneity of treatment effect analysis showed that while there is a clear trend favoring raltegravir-based therapy over darunavir/ritonavir-based therapy for the vast majority of the target population, there were a small subset of patients, characterized by more advanced HIV disease status, for whom the choice between the two treatments might be equivocal.ConclusionsThrough this example, we illustrate how an exploratory heterogeneity of treatment effect analysis might provide further insights into the comparative efficacy of treatments evaluated in a randomized trial. We also highlight some of the issues in implementing and interpreting effect modeling analyses in randomized trials.</p>","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"559-570"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12353116/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144119117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Standardising management of consent withdrawal and other clinical trial participation changes: The UKCRC Registered Clinical Trials Unit Network's PeRSEVERE project
Pub Date: 2025-10-01; Epub Date: 2025-07-04; DOI: 10.1177/17407745251344524
William J Cragg, Laura Clifton-Hadley, Jeremy Dearling, Susan J Dutton, Katie Gillies, Pollyanna Hardy, Daniel Hind, Søren Holm, Kerenza Hood, Anna Kearney, Rebecca Lewis, Sarah Markham, Lauren Moreau, Tra My Pham, Amanda Roberts, Sharon Ruddock, Mirjana Sirovica, Ratna Sohanpal, Puvan Tharmanathan, Rejina Verghis
Background/aims: Existing regulatory and ethical guidance does not address real-life complexities in how clinical trial participants' level of participation may change. If these complexities are inappropriately managed, there may be negative consequences for trial participants and the integrity of the trials they participate in. These concerns have been highlighted over many years, but there remains no single, comprehensive guidance for managing participation changes in ways that address real-life complexities while maximally promoting participant interests and trial integrity. Motivated by the lack of agreed standards, and observed variability in practice, representatives from academic clinical trials units and linked organisations in the United Kingdom initiated the PeRSEVERE project (PRincipleS for handling end-of-participation EVEnts in clinical trials REsearch) to agree on guiding principles and explore how these principles should be implemented.
Methods: We developed the PeRSEVERE principles through discussion and debate within a large, multidisciplinary collaboration, including research professionals and public contributors. We took an inclusive approach to drafting the principles, incorporating new ideas if they were within project scope. Our draft principles were scrutinised through an international consultation survey which focussed on the principles' clarity, feasibility, novelty and acceptability. Survey responses were analysed descriptively (for category questions) and using a combination of deductive and inductive analysis (for open questions). We used predefined rules to guide feedback handling. After finalising the principles, we developed accompanying implementation guidance from several sources.
Results: In total, 280 people from 9 countries took part in the consultation survey. Feedback showed strong support for the principles, with 96% of respondents agreeing with the principles' key messages. Based on our predefined rules, it was not necessary to amend our draft principles, but comments were nonetheless used to enhance the final project outputs. Our 17 finalised principles comprise 7 fundamental, 'overarching' principles, 6 about trial design and setup, 2 covering data collection and monitoring, and 2 on trial analysis and reporting.
Conclusion: We devised a comprehensive set of guiding principles, with detailed practical recommendations, to aid the management of clinical trial participation changes, building on existing ethical and regulatory texts. Our outputs reflect the contributions of a substantial number of individuals, including public contributors and research professionals with various specialisms. This lends weight to our recommendations, which have implications for everyone who designs, funds, conducts, oversees or participates in trials. We suggest our principles could lead to improved standards in clinical trials and better exper
{"title":"Standardising management of consent withdrawal and other clinical trial participation changes: The UKCRC Registered Clinical Trials Unit Network's PeRSEVERE project.","authors":"William J Cragg, Laura Clifton-Hadley, Jeremy Dearling, Susan J Dutton, Katie Gillies, Pollyanna Hardy, Daniel Hind, Søren Holm, Kerenza Hood, Anna Kearney, Rebecca Lewis, Sarah Markham, Lauren Moreau, Tra My Pham, Amanda Roberts, Sharon Ruddock, Mirjana Sirovica, Ratna Sohanpal, Puvan Tharmanathan, Rejina Verghis","doi":"10.1177/17407745251344524","DOIUrl":"10.1177/17407745251344524","url":null,"abstract":"<p><strong>Background/aims: </strong>Existing regulatory and ethical guidance does not address real-life complexities in how clinical trial participants' level of participation may change. If these complexities are inappropriately managed, there may be negative consequences for trial participants and the integrity of trials they participate in. These concerns have been highlighted over many years, but there remains no single, comprehensive guidance for managing participation changes in ways that address real-life complexities while maximally promoting participant interests and trial integrity. Motivated by the lack of agreed standards, and observed variability in practice, representatives from academic clinical trials units and linked organisations in the United Kingdom initiated the PeRSEVERE project (PRincipleS for handling end-of-participation EVEnts in clinical trials REsearch) to agree on guiding principles and explore how these principles should be implemented.</p><p><strong>Methods: </strong>We developed the PeRSEVERE principles through discussion and debate within a large, multidisciplinary collaboration, including research professionals and public contributors. We took an inclusive approach to drafting the principles, incorporating new ideas if they were within project scope. Our draft principles were scrutinised through an international consultation survey which focussed on the principles' clarity, feasibility, novelty and acceptability. Survey responses were analysed descriptively (for category questions) and using a combination of deductive and inductive analysis (for open questions). We used predefined rules to guide feedback handling. After finalising the principles, we developed accompanying implementation guidance from several sources.</p><p><strong>Results: </strong>In total, 280 people from 9 countries took part in the consultation survey. Feedback showed strong support for the principles with 96% of respondents agreeing with the principles' key messages. Based on our predefined rules, it was not necessary to amend our draft principles, but comments were nonetheless used to enhance the final project outputs. Our 17 finalised principles comprise 7 fundamental, 'overarching' principles, 6 about trial design and setup, 2 covering data collection and monitoring, and 2 on trial analysis and reporting.</p><p><strong>Conclusion: </strong>We devised a comprehensive set of guiding principles, with detailed practical recommendations, to aid the management of clinical trial participation changes, building on existing ethical and regulatory texts. Our outputs reflect the contributions of a substantial number of individuals, including public contributors and research professionals with various specialisms. This lends weight to our recommendations, which have implications for everyone who designs, funds, conducts, oversees or participates in trials. 
We suggest our principles could lead to improved standards in clinical trials and better exper","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"578-596"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12476473/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144559362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quality management of a multi-center randomized controlled feeding trial: A prospective observational study
Pub Date: 2025-10-01; DOI: 10.1177/17407745251324653
Xiayan Chen, Huijuan Li, Lin Feng, Xi Lan, Shuyi Li, Yanfang Zhao, Guo Zeng, Huilian Zhu, Jianqin Sun, Yanfang Wang, Yangfeng Wu
Background: Nutrition and dietary trials are often prone to bias, leading to inaccurate or questionable estimates of intervention efficacy. However, reports on quality management practices of well-controlled dietary trials are scarce. This study aims to introduce the quality management system of the Diet, ExerCIse and CarDiovascular hEalth-Diet Study and report its performance in ensuring study quality.
Methods: The quality management system consisted of a study coordinating center, trial governance, and quality control measures covering study design, conduct, and data analysis and reporting. Metrics for evaluating the performance of the system were collected throughout trial development and conduct, from September 2016 to June 2021, covering major activities at the coordinating center, study sites, and central laboratories, with a focus on protocol amendments, protocol deviations (eligibility, fidelity, confounder management, loss to follow-up and outside-of-window visits, and blinding success), and measurement accuracy.
Results: Three amendments to the study protocol enhanced feasibility. All 265 participants met the eligibility criteria. Among them, only 3% were lost to the primary outcome follow-up measurement. More than 95% of participants completed the study; they consumed more than 96% of the study meals, and more than 94% consumed more than 18 meals per week, with no between-group differences. Online monitoring of nutrient targets for the intervention diet showed that all targets were achieved except fiber intake, which averaged 4.3 g below target. Only 3% of participants experienced a body weight change greater than 2.0 kg, and 3% had medication changes that were not allowed by the study. James' blinding index at the end of the study was 0.68. The end digits of both systolic and diastolic blood pressure readings were evenly distributed. For laboratory measures, 100% of standard samples, 97% of blood-split samples, and 87% of urine-split samples had test results within the acceptable range. Only 1.4% of data items required queries, and only 30% of these needed corrections.
Discussion: The Diet, ExerCIse and CarDiovascular hEalth-Diet Study quality management system provides a framework for conducting a high-quality dietary intervention clinical trial.
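The end-digit check mentioned in the Results can be done with a chi-square goodness-of-fit test against a uniform distribution over the digits 0-9, a standard digit-preference check. The sketch below is a generic illustration on hypothetical readings, not the trial's actual procedure.

```python
# Terminal-digit (digit-preference) check for blood pressure readings (hypothetical data).
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(3)
sbp = rng.integers(95, 180, size=265)      # hypothetical systolic readings, one per participant
digits = sbp % 10
observed = np.bincount(digits, minlength=10)
stat, p_value = chisquare(observed)        # default null: equal frequency for digits 0-9
print(f"terminal-digit chi-square = {stat:.1f}, p = {p_value:.2f}")
```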
{"title":"Quality management of a multi-center randomized controlled feeding trial: A prospective observational study.","authors":"Xiayan Chen, Huijuan Li, Lin Feng, Xi Lan, Shuyi Li, Yanfang Zhao, Guo Zeng, Huilian Zhu, Jianqin Sun, Yanfang Wang, Yangfeng Wu","doi":"10.1177/17407745251324653","DOIUrl":"10.1177/17407745251324653","url":null,"abstract":"<p><p>BackgroundNutrition and dietary trials are often prone to bias, leading to inaccurate or questionable estimates of intervention efficacy. However, reports on quality management practices of well-controlled dietary trials are scarce. This study aims to introduce the quality management system of the Diet, ExerCIse and CarDiovascular hEalth-Diet Study and report its performance in ensuring study quality.MethodsThe quality management system consisted of a study coordinating center, trial governance, and quality control measures covering study design, conduct, and data analysis and reporting. Metrics for evaluating the performance of the system were collected throughout the whole trial development and conducted from September 2016 to June 2021, covering major activities at the coordinating center, study sites, and central laboratories, with a focus on the protocol amendments, protocol deviations (eligibility, fidelity, confounders management, loss to follow-up and outside-of-window visits, and blindness success), and measurement accuracy.ResultsThree amendments to the study protocol enhanced feasibility. All participants (265) met the eligibility criteria. Among them, only 3% were lost to the primary outcome follow-up measurement. More than 95% of participants completed the study, they consumed more than 96% of the study meals, and more than 94% of participants consumed more than 18 meals per week, with no between-group differences. Online monitoring of nutrient targets for the intervention diet showed that all targets were achieved except for the fiber intake, which was 4.3 g less on average. Only 3% experienced a body weight change greater than 2.0 kg, and 3% had medication changes which were not allowed by the study. James' blinding index at the end of the study was 0.68. The end digits of both systolic and diastolic blood pressure readings were distributed equally. For laboratory measures, 100% of standard samples, 97% of blood-split samples, and 87% of urine-split samples had test results within the acceptable range. Only 1.4% of data items required queries, for which only 30% needed corrections.DiscussionThe Diet, ExerCIse and CarDiovascular hEalth-Diet Study quality management system provides a framework for conducting a high-quality dietary intervention clinical trial.</p>","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"527-537"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143751500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing a research coordinator workforce: A case study of a hospital and university collaboration
Pub Date: 2025-10-01; Epub Date: 2025-06-10; DOI: 10.1177/17407745251338574
April M Crawford, Steven L Arxer, James P LePage
{"title":"Developing a research coordinator workforce: A case study of a hospital and university collaboration.","authors":"April M Crawford, Steven L Arxer, James P LePage","doi":"10.1177/17407745251338574","DOIUrl":"10.1177/17407745251338574","url":null,"abstract":"","PeriodicalId":10685,"journal":{"name":"Clinical Trials","volume":" ","pages":"632-634"},"PeriodicalIF":2.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144257494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}