
Research Synthesis Methods: Latest Publications

Automation tools to support undertaking scoping reviews.
IF 5 | CAS Zone 2 (Biology) | Q1 Mathematical & Computational Biology | Pub Date: 2024-11-01 | Epub Date: 2024-06-17 | DOI: 10.1002/jrsm.1731
Hanan Khalil, Danielle Pollock, Patricia McInerney, Catrin Evans, Erica B Moraes, Christina M Godfrey, Lyndsay Alexander, Andrea Tricco, Micah D J Peters, Dawid Pieper, Ashrita Saran, Daniel Ameen, Petek Eylul Taneri, Zachary Munn

Objective: This paper describes several automation tools and software that can be considered during evidence synthesis projects and provides guidance for their integration in the conduct of scoping reviews.

Study design and setting: The guidance presented in this work is adapted from the results of a scoping review and consultations with the JBI Scoping Review Methodology group.

Results: This paper describes several reliable, validated automation tools and software packages that can be used to enhance the conduct of scoping reviews. Developments in the automation of systematic reviews, and more recently scoping reviews, are continuously evolving. We detail several helpful tools in the order of the key steps recommended by JBI's methodological guidance for undertaking scoping reviews, including team establishment, protocol development, searching, de-duplication, screening titles and abstracts, data extraction, data charting, and report writing. While we include several reliable tools and software packages that can be used for the automation of scoping reviews, the tools mentioned have some limitations. For example, some are available in English only, and their lack of integration with other tools results in limited interoperability.

Conclusion: This paper highlights several useful automation tools and software programs for undertaking each step of a scoping review. This guidance has the potential to inform collaborative efforts aimed at developing evidence-informed, integrated automation tools and software packages for enhancing the conduct of high-quality scoping reviews.
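The abstract lists de-duplication among the key automation steps without naming a specific algorithm. As a minimal illustrative sketch (the record fields and the normalization rule are assumptions, not drawn from the paper), duplicate search results can be collapsed by DOI or by a normalized title:

```python
import re

def normalize(title):
    """Lowercase, strip punctuation, and collapse whitespace for near-exact matching."""
    return re.sub(r"\s+", " ", re.sub(r"[^a-z0-9 ]", "", title.lower())).strip()

def deduplicate(records):
    """Keep the first record for each key (DOI if present, otherwise normalized title)."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalize(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical search exports: two variants of the same record plus one distinct record.
records = [
    {"title": "Automation tools to support scoping reviews.", "doi": "10.1002/jrsm.1731"},
    {"title": "Automation Tools to Support Scoping Reviews", "doi": "10.1002/jrsm.1731"},
    {"title": "A different paper", "doi": None},
]
print(len(deduplicate(records)))  # → 2
```

Real de-duplication tools typically add fuzzy matching on authors, year, and journal; this sketch shows only the exact-key core of the step.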

Citations: 0
A comparison of two models for detecting inconsistency in network meta-analysis.
IF 5 | CAS Zone 2 (Biology) | Q1 Mathematical & Computational Biology | Pub Date: 2024-11-01 | Epub Date: 2024-07-04 | DOI: 10.1002/jrsm.1734
Lu Qin, Shishun Zhao, Wenlai Guo, Tiejun Tong, Ke Yang

The application of network meta-analysis is becoming increasingly widespread, and a successful implementation requires that the direct and indirect comparison results be consistent. Because of this, proper detection of inconsistency is often a key issue in network meta-analysis, as it determines whether the results can be reliably used as clinical guidance. Among the existing methods for detecting inconsistency, two commonly used models are the design-by-treatment interaction model and the side-splitting models. While the original side-splitting model was initially estimated using a Bayesian approach, in this context, we employ the frequentist approach. In this paper, we review these two types of models comprehensively and explore their relationship by treating the data structure of network meta-analysis as missing data and parameterizing the potential complete data for each model. Through both analytical and numerical studies, we verify that the side-splitting models are specific instances of the design-by-treatment interaction model, incorporating additional assumptions or holding under certain data structures. Moreover, the design-by-treatment interaction model exhibits robust performance in inconsistency detection across different data structures compared to the side-splitting models. Finally, as practical guidance for inconsistency detection, we recommend utilizing the design-by-treatment interaction model when there is a lack of information about the potential location of inconsistency. By contrast, the side-splitting models can serve as a supplementary method, especially when the number of studies in each design is small, enabling a comprehensive assessment of inconsistency from both global and local perspectives.
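The core idea behind side-splitting in a single treatment loop can be illustrated numerically: a direct estimate for one comparison is set against the indirect estimate formed through the third treatment, and their difference is scaled by its standard error. The log-odds-ratios and variances below are toy values, not data from the paper:

```python
from math import sqrt

# Hypothetical summary log-odds-ratios and variances for a treatment loop A-B-C.
direct_AB, var_AB = 0.50, 0.04   # A vs B, estimated directly
direct_AC, var_AC = 0.80, 0.05   # A vs C
direct_BC, var_BC = 0.35, 0.06   # B vs C

# Indirect A-vs-B estimate through C: (A vs C) - (B vs C); variances add.
indirect_AB = direct_AC - direct_BC
var_indirect = var_AC + var_BC

# Inconsistency factor and its z-statistic (the side-splitting logic in one loop).
diff = direct_AB - indirect_AB
se = sqrt(var_AB + var_indirect)
z = diff / se
print(round(indirect_AB, 2), round(diff, 2), round(z, 2))  # → 0.45 0.05 0.13
```

A small |z| (as here) gives no evidence of inconsistency; the full models in the paper generalize this comparison across all designs in the network simultaneously.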

Citations: 0
A discrete time-to-event model for the meta-analysis of full ROC curves.
IF 5 | CAS Zone 2 (Biology) | Q1 Mathematical & Computational Biology | Pub Date: 2024-11-01 | Epub Date: 2024-09-06 | DOI: 10.1002/jrsm.1753
Ferdinand Valentin Stoye, Claudia Tschammler, Oliver Kuss, Annika Hoyer

The development of new statistical models for the meta-analysis of diagnostic test accuracy studies is still an ongoing field of research, especially with respect to summary receiver operating characteristic (ROC) curves. In the recently published updated version of the "Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy", the authors point to the challenges of this kind of meta-analysis and propose two approaches. However, both come with disadvantages, such as the non-straightforward choice of priors in Bayesian models or the requirement of a two-step approach in which parameters are estimated for the individual studies before the results are summarized. As an alternative, we propose a novel model that applies methods from time-to-event analysis. To this end, we use the discrete proportional hazards approach to treat the different diagnostic thresholds, which are reported by the individual studies and provide the means to estimate sensitivity and specificity, as categorical variables in a generalized linear mixed model, using both the logit- and the asymmetric cloglog-link. This leads to a model specification with threshold-specific discrete hazards, avoiding a linear dependency between thresholds, discrete hazard, and sensitivity/specificity and thus increasing model flexibility. We compare the resulting models to approaches from the literature in a simulation study. While most approaches estimate the area under the summary ROC curve comparably well, the results reveal substantial differences in the estimated sensitivities and specificities. We also show the practical applicability of the models to data from a meta-analysis for the screening of type 2 diabetes.
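The discrete-hazard construction can be sketched numerically: given threshold-specific discrete hazards, the probability of a test result exceeding each threshold is a cumulative product of (1 - hazard), which automatically yields monotone sensitivity/specificity pairs across thresholds. The hazard values below are assumed toy numbers for illustration, not fitted estimates from the authors' model:

```python
def survival(hazards):
    """Cumulative products of (1 - h_j): probability of a value above each threshold."""
    out, s = [], 1.0
    for h in hazards:
        s *= 1.0 - h
        out.append(s)
    return out

# Toy threshold-specific discrete hazards (assumptions, ordered low -> high threshold):
haz_diseased = [0.05, 0.10, 0.20]  # diseased often exceed high thresholds -> high survival
haz_healthy  = [0.40, 0.50, 0.60]  # healthy rarely exceed high thresholds -> low survival

sens = survival(haz_diseased)                  # sensitivity at each threshold
spec = [1 - s for s in survival(haz_healthy)]  # specificity at each threshold
for se, sp in zip(sens, spec):
    print(round(se, 3), round(sp, 3))
```

The resulting (1 - specificity, sensitivity) pairs trace a monotone ROC curve by construction, which is the flexibility gain over a linear dependence on the threshold index.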

Citations: 0
Reduce, reuse, recycle: Introducing MetaPipeX, a framework for analyses of multi-lab data.
IF 5 | CAS Zone 2 (Biology) | Q1 Mathematical & Computational Biology | Pub Date: 2024-11-01 | Epub Date: 2024-06-28 | DOI: 10.1002/jrsm.1733
Jens H Fünderich, Lukas J Beinhauer, Frank Renkewitz

Multi-lab projects are large-scale collaborations between participating data collection sites that gather empirical evidence and (usually) analyze that evidence using meta-analyses. They are a valuable form of scientific collaboration, produce outstanding data sets, and are a great resource for third-party researchers. Their data may be reanalyzed and used in research synthesis. Their repositories and code could provide guidance to future projects of this kind. However, while multi-labs are similar in structure and aggregate their data using meta-analyses, they deploy a variety of different solutions regarding the storage structure of their repositories, the way the (analysis) code is structured, and the file formats they provide. If this trend continues, anyone who wants to work with data from several of these projects, or combine their datasets, faces ever-increasing complexity. Some of that complexity can be avoided. Here, we introduce MetaPipeX, a standardized framework to harmonize, document, and analyze multi-lab data. It features a pipeline conceptualization of the analysis and documentation process, an R package that implements both, and a Shiny app (https://www.apps.meta-rep.lmu.de/metapipex/) that allows users to explore and visualize these data sets. We introduce the framework by describing its components and applying it to a practical example. Engaging with this form of collaboration and integrating it further into research practice will certainly benefit the quantitative sciences, and we hope the framework provides a structure and tools that reduce effort for anyone who creates, re-uses, harmonizes, or learns about multi-lab replication projects.
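The abstract does not show MetaPipeX's internals; as a generic sketch of the aggregation step that multi-lab pipelines rely on, here is DerSimonian-Laird random-effects pooling of hypothetical per-lab effect sizes (the numbers are invented for illustration):

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate using the DerSimonian-Laird tau^2 estimator."""
    w = [1 / v for v in variances]                                   # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)      # fixed-effect mean
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))    # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                    # between-lab variance
    w_star = [1 / (v + tau2) for v in variances]                     # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# Hypothetical per-lab effect sizes (e.g., Cohen's d) and sampling variances:
effects = [0.30, 0.45, 0.10, 0.52]
variances = [0.02, 0.03, 0.02, 0.04]
pooled, tau2 = dersimonian_laird(effects, variances)
print(round(pooled, 3), round(tau2, 4))
```

A harmonization framework standardizes how such per-lab estimates and variances are stored so that this pooling step can be run identically across projects.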

Citations: 0
A re-analysis of about 60,000 sparse data meta-analyses suggests that using an adequate method for pooling matters.
IF 5 | CAS Zone 2 (Biology) | Q1 Mathematical & Computational Biology | Pub Date: 2024-11-01 | Epub Date: 2024-08-13 | DOI: 10.1002/jrsm.1748
Maxi Schulz, Malte Kramer, Oliver Kuss, Tim Mathes

In sparse data meta-analyses (with few trials or zero events), conventional methods may distort results. Although better-performing one-stage methods have become available in recent years, their use remains limited in practice. This study examines the impact of using conventional methods compared to one-stage models by re-analysing meta-analyses from the Cochrane Database of Systematic Reviews in scenarios with zero-event trials and with few trials. For each scenario, we computed one-stage methods (generalised linear mixed model [GLMM], beta-binomial model [BBM], Bayesian binomial-normal hierarchical model using a weakly informative prior [BNHM-WIP]) and compared them with conventional methods (Peto odds ratio [PETO] and DerSimonian-Laird method [DL] for zero-event trials; DL, Paule-Mandel [PM], and restricted maximum likelihood [REML] for few trials). While all methods showed similar treatment effect estimates, substantial variability in statistical precision emerged. Conventional methods generally produced smaller confidence intervals (CIs) than one-stage models in the zero-event situation. In the few-trials scenario, CI lengths were widest for the BBM on average, and significance often changed compared to PM and REML, despite the relatively wide CIs of the latter. In agreement with simulations and guidelines for meta-analyses with zero-event trials, our results suggest that one-stage models are preferable. The best model can either be selected based on the data situation, or a method that performs well across various situations can be used. In the few-trials situation, using the BBM, with PM or REML additionally for sensitivity analyses, appears reasonable when conservative results are desired. Overall, our results encourage careful method selection.
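Of the conventional methods compared, the Peto one-step odds ratio is notable because it can handle a zero-event arm without a continuity correction. A minimal sketch for a single 2x2 table, with hypothetical trial counts:

```python
from math import exp

def peto_log_or(events_t, n_t, events_c, n_c):
    """Peto one-step log-odds-ratio (O - E over V) for one 2x2 table."""
    n = n_t + n_c
    events = events_t + events_c
    expected = events * n_t / n              # expected events in the treatment arm
    o_minus_e = events_t - expected
    # Hypergeometric variance of the treatment-arm event count:
    v = (events * (n - events) * n_t * n_c) / (n ** 2 * (n - 1))
    return o_minus_e / v, v                  # log(OR) and its variance weight

# Hypothetical trial with zero events in the treatment arm, 4 in the control arm:
log_or, v = peto_log_or(events_t=0, n_t=100, events_c=4, n_c=100)
print(round(exp(log_or), 3))
```

No 0.5 is added to any cell, which is exactly why PETO is a common conventional choice for zero-event trials; the one-stage models in the study avoid the issue differently, by modelling the counts directly.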

Citations: 0
Fast-and-frugal decision tree for the rapid critical appraisal of systematic reviews.
IF 5 | CAS Zone 2 (Biology) | Q1 Mathematical & Computational Biology | Pub Date: 2024-11-01 | Epub Date: 2024-09-05 | DOI: 10.1002/jrsm.1754
Robert C Lorenz, Mirjam Jenny, Anja Jacobs, Katja Matthias

Conducting high-quality overviews of reviews (OoR) is time-consuming. Because the quality of systematic reviews (SRs) varies, it is necessary to critically appraise SRs when conducting an OoR. A well-established appraisal tool is A Measurement Tool to Assess Systematic Reviews (AMSTAR) 2, which takes about 15-32 min per application. To save time, we developed two fast-and-frugal decision trees (FFTs) for assessing the methodological quality of SRs for an OoR, applied either during the full-text screening stage (Screening FFT) or to the resulting pool of SRs (Rapid Appraisal FFT). To build a data set for developing the FFTs, we identified published AMSTAR 2 appraisals. Overall confidence ratings of the AMSTAR 2 were used as the criterion and the 16 items as cues. One thousand five hundred and nineteen appraisals were obtained from 24 publications and divided into training and test data sets. The resulting Screening FFT consists of three items and correctly identifies all non-critically low-quality SRs (sensitivity of 100%), but has a positive predictive value of 59%. The three-item Rapid Appraisal FFT correctly identifies 80% of the high-quality SRs and 97% of the low-quality SRs, resulting in an accuracy of 95%. The FFTs require about 10% of the 16 AMSTAR 2 items. The Screening FFT may be applied during full-text screening to exclude SRs of critically low quality. The Rapid Appraisal FFT may be applied to the final SR pool to identify SRs that might be of high methodological quality.
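A fast-and-frugal tree reaches a decision by checking one cue at a time, with an immediate exit possible at every step, which is what makes it faster than scoring all 16 items. The three cue names below are hypothetical; the abstract does not specify which AMSTAR 2 items the published FFTs use:

```python
def screening_fft(items):
    """Fast-and-frugal tree: sequential cues, each allowing an immediate exit.

    Cue names are hypothetical placeholders, not the items selected in the paper.
    """
    if not items["protocol_registered"]:
        return "critically low"          # first cue exits on failure
    if not items["adequate_search"]:
        return "critically low"          # second cue
    if not items["rob_assessed"]:
        return "critically low"          # third cue
    return "not critically low"          # passed all three cues -> retain the SR

sr = {"protocol_registered": True, "adequate_search": True, "rob_assessed": False}
print(screening_fft(sr))  # → critically low
```

With three binary cues, at most three checks are needed per SR, compared to 16 items for a full AMSTAR 2 appraisal, which is the source of the reported time savings.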

Citations: 0
Narrative reanalysis: A methodological framework for a new brand of reviews.
IF 5 | CAS Zone 2 (Biology) | Q1 Mathematical & Computational Biology | Pub Date: 2024-11-01 | Epub Date: 2024-09-04 | DOI: 10.1002/jrsm.1751
Steven Hall, Erin Leeder

In response to the evolving needs of knowledge synthesis, this manuscript introduces the concept of narrative reanalysis, a method that refines data from initial reviews, such as systematic reviews, to focus on specific sub-phenomena. Unlike traditional narrative reviews, which lack the methodological rigor of systematic reviews and are broader in scope, our methodological framework for narrative reanalysis applies a structured, systematic framework to the interpretation of existing data. This approach enables a focused investigation of nuanced topics within a broader dataset, enhancing understanding and generating new insights. We detail a five-stage methodological framework that guides the narrative reanalysis process: (1) retrieval of an initial review, (2) identification and justification of a sub-phenomenon, (3) expanded search, selection, and extraction of data, (4) reanalysis of the sub-phenomenon, and (5) writing the report. The proposed framework aims to standardize narrative reanalysis, advocating for its use in academic and research settings to foster more rigorous and insightful literature reviews. This approach bridges the methodological gap between narrative and systematic reviews, offering researchers a valuable tool for exploring detailed aspects of broader topics without the extensive resources required for systematic reviews.

Towards the automatic risk of bias assessment on randomized controlled trials: A comparison of RobotReviewer and humans.
IF 5.0 | CAS Zone 2 (Biology) | Q1 Mathematical & Computational Biology | Pub Date: 2024-11-01 | Epub Date: 2024-09-26 | DOI: 10.1002/jrsm.1761
Yuan Tian, Xi Yang, Suhail A Doi, Luis Furuya-Kanamori, Lifeng Lin, Joey S W Kwong, Chang Xu

RobotReviewer is a tool for automatically assessing the risk of bias in randomized controlled trials, but there is limited evidence of its reliability. We evaluated the agreement between RobotReviewer and humans regarding risk of bias assessment based on 1955 randomized controlled trials. The risk of bias in these trials was assessed via two different approaches: (1) manually by human reviewers, and (2) automatically by RobotReviewer. The manual assessment was performed independently by two groups, with two additional rounds of verification. The agreement between RobotReviewer and humans was measured via the concordance rate and Cohen's kappa statistic, based on a comparison of the binary classification of the risk of bias (low vs. high/unclear) as restricted by RobotReviewer. The concordance rates varied by domain, ranging from 63.07% to 83.32%. Cohen's kappa statistics showed poor agreement between humans and RobotReviewer for allocation concealment (κ = 0.25, 95% CI: 0.21-0.30) and blinding of outcome assessors (κ = 0.27, 95% CI: 0.23-0.31), while agreement was moderate for random sequence generation (κ = 0.46, 95% CI: 0.41-0.50) and blinding of participants and personnel (κ = 0.59, 95% CI: 0.55-0.64). The findings demonstrate that there were domain-specific differences in the level of agreement between RobotReviewer and humans. We suggest that it might be a useful auxiliary tool, but the specific manner of its integration as a complementary tool requires further discussion.
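The two agreement measures reported in this abstract, concordance rate and Cohen's kappa, can be reproduced on toy data. The sketch below is illustrative only and is not part of the study: the risk-of-bias calls are made up, and the function names are hypothetical.

```python
from collections import Counter

def agreement_stats(rater_a, rater_b):
    """Concordance rate and Cohen's kappa for two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement under independence, from each rater's marginal label rates.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum((ca[label] / n) * (cb[label] / n) for label in set(ca) | set(cb))
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Made-up risk-of-bias calls ("low" vs "high"/unclear collapsed) on 10 trials.
human = ["low", "low", "high", "low", "high", "low", "high", "low", "low", "high"]
robot = ["low", "high", "high", "low", "high", "low", "low", "low", "low", "high"]
concordance, kappa = agreement_stats(human, robot)
print(f"concordance={concordance:.2f}, kappa={kappa:.3f}")  # concordance=0.80, kappa=0.583
```

Note how kappa (0.583) is well below the raw concordance (0.80): with heavily imbalanced marginals, high raw agreement can occur by chance, which is why the study reports both.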

Checking the inventory: Illustrating different methods for individual participant data meta-analytic structural equation modeling.
IF 5.0 | CAS Zone 2 (Biology) | Q1 Mathematical & Computational Biology | Pub Date: 2024-11-01 | Epub Date: 2024-08-13 | DOI: 10.1002/jrsm.1735
Lennert J Groot, Kees-Jan Kan, Suzanne Jak

Researchers may have at their disposal the raw data of the studies they wish to meta-analyze. The goal of this study is to identify, illustrate, and compare a range of possible analysis options for researchers who have raw data available and want to fit a structural equation model (SEM) to these data. This study illustrates techniques that directly analyze the raw data, such as multilevel and multigroup SEM, and techniques based on summary statistics, such as correlation-based meta-analytical structural equation modeling (MASEM), discussing differences in procedures, capabilities, and outcomes. This is done by analyzing a previously published collection of datasets using open-source software. A path model reflecting the theory of planned behavior is fitted to these datasets using different techniques involving SEM. Apart from differences in the handling of missing data, the ability to include study-level moderators, and the conceptualization of heterogeneity, results show differences in parameter estimates and standard errors across methods. Further research is needed to properly formulate guidelines for applied researchers looking to conduct individual participant data MASEM.
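As a simplified illustration of the summary-statistics route this abstract contrasts with raw-data SEM: stage one of correlation-based MASEM pools each correlation across studies, commonly on Fisher's z scale with random-effects weights. The sketch below pools a single hypothetical attitude-intention correlation using DerSimonian-Laird estimation; it is not the authors' analysis, and full MASEM pools entire correlation matrices (e.g., with the metaSEM R package) before fitting the path model.

```python
import math

def pool_correlations(r_values, n_values):
    """DerSimonian-Laird random-effects pooling of correlations via Fisher's z."""
    z = [math.atanh(r) for r in r_values]
    v = [1.0 / (n - 3) for n in n_values]        # sampling variance of z
    w = [1.0 / vi for vi in v]                   # fixed-effect weights
    z_fixed = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
    # Q statistic and between-study variance tau^2 (truncated at zero).
    q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, z))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(z) - 1)) / c)
    # Re-weight with tau^2 added to each within-study variance.
    w_re = [1.0 / (vi + tau2) for vi in v]
    z_pooled = sum(wi * zi for wi, zi in zip(w_re, z)) / sum(w_re)
    return math.tanh(z_pooled)                   # back-transform to r

# Hypothetical attitude-intention correlations from four studies.
print(round(pool_correlations([0.45, 0.52, 0.38, 0.60], [120, 85, 200, 60]), 3))
```

The pooled correlation (here about 0.47) would feed, together with the rest of the pooled matrix, into the stage-two path model; differences between this route and directly modeling the raw data are exactly what the study compares.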

Development of the individual participant data integrity tool for assessing the integrity of randomised trials using individual participant data.
IF 5.0 | CAS Zone 2 (Biology) | Q1 Mathematical & Computational Biology | Pub Date: 2024-11-01 | Epub Date: 2024-08-18 | DOI: 10.1002/jrsm.1739
Kylie E Hunter, Mason Aberoumand, Sol Libesman, James X Sotiropoulos, Jonathan G Williams, Wentao Li, Jannik Aagerup, Ben W Mol, Rui Wang, Angie Barba, Nipun Shrestha, Angela C Webster, Anna Lene Seidler

Increasing integrity concerns in medical research have prompted the development of tools to detect untrustworthy studies. Existing tools primarily assess published aggregate data (AD), though scrutiny of individual participant data (IPD) is often required to detect trustworthiness issues. Thus, we developed the IPD Integrity Tool for detecting integrity issues in randomised trials with IPD available. This manuscript describes the development of this tool. We conducted a literature review to collate and map existing integrity items. These were discussed with an expert advisory group; agreed items were included in a standardised tool and automated where possible. We piloted this tool in two IPD meta-analyses (including 116 trials) and conducted preliminary validation checks on 13 datasets with and without known integrity issues. We identified 120 integrity items: 54 could be conducted using AD, 48 required IPD, and 18 were possible with AD, but more comprehensive with IPD. An initial reduced tool was developed through consensus involving 13 advisors, featuring 11 AD items across four domains, and 12 IPD items across eight domains. The tool was iteratively refined throughout piloting and validation. All studies with known integrity issues were accurately identified during validation. The final tool includes seven AD domains with 13 items and eight IPD domains with 18 items. The quality of evidence informing healthcare relies on trustworthy data. We describe the development of a tool to enable researchers, editors, and others to detect integrity issues using IPD. Detailed instructions for its application are published as a complementary manuscript in this issue.
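The abstract does not enumerate the tool's items, but one family of automatable IPD checks it alludes to can be sketched: flagging participant records that are exact duplicates on baseline variables, which can signal double entry or fabrication. This is a toy illustration, not the IPD Integrity Tool itself; the function, field names, and data below are all hypothetical.

```python
from collections import defaultdict

def flag_duplicate_records(records, keys=("age", "sex", "baseline_score")):
    """Return groups of row indices sharing identical values on all key fields.

    Exact duplicates among supposedly independent participants are an
    integrity signal that warrants manual follow-up, not proof of misconduct.
    """
    seen = defaultdict(list)
    for i, row in enumerate(records):
        seen[tuple(row[k] for k in keys)].append(i)
    return [idxs for idxs in seen.values() if len(idxs) > 1]

trial = [
    {"age": 54, "sex": "F", "baseline_score": 22.5},
    {"age": 61, "sex": "M", "baseline_score": 18.0},
    {"age": 54, "sex": "F", "baseline_score": 22.5},  # identical to row 0
    {"age": 47, "sex": "F", "baseline_score": 25.1},
]
print(flag_duplicate_records(trial))  # [[0, 2]]
```

Checks like this require the raw IPD, which is why, as the abstract notes, 48 of the 120 collated integrity items could not be run on published aggregate data alone.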
