Automation tools to support undertaking scoping reviews.
Pub Date: 2024-11-01 | Epub Date: 2024-06-17 | DOI: 10.1002/jrsm.1731
Hanan Khalil, Danielle Pollock, Patricia McInerney, Catrin Evans, Erica B Moraes, Christina M Godfrey, Lyndsay Alexander, Andrea Tricco, Micah D J Peters, Dawid Pieper, Ashrita Saran, Daniel Ameen, Petek Eylul Taneri, Zachary Munn
Objective: This paper describes several automation tools and software that can be considered during evidence synthesis projects and provides guidance for their integration in the conduct of scoping reviews.
Study design and setting: The guidance presented in this work is adapted from the results of a scoping review and consultations with the JBI Scoping Review Methodology group.
Results: This paper describes several reliable, validated automation tools and software packages that can be used to enhance the conduct of scoping reviews. Developments in the automation of systematic reviews, and more recently scoping reviews, are continuously evolving. We detail several helpful tools in the order of the key steps recommended by JBI's methodological guidance for undertaking scoping reviews, including team establishment, protocol development, searching, de-duplication, screening titles and abstracts, data extraction, data charting, and report writing. While we include several reliable tools and software packages that can be used for the automation of scoping reviews, the tools mentioned have some limitations. For example, some are available only in English, and their lack of integration with other tools results in limited interoperability.
Conclusion: This paper highlights several useful automation tools and software programs that can be used in undertaking each step of a scoping review. This guidance has the potential to inform collaborative efforts aimed at the development of evidence-informed, integrated automation tools and software packages for enhancing the conduct of high-quality scoping reviews.
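Several of the steps listed above, such as de-duplication of search results, are mechanical enough to prototype outside dedicated review software. As a minimal illustration (not tied to any specific tool discussed in the paper, and with hypothetical field names), the following Python sketch removes duplicate records by DOI and then by normalised title:

```python
import re

def normalise_title(title: str) -> str:
    """Lowercase a title and strip punctuation/extra whitespace before matching."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the first occurrence of each record, matching on DOI first, then on normalised title."""
    seen_dois, seen_titles, unique = set(), set(), []
    for rec in records:
        doi = (rec.get("doi") or "").lower()
        title = normalise_title(rec.get("title") or "")
        if (doi and doi in seen_dois) or (title and title in seen_titles):
            continue  # duplicate already retained
        if doi:
            seen_dois.add(doi)
        if title:
            seen_titles.add(title)
        unique.append(rec)
    return unique

# Example: the second record is the same article retrieved from another database.
records = [
    {"doi": "10.1002/jrsm.1731", "title": "Automation tools to support undertaking scoping reviews"},
    {"doi": "10.1002/JRSM.1731", "title": "Automation tools to support undertaking scoping reviews."},
]
print(len(deduplicate(records)))  # 1
```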
{"title":"Automation tools to support undertaking scoping reviews.","authors":"Hanan Khalil, Danielle Pollock, Patricia McInerney, Catrin Evans, Erica B Moraes, Christina M Godfrey, Lyndsay Alexander, Andrea Tricco, Micah D J Peters, Dawid Pieper, Ashrita Saran, Daniel Ameen, Petek Eylul Taneri, Zachary Munn","doi":"10.1002/jrsm.1731","DOIUrl":"10.1002/jrsm.1731","url":null,"abstract":"<p><strong>Objective: </strong>This paper describes several automation tools and software that can be considered during evidence synthesis projects and provides guidance for their integration in the conduct of scoping reviews.</p><p><strong>Study design and setting: </strong>The guidance presented in this work is adapted from the results of a scoping review and consultations with the JBI Scoping Review Methodology group.</p><p><strong>Results: </strong>This paper describes several reliable, validated automation tools and software that can be used to enhance the conduct of scoping reviews. Developments in the automation of systematic reviews, and more recently scoping reviews, are continuously evolving. We detail several helpful tools in order of the key steps recommended by the JBI's methodological guidance for undertaking scoping reviews including team establishment, protocol development, searching, de-duplication, screening titles and abstracts, data extraction, data charting, and report writing. While we include several reliable tools and software that can be used for the automation of scoping reviews, there are some limitations to the tools mentioned. For example, some are available in English only and their lack of integration with other tools results in limited interoperability.</p><p><strong>Conclusion: </strong>This paper highlighted several useful automation tools and software programs to use in undertaking each step of a scoping review. This guidance has the potential to inform collaborative efforts aiming at the development of evidence informed, integrated automation tools and software packages for enhancing the conduct of high-quality scoping reviews.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141417087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comparison of two models for detecting inconsistency in network meta-analysis.
Pub Date: 2024-11-01 | Epub Date: 2024-07-04 | DOI: 10.1002/jrsm.1734
Lu Qin, Shishun Zhao, Wenlai Guo, Tiejun Tong, Ke Yang
The application of network meta-analysis is becoming increasingly widespread, and its successful implementation requires that direct and indirect comparison results be consistent. Because of this, proper detection of inconsistency is often a key issue in network meta-analysis, as it determines whether the results can be reliably used as clinical guidance. Among the existing methods for detecting inconsistency, two commonly used models are the design-by-treatment interaction model and the side-splitting models. While the original side-splitting model was estimated using a Bayesian approach, in this work we employ the frequentist approach. In this paper, we review these two types of models comprehensively and explore their relationship by treating the data structure of network meta-analysis as missing data and parameterizing the potential complete data for each model. Through both analytical and numerical studies, we verify that the side-splitting models are specific instances of the design-by-treatment interaction model, incorporating additional assumptions or holding under certain data structures. Moreover, the design-by-treatment interaction model exhibits robust performance in inconsistency detection across different data structures compared to the side-splitting models. Finally, as practical guidance for inconsistency detection, we recommend the design-by-treatment interaction model when there is a lack of information about the potential location of inconsistency. By contrast, the side-splitting models can serve as a supplementary method, especially when the number of studies in each design is small, enabling a comprehensive assessment of inconsistency from both global and local perspectives.
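As a simplified numerical illustration of the local inconsistency idea that side-splitting builds on, the following Python sketch contrasts a direct estimate with an indirect estimate formed via a common comparator (a Bucher-style check on a single loop; the side-splitting and design-by-treatment interaction models themselves are fitted as mixed models rather than computed this way, and the numbers here are made up):

```python
import math

def bucher_inconsistency(d_bc_direct, se_bc_direct, d_ba, se_ba, d_ca, se_ca):
    """Compare direct and indirect evidence for B vs C on the log odds-ratio scale.

    d_ba and d_ca are the effects of B vs A and C vs A; the indirect B-vs-C
    estimate is their difference. Returns the inconsistency factor, its
    standard error, and a two-sided normal-approximation p-value.
    """
    d_bc_indirect = d_ba - d_ca
    se_bc_indirect = math.sqrt(se_ba**2 + se_ca**2)
    diff = d_bc_direct - d_bc_indirect
    se_diff = math.sqrt(se_bc_direct**2 + se_bc_indirect**2)
    z = diff / se_diff
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return diff, se_diff, p

print(bucher_inconsistency(d_bc_direct=0.45, se_bc_direct=0.20,
                           d_ba=0.60, se_ba=0.15,
                           d_ca=0.40, se_ca=0.18))
```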
{"title":"A comparison of two models for detecting inconsistency in network meta-analysis.","authors":"Lu Qin, Shishun Zhao, Wenlai Guo, Tiejun Tong, Ke Yang","doi":"10.1002/jrsm.1734","DOIUrl":"10.1002/jrsm.1734","url":null,"abstract":"<p><p>The application of network meta-analysis is becoming increasingly widespread, and for a successful implementation, it requires that the direct comparison result and the indirect comparison result should be consistent. Because of this, a proper detection of inconsistency is often a key issue in network meta-analysis as whether the results can be reliably used as a clinical guidance. Among the existing methods for detecting inconsistency, two commonly used models are the design-by-treatment interaction model and the side-splitting models. While the original side-splitting model was initially estimated using a Bayesian approach, in this context, we employ the frequentist approach. In this paper, we review these two types of models comprehensively as well as explore their relationship by treating the data structure of network meta-analysis as missing data and parameterizing the potential complete data for each model. Through both analytical and numerical studies, we verify that the side-splitting models are specific instances of the design-by-treatment interaction model, incorporating additional assumptions or under certain data structure. Moreover, the design-by-treatment interaction model exhibits robust performance across different data structures on inconsistency detection compared to the side-splitting models. Finally, as a practical guidance for inconsistency detection, we recommend utilizing the design-by-treatment interaction model when there is a lack of information about the potential location of inconsistency. By contrast, the side-splitting models can serve as a supplementary method especially when the number of studies in each design is small, enabling a comprehensive assessment of inconsistency from both global and local perspectives.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141533057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A discrete time-to-event model for the meta-analysis of full ROC curves.
Pub Date: 2024-11-01 | Epub Date: 2024-09-06 | DOI: 10.1002/jrsm.1753
Ferdinand Valentin Stoye, Claudia Tschammler, Oliver Kuss, Annika Hoyer
The development of new statistical models for the meta-analysis of diagnostic test accuracy studies is still an ongoing field of research, especially with respect to summary receiver operating characteristic (ROC) curves. In the recently published updated version of the "Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy", the authors point to the challenges of this kind of meta-analysis and propose two approaches. However, both come with disadvantages, such as the nonstraightforward choice of priors in Bayesian models or the requirement of a two-step approach in which parameters are estimated for the individual studies and the results are then summarized. As an alternative, we propose a novel model by applying methods from time-to-event analysis. To this end, we use the discrete proportional hazards approach and treat the different diagnostic thresholds, which are reported by the individual studies and provide the means to estimate sensitivity and specificity, as categorical variables in a generalized linear mixed model, using both the logit and the asymmetric cloglog link. This leads to a model specification with threshold-specific discrete hazards, avoiding a linear dependency between thresholds, discrete hazard, and sensitivity/specificity and thus increasing model flexibility. We compare the resulting models to approaches from the literature in a simulation study. While the area under the summary ROC curve is estimated comparably well by most approaches, the results show substantial differences in the estimated sensitivities and specificities. We also demonstrate the practical applicability of the models with data from a meta-analysis on screening for type 2 diabetes.
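For orientation, the generic discrete-time hazard construction that such a model builds on can be sketched as follows; the notation is ours, for illustration only, and is not necessarily the authors' exact parameterization:

```latex
% Generic discrete-time hazard construction (our notation, for illustration only).
% For study i and ordered thresholds k = 1, ..., K, let h_{ik} denote the discrete
% hazard of a test result falling into category k, given that it has not fallen
% into an earlier category. With free threshold-specific intercepts alpha_k, a
% study-level random effect u_i, and the asymmetric cloglog link:
\[
  \operatorname{cloglog}(h_{ik}) = \log\bigl\{-\log(1 - h_{ik})\bigr\} = \alpha_k + u_i,
  \qquad u_i \sim N(0, \tau^2).
\]
% The probability of exceeding threshold k (the discrete "survival" function) is
\[
  S_i(k) = \prod_{j \le k} \bigl(1 - h_{ij}\bigr),
\]
% and sensitivity and specificity at threshold k are obtained from these
% exceedance (or non-exceedance) probabilities in the diseased and non-diseased
% groups, respectively. Because each alpha_k is a free parameter, no linear
% relationship between threshold, hazard, and sensitivity/specificity is imposed.
```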
{"title":"A discrete time-to-event model for the meta-analysis of full ROC curves.","authors":"Ferdinand Valentin Stoye, Claudia Tschammler, Oliver Kuss, Annika Hoyer","doi":"10.1002/jrsm.1753","DOIUrl":"10.1002/jrsm.1753","url":null,"abstract":"<p><p>The development of new statistical models for the meta-analysis of diagnostic test accuracy studies is still an ongoing field of research, especially with respect to summary receiver operating characteristic (ROC) curves. In the recently published updated version of the \"Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy\", the authors point to the challenges of this kind of meta-analysis and propose two approaches. However, both of them come with some disadvantages, such as the nonstraightforward choice of priors in Bayesian models or the requirement of a two-step approach where parameters are estimated for the individual studies, followed by summarizing the results. As an alternative, we propose a novel model by applying methods from time-to-event analysis. To this task we use the discrete proportional hazard approach to treat the different diagnostic thresholds, that provide means to estimate sensitivity and specificity and are reported by the single studies, as categorical variables in a generalized linear mixed model, using both the logit- and the asymmetric cloglog-link. This leads to a model specification with threshold-specific discrete hazards, avoiding a linear dependency between thresholds, discrete hazard, and sensitivity/specificity and thus increasing model flexibility. We compare the resulting models to approaches from the literature in a simulation study. While the estimated area under the summary ROC curve is estimated comparably well in most approaches, the results depict substantial differences in the estimated sensitivities and specificities. We also show the practical applicability of the models to data from a meta-analysis for the screening of type 2 diabetes.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142138824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reduce, reuse, recycle: Introducing MetaPipeX, a framework for analyses of multi-lab data.
Pub Date: 2024-11-01 | Epub Date: 2024-06-28 | DOI: 10.1002/jrsm.1733
Jens H Fünderich, Lukas J Beinhauer, Frank Renkewitz
Multi-lab projects are large-scale collaborations between participating data collection sites that gather empirical evidence and (usually) analyze that evidence using meta-analyses. They are a valuable form of scientific collaboration, produce outstanding data sets, and are a great resource for third-party researchers. Their data may be reanalyzed and used in research synthesis. Their repositories and code could provide guidance to future projects of this kind. However, while multi-lab projects are similar in their structure and aggregate their data using meta-analyses, they deploy a variety of different solutions regarding the storage structure of their repositories, the way the (analysis) code is organized, and the file formats they provide. Continuing this trend implies that anyone who wants to work with data from several of these projects, or combine their datasets, is faced with ever-increasing complexity. Some of that complexity could be avoided. Here, we introduce MetaPipeX, a standardized framework to harmonize, document, and analyze multi-lab data. It features a pipeline conceptualization of the analysis and documentation process, an R package that implements both, and a Shiny app (https://www.apps.meta-rep.lmu.de/metapipex/) that allows users to explore and visualize these data sets. We introduce the framework by describing its components and applying it to a practical example. Engaging with this form of collaboration and integrating it further into research practice will certainly benefit the quantitative sciences, and we hope the framework provides structure and tools that reduce effort for anyone who creates, reuses, harmonizes, or learns about multi-lab replication projects.
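MetaPipeX itself is distributed as an R package, so the sketch below is only a language-agnostic illustration of the core aggregation step such a pipeline performs: pooling per-site effect estimates with a random-effects model (here the standard DerSimonian-Laird estimator, written in Python with made-up inputs):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling of per-site effect estimates (DerSimonian-Laird).

    effects, variances: per-site effect sizes and their sampling variances.
    Returns the pooled effect, its standard error, and the tau^2 estimate.
    """
    w = [1 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-site variance
    w_star = [1 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se, tau2

# Hypothetical standardized mean differences from four data collection sites.
print(dersimonian_laird([0.21, 0.35, 0.10, 0.28], [0.02, 0.03, 0.015, 0.025]))
```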
{"title":"Reduce, reuse, recycle: Introducing MetaPipeX, a framework for analyses of multi-lab data.","authors":"Jens H Fünderich, Lukas J Beinhauer, Frank Renkewitz","doi":"10.1002/jrsm.1733","DOIUrl":"10.1002/jrsm.1733","url":null,"abstract":"<p><p>Multi-lab projects are large scale collaborations between participating data collection sites that gather empirical evidence and (usually) analyze that evidence using meta-analyses. They are a valuable form of scientific collaboration, produce outstanding data sets and are a great resource for third-party researchers. Their data may be reanalyzed and used in research synthesis. Their repositories and code could provide guidance to future projects of this kind. But, while multi-labs are similar in their structure and aggregate their data using meta-analyses, they deploy a variety of different solutions regarding the storage structure in the repositories, the way the (analysis) code is structured and the file-formats they provide. Continuing this trend implies that anyone who wants to work with data from multiple of these projects, or combine their datasets, is faced with an ever-increasing complexity. Some of that complexity could be avoided. Here, we introduce MetaPipeX, a standardized framework to harmonize, document and analyze multi-lab data. It features a pipeline conceptualization of the analysis and documentation process, an R-package that implements both and a Shiny App (https://www.apps.meta-rep.lmu.de/metapipex/) that allows users to explore and visualize these data sets. We introduce the framework by describing its components and applying it to a practical example. Engaging with this form of collaboration and integrating it further into research practice will certainly be beneficial to quantitative sciences and we hope the framework provides a structure and tools to reduce effort for anyone who creates, re-uses, harmonizes or learns about multi-lab replication projects.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141464697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A re-analysis of about 60,000 sparse data meta-analyses suggests that using an adequate method for pooling matters.
Pub Date: 2024-11-01 | Epub Date: 2024-08-13 | DOI: 10.1002/jrsm.1748
Maxi Schulz, Malte Kramer, Oliver Kuss, Tim Mathes
In sparse data meta-analyses (with few trials or zero events), conventional methods may distort results. Although better-performing one-stage methods have become available in recent years, their implementation remains limited in practice. This study examines the impact of using conventional methods compared to one-stage models by re-analysing meta-analyses from the Cochrane Database of Systematic Reviews in scenarios with zero event trials and few trials. For each scenario, we computed one-stage methods (generalised linear mixed model [GLMM], beta-binomial model [BBM], Bayesian binomial-normal hierarchical model using a weakly informative prior [BNHM-WIP]) and compared them with conventional methods (Peto odds ratio [PETO] and DerSimonian-Laird method [DL] for zero event trials; DL, Paule-Mandel [PM], and restricted maximum likelihood [REML] method for few trials). While all methods showed similar treatment effect estimates, substantial variability in statistical precision emerged. Conventional methods generally resulted in smaller confidence intervals (CIs) than one-stage models in the zero event situation. In the few trials scenario, the CI lengths were widest for the BBM on average, and significance often changed compared to PM and REML, despite the relatively wide CIs of the latter. In agreement with simulations and guidelines for meta-analyses with zero event trials, our results suggest that one-stage models are preferable. The best model can either be selected based on the data situation, or a method that performs well across various situations can be used. In the few trials situation, using the BBM, with PM or REML as additional sensitivity analyses, appears reasonable when conservative results are desired. Overall, our results encourage careful method selection.
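As a minimal illustration of one of the conventional methods compared here, the following Python sketch computes the pooled Peto log odds ratio, which remains defined for trials with a zero-event arm because no per-trial odds ratio is formed (the inputs are made up):

```python
import math

def peto_log_or(trials):
    """Pooled Peto log odds ratio.

    trials: list of (events_treatment, n_treatment, events_control, n_control).
    Trials with no events (or all events) contribute no information and are skipped.
    """
    sum_o_minus_e, sum_v = 0.0, 0.0
    for et, nt, ec, nc in trials:
        n = nt + nc
        events = et + ec
        if events == 0 or events == n:
            continue
        e = nt * events / n                                      # expected events, treatment arm
        v = events * (n - events) * nt * nc / (n**2 * (n - 1))   # hypergeometric variance
        sum_o_minus_e += et - e
        sum_v += v
    log_or = sum_o_minus_e / sum_v
    se = math.sqrt(1 / sum_v)
    return log_or, se

# Hypothetical trials, two of them with a zero-event arm.
trials = [(0, 50, 2, 50), (1, 100, 0, 100), (3, 80, 5, 82)]
log_or, se = peto_log_or(trials)
print(math.exp(log_or), se)
```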
{"title":"A re-analysis of about 60,000 sparse data meta-analyses suggests that using an adequate method for pooling matters.","authors":"Maxi Schulz, Malte Kramer, Oliver Kuss, Tim Mathes","doi":"10.1002/jrsm.1748","DOIUrl":"10.1002/jrsm.1748","url":null,"abstract":"<p><p>In sparse data meta-analyses (with few trials or zero events), conventional methods may distort results. Although better-performing one-stage methods have become available in recent years, their implementation remains limited in practice. This study examines the impact of using conventional methods compared to one-stage models by re-analysing meta-analyses from the Cochrane Database of Systematic Reviews in scenarios with zero event trials and few trials. For each scenario, we computed one-stage methods (Generalised linear mixed model [GLMM], Beta-binomial model [BBM], Bayesian binomial-normal hierarchical model using a weakly informative prior [BNHM-WIP]) and compared them with conventional methods (Peto-Odds-ratio [PETO], DerSimonian-Laird method [DL] for zero event trials; DL, Paule-Mandel [PM], Restricted maximum likelihood [REML] method for few trials). While all methods showed similar treatment effect estimates, substantial variability in statistical precision emerged. Conventional methods generally resulted in smaller confidence intervals (CIs) compared to one-stage models in the zero event situation. In the few trials scenario, the CI lengths were widest for the BBM on average and significance often changed compared to the PM and REML, despite the relatively wide CIs of the latter. In agreement with simulations and guidelines for meta-analyses with zero event trials, our results suggest that one-stage models are preferable. The best model can be either selected based on the data situation or, using a method that can be used in various situations. In the few trial situation, using BBM and additionally PM or REML for sensitivity analyses appears reasonable when conservative results are desired. Overall, our results encourage careful method selection.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141970255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast-and-frugal decision tree for the rapid critical appraisal of systematic reviews.
Pub Date: 2024-11-01 | Epub Date: 2024-09-05 | DOI: 10.1002/jrsm.1754
Robert C Lorenz, Mirjam Jenny, Anja Jacobs, Katja Matthias
Conducting high-quality overviews of reviews (OoR) is time-consuming. Because the quality of systematic reviews (SRs) varies, it is necessary to critically appraise SRs when conducting an OoR. A well-established appraisal tool is A Measurement Tool to Assess Systematic Reviews (AMSTAR) 2, which takes about 15-32 min per application. To save time, we developed two fast-and-frugal decision trees (FFTs) for assessing the methodological quality of SRs for an OoR, applied either during the full-text screening stage (Screening FFT) or to the resulting pool of SRs (Rapid Appraisal FFT). To build a data set for developing the FFTs, we identified published AMSTAR 2 appraisals. The overall confidence ratings of AMSTAR 2 were used as the criterion and the 16 items as cues. One thousand five hundred and nineteen appraisals were obtained from 24 publications and divided into training and test data sets. The resulting Screening FFT consists of three items and correctly identifies all non-critically low-quality SRs (sensitivity of 100%), but has a positive predictive value of 59%. The three-item Rapid Appraisal FFT correctly identifies 80% of the high-quality SRs and 97% of the low-quality SRs, resulting in an accuracy of 95%. The FFTs require about 10% of the 16 AMSTAR 2 items. The Screening FFT may be applied during full-text screening to exclude SRs of critically low quality. The Rapid Appraisal FFT may be applied to the final SR pool to identify SRs that are likely to be of high methodological quality.
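Structurally, a fast-and-frugal tree checks a few cues in a fixed order, each with an immediate exit. The toy Python sketch below illustrates that structure only; the item names and exit arrangement are hypothetical placeholders, not the AMSTAR 2 items selected for the published FFTs:

```python
def screening_fft(protocol_registered: bool, adequate_search: bool, rob_assessed: bool) -> str:
    """Toy three-cue fast-and-frugal tree for flagging critically low-quality SRs.

    Cue names and exits are hypothetical placeholders (the published Screening FFT
    uses three specific AMSTAR 2 items). Each cue is checked in turn and a failed
    cue triggers an immediate exit.
    """
    if not protocol_registered:
        return "critically low quality: exclude"
    if not adequate_search:
        return "critically low quality: exclude"
    if not rob_assessed:
        return "critically low quality: exclude"
    return "retain for full appraisal"

print(screening_fft(True, True, False))  # -> "critically low quality: exclude"
```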
{"title":"Fast-and-frugal decision tree for the rapid critical appraisal of systematic reviews.","authors":"Robert C Lorenz, Mirjam Jenny, Anja Jacobs, Katja Matthias","doi":"10.1002/jrsm.1754","DOIUrl":"10.1002/jrsm.1754","url":null,"abstract":"<p><p>Conducting high-quality overviews of reviews (OoR) is time-consuming. Because the quality of systematic reviews (SRs) varies, it is necessary to critically appraise SRs when conducting an OoR. A well-established appraisal tool is A Measurement Tool to Assess Systematic Reviews (AMSTAR) 2, which takes about 15-32 min per application. To save time, we developed two fast-and-frugal decision trees (FFTs) for assessing the methodological quality of SR for OoR either during the full-text screening stage (Screening FFT) or to the resulting pool of SRs (Rapid Appraisal FFT). To build a data set for developing the FFT, we identified published AMSTAR 2 appraisals. Overall confidence ratings of the AMSTAR 2 were used as a criterion and the 16 items as cues. One thousand five hundred and nineteen appraisals were obtained from 24 publications and divided into training and test data sets. The resulting Screening FFT consists of three items and correctly identifies all non-critically low-quality SRs (sensitivity of 100%), but has a positive predictive value of 59%. The three-item Rapid Appraisal FFT correctly identifies 80% of the high-quality SRs and correctly identifies 97% of the low-quality SRs, resulting in an accuracy of 95%. The FFTs require about 10% of the 16 AMSTAR 2 items. The Screening FFT may be applied during full-text screening to exclude SRs with critically low quality. The Rapid Appraisal FFT may be applied to the final SR pool to identify SR that might be of high methodological quality.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142131404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Narrative reanalysis: A methodological framework for a new brand of reviews.
Pub Date: 2024-11-01 | Epub Date: 2024-09-04 | DOI: 10.1002/jrsm.1751
Steven Hall, Erin Leeder
In response to the evolving needs of knowledge synthesis, this manuscript introduces the concept of narrative reanalysis, a method that refines data from initial reviews, such as systematic and scoping reviews, to focus on specific sub-phenomena. Unlike traditional narrative reviews, which lack the methodological rigor of systematic reviews and are broader in scope, our methodological framework for narrative reanalysis applies a structured, systematic framework to the interpretation of existing data. This approach enables a focused investigation of nuanced topics within a broader dataset, enhancing understanding and generating new insights. We detail a five-stage methodological framework that guides the narrative reanalysis process: (1) retrieval of an initial review, (2) identification and justification of a sub-phenomenon, (3) expanded search, selection, and extraction of data, (4) reanalysis of the sub-phenomenon, and (5) writing the report. The proposed framework aims to standardize narrative reanalysis, advocating for its use in academic and research settings to foster more rigorous and insightful literature reviews. This approach bridges the methodological gap between narrative and systematic reviews, offering a valuable tool for researchers to explore detailed aspects of broader topics without the extensive resources required for systematic reviews.
{"title":"Narrative reanalysis: A methodological framework for a new brand of reviews.","authors":"Steven Hall, Erin Leeder","doi":"10.1002/jrsm.1751","DOIUrl":"10.1002/jrsm.1751","url":null,"abstract":"<p><p>In response to the evolving needs of knowledge synthesis, this manuscript introduces the concept of narrative reanalysis, a method that refines data from initial reviews, such as systematic and reviews, to focus on specific sub-phenomena. Unlike traditional narrative reviews, which lack the methodological rigor of systematic reviews and are broader in scope, our methodological framework for narrative reanalysis applies a structured, systematic framework to the interpretation of existing data. This approach enables a focused investigation of nuanced topics within a broader dataset, enhancing understanding and generating new insights. We detail a five-stage methodological framework that guides the narrative reanalysis process: (1) retrieval of an initial review, (2) identification and justification of a sub-phenomenon, (3) expanded search, selection, and extraction of data, (4) reanalyzing the sub-phenomenon, and (5) writing the report. The proposed framework aims to standardize narrative reanalysis, advocating for its use in academic and research settings to foster more rigorous and insightful literature reviews. This approach bridges the methodological gap between narrative and systematic reviews, offering a valuable tool for researchers to explore detailed aspects of broader topics without the extensive resources required for systematic reviews.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142131405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards the automatic risk of bias assessment on randomized controlled trials: A comparison of RobotReviewer and humans.
Pub Date: 2024-11-01 | Epub Date: 2024-09-26 | DOI: 10.1002/jrsm.1761
Yuan Tian, Xi Yang, Suhail A Doi, Luis Furuya-Kanamori, Lifeng Lin, Joey S W Kwong, Chang Xu
RobotReviewer is a tool for automatically assessing the risk of bias in randomized controlled trials, but there is limited evidence of its reliability. We evaluated the agreement between RobotReviewer and humans on risk of bias assessment based on 1955 randomized controlled trials. The risk of bias in these trials was assessed via two different approaches: (1) manually by human reviewers, and (2) automatically by RobotReviewer. The manual assessment was performed independently by two groups, with two additional rounds of verification. The agreement between RobotReviewer and humans was measured via the concordance rate and Cohen's kappa statistic, based on the comparison of a binary classification of the risk of bias (low vs. high/unclear) as restricted by RobotReviewer. The concordance rates varied by domain, ranging from 63.07% to 83.32%. Cohen's kappa statistics showed poor agreement between humans and RobotReviewer for allocation concealment (κ = 0.25, 95% CI: 0.21-0.30) and blinding of outcome assessors (κ = 0.27, 95% CI: 0.23-0.31), while agreement was moderate for random sequence generation (κ = 0.46, 95% CI: 0.41-0.50) and blinding of participants and personnel (κ = 0.59, 95% CI: 0.55-0.64). The findings demonstrate that there were domain-specific differences in the level of agreement between RobotReviewer and humans. We suggest that it might be a useful auxiliary tool, but the specific manner of its integration as a complementary tool requires further discussion.
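The agreement statistic reported above is the standard Cohen's kappa for two raters with binary classifications; a minimal Python sketch with made-up counts (not the study's data) is:

```python
def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
    """Cohen's kappa for two raters with binary ratings.

    a: both rate 'low risk', d: both rate 'high/unclear',
    b and c: the two kinds of disagreement.
    """
    n = a + b + c + d
    p_observed = (a + d) / n
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (p_observed - p_chance) / (1 - p_chance)

# Made-up agreement table for one bias domain.
print(round(cohens_kappa(a=800, b=250, c=200, d=400), 2))
```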
{"title":"Towards the automatic risk of bias assessment on randomized controlled trials: A comparison of RobotReviewer and humans.","authors":"Yuan Tian, Xi Yang, Suhail A Doi, Luis Furuya-Kanamori, Lifeng Lin, Joey S W Kwong, Chang Xu","doi":"10.1002/jrsm.1761","DOIUrl":"10.1002/jrsm.1761","url":null,"abstract":"<p><p>RobotReviewer is a tool for automatically assessing the risk of bias in randomized controlled trials, but there is limited evidence of its reliability. We evaluated the agreement between RobotReviewer and humans regarding the risk of bias assessment based on 1955 randomized controlled trials. The risk of bias in these trials was assessed via two different approaches: (1) manually by human reviewers, and (2) automatically by the RobotReviewer. The manual assessment was based on two groups independently, with two additional rounds of verification. The agreement between RobotReviewer and humans was measured via the concordance rate and Cohen's kappa statistics, based on the comparison of binary classification of the risk of bias (low vs. high/unclear) as restricted by RobotReviewer. The concordance rates varied by domain, ranging from 63.07% to 83.32%. Cohen's kappa statistics showed a poor agreement between humans and RobotReviewer for allocation concealment (κ = 0.25, 95% CI: 0.21-0.30), blinding of outcome assessors (κ = 0.27, 95% CI: 0.23-0.31); While moderate for random sequence generation (κ = 0.46, 95% CI: 0.41-0.50) and blinding of participants and personnel (κ = 0.59, 95% CI: 0.55-0.64). The findings demonstrate that there were domain-specific differences in the level of agreement between RobotReviewer and humans. We suggest that it might be a useful auxiliary tool, but the specific manner of its integration as a complementary tool requires further discussion.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142338037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Checking the inventory: Illustrating different methods for individual participant data meta-analytic structural equation modeling.
Pub Date: 2024-11-01 | Epub Date: 2024-08-13 | DOI: 10.1002/jrsm.1735
Lennert J Groot, Kees-Jan Kan, Suzanne Jak
Researchers may have at their disposal the raw data of the studies they wish to meta-analyze. The goal of this study is to identify, illustrate, and compare a range of possible analysis options for researchers who have raw data available and want to fit a structural equation model (SEM) to these data. This study illustrates techniques that directly analyze the raw data, such as multilevel and multigroup SEM, and techniques based on summary statistics, such as correlation-based meta-analytical structural equation modeling (MASEM), discussing differences in procedures, capabilities, and outcomes. This is done by analyzing a previously published collection of datasets using open-source software. A path model reflecting the theory of planned behavior is fitted to these datasets using different SEM-based techniques. Apart from differences in the handling of missing data, the ability to include study-level moderators, and the conceptualization of heterogeneity, the results show differences in parameter estimates and standard errors across methods. Further research is needed to properly formulate guidelines for applied researchers looking to conduct individual participant data MASEM.
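In correlation-based MASEM, study-level correlations are typically pooled before the SEM is fitted to the pooled matrix. As a deliberately simplified, univariate illustration of that first stage (full MASEM pools the whole correlation matrix jointly), the following Python sketch pools a single correlation across studies via Fisher's z, using made-up values:

```python
import math

def pool_correlation(correlations, sample_sizes):
    """Fixed-effect pooling of one correlation across studies via Fisher's z.

    A univariate shortcut for illustration only; correlation-based MASEM pools
    the full correlation matrix jointly before fitting the SEM.
    """
    zs = [math.atanh(r) for r in correlations]
    weights = [n - 3 for n in sample_sizes]   # 1 / Var(z) = n - 3
    z_pooled = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return math.tanh(z_pooled)               # back-transform to the correlation scale

# Hypothetical attitude-intention correlations from three studies.
print(round(pool_correlation([0.42, 0.55, 0.48], [120, 200, 90]), 3))
```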
{"title":"Checking the inventory: Illustrating different methods for individual participant data meta-analytic structural equation modeling.","authors":"Lennert J Groot, Kees-Jan Kan, Suzanne Jak","doi":"10.1002/jrsm.1735","DOIUrl":"10.1002/jrsm.1735","url":null,"abstract":"<p><p>Researchers may have at their disposal the raw data of the studies they wish to meta-analyze. The goal of this study is to identify, illustrate, and compare a range of possible analysis options for researchers to whom raw data are available, wanting to fit a structural equation model (SEM) to these data. This study illustrates techniques that directly analyze the raw data, such as multilevel and multigroup SEM, and techniques based on summary statistics, such as correlation-based meta-analytical structural equation modeling (MASEM), discussing differences in procedures, capabilities, and outcomes. This is done by analyzing a previously published collection of datasets using open source software. A path model reflecting the theory of planned behavior is fitted to these datasets using different techniques involving SEM. Apart from differences in handling of missing data, the ability to include study-level moderators, and conceptualization of heterogeneity, results show differences in parameter estimates and standard errors across methods. Further research is needed to properly formulate guidelines for applied researchers looking to conduct individual participant data MASEM.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141974697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of the individual participant data integrity tool for assessing the integrity of randomised trials using individual participant data.
Pub Date: 2024-11-01 | Epub Date: 2024-08-18 | DOI: 10.1002/jrsm.1739
Kylie E Hunter, Mason Aberoumand, Sol Libesman, James X Sotiropoulos, Jonathan G Williams, Wentao Li, Jannik Aagerup, Ben W Mol, Rui Wang, Angie Barba, Nipun Shrestha, Angela C Webster, Anna Lene Seidler
Increasing integrity concerns in medical research have prompted the development of tools to detect untrustworthy studies. Existing tools primarily assess published aggregate data (AD), though scrutiny of individual participant data (IPD) is often required to detect trustworthiness issues. Thus, we developed the IPD Integrity Tool for detecting integrity issues in randomised trials with IPD available. This manuscript describes the development of this tool. We conducted a literature review to collate and map existing integrity items. These were discussed with an expert advisory group; agreed items were included in a standardised tool and automated where possible. We piloted this tool in two IPD meta-analyses (including 116 trials) and conducted preliminary validation checks on 13 datasets with and without known integrity issues. We identified 120 integrity items: 54 could be assessed using AD, 48 required IPD, and 18 were possible with AD but more comprehensive with IPD. An initial reduced tool was developed through consensus involving 13 advisors, featuring 11 AD items across four domains and 12 IPD items across eight domains. The tool was iteratively refined throughout piloting and validation. All studies with known integrity issues were accurately identified during validation. The final tool includes seven AD domains with 13 items and eight IPD domains with 18 items. The quality of evidence informing healthcare relies on trustworthy data. We describe the development of a tool that enables researchers, editors, and others to detect integrity issues using IPD. Detailed instructions for its application are published as a complementary manuscript in this issue.
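The tool's actual items are not reproduced here, but as a generic example of the kind of IPD-level check that can be automated, the following Python sketch flags exact duplicate participant records within a trial dataset (the variable names are hypothetical and the check is not an item taken verbatim from the IPD Integrity Tool):

```python
from collections import Counter

def flag_duplicate_participants(rows, keys=("age", "sex", "baseline_weight", "outcome")):
    """Flag identical participant records within one trial dataset.

    Exact duplicates across several variables can indicate fabricated or
    accidentally copied records and warrant closer inspection.
    """
    signatures = Counter(tuple(row.get(k) for k in keys) for row in rows)
    return {sig: n for sig, n in signatures.items() if n > 1}

# Hypothetical IPD with one duplicated record.
ipd = [
    {"age": 34, "sex": "F", "baseline_weight": 71.2, "outcome": 1},
    {"age": 34, "sex": "F", "baseline_weight": 71.2, "outcome": 1},
    {"age": 52, "sex": "M", "baseline_weight": 88.0, "outcome": 0},
]
print(flag_duplicate_participants(ipd))
```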
{"title":"Development of the individual participant data integrity tool for assessing the integrity of randomised trials using individual participant data.","authors":"Kylie E Hunter, Mason Aberoumand, Sol Libesman, James X Sotiropoulos, Jonathan G Williams, Wentao Li, Jannik Aagerup, Ben W Mol, Rui Wang, Angie Barba, Nipun Shrestha, Angela C Webster, Anna Lene Seidler","doi":"10.1002/jrsm.1739","DOIUrl":"10.1002/jrsm.1739","url":null,"abstract":"<p><p>Increasing integrity concerns in medical research have prompted the development of tools to detect untrustworthy studies. Existing tools primarily assess published aggregate data (AD), though scrutiny of individual participant data (IPD) is often required to detect trustworthiness issues. Thus, we developed the IPD Integrity Tool for detecting integrity issues in randomised trials with IPD available. This manuscript describes the development of this tool. We conducted a literature review to collate and map existing integrity items. These were discussed with an expert advisory group; agreed items were included in a standardised tool and automated where possible. We piloted this tool in two IPD meta-analyses (including 116 trials) and conducted preliminary validation checks on 13 datasets with and without known integrity issues. We identified 120 integrity items: 54 could be conducted using AD, 48 required IPD, and 18 were possible with AD, but more comprehensive with IPD. An initial reduced tool was developed through consensus involving 13 advisors, featuring 11 AD items across four domains, and 12 IPD items across eight domains. The tool was iteratively refined throughout piloting and validation. All studies with known integrity issues were accurately identified during validation. The final tool includes seven AD domains with 13 items and eight IPD domains with 18 items. The quality of evidence informing healthcare relies on trustworthy data. We describe the development of a tool to enable researchers, editors, and others to detect integrity issues using IPD. Detailed instructions for its application are published as a complementary manuscript in this issue.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141999012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}