Simulated treatment comparison (STC) is an established method for performing population adjustment for the indirect comparison of two treatments, where individual patient data (IPD) are available for one trial but only aggregate level information is available for the other. The most commonly used method is what we call ‘standard STC’. Here we fit an outcome model using data from the trial with IPD, and then substitute mean covariate values from the trial where only aggregate level data are available, to predict what the first of these trial's outcomes would have been if its population had been the same as the second. However, this type of STC methodology does not involve simulation and can result in bias when the link function used in the outcome model is non-linear. An alternative approach is to use the fitted outcome model to simulate patient profiles in the trial for which IPD are available, but in the other trial's population. This stochastic alternative presents additional challenges. We examine the history of STC and propose two new simulation-based methods that resolve many of the difficulties associated with the current stochastic approach. A virtue of the simulation-based STC methods is that the marginal estimands are then clearly targeted. We illustrate all methods using a numerical example and explore their use in a simulation study.
{"title":"Four alternative methodologies for simulated treatment comparison: How could the use of simulation be re-invigorated?","authors":"Landan Zhang, Sylwia Bujkiewicz, Dan Jackson","doi":"10.1002/jrsm.1681","DOIUrl":"10.1002/jrsm.1681","url":null,"abstract":"<p>Simulated treatment comparison (STC) is an established method for performing population adjustment for the indirect comparison of two treatments, where individual patient data (IPD) are available for one trial but only aggregate level information is available for the other. The most commonly used method is what we call ‘standard STC’. Here we fit an outcome model using data from the trial with IPD, and then substitute mean covariate values from the trial where only aggregate level data are available, to predict what the first of these trial's outcomes would have been if its population had been the same as the second. However, this type of STC methodology does not involve simulation and can result in bias when the link function used in the outcome model is non-linear. An alternative approach is to use the fitted outcome model to simulate patient profiles in the trial for which IPD are available, but in the other trial's population. This stochastic alternative presents additional challenges. We examine the history of STC and propose two new simulation-based methods that resolve many of the difficulties associated with the current stochastic approach. A virtue of the simulation-based STC methods is that the marginal estimands are then clearly targeted. 
We illustrate all methods using a numerical example and explore their use in a simulation study.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"227-241"},"PeriodicalIF":9.8,"publicationDate":"2023-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138714810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
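The non-linearity problem with mean substitution can be sketched numerically. In the toy example below, all coefficients and covariate summaries are invented and the logistic model is just one example of a non-linear link; this is not the authors' implementation, only an illustration of why plugging in mean covariate values differs from averaging predictions over simulated patient profiles.

```python
import math, random

def expit(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fitted outcome model from the IPD trial (coefficients are illustrative).
b0, b1 = -1.0, 1.5

# Aggregate data from the comparator trial: covariate mean and SD.
mu, sd = 0.5, 1.2

# 'Standard STC': substitute the mean covariate value into the model.
p_plugin = expit(b0 + b1 * mu)

# Simulation-based STC: simulate patient profiles in the comparator
# population and average predictions on the probability scale.
random.seed(1)
draws = [expit(b0 + b1 * random.gauss(mu, sd)) for _ in range(200_000)]
p_marginal = sum(draws) / len(draws)

# With a non-linear link the two differ: E[g^-1(Xb)] != g^-1(E[X]b).
print(round(p_plugin, 3), round(p_marginal, 3))
```

The gap between the two numbers is the bias the simulation-based methods avoid, and the averaged prediction is a marginal quantity, which is why the simulation-based methods clearly target marginal estimands.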
The trace plot is seldom used in meta-analysis, yet it is a very informative plot. In this article, we define and illustrate what the trace plot is, and discuss why it is important. The Bayesian version of the plot combines the posterior density of τ, the between-study standard deviation, and the shrunken estimates of the study effects as a function of τ. With a small or moderate number of studies, τ is not estimated with much precision, and parameter estimates and shrunken study effect estimates can vary widely depending on the correct value of τ. The trace plot allows visualization of the sensitivity to τ along with a plot that shows which values of τ are plausible and which are implausible. A comparable frequentist or empirical Bayes version provides similar results. The concepts are illustrated using examples in meta-analysis and meta-regression; implementation in R is facilitated in a Bayesian or frequentist framework using the bayesmeta and metafor packages, respectively.
{"title":"How trace plots help interpret meta-analysis results","authors":"Christian Röver, David Rindskopf, Tim Friede","doi":"10.1002/jrsm.1693","DOIUrl":"10.1002/jrsm.1693","url":null,"abstract":"<p>The trace plot is seldom used in meta-analysis, yet it is a very informative plot. In this article, we define and illustrate what the trace plot is, and discuss why it is important. The Bayesian version of the plot combines the posterior density of <span></span><math>\u0000 <mrow>\u0000 <mi>τ</mi>\u0000 </mrow></math>, the between-study standard deviation, and the shrunken estimates of the study effects as a function of <span></span><math>\u0000 <mrow>\u0000 <mi>τ</mi>\u0000 </mrow></math>. With a small or moderate number of studies, <span></span><math>\u0000 <mrow>\u0000 <mi>τ</mi>\u0000 </mrow></math> is not estimated with much precision, and parameter estimates and shrunken study effect estimates can vary widely depending on the correct value of <span></span><math>\u0000 <mrow>\u0000 <mi>τ</mi>\u0000 </mrow></math>. The trace plot allows visualization of the sensitivity to <span></span><math>\u0000 <mrow>\u0000 <mi>τ</mi>\u0000 </mrow></math> along with a plot that shows which values of <span></span><math>\u0000 <mrow>\u0000 <mi>τ</mi>\u0000 </mrow></math> are plausible and which are implausible. A comparable frequentist or empirical Bayes version provides similar results. 
The concepts are illustrated using examples in meta-analysis and meta-regression; implementation in <span>R</span> is facilitated in a Bayesian or frequentist framework using the <span>bayesmeta</span> and <span>metafor</span> packages, respectively.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 3","pages":"413-429"},"PeriodicalIF":9.8,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1693","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138715481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
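The quantities a trace plot displays can be computed directly: conditional on a value of τ, the pooled mean is an inverse-variance weighted average, and each study's shrunken (BLUP) estimate is pulled toward it. A minimal sketch with invented data follows; it is not the bayesmeta or metafor implementation, just the conditional-on-τ arithmetic the plot traces out.

```python
# Conditional (on tau) pooled mean and shrunken study effects: the
# quantities a trace plot displays against tau. Toy data: observed
# effects y_i with within-study standard errors s_i (illustrative).
y = [0.10, 0.35, -0.05, 0.50]
s = [0.15, 0.20, 0.10, 0.25]

def pooled_mean(tau):
    # Inverse-variance weights, with between-study variance tau^2 added.
    w = [1.0 / (si**2 + tau**2) for si in s]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

def shrunken(tau):
    mu = pooled_mean(tau)
    # BLUP: shrink each study effect toward the pooled mean; at tau = 0
    # all studies collapse to mu, and as tau grows they approach y_i.
    return [mu + (tau**2 / (tau**2 + si**2)) * (yi - mu)
            for yi, si in zip(y, s)]

for tau in [0.0, 0.1, 0.3, 1.0]:
    print(tau, [round(t, 3) for t in shrunken(tau)])
```

Evaluating these curves over a grid of τ values, and overlaying the posterior density (or likelihood) of τ, reproduces the sensitivity display described in the abstract.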
The literature search underpins data collection for all systematic reviews (SRs). The SR reporting guideline PRISMA, and its extensions, aim to facilitate research transparency and reproducibility, and ultimately improve the quality of research, by instructing authors to provide specific research materials and data upon publication of the manuscript. Search strategies are one item of data that is explicitly included in PRISMA and the critical appraisal tool AMSTAR2. Yet some authors use search availability statements implying that the search strategies are available upon request instead of providing strategies up front. We sought out reviews with search availability statements, characterized them, and requested the search strategies from authors via email. Over half of the included reviews cited PRISMA, but less than a third included any search strategies. After requesting the strategies via email as instructed, we received replies from 46% of authors, and eventually received at least one search strategy from 36% of authors. Requesting search strategies via email has a low chance of success. Ask and you might receive, but you probably will not. SRs that do not make search strategies available are low quality at best according to AMSTAR2; journal editors can and should enforce the requirement for authors to include their search strategies alongside their SR manuscripts.
{"title":"A study of search strategy availability statements and sharing practices for systematic reviews: Ask and you might receive","authors":"Christine J. Neilson, Zahra Premji","doi":"10.1002/jrsm.1696","DOIUrl":"10.1002/jrsm.1696","url":null,"abstract":"<p>The literature search underpins data collection for all systematic reviews (SRs). The SR reporting guideline PRISMA, and its extensions, aim to facilitate research transparency and reproducibility, and ultimately improve the quality of research, by instructing authors to provide specific research materials and data upon publication of the manuscript. Search strategies are one item of data that are explicitly included in PRISMA and the critical appraisal tool AMSTAR2. Yet some authors use search availability statements implying that the search strategies are available upon request instead of providing strategies up front. We sought out reviews with search availability statements, characterized them, and requested the search strategies from authors via email. Over half of the included reviews cited PRISMA but less than a third included any search strategies. After requesting the strategies via email as instructed, we received replies from 46% of authors, and eventually received at least one search strategy from 36% of authors. Requesting search strategies via email has a low chance of success. Ask and you might receive—but you probably will not. 
SRs that do not make search strategies available are low quality at best according to AMSTAR2; Journal editors can and should enforce the requirement for authors to include their search strategies alongside their SR manuscripts.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 3","pages":"441-449"},"PeriodicalIF":9.8,"publicationDate":"2023-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1696","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138714809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sean McGrath, XiaoFei Zhao, Omer Ozturk, Stephan Katzenschlager, Russell Steele, Andrea Benedetti
When performing an aggregate data meta-analysis of a continuous outcome, researchers often come across primary studies that report the sample median of the outcome. However, standard meta-analytic methods typically cannot be directly applied in this setting. In recent years, there has been substantial development in statistical methods to incorporate primary studies reporting sample medians in meta-analysis, yet there are currently no comprehensive software tools implementing these methods. In this paper, we present the metamedian R package, a freely available and open-source software tool for meta-analyzing primary studies that report sample medians. We summarize the main features of the software and illustrate its application through real data examples involving risk factors for a severe course of COVID-19.
{"title":"metamedian: An R package for meta-analyzing studies reporting medians","authors":"Sean McGrath, XiaoFei Zhao, Omer Ozturk, Stephan Katzenschlager, Russell Steele, Andrea Benedetti","doi":"10.1002/jrsm.1686","DOIUrl":"10.1002/jrsm.1686","url":null,"abstract":"<p>When performing an aggregate data meta-analysis of a continuous outcome, researchers often come across primary studies that report the sample median of the outcome. However, standard meta-analytic methods typically cannot be directly applied in this setting. In recent years, there has been substantial development in statistical methods to incorporate primary studies reporting sample medians in meta-analysis, yet there are currently no comprehensive software tools implementing these methods. In this paper, we present the <b>metamedian</b> R package, a freely available and open-source software tool for meta-analyzing primary studies that report sample medians. We summarize the main features of the software and illustrate its application through real data examples involving risk factors for a severe course of COVID-19.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"332-346"},"PeriodicalIF":9.8,"publicationDate":"2023-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138569212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data extraction is a time-consuming and resource-intensive task in the systematic review process. Natural language processing (NLP) artificial intelligence (AI) techniques have the potential to automate data extraction, saving time and resources, accelerating the review process, and enhancing the quality and reliability of extracted data. In this paper, we propose a method for using Bing AI and Microsoft Edge as a second reviewer to verify and enhance data items first extracted by a single human reviewer. We describe a worked example of the steps involved in instructing the Bing AI Chat tool to extract study characteristics as data items from a PDF document into a table so that they can be compared with data extracted manually. We show that this technique may provide an additional verification process for data extraction where limited resources are available or for novice reviewers. However, it should not be seen as a replacement for already established and validated double independent data extraction methods without further evaluation and verification. Use of AI techniques for data extraction in systematic reviews should be transparently and accurately described in reports. Future research should focus on the accuracy, efficiency, completeness, and user experience of using Bing AI for data extraction compared with traditional methods using two or more reviewers independently.
{"title":"Methods for using Bing's AI-powered search engine for data extraction for a systematic review","authors":"James Edward Hill, Catherine Harris, Andrew Clegg","doi":"10.1002/jrsm.1689","DOIUrl":"10.1002/jrsm.1689","url":null,"abstract":"<p>Data extraction is a time-consuming and resource-intensive task in the systematic review process. Natural language processing (NLP) artificial intelligence (AI) techniques have the potential to automate data extraction saving time and resources, accelerating the review process, and enhancing the quality and reliability of extracted data. In this paper, we propose a method for using Bing AI and Microsoft Edge as a second reviewer to verify and enhance data items first extracted by a single human reviewer. We describe a worked example of the steps involved in instructing the Bing AI Chat tool to extract study characteristics as data items from a PDF document into a table so that they can be compared with data extracted manually. We show that this technique may provide an additional verification process for data extraction where there are limited resources available or for novice reviewers. However, it should not be seen as a replacement to already established and validated double independent data extraction methods without further evaluation and verification. Use of AI techniques for data extraction in systematic reviews should be transparently and accurately described in reports. 
Future research should focus on the accuracy, efficiency, completeness, and user experience of using Bing AI for data extraction compared with traditional methods using two or more reviewers independently.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"347-353"},"PeriodicalIF":9.8,"publicationDate":"2023-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1689","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138561187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Danielle Pollock, Timothy Hugh Barker, Jennifer C Stone, Edoardo Aromataris, Miloslav Klugar, Anna M Scott, Cindy Stern, Amanda Ross-White, Ashley Whitehorn, Rick Wiechula, Larissa Shamseer, Zachary Munn
Predatory journals are a blemish on scholarly publishing and academia, and the studies published within them are more likely to contain false data. Including studies from predatory journals in evidence syntheses is therefore potentially problematic. To date, there has been little exploration of the opinions and experiences of evidence synthesisers when dealing with predatory journals in the conduct of their evidence synthesis. In this paper, the thoughts, opinions, and attitudes of evidence synthesisers towards predatory journals, and the inclusion of studies published within these journals in evidence syntheses, were sought. Focus groups were held with participants who were experienced evidence synthesisers from the JBI (previously the Joanna Briggs Institute) collaboration. Utilising qualitative content analysis, two generic categories were identified: predatory journals within evidence synthesis, and predatory journals within academia. Our findings suggest that evidence synthesisers believe predatory journals are hard to identify and that there is no current consensus on the management of these studies if they have been included in an evidence synthesis. There is a critical need for further research, education, guidance, and development of clear processes to assist evidence synthesisers in the management of studies from predatory journals.
{"title":"Predatory journals and their practices present a conundrum for systematic reviewers and evidence synthesisers of health research: A qualitative descriptive study","authors":"Danielle Pollock, Timothy Hugh Barker, Jennifer C Stone, Edoardo Aromataris, Miloslav Klugar, Anna M Scott, Cindy Stern, Amanda Ross-White, Ashley Whitehorn, Rick Wiechula, Larissa Shamseer, Zachary Munn","doi":"10.1002/jrsm.1684","DOIUrl":"10.1002/jrsm.1684","url":null,"abstract":"<p>Predatory journals are a blemish on scholarly publishing and academia and the studies published within them are more likely to contain data that is false. The inclusion of studies from predatory journals in evidence syntheses is potentially problematic due to this propensity for false data to be included. To date, there has been little exploration of the opinions and experiences of evidence synthesisers when dealing with predatory journals in the conduct of their evidence synthesis. In this paper, the thoughts, opinions, and attitudes of evidence synthesisers towards predatory journals and the inclusion of studies published within these journals in evidence syntheses were sought. Focus groups were held with participants who were experienced evidence synthesisers from JBI (previously the Joanna Briggs Institute) collaboration. Utilising qualitative content analysis, two generic categories were identified: predatory journals within evidence synthesis, and predatory journals within academia. Our findings suggest that evidence synthesisers believe predatory journals are hard to identify and that there is no current consensus on the management of these studies if they have been included in an evidence synthesis. 
There is a critical need for further research, education, guidance, and development of clear processes to assist evidence synthesisers in the management of studies from predatory journals.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"257-274"},"PeriodicalIF":9.8,"publicationDate":"2023-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1684","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138476387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jennifer L. Proper, Haitao Chu, Purvi Prajapati, Michael D. Sonksen, Thomas A. Murray
Drug repurposing refers to the process of discovering new therapeutic uses for existing medicines. Compared to traditional drug discovery, drug repurposing is attractive for its speed, cost, and reduced risk of failure. However, existing approaches for drug repurposing involve complex, computationally intensive analytical methods that are not widely used in practice. Instead, repurposing decisions are often based on subjective judgments from limited empirical evidence. In this article, we develop a novel Bayesian network meta-analysis (NMA) framework that can predict the efficacy of an approved treatment in a new indication and thereby identify candidate treatments for repurposing. We obtain predictions in two main steps: first, we use standard NMA modeling to estimate average relative effects from a network comprising treatments studied in both indications, in addition to one treatment studied in only one indication. Then, we model the correlation between relative effects using various strategies that differ in how they model treatments across indications and within the same drug class. We evaluate the predictive performance of each model using a simulation study and find that the model minimizing the root mean squared error of the posterior median for the candidate treatment depends on the amount of available data, the level of correlation between indications, and whether treatment effects differ, on average, by drug class. We conclude by discussing an illustrative example in psoriasis and psoriatic arthritis and find that the candidate treatment has a high probability of success in a future trial.
{"title":"Network meta analysis to predict the efficacy of an approved treatment in a new indication","authors":"Jennifer L. Proper, Haitao Chu, Purvi Prajapati, Michael D. Sonksen, Thomas A. Murray","doi":"10.1002/jrsm.1683","DOIUrl":"10.1002/jrsm.1683","url":null,"abstract":"<p>Drug repurposing refers to the process of discovering new therapeutic uses for existing medicines. Compared to traditional drug discovery, drug repurposing is attractive for its speed, cost, and reduced risk of failure. However, existing approaches for drug repurposing involve complex, computationally-intensive analytical methods that are not widely used in practice. Instead, repurposing decisions are often based on subjective judgments from limited empirical evidence. In this article, we develop a novel Bayesian network meta-analysis (NMA) framework that can predict the efficacy of an approved treatment in a new indication and thereby identify candidate treatments for repurposing. We obtain predictions using two main steps: first, we use standard NMA modeling to estimate average relative effects from a network comprised of treatments studied in both indications in addition to one treatment studied in only one indication. Then, we model the correlation between relative effects using various strategies that differ in how they model treatments across indications and within the same drug class. We evaluate the predictive performance of each model using a simulation study and find that the model minimizing root mean squared error of the posterior median for the candidate treatment depends on the amount of available data, the level of correlation between indications, and whether treatment effects differ, on average, by drug class. 
We conclude by discussing an illustrative example in psoriasis and psoriatic arthritis and find that the candidate treatment has a high probability of success in a future trial.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"242-256"},"PeriodicalIF":9.8,"publicationDate":"2023-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138476375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
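The prediction step can be caricatured with a bivariate-normal sketch: if a treatment's relative effects in the two indications are correlated, the effect observed in the studied indication shifts the prediction for the new one. All numbers below are invented, and the authors' full Bayesian NMA model is far richer than this two-parameter conditional-mean calculation.

```python
import math

def predict_new_indication(d_obs, mu1, mu2, sd1, sd2, rho):
    """Conditional prediction of a treatment's effect in indication 2
    given its observed effect in indication 1, under a bivariate-normal
    model for the pair of relative effects (a simplified sketch of the
    idea, not the authors' full Bayesian NMA model)."""
    mean = mu2 + rho * (sd2 / sd1) * (d_obs - mu1)
    sd = sd2 * math.sqrt(1.0 - rho**2)
    return mean, sd

# Illustrative numbers: the treatment looks better than the indication-1
# average (-0.9 vs. -0.4 on, say, a log-odds-ratio scale); a strong
# between-indication correlation (rho = 0.8) pulls the indication-2
# prediction below its average of -0.3.
mean, sd = predict_new_indication(d_obs=-0.9, mu1=-0.4, mu2=-0.3,
                                  sd1=0.3, sd2=0.3, rho=0.8)
print(round(mean, 3), round(sd, 3))
```

Note how the conditional SD shrinks as rho grows: stronger correlation between indications means the observed effect is more informative about the new indication, which is the mechanism the modeling strategies in the paper exploit.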
Hans-Peter Piepho, Johannes Forkman, Waqas Ahmed Malik
Checking for possible inconsistency between direct and indirect evidence is an important task in network meta-analysis. Recently, an evidence-splitting (ES) model has been proposed that allows direct and indirect evidence in a network to be separated, and hence inconsistency to be assessed. A salient feature of this model is that the variance for heterogeneity appears in both the mean and the variance structure. Thus, full maximum likelihood (ML) has been proposed for estimating the parameters of this model. Maximum likelihood is known to yield biased variance component estimates in linear mixed models, and this problem is expected to also affect the ES model. The purpose of the present paper, therefore, is to propose a method based on residual (or restricted) maximum likelihood (REML). Our simulation shows that this new method is quite competitive with methods based on full ML in terms of bias and mean squared error. In addition, some limitations of the ES model are discussed. While this model splits direct and indirect evidence, it is not a plausible model for the cause of inconsistency.
{"title":"A REML method for the evidence-splitting model in network meta-analysis","authors":"Hans-Peter Piepho, Johannes Forkman, Waqas Ahmed Malik","doi":"10.1002/jrsm.1679","DOIUrl":"10.1002/jrsm.1679","url":null,"abstract":"<p>Checking for possible inconsistency between direct and indirect evidence is an important task in network meta-analysis. Recently, an evidence-splitting (ES) model has been proposed, that allows separating direct and indirect evidence in a network and hence assessing inconsistency. A salient feature of this model is that the variance for heterogeneity appears in both the mean and the variance structure. Thus, full maximum likelihood (ML) has been proposed for estimating the parameters of this model. Maximum likelihood is known to yield biased variance component estimates in linear mixed models, and this problem is expected to also affect the ES model. The purpose of the present paper, therefore, is to propose a method based on residual (or restricted) maximum likelihood (REML). Our simulation shows that this new method is quite competitive to methods based on full ML in terms of bias and mean squared error. In addition, some limitations of the ES model are discussed. While this model splits direct and indirect evidence, it is not a plausible model for the cause of inconsistency.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"198-212"},"PeriodicalIF":9.8,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1679","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138456722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simon Briscoe, Rebecca Abbott, Hassanat Lawal, Morwenna Rogers, Liz Shaw, Jo Thompson Coon
{"title":"Adapting how to use Google Search to identify studies for systematic reviews in view of a recent change to how search results are displayed","authors":"Simon Briscoe, Rebecca Abbott, Hassanat Lawal, Morwenna Rogers, Liz Shaw, Jo Thompson Coon","doi":"10.1002/jrsm.1687","DOIUrl":"10.1002/jrsm.1687","url":null,"abstract":"","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 1","pages":"175-176"},"PeriodicalIF":9.8,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138456723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study aimed to assess the methods and outcomes of A MeaSurement Tool to Assess systematic Reviews (AMSTAR) 2 appraisals in overviews of reviews (overviews) of interventions in the cardiovascular field, and to identify factors associated with these outcomes. MEDLINE, Scopus, and the Cochrane Database of Systematic Reviews were searched until November 2022. Eligible were overviews of cardiovascular interventions that analyzed systematic reviews (SRs) of randomized controlled trials (RCTs). Extracted data included characteristics of overviews and SRs and AMSTAR 2 appraisal methods and outcomes. Data were synthesized using descriptive statistics, and logistic regression was used to explore potential associations between the characteristics of SRs and extracted AMSTAR 2 overall ratings (“High-Moderate” vs. “Low-Critically low”). The original results on individual AMSTAR 2 items were entered into the official AMSTAR 2 online tool, and the recalculated overall confidence ratings were compared to those provided in the overviews. All 34 overviews identified were published between 2019 and 2022. Rating of overall confidence following the algorithm suggested by the AMSTAR 2 developers was noted in 74% of overviews. The 679 unique included SRs were mainly of “Critically low” (53%) or “Low” (18.7%) confidence and underperformed on item 2 (protocol; rated “no” in 65.2%) and item 7 (list of excluded studies; rated “no” in 84%). The following characteristics of SRs were significantly associated with higher overall ratings: Cochrane origin, pharmacological interventions, inclusion of RCTs only, citation of methodological and reporting guidelines, presence of a protocol, absence of funding, and publication after the AMSTAR 2 release. Generally, overviews' authors tended to deviate from the original rating scheme and to ascribe higher ratings to SRs than the official AMSTAR 2 online tool did. Most SRs included in overviews of cardiovascular interventions have critically low or low confidence in their results. 
Overviews' authors should be more transparent about the methods used to derive the overall confidence in SRs.
{"title":"Appraisal methods and outcomes of AMSTAR 2 assessments in overviews of systematic reviews of interventions in the cardiovascular field: A methodological study","authors":"Paschalis Karakasis, Konstantinos I. Bougioukas, Konstantinos Pamporis, Nikolaos Fragakis, Anna-Bettina Haidich","doi":"10.1002/jrsm.1680","DOIUrl":"10.1002/jrsm.1680","url":null,"abstract":"<p>This study aimed to assess the methods and outcomes of The Measurement Tool to Assess systematic Reviews (AMSTAR) 2 appraisals in overviews of reviews (overviews) of interventions in the cardiovascular field and identify factors that are associated with these outcomes. MEDLINE, Scopus, and the Cochrane Database of Systematic Reviews were searched until November 2022. Eligible were overviews of cardiovascular interventions, analyzing systematic reviews (SRs) of randomized controlled trials (RCTs). Extracted data included characteristics of overviews and SRs and AMSTAR 2 appraisal methods and outcomes. Data were synthesized using descriptive statistics and logistic regression to explore potential associations between the characteristics of SRs and extracted AMSTAR 2 overall ratings (“High-Moderate” vs. “Low-Critically low”). The original results on individual AMSTAR 2 items were entered into the official AMSTAR 2 online tool and the recalculated overall confidence ratings were compared to those provided in overviews. All 34 overviews identified were published between 2019 and 2022. Rating of overall confidence following the algorithm suggested by AMSTAR 2 developers was noted in 74% of overviews. The 679 unique included SRs were mainly of “Critically low” (53%) or “Low” (18.7%) confidence and underperformed in items 2 (Protocol, no = 65.2%) and 7 (List of excluded studies, no = 84%). 
The following characteristics of SRs were significantly associated with higher overall ratings: Cochrane origin, pharmacological interventions, including exclusively RCTs, citation of methodological and reporting guidelines, protocol, absence of funding and publication after AMSTAR 2 release. Generally, overviews' authors tended to deviate from the original rating scheme and ascribe higher ratings to SRs compared to the official AMSTAR 2 online tool. Most SRs included in overviews of cardiovascular interventions have critically low or low confidence in their results. Overviews' authors should be more transparent about the methods used to derive the overall confidence in SRs.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"213-226"},"PeriodicalIF":9.8,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92152061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}