A. E. Ades, Nicky J. Welton, Sofia Dias, David M. Phillippo, Deborah M. Caldwell
Network meta-analysis (NMA) is an extension of pairwise meta-analysis (PMA) that combines evidence from trials on multiple treatments in connected networks. NMA delivers internally consistent estimates of relative treatment efficacy, needed for rational decision making. Over its first 20 years, NMA's use has grown exponentially, with applications both in health technology assessment (HTA), primarily reimbursement decisions and clinical guideline development, and in clinical research publications. This has been a period of transition in meta-analysis, first from its roots in educational and social psychology, where large heterogeneous datasets could be explored to find effect modifiers, to smaller pairwise meta-analyses in clinical medicine, averaging fewer than six studies each. This has been followed by narrowly focused estimation of the effects of specific treatments at specific doses in specific populations in sparse networks, where direct comparisons are unavailable or informed by only one or two studies. NMA is a powerful and well-established technique but, in spite of the exponential increase in applications, doubts about its reliability and validity persist. Here we outline the continuing controversies and review some recent developments. We suggest that heterogeneity should be minimized, as it poses a threat to the reliability of NMA that has not been fully appreciated, perhaps because it has not been seen as a problem in PMA. More research is needed on the extent of heterogeneity and inconsistency in datasets used for decision making, on formal methods for making recommendations based on NMA, and on the further development of multi-level network meta-regression.
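The internal consistency that NMA enforces can be illustrated with a toy contrast-based model: direct estimates on three treatments are pooled by weighted least squares so that the estimated B-versus-C effect equals the difference of the two basic parameters. This is a minimal sketch under invented trial data and variances, not the authors' method.

```python
import numpy as np

# Toy fixed-effect network meta-analysis on treatments A, B, C.
# Each row is one trial's direct estimate (e.g., a log odds ratio)
# and its variance; the numbers are illustrative only.
trials = [
    ("AB", 0.50, 0.04),
    ("AB", 0.40, 0.05),
    ("AC", 0.90, 0.06),
    ("BC", 0.45, 0.05),
]

# Basic parameters: d_AB and d_AC (effects relative to reference A).
# Under consistency, the BC contrast equals d_AC - d_AB.
rows = {"AB": [1, 0], "AC": [0, 1], "BC": [-1, 1]}
X = np.array([rows[c] for c, _, _ in trials], dtype=float)
y = np.array([est for _, est, _ in trials])
W = np.diag([1.0 / var for _, _, var in trials])

# Weighted least squares pools direct and indirect evidence into
# internally consistent contrasts.
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
d_AB, d_AC = beta
print(f"d_AB={d_AB:.3f}  d_AC={d_AC:.3f}  d_BC={d_AC - d_AB:.3f}")
```

The BC trial contributes indirectly to both basic parameters, which is how sparse networks borrow strength across comparisons.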
{"title":"Twenty years of network meta-analysis: Continuing controversies and recent developments","authors":"A. E. Ades, Nicky J. Welton, Sofia Dias, David M. Phillippo, Deborah M. Caldwell","doi":"10.1002/jrsm.1700","DOIUrl":"10.1002/jrsm.1700","url":null,"abstract":"<p>Network meta-analysis (NMA) is an extension of pairwise meta-analysis (PMA) which combines evidence from trials on multiple treatments in connected networks. NMA delivers internally consistent estimates of relative treatment efficacy, needed for rational decision making. Over its first 20 years NMA's use has grown exponentially, with applications in both health technology assessment (HTA), primarily re-imbursement decisions and clinical guideline development, and clinical research publications. This has been a period of transition in meta-analysis, first from its roots in educational and social psychology, where large heterogeneous datasets could be explored to find effect modifiers, to smaller pairwise meta-analyses in clinical medicine on average with less than six studies. This has been followed by narrowly-focused estimation of the effects of specific treatments at specific doses in specific populations in sparse networks, where direct comparisons are unavailable or informed by only one or two studies. NMA is a powerful and well-established technique but, in spite of the exponential increase in applications, doubts about the reliability and validity of NMA persist. Here we outline the continuing controversies, and review some recent developments. We suggest that heterogeneity should be minimized, as it poses a threat to the reliability of NMA which has not been fully appreciated, perhaps because it has not been seen as a problem in PMA. 
More research is needed on the extent of heterogeneity and inconsistency in datasets used for decision making, on formal methods for making recommendations based on NMA, and on the further development of multi-level network meta-regression.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 5","pages":"702-727"},"PeriodicalIF":5.0,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1700","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139484797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A random-effects model is often applied in meta-analysis when considerable heterogeneity among studies is observed, owing to differences in patient characteristics, timeframe, treatment regimens, and other study characteristics. Since 2014, the journals Research Synthesis Methods and the Annals of Internal Medicine have published several noteworthy papers that explained why the most widely used method for pooling heterogeneous studies, the DerSimonian-Laird (DL) estimator, can produce biased estimates with falsely high precision, and that recommended several alternative methods instead. Nevertheless, more than half of the studies (55.7%) published in top oncology-specific journals during 2015-2022 did not report any detailed method for their random-effects meta-analyses. Among the studies that did report the methodology used, the DL method was still the dominant one. Thus, while the authors recommend that Research Synthesis Methods and the Annals of Internal Medicine continue to publish articles that report specific methods for handling heterogeneity and that use random-effects estimators providing more accurate confidence limits than the DL estimator, other journals that publish meta-analyses in oncology (and presumably in other disease areas) are urged to do the same on a much larger scale than currently documented.
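The DL estimator, and the kind of alternative interval the cited papers argue for (here, the Hartung-Knapp-Sidik-Jonkman adjustment), can be sketched in a few lines of NumPy/SciPy. The study data are invented for illustration; this is a sketch of the standard formulas, not the authors' analysis.

```python
import numpy as np
from scipy import stats

def dersimonian_laird(y, v):
    """DerSimonian-Laird moment estimate of between-study variance tau^2.

    y : study effect estimates (e.g., log odds ratios)
    v : their within-study variances
    """
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)            # fixed-effect pooled mean
    q = np.sum(w * (y - mu_fe) ** 2)             # Cochran's Q
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (k - 1)) / c)           # truncated at zero

def random_effects(y, v, hksj=False):
    """Random-effects pooled estimate with a Wald or HKSJ 95% interval."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    tau2 = dersimonian_laird(y, v)
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    if hksj:
        # HKSJ: t-based interval with a weighted sample-variance SE,
        # which gives better coverage than the classical DL interval
        se = np.sqrt(np.sum(w * (y - mu) ** 2) / ((k - 1) * np.sum(w)))
        crit = stats.t.ppf(0.975, k - 1)
    else:
        se = 1.0 / np.sqrt(np.sum(w))            # classical DL Wald interval
        crit = stats.norm.ppf(0.975)
    return mu, (mu - crit * se, mu + crit * se)

# toy data: five heterogeneous studies
y = [0.10, 0.30, -0.20, 0.55, 0.40]
v = [0.04, 0.05, 0.06, 0.03, 0.05]
print(random_effects(y, v))             # DL with normal interval
print(random_effects(y, v, hksj=True))  # HKSJ interval, typically wider
```

The point estimate is identical under both intervals; only the standard error and critical value change, which is exactly where the DL interval's false precision arises.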
{"title":"Appropriateness of conducting and reporting random-effects meta-analysis in oncology","authors":"Jinma Ren, Jia Ma, Joseph C. Cappelleri","doi":"10.1002/jrsm.1702","DOIUrl":"10.1002/jrsm.1702","url":null,"abstract":"<p>A random-effects model is often applied in meta-analysis when considerable heterogeneity among studies is observed due to the differences in patient characteristics, timeframe, treatment regimens, and other study characteristics. Since 2014, the journals <i>Research Synthesis Methods</i> and the <i>Annals of Internal Medicine</i> have published a few noteworthy papers that explained why the most widely used method for pooling heterogeneous studies—the DerSimonian–Laird (DL) estimator—can produce biased estimates with falsely high precision and recommended to use other several alternative methods. Nevertheless, more than half of studies (55.7%) published in top oncology-specific journals during 2015–2022 did not report any detailed method in the random-effects meta-analysis. Of the studies that did report the methodology used, the DL method was still the dominant one reported. 
Thus, while the authors recommend that <i>Research Synthesis Methods</i> and the <i>Annals of Internal Medicine</i> continue to increase the publication of its articles that report on specific methods for handling heterogeneity and use random-effects estimates that provide more accurate confidence limits than the DL estimator, other journals that publish meta-analyses in oncology (and presumably in other disease areas) are urged to do the same on a much larger scale than currently documented.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 2","pages":"326-331"},"PeriodicalIF":9.8,"publicationDate":"2024-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139465478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reem El Sherif, Pierre Pluye, Quan Nha Hong, Benoît Rihoux
Qualitative comparative analysis (QCA) is a hybrid method designed to bridge the gap between qualitative and quantitative research through a case-sensitive approach that considers each case holistically as a complex configuration of conditions and outcomes. QCA allows for multiple conjunctural causation, implying that it is often a combination of conditions that produces an outcome, that multiple pathways may lead to the same outcome, and that the same condition may have a different impact on the outcome in different contexts. This approach to complexity allows QCA to provide a practical understanding of complex, real-world situations and of the context of implementing interventions. There are guides for conducting QCA in primary research and in quantitative systematic reviews, yet, to our knowledge, there is no guidance for conducting QCA in systematic mixed studies reviews (SMSRs). Thus, the specific objectives of this paper are to (1) describe a step-by-step approach for novice researchers using QCA to integrate qualitative and quantitative evidence, including guidance on how to use software; (2) highlight specific challenges; (3) propose potential solutions from a worked example; and (4) provide recommendations for reporting.
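To give a flavor of the first analytic step in crisp-set QCA, the sketch below builds a truth table and computes each configuration's consistency, i.e., the share of cases with that configuration in which the outcome occurs. The cases and condition names are hypothetical, not taken from the paper's worked example, and a full QCA would additionally require Boolean minimization (e.g., Quine-McCluskey), which dedicated software handles.

```python
from collections import defaultdict

# Hypothetical binary-coded cases: (conditions dict, outcome 0/1).
cases = [
    ({"training": 1, "support": 1, "time": 0}, 1),
    ({"training": 1, "support": 1, "time": 1}, 1),
    ({"training": 1, "support": 0, "time": 0}, 0),
    ({"training": 0, "support": 1, "time": 1}, 1),
    ({"training": 0, "support": 0, "time": 0}, 0),
    ({"training": 1, "support": 1, "time": 0}, 1),
]

conds = ["training", "support", "time"]

# Group cases by configuration: config -> [n_cases, n_with_outcome]
table = defaultdict(lambda: [0, 0])
for conf, out in cases:
    key = tuple(conf[c] for c in conds)
    table[key][0] += 1
    table[key][1] += out

# Consistency = proportion of a configuration's cases showing the outcome;
# configurations with high consistency are candidate sufficient paths.
for key, (n, n_out) in sorted(table.items()):
    print(dict(zip(conds, key)), f"n={n}", f"consistency={n_out / n:.2f}")
```

In a mixed studies review, the "cases" would typically be included studies or intervention sites, with conditions coded from both qualitative and quantitative evidence.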
{"title":"Using qualitative comparative analysis as a mixed methods synthesis in systematic mixed studies reviews: Guidance and a worked example","authors":"Reem El Sherif, Pierre Pluye, Quan Nha Hong, Benoît Rihoux","doi":"10.1002/jrsm.1698","DOIUrl":"10.1002/jrsm.1698","url":null,"abstract":"<p>Qualitative comparative analysis (QCA) is a hybrid method designed to bridge the gap between qualitative and quantitative research in a case-sensitive approach that considers each case holistically as a complex configuration of conditions and outcomes. QCA allows for multiple conjunctural causation, implying that it is often a combination of conditions that produces an outcome, that multiple pathways may lead to the same outcome, and that in different contexts, the same condition may have a different impact on the outcome. This approach to complexity allows QCA to provide a practical understanding for complex, real-world situations, and the context of implementing interventions. There are guides for conducting QCA in primary research and quantitative systematic reviews yet, to our knowledge, no guidance for conducting QCA in systematic mixed studies reviews (SMSRs). 
Thus, the specific objectives of this paper are to (1) describe a step-by-step approach for novice researchers for using QCA to integrate qualitative and quantitative evidence, including guidance on how to use software; (2) highlight specific challenges; (3) propose potential solutions from a worked example; and (4) provide recommendations for reporting.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 3","pages":"450-465"},"PeriodicalIF":9.8,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1698","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139401167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Literature screening is the process of identifying all relevant records from a pool of candidate records in systematic reviews, meta-analyses, and other research synthesis tasks. This process is time-consuming, expensive, and prone to human error. Screening prioritization methods attempt to help reviewers identify the most relevant records while screening only a high-priority proportion of the candidate records. In previous studies, screening prioritization is often referred to as automatic literature screening or automatic literature identification. Numerous screening prioritization methods have been proposed in recent years. However, there is a lack of screening prioritization methods with reliable performance. Our objective is to develop a screening prioritization algorithm with reliable performance for practical use, for example, an algorithm that guarantees an 80% chance of identifying at least