Kylie E. Hunter, Mason Aberoumand, Sol Libesman, James X. Sotiropoulos, Jonathan G. Williams, Jannik Aagerup, Rui Wang, Ben W. Mol, Wentao Li, Angie Barba, Nipun Shrestha, Angela C. Webster, Anna Lene Seidler
Increasing concerns about the trustworthiness of research have prompted calls to scrutinise studies' Individual Participant Data (IPD), but guidance on how to do this was lacking. To address this, we developed the IPD Integrity Tool to screen randomised controlled trials (RCTs) for integrity issues. Development of the tool involved a literature review, consultation with an expert advisory group, piloting on two IPD meta-analyses (including 73 trials with IPD), preliminary validation on 13 datasets with and without known integrity issues, and evaluation to inform iterative refinements. The IPD Integrity Tool comprises 31 items (13 study-level, 18 IPD-specific). IPD-specific items are automated where possible, and are grouped into eight domains, including unusual data patterns, baseline characteristics, correlations, date violations, patterns of allocation, internal and external inconsistencies, and plausibility of data. Users rate each item as having either no issues, some/minor issue(s), or many/major issue(s) according to decision rules, and justification for each rating is recorded. Overall, the tool guides decision-making by determining whether a trial has no concerns, some concerns requiring further information, or major concerns warranting exclusion from evidence synthesis or publication. In our preliminary validation checks, the tool accurately identified all five studies with known integrity issues. The IPD Integrity Tool enables users to assess the integrity of RCTs via examination of IPD. The tool may be applied by evidence synthesists, editors and others to determine whether an RCT should be considered sufficiently trustworthy to contribute to the evidence base that informs policy and practice.
{"title":"The Individual Participant Data Integrity Tool for assessing the integrity of randomised trials","authors":"Kylie E. Hunter, Mason Aberoumand, Sol Libesman, James X. Sotiropoulos, Jonathan G. Williams, Jannik Aagerup, Rui Wang, Ben W. Mol, Wentao Li, Angie Barba, Nipun Shrestha, Angela C. Webster, Anna Lene Seidler","doi":"10.1002/jrsm.1738","DOIUrl":"10.1002/jrsm.1738","url":null,"abstract":"<p>Increasing concerns about the trustworthiness of research have prompted calls to scrutinise studies' Individual Participant Data (IPD), but guidance on how to do this was lacking. To address this, we developed the IPD Integrity Tool to screen randomised controlled trials (RCTs) for integrity issues. Development of the tool involved a literature review, consultation with an expert advisory group, piloting on two IPD meta-analyses (including 73 trials with IPD), preliminary validation on 13 datasets with and without known integrity issues, and evaluation to inform iterative refinements. The IPD Integrity Tool comprises 31 items (13 study-level, 18 IPD-specific). IPD-specific items are automated where possible, and are grouped into eight domains, including unusual data patterns, baseline characteristics, correlations, date violations, patterns of allocation, internal and external inconsistencies, and plausibility of data. Users rate each item as having either no issues, some/minor issue(s), or many/major issue(s) according to decision rules, and justification for each rating is recorded. Overall, the tool guides decision-making by determining whether a trial has no concerns, some concerns requiring further information, or major concerns warranting exclusion from evidence synthesis or publication. In our preliminary validation checks, the tool accurately identified all five studies with known integrity issues. The IPD Integrity Tool enables users to assess the integrity of RCTs via examination of IPD. The tool may be applied by evidence synthesists, editors and others to determine whether an RCT should be considered sufficiently trustworthy to contribute to the evidence base that informs policy and practice.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 6","pages":"917-939"},"PeriodicalIF":5.0,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1738","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141970257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Judith Logan, Jenaya Webb, Nalini K. Singh, Nailisa Tanner, Kathryn Barrett, Margaret Wall, Benjamin Walsh, Ana Patricia Ayala
A thorough literature search is a key feature of scoping reviews. We investigated the search practices used by social science researchers as reported in their scoping reviews. We collected scoping reviews published between 2015 and 2021 from Social Science Citation Index. In the 2484 included studies, we observed a 58% average annual increase in published reviews, primarily from clinical and applied social science disciplines. Bibliographic databases comprised most of the information sources in the primary search strategy (n = 9565, 75%), although reporting practices varied. Most scoping reviews (n = 1805, 73%) included at least one supplementary search strategy. A minority of studies (n = 713, 29%) acknowledged a library and information science (LIS) professional, and few listed one as a co-author (n = 194, 8%). We conclude that to improve reporting and strengthen the impact of the scoping review method in the social sciences, researchers should consider (1) adhering to PRISMA-S reporting guidelines, (2) employing more supplementary search strategies, and (3) collaborating with LIS professionals.
{"title":"Scoping review search practices in the social sciences: A scoping review","authors":"Judith Logan, Jenaya Webb, Nalini K. Singh, Nailisa Tanner, Kathryn Barrett, Margaret Wall, Benjamin Walsh, Ana Patricia Ayala","doi":"10.1002/jrsm.1742","DOIUrl":"10.1002/jrsm.1742","url":null,"abstract":"<p>A thorough literature search is a key feature of scoping reviews. We investigated the search practices used by social science researchers as reported in their scoping reviews. We collected scoping reviews published between 2015 and 2021 from Social Science Citation Index. In the 2484 included studies, we observed a 58% average annual increase in published reviews, primarily from clinical and applied social science disciplines. Bibliographic databases comprised most of the information sources in the primary search strategy (<i>n</i> = 9565, 75%), although reporting practices varied. Most scoping reviews (<i>n</i> = 1805, 73%) included at least one supplementary search strategy. A minority of studies (<i>n</i> = 713, 29%) acknowledged an LIS professional and few listed one as a co-author (<i>n</i> = 194, 8%). We conclude that to improve reporting and strengthen the impact of the scoping review method in the social sciences, researchers should consider (1) adhering to PRISMA-S reporting guidelines, (2) employing more supplementary search strategies, and (3) collaborating with LIS professionals.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 6","pages":"950-963"},"PeriodicalIF":5.0,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1742","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141970256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When searching for scholarly documents, researchers often stick with the same familiar handful of databases. Yet, just beyond these limited horizons lie dozens of alternatives with which they could search more effectively, whether for quick lookups or thorough searches in systematic reviews or meta-analyses. Searchsmart.org is a free website that guides researchers to the search options best suited to their disciplines, offering a wide array of resources, including search engines, aggregators, journal platforms, repositories, clinical trials databases, bibliographic databases, and digital libraries. Search Smart currently evaluates the coverage and functionality of more than a hundred leading scholarly databases, including most major multidisciplinary databases and many that are discipline-specific. Search Smart's primary use cases involve database-selection decisions as part of systematic reviews, meta-analyses, or bibliometric analyses. Researchers can use up to 583 criteria to filter and sort recommended databases, and the interfaces through which they are accessed, by user-friendliness, search rigor, or relevance. With specific pre-defined filter settings, researchers can quickly identify databases particularly suitable for Boolean keyword searching and forward or backward citation searching. Overall, Search Smart's recommendations help researchers discover knowledge more effectively and efficiently by selecting the most suitable databases for their tasks.
{"title":"Searchsmart.org: Guiding researchers to the best databases and search systems for systematic reviews and beyond","authors":"Michael Gusenbauer","doi":"10.1002/jrsm.1746","DOIUrl":"10.1002/jrsm.1746","url":null,"abstract":"<p>When searching for scholarly documents, researchers often stick with the same familiar handful of databases. Yet, just beyond these limited horizons lie dozens of alternatives with which they could search more effectively, whether for quick lookups or thorough searches in systematic reviews or meta-analyses. Searchsmart.org is a free website that guides researchers to particularly suitable search options for their particular disciplines, offering a wide array of resources, including search engines, aggregators, journal platforms, repositories, clinical trials databases, bibliographic databases, and digital libraries. Search Smart currently evaluates the coverage and functionality of more than a hundred leading scholarly databases, including most major multidisciplinary databases and many that are discipline-specific. Search Smart's primary use cases involve database-selection decisions as part of systematic reviews, meta-analyses, or bibliometric analyses. Researchers can use up to 583 criteria to filter and sort recommendations of databases and the interfaces through which they can be accessed for user-friendliness, search rigor, or relevance. With specific pre-defined filter settings, researchers can quickly identify particularly suitable databases for Boolean keyword searching and forward or backward citation searching. Overall, Search Smart's recommendations help researchers to discover knowledge more effectively and efficiently by selecting the more suitable databases for their tasks.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 6","pages":"1200-1213"},"PeriodicalIF":5.0,"publicationDate":"2024-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1746","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141915611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
There has been a transition from broad to more specific research questions in the practice of network meta-analysis (NMA). Such convergence is also taking place in the context of individual registrational trials, following the recent introduction of the estimand framework, which is impacting the design, data collection strategy, analysis and interpretation of clinical trials. The language of estimands has much to offer to NMA, particularly given the “narrow” perspective of treatments and target populations taken in health technology assessment.
{"title":"Broad versus narrow research questions in evidence synthesis: A parallel to (and plea for) estimands","authors":"Antonio Remiro-Azócar, Anders Gorst-Rasmussen","doi":"10.1002/jrsm.1741","DOIUrl":"10.1002/jrsm.1741","url":null,"abstract":"<p>There has been a transition from broad to more specific research questions in the practice of network meta-analysis (NMA). Such convergence is also taking place in the context of individual registrational trials, following the recent introduction of the estimand framework, which is impacting the design, data collection strategy, analysis and interpretation of clinical trials. The language of estimands has much to offer to NMA, particularly given the “narrow” perspective of treatments and target populations taken in health technology assessment.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 5","pages":"735-740"},"PeriodicalIF":5.0,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141905281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Laura Caquelin, Pauline Badra, Lucas Poulain, Bruno Laviolle, Moreno Ursino, Clara Locher
This study aimed to assess the feasibility of applying two recent phase I meta-analysis methods to protein kinase inhibitors (PKIs) developed in oncology and to identify situations where these methods could be both feasible and useful. This ancillary study used data from a systematic review conducted to identify dose-finding studies for PKIs. PKIs selected for meta-analysis were required to have at least five completed dose-finding studies involving cancer patients, with available results, and dose escalation guided by toxicity assessment. To account for heterogeneity caused by various administration schedules, some studies were divided into study parts, considered as separate entities in the meta-analyses. For each PKI, two Bayesian random-effects meta-analysis methods were applied to model the toxicity probability distribution of the recommended dose and to estimate the maximum tolerated dose (MTD). Meta-analyses were performed for 20 PKIs, covering 96 studies corresponding to 115 study parts. The median posterior toxicity probability was below the toxicity threshold of 0.20 for 70% of the PKIs, although the resulting credible intervals were very wide. All approved doses were below the MTD estimated for the minimum toxicity threshold, except for one, for which the approved dose was above the MTD estimated for the maximal threshold. Applying phase I meta-analysis methods was feasible for the majority of PKIs, although their implementation requires several conditions to be met. However, the meta-analyses produced estimates with large uncertainty, probably due to limited patient numbers and/or between-study variability, which calls into question the reliability of the recommended doses.
{"title":"Meta-analyses of phase I dose-finding studies: Application for the development of protein kinase inhibitors in oncology","authors":"Laura Caquelin, Pauline Badra, Lucas Poulain, Bruno Laviolle, Moreno Ursino, Clara Locher","doi":"10.1002/jrsm.1747","DOIUrl":"10.1002/jrsm.1747","url":null,"abstract":"<p>This study aimed to assess the feasibility of applying two recent phase I meta-analyses methods to protein kinase inhibitors (PKIs) developed in oncology and to identify situations where these methods could be both feasible and useful. This ancillary study used data from a systematic review conducted to identify dose-finding studies for PKIs. PKIs selected for meta-analyses were required to have at least five completed dose-finding studies involving cancer patients, with available results, and dose escalation guided by toxicity assessment. To account for heterogeneity caused by various administration schedules, some studies were divided into study parts, considered as separate entities in the meta-analyses. For each PKI, two Bayesian random-effects meta-analysis methods were applied to model the toxicity probability distribution of the recommended dose and to estimate the maximum tolerated dose (MTD). Meta-analyses were performed for 20 PKIs including 96 studies corresponding to 115 study parts. The median posterior probability of toxicity probability was below the toxicity thresholds of 0.20 for 70% of the PKIs, even if the resulting credible intervals were very wide. All approved doses were below the MTD estimated for the minimum toxicity threshold, except for one, for which the approved dose was above the MTD estimated for the maximal threshold. The application of phase I meta-analysis methods has been feasible for the majority of PKI; nevertheless, their implementation requires multiple conditions. However, meta-analyses resulted in estimates with large uncertainty, probably due to limited patient numbers and/or between-study variability. This calls into question the reliability of the recommended doses.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 6","pages":"964-977"},"PeriodicalIF":5.0,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1747","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141892471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Areti Angeliki Veroniki, Ivan Florez, Brian Hutton, Sharon E. Straus, Andrea C. Tricco
Recently, Ades and colleagues discussed the controversies and advancements in network meta-analysis (NMA) over the past two decades, covering its reliability, assumptions and novel approaches, and provided some useful recommendations for the conduct of NMAs. The present discussion paper builds on the insights of Ades and colleagues, providing a roadmap for NMA applications, advancements in software and tools, and approaches designed to facilitate the assessment and interpretation of NMA findings. It also discusses the impact of NMA across disciplines, particularly for policymakers and guideline developers. Despite 20 years of NMA history, challenges remain in understanding and assessing assumptions, communicating and interpreting findings, and applying common approaches such as network meta-regression and NMA involving non-randomized studies in readily available software. NMA has proven particularly valuable in clinical decision-making, which highlights the need for additional training and interdisciplinary collaboration of knowledge users, including patient engagement, to enhance its adoption and address real-world problems.
{"title":"Two decades of network meta-analysis: Roadmap to their applications and challenges","authors":"Areti Angeliki Veroniki, Ivan Florez, Brian Hutton, Sharon E. Straus, Andrea C. Tricco","doi":"10.1002/jrsm.1744","DOIUrl":"10.1002/jrsm.1744","url":null,"abstract":"<p>Recently, Ades and colleagues discussed the controversies and advancements in network meta-analysis (NMA) over the past two decades, discussing its reliability, assumptions, novel approaches, and provided some useful recommendations for the conduction of NMAs. The present discussion paper builds on the insights by Ades and colleagues, providing a roadmap for NMA applications, advancements in software and tools, and approaches designed to facilitate the assessment and interpretation of NMA findings. It also discusses the impact of NMA across disciplines, particularly for policymakers and guideline developers. Despite 20 years of NMA history, challenges remain in understanding and assessing assumptions, communicating and interpreting findings, and applying common approaches like network meta-regression and NMA involving non-randomized studies in readily available software. NMA has proven particularly valuable in clinical decision-making, which highlights the need for additional training and interdisciplinary collaboration of knowledge users, including patient engagement, to enhance its adoption and address real-world problems.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 5","pages":"741-746"},"PeriodicalIF":5.0,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1744","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141854315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dan Jackson, Landan Zhang, Robert Hettle, Miranda Cooper
We respond to some of the methodological issues raised in a recent review of network meta-analysis (NMA). We also provide a health technology developer's perspective and consider the future application of NMA to health technology assessment.
{"title":"‘Twenty years of network meta-analysis: Continuing controversies and recent developments’: A health technology assessment perspective","authors":"Dan Jackson, Landan Zhang, Robert Hettle, Miranda Cooper","doi":"10.1002/jrsm.1740","DOIUrl":"10.1002/jrsm.1740","url":null,"abstract":"<p>We respond to some of the methodological issues raised in a recent review of network meta-analysis (NMA). We also provide a health technology developer's perspective and consider the future application of NMA to health technology assessment.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 5","pages":"731-734"},"PeriodicalIF":5.0,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141791489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. E. Ades, Nicky J. Welton, Sofia Dias, Deborah M. Caldwell, David M. Phillippo
We respond to discussant comments on our paper “Twenty years of network meta-analysis: Continuing controversies and recent developments” (https://doi.org/10.1002/jrsm.1700) and raise some additional points for consideration, including: the way in which methodological guidance is generated; integration of the estimand framework with evidence synthesis; and implications of the European Joint Clinical Assessment. We ask: what properties are required of population adjustment methods to enable transparent and consistent decision-making? We also ask why individual patient data are not routinely made available to reimbursement authorities and clinical guideline developers.
{"title":"Response to discussant comments on “NMA, the first 20 years”","authors":"A. E. Ades, Nicky J. Welton, Sofia Dias, Deborah M. Caldwell, David M. Phillippo","doi":"10.1002/jrsm.1745","DOIUrl":"10.1002/jrsm.1745","url":null,"abstract":"<p>We respond to discussant comments on our paper “<i>Twenty years of network meta-analysis: Continuing controversies and recent developments</i>” (https://doi.org/10.1002/jrsm.1700) and raise some additional points for consideration, including: the way in which methodological guidance is generated; integration of the estimand framework with evidence synthesis; and implications of the European Joint Clinical Assessment. We ask: what properties are required of population adjustment methods to enable transparent and consistent decision-making? We also ask why individual patient data is not routinely made available to re-imbursement authorities and clinical guideline developers.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 5","pages":"751-757"},"PeriodicalIF":5.0,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1745","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141756063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This discussion contribution provides some subjective early history of network meta-analysis and also proposes a new bipartite graph structure to better represent multi-arm trials.
{"title":"Network meta-analysis: Looping back","authors":"Thomas Lumley","doi":"10.1002/jrsm.1743","DOIUrl":"10.1002/jrsm.1743","url":null,"abstract":"<p>This discussion contribution provides some subjective early history of network meta-analysis and also proposes a new bipartite graph structure to better represent multi-arm trials.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 5","pages":"728-730"},"PeriodicalIF":5.0,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1743","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141756062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Searching multiple resources to locate eligible studies for research syntheses can result in hundreds to thousands of duplicate references that should be removed before the screening process for efficiency. Research investigating the performance of automated methods for deduplicating references via reference managers and systematic review software programs can become quickly outdated as new versions and programs become available. This follow-up study examined the performance of default de-duplication algorithms in EndNote 20, EndNote online classic, ProQuest RefWorks, Deduklick, and Systematic Review Accelerator's new Deduplicator tool. On most accounts, systematic review software programs outperformed reference managers when deduplicating references. While cost and the need for institutional access may restrict researchers from being able to utilize some automated methods for deduplicating references, Systematic Review Accelerator's Deduplicator tool is free to use and demonstrated the highest accuracy and sensitivity, while also offering user-mediation of detected duplicates to improve specificity. Researchers conducting syntheses should take automated de-duplication performance, and methods for improving and optimizing their use, into consideration to help prevent the unintentional removal of eligible studies and potential introduction of bias to syntheses. Researchers should also be transparent about their de-duplication process to help readers critically appraise their synthesis methods, and to comply with the PRISMA-S extension for reporting literature searches in systematic reviews.
{"title":"Considerations for conducting systematic reviews: A follow-up study to evaluate the performance of various automated methods for reference de-duplication","authors":"Sandra McKeown, Zuhaib M. Mir","doi":"10.1002/jrsm.1736","DOIUrl":"10.1002/jrsm.1736","url":null,"abstract":"<p>Searching multiple resources to locate eligible studies for research syntheses can result in hundreds to thousands of duplicate references that should be removed before the screening process for efficiency. Research investigating the performance of automated methods for deduplicating references via reference managers and systematic review software programs can become quickly outdated as new versions and programs become available. This follow-up study examined the performance of default de-duplication algorithms in EndNote 20, EndNote online classic, ProQuest RefWorks, Deduklick, and Systematic Review Accelerator's new Deduplicator tool. On most accounts, systematic review software programs outperformed reference managers when deduplicating references. While cost and the need for institutional access may restrict researchers from being able to utilize some automated methods for deduplicating references, Systematic Review Accelerator's Deduplicator tool is free to use and demonstrated the highest accuracy and sensitivity, while also offering user-mediation of detected duplicates to improve specificity. Researchers conducting syntheses should take automated de-duplication performance, and methods for improving and optimizing their use, into consideration to help prevent the unintentional removal of eligible studies and potential introduction of bias to syntheses. Researchers should also be transparent about their de-duplication process to help readers critically appraise their synthesis methods, and to comply with the PRISMA-S extension for reporting literature searches in systematic reviews.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"15 6","pages":"896-904"},"PeriodicalIF":5.0,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1736","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141756061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}