Development of the individual participant data integrity tool for assessing the integrity of randomised trials using individual participant data

Kylie E Hunter, Mason Aberoumand, Sol Libesman, James X Sotiropoulos, Jonathan G Williams, Wentao Li, Jannik Aagerup, Ben W Mol, Rui Wang, Angie Barba, Nipun Shrestha, Angela C Webster, Anna Lene Seidler

Research Synthesis Methods · Published online 18 August 2024 (issue dated 1 November 2024) · DOI: 10.1002/jrsm.1739
Citations: 0
Abstract
Increasing integrity concerns in medical research have prompted the development of tools to detect untrustworthy studies. Existing tools primarily assess published aggregate data (AD), though scrutiny of individual participant data (IPD) is often required to detect trustworthiness issues. Thus, we developed the IPD Integrity Tool for detecting integrity issues in randomised trials with IPD available. This manuscript describes the development of this tool. We conducted a literature review to collate and map existing integrity items. These were discussed with an expert advisory group; agreed items were included in a standardised tool and automated where possible. We piloted this tool in two IPD meta-analyses (including 116 trials) and conducted preliminary validation checks on 13 datasets with and without known integrity issues. We identified 120 integrity items: 54 could be conducted using AD, 48 required IPD, and 18 were possible with AD, but more comprehensive with IPD. An initial reduced tool was developed through consensus involving 13 advisors, featuring 11 AD items across four domains, and 12 IPD items across eight domains. The tool was iteratively refined throughout piloting and validation. All studies with known integrity issues were accurately identified during validation. The final tool includes seven AD domains with 13 items and eight IPD domains with 18 items. The quality of evidence informing healthcare relies on trustworthy data. We describe the development of a tool to enable researchers, editors, and others to detect integrity issues using IPD. Detailed instructions for its application are published as a complementary manuscript in this issue.
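The abstract notes that agreed integrity items were "automated where possible". As a purely illustrative sketch (the tool's actual checks and implementation are described in the complementary manuscript, not here), the example below shows the kind of IPD-level screening that lends itself to automation: flagging participant records whose values are identical on fields that should jointly be unique. The function name, field choices, and data are hypothetical.

```python
# Illustrative only: a generic automated IPD screening check of the kind
# the abstract alludes to, not the IPD Integrity Tool's actual code.

def find_duplicate_records(rows, key_fields):
    """Return indices of rows whose key_fields match an earlier row.

    rows: list of dicts, one per participant.
    key_fields: fields assumed to be jointly unique per participant
                (a hypothetical choice for this sketch).
    """
    seen = {}
    duplicates = []
    for i, row in enumerate(rows):
        key = tuple(row.get(f) for f in key_fields)
        if key in seen:
            duplicates.append(i)  # record matches an earlier participant
        else:
            seen[key] = i
    return duplicates

# Example participant-level data with one suspiciously identical record.
ipd = [
    {"id": "001", "age": 34, "sex": "F", "outcome": 1},
    {"id": "002", "age": 41, "sex": "M", "outcome": 0},
    {"id": "003", "age": 34, "sex": "F", "outcome": 1},  # same values as 001
]
print(find_duplicate_records(ipd, ["age", "sex", "outcome"]))  # → [2]
```

In practice, a flagged duplicate is only a prompt for human follow-up with the trialists, since identical records can occur legitimately in small datasets with coarse variables.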
Journal description:
Research Synthesis Methods is a reputable, peer-reviewed journal that focuses on the development and dissemination of methods for conducting systematic research synthesis. Our aim is to advance the knowledge and application of research synthesis methods across various disciplines.
Our journal provides a platform for the exchange of ideas and knowledge related to designing, conducting, analyzing, interpreting, reporting, and applying research synthesis. While research synthesis is commonly practiced in the health and social sciences, our journal also welcomes contributions from other fields to enrich the methodologies employed in research synthesis across scientific disciplines.
By bridging different disciplines, we aim to foster collaboration and cross-fertilization of ideas, ultimately enhancing the quality and effectiveness of research synthesis methods. Whether you are a researcher, practitioner, or stakeholder involved in research synthesis, our journal strives to offer valuable insights and practical guidance for your work.