Investigation and evaluation of randomized controlled trials for interventions involving artificial intelligence

Jianjian Wang, Shouyuan Wu, Qiangqiang Guo, Hui Lan, Estill Janne, Ling Wang, Juanjuan Zhang, Qi Wang, Yang Song, Nan Yang, Xufei Luo, Qi Zhou, Qianling Shi, Xuan Yu, Yanfang Ma, Joseph L. Mathew, Hyeong Sik Ahn, Myeong Soo Lee, Yaolong Chen
{"title":"Investigation and evaluation of randomized controlled trials for interventions involving artificial intelligence","authors":"Jianjian Wang , Shouyuan Wu , Qiangqiang Guo , Hui Lan , Estill Janne , Ling Wang , Juanjuan Zhang , Qi Wang , Yang Song , Nan Yang , Xufei Luo , Qi Zhou , Qianling Shi , Xuan Yu , Yanfang Ma , Joseph L. Mathew , Hyeong Sik Ahn , Myeong Soo Lee , Yaolong Chen","doi":"10.1016/j.imed.2021.04.006","DOIUrl":null,"url":null,"abstract":"<div><p><strong>Objective</strong> Complete and transparent reporting is of critical importance for randomized controlled trials (RCTs). The present study aimed to determine the reporting quality and methodological quality of RCTs for interventions involving artificial intelligence (AI) and their protocols.</p><p><strong>Methods</strong> We searched MEDLINE (via PubMed), Embase, Web of Science, CBMdisc, Wanfang Data, and CNKI from January 1, 2016, to November 11, 2020, to collect RCTs involving AI. We also extracted the protocol of each included RCT if it could be obtained. CONSORT-AI (Consolidated Standards of Reporting Trials–Artificial Intelligence) statement and Cochrane Collaboration's tool for assessing risk of bias (ROB) were used to evaluate the reporting quality and methodological quality, respectively, and SPIRIT-AI (The Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence) statement was used to evaluate the reporting quality of the protocols. The associations of the reporting rate of CONSORT-AI with the publication year, journal's impact factor (IF), number of authors, sample size, and first author's country were analyzed univariately using Pearson's chi-squared test, or Fisher's exact test if the expected values in any of the cells were below 5. The compliance of the retrieved protocols to SPIRIT-AI was presented descriptively.</p><p><strong>Results</strong> Overall, 29 RCTs and three protocols were considered eligible. The CONSORT-AI items “title and abstract” and “interpretation of results” were reported by all RCTs, with the items with the lowest reporting rates being “funding” (0), “implementation” (3.5%), and “harms” (3.5%). The risk of bias was high in 13 (44.8%) RCTs and not clear in 15 (51.7%) RCTs. Only one RCT (3.5%) had a low risk of bias. The compliance was not significantly different in terms of the publication year, journal's IF, number of authors, sample size, or first author's country. Ten of the 35 SPIRIT-AI items (funding, participant timeline, allocation concealment mechanism, implementation, data management, auditing, declaration of interests, access to data, informed consent materials and biological specimens) were not reported by any of the three protocols.</p><p><strong>Conclusions</strong> The reporting and methodological quality of RCTs involving AI need to be improved. Because of the limited availability of protocols, their quality could not be fully judged. 
Following the CONSORT-AI and SPIRIT-AI statements and with appropriate guidance on the risk of bias when designing and reporting AI-related RCTs can promote standardization and transparency.</p></div>","PeriodicalId":73400,"journal":{"name":"Intelligent medicine","volume":"1 2","pages":"Pages 61-69"},"PeriodicalIF":4.4000,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.imed.2021.04.006","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent medicine","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S266710262100019X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Abstract
Objective Complete and transparent reporting is of critical importance for randomized controlled trials (RCTs). The present study aimed to determine the reporting quality and methodological quality of RCTs for interventions involving artificial intelligence (AI) and their protocols.
Methods We searched MEDLINE (via PubMed), Embase, Web of Science, CBMdisc, Wanfang Data, and CNKI from January 1, 2016, to November 11, 2020, to collect RCTs involving AI. We also retrieved the protocol of each included RCT if it could be obtained. The CONSORT-AI (Consolidated Standards of Reporting Trials–Artificial Intelligence) statement and the Cochrane Collaboration's tool for assessing risk of bias (ROB) were used to evaluate the reporting quality and methodological quality, respectively, and the SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence) statement was used to evaluate the reporting quality of the protocols. The associations of the CONSORT-AI reporting rate with publication year, journal impact factor (IF), number of authors, sample size, and first author's country were analyzed univariately using Pearson's chi-squared test, or Fisher's exact test if the expected value in any cell was below 5. The compliance of the retrieved protocols with SPIRIT-AI was presented descriptively.
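As an illustration of the univariate analysis described above, the following is a minimal sketch (not the authors' code) of how one such association could be tested in Python with scipy, assuming a hypothetical 2×2 table of item reporting (yes/no) by journal IF group; Pearson's chi-squared test is used unless any expected cell count falls below 5, in which case Fisher's exact test is used instead.

```python
# Minimal sketch, not from the paper: select Pearson's chi-squared test or
# Fisher's exact test for one univariate association, as described in the Methods.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def univariate_association(table):
    """Return (test_name, p_value) for a 2x2 contingency table of counts."""
    table = np.asarray(table)
    # chi2_contingency also returns the expected frequencies under independence
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    if (expected < 5).any():
        # Fall back to Fisher's exact test when any expected cell count is below 5
        # (scipy's fisher_exact handles 2x2 tables)
        _, p = fisher_exact(table)
        return "Fisher's exact test", p
    return "Pearson's chi-squared test", p

# Hypothetical counts: rows = CONSORT-AI item reported (yes/no),
# columns = journal IF group (high/low)
example_table = [[12, 9], [3, 5]]
print(univariate_association(example_table))
```

In this hypothetical example one expected cell count is about 4.1, so the function falls back to Fisher's exact test, mirroring the decision rule stated in the Methods.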
Results Overall, 29 RCTs and three protocols were considered eligible. The CONSORT-AI items “title and abstract” and “interpretation of results” were reported by all RCTs, while the items with the lowest reporting rates were “funding” (0), “implementation” (3.5%), and “harms” (3.5%). The risk of bias was high in 13 (44.8%) RCTs and unclear in 15 (51.7%) RCTs; only one RCT (3.5%) had a low risk of bias. Compliance did not differ significantly by publication year, journal IF, number of authors, sample size, or first author's country. Ten of the 35 SPIRIT-AI items (funding, participant timeline, allocation concealment mechanism, implementation, data management, auditing, declaration of interests, access to data, informed consent materials, and biological specimens) were not reported by any of the three protocols.
Conclusions The reporting quality and methodological quality of RCTs involving AI need to be improved. Because of the limited availability of protocols, their quality could not be fully judged. Following the CONSORT-AI and SPIRIT-AI statements, together with appropriate attention to the risk of bias when designing and reporting AI-related RCTs, can promote standardization and transparency.