Artificial intelligence in clinical medicine: a state-of-the-art overview of systematic reviews with methodological recommendations for improved reporting.
Giovanni Morone, Luigi De Angelis, Alex Martino Cinnera, Riccardo Carbonetti, Alessio Bisirri, Irene Ciancarelli, Marco Iosa, Stefano Negrini, Carlotte Kiekens, Francesco Negrini
{"title":"Artificial intelligence in clinical medicine: a state-of-the-art overview of systematic reviews with methodological recommendations for improved reporting.","authors":"Giovanni Morone, Luigi De Angelis, Alex Martino Cinnera, Riccardo Carbonetti, Alessio Bisirri, Irene Ciancarelli, Marco Iosa, Stefano Negrini, Carlotte Kiekens, Francesco Negrini","doi":"10.3389/fdgth.2025.1550731","DOIUrl":null,"url":null,"abstract":"<p><p>Medicine has become increasingly receptive to the use of artificial intelligence (AI). This overview of systematic reviews (SRs) aims to categorise current evidence about it and identify the current methodological state of the art in the field proposing a classification of AI model (CLASMOD-AI) to improve future reporting. PubMed/MEDLINE, Scopus, Cochrane library, EMBASE and Epistemonikos databases were screened by four blinded reviewers and all SRs that investigated AI tools in clinical medicine were included. 1923 articles were found, and of these, 360 articles were examined via the full-text and 161 SRs met the inclusion criteria. The search strategy, methodological, medical and risk of bias information were extracted. The CLASMOD-AI was based on input, model, data training, and performance metric of AI tools. A considerable increase in the number of SRs was observed in the last five years. The most covered field was oncology accounting for 13.9% of the SRs, with diagnosis as the predominant objective in 44.4% of the cases). The risk of bias was assessed in 49.1% of included SRs, yet only 39.2% of these used tools with specific items to assess AI metrics. This overview highlights the need for improved reporting on AI metrics, particularly regarding the training of AI models and dataset quality, as both are essential for a comprehensive quality assessment and for mitigating the risk of bias using specialized evaluation tools.</p>","PeriodicalId":73078,"journal":{"name":"Frontiers in digital health","volume":"7 ","pages":"1550731"},"PeriodicalIF":3.2000,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11920125/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in digital health","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/fdgth.2025.1550731","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
引用次数: 0
Abstract
Medicine has become increasingly receptive to the use of artificial intelligence (AI). This overview of systematic reviews (SRs) aims to categorise the current evidence on AI in clinical medicine, identify the methodological state of the art in the field, and propose a classification of AI models (CLASMOD-AI) to improve future reporting. The PubMed/MEDLINE, Scopus, Cochrane Library, EMBASE and Epistemonikos databases were screened by four blinded reviewers, and all SRs that investigated AI tools in clinical medicine were included. A total of 1,923 articles were found; of these, 360 were examined in full text and 161 SRs met the inclusion criteria. Information on search strategy, methodology, medical field, and risk of bias was extracted. CLASMOD-AI was based on the input, model, data training, and performance metrics of AI tools. A considerable increase in the number of SRs was observed over the last five years. The most covered field was oncology, accounting for 13.9% of the SRs, with diagnosis as the predominant objective (44.4% of cases). Risk of bias was assessed in 49.1% of the included SRs, yet only 39.2% of these used tools with items specific to AI metrics. This overview highlights the need for improved reporting on AI metrics, particularly regarding the training of AI models and dataset quality, as both are essential for a comprehensive quality assessment and for mitigating the risk of bias with specialized evaluation tools.
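To illustrate the four dimensions the abstract attributes to CLASMOD-AI (input, model, data training, and performance metric), the sketch below shows one way a reviewer might record them per included study. The field names, example values, and the optional risk-of-bias field are hypothetical and are not taken from the published classification; this is a minimal sketch, not the authors' actual data-extraction form.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ClasmodAIRecord:
    """Hypothetical per-study record covering the four CLASMOD-AI
    dimensions named in the abstract; field names are illustrative,
    not the published classification items."""
    input_data: str          # e.g. "chest CT images", "EHR tabular data"
    model: str               # e.g. "convolutional neural network"
    data_training: str       # e.g. "internal set, 5-fold cross-validation"
    performance_metric: str  # e.g. "AUC", "sensitivity/specificity"
    risk_of_bias_tool: Optional[str] = None  # e.g. "PROBAST", if reported


# Usage sketch: one record per AI tool reported in an included SR
example = ClasmodAIRecord(
    input_data="chest CT images",
    model="deep convolutional neural network",
    data_training="internal training set with 5-fold cross-validation",
    performance_metric="AUC",
    risk_of_bias_tool="PROBAST",
)
print(example)
```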