{"title":"Improving CNN interpretability and evaluation via alternating training and regularization in chest CT scans","authors":"Rodrigo Ramos-Díaz , Jesús García-Ramírez , Jimena Olveres , Boris Escalante-Ramírez","doi":"10.1016/j.ibmed.2025.100211","DOIUrl":null,"url":null,"abstract":"<div><div>Interpretable machine learning is an emerging trend that holds significant importance, considering the growing impact of machine learning systems on society and human lives. Many interpretability methods are applied in CNN after training to provide deeper insights into the outcomes, but only a few have tried to promote interpretability during training. The aim of this experimental study is to investigate the interpretability of CNN. This research was applied to chest computed tomography scans, as understanding CNN predictions has particular importance in the automatic classification of medical images. We attempted to implement a CNN technique aimed at improving interpretability by relating filters in the last convolutional to specific output classes. Variations of such a technique were explored and assessed using chest CT images for classification based on the presence of lungs and lesions. A search was conducted to optimize the specific hyper-parameters necessary for the evaluated strategies. A novel strategy is proposed employing transfer learning and regularization. Models obtained with this strategy and the optimized hyperparameters were statistically compared to standard models, demonstrating greater interpretability without a significant loss in predictive accuracy. We achieved CNN models with improved interpretability, which is crucial for the development of more explainable and reliable AI systems.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100211"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligence-based medicine","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666521225000146","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Interpretable machine learning is an emerging field of growing importance, given the increasing impact of machine learning systems on society and human lives. Many interpretability methods are applied to CNNs after training to provide deeper insight into their outputs, but few attempt to promote interpretability during training itself. This experimental study investigates the interpretability of CNNs applied to chest computed tomography (CT) scans, where understanding model predictions is particularly important for the automatic classification of medical images. We implemented a CNN technique that improves interpretability by relating filters in the last convolutional layer to specific output classes. Variants of this technique were explored and assessed on chest CT images classified by the presence of lungs and lesions. A hyperparameter search was conducted to optimize the settings required by the evaluated strategies. We propose a novel strategy that combines transfer learning and regularization. Models obtained with this strategy and the optimized hyperparameters were statistically compared to standard models, demonstrating greater interpretability without a significant loss in predictive accuracy. The resulting CNN models exhibit improved interpretability, which is crucial for the development of more explainable and reliable AI systems.
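To make the core idea concrete, below is a minimal PyTorch sketch of the kind of interpretability regularization the abstract describes: each filter in the last convolutional layer is tied to one output class, and a penalty discourages filters from activating on samples of other classes. The network (`ClassFilterCNN`), the fixed modulo filter-to-class assignment, the penalty form, and the weight 0.1 are all illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of class-specific filter regularization in a CNN.
# Assumptions: network architecture, filter-to-class assignment, and
# penalty form are hypothetical, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassFilterCNN(nn.Module):
    """Small CNN whose last-conv filters are each assigned to one class."""
    def __init__(self, num_classes=2, last_filters=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, last_filters, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(last_filters, num_classes)
        # Fixed assignment: filter i -> class i % num_classes (hypothetical).
        self.register_buffer(
            "filter_class", torch.arange(last_filters) % num_classes
        )

    def forward(self, x):
        maps = self.features(x)              # (B, F, H, W) last-conv maps
        pooled = self.pool(maps).flatten(1)  # (B, F) per-filter activation
        return self.fc(pooled), pooled

def class_filter_penalty(pooled, labels, filter_class):
    """Penalize activations of filters assigned to a class other than the label."""
    mismatch = (filter_class.unsqueeze(0) != labels.unsqueeze(1)).float()
    return (pooled * mismatch).mean()

# One training step: cross-entropy plus the interpretability regularizer.
model = ClassFilterCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(4, 1, 64, 64)               # stand-in for CT slices
y = torch.randint(0, 2, (4,))
logits, pooled = model(x)
loss = F.cross_entropy(logits, y) + 0.1 * class_filter_penalty(
    pooled, y, model.filter_class
)
loss.backward()
opt.step()
```

After training with such a penalty, inspecting the per-filter activations indicates which class each filter has learned to respond to, which is one plausible way to obtain the filter-to-class interpretability the study evaluates.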