Precise detection of lung cancer type is crucial for effective treatment, but the blurred edges and textures of lung nodules can lead to misclassification, resulting in inappropriate treatment strategies. To address this challenge, we propose an explainable feature fusion convolutional neural network (Exp-FFCNN). The Exp-FFCNN model incorporates convolutional blocks with atrous spatial pyramid pooling (ASPP), squeeze-and-excitation ConvNeXt blocks (SECBs), and a feature fusion head block (FFHB). The initial convolutional blocks, augmented by ASPP, enable precise extraction of multi-scale local and global features of lung nodules. The SECBs are designed to capture domain-specific information by extracting deep, detailed texture features through an attention mechanism that highlights blurred textures and shape features. A pre-trained VGG16 feature extractor is used to obtain diverse edge-related feature maps, and both sets of feature maps are then fused channel-wise and spatial-wise in the FFHB. To improve input image quality, several preprocessing techniques are applied, and to mitigate class imbalance, the borderline synthetic minority oversampling technique (Borderline-SMOTE) is employed. The Chest CT-Scan Images dataset is used for training, while the generalizability of the model is validated on the IQ-OTH/NCCD dataset. Through comprehensive evaluation against state-of-the-art models, our framework demonstrates exceptional accuracy, achieving 99.60% on the Chest CT-Scan Images dataset and 98% on the IQ-OTH/NCCD dataset. Furthermore, to enhance feature interpretability for radiologists, Grad-CAM and LIME are utilized. This explainability provides insight into the decision-making process, improving the transparency and interpretability of the model, thereby fostering greater confidence in its application to real-world lung cancer diagnosis.
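To make the attention mechanism inside the SECBs concrete, the following is a minimal NumPy sketch of standard squeeze-and-excitation channel attention (squeeze via global average pooling, excitation via a two-layer bottleneck with ReLU and sigmoid). The weight matrices `w1` and `w2` are hypothetical stand-ins for learned parameters, and this is an illustration of the generic SE operation, not the authors' exact block.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_attention(fmap, w1, w2):
    """Squeeze-and-excitation channel attention.

    fmap: feature map of shape (C, H, W)
    w1:   bottleneck weights, shape (C // r, C)  -- reduction ratio r
    w2:   expansion weights,  shape (C, C // r)
    Returns the channel-rescaled feature map, same shape as fmap.
    """
    # Squeeze: global average pool each channel -> descriptor of shape (C,)
    z = fmap.mean(axis=(1, 2))
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid gives per-channel gates in (0, 1)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))
    # Scale: reweight each channel by its gate (broadcast over H and W)
    return fmap * s[:, None, None]
```

In a trained network the gates `s` learn to amplify channels carrying subtle texture cues (such as blurred nodule boundaries) and suppress less informative ones, which is the role the SECBs play in the model described above.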
