Most AI-based breast cancer detection systems rely on single-modality imaging algorithms, which limits their clinical reliability, even though early and accurate detection improves treatment outcomes and reduces mortality. To address these challenges, we present TransFusion-BCNet, a Transformer-driven, explainable multi-modal deep learning framework for breast cancer diagnosis. The framework consists of three components. First, the TriFusion-Transformer (TriFT) performs three-tier fusion: intra-modality fusion across multiple mammogram views and imaging sources, inter-modality fusion combining mammogram, ultrasound, MRI, and clinical features, and decision-level fusion for robust outcome prediction. Unlike classical fusion schemes, TriFT captures complex relationships across heterogeneous modalities. Second, we introduce the FusionAttribution Map (FAMap), a dual-level interpretability mechanism that generates region-level saliency maps for the imaging data and modality-level contribution scores that quantify the influence of each input source. This transparency helps clinicians understand where, and from which modality, a prediction originates. Third, the MetaFusion Optimizer (MFO) tunes fusion weights, network depth, and learning parameters through evolutionary search followed by gradient-based fine-tuning; this staged strategy improves both generalization and training stability where conventional optimizers fall short. In extensive experiments on the CBIS-DDSM, BUSI, TCGA-BRCA, and RIDER Breast MRI datasets, TransFusion-BCNet outperforms CNN–Transformer hybrids, achieving 99.4% accuracy, 99.0% precision, 99.2% recall, and a 99.1% F1-score. Together, TriFT, FAMap, and MFO make TransFusion-BCNet a robust, transparent, and clinically interpretable diagnostic framework, advancing AI-based breast cancer screening and decision support.
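To make the three-tier fusion idea concrete, the following is a minimal sketch of how such a TriFT-style module could be organized, assuming a PyTorch implementation; the class name TriFusionSketch, the embedding dimension, the pooling choice, and the learnable decision weights are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn as nn

class TriFusionSketch(nn.Module):
    """Illustrative three-tier fusion: intra-modality, inter-modality, decision-level."""

    def __init__(self, d_model=256, n_heads=4, n_modalities=4, n_classes=2):
        super().__init__()
        # Tier 1: intra-modality fusion -- self-attention over the tokens
        # belonging to one modality (e.g. multiple mammogram views).
        self.intra_attn = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # Tier 2: inter-modality fusion -- self-attention over the pooled
        # embeddings of mammogram, ultrasound, MRI, and clinical features.
        self.inter_attn = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # Tier 3: decision-level fusion -- per-modality heads combined by
        # learnable softmax-normalized weights.
        self.heads = nn.ModuleList(nn.Linear(d_model, n_classes) for _ in range(n_modalities))
        self.fusion_weights = nn.Parameter(torch.ones(n_modalities))

    def forward(self, modality_tokens):
        # modality_tokens: list of tensors, each (batch, n_tokens, d_model),
        # one per modality, already embedded by modality-specific encoders.
        pooled = []
        for tokens in modality_tokens:
            fused = self.intra_attn(tokens)        # tier 1: fuse views within a modality
            pooled.append(fused.mean(dim=1))       # pool tokens -> (batch, d_model)
        stacked = torch.stack(pooled, dim=1)       # (batch, n_modalities, d_model)
        cross = self.inter_attn(stacked)           # tier 2: fuse across modalities
        logits = torch.stack(
            [head(cross[:, i]) for i, head in enumerate(self.heads)], dim=1
        )                                          # (batch, n_modalities, n_classes)
        w = torch.softmax(self.fusion_weights, dim=0)
        return (w.view(1, -1, 1) * logits).sum(dim=1)  # tier 3: weighted decision fusion
```

In this sketch the decision-level weights are ordinary learnable parameters; in the described framework such hyperparameters would instead be tuned by the MetaFusion Optimizer's evolutionary search and gradient-based fine-tuning.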
