In medical image segmentation and classification, deep learning models can automatically extract features and perform high-performance inference, aiding physicians with efficient and accurate automated decision support. However, these models are typically trained for a single task, which neglects inter-task relevance and scales poorly. To address these issues, we propose a novel Alternating Multi-Task Hierarchical Network (AMTH-Net) for medical image segmentation and classification. The model comprises three hierarchical modules: a Pathology Region Clarity (PRC) auxiliary module that enhances both segmentation and classification; a Multi-Resolution Attention (MRA) segmentation module that uses deep supervision to attend to image information at multiple resolution levels, improving segmentation accuracy; and a Cascaded Multi-Scale Information (CMSI) classification module that employs a cascaded multi-scale mechanism to gradually integrate discrete information from different network layers, enhancing classification performance. Additionally, we introduce a novel Alternating Interaction Loss (AI-Loss) based on the Multi-Gradient Information Feedback (MGIF) algorithm to further improve the model's segmentation and diagnostic performance. Experiments on the COVID chest X-ray (COVID CXR) and F BUSI breast ultrasound datasets show that AMTH-Net achieves superior performance in both segmentation and classification tasks. On COVID CXR, AMTH-Net reaches a Dice coefficient of 98.33%, an Intersection over Union (IoU) of 96.31%, and an accuracy of 91.49%, outperforming existing methods. On F BUSI, it attains a Dice coefficient of 96.76%, an IoU of 95.92%, and an accuracy of 95.87%, again surpassing other methods. These results confirm the effectiveness and superiority of the proposed model.
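The abstract does not give the exact form of AI-Loss or the MGIF algorithm, but the general idea of alternating multi-task gradient feedback can be sketched as follows. This is a minimal, illustrative sketch under the assumption that training alternates which task's gradient dominates the update of shared weights; the toy scalar losses, weighting schedule, and all function names here are hypothetical, not from the paper.

```python
def seg_loss(w):
    # toy stand-in for a segmentation (e.g. Dice-style) loss; hypothetical
    return (w - 2.0) ** 2

def cls_loss(w):
    # toy stand-in for a classification (e.g. cross-entropy) loss; hypothetical
    return (w + 1.0) ** 2

def grad(f, w, eps=1e-6):
    # central-difference numerical gradient of a scalar loss
    return (f(w + eps) - f(w - eps)) / (2 * eps)

def train(steps=200, lr=0.05):
    w = 0.0  # single scalar standing in for the shared network weights
    for t in range(steps):
        # alternate which task dominates the gradient feedback at each step
        # (assumed schedule; the paper's MGIF schedule may differ)
        if t % 2 == 0:
            w_seg, w_cls = 0.9, 0.1
        else:
            w_seg, w_cls = 0.1, 0.9
        g = w_seg * grad(seg_loss, w) + w_cls * grad(cls_loss, w)
        w -= lr * g
    return w

final_w = train()
# with symmetric alternation the shared weight settles near the joint
# optimum of the two tasks (w = 0.5 for these toy losses)
```

Because the two weightings average out over successive steps, the shared parameter converges to a small oscillation around the joint optimum, which is the qualitative behavior an alternating multi-task loss aims for.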
