Breast cancer (BC) is the second most prevalent cause of death for women and the most frequently diagnosed malignancy. Early identification of this deadly illness lowers treatment costs while significantly improving survival rates. Currently, skilled radiologists and pathologists analyze radiographic and histopathological images, respectively; in addition to being expensive, this procedure is prone to error. This paper addresses these challenges by presenting an innovative approach that combines a Modified U-Net architecture with sophisticated self-supervised learning methods to improve the accuracy and efficiency of breast cancer detection in whole-slide images (WSIs). The proposed model improves tumor detection accuracy through a multi-stage process: Gaussian filtering for image preprocessing to remove noise, followed by the Modified U-Net, which incorporates multi-scale processing and attention mechanisms, for precise tumor segmentation. Feature extraction is performed with the Bag of Visual Words (BoW), Improved Local Gradient and Intensity Pattern (LGIP), and Pyramidal Histogram of Oriented Gradients (PHOG) techniques to capture diverse image characteristics. The classification phase employs an Improved Self-Supervised Learning (ISSL) method, which strengthens feature representation via a novel loss function and an Improved Multiple Instance Pooling (IMIP) mechanism. This method is designed to overcome the limitations of conventional techniques by producing clearer tumor boundaries and more accurate classifications, thereby improving the overall reliability and efficacy of breast cancer detection in clinical practice. The ISSL strategy yielded the highest performance metrics, including an accuracy of 0.924, a sensitivity of 0.886, and a negative predictive value (NPV) of 0.943.
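To make the preprocessing stage concrete, the sketch below illustrates Gaussian filtering for noise suppression on a synthetic image patch. This is a minimal, self-contained illustration of the general technique, not the paper's implementation; the kernel size and sigma are assumptions, as the abstract does not specify them.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel (size and sigma are assumed values)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def gaussian_smooth(img, size=5, sigma=1.0):
    """Denoise a grayscale patch by convolving with a Gaussian kernel,
    using edge padding so the output keeps the input's shape."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out

# Stand-in for a grayscale WSI patch corrupted by additive noise.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
smoothed = gaussian_smooth(noisy, size=5, sigma=1.0)
```

After smoothing, the patch retains its dimensions while the noise variance is reduced, which is the property that makes Gaussian filtering a common first step before segmentation.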