High-Intensity Focused Ultrasound (HIFU) ablation is a rapidly advancing non-invasive treatment modality that has achieved considerable success in treating uterine fibroids, which constitute over 50% of benign gynecological tumors. Preoperative Magnetic Resonance Imaging (MRI) plays a pivotal role in the planning and guidance of HIFU surgery for uterine fibroids, in which the segmentation of tumors is of critical significance. Segmentation was previously performed manually by medical experts, a time-consuming and labor-intensive procedure heavily reliant on clinical expertise. This study introduced deep learning-based nnU-Net models as a cost-effective approach to segmenting uterine fibroids from preoperative MRI images. Furthermore, 3D reconstruction of the segmented targets was implemented to guide HIFU surgery. Segmentation and 3D reconstruction performance were evaluated with a focus on enhancing the safety and effectiveness of HIFU surgery. Results demonstrated the nnU-Net's strong performance in segmenting uterine fibroids and their surrounding organs. Specifically, 3D nnU-Net achieved Dice Similarity Coefficients (DSC) of 92.55% for the uterus, 95.63% for fibroids, 92.69% for the spine, 89.63% for the endometrium, 97.75% for the bladder, and 90.45% for the urethral orifice. Compared with other state-of-the-art methods such as HIFUNet, U-Net, R2U-Net, ConvUNeXt and 2D nnU-Net, 3D nnU-Net achieved significantly higher DSC values, highlighting its superior accuracy and robustness. In conclusion, the efficacy of the 3D nnU-Net model for automated segmentation of the uterus and its surrounding organs was robustly validated. When integrated with intra-operative ultrasound imaging, this segmentation method and 3D reconstruction hold substantial potential to enhance the safety and efficiency of HIFU surgery in the clinical treatment of uterine fibroids.
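The Dice Similarity Coefficient reported above is a standard overlap metric, defined as 2|A∩B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B. A minimal numpy sketch (not the authors' evaluation code) of how the per-organ DSC values could be computed:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice Similarity Coefficient: 2*|A ∩ B| / (|A| + |B|).

    `pred` and `target` are boolean (or 0/1) masks of equal shape.
    Two empty masks score 1.0 by convention.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, target).sum() / denom

# Example: two 4-pixel square masks sharing exactly one pixel
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[2:4, 2:4] = True
print(round(dice_coefficient(a, b), 3))  # 2*1 / (4+4) = 0.25
```

The same function applies unchanged to 3D volumes, since the sums run over all voxels of the mask arrays.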
Objective: To develop a practical scoring system based on radiomics and imaging features for predicting the malignant potential of incidental indeterminate small solid pulmonary nodules (IISSPNs) smaller than 20 mm.
Methods: A total of 360 patients with malignant IISSPNs (n = 213) and benign IISSPNs (n = 147) confirmed after surgery were retrospectively analyzed. The whole cohort was randomly divided into training and validation groups at a ratio of 7:3. The least absolute shrinkage and selection operator (LASSO) algorithm was used to reduce the dimensionality of the radiomics features. Multivariate logistic analysis was performed to establish models. The receiver operating characteristic (ROC) curve, area under the curve (AUC), 95% confidence interval (CI), sensitivity and specificity of each model were recorded. A scoring system based on odds ratios was developed.
Results: Three radiomics features were selected for further model establishment. After multivariate logistic analysis, the combined model including Mean, age, emphysema, lobulation and size reached the highest AUC of 0.877 (95%CI: 0.830-0.915), an accuracy rate of 83.3%, sensitivity of 85.3% and specificity of 80.2% in the training group, followed by the radiomics model (AUC: 0.804) and the imaging model (AUC: 0.773). A scoring system with a cutoff value greater than 4 points was developed. If the score was larger than 8 points, the probability of a malignant IISSPN reached at least 92.7%.
Conclusion: The combined model demonstrated good diagnostic performance in predicting the malignant potential of IISSPNs. A perfect accuracy rate of 100% can be achieved with a score exceeding 12 points in the user-friendly scoring system.
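The mechanics of such an odds-ratio-derived point system can be sketched as follows. The point weights below are hypothetical placeholders (the paper derives its points from the fitted odds ratios, which are not reproduced here); only the sum-and-threshold logic is illustrated:

```python
# Hypothetical point weights per positive finding; the actual values
# in the paper come from the fitted multivariate odds ratios.
POINTS = {
    "mean_high": 3,        # radiomics 'Mean' feature above threshold
    "older_age": 2,
    "emphysema": 2,
    "lobulated": 3,
    "larger_size": 2,
}

def nodule_score(findings):
    """Sum the points of each positive finding (a set of POINTS keys)."""
    return sum(POINTS[f] for f in findings)

def classify(score, cutoff=4):
    """A score above the cutoff suggests a malignant IISSPN."""
    return "suspicious for malignancy" if score > cutoff else "likely benign"

s = nodule_score({"lobulated", "larger_size"})  # 3 + 2 = 5
print(s, classify(s))
```

Clinically, the appeal of this format is that the score can be tallied at the workstation without running any model code; higher bands (above 8 and above 12 points in the paper) correspond to progressively higher malignancy probability.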
Many image fusion methods have been proposed to leverage the advantages of functional and anatomical images while compensating for their shortcomings. These methods integrate functional and anatomical images while presenting physiological and metabolic organ information, making their diagnostic efficiency far greater than that of single-modal images. Currently, most existing multimodal medical image fusion methods are based on multiscale transformation, which involves obtaining pyramid features through multiscale transformation. Low-resolution images are used to analyse approximate image features, and high-resolution images are used to analyse detailed image features. Different fusion rules are applied to achieve feature fusion at different scales. Although these fusion methods based on multiscale transformation can effectively achieve multimodal medical image fusion, much detailed information is lost during the multiscale and inverse transformations, resulting in blurred edges and a loss of detail in the fused images. A multimodal medical image fusion method based on interval gradients and convolutional neural networks is proposed to overcome this problem. First, this method uses interval gradients for image decomposition to obtain structure and texture images. Second, deep neural networks are used to extract perception images. Separate rules are then used to fuse the structure, texture, and perception images. Finally, the images are combined to obtain the final fused image after colour transformation. Compared with the reference algorithms, the proposed method performs better across multiple objective indicators.
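The decompose-fuse-recombine pipeline described above can be illustrated with a simple two-layer split. The box filter below is only a stand-in for the interval-gradient decomposition (which is not reproduced here), and the rules (average the structure layers, keep the larger-magnitude texture) are common defaults, not necessarily the paper's:

```python
import numpy as np

def box_blur(img, k=3):
    """Box filter (a stand-in for the interval-gradient smoothing)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(img_a, img_b, k=3):
    """Two-layer fusion: average the structure (base) layers and keep
    the larger-magnitude texture (detail) at each pixel."""
    base_a, base_b = box_blur(img_a, k), box_blur(img_b, k)
    det_a, det_b = img_a - base_a, img_b - base_b
    fused_base = 0.5 * (base_a + base_b)
    fused_det = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return fused_base + fused_det

a = np.random.default_rng(0).random((8, 8))
b = np.random.default_rng(1).random((8, 8))
f = fuse(a, b)
print(f.shape)  # (8, 8)
```

A sanity check on such rules is that fusing an image with itself must return the image unchanged, since the base average and the detail selection are both identities in that case.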
Breast cancer is a leading cause of mortality among women globally, necessitating precise classification of breast ultrasound images for early diagnosis and treatment. Traditional methods using CNN architectures such as VGG, ResNet, and DenseNet, though somewhat effective, often struggle with class imbalances and subtle texture variations, leading to reduced accuracy for minority classes such as malignant tumors. To address these issues, we propose a methodology that leverages EfficientNet-B7, a scalable CNN architecture, combined with advanced data augmentation techniques to enhance minority class representation. Our approach involves fine-tuning EfficientNet-B7 on the BUSI dataset, applying RandomHorizontalFlip, RandomRotation, and ColorJitter to balance the dataset and improve model robustness. The training process includes early stopping to prevent overfitting and optimize performance metrics. Additionally, we integrate Explainable AI (XAI) techniques, such as Grad-CAM, to enhance the interpretability and transparency of the model's predictions, providing visual and quantitative insights into the features and regions of ultrasound images influencing classification outcomes. Our model achieves a classification accuracy of 99.14%, significantly outperforming existing CNN-based approaches in breast ultrasound image classification. The incorporation of XAI techniques enhances our understanding of the model's decision-making process, thereby increasing its reliability and facilitating clinical adoption. This comprehensive framework offers a robust and interpretable tool for the early detection and diagnosis of breast cancer, advancing the capabilities of automated diagnostic systems and supporting clinical decision-making processes.
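The named augmentations are standard torchvision transforms; their effect can be illustrated with plain numpy, without the framework. The snippet below is a simplified stand-in (brightness scaling in place of full ColorJitter, which also jitters contrast, saturation, and hue), not the paper's training pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_horizontal_flip(img, p=0.5):
    """Flip the image left-right with probability p
    (what torchvision's RandomHorizontalFlip does)."""
    return img[:, ::-1] if rng.random() < p else img

def brightness_jitter(img, max_delta=0.2):
    """Scale intensities by a random factor in [1-max_delta, 1+max_delta];
    a simplified stand-in for ColorJitter on greyscale ultrasound."""
    factor = 1.0 + rng.uniform(-max_delta, max_delta)
    return np.clip(img * factor, 0.0, 1.0)

img = rng.random((64, 64))          # dummy greyscale ultrasound frame
aug = brightness_jitter(random_horizontal_flip(img))
print(aug.shape)  # (64, 64)
```

Because each transform is applied randomly per sample, repeated epochs see different variants of the same minority-class images, which is what counteracts the class imbalance.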
Recent improvements in artificial intelligence and computer vision make it possible to automatically detect abnormalities in medical images, and skin lesions are one broad class of such abnormalities. Some lesion types lead to skin cancer, of which there are several forms. Melanoma is one of the deadliest types of skin cancer, so its early diagnosis is of utmost importance. Treatment is greatly aided by the quick and precise diagnoses that artificial intelligence enables. Basic image processing approaches for edge detection have shown promise in identifying and delineating the boundaries within skin lesions, and further enhancements to edge detection are possible. In this paper, the use of fractional differentiation for improved edge detection is explored in the context of skin lesion detection. A framework based on fractional differential filters for edge detection in skin lesion images is proposed that can improve the automatic detection rate of malignant melanoma. The derived images are used to enhance the input images, which then undergo a classification process based on deep learning. The well-studied HAM10000 dataset is used in the experiments. The system achieves 81.04% accuracy with the EfficientNet model using the proposed fractional-derivative-based enhancements, compared with around 77.94% using the original images. In almost all the experiments, the enhanced images improved the accuracy. The results show that the proposed method improves recognition performance.
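Fractional differentiation is commonly discretized with Grünwald-Letnikov coefficients, which generalize the finite-difference stencil to non-integer order; small fractional orders sharpen edges while attenuating less of the smooth texture than an integer derivative. A 1-D sketch under that standard formulation (the paper's exact filter construction may differ):

```python
import numpy as np

def gl_coefficients(alpha, n):
    """First n Grünwald-Letnikov coefficients w_k = (-1)^k * C(alpha, k),
    computed via the recurrence w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def fractional_diff_1d(signal, alpha, n=5):
    """Apply the order-alpha fractional derivative along a 1-D signal
    (e.g. one row of an image) using an n-tap backward stencil."""
    w = gl_coefficients(alpha, n)
    out = np.zeros_like(signal, dtype=float)
    for k in range(n):
        out[k:] += w[k] * signal[:len(signal) - k]
    return out

# alpha = 1 recovers the ordinary backward difference
x = np.array([0.0, 1.0, 4.0, 9.0])
print(fractional_diff_1d(x, alpha=1.0))  # [0. 1. 3. 5.]
```

For a 2-D image, the same stencil would be applied along rows and columns (or along several directions) and the responses combined, after which the response is added back to the input as an enhancement layer.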
Objectives: To investigate the value of conventional ultrasonography (US) combined with quantitative shear wave elastography (SWE) in evaluating and identifying target axillary lymph node (TALN) for fine needle aspiration biopsy (FNAB) of patients with early breast cancer.
Materials and methods: A total of 222 patients with 223 ALNs were prospectively recruited from January 2018 to December 2021. All TALNs were evaluated by US and SWE, and subsequently underwent FNAB. The diagnostic performances of US, SWE, UEor (either US or SWE positive) and UEand (both US and SWE positive), and of FNAB guided by each of the above four methods, for evaluating ALN status were assessed using receiver operating characteristic (ROC) curve analyses. Univariate and multivariate logistic regression analyses were used to determine the independent predictors of axillary burden.
Results: The areas under the ROC curve (AUC) for diagnosing ALNs using conventional US and SWE were 0.69 and 0.66, respectively, with sensitivities of 78.00% and 65.00% and specificities of 60.98% and 66.67%. The combined method, UEor, demonstrated significantly improved sensitivity of 86.00% (p < 0.001 compared with US and SWE alone). The AUC of UEor-guided FNAB [0.85 (95% CI, 0.80-0.90)] was significantly higher than that of US-guided FNAB [0.83 (95% CI, 0.78-0.88), p = 0.042], SWE-guided FNAB [0.79 (95% CI, 0.72-0.84), p = 0.001], and UEand-guided FNAB [0.77 (95% CI, 0.71-0.82), p < 0.001]. Multivariate logistic regression showed that the FNAB result and the number of suspicious ALNs were independent predictors of axillary burden in patients with early breast cancer.
Conclusion: UEor had superior sensitivity compared with US or SWE alone in ALN diagnosis. UEor-guided FNAB achieved a lower false-negative rate than FNAB guided solely by US or SWE, making it a promising tool for the preoperative diagnosis of ALNs in early breast cancer, with potential implications for the selection of the axillary surgical modality.
Background: The presence of lateral lymph node metastases (LNM) in paediatric patients with papillary thyroid cancer (PTC) is an independent risk factor for recurrence. We aimed to identify risk factors and establish a prediction model for lateral LNM before surgery in children and adolescents with PTC.
Methods: We developed a prediction model based on data obtained from 63 minors with PTC between January 2014 and June 2023. We collected and analysed clinical factors, ultrasound (US) features of the primary tumour, and pathology records of the patients. Multivariate logistic regression analysis was used to determine independent predictors and build a prediction model. We evaluated the predictive performance of the risk factors and the prediction model using the area under the receiver operating characteristic (ROC) curve, and assessed the clinical usefulness of the prediction model using decision curve analysis.
Results: Among the minors with PTC, 21 had lateral LNM (33.3%). Logistic regression revealed that independent risk factors for lateral LNM were multifocality, tumour size, sex, and age. The area under the ROC curve for multifocality, tumour size, sex, and age was 0.62 (p = 0.049), 0.61 (p = 0.023), 0.66 (p = 0.003), and 0.58 (p = 0.013), respectively. Compared to a single risk factor, the combined predictors had a significantly higher area under the ROC curve (0.842), with a sensitivity and specificity of 71.4% and 81.0%, respectively (cutoff value = 0.524). Decision curve analysis showed that the prediction model was clinically useful, with threshold probabilities between 2% and 99%.
Conclusions: The independent risk factors for lateral LNM in paediatric PTC patients were multifocality and tumour size on US imaging, as well as sex and age. Our model outperformed US imaging and clinical features alone in predicting the status of lateral LNM.
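The AUC values reported throughout these abstracts admit a simple probabilistic reading: the AUC equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case (the Mann-Whitney statistic). A pure-Python sketch with illustrative scores (not data from any of the studies):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the probability that a random positive
    case outscores a random negative case; ties count as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Illustrative model outputs for 3 positive and 3 negative cases
pos = [0.9, 0.7, 0.6]
neg = [0.8, 0.4, 0.3]
print(auc(pos, neg))  # 7 of 9 pairs correctly ordered ≈ 0.778
```

This pairwise form makes clear why an AUC of 0.842 for the combined model is a meaningful improvement over the 0.58-0.66 range of the single factors: it directly measures how often the model ranks a metastatic case above a non-metastatic one.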