Pub Date: 2025-03-12; DOI: 10.1007/s00261-025-04879-y
Lingwei Li, Tongtong Liu, Peng Wang, Lianzheng Su, Lei Wang, Xinmiao Wang, Chidao Chen
Ovarian cancer is among the most common malignant tumours in women worldwide, and early identification is essential for improving patient survival. Automated, trustworthy diagnostic techniques are needed because traditional CT image interpretation depends largely on the subjective assessment of radiologists, which can introduce variability. Deep learning approaches in medical image analysis have advanced significantly, showing particular promise in the automatic categorisation of ovarian tumours. This research presents an automated diagnostic approach for ovarian tumour CT images utilising supervised contrastive learning and a Multiple Perception Encoder (MP Encoder). The approach incorporates T-Pro technology to augment data diversity and simulates semantic perturbations to increase the model's generalisation capability. The incorporation of a Multi-Scale Perception Module (MSP Module) and a Multi-Attention Module (MA Module) enhances the model's sensitivity to the intricate morphology and subtle characteristics of ovarian tumours, improving classification accuracy and robustness and ultimately achieving an average classification accuracy of 98.43%. Experimental results indicate the method's strong efficacy in ovarian tumour classification, particularly for tumours with intricate morphology or poor image quality. This deep learning framework addresses the complexities of ovarian tumour CT image interpretation, offering clinicians enhanced diagnostic support and aiding early detection and treatment planning for ovarian cancer.
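The abstract does not spell out the contrastive objective used; for orientation, the standard supervised contrastive (SupCon) loss that such approaches typically build on can be sketched in NumPy (function and argument names are illustrative, not taken from the paper):

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Standard SupCon loss: for each anchor, positives are all other
    samples sharing its label; similarities are temperature-scaled."""
    labels = np.asarray(labels)
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    # similarity logits, with self-pairs excluded from the softmax
    logits = np.where(self_mask, -np.inf, z @ z.T / temperature)
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    positives = (labels[:, None] == labels[None, :]) & ~self_mask
    pos_counts = positives.sum(axis=1)
    keep = pos_counts > 0                                # anchors with at least one positive
    pos_log_prob = np.where(positives, log_prob, 0.0).sum(axis=1)
    return (-pos_log_prob[keep] / pos_counts[keep]).mean()
```

Each anchor is pulled toward embeddings with the same class label and pushed away from the rest; the temperature controls how sharply hard negatives are weighted.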
Title: Multiple perception contrastive learning for automated ovarian tumor classification in CT images.
Pub Date: 2025-03-10; DOI: 10.1007/s00261-025-04853-8
JunQiang Lei, YongSheng Xu, YuanHui Zhu, ShanShan Jiang, Song Tian, Yi Zhu
Objectives: To develop an automated deep learning (DL) methodology for detecting small hepatocellular carcinoma (sHCC) in cirrhotic livers, leveraging Gd-EOB-DTPA-enhanced MRI.
Methods: This retrospective study included 120 patients with cirrhosis (78 with sHCC and 42 with non-HCC cirrhosis), selected through stratified sampling. The dataset was divided into training and testing sets (8:2 ratio). An nnU-Net, which performs well on small-object segmentation, was used for segmentation, and segmentation performance was assessed using the Dice coefficient. The ability to distinguish between sHCC and non-HCC lesions was evaluated with ROC curves, AUC scores, and P values. Case-level detection performance for sHCC was evaluated with accuracy, sensitivity, and specificity.
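The Dice coefficient used here to assess segmentation overlap is straightforward to compute; a minimal sketch for binary masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)
```

Identical masks score 1.0, disjoint masks score 0.0; small lesions such as sHCC make Dice a stricter overlap measure than pixel accuracy.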
Results: The AUCs for distinguishing sHCC patients from non-HCC patients at the lesion level were 0.967 and 0.864 for the training and test cohorts, respectively, both of which were statistically significant at P < 0.001. At the case level, distinguishing between patients with sHCC and patients with cirrhosis resulted in accuracies of 92.5% (95% CI, 85.1-96.9%) and 81.5% (95% CI, 61.9-93.7%), sensitivities of 95.1% (95% CI, 86.3-99.0%) and 88.2% (95% CI, 63.6-98.5%), and specificities of 87.5% (95% CI, 71.0-96.5%) and 70% (95% CI, 34.8-93.3%) for the training and test sets, respectively.
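The abstract does not state which interval method produced the reported 95% CIs; as one common choice, case-level accuracy, sensitivity, and specificity with Wilson score intervals can be computed as follows (the counts in the test are hypothetical, not the study's):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)

def detection_metrics(tp, fp, tn, fn):
    """Case-level accuracy, sensitivity, specificity, each with a Wilson 95% CI."""
    total = tp + fp + tn + fn
    return {
        "accuracy": ((tp + tn) / total, wilson_ci(tp + tn, total)),
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
    }
```

The Wilson interval stays within [0, 1] even for small test sets, which matters here given the wide test-set CIs (e.g. specificity 70%, 95% CI 34.8-93.3%).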
Conclusion: The DL methodology demonstrated its efficacy in detecting sHCC within a cohort of patients with cirrhosis.
Title: Automated detection of small hepatocellular carcinoma in cirrhotic livers: applying deep learning to Gd-EOB-DTPA-enhanced MRI.
Purpose: To propose a node-by-node matching method between MRI and pathology with 3D node maps based on preoperative MRI for rectal cancer patients to improve the yet unsatisfactory diagnostic performance of nodal status in rectal cancer.
Methods: This methodological study prospectively enrolled consecutive participants with rectal cancer who underwent preoperative MRI and radical surgery from December 2021 to August 2023. All nodes with short-axis diameters of ≥ 3 mm within the mesorectum were regarded as target nodes and were localized in three directions based on the positional relationship on MRI and drawn on a node map with the primary tumor as the main reference, which was used as a template for node-by-node matching with pathological evaluation. Patient and nodal-level analyses were performed to investigate factors affecting the matching accuracy.
Results: 545 participants were included, of whom 253 received direct surgery and 292 received surgery after neoadjuvant therapy (NAT). In participants who underwent direct surgery, 1782 target nodes were identified on MRI, of which 1302 nodes (73%) achieved matching with pathology, with 1018 benign and 284 metastatic. In participants who underwent surgery after NAT, 1277 target nodes were identified and 918 nodes (72%) achieved matching, of which 689 were benign and 229 were metastatic. Advanced disease and proximity to primary tumor resulted in matching difficulties.
Conclusion: An easy-to-use and reliable method of node-by-node matching between MRI and pathology with a 3D node map based on preoperative MRI was constructed for rectal cancer, providing reliable node-based ground-truth labels for further radiological studies.
Title: A method of matching nodes between MRI and pathology with MRI-based 3D node map in rectal cancer. Authors: Qing-Yang Li, Xin-Yue Yan, Zhen Guan, Rui-Jia Sun, Qiao-Yuan Lu, Xiao-Ting Li, Xiao-Yan Zhang, Ying-Shi Sun (Pub Date: 2025-03-08; DOI: 10.1007/s00261-025-04826-x)
Pub Date: 2025-03-08; DOI: 10.1007/s00261-025-04820-3
Andrew W Bowman, Zhuo Li
Title: Correction to: Assessment of diagnostic performance and complication rate in percutaneous lung biopsy based on target nodule size.
Objective: To investigate the value of radiomics features and deep learning features based on positron emission tomography/computed tomography (PET/CT) in predicting perineural invasion (PNI) in rectal cancer.
Methods: We retrospectively collected 120 rectal cancer patients (56 PNI-positive, 64 PNI-negative) with preoperative 18F-FDG PET/CT examinations and randomly divided them into training and validation sets at a 7:3 ratio. We also collected 31 rectal cancer patients from two other hospitals as an independent external validation set. The χ2 test and binary logistic regression were used to analyze PET metabolic parameters. PET/CT images were used to extract radiomics features and deep learning features, and the Mann-Whitney U test and LASSO were employed to select valuable features. Metabolic parameter, radiomics, deep learning, and combined models were constructed, and ROC curves were generated to evaluate model performance.
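The ROC AUC used to evaluate these models is closely related to the Mann-Whitney U statistic also used here: AUC equals U normalized by the number of positive-negative pairs. A NumPy sketch of that rank-based computation:

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney U relation:
    AUC = P(score_pos > score_neg) + 0.5 * P(tie) = U / (n_pos * n_neg)."""
    scores = np.concatenate([scores_pos, scores_neg])
    order = scores.argsort(kind="mergesort")
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    # mid-ranks over tied groups, as in the Mann-Whitney U test
    for v in np.unique(scores):
        tied = scores == v
        ranks[tied] = ranks[tied].mean()
    n_pos, n_neg = len(scores_pos), len(scores_neg)
    u = ranks[:n_pos].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation of PNI-positive from PNI-negative cases.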
Results: The results indicate that metabolic tumor volume (MTV) is correlated with PNI (P = 0.001). The AUC values of the metabolic parameter model were 0.673 (95% CI: 0.572-0.773) in the training set and 0.748 (95% CI: 0.599-0.896) in the validation set. We selected 16 radiomics features and 17 deep learning features as valuable factors for predicting PNI. The AUC values of the radiomics model and the deep learning model were 0.768 (95% CI: 0.667-0.868) and 0.860 (95% CI: 0.780-0.940) in the training set, and 0.803 (95% CI: 0.656-0.950) and 0.854 (95% CI: 0.721-0.987) in the validation set. The combined model exhibited AUCs of 0.893 (95% CI: 0.825-0.961) in the training set and 0.883 (95% CI: 0.775-0.990) in the validation set. In the external validation set, the combined model achieved an AUC of 0.829 (95% CI: 0.674-0.984), outperforming each individual model. Decision curve analysis indicated that using the combined model to guide treatment provided a substantial net benefit.
Conclusions: The combined model, built by integrating PET metabolic parameters, radiomics features, and deep learning features, can accurately predict PNI in rectal cancer.
Title: The value of radiomics and deep learning based on PET/CT in predicting perineural nerve invasion in rectal cancer. Authors: Mengzhang Jiao, Zongjing Ma, Zhaisong Gao, Yu Kong, Shumao Zhang, Guangjie Yang, Zhenguang Wang (Pub Date: 2025-03-07; DOI: 10.1007/s00261-025-04833-y)
Pub Date: 2025-03-06; DOI: 10.1007/s00261-025-04858-3
Yuying Liu, Xueqing Han, Haohui Chen, Qirui Zhang
Background: To explore the predictive value of radiomics features extracted from anatomical ROIs in differentiating International Society of Urological Pathology (ISUP) grades in prostate cancer patients.
Methods: This study included 1,500 prostate cancer patients from a multi-center study. The peripheral zone (PZ) and central gland (CG, transition zone + central zone) of the prostate were segmented using deep learning algorithms and were defined as the regions of interest (ROI) in this study. A total of 12,918 image-based features were extracted from T2-weighted imaging (T2WI), apparent diffusion coefficient (ADC), and diffusion-weighted imaging (DWI) images of these two ROIs. Synthetic minority over-sampling technique (SMOTE) algorithm was used to address the class imbalance problem. Feature selection was performed using Pearson correlation analysis and random forest regression. A prediction model was built using the random forest classification algorithm. Kruskal-Wallis H test, ANOVA, and Chi-Square Test were used for statistical analysis.
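The SMOTE step can be illustrated with a minimal NumPy sketch of its core idea — synthesizing minority-class samples by interpolating between a sample and one of its k nearest minority neighbors. This is a simplification of the full algorithm implemented by libraries such as imbalanced-learn:

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE: generate n_new synthetic minority samples, each a random
    interpolation between a minority sample and one of its k nearest minority
    neighbors (Euclidean distance)."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    k = min(k, len(X_min) - 1)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                        # a sample is not its own neighbor
    neighbors = np.argsort(d, axis=1)[:, :k]           # k nearest neighbors per sample
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))                   # pick a random minority sample
        j = neighbors[i, rng.integers(k)]              # and one of its neighbors
        lam = rng.random()                             # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)
```

Because synthetic samples lie on line segments between real minority samples, they stay within the minority class's feature-space region rather than simply duplicating points.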
Results: A total of 20 ISUP grading-related features were selected, including 10 from the PZ ROI and 10 from the CG ROI. On the test set, the combined PZ + CG radiomics model exhibited better predictive performance, with an AUC of 0.928 (95% CI: 0.872, 0.966), compared to the PZ model alone (AUC: 0.838; 95% CI: 0.722, 0.920) and the CG model alone (AUC: 0.904; 95% CI: 0.851, 0.945).
Conclusion: This study demonstrates that radiomic features extracted from anatomical sub-regions of the prostate can enhance ISUP grade prediction. The combination of PZ + CG provides more comprehensive information with improved accuracy. Further validation of this strategy will strengthen its prospects for improving clinical decision-making.
Title: Enhanced ISUP grade prediction in prostate cancer using multi-center radiomics data.
Pub Date: 2025-03-06; DOI: 10.1007/s00261-025-04860-9
T Thanya, T Jeslin
Computed Tomography (CT) imaging captures detailed cross-sectional images of the pancreas and surrounding structures and provides valuable information for medical professionals. The classification of pancreatic CT images presents significant challenges due to the complexities of pancreatic diseases, especially pancreatic cancer. These challenges include subtle variations in tumor characteristics, irregular tumor shapes, and intricate imaging features that hinder accurate and early diagnosis. Image noise and variations in image quality further complicate the analysis. To address these classification problems, advanced medical imaging techniques, optimization algorithms, and deep learning methodologies are often employed. This paper proposes a robust classification model called DeepOptimalNet, which integrates optimization algorithms and deep learning techniques to handle the variability in imaging characteristics and the subtle variations associated with pancreatic tumors. The model begins with a Gaussian smoothing filter (GSF) for noise reduction and feature enhancement. It introduces the Modified Remora Optimization Algorithm (MROA) to improve the accuracy and efficiency of pancreatic cancer tissue segmentation; the adaptability of modified optimization algorithms to specific challenges such as irregular tumor shapes is emphasized. The paper also utilizes a Deep Transfer CNN with ResNet-50 (DTCNN) for feature extraction, leveraging transfer learning to enhance prediction accuracy; ResNet-50's strong feature extraction capabilities are particularly relevant to diagnostic tasks in CT images. The focus then shifts to a Deep Cascade Convolutional Neural Network with Multimodal Learning (DCCNN-ML) for classifying pancreatic cancer in CT images.
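The Gaussian smoothing filter (GSF) applied for noise reduction can be sketched as a separable 1-D kernel run along rows and then columns; the sigma and radius choices below are illustrative, not the paper's:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius=None):
    """Normalized 1-D Gaussian kernel, truncated at ~3 sigma by default."""
    radius = radius or int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x**2) / (2 * sigma**2))
    return k / k.sum()

def gaussian_smooth_2d(img, sigma=1.0):
    """Separable Gaussian blur: convolve each row, then each column.
    'same' mode keeps the output the size of the input (edges are truncated)."""
    k = gaussian_kernel_1d(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
```

Separability makes the 2-D blur O(n·k) per axis instead of O(n·k²) for a full 2-D kernel, which matters on volumetric CT data; production code would typically use scipy.ndimage.gaussian_filter instead.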
The DeepOptimalNet approach underscores the advantages of deep learning techniques, multimodal learning, and cascade architectures in addressing the complexity and subtle variations inherent in pancreatic cancer imaging, ultimately leading to more accurate and robust classifications. The proposed DeepOptimalNet achieves 99.3% accuracy, 99.1% sensitivity, 99.5% specificity, and 99.3% F-score, surpassing existing models in pancreatic tumor classification. Its MROA-based segmentation improves boundary delineation, while DTCNN with ResNet-50 enhances feature extraction for small and low-contrast tumors. Benchmark validation confirms its superior classification performance, reduced false positives, and improved diagnostic reliability compared to traditional deep learning methods.
Title: DeepOptimalNet: optimized deep learning model for early diagnosis of pancreatic tumor classification in CT imaging.
Pub Date: 2025-03-04; DOI: 10.1007/s00261-025-04847-6
Suzanne Czerniak, Mahan Mathur
Retroperitoneal fibrosis (RPF) is a rare fibroinflammatory disease with idiopathic and secondary causes. Idiopathic disease is more common and is believed to be immune-mediated; associations with autoimmune diseases and/or inflammatory disorders such as IgG4-related disease are often present. Common complications include hydronephrosis and venous stenosis and/or thrombosis. Due to its nonspecific clinical presentation, imaging is vital for diagnosis; in addition, imaging may help distinguish idiopathic from secondary causes and can aid in distinguishing early/active disease from chronic/inactive disease. Magnetic resonance imaging is the preferred modality to stage and monitor the disease, though CT and PET/CT imaging may also be of value. While the imaging findings can overlap with those of other diseases, some characteristic findings favor RPF; however, a biopsy is needed for a definitive diagnosis. The following article discusses the clinical features, imaging appearances across modalities, associated complications, potential diagnostic pitfalls, and treatment approaches for RPF. The role of advanced imaging techniques, such as diffusion-weighted imaging and 18F-FDG PET/MRI, in the evaluation of RPF is also included.
{"title":"Multimodality imaging review of retroperitoneal fibrosis.","authors":"Suzanne Czerniak, Mahan Mathur","doi":"10.1007/s00261-025-04847-6","DOIUrl":"https://doi.org/10.1007/s00261-025-04847-6","url":null,"abstract":"<p><p>Retroperitoneal fibrosis (RPF) is a rare fibroinflammatory disease with idiopathic and secondary causes. Idiopathic disease is more common and is believed to be immune mediated; associations with autoimmune diseases and/or inflammatory disorders such as IgG4 related disease are often present. Common complications include hydronephrosis and venous stenosis and/or thrombosis. Due to its nonspecific clinical presentation, imaging is vital for diagnosis; in addition, imaging may help distinguish idiopathic from secondary causes and can aid in distinguishing early/active disease from chronic/inactive disease. Magnetic resonance imaging is the preferred imaging modality to stage and monitor the disease, though CT and PET/CT imaging may also be of value. While the imaging findings can overlap with other diseases, there are some characteristic findings which can favor RPF. However, a biopsy is needed for a definitive diagnosis.The following article discusses the clinical features, imaging appearances across modalities, associated complications, potential diagnostic pitfalls, and treatment approaches for RPF. 
The role of advanced imaging techniques, such as diffusion-weighted imaging and 18F-FDG PET/MRI, in the evaluation of RPF will also be included.</p>","PeriodicalId":7126,"journal":{"name":"Abdominal Radiology","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143539779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-01DOI: 10.1007/s00261-025-04828-9
Nancy Kim, Linda Kelahan, Laura R Carucci
Esophageal motility disorders can have a major impact on quality of life. Dysphagia is the most commonly reported symptom; however, patients with esophageal dysmotility can also present with other symptoms such as chest pain and tightness, food impaction, regurgitation and heartburn. It is important to be aware of the spectrum of esophageal motility disorders so that a timely and accurate diagnosis can be made. The Chicago Classification uses a hierarchical system that divides motility disorders into disorders of outflow obstruction and disorders of peristalsis. The disorders of esophago-gastric junction (EGJ) outflow include Type I, II and III achalasia and EGJ outflow obstruction. The disorders of peristalsis include absent contractility, distal esophageal spasm, hypercontractile esophagus, and ineffective esophageal motility. Several diagnostic tools, such as endoscopy, barium esophagram, high resolution manometry, and the functional luminal imaging probe, can aid in evaluating esophageal motility disorders. A multidisciplinary approach including a primary care physician, radiologist, gastroenterologist, and surgeon may be beneficial for accurate diagnosis and proper treatment. The purpose of this paper is to discuss the diagnosis and management of esophageal dysmotility disorders other than achalasia.
{"title":"Esophageal motility disorders other than achalasia.","authors":"Nancy Kim, Linda Kelahan, Laura R Carucci","doi":"10.1007/s00261-025-04828-9","DOIUrl":"https://doi.org/10.1007/s00261-025-04828-9","url":null,"abstract":"<p><p>Esophageal motility disorders can have a major impact on quality of life. Dysphagia is the most commonly reported symptom; however, patients with esophageal dysmotility can also present with other symptoms such as chest pain and tightness, food impaction, regurgitation and heartburn. It is important to be aware of the spectrum of esophageal motility disorders so that timely and accurate diagnosis can be made. The Chicago Classification uses a hierarchical classification system that divides motility disorders as disorders of outflow obstruction and disorders of peristalsis. The disorders of esophago-gastric junction (EGJ) outflow include Type I, II and III achalasia and EGJ outflow obstruction. The disorders of peristalsis include absent contractility, distal esophageal spasm, hypercontractile esophagus, and ineffective esophageal motility. There are several diagnostic tools such as endoscopy, barium esophagram, high resolution manometry, and functional luminal imaging probe that can aid in evaluating esophageal motility disorders. A multidisciplinary approach including a primary care physician, radiologist, gastroenterologist, and surgeon may be beneficial for accurate diagnosis and proper treatment. 
The purpose of this paper is to discuss the diagnosis and management of esophageal dysmotility disorders other than achalasia.</p>","PeriodicalId":7126,"journal":{"name":"Abdominal Radiology","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143536338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Nuclear grading of clear cell renal cell carcinoma (ccRCC) plays a crucial role in diagnosing and managing the disease.
Objective: To develop and validate a CT-based delta-radiomics model for preoperative assessment of nuclear grading in clear cell renal cell carcinoma.
Materials and methods: This retrospective analysis included surgical cases of 146 ccRCC patients treated at two medical centers between December 2018 and December 2023, with 117 patients from Hospital and 29 from the *Hospital Affiliated to University of **. Radiomic features were extracted from whole-abdomen CT images, and the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm was used for feature selection. The Multi-Layer Perceptron (MLP) approach was employed to construct five predictive models (RAD_NE, RAD_AP, RAD_VP, RAD_Delta1, RAD_Delta2). The models were evaluated using area under the curve (AUC), accuracy, sensitivity, and specificity, while clinical utility was assessed through Decision Curve Analysis (DCA).
Results: A total of 1,834 radiomic features were extracted from the three phases of the CT images for each model. The models demonstrated strong classification performance, with AUC values ranging from 0.837 to 0.911 in the training set and 0.608 to 0.869 in the test set. The Rad_Delta1 and Rad_Delta2 models demonstrated advantages in predicting ccRCC pathological grading. The AUC of Rad_Delta1 was 0.911 in the training set and 0.771 in the external verification set; the AUC of Rad_Delta2 was 0.881 in the training set and 0.608 in the external verification set. DCA curves confirmed the clinical applicability of these models.
Conclusion: CT-based delta-radiomics shows potential in predicting the pathological grading of clear cell renal cell carcinoma (ccRCC).
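The modeling pipeline the abstract describes (LASSO-based feature selection over ~1,834 radiomic features, followed by an MLP classifier evaluated by AUC) can be sketched roughly as follows. This is an illustrative reconstruction on synthetic stand-in data, not the authors' code: the regularization strength, hidden-layer size, and train/test split are assumed values, and the random matrix merely stands in for the (non-public) radiomic features.

```python
# Hypothetical sketch of a LASSO -> MLP radiomics pipeline, on synthetic data.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients, n_features = 146, 1834           # cohort/feature counts from the abstract
X = rng.normal(size=(n_patients, n_features))  # stand-in radiomic feature matrix
y = rng.integers(0, 2, size=n_patients)        # stand-in low-/high-grade labels

X = StandardScaler().fit_transform(X)          # radiomic features are usually standardized
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# LASSO as a feature selector: keep features with non-zero coefficients.
lasso = Lasso(alpha=0.05).fit(X_tr, y_tr)
selected = np.flatnonzero(lasso.coef_)
if selected.size == 0:                         # guard for the synthetic data
    selected = np.arange(10)

# MLP classifier on the selected subset, scored by AUC as in the abstract.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr[:, selected], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, selected])[:, 1])
print(f"selected {selected.size} features; test AUC on synthetic data: {auc:.3f}")
```

On random stand-in features the AUC is meaningless; the point is only the shape of the workflow, in which each of the five models (per-phase and delta feature sets) would be fit and scored the same way.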
{"title":"Computed tomography-based delta-radiomics analysis for preoperative prediction of ISUP pathological nuclear grading in clear cell renal cell carcinoma.","authors":"Xiaohui Liu, Xiaowei Han, Guozheng Zhang, Xisong Zhu, Wen Zhang, Xu Wang, Chenghao Wu","doi":"10.1007/s00261-025-04857-4","DOIUrl":"https://doi.org/10.1007/s00261-025-04857-4","url":null,"abstract":"<p><strong>Background: </strong>Nuclear grading of clear cell renal cell carcinoma (ccRCC) plays a crucial role in diagnosing and managing the disease.</p><p><strong>Objective: </strong>To develop and validate a CT-based Delta-Radiomics model for preoperative assessment of nuclear grading in renal clear cell carcinoma.</p><p><strong>Materials and methods: </strong>This retrospective analysis included surgical cases of 146 ccRCC patients from two medical centers from December 2018 to December 2023, with 117 patients from Hospital and 29 from the *Hospital Affiliated to University of **. Radiomic features were extracted from whole-abdomen CT images, and the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm was used for feature selection. The Multi-Layer Perceptron (MLP) approach was employed to construct five predictive models (RAD_NE, RAD_AP, RAD_VP, RAD_Delta1, RAD_Delta2). The models were evaluated using area under the curve (AUC), accuracy, sensitivity, and specificity, while clinical utility was assessed through Decision Curve Analysis (DCA).</p><p><strong>Results: </strong>A total of 1,834 radiomic features were extracted from the three phases of the CT images for each model. The models demonstrated strong classification performance, with AUC values ranging from 0.837 to 0.911 in the training set and 0.608 to 0.869 in the test set. 
The Rad_Delta1 and Rad_Delta2 models demonstrated advantages in predicting ccRCC pathological grading. The AUC of Rad_Delta1 was 0.911 in the training set and 0.771 in the external verification set; the AUC of Rad_Delta2 was 0.881 in the training set and 0.608 in the external verification set. DCA curves confirmed the clinical applicability of these models.</p><p><strong>Conclusion: </strong>CT-based delta-radiomics shows potential in predicting the pathological grading of clear cell renal cell carcinoma (ccRCC).</p>","PeriodicalId":7126,"journal":{"name":"Abdominal Radiology","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143536337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}