MRI-based Machine Learning Radiomics Can Predict CSF1R Expression Level and Prognosis in High-grade Gliomas
Yuling Lai, Yiyang Wu, Xiangyuan Chen, Wenchao Gu, Guoxia Zhou, Meilin Weng
Pub Date: 2024-01-24 | DOI: 10.1007/s10278-023-00905-x

The purpose of this study was to predict CSF1R mRNA expression in high-grade glioma (HGG) non-invasively using magnetic resonance imaging (MRI) radiomics and to evaluate the correlation between the established radiomics model and prognosis. We investigated the predictive value of CSF1R using The Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA) databases. Support vector machine (SVM) and logistic regression (LR) algorithms were each used to create a radiomics score (Rad_score). The effectiveness and performance of the radiomics model were assessed in the training (n = 89) and tenfold cross-validation sets. We further analyzed the correlation between the Rad_score and macrophage-related genes using Spearman correlation analysis. A radiomics nomogram combining clinical factors and the Rad_score was constructed to validate the radiomic signature for individualized survival estimation and risk stratification. The results showed that CSF1R expression was markedly elevated in HGG tissue and was associated with worse prognosis. CSF1R expression was also closely related to the abundance of infiltrating immune cells, such as macrophages. We identified nine features for establishing the radiomics model. The model predicting CSF1R achieved high AUCs in the training set (0.768 for SVM and 0.792 for LR) and in tenfold cross-validation (0.706 for SVM and 0.717 for LR). The Rad_score was strongly associated with tumor-related macrophage genes. A radiomics nomogram combining the Rad_score and clinical factors was constructed and showed satisfactory performance. The MRI-based Rad_score is a novel way to predict CSF1R expression and prognosis in HGG patients, and the radiomics nomogram could optimize individualized survival estimation for them.
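As a rough sketch of how such a Rad_score could be built (not the authors' code), the Python below trains SVM and LR on already-extracted radiomics features, takes the out-of-fold predicted probability from tenfold cross-validation as the Rad_score, and correlates it with a gene-expression vector via Spearman's rho. The feature matrix, labels, and gene vector are synthetic stand-ins.

```python
# Hedged sketch: Rad_score from radiomics features via SVM and LR,
# with tenfold cross-validation and a Spearman check against gene expression.
# The 89x9 feature matrix, labels, and gene vector are synthetic placeholders.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(89, 9))        # 89 patients x 9 selected radiomics features
y = rng.integers(0, 2, size=89)     # CSF1R-high vs CSF1R-low label
gene_expr = rng.normal(size=89)     # hypothetical macrophage-gene expression

for name, clf in [("SVM", SVC(probability=True)), ("LR", LogisticRegression())]:
    model = make_pipeline(StandardScaler(), clf)
    # Out-of-fold predicted probability serves as the per-patient Rad_score.
    rad_score = cross_val_predict(model, X, y, cv=10, method="predict_proba")[:, 1]
    rho, p = spearmanr(rad_score, gene_expr)
    print(f"{name}: AUC={roc_auc_score(y, rad_score):.3f}, rho={rho:.3f} (p={p:.3f})")
```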
Predicting Risk Stratification in Early-Stage Endometrial Carcinoma: Significance of Multiparametric MRI Radiomics Model
Huan Meng, Yu-Feng Sun, Yu Zhang, Ya-Nan Yu, Jing Wang, Jia-Ning Wang, Lin-Yan Xue, Xiao-Ping Yin
Pub Date: 2024-01-18 | DOI: 10.1007/s10278-023-00936-4

Endometrial carcinoma (EC) risk stratification prior to surgery is crucial for clinical treatment. In this study, we evaluated the predictive value of radiomics models based on magnetic resonance imaging (MRI) for risk stratification and staging of early-stage EC. The study included 155 patients who underwent MRI prior to surgery and were pathologically diagnosed with early-stage EC between January 2020 and September 2022. Three-dimensional radiomics features were extracted from segmented tumor images on three MRI sequences (T2WI, delayed-phase CE-T1WI, and ADC), with 1521 features extracted from each modality. The features were then filtered using Pearson's correlation coefficient, and prediction models for risk stratification and staging were developed with a multilayer perceptron algorithm under five-fold cross-validation. The performance of each model was assessed by analyzing ROC curves and calculating the AUC, accuracy, sensitivity, and specificity. For risk stratification, the CE-T1 sequence demonstrated the highest predictive accuracy of the three sequences, 0.858 ± 0.025, with an AUC of 0.878 ± 0.042. Combining all three sequences enhanced predictive accuracy to 0.881 ± 0.040, with an AUC of 0.862 ± 0.069. For staging, combining T2WI with CE-T1WI yielded a notably higher predictive accuracy, 0.956 ± 0.020, than any single sequence, with a corresponding AUC of 0.979 ± 0.022. Incorporating all three sequences gave a predictive accuracy of 0.956 ± 0.000 and an AUC of 0.986 ± 0.007. Notably, this accuracy surpassed that of the radiologist, which stood at 0.832. The MRI radiomics model has the potential to accurately predict the risk stratification and early staging of EC.
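The feature-filtering step lends itself to a short illustration. Below is a minimal sketch (an assumed workflow, not the authors' implementation) that greedily drops one feature from each highly Pearson-correlated pair and then scores a multilayer perceptron with five-fold cross-validation; the data are synthetic, and the feature count is cut from 1521 to 300 so the toy example runs quickly.

```python
# Hedged sketch: Pearson-based feature filtering plus an MLP classifier with
# five-fold cross-validation. Data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(155, 300))     # 155 patients x 300 radiomics features
y = rng.integers(0, 2, size=155)    # low- vs high-risk label

def drop_correlated(X, threshold=0.9):
    """Greedily drop one feature from every pair with |Pearson r| > threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return X[:, keep], keep

X_sel, kept = drop_correlated(X)
clf = make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0))
auc = cross_val_score(clf, X_sel, y, cv=5, scoring="roc_auc")
print(f"kept {len(kept)} features; AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```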
Automatic 3D Segmentation and Identification of Anomalous Aortic Origin of the Coronary Arteries Combining Multi-view 2D Convolutional Neural Networks
Ariel Fernando Pascaner, Antonio Rosato, Alice Fantazzini, Elena Vincenzi, Curzio Basso, Francesco Secchi, Mauro Lo Rito, Michele Conti
Pub Date: 2024-01-17 | DOI: 10.1007/s10278-023-00950-6
This work aimed to automatically segment and classify coronary arteries with either a normal origin or an anomalous aortic origin of the coronary arteries (AAOCA) using convolutional neural networks (CNNs), seeking to support and speed up clinical diagnosis. We implemented three single-view 2D Attention U-Nets with 3D view integration and trained them to automatically segment the aortic root and coronary arteries of 124 computed tomography angiographies (CTAs) with normal coronaries or AAOCA. Furthermore, we automatically classified the segmented geometries as normal or AAOCA using a decision tree model. For CTAs in the test set (n = 13), we obtained median Dice similarity coefficients of 0.95 and 0.84 for the aortic root and the coronary arteries, respectively. Moreover, the classification between normal and AAOCA showed excellent performance, with accuracy, precision, and recall all equal to 1 in the test set. We developed a deep learning-based method to automatically segment and classify normal coronaries and AAOCA. Our results represent a step towards automatic, CTA-based screening and risk profiling of patients with AAOCA.
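A sketch of the multi-view idea follows: a 2D model is applied slice-wise along each of the three axes, and the resulting probability volumes are averaged (one plausible form of "3D view integration"; the authors' exact fusion rule is not reproduced here). The "models" below are dummy intensity thresholds so the snippet runs standalone.

```python
# Hedged sketch of multi-view 2D segmentation with 3D integration: apply a
# 2D model slice-wise along each axis, average the three probability volumes,
# and threshold the result.
import numpy as np

def segment_volume(volume, model_2d, axis):
    """Apply a 2D slice-wise model along one axis and rebuild a 3D volume."""
    slices = np.moveaxis(volume, axis, 0)
    probs = np.stack([model_2d(s) for s in slices])   # (n_slices, H, W)
    return np.moveaxis(probs, 0, axis)

def fuse_views(volume, models):
    """3D view integration: average the per-axis probability maps."""
    fused = np.mean(
        [segment_volume(volume, m, ax) for ax, m in enumerate(models)], axis=0
    )
    return (fused > 0.5).astype(np.uint8)

def dummy_model(slice_2d):
    """Stand-in for a trained 2D Attention U-Net: threshold on intensity."""
    return (slice_2d > slice_2d.mean()).astype(float)

ct = np.random.default_rng(0).normal(size=(64, 64, 64))
mask = fuse_views(ct, [dummy_model] * 3)
print(mask.shape, int(mask.sum()))
```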
Detecting Avascular Necrosis of the Lunate from Radiographs Using a Deep-Learning Model
Krista Wernér, Turkka Anttila, Sina Hulkkonen, Timo Viljakka, Ville Haapamäki, Jorma Ryhänen
Pub Date: 2024-01-16 | DOI: 10.1007/s10278-023-00964-0
Deep-learning (DL) algorithms have the potential to change medical image classification and diagnostics in the coming decade. Delayed diagnosis and treatment of avascular necrosis (AVN) of the lunate may have a detrimental effect on patient hand function. The aim of this study was to use a segmentation-based DL model to diagnose AVN of the lunate from wrist postero-anterior radiographs. A total of 319 radiographs of diseased lunates and 1228 control radiographs, for which MRI confirmed the absence of disease, were gathered from the Helsinki University Central Hospital database. Of these, 10% were set aside as a test set for model validation. In cases of AVN of the lunate, a hand surgeon at Helsinki University Hospital confirmed the diagnosis using either MRI or radiography. For detection of AVN, the model had a sensitivity of 93.33% (95% confidence interval (CI) 77.93–99.18%), a specificity of 93.28% (95% CI 87.18–97.05%), and an accuracy of 93.28% (95% CI 87.99–96.73%). The area under the receiver operating characteristic curve (AUC) was 0.94 (95% CI 0.88–0.99). Compared with three clinical experts, the DL model had a higher AUC than one expert, and only one expert had higher accuracy than the model; results were otherwise similar between the model and the experts. Our DL model performed well and may become a beneficial screening tool for AVN of the lunate.
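The reported metrics are straightforward to reproduce from a confusion matrix. The sketch below computes sensitivity, specificity, and accuracy with exact (Clopper-Pearson) 95% CIs, the interval style typically behind such figures, using made-up counts rather than the study's data.

```python
# Hedged sketch: sensitivity, specificity, and accuracy with exact
# (Clopper-Pearson) 95% CIs from a binary confusion matrix.
# The counts below are hypothetical, not the study's.
from scipy.stats import binomtest

def rate_with_ci(successes, total):
    ci = binomtest(successes, total).proportion_ci(confidence_level=0.95)
    return successes / total, ci.low, ci.high

tp, fn, tn, fp = 28, 2, 125, 9       # hypothetical test-set counts
for name, k, n in [("sensitivity", tp, tp + fn),
                   ("specificity", tn, tn + fp),
                   ("accuracy", tp + tn, tp + fn + tn + fp)]:
    est, lo, hi = rate_with_ci(k, n)
    print(f"{name}: {est:.2%} (95% CI {lo:.2%}–{hi:.2%})")
```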
Development and Validation of a 3D Resnet Model for Prediction of Lymph Node Metastasis in Head and Neck Cancer Patients
Yi-Hui Lin, Chieh-Ting Lin, Ya-Han Chang, Yen-Yu Lin, Jen-Jee Chen, Chun-Rong Huang, Yu-Wei Hsu, Weir-Chiang You
Pub Date: 2024-01-16 | DOI: 10.1007/s10278-023-00938-2

Accurate diagnosis and staging of lymph node metastasis (LNM) are crucial for determining the optimal treatment strategy for head and neck cancer patients. We aimed to develop a 3D ResNet model and investigate its predictive value in detecting LNM. This study enrolled 156 head and neck cancer patients and analyzed 342 lymph nodes segmented according to surgical pathology reports. The patients' clinical and pathological data related to the primary tumor site and clinical and pathological T and N stages were collected. To predict LNM, we developed a dual-pathway 3D ResNet model incorporating two ResNets of different depths to extract features from the input data. To assess the model's performance, we compared its predictions with those of radiologists in a test dataset comprising 38 patients. The study found that the dimensions and volume of LNM+ nodes were significantly larger than those of LNM- nodes. Specifically, the Y and Z dimensions showed the highest sensitivity (84.6%) and specificity (72.2%), respectively, for predicting LNM+. Among the variants of the proposed model, dual 3D ResNets with a depth of 34 achieved the highest AUC, 0.9294. In a validation test on a dataset of 38 patients and 86 lymph nodes, the model achieved a sensitivity of 80.8% (versus 50.0% for physical examination and 91.7% for radiologists), a specificity of 90.0% (versus 88.5% and 65.4%), and a positive predictive value of 77.8% (versus 66.7% and 55.0%) in detecting individual LNM+ nodes. These results suggest that the 3D ResNet model can be valuable for accurately identifying LNM+ in head and neck cancer patients. A prospective trial is needed to further evaluate the model's role in determining LNM+ in head and neck cancer patients and its impact on treatment strategies and patient outcomes.
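As an illustration of the dual-pathway idea (layer sizes invented; the paper's exact architecture is not reproduced), the PyTorch sketch below runs two 3D convolutional branches of different depths over the same volume and concatenates their pooled features for binary LNM classification.

```python
# Hedged sketch: two 3D conv branches of different depths share one input
# volume; their pooled features are concatenated for LNM+ vs LNM- logits.
import torch
import torch.nn as nn

def conv_branch(channels):
    """A small 3D conv stack; depth differs between the two pathways."""
    layers, in_ch = [], 1
    for out_ch in channels:
        layers += [nn.Conv3d(in_ch, out_ch, 3, padding=1),
                   nn.BatchNorm3d(out_ch), nn.ReLU(), nn.MaxPool3d(2)]
        in_ch = out_ch
    layers.append(nn.AdaptiveAvgPool3d(1))
    return nn.Sequential(*layers)

class DualPathway3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.shallow = conv_branch([16, 32])            # shorter pathway
        self.deep = conv_branch([16, 32, 64, 128])      # deeper pathway
        self.head = nn.Linear(32 + 128, 2)              # LNM+ vs LNM-

    def forward(self, x):
        a = self.shallow(x).flatten(1)
        b = self.deep(x).flatten(1)
        return self.head(torch.cat([a, b], dim=1))

logits = DualPathway3D()(torch.randn(4, 1, 32, 32, 32))  # batch of 4 node crops
print(logits.shape)  # torch.Size([4, 2])
```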
Lightweight Attentive Graph Neural Network with Conditional Random Field for Diagnosis of Anterior Cruciate Ligament Tear
Jiaoju Wang, Jiewen Luo, Jiehui Liang, Yangbo Cao, Jing Feng, Lingjie Tan, Zhengcheng Wang, Jingming Li, Alphonse Houssou Hounye, Muzhou Hou, Jinshen He
Pub Date: 2024-01-16 | DOI: 10.1007/s10278-023-00944-4

Anterior cruciate ligament (ACL) tears are prevalent orthopedic sports injuries and are difficult to classify precisely. Previous works have demonstrated the ability of deep learning (DL) to support clinicians in ACL tear classification, but DL typically requires a large quantity of labeled samples and incurs high computational expense. This study aims to overcome the challenges posed by small and imbalanced data and to achieve fast, accurate ACL tear classification based on magnetic resonance imaging (MRI) of the knee. We propose a lightweight attentive graph neural network (GNN) with a conditional random field (CRF), named the ACGNN, to classify ACL ruptures in knee MR images. A metric-based meta-learning strategy is introduced to conduct independent testing through multiple node classification tasks. We design a lightweight feature embedding network using feature-based knowledge distillation to extract features from the given images. GNN layers are then used to model the dependencies between samples and complete the classification, and the CRF is incorporated into each GNN layer to refine the affinities. To mitigate oversmoothing and overfitting, we apply self-boosting attention, node attention, and memory attention for graph initialization, node updating, and correlation across graph layers, respectively. Experiments demonstrated that our model performed excellently on both oblique coronal and sagittal data, with accuracies of 92.94% and 91.92%, respectively. Notably, our proposed method exhibited performance comparable to that of orthopedic surgeons during an internal clinical validation. This work shows the potential of our method to advance ACL diagnosis and to facilitate the development of computer-aided diagnosis methods for clinical practice.
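The core sample-graph step can be sketched compactly: each exam's embedding becomes a graph node, edge weights come from pairwise similarity, and node features are updated by neighborhood aggregation. The snippet below is a bare-bones stand-in that omits the paper's attention modules, CRF refinement, and meta-learning strategy.

```python
# Hedged sketch of a sample graph: embeddings as nodes, similarity-derived
# edge weights, and one plain aggregation layer for per-node classification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x):
        # Edge weights: row-wise softmax over negative pairwise distances.
        adj = F.softmax(-torch.cdist(x, x), dim=-1)     # (N, N)
        return F.relu(self.lin(adj @ x))                # aggregate, transform

embeddings = torch.randn(10, 64)        # 10 knee exams, 64-d embeddings
x = GraphLayer(64)(embeddings)
logits = nn.Linear(64, 2)(x)            # ACL tear vs intact, per node
print(logits.shape)  # torch.Size([10, 2])
```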
The Segmentation of Multiple Types of Uterine Lesions in Magnetic Resonance Images Using a Sequential Deep Learning Method with Image-Level Annotations
Yu-meng Cui, Hua-li Wang, Rui Cao, Hong Bai, Dan Sun, Jiu-xiang Feng, Xue-feng Lu
Pub Date: 2024-01-16 | DOI: 10.1007/s10278-023-00931-9
Fully supervised medical image segmentation methods use pixel-level labels to achieve good results, but obtaining such large-scale, high-quality labels is cumbersome and time-consuming. This study aimed to develop a weakly supervised model that uses only image-level labels to achieve automatic segmentation of four types of uterine lesions and three types of normal tissue on magnetic resonance images. Patients' MRI data were retrospectively collected from our institutional database; T2-weighted sequence images were selected, and only image-level annotations were made. The proposed two-stage model comprises four sequential parts: a pixel correlation module, a class re-activation map module, an inter-pixel relation network module, and a DeepLab v3+ module. The Dice similarity coefficient (DSC), Hausdorff distance (HD), and average symmetric surface distance (ASSD) were employed to evaluate the performance of the model. The original dataset consisted of 85,730 images from 316 patients with four different types of lesions (endometrial cancer, uterine leiomyoma, endometrial polyps, and atypical endometrial hyperplasia). A total of 196, 57, and 63 patients were randomly selected for model training, validation, and testing, respectively. After being trained from scratch, the proposed model showed good segmentation performance, with an average DSC of 83.5%, HD of 29.3 mm, and ASSD of 8.83 mm. Among weakly supervised methods that use only image-level labels, the proposed model performs on par with the state of the art.
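For reference, two of the three reported metrics are easy to compute from binary masks. The sketch below implements the Dice similarity coefficient and a symmetric Hausdorff distance over nonzero-pixel coordinates (ASSD omitted), on toy rectangles rather than MRI data.

```python
# Hedged sketch: Dice similarity coefficient and symmetric Hausdorff distance
# between two binary masks, using a crude nonzero-pixel "surface".
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff(pred, gt):
    p = np.argwhere(pred)
    g = np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

gt = np.zeros((128, 128), bool); gt[40:90, 40:90] = True
pred = np.zeros((128, 128), bool); pred[45:95, 38:92] = True
print(f"DSC={dice(pred, gt):.3f}, HD={hausdorff(pred, gt):.1f} px")
```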
Haishuang Sun, Min Liu, Anqi Liu, Mei Deng, Xiaoyan Yang, Han Kang, Ling Zhao, Yanhong Ren, Bingbing Xie, Rongguo Zhang, Huaping Dai
Pub Date: 2024-01-16 | DOI: 10.1007/s10278-023-00909-7
Accurate detection of fibrotic interstitial lung disease (f-ILD) is conducive to early intervention. Our aim was to develop a lung graph-based machine learning model to identify f-ILD. A total of 417 HRCTs from 279 patients with confirmed ILD (156 f-ILD and 123 non-f-ILD) were included in this study. An HRCT-based lung graph machine learning model was developed to aid clinicians in diagnosing f-ILD. In this approach, local radiomics features were extracted from an automatically generated geometric atlas of the lung and used to build a series of specific lung graph models. Encoding these lung graphs yielded a lung descriptor that characterizes the global distribution of radiomics features for f-ILD diagnosis. The Weighted Ensemble model showed the best predictive performance in cross-validation. The classification accuracy of the model was significantly higher than that of the three radiologists at both the CT-sequence level and the patient level. At the patient level, the diagnostic accuracy of the model versus radiologists A, B, and C was 0.986 (95% CI 0.959 to 1.000), 0.918 (95% CI 0.849 to 0.973), 0.822 (95% CI 0.726 to 0.904), and 0.904 (95% CI 0.836 to 0.973), respectively. The difference in AUC values between the model and the three physicians was statistically significant (p < 0.05). The lung graph-based machine learning model could identify f-ILD, and its diagnostic performance exceeded that of radiologists, which could help clinicians assess ILD objectively.
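One way to picture the "local radiomics on a geometric atlas" step: divide the volume into a fixed grid of cells and compute simple first-order statistics per cell as node features for the lung graph. The sketch below does exactly that on a random volume; a real pipeline would use a lung mask and a much richer radiomics feature set.

```python
# Hedged sketch: a fixed geometric grid as a stand-in atlas, with simple
# first-order features per cell stacked as graph node features.
import numpy as np

def atlas_node_features(volume, grid=(4, 4, 4)):
    """Return one feature vector (mean, std, p90) per geometric atlas cell."""
    feats = []
    for block in np.split(volume, grid[0], axis=0):
        for strip in np.split(block, grid[1], axis=1):
            for cell in np.split(strip, grid[2], axis=2):
                feats.append([cell.mean(), cell.std(), np.percentile(cell, 90)])
    return np.array(feats)                    # (n_cells, 3) node-feature matrix

ct = np.random.default_rng(0).normal(size=(64, 64, 64))
nodes = atlas_node_features(ct)
print(nodes.shape)  # (64, 3): 4*4*4 atlas cells, 3 features each
```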