Radiomics meets transformers: A novel approach to tumor segmentation and classification in mammography for breast cancer.
Pub Date: 2025-11-01 | Epub Date: 2025-07-29 | DOI: 10.1177/08953996251351624 | pp. 1039-1058
Mohamed J Saadh, Qusay Mohammed Hussain, Rafid Jihad Albadr, Hardik Doshi, M M Rekha, Mayank Kundlas, Amrita Pal, Jasur Rizaev, Waam Mohammed Taher, Mariem Alwan, Mahmod Jasem Jawad, Ali M Ali Al-Nuaimi, Bagher Farhood
Objective: This study aimed to develop a robust framework for breast cancer diagnosis by integrating advanced segmentation and classification approaches. Transformer-based and U-Net segmentation models were combined with radiomic feature extraction and machine learning classifiers to improve segmentation precision and classification accuracy in mammographic images.
Materials and Methods: A multi-center dataset of 8000 mammograms (4200 normal, 3800 abnormal) was used. Segmentation was performed using Transformer-based and U-Net models, evaluated with the Dice Coefficient (DSC), Intersection over Union (IoU), Hausdorff Distance (HD95), and Pixel-Wise Accuracy. Radiomic features were extracted from the segmented masks, with Recursive Feature Elimination (RFE) and Analysis of Variance (ANOVA) used to select significant features. Classifiers including Logistic Regression, XGBoost, CatBoost, and a Stacking Ensemble model were applied to classify tumors as benign or malignant. Classification performance was assessed using accuracy, sensitivity, F1 score, and AUC-ROC. SHAP analysis validated feature importance, and Q-value heatmaps evaluated statistical significance.
Results: The Transformer-based model achieved superior segmentation results, with DSC of 0.94 ± 0.01 (training) and 0.92 ± 0.02 (test), IoU of 0.91 ± 0.01 (training) and 0.89 ± 0.02 (test), HD95 of 3.0 ± 0.3 mm (training) and 3.3 ± 0.4 mm (test), and Pixel-Wise Accuracy of 0.96 ± 0.01 (training) and 0.94 ± 0.02 (test), consistently outperforming U-Net across all metrics. For classification, Transformer-segmented features with the Stacking Ensemble achieved the highest test results: 93% accuracy, 92% sensitivity, 93% F1 score, and 95% AUC. U-Net-segmented features yielded lower metrics, with a best test accuracy of 84%. SHAP analysis confirmed the importance of features such as Gray-Level Non-Uniformity and Zone Entropy.
Conclusion: This study demonstrates the superiority of Transformer-based segmentation integrated with radiomic feature selection and robust classification models. The framework provides a precise and interpretable solution for breast cancer diagnosis, with potential for scaling to 3D imaging and multimodal datasets.
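As a rough illustration of the classification stage described above, here is a minimal, hypothetical sketch of RFE-based feature selection feeding a stacking ensemble of the named classifiers. The random stand-in data, the number of retained features, and all hyperparameters are assumptions; this is not the authors' code, and it presumes scikit-learn, xgboost, and catboost are installed.

```python
# Hypothetical sketch of the abstract's classification stage; data are random
# stand-ins and every hyperparameter is an assumption.
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from catboost import CatBoostClassifier

X = np.random.rand(400, 60)               # radiomic features per lesion (toy)
y = np.random.randint(0, 2, 400)          # 0 = benign, 1 = malignant (toy)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Recursive Feature Elimination ranks features with a linear model and keeps
# the strongest subset before classification.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=20)
X_tr_sel = rfe.fit_transform(X_tr, y_tr)
X_te_sel = rfe.transform(X_te)

# Stacking ensemble over the classifier families named in the abstract, with
# a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[
        ("xgb", XGBClassifier(n_estimators=200)),
        ("cat", CatBoostClassifier(iterations=200, verbose=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr_sel, y_tr)
print("test AUC:", roc_auc_score(y_te, stack.predict_proba(X_te_sel)[:, 1]))
```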
{"title":"Radiomics meets transformers: A novel approach to tumor segmentation and classification in mammography for breast cancer.","authors":"Mohamed J Saadh, Qusay Mohammed Hussain, Rafid Jihad Albadr, Hardik Doshi, M M Rekha, Mayank Kundlas, Amrita Pal, Jasur Rizaev, Waam Mohammed Taher, Mariem Alwan, Mahmod Jasem Jawad, Ali M Ali Al-Nuaimi, Bagher Farhood","doi":"10.1177/08953996251351624","DOIUrl":"10.1177/08953996251351624","url":null,"abstract":"<p><p>ObjectiveThis study aimed to develop a robust framework for breast cancer diagnosis by integrating advanced segmentation and classification approaches. Transformer-based and U-Net segmentation models were combined with radiomic feature extraction and machine learning classifiers to improve segmentation precision and classification accuracy in mammographic images.Materials and MethodsA multi-center dataset of 8000 mammograms (4200 normal, 3800 abnormal) was used. Segmentation was performed using Transformer-based and U-Net models, evaluated through Dice Coefficient (DSC), Intersection over Union (IoU), Hausdorff Distance (HD95), and Pixel-Wise Accuracy. Radiomic features were extracted from segmented masks, with Recursive Feature Elimination (RFE) and Analysis of Variance (ANOVA) employed to select significant features. Classifiers including Logistic Regression, XGBoost, CatBoost, and a Stacking Ensemble model were applied to classify tumors into benign or malignant. Classification performance was assessed using accuracy, sensitivity, F1 score, and AUC-ROC. SHAP analysis validated feature importance, and Q-value heatmaps evaluated statistical significance.ResultsThe Transformer-based model achieved superior segmentation results with DSC (0.94 ± 0.01 training, 0.92 ± 0.02 test), IoU (0.91 ± 0.01 training, 0.89 ± 0.02 test), HD95 (3.0 ± 0.3 mm training, 3.3 ± 0.4 mm test), and Pixel-Wise Accuracy (0.96 ± 0.01 training, 0.94 ± 0.02 test), consistently outperforming U-Net across all metrics. For classification, Transformer-segmented features with the Stacking Ensemble achieved the highest test results: 93% accuracy, 92% sensitivity, 93% F1 score, and 95% AUC. U-Net-segmented features achieved lower metrics, with the best test accuracy at 84%. SHAP analysis confirmed the importance of features like Gray-Level Non-Uniformity and Zone Entropy.ConclusionThis study demonstrates the superiority of Transformer-based segmentation integrated with radiomic feature selection and robust classification models. The framework provides a precise and interpretable solution for breast cancer diagnosis, with potential for scalability to 3D imaging and multimodal datasets.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"1039-1058"},"PeriodicalIF":1.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144734878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PET/CT radiomics for non-invasive prediction of immunotherapy efficacy in cervical cancer.
Pub Date: 2025-11-01 | Epub Date: 2025-08-28 | DOI: 10.1177/08953996251367203 | pp. 1081-1092
Tianming Du, Chen Li, Marcin Grzegozek, Xinyu Huang, Md Rahaman, Xinghao Wang, Hongzan Sun
Purpose: The prediction of immunotherapy efficacy in cervical cancer patients remains a critical clinical challenge. This study aims to develop and validate a deep learning-based automatic tumor segmentation method on PET/CT images, extract texture features from the tumor regions of cervical cancer patients, investigate their correlation with PD-L1 expression, and construct a predictive model for immunotherapy efficacy.
Methods: We retrospectively collected data from 283 pathologically confirmed cervical cancer patients who underwent 18F-FDG PET/CT examinations, divided into three subsets. Subset-I (n = 97) was used to develop a deep learning-based segmentation model using Attention-UNet and region-growing methods on co-registered PET/CT images. Subset-II (n = 101) was used to explore correlations between radiomic features and PD-L1 expression. Subset-III (n = 85) was used to construct and validate a radiomic model for predicting immunotherapy response.
Results: The segmentation model developed on Subset-I achieved optimal performance at the 94th epoch, with an IoU of 0.746 on the validation set, and manual evaluation confirmed accurate tumor localization. Sixteen features demonstrated excellent reproducibility (ICC > 0.75). In Subset-II, 183 features showed significant correlations with PD-L1 expression (P < 0.05). Using these features in Subset-III, the SVM-based radiomic model achieved the best predictive performance, with an AUC of 0.935.
Conclusion: We validated, in Subset-I, Subset-II, and Subset-III respectively, that deep learning models incorporating medical prior knowledge can accurately and automatically segment cervical cancer lesions, that texture features extracted from 18F-FDG PET/CT are significantly associated with PD-L1 expression, and that predictive models based on these features can effectively predict the efficacy of PD-L1 immunotherapy. This approach offers a non-invasive, efficient, and cost-effective tool for guiding individualized immunotherapy in cervical cancer patients and may help reduce patient burden and accelerate treatment planning.
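As a small illustration of the final modeling step, the sketch below trains an SVM radiomic classifier and reports a cross-validated AUC. The feature matrix is a random stand-in sized like Subset-III, and the z-scoring pipeline and five-fold scheme are assumptions rather than the paper's protocol.

```python
# Hypothetical SVM radiomic classifier evaluated by AUC; features and labels
# are random stand-ins, and the preprocessing is an assumption.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(85, 183)        # Subset-III size x PD-L1-correlated features (toy)
y = np.random.randint(0, 2, 85)    # 1 = responder, 0 = non-responder (toy)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC: %.3f ± %.3f" % (auc.mean(), auc.std()))
```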
{"title":"PET/CT radiomics for non-invasive prediction of immunotherapy efficacy in cervical cancer.","authors":"Tianming Du, Chen Li, Marcin Grzegozek, Xinyu Huang, Md Rahaman, Xinghao Wang, Hongzan Sun","doi":"10.1177/08953996251367203","DOIUrl":"10.1177/08953996251367203","url":null,"abstract":"<p><p>PurposeThe prediction of immunotherapy efficacy in cervical cancer patients remains a critical clinical challenge. This study aims to develop and validate a deep learning-based automatic tumor segmentation method on PET/CT images, extract texture features from the tumor regions in cervical cancer patients, and investigate their correlation with PD-L1 expression. Furthermore, a predictive model for immunotherapy efficacy will be constructed.MethodsWe retrospectively collected data from 283 pathologically confirmed cervical cancer patients who underwent <sup>18</sup>F-FDG PET/CT examinations, divided into three subsets. Subset-I (n = 97) was used to develop a deep learning-based segmentation model using Attention-UNet and region-growing methods on co-registered PET/CT images. Subset-II (n = 101) was used to explore correlations between radiomic features and PD-L1 expression. Subset-III (n = 85) was used to construct and validate a radiomic model for predicting immunotherapy response.ResultsUsing Subset-I, a segmentation model was developed. The segmentation model achieved optimal performance at the 94th epoch with an IoU of 0.746 in the validation set. Manual evaluation confirmed accurate tumor localization. Sixteen features demonstrated excellent reproducibility (ICC > 0.75). Using Subset-II, PD-L1-correlated features were extracted and identified. In Subset-II, 183 features showed significant correlations with PD-L1 expression (P < 0.05).Using these features in Subset-III, a predictive model for immunotherapy efficacy was constructed and evaluated. In Subset-III, the SVM-based radiomic model achieved the best predictive performance with an AUC of 0.935.ConclusionWe validated, respectively in Subset-I, Subset-II, and Subset-III, that deep learning models incorporating medical prior knowledge can accurately and automatically segment cervical cancer lesions, that texture features extracted from <sup>18</sup>F-FDG PET/CT are significantly associated with PD-L1 expression, and that predictive models based on these features can effectively predict the efficacy of PD-L1 immunotherapy. This approach offers a non-invasive, efficient, and cost-effective tool for guiding individualized immunotherapy in cervical cancer patients and may help reduce patient burden, accelerate treatment planning.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"1081-1092"},"PeriodicalIF":1.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144976716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Corrigendum to "Promptable segmentation of CT lung lesions based on improved U-Net and Segment Anything model (SAM)".
Pub Date: 2025-11-01 | Epub Date: 2025-07-20 | DOI: 10.1177/08953996251358389 | p. 1128
{"title":"Corrigendum to \"Promptable segmentation of CT lung lesions based on improved U-Net and Segment Anything model (SAM)\".","authors":"","doi":"10.1177/08953996251358389","DOIUrl":"10.1177/08953996251358389","url":null,"abstract":"","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"1128"},"PeriodicalIF":1.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144676350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Promptable segmentation of CT lung lesions based on improved U-Net and Segment Anything model (SAM).
Pub Date: 2025-11-01 | Epub Date: 2025-05-11 | DOI: 10.1177/08953996251333364 | pp. 1015-1026
Wensong Yan, Yunhua Xu, Shiju Yan
Background: Computed tomography (CT) is widely used in the clinical diagnosis of lung diseases. Automatic segmentation of lesions in CT images aids the development of intelligent lung disease diagnosis.
Objective: This study aims to address imprecise segmentation in CT images caused by the blurred detailed features of lesions, which can easily be confused with surrounding tissues.
Methods: We proposed a promptable segmentation method based on an improved U-Net and the Segment Anything model (SAM) to improve segmentation accuracy for lung lesions in CT images. The improved U-Net incorporates a multi-scale attention module based on the Efficient Channel Attention (ECA) mechanism to improve recognition of detailed features at lesion edges, and a promptable clipping module that incorporates physicians' prior knowledge to reduce background interference. SAM has a strong ability to recognize lesions as well as pulmonary atelectasis and organs, and we combine the two to improve overall segmentation performance.
Results: On the LUNA16 dataset and a lung CT dataset provided by the Shanghai Chest Hospital, the proposed method achieves Dice coefficients of 80.12% and 92.06%, and Positive Predictive Values of 81.25% and 91.91%, respectively, outperforming most existing mainstream segmentation methods.
Conclusion: The proposed method can improve segmentation accuracy for lung lesions in CT images, raise the automation level of existing computer-aided diagnostic systems, and provide more effective assistance to radiologists in clinical practice.
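For readers unfamiliar with the attention block named above, here is a minimal PyTorch sketch of ECA: global average pooling produces a channel descriptor, a 1-D convolution mixes neighboring channels, and a sigmoid gate re-weights the feature map. The fixed kernel size is an assumption (the original ECA formulation derives it adaptively from the channel count), and this is not the paper's implementation.

```python
# Minimal ECA (Efficient Channel Attention) block; kernel size is assumed.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> per-channel descriptor, then a 1-D conv across C
        w = self.pool(x).squeeze(-1).transpose(1, 2)               # (B, 1, C)
        w = self.gate(self.conv(w)).transpose(1, 2).unsqueeze(-1)  # (B, C, 1, 1)
        return x * w  # re-weight channels

feat = torch.randn(2, 64, 32, 32)
print(ECA()(feat).shape)  # torch.Size([2, 64, 32, 32])
```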
{"title":"Promptable segmentation of CT lung lesions based on improved U-Net and Segment Anything model (SAM).","authors":"Wensong Yan, Yunhua Xu, Shiju Yan","doi":"10.1177/08953996251333364","DOIUrl":"10.1177/08953996251333364","url":null,"abstract":"<p><p>BackgroundComputed tomography (CT) is widely used in clinical diagnosis of lung diseases. The automatic segmentation of lesions in CT images aids in the development of intelligent lung disease diagnosis.ObjectiveThis study aims to address the issue of imprecise segmentation in CT images due to the blurred detailed features of lesions, which can easily be confused with surrounding tissues.MethodsWe proposed a promptable segmentation method based on an improved U-Net and Segment Anything model (SAM) to improve segmentation accuracy of lung lesions in CT images. The improved U-Net incorporates a multi-scale attention module based on a channel attention mechanism ECA (Efficient Channel Attention) to improve recognition of detailed feature information at edge of lesions; and a promptable clipping module to incorporate physicians' prior knowledge into the model to reduce background interference. Segment Anything model (SAM) has a strong ability to recognize lesions and pulmonary atelectasis or organs. We combine the two to improve overall segmentation performances.ResultsOn the LUAN16 dataset and a lung CT dataset provided by the Shanghai Chest Hospital, the proposed method achieves Dice coefficients of 80.12% and 92.06%, and Positive Predictive Values of 81.25% and 91.91%, which are superior to most existing mainstream segmentation methods.ConclusionThe proposed method can be used to improve segmentation accuracy of lung lesions in CT images, enhance automation level of existing computer-aided diagnostic systems, and provide more effective assistance to radiologists in clinical practice.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"1015-1026"},"PeriodicalIF":1.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144054951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-Assessment of acute rib fracture detection system from chest X-ray: Preliminary study for early radiological diagnosis.
Pub Date: 2025-11-01 | Epub Date: 2025-07-28 | DOI: 10.1177/08953996251361041 | pp. 1027-1038
Hong Kyu Lee, Hyoung Soo Kim, Sung Gyun Kim, Jae Yong Park
Objective: Detecting and accurately diagnosing rib fractures in chest radiographs is a challenging and time-consuming task for radiologists. This study presents a novel deep learning system designed to automate the detection and segmentation of rib fractures in chest radiographs.
Methods: The proposed method combines CenterNet with HRNet v2 for precise fracture-region identification, and HRNet-W48 with contextual representation to enhance rib segmentation. A dataset of 1006 chest radiographs from a tertiary hospital in Korea was used, split 7:2:1 for training, validation, and testing.
Results: The rib fracture detection component achieved a sensitivity of 0.7171, indicating its effectiveness in identifying fractures. Rib segmentation reached a Dice score of 0.86, demonstrating accurate delineation of rib structures. Visual assessment further highlights the model's ability to pinpoint fractures and segment ribs accurately.
Conclusion: This approach holds promise for improving rib fracture detection and rib segmentation, offering potential benefits in clinical practice for more efficient and accurate diagnosis in medical image analysis.
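A brief sketch of the two headline metrics, pixel-wise Dice for the rib masks and sensitivity (recall) for fracture detection; the masks and counts below are toy stand-ins, not the study's data.

```python
# Toy illustration of the reported metrics; arrays and counts are stand-ins.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ G| / (|P| + |G|) on binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true fractures the detector finds: TP / (TP + FN)."""
    return tp / (tp + fn)

pred = np.random.rand(256, 256) > 0.5   # predicted rib mask (toy)
gt = np.random.rand(256, 256) > 0.5     # ground-truth rib mask (toy)
print("Dice:", round(dice(pred, gt), 4))
print("Sensitivity:", round(sensitivity(72, 28), 4))  # 72 of 100 fractures found
```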
{"title":"Self-Assessment of acute rib fracture detection system from chest X-ray: Preliminary study for early radiological diagnosis.","authors":"Hong Kyu Lee, Hyoung Soo Kim, Sung Gyun Kim, Jae Yong Park","doi":"10.1177/08953996251361041","DOIUrl":"10.1177/08953996251361041","url":null,"abstract":"<p><p>ObjectiveDetecting and accurately diagnosing rib fractures in chest radiographs is a challenging and time-consuming task for radiologists. This study presents a novel deep learning system designed to automate the detection and segmentation of rib fractures in chest radiographs.MethodsThe proposed method combines CenterNet with HRNet v2 for precise fracture region identification and HRNet-W48 with contextual representation to enhance rib segmentation. A dataset consisting of 1006 chest radiographs from a tertiary hospital in Korea was used, with a split of 7:2:1 for training, validation, and testing.ResultsThe rib fracture detection component achieved a sensitivity of 0.7171, indicating its effectiveness in identifying fractures. Additionally, the rib segmentation performance was measured by a dice score of 0.86, demonstrating its accuracy in delineating rib structures. Visual assessment results further highlight the model's capability to pinpoint fractures and segment ribs accurately.ConclusionThis innovative approach holds promise for improving rib fracture detection and rib segmentation, offering potential benefits in clinical practice for more efficient and accurate diagnosis in the field of medical image analysis.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"1027-1038"},"PeriodicalIF":1.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144734879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anatomy-aware transformer-based model for precise rectal cancer detection and localization in MRI scans.
Pub Date: 2025-11-01 | Epub Date: 2025-08-25 | DOI: 10.1177/08953996251370580 | pp. 1059-1070
Shanshan Li, Yu Zhang, Yao Hong, Wei Yuan, Jihong Sun
Rectal cancer is a major cause of cancer-related mortality, requiring accurate diagnosis via MRI scans. However, detecting rectal cancer in MRI scans is challenging due to image complexity and the need for precise localization. While transformer-based object detection has excelled in natural images, applying these models to medical data is hindered by limited medical imaging resources. To address this, we propose the Spatially Prioritized Detection Transformer (SP DETR), which incorporates a Spatially Prioritized (SP) Decoder to constrain anchor boxes to regions of interest (ROI) based on anatomical maps, focusing the model on areas most likely to contain cancer. Additionally, the SP cross-attention mechanism refines the learning of anchor box offsets. To improve small cancer detection, we introduce the Global Context-Guided Feature Fusion Module (GCGFF), leveraging a transformer encoder for global context and a Globally-Guided Semantic Fusion Block (GGSF) to enhance high-level semantic features. Experimental results show that our model significantly improves detection accuracy, especially for small rectal cancers, demonstrating the effectiveness of integrating anatomical priors with transformer-based models for clinical applications.
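The core "spatially prioritized" idea, discarding candidate boxes whose centers fall outside an anatomical region of interest before decoding, can be shown in a few lines. The function name, the normalized (cx, cy, w, h) box format, and the toy prior below are assumptions for illustration, not the SP DETR implementation.

```python
# Toy sketch: keep only anchors whose centers lie inside a binary anatomy map.
import torch

def filter_anchors_by_roi(anchors: torch.Tensor, roi: torch.Tensor) -> torch.Tensor:
    """anchors: (N, 4) as normalized (cx, cy, w, h); roi: (H, W) binary map."""
    H, W = roi.shape
    cx = (anchors[:, 0] * (W - 1)).long().clamp(0, W - 1)
    cy = (anchors[:, 1] * (H - 1)).long().clamp(0, H - 1)
    keep = roi[cy, cx].bool()        # anchor center falls inside the ROI
    return anchors[keep]

anchors = torch.rand(300, 4)         # random candidate boxes (toy)
roi = torch.zeros(64, 64)
roi[20:45, 15:50] = 1                # toy anatomical prior for the rectal region
print(filter_anchors_by_roi(anchors, roi).shape)
```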
{"title":"Anatomy-aware transformer-based model for precise rectal cancer detection and localization in MRI scans.","authors":"Shanshan Li, Yu Zhang, Yao Hong, Wei Yuan, Jihong Sun","doi":"10.1177/08953996251370580","DOIUrl":"10.1177/08953996251370580","url":null,"abstract":"<p><p>Rectal cancer is a major cause of cancer-related mortality, requiring accurate diagnosis via MRI scans. However, detecting rectal cancer in MRI scans is challenging due to image complexity and the need for precise localization. While transformer-based object detection has excelled in natural images, applying these models to medical data is hindered by limited medical imaging resources. To address this, we propose the Spatially Prioritized Detection Transformer (SP DETR), which incorporates a Spatially Prioritized (SP) Decoder to constrain anchor boxes to regions of interest (ROI) based on anatomical maps, focusing the model on areas most likely to contain cancer. Additionally, the SP cross-attention mechanism refines the learning of anchor box offsets. To improve small cancer detection, we introduce the Global Context-Guided Feature Fusion Module (GCGFF), leveraging a transformer encoder for global context and a Globally-Guided Semantic Fusion Block (GGSF) to enhance high-level semantic features. Experimental results show that our model significantly improves detection accuracy, especially for small rectal cancers, demonstrating the effectiveness of integrating anatomical priors with transformer-based models for clinical applications.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"1059-1070"},"PeriodicalIF":1.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144976725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel scatter correction method for energy-resolving photon-counting detector based CBCT imaging.
Pub Date: 2025-11-01 | DOI: 10.1177/08953996251351618 | pp. 1093-1103
Xin Zhang, Heran Wang, Yuhang Tan, Jiongtao Zhu, Hairong Zheng, Dong Liang, Yongshuai Ge
Background: To generate high-quality CT images with an energy-resolving photon-counting detector (PCD) based cone beam CT (CBCT) system, it is essential to mitigate scatter shading artifacts.
Objective: The aim of this study is to explore the capability of an energy-modulated scatter correction method, named e-Grid, to remove scatter shading artifacts in energy-resolving PCD CBCT imaging.
Methods: The e-Grid method assumes a linear approximation between the high-energy primary/scatter signals and the low-energy primary/scatter signals acquired from the two energy windows of a PCD. Calibration experiments were conducted to determine the parameters of this signal model. Physical validation experiments with head and abdominal phantoms were performed on a PCD CBCT imaging benchtop system.
Results: The e-Grid method significantly eliminated scatter cupping artifacts in both low-energy and high-energy PCD CBCT imaging for objects of varying dimensions. Quantitatively, it reduced scatter artifacts by more than 70% in both low-energy and high-energy PCD CBCT images.
Conclusions: This study demonstrates that the e-Grid scatter correction method has great potential for reducing scatter shading artifacts in energy-resolving PCD CBCT imaging.
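Reading the stated linear approximation literally suggests a 2 × 2 per-pixel system: if calibration yields P_H ≈ a·P_L and S_H ≈ b·S_L, then the two window measurements M_L = P_L + S_L and M_H = a·P_L + b·S_L can be inverted for the primary signals. The sketch below is only that reading, with invented constants; the paper's calibration determines the actual model parameters.

```python
# Hypothetical per-pixel inversion implied by a linear two-window model;
# the constants a, b and all arrays are invented for illustration.
import numpy as np

def e_grid_primary(m_low: np.ndarray, m_high: np.ndarray,
                   a: float, b: float) -> tuple[np.ndarray, np.ndarray]:
    """m_low = P_L + S_L, m_high = a*P_L + b*S_L  ->  solve for P_L, P_H."""
    p_low = (m_high - b * m_low) / (a - b)  # scatter-corrected low-energy primary
    p_high = a * p_low                       # corresponding high-energy primary
    return p_low, p_high

a, b = 0.8, 0.3                              # assumed calibration constants
P = np.full((4, 4), 100.0)                   # toy primary signal
S = np.full((4, 4), 40.0)                    # toy scatter signal
p_low, p_high = e_grid_primary(P + S, a * P + b * S, a, b)
print(np.allclose(p_low, P), np.allclose(p_high, a * P))  # True True
```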
{"title":"A novel scatter correction method for energy-resolving photon-counting detector based CBCT imaging.","authors":"Xin Zhang, Heran Wang, Yuhang Tan, Jiongtao Zhu, Hairong Zheng, Dong Liang, Yongshuai Ge","doi":"10.1177/08953996251351618","DOIUrl":"10.1177/08953996251351618","url":null,"abstract":"<p><p>BackgroundTo generate high-quality CT images for an energy-resolving photon-counting detector (PCD) based cone beam CT (CBCT) system, it is essential to mitigate the scatter shading artifacts.ObjectiveThe aim of this study is to explore the capability of an energy-modulated scatter correction method, named e-Grid, in removing the scatter shading artifacts in energy-resolving PCD CBCT imaging.MethodsIn the e-Grid method, a linear approximation is assumed between the high-energy primary/scatter signals and the low-energy primary/scatter signals acquired from the two energy windows of a PCD. Calibration experiments were conducted to determine the parameters used in the aforementioned signal model. Physical validation experiments with head and abdominal phantoms were performed on a PCD CBCT imaging benchtop system.ResultsIt was found that the e-Grid method could significantly eliminate scatter cupping artifacts in both low-energy and high-energy PCD CBCT imaging for objects with varying dimensions. Quantitatively, results demonstrated that the e-Grid method reduced scatter artifacts by more than 70% in both low-energy and high-energy PCD CBCT images.ConclusionsIn this study, it is demonstrated that the e-Grid scatter correction method has great potential for reducing scatter shading artifacts in energy-resolving PCD CBCT imaging.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"1093-1103"},"PeriodicalIF":1.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145151327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generalization of parallel ghost imaging based on laboratory X-ray source.
Pub Date: 2025-11-01 | Epub Date: 2025-08-25 | DOI: 10.1177/08953996251367214 | pp. 1071-1080
Nixi Zhao, Junxiong Fang, Jie Tang, Changzhe Zhao, Jianwen Wu, Han Guo, Haipeng Zhang, Tiqiao Xiao
Ghost imaging is a technique that reconstructs an image by measuring the intensity correlation between a reference arm and an object arm. In parallel ghost imaging, each pixel of a position-sensitive detector is treated as its own bucket detector, so a single measurement acquires hundreds or thousands of ghost imaging subsystems in parallel, enabling high-resolution imaging with extremely few measurements. Relying on synchrotron radiation, we previously achieved X-ray parallel ghost imaging with high pixel resolution, low dose, and an ultra-large field of view. However, this dependence on synchrotron radiation sets an extremely high threshold for disseminating and applying the technology. In this work, we moved away from the synchrotron radiation facility and completed pipeline-style acquisition for parallel ghost imaging using simple, inexpensive equipment, in a form that others can readily reproduce. Using a laboratory X-ray source, we achieved ghost imaging with an effective pixel size of 8.03 μm, an image size of 2880 × 2280, and as few as 10 measurements (a sampling rate of 0.62%). The setup requires only minor modifications to any industrial CT device, and with a total experimental cost of only $40, this work demonstrates great universality. We put forward a comprehensive framework for the practical application of parallel ghost imaging, an essential prerequisite for parallel ghost imaging to enter the commercial and practical arenas.
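The intensity-correlation reconstruction at the heart of ghost imaging, G = ⟨I·B⟩ − ⟨I⟩⟨B⟩, fits in a few lines of numpy; the speckle patterns and object below are toys. In the parallel scheme, each pixel of the position-sensitive detector would run this same correlation as its own bucket.

```python
# Toy correlation-based ghost imaging reconstruction; patterns and object
# are stand-ins for the real speckle/measurement chain.
import numpy as np

rng = np.random.default_rng(0)
H, W, M = 32, 32, 2000                       # image size, number of patterns
obj = np.zeros((H, W))
obj[8:24, 12:20] = 1.0                       # toy transmission object

patterns = rng.random((M, H, W))             # reference speckle realizations
buckets = (patterns * obj).sum(axis=(1, 2))  # bucket = total transmitted light

# Ensemble correlation G = <I*B> - <I><B>, averaged over the M realizations.
G = (patterns * buckets[:, None, None]).mean(0) - patterns.mean(0) * buckets.mean()
print("object-region mean:", G[obj > 0].mean())
print("background mean:   ", G[obj == 0].mean())
```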
{"title":"Generalization of parallel ghost imaging based on laboratory X-ray source.","authors":"Nixi Zhao, Junxiong Fang, Jie Tang, Changzhe Zhao, Jianwen Wu, Han Guo, Haipeng Zhang, Tiqiao Xiao","doi":"10.1177/08953996251367214","DOIUrl":"10.1177/08953996251367214","url":null,"abstract":"<p><p>Ghost imaging is an imaging technique that achieves image reconstruction by measuring the intensity correlation function between the reference arm and the object arm. In parallel ghost imaging, each pixel of a position-sensitive detector is further regarded as a bucket detector, enabling the parallel acquisition of hundreds or thousands of ghost imaging subsystems in a single measurement, thus realizing high-resolution imaging with extremely low measurement counts. Relying on synchrotron radiation, we have achieved X-ray parallel ghost imaging with high pixel resolution, low dose, and ultra-large field of view. However, the dependence of X-ray parallel ghost imaging on synchrotron radiation has set extremely high thresholds for the dissemination and application of this technology. In this work, we broke away from synchrotron radiation facility and completed the pipeline-style acquisition of parallel ghost imaging using rough and inexpensive equipment in the most reproducible way for others. Eventually, we achieved ghost imaging with an effective pixel size of 8.03 μm, an image size of 2880 × 2280, and a minimum of 10 measurement numbers (a sampling rate of 0.62%) using a laboratory X-ray light source. It can be achieved merely by making minor modifications to any industrial CT device. With a total experimental cost of only $40, this work demonstrates great universality. We have put forward a comprehensive framework for the practical application of parallel ghost imaging, which is an essential prerequisite for the generalization of parallel ghost imaging to enter the commercial and practical arenas.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"1071-1080"},"PeriodicalIF":1.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144976657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adversarial consistency-based semi-supervised pneumonia segmentation using dual multiscale feature selection and fusion mean teacher model and triple-attention dynamic convolution in chest CTs.
Pub Date: 2025-11-01 | DOI: 10.1177/08953996251367210 | pp. 1104-1127
Yu Gu, Jianning Zang, Lidong Yang, Baohua Zhang, Jing Wang, Xiaoqi Lu, Jianjun Li, Xin Liu, Ying Zhao, Dahua Yu, Siyuan Tang, Qun He
Recently, semi-supervised learning has demonstrated significant potential in medical image segmentation. However, most methods fail to establish connections among diverse sample data, and segmentation networks with fixed parameters can impede model training and even amplify the risk of overfitting. To address these challenges, this paper proposes an adversarial consistency-based semi-supervised segmentation method built on a dual multiscale mean teacher model. First, a discriminator network with adaptive feature selection is designed and trained alternately with the segmentation network, enhancing the segmentation network's ability to transfer knowledge from the limited labeled data to the unlabeled data. The discriminator evaluates the quality of the segmentation results for both labeled and unlabeled data while guiding the network to learn consistent segmentation behavior throughout training. Second, we design a Triple-Attention Dynamic Convolution (TADC) module, which allows the convolution kernel parameters to be adjusted flexibly according to the input data; this improves the feature representation capability of the network and helps reduce the risk of overfitting. Finally, we propose a novel Feature Selection and Fusion Module (FSFM) within the segmentation network, which dynamically selects and integrates important features to enhance the saliency of key information, improving overall performance. On the MosMedData dataset, the segmentation network outperforms the baseline model, achieving improvements of 3.83%, 3.97%, and 3.14% in Dice, Jaccard, and NSD scores, respectively, for the segmentation of pneumonia lesions. Extensive experiments on the MosMedData and COVID-19-P20 datasets show that the proposed method outperforms state-of-the-art segmentation networks and demonstrates superior potential for segmenting pneumonia lesions.
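The mean-teacher mechanics this method builds on, an exponential-moving-average (EMA) teacher plus a consistency loss between student and teacher predictions on unlabeled images, can be sketched briefly. The toy convolutional model and the EMA rate below are assumptions, not the paper's dual multiscale architecture.

```python
# Toy mean-teacher core: EMA weight update plus a consistency loss on
# unlabeled data; the model and alpha are stand-ins.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Conv2d(1, 2, 3, padding=1)
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)          # the teacher is never trained directly

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, alpha: float = 0.99):
    # teacher <- alpha * teacher + (1 - alpha) * student, parameter-wise
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1 - alpha)

x_unlabeled = torch.randn(4, 1, 32, 32)
consistency = F.mse_loss(
    torch.softmax(student(x_unlabeled), dim=1),
    torch.softmax(teacher(x_unlabeled), dim=1),
)
consistency.backward()               # gradients flow into the student only
ema_update(teacher, student)
print("consistency loss:", float(consistency))
```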
{"title":"Adversarial consistency-based semi-supervised pneumonia segmentation using dual multiscale feature selection and fusion mean teacher model and triple-attention dynamic convolution in chest CTs.","authors":"Yu Gu, Jianning Zang, Lidong Yang, Baohua Zhang, Jing Wang, Xiaoqi Lu, Jianjun Li, Xin Liu, Ying Zhao, Dahua Yu, Siyuan Tang, Qun He","doi":"10.1177/08953996251367210","DOIUrl":"10.1177/08953996251367210","url":null,"abstract":"<p><p>Recently, semi-supervised learning has demonstrated significant potential in the field of medical image segmentation. However, the majority of the methods fail to establish connections among diverse sample data. Moreover, segmentation networks that utilize fixed parameters can impede model training and even amplify the risk of overfitting. To address these challenges, this paper proposes an adversarial consistency-based semi-supervised segmentation method, leveraging a dual multiscale mean teacher model. First, by designing a discriminator network with adaptive feature selection and training it alternately with the segmentation network, the method enhances the segmentation network's ability to transfer knowledge from the limited labeled data to the unlabeled data. The discriminator evaluates the quality of the segmentation network's results for both labeled and unlabeled data, while simultaneously guiding the network to learn consistency in segmentation performance throughout the training process. Second, we design a Triple-attention dynamic convolutional (TADC) module, which allows the convolution kernel parameters to be adjusted flexibly according to different input data. This improves the feature representation capability of the network model and helps reduce the risk of overfitting. Finally, we propose a novel feature selection and fusion module (FSFM) within the segmentation network, which dynamically selects and integrates important features to enhance the saliency of key information, improving the overall performance of the model. The proposed adversarial consistency-based semi-supervised segmentation method is applied to the MosMedData dataset. The results demonstrate that the segmentation network outperforms the baseline model, achieving improvements of 3.83%, 3.97%, 3.14% in terms of Dice, Jaccard, and NSD scores, respectively, for the segmentation of pneumonia lesions. The proposed segmentation method outperforms state-of-the-art segmentation networks and demonstrates superior potential for segmenting pneumonia lesions, as evidenced by extensive experiments conducted on the MosMedData and COVID-19-P20 datasets.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"1104-1127"},"PeriodicalIF":1.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145070997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Retraction: Investigations on coronary artery plaque detection and subclassification using machine learning classifier.
Pub Date: 2025-10-21 | DOI: 10.1177/08953996251386435
{"title":"Retraction: Investigations on coronary artery plaque detection and subclassification using machine learning classifier.","authors":"","doi":"10.1177/08953996251386435","DOIUrl":"10.1177/08953996251386435","url":null,"abstract":"","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996251386435"},"PeriodicalIF":1.4,"publicationDate":"2025-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145349603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}