Pub Date: 2025-03-01. Epub Date: 2025-01-27. DOI: 10.1177/08953996241306691
ShiBo Jiang, Shuo Xu, YueWen Sun, ZhiFang Wu
Industrial Computed Tomography (CT) technology is increasingly applied in fields such as additive manufacturing and non-destructive testing, providing rich three-dimensional information that is crucial for internal structure inspection, defect detection, and product development. In subsequent processes such as analysis, simulation, and editing, three-dimensional volume data models often need to be converted into mesh models, making effective meshing of volume data essential for expanding the application scenarios and scope of industrial CT. However, the existing Marching Cubes algorithm suffers from low efficiency and poor mesh quality when meshing volume data. To overcome these limitations, this study proposes an innovative method for industrial CT volume data meshing based on an iterative Smooth Signed Distance (iSSD) surface reconstruction algorithm. The method first refines the segmented voxel model, accurately extracts boundary voxels, and constructs a high-quality point cloud model. The point cloud normals are randomly initialized and then iteratively updated, with the mesh reconstructed by the SSD algorithm after each update, ultimately yielding a high-quality, watertight, and smooth mesh and ensuring the accuracy and reliability of the reconstruction. Qualitative and quantitative comparisons with other methods further highlight the excellent performance of the proposed method. This study not only improves the efficiency and quality of volume data meshing but also provides a solid foundation for subsequent three-dimensional analysis, simulation, and editing, and has important industrial application prospects and academic value.
{"title":"Research on meshing method for industrial CT volume data based on iterative smooth signed distance surface reconstruction.","authors":"ShiBo Jiang, Shuo Xu, YueWen Sun, ZhiFang Wu","doi":"10.1177/08953996241306691","DOIUrl":"10.1177/08953996241306691","url":null,"abstract":"<p><p>Industrial Computed Tomography (CT) technology is increasingly applied in fields such as additive manufacturing and non-destructive testing, providing rich three-dimensional information for various fields, which is crucial for internal structure detection, defect detection, and product development. In subsequent processes such as analysis, simulation, and editing, three-dimensional volume data models often need to be converted into mesh models, making effective meshing of volume data essential for expanding the application scenarios and scope of industrial CT. However, the existing Marching Cubes algorithm has issues with low efficiency and poor mesh quality during the volume data meshing process. To overcome these limitations, this study proposes an innovative method for industrial CT volume data meshing based on the Iterative Smooth Signed Surface Distance (iSSD) algorithm. This method first refines the segmented voxel model, accurately extracts boundary voxels, and constructs a high-quality point cloud model. By randomly initializing the normals of the point cloud and iteratively updating the point cloud normals, the mesh is reconstructed using the SSD algorithm after each iteration update, ultimately achieving high-quality, watertight, and smooth mesh model reconstruction, ensuring the accuracy and reliability of the reconstructed mesh. Qualitative and quantitative analyses with other methods have further highlighted the excellent performance of the method proposed in this paper. This study not only improves the efficiency and quality of volume data meshing but also provides a solid foundation for subsequent three-dimensional analysis, simulation, and editing, and has important industrial application prospects and academic value.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"340-349"},"PeriodicalIF":1.4,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Objective: Early diagnosis of pancreatic cystic neoplasm (PCN) is crucial for patient survival. This study proposes M-YOLO, a novel model combining Mamba architecture and YOLO, to enhance the detection of pancreatic cystic tumors. The model addresses the technical challenge posed by the tumors' complex morphological features in medical images.
Methods: This study develops an innovative deep learning network architecture, M-YOLO (Mamba YOLOv10), which combines the advantages of Mamba and YOLOv10 to improve the accuracy and efficiency of PCN detection. The Mamba architecture, with its superior sequence modeling capabilities, is well suited to processing the rich contextual information contained in medical images, while YOLOv10's fast object detection ensures the system's viability in clinical practice.
Results: On the dataset provided by Changhai Hospital, M-YOLO achieves a sensitivity of 0.98, a specificity of 0.92, a precision of 0.96, an F1 score of 0.97, an accuracy of 0.93, and a mean average precision (mAP) of 0.96 at a 50% intersection-over-union (IoU) threshold.
Conclusions: M-YOLO (Mamba YOLOv10) enhances PCN identification performance by integrating the deep feature extraction capability of Mamba with the fast localization of YOLOv10.
{"title":"A deep learning detection method for pancreatic cystic neoplasm based on Mamba architecture.","authors":"Junlong Dai, Cong He, Liang Jin, Chengwei Chen, Jie Wu, Yun Bian","doi":"10.1177/08953996251313719","DOIUrl":"10.1177/08953996251313719","url":null,"abstract":"<p><strong>Objective: </strong>Early diagnosis of pancreatic cystic neoplasm (PCN) is crucial for patient survival. This study proposes M-YOLO, a novel model combining Mamba architecture and YOLO, to enhance the detection of pancreatic cystic tumors. The model addresses the technical challenge posed by the tumors' complex morphological features in medical images.</p><p><strong>Methods: </strong>This study develops an innovative deep learning network architecture, M-YOLO (Mamba YOLOv10), which combines the advantages of Mamba and YOLOv10 and aims to improve the accuracy and efficiency of pancreatic cystic neoplasm(PCN) detection. The Mamba architecture, with its superior sequence modeling capabilities, is ideally suited for processing the rich contextual information contained in medical images. At the same time, YOLOv10's fast object detection feature ensures the system's viability for application in clinical practice.</p><p><strong>Results: </strong>M-YOLO has a high sensitivity of 0.98, a specificity of 0.92, a precision of 0.96, an F1 value of 0.97, an accuracy of 0.93, as well as a mean average precision (mAP) of 0.96 at 50% intersection-to-union (IoU) threshold on the dataset provided by Changhai Hospital.</p><p><strong>Conclusions: </strong>M-YOLO(Mamba YOLOv10) enhances the identification performance of PCN by integrating the deep feature extraction capability of Mamba and the fast localization technique of YOLOv10.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"461-471"},"PeriodicalIF":1.4,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-01. Epub Date: 2025-01-27. DOI: 10.1177/08953996241300013
Gong Changcheng, Song Qiang
Computed tomography (CT) reconstruction from incomplete projection data is significant for reducing radiation dose or scanning time. In this work, we investigate a special sampling strategy that performs two limited-angle scans, which we call orthogonal limited-angle sampling. The X-ray source trajectory covers two limited-angle ranges whose angle bisectors are orthogonal. This sampling method avoids the rapid switching of tube voltage required in few-view sampling and reduces the data correlation of projections in limited-angle sampling, so it has the potential to become a practical imaging strategy. We then propose a new reconstruction model based on anisotropic self-guided image filtering (ASGIF) and present an algorithm to solve it. Adaptive weights are constructed from the gradient information of the reconstructed image itself to guide the reconstruction. Additionally, since the shading artifacts are related to the scanning angular ranges and distributed in two orthogonal directions, anisotropic image filtering is used to preserve image edges. Experiments on a digital phantom and real CT data demonstrate that the ASGIF method can effectively suppress shading artifacts and preserve image edges, outperforming other competing methods.
{"title":"Orthogonal limited-angle CT reconstruction method based on anisotropic self-guided image filtering.","authors":"Gong Changcheng, Song Qiang","doi":"10.1177/08953996241300013","DOIUrl":"10.1177/08953996241300013","url":null,"abstract":"<p><p>Computed tomography (CT) reconstruction from incomplete projection data is significant for reducing radiation dose or scanning time. In this work, we investigate a special sampling strategy, which performs two limited-angle scans. We call it orthogonal limited-angle sampling. The X-ray source trajectory covers two limited-angle ranges, and the angle bisectors of the two angular ranges are orthogonal. This sampling method avoids rapid switching of tube voltage in few-view sampling, and reduces data correlation of projections in limited-angle sampling. It has the potential to become a practical imaging strategy. Then we propose a new reconstruction model based on anisotropic self-guided image filtering (ASGIF) and present an algorithm to solve this model. We construct adaptive weights to guide image reconstruction using the gradient information of reconstructed image itself. Additionally, since the shading artifacts are related to the scanning angular ranges and distributed in two orthogonal directions, anisotropic image filtering is used to preserve image edges. Experiments on a digital phantom and real CT data demonstrate that ASGIF method can effectively suppress shading artifacts and preserve image edges, outperforming other competing methods.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"325-339"},"PeriodicalIF":1.4,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Numerous deep learning methods for low-dose computed tomography (LDCT) image denoising have been proposed, achieving impressive results. However, issues such as loss of structure and edge information and low denoising efficiency still exist.
Objective: To improve image denoising quality, an enhanced multi-dimensional hybrid attention LDCT image denoising network based on edge detection is proposed in this paper.
Methods: In our network, we employ a trainable Sobel convolution to design an edge enhancement module and fuse an enhanced triplet attention network (ETAN) after each 3 × 3 convolutional layer to extract richer features more comprehensively and suppress useless information. During training, we adopt a strategy that combines total variation loss (TVLoss) with mean squared error (MSE) loss to reduce high-frequency artifacts in image reconstruction and to balance denoising against detail preservation.
Results: Compared with other advanced algorithms (CT-former, REDCNN, and EDCNN), our proposed model achieves the best PSNR and SSIM values on abdominal CT images: 34.8211 and 0.9131, respectively.
Conclusion: Comparative experiments with related algorithms show that the proposed algorithm achieves significant improvements in both subjective visual quality and objective metrics.
{"title":"A cross-type multi-dimensional network based on feature enhancement and triple interactive attention for LDCT denoising.","authors":"Lina Jia, Beibei Jia, Zongyang Li, Yizhuo Zhang, Zhiguo Gui","doi":"10.1177/08953996241306696","DOIUrl":"10.1177/08953996241306696","url":null,"abstract":"<p><p>BackgroundNumerous deep leaning methods for low-dose computed technology (CT) image denoising have been proposed, achieving impressive results. However, issues such as loss of structure and edge information and low denoising efficiency still exist.ObjectiveTo improve image denoising quality, an enhanced multi-dimensional hybrid attention LDCT image denoising network based on edge detection is proposed in this paper.MethodsIn our network, we employ a trainable Sobel convolution to design an edge enhancement module and fuse an enhanced triplet attention network (ETAN) after each <math><mn>3</mn><mo>×</mo><mn>3</mn></math> convolutional layer to extract richer features more comprehensively and suppress useless information. During the training process, we adopt a strategy that combines total variation loss (TVLoss) with mean squared error (MSE) loss to reduce high-frequency artifacts in image reconstruction and balance image denoising and detail preservation.ResultsCompared with other advanced algorithms (CT-former, REDCNN and EDCNN), our proposed model achieves the best PSNR and SSIM values in CT image of the abdomen, which are 34.8211and 0.9131, respectively.ConclusionThrough comparative experiments with other related algorithms, it can be seen that the algorithm proposed in this article has achieved significant improvements in both subjective vision and objective indicators.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"393-404"},"PeriodicalIF":1.4,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-01. Epub Date: 2025-01-19. DOI: 10.1177/08953996241290893
P Thilagavathi, R Geetha, S Jothi Shri, K Somasundaram
Background: Global concern regarding the diagnosis of lung-related diseases has intensified due to the rapid transmission of coronavirus disease 2019 (COVID-19). Artificial Intelligence (AI) based methods are emerging technologies that help to identify COVID-19 in chest X-ray images quickly.
Method: In this study, the publicly accessible COVID-19 Chest X-ray database is used to diagnose lung-related disorders with a hybrid deep-learning approach. The dataset is pre-processed using an Improved Anisotropic Diffusion Filtering (IADF) method. Four feature extraction methods, the Grey-Level Co-occurrence Matrix (GLCM), uniform Local Binary Pattern (uLBP), Histogram of Gradients (HoG), and horizontal-vertical neighbourhood local binary pattern (hvnLBP), are then used to extract useful features from the pre-processed dataset. The dimensionality of the feature set is subsequently reduced with an Adaptive Reptile Search Optimization (ARSO) algorithm, which selects the optimal features for classification. Finally, a hybrid deep learning algorithm, a Multi-head Attention-based Bi-directional Gated Recurrent Unit with a Deep Sparse Auto-encoder Network (MhA-Bi-GRU with DSAN), is developed to perform multiclass classification, and a Dynamic Levy-Flight Chimp Optimization (DLF-CO) algorithm is applied to minimize its loss function.
Results: The whole simulation is implemented in Python; the proposed method achieves a classification accuracy of 0.95 with a learning rate of 0.001 and 0.98 with a learning rate of 0.0001. Overall, the proposed methodology outperforms existing methods across the performance measures considered.
Conclusion: The proposed hybrid deep-learning approach, combining multiple feature extraction methods with optimal feature selection, effectively diagnoses disease from chest X-ray images, as demonstrated by its classification accuracy.
{"title":"An effective COVID-19 classification in X-ray images using a new deep learning framework.","authors":"P Thilagavathi, R Geetha, S Jothi Shri, K Somasundaram","doi":"10.1177/08953996241290893","DOIUrl":"10.1177/08953996241290893","url":null,"abstract":"<p><p>BackgroundThe global concern regarding the diagnosis of lung-related diseases has intensified due to the rapid transmission of coronavirus disease 2019 (COVID-19). Artificial Intelligence (AI) based methods are emerging technologies that help to identify COVID-19 in chest X-ray images quickly.MethodIn this study, the publically accessible database COVID-19 Chest X-ray is used to diagnose lung-related disorders using a hybrid deep-learning approach. This dataset is pre-processed using an Improved Anisotropic Diffusion Filtering (IADF) method. After that, the features extraction methods named Grey-level Co-occurrence Matrix (GLCM), uniform Local Binary Pattern (uLBP), Histogram of Gradients (HoG), and Horizontal-vertical neighbourhood local binary pattern (hvnLBP) are utilized to extract the useful features from the pre-processed dataset. The dimensionality of a feature set is subsequently reduced through the utilization of an Adaptive Reptile Search Optimization (ARSO) algorithm, which optimally selects the features for flawless classification. Finally, the hybrid deep learning algorithm, Multi-head Attention-based Bi-directional Gated Recurrent unit with Deep Sparse Auto-encoder Network (MhA-Bi-GRU with DSAN), is developed to perform the multiclass classification problem. Moreover, a Dynamic Levy-Flight Chimp Optimization (DLF-CO) algorithm is applied to minimize the loss function in the hybrid algorithm.ResultsThe whole simulation is performed using the Python language in which the 0.001 learning rate accomplishes the proposed method's higher classification accuracy of 0.95%, and 0.98% is obtained for a 0.0001 learning rate. Overall, the performance of the proposed methodology outperforms all existing methods employing different performance parameters.ConclusionThe proposed hybrid deep-learning approach with various feature extraction, and optimal feature selection effectively diagnoses disease using Chest X-ray images demonstrated through classification accuracy.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"297-316"},"PeriodicalIF":1.4,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-01. Epub Date: 2025-02-12. DOI: 10.1177/08953996241304988
Ying Dong, Peng Wang, Hua Geng, Yankun Liu, Enguo Wang
Objective: This study aims to assess and compare the diagnostic performance of three advanced imaging modalities, multiparametric magnetic resonance imaging (mpMRI), transrectal ultrasound (TRUS), and positron emission tomography/computed tomography (PET/CT), in detecting prostate cancer in patients with elevated prostate-specific antigen (PSA) levels and abnormal digital rectal examination (DRE) findings.
Methods: A retrospective analysis was conducted on 150 male patients aged 50-75 years with elevated PSA and abnormal DRE. The diagnostic accuracy of each modality was assessed through sensitivity, specificity, and the area under the curve (AUC) to compare performance in detecting clinically significant prostate cancer (Gleason score ≥ 7).
Results: mpMRI demonstrated the highest diagnostic performance, with a sensitivity of 90%, specificity of 85%, and AUC of 0.92, outperforming both TRUS (sensitivity 76%, specificity 78%, AUC 0.77) and PET/CT (sensitivity 82%, specificity 80%, AUC 0.81). mpMRI detected clinically significant tumors in 80% of cases. Although TRUS and PET/CT had similar detection rates for significant tumors, their overall accuracy was lower. Minor adverse events occurred in 5% of patients undergoing TRUS, while no significant complications were associated with mpMRI or PET/CT.
Conclusion: These findings suggest that mpMRI is the most reliable imaging modality for early detection of clinically significant prostate cancer; it reduces the need for unnecessary biopsies and optimizes patient management.
{"title":"Ultrasound and advanced imaging techniques in prostate cancer diagnosis: A comparative study of mpMRI, TRUS, and PET/CT.","authors":"Ying Dong, Peng Wang, Hua Geng, Yankun Liu, Enguo Wang","doi":"10.1177/08953996241304988","DOIUrl":"10.1177/08953996241304988","url":null,"abstract":"<p><p>ObjectiveThis study aims to assess and compare the diagnostic performance of three advanced imaging modalities-multiparametric magnetic resonance imaging (mpMRI), transrectal ultrasound (TRUS), and positron emission tomography/computed tomography (PET/CT)-in detecting prostate cancer in patients with elevated PSA levels and abnormal DRE findings.MethodsA retrospective analysis was conducted on 150 male patients aged 50-75 years with elevated PSA and abnormal DRE. The diagnostic accuracy of each modality was assessed through sensitivity, specificity, and the area under the curve (AUC) to compare performance in detecting clinically significant prostate cancer (Gleason score ≥ 7).ResultsMpMRI demonstrated the highest diagnostic performance, with a sensitivity of 90%, specificity of 85%, and AUC of 0.92, outperforming both TRUS (sensitivity 76%, specificity 78%, AUC 0.77) and PET/CT (sensitivity 82%, specificity 80%, AUC 0.81). MpMRI detected clinically significant tumors in 80% of cases. Although TRUS and PET/CT had similar detection rates for significant tumors, their overall accuracy was lower. Minor adverse events occurred in 5% of patients undergoing TRUS, while no significant complications were associated with mpMRI or PET/CT.ConclusionThese findings suggest that mpMRI is the most reliable imaging modality for early detection of clinically significant prostate cancer. It reduces the need for unnecessary biopsies and optimizes patient management.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"436-447"},"PeriodicalIF":1.4,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-01. Epub Date: 2025-01-27. DOI: 10.1177/08953996241308175
Lu Miao, Zidong Li, Jinnan Gao
Purpose: This study presents a comprehensive machine learning framework for assessing breast cancer malignancy by integrating clinical features with imaging features derived from deep learning.
Methods: The dataset included 1668 patients with documented breast lesions, incorporating clinical data (e.g., age, BI-RADS category, lesion size, margins, and calcifications) alongside mammographic images processed using four CNN architectures: EfficientNet, ResNet, DenseNet, and InceptionNet. Three predictive configurations were developed: an imaging-only model, a hybrid model combining imaging and clinical data, and a stacking-based ensemble model that aggregates both data types to enhance predictive accuracy. Twelve feature selection techniques, including ReliefF and Fisher Score, were applied to identify key predictive features. Model performance was evaluated using accuracy and AUC, with 5-fold cross-validation and hyperparameter tuning to ensure robustness.
Results: The imaging-only models demonstrated strong predictive performance, with EfficientNet achieving an AUC of 0.76. The hybrid model combining imaging and clinical data reached the highest accuracy of 83% and an AUC of 0.87, underscoring the benefits of data integration. The stacking-based ensemble model further improved performance, reaching a peak AUC of 0.94 and demonstrating its potential as a reliable tool for malignancy risk assessment.
Conclusion: This study highlights the importance of integrating clinical and deep imaging features for breast cancer risk stratification, with the stacking-based ensemble model delivering the strongest performance.
{"title":"A multi-model machine learning framework for breast cancer risk stratification using clinical and imaging data.","authors":"Lu Miao, Zidong Li, Jinnan Gao","doi":"10.1177/08953996241308175","DOIUrl":"10.1177/08953996241308175","url":null,"abstract":"<p><p>PurposeThis study presents a comprehensive machine learning framework for assessing breast cancer malignancy by integrating clinical features with imaging features derived from deep learning.MethodsThe dataset included 1668 patients with documented breast lesions, incorporating clinical data (e.g., age, BI-RADS category, lesion size, margins, and calcifications) alongside mammographic images processed using four CNN architectures: EfficientNet, ResNet, DenseNet, and InceptionNet. Three predictive configurations were developed: an imaging-only model, a hybrid model combining imaging and clinical data, and a stacking-based ensemble model that aggregates both data types to enhance predictive accuracy. Twelve feature selection techniques, including ReliefF and Fisher Score, were applied to identify key predictive features. Model performance was evaluated using accuracy and AUC, with 5-fold cross-valida tion and hyperparameter tuning to ensure robustness.ResultsThe imaging-only models demonstrated strong predictive performance, with EfficientNet achieving an AUC of 0.76. The hybrid model combining imaging and clinical data reached the highest accuracy of 83% and an AUC of 0.87, underscoring the benefits of data integration. The stacking-based ensemble model further optimized accuracy, reaching a peak AUC of 0.94, demonstrating its potential as a reliable tool for malignancy risk assessment.ConclusionThis study highlights the importance of integrating clinical and deep imaging features for breast cancer risk stratification, with the stacking-based model.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"360-375"},"PeriodicalIF":1.4,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Osteoporosis is a major public health concern, especially among older adults, due to its association with an increased risk of fractures, particularly of the proximal femur. These fractures severely impact mobility and quality of life, leading to significant economic and health burdens.
Objective: This study aims to enhance bone density assessment in the proximal femur by addressing the limitations of conventional dual-energy X-ray absorptiometry through the integration of tomosynthesis with dual-energy applications and advanced segmentation models.
Methods and Materials: The imaging capability of a radiography/fluoroscopy system with dual-energy subtraction was evaluated. Two phantoms were included in this study: a tomosynthesis phantom (PH-56) was used to measure the quality of the tomosynthesis images, and a torso phantom (PH-4) was used to obtain proximal femur images. Quantification of bone images was achieved by optimizing the energy subtraction (ene-sub) and scale factors to isolate bone pixel values while nullifying soft-tissue pixel values. Both a faster region-based convolutional neural network (Faster R-CNN) and U-Net were used to segment the proximal femoral region, and their performance was evaluated using the intersection-over-union (IoU) metric with the torso phantom to ensure controlled conditions.
Results: The optimal ene-sub factor ranged between 1.19 and 1.20, and a scale factor of around 0.1 was found to be suitable for detailed bone image observation. Regarding segmentation performance, a VGG19-based Faster R-CNN model achieved the highest mean IoU, outperforming the U-Net model (0.865 vs. 0.515, respectively).
Conclusions: These findings suggest that integrating tomosynthesis with dual-energy applications significantly enhances the accuracy of bone density measurements in the proximal femur, and that the Faster R-CNN model provides superior segmentation performance, offering a promising tool for bone density and osteoporosis management. Future research should focus on refining these models and validating their clinical applicability to improve patient outcomes.
{"title":"Proximal femur segmentation and quantification in dual-energy subtraction tomosynthesis: A novel approach to fracture risk assessment.","authors":"Akari Matsushima, Tai-Been Chen, Koharu Kimura, Mizuki Sato, Shih-Yen Hsu, Takahide Okamoto","doi":"10.1177/08953996241312594","DOIUrl":"10.1177/08953996241312594","url":null,"abstract":"<p><p>BackgroundOsteoporosis is a major public health concern, especially among older adults, due to its association with an increased risk of fractures, particularly in the proximal femur. These fractures severely impact mobility and quality of life, leading to significant economic and health burdens.ObjectiveThis study aims to enhance bone density assessment in the proximal femur by addressing the limitations of conventional dual-energy X-ray absorptiometry through the integration of tomosynthesis with dual-energy applications and advanced segmentation models.Methods and MaterialsThe imaging capability of a radiography/fluoroscopy system with dual-energy subtraction was evaluated. Two phantoms were included in this study: a tomosynthesis phantom (PH-56) was used to measure the quality of the tomosynthesis images, and a torso phantom (PH-4) was used to obtain proximal femur images. Quantification of bone images was achieved by optimizing the energy subtraction (ene-sub) and scale factors to isolate bone pixel values while nullifying soft tissue pixel values. Both the faster region-based convolutional neural network (Faster R-CNN) and U-Net were used to segment the proximal femoral region. The performance of these models was then evaluated using the intersection-over-union (IoU) metric with a torso phantom to ensure controlled conditions.ResultsThe optimal ene-sub-factor ranged between 1.19 and 1.20, and a scale factor of around 0.1 was found to be suitable for detailed bone image observation. Regarding segmentation performance, a VGG19-based Faster R-CNN model achieved the highest mean IoU, outperforming the U-Net model (0.865 vs. 0.515, respectively).ConclusionsThese findings suggest that the integration of tomosynthesis with dual-energy applications significantly enhances the accuracy of bone density measurements in the proximal femur, and that the Faster R-CNN model provides superior segmentation performance, thereby offering a promising tool for bone density and osteoporosis management. Future research should focus on refining these models and validating their clinical applicability to improve patient outcomes.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"405-419"},"PeriodicalIF":1.4,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-01. Epub Date: 2025-02-12. DOI: 10.1177/08953996241311190
Pengfei Song, Yun Wu
Background: Diabetic retinopathy (DR) is a major complication of diabetes and a leading cause of blindness among the working-age population. However, the complex distribution and variability of lesion characteristics within the dataset present significant challenges for achieving high-precision classification of DR images.
Objective: We propose an automatic classification method for DR images, named DR-ConvNeXt, which aims to achieve accurate diagnosis of lesion types.
Methods: The method involves designing a dual-branch addition convolution structure and appropriately increasing the number of stacked ConvNeXt Block convolution layers. Additionally, a unique primary-auxiliary loss function is introduced, contributing to a significant enhancement in DR classification accuracy within the DR-ConvNeXt model.
Results: The model achieved an accuracy of 91.8%, sensitivity of 81.6%, and specificity of 97.9% on the APTOS dataset. On the Messidor-2 dataset, it achieved an accuracy of 83.6%, sensitivity of 74.0%, and specificity of 94.6%.
Conclusions: The DR-ConvNeXt model's classification results on the two publicly available datasets illustrate its significant advantages on all evaluation indexes for DR classification.
{"title":"DR-ConvNeXt: DR classification method for reconstructing ConvNeXt model structure.","authors":"Pengfei Song, Yun Wu","doi":"10.1177/08953996241311190","DOIUrl":"10.1177/08953996241311190","url":null,"abstract":"<p><p>BackgroundDiabetic retinopathy (DR) is a major complication of diabetes and a leading cause of blindness among the working-age population. However, the complex distribution and variability of lesion characteristics within the dataset present significant challenges for achieving high-precision classification of DR images.ObjectiveWe propose an automatic classification method for DR images, named DR-ConvNeXt, which aims to achieve accurate diagnosis of lesion types.MethodsThe method involves designing a dual-branch addition convolution structure and appropriately increasing the number of stacked ConvNeXt Block convolution layers. Additionally, a unique primary-auxiliary loss function is introduced, contributing to a significant enhancement in DR classification accuracy within the DR-ConvNeXt model.ResultsThe model achieved an accuracy of 91.8%,sensitivity of 81.6%, and specificity of 97.9% on the APTOS dataset. On the Messidor-2 dataset, the model achieved an accuracy of 83.6%, sensitivity of 74.0%, and specificity of 94.6%.ConclusionsThe DR-ConvNeXt model's classification results on the two publicly available datasets illustrate the significant advantages in all evaluation indexes for DR classification.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"448-460"},"PeriodicalIF":1.4,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-01. Epub Date: 2025-02-18. DOI: 10.1177/08953996251319183
Yawu Long, Qianglong Zhong, Jin Lu, Chengke Xiong
Background: X-ray Computed Laminography (CL) is a popular industrial tool for non-destructive visualization of flat objects. However, high-quality CL imaging requires a large number of projections, resulting in long imaging times. Reducing the number of projections accelerates the imaging process but degrades the quality of the reconstructed images.
Objective: Our objective is to build a deep learning network for sparse-view CL reconstruction.
Methods: Considering the complementarity of feature extraction in different domains, we design an encoder-decoder network that compensates, in the wavelet domain, for information missed during spatial-domain feature extraction. A detail-enhancement module is also developed to highlight details, and Swin Transformer and convolution operators are combined to better capture features.
Results: A total of 3200 pairs of 16-view and 1024-view CL images of solder joints (2880 pairs for training, 160 for validation, and 160 for testing) were employed to investigate the performance of the proposed network. The proposed network obtains the highest image quality, with PSNR and SSIM of 37.875 ± 0.908 dB and 0.992 ± 0.002, respectively. It also achieves competitive results on the AAPM dataset.
Conclusions: This study demonstrates the effectiveness and generalization of the proposed network for sparse-view CL reconstruction.
{"title":"A novel detail-enhanced wavelet domain feature compensation network for sparse-view X-ray computed laminography.","authors":"Yawu Long, Qianglong Zhong, Jin Lu, Chengke Xiong","doi":"10.1177/08953996251319183","DOIUrl":"10.1177/08953996251319183","url":null,"abstract":"<p><p>BackgroundX-ray Computed Laminography (CL) is a popular industrial tool for non-destructive visualization of flat objects. However, high-quality CL imaging requires a large number of projections, resulting in a long imaging time. Reducing the number of projections allows acceleration of the imaging process, but decreases the quality of reconstructed images.ObjectiveOur objective is to build a deep learning network for sparse-view CL reconstruction.MethodsConsidering complementarities of feature extraction in different domains, we design an encoder-decoder network that enables to compensate the missing information during spatial domain feature extraction in wavelet domain. Also, a detail-enhanced module is developed to highlight details. Additionally, Swin Transformer and convolution operators are combined to better capture features.ResultsA total of 3200 pairs of 16-view and 1024-view CL images (2880 pairs for training, 160 pairs for validation, and 160 pairs for testing) of solder joints have been employed to investigate the performance of the proposed network. It is observed that the proposed network obtains the highest image quality with PSNR and SSIM of 37.875 ± 0.908 dB, 0.992 ± 0.002, respectively. Also, it achieves competitive results on the AAPM dataset.ConclusionsThis study demonstrates the effectiveness and generalization of the proposed network for sparse-view CL reconstruction.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"488-498"},"PeriodicalIF":1.4,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}