
Latest articles from Journal of X-Ray Science and Technology

Research on meshing method for industrial CT volume data based on iterative smooth signed distance surface reconstruction.
IF 1.4 · CAS Tier 3 (Medicine) · Q3 INSTRUMENTS & INSTRUMENTATION · Pub Date: 2025-03-01 · Epub Date: 2025-01-27 · DOI: 10.1177/08953996241306691
ShiBo Jiang, Shuo Xu, YueWen Sun, ZhiFang Wu

Industrial computed tomography (CT) is increasingly applied in fields such as additive manufacturing and non-destructive testing, providing rich three-dimensional information that is crucial for internal structure inspection, defect detection, and product development. In subsequent processes such as analysis, simulation, and editing, three-dimensional volume data models often need to be converted into mesh models, making effective meshing of volume data essential for expanding the application scenarios and scope of industrial CT. However, the existing Marching Cubes algorithm suffers from low efficiency and poor mesh quality during volume data meshing. To overcome these limitations, this study proposes an innovative method for industrial CT volume data meshing based on an iterative Smooth Signed Distance (iSSD) surface reconstruction algorithm. The method first refines the segmented voxel model, accurately extracts boundary voxels, and constructs a high-quality point cloud model. The point cloud normals are randomly initialized and then iteratively updated, with the mesh reconstructed by the SSD algorithm after each update, ultimately achieving a high-quality, watertight, and smooth mesh model and ensuring the accuracy and reliability of the reconstructed mesh. Qualitative and quantitative comparisons with other methods further highlight the excellent performance of the proposed method. This study not only improves the efficiency and quality of volume data meshing but also provides a solid foundation for subsequent three-dimensional analysis, simulation, and editing, and has important industrial application prospects and academic value.
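The normal-update loop described above (random initialization, then repeated refinement between SSD reconstruction passes) can be sketched in toy form. This is an illustrative approximation, not the authors' implementation: the function name `iterate_normals`, the k-nearest-neighbour size, and the sign-aligned averaging rule are all assumptions, and the SSD reconstruction step itself is omitted.

```python
import numpy as np

def iterate_normals(points, k=8, n_iters=10, seed=0):
    """Toy sketch: randomly initialise unit normals, then repeatedly
    replace each normal by the sign-aligned mean of its k nearest
    neighbours' normals. In the paper, an SSD surface reconstruction
    would be run after each update; here only the normal loop is shown."""
    rng = np.random.default_rng(seed)
    n = len(points)
    normals = rng.normal(size=(n, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)

    # Brute-force k-nearest neighbours (fine for small point clouds).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]

    for _ in range(n_iters):
        neigh = normals[knn]                                   # (n, k, 3)
        # Flip neighbour normals that point away from the current one,
        # so the average does not cancel out across the surface.
        sign = np.sign((neigh * normals[:, None, :]).sum(-1, keepdims=True))
        sign[sign == 0] = 1.0
        avg = (neigh * sign).mean(axis=1)
        normals = avg / (np.linalg.norm(avg, axis=1, keepdims=True) + 1e-12)
    return normals
```

After a few iterations the normals of nearby points become locally consistent, which is the property an SSD-style fit needs from its oriented point cloud.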

Pages: 340-349
Citations: 0
A deep learning detection method for pancreatic cystic neoplasm based on Mamba architecture.
IF 1.4 · CAS Tier 3 (Medicine) · Q3 INSTRUMENTS & INSTRUMENTATION · Pub Date: 2025-03-01 · Epub Date: 2025-02-18 · DOI: 10.1177/08953996251313719
Junlong Dai, Cong He, Liang Jin, Chengwei Chen, Jie Wu, Yun Bian

Objective: Early diagnosis of pancreatic cystic neoplasm (PCN) is crucial for patient survival. This study proposes M-YOLO, a novel model combining Mamba architecture and YOLO, to enhance the detection of pancreatic cystic tumors. The model addresses the technical challenge posed by the tumors' complex morphological features in medical images.

Methods: This study develops an innovative deep learning network architecture, M-YOLO (Mamba YOLOv10), which combines the advantages of Mamba and YOLOv10 to improve the accuracy and efficiency of pancreatic cystic neoplasm (PCN) detection. The Mamba architecture, with its superior sequence-modeling capability, is well suited to processing the rich contextual information contained in medical images. At the same time, YOLOv10's fast object detection ensures the system's viability in clinical practice.

Results: On the dataset provided by Changhai Hospital, M-YOLO achieves a sensitivity of 0.98, a specificity of 0.92, a precision of 0.96, an F1 score of 0.97, an accuracy of 0.93, and a mean average precision (mAP) of 0.96 at a 50% intersection-over-union (IoU) threshold.
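As a quick consistency check, the reported precision and sensitivity (which serves as recall) reproduce the reported F1 score via the standard harmonic-mean formula:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values reported in the abstract (sensitivity doubles as recall):
sensitivity, precision = 0.98, 0.96
f1 = f1_score(precision, sensitivity)  # ≈ 0.97, matching the reported F1
```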

Conclusions: M-YOLO (Mamba YOLOv10) enhances PCN identification performance by integrating the deep feature extraction capability of Mamba with the fast localization technique of YOLOv10.

Pages: 461-471
Citations: 0
Orthogonal limited-angle CT reconstruction method based on anisotropic self-guided image filtering.
IF 1.4 · CAS Tier 3 (Medicine) · Q3 INSTRUMENTS & INSTRUMENTATION · Pub Date: 2025-03-01 · Epub Date: 2025-01-27 · DOI: 10.1177/08953996241300013
Gong Changcheng, Song Qiang

Computed tomography (CT) reconstruction from incomplete projection data is important for reducing radiation dose or scanning time. In this work, we investigate a special sampling strategy that performs two limited-angle scans, which we call orthogonal limited-angle sampling. The X-ray source trajectory covers two limited-angle ranges whose angle bisectors are orthogonal. This sampling method avoids the rapid switching of tube voltage required in few-view sampling and reduces the data correlation of projections in limited-angle sampling, so it has the potential to become a practical imaging strategy. We then propose a new reconstruction model based on anisotropic self-guided image filtering (ASGIF) and present an algorithm to solve it. We construct adaptive weights to guide image reconstruction using the gradient information of the reconstructed image itself. Additionally, since the shading artifacts are related to the scanning angular ranges and are distributed in two orthogonal directions, anisotropic image filtering is used to preserve image edges. Experiments on a digital phantom and real CT data demonstrate that the ASGIF method can effectively suppress shading artifacts and preserve image edges, outperforming competing methods.
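A minimal sketch of the self-guided idea: build direction-wise weights from the image's own gradients so that smoothing is strong in flat regions and suppressed across edges. The weighting function, its `eps` parameter, and the function name are illustrative assumptions, not the ASGIF formulation from the paper.

```python
import numpy as np

def anisotropic_edge_weights(img, eps=1e-3):
    """Toy sketch of self-guided, direction-wise weights: close to 1 in
    flat regions (strong smoothing), close to 0 across edges (edges
    preserved). The actual ASGIF weighting may differ."""
    gx = np.diff(img, axis=1, append=img[:, -1:])   # horizontal gradient
    gy = np.diff(img, axis=0, append=img[-1:, :])   # vertical gradient
    wx = 1.0 / (1.0 + (gx / eps) ** 2)              # weight across columns
    wy = 1.0 / (1.0 + (gy / eps) ** 2)              # weight across rows
    return wx, wy
```

Keeping separate horizontal and vertical weight maps is one simple way to make the filtering anisotropic, matching the observation that limited-angle shading artifacts are oriented along two orthogonal directions.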

Pages: 325-339
Citations: 0
A cross-type multi-dimensional network based on feature enhancement and triple interactive attention for LDCT denoising.
IF 1.4 · CAS Tier 3 (Medicine) · Q3 INSTRUMENTS & INSTRUMENTATION · Pub Date: 2025-03-01 · Epub Date: 2025-01-29 · DOI: 10.1177/08953996241306696
Lina Jia, Beibei Jia, Zongyang Li, Yizhuo Zhang, Zhiguo Gui

Background: Numerous deep learning methods for low-dose computed tomography (LDCT) image denoising have been proposed, achieving impressive results. However, issues such as loss of structure and edge information and low denoising efficiency remain.

Objective: To improve image denoising quality, this paper proposes an enhanced multi-dimensional hybrid-attention LDCT image denoising network based on edge detection.

Methods: In our network, we employ a trainable Sobel convolution to design an edge enhancement module and fuse an enhanced triplet attention network (ETAN) after each 3×3 convolutional layer to extract richer features more comprehensively and suppress useless information. During training, we adopt a strategy that combines total variation loss (TVLoss) with mean squared error (MSE) loss to reduce high-frequency artifacts in image reconstruction and to balance denoising against detail preservation.

Results: Compared with other advanced algorithms (CT-former, REDCNN, and EDCNN), our proposed model achieves the best PSNR and SSIM values on abdominal CT images, 34.8211 and 0.9131 respectively.

Conclusion: Comparative experiments with related algorithms show that the proposed algorithm achieves significant improvements in both subjective visual quality and objective metrics.
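The combined training objective can be sketched as follows; the `tv_weight` balance factor is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def tv_mse_loss(pred, target, tv_weight=1e-4):
    """Sketch of the combined objective: MSE keeps the denoised image
    close to the reference, while anisotropic total variation penalises
    high-frequency artifacts. tv_weight trades off the two terms."""
    mse = np.mean((pred - target) ** 2)
    tv = (np.abs(np.diff(pred, axis=0)).sum()
          + np.abs(np.diff(pred, axis=1)).sum())
    return mse + tv_weight * tv
```

A larger `tv_weight` smooths more aggressively at the cost of fine detail, which is exactly the denoising-versus-detail trade-off the abstract describes.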

Pages: 393-404
Citations: 0
An effective COVID-19 classification in X-ray images using a new deep learning framework.
IF 1.4 · CAS Tier 3 (Medicine) · Q3 INSTRUMENTS & INSTRUMENTATION · Pub Date: 2025-03-01 · Epub Date: 2025-01-19 · DOI: 10.1177/08953996241290893
P Thilagavathi, R Geetha, S Jothi Shri, K Somasundaram

Background: Global concern regarding the diagnosis of lung-related diseases has intensified due to the rapid transmission of coronavirus disease 2019 (COVID-19). Artificial intelligence (AI) based methods are emerging technologies that help to identify COVID-19 in chest X-ray images quickly.

Method: In this study, the publicly accessible COVID-19 Chest X-ray database is used to diagnose lung-related disorders with a hybrid deep-learning approach. The dataset is pre-processed using an Improved Anisotropic Diffusion Filtering (IADF) method. Then, the feature extraction methods Grey-Level Co-occurrence Matrix (GLCM), uniform Local Binary Pattern (uLBP), Histogram of Gradients (HoG), and horizontal-vertical neighbourhood local binary pattern (hvnLBP) are used to extract useful features from the pre-processed dataset. The dimensionality of the feature set is then reduced with an Adaptive Reptile Search Optimization (ARSO) algorithm, which selects the optimal features for classification. Finally, a hybrid deep learning algorithm, a Multi-head Attention-based Bi-directional Gated Recurrent Unit with a Deep Sparse Auto-encoder Network (MhA-Bi-GRU with DSAN), is developed to perform multiclass classification. Moreover, a Dynamic Levy-Flight Chimp Optimization (DLF-CO) algorithm is applied to minimize the loss function of the hybrid algorithm.

Results: The whole simulation is performed in Python; a learning rate of 0.001 yields the proposed method's classification accuracy of 0.95%, and 0.98% is obtained for a learning rate of 0.0001. Overall, the proposed methodology outperforms all existing methods across the different performance parameters.

Conclusion: The proposed hybrid deep-learning approach, with its varied feature extraction and optimal feature selection, effectively diagnoses disease from chest X-ray images, as demonstrated by its classification accuracy.
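Of the hand-crafted descriptors listed, GLCM is the simplest to illustrate: it counts how often pairs of grey levels co-occur at a fixed pixel offset, and texture statistics (contrast, homogeneity, etc.) are then derived from the normalized matrix. A generic minimal implementation, not the paper's code:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0, symmetric=True, normalise=True):
    """Minimal grey-level co-occurrence matrix for one (dx, dy) offset.
    `img` holds integer grey levels in [0, levels)."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y, x], img[y2, x2]] += 1
    if symmetric:
        m += m.T          # count each pair in both directions
    if normalise and m.sum() > 0:
        m /= m.sum()      # turn counts into joint probabilities
    return m
```

Production code would use `skimage.feature.graycomatrix`, which implements the same counting over multiple distances and angles at once.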

Pages: 297-316
Citations: 0
Ultrasound and advanced imaging techniques in prostate cancer diagnosis: A comparative study of mpMRI, TRUS, and PET/CT.
IF 1.4 · CAS Tier 3 (Medicine) · Q3 INSTRUMENTS & INSTRUMENTATION · Pub Date: 2025-03-01 · Epub Date: 2025-02-12 · DOI: 10.1177/08953996241304988
Ying Dong, Peng Wang, Hua Geng, Yankun Liu, Enguo Wang

Objective: This study aims to assess and compare the diagnostic performance of three advanced imaging modalities, multiparametric magnetic resonance imaging (mpMRI), transrectal ultrasound (TRUS), and positron emission tomography/computed tomography (PET/CT), in detecting prostate cancer in patients with elevated PSA levels and abnormal DRE findings.

Methods: A retrospective analysis was conducted on 150 male patients aged 50-75 years with elevated PSA and abnormal DRE. The diagnostic accuracy of each modality was assessed through sensitivity, specificity, and the area under the curve (AUC) to compare performance in detecting clinically significant prostate cancer (Gleason score ≥ 7).

Results: mpMRI demonstrated the highest diagnostic performance, with a sensitivity of 90%, a specificity of 85%, and an AUC of 0.92, outperforming both TRUS (sensitivity 76%, specificity 78%, AUC 0.77) and PET/CT (sensitivity 82%, specificity 80%, AUC 0.81). mpMRI detected clinically significant tumors in 80% of cases. Although TRUS and PET/CT had similar detection rates for significant tumors, their overall accuracy was lower. Minor adverse events occurred in 5% of patients undergoing TRUS, while no significant complications were associated with mpMRI or PET/CT.

Conclusion: These findings suggest that mpMRI is the most reliable imaging modality for early detection of clinically significant prostate cancer. It reduces the need for unnecessary biopsies and optimizes patient management.
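One common way to collapse each modality's sensitivity/specificity pair into a single number is Youden's J statistic (J = sensitivity + specificity - 1); applied to the figures above, it ranks the modalities the same way the reported AUCs do. A small illustrative computation, not an analysis from the study itself:

```python
def youden_j(sensitivity, specificity):
    """Youden's J statistic: J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1

# Sensitivity/specificity pairs reported in the abstract:
modalities = {
    "mpMRI":  youden_j(0.90, 0.85),
    "TRUS":   youden_j(0.76, 0.78),
    "PET/CT": youden_j(0.82, 0.80),
}
# mpMRI scores highest, consistent with its highest AUC (0.92).
```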

Pages: 436-447
Citations: 0
A multi-model machine learning framework for breast cancer risk stratification using clinical and imaging data.
IF 1.4 · CAS Tier 3 (Medicine) · Q3 INSTRUMENTS & INSTRUMENTATION · Pub Date: 2025-03-01 · Epub Date: 2025-01-27 · DOI: 10.1177/08953996241308175
Lu Miao, Zidong Li, Jinnan Gao

Purpose: This study presents a comprehensive machine learning framework for assessing breast cancer malignancy by integrating clinical features with imaging features derived from deep learning.

Methods: The dataset included 1668 patients with documented breast lesions, incorporating clinical data (e.g., age, BI-RADS category, lesion size, margins, and calcifications) alongside mammographic images processed using four CNN architectures: EfficientNet, ResNet, DenseNet, and InceptionNet. Three predictive configurations were developed: an imaging-only model, a hybrid model combining imaging and clinical data, and a stacking-based ensemble model that aggregates both data types to enhance predictive accuracy. Twelve feature selection techniques, including ReliefF and Fisher Score, were applied to identify key predictive features. Model performance was evaluated using accuracy and AUC, with 5-fold cross-validation and hyperparameter tuning to ensure robustness.

Results: The imaging-only models demonstrated strong predictive performance, with EfficientNet achieving an AUC of 0.76. The hybrid model combining imaging and clinical data reached the highest accuracy of 83% and an AUC of 0.87, underscoring the benefits of data integration. The stacking-based ensemble model further improved discrimination, reaching a peak AUC of 0.94, demonstrating its potential as a reliable tool for malignancy risk assessment.

Conclusion: This study highlights the importance of integrating clinical and deep imaging features for breast cancer risk stratification, with the stacking-based ensemble model achieving the best performance.
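A stacking configuration of this kind can be sketched with scikit-learn's `StackingClassifier`. The base learners, meta-learner, and synthetic data below are illustrative stand-ins, since the abstract does not specify them:

```python
# Sketch of a stacking ensemble: base learners' cross-validated
# predictions become input features for a meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the combined clinical + imaging feature table.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
    cv=5,  # out-of-fold predictions feed the meta-learner
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The `cv=5` setting makes the meta-learner train on out-of-fold base predictions, which guards against the leakage that would occur if base models predicted on their own training data.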

A multi-model machine learning framework for breast cancer risk stratification using clinical and imaging data.
IF 1.4 Tier 3 Medicine Q3 INSTRUMENTS & INSTRUMENTATION Pub Date : 2025-03-01 DOI: 10.1177/08953996241308175
Lu Miao, Zidong Li, Jinnan Gao

Purpose: This study presents a comprehensive machine learning framework for assessing breast cancer malignancy by integrating clinical features with imaging features derived from deep learning.
Methods: The dataset included 1668 patients with documented breast lesions, incorporating clinical data (e.g., age, BI-RADS category, lesion size, margins, and calcifications) alongside mammographic images processed using four CNN architectures: EfficientNet, ResNet, DenseNet, and InceptionNet. Three predictive configurations were developed: an imaging-only model, a hybrid model combining imaging and clinical data, and a stacking-based ensemble model that aggregates both data types to enhance predictive accuracy. Twelve feature selection techniques, including ReliefF and Fisher Score, were applied to identify key predictive features. Model performance was evaluated using accuracy and AUC, with 5-fold cross-validation and hyperparameter tuning to ensure robustness.
Results: The imaging-only models demonstrated strong predictive performance, with EfficientNet achieving an AUC of 0.76. The hybrid model combining imaging and clinical data reached the highest accuracy of 83% and an AUC of 0.87, underscoring the benefits of data integration. The stacking-based ensemble model further optimized accuracy, reaching a peak AUC of 0.94, demonstrating its potential as a reliable tool for malignancy risk assessment.
Conclusion: This study highlights the importance of integrating clinical and deep imaging features for breast cancer risk stratification, with the stacking-based ensemble model delivering the strongest performance.
Journal of X-Ray Science and Technology, pp. 360-375.
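The stacking configuration described in the abstract — one base model per data type, with a meta-learner aggregating their outputs — can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation; the base learners (plain logistic regressions), the meta-learner, and the "imaging"/"clinical" feature blocks are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: "imaging" and "clinical" feature blocks for 200 lesions.
n = 200
X_img = rng.normal(size=(n, 5))
X_clin = rng.normal(size=(n, 3))
w_img, w_clin = rng.normal(size=5), rng.normal(size=3)
y = ((X_img @ w_img + X_clin @ w_clin) > 0).astype(float)

def fit_logistic(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression (bias folded into X)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1 / (1 + np.exp(-Xb @ w))

# Level 0: one base model per data type.
w1 = fit_logistic(X_img, y)
w2 = fit_logistic(X_clin, y)
base_preds = np.column_stack([predict_proba(X_img, w1),
                              predict_proba(X_clin, w2)])

# Level 1: meta-learner aggregates the base-model probabilities.
w_meta = fit_logistic(base_preds, y)
stacked = predict_proba(base_preds, w_meta)
acc = ((stacked > 0.5) == y).mean()
print(f"stacked training accuracy: {acc:.2f}")
```

In a real pipeline the level-0 predictions fed to the meta-learner should be out-of-fold (e.g., from the 5-fold cross-validation the paper uses) to avoid leaking training labels into the meta-learner; the sketch above skips that for brevity.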
Proximal femur segmentation and quantification in dual-energy subtraction tomosynthesis: A novel approach to fracture risk assessment.
IF 1.4 Tier 3 Medicine Q3 INSTRUMENTS & INSTRUMENTATION Pub Date : 2025-03-01 Epub Date : 2025-01-29 DOI: 10.1177/08953996241312594
Akari Matsushima, Tai-Been Chen, Koharu Kimura, Mizuki Sato, Shih-Yen Hsu, Takahide Okamoto

Background: Osteoporosis is a major public health concern, especially among older adults, due to its association with an increased risk of fractures, particularly in the proximal femur. These fractures severely impact mobility and quality of life, leading to significant economic and health burdens.
Objective: This study aims to enhance bone density assessment in the proximal femur by addressing the limitations of conventional dual-energy X-ray absorptiometry through the integration of tomosynthesis with dual-energy applications and advanced segmentation models.
Methods and Materials: The imaging capability of a radiography/fluoroscopy system with dual-energy subtraction was evaluated. Two phantoms were used in this study: a tomosynthesis phantom (PH-56) to measure the quality of the tomosynthesis images, and a torso phantom (PH-4) to obtain proximal femur images. Bone images were quantified by optimizing the energy subtraction (ene-sub) and scale factors to isolate bone pixel values while nullifying soft-tissue pixel values. Both the faster region-based convolutional neural network (Faster R-CNN) and U-Net were used to segment the proximal femoral region. The performance of these models was then evaluated using the intersection-over-union (IoU) metric with a torso phantom to ensure controlled conditions.
Results: The optimal ene-sub factor ranged between 1.19 and 1.20, and a scale factor of around 0.1 was found to be suitable for detailed bone image observation. Regarding segmentation performance, a VGG19-based Faster R-CNN model achieved the highest mean IoU, outperforming the U-Net model (0.865 vs. 0.515).
Conclusions: These findings suggest that integrating tomosynthesis with dual-energy applications significantly enhances the accuracy of bone density measurements in the proximal femur, and that the Faster R-CNN model provides superior segmentation performance, offering a promising tool for bone density and osteoporosis management. Future research should focus on refining these models and validating their clinical applicability to improve patient outcomes.

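The energy-subtraction and IoU ideas above can be illustrated with a toy two-material model. The attenuation coefficients and phantom geometry below are invented for illustration (that the resulting factor comes out near the paper's 1.19–1.20 is a coincidence of the chosen numbers); the point is only that choosing the subtraction factor as the ratio of the soft-tissue coefficients nulls the soft-tissue term, leaving a bone-only image:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy phantom: random soft-tissue thickness with a square bone insert.
soft = rng.uniform(0.8, 1.2, size=(64, 64))
bone_mask = np.zeros((64, 64), dtype=bool)
bone_mask[20:40, 20:40] = True

# Hypothetical per-material contributions at two tube energies.
high = 0.30 * soft + 0.50 * bone_mask   # high-energy image
low = 0.25 * soft + 0.80 * bone_mask    # low-energy image

# Energy subtraction: f = ratio of soft-tissue coefficients cancels soft tissue.
f = 0.30 / 0.25                          # = 1.2
bone_img = high - f * low                # soft term vanishes; bone term = -0.46

def iou(a, b):
    """Intersection-over-union of two boolean masks."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

seg = bone_img < -0.2                    # threshold the bone-only image
print(f"IoU vs. ground truth: {iou(seg, bone_mask):.2f}")  # 1.00 here
```

The toy segmentation is perfect only because the simulation is noise-free; on real tomosynthesis slices the threshold (or a learned segmenter such as the paper's Faster R-CNN) has to cope with scatter and beam-hardening effects.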
Journal of X-Ray Science and Technology, pp. 405-419.
DR-ConvNeXt: DR classification method for reconstructing ConvNeXt model structure.
IF 1.4 Tier 3 Medicine Q3 INSTRUMENTS & INSTRUMENTATION Pub Date : 2025-03-01 Epub Date : 2025-02-12 DOI: 10.1177/08953996241311190
Pengfei Song, Yun Wu

Background: Diabetic retinopathy (DR) is a major complication of diabetes and a leading cause of blindness among the working-age population. However, the complex distribution and variability of lesion characteristics within the dataset present significant challenges for achieving high-precision classification of DR images.
Objective: We propose an automatic classification method for DR images, named DR-ConvNeXt, which aims to achieve accurate diagnosis of lesion types.
Methods: The method involves designing a dual-branch addition convolution structure and appropriately increasing the number of stacked ConvNeXt Block convolution layers. Additionally, a unique primary-auxiliary loss function is introduced, contributing to a significant enhancement in DR classification accuracy within the DR-ConvNeXt model.
Results: The model achieved an accuracy of 91.8%, sensitivity of 81.6%, and specificity of 97.9% on the APTOS dataset. On the Messidor-2 dataset, it achieved an accuracy of 83.6%, sensitivity of 74.0%, and specificity of 94.6%.
Conclusions: The DR-ConvNeXt model's classification results on the two publicly available datasets demonstrate clear advantages across all evaluation indexes for DR classification.

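Accuracy, sensitivity, and specificity as reported above all derive from the binary confusion matrix; a small self-contained helper (not from the paper) makes the definitions concrete:

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives), specificity (recall on negatives)."""
    y_true = np.asarray(y_true, bool)
    y_pred = np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)      # true positives
    tn = np.sum(~y_true & ~y_pred)    # true negatives
    fp = np.sum(~y_true & y_pred)     # false positives
    fn = np.sum(y_true & ~y_pred)     # false negatives
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Toy check: 8 cases, one false negative and one false positive.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
m = diagnostic_metrics(y_true, y_pred)
print(m)  # accuracy 0.75, sensitivity 0.75, specificity 0.75
```

For multi-class DR grading (the usual 5-grade scale), these metrics are typically computed per class in one-vs-rest fashion and then averaged.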
Journal of X-Ray Science and Technology, pp. 448-460.
A novel detail-enhanced wavelet domain feature compensation network for sparse-view X-ray computed laminography.
IF 1.4 Tier 3 Medicine Q3 INSTRUMENTS & INSTRUMENTATION Pub Date : 2025-03-01 Epub Date : 2025-02-18 DOI: 10.1177/08953996251319183
Yawu Long, Qianglong Zhong, Jin Lu, Chengke Xiong

Background: X-ray Computed Laminography (CL) is a popular industrial tool for non-destructive visualization of flat objects. However, high-quality CL imaging requires a large number of projections, resulting in long imaging times. Reducing the number of projections accelerates the imaging process but degrades the quality of the reconstructed images.
Objective: Our objective is to build a deep learning network for sparse-view CL reconstruction.
Methods: Considering the complementarity of feature extraction in different domains, we design an encoder-decoder network that compensates, in the wavelet domain, for information missed during spatial-domain feature extraction. A detail-enhanced module is also developed to highlight fine details, and Swin Transformer and convolution operators are combined to better capture features.
Results: A total of 3200 pairs of 16-view and 1024-view CL images of solder joints (2880 pairs for training, 160 for validation, and 160 for testing) were used to evaluate the proposed network. The proposed network obtains the highest image quality, with a PSNR of 37.875 ± 0.908 dB and an SSIM of 0.992 ± 0.002. It also achieves competitive results on the AAPM dataset.
Conclusions: This study demonstrates the effectiveness and generalization of the proposed network for sparse-view CL reconstruction.

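PSNR, one of the two image-quality metrics quoted in the results above, is straightforward to compute from the mean squared error; a minimal sketch (the noise level and image size below are arbitrary, not the paper's data) is:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((np.asarray(ref, np.float64) - np.asarray(test, np.float64)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(2)
ref = rng.random((128, 128))                      # stand-in "1024-view" reference
noisy = ref + rng.normal(0.0, 0.01, ref.shape)    # stand-in sparse-view reconstruction
print(f"PSNR: {psnr(ref, noisy):.1f} dB")         # ~40 dB for sigma = 0.01
```

SSIM, the second metric, compares local means, variances, and covariances over a sliding window and is more involved; in practice it is usually taken from a library implementation rather than hand-rolled.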
Journal of X-Ray Science and Technology, pp. 488-498.