
Journal of X-Ray Science and Technology — Latest Articles

Research on ring artifact reduction method for CT images of nuclear graphite components.
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2025-03-01 Epub Date: 2025-01-22 DOI: 10.1177/08953996241308760
Tianchen Zeng, Jintao Fu, Peng Cong, Ximing Liu, Guangduo Xu, Yuewen Sun

Background: The supporting structure of high-temperature gas-cooled reactors (HTGR) comprises over 3000 carbon/graphite components, which must undergo computed tomography (CT) non-destructive testing before operational deployment, as per reactor technical specifications. However, CT images are frequently marred by severe ring artifacts caused by the response non-uniformity and non-linearity of detector units, which diminish the ability to detect defects effectively.

Methods: To address this issue, we propose a physics-based ring artifact reduction method for CT that employs pixel response correction. This method physically accounts for the cause of ring artifacts and leverages prior knowledge of the detected object to enhance the accuracy of the detection process.

Results: Our proposed method achieved a notable reduction in ring artifacts, as evidenced by a 37.7% decrease in ring total variation (RTV) values compared to the originals, significantly enhancing image quality. It also surpassed traditional and machine learning methods in artifact reduction while preserving image details. The lower RTV scores confirm the method's superior effectiveness in minimizing ring artifacts.

Conclusion: We believe this research contributes to improved defect inspection performance in detection systems, which is crucial for ensuring reactor safety. The method's effectiveness in mitigating ring artifacts while maintaining image quality highlights its potential impact on the reliability of non-destructive testing of HTGR components.
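The abstract reports RTV scores but does not define the metric. A minimal sketch of one plausible formulation, assuming RTV measures the total variation of the image's radial mean profile about the rotation center (ring artifacts inflate jumps between adjacent radii); the function name and definition are illustrative, not the paper's:

```python
import math

def ring_total_variation(img, cx, cy):
    # Hypothetical RTV: total variation of the radial mean intensity profile.
    # img is a 2D list of pixel values; (cx, cy) is the rotation center.
    sums, counts = {}, {}
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            r = int(round(math.hypot(x - cx, y - cy)))
            sums[r] = sums.get(r, 0.0) + v
            counts[r] = counts.get(r, 0) + 1
    radii = sorted(sums)
    profile = [sums[r] / counts[r] for r in radii]
    # Ring artifacts show up as large differences between adjacent radii.
    return sum(abs(b - a) for a, b in zip(profile, profile[1:]))
```

On a uniform image this returns 0, while an image with a bright ring yields a positive value, so a 37.7% drop in such a score would indicate smoother radial profiles after correction.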

Citations: 0
DML-MFCM: A multimodal fine-grained classification model based on deep metric learning for Alzheimer's disease diagnosis.
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2025-01-01 Epub Date: 2025-01-13 DOI: 10.1177/08953996241300023
Heng Wang, Tiejun Yang, Jiacheng Fan, Huiyao Zhang, Wenjie Zhang, Mingzhu Ji, Jianyu Miao

Background: Alzheimer's disease (AD) is a neurodegenerative disorder. There are currently no drugs or treatments that cure AD, but early intervention can delay the disease's progression. Therefore, early diagnosis of AD and mild cognitive impairment (MCI) is of great importance. Structural magnetic resonance imaging (sMRI) is widely used to reveal structural changes in the subject's brain tissue. The relatively mild structural brain changes in MCI make conversion prediction an ongoing challenge. Moreover, many multimodal AD diagnostic models proposed in recent years ignore the potential relationships between multimodal information.

Objective: To solve these problems, we propose a multimodal fine-grained classification model based on deep metric learning for AD diagnosis (DML-MFCM), which can fully exploit the fine-grained feature information of sMRI and learn the potential relationships between multimodal feature information.

Methods: First, we propose a fine-grained feature extraction module that can effectively capture the fine-grained feature information of the lesion area. Then, we introduce a multimodal cross-attention module to learn the potential relationships between multimodal data. In addition, we design a hybrid loss function based on deep metric learning. It can guide the model to learn the feature representation method between samples, which improves the model's performance in disease diagnosis.
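The abstract does not spell out the hybrid loss, only that it combines deep metric learning with disease classification. A minimal sketch under that assumption, pairing a cross-entropy term with a triplet metric term; the weighting `lam` and margin are hypothetical:

```python
import math

def cross_entropy(probs, label):
    # Negative log-likelihood of the true class.
    return -math.log(probs[label])

def sq_dist(a, b):
    # Squared Euclidean distance between two embeddings.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Pulls same-class embeddings together, pushes other-class embeddings apart.
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

def hybrid_loss(probs, label, anchor, positive, negative, lam=0.5, margin=1.0):
    # Hypothetical weighting: classification term + metric-learning term.
    return cross_entropy(probs, label) + lam * triplet_loss(anchor, positive, negative, margin)
```

The metric term shapes the embedding space so that samples of the same diagnostic class cluster together, which is what "learn the feature representation method between samples" refers to.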

Results: We extensively evaluated the proposed model on the ADNI and AIBL datasets. The ACCs for the AD vs. NC, MCI vs. NC, and sMCI vs. pMCI tasks on the ADNI dataset are 98.75%, 95.88%, and 88.00%, respectively. The ACCs for the AD vs. NC and MCI vs. NC tasks on the AIBL dataset are 94.33% and 91.67%, respectively.

Conclusions: The results demonstrate that our method has excellent performance in AD diagnosis.

Citations: 0
Quantitative elemental sensitive imaging based on K-edge subtraction tomography.
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2025-01-01 Epub Date: 2024-11-27 DOI: 10.1177/08953996241290323
Yichi Zhang, Fen Tao, Ruoyang Gao, Ling Zhang, Jun Wang, Guohao Du, Tiqiao Xiao, Biao Deng

Background: K-edge subtraction (KES) tomography has been extensively utilized in the field of elemental sensitive imaging due to its high spatial resolution, rapid acquisition, and three-dimensional (3D) imaging capabilities. However, previous studies have primarily focused on the qualitative analysis of element contents, rather than quantitative assessment.

Objective: The current study proposes a novel method for quantitative elemental analysis based on K-edge subtraction tomography.

Methods: The linear correlation between the slice grayscale of standard samples and the difference in their linear absorption coefficients is confirmed. This finding suggests that the grayscale data from slices may be employed to perform quantitative estimations of elemental compositions.
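Given the stated linear correlation between slice grayscale and absorption-coefficient difference, the quantitative step presumably amounts to calibrating a line on standard samples and inverting it. A sketch of that idea with an ordinary least-squares fit; the standard-sample values below are hypothetical:

```python
def fit_line(xs, ys):
    # Ordinary least-squares fit of y = slope * x + intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def content_from_grayscale(g, slope, intercept):
    # Invert the calibration line to estimate element content.
    return (g - intercept) / slope

# Hypothetical standards: known element contents vs. measured slice grayscale.
contents = [0.0, 1.0, 2.0, 3.0]
grays = [0.10, 0.30, 0.50, 0.70]
slope, intercept = fit_line(contents, grays)
```

With the calibration fixed, the grayscale of a slice from an unknown sample maps directly to an element-content estimate, which is where the reported <3% relative error would be assessed.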

Results: To verify the accuracy and validity of this method, the target element contents in both standard and actual samples were quantitatively analyzed. The results demonstrate that the method achieves nanometer-resolved quantitative element-sensitive imaging with a relative error of less than 3% in the target element content.

Conclusions: The method described in this paper is expected to expand the scope of applications for K-edge subtraction tomography and provide a novel approach to achieve more precise and convenient quantitative elemental analysis.

Citations: 0
Bonevoyage: Navigating the depths of osteoporosis detection with a dual-core ensemble of cascaded ShuffleNet and neural networks.
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2025-01-01 Epub Date: 2024-11-27 DOI: 10.1177/08953996241289314
Dhamodharan Srinivasan, Ajmeera Kiran, S Parameswari, Jeevanantham Vellaichamy

Background: Osteoporosis (OP) is a condition that significantly decreases bone density and strength, often remaining undetected until the occurrence of a fracture. Timely identification of OP is essential for preventing fractures, reducing morbidity, and enhancing the quality of life.

Objective: This study aims to improve the accuracy, speed, and reliability of early-stage osteoporosis detection by integrating the compact architecture of Cascaded ShuffleNet with the pattern recognition prowess of Artificial Neural Networks (ANNs).

Methods: BoneVoyage leverages the efficiency of ShuffleNet and the analytical capabilities of ANNs to extract and analyze complex features from bone density scans. The framework was trained and validated on a comprehensive dataset containing thousands of bone density images, ensuring robustness across diverse cases.
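ShuffleNet's defining operation, which underlies the compact architecture mentioned above, is the channel shuffle that lets grouped convolutions exchange information. A minimal index-level sketch of that operation (a plain list stands in for the channel axis; this illustrates the standard ShuffleNet op, not BoneVoyage's specific code):

```python
def channel_shuffle(channels, groups):
    # Reshape the channel axis to (groups, channels_per_group),
    # transpose, and flatten, so that the next grouped convolution
    # sees channels from every previous group.
    per_group = len(channels) // groups
    assert per_group * groups == len(channels), "channels must divide evenly into groups"
    return [channels[g * per_group + i] for i in range(per_group) for g in range(groups)]
```

For example, `channel_shuffle(list(range(6)), 2)` interleaves the two groups [0,1,2] and [3,4,5] into [0,3,1,4,2,5].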

Results: The model achieved an accuracy of 97.2%, with high sensitivity and specificity. These results significantly surpass those of existing OP detection methods, highlighting the effectiveness of the BoneVoyage framework in identifying the subtle bone-density changes indicative of early-stage osteoporosis.

Conclusions: BoneVoyage represents a significant advancement in the early detection of osteoporosis, offering healthcare providers a reliable tool for identifying at-risk individuals early. The early detection facilitated by BoneVoyage allows preventive measures and targeted treatments to be implemented.

Citations: 0
CT image super-resolution under the guidance of deep gradient information.
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2025-01-01 Epub Date: 2024-12-15 DOI: 10.1177/08953996241289225
Ye Shen, Ningning Liang, Xinyi Zhong, Junru Ren, Zhizhong Zheng, Lei Li, Bin Yan

Due to the hardware constraints of computed tomography (CT) imaging, acquiring high-resolution (HR) CT images in clinical settings poses a significant challenge. In recent years, convolutional neural networks have shown great potential for CT super-resolution (SR) problems. However, the reconstructions produced by many deep learning-based SR methods suffer from structural distortion and ambiguous detail. In this paper, a new SR network based on generative adversarial learning is proposed. The network consists of a gradient branch and an SR branch. The gradient branch recovers HR gradient maps, and its gradient image features are merged into the SR branch, providing gradient information to guide SR reconstruction. Further, the network's loss function combines an image-space loss with a gradient loss and a gradient variance loss to generate more realistic detail textures. Compared with other algorithms, the structural similarity index of the SR results obtained by the proposed method on simulation and experimental data increased by 1.8% and 1.4%, respectively. The experimental results demonstrate that the proposed CT SR network exhibits superior performance in structure preservation and detail restoration.
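The three-term loss described above (image space + gradient + gradient variance) can be sketched in miniature. This is an illustrative stand-in, not the paper's implementation: simple horizontal forward differences replace the learned gradient maps, and the weights `w_grad` and `w_var` are hypothetical:

```python
def forward_diff(img):
    # Horizontal forward differences as a stand-in for a gradient map.
    return [[row[x + 1] - row[x] for x in range(len(row) - 1)] for row in img]

def l1(a, b):
    # Sum of absolute differences between two 2D arrays.
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def variance(img):
    vals = [v for row in img for v in row]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def sr_loss(sr, hr, w_grad=0.1, w_var=0.05):
    # Image-space L1 + gradient L1 + gradient-variance term (weights hypothetical).
    g_sr, g_hr = forward_diff(sr), forward_diff(hr)
    return l1(sr, hr) + w_grad * l1(g_sr, g_hr) + w_var * abs(variance(g_sr) - variance(g_hr))
```

The gradient terms penalize the smoothed-out edges that plain pixel losses tolerate, which is the intent behind the gradient-guidance design.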

Citations: 0
Spine X-ray image segmentation based on deep learning and marker controlled watershed.
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2025-01-01 Epub Date: 2024-12-19 DOI: 10.1177/08953996241299998
Yating Xiao, Yan Chen, Yong Zhang, Runjie Zhang, Guangyu Cui, Yufeng Song, Quan Zhang

Background: The development of automatic methods for vertebral segmentation provides the objective analysis of each vertebra in the spine image, which is important for the diagnosis of various spinal diseases. However, vertebrae have inter-class similarity and intra-class variability, and some adjacent vertebrae exhibit adhesion.

Objective: To solve the adhesion problem of adjacent vertebrae and ensure that the boundary between adjacent vertebrae can be accurately demarcated, we propose an image segmentation method based on deep learning and marker controlled watershed.

Methods: This method uses a dual-path model, with a localization path and a segmentation path, to achieve automatic vertebral segmentation. For the vertebral localization path, a high-resolution network (HRNet) is used to locate vertebral centers. Moreover, based on spine posture, a new bone direction loss (BD-Loss) is designed to constrain HRNet. For the vertebral segmentation path, we propose a VU-Net network for preliminary vertebral segmentation. Additionally, a position information perception module (PIPM) is introduced so that HRNet can guide VU-Net. Finally, in a novel step, we use the outputs of the HRNet and VU-Net deep learning networks to initialize the marker-controlled watershed algorithm, suppressing the adhesion of adjacent vertebrae and achieving fine vertebral segmentation.
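Marker-controlled watershed, the final step above, floods the image from seed markers so each region grows until it meets a neighbor at an intensity ridge. A minimal priority-flood sketch of the idea; in the paper's pipeline the markers would come from the network-predicted vertebral centers, whereas here they are hard-coded for illustration:

```python
import heapq

def marker_watershed(img, markers):
    # img: 2D list of intensities; markers: dict mapping (y, x) -> label.
    # Floods outward from markers in order of increasing intensity, so
    # regions meet at ridges (local intensity maxima) between them.
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    heap = []
    for (y, x), lab in markers.items():
        labels[y][x] = lab
        heapq.heappush(heap, (img[y][x], y, x))
    while heap:
        _, y, x = heapq.heappop(heap)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] == 0:
                labels[ny][nx] = labels[y][x]
                heapq.heappush(heap, (img[ny][nx], ny, nx))
    return labels
```

Seeding one marker per vertebra keeps adhering vertebrae from merging, since each flood stops where it collides with the neighbor's flood at the boundary ridge.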

Results: The proposed method was evaluated on two spine X-ray datasets using four metrics. The first dataset contains sagittal images of the cervical spine, while the second dataset contains coronal images of the whole spine, both with different health conditions. Our method achieved Recall of 96.82% and 94.38%, Precision of 97.24% and 98.14%, Dice coefficient of 97.03% and 96.22%, Intersection over Union of 94.24% and 92.72% on the cervical spine and whole spine datasets respectively, outperforming current state-of-the-art techniques.

Citations: 0
Enhancing brain tumor classification by integrating radiomics and deep learning features: A comprehensive study utilizing ensemble methods on MRI scans.
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2025-01-01 Epub Date: 2024-12-09 DOI: 10.1177/08953996241299996
Liang Yin, Jing Wang

Background and objective: This study aims to assess the effectiveness of combining radiomics features (RFs) with deep learning features (DFs) for classifying brain tumors (Glioma, Meningioma, and Pituitary Tumor) using MRI scans and advanced ensemble learning techniques.

Methods: A total of 3064 T1-weighted contrast-enhanced brain MRI scans were analyzed. RFs were extracted using Pyradiomics, while DFs were obtained from a 3D convolutional neural network (CNN). These features were used both individually and together to train a range of machine learning models, including Support Vector Machines (SVM), Decision Trees (DT), Random Forests (RF), AdaBoost, Bagging, k-Nearest Neighbors (KNN), and Multi-Layer Perceptrons (MLP). To enhance the accuracy of these models, ensemble approaches such as Stacking, Voting, and Boosting were employed. LASSO feature selection and five-fold cross-validation were utilized to ensure the models' robustness.
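Of the ensemble strategies listed above, soft voting is the simplest to make concrete: each model outputs class probabilities, and the ensemble averages them before picking the winner. A minimal sketch (the probability vectors and equal weights are hypothetical; the study's actual ensembles also include Stacking and Boosting):

```python
def soft_vote(model_probs, weights=None):
    # model_probs: one per-class probability vector per model.
    # Returns (predicted class index, averaged probability vector).
    n_models = len(model_probs)
    weights = weights or [1.0 / n_models] * n_models
    n_classes = len(model_probs[0])
    avg = [sum(w * p[c] for w, p in zip(weights, model_probs)) for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg
```

For three models scoring a scan as [0.7, 0.2, 0.1], [0.2, 0.6, 0.2], and [0.6, 0.3, 0.1] over the three tumor classes, the averaged vector favors class 0 even though one model disagreed.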

Results: The results demonstrated that combining RFs and DFs significantly improved the model's performance compared to using either feature set alone. The best performance was achieved using the combined RF + DF approach with ensemble methods, particularly Boosting, which resulted in an accuracy of 95.0%, an AUC of 0.92, a sensitivity of 88%, and a specificity of 90%. Conversely, models utilizing only RFs or DFs showed lower performance, with RFs reaching an AUC of 0.82 and DFs achieving an AUC of 0.85.

Conclusion: The integration of RFs and DFs, along with advanced ensemble methods, significantly improves the accuracy and reliability of brain tumor classification using MRI. This approach shows strong clinical potential, with opportunities for further enhancing generalizability and precision through additional MRI sequences and advanced machine learning techniques.

背景与目的:本研究旨在评估将放射组学特征(rf)与深度学习特征(df)结合使用MRI扫描和先进的集成学习技术对脑肿瘤(特别是胶质瘤、脑膜瘤和垂体瘤)进行分类的有效性。方法:对3064张t1加权脑MRI增强扫描图进行分析。通过Pyradiomics提取rf,而通过3D卷积神经网络(CNN)获得df。这些特征被单独或一起用于训练一系列机器学习模型,包括支持向量机(SVM)、决策树(DT)、随机森林(RF)、AdaBoost、Bagging、k-近邻(KNN)和多层感知器(MLP)。为了提高这些模型的准确性,采用了堆叠、投票和Boosting等集成方法。利用LASSO特征选择和五重交叉验证来保证模型的鲁棒性。结果:结果表明,与单独使用任何一种特征集相比,RFs和DFs的结合显著提高了模型的性能。结合RF + DF方法和集合方法获得了最好的性能,特别是Boosting方法,其准确度为95.0%,AUC为0.92,灵敏度为88%,特异性为90%。相反,仅使用RFs或DFs的模型表现出较低的性能,RFs达到0.82的AUC, DFs达到0.85的AUC。结论:RFs和DFs的整合,结合先进的集成方法,显著提高了MRI对脑肿瘤分类的准确性和可靠性。该方法显示出强大的临床潜力,并有机会通过额外的MRI序列和先进的机器学习技术进一步提高通用性和准确性。
Citations: 0
The usefulness of X-ray output management in general radiography systems using exposure index.
IF 1.4 Tier 3 Medicine Q3 INSTRUMENTS & INSTRUMENTATION Pub Date : 2025-01-01 Epub Date: 2025-01-13 DOI: 10.1177/08953996241299994
Kazuhiro Ogasawara, Shinya Ohwada, Rie Tachibana, Katsuhiko Ogasawara

Purpose: The periodic quality control of X-ray devices is important for obtaining optimal medical images and determining the appropriate X-ray exposure dose. However, measurement of the X-ray output is constrained by time, technical aspects, and expense. Therefore, we investigated the usefulness of a simple method for managing X-ray output using an Exposure Index (EI).

Methods: The entire surface of the flat panel detector was X-ray-irradiated every Friday at the time of end-of-work inspection under the condition that the recorded EI was approximately 1000. The EI and exposure dose were measured, and the linearity and accuracy were evaluated.

Results: The output gradually decreased from the start of the measurements in Room 1 and stabilized after the output was adjusted. The relationship between exposure dose and EI showed high linearity, with R² > 0.99, and the CV of EI was less than 2.41%, indicating high reproducibility.

Conclusions: We demonstrated that the results of constancy tests can be easily quantified using EI. The EI method can manage the X-ray output with good reproducibility.
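The two figures of merit reported above, the R² of the dose-to-EI fit and the CV of repeated EI readings, follow from their standard definitions. A minimal sketch (the function names are ours, not the paper's):

```python
import numpy as np

def linearity_r2(dose, ei):
    """R-squared of a straight-line fit of EI against exposure dose."""
    slope, intercept = np.polyfit(dose, ei, 1)
    pred = slope * np.asarray(dose, float) + intercept
    ss_res = np.sum((ei - pred) ** 2)
    ss_tot = np.sum((ei - np.mean(ei)) ** 2)
    return 1.0 - ss_res / ss_tot

def coefficient_of_variation(ei):
    """CV (%) of repeated EI readings, a reproducibility measure."""
    ei = np.asarray(ei, float)
    return 100.0 * ei.std(ddof=1) / ei.mean()
```

A perfectly linear dose-EI relationship gives R² = 1, and identical repeated readings give CV = 0%.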

Citations: 0
Quantitative analysis of deep learning reconstruction in CT angiography: Enhancing CNR and reducing dose.
IF 1.4 Tier 3 Medicine Q3 INSTRUMENTS & INSTRUMENTATION Pub Date : 2025-01-01 Epub Date: 2024-12-18 DOI: 10.1177/08953996241301696
Chang-Lae Lee

Background: Computed tomography angiography (CTA) provides significant image-quality information in vascular imaging, offering high-resolution images despite the disadvantages of increased radiation dose and contrast agent-related side effects. Deep learning image reconstruction strategies were used to quantitatively evaluate the enhanced contrast-to-noise ratio (CNR) and the dose reduction effect of subtracted images.

Objective: This study aimed to provide a comprehensive, quantitative comparison of the image quality of conventional filtered back projection (FBP) and the advanced intelligent clear-IQ engine (AiCE), a deep learning reconstruction technique. The comparison was made on subtracted images with variable contrast agent concentrations at variable tube currents and voltages.

Methods: Data were obtained using a state-of-the-art 320-detector CT scanner. Image reconstruction was performed using FBP and AiCE with various intensities. The image quality evaluation was based on eight iodine concentrations in the phantom setup. The efficiency of AiCE relative to FBP was assessed by computing parameters including the root mean square error (RMSE), dose-dependent CNR, and potential dose reduction.

Results: The results showed that elevated concentrations of iodine and increased tube currents improved AiCE performance regarding CNR enhancement compared to FBP. AiCE also demonstrated a potential dose reduction ranging from 13.7 to 81.9% compared to FBP, suggesting a significant reduction in radiation exposure while maintaining image quality.

Conclusions: The employment of deep learning image reconstruction with AiCE presented a significant improvement in CNR and potential dose reduction in CT angiography. This study highlights the potential of AiCE to improve vascular image quality and decrease radiation exposure risk, thereby improving diagnostic precision and patient care in vascular imaging practices.
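A minimal sketch of the reported metrics, under assumptions the abstract does not spell out: CNR defined as the mean-signal difference over background noise, and potential dose reduction derived from the common model that CNR scales with the square root of dose.

```python
import numpy as np

def cnr(signal_roi, background):
    """Contrast-to-noise ratio: absolute mean-signal difference
    divided by the background noise (sample standard deviation)."""
    signal_roi = np.asarray(signal_roi, float)
    background = np.asarray(background, float)
    return abs(signal_roi.mean() - background.mean()) / background.std(ddof=1)

def potential_dose_reduction(cnr_fbp, cnr_dl):
    """Dose fraction saved at matched CNR, assuming CNR grows with
    the square root of dose: 1 - (CNR_fbp / CNR_dl)**2."""
    return 1.0 - (cnr_fbp / cnr_dl) ** 2
```

Under that scaling model, doubling the CNR at a fixed dose corresponds to a 75% potential dose reduction at matched CNR.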

Citations: 0
A study on CT detection image generation based on decompound synthesize method.
IF 1.4 Tier 3 Medicine Q3 INSTRUMENTS & INSTRUMENTATION Pub Date : 2025-01-01 Epub Date: 2024-12-16 DOI: 10.1177/08953996241296249
Jintao Fu, Renjie Liu, Tianchen Zeng, Peng Cong, Ximing Liu, Yuewen Sun

Background: Nuclear graphite and carbon components are vital structural elements in the cores of high-temperature gas-cooled reactors (HTGR), serving crucial roles in neutron reflection, moderation, and insulation. The structural integrity and stable operation of these reactors heavily depend on the quality of these components. Helical Computed Tomography (CT) technology provides a method for detecting and intelligently identifying defects within these structures. However, the scarcity of defect datasets limits the performance of deep learning-based detection algorithms due to small sample sizes and class imbalance.

Objective: Given the limited number of actual CT reconstruction images of components and the sparse distribution of defects, this study aims to address the challenges of small sample sizes and class imbalance in defect detection model training by generating approximate CT reconstruction images to augment the defect detection training dataset.

Methods: We propose a novel CT detection image generation algorithm called the Decompound Synthesize Method (DSM), which decomposes the image generation process into three steps: model conversion, background generation, and defect synthesis. First, STL files of various industrial components are converted into voxel data, which undergo forward projection and image reconstruction to obtain corresponding CT images. Next, the Contour-CycleGAN model is employed to generate synthetic images that closely resemble actual CT images. Finally, defects are randomly sampled from an existing defect library and added to the images using the Copy-Adjust-Paste (CAP) method. These steps significantly expand the training dataset with images that closely mimic actual CT reconstructions.

Results: Experimental results validate the effectiveness of the proposed image generation method in defect detection tasks. Datasets generated using DSM exhibit greater similarity to actual CT images, and when combined with original data for training, these datasets enhance defect detection accuracy compared to using only the original images.

Conclusion: The DSM shows promise in addressing the challenges of small sample sizes and class imbalance. Future research can focus on further optimizing the generation algorithm and refining the model structure to enhance the performance and accuracy of defect detection models.
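The Copy-Adjust-Paste (CAP) step described above can be sketched as pasting a defect patch into a background slice with a simple intensity adjustment. The abstract does not specify the adjustment, so the gain factor and the function name here are assumptions:

```python
import numpy as np

def copy_adjust_paste(background, defect, top, left, gain=1.0):
    """Paste a defect patch into a background CT slice: additive blend
    with an intensity gain (the 'adjust' step), clipped to [0, 1]."""
    out = background.astype(float)
    h, w = defect.shape
    out[top:top + h, left:left + w] += gain * defect
    return np.clip(out, 0.0, 1.0)
```

In the full DSM pipeline the defect would be drawn at random from the defect library and placed at a random valid location inside the component region.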

Citations: 0