
Journal of X-Ray Science and Technology — Latest Publications

Research on the effectiveness of multi-view slice correction strategy based on deep learning in high pitch helical CT reconstruction.
IF 1.7 | CAS Tier 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2024-12-19 | DOI: 10.3233/XST-240128
Zihan Deng, Zhisheng Wang, Legeng Lin, Demin Jiang, Junning Cui, Shunli Wang

Background: Recent studies have explored layered correction strategies, employing a slice-by-slice approach to mitigate the prominent limited-view artifacts present in reconstructed images from high-pitch helical CT scans. However, challenges persist in determining the angles, quantity, and sequencing of slices.

Objective: This study aims to explore the optimal slicing method for high pitch helical scanning 3D reconstruction. We investigate the impact of slicing angle, quantity, order, and model on correction effectiveness, aiming to offer valuable insights for the clinical application of deep learning methods.

Methods: In this study, we constructed and developed a series of data-driven slice correction strategies for 3D high pitch helical CT images using slice theory, and conducted extensive experiments by adjusting the order, increasing the number, and replacing the model.

Results: The experimental results indicate that indiscriminately augmenting the number of correction directions does not significantly enhance the quality of 3D reconstruction. Instead, optimal reconstruction outcomes are attained by aligning the final corrected slice direction with the observation direction.

Conclusions: The data-driven slice correction strategy can effectively suppress artifacts in high-pitch helical scanning. Increasing the number of slices does not significantly improve reconstruction quality; instead, the best reconstruction is achieved by keeping the final correction angle consistent with the observation angle.

Citations: 0
Industrial digital radiographic image denoising based on improved KBNet.
IF 1.7 | CAS Tier 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2024-12-19 | DOI: 10.3233/XST-240125
HuaXia Zhang, ShiBo Jiang, YueWen Sun, ZeHuan Zhang, Shuo Xu

 Industrial digital radiography (DR) images are essential for industrial inspections, but they often suffer from strong scatter, cross-talk, electronic noise, and other factors that affect image quality. The presence of non-zero mean noise and neighborhood correlation loss in 1D array scanning poses significant challenges for denoising. To enhance the denoising process of industrial DR images and address the issues of low resolution and noise, we propose an improved KBNet (iKBNet) that incorporates lightweight modifications and introduces novel elements to the original KBNet. The iKBNet introduces the Convolutional Block Attention Module (CBAM) to reduce the network's parameter count. Additionally, it utilizes the Structural Similarity Index (SSIM) loss as part of a composite loss function to improve denoising performance. The proposed method demonstrates superior denoising results, with image restoration quality metrics that surpass those of commonly used methods such as BM3D, ResNet, DnCNN, and the original KBNet. In practical applications with low-resolution transmission images, the iKBNet has produced satisfactory outputs. The results indicate that the iKBNet not only minimizes computational cost and enhances processing speed but also achieves better denoising results. This suggests the potential of iKBNet for processing noisy digital radiographic images in industrial settings. The iKBNet shows promise in improving the quality of industrial DR images affected by noise, offering a viable solution for industrial image processing needs.
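The composite loss described above pairs a pixel-wise term with SSIM. The exact weighting used in iKBNet is not given in the abstract, so the sketch below is a minimal illustration of the general idea, assuming a simple α-weighted sum of MSE and (1 − SSIM), with a single-window SSIM rather than the usual sliding-window version:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Simplified SSIM computed over the whole image (no sliding window)."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def composite_loss(pred, target, alpha=0.5):
    """Hypothetical composite loss: alpha * MSE + (1 - alpha) * (1 - SSIM).
    The actual iKBNet weighting is not specified in the abstract."""
    mse = np.mean((pred - target) ** 2)
    return alpha * mse + (1 - alpha) * (1 - ssim_global(pred, target))
```

For identical images the loss is zero (MSE = 0, SSIM = 1); any intensity or structural mismatch raises it through one of the two terms.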

Citations: 0
A reconstruction method for ptychography based on residual dense network.
IF 1.7 | CAS Tier 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2024-12-18 | DOI: 10.3233/XST-240114
Mengnan Liu, Yu Han, Xiaoqi Xi, Lei Li, Zijian Xu, Xiangzhi Zhang, Linlin Zhu, Bin Yan

Background: Coherent diffraction imaging (CDI) is an important lens-free imaging method. As a variant of CDI, ptychography enables the imaging of objects with arbitrary lateral sizes. However, traditional phase retrieval methods are time-consuming for ptychographic imaging of large-size objects, e.g., integrated circuits (IC). Especially when ptychography is combined with computed tomography (CT) or computed laminography (CL), time consumption increases greatly.

Objective: In this work, we aim to propose a new deep learning-based approach to implement a quick and robust reconstruction of ptychography.

Methods: Inspired by the strong advantages of the residual dense network for computer vision tasks, we propose a dense residual two-branch network (RDenPtycho) based on the ptychography two-branch reconstruction architecture for the fast and robust reconstruction of ptychography. The network relies on the residual dense block to construct mappings from diffraction patterns to amplitudes and phases. In addition, we integrate the physical processes of ptychography into the training of the network to further improve the performance.

Results: The proposed RDenPtycho is evaluated using the publicly available ptychography dataset from the Advanced Photon Source. The results show that the proposed method can faithfully and robustly recover the detailed information of the objects. Ablation experiments demonstrate the effectiveness of the components in the proposed method for performance enhancement.

Significance: The proposed method enables fast, accurate, and robust reconstruction of ptychography, and is of potential significance for 3D ptychography. The proposed method and experiments can resolve similar problems in other fields.
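The residual dense block at the core of RDenPtycho combines dense connectivity (each layer sees all earlier outputs) with local feature fusion and a local residual. The sketch below illustrates only that connectivity pattern, with plain linear layers standing in for convolutions and all shapes hypothetical — it is not the authors' network:

```python
import numpy as np

def dense_layer(x_concat, w):
    """Stand-in for one conv layer: linear mix of all prior features + ReLU."""
    return np.maximum(x_concat @ w, 0.0)

def residual_dense_block(x, weights, fusion_w):
    """Dense connectivity: each layer receives the concatenation of the block
    input and every earlier layer's output; a final fusion layer projects back
    to the input width, and a local residual closes the block."""
    feats = [x]
    for w in weights:
        feats.append(dense_layer(np.concatenate(feats, axis=-1), w))
    fused = np.concatenate(feats, axis=-1) @ fusion_w  # local feature fusion
    return x + fused                                   # local residual learning
```

With input width C and growth rate G, layer i takes C + i*G features and emits G, so the fusion weight maps C + L*G features back to C, keeping the block's output shape equal to its input shape.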

Citations: 0
A fully linearized ADMM algorithm for optimization based image reconstruction.
IF 1.7 | CAS Tier 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2024-12-18 | DOI: 10.3233/XST-240029
Zhiwei Qiao, Gage Redler, Boris Epel, Howard Halpern

Background and objective: Optimization-based image reconstruction is an advanced approach in medical imaging. However, solving the resulting model is challenging because it is usually large-scale and non-smooth. This work aims to devise a simple and convergent solver for such optimization models.

Methods: The alternating direction method of multipliers (ADMM) is a simple and effective solver for the optimization model. However, there is always a sub-problem that has no closed-form solution. One may use a gradient descent algorithm to solve this sub-problem, but step-size selection via line search is time-consuming. Alternatively, one may use the fast Fourier transform (FFT) to obtain a closed-form solution if the sparse transform matrix has a special structure. In this work, we propose a fully linearized ADMM (FL-ADMM) algorithm that avoids line search for the step-size and applies to sparse transforms of any structure.

Results: We derive the FL-ADMM algorithm instances for three total variation (TV) models in 2D computed tomography (CT). Further, we validate and evaluate one FL-ADMM algorithm and explore how two important factors impact convergence rate. These studies show that the FL-ADMM algorithm may accurately solve the optimization model.

Conclusion: The FL-ADMM algorithm is a simple, effective, convergent, and universal solver for optimization models in image reconstruction. Compared to the standard ADMM algorithm, the new algorithm needs neither time-consuming step-size line search nor special requirements on the sparse transform. It is a rapid prototyping tool for optimization-based image reconstruction.
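The key point of the abstract — linearizing the quadratic terms so every sub-problem has a closed-form update, with no line search and no structural demand on the sparse transform — can be illustrated on a generic model min 0.5‖Ax−b‖² + λ‖Dx‖₁. The sketch below is a textbook linearized ADMM, not the authors' exact FL-ADMM instance for TV-regularized CT:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the closed-form prox of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_admm(A, b, D, lam=0.1, rho=1.0, mu=None, iters=500):
    """Generic linearized ADMM for min 0.5||Ax-b||^2 + lam*||Dx||_1.
    The x-subproblem is replaced by a single fixed-size gradient step
    (linearization), so no line search and no FFT-invertible structure
    of D is required."""
    m, n = D.shape[0], A.shape[1]
    x, z, u = np.zeros(n), np.zeros(m), np.zeros(m)
    if mu is None:  # fixed step bound ensuring convergence
        mu = np.linalg.norm(A, 2) ** 2 + rho * np.linalg.norm(D, 2) ** 2
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + rho * D.T @ (D @ x - z + u)
        x = x - grad / mu                 # linearized x-update, no line search
        z = soft(D @ x + u, lam / rho)    # closed-form z-update
        u = u + D @ x - z                 # dual ascent
    return x
```

When A and D are both the identity, the model reduces to L1-regularized denoising, whose exact solution is soft(b, λ) — a convenient sanity check for the iteration.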

Citations: 0
Can AI generate diagnostic reports for radiologist approval on CXR images? A multi-reader and multi-case observer performance study.
IF 1.7 | CAS Tier 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2024-10-16 | DOI: 10.3233/XST-240051
Lin Guo, Li Xia, Qiuting Zheng, Bin Zheng, Stefan Jaeger, Maryellen L Giger, Jordan Fuhrman, Hui Li, Fleming Y M Lure, Hongjun Li, Li Li

Background: Accurately detecting a variety of lung abnormalities from heterogeneous chest X-ray (CXR) images and writing radiology reports is often difficult and time-consuming.

Objective: To assess the utility of a novel artificial intelligence (AI) system (MOM-ClaSeg) in enhancing the accuracy and efficiency of radiologists in detecting heterogeneous lung abnormalities through a multi-reader and multi-case (MRMC) observer performance study.

Methods: Over 36,000 CXR images were retrospectively collected from 12 hospitals over 4 months and divided into an experiment group and a control group. In the control group, a double-reading method was used in which two radiologists interpreted each CXR to generate the final report, while in the experiment group, one radiologist generated the final report based on AI-generated reports.

Results: Compared with double reading, the diagnostic accuracy and sensitivity of single reading with AI increases significantly by 1.49% and 10.95%, respectively (P <  0.001), while the difference in specificity is small (0.22%) and without statistical significance (P = 0.255). Additionally, the average image reading and diagnostic time in the experimental group is reduced by 54.70% (P <  0.001).

Conclusion: This MRMC study demonstrates that MOM-ClaSeg can potentially serve as the first reader to generate the initial diagnostic reports, with a radiologist only reviewing and making minor modifications (if needed) to arrive at the final decision. It also shows that single reading with AI can achieve a higher diagnostic accuracy and efficiency than double reading.
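The accuracy, sensitivity, and specificity differences reported above are standard confusion-matrix metrics. A minimal sketch of how such per-reader metrics are computed (the counts below are purely hypothetical, not the study's data):

```python
def reader_metrics(tp, fp, tn, fn):
    """Observer-performance metrics from a binary confusion matrix:
    sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    accuracy = (TP+TN)/total."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + tn + fn)
    return sens, spec, acc

# Hypothetical counts for one reading arm of an MRMC comparison.
sens, spec, acc = reader_metrics(tp=90, fp=5, tn=95, fn=10)
```

In an MRMC design these metrics are computed per reader and per arm (double reading vs. single reading with AI), and the arm-level differences are then tested for statistical significance.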

Citations: 0
ACU-TransNet: Attention and convolution-augmented UNet-transformer network for polyp segmentation.
IF 1.7 | CAS Tier 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2024-10-12 | DOI: 10.3233/XST-240076
Lei Huang, Yun Wu

Background: UNet has achieved great success in medical image segmentation. However, due to the inherent locality of convolution operations, UNet is deficient in capturing global features and long-range dependencies of polyps, resulting in less accurate polyp recognition for complex morphologies and backgrounds. Transformers, with their sequential operations, are better at perceiving global features but lack low-level details, leading to limited localization ability. If the advantages of both architectures can be effectively combined, the accuracy of polyp segmentation can be further improved.

Methods: In this paper, we propose an attention and convolution-augmented UNet-Transformer Network (ACU-TransNet) for polyp segmentation. This network is composed of the comprehensive attention UNet and the Transformer head, sequentially connected by the bridge layer. On the one hand, the comprehensive attention UNet enhances specific feature extraction through deformable convolution and channel attention in the first layer of the encoder and achieves more accurate shape extraction through spatial attention and channel attention in the decoder. On the other hand, the Transformer head supplements fine-grained information through convolutional attention and acquires hierarchical global characteristics from the feature maps.

Results: ACU-TransNet could comprehensively learn dataset features and enhance colonoscopy interpretability for polyp detection.

Conclusion: Experimental results on the CVC-ClinicDB and Kvasir-SEG datasets demonstrate that ACU-TransNet outperforms existing state-of-the-art methods, showcasing its robustness.
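The channel and spatial attention mechanisms described in the Methods follow the general CBAM pattern: a per-channel gate from globally pooled statistics, and a per-pixel gate from channel-pooled maps. The sketch below shows only that gating structure, with hypothetical shapes and a mean in place of CBAM's 7×7 convolution — it is not the ACU-TransNet implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Global average pool over H,W -> small MLP -> per-channel gate in (0,1).
    feat: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    s = feat.mean(axis=(1, 2))                    # squeeze: (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))  # excite:  (C,)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    """Channel-wise average and max maps -> per-pixel gate in (0,1).
    A simple mean stands in for CBAM's 7x7 conv over the two maps."""
    avg, mx = feat.mean(axis=0), feat.max(axis=0)
    gate = sigmoid((avg + mx) / 2.0)
    return feat * gate[None, :, :]
```

Both gates multiply the feature map element-wise, so the output shape always matches the input shape, which is what lets these modules be dropped into an encoder or decoder stage without altering the surrounding architecture.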

Citations: 0
Feasibility study of YSO/SiPM based detectors for virtual monochromatic image synthesis.
IF 1.7 | CAS Tier 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2024-09-28 | DOI: 10.3233/XST-240039
Du Zhang, Bin Wu, Daoming Xi, Rui Chen, Peng Xiao, Qingguo Xie

Background: The development of photon-counting CT systems has focused on semiconductor detectors like cadmium zinc telluride (CZT) and cadmium telluride (CdTe). However, these detectors face high costs and charge-sharing issues, distorting the energy spectrum. Indirect detection using Yttrium Orthosilicate (YSO) scintillators with silicon photomultiplier (SiPM) offers a cost-effective alternative with high detection efficiency, low dark count rate, and high sensor gain.

Objective: This work aims to demonstrate the feasibility of the YSO/SiPM detector (DexScanner L103) based on the Multi-Voltage Threshold (MVT) sampling method as a photon-counting CT detector by evaluating the synthesis error of virtual monochromatic images.

Methods: In this study, we developed a proof-of-concept benchtop photon-counting CT system and employed a direct method for empirical virtual monochromatic image synthesis (EVMIS) by polynomial fitting under the least-squares criterion, without X-ray spectral information. The accuracy of the empirical energy calibration techniques was evaluated by comparing the reconstructed and actual attenuation coefficients of calibration and test materials using mean relative error (MRE) and mean square error (MSE).

Results: In dual-material imaging experiments, the overall average synthesis error for three monoenergetic images of distinct materials is 2.53% ±2.43%. Similarly, in K-edge imaging experiments encompassing four materials, the overall average synthesis error for three monoenergetic images is 4.04% ±2.63%. In rat biological soft-tissue imaging experiments, we further predicted the densities of various rat tissues as follows: bone density is 1.41±0.07 g/cm³, adipose tissue density is 0.91±0.06 g/cm³, heart tissue density is 1.09±0.04 g/cm³, and lung tissue density is 0.32±0.07 g/cm³. Those results showed that the reconstructed virtual monochromatic images had good conformance for each material.

Conclusion: This study indicates the SiPM-based photon-counting detector could be used for monochromatic image synthesis and is a promising method for developing spectral computed tomography systems.
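The empirical synthesis step above fits a polynomial, in the least-squares sense, from detector measurements to monochromatic attenuation, and scores it with MRE and MSE. The sketch below is a deliberately simplified one-variable surrogate (single measurement per sample rather than the system's multiple energy bins), only to show the calibration-and-scoring pattern:

```python
import numpy as np

def fit_vmi_polynomial(measurements, mono_mu, degree=2):
    """Least-squares polynomial surrogate mapping a detector measurement to a
    monochromatic attenuation value (simplified single-bin illustration)."""
    V = np.vander(measurements, degree + 1)       # columns: x^d ... x, 1
    coef, *_ = np.linalg.lstsq(V, mono_mu, rcond=None)
    return coef                                   # usable with np.polyval

def mre_mse(pred, truth):
    """The two error metrics used in the abstract."""
    mre = np.mean(np.abs(pred - truth) / np.abs(truth))
    mse = np.mean((pred - truth) ** 2)
    return mre, mse
```

Calibration materials with known attenuation provide the (measurement, mono_mu) pairs for the fit; held-out test materials then yield the MRE/MSE figures reported in the Results.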

{"title":"Feasibility study of YSO/SiPM based detectors for virtual monochromatic image synthesis.","authors":"Du Zhang, Bin Wu, Daoming Xi, Rui Chen, Peng Xiao, Qingguo Xie","doi":"10.3233/XST-240039","DOIUrl":"https://doi.org/10.3233/XST-240039","url":null,"abstract":"<p><strong>Background: </strong>The development of photon-counting CT systems has focused on semiconductor detectors like cadmium zinc telluride (CZT) and cadmium telluride (CdTe). However, these detectors face high costs and charge-sharing issues, distorting the energy spectrum. Indirect detection using Yttrium Orthosilicate (YSO) scintillators with silicon photomultiplier (SiPM) offers a cost-effective alternative with high detection efficiency, low dark count rate, and high sensor gain.</p><p><strong>Objective: </strong>This work aims to demonstrate the feasibility of the YSO/SiPM detector (DexScanner L103) based on the Multi-Voltage Threshold (MVT) sampling method as a photon-counting CT detector by evaluating the synthesis error of virtual monochromatic images.</p><p><strong>Methods: </strong>In this study, we developed a proof-of-concept benchtop photon-counting CT system, and employed a direct method for empirical virtual monochromatic image synthesis (EVMIS) by polynomial fitting under the principle of least square deviation without X-ray spectral information. The accuracy of the empirical energy calibration techniques was evaluated by comparing the reconstructed and actual attenuation coefficients of calibration and test materials using mean relative error (MRE) and mean square error (MSE).</p><p><strong>Results: </strong>In dual-material imaging experiments, the overall average synthesis error for three monoenergetic images of distinct materials is 2.53% ±2.43%. Similarly, in K-edge imaging experiments encompassing four materials, the overall average synthesis error for three monoenergetic images is 4.04% ±2.63%. 
In rat biological soft-tissue imaging experiments, we further predicted the densities of various rat tissues as follows: bone density is 1.41±0.07 g/cm3, adipose tissue density is 0.91±0.06 g/cm3, heart tissue density is 1.09±0.04 g/cm3, and lung tissue density is 0.32±0.07 g/cm3. Those results showed that the reconstructed virtual monochromatic images had good conformance for each material.</p><p><strong>Conclusion: </strong>This study indicates the SiPM-based photon-counting detector could be used for monochromatic image synthesis and is a promising method for developing spectral computed tomography systems.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142373368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
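The empirical virtual monochromatic image synthesis (EVMIS) described in this abstract — a direct polynomial fit, under a least-squares criterion and without any X-ray spectral information — can be sketched on synthetic data. Everything below (bin responses, noise level, polynomial terms) is an illustrative assumption, not the paper's actual calibration:

```python
import numpy as np

# Synthetic stand-in for EVMIS: fit a polynomial in the energy-bin
# measurements that reproduces a target monochromatic attenuation,
# by least squares and without any X-ray spectral model.
rng = np.random.default_rng(0)
n = 200
mu_true = rng.uniform(0.1, 1.0, n)          # "true" monochromatic attenuation
# three hypothetical energy-bin responses (illustrative nonlinearities)
bins = np.stack([1.2 * mu_true + 0.05 * mu_true**2,
                 1.0 * mu_true,
                 0.8 * mu_true - 0.03 * mu_true**2], axis=1)
bins += rng.normal(0, 1e-3, bins.shape)     # measurement noise

def design(b):
    """Polynomial design matrix in the bin measurements."""
    b1, b2, b3 = b.T
    return np.stack([np.ones_like(b1), b1, b2, b3,
                     b1**2, b2**2, b3**2], axis=1)

coef, *_ = np.linalg.lstsq(design(bins), mu_true, rcond=None)
mu_hat = design(bins) @ coef

mre = 100 * np.mean(np.abs(mu_hat - mu_true) / mu_true)  # mean relative error, %
print(f"mean relative error: {mre:.3f}%")
```

On this toy problem the fitted polynomial recovers the monochromatic values to well under 1% mean relative error, which is the same error measure (MRE) the study reports for its calibration and test materials.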
Adaptive prior image constrained total generalized variation for low-dose dynamic cerebral perfusion CT reconstruction.
IF 1.7 Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date : 2024-09-18 DOI: 10.3233/XST-240104
Shanzhou Niu, Shuo Li, Shuyan Huang, Lijing Liang, Sizhou Tang, Tinghua Wang, Gaohang Yu, Tianye Niu, Jing Wang, Jianhua Ma

Background: Dynamic cerebral perfusion CT (DCPCT) can provide valuable insight into cerebral hemodynamics by visualizing changes in blood flow within the brain. However, the high radiation dose associated with the standard DCPCT scanning protocol has raised great concern among patients and radiation physicists. Minimizing x-ray exposure to patients has therefore been a major effort in DCPCT examinations. A simple and cost-effective approach to low-dose DCPCT imaging is to lower the x-ray tube current during data acquisition. However, the image quality of low-dose DCPCT is degraded by excessive quantum noise.

Objective: To obtain high-quality DCPCT images, we present a statistical iterative reconstruction (SIR) algorithm based on penalized weighted least squares (PWLS) using adaptive prior image constrained total generalized variation (APICTGV) regularization (PWLS-APICTGV).

Methods: APICTGV regularization uses the precontrast scanned high-quality CT image as an adaptive structural prior for low-dose PWLS reconstruction. Thus, the image quality of low-dose DCPCT is improved while essential features of the target image are well preserved. An alternating optimization algorithm is developed to minimize the cost function of the PWLS-APICTGV reconstruction.

Results: The PWLS-APICTGV algorithm was evaluated using a digital brain perfusion phantom and patient data. Compared to other competing algorithms, the PWLS-APICTGV algorithm shows better noise reduction and preservation of structural details. Furthermore, the PWLS-APICTGV algorithm generates a more accurate cerebral blood flow (CBF) map than the other reconstruction methods.

Conclusions: The PWLS-APICTGV algorithm can significantly suppress noise while preserving the important features of the reconstructed DCPCT image, thus achieving a great improvement in low-dose DCPCT imaging.
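The PWLS mechanism this abstract describes — a weighted data-fidelity term plus a penalty pulling the reconstruction toward a high-quality prior image — can be illustrated on a 1-D toy problem. Note that the quadratic prior penalty below is a deliberate simplification standing in for the APICTGV regularizer, and the system matrix, weights, and noise levels are all synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 128
x_true = np.zeros(n); x_true[20:40] = 1.0      # piecewise-constant "image"
x_prior = x_true + rng.normal(0, 0.02, n)      # high-quality precontrast prior
A = rng.normal(0, 1, (m, n)) / np.sqrt(n)      # toy system matrix
w = rng.uniform(0.5, 1.5, m)                   # statistical weights (PWLS "W")
y = A @ x_true + rng.normal(0, 0.1, m)         # noisy low-dose data

beta, step = 5.0, 0.1
x = np.zeros(n)
for _ in range(500):
    # gradient of (y - Ax)^T W (y - Ax)/2 + beta/2 * ||x - x_prior||^2
    grad = A.T @ (w * (A @ x - y)) + beta * (x - x_prior)
    x -= step * grad

rmse = np.sqrt(np.mean((x - x_true) ** 2))
print(f"RMSE vs. ground truth: {rmse:.4f}")
```

Even with this crude quadratic surrogate, the prior-image term visibly stabilizes the reconstruction against the data noise; APICTGV additionally adapts the penalty to local structure, which a fixed quadratic term cannot do.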

{"title":"Adaptive prior image constrained total generalized variation for low-dose dynamic cerebral perfusion CT reconstruction.","authors":"Shanzhou Niu, Shuo Li, Shuyan Huang, Lijing Liang, Sizhou Tang, Tinghua Wang, Gaohang Yu, Tianye Niu, Jing Wang, Jianhua Ma","doi":"10.3233/XST-240104","DOIUrl":"https://doi.org/10.3233/XST-240104","url":null,"abstract":"<p><strong>Background: </strong>Dynamic cerebral perfusion CT (DCPCT) can provide valuable insight into cerebral hemodynamics by visualizing changes in blood within the brain. However, the associated high radiation dose of the standard DCPCT scanning protocol has been a great concern for the patient and radiation physics. Minimizing the x-ray exposure to patients has been a major effort in the DCPCT examination. A simple and cost-effective approach to achieve low-dose DCPCT imaging is to lower the x-ray tube current in data acquisition. However, the image quality of low-dose DCPCT will be degraded because of the excessive quantum noise.</p><p><strong>Objective: </strong>To obtain high-quality DCPCT images, we present a statistical iterative reconstruction (SIR) algorithm based on penalized weighted least squares (PWLS) using adaptive prior image constrained total generalized variation (APICTGV) regularization (PWLS-APICTGV).</p><p><strong>Methods: </strong>APICTGV regularization uses the precontrast scanned high-quality CT image as an adaptive structural prior for low-dose PWLS reconstruction. Thus, the image quality of low-dose DCPCT is improved while essential features of targe image are well preserved. An alternating optimization algorithm is developed to solve the cost function of the PWLS-APICTGV reconstruction.</p><p><strong>Results: </strong>PWLS-APICTGV algorithm was evaluated using a digital brain perfusion phantom and patient data. Compared to other competing algorithms, the PWLS-APICTGV algorithm shows better noise reduction and structural details preservation. 
Furthermore, the PWLS-APICTGV algorithm can generate more accurate cerebral blood flow (CBF) map than that of other reconstruction methods.</p><p><strong>Conclusions: </strong>PWLS-APICTGV algorithm can significantly suppress noise while preserving the important features of the reconstructed DCPCT image, thus achieving a great improvement in low-dose DCPCT imaging.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142299655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comprehensive guide to content-based image retrieval algorithms with visualsift ensembling.
IF 3 Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date : 2024-09-11 DOI: 10.3233/xst-240189
C Ramesh Babu Durai,R Sathesh Raaj,Sindhu Chandra Sekharan,V S Nishok
Background: Content-based image retrieval (CBIR) systems are vital for managing the large volumes of data produced by medical imaging technologies. They enable efficient retrieval of relevant medical images from extensive databases, supporting clinical diagnosis, treatment planning, and medical research.

Objective: This study aims to enhance CBIR systems' effectiveness in medical image analysis by introducing the VisualSift Ensembling Integration with Attention Mechanisms (VEIAM). VEIAM seeks to improve diagnostic accuracy and retrieval efficiency by integrating robust feature extraction with dynamic attention mechanisms.

Methods: VEIAM combines Scale-Invariant Feature Transform (SIFT) with selective attention mechanisms to emphasize crucial regions within medical images dynamically. Implemented in Python, the model integrates seamlessly into existing medical image analysis workflows, providing a robust and accessible tool for clinicians and researchers.

Results: The proposed VEIAM model demonstrated an impressive accuracy of 97.34% in classifying and retrieving medical images. This performance indicates VEIAM's capability to discern subtle patterns and textures critical for accurate diagnostics.

Conclusions: By merging SIFT-based feature extraction with attention processes, VEIAM offers a discriminatively powerful approach to medical image analysis. Its high accuracy and efficiency in retrieving relevant medical images make it a promising tool for enhancing diagnostic processes and supporting medical research in CBIR systems.
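The retrieval side of such a CBIR system can be illustrated with a minimal attention-weighted similarity search over synthetic descriptors. The per-dimension weighting below is a crude stand-in for VEIAM's spatial attention, and the features are random vectors, not real SIFT output:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 128                                  # descriptor length (SIFT-sized)
db = rng.normal(0, 1, (50, d))           # hypothetical database descriptors
query = db[7] + rng.normal(0, 0.1, d)    # query: a perturbed copy of entry 7

# crude stand-in for attention: weight descriptor dimensions by the
# query's own energy before comparing
attn = np.abs(query) / np.abs(query).sum()

def weighted_cosine(a, b, w):
    a, b = a * w, b * w
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = np.array([weighted_cosine(query, x, attn) for x in db])
best = int(np.argmax(scores))
print(best)  # expect 7: the entry the query was derived from
```

The attention weights let some descriptor dimensions dominate the match score, which is the same idea — applied here per dimension rather than per image region — as emphasizing diagnostically crucial regions during retrieval.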
{"title":"A comprehensive guide to content-based image retrieval algorithms with visualsift ensembling.","authors":"C Ramesh Babu Durai,R Sathesh Raaj,Sindhu Chandra Sekharan,V S Nishok","doi":"10.3233/xst-240189","DOIUrl":"https://doi.org/10.3233/xst-240189","url":null,"abstract":"BACKGROUNDContent-based image retrieval (CBIR) systems are vital for managing the large volumes of data produced by medical imaging technologies. They enable efficient retrieval of relevant medical images from extensive databases, supporting clinical diagnosis, treatment planning, and medical research.OBJECTIVEThis study aims to enhance CBIR systems' effectiveness in medical image analysis by introducing the VisualSift Ensembling Integration with Attention Mechanisms (VEIAM). VEIAM seeks to improve diagnostic accuracy and retrieval efficiency by integrating robust feature extraction with dynamic attention mechanisms.METHODSVEIAM combines Scale-Invariant Feature Transform (SIFT) with selective attention mechanisms to emphasize crucial regions within medical images dynamically. Implemented in Python, the model integrates seamlessly into existing medical image analysis workflows, providing a robust and accessible tool for clinicians and researchers.RESULTSThe proposed VEIAM model demonstrated an impressive accuracy of 97.34% in classifying and retrieving medical images. This performance indicates VEIAM's capability to discern subtle patterns and textures critical for accurate diagnostics.CONCLUSIONSBy merging SIFT-based feature extraction with attention processes, VEIAM offers a discriminatively powerful approach to medical image analysis. 
Its high accuracy and efficiency in retrieving relevant medical images make it a promising tool for enhancing diagnostic processes and supporting medical research in CBIR systems.","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":"79 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142258308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multiscale unsupervised network for deformable image registration.
IF 1.7 Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date : 2024-09-04 DOI: 10.3233/XST-240159
Yun Wang, Wanru Chang, Chongfei Huang, Dexing Kong

Background: Deformable image registration (DIR) plays an important part in many clinical tasks, and deep learning has made significant progress in DIR over the past few years.

Objective: To propose a fast multiscale unsupervised deformable image registration method (referred to as FMIRNet) for monomodal image registration.

Methods: We designed a multiscale fusion module to estimate the large displacement field by combining and refining the deformation fields of three scales. A spatial attention mechanism is employed in the fusion module to weight the displacement field pixel by pixel. In addition to the mean square error (MSE), we added a structural similarity (SSIM) measure during the training phase to enhance the structural consistency between the deformed images and the fixed images.

Results: Our registration method was evaluated on EchoNet, CHAOS and SLIVER, and indeed showed performance improvements in terms of SSIM, NCC and NMI scores. Furthermore, we integrated FMIRNet into segmentation networks (FCN, UNet) to boost the segmentation task on a dataset with few manual annotations in our joint learning frameworks. The experimental results indicated that the joint segmentation methods showed performance improvements in terms of Dice, HD and ASSD scores.
Conclusions: Our proposed FMIRNet is effective for large deformation estimation, and its registration capability is generalizable and robust in joint registration and segmentation frameworks to generate reliable labels for training segmentation tasks.
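The multiscale fusion idea described above — combining deformation-field estimates from three scales using pixelwise attention weights — can be sketched as follows. The shapes, the softmax weighting, and the nearest-neighbour upsampling are illustrative assumptions, not FMIRNet's actual architecture:

```python
import numpy as np

def upsample2(field):
    # nearest-neighbour 2x upsampling of an (H, W, 2) displacement field;
    # displacements are doubled to stay in pixel units at the finer scale
    return np.repeat(np.repeat(field, 2, axis=0), 2, axis=1) * 2.0

rng = np.random.default_rng(3)
H = W = 8
# hypothetical deformation estimates at three scales (coarse -> fine)
f_coarse = rng.normal(0, 1, (H // 4, W // 4, 2))
f_mid    = rng.normal(0, 1, (H // 2, W // 2, 2))
f_fine   = rng.normal(0, 1, (H, W, 2))

# bring every field to full resolution, then stack along a "scale" axis
fields = np.stack([upsample2(upsample2(f_coarse)), upsample2(f_mid), f_fine])

# pixelwise attention: softmax over the three scales at each pixel
scores = rng.normal(0, 1, (3, H, W, 1))
w = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)

fused = (w * fields).sum(axis=0)   # attention-weighted displacement field
print(fused.shape)                 # (8, 8, 2)
```

In the paper the per-pixel weights come from a learned spatial attention module rather than random scores, but the fusion arithmetic — normalize weights across scales, then take a pixelwise weighted sum of the upsampled fields — has this shape.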

{"title":"Multiscale unsupervised network for deformable image registration.","authors":"Yun Wang, Wanru Chang, Chongfei Huang, Dexing Kong","doi":"10.3233/XST-240159","DOIUrl":"https://doi.org/10.3233/XST-240159","url":null,"abstract":"<p><strong>Background: </strong>Deformable image registration (DIR) plays an important part in many clinical tasks, and deep learning has made significant progress in DIR over the past few years.</p><p><strong>Objective: </strong>To propose a fast multiscale unsupervised deformable image registration (referred to as FMIRNet) method for monomodal image registration.</p><p><strong>Methods: </strong>We designed a multiscale fusion module to estimate the large displacement field by combining and refining the deformation fields of three scales. The spatial attention mechanism was employed in our fusion module to weight the displacement field pixel by pixel. Except mean square error (MSE), we additionally added structural similarity (ssim) measure during the training phase to enhance the structural consistency between the deformed images and the fixed images.</p><p><strong>Results: </strong>Our registration method was evaluated on EchoNet, CHAOS and SLIVER, and had indeed performance improvement in terms of SSIM, NCC and NMI scores. Furthermore, we integrated the FMIRNet into the segmentation network (FCN, UNet) to boost the segmentation task on a dataset with few manual annotations in our joint leaning frameworks. 
The experimental results indicated that the joint segmentation methods had performance improvement in terms of Dice, HD and ASSD scores.</p><p><strong>Conclusions: </strong>Our proposed FMIRNet is effective for large deformation estimation, and its registration capability is generalizable and robust in joint registration and segmentation frameworks to generate reliable labels for training segmentation tasks.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142141627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}