
Latest articles in Journal of X-Ray Science and Technology

Erratum to "Mask R-CNN assisted diagnosis of spinal tuberculosis".
IF 1.4 | CAS Tier 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-09-01 | Epub Date: 2025-05-23 | DOI: 10.1177/08953996251346352
Citations: 0
MHASegNet: A multi-scale hybrid aggregation network of segmenting coronary artery from CCTA images.
IF 1.4 | CAS Tier 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-09-01 | Epub Date: 2025-06-09 | DOI: 10.1177/08953996251346484
Shang Li, Yanan Wu, Bojun Jiang, Lingkai Liu, Tiande Zhang, Yu Sun, Jie Hou, Patrice Monkam, Wei Qian, Shouliang Qi

Background: Segmentation of coronary arteries in Coronary Computed Tomography Angiography (CCTA) images is crucial for diagnosing coronary artery disease (CAD), but remains challenging due to small artery size, uneven contrast distribution, and issues like over-segmentation or omission.

Objective: The aim of this study is to improve coronary artery segmentation in CCTA images using both conventional and deep learning techniques.

Methods: We propose MHASegNet, a lightweight network for coronary artery segmentation, combined with a tailored refinement method. MHASegNet employs multi-scale hybrid attention to capture global and local features, and integrates a 3D context anchor attention module to focus on key coronary artery structures while suppressing background noise. An iterative, region-growing-based refinement repairs breaks in the segmented coronary arteries and reduces false alarms. We evaluated the method on an in-house dataset of 90 subjects and two public datasets with 1060 subjects.

Results: MHASegNet, coupled with tailored refinement, outperforms state-of-the-art algorithms, achieving a Dice Similarity Coefficient (DSC) of 0.867 on the in-house dataset, 0.875 on the ASOCA dataset, and 0.827 on the ImageCAS dataset.

Conclusion: The tailored refinement significantly reduces false positives and resolves most discontinuities, even for other networks. MHASegNet and the tailored refinement may aid in diagnosing and quantifying CAD following further validation.
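The Dice Similarity Coefficient (DSC) reported above is the standard overlap metric between a predicted and a reference mask. As a minimal illustration (not the authors' code; the function name is ours):

```python
def dice_coefficient(pred, target):
    """Dice Similarity Coefficient (DSC) between two binary masks.

    pred, target: flat sequences of 0/1 voxel labels.
    DSC = 2*|A ∩ B| / (|A| + |B|); returns 1.0 for two empty masks.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0
    return 2.0 * intersection / total

# Toy 1-D "masks": 3 overlapping voxels, 4 predicted and 4 true.
pred = [1, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 1, 0]
print(dice_coefficient(pred, truth))  # 2*3 / (4+4) = 0.75
```

A DSC of 0.867, as on the in-house dataset, therefore means the predicted and reference artery masks overlap in roughly 87% of their combined voxel mass.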

Citations: 0
Basic acceleration technique with theoretical analysis on iterative algorithms for image reconstruction.
IF 1.4 | CAS Tier 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-09-01 | Epub Date: 2025-05-11 | DOI: 10.1177/08953996251335119
Shuhua Ji, Boyan Ren, Xing Zhao, Xuying Zhao

In image reconstruction and processing, incorporating prior information, particularly the nonnegativity of pixel values, is essential. Existing computed tomography (CT) iterative reconstruction algorithms, including the algebraic reconstruction technique (ART), simultaneous ART (SART), and the simultaneous iterative reconstruction technique (SIRT), typically handle negative components during iteration by setting them to zero, introducing regularization terms to prevent negativity, or leaving them unchanged. This paper establishes a general framework in which enforcing the nonnegativity prior accelerates the convergence of the reconstructed image toward the true solution. Within this framework, we propose two efficient and simple acceleration techniques: setting negative pixel values to their absolute values, and resetting them to the estimates from the previous update. Experiments were conducted using the ART, SIRT, and SART algorithms, integrated with the corresponding acceleration techniques, on full-angle, limited-angle, and noisy simulated data, as well as real data. The results validate the effectiveness of the proposed acceleration methods, as measured by the PSNR and SSIM image-quality metrics. Notably, the technique that sets negative pixel values to their absolute values is strongly recommended, as it significantly outperforms the existing technique that sets them to zero in both image quality and iteration time.
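The absolute-value step can be sketched on a toy problem. Below is a Landweber/SIRT-style iteration on a tiny dense 2x2 system (everything here — `sirt_abs`, the step size `lam`, the example system — is invented for illustration and is not the paper's CT implementation): after each gradient update, negative components are replaced by their absolute values rather than clipped to zero.

```python
def matvec(A, x):
    """Dense matrix-vector product for a list-of-rows matrix."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def sirt_abs(A, b, iters=200, lam=0.1):
    """Landweber/SIRT-style iteration x <- x + lam * A^T (b - A x),
    followed by the absolute-value nonnegativity step |x| described
    in the abstract (instead of max(x, 0))."""
    n = len(A[0])
    x = [0.0] * n
    At = transpose(A)
    for _ in range(iters):
        r = [bi - ri for bi, ri in zip(b, matvec(A, x))]  # residual b - Ax
        g = matvec(At, r)                                 # A^T * residual
        x = [xi + lam * gi for xi, gi in zip(x, g)]
        x = [abs(xi) for xi in x]                         # nonnegativity by |x|
    return x

# Exact solution of this system is the nonnegative vector [1, 2].
A = [[2.0, 1.0], [1.0, 3.0]]
b = [4.0, 7.0]
print([round(v, 3) for v in sirt_abs(A, b)])
```

For a nonnegative true solution, the iterates converge to it; the step size must satisfy the usual Landweber bound lam < 2 / σ_max(AᵀA), which holds for this example.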

Citations: 0
Proposal of a phantom for analyzing out-of-plane artifact in digital breast tomosynthesis.
IF 1.4 | CAS Tier 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-09-01 | Epub Date: 2025-06-26 | DOI: 10.1177/08953996251351621
Emu Yamamoto, Keisuke Kondo, Masato Imahana, Mayumi Otani, Ayako Yoshida, Miki Okazaki

Background: Out-of-plane artifacts in digital breast tomosynthesis (DBT) can affect image quality, even subtly, and are influenced by the size and z-position of contrast-bearing features in clinical images.

Objective: To propose a phantom and metric to further characterize out-of-plane artifacts in DBT.

Methods: Phantoms with an inserted signal were manufactured, and the reconstructed planes were obtained using the DBT system. The normalized maximum contrast within the plane area was used to quantitatively evaluate out-of-plane artifacts. The spread of out-of-plane artifacts within the reconstructed plane was qualitatively evaluated by observing the profile within the plane area.

Results: The larger the signal diameter, the stronger the effect of out-of-plane artifacts at z-positions far from the in-focus plane. When the z-position of the signal was above the z-position of the center of X-ray tube rotation, out-of-plane artifacts were stronger on the upper side of the signal and weaker on the lower side. The spread of out-of-plane artifacts in off-focus planes changed from monomodal to bimodal with increasing distance from the signal's location in the z-direction.

Conclusions: This work proposes new phantoms and analysis methods to investigate the characteristics of out-of-plane artifacts, supplementing conventional methods.

Citations: 0
Statistical cone-beam CT noise reduction with multiscale decomposition and penalized weighted least squares in the projection domain.
IF 1.4 | CAS Tier 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-09-01 | Epub Date: 2025-07-15 | DOI: 10.1177/08953996251337889
Shaojie Tang, Jin Liu, Guo Li, Zhiwei Qiao, Yang Chen, Xuanqin Mou

Purposes: Suppressing noise can effectively improve image quality and save radiation dose in clinical imaging with x-ray computed tomography (CT). To date, numerous statistical noise reduction approaches have been proposed in the image domain, the projection domain, or both. In particular, a multiscale decomposition strategy can be exploited to enhance noise suppression while preserving image sharpness. Recognizing the inherent advantage of noise suppression in the projection domain, we previously proposed a projection domain multiscale penalized weighted least squares (PWLS) method for fan-beam CT imaging, wherein the sampling intervals are explicitly taken into account to accommodate possible variation of sampling rates. In this work, we extend our previous method to cone-beam (CB) CT imaging, which is more relevant to practical imaging applications.

Methods: The projection domain multiscale PWLS method is derived for CBCT imaging by converting an isotropic diffusion partial differential equation (PDE) in the three-dimensional (3D) image domain into its counterpart in the CB projection domain. By adopting a Markov random field (MRF) objective function, the CB projection domain multiscale PWLS method suppresses noise at each scale. The performance of the proposed method for statistical noise reduction in CBCT imaging is experimentally evaluated and verified using projection data acquired by an actual micro-CT scanner.

Results:  The preliminary result shows that the proposed CB projection domain multiscale PWLS method outperforms the CB projection domain single-scale PWLS, the 3D image domain discriminative feature representation (DFR), and the 3D image domain multiscale nonlinear diffusion methods in noise reduction. Moreover, the proposed method can preserve image sharpness effectively while avoiding generation of novel artifacts.

Conclusions:  Since the sampling intervals are explicitly taken into account in the projection domain multiscale decomposition, the proposed method would be beneficial to advanced applications where the CBCT imaging is employed and the sampling rates vary.
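The PWLS idea — a statistically weighted data-fidelity term plus an MRF smoothness penalty — can be illustrated in one dimension. The sketch below is a toy only (the function name and parameters are ours, the penalty is a simple first-order difference, and the paper's method operates on multiscale CB projection data, not a 1-D signal):

```python
def pwls_denoise_1d(y, w, beta=1.0, iters=500, step=0.05):
    """Minimize sum_i w_i*(u_i - y_i)^2 + beta * sum_i (u_{i+1} - u_i)^2
    by gradient descent: a 1-D toy of a PWLS objective with a
    first-order MRF (smoothness) penalty.

    y: noisy measurements; w: statistical weights (e.g. inverse variances).
    """
    u = list(y)
    n = len(y)
    for _ in range(iters):
        # gradient of the weighted data-fidelity term
        g = [2.0 * w[i] * (u[i] - y[i]) for i in range(n)]
        # gradient of the pairwise smoothness penalty
        for i in range(n - 1):
            d = 2.0 * beta * (u[i] - u[i + 1])
            g[i] += d
            g[i + 1] -= d
        u = [u[i] - step * g[i] for i in range(n)]
    return u

# A noisy alternating signal is pulled toward a smooth profile
# while (with uniform weights) its mean is preserved.
y = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
u = pwls_denoise_1d(y, w=[1.0] * len(y), beta=2.0)
```

Larger `beta` smooths more aggressively; per-measurement weights `w` are what make the least squares "penalized weighted" rather than ordinary.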

Citations: 0
A multi-stage training and deep supervision based segmentation approach for 3D abdominal multi-organ segmentation.
IF 1.4 | CAS Tier 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-09-01 | Epub Date: 2025-07-17 | DOI: 10.1177/08953996251355806
Panpan Wu, Peng An, Ziping Zhao, Runpeng Guo, Xiaofeng Ma, Yue Qu, Yurou Xu, Hengyong Yu

Accurate X-ray computed tomography (CT) image segmentation of the abdominal organs is fundamental for diagnosing abdominal diseases, planning cancer treatment, and formulating radiotherapy strategies. However, existing deep learning models for three-dimensional (3D) CT abdominal multi-organ segmentation face challenges, including complex organ distribution, scarcity of labeled data, and diversity of organ structures, leading to difficulties in model training and convergence and low segmentation accuracy. To address these issues, a novel segmentation approach based on multi-stage training and a deep supervision model is proposed. It primarily integrates multi-stage training, a pseudo-labeling technique, and a deep supervision model with an attention mechanism (DLAU-Net), specifically designed for 3D abdominal multi-organ segmentation. The DLAU-Net enhances segmentation performance and model adaptability through an improved network architecture. The multi-stage training strategy accelerates model convergence and enhances generalizability, effectively addressing the diversity of abdominal organ structures. The introduction of pseudo-labeling training alleviates the bottleneck of labeled-data scarcity and further improves the model's generalization performance and training efficiency. Experiments were conducted on a large dataset provided by the FLARE 2023 Challenge. Comprehensive ablation studies and comparative experiments validate the effectiveness of the proposed method. Our method achieves an average organ accuracy (AVG) of 90.5% and a Dice Similarity Coefficient (DSC) of 89.05%, and exhibits exceptional performance in training speed and handling of data diversity, particularly in segmenting critical abdominal organs such as the liver, spleen, and kidneys, significantly outperforming existing comparative methods.
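A multi-organ DSC such as the 89.05% quoted above is typically a mean of per-organ Dice scores computed from integer label maps. A minimal sketch (illustrative only; the helper name and toy labels are invented, and the paper's exact averaging convention is not specified here):

```python
def per_organ_dice(pred, target, labels):
    """Mean Dice over a set of organ labels in multi-class label maps.

    pred, target: flat sequences of integer organ labels (0 = background).
    Each organ is scored as a binary mask; an organ absent from both
    maps scores 1.0 by convention.
    """
    scores = []
    for lab in labels:
        p = [int(v == lab) for v in pred]
        t = [int(v == lab) for v in target]
        inter = sum(a * b for a, b in zip(p, t))
        total = sum(p) + sum(t)
        scores.append(1.0 if total == 0 else 2.0 * inter / total)
    return sum(scores) / len(scores)

# Two organs (labels 1 and 2) in a toy 5-voxel volume.
pred = [1, 1, 2, 2, 0]
target = [1, 2, 2, 2, 0]
print(per_organ_dice(pred, target, labels=[1, 2]))  # mean of 2/3 and 4/5
```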

Citations: 0
COVID-19CT+: A public dataset of CT images for COVID-19 retrospective analysis.
IF 1.4 | CAS Tier 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-09-01 | Epub Date: 2025-05-23 | DOI: 10.1177/08953996251332793
Yihao Sun, Tianming Du, Bin Wang, Md Mamunur Rahaman, Xinghao Wang, Xinyu Huang, Tao Jiang, Marcin Grzegorzek, Hongzan Sun, Jian Xu, Chen Li

Background and objective: COVID-19 is considered the biggest global health disaster of the 21st century and has had a huge impact on the world.

Methods: This paper publishes a publicly available dataset of CT images of multiple types of pneumonia (COVID-19CT+). Specifically, the dataset contains 409,619 CT images of 1333 patients: subset-A contains 312 community-acquired pneumonia cases and subset-B contains 1021 COVID-19 cases. To demonstrate that methods from different periods differ in classifying COVID-19CT+ images, we selected 13 classical machine learning classifiers and 5 deep learning classifiers for the image classification task.

Results: Two sets of experiments were conducted using traditional machine learning and deep learning methods: the first classifies COVID-19 in subset-B versus COVID-19 white lung disease, and the second classifies community-acquired pneumonia in subset-A versus COVID-19 in subset-B, demonstrating that methods from different periods perform differently on COVID-19CT+. In the first set of experiments, the accuracy of traditional machine learning reaches a maximum of 97.3% and a minimum of only 62.6%, while deep learning algorithms reach a maximum of 97.9% and a minimum of 85.7%. In the second set, traditional machine learning reaches a high of 94.6% accuracy and a low of 56.8%, while deep learning reaches a high of 91.9% and a low of 86.3%.

Conclusions: COVID-19CT+ covers a large number of CT images of patients with COVID-19 and community-acquired pneumonia and is one of the largest datasets available. We expect this dataset to attract more researchers to explore new automated diagnostic algorithms, contributing to improved diagnostic accuracy and efficiency for COVID-19.

Cited: 0
An improved U-NET3+ with transformer and adaptive attention map for lung segmentation.
IF 1.4 Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date : 2025-09-01 Epub Date: 2025-07-13 DOI: 10.1177/08953996251351623
V Joseph Raj, P Christopher

Accurate segmentation of lung regions from CT scan images is critical for diagnosing and monitoring respiratory diseases. This study introduces a novel hybrid architecture, Adaptive Attention U-NetAA, which combines the strengths of U-Net3+ and Transformer-based attention mechanisms for high-precision lung segmentation. The U-Net3+ module segments the lung region effectively by leveraging a deep convolutional network with nested skip connections, ensuring rich multi-scale feature extraction. A key innovation is the adaptive attention mechanism introduced within the Transformer module, which dynamically adjusts the focus on critical regions of the image based on local and global contextual relationships. This adaptive attention mechanism handles variations in lung morphology, image artifacts, and low-contrast regions, improving segmentation accuracy. The combined convolutional and attention-based architecture enhances robustness and precision. Experimental results on benchmark CT datasets demonstrate that the proposed model achieves an IoU of 0.984, a Dice coefficient of 0.989, a MIoU of 0.972, and an HD95 of 1.22 mm, surpassing state-of-the-art methods. These results establish U-NetAA as a superior tool for clinical lung segmentation, with enhanced accuracy, sensitivity, and generalization capability.
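The adaptive attention idea, features rescaled by weights computed from their own context, can be pictured with a minimal NumPy sketch. This is an illustration only: the shapes, names, and the channel-softmax gating rule below are assumptions, not the authors' implementation.

```python
import numpy as np

def adaptive_attention(features: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Gate a feature map with per-channel softmax weights derived from
    its own global context (a toy stand-in for the paper's module)."""
    c, h, w = features.shape
    context = features.reshape(c, -1).mean(axis=1)   # global context per channel
    logits = context / temperature
    logits = logits - logits.max()                   # numerical stability
    weights = np.exp(logits) / np.exp(logits).sum()  # softmax over channels
    # Residual gating: original signal plus attention-weighted signal.
    return features + weights[:, None, None] * features

feats = np.random.rand(4, 8, 8)
out = adaptive_attention(feats)
```

In the actual model the weights would come from learned projections inside the Transformer module and vary spatially; the fixed softmax over channel means above only shows the gating pattern.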

{"title":"An improved U-NET3+ with transformer and adaptive attention map for lung segmentation.","authors":"V Joseph Raj, P Christopher","doi":"10.1177/08953996251351623","DOIUrl":"10.1177/08953996251351623","url":null,"abstract":"<p><p>Accurate segmentation of lung regions from CT scan images is critical for diagnosing and monitoring respiratory diseases. This study introduces a novel hybrid architecture, Adaptive Attention U-NetAA, which combines the strengths of U-Net3+ and Transformer-based attention mechanisms for high-precision lung segmentation. The U-Net3+ module segments the lung region effectively by leveraging a deep convolutional network with nested skip connections, ensuring rich multi-scale feature extraction. A key innovation is the adaptive attention mechanism introduced within the Transformer module, which dynamically adjusts the focus on critical regions of the image based on local and global contextual relationships. This adaptive attention mechanism handles variations in lung morphology, image artifacts, and low-contrast regions, improving segmentation accuracy. The combined convolutional and attention-based architecture enhances robustness and precision. Experimental results on benchmark CT datasets demonstrate that the proposed model achieves an IoU of 0.984, a Dice coefficient of 0.989, a MIoU of 0.972, and an HD95 of 1.22 mm, surpassing state-of-the-art methods. These results establish U-NetAA as a superior tool for clinical lung segmentation, with enhanced accuracy, sensitivity, and generalization capability.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"978-997"},"PeriodicalIF":1.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144627591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0
Multi-domain information fusion diffusion model (MDIF-DM) for limited-angle computed tomography.
IF 1.4 Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date : 2025-09-01 Epub Date: 2025-06-19 DOI: 10.1177/08953996251339368
Genwei Ma, Dimeng Xia, Shusen Zhao

Background: Limited-angle computed tomography suffers from severe artifacts in the reconstructed image due to incomplete projection data. Deep learning methods have recently been developed to address the robustness and low-contrast challenges of limited-angle CT reconstruction in a relatively effective way. Objective: To improve the low contrast of current limited-angle CT reconstructions and to enhance the robustness of the reconstruction method. Method: This paper proposes a limited-angle CT reconstruction method that combines Fourier-domain reweighting with wavelet-domain enhancement, fusing information from different domains to obtain high-resolution reconstructed images. Results: Experiments verify the feasibility and effectiveness of the proposed solution, and the reconstruction results improve on state-of-the-art methods. Conclusions: The proposed method enhances features of the original image-domain data using information from different domains, which benefits the reasonable diffusion and restoration of fine detail textures.
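Fourier-domain reweighting amounts to multiplying the image spectrum by a frequency-dependent weight before inverting the transform. The NumPy sketch below illustrates that reading with an assumed radial ramp weight; it is not the filter proposed in the paper.

```python
import numpy as np

def fourier_reweight(image: np.ndarray, boost: float = 2.0) -> np.ndarray:
    """Amplify high spatial frequencies by reweighting the Fourier
    spectrum (illustrative radial ramp, not the paper's filter)."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))   # DC term moved to centre
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h / 2.0, xx - w / 2.0)    # distance from DC term
    weight = 1.0 + (boost - 1.0) * radius / radius.max()
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * weight)))

img = np.random.rand(32, 32)
sharpened = fourier_reweight(img)
```

With boost = 1.0 the weight is identically one and the round trip returns the input, which makes the effect of the ramp easy to isolate.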

{"title":"Multi-domain information fusion diffusion model (MDIF-DM) for limited-angle computed tomography.","authors":"Genwei Ma, Dimeng Xia, Shusen Zhao","doi":"10.1177/08953996251339368","DOIUrl":"10.1177/08953996251339368","url":null,"abstract":"<p><p>Background: Limited-angle computed tomography suffers from severe artifacts in the reconstructed image due to incomplete projection data. Deep learning methods have recently been developed to address the robustness and low-contrast challenges of limited-angle CT reconstruction in a relatively effective way. Objective: To improve the low contrast of current limited-angle CT reconstructions and to enhance the robustness of the reconstruction method. Method: This paper proposes a limited-angle CT reconstruction method that combines Fourier-domain reweighting with wavelet-domain enhancement, fusing information from different domains to obtain high-resolution reconstructed images. Results: Experiments verify the feasibility and effectiveness of the proposed solution, and the reconstruction results improve on state-of-the-art methods. Conclusions: The proposed method enhances features of the original image-domain data using information from different domains, which benefits the reasonable diffusion and restoration of fine detail textures.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"935-944"},"PeriodicalIF":1.4,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144327571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0
Ultra-sparse view lung CT image reconstruction using generative adversarial networks and compressed sensing.
IF 1.4 Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date : 2025-07-01 Epub Date: 2025-04-29 DOI: 10.1177/08953996251329214
Zhaoguang Li, Zhengxiang Sun, Lin Lv, Yuhan Liu, Xiuying Wang, Jingjing Xu, Jianping Xing, Paul Babyn, Feng-Rong Sun

X-ray ionizing radiation from Computed Tomography (CT) scanning increases patients' cancer risk, so sparse-view CT, which reduces X-ray exposure by lowering the number of projections, is highly significant in diagnostic imaging. However, reducing the number of projections inherently degrades image quality, negatively impacting clinical diagnosis; attaining reconstructed images that meet diagnostic criteria under sparse-view conditions is therefore challenging. This paper presents a novel network (CSUF) designed specifically for ultra-sparse-view lung CT image reconstruction. The CSUF network consists of three cohesive components: (1) a compressed sensing-based CT image reconstruction module (VdCS module); (2) a U-shaped end-to-end network, CT-RDNet, enhanced with a self-attention mechanism and acting as the generator in a Generative Adversarial Network (GAN) for CT image restoration and denoising; and (3) a feedback loop. The VdCS module enriches CT-RDNet with enhanced features, while CT-RDNet supplies the VdCS module with prior images rich in detail and with minimized artifacts, facilitated by the feedback loop. Engineering simulation results demonstrate the robustness of the CSUF network and its potential to deliver lung CT images of diagnostic quality even under ultra-sparse-view conditions.
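Compressed sensing, on which the VdCS module builds, recovers a signal from far fewer measurements than unknowns by enforcing sparsity. A generic ISTA iteration sketches the principle; it is illustrative only and not the paper's reconstruction module.

```python
import numpy as np

def ista(A: np.ndarray, y: np.ndarray, lam: float = 0.01, steps: int = 500) -> np.ndarray:
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1,
    a textbook compressed-sensing solver (not the paper's VdCS module)."""
    L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L                # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))                   # 40 measurements, 100 unknowns
x_true = np.zeros(100)
x_true[[5, 30, 77]] = [1.0, -2.0, 1.5]               # 3-sparse ground truth
y = A @ x_true
x_hat = ista(A, y)
```

The soft-threshold step is what injects the sparsity prior; in sparse-view CT the measurement operator would be a subsampled projection (Radon) matrix rather than a random Gaussian one.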

{"title":"Ultra-sparse view lung CT image reconstruction using generative adversarial networks and compressed sensing.","authors":"Zhaoguang Li, Zhengxiang Sun, Lin Lv, Yuhan Liu, Xiuying Wang, Jingjing Xu, Jianping Xing, Paul Babyn, Feng-Rong Sun","doi":"10.1177/08953996251329214","DOIUrl":"10.1177/08953996251329214","url":null,"abstract":"<p><p>X-ray ionizing radiation from Computed Tomography (CT) scanning increases patients' cancer risk, so sparse-view CT, which reduces X-ray exposure by lowering the number of projections, is highly significant in diagnostic imaging. However, reducing the number of projections inherently degrades image quality, negatively impacting clinical diagnosis; attaining reconstructed images that meet diagnostic criteria under sparse-view conditions is therefore challenging. This paper presents a novel network (CSUF) designed specifically for ultra-sparse-view lung CT image reconstruction. The CSUF network consists of three cohesive components: (1) a compressed sensing-based CT image reconstruction module (VdCS module); (2) a U-shaped end-to-end network, CT-RDNet, enhanced with a self-attention mechanism and acting as the generator in a Generative Adversarial Network (GAN) for CT image restoration and denoising; and (3) a feedback loop. The VdCS module enriches CT-RDNet with enhanced features, while CT-RDNet supplies the VdCS module with prior images rich in detail and with minimized artifacts, facilitated by the feedback loop. Engineering simulation results demonstrate the robustness of the CSUF network and its potential to deliver lung CT images of diagnostic quality even under ultra-sparse-view conditions.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"803-816"},"PeriodicalIF":1.4,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144028776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0