
Journal of X-Ray Science and Technology: Latest Publications

A Two-Module Parallel Dual-Domain Network for interior tomography reconstruction.
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2026-03-26 DOI: 10.1177/08953996261433954
Haihang Zhao, Pengxiang Ji, Yongzhou Wu, Jintao Zhao, Jing Zou

Background: Interior tomography is a crucial technique in computed tomography (CT) that aims to minimize radiation exposure by limiting X-ray imaging to the region of interest (ROI) while maintaining diagnostic accuracy. However, traditional reconstruction algorithms often suffer from severe cupping artifacts caused by data truncation, which significantly degrade image quality. Objective: This study aims to develop a parallel network that effectively integrates information between the projection and image domains to improve interior tomography reconstruction. Methods: We propose an end-to-end deep learning framework, the Two-Module Parallel Dual-Domain Network (TPDDN), which consists of two key modules. The Initial Restoration Module generates high-quality prior sinograms and images, providing a robust foundation for subsequent processing and effectively mitigating the impact of data truncation. The Interactive Fusion Module, the core of the network, employs two parallel, interactive branches that operate simultaneously on the projection and image domains. These branches enable bidirectional feature interaction and information fusion, significantly enhancing the accuracy and quality of the reconstructed images. Results: Extensive experiments were conducted under both normal-dose and high-noise conditions to evaluate the performance of TPDDN. The results demonstrate that TPDDN achieves superior qualitative and quantitative performance compared with existing representative methods. Conclusions: The proposed TPDDN offers a robust and effective approach for interior tomography reconstruction by synergistically integrating information from both the projection and image domains. It effectively suppresses cupping artifacts and enhances reconstructed image quality under both normal-dose and high-noise conditions, demonstrating promising potential for safer and more accurate diagnostic imaging.

Citations: 0
Weld defect detection based on improved YOLOv8n.
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2026-03-24 DOI: 10.1177/08953996261433937
Yongqi Yan, Yi Liu, Lingshuang Meng, Junjing Li, Shu Li, Niu Guo, Pengcheng Zhang, Zhiguo Gui

Background: Industrial weld defect detection is challenged by the minimal grayscale contrast between defects and the background, as well as by blurred defect edges, which together hinder the performance of detection algorithms. Moreover, practical industrial environments require high detection accuracy, fast inference speed, and flexible deployment. Objective: To address these challenges, this study proposes an improved YOLOv8n defect detection method that enables more accurate, faster, and more lightweight automated weld defect detection. Methods: The key improvements are as follows. First, in the backbone, the original C2f module is replaced by the C2f_OREPA feature extraction module, constructed with Online Convolutional Re-parameterization (OREPA), which reduces computational complexity and enhances feature representation. Second, a downsampling module, DCDConv, is introduced to replace the conventional convolution after the first standard convolution layer, better preserving fine defect features and improving the detection of subtle defects. Additionally, in the neck, a cross-scale feature fusion module (CCFM) is incorporated to improve detection performance across defects of different scales. Results: Experiments on our self-constructed dataset comprising eight weld defect categories show that the improved model achieves a mean average precision (mAP) of 87.6%, a 4.5% increase over the original YOLOv8n. Meanwhile, the model reduces the number of parameters by 26.9%, decreases computational cost by 35.7%, and achieves an inference speed of 103 frames per second (FPS). On the public NEU-DET dataset, the improved model obtains an mAP of 82.8%, outperforming the original YOLOv8n by 6.7%. Overall, the proposed model surpasses mainstream object detection frameworks, including YOLOv8n, YOLOv12n, Faster R-CNN, and RetinaNet. Conclusion: In summary, the proposed method provides an accurate, efficient, and deployment-friendly solution for weld defect detection in industrial applications, demonstrating substantial practical value.
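The mAP figures quoted above are built on intersection-over-union (IoU) matching between predicted and ground-truth boxes. As a reference point, a minimal IoU computation (the boxes are hypothetical, not from the paper's dataset):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two overlapping boxes: intersection 5x5 = 25, union 100 + 100 - 25 = 175.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.1429
```

A detection counts as a true positive when its IoU with a ground-truth box exceeds a chosen threshold; averaging precision over recall levels and classes yields mAP.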

Citations: 0
Comparative study of the image quality and radiation dose in paranasal-sinus CT with different tube voltages and reconstruction algorithms.
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2026-03-23 DOI: 10.1177/08953996261433936
Ren-Ren Wang, Mei-Tong Ji, Han-Shuo Li, Qi Wang, Yong-Xia Zhao

Objectives: To evaluate the application of different tube voltages and image-reconstruction algorithms in paranasal-sinus computed tomography (CT) and to optimize scanning protocols that balance image quality and radiation dose. Methods: Ninety patients were randomly divided into three groups (A, B, and C). Group A used conventional scanning parameters: tube voltage of 120 kVp, tube current uDose level 1, and the Karl iterative reconstruction algorithm. Groups B and C used tube voltages of 100 and 80 kVp, respectively, with tube current uDose level 1; both the Karl iterative reconstruction algorithm and the artificial intelligence iterative reconstruction (AIIR) algorithm were used. Optimal image-reconstruction noise levels were selected for each group, and the image quality and radiation doses of the best images were statistically analyzed. Results: The best image-reconstruction noise levels for Groups A, B, and C were Karl level 5, AIIR level 5, and AIIR level 4, respectively. The signal-to-noise ratio, contrast-to-noise ratio, figure of merit, and subjective scores of the images in Groups B (AIIR level 5) and C (AIIR level 4) were higher than those in Group A (Karl level 5). The CT dose-index volume, dose-length product, and size-specific dose estimate based on the water-equivalent diameter for Groups B and C were 68.86%, 71.76%, 69.84% and 84.39%, 85.95%, 85.50% lower, respectively, than those of Group A (P < 0.001). Conclusions: A low tube voltage combined with the AIIR algorithm effectively improves image quality and decreases the radiation dose for patients undergoing paranasal-sinus CT. The optimal parameters for paranasal-sinus CT are 80 kVp, uDose level 1, and AIIR level 4.
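The quality and dose metrics compared in this study follow standard ROI-based definitions; a minimal sketch with illustrative values only (not the study's measurements):

```python
def snr(mean_roi, sd_roi):
    """Signal-to-noise ratio: mean ROI attenuation over its standard deviation."""
    return mean_roi / sd_roi

def cnr(mean_roi, mean_bg, sd_bg):
    """Contrast-to-noise ratio between an ROI and a background region."""
    return abs(mean_roi - mean_bg) / sd_bg

def dose_reduction_pct(reference, reduced):
    """Percentage dose reduction of a protocol relative to a reference protocol."""
    return (reference - reduced) / reference * 100.0

# Illustrative numbers (hypothetical HU means/SDs and CTDIvol values):
print(snr(100.0, 5.0))                # 20.0
print(cnr(100.0, 60.0, 5.0))          # 8.0
print(dose_reduction_pct(20.0, 5.0))  # 75.0
```

The reported percentage reductions (e.g. 68.86% for CTDIvol in Group B) are of exactly this form: the low-voltage protocol's value relative to the 120 kVp reference.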

Citations: 0
Visual language model-assisted CT denoising via text-guided diffusion and fidelity maintenance.
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2026-03-12 DOI: 10.1177/08953996251372739
Ye Shen, Ningning Liang, Ailong Cai, Xinrui Zhang, Yizhong Wang, Junru Ren, Zhizhong Zheng, Lei Li, Bin Yan

Reducing radiation dose in computed tomography (CT) and photon-counting CT (PCCT) is crucial for patient safety, but lower doses introduce noise that degrades image quality. Existing denoising methods often rely on supervised learning with paired data or on specific noise assumptions, which poses challenges in clinical practice. A novel Visual-Language Model-assisted CT Denoising (VLD) framework is proposed to address CT image noise while preserving diagnostic fidelity through semantic guidance. Our method leverages the human-level knowledge embedded in multimodal visual-language models and applies it to CT image denoising, enabling the diffusion model to perform restoration guided by semantic understanding. Meanwhile, a tri-domain consistency framework is proposed to further enhance image quality by progressively refining details while preserving structural integrity. Extensive experiments on both simulated CT and real PCCT data demonstrate that the VLD method generates high-quality reconstructed images and generalizes robustly to new scenarios. In simulation experiments, the VLD method achieves average peak signal-to-noise ratio improvements of 0.95 dB and 1.21 dB under the 5000-photon condition, outperforming the WGAN and FBPConvNet methods, which require paired data.
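The PSNR gains quoted above are on the decibel scale; a minimal computation showing what a 0.95 dB improvement means in mean-squared-error terms (the MSE values are illustrative, not from the paper):

```python
import math

def psnr(mse, max_val=1.0):
    """Peak signal-to-noise ratio in dB for a given mean squared error."""
    return 10.0 * math.log10(max_val ** 2 / mse)

# A 0.95 dB PSNR gain corresponds to shrinking the MSE by a factor of 10**(0.95/10) ~ 1.24.
print(psnr(0.01))                             # 20.0
print(psnr(0.01 / 10 ** 0.095) - psnr(0.01))  # ~0.95
```

Because the scale is logarithmic, equal dB gains become progressively harder to achieve as the baseline error shrinks.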

Citations: 0
Domain adaptation for low-dose CT denoising via pretraining and self-supervised fine-tuning.
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2026-03-02 DOI: 10.1177/08953996261419893
Simiao Yuan, Haipeng Lv, Zhedian Zhou, Zhongyi Wu, Jiping Wang, Ming Li, Jian Zheng, Qiang Du

Deep learning-based methods have become the dominant approach for low-dose CT (LDCT) denoising. However, their performance often degrades on cross-domain datasets because of domain gaps, highlighting the need for effective domain adaptation techniques. While domain adaptation methods based on the pretraining-and-fine-tuning paradigm show great potential, they typically require additional labeled data from the target domain, which limits their practicality. This work therefore develops a self-supervised fine-tuning method for LDCT denoising. We propose fine-tuning pretrained models with a self-supervised loss based on pixel-shuffle image preprocessing. Additionally, we design a two-stage fine-tuning strategy to mitigate the input misalignment between the pretraining and fine-tuning stages. Furthermore, to effectively capture prior knowledge from the source domain, we design a dual-scale SwinIR model as the pretrained backbone. We evaluate our method on two public datasets, and the results demonstrate that it bridges the domain gap without requiring target-domain labels, achieving effective denoising performance and strong cross-domain generalization. Code and model for our proposed approach are publicly available at https://github.com/Wasserdawn/TSFDAN.
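The abstract does not spell out the pixel-shuffle preprocessing. One widely used way to build self-supervised pairs from a single noisy image is neighbor sub-sampling within 2x2 cells; this is a sketch under that assumption, not necessarily the authors' exact scheme (all names are mine):

```python
import numpy as np

def neighbor_subsample(img, rng):
    """Split an image into two half-resolution sub-images by drawing two
    distinct pixels from each 2x2 cell. The two sub-images share content
    but carry independent noise, so one can supervise denoising of the other."""
    h2, w2 = img.shape[0] // 2, img.shape[1] // 2
    # Group pixels into (h2, w2, 4) cells of four neighbors each.
    cells = (img[: h2 * 2, : w2 * 2]
             .reshape(h2, 2, w2, 2)
             .transpose(0, 2, 1, 3)
             .reshape(h2, w2, 4))
    idx1 = rng.integers(0, 4, size=(h2, w2))
    idx2 = (idx1 + rng.integers(1, 4, size=(h2, w2))) % 4  # guaranteed distinct pixel
    sub1 = np.take_along_axis(cells, idx1[..., None], axis=2)[..., 0]
    sub2 = np.take_along_axis(cells, idx2[..., None], axis=2)[..., 0]
    return sub1, sub2

noisy = np.arange(16, dtype=float).reshape(4, 4)  # stand-in for a noisy LDCT slice
s1, s2 = neighbor_subsample(noisy, np.random.default_rng(0))
print(s1.shape, s2.shape)  # (2, 2) (2, 2)
```

A self-supervised loss then penalizes the distance between the denoised version of one sub-image and the other, with no clean target required.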

Citations: 0
Single-Mask edge illumination X-ray multimodal imaging: Methodology and parameter impact mechanisms.
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2026-02-23 DOI: 10.1177/08953996261421455
Chang Li, Liangliang Lv, Zhi Zhou, Yinqi Lei, Xiaodong Pan, Cui Zhang, Gongping Li

X-ray multimodal imaging, which extracts absorption, refraction, and scattering signals simultaneously, holds significant potential in biomedical and materials science applications. However, laboratory-based X-ray multimodal imaging remains underdeveloped, with existing techniques constrained by system magnification and detector pixel size. This study employs a single-mask edge illumination (SM EI) configuration and establishes the corresponding single-mask illumination curve (SM IC). Using Geant4 simulations, we validate the feasibility of retrieving all three signals under conventional magnification and large-pixel detectors. Results show accurate extraction of both refraction and scattering signals, with model fitting close to unity. We further explore the impact of key system parameters, including focal spot size, tube voltage, mask thickness, duty cycle, pixel count, and detector operation mode on imaging performance. The simulations reveal that small focal spots and low-energy X-rays enhance contrast, thick masks maintain signal quality at high energy, and low duty cycles and high photon counts improve the contrast-to-noise ratio (CNR). Additionally, the charge summing mode increases refraction CNR by approximately three times compared to standard modes. These findings demonstrate the effectiveness of the SM EI method, enhancing spatial resolution and providing optimization insights for designing laboratory-based X-ray multimodal imaging systems.

Citations: 0
Corrigendum to "Retraction notice".
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2026-01-21 DOI: 10.1177/08953996251405970
Citations: 0
Research on the method for measuring the focal spot size of micro-focus X-ray sources using the JIMA resolution test card.
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2026-01-13 DOI: 10.1177/08953996251403456
Li Fengxiao, Wang Yixin, Xu Haodong, Zhong Guowei, Liu Chengfeng, Yang Run, Zhou Rifeng

Background: Measuring an X-ray source's focal spot size is vital for micro-CT resolution. Standard methods are often too complex or inaccurate. The popular JIMA resolution test card is simple to use but lacks a clear, quantitative formula for determining the actual focal spot size. Objective: This study aims to establish a reliable quantitative link between JIMA resolution and focal spot size using simulations and experiments. Methods: We used Monte Carlo simulations and practical experiments to establish the relationship between JIMA resolution and focal spot size. Results: We found that the focal spot size is twice the line-pair width on the JIMA card when the image contrast (MTF) is at 10%. This method is highly accurate, with a maximum measurement error of less than 8.7% compared with a high-precision technique. Conclusions: Our findings provide a simple, fast, and validated method for measuring focal spot size using the JIMA test card, making it a practical and reliable alternative to more complex procedures.
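The rule reported in this study (focal spot = twice the line-pair width at 10% MTF) reduces to a one-line estimate; the JIMA pattern value below is hypothetical, for illustration only:

```python
def focal_spot_um(line_pair_width_um):
    """Estimate focal spot size (um) as twice the finest JIMA line-pair width
    (um) still resolved at 10% MTF, per the rule reported in this study."""
    return 2.0 * line_pair_width_um

# If the finest JIMA pattern resolved at 10% MTF has a 0.4 um line-pair width:
print(focal_spot_um(0.4))  # 0.8
```

With the stated maximum error below 8.7%, this 0.8 um estimate would bracket the true focal spot within roughly plus or minus 0.07 um.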

{"title":"Research on the method for measuring the focal spot size of micro-focus X-ray sources using the JIMA resolution test card.","authors":"Li Fengxiao, Wang Yixin, Xu Haodong, Zhong Guowei, Liu Chengfeng, Yang Run, Zhou Rifeng","doi":"10.1177/08953996251403456","DOIUrl":"https://doi.org/10.1177/08953996251403456","url":null,"abstract":"<p><p>BackgroundMeasuring an X-ray source's focal spot size is vital for Micro-CT resolution. Standard methods are often too complex or inaccurate. The popular JIMA resolution test card is simple to use but lacks a clear, quantitative formula to determine the actual focal spot size.ObjectiveThis study aims to create a reliable quantitative link between JIMA resolution and focal spot size using simulations and experiments.MethodsWe used Monte Carlo simulations and practical experiments to establish the relationship between JIMA resolution and focal spot size.ResultsWe found that the focal spot size is twice the line pair width on the JIMA card when the image contrast (MTF) is at 10%. This method is highly accurate, with a maximum measurement error of less than 8.7% compared to a high-precision technique.ConclusionsOur findings provide a simple, fast, and validated method for measuring focal spot size using the JIMA test card. This makes it a practical and reliable alternative to more complex procedures.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996251403456"},"PeriodicalIF":1.4,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145967507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
X-ray white beam based 26.7 Hz dynamic tomography.
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date : 2026-01-01 Epub Date: 2025-11-03 DOI: 10.1177/08953996251384476
Rongchang Chen, Honglan Xie, Guohao Du, Zhongliang Li, Tiqiao Xiao

Synchrotron radiation micro-computed tomography (SR-µCT) is a vital technique for the quantitative characterization of three-dimensional internal structures across diverse fields, including energy, integrated circuits, materials science, biomedicine, and archaeology. While SR-µCT provides high spatial resolution and high image contrast, it typically offers only moderate temporal resolution, with acquisition times ranging from minutes to hours. Recently, dynamic SR-µCT has attracted significant interest for its capacity to capture real-time three-dimensional structural evolution. Here, we demonstrate a dynamic SR-µCT system operating at 26.7 Hz, developed at the BL09B test beamline of the Shanghai Synchrotron Radiation Facility using a filtered white beam. The key components of this system include an air-cooled millisecond fast shutter, an air-bearing rotation stage, a high-efficiency detector integrating a Photron FASTCAM SA-Z camera with a custom-designed optical system, and a synchronization clock that ensures precise temporal alignment of all devices. Experimental results confirm the feasibility of this approach for in vivo four-dimensional studies, making it particularly promising for biomedical research and related disciplines.
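The basic arithmetic behind such an acquisition rate can be sketched as follows. The 26.7 Hz tomogram rate is from the abstract; the projection count, the 180° scan range, and the function names are illustrative assumptions, not values reported by the authors.

```python
def camera_fps(tomo_rate_hz: float, projections_per_scan: int) -> float:
    """Camera frame rate needed to record `projections_per_scan`
    projections for every tomogram at `tomo_rate_hz` tomograms/s."""
    return tomo_rate_hz * projections_per_scan

def rotation_speed_rps(tomo_rate_hz: float, deg_per_scan: float = 180.0) -> float:
    """Continuous stage rotation speed (revolutions/s) when each
    tomogram spans `deg_per_scan` degrees of rotation."""
    return tomo_rate_hz * deg_per_scan / 360.0

# Illustrative: 26.7 tomograms/s with an assumed 500 projections
# over 180 degrees each.
print(camera_fps(26.7, 500))      # needed camera frame rate, about 13350 fps
print(rotation_speed_rps(26.7))   # stage speed, about 13.35 rev/s
```

The high frame rates implied by this arithmetic are why a fast camera such as the FASTCAM SA-Z and an efficient (white-beam) source are central to the setup.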

{"title":"X-ray white beam based 26.7 Hz dynamic tomography.","authors":"Rongchang Chen, Honglan Xie, Guohao Du, Zhongliang Li, Tiqiao Xiao","doi":"10.1177/08953996251384476","DOIUrl":"10.1177/08953996251384476","url":null,"abstract":"<p><p>Synchrotron radiation micro-computed tomography (SR-µCT) is a vital technique for the quantitative characterization of three-dimensional internal structures across diverse fields, including energy, integrated circuits, materials science, biomedicine, archaeology etc. While SR-µCT provides high spatial resolution and high image contrast, it typically offers only moderate temporal resolution, with acquisition times ranging from minutes to hours. Recently, dynamic SR-µCT has attracted significant interest for its capacity to capture real-time three-dimensional structural evolution. Here, we demonstrate a dynamic SR-µCT system operating at 26.7<b> </b>Hz, developed at the BL09B test beamline of the Shanghai Synchrotron Radiation Facility using a filtered white beam. The key components of this system include an air-cooling millisecond fast shutter, an air-bearing rotation stage, a high-efficiency detector integrated with a Photron FASTCAM SA-Z camera and a custom-designed optical system, and a synchronization clock to ensure precise temporal alignment of all devices. 
Experimental results confirm the feasibility of this approach for <i>in vivo</i> four-dimensional studies, making it particularly promising for applications in biomedical research and related disciplines.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"92-102"},"PeriodicalIF":1.4,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145440116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A self-training framework for semi-supervised pulmonary vessel segmentation and its application in COPD.
IF 1.4 CAS Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date : 2026-01-01 Epub Date: 2025-10-17 DOI: 10.1177/08953996251384489
Shuiqing Zhao, Meihuan Wang, Jiaxuan Xu, Jie Feng, Wei Qian, Rongchang Chen, Zhenyu Liang, Shouliang Qi, Yanan Wu

Background: Accurate segmentation and quantification of the pulmonary vessels, particularly smaller vessels, from computed tomography (CT) images is fundamental in chronic obstructive pulmonary disease (COPD) patients. Objective: The aim of this study was to segment the pulmonary vasculature using a semi-supervised method. Methods: A self-training framework is proposed that leverages a teacher-student model for the segmentation of pulmonary vessels. First, high-quality annotations are acquired for the in-house data in an interactive way. The model is then trained in a semi-supervised manner: a fully supervised model is trained on a small set of labeled CT images, yielding the teacher model. The teacher model then generates pseudo-labels for the unlabeled CT images, from which reliable ones are selected according to a defined strategy. The student model is trained on these reliable pseudo-labels, and this process is repeated iteratively until optimal performance is achieved. Results: Extensive experiments were performed on non-enhanced CT scans of 125 COPD patients. Quantitative and qualitative analyses demonstrate that the proposed method, Semi2, significantly improves the precision of vessel segmentation by 2.3%, achieving a precision of 90.3%. Quantitative analysis of the pulmonary vessels in COPD further provides insights into how the vasculature differs across disease severities. Conclusion: The proposed method not only improves the performance of pulmonary vascular segmentation but can also be applied to COPD analysis. The code will be made available at https://github.com/wuyanan513/semi-supervised-learning-for-vessel-segmentation.
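The teacher-student self-training loop described above can be sketched in a few lines. This is a toy sketch only: the abstract does not specify the pseudo-label selection strategy, so the confidence-threshold rule, the `fake_teacher` placeholder, and all names below are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def select_reliable(probs: np.ndarray, threshold: float = 0.9):
    """Keep pseudo-labels whose mean per-pixel confidence exceeds
    `threshold` (one possible, assumed selection strategy).

    `probs` holds foreground probabilities with shape (N, H, W)."""
    confidence = np.maximum(probs, 1.0 - probs).mean(axis=(1, 2))
    keep = confidence >= threshold
    pseudo = (probs >= 0.5).astype(np.uint8)
    return keep, pseudo

def self_training_round(teacher_predict, unlabeled, threshold=0.9):
    """One self-training round: the teacher labels the unlabeled scans,
    and only the reliable pseudo-labels extend the student's data."""
    probs = teacher_predict(unlabeled)
    keep, pseudo = select_reliable(probs, threshold)
    return unlabeled[keep], pseudo[keep]

# Toy demo: a fake teacher that is confident on half of the "scans".
rng = np.random.default_rng(0)
unlabeled = rng.random((4, 8, 8))

def fake_teacher(x):
    p = np.zeros_like(x)
    p[:2] = 0.99   # confident foreground predictions
    p[2:] = 0.5    # ambiguous predictions, should be filtered out
    return p

imgs, labels = self_training_round(fake_teacher, unlabeled)
print(imgs.shape, labels.shape)  # only the 2 confident scans survive
```

In the full framework this round would be repeated, retraining the student on labeled data plus the accepted pseudo-labels and promoting it to teacher, until performance stops improving.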

{"title":"A self-training framework for semi-supervised pulmonary vessel segmentation and its application in COPD.","authors":"Shuiqing Zhao, Meihuan Wang, Jiaxuan Xu, Jie Feng, Wei Qian, Rongchang Chen, Zhenyu Liang, Shouliang Qi, Yanan Wu","doi":"10.1177/08953996251384489","DOIUrl":"10.1177/08953996251384489","url":null,"abstract":"<p><p>BackgroundIt is fundamental for accurate segmentation and quantification of the pulmonary vessel, particularly smaller vessels, from computed tomography (CT) images in chronic obstructive pulmonary disease (COPD) patients.ObjectiveThe aim of this study was to segment the pulmonary vasculature using a semi-supervised method.MethodsIn this study, a self-training framework is proposed by leveraging a teacher-student model for the segmentation of pulmonary vessels. First, the high-quality annotations are acquired in the in-house data by an interactive way. Then, the model is trained in the semi-supervised way. A fully supervised model is trained on a small set of labeled CT images, yielding the teacher model. Following this, the teacher model is used to generate pseudo-labels for the unlabeled CT images, from which reliable ones are selected based on a certain strategy. The training of the student model involves these reliable pseudo-labels. This training process is iteratively repeated until an optimal performance is achieved.ResultsExtensive experiments are performed on non-enhanced CT scans of 125 COPD patients. Quantitative and qualitative analyses demonstrate that the proposed method, Semi2, significantly improves the precision of vessel segmentation by 2.3%, achieving a precision of 90.3%. Further, quantitative analysis is conducted in the pulmonary vessel of COPD, providing insights into the differences in the pulmonary vessel across different severity of the disease.ConclusionThe proposed method can not only improve the performance of pulmonary vascular segmentation, but can also be applied in COPD analysis. 
The code will be made available at https://github.com/wuyanan513/semi-supervised-learning-for-vessel-segmentation.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"39-55"},"PeriodicalIF":1.4,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12789263/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145309780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Journal: Journal of X-Ray Science and Technology