Pub Date: 2026-03-26 | DOI: 10.1177/08953996261433954
Haihang Zhao, Pengxiang Ji, Yongzhou Wu, Jintao Zhao, Jing Zou
Title: A Two-Module Parallel Dual-Domain Network for interior tomography reconstruction

Background: Interior tomography is a crucial technique in computed tomography (CT) that aims to minimize radiation exposure by limiting X-ray imaging to the region of interest (ROI) while maintaining diagnostic accuracy. However, traditional reconstruction algorithms often suffer from severe cupping artifacts caused by data truncation, which significantly degrade image quality.
Objective: This study aims to develop a parallel network that effectively integrates information between the projection and image domains to improve interior tomography reconstruction.
Methods: We propose an end-to-end deep learning framework, the Two-Module Parallel Dual-Domain Network (TPDDN), which consists of two key modules. The Initial Restoration Module generates high-quality prior sinograms and images, providing a robust foundation for subsequent processing and effectively mitigating the impact of data truncation. The Interactive Fusion Module, the core of the network, employs two parallel, interactive branches that operate simultaneously on the projection and image domains, enabling bidirectional feature interaction and information fusion and significantly enhancing the accuracy and quality of the reconstructed images.
Results: Extensive experiments under both normal-dose and high-noise conditions demonstrate that TPDDN achieves superior qualitative and quantitative performance compared to existing representative methods.
Conclusions: TPDDN offers a robust and effective approach for interior tomography reconstruction by synergistically integrating information from both the projection and image domains. It effectively suppresses cupping artifacts and enhances reconstructed image quality under both normal-dose and high-noise conditions, demonstrating promising potential for safer and more accurate diagnostic imaging.
Pub Date: 2026-03-24 | DOI: 10.1177/08953996261433937
Yongqi Yan, Yi Liu, Lingshuang Meng, Junjing Li, Shu Li, Niu Guo, Pengcheng Zhang, Zhiguo Gui
Title: Weld defect detection based on improved YOLOv8n

Background: Industrial weld defect detection is challenged by the minimal grayscale contrast between defects and the background, as well as by blurred defect edges, which together hinder the performance of detection algorithms. Moreover, practical industrial environments require high detection accuracy, fast inference speed, and flexible deployment.
Objective: To address these challenges, this study proposes an improved YOLOv8n defect detection method that enables more accurate, faster, and lightweight automated weld defect detection.
Methods: First, in the backbone, the original C2f module is replaced by the C2f_OREPA feature extraction module, constructed with the Online Convolution Parameterization Approach (OREPA), which reduces computational complexity and enhances feature representation. Second, a downsampling module, DCDConv, replaces the conventional convolution after the first standard convolution layer, better preserving fine defect features and improving the detection of subtle defects. Additionally, in the neck, a cross-scale feature fusion module (CCFM) is incorporated to improve detection across defects of different scales.
Results: On a self-constructed dataset comprising eight weld defect categories, the improved model achieves a mean average precision (mAP) of 87.6%, a 4.5% increase over the original YOLOv8n, while reducing the number of parameters by 26.9%, decreasing computational cost by 35.7%, and reaching an inference speed of 103 frames per second (FPS). On the public NEU-DET dataset, it obtains an mAP of 82.8%, outperforming the original YOLOv8n by 6.7%. Overall, the proposed model surpasses mainstream object detection frameworks, including YOLOv8n, YOLOv12n, Faster R-CNN, and RetinaNet.
Conclusion: The proposed method provides an accurate, efficient, and deployment-friendly solution for weld defect detection in industrial applications, demonstrating substantial practical value.
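The mAP figures above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal, generic IoU helper (an illustration of the evaluation criterion, not code from the paper):

```python
# Hypothetical helper: IoU between two axis-aligned boxes in
# (x1, y1, x2, y2) format, the matching criterion underlying mAP.
def iou(box_a, box_b):
    # Intersection rectangle (empty if the boxes do not overlap).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # → ~0.1429 (1/7)
```

A prediction is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 in the classic mAP@0.5 setting).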
Pub Date: 2026-03-23 | DOI: 10.1177/08953996261433936
Ren-Ren Wang, Mei-Tong Ji, Han-Shuo Li, Qi Wang, Yong-Xia Zhao
Title: Comparative study of the image quality and radiation dose in paranasal-sinus CT with different tube voltages and reconstruction algorithms

Objectives: To evaluate different tube voltages and image-reconstruction algorithms in paranasal-sinus computed tomography (CT) and to optimize the scanning protocol while balancing image quality and radiation dose.
Methods: Ninety patients were randomly divided into three groups (A, B, and C). Group A used conventional scanning parameters: a tube voltage of 120 kVp, tube current uDose level 1, and the Karl iterative reconstruction algorithm. Groups B and C used tube voltages of 100 and 80 kVp, respectively, with tube current uDose level 1, reconstructed with both the Karl iterative reconstruction algorithm and the artificial intelligence iterative reconstruction (AIIR) algorithm. The optimal reconstruction noise level was selected for each group, and the image quality and radiation doses of the best images were statistically analyzed.
Results: The best reconstruction noise levels for Groups A, B, and C were Karl level 5, AIIR level 5, and AIIR level 4, respectively. The signal-to-noise ratio, contrast-to-noise ratio, figure of merit, and subjective scores of the images in Groups B (AIIR level 5) and C (AIIR level 4) were higher than those in Group A (Karl level 5). Relative to Group A, the CT dose-index volume, dose-length product, and size-specific dose estimate based on the water-equivalent diameter were 68.86%, 71.76%, and 69.84% lower in Group B, and 84.39%, 85.95%, and 85.50% lower in Group C (P < 0.001).
Conclusions: A low tube voltage combined with the AIIR algorithm effectively improves image quality and decreases the radiation dose for patients undergoing paranasal-sinus CT. The optimal parameters are 80 kVp, uDose level 1, and AIIR level 4.
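The dose comparisons above reduce to simple percentage arithmetic. A small sketch with made-up numbers (the study's raw dose values are not given here; only the percentage reductions are reported):

```python
# Percent reduction of a dose metric relative to a reference group.
def percent_reduction(reference, value):
    return (reference - value) / reference * 100.0

# Illustrative: if Group A's CTDIvol were 10.0 mGy and Group B's 3.114 mGy,
# the reduction would be 68.86%, the order reported for Group B.
print(round(percent_reduction(10.0, 3.114), 2))  # → 68.86
```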
Pub Date: 2026-03-12 | DOI: 10.1177/08953996251372739
Ye Shen, Ningning Liang, Ailong Cai, Xinrui Zhang, Yizhong Wang, Junru Ren, Zhizhong Zheng, Lei Li, Bin Yan
Title: Visual language model-assisted CT denoising via text-guided diffusion and fidelity maintenance

Reducing radiation dose in computed tomography (CT) and photon-counting CT (PCCT) is crucial for patient safety, but lower doses introduce noise that degrades image quality. Existing denoising methods often rely on supervised learning with paired data or on specific noise assumptions, which poses challenges in clinical practice. A novel Visual-Language Model-assisted CT Denoising (VLD) framework is proposed to address CT image noise while preserving diagnostic fidelity through semantic guidance. The method leverages the human-level knowledge embedded in multimodal visual-language models for CT image denoising, enabling the diffusion model to perform restoration guided by semantic understanding. A tri-domain consistency framework further enhances image quality by progressively refining details while preserving structural integrity. Extensive experiments on both simulated CT and real PCCT data demonstrate that VLD generates high-quality reconstructions and generalizes robustly to new scenarios. In simulation experiments under the 5000-photon condition, VLD achieves average peak signal-to-noise ratio improvements of 0.95 dB and 1.21 dB over the WGAN and FBPConvNet methods, respectively, both of which require paired data.
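The reported gains are in peak signal-to-noise ratio. A generic implementation of the standard PSNR definition (a sketch of the metric, not the authors' evaluation code):

```python
import numpy as np

# PSNR = 10 * log10(MAX^2 / MSE), with MAX the data range of the images.
def psnr(reference, test, data_range=1.0):
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = clean + 0.01 * rng.standard_normal((64, 64))
print(psnr(clean, noisy))  # noise std 0.01 → roughly 40 dB
```

A 1 dB PSNR improvement corresponds to about a 21% reduction in mean squared error, so the 0.95-1.21 dB margins are substantial.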
Pub Date: 2026-03-02 | DOI: 10.1177/08953996261419893
Simiao Yuan, Haipeng Lv, Zhedian Zhou, Zhongyi Wu, Jiping Wang, Ming Li, Jian Zheng, Qiang Du
Title: Domain adaptation for low-dose CT denoising via pretraining and self-supervised fine-tuning

Deep learning-based methods have become the dominant approach for low-dose CT (LDCT) denoising. However, their performance often degrades on cross-domain datasets due to domain gaps, highlighting the need for effective domain adaptation techniques. While domain adaptation methods based on the pretraining and fine-tuning paradigm show great potential, they typically require additional labeled data from the target domain, which limits their practicality. This work therefore develops a self-supervised fine-tuning method for LDCT denoising: pretrained models are fine-tuned with a self-supervised loss based on pixel-shuffle image preprocessing. A two-stage fine-tuning strategy mitigates the input misalignment between the pretraining and fine-tuning stages, and a dual-scale SwinIR model serves as the pretrained backbone to capture prior knowledge from the source domain. Evaluations on two public datasets demonstrate that the method bridges the domain gap without requiring target-domain labels, achieving effective denoising performance and strong cross-domain generalization. Code and models are publicly available at https://github.com/Wasserdawn/TSFDAN.
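Pixel-shuffle preprocessing for self-supervised denoising is commonly realized by splitting an image into two sub-images drawn from disjoint pixel positions, so one can serve as a noisy training target for the other without labels (as in Neighbor2Neighbor-style training). The sketch below shows only this splitting step; it is our illustration, and the paper's exact scheme surely differs:

```python
import numpy as np

# Split an image into two sub-images from disjoint pixel positions:
# take the two diagonal pixels of each non-overlapping 2x2 cell.
def pixel_shuffle_pair(img):
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 cell
    b = img[1::2, 1::2]  # bottom-right pixel of each 2x2 cell
    return a, b

img = np.arange(16, dtype=np.float64).reshape(4, 4)
a, b = pixel_shuffle_pair(img)
print(a)  # [[ 0.  2.] [ 8. 10.]]
print(b)  # [[ 5.  7.] [13. 15.]]
```

Because the two sub-images see independent noise realizations of (nearly) the same content, a network can be trained to map one to the other, which is what makes label-free fine-tuning on the target domain possible.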
Pub Date: 2026-02-23 | DOI: 10.1177/08953996261421455
Chang Li, Liangliang Lv, Zhi Zhou, Yinqi Lei, Xiaodong Pan, Cui Zhang, Gongping Li
Title: Single-Mask edge illumination X-ray multimodal imaging: Methodology and parameter impact mechanisms

X-ray multimodal imaging, which extracts absorption, refraction, and scattering signals simultaneously, holds significant potential in biomedical and materials science applications. However, laboratory-based X-ray multimodal imaging remains underdeveloped, with existing techniques constrained by system magnification and detector pixel size. This study employs a single-mask edge illumination (SM EI) configuration and establishes the corresponding single-mask illumination curve (SM IC). Using Geant4 simulations, we validate the feasibility of retrieving all three signals under conventional magnification with large-pixel detectors. Results show accurate extraction of both refraction and scattering signals, with model fits close to unity. We further explore the impact of key system parameters (focal spot size, tube voltage, mask thickness, duty cycle, pixel count, and detector operation mode) on imaging performance. The simulations reveal that small focal spots and low-energy X-rays enhance contrast, thick masks maintain signal quality at high energy, and low duty cycles and high photon counts improve the contrast-to-noise ratio (CNR). Additionally, the charge summing mode increases the refraction CNR by approximately three times compared with standard modes. These findings demonstrate the effectiveness of the SM EI method, enhancing spatial resolution and providing optimization insights for designing laboratory-based X-ray multimodal imaging systems.
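The CNR used to compare these configurations follows the usual region-of-interest definition (a generic sketch; the paper's exact definition may differ in detail):

```python
import numpy as np

# CNR = |mean(ROI) - mean(background)| / std(background).
def cnr(roi, background):
    return abs(roi.mean() - background.mean()) / background.std()

# Toy pixel samples from a feature region and a background region.
roi = np.array([110.0, 112.0, 108.0, 110.0])
bg = np.array([100.0, 102.0, 98.0, 100.0])
print(cnr(roi, bg))  # 10 / sqrt(2) ≈ 7.07
```

By this definition, a mode that triples the refraction CNR either triples the signal difference or cuts the background noise to a third, or some combination of both.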
Pub Date: 2026-01-21 | DOI: 10.1177/08953996251405970
Title: Corrigendum to "Retraction notice"
Pub Date: 2026-01-13 | DOI: 10.1177/08953996251403456
Li Fengxiao, Wang Yixin, Xu Haodong, Zhong Guowei, Liu Chengfeng, Yang Run, Zhou Rifeng
Title: Research on the method for measuring the focal spot size of micro-focus X-ray sources using the JIMA resolution test card

Background: Measuring an X-ray source's focal spot size is vital for Micro-CT resolution, but standard methods are often too complex or inaccurate. The popular JIMA resolution test card is simple to use but lacks a clear, quantitative formula for determining the actual focal spot size.
Objective: This study aims to establish a reliable quantitative link between JIMA resolution and focal spot size using simulations and experiments.
Methods: We used Monte Carlo simulations and practical experiments to establish the relationship between JIMA resolution and focal spot size.
Results: The focal spot size is twice the line-pair width on the JIMA card when the image contrast (MTF) is at 10%. The method is highly accurate, with a maximum measurement error of less than 8.7% compared with a high-precision reference technique.
Conclusions: Our findings provide a simple, fast, and validated method for measuring focal spot size using the JIMA test card, making it a practical and reliable alternative to more complex procedures.
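The reported rule can be applied directly. In the sketch below, the doubling from line-pair width to focal spot size is the study's finding; the conversion from the JIMA chart's labeled line width to line-pair width assumes a 50% duty cycle (equal line and space widths), which is our assumption, not a statement from the paper:

```python
# Focal spot size from the smallest JIMA pattern resolvable at 10% MTF.
def focal_spot_size_um(resolvable_line_width_um):
    # Assumption: JIMA patterns are labeled by line width, and one line
    # pair (line + space) is twice that width at 50% duty cycle.
    line_pair_width_um = 2.0 * resolvable_line_width_um
    # Reported rule: focal spot ≈ 2 × line-pair width at 10% MTF.
    return 2.0 * line_pair_width_um

print(focal_spot_size_um(1.0))  # → 4.0 (µm)
```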
Pub Date: 2026-01-01 (Epub 2025-11-03) | DOI: 10.1177/08953996251384476
Rongchang Chen, Honglan Xie, Guohao Du, Zhongliang Li, Tiqiao Xiao
Synchrotron radiation micro-computed tomography (SR-µCT) is a vital technique for the quantitative characterization of three-dimensional internal structures across diverse fields, including energy, integrated circuits, materials science, biomedicine, archaeology etc. While SR-µCT provides high spatial resolution and high image contrast, it typically offers only moderate temporal resolution, with acquisition times ranging from minutes to hours. Recently, dynamic SR-µCT has attracted significant interest for its capacity to capture real-time three-dimensional structural evolution. Here, we demonstrate a dynamic SR-µCT system operating at 26.7Hz, developed at the BL09B test beamline of the Shanghai Synchrotron Radiation Facility using a filtered white beam. The key components of this system include an air-cooling millisecond fast shutter, an air-bearing rotation stage, a high-efficiency detector integrated with a Photron FASTCAM SA-Z camera and a custom-designed optical system, and a synchronization clock to ensure precise temporal alignment of all devices. Experimental results confirm the feasibility of this approach for in vivo four-dimensional studies, making it particularly promising for applications in biomedical research and related disciplines.
X-ray white beam based 26.7 Hz dynamic tomography.
Pub Date : 2026-01-01 DOI: 10.1177/08953996251384476
Rongchang Chen, Honglan Xie, Guohao Du, Zhongliang Li, Tiqiao Xiao
Journal of X-Ray Science and Technology, pp. 92-102.
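The 26.7 Hz tomogram rate above fixes a tight time budget for each scan. A back-of-envelope sketch of the timing constraints, assuming a half-scan (180°) acquisition and a hypothetical projection count (neither stated in the abstract):

```python
# Timing budget for 26.7 Hz dynamic tomography.
# Assumptions (illustrative, not from the paper): one tomogram per
# half-rotation (180° parallel-beam scan) and 1000 projections per tomogram.
tomo_rate_hz = 26.7
time_per_tomo_s = 1.0 / tomo_rate_hz            # time window per tomogram
rotation_speed_rps = 0.5 / time_per_tomo_s      # half-turn must fit the window
projections_per_tomo = 1000                     # hypothetical
per_projection_us = time_per_tomo_s / projections_per_tomo * 1e6

print(f"time per tomogram: {time_per_tomo_s * 1e3:.1f} ms")
print(f"required rotation speed: {rotation_speed_rps:.2f} rev/s "
      f"({rotation_speed_rps * 60:.0f} rpm)")
print(f"per-projection time budget: {per_projection_us:.1f} µs")
```

Under these assumptions each tomogram must be captured in about 37.5 ms, which is why a millisecond-class shutter, an air-bearing rotation stage, and a high-frame-rate camera with a common synchronization clock are all required.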
BackgroundAccurate segmentation and quantification of the pulmonary vessels, particularly smaller vessels, from computed tomography (CT) images is fundamental in chronic obstructive pulmonary disease (COPD) patients.ObjectiveThe aim of this study was to segment the pulmonary vasculature using a semi-supervised method.MethodsIn this study, a self-training framework leveraging a teacher-student model is proposed for the segmentation of pulmonary vessels. First, high-quality annotations are acquired for the in-house data in an interactive way. Then, the model is trained in a semi-supervised manner. A fully supervised model is trained on a small set of labeled CT images, yielding the teacher model. The teacher model is then used to generate pseudo-labels for the unlabeled CT images, from which reliable pseudo-labels are selected. The student model is trained on these reliable pseudo-labels. This process is repeated iteratively until optimal performance is achieved.ResultsExtensive experiments were performed on non-enhanced CT scans of 125 COPD patients. Quantitative and qualitative analyses demonstrate that the proposed method, Semi2, significantly improves the precision of vessel segmentation by 2.3%, achieving a precision of 90.3%. Furthermore, quantitative analysis of the pulmonary vessels in COPD provides insights into the differences in the pulmonary vasculature across different severities of the disease.ConclusionThe proposed method not only improves the performance of pulmonary vascular segmentation but can also be applied to COPD analysis. The code will be made available at https://github.com/wuyanan513/semi-supervised-learning-for-vessel-segmentation.
A self-training framework for semi-supervised pulmonary vessel segmentation and its application in COPD.
Pub Date : 2026-01-01 DOI: 10.1177/08953996251384489
Shuiqing Zhao, Meihuan Wang, Jiaxuan Xu, Jie Feng, Wei Qian, Rongchang Chen, Zhenyu Liang, Shouliang Qi, Yanan Wu
Journal of X-Ray Science and Technology, pp. 39-55. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12789263/pdf/
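The teacher-student self-training loop described in this abstract (train teacher on labeled data, generate pseudo-labels, keep only reliable ones, train student, iterate) can be sketched as follows. This is a minimal toy illustration under stated assumptions: the model class, the `train`/`predict` names, and the confidence-threshold selection rule are all hypothetical placeholders, not the authors' Semi2 implementation.

```python
# Toy sketch of teacher-student self-training with pseudo-label selection.
# ToyVesselModel stands in for a segmentation network; "training" just
# records (image, label) pairs so the data flow is visible.
from dataclasses import dataclass, field


@dataclass
class ToyVesselModel:
    seen: list = field(default_factory=list)  # (image, label) pairs trained on

    def train(self, images, labels):
        self.seen.extend(zip(images, labels))
        return self

    def predict(self, image):
        # Return (pseudo_label, confidence); a real model would segment here.
        return image, 0.9


def self_train(labeled, unlabeled, rounds=3, tau=0.8):
    """Iterate teacher -> pseudo-labels -> reliable subset -> student."""
    imgs, labs = zip(*labeled)
    teacher = ToyVesselModel().train(list(imgs), list(labs))
    for _ in range(rounds):
        # Select pseudo-labels whose confidence clears the threshold tau.
        reliable = []
        for img in unlabeled:
            pseudo, conf = teacher.predict(img)
            if conf >= tau:
                reliable.append((img, pseudo))
        # Fresh student trains on labeled data plus the reliable pseudo-labels.
        student = ToyVesselModel().train(list(imgs), list(labs))
        if reliable:
            r_imgs, r_labs = zip(*reliable)
            student.train(list(r_imgs), list(r_labs))
        teacher = student  # the student becomes the next round's teacher
    return teacher


model = self_train([("ct1", "mask1")], ["ct2", "ct3"])
print(len(model.seen))  # labeled pairs + reliable pseudo-labeled pairs
```

The key design point mirrored here is that only pseudo-labels passing a reliability criterion enter the student's training set, which keeps label noise from compounding across iterations.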