Pub Date: 2026-01-13. DOI: 10.1177/08953996251403456
Li Fengxiao, Wang Yixin, Xu Haodong, Zhong Guowei, Liu Chengfeng, Yang Run, Zhou Rifeng
Background: Measuring an X-ray source's focal spot size is vital for Micro-CT resolution. Standard methods are often too complex or inaccurate. The popular JIMA resolution test card is simple to use but lacks a clear, quantitative formula to determine the actual focal spot size. Objective: This study aims to establish a reliable quantitative link between JIMA resolution and focal spot size using simulations and experiments. Methods: We used Monte Carlo simulations and practical experiments to establish the relationship between JIMA resolution and focal spot size. Results: We found that the focal spot size is twice the line-pair width on the JIMA card when the image contrast (MTF) is at 10%. The method is highly accurate, with a maximum measurement error below 8.7% compared to a high-precision technique. Conclusions: Our findings provide a simple, fast, and validated method for measuring focal spot size with the JIMA test card, making it a practical and reliable alternative to more complex procedures.
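The stated relation can be applied directly: the estimated focal spot size is twice the width of the finest JIMA line pair still resolved at 10% MTF. A minimal sketch (the function name and the 0.8 µm example value are ours, for illustration only):

```python
def focal_spot_size_um(line_pair_width_um: float) -> float:
    """Estimate the focal spot size (um) from the finest JIMA line-pair
    width (um) resolved at 10% contrast (MTF), using the paper's stated
    relation: focal spot size = 2 x line-pair width."""
    return 2.0 * line_pair_width_um

# Example: if the 0.8 um pattern is the finest resolved at 10% MTF,
# the estimated focal spot size is 1.6 um.
print(focal_spot_size_um(0.8))  # -> 1.6
```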
Title: "Research on the method for measuring the focal spot size of micro-focus X-ray sources using the JIMA resolution test card." Journal of X-Ray Science and Technology.
Pub Date: 2026-01-01. Epub Date: 2025-11-03. DOI: 10.1177/08953996251384476
Rongchang Chen, Honglan Xie, Guohao Du, Zhongliang Li, Tiqiao Xiao
Synchrotron radiation micro-computed tomography (SR-µCT) is a vital technique for the quantitative characterization of three-dimensional internal structures across diverse fields, including energy, integrated circuits, materials science, biomedicine, and archaeology. While SR-µCT provides high spatial resolution and high image contrast, it typically offers only moderate temporal resolution, with acquisition times ranging from minutes to hours. Recently, dynamic SR-µCT has attracted significant interest for its capacity to capture real-time three-dimensional structural evolution. Here, we demonstrate a dynamic SR-µCT system operating at 26.7 Hz, developed at the BL09B test beamline of the Shanghai Synchrotron Radiation Facility using a filtered white beam. The key components of the system include an air-cooled millisecond fast shutter, an air-bearing rotation stage, a high-efficiency detector integrating a Photron FASTCAM SA-Z camera with a custom-designed optical system, and a synchronization clock to ensure precise temporal alignment of all devices. Experimental results confirm the feasibility of this approach for in vivo four-dimensional studies, making it particularly promising for applications in biomedical research and related disciplines.
Title: "X-ray white beam based 26.7 Hz dynamic tomography." Journal of X-Ray Science and Technology, pp. 92-102.
Background: Accurate segmentation and quantification of the pulmonary vessels, particularly smaller vessels, from computed tomography (CT) images is fundamental in chronic obstructive pulmonary disease (COPD) patients. Objective: The aim of this study was to segment the pulmonary vasculature using a semi-supervised method. Methods: A self-training framework is proposed that leverages a teacher-student model for the segmentation of pulmonary vessels. First, high-quality annotations are acquired on the in-house data in an interactive way. Then, the model is trained in a semi-supervised manner. A fully supervised model is trained on a small set of labeled CT images, yielding the teacher model. The teacher model is then used to generate pseudo-labels for the unlabeled CT images, from which reliable ones are selected according to a selection strategy. The student model is trained on these reliable pseudo-labels. This process is repeated iteratively until optimal performance is achieved. Results: Extensive experiments are performed on non-enhanced CT scans of 125 COPD patients. Quantitative and qualitative analyses demonstrate that the proposed method, Semi2, significantly improves the precision of vessel segmentation by 2.3%, achieving a precision of 90.3%. Further, quantitative analysis of the pulmonary vessels in COPD provides insights into the differences in the pulmonary vasculature across different severities of the disease. Conclusion: The proposed method not only improves the performance of pulmonary vessel segmentation but can also be applied in COPD analysis. The code will be made available at https://github.com/wuyanan513/semi-supervised-learning-for-vessel-segmentation.
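The teacher-student loop described above can be sketched generically. Everything here (function names, and a confidence-threshold selection rule standing in for the paper's unspecified "certain strategy") is our illustrative assumption, not the authors' actual API:

```python
def self_training(labeled, unlabeled, train, predict, rounds=3, threshold=0.9):
    """Iterative teacher-student self-training (generic sketch).

    labeled   -- list of (image, mask) pairs with manual annotations
    unlabeled -- list of images without masks
    train     -- callable: list of (image, mask) pairs -> model
    predict   -- callable: (model, image) -> (mask, confidence)
    """
    model = train(labeled)  # teacher trained on the small labeled set
    for _ in range(rounds):
        # Teacher pseudo-labels every unlabeled image.
        pseudo = [(x, *predict(model, x)) for x in unlabeled]
        # Keep only reliable pseudo-labels (here: a confidence threshold).
        reliable = [(x, y) for x, y, c in pseudo if c >= threshold]
        # Student retrains on labeled data plus the reliable pseudo-labels.
        model = train(labeled + reliable)
    return model
```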
Title: "A self-training framework for semi-supervised pulmonary vessel segmentation and its application in COPD." Authors: Shuiqing Zhao, Meihuan Wang, Jiaxuan Xu, Jie Feng, Wei Qian, Rongchang Chen, Zhenyu Liang, Shouliang Qi, Yanan Wu. Pub Date: 2026-01-01. DOI: 10.1177/08953996251384489. Journal of X-Ray Science and Technology, pp. 39-55. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12789263/pdf/
Pub Date: 2026-01-01. Epub Date: 2025-09-26. DOI: 10.1177/08953996251370578
Haifang Fu, Zhiting Liu, Yunsong Zhao
Exterior CT imaging is a special X-ray imaging problem that allows nondestructive testing of relatively large tubular samples using smaller detectors. However, due to the incomplete nature of the exterior projection data, the exterior CT imaging problem is highly challenging. In this study, we introduce a new CT reconstruction model for polychromatic-spectrum exterior problems, called the Polychromatic Exterior Discrete Grayscale PAEDS (PE-DG-PAEDS) model. The model is based on a prior of discrete grayscale values in images and introduces a radial regularization term that uses polychromatic spectrum information for exterior CT reconstruction. Additionally, an alternating minimization method and the Discrete Algebraic Reconstruction Technique (DART) are combined in alternating iterations to provide a solution algorithm for the model. Experiments on both simulated and real data validate the proposed model and algorithm; the results indicate that the method effectively suppresses artifacts associated with the polychromatic X-ray CT exterior problem.
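The discrete-grayscale prior at the heart of DART-style iterations amounts to snapping the current estimate onto a small set of admissible gray values between algebraic update steps. A minimal sketch of that projection step (function name and example values are ours, illustrative only):

```python
import numpy as np

def snap_to_gray_levels(image, levels):
    """Project each pixel of `image` onto the nearest admissible gray
    level, enforcing a discrete-grayscale prior between
    algebraic-reconstruction updates (DART-style segmentation step)."""
    levels = np.asarray(levels, dtype=float)
    # Broadcast pixels against levels, pick the index of the nearest level.
    nearest = np.abs(image[..., None] - levels).argmin(axis=-1)
    return levels[nearest]

# Example: a two-material sample (background = 0.0, material = 1.0).
noisy = np.array([0.05, 0.93, 0.48, 0.71])
print(snap_to_gray_levels(noisy, [0.0, 1.0]))  # -> [0. 1. 0. 1.]
```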
Title: "A discrete grayscale prior-based exterior reconstruction algorithm for polychromatic X-ray CT." Journal of X-Ray Science and Technology, pp. 16-26.
Pub Date: 2026-01-01. Epub Date: 2025-10-01. DOI: 10.1177/08953996251355885
Chenyun Fang, Yarui Xi, Rui Hu, Peng Liu, Yanjun Zhang, Wenjian Wang, Boris Epel, Howard Halpern, Zhiwei Qiao
Background: Pulsed electron paramagnetic resonance (EPR) imaging (EPRI) is an advanced oxygen imaging modality for precision radiotherapy. It typically acquires high signal-to-noise ratio (SNR) data by averaging repeatedly collected projections at each angle to suppress random noise, and this scan mode is the reason for the slow scan speed. The present mitigation is to reduce the number of repetitions (termed 'shots') for each projection, which leads to noisy projections. Objective: Although the directional total variation (DTV) algorithm can reconstruct an image from these noisy projections, it may exhibit staircase artifacts. To solve this problem, we propose a novel high-order DTV (HODTV) algorithm for fast 3D pulsed EPRI. Methods: The HODTV model introduces a regularization on high-order derivatives, in which the objective term and the high-order derivative regularization serve data fidelity and detail recovery, respectively. We then derive its Chambolle-Pock (CP) solving algorithm and verify its correctness. To evaluate the HODTV algorithm, both qualitative and quantitative experiments are performed on real-world data. Results: Compared with the filtered back projection (FBP), total variation (TV), and DTV algorithms, the results demonstrate that our method achieves more accurate reconstruction. In specific cases, our algorithm requires only 100 shots of scan acquisitions in ∼6 seconds, whereas the FBP algorithm needs 2000 shots taking ∼120 seconds. Conclusions: Practical clinical imaging workflows, including but not limited to fast 3D pulsed EPRI, may benefit from this work.
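The general shape of such a model, written as a generic first-plus-second-order TV objective (our rendering, not necessarily the paper's exact HODTV formulation), is:

```latex
\min_{u}\ \frac{1}{2}\,\bigl\| \mathcal{R}u - g \bigr\|_2^2
  \;+\; \lambda_1 \bigl\| \nabla u \bigr\|_1
  \;+\; \lambda_2 \bigl\| \nabla^{2} u \bigr\|_1
```

Here \(\mathcal{R}\) denotes the projection operator, \(g\) the measured projections, and \(\lambda_1,\lambda_2\) weights balancing data fidelity against first- and second-order smoothness; the second-order term is what counteracts the staircase artifacts of plain (D)TV.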
Title: "A novel high order directional total variation algorithm of EPR imaging for fast scan." Journal of X-Ray Science and Technology, pp. 27-38.
Pub Date: 2026-01-01. Epub Date: 2025-10-17. DOI: 10.1177/08953996251380012
Jintao Fu, Peng Cong, Tianchen Zeng, Xinjiang Hou, Bo Zhao, Ximing Liu, Yuewen Sun
Background: Non-destructive testing (NDT) is crucial for the preservation and restoration of ancient wooden structures, with computed tomography (CT) increasingly utilized in this field. However, practical CT examinations of these structures, which are often characterized by complex configurations, large dimensions, and on-site constraints, frequently encounter difficulties in acquiring full-angle projection data. Consequently, images reconstructed under limited-angle conditions suffer from poor quality and severe artifacts, hindering accurate assessment of critical internal features such as mortise-tenon joints and incipient damage. Objective: This study aims to develop a novel algorithm capable of achieving high-quality image reconstruction from incomplete, limited-angle projection data. Methods: We propose CADRE (Contour-guided Alternating Direction Method of Multipliers-optimized Deep Radon Enhancement), an unsupervised deep learning reconstruction framework. CADRE integrates the ADMM optimization strategy, the learning paradigm of Deep Radon Prior (DRP) networks, and a geometric contour-guidance mechanism. This approach enhances reconstruction performance by iteratively optimizing network parameters and input images, without requiring large-scale paired training data, rendering it particularly suitable for cultural heritage applications. Results: Systematic validation using both a digital dougong simulation model of the Yingxian Wooden Pagoda and a physical wooden dougong model from Foguang Temple demonstrates that, under typical 90° and 120° limited-angle conditions, the CADRE algorithm significantly outperforms traditional FBP, the iterative reconstruction algorithms SART and ADMM-TV, and other representative unsupervised deep learning methods (Deep Image Prior, DIP; Residual Back-Projection with DIP, RBP-DIP; DRP). This superiority is evident in quantitative metrics such as PSNR and SSIM, as well as in visual quality, including artifact suppression and preservation of structural details. CADRE accurately reproduces internal mortise-tenon configurations and fine features within ancient timber. Conclusion: The CADRE algorithm provides a robust and efficient solution for limited-angle CT reconstruction of ancient wooden structures. It overcomes the limitations of existing methods in handling incomplete data, significantly enhances the quality of reconstructed images and the characterization of internal fine structures, and offers strong technical support for the scientific understanding, condition assessment, and precise conservation of cultural heritage.
Title: "CADRE: A novel unsupervised reconstruction algorithm for limited-angle CT of ancient wooden structures." Journal of X-Ray Science and Technology, pp. 56-73.
Pub Date: 2026-01-01. Epub Date: 2025-10-28. DOI: 10.1177/08953996251375815
Jie Guo, Ailong Cai, Junru Ren, Zhizhong Zheng, Lei Li, Bin Yan
Accurate material decomposition constitutes the foundation of Spectral Computed Tomography (Spectral CT) applications across diverse domains. Nevertheless, conventional model-based material decomposition methods face significant limitations, including sparse-view sampling artifacts, slow convergence rates, noise amplification, and inherent ill-posedness, challenges that are particularly pronounced in geometrically inconsistent imaging. To overcome these constraints, we propose an unsupervised deep learning framework that synergistically optimizes virtual monochromatic images (VMIs) through a probabilistic diffusion model for direct material decomposition in sparse-view spectral CT. The proposed methodology introduces VMIs as critical differentiation enhancers for polychromatic projections, effectively addressing convergence limitations in iterative reconstruction algorithms. By incorporating probabilistic diffusion priors into the optimization process, we achieve superior refinement of material-specific representations. Our framework systematically enforces dual constraints: 1) a data fidelity term ensuring measurement consistency, and 2) a probabilistic regularization suppressing unwanted structures, thereby guaranteeing anatomically plausible material image reconstruction. Comprehensive validation on preclinical data demonstrates that our method achieves a 10 dB improvement in the peak signal-to-noise ratio (PSNR) and a 4.31% increase in structural similarity (SSIM) for soft-tissue reconstructions compared to the best comparison algorithm with 90 projections. Experimental results confirm the algorithm's robustness under challenging conditions, maintaining reconstruction fidelity even with geometric inconsistency and sparse sampling.
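The dual constraints can be written as a single variational objective (our generic rendering, not the paper's exact notation):

```latex
\hat{x} \;=\; \arg\min_{x}\ \underbrace{\tfrac{1}{2}\,\bigl\| \mathcal{A}(x) - y \bigr\|_2^2}_{\text{data fidelity}}
  \;+\; \lambda\, \underbrace{R_{\theta}(x)}_{\text{diffusion prior}}
```

where \(\mathcal{A}\) is the polychromatic forward projector, \(y\) the sparse-view measurements, and \(R_{\theta}\) the regularization supplied by the pretrained diffusion model.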
Title: "Accelerating direct material decomposition via diffusion probabilistic model for Sparse-view spectral computed tomography." Journal of X-Ray Science and Technology, pp. 74-91.
Pub Date: 2026-01-01. Epub Date: 2025-12-12. DOI: 10.1177/08953996251375817
Mehtab Kiran Suddle, Maryam Bashir
Cancer remains a leading cause of mortality, and early detection significantly improves survival rates. Advances in technology have enabled automated cancer detection using medical imaging and microarray gene expression data. However, these datasets often contain redundant or noisy features that hinder classification performance. Feature selection is a key preprocessing step to enhance accuracy and reduce computational costs. In cancer-related medical research, optimizing deep learning architectures is crucial for better classification outcomes. Metaheuristic algorithms have been popular for tackling both feature selection and deep neural network (DNN) optimization. This survey reviews 91 peer-reviewed articles (2012-2025) on metaheuristics for feature selection and DNN optimization in cancer classification using medical images and microarray data. Literature was sourced from databases such as Google Scholar, IEEE Xplore, Elsevier, ResearchGate, Springer, MDPI, and ScienceDirect. Our findings indicate that k-Nearest Neighbors (kNN), Support Vector Machines (SVM), and Convolutional Neural Networks (CNN) are the most widely adopted classifiers, used in 23%, 21%, and 18% of cases, respectively. Among metaheuristics, Particle Swarm Optimization (PSO), Genetic Algorithm (GA), and Ant Colony Optimization (ACO) dominate the landscape, appearing in 13%, 11%, and 10% of studies. We also review 39 image-based and 44 microarray cancer datasets. This survey identifies critical gaps in current research and proposes several future directions to enhance model robustness and classification accuracy. Through a detailed comparative analysis, this study provides valuable insights for researchers and decision-makers, highlighting the need for continued innovation in computational methods for cancer detection and diagnosis.
{"title":"Optimizing cancer classification: A metaheuristic-driven review of feature selection and deep learning approaches.","authors":"Mehtab Kiran Suddle, Maryam Bashir","doi":"10.1177/08953996251375817","DOIUrl":"10.1177/08953996251375817","url":null,"abstract":"<p><p>Cancer remains a leading cause of mortality, and early detection significantly improves survival rates. Advances in technology have enabled automated cancer detection using medical imaging and microarray gene expression data. However, these datasets often contain redundant or noisy features that hinder classification performance. Feature selection is a key preprocessing step to enhance accuracy and reduce computational costs. In cancer-related medical research, optimizing deep learning architectures is crucial for better classification outcomes. Metaheuristic algorithms have become popular for tackling both feature selection and deep neural network (DNN) optimization. This survey reviews 91 peer-reviewed articles (2012-2025) on metaheuristics for feature selection and DNN optimization in cancer classification using medical images and microarray data. Literature was sourced from databases such as Google Scholar, IEEE Xplore, Elsevier, ResearchGate, Springer, MDPI, and ScienceDirect. Our findings indicate that k-Nearest Neighbors (kNN), Support Vector Machines (SVM), and Convolutional Neural Networks (CNN) are the most widely adopted classifiers, used in 23%, 21%, and 18% of cases, respectively. Among metaheuristics, Particle Swarm Optimization (PSO), Genetic Algorithm (GA), and Ant Colony Optimization (ACO) dominate the landscape, appearing in 13%, 11%, and 10% of studies. We also review 39 image-based and 44 microarray cancer datasets. This survey identifies critical gaps in current research and proposes several future directions to enhance model robustness and classification accuracy. 
Through a detailed comparative analysis, this study provides valuable insights for researchers and decision-makers, highlighting the need for continued innovation in computational methods for cancer detection and diagnosis.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"103-148"},"PeriodicalIF":1.4,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145745453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-01Epub Date: 2025-09-08DOI: 10.1177/08953996251358141
Xiangze Teng, Xiang Li, Benzheng Wei
Parkinson's disease (PD) is a challenging neurodegenerative condition that is prone to diagnostic errors, and early, accurate diagnosis is critical for effective clinical management. However, existing diagnostic methods often fail to fully exploit multimodal data or systematically incorporate expert domain knowledge. To address these limitations, we propose M2KD-Net, a multimodal and knowledge-driven diagnostic framework that integrates imaging and non-imaging clinical data with structured expert insights to enhance diagnostic performance. The framework consists of three key modules: (1) a contrastive learning-based multimodal feature extractor for improved alignment between imaging and non-imaging data; (2) an expert feature modeling module that encodes domain-specific knowledge through structured annotations; and (3) a cross-modal interaction module that enhances the integration of heterogeneous features across modalities. Experimental results on the Parkinson's Progression Markers Initiative (PPMI) dataset show that M2KD-Net achieves a classification accuracy of 89.6% and an AUC of 0.935 in distinguishing PD patients from healthy controls. This evidence suggests that the developed method provides a dependable, interpretable, and clinically useful solution for PD diagnosis.
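Contrastive alignment of imaging and non-imaging embeddings, as in module (1) above, is typically trained with a symmetric InfoNCE-style objective. The numpy sketch below illustrates that general technique only; the function name, temperature value, and toy embeddings are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def info_nce(img_emb, tab_emb, temperature=0.1):
    """Symmetric InfoNCE loss: matched image/clinical-record embedding pairs
    (same row index) are pulled together, mismatched rows pushed apart."""
    # L2-normalise so the dot product is cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    tab = tab_emb / np.linalg.norm(tab_emb, axis=1, keepdims=True)
    logits = img @ tab.T / temperature  # (N, N) similarity matrix

    def cross_entropy(lg):
        # Log-sum-exp per row minus the diagonal (positive-pair) logit.
        lse = np.log(np.exp(lg).sum(axis=1))
        return (lse - np.diag(lg)).mean()

    # Average the image-to-record and record-to-image directions.
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2

# Perfectly aligned embeddings give a much lower loss than shuffled ones.
emb = np.eye(4)
print(info_nce(emb, emb) < info_nce(emb, np.roll(emb, 1, axis=0)))  # True
```

Minimising this loss drives each image embedding toward its own patient's clinical-record embedding and away from other patients', which is the alignment effect the abstract describes.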
{"title":"M<sup>2</sup>KD-Net: A multimodal multi-domain knowledge-driven framework for Parkinson's disease diagnosis.","authors":"Xiangze Teng, Xiang Li, Benzheng Wei","doi":"10.1177/08953996251358141","DOIUrl":"10.1177/08953996251358141","url":null,"abstract":"<p><p>Parkinson's disease (PD) is a challenging neurodegenerative condition that is prone to diagnostic errors, and early, accurate diagnosis is critical for effective clinical management. However, existing diagnostic methods often fail to fully exploit multimodal data or systematically incorporate expert domain knowledge. To address these limitations, we propose M<sup>2</sup>KD-Net, a multimodal and knowledge-driven diagnostic framework that integrates imaging and non-imaging clinical data with structured expert insights to enhance diagnostic performance. The framework consists of three key modules: (1) a contrastive learning-based multimodal feature extractor for improved alignment between imaging and non-imaging data; (2) an expert feature modeling module that encodes domain-specific knowledge through structured annotations; and (3) a cross-modal interaction module that enhances the integration of heterogeneous features across modalities. Experimental results on the Parkinson's Progression Markers Initiative (PPMI) dataset show that M<sup>2</sup>KD-Net achieves a classification accuracy of 89.6% and an AUC of 0.935 in distinguishing PD patients from healthy controls. 
This evidence suggests that the developed method provides a dependable, interpretable, and clinically useful solution for PD diagnosis.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"3-15"},"PeriodicalIF":1.4,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145024700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-01Epub Date: 2025-07-29DOI: 10.1177/08953996251351624
Mohamed J Saadh, Qusay Mohammed Hussain, Rafid Jihad Albadr, Hardik Doshi, M M Rekha, Mayank Kundlas, Amrita Pal, Jasur Rizaev, Waam Mohammed Taher, Mariem Alwan, Mahmod Jasem Jawad, Ali M Ali Al-Nuaimi, Bagher Farhood
ObjectiveThis study aimed to develop a robust framework for breast cancer diagnosis by integrating advanced segmentation and classification approaches. Transformer-based and U-Net segmentation models were combined with radiomic feature extraction and machine learning classifiers to improve segmentation precision and classification accuracy in mammographic images.Materials and MethodsA multi-center dataset of 8000 mammograms (4200 normal, 3800 abnormal) was used. Segmentation was performed using Transformer-based and U-Net models, evaluated through Dice Coefficient (DSC), Intersection over Union (IoU), Hausdorff Distance (HD95), and Pixel-Wise Accuracy. Radiomic features were extracted from segmented masks, with Recursive Feature Elimination (RFE) and Analysis of Variance (ANOVA) employed to select significant features. Classifiers including Logistic Regression, XGBoost, CatBoost, and a Stacking Ensemble model were applied to classify tumors into benign or malignant. Classification performance was assessed using accuracy, sensitivity, F1 score, and AUC-ROC. SHAP analysis validated feature importance, and Q-value heatmaps evaluated statistical significance.ResultsThe Transformer-based model achieved superior segmentation results with DSC (0.94 ± 0.01 training, 0.92 ± 0.02 test), IoU (0.91 ± 0.01 training, 0.89 ± 0.02 test), HD95 (3.0 ± 0.3 mm training, 3.3 ± 0.4 mm test), and Pixel-Wise Accuracy (0.96 ± 0.01 training, 0.94 ± 0.02 test), consistently outperforming U-Net across all metrics. For classification, Transformer-segmented features with the Stacking Ensemble achieved the highest test results: 93% accuracy, 92% sensitivity, 93% F1 score, and 95% AUC. U-Net-segmented features achieved lower metrics, with the best test accuracy at 84%. 
SHAP analysis confirmed the importance of features like Gray-Level Non-Uniformity and Zone Entropy.ConclusionThis study demonstrates the superiority of Transformer-based segmentation integrated with radiomic feature selection and robust classification models. The framework provides a precise and interpretable solution for breast cancer diagnosis, with potential for scalability to 3D imaging and multimodal datasets.
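The Dice Coefficient (DSC) and Intersection over Union (IoU) reported above are standard overlap measures between a predicted and a ground-truth segmentation mask. A minimal numpy sketch, using illustrative toy masks rather than the study's data, shows how both are computed:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

def iou(pred, truth):
    """Intersection over Union (Jaccard index) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Toy 4x4 masks: 4-pixel ground truth, 4-pixel prediction, 2 pixels overlap.
truth = np.zeros((4, 4), dtype=int); truth[1:3, 1:3] = 1
pred = np.zeros((4, 4), dtype=int); pred[1:3, 2:4] = 1
print(dice_coefficient(pred, truth))  # 2*2/(4+4) = 0.5
print(iou(pred, truth))               # 2/(4+4-2) ≈ 0.333
```

Note that DSC is always at least as large as IoU for the same pair of masks (DSC = 2·IoU/(1+IoU)), which is consistent with the DSC values above exceeding the corresponding IoU values.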
{"title":"Radiomics meets transformers: A novel approach to tumor segmentation and classification in mammography for breast cancer.","authors":"Mohamed J Saadh, Qusay Mohammed Hussain, Rafid Jihad Albadr, Hardik Doshi, M M Rekha, Mayank Kundlas, Amrita Pal, Jasur Rizaev, Waam Mohammed Taher, Mariem Alwan, Mahmod Jasem Jawad, Ali M Ali Al-Nuaimi, Bagher Farhood","doi":"10.1177/08953996251351624","DOIUrl":"10.1177/08953996251351624","url":null,"abstract":"<p><p>ObjectiveThis study aimed to develop a robust framework for breast cancer diagnosis by integrating advanced segmentation and classification approaches. Transformer-based and U-Net segmentation models were combined with radiomic feature extraction and machine learning classifiers to improve segmentation precision and classification accuracy in mammographic images.Materials and MethodsA multi-center dataset of 8000 mammograms (4200 normal, 3800 abnormal) was used. Segmentation was performed using Transformer-based and U-Net models, evaluated through Dice Coefficient (DSC), Intersection over Union (IoU), Hausdorff Distance (HD95), and Pixel-Wise Accuracy. Radiomic features were extracted from segmented masks, with Recursive Feature Elimination (RFE) and Analysis of Variance (ANOVA) employed to select significant features. Classifiers including Logistic Regression, XGBoost, CatBoost, and a Stacking Ensemble model were applied to classify tumors into benign or malignant. Classification performance was assessed using accuracy, sensitivity, F1 score, and AUC-ROC. SHAP analysis validated feature importance, and Q-value heatmaps evaluated statistical significance.ResultsThe Transformer-based model achieved superior segmentation results with DSC (0.94 ± 0.01 training, 0.92 ± 0.02 test), IoU (0.91 ± 0.01 training, 0.89 ± 0.02 test), HD95 (3.0 ± 0.3 mm training, 3.3 ± 0.4 mm test), and Pixel-Wise Accuracy (0.96 ± 0.01 training, 0.94 ± 0.02 test), consistently outperforming U-Net across all metrics. 
For classification, Transformer-segmented features with the Stacking Ensemble achieved the highest test results: 93% accuracy, 92% sensitivity, 93% F1 score, and 95% AUC. U-Net-segmented features achieved lower metrics, with the best test accuracy at 84%. SHAP analysis confirmed the importance of features like Gray-Level Non-Uniformity and Zone Entropy.ConclusionThis study demonstrates the superiority of Transformer-based segmentation integrated with radiomic feature selection and robust classification models. The framework provides a precise and interpretable solution for breast cancer diagnosis, with potential for scalability to 3D imaging and multimodal datasets.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"1039-1058"},"PeriodicalIF":1.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144734878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}