
Journal of X-Ray Science and Technology: Latest Publications

KBA-PDNet: A primal-dual unrolling network with kernel basis attention for low-dose CT reconstruction.
IF 1.7 | CAS Zone 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-03-03 | DOI: 10.1177/08953996241308759
Rongfeng Li, Dalin Wang

Computed tomography (CT) image reconstruction faces the challenge of balancing image quality and radiation dose. Recent unrolled optimization methods address low-dose CT image quality issues using convolutional neural networks or self-attention mechanisms as regularization operators. However, these approaches have limitations in adaptability, computational efficiency, or preservation of beneficial inductive biases. They also depend on initial reconstructions, potentially leading to information loss and error propagation. To overcome these limitations, the Kernel Basis Attention Primal-Dual Network (KBA-PDNet) is proposed. The method unrolls multiple iterations of the proximal primal-dual optimization process, replacing traditional proximal operators with Kernel Basis Attention (KBA) modules. This design enables direct training from raw measurement data without relying on preliminary reconstructions. The KBA module achieves adaptability by learning and dynamically fusing kernel bases, generating customized convolution kernels for each spatial location. This approach maintains computational efficiency while preserving the beneficial inductive biases of convolutions. By training end-to-end from raw projection data, KBA-PDNet fully utilizes all original information, potentially capturing details lost in preliminary reconstructions. Experiments on simulated and clinical datasets demonstrate that KBA-PDNet outperforms existing approaches in both image quality and computational efficiency.
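As a rough illustration of the kernel-basis idea described above, the PyTorch sketch below learns a small set of kernel bases and fuses them with per-pixel attention weights into a spatially adaptive convolution. The module name, basis count, and residual update are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelBasisAttention(nn.Module):
    """Minimal sketch of a kernel-basis attention block: a few learned kernel
    bases are mixed with per-pixel coefficients, giving each spatial location
    its own effective convolution kernel. Sizes are illustrative only."""
    def __init__(self, channels: int, num_bases: int = 4, kernel_size: int = 3):
        super().__init__()
        self.k = kernel_size
        self.num_bases = num_bases
        # Learned kernel bases, applied depthwise: (num_bases, channels, k, k).
        self.bases = nn.Parameter(torch.randn(num_bases, channels, kernel_size, kernel_size) * 0.05)
        # Lightweight head predicting per-pixel mixing coefficients.
        self.coef_head = nn.Conv2d(channels, num_bases, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        coefs = torch.softmax(self.coef_head(x), dim=1)            # (B, num_bases, H, W)
        responses = []
        for i in range(self.num_bases):
            w_i = self.bases[i].unsqueeze(1)                        # (C, 1, k, k)
            responses.append(F.conv2d(x, w_i, padding=self.k // 2, groups=c))
        stacked = torch.stack(responses, dim=1)                     # (B, num_bases, C, H, W)
        out = (stacked * coefs.unsqueeze(2)).sum(dim=1)             # per-pixel fusion of bases
        return x + out                                              # residual, proximal-style update

if __name__ == "__main__":
    x = torch.randn(1, 16, 64, 64)
    print(KernelBasisAttention(16)(x).shape)  # torch.Size([1, 16, 64, 64])
```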

{"title":"KBA-PDNet: A primal-dual unrolling network with kernel basis attention for low-dose CT reconstruction.","authors":"Rongfeng Li, Dalin Wang","doi":"10.1177/08953996241308759","DOIUrl":"https://doi.org/10.1177/08953996241308759","url":null,"abstract":"<p><p>Computed tomography (CT) image reconstruction is faced with challenge of balancing image quality and radiation dose. Recent unrolled optimization methods address low-dose CT image quality issues using convolutional neural networks or self-attention mechanisms as regularization operators. However, these approaches have limitations in adaptability, computational efficiency, or preservation of beneficial inductive biases. They also depend on initial reconstructions, potentially leading to information loss and error propagation. To overcome these limitations, Kernel Basis Attention Primal-Dual Network (KBA-PDNet) is proposed. The method unrolls multiple iterations of the proximal primal-dual optimization process, replacing traditional proximal operators with Kernel Basis Attention (KBA) modules. This design enables direct training from raw measurement data without relying on preliminary reconstructions. The KBA module achieves adaptability by learning and dynamically fusing kernel bases, generating customized convolution kernels for each spatial location. This approach maintains computational efficiency while preserving beneficial inductive biases of convolutions. By training end-to-end from raw projection data, KBA-PDNet fully utilizes all original information, potentially capturing details lost in preliminary reconstructions. Experiments on simulated and clinical datasets demonstrate that KBA-PDNet outperforms existing approaches in both image quality and computational efficiency.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996241308759"},"PeriodicalIF":1.7,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143537915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Feasibility exploration of myocardial blood flow synthesis from a simulated static myocardial computed tomography perfusion via a deep neural network.
IF 1.7 | CAS Zone 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-03-03 | DOI: 10.1177/08953996251317412
Jun Dong, Runjianya Ling, Zhenxing Huang, Yidan Xu, Haiyan Wang, Zixiang Chen, Meiyong Huang, Vladimir Stankovic, Jiayin Zhang, Zhanli Hu

Background: Myocardial blood flow (MBF) provides important diagnostic information for myocardial ischemia. However, the dynamic computed tomography perfusion (CTP) needed for MBF involves multiple exposures, leading to high radiation doses.

Objectives: This study investigated synthesizing MBF from simulated static myocardial CTP to explore dose reduction potential, bypassing the traditional dynamic input function.

Methods: The study included 253 subjects with intermediate-to-high pretest probabilities of obstructive coronary artery disease (CAD). MBF was reconstructed from dynamic myocardial CTP. A deep neural network (DNN) converted simulated static CTP into synthetic MBF. Beyond the usual image quality evaluation, the synthetic MBF was segmented and a clinical functional assessment was conducted, with quantitative analysis for consistency and correlation.

Results: Synthetic MBF closely matched the reference MBF, with an average structural similarity (SSIM) of 0.87. ROC analysis of ischemic segments showed an area under the curve (AUC) of 0.915 for synthetic MBF. This method can theoretically reduce the radiation dose for MBF significantly, provided satisfactory static CTP is obtained, reducing reliance on the high temporal resolution of dynamic CTP.

Conclusions: The proposed method is feasible, with satisfactory clinical functionality of synthetic MBF. Further investigation and validation are needed to confirm actual dose reduction in clinical settings.
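The evaluation quantities reported above, image-level SSIM and segment-level ROC AUC, can be illustrated with standard library calls. The arrays below are synthetic stand-ins, not data from the 253-subject cohort.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim
from sklearn.metrics import roc_auc_score

# Hypothetical arrays standing in for one slice of reference vs. synthetic MBF;
# real values would come from dynamic-CTP reconstruction and the DNN output.
rng = np.random.default_rng(0)
reference_mbf = rng.uniform(50, 200, size=(128, 128))
synthetic_mbf = reference_mbf + rng.normal(0, 10, size=(128, 128))

# Image-level agreement, analogous to the paper's SSIM evaluation.
quality = ssim(reference_mbf, synthetic_mbf,
               data_range=reference_mbf.max() - reference_mbf.min())

# Segment-level ischemia detection: labels mark ischemic segments, and scores are,
# for example, an inverted mean MBF per segment (lower flow looks more suspicious).
segment_labels = np.array([0, 0, 1, 0, 1, 1, 0, 0])
segment_scores = np.array([0.2, 0.1, 0.8, 0.3, 0.7, 0.9, 0.25, 0.15])
auc = roc_auc_score(segment_labels, segment_scores)

print(f"SSIM = {quality:.3f}, segment AUC = {auc:.3f}")
```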

{"title":"Feasibility exploration of myocardial blood flow synthesis from a simulated static myocardial computed tomography perfusion via a deep neural network.","authors":"Jun Dong, Runjianya Ling, Zhenxing Huang, Yidan Xu, Haiyan Wang, Zixiang Chen, Meiyong Huang, Vladimir Stankovic, Jiayin Zhang, Zhanli Hu","doi":"10.1177/08953996251317412","DOIUrl":"https://doi.org/10.1177/08953996251317412","url":null,"abstract":"<p><strong>Background:: </strong>Myocardial blood flow (MBF) provides important diagnostic information for myocardial ischemia. However, dynamic computed tomography perfusion (CTP) needed for MBF involves multiple exposures, leading to high radiation doses.</p><p><strong>Objectives:: </strong>This study investigated synthesizing MBF from simulated static myocardial CTP to explore dose reduction potential, bypassing the traditional dynamic input function.</p><p><strong>Methods:: </strong>The study included 253 subjects with intermediate-to-high pretest probabilities of obstructive coronary artery disease (CAD). MBF was reconstructed from dynamic myocardial CTP. A deep neural network (DNN) converted simulated static CTP into synthetic MBF. Beyond the usual image quality evaluation, the synthetic MBF was segmented and a clinical functional assessment was conducted, with quantitative analysis for consistency and correlation.</p><p><strong>Results:: </strong>Synthetic MBF closely matched the referenced MBF, with an average structure similarity (SSIM) of 0.87. ROC analysis of ischemic segments showed an area under curve (AUC) of 0.915 for synthetic MBF. This method can theoretically reduce the radiation dose for MBF significantly, provided satisfactory static CTP is obtained, reducing reliance on high time resolution of dynamic CTP.</p><p><strong>Conclusions:: </strong>The proposed method is feasible, with satisfactory clinical functionality of synthetic MBF. Further investigation and validation are needed to confirm actual dose reduction in clinical settings.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996251317412"},"PeriodicalIF":1.7,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143537911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparative analysis of machine learning and deep learning algorithms for knee arthritis detection using YOLOv8 models.
IF 1.7 | CAS Zone 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-02-26 | DOI: 10.1177/08953996241308770
Ilkay Cinar

Knee arthritis is a prevalent joint condition that affects many people worldwide. Early detection and appropriate treatment are essential to slow the disease's progression and enhance patients' quality of life. In this study, various machine learning and deep learning algorithms were used to detect knee arthritis. The machine learning models included k-NN, SVM, and GBM, while DenseNet, EfficientNet, and InceptionV3 were used as deep learning models. Additionally, YOLOv8 classification models (YOLOv8n-cls, YOLOv8s-cls, YOLOv8m-cls, YOLOv8l-cls, and YOLOv8x-cls) were employed. The "Annotated Dataset for Knee Arthritis Detection", comprising five classes (Normal, Doubtful, Mild, Moderate, Severe) and 1650 images, was divided into 80% training, 10% validation, and 10% testing using the hold-out method. The YOLOv8 models outperformed both the machine learning and deep learning algorithms: k-NN, SVM, and GBM achieved success rates of 63.61%, 64.14%, and 67.36%, respectively, while among the deep learning models, DenseNet, EfficientNet, and InceptionV3 achieved 62.35%, 70.59%, and 79.41%. The highest success was seen in the YOLOv8x-cls model at 86.96%, followed by YOLOv8l-cls at 86.79%, YOLOv8m-cls at 83.65%, YOLOv8s-cls at 80.37%, and YOLOv8n-cls at 77.91%.
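For reference, a minimal training sketch with the Ultralytics package for one of the YOLOv8 classification variants named above might look as follows. The dataset path, epoch count, and image size are placeholders, and the exact arguments should be verified against the installed ultralytics version.

```python
from ultralytics import YOLO

# Classification checkpoints follow the "yolov8{n,s,m,l,x}-cls.pt" naming used above.
model = YOLO("yolov8x-cls.pt")

# For classification, `data` points at a folder laid out roughly as
#   knee_arthritis_dataset/{train,val,test}/{Normal,Doubtful,Mild,Moderate,Severe}/<images>
# matching the 80/10/10 hold-out split described in the abstract (path is hypothetical).
results = model.train(data="knee_arthritis_dataset", epochs=50, imgsz=224)

# Validation on held-out data; recent ultralytics versions also accept split="test".
metrics = model.val()
print(metrics)
```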

{"title":"Comparative analysis of machine learning and deep learning algorithms for knee arthritis detection using YOLOv8 models.","authors":"Ilkay Cinar","doi":"10.1177/08953996241308770","DOIUrl":"https://doi.org/10.1177/08953996241308770","url":null,"abstract":"<p><p>Knee arthritis is a prevalent joint condition that affects many people worldwide. Early detection and appropriate treatment are essential to slow the disease's progression and enhance patients' quality of life. In this study, various machine learning and deep learning algorithms were used to detect knee arthritis. The machine learning models included k-NN, SVM, and GBM, while DenseNet, EfficientNet, and InceptionV3 were used as deep learning models. Additionally, YOLOv8 classification models (YOLOv8n-cls, YOLOv8s-cls, YOLOv8m-cls, YOLOv8l-cls, and YOLOv8x-cls) were employed. The \"Annotated Dataset for Knee Arthritis Detection\" with five classes (Normal, Doubtful, Mild, Moderate, Severe) and 1650 images were divided into 80% training, 10% validation, and 10% testing using the Hold-Out method. YOLOv8 models outperformed both machine learning and deep learning algorithms. k-NN, SVM, and GBM achieved success rates of 63.61%, 64.14%, and 67.36%, respectively. Among deep learning models, DenseNet, EfficientNet, and InceptionV3 achieved 62.35%, 70.59%, and 79.41%. The highest success was seen in the YOLOv8x-cls model at 86.96%, followed by YOLOv8l-cls at 86.79%, YOLOv8m-cls at 83.65%, YOLOv8s-cls at 80.37%, and YOLOv8n-cls at 77.91%.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996241308770"},"PeriodicalIF":1.7,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143517168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Preconditioned block Kaczmarz methods for linear equations with an application to computed tomography.
IF 1.7 | CAS Zone 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-02-18 | DOI: 10.1177/08953996251317421
Duo Liu, Wenli Wang, Gangrong Qu

Background: Preconditioned Kaczmarz methods play a pivotal role in image reconstruction. A fundamental theoretical question lies in establishing the convergence conditions for these methods. Practically, devising an efficient block strategy to accelerate the reconstruction process is also critical.

Objective: This paper aims to introduce the convergence conditions for the preconditioned Kaczmarz methods and design the block strategy with corresponding preconditioners for these methods in computed tomography (CT).

Methods: We establish a set of useful convergence conditions for the preconditioned block Kaczmarz methods and prove the dependence of the convergence limit on the initial guess. Tailored to the CT problem, we also propose a new method with a novel block strategy and specific preconditioners, which ensure accelerated convergence.

Results: Numerical experiments with the Shepp-Logan phantom and a real chest CT image demonstrate that our proposed block strategy and preconditioners effectively accelerate the reconstruction process by the preconditioned block Kaczmarz methods while maintaining satisfactory image quality.

Conclusions: Our proposed method, which incorporates the designed block strategy and specific preconditioners, has superior performance compared to the traditional Landweber iteration and the block Kaczmarz iteration without preconditioners.
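A minimal numerical sketch of a preconditioned block Kaczmarz iteration is given below. It uses a simple row-norm scaling as a stand-in preconditioner and a fixed block size, and does not reproduce the paper's specific block strategy or preconditioners.

```python
import numpy as np

def block_kaczmarz(A, b, block_size=50, sweeps=20, x0=None):
    """Cycle through row blocks and project the current iterate onto the solution
    set of each (scaled) block via a pseudo-inverse. Row-norm scaling stands in
    for a preconditioner; the convergence limit depends on the initial guess x0."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    blocks = [np.arange(i, min(i + block_size, m)) for i in range(0, m, block_size)]
    for _ in range(sweeps):
        for idx in blocks:
            A_blk, b_blk = A[idx], b[idx]
            d = 1.0 / np.maximum(np.linalg.norm(A_blk, axis=1), 1e-12)  # diagonal scaling
            r = d * (b_blk - A_blk @ x)                                 # preconditioned residual
            x += np.linalg.pinv(d[:, None] * A_blk) @ r                 # block projection step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((400, 100))
    x_true = rng.standard_normal(100)
    x_rec = block_kaczmarz(A, A @ x_true)
    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```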

{"title":"Preconditioned block Kaczmarz methods for linear equations with an application to computed tomography.","authors":"Duo Liu, Wenli Wang, Gangrong Qu","doi":"10.1177/08953996251317421","DOIUrl":"https://doi.org/10.1177/08953996251317421","url":null,"abstract":"<p><strong>Background: </strong>Preconditioned Kaczmarz methods play a pivotal role in image reconstruction. A fundamental theoretical question lies in establishing the convergence conditions for these methods. Practically, devising an efficient block strategy to accelerate the reconstruction process is also critical.</p><p><strong>Objective: </strong>This paper aims to introduce the convergence conditions for the preconditioned Kaczmarz methods and design the block strategy with corresponding preconditioners for these methods in computed tomography (CT).</p><p><strong>Methods: </strong>We establish a kind of useful convergence conditions for the preconditioned block Kaczmarz methods and prove the dependence of the convergence limit on the initial guess. Tailored for the CT problem, we also propose a new method with a novel block strategy and specific preconditioners, which ensure accelerated convergence.</p><p><strong>Results: </strong>Numerical experiments with the Shepp-Logan phantom and a real chest CT image demonstrate that our proposed block strategy and preconditioners effectively accelerate the reconstruction process by the preconditioned block Kaczmarz methods while maintaining satisfactory image quality.</p><p><strong>Conclusions: </strong>Our proposed method, which incorporates the designed block strategy and specific preconditioners, has superior performance compared to the traditional Landweber iteration and the block Kaczmarz iteration without preconditioners.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996251317421"},"PeriodicalIF":1.7,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A deep learning detection method for pancreatic cystic neoplasm based on Mamba architecture.
IF 1.7 | CAS Zone 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-02-18 | DOI: 10.1177/08953996251313719
Junlong Dai, Cong He, Liang Jin, Chengwei Chen, Jie Wu, Yun Bian

Objective: Early diagnosis of pancreatic cystic neoplasm (PCN) is crucial for patient survival. This study proposes M-YOLO, a novel model combining Mamba architecture and YOLO, to enhance the detection of pancreatic cystic tumors. The model addresses the technical challenge posed by the tumors' complex morphological features in medical images.

Methods: This study develops an innovative deep learning network architecture, M-YOLO (Mamba YOLOv10), which combines the advantages of Mamba and YOLOv10 and aims to improve the accuracy and efficiency of pancreatic cystic neoplasm (PCN) detection. The Mamba architecture, with its superior sequence modeling capabilities, is ideally suited for processing the rich contextual information contained in medical images. At the same time, YOLOv10's fast object detection ensures the system's viability for application in clinical practice.

Results: M-YOLO achieves a high sensitivity of 0.98, a specificity of 0.92, a precision of 0.96, an F1 score of 0.97, an accuracy of 0.93, and a mean average precision (mAP) of 0.96 at a 50% intersection-over-union (IoU) threshold on the dataset provided by Changhai Hospital.

Conclusions: M-YOLO (Mamba YOLOv10) enhances the identification performance of PCN by integrating the deep feature extraction capability of Mamba and the fast localization technique of YOLOv10.
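The "mAP at a 50% IoU threshold" figure above rests on the intersection-over-union criterion for matching predicted and ground-truth boxes; the short sketch below shows that criterion on made-up box coordinates.

```python
# A predicted tumor box counts as a true positive only if its IoU with a
# ground-truth box reaches 0.5. Boxes are (x1, y1, x2, y2) in pixels.
def iou(box_a, box_b) -> float:
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

pred, gt = (30, 40, 120, 130), (35, 45, 125, 140)   # illustrative coordinates
overlap = iou(pred, gt)
print(f"IoU = {overlap:.3f}, true positive at 0.5 threshold: {overlap >= 0.5}")
```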

{"title":"A deep learning detection method for pancreatic cystic neoplasm based on Mamba architecture.","authors":"Junlong Dai, Cong He, Liang Jin, Chengwei Chen, Jie Wu, Yun Bian","doi":"10.1177/08953996251313719","DOIUrl":"https://doi.org/10.1177/08953996251313719","url":null,"abstract":"<p><strong>Objective: </strong>Early diagnosis of pancreatic cystic neoplasm (PCN) is crucial for patient survival. This study proposes M-YOLO, a novel model combining Mamba architecture and YOLO, to enhance the detection of pancreatic cystic tumors. The model addresses the technical challenge posed by the tumors' complex morphological features in medical images.</p><p><strong>Methods: </strong>This study develops an innovative deep learning network architecture, M-YOLO (Mamba YOLOv10), which combines the advantages of Mamba and YOLOv10 and aims to improve the accuracy and efficiency of pancreatic cystic neoplasm(PCN) detection. The Mamba architecture, with its superior sequence modeling capabilities, is ideally suited for processing the rich contextual information contained in medical images. At the same time, YOLOv10's fast object detection feature ensures the system's viability for application in clinical practice.</p><p><strong>Results: </strong>M-YOLO has a high sensitivity of 0.98, a specificity of 0.92, a precision of 0.96, an F1 value of 0.97, an accuracy of 0.93, as well as a mean average precision (mAP) of 0.96 at 50% intersection-to-union (IoU) threshold on the dataset provided by Changhai Hospital.</p><p><strong>Conclusions: </strong>M-YOLO(Mamba YOLOv10) enhances the identification performance of PCN by integrating the deep feature extraction capability of Mamba and the fast localization technique of YOLOv10.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996251313719"},"PeriodicalIF":1.7,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A novel detail-enhanced wavelet domain feature compensation network for sparse-view X-ray computed laminography.
IF 1.7 | CAS Zone 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-02-18 | DOI: 10.1177/08953996251319183
Yawu Long, Qianglong Zhong, Jin Lu, Chengke Xiong

Background: X-ray Computed Laminography (CL) is a popular industrial tool for non-destructive visualization of flat objects. However, high-quality CL imaging requires a large number of projections, resulting in a long imaging time. Reducing the number of projections allows acceleration of the imaging process, but decreases the quality of reconstructed images.

Objective: Our objective is to build a deep learning network for sparse-view CL reconstruction.

Methods: Considering the complementarity of feature extraction in different domains, we design an encoder-decoder network that compensates, in the wavelet domain, for information missed during spatial-domain feature extraction. A detail-enhanced module is also developed to highlight details. Additionally, Swin Transformer and convolution operators are combined to better capture features.

Results: A total of 3200 pairs of 16-view and 1024-view CL images of solder joints (2880 pairs for training, 160 pairs for validation, and 160 pairs for testing) were employed to investigate the performance of the proposed network. The proposed network obtains the highest image quality, with PSNR and SSIM of 37.875 ± 0.908 dB and 0.992 ± 0.002, respectively. It also achieves competitive results on the AAPM dataset.

Conclusions: This study demonstrates the effectiveness and generalization of the proposed network for sparse-view CL reconstruction.
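To illustrate the wavelet-domain view underlying the feature-compensation idea, the pywt sketch below decomposes an image into an approximation band and three detail bands and reconstructs it. It is a generic example, not the authors' network.

```python
import numpy as np
import pywt

image = np.random.rand(256, 256).astype(np.float32)   # stand-in for a CL slice

# One level of 2-D DWT: a low-frequency approximation plus horizontal, vertical,
# and diagonal detail subbands, which carry the edge/detail information.
approx, (horiz, vert, diag) = pywt.dwt2(image, "haar")
print("approximation:", approx.shape)
print("detail bands:", horiz.shape, vert.shape, diag.shape)

# Perfect reconstruction from the four subbands (up to floating-point error).
recon = pywt.idwt2((approx, (horiz, vert, diag)), "haar")
print("max reconstruction error:", np.abs(recon - image).max())
```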

{"title":"A novel detail-enhanced wavelet domain feature compensation network for sparse-view X-ray computed laminography.","authors":"Yawu Long, Qianglong Zhong, Jin Lu, Chengke Xiong","doi":"10.1177/08953996251319183","DOIUrl":"https://doi.org/10.1177/08953996251319183","url":null,"abstract":"<p><strong>Background: </strong>X-ray Computed Laminography (CL) is a popular industrial tool for non-destructive visualization of flat objects. However, high-quality CL imaging requires a large number of projections, resulting in a long imaging time. Reducing the number of projections allows acceleration of the imaging process, but decreases the quality of reconstructed images.</p><p><strong>Objective: </strong>Our objective is to build a deep learning network for sparse-view CL reconstruction.</p><p><strong>Methods: </strong>Considering complementarities of feature extraction in different domains, we design an encoder-decoder network that enables to compensate the missing information during spatial domain feature extraction in wavelet domain. Also, a detail-enhanced module is developed to highlight details. Additionally, Swin Transformer and convolution operators are combined to better capture features.</p><p><strong>Results: </strong>A total of 3200 pairs of 16-view and 1024-view CL images (2880 pairs for training, 160 pairs for validation, and 160 pairs for testing) of solder joints have been employed to investigate the performance of the proposed network. It is observed that the proposed network obtains the highest image quality with PSNR and SSIM of 37.875 ± 0.908 dB, 0.992 ± 0.002, respectively. Also, it achieves competitive results on the AAPM dataset.</p><p><strong>Conclusions: </strong>This study demonstrates the effectiveness and generalization of the proposed network for sparse-view CL reconstruction.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996251319183"},"PeriodicalIF":1.7,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ultrasound and advanced imaging techniques in prostate cancer diagnosis: A comparative study of mpMRI, TRUS, and PET/CT.
IF 1.7 | CAS Zone 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-02-12 | DOI: 10.1177/08953996241304988
Ying Dong, Peng Wang, Hua Geng, Yankun Liu, Enguo Wang

Objective: This study aims to assess and compare the diagnostic performance of three advanced imaging modalities, multiparametric magnetic resonance imaging (mpMRI), transrectal ultrasound (TRUS), and positron emission tomography/computed tomography (PET/CT), in detecting prostate cancer in patients with elevated prostate-specific antigen (PSA) levels and abnormal digital rectal examination (DRE) findings.

Methods: A retrospective analysis was conducted on 150 male patients aged 50-75 years with elevated PSA and abnormal DRE. The diagnostic accuracy of each modality was assessed through sensitivity, specificity, and the area under the curve (AUC) to compare performance in detecting clinically significant prostate cancer (Gleason score ≥ 7).

Results: MpMRI demonstrated the highest diagnostic performance, with a sensitivity of 90%, specificity of 85%, and AUC of 0.92, outperforming both TRUS (sensitivity 76%, specificity 78%, AUC 0.77) and PET/CT (sensitivity 82%, specificity 80%, AUC 0.81). MpMRI detected clinically significant tumors in 80% of cases. Although TRUS and PET/CT had similar detection rates for significant tumors, their overall accuracy was lower. Minor adverse events occurred in 5% of patients undergoing TRUS, while no significant complications were associated with mpMRI or PET/CT.

Conclusion: These findings suggest that mpMRI is the most reliable imaging modality for early detection of clinically significant prostate cancer. It reduces the need for unnecessary biopsies and optimizes patient management.
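For illustration, the per-modality metrics reported above (sensitivity, specificity, AUC for Gleason score ≥ 7) can be computed from binary reads and continuous suspicion scores as sketched below. The labels and scores are invented and do not come from the 150-patient cohort.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])   # 1 = clinically significant cancer
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1, 1, 0])   # binary reads for one modality
scores = np.array([0.9, 0.2, 0.8, 0.4, 0.1, 0.3, 0.7, 0.6, 0.95, 0.25])  # suspicion scores

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, scores)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, AUC={auc:.2f}")
```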

{"title":"Ultrasound and advanced imaging techniques in prostate cancer diagnosis: A comparative study of mpMRI, TRUS, and PET/CT.","authors":"Ying Dong, Peng Wang, Hua Geng, Yankun Liu, Enguo Wang","doi":"10.1177/08953996241304988","DOIUrl":"https://doi.org/10.1177/08953996241304988","url":null,"abstract":"<p><strong>Objective: </strong>This study aims to assess and compare the diagnostic performance of three advanced imaging modalities-multiparametric magnetic resonance imaging (mpMRI), transrectal ultrasound (TRUS), and positron emission tomography/computed tomography (PET/CT)-in detecting prostate cancer in patients with elevated PSA levels and abnormal DRE findings.</p><p><strong>Methods: </strong>A retrospective analysis was conducted on 150 male patients aged 50-75 years with elevated PSA and abnormal DRE. The diagnostic accuracy of each modality was assessed through sensitivity, specificity, and the area under the curve (AUC) to compare performance in detecting clinically significant prostate cancer (Gleason score ≥ 7).</p><p><strong>Results: </strong>MpMRI demonstrated the highest diagnostic performance, with a sensitivity of 90%, specificity of 85%, and AUC of 0.92, outperforming both TRUS (sensitivity 76%, specificity 78%, AUC 0.77) and PET/CT (sensitivity 82%, specificity 80%, AUC 0.81). MpMRI detected clinically significant tumors in 80% of cases. Although TRUS and PET/CT had similar detection rates for significant tumors, their overall accuracy was lower. Minor adverse events occurred in 5% of patients undergoing TRUS, while no significant complications were associated with mpMRI or PET/CT.</p><p><strong>Conclusion: </strong>These findings suggest that mpMRI is the most reliable imaging modality for early detection of clinically significant prostate cancer. It reduces the need for unnecessary biopsies and optimizes patient management.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996241304988"},"PeriodicalIF":1.7,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DR-ConvNeXt: DR classification method for reconstructing ConvNeXt model structure.
IF 1.7 | CAS Zone 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-02-12 | DOI: 10.1177/08953996241311190
Pengfei Song, Yun Wu

Background: Diabetic retinopathy (DR) is a major complication of diabetes and a leading cause of blindness among the working-age population. However, the complex distribution and variability of lesion characteristics within the dataset present significant challenges for achieving high-precision classification of DR images.

Objective: We propose an automatic classification method for DR images, named DR-ConvNeXt, which aims to achieve accurate diagnosis of lesion types.

Methods: The method involves designing a dual-branch addition convolution structure and appropriately increasing the number of stacked ConvNeXt Block convolution layers. Additionally, a unique primary-auxiliary loss function is introduced, contributing to a significant enhancement in DR classification accuracy within the DR-ConvNeXt model.

Results: The model achieved an accuracy of 91.8%, sensitivity of 81.6%, and specificity of 97.9% on the APTOS dataset. On the Messidor-2 dataset, the model achieved an accuracy of 83.6%, sensitivity of 74.0%, and specificity of 94.6%.

Conclusions: The DR-ConvNeXt model's classification results on the two publicly available datasets demonstrate significant advantages across all evaluation metrics for DR classification.
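The primary-auxiliary loss mentioned in the Methods can be sketched generically as a weighted sum of two cross-entropy terms. The auxiliary weight and branch below are illustrative assumptions; the paper's exact formulation is not given here.

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

def primary_auxiliary_loss(primary_logits, auxiliary_logits, targets, aux_weight=0.4):
    """Main head supervised with cross-entropy, auxiliary head down-weighted."""
    return criterion(primary_logits, targets) + aux_weight * criterion(auxiliary_logits, targets)

# Five DR grades (e.g. no DR, mild, moderate, severe, proliferative); random stand-ins.
primary = torch.randn(8, 5)
auxiliary = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
print(primary_auxiliary_loss(primary, auxiliary, labels))
```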

{"title":"DR-ConvNeXt: DR classification method for reconstructing ConvNeXt model structure.","authors":"Pengfei Song, Yun Wu","doi":"10.1177/08953996241311190","DOIUrl":"https://doi.org/10.1177/08953996241311190","url":null,"abstract":"<p><strong>Background: </strong>Diabetic retinopathy (DR) is a major complication of diabetes and a leading cause of blindness among the working-age population. However, the complex distribution and variability of lesion characteristics within the dataset present significant challenges for achieving high-precision classification of DR images.</p><p><strong>Objective: </strong>We propose an automatic classification method for DR images, named DR-ConvNeXt, which aims to achieve accurate diagnosis of lesion types.</p><p><strong>Methods: </strong>The method involves designing a dual-branch addition convolution structure and appropriately increasing the number of stacked ConvNeXt Block convolution layers. Additionally, a unique primary-auxiliary loss function is introduced, contributing to a significant enhancement in DR classification accuracy within the DR-ConvNeXt model.</p><p><strong>Results: </strong>The model achieved an accuracy of 91.8%,sensitivity of 81.6%, and specificity of 97.9% on the APTOS dataset. On the Messidor-2 dataset, the model achieved an accuracy of 83.6%, sensitivity of 74.0%, and specificity of 94.6%.</p><p><strong>Conclusions: </strong>The DR-ConvNeXt model's classification results on the two publicly available datasets illustrate the significant advantages in all evaluation indexes for DR classification.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996241311190"},"PeriodicalIF":1.7,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An efficient and high-quality scheme for cone-beam CT reconstruction from sparse-view data.
IF 1.7 | CAS Zone 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-02-04 | DOI: 10.1177/08953996241313121
Shunli Zhang, Mingxiu Tuo, Siyu Jin, Yikuan Gu

Computed tomography (CT) is capable of generating detailed cross-sectional images of scanned objects non-destructively, and it has become an increasingly vital tool for 3D modelling of cultural relics. Compressed sensing (CS)-based CT reconstruction algorithms, such as the algebraic reconstruction technique (ART) regularized by total variation (TV), enable accurate reconstructions from sparse-view data, which consequently reduces both scanning time and costs. However, the implementation of ART-TV is considerably slow, particularly in cone-beam reconstruction. In this paper, we propose an efficient and high-quality scheme for cone-beam CT reconstruction based on the traditional ART-TV algorithm. Our scheme employs Joseph's projection method for the computation of the system matrix. By exploiting the geometric symmetry of the cone-beam rays, we are able to compute the weight coefficients of the system matrix for two symmetric rays simultaneously. We then employ multi-threading technology to speed up the ART reconstruction, and utilize graphics processing units (GPUs) to accelerate the TV minimization. Experimental results demonstrate that, for a typical reconstruction of a 512 × 512 × 512 volume from 60 views of 512 × 512 projection images, our scheme achieves a 14× speedup compared to a single-threaded CPU implementation. Furthermore, Joseph's projection yields higher-quality ART-TV reconstructions than the traditional Siddon's projection.
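As a rough illustration of the TV-minimization half of an ART-TV iteration, the 2-D numpy sketch below performs a few steepest-descent steps on a smoothed total-variation term. The authors' implementation is 3-D, multi-threaded, and GPU-accelerated, so this is only a conceptual sketch.

```python
import numpy as np

def tv_descent(img, n_steps=10, step=0.1, eps=1e-8):
    """A few gradient-descent steps on smoothed TV: x <- x + step * div(grad x / |grad x|)."""
    x = img.copy()
    for _ in range(n_steps):
        gx = np.diff(x, axis=1, append=x[:, -1:])     # forward differences, replicated border
        gy = np.diff(x, axis=0, append=x[-1:, :])
        norm = np.sqrt(gx**2 + gy**2 + eps)
        # Divergence of the normalized gradient field = negative TV gradient.
        div = (np.diff(gx / norm, axis=1, prepend=(gx / norm)[:, :1])
               + np.diff(gy / norm, axis=0, prepend=(gy / norm)[:1, :]))
        x += step * div
    return x

noisy = np.random.rand(64, 64)
smoothed = tv_descent(noisy)
# Rough one-axis TV proxy before and after the descent steps.
print("TV before/after:", np.abs(np.diff(noisy)).sum(), np.abs(np.diff(smoothed)).sum())
```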

{"title":"An efficient and high-quality scheme for cone-beam CT reconstruction from sparse-view dat.","authors":"Shunli Zhang, Mingxiu Tuo, Siyu Jin, Yikuan Gu","doi":"10.1177/08953996241313121","DOIUrl":"https://doi.org/10.1177/08953996241313121","url":null,"abstract":"<p><p>Computed tomography (CT) is capable of generating detailed cross-sectional images of the scanned objects non-destructively. So far, CT has become an increasingly vital tool for 3D modelling of cultural relics. Compressed sensing (CS)-based CT reconstruction algorithms, such as the algebraic reconstruction technique (ART) regularized by total variation (TV), enable accurate reconstructions from sparse-view data, which consequently reduces both scanning time and costs. However, the implementation of the ART-TV is considerably slow, particularly in cone-beam reconstruction. In this paper, we propose an efficient and high-quality scheme for cone-beam CT reconstruction based on the traditional ART-TV algorithm. Our scheme employs Joseph's projection method for the computation of the system matrix. By exploiting the geometric symmetry of the cone-beam rays, we are able to compute the weight coefficients of the system matrix for two symmetric rays simultaneously. We then employ multi-threading technology to speed up the reconstruction of ART, and utilize graphics processing units (GPUs) to accelerate the TV minimization. Experimental results demonstrate that, for a typical reconstruction of a 512 × 512 × 512 volume from 60 views of 512 × 512 projection images, our scheme achieves a speedup of 14 × compared to a single-threaded CPU implementation. Furthermore, high-quality reconstructions of ART-TV are obtained by using Joseph's projection compared with that using traditional Siddon's projection.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996241313121"},"PeriodicalIF":1.7,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A cross-type multi-dimensional network based on feature enhancement and triple interactive attention for LDCT denoising.
IF 1.7 | CAS Zone 3 (Medicine) | Q3 INSTRUMENTS & INSTRUMENTATION | Pub Date: 2025-01-29 | DOI: 10.1177/08953996241306696
Lina Jia, Beibei Jia, Zongyang Li, Yizhuo Zhang, Zhiguo Gui

Background: Numerous deep learning methods for low-dose computed tomography (CT) image denoising have been proposed, achieving impressive results. However, issues such as loss of structure and edge information and low denoising efficiency still exist.

Objective: To improve image denoising quality, an enhanced multi-dimensional hybrid attention LDCT image denoising network based on edge detection is proposed in this paper.

Methods: In our network, we employ a trainable Sobel convolution to design an edge enhancement module and fuse an enhanced triplet attention network (ETAN) after each 3×3 convolutional layer to extract richer features more comprehensively and suppress useless information. During the training process, we adopt a strategy that combines total variation loss (TVLoss) with mean squared error (MSE) loss to reduce high-frequency artifacts in image reconstruction and balance image denoising and detail preservation.

Results: Compared with other advanced algorithms (CT-former, REDCNN, and EDCNN), our proposed model achieves the best PSNR and SSIM values on abdominal CT images, which are 34.8211 and 0.9131, respectively.

Conclusion: Comparative experiments with other related algorithms show that the proposed algorithm achieves significant improvements in both subjective visual quality and objective metrics.
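Two ingredients named in the Methods, a Sobel-initialized trainable convolution and an MSE-plus-TV training loss, can be sketched as follows. The kernel initialization, channel layout, and TV weight are illustrative choices rather than the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainableSobel(nn.Module):
    """Grouped convolution whose kernels start as Sobel filters but remain trainable."""
    def __init__(self, channels: int):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        weight = torch.stack([sobel_x, sobel_y]).unsqueeze(1).repeat(channels, 1, 1, 1)
        self.conv = nn.Conv2d(channels, 2 * channels, 3, padding=1, groups=channels, bias=False)
        self.conv.weight.data.copy_(weight)          # Sobel initialization, still learnable

    def forward(self, x):
        return self.conv(x)

def mse_tv_loss(pred, target, tv_weight=1e-4):
    """MSE fidelity term plus an anisotropic total-variation penalty on the prediction."""
    mse = F.mse_loss(pred, target)
    tv = (pred[..., :, 1:] - pred[..., :, :-1]).abs().mean() \
         + (pred[..., 1:, :] - pred[..., :-1, :]).abs().mean()
    return mse + tv_weight * tv

x = torch.randn(1, 1, 64, 64)
edges = TrainableSobel(1)(x)
print(edges.shape, mse_tv_loss(x, torch.randn_like(x)))
```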

{"title":"A cross-type multi-dimensional network based on feature enhancement and triple interactive attention for LDCT denoising.","authors":"Lina Jia, Beibei Jia, Zongyang Li, Yizhuo Zhang, Zhiguo Gui","doi":"10.1177/08953996241306696","DOIUrl":"https://doi.org/10.1177/08953996241306696","url":null,"abstract":"<p><strong>Background: </strong>Numerous deep leaning methods for low-dose computed technology (CT) image denoising have been proposed, achieving impressive results. However, issues such as loss of structure and edge information and low denoising efficiency still exist.</p><p><strong>Objective: </strong>To improve image denoising quality, an enhanced multi-dimensional hybrid attention LDCT image denoising network based on edge detection is proposed in this paper.</p><p><strong>Methods: </strong>In our network, we employ a trainable Sobel convolution to design an edge enhancement module and fuse an enhanced triplet attention network (ETAN) after each <math><mn>3</mn><mo>×</mo><mn>3</mn></math> convolutional layer to extract richer features more comprehensively and suppress useless information. During the training process, we adopt a strategy that combines total variation loss (TVLoss) with mean squared error (MSE) loss to reduce high-frequency artifacts in image reconstruction and balance image denoising and detail preservation.</p><p><strong>Results: </strong>Compared with other advanced algorithms (CT-former, REDCNN and EDCNN), our proposed model achieves the best PSNR and SSIM values in CT image of the abdomen, which are 34.8211and 0.9131, respectively.</p><p><strong>Conclusion: </strong>Through comparative experiments with other related algorithms, it can be seen that the algorithm proposed in this article has achieved significant improvements in both subjective vision and objective indicators.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996241306696"},"PeriodicalIF":1.7,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0