{"title":"Special Section: Medical Applications of X-ray Imaging Techniques.","authors":"","doi":"","DOIUrl":"","url":null,"abstract":"","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140327312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A fast response time gas ionization chamber detector with a grid structure.","authors":"Jiahao Chang, Chaoyang Zhu, Yuanpeng Song, Zhentao Wang","doi":"10.3233/XST-230219","DOIUrl":"10.3233/XST-230219","url":null,"abstract":"<p><p>The time response characteristic of the detector is crucial in radiation imaging systems. Unfortunately, existing parallel plate ionization chamber detectors have a slow response time, which leads to blurry radiation images. To enhance imaging quality, the electrode structure of the detector must be modified to reduce the response time. This paper proposes a gas detector with a grid structure that has a fast response time. In this study, the detector electrostatic field was calculated using COMSOL, while Garfield++ was utilized to simulate the detector's output signal. To validate the accuracy of the simulation results, the ionization chamber was tested on an experimental platform. The results revealed that the average electric field intensity in the induced region of the grid detector was increased by at least 33%. The detector response time was reduced to 27%-38% of that of the parallel plate detector, while the sensitivity of the detector was only reduced by 10%. 
Therefore, incorporating a grid structure within the parallel plate detector can significantly improve the time response characteristics of the gas detector, providing insight for future detector enhancements.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139378669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
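The grid's benefit comes from shortening the region whose charge transit time dominates the induced signal. A back-of-envelope sketch of that scaling; every number below is an assumption for illustration, not the paper's geometry or gas parameters:

```python
# Rough drift-time scaling for a planar ionization chamber (illustrative only).
# All values are assumed for the sketch, not taken from the paper.

def drift_time(gap_m: float, drift_velocity_m_per_s: float) -> float:
    """Transit time of charge carriers across a uniform-field gap, t = d / v."""
    return gap_m / drift_velocity_m_per_s

v_e = 5e4            # assumed electron drift velocity in the fill gas, m/s
full_gap = 10e-3     # assumed plate separation of a parallel-plate chamber, m
induced_gap = 3e-3   # assumed grid-to-anode induction region, m

t_plate = drift_time(full_gap, v_e)
t_grid = drift_time(induced_gap, v_e)
ratio = t_grid / t_plate   # grid response time as a fraction of the plate chamber's
```

With these assumed values the ratio is 0.3: only carriers crossing the short induction gap shape the signal, which is the qualitative mechanism behind the reported speed-up.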
{"title":"Research on breast cancer pathological image classification method based on wavelet transform and YOLOv8.","authors":"Yunfeng Yang, Jiaqi Wang","doi":"10.3233/XST-230296","DOIUrl":"10.3233/XST-230296","url":null,"abstract":"<p><p>Breast cancer is one of the cancers with the highest morbidity and mortality in the world and a serious threat to women's health. With the development of deep learning, recognition of computer-aided diagnosis technology has grown steadily, and traditional handcrafted feature extraction has gradually been replaced by feature extraction based on convolutional neural networks, which helps to realize automatic recognition and classification of pathological images. In this paper, a novel method based on deep learning and wavelet transform is proposed to classify pathological images of breast cancer. Firstly, image flipping is used to expand the data set; then two-level wavelet decomposition and reconstruction is used to sharpen and enhance the pathological images. Secondly, the processed data set is divided into training and test sets at ratios of 8:2 and 7:3, and the YOLOv8 network model is selected to perform the eight-class classification task on breast cancer pathological images. 
Finally, the classification accuracy of the proposed method is compared with the classification accuracy obtained by YOLOv8 on the original BreaKHis dataset, and it is found that the algorithm improves the classification accuracy of images at different magnifications, which demonstrates the effectiveness of combining two-level wavelet decomposition and reconstruction with the YOLOv8 network model.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139378677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
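The two-level wavelet decomposition/reconstruction enhancement described in this abstract can be sketched with a plain-NumPy Haar transform. This is a minimal stand-in: the paper does not specify its wavelet basis or enhancement gain, so both are assumptions here.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform: LL approximation + (LH, HL, HH) details."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a - b + c - d) / 2
    hl = (a + b - c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, (lh, hl, hh)

def ihaar2d(ll, details):
    """Inverse of haar2d (perfect reconstruction when sub-bands are unmodified)."""
    lh, hl, hh = details
    out = np.empty((ll.shape[0] * 2, ll.shape[1] * 2))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

def wavelet_sharpen(img, gain=1.5):
    """Two-level decomposition; detail sub-bands are boosted before reconstruction,
    which sharpens edges while leaving the coarse approximation untouched."""
    ll1, det1 = haar2d(img)
    ll2, det2 = haar2d(ll1)
    det1 = tuple(gain * s for s in det1)
    det2 = tuple(gain * s for s in det2)
    ll1_rec = ihaar2d(ll2, det2)
    return ihaar2d(ll1_rec, det1)
```

With `gain=1.0` the round trip reproduces the input exactly; a gain above 1 amplifies high-frequency detail, which is the sharpening effect the abstract attributes to this preprocessing step.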
{"title":"Peri-lesion regions in differentiating suspicious breast calcification-only lesions specifically on contrast enhanced mammography.","authors":"Kun Cao, Fei Gao, Rong Long, Fan-Dong Zhang, Chen-Cui Huang, Min Cao, Yi-Zhou Yu, Ying-Shi Sun","doi":"10.3233/XST-230332","DOIUrl":"10.3233/XST-230332","url":null,"abstract":"<p><strong>Purpose: </strong>To explore the added value of peri-calcification regions on contrast-enhanced mammography (CEM) in the differential diagnosis of breast lesions presenting as calcification only on routine mammograms.</p><p><strong>Methods: </strong>Patients who underwent CEM because of suspicious calcification-only lesions were included. The test set included patients between March 2017 and March 2019, while the validation set was collected between April 2019 and October 2019. The calcifications were automatically detected and grouped by a machine learning-based computer-aided system. In addition to extracting radiomic features on both low-energy (LE) and recombined (RC) images from the calcification areas, the peri-calcification regions, which were generated by extending the annotation margin radially in gradients from 1 mm to 9 mm, were also examined. Machine learning (ML) models were built to classify calcifications into malignant and benign groups. 
The diagnostic metrics were also evaluated by combining ML models with subjective reading.</p><p><strong>Results: </strong>Models for LE (significant features: wavelet-LLL_glcm_Imc2_MLO; wavelet-HLL_firstorder_Entropy_MLO; wavelet-LHH_glcm_DifferenceVariance_CC; wavelet-HLL_glcm_SumEntropy_MLO; wavelet-HLH_glrlm_ShortRunLowGrayLevelEmphasis_MLO; original_firstorder_Entropy_MLO; original_shape_Elongation_MLO) and RC (significant features: wavelet-HLH_glszm_GrayLevelNonUniformityNormalized_MLO; wavelet-LLH_firstorder_10Percentile_CC; original_firstorder_Maximum_MLO; wavelet-HHH_glcm_Autocorrelation_MLO; original_shape_Elongation_MLO; wavelet-LHL_glszm_GrayLevelNonUniformityNormalized_MLO; wavelet-LLH_firstorder_RootMeanSquared_MLO) images were set up with 7 features. Areas under the curve (AUCs) of RC models were significantly better than those of LE models with compact and expanded boundaries (RC vs. LE, compact: 0.81 vs. 0.73, p < 0.05; expanded: 0.89 vs. 0.81, p < 0.05), and RC models with 3 mm boundary extension yielded the best performance compared to those with other sizes (AUC = 0.89). 
Combined with radiologists' reading, the 3 mm-boundary RC model achieved a sensitivity of 0.871 and a negative predictive value of 0.937, with a similar accuracy of 0.843, in predicting malignancy.</p><p><strong>Conclusions: </strong>The machine learning model integrating intra- and peri-calcification regions on CEM has the potential to aid radiologists' performance in predicting malignancy of suspicious breast calcifications.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139673533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
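The peri-calcification region this abstract describes is a ring of pixels within a given distance of the annotated lesion, excluding the lesion itself. A minimal NumPy sketch of that band extraction; the brute-force distance computation and the mm-to-pixel conversion are assumptions for illustration, not the paper's pipeline:

```python
import numpy as np

def peri_region(mask: np.ndarray, extension_px: int) -> np.ndarray:
    """Ring of pixels within `extension_px` (Euclidean) of a non-empty lesion mask,
    excluding the lesion itself. Brute force; fine for small ROIs."""
    ys, xs = np.nonzero(mask)
    yy, xx = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    # Distance from every pixel to its nearest mask pixel.
    d2 = (yy[..., None] - ys) ** 2 + (xx[..., None] - xs) ** 2
    dist = np.sqrt(d2.min(axis=-1))
    return (dist > 0) & (dist <= extension_px)
```

At an assumed pixel spacing of 0.1 mm, the paper's 1 mm to 9 mm extensions would correspond to `extension_px` values of 10 to 90; radiomic features would then be computed over `image[ring]` for each extension.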
{"title":"Feature shared multi-decoder network using complementary learning for Photon counting CT ring artifact suppression.","authors":"Wei Cui, Haipeng Lv, Jiping Wang, Yanyan Zheng, Zhongyi Wu, Hui Zhao, Jian Zheng, Ming Li","doi":"10.3233/XST-230396","DOIUrl":"10.3233/XST-230396","url":null,"abstract":"<p><strong>Background: </strong>Photon-counting computed tomography (Photon counting CT) utilizes photon-counting detectors to precisely count incident photons and measure their energy. These detectors, compared to traditional energy integration detectors, provide better image contrast and material differentiation. However, Photon counting CT tends to show more noticeable ring artifacts due to limited photon counts and detector response variations, unlike conventional spiral CT.</p><p><strong>Objective: </strong>To comprehensively address this issue, we propose a novel feature shared multi-decoder network (FSMDN) that utilizes complementary learning to suppress ring artifacts in Photon counting CT images.</p><p><strong>Methods: </strong>Specifically, we employ a feature-sharing encoder to extract context and ring artifact features, facilitating effective feature sharing. These shared features are also independently processed by separate decoders dedicated to the context and ring artifact channels, working in parallel. Through complementary learning, this approach achieves superior performance in terms of artifact suppression while preserving tissue details.</p><p><strong>Results: </strong>We conducted numerous experiments on Photon counting CT images with three-intensity ring artifacts. 
Both qualitative and quantitative results demonstrate that our network model performs exceptionally well in correcting ring artifacts at different levels while exhibiting superior stability and robustness compared to the comparison methods.</p><p><strong>Conclusions: </strong>In this paper, we have introduced a novel deep learning network designed to mitigate ring artifacts in Photon counting CT images. The results illustrate the viability and efficacy of our proposed network model as a new deep learning-based method for suppressing ring artifacts.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140871055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
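For context on what the FSMDN is competing against: classical non-learning ring artifact correction works on radial bins. The sketch below simulates a ring (a detector-channel gain error) and flattens it by subtracting per-radius mean deviations. This is a textbook baseline, not the paper's network, and the image and artifact parameters are assumed:

```python
import numpy as np

def polar_radius(shape, center=None):
    """Radius of every pixel from the (default: geometric) image center."""
    cy, cx = center if center else ((shape[0] - 1) / 2, (shape[1] - 1) / 2)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.hypot(yy - cy, xx - cx)

def add_ring_artifact(img, radius, width, amplitude):
    """Superimpose a concentric ring of the given radius/width on an image."""
    r = polar_radius(img.shape)
    return img + amplitude * ((r >= radius) & (r < radius + width))

def radial_mean_correction(img, n_bins=64):
    """Subtract each radial bin's mean deviation from the global mean,
    flattening concentric (ring-like) structures."""
    r = polar_radius(img.shape)
    bins = np.minimum((r / (r.max() + 1e-9) * n_bins).astype(int), n_bins - 1)
    out = img.astype(float).copy()
    for b in range(n_bins):
        sel = bins == b
        if sel.any():
            out[sel] -= img[sel].mean() - img.mean()
    return out
```

The weakness of this baseline, and the motivation for learned approaches like the one above, is that it also flattens genuine concentric anatomy, whereas a complementary-learning network can separate artifact features from context features.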
{"title":"A feasibility study to predict 3D dose delivery accuracy for IMRT using DenseNet with log files.","authors":"Ying Huang, Ruxin Cai, Yifei Pi, Kui Ma, Qing Kong, Weihai Zhuo, Yan Kong","doi":"10.3233/XST-230412","DOIUrl":"10.3233/XST-230412","url":null,"abstract":"<p><strong>Objective: </strong>This study aims to explore the feasibility of DenseNet in the establishment of a three-dimensional (3D) gamma prediction model of IMRT based on the actual parameters recorded in the log files during delivery.</p><p><strong>Methods: </strong>A total of 55 IMRT plans (including 367 fields) were randomly selected. The gamma analysis was performed using gamma criteria of 3% /3 mm (Dose Difference/Distance to Agreement), 3% /2 mm, 2% /3 mm, and 2% /2 mm with a 10% dose threshold. In addition, the log files that recorded the gantry angle, monitor units (MU), multi-leaf collimator (MLC), and jaws position during delivery were collected. These log files were then converted to MU-weighted fluence maps as the input of DenseNet, gamma passing rates (GPRs) under four different gamma criteria as the output, and mean square errors (MSEs) as the loss function of this model.</p><p><strong>Results: </strong>Under different gamma criteria, the accuracy of a 3D GPR prediction model decreased with the implementation of stricter gamma criteria. In the test set, the mean absolute error (MAE) of the prediction model under the gamma criteria of 3% /3 mm, 2% /3 mm, 3% /2 mm, and 2% /2 mm was 1.41, 1.44, 3.29, and 3.54, respectively; the root mean square error (RMSE) was 1.91, 1.85, 4.27, and 4.40, respectively; the Sr was 0.487, 0.554, 0.573, and 0.506, respectively. There was a correlation between predicted and measured GPRs (P < 0.01). Additionally, there was no significant difference in the accuracy between the validation set and the test set. 
The accuracy in the high GPR group was high, and the MAE in the high GPR group was smaller than that in the low GPR group under four different gamma criteria.</p><p><strong>Conclusions: </strong>In this study, a 3D GPR prediction model of patient-specific QA using DenseNet was established based on log files. As an auxiliary tool for 3D dose verification in IMRT, this model is expected to improve the accuracy and efficiency of dose validation.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140870664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
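The gamma passing rates (GPRs) this model predicts come from the gamma index, which combines a dose-difference (DD) criterion with a distance-to-agreement (DTA) criterion. A simplified 1-D global version is sketched below; clinical QA tools work on 2-D/3-D dose grids with interpolation, so this is only the core arithmetic:

```python
import numpy as np

def gamma_passing_rate(ref, eval_, positions, dd_percent=3.0, dta_mm=3.0,
                       dose_threshold=0.10):
    """Simplified 1-D global gamma analysis.

    For each reference point above the low-dose threshold, gamma is the minimum
    over evaluated points of sqrt((dose diff / DD)^2 + (distance / DTA)^2);
    the passing rate is the percentage of points with gamma <= 1.
    """
    ref = np.asarray(ref, float)
    eval_ = np.asarray(eval_, float)
    pos = np.asarray(positions, float)
    dd = dd_percent / 100.0 * ref.max()        # global dose-difference criterion
    keep = ref >= dose_threshold * ref.max()   # 10% dose threshold, as in the study
    dose_term = (eval_[None, :] - ref[keep, None]) / dd
    dist_term = (pos[None, :] - pos[keep, None]) / dta_mm
    gamma = np.sqrt(dose_term ** 2 + dist_term ** 2).min(axis=1)
    return 100.0 * (gamma <= 1.0).mean()
```

Tightening either criterion (e.g. from 3%/3 mm to 2%/2 mm) shrinks the acceptance ellipse, which is why GPRs drop, and prediction gets harder, under stricter criteria.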
{"title":"Evaluation of cutout factors with small and narrow fields using various dosimetry detectors in electron beam keloid radiotherapy.","authors":"Yu-Fang Lin, Chen-Hsi Hsieh, Hui-Ju Tien, Yi-Huan Lee, Yi-Chun Chen, Lu-Han Lai, Shih-Ming Hsu, Pei-Wei Shueng","doi":"10.3233/XST-240059","DOIUrl":"10.3233/XST-240059","url":null,"abstract":"<p><strong>Background: </strong>The inherent problems in the existence of electron equilibrium and steep dose fall-off pose difficulties for small- and narrow-field dosimetry.</p><p><strong>Objective: </strong>To investigate the cutout factors for keloid electron radiotherapy using various dosimetry detectors for small and narrow fields.</p><p><strong>Method: </strong>The measurements were performed in a solid water phantom with nine different cutout shapes. Five dosimetry detectors were used in the study: pinpoint 3D ionization chamber, Farmer chamber, semiflex chamber, Classic Markus parallel plate chamber, and EBT3 film.</p><p><strong>Results: </strong>The results demonstrated good agreement between the semiflex and pinpoint chambers. Furthermore, there was no difference between the Farmer and pinpoint chambers for large cutouts. For the EBT3 film, half of the cases had differences greater than 1%, and the maximum discrepancy compared with the reference chamber was greater than 2% for the narrow field.</p><p><strong>Conclusion: </strong>The parallel plate, semiflex chamber and EBT3 film are suitable dosimeters that are comparable with pinpoint 3D chambers in small and narrow electron fields. Notably, a semiflex chamber could be an alternative option to a pinpoint 3D chamber for cutout widths≥3 cm. 
It is very important to perform patient-specific cutout factor calibration with an appropriate dosimeter for keloid radiotherapy.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141437769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
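A cutout (output) factor is the ratio of the detector reading for the shaped cutout field to the reading for the open reference field, and detector agreement is judged by the percent difference from the reference chamber. The bookkeeping is simple; the readings below are assumed for illustration, not measured data from this study:

```python
def cutout_factor(cutout_reading: float, reference_reading: float) -> float:
    """Output factor of a shaped electron cutout relative to the open reference field."""
    return cutout_reading / reference_reading

def percent_diff(value: float, reference: float) -> float:
    """Signed difference from the reference detector's cutout factor, in percent."""
    return 100.0 * (value - reference) / reference

# Assumed example readings (arbitrary units), not the study's measurements:
pinpoint = cutout_factor(0.912, 1.000)   # reference chamber
film = cutout_factor(0.932, 1.000)       # hypothetical film result
diff = percent_diff(film, pinpoint)      # about +2.2%, outside a 1% tolerance
```

A discrepancy like the one above is what the abstract means by film differences "greater than 2%" for narrow fields, and why patient-specific calibration with an appropriate dosimeter matters.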
{"title":"Learning technology for detection and grading of cancer tissue using tumour ultrasound images1.","authors":"Liyan Zhang, Ruiyan Xu, Jingde Zhao","doi":"10.3233/XST-230085","DOIUrl":"10.3233/XST-230085","url":null,"abstract":"<p><strong>Background: </strong>Early diagnosis of breast cancer is crucial to perform effective therapy. Many medical imaging modalities including MRI, CT, and ultrasound are used to diagnose cancer.</p><p><strong>Objective: </strong>This study aims to investigate the feasibility of applying transfer learning techniques to train convolutional neural networks (CNNs) to automatically diagnose breast cancer via ultrasound images.</p><p><strong>Methods: </strong>Transfer learning techniques helped CNNs recognise breast cancer in ultrasound images. The models were trained and tested on an ultrasound image dataset, and each model's training and validation accuracies were assessed.</p><p><strong>Results: </strong>MobileNet had the greatest accuracy during training and DenseNet121 during validation. Transfer learning algorithms can detect breast cancer in ultrasound images.</p><p><strong>Conclusions: </strong>Based on the results, transfer learning models may be useful for automated breast cancer diagnosis in ultrasound images. 
However, only a trained medical professional should diagnose cancer, and computational approaches should only be used to help make quick decisions.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9754646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
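The transfer learning recipe behind studies like this one is: freeze a pretrained backbone and train only a new classification head. That division of labor can be illustrated without a deep learning framework. In the sketch below the "pretrained" extractor is a frozen random projection and the data are synthetic, purely to show which parameters get updated; a real pipeline would use MobileNet or DenseNet121 weights and actual ultrasound images:

```python
import numpy as np

rng = np.random.default_rng(42)

# "Pretrained" feature extractor: weights are frozen (never updated below).
# In real transfer learning these come from a network trained on a large corpus;
# here they are a random projection, purely to illustrate the recipe.
W_frozen = rng.normal(size=(2, 16))

def features(x):
    return np.tanh(x @ W_frozen)   # frozen backbone forward pass

# Toy two-class data standing in for benign/malignant image descriptors.
x = np.vstack([rng.normal(loc=-1.5, size=(50, 2)),
               rng.normal(loc=+1.5, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Only the new head (logistic regression) is trained, via gradient descent.
w = np.zeros(16)
b = 0.0
f = features(x)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(f @ w + b)))
    w -= 0.5 * (f.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

logits = f @ w + b
accuracy = ((logits > 0) == (y == 1)).mean()
```

Because the backbone stays fixed, only 17 parameters are fitted here, which is why transfer learning works with the small datasets typical of medical imaging.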
{"title":"Machine learning framework for simulation of artifacts in paranasal sinuses diagnosis using CT images.","authors":"Abdullah Musleh","doi":"10.3233/XST-230284","DOIUrl":"10.3233/XST-230284","url":null,"abstract":"<p><p>In the medical field, diagnostic tools that make use of deep neural networks have reached a level of performance never before seen. A proper diagnosis of a patient's condition is crucial in modern medicine since it determines whether or not the patient will receive the care they need. Data from a sinus CT scan is uploaded to a computer and displayed on a high-definition monitor to give the surgeon a clear anatomical orientation before endoscopic sinus surgery. In this study, a unique method is presented for detecting and diagnosing paranasal sinus disorders using machine learning. The researchers behind the current study designed their own approach. To speed up diagnosis, one of the primary goals of our study is to create an algorithm that can accurately evaluate the paranasal sinuses in CT scans. The proposed technology makes it feasible to automatically cut down on the number of CT scan images that require investigators to manually search through them all. In addition, the approach offers an automatic segmentation that may be used to locate the paranasal sinus region and crop it accordingly. As a result, the suggested method dramatically reduces the amount of data that is necessary during the training phase. As a result, this results in an increase in the efficiency of the computer while retaining a high degree of performance accuracy. The suggested method not only successfully identifies sinus irregularities but also automatically executes the necessary segmentation without requiring any manual cropping. This eliminates the need for time-consuming and error-prone human labor. 
When tested with actual CT scans, the method in question was discovered to have an accuracy of 95.16 percent while retaining a sensitivity of 99.14 percent throughout.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139941065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
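The sinus-CT study above reports two figures of merit, accuracy (95.16%) and sensitivity (99.14%). As an illustrative sketch only, the snippet below shows how these standard metrics are computed from binary confusion-matrix counts; the counts used are hypothetical, chosen merely to land near the reported percentages, and are not from the paper.

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of all scans classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def sensitivity(tp, fn):
    """Fraction of truly abnormal scans that are flagged (true-positive rate)."""
    return tp / (tp + fn)

# Hypothetical counts (NOT from the paper), chosen only so the metrics land
# near the reported values: 115 true positives, 3 true negatives,
# 5 false positives, 1 false negative.
tp, tn, fp, fn = 115, 3, 5, 1
print(f"accuracy    = {accuracy(tp, tn, fp, fn):.2%}")   # → 95.16%
print(f"sensitivity = {sensitivity(tp, fn):.2%}")        # → 99.14%
```

Note that a high sensitivity with a lower accuracy, as reported, implies the classifier rarely misses abnormal scans but pays for it with some false positives, a sensible trade-off for a screening aid.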
Background: A coded aperture X-ray diffraction (XRD) imaging system can measure the X-ray diffraction form factor from an object in three dimensions, X, Y, and Z (depth), broadening the potential application of this technology. However, to optimize XRD systems for specific applications, it is critical to understand how to predict and quantify system performance for each use case.
Objective: The purpose of this work is to present and validate 3D spatial resolution models for XRD imaging systems with a detector-side coded aperture.
Methods: A fan beam coded aperture XRD system was used to scan 3D printed resolution phantoms placed at various locations throughout the system's field of view. The multiplexed scatter data were reconstructed using a model-based iterative reconstruction algorithm, and the resulting volumetric images were evaluated using multiple resolution criteria to compare against the known phantom resolution. We considered the full width at half maximum and the Sparrow criterion as measures of the resolution and compared our results against analytical resolution models from the literature as well as a new theory for predicting the system resolution based on geometric arguments.
Results: We show that our experimental measurements are bounded by the range of theoretical resolution predictions, which accurately predict the observed trends and order of magnitude of the spatial and form factor resolutions. However, we find that the expected and observed resolution can vary by approximately a factor of two depending on the choice of metric and model considered. We observe depth resolutions of 7-16 mm and transverse resolutions of 0.6-2 mm for objects throughout the field of view. Furthermore, we observe tradeoffs between the spatial resolution and XRD form factor resolution as a function of sample location.
Conclusion: The theories evaluated in this study provide a useful framework for estimating the 3D spatial resolution of a detector-side coded aperture XRD imaging system. The assumptions and simplifications required by these theories can affect how accurately they describe a particular system, but they also add to the generalizability of their predictions. Furthermore, understanding the implications of the assumptions behind each theory can help predict performance, as shown by our data falling between the conservative and idealized theories, and can better guide the optimized design of future systems.
{"title":"Resolution analysis of a volumetric coded aperture X-ray diffraction imaging system.","authors":"Zachary Gude, Anuj J Kapadia, Joel A Greenberg","doi":"10.3233/XST-230244","DOIUrl":"10.3233/XST-230244","url":null,"abstract":"<p><strong>Background: </strong>A coded aperture X-ray diffraction (XRD) imaging system can measure the X-ray diffraction form factor from an object in three dimensions -X, Y and Z (depth), broadening the potential application of this technology. However, to optimize XRD systems for specific applications, it is critical to understand how to predict and quantify system performance for each use case.</p><p><strong>Objective: </strong>The purpose of this work is to present and validate 3D spatial resolution models for XRD imaging systems with a detector-side coded aperture.</p><p><strong>Methods: </strong>A fan beam coded aperture XRD system was used to scan 3D printed resolution phantoms placed at various locations throughout the system's field of view. The multiplexed scatter data were reconstructed using a model-based iterative reconstruction algorithm, and the resulting volumetric images were evaluated using multiple resolution criteria to compare against the known phantom resolution. We considered the full width at half max and Sparrow criterion as measures of the resolution and compared our results against analytical resolution models from the literature as well as a new theory for predicting the system resolution based on geometric arguments.</p><p><strong>Results: </strong>We show that our experimental measurements are bounded by the multitude of theoretical resolution predictions, which accurately predict the observed trends and order of magnitude of the spatial and form factor resolutions. However, we find that the expected and observed resolution can vary by approximately a factor of two depending on the choice of metric and model considered. 
We observe depth resolutions of 7-16 mm and transverse resolutions of 0.6-2 mm for objects throughout the field of view. Furthermore, we observe tradeoffs between the spatial resolution and XRD form factor resolution as a function of sample location.</p><p><strong>Conclusion: </strong>The theories evaluated in this study provide a useful framework for estimating the 3D spatial resolution of a detector side coded aperture XRD imaging system. The assumptions and simplifications required by these theories can impact the overall accuracy of describing a particular system, but they also can add to the generalizability of their predictions. Furthermore, understanding the implications of the assumptions behind each theory can help predict performance, as shown by our data's placement between the conservative and idealized theories, and better guide future systems for optimized designs.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140873376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
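The resolution study above compares two standard criteria, the full width at half maximum (FWHM) and the Sparrow criterion. As a hedged illustration, not the authors' code, the sketch below computes both for an assumed unit-width Gaussian point-spread function: the FWHM of a sampled peak (analytically 2√(2 ln 2) σ ≈ 2.355 σ) and the Sparrow separation, the smallest spacing at which a central dip appears between two equal peaks (analytically 2σ for Gaussians). This also shows why the two metrics disagree on what "the resolution" is, echoing the factor-of-two spread the abstract reports.

```python
import numpy as np

SIGMA = 1.0  # assumed Gaussian point-spread width (illustrative)
x = np.linspace(-8.0, 8.0, 4001)  # sampling grid; includes x = 0 exactly

def gaussian(x, mu, sigma=SIGMA):
    """Unit-amplitude Gaussian peak centred at mu."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fwhm(x, y):
    """Full width at half maximum of a single sampled peak,
    with linear interpolation at the half-height crossings."""
    half = y.max() / 2.0
    idx = np.where(y >= half)[0]
    i, j = idx[0], idx[-1]
    left = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    right = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return right - left

def sparrow_separation(step=0.001):
    """Smallest separation of two equal Gaussians at which a central
    dip first appears (the Sparrow resolution limit)."""
    centre_idx = np.argmin(np.abs(x))
    for d in np.arange(0.5, 4.0, step):
        y = gaussian(x, -d / 2) + gaussian(x, d / 2)
        if y[centre_idx] < y.max() - 1e-9:  # dip below the off-centre maxima
            return d
    return None

print(f"FWHM    = {fwhm(x, gaussian(x, 0.0)):.3f}  (theory 2.355)")
print(f"Sparrow = {sparrow_separation():.3f}  (theory 2.000)")
```

Because the Sparrow separation (2σ) is smaller than the FWHM (2.355σ), a "resolution" quoted under the Sparrow criterion will always look better than one quoted as a FWHM for the same system, so the choice of metric alone can shift reported resolution figures.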