Pub Date: 2024-12-23. DOI: 10.3390/tomography10120151
Yujin Eom, Yong-Jin Park, Sumin Lee, Su-Jin Lee, Young-Sil An, Bok-Nam Park, Joon-Kee Yoon
Background/objectives: Calculating the radiation dose from CT in 18F-PET/CT examinations poses a significant challenge. The objective of this study is to develop a deep learning-based automated program that standardizes the measurement of radiation doses.
Methods: The torso CT was segmented into six distinct regions using TotalSegmentator. An automated program was employed to extract the necessary information and calculate the effective dose (ED) of PET/CT. The accuracy of our automated program was verified by comparing the EDs calculated by the program with those determined by a nuclear medicine physician (n = 30). Additionally, we compared the EDs obtained from an older PET/CT scanner with those from a newer PET/CT scanner (n = 42).
Results: The CT ED calculated by the automated program was not significantly different from that calculated by the nuclear medicine physician (3.67 ± 0.61 mSv and 3.62 ± 0.60 mSv, respectively, p = 0.7623). Similarly, the total ED showed no significant difference between the two calculation methods (8.10 ± 1.40 mSv and 8.05 ± 1.39 mSv, respectively, p = 0.8957). A very strong correlation was observed in both the CT ED and total ED between the two measurements (r2 = 0.9981 and 0.9996, respectively). The automated program showed excellent repeatability and reproducibility. When comparing the older and newer PET/CT scanners, the PET ED was significantly lower in the newer scanner than in the older scanner (4.39 ± 0.91 mSv and 6.00 ± 1.17 mSv, respectively, p < 0.0001). Consequently, the total ED was significantly lower in the newer scanner than in the older scanner (8.22 ± 1.53 mSv and 9.65 ± 1.34 mSv, respectively, p < 0.0001).
Conclusions: We successfully developed an automated program for calculating the ED of torso 18F-PET/CT. By integrating a deep learning model, the program effectively eliminated inter-operator variability.
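The per-region dose arithmetic such a program automates can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the k-factors below are placeholder values in the style of ICRP-type DLP-to-ED conversion coefficients, and 0.019 mSv/MBq is the commonly cited ICRP adult dose coefficient for 18F-FDG; the study's actual factors may differ.

```python
# Sketch of an effective-dose (ED) calculation for torso PET/CT.
# K_FACTORS maps a body region to an illustrative conversion
# coefficient (mSv per mGy*cm of dose-length product); these are
# placeholders, not the values used in the study.
K_FACTORS = {
    "head": 0.0021,
    "neck": 0.0059,
    "chest": 0.014,
    "abdomen": 0.015,
    "pelvis": 0.015,
    "legs": 0.0008,
}

def ct_effective_dose(dlp_by_region):
    """Sum each region's DLP (mGy*cm) times its k-factor -> ED in mSv."""
    return sum(K_FACTORS[region] * dlp for region, dlp in dlp_by_region.items())

def pet_effective_dose(injected_mbq, dose_coeff=0.019):
    """PET ED: injected activity (MBq) times a dose coefficient (mSv/MBq)."""
    return injected_mbq * dose_coeff

# Total ED is the sum of the CT and PET components.
ct_ed = ct_effective_dose({"chest": 150.0, "abdomen": 120.0, "pelvis": 90.0})
total_ed = ct_ed + pet_effective_dose(240.0)
```

Once a segmentation model assigns each CT slice to a region, the DLP per region can be accumulated automatically, which is what removes the manual, operator-dependent step.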
Title: Automated Measurement of Effective Radiation Dose by 18F-Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography
Tomography, vol. 10, no. 12, pp. 2144-2157. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11679132/pdf/
Pub Date: 2024-12-23. DOI: 10.3390/tomography10120150
Mattin Sayed, Sari Saba-Sadiya, Benedikt Wichtlhuber, Julia Dietz, Matthias Neitzel, Leopold Keller, Gemma Roig, Andreas M Bucher
Background: Medical image segmentation is an essential step in both clinical and research applications, and automated segmentation models, such as TotalSegmentator, have become ubiquitous. However, robust methods for validating the accuracy of these models remain limited, and manual inspection is often necessary before the segmentation masks produced by these models can be used.
Methods: To address this gap, we have developed a novel validation framework for segmentation models, leveraging data augmentation to assess model consistency. We produced segmentation masks for both the original and augmented scans, and we calculated the alignment metrics between these segmentation masks.
Results: Our results demonstrate strong correlation between the segmentation quality of the original scan and the average alignment between the masks of the original and augmented CT scans. These results were further validated by supporting metrics, including the coefficient of variance and the average symmetric surface distance, indicating that agreement with augmented-scan segmentation masks is a valid proxy for segmentation quality.
Conclusions: Overall, our framework offers a pipeline for evaluating segmentation performance without relying on manually labeled ground truth data, establishing a foundation for future advancements in automated medical image analysis.
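The consistency check at the heart of this framework can be sketched as follows. `segment` is a hypothetical placeholder for any segmentation model (e.g., TotalSegmentator), a horizontal flip stands in for the augmentations, and Dice stands in for the alignment metrics; the paper also uses other supporting metrics.

```python
# Augmentation-consistency sketch: segment the original and an
# augmented copy, map the augmented mask back into the original
# frame, and score agreement. Masks are binary (0/1) integer grids.

def hflip(img):
    """Horizontal flip; it is its own inverse."""
    return [row[::-1] for row in img]

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks."""
    inter = sum(a & b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    total = sum(sum(r) for r in mask_a) + sum(sum(r) for r in mask_b)
    return 2.0 * inter / total if total else 1.0

def consistency_score(img, segment):
    """Agreement between the original mask and the (un-augmented)
    mask of the augmented image; a proxy for segmentation quality."""
    original_mask = segment(img)
    augmented_mask = hflip(segment(hflip(img)))  # invert the flip
    return dice(original_mask, augmented_mask)
```

A robust model should produce nearly the same anatomy regardless of the augmentation, so a low consistency score flags scans whose masks deserve manual inspection.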
Title: Evaluating Medical Image Segmentation Models Using Augmentation
Tomography, vol. 10, no. 12, pp. 2128-2143. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11679113/pdf/
Pub Date: 2024-12-20. DOI: 10.3390/tomography10120149
Chloe Dunseath, Emma J Bova, Elizabeth Wilson, Marguerite Care, Kim M Cecil
Using a pediatric-focused lens, this review article briefly summarizes the presentation of several demyelinating and neuroinflammatory diseases using conventional magnetic resonance imaging (MRI) sequences, such as T1-weighted with and without an exogenous gadolinium-based contrast agent, T2-weighted, and fluid-attenuated inversion recovery (FLAIR). These conventional sequences exploit the intrinsic properties of tissue to provide a distinct signal contrast that is useful for evaluating disease features and monitoring treatment responses in patients by characterizing lesion involvement in the central nervous system and tracking temporal features with blood-brain barrier disruption. Illustrative examples are presented for pediatric-onset multiple sclerosis and neuroinflammatory diseases. This work also highlights findings from advanced MRI techniques, often infrequently employed due to the challenges involved in acquisition, post-processing, and interpretation, and identifies the need for future studies to extract the unique information, such as alterations in neurochemistry, disruptions of structural organization, or atypical functional connectivity, that may be relevant for the diagnosis and management of disease.
Title: Pediatric Neuroimaging of Multiple Sclerosis and Neuroinflammatory Diseases
Tomography, vol. 10, no. 12, pp. 2100-2127. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11679236/pdf/
Pub Date: 2024-12-19. DOI: 10.3390/tomography10120148
Frida Zacharias, Tony Martin Svahn
Background: This study aimed to assess the interobserver variability of semi-automatic diameter and volumetric measurements versus manual diameter measurements for small lung nodules identified on computed tomography scans.
Methods: The radiological patient database was searched for CT thorax examinations with at least one noncalcified solid nodule (∼3-10 mm). Three radiologists with four to six years of experience evaluated each nodule in accordance with the Fleischner Society guidelines using standard diameter measurements, semi-automatic lesion diameter measurements, and volumetric assessments. Spearman's correlation coefficient was used to measure inter-measurement agreement, and descriptive Bland-Altman plots were used to visualize agreement in the measured data. Potential discrepancies were analyzed.
Results: We studied a total of twenty-six nodules. Spearman's test showed that there was a much stronger relationship (p < 0.05) between reviewers for the semi-automatic diameter and volume measurements (avg. r = 0.97 ± 0.017 and 0.99 ± 0.005, respectively) than for the manual method (avg. r = 0.91 ± 0.017). In the Bland-Altman test, the semi-automatic diameter measure outperformed the manual method for all comparisons, while the volumetric method had better results in two out of three comparisons. The incidence of reviewers modifying the software's automatic outline varied between 62% and 92%.
Conclusions: Semi-automatic techniques significantly reduced interobserver variability for small solid nodules, which has important implications for diagnostic assessments and screening. Both the semi-automatic diameter and semi-automatic volume measurements showed improvements over the manual measurement approach. Training could further diminish observer variability, given the considerable diversity in the number of adjustments among reviewers.
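The two agreement analyses used above can be illustrated with a stdlib-only sketch; in practice `scipy.stats.spearmanr` and a plotting library would be used, and this is not the authors' analysis code.

```python
# Spearman rank correlation and Bland-Altman bias/limits of agreement
# between two readers' measurements of the same nodules.
from statistics import mean, stdev

def rank(xs):
    """1-based ranks, averaging the positions of tied values."""
    sorted_xs = sorted(xs)
    return [sum(i for i, v in enumerate(sorted_xs, 1) if v == x) / sorted_xs.count(x)
            for x in xs]

def spearman(a, b):
    """Spearman's r = Pearson correlation of the ranks."""
    ra, rb = rank(a), rank(b)
    ma, mb = mean(ra), mean(rb)
    num = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    den = (sum((x - ma) ** 2 for x in ra) * sum((y - mb) ** 2 for y in rb)) ** 0.5
    return num / den

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement."""
    diffs = [x - y for x, y in zip(a, b)]
    bias, sd = mean(diffs), stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Narrower Bland-Altman limits and a higher Spearman r between readers are exactly what "reduced interobserver variability" means operationally.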
Title: Interobserver Variability in Manual Versus Semi-Automatic CT Assessments of Small Lung Nodule Diameter and Volume
Tomography, vol. 10, no. 12, pp. 2087-2099. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11680079/pdf/
Pub Date: 2024-12-18. DOI: 10.3390/tomography10120147
Yusuke Inoue, Hiroyasu Itoh, Hirofumi Hata, Hiroki Miyatake, Kohei Mitsui, Shunichi Uehara, Chisaki Masuda
Objectives: We evaluated the noise reduction effects of deep learning reconstruction (DLR) and hybrid iterative reconstruction (HIR) in brain computed tomography (CT).
Methods: CT images of a 16 cm dosimetry phantom, a head phantom, and the brains of 11 patients were reconstructed using filtered backprojection (FBP) and various levels of DLR and HIR. The slice thickness was 5, 2.5, 1.25, and 0.625 mm. Phantom imaging was also conducted at various tube currents. The noise reduction ratio was calculated using FBP as the reference. For patient imaging, overall image quality was visually compared between DLR and HIR images that exhibited similar noise reduction ratios.
Results: The noise reduction ratio increased with increasing levels of DLR and HIR in phantom and patient imaging. For DLR, noise reduction was more pronounced with decreasing slice thickness, while such thickness dependence was less evident for HIR. Although the noise reduction effects of DLR were similar between the head phantom and patients, they differed for the dosimetry phantom. Variations between imaging objects were small for HIR. The noise reduction ratio was low at low tube currents for the dosimetry phantom using DLR; otherwise, the influence of the tube current was small. In terms of visual image quality, DLR outperformed HIR in 1.25 mm thick images but not in thicker images.
Conclusions: The degree of noise reduction using DLR depends on the slice thickness, tube current, and imaging object in addition to the level of DLR, which should be considered in the clinical use of DLR. DLR may be particularly beneficial for thin-slice imaging.
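A plausible reading of the noise metric above, assuming noise is measured as the standard deviation of CT numbers in a uniform region of interest with FBP as the reference; the authors' exact definition may differ.

```python
# Noise-reduction ratio of a reconstruction relative to FBP.
from statistics import pstdev

def roi_noise(hu_values):
    """Image noise = SD of CT numbers (HU) in a uniform ROI."""
    return pstdev(hu_values)

def noise_reduction_ratio(fbp_roi, recon_roi):
    """Fraction of FBP noise removed by the DLR/HIR reconstruction
    (0 = no reduction, 1 = all noise removed)."""
    return 1.0 - roi_noise(recon_roi) / roi_noise(fbp_roi)
```

Computing this ratio per slice thickness and tube current is what exposes the parameter dependence the study reports for DLR.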
Title: Noise Reduction in Brain CT: A Comparative Study of Deep Learning and Hybrid Iterative Reconstruction Using Multiple Parameters
Tomography, vol. 10, no. 12, pp. 2073-2086. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11679002/pdf/
Pub Date: 2024-12-13. DOI: 10.3390/tomography10120146
Jinnian Zhang, Weijie Chen, Tanmayee Joshi, Xiaomin Zhang, Po-Ling Loh, Varun Jog, Richard J Bruce, John W Garrett, Alan B McMillan
This research introduces BAE-ViT, a specialized vision transformer model developed for bone age estimation (BAE). This model is designed to efficiently merge image and sex data, a capability not present in traditional convolutional neural networks (CNNs). BAE-ViT employs a novel data fusion method to facilitate detailed interactions between visual and non-visual data by tokenizing non-visual information and concatenating all tokens (visual or non-visual) as the input to the model. The model underwent training on a large-scale dataset from the 2017 RSNA Pediatric Bone Age Machine Learning Challenge, where it exhibited commendable performance, particularly excelling in handling image distortions compared to existing models. The effectiveness of BAE-ViT was further affirmed through statistical analysis, demonstrating a strong correlation with the actual ground-truth labels. This study contributes to the field by showcasing the potential of vision transformers as a viable option for integrating multimodal data in medical imaging applications, specifically emphasizing their capacity to incorporate non-visual elements like sex information into the framework. This tokenization method not only demonstrates superior performance in this specific task but also offers a versatile framework for integrating multimodal data in medical imaging applications.
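The token-fusion idea can be illustrated with toy dimensions. This is not the authors' code: the embedding values and the three-dimensional tokens are arbitrary, and the real model feeds the fused sequence into a full ViT encoder stack.

```python
# BAE-ViT-style fusion sketch: the non-visual input (sex) is turned
# into one extra token via an embedding lookup and concatenated with
# the visual patch tokens, so self-attention runs over both kinds of
# information. Embedding values here are arbitrary placeholders.
SEX_EMBEDDING = {"female": [0.1, -0.2, 0.3], "male": [-0.1, 0.2, -0.3]}

def fuse_tokens(visual_tokens, sex):
    """visual_tokens: list of d-dim patch embeddings.
    Returns the token sequence a transformer encoder would consume."""
    return visual_tokens + [SEX_EMBEDDING[sex]]

# Two 3-dim patch tokens plus one sex token -> a 3-token sequence.
tokens = fuse_tokens([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]], "female")
```

Because the sex token participates in every attention layer, the model can condition its visual reasoning on it throughout, rather than concatenating sex to a final feature vector as CNN pipelines typically do.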
Title: BAE-ViT: An Efficient Multimodal Vision Transformer for Bone Age Estimation
Tomography, vol. 10, no. 12, pp. 2058-2072. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11679900/pdf/
Pub Date: 2024-12-12. DOI: 10.3390/tomography10120145
Yi-Ming Wang, Chi-Yuan Wang, Kuo-Ying Liu, Yung-Hui Huang, Tai-Been Chen, Kon-Ning Chiu, Chih-Yu Liang, Nan-Han Lu
Background/Objectives: Breast cancer is a leading cause of mortality among women in Taiwan and globally. Non-invasive imaging methods, such as mammography and ultrasound, are critical for early detection, yet standalone modalities have limitations in regard to their diagnostic accuracy. This study aims to enhance breast cancer detection through a cross-modality fusion approach combining mammography and ultrasound imaging, using advanced convolutional neural network (CNN) architectures. Materials and Methods: Breast images were sourced from public datasets, including the RSNA, the PAS, and Kaggle, and categorized into malignant and benign groups. Data augmentation techniques were used to address imbalances in the ultrasound dataset. Three models were developed: (1) pre-trained CNNs integrated with machine learning classifiers, (2) transfer learning-based CNNs, and (3) a custom-designed 17-layer CNN for direct classification. The performance of the models was evaluated using metrics such as accuracy and the Kappa score. Results: The custom 17-layer CNN outperformed the other models, achieving an accuracy of 0.964 and a Kappa score of 0.927. The transfer learning model achieved moderate performance (accuracy 0.846, Kappa 0.694), while the pre-trained CNNs with machine learning classifiers yielded the lowest results (accuracy 0.780, Kappa 0.559). Cross-modality fusion proved effective in leveraging the complementary strengths of mammography and ultrasound imaging. Conclusions: This study demonstrates the potential of cross-modality imaging and tailored CNN architectures to significantly improve diagnostic accuracy and reliability in breast cancer detection. The custom-designed model offers a practical solution for early detection, potentially reducing false positives and false negatives, and improving patient outcomes through timely and accurate diagnosis.
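Assuming the reported Kappa score is Cohen's kappa (accuracy corrected for chance agreement, the usual choice for two-class evaluation), it can be computed as follows; this is an illustration, not the authors' evaluation code.

```python
# Cohen's kappa: (observed agreement - expected agreement by chance)
# / (1 - expected agreement by chance).
from collections import Counter

def cohens_kappa(y_true, y_pred):
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    # Chance agreement from the marginal class frequencies.
    expected = sum(true_counts[c] * pred_counts[c] for c in true_counts) / n ** 2
    return (observed - expected) / (1 - expected)
```

A kappa of 0.927 alongside an accuracy of 0.964 indicates that the custom CNN's agreement with the labels is far beyond what class imbalance alone could produce.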
Title: CNN-Based Cross-Modality Fusion for Enhanced Breast Cancer Detection Using Mammography and Ultrasound
Tomography, vol. 10, no. 12, pp. 2038-2057. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11679931/pdf/
Pub Date: 2024-12-09. DOI: 10.3390/tomography10120144
Yu Feng, Weiming Zeng, Yifan Xie, Hongyu Chen, Lei Wang, Yingying Wang, Hongjie Yan, Kaile Zhang, Ran Tao, Wai Ting Siok, Nizhuan Wang
Background: Although it has been noticed that depressed patients show differences in processing emotions, the precise neural modulation mechanisms of positive and negative emotions remain elusive. fMRI is a cutting-edge medical imaging technology renowned for its high spatial resolution and dynamic temporal information, making it particularly well suited to studying the neural dynamics of depression.
Methods: To address this gap, our study firstly leveraged fMRI to delineate activated regions associated with positive and negative emotions in healthy individuals, resulting in the creation of the positive emotion atlas (PEA) and the negative emotion atlas (NEA). Subsequently, we examined neuroimaging changes in depression patients using these atlases and evaluated their diagnostic performance based on machine learning.
Results: Our findings demonstrate that the classification accuracy for depressed patients based on the PEA and NEA exceeded 0.70, a notable improvement over whole-brain atlases. Furthermore, amplitude of low-frequency fluctuation (ALFF) analysis revealed significant differences between depressed patients and healthy controls in eight functional clusters within the NEA, centered on the left cuneus, cingulate gyrus, and superior parietal lobule. In contrast, the PEA revealed more pronounced differences across fifteen clusters, involving the right fusiform gyrus, parahippocampal gyrus, and inferior parietal lobule.
Conclusions: These findings emphasize the complex interplay between emotion modulation and depression, showcasing significant alterations in both PEA and NEA among depression patients. This research enhances our understanding of emotion modulation in depression, with implications for diagnosis and treatment evaluation.
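The atlas-based pipeline described above (per-region ALFF features from the PEA/NEA, fed to a classifier) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the array shapes, region labels, synthetic data, and group centroids are all assumptions made for the example.

```python
import numpy as np

def roi_mean_alff(alff_map, atlas, labels):
    """Average an ALFF map within each atlas region, yielding one
    feature per region (a hypothetical PEA/NEA-style parcellation)."""
    return np.array([alff_map[atlas == lab].mean() for lab in labels])

# Synthetic illustration: a 3-region "atlas" over a tiny 4x4x4 volume,
# with voxel signal drawn around a region-dependent mean.
rng = np.random.default_rng(0)
atlas = rng.integers(1, 4, size=(4, 4, 4))                  # labels 1..3
alff_map = rng.normal(loc=atlas.astype(float), scale=0.1)   # region-dependent signal
features = roi_mean_alff(alff_map, atlas, labels=[1, 2, 3])
print(features.shape)  # (3,)

# A minimal nearest-centroid decision between two hypothetical groups
# (centroid values are invented for the sketch):
centroid_hc = np.array([1.0, 2.0, 3.0])    # assumed healthy-control centroid
centroid_mdd = np.array([1.3, 1.7, 3.2])   # assumed depressed-group centroid
label = ("MDD" if np.linalg.norm(features - centroid_mdd)
         < np.linalg.norm(features - centroid_hc) else "HC")
```

In practice, the feature vector would have one entry per PEA or NEA cluster, and the nearest-centroid step would be replaced by whatever supervised classifier is trained on the patient and control cohorts.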
Title: Neural Modulation Alteration to Positive and Negative Emotions in Depressed Patients: Insights from fMRI Using Positive/Negative Emotion Atlas. (Tomography 10(12):2014-2037)
Pub Date : 2024-12-08DOI: 10.3390/tomography10120143
Dhrumil Deveshkumar Patel, Laura Z Fenton, Swastika Lamture, Vinay Kandula
Evaluating altered mental status and suspected meningeal disorders in children often begins with imaging, typically before a lumbar puncture. The challenge is that meningeal enhancement is a common finding across a range of pathologies, making diagnosis complex. This review proposes a categorization of meningeal diseases based on their predominant imaging characteristics. It includes a detailed description of the clinical and imaging features of various conditions that lead to leptomeningeal or pachymeningeal enhancement in children and adolescents. These conditions encompass infectious meningitis (viral, bacterial, tuberculous, algal, and fungal), autoimmune diseases (such as anti-MOG demyelination, neurosarcoidosis, Guillain-Barré syndrome, idiopathic hypertrophic pachymeningitis, and NMDA-related encephalitis), primary and secondary tumors (including diffuse glioneuronal tumor of childhood, primary CNS rhabdomyosarcoma, primary CNS tumoral metastasis, extracranial tumor metastasis, and lymphoma), tumor-like diseases (Langerhans cell histiocytosis and ALK-positive histiocytosis), vascular causes (such as pial angiomatosis, ANCA-related vasculitis, and Moyamoya disease), and other disorders like spontaneous intracranial hypotension and posterior reversible encephalopathy syndrome. Despite the nonspecific nature of imaging findings associated with meningeal lesions, narrowing down the differential diagnoses is crucial, as each condition requires a tailored and specific treatment approach.
Title: Pediatric Meningeal Diseases: What Radiologists Need to Know. (Tomography 10(12):1970-2013)
Pub Date : 2024-12-05DOI: 10.3390/tomography10120142
Ahmet Peker, Ayushi Sinha, Robert M King, Jeffrey Minnaard, William van der Sterren, Torre Bydlon, Alexander A Bankier, Matthew J Gounis
Objective: Image-guided diagnosis and treatment of lung lesions is an active area of research. With the growing number of solutions proposed, there is also a growing need to establish a standard for the evaluation of these solutions. Thus, realistic phantom and preclinical environments must be established. Realistic study environments must include implanted lung nodules that are morphologically similar to real lung lesions under X-ray imaging.
Methods: Various materials were injected into a phantom swine lung to evaluate the similarity to real lung lesions in size, location, density, and grayscale intensities in X-ray imaging. A combination of n-butyl cyanoacrylate (n-BCA) and ethiodized oil displayed radiopacity that was most similar to real lung lesions, and various injection techniques were evaluated to ensure easy implantation and to generate features mimicking malignant lesions.
Results: These techniques generated implanted nodules with properties mimicking solid nodules, including pleural extensions and spiculations typical of malignant lesions. Using n-BCA alone, implanted nodules mimicking ground-glass opacity were also generated. These results are condensed into a set of recommendations prescribing the materials and techniques that should be used to reproduce such nodules.
Conclusions: The resulting recommendations on the use of n-BCA and ethiodized oil can help establish a standard for evaluating new image-guided solutions and refining algorithms in phantom and animal studies with realistic nodules.
Title: A Novel Method for the Generation of Realistic Lung Nodules Visualized Under X-Ray Imaging. (Tomography 10(12):1959-1969)