
Latest Publications in Tomography

Automated Measurement of Effective Radiation Dose by 18F-Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography.
IF 2.2 | Tier 4, Medicine | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-23 | DOI: 10.3390/tomography10120151
Yujin Eom, Yong-Jin Park, Sumin Lee, Su-Jin Lee, Young-Sil An, Bok-Nam Park, Joon-Kee Yoon

Background/objectives: Calculating the radiation dose from CT in 18F-PET/CT examinations poses a significant challenge. The objective of this study is to develop a deep learning-based automated program that standardizes the measurement of radiation doses.

Methods: The torso CT was segmented into six distinct regions using TotalSegmentator. An automated program was employed to extract the necessary information and calculate the effective dose (ED) of PET/CT. The accuracy of our automated program was verified by comparing the EDs calculated by the program with those determined by a nuclear medicine physician (n = 30). Additionally, we compared the EDs obtained from an older PET/CT scanner with those from a newer PET/CT scanner (n = 42).
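
The abstract does not give the dose formulas themselves; below is a minimal sketch, assuming the program follows the common approach of converting each region's dose-length product (DLP) to effective dose with a region-specific coefficient and adding the PET contribution from the administered 18F-FDG activity. The region names, coefficient values, and function names are illustrative assumptions, not the study's actual parameters.

```python
# Minimal sketch (assumed approach, not the paper's implementation):
# CT ED per body region ~ DLP (mGy*cm) x k (mSv / (mGy*cm));
# PET ED ~ administered activity (MBq) x dose coefficient (mSv/MBq).

# Illustrative conversion factors; real values would be taken from published tables.
K_FACTORS = {
    "head_neck": 0.0031,
    "chest": 0.014,
    "abdomen": 0.015,
    "pelvis": 0.015,
}
FDG_DOSE_COEFF = 0.019  # mSv/MBq, typical adult 18F-FDG coefficient (illustrative)


def ct_effective_dose(dlp_by_region: dict) -> float:
    """Sum region-wise DLP times the matching conversion factor."""
    return sum(dlp * K_FACTORS[region] for region, dlp in dlp_by_region.items())


def pet_effective_dose(injected_activity_mbq: float) -> float:
    """Effective dose from the injected 18F-FDG activity."""
    return injected_activity_mbq * FDG_DOSE_COEFF


if __name__ == "__main__":
    ct_ed = ct_effective_dose({"chest": 150.0, "abdomen": 120.0})
    total_ed = ct_ed + pet_effective_dose(250.0)
    print(f"CT ED ~ {ct_ed:.2f} mSv, total ED ~ {total_ed:.2f} mSv")
```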

Results: The CT ED calculated by the automated program was not significantly different from that calculated by the nuclear medicine physician (3.67 ± 0.61 mSv and 3.62 ± 0.60 mSv, respectively, p = 0.7623). Similarly, the total ED showed no significant difference between the two calculation methods (8.10 ± 1.40 mSv and 8.05 ± 1.39 mSv, respectively, p = 0.8957). A very strong correlation was observed in both the CT ED and total ED between the two measurements (r2 = 0.9981 and 0.9996, respectively). The automated program showed excellent repeatability and reproducibility. When comparing the older and newer PET/CT scanners, the PET ED was significantly lower in the newer scanner than in the older scanner (4.39 ± 0.91 mSv and 6.00 ± 1.17 mSv, respectively, p < 0.0001). Consequently, the total ED was significantly lower in the newer scanner than in the older scanner (8.22 ± 1.53 mSv and 9.65 ± 1.34 mSv, respectively, p < 0.0001).

Conclusions: We successfully developed an automated program for calculating the ED of torso 18F-PET/CT. By integrating a deep learning model, the program effectively eliminated inter-operator variability.

Citations: 0
Evaluating Medical Image Segmentation Models Using Augmentation.
IF 2.2 | Tier 4, Medicine | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-23 | DOI: 10.3390/tomography10120150
Mattin Sayed, Sari Saba-Sadiya, Benedikt Wichtlhuber, Julia Dietz, Matthias Neitzel, Leopold Keller, Gemma Roig, Andreas M Bucher

Background: Medical image segmentation is an essential step in both clinical and research applications, and automated segmentation models, such as TotalSegmentator, have become ubiquitous. However, robust methods for validating the accuracy of these models remain limited, and manual inspection is often necessary before the segmentation masks produced by these models can be used.

Methods: To address this gap, we have developed a novel validation framework for segmentation models, leveraging data augmentation to assess model consistency. We produced segmentation masks for both the original and augmented scans, and we calculated the alignment metrics between these segmentation masks.
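
The abstract does not name the alignment metric; the sketch below uses one common choice, the Dice coefficient, computed between the original-scan mask and the mask of an augmented scan after it has been mapped back onto the original image grid. The function names and the averaging over augmentations are illustrative assumptions.

```python
import numpy as np


def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Overlap between two binary segmentation masks of identical shape."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom


def consistency_score(original_mask: np.ndarray, augmented_masks: list) -> float:
    """Average alignment between the original-scan mask and masks from augmented
    scans that have already been resampled back onto the original grid."""
    return float(np.mean([dice_coefficient(original_mask, m) for m in augmented_masks]))
```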

Results: Our results demonstrate strong correlation between the segmentation quality of the original scan and the average alignment between the masks of the original and augmented CT scans. These results were further validated by supporting metrics, including the coefficient of variance and the average symmetric surface distance, indicating that agreement with augmented-scan segmentation masks is a valid proxy for segmentation quality.

Conclusions: Overall, our framework offers a pipeline for evaluating segmentation performance without relying on manually labeled ground truth data, establishing a foundation for future advancements in automated medical image analysis.

Citations: 0
Pediatric Neuroimaging of Multiple Sclerosis and Neuroinflammatory Diseases.
IF 2.2 | Tier 4, Medicine | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-20 | DOI: 10.3390/tomography10120149
Chloe Dunseath, Emma J Bova, Elizabeth Wilson, Marguerite Care, Kim M Cecil

Using a pediatric-focused lens, this review article briefly summarizes the presentation of several demyelinating and neuroinflammatory diseases using conventional magnetic resonance imaging (MRI) sequences, such as T1-weighted with and without an exogenous gadolinium-based contrast agent, T2-weighted, and fluid-attenuated inversion recovery (FLAIR). These conventional sequences exploit the intrinsic properties of tissue to provide a distinct signal contrast that is useful for evaluating disease features and monitoring treatment responses in patients by characterizing lesion involvement in the central nervous system and tracking temporal features with blood-brain barrier disruption. Illustrative examples are presented for pediatric-onset multiple sclerosis and neuroinflammatory diseases. This work also highlights findings from advanced MRI techniques, often infrequently employed due to the challenges involved in acquisition, post-processing, and interpretation, and identifies the need for future studies to extract the unique information, such as alterations in neurochemistry, disruptions of structural organization, or atypical functional connectivity, that may be relevant for the diagnosis and management of disease.

Citations: 0
Interobserver Variability in Manual Versus Semi-Automatic CT Assessments of Small Lung Nodule Diameter and Volume.
IF 2.2 | Tier 4, Medicine | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-19 | DOI: 10.3390/tomography10120148
Frida Zacharias, Tony Martin Svahn

Background: This study aimed to assess the interobserver variability of semi-automatic diameter and volumetric measurements versus manual diameter measurements for small lung nodules identified on computed tomography scans.

Methods: The radiological patient database was searched for CT thorax examinations with at least one noncalcified solid nodule (∼3-10 mm). Three radiologists with four to six years of experience evaluated each nodule in accordance with the Fleischner Society guidelines using standard diameter measurements, semi-automatic lesion diameter measurements, and volumetric assessments. Spearman's correlation coefficient measured intermeasurement agreement. We used descriptive Bland-Altman plots to visualize agreement in the measured data. Potential discrepancies were analyzed.
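
As a minimal sketch of the statistics named above, the snippet below computes Spearman's correlation between two readers' diameter measurements and the Bland-Altman bias with 95% limits of agreement; the readers, values, and variable names are illustrative, not study data.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative paired diameter measurements (mm) from two readers for the same nodules.
reader1 = np.array([4.2, 5.1, 6.8, 7.3, 9.0])
reader2 = np.array([4.0, 5.4, 6.5, 7.8, 9.2])

rho, p_value = spearmanr(reader1, reader2)

# Bland-Altman: bias (mean difference) and 95% limits of agreement.
diff = reader1 - reader2
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)

print(f"Spearman r = {rho:.2f} (p = {p_value:.3f})")
print(f"Bland-Altman bias = {bias:.2f} mm, LoA = [{bias - loa:.2f}, {bias + loa:.2f}] mm")
```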

Results: We studied a total of twenty-six nodules. Spearman's test showed that there was a much stronger relationship (p < 0.05) between reviewers for the semi-automatic diameter and volume measurements (avg. r = 0.97 ± 0.017 and 0.99 ± 0.005, respectively) than for the manual method (avg. r = 0.91 ± 0.017). In the Bland-Altman test, the semi-automatic diameter measure outperformed the manual method for all comparisons, while the volumetric method had better results in two out of three comparisons. The incidence of reviewers modifying the software's automatic outline varied between 62% and 92%.

Conclusions: Semi-automatic techniques significantly reduced interobserver variability for small solid nodules, which has important implications for diagnostic assessments and screening. Both the semi-automatic diameter and semi-automatic volume measurements showed improvements over the manual measurement approach. Training could further diminish observer variability, given the considerable diversity in the number of adjustments among reviewers.

Citations: 0
Noise Reduction in Brain CT: A Comparative Study of Deep Learning and Hybrid Iterative Reconstruction Using Multiple Parameters.
IF 2.2 | Tier 4, Medicine | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-18 | DOI: 10.3390/tomography10120147
Yusuke Inoue, Hiroyasu Itoh, Hirofumi Hata, Hiroki Miyatake, Kohei Mitsui, Shunichi Uehara, Chisaki Masuda

Objectives: We evaluated the noise reduction effects of deep learning reconstruction (DLR) and hybrid iterative reconstruction (HIR) in brain computed tomography (CT).

Methods: CT images of a 16 cm dosimetry phantom, a head phantom, and the brains of 11 patients were reconstructed using filtered backprojection (FBP) and various levels of DLR and HIR. The slice thickness was 5, 2.5, 1.25, and 0.625 mm. Phantom imaging was also conducted at various tube currents. The noise reduction ratio was calculated using FBP as the reference. For patient imaging, overall image quality was visually compared between DLR and HIR images that exhibited similar noise reduction ratios.
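
The abstract states that the noise reduction ratio was calculated with FBP as the reference but does not spell out the formula; the sketch below assumes the conventional definition based on the standard deviation of HU values in a uniform region of interest, with synthetic numbers standing in for measured ROIs.

```python
import numpy as np


def noise_reduction_ratio(fbp_roi: np.ndarray, recon_roi: np.ndarray) -> float:
    """Percent reduction in image noise (ROI standard deviation of HU values)
    of a DLR/HIR reconstruction relative to the FBP reference."""
    sd_fbp = fbp_roi.std(ddof=1)
    sd_recon = recon_roi.std(ddof=1)
    return 100.0 * (sd_fbp - sd_recon) / sd_fbp


# Illustrative synthetic ROIs: the DLR reconstruction has lower noise than FBP.
rng = np.random.default_rng(0)
fbp_roi = rng.normal(40.0, 6.0, size=1000)   # mean 40 HU, SD ~6 HU
dlr_roi = rng.normal(40.0, 3.5, size=1000)   # mean 40 HU, SD ~3.5 HU
print(f"Noise reduction ratio ~ {noise_reduction_ratio(fbp_roi, dlr_roi):.1f}%")
```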

Results: The noise reduction ratio increased with increasing levels of DLR and HIR in phantom and patient imaging. For DLR, noise reduction was more pronounced with decreasing slice thickness, while such thickness dependence was less evident for HIR. Although the noise reduction effects of DLR were similar between the head phantom and patients, they differed for the dosimetry phantom. Variations between imaging objects were small for HIR. The noise reduction ratio was low at low tube currents for the dosimetry phantom using DLR; otherwise, the influence of the tube current was small. In terms of visual image quality, DLR outperformed HIR in 1.25 mm thick images but not in thicker images.

Conclusions: The degree of noise reduction using DLR depends on the slice thickness, tube current, and imaging object in addition to the level of DLR, which should be considered in the clinical use of DLR. DLR may be particularly beneficial for thin-slice imaging.

Citations: 0
BAE-ViT: An Efficient Multimodal Vision Transformer for Bone Age Estimation.
IF 2.2 | Tier 4, Medicine | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-13 | DOI: 10.3390/tomography10120146
Jinnian Zhang, Weijie Chen, Tanmayee Joshi, Xiaomin Zhang, Po-Ling Loh, Varun Jog, Richard J Bruce, John W Garrett, Alan B McMillan

This research introduces BAE-ViT, a specialized vision transformer model developed for bone age estimation (BAE). This model is designed to efficiently merge image and sex data, a capability not present in traditional convolutional neural networks (CNNs). BAE-ViT employs a novel data fusion method to facilitate detailed interactions between visual and non-visual data by tokenizing non-visual information and concatenating all tokens (visual or non-visual) as the input to the model. The model underwent training on a large-scale dataset from the 2017 RSNA Pediatric Bone Age Machine Learning Challenge, where it exhibited commendable performance, particularly excelling in handling image distortions compared to existing models. The effectiveness of BAE-ViT was further affirmed through statistical analysis, demonstrating a strong correlation with the actual ground-truth labels. This study contributes to the field by showcasing the potential of vision transformers as a viable option for integrating multimodal data in medical imaging applications, specifically emphasizing their capacity to incorporate non-visual elements like sex information into the framework. This tokenization method not only demonstrates superior performance in this specific task but also offers a versatile framework for integrating multimodal data in medical imaging applications.
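
A minimal PyTorch-style sketch of the tokenization idea described above follows: the sex variable is embedded as one extra token and concatenated with the image patch tokens before a transformer encoder. The layer sizes, patch handling, and pooling are illustrative assumptions, not the published BAE-ViT architecture.

```python
import torch
import torch.nn as nn


class MultimodalTokenFusion(nn.Module):
    """Concatenate a learned 'sex token' with image patch tokens (sketch)."""

    def __init__(self, embed_dim: int = 192, patch_pixels: int = 16 * 16):
        super().__init__()
        self.patch_proj = nn.Linear(patch_pixels, embed_dim)   # flattened grayscale patches
        self.sex_embed = nn.Embedding(2, embed_dim)             # 0 = female, 1 = male
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(embed_dim, 1)                     # regress bone age

    def forward(self, patches: torch.Tensor, sex: torch.Tensor) -> torch.Tensor:
        # patches: (B, N, patch_pixels); sex: (B,) integer labels
        visual_tokens = self.patch_proj(patches)                # (B, N, D)
        sex_token = self.sex_embed(sex).unsqueeze(1)            # (B, 1, D)
        tokens = torch.cat([sex_token, visual_tokens], dim=1)   # (B, N + 1, D)
        encoded = self.encoder(tokens)
        return self.head(encoded.mean(dim=1))                   # pool all tokens, predict age


model = MultimodalTokenFusion()
out = model(torch.randn(2, 196, 16 * 16), torch.tensor([0, 1]))
print(out.shape)  # torch.Size([2, 1])
```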

Citations: 0
CNN-Based Cross-Modality Fusion for Enhanced Breast Cancer Detection Using Mammography and Ultrasound.
IF 2.2 | Tier 4, Medicine | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-12 | DOI: 10.3390/tomography10120145
Yi-Ming Wang, Chi-Yuan Wang, Kuo-Ying Liu, Yung-Hui Huang, Tai-Been Chen, Kon-Ning Chiu, Chih-Yu Liang, Nan-Han Lu

Background/Objectives: Breast cancer is a leading cause of mortality among women in Taiwan and globally. Non-invasive imaging methods, such as mammography and ultrasound, are critical for early detection, yet standalone modalities have limitations in regard to their diagnostic accuracy. This study aims to enhance breast cancer detection through a cross-modality fusion approach combining mammography and ultrasound imaging, using advanced convolutional neural network (CNN) architectures. Materials and Methods: Breast images were sourced from public datasets, including the RSNA, the PAS, and Kaggle, and categorized into malignant and benign groups. Data augmentation techniques were used to address imbalances in the ultrasound dataset. Three models were developed: (1) pre-trained CNNs integrated with machine learning classifiers, (2) transfer learning-based CNNs, and (3) a custom-designed 17-layer CNN for direct classification. The performance of the models was evaluated using metrics such as accuracy and the Kappa score. Results: The custom 17-layer CNN outperformed the other models, achieving an accuracy of 0.964 and a Kappa score of 0.927. The transfer learning model achieved moderate performance (accuracy 0.846, Kappa 0.694), while the pre-trained CNNs with machine learning classifiers yielded the lowest results (accuracy 0.780, Kappa 0.559). Cross-modality fusion proved effective in leveraging the complementary strengths of mammography and ultrasound imaging. Conclusions: This study demonstrates the potential of cross-modality imaging and tailored CNN architectures to significantly improve diagnostic accuracy and reliability in breast cancer detection. The custom-designed model offers a practical solution for early detection, potentially reducing false positives and false negatives, and improving patient outcomes through timely and accurate diagnosis.
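
The abstract does not describe how the two modalities are combined inside the networks; the sketch below shows one common feature-level fusion pattern, with a small convolutional branch per modality whose pooled features are concatenated before classification. This is an assumed illustration, not the paper's 17-layer architecture.

```python
import torch
import torch.nn as nn


class TwoBranchFusionCNN(nn.Module):
    """Feature-level fusion of a mammography branch and an ultrasound branch (sketch)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()

        def branch() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        self.mammo_branch = branch()
        self.us_branch = branch()
        self.classifier = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, num_classes))

    def forward(self, mammo: torch.Tensor, ultrasound: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.mammo_branch(mammo), self.us_branch(ultrasound)], dim=1)
        return self.classifier(feats)


model = TwoBranchFusionCNN()
logits = model(torch.randn(4, 1, 224, 224), torch.randn(4, 1, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```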

Citations: 0
Neural Modulation Alteration to Positive and Negative Emotions in Depressed Patients: Insights from fMRI Using Positive/Negative Emotion Atlas.
IF 2.2 | Tier 4, Medicine | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-09 | DOI: 10.3390/tomography10120144
Yu Feng, Weiming Zeng, Yifan Xie, Hongyu Chen, Lei Wang, Yingying Wang, Hongjie Yan, Kaile Zhang, Ran Tao, Wai Ting Siok, Nizhuan Wang

Background: Although it has been noticed that depressed patients show differences in processing emotions, the precise neural modulation mechanisms of positive and negative emotions remain elusive. FMRI is a cutting-edge medical imaging technology renowned for its high spatial resolution and dynamic temporal information, making it particularly suitable for the neural dynamics of depression research.

Methods: To address this gap, our study firstly leveraged fMRI to delineate activated regions associated with positive and negative emotions in healthy individuals, resulting in the creation of the positive emotion atlas (PEA) and the negative emotion atlas (NEA). Subsequently, we examined neuroimaging changes in depression patients using these atlases and evaluated their diagnostic performance based on machine learning.
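
The abstract reports classification accuracy based on the atlases without naming the classifier; the sketch below shows one straightforward pipeline, averaging each subject's ALFF map within every atlas cluster and cross-validating a linear SVM on those features. The classifier choice, array shapes, and synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def atlas_features(alff_map: np.ndarray, atlas: np.ndarray) -> np.ndarray:
    """Mean ALFF within each labeled cluster of the emotion atlas (label 0 = background)."""
    labels = np.unique(atlas)
    labels = labels[labels != 0]
    return np.array([alff_map[atlas == lab].mean() for lab in labels])


# Illustrative synthetic data: 40 subjects and a toy 15-cluster atlas on a small grid.
rng = np.random.default_rng(42)
atlas = rng.integers(0, 16, size=(20, 20, 20))             # cluster labels 0..15
subjects = [rng.normal(size=(20, 20, 20)) for _ in range(40)]
X = np.stack([atlas_features(s, atlas) for s in subjects])
y = np.array([0] * 20 + [1] * 20)                           # 0 = control, 1 = depressed (toy labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(f"Cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```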

Results: Our findings demonstrate that the classification accuracy of depressed patients based on PEA and NEA exceeded 0.70, a notable improvement compared to the whole-brain atlases. Furthermore, ALFF analysis unveiled significant differences between depressed patients and healthy controls in eight functional clusters during the NEA, focusing on the left cuneus, cingulate gyrus, and superior parietal lobule. In contrast, the PEA revealed more pronounced differences across fifteen clusters, involving the right fusiform gyrus, parahippocampal gyrus, and inferior parietal lobule.

Conclusions: These findings emphasize the complex interplay between emotion modulation and depression, showcasing significant alterations in both PEA and NEA among depression patients. This research enhances our understanding of emotion modulation in depression, with implications for diagnosis and treatment evaluation.

Citations: 0
Pediatric Meningeal Diseases: What Radiologists Need to Know.
IF 2.2 | Tier 4, Medicine | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-08 | DOI: 10.3390/tomography10120143
Dhrumil Deveshkumar Patel, Laura Z Fenton, Swastika Lamture, Vinay Kandula

Evaluating altered mental status and suspected meningeal disorders in children often begins with imaging, typically before a lumbar puncture. The challenge is that meningeal enhancement is a common finding across a range of pathologies, making diagnosis complex. This review proposes a categorization of meningeal diseases based on their predominant imaging characteristics. It includes a detailed description of the clinical and imaging features of various conditions that lead to leptomeningeal or pachymeningeal enhancement in children and adolescents. These conditions encompass infectious meningitis (viral, bacterial, tuberculous, algal, and fungal), autoimmune diseases (such as anti-MOG demyelination, neurosarcoidosis, Guillain-Barré syndrome, idiopathic hypertrophic pachymeningitis, and NMDA-related encephalitis), primary and secondary tumors (including diffuse glioneuronal tumor of childhood, primary CNS rhabdomyosarcoma, primary CNS tumoral metastasis, extracranial tumor metastasis, and lymphoma), tumor-like diseases (Langerhans cell histiocytosis and ALK-positive histiocytosis), vascular causes (such as pial angiomatosis, ANCA-related vasculitis, and Moyamoya disease), and other disorders like spontaneous intracranial hypotension and posterior reversible encephalopathy syndrome. Despite the nonspecific nature of imaging findings associated with meningeal lesions, narrowing down the differential diagnoses is crucial, as each condition requires a tailored and specific treatment approach.

Citations: 0
A Novel Method for the Generation of Realistic Lung Nodules Visualized Under X-Ray Imaging.
IF 2.2 | Tier 4, Medicine | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-05 | DOI: 10.3390/tomography10120142
Ahmet Peker, Ayushi Sinha, Robert M King, Jeffrey Minnaard, William van der Sterren, Torre Bydlon, Alexander A Bankier, Matthew J Gounis

Objective: Image-guided diagnosis and treatment of lung lesions is an active area of research. With the growing number of solutions proposed, there is also a growing need to establish a standard for the evaluation of these solutions. Thus, realistic phantom and preclinical environments must be established. Realistic study environments must include implanted lung nodules that are morphologically similar to real lung lesions under X-ray imaging.

Methods: Various materials were injected into a phantom swine lung to evaluate the similarity to real lung lesions in size, location, density, and grayscale intensities in X-ray imaging. A combination of n-butyl cyanoacrylate (n-BCA) and ethiodized oil displayed radiopacity that was most similar to real lung lesions, and various injection techniques were evaluated to ensure easy implantation and to generate features mimicking malignant lesions.

Results: The techniques used generated implanted nodules with properties mimicking solid nodules with features including pleural extensions and spiculations, which are typically present in malignant lesions. Using only n-BCA, implanted nodules mimicking ground glass opacity were also generated. These results are condensed into a set of recommendations that prescribe the materials and techniques that should be used to reproduce these nodules.

Conclusions: Generated recommendations on the use of n-BCA and ethiodized oil can help establish a standard for the evaluation of new image-guided solutions and refinement of algorithms in phantom and animal studies with realistic nodules.

Citations: 0