Purpose: Our study investigates the potential benefits of incorporating prior anatomical knowledge into a deep learning (DL) method designed for the automated segmentation of lung lobes in chest CT scans.
Approach: We introduce an automated DL-based approach that leverages anatomical information from the lung's vascular system to guide and enhance the segmentation process. This involves utilizing a lung vessel connectivity (LVC) map, which encodes relevant lung vessel anatomical data. Our study explores the performance of three different neural network architectures within the nnU-Net framework: a standalone U-Net, a multitasking U-Net, and a cascade U-Net.
Results: Experimental findings suggest that the inclusion of LVC information in the DL model can lead to improved segmentation accuracy, particularly in the challenging boundary regions of expiration chest CT volumes. Furthermore, our study demonstrates the potential for LVC to enhance the model's generalization capabilities. Finally, the method's robustness is evaluated through the segmentation of lung lobes in 10 cases of COVID-19, demonstrating its applicability in the presence of pulmonary diseases.
Conclusions: Incorporating prior anatomical information, such as LVC, into the DL model shows promise for enhancing segmentation performance, particularly in the boundary regions. However, the extent of this improvement has limitations, prompting further exploration of its practical applicability.
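One common way to inject an anatomical prior such as the LVC map into a segmentation network is to stack it with the CT volume as an additional input channel. The sketch below illustrates that preprocessing step only; the two-channel layout and z-score normalization are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def stack_ct_with_lvc(ct_volume, lvc_map):
    """Stack a CT volume and its LVC prior map along a new channel axis.

    Hypothetical preprocessing: z-score normalize the CT intensities and
    append the LVC map as a second channel, as a cascade- or multi-channel
    U-Net might consume it.
    """
    ct = (ct_volume - ct_volume.mean()) / (ct_volume.std() + 1e-8)
    return np.stack([ct, lvc_map.astype(np.float32)], axis=0)

# Toy example: a 4x4x4 CT patch and a binary LVC map.
ct = np.random.randn(4, 4, 4).astype(np.float32)
lvc = np.random.rand(4, 4, 4) > 0.5
x = stack_ct_with_lvc(ct, lvc)
print(x.shape)  # (2, 4, 4, 4): channels-first volume for the network
```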
{"title":"Lung vessel connectivity map as anatomical prior knowledge for deep learning-based lung lobe segmentation.","authors":"Simone Bendazzoli, Emelie Bäcklin, Örjan Smedby, Birgitta Janerot-Sjoberg, Bryan Connolly, Chunliang Wang","doi":"10.1117/1.JMI.11.4.044001","DOIUrl":"10.1117/1.JMI.11.4.044001","url":null,"abstract":"<p><strong>Purpose: </strong>Our study investigates the potential benefits of incorporating prior anatomical knowledge into a deep learning (DL) method designed for the automated segmentation of lung lobes in chest CT scans.</p><p><strong>Approach: </strong>We introduce an automated DL-based approach that leverages anatomical information from the lung's vascular system to guide and enhance the segmentation process. This involves utilizing a lung vessel connectivity (LVC) map, which encodes relevant lung vessel anatomical data. Our study explores the performance of three different neural network architectures within the nnU-Net framework: a standalone U-Net, a multitasking U-Net, and a cascade U-Net.</p><p><strong>Results: </strong>Experimental findings suggest that the inclusion of LVC information in the DL model can lead to improved segmentation accuracy, particularly, in the challenging boundary regions of expiration chest CT volumes. Furthermore, our study demonstrates the potential for LVC to enhance the model's generalization capabilities. Finally, the method's robustness is evaluated through the segmentation of lung lobes in 10 cases of COVID-19, demonstrating its applicability in the presence of pulmonary diseases.</p><p><strong>Conclusions: </strong>Incorporating prior anatomical information, such as LVC, into the DL model shows promise for enhancing segmentation performance, particularly in the boundary regions. 
However, the extent of this improvement has limitations, prompting further exploration of its practical applicability.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"044001"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11231955/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141581239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-01. Epub Date: 2024-08-06. DOI: 10.1117/1.JMI.11.4.044505.
Heather M Whitney, Roni Yoeli-Bik, Jacques S Abramowicz, Li Lan, Hui Li, Ryan E Longman, Ernst Lengyel, Maryellen L Giger
Purpose: Segmentation of ovarian/adnexal masses from surrounding tissue on ultrasound images is a challenging task. The separation of masses into different components may also be important for radiomic feature extraction. Our study aimed to develop an artificial intelligence-based automatic segmentation method for transvaginal ultrasound images that (1) outlines the exterior boundary of adnexal masses and (2) separates internal components.
Approach: A retrospective ultrasound imaging database of adnexal masses was reviewed for exclusion criteria at the patient, mass, and image levels, with one image per mass. The resulting 54 adnexal masses (36 benign/18 malignant) from 53 patients were separated by patient into training (26 benign/12 malignant) and independent test (10 benign/6 malignant) sets. U-net segmentation performance on test images compared to expert detailed outlines was measured using the Dice similarity coefficient (DSC) and the ratio of the Hausdorff distance to the effective diameter of the outline (R_HD-D) for each mass. Subsequently, in discovery mode, a two-level fuzzy c-means (FCM) unsupervised clustering approach was used to separate the pixels within masses belonging to hypoechoic or hyperechoic components.
Results: The DSC (median [95% confidence interval]) was 0.91 [0.78, 0.96], and R_HD-D was 0.04 [0.01, 0.12], indicating strong agreement with expert outlines. Clinical review of the internal separation of masses into echogenic components demonstrated a strong association with mass characteristics.
Conclusion: A combined U-net and FCM algorithm for automatic segmentation of adnexal masses and their internal components achieved excellent results compared with expert outlines and review, supporting future radiomic feature-based classification of the masses by components.
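The two boundary metrics reported above can be computed directly from binary masks and outline point sets. A minimal sketch follows; the "effective diameter" is taken here as the diameter of an area-equivalent circle, which is an assumption, since the paper's exact definition is not reproduced in this listing.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two (N, 2) point sets."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def r_hd_d(pts_a, pts_b, mask):
    """Hausdorff distance normalized by an effective outline diameter.

    Assumption: effective diameter = diameter of a circle with the same
    area as the reference mask, d_eff = 2 * sqrt(area / pi).
    """
    d_eff = 2.0 * np.sqrt(mask.sum() / np.pi)
    return hausdorff(pts_a, pts_b) / d_eff
```

Identical outlines give DSC = 1 and R_HD-D = 0; a one-pixel shift of every outline point gives a Hausdorff distance of exactly one pixel.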
{"title":"AI-based automated segmentation for ovarian/adnexal masses and their internal components on ultrasound imaging.","authors":"Heather M Whitney, Roni Yoeli-Bik, Jacques S Abramowicz, Li Lan, Hui Li, Ryan E Longman, Ernst Lengyel, Maryellen L Giger","doi":"10.1117/1.JMI.11.4.044505","DOIUrl":"10.1117/1.JMI.11.4.044505","url":null,"abstract":"<p><strong>Purpose: </strong>Segmentation of ovarian/adnexal masses from surrounding tissue on ultrasound images is a challenging task. The separation of masses into different components may also be important for radiomic feature extraction. Our study aimed to develop an artificial intelligence-based automatic segmentation method for transvaginal ultrasound images that (1) outlines the exterior boundary of adnexal masses and (2) separates internal components.</p><p><strong>Approach: </strong>A retrospective ultrasound imaging database of adnexal masses was reviewed for exclusion criteria at the patient, mass, and image levels, with one image per mass. The resulting 54 adnexal masses (36 benign/18 malignant) from 53 patients were separated by patient into training (26 benign/12 malignant) and independent test (10 benign/6 malignant) sets. U-net segmentation performance on test images compared to expert detailed outlines was measured using the Dice similarity coefficient (DSC) and the ratio of the Hausdorff distance to the effective diameter of the outline ( <math> <mrow> <msub><mrow><mi>R</mi></mrow> <mrow><mi>HD</mi> <mtext>-</mtext> <mi>D</mi></mrow> </msub> </mrow> </math> ) for each mass. 
Subsequently, in discovery mode, a two-level fuzzy c-means (FCM) unsupervised clustering approach was used to separate the pixels within masses belonging to hypoechoic or hyperechoic components.</p><p><strong>Results: </strong>The DSC (median [95% confidence interval]) was 0.91 [0.78, 0.96], and <math> <mrow> <msub><mrow><mi>R</mi></mrow> <mrow><mi>HD</mi> <mtext>-</mtext> <mi>D</mi></mrow> </msub> </mrow> </math> was 0.04 [0.01, 0.12], indicating strong agreement with expert outlines. Clinical review of the internal separation of masses into echogenic components demonstrated a strong association with mass characteristics.</p><p><strong>Conclusion: </strong>A combined U-net and FCM algorithm for automatic segmentation of adnexal masses and their internal components achieved excellent results compared with expert outlines and review, supporting future radiomic feature-based classification of the masses by components.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"044505"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11301525/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141903209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-01. Epub Date: 2024-08-24. DOI: 10.1117/1.JMI.11.4.044008.
Chenyu Gao, Shunxing Bao, Michael E Kim, Nancy R Newlin, Praitayini Kanakaraj, Tianyuan Yao, Gaurav Rudravaram, Yuankai Huo, Daniel Moyer, Kurt Schilling, Walter A Kukull, Arthur W Toga, Derek B Archer, Timothy J Hohman, Bennett A Landman, Zhiyuan Li
Purpose: In brain diffusion magnetic resonance imaging (dMRI), the volumetric and bundle analyses of whole-brain tissue microstructure and connectivity can be severely impeded by an incomplete field of view (FOV). We aim to develop a method for imputing the missing slices directly from existing dMRI scans with an incomplete FOV. We hypothesize that the imputed image with a complete FOV can improve whole-brain tractography for corrupted data with an incomplete FOV. Therefore, our approach provides a desirable alternative to discarding the valuable brain dMRI data, enabling subsequent tractography analyses that would otherwise be challenging or unattainable with corrupted data.
Approach: We propose a framework based on a deep generative model that estimates the absent brain regions in dMRI scans with an incomplete FOV. The model is capable of learning both the diffusion characteristics in diffusion-weighted images (DWIs) and the anatomical features evident in the corresponding structural images for efficiently imputing missing slices of DWIs in the incomplete part of the FOV.
Results: For the imputed slices, the proposed framework achieved PSNR_b0 = 22.397, SSIM_b0 = 0.905, PSNR_b1300 = 22.479, and SSIM_b1300 = 0.893 on the Wisconsin Registry for Alzheimer's Prevention (WRAP) dataset, and PSNR_b0 = 21.304, SSIM_b0 = 0.892, PSNR_b1300 = 21.599, and SSIM_b1300 = 0.877 on the National Alzheimer's Coordinating Center (NACC) dataset. The framework also improved tractography accuracy, as demonstrated by an increased average Dice score for 72 tracts (p < 0.001) on both the WRAP and NACC datasets.
Conclusions: Results suggest that the proposed framework achieved sufficient imputation performance in brain dMRI data with an incomplete FOV for improving whole-brain tractography, thereby repairing the corrupted data. Our approach achieved more accurate whole-brain tractography results with an extended and complete FOV and reduced the uncertainty when analyzing bundles associated with Alzheimer's disease.
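The PSNR figures reported for the imputed slices follow the standard decibel definition. A small sketch of that metric, assuming the conventional formulation over the reference image's dynamic range (the paper may normalize differently):

```python
import numpy as np

def psnr(reference, imputed, data_range=None):
    """Peak signal-to-noise ratio (dB) between a reference slice and an
    imputed slice: PSNR = 10 * log10(data_range**2 / MSE).

    If data_range is not given, the reference's dynamic range is used.
    """
    ref = reference.astype(np.float64)
    imp = imputed.astype(np.float64)
    mse = np.mean((ref - imp) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    if data_range is None:
        data_range = ref.max() - ref.min()
    return 10.0 * np.log10((data_range ** 2) / mse)

# A uniform error of 0.1 on a unit-range image gives MSE = 0.01, i.e. 20 dB.
ref = np.array([[0.0, 1.0], [0.0, 1.0]])
print(psnr(ref, ref + 0.1))  # ~20.0
```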
{"title":"Field-of-view extension for brain diffusion MRI via deep generative models.","authors":"Chenyu Gao, Shunxing Bao, Michael E Kim, Nancy R Newlin, Praitayini Kanakaraj, Tianyuan Yao, Gaurav Rudravaram, Yuankai Huo, Daniel Moyer, Kurt Schilling, Walter A Kukull, Arthur W Toga, Derek B Archer, Timothy J Hohman, Bennett A Landman, Zhiyuan Li","doi":"10.1117/1.JMI.11.4.044008","DOIUrl":"10.1117/1.JMI.11.4.044008","url":null,"abstract":"<p><strong>Purpose: </strong>In brain diffusion magnetic resonance imaging (dMRI), the volumetric and bundle analyses of whole-brain tissue microstructure and connectivity can be severely impeded by an incomplete field of view (FOV). We aim to develop a method for imputing the missing slices directly from existing dMRI scans with an incomplete FOV. We hypothesize that the imputed image with a complete FOV can improve whole-brain tractography for corrupted data with an incomplete FOV. Therefore, our approach provides a desirable alternative to discarding the valuable brain dMRI data, enabling subsequent tractography analyses that would otherwise be challenging or unattainable with corrupted data.</p><p><strong>Approach: </strong>We propose a framework based on a deep generative model that estimates the absent brain regions in dMRI scans with an incomplete FOV. 
The model is capable of learning both the diffusion characteristics in diffusion-weighted images (DWIs) and the anatomical features evident in the corresponding structural images for efficiently imputing missing slices of DWIs in the incomplete part of the FOV.</p><p><strong>Results: </strong>For evaluating the imputed slices, on the Wisconsin Registry for Alzheimer's Prevention (WRAP) dataset, the proposed framework achieved <math> <mrow><msub><mi>PSNR</mi> <mrow><mi>b</mi> <mn>0</mn></mrow> </msub> <mo>=</mo> <mn>22.397</mn></mrow> </math> , <math> <mrow><msub><mi>SSIM</mi> <mrow><mi>b</mi> <mn>0</mn></mrow> </msub> <mo>=</mo> <mn>0.905</mn></mrow> </math> , <math> <mrow> <msub><mrow><mi>PSNR</mi></mrow> <mrow><mi>b</mi> <mn>1300</mn></mrow> </msub> <mo>=</mo> <mn>22.479</mn></mrow> </math> , and <math> <mrow><msub><mi>SSIM</mi> <mrow><mi>b</mi> <mn>1300</mn></mrow> </msub> <mo>=</mo> <mn>0.893</mn></mrow> </math> ; on the National Alzheimer's Coordinating Center (NACC) dataset, it achieved <math> <mrow><msub><mi>PSNR</mi> <mrow><mi>b</mi> <mn>0</mn></mrow> </msub> <mo>=</mo> <mn>21.304</mn></mrow> </math> , <math> <mrow><msub><mi>SSIM</mi> <mrow><mi>b</mi> <mn>0</mn></mrow> </msub> <mo>=</mo> <mn>0.892</mn></mrow> </math> , <math> <mrow><msub><mi>PSNR</mi> <mrow><mi>b</mi> <mn>1300</mn></mrow> </msub> <mo>=</mo> <mn>21.599</mn></mrow> </math> , and <math> <mrow><msub><mi>SSIM</mi> <mrow><mi>b</mi> <mn>1300</mn></mrow> </msub> <mo>=</mo> <mn>0.877</mn></mrow> </math> . The proposed framework improved the tractography accuracy, as demonstrated by an increased average Dice score for 72 tracts ( <math><mrow><mi>p</mi> <mo><</mo> <mn>0.001</mn></mrow> </math> ) on both the WRAP and NACC datasets.</p><p><strong>Conclusions: </strong>Results suggest that the proposed framework achieved sufficient imputation performance in brain dMRI data with an incomplete FOV for improving whole-brain tractography, thereby repairing the corrupted data. 
Our approach achieved more accurate whole-brain tractography results with an extended and complete FOV and reduced the uncertainty when analyzing bundles associa","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"044008"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11344266/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142056922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Purpose: Endometrial cancer (EC) is one of the most common types of cancer affecting women. While the hematoxylin-and-eosin (H&E) staining remains the standard for histological analysis, the immunohistochemistry (IHC) method provides molecular-level visualizations. Our study proposes a digital staining method to generate the hematoxylin-3,3'-diaminobenzidine (H-DAB) IHC stain of Ki-67 for the whole slide image of the EC tumor from its H&E stain counterpart.
Approach: We employed a color unmixing technique to yield stain density maps from the optical density (OD) of the stains and utilized the U-Net for end-to-end inference. The effectiveness of the proposed method was evaluated using the Pearson correlation between the digital and physical stain's labeling index (LI), a key metric indicating tumor proliferation. Two different cross-validation schemes were designed in our study: intraslide validation and cross-case validation (CCV). In the widely used intraslide scheme, the training and validation sets might include different regions from the same slide. The rigorous CCV validation scheme strictly prohibited any validation slide from contributing to training.
Results: The proposed method yielded a high-resolution digital stain with preserved histological features, indicating a reliable correlation with the physical stain in terms of the Ki-67 LI. In the intraslide scheme, using intraslide patches resulted in a biased accuracy (e.g., ) significantly higher than that of CCV. The CCV scheme retained a fair correlation (e.g., ) between the LIs calculated from the digital stain and its physical IHC counterpart. Inferring the OD of the IHC stain from that of the H&E stain enhanced the correlation metric, outperforming that of the baseline model using the RGB space.
Conclusions: Our study revealed that molecule-level insights could be obtained from H&E images using deep learning. Furthermore, the improvement brought via OD inference indicated a possible method for creating more generalizable models for digital staining via per-stain analysis.
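The OD inference step rests on the Beer-Lambert relation, OD = -log10(I / I0), and on least-squares color unmixing against a matrix of stain OD vectors. A sketch under those standard assumptions; the H and DAB vectors used below are illustrative textbook values, not the authors' calibration:

```python
import numpy as np

def rgb_to_od(rgb, background=255.0):
    """Convert RGB intensities to optical density (Beer-Lambert)."""
    rgb = np.maximum(rgb.astype(np.float64), 1.0)  # guard against log(0)
    return -np.log10(rgb / background)

def unmix(od_pixels, stain_matrix):
    """Least-squares unmixing: recover per-stain densities from OD pixels
    given a (3, n_stains) matrix whose columns are stain OD vectors."""
    return np.linalg.pinv(stain_matrix) @ od_pixels

# Illustrative hematoxylin and DAB OD vectors (columns), not the paper's.
M = np.array([[0.65, 0.27],
              [0.70, 0.57],
              [0.29, 0.78]])

# Synthesize an OD pixel from known densities, then recover them.
densities = np.array([[1.0], [0.5]])     # H = 1.0, DAB = 0.5
od_pixel = M @ densities
print(unmix(od_pixel, M))                # recovers [[1.0], [0.5]]
```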
{"title":"Transformation from hematoxylin-and-eosin staining to Ki-67 immunohistochemistry digital staining images using deep learning: experimental validation on the labeling index.","authors":"Cunyuan Ji, Kengo Oshima, Takumi Urata, Fumikazu Kimura, Keiko Ishii, Takeshi Uehara, Kenji Suzuki, Saori Takeyama, Masahiro Yamaguchi","doi":"10.1117/1.JMI.11.4.047501","DOIUrl":"10.1117/1.JMI.11.4.047501","url":null,"abstract":"<p><strong>Purpose: </strong>Endometrial cancer (EC) is one of the most common types of cancer affecting women. While the hematoxylin-and-eosin (H&E) staining remains the standard for histological analysis, the immunohistochemistry (IHC) method provides molecular-level visualizations. Our study proposes a digital staining method to generate the hematoxylin-3,3'-diaminobenzidine (H-DAB) IHC stain of Ki-67 for the whole slide image of the EC tumor from its H&E stain counterpart.</p><p><strong>Approach: </strong>We employed a color unmixing technique to yield stain density maps from the optical density (OD) of the stains and utilized the U-Net for end-to-end inference. The effectiveness of the proposed method was evaluated using the Pearson correlation between the digital and physical stain's labeling index (LI), a key metric indicating tumor proliferation. Two different cross-validation schemes were designed in our study: intraslide validation and cross-case validation (CCV). In the widely used intraslide scheme, the training and validation sets might include different regions from the same slide. The rigorous CCV validation scheme strictly prohibited any validation slide from contributing to training.</p><p><strong>Results: </strong>The proposed method yielded a high-resolution digital stain with preserved histological features, indicating a reliable correlation with the physical stain in terms of the Ki-67 LI. 
In the intraslide scheme, using intraslide patches resulted in a biased accuracy (e.g., <math><mrow><mi>R</mi> <mo>=</mo> <mn>0.98</mn></mrow> </math> ) significantly higher than that of CCV. The CCV scheme retained a fair correlation (e.g., <math><mrow><mi>R</mi> <mo>=</mo> <mn>0.66</mn></mrow> </math> ) between the LIs calculated from the digital stain and its physical IHC counterpart. Inferring the OD of the IHC stain from that of the H&E stain enhanced the correlation metric, outperforming that of the baseline model using the RGB space.</p><p><strong>Conclusions: </strong>Our study revealed that molecule-level insights could be obtained from H&E images using deep learning. Furthermore, the improvement brought via OD inference indicated a possible method for creating more generalizable models for digital staining via per-stain analysis.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"047501"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11287056/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141861255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-01. Epub Date: 2024-08-09. DOI: 10.1117/1.JMI.11.4.044508.
Sina Walluscheck, Annika Gerken, Ivana Galinovic, Kersten Villringer, Jochen B Fiebach, Jan Klein, Stefan Heldmann
Purpose: To help radiologists examine the growing number of computed tomography (CT) scans, automatic anomaly detection is an ongoing focus of medical imaging research. Radiologists must analyze a CT scan by searching for any deviation from normal healthy anatomy. We propose an approach to detecting abnormalities in axial 2D CT slice images of the brain. Although much research has been done on detecting abnormalities in magnetic resonance images of the brain, there is little work on CT scans, where abnormalities are harder to detect because of the low image contrast that the model must represent.
Approach: We use a generative adversarial network (GAN) to learn normal brain anatomy in the first step and compare two approaches to image reconstruction: training an encoder in the second step and using iterative optimization during inference. Then, we analyze the differences from the original scan to detect and localize anomalies in the brain.
Results: Our approach can reconstruct healthy anatomy with good image contrast for brain CT scans. We obtain median Dice scores of 0.71 on our hemorrhage test data and 0.43 on our test set with additional tumor images from publicly available data sources. We also compare our models to a state-of-the-art autoencoder and a diffusion model and obtain qualitatively more accurate reconstructions.
Conclusions: Without defining anomalies during training, a GAN-based network was used to learn healthy anatomy for brain CT scans. Notably, our approach is not limited to the localization of hemorrhages and tumors and could thus be used to detect structural anatomical changes and other lesions.
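Once a generative model can reconstruct healthy anatomy, anomalies are typically localized from the residual between the original slice and its reconstruction. A minimal sketch of that final step; the fixed threshold is illustrative and would in practice be tuned or replaced by a statistical criterion:

```python
import numpy as np

def anomaly_map(original, reconstruction, threshold=0.1):
    """Localize anomalies as large residuals against a healthy
    reconstruction: returns the absolute residual image and a binary
    anomaly mask (residual > threshold)."""
    residual = np.abs(original.astype(np.float64)
                      - reconstruction.astype(np.float64))
    return residual, residual > threshold

# Toy example: a bright 2x2 "lesion" absent from the reconstruction.
orig = np.zeros((8, 8)); orig[3:5, 3:5] = 1.0
recon = np.zeros((8, 8))
residual, mask = anomaly_map(orig, recon, threshold=0.5)
print(mask.sum())  # 4 anomalous pixels
```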
{"title":"Generative adversarial network-based reconstruction of healthy anatomy for anomaly detection in brain CT scans.","authors":"Sina Walluscheck, Annika Gerken, Ivana Galinovic, Kersten Villringer, Jochen B Fiebach, Jan Klein, Stefan Heldmann","doi":"10.1117/1.JMI.11.4.044508","DOIUrl":"10.1117/1.JMI.11.4.044508","url":null,"abstract":"<p><strong>Purpose: </strong>To help radiologists examine the growing number of computed tomography (CT) scans, automatic anomaly detection is an ongoing focus of medical imaging research. Radiologists must analyze a CT scan by searching for any deviation from normal healthy anatomy. We propose an approach to detecting abnormalities in axial 2D CT slice images of the brain. Although much research has been done on detecting abnormalities in magnetic resonance images of the brain, there is little work on CT scans, where abnormalities are more difficult to detect due to the low image contrast that must be represented by the model used.</p><p><strong>Approach: </strong>We use a generative adversarial network (GAN) to learn normal brain anatomy in the first step and compare two approaches to image reconstruction: training an encoder in the second step and using iterative optimization during inference. Then, we analyze the differences from the original scan to detect and localize anomalies in the brain.</p><p><strong>Results: </strong>Our approach can reconstruct healthy anatomy with good image contrast for brain CT scans. We obtain median Dice scores of 0.71 on our hemorrhage test data and 0.43 on our test set with additional tumor images from publicly available data sources. We also compare our models to a state-of-the-art autoencoder and a diffusion model and obtain qualitatively more accurate reconstructions.</p><p><strong>Conclusions: </strong>Without defining anomalies during training, a GAN-based network was used to learn healthy anatomy for brain CT scans. 
Notably, our approach is not limited to the localization of hemorrhages and tumors and could thus be used to detect structural anatomical changes and other lesions.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"044508"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11315301/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141917780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-01. Epub Date: 2024-08-10. DOI: 10.1117/1.JMI.11.4.045001.
Vincenzia S Vargo, Megan R Routzong, Pamela A Moalli, Ghazaleh Rostaminia, Steven D Abramowitch
Purpose: The measures that traditionally describe the levator hiatus (LH) are straightforward and reliable; however, they were not specifically designed to capture significant differences. Statistical shape modeling (SSM) was used to quantify LH shape variation across reproductive-age women and identify novel variables associated with LH size and shape.
Approach: A retrospective study of pelvic MRIs from 19 nulliparous, 32 parous, and 12 pregnant women was performed. The LH was segmented in the plane of minimal LH dimensions. SSM was implemented. LH size was defined by the cross-sectional area, maximal transverse diameter, and anterior-posterior (A-P) diameter. Novel SSM-guided variables were defined by regions of greatest variation. Multivariate analysis of variance (MANOVA) evaluated group differences, and correlations determined relationships between size and shape variables.
Results: Overall shape (p < 0.001), SSM mode 2 (oval to T-shape, p = 0.002), mode 3 (rounder to broader anterior shape, p = 0.004), and maximal transverse diameter (p = 0.003) significantly differed between groups. Novel anterior and posterior transverse diameters were identified at 14% and 79% of the A-P length. Anterior transverse diameter and maximal transverse diameter were strongly correlated (r = 0.780, p < 0.001), while posterior transverse diameter and maximal transverse diameter were weakly correlated (r = 0.398, p = 0.001).
Conclusions: The traditional maximal transverse diameter generally corresponded with SSM findings but cannot describe anterior and posterior variation independently. The novel anterior and posterior transverse diameters represent both size and shape variation, can be easily calculated alongside traditional measures, and are more sensitive to subtle and local LH variation. Thus, they have a greater ability to serve as predictive and diagnostic parameters.
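SSM modes of variation such as modes 2 and 3 above are typically the principal components of aligned landmark coordinates (a point distribution model). A minimal sketch under that assumption, with Procrustes alignment omitted for brevity:

```python
import numpy as np

def shape_modes(landmarks, n_modes=3):
    """PCA on flattened landmark coordinates (n_shapes, n_points, 2).

    Returns the mean shape (flattened), the first n_modes modes of
    variation (unit eigenvectors), and the per-component variances.
    Assumes shapes are already aligned (no Procrustes step here).
    """
    X = landmarks.reshape(landmarks.shape[0], -1)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data matrix yields the principal directions.
    _, s, vt = np.linalg.svd(Xc, full_matrices=False)
    variances = (s ** 2) / max(len(X) - 1, 1)
    return mean, vt[:n_modes], variances

# Toy example: four quadrilateral outlines varying only in one landmark's x.
base = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
shapes = np.stack([base.copy() for _ in range(4)])
for i, t in enumerate([-2.0, -1.0, 1.0, 2.0]):
    shapes[i, 0, 0] += t
mean, modes, var = shape_modes(shapes, n_modes=2)
# The first mode points (up to sign) along that single varying coordinate.
```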
{"title":"Improving radiological quantification of levator hiatus features with measures informed by statistical shape modeling.","authors":"Vincenzia S Vargo, Megan R Routzong, Pamela A Moalli, Ghazaleh Rostaminia, Steven D Abramowitch","doi":"10.1117/1.JMI.11.4.045001","DOIUrl":"10.1117/1.JMI.11.4.045001","url":null,"abstract":"<p><strong>Purpose: </strong>The measures that traditionally describe the levator hiatus (LH) are straightforward and reliable; however, they were not specifically designed to capture significant differences. Statistical shape modeling (SSM) was used to quantify LH shape variation across reproductive-age women and identify novel variables associated with LH size and shape.</p><p><strong>Approach: </strong>A retrospective study of pelvic MRIs from 19 nulliparous, 32 parous, and 12 pregnant women was performed. The LH was segmented in the plane of minimal LH dimensions. SSM was implemented. LH size was defined by the cross-sectional area, maximal transverse diameter, and anterior-posterior (A-P) diameter. Novel SSM-guided variables were defined by regions of greatest variation. Multivariate analysis of variance (MANOVA) evaluated group differences, and correlations determined relationships between size and shape variables.</p><p><strong>Results: </strong>Overall shape ( <math><mrow><mi>p</mi> <mo><</mo> <mn>0.001</mn></mrow> </math> ), SSM mode 2 (oval to <math><mrow><mi>T</mi></mrow> </math> -shape, <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.002</mn></mrow> </math> ), mode 3 (rounder to broader anterior shape, <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.004</mn></mrow> </math> ), and maximal transverse diameter ( <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.003</mn></mrow> </math> ) significantly differed between groups. Novel anterior and posterior transverse diameters were identified at 14% and 79% of the A-P length. 
Anterior transverse diameter and maximal transverse diameter were strongly correlated ( <math><mrow><mi>r</mi> <mo>=</mo> <mn>0.780</mn></mrow> </math> , <math><mrow><mi>p</mi> <mo><</mo> <mn>0.001</mn></mrow> </math> ), while posterior transverse diameter and maximal transverse diameter were weakly correlated ( <math><mrow><mi>r</mi> <mo>=</mo> <mn>0.398</mn></mrow> </math> , <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.001</mn></mrow> </math> ).</p><p><strong>Conclusions: </strong>The traditional maximal transverse diameter generally corresponded with SSM findings but cannot describe anterior and posterior variation independently. The novel anterior and posterior transverse diameters represent both size and shape variation, can be easily calculated alongside traditional measures, and are more sensitive to subtle and local LH variation. Thus, they have a greater ability to serve as predictive and diagnostic parameters.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"045001"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11316399/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141917781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
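The novel measures above reduce to a simple geometric computation: the hiatus width measured along the row at a fixed fraction of the anterior-posterior length. A minimal numpy sketch on a hypothetical 2D binary segmentation (the function name and toy elliptical mask are illustrative, not from the paper's SSM pipeline):

```python
import numpy as np

def transverse_diameter_at(mask, frac):
    """Width (in pixels) of a binary mask, measured along the row located
    at fraction `frac` of the anterior-posterior (top-to-bottom) extent."""
    rows = np.flatnonzero(mask.any(axis=1))
    r = rows[0] + int(round(frac * (rows[-1] - rows[0])))
    cols = np.flatnonzero(mask[r])
    return int(cols[-1] - cols[0] + 1) if cols.size else 0

# Hypothetical example: an elliptical "hiatus" in a 20x20 grid
yy, xx = np.mgrid[0:20, 0:20]
mask = ((yy - 10) / 8.0) ** 2 + ((xx - 10) / 5.0) ** 2 <= 1.0

anterior = transverse_diameter_at(mask, 0.14)           # at 14% of A-P length
maximal = max(np.count_nonzero(row) for row in mask)    # traditional maximum
```

Because the measurement is just a row lookup at a fixed A-P fraction, it can be computed alongside the traditional maximal diameter with no extra segmentation effort.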
Pub Date : 2024-07-01Epub Date: 2024-07-09DOI: 10.1117/1.JMI.11.4.045501
Devi S Klein, Srijita Karmakar, Aditya Jonnalagadda, Craig K Abbey, Miguel P Eckstein
Purpose: Radiologists are tasked with visually scrutinizing large amounts of data produced by 3D volumetric imaging modalities. Small signals can go unnoticed during the 3D search because they are hard to detect in the visual periphery. Recent advances in machine learning and computer vision have led to effective computer-aided detection (CADe) support systems with the potential to mitigate perceptual errors.
Approach: Sixteen nonexpert observers searched through digital breast tomosynthesis (DBT) phantoms and single cross-sectional slices of the DBT phantoms. The 3D/2D searches occurred with and without a convolutional neural network (CNN)-based CADe support system. The model provided observers with bounding boxes superimposed on the image stimuli while they looked for a small microcalcification signal and a large mass signal. Eye gaze positions were recorded and correlated with changes in the area under the ROC curve (AUC).
Results: The CNN-CADe improved the 3D search for the small microcalcification signal (ΔAUC = 0.098, p = 0.0002) and the 2D search for the large mass signal (ΔAUC = 0.076, p = 0.002). The CNN-CADe benefit in 3D for the small signal was markedly greater than in 2D (ΔΔAUC = 0.066, p = 0.035). Analysis of individual differences suggests that those who explored the least with eye movements benefited the most from the CNN-CADe (r = -0.528, p = 0.036). However, for the large signal, the 2D benefit was not significantly greater than the 3D benefit (ΔΔAUC = 0.033, p = 0.133).
Conclusion: The CNN-CADe brings unique performance benefits to the 3D (versus 2D) search of small signals by reducing errors caused by the underexploration of the volumetric data.
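The per-condition figure of merit here is the AUC computed from observers' ratings. One way to see where a ΔAUC comes from: the AUC equals the Mann-Whitney probability that a signal-present rating outranks a signal-absent one, with ties counting one half. A sketch with made-up ratings (not the study's data):

```python
def auc(signal_scores, noise_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a signal-present rating exceeds a signal-absent one
    (ties count one half)."""
    wins = 0.0
    for s in signal_scores:
        for n in noise_scores:
            wins += 1.0 if s > n else (0.5 if s == n else 0.0)
    return wins / (len(signal_scores) * len(noise_scores))

# Hypothetical 5-point ratings from one observer, without and with CADe
present_no_aid, absent_no_aid = [4, 5, 3, 5], [2, 3, 1, 4]
present_aid,    absent_aid    = [5, 5, 4, 5], [2, 2, 1, 3]

delta_auc = auc(present_aid, absent_aid) - auc(present_no_aid, absent_no_aid)
```

Positive `delta_auc` means the aided condition separated signal from noise better, which is the quantity the study correlates with eye-gaze metrics.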
{"title":"Greater benefits of deep learning-based computer-aided detection systems for finding small signals in 3D volumetric medical images.","authors":"Devi S Klein, Srijita Karmakar, Aditya Jonnalagadda, Craig K Abbey, Miguel P Eckstein","doi":"10.1117/1.JMI.11.4.045501","DOIUrl":"10.1117/1.JMI.11.4.045501","url":null,"abstract":"<p><strong>Purpose: </strong>Radiologists are tasked with visually scrutinizing large amounts of data produced by 3D volumetric imaging modalities. Small signals can go unnoticed during the 3D search because they are hard to detect in the visual periphery. Recent advances in machine learning and computer vision have led to effective computer-aided detection (CADe) support systems with the potential to mitigate perceptual errors.</p><p><strong>Approach: </strong>Sixteen nonexpert observers searched through digital breast tomosynthesis (DBT) phantoms and single cross-sectional slices of the DBT phantoms. The 3D/2D searches occurred with and without a convolutional neural network (CNN)-based CADe support system. The model provided observers with bounding boxes superimposed on the image stimuli while they looked for a small microcalcification signal and a large mass signal. Eye gaze positions were recorded and correlated with changes in the area under the ROC curve (AUC).</p><p><strong>Results: </strong>The CNN-CADe improved the 3D search for the small microcalcification signal ( <math><mrow><mi>Δ</mi> <mtext> </mtext> <mi>AUC</mi> <mo>=</mo> <mn>0.098</mn></mrow> </math> , <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.0002</mn></mrow> </math> ) and the 2D search for the large mass signal ( <math><mrow><mi>Δ</mi> <mtext> </mtext> <mi>AUC</mi> <mo>=</mo> <mn>0.076</mn></mrow> </math> , <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.002</mn></mrow> </math> ). 
The CNN-CADe benefit in 3D for the small signal was markedly greater than in 2D ( <math><mrow><mi>Δ</mi> <mi>Δ</mi> <mtext> </mtext> <mi>AUC</mi> <mo>=</mo> <mn>0.066</mn></mrow> </math> , <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.035</mn></mrow> </math> ). Analysis of individual differences suggests that those who explored the least with eye movements benefited the most from the CNN-CADe ( <math><mrow><mi>r</mi> <mo>=</mo> <mo>-</mo> <mn>0.528</mn></mrow> </math> , <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.036</mn></mrow> </math> ). However, for the large signal, the 2D benefit was not significantly greater than the 3D benefit ( <math><mrow><mi>Δ</mi> <mi>Δ</mi> <mtext> </mtext> <mi>AUC</mi> <mo>=</mo> <mn>0.033</mn></mrow> </math> , <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.133</mn></mrow> </math> ).</p><p><strong>Conclusion: </strong>The CNN-CADe brings unique performance benefits to the 3D (versus 2D) search of small signals by reducing errors caused by the underexploration of the volumetric data.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"045501"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11232702/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141581238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-01Epub Date: 2024-07-30DOI: 10.1117/1.JMI.11.4.044504
Johanna Brosig, Nina Krüger, Inna Khasyanova, Isaac Wamala, Matthias Ivantsits, Simon Sündermann, Jörg Kempfert, Stefan Heldmann, Anja Hennemuth
Purpose: Analyzing the anatomy of the aorta and left ventricular outflow tract (LVOT) is crucial for risk assessment and planning of transcatheter aortic valve implantation (TAVI). A comprehensive analysis of the aortic root and LVOT requires the extraction of the patient-individual anatomy via segmentation. Deep learning has shown good performance on various segmentation tasks. If this is formulated as a supervised problem, large amounts of annotated data are required for training. Therefore, minimizing the annotation complexity is desirable.
Approach: We propose two-dimensional (2D) cross-sectional annotation and point cloud-based surface reconstruction to train a fully automatic 3D segmentation network for the aortic root and the LVOT. Our sparse annotation scheme enables easy and fast training data generation for tubular structures such as the aortic root. From the segmentation results, we derive clinically relevant parameters for TAVI planning.
Results: The proposed 2D cross-sectional annotation results in high inter-observer agreement [Dice similarity coefficient (DSC): 0.94]. The segmentation model achieves a DSC of 0.90 and an average surface distance of 0.96 mm. Our approach achieves an aortic annulus maximum diameter difference between prediction and annotation of 0.45 mm (inter-observer variance: 0.25 mm).
Conclusions: The presented approach facilitates reproducible annotations. The annotations allow for training accurate segmentation models of the aortic root and LVOT. The segmentation results facilitate reproducible and quantifiable measurements for TAVI planning.
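The two reported segmentation metrics, DSC and average surface distance, can be sketched in a few lines of numpy. This is an illustrative implementation (brute-force surface distances; masks are assumed not to touch the volume border because `np.roll` wraps around), not the authors' evaluation code:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def surface_points(mask):
    """Foreground voxels with at least one face-neighbor in the background."""
    m = mask.astype(bool)
    interior = m.copy()
    for axis in range(m.ndim):
        for shift in (1, -1):
            interior &= np.roll(m, shift, axis=axis)
    return np.argwhere(m & ~interior)

def average_surface_distance(a, b):
    """Symmetric mean surface-to-surface distance, in voxel units."""
    pa, pb = surface_points(a), surface_points(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Multiplying the voxel-unit distance by the scan spacing yields the millimeter figures quoted in the abstract.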
{"title":"Learning three-dimensional aortic root assessment based on sparse annotations.","authors":"Johanna Brosig, Nina Krüger, Inna Khasyanova, Isaac Wamala, Matthias Ivantsits, Simon Sündermann, Jörg Kempfert, Stefan Heldmann, Anja Hennemuth","doi":"10.1117/1.JMI.11.4.044504","DOIUrl":"10.1117/1.JMI.11.4.044504","url":null,"abstract":"<p><strong>Purpose: </strong>Analyzing the anatomy of the aorta and left ventricular outflow tract (LVOT) is crucial for risk assessment and planning of transcatheter aortic valve implantation (TAVI). A comprehensive analysis of the aortic root and LVOT requires the extraction of the patient-individual anatomy via segmentation. Deep learning has shown good performance on various segmentation tasks. If this is formulated as a supervised problem, large amounts of annotated data are required for training. Therefore, minimizing the annotation complexity is desirable.</p><p><strong>Approach: </strong>We propose two-dimensional (2D) cross-sectional annotation and point cloud-based surface reconstruction to train a fully automatic 3D segmentation network for the aortic root and the LVOT. Our sparse annotation scheme enables easy and fast training data generation for tubular structures such as the aortic root. From the segmentation results, we derive clinically relevant parameters for TAVI planning.</p><p><strong>Results: </strong>The proposed 2D cross-sectional annotation results in high inter-observer agreement [Dice similarity coefficient (DSC): 0.94]. The segmentation model achieves a DSC of 0.90 and an average surface distance of 0.96 mm. Our approach achieves an aortic annulus maximum diameter difference between prediction and annotation of 0.45 mm (inter-observer variance: 0.25 mm).</p><p><strong>Conclusions: </strong>The presented approach facilitates reproducible annotations. The annotations allow for training accurate segmentation models of the aortic root and LVOT. 
The segmentation results facilitate reproducible and quantifiable measurements for TAVI planning.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"044504"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11287057/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141861254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-01Epub Date: 2024-08-24DOI: 10.1117/1.JMI.11.4.044007
Chenyu Gao, Qi Yang, Michael E Kim, Nazirah Mohd Khairi, Leon Y Cai, Nancy R Newlin, Praitayini Kanakaraj, Lucas W Remedios, Aravind R Krishnan, Xin Yu, Tianyuan Yao, Panpan Zhang, Kurt G Schilling, Daniel Moyer, Derek B Archer, Susan M Resnick, Bennett A Landman
Purpose: As large analyses merge data across sites, a deeper understanding of variance in statistical assessment across the sources of data becomes critical for valid analyses. Diffusion tensor imaging (DTI) exhibits spatially varying and correlated noise, so care must be taken with distributional assumptions. Here, we characterize the role of physiology, subject compliance, and the interaction of the subject with the scanner in the understanding of DTI variability, as modeled in the spatial variance of derived metrics in homogeneous regions.
Approach: We analyze DTI data from 1035 subjects in the Baltimore Longitudinal Study of Aging, with ages ranging from 22.4 to 103 years old. For each subject, up to 12 longitudinal sessions were conducted. We assess the variance of DTI scalars within regions of interest (ROIs) defined by four segmentation methods and investigate the relationships between the variance and covariates, including baseline age, time from the baseline (referred to as "interval"), motion, sex, and whether it is the first scan or the second scan in the session.
Results: Covariate effects are heterogeneous and bilaterally symmetric across ROIs. Inter-session interval is positively related (p ≪ 0.001) to FA variance in the cuneus and occipital gyrus, but negatively (p ≪ 0.001) in the caudate nucleus. Males show significantly (p ≪ 0.001) higher FA variance in the right putamen, thalamus, body of the corpus callosum, and cingulate gyrus. In 62 out of 176 ROIs defined by the Eve type-1 atlas, an increase in motion is associated (p < 0.05) with a decrease in FA variance. Head motion increases during the rescan of DTI (Δμ = 0.045 mm per volume).
Conclusions: The effects of each covariate on DTI variance and their relationships across ROIs are complex. Ultimately, we encourage researchers to include estimates of variance when sharing data and consider models of heteroscedasticity in analysis. This work provides a foundation for study planning to account for regional variations in metric variance.
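The suggested modeling of heteroscedasticity can be illustrated by regressing the log of within-ROI variance on session covariates. A purely synthetic sketch (the covariates, coefficients, and FA draws are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sessions: columns = [intercept, interval_years, motion_mm, male]
n = 500
X = np.column_stack([
    np.ones(n),
    rng.uniform(0, 10, n),   # interval since baseline
    rng.uniform(0, 2, n),    # mean head motion
    rng.integers(0, 2, n),   # sex indicator
])
beta_true = np.array([-4.0, 0.05, -0.10, 0.20])

# For each session, draw FA values for one ROI whose true variance depends
# on the covariates, then regress the log of the observed within-ROI
# sample variance on those covariates.
roi_var = np.exp(X @ beta_true)
log_var_hat = np.array([
    np.log(rng.normal(0.45, np.sqrt(v), size=2000).var(ddof=1))
    for v in roi_var
])
beta_hat, *_ = np.linalg.lstsq(X, log_var_hat, rcond=None)
```

Modeling the log-variance keeps the fitted variance positive and turns multiplicative covariate effects into additive coefficients, one common way to handle the heteroscedasticity the authors point to.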
{"title":"Characterizing patterns of diffusion tensor imaging variance in aging brains.","authors":"Chenyu Gao, Qi Yang, Michael E Kim, Nazirah Mohd Khairi, Leon Y Cai, Nancy R Newlin, Praitayini Kanakaraj, Lucas W Remedios, Aravind R Krishnan, Xin Yu, Tianyuan Yao, Panpan Zhang, Kurt G Schilling, Daniel Moyer, Derek B Archer, Susan M Resnick, Bennett A Landman","doi":"10.1117/1.JMI.11.4.044007","DOIUrl":"10.1117/1.JMI.11.4.044007","url":null,"abstract":"<p><strong>Purpose: </strong>As large analyses merge data across sites, a deeper understanding of variance in statistical assessment across the sources of data becomes critical for valid analyses. Diffusion tensor imaging (DTI) exhibits spatially varying and correlated noise, so care must be taken with distributional assumptions. Here, we characterize the role of physiology, subject compliance, and the interaction of the subject with the scanner in the understanding of DTI variability, as modeled in the spatial variance of derived metrics in homogeneous regions.</p><p><strong>Approach: </strong>We analyze DTI data from 1035 subjects in the Baltimore Longitudinal Study of Aging, with ages ranging from 22.4 to 103 years old. For each subject, up to 12 longitudinal sessions were conducted. We assess the variance of DTI scalars within regions of interest (ROIs) defined by four segmentation methods and investigate the relationships between the variance and covariates, including baseline age, time from the baseline (referred to as \"interval\"), motion, sex, and whether it is the first scan or the second scan in the session.</p><p><strong>Results: </strong>Covariate effects are heterogeneous and bilaterally symmetric across ROIs. Inter-session interval is positively related ( <math><mrow><mi>p</mi> <mo>≪</mo> <mn>0.001</mn></mrow> </math> ) to FA variance in the cuneus and occipital gyrus, but negatively ( <math><mrow><mi>p</mi> <mo>≪</mo> <mn>0.001</mn></mrow> </math> ) in the caudate nucleus. 
Males show significantly ( <math><mrow><mi>p</mi> <mo>≪</mo> <mn>0.001</mn></mrow> </math> ) higher FA variance in the right putamen, thalamus, body of the corpus callosum, and cingulate gyrus. In 62 out of 176 ROIs defined by the Eve type-1 atlas, an increase in motion is associated ( <math><mrow><mi>p</mi> <mo><</mo> <mn>0.05</mn></mrow> </math> ) with a decrease in FA variance. Head motion increases during the rescan of DTI ( <math><mrow><mi>Δ</mi> <mi>μ</mi> <mo>=</mo> <mn>0.045</mn></mrow> </math> mm per volume).</p><p><strong>Conclusions: </strong>The effects of each covariate on DTI variance and their relationships across ROIs are complex. Ultimately, we encourage researchers to include estimates of variance when sharing data and consider models of heteroscedasticity in analysis. This work provides a foundation for study planning to account for regional variations in metric variance.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"044007"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11344569/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142056920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-01Epub Date: 2024-04-30DOI: 10.1117/1.JMI.11.3.034008
Hanya Ahmed, Qianni Zhang, Robert Donnan, Akram Alomainy
Purpose: Optical coherence tomography (OCT) is an emerging imaging tool in healthcare with common applications in ophthalmology for detection of retinal diseases, as well as other medical domains. The noise in OCT images presents a great challenge as it hinders the clinician's ability to diagnose in extensive detail.
Approach: In this work, a region-based, deep-learning, denoising framework is proposed for adaptive cleaning of noisy OCT-acquired images. The core of the framework is a hybrid deep-learning model named transformer enhanced autoencoder rendering (TEAR). Attention gates are utilized to ensure focus on denoising the foreground and to remove the background. TEAR is designed to remove the different types of noise artifacts commonly present in OCT images and to enhance the visual quality.
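The attention gates mentioned above follow a common additive-gating pattern: the encoder features and a gating signal are projected, combined, and squashed into a [0, 1] weight that rescales the features. A toy numpy sketch of that general pattern for a single feature vector (the shapes and formulation are generic, not TEAR's actual architecture):

```python
import numpy as np

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate: project features x and gating signal g,
    combine with a ReLU, squash with a sigmoid, and reweight x."""
    q = np.maximum(W_x @ x + W_g @ g, 0.0)      # combined projection, ReLU
    alpha = 1.0 / (1.0 + np.exp(-(psi @ q)))    # attention weight in [0, 1]
    return alpha * x, alpha

rng = np.random.default_rng(1)
x = rng.normal(size=8)     # encoder feature vector at one spatial location
g = rng.normal(size=4)     # coarser gating signal from a deeper layer
W_x = rng.normal(size=(6, 8))
W_g = rng.normal(size=(6, 4))
psi = rng.normal(size=6)

gated, alpha = attention_gate(x, g, W_x, W_g, psi)
```

In a trained network the projections are learned, so regions judged to be background receive weights near zero, which is how gating can steer denoising toward the foreground.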
Results: Extensive quantitative evaluations are performed to evaluate the performance of TEAR and compare it against both deep-learning and traditional state-of-the-art denoising algorithms. The proposed method improved the peak signal-to-noise ratio to 27.9 dB, CNR to 6.3 dB, SSIM to 0.9, and equivalent number of looks to 120.8 dB for a dental dataset. For a retinal dataset, the performance metrics in the same sequence are: 24.6, 14.2, 0.64, and 1038.7 dB, respectively.
Conclusions: The results show that the approach verifiably removes speckle noise and achieves superior quality over several well-known denoisers.
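The quantitative comparison relies on standard image-quality metrics. Two of them, PSNR and the equivalent number of looks (ENL), are compact enough to sketch directly (a generic implementation, not the paper's evaluation code; note that ENL is conventionally a dimensionless ratio):

```python
import numpy as np

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def enl(region):
    """Equivalent number of looks of a homogeneous region: mean^2 / variance.
    Higher values indicate stronger speckle suppression."""
    region = region.astype(float)
    return region.mean() ** 2 / region.var()
```

PSNR needs a clean reference image, while ENL is computed on a manually chosen homogeneous patch of the denoised output alone, which is why the two are usually reported together.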
{"title":"Transformer enhanced autoencoder rendering cleaning of noisy optical coherence tomography images.","authors":"Hanya Ahmed, Qianni Zhang, Robert Donnan, Akram Alomainy","doi":"10.1117/1.JMI.11.3.034008","DOIUrl":"https://doi.org/10.1117/1.JMI.11.3.034008","url":null,"abstract":"<p><strong>Purpose: </strong>Optical coherence tomography (OCT) is an emerging imaging tool in healthcare with common applications in ophthalmology for detection of retinal diseases, as well as other medical domains. The noise in OCT images presents a great challenge as it hinders the clinician's ability to diagnosis in extensive detail.</p><p><strong>Approach: </strong>In this work, a region-based, deep-learning, denoising framework is proposed for adaptive cleaning of noisy OCT-acquired images. The core of the framework is a hybrid deep-learning model named transformer enhanced autoencoder rendering (TEAR). Attention gates are utilized to ensure focus on denoising the foreground and to remove the background. TEAR is designed to remove the different types of noise artifacts commonly present in OCT images and to enhance the visual quality.</p><p><strong>Results: </strong>Extensive quantitative evaluations are performed to evaluate the performance of TEAR and compare it against both deep-learning and traditional state-of-the-art denoising algorithms. The proposed method improved the peak signal-to-noise ratio to 27.9 dB, CNR to 6.3 dB, SSIM to 0.9, and equivalent number of looks to 120.8 dB for a dental dataset. 
For a retinal dataset, the performance metrics in the same sequence are: 24.6, 14.2, 0.64, and 1038.7 dB, respectively.</p><p><strong>Conclusions: </strong>The results show that the approach verifiably removes speckle noise and achieves superior quality over several well-known denoisers.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 3","pages":"034008"},"PeriodicalIF":2.4,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11058346/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140858602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}