
Journal of Medical Imaging: latest publications

Lung vessel connectivity map as anatomical prior knowledge for deep learning-based lung lobe segmentation.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-07-01 Epub Date: 2024-07-09 DOI: 10.1117/1.JMI.11.4.044001
Simone Bendazzoli, Emelie Bäcklin, Örjan Smedby, Birgitta Janerot-Sjoberg, Bryan Connolly, Chunliang Wang

Purpose: Our study investigates the potential benefits of incorporating prior anatomical knowledge into a deep learning (DL) method designed for the automated segmentation of lung lobes in chest CT scans.

Approach: We introduce an automated DL-based approach that leverages anatomical information from the lung's vascular system to guide and enhance the segmentation process. This involves utilizing a lung vessel connectivity (LVC) map, which encodes relevant lung vessel anatomical data. Our study explores the performance of three different neural network architectures within the nnU-Net framework: a standalone U-Net, a multitasking U-Net, and a cascade U-Net.
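The abstract does not say exactly how the LVC map enters the three nnU-Net variants; one common way to supply such an anatomical prior is as an extra input channel stacked with the CT volume. The sketch below shows only that generic stacking step, with illustrative names and shapes:

```python
import numpy as np

def stack_ct_with_lvc(ct_volume, lvc_map):
    """Stack a CT volume with its lung vessel connectivity (LVC) map
    along a leading channel axis -- one common way to hand an anatomical
    prior to a multi-channel segmentation network."""
    ct_volume = np.asarray(ct_volume, dtype=np.float32)
    lvc_map = np.asarray(lvc_map, dtype=np.float32)
    if ct_volume.shape != lvc_map.shape:
        raise ValueError("CT volume and LVC map must share the same shape")
    # Channel 0: intensity image; channel 1: anatomical prior.
    return np.stack([ct_volume, lvc_map], axis=0)  # (channels, z, y, x)

ct = np.zeros((8, 8, 8))   # toy CT volume
lvc = np.ones((8, 8, 8))   # toy LVC map aligned to the CT grid
x = stack_ct_with_lvc(ct, lvc)
print(x.shape)  # (2, 8, 8, 8)
```

A cascade variant would instead feed the output of a first network to a second one; the channel-stacking idea above is the simplest of the three configurations named.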

Results: Experimental findings suggest that the inclusion of LVC information in the DL model can lead to improved segmentation accuracy, particularly in the challenging boundary regions of expiration chest CT volumes. Furthermore, our study demonstrates the potential for LVC to enhance the model's generalization capabilities. Finally, the method's robustness is evaluated through the segmentation of lung lobes in 10 cases of COVID-19, demonstrating its applicability in the presence of pulmonary diseases.

Conclusions: Incorporating prior anatomical information, such as LVC, into the DL model shows promise for enhancing segmentation performance, particularly in the boundary regions. However, the extent of this improvement has limitations, prompting further exploration of its practical applicability.

AI-based automated segmentation for ovarian/adnexal masses and their internal components on ultrasound imaging.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-07-01 Epub Date: 2024-08-06 DOI: 10.1117/1.JMI.11.4.044505
Heather M Whitney, Roni Yoeli-Bik, Jacques S Abramowicz, Li Lan, Hui Li, Ryan E Longman, Ernst Lengyel, Maryellen L Giger

Purpose: Segmentation of ovarian/adnexal masses from surrounding tissue on ultrasound images is a challenging task. The separation of masses into different components may also be important for radiomic feature extraction. Our study aimed to develop an artificial intelligence-based automatic segmentation method for transvaginal ultrasound images that (1) outlines the exterior boundary of adnexal masses and (2) separates internal components.

Approach: A retrospective ultrasound imaging database of adnexal masses was reviewed for exclusion criteria at the patient, mass, and image levels, with one image per mass. The resulting 54 adnexal masses (36 benign/18 malignant) from 53 patients were separated by patient into training (26 benign/12 malignant) and independent test (10 benign/6 malignant) sets. U-net segmentation performance on test images compared to expert detailed outlines was measured using the Dice similarity coefficient (DSC) and the ratio of the Hausdorff distance to the effective diameter of the outline (R_HD-D) for each mass. Subsequently, in discovery mode, a two-level fuzzy c-means (FCM) unsupervised clustering approach was used to separate the pixels within masses belonging to hypoechoic or hyperechoic components.
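The two geometric metrics can be sketched as follows. `dice` is the standard overlap score; `r_hd_d` assumes "effective diameter" means the diameter of a circle with the same area as the outline, which is an interpretation on my part, not a detail given in the abstract:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def r_hd_d(points_a, points_b, area):
    """Symmetric Hausdorff distance between two outline point sets,
    normalized by the effective diameter of a circle of equal area."""
    hd = max(directed_hausdorff(points_a, points_b)[0],
             directed_hausdorff(points_b, points_a)[0])
    return hd / (2.0 * np.sqrt(area / np.pi))

# Toy example: an expert outline and a slightly shrunken prediction.
expert = np.zeros((10, 10), dtype=bool); expert[2:8, 2:8] = True
pred = np.zeros((10, 10), dtype=bool);   pred[3:8, 2:8] = True
dsc = dice(expert, pred)
ratio = r_hd_d(np.argwhere(expert), np.argwhere(pred), expert.sum())
print(round(dsc, 3))  # 0.909
```

Smaller R_HD-D means the worst boundary error is small relative to the mass size, which is why the reported median of 0.04 indicates close agreement.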

Results: The DSC (median [95% confidence interval]) was 0.91 [0.78, 0.96], and R_HD-D was 0.04 [0.01, 0.12], indicating strong agreement with expert outlines. Clinical review of the internal separation of masses into echogenic components demonstrated a strong association with mass characteristics.

Conclusion: A combined U-net and FCM algorithm for automatic segmentation of adnexal masses and their internal components achieved excellent results compared with expert outlines and review, supporting future radiomic feature-based classification of the masses by components.

Field-of-view extension for brain diffusion MRI via deep generative models.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-07-01 Epub Date: 2024-08-24 DOI: 10.1117/1.JMI.11.4.044008
Chenyu Gao, Shunxing Bao, Michael E Kim, Nancy R Newlin, Praitayini Kanakaraj, Tianyuan Yao, Gaurav Rudravaram, Yuankai Huo, Daniel Moyer, Kurt Schilling, Walter A Kukull, Arthur W Toga, Derek B Archer, Timothy J Hohman, Bennett A Landman, Zhiyuan Li
Purpose: In brain diffusion magnetic resonance imaging (dMRI), the volumetric and bundle analyses of whole-brain tissue microstructure and connectivity can be severely impeded by an incomplete field of view (FOV). We aim to develop a method for imputing the missing slices directly from existing dMRI scans with an incomplete FOV. We hypothesize that the imputed image with a complete FOV can improve whole-brain tractography for corrupted data with an incomplete FOV. Therefore, our approach provides a desirable alternative to discarding the valuable brain dMRI data, enabling subsequent tractography analyses that would otherwise be challenging or unattainable with corrupted data.

Approach: We propose a framework based on a deep generative model that estimates the absent brain regions in dMRI scans with an incomplete FOV. The model is capable of learning both the diffusion characteristics in diffusion-weighted images (DWIs) and the anatomical features evident in the corresponding structural images for efficiently imputing missing slices of DWIs in the incomplete part of the FOV.

Results: For evaluating the imputed slices, on the Wisconsin Registry for Alzheimer's Prevention (WRAP) dataset, the proposed framework achieved PSNR_b0 = 22.397, SSIM_b0 = 0.905, PSNR_b1300 = 22.479, and SSIM_b1300 = 0.893; on the National Alzheimer's Coordinating Center (NACC) dataset, it achieved PSNR_b0 = 21.304, SSIM_b0 = 0.892, PSNR_b1300 = 21.599, and SSIM_b1300 = 0.877. The proposed framework improved the tractography accuracy, as demonstrated by an increased average Dice score for 72 tracts (p < 0.001) on both the WRAP and NACC datasets.

Conclusions: Results suggest that the proposed framework achieved sufficient imputation performance in brain dMRI data with an incomplete FOV for improving whole-brain tractography, thereby repairing the corrupted data. Our approach achieved more accurate whole-brain tractography results with an extended and complete FOV and reduced the uncertainty when analyzing bundles associated with Alzheimer's disease.
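PSNR, the per-b-value fidelity metric reported above, follows directly from the mean squared error. A minimal NumPy version (the `data_range` convention is a common default, not a detail taken from the paper):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images whose
    intensities lie in [0, data_range]."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((4, 4))
deg = np.full((4, 4), 0.1)  # uniform 0.1 error -> MSE = 0.01
print(round(psnr(ref, deg), 3))  # 20.0 dB
```

Higher is better, so the roughly 21-22 dB reported for the imputed b=0 and b=1300 shells quantifies how closely the generated slices match the held-out ground truth.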
Transformation from hematoxylin-and-eosin staining to Ki-67 immunohistochemistry digital staining images using deep learning: experimental validation on the labeling index.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-07-01 Epub Date: 2024-07-30 DOI: 10.1117/1.JMI.11.4.047501
Cunyuan Ji, Kengo Oshima, Takumi Urata, Fumikazu Kimura, Keiko Ishii, Takeshi Uehara, Kenji Suzuki, Saori Takeyama, Masahiro Yamaguchi

Purpose: Endometrial cancer (EC) is one of the most common types of cancer affecting women. While hematoxylin-and-eosin (H&E) staining remains the standard for histological analysis, the immunohistochemistry (IHC) method provides molecular-level visualizations. Our study proposes a digital staining method to generate the hematoxylin-3,3'-diaminobenzidine (H-DAB) IHC stain of Ki-67 for the whole slide image of the EC tumor from its H&E stain counterpart.

Approach: We employed a color unmixing technique to yield stain density maps from the optical density (OD) of the stains and utilized the U-Net for end-to-end inference. The effectiveness of the proposed method was evaluated using the Pearson correlation between the digital and physical stain's labeling index (LI), a key metric indicating tumor proliferation. Two different cross-validation schemes were designed in our study: intraslide validation and cross-case validation (CCV). In the widely used intraslide scheme, the training and validation sets might include different regions from the same slide. The rigorous CCV scheme strictly prohibited any validation slide from contributing to training.
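Color unmixing of this kind is conventionally done in optical density space, where stain contributions mix linearly (Beer-Lambert law). A minimal sketch, using the widely published reference vectors for hematoxylin and DAB rather than the authors' calibrated ones:

```python
import numpy as np

def rgb_to_od(rgb, background=255.0):
    """Beer-Lambert conversion from RGB intensity to optical density,
    the space in which stain contributions add linearly: OD = -log10(I / I0)."""
    rgb = np.clip(np.asarray(rgb, dtype=np.float64), 1.0, background)  # avoid log(0)
    return -np.log10(rgb / background)

# One pixel: attenuation by factors of 1, 10, and 100 per channel.
od = rgb_to_od([255.0, 25.5, 2.55])
print(od)  # -> approximately [0., 1., 2.]

# Unmixing: per-pixel stain densities solve a small least-squares system
# against unit-OD stain vectors (standard published values, for illustration).
stain_od = np.array([[0.65, 0.70, 0.29],   # hematoxylin
                     [0.27, 0.57, 0.78]])  # DAB
densities, *_ = np.linalg.lstsq(stain_od.T, od, rcond=None)
```

Working in OD rather than raw RGB is also the likely reason the OD-based model in the Results outperforms the RGB-space baseline: the target becomes linear in stain concentration.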

Results: The proposed method yielded a high-resolution digital stain with preserved histological features, indicating a reliable correlation with the physical stain in terms of the Ki-67 LI. In the intraslide scheme, using intraslide patches resulted in a biased accuracy (e.g., R = 0.98) significantly higher than that of CCV. The CCV scheme retained a fair correlation (e.g., R = 0.66) between the LIs calculated from the digital stain and its physical IHC counterpart. Inferring the OD of the IHC stain from that of the H&E stain enhanced the correlation metric, outperforming that of the baseline model using the RGB space.

Conclusions: Our study revealed that molecule-level insights could be obtained from H&E images using deep learning. Furthermore, the improvement brought via OD inference indicated a possible method for creating more generalizable models for digital staining via per-stain analysis.

Generative adversarial network-based reconstruction of healthy anatomy for anomaly detection in brain CT scans.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-07-01 Epub Date: 2024-08-09 DOI: 10.1117/1.JMI.11.4.044508
Sina Walluscheck, Annika Gerken, Ivana Galinovic, Kersten Villringer, Jochen B Fiebach, Jan Klein, Stefan Heldmann

Purpose: To help radiologists examine the growing number of computed tomography (CT) scans, automatic anomaly detection is an ongoing focus of medical imaging research. Radiologists must analyze a CT scan by searching for any deviation from normal healthy anatomy. We propose an approach to detecting abnormalities in axial 2D CT slice images of the brain. Although much research has been done on detecting abnormalities in magnetic resonance images of the brain, there is little work on CT scans, where abnormalities are more difficult to detect because of the low image contrast that the model must represent.

Approach: We use a generative adversarial network (GAN) to learn normal brain anatomy in the first step and compare two approaches to image reconstruction: training an encoder in the second step and using iterative optimization during inference. Then, we analyze the differences from the original scan to detect and localize anomalies in the brain.
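The difference analysis in the last step can be as simple as thresholding the absolute residual between the scan and its "healthy" reconstruction. A toy sketch of that generic heuristic (the threshold and names are illustrative, not the paper's exact post-processing):

```python
import numpy as np

def anomaly_map(scan, reconstruction, threshold=0.2):
    """Residual between a scan and its reconstruction of healthy anatomy,
    plus a binary anomaly mask from simple thresholding."""
    residual = np.abs(np.asarray(scan, dtype=np.float64)
                      - np.asarray(reconstruction, dtype=np.float64))
    return residual, residual > threshold

scan = np.zeros((6, 6)); scan[2:4, 2:4] = 1.0  # simulated hyperdense lesion
healthy = np.zeros((6, 6))                     # idealized GAN reconstruction
res, mask = anomaly_map(scan, healthy)
print(int(mask.sum()))  # 4 anomalous pixels
```

Because only healthy anatomy is modeled, anything the generator cannot reproduce (hemorrhage, tumor, or other lesions) surfaces in the residual, which is what makes the approach lesion-agnostic.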

Results: Our approach can reconstruct healthy anatomy with good image contrast for brain CT scans. We obtain median Dice scores of 0.71 on our hemorrhage test data and 0.43 on our test set with additional tumor images from publicly available data sources. We also compare our models to a state-of-the-art autoencoder and a diffusion model and obtain qualitatively more accurate reconstructions.

Conclusions: Without defining anomalies during training, a GAN-based network was used to learn healthy anatomy for brain CT scans. Notably, our approach is not limited to the localization of hemorrhages and tumors and could thus be used to detect structural anatomical changes and other lesions.

目的:为了帮助放射科医生检查日益增多的计算机断层扫描(CT),自动异常检测一直是医学影像研究的重点。放射科医生在分析 CT 扫描时,必须寻找任何偏离正常健康解剖结构的地方。我们提出了一种检测大脑轴向二维 CT 切片图像异常的方法。尽管在检测脑部磁共振图像异常方面已做了大量研究,但在 CT 扫描方面的研究却很少,由于 CT 扫描图像对比度低,异常更难检测,而 CT 扫描图像的异常必须由所使用的模型来表示:方法:我们在第一步使用生成式对抗网络(GAN)学习正常的大脑解剖结构,并比较两种图像重建方法:在第二步训练编码器和在推理过程中使用迭代优化。然后,我们分析与原始扫描的差异,以检测和定位大脑中的异常:我们的方法可以重建健康的解剖结构,并为脑部 CT 扫描提供良好的图像对比度。我们在出血测试数据上获得的中位 Dice 得分为 0.71,在测试集上获得的中位 Dice 得分为 0.43,测试集上还有来自公开数据源的肿瘤图像。我们还将我们的模型与最先进的自动编码器和扩散模型进行了比较,得到了更精确的重建结果:结论:在训练过程中无需定义异常,基于 GAN 的网络就能学习脑 CT 扫描的健康解剖结构。值得注意的是,我们的方法并不局限于出血和肿瘤的定位,因此可用于检测结构解剖学变化和其他病变。
{"title":"Generative adversarial network-based reconstruction of healthy anatomy for anomaly detection in brain CT scans.","authors":"Sina Walluscheck, Annika Gerken, Ivana Galinovic, Kersten Villringer, Jochen B Fiebach, Jan Klein, Stefan Heldmann","doi":"10.1117/1.JMI.11.4.044508","DOIUrl":"10.1117/1.JMI.11.4.044508","url":null,"abstract":"<p><strong>Purpose: </strong>To help radiologists examine the growing number of computed tomography (CT) scans, automatic anomaly detection is an ongoing focus of medical imaging research. Radiologists must analyze a CT scan by searching for any deviation from normal healthy anatomy. We propose an approach to detecting abnormalities in axial 2D CT slice images of the brain. Although much research has been done on detecting abnormalities in magnetic resonance images of the brain, there is little work on CT scans, where abnormalities are more difficult to detect due to the low image contrast that must be represented by the model used.</p><p><strong>Approach: </strong>We use a generative adversarial network (GAN) to learn normal brain anatomy in the first step and compare two approaches to image reconstruction: training an encoder in the second step and using iterative optimization during inference. Then, we analyze the differences from the original scan to detect and localize anomalies in the brain.</p><p><strong>Results: </strong>Our approach can reconstruct healthy anatomy with good image contrast for brain CT scans. We obtain median Dice scores of 0.71 on our hemorrhage test data and 0.43 on our test set with additional tumor images from publicly available data sources. We also compare our models to a state-of-the-art autoencoder and a diffusion model and obtain qualitatively more accurate reconstructions.</p><p><strong>Conclusions: </strong>Without defining anomalies during training, a GAN-based network was used to learn healthy anatomy for brain CT scans. 
Notably, our approach is not limited to the localization of hemorrhages and tumors and could thus be used to detect structural anatomical changes and other lesions.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"044508"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11315301/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141917780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving radiological quantification of levator hiatus features with measures informed by statistical shape modeling.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-07-01 Epub Date: 2024-08-10 DOI: 10.1117/1.JMI.11.4.045001
Vincenzia S Vargo, Megan R Routzong, Pamela A Moalli, Ghazaleh Rostaminia, Steven D Abramowitch

Purpose: The measures that traditionally describe the levator hiatus (LH) are straightforward and reliable; however, they were not specifically designed to capture significant differences. Statistical shape modeling (SSM) was used to quantify LH shape variation across reproductive-age women and identify novel variables associated with LH size and shape.

Approach: A retrospective study of pelvic MRIs from 19 nulliparous, 32 parous, and 12 pregnant women was performed. The LH was segmented in the plane of minimal LH dimensions. SSM was implemented. LH size was defined by the cross-sectional area, maximal transverse diameter, and anterior-posterior (A-P) diameter. Novel SSM-guided variables were defined by regions of greatest variation. Multivariate analysis of variance (MANOVA) evaluated group differences, and correlations determined relationships between size and shape variables.

Results: Overall shape (p < 0.001), SSM mode 2 (oval to T-shape, p = 0.002), mode 3 (rounder to broader anterior shape, p = 0.004), and maximal transverse diameter (p = 0.003) significantly differed between groups. Novel anterior and posterior transverse diameters were identified at 14% and 79% of the A-P length. Anterior transverse diameter and maximal transverse diameter were strongly correlated (r = 0.780, p < 0.001), while posterior transverse diameter and maximal transverse diameter were weakly correlated (r = 0.398, p = 0.001).

Conclusions: The traditional maximal transverse diameter generally corresponded with SSM findings but cannot describe anterior and posterior variation independently. The novel anterior and posterior transverse diameters represent both size and shape variation, can be easily calculated alongside traditional measures, and are more sensitive to subtle and local LH variation. Thus, they have a greater ability to serve as predictive and diagnostic parameters.
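The correlations reported above are plain Pearson coefficients between diameter measurements. A minimal NumPy sketch, using hypothetical diameter values (the numbers below are illustrative, not study data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement series."""
    x = np.asarray(x, dtype=float); y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical diameters (mm) for five subjects -- values are illustrative only.
maximal  = [38.0, 41.5, 36.2, 44.0, 39.3]
anterior = [30.1, 33.0, 28.9, 35.6, 31.2]
print(round(pearson_r(anterior, maximal), 3))
```

On the study's data this statistic gave r = 0.780 for the anterior diameter and r = 0.398 for the posterior diameter against the maximal transverse diameter.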

{"title":"Improving radiological quantification of levator hiatus features with measures informed by statistical shape modeling.","authors":"Vincenzia S Vargo, Megan R Routzong, Pamela A Moalli, Ghazaleh Rostaminia, Steven D Abramowitch","doi":"10.1117/1.JMI.11.4.045001","DOIUrl":"10.1117/1.JMI.11.4.045001","url":null,"abstract":"<p><strong>Purpose: </strong>The measures that traditionally describe the levator hiatus (LH) are straightforward and reliable; however, they were not specifically designed to capture significant differences. Statistical shape modeling (SSM) was used to quantify LH shape variation across reproductive-age women and identify novel variables associated with LH size and shape.</p><p><strong>Approach: </strong>A retrospective study of pelvic MRIs from 19 nulliparous, 32 parous, and 12 pregnant women was performed. The LH was segmented in the plane of minimal LH dimensions. SSM was implemented. LH size was defined by the cross-sectional area, maximal transverse diameter, and anterior-posterior (A-P) diameter. Novel SSM-guided variables were defined by regions of greatest variation. Multivariate analysis of variance (MANOVA) evaluated group differences, and correlations determined relationships between size and shape variables.</p><p><strong>Results: </strong>Overall shape ( <math><mrow><mi>p</mi> <mo><</mo> <mn>0.001</mn></mrow> </math> ), SSM mode 2 (oval to <math><mrow><mi>T</mi></mrow> </math> -shape, <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.002</mn></mrow> </math> ), mode 3 (rounder to broader anterior shape, <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.004</mn></mrow> </math> ), and maximal transverse diameter ( <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.003</mn></mrow> </math> ) significantly differed between groups. Novel anterior and posterior transverse diameters were identified at 14% and 79% of the A-P length. 
Anterior transverse diameter and maximal transverse diameter were strongly correlated ( <math><mrow><mi>r</mi> <mo>=</mo> <mn>0.780</mn></mrow> </math> , <math><mrow><mi>p</mi> <mo><</mo> <mn>0.001</mn></mrow> </math> ), while posterior transverse diameter and maximal transverse diameter were weakly correlated ( <math><mrow><mi>r</mi> <mo>=</mo> <mn>0.398</mn></mrow> </math> , <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.001</mn></mrow> </math> ).</p><p><strong>Conclusions: </strong>The traditional maximal transverse diameter generally corresponded with SSM findings but cannot describe anterior and posterior variation independently. The novel anterior and posterior transverse diameters represent both size and shape variation, can be easily calculated alongside traditional measures, and are more sensitive to subtle and local LH variation. Thus, they have a greater ability to serve as predictive and diagnostic parameters.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"045001"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11316399/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141917781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Greater benefits of deep learning-based computer-aided detection systems for finding small signals in 3D volumetric medical images.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-07-01 Epub Date: 2024-07-09 DOI: 10.1117/1.JMI.11.4.045501
Devi S Klein, Srijita Karmakar, Aditya Jonnalagadda, Craig K Abbey, Miguel P Eckstein

Purpose: Radiologists are tasked with visually scrutinizing large amounts of data produced by 3D volumetric imaging modalities. Small signals can go unnoticed during the 3D search because they are hard to detect in the visual periphery. Recent advances in machine learning and computer vision have led to effective computer-aided detection (CADe) support systems with the potential to mitigate perceptual errors.

Approach: Sixteen nonexpert observers searched through digital breast tomosynthesis (DBT) phantoms and single cross-sectional slices of the DBT phantoms. The 3D/2D searches occurred with and without a convolutional neural network (CNN)-based CADe support system. The model provided observers with bounding boxes superimposed on the image stimuli while they looked for a small microcalcification signal and a large mass signal. Eye gaze positions were recorded and correlated with changes in the area under the ROC curve (AUC).

Results: The CNN-CADe improved the 3D search for the small microcalcification signal (ΔAUC = 0.098, p = 0.0002) and the 2D search for the large mass signal (ΔAUC = 0.076, p = 0.002). The CNN-CADe benefit in 3D for the small signal was markedly greater than in 2D (ΔΔAUC = 0.066, p = 0.035). Analysis of individual differences suggests that those who explored the least with eye movements benefited the most from the CNN-CADe (r = -0.528, p = 0.036). However, for the large signal, the 2D benefit was not significantly greater than the 3D benefit (ΔΔAUC = 0.033, p = 0.133).

Conclusion: The CNN-CADe brings unique performance benefits to the 3D (versus 2D) search of small signals by reducing errors caused by the underexploration of the volumetric data.
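The AUC differences this study reports can be computed nonparametrically from observer ratings via the Mann-Whitney U statistic. A minimal sketch with toy confidence ratings (the scores are assumptions, not the study's data):

```python
import numpy as np

def auc(signal_scores, noise_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    s = np.asarray(signal_scores, dtype=float)[:, None]
    n = np.asarray(noise_scores, dtype=float)[None, :]
    return float((s > n).mean() + 0.5 * (s == n).mean())

# Hypothetical confidence ratings for signal-present vs. signal-absent cases.
without_cade = auc([0.6, 0.7, 0.4, 0.8], [0.5, 0.3, 0.6, 0.2])
with_cade    = auc([0.8, 0.9, 0.7, 0.9], [0.5, 0.3, 0.6, 0.2])
print(round(with_cade - without_cade, 3))  # 0.156 -- a toy Delta-AUC
```

The study's ΔAUC values (e.g., 0.098 for the small signal in 3D) are differences of exactly this kind of statistic between the CADe and no-CADe conditions.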

{"title":"Greater benefits of deep learning-based computer-aided detection systems for finding small signals in 3D volumetric medical images.","authors":"Devi S Klein, Srijita Karmakar, Aditya Jonnalagadda, Craig K Abbey, Miguel P Eckstein","doi":"10.1117/1.JMI.11.4.045501","DOIUrl":"10.1117/1.JMI.11.4.045501","url":null,"abstract":"<p><strong>Purpose: </strong>Radiologists are tasked with visually scrutinizing large amounts of data produced by 3D volumetric imaging modalities. Small signals can go unnoticed during the 3D search because they are hard to detect in the visual periphery. Recent advances in machine learning and computer vision have led to effective computer-aided detection (CADe) support systems with the potential to mitigate perceptual errors.</p><p><strong>Approach: </strong>Sixteen nonexpert observers searched through digital breast tomosynthesis (DBT) phantoms and single cross-sectional slices of the DBT phantoms. The 3D/2D searches occurred with and without a convolutional neural network (CNN)-based CADe support system. The model provided observers with bounding boxes superimposed on the image stimuli while they looked for a small microcalcification signal and a large mass signal. Eye gaze positions were recorded and correlated with changes in the area under the ROC curve (AUC).</p><p><strong>Results: </strong>The CNN-CADe improved the 3D search for the small microcalcification signal ( <math><mrow><mi>Δ</mi> <mtext> </mtext> <mi>AUC</mi> <mo>=</mo> <mn>0.098</mn></mrow> </math> , <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.0002</mn></mrow> </math> ) and the 2D search for the large mass signal ( <math><mrow><mi>Δ</mi> <mtext> </mtext> <mi>AUC</mi> <mo>=</mo> <mn>0.076</mn></mrow> </math> , <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.002</mn></mrow> </math> ). 
The CNN-CADe benefit in 3D for the small signal was markedly greater than in 2D ( <math><mrow><mi>Δ</mi> <mi>Δ</mi> <mtext> </mtext> <mi>AUC</mi> <mo>=</mo> <mn>0.066</mn></mrow> </math> , <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.035</mn></mrow> </math> ). Analysis of individual differences suggests that those who explored the least with eye movements benefited the most from the CNN-CADe ( <math><mrow><mi>r</mi> <mo>=</mo> <mo>-</mo> <mn>0.528</mn></mrow> </math> , <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.036</mn></mrow> </math> ). However, for the large signal, the 2D benefit was not significantly greater than the 3D benefit ( <math><mrow><mi>Δ</mi> <mi>Δ</mi> <mtext> </mtext> <mi>AUC</mi> <mo>=</mo> <mn>0.033</mn></mrow> </math> , <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.133</mn></mrow> </math> ).</p><p><strong>Conclusion: </strong>The CNN-CADe brings unique performance benefits to the 3D (versus 2D) search of small signals by reducing errors caused by the underexploration of the volumetric data.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"045501"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11232702/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141581238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning three-dimensional aortic root assessment based on sparse annotations.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-07-01 Epub Date: 2024-07-30 DOI: 10.1117/1.JMI.11.4.044504
Johanna Brosig, Nina Krüger, Inna Khasyanova, Isaac Wamala, Matthias Ivantsits, Simon Sündermann, Jörg Kempfert, Stefan Heldmann, Anja Hennemuth

Purpose: Analyzing the anatomy of the aorta and left ventricular outflow tract (LVOT) is crucial for risk assessment and planning of transcatheter aortic valve implantation (TAVI). A comprehensive analysis of the aortic root and LVOT requires the extraction of the patient-individual anatomy via segmentation. Deep learning has shown good performance on various segmentation tasks. If this is formulated as a supervised problem, large amounts of annotated data are required for training. Therefore, minimizing the annotation complexity is desirable.

Approach: We propose two-dimensional (2D) cross-sectional annotation and point cloud-based surface reconstruction to train a fully automatic 3D segmentation network for the aortic root and the LVOT. Our sparse annotation scheme enables easy and fast training data generation for tubular structures such as the aortic root. From the segmentation results, we derive clinically relevant parameters for TAVI planning.

Results: The proposed 2D cross-sectional annotation results in high inter-observer agreement [Dice similarity coefficient (DSC): 0.94]. The segmentation model achieves a DSC of 0.90 and an average surface distance of 0.96 mm. Our approach achieves an aortic annulus maximum diameter difference between prediction and annotation of 0.45 mm (inter-observer variance: 0.25 mm).

Conclusions: The presented approach facilitates reproducible annotations. The annotations allow for training accurate segmentation models of the aortic root and LVOT. The segmentation results facilitate reproducible and quantifiable measurements for TAVI planning.
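The average surface distance reported above is the symmetric mean nearest-neighbor distance between the predicted and annotated surfaces. A minimal brute-force sketch over small point sets (the toy coordinates are illustrative only):

```python
import numpy as np

def average_surface_distance(pts_a, pts_b):
    """Symmetric average distance between two surface point sets (N x 3 arrays)."""
    a = np.asarray(pts_a, dtype=float); b = np.asarray(pts_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy surfaces: an annotation and a prediction shifted 1 mm along x.
annotation = np.array([[0, 0, 0], [0, 1, 0], [0, 2, 0]], dtype=float)
prediction = annotation + np.array([1.0, 0.0, 0.0])
print(average_surface_distance(prediction, annotation))  # 1.0
```

Real evaluations compute this over densely sampled mesh surfaces (here the reported value is 0.96 mm); the brute-force pairwise matrix is only practical for small point counts.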

{"title":"Learning three-dimensional aortic root assessment based on sparse annotations.","authors":"Johanna Brosig, Nina Krüger, Inna Khasyanova, Isaac Wamala, Matthias Ivantsits, Simon Sündermann, Jörg Kempfert, Stefan Heldmann, Anja Hennemuth","doi":"10.1117/1.JMI.11.4.044504","DOIUrl":"10.1117/1.JMI.11.4.044504","url":null,"abstract":"<p><strong>Purpose: </strong>Analyzing the anatomy of the aorta and left ventricular outflow tract (LVOT) is crucial for risk assessment and planning of transcatheter aortic valve implantation (TAVI). A comprehensive analysis of the aortic root and LVOT requires the extraction of the patient-individual anatomy via segmentation. Deep learning has shown good performance on various segmentation tasks. If this is formulated as a supervised problem, large amounts of annotated data are required for training. Therefore, minimizing the annotation complexity is desirable.</p><p><strong>Approach: </strong>We propose two-dimensional (2D) cross-sectional annotation and point cloud-based surface reconstruction to train a fully automatic 3D segmentation network for the aortic root and the LVOT. Our sparse annotation scheme enables easy and fast training data generation for tubular structures such as the aortic root. From the segmentation results, we derive clinically relevant parameters for TAVI planning.</p><p><strong>Results: </strong>The proposed 2D cross-sectional annotation results in high inter-observer agreement [Dice similarity coefficient (DSC): 0.94]. The segmentation model achieves a DSC of 0.90 and an average surface distance of 0.96 mm. Our approach achieves an aortic annulus maximum diameter difference between prediction and annotation of 0.45 mm (inter-observer variance: 0.25 mm).</p><p><strong>Conclusions: </strong>The presented approach facilitates reproducible annotations. The annotations allow for training accurate segmentation models of the aortic root and LVOT. 
The segmentation results facilitate reproducible and quantifiable measurements for TAVI planning.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"044504"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11287057/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141861254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Characterizing patterns of diffusion tensor imaging variance in aging brains.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-07-01 Epub Date: 2024-08-24 DOI: 10.1117/1.JMI.11.4.044007
Chenyu Gao, Qi Yang, Michael E Kim, Nazirah Mohd Khairi, Leon Y Cai, Nancy R Newlin, Praitayini Kanakaraj, Lucas W Remedios, Aravind R Krishnan, Xin Yu, Tianyuan Yao, Panpan Zhang, Kurt G Schilling, Daniel Moyer, Derek B Archer, Susan M Resnick, Bennett A Landman

Purpose: As large analyses merge data across sites, a deeper understanding of variance in statistical assessment across the sources of data becomes critical for valid analyses. Diffusion tensor imaging (DTI) exhibits spatially varying and correlated noise, so care must be taken with distributional assumptions. Here, we characterize the role of physiology, subject compliance, and the interaction of the subject with the scanner in the understanding of DTI variability, as modeled in the spatial variance of derived metrics in homogeneous regions.

Approach: We analyze DTI data from 1035 subjects in the Baltimore Longitudinal Study of Aging, with ages ranging from 22.4 to 103 years old. For each subject, up to 12 longitudinal sessions were conducted. We assess the variance of DTI scalars within regions of interest (ROIs) defined by four segmentation methods and investigate the relationships between the variance and covariates, including baseline age, time from the baseline (referred to as "interval"), motion, sex, and whether it is the first scan or the second scan in the session.

Results: Covariate effects are heterogeneous and bilaterally symmetric across ROIs. Inter-session interval is positively related (p ≪ 0.001) to FA variance in the cuneus and occipital gyrus, but negatively (p ≪ 0.001) in the caudate nucleus. Males show significantly (p ≪ 0.001) higher FA variance in the right putamen, thalamus, body of the corpus callosum, and cingulate gyrus. In 62 out of 176 ROIs defined by the Eve type-1 atlas, an increase in motion is associated (p < 0.05) with a decrease in FA variance. Head motion increases during the rescan of DTI (Δμ = 0.045 mm per volume).

Conclusions: The effects of each covariate on DTI variance and their relationships across ROIs are complex. Ultimately, we encourage researchers to include estimates of variance when sharing data and consider models of heteroscedasticity in analysis. This work provides a foundation for study planning to account for regional variations in metric variance.
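The quantity studied here reduces, at its core, to the spatial variance of a DTI scalar within each labeled region. A minimal NumPy sketch with a toy FA map and hypothetical ROI labels (all values illustrative):

```python
import numpy as np

def roi_variance(scalar_map, label_map, roi_label):
    """Spatial variance of a DTI scalar (e.g., FA) within one labeled ROI."""
    values = scalar_map[label_map == roi_label]
    return float(values.var(ddof=1))  # sample variance within the region

# Toy FA map with two ROIs: a homogeneous one and a heterogeneous one.
fa = np.array([[0.50, 0.50, 0.30],
               [0.50, 0.50, 0.70]])
labels = np.array([[1, 1, 2],
                   [1, 1, 2]])
print(roi_variance(fa, labels, 1))  # 0.0 -- homogeneous ROI
print(roi_variance(fa, labels, 2))  # heterogeneous ROI has higher variance
```

The study then regresses such per-ROI variances on covariates (age, interval, motion, sex), which is where the heteroscedasticity considerations enter.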

{"title":"Characterizing patterns of diffusion tensor imaging variance in aging brains.","authors":"Chenyu Gao, Qi Yang, Michael E Kim, Nazirah Mohd Khairi, Leon Y Cai, Nancy R Newlin, Praitayini Kanakaraj, Lucas W Remedios, Aravind R Krishnan, Xin Yu, Tianyuan Yao, Panpan Zhang, Kurt G Schilling, Daniel Moyer, Derek B Archer, Susan M Resnick, Bennett A Landman","doi":"10.1117/1.JMI.11.4.044007","DOIUrl":"10.1117/1.JMI.11.4.044007","url":null,"abstract":"<p><strong>Purpose: </strong>As large analyses merge data across sites, a deeper understanding of variance in statistical assessment across the sources of data becomes critical for valid analyses. Diffusion tensor imaging (DTI) exhibits spatially varying and correlated noise, so care must be taken with distributional assumptions. Here, we characterize the role of physiology, subject compliance, and the interaction of the subject with the scanner in the understanding of DTI variability, as modeled in the spatial variance of derived metrics in homogeneous regions.</p><p><strong>Approach: </strong>We analyze DTI data from 1035 subjects in the Baltimore Longitudinal Study of Aging, with ages ranging from 22.4 to 103 years old. For each subject, up to 12 longitudinal sessions were conducted. We assess the variance of DTI scalars within regions of interest (ROIs) defined by four segmentation methods and investigate the relationships between the variance and covariates, including baseline age, time from the baseline (referred to as \"interval\"), motion, sex, and whether it is the first scan or the second scan in the session.</p><p><strong>Results: </strong>Covariate effects are heterogeneous and bilaterally symmetric across ROIs. Inter-session interval is positively related ( <math><mrow><mi>p</mi> <mo>≪</mo> <mn>0.001</mn></mrow> </math> ) to FA variance in the cuneus and occipital gyrus, but negatively ( <math><mrow><mi>p</mi> <mo>≪</mo> <mn>0.001</mn></mrow> </math> ) in the caudate nucleus. 
Males show significantly ( <math><mrow><mi>p</mi> <mo>≪</mo> <mn>0.001</mn></mrow> </math> ) higher FA variance in the right putamen, thalamus, body of the corpus callosum, and cingulate gyrus. In 62 out of 176 ROIs defined by the Eve type-1 atlas, an increase in motion is associated ( <math><mrow><mi>p</mi> <mo><</mo> <mn>0.05</mn></mrow> </math> ) with a decrease in FA variance. Head motion increases during the rescan of DTI ( <math><mrow><mi>Δ</mi> <mi>μ</mi> <mo>=</mo> <mn>0.045</mn></mrow> </math> mm per volume).</p><p><strong>Conclusions: </strong>The effects of each covariate on DTI variance and their relationships across ROIs are complex. Ultimately, we encourage researchers to include estimates of variance when sharing data and consider models of heteroscedasticity in analysis. This work provides a foundation for study planning to account for regional variations in metric variance.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"044007"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11344569/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142056920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transformer enhanced autoencoder rendering cleaning of noisy optical coherence tomography images.
IF 2.4 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-06-01 Epub Date: 2024-04-30 DOI: 10.1117/1.JMI.11.3.034008
Hanya Ahmed, Qianni Zhang, Robert Donnan, Akram Alomainy

Purpose: Optical coherence tomography (OCT) is an emerging imaging tool in healthcare with common applications in ophthalmology for detection of retinal diseases, as well as other medical domains. The noise in OCT images presents a great challenge as it hinders the clinician's ability to diagnose in extensive detail.

Approach: In this work, a region-based, deep-learning, denoising framework is proposed for adaptive cleaning of noisy OCT-acquired images. The core of the framework is a hybrid deep-learning model named transformer enhanced autoencoder rendering (TEAR). Attention gates are utilized to ensure focus on denoising the foreground and to remove the background. TEAR is designed to remove the different types of noise artifacts commonly present in OCT images and to enhance the visual quality.

Results: Extensive quantitative evaluations are performed to evaluate the performance of TEAR and compare it against both deep-learning and traditional state-of-the-art denoising algorithms. The proposed method improved the peak signal-to-noise ratio to 27.9 dB, CNR to 6.3 dB, SSIM to 0.9, and equivalent number of looks to 120.8 dB for a dental dataset. For a retinal dataset, the performance metrics in the same sequence are: 24.6, 14.2, 0.64, and 1038.7 dB, respectively.

Conclusions: The results show that the approach verifiably removes speckle noise and achieves superior quality over several well-known denoisers.
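The image-quality metrics quoted above can be reproduced with standard formulas. CNR definitions vary across papers, so the dB form below is one common choice, and the noise level in the example is an assumption:

```python
import numpy as np

def psnr(reference, denoised, max_val=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(denoised, float)) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

def cnr(foreground, background):
    """Contrast-to-noise ratio in dB (one common definition; others exist)."""
    f, b = np.asarray(foreground, float), np.asarray(background, float)
    return float(10.0 * np.log10(abs(f.mean() - b.mean()) / b.std()))

rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)
noisy = clean + rng.normal(0.0, 0.05, clean.shape)
print(psnr(clean, noisy))  # roughly 26 dB for Gaussian noise with sigma = 0.05
```

A denoiser is then scored by how much it raises PSNR/CNR of its output relative to the noisy input against a clean reference, as in the dental and retinal results above.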

{"title":"Transformer enhanced autoencoder rendering cleaning of noisy optical coherence tomography images.","authors":"Hanya Ahmed, Qianni Zhang, Robert Donnan, Akram Alomainy","doi":"10.1117/1.JMI.11.3.034008","DOIUrl":"https://doi.org/10.1117/1.JMI.11.3.034008","url":null,"abstract":"<p><strong>Purpose: </strong>Optical coherence tomography (OCT) is an emerging imaging tool in healthcare with common applications in ophthalmology for detection of retinal diseases, as well as other medical domains. The noise in OCT images presents a great challenge as it hinders the clinician's ability to diagnosis in extensive detail.</p><p><strong>Approach: </strong>In this work, a region-based, deep-learning, denoising framework is proposed for adaptive cleaning of noisy OCT-acquired images. The core of the framework is a hybrid deep-learning model named transformer enhanced autoencoder rendering (TEAR). Attention gates are utilized to ensure focus on denoising the foreground and to remove the background. TEAR is designed to remove the different types of noise artifacts commonly present in OCT images and to enhance the visual quality.</p><p><strong>Results: </strong>Extensive quantitative evaluations are performed to evaluate the performance of TEAR and compare it against both deep-learning and traditional state-of-the-art denoising algorithms. The proposed method improved the peak signal-to-noise ratio to 27.9 dB, CNR to 6.3 dB, SSIM to 0.9, and equivalent number of looks to 120.8 dB for a dental dataset. 
For a retinal dataset, the performance metrics in the same sequence are: 24.6, 14.2, 0.64, and 1038.7 dB, respectively.</p><p><strong>Conclusions: </strong>The results show that the approach verifiably removes speckle noise and achieves superior quality over several well-known denoisers.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 3","pages":"034008"},"PeriodicalIF":2.4,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11058346/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140858602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}