
Machine learning in medical imaging. MLMI (Workshop): Latest Publications

Skull Segmentation from CBCT Images via Voxel-Based Rendering.
Pub Date: 2021-09-01 | Epub Date: 2021-09-21 | DOI: 10.1007/978-3-030-87589-3_63
Qin Liu, Chunfeng Lian, Deqiang Xiao, Lei Ma, Han Deng, Xu Chen, Dinggang Shen, Pew-Thian Yap, James J Xia

Skull segmentation from three-dimensional (3D) cone-beam computed tomography (CBCT) images is critical for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Convolutional neural network (CNN)-based methods currently dominate volumetric image segmentation, but they are constrained by limited GPU memory and large image sizes (e.g., 512 × 512 × 448). Typical ad-hoc strategies, such as down-sampling or patch cropping, degrade segmentation accuracy because they fail to capture either local fine details or global contextual information. Other methods, such as Global-Local Networks (GLNet), focus on improving the neural network itself, aiming to combine local details with global contextual information in a GPU memory-efficient manner. However, all of these methods operate on regular grids, which is computationally inefficient for volumetric image segmentation. In this work, we propose a novel VoxelRend-based network (VR-U-Net) that combines a memory-efficient variant of 3D U-Net with a voxel-based rendering (VoxelRend) module, which refines local details via voxel-based predictions on non-regular grids. Built on relatively coarse feature maps, the VoxelRend module achieves a significant improvement in segmentation accuracy with only a fraction of the GPU memory consumption. We evaluate the proposed VR-U-Net on the skull segmentation task using a high-resolution CBCT dataset collected from local hospitals. Experimental results show that VR-U-Net yields high-quality segmentation results in a memory-efficient manner, highlighting the practical value of our method.

Pages: 615-623 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8675180/pdf/nihms-1762343.pdf
Citations: 2
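The VoxelRend refinement described above is, in spirit, a 3D analogue of PointRend: take a coarse prediction, pick the most uncertain voxel locations, sample finer features at those (possibly off-grid) points, and re-classify only them with a small point-wise head. Below is a minimal PyTorch sketch of that step; the uncertainty heuristic (top-two logit margin), the module shapes, and all names are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def uncertainty(logits):
    # logits: (N, num_classes, K). Negative margin between the top-2
    # class scores; a smaller margin means a more uncertain voxel.
    top2 = logits.topk(2, dim=1).values
    return -(top2[:, 0] - top2[:, 1])                       # (N, K)

def sample_point_feats(feat, points):
    # feat: (N, C, D, H, W); points: (N, K, 3) in [-1, 1], (x, y, z) order.
    grid = points.view(points.size(0), -1, 1, 1, 3)         # (N, K, 1, 1, 3)
    out = F.grid_sample(feat, grid, align_corners=False)    # (N, C, K, 1, 1)
    return out.view(feat.size(0), feat.size(1), -1)         # (N, C, K)

class VoxelRendHead(torch.nn.Module):
    """Tiny point-wise MLP that re-predicts labels at selected 3D points."""
    def __init__(self, in_ch, num_classes=2, hidden=64):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Conv1d(in_ch + num_classes, hidden, kernel_size=1),
            torch.nn.ReLU(inplace=True),
            torch.nn.Conv1d(hidden, num_classes, kernel_size=1),
        )

    def forward(self, fine_feat, coarse_logits, points):
        f = sample_point_feats(fine_feat, points)        # (N, C, K)
        c = sample_point_feats(coarse_logits, points)    # (N, num_classes, K)
        return self.mlp(torch.cat([f, c], dim=1))        # (N, num_classes, K)

# Toy usage: refine only the K most uncertain voxels of a coarse grid.
N, C, D, H, W, K = 1, 16, 32, 32, 32, 1024
fine_feat = torch.randn(N, C, D, H, W)
coarse_logits = torch.randn(N, 2, D, H, W)

flat = coarse_logits.view(N, 2, -1)
idx = uncertainty(flat).topk(K, dim=1).indices           # (N, K) flat indices
z = torch.div(idx, H * W, rounding_mode="floor")
y = torch.div(idx % (H * W), W, rounding_mode="floor")
x = idx % W
points = torch.stack([2 * x / (W - 1) - 1,               # normalize to [-1, 1]
                      2 * y / (H - 1) - 1,
                      2 * z / (D - 1) - 1], dim=-1).float()

head = VoxelRendHead(in_ch=C)
refined_logits = head(fine_feat, coarse_logits, points)  # (N, 2, K)
```

Because only K points are re-predicted instead of the full 512 × 512 × 448 grid, the refinement cost scales with K rather than with the volume size, which is where the memory savings the abstract describes come from.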
Seeking an Optimal Approach for Computer-Aided Pulmonary Embolism Detection
Pub Date: 2021-09-01 | DOI: 10.1007/978-3-030-87589-3_71
N. Islam, S. Gehlot, Zongwei Zhou, M. Gotway, Jianming Liang
Pages: 692-702
Citations: 6
SkullEngine: A Multi-Stage CNN Framework for Collaborative CBCT Image Segmentation and Landmark Detection.
Pub Date: 2021-09-01 | Epub Date: 2021-09-21 | DOI: 10.1007/978-3-030-87589-3_62
Qin Liu, Han Deng, Chunfeng Lian, Xiaoyang Chen, Deqiang Xiao, Lei Ma, Xu Chen, Tianshu Kuang, Jaime Gateno, Pew-Thian Yap, James J Xia

Accurate bone segmentation and landmark detection are two essential preparation tasks in computer-aided surgical planning for patients with craniomaxillofacial (CMF) deformities. Surgeons typically have to complete the two tasks manually, spending ~12 hours for each set of CBCT scans or ~5 hours for CT. To tackle these problems, we propose a multi-stage, coarse-to-fine, CNN-based framework, called SkullEngine, for high-resolution segmentation and large-scale landmark detection through a collaborative, integrated, and scalable JSD model and three refinement models for segmentation and landmark detection. We evaluated our framework on a clinical dataset consisting of 170 CBCT/CT images for the tasks of segmenting two bones (midface and mandible) and detecting 175 clinically common landmarks on bones, teeth, and soft tissues. Experimental results show that SkullEngine significantly improves segmentation quality, especially in regions where the bone is thin. In addition, SkullEngine efficiently and accurately detects all 175 landmarks. Both tasks were completed simultaneously within 3 minutes, with high segmentation quality, regardless of whether the input was CBCT or CT. SkullEngine has now been integrated into a clinical workflow to further evaluate its clinical efficiency.

Pages: 606-614 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8712093/pdf/nihms-1762341.pdf
Citations: 15
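The multi-stage coarse-to-fine control flow described above can be sketched independently of the specific models: run a coarse model on a downsampled volume to localize the anatomy, crop a full-resolution region of interest around the coarse result, and run refinement only inside that crop. The PyTorch sketch below shows this flow only; `coarse_model` and `refine_model` are stand-in callables and the box-with-margin cropping is a simplification, not the SkullEngine architecture.

```python
import torch
import torch.nn.functional as F

def coarse_to_fine_inference(volume, coarse_model, refine_model,
                             scale=0.25, margin=8):
    """volume: (1, 1, D, H, W) CBCT/CT tensor at full resolution.
    coarse_model: downsampled volume -> (1, 2, d, h, w) foreground logits.
    refine_model: full-resolution ROI crop -> (1, 2, ...) refined logits.
    """
    # Stage 1: coarse localization on a heavily downsampled volume.
    small = F.interpolate(volume, scale_factor=scale,
                          mode="trilinear", align_corners=False)
    coarse = coarse_model(small)
    fg = coarse.argmax(dim=1, keepdim=True).float()
    fg = F.interpolate(fg, size=volume.shape[2:], mode="nearest")

    # Stage 2: bounding box of the coarse foreground, padded by a margin.
    nz = fg[0, 0].nonzero()
    if nz.numel() == 0:
        return fg                              # nothing detected
    lo = (nz.min(dim=0).values - margin).clamp(min=0)
    hi = nz.max(dim=0).values + margin         # slicing clamps the upper end
    roi = volume[:, :, lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

    # Stage 3: full-resolution refinement inside the ROI only.
    refined = refine_model(roi).argmax(dim=1, keepdim=True).float()
    out = torch.zeros_like(fg)
    out[:, :, lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = refined
    return out

# Toy usage with stand-in models (replace with trained 3D networks).
vol = torch.randn(1, 1, 64, 64, 64)
toy = lambda x: torch.randn(x.size(0), 2, *x.shape[2:])
seg = coarse_to_fine_inference(vol, toy, toy)  # (1, 1, 64, 64, 64)
```

Restricting the expensive full-resolution pass to the ROI is what lets both tasks finish within minutes on large scans.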
Multi-scale Self-supervised Learning for Multi-site Pediatric Brain MR Image Segmentation with Motion/Gibbs Artifacts
Pub Date: 2021-09-01 | DOI: 10.1007/978-3-030-87589-3_18
Yue Sun, Kun Gao, W. Lin, Gang Li, Sijie Niu, Li Wang
Pages: 171-179
Citations: 2
Improving Joint Learning of Chest X-Ray and Radiology Report by Word Region Alignment
Pub Date: 2021-09-01 | DOI: 10.1007/978-3-030-87589-3_12
Zhanghexuan Ji, Mohammad Abuzar Shaikh, Dana Moukheiber, S. Srihari, Yifan Peng, Mingchen Gao
Pages: 110-119
Citations: 10
Information Bottleneck Attribution for Visual Explanations of Diagnosis and Prognosis.
Pub Date: 2021-09-01 | DOI: 10.1007/978-3-030-87589-3_41
Ugur Demir, Ismail Irmakci, Elif Keles, Ahmet Topcu, Ziyue Xu, Concetto Spampinato, Sachin Jambawalikar, Evrim Turkbey, Baris Turkbey, Ulas Bagci

Visual explanation methods play an important role in patient prognosis when annotated data is limited or unavailable. There have been several attempts to use gradient-based attribution methods to localize pathology in medical scans without using segmentation labels, but this research direction has been impeded by a lack of robustness and reliability: these methods are highly sensitive to the network parameters. In this study, we introduce a robust visual explanation method to address this problem for medical applications. We provide an innovative general-purpose visual explanation algorithm and, as an example application, demonstrate its effectiveness for quantifying lung lesions caused by COVID-19 with high accuracy and robustness, without using dense segmentation labels. This approach overcomes the drawbacks of the commonly used Grad-CAM and its extended versions. The premise behind our proposed strategy is that the information flow through the network is minimized while the classifier prediction is kept similar. Our findings indicate that this bottleneck condition provides a more stable severity estimation than similar attribution methods. The source code will be made publicly available upon publication.

Volume: 12966 | Pages: 396-405 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9921297/pdf/nihms-1871448.pdf
Citations: 3
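The "information flow is minimized while the prediction stays similar" premise above can be made concrete as an optimization over a per-element mask that blends an intermediate feature map with noise: a classification loss keeps the prediction intact, while a capacity penalty pushes the mask toward transmitting nothing. A minimal sketch follows, assuming a frozen network split into a feature extractor and a `head`, Gaussian noise matched to the feature statistics, and a simplified capacity term in place of the full KL formulation; it is not the authors' released code.

```python
import torch
import torch.nn.functional as F

def iba_attribution(feat, head, target, beta=10.0, steps=100, lr=0.1):
    """feat: (1, C, H, W) intermediate features of one image (frozen net).
    head: remainder of the frozen network, mapping features -> class logits.
    target: int index of the class whose evidence we want to localize.
    Returns a (1, H, W) attribution map in [0, 1].
    """
    feat = feat.detach()                       # the network stays frozen
    mu, std = feat.mean(), feat.std() + 1e-6   # noise matched to feature stats
    alpha = torch.zeros_like(feat, requires_grad=True)   # pre-sigmoid mask
    opt = torch.optim.Adam([alpha], lr=lr)

    for _ in range(steps):
        lam = torch.sigmoid(alpha)                       # mask in (0, 1)
        noise = mu + std * torch.randn_like(feat)
        z = lam * feat + (1.0 - lam) * noise             # bottlenecked features
        ce = F.cross_entropy(head(z), torch.tensor([target]))
        capacity = lam.mean()  # simplified stand-in for the KL capacity term
        loss = ce + beta * capacity
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Average over channels for a spatial map of where information flows.
    return torch.sigmoid(alpha).mean(dim=1).detach()
```

Regions where the optimized mask stays open are those the classifier genuinely needs, which is why the resulting maps are less sensitive to network parameters than gradient-based saliency.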
Machine Learning in Medical Imaging: 12th International Workshop, MLMI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings
Pub Date: 2021-01-01 | DOI: 10.1007/978-3-030-87589-3
Citations: 1
Robust Multiple Sclerosis Lesion Inpainting with Edge Prior.
Pub Date: 2020-10-01 | Epub Date: 2020-09-29 | DOI: 10.1007/978-3-030-59861-7_13
Huahong Zhang, Rohit Bakshi, Francesca Bagnato, Ipek Oguz

Inpainting lesions is an important preprocessing task for algorithms that analyze brain MRIs of multiple sclerosis (MS) patients, such as tissue segmentation and cortical surface reconstruction. We propose a new deep learning approach for this task. Unlike existing inpainting approaches, which ignore the lesion areas of the input image, we leverage the edge information around the lesions as a prior to guide the inpainting process. The input of the network thus includes the T1-w image, the lesion mask, and the edge map computed from the T1-w image, and the output is the lesion-free image. The introduction of the edge prior is based on our observation that the edge detection results of MRI scans usually contain the contours of white matter (WM) and grey matter (GM), even though some undesired edges appear near the lesions. Instead of discarding all the information in the neighborhood of the lesions, our approach preserves the local tissue shape (brain/WM/GM) with the guidance of the input edges. The qualitative results show that our pipeline inpaints the lesion areas in a realistic and shape-consistent way. Our quantitative evaluation shows that our approach outperforms existing state-of-the-art inpainting methods in both image-based metrics and FreeSurfer segmentation accuracy. Furthermore, our approach is robust to inaccurate lesion mask inputs. This is important for practical usability, because it allows a generous over-segmentation of lesions instead of requiring precise boundaries, while still yielding accurate results.

Volume: 12436 | Pages: 120-129 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8692168/pdf/nihms-1752653.pdf
Citations: 8
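Assembling the network input described above is mostly a matter of stacking channels: the T1-w image with lesion voxels voided, the binary lesion mask, and an edge map computed from the T1-w image. A small NumPy/SciPy sketch follows; the Sobel gradient-magnitude edge detector and the threshold are illustrative assumptions rather than the paper's exact preprocessing.

```python
import numpy as np
from scipy import ndimage

def build_inpainting_input(t1, lesion_mask, edge_thresh=0.1):
    """t1: (D, H, W) float array, intensities scaled to [0, 1].
    lesion_mask: (D, H, W) binary array, 1 inside lesions.
    Returns a (3, D, H, W) array: [voided T1, lesion mask, edge map].
    """
    # Gradient magnitude via Sobel filters along each axis; edges near
    # lesions are kept on purpose, since they carry the tissue contours.
    grad = np.sqrt(sum(ndimage.sobel(t1, axis=a) ** 2 for a in range(3)))
    grad = grad / (grad.max() + 1e-8)
    edges = (grad > edge_thresh).astype(np.float32)

    voided = t1 * (1.0 - lesion_mask)   # zero out lesion voxels
    return np.stack([voided, lesion_mask.astype(np.float32), edges])

# Toy usage with a synthetic volume.
t1 = np.random.rand(16, 64, 64).astype(np.float32)
mask = np.zeros_like(t1)
mask[8, 20:30, 20:30] = 1.0
x = build_inpainting_input(t1, mask)    # (3, 16, 64, 64)
```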
Temporal-Adaptive Graph Convolutional Network for Automated Identification of Major Depressive Disorder Using Resting-State fMRI.
Pub Date: 2020-10-01 | Epub Date: 2020-09-29 | DOI: 10.1007/978-3-030-59861-7_1
Dongren Yao, Jing Sui, Erkun Yang, Pew-Thian Yap, Dinggang Shen, Mingxia Liu

Extensive studies analyze human brain functional connectivity from a network perspective, in which each network contains complex graph structures. Based on resting-state functional MRI (rs-fMRI) data, graph convolutional networks (GCNs) enable comprehensive mapping of brain functional connectivity (FC) patterns to depict brain activities. However, existing studies usually characterize static properties of the FC patterns, ignoring time-varying dynamic information. In addition, previous GCN methods generally use a fixed group-level (e.g., patients or controls) representation of FC networks and thus cannot capture subject-level FC specificity. To this end, we propose a Temporal-Adaptive GCN (TAGCN) framework that not only takes advantage of both spatial and temporal information, using resting-state FC patterns and time series, but also explicitly characterizes subject-level specificity of FC patterns. Specifically, we first segment each ROI-based time series into multiple overlapping windows, then employ an adaptive GCN to mine topological information. We further model the temporal patterns of each ROI along time to learn periodic brain status changes. Experimental results on 533 major depressive disorder (MDD) and healthy control (HC) subjects demonstrate that the proposed TAGCN outperforms several state-of-the-art methods in MDD vs. HC classification, and can also be used to capture dynamic FC alterations and learn valid graph representations.

Pages: 1-10 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9645786/pdf/nihms-1822329.pdf
Citations: 25
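The windowing step above, which turns each ROI time series into a sequence of overlapping-window functional-connectivity graphs, is easy to sketch in NumPy: slide a window along the time axis and compute one ROI-by-ROI correlation matrix per window, each of which can serve as a graph adjacency for the GCN. The window length, stride, and parcellation size below are illustrative assumptions; the adaptive GCN itself is outside the scope of this sketch.

```python
import numpy as np

def dynamic_fc_windows(ts, win_len=30, stride=10):
    """ts: (T, R) BOLD time series, T time points for R ROIs.
    Returns (num_windows, R, R) correlation matrices, one adjacency
    matrix per overlapping window, usable as GCN graph inputs.
    """
    mats = []
    for start in range(0, ts.shape[0] - win_len + 1, stride):
        window = ts[start:start + win_len]          # (win_len, R)
        corr = np.corrcoef(window, rowvar=False)    # (R, R) FC matrix
        mats.append(np.nan_to_num(corr))            # guard flat time series
    return np.stack(mats)

# Toy usage: 200 time points, 116 ROIs (e.g., an AAL-style parcellation).
ts = np.random.randn(200, 116)
adj = dynamic_fc_windows(ts)    # (18, 116, 116)
```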
Informative Feature-Guided Siamese Network for Early Diagnosis of Autism
Pub Date: 2020-10-01 | DOI: 10.1007/978-3-030-59861-7_68
Kun Gao, Yue Sun, Sijie Niu, Li Wang
Pages: 674-682
Citations: 0