
Latest publications: Machine learning in medical imaging. MLMI (Workshop)

SkullEngine: A Multi-Stage CNN Framework for Collaborative CBCT Image Segmentation and Landmark Detection.
Pub Date: 2021-09-01 Epub Date: 2021-09-21 DOI: 10.1007/978-3-030-87589-3_62
Qin Liu, Han Deng, Chunfeng Lian, Xiaoyang Chen, Deqiang Xiao, Lei Ma, Xu Chen, Tianshu Kuang, Jaime Gateno, Pew-Thian Yap, James J Xia

Accurate bone segmentation and landmark detection are two essential preparation tasks in computer-aided surgical planning for patients with craniomaxillofacial (CMF) deformities. Surgeons typically have to complete the two tasks manually, spending ~12 hours for each set of CBCT scans or ~5 hours for CT. To tackle these problems, we propose a multi-stage coarse-to-fine CNN-based framework, called SkullEngine, for high-resolution segmentation and large-scale landmark detection through a collaborative, integrated, and scalable JSD model and three segmentation and landmark detection refinement models. We evaluated our framework on a clinical dataset consisting of 170 CBCT/CT images for the task of segmenting 2 bones (midface and mandible) and detecting 175 clinically common landmarks on bones, teeth, and soft tissues. Experimental results show that SkullEngine significantly improves segmentation quality, especially in regions where the bone is thin. In addition, SkullEngine detects all 175 landmarks efficiently and accurately. Both tasks were completed simultaneously within 3 minutes with high segmentation quality, regardless of whether CBCT or CT was used. SkullEngine has now been integrated into a clinical workflow to further evaluate its clinical efficiency.
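As a rough illustration of the coarse-to-fine idea described in this abstract, the sketch below shows a joint coarse model that predicts bone segmentation and landmark heatmaps from a downsampled volume, with ROI cropping as the handoff to refinement models. All layer choices, shapes, and names are illustrative assumptions, not the authors' JSD model or implementation.

```python
import torch
import torch.nn as nn

class JointCoarseModel(nn.Module):
    """Shared encoder with two heads: bone segmentation and landmark heatmaps."""
    def __init__(self, n_classes=3, n_landmarks=175):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv3d(32, n_classes, 1)    # background / midface / mandible
        self.lmk_head = nn.Conv3d(32, n_landmarks, 1)  # one heatmap per landmark

    def forward(self, x):
        f = self.encoder(x)
        return self.seg_head(f), self.lmk_head(f)

def crop_roi(volume, center, size=32):
    """Crop a cubic patch around a (z, y, x) center for a refinement model."""
    h = size // 2
    z, y, x = (max(h, min(int(c), s - h)) for c, s in zip(center, volume.shape[2:]))
    return volume[..., z - h:z + h, y - h:y + h, x - h:x + h]

coarse_net = JointCoarseModel()
scan = torch.randn(1, 1, 64, 64, 64)   # toy downsampled CBCT volume
seg_logits, lmk_heatmaps = coarse_net(scan)

# Coarse position of landmark 0 = argmax of its heatmap; refinement models would
# then re-predict the landmark and the segmentation from high-resolution crops.
D, H, W = scan.shape[2:]
idx = lmk_heatmaps[0, 0].flatten().argmax().item()
center = (idx // (H * W), (idx // W) % H, idx % W)
patch = crop_roi(scan, center)         # (1, 1, 32, 32, 32) ROI for the fine stage
```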

Citations: 15
Multi-scale Self-supervised Learning for Multi-site Pediatric Brain MR Image Segmentation with Motion/Gibbs Artifacts
Pub Date: 2021-09-01 DOI: 10.1007/978-3-030-87589-3_18
Yue Sun, Kun Gao, W. Lin, Gang Li, Sijie Niu, Li Wang
{"title":"Multi-scale Self-supervised Learning for Multi-site Pediatric Brain MR Image Segmentation with Motion/Gibbs Artifacts","authors":"Yue Sun, Kun Gao, W. Lin, Gang Li, Sijie Niu, Li Wang","doi":"10.1007/978-3-030-87589-3_18","DOIUrl":"https://doi.org/10.1007/978-3-030-87589-3_18","url":null,"abstract":"","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"34 1","pages":"171-179"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78010116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Improving Joint Learning of Chest X-Ray and Radiology Report by Word Region Alignment
Pub Date: 2021-09-01 DOI: 10.1007/978-3-030-87589-3_12
Zhanghexuan Ji, Mohammad Abuzar Shaikh, Dana Moukheiber, S. Srihari, Yifan Peng, Mingchen Gao
{"title":"Improving Joint Learning of Chest X-Ray and Radiology Report by Word Region Alignment","authors":"Zhanghexuan Ji, Mohammad Abuzar Shaikh, Dana Moukheiber, S. Srihari, Yifan Peng, Mingchen Gao","doi":"10.1007/978-3-030-87589-3_12","DOIUrl":"https://doi.org/10.1007/978-3-030-87589-3_12","url":null,"abstract":"","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"1 1","pages":"110-119"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80401668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Information Bottleneck Attribution for Visual Explanations of Diagnosis and Prognosis.
Pub Date: 2021-09-01 DOI: 10.1007/978-3-030-87589-3_41
Ugur Demir, Ismail Irmakci, Elif Keles, Ahmet Topcu, Ziyue Xu, Concetto Spampinato, Sachin Jambawalikar, Evrim Turkbey, Baris Turkbey, Ulas Bagci

Visual explanation methods play an important role in patient prognosis when annotated data is limited or unavailable. There have been several attempts to use gradient-based attribution methods to localize pathology from medical scans without using segmentation labels. This research direction has been impeded by a lack of robustness and reliability: these methods are highly sensitive to the network parameters. In this study, we introduce a robust visual explanation method to address this problem for medical applications. We provide an innovative, general-purpose visual explanation algorithm and, as an example application, demonstrate its effectiveness for quantifying lung lesions caused by COVID-19 with high accuracy and robustness, without using dense segmentation labels. This approach overcomes the drawbacks of the commonly used Grad-CAM and its extended versions. The premise behind our proposed strategy is that information flow is minimized while the classifier prediction is kept similar. Our findings indicate that the bottleneck condition provides a more stable severity estimation than similar attribution methods. The source code will be made publicly available upon publication.
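The bottleneck objective described here can be sketched as follows: noise is injected into an intermediate feature map through a learnable mask, and the mask is optimized to pass as little information as possible while keeping the classifier's output distribution close to the original. This is a minimal approximation assembled from the abstract, not the authors' released code; in particular, `info_loss` below is a crude proxy for the information-flow term.

```python
import torch
import torch.nn.functional as F

def iba_attribution(features, classify, steps=100, beta=10.0):
    """features: (1, C, H, W) activations; classify: maps features to class logits."""
    mu, std = features.mean(), features.std()
    alpha = torch.zeros_like(features, requires_grad=True)   # mask logits
    opt = torch.optim.Adam([alpha], lr=0.1)
    with torch.no_grad():
        p_orig = F.softmax(classify(features), dim=-1)       # prediction to preserve
    for _ in range(steps):
        lam = torch.sigmoid(alpha)                 # per-feature keep probability
        noise = mu + std * torch.randn_like(features)
        z = lam * features + (1 - lam) * noise     # bottlenecked features
        log_p = F.log_softmax(classify(z), dim=-1)
        pred_loss = F.kl_div(log_p, p_orig, reduction="batchmean")
        info_loss = lam.mean()                     # crude proxy for information flow
        loss = pred_loss + beta * info_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(alpha).detach().mean(1)   # (1, H, W) attribution map

# Toy usage: a linear "classifier" over spatially pooled features.
feats = torch.randn(1, 8, 16, 16)
head = torch.nn.Linear(8, 3)
attr = iba_attribution(feats, lambda f: head(f.mean(dim=(2, 3))))
```

Regions where the optimized mask stays high are those the classifier cannot do without, which is what makes the resulting map usable as a severity estimate.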

Citations: 3
Machine Learning in Medical Imaging: 12th International Workshop, MLMI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings
Pub Date: 2021-01-01 DOI: 10.1007/978-3-030-87589-3
{"title":"Machine Learning in Medical Imaging: 12th International Workshop, MLMI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings","authors":"","doi":"10.1007/978-3-030-87589-3","DOIUrl":"https://doi.org/10.1007/978-3-030-87589-3","url":null,"abstract":"","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77936797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Robust Multiple Sclerosis Lesion Inpainting with Edge Prior.
Pub Date: 2020-10-01 Epub Date: 2020-09-29 DOI: 10.1007/978-3-030-59861-7_13
Huahong Zhang, Rohit Bakshi, Francesca Bagnato, Ipek Oguz

Inpainting lesions is an important preprocessing task for algorithms analyzing brain MRIs of multiple sclerosis (MS) patients, such as tissue segmentation and cortical surface reconstruction. We propose a new deep learning approach for this task. Unlike existing inpainting approaches, which ignore the lesion areas of the input image, we leverage the edge information around the lesions as a prior to help the inpainting process. Thus, the input of this network includes the T1-w image, the lesion mask, and the edge map computed from the T1-w image, and the output is the lesion-free image. The introduction of the edge prior is based on our observation that the edge detection results of MRI scans usually contain the contour of white matter (WM) and grey matter (GM), even though some undesired edges appear near the lesions. Instead of losing all the information around the neighborhood of lesions, our approach preserves the local tissue shape (brain/WM/GM) with the guidance of the input edges. The qualitative results show that our pipeline inpaints the lesion areas in a realistic and shape-consistent way. Our quantitative evaluation shows that our approach outperforms existing state-of-the-art inpainting methods in both image-based metrics and FreeSurfer segmentation accuracy. Furthermore, our approach demonstrates robustness to inaccurate lesion mask inputs. This is important for practical usability, because it allows for a generous over-segmentation of lesions instead of requiring precise boundaries, while still yielding accurate results.
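A minimal sketch of the input scheme described here: the T1-w slice, lesion mask, and edge map are stacked as channels, and the network synthesizes tissue only inside the mask. The tiny stand-in network and the Sobel edge detector are illustrative assumptions, not the paper's architecture or edge detector.

```python
import torch
import torch.nn as nn

class EdgePriorInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(                  # stand-in for an encoder-decoder
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, t1, lesion_mask, edge_map):
        x = torch.cat([t1, lesion_mask, edge_map], dim=1)  # 3-channel input
        out = self.net(x)
        # Keep healthy tissue from the input; synthesize only inside the mask.
        return t1 * (1 - lesion_mask) + out * lesion_mask

def sobel_edges(img):
    """Cheap edge map via Sobel gradients (illustrative stand-in)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    gx = nn.functional.conv2d(img, kx, padding=1)
    gy = nn.functional.conv2d(img, kx.transpose(2, 3), padding=1)
    return (gx ** 2 + gy ** 2).sqrt()

t1 = torch.rand(1, 1, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.95).float()   # toy lesion mask
inpainted = EdgePriorInpainter()(t1, mask, sobel_edges(t1))
```

Masking the output residual this way also explains the robustness to over-segmented masks: pixels outside the mask are copied from the input, so a generous mask only enlarges the synthesized region.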

Citations: 8
Temporal-Adaptive Graph Convolutional Network for Automated Identification of Major Depressive Disorder Using Resting-State fMRI.
Pub Date: 2020-10-01 Epub Date: 2020-09-29 DOI: 10.1007/978-3-030-59861-7_1
Dongren Yao, Jing Sui, Erkun Yang, Pew-Thian Yap, Dinggang Shen, Mingxia Liu

Extensive studies focus on analyzing human brain functional connectivity from a network perspective, in which each network contains complex graph structures. Based on resting-state functional MRI (rs-fMRI) data, graph convolutional networks (GCNs) enable comprehensive mapping of brain functional connectivity (FC) patterns to depict brain activities. However, existing studies usually characterize static properties of the FC patterns, ignoring the time-varying dynamic information. In addition, previous GCN methods generally use fixed group-level (e.g., patients or controls) representations of FC networks and thus cannot capture subject-level FC specificity. To this end, we propose a Temporal-Adaptive GCN (TAGCN) framework that not only takes advantage of both spatial and temporal information using resting-state FC patterns and time series but also explicitly characterizes subject-level specificity of FC patterns. Specifically, we first segment each ROI-based time series into multiple overlapping windows, then employ an adaptive GCN to mine topological information. We further model the temporal patterns for each ROI along time to learn periodic brain status changes. Experimental results on 533 major depressive disorder (MDD) and healthy control (HC) subjects demonstrate that the proposed TAGCN outperforms several state-of-the-art methods in MDD vs. HC classification, and can also be used to capture dynamic FC alterations and learn valid graph representations.
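The windowing and graph-convolution steps can be sketched as follows, with the adaptive edge learning and the temporal module simplified away. The window length, stride, and 116-ROI atlas size are illustrative assumptions.

```python
import torch
import torch.nn as nn

def sliding_windows(ts, win=30, stride=15):
    """ts: (n_rois, T) BOLD series -> (n_windows, n_rois, win) overlapping segments."""
    return ts.unfold(dimension=1, size=win, step=stride).permute(1, 0, 2)

class SimpleGCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W), with A_hat row-normalized."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        a = adj + torch.eye(adj.size(-1))            # add self-loops
        a = a / a.sum(-1, keepdim=True)              # row normalization
        return torch.relu(a @ self.lin(h))

ts = torch.randn(116, 200)                 # 116 ROIs, 200 time points (toy rs-fMRI)
windows = sliding_windows(ts)              # (n_windows, 116, 30)
gcn = SimpleGCNLayer(30, 16)
embeddings = []
for w in windows:
    fc = torch.corrcoef(w).abs()           # window-wise functional connectivity graph
    embeddings.append(gcn(w, fc).mean(0))  # pooled ROI embedding for this window
dynamic_repr = torch.stack(embeddings)     # (n_windows, 16)
```

A temporal model over `dynamic_repr`, in the role of the paper's temporal-adaptive module, would then capture how connectivity evolves across windows before classification.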

Citations: 25
Informative Feature-Guided Siamese Network for Early Diagnosis of Autism
Pub Date: 2020-10-01 DOI: 10.1007/978-3-030-59861-7_68
Kun Gao, Yue Sun, Sijie Niu, Li Wang
{"title":"Informative Feature-Guided Siamese Network for Early Diagnosis of Autism","authors":"Kun Gao, Yue Sun, Sijie Niu, Li Wang","doi":"10.1007/978-3-030-59861-7_68","DOIUrl":"https://doi.org/10.1007/978-3-030-59861-7_68","url":null,"abstract":"","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"26 1","pages":"674-682"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74366783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Semi-supervised Transfer Learning for Infant Cerebellum Tissue Segmentation.
Pub Date: 2020-10-01 Epub Date: 2020-09-29 DOI: 10.1007/978-3-030-59861-7_67
Yue Sun, Kun Gao, Sijie Niu, Weili Lin, Gang Li, Li Wang

To characterize early cerebellum development, accurate segmentation of the cerebellum into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) tissues is one of the most pivotal steps. However, due to the weak tissue contrast, extremely folded tiny structures, and severe partial volume effect, infant cerebellum tissue segmentation is especially challenging, and the manual labels needed by learning-based methods are hard to obtain and correct. To the best of our knowledge, there is no existing work on cerebellum segmentation for infants younger than 24 months of age. In this work, we develop a semi-supervised transfer learning framework, guided by a confidence map, for tissue segmentation of cerebellum MR images from 24-month-old down to 6-month-old infants. Note that only 24-month-old subjects have reliable manual labels for training, owing to their high tissue contrast. Through the proposed semi-supervised transfer learning, labels from 24-month-old subjects are gradually propagated to the 18-, 12-, and 6-month-old subjects, whose scans have low tissue contrast. Comparison with state-of-the-art methods demonstrates the superior performance of the proposed method, especially for 6-month-old subjects.
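A minimal pseudo-labeling sketch of the confidence-guided propagation: a teacher trained on 24-month-old scans predicts on a younger subject, and only voxels where the teacher is confident supervise the student. The threshold and loss form are assumptions; the paper's confidence map and training schedule are richer than this.

```python
import torch
import torch.nn.functional as F

def confidence_guided_loss(student_logits, teacher_logits, tau=0.9):
    """Supervise the student only where the teacher is confident."""
    probs = F.softmax(teacher_logits, dim=1)
    conf, pseudo = probs.max(dim=1)                  # per-voxel confidence + label
    mask = (conf > tau).float()                      # keep confident voxels only
    loss = F.cross_entropy(student_logits, pseudo, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1)

# Toy volumes: 4 classes (BG/WM/GM/CSF); the teacher was trained on 24-month scans
# and is applied here to an 18-month scan, the first hop in the 24->18->12->6 chain.
teacher_out = torch.randn(1, 4, 32, 32, 32)
student_out = torch.randn(1, 4, 32, 32, 32, requires_grad=True)
loss = confidence_guided_loss(student_out, teacher_out)
loss.backward()
```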

Citations: 0
Anatomy-Guided Convolutional Neural Network for Motion Correction in Fetal Brain MRI.
Pub Date: 2020-10-01 Epub Date: 2020-09-29 DOI: 10.1007/978-3-030-59861-7_39
Yuchen Pei, Lisheng Wang, Fenqiang Zhao, Tao Zhong, Lufan Liao, Dinggang Shen, Gang Li

Fetal Magnetic Resonance Imaging (MRI) is challenged by fetal movements and maternal breathing. Although fast MRI sequences allow artifact-free acquisition of individual 2D slices, motion commonly occurs between slice acquisitions. Motion correction for each slice is thus very important for the reconstruction of 3D fetal brain MRI, but it is highly operator-dependent and time-consuming. Approaches based on convolutional neural networks (CNNs) have achieved encouraging performance in predicting the 3D motion parameters of arbitrarily oriented 2D slices; they do not, however, capitalize on important brain structural information. To address this problem, we propose a new multi-task learning framework that jointly learns the transformation parameters and tissue segmentation map of each slice, providing brain anatomical information to guide the mapping from 2D slices to 3D volumetric space in a coarse-to-fine manner. In the coarse stage, the first network learns features shared by the regression and segmentation tasks. In the refinement stage, to fully utilize the anatomical information, distance maps constructed from the coarse segmentation are introduced to the second network. Finally, incorporating the signed distance maps to guide the regression and segmentation together improves performance on both tasks. Experimental results indicate that the proposed method outperforms state-of-the-art methods, simultaneously reducing motion prediction error and obtaining satisfactory tissue segmentation results.
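A minimal sketch of the multi-task layout described here: a shared 2D encoder over each slice feeds both a 6-DoF rigid-transform regressor and a tissue-segmentation head, and the refinement network accepts the distance map as an extra input channel. The layer choices, shapes, and 6-parameter pose encoding are assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class SliceToVolumeNet(nn.Module):
    def __init__(self, n_tissues=4, extra_channels=0):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1 + extra_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, n_tissues, 1)      # tissue map per slice
        self.pose_head = nn.Sequential(                  # 3 rotations + 3 translations
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 6),
        )

    def forward(self, x):
        f = self.backbone(x)
        return self.pose_head(f), self.seg_head(f)

coarse = SliceToVolumeNet()
slices = torch.randn(8, 1, 96, 96)          # batch of arbitrarily oriented 2D slices
pose, seg = coarse(slices)                  # (8, 6) transforms, (8, 4, 96, 96) tissues

# Refinement stage: same layout, but the input stacks the slice with a signed
# distance map derived from the coarse segmentation (computation omitted).
refine = SliceToVolumeNet(extra_channels=1)
```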

Citations: 0