Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, held in conjunction with MICCAI 2018, Granada, Spain, S... — Latest Publications
Deep Learning in Medical Image Analysis: Challenges and Applications
Wim E. Crusio, J. Lambris, Gobert N. Lee, H. Fujita
DOI: 10.1007/978-3-030-33128-3 · Published: 2020-01-01
Citations: 60
Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 20, 2018, Proceedings
D. Stoyanov, Z. Taylor, G. Carneiro, T. Syeda-Mahmood, Anne L. Martel, L. Maier-Hein, J. Tavares, A. Bradley, J. Papa, Vasileios Belagiannis, J. Nascimento, Zhi Lu, Sailesh Conjeti, M. Moradi, H. Greenspan, A. Madabhushi
DOI: 10.1007/978-3-030-00889-5 · Published: 2018-09-20
Citations: 64
Contextual Additive Networks to Efficiently Boost 3D Image Segmentations
Zhenlin Xu, Zhengyang Shen, M. Niethammer
DOI: 10.1007/978-3-030-00889-5_11 · Pages: 92-100 · Published: 2018-09-20
Citations: 7
Semi-Automated Extraction of Crohn's Disease MR Imaging Markers using a 3D Residual CNN with Distance Prior.
Yechiel Lamash, Sila Kurugol, Simon K Warfield

We propose a 3D residual convolutional neural network (CNN) algorithm with an integrated distance prior for segmenting the small bowel lumen and wall, enabling extraction of pediatric Crohn's disease (pCD) imaging markers from T1-weighted contrast-enhanced MR images. Our proposed segmentation framework enables, for the first time, quantitative assessment of luminal narrowing and dilation in CD aimed at optimizing surgical decisions, as well as analysis of bowel wall thickness and tissue enhancement for assessment of response to therapy. Given seed points along the bowel lumen, the proposed algorithm automatically extracts 3D image patches centered on these points and a distance map from the interpolated centerline. These 3D patches and the corresponding distance map are jointly used by the proposed residual CNN architecture to segment the lumen and the wall, and to extract imaging markers. Due to the lack of available training data, we also propose a novel and efficient semi-automated segmentation algorithm based on the graph-cuts technique, as well as a software tool for quickly editing the labeled data that was used to train our proposed CNN model. The method, which is based on curved planar reformation of the small bowel, is also useful for visualizing, manually refining, and measuring pCD imaging markers. In preliminary experiments, our CNN obtained Dice coefficients of 75 ± 18%, 81 ± 8% and 97 ± 2% for the lumen, wall and background, respectively.
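The patch-plus-distance-prior input described above can be sketched in a few lines of numpy. This is a hypothetical illustration, not the paper's code: the helper names, toy patch size, and seed-point handling are all assumptions, and the real pipeline interpolates a centerline from the seed points before computing distances.

```python
import numpy as np

def distance_map(shape, centerline_pts):
    """Euclidean distance from every voxel to the nearest centerline point."""
    axes = [np.arange(s) for s in shape]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)       # (D,H,W,3)
    diffs = grid[..., None, :] - np.asarray(centerline_pts, float)    # (D,H,W,P,3)
    return np.linalg.norm(diffs, axis=-1).min(axis=-1)                # (D,H,W)

def make_network_input(patch, centerline_pts):
    """Stack the intensity patch with its distance prior: (2, D, H, W)."""
    return np.stack([patch, distance_map(patch.shape, centerline_pts)])

# Toy example: an 8x8x8 patch with a single centerline point at its center.
x = make_network_input(np.zeros((8, 8, 8)), [(4, 4, 4)])
```

The distance channel is zero on the centerline and grows outward, giving the CNN an explicit spatial prior alongside the raw intensities.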

DOI: 10.1007/978-3-030-00889-5_25 · Volume 11045 · Pages: 218-226 · Published: 2018-09-01 · PMCID: PMC6235454
Citations: 0
Unpaired Deep Cross-Modality Synthesis with Fast Training.
Lei Xiang, Yang Li, Weili Lin, Qian Wang, Dinggang Shen

Cross-modality synthesis converts an input image of one modality into an output image of another modality, and is thus very valuable for both scientific research and clinical applications. Most existing cross-modality synthesis methods require a large dataset of paired data for training, yet it is often non-trivial to acquire perfectly aligned images of different modalities for the same subject. Even tiny misalignments (e.g., due to patient/organ motion) between cross-modality paired images may adversely impact training and corrupt the synthesized images. In this paper, we present a novel method for cross-modality image synthesis trained with unpaired data. Specifically, we adopt generative adversarial networks and conduct fast training in a cyclic way. A new structural dissimilarity loss, which captures detailed anatomy, is introduced to enhance the quality of the synthesized images. We validate our proposed algorithm on three popular image synthesis tasks, including brain MR-to-CT, prostate MR-to-CT, and brain 3T-to-7T. The experimental results demonstrate that our proposed method achieves good synthesis performance using unpaired data only.
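A structural dissimilarity term of the kind mentioned above is commonly defined as (1 − SSIM)/2. The sketch below computes it with global image statistics in numpy; the constants and the global (rather than windowed) statistics are simplifications I am assuming for illustration, not the paper's exact loss.

```python
import numpy as np

def dssim(x, y, c1=1e-4, c2=9e-4):
    """Structural dissimilarity (1 - SSIM) / 2, with global image statistics."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
    return (1.0 - ssim) / 2.0

# Toy image: a smooth ramp. dssim is 0 for identical inputs and grows
# as structure diverges (e.g., against the inverted ramp).
img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
```

Because it depends on local luminance, contrast, and covariance rather than per-pixel differences alone, such a term penalizes anatomical structure mismatch more directly than an L1/L2 loss.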

DOI: 10.1007/978-3-030-00889-5_18 · Volume 11045 · Pages: 155-164 · Published: 2018-09-01
Citations: 18
Contextual Additive Networks to Efficiently Boost 3D Image Segmentations.
Zhenlin Xu, Zhengyang Shen, Marc Niethammer

Semantic segmentation of 3D medical images is an important task in medical image analysis, one that would benefit from more efficient approaches. We propose a 3D segmentation framework of cascaded fully convolutional networks (FCNs) with contextual inputs and additive outputs. Compared to previous contextual cascaded networks, the additive output forces each subsequent model to refine the output of the previous models in the cascade. We use U-Nets of varying complexity as elementary FCNs and demonstrate our method for cartilage segmentation on a large set of 3D magnetic resonance images (MRI) of the knee. We show that a cascade of simple U-Nets may, for certain tasks, be superior to a single deep and complex U-Net with almost two orders of magnitude more parameters. Our framework also allows greater flexibility in trading off performance and efficiency during testing and training.
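The additive-output idea can be illustrated with a toy 1-D cascade. This is a minimal sketch under my own assumptions: the lambda "stages" stand in for U-Nets, and each stage receives the image together with the running prediction (the contextual input) and returns a correction that is added to, not substituted for, the previous output.

```python
import numpy as np

def additive_cascade(image, stages):
    """Run stages in sequence; each emits a residual added to the running
    prediction, so later stages can only refine earlier ones."""
    pred = np.zeros_like(image)
    for stage in stages:
        pred = pred + stage(image, pred)
    return pred

img = np.array([0.0, 0.2, 0.8, 1.0])
coarse = lambda x, p: (x > 0.5).astype(float) - p   # rough thresholded guess
refine = lambda x, p: 0.5 * (x - p)                 # nudge halfway toward x
out = additive_cascade(img, [coarse, refine])
```

The design choice matters for training: because outputs are summed, the loss on the final prediction pushes each stage toward correcting the residual error of its predecessors rather than re-solving the whole problem.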

Volume 11045 · Pages: 92-100 · Published: 2018-09-01 · PMCID: PMC6590074
Citations: 0
UNet++: A Nested U-Net Architecture for Medical Image Segmentation.
Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, Jianming Liang

In this paper, we present UNet++, a new, more powerful architecture for medical image segmentation. Our architecture is essentially a deeply-supervised encoder-decoder network where the encoder and decoder sub-networks are connected through a series of nested, dense skip pathways. The re-designed skip pathways aim to reduce the semantic gap between the feature maps of the encoder and decoder sub-networks. We argue that the optimizer faces an easier learning task when the feature maps from the decoder and encoder networks are semantically similar. We have evaluated UNet++ against the U-Net and wide U-Net architectures across multiple medical image segmentation tasks: nodule segmentation in low-dose CT scans of the chest, nuclei segmentation in microscopy images, liver segmentation in abdominal CT scans, and polyp segmentation in colonoscopy videos. Our experiments demonstrate that UNet++ with deep supervision achieves an average IoU gain of 3.9 and 3.4 points over U-Net and wide U-Net, respectively.
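The nested skip wiring can be made concrete by enumerating the node graph. In the sketch below, which is my own illustration rather than the authors' code, node X[(i, j)] fuses all same-level predecessors X[(i, 0..j-1)] with the upsampled deeper node X[(i+1, j-1)]; actual feature maps are replaced by sets of contributing encoder nodes so the dense topology is visible.

```python
def unetpp_nodes(depth):
    """Build the UNet++ node grid for a given encoder depth.

    X[(i, 0)] are encoder backbone nodes; each nested node X[(i, j)]
    takes every X[(i, k)] for k < j plus the deeper X[(i+1, j-1)].
    Values are the sets of encoder nodes feeding each position,
    standing in for concatenated feature maps.
    """
    X = {(i, 0): {f"enc{i}"} for i in range(depth)}
    for j in range(1, depth):
        for i in range(depth - j):
            inputs = [X[(i, k)] for k in range(j)] + [X[(i + 1, j - 1)]]
            X[(i, j)] = set().union(*inputs)
    return X

nodes = unetpp_nodes(4)
```

For depth 4 this yields ten nodes, and the top-level output node X[(0, 3)] aggregates information from every encoder resolution, which is the structural reason the encoder-decoder semantic gap shrinks.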

DOI: 10.1007/978-3-030-00889-5_1 · Volume 11045 · Pages: 3-11 · Published: 2018-09-01 · PMCID: PMC7329239
Citations: 0
Active Deep Learning with Fisher Information for Patch-wise Semantic Segmentation.
Jamshid Sourati, Ali Gholipour, Jennifer G Dy, Sila Kurugol, Simon K Warfield

Deep learning with convolutional neural networks (CNNs) has achieved unprecedented success in segmentation; however, it requires large amounts of training data, which are expensive to obtain. Active learning (AL) frameworks can yield major improvements in CNN performance through intelligent selection of a minimal set of data to be labeled. This paper proposes a novel diversified AL method based on Fisher information (FI), for the first time for CNNs, where gradient computations from backpropagation are used for efficient computation of FI over the large CNN parameter space. We evaluated the proposed method on the newborn and adolescent brain extraction problem under two scenarios: (1) semi-automatic segmentation of a particular subject from a different age group, or with a pathology not present in the original training data, where, starting from an inaccurate pre-trained model, we iteratively label a small number of voxels queried by AL until the model generates an accurate segmentation for that subject; and (2) using AL to build a universal model generalizable to all images in a given data set. In both scenarios, FI-based AL improved performance after labeling a small percentage (less than 0.05%) of voxels. The results showed that FI-based AL significantly outperformed random sampling, and achieved higher accuracy than entropy-based querying in transfer learning, where the model learns to extract the brains of newborn subjects given an initial model trained on adolescents.
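To make the FI-based querying idea concrete at a toy scale, the sketch below scores unlabeled samples for a single logistic-regression "model" rather than a CNN: the per-sample Fisher information is the expected squared score under the model's own predictive distribution, which for logistic regression has trace p(1 − p)·‖x‖². The function names and the ranking-only selection (no diversification) are my simplifying assumptions.

```python
import numpy as np

def fisher_trace(w, x):
    """Trace of the per-sample Fisher information for logistic regression.

    grad_w log p(y|x) = (y - p) x, and E_y[(y - p)^2] = p(1 - p),
    so the FI trace is p(1 - p) * ||x||^2.
    """
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return p * (1.0 - p) * (x @ x)

def query(w, pool, n):
    """Return indices of the n pool samples with the largest FI trace."""
    scores = np.array([fisher_trace(w, x) for x in pool])
    return list(np.argsort(scores)[::-1][:n])

w = np.array([1.0, 0.0])
pool = [np.array([0.0, 1.0]),    # on the decision boundary: p = 0.5
        np.array([10.0, 0.0])]   # far from it: p ~ 1, little information
```

Samples near the decision boundary carry the most Fisher information and are queried first; the paper's contribution is computing an analogous quantity efficiently over a CNN's full parameter space via backpropagated gradients, with diversified selection.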

DOI: 10.1007/978-3-030-00889-5_10 · Volume 11045 · Pages: 83-91 · Published: 2018-09-01
Citations: 29
Iterative Segmentation from Limited Training Data: Applications to Congenital Heart Disease.
Danielle F Pace, Adrian V Dalca, Tom Brosch, Tal Geva, Andrew J Powell, Jürgen Weese, Mehdi H Moghari, Polina Golland

We propose a new iterative segmentation model which can be accurately learned from a small dataset. A common approach is to train a model to directly segment an image, requiring a large collection of manually annotated images to capture the anatomical variability in a cohort. In contrast, we develop a segmentation model that recursively evolves a segmentation in several steps, and implement it as a recurrent neural network. We learn model parameters by optimizing the intermediate steps of the evolution in addition to the final segmentation. To this end, we train our segmentation propagation model by presenting incomplete and/or inaccurate input segmentations paired with a recommended next step. Our work aims to alleviate challenges in segmenting heart structures from cardiac MRI for patients with congenital heart disease (CHD), which encompasses a range of morphological deformations and topological changes. We demonstrate the advantages of this approach on a dataset of 20 images from CHD patients, learning a model that accurately segments individual heart chambers and great vessels. Compared to direct segmentation, the iterative method yields more accurate segmentation for patients with the most severe CHD malformations.
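The recursive-evolution loop described above can be sketched as follows. This is a toy stand-in under my own assumptions: the lambda update rule replaces the paper's recurrent network, and the trajectory of intermediate segmentations is returned because training supervises those steps as well as the final one.

```python
import numpy as np

def evolve(seg, image, step_fn, steps):
    """Apply one learned update rule recursively, keeping every
    intermediate segmentation so each step can be supervised."""
    trajectory = [seg]
    for _ in range(steps):
        seg = step_fn(seg, image)
        trajectory.append(seg)
    return trajectory

target = np.array([0.0, 1.0, 1.0, 0.0])
halfway = lambda s, img: s + 0.5 * (img - s)   # stand-in for the RNN step
traj = evolve(np.zeros(4), target, halfway, 3)
```

Starting from an empty (or inaccurate) segmentation, each application of the step function moves the estimate closer to the target, which mirrors how the model refines incomplete input segmentations toward a recommended next step.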

DOI: 10.1007/978-3-030-00889-5_38 · Pages: 334-342 · Published: 2018-09-01
Citations: 20