
Latest publications from the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)

Volume R-CNN: Unified Framework for CT Object Detection and Instance Segmentation
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759390
Yun Chen, Junxuan Chen, Bo Xiao, Zhengfang Wu, Ying Chi, Xuansong Xie, Xiansheng Hua
As a fundamental task in computer vision, object detection for 2D images can be trained efficiently end-to-end with methods such as Faster R-CNN and SSD. However, current methods for volumetric data such as computed tomography (CT) usually involve two separate steps for region proposal and classification. In this work, we present a unified framework called Volume R-CNN for object detection in volumetric data. Volume R-CNN is an end-to-end method that performs region proposal, classification and instance segmentation in a single model, which dramatically reduces computational overhead and the number of parameters. These tasks are joined by a key component named RoIAlign3D, which extracts RoI features smoothly and works particularly well for small objects in 3D images. To the best of our knowledge, Volume R-CNN is the first common end-to-end framework for both object detection and instance segmentation in CT. Without bells and whistles, our single model achieves remarkable results on LUNA16. Ablation experiments are conducted to analyze the effectiveness of our method.
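The abstract describes RoIAlign3D only at a high level; as a rough illustration, the bilinear sampling of 2D RoIAlign extends to trilinear sampling over a 3D box, so box coordinates are never quantized. A minimal NumPy sketch (the grid size, box format and function names are our own, not the paper's implementation):

```python
import numpy as np

def trilinear(feat, z, y, x):
    """Trilinearly interpolate a (D, H, W) volume at a fractional point."""
    z0, y0, x0 = int(np.floor(z)), int(np.floor(y)), int(np.floor(x))
    dz, dy, dx = z - z0, y - y0, x - x0
    val = 0.0
    for cz in (0, 1):
        for cy in (0, 1):
            for cx in (0, 1):
                w = ((dz if cz else 1 - dz) * (dy if cy else 1 - dy)
                     * (dx if cx else 1 - dx))
                zz = min(z0 + cz, feat.shape[0] - 1)  # clamp at the border
                yy = min(y0 + cy, feat.shape[1] - 1)
                xx = min(x0 + cx, feat.shape[2] - 1)
                val += w * feat[zz, yy, xx]
    return val

def roi_align_3d(feat, box, out_size=(2, 2, 2)):
    """Sample a fixed-size grid inside `box` = (z0, y0, x0, z1, y1, x1)
    from a (D, H, W) feature volume -- the core idea of RoIAlign3D."""
    d0, h0, w0, d1, h1, w1 = box
    D, H, W = out_size
    out = np.empty(out_size)
    # bin centers of a regular sampling grid inside the (unquantized) box
    zs = d0 + (np.arange(D) + 0.5) * (d1 - d0) / D
    ys = h0 + (np.arange(H) + 0.5) * (h1 - h0) / H
    xs = w0 + (np.arange(W) + 0.5) * (w1 - w0) / W
    for i, z in enumerate(zs):
        for j, y in enumerate(ys):
            for k, x in enumerate(xs):
                out[i, j, k] = trilinear(feat, z, y, x)
    return out
```

Because the sampling points are continuous, small RoIs keep sub-voxel precision, which is why this style of pooling helps for small 3D objects.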
Citations: 5
Feature Aggregation in Perceptual Loss for Ultra Low-Dose (ULD) CT Denoising
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759323
M. Green, E. Marom, E. Konen, N. Kiryati, Arnaldo Mayer
Lung cancer CT screening programs are continuously reducing patient exposure to radiation at the expense of image quality. State-of-the-art denoising algorithms are instrumental in preserving the diagnostic value of these images. In this work, a novel neural denoising scheme is proposed for ULD chest CT. The proposed method aggregates multi-scale features that provide rich information for the computation of a perceptual loss. The loss is further tailored to chest CT data by building the feature-extraction network with denoising auto-encoders trained on real CT images, instead of using an existing network trained on natural images. The proposed method was validated on co-registered pairs of real ULD and normal-dose scans and compared favorably, both qualitatively and quantitatively, with published state-of-the-art denoising networks.
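Aggregating multi-scale features into a perceptual loss can be illustrated with a toy feature extractor; below, 2x2 average pooling stands in for the paper's denoising-auto-encoder features (a sketch of the loss structure, not the published network):

```python
import numpy as np

def avg_pool2(img):
    """2x2 average pooling (stride 2) -- stand-in for one encoder stage."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    x = img[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def multi_scale_features(img, n_scales=3):
    """Feature pyramid: the image plus progressively pooled versions."""
    feats, cur = [], img.astype(float)
    for _ in range(n_scales):
        feats.append(cur)
        cur = avg_pool2(cur)
    return feats

def aggregated_perceptual_loss(pred, target, n_scales=3):
    """Sum of mean-squared feature distances across all scales."""
    loss = 0.0
    for f_p, f_t in zip(multi_scale_features(pred, n_scales),
                        multi_scale_features(target, n_scales)):
        loss += np.mean((f_p - f_t) ** 2)
    return loss
```

Swapping the pooling pyramid for features taken from a trained auto-encoder gives the domain-specific perceptual loss the abstract describes.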
Citations: 2
Learning to Segment the Lung Volume from CT Scans Based on Semi-Automatic Ground-Truth
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759309
Patrick Sousa, A. Galdran, P. Costa, A. Campilho
Lung volume segmentation is a key step in the design of Computer-Aided Diagnosis systems for automated lung pathology analysis. However, isolating the lung from CT volumes can be a challenging process due to considerable deformations and the potential presence of pathologies. Convolutional Neural Networks (CNN) are effective tools for modeling the spatial relationship between lung voxels. Unfortunately, they typically require large quantities of annotated data, and manually delineating the lung from volumetric CT scans can be a cumbersome process. We propose to train a 3D CNN to solve this task based on semi-automatically generated annotations. For this, we introduce an extension of the well-known V-Net architecture that can handle higher-dimensional input data. Even though the training labels are noisy and contain errors, our experiments show that it is possible to learn an accurate lung segmentation from them. Numerical comparisons on an external test set containing lung segmentations provided by a medical expert demonstrate that the proposed model generalizes well to new data, reaching an average Dice coefficient of 98.7%. The proposed approach results in a superior performance with respect to the standard V-Net model, particularly on the lung boundary.
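The reported evaluation metric, the Dice coefficient, is straightforward to compute for binary masks:

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    a, b = np.asarray(a).astype(bool), np.asarray(b).astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)  # eps guards empty masks
```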
Citations: 3
Exploring Microstructure Asymmetries in the Infant Brain Cortex: A Methodological Framework Combining Structural and Diffusion MRI
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759421
C. Rolland, J. Lebenberg, F. Leroy, E. Moulton, P. Adibpour, D. Rivière, C. Poupon, L. Hertz-Pannier, J. F. Mangin, G. Dehaene-Lambertz, J. Dubois
The development of the human brain is a complex process that starts during early pregnancy and extends until the end of adolescence. In parallel to morphological changes in brain size and gyrification, several microstructural changes occur in the cortex, such as the development of dendritic arborization, synaptogenesis and pruning, and fiber myelination. Magnetic Resonance Imaging (MRI) can provide indirect markers of these mechanisms through the mapping of quantitative parameters. Here, we used a dedicated methodological framework to perform reliable voxel-wise analyses over the infant cortex. The examination of hemispheric asymmetries in microstructure required careful alignment of morphological asymmetries through registration of native and flipped brains using a 2-step matching strategy of sulci (DISCO approach) and cortical ribbon (DARTEL approach). We tested the potential of this approach in 1-to-5-month-old infants, with a focus on cortical longitudinal diffusivity from Diffusion Tensor Imaging (DTI). This enabled us to unravel different microstructural evolution patterns of specific sensorimotor and language regions in the left and right hemispheres.
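Once the flipped brain has been registered back to the native one, a voxel-wise asymmetry map reduces to a normalized left-right difference. A minimal sketch (the index formula and axis convention are illustrative; the paper's pipeline performs the sulci- and ribbon-based registration first):

```python
import numpy as np

def asymmetry_index(param_map, axis=2, eps=1e-8):
    """Voxel-wise asymmetry index 2*(native - flipped) / (native + flipped),
    assuming the flipped volume is already registered to the native one."""
    flipped = np.flip(param_map, axis=axis)
    return 2.0 * (param_map - flipped) / (param_map + flipped + eps)
```

Positive values mark voxels where the parameter (e.g. longitudinal diffusivity) is larger than in the mirrored hemisphere, zero marks perfect symmetry.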
Citations: 3
Detecting Prostate Cancer Using A CNN-Based System Without Segmentation
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759102
Islam Reda, M. Ghazal, A. Shalaby, Mohammed M Elmogy, A. Aboulfotouh, M. El-Ghar, Adel Said Elmaghraby, R. Keynton, A. El-Baz
A computer-aided diagnosis (CAD) system for early detection of prostate cancer from diffusion-weighted magnetic resonance imaging (DWI) is proposed in this paper. The proposed system starts by defining a region of interest (ROI) that includes the prostate across the different slices of the input DWI volume. Then, the apparent diffusion coefficient (ADC) of the defined ROI is calculated, normalized and refined. Finally, the classification of the prostate into either benign or malignant is achieved using a two-stage classification system. In the first stage, seven convolutional neural networks (CNNs) are used to determine initial classification probabilities for each case. Then, an SVM with a Gaussian kernel is fed these probabilities to determine the ultimate diagnosis. The proposed system is new in the sense that it can detect prostate cancer with minimal prior processing (e.g., a rough definition of the prostate region). The developed system is evaluated using DWI datasets collected at seven different b-values from 40 patients (20 benign and 20 malignant). These DWI datasets were acquired using two different scanners with different magnetic field strengths (1.5 Tesla and 3 Tesla). The resulting area under the curve (AUC) after the second classification stage is 0.99, which shows that our system performs well without segmentation, on par with up-to-date systems.
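The second-stage fusion — feeding the seven per-CNN probabilities into a Gaussian-kernel classifier — can be sketched as follows. For a self-contained example, a simple RBF kernel-mean classifier stands in for the paper's SVM, and all data are synthetic:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets a and b."""
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-gamma * np.sum(d * d, axis=-1))

class KernelMeanClassifier:
    """RBF-kernel stand-in for the paper's Gaussian-kernel SVM stage:
    scores a case by its mean kernel similarity to each training class."""
    def fit(self, X, y):
        self.X0, self.X1 = X[y == 0], X[y == 1]
        return self
    def predict(self, X):
        s0 = rbf(X, self.X0).mean(axis=1)   # similarity to benign cases
        s1 = rbf(X, self.X1).mean(axis=1)   # similarity to malignant cases
        return (s1 > s0).astype(int)

# each case is a 7-D vector of malignancy probabilities from the seven CNNs
rng = np.random.default_rng(0)
benign = rng.uniform(0.0, 0.4, size=(20, 7))     # low malignancy probs
malignant = rng.uniform(0.6, 1.0, size=(20, 7))  # high malignancy probs
X = np.vstack([benign, malignant])
y = np.array([0] * 20 + [1] * 20)
clf = KernelMeanClassifier().fit(X, y)
pred = clf.predict(np.array([[0.1] * 7, [0.9] * 7]))
```

In practice the stand-in would be replaced by an actual RBF-kernel SVM fitted on the stacked CNN probabilities.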
Citations: 3
Comparison of Different Augmentation Techniques for Improved Generalization Performance for Gleason Grading
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759264
I. Arvidsson, N. Overgaard, K. Åström, A. Heyden
It is well known that deep learning algorithms used for digital pathology tend to overfit to the site of the training data. Since an algorithm that does not generalize is not very useful, in this work we study how different data augmentation techniques can reduce this problem, but also how data from different sites can be normalized to each other. For both of these approaches we have used cycle generative adversarial networks (GANs), either to generate more examples to train on or to transform images from one site to another. Furthermore, we have investigated to what extent standard augmentation techniques improve the generalization performance. We performed experiments on four datasets of prostate biopsy slides, stained with H&E and annotated in detail with Gleason grades. We obtained results similar to previous studies, with accuracies of 77% for Gleason grading on images from the same site as the training data and 59% on images from other sites. However, we also found that traditional augmentation techniques gave better performance than cycle GANs, whether used to augment the training data or to normalize the test data.
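The traditional augmentations this comparison favors are typically simple geometric and intensity transforms applied on the fly to each training patch; a sketch of such a pipeline (the exact transforms and parameter ranges here are illustrative, not the paper's configuration):

```python
import numpy as np

def augment(patch, rng):
    """Standard histopathology-style augmentation of a 2D patch in [0, 1]:
    random flips, a random 90-degree rotation and mild intensity jitter."""
    if rng.random() < 0.5:
        patch = np.flip(patch, axis=0)          # vertical flip
    if rng.random() < 0.5:
        patch = np.flip(patch, axis=1)          # horizontal flip
    patch = np.rot90(patch, k=int(rng.integers(0, 4)))  # 0/90/180/270 deg
    patch = np.clip(patch * rng.uniform(0.9, 1.1), 0.0, 1.0)  # brightness
    return patch
```

Such label-preserving transforms expand the effective training set without the training cost and failure modes of a cycle GAN.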
Citations: 5
Automatic Detection of the Nasal Cavities and Paranasal Sinuses Using Deep Neural Networks
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759481
C. O. Laura, Patrick Hofmann, K. Drechsler, S. Wesarg
The nasal cavity and paranasal sinuses present large interpatient variability. Additional circumstances, for example concha bullosa or nasal septum deviations, complicate their segmentation. As in other areas of the body, a prior multi-structure detection can facilitate the segmentation task. In this paper, an approach is proposed to individually detect all sinuses and the nasal cavity. For a better delimitation of their borders, the use of an irregular polyhedron is proposed. For accurate prediction, the Darknet-19 deep neural network is used, which, combined with the You Only Look Once method, has shown very promising results in other fields of computer vision. 57 CT scans were available, of which 85% were used for training and the remaining 15% for validation.
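YOLO-style detectors such as the one described above are evaluated by matching predicted boxes to ground truth via intersection-over-union (IoU); for axis-aligned 2D boxes this is a few lines:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x0, y0, x1, y1),
    the standard overlap criterion for matching YOLO-style detections."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    iw = min(ax1, bx1) - max(ax0, bx0)   # intersection width
    ih = min(ay1, by1) - max(ay0, by0)   # intersection height
    if iw <= 0 or ih <= 0:
        return 0.0                        # boxes do not overlap
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union
```

A detection is usually counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.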
Citations: 8
High Contrast T1-Weighted MRI with Fluid and White Matter Suppression Using MP2RAGE
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759494
J. Beaumont, H. Saint-Jalmes, O. Acosta, T. Kober, M. Tanner, J. Ferré, O. Salvado, J. Fripp, G. Gambarota
A novel magnetic resonance imaging (MRI) sequence called fluid and white matter suppression (FLAWS) was recently proposed for brain imaging at 3T. This sequence provides two co-registered 3D-MRI datasets of T1-weighted images. The voxel-wise division of these two datasets yields contrast-enhanced images that have been used in preoperative Deep Brain Stimulation (DBS) planning. In the current study, we propose a new way of combining the two 3D-MRI FLAWS datasets to increase the contrast-to-noise ratio of the resulting images. Furthermore, since many centers performing DBS are equipped with 1.5T MRI systems, we also optimized the FLAWS sequence parameters for data acquisition at a field strength of 1.5T.
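The voxel-wise division mentioned above amounts to a guarded element-wise ratio of the two co-registered volumes (a minimal sketch; the combination the paper actually proposes is more refined than a plain ratio):

```python
import numpy as np

def flaws_div(inv1, inv2, eps=1e-6):
    """Voxel-wise division of the two co-registered FLAWS inversion volumes;
    `eps` guards against division by zero in background voxels."""
    return inv1 / (np.abs(inv2) + eps)
```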
Citations: 6
Robust T2 Relaxometry With Hamiltonian MCMC for Myelin Water Fraction Estimation
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759446
Thomas Yu, M. Pizzolato, Erick Jorge Canales-Rodríguez, J. Thiran
We present a voxel-wise Bayesian multi-compartment T2 relaxometry fitting method based on Hamiltonian Markov Chain Monte Carlo (HMCMC) sampling. The T2 spectrum is modeled as a mixture of truncated Gaussian components, whose parameters are estimated in a completely data-driven and voxel-based fashion, i.e. without fixing any parameters or imposing spatial regularization. We estimate each parameter as the expectation of the corresponding marginal distribution drawn from the joint posterior obtained with Hamiltonian sampling. We validate our scheme on synthetic and ex vivo data for which histology is available. We show that the proposed method enables a more robust parameter estimation than a state-of-the-art point estimate based on differential evolution. Moreover, the proposed HMCMC-based myelin water fraction calculation reveals high spatial correlation with its histological counterpart.
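Estimating each parameter as the expectation of its posterior marginal can be illustrated on a two-compartment T2 decay. Below, a random-walk Metropolis sampler stands in for the Hamiltonian sampler, and the echo times, T2 values and noise level are illustrative only:

```python
import numpy as np

def signal(te, w, t2):
    """Multi-compartment T2 decay: S(TE) = sum_j w_j * exp(-TE / T2_j)."""
    return np.exp(-te[:, None] / t2[None, :]) @ w

def sample_weights(te, y, t2, n_iter=6000, sigma=0.05, step=0.02, seed=0):
    """Posterior mean of compartment weights via random-walk Metropolis,
    a simple stand-in for the paper's Hamiltonian sampler."""
    rng = np.random.default_rng(seed)

    def log_post(w):
        if np.any(w < 0):                 # flat prior restricted to w >= 0
            return -np.inf
        r = y - signal(te, w, t2)
        return -0.5 * np.dot(r, r) / sigma ** 2

    w = np.full(len(t2), 0.5)
    lp, samples = log_post(w), []
    for i in range(n_iter):
        w_new = w + step * rng.standard_normal(len(t2))
        lp_new = log_post(w_new)
        if np.log(rng.random()) < lp_new - lp:   # Metropolis acceptance
            w, lp = w_new, lp_new
        if i >= n_iter // 2:                      # discard burn-in
            samples.append(w)
    return np.mean(samples, axis=0)  # expectation of the posterior marginals

# synthetic voxel: short-T2 (myelin water) + longer-T2 compartment
te = np.linspace(10.0, 320.0, 32)          # echo times in ms (illustrative)
t2 = np.array([20.0, 80.0])                # compartment T2s in ms
y = signal(te, np.array([0.3, 0.7]), t2)   # noiseless test signal
w_hat = sample_weights(te, y, t2)
mwf = w_hat[0] / w_hat.sum()               # myelin water fraction estimate
```

Hamiltonian proposals replace the random walk in the actual method, giving far better mixing in higher-dimensional spectra.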
Citations: 4
A Skeleton and Deformation Based Model for Neonatal Pial Surface Reconstruction in Preterm Newborns
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759183
Mengting Liu, C. Lepage, Seun Jeon, T. Flynn, Shiyu Yuan, Justin Kim, A. Toga, A. Barkovich, Duan Xu, Alan C. Evans, Hosung Kim
Though quantification of cortical thickness characterizes a main aspect of morphology in developing brains, it is challenging in the analysis of neonatal brain MRI due to inaccurate pial surface extraction. In this study, we propose a pial surface reconstruction method to address the relatively large partial volume (PV) within the sulcal basin. The new approach leverages the benefits of a new skeletonization method and a deformation model with a new gradient feature. The proposed skeletonization method combines the voxels representing the skeleton of the cerebrospinal fluid partial volume (CSF-PV) with the voxels of the medial plane of the gray matter (GM) volume of the sulcus, where no CSF-PV is estimated due to the squashed sulcal bank and the limited resolution. Subsequently, the outer cortical boundary is identified by first deforming the initial surface to the skeleton, then refining it using the gradient model characterizing the subtle edges representing the “ground truth” of the GM/CSF boundary. Our landmark-based evaluation showed that the initial boundary identified by the skeletonization was already close to the “ground truth” of the GM/CSF boundary (0.4 mm distant). Furthermore, this was significantly improved by the reconstruction of the final pial surface (<0.1 mm; p < 0.0001). The mean cortical thickness measured through our pipeline positively correlated with postmenstrual age (PMA) at scan (p < 0.0001). The range of the measurement was biologically reasonable (1.4 mm at 28 weeks of PMA to 2.2 mm at term-equivalent age vs. young adults: 2.5–3.5 mm) and was quite close to past reports (2.1 mm at term).
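The pipeline's end product — cortical thickness between the inner boundary and the reconstructed pial surface — can be illustrated with a toy nearest-vertex distance computation. This is a hedged sketch assuming point-sampled surfaces, not the paper's skeleton-and-deformation method; the concentric-circle "surfaces" and 2 mm spacing are invented for illustration:

```python
import numpy as np

def cortical_thickness(inner_pts, outer_pts):
    """Per-vertex thickness: distance from each inner-surface vertex to its
    nearest vertex on the outer (pial) surface.  Shapes: (N, D) and (M, D)."""
    diff = inner_pts[:, None, :] - outer_pts[None, :, :]   # (N, M, D)
    dist = np.sqrt((diff ** 2).sum(axis=-1))               # (N, M)
    return dist.min(axis=1)                                # (N,)

# Toy 2-D "cortex": concentric circles 2 mm apart stand in for the two surfaces.
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
ring = np.stack([np.cos(theta), np.sin(theta)], axis=1)    # unit circle, (200, 2)
inner = 10.0 * ring    # inner (GM/WM) boundary, radius 10 mm
outer = 12.0 * ring    # pial surface, radius 12 mm
thickness = cortical_thickness(inner, outer)               # ~2 mm everywhere
```

On real meshes one would use symmetric or surface-normal distances and a spatial index instead of the O(NM) pairwise matrix; the sketch only fixes the idea of thickness as an inner-to-pial distance, which is the quantity the abstract correlates with PMA.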
Citations: 7