
Latest publications: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)

Accurate Segmentation of Dental Panoramic Radiographs with U-NETS
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759563
T. L. Koch, Mathias Perslev, C. Igel, Sami Sebastian Brandt
Fully convolutional neural networks (FCNs) have proven to be powerful tools for medical image segmentation. We apply an FCN based on the U-Net architecture to the challenging task of semantic segmentation of dental panoramic radiographs and discuss general tricks for improving segmentation performance, among them network ensembling, test-time augmentation, exploitation of data symmetry, and bootstrapping of low-quality annotations. The performance of our approach was tested on a highly variable dataset of 1500 dental panoramic radiographs. A single network trained on 1201 images reached a Dice score of 0.934; forming an ensemble increased the score to 0.936.
Citations: 59
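Of the tricks listed in the abstract, test-time augmentation is easy to illustrate: predict on the image and on a flipped copy, undo the flip on the second prediction, and average. The sketch below is a minimal version of that idea; the `predict` stand-in is hypothetical, not the paper's U-Net.

```python
import numpy as np

def flip_tta(predict, image):
    """Average predictions over the original image and its horizontal flip.

    `predict` maps an (H, W) image to an (H, W) map of foreground
    probabilities; the flipped prediction is flipped back before averaging.
    """
    p_orig = predict(image)
    p_flip = predict(image[:, ::-1])[:, ::-1]  # undo the flip on the output
    return 0.5 * (p_orig + p_flip)

# Toy "network": a fixed probability map, just to show the flip bookkeeping.
rng = np.random.default_rng(0)
prob_map = rng.random((4, 4))
predict = lambda img: prob_map * img  # hypothetical stand-in for a U-Net

img = np.ones((4, 4))
avg = flip_tta(predict, img)
```

The same pattern extends to any invertible augmentation (rotations, vertical flips), with each prediction mapped back to the original frame before averaging.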
GPU Acceleration of Wave Based Transmission Tomography
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759453
Hongjian Wang, T. Huynh, H. Gemmeke, T. Hopp, J. Hesser
To accelerate 3D ultrasound computed tomography, we parallelize the most time-consuming part of a paraxial forward model on the GPU, where massive complex multiplications and 2D Fourier transforms must be performed iteratively. We test our GPU implementation on synthesized symmetric breast phantoms of different sizes. In the best case, for a single emitter position, the speedup on a desktop GPU reaches 23 times when data transfer time is included, and 100 times when only GPU parallel computing time is considered. In the worst case, a less powerful laptop GPU is still 2.5 times faster than a six-core desktop CPU, data transfer time included. Regarding the correctness of the values computed on the GPU, the maximum percentage deviation of the L2 norm is only 0.014%.
Citations: 0
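The kernel being accelerated is an FFT–elementwise-multiply–inverse-FFT step. A CPU-side NumPy sketch of one such step is below, written in angular-spectrum form with a paraxial phase kernel; this is an illustration of the operation's structure, not the authors' exact forward model or their GPU code.

```python
import numpy as np

def propagate(field, dz, wavelength, dx):
    """One paraxial propagation step: FFT, multiply by a phase factor, inverse FFT.

    This is the kind of elementwise complex multiplication between 2D Fourier
    transforms that the paper offloads to the GPU; here it runs on the CPU
    with NumPy for illustration (assumed angular-spectrum/paraxial form).
    """
    k = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    fy = np.fft.fftfreq(field.shape[1], d=dx)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fy, indexing="ij")
    phase = np.exp(-1j * (kx**2 + ky**2) * dz / (2.0 * k))  # paraxial kernel
    return np.fft.ifft2(np.fft.fft2(field) * phase)

# A uniform field has only a DC component, so it propagates unchanged.
field = np.ones((64, 64), dtype=complex)
out = propagate(field, dz=1e-3, wavelength=0.5e-3, dx=1e-3)
```

Because the kernel is a pure phase, the step conserves energy, which makes a convenient sanity check when porting it to a GPU.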
Vessel Extraction Using Crossing-Adaptive Minimal Path Model With Anisotropic Enhancement And Curvature Constraint
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759435
Li Liu, Da Chen, L. Cohen, H. Shu, M. Pâques
In this work, we propose a new minimal path model with a dynamic Riemannian metric to overcome the shortcut problem in vessel extraction. The metric consists of a crossing-adaptive anisotropic radius-lifted tensor field and a front-freezing indicator. It reduces the anisotropy of the metric at crossing points and steers the front evolution by freezing points that would cause high geodesic curvature. We validate our model on the DRIVE and IOSTAR datasets, obtaining segmentation accuracies of 0.861 and 0.881, respectively. The proposed method extracts the centreline position and vessel width efficiently and accurately.
Citations: 1
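Stripped of the anisotropic radius-lifted metric, a minimal path reduces to a shortest path on a cost grid, where low cost marks bright vessel pixels. The sketch below uses Dijkstra's algorithm on a 4-connected grid as a drastically simplified, isotropic stand-in for the paper's model.

```python
import heapq

def minimal_path(cost, start, goal):
    """Shortest path on a 4-connected grid of nonnegative costs (Dijkstra).

    An isotropic simplification of the paper's Riemannian minimal path:
    the 'metric' here is just a scalar cost per pixel.
    """
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Low cost along the top row and last column mimics a bright vessel;
# the extracted path follows it instead of cutting through high-cost pixels.
grid = [[1, 1, 1, 1],
        [9, 9, 9, 1],
        [9, 9, 9, 1]]
p = minimal_path(grid, (0, 0), (2, 3))
```

The shortcut problem the paper addresses arises exactly when such a path cuts across a crossing or a bend; the anisotropic tensor and curvature-based freezing are what prevent that in the full model.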
Semi-Supervised Learning For Cardiac Left Ventricle Segmentation Using Conditional Deep Generative Models as Prior
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759292
M. Jafari, H. Girgis, A. Abdi, Zhibin Liao, Mehran Pesteie, R. Rohling, K. Gin, T. Tsang, P. Abolmaesumi
Accurate segmentation of the left ventricle (LV) in apical four-chamber echocardiography cine is a key step in assessing cardiac function. As part of their clinical workflow, cardiologists roughly annotate only two frames of the cardiac cycle, the end-diastolic and end-systolic frames, limiting the annotated data to less than 5% of the frames in the cycle. In this paper, we propose a semi-supervised learning algorithm that leverages the unlabeled data to improve the performance of LV segmentation algorithms. The approach is based on a generative model that learns an inverse mapping from segmentation masks to their corresponding echo frames. This generator is then used as a critic to assess and improve the LV segmentation mask produced by a given segmentation algorithm such as U-Net. The semi-supervised approach enforces a prior on the segmentation model based on the perceptual similarity of the generated frame to the original frame, promoting utilization of the unlabeled samples and, in turn, improving segmentation accuracy.
Citations: 27
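The training objective can be pictured as a supervised segmentation term on labeled frames plus a reconstruction-similarity prior on unlabeled ones. The sketch below uses a soft Dice loss and pixelwise MSE as a crude stand-in for the perceptual similarity in the paper; the exact loss composition and weighting are assumptions.

```python
import numpy as np

def dice_loss(pred, mask, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask."""
    inter = (pred * mask).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + mask.sum() + eps)

def semi_supervised_loss(pred, mask, reconstructed, frame, weight=0.1):
    """Labeled Dice term plus an unlabeled generative-prior term.

    `reconstructed` stands for the frame the generator synthesizes from the
    predicted mask; pixelwise MSE replaces the perceptual similarity used
    in the paper (an illustrative simplification).
    """
    prior = np.mean((reconstructed - frame) ** 2)
    return dice_loss(pred, mask) + weight * prior

mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1.0
frame = np.linspace(0.0, 1.0, 64).reshape(8, 8)

# Perfect prediction and perfect reconstruction drive the loss to zero;
# an empty prediction leaves it near one.
good = semi_supervised_loss(mask, mask, frame, frame)
bad = semi_supervised_loss(np.zeros((8, 8)), mask, frame, frame)
```

In the full method, the prior term back-propagates through the frozen generator, so the segmentation network is rewarded for masks that "explain" the unlabeled frame.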
Ellipse Detection of Optic Disc-and-Cup Boundary in Fundus Images
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759173
Zeya Wang, Nanqing Dong, Sean D. Rosario, Min Xu, P. Xie, E. Xing
Glaucoma is an eye disease that damages the optic nerve and leads to loss of vision. Its diagnosis involves measuring the cup-to-disc ratio in retinal fundus images, which makes detection of the optic disc-and-cup boundary a crucial task for glaucoma screening. Most existing computer-aided diagnosis (CAD) systems focus on segmentation approaches and ignore localization approaches, which require less annotation effort. In this paper, we propose a deep learning-based framework that jointly localizes ellipses for the optic disc (OD) and optic cup (OC) regions. Instead of detecting a bounding box as in most object detection approaches, we directly estimate the parameters of an ellipse that suffices to capture the morphology of each OD and OC region for calculating the cup-to-disc ratio. We use two modules to detect the ellipses for the OD and OC regions, with the OD region serving as attention for the OC region. The proposed framework achieves competitive results against state-of-the-art segmentation methods with less supervision. We empirically evaluate it alongside recent state-of-the-art segmentation models in two scenarios, where the training and test data come from the same and from different domains.
Citations: 21
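Once the two ellipses are estimated, the vertical cup-to-disc ratio follows from simple geometry: the vertical extent of an ellipse with semi-axes a, b rotated by θ is 2·sqrt((a·sin θ)² + (b·cos θ)²). The sketch below computes the ratio from two fitted ellipses; the (a, b, theta) parameterization is illustrative, not the paper's exact output format.

```python
import math

def vertical_diameter(a, b, theta):
    """Vertical extent of an ellipse with semi-axes a, b rotated by theta.

    The projection of the rotated ellipse onto the vertical axis has
    half-length sqrt((a*sin(theta))**2 + (b*cos(theta))**2).
    """
    return 2.0 * math.sqrt((a * math.sin(theta)) ** 2 + (b * math.cos(theta)) ** 2)

def cup_to_disc_ratio(cup, disc):
    """Vertical cup-to-disc ratio from two fitted ellipses (a, b, theta).

    The ellipse centres drop out of the ratio, so only the axes and
    rotation matter.
    """
    return vertical_diameter(*cup) / vertical_diameter(*disc)

# Axis-aligned case: the ratio reduces to the ratio of vertical semi-axes.
cdr = cup_to_disc_ratio(cup=(30.0, 20.0, 0.0), disc=(80.0, 50.0, 0.0))
```

This is why regressing five ellipse parameters per region suffices for screening: the clinical quantity of interest is a function of the fitted shapes, not of a full segmentation mask.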
Cancer Detection in Mass Spectrometry Imaging Data by Recurrent Neural Networks
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759571
F. G. Zanjani, Andreas Panteli, S. Zinger, F. V. D. Sommen, T. Tan, Benjamin Balluff, D. Vos, S. Ellis, R. Heeren, M. Lucas, H. Marquering, Ivo G. H. Jansen, C. D. Savci-Heijink, D. M. Bruin, P. D. With
Mass spectrometry imaging (MSI) reveals the localization of a broad range of compounds, from metabolites to proteins, in biological tissues. This makes MSI an attractive tool in biomedical research for studying diseases. Computer-aided diagnosis (CAD) systems facilitate analysis of the molecular profile of tumor tissues to provide a distinctive fingerprint for finding biomarkers. In this paper, the performance of recurrent neural networks (RNNs) is studied on MSI data to exploit their ability to find irregular patterns and dependencies in sequential data. To design a better CAD model for tumor detection and classification, several configurations of Long Short-Term Memory (LSTM) are examined. The proposed model consists of a 2-layer bidirectional LSTM with 100 LSTM units per layer. It outperforms the state-of-the-art CNN model, achieving 1.87% and 1.45% higher accuracy in mass spectra classification on lung and bladder cancer datasets, with a sixfold faster training time.
Citations: 4
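To make the "bidirectional LSTM over a spectrum" idea concrete, here is a plain NumPy forward pass: one LSTM cell stepped over the sequence in both directions, with the two final hidden states concatenated. The weights are random and the two directions share parameters for brevity; this is a structural sketch, not the authors' trained 2-layer, 100-unit model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(xs, W, U, b, hidden):
    """Run one LSTM layer over sequence xs and return the final hidden state.

    W, U, b stack the input/forget/cell/output gate parameters; np.split
    separates the four gate pre-activations.
    """
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in xs:
        z = W @ x + U @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # gated cell-state update
        h = o * np.tanh(c)           # gated output
    return h

def bidirectional(xs, params, hidden):
    """Concatenate forward and backward passes, as in a bidirectional LSTM.

    Sharing one parameter set across directions is a simplification; real
    bidirectional layers learn separate weights per direction.
    """
    fwd = lstm_forward(xs, *params, hidden)
    bwd = lstm_forward(xs[::-1], *params, hidden)
    return np.concatenate([fwd, bwd])

rng = np.random.default_rng(0)
features, hidden, steps = 8, 5, 12
W = 0.1 * rng.normal(size=(4 * hidden, features))
U = 0.1 * rng.normal(size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
seq = rng.normal(size=(steps, features))  # stand-in for m/z bins of a spectrum
out = bidirectional(seq, (W, U, b), hidden)
```

Treating the m/z axis of a spectrum as the time axis is what lets the recurrence pick up long-range dependencies between peaks.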
3D Convolutional Neural Network Segmentation of White Matter Tract Masks from MR Diffusion Anisotropy Maps
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759575
Kristofer Pomiecko, Carson D. Sestili, K. Fissell, S. Pathak, D. Okonkwo, W. Schneider
This paper presents an application of 3D convolutional neural network (CNN) techniques to compute the white matter region spanned by a fiber tract (the tract mask) from whole-brain MRI diffusion anisotropy maps. The DeepMedic CNN platform was used, allowing training directly on 3D volumes. The dataset consisted of 240 subjects, controls and traumatic brain injury (TBI) patients, scanned with a high-angular-direction, high-b-value multi-shell diffusion protocol. Twelve tract masks per subject were learned. Comparing learned tract masks to manually created masks, a median Dice score of 0.72 was achieved over the 720 test masks. This work demonstrates the ability to learn complex spatial regions in control and patient populations, and contributes a new application of CNNs as a fast pre-selection tool in automated white matter tract segmentation methods.
Citations: 7
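The reported metric, the Dice coefficient, is 2|A∩B| / (|A| + |B|) for two binary volumes. A small NumPy version, applicable unchanged to 2D or 3D masks:

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary volumes: 2|A∩B| / (|A| + |B|).

    This is the overlap metric the paper reports; by convention two empty
    masks score 1.0 (perfect agreement).
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, target).sum() / denom

# Two 32-voxel slabs overlapping in 16 voxels: Dice = 2*16 / 64 = 0.5.
a = np.zeros((4, 4, 4), dtype=bool)
b = np.zeros((4, 4, 4), dtype=bool)
a[:2] = True
b[1:3] = True
score = dice_score(a, b)
```

Dice rewards overlap relative to the combined mask size, which makes it less forgiving than voxelwise accuracy for thin, elongated structures like fiber tracts.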
A Deep Learning Approach To Identify MRNA Localization Patterns
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759235
Rémy Dubois, Arthur Imbert, Aubin Samacoïts, M. Peter, E. Bertrand, Florian Müller, Thomas Walter
The localization of messenger RNA (mRNA) molecules inside cells plays an important role in the local control of gene expression. However, the localization patterns of many mRNAs remain unknown and poorly understood. Single Molecule Fluorescence in Situ Hybridization (smFISH) allows the visualization of individual mRNA molecules in cells, and the method is now scalable and applicable in High Content Screening (HCS) mode. Here, we propose a computational workflow based on deep convolutional neural networks trained on simulated data to identify different localization patterns in large-scale smFISH data.
Citations: 4
Towards Extreme-Resolution Image Registration with Deep Learning
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759291
Abdullah Nazib, C. Fookes, Dimitri Perrin
Image registration plays an important role in comparing images. It is particularly important for analysing medical images such as CT, MRI and PET, to quantify different biological samples, to monitor disease progression, and to fuse different modalities for better diagnosis. The recent emergence of tissue clearing protocols enables imaging at cellular-level resolution, but registration tools developed for other modalities are currently unable to handle images of entire organs at such resolution. The popularity of deep learning in the computer vision community justifies a rigorous investigation of deep-learning-based methods on tissue-cleared images alongside their traditional counterparts. In this paper, we investigate and compare the performance of a deep-learning-based registration method with traditional optimization-based methods on samples from tissue-clearing protocols. The comparison shows that the deep-learning-based method outperforms all traditional registration tools in registration time and achieves promising registration accuracy.
Citations: 2
Real-Time Informative Laryngoscopic Frame Classification with Pre-Trained Convolutional Neural Networks
Pub Date : 2019-04-08 DOI: 10.1109/ISBI.2019.8759511
A. Galdran, P. Costa, A. Campilho
Visual exploration of the larynx is a relevant technique for the early diagnosis of laryngeal disorders. However, reviewing an endoscopy for abnormalities is a time-consuming process, and much research has therefore been dedicated to the automatic analysis of endoscopic video data. In this work we address the task of discriminating between informative laryngoscopic frames and those that carry insufficient diagnostic information; in the latter case, the goal is also to determine the reason for the lack of information. To this end, we train three different state-of-the-art Convolutional Neural Networks, initializing their weights from configurations previously optimized for natural image classification. Our findings show that the simplest of the three architectures is not only the most accurate (outperforming previously proposed techniques) but also the fastest and most efficient, with the lowest inference time and minimal memory requirements, enabling real-time application and deployment on portable devices.
Citations: 7
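The transfer-learning recipe behind this kind of work, in its simplest form, keeps pretrained convolutional features fixed and fits only a light classifier on top. The sketch below trains a logistic-regression head by gradient descent on frozen features; the random features and toy labels are stand-ins, not real CNN activations, and the paper itself fine-tunes full networks rather than only a head.

```python
import numpy as np

def train_linear_head(feats, labels, lr=0.5, steps=500):
    """Fit a logistic-regression head on frozen feature vectors.

    Plain batch gradient descent on the logistic loss; feats is (n, d),
    labels is (n,) with values in {0, 1}.
    """
    w = np.zeros(feats.shape[1])
    b = 0.0
    n = len(labels)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # predicted probabilities
        grad = p - labels                            # logistic-loss gradient
        w -= lr * feats.T @ grad / n
        b -= lr * grad.mean()
    return w, b

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))            # stand-in for CNN features
labels = (feats[:, 0] > 0).astype(float)      # linearly separable toy labels
w, b = train_linear_head(feats, labels)
acc = (((feats @ w + b) > 0) == labels.astype(bool)).mean()
```

Because only a d-dimensional weight vector is learned, this variant needs far less labeled data and compute than fine-tuning, which is the usual motivation for starting from pretrained weights.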