
Latest publications: 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)

Automatic detection of aortic dissection in contrast-enhanced CT
Pub Date : 2017-04-01 DOI: 10.1109/ISBI.2017.7950582
E. Dehghan, Hongzhi Wang, T. Syeda-Mahmood
Aortic dissection is a condition in which a tear in the inner wall of the aorta allows blood to flow between two layers of the aortic wall. Aortic dissection is associated with severe chest pain and can be deadly. Contrast-enhanced CT is the main modality for detection of aortic dissection. Aortic dissection is one of the target abnormalities during evaluation of a triple rule-out CT in emergency cases. In this paper, we present a method for automatic patient-level detection of aortic dissection. Our algorithm starts with an atlas-based segmentation of the aorta, which is used to produce cross-sectional images of the organ. Segmentation refinement, flap detection and shape analysis are employed to detect aortic dissection in these cross-sectional slices. Then, the slice-level results are aggregated to render a patient-level detection result. We tested our algorithm on a data set of 37 contrast-enhanced CT volumes, with 13 cases of aortic dissection. We achieved an accuracy of 83.8%, a sensitivity of 84.6% and a specificity of 83.3%.
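The abstract does not state how the slice-level results are aggregated into a patient-level decision; a minimal sketch of one plausible rule (flag the patient when the fraction of positive cross-sectional slices exceeds a threshold — the threshold value is an assumption, not the authors' design) could look like:

```python
# Illustrative sketch only: aggregate per-slice dissection detections into a
# patient-level decision by thresholding the fraction of positive slices.
# The min_fraction value is a hypothetical parameter, not from the paper.
def patient_level_detection(slice_flags, min_fraction=0.1):
    """slice_flags: iterable of booleans, one per cross-sectional slice."""
    slice_flags = list(slice_flags)
    if not slice_flags:
        return False
    positive = sum(bool(f) for f in slice_flags)
    return positive / len(slice_flags) >= min_fraction
```

Any monotone aggregation (a count threshold, or a run-length requirement on consecutive positive slices) would fit the same interface.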
Pages: 557-560
Citations: 20
Automated vesicle fusion detection using Convolutional Neural Networks
Pub Date : 2017-04-01 DOI: 10.1109/ISBI.2017.7950497
Haohan Li, Zhaozheng Yin, Yingke Xu
Quantitative analysis of vesicle-plasma membrane fusion events in fluorescence microscopy has been proven to be important in the study of vesicle exocytosis. In this paper, we present a framework to automatically detect fusion events. First, an iterative searching algorithm is developed to extract image patch sequences containing potential events. Then, we propose an event image to integrate the critical image patches of a candidate event into a single-image joint representation as the input to Convolutional Neural Networks (CNNs). According to the duration of candidate events, we design three CNN architectures to automatically learn features for fusion event classification. In comparisons on 9 challenging datasets, our proposed method showed very competitive performance and outperformed two state-of-the-art methods.
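The "event image" idea — packing the critical patches of one candidate event into a single joint image a CNN can consume — can be sketched as a simple tiling; the patch size and side-by-side layout here are assumptions, not the paper's exact design:

```python
import numpy as np

# Hypothetical sketch of an "event image": concatenate the equally sized
# critical patches of a candidate fusion event side by side into one image.
def build_event_image(patches):
    """patches: list of equally sized 2-D arrays (H, W) -> array (H, W*N)."""
    patches = [np.asarray(p, dtype=np.float32) for p in patches]
    return np.concatenate(patches, axis=1)
```

The single-image representation lets a standard CNN see the whole temporal context of the event at once, rather than one frame at a time.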
Pages: 183-187
Citations: 2
A simple respiratory motion analysis method for chest tomosynthesis
Pub Date : 2017-04-01 DOI: 10.1109/ISBI.2017.7950569
Hua Zhang, X. Tao, G. Qin, Jianhua Ma, Qianjin Feng, Wufan Chen
Chest tomosynthesis (CTS) is a newly developed imaging technique that provides pseudo-3D anatomical information of the thorax from limited-angle projections, and therefore improves the visibility of anatomy without a substantial increase in radiation dose compared to chest radiography (CXR). However, one of the relatively common problems in CTS is the respiratory motion of the patient during image acquisition, which negatively impacts detectability. In this paper, we propose a sin-quadratic model to analyze the respiratory motion during CTS scanning: a real-time method that generates the respiratory signal by directly extracting the motion of the diaphragm during data acquisition. Based on the extracted respiratory signal, physicians can re-scan the patient immediately, or conduct motion-free CTS image reconstruction for patients who cannot hold their breath perfectly during the scan. The effectiveness of the proposed model was demonstrated with both simulated phantom data and real patient data.
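The paper does not spell out the parameterization of the "sin-quadratic" model; one plausible reading, assumed here for illustration, is a sinusoid at a known breathing frequency plus a quadratic drift, s(t) = a·sin(wt) + b·cos(wt) + c·t² + d·t + e, which is linear in its coefficients and can be fitted by ordinary least squares:

```python
import numpy as np

# Hedged sketch: fit s(t) = a*sin(w*t) + b*cos(w*t) + c*t^2 + d*t + e for a
# known angular frequency w.  The exact model form in the paper may differ.
def fit_sin_quadratic(t, s, w):
    A = np.column_stack([np.sin(w * t), np.cos(w * t), t**2, t, np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, s, rcond=None)
    return coef

def eval_sin_quadratic(t, coef, w):
    a, b, c, d, e = coef
    return a * np.sin(w * t) + b * np.cos(w * t) + c * t**2 + d * t + e
```

With the frequency fixed, the fit is a single linear solve, which is what makes a real-time implementation plausible.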
Pages: 498-501
Citations: 0
Enhancement of 250-MHz quantitative acoustic-microscopy data using a single-image super-resolution method
Pub Date : 2017-04-01 DOI: 10.1109/ISBI.2017.7950645
A. Basarab, D. Rohrbach, Ningning Zhao, J. Tourneret, D. Kouamé, J. Mamou
Scanning acoustic microscopy (SAM) is a well-accepted imaging modality for forming quantitative, two-dimensional maps of acoustic properties of soft tissues at microscopic scales. The quantitative maps formed by our custom SAM system with a 250-MHz single-element transducer have a nominal resolution of 7 µm, which is insufficient for some investigations. To enhance spatial resolution, a SAM system operating at even higher frequencies could be designed, but the associated costs and experimental difficulties are challenging. Therefore, the objective of this study is to evaluate the potential of super-resolution (SR) image processing to enhance the spatial resolution of quantitative maps in SAM. To the best of our knowledge, this is the first attempt at using post-processing, image-enhancement techniques in SAM. Results on realistic simulations and on experimental data acquired from a standard resolution test pattern confirm the improved spatial resolution and the potential value of using SR in SAM.
Pages: 827-830
Citations: 4
The structural disconnectome: A pathology-sensitive extension of the structural connectome
Pub Date : 2017-04-01 DOI: 10.1109/ISBI.2017.7950539
C. Langen, M. Vernooij, L. Cremers, Wyke Huizinga, M. Groot, M. Ikram, T. White, W. Niessen
Brain connectivity is increasingly being studied using connectomes. Typical structural connectome definitions do not directly take white matter pathology into account. Presumably, pathology impedes signal transmission along fibres, leading to a reduction in function. In order to directly study disconnection and localize pathology within the connectome, we present the disconnectome, which only considers fibres that intersect with white matter pathology. To show the potential of the disconnectome in brain studies, we showed in a cohort of 4199 adults with varying loads of white matter lesions (WMLs) that: (1) Disconnection is not a function of streamline density; (2) Hubs are more affected by WMLs than peripheral nodes; (3) Connections between hubs are more severely and frequently affected by WMLs than other connection types; and (4) Connections between region clusters are often more severely affected than those within clusters.
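The core construction — keep only the fibres that intersect white matter pathology, and count them per region pair — can be sketched directly; the voxelized streamlines, region labels, and lesion mask below are toy stand-ins, not the study's data or exact pipeline:

```python
import numpy as np

# Minimal sketch of a disconnectome: count only streamlines that pass through
# the lesion mask, accumulated into a region-by-region connectivity matrix.
def disconnectome(streamlines, labels, lesion, n_regions):
    """streamlines: list of (N, 3) integer voxel coordinates.
    labels: 3-D int array of region ids; lesion: 3-D boolean pathology mask."""
    D = np.zeros((n_regions, n_regions), dtype=int)
    for sl in streamlines:
        vox = np.asarray(sl)
        if not lesion[tuple(vox.T)].any():   # fibre untouched by pathology
            continue
        a = labels[tuple(vox[0])]            # region at one endpoint
        b = labels[tuple(vox[-1])]           # region at the other endpoint
        D[a, b] += 1
        D[b, a] += 1
    return D
```

A conventional structural connectome would drop the lesion test and count every streamline; the disconnectome is the complementary, pathology-weighted view.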
Pages: 366-370
Citations: 3
An easy-to-use image labeling platform for automatic magnetic resonance image quality assessment
Pub Date : 2017-04-01 DOI: 10.1109/ISBI.2017.7950628
Thomas Kustner, Philipp Wolf, Martin Schwartz, Annika Liebgott, F. Schick, S. Gatidis, Bin Yang
In medical imaging, images are usually evaluated by a human observer (HO) depending on the underlying diagnostic question, which can be a time-consuming and cost-intensive process. Model observers (MOs) that mimic the human visual system can help support the HO during this reading process, or can provide feedback to the MR scanner and/or HO about the derived image quality. For this purpose, MOs are trained on HO-derived image labels with respect to a certain diagnostic task. We propose a non-reference image quality assessment system based on a machine-learning approach with a deep neural network and active learning to keep the amount of labeled training data needed small. A labeling platform is developed as a web application, with data security and confidentiality taken into account, to facilitate the HO labeling procedure. The platform is made publicly available.
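The active-learning step implied by the abstract — query the human observer only on the images the current model is least certain about — can be sketched with an uncertainty score; the choice of predictive entropy as the score is an assumption, as the paper's selection criterion is not stated here:

```python
import numpy as np

# Hedged sketch of uncertainty-based active learning: rank unlabeled samples
# by the entropy of the model's predicted class probabilities and return the
# k most uncertain ones for human labeling.
def select_for_labeling(probs, k):
    """probs: (N, C) predicted class probabilities; returns indices of the
    k highest-entropy samples."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)
    return np.argsort(entropy)[::-1][:k]
```

In a loop, the model is retrained after each batch of new HO labels, so the labeling budget is spent where it most reduces model uncertainty.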
Pages: 754-757
Citations: 2
Multi-angle TOF MR brain angiography of the common marmoset
Pub Date : 2017-04-01 DOI: 10.1109/ISBI.2017.7950714
M. Mescam, J. Brossard, N. Vayssiere, C. Fonta
The relation between normal and pathological aging and the cerebrovascular component is still unclear. In this context, the common marmoset, which has the advantage of enabling longitudinal studies over a reasonable timeframe, appears as a good pre-clinical model. However, there is still a lack of quantitative information on the macrovascular structure of the marmoset brain. In this paper, we investigate the potentiality of multi-angle TOF MR angiography using a 3T MRI scanner to perform morphometric analysis of the marmoset brain vasculature. Our image processing pipeline greatly relies on the use of multiscale vesselness enhancement filters to help extract the 3D macrovasculature and perform subsequent morphometric calculations. Although multi-angle acquisition does not improve morphometric analysis significantly as compared to single-angle acquisition, it improves the network extraction by increasing the robustness of image processing algorithms.
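Vesselness enhancement filters of the kind the pipeline relies on score each voxel by the eigenvalues of the local Hessian; a heavily simplified 2-D, single-scale, Frangi-style sketch (no Gaussian smoothing, finite-difference Hessian, and the beta/c parameters are illustrative choices, not the paper's settings) is:

```python
import numpy as np

# Simplified 2-D Frangi-style vesselness (single scale, no smoothing).
# Bright tubular structures give one large negative Hessian eigenvalue and
# one near-zero eigenvalue; the filter scores that pattern.
def vesselness_2d(img, beta=0.5, c=0.1):
    gy, gx = np.gradient(img)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    tr = gyy + gxx
    det = gyy * gxx - gxy * gxy
    disc = np.sqrt(np.maximum(tr**2 / 4.0 - det, 0.0))
    l1 = tr / 2.0 + disc
    l2 = tr / 2.0 - disc
    swap = np.abs(l1) > np.abs(l2)          # order so that |l1| <= |l2|
    l1[swap], l2[swap] = l2[swap], l1[swap]
    rb2 = (l1 / (l2 + 1e-12)) ** 2          # blob-vs-line ratio
    s2 = l1**2 + l2**2                      # second-order structureness
    v = np.exp(-rb2 / (2 * beta**2)) * (1.0 - np.exp(-s2 / (2 * c**2)))
    v[l2 >= 0] = 0.0                        # keep bright-on-dark vessels only
    return v
```

A multiscale version runs this over a range of Gaussian smoothing scales and keeps the per-voxel maximum, which is what the pipeline's 3D filters do in spirit.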
Pages: 1125-1128
Citations: 2
HEp-2 cell classification based on a Deep Autoencoding-Classification convolutional neural network
Pub Date : 2017-04-01 DOI: 10.1109/ISBI.2017.7950689
Jingxin Liu, Bolei Xu, L. Shen, J. Garibaldi, G. Qiu
In this paper, we present a novel deep learning model termed the Deep Autoencoding-Classification Network (DACN) for HEp-2 cell classification. The DACN consists of an autoencoder and a standard classification convolutional neural network (CNN), with the two architectures sharing the same encoding pipeline. The DACN model is jointly optimized for the classification error and the image reconstruction error based on a multi-task learning procedure. We evaluate the proposed model using the publicly available ICPR2012 benchmark dataset. We show that this architecture is particularly effective when the training dataset is small, which is often the case in medical imaging applications. We present experimental results showing that the proposed approach outperforms all known state-of-the-art HEp-2 cell classification methods.
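The joint objective described — a shared encoder feeding both a decoder (reconstruction) and a classifier, trained on a weighted sum of the two errors — can be written down in a few lines; the cross-entropy/MSE pairing and the weighting factor `alpha` are assumptions for illustration, since the abstract does not specify them:

```python
import numpy as np

# Toy numpy illustration of a DACN-style multi-task objective:
# total loss = classification cross-entropy + alpha * reconstruction MSE.
def joint_loss(x, x_recon, class_probs, y_true, alpha=0.5):
    """x, x_recon: (N, D) inputs and reconstructions;
    class_probs: (N, C) predicted probabilities; y_true: (N,) int labels."""
    mse = np.mean((np.asarray(x) - np.asarray(x_recon)) ** 2)
    p = np.clip(class_probs[np.arange(len(y_true)), y_true], 1e-12, 1.0)
    ce = -np.mean(np.log(p))
    return ce + alpha * mse
```

The reconstruction term acts as a regularizer on the shared encoder, which is one intuition for why the architecture helps when labeled data is scarce.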
Pages: 1019-1023
Citations: 15
Neuron reconstruction from fluorescence microscopy images using sequential Monte Carlo estimation
Pub Date : 2017-04-01 DOI: 10.1109/ISBI.2017.7950462
M. Radojević, E. Meijering
Microscopic analysis of neuronal cell morphology is required in many studies in neurobiology. The development of computational methods for this purpose is an ongoing challenge and includes solving some of the fundamental computer vision problems such as detecting and grouping sometimes very noisy line-like image structures. Advancements in the field are impeded by the complexity and immense diversity of neuronal cell shapes across species and brain regions, as well as by the high variability in image quality across labs and experimental setups. Here we present a novel method for fully automatic neuron reconstruction based on sequential Monte Carlo estimation. It uses newly designed models for predicting and updating branch node estimates as well as novel initialization and final tree construction strategies. The proposed method was evaluated on 3D fluorescence microscopy images containing single neurons and neuronal networks for which manual annotations were available as gold-standard references. The results indicate that our method performs favorably compared to state-of-the-art alternative methods.
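A generic sequential-Monte-Carlo (particle filter) iteration of the kind the method builds on has three phases: predict, reweight by image evidence, and resample. The Gaussian motion model and the likelihood below are placeholders, not the paper's prediction and update models:

```python
import numpy as np

# Generic particle-filter step: particles might represent candidate neuron
# centerline positions.  Motion model and likelihood are placeholders.
rng = np.random.default_rng(0)

def smc_step(particles, weights, likelihood, step_sigma=1.0):
    # Predict: perturb each particle with a Gaussian motion model.
    particles = particles + rng.normal(0.0, step_sigma, particles.shape)
    # Update: reweight particles by how well they match the image evidence.
    weights = weights * likelihood(particles)
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

Iterating this step concentrates the particle cloud on high-likelihood structures, which is the estimation backbone a tracer can hang its branch-node prediction and tree-construction logic on.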
Pages: 36-39
Citations: 6
Feature selection and thyroid nodule classification using transfer learning
Pub Date : 2017-04-01 DOI: 10.1109/ISBI.2017.7950707
Tianjiao Liu, Shuaining Xie, Yukang Zhang, Jing Yu, Lijuan Niu, Weidong Sun
Ultrasonography is a valuable diagnostic method for thyroid nodules. Automatically discriminating benign and malignant nodules in ultrasound images can provide aided-diagnosis suggestions, or increase diagnostic accuracy when experts are unavailable. The core problem is how to capture appropriate features for this specific task. Here, we propose a feature extraction method for ultrasound images based on convolutional neural networks (CNNs), aiming to introduce more meaningful and specific features for classification. A CNN model trained on ImageNet data is transferred to the ultrasound image domain to generate semantic deep features under small-sample conditions. Then, we combine those deep features with conventional features such as the Histogram of Oriented Gradients (HOG) and the Scale-Invariant Feature Transform (SIFT) to form a hybrid feature space. Furthermore, to make the general deep features more pertinent to our problem, a feature subset selection process is employed for the hybrid nodule classification, followed by a detailed discussion of the influence of the feature number and the feature composition method. Experimental results on 1037 images show that the accuracy of our proposed method is 0.929, which outperforms other related methods by over 10%.
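The hybrid-feature step — concatenate transferred deep features with hand-crafted descriptors, then keep a subset — can be sketched simply; absolute correlation with the label is used here as a stand-in ranking criterion, as the paper's exact selection procedure is not given in the abstract:

```python
import numpy as np

# Illustrative sketch: build a hybrid feature space from deep and
# hand-crafted (e.g. HOG/SIFT) descriptors, then select the top-k features.
def hybrid_features(deep_feats, hog_feats):
    """Concatenate (N, D1) deep and (N, D2) hand-crafted features column-wise."""
    return np.concatenate([deep_feats, hog_feats], axis=1)

def select_top_k(X, y, k):
    """Rank features by absolute Pearson correlation with the labels and
    return the indices of the k highest-scoring columns."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc**2).sum(axis=0) * (yc**2).sum()) + 1e-12
    score = np.abs((Xc * yc[:, None]).sum(axis=0) / denom)
    return np.argsort(score)[::-1][:k]
```

Any filter- or wrapper-style selector slots into the same place; the point is that selection is applied to the concatenated hybrid space, not to each feature family separately.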
Pages: 1096-1099
Citations: 23
Journal
2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)