
Latest publications from the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)

Managing Class Imbalance in Multi-Organ CT Segmentation in Head and Neck Cancer Patients
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433991
Samuel Cros, Eugene Vorontsov, S. Kadoury
Radiotherapy planning for head and neck cancer patients requires an accurate delineation of several organs at risk (OAR) from planning CT images in order to determine a dose plan that reduces toxicity and spares normal tissue. However, training a single deep neural network for multiple organs is highly sensitive to class imbalance and to variability in size between the structures within the head and neck region. In this paper, we propose a single-class segmentation model for each OAR in order to handle class imbalance during training across output classes (one class per structure), where there exists a severe disparity among the 12 OAR. Based on a U-net architecture, we present a transfer learning approach between similar OAR to leverage common learned features, as well as a simple weight averaging strategy that initializes a model as the average of multiple models, each trained on a separate organ. Experiments performed on an internal dataset of 200 H&N cancer patients treated with external beam radiotherapy show that the proposed model provides a significant improvement over the baseline multi-organ segmentation model, which attempts to train several OAR simultaneously. By using both transfer learning across OAR and the weight averaging strategy, the proposed model yields an overall Dice score of $0.75 \pm 0.12$, indicating that reasonable segmentation performance can be achieved by leveraging additional data from surrounding structures, limiting the uncertainty in ground-truth annotations.
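The weight averaging strategy described in the abstract can be sketched in a few lines, assuming the per-organ models share an identical architecture. The dictionary-of-arrays representation and all names here are illustrative, not the paper's implementation.

```python
import numpy as np

def average_weights(models):
    """Average parameter sets from several models with identical
    architectures, e.g. one model trained per organ at risk (OAR)."""
    keys = models[0].keys()
    return {k: np.mean([m[k] for m in models], axis=0) for k in keys}

# Toy example: two "models", each a dict of layer-name -> weight array.
model_a = {"conv1": np.array([1.0, 2.0]), "conv2": np.array([0.0])}
model_b = {"conv1": np.array([3.0, 4.0]), "conv2": np.array([2.0])}

init = average_weights([model_a, model_b])
print(init["conv1"])  # [2. 3.]
```

The averaged dictionary would then serve as the initialization for fine-tuning on the next organ.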
Citations: 3
Time Of Arrival Delineation In Echo Traces For Reflection Ultrasound Tomography
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433846
B. R. Chintada, R. Rau, O. Goksel
Ultrasound Computed Tomography (USCT) is an imaging method for mapping acoustic properties in soft tissues, e.g., for the diagnosis of breast cancer. A group of USCT methods rely on a passive reflector behind the imaged tissue; they function by delineating this reflector in echo traces, e.g., to infer time-of-flight measurements for reconstructing local speed-of-sound maps. In this work, we study various echo features and delineation methods to robustly identify reflector profiles in echoes. We compared and evaluated the methods on a multi-static data set of a realistic breast phantom. Based on our results, RANSAC-based outlier removal followed by active-contour delineation using a new “edge” feature we propose, which detects the first arrival time of the echo, performs robustly even in complex media; in particular, it is 2.1 times better than alternative approaches at locations where diffraction effects are prominent.
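A minimal sketch of the RANSAC outlier-rejection step on synthetic per-channel arrival times (a flat reflector appears roughly as a line in channel/time coordinates). The data, line model, and parameters here are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def ransac_line(x, y, n_iter=200, tol=1.0, seed=0):
    """Fit y ~ a*x + b robustly: sample point pairs, keep the model
    with the largest inlier set (|residual| <= tol), then refit."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) <= tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit by least squares on the inliers only.
    a, b = np.polyfit(x[best_inliers], y[best_inliers], 1)
    return a, b, best_inliers

# Synthetic arrival times: a straight reflector plus two gross outliers.
x = np.arange(20, dtype=float)
y = 2.0 * x + 5.0
y[[4, 15]] += 40.0  # spurious echoes
a, b, inliers = ransac_line(x, y)
print(a, b, inliers.sum())  # a ≈ 2.0, b ≈ 5.0, 18 inliers
```

The surviving inlier profile would then seed a delineation method such as active contours.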
Citations: 2
Slice Profile Estimation From 2D MRI Acquisition Using Generative Adversarial Networks
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434137
Shuo Han, A. Carass, M. Schär, P. Calabresi, Jerry L Prince
To save time and maintain an adequate signal-to-noise ratio, magnetic resonance (MR) images are often acquired with better in-plane than through-plane resolutions in 2D acquisition. To improve image quality, recent work has focused on using deep learning to super-resolve the through-plane resolution. To create training data, images can be degraded in an in-plane direction to match the through-plane resolution. To do this correctly, the slice selection profile (SSP) should be known, but this is rarely possible since precise details of signal excitation are usually unknown. Therefore, estimating the SSP of an image volume is desired. In this work, we first show that a relative SSP can be estimated from the difference between in- and through-plane image patches. We further propose an algorithm that uses generative adversarial networks (GAN) to estimate the SSP. In this algorithm, the GAN’s generator blurs in-plane patches in one direction using an estimated relative SSP then downsamples them. The GAN’s discriminator distinguishes the generator’s output from real through-plane patches. The proposed method was validated using numerical simulations and phantom and brain scans. To our knowledge, it is the first work to estimate the SSP from a single MR image. The code is available at https://github.com/shuohan/espreso.
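The generator's degradation step (blur an in-plane profile with a candidate SSP, then downsample) can be sketched in 1-D. The Gaussian SSP and downsampling factor below are assumptions for illustration only, since the point of the paper is that the true SSP is estimated rather than known.

```python
import numpy as np

def degrade(signal, ssp, factor):
    """Blur a 1-D in-plane profile with a slice selection profile (SSP)
    and downsample, mimicking the coarser through-plane sampling."""
    blurred = np.convolve(signal, ssp / ssp.sum(), mode="same")
    return blurred[::factor]

# Assumed Gaussian SSP (9 taps); in the paper this profile is estimated.
x = np.linspace(-2, 2, 9)
ssp = np.exp(-0.5 * (x / 0.8) ** 2)

signal = np.zeros(64)
signal[32] = 1.0  # impulse input: the output traces the SSP itself
low_res = degrade(signal, ssp, factor=4)
print(len(low_res))  # 16
```

In the GAN setup, the discriminator would compare such degraded in-plane patches against real through-plane patches.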
Citations: 2
Focal-Balanced Attention U-Net with Dynamic Thresholding by Spatial Regression for Segmentation of Aortic Dissection in CT Imagery
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434028
Tsung-Han Lee, Li-Ting Huang, Paul Kuo, Chien-Kuo Wang, Jiun-In Guo
Aortic dissection has a reported mortality of 50% within the first 48 hours, increasing by 1-2% per hour. Therefore, rapid diagnosis of the intimal flap is very important for the emergency treatment of patients. To accurately present the affected region of an aortic dissection (AD) and reduce the time doctors need for diagnosis, image segmentation is the most effective form of presentation. We used the U-Net model in this study and focused on the AD (including the ascending, arch, and descending parts) during the detection process. Furthermore, we designed a site and area regression (SAR) module. With the help of this accurate prediction, we achieved slice-level sensitivity and specificity of 99.1% and 93.2%, respectively.
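The reported slice-level metrics can be computed from binary slice labels as follows; the toy labels are illustrative.

```python
def sens_spec(y_true, y_pred):
    """Slice-level sensitivity (TPR) and specificity (TNR) from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 0, 0, 0, 1, 0]  # 1 = slice contains dissection
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
sens, spec = sens_spec(y_true, y_pred)
print(sens, spec)  # 0.75 0.75
```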
Citations: 3
Deformable MRI to Transrectal Ultrasound Registration for Prostate Interventions with Shape-Based Deep Variational Auto-Encoders
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434101
Sh. Shakeri, W. Le, C. Ménard, S. Kadoury
Prostate cancer is one of the most prevalent cancers in men, where diagnosis is confirmed through biopsies analyzed with histopathology. A diagnostic T2-w MRI is often registered to intra-operative transrectal ultrasound (TRUS) for effective targeting of suspicious lesions during image-guided biopsy procedures or needle-based therapeutic interventions such as brachytherapy. However, this process remains challenging and time-consuming in an interventional environment. The present work proposes an automated 3D deformable MRI-to-TRUS registration pipeline that combines deep variational auto-encoders with a non-rigid iterative closest point (ICP) registration approach. A convolutional FC-ResNet segmentation model is first trained on 3D TRUS images to extract prostate boundaries during the procedure. Matched MRI-TRUS 3D segmentations are then used to generate a vector representation of the gland’s surface mesh in both modalities, which is used as input to a 10-layer dense variational autoencoder model to constrain the predicted deformations based on a latent representation of the deformation modes. At each iteration of the registration process, the warped image is regularized using the autoencoder’s reconstruction loss, ensuring plausible anatomical deformations. Based on a 5-fold cross-validation strategy with 45 patients undergoing HDR brachytherapy, the method yields a Dice score of 85.0 ± 2.6 and a target registration error of 3.9 ± 1.4 mm, outperforming the state-of-the-art with minimal intra-procedural disruption.
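The rigid core of a single ICP iteration is the least-squares alignment of corresponded point sets, e.g. via the Kabsch/Procrustes solution sketched below. This is only the classical building block; it does not capture the paper's non-rigid, autoencoder-regularized registration.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst
    (Kabsch/Procrustes), the core alignment step inside ICP."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic surface points and a known rotation + translation.
rng = np.random.default_rng(1)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])

R, t = best_rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [1.0, -2.0, 0.5]))  # True True
```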
Citations: 1
A More Interpretable Classifier For Multiple Sclerosis
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434074
Valentine Wargnier-Dauchelle, T. Grenier, F. Durand-Dubief, F. Cotton, M. Sdika
Over the past years, deep learning has proven its effectiveness in medical imaging for diagnosis and segmentation. Nevertheless, to be fully integrated into clinics, these methods must both reach good performance and convince practitioners of their interpretability. Thus, an interpretable model should base its decision on clinically relevant information, as a domain expert would. With this purpose, we propose a more interpretable classifier focusing on the most widespread autoimmune neuroinflammatory disease: multiple sclerosis. This disease is characterized by brain lesions visible on MRI (magnetic resonance imaging), on which diagnosis is based. Using Integrated Gradients attributions, we show that using brain tissue probability maps instead of raw MR images as the deep network input yields a more accurate and interpretable classifier whose decisions are largely based on lesions.
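Integrated Gradients itself is straightforward to sketch for a toy model with an analytic gradient; the model below is illustrative, not the paper's network. Attributions should satisfy the completeness axiom: they sum to F(x) minus F(baseline).

```python
import numpy as np

def integrated_gradients(f, grad_f, x, baseline, steps=100):
    """Riemann-sum (midpoint) approximation of Integrated Gradients:
    IG_i = (x_i - x0_i) * mean over alpha of dF/dx_i at x0 + alpha*(x - x0)."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.array([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

# Toy "model": F(x) = x0^2 + 3*x1, with its analytic gradient.
f = lambda x: x[0] ** 2 + 3 * x[1]
grad_f = lambda x: np.array([2 * x[0], 3.0])

x = np.array([2.0, 1.0])
baseline = np.zeros(2)
ig = integrated_gradients(f, grad_f, x, baseline)

# Completeness: attributions sum to F(x) - F(baseline) = 7.
print(ig, ig.sum())  # [4. 3.] 7.0
```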
Citations: 4
CU-SegNet: Corneal Ulcer Segmentation Network
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433934
Tingting Wang, Weifang Zhu, Meng Wang, Zhongyue Chen, Xinjian Chen
Corneal ulcer is a commonly occurring corneal disease. Segmenting corneal ulcers in slit-lamp images is challenging due to the different sizes and shapes of point-flaky mixed corneal ulcers and flaky corneal ulcers. These differences introduce inconsistency and affect prediction accuracy. To address this problem, we propose a corneal ulcer segmentation network (CU-SegNet) to segment corneal ulcers in fluorescein staining images. In CU-SegNet, an encoder-decoder structure is adopted as the main framework, and two novel modules, a multi-scale global pyramid feature aggregation (MGPA) module and a multi-scale adaptive-aware deformation (MAD) module, are proposed and embedded into the skip connections and the top of the encoder path, respectively. MGPA helps high-level features supplement local high-resolution semantic information, while MAD guides the network to focus on multi-scale deformation features and adaptively aggregate contextual information. The proposed network is evaluated on the public SUSTech-SYSU dataset, achieving a Dice coefficient of 89.14%.
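The Dice coefficient used for evaluation is simple to compute from binary masks; the toy masks are illustrative.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1  # 4 foreground pixels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1  # 6 pixels, overlap of 4
print(round(dice(a, b), 3))  # 0.8
```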
Citations: 7
Two-Stream Attention Spatio-Temporal Network For Classification Of Echocardiography Videos
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433773
Zishun Feng, J. Sivak, Ashok K. Krishnamurthy
There is considerable interest in AI systems that can assist cardiologists in diagnosing echocardiograms and can also be used to train residents in classifying them. Prior work has focused on the analysis of a single frame. Classifying echocardiograms at the video level is challenging due to intra-frame and inter-frame noise. We propose a two-stream deep network that learns from the spatial context and optical flow for the classification of echocardiography videos. Each stream contains two parts: a Convolutional Neural Network (CNN) for spatial features and a bi-directional Long Short-Term Memory (LSTM) network with attention for temporal features. The features from these two streams are fused for classification. We verify our experimental results on a dataset of 170 videos (80 normal and 90 abnormal) that have been manually labeled by trained cardiologists. Our method provides an overall accuracy of 91.18%, with a sensitivity of 94.11% and a specificity of 88.24%.
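The fusion step, concatenating the two streams' summary features before a classifier, can be sketched as below. The feature sizes and the linear classifier head are assumptions for illustration; the paper's streams are CNN+attention-LSTM networks.

```python
import numpy as np

def fuse(spatial_feat, temporal_feat, w, b):
    """Late fusion: concatenate spatial and optical-flow features,
    then apply a linear classifier with sigmoid for normal/abnormal."""
    z = np.concatenate([spatial_feat, temporal_feat])
    logit = w @ z + b
    return 1.0 / (1.0 + np.exp(-logit))

rng = np.random.default_rng(0)
spatial = rng.normal(size=128)   # e.g. summary of the spatial stream
flow = rng.normal(size=128)      # e.g. summary of the optical-flow stream
w = rng.normal(size=256) * 0.01  # toy classifier weights
p = fuse(spatial, flow, w, b=0.0)
print(0.0 < p < 1.0)  # True
```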
Citations: 1
Biological Cell Tracking And Lineage Inference Via Random Finite Sets
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433957
Tran Thien Dat Nguyen, Changbeom Shim, Wooil Kim
Automatic cell tracking has long been a challenging problem due to uncertainty in cell dynamics and the observation process, where the detection probability and clutter rate are unknown and time-varying. This is compounded when cell lineages are also to be inferred. In this paper, we propose a novel biological cell tracking method based on the Labeled Random Finite Set (RFS) approach to study cell migration patterns. Our method tracks cells with lineage by using a Generalized Labeled Multi-Bernoulli (GLMB) filter with object spawning, together with a robust Cardinalized Probability Hypothesis Density (CPHD) filter to address the unknown and time-varying detection probability and clutter rate. The proposed method is capable of quantifying the certainty level of the tracking solutions. The capability of the algorithm for population dynamic inference is demonstrated on a migration sequence of breast cancer cells.
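As a much-simplified stand-in for the GLMB filter (which additionally models births, deaths, spawning, and clutter probabilistically), frame-to-frame association can be sketched as greedy, gated nearest-neighbour linking of detections; all data and the gate value are illustrative.

```python
import numpy as np

def link_greedy(prev, curr, gate=5.0):
    """Greedy nearest-neighbour association of detections between two
    frames; returns (i, j) index pairs within a gating distance.
    A crude stand-in for probabilistic multi-object filters like GLMB."""
    pairs, used = [], set()
    for i, p in enumerate(prev):
        d = np.linalg.norm(curr - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= gate and j not in used:
            pairs.append((i, j))
            used.add(j)
    return pairs

prev = np.array([[0.0, 0.0], [10.0, 10.0]])
curr = np.array([[10.5, 9.5], [0.5, 0.2], [50.0, 50.0]])  # last is clutter
print(link_greedy(prev, curr))  # [(0, 1), (1, 0)]
```

Unlike this sketch, the RFS approach handles the unknown clutter rate and detection probability within the filter itself.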
Citations: 0
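The GLMB/CPHD machinery described above is involved; as a loose illustration of the labeled-track-with-lineage bookkeeping it performs (not the filter itself), here is a greedy nearest-neighbour sketch. The gating thresholds, the tuple label scheme, and the spawn rule are assumptions made purely for this toy example:

```python
import math

def track_with_lineage(frames, gate=5.0, spawn_gate=8.0):
    """Greedy nearest-neighbour tracker with spawn-based lineage labels.

    frames: list of frames, each a list of (x, y) detections.
    Returns a dict mapping label -> list of (frame_index, (x, y)).
    Labels are tuples of (birth_frame, index) pairs; a spawned daughter
    extends its parent's label, so lineage is readable from the label itself.
    """
    tracks = {}   # label -> list of (t, point)
    active = {}   # label -> last observed point
    next_id = 0
    for t, dets in enumerate(frames):
        unmatched = list(dets)
        # 1) Associate each active track with its nearest detection inside the gate.
        for label, last in list(active.items()):
            if not unmatched:
                break
            d, best = min((math.dist(last, p), p) for p in unmatched)
            if d <= gate:
                tracks[label].append((t, best))
                active[label] = best
                unmatched.remove(best)
        # 2) Leftover detections near an existing track are treated as spawned daughters.
        for p in unmatched[:]:
            parents = [(math.dist(last, p), lab) for lab, last in active.items()]
            if parents:
                d, lab = min(parents)
                if d <= spawn_gate:
                    child = lab + ((t, len(tracks)),)
                    tracks[child] = [(t, p)]
                    active[child] = p
                    unmatched.remove(p)
        # 3) Anything still unmatched starts a brand-new track (a "birth").
        for p in unmatched:
            label = ((t, next_id),)
            next_id += 1
            tracks[label] = [(t, p)]
            active[label] = p
    return tracks
```

A real GLMB filter maintains a weighted set of such labeled hypotheses and propagates their probabilities, rather than committing greedily to one association per frame.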
Information Flow Through U-Nets
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433801
Suemin Lee, I. Bajić
Deep Neural Networks (DNNs) have become ubiquitous in medical image processing and analysis. Among them, U-Nets are very popular in various image segmentation tasks. Yet, little is known about how information flows through these networks and whether they are indeed properly designed for the tasks they are being proposed for. In this paper, we employ information-theoretic tools in order to gain insight into information flow through U-Nets. In particular, we show how mutual information between input/output and an intermediate layer can be a useful tool to understand information flow through various portions of a U-Net, assess its architectural efficiency, and even propose more efficient designs.
Citations: 2
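The abstract above rests on estimating mutual information between a network's input/output and an intermediate layer. The paper's actual estimator is not given here; as a hedged illustration, the sketch below uses a simple histogram (binning) plug-in estimate, with the `bins` value and the toy "layers" chosen only for demonstration:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram (binning) estimate of I(X; Y) in bits for two 1-D samples.

    A crude plug-in estimator: discretise both variables, form the joint
    histogram, and compute I = H(X) + H(Y) - H(X, Y).
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return entropy(px) + entropy(py) - entropy(pxy.ravel())

# A "layer" that copies its input preserves (nearly) all of the input's
# information; a noise-dominated layer preserves much less.
rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
identity_layer = x
noisy_layer = x + 3.0 * rng.normal(size=10_000)
assert mutual_information(x, identity_layer) > mutual_information(x, noisy_layer)
```

With a binning estimator the absolute values depend strongly on `bins`, so comparisons of this kind are most meaningful between layers of the same network under a fixed discretisation.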
Journal
2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)