Latest publications: 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)

Managing Class Imbalance in Multi-Organ CT Segmentation in Head and Neck Cancer Patients
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433991
Samuel Cros, Eugene Vorontsov, S. Kadoury
Radiotherapy planning of head and neck cancer patients requires an accurate delineation of several organs at risk (OAR) from planning CT images in order to determine a dose plan which reduces toxicity and salvages normal tissue. However, training a single deep neural network for multiple organs is highly sensitive to class imbalance and variability in size between several structures within the head and neck region. In this paper, we propose a single-class segmentation model for each OAR in order to handle class imbalance issues during training across output classes (one class per structure), where there exists a severe disparity between 12 OARs. Based on a U-net architecture, we present a transfer learning approach between similar OARs to leverage common learned features, as well as a simple weight averaging strategy to initialize a model as the average of multiple models, each trained on a separate organ. Experiments performed on an internal dataset of 200 H&N cancer patients treated with external beam radiotherapy show that the proposed model presents a significant improvement compared to the baseline multi-organ segmentation model, which attempts to simultaneously train several OARs. The proposed model yields an overall Dice score of 0.75 ± 0.12 by using both transfer learning across OARs and a weight averaging strategy, indicating that a reasonable segmentation performance can be achieved by leveraging additional data from surrounding structures, limiting the uncertainty in ground-truth annotations.
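The weight averaging strategy above can be illustrated with a short sketch; it assumes PyTorch state dicts of identically structured single-organ U-Nets, and all names here (average_state_dicts, the organ file names) are hypothetical rather than taken from the paper.

import torch

def average_state_dicts(state_dicts):
    """Average parameters across models trained on separate organs.
    All models must share the same architecture (same keys and tensor shapes)."""
    avg = {}
    for key, ref in state_dicts[0].items():
        if ref.is_floating_point():
            # Element-wise mean of the per-organ parameters.
            avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
        else:
            # Integer buffers (e.g. batch-norm counters) are copied from the first model.
            avg[key] = ref.clone()
    return avg

# Hypothetical usage: initialize a new single-organ model from models
# previously trained on other organs.
# state_dicts = [torch.load(p) for p in ["parotid.pt", "larynx.pt", "mandible.pt"]]
# new_model.load_state_dict(average_state_dicts(state_dicts))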
Citations: 3
Time Of Arrival Delineation In Echo Traces For Reflection Ultrasound Tomography
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433846
B. R. Chintada, R. Rau, O. Goksel
Ultrasound Computed Tomography (USCT) is an imaging method to map acoustic properties in soft tissues, e.g., for the diagnosis of breast cancer. A group of USCT methods rely on a passive reflector behind the imaged tissue, and they function by delineating such a reflector in echo traces, e.g., to infer time-of-flight measurements for reconstructing local speed-of-sound maps. In this work, we study various echo features and delineation methods to robustly identify reflector profiles in echoes. We compared and evaluated the methods on a multi-static dataset of a realistic breast phantom. Based on our results, RANSAC-based outlier removal followed by an active-contour delineation using a new “edge” feature we propose, which detects the first arrival times of the echo, performs robustly even in complex media; in particular, it is 2.1 times superior to alternative approaches at locations where diffraction effects are prominent.
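As a rough illustration of the RANSAC-based outlier removal step, the sketch below fits a low-order polynomial reflector profile to per-channel arrival-time picks and rejects outliers; the polynomial model, tolerance, and function name are assumptions for illustration, not the paper's exact formulation.

import numpy as np

def ransac_reflector_profile(channels, arrival_times, degree=2, n_iter=200, tol=0.5, seed=0):
    """Fit a polynomial reflector profile to per-channel arrival times (1D arrays),
    rejecting outlier picks (e.g. spurious echoes) with RANSAC.
    `tol` is the inlier residual threshold in the same units as `arrival_times`."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(channels), dtype=bool)
    for _ in range(n_iter):
        # Fit a candidate profile on a minimal random sample of picks.
        sample = rng.choice(len(channels), size=degree + 1, replace=False)
        coeffs = np.polyfit(channels[sample], arrival_times[sample], degree)
        residuals = np.abs(np.polyval(coeffs, channels) - arrival_times)
        inliers = residuals < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers to obtain the final profile.
    coeffs = np.polyfit(channels[best_inliers], arrival_times[best_inliers], degree)
    return coeffs, best_inliers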
Citations: 2
Slice Profile Estimation From 2D MRI Acquisition Using Generative Adversarial Networks
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434137
Shuo Han, A. Carass, M. Schär, P. Calabresi, Jerry L Prince
To save time and maintain an adequate signal-to-noise ratio, magnetic resonance (MR) images are often acquired with better in-plane than through-plane resolutions in 2D acquisition. To improve image quality, recent work has focused on using deep learning to super-resolve the through-plane resolution. To create training data, images can be degraded in an in-plane direction to match the through-plane resolution. To do this correctly, the slice selection profile (SSP) should be known, but this is rarely possible since precise details of signal excitation are usually unknown. Therefore, estimating the SSP of an image volume is desired. In this work, we first show that a relative SSP can be estimated from the difference between in- and through-plane image patches. We further propose an algorithm that uses generative adversarial networks (GAN) to estimate the SSP. In this algorithm, the GAN’s generator blurs in-plane patches in one direction using an estimated relative SSP, and then downsamples them. The GAN’s discriminator distinguishes the generator’s output from real through-plane patches. The proposed method was validated using numerical simulations and phantom and brain scans. To our knowledge, it is the first work to estimate the SSP from a single MR image. The code is available at https://github.com/shuohan/espreso.
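The degradation used to create training data, blurring an in-plane direction with the slice selection profile and downsampling, can be sketched as follows; the 1D SSP kernel, axis choice, and integer scale factor are assumptions made for illustration.

import numpy as np
from scipy.ndimage import convolve1d

def degrade_in_plane(patch, ssp_kernel, scale):
    """Blur a 2D in-plane patch along one axis with an (estimated) slice selection
    profile, then downsample that axis to mimic the through-plane resolution."""
    ssp_kernel = np.asarray(ssp_kernel, float)
    ssp_kernel = ssp_kernel / ssp_kernel.sum()        # normalize the profile
    blurred = convolve1d(patch, ssp_kernel, axis=0)   # blur one in-plane direction
    return blurred[::scale, :]                        # keep every `scale`-th row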
Citations: 2
Focal-Balanced Attention U-Net with Dynamic Thresholding by Spatial Regression for Segmentation of Aortic Dissection in CT Imagery
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434028
Tsung-Han Lee, Li-Ting Huang, Paul Kuo, Chien-Kuo Wang, Jiun-In Guo
Aortic dissection has a reported mortality of 50% within the first 48 hours, increasing by 1-2% per hour. Therefore, rapid diagnosis of the intimal flap is very important for the emergency treatment of patients. In order to accurately present the affected part of an aortic dissection (AD) and reduce the time doctors need for diagnosis, image segmentation is the most effective way of presentation. We used the U-Net model in this study and focused on the AD (including the ascending, arch, and descending parts) in the detection process. Furthermore, we designed the site and area regression (SAR) module. With the help of accurate prediction, we achieved slice-level sensitivity and specificity of 99.1% and 93.2%, respectively.
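For reference, the slice-level sensitivity and specificity reported above can be computed from binary per-slice labels as in the following sketch (the function name and label convention are assumptions, not code from the paper).

import numpy as np

def slice_sensitivity_specificity(pred, truth):
    """Sensitivity and specificity from binary per-slice labels
    (1 = slice contains dissection, 0 = no dissection)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)    # dissection slices correctly detected
    tn = np.sum(~pred & ~truth)  # normal slices correctly rejected
    fn = np.sum(~pred & truth)
    fp = np.sum(pred & ~truth)
    return tp / (tp + fn), tn / (tn + fp)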
Citations: 3
Deformable MRI To Transrectal Ultrasound Registration For Prostate Interventions With Shape-Based Deep Variational Auto-Encoders
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434101
Sh. Shakeri, W. Le, C. Ménard, S. Kadoury
Prostate cancer is one of the most prevalent cancers in men, where diagnosis is confirmed through biopsies analyzed with histopathology. A diagnostic T2-w MRI is often registered to intra-operative transrectal ultrasound (TRUS) for effective targeting of suspicious lesions during image-guided biopsy procedures or needle-based therapeutic interventions such as brachytherapy. However, this process remains challenging and time-consuming in an interventional environment. The present work proposes an automated 3D deformable MRI to TRUS registration pipeline that combines deep variational auto-encoders with a non-rigid iterative closest point registration approach. A convolutional FC-ResNet segmentation model is first trained from 3D TRUS images to extract prostate boundaries during the procedure. Matched MRI-TRUS 3D segmentations are then used to generate a vector representation of the gland’s surface mesh between modalities, which is used as input to a 10-layer dense variational autoencoder model to constrain the predicted deformations based on a latent representation of the deformation modes. At each iteration of the registration process, the warped image is regularized using the autoencoder’s reconstruction loss, ensuring plausible anatomical deformations. Based on a 5-fold cross-validation strategy with 45 patients undergoing HDR brachytherapy, the method yields a Dice score of 85.0 ± 2.6 with a target registration error of 3.9 ± 1.4 mm, outperforming the state-of-the-art with minimal intra-procedural disruptions.
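The target registration error quoted above is, in essence, the distance between corresponding landmarks after warping; a minimal sketch, assuming paired landmark coordinates in millimetres and a hypothetical function name, is:

import numpy as np

def target_registration_error(warped_mri_landmarks, trus_landmarks):
    """Mean and standard deviation of the Euclidean distance (mm) between
    warped MRI landmarks and their corresponding TRUS landmarks."""
    d = np.linalg.norm(np.asarray(warped_mri_landmarks) - np.asarray(trus_landmarks), axis=1)
    return d.mean(), d.std()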
Citations: 1
A More Interpretable Classifier For Multiple Sclerosis
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434074
Valentine Wargnier-Dauchelle, T. Grenier, F. Durand-Dubief, F. Cotton, M. Sdika
Over the past years, deep learning has proven its effectiveness in medical imaging for diagnosis or segmentation. Nevertheless, to be fully integrated into clinical practice, these methods must both reach good performance and convince practitioners of their interpretability. Thus, an interpretable model should base its decision on clinically relevant information, as a domain expert would. With this purpose, we propose a more interpretable classifier focusing on the most widespread autoimmune neuroinflammatory disease: multiple sclerosis. This disease is characterized by brain lesions visible on MRI (Magnetic Resonance Images), on which diagnosis is based. Using Integrated Gradients attributions, we show that using brain tissue probability maps instead of raw MR images as the deep network input yields a more accurate and interpretable classifier whose decisions are largely based on lesions.
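Integrated Gradients attributions, as referenced above, can be approximated with a simple Riemann sum; the following PyTorch sketch assumes a classifier returning class logits and a zero baseline, both of which are illustrative choices rather than details from the paper.

import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Riemann-sum approximation of Integrated Gradients for one input tensor x
    (shape [1, ...]) with respect to the logit of class `target`."""
    if baseline is None:
        baseline = torch.zeros_like(x)          # illustrative baseline choice
    grads = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        # Interpolate between baseline and input, and accumulate the gradient
        # of the target logit at that point.
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        model(point)[0, target].backward()
        grads += point.grad
    return (x - baseline) * grads / steps       # attribution per input voxel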
Citations: 4
Zebrafish Histotomography Noise Removal In Projection And Reconstruction Domains
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433914
A. Adishesha, D. Vanselow, P. L. Rivière, Xiaolei Huang, K. Cheng
X-ray “Histotomography”, built on the basic principles of CT, can be used to create 3D images of zebrafish at resolutions one thousand times greater than CT, enabling the visualization of cell nuclei and other subcellular structures in 3D. Noise in the scans, caused either by natural X-ray phenomena or by other distortions, can lead to low accuracy in tasks related to detection and segmentation of anatomically significant objects. We evaluate the use of supervised encoder-decoder models for noise removal in projection- and reconstruction-domain images in the absence of clean training targets. We propose the use of a Noise2Noise architecture with a U-Net backbone along with a structural similarity index loss as an addendum to help maintain and sharpen pathologically relevant details. We empirically show that our technique outperforms existing methods, with average peak signal-to-noise ratio (PSNR) gains of 14.50 dB and 15.05 dB for noise removal in the reconstruction domain when trained without and with clean targets, respectively. Using the same network architecture, we obtain a gain in structural similarity index (SSIM) in the projection domain of 0.213 on average when trained without clean targets and 0.259 with clean targets. Additionally, by comparing reconstructions from denoised projections with those from original projections, we establish that noise removal in the projection domain is beneficial for improving the quality of reconstructed scans.
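The PSNR figures above follow the standard definition; a minimal sketch (the function name and data-range convention are assumptions) is:

import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    reference = np.asarray(reference, float)
    test = np.asarray(test, float)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# A PSNR "gain" for a denoiser would then be psnr(clean, denoised) - psnr(clean, noisy).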
Citations: 1
CU-SegNet: Corneal Ulcer Segmentation Network
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433934
Tingting Wang, Weifang Zhu, Meng Wang, Zhongyue Chen, Xinjian Chen
Corneal ulcer is a commonly occurring illness of the cornea. It is challenging to segment corneal ulcers in slit-lamp images due to the different sizes and shapes of point-flaky mixed corneal ulcers and flaky corneal ulcers. These differences introduce inconsistency and affect the prediction accuracy. To address this problem, we propose a corneal ulcer segmentation network (CU-SegNet) to segment corneal ulcers in fluorescein staining images. In CU-SegNet, the encoder-decoder structure is adopted as the main framework, and two novel modules, a multi-scale global pyramid feature aggregation (MGPA) module and a multi-scale adaptive-aware deformation (MAD) module, are proposed and embedded into the skip connections and the top of the encoder path, respectively. MGPA helps high-level features supplement local high-resolution semantic information, while MAD can guide the network to focus on multi-scale deformation features and adaptively aggregate contextual information. The proposed network is evaluated on the public SUSTech-SYSU dataset. The Dice coefficient of the proposed method is 89.14%.
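The Dice coefficient reported above measures overlap between the predicted and ground-truth ulcer masks; a minimal sketch for binary masks is shown below (the smoothing constant eps is an illustrative detail).

import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    intersection = np.sum(pred & truth)
    # 2|A∩B| / (|A| + |B|), with eps to avoid division by zero on empty masks.
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)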
Citations: 7
Unequivocal Cardiac Phase Sorting From Alternating Ramp- And Pulse-Illuminated Microscopy Image Sequences
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433858
Olivia Mariani, François Marelli, C. Jaques, Alexander Ernst, M. Liebling
In vivo microscopy is an important tool to study developing organs such as the heart of the zebrafish embryo, but it is often limited by slow image frame acquisition speed. While collections of still images of the beating heart at arbitrary phases can be sorted to obtain a virtual heartbeat, the presence of identical heart configurations at two or more heartbeat phases can derail this approach. Here, we propose a dual-illumination method that encodes movement in alternate frames to disambiguate heartbeat phases in the still frames. We propose to alternately acquire images with ramp and pulse illumination, then sort all successive image pairs based on the ramp-illuminated data while using the pulse-illuminated images for display and analysis. We characterized our method on synthetic data, showed its applicability on experimental data, and found that an exposure time of about 7% of the heartbeat or more is necessary to encode the movement reliably in a single heartbeat with a single redundant node. Our method opens the possibility of using sorting algorithms without prior information on the phase, even when the movement presents redundant frames.
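As a loose illustration of sorting still frames into a virtual heartbeat (not the authors' algorithm), one simple approach is a greedy nearest-neighbour ordering by image similarity on the ramp-illuminated frames; the function name and the mean-squared-difference similarity measure are assumptions.

import numpy as np

def greedy_phase_sort(frames):
    """Greedy nearest-neighbour ordering of frames by mean squared difference.
    `frames` is an array of shape (n_frames, height, width)."""
    frames = np.asarray(frames, float)
    order = [0]
    remaining = set(range(1, len(frames)))
    while remaining:
        last = frames[order[-1]]
        # Pick the unused frame most similar to the last one in the ordering.
        nxt = min(remaining, key=lambda i: np.mean((frames[i] - last) ** 2))
        order.append(nxt)
        remaining.remove(nxt)
    return order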
Citations: 1
Disentangling The Spatio-Temporal Heterogeneity of Alzheimer’s Disease Using A Deep Predictive Stratification Network
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433903
Andrew Zhen, Minjeong Kim, Guorong Wu
Alzheimer’s disease (AD) is clinically heterogeneous in presentation and progression, demonstrating variable topographic distributions of clinical phenotypes, progression rates, and underlying neurodegeneration mechanisms. Although considerable efforts have been made to disentangle the massive heterogeneity in AD by identifying latent clusters with similar imaging or phenotype patterns, such unsupervised clustering techniques often yield sub-optimal stratification results that do not agree with clinical manifestations. To address this limitation, we present a novel deep predictive stratification network (DPS-Net) to learn the best feature representations from neuroimages, which allows us to identify latent fine-grained clusters (aka subtypes) with greater neuroscientific insight. The driving force of DPS-Net is a series of clinical outcomes from different cognitive domains (such as language and memory), which we consider as the benchmark to alleviate the heterogeneity issue of neurodegeneration pathways in the AD population. Since subject-specific longitudinal change is more relevant to disease progression, we propose to identify the latent subtypes from longitudinal neuroimaging data. Because AD manifests as a disconnection syndrome, we have applied our data-driven subtyping approach to longitudinal structural connectivity networks from the ADNI database. Our deep neural network identified more separated and clinically backed subtypes than conventional unsupervised methods used to solve the subtyping task, indicating its great applicability in future neuroimaging studies.
Citations: 0