
Latest publications from the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)

The Winner of Age Challenge: Going One Step Further From Keypoint Detection to Scleral Spur Localization
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433822
Xing Tao, Chenglang Yuan, Cheng Bian, Yuexiang Li, Kai Ma, Dong Ni, Yefeng Zheng
Primary angle-closure glaucoma (PACG) is a major sub-type of glaucoma that is responsible for half of glaucoma-related blindness worldwide. Early detection of PACG is very important in order to provide timely treatment and prevent potentially irreversible vision loss. Clinically, the diagnosis of PACG is based on evaluation of the anterior chamber angle (ACA) with anterior segment optical coherence tomography (AS-OCT). To this end, the Angle closure Glaucoma Evaluation (AGE) challenge¹ held at MICCAI 2019 aims to encourage researchers to develop automated systems for angle-closure classification and scleral spur (SS) localization. We participated in the competition and won the championship on both tasks. In this paper, we share some of the ideas adopted in our competition entry, which significantly improve the accuracy of scleral spur localization. There is an extensive literature on keypoint detection for tasks such as human body keypoint and facial landmark detection. However, our experiments show that these methods fail on scleral spur localization, due to the gap between natural and medical images. We therefore propose a set of constraints that encourage a two-stage keypoint detection framework to spontaneously exploit diverse information from the AS-OCT, including image-level knowledge and contextual information around the SS, for accurate SS localization. Extensive experiments demonstrate the effectiveness of the proposed constraints.

¹ https://age.grand-challenge.org/
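The abstract does not detail the two-stage framework, but heatmap regression is the standard mechanism behind such keypoint detectors: the network is trained to reproduce a Gaussian heatmap centered on the landmark, and the prediction is decoded as the heatmap's argmax. A minimal numpy sketch of the encode/decode step (shapes and sigma are illustrative, not the paper's values):

```python
import numpy as np

def make_heatmap(shape, keypoint, sigma=2.0):
    """Render a 2D Gaussian heatmap centered on a (row, col) keypoint."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (rows - keypoint[0]) ** 2 + (cols - keypoint[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def decode_heatmap(heatmap):
    """Recover the keypoint as the location of the heatmap maximum."""
    r, c = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return int(r), int(c)

hm = make_heatmap((64, 64), (40, 21))
print(decode_heatmap(hm))  # -> (40, 21)
```

Sub-pixel refinement and the paper's image-level and contextual constraints would sit on top of this basic encoding.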
Citations: 1
Fully Automatic Cardiac Segmentation And Quantification For Pulmonary Hypertension Analysis Using Mice Cine MR Images
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433855
B. Zufiria, Maialen Stephens, Maria Jesús Sánchez, J. Ruíz-Cabello, Karen López-Linares, I. Macía
Pulmonary Hypertension (PH) induces anatomical changes in the cardiac muscle that can be quantitatively assessed using Magnetic Resonance (MR) imaging. Yet, the extraction of biomarkers relies on segmentation of the affected structures, which in many cases is performed manually by physicians. Previous approaches have shown successful automatic segmentation of different heart structures from human cardiac MR images. Nevertheless, segmentation of mice images is rarely addressed, although it is essential for preclinical studies. Thus, the aim of this work is to develop an automatic tool, based on a convolutional neural network, that segments four cardiac structures at once in healthy and pathological mice to precisely evaluate biomarkers that may correlate with PH. The obtained automatic segmentations are comparable to manual segmentations, and they improve the distinction between control and pathological cases, especially regarding biomarkers from the right ventricle.
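Segmentation quality in studies like this is conventionally reported with overlap metrics such as the Dice coefficient; a minimal numpy implementation on toy masks (not cardiac data):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1   # 16-pixel square
b = np.zeros((8, 8), dtype=int); b[2:6, 4:8] = 1   # same size, half overlapping
print(round(float(dice(a, b)), 3))  # -> 0.5
```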
Citations: 1
MVC-NET: Multi-View Chest Radiograph Classification Network With Deep Fusion
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434000
Xiongfeng Zhu, Qianjin Feng
Chest radiography is a critical imaging modality for assessing thoracic diseases. Automated radiograph classification algorithms have enormous potential to support computer-assisted clinical diagnosis. Most algorithms rely solely on a single-view radiograph to make a prediction. However, both frontal and lateral images are valuable information sources for disease diagnosis. In this paper, we present the multi-view chest radiograph classification network (MVC-Net), which fuses paired frontal and lateral views at both the feature and decision levels. Specifically, back projection transposition (BPT) explicitly incorporates the spatial information from the two orthogonal X-rays at the feature level, and a mimicry loss enables the cross-view predictions to mimic each other at the decision level. Experimental results on 13 pathologies from the MIMIC-CXR dataset show that MVC-Net yields the highest average AUROC score of 0.810, outperforming various baseline methods. The code is available at https://github.com/fzfs/Multi-view-Chest-X-ray-Classification.
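The exact form of the mimicry loss is not given in the abstract; one plausible decision-level choice, shown purely as an assumption, is a symmetric KL divergence between the per-label Bernoulli probabilities predicted from the two views:

```python
import numpy as np

def mimicry_loss(p_frontal, p_lateral, eps=1e-7):
    """Symmetric KL divergence between per-label Bernoulli predictions from
    the two views. This is one plausible decision-level consistency loss;
    the paper's exact formulation may differ."""
    p = np.clip(p_frontal, eps, 1 - eps)
    q = np.clip(p_lateral, eps, 1 - eps)
    kl_pq = p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
    kl_qp = q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))
    return float(np.mean(kl_pq + kl_qp))

print(mimicry_loss(np.array([0.9, 0.2]), np.array([0.9, 0.2])))  # views agree -> 0.0
```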
Citations: 8
Automatic Detection of Plis De Passage in the Superior Temporal Sulcus using Surface Profiling and Ensemble SVM
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433937
Tianqi Song, C. Bodin, O. Coulon
Cortical folding, an essential characteristic of the brain cortex, shows variability across individuals. Plis de passage (PPs), namely annectant gyri buried inside a fold, can explain part of this variability. However, no systematic method for automatically detecting all PPs is yet available. In this paper, we present a method to detect PPs on the cortex automatically. We first extract the geometric information of localized areas on the cortex via surface profiling. Then, an ensemble support vector machine (SVM) is developed to identify the PPs. Experimental results show the effectiveness and robustness of our method.
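The abstract does not specify how the ensemble combines its base SVMs; a common choice is hard majority voting, sketched here with toy threshold classifiers standing in for trained SVMs:

```python
import numpy as np

def majority_vote(classifiers, x):
    """Hard-voting ensemble: each base classifier returns 0/1
    (PP absent/present); the ensemble predicts the majority label."""
    votes = [clf(x) for clf in classifiers]
    return int(sum(votes) > len(votes) / 2)

# Toy base learners standing in for per-profile SVMs (illustrative only).
clfs = [lambda x: int(x[0] > 0.5),
        lambda x: int(x[1] > 0.5),
        lambda x: int(x[0] + x[1] > 1.0)]
print(majority_vote(clfs, np.array([0.8, 0.7])))  # all three fire -> 1
```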
Citations: 0
3D Unsupervised Kidney Graft Segmentation Based On Deep Learning And Multi-Sequence MRI
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433854
Léo Milecki, S. Bodard, J. Correas, M. Timsit, M. Vakalopoulou
Image segmentation is one of the most popular problems in medical image analysis. Recently, with the success of deep neural networks, these powerful methods have provided state-of-the-art performance on various segmentation tasks. However, one of the main challenges is the large number of annotations they require for training, which are costly to obtain in medical applications. In this paper, we propose an unsupervised method based on deep learning for the segmentation of kidney grafts. Our method is composed of two stages: detection of the area of interest, and a segmentation model that is able, through an iterative process, to provide accurate kidney graft segmentation without the need for annotations. The proposed framework works in 3D space to explore all the available information and extract meaningful representations from Dynamic Contrast-Enhanced and T2 MRI sequences. Our method reports a Dice score of 89.8±3.1%, a 95th-percentile Hausdorff distance of 5.8±0.41 mm, and a kidney volume difference of 5.9±5.7% on a test dataset of 29 kidney-transplant patients.
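The 95th-percentile Hausdorff distance reported above can be computed from two point sets (e.g. surface voxels of the predicted and reference masks); a minimal numpy version:

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (one row per point), a standard surface-distance metric."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)   # for each point in A, distance to nearest in B
    b_to_a = d.min(axis=0)   # and vice versa
    return float(np.percentile(np.concatenate([a_to_b, b_to_a]), 95))

a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(hd95(a, a))  # identical sets -> 0.0
```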
Citations: 9
Analysis Of Lymph Node Tumor Features In PET/CT For Segmentation
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433791
D. L. F. Cabrera, Éloïse Grossiord, N. Gogin, D. Papathanassiou, Nicolas Passat
In the context of breast cancer, the detection and segmentation of cancerous lymph nodes in PET/CT imaging is of crucial importance, in particular for staging. To guide such image analysis procedures, dedicated descriptors can be considered, especially region-based features. In this article, we focus on choosing which features should be embedded for lymph node tumor segmentation from PET/CT. The study is divided into two steps. First, we investigate the relevance of various features within a Random Forest framework. Second, we validate the expected relevance of the best-scored features by involving them in a U-Net segmentation architecture. The region-based definition of these features is handled through hierarchical modeling of the PET images. This analysis highlights a set of features that can significantly improve and guide the segmentation of lymph nodes in PET/CT.
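The Random Forest relevance scores themselves require a trained forest; as a model-agnostic illustration of the same idea, ranking features by how much performance drops when they are scrambled, here is a permutation-importance sketch with a toy scorer (an assumed stand-in, not the paper's procedure):

```python
import numpy as np

def permutation_importance(score_fn, X, y, rng):
    """Importance of feature j = drop in score when column j is shuffled,
    a model-agnostic stand-in for Random Forest feature relevance."""
    base = score_fn(X, y)
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = Xp[rng.permutation(X.shape[0]), j]  # scramble one column
        drops.append(base - score_fn(Xp, y))
    return np.array(drops)

# Toy scorer: thresholding feature 0 predicts the label; feature 1 is noise.
score = lambda X, y: float(np.mean((X[:, 0] > 0.5).astype(int) == y))
rng = np.random.default_rng(1)
X = rng.random((200, 2))
y = (X[:, 0] > 0.5).astype(int)
imp = permutation_importance(score, X, y, rng)
print(imp[1])  # shuffling the irrelevant feature changes nothing -> 0.0
```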
Citations: 0
Enhanced-Quality GAN (EQ-GAN) on Lung CT Scans: Toward Truth and Potential Hallucinations
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433996
Martin Jammes-Floreani, A. Laine, E. Angelini
Lung Computed Tomography (CT) scans are extensively used to screen for lung diseases. Strategies such as large slice spacing and low-dose CT scans are often preferred to reduce radiation exposure and therefore the risk to patients’ health. The trade-off is a significant degradation of image quality and/or resolution. In this work we investigate a generative adversarial network (GAN) for lung CT image enhanced quality (EQ). Our EQ-GAN is trained on a high-quality lung CT cohort to recover the visual quality of scans degraded by blur and noise. The capability of our trained GAN to generate EQ CT scans is further illustrated on two test cohorts. Results confirm gains in visual quality metrics; remarkable visual enhancement of vessels, airways and lung parenchyma; and other enhancement patterns that require further investigation. We also compared automatic lung lobe segmentation on original versus EQ scans. Average Dice scores vary between lobes and can be as low as 0.3, and EQ scans enable segmentation of some lobes missed in the original scans. This paves the way to using EQ as pre-processing for lung lobe segmentation, to further research evaluating the impact of EQ on the robustness of airway and vessel segmentation, and to investigating the anatomical details revealed in EQ scans.
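Training such an EQ model requires (degraded, clean) pairs; the abstract mentions degradation by blur and noise, which can be simulated along these lines (mean-filter blur plus Gaussian noise, with illustrative parameters rather than the paper's exact degradation model):

```python
import numpy as np

def degrade(img, kernel=3, noise_sigma=0.05, rng=np.random.default_rng(0)):
    """Simulate a low-quality scan from a clean one: mean-filter blur plus
    additive Gaussian noise -- the kind of (degraded, clean) pair a
    quality-enhancement GAN can be trained on."""
    pad = kernel // 2
    padded = np.pad(img, pad, mode='edge')
    blurred = np.zeros_like(img, dtype=float)
    for dr in range(kernel):            # sum the kernel x kernel window
        for dc in range(kernel):
            blurred += padded[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    blurred /= kernel ** 2
    return blurred + rng.normal(0.0, noise_sigma, img.shape)

clean = np.ones((16, 16))
low_q = degrade(clean)  # same shape, blurred + noisy
```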
Citations: 0
Ghost-Light-3DNet: Efficient Network For Heart Segmentation
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433974
Bin Cai, Erkang Cheng, Pengpeng Liang, Chi Xiong, Zhiyong Sun, Qiang Zhang, Bo Song
Accurate 3D whole-heart segmentation provides more detail on morphological and pathological information, which could help doctors deliver more effective patient-specific treatments. 3D CNNs play an important role in accurate volumetric segmentation. Typically, however, a 3D CNN has a large number of parameters and floating-point operations (FLOPs), which leads to heavy and complex computation. In this paper, we introduce an efficient 3D network (Ghost-Light-3DNet) for heart segmentation. Our solution is characterized by two key components. First, inspired by GhostNet in 2D, we extend the Ghost module to 3D, which can generate more feature maps from cheap operations. Second, a sequential separable convolution with a residual module is applied as a light plug-and-play component to further reduce network parameters and FLOPs. For evaluation, the proposed method is validated on the MM-WHS 2017 heart segmentation challenge datasets. Compared to a state-of-the-art solution using a 3D U-Net-like architecture, our Ghost-Light-3DNet achieves comparable segmentation accuracy with 2.18x fewer parameters and 4.48x fewer FLOPs.
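The parameter saving of the Ghost design can be verified by counting weights: a Ghost module replaces a dense k^3 convolution with a primary convolution producing c_out/s intrinsic maps plus cheap per-map depthwise convolutions for the remaining maps. With s=2 and a depthwise kernel of 3 (typical GhostNet defaults, assumed here rather than taken from the paper), the reduction is roughly s-fold:

```python
def conv3d_params(c_in, c_out, k):
    """Weights in a standard dense 3D convolution (bias ignored)."""
    return c_in * c_out * k ** 3

def ghost3d_params(c_in, c_out, k, s=2, d=3):
    """Weights in a 3D Ghost module: a primary conv makes c_out//s intrinsic
    maps, then cheap depthwise convs (kernel d) generate the 'ghost' maps."""
    m = c_out // s                   # intrinsic feature maps
    primary = c_in * m * k ** 3
    cheap = (s - 1) * m * d ** 3     # depthwise: one d^3 kernel per map
    return primary + cheap

std = conv3d_params(64, 64, 3)      # 110592
ghost = ghost3d_params(64, 64, 3)   # 56160, about half
```

This is close to the s=2 design target; the exact 2.18x and 4.48x figures reported above depend on the full architecture, including the separable-convolution component.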
Citations: 3
Unsupervised Detection Of Disturbances In 2D Radiographs
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434091
Laura Estacio, M. Ehlke, A. Tack, Eveling Castro Gutierrez, H. Lamecker, R. Mora, S. Zachow
We present a method based on a generative model for the detection of disturbances such as prostheses, screws, zippers, and metal objects in 2D radiographs. The generative model is trained in an unsupervised fashion using clinical radiographs as well as simulated data, none of which contain disturbances. Our approach employs a latent-space consistency loss, which has the benefit of identifying similarities, and the model is constrained to reconstruct X-rays without disturbances. To detect images with disturbances, an anomaly score is computed using the Frechet distance between the input X-ray and its reconstruction by our generative model. Validation was performed using clinical pelvis radiographs. We achieved an AUC of 0.77 and 0.83 with clinical and synthetic data, respectively. The results demonstrate the good accuracy of our method for detecting outliers, as well as the advantage of utilizing synthetic data.
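The anomaly score builds on the Frechet distance between Gaussians fitted to the input and its reconstruction. For diagonal covariances the matrix square root in the general formula reduces to an elementwise one, giving this compact numpy version (the diagonal assumption is a simplification for illustration):

```python
import numpy as np

def frechet_gaussian(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariances:
    |mu1 - mu2|^2 + sum(var1 + var2 - 2*sqrt(var1 * var2)).
    The full formula uses a matrix square root; diagonal case shown."""
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2))
    return float(mean_term + cov_term)

print(frechet_gaussian([0, 0], [1, 1], [0, 0], [1, 1]))  # identical -> 0.0
```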
Annotation-Efficient 3d U-Nets For Brain Plasticity Network Mapping
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434142
L. Gjesteby, Tzofi Klinghoffer, Meagan Ash, Matthew A. Melton, K. Otto, Damon G. Lamb, S. Burke, L. Brattain
A fundamental challenge in machine learning-based segmentation of large-scale brain microscopy images is the time and domain expertise required by humans to generate ground truth for model training. Weakly supervised and semi-supervised approaches can greatly reduce the burden of human annotation. Here we present a study of three-dimensional U-Nets with varying levels of supervision to perform neuronal nuclei segmentation in light-sheet microscopy volumes. We leverage automated blob detection with classical algorithms to generate noisy labels on a large volume, and our experiments show that weak supervision, with or without additional fine-tuning, can outperform resource-limited fully supervised learning. These methods are extended to analyze coincidence between multiple fluorescent stains in cleared brain tissue. This is an initial step towards automated whole-brain analysis of plasticity-related gene expression.
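The noisy labels this abstract leverages come from classical blob detection rather than human annotation. A minimal sketch of such a pipeline, using a Laplacian-of-Gaussian response, a quantile threshold, and connected-component labeling with scipy.ndimage, is given below; the specific filters, the `thresh_quantile` parameter, and the function name are illustrative assumptions, not the authors' exact method:

```python
import numpy as np
from scipy import ndimage

def noisy_nuclei_labels(volume, sigma=2.0, thresh_quantile=0.99):
    """Generate weak (noisy) instance labels for bright nuclei-like blobs
    in a 3D volume.

    Steps: negated Laplacian-of-Gaussian response (peaks at blob centers),
    scale-free quantile threshold, then connected-component labeling.
    Returns the label volume and the number of detected components.
    """
    log_resp = -ndimage.gaussian_laplace(volume.astype(np.float32), sigma=sigma)
    mask = log_resp > np.quantile(log_resp, thresh_quantile)
    labels, n = ndimage.label(mask)
    return labels, n
```

Labels produced this way are imperfect (merged or missed nuclei), which is exactly the setting where the weakly supervised training described above applies.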
Journal: 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)