
Latest publications: 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)

Adaptive Transfer Learning To Enhance Domain Transfer In Brain Tumor Segmentation
Pub Date: 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434100
Yuan Liqiang, Marius Erdt, Wang Lipo
Supervised deep learning has greatly catalyzed the development of medical image processing. However, reliable predictions require a large amount of labeled data, which is hard to obtain because of the expensive manual annotation effort required. Transfer learning is a potential remedy for this data insufficiency. To date, however, most transfer learning strategies for medical image segmentation either fine-tune only the last few layers of a network or treat the encoder or decoder as a whole. Improving transfer learning strategies is therefore of critical importance for supervised deep learning and, in turn, for medical image processing. In this work, we propose a novel strategy that adaptively fine-tunes the network based on policy values. Specifically, the encoder layers are fine-tuned to extract latent features, which are fed to a fully connected layer that generates policy values. The decoder is then adaptively fine-tuned according to these policy values. The proposed approach has been applied to segment human brain tumors in MRI, with evaluation performed on 769 volumes from public databases. Domain transfer from T2 to T1, T1ce, and FLAIR shows state-of-the-art segmentation accuracy.
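The policy-driven selection step can be sketched in pure Python. This is a minimal illustration under assumptions, not the authors' implementation: `policy_values` and `select_layers_to_finetune` are hypothetical names, the linear-layer weights are toy values, and a real system would unfreeze actual decoder layers rather than return indices.

```python
import math

def policy_values(latent_features, weights, bias):
    """Hypothetical fully connected layer: map encoder features to one
    policy value per decoder layer, squashed into (0, 1) by a sigmoid."""
    values = []
    for w_row, b in zip(weights, bias):
        z = sum(f * w for f, w in zip(latent_features, w_row)) + b
        values.append(1.0 / (1.0 + math.exp(-z)))
    return values

def select_layers_to_finetune(values, threshold=0.5):
    """Adaptive fine-tuning decision: a decoder layer is unfrozen only
    when its policy value exceeds the threshold."""
    return [i for i, v in enumerate(values) if v > threshold]

# toy example: 3 latent features, 3 decoder layers
features = [0.2, -0.5, 1.0]
weights = [[1.0, 0.0, 0.5], [-2.0, 1.0, 0.0], [0.0, 0.0, 3.0]]
bias = [0.0, -1.0, 0.0]
vals = policy_values(features, weights, bias)
```

With these toy weights, layers 0 and 2 get policy values above 0.5 and would be fine-tuned while layer 1 stays frozen.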
Citations: 3
Style Normalization In Histology With Federated Learning
Pub Date: 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434078
Jing Ke, Yiqing Shen, Yizhou Lu
The global cancer burden is on the rise, and Artificial Intelligence (AI) has become increasingly important for achieving more objective and efficient diagnosis in digital pathology. Current AI-assisted histopathology analysis methods need to address two issues. First, the color variations caused by different stains must be tackled, for example with stain style transfer techniques. Second, datasets from individual clinical institutions are both heterogeneous and subject to privacy regulations, which calls for robust, data-private collaborative training. In this paper, to address the color heterogeneity problem, we propose a novel generative adversarial network with one orchestrating generator and multiple distributed discriminators for stain style transfer. We also incorporate Federated Learning (FL) to preserve the privacy and security of data held at multiple centers. We use a large cohort of histopathology datasets as a case study.
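The federated aggregation step behind such data-private training is usually some variant of FedAvg: clients train locally and only parameter updates are merged centrally. A minimal sketch of the standard weighted averaging, assuming flat parameter lists (the paper's exact aggregation scheme may differ):

```python
def federated_average(client_params, client_sizes):
    """FedAvg-style aggregation: each merged parameter is the average of
    the clients' copies, weighted by local dataset size, so no raw data
    ever leaves a client site."""
    total = sum(client_sizes)
    n = len(client_params[0])
    return [
        sum(p[i] * s for p, s in zip(client_params, client_sizes)) / total
        for i in range(n)
    ]

# two clients; the second holds three times as much data
merged = federated_average([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```

The size weighting keeps the merged model from being dominated by small sites, which matters when institutions contribute very different data volumes.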
Citations: 16
Adasan: Adaptive Cosine Similarity Self-Attention Network For Gastrointestinal Endoscopy Image Classification
Pub Date: 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434084
Qian Zhao, Wenming Yang, Q. Liao
Wireless capsule endoscopy plays an important role in the examination of gastrointestinal diseases. However, the large number of medical images produced by endoscopy makes examination a time-consuming and labor-intensive task for doctors. Clinically, the detection rate of small ulcers and superficial lesions is low, and if these minor lesions are not screened and treated in time, they are likely to develop into cancer. It is therefore of great significance to develop computer-aided diagnostic algorithms that help doctors perform gastrointestinal image analysis. In this paper, we propose AdaSAN, an adaptive cosine similarity network with a self-attention module, for the automatic classification of gastrointestinal wireless capsule endoscope images. Experimental results on a clinical gastrointestinal image analysis dataset show that the proposed method outperforms state-of-the-art algorithms in classifying inflammatory lesions, vascular lesions, polyps, and normal images, with an average accuracy of 95.7%.
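The core of cosine-similarity attention is to score each key by its cosine similarity to the query rather than by a raw dot product, making the weights insensitive to feature magnitude. A minimal sketch under that assumption (AdaSAN's adaptive scaling is not reproduced here):

```python
import math

def cosine_attention(query, keys):
    """Attention weights from cosine similarity between a query vector
    and each key vector, normalised with a softmax."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    sims = [cos(query, k) for k in keys]
    exps = [math.exp(s) for s in sims]
    total = sum(exps)
    return [e / total for e in exps]

# the key aligned with the query gets the larger weight
q = [1.0, 0.0]
w = cosine_attention(q, [[1.0, 0.0], [0.0, 1.0]])
```

Because cosine similarity is bounded in [-1, 1], the softmax stays well conditioned regardless of feature scale, which is the usual motivation for this variant.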
Citations: 5
DEEPACC: Automate Chromosome Classification Based On Metaphase Images Using Deep Learning Framework Fused With Priori Knowledge
Pub Date: 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433943
Li Xiao, Chunlong Luo
Chromosome classification is an important but difficult and tedious task in karyotyping. Previous methods classify only manually segmented single chromosomes, which is far from clinical practice. In this work, we propose a detection-based method, DeepACC, that locates and finely classifies chromosomes simultaneously from the whole metaphase image. We first introduce the Additive Angular Margin Loss to enhance the discriminative power of the model. To alleviate batch effects, we transform the decision boundary of each class case by case through a Siamese network, making full use of the prior knowledge that chromosomes usually appear in pairs. Furthermore, we take the clinical seven-group criteria as prior knowledge and design an additional Group Inner-Adjacency Loss to further reduce inter-class similarities. A private metaphase image dataset from a clinical laboratory was collected and labelled to evaluate performance. Results show that the new design brings encouraging performance gains compared to state-of-the-art baseline models.
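The Additive Angular Margin Loss modifies the target-class logit before the softmax: instead of cos(theta) between the feature and the class weight, it uses cos(theta + m) for a fixed angular margin m, which forces classes apart on the hypersphere. A minimal sketch of that logit adjustment (the clamp guards against floating-point values just outside [-1, 1]):

```python
import math

def arc_margin_logit(cos_theta, margin=0.5):
    """Additive angular margin: recover the angle theta from the cosine
    similarity, add a fixed margin m, and return cos(theta + m). Using
    this in place of cos(theta) for the target class shrinks its logit,
    so the network must separate classes by an extra angular margin."""
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))
    return math.cos(theta + margin)
```

In training, only the target class's logit is replaced this way; the other logits keep their plain cosine values.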
Citations: 8
Ssiqa: Multi-Task Learning For Non-Reference Ct Image Quality Assessment With Self-Supervised Noise Level Prediction
Pub Date: 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434044
A. Imran, D. Pal, B. Patel, Adam S. Wang
Reducing CT radiation dose is important because of its potential effects on patients, but lowering the dose degrades the reconstructed image quality, further compromising diagnostic and image-based analysis performance. Given the health risks to patients, high-quality reference images cannot be easily obtained, which makes the assessment challenging. Automatic no-reference image quality assessment is therefore desirable. Leveraging an innovative self-supervised regularization in a convolutional neural network, we propose a novel, fully automated, no-reference CT image quantification method, namely self-supervised image quality assessment (SSIQA). Extensive experiments with in-domain (abdomen CT) and cross-domain (chest CT) evaluations demonstrate that SSIQA is accurate in quantifying CT image quality, generalizes across scan types, and is consistent with established metrics and different relative dose levels.
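Self-supervised noise-level prediction works as a pretext task: corrupt an image with noise at a randomly chosen, known level, and let an auxiliary head predict that level. The label comes for free. A sketch under assumptions (function name, Gaussian noise model, and the level set are illustrative, not taken from the paper):

```python
import random

def noise_level_pair(image, levels=(0.0, 0.05, 0.1)):
    """Self-supervised pretext pair: corrupt an image (a flat list of
    pixel values) with Gaussian noise at a randomly chosen level and
    return (noisy_image, level_index). The level index is a free label,
    so the auxiliary noise-level predictor needs no manual annotation."""
    idx = random.randrange(len(levels))
    noisy = [p + random.gauss(0.0, levels[idx]) for p in image]
    return noisy, idx

noisy, label = noise_level_pair([0.5] * 16)
```

Training the network to solve this task alongside quality regression regularizes the shared features toward noise sensitivity, which is exactly what a no-reference quality score needs.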
Citations: 6
Beam Stack Search-Based Reconstruction Of Unhealthy Coronary Artery Wall Segmentations In CCTA-CPR Scans
Pub Date: 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434171
Antonio Tejero-de-Pablos, Hiroaki Yamane, Y. Kurose, Junichi Iho, Youji Tokunaga, M. Horie, Keisuke Nishizawa, Yusaku Hayashi, Y. Koyama, T. Harada
Estimating the coronary artery wall boundaries in CCTA scans is a costly but essential task in the diagnosis of cardiac diseases. Deep learning-based image segmentation methods are commonly used to automate this task. However, for the coronary artery wall, even state-of-the-art segmentation methods fail to produce an accurate boundary in the presence of plaques and bifurcations. Post-processing reconstruction methods have been proposed to further refine segmentation results, but when general-purpose reconstruction is applied to artery wall segmentations, it fails to reproduce the wide variety of boundary shapes. In this paper, we propose a novel method for reconstructing coronary artery wall segmentations, the Tube Beam Stack Search (TBSS). By leveraging the voxel shape of adjacent slices in a CPR volume, TBSS finds the most plausible path of the artery wall. Like the original Beam Stack Search, TBSS navigates the voxel probabilities output by the segmentation method, reconstructing the inner and outer artery walls. Finally, skeletonization is applied to the TBSS reconstructions to eliminate noise and produce more refined segmentations. Since our method does not require learning a model, the lack of annotated data is not a limitation. We evaluated our method on a dataset of coronary CT angiography with curved planar reconstruction (CCTA-CPR) covering 92 arteries. Experimental results show that our method outperforms state-of-the-art reconstruction work.
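The idea of tracing the most plausible wall path through per-slice probabilities can be illustrated with a generic beam search. This is a simplified 2D sketch under assumptions: the function name, the product scoring, and the one-pixel smoothness constraint are illustrative, whereas TBSS itself operates on CPR volumes with its own stack-based scheme.

```python
def beam_stack_search(prob_slices, beam_width=3, max_step=1):
    """Trace a plausible wall path through per-slice probability rows:
    keep the beam_width best partial paths, letting the wall position
    move at most max_step between adjacent slices, and score a path by
    the product of the probabilities it passes through."""
    beams = [([j], p) for j, p in enumerate(prob_slices[0])]
    beams.sort(key=lambda b: -b[1])
    beams = beams[:beam_width]
    for row in prob_slices[1:]:
        candidates = []
        for path, score in beams:
            last = path[-1]
            for j in range(max(0, last - max_step),
                           min(len(row), last + max_step + 1)):
                candidates.append((path + [j], score * row[j]))
        candidates.sort(key=lambda b: -b[1])
        beams = candidates[:beam_width]
    return beams[0][0]

# three slices; the likely wall drifts from column 1 to column 2
probs = [
    [0.1, 0.8, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
]
```

The smoothness constraint is what lets the search bridge low-probability gaps at plaques and bifurcations instead of jumping to spurious maxima.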
Citations: 2
A Learned Representation For Multi-Variable Ultrasonic Lesion Quantification
Pub Date: 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433783
SeokHwan Oh, Myeong-Gee Kim, Youngmin Kim, Hyeon-Min Bae
In this paper, a single-probe ultrasonic imaging system that captures multi-variable quantitative profiles is presented. As pathological changes cause variation in biomechanical properties, quantitative imaging has great potential for lesion characterization. The proposed system simultaneously extracts four clinically informative quantitative biomarkers, namely the speed of sound, attenuation, effective scatter density, and effective scatter radius, in real time using a single scalable neural network. The performance of the proposed system was evaluated through numerical simulations and phantom and ex vivo measurements. The simulation results show that the proposed SQI-Net reconstructs the four quantitative images with a PSNR of 19.52 dB and an SSIM of 0.8251, while reducing parameters by 75% compared to a design of four parallel networks, each dedicated to a single parameter. In the phantom and ex vivo experiments, SQI-Net distinguished cysts and benign- and malignant-like inclusions through a comprehensive analysis of the four reconstructed images.
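PSNR figures such as the 19.52 dB above follow the standard definition 10 * log10(MAX^2 / MSE). A minimal sketch over images flattened to lists (the variable names are illustrative):

```python
import math

def psnr(reference, reconstruction, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two equally sized
    images flattened to lists: 10 * log10(MAX^2 / MSE)."""
    mse = sum((r - x) ** 2
              for r, x in zip(reference, reconstruction)) / len(reference)
    return 10.0 * math.log10(max_val ** 2 / mse)

# toy 4-pixel example with two pixels off by 0.1
ref = [0.0, 0.5, 1.0, 0.25]
rec = [0.1, 0.5, 0.9, 0.25]
value = psnr(ref, rec)
```

Note that PSNR is undefined for identical images (MSE of zero), so implementations typically guard that case.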
Citations: 6
Pneumoperitoneum Detection In Chest X-Ray By A Deep Learning Ensemble With Model Explainability
Pub Date: 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434122
M. V. S. D. Cea, D. Gruen, David Richmond
Pneumoperitoneum (free air in the peritoneal cavity) is a rare condition that can be life-threatening and require emergency surgery. It can be detected in chest X-rays, but the detection poses some challenges, such as small amounts of air that a radiologist may miss, or pseudo-pneumoperitoneum (air in the abdomen that can look like pneumoperitoneum). In this work, we propose an ensemble of deep learning models trained on different subsets of the data to boost classification and generalization performance, together with hard-negative mining to mitigate the effect of pseudo-pneumoperitoneum. We demonstrate superior performance when the model ensemble is used, as well as good localization of the finding with multiple model explainability techniques.
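Hard-negative mining in this setting means feeding the model's most confident false positives (e.g. pseudo-pneumoperitoneum cases) back into training. A minimal sketch of the selection step, with illustrative names not taken from the paper:

```python
def mine_hard_negatives(samples, scores, k):
    """Hard-negative mining: from negative samples (no true finding),
    keep the k that the current model scores highest, i.e. its most
    confident false positives, to emphasise in the next training round."""
    ranked = sorted(range(len(samples)),
                    key=lambda i: scores[i], reverse=True)
    return [samples[i] for i in ranked[:k]]

# "b" and "d" are the negatives the model is most wrong about
hard = mine_hard_negatives(["a", "b", "c", "d"],
                           [0.1, 0.9, 0.4, 0.7], 2)
```

Retraining with these selected cases oversampled is what teaches the ensemble to tell pseudo-pneumoperitoneum apart from the real condition.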
Citations: 2
Uncertainty-Guided Robust Training For Medical Image Segmentation
Pub Date: 2021-04-13 DOI: 10.1109/ISBI48211.2021.9433954
Yan Li, Xiaoyi Chen, Li Quan, N. Zhang
In medical image segmentation tasks, some foreground objects are more ambiguous than other areas because of their confusing appearance. It is critical to find a proper way to measure this per-pixel ambiguity and use it for robust model training. To this end, we design a Bayesian uncertainty estimate layer and propose uncertainty-guided training for standard convolutional segmentation models. The proposed layer provides the confidence of each pixel's prediction independently and combines it with prediction correctness to obtain rescaling weights for each pixel's training loss. Through this mechanism, regions with different degrees of ambiguity can be given different learning importance. We validate our proposal by comparing it with other loss rescaling approaches on medical image datasets. The results consistently show that uncertainty-guided training brings significant improvements in lesion segmentation accuracy.
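The per-pixel rescaling can be sketched as a weighting over the loss vector. The specific rule below (full weight for correct pixels, confidence-proportional weight for wrong ones, so ambiguous wrong pixels contribute less) is an assumption for illustration, not the paper's exact formula:

```python
def uncertainty_rescaled_losses(losses, confidences, correct):
    """One plausible rescaling: correctly predicted pixels keep full
    weight, while wrongly predicted pixels are weighted by the model's
    confidence, so low-confidence (ambiguous) wrong pixels are
    down-weighted rather than dominating the gradient."""
    weights = [1.0 if ok else c for c, ok in zip(confidences, correct)]
    return [l * w for l, w in zip(losses, weights)]

# three pixels: correct, confidently wrong, unsure and wrong
scaled = uncertainty_rescaled_losses([1.0, 2.0, 2.0],
                                     [0.9, 0.5, 0.25],
                                     [True, False, False])
```

Any scheme of this shape preserves the key property in the abstract: the combination of confidence and correctness, not either alone, decides each pixel's learning importance.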
Citations: 3
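The per-pixel loss rescaling described in the abstract can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the linear weighting `1 - uncertainty` and the function names are assumptions chosen for clarity; the paper derives its weights from a Bayesian uncertainty estimate layer combined with prediction correctness.

```python
import numpy as np

def uncertainty_weighted_loss(probs, labels, uncertainty, eps=1e-7):
    """Per-pixel binary cross-entropy rescaled by predicted confidence.

    probs:       (H, W) predicted foreground probabilities
    labels:      (H, W) binary ground truth
    uncertainty: (H, W) per-pixel uncertainty in [0, 1]; confident pixels
                 keep weight ~1, ambiguous pixels are down-weighted
    """
    # standard per-pixel binary cross-entropy
    ce = -(labels * np.log(probs + eps)
           + (1 - labels) * np.log(1 - probs + eps))
    # simple linear rescaling by confidence (an assumption, for illustration)
    weights = 1.0 - uncertainty
    return float(np.mean(weights * ce))
```

With this weighting, a pixel flagged as ambiguous contributes less to the total loss than an equally wrong but confidently predicted pixel, which is the mechanism the abstract credits for the robustness gain.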
Generative Adversarial Semi-Supervised Network For Medical Image Segmentation
Pub Date : 2021-04-13 DOI: 10.1109/ISBI48211.2021.9434135
Chuchen Li, Huafeng Liu
Due to ethical limitations and the small number of professional annotators, pixel-wise annotations for medical images are hard to obtain. Thus, how to exploit limited annotations while maintaining performance is an important yet challenging problem. In this paper, we propose the Generative Adversarial Semi-supervised Network (GASNet) to tackle this problem in a self-learning manner. Only limited labels are available during training, and unlabeled images are exploited as auxiliary information to boost segmentation performance. We use the segmentation network as a generator to produce pseudo labels, whose reliability is judged by an uncertainty discriminator. A feature mapping loss enforces consistency between the statistical distributions of the generated labels and the real ones to further ensure credibility. On the right ventricle dataset, we obtain Dice coefficients of 0.8348 to 0.9131 with annotation proportions of 1/32 to 1/2, respectively. Improvements are up to 28.6 points higher than the corresponding fully supervised baseline.
{"title":"Generative Adversarial Semi-Supervised Network For Medical Image Segmentation","authors":"Chuchen Li, Huafeng Liu","doi":"10.1109/ISBI48211.2021.9434135","DOIUrl":"https://doi.org/10.1109/ISBI48211.2021.9434135","url":null,"abstract":"Due to the limitation of ethics and the number of professional annotators, pixel-wise annotations for medical images are hard to obtain. Thus, how to exploit limited annotations and maintain the performance is an important yet challenging problem. In this paper, we propose Generative Adversarial Semi-supervised Network(GASNet) to tackle this problem in a self-learning manner. Only limited labels are available during the training procedure and the unlabeled images are exploited as auxiliary information to boost segmentation performance. We modulate segmentation network as a generator to produce pseudo labels whose reliability will be judged by an uncertainty discriminator. Feature mapping loss will obtain statistic distribution consistency between the generated labels and the real ones to further ensure the credibility. We obtain 0.8348 to 0.9131 dice coefficient with 1/32 to 1/2 proportion of annotations respectively on right ventricle dataset. Improvements are up to 28.6 points higher than the corresponding fully supervised baseline.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"28 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132747008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
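Two pieces of the GASNet training step described above can be sketched in a few lines of numpy. This is a hedged illustration, not the authors' code: the threshold `tau`, the hard 0.5 binarization, and both function names are assumptions; the paper's uncertainty discriminator and feature mapping loss operate on learned network features.

```python
import numpy as np

def select_reliable_pseudo_labels(pseudo_probs, disc_scores, tau=0.7):
    """Gate generator pseudo labels by a discriminator reliability score.

    pseudo_probs: (N, H, W) soft predictions of the segmentation network
                  (the generator) on unlabeled images
    disc_scores:  (N,) discriminator confidence that each prediction is
                  indistinguishable from a real annotation
    tau:          reliability threshold (an assumed hyperparameter)

    Returns the accepted hard pseudo labels and the boolean keep mask.
    """
    keep = disc_scores >= tau
    pseudo_labels = (pseudo_probs >= 0.5).astype(np.float32)
    return pseudo_labels[keep], keep

def feature_matching_loss(real_feats, fake_feats):
    """Squared distance between mean feature statistics of real labels and
    generated pseudo labels, enforcing distribution consistency."""
    return float(np.sum((real_feats.mean(axis=0)
                         - fake_feats.mean(axis=0)) ** 2))
```

Only pseudo labels the discriminator deems reliable re-enter the segmentation loss, while the feature-matching term pulls the batch statistics of generated labels toward those of real annotations.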
Journal
2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)