
Proceedings. IEEE International Symposium on Biomedical Imaging: Latest Publications

OPTIMAL TRANSPORT GUIDED UNSUPERVISED LEARNING FOR ENHANCING LOW-QUALITY RETINAL IMAGES.
Pub Date : 2023-04-01 Epub Date: 2023-09-01 DOI: 10.1109/isbi53787.2023.10230719
Wenhui Zhu, Peijie Qiu, Mohammad Farazi, Keshav Nandakumar, Oana M Dumitrascu, Yalin Wang

Real-world non-mydriatic retinal fundus photography is prone to artifacts, imperfections, and low quality when certain ocular or systemic co-morbidities exist. Artifacts may result in inaccuracy or ambiguity in clinical diagnoses. In this paper, we proposed a simple but effective end-to-end framework for enhancing poor-quality retinal fundus images. Leveraging optimal transport theory, we proposed an unpaired image-to-image translation scheme for transporting low-quality images to their high-quality counterparts. We theoretically proved that a Generative Adversarial Network (GAN) model with a generator and discriminator is sufficient for this task. Furthermore, to mitigate the inconsistency of information between the low-quality images and their enhancements, an information consistency mechanism was proposed to maximally maintain structural consistency (optic discs, blood vessels, lesions) between the source and enhanced domains. Extensive experiments were conducted on the EyeQ dataset to demonstrate the superiority of our proposed method perceptually and quantitatively.
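
The unpaired enhancement scheme described above can be pictured with a minimal training loop: one generator maps low-quality images toward the high-quality domain, one discriminator provides the adversarial signal, and an extra term penalises structural drift between input and output. This is a hedged sketch, not the authors' implementation; the toy network shapes, the choice of an L1 penalty as the consistency term, and the weight lam are assumptions.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy convolutional enhancer; the paper's generator is more elaborate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Patch-level real/fake scores for high-quality vs. enhanced images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def training_step(G, D, opt_g, opt_d, low_q, high_q, lam=10.0):
    bce = nn.BCEWithLogitsLoss()
    # discriminator: real high-quality images vs. enhanced (fake) low-quality images
    fake = G(low_q).detach()
    d_real, d_fake = D(high_q), D(fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator: fool the discriminator while keeping structures close to the input
    enhanced = G(low_q)
    d_out = D(enhanced)
    loss_adv = bce(d_out, torch.ones_like(d_out))
    loss_cons = (enhanced - low_q).abs().mean()   # stand-in for the information consistency mechanism
    loss_g = loss_adv + lam * loss_cons
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# one step on random stand-in batches of unpaired low- and high-quality images
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
print(training_step(G, D, opt_g, opt_d, torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)))
```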

Citations: 1
SELF-SUPERVISED LEARNING WITH RADIOLOGY REPORTS, A COMPARATIVE ANALYSIS OF STRATEGIES FOR LARGE VESSEL OCCLUSION AND BRAIN CTA IMAGES.
Pub Date : 2023-04-01 Epub Date: 2023-09-01 DOI: 10.1109/isbi53787.2023.10230623
S Pachade, S Datta, Y Dong, S Salazar-Marioni, R Abdelkhaleq, A Niktabe, K Roberts, S A Sheth, L Giancardo

Scarcity of labels for medical images is a significant barrier for training representation learning approaches based on deep neural networks. This limitation is also present when using imaging data collected during routine clinical care and stored in picture archiving and communication systems (PACS), as these data rarely come with the high-quality labels required for medical image computing tasks. However, medical images extracted from PACS are commonly coupled with descriptive radiology reports that contain significant information and could be leveraged to pre-train imaging models, which could serve as starting points for further task-specific fine-tuning. In this work, we perform a head-to-head comparison of three different self-supervised strategies to pre-train the same imaging model on 3D brain computed tomography angiogram (CTA) images, with large vessel occlusion (LVO) detection as the downstream task. These strategies evaluate two natural language processing (NLP) approaches, one to extract 100 explicit radiology concepts (Rad-SpatialNet) and the other to create general-purpose radiology report embeddings (DistilBERT). In addition, we experiment with learning radiology concepts directly or by using a recent self-supervised learning approach (CLIP) that learns by ranking the distance between language and image vector embeddings. The LVO detection task was selected because it requires 3D imaging data, is clinically important, and requires the algorithm to learn outputs not explicitly stated in the radiology report. Pre-training was performed on an unlabeled dataset containing 1,542 3D CTA-report pairs. The downstream task was tested on a labeled dataset of 402 subjects for LVO. We find that pre-training with CLIP-based strategies improves the performance of the imaging model in detecting LVO compared to a model trained only on the labeled data. The best performance was achieved by pre-training using the explicit radiology concepts and the CLIP strategy.
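
The CLIP-style strategy referenced above pairs each CTA volume with its report and trains the two encoders so that matched image-report embeddings rank closest. The sketch below shows only that contrastive objective under common assumptions (a shared embedding dimension, a temperature of 0.07); the actual encoders, the Rad-SpatialNet concept extraction, and the DistilBERT embedding pipeline are not reproduced.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """image_emb, text_emb: (batch, dim) embeddings of matched CTA-report pairs."""
    img = F.normalize(image_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    logits = img @ txt.t() / temperature            # pairwise cosine similarities, scaled
    targets = torch.arange(img.size(0), device=img.device)
    # symmetric cross-entropy: each image should retrieve its own report, and vice versa
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# random stand-in embeddings for a batch of 8 CTA-report pairs
loss = clip_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```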

Citations: 0
SurfNN: Joint Reconstruction of Multiple Cortical Surfaces from Magnetic Resonance Images.
Pub Date : 2023-04-01 Epub Date: 2023-09-01 DOI: 10.1109/isbi53787.2023.10230488
Hao Zheng, Hongming Li, Yong Fan

To achieve fast, robust, and accurate reconstruction of the human cortical surfaces from 3D magnetic resonance images (MRIs), we develop a novel deep learning-based framework, referred to as SurfNN, to reconstruct simultaneously both inner (between white matter and gray matter) and outer (pial) surfaces from MRIs. Different from existing deep learning-based cortical surface reconstruction methods that either reconstruct the cortical surfaces separately or neglect the interdependence between the inner and outer surfaces, SurfNN reconstructs both the inner and outer cortical surfaces jointly by training a single network to predict a midthickness surface that lies at the center of the inner and outer cortical surfaces. The input of SurfNN consists of a 3D MRI and an initialization of the midthickness surface that is represented both implicitly as a 3D distance map and explicitly as a triangular mesh with spherical topology, and its output includes both the inner and outer cortical surfaces, as well as the midthickness surface. The method has been evaluated on a large-scale MRI dataset and demonstrated competitive cortical surface reconstruction performance.
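
As a small illustration of how a predicted surface can be scored against a reference without vertex correspondence, the sketch below computes a symmetric Chamfer distance between two vertex sets. Whether SurfNN trains with exactly this loss is an assumption; it is shown only as a common choice for mesh-prediction objectives of this kind.

```python
import torch

def chamfer_distance(pred_pts, ref_pts):
    """pred_pts: (N, 3) predicted surface vertices; ref_pts: (M, 3) reference vertices."""
    d = torch.cdist(pred_pts, ref_pts)                 # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# e.g. compare a predicted inner (white/gray) surface against a reference vertex cloud
loss_inner = chamfer_distance(torch.randn(1000, 3), torch.randn(1200, 3))
print(loss_inner.item())
```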

Citations: 0
End-to-end First Trimester Fetal Ultrasound Video Automated CRL and NT Segmentation.
Pub Date : 2022-04-28 DOI: 10.1109/ISBI52829.2022.9761400
Robail Yasrab, Zeyu Fu, Lior Drukker, Lok Hin Lee, He Zhao, Aris T Papageorghiou, Alison J Noble

This study presents a novel approach to automatic detection and segmentation of the Crown Rump Length (CRL) and Nuchal Translucency (NT), two essential measurements in the first trimester US scan. The proposed method automatically localises a standard plane within a video clip as defined by the UK Fetal Abnormality Screening Programme. A Nested Hourglass (NHG) based network performs semantic pixel-wise segmentation to extract NT and CRL structures. Our results show that the NHG network is faster (19.52% fewer GFLOPs than FCN32) and offers high pixel agreement (mean-IoU=80.74) with expert manual annotations.
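
The "hourglass" motif behind the NHG network is a recursive downsample-process-upsample block with a skip branch at every level. The sketch below is a generic, hypothetical version of that pattern with toy channel counts and a three-class segmentation head (background, NT, and CRL assumed); it is not the paper's architecture.

```python
import torch
import torch.nn as nn

class Hourglass(nn.Module):
    """Recursive downsample-process-upsample block with a skip branch at each level."""
    def __init__(self, depth, ch):
        super().__init__()
        self.skip = nn.Conv2d(ch, ch, 3, padding=1)
        self.down = nn.Conv2d(ch, ch, 3, stride=2, padding=1)
        self.inner = Hourglass(depth - 1, ch) if depth > 1 else nn.Conv2d(ch, ch, 3, padding=1)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, x):
        return self.skip(x) + self.up(self.inner(self.down(x)))

# pixel-wise segmentation head: background, NT and CRL classes assumed
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1),
    Hourglass(depth=3, ch=32),
    nn.Conv2d(32, 3, 1),
)
logits = model(torch.randn(1, 1, 256, 256))   # (1, 3, 256, 256) class scores per pixel
print(logits.shape)
```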

Citations: 0
First Trimester video Saliency Prediction using CLSTMU-NET with Stochastic Augmentation.
Pub Date : 2022-04-26 DOI: 10.1109/ISBI52829.2022.9761585
Elizaveta Savochkina, Lok Hin Lee, He Zhao, Lior Drukker, Aris T Papageorghiou, J Alison Noble

In this paper we develop a multi-modal video analysis algorithm to predict where a sonographer should look next. Our approach uses video and expert knowledge, defined by gaze tracking data, which is acquired during routine first-trimester fetal ultrasound scanning. Specifically, we propose a spatio-temporal convolutional LSTMU-Net neural network (cLSTMU-Net) for video saliency prediction with stochastic augmentation. The architecture design consists of a U-Net based encoder-decoder network and a cLSTM to take into account temporal information. We compare the performance of the cLSTMU-Net alongside spatial-only architectures for the task of predicting gaze in first trimester ultrasound videos. Our study dataset consists of 115 clinically acquired first trimester US videos and a total of 45,666 video frames. We adopt a Random Augmentation strategy (RA) from a stochastic augmentation policy search to improve model performance and reduce over-fitting. The proposed cLSTMU-Net using a video clip of 6 frames outperforms the baseline approach on all saliency metrics: KLD, SIM, NSS and CC (2.08, 0.28, 4.53 and 0.42 versus 2.16, 0.27, 4.34 and 0.39).
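
The four metrics quoted above (KLD, SIM, NSS, CC) are standard saliency measures. The sketch below shows how they are commonly computed from a predicted saliency map, a ground-truth gaze density map, and a binary fixation map; the exact normalisation used in the paper's evaluation is assumed.

```python
import numpy as np

def kld(pred, gt, eps=1e-7):
    """KL divergence of the ground-truth density from the predicted map (lower is better)."""
    p, q = pred / (pred.sum() + eps), gt / (gt.sum() + eps)
    return float(np.sum(q * np.log(eps + q / (p + eps))))

def sim(pred, gt, eps=1e-7):
    """Histogram intersection between the two normalised maps (higher is better)."""
    p, q = pred / (pred.sum() + eps), gt / (gt.sum() + eps)
    return float(np.minimum(p, q).sum())

def nss(pred, fixations):
    """Mean of the standardised saliency map at fixated pixels (higher is better)."""
    p = (pred - pred.mean()) / (pred.std() + 1e-7)
    return float(p[fixations > 0].mean())

def cc(pred, gt):
    """Linear correlation coefficient between the two maps (higher is better)."""
    return float(np.corrcoef(pred.ravel(), gt.ravel())[0, 1])

# toy example: a random prediction scored against a random gaze density and fixation map
rng = np.random.default_rng(0)
pred, gt = rng.random((64, 64)), rng.random((64, 64))
fix = (rng.random((64, 64)) > 0.99).astype(float)
print(kld(pred, gt), sim(pred, gt), nss(pred, fix), cc(pred, gt))
```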

Citations: 0
SELF-SEMANTIC CONTOUR ADAPTATION FOR CROSS MODALITY BRAIN TUMOR SEGMENTATION.
Pub Date : 2022-03-01 Epub Date: 2022-04-26 DOI: 10.1109/isbi52829.2022.9761629
Xiaofeng Liu, Fangxu Xing, Georges El Fakhri, Jonghye Woo

Unsupervised domain adaptation (UDA) between two significantly disparate domains to learn high-level semantic alignment is a crucial yet challenging task. To this end, in this work, we propose exploiting low-level edge information to facilitate the adaptation as a precursor task, which has a small cross-domain gap compared with semantic segmentation. The precise contour then provides spatial information to guide the semantic adaptation. More specifically, we propose a multi-task framework to learn a contouring adaptation network along with a semantic segmentation adaptation network, which takes both a magnetic resonance imaging (MRI) slice and its initial edge map as input. These two networks are jointly trained with source domain labels, and feature-level and edge-map-level adversarial learning is carried out for cross-domain alignment. In addition, self-entropy minimization is incorporated to further enhance segmentation performance. We evaluated our framework on the BraTS2018 database for cross-modality segmentation of brain tumors, showing the validity and superiority of our approach compared with competing methods.
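
The self-entropy minimisation term mentioned above can be written down compactly: on unlabeled target-domain slices, the segmentation network is pushed toward confident, low-entropy pixel-wise predictions. The sketch below shows only that term, with an assumed four-class output; the contouring branch and the adversarial alignment are not reproduced.

```python
import torch
import torch.nn.functional as F

def self_entropy_loss(logits):
    """logits: (batch, num_classes, H, W) segmentation scores on unlabeled target-domain slices."""
    probs = F.softmax(logits, dim=1)
    pixel_entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)   # per-pixel prediction entropy
    return pixel_entropy.mean()

# four output classes assumed for illustration
loss_ent = self_entropy_loss(torch.randn(2, 4, 128, 128))
print(loss_ent.item())
```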

Citations: 9
HIERARCHICAL BRAIN EMBEDDING USING EXPLAINABLE GRAPH LEARNING.
Pub Date : 2022-03-01 Epub Date: 2022-04-26 DOI: 10.1109/isbi52829.2022.9761543
Haoteng Tang, Lei Guo, Xiyao Fu, Benjamin Qu, Paul M Thompson, Heng Huang, Liang Zhan

Brain networks have been extensively studied in neuroscience, to better understand human behavior, and to identify and characterize distributed brain abnormalities in neurological and psychiatric conditions. Several deep graph learning models have been proposed for brain network analysis, yet most current models lack interpretability, which makes it hard to gain any heuristic biological insights into the results. In this paper, we propose a new explainable graph learning model, named hierarchical brain embedding (HBE), to extract brain network representations based on the network community structure, yielding interpretable hierarchical patterns. We apply our new method to predict aggressivity, rule-breaking, and other standardized behavioral scores from functional brain networks derived using ICA from 1,000 young healthy subjects scanned by the Human Connectome Project. Our results show that the proposed HBE outperforms several state-of-the-art graph learning methods in predicting behavioral measures, and demonstrates similar hierarchical brain network patterns associated with clinical symptoms.
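
One generic way to picture the community-based hierarchy is a pooling step that coarsens node features and connectivity according to a node-to-community assignment, producing one embedding per community. The sketch below is a hypothetical illustration of that operation, not the HBE model itself.

```python
import torch

def community_pool(x, adj, assign):
    """
    x:      (n_nodes, feat)    node features (e.g., regional time-series statistics)
    adj:    (n_nodes, n_nodes) weighted functional connectivity
    assign: (n_nodes,)         integer community label for each node
    """
    n_comm = int(assign.max()) + 1
    s = torch.zeros(x.size(0), n_comm).scatter_(1, assign.unsqueeze(1), 1.0)   # one-hot memberships
    x_pooled = (s.t() @ x) / s.sum(0, keepdim=True).t().clamp(min=1)           # mean feature per community
    adj_pooled = s.t() @ adj @ s                                               # community-level connectivity
    return x_pooled, adj_pooled

# 6 brain regions grouped into 3 communities (toy numbers)
x_c, adj_c = community_pool(torch.randn(6, 4), torch.rand(6, 6),
                            torch.tensor([0, 0, 1, 1, 2, 2]))
print(x_c.shape, adj_c.shape)   # (3, 4) and (3, 3)
```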

Citations: 0
GRAPH-BASED SMALL BOWEL PATH TRACKING WITH CYLINDRICAL CONSTRAINTS.
Pub Date : 2022-03-01 DOI: 10.1109/isbi52829.2022.9761423
Seung Yeon Shin, Sungwon Lee, Ronald M Summers

We present a new graph-based method for small bowel path tracking based on cylindrical constraints. A distinctive characteristic of the small bowel compared to other organs is the contact between parts of itself along its course, which, together with the indistinct appearance of the wall, makes path tracking difficult. When relying on low-level features like wall detection, the tracked path easily crosses over the walls. To circumvent this, a series of cylinders fitted along the course of the small bowel is used to guide the tracking toward more reliable directions. This is implemented as soft constraints using a new cost function. The proposed method is evaluated against ground-truth paths that are connected from the start to the end of the small bowel for 10 abdominal CT scans. The proposed method showed clear improvements over the baseline method in tracking the path without making an error. Improvements of 6.6% and 17.0% in terms of the tracked length were observed for two different settings related to the small bowel segmentation.
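
A hypothetical toy version of the graph formulation: each edge cost combines a wall-crossing term with a soft penalty for steps that deviate from the axis of the locally fitted cylinder, and the path is found with Dijkstra's algorithm. The cost weights, the wall probabilities, and the tiny graph below are illustrative assumptions, not the paper's actual cost function.

```python
import numpy as np
import networkx as nx

def edge_cost(p, q, wall_prob, cyl_axis, alpha=1.0, beta=2.0):
    """p, q: candidate 3D points; wall_prob: chance the step crosses a wall;
    cyl_axis: unit axis of the cylinder fitted locally along the bowel course."""
    step = np.asarray(q, float) - np.asarray(p, float)
    direction = step / (np.linalg.norm(step) + 1e-8)
    misalignment = 1.0 - abs(float(direction @ cyl_axis))   # 0 when the step follows the cylinder axis
    return alpha * wall_prob + beta * misalignment + float(np.linalg.norm(step))

# toy graph over a handful of candidate path points
pts = {0: (0, 0, 0), 1: (1, 0, 0), 2: (1, 1, 0), 3: (2, 1, 0)}
axis = np.array([1.0, 0.0, 0.0])
G = nx.Graph()
for a, b, wall in [(0, 1, 0.1), (1, 2, 0.8), (1, 3, 0.2), (2, 3, 0.1)]:
    G.add_edge(a, b, weight=edge_cost(pts[a], pts[b], wall, axis))
path = nx.dijkstra_path(G, 0, 3, weight="weight")   # favours low wall probability and axis-aligned steps
print(path)
```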

Citations: 2
INVESTIGATING THE EFFECT OF TAU DEPOSITION AND APOE ON HIPPOCAMPAL MORPHOMETRY IN ALZHEIMER'S DISEASE: A FEDERATED CHOW TEST MODEL.
Pub Date : 2022-03-01 DOI: 10.1109/isbi52829.2022.9761576
Jianfeng Wu, Yi Su, Eric M Reiman, Richard J Caselli, Kewei Chen, Paul M Thompson, Junwen Wang, Yalin Wang

Alzheimer's disease (AD) affects more than 1 in 9 people aged 65 and older and becomes an urgent public health concern as the global population ages. Tau tangles are the specific protein pathological hallmark of AD and play a crucial role in leading to dementia-related structural deformations observed in magnetic resonance imaging (MRI) scans. Volume loss of the hippocampus is mainly related to the development of AD. In addition, apolipoprotein E (APOE) also has significant effects on the risk of developing AD. However, few studies focus on integrating genotypes, MRI, and tau deposition to infer multimodal relationships. In this paper, we proposed a federated Chow test model to study the synergistic effects of APOE and tau on hippocampal morphometry. Our experimental results demonstrate that our model can detect differences in tau deposition and hippocampal atrophy among cohorts with different genotypes, and the subiculum and cornu ammonis 1 (CA1 subfield) were identified as hippocampal subregions where atrophy is strongly associated with abnormal tau in the homozygote cohort. Our model will provide novel insight into the neural mechanisms underlying the individual impact of APOE and tau deposition on brain imaging.
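
The Chow test at the core of the proposed model asks whether a single linear relationship (for example, a hippocampal measure regressed on tau burden) holds across two genotype groups or whether each group needs its own regression. Below is a plain, non-federated sketch of the classical statistic on synthetic data; the federated aggregation and the actual imaging covariates are not reproduced.

```python
import numpy as np
from scipy import stats

def chow_test(x1, y1, x2, y2):
    """F test of whether groups 1 and 2 share one linear regression of y on x."""
    def rss(x, y):
        X = np.column_stack([np.ones_like(x), x])          # intercept + slope
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return float(((y - X @ beta) ** 2).sum())
    k = 2                                                   # parameters per regression
    s_pooled = rss(np.concatenate([x1, x2]), np.concatenate([y1, y2]))
    s_split = rss(x1, y1) + rss(x2, y2)
    df2 = len(x1) + len(x2) - 2 * k
    f = ((s_pooled - s_split) / k) / (s_split / df2)
    return f, float(stats.f.sf(f, k, df2))                  # statistic and p-value

# synthetic example: the two "genotype groups" follow different slopes, so p should be small
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=50), rng.normal(size=60)
y1 = 2.0 * x1 + 0.1 * rng.normal(size=50)
y2 = 0.5 * x2 + 0.1 * rng.normal(size=60)
print(chow_test(x1, y1, x2, y2))
```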

Citations: 0
Ideal-Observer Computation with anthropomorphic phantoms using Markov chain Monte Carlo.
Pub Date : 2022-03-01 DOI: 10.1109/isbi52829.2022.9761579
Md Ashequr Rahman, Zitong Yu, Abhinav K Jha

In medical imaging, it is widely recognized that image quality should be objectively evaluated based on performance in clinical tasks. To evaluate performance in signal-detection tasks, the ideal observer (IO) is optimal but also challenging to compute in clinically realistic settings. Markov Chain Monte Carlo (MCMC)-based strategies have demonstrated the ability to compute the IO using pre-computed projections of an anatomical database. To evaluate image quality in clinically realistic scenarios, the observer performance should be measured for a realistic patient distribution. This implies that the anatomical database should also be derived from a realistic population. In this manuscript, we propose to advance the MCMC-based approach towards achieving these goals. We then use the proposed approach to study the effect of anatomical database size on IO computation for the task of detecting perfusion defects in simulated myocardial perfusion SPECT images. Our preliminary results provide evidence that the size of the anatomical database affects the computation of the IO.
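
A toy version of the MCMC ideal-observer computation: with a discrete database of pre-computed background projections and Gaussian measurement noise, the likelihood ratio is the expectation of the background-known-exactly likelihood ratio under the posterior over backgrounds given the signal-absent hypothesis, which Metropolis-Hastings sampling over database entries can estimate. All dimensions, the known signal profile, and the noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_db = 64, 200
database = rng.normal(size=(n_db, n_pix))          # pre-computed background projections
signal = np.zeros(n_pix); signal[:8] = 0.5         # known signal profile (toy)
sigma = 1.0                                        # Gaussian measurement noise level

def log_like_absent(g, b):
    """log pr(g | background b, signal absent) under i.i.d. Gaussian noise."""
    return -0.5 * np.sum((g - b) ** 2) / sigma**2

def lambda_bke(g, b):
    """Background-known-exactly likelihood ratio pr(g | b, present) / pr(g | b, absent)."""
    return np.exp(log_like_absent(g, b + signal) - log_like_absent(g, b))

def mcmc_likelihood_ratio(g, n_iter=5000):
    idx = rng.integers(n_db)
    samples = []
    for _ in range(n_iter):
        prop = rng.integers(n_db)                  # propose another database background (uniform prior)
        log_accept = log_like_absent(g, database[prop]) - log_like_absent(g, database[idx])
        if np.log(rng.random()) < log_accept:      # Metropolis acceptance for pr(b | g, absent)
            idx = prop
        samples.append(lambda_bke(g, database[idx]))
    return float(np.mean(samples))                 # ideal-observer test statistic estimate for image g

g = database[3] + signal + sigma * rng.normal(size=n_pix)   # one signal-present image
print(mcmc_likelihood_ratio(g))
```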

Citations: 0