
Proceedings. IEEE International Symposium on Biomedical Imaging: Latest Publications

FEDSLD: FEDERATED LEARNING WITH SHARED LABEL DISTRIBUTION FOR MEDICAL IMAGE CLASSIFICATION.
Pub Date : 2022-03-01 Epub Date: 2022-04-26 DOI: 10.1109/isbi52829.2022.9761404
Jun Luo, Shandong Wu

Federated learning (FL) enables collaborative training of a joint model across multiple medical centers while keeping the data decentralized due to privacy concerns. However, federated optimization often suffers from the heterogeneity of the data distribution across medical centers. In this work, we propose Federated Learning with Shared Label Distribution (FedSLD) for classification tasks, a method that adjusts the contribution of each data sample to the local objective during optimization using knowledge of the clients' label distributions, mitigating the instability brought by data heterogeneity. We conduct extensive experiments on four publicly available image datasets with different types of non-IID data distributions. Our results show that FedSLD achieves better convergence performance than the leading FL optimization algorithms compared, increasing the test accuracy by up to 5.50 percentage points.
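
As a rough illustration of the sample re-weighting idea, the sketch below re-weights a client's cross-entropy terms using its local label counts against the shared (global) label distribution; the specific weighting rule, the function names, and the toy counts are assumptions for illustration, not the authors' exact FedSLD formulation.

```python
# Hedged sketch (not the authors' exact FedSLD rule): re-weight each sample's
# cross-entropy term using the client's local label counts against the shared
# global label distribution, so locally over-represented classes contribute less.
import torch
import torch.nn.functional as F

def label_distribution_weights(local_counts, global_counts):
    """Per-class weights from local vs. shared (global) label distributions."""
    local_p = local_counts / local_counts.sum()
    global_p = global_counts / global_counts.sum()
    w = global_p / local_p.clamp(min=1e-8)   # up-weight locally rare classes
    return w / w.mean()                      # keep the average weight near 1

def weighted_local_loss(logits, labels, class_weights):
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (class_weights[labels] * per_sample).mean()

# Hypothetical client with a skewed local label distribution.
local_counts = torch.tensor([90.0, 10.0])
global_counts = torch.tensor([50.0, 50.0])
w = label_distribution_weights(local_counts, global_counts)
loss = weighted_local_loss(torch.randn(8, 2), torch.randint(0, 2, (8,)), w)
```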

Citations: 0
MAXIMIZING UNAMBIGUOUS VELOCITY RANGE IN PHASE-CONTRAST MRI WITH MULTIPOINT ENCODING.
Pub Date : 2022-03-01 Epub Date: 2022-04-26 DOI: 10.1109/isbi52829.2022.9761589
Shen Zhao, Rizwan Ahmad, Lee C Potter

In phase-contrast magnetic resonance imaging (PC-MRI), the velocity of spins at a voxel is encoded in the image phase. The strength of the velocity encoding gradient offers a trade-off between the velocity-to-noise ratio (VNR) and the extent of phase aliasing. Phase differences provide invariance to an unknown background phase. Existing literature proposes processing a reduced set of phase difference equations, simplifying the phase unwrapping problem at the expense of VNR, the unaliased range of velocities, or both. Here, we demonstrate that the fullest unambiguous range of velocities is a parallelepiped, which can be accessed by jointly processing all phase differences. The joint processing also maximizes the velocity-to-noise ratio. This simple characterization of the unambiguous parallelepiped opens the possibility of analyzing new multi-point acquisitions for an enhanced range of unaliased velocities; two examples are given.
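
To illustrate what joint processing of all phase differences can look like, the sketch below grid-searches for the candidate velocity that best explains every wrapped phase difference at once; the encoding moments, noise level, and grid-search estimator are illustrative assumptions rather than the paper's algorithm.

```python
# Hedged sketch: jointly fit one velocity to all wrapped phase differences,
# rather than unwrapping a reduced set. The encoding moments (rad per cm/s),
# the noise level, and the grid-search estimator are illustrative assumptions.
import numpy as np

def wrap(phase):
    return (phase + np.pi) % (2 * np.pi) - np.pi

def estimate_velocity(phase_diffs, moments, v_grid):
    """Candidate velocity whose predicted wrapped phase differences best match
    all measurements jointly (least squares on the wrapped residuals)."""
    cost = [np.sum(wrap(phase_diffs - moments * v) ** 2) for v in v_grid]
    return v_grid[int(np.argmin(cost))]

true_v = 143.0                                # cm/s, hypothetical
moments = np.array([np.pi / 50, np.pi / 80])  # two velocity encodings
measured = wrap(moments * true_v + 0.05 * np.random.randn(2))
v_hat = estimate_velocity(measured, moments, np.linspace(-400, 400, 8001))
```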

Citations: 0
INTRACRANIAL VESSEL WALL SEGMENTATION FOR ATHEROSCLEROTIC PLAQUE QUANTIFICATION.
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/ISBI48211.2021.9434018
Hanyue Zhou, Jiayu Xiao, Zhaoyang Fan, Dan Ruan

Intracranial vessel wall segmentation is critical for the quantitative assessment of intracranial atherosclerosis based on magnetic resonance vessel wall imaging. This work improves on a previous 2D deep learning segmentation network by using 1) a 2.5D structure to balance network complexity and regularize geometry continuity; 2) a UNET++ model to achieve structure adaptation; 3) an additional approximated Hausdorff distance (HD) loss in the objective to enhance geometry conformality; and 4) a commonly used morphological measure of plaque burden, the normalized wall index (NWI), to match the clinical endpoint. The modified network achieved Dice similarity coefficients of 0.9172 ± 0.0598 and 0.7833 ± 0.0867, HD of 0.3252 ± 0.5071 mm and 0.4914 ± 0.5743 mm, and mean surface distance of 0.0940 ± 0.0781 mm and 0.1408 ± 0.0917 mm for the lumen and vessel wall, respectively. These results compare favorably to those obtained by the original 2D UNET on all segmentation metrics. Additionally, the proposed segmentation network reduced the mean absolute error in NWI from 0.0732 ± 0.0294 to 0.0725 ± 0.0333.
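
Since the clinical endpoint is the normalized wall index, a minimal sketch of computing NWI from binary lumen and wall masks is given below, assuming the common definition of wall area divided by total vessel (lumen plus wall) area; the toy masks are hypothetical.

```python
# Hedged sketch: normalized wall index (NWI) from binary lumen and wall masks,
# taking the common definition wall area / (lumen area + wall area). Toy masks only.
import numpy as np

def normalized_wall_index(lumen_mask, wall_mask):
    wall_area = np.count_nonzero(wall_mask)
    lumen_area = np.count_nonzero(lumen_mask)
    total = wall_area + lumen_area
    return wall_area / total if total > 0 else np.nan

lumen = np.zeros((64, 64), dtype=bool); lumen[24:40, 24:40] = True
wall = np.zeros((64, 64), dtype=bool); wall[20:44, 20:44] = True
wall &= ~lumen                                # wall ring excludes the lumen
print(normalized_wall_index(lumen, wall))
```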

Citations: 6
Multi-Modal Learning from Video, Eye Tracking, and Pupillometry for Operator Skill Characterization in Clinical Fetal Ultrasound.
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/ISBI48211.2021.9433863
Harshita Sharma, Lior Drukker, Aris T Papageorghiou, J Alison Noble

This paper presents a novel multi-modal learning approach for automated skill characterization of obstetric ultrasound operators using heterogeneous spatio-temporal sensory cues, namely scan video, eye-tracking data, and pupillometric data, acquired in the clinical environment. We address pertinent challenges, such as combining heterogeneous, small-scale, and variable-length sequential datasets, to learn deep convolutional neural networks in real-world scenarios. We propose spatial encoding for multi-modal analysis using sonography standard plane images, spatial gaze maps, gaze trajectory images, and pupillary response images. We present and compare five multi-modal learning network architectures using late, intermediate, hybrid, and tensor fusion. We build models for the Heart and the Brain scanning tasks, and performance evaluation suggests that multi-modal learning networks outperform uni-modal networks, with the best-performing model achieving accuracies of 82.4% (Brain task) and 76.4% (Heart task) for the operator skill classification problem.
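
The sketch below illustrates two of the fusion styles named in the abstract (intermediate feature fusion and late decision fusion) with tiny stand-in encoders for two modalities; the layer sizes, the two-modality setup, and the two-class head are assumptions, not the paper's architectures.

```python
# Hedged sketch of two fusion styles named in the abstract, with tiny stand-in
# encoders for two modalities; layer sizes and the two-class head are assumptions.
import torch
import torch.nn as nn

class IntermediateFusion(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.video_enc = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU())
        self.gaze_enc = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU())
        self.head = nn.Linear(64 + 64, n_classes)      # fuse mid-level features

    def forward(self, video, gaze):
        feats = torch.cat([self.video_enc(video), self.gaze_enc(gaze)], dim=1)
        return self.head(feats)

def late_fusion(logits_per_modality):
    """Decision-level fusion: average per-modality predictions."""
    return torch.stack(logits_per_modality).mean(dim=0)

video, gaze = torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32)
print(IntermediateFusion()(video, gaze).shape)         # torch.Size([4, 2])
```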

Citations: 10
DUAL-CYCLE CONSTRAINED BIJECTIVE VAE-GAN FOR TAGGED-TO-CINE MAGNETIC RESONANCE IMAGE SYNTHESIS.
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/isbi48211.2021.9433852
Xiaofeng Liu, Fangxu Xing, Jerry L Prince, Aaron Carass, Maureen Stone, Georges El Fakhri, Jonghye Woo

Tagged magnetic resonance imaging (MRI) is a widely used imaging technique for measuring tissue deformation in moving organs. Due to tagged MRI's intrinsic low anatomical resolution, another matching set of cine MRI with higher resolution is sometimes acquired in the same scanning session to facilitate tissue segmentation, thus adding extra time and cost. To mitigate this, in this work, we propose a novel dual-cycle constrained bijective VAE-GAN approach to carry out tagged-to-cine MR image synthesis. Our method is based on a variational autoencoder backbone with cycle reconstruction constrained adversarial training to yield accurate and realistic cine MR images given tagged MR images. Our framework has been trained, validated, and tested using 1,768, 416, and 1,560 subject-independent paired slices of tagged and cine MRI from twenty healthy subjects, respectively, demonstrating superior performance over the comparison methods. Our method can potentially be used to reduce the extra acquisition time and cost, while maintaining the same workflow for further motion analyses.
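
A minimal sketch of a cycle-reconstruction term of the kind described (tagged to cine and back, and vice versa) is shown below; the L1 penalty, the loss weight, and the stand-in convolutional generators are assumptions, as the paper's generators are VAE-GAN encoder-decoder pairs.

```python
# Hedged sketch of a cycle-reconstruction term: penalize failure to recover
# each image after a round trip through both generators. The L1 penalty, the
# weight, and the stand-in generators are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def cycle_loss(gen_t2c, gen_c2t, tagged, cine, weight=10.0):
    """Penalize failure to recover each image after a round trip."""
    t_cycle = gen_c2t(gen_t2c(tagged))   # tagged -> cine -> tagged
    c_cycle = gen_t2c(gen_c2t(cine))     # cine -> tagged -> cine
    return weight * (F.l1_loss(t_cycle, tagged) + F.l1_loss(c_cycle, cine))

# Stand-in generators; the paper's generators are VAE-GAN encoder/decoder pairs.
gen_t2c = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 3, padding=1))
gen_c2t = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 3, padding=1))
loss = cycle_loss(gen_t2c, gen_c2t, torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
```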

Citations: 0
PROSTATE CANCER DIAGNOSIS WITH SPARSE BIOPSY DATA AND IN PRESENCE OF LOCATION UNCERTAINTY.
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/isbi48211.2021.9433892
Alireza Mehrtash, Tina Kapur, Clare M Tempany, Purang Abolmaesumi, William M Wells

Prostate cancer is the second most prevalent cancer in men worldwide. Deep neural networks have been successfully applied for prostate cancer diagnosis in magnetic resonance images (MRI). Pathology results from biopsy procedures are often used as ground truth to train such systems. There are several sources of noise in creating ground truth from biopsy data including sampling and registration errors. We propose: 1) A fully convolutional neural network (FCN) to produce cancer probability maps across the whole prostate gland in MRI; 2) A Gaussian weighted loss function to train the FCN with sparse biopsy locations; 3) A probabilistic framework to model biopsy location uncertainty and adjust cancer probability given the deep model predictions. We assess the proposed method on 325 biopsy locations from 203 patients. We observe that the proposed loss improves the area under the receiver operating characteristic curve and the biopsy location adjustment improves the sensitivity of the models.
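
The sketch below shows one plausible form of a Gaussian-weighted loss for sparse point labels, where pixels near a biopsy site dominate the objective and distant pixels contribute little; the sigma, the binary-cross-entropy choice, and the toy biopsy coordinates are assumptions, not the paper's exact loss.

```python
# Hedged sketch of a Gaussian-weighted loss for sparse biopsy labels: pixels
# near a biopsy site contribute most, distant pixels contribute little. The
# sigma, the BCE choice, and the toy biopsy coordinates are assumptions.
import torch
import torch.nn.functional as F

def gaussian_weight_map(shape, centers, sigma=8.0):
    ys, xs = torch.meshgrid(torch.arange(shape[0]), torch.arange(shape[1]), indexing="ij")
    w = torch.zeros(shape)
    for cy, cx in centers:
        d2 = (ys - cy) ** 2 + (xs - cx) ** 2
        w = torch.maximum(w, torch.exp(-d2 / (2 * sigma ** 2)))
    return w

def weighted_bce(pred_logits, target, weight_map):
    loss = F.binary_cross_entropy_with_logits(pred_logits, target, reduction="none")
    return (weight_map * loss).sum() / weight_map.sum()

weights = gaussian_weight_map((128, 128), centers=[(40, 52), (90, 70)])
loss = weighted_bce(torch.randn(128, 128), torch.zeros(128, 128), weights)
```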

Citations: 0
REGION SPECIFIC AUTOMATIC QUALITY ASSURANCE FOR MRI-DERIVED CORTICAL SEGMENTATIONS.
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/isbi48211.2021.9433755
Shruti Gadewar, Alyssa H Zhu, Sophia I Thomopoulos, Zhuocheng Li, Iyad Ba Gari, Piyush Maiti, Paul M Thompson, Neda Jahanshad

Quality control (QC) is a vital step for all scientific data analyses and is critically important in the biomedical sciences. Image segmentation is a common task in medical image analysis, and automated tools to segment many regions from human brain MRIs are now well established. However, these methods do not always give anatomically correct labels. Traditional methods for QC tend to reject statistical outliers, which may not necessarily be inaccurate. Here, we make use of a large database of over 12,000 brain images that contain 68 parcellations of the human cortex, each of which was assessed for anatomical accuracy by a human rater. We trained three machine learning models to determine if a region was anatomically accurate (labeled 'pass' or 'fail') and tested the performance on an independent dataset. We found good performance for the majority of labeled regions. This work will facilitate more anatomically accurate large-scale multi-site research.
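
As a generic illustration of region-wise pass/fail classification, the sketch below trains a random-forest classifier on synthetic per-region features; the feature set, the model choice, and the data are stand-ins, not the three models or features used in the paper.

```python
# Hedged sketch: a generic pass/fail classifier on per-region features
# (e.g. volume, thickness, surface area). Features, model, and data are
# synthetic stand-ins, not the paper's exact setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                   # stand-in region-wise features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # stand-in pass/fail labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```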

Citations: 0
RAP-NET: COARSE-TO-FINE MULTI-ORGAN SEGMENTATION WITH SINGLE RANDOM ANATOMICAL PRIOR.
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/ISBI48211.2021.9433975
Ho Hin Lee, Yucheng Tang, Shunxing Bao, Richard G Abramson, Yuankai Huo, Bennett A Landman

Performing coarse-to-fine abdominal multi-organ segmentation facilitates extraction of high-resolution segmentation while minimizing the loss of spatial contextual information. However, current coarse-to-fine approaches require a significant number of models to perform single-organ segmentation. We propose a coarse-to-fine pipeline, RAP-Net, which starts from the extraction of the global prior context of multiple organs from 3D volumes using a low-resolution coarse network, followed by a fine phase that uses a single refined model to segment all abdominal organs instead of multiple organ-specific models. We combine the anatomical prior with corresponding extracted patches to preserve the anatomical locations and boundary information for performing high-resolution segmentation across all organs in a single model. To train and evaluate our method, a clinical research cohort consisting of 100 patient volumes with 13 well-annotated organs is used. We tested our algorithms with 4-fold cross-validation and computed the Dice score for evaluating the segmentation performance of the 13 organs. Our proposed method using a single auto-context outperforms the state-of-the-art on 13 models with an average Dice score of 84.58% versus 81.69% (p<0.0001).
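
The sketch below illustrates the patch-plus-prior idea: upsample a coarse multi-organ prediction, then build one fine-model input per organ by stacking the image patch with that organ's prior mask; the toy volume, the 2x zoom, and the stand-in coarse labels are assumptions, not the RAP-Net implementation.

```python
# Hedged sketch of the patch-plus-prior idea: upsample a coarse multi-organ
# prediction, then build one fine-model input per organ by stacking the image
# patch with that organ's prior mask. Toy volume and coarse labels only.
import numpy as np
from scipy.ndimage import zoom, center_of_mass

volume = np.random.rand(64, 64, 64)                    # toy full-resolution volume
coarse_pred = np.zeros((32, 32, 32), dtype=int)        # stand-in coarse labels
coarse_pred[10:16, 10:16, 10:16] = 1                   # "organ 1"
coarse_pred[20:26, 18:24, 16:22] = 2                   # "organ 2"

prior = zoom(coarse_pred, 2, order=0)                  # nearest-neighbour upsample
half = 16
fine_inputs = {}
for organ in (1, 2):
    cz, cy, cx = (int(round(c)) for c in center_of_mass(prior == organ))
    sl = tuple(slice(max(c - half, 0), c + half) for c in (cz, cy, cx))
    fine_inputs[organ] = np.stack([volume[sl], (prior[sl] == organ).astype(float)])
print({k: v.shape for k, v in fine_inputs.items()})    # image + prior channels
```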

Citations: 0
ROBUST WHITE MATTER HYPERINTENSITY SEGMENTATION ON UNSEEN DOMAIN.
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/ISBI48211.2021.9434034
Xingchen Zhao, Anthony Sicilia, Davneet S Minhas, Erin E O'Connor, Howard J Aizenstein, William E Klunk, Dana L Tudorascu, Seong Jae Hwang

Typical machine learning frameworks heavily rely on an underlying assumption that training and test data follow the same distribution. In medical imaging, where datasets are increasingly acquired from multiple sites or scanners, this identical-distribution assumption often fails to hold due to systematic variability induced by site- or scanner-dependent factors. Therefore, we cannot simply expect a model trained on a given dataset to consistently work well, or generalize, on a dataset from another distribution. In this work, we address this problem, investigating the application of machine learning models to unseen medical imaging data. Specifically, we consider the challenging case of Domain Generalization (DG), where we train a model without any knowledge about the testing distribution. That is, we train on samples from a set of distributions (sources) and test on samples from a new, unseen distribution (target). We focus on the task of white matter hyperintensity (WMH) prediction using the multi-site WMH Segmentation Challenge dataset and our local in-house dataset. We identify how two mechanically distinct DG approaches, namely domain adversarial learning and mix-up, have theoretical synergy. Then, we show drastic improvements of WMH prediction on an unseen target domain.
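
Of the two DG components, mix-up has a compact standard form; the sketch below mixes image and mask pairs drawn from different source domains with a Beta-sampled coefficient. The Beta(0.2, 0.2) parameter and the segmentation-style (mask) mixing are assumptions, not the paper's exact configuration.

```python
# Hedged sketch of mix-up: convex combinations of image pairs and their masks,
# here standing in for samples from two different source domains. The alpha
# value and the mask-mixing choice are illustrative assumptions.
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Convex combination of two samples and their labels/masks."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Toy 2D slices and masks standing in for samples from two source domains.
img_a, msk_a = np.random.rand(128, 128), (np.random.rand(128, 128) > 0.5).astype(float)
img_b, msk_b = np.random.rand(128, 128), (np.random.rand(128, 128) > 0.5).astype(float)
x_mix, y_mix = mixup(img_a, msk_a, img_b, msk_b)
```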

Citations: 7
CHARACTERIZATION OF SPATIAL DYNAMICS OF FMRI DATA IN WHITE MATTER USING DIFFUSION-INFORMED WHITE MATTER HARMONICS.
Pub Date : 2021-04-01 Epub Date: 2021-05-25 DOI: 10.1109/isbi48211.2021.9433958
Hamid Behjat, Iman Aganj, David Abramian, Anders Eklund, Carl-Fredrik Westin

In this work, we leverage the Laplacian eigenbasis of voxel-wise white matter (WM) graphs derived from diffusion-weighted MRI data, dubbed WM harmonics, to characterize the spatial structure of WM fMRI data. Our motivation for such a characterization is based on studies showing that WM fMRI data exhibit a spatial correlational anisotropy that coincides with underlying fiber patterns. By quantifying the energy content of WM fMRI data associated with subsets of WM harmonics across multiple spectral bands, we show that the data exhibit notable subtle spatial modulations under functional load that are not manifested during rest. WM harmonics provide a novel means to study the spatial dynamics of WM fMRI data, in such a way that the analysis is informed by the underlying anatomical structure.
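
The sketch below illustrates the spectral-energy computation on a toy graph: project a signal onto the Laplacian eigenvectors ("harmonics") and sum squared coefficients per spectral band. The chain graph, the three bands, and the synthetic signal are assumptions standing in for the diffusion-informed voxel-wise WM graph of the paper.

```python
# Hedged sketch: graph Fourier transform of a signal on a toy graph, followed
# by per-band spectral energy. The chain graph, band split, and signal are
# illustrative assumptions, not the diffusion-informed WM graph.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.csgraph import laplacian
from scipy.linalg import eigh

n = 50
adjacency = diags([np.ones(n - 1), np.ones(n - 1)], [-1, 1]).toarray()  # chain graph
L = laplacian(adjacency, normed=True)
eigvals, harmonics = eigh(L)                   # columns ordered by graph frequency

signal = np.sin(np.linspace(0, 4 * np.pi, n)) + 0.1 * np.random.randn(n)
coeffs = harmonics.T @ signal                  # graph Fourier transform
bands = np.array_split(np.arange(n), 3)        # low / mid / high frequency bands
energy = [np.sum(coeffs[b] ** 2) for b in bands]
print(energy)
```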

Citations: 4