
Proceedings. IEEE International Symposium on Biomedical Imaging — Latest Publications

EXPLORING BACKDOOR ATTACKS IN OFF-THE-SHELF UNSUPERVISED DOMAIN ADAPTATION FOR SECURING CARDIAC MRI-BASED DIAGNOSIS.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635403
Xiaofeng Liu, Fangxu Xing, Hanna Gaggin, C-C Jay Kuo, Georges El Fakhri, Jonghye Woo

The off-the-shelf model for unsupervised domain adaptation (OSUDA) has been introduced to protect patient data privacy and the intellectual property of the source domain, since adaptation proceeds without access to the labeled source domain data. Yet an off-the-shelf diagnosis model that was deliberately compromised by a backdoor attack during source domain training can act as a parasite host, disseminating the backdoor to the target domain model during the OSUDA stage. Because the source domain training data can be neither accessed nor controlled, OSUDA leaves the target domain model highly vulnerable to such attacks. To counter this, we propose to quantify the channel-wise backdoor sensitivity via a Lipschitz constant and to explicitly eliminate the backdoor infection by overwriting the backdoor-related channel kernels with random initialization. Furthermore, we employ an auxiliary model alongside the full source model to ensure accurate pseudo-labeling, taking advantage of the controllable, clean target training data in OSUDA. We validate our framework on a multi-center, multi-vendor, and multi-disease (M&M) cardiac dataset. Our findings show that the target model is susceptible to backdoor attacks during OSUDA and that our defense mechanism effectively mitigates the infection of target domain victims.
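As a rough illustration of the defense described in this abstract, the NumPy sketch below ranks the output channels of a convolutional kernel by a simple Lipschitz proxy (the L2 norm of each flattened channel kernel) and overwrites the most sensitive ones with fresh random values. The function names and the He-style re-initialization are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def channel_lipschitz_scores(weights):
    """Per-output-channel sensitivity proxy: the L2 norm of each channel's
    flattened kernel (shape out_ch x (in_ch*k*k)) bounds how strongly that
    channel can amplify an input (trigger) perturbation."""
    flat = weights.reshape(weights.shape[0], -1)
    return np.linalg.norm(flat, axis=1)

def reinit_suspect_channels(weights, k, seed=0):
    """Overwrite the k most sensitive channel kernels with fresh He-style
    random values, discarding any backdoor behavior they may carry."""
    rng = np.random.default_rng(seed)
    suspects = np.argsort(channel_lipschitz_scores(weights))[-k:]
    fan_in = int(np.prod(weights.shape[1:]))
    defended = weights.copy()
    defended[suspects] = rng.normal(0.0, np.sqrt(2.0 / fan_in),
                                    size=(k,) + weights.shape[1:])
    return defended, suspects
```

After this step the defended model would be fine-tuned on the clean target data so the re-initialized channels recover useful features.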

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11483644/pdf/
Citations: 0
HIGHER ORDER GAUGE EQUIVARIANT CONVOLUTIONS FOR NEURODEGENERATIVE DISORDER CLASSIFICATION.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635204
Gianfranco Cortés, Yue Yu, Robin Chen, Melissa Armstrong, David Vaillancourt, Baba C Vemuri

Diffusion MRI (dMRI) has shown significant promise in capturing subtle changes in neural microstructure caused by neurodegenerative disorders. In this paper, we propose a novel end-to-end compound architecture for processing raw dMRI data. It consists of a 3D convolutional kernel network (CKN) that extracts macro-architectural features across voxels and a gauge equivariant Volterra network (GEVNet) on the sphere that extracts micro-architectural features from within voxels. The use of higher order convolutions enables our architecture to model spatially extended nonlinear interactions across the applied diffusion-sensitizing magnetic field gradients. The compound network is globally equivariant to 3D translations and locally equivariant to 3D rotations. We demonstrate the efficacy of our model on the classification of neurodegenerative disorders.
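The "higher order convolutions" referred to above are Volterra-style filters, whose quadratic term lets a single filter model interactions between pairs of input samples. The 1D sketch below is a generic second-order Volterra convolution for intuition only; it is not the gauge equivariant spherical GEVNet of the paper:

```python
import numpy as np

def volterra_conv1d(x, w1, w2):
    """Second-order Volterra filter: each output mixes a linear response
    w1·p with a quadratic response p·W2·p over the local patch p, so the
    filter can model nonlinear interactions between nearby samples."""
    k = len(w1)
    out = np.empty(len(x) - k + 1)
    for i in range(len(out)):
        p = x[i:i + k]
        out[i] = w1 @ p + p @ w2 @ p
    return out
```

With w2 set to zero this reduces to an ordinary (first-order) convolution, which is why such layers strictly generalize standard CNN filters.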

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11610404/pdf/
Citations: 0
MODALITY-AGNOSTIC LEARNING FOR MEDICAL IMAGE SEGMENTATION USING MULTI-MODALITY SELF-DISTILLATION.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635881
Qisheng He, Nicholas Summerfield, Ming Dong, Carri Glide-Hurst

In medical image segmentation, although multi-modality training is possible, clinical translation is challenged by the limited availability of all image types for a given patient. Different from typical segmentation models, modality-agnostic (MAG) learning trains a single model on all available modalities while remaining input-agnostic, allowing that one model to produce accurate segmentations for any combination of modalities. In this paper, we propose a novel framework, MAG learning through Multi-modality Self-distillation (MAG-MS), for medical image segmentation. MAG-MS distills knowledge from the fusion of multiple modalities and applies it to enhance representation learning for the individual modalities. This makes it an adaptable and efficient solution for handling limited modalities at test time. Our extensive experiments on benchmark datasets demonstrate superior segmentation accuracy, MAG robustness, and efficiency compared with current state-of-the-art methods.
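The self-distillation step described above can be written as a temperature-softened KL divergence between the fused-modality teacher's predictions and a single-modality student's. The sketch below is a minimal, generic form of this loss (temperature T and the function name are assumptions, not MAG-MS specifics):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_kl(student_logits, teacher_logits, T=2.0):
    """KL divergence from the fused multi-modality teacher's softened class
    distribution to a single-modality student's, averaged over samples;
    minimizing it pushes the student to mimic the fused teacher."""
    p_t = softmax(np.asarray(teacher_logits) / T)
    p_s = softmax(np.asarray(student_logits) / T)
    return float(np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)))
```

The loss is zero only when student and teacher agree, which is what lets a single network stay accurate when some modalities are missing at test time.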

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11673955/pdf/
Citations: 0
HNAS-Reg: Hierarchical Neural Architecture Search for Deformable Medical Image Registration.
Pub Date : 2023-04-01 Epub Date: 2023-09-01 DOI: 10.1109/isbi53787.2023.10230534
Jiong Wu, Yong Fan

Convolutional neural networks (CNNs) have been widely used to build deep learning models for medical image registration, but manually designed network architectures are not necessarily optimal. This paper presents a hierarchical NAS framework (HNAS-Reg), consisting of both convolutional operation search and network topology search, to identify the optimal network architecture for deformable medical image registration. To mitigate the computational overhead and memory constraints, a partial channel strategy is utilized without losing optimization quality. Experiments on three datasets, consisting of 636 T1-weighted magnetic resonance images (MRIs), have demonstrated that the proposed method can build a deep learning model with improved image registration accuracy and reduced model size, compared with state-of-the-art image registration approaches, including one representative traditional approach and two unsupervised learning-based approaches.
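The "partial channel strategy" mentioned in the abstract, in the spirit of PC-DARTS, routes only a 1/k fraction of the channels through the softmax-weighted mixture of candidate operations and lets the rest bypass unchanged, cutting search-time memory and compute. A minimal sketch under that assumption (toy element-wise candidate ops in place of real conv/pool operations):

```python
import numpy as np

def partial_channel_mix(x, alphas, ops, k=4):
    """Apply the architecture-weighted mixture of candidate ops to the
    first 1/k of the channels (axis 0); concatenate the untouched rest."""
    c = x.shape[0] // k
    head, tail = x[:c], x[c:]
    w = np.exp(alphas - np.max(alphas))
    w = w / w.sum()                      # softmax over architecture params
    mixed = sum(wi * op(head) for wi, op in zip(w, ops))
    return np.concatenate([mixed, tail], axis=0)
```

During search, the architecture parameters `alphas` are optimized jointly with the network weights; only the highest-weighted op per edge is kept in the final architecture.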

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10544790/pdf/
Citations: 0
Human Not in the Loop: Objective Sample Difficulty Measures for Curriculum Learning.
Pub Date : 2023-04-01 Epub Date: 2023-09-01 DOI: 10.1109/isbi53787.2023.10230597
Zhengbo Zhou, Jun Luo, Dooman Arefan, Gene Kitamura, Shandong Wu

Curriculum learning is a learning method that trains models in a meaningful order, from easier to harder samples. A key here is to devise automatic and objective difficulty measures of samples. In the medical domain, previous work applied domain knowledge from human experts to qualitatively assess the classification difficulty of medical images to guide curriculum learning, which requires extra annotation effort, relies on subjective human experience, and may introduce bias. In this work, we propose a new automated curriculum learning technique that uses the variance of gradients (VoG) to compute an objective difficulty measure for each sample, and we evaluate its effects on elbow fracture classification from X-ray images. Specifically, we used VoG as a metric to rank each sample by classification difficulty, where high VoG scores indicate cases that are harder to classify, to guide the curriculum training process. We compared the proposed technique to a baseline (without curriculum learning), a previous method that used human annotations of classification difficulty, and anti-curriculum learning. Our experiment results showed comparable or higher performance on the binary and multi-class bone fracture classification tasks.
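The VoG score can be sketched directly from its definition: for each sample, take the variance of its input gradient across training checkpoints and average over input dimensions. The array layout below is an assumption for illustration:

```python
import numpy as np

def variance_of_gradients(grad_snapshots):
    """Per-sample VoG difficulty score: variance of the input gradient
    across checkpoints, averaged over input dimensions. High VoG flags
    samples whose gradients keep changing during training, i.e. harder
    cases. grad_snapshots: (n_checkpoints, n_samples, n_features)."""
    per_feature_var = grad_snapshots.var(axis=0)
    return per_feature_var.mean(axis=-1)
```

Ranking samples by this score then gives the easy-to-hard ordering used to schedule the curriculum.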

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10602195/pdf/nihms-1891600.pdf
Citations: 0
Deep Clustering Survival Machines with Interpretable Expert Distributions.
Pub Date : 2023-04-01 Epub Date: 2023-09-01 DOI: 10.1109/isbi53787.2023.10230844
Bojian Hou, Hongming Li, Zhicheng Jiao, Zhen Zhou, Hao Zheng, Yong Fan

We develop deep clustering survival machines to simultaneously predict survival information and characterize data heterogeneity that is not typically modeled by conventional survival analysis methods. Modeling the timing information of survival data generatively with a mixture of parametric distributions, referred to as expert distributions, our method discriminatively learns per-instance weights of the expert distributions from each instance's features, so that each instance's survival information is characterized by a weighted combination of the learned expert distributions. Extensive experiments on both real and synthetic datasets demonstrate that our method obtains promising clustering results and competitive time-to-event prediction performance.
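The generative side of such a model evaluates an event time under a weighted mixture of parametric experts. The sketch below assumes Weibull experts (a common choice for survival times, not stated in the abstract) and fixed weights; in the full model the weights would be predicted per instance from its features:

```python
import numpy as np

def weibull_pdf(t, shape, scale):
    """Weibull density f(t) = (k/λ)(t/λ)^(k-1) exp(-(t/λ)^k)."""
    return (shape / scale) * (t / scale) ** (shape - 1.0) \
        * np.exp(-(t / scale) ** shape)

def mixture_log_likelihood(t, weights, shapes, scales):
    """Log-likelihood of an observed event time under a weighted mixture
    of Weibull 'expert' distributions."""
    pdf = weibull_pdf(t, np.asarray(shapes), np.asarray(scales))
    return float(np.log(np.dot(weights, pdf)))
```

Maximizing this likelihood fits the experts, while the per-instance weight vectors double as soft cluster assignments, which is what makes the experts interpretable.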

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10544788/pdf/
Citations: 0
Brain Hemisphere Dissimilarity, a Self-Supervised Learning Approach for alpha-synucleinopathies prediction with FDG PET.
Pub Date : 2023-04-01 Epub Date: 2023-09-01 DOI: 10.1109/isbi53787.2023.10230560
S Tripathi, P Mattioli, C Liguori, A Chiaravalloti, D Arnaldi, L Giancardo

Idiopathic REM sleep behavior disorder (iRBD) is a significant biomarker for the development of alpha-synucleinopathies, such as Parkinson's disease (PD) or dementia with Lewy bodies (DLB). Methods that identify patterns in iRBD patients can help predict future conversion to these diseases during the long prodromal phase, when symptoms are non-specific. Such methods are essential for disease management and clinical trial recruitment. Brain PET scans with the 18F-FDG radiotracer have recently shown promise; however, the scarcity of longitudinal data and PD/DLB conversion information makes representation learning approaches such as deep convolutional networks infeasible when trained in a supervised manner. In this work, we propose a self-supervised learning strategy that learns features by comparing the brain hemispheres of iRBD non-converter subjects, which allows a convolutional network to be pre-trained on a small amount of data. We introduce a loss function called hemisphere dissimilarity loss (HDL), an extension of the Barlow Twins loss, that promotes invariant and non-redundant features for brain hemispheres of the same subject, and the opposite for hemispheres of different subjects. This loss enables pre-training a network without any information about the disease; the network is then used to generate full-brain feature vectors that are fine-tuned on two downstream tasks: follow-up conversion, and the type of conversion (PD or DLB), using baseline 18F-FDG PET. In our results, we find that the HDL outperforms the variational autoencoder with different forms of inputs.
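The Barlow Twins objective that HDL extends drives the cross-correlation matrix of two embedding batches toward the identity: the diagonal term enforces invariance, the off-diagonal term reduces redundancy. A minimal sketch of that base loss, here applied to left- and right-hemisphere embeddings of the same subjects (the per-subject/cross-subject sign flips of HDL itself are not reproduced):

```python
import numpy as np

def hemisphere_barlow_loss(za, zb, lam=5e-3):
    """Barlow-Twins-style loss on two batches of embeddings (rows =
    subjects): standardize each feature, form the cross-correlation
    matrix, and penalize its deviation from the identity."""
    za = (za - za.mean(axis=0)) / (za.std(axis=0) + 1e-12)
    zb = (zb - zb.mean(axis=0)) / (zb.std(axis=0) + 1e-12)
    c = za.T @ zb / za.shape[0]
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)
    off_diag = np.sum((c - np.diag(np.diag(c))) ** 2)
    return float(on_diag + lam * off_diag)
```

For hemispheres of different subjects, HDL inverts the objective so their features are pushed apart rather than aligned.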

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10496490/pdf/
Citations: 0
Intermediate Deformable Image Registration via Windowed Cross-Correlation.
Pub Date : 2023-04-01 DOI: 10.1109/isbi53787.2023.10230715
Iman Aganj, Bruce Fischl

In population and longitudinal imaging studies that employ deformable image registration, more accurate results can be achieved by initializing deformable registration with the results of affine registration where global misalignments have been considerably reduced. Such affine registration, however, is limited to linear transformations and it cannot account for large nonlinear anatomical variations, such as those between pre- and post-operative images or across different subject anatomies. In this work, we introduce a new intermediate deformable image registration (IDIR) technique that recovers large deformations via windowed cross-correlation, and provide an efficient implementation based on the fast Fourier transform. We evaluate our method on 2D X-ray and 3D magnetic resonance images, demonstrating its ability to align substantial nonlinear anatomical variations within a few iterations.
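The windowed cross-correlation at the core of the method can be computed efficiently with the FFT via the correlation theorem. The sketch below is an illustrative assumption, not the paper's implementation: it recovers the integer displacement that best aligns a moving window to a fixed one, which is the building block a per-window displacement field would be assembled from:

```python
import numpy as np

def fft_cross_correlation(fixed, moving):
    """Circular cross-correlation of two equally sized 2D windows via the
    FFT; the argmax gives the integer shift s such that rolling `moving`
    by s best matches `fixed`."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    xcorr = np.real(np.fft.ifft2(F * np.conj(M)))
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Wrap shifts past half the window size to negative displacements.
    return tuple(int(s - d) if s > d // 2 else int(s)
                 for s, d in zip(peak, xcorr.shape))
```

Because the correlation is circular, displacements larger than half the window cannot be distinguished, which is one reason to compute it over local windows rather than the whole image.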

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10485808/pdf/nihms-1872292.pdf
Citations: 0
Foveal avascular zone segmentation using deep learning-driven image-level optimization and fundus photographs.
Pub Date : 2023-04-01 Epub Date: 2023-09-01 DOI: 10.1109/isbi53787.2023.10230410
I Coronado, S Pachade, H Dawoodally, S Salazar Marioni, J Yan, R Abdelkhaleq, M Bahrainian, A Jagolino-Cole, R Channa, S A Sheth, L Giancardo

The foveal avascular zone (FAZ) is a retinal area devoid of capillaries and is associated with multiple retinal pathologies and visual acuity. Optical Coherence Tomography Angiography (OCT-A) is a very effective means of visualizing retinal vascular and avascular areas, but its complex optics limit availability, so its use remains largely confined to research settings. Fundus photography, on the other hand, is widely available and often adopted in population studies. In this work, we test the feasibility of estimating the FAZ from fundus photos using three different approaches. The first two rely on pixel-level and image-level FAZ information to segment FAZ pixels and regress FAZ area, respectively. The third is a training-mask-free pipeline that combines saliency maps with an active contours approach to segment FAZ pixels while being trained only on image-level measures of the FAZ areas. This enables training FAZ segmentation methods without manual alignment of fundus and OCT-A images, a time-consuming process that limits the data available for training. Segmentation methods trained on pixel-level labels and image-level labels had good agreement with masks from a human grader (DICE of 0.45 and 0.4, respectively). The results indicate the feasibility of using fundus images as a proxy to estimate the FAZ when angiography data is not available.
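The DICE scores quoted above measure overlap between a predicted mask and a grader's mask. For reference, a minimal sketch of the metric (the empty-mask convention is an assumption):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """DICE overlap between two binary masks: 2|A∩B| / (|A| + |B|);
    1.0 means perfect agreement, 0.0 means no overlap."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)
```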

{"title":"Foveal avascular zone segmentation using deep learning-driven image-level optimization and fundus photographs.","authors":"I Coronado, S Pachade, H Dawoodally, S Salazar Marioni, J Yan, R Abdelkhaleq, M Bahrainian, A Jagolino-Cole, R Channa, S A Sheth, L Giancardo","doi":"10.1109/isbi53787.2023.10230410","DOIUrl":"10.1109/isbi53787.2023.10230410","url":null,"abstract":"<p><p>The foveal avascular zone (FAZ) is a retinal area devoid of capillaries and associated with multiple retinal pathologies and visual acuity. Optical Coherence Tomography Angiography (OCT-A) is a very effective means of visualizing retinal vascular and avascular areas, but its use remains limited to research settings due to its complex optics limiting availability. On the other hand, fundus photography is widely available and often adopted in population studies. In this work, we test the feasibility of estimating the FAZ from fundus photos using three different approaches. The first two approaches rely on pixel-level and image-level FAZ information to segment FAZ pixels and regress FAZ area, respectively. The third is a training mask-free pipeline combining saliency maps with an active contours approach to segment FAZ pixels while being trained on image-level measures of the FAZ areas. This enables training FAZ segmentation methods without manual alignment of fundus and OCT-A images, a time-consuming process, which limits the dataset that can be used for training. Segmentation methods trained on pixel-level labels and image-level labels had good agreement with masks from a human grader (respectively DICE of 0.45 and 0.4). Results indicate the feasibility of using fundus images as a proxy to estimate the FAZ when angiography data is not available.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. 
IEEE International Symposium on Biomedical Imaging","volume":"2023 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10498664/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10264596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Predicting Alzheimer's Disease and Quantifying Asymmetric Degeneration of the Hippocampus Using Deep Learning of Magnetic Resonance Imaging Data.
Pub Date: 2023-04-01 Epub Date: 2023-09-01 DOI: 10.1109/isbi53787.2023.10230830
Xi Liu, Hongming Li, Yong Fan

To quantify lateral asymmetric degeneration of the hippocampus for early prediction of Alzheimer's disease (AD), we develop a deep learning (DL) model that learns informative features from hippocampal magnetic resonance imaging (MRI) data to predict AD conversion in a time-to-event prediction framework. The DL model is trained on unilateral hippocampal data with an autoencoder-based regularizer, enabling quantification of lateral asymmetry in the hippocampus's power to predict AD conversion and identification of the optimal strategy for integrating bilateral hippocampal MRI data to predict AD. Experimental results on MRI scans of 1307 subjects (817 for training and 490 for validation) demonstrate that the left hippocampus predicts AD better than the right hippocampus, and that integrating the bilateral hippocampal data with the instance-based DL method improved AD prediction compared with alternative predictive modeling strategies.
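The autoencoder-based regularizer described above amounts to adding a reconstruction penalty to the supervised prediction loss. The following minimal NumPy sketch shows that general shape only; the linear autoencoder, weight shapes, and λ value are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def reconstruction_loss(x: np.ndarray, w_enc: np.ndarray, w_dec: np.ndarray) -> float:
    """MSE of a linear autoencoder: x -> z = x @ w_enc -> x_hat = z @ w_dec."""
    x_hat = (x @ w_enc) @ w_dec
    return float(np.mean((x - x_hat) ** 2))

def regularized_loss(pred_loss: float, x: np.ndarray,
                     w_enc: np.ndarray, w_dec: np.ndarray,
                     lam: float = 0.1) -> float:
    """Time-to-event prediction loss plus a weighted reconstruction penalty."""
    return pred_loss + lam * reconstruction_loss(x, w_enc, w_dec)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))            # 16 hippocampal feature vectors
w_enc = rng.standard_normal((8, 4)) * 0.1   # encoder weights (8 -> 4)
w_dec = rng.standard_normal((4, 8)) * 0.1   # decoder weights (4 -> 8)
total = regularized_loss(0.7, x, w_enc, w_dec)  # > 0.7: penalty is non-negative
```

In training, both the prediction head and the autoencoder weights would be optimized jointly, so the penalty pushes the learned hippocampal features to remain reconstructable rather than overfitting the conversion labels.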

Cited by: 0
Journal: Proceedings. IEEE International Symposium on Biomedical Imaging