EXPLORING BACKDOOR ATTACKS IN OFF-THE-SHELF UNSUPERVISED DOMAIN ADAPTATION FOR SECURING CARDIAC MRI-BASED DIAGNOSIS
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635403
Xiaofeng Liu, Fangxu Xing, Hanna Gaggin, C-C Jay Kuo, Georges El Fakhri, Jonghye Woo
The off-the-shelf model for unsupervised domain adaptation (OSUDA) has been introduced to protect patient data privacy and the intellectual property of the source domain, without requiring access to the labeled source domain data. Yet an off-the-shelf diagnosis model that was deliberately compromised by backdoor attacks during the source domain training phase can function as a parasite host, disseminating the backdoor to the target domain model during the OSUDA stage. Because the target site can neither access nor control the source domain training data, OSUDA can leave the target domain model highly vulnerable to such attacks. To sidestep this issue, we propose to quantify the channel-wise backdoor sensitivity via a Lipschitz constant and to explicitly eliminate the backdoor infection by overwriting the backdoor-related channel kernels with random initialization. Furthermore, we propose to employ an auxiliary model alongside the full source model to ensure accurate pseudo-labeling, taking advantage of the controllable, clean target training data in OSUDA. We validate our framework using a multi-center, multi-vendor, and multi-disease (M&M) cardiac dataset. Our findings suggest that the target model is susceptible to backdoor attacks during OSUDA, and that our defense mechanism effectively mitigates the infection of target domain victims.
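The channel-screening defense lends itself to a compact illustration. Below is a minimal PyTorch sketch, not the authors' exact procedure: each output channel's kernel norm serves as a Lipschitz-constant proxy for backdoor sensitivity, and the top-ranked channels are overwritten with random initialization; the number of channels to reset (`top_k`) is a hypothetical hyperparameter.

```python
# Minimal sketch (assumed procedure, not the paper's exact algorithm): rank conv
# channels by a Lipschitz-style sensitivity proxy and re-initialize the most
# suspicious kernels.
import torch
import torch.nn as nn

def channel_lipschitz_scores(conv: nn.Conv2d) -> torch.Tensor:
    # Proxy upper bound: the norm of each output-channel kernel, computed on
    # the flattened (in_ch * kh * kw) weight slice.
    w = conv.weight.detach()               # (out_ch, in_ch, kh, kw)
    flat = w.flatten(start_dim=1)          # (out_ch, in_ch*kh*kw)
    return torch.linalg.vector_norm(flat, ord=2, dim=1)

def reinit_suspicious_channels(conv: nn.Conv2d, top_k: int) -> None:
    scores = channel_lipschitz_scores(conv)
    suspect = torch.topk(scores, top_k).indices  # most backdoor-sensitive channels
    with torch.no_grad():
        fresh = torch.empty_like(conv.weight[suspect])
        nn.init.kaiming_normal_(fresh)           # overwrite with random init
        conv.weight[suspect] = fresh

conv = nn.Conv2d(16, 32, kernel_size=3)
reinit_suspicious_channels(conv, top_k=4)
```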
{"title":"EXPLORING BACKDOOR ATTACKS IN OFF-THE-SHELF UNSUPERVISED DOMAIN ADAPTATION FOR SECURING CARDIAC MRI-BASED DIAGNOSIS.","authors":"Xiaofeng Liu, Fangxu Xing, Hanna Gaggin, C-C Jay Kuo, Georges El Fakhri, Jonghye Woo","doi":"10.1109/isbi56570.2024.10635403","DOIUrl":"10.1109/isbi56570.2024.10635403","url":null,"abstract":"<p><p>The off-the-shelf model for unsupervised domain adaptation (OSUDA) has been introduced to protect patient data privacy and intellectual property of the source domain without access to the labeled source domain data. Yet, an off-the-shelf diagnosis model, deliberately compromised by backdoor attacks during the source domain training phase, can function as a parasite-host, disseminating the backdoor to the target domain model during the OSUDA stage. Because of limitations in accessing or controlling the source domain training data, OSUDA can make the target domain model highly vulnerable and susceptible to prominent attacks. To sidestep this issue, we propose to quantify the channel-wise backdoor sensitivity via a Lipschitz constant and, explicitly, eliminate the backdoor infection by overwriting the backdoor-related channel kernels with random initialization. Furthermore, we propose to employ an auxiliary model with a full source model to ensure accurate pseudo-labeling, taking into account the controllable, clean target training data in OSUDA. We validate our framework using a multi-center, multi-vendor, and multi-disease (M&M) cardiac dataset. Our findings suggest that the target model is susceptible to backdoor attacks during OSUDA, and our defense mechanism effectively mitigates the infection of target domain victims.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11483644/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142482683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HIGHER ORDER GAUGE EQUIVARIANT CONVOLUTIONS FOR NEURODEGENERATIVE DISORDER CLASSIFICATION
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635204
Gianfranco Cortés, Yue Yu, Robin Chen, Melissa Armstrong, David Vaillancourt, Baba C Vemuri
Diffusion MRI (dMRI) has shown significant promise in capturing subtle changes in neural microstructure caused by neurodegenerative disorders. In this paper, we propose a novel end-to-end compound architecture for processing raw dMRI data. It consists of a 3D convolutional kernel network (CKN) that extracts macro-architectural features across voxels and a gauge equivariant Volterra network (GEVNet) on the sphere that extracts micro-architectural features from within voxels. The use of higher order convolutions enables our architecture to model spatially extended nonlinear interactions across the applied diffusion-sensitizing magnetic field gradients. The compound network is globally equivariant to 3D translations and locally equivariant to 3D rotations. We demonstrate the efficacy of our model on the classification of neurodegenerative disorders.
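To make the higher-order convolution idea concrete, here is a minimal sketch of a second-order (Volterra) convolution block with a low-rank quadratic term; it is a generic planar stand-in, not the paper's gauge equivariant spherical layer.

```python
# Minimal sketch of a second-order Volterra convolution: a linear term plus a
# low-rank quadratic interaction term sum_q (W_q * x)(V_q * x). Illustrative only.
import torch
import torch.nn as nn

class Volterra2d(nn.Module):
    def __init__(self, in_ch, out_ch, rank=2, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.linear = nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad)
        # Low-rank factorization of the quadratic Volterra kernel.
        self.w = nn.ModuleList(nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad)
                               for _ in range(rank))
        self.v = nn.ModuleList(nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad)
                               for _ in range(rank))

    def forward(self, x):
        y = self.linear(x)                 # first-order term
        for w_q, v_q in zip(self.w, self.v):
            y = y + w_q(x) * v_q(x)        # second-order interaction term
        return y

x = torch.randn(1, 8, 32, 32)
print(Volterra2d(8, 16)(x).shape)          # torch.Size([1, 16, 32, 32])
```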
{"title":"HIGHER ORDER GAUGE EQUIVARIANT CONVOLUTIONS FOR NEURODEGENERATIVE DISORDER CLASSIFICATION.","authors":"Gianfranco Cortés, Yue Yu, Robin Chen, Melissa Armstrong, David Vaillancourt, Baba C Vemuri","doi":"10.1109/isbi56570.2024.10635204","DOIUrl":"10.1109/isbi56570.2024.10635204","url":null,"abstract":"<p><p>Diffusion MRI (dMRI) has shown significant promise in capturing subtle changes in neural microstructure caused by neurodegenerative disorders. In this paper, we propose a novel end-to-end compound architecture for processing raw dMRI data. It consists of a 3D convolutional kernel network (CKN) that extracts macro-architectural features across voxels and a gauge equivariant Volterra network (GEVNet) on the sphere that extracts micro-architectural features from within voxels. The use of higher order convolutions enables our architecture to model spatially extended nonlinear interactions across the applied diffusion-sensitizing magnetic field gradients. The compound network is globally equivariant to 3D translations and locally equivariant to 3D rotations. We demonstrate the efficacy of our model on the classification of neurodegenerative disorders.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11610404/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142775550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MODALITY-AGNOSTIC LEARNING FOR MEDICAL IMAGE SEGMENTATION USING MULTI-MODALITY SELF-DISTILLATION
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635881
Qisheng He, Nicholas Summerfield, Ming Dong, Carri Glide-Hurst
In medical image segmentation, although multi-modality training is possible, clinical translation is challenged by the limited availability of all image types for a given patient. Unlike typical segmentation models, modality-agnostic (MAG) learning trains a single model on all available modalities while remaining input-agnostic, allowing one model to produce accurate segmentations given any combination of modalities. In this paper, we propose a novel framework, MAG learning through Multi-modality Self-distillation (MAG-MS), for medical image segmentation. MAG-MS distills knowledge from the fusion of multiple modalities and applies it to enhance representation learning for individual modalities, making it an adaptable and efficient solution for handling limited modalities at test time. Our extensive experiments on benchmark datasets demonstrate segmentation accuracy, MAG robustness, and efficiency superior to current state-of-the-art methods.
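The self-distillation mechanism can be sketched compactly: one forward pass with all modalities fused serves as the teacher for passes where only a subset of modalities is present. The sketch below assumes a single network taking channel-concatenated modalities and zero-fills the missing ones; the names and the exact loss weighting are illustrative, not the paper's formulation.

```python
# Minimal sketch of multi-modality self-distillation: the fused forward pass
# teaches single-modality forward passes of the same network via soft-label KL.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between temperature-softened teacher and student predictions.
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def mag_ms_step(model, modalities, labels):
    # modalities: list of (N, C_i, H, W) tensors; model takes their concatenation.
    full = torch.cat(modalities, dim=1)
    teacher_logits = model(full)
    loss = F.cross_entropy(teacher_logits, labels)
    for i, m in enumerate(modalities):
        # Zero out missing modalities so one network handles any input subset.
        subset = [m if j == i else torch.zeros_like(mm)
                  for j, mm in enumerate(modalities)]
        student_logits = model(torch.cat(subset, dim=1))
        loss = loss + F.cross_entropy(student_logits, labels)
        loss = loss + distillation_loss(student_logits, teacher_logits.detach())
    return loss
```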
{"title":"MODALITY-AGNOSTIC LEARNING FOR MEDICAL IMAGE SEGMENTATION USING MULTI-MODALITY SELF-DISTILLATION.","authors":"Qisheng He, Nicholas Summerfield, Ming Dong, Carri Glide-Hurst","doi":"10.1109/isbi56570.2024.10635881","DOIUrl":"10.1109/isbi56570.2024.10635881","url":null,"abstract":"<p><p>In medical image segmentation, although multi-modality training is possible, clinical translation is challenged by the limited availability of all image types for a given patient. Different from typical segmentation models, modality-agnostic (MAG) learning trains a single model based on all available modalities but remains input-agnostic, allowing a single model to produce accurate segmentation given any modality combinations. In this paper, we propose a novel frame-work, MAG learning through Multi-modality Self-distillation (MAG-MS), for medical image segmentation. MAG-MS distills knowledge from the fusion of multiple modalities and applies it to enhance representation learning for individual modalities. This makes it an adaptable and efficient solution for handling limited modalities during testing scenarios. Our extensive experiments on benchmark datasets demonstrate its superior segmentation accuracy, MAG robustness, and efficiency than the current state-of-the-art methods.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11673955/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142904143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HNAS-Reg: Hierarchical Neural Architecture Search for Deformable Medical Image Registration
Pub Date: 2023-04-01 | Epub Date: 2023-09-01 | DOI: 10.1109/isbi53787.2023.10230534
Jiong Wu, Yong Fan
Convolutional neural networks (CNNs) have been widely used to build deep learning models for medical image registration, but manually designed network architectures are not necessarily optimal. This paper presents a hierarchical NAS framework (HNAS-Reg), consisting of both convolutional operation search and network topology search, to identify the optimal network architecture for deformable medical image registration. To mitigate computational overhead and memory constraints, a partial channel strategy is utilized without losing optimization quality. Experiments on three datasets, comprising 636 T1-weighted magnetic resonance images (MRIs), have demonstrated that the proposed method can build a deep learning model with improved image registration accuracy and reduced model size, compared with state-of-the-art image registration approaches, including one representative traditional approach and two unsupervised learning-based approaches.
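The partial channel strategy can be illustrated with a DARTS-style mixed operation in which only 1/K of the channels pass through the weighted candidate operations, reducing memory and compute during the search. The sketch below is a generic simplification: the candidate operation set is illustrative, and the channel shuffle used in PC-DARTS is omitted.

```python
# Minimal sketch of a partial-channel mixed operation for differentiable NAS:
# 1/K of the channels go through the softmax-weighted candidate ops, the rest bypass.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialChannelMixedOp(nn.Module):
    def __init__(self, channels, k=4):
        super().__init__()
        self.k = k
        c = channels // k
        self.ops = nn.ModuleList([
            nn.Conv2d(c, c, 3, padding=1),   # illustrative candidate ops
            nn.Conv2d(c, c, 5, padding=2),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture weights

    def forward(self, x):
        c = x.shape[1] // self.k
        active, bypass = x[:, :c], x[:, c:]  # partial channel split
        weights = F.softmax(self.alpha, dim=0)
        mixed = sum(w * op(active) for w, op in zip(weights, self.ops))
        return torch.cat([mixed, bypass], dim=1)

print(PartialChannelMixedOp(16)(torch.randn(2, 16, 8, 8)).shape)
```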
{"title":"HNAS-Reg: Hierarchical Neural Architecture Search for Deformable Medical Image Registration.","authors":"Jiong Wu, Yong Fan","doi":"10.1109/isbi53787.2023.10230534","DOIUrl":"10.1109/isbi53787.2023.10230534","url":null,"abstract":"<p><p>Convolutional neural networks (CNNs) have been widely used to build deep learning models for medical image registration, but manually designed network architectures are not necessarily optimal. This paper presents a hierarchical NAS framework (HNAS-Reg), consisting of both convolutional operation search and network topology search, to identify the optimal network architecture for deformable medical image registration. To mitigate the computational overhead and memory constraints, a partial channel strategy is utilized without losing optimization quality. Experiments on three datasets, consisting of 636 T1-weighted magnetic resonance images (MRIs), have demonstrated that the proposal method can build a deep learning model with improved image registration accuracy and reduced model size, compared with state-of-the-art image registration approaches, including one representative traditional approach and two unsupervised learning-based approaches.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2023 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10544790/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41172564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Not in the Loop: Objective Sample Difficulty Measures for Curriculum Learning
Pub Date: 2023-04-01 | Epub Date: 2023-09-01 | DOI: 10.1109/isbi53787.2023.10230597
Zhengbo Zhou, Jun Luo, Dooman Arefan, Gene Kitamura, Shandong Wu
Curriculum learning is a method that trains models in a meaningful order, from easier to harder samples. A key challenge is devising automatic and objective difficulty measures for samples. In the medical domain, previous work applied domain knowledge from human experts to qualitatively assess the classification difficulty of medical images to guide curriculum learning, which requires extra annotation effort, relies on subjective human experience, and may introduce bias. In this work, we propose a new automated curriculum learning technique that uses the variance of gradients (VoG) to compute an objective difficulty measure for each sample, and we evaluate its effects on elbow fracture classification from X-ray images. Specifically, we used VoG as a metric to rank each sample by classification difficulty, where high VoG scores indicate more difficult cases, to guide the curriculum training process. We compared the proposed technique to a baseline (without curriculum learning), a previous method that used human annotations of classification difficulty, and anti-curriculum learning. Our experimental results showed comparable or higher performance on the binary and multi-class bone fracture classification tasks.
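A minimal sketch of a VoG-style difficulty score follows: per-sample input gradients are collected at several training checkpoints, and their variance across checkpoints is averaged into one scalar per sample. This sketch differentiates the loss rather than the true-class logit, and the checkpoint handling is a simplifying assumption.

```python
# Minimal sketch of a variance-of-gradients (VoG) difficulty measure: high variance
# of input gradients across checkpoints marks a sample as harder. Illustrative setup.
import torch
import torch.nn.functional as F

def input_gradient(model, x, y):
    # Gradient of the classification loss w.r.t. the input pixels.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    return torch.autograd.grad(loss, x)[0]       # (N, C, H, W)

def vog_scores(checkpoint_models, x, y):
    # checkpoint_models: the same network loaded at several training checkpoints.
    grads = torch.stack([input_gradient(m, x, y) for m in checkpoint_models])
    # Variance over checkpoints, averaged over pixels: one difficulty score per sample.
    return grads.var(dim=0).mean(dim=(1, 2, 3))  # (N,)
```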
{"title":"Human Not in the Loop: Objective Sample Difficulty Measures for Curriculum Learning.","authors":"Zhengbo Zhou, Jun Luo, Dooman Arefan, Gene Kitamura, Shandong Wu","doi":"10.1109/isbi53787.2023.10230597","DOIUrl":"10.1109/isbi53787.2023.10230597","url":null,"abstract":"<p><p>Curriculum learning is a learning method that trains models in a meaningful order from easier to harder samples. A key here is to devise automatic and objective difficulty measures of samples. In the medical domain, previous work applied domain knowledge from human experts to qualitatively assess classification difficulty of medical images to guide curriculum learning, which requires extra annotation efforts, relies on subjective human experience, and may introduce bias. In this work, we propose a new automated curriculum learning technique using the variance of gradients (VoG) to compute an objective difficulty measure of samples and evaluated its effects on elbow fracture classification from X-ray images. Specifically, we used VoG as a metric to rank each sample in terms of the classification difficulty, where high VoG scores indicate more difficult cases for classification, to guide the curriculum training process We compared the proposed technique to a baseline (without curriculum learning), a previous method that used human annotations on classification difficulty, and anti-curriculum learning. Our experiment results showed comparable and higher performance for the binary and multi-class bone fracture classification tasks.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2023 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10602195/pdf/nihms-1891600.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"54232739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Clustering Survival Machines with Interpretable Expert Distributions
Pub Date: 2023-04-01 | DOI: 10.1109/isbi53787.2023.10230844
Bojian Hou, Hongming Li, Zhicheng Jiao, Zhen Zhou, Hao Zheng, Yong Fan
We develop deep clustering survival machines to simultaneously predict survival information and characterize data heterogeneity that is not typically modeled by conventional survival analysis methods. By modeling timing information of survival data generatively with a mixture of parametric distributions, referred to as expert distributions, our method learns weights of the expert distributions for individual instances based on their features discriminatively, such that each instance's survival information can be characterized by a weighted combination of the learned expert distributions. Extensive experiments on both real and synthetic datasets have demonstrated that our method is capable of obtaining promising clustering results and competitive time-to-event prediction performance.
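As a rough illustration of the generative-discriminative split, the sketch below pairs a discriminative gating network, which maps features to softmax weights over K experts, with generative Weibull experts whose mixture density is evaluated at the event time. It handles only uncensored instances (censored ones would use the survival function instead), and all parameter names are illustrative assumptions.

```python
# Minimal sketch of a mixture-of-experts survival likelihood: instance-specific
# softmax weights over K Weibull experts, uncensored negative log-likelihood only.
import torch
import torch.nn as nn

class ClusteringSurvival(nn.Module):
    def __init__(self, in_dim, k=3):
        super().__init__()
        self.gate = nn.Linear(in_dim, k)        # discriminative expert weights
        self.log_shape = nn.Parameter(torch.zeros(k))
        self.log_scale = nn.Parameter(torch.zeros(k))

    def neg_log_likelihood(self, x, t):
        w = torch.softmax(self.gate(x), dim=1)  # (N, K)
        shape = self.log_shape.exp()            # Weibull shape k
        scale = self.log_scale.exp()            # Weibull scale lambda
        z = t.unsqueeze(1) / scale              # (N, K)
        # Weibull density per expert: (k/lambda) z^{k-1} exp(-z^k)
        pdf = (shape / scale) * z.pow(shape - 1) * torch.exp(-z.pow(shape))
        return -(w * pdf).sum(dim=1).clamp_min(1e-12).log().mean()

model = ClusteringSurvival(in_dim=10)
print(model.neg_log_likelihood(torch.randn(5, 10), torch.rand(5) + 0.1))
```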
{"title":"Deep Clustering Survival Machines with Interpretable Expert Distributions.","authors":"Bojian Hou, Hongming Li, Zhicheng Jiao, Zhen Zhou, Hao Zheng, Yong Fan","doi":"10.1109/isbi53787.2023.10230844","DOIUrl":"10.1109/isbi53787.2023.10230844","url":null,"abstract":"<p><p>We develop deep clustering survival machines to simultaneously predict survival information and characterize data heterogeneity that is not typically modeled by conventional survival analysis methods. By modeling timing information of survival data <i>generatively</i> with a mixture of parametric distributions, referred to as expert distributions, our method learns weights of the expert distributions for individual instances based on their features <i>discriminatively</i> such that each instance's survival information can be characterized by a weighted combination of the learned expert distributions. Extensive experiments on both real and synthetic datasets have demonstrated that our method is capable of obtaining promising clustering results and competitive time-to-event predicting performance.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2023 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10544788/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41167287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain Hemisphere Dissimilarity, a Self-Supervised Learning Approach for alpha-synucleinopathies prediction with FDG PET
Pub Date: 2023-04-01 | Epub Date: 2023-09-01 | DOI: 10.1109/isbi53787.2023.10230560
S Tripathi, P Mattioli, C Liguori, A Chiaravalloti, D Arnaldi, L Giancardo
Idiopathic REM sleep behavior disorder (iRBD) is a significant biomarker for the development of alpha-synucleinopathies, such as Parkinson's disease (PD) or dementia with Lewy bodies (DLB). Methods that identify patterns in iRBD patients can help predict future conversion to these diseases during the long prodromal phase, when symptoms are non-specific. Such methods are essential for disease management and clinical trial recruitment. Brain PET scans with the 18F-FDG radiotracer have recently shown promise; however, the scarcity of longitudinal data and PD/DLB conversion information makes representation learning approaches such as deep convolutional networks infeasible to train in a supervised manner. In this work, we propose a self-supervised learning strategy that learns features by comparing the brain hemispheres of iRBD non-convertor subjects, which allows pre-training a convolutional network in a small-data regime. We introduce a loss function, the hemisphere dissimilarity loss (HDL), which extends the Barlow Twins loss by promoting invariant and non-redundant features for brain hemispheres of the same subject, and the opposite for hemispheres of different subjects. This loss enables pre-training a network without any information about the disease; the network is then used to generate full-brain feature vectors that are fine-tuned for two downstream tasks using baseline 18F-FDG PET: follow-up conversion and type of conversion (PD or DLB). In our results, we find that the HDL outperforms the variational autoencoder with different forms of inputs.
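A plausible reading of the HDL can be sketched in a few lines of Barlow Twins-style code: same-subject hemisphere embeddings are pushed toward an identity cross-correlation matrix (invariant, non-redundant features), while a shuffled, different-subject pairing has its diagonal suppressed. This is an interpretation of the abstract, not the authors' exact loss.

```python
# Minimal Barlow Twins-style sketch of a hemisphere dissimilarity loss (assumed form).
import torch

def cross_correlation(z1, z2, eps=1e-6):
    # Feature-normalized cross-correlation matrix between two embedding batches.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + eps)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + eps)
    return (z1.T @ z2) / z1.shape[0]             # (D, D)

def barlow_twins_term(c, lam=5e-3):
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag_embed(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag

def hemisphere_dissimilarity_loss(z_left, z_right, lam=5e-3):
    c_same = cross_correlation(z_left, z_right)                   # same subject
    c_diff = cross_correlation(z_left, z_right.roll(1, dims=0))   # shuffled subjects
    # Same subjects: match identity. Different subjects: suppress the diagonal.
    return barlow_twins_term(c_same, lam) + torch.diagonal(c_diff).pow(2).sum()

print(hemisphere_dissimilarity_loss(torch.randn(8, 32), torch.randn(8, 32)))
```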
{"title":"Brain Hemisphere Dissimilarity, a Self-Supervised Learning Approach for alpha-synucleinopathies prediction with FDG PET.","authors":"S Tripathi, P Mattioli, C Liguori, A Chiaravalloti, D Arnaldi, L Giancardo","doi":"10.1109/isbi53787.2023.10230560","DOIUrl":"10.1109/isbi53787.2023.10230560","url":null,"abstract":"<p><p>Idiopathic Rem sleep Behavior Disorder (iRBD) is a significant biomarker for the development of alpha-synucleinopathies, such as Parkinson's disease (PD) or Dementia with Lewy bodies (DLB). Methods to identify patterns in iRBD patients can help in the prediction of the future conversion to these diseases during the long prodromal phase when symptoms are non-specific. These methods are essential for disease management and clinical trial recruitment. Brain PET scans with 18F-FDG PET radiotracers have recently shown promise, however, the scarcity of longitudinal data and PD/DLB conversion information makes the use of representation learning approaches such as deep convolutional networks not feasible if trained in a supervised manner. In this work, we propose a self-supervised learning strategy to learn features by comparing the brain hemispheres of iRBD non-convertor subjects, which allows for pre-training a convolutional network on a small data regimen. We introduce a loss function called hemisphere dissimilarity loss (HDL), which extends the Barlow Twins loss, that promotes the creation of invariant and non-redundant features for brain hemispheres of the same subject, and the opposite for hemispheres of different subjects. This loss enables the pre-training of a network without any information about the disease, which is then used to generate full brain feature vectors that are fine-tuned to two downstream tasks: follow-up conversion, and the type of conversion (PD or DLB) using baseline 18F-FDG PET. In our results, we find that the HDL outperforms the variational autoencoder with different forms of inputs.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2023 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10496490/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10264588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intermediate Deformable Image Registration via Windowed Cross-Correlation
Pub Date: 2023-04-01 | DOI: 10.1109/isbi53787.2023.10230715
Iman Aganj, Bruce Fischl
In population and longitudinal imaging studies that employ deformable image registration, more accurate results can be achieved by initializing deformable registration with the results of affine registration where global misalignments have been considerably reduced. Such affine registration, however, is limited to linear transformations and it cannot account for large nonlinear anatomical variations, such as those between pre- and post-operative images or across different subject anatomies. In this work, we introduce a new intermediate deformable image registration (IDIR) technique that recovers large deformations via windowed cross-correlation, and provide an efficient implementation based on the fast Fourier transform. We evaluate our method on 2D X-ray and 3D magnetic resonance images, demonstrating its ability to align substantial nonlinear anatomical variations within a few iterations.
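The core windowed cross-correlation step can be sketched for a single window: an FFT-based cross-correlation between a fixed-image window and a search region of the moving image, with the correlation peak giving the local displacement. The full IDIR method aggregates such estimates over a grid of windows; this NumPy/SciPy sketch shows one window only.

```python
# Minimal sketch of recovering one window's displacement via FFT cross-correlation.
import numpy as np
from scipy.signal import correlate

def window_shift(fixed_window, moving_region):
    # Zero-mean the patches so the peak reflects structure rather than brightness.
    fw = fixed_window - fixed_window.mean()
    mr = moving_region - moving_region.mean()
    corr = correlate(mr, fw, mode="full", method="fft")  # FFT-based cross-correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # In 'full' mode, index (fw.shape - 1) corresponds to zero displacement.
    return np.array(peak) - (np.array(fw.shape) - 1)

region = np.zeros((64, 64))
region[20:28, 30:38] = 1.0                       # feature in the moving image
window = np.zeros((16, 16))
window[4:12, 6:14] = 1.0                         # same feature in the fixed window
print(window_shift(window, region))              # estimated (row, col) displacement
```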
{"title":"Intermediate Deformable Image Registration via Windowed Cross-Correlation.","authors":"Iman Aganj, Bruce Fischl","doi":"10.1109/isbi53787.2023.10230715","DOIUrl":"https://doi.org/10.1109/isbi53787.2023.10230715","url":null,"abstract":"<p><p>In population and longitudinal imaging studies that employ deformable image registration, more accurate results can be achieved by initializing deformable registration with the results of affine registration where global misalignments have been considerably reduced. Such affine registration, however, is limited to linear transformations and it cannot account for large nonlinear anatomical variations, such as those between pre- and post-operative images or across different subject anatomies. In this work, we introduce a new <i>intermediate</i> deformable image registration (IDIR) technique that recovers large deformations via windowed cross-correlation, and provide an efficient implementation based on the fast Fourier transform. We evaluate our method on 2D X-ray and 3D magnetic resonance images, demonstrating its ability to align substantial nonlinear anatomical variations within a few iterations.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2023 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10485808/pdf/nihms-1872292.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10241374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Foveal avascular zone segmentation using deep learning-driven image-level optimization and fundus photographs
Pub Date: 2023-04-01 | Epub Date: 2023-09-01 | DOI: 10.1109/isbi53787.2023.10230410
I Coronado, S Pachade, H Dawoodally, S Salazar Marioni, J Yan, R Abdelkhaleq, M Bahrainian, A Jagolino-Cole, R Channa, S A Sheth, L Giancardo
The foveal avascular zone (FAZ) is a retinal area devoid of capillaries and is associated with multiple retinal pathologies and visual acuity. Optical Coherence Tomography Angiography (OCT-A) is a very effective means of visualizing retinal vascular and avascular areas, but its use remains limited to research settings because its complex optics limit availability. Fundus photography, on the other hand, is widely available and often adopted in population studies. In this work, we test the feasibility of estimating the FAZ from fundus photos using three different approaches. The first two approaches rely on pixel-level and image-level FAZ information to segment FAZ pixels and regress FAZ area, respectively. The third is a mask-free pipeline that combines saliency maps with an active contours approach to segment FAZ pixels while being trained only on image-level measures of FAZ area. This enables training FAZ segmentation methods without manual alignment of fundus and OCT-A images, a time-consuming process that limits the data available for training. Segmentation methods trained on pixel-level labels and image-level labels had good agreement with masks from a human grader (DICE of 0.45 and 0.40, respectively). These results indicate the feasibility of using fundus images as a proxy to estimate the FAZ when angiography data are not available.
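The image-level supervision in the third approach admits a very small sketch: the network emits a soft mask, but the only training signal is the scalar FAZ area, so the loss compares the mask's summed area against the image-level measurement. The saliency-map and active-contours components are omitted, and the names are illustrative assumptions.

```python
# Minimal sketch of mask-free, area-level supervision for FAZ segmentation.
import torch
import torch.nn.functional as F

def area_supervision_loss(soft_mask, target_area_px):
    # soft_mask: (N, 1, H, W) sigmoid output; target_area_px: (N,) FAZ areas in pixels.
    predicted_area = soft_mask.sum(dim=(1, 2, 3))   # differentiable area estimate
    return F.mse_loss(predicted_area, target_area_px)

mask = torch.rand(2, 1, 128, 128)
print(area_supervision_loss(mask, torch.tensor([500.0, 800.0])))
```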
{"title":"Foveal avascular zone segmentation using deep learning-driven image-level optimization and fundus photographs.","authors":"I Coronado, S Pachade, H Dawoodally, S Salazar Marioni, J Yan, R Abdelkhaleq, M Bahrainian, A Jagolino-Cole, R Channa, S A Sheth, L Giancardo","doi":"10.1109/isbi53787.2023.10230410","DOIUrl":"10.1109/isbi53787.2023.10230410","url":null,"abstract":"<p><p>The foveal avascular zone (FAZ) is a retinal area devoid of capillaries and associated with multiple retinal pathologies and visual acuity. Optical Coherence Tomography Angiography (OCT-A) is a very effective means of visualizing retinal vascular and avascular areas, but its use remains limited to research settings due to its complex optics limiting availability. On the other hand, fundus photography is widely available and often adopted in population studies. In this work, we test the feasibility of estimating the FAZ from fundus photos using three different approaches. The first two approaches rely on pixel-level and image-level FAZ information to segment FAZ pixels and regress FAZ area, respectively. The third is a training mask-free pipeline combining saliency maps with an active contours approach to segment FAZ pixels while being trained on image-level measures of the FAZ areas. This enables training FAZ segmentation methods without manual alignment of fundus and OCT-A images, a time-consuming process, which limits the dataset that can be used for training. Segmentation methods trained on pixel-level labels and image-level labels had good agreement with masks from a human grader (respectively DICE of 0.45 and 0.4). Results indicate the feasibility of using fundus images as a proxy to estimate the FAZ when angiography data is not available.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2023 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10498664/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10264596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting Alzheimer's Disease and Quantifying Asymmetric Degeneration of the Hippocampus Using Deep Learning of Magnetic Resonance Imaging Data
Pub Date: 2023-04-01 | Epub Date: 2023-09-01 | DOI: 10.1109/isbi53787.2023.10230830
Xi Liu, Hongming Li, Yong Fan
To quantify lateral asymmetric degeneration of the hippocampus for early prediction of Alzheimer's disease (AD), we develop a deep learning (DL) model that learns informative features from hippocampal magnetic resonance imaging (MRI) data to predict AD conversion in a time-to-event prediction modeling framework. The DL model is trained on unilateral hippocampal data with an autoencoder-based regularizer, facilitating quantification of lateral asymmetry in the hippocampal prediction power for AD conversion and identification of the optimal strategy for integrating bilateral hippocampal MRI data to predict AD. Experimental results on MRI scans of 1307 subjects (817 for training and 490 for validation) have demonstrated that the left hippocampus predicts AD better than the right hippocampus, and that integrating the bilateral hippocampal data with the instance-based DL method improved AD prediction, compared with alternative predictive modeling strategies.
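A minimal sketch of combining a time-to-event objective with an autoencoder-based regularizer is given below; a Cox-style partial likelihood stands in for the paper's survival loss, and the encoder/decoder/risk-head decomposition and the weighting `lam` are assumptions.

```python
# Minimal sketch: time-to-event loss on encoder features plus an autoencoder
# reconstruction term that regularizes the unilateral hippocampal representation.
import torch

def cox_partial_likelihood(risk, event_time, event_observed):
    # risk: (N,) predicted log-hazards; higher = earlier expected conversion.
    order = torch.argsort(event_time, descending=True)  # builds risk sets by time
    risk = risk[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)        # log sum over each risk set
    observed = event_observed[order].bool()
    return -(risk[observed] - log_cumsum[observed]).mean()

def total_loss(encoder, decoder, risk_head, x, t, e, lam=0.1):
    z = encoder(x)
    survival = cox_partial_likelihood(risk_head(z).squeeze(1), t, e)
    reconstruction = torch.nn.functional.mse_loss(decoder(z), x)
    return survival + lam * reconstruction              # AE term regularizes features
```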
{"title":"Predicting Alzheimer's Disease and Quantifying Asymmetric Degeneration of the Hippocampus Using Deep Learning of Magnetic Resonance Imaging Data.","authors":"Xi Liu, Hongming Li, Yong Fan","doi":"10.1109/isbi53787.2023.10230830","DOIUrl":"10.1109/isbi53787.2023.10230830","url":null,"abstract":"<p><p>In order to quantify lateral asymmetric degeneration of the hippocampus for early predicting Alzheimer's disease (AD), we develop a deep learning (DL) model to learn informative features from the hippocampal magnetic resonance imaging (MRI) data for predicting AD conversion in a time-to-event prediction modeling framework. The DL model is trained on unilateral hippocampal data with an autoencoder based regularizer, facilitating quantification of lateral asymmetry in the hippocampal prediction power of AD conversion and identification of the optimal strategy to integrate the bilateral hippocampal MRI data for predicting AD. Experimental results on MRI scans of 1307 subjects (817 for training and 490 for validation) have demonstrated that the left hippocampus can better predict AD than the right hippocampus, and an integration of the bilateral hippocampal data with the instance based DL method improved AD prediction, compared with alternative predictive modeling strategies.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2023 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10544795/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41170627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}