DART: DEFORMABLE ANATOMY-AWARE REGISTRATION TOOLKIT FOR LUNG CT REGISTRATION WITH KEYPOINTS SUPERVISION
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/ISBI56570.2024.10635326 | Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11412684/pdf/
Yunzheng Zhu, Luoting Zhuang, Yannan Lin, Tengyue Zhang, Hossein Tabatabaei, Denise R Aberle, Ashley E Prosper, Aichi Chien, William Hsu
Spatially aligning two computed tomography (CT) scans of the lung using automated image registration techniques is a challenging task due to the deformable nature of the lung. However, existing deep-learning-based lung CT registration models are not trained with explicit anatomical knowledge. We propose the deformable anatomy-aware registration toolkit (DART), a masked autoencoder (MAE)-based approach, to improve the keypoint-supervised registration of lung CTs. Our method incorporates features from multiple decoders of networks trained to segment anatomical structures, including the lung, ribs, vertebrae, lobes, vessels, and airways, to ensure that the MAE learns features relevant to lung anatomy. The pretrained weights of the transformer encoder and patch embeddings are then used to initialize the downstream registration training. We compare DART to existing state-of-the-art registration models. Our experiments show that DART outperforms the baseline models (VoxelMorph, ViT-V-Net, and MAE-TransRNet) in terms of target registration error, with 17%, 13%, and 9% relative improvement, respectively, on corrField-generated keypoints, and 27%, 10%, and 4% relative improvement, respectively, on nodule bounding-box centers. Our implementation is available at https://github.com/yunzhengzhu/DART.
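As an aside for readers new to keypoint supervision, below is a minimal sketch of how a target registration error (TRE) could be computed from a predicted dense displacement field and matched keypoints. It uses nearest-voxel lookup rather than interpolation, and the function name, array layout, and spacing handling are illustrative assumptions, not DART's implementation.

```python
import numpy as np

def target_registration_error(disp_field, moving_kpts, fixed_kpts, spacing=(1.0, 1.0, 1.0)):
    """Estimate TRE (in mm) from a dense displacement field.

    disp_field : (3, D, H, W) displacement in voxels, mapping moving -> fixed space.
    moving_kpts, fixed_kpts : (N, 3) corresponding keypoints in voxel coordinates (z, y, x).
    spacing : voxel spacing in mm along (z, y, x).
    """
    idx = np.round(moving_kpts).astype(int)
    # Clamp indices so keypoints near the border stay inside the volume.
    for ax, size in enumerate(disp_field.shape[1:]):
        idx[:, ax] = np.clip(idx[:, ax], 0, size - 1)
    # Nearest-voxel lookup of the displacement at each moving keypoint.
    disp = disp_field[:, idx[:, 0], idx[:, 1], idx[:, 2]].T  # (N, 3)
    warped = moving_kpts + disp
    err_mm = (warped - fixed_kpts) * np.asarray(spacing)
    return float(np.linalg.norm(err_mm, axis=1).mean())

# Toy usage: an identity field gives a TRE equal to the raw keypoint distance.
field = np.zeros((3, 64, 64, 64))
mov = np.array([[10.0, 20.0, 30.0]])
fix = np.array([[12.0, 20.0, 30.0]])
print(target_registration_error(field, mov, fix))  # 2.0
```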
{"title":"DART: DEFORMABLE ANATOMY-AWARE REGISTRATION TOOLKIT FOR LUNG CT REGISTRATION WITH KEYPOINTS SUPERVISION.","authors":"Yunzheng Zhu, Luoting Zhuang, Yannan Lin, Tengyue Zhang, Hossein Tabatabaei, Denise R Aberle, Ashley E Prosper, Aichi Chien, William Hsu","doi":"10.1109/ISBI56570.2024.10635326","DOIUrl":"10.1109/ISBI56570.2024.10635326","url":null,"abstract":"<p><p>Spatially aligning two computed tomography (CT) scans of the lung using automated image registration techniques is a challenging task due to the deformable nature of the lung. However, existing deep-learning-based lung CT registration models are not trained with explicit anatomical knowledge. We propose the deformable anatomy-aware registration toolkit (DART), a masked autoencoder (MAE)-based approach, to improve the keypoint-supervised registration of lung CTs. Our method incorporates features from multiple decoders of networks trained to segment anatomical structures, including the lung, ribs, vertebrae, lobes, vessels, and airways, to ensure that the MAE learns relevant features corresponding to the anatomy of the lung. The pretrained weights of the transformer encoder and patch embeddings are then used as the initialization for the training of downstream registration. We compare DART to existing state-of-the-art registration models. Our experiments show that DART outperforms the baseline models (Voxelmorph, ViT-V-Net, and MAE-TransRNet) in terms of target registration error of both corrField-generated keypoints with 17%, 13%, and 9% relative improvement, respectively, and bounding box centers of nodules with 27%, 10%, and 4% relative improvement, respectively. Our implementation is available at https://github.com/yunzhengzhu/DART.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11412684/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142302994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MITIGATING OVER-SATURATED FLUORESCENCE IMAGES THROUGH A SEMI-SUPERVISED GENERATIVE ADVERSARIAL NETWORK
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635687 | Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11756911/pdf/
Shunxing Bao, Junlin Guo, Ho Hin Lee, Ruining Deng, Can Cui, Lucas W Remedios, Quan Liu, Qi Yang, Kaiwen Xu, Xin Yu, Jia Li, Yike Li, Joseph T Roland, Qi Liu, Ken S Lau, Keith T Wilson, Lori A Coburn, Bennett A Landman, Yuankai Huo
Multiplex immunofluorescence (MxIF) imaging is a critical tool in biomedical research, offering detailed insights into cell composition and spatial context. As an example, DAPI staining identifies cell nuclei, while CD20 staining helps segment cell membranes in MxIF. However, a persistent challenge in MxIF is saturation artifacts, which hinder single-cell-level analysis in areas with over-saturated pixels. Traditional gamma correction methods for fixing saturation are limited, often incorrectly assuming a uniform distribution of saturation, which is rarely the case in practice. This paper introduces a novel approach to correct saturation artifacts from a data-driven perspective. We introduce a two-stage, high-resolution hybrid generative adversarial network (HDmixGAN), which merges unpaired (CycleGAN) and paired (pix2pixHD) network architectures. This approach is designed to capitalize on the small-scale paired data that are available and the more extensive unpaired data from costly MxIF experiments. Specifically, we generate pseudo-paired data from large-scale unpaired over-saturated datasets with a CycleGAN, and train a pix2pixHD-based GAN using both small-scale real and large-scale synthetic data derived from multiple DAPI staining rounds in MxIF. This method was validated against various baselines in a downstream nuclei detection task, improving the F1 score by 6% over the baseline. This is, to our knowledge, the first focused effort to address multi-round saturation in MxIF images, offering a specialized solution for enhancing cell analysis accuracy through improved image quality. The source code and implementation of the proposed method are available at https://github.com/MASILab/DAPIArtifactRemoval.git.
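As a rough illustration of the two-stage idea — pseudo-pairs synthesized by an unpaired model are pooled with the few real pairs for paired training — the sketch below mixes the two data sources with standard PyTorch datasets. The class names and tensor shapes are hypothetical; the actual HDmixGAN training code is in the linked repository.

```python
import torch
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class PairedPatches(Dataset):
    """Wraps (saturated, clean) image pairs, either real or pseudo (CycleGAN-generated)."""
    def __init__(self, saturated, clean):
        assert len(saturated) == len(clean)
        self.saturated, self.clean = saturated, clean

    def __len__(self):
        return len(self.saturated)

    def __getitem__(self, i):
        return self.saturated[i], self.clean[i]

# Stage 1 (not shown): a CycleGAN translates unpaired over-saturated patches into
# pseudo "clean" counterparts, yielding a large synthetic paired set.
real_pairs = PairedPatches(torch.rand(32, 1, 64, 64), torch.rand(32, 1, 64, 64))
pseudo_pairs = PairedPatches(torch.rand(512, 1, 64, 64), torch.rand(512, 1, 64, 64))

# Stage 2: a pix2pixHD-style paired generator is trained on the pooled data.
loader = DataLoader(ConcatDataset([real_pairs, pseudo_pairs]), batch_size=8, shuffle=True)
sat, clean = next(iter(loader))
print(sat.shape, clean.shape)  # torch.Size([8, 1, 64, 64]) twice
```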
{"title":"MITIGATING OVER-SATURATED FLUORESCENCE IMAGES THROUGH A SEMI-SUPERVISED GENERATIVE ADVERSARIAL NETWORK.","authors":"Shunxing Bao, Junlin Guo, Ho Hin Lee, Ruining Deng, Can Cui, Lucas W Remedios, Quan Liu, Qi Yang, Kaiwen Xu, Xin Yu, Jia Li, Yike Li, Joseph T Roland, Qi Liu, Ken S Lau, Keith T Wilson, Lori A Coburn, Bennett A Landman, Yuankai Huo","doi":"10.1109/isbi56570.2024.10635687","DOIUrl":"10.1109/isbi56570.2024.10635687","url":null,"abstract":"<p><p>Multiplex immunofluorescence (MxIF) imaging is a critical tool in biomedical research, offering detailed insights into cell composition and spatial context. As an example, DAPI staining identifies cell nuclei, while CD20 staining helps segment cell membranes in MxIF. However, a persistent challenge in MxIF is saturation artifacts, which hinder single-cell level analysis in areas with over-saturated pixels. Traditional gamma correction methods for fixing saturation are limited, often incorrectly assuming uniform distribution of saturation, which is rarely the case in practice. This paper introduces a novel approach to correct saturation artifacts from a data-driven perspective. We introduce a two-stage, high-resolution hybrid generative adversarial network (HDmixGAN), which merges unpaired (CycleGAN) and paired (pix2pixHD) network architectures. This approach is designed to capitalize on the available small-scale paired data and the more extensive unpaired data from costly MxIF data. Specifically, we generate pseudo-paired data from large-scale unpaired over-saturated datasets with a CycleGAN, and train a Pix2pixGAN using both small-scale real and large-scale synthetic data derived from multiple DAPI staining rounds in MxIF. This method was validated against various baselines in a downstream nuclei detection task, improving the F1 score by 6% over the baseline. This is, to our knowledge, the first focused effort to address multi-round saturation in MxIF images, offering a specialized solution for enhancing cell analysis accuracy through improved image quality. The source code and implementation of the proposed method are available at https://github.com/MASILab/DAPIArtifactRemoval.git.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11756911/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143049003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MV-Swin-T: MAMMOGRAM CLASSIFICATION WITH MULTI-VIEW SWIN TRANSFORMER
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635578 | Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11450559/pdf/
Sushmita Sarker, Prithul Sarker, George Bebis, Alireza Tavakkoli
Traditional deep learning approaches for breast cancer classification have predominantly concentrated on single-view analysis. In clinical practice, however, radiologists concurrently examine all views within a mammography exam, leveraging the inherent correlations in these views to effectively detect tumors. Acknowledging the significance of multi-view analysis, some studies have introduced methods that independently process mammogram views, either through distinct convolutional branches or simple fusion strategies, inadvertently leading to a loss of crucial inter-view correlations. In this paper, we propose an innovative multi-view network exclusively based on transformers to address challenges in mammographic image classification. Our approach introduces a novel shifted window-based dynamic attention block, facilitating the effective integration of multi-view information and promoting the coherent transfer of this information between views at the spatial feature-map level. Furthermore, we conduct a comprehensive comparative analysis of the performance and effectiveness of transformer-based models under diverse settings, employing the CBIS-DDSM and VinDr-Mammo datasets. Our code is publicly available at https://github.com/prithuls/MV-Swin-T.
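As a rough illustration of attention-based inter-view fusion (not the paper's shifted window-based dynamic attention block, whose implementation is in the linked repository), a cross-attention layer between token sequences from the craniocaudal (CC) and mediolateral oblique (MLO) views could look like this minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class CrossViewFusion(nn.Module):
    """Lets tokens from one mammographic view attend to tokens from the other view."""
    def __init__(self, dim=96, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens_cc, tokens_mlo):
        fused, _ = self.attn(query=tokens_cc, key=tokens_mlo, value=tokens_mlo)
        return self.norm(tokens_cc + fused)  # residual keeps the original view's features

cc = torch.rand(2, 196, 96)   # (batch, tokens, dim) from the CC view
mlo = torch.rand(2, 196, 96)  # from the MLO view
print(CrossViewFusion()(cc, mlo).shape)  # torch.Size([2, 196, 96])
```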
{"title":"MV-Swin-T: MAMMOGRAM CLASSIFICATION WITH MULTI-VIEW SWIN TRANSFORMER.","authors":"Sushmita Sarker, Prithul Sarker, George Bebis, Alireza Tavakkoli","doi":"10.1109/isbi56570.2024.10635578","DOIUrl":"10.1109/isbi56570.2024.10635578","url":null,"abstract":"<p><p>Traditional deep learning approaches for breast cancer classification has predominantly concentrated on single-view analysis. In clinical practice, however, radiologists concurrently examine all views within a mammography exam, leveraging the inherent correlations in these views to effectively detect tumors. Acknowledging the significance of multi-view analysis, some studies have introduced methods that independently process mammogram views, either through distinct convolutional branches or simple fusion strategies, inadvertently leading to a loss of crucial inter-view correlations. In this paper, we propose an innovative multi-view network exclusively based on transformers to address challenges in mammographic image classification. Our approach introduces a novel shifted window-based dynamic attention block, facilitating the effective integration of multi-view information and promoting the coherent transfer of this information between views at the spatial feature map level. Furthermore, we conduct a comprehensive comparative analysis of the performance and effectiveness of transformer-based models under diverse settings, employing the CBIS-DDSM and Vin-Dr Mammo datasets. Our code is publicly available at https://github.com/prithuls/MV-Swin-T.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11450559/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142382699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RECONSTRUCTING RETINAL VISUAL IMAGES FROM 3T FMRI DATA ENHANCED BY UNSUPERVISED LEARNING
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635641 | Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486511/pdf/
Yujian Xiong, Wenhui Zhu, Zhong-Lin Lu, Yalin Wang
The reconstruction of human visual inputs from brain activity, particularly through functional Magnetic Resonance Imaging (fMRI), holds promising avenues for unraveling the mechanisms of the human visual system. Despite the significant strides made by deep learning methods in improving the quality and interpretability of visual reconstruction, there remains a substantial demand for high-quality, long-duration, subject-specific 7-Tesla fMRI experiments. The challenge arises in integrating diverse smaller 3-Tesla datasets or accommodating new subjects with brief and low-quality fMRI scans. In response to these constraints, we propose a novel framework that generates enhanced 3T fMRI data through an unsupervised Generative Adversarial Network (GAN), leveraging unpaired training across two distinct fMRI datasets in 7T and 3T, respectively. This approach aims to overcome the limitations of the scarcity of high-quality 7-Tesla data and the challenges associated with brief and low-quality scans in 3-Tesla experiments. In this paper, we demonstrate the reconstruction capabilities of the enhanced 3T fMRI data, highlighting its proficiency in generating superior input visual images compared to data-intensive methods trained and tested on a single subject.
{"title":"RECONSTRUCTING RETINAL VISUAL IMAGES FROM 3T FMRI DATA ENHANCED BY UNSUPERVISED LEARNING.","authors":"Yujian Xiong, Wenhui Zhu, Zhong-Lin Lu, Yalin Wang","doi":"10.1109/isbi56570.2024.10635641","DOIUrl":"10.1109/isbi56570.2024.10635641","url":null,"abstract":"<p><p>The reconstruction of human visual inputs from brain activity, particularly through functional Magnetic Resonance Imaging (fMRI), holds promising avenues for unraveling the mechanisms of the human visual system. Despite the significant strides made by deep learning methods in improving the quality and interpretability of visual reconstruction, there remains a substantial demand for high-quality, long-duration, subject-specific 7-Tesla fMRI experiments. The challenge arises in integrating diverse smaller 3-Tesla datasets or accommodating new subjects with brief and low-quality fMRI scans. In response to these constraints, we propose a novel framework that generates enhanced 3T fMRI data through an unsupervised Generative Adversarial Network (GAN), leveraging unpaired training across two distinct fMRI datasets in 7T and 3T, respectively. This approach aims to overcome the limitations of the scarcity of high-quality 7-Tesla data and the challenges associated with brief and low-quality scans in 3-Tesla experiments. In this paper, we demonstrate the reconstruction capabilities of the enhanced 3T fMRI data, highlighting its proficiency in generating superior input visual images compared to data-intensive methods trained and tested on a single subject.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11486511/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142482685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A CONVEX COMPRESSIBILITY-INSPIRED UNSUPERVISED LOSS FUNCTION FOR PHYSICS-DRIVEN DEEP LEARNING RECONSTRUCTION
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/ISBI56570.2024.10635138 | Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11779509/pdf/
Yaşar Utku Alçalar, Merve Gülle, Mehmet Akçakaya
Physics-driven deep learning (PD-DL) methods have gained popularity for improved reconstruction of fast MRI scans. Though supervised learning has been used in early works, there has been a recent interest in unsupervised learning methods for training PD-DL. In this work, we take inspiration from statistical image processing and compressed sensing (CS), and propose a novel convex loss function as an alternative learning strategy. Our loss function evaluates the compressibility of the output image while ensuring data fidelity to assess the quality of reconstruction in versatile settings, including supervised, unsupervised, and zero-shot scenarios. In particular, we leverage the reweighted ℓ1 norm, which has been shown to approximate the ℓ0 norm, for quality evaluation. Results show that the PD-DL networks trained with the proposed loss formulation outperform conventional methods, while maintaining similar quality to PD-DL models trained using existing supervised and unsupervised techniques.
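To make the loss concrete, here is a minimal single-coil sketch of a reweighted-ℓ1-plus-data-fidelity objective of the kind described. The weighting scheme, regularization strength, and Fourier forward model are simplified assumptions, not the paper's exact formulation.

```python
import torch

def reweighted_l1_data_fidelity_loss(x, y, mask, lam=1.0, eps=1e-6):
    """Illustrative loss: reweighted l1 (an l0 surrogate) plus k-space data fidelity.

    x    : (B, H, W) complex network output image.
    y    : (B, H, W) complex acquired k-space, zero where not sampled.
    mask : (B, H, W) binary sampling mask.
    """
    mag = x.abs()
    # Weights 1 / (|x_i| + eps): with the weights held fixed (detached), the weighted l1
    # term is convex in x and behaves like a surrogate of the l0 norm.
    weights = 1.0 / (mag.detach() + eps)
    compressibility = (weights * mag).mean()
    kspace = torch.fft.fft2(x, norm="ortho")
    fidelity = ((mask * kspace - y).abs() ** 2).mean()
    return compressibility + lam * fidelity

x = torch.randn(1, 32, 32, dtype=torch.complex64, requires_grad=True)
y = torch.randn(1, 32, 32, dtype=torch.complex64)
mask = (torch.rand(1, 32, 32) > 0.5).to(torch.complex64)
loss = reweighted_l1_data_fidelity_loss(x, y, mask)
loss.backward()
```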
{"title":"A CONVEX COMPRESSIBILITY-INSPIRED UNSUPERVISED LOSS FUNCTION FOR PHYSICS-DRIVEN DEEP LEARNING RECONSTRUCTION.","authors":"Yaşar Utku Alçalar, Merve Gülle, Mehmet Akçakaya","doi":"10.1109/ISBI56570.2024.10635138","DOIUrl":"10.1109/ISBI56570.2024.10635138","url":null,"abstract":"<p><p>Physics-driven deep learning (PD-DL) methods have gained popularity for improved reconstruction of fast MRI scans. Though supervised learning has been used in early works, there has been a recent interest in unsupervised learning methods for training PD-DL. In this work, we take inspiration from statistical image processing and compressed sensing (CS), and propose a novel convex loss function as an alternative learning strategy. Our loss function evaluates the compressibility of the output image while ensuring data fidelity to assess the quality of reconstruction in versatile settings, including supervised, unsupervised, and zero-shot scenarios. In particular, we leverage the reweighted <math> <mrow><msub><mi>l</mi> <mn>1</mn></msub> </mrow> </math> norm that has been shown to approximate the <math> <mrow><msub><mi>l</mi> <mn>0</mn></msub> </mrow> </math> norm for quality evaluation. Results show that the PD-DL networks trained with the proposed loss formulation outperform conventional methods, while maintaining similar quality to PD-DL models trained using existing supervised and unsupervised techniques.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11779509/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143070254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UNSUPERVISED AIRWAY TREE CLUSTERING WITH DEEP LEARNING: THE MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA) LUNG STUDY
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635651 | Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11467912/pdf/
Sneha N Naik, Elsa D Angelini, R Graham Barr, Norrina Allen, Alain Bertoni, Eric A Hoffman, Ani Manichaikul, Jim Pankow, Wendy Post, Yifei Sun, Karol Watson, Benjamin M Smith, Andrew F Laine
High-resolution full-lung CT scans now enable the detailed segmentation of airway trees up to the 6th branching generation. The airway binary masks display very complex tree structures that may encode biological information relevant to disease risk, yet they remain challenging to exploit via traditional methods such as meshing or skeletonization. Recent clinical studies suggest that some variations in shape patterns and caliber of the human airway tree are highly associated with adverse health outcomes, including all-cause mortality and incident COPD. However, quantitative characterization of the variations observed in CT-segmented airway trees remains incomplete, as does our understanding of their clinical and developmental implications. In this work, we present an unsupervised deep-learning pipeline for feature extraction and clustering of human airway trees, learned directly from projections of 3D airway segmentations. We identify four reproducible and clinically distinct airway sub-types in the MESA Lung CT cohort.
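To make the pipeline concrete, the sketch below shows the generic pattern — project 3D binary airway masks to 2D views, embed, and cluster — using PCA and k-means in place of the paper's learned deep features; all names, shapes, and parameters are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def project_mask(mask_3d):
    """Maximum-intensity projections of a binary airway mask along each axis."""
    return np.concatenate([mask_3d.max(axis=a).ravel() for a in range(3)])

rng = np.random.default_rng(0)
masks = rng.random((40, 32, 32, 32)) > 0.97            # stand-in for segmented airway trees
features = np.stack([project_mask(m) for m in masks])  # (subjects, flattened projections)

embedding = PCA(n_components=8, random_state=0).fit_transform(features)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embedding)
print(np.bincount(labels))  # subjects per airway sub-type (4 clusters, as in the paper)
```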
{"title":"UNSUPERVISED AIRWAY TREE CLUSTERING WITH DEEP LEARNING: THE MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA) LUNG STUDY.","authors":"Sneha N Naik, Elsa D Angelini, R Graham Barr, Norrina Allen, Alain Bertoni, Eric A Hoffman, Ani Manichaikul, Jim Pankow, Wendy Post, Yifei Sun, Karol Watson, Benjamin M Smith, Andrew F Laine","doi":"10.1109/isbi56570.2024.10635651","DOIUrl":"10.1109/isbi56570.2024.10635651","url":null,"abstract":"<p><p>High-resolution full lung CT scans now enable the detailed segmentation of airway trees up to the 6th branching generation. The airway binary masks display very complex tree structures that may encode biological information relevant to disease risk and yet remain challenging to exploit via traditional methods such as meshing or skeletonization. Recent clinical studies suggest that some variations in shape patterns and caliber of the human airway tree are highly associated with adverse health outcomes, including all-cause mortality and incident COPD. However, quantitative characterization of variations observed on CT segmented airway tree remain incomplete, as does our understanding of the clinical and developmental implications of such. In this work, we present an unsupervised deep-learning pipeline for feature extraction and clustering of human airway trees, learned directly from projections of 3D airway segmentations. We identify four reproducible and clinically distinct airway sub-types in the MESA Lung CT cohort.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11467912/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142482686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A DEEP LEARNING FRAMEWORK TO CHARACTERIZE NOISY LABELS IN EPILEPTOGENIC ZONE LOCALIZATION USING FUNCTIONAL CONNECTIVITY
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635583 | Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11500830/pdf/
Naresh Nandakumar, David Hsu, Raheel Ahmed, Archana Venkataraman
Resting-state fMRI (rs-fMRI) has emerged as a viable tool to localize the epileptogenic zone (EZ) in patients with medication-refractory focal epilepsy. However, due to clinical protocol, datasets with reliable labels for the EZ are scarce. Some studies have used the entire resection area from post-operative structural T1 scans as the ground-truth EZ labels during training and testing. These labels are subject to noise, as the resection area is usually larger than the actual EZ tissue. We develop a mathematical framework for characterizing noisy labels in EZ localization. We use a multi-task deep learning framework to identify both the probability of a noisy label and the localization prediction for each ROI. We train our framework on a simulated dataset derived from the Human Connectome Project and evaluate it on both the simulated dataset and a clinical epilepsy dataset. Our method shows superior localization performance compared to published localization networks on both the simulated and real datasets.
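One simple way to picture the multi-task idea — one output for the localization prediction and one for the probability that an ROI's label is noisy — is a shared encoder with two per-ROI heads, sketched below. The architecture, feature dimensions, and head design here are placeholders, not the paper's exact network.

```python
import torch
import torch.nn as nn

class TwoHeadROINet(nn.Module):
    """Shared encoder over per-ROI functional connectivity features with two per-ROI outputs."""
    def __init__(self, in_features=116, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_features, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.localization_head = nn.Linear(hidden, 1)  # is this ROI part of the EZ?
        self.noise_head = nn.Linear(hidden, 1)         # is this ROI's training label likely noisy?

    def forward(self, roi_features):                   # (batch, n_roi, in_features)
        h = self.encoder(roi_features)
        ez_logit = self.localization_head(h).squeeze(-1)
        noise_logit = self.noise_head(h).squeeze(-1)
        return ez_logit, noise_logit

net = TwoHeadROINet()
x = torch.rand(4, 116, 116)  # 4 subjects, 116 ROIs, each with a 116-dim connectivity row
ez_logit, noise_logit = net(x)
print(ez_logit.shape, noise_logit.shape)  # torch.Size([4, 116]) twice
```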
{"title":"A DEEP LEARNING FRAMEWORK TO CHARACTERIZE NOISY LABELS IN EPILEPTOGENIC ZONE LOCALIZATION USING FUNCTIONAL CONNECTIVITY.","authors":"Naresh Nandakumar, David Hsu, Raheel Ahmed, Archana Venkataraman","doi":"10.1109/isbi56570.2024.10635583","DOIUrl":"10.1109/isbi56570.2024.10635583","url":null,"abstract":"<p><p>Resting-sate fMRI (rs-fMRI) has emerged as a viable tool to localize the epileptogenic zone (EZ) in medication refractory focal epilepsy patients. However, due to clinical protocol, datasets with reliable labels for the EZ are scarce. Some studies have used the entire resection area from post-operative structural T1 scans to act as the ground truth EZ labels during training and testing. These labels are subject to noise, as usually the resection area will be larger than the actual EZ tissue. We develop a mathematical framework for characterizing noisy labels in EZ localization. We use a multi-task deep learning framework to identify both the probability of a noisy label as well as the localization prediction for each ROI. We train our framework on a simulated dataset derived from the Human Connectome Project and evaluate it on both the simulated and a clinical epilepsy dataset. We show superior localization performance in our method against published localization networks on both the real and simulated dataset.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11500830/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142514334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ROBUST QUANTIFICATION OF PERCENT EMPHYSEMA ON CT VIA DOMAIN ATTENTION: THE MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA) LUNG STUDY
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635299 | Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11388062/pdf/
Xuzhe Zhang, Elsa D Angelini, Eric A Hoffman, Karol E Watson, Benjamin M Smith, R Graham Barr, Andrew F Laine
Robust quantification of pulmonary emphysema on computed tomography (CT) remains challenging for large-scale research studies that involve scans from different scanner types and for translation to clinical scans. Although the domain shifts between different CT scanners are subtle compared to the shifts seen in other modalities (e.g., MRI) or across modalities, emphysema quantification is highly sensitive to them. Such subtle differences limit the application of general domain adaptation methods, such as image-translation-based methods, as the contrast difference is too subtle to be distinguished. Existing studies have explored several directions to tackle this challenge, including density correction, noise filtering, regression, hidden Markov measure field (HMMF) model-based segmentation, and volume-adjusted lung density. Despite some promising results, previous studies either required a tedious workflow or eliminated opportunities for downstream emphysema subtyping, limiting efficient adaptation in a large-scale study. To alleviate this dilemma, we developed an end-to-end deep learning framework based on an existing HMMF segmentation framework. We first demonstrate that a regular UNet cannot replicate the existing HMMF results because of the lack of scanner priors. We then design a novel domain attention block, a simple yet efficient cross-modal block that fuses image visual features with quantitative scanner priors (a sequence), which significantly improves the results.
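The description suggests a block in the spirit of the sketch below, where a sequence of quantitative scanner priors is mapped to channel-wise attention applied to the image features; this is our reading of the idea, not the authors' exact design.

```python
import torch
import torch.nn as nn

class DomainAttention(nn.Module):
    """Fuses a quantitative scanner-prior sequence with image features via channel attention."""
    def __init__(self, channels=64, prior_len=8):
        super().__init__()
        self.to_weights = nn.Sequential(nn.Linear(prior_len, channels), nn.ReLU(),
                                        nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, feat, scanner_prior):
        # feat: (B, C, H, W) segmentation-network feature map;
        # scanner_prior: (B, prior_len), e.g. descriptors of reconstruction kernel and dose.
        w = self.to_weights(scanner_prior)            # (B, C) channel gates in (0, 1)
        return feat * w.unsqueeze(-1).unsqueeze(-1)   # re-weight channels per scanner

block = DomainAttention()
feat = torch.rand(2, 64, 128, 128)
prior = torch.rand(2, 8)
print(block(feat, prior).shape)  # torch.Size([2, 64, 128, 128])
```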
{"title":"ROBUST QUANTIFICATION OF PERCENT EMPHYSEMA ON CT VIA DOMAIN ATTENTION: THE MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA) LUNG STUDY.","authors":"Xuzhe Zhang, Elsa D Angelini, Eric A Hoffman, Karol E Watson, Benjamin M Smith, R Graham Barr, Andrew F Laine","doi":"10.1109/isbi56570.2024.10635299","DOIUrl":"https://doi.org/10.1109/isbi56570.2024.10635299","url":null,"abstract":"<p><p>Robust quantification of pulmonary emphysema on computed tomography (CT) remains challenging for large-scale research studies that involve scans from different scanner types and for translation to clinical scans. Although the domain shifts in different CT scanners are subtle compared to shifts existing in other modalities (e.g., MRI) or cross-modality, emphysema is highly sensitive to it. Such subtle difference limits the application of general domain adaptation methods, such as image translation-based methods, as the contrast difference is too subtle to be distinguished. Existing studies have explored several directions to tackle this challenge, including density correction, noise filtering, regression, hidden Markov measure field (HMMF) model-based segmentation, and volume-adjusted lung density. Despite some promising results, previous studies either required a tedious workflow or eliminated opportunities for downstream emphysema subtyping, limiting efficient adaptation on a large-scale study. To alleviate this dilemma, we developed an end-to-end deep learning framework based on an existing HMMF segmentation framework. We first demonstrate that a regular UNet cannot replicate the existing HMMF results because of the lack of scanner priors. We then design a novel domain attention block, a simple yet efficient cross-modal block to fuse image visual features with quantitative scanner priors (a sequence), which significantly improves the results.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11388062/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142302995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CYCLE-CONSISTENT SELF-SUPERVISED LEARNING FOR IMPROVED HIGHLY-ACCELERATED MRI RECONSTRUCTION
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635895 | Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11736014/pdf/
Chi Zhang, Omer Burak Demirel, Mehmet Akçakaya
Physics-driven deep learning (PD-DL) has become a powerful tool for accelerated MRI. Recent work has also introduced unsupervised learning for PD-DL, including self-supervised learning. However, at very high acceleration rates, such approaches show performance deterioration. In this study, we propose to use cyclic-consistency (CC) to improve self-supervised learning for highly accelerated MRI. In our proposed CC, simulated measurements are obtained by undersampling the network output using patterns drawn from the same distribution as the true one. The reconstructions of these simulated measurements are obtained using the same network and are then compared to the acquired data at the true sampling locations. This CC approach is used in conjunction with a masking-based self-supervised loss. Results show that the proposed method can substantially reduce aliasing artifacts at high acceleration rates, including rate-6 and rate-8 fastMRI knee imaging and 20-fold HCP-style fMRI.
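A single-coil sketch of the cyclic-consistency term described above might look as follows; the real method is multi-coil and pairs this with a masking-based self-supervised loss, and the forward model and toy network here are simplified assumptions.

```python
import torch

def fft2c(x):  # centered 2D FFT used as the simplified forward model
    return torch.fft.fftshift(torch.fft.fft2(torch.fft.ifftshift(x), norm="ortho"))

def cycle_consistency_loss(net, y, mask_true, mask_new):
    """y: acquired k-space (zero-filled); masks: binary sampling patterns from the same distribution."""
    recon1 = net(y, mask_true)                       # reconstruct the acquired data
    y_sim = mask_new * fft2c(recon1)                 # simulate a new acquisition of the output
    recon2 = net(y_sim, mask_new)                    # reconstruct the simulated measurements
    # Compare the re-reconstruction to the acquired data at the true sampling locations.
    return ((mask_true * fft2c(recon2) - y).abs() ** 2).mean()

# Toy "network": an inverse FFT of whatever k-space it is given (stand-in for a PD-DL model).
toy_net = lambda ksp, mask: torch.fft.ifftshift(torch.fft.ifft2(torch.fft.fftshift(ksp), norm="ortho"))
y = torch.randn(1, 32, 32, dtype=torch.complex64)
m1 = (torch.rand(1, 32, 32) > 0.7).to(torch.complex64)
m2 = (torch.rand(1, 32, 32) > 0.7).to(torch.complex64)
print(cycle_consistency_loss(toy_net, m1 * y, m1, m2).item())
```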
{"title":"CYCLE-CONSISTENT SELF-SUPERVISED LEARNING FOR IMPROVED HIGHLY-ACCELERATED MRI RECONSTRUCTION.","authors":"Chi Zhang, Omer Burak Demirel, Mehmet Akçakaya","doi":"10.1109/isbi56570.2024.10635895","DOIUrl":"10.1109/isbi56570.2024.10635895","url":null,"abstract":"<p><p>Physics-driven deep learning (PD-DL) has become a powerful tool for accelerated MRI. Recent developments have also developed unsupervised learning for PD-DL, including self-supervised learning. However, at very high acceleration rates, such approaches show performance deterioration. In this study, we propose to use cyclic-consistency (CC) to improve self-supervised learning for highly accelerated MRI. In our proposed CC, simulated measurements are obtained by undersampling the network output using patterns drawn from the same distribution as the true one. The reconstructions of these simulated measurements are obtained using the same network, which are then compared to the acquired data at the true sampling locations. This CC approach is used in conjunction with a masking-based self-supervised loss. Results show that the proposed method can substantially reduce aliasing artifacts at high acceleration rates, including rate 6 and 8 fastMRI knee imaging and 20-fold HCP-style fMRI.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11736014/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143017995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MAPPING ALZHEIMER'S DISEASE PSEUDO-PROGRESSION WITH MULTIMODAL BIOMARKER TRAJECTORY EMBEDDINGS
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635249 | Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11452153/pdf/
Lina Takemaru, Shu Yang, Ruiming Wu, Bing He, Christos Davatzikos, Jingwen Yan, Li Shen
Alzheimer's Disease (AD) is a neurodegenerative disorder characterized by progressive cognitive degeneration and motor impairment, affecting millions worldwide. Mapping the progression of AD is crucial for early detection of loss of brain function, timely intervention, and development of effective treatments. However, accurate measurements of disease progression are still challenging at present. This study presents a novel approach to understanding the heterogeneous pathways of AD through longitudinal biomarker data from medical imaging and other modalities. We propose an analytical pipeline adopting two popular machine learning methods from the single-cell transcriptomics domain, PHATE and Slingshot, to project multimodal biomarker trajectories to a low-dimensional space. These embeddings serve as our pseudotime estimates. We applied this pipeline to the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset to align longitudinal data across individuals at various disease stages. Our approach mirrors the technique used to cluster single-cell data into cell types based on developmental timelines. Our pseudotime estimates revealed distinct patterns of disease evolution and biomarker changes over time, providing a deeper understanding of the temporal dynamics of AD. The results show the potential of the approach in the clinical domain of neurodegenerative diseases, enabling more precise disease modeling and early diagnosis.
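For orientation, here is a minimal sketch (assuming the open-source phate package is installed, and using our own crude pseudotime proxy rather than the study's Slingshot-based estimate): it embeds longitudinal biomarker vectors with PHATE and takes distance from a chosen "root" group in the embedding as a stand-in pseudotime. All variable names and the root definition are illustrative.

```python
import numpy as np
import phate  # pip install phate

rng = np.random.default_rng(0)
biomarkers = rng.normal(size=(300, 20))  # stand-in for longitudinal multimodal biomarker vectors
is_root = rng.random(300) < 0.2          # stand-in for cognitively normal baseline visits

embedding = phate.PHATE(n_components=2, random_state=0).fit_transform(biomarkers)

# Crude pseudotime proxy: distance in the embedding from the centroid of the "root" group.
root_centroid = embedding[is_root].mean(axis=0)
pseudotime = np.linalg.norm(embedding - root_centroid, axis=1)
print(pseudotime.min(), pseudotime.max())
```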
{"title":"MAPPING ALZHEIMER'S DISEASE PSEUDO-PROGRESSION WITH MULTIMODAL BIOMARKER TRAJECTORY EMBEDDINGS.","authors":"Lina Takemaru, Shu Yang, Ruiming Wu, Bing He, Christos Davtzikos, Jingwen Yan, Li Shen","doi":"10.1109/isbi56570.2024.10635249","DOIUrl":"10.1109/isbi56570.2024.10635249","url":null,"abstract":"<p><p>Alzheimer's Disease (AD) is a neurodegenerative disorder characterized by progressive cognitive degeneration and motor impairment, affecting millions worldwide. Mapping the progression of AD is crucial for early detection of loss of brain function, timely intervention, and development of effective treatments. However, accurate measurements of disease progression are still challenging at present. This study presents a novel approach to understanding the heterogeneous pathways of AD through longitudinal biomarker data from medical imaging and other modalities. We propose an analytical pipeline adopting two popular machine learning methods from the single-cell transcriptomics domain, PHATE and Slingshot, to project multimodal biomarker trajectories to a low-dimensional space. These embeddings serve as our pseudotime estimates. We applied this pipeline to the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset to align longitudinal data across individuals at various disease stages. Our approach mirrors the technique used to cluster single-cell data into cell types based on developmental timelines. Our pseudotime estimates revealed distinct patterns of disease evolution and biomarker changes over time, providing a deeper understanding of the temporal dynamics of AD. The results show the potential of the approach in the clinical domain of neurodegenerative diseases, enabling more precise disease modeling and early diagnosis.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11452153/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142382696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}