Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635579
Muhammad Ahmad Sultan, Chong Chen, Yingmin Liu, Xuan Lei, Rizwan Ahmad
High-quality training data are not always available in dynamic MRI. To address this, we propose a self-supervised deep learning method called deep image prior with structured sparsity (DISCUS) for reconstructing dynamic images. DISCUS is inspired by deep image prior (DIP) and recovers a series of images through joint optimization of network parameters and input code vectors. In addition, DISCUS encourages group sparsity on the frame-specific code vectors to discover the low-dimensional manifold that describes the temporal variations across frames. In contrast to prior work on manifold learning, DISCUS does not require the manifold dimensionality to be specified. We validate DISCUS using three numerical studies. In the first study, we simulate a dynamic Shepp-Logan phantom with frames undergoing random rotations, translations, or both, and demonstrate that DISCUS can discover the dimensionality of the underlying manifold. In the second study, we use data from a realistic late gadolinium enhancement (LGE) phantom to compare DISCUS with compressed sensing (CS) and DIP, and to demonstrate the positive impact of group sparsity. In the third study, we use retrospectively undersampled single-shot LGE data from five patients to compare DISCUS with CS reconstructions. The results from these studies demonstrate that DISCUS outperforms CS and DIP, and that enforcing group sparsity on the code vectors helps discover the true manifold dimensionality and provides an additional performance gain.
Title: DEEP IMAGE PRIOR WITH STRUCTURED SPARSITY (DISCUS) FOR DYNAMIC MRI RECONSTRUCTION
Journal: Proceedings. IEEE International Symposium on Biomedical Imaging, vol. 2024
Free PMC PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12063720/pdf/
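The group-sparsity mechanism described in the DISCUS abstract above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the L2,1 form of the penalty, the array layout, and the `tol` threshold are assumptions made for illustration.

```python
import numpy as np

def group_sparsity_penalty(codes):
    """L2,1-style penalty over frame-specific code vectors: the L2 norm of
    each latent dimension taken across frames, summed over dimensions, so
    entire dimensions are encouraged to switch off together."""
    # codes: (num_frames, code_dim) array of input code vectors
    return np.sqrt((codes ** 2).sum(axis=0)).sum()

def discovered_dimensionality(codes, tol=1e-3):
    """Count latent dimensions that stay active across frames; with a
    group-sparse code, this count estimates the manifold dimensionality."""
    energy = np.sqrt((codes ** 2).mean(axis=0))
    return int((energy > tol).sum())
```

Under such a penalty, a code matrix whose frames vary along only two latent dimensions keeps exactly those two columns nonzero, which is how the dimensionality can be read off without specifying it in advance.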
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635382
Xuan Lei, Philip Schniter, Chong Chen, Muhammad Ahmad Sultan, Rizwan Ahmad
Modern MRI scanners utilize one or more arrays of small receive-only coils to collect k-space data. The sensitivity maps of the coils, when estimated using traditional methods, differ from the true sensitivity maps, which are generally unknown. Consequently, the reconstructed MR images exhibit undesired spatial variation in intensity. These intensity variations can be at least partially corrected using pre-scan data. In this work, we propose an intensity correction method that utilizes pre-scan data. For demonstration, we apply our method to a digital phantom, as well as to cardiac MRI data collected from a commercial scanner by Siemens Healthineers. The code is available at https://github.com/OSU-MR/SCC.
Title: SURFACE COIL INTENSITY CORRECTION FOR MRI
Free PMC PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12063721/pdf/
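One generic way pre-scan data can be used for intensity correction is sketched below. This is an assumption for illustration, not the paper's algorithm (which lives in the linked repository): estimate a smooth sensitivity profile from the ratio of a surface-coil pre-scan to a body-coil pre-scan, then divide it out of the reconstructed image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def intensity_correct(image, prescan_surface, prescan_body, sigma=8.0, eps=1e-6):
    """Divide out a smooth sensitivity profile estimated from pre-scan data.
    sigma and eps are illustrative choices, not values from the paper."""
    ratio = prescan_surface / (prescan_body + eps)
    sensitivity = gaussian_filter(ratio, sigma)        # enforce smoothness
    sensitivity = sensitivity / (sensitivity.max() + eps)
    return image / (sensitivity + eps)
```

On a synthetic image shaded by a smooth sensitivity ramp, the corrected output has far less spatial intensity variation than the input.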
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635393
Soumyanil Banerjee, Ming Dong, Carri Glide-Hurst
U-shaped networks and their variants have demonstrated exceptional results for medical image segmentation. In this paper, we propose a novel dual self-distillation (DSD) framework for U-shaped networks for 3D medical image segmentation. DSD distills knowledge from the ground-truth segmentation labels to the decoder layers, and also between the encoder and decoder layers, of a single U-shaped network. DSD is a generalized training strategy that can be attached to the backbone architecture of any U-shaped network to further improve its segmentation performance. We attached DSD to two state-of-the-art U-shaped backbones, and extensive experiments on two public 3D medical image segmentation datasets demonstrated significant improvement over those backbones, with a negligible increase in trainable parameters and training time. The source code is publicly available at https://github.com/soumbane/DualSelfDistillation.
Title: DUAL SELF-DISTILLATION OF U-SHAPED NETWORKS FOR 3D MEDICAL IMAGE SEGMENTATION
Free PMC PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11666255/pdf/
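The two distillation paths described in the abstract (labels to decoder stages, and between stages) can be sketched as a combined loss. This is an illustration of the idea, not the authors' exact formulation; the temperature `T` and weight `alpha` are assumed hyperparameters.

```python
import numpy as np

def softmax(z, axis=1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dsd_loss(stage_logits, onehot, T=3.0, alpha=0.1):
    """Sketch of a dual self-distillation objective: every stage is
    supervised by the ground truth, and earlier stages are additionally
    distilled toward the deepest stage's softened prediction."""
    # stage_logits: list of (batch, classes) predictions, deepest last
    ce = -sum(np.mean(np.sum(onehot * np.log(softmax(p) + 1e-12), axis=1))
              for p in stage_logits)
    teacher = softmax(stage_logits[-1] / T)
    kl = 0.0
    for p in stage_logits[:-1]:
        student = softmax(p / T)
        kl += np.mean(np.sum(teacher * (np.log(teacher + 1e-12)
                                        - np.log(student + 1e-12)), axis=1))
    return ce + alpha * T * T * kl
```

When all stages already agree, the distillation term vanishes and only the ground-truth supervision remains, which matches the intuition that DSD adds pressure only where stages disagree.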
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635307
William Kelley, Nathan Ngo, Adrian V Dalca, Bruce Fischl, Lilla Zöllei, Malte Hoffmann
Skull-stripping is the removal of background and non-brain anatomical features from brain images. While many skull-stripping tools exist, few target pediatric populations. With the emergence of multi-institutional pediatric data acquisition efforts to broaden the understanding of perinatal brain development, it is essential to develop robust and well-tested tools ready for the relevant data processing. However, the broad range of neuroanatomical variation in the developing brain, combined with additional challenges such as high motion levels, as well as shoulder and chest signal in the images, leaves many adult-specific tools ill-suited for pediatric skull-stripping. Building on an existing framework for robust and accurate skull-stripping, we propose developmental SynthStrip (d-SynthStrip), a skull-stripping model tailored to pediatric images. This framework exposes networks to highly variable images synthesized from label maps. Our model substantially outperforms pediatric baselines across scan types and age cohorts. In addition, the <1-minute runtime of our tool compares favorably to the fastest baselines. We distribute our model at https://w3id.org/synthstrip.
Title: BOOSTING SKULL-STRIPPING PERFORMANCE FOR PEDIATRIC BRAIN IMAGES
Free PMC PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11451993/pdf/
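The synthesis strategy the abstract describes, exposing networks to highly variable images generated from label maps, can be shown in miniature. This is a toy sketch of the idea, not the SynthStrip code; the uniform intensity range and noise level are arbitrary assumptions.

```python
import numpy as np

def synthesize_from_labels(label_map, rng=None):
    """Assign each label a random mean intensity and add noise, so a
    network trained on many such draws cannot overfit any one contrast."""
    rng = np.random.default_rng() if rng is None else rng
    image = np.zeros(label_map.shape, dtype=float)
    for lab in np.unique(label_map):
        image[label_map == lab] = rng.uniform(0.0, 1.0)
    image += rng.normal(0.0, 0.05, size=image.shape)
    return image
```

Each call produces a differently contrasted image of the same anatomy, which is what makes the trained model robust across scan types.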
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635897
Sayantan Kumar, Philip Payne, Aristeidis Sotiras
Normative models in neuroimaging learn the distribution of brain patterns in a healthy population and estimate how subjects with a disease, such as Alzheimer's disease (AD), deviate from that norm. Existing variational autoencoder (VAE)-based normative models using multimodal neuroimaging data aggregate information from multiple modalities by estimating the product or the average of the unimodal latent posteriors. This can often lead to uninformative joint latent distributions, which affects the estimation of subject-level deviations. In this work, we addressed these limitations by adopting the Mixture-of-Product-of-Experts (MoPoE) technique, which allows better modelling of the joint latent posterior. Our model labelled subjects as outliers by calculating deviations from the multimodal latent space. Further, we identified which latent dimensions and brain regions were associated with abnormal deviations due to AD pathology.
Title: IMPROVING NORMATIVE MODELING FOR MULTI-MODAL NEUROIMAGING DATA USING MIXTURE-OF-PRODUCT-OF-EXPERTS VARIATIONAL AUTOENCODERS
Free PMC PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11600985/pdf/
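The difference between the aggregation schemes in the abstract can be made concrete for Gaussian posteriors. Multiplying expert densities has a closed form (precisions add, means are precision-weighted), and the MoPoE idea builds one such product per modality subset. This is a sketch of the standard formulas, not the paper's model code.

```python
import numpy as np
from itertools import combinations

def product_of_experts(mus, logvars):
    """Fuse unimodal Gaussian posteriors N(mu_m, var_m) by multiplying
    densities: precisions add; the mean is precision-weighted."""
    precisions = np.exp(-np.asarray(logvars, dtype=float))
    var = 1.0 / precisions.sum(axis=0)
    mu = var * (np.asarray(mus, dtype=float) * precisions).sum(axis=0)
    return mu, np.log(var)

def mixture_of_poe(mus, logvars):
    """MoPoE in miniature: one PoE per non-empty modality subset,
    combined as a uniform mixture (2^M - 1 components)."""
    out = []
    for r in range(1, len(mus) + 1):
        for idx in combinations(range(len(mus)), r):
            out.append(product_of_experts([mus[i] for i in idx],
                                          [logvars[i] for i in idx]))
    return out
```

For two unit-variance experts at means 0 and 2, the product sits at mean 1 with variance 0.5, while the mixture keeps the single-modality components as well, which is what preserves modality-specific information in the joint posterior.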
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635322
Akash Awasthi, Safwan Ahmad, Bryant Le, Hien Nguyen
In the realm of chest X-ray (CXR) image analysis, radiologists meticulously examine various regions, documenting their observations in reports. The prevalence of errors in CXR diagnoses, particularly among inexperienced radiologists and hospital residents, underscores the importance of understanding radiologists' intentions and the corresponding regions of interest. This understanding is crucial for correcting mistakes by guiding radiologists to the accurate regions of interest, especially in the diagnosis of chest radiograph abnormalities. In response to this imperative, we propose a novel system designed to identify the primary intentions articulated by radiologists in their reports and the corresponding regions of interest in CXR images. This system seeks to elucidate the visual context underlying radiologists' textual findings, with the potential to rectify errors made by less experienced practitioners and direct them to precise regions of interest. Importantly, the proposed system can be instrumental in providing constructive feedback to inexperienced radiologists or junior residents in the hospital, bridging the gap in face-to-face communication. The system represents a valuable tool for enhancing diagnostic accuracy and fostering continuous learning within the medical community.
Title: DECODING RADIOLOGISTS' INTENTIONS: A NOVEL SYSTEM FOR ACCURATE REGION IDENTIFICATION IN CHEST X-RAY IMAGE ANALYSIS
Free PMC PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12176413/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635805
Y Djebra, X Liu, T Marin, A Tiss, M Dhaynaut, N Guehl, K Johnson, G El Fakhri, C Ma, J Ouyang
Positron emission tomography (PET) is a valuable imaging method for studying molecular-level processes in the body, such as aggregates of hyperphosphorylated tau (p-tau) protein, a hallmark of several neurodegenerative diseases including Alzheimer's disease. P-tau density and cerebral perfusion can be quantified from PET data using tracer kinetic modeling techniques. However, noise in PET images leads to uncertainty in the estimated kinetic parameters. This uncertainty can be quantified in a Bayesian framework by the posterior distribution of the kinetic parameters given the PET measurements. Markov chain Monte Carlo (MCMC) techniques can be employed to estimate the posterior distribution, although at significant computational cost. In this paper, we propose to leverage the inference efficiency of deep learning to infer the posterior distribution, introducing a novel approach based on a denoising diffusion probabilistic model (DDPM). The performance of the proposed method was evaluated on an [18F]MK6240 study and compared to an MCMC method. Our approach offered a significant reduction in computation time (over 30 times faster than MCMC) and consistently predicted accurate (<0.8% mean error) and precise (<5.77% standard deviation error) posterior distributions.
Title: DIFFUSION MODEL-BASED POSTERIOR DISTRIBUTION PREDICTION FOR KINETIC PARAMETER ESTIMATION IN DYNAMIC PET
Free PMC PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11554386/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635753
Peiyu Duan, Nicha C Dvornek, Jiyao Wang, Jeffrey Eilbott, Yuexi Du, Denis G Sukhodolsky, James S Duncan
Children with Autism Spectrum Disorder (ASD) frequently exhibit comorbid anxiety, which contributes to impairment and requires treatment. Therefore, it is critical to investigate co-occurring autism and anxiety with functional imaging tools to understand the brain mechanisms of this comorbidity. The Multidimensional Anxiety Scale for Children, 2nd Edition (MASC-2) score is a common tool to evaluate the daily anxiety level of autistic children. Predicting MASC-2 scores from functional magnetic resonance imaging (fMRI) data will help gain more insight into the brain functional networks of children with ASD complicated by anxiety. However, most current graph neural network (GNN) studies using fMRI focus only on graph operations and ignore spectral features. In this paper, we explored the feasibility of using spectral features to predict MASC-2 total scores. We proposed SpectBGNN, a graph-based network that uses spectral features and integrates graph spectral filtering layers to extract hidden information. We experimented with multiple spectral analysis algorithms and compared the performance of SpectBGNN with CPM, GAT, and BrainGNN on a dataset consisting of 26 typically developing children and 70 children with ASD, using 5-fold cross-validation. We showed that, among all spectral analysis algorithms tested, using the fast Fourier transform (FFT) or Welch's power spectral density (PSD) as node features performs significantly better than using correlation features, and that adding the graph spectral filtering layer significantly increases the network's performance.
Title: SPECTRAL BRAIN GRAPH NEURAL NETWORK FOR PREDICTION OF ANXIETY IN CHILDREN WITH AUTISM SPECTRUM DISORDER
Free PMC PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11655121/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635710
Celina Alba, Giuseppe Barisano, Alexis Bennett, Akul Sharma, Paul V Espa, Dominique Duncan
Post-traumatic epilepsy (PTE) is characterized by seizures that occur at least one week after traumatic brain injury (TBI). Although PTE remains one of the most life-altering outcomes of TBI, there are no preventative treatments. The Epilepsy Bioinformatics Study for Antiepileptogenic Therapy (EpiBioS4Rx) is an international project designed to identify multimodal biomarkers of PTE; early EpiBioS4Rx research suggests that features of perivascular spaces (PVS) are a potential biomarker. This study evaluates the association between volume fraction (VF), the volume of PVS relative to total brain volume, and seizure activity. Structural magnetic resonance (MR) imaging from a subset of 62 EpiBioS4Rx subjects was used to create Enhanced PVS Contrast (EPC) imaging to segment and quantify PVS metrics. A multiple logistic regression model that controlled for demographic and clinical factors revealed a significant difference between the late seizure-positive and seizure-negative groups in the paracentral lobule, precentral gyrus, and temporal pole of the right hemisphere. These findings are supported by prior literature identifying a relationship between PVS function in these regions and seizure activity after TBI.
Title: ENLARGED PERIVASCULAR SPACES IN FRONTAL AND TEMPORAL CORTICAL REGIONS CHARACTERIZE SEIZURE OUTCOME AFTER TRAUMATIC BRAIN INJURY
Free PMC PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12119173/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635289
Abhijeet Parida, Zhifan Jiang, Roger J Packer, Robert A Avery, Syed M Anwar, Marius G Linguraru
Image harmonization is an important preprocessing strategy for addressing domain shifts that arise from data acquired with different machines and scanning protocols in medical imaging. However, benchmarking the effectiveness of harmonization techniques has been a challenge due to the lack of widely available standardized datasets with ground truths. In this context, we propose three metrics for medical image harmonization: two intensity harmonization metrics and one anatomy preservation metric, none of which require ground truths. Through extensive studies on a dataset with available harmonization ground truth, we demonstrate that our metrics correlate with established image quality assessment metrics. We show how these novel metrics may be applied to real-world scenarios where no harmonization ground truth exists. Additionally, we provide insights into different interpretations of the metric values, shedding light on their significance in the context of the harmonization process. As a result of our findings, we advocate for the adoption of these quantitative harmonization metrics as a standard for benchmarking the performance of image harmonization techniques.
Title: QUANTITATIVE METRICS FOR BENCHMARKING MEDICAL IMAGE HARMONIZATION
Free PMC PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12790385/pdf/