STROKE LESION SEGMENTATION USING MULTI-STAGE CROSS-SCALE ATTENTION
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10980930
Liang Shang, William A Sethares, Anusha Adluru, Andrew L Alexander, Vivek Prabhakaran, Veena A Nair, Nagesh Adluru
Precise characterization of stroke lesions from MRI data has immense value in prognosticating clinical and cognitive outcomes following a stroke. Manual stroke lesion segmentation is time-consuming and requires the expertise of neurologists and neuroradiologists. Often, lesions are grossly characterized for their location and overall extent using bounding boxes without specific delineation of their boundaries. While such characterization provides some clinical value, to develop a precise mechanistic understanding of the impact of lesions on post-stroke vascular contributions to cognitive impairments and dementia (VCID), the stroke lesions need to be fully segmented with accurate boundaries. This work introduces the Multi-Stage Cross-Scale Attention (MSCSA) mechanism, applied to the U-Net family, to improve the mapping between brain structural features and lesions of varying sizes. Using the Anatomical Tracings of Lesions After Stroke (ATLAS) v2.0 dataset, MSCSA outperforms all baseline methods in both Dice and F1 scores on a subset focusing on small lesions, while maintaining competitive performance across the entire dataset. Notably, the ensemble strategy incorporating MSCSA achieves the highest scores for Dice and F1 on both the full dataset and the small lesion subset. These results demonstrate the effectiveness of MSCSA in segmenting small lesions and highlight its robustness across different training schemes for large stroke lesions. Our code is available at: https://github.com/nadluru/StrokeLesSeg.
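The abstract does not spell out the MSCSA block, but the general idea of cross-scale attention — queries from one U-Net stage attending to keys and values from another scale — can be sketched as below. All module names, channel sizes, and the residual fusion are illustrative assumptions, not the authors' implementation (their repository has the real code).

```python
# Hypothetical cross-scale attention between two U-Net stages (a sketch,
# not the MSCSA module): fine-scale queries attend to coarse-scale context.
import torch
import torch.nn as nn

class CrossScaleAttention(nn.Module):
    def __init__(self, fine_ch, coarse_ch, dim=64, heads=4):
        super().__init__()
        self.q = nn.Conv2d(fine_ch, dim, 1)          # queries from fine scale
        self.kv = nn.Conv2d(coarse_ch, dim * 2, 1)   # keys/values from coarse scale
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Conv2d(dim, fine_ch, 1)

    def forward(self, fine, coarse):
        B, _, H, W = fine.shape
        q = self.q(fine).flatten(2).transpose(1, 2)   # (B, HW, dim)
        k, v = self.kv(coarse).chunk(2, dim=1)
        k = k.flatten(2).transpose(1, 2)              # (B, hw, dim)
        v = v.flatten(2).transpose(1, 2)
        ctx, _ = self.attn(q, k, v)                   # cross-scale context
        ctx = ctx.transpose(1, 2).reshape(B, -1, H, W)
        return fine + self.out(ctx)                   # residual fusion

fine = torch.randn(1, 64, 32, 32)
coarse = torch.randn(1, 128, 16, 16)
print(CrossScaleAttention(64, 128)(fine, coarse).shape)  # torch.Size([1, 64, 32, 32])
```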
{"title":"STROKE LESION SEGMENTATION USING MULTI-STAGE CROSS-SCALE ATTENTION.","authors":"Liang Shang, William A Sethares, Anusha Adluru, Andrew L Alexander, Vivek Prabhakaran, Veena A Nair, Nagesh Adluru","doi":"10.1109/isbi60581.2025.10980930","DOIUrl":"10.1109/isbi60581.2025.10980930","url":null,"abstract":"<p><p>Precise characterization of stroke lesions from MRI data has immense value in prognosticating clinical and cognitive outcomes following a stroke. Manual stroke lesion segmentation is time-consuming and requires the expertise of neurologists and neuroradiologists. Often, lesions are grossly characterized for their location and overall extent using bounding boxes without specific delineation of their boundaries. While such characterization provides some clinical value, to develop a precise mechanistic understanding of the impact of lesions on post-stroke vascular contributions to cognitive impairments and dementia (VCID), the stroke lesions need to be fully segmented with accurate boundaries. This work introduces the Multi-Stage Cross-Scale Attention (MSCSA) mechanism, applied to the U-Net family, to improve the mapping between brain structural features and lesions of varying sizes. Using the Anatomical Tracings of Lesions After Stroke (ATLAS) v2.0 dataset, MSCSA outperforms all baseline methods in both Dice and F1 scores on a subset focusing on small lesions, while maintaining competitive performance across the entire dataset. Notably, the ensemble strategy incorporating MSCSA achieves the highest scores for Dice and F1 on both the full dataset and the small lesion subset. These results demonstrate the effectiveness of MSCSA in segmenting small lesions and highlight its robustness across different training schemes for large stroke lesions. Our code is available at: https://github.com/nadluru/StrokeLesSeg.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12782145/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145953868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MASKED MOMENTUM CONTRASTIVE DYNAMIC TRANSFORMER FOR SELF-SUPERVISED FUNCTIONAL CONNECTIVITY REPRESENTATION LEARNING
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10980976
Jiale Cheng, Dan Hu, Zhengwang Wu, Xinrui Yuan, Gang Li
Functional connectivity (FC) derived from functional MRI (fMRI) shows significant promise in predicting behavior and demographics using deep learning techniques. Incorporating vertex-wise FC maps, which capture fine-grained spatial details of neural activity, offers the potential to enhance FC-based prediction accuracy. However, fMRI data is inherently limited and noisy, challenging neural networks to reliably identify patterns within high-dimensional cortical vertices. Therefore, we design a novel Masked Momentum Contrastive Dynamic Transformer, which utilizes masked momentum contrastive pre-training to explore subject-specific features and enhances prediction accuracy by leveraging the temporal dynamics of FCs with a dynamic transformer. Specifically, our framework 1) learns effective subject-specific representations by treating vertex-wise FCs from different runs of an individual as distinct views and maximizing their affinity, and 2) employs a vertex-wise masking strategy to promote learning from limited data. Extensive experiments on gender classification and cognition prediction validate its superior performance on the Human Connectome Project dataset.
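As a rough illustration of the two ingredients named above — momentum-contrastive pre-training across runs of the same subject and vertex-wise masking — here is a minimal PyTorch sketch. The encoder interface, mask ratio, and temperature are assumptions; the actual model is a dynamic transformer not shown here.

```python
# Minimal sketch (assumed interfaces): momentum-encoder update, InfoNCE between
# two views of the same subjects, and vertex-wise masking.
import torch
import torch.nn.functional as F

def momentum_update(encoder_q, encoder_k, m=0.99):
    # Exponential moving average of the query encoder into the key encoder.
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1 - m)

def info_nce(q, k, temperature=0.1):
    # q, k: (B, D) embeddings of two views; matching rows are positives.
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    logits = q @ k.t() / temperature                  # (B, B) similarities
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)

def random_vertex_mask(x, mask_ratio=0.5):
    # Zero out a random subset of vertices; x: (B, V, D) vertex-wise features.
    keep = torch.rand(x.shape[:2], device=x.device) > mask_ratio
    return x * keep.unsqueeze(-1)

q, k = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce(q, k).item())
```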
{"title":"MASKED MOMENTUM CONTRASTIVE DYNAMIC TRANSFORMER FOR SELF-SUPERVISED FUNCTIONAL CONNECTIVITY REPRESENTATION LEARNING.","authors":"Jiale Cheng, Dan Hu, Zhengwang Wu, Xinrui Yuan, Gang Li","doi":"10.1109/isbi60581.2025.10980976","DOIUrl":"10.1109/isbi60581.2025.10980976","url":null,"abstract":"<p><p>Functional connectivity (FC) derived from functional MRI (fMRI) shows significant promise in predicting behavior and demographics using deep learning techniques. Incorporating vertex-wise FC maps, which capture fine-grained spatial details of neural activity, offers the potential to enhance FC-based prediction accuracy. However, fMRI data is inherently limited and noisy, challenging neural networks to reliably identify patterns within high-dimensional cortical vertices. Therefore, we design a novel Masked Momentum Contrastive Dynamic Transformer, which utilizes masked momentum contrastive pre-training to explore subject-specific features and enhances prediction accuracy by leveraging the temporal dynamics of FCs with a dynamic transformer. Specifically, our framework 1) learns effective subject-specific representations by treating vertex-wise FCs from different runs of an individual as distinct views and maximizing their affinity, and 2) employs a vertex-wise masking strategy to promote learning from limited data. Extensive experiments on gender classification and cognition prediction validate its superior performance on the Human Connectome Project dataset.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12490092/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145234390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DUAL MULTI-ATLAS REPRESENTATION ALIGNMENT FOR BRAIN DISORDER DIAGNOSIS USING MORPHOLOGICAL CONNECTOME
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10981287
Kangfu Han, Dan Hu, Jiale Cheng, Tianming Liu, Andrea Bozoki, Dajiang Zhu, Gang Li
In structural magnetic resonance imaging (MRI), the morphological connectome plays an important role in capturing coordinated patterns of region-wise morphological features for brain disorder diagnosis. However, significant challenges remain in aggregating diverse representations from multiple brain atlases, stemming from variations in the definition of regions of interest. To effectively integrate complementary information from multiple atlases while mitigating possible biases, we propose a novel dual multi-atlas representation alignment approach (DMAA) for brain disorder diagnosis. Specifically, we first minimize the maximum mean discrepancy of multi-atlas representations to align them into a unified distribution, reducing inter-atlas variability and enhancing effective feature fusion. Then, to further manage anatomical variability, we apply optimal transport to capture and harmonize region-wise differences, preserving plausible relationships across atlases. Extensive experiments on the ADNI, PPMI, ADHD200, and SchizConnect datasets demonstrate the effectiveness of the proposed DMAA for brain disorder diagnosis using the multi-atlas morphological connectome.
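The two alignment tools named above, maximum mean discrepancy and optimal transport, have standard formulations that can be sketched directly; the kernel bandwidth, entropic regularization, and uniform marginals below are illustrative assumptions, not the DMAA configuration.

```python
# Illustrative RBF-kernel MMD and a tiny Sinkhorn iteration for OT-based
# region matching between two atlases' feature sets.
import torch

def mmd_rbf(x, y, sigma=1.0):
    # x: (n, d), y: (m, d); biased MMD^2 estimate with a Gaussian kernel.
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def sinkhorn(cost, reg=0.5, iters=200):
    # Entropy-regularized OT with uniform marginals; cost: (n, m).
    n, m = cost.shape
    K = torch.exp(-cost / reg)
    a, b = torch.full((n,), 1.0 / n), torch.full((m,), 1.0 / m)
    v = torch.ones(m)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.t() @ u)
    return u[:, None] * K * v[None, :]  # transport plan coupling regions

x, y = torch.randn(10, 16), torch.randn(12, 16)  # region features, two atlases
print(mmd_rbf(x, y).item(), sinkhorn(torch.cdist(x, y)).sum().item())  # plan sums to 1
```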
{"title":"DUAL MULTI-ATLAS REPRESENTATION ALIGNMENT FOR BRAIN DISORDER DIAGNOSIS USING MORPHOLOGICAL CONNECTOME.","authors":"Kangfu Han, Dan Hu, Jiale Cheng, Tianming Liu, Andrea Bozoki, Dajiang Zhu, Gang Li","doi":"10.1109/isbi60581.2025.10981287","DOIUrl":"10.1109/isbi60581.2025.10981287","url":null,"abstract":"<p><p>In structural magnetic resonance imaging (MRI), morphological connectome plays an important role in capturing coordinated patterns of region-wise morphological features for brain disorder diagnosis. However, significant challenges remain in aggregating diverse representations from multiple brain atlases, stemming from variations in the definition of regions of interest. To effectively integrate complementary information from multiple atlases while mitigating possible biases, we propose a novel dual multi-atlas representation alignment approach (DMAA) for brain disorder diagnosis. Specifically, we first minimize the maximum mean discrepancy of multi-atlas representations to align them into a unified distribution, reducing inter-atlas variability and enhancing effective feature fusion. Then, to further manage the anatomical variability, we apply optimal transport to capture and harmonize region-wise differences, preserving plausible relationships across atlases. Extensive experiments on ADNI, PPMI, ADHD200, and SchizConnect datasets demonstrate the effectiveness of our proposed DMAA on brain disorder diagnosis using multi-atlas morphological connectome.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12178661/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144334593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EVIT-UNET: U-NET LIKE EFFICIENT VISION TRANSFORMER FOR MEDICAL IMAGE SEGMENTATION ON MOBILE AND EDGE DEVICES
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10981108
Xin Li, Wenhui Zhu, Xuanzhao Dong, Oana M Dumitrascu, Yalin Wang
With the rapid development of deep learning, CNN-based U-shaped networks have succeeded in medical image segmentation and are widely applied to various tasks. However, their limitations in capturing global features hinder their performance in complex segmentation tasks. The rise of the Vision Transformer (ViT) has effectively compensated for this deficiency of CNNs and promoted the application of ViT-based U-shaped networks in medical image segmentation. However, the high computational demands of ViT restrict its deployment on resource-constrained medical devices, mobile platforms, and edge devices. To address this, we propose EViT-UNet, an efficient ViT-based segmentation network that reduces computational complexity while maintaining accuracy, making it well suited for resource-constrained medical devices. EViT-UNet is built on a U-shaped architecture comprising an encoder, decoder, bottleneck layer, and skip connections, combining convolutional operations with self-attention mechanisms to optimize efficiency. Experimental results demonstrate that EViT-UNet achieves high accuracy in medical image segmentation while significantly reducing computational complexity. The code is available at https://github.com/Retinal-Research/EVIT-UNET.
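As a generic illustration of the efficiency idea — mixing a cheap depthwise convolution with self-attention inside one encoder block — consider the sketch below. It is not the EViT-UNet block; the channel count and single-head attention are assumptions.

```python
# A generic conv-plus-attention encoder block (illustrative only): depthwise
# convolution handles local mixing cheaply, attention adds global context.
import torch
import torch.nn as nn

class ConvAttnBlock(nn.Module):
    def __init__(self, ch, heads=1):
        super().__init__()
        self.dw = nn.Conv2d(ch, ch, 3, padding=1, groups=ch)  # depthwise, cheap
        self.norm = nn.LayerNorm(ch)
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)

    def forward(self, x):
        x = x + self.dw(x)                     # convolutional (local) branch
        B, C, H, W = x.shape
        t = x.flatten(2).transpose(1, 2)       # (B, HW, C) token sequence
        tn = self.norm(t)
        t = t + self.attn(tn, tn, tn)[0]       # self-attention (global) branch
        return t.transpose(1, 2).reshape(B, C, H, W)

x = torch.randn(1, 32, 16, 16)
print(ConvAttnBlock(32)(x).shape)  # torch.Size([1, 32, 16, 16])
```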
{"title":"EVIT-UNET: U-NET LIKE EFFICIENT VISION TRANSFORMER FOR MEDICAL IMAGE SEGMENTATION ON MOBILE AND EDGE DEVICES.","authors":"Xin Li, Wenhui Zhu, Xuanzhao Dong, Oana M Dumitrascu, Yalin Wang","doi":"10.1109/isbi60581.2025.10981108","DOIUrl":"10.1109/isbi60581.2025.10981108","url":null,"abstract":"<p><p>With the rapid development of deep learning, CNN-based U-shaped networks have succeeded in medical image segmentation and are widely applied for various tasks. However, their limitations in capturing global features hinder their performance in complex segmentation tasks. The rise of Vision Transformer (ViT) has effectively compensated for this deficiency of CNNs and promoted the application of ViT-based U-networks in medical image segmentation. However, the high computational demands of ViT make it unsuitable for many medical devices and mobile platforms with limited resources, restricting its deployment on resource-constrained and edge devices. To address this, we propose EViT-UNet, an efficient ViT-based segmentation network that reduces computational complexity while maintaining accuracy, making it ideal for resource-constrained medical devices. EViT-UNet is built on a U-shaped architecture, comprising an encoder, decoder, bottleneck layer, and skip connections, combining convolutional operations with self-attention mechanisms to optimize efficiency. Experimental results demonstrate that EViT-UNet achieves high accuracy in medical image segmentation while significantly reducing computational complexity. The code is available at https://github.com/Retinal-Research/EVIT-UNET.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12337706/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144823302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SCAR-AWARE LATE MECHANICAL ACTIVATION DETECTION NETWORK FOR OPTIMAL CARDIAC RESYNCHRONIZATION THERAPY PLANNING
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10980862
Jiarui Xing, Shuo Wang, Amit R Patel, Kenneth C Bilchick, Frederick H Epstein, Miaomiao Zhang
Accurate identification of late mechanical activation (LMA) regions is crucial for optimal cardiac resynchronization therapy (CRT) lead implantation. However, existing approaches using cardiac magnetic resonance (CMR) imaging often overlook myocardial scar, which may be mistakenly identified as a delayed activation region. To address this issue, we propose a scar-aware LMA detection network that simultaneously detects myocardial scar and prevents LMA localization in scarred regions. More specifically, our model integrates a pre-trained scar segmentation network using late gadolinium enhancement (LGE) CMRs into an LMA detection network based on highly accurate strain derived from displacement encoding with stimulated echoes (DENSE) CMRs. We introduce a novel scar-aware loss function that utilizes the segmented scar information to discourage false-positive detections of late-activated areas. Our model can be trained with or without paired LGE data. During inference, our model does not require LGE images as input, leveraging patterns learned from strain data alone to mitigate false-positive LMA detection in potential scar regions. We evaluate our model on subjects with and without myocardial scar, demonstrating significantly improved LMA detection accuracy in both scenarios. Our work paves the way for improved CRT planning, potentially leading to better patient outcomes.
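One plausible reading of the scar-aware loss — a penalty on LMA probability mass that falls inside the segmented scar, added to a standard detection loss — can be sketched as follows; the BCE base loss and the weighting are assumptions, not the paper's exact formulation.

```python
# Sketch of a "scar-aware" penalty: discourage positive LMA probability
# inside the scar mask (BCE base loss and weight are assumptions).
import torch
import torch.nn.functional as F

def scar_aware_loss(lma_prob, lma_target, scar_mask, weight=1.0):
    # lma_prob, lma_target, scar_mask: (B, 1, H, W); mask values in {0, 1}.
    base = F.binary_cross_entropy(lma_prob, lma_target)
    false_pos_in_scar = (lma_prob * scar_mask).mean()  # push LMA prob to 0 in scar
    return base + weight * false_pos_in_scar

p = torch.rand(2, 1, 32, 32)
t = (torch.rand(2, 1, 32, 32) > 0.5).float()
s = (torch.rand(2, 1, 32, 32) > 0.8).float()
print(scar_aware_loss(p, t, s).item())
```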
{"title":"SCAR-AWARE LATE MECHANICAL ACTIVATION DETECTION NETWORK FOR OPTIMAL CARDIAC RESYNCHRONIZATION THERAPY PLANNING.","authors":"Jiarui Xing, Shuo Wang, Amit R Patel, Kenneth C Bilchick, Frederick H Epstein, Miaomiao Zhang","doi":"10.1109/isbi60581.2025.10980862","DOIUrl":"10.1109/isbi60581.2025.10980862","url":null,"abstract":"<p><p>Accurate identification of late mechanical activation (LMA) regions is crucial for optimal cardiac resynchronization therapy (CRT) lead implantation. However, existing approaches using cardiac magnetic resonance (CMR) imaging often overlook myocardial scar information, which may be mistakenly identified as delayed activation regions. To address this issue, we propose a scar-aware LMA detection network that simultaneously detects myocardial scar and prevents LMA localization in these scarred regions. More specifically, our model integrates a pre-trained scar segmentation network using late gadolinium enhancement (LGE) CMRs into a LMA detection network based on highly accurate strain derived from displacement encoding with stimulated echoes (DENSE) CMRs. We introduce a novel scar-aware loss function that utilizes the segmented scar information to discourage false-positive detections of late activated areas. Our model can be trained with or without paired LGE data. During inference, our model does not require the input of LGE images, leveraging learned patterns from strain data alone to mitigate false-positive LMA detection in potential scar regions. We evaluate our model on subjects with and without myocardial scar, demonstrating significantly improved LMA detection accuracy in both scenarios. Our work paves the way for improved CRT planning, potentially leading to better patient outcomes.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12467527/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145187602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LOGSAGE: LOG-BASED SALIENCY FOR GUIDED ENCODING IN ROBUST NUCLEI SEGMENTATION OF IMMUNOFLUORESCENCE HISTOLOGY IMAGES
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/ISBI60581.2025.10980672
Sahar A Mohammed, Siyavash Shabani, Muhammad Sohaib, Corina Nicolescu, Mary Helen Barcellos-Hoff, Bahram Parvin
The tumor microenvironment (TME) is critical in cancer progression, development, and treatment response. However, its complex cellular architecture (e.g., cell type, organization) presents significant challenges for accurate immunofluorescence (IF) image segmentation. We introduce LoGSAGE-Net (LoG-based SAliency for Guided Encoding), which couples a Swin Transformer with the encoded response of the Laplacian of Gaussian (LoG) at multiple scales. The loss function incorporates two deformation metrics, combining Dice and curvature-alignment losses. Applied to a large cohort of preclinical data, the model shows improved performance over state-of-the-art methods, achieving a Dice score of 94.92% and a Panoptic Quality (PQ) score of 81%. This model supports robust profiling of the TME for sensitive assays.
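The LoG saliency that guides the encoder can be illustrated with standard scale-normalized Laplacian-of-Gaussian responses; the scale values below and the use of scipy are demonstration-only assumptions.

```python
# Multi-scale Laplacian-of-Gaussian responses as a saliency prior
# (scales are illustrative; scale-normalized by sigma^2 as in blob detection).
import numpy as np
from scipy.ndimage import gaussian_laplace

def multiscale_log(image, sigmas=(1.0, 2.0, 4.0)):
    # image: 2-D array; negated so bright blobs (e.g., nuclei) respond positively.
    return np.stack([-(s ** 2) * gaussian_laplace(image, sigma=s) for s in sigmas])

img = np.random.rand(64, 64).astype(np.float32)
print(multiscale_log(img).shape)  # (3, 64, 64): one response map per scale
```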
{"title":"LOGSAGE: LOG-BASED SALIENCY FOR GUIDED ENCODING IN ROBUST NUCLEI SEGMENTATION OF IMMUNOFLUORESCENCE HISTOLOGY IMAGES.","authors":"Sahar A Mohammed, Siyavash Shabani, Muhammad Sohaib, Corina Nicolescu, Mary Helen Barcellos-Hoff, Bahram Parvin","doi":"10.1109/ISBI60581.2025.10980672","DOIUrl":"10.1109/ISBI60581.2025.10980672","url":null,"abstract":"<p><p>The tumor microenvironment (TME) is critical in cancer progression, development, and treatment response. However, its complex cellular architecture (e.g., cell type, organization) presents significant challenges for accurate immunofluorescence (IF) image segmentation. We introduce <b>LoGSAGE</b>-Net (<b>LoG</b>-based <b>SA</b>liency for <b>G</b>uided <b>E</b>ncoding), which couples a Swin Transformer with the encoded response from Laplacian of Gaussian (LoG) on multiple scales. The loss function incorporates two deformation metrics, combining the Dice- and curvature alignment loss. The model is applied to a large cohort of preclinical data and has shown an improved performance over the state-of-the-art methods. The proposed model achieved a Dice score of 94.92% and a Panoptic Quality (PQ) score of 81%. This model supports robust profiling of the TME for sensitive assays.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12735128/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145835774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AUTOENCODER FOR 4-DIMENSIONAL FIBER ORIENTATION DISTRIBUTIONS FROM DIFFUSION MRI
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10981302
Shuo Huang, Lujia Zhong, Yonggang Shi
Fiber orientation distributions (FODs) are widely used in connectome analysis based on diffusion MRI. Spherical harmonics (SPHARMs) are often used for the efficient representation of FODs; however, SPHARMs over the 3-D image volume are in essence four-dimensional. This makes applying advanced deep learning methods, such as transformers and diffusion models, to FODs represented by high-order SPHARMs highly memory-consuming. In this work, we present an order-balanced order-level (OBOL) autoencoder that compresses FODs and decodes them with high accuracy. Our OBOL method uses a separate encoder for the FOD coefficients of each SPHARM order to balance the feature-map sizes across orders. This helps the encoder better preserve information from the low-order coefficients, which carry more information but occupy fewer volumes. In our experiments, we demonstrate that the decoded FODs of our OBOL autoencoder are more accurate than those of spatial-level or order-level autoencoders without order balance. We also tested the encoded latent space of the OBOL autoencoder in FOD super-resolution. Results show high accuracy with feasible memory usage on commonly available GPUs.
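The order-wise split that OBOL relies on follows directly from real spherical harmonics: even order l contributes 2l+1 coefficients, so lmax = 8 gives 45 volumes. The sketch below assigns one small encoder per order; the latent width and 3-D conv heads are illustrative assumptions, not the paper's architecture.

```python
# Sketch: split even-order SPHARM coefficient volumes by order and encode
# each order with its own head, so low orders are not swamped.
import torch
import torch.nn as nn

def order_slices(lmax=8):
    # Even-order real SH: order l contributes 2l+1 channels; lmax=8 -> 45 total.
    slices, start = [], 0
    for l in range(0, lmax + 1, 2):
        n = 2 * l + 1
        slices.append((l, slice(start, start + n)))
        start += n
    return slices

class OrderLevelEncoder(nn.Module):
    def __init__(self, lmax=8, latent_per_order=8):
        super().__init__()
        self.slices = order_slices(lmax)
        self.heads = nn.ModuleList(
            nn.Conv3d(2 * l + 1, latent_per_order, 3, padding=1)
            for l, _ in self.slices
        )

    def forward(self, fod):  # fod: (B, 45, X, Y, Z) for lmax=8
        return torch.cat(
            [h(fod[:, s]) for h, (_, s) in zip(self.heads, self.slices)], dim=1
        )

fod = torch.randn(1, 45, 8, 8, 8)
print(OrderLevelEncoder()(fod).shape)  # torch.Size([1, 40, 8, 8, 8]): 5 orders x 8
```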
{"title":"AUTOENCODER FOR 4-DIMENSIONAL FIBER ORIENTATION DISTRIBUTIONS FROM DIFFUSION MRI.","authors":"Shuo Huang, Lujia Zhong, Yonggang Shi","doi":"10.1109/isbi60581.2025.10981302","DOIUrl":"10.1109/isbi60581.2025.10981302","url":null,"abstract":"<p><p>Fiber orientation distributions (FODs) are widely used in connectome analysis based on diffusion MRI. Spherical harmonics (SPHARMs) are often used for the efficient representation of FODs; however, SPHARMs over the 3-D image volume are in essence four-dimensional. This makes it highly memory-consuming for applying advanced deep learning methods, such as the transformer and diffusion model, to FODs represented by high order SPHARMs. In this work, we present an order-balanced order-level (OBOL) autoencoder to compress the FODs with high accuracy after decoding. Our OBOL method uses separate encoders for FODs in each SPHARM order to balance the feature map size of FODs in different orders. This helps the encoder to better preserve information from the low-order coefficients that have more information but a smaller number of volumes. In our experiments, we demonstrated that the decoded FODs of our OBOL autoencoder have better accuracy than the spatial-level or order-level autoencoder without order balance. We also tested the encoded latent space of the OBOL autoencoder in FOD super-resolution. Results show high accuracy with feasible memory usage in commonly available GPUs.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12140619/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144236185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TOWARDS PATIENT-SPECIFIC SURGICAL PLANNING FOR BICUSPID AORTIC VALVE REPAIR: FULLY AUTOMATED SEGMENTATION OF THE AORTIC VALVE IN 4D CT
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/ISBI60581.2025.10981269
Zaiyang Guo, Ningjun J Dong, Harold Litt, Natalie Yushkevich, Melanie Freas, Jessica Nunez, Victor Ferrari, Jilei Hao, Shir Goldfinger, Matthew A Jolley, Joseph Bavaria, Nimesh Desai, Alison M Pouch
The bicuspid aortic valve (BAV) is the most prevalent congenital heart defect and may require surgery for complications such as stenosis, regurgitation, and aortopathy. BAV repair surgery is effective but challenging due to the heterogeneity of BAV morphology. Multiple imaging modalities can be employed to assist the quantitative assessment of BAVs for surgical planning. Contrast-enhanced 4D computed tomography (CT) produces volumetric temporal sequences with excellent contrast and spatial resolution. Segmentation of the aortic cusps and root in these images is an essential step in creating patient-specific models for visualization and quantification. While deep learning-based methods are capable of fully automated segmentation, no BAV-specific model exists. Among valve segmentation studies, there has been limited quantitative assessment of the clinical usability of the segmentation results. In this work, we developed a fully automated multi-label BAV segmentation pipeline based on nnU-Net. The predicted segmentations were used to carry out surgically relevant morphological measurements including geometric cusp height, commissural angle and annulus diameter, and the results were compared against manual segmentation. Automated segmentation achieved average Dice scores of over 0.7 and symmetric mean distance below 0.7 mm for all three aortic cusps and the root wall. Clinically relevant benchmarks showed good consistency between manual and predicted segmentations. Overall, fully automated BAV segmentation of 3D frames in 4D CT can produce clinically usable measurements for surgical risk stratification, but the temporal consistency of segmentations needs to be improved.
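Once landmark coordinates are extracted from the segmentation, the named measurements reduce to simple geometry. The helpers below are hypothetical: the landmark-extraction step is assumed done upstream, and the plane-based definition of geometric cusp height is an assumption, not necessarily the paper's definition.

```python
# Hypothetical geometric helpers for surgically relevant measurements from
# landmark coordinates (landmark extraction from the segmentation not shown).
import numpy as np

def commissural_angle(center, comm_a, comm_b):
    # Angle in degrees at the annular center between two commissure points.
    u, v = comm_a - center, comm_b - center
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def geometric_cusp_height(free_edge_midpoint, annular_point, plane_normal):
    # Perpendicular distance from the cusp free-edge midpoint to the annular plane.
    n = plane_normal / np.linalg.norm(plane_normal)
    return abs(np.dot(free_edge_midpoint - annular_point, n))

c = np.array([0.0, 0.0, 0.0])
print(commissural_angle(c, np.array([1.0, 0, 0]), np.array([0, 1.0, 0])))  # 90.0
print(geometric_cusp_height(np.array([0, 0, 12.0]), c, np.array([0, 0, 1.0])))  # 12.0
```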
{"title":"TOWARDS PATIENT-SPECIFIC SURGICAL PLANNING FOR BICUSPID AORTIC VALVE REPAIR: FULLY AUTOMATED SEGMENTATION OF THE AORTIC VALVE IN 4D CT.","authors":"Zaiyang Guo, Ningjun J Dong, Harold Litt, Natalie Yushkevich, Melanie Freas, Jessica Nunez, Victor Ferrari, Jilei Hao, Shir Goldfinger, Matthew A Jolley, Joseph Bavaria, Nimesh Desai, Alison M Pouch","doi":"10.1109/ISBI60581.2025.10981269","DOIUrl":"10.1109/ISBI60581.2025.10981269","url":null,"abstract":"<p><p>The bicuspid aortic valve (BAV) is the most prevalent congenital heart defect and may require surgery for complications such as stenosis, regurgitation, and aortopathy. BAV repair surgery is effective but challenging due to the heterogeneity of BAV morphology. Multiple imaging modalities can be employed to assist the quantitative assessment of BAVs for surgical planning. Contrast-enhanced 4D computed tomography (CT) produces volumetric temporal sequences with excellent contrast and spatial resolution. Segmentation of the aortic cusps and root in these images is an essential step in creating patient-specific models for visualization and quantification. While deep learning-based methods are capable of fully automated segmentation, no BAV-specific model exists. Among valve segmentation studies, there has been limited quantitative assessment of the clinical usability of the segmentation results. In this work, we developed a fully automated multi-label BAV segmentation pipeline based on nnU-Net. The predicted segmentations were used to carry out surgically relevant morphological measurements including geometric cusp height, commissural angle and annulus diameter, and the results were compared against manual segmentation. Automated segmentation achieved average Dice scores of over 0.7 and symmetric mean distance below 0.7 mm for all three aortic cusps and the root wall. Clinically relevant benchmarks showed good consistency between manual and predicted segmentations. Overall, fully automated BAV segmentation of 3D frames in 4D CT can produce clinically usable measurements for surgical risk stratification, but the temporal consistency of segmentations needs to be improved.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12237532/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144593151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PLASMA-CYCLEGAN: PLASMA BIOMARKER-GUIDED MRI TO PET CROSS-MODALITY TRANSLATION USING CONDITIONAL CYCLEGAN
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10980900
Yanxi Chen, Yi Su, Celine Dumitrascu, Kewei Chen, David Weidman, Richard J Caselli, Nicholas Ashton, Eric M Reiman, Yalin Wang
Cross-modality translation between MRI and PET imaging is challenging due to the distinct mechanisms underlying these modalities. Blood-based biomarkers (BBBMs) are revolutionizing Alzheimer's disease (AD) detection by identifying patients and quantifying brain amyloid levels. However, the potential of BBBMs to enhance PET image synthesis remains unexplored. In this paper, we performed a thorough study of the effect of incorporating BBBMs into deep generative models. By evaluating three widely used cross-modality translation models, we found that BBBM integration consistently enhances generative quality across all models. By visual inspection of the generated results, we observed that PET images generated by CycleGAN exhibit the best visual fidelity. Based on these findings, we propose Plasma-CycleGAN, a novel generative model based on CycleGAN, to synthesize PET images from MRI using BBBMs as conditions. This is the first approach to integrate BBBMs into conditional cross-modality translation between MRI and PET.
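A common way to condition an image-to-image generator on a scalar covariate such as a plasma biomarker is to tile it as an extra input channel. The toy generator below illustrates only that conditioning trick; the layer stack is an assumption and bears no relation to the Plasma-CycleGAN architecture.

```python
# Toy conditional generator: broadcast a per-subject scalar biomarker to a
# constant image plane and concatenate it with the MRI input channel.
import torch
import torch.nn as nn

class ConditionedGenerator(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )

    def forward(self, mri, biomarker):
        # biomarker: (B,) scalar per subject, tiled across the spatial grid.
        B, _, H, W = mri.shape
        cond = biomarker.view(B, 1, 1, 1).expand(B, 1, H, W)
        return self.net(torch.cat([mri, cond], dim=1))

g = ConditionedGenerator()
print(g(torch.randn(2, 1, 64, 64), torch.tensor([0.3, 1.2])).shape)  # (2, 1, 64, 64)
```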
{"title":"PLASMA-CYCLEGAN: PLASMA BIOMARKER-GUIDED MRI TO PET CROSS-MODALITY TRANSLATION USING CONDITIONAL CYCLEGAN.","authors":"Yanxi Chen, Yi Su, Celine Dumitrascu, Kewei Chen, David Weidman, Richard J Caselli, Nicholas Ashton, Eric M Reiman, Yalin Wang","doi":"10.1109/isbi60581.2025.10980900","DOIUrl":"10.1109/isbi60581.2025.10980900","url":null,"abstract":"<p><p>Cross-modality translation between MRI and PET imaging is challenging due to the distinct mechanisms underlying these modalities. Blood-based biomarkers (BBBMs) are revolutionizing Alzheimer's disease (AD) detection by identifying patients and quantifying brain amyloid levels. However, the potential of BBBMs to enhance PET image synthesis remains unexplored. In this paper, we performed a thorough study on the effect of incorporating BBBM into deep generative models. By evaluating three widely used cross-modality translation models, we found that BBBMs integration consistently enhances the generative quality across all models. By visual inspection of the generated results, we observed that PET images generated by CycleGAN exhibit the best visual fidelity. Based on these findings, we propose Plasma-CycleGAN, a novel generative model based on CycleGAN, to synthesize PET images from MRI using BBBMs as conditions. This is the first approach to integrate BBBMs in conditional cross-modality translation between MRI and PET.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12352453/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144877198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
INDIVIDUALIZED TRAJECTORY PREDICTION OF EARLY DEVELOPING FUNCTIONAL CONNECTIVITY
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10980810
Weiran Xia, Xin Zhang, Dan Hu, Jiale Cheng, Zhengwang Wu, Li Wang, Weili Lin, Gang Li
Predicting the development of functional connectivity (FC) derived from resting-state functional MRI (rs-fMRI) is pivotal for elucidating intrinsic brain functional organization and modeling its dynamic development during infancy. Existing deep learning methods typically predict FC at a target timepoint from each available FC independently, yielding inconsistent predictions and overlooking longitudinal dependencies, which introduces ambiguity in practical applications. Furthermore, the scarcity and irregular distribution of longitudinal rs-fMRI data pose significant challenges in accurately predicting and delineating the trajectories of early brain functional development. To address these issues, we propose a novel Triplet Cycle-Consistent Masked Autoencoder (TC-MAE) for predicting the developmental trajectory of infant FC. Our TC-MAE can traverse FC over an extended period, extract unique individual characteristics, and predict the target FC at any given age in infancy with longitudinal consistency. Extensive experiments on 368 longitudinal infant rs-fMRI scans demonstrate the superior performance of the proposed method in longitudinal FC prediction compared with state-of-the-art approaches.
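One plausible form of the longitudinal cycle-consistency idea — chaining two predictions through an intermediate age and requiring agreement with the direct prediction — can be sketched as below. The age-conditioned predictor interface and the MSE penalty are assumptions, not the TC-MAE objective.

```python
# Sketch of a longitudinal cycle-consistency term over an age triplet:
# t1 -> t2 -> t3 (two hops) should agree with t1 -> t3 (direct).
import torch

def cycle_consistency_loss(f, fc_t1, age1, age2, age3):
    fc_t2_hat = f(fc_t1, age1, age2)            # one-hop prediction
    fc_t3_two_hop = f(fc_t2_hat, age2, age3)    # chained two-hop prediction
    fc_t3_direct = f(fc_t1, age1, age3)         # direct prediction
    return torch.mean((fc_t3_two_hop - fc_t3_direct) ** 2)

# Stand-in predictor for demonstration only (a real model would be learned).
f = lambda fc, a_src, a_tgt: fc * (1.0 + 0.1 * (a_tgt - a_src))
fc = torch.randn(4, 90, 90)  # e.g., a batch of 90-ROI connectivity matrices
print(cycle_consistency_loss(f, fc, 3.0, 6.0, 12.0).item())
```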
{"title":"INDIVIDUALIZED TRAJECTORY PREDICTION OF EARLY DEVELOPING FUNCTIONAL CONNECTIVITY.","authors":"Weiran Xia, Xin Zhang, Dan Hu, Jiale Cheng, Zhengwang Wu, Li Wang, Weili Lin, Gang Li","doi":"10.1109/isbi60581.2025.10980810","DOIUrl":"10.1109/isbi60581.2025.10980810","url":null,"abstract":"<p><p>Predicting the development of functional connectivity (FC) derived from resting-state functional MRI is pivotal for elucidating the intrinsic brain functional organization and modeling its dynamic development during infancy. Existing deep learning methods typically predict FC at a target timepoint from each available FC independently, yielding inconsistent predictions and overlooking longitudinal dependencies, which introduce ambiguity in practical applications. Furthermore, the scarcity and irregular distribution of longitudinal rs-fMRI data pose significant challenges in accurately predicting and delineating the trajectories of early brain functional development. To address these issues, we propose a novel Triplet Cycle-Consistent Masked Autoencoder (TC-MAE) for the trajectory prediction of the development of infant FC. Our TC-MAE has the capability to traverse FC over an extended period, extract unique individual characteristics, and predict target FC at any given age in infancy with longitudinal consistency. Extensive experiments on 368 longitudinal infant rs-fMRI scans demonstrate the superior performance of the proposed method in longitudinal FC prediction compared with state-of-the-art approaches.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12490125/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145234347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}