Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10981204
Nima Yaghoobi, Jyothi Rikhab Chand, Yan Chen, Steve R Kecskemeti, James H Holmes, Mathews Jacob
The acquisition of 3D multi-contrast MRI data with good isotropic spatial resolution is challenged by lengthy scan times. In this work, we introduce a CNN-based multiscale energy model to learn the joint probability distribution of the multi-contrast images. The joint recovery of the contrasts from undersampled data is posed as a maximum a posteriori estimation problem, where the learned energy serves as the prior. We use a majorize-minimize algorithm to solve the resulting optimization problem. The proposed model leverages the redundancies across contrasts to improve image fidelity. The proposed scheme preserves fine details and contrast, offering sharper reconstructions than methods that recover each contrast independently. While we focus on 3D MPnRAGE acquisitions in this work, the proposed approach generalizes to arbitrary multi-contrast settings.
Title: FAST MULTI-CONTRAST MRI USING JOINT MULTISCALE ENERGY MODEL. | Proceedings. IEEE International Symposium on Biomedical Imaging, vol. 2025 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12381937/pdf/
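The MAP formulation above, a data-consistency term plus a learned energy prior, can be sketched with a toy stand-in for the CNN energy. This is not the authors' multiscale model or their majorize-minimize solver: it assumes a simple quadratic smoothness energy E(x) = ||Dx||^2 and plain gradient descent, just to show how a learned energy would enter the reconstruction.

```python
import numpy as np

def map_recover(A, b, grad_energy, lam=0.5, step=0.05, iters=500):
    """Toy MAP recovery: minimize ||Ax - b||^2 + lam * E(x) by gradient
    descent, where grad_energy returns the gradient of the prior energy E."""
    x = A.T @ b  # adjoint initialization, common for undersampled recovery
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - b) + lam * grad_energy(x)
        x = x - step * grad
    return x

def grad_smooth(x):
    """Gradient of the stand-in energy E(x) = sum of squared first differences."""
    g = np.zeros_like(x)
    g[1:-1] = 2 * (2 * x[1:-1] - x[:-2] - x[2:])
    g[0] = 2 * (x[0] - x[1])
    g[-1] = 2 * (x[-1] - x[-2])
    return g

rng = np.random.default_rng(0)
n = 32
A = rng.standard_normal((24, n)) / np.sqrt(n)  # 24 measurements of 32 unknowns
x_true = np.sin(np.linspace(0, 3, n))          # smooth ground truth
b = A @ x_true
x_hat = map_recover(A, b, grad_smooth)
```

With a small enough step, each iteration decreases the MAP objective, so the recovered `x_hat` fits the data better than the adjoint initialization while staying smooth.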
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10981036
Roshan Kenia, Fin Amin, Benjamin W Roop, Laura J Brattain, Brian S Eastwood, Matthew G Fay, Charles R Gerfen, Jacob R Glaser, Lars A Gjesteby
Dense axon centerline detection and tracing is an essential task for understanding brain connectivity and function. Collecting large amounts of annotated 3D brain imagery to automate this process is time-consuming and costly. To make annotation tools more efficient, accurate centerline detection techniques that learn from limited annotated data are needed, especially when only incomplete annotations are available. In this work, we introduce a new topology-preserving loss function used in conjunction with a deep supervision paradigm to overcome this challenge. Using volumes with varied levels of expert annotation, we show that our training paradigm outperforms existing methods, achieving comparable performance with only 50% of the annotations, whereas the baseline requires 75% for similar results.
Title: TOPOLOGY-PRESERVING DEEP SUPERVISION FOR 3D AXON CENTERLINE SEGMENTATION USING PARTIALLY ANNOTATED DATA. | Proceedings. IEEE International Symposium on Biomedical Imaging, vol. 2025 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12143074/pdf/
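The training paradigm described, a topology term combined with deep supervision, can be illustrated schematically. The soft Dice base loss and the halving scale weights below are common conventions rather than details from the paper, and `topo_loss` is only a placeholder for the authors' topology-preserving term.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on probability maps: 1 - Dice overlap."""
    inter = np.sum(pred * target)
    return 1.0 - (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def deep_supervision_loss(side_outputs, target, topo_loss, alpha=0.5, weights=None):
    """Deep supervision: combine a base loss and a topology term over the
    decoder's side outputs, with smaller weights at coarser depths."""
    if weights is None:
        weights = [2.0 ** -i for i in range(len(side_outputs))]
    total = 0.0
    for w, out in zip(weights, side_outputs):
        total += w * (soft_dice_loss(out, target) + alpha * topo_loss(out, target))
    return total / sum(weights)
```

A perfect prediction at every supervised depth drives the combined loss to zero, while an empty prediction is penalized at full weight.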
Deep learning-based cortical surface reconstruction (CSR) methods heavily rely on pseudo ground truth (pGT) generated by conventional CSR pipelines as supervision, leading to dataset-specific challenges and lengthy training data preparation. We propose a new approach, SegCSR, for reconstructing multiple cortical surfaces using weak supervision from brain MRI ribbon segmentations. Our approach initializes a midthickness surface and then deforms it inward and outward to form the inner (white matter) and outer (pial) cortical surfaces, respectively, by jointly learning diffeomorphic flows to align the surfaces with the boundaries of the cortical ribbon segmentation maps. Specifically, a boundary surface loss drives the initialization surface to the target inner and outer boundaries, and an inter-surface normal consistency loss regularizes the pial surface in challenging deep cortical sulci. Additional regularization terms are utilized to enforce surface smoothness and topology. Evaluated on two large-scale brain MRI datasets, our weakly-supervised SegCSR achieves comparable or superior CSR accuracy and regularity to existing supervised deep learning alternatives.
Title: SegCSR: WEAKLY-SUPERVISED CORTICAL SURFACES RECONSTRUCTION FROM BRAIN RIBBON SEGMENTATIONS. | Authors: Hao Zheng, Xiaoyang Chen, Hongming Li, Tingting Chen, Peixian Liang, Yong Fan | Pub Date: 2025-04-01 | DOI: 10.1109/isbi60581.2025.10980662 | Proceedings. IEEE International Symposium on Biomedical Imaging, vol. 2025 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12243678/pdf/
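SegCSR's inward/outward deformation of a midthickness surface can be caricatured by integrating a velocity field over the surface vertices. Real diffeomorphic-flow methods typically learn stationary velocity fields and integrate them with scaling-and-squaring; the plain Euler integrator and the toy radial field here are illustrative assumptions only.

```python
import numpy as np

def integrate_flow(vertices, velocity_fn, n_steps=32, direction=1.0):
    """Deform a vertex set by Euler integration of a velocity field over
    unit time; direction=+1 flows outward (pial), -1 inward (white matter)."""
    v = vertices.copy()
    dt = 1.0 / n_steps
    for _ in range(n_steps):
        v = v + direction * dt * velocity_fn(v)
    return v

# Toy radial field v(x) = x: points on the unit sphere flow out/in radially.
theta = np.linspace(0.1, np.pi - 0.1, 50)
verts = np.stack([np.sin(theta), np.zeros_like(theta), np.cos(theta)], axis=1)
outer = integrate_flow(verts, lambda p: p, direction=+1.0)
inner = integrate_flow(verts, lambda p: p, direction=-1.0)
```

For this linear field, each Euler step scales every vertex by (1 ± 1/32), so the outer and inner surfaces land at radii (1 + 1/32)^32 ≈ e and (1 - 1/32)^32 ≈ 1/e.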
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10981147
Bohan Jiang, Andrew McNeil, Yihao Liu, Gaurav Rudravaram, Inga Saknite, Placide Mbala-Kingebeni, Olivier Tshiani Mbaya, Tyra Silaphet, Rachel Weiss, Lori E Dodd, Veronique Nussenblatt, Daniel Moyer, Bennett A Landman, Benoit M Dawant, Eric R Tkaczyk
Mpox is a viral illness with heavy cutaneous involvement. Automatic tracking of mpox lesion progression is critical for determining the resolution of evolving lesions. This work introduces a novel application of deep learning for lesion monitoring through alignment of dermatological hand photographs. By adapting the VoxelMorph framework to 2D photographic data, we explore key point alignment across serial images. We trained our neural network model on a unique dataset of 1,658 hand images and evaluated its performance on a test set of 254 images. Additionally, we validated the method's generalizability on a supplementary set of 500 images that included extensive mpox infection. Our findings indicate modest yet significant improvements in key point and lesion center registration across different regularization strengths. Although promising, the complexity of hand structure presents challenges, requiring cautious application and further refinement, especially in regions with intense spatial discontinuities, such as interdigital areas.
Title: DEEP AUTOMATIC ALIGNMENT OF MPOX DERMATOLOGICAL HAND PHOTOGRAPHY. | Proceedings. IEEE International Symposium on Biomedical Imaging, vol. 2025 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12352120/pdf/
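Adapting VoxelMorph to 2D photographs hinges on warping one image by a dense displacement field through a differentiable spatial transformer. A minimal NumPy version of that bilinear warp (just the resampling step, not the trained registration network) looks like:

```python
import numpy as np

def warp_image(image, flow):
    """Warp a 2D image by a dense displacement field flow (H, W, 2) with
    bilinear interpolation; output[y, x] samples image[y + flow_y, x + flow_x]."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sy = np.clip(ys + flow[..., 0], 0, h - 1)
    sx = np.clip(xs + flow[..., 1], 0, w - 1)
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy, wx = sy - y0, sx - x0
    top = (1 - wx) * image[y0, x0] + wx * image[y0, x1]
    bot = (1 - wx) * image[y1, x0] + wx * image[y1, x1]
    return (1 - wy) * top + wy * bot
```

A zero field is the identity warp, and a constant unit displacement in x shifts the image by one column (with edge clamping at the border).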
Pub Date: 2025-01-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10981144
Jongyeon Yoon, Mingxing Rao, Elyssa M McMaster, Chloe Cho, Nancy R Newlin, Kurt G Schilling, Bennett A Landman, Daniel Moyer
Diffusion MRI (dMRI) streamline tractography has been the gold standard for non-invasive estimation of white matter (WM) pathways in the human brain. Recent advances in deep learning have enabled the generation of streamlines from T1-weighted (T1w) MRI, a more common imaging method. The accuracy of current T1w tracking methods is limited by their recurrent architecture. In the present work, we modify a current state-of-the-art T1w tractography method (CoRNN), replacing its recurrent units and sequential representation with Transformer modules, and modifying both the representation and the prediction network for the fiber orientation distributions. We demonstrate that these changes provide substantial performance benefits over the baseline method, producing high angular consistency with the gold-standard dMRI tractogram in healthy adult humans.
Title: TRANSFORMER-BASED T1-TRACTOGRAPHY. | Proceedings. IEEE International Symposium on Biomedical Imaging, vol. 2025 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12345601/pdf/
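Whether recurrent or Transformer-based, streamline tractography propagates points by repeatedly stepping along a predicted direction. A generic, hypothetical propagation loop, where the direction predictor is a stand-in for the paper's Transformer and sees the full point history as a sequence model would:

```python
import numpy as np

def track_streamline(seed, predict_dir, step_mm=1.0, max_steps=200):
    """Propagate a streamline by stepping along predicted unit directions;
    predict_dir receives the point history so far."""
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(max_steps):
        d = predict_dir(pts)
        n = np.linalg.norm(d)
        if n < 1e-8:  # stopping criterion: no confident direction
            break
        pts.append(pts[-1] + step_mm * d / n)
    return np.stack(pts)
```

With a constant predicted direction, the loop traces a straight line of `max_steps` segments; a zero prediction terminates tracking immediately.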
Pub Date: 2024-05-27 | DOI: 10.1109/ISBI56570.2024.10635854
M Alsharid, R Yasrab, M Sarker, L Drukker, A T Papageorghiou, J A Noble
The paper explores the use of an under-utilised piece of information extractable from fetal ultrasound images: 'zoom'. We describe how zoom information can be obtained and conclude with potential use cases for it. We make the case that zoom information is meaningful and that convolutional neural networks can distinguish between the zoom levels at which images were acquired, even if the images were manipulated post-acquisition.
Title: Zoom is Meaningful: Discerning Ultrasound Images' Zoom Levels. | Proceedings. IEEE International Symposium on Biomedical Imaging, vol. 2024, pp. 1-5 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7616754/pdf/
Pub Date: 2024-05-27 | DOI: 10.1109/ISBI56570.2024.10635693
Mourad Gridach, Mohammad Alsharid, Jianbo Jiao, Lior Drukker, Aris T Papageorghiou, J Alison Noble
This paper tackles the challenging problem of self-supervised representation learning from two real-world modalities: fetal ultrasound (US) video and the corresponding speech acquired while a sonographer performs a pregnancy scan. We propose to transfer knowledge between the modalities, even though the sonographer's speech and the US video may not be semantically correlated. We design a network architecture capable of learning useful representations, such as those of anatomical features and structures, while recognising the correlation between a US video scan and the sonographer's speech. We introduce dual representation learning from US video and audio, which consists of two concepts in a latent feature space: Multi-Modal Contrastive Learning and Multi-Modal Similarity Learning. Experiments show that the proposed architecture learns powerful representations and transfers well to two downstream tasks. Furthermore, we pretrain on two different datasets that differ in size and in the length of video clips (as well as sonographer speech), showing that the quality of the sonographer's speech plays an important role in final performance.
Title: Dual Representation Learning From Fetal Ultrasound Video And Sonographer Audio. | Proceedings. IEEE International Symposium on Biomedical Imaging, vol. 2024, pp. 1-4 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7616753/pdf/
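Multi-modal contrastive learning between paired video and audio clips is commonly realized with a symmetric InfoNCE objective. The sketch below assumes that standard formulation; the paper's exact contrastive and similarity-learning losses may differ in detail.

```python
import numpy as np

def info_nce(video_emb, audio_emb, temperature=0.1):
    """Symmetric InfoNCE: matched video/audio rows are positives, every
    other pairing in the batch serves as a negative."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    logits = v @ a.T / temperature
    labels = np.arange(len(v))

    def ce(lg):
        # numerically stable cross-entropy against the diagonal labels
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    return 0.5 * (ce(logits) + ce(logits.T))
```

Correctly paired embeddings yield a much lower loss than deliberately mismatched ones, which is what drives the two encoders toward a shared latent space.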
Pub Date: 2024-05-27 | DOI: 10.1109/ISBI56570.2024.10635809
Mahrukh Saeed, Julien Quarez, Hassna Irzan, Bava Kesavan, Matthew Elliot, Oscar Maccormac, James Knight, Sebastien Ourselin, Jonathan Shapey, Alejandro Granados
Physical phantom models have been integral to surgical training, yet they lack realism and cannot replicate the presence of blood resulting from surgical actions. Existing domain transfer methods aim to enhance realism, but none facilitate blood simulation. This study investigates the overlay of blood on images acquired during endoscopic transsphenoidal pituitary surgery on phantom models. The process involves manual techniques using the GIMP image manipulation application and automated methods using Python's blend_modes module. We then approach this as an image harmonisation task to assess its practicality and feasibility. Our evaluation uses the Structural Similarity Index Measure and Laplacian metrics. The results emphasize the significance of image harmonisation and offer useful insights for the surgical field. Our work is a step towards data-driven models that can simulate blood for increased realism during surgical training on phantom models.
Title: Blood Harmonisation of Endoscopic Transsphenoidal Surgical Video Frames on Phantom Models. | Proceedings. IEEE International Symposium on Biomedical Imaging, pp. 1-4 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7616595/pdf/
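The automated overlay step relies on standard blend-mode arithmetic. The blend_modes package works on float RGBA arrays; the minimal grayscale multiply blend below shows the same idea on values in [0, 1], with an opacity that mixes the blended result back toward the base frame.

```python
import numpy as np

def multiply_blend(base, layer, opacity=1.0):
    """Multiply blend of a blood layer onto a frame (values in [0, 1]);
    opacity=0 leaves the base untouched, opacity=1 applies the full blend."""
    blended = base * layer
    return (1.0 - opacity) * base + opacity * blended
```

Multiply blending only darkens: blending against a layer of ones is the identity, so the blood layer attenuates the frame exactly where it is dark.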
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635176
Dong Liu, Zhuoyao Xin, Robin Ji, Fotis Tsitsos, Sergio Jiménez-Gambín, Elisa E Konofagou, Vincent P Ferrera, Jia Guo
Utilizing a multi-task deep learning framework, this study generated synthetic CT (sCT) images from a limited dataset of ultrashort echo time (UTE) MRI for transcranial focused ultrasound (tFUS) planning. A 3D Transformer U-Net was employed to produce sCT images that closely replicated actual CT scans, demonstrated by an average Dice coefficient of 0.868 for morphological accuracy. Acoustic simulation with sCT images showed mean absolute focal pressure differences of 8.85±7.29% for the anterior cingulate cortex, 11.81±8.63% for the precuneus, and 7.27±3.64% for the supplemental motor cortex, with focus position discrepancies within 0.9±0.5 mm. These results underscore the efficacy of UTE-MRI as a non-radiative, cost-effective alternative for tFUS planning, with significant potential for clinical application.
Title: ENHANCING TRANSCRANIAL FOCUSED ULTRASOUND TREATMENT PLANNING WITH SYNTHETIC CT FROM ULTRA-SHORT ECHO TIME (UTE) MRI: A MULTI-TASK DEEP LEARNING APPROACH. | Proceedings. IEEE International Symposium on Biomedical Imaging, vol. 2024 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11753620/pdf/
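The morphological accuracy above is reported as a Dice coefficient between binary masks (e.g., sCT versus reference CT bone maps). The metric itself is standard:

```python
import numpy as np

def dice_coefficient(a, b, eps=1e-8):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)
```

Identical masks score 1, disjoint masks score 0, so the reported 0.868 indicates a high but imperfect morphological match.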
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/ISBI56570.2024.10635852
Hong Xu, Shireen Y Elhabian
Particle-based shape modeling (PSM) is a popular approach to automatically quantify shape variability in populations of anatomies. The PSM family of methods employs optimization to automatically populate a dense set of corresponding particles (as pseudo landmarks) on 3D surfaces to enable subsequent shape analysis. A recent deep learning approach leverages implicit radial basis function representations of shapes to better adapt to the underlying complex geometry of anatomies. Here, we propose an adaptation of this method using a traditional optimization approach that allows more precise control over the desired characteristics of the models by leveraging both an eigenshape and a correspondence loss. Furthermore, the proposed approach avoids a black-box model and allows particles more freedom to navigate the underlying surfaces, yielding more informative statistical models. We demonstrate the efficacy of the proposed approach compared to state-of-the-art methods on two real datasets and justify our choice of losses empirically.
Title: OPTIMIZATION-DRIVEN STATISTICAL MODELS OF ANATOMIES USING RADIAL BASIS FUNCTION SHAPE REPRESENTATION. | Proceedings. IEEE International Symposium on Biomedical Imaging, vol. 2024 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11463973/pdf/
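The eigenshape idea rests on PCA over corresponding particle sets: a compact statistical model concentrates population variance in a few modes. A minimal sketch of that analysis step (plain PCA on flattened particle coordinates, not the authors' eigenshape loss):

```python
import numpy as np

def shape_pca(particle_sets):
    """PCA over corresponding particle sets: each shape becomes one flat
    coordinate vector; eigenvalues summarize population shape variance."""
    X = np.stack([p.reshape(-1) for p in particle_sets])  # (n_shapes, n_particles*3)
    mean = X.mean(axis=0)
    C = np.cov(X - mean, rowvar=False)
    eigvals = np.linalg.eigvalsh(C)[::-1]  # descending order
    return mean, np.clip(eigvals, 0.0, None)
```

A population that varies along a single direction yields a rank-one covariance: all variance sits in the first mode, which is exactly the behavior an eigenshape-style objective rewards.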