Pub Date: 2022-03-01 | Epub Date: 2022-04-26 | DOI: 10.1109/isbi52829.2022.9761404
Jun Luo, Shandong Wu
Federated learning (FL) enables multiple medical centers to collaboratively train a joint model while keeping their data decentralized for privacy. However, federated optimization often suffers from heterogeneity of the data distribution across medical centers. In this work, we propose Federated Learning with Shared Label Distribution (FedSLD) for classification tasks, a method that uses knowledge of the clients' label distributions to adjust the contribution of each data sample to the local objective during optimization, mitigating the instability brought by data heterogeneity. We conduct extensive experiments on four publicly available image datasets with different types of non-IID data distributions. Our results show that FedSLD achieves better convergence performance than the compared leading FL optimization algorithms, increasing the test accuracy by up to 5.50 percentage points.
Title: FEDSLD: FEDERATED LEARNING WITH SHARED LABEL DISTRIBUTION FOR MEDICAL IMAGE CLASSIFICATION. Proceedings. IEEE International Symposium on Biomedical Imaging, 2022.
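The per-sample reweighting idea can be illustrated with a minimal sketch. The exact weighting used by FedSLD may differ; `sld_weights` and `weighted_cross_entropy` are hypothetical names, and the ratio-of-distributions form below is one plausible choice, not the paper's formula:

```python
import numpy as np

def sld_weights(local_counts, global_counts):
    # Per-class weights that rescale a client's per-sample loss toward the
    # shared (global) label distribution. Hypothetical form for illustration.
    local_p = np.asarray(local_counts, float) / np.sum(local_counts)
    global_p = np.asarray(global_counts, float) / np.sum(global_counts)
    return global_p / np.maximum(local_p, 1e-12)

def weighted_cross_entropy(probs, labels, class_weights):
    # Each sample's negative log-likelihood is scaled by its class weight.
    probs = np.asarray(probs, float)
    labels = np.asarray(labels)
    w = class_weights[labels]
    nll = -np.log(probs[np.arange(len(labels)), labels])
    return float(np.sum(w * nll) / np.sum(w))

# A client whose data under-represents class 1 upweights class-1 samples:
w = sld_weights([90, 10], [50, 50])  # local split 90/10, shared split 50/50
```

With this form, a balanced client (local distribution equal to the global one) gets unit weights and recovers the plain local objective.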
Pub Date: 2022-03-01 | Epub Date: 2022-04-26 | DOI: 10.1109/isbi52829.2022.9761589
Shen Zhao, Rizwan Ahmad, Lee C Potter
In phase-contrast magnetic resonance imaging (PC-MRI), the velocity of spins at a voxel is encoded in the image phase. The strength of the velocity encoding gradient offers a trade-off between the velocity-to-noise ratio (VNR) and the extent of phase aliasing. Phase differences provide invariance to an unknown background phase. Existing literature proposes processing a reduced set of phase difference equations, simplifying the phase unwrapping problem at the expense of VNR or unaliased range of velocities, or both. Here, we demonstrate that the fullest unambiguous range of velocities is a parallelepiped, which can be accessed by jointly processing all phase differences. The joint processing also maximizes the velocity-to-noise ratio. The simple understanding of the unambiguous parallelepiped provides the potential for analyzing new multi-point acquisitions for an enhanced range of unaliased velocities; two examples are given.
Title: MAXIMIZING UNAMBIGUOUS VELOCITY RANGE IN PHASE-CONTRAST MRI WITH MULTIPOINT ENCODING. Proceedings. IEEE International Symposium on Biomedical Imaging, 2022.
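The role of the phase difference can be made concrete with the standard PC-MRI signal model (notation assumed here, not taken from the paper):

```latex
% Phase at a voxel with spin velocity v and unknown background phase \phi_0,
% for encoding m with first gradient moment M_1^{(m)}:
\phi_m = \phi_0 + \gamma M_1^{(m)} v
% The pairwise phase difference cancels the background phase:
\phi_m - \phi_n = \gamma \bigl( M_1^{(m)} - M_1^{(n)} \bigr) v
% but is only measured modulo 2\pi, so a single difference resolves v
% unambiguously only within
% |v| < \pi / \bigl( \gamma \, \lvert M_1^{(m)} - M_1^{(n)} \rvert \bigr) = v_{\mathrm{enc}},
% which is why jointly processing all differences can enlarge the range.
```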
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/ISBI48211.2021.9434018
Hanyue Zhou, Jiayu Xiao, Zhaoyang Fan, Dan Ruan
Intracranial vessel wall segmentation is critical for the quantitative assessment of intracranial atherosclerosis based on magnetic resonance vessel wall imaging. This work improves on a previous 2D deep learning segmentation network through 1) a 2.5D structure that balances network complexity and regularizes geometric continuity; 2) a UNET++ model to achieve structure adaptation; 3) an approximated Hausdorff distance (HD) loss added to the objective to enhance geometric conformality; and 4) evaluation on a commonly used morphological measure of plaque burden - the normalized wall index (NWI) - to match the clinical endpoint. The modified network achieved a Dice similarity coefficient of 0.9172 ± 0.0598 and 0.7833 ± 0.0867, HD of 0.3252 ± 0.5071 mm and 0.4914 ± 0.5743 mm, and mean surface distance of 0.0940 ± 0.0781 mm and 0.1408 ± 0.0917 mm for the lumen and vessel wall, respectively. These results compare favorably to those obtained by the original 2D UNET on all segmentation metrics. Additionally, the proposed segmentation network reduced the mean absolute error in NWI from 0.0732 ± 0.0294 to 0.0725 ± 0.0333.
Title: INTRACRANIAL VESSEL WALL SEGMENTATION FOR ATHEROSCLEROTIC PLAQUE QUANTIFICATION. Proceedings. IEEE International Symposium on Biomedical Imaging, 2021, pp. 1416-1419.
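The NWI endpoint can be computed directly from the two segmented areas. A common definition (wall area over total vessel area) is sketched below; the paper's exact convention should be checked against the original:

```python
def normalized_wall_index(lumen_area, outer_wall_area):
    # NWI = wall area / total vessel (outer-wall) area, a value in [0, 1).
    # Assumes areas from the lumen and outer-wall segmentations, with the
    # outer contour enclosing the lumen.
    if outer_wall_area <= lumen_area:
        raise ValueError("outer wall area must exceed lumen area")
    wall_area = outer_wall_area - lumen_area
    return wall_area / outer_wall_area

# Equal lumen and wall areas give NWI = 0.5:
nwi = normalized_wall_index(lumen_area=50.0, outer_wall_area=100.0)  # -> 0.5
```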
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/ISBI48211.2021.9433863
Harshita Sharma, Lior Drukker, Aris T Papageorghiou, J Alison Noble
This paper presents a novel multi-modal learning approach for automated skill characterization of obstetric ultrasound operators using heterogeneous spatio-temporal sensory cues - namely scan video, eye-tracking data, and pupillometric data - acquired in the clinical environment. We address pertinent challenges, such as combining heterogeneous, small-scale, and variable-length sequential datasets to learn deep convolutional neural networks in real-world scenarios. We propose spatial encoding for multi-modal analysis using sonography standard plane images, spatial gaze maps, gaze trajectory images, and pupillary response images. We present and compare five multi-modal learning network architectures using late, intermediate, hybrid, and tensor fusion. We build models for the Heart and the Brain scanning tasks, and performance evaluation suggests that multi-modal learning networks outperform uni-modal networks, with the best-performing model achieving accuracies of 82.4% (Brain task) and 76.4% (Heart task) for the operator skill classification problem.
Title: Multi-Modal Learning from Video, Eye Tracking, and Pupillometry for Operator Skill Characterization in Clinical Fetal Ultrasound. Proceedings. IEEE International Symposium on Biomedical Imaging, 2021, pp. 1646-1649.
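Late fusion, the simplest of the compared strategies, combines per-modality predictions only at the decision stage. A minimal sketch (not the paper's architecture; averaging class posteriors is one common late-fusion rule):

```python
import numpy as np

def late_fusion(per_modality_probs):
    # Average class-probability vectors predicted independently from each
    # modality (e.g. video, gaze map, trajectory image, pupillary response).
    probs = np.asarray(per_modality_probs, float)  # (n_modalities, n_classes)
    fused = probs.mean(axis=0)
    return fused / fused.sum()  # renormalize defensively

# Two modalities disagreeing on a 2-class skill label:
fused = late_fusion([[0.9, 0.1], [0.4, 0.6]])  # -> [0.65, 0.35]
```

Intermediate and hybrid fusion instead merge learned feature maps before the classifier, which is where the architectures compared in the paper differ.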
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9433852
Xiaofeng Liu, Fangxu Xing, Jerry L Prince, Aaron Carass, Maureen Stone, Georges El Fakhri, Jonghye Woo
Tagged magnetic resonance imaging (MRI) is a widely used imaging technique for measuring tissue deformation in moving organs. Due to tagged MRI's intrinsic low anatomical resolution, another matching set of cine MRI with higher resolution is sometimes acquired in the same scanning session to facilitate tissue segmentation, thus adding extra time and cost. To mitigate this, in this work, we propose a novel dual-cycle constrained bijective VAE-GAN approach to carry out tagged-to-cine MR image synthesis. Our method is based on a variational autoencoder backbone with cycle reconstruction constrained adversarial training to yield accurate and realistic cine MR images given tagged MR images. Our framework has been trained, validated, and tested using 1,768, 416, and 1,560 subject-independent paired slices of tagged and cine MRI from twenty healthy subjects, respectively, demonstrating superior performance over the comparison methods. Our method can potentially be used to reduce the extra acquisition time and cost, while maintaining the same workflow for further motion analyses.
Title: DUAL-CYCLE CONSTRAINED BIJECTIVE VAE-GAN FOR TAGGED-TO-CINE MAGNETIC RESONANCE IMAGE SYNTHESIS. Proceedings. IEEE International Symposium on Biomedical Imaging, 2021.
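The cycle-reconstruction constraint amounts to penalizing the round trip tagged → cine → tagged (and vice versa). A toy numpy sketch with stand-in generator functions; the actual method uses trained VAE-GAN generators and a fuller objective:

```python
import numpy as np

def cycle_loss(x_tagged, g_t2c, g_c2t):
    # L1 penalty on the tagged -> cine -> tagged round trip; the symmetric
    # cine -> tagged -> cine term would be added analogously.
    return float(np.mean(np.abs(g_c2t(g_t2c(x_tagged)) - x_tagged)))

# With exactly inverse stand-in generators the cycle loss vanishes:
f = lambda x: x + 1.0        # stand-in tagged-to-cine generator
f_inv = lambda x: x - 1.0    # stand-in cine-to-tagged generator
loss = cycle_loss(np.zeros((4, 4)), f, f_inv)  # -> 0.0
```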
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9433892
Alireza Mehrtash, Tina Kapur, Clare M Tempany, Purang Abolmaesumi, William M Wells
Prostate cancer is the second most prevalent cancer in men worldwide. Deep neural networks have been successfully applied for prostate cancer diagnosis in magnetic resonance images (MRI). Pathology results from biopsy procedures are often used as ground truth to train such systems. There are several sources of noise in creating ground truth from biopsy data including sampling and registration errors. We propose: 1) A fully convolutional neural network (FCN) to produce cancer probability maps across the whole prostate gland in MRI; 2) A Gaussian weighted loss function to train the FCN with sparse biopsy locations; 3) A probabilistic framework to model biopsy location uncertainty and adjust cancer probability given the deep model predictions. We assess the proposed method on 325 biopsy locations from 203 patients. We observe that the proposed loss improves the area under the receiver operating characteristic curve and the biopsy location adjustment improves the sensitivity of the models.
Title: PROSTATE CANCER DIAGNOSIS WITH SPARSE BIOPSY DATA AND IN PRESENCE OF LOCATION UNCERTAINTY. Proceedings. IEEE International Symposium on Biomedical Imaging, 2021, pp. 443-447.
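One way to realize a Gaussian-weighted loss around sparse biopsy sites is to weight each voxel's loss term by a Gaussian of its distance to the nearest biopsy location, so supervision is concentrated where pathology labels actually exist. This is a hedged sketch of that idea; the function name and the choice of σ are assumptions, not the paper's values:

```python
import numpy as np

def gaussian_weights(coords, biopsy_sites, sigma=5.0):
    # Per-voxel weight = Gaussian of distance to the nearest biopsy site.
    coords = np.asarray(coords, float)       # (n_voxels, 3)
    sites = np.asarray(biopsy_sites, float)  # (n_sites, 3)
    d2 = ((coords[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2.min(axis=1) / (2.0 * sigma ** 2))

# A voxel at a biopsy site gets full weight; distant voxels approach zero:
w = gaussian_weights([[0, 0, 0], [0, 0, 50]], [[0, 0, 0]])
```

The resulting weights would multiply the per-voxel cross-entropy terms when training the FCN.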
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9433755
Shruti Gadewar, Alyssa H Zhu, Sophia I Thomopoulos, Zhuocheng Li, Iyad Ba Gari, Piyush Maiti, Paul M Thompson, Neda Jahanshad
Quality control (QC) is a vital step for all scientific data analyses and is critically important in the biomedical sciences. Image segmentation is a common task in medical image analysis, and automated tools to segment many regions from human brain MRIs are now well established. However, these methods do not always give anatomically correct labels. Traditional methods for QC tend to reject statistical outliers, which may not necessarily be inaccurate. Here, we make use of a large database of over 12,000 brain images that contain 68 parcellations of the human cortex, each of which was assessed for anatomical accuracy by a human rater. We trained three machine learning models to determine whether a region was anatomically accurate ('pass' or 'fail') and tested their performance on an independent dataset. We found good performance for the majority of labeled regions. This work will facilitate more anatomically accurate large-scale multi-site research.
Title: REGION SPECIFIC AUTOMATIC QUALITY ASSURANCE FOR MRI-DERIVED CORTICAL SEGMENTATIONS. Proceedings. IEEE International Symposium on Biomedical Imaging, 2021, pp. 1288-1291.
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/ISBI48211.2021.9433975
Ho Hin Lee, Yucheng Tang, Shunxing Bao, Richard G Abramson, Yuankai Huo, Bennett A Landman
Coarse-to-fine abdominal multi-organ segmentation facilitates extraction of high-resolution segmentations while minimizing the loss of spatial contextual information. However, current coarse-to-fine approaches require a large number of models, one per organ. We propose a coarse-to-fine pipeline, RAP-Net, which first extracts the global prior context of multiple organs from 3D volumes using a low-resolution coarse network, followed by a fine stage that uses a single refined model to segment all abdominal organs instead of one model per organ. We combine the anatomical prior with corresponding extracted patches to preserve anatomical locations and boundary information, enabling high-resolution segmentation across all organs in a single model. To train and evaluate our method, we use a clinical research cohort of 100 patient volumes with 13 well-annotated organs. We tested our algorithm with 4-fold cross-validation and computed Dice scores to evaluate segmentation performance on the 13 organs. Our proposed method using a single auto-context outperforms the state-of-the-art 13-model approach with an average Dice score of 84.58% versus 81.69% (p<0.0001).
Title: RAP-NET: COARSE-TO-FINE MULTI-ORGAN SEGMENTATION WITH SINGLE RANDOM ANATOMICAL PRIOR. Proceedings. IEEE International Symposium on Biomedical Imaging, 2021, pp. 1491-1494.
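The coarse-to-fine handoff boils down to cropping a high-resolution patch around each organ located by the low-resolution pass, then feeding that patch (plus the prior) to the fine network. A minimal bounding-box sketch of the cropping step; the function name and margin are assumptions:

```python
import numpy as np

def crop_around_coarse_mask(volume, coarse_mask, margin=2):
    # Bounding box of the coarse organ mask, padded by a margin, defines the
    # high-resolution patch passed to the single fine-stage network.
    idx = np.argwhere(coarse_mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, volume.shape)
    slices = tuple(slice(a, b) for a, b in zip(lo, hi))
    return volume[slices], slices

# Coarse mask occupying a 4x4x4 cube yields an 8x8x8 patch with margin 2:
vol = np.zeros((32, 32, 32))
mask = np.zeros_like(vol, dtype=bool)
mask[10:14, 10:14, 10:14] = True
patch, _ = crop_around_coarse_mask(vol, mask)
```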
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/ISBI48211.2021.9434034
Xingchen Zhao, Anthony Sicilia, Davneet S Minhas, Erin E O'Connor, Howard J Aizenstein, William E Klunk, Dana L Tudorascu, Seong Jae Hwang
Typical machine learning frameworks rely heavily on the underlying assumption that training and test data follow the same distribution. In medical imaging, where datasets are increasingly acquired from multiple sites or scanners, this identical-distribution assumption often fails to hold due to systematic variability induced by site- or scanner-dependent factors. Therefore, we cannot simply expect a model trained on a given dataset to consistently work well, or generalize, on a dataset from another distribution. In this work, we address this problem by investigating the application of machine learning models to unseen medical imaging data. Specifically, we consider the challenging case of Domain Generalization (DG), where we train a model without any knowledge of the testing distribution: we train on samples from a set of distributions (sources) and test on samples from a new, unseen distribution (target). We focus on the task of white matter hyperintensity (WMH) prediction using the multi-site WMH Segmentation Challenge dataset and our local in-house dataset. We identify how two mechanically distinct DG approaches, namely domain adversarial learning and mix-up, have theoretical synergy. Then, we show drastic improvements in WMH prediction on an unseen target domain.
Title: ROBUST WHITE MATTER HYPERINTENSITY SEGMENTATION ON UNSEEN DOMAIN. Proceedings. IEEE International Symposium on Biomedical Imaging, 2021, pp. 1047-1051.
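Mix-up, one of the two DG mechanisms combined here, trains on convex combinations of sample pairs and their labels. Its standard form, sketched:

```python
import numpy as np

def mixup(x1, y1, x2, y2, lam):
    # Convex combination of two samples and their (one-hot) labels.
    x = lam * np.asarray(x1, float) + (1 - lam) * np.asarray(x2, float)
    y = lam * np.asarray(y1, float) + (1 - lam) * np.asarray(y2, float)
    return x, y

# The mixing coefficient is usually drawn from a Beta(alpha, alpha) prior:
rng = np.random.default_rng(0)
lam_sampled = rng.beta(0.2, 0.2)
x, y = mixup([0.0, 0.0], [1, 0], [1.0, 1.0], [0, 1], lam=0.5)
```

Domain adversarial learning, the other mechanism, instead adds a domain classifier whose gradient is reversed so the features become domain-invariant; the paper's contribution is showing the two have theoretical synergy.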
Pub Date: 2021-04-01 | Epub Date: 2021-05-25 | DOI: 10.1109/isbi48211.2021.9433958
Hamid Behjat, Iman Aganj, David Abramian, Anders Eklund, Carl-Fredrik Westin
In this work, we leverage the Laplacian eigenbasis of voxel-wise white matter (WM) graphs derived from diffusion-weighted MRI data, dubbed WM harmonics, to characterize the spatial structure of WM fMRI data. Our motivation for such a characterization is based on studies showing that WM fMRI data exhibit a spatial correlational anisotropy that coincides with underlying fiber patterns. By quantifying the energy content of WM fMRI data associated with subsets of WM harmonics across multiple spectral bands, we show that the data exhibit subtle but notable spatial modulations under functional load that are not manifested during rest. WM harmonics provide a novel means to study the spatial dynamics of WM fMRI data, in such a way that the analysis is informed by the underlying anatomical structure.
Title: CHARACTERIZATION OF SPATIAL DYNAMICS OF FMRI DATA IN WHITE MATTER USING DIFFUSION-INFORMED WHITE MATTER HARMONICS. Proceedings. IEEE International Symposium on Biomedical Imaging, 2021, pp. 1586-1590.
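The band-wise energy measure can be sketched on a toy graph: eigendecompose the graph Laplacian, project the signal onto the eigenbasis (the "harmonics"), and sum squared coefficients per spectral band. By Parseval's relation the band energies total the signal energy. The graph below is a stand-in path graph, not a diffusion-derived WM graph:

```python
import numpy as np

def band_energies(adjacency, signal, n_bands):
    # Laplacian eigenbasis of the (voxel) graph plays the role of harmonics.
    A = np.asarray(adjacency, float)
    L = np.diag(A.sum(axis=1)) - A               # combinatorial Laplacian
    _, U = np.linalg.eigh(L)                     # columns: low to high freq.
    coeffs = U.T @ np.asarray(signal, float)     # graph Fourier transform
    bands = np.array_split(coeffs ** 2, n_bands) # contiguous spectral bands
    return np.array([b.sum() for b in bands])

# 4-node path graph; energies across 2 bands sum to ||signal||^2 = 4:
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
e = band_energies(A, [1.0, -1.0, 1.0, -1.0], 2)
```

An oscillatory signal like the one above concentrates its energy in the high-frequency band, which is the kind of band-wise contrast the paper quantifies between task and rest.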