
The journal of machine learning for biomedical imaging — Latest Publications

Learning the Effect of Registration Hyperparameters with HyperMorph
Pub Date : 2022-03-01 DOI: 10.59275/j.melba.2022-74f1
Andrew Hoopes, Malte Hoffmann, D. Greve, B. Fischl, J. Guttag, Adrian V. Dalca
We introduce HyperMorph, a framework that facilitates efficient hyperparameter tuning in learning-based deformable image registration. Classical registration algorithms perform an iterative pair-wise optimization to compute a deformation field that aligns two images. Recent learning-based approaches leverage large image datasets to learn a function that rapidly estimates a deformation for a given image pair. In both strategies, the accuracy of the resulting spatial correspondences is strongly influenced by the choice of certain hyperparameter values. However, an effective hyperparameter search consumes substantial time and human effort as it often involves training multiple models for different fixed hyperparameter values and may lead to suboptimal registration. We propose an amortized hyperparameter learning strategy to alleviate this burden by learning the impact of hyperparameters on deformation fields. We design a meta network, or hypernetwork, that predicts the parameters of a registration network for input hyperparameters, thereby comprising a single model that generates the optimal deformation field corresponding to given hyperparameter values. This strategy enables fast, high-resolution hyperparameter search at test-time, reducing the inefficiency of traditional approaches while increasing flexibility. We also demonstrate additional benefits of HyperMorph, including enhanced robustness to model initialization and the ability to rapidly identify optimal hyperparameter values specific to a dataset, image contrast, task, or even anatomical region, all without the need to retrain models. We make our code publicly available at http://hypermorph.voxelmorph.net.
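The hypernetwork idea described above can be sketched in a few lines: a small network maps a hyperparameter value (e.g. a regularization weight) to the weights of a registration network, so a test-time hyperparameter sweep reduces to cheap forward passes rather than retraining. All names and sizes below are illustrative toys, not the authors' implementation (which conditions a full U-Net).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: the "registration network" here is a single linear layer with
# 8 weights; HyperMorph itself generates the weights of a full U-Net.
HYPER_HIDDEN, REG_WEIGHTS = 16, 8
W1 = rng.normal(size=(HYPER_HIDDEN, 1))
W2 = rng.normal(size=(REG_WEIGHTS, HYPER_HIDDEN))

def hypernetwork(lam):
    # Map a hyperparameter value to registration-network parameters.
    h = np.tanh(W1 @ np.array([[lam]]))
    return (W2 @ h).ravel()

def registration_net(pair_features, theta):
    # Stand-in for the registration network: a linear map whose weights
    # are produced by the hypernetwork rather than learned directly.
    return pair_features @ theta

# Test-time hyperparameter search: sweeping lambda requires only forward
# passes through the hypernetwork, never retraining.
pair = rng.normal(size=REG_WEIGHTS)
fields = [registration_net(pair, hypernetwork(lam)) for lam in (0.1, 0.5, 0.9)]
```

Because a single model covers the whole hyperparameter range, each candidate value costs one forward pass, which is what enables the fast, high-resolution search the abstract describes.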
Citations: 9
Deep Quantile Regression for Uncertainty Estimation in Unsupervised and Supervised Lesion Detection.
Haleh Akrami, Anand A Joshi, Sergül Aydöre, Richard M Leahy

Despite impressive state-of-the-art performance on a wide variety of machine learning tasks in multiple applications, deep learning methods can produce over-confident predictions, particularly with limited training data. Therefore, quantifying uncertainty is particularly important in critical applications such as lesion detection and clinical diagnosis, where a realistic assessment of uncertainty is essential in determining surgical margins, disease status and appropriate treatment. In this work, we propose a novel approach that uses quantile regression for quantifying aleatoric uncertainty in both supervised and unsupervised lesion detection problems. The resulting confidence intervals can be used for lesion detection and segmentation. In the unsupervised setting, we combine quantile regression with the Variational AutoEncoder (VAE). The VAE is trained on lesion-free data, so when presented with an image with a lesion, it tends to reconstruct a lesion-free version of the image. To detect the lesion, we then compare the input (lesion) and output (lesion-free) images. Here we address the problem of quantifying uncertainty in the images that are reconstructed by the VAE as the basis for principled outlier or lesion detection. The VAE models the output as a conditionally independent Gaussian characterized by its mean and variance. Unfortunately, joint optimization of both mean and variance in the VAE leads to the well-known problem of shrinkage or underestimation of variance. Here we describe an alternative Quantile-Regression VAE (QR-VAE) that avoids this variance shrinkage problem by directly estimating conditional quantiles for the input image. Using the estimated quantiles, we compute the conditional mean and variance for the input image from which we then detect outliers by thresholding at a false-discovery-rate corrected p-value. In the supervised setting, we develop binary quantile regression (BQR) for the supervised lesion segmentation task. We show how BQR can be used to capture uncertainty in lesion boundaries in a manner that characterizes expert disagreement.
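The quantile-regression machinery the abstract describes rests on two pieces: the pinball loss, whose minimizer is the requested conditional quantile, and the recovery of a Gaussian mean and variance from two symmetric quantiles. A minimal numpy sketch of both (illustrative only, not the QR-VAE code):

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    # Quantile ("pinball") loss: under-prediction is weighted by tau and
    # over-prediction by (1 - tau), so its minimizer is the tau-quantile.
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

def gaussian_from_quantiles(q_lo, q_hi):
    # Assuming a Gaussian output, the ~15.87% and ~84.13% quantiles sit at
    # mu - sigma and mu + sigma, so mean and std follow directly -- without
    # the joint mean/variance optimization that causes variance shrinkage.
    mu = 0.5 * (q_lo + q_hi)
    sigma = 0.5 * (q_hi - q_lo)
    return mu, sigma

# Example: the mu +/- sigma quantiles of N(2, 3) are -1 and 5.
mu, sigma = gaussian_from_quantiles(-1.0, 5.0)
```

Training two decoder heads with `pinball_loss` at those two quantile levels, then converting the outputs with `gaussian_from_quantiles`, is the essence of the QR-VAE construction.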

Citations: 0
QU-BraTS: MICCAI BraTS 2020 Challenge on Quantifying Uncertainty in Brain Tumor Segmentation - Analysis of Ranking Metrics and Benchmarking Results
Pub Date : 2021-12-19 DOI: 10.59275/j.melba.2022-354b
Raghav Mehta, Angelos Filos, Ujjwal Baid, C. Sako, Richard McKinley, M. Rebsamen, K. Datwyler, Raphael Meier, P. Radojewski, G. Murugesan, S. Nalawade, Chandan Ganesh, B. Wagner, F. Yu, B. Fei, A. Madhuranthakam, J. Maldjian, L. Daza, Catalina G'omez, P. Arbel'aez, Chengliang Dai, Shuo Wang, Hadrien Raynaud, Yuanhan Mo, E. Angelini, Yike Guo, Wenjia Bai, Subhashis Banerjee, L. Pei, A. Murat, Sarahi Rosas-Gonz'alez, Illyess Zemmoura, C. Tauber, Minh H. Vu, T. Nyholm, T. Lofstedt, Laura Mora Ballestar, Verónica Vilaplana, Hugh McHugh, G. M. Talou, Alan Wang, J. Patel, Ken Chang, K. Hoebel, M. Gidwani, N. Arun, Sharut Gupta, M. Aggarwal, Praveer Singh, E. Gerstner, Jayashree Kalpathy-Cramer, Nicolas Boutry, Alexis Huard, L. Vidyaratne, Md Monibor Rahman, K. Iftekharuddin, J. Chazalon, É. Puybareau, G. Tochon, Jun Ma, M. Cabezas, X. Lladó, A. Oliver, Liliana Valencia, S. Valverde, Mehdi Amian, M. Soltaninejad, A. Myronenko, Ali Hatamizadeh, Xuejing Feng, Q. Dou, N. Tustison, Craig Meyer, Nisarg A. Shah, S. Ta
Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Several uncertainty estimation methods have recently been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019 and BraTS 2020 task on uncertainty quantification (QU-BraTS) and designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and those that assign low confidence levels at incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent participating teams of QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analyses. Finally, in favor of transparency and reproducibility, our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS.
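One ingredient of scoring uncertainty estimates in this way is evaluating segmentation quality after filtering out the most uncertain voxels: a useful uncertainty measure should raise Dice as the confidence threshold tightens, without discarding many correct predictions. A simplified sketch of that filtering idea (not the exact QU-BraTS scoring code):

```python
import numpy as np

def filtered_dice(pred, truth, uncertainty, tau):
    # Dice computed only over voxels whose uncertainty is at most tau;
    # sweeping tau traces how segmentation quality changes as the most
    # uncertain voxels are handed off for clinical review.
    keep = uncertainty <= tau
    p, t = pred[keep].astype(bool), truth[keep].astype(bool)
    inter = np.logical_and(p, t).sum()
    denom = p.sum() + t.sum()
    return 1.0 if denom == 0 else 2.0 * inter / denom

pred  = np.array([1, 1, 0, 0])
truth = np.array([1, 0, 0, 1])
unc   = np.array([0.1, 0.9, 0.2, 0.8])
# Filtering out the two most-uncertain voxels leaves only correct ones.
scores = [filtered_dice(pred, truth, unc, tau) for tau in (0.5, 1.0)]
```

Here the errors carry high uncertainty, so Dice improves from 0.5 (no filtering) to 1.0 once they are filtered out; the published score additionally penalizes filtering away correct, confident voxels.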
Citations: 21
Deep Quantile Regression for Uncertainty Estimation in Unsupervised and Supervised Lesion Detection
Pub Date : 2021-09-20 DOI: 10.59275/j.melba.2022-6751
H. Akrami, Anand A. Joshi, Sergül Aydöre, R. Leahy
Despite impressive state-of-the-art performance on a wide variety of machine learning tasks in multiple applications, deep learning methods can produce over-confident predictions, particularly with limited training data. Therefore, quantifying uncertainty is particularly important in critical applications such as lesion detection and clinical diagnosis, where a realistic assessment of uncertainty is essential in determining surgical margins, disease status and appropriate treatment. In this work, we propose a novel approach that uses quantile regression for quantifying aleatoric uncertainty in both supervised and unsupervised lesion detection problems. The resulting confidence intervals can be used for lesion detection and segmentation. In the unsupervised setting, we combine quantile regression with the Variational AutoEncoder (VAE). The VAE is trained on lesion-free data, so when presented with an image with a lesion, it tends to reconstruct a lesion-free version of the image. To detect the lesion, we then compare the input (lesion) and output (lesion-free) images. Here we address the problem of quantifying uncertainty in the images that are reconstructed by the VAE as the basis for principled outlier or lesion detection. The VAE models the output as a conditionally independent Gaussian characterized by its mean and variance. Unfortunately, joint optimization of both mean and variance in the VAE leads to the well-known problem of shrinkage or underestimation of variance. Here we describe an alternative Quantile-Regression VAE (QR-VAE) that avoids this variance shrinkage problem by directly estimating conditional quantiles for the input image. Using the estimated quantiles, we compute the conditional mean and variance for the input image from which we then detect outliers by thresholding at a false-discovery-rate corrected p-value. In the supervised setting, we develop binary quantile regression (BQR) for the supervised lesion segmentation task. We show how BQR can be used to capture uncertainty in lesion boundaries in a manner that characterizes expert disagreement.
Citations: 3
A review and experimental evaluation of deep learning methods for MRI reconstruction
Pub Date : 2021-09-17 DOI: 10.59275/j.melba.2022-3g12
Arghya Pal, Y. Rathi
Following the success of deep learning in a wide range of applications, neural network-based machine-learning techniques have received significant interest for accelerating magnetic resonance imaging (MRI) acquisition and reconstruction strategies. A number of ideas inspired by deep learning techniques for computer vision and image processing have been successfully applied to nonlinear image reconstruction in the spirit of compressed sensing for accelerated MRI. Given the rapidly growing nature of the field, it is imperative to consolidate and summarize the large number of deep learning methods that have been reported in the literature, to obtain a better understanding of the field in general. This article provides an overview of the recent developments in neural-network based approaches that have been proposed specifically for improving parallel imaging. A general background and introduction to parallel MRI is also given from a classical view of k-space based reconstruction methods. Image domain based techniques that introduce improved regularizers are covered along with k-space based methods which focus on better interpolation strategies using neural networks. While the field is rapidly evolving with plenty of papers published each year, in this review, we attempt to cover broad categories of methods that have shown good performance on publicly available data sets. Limitations and open problems are also discussed and recent efforts for producing open data sets and benchmarks for the community are examined.
Citations: 18
Patch-based Medical Image Segmentation using Matrix Product State Tensor Networks
Pub Date : 2021-09-15 DOI: 10.59275/j.melba.2022-d1f5
Raghavendra Selvan, E. Dam, Soren Alexander Flensborg, Jens Petersen
Tensor networks are efficient factorisations of high-dimensional tensors into networks of lower-order tensors. They have been most commonly used to model entanglement in quantum many-body systems and, more recently, are seeing increased application in supervised machine learning. In this work, we formulate image segmentation in a supervised setting with tensor networks. The key idea is to first lift the pixels in image patches to exponentially high-dimensional feature spaces and then use a linear decision hyperplane to classify the input pixels into foreground and background classes. The high-dimensional linear model itself is approximated using the matrix product state (MPS) tensor network. The MPS is weight-shared between the non-overlapping image patches, resulting in our strided tensor network model. The performance of the proposed model is evaluated on three 2D and one 3D biomedical imaging datasets and compared with relevant baseline methods. In the 2D experiments, the tensor network model yields competitive performance compared to the baseline methods while being more resource efficient.
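The "lifting" step the abstract describes can be made concrete: each pixel is mapped through a small local feature map, and the patch feature is the tensor (Kronecker) product of these maps, giving a 2^N-dimensional vector for an N-pixel patch that the MPS then approximates without ever materializing. A sketch using a common sine/cosine local map (the paper's exact map may differ):

```python
import numpy as np

def local_feature_map(x):
    # Lift a pixel intensity in [0, 1] to a unit-norm 2D feature vector;
    # a standard choice in tensor-network ML, assumed here for illustration.
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def lift_patch(pixels):
    # Tensor (Kronecker) product of the local maps: an N-pixel patch
    # becomes a 2**N-dimensional feature vector. An MPS classifier
    # contracts against this vector without forming it explicitly.
    feat = np.array([1.0])
    for p in pixels:
        feat = np.kron(feat, local_feature_map(p))
    return feat

feat = lift_patch([0.0, 1.0, 0.5])  # 3 pixels -> 2**3 = 8 dimensions
```

The exponential dimension is exactly why the linear decision hyperplane must be represented implicitly by an MPS: the lifted vector is too large to store for realistic patch sizes.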
Citations: 1
Quantifying Topology In Pancreatic Tubular Networks From Live Imaging 3D Microscopy
Pub Date : 2021-05-20 DOI: 10.59275/j.melba.2022-4bf2
K. Arnavaz, Oswin Krause, Kilian Zepf, J. A. Bærentzen, Jelena M. Krivokapic, Silja Heilmann, P. Nyeng, Aasa Feragen
Motivated by the challenging segmentation task of pancreatic tubular networks, this paper tackles two commonly encountered problems in biomedical imaging: topological consistency of the segmentation, and expensive or difficult annotation. Our contributions are the following: a) we propose a topological score which measures both topological and geometric consistency between the predicted and ground-truth segmentations, applied to model selection and validation; b) we provide a full deep-learning methodology for this difficult, noisy task on time-series image data. In our method, we first use a semi-supervised U-net architecture, applicable to generic segmentation tasks, which jointly trains an autoencoder and a segmentation network. We then use tracking of loops over time to further improve the predicted topology. This semi-supervised approach allows us to utilize unannotated data to learn feature representations that generalize to test data with high variability, despite our annotated training data having very limited variation. Our contributions are validated on a challenging segmentation task: locating tubular structures in the fetal pancreas from noisy live-imaging confocal microscopy. We show that our semi-supervised model outperforms not only fully supervised and pre-trained models but also an approach which takes topological consistency into account during training. Further, our approach achieves a mean loop score of 0.808 for detecting loops in the fetal pancreas, compared to a U-net trained with clDice with a mean loop score of 0.762.
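The loop counting that underlies such a topological score can be sketched for a 2D binary mask via the Euler characteristic of the pixel grid, b1 = b0 − χ. This is a generic illustration using 4-connectivity, not the paper's actual score:

```python
import numpy as np
from collections import deque

def euler_characteristic(mask):
    # χ = V - E + F for the cubical complex of a 4-connected binary mask:
    # V = foreground pixels, E = adjacent pixel pairs, F = filled 2x2 blocks.
    m = mask.astype(bool)
    V = int(m.sum())
    E = int((m[:, :-1] & m[:, 1:]).sum() + (m[:-1, :] & m[1:, :]).sum())
    F = int((m[:-1, :-1] & m[:-1, 1:] & m[1:, :-1] & m[1:, 1:]).sum())
    return V - E + F

def count_components(mask):
    # b0: 4-connected components via BFS flood fill.
    m = mask.astype(bool)
    seen = np.zeros_like(m)
    H, W = m.shape
    n = 0
    for i in range(H):
        for j in range(W):
            if m[i, j] and not seen[i, j]:
                n += 1
                seen[i, j] = True
                q = deque([(i, j)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and m[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return n

def count_loops(mask):
    # b1 = b0 - χ: number of independent loops in the segmentation.
    return count_components(mask) - euler_characteristic(mask)

# A hollow 5x5 square has one component and one loop.
ring = np.ones((5, 5), dtype=int)
ring[1:4, 1:4] = 0
```

For the hollow square, `count_components(ring)` is 1 and `count_loops(ring)` is 1, while a solid square has no loops.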
Citations: 1
Semi-Supervised Federated Peer Learning for Skin Lesion Classification
Pub Date : 2021-03-05 DOI: 10.59275/j.melba.2022-8g82
T. Bdair, N. Navab, Shadi Albarqouni
Globally, skin carcinoma is among the most lethal diseases, and millions of people are diagnosed with this cancer every year. Still, early detection can substantially decrease medication costs and the mortality rate. Recent improvements in automated cancer classification using deep learning methods have reached human-level performance, but they require a large amount of annotated data assembled in one location, conditions that are usually not feasible to meet. Recently, federated learning (FL) has been proposed to train decentralized models in a privacy-preserving fashion, but it depends on labeled data at the client side, which is usually unavailable and costly. To address this, we propose FedPerl, a semi-supervised federated learning method. Our method is inspired by peer learning from educational psychology and ensemble averaging from committee machines. FedPerl builds communities based on clients' similarities, then encourages community members to learn from each other to generate more accurate pseudo labels for the unlabeled data. We also propose the peer anonymization (PA) technique to anonymize clients. As a core component of our method, PA is orthogonal to other methods, adds no complexity, and reduces the communication cost while enhancing performance. Finally, we propose a dynamic peer learning policy that controls the learning stream to avoid any degradation in performance, especially for individual clients. Our experimental setup consists of 71,000 skin lesion images collected from 5 publicly available datasets. We test our method in four different scenarios in SSFL. With few annotated data, FedPerl is on par with a state-of-the-art method in skin lesion classification in the standard setup while outperforming the SSFL methods and the baselines by 1.8% and 15.8%, respectively. Also, it generalizes better to an unseen client while being less sensitive to noisy ones.
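As a hedged sketch of the pseudo-labeling step described above: averaging peer predictions hides individual client outputs and yields ensemble pseudo labels. The threshold, shapes, and prediction-level averaging here are illustrative assumptions, not the paper's exact PA mechanism:

```python
import numpy as np

def anonymize_peers(peer_probs):
    # Average the peers' class-probability predictions so that no single
    # client's model output is exposed (prediction-level anonymization).
    return np.mean(np.asarray(peer_probs), axis=0)

def pseudo_labels(peer_probs, threshold=0.9):
    # Keep only confidently agreed-upon averaged predictions as pseudo labels
    # for the unlabeled data; low-confidence samples are masked out.
    avg = anonymize_peers(peer_probs)
    confidence = avg.max(axis=1)
    labels = avg.argmax(axis=1)
    mask = confidence >= threshold
    return labels[mask], mask

# Three peers' softmax outputs for four unlabeled samples (binary task).
p1 = [[0.95, 0.05], [0.60, 0.40], [0.99, 0.01], [0.20, 0.80]]
p2 = [[0.97, 0.03], [0.55, 0.45], [0.98, 0.02], [0.10, 0.90]]
p3 = [[0.93, 0.07], [0.50, 0.50], [0.97, 0.03], [0.15, 0.85]]
labels, mask = pseudo_labels([p1, p2, p3])
```

Only the first and third samples clear the confidence threshold, so only they receive pseudo labels.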
Citations: 8
Adversarial Robust Training of Deep Learning MRI Reconstruction Models
Pub Date : 2020-10-30 DOI: 10.59275/j.melba.2021-df47
Francesco Calivá, Kaiyang Cheng, Rutwik Shah, V. Pedoia
Deep Learning (DL) has shown potential in accelerating Magnetic Resonance Image acquisition and reconstruction. Nevertheless, there is a dearth of tailored methods to guarantee that small features are reconstructed with high fidelity. In this work, we employ adversarial attacks to generate small synthetic perturbations, which are difficult for a trained DL reconstruction network to reconstruct. Then, we use robust training to increase the network's sensitivity to these small features and encourage their reconstruction. Next, we investigate the generalization of this approach to real-world features. For this, a musculoskeletal radiologist annotated a set of cartilage and meniscal lesions from the knee Fast-MRI dataset, and a classification network was devised to assess the reconstruction of the features. Experimental results show that introducing robust training to a reconstruction network reduces the rate of false-negative features (4.8%) in image reconstruction. These results are encouraging and highlight the need for attention to this problem from the image reconstruction community, as a milestone for the introduction of DL reconstruction in clinical practice. To support further research, we make our annotations and code publicly available at https://github.com/fcaliva/fastMRI_BB_abnormalities_annotation.
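A minimal sketch of the adversarial-perturbation idea, using a toy linear "reconstruction network" so the gradient is analytic (the paper attacks a trained deep network; the near-identity W, eps, and the FGSM-style sign step below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def fgsm_perturbation(W, x, eps=0.01):
    # One FGSM-style step on a toy linear "reconstructor" r(x) = W @ x:
    # increase the reconstruction error ||W @ (x + d) - x||^2 over small d
    # by stepping in the sign of the analytic gradient at d = 0.
    residual = W @ x - x
    grad = 2.0 * W.T @ residual
    return eps * np.sign(grad)

n = 8
W = np.eye(n) + 0.05 * rng.normal(size=(n, n))  # near-identity reconstructor
x = rng.normal(size=n)
delta = fgsm_perturbation(W, x)
x_adv = x + delta  # robust training would add such perturbed inputs to the training set
```

The perturbation is bounded element-wise by eps, mirroring the "small synthetic perturbations" described in the abstract.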
Citations: 7
Nested Grassmannians for Dimensionality Reduction with Applications
Pub Date : 2020-10-27 DOI: 10.59275/j.melba.2022-234f
Chun-Hao Yang, B. Vemuri
In the recent past, nested structures in Riemannian manifolds have been studied in the context of dimensionality reduction as an alternative to the popular principal geodesic analysis (PGA) technique, for example, the principal nested spheres. In this paper, we propose a novel framework for constructing a nested sequence of homogeneous Riemannian manifolds. Common examples of homogeneous Riemannian manifolds include the n-sphere, the Stiefel manifold, the Grassmann manifold, and many others. In particular, we focus on applying the proposed framework to the Grassmann manifold, giving rise to the nested Grassmannians (NG). An important application in which Grassmann manifolds are encountered is planar shape analysis: each planar (2D) shape can be represented as a point in the complex projective space, which is a complex Grassmann manifold. Some salient features of our framework are: (i) it explicitly exploits the geometry of the homogeneous Riemannian manifolds, and (ii) the nested lower-dimensional submanifolds need not be geodesic. With the proposed NG structure, we develop algorithms for the supervised and unsupervised dimensionality reduction problems, respectively. The proposed algorithms are compared with PGA via simulation studies and real-data experiments and are shown to achieve a higher ratio of expressed variance than PGA.
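As background for the Grassmann-manifold setting above, the geodesic distance between two subspaces on Gr(p, n) can be computed from principal angles. This is a standard construction, not the paper's NG algorithm:

```python
import numpy as np

def grassmann_distance(A, B):
    # Geodesic distance on Gr(p, n): orthonormalize the spanning columns,
    # take the SVD of the overlap matrix, and read off the principal angles
    # theta_i = arccos(sigma_i); the distance is the 2-norm of the angles.
    QA, _ = np.linalg.qr(A)
    QB, _ = np.linalg.qr(B)
    s = np.linalg.svd(QA.T @ QB, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))
    return float(np.linalg.norm(theta))

# Two 1-d subspaces of R^2: identical subspaces are at distance 0,
# orthogonal subspaces at distance pi/2.
A = np.array([[1.0], [0.0]])
B = np.array([[0.0], [1.0]])
d_same = grassmann_distance(A, A)
d_orth = grassmann_distance(A, B)
```

The clipping guards against floating-point values slightly outside [-1, 1] before the arccos.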
Citations: 1
The journal of machine learning for biomedical imaging