
Latest publications in The journal of machine learning for biomedical imaging

An Approach to Automated Diagnosis and Texture Analysis of the Fetal Liver & Placenta in Fetal Growth Restriction
Pub Date : 2023-01-10 DOI: 10.59275/j.melba.2022-ac28
A. Zeidan, Paula Ramirez Gilliland, Ashay Patel, Zhanchong Ou, Dimitra Flouri, N. Mufti, K. Maksym, Rosalind Aughwane, S. Ourselin, Anna L. David, A. Melbourne
Fetal growth restriction (FGR) is a prevalent pregnancy condition characterised by failure of the fetus to reach its genetically predetermined growth potential. The multiple aetiologies, coupled with the risk of fetal complications - encompassing neurodevelopmental delay, neonatal morbidity, and stillbirth - motivate the need to improve holistic assessment of the FGR fetus using MRI. We hypothesised that the fetal liver and placenta would provide insights into FGR biomarkers unattainable through conventional methods. Therefore, we explore the application of model fitting techniques, linear regression machine learning models, deep learning regression, and Haralick texture features from multi-contrast MRI for multi-organ analysis of the FGR fetus. We employed T2 relaxometry and diffusion-weighted MRI datasets (using a combined T2-diffusion scan) for 12 normally grown and 12 FGR gestational age (GA) matched pregnancies (estimated fetal weight below the 3rd centile; median GA 28+/-3 weeks). We applied the intravoxel incoherent motion (IVIM) model, which describes circulatory properties of the fetal organs, and analysed the resulting features distinguishing both cohorts. We additionally used novel multi-compartment models for fetal MRI analysis, which exhibit potential to provide a multi-organ FGR assessment, overcoming the limitations of empirical indicators - such as abnormal artery Doppler findings - used to evaluate placental dysfunction. The placenta and fetal liver presented key differentiators between FGR and normal controls, with significantly decreased perfusion, abnormal fetal blood motion, and reduced fetal blood oxygenation. This may be associated with preferential shunting of fetal blood towards the brain, affecting supply to the liver.
These features were further explored to determine their role in assessing FGR severity, employing simple machine learning models to predict FGR diagnosis (100% accuracy on test data, n=5), GA at delivery, time from MRI scan to delivery, and birth weight. We additionally explored the use of deep learning to regress the latter three variables, training a convolutional neural network on our liver and placenta voxel-level parameter maps obtained from the multi-compartment model fitting. Image texture analysis of the fetal organs demonstrated prominent textural variations in the placental perfusion fraction maps between the groups (p<0.0009), and spatial differences in the incoherent fetal capillary blood motion in the liver (p<0.009). This research serves as a proof-of-concept, investigating the effect of FGR on fetal organs, measuring differences in perfusion and oxygenation within the placenta and fetal liver, and their prognostic importance in automated diagnosis using simple machine learning models.
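As a rough illustration, the IVIM model named above is a bi-exponential signal model fitted voxel-wise. The sketch below shows a minimal bounded least-squares fit on a synthetic voxel; the b-values, starting point, and parameter bounds are assumptions for demonstration, not the authors' pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """Bi-exponential IVIM signal: perfusion fraction f, pseudo-diffusion
    coefficient D* (fast, flow-related) and tissue diffusivity D (slow)."""
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

# Hypothetical b-values (s/mm^2) and a noiseless synthetic voxel signal
b_values = np.array([0, 10, 20, 50, 100, 200, 400, 600, 800], dtype=float)
signal = ivim(b_values, f=0.25, d_star=0.05, d=0.0015)

# Bounded least-squares fit recovers the circulatory parameters
popt, _ = curve_fit(ivim, b_values, signal,
                    p0=(0.1, 0.01, 0.001),
                    bounds=([0.0, 0.003, 0.0], [1.0, 0.5, 0.003]))
f_fit, d_star_fit, d_fit = popt
```

In practice the fit is run per voxel over real diffusion-weighted data, producing the parameter maps analysed in the study.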
Citations: 0
Bayesian Optimization of Sampling Densities in MRI
Pub Date : 2022-09-15 DOI: 10.59275/j.melba.2023-8172
Alban Gossard, F. de Gournay, P. Weiss
Data-driven optimization of sampling patterns in MRI has recently received significant attention. Following recent observations on the combinatorial number of minimizers in off-the-grid optimization, we propose a framework to globally optimize the sampling densities using Bayesian optimization. Using a dimension reduction technique, we optimize the sampling trajectories more than 20 times faster than conventional off-the-grid methods, with a restricted number of training samples. Among other benefits, this method removes the need for automatic differentiation. Its performance is slightly worse than that of state-of-the-art learned trajectories, since it reduces the space of admissible trajectories, but it comes with significant computational advantages. Other contributions include: i) a careful evaluation of the distance in probability space used to generate trajectories; ii) a specific training procedure on families of operators for unrolled reconstruction networks; and iii) a gradient-projection-based scheme for trajectory optimization.
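As a rough illustration of the global-optimization loop described here, the sketch below runs generic Gaussian-process Bayesian optimization with an expected-improvement acquisition over a low-dimensional density parameterisation. A cheap quadratic stands in for the expensive objective (training and evaluating a reconstruction pipeline); every name and constant is an assumption, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel between two point sets
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression posterior mean and std at test points Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(Kss) - (v**2).sum(0), 1e-12, None)
    return mu, np.sqrt(var)

def recon_error(theta):
    # Toy stand-in for the costly reconstruction-error evaluation
    return float(((theta - 0.3) ** 2).sum())

rng = np.random.default_rng(0)
dim = 2
X = rng.uniform(0, 1, (5, dim))                 # initial design
y = np.array([recon_error(x) for x in X])

for _ in range(20):
    cand = rng.uniform(0, 1, (256, dim))        # random candidate densities
    mu, sigma = gp_posterior(X, y, cand)
    best = y.min()
    z = (best - mu) / sigma
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = cand[int(np.argmax(ei))]
    X = np.vstack([X, x_next])
    y = np.append(y, recon_error(x_next))
```

The point of the approach is that only a handful of true objective evaluations (here 25) are needed, versus the many gradient steps of off-the-grid trajectory optimization.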
Citations: 1
Evaluation of 3D GANs for Lung Tissue Modelling in Pulmonary CT
Pub Date : 2022-08-17 DOI: 10.59275/j.melba.2022-9e4b
S. Ellis, O. M. Manzanera, V. Baltatzis, Ibrahim Nawaz, A. Nair, L. L. Folgoc, S. Desai, Ben Glocker, J. Schnabel
Generative adversarial networks (GANs) are able to accurately model the distribution of complex, high-dimensional datasets such as images. This characteristic makes high-quality GANs useful for unsupervised anomaly detection in medical imaging. However, differences between training datasets, such as output image dimensionality and the appearance of semantically meaningful features, mean that GAN models from the natural image processing domain may not work 'out of the box' for medical imaging applications, necessitating re-implementation and re-evaluation. In this work we adapt and evaluate three GAN models for the task of modelling 3D healthy image patches in pulmonary CT. To the best of our knowledge, this is the first time that such a detailed evaluation has been performed. The deep convolutional GAN (DCGAN), styleGAN, and bigGAN architectures were selected for investigation due to their ubiquity and high performance in natural image processing. We train different variants of these methods and assess their performance using the widely used Fréchet Inception Distance (FID). In addition, the quality of the generated images was evaluated in a human observer study, the ability of the networks to model 3D domain-specific features was investigated, and the structure of the GAN latent spaces was analysed. Results show that the 3D styleGAN approaches produce realistic-looking images with meaningful 3D structure, but suffer from mode collapse, which must be explicitly addressed during training to obtain diversity in the samples. Conversely, the 3D DCGAN models show a greater capacity for image variability, but at the cost of poor-quality images. The 3D bigGAN models provide an intermediate level of image quality, but most accurately model the distribution of selected semantically meaningful features.
The results suggest that further development is required to realise a 3D GAN with sufficient representational capacity for patch-based lung CT anomaly detection, and we offer recommendations for future areas of research, such as experimenting with other architectures and incorporating positional encoding.
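For reference, the FID used above has a closed form once feature embeddings are extracted: it is the Fréchet distance between Gaussians fitted to the real and generated feature sets. A minimal sketch, with random vectors standing in for Inception (or 3D-network) embeddings:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two feature sets:
    ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^(1/2))."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from sqrtm
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Random features stand in for embeddings of real and generated patches
rng = np.random.default_rng(0)
real_feats = rng.normal(0.0, 1.0, size=(500, 8))
fake_feats = rng.normal(0.5, 1.0, size=(500, 8))
```

Identical distributions give an FID near zero, and a shifted generator distribution pushes it up, which is why lower FID is read as better sample quality.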
Citations: 0
Compound Figure Separation of Biomedical Images: Mining Large Datasets for Self-supervised Learning.
Pub Date : 2022-08-01 Epub Date: 2022-09-04
Tianyuan Yao, Chang Qu, Jun Long, Quan Liu, Ruining Deng, Yuanhan Tian, Jiachen Xu, Aadarsh Jha, Zuhayr Asad, Shunxing Bao, Mengyang Zhao, Agnes B Fogo, Bennett A Landman, Haichun Yang, Catie Chang, Yuankai Huo

With the rapid development of self-supervised learning (e.g., contrastive learning), the importance of having large-scale images (even without annotations) for training a more generalizable AI model has been widely recognized in medical image analysis. However, collecting task-specific unannotated data at scale can be challenging for individual labs. Existing online resources, such as digital books, publications, and search engines, provide a new avenue for obtaining large-scale images. However, published images in healthcare (e.g., radiology and pathology) include a considerable number of compound figures with subplots. In order to extract and separate compound figures into usable individual images for downstream learning, we propose a simple compound figure separation (SimCFS) framework that does not require the traditionally needed detection bounding box annotations, introducing a new loss function and a hard-case simulation. Our technical contribution is four-fold: (1) we introduce a simulation-based training framework that minimizes the need for resource-intensive bounding box annotations; (2) we propose a new side loss that is optimized for compound figure separation; (3) we propose an intra-class image augmentation method to simulate hard cases; and (4) to the best of our knowledge, this is the first study that evaluates the efficacy of leveraging self-supervised learning with compound image separation. From the results, the proposed SimCFS achieved state-of-the-art performance on the ImageCLEF 2016 Compound Figure Separation Database. The pretrained self-supervised learning model using large-scale mined figures improved the accuracy of downstream image classification tasks with a contrastive learning algorithm. The source code of SimCFS is publicly available at https://github.com/hrlblab/ImageSeperation.
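For contrast with the learning-based SimCFS, a crude rule-based baseline for compound figure separation simply cuts panels apart at whitespace gutters. The sketch below is only illustrative (all thresholds are assumptions) and shows why such heuristics break on hard cases like touching or unevenly spaced subplots:

```python
import numpy as np

def split_compound(figure, axis=1, white=0.95, min_gap=5):
    """Split a grayscale compound figure (values in [0, 1]) at whitespace
    gutters along one axis. A crude rule-based baseline, not SimCFS."""
    # Fraction of near-white pixels in each column (axis=1) or row (axis=0)
    profile = (figure > white).mean(axis=1 - axis)
    blank = profile > 0.99
    panels, start = [], None
    for i, b in enumerate(blank):
        if not b and start is None:
            start = i                       # panel begins
        elif b and start is not None:
            if i - start >= min_gap:
                panels.append((start, i))   # panel ends at a gutter
            start = None
    if start is not None:
        panels.append((start, len(blank)))
    return [figure.take(range(a, b), axis=axis) for a, b in panels]

# Two dark panels separated by a white gutter
fig = np.ones((40, 100))
fig[:, 5:40] = 0.2
fig[:, 60:95] = 0.2
parts = split_compound(fig)
```

SimCFS avoids this fragility by training a detector on simulated compound layouts instead of hand-tuned gutter rules.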

Citations: 0
QU-BraTS: MICCAI BraTS 2020 Challenge on Quantifying Uncertainty in Brain Tumor Segmentation - Analysis of Ranking Scores and Benchmarking Results.
Raghav Mehta, Angelos Filos, Ujjwal Baid, Chiharu Sako, Richard McKinley, Michael Rebsamen, Katrin Dätwyler, Raphael Meier, Piotr Radojewski, Gowtham Krishnan Murugesan, Sahil Nalawade, Chandan Ganesh, Ben Wagner, Fang F Yu, Baowei Fei, Ananth J Madhuranthakam, Joseph A Maldjian, Laura Daza, Catalina Gómez, Pablo Arbeláez, Chengliang Dai, Shuo Wang, Hadrien Reynaud, Yuanhan Mo, Elsa Angelini, Yike Guo, Wenjia Bai, Subhashis Banerjee, Linmin Pei, Murat Ak, Sarahi Rosas-González, Ilyess Zemmoura, Clovis Tauber, Minh H Vu, Tufve Nyholm, Tommy Löfstedt, Laura Mora Ballestar, Veronica Vilaplana, Hugh McHugh, Gonzalo Maso Talou, Alan Wang, Jay Patel, Ken Chang, Katharina Hoebel, Mishka Gidwani, Nishanth Arun, Sharut Gupta, Mehak Aggarwal, Praveer Singh, Elizabeth R Gerstner, Jayashree Kalpathy-Cramer, Nicolas Boutry, Alexis Huard, Lasitha Vidyaratne, Md Monibor Rahman, Khan M Iftekharuddin, Joseph Chazalon, Elodie Puybareau, Guillaume Tochon, Jun Ma, Mariano Cabezas, Xavier Llado, Arnau Oliver, Liliana Valencia, Sergi Valverde, Mehdi Amian, Mohammadreza Soltaninejad, Andriy Myronenko, Ali Hatamizadeh, Xue Feng, Quan Dou, Nicholas Tustison, Craig Meyer, Nisarg A Shah, Sanjay Talbar, Marc-André Weber, Abhishek Mahajan, Andras Jakab, Roland Wiest, Hassan M Fathallah-Shaykh, Arash Nazeri, Mikhail Milchenko, Daniel Marcus, Aikaterini Kotrotsou, Rivka Colen, John Freymann, Justin Kirby, Christos Davatzikos, Bjoern Menze, Spyridon Bakas, Yarin Gal, Tal Arbel

Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Several uncertainty estimation methods have recently been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019 and BraTS 2020 task on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and assign low confidence to incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by the 14 independent teams participating in QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analyses. Finally, in favor of transparency and reproducibility, our evaluation code is publicly available at https://github.com/RagMeh11/QU-BraTS.
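The filtering idea behind such uncertainty scores can be illustrated by recomputing Dice only on voxels whose uncertainty falls below increasing thresholds: a well-calibrated model concentrates its errors in high-uncertainty voxels, so the filtered Dice curve starts high and degrades gracefully. A hedged sketch on synthetic data (not the QU-BraTS scoring code):

```python
import numpy as np

def dice(pred, target):
    # Dice overlap between two boolean masks
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def filtered_dice_curve(pred, target, uncertainty, thresholds):
    """Dice computed only on voxels whose (normalised) uncertainty is
    at or below each threshold."""
    return [dice(pred[uncertainty <= t], target[uncertainty <= t])
            for t in thresholds]

# Synthetic segmentation whose errors carry high uncertainty
rng = np.random.default_rng(0)
target = rng.random(10000) > 0.5
noise = rng.random(10000)
pred = np.where(noise > 0.3, target, ~target)  # wrong where noise is small
uncertainty = 1.0 - noise                      # so errors get high uncertainty
curve = filtered_dice_curve(pred, target, uncertainty, [0.25, 0.5, 0.75, 1.0])
```

A score along QU-BraTS lines additionally penalises models that reach a high filtered Dice only by discarding many voxels that were in fact correct.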

Citations: 0
Compound Figure Separation of Biomedical Images: Mining Large Datasets for Self-supervised Learning
Pub Date : 2022-08-01 DOI: 10.48550/arXiv.2208.14357
Tianyuan Yao, Changbing Qu, Jun Long, Quan Liu, Ruining Deng, Yuanhan Tian, Jiachen Xu, Aadarsh Jha, Zuhayr Asad, S. Bao, Mengyang Zhao, A. Fogo, Bennett A.Landman, Haichun Yang, Catie Chang, Yuankai Huo
With the rapid development of self-supervised learning (e.g., contrastive learning), the importance of having large-scale images (even without annotations) for training a more generalizable AI model has been widely recognized in medical image analysis. However, collecting task-specific unannotated data at scale can be challenging for individual labs. Existing online resources, such as digital books, publications, and search engines, provide a new avenue for obtaining large-scale images. However, published images in healthcare (e.g., radiology and pathology) include a considerable number of compound figures with subplots. In order to extract and separate compound figures into usable individual images for downstream learning, we propose a simple compound figure separation (SimCFS) framework that does not require the traditionally needed detection bounding box annotations, introducing a new loss function and a hard-case simulation. Our technical contribution is four-fold: (1) we introduce a simulation-based training framework that minimizes the need for resource-intensive bounding box annotations; (2) we propose a new side loss that is optimized for compound figure separation; (3) we propose an intra-class image augmentation method to simulate hard cases; and (4) to the best of our knowledge, this is the first study that evaluates the efficacy of leveraging self-supervised learning with compound image separation. From the results, the proposed SimCFS achieved state-of-the-art performance on the ImageCLEF 2016 Compound Figure Separation Database. The pretrained self-supervised learning model using large-scale mined figures improved the accuracy of downstream image classification tasks with a contrastive learning algorithm. The source code of SimCFS is publicly available at https://github.com/hrlblab/ImageSeperation.
Journal Article, The journal of machine learning for biomedical imaging, published 2022-08-01. DOI: 10.48550/arXiv.2208.14357.
Citations: 0
Joint Frequency and Image Space Learning for MRI Reconstruction and Analysis.
Pub Date: 2022-06-01, Epub Date: 2022-06-23
Nalini M Singh, Juan Eugenio Iglesias, Elfar Adalsteinsson, Adrian V Dalca, Polina Golland

We propose neural network layers that explicitly combine frequency and image feature representations and show that they can be used as a versatile building block for reconstruction from frequency space data. Our work is motivated by the challenges arising in MRI acquisition where the signal is a corrupted Fourier transform of the desired image. The proposed joint learning schemes enable both correction of artifacts native to the frequency space and manipulation of image space representations to reconstruct coherent image structures at every layer of the network. This is in contrast to most current deep learning approaches for image reconstruction that treat frequency and image space features separately and often operate exclusively in one of the two spaces. We demonstrate the advantages of joint convolutional learning for a variety of tasks, including motion correction, denoising, reconstruction from undersampled acquisitions, and combined undersampling and motion correction on simulated and real-world multicoil MRI data. The joint models produce consistently high quality output images across all tasks and datasets. When integrated into a state-of-the-art unrolled optimization network with physics-inspired data consistency constraints for undersampled reconstruction, the proposed architectures significantly improve the optimization landscape, which yields an order of magnitude reduction of training time. This result suggests that joint representations are particularly well suited for MRI signals in deep learning networks. Our code and pretrained models are publicly available at https://github.com/nalinimsingh/interlacer.
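As a rough illustration of what "combining frequency and image feature representations" in a single layer can mean, here is a minimal numpy sketch (not the authors' actual architecture — their layers use learned convolutions): one branch operates directly on k-space, the other transforms to image space, operates there, and transforms back, and the two branches are mixed:

```python
import numpy as np

def joint_layer(kspace, w_freq, w_img, alpha=0.5):
    """One joint step (toy sketch): filter the data in frequency space
    (pointwise multiply) and in image space (pointwise gain applied after
    an inverse FFT), then mix both branches back in frequency space."""
    freq_branch = w_freq * kspace             # frequency-space operation
    img = np.fft.ifft2(kspace)                # move to image space
    img_branch = np.fft.fft2(w_img * img)     # image-space operation, back to k-space
    return alpha * freq_branch + (1 - alpha) * img_branch

# Sanity check: with unit weights in both branches the layer is the identity.
k = np.fft.fft2(np.arange(16.0).reshape(4, 4))
identity = joint_layer(k, w_freq=1.0, w_img=1.0)
```

In a trained network, `w_freq` and `w_img` would be learned convolutional filters and `alpha` a learned mixing weight, so each layer can decide per-feature which domain is more useful.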

Journal Article, The journal of machine learning for biomedical imaging, published 2022-06-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9639401/pdf/
Citations: 0
Nested Grassmannians for Dimensionality Reduction with Applications.
Chun-Hao Yang, Baba C Vemuri

In the recent past, nested structures in Riemannian manifolds have been studied in the context of dimensionality reduction as an alternative to the popular principal geodesic analysis (PGA) technique, for example, the principal nested spheres. In this paper, we propose a novel framework for constructing a nested sequence of homogeneous Riemannian manifolds. Common examples of homogeneous Riemannian manifolds include the n-sphere, the Stiefel manifold, the Grassmann manifold and many others. In particular, we focus on applying the proposed framework to the Grassmann manifold, giving rise to the nested Grassmannians (NG). An important application in which Grassmann manifolds are encountered is planar shape analysis. Specifically, each planar (2D) shape can be represented as a point in the complex projective space which is a complex Grassmann manifold. Some salient features of our framework are: (i) it explicitly exploits the geometry of the homogeneous Riemannian manifolds and (ii) the nested lower-dimensional submanifolds need not be geodesic. With the proposed NG structure, we develop algorithms for the supervised and unsupervised dimensionality reduction problems respectively. The proposed algorithms are compared with PGA via simulation studies and real data experiments and are shown to achieve a higher ratio of expressed variance compared to PGA.
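The nesting property — each lower-dimensional submanifold sitting inside the next — has a simple Euclidean analogue that also makes the "ratio of expressed variance" metric concrete. The sketch below uses PCA-style linear projection, not the actual nested-Grassmannian construction, purely to illustrate the two ideas:

```python
import numpy as np

def nested_subspace_projections(X, dims):
    """Euclidean sketch of *nested* dimension reduction: the rank-d
    projection reuses the first d right-singular vectors, so each
    subspace contains the previous one (Vt[:d-1] is a prefix of Vt[:d]).
    Returns the expressed-variance ratio for each requested dimension."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    total = (Xc ** 2).sum()
    ratios = {}
    for d in dims:
        proj = Xc @ Vt[:d].T @ Vt[:d]          # project onto span of top-d directions
        ratios[d] = (proj ** 2).sum() / total  # expressed-variance ratio
    return ratios

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
ratios = nested_subspace_projections(X, dims=[1, 2, 3])
```

By construction the ratios are nondecreasing in the dimension, which is the property a nested sequence of submanifolds generalizes to the curved (Grassmannian) setting.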

Journal Article, The journal of machine learning for biomedical imaging, published 2022-03-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9938729/pdf/
Citations: 0
A review and experimental evaluation of deep learning methods for MRI reconstruction.
Pub Date: 2022-03-01, Epub Date: 2022-03-11
Arghya Pal, Yogesh Rathi

Following the success of deep learning in a wide range of applications, neural network-based machine-learning techniques have received significant interest for accelerating magnetic resonance imaging (MRI) acquisition and reconstruction strategies. A number of ideas inspired by deep learning techniques for computer vision and image processing have been successfully applied to nonlinear image reconstruction in the spirit of compressed sensing for accelerated MRI. Given the rapidly growing nature of the field, it is imperative to consolidate and summarize the large number of deep learning methods that have been reported in the literature, to obtain a better understanding of the field in general. This article provides an overview of the recent developments in neural-network based approaches that have been proposed specifically for improving parallel imaging. A general background and introduction to parallel MRI is also given from a classical view of k-space based reconstruction methods. Image domain based techniques that introduce improved regularizers are covered along with k-space based methods which focus on better interpolation strategies using neural networks. While the field is rapidly evolving with plenty of papers published each year, in this review, we attempt to cover broad categories of methods that have shown good performance on publicly available data sets. Limitations and open problems are also discussed and recent efforts for producing open data sets and benchmarks for the community are examined.
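For context on what the reviewed deep methods improve on, the classical zero-filled baseline for undersampled reconstruction can be sketched in a few lines (toy image and mask pattern are illustrative; clinical masks are typically structured, not uniformly random):

```python
import numpy as np

def zero_filled_recon(image, keep_fraction=0.3, seed=0):
    """Baseline undersampled reconstruction: keep a random subset of
    k-space samples, zero the rest, and apply the inverse FFT. The
    result retains aliasing artifacts that learned methods suppress."""
    rng = np.random.default_rng(seed)
    k = np.fft.fft2(image)
    mask = rng.random(image.shape) < keep_fraction  # retained k-space samples
    recon = np.fft.ifft2(k * mask).real
    return recon, mask

img = np.outer(np.hanning(32), np.hanning(32))      # smooth toy "anatomy"
recon, mask = zero_filled_recon(img)
err = np.abs(recon - img).mean()                    # nonzero: artifacts remain
```

Both the image-domain and k-space-domain networks surveyed in the review can be seen as replacing the naive zero-filling step with a learned de-aliasing or interpolation operator.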

Journal Article, The journal of machine learning for biomedical imaging, published 2022-03-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9202830/pdf/
Citations: 0
Learning the Effect of Registration Hyperparameters with HyperMorph.
Pub Date: 2022-03-01, Epub Date: 2022-04-07
Andrew Hoopes, Malte Hoffmann, Douglas N Greve, Bruce Fischl, John Guttag, Adrian V Dalca

We introduce HyperMorph, a framework that facilitates efficient hyperparameter tuning in learning-based deformable image registration. Classical registration algorithms perform an iterative pair-wise optimization to compute a deformation field that aligns two images. Recent learning-based approaches leverage large image datasets to learn a function that rapidly estimates a deformation for a given image pair. In both strategies, the accuracy of the resulting spatial correspondences is strongly influenced by the choice of certain hyperparameter values. However, an effective hyperparameter search consumes substantial time and human effort as it often involves training multiple models for different fixed hyperparameter values and may lead to suboptimal registration. We propose an amortized hyperparameter learning strategy to alleviate this burden by learning the impact of hyperparameters on deformation fields. We design a meta network, or hypernetwork, that predicts the parameters of a registration network for input hyperparameters, thereby comprising a single model that generates the optimal deformation field corresponding to given hyperparameter values. This strategy enables fast, high-resolution hyperparameter search at test-time, reducing the inefficiency of traditional approaches while increasing flexibility. We also demonstrate additional benefits of HyperMorph, including enhanced robustness to model initialization and the ability to rapidly identify optimal hyperparameter values specific to a dataset, image contrast, task, or even anatomical region, all without the need to retrain models. We make our code publicly available at http://hypermorph.voxelmorph.net.
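The hypernetwork idea — one meta-model mapping hyperparameter values to the weights of a target network — can be sketched as follows. This is a toy linear stand-in, not the actual HyperMorph architecture (which predicts the weights of a convolutional registration network):

```python
import numpy as np

def hypernetwork(lmbda, W_h, b_h):
    """Map a hyperparameter value to the weights of a small target
    network (here a single linear map), as in amortized tuning."""
    feats = np.array([1.0, lmbda, lmbda ** 2])  # simple hyperparameter encoding
    return W_h @ feats + b_h                    # predicted target-net weights

def target_net(x, w):
    return x @ w                                # toy stand-in for the registration net

rng = np.random.default_rng(0)
W_h = rng.standard_normal((4, 3))
b_h = rng.standard_normal(4)
x = rng.standard_normal(4)

# One model, many hyperparameter settings: the target weights vary with
# lambda at test time, with no retraining.
y_low = target_net(x, hypernetwork(0.1, W_h, b_h))
y_high = target_net(x, hypernetwork(10.0, W_h, b_h))
```

During training, only the hypernetwork parameters (`W_h`, `b_h` here) are optimized while `lmbda` is sampled randomly, which is what makes the test-time hyperparameter sweep essentially free.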

Journal Article, The journal of machine learning for biomedical imaging, published 2022-03-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9491317/pdf/
Citations: 0