
The journal of machine learning for biomedical imaging: latest publications

A Neural Conditional Random Field Model Using Deep Features and Learnable Functions for End-to-End MRI Prostate Zonal Segmentation.
Pub Date : 2025-08-01 Epub Date: 2025-08-20 DOI: 10.59275/j.melba.2025-gc4c
Alex Ling Yu Hung, Kai Zhao, Kaifeng Pang, Haoxin Zheng, Xiaoxi Du, Qi Miao, Demetri Terzopoulos, Kyunghyun Sung

The automatic segmentation of prostate MRI often produces inconsistent performance because certain image slices are more difficult to segment than others. In this paper, we show that consistency can be improved using Conditional Random Fields (CRFs), which refine segmentation results by considering pairwise pixel relationships. In practice, however, conventional CRFs are susceptible to noise and MRI intensity shifts because they use simple binary potentials based on spatial distance and intensity difference. Such heuristic potential functions have limited expressivity, preventing the network from extracting more relevant information and from computing potentials more stably. We propose a novel end-to-end Neural CRF (NCRF) model that utilizes learnable binary potential functions based on deep image features. Experiments show that our NCRF is a better model for prostate zonal segmentation than state-of-the-art CRF models. The NCRF improves segmentation accuracy in both the prostate transition zone and peripheral zone such that segmentation results are consistent across all prostate slices, which can improve the performance of downstream tasks such as prostate cancer detection and segmentation. Our code is available at https://github.com/aL3x-O-o-Hung/NCRF.
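The contrast between a heuristic binary potential and a learnable, feature-based one can be sketched as follows. This is an illustrative toy, not the authors' implementation: the bilinear form, feature dimension, and all names are assumptions.

```python
import numpy as np

# Conventional CRF pairwise potential: a bilateral-style kernel that depends
# only on spatial distance and intensity difference between two pixels.
def heuristic_potential(i_p, i_q, dist, theta_alpha=1.0, theta_beta=1.0):
    return np.exp(-dist**2 / (2 * theta_alpha**2)
                  - (i_p - i_q)**2 / (2 * theta_beta**2))

# NCRF-style potential (hypothetical form): compare deep feature vectors of
# the two pixels through a learned bilinear form, squashed to (0, 1).
def feature_potential(f_p, f_q, W):
    s = f_p @ W @ f_q
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(0)
f_p, f_q = rng.standard_normal(16), rng.standard_normal(16)
W = rng.standard_normal((16, 16)) * 0.1  # stands in for learned weights

print(heuristic_potential(0.4, 0.7, dist=2.0))
print(feature_potential(f_p, f_q, W))
```

The heuristic kernel reacts only to intensity and distance, so any intensity shift changes it directly; the feature-based form can, in principle, learn representations that are stable under such shifts.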

Citations: 0
Leveraging SO(3)-steerable convolutions for pose-robust semantic segmentation in 3D medical data.
Pub Date : 2024-05-15 DOI: 10.59275/j.melba.2024-7189
Ivan Diaz, Mario Geiger, Richard Iain McKinley

Convolutional neural networks (CNNs) allow for parameter sharing and translational equivariance by using convolutional kernels in their linear layers. By restricting these kernels to be SO(3)-steerable, CNNs can further improve parameter sharing. These rotationally-equivariant convolutional layers have several advantages over standard convolutional layers, including increased robustness to unseen poses, smaller network size, and improved sample efficiency. Despite this, most segmentation networks used in medical image analysis continue to rely on standard convolutional kernels. In this paper, we present a new family of segmentation networks that use equivariant voxel convolutions based on spherical harmonics. These networks are robust to data poses not seen during training, and do not require rotation-based data augmentation during training. In addition, we demonstrate improved segmentation performance in MRI brain tumor and healthy brain structure segmentation tasks, with enhanced robustness to reduced amounts of training data and improved parameter efficiency. Code to reproduce our results, and to implement the equivariant segmentation networks for other tasks is available at http://github.com/SCAN-NRAD/e3nn_Unet.
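The equivariance property that steerable kernels enforce can be checked numerically. A minimal sketch, assuming a 90-degree grid rotation (exact on a voxel grid) and a uniform box blur as a stand-in for an SO(3)-steerable layer; a rotationally-equivariant operator f satisfies f(Rx) = Rf(x):

```python
import numpy as np

def box_blur(vol):
    # 3x3x3 mean filter with wrap-around padding; isotropic, hence
    # equivariant to 90-degree grid rotations.
    out = np.zeros_like(vol, dtype=float)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                out += np.roll(vol, (dx, dy, dz), axis=(0, 1, 2))
    return out / 27.0

rng = np.random.default_rng(0)
x = rng.random((8, 8, 8))

rot = lambda v: np.rot90(v, k=1, axes=(0, 1))  # 90-degree rotation about z
lhs = box_blur(rot(x))   # f(R x)
rhs = rot(box_blur(x))   # R f(x)
print(np.allclose(lhs, rhs))  # equivariance holds exactly for this pair
```

Steerable networks built from spherical harmonics extend this exact grid-rotation behavior to arbitrary rotations in SO(3), which is why no rotation-based augmentation is needed at training time.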

Citations: 0
Dimensionality Reduction and Nearest Neighbors for Improving Out-of-Distribution Detection in Medical Image Segmentation.
Pub Date : 2024-01-01 Epub Date: 2024-10-23 DOI: 10.59275/j.melba.2024-g93a
McKell Woodland, Nihil Patel, Austin Castelo, Mais Al Taie, Mohamed Eltaher, Joshua P Yung, Tucker J Netherton, Tiffany L Calderone, Jessica I Sanchez, Darrel W Cleere, Ahmed Elsaiey, Nakul Gupta, David Victor, Laura Beretta, Ankit B Patel, Kristy K Brock

Clinically deployed deep learning-based segmentation models are known to fail on data outside of their training distributions. While clinicians review the segmentations, these models tend to perform well in most instances, which could exacerbate automation bias. Therefore, detecting out-of-distribution images at inference is critical to warn the clinicians that the model likely failed. This work applied the Mahalanobis distance (MD) post hoc to the bottleneck features of four Swin UNETR and nnU-net models that segmented the liver on T1-weighted magnetic resonance imaging and computed tomography. By reducing the dimensions of the bottleneck features with either principal component analysis or uniform manifold approximation and projection, images the models failed on were detected with high performance and minimal computational load. In addition, this work explored a non-parametric alternative to the MD, a k-th nearest neighbors distance (KNN). KNN drastically improved scalability and performance over MD when both were applied to raw and average-pooled bottleneck features. Our code is available at https://github.com/mckellwoodland/dimen_reduce_mahal.
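The two scores described above can be sketched on synthetic "bottleneck features"; the feature dimensions, PCA rank, and k are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.standard_normal((500, 64))     # in-distribution features
ood = rng.standard_normal((10, 64)) + 6.0  # shifted -> out-of-distribution

# --- PCA to 16 dims, fit on training features only ---
mu = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mu, full_matrices=False)
P = Vt[:16].T
z_train, z_ood = (train - mu) @ P, (ood - mu) @ P

# --- Mahalanobis distance (MD) in the reduced space ---
cov_inv = np.linalg.inv(np.cov(z_train, rowvar=False))
z_mu = z_train.mean(axis=0)
def md(z):
    d = z - z_mu
    return np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))

# --- k-th nearest neighbor distance (KNN), the non-parametric alternative ---
def knn_dist(z, ref, k=5):
    d = np.linalg.norm(z[:, None, :] - ref[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k - 1]

print(md(z_ood).mean() > md(z_train).mean())  # OOD scores are larger
# k=6 on train-vs-train skips each point's zero distance to itself
print(knn_dist(z_ood, z_train).mean() > knn_dist(z_train, z_train, k=6).mean())
```

In practice a threshold calibrated on held-out in-distribution scores would turn either score into a warning flag at inference.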

Citations: 0
Shape-aware Segmentation of the Placenta in BOLD Fetal MRI Time Series.
Pub Date : 2023-12-01 DOI: 10.59275/j.melba.2023-g3f8
S Mazdak Abulnaga, Neel Dey, Sean I Young, Eileen Pan, Katherine I Hobgood, Clinton J Wang, P Ellen Grant, Esra Abaci Turk, Polina Golland

Blood oxygen level dependent (BOLD) MRI time series with maternal hyperoxia can assess placental oxygenation and function. Measuring precise BOLD changes in the placenta requires accurate temporal placental segmentation and is confounded by fetal and maternal motion, contractions, and hyperoxia-induced intensity changes. Current BOLD placenta segmentation methods warp a manually annotated subject-specific template to the entire time series. However, as the placenta is a thin, elongated, and highly non-rigid organ subject to large deformations and obfuscated edges, existing work cannot accurately segment the placental shape, especially near boundaries. In this work, we propose a machine learning segmentation framework for placental BOLD MRI and apply it to segmenting each volume in a time series. We use a placental-boundary weighted loss formulation and perform a comprehensive evaluation across several popular segmentation objectives. Our model is trained and tested on a cohort of 91 subjects containing healthy fetuses, fetuses with fetal growth restriction, and mothers with high BMI. Biomedically, our model performs reliably in segmenting volumes in both normoxic and hyperoxic points in the BOLD time series. We further find that boundary-weighting increases placental segmentation performance by 8.3% and 6.0% Dice coefficient for the cross-entropy and signed distance transform objectives, respectively.
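The general idea of a boundary-weighted loss can be sketched as follows; this is an illustrative toy, not the authors' exact formulation (the weight value, neighborhood, and binary cross-entropy base loss are assumptions):

```python
import numpy as np

def boundary_map(mask):
    # A pixel is on the boundary if any 4-neighbor has a different label.
    b = np.zeros_like(mask, dtype=bool)
    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
        b |= np.roll(mask, shift, axis=axis) != mask
    return b

def weighted_bce(prob, target, w_boundary=5.0, eps=1e-7):
    # Boundary pixels of the ground truth get extra weight, so errors near
    # edges cost more than errors in the interior or background.
    w = 1.0 + w_boundary * boundary_map(target)
    p = np.clip(prob, eps, 1 - eps)
    ce = -(target * np.log(p) + (1 - target) * np.log(1 - p))
    return (w * ce).sum() / w.sum()

target = np.zeros((8, 8), dtype=int)
target[2:6, 2:6] = 1                     # a square stand-in "placenta"
good = np.where(target == 1, 0.9, 0.1)   # confident, correct prediction
bad = np.full_like(good, 0.5)            # uninformative prediction

print(weighted_bce(good, target) < weighted_bce(bad, target))
```

The same reweighting applies unchanged to other base objectives, such as the signed distance transform loss evaluated in the paper.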

Citations: 0
Multi-task learning for joint weakly-supervised segmentation and aortic arch anomaly classification in fetal cardiac MRI
Pub Date : 2023-11-11 DOI: 10.59275/j.melba.2023-b7bc
Paula Ramirez, Alena Uus, Milou P.M. van Poppel, Irina Grigorescu, Johannes K. Steinweg, David F.A. Lloyd, Kuberan Pushparajah, Andrew P. King, Maria Deprez
Congenital Heart Disease (CHD) is a group of cardiac malformations already present during fetal life, representing the prevailing category of birth defects globally. Our aim in this study is to aid 3D fetal vessel topology visualisation in aortic arch anomalies, a group which encompasses a range of conditions with significant anatomical heterogeneity. We present a multi-task framework for automated multi-class fetal vessel segmentation from 3D black blood T2w MRI and anomaly classification. Our training data consists of binary manual segmentation masks of the cardiac vessels' region in individual subjects and fully-labelled anomaly-specific population atlases. Our framework combines deep learning label propagation using VoxelMorph with 3D Attention U-Net segmentation and DenseNet121 anomaly classification. We target 11 cardiac vessels and three distinct aortic arch anomalies, including double aortic arch, right aortic arch, and suspected coarctation of the aorta. We incorporate an anomaly classifier into our segmentation pipeline, delivering a multi-task framework with the primary motivation of correcting topological inaccuracies of the segmentation. The hypothesis is that the multi-task approach will encourage the segmenter network to learn anomaly-specific features. As a secondary motivation, an automated diagnosis tool may have the potential to enhance diagnostic confidence in a decision support setting. Our results show that our proposed training strategy significantly outperforms label propagation and a network trained exclusively on propagated labels. Our classifier outperforms a classifier trained exclusively on T2w volume images, with an average balanced accuracy of 0.99 (0.01) after joint training. Adding a classifier improves the anatomical and topological accuracy of all correctly classified double aortic arch subjects.
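The multi-task coupling amounts to a single objective that sums a segmentation term and a classification term, so classifier gradients can push the shared segmenter toward anomaly-specific features. A minimal sketch with illustrative losses and weights (the Dice/cross-entropy choice and the weighting are assumptions, not the paper's exact objective):

```python
import numpy as np

def dice_loss(prob, target, eps=1e-7):
    inter = (prob * target).sum()
    return 1.0 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

def cross_entropy(logits, label):
    z = logits - logits.max()               # numerically stable log-softmax
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

def multitask_loss(seg_prob, seg_target, cls_logits, cls_label, lam=0.5):
    # One scalar objective; both heads backpropagate into the shared trunk.
    return dice_loss(seg_prob, seg_target) + lam * cross_entropy(cls_logits, cls_label)

seg_target = np.zeros((16, 16)); seg_target[4:12, 4:12] = 1
seg_prob = np.clip(seg_target * 0.9 + 0.05, 0, 1)
cls_logits = np.array([2.0, 0.1, -1.0])     # scores for 3 arch anomaly classes

loss_correct = multitask_loss(seg_prob, seg_target, cls_logits, cls_label=0)
loss_wrong = multitask_loss(seg_prob, seg_target, cls_logits, cls_label=2)
print(loss_correct < loss_wrong)  # the correct class label yields a lower loss
```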
Citations: 0
Towards Early Prediction of Human iPSC Reprogramming Success
Pub Date : 2023-11-10 DOI: 10.59275/j.melba.2023-3d9d
Abhineet Singh, Ila Jasra, Omar Mouhammed, Nidheesh Dadheech, Nilanjan Ray, James Shapiro
This paper presents advancements in automated early-stage prediction of the success of reprogramming human induced pluripotent stem cells (iPSCs) as a potential source for regenerative cell therapies. The minuscule success rate of iPSC reprogramming, around 0.01% to 0.1%, makes it labor-intensive, time-consuming, and exorbitantly expensive to generate a stable iPSC line, since doing so requires culturing millions of cells and intense biological scrutiny of multiple clones to identify a single optimal clone. The ability to reliably predict which cells are likely to establish as an optimal iPSC line at an early stage of pluripotency would therefore be ground-breaking in rendering this a practical and cost-effective approach to personalized medicine.
Temporal information about changes in cellular appearance over time is crucial for predicting its future growth outcomes. In order to generate this data, we first performed continuous time-lapse imaging of iPSCs in culture using an ultra-high resolution microscope. We then annotated the locations and identities of cells in late-stage images where reliable manual identification is possible. Next, we propagated these labels backwards in time using a semi-automated tracking system to obtain labels for early stages of growth. Finally, we used this data to train deep neural networks to perform automatic cell segmentation and classification.
Our code and data are available at https://github.com/abhineet123/ipsc_prediction
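The backward label-propagation step can be sketched as nearest-centroid matching across frames: cells are labeled in the last frame, and each earlier detection inherits the label of the closest detection in the following frame. This is a toy stand-in for the semi-automated tracking system described above; centroids and labels here are fabricated for illustration.

```python
import numpy as np

def propagate_back(frames, last_labels):
    # frames: list of (n_i, 2) centroid arrays, ordered in time
    # last_labels: cell identities annotated on the final frame
    labels = [None] * len(frames)
    labels[-1] = np.asarray(last_labels)
    for t in range(len(frames) - 2, -1, -1):
        # Distance from every centroid at time t to every centroid at t+1;
        # each cell inherits the label of its nearest successor.
        d = np.linalg.norm(frames[t][:, None] - frames[t + 1][None, :], axis=-1)
        labels[t] = labels[t + 1][d.argmin(axis=1)]
    return labels

frames = [np.array([[0.0, 0.0], [5.0, 5.0]]),
          np.array([[0.5, 0.2], [5.1, 4.8]]),
          np.array([[1.0, 0.5], [5.3, 4.6]])]
labels = propagate_back(frames, last_labels=[1, 2])
print(labels[0].tolist())  # → [1, 2]
```

A real tracker must additionally handle division, cell death, and detections entering or leaving the field of view, which is why the paper's system is semi-automated.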
Citations: 0
Morphologically-Aware Consensus Computation via Heuristics-based IterATive Optimization (MACCHIatO)
Pub Date : 2023-09-14 DOI: 10.59275/j.melba.2023-219c
Dimitri Hamzaoui, Sarah Montagne, Raphaële Renard-Penna, Nicholas Ayache, Hervé Delingette
The extraction of consensus segmentations from several binary or probabilistic masks is important to solve various tasks such as the analysis of inter-rater variability or the fusion of several neural network outputs. One of the most widely used methods to obtain such a consensus segmentation is the STAPLE algorithm. In this paper, we first demonstrate that the output of that algorithm is heavily impacted by the background size of images and the choice of the prior. We then propose a new method to construct a binary or a probabilistic consensus segmentation based on the Fréchet means of carefully chosen distances, which makes it totally independent of the image background size. We provide a heuristic approach to optimize this criterion such that a voxel's class is fully determined by its voxel-wise distance to the different masks, the connected component it belongs to, and the group of raters who segmented it. We compared our method extensively on several datasets with the STAPLE method and the naive segmentation averaging method, showing that it leads to binary consensus masks of intermediate size between Majority Voting and STAPLE and to different posterior probabilities than Mask Averaging and STAPLE methods. Our code is available at https://gitlab.inria.fr/dhamzaou/jaccardmap.
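For context, the two baseline consensus strategies the method is compared against can be sketched in a few lines; the MACCHIatO method itself, built on Fréchet means of chosen distances, is not reproduced here.

```python
import numpy as np

# Three raters' binary segmentations of the same 5 voxels (toy data).
masks = np.array([
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
    [0, 1, 1, 0, 0],
])

mean_mask = masks.mean(axis=0)            # soft consensus: mask averaging
majority = (mean_mask > 0.5).astype(int)  # hard consensus: majority voting

print(majority.tolist())  # → [0, 1, 1, 1, 0]
```

STAPLE instead estimates per-rater sensitivity and specificity jointly with the consensus, which is what makes its output sensitive to the background size and prior noted above.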
Citations: 0
Cross Attention Transformers for Multi-modal Unsupervised Whole-Body PET Anomaly Detection
Pub Date : 2023-04-19 DOI: 10.59275/j.melba.2023-18c1
Ashay Patel, Petru-Danial Tudiosu, Walter H.L. Pinaya, Gary Cook, Vicky Goh, Sebastien Ourselin, M. Jorge Cardoso
Cancer is a highly heterogeneous condition that can occur almost anywhere in the human body. [<sup>18</sup>F]fluorodeoxyglucose Positron Emission Tomography (<sup>18</sup>F-FDG PET) is an imaging modality commonly used to detect cancer due to its high sensitivity and clear visualisation of the pattern of metabolic activity. Nonetheless, as cancer is highly heterogeneous, it is challenging to train general-purpose discriminative cancer detection models, with data availability and disease complexity often cited as limiting factors. Unsupervised learning methods, more specifically anomaly detection models, have been suggested as a putative solution. These models learn a healthy representation of tissue and detect cancer by predicting deviations from the healthy norm, which requires models capable of accurately learning long-range interactions between organs, their imaging patterns, and other abstract features with high levels of expressivity. Such characteristics are suitably satisfied by transformers, which have been shown to achieve state-of-the-art results in unsupervised anomaly detection when trained on normal data. This work expands upon such approaches by introducing multi-modal conditioning of the transformer via cross-attention, i.e., supplying anatomical reference information from paired CT images to aid the PET anomaly detection task. Furthermore, we show the importance and impact of codebook sizing within a Vector Quantized Variational Autoencoder on the ability of the transformer network to fulfill the task of anomaly detection. Using 294 whole-body PET/CT samples containing various cancer types, we show that our anomaly detection method is robust and capable of achieving accurate cancer localization results even in cases where normal training data is unavailable. In addition, we show the efficacy of this approach on out-of-sample data, showcasing the generalizability of this approach even with limited training data. Lastly, we propose to combine model uncertainty with a new kernel density estimation approach, and show that it provides clinically and statistically significant improvements in accuracy and robustness when compared to classic residual-based anomaly maps. Overall, superior performance is demonstrated against leading state-of-the-art alternatives, drawing attention to the potential of these approaches.
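The conditioning mechanism described in the abstract, where PET tokens query anatomical context from paired-CT tokens, can be sketched as a single-head cross-attention step. This is a hedged illustration: the token counts, dimensions, and weight matrices below are hypothetical placeholders, not the paper's architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(pet_tokens, ct_tokens, Wq, Wk, Wv):
    """Queries come from the PET sequence; keys and values come from the
    paired CT sequence, injecting anatomical reference into the PET stream."""
    q = pet_tokens @ Wq
    k = ct_tokens @ Wk
    v = ct_tokens @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])  # scaled dot-product
    return softmax(scores, axis=-1) @ v      # CT-informed PET representations

rng = np.random.default_rng(0)
d = 8
pet = rng.normal(size=(5, d))   # 5 hypothetical PET tokens
ct = rng.normal(size=(7, d))    # 7 hypothetical paired-CT tokens
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = cross_attention(pet, ct, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

The output keeps the PET sequence length while every PET token becomes a weighted mixture of CT values, which is the sense in which the CT acts purely as conditioning.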
{"title":"Cross Attention Transformers for Multi-modal Unsupervised Whole-Body PET Anomaly Detection","authors":"Ashay Patel, Petru-Danial Tudiosu, Walter H.L. Pinaya, Gary Cook, Vicky Goh, Sebastien Ourselin, M. Jorge Cardoso","doi":"10.59275/j.melba.2023-18c1","DOIUrl":"https://doi.org/10.59275/j.melba.2023-18c1","url":null,"abstract":"Cancer is a highly heterogeneous condition that can occur almost anywhere in the human body. [<sup>18</sup>F]fluorodeoxyglucose Positron Emission Tomography (<sup>18</sup>F-FDG PET) is a imaging modality commonly used to detect cancer due to its high sensitivity and clear visualisation of the pattern of metabolic activity. Nonetheless, as cancer is highly heterogeneous, it is challenging to train general-purpose discriminative cancer detection models, with data availability and disease complexity often cited as a limiting factor. Unsupervised learning methods, more specifically anomaly detection models, have been suggested as a putative solution. These models learn a healthy representation of tissue and detect cancer by predicting deviations from the healthy norm, which requires models capable of accurately learning long-range interactions between organs, their imaging patterns, and other abstract features with high levels of expressivity. Such characteristics are suitably satisfied by transformers, which have been shown to generate state-of-the-art results in unsupervised anomaly detection by training on normal data. This work expands upon such approaches by introducing multi-modal conditioning of the transformer via cross-attention i.e. supplying anatomical reference information from paired CT images to aid the PET anomaly detection task. Furthermore, we show the importance and impact of codebook sizing within a Vector Quantized Variational Autoencoder, on the ability of the transformer network to fulfill the task of anomaly detection. 
Using 294 whole-body PET/CT samples containing various cancer types, we show that our anomaly detection method is robust and capable of achieving accurate cancer localization results even in cases where normal training data is unavailable. In addition, we show the efficacy of this approach on out-of-sample data showcasing the generalizability of this approach even with limited training data. Lastly, we propose to combine model uncertainty with a new kernel density estimation approach, and show that it provides clinically and statistically significant improvements in accuracy and robustness, when compared to the classic residual-based anomaly maps. Overall, a superior performance is demonstrated against leading state-of-the-art alternatives, drawing attention to the potential of these approaches.","PeriodicalId":75083,"journal":{"name":"The journal of machine learning for biomedical imaging","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135808044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Positive-unlabeled learning for binary and multi-class cell detection in histopathology images with incomplete annotations
Pub Date : 2023-02-17 DOI: 10.59275/j.melba.2022-8g31
Zipei Zhao, Fengqian Pang, Yaou Liu, Zhiwen Liu, Chuyang Ye
Cell detection in histopathology images is of great interest to clinical practice and research, and convolutional neural networks (CNNs) have achieved remarkable cell detection results. Typically, to train CNN-based cell detection models, every positive instance in the training images needs to be annotated, and instances that are not labeled as positive are considered negative samples. However, manual cell annotation is complicated due to the large number and diversity of cells, and it can be difficult to ensure the annotation of every positive instance. In many cases, only incomplete annotations are available, where some of the positive instances are annotated and the others are not, and the classification loss term for negative samples in typical network training becomes incorrect. In this work, to address this problem of incomplete annotations, we propose to reformulate the training of the detection network as a positive-unlabeled learning problem. Since the instances in unannotated regions can be either positive or negative, they have unknown labels. Using the samples with unknown labels and the positively labeled samples, we first derive an approximation of the classification loss term corresponding to negative samples for binary cell detection, and based on this approximation we further extend the proposed framework to multi-class cell detection. For evaluation, experiments were performed on four publicly available datasets. The experimental results show that our method improves the performance of cell detection in histopathology images given incomplete annotations for network training.
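The paper derives its own approximation of the negative-sample loss term; as a related, hedged sketch of the general idea, the widely used non-negative PU risk estimator (Kiryo et al., 2017) corrects the "unlabeled = negative" risk for the positives hidden among the unlabeled data. The class prior `pi` and the toy score distributions below are assumptions for illustration only.

```python
import numpy as np

def logistic_loss(z):
    # log(1 + exp(-z)), computed stably via logaddexp
    return np.logaddexp(0.0, -z)

def nn_pu_risk(scores_pos, scores_unl, pi):
    """Non-negative PU risk: treat unlabeled data as negatives, then subtract
    the contribution of the positives assumed hidden inside it (prior pi)."""
    risk_pos = pi * logistic_loss(scores_pos).mean()       # labeled positives as +
    risk_neg = (logistic_loss(-scores_unl).mean()          # unlabeled treated as -
                - pi * logistic_loss(-scores_pos).mean())  # minus hidden positives
    return risk_pos + max(risk_neg, 0.0)                   # clip: keep risk non-negative

rng = np.random.default_rng(1)
scores_pos = rng.normal(loc=2.0, size=100)   # classifier scores on annotated cells
scores_unl = rng.normal(loc=-1.0, size=400)  # scores on unannotated regions
risk = nn_pu_risk(scores_pos, scores_unl, pi=0.2)
print(risk >= 0.0)  # True
```

The clipping in the last line of `nn_pu_risk` is what prevents the corrected negative risk from going negative when the model overfits the labeled positives.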
{"title":"Positive-unlabeled learning for binary and multi-class cell detection in histopathology images with incomplete annotations","authors":"Zipei Zhao, Fengqian Pang, Yaou Liu, Zhiwen Liu, Chuyang Ye","doi":"10.59275/j.melba.2022-8g31","DOIUrl":"https://doi.org/10.59275/j.melba.2022-8g31","url":null,"abstract":"Cell detection in histopathology images is of great interest to clinical practice and research, and convolutional neural networks (CNNs) have achieved remarkable cell detection results. Typically, to train CNN-based cell detection models, every positive instance in the training images needs to be annotated, and instances that are not labeled as positive are considered negative samples. However, manual cell annotation is complicated due to the large number and diversity of cells, and it can be difficult to ensure the annotation of every positive instance. In many cases, only incomplete annotations are available, where some of the positive instances are annotated and the others are not, and the classification loss term for negative samples in typical network training becomes incorrect. In this work, to address this problem of incomplete annotations, we propose to reformulate the training of the detection network as a positive-unlabeled learning problem. Since the instances in unannotated regions can be either positive or negative, they have unknown labels. Using the samples with unknown labels and the positively labeled samples, we first derive an approximation of the classification loss term corresponding to negative samples for binary cell detection, and based on this approximation we further extend the proposed framework to multi-class cell detection. For evaluation, experiments were performed on four publicly available datasets. 
The experimental results show that our method improves the performance of cell detection in histopathology images given incomplete annotations for network training.","PeriodicalId":75083,"journal":{"name":"The journal of machine learning for biomedical imaging","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135339724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
An Approach to Automated Diagnosis and Texture Analysis of the Fetal Liver & Placenta in Fetal Growth Restriction
Pub Date : 2023-01-10 DOI: 10.59275/j.melba.2022-ac28
A. Zeidan, Paula Ramirez Gilliland, Ashay Patel, Zhanchong Ou, Dimitra Flouri, N. Mufti, K. Maksym, Rosalind Aughwane, S. Ourselin, Anna L. David, A. Melbourne
Fetal growth restriction (FGR) is a prevalent pregnancy condition characterised by failure of the fetus to reach its genetically predetermined growth potential. The multiple aetiologies, coupled with the risk of fetal complications - encompassing neurodevelopmental delay, neonatal morbidity, and stillbirth - motivate the need to improve holistic assessment of the FGR fetus using MRI. We hypothesised that the fetal liver and placenta would provide insights into FGR biomarkers unattainable through conventional methods. Therefore, we explore the application of model fitting techniques, linear regression machine learning models, deep learning regression, and Haralick texture features from multi-contrast MRI for multi-organ fetal analysis of FGR. We employed T2 relaxometry and diffusion-weighted MRI datasets (using a combined T2-diffusion scan) for 12 normally grown and 12 FGR gestational age (GA)-matched pregnancies (Estimated Fetal Weight below 3rd centile, Median 28+/-3wks). We applied the Intravoxel Incoherent Motion Model, which describes circulatory properties of the fetal organs, and analysed the resulting features distinguishing both cohorts. We additionally used novel multi-compartment models for MRI fetal analysis, which exhibit potential to provide a multi-organ FGR assessment, overcoming the limitations of empirical indicators - such as abnormal artery Doppler findings - in evaluating placental dysfunction. The placenta and fetal liver presented key differentiators between FGR and normal controls, with significantly decreased perfusion, abnormal fetal blood motion, and reduced fetal blood oxygenation. This may be associated with the preferential shunting of fetal blood towards the fetal brain, affecting supply to the liver. These features were further explored to determine their role in assessing FGR severity by employing simple machine learning models to predict FGR diagnosis (100% accuracy in test data, n=5), GA at delivery, time from MRI scan to delivery, and baby weight. We additionally explored the use of deep learning to regress the latter three variables, training a convolutional neural network with our liver and placenta voxel-level parameter maps, obtained from our multi-compartment model fitting. Image texture analysis of the fetal organs demonstrated prominent textural variations in the placental perfusion fraction maps between the groups (p<0.0009), and spatial differences in the incoherent fetal capillary blood motion in the liver (p<0.009). This research serves as a proof-of-concept, investigating the effect of FGR on fetal organs, measuring differences in perfusion and oxygenation within the placenta and fetal liver, and their prognostic importance in automated diagnosis using simple machine learning models.
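The Intravoxel Incoherent Motion (IVIM) model mentioned in the abstract describes the diffusion-weighted signal as a mixture of fast pseudo-diffusion (perfusion fraction f, coefficient D*) and slow tissue diffusion (D). As a hedged sketch (the b-values and parameters below are illustrative, not the study's protocol), the classic segmented fit recovers D and f from the high-b portion of a noiseless synthetic decay:

```python
import numpy as np

def ivim(b, s0, f, d_star, d):
    # IVIM signal model: S(b) = S0 * (f * exp(-b * D*) + (1 - f) * exp(-b * D))
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

b = np.array([0, 10, 20, 50, 100, 200, 400, 600, 800], dtype=float)
signal = ivim(b, s0=1.0, f=0.25, d_star=0.05, d=0.001)  # synthetic voxel

# Segmented fit: at high b the fast pseudo-diffusion term has decayed away,
# so ln S(b) ~ ln(S0 * (1 - f)) - b * D is linear in b.
high = b >= 200
slope, intercept = np.polyfit(b[high], np.log(signal[high]), 1)
d_est = -slope
f_est = 1.0 - np.exp(intercept) / signal[0]
print(round(d_est, 4), round(f_est, 3))  # 0.001 0.25
```

With noisy in-vivo data the same two-step scheme is typically followed by a constrained nonlinear fit for D*, which is the least stable of the three parameters.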
{"title":"An Approach to Automated Diagnosis and Texture Analysis of the Fetal Liver & Placenta in Fetal Growth Restriction","authors":"A. Zeidan, Paula Ramirez Gilliland, Ashay Patel, Zhanchong Ou, Dimitra Flouri, N. Mufti, K. Maksym, Rosalind Aughwane, S. Ourselin, Anna L. David, A. Melbourne","doi":"10.59275/j.melba.2022-ac28","DOIUrl":"https://doi.org/10.59275/j.melba.2022-ac28","url":null,"abstract":"Fetal growth restriction (FGR) is a prevalent pregnancy condition characterised by failure of the fetus to reach its genetically predetermined growth potential. The multiple aetiologies, coupled with the risk of fetal complications - encompassing neurodevelopmental delay, neonatal morbidity, and stillbirth - motivate the need to improve holistic assessment of the FGR fetus using MRI. We hypothesised that the fetal liver and placenta would provide insights into FGR biomarkers, unattainable through conventional methods. Therefore, we explore the application of model fitting techniques, linear regression machine learning models, deep learning regression, and Haralick textured features from multi-contrast MRI for multi-fetal organ analysis of FGR. We employed T2 relaxometry and diffusion-weighted MRI datasets (using a combined T2-diffusion scan) for 12 normally grown and 12 FGR gestational age (GA) matched pregnancies (Estimated Fetal Weight below 3rd centile, Median 28+/-3wks). We applied the Intravoxel Incoherent Motion Model, which describes circulatory properties of the fetal organs, and analysed the resulting features distinguishing both cohorts. We additionally used novel multi-compartment models for MRI fetal analysis, which exhibit potential to provide a multi-organ FGR assessment, overcoming the limitations of empirical indicators - such as abnormal artery Doppler findings - to evaluate placental dysfunction. 
The placenta and fetal liver presented key differentiators between FGR and normal controls, with significant decreased perfusion, abnormal fetal blood motion and reduced fetal blood oxygenation. This may be associated with the preferential shunting of the fetal blood towards the fetal brain, affecting supply to the liver. These features were further explored to determine their role in assessing FGR severity, by employing simple machine learning models to predict FGR diagnosis (100% accuracy in test data, n=5), GA at delivery, time from MRI scan to delivery, and baby weight. We additionally explored the use of deep learning to regress the latter three variables, training a convolutional neural network with our liver and placenta voxel-level parameter maps, obtained from our multi-compartment model fitting. Image texture analysis of the fetal organs demonstrated prominent textural variations in the placental perfusion fractions maps between the groups (p<0.0009), and spatial differences in the incoherent fetal capillary blood motion in the liver (p<0.009). This research serves as a proof-of-concept, investigating the effect of FGR on fetal organs, measuring differences in perfusion and oxygenation within the placenta and fetal liver, and their prognostic importance in automated diagnosis using simple machine learning models.","PeriodicalId":75083,"journal":{"name":"The journal of machine learning for biomedical imaging","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81025448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0