Deformation field correction for spatial normalization of PET images using a population-derived partial least squares model.

Murat Bilgel, Aaron Carass, Susan M Resnick, Dean F Wong, Jerry L Prince
{"title":"Deformation field correction for spatial normalization of PET images using a population-derived partial least squares model.","authors":"Murat Bilgel, Aaron Carass, Susan M Resnick, Dean F Wong, Jerry L Prince","doi":"10.1007/978-3-319-10581-9_25","DOIUrl":null,"url":null,"abstract":"<p><p>Spatial normalization of positron emission tomography (PET) images is essential for population studies, yet work on anatomically accurate PET-to-PET registration is limited. We present a method for the spatial normalization of PET images that improves their anatomical alignment based on a deformation correction model learned from structural image registration. To generate the model, we first create a population-based PET template with a corresponding structural image template. We register each PET image onto the PET template using deformable registration that consists of an affine step followed by a diffeomorphic mapping. Constraining the affine step to be the same as that obtained from the PET registration, we find the diffeomorphic mapping that will align the structural image with the structural template. We train partial least squares (PLS) regression models within small neighborhoods to relate the PET intensities and deformation fields obtained from the diffeomorphic mapping to the structural image deformation fields. The trained model can then be used to obtain more accurate registration of PET images to the PET template without the use of a structural image. A cross validation based evaluation on 79 subjects shows that our method yields more accurate alignment of the PET images compared to deformable PET-to-PET registration as revealed by 1) a visual examination of the deformed images, 2) a smaller error in the deformation fields, and 3) a greater overlap of the deformed anatomical labels with ground truth segmentations.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"8679 ","pages":"198-206"},"PeriodicalIF":0.0000,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4222176/pdf/nihms637009.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine learning in medical imaging. MLMI (Workshop)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-319-10581-9_25","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Spatial normalization of positron emission tomography (PET) images is essential for population studies, yet work on anatomically accurate PET-to-PET registration is limited. We present a method for the spatial normalization of PET images that improves their anatomical alignment based on a deformation correction model learned from structural image registration. To generate the model, we first create a population-based PET template with a corresponding structural image template. We register each PET image onto the PET template using deformable registration that consists of an affine step followed by a diffeomorphic mapping. Constraining the affine step to be the same as that obtained from the PET registration, we find the diffeomorphic mapping that will align the structural image with the structural template. We train partial least squares (PLS) regression models within small neighborhoods to relate the PET intensities and deformation fields obtained from the diffeomorphic mapping to the structural image deformation fields. The trained model can then be used to obtain more accurate registration of PET images to the PET template without the use of a structural image. A cross-validation-based evaluation on 79 subjects shows that our method yields more accurate alignment of the PET images compared to deformable PET-to-PET registration, as revealed by 1) a visual examination of the deformed images, 2) a smaller error in the deformation fields, and 3) a greater overlap of the deformed anatomical labels with ground truth segmentations.
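For intuition, the neighborhood-wise PLS training step described in the abstract can be sketched in Python with scikit-learn's PLSRegression. The array names and layouts below (pet_img, pet_def, struct_def), the patch radius, and the number of PLS components are illustrative assumptions, not the authors' implementation; the sketch only shows how local PLS models could relate PET intensities and PET-derived deformation components to the structural deformation at a voxel, as the paper describes.

# Minimal sketch of neighborhood-wise PLS deformation correction.
# Assumed pre-computed arrays (hypothetical names, not from the paper's code):
#   pet_img[subject, x, y, z]        -- PET intensities in PET-template space
#   pet_def[subject, x, y, z, 3]     -- PET-to-template diffeomorphic deformation field
#   struct_def[subject, x, y, z, 3]  -- structural-to-template deformation field (training target)
import numpy as np
from sklearn.cross_decomposition import PLSRegression


def extract_patch(vol, center, r):
    """Flatten an r-radius cubic neighborhood around a voxel into per-subject feature rows."""
    x, y, z = center
    return vol[..., x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1].reshape(vol.shape[0], -1)


def train_local_pls(pet_img, pet_def, struct_def, center, r=2, n_components=5):
    """Fit one local PLS model mapping PET features to the structural deformation at a voxel."""
    # Predictors: PET intensities plus the three PET deformation components in the patch.
    X = np.hstack([extract_patch(pet_img, center, r)]
                  + [extract_patch(pet_def[..., d], center, r) for d in range(3)])
    # Target: the structural deformation vector at the patch center.
    Y = struct_def[:, center[0], center[1], center[2], :]
    model = PLSRegression(n_components=n_components)
    model.fit(X, Y)
    return model


def predict_corrected_deformation(model, pet_img, pet_def, center, r=2):
    """Apply a trained local model to a new subject's PET-only features."""
    X = np.hstack([extract_patch(pet_img, center, r)]
                  + [extract_patch(pet_def[..., d], center, r) for d in range(3)])
    return model.predict(X)  # estimated structural deformation at this voxel

In this sketch, one such model would be trained per small neighborhood across the training subjects; at test time, the per-voxel predictions made from PET-only features are assembled into a corrected deformation field, without requiring a structural image for the new subject.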
