Image-to-Volume Deformable Registration by Learning Displacement Vector Fields

IF 4.6 · Q1 (Radiology, Nuclear Medicine & Medical Imaging) · IEEE Transactions on Radiation and Plasma Medical Sciences · Pub Date: 2024-07-18 · DOI: 10.1109/TRPMS.2024.3430827
Ryuto Miura;Mitsuhiro Nakamura;Megumi Nakao
IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 9, no. 1, pp. 69–82. Citation count: 0.

Abstract

2-D/3-D image registration solves for the deformation and alignment of a pretreatment 3-D volume with respect to a 2-D projection image, and is useful for treatment support and biomedical analysis. 2-D/3-D image registration of abdominal organs is a complicated task because the abdominal organs deform significantly and their contours are not detectable in 2-D X-ray images. In this study, we propose a supervised deep learning framework that achieves 2-D/3-D deformable image registration between a 3-D volume and a single-viewpoint 2-D projection image. The proposed method uses latent image features of the 2-D projection images to learn a transformation from the input image, which is a concatenation of the 2-D projection images and the 3-D volume, to a dense displacement vector field (DVF) that represents nonlinear and local organ displacements. The target DVFs are generated by registration between 3-D volumes, and the registration error with respect to the estimated DVF is introduced as a loss function during training. We register 3-D computed tomography (CT) volumes to digitally reconstructed radiographs generated from abdominal 4D-CT volumes of 35 cases. The experimental results show that the proposed method can reconstruct 3D-CT with a mean voxel-to-voxel error of 29.4 Hounsfield units and a Dice similarity coefficient of 89.2% on average for the body, liver, stomach, duodenum, and kidney regions, which is a clinically acceptable accuracy. In addition, the average computation time for the registration process by the proposed framework is 0.181 s, demonstrating real-time registration performance.
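The core object the network predicts is a dense DVF: one 3-D displacement per voxel, which is then used to warp the pretreatment volume. As a minimal illustration of that warping step (not the authors' implementation; the function name `warp_volume` and the use of SciPy trilinear resampling are assumptions for this sketch):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(volume, dvf, order=1):
    """Warp a 3-D volume by a dense displacement vector field (DVF).

    volume: (D, H, W) array of intensities (e.g., CT values in HU).
    dvf:    (3, D, H, W) array; dvf[k] holds the displacement along axis k,
            so output voxel x samples the input at x + dvf[:, x].
    """
    # Identity sampling grid, one coordinate array per axis.
    grid = np.meshgrid(*[np.arange(s) for s in volume.shape], indexing="ij")
    # Add the per-voxel displacement to get the sampling coordinates.
    coords = [g + d for g, d in zip(grid, dvf)]
    # Trilinear interpolation (order=1); edge values are clamped.
    return map_coordinates(volume, coords, order=order, mode="nearest")

# A zero DVF leaves the volume unchanged; a unit shift along the first
# axis samples one slice deeper into the volume.
vol = np.arange(27, dtype=float).reshape(3, 3, 3)
zero_dvf = np.zeros((3,) + vol.shape)
identity = warp_volume(vol, zero_dvf)

shift_dvf = zero_dvf.copy()
shift_dvf[0] += 1.0
shifted = warp_volume(vol, shift_dvf)
```

In the paper's pipeline this resampling is the final step applied with the network-estimated DVF; during training, the supervision comes from target DVFs obtained by volume-to-volume registration.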
Source journal: IEEE Transactions on Radiation and Plasma Medical Sciences (Radiology, Nuclear Medicine & Medical Imaging)

CiteScore: 8.00 · Self-citation rate: 18.20% · Articles published per year: 109