{"title":"通过学习位移向量场实现图像到体积的可变形配准","authors":"Ryuto Miura;Mitsuhiro Nakamura;Megumi Nakao","doi":"10.1109/TRPMS.2024.3430827","DOIUrl":null,"url":null,"abstract":"2-D/3-D image registration is a problem that solves the deformation and alignment of a pretreatment 3-D volume to a 2-D projection image, which is available for treatment support and biomedical analysis. 2-D/3-D image registration for abdominal organs is a complicated task because the abdominal organs deform significantly and their contours are not detected in 2-D X-ray images. In this study, we propose a supervised deep learning framework that achieves 2-D/3-D deformable image registration between the 3-D volume and a single-viewpoint 2-D projection image. The proposed method uses latent image features of the 2-D projection images to learn a transformation from the input image, which is a concatenation of the 2-D projection images and the 3-D volume, to a dense displacement vector field (DVF) that represents nonlinear and local organ displacements. The target DVFs are generated by registration between 3-D volumes, and the registration error with the estimated DVF is introduced as a loss function during training. We register 3D-computed tomography (CT) volumes to the digitally reconstructed radiographs generated from abdominal 4D-CT volumes of 35 cases. The experimental results show that the proposed method can reconstruct 3D-CT with a mean voxel-to-voxel error of 29.4 Hounsfield unit and a dice similarity coefficient of 89.2 % on average for the body, liver, stomach, duodenum, and kidney regions, which is a clinically acceptable accuracy. In addition, the average computation time for the registration process by the proposed framework is 0.181 s, demonstrating real-time registration performance.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"9 1","pages":"69-82"},"PeriodicalIF":4.6000,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Image-to-Volume Deformable Registration by Learning Displacement Vector Fields\",\"authors\":\"Ryuto Miura;Mitsuhiro Nakamura;Megumi Nakao\",\"doi\":\"10.1109/TRPMS.2024.3430827\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"2-D/3-D image registration is a problem that solves the deformation and alignment of a pretreatment 3-D volume to a 2-D projection image, which is available for treatment support and biomedical analysis. 2-D/3-D image registration for abdominal organs is a complicated task because the abdominal organs deform significantly and their contours are not detected in 2-D X-ray images. In this study, we propose a supervised deep learning framework that achieves 2-D/3-D deformable image registration between the 3-D volume and a single-viewpoint 2-D projection image. The proposed method uses latent image features of the 2-D projection images to learn a transformation from the input image, which is a concatenation of the 2-D projection images and the 3-D volume, to a dense displacement vector field (DVF) that represents nonlinear and local organ displacements. The target DVFs are generated by registration between 3-D volumes, and the registration error with the estimated DVF is introduced as a loss function during training. We register 3D-computed tomography (CT) volumes to the digitally reconstructed radiographs generated from abdominal 4D-CT volumes of 35 cases. 
The experimental results show that the proposed method can reconstruct 3D-CT with a mean voxel-to-voxel error of 29.4 Hounsfield unit and a dice similarity coefficient of 89.2 % on average for the body, liver, stomach, duodenum, and kidney regions, which is a clinically acceptable accuracy. In addition, the average computation time for the registration process by the proposed framework is 0.181 s, demonstrating real-time registration performance.\",\"PeriodicalId\":46807,\"journal\":{\"name\":\"IEEE Transactions on Radiation and Plasma Medical Sciences\",\"volume\":\"9 1\",\"pages\":\"69-82\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2024-07-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Radiation and Plasma Medical Sciences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10602520/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Radiation and Plasma Medical Sciences","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10602520/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Image-to-Volume Deformable Registration by Learning Displacement Vector Fields
Abstract: 2-D/3-D image registration addresses the deformation and alignment of a pretreatment 3-D volume to a 2-D projection image, which is useful for treatment support and biomedical analysis. 2-D/3-D registration of abdominal organs is a complicated task because the organs deform significantly and their contours are not detectable in 2-D X-ray images. In this study, we propose a supervised deep learning framework that achieves 2-D/3-D deformable image registration between a 3-D volume and a single-viewpoint 2-D projection image. The proposed method uses latent image features of the 2-D projection images to learn a transformation from the input, a concatenation of the 2-D projection image and the 3-D volume, to a dense displacement vector field (DVF) that represents nonlinear, local organ displacements. The target DVFs are generated by registration between 3-D volumes, and the registration error with respect to the estimated DVF is introduced as a loss function during training. We register 3-D computed tomography (CT) volumes to digitally reconstructed radiographs generated from abdominal 4D-CT volumes of 35 cases. The experimental results show that the proposed method reconstructs 3D-CT with a mean voxel-to-voxel error of 29.4 Hounsfield units and an average Dice similarity coefficient of 89.2% over the body, liver, stomach, duodenum, and kidney regions, which is clinically acceptable accuracy. In addition, the average computation time of the registration process is 0.181 s, demonstrating real-time registration performance.
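The pipeline described in the abstract can be summarized as: form an input by concatenating the 2-D projection with the 3-D volume, regress a dense DVF, supervise it against a target DVF obtained from 3-D/3-D registration, and warp the pretreatment volume with the estimated DVF to reconstruct the 3D-CT. The following is a minimal sketch of one training step under stated assumptions: PyTorch, toy tensor sizes, a small 3-D CNN named DVFRegressionNet, broadcasting the projection along the depth axis to form the joint input, and a plain MSE loss on the DVF. None of these choices are claimed to match the authors' implementation.

```python
# Illustrative sketch only; architecture, input formation, and loss are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DVFRegressionNet(nn.Module):
    """Toy regressor: concatenated (projection, volume) input -> dense 3-D DVF."""
    def __init__(self, in_channels=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 3, 3, padding=1),  # 3 channels: (dx, dy, dz) per voxel
        )

    def forward(self, x):
        return self.net(x)

def warp_volume(volume, dvf):
    """Warp a 3-D volume with a dense DVF (voxel displacements) by trilinear sampling."""
    b, _, d, h, w = volume.shape
    # Identity sampling grid in normalized [-1, 1] coordinates, shape (B, D, H, W, 3).
    theta = torch.eye(3, 4).unsqueeze(0).repeat(b, 1, 1).to(volume)
    grid = F.affine_grid(theta, (b, 1, d, h, w), align_corners=True)
    # Convert voxel-unit displacements to normalized coordinates (x, y, z order).
    scale = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1), 2.0 / max(d - 1, 1)],
                         device=volume.device)
    disp = dvf.permute(0, 2, 3, 4, 1) * scale
    return F.grid_sample(volume, grid + disp, align_corners=True)

model = DVFRegressionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

volume = torch.rand(1, 1, 32, 32, 32)       # pretreatment 3D-CT (toy size)
projection = torch.rand(1, 1, 32, 32)       # single-viewpoint 2-D projection (e.g., DRR)
target_dvf = torch.randn(1, 3, 32, 32, 32)  # placeholder for a DVF from 3-D/3-D registration

# One simple way to form the joint input: broadcast the 2-D projection along depth
# and concatenate it with the volume channel-wise.
proj_3d = projection.unsqueeze(2).expand(-1, -1, 32, -1, -1)
pred_dvf = model(torch.cat([proj_3d, volume], dim=1))

optimizer.zero_grad()
loss = F.mse_loss(pred_dvf, target_dvf)     # DVF registration-error loss (assumed MSE)
loss.backward()
optimizer.step()

# Warping the volume with the estimated DVF yields the reconstructed 3D-CT,
# which could then be evaluated by voxel-to-voxel HU error or Dice as in the paper.
reconstructed_ct = warp_volume(volume, pred_dvf)
```

Because the network outputs the DVF directly, inference is a single forward pass plus one warping operation, which is consistent with the sub-second registration time reported in the abstract.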