Gradient-Based Differential Approach for 3-D Motion Compensation in Interventional 2-D/3-D Image Fusion
Jian Wang, A. Borsdorf, B. Heigl, T. Köhler, J. Hornegger
2014 2nd International Conference on 3D Vision (3DV), December 2014. DOI: 10.1109/3DV.2014.45
Abstract
In interventional radiology, preoperative 3-D volumes can be fused with intra-operative 2-D fluoroscopic images. Since accuracy is crucial to the clinical usability of image fusion, patient motion that causes misalignment has to be corrected during the procedure. In this paper, a novel gradient-based differential approach is proposed to estimate 3-D rigid motion from the 2-D tracking of contour points. The mathematical relationship between the 3-D differential motion and the 2-D motion is derived using the 3-D gradient, and a tracking-based motion compensation pipeline is built on this relationship. Given the initial registration, contour points are extracted and tracked across the 2-D frames. The 3-D rigid motion is then estimated using iteratively re-weighted least squares minimization to enhance robustness. Our approach is evaluated on 10 datasets consisting of 1010 monoplane fluoroscopic images of a thorax phantom undergoing 3-D rigid motion. Over all datasets, the maximum structure shift in the 2-D projection caused by the 3-D motion ranges from 17.3 mm to 33.2 mm. Our approach reduces the 2-D structure shift to between 1.93 mm and 6.52 mm. For the most challenging case, longitudinal off-plane rotation, our approach achieves an average coverage of 79.9% with respect to the ground truth.
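The key ingredient of such a pipeline is the differential relationship between 3-D motion and 2-D displacement: under a projection model, a small rigid motion (δω, δt) moves a 3-D point x by δx = δω × x + δt, which projects to a 2-D displacement δp ≈ J(x) δx, where J(x) is the 2×3 Jacobian of the projection at x. Stacking one such linearized constraint per tracked contour point gives an overdetermined system in the six motion parameters, which can be solved by iteratively re-weighted least squares. The sketch below illustrates this generic IRLS scheme under a simple pinhole camera model; the function names, the Huber weighting, and the update composition are assumptions for illustration, not the paper's actual implementation (which works with fluoroscopic projection geometry and 3-D gradient-derived contour points).

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def expm_so3(w):
    """Rodrigues' formula: rotation matrix for a rotation vector w."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    W = skew(w)
    return (np.eye(3) + np.sin(theta) / theta * W
            + (1.0 - np.cos(theta)) / theta**2 * (W @ W))

def project(X, K):
    """Pinhole projection of 3-D points X (N,3) to pixels (N,2)."""
    h = X @ K.T
    return h[:, :2] / h[:, 2:3]

def irls_rigid_motion(X, p_obs, K, iters=20, huber=2.0):
    """Illustrative sketch: estimate a 3-D rigid motion (R, t) aligning the
    projections of contour points X (N,3) with their tracked 2-D positions
    p_obs (N,2), via iteratively re-weighted least squares with Huber
    weights. K is the 3x3 camera intrinsics matrix (an assumed model)."""
    R, t = np.eye(3), np.zeros(3)
    fx, fy = K[0, 0], K[1, 1]
    for _ in range(iters):
        Xc = X @ R.T + t                          # points under current pose
        r = (p_obs - project(Xc, K)).ravel()      # stacked residuals (2N,)
        # Per-point 2x6 Jacobian w.r.t. (delta_omega, delta_t):
        # delta_x = delta_omega x x + delta_t => dX/domega = -[x]_x, dX/dt = I
        J = np.empty((len(X), 2, 6))
        for i, (x, y, z) in enumerate(Xc):
            Jp = np.array([[fx / z, 0.0, -fx * x / z**2],
                           [0.0, fy / z, -fy * y / z**2]])   # d(u,v)/dx
            J[i] = Jp @ np.hstack([-skew((x, y, z)), np.eye(3)])
        J = J.reshape(-1, 6)
        # Huber weights on per-point residual norms (shared by u and v rows)
        n = np.linalg.norm(r.reshape(-1, 2), axis=1)
        w = np.repeat(np.where(n < huber, 1.0, huber / np.maximum(n, 1e-12)), 2)
        # Solve the weighted normal equations for the 6-DoF update
        delta = np.linalg.solve(J.T @ (w[:, None] * J), J.T @ (w * r))
        dR, dt = expm_so3(delta[:3]), delta[3:]
        R, t = dR @ R, dR @ t + dt                # compose the small motion
        if np.linalg.norm(delta) < 1e-8:
            break
    return R, t
```

The re-weighting step is what supplies the robustness the abstract mentions: contour points whose 2-D tracks disagree with the current motion estimate receive smaller weights, so tracking outliers are progressively down-weighted rather than dominating the least-squares fit.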