Tumbling target reconstruction and pose estimation through fusion of monocular vision and sparse-pattern range data

J. Padial, M. Hammond, S. Augenstein, S. Rock
2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), 2012-11-12
DOI: 10.1109/MFI.2012.6343026
A framework for 3D target reconstruction and relative pose estimation through fusion of vision and sparse-pattern range data (e.g. line-scanning LIDAR) is presented. The algorithm augments previous work in monocular vision-only SLAM/SfM to incorporate range data into the overall solution. The aim of this work is to enable a more dense reconstruction with accurate relative pose estimation that is unambiguous in scale. In order to incorporate range data, a linear estimator is presented to estimate the overall scale factor using vision-range correspondence. A motivating mission is the use of resource-constrained micro- and nano-satellites to perform autonomous rendezvous and docking operations with uncommunicative, tumbling targets, about which little or no prior information is available. The rationale for the approach is explained, and an algorithm is presented. The implementation using a modified Rao-Blackwellised particle filter is described and tested. Results from numerical simulations are presented that demonstrate the performance and viability of the approach.
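The linear scale estimator mentioned in the abstract can be sketched as follows. This is an illustrative least-squares formulation assumed from the description, not necessarily the paper's exact estimator; the function name and inputs are hypothetical. Given up-to-scale depths from monocular SfM and metric ranges from the line-scanning LIDAR at corresponding points, the scale factor s minimizing the squared residual has a closed-form linear solution:

```python
import numpy as np

def estimate_scale(sfm_depths, lidar_ranges):
    """Least-squares scale factor mapping up-to-scale SfM depths to
    metric LIDAR ranges at corresponding points.

    Minimizes sum_i (r_i - s * d_i)^2 over s, which gives the
    closed-form solution s = (d . r) / (d . d).
    (Hypothetical sketch; the paper's estimator may be formulated
    differently, e.g. over feature positions rather than depths.)
    """
    d = np.asarray(sfm_depths, dtype=float)
    r = np.asarray(lidar_ranges, dtype=float)
    return float(d @ r) / float(d @ d)

# Example: a target whose true metric scale is 2.5x the SfM scale.
d = np.array([1.0, 2.0, 3.0])   # up-to-scale depths from vision-only SfM
r = 2.5 * d                      # corresponding LIDAR ranges (noiseless)
print(estimate_scale(d, r))      # → 2.5
```

With noisy range measurements the same expression remains the least-squares optimum, which is one reason a linear estimator is attractive on resource-constrained micro- and nano-satellite hardware.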