Exploiting ground plane constraints for visual-inertial navigation
Authors: G. Panahandeh, D. Zachariah, M. Jansson
DOI: 10.1109/PLANS.2012.6236923
Published in: Proceedings of the 2012 IEEE/ION Position, Location and Navigation Symposium
Publication date: 2012-04-23
Citations: 14
Abstract
This paper introduces an ego-motion estimation approach that fuses visual and inertial information from a monocular camera and an inertial measurement unit. The system maintains a set of feature points observed on the ground plane. Based on feature points matched between the current and previous images, a novel measurement model is introduced that imposes visual constraints on the inertial navigation system to perform 6-DoF motion estimation. Furthermore, feature points are used to impose epipolar constraints on the estimated motion between current and past images. Pose estimation is formulated implicitly in a state-space framework and performed by a Sigma-Point Kalman filter. Experiments conducted in an indoor scenario with real data demonstrate that the proposed method performs accurate 6-DoF pose estimation.
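The abstract names a Sigma-Point Kalman filter as the estimator. The sketch below is a generic unscented-transform measurement update, not the paper's specific ground-plane or epipolar measurement model: the state dimension, the measurement function `h`, and the tuning parameter `kappa` are all illustrative assumptions. It shows the core mechanic the paper relies on, namely propagating sigma points through a nonlinear measurement and fusing the result without linearizing analytically.

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Generate 2n+1 sigma points and weights for the unscented transform.

    The points are the mean plus/minus the columns of the Cholesky factor
    of (n + kappa) * cov, so their weighted spread reproduces cov exactly.
    """
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def spkf_update(mean, cov, z, h, R, kappa=1.0):
    """One sigma-point measurement update with nonlinear measurement model h.

    mean, cov : prior state estimate and covariance
    z         : measurement vector
    h         : measurement function, state -> measurement space
    R         : measurement noise covariance
    """
    pts, w = sigma_points(mean, cov, kappa)
    Z = np.array([h(p) for p in pts])          # push sigma points through h
    z_pred = w @ Z                              # predicted measurement
    dz = Z - z_pred
    dx = pts - mean
    Pzz = dz.T @ (w[:, None] * dz) + R          # innovation covariance
    Pxz = dx.T @ (w[:, None] * dz)              # state-measurement cross-cov.
    K = Pxz @ np.linalg.inv(Pzz)                # Kalman gain
    mean_new = mean + K @ (z - z_pred)
    cov_new = cov - K @ Pzz @ K.T
    return mean_new, cov_new

# Illustrative use: a 2-D state observed directly (identity measurement),
# standing in for the paper's ground-plane feature constraints.
prior_mean = np.zeros(2)
prior_cov = np.eye(2)
z = np.array([1.0, 1.0])
R = 0.01 * np.eye(2)
post_mean, post_cov = spkf_update(prior_mean, prior_cov, z, lambda x: x, R)
```

Because the unscented transform is exact for linear measurement functions, the identity-measurement example above reduces to a standard Kalman update, which makes it easy to sanity-check; the paper's actual measurement model would replace `h` with the ground-plane and epipolar constraint equations.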