{"title":"基于sift的反深度参数化单凸SLAM机器人定位","authors":"Chen Chwan-Hsen, Yung-Pyng Chan","doi":"10.1109/ARSO.2007.4531427","DOIUrl":null,"url":null,"abstract":"We have developed a monocular SLAM method which uses the scale-invariant feature transform (SIFT) algorithm to detect salient features within the scene. Only feature points with large scales are considered as worth-tracking features to reduce the computation load and enhance the robustness. These feature information are input to an extended Kalman filter with the spatial coordinates of the feature points and that of the observing camera as its state variables. The angular and translational velocity and acceleration of the camera are also included as the state variables. Compared to previous approaches, we use the reciprocal of the depth, instead of the depth itself, as the state variable, together with other state variables, in the extended Kalman filter to represent the relative distance between the camera and the feature points. The extended Kalman filter can accurately estimate the spatial location of the feature points and that of the camera with only one camera after a very short period for those feature points experiencing significant change in parallax. We have tested the proposed method with a hand-held camera walking in both indoor and outdoor environment. The outdoor environment for the experiment is populated with both close and distant objects. The results show very accurate estimates on the spatial locations of the camera and feature points within seconds.","PeriodicalId":344670,"journal":{"name":"2007 IEEE Workshop on Advanced Robotics and Its Social Impacts","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"SIFT-based monocluar SLAM with inverse depth parameterization for robot localization\",\"authors\":\"Chen Chwan-Hsen, Yung-Pyng Chan\",\"doi\":\"10.1109/ARSO.2007.4531427\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We have developed a monocular SLAM method which uses the scale-invariant feature transform (SIFT) algorithm to detect salient features within the scene. Only feature points with large scales are considered as worth-tracking features to reduce the computation load and enhance the robustness. These feature information are input to an extended Kalman filter with the spatial coordinates of the feature points and that of the observing camera as its state variables. The angular and translational velocity and acceleration of the camera are also included as the state variables. Compared to previous approaches, we use the reciprocal of the depth, instead of the depth itself, as the state variable, together with other state variables, in the extended Kalman filter to represent the relative distance between the camera and the feature points. The extended Kalman filter can accurately estimate the spatial location of the feature points and that of the camera with only one camera after a very short period for those feature points experiencing significant change in parallax. We have tested the proposed method with a hand-held camera walking in both indoor and outdoor environment. The outdoor environment for the experiment is populated with both close and distant objects. 
The results show very accurate estimates on the spatial locations of the camera and feature points within seconds.\",\"PeriodicalId\":344670,\"journal\":{\"name\":\"2007 IEEE Workshop on Advanced Robotics and Its Social Impacts\",\"volume\":\"43 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2007-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2007 IEEE Workshop on Advanced Robotics and Its Social Impacts\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ARSO.2007.4531427\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2007 IEEE Workshop on Advanced Robotics and Its Social Impacts","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ARSO.2007.4531427","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
SIFT-based monocluar SLAM with inverse depth parameterization for robot localization
We have developed a monocular SLAM method that uses the scale-invariant feature transform (SIFT) algorithm to detect salient features within the scene. Only feature points with large scales are considered worth tracking, which reduces the computational load and enhances robustness. This feature information is fed into an extended Kalman filter whose state variables include the spatial coordinates of the feature points and of the observing camera, together with the camera's angular and translational velocity and acceleration. In contrast to previous approaches, we use the reciprocal of the depth, rather than the depth itself, as the state variable representing the relative distance between the camera and each feature point. With only a single camera, the extended Kalman filter can accurately estimate the spatial locations of the feature points and of the camera after a very short period, for those feature points that experience a significant change in parallax. We have tested the proposed method with a hand-held camera carried while walking in both indoor and outdoor environments. The outdoor environment for the experiment contains both close and distant objects. The results show very accurate estimates of the spatial locations of the camera and feature points within seconds.
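
To make the scale-based feature selection concrete, the sketch below shows how large-scale SIFT keypoints can be filtered out of a frame using OpenCV. The paper does not specify a scale threshold or implementation, so MIN_SCALE and the function name are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of scale-based SIFT keypoint filtering.
# MIN_SCALE is a hypothetical cutoff; the paper gives no threshold value.
import cv2

MIN_SCALE = 8.0  # assumed: keep only coarse, large-scale keypoints

def detect_large_scale_features(gray_image):
    """Detect SIFT keypoints and keep only those with large scale.

    OpenCV stores the keypoint scale in KeyPoint.size (the diameter of
    the keypoint's meaningful neighborhood). Larger values correspond to
    coarser, more stable features, so filtering on it shrinks the set of
    points the tracker and filter must handle.
    """
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray_image, None)
    if descriptors is None:  # no keypoints found in this frame
        return []
    return [(kp, desc) for kp, desc in zip(keypoints, descriptors)
            if kp.size >= MIN_SCALE]
```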
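The benefit of estimating inverse depth rather than depth is easiest to see in the point parameterization itself. The sketch below follows the common inverse-depth convention (as in Montiel, Civera, and Davison's formulation); the paper's exact state layout may differ, so the 6-vector ordering and angle convention here are assumptions.

```python
# Sketch of an inverse-depth feature parameterization as used in an EKF
# state vector (common convention; not the paper's verified layout).
import numpy as np

def inverse_depth_to_point(feature_state):
    """Convert a 6-vector (x0, y0, z0, theta, phi, rho) to a 3D point.

    (x0, y0, z0) is the camera position when the feature was first
    observed, (theta, phi) are the azimuth/elevation of the viewing ray,
    and rho = 1/depth. Storing rho instead of depth keeps the filter
    well behaved for distant points: their depth is huge and poorly
    constrained, but their inverse depth is near zero with roughly
    Gaussian uncertainty, which suits the EKF's linearization.
    """
    x0, y0, z0, theta, phi, rho = feature_state
    # Unit ray direction from the azimuth/elevation angles.
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return np.array([x0, y0, z0]) + m / rho
```

Because the measurement model only ever needs the point's projection, the filter can update rho smoothly from near zero as parallax accumulates, which is consistent with the abstract's claim of fast convergence for features that undergo significant parallax change.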