Structure from motion using SIFT features and the PH transform with panoramic imagery

M. Fiala
{"title":"Structure from motion using SIFT features and the PH transform with panoramic imagery","authors":"M. Fiala","doi":"10.1109/CRV.2005.78","DOIUrl":null,"url":null,"abstract":"Omni-directional sensors are useful in obtaining a 360/spl deg/ field of view of a scene for robot navigation, scene modeling, and telepresence. A method is presented to recover 3D scene structure and camera motion from a sequence of multiple images captured by an omnidirectional catadioptric camera. This 3D model is then used to localize other panoramic images taken in the vicinity. This goal is achieved by tracking the trajectories of SIFT keypoints, and finding the path they travel by utilizing a Hough transform technique modified for panoramic imagery. This technique is applied to spatio-temporal feature extraction in the three-dimensional space of an image sequence, as that scene points trace a horizontal line trajectory relative to the camera. SIFT (scale invariant feature transform) keypoints are distinctive image features which can be identified between images invariant to scale and rotation. Together these methods are applied to reconstruct a three-dimensional model from a sequence of panoramic images, where the panoramic camera was translating in a straight line horizontal path. Only the camera/mirror geometry is known a priori. The camera positions and the world model is determined, up to a scale factor. 
Experimental results of model building and camera localization using this model are shown.","PeriodicalId":307318,"journal":{"name":"The 2nd Canadian Conference on Computer and Robot Vision (CRV'05)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The 2nd Canadian Conference on Computer and Robot Vision (CRV'05)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CRV.2005.78","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 11

Abstract

Omni-directional sensors are useful for obtaining a 360° field of view of a scene for robot navigation, scene modeling, and telepresence. A method is presented to recover 3D scene structure and camera motion from a sequence of images captured by an omnidirectional catadioptric camera. The resulting 3D model is then used to localize other panoramic images taken in the vicinity. This goal is achieved by tracking the trajectories of SIFT keypoints and finding the paths they travel with a Hough transform technique modified for panoramic imagery. The technique performs spatio-temporal feature extraction in the three-dimensional space of an image sequence, exploiting the fact that scene points trace horizontal line trajectories relative to the camera. SIFT (scale invariant feature transform) keypoints are distinctive image features that can be identified across images invariant to scale and rotation. Together, these methods reconstruct a three-dimensional model from a sequence of panoramic images in which the panoramic camera translated along a straight horizontal path. Only the camera/mirror geometry is known a priori. The camera positions and the world model are determined up to a scale factor. Experimental results of model building and camera localization using this model are shown.
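The pipeline begins by establishing SIFT keypoint correspondences between panoramic frames. The abstract does not describe the matching rule used, but Lowe's nearest-neighbour ratio test is the standard way to accept SIFT matches; the sketch below illustrates it on synthetic 128-D descriptors (the `ratio_match` helper, the 0.8 threshold, and the synthetic data are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.8):
    """Lowe-style ratio test: accept a match only when the nearest
    descriptor is clearly closer than the second nearest."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # distances to all candidates
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

rng = np.random.default_rng(0)
descA = rng.random((5, 128))                    # 128-D SIFT-like descriptors
descB = descA[::-1] + rng.normal(0, 0.01, descA.shape)  # same features, reordered, noisy
print(ratio_match(descA, descB))                # each descriptor finds its reversed twin
```

Because the perturbed copies lie far closer than any unrelated descriptor, the ratio test accepts exactly the five true correspondences and rejects nothing spurious.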
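The Hough voting idea can be illustrated with a simplified planar analogue: for a camera translating along a straight horizontal line, an omnidirectional sensor reports a full-circle bearing to each static scene point from every position, and each bearing can vote along its viewing ray in a 2D accumulator over candidate point positions, so the accumulator peaks where the rays intersect. This is only a hedged sketch of the voting principle (the `hough_locate` function, grid, and step sizes are assumptions; the paper's transform operates on SIFT trajectories in the full spatio-temporal volume of the image sequence):

```python
import numpy as np

def hough_locate(cam_xs, bearings, grid_min=-4.0, grid_max=4.0, n_cells=64):
    """Locate a static scene point from bearings observed along a linear
    camera path: each observation votes once per cell along its viewing
    ray, and the accumulator peaks where all the rays intersect."""
    acc = np.zeros((n_cells, n_cells))
    cell = (grid_max - grid_min) / n_cells
    for cx, th in zip(cam_xs, bearings):
        seen = set()                          # one vote per cell per ray
        for r in np.arange(0.0, grid_max - grid_min, cell / 2):
            x = cx + r * np.cos(th)           # march along the viewing ray
            y = r * np.sin(th)
            i = int((x - grid_min) / cell)
            j = int((y - grid_min) / cell)
            if 0 <= i < n_cells and 0 <= j < n_cells:
                seen.add((i, j))
        for i, j in seen:
            acc[i, j] += 1
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return grid_min + (i + 0.5) * cell, grid_min + (j + 0.5) * cell

# Simulated panoramic camera translating along the x-axis, observing a
# fixed world point P from nine positions.
P = (2.0, 3.0)
cam_xs = np.linspace(-2.0, 2.0, 9)
bearings = [np.arctan2(P[1], P[0] - cx) for cx in cam_xs]
x, y = hough_locate(cam_xs, bearings)
print(x, y)                                   # peak lands in the cell containing P
```

Note that, as in the paper, the reconstruction is only determined up to scale unless the camera baseline is known: scaling all camera positions and the point by the same factor produces identical bearings.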