
Latest publications from the 2014 Canadian Conference on Computer and Robot Vision

Generalized Exposure Fusion Weights Estimation
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.8
Mohammed Elamine Moumene, R. Nourine, D. Ziou
Only a small part of the wide intensity range found in high dynamic range scenes can be captured with typical image sensors, so the delivered images may contain under- or overexposed pixels. A popular approach to overcome this problem is to take several images with different exposure parameters and then fuse them into a single image. This exposure fusion is mostly performed as a weighted average of the corresponding pixels. The challenge is to find weights that produce the best fused image quality with a minimum number of operations, so as to meet real-time requirements. In this paper we present a supervised learning method to estimate generalized exposure fusion weights, and we demonstrate how they can be used to fuse any set of exposures very quickly. Subjective and objective comparisons with relevant prior work demonstrate the effectiveness of the proposed method.
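The per-pixel weighted average the abstract describes can be sketched in a few lines. Here the weights come from a simple well-exposedness Gaussian favouring mid-range intensities — a common hand-crafted choice standing in for the learned weights of the paper — and `sigma`, the [0, 1] intensity scale, and the 2×2 toy stack are all illustrative assumptions:

```python
import numpy as np

def well_exposedness_weights(images, sigma=0.2):
    # Gaussian weight favouring mid-range intensities (hypothetical choice;
    # the paper estimates its weights with supervised learning instead).
    return [np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)) for img in images]

def fuse_exposures(images, weights, eps=1e-12):
    # Per-pixel weighted average of the exposure stack.
    w = np.stack(weights)
    w = w / (w.sum(axis=0) + eps)          # normalise weights per pixel
    return (w * np.stack(images)).sum(axis=0)

# Two synthetic "exposures" of the same 2x2 scene, intensities in [0, 1]
under = np.array([[0.05, 0.10], [0.20, 0.15]])
over  = np.array([[0.90, 0.95], [0.60, 0.85]])
fused = fuse_exposures([under, over], well_exposedness_weights([under, over]))
```

Because the normalised weights form a convex combination, each fused pixel lies between the corresponding input intensities.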
Citations: 8
Metadata-Weighted Score Fusion for Multimedia Event Detection
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.47
Scott McCloskey, Jingchen Liu
We address the problem of multimedia event detection in videos captured 'in the wild', in particular the fusion of cues from multiple aspects of the video's content: detected objects, observed motion, audio signatures, etc. We employ score fusion, also known as late fusion, and propose a method that learns local weightings of the various base classifier scores which respect the performance differences arising from video quality. Classifiers working with visual texture features, for instance, are given reduced weight when applied to subsets of the video corpus with high compression, and the weights associated with the other classifiers are adjusted to reflect this lack of confidence. We present a method to automatically partition the video corpus into relevant subsets and to learn local weightings which optimally fuse scores on a particular subset. Improvements in event detection performance are demonstrated on the TRECVid Multimedia Event Detection (MED) MEDTest dataset, and comparisons are provided to several other score fusion methods.
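The core idea — a per-subset weighting of base classifier scores, with, e.g., a texture classifier down-weighted on highly compressed videos — can be illustrated as follows; the `subset_weights` structure and all example numbers are hypothetical, not taken from the paper:

```python
import numpy as np

def fuse_scores(scores, subset_weights, subset_id):
    # scores: (n_classifiers,) base classifier scores for one video.
    # subset_weights: corpus-subset id -> per-classifier weights
    # (hypothetical structure; the paper learns these local weightings).
    w = np.asarray(subset_weights[subset_id], dtype=float)
    w = w / w.sum()                        # normalise to a convex combination
    return float(np.dot(w, scores))

# Down-weight a texture classifier (index 1) on highly compressed videos
weights = {"high_compression": [0.5, 0.1, 0.4], "clean": [0.3, 0.4, 0.3]}
s = np.array([0.8, 0.2, 0.6])
fused_hc = fuse_scores(s, weights, "high_compression")   # 0.66
fused_cl = fuse_scores(s, weights, "clean")              # 0.50
```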
Citations: 1
Interactive Teleoperation Interface for Semi-autonomous Control of Robot Arms
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.55
C. P. Quintero, R. T. Fomena, A. Shademan, Oscar A. Ramirez, Martin Jägersand
We propose and develop an interactive, semi-autonomous control system for robot arms. Our system supports two interaction modes: (1) a user can naturally control a robot arm through a direct linkage between the tracked human skeleton and the arm motion; (2) an autonomous image-based visual servoing routine can be triggered for precise positioning. Coarse motions are executed by human teleoperation and fine motions by image-based visual servoing. A successful application of the proposed interaction is presented for a WAM arm equipped with an eye-in-hand camera.
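Image-based visual servoing of the kind mentioned in mode (2) is classically driven by the control law v = -λ L⁺ e. The sketch below uses that textbook form only; the paper's actual controller, interaction matrix, and gain are not specified here, so `L`, `e`, and `lam` are illustrative:

```python
import numpy as np

def ibvs_velocity(L, error, lam=0.5):
    # Classic image-based visual servoing law: v = -lambda * pinv(L) @ e,
    # mapping an image-feature error to a 6-DOF camera velocity twist.
    return -lam * np.linalg.pinv(L) @ error

# Toy one-point interaction matrix (2x6) and feature error (illustrative values)
L = np.array([[-1.0, 0.0, 0.2, 0.1, -1.2, 0.3],
              [ 0.0, -1.0, 0.1, 1.2, -0.1, -0.2]])
e = np.array([0.05, -0.03])
v = ibvs_velocity(L, e)
```

With a full-row-rank `L`, this velocity drives the feature error toward zero exponentially at rate `lam`.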
Citations: 8
Toward a Unified Framework for EMG Signals Processing and Controlling an Exoskeleton
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.46
G. Durandau, W. Suleiman
In this paper, we present a control method for robotic systems using electromyography (EMG) signals collected by surface EMG electrodes. The EMG signals are analyzed using a neuromusculoskeletal (NMS) model that jointly represents the muscles and the skeleton of the body. This model has the advantage of allowing external forces to be added without changing the initial parameters, which is particularly useful for the control of exoskeletons. The algorithm has been validated through experiments in which the elbow joint is moved either freely or while handling a barbell with various loads. The results of our algorithm are then compared to the motions obtained by a motion capture system during the same session. The comparison demonstrates the efficiency of our algorithm in predicting and estimating arm motion using only EMG signals.
Citations: 5
The Range Beacon Placement Problem for Robot Navigation
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.28
River Allen, Neil MacMillan, D. Marinakis, R. Nishat, Rayhan Rahman, S. Whitesides
Instrumenting an environment with sensors can provide an effective and scalable localization solution for robots. Where GPS is not available, beacons that provide position estimates to a robot must be placed effectively in order to maximize the robot's navigation accuracy and robustness. Sonar range-based beacons are reasonable candidates for low-cost position estimation sensors. In this paper we explore heuristics derived from computational geometry to estimate the effectiveness of sonar beacon deployments given a predefined mobile robot path. Results from numerical simulations and experimentation demonstrate the effectiveness and scalability of our approach.
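Range beacons yield a position fix through multilateration, the building block such placement heuristics optimize over. A minimal linearized least-squares fix from three 2-D range measurements (a standard technique; the beacon coordinates and robot position below are illustrative) might look like:

```python
import numpy as np

def multilaterate(beacons, ranges):
    # Linearised least-squares 2D position fix from >= 3 range beacons:
    # subtracting the first sphere equation from the others removes the
    # quadratic term and leaves a linear system in the robot position.
    b = np.asarray(beacons, float)
    r = np.asarray(ranges, float)
    A = 2 * (b[1:] - b[0])
    rhs = (r[0] ** 2 - r[1:] ** 2) + (b[1:] ** 2).sum(1) - (b[0] ** 2).sum()
    pos, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pos

beacons = [(0, 0), (10, 0), (0, 10)]
true_pos = np.array([3.0, 4.0])
ranges = [np.linalg.norm(true_pos - np.array(b)) for b in beacons]
est = multilaterate(beacons, ranges)
```

With noisy ranges the same least-squares form degrades gracefully, which is why beacon geometry (the placement problem) matters for accuracy.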
Citations: 12
Scale-Space Decomposition and Nearest Linear Combination Based Approach for Face Recognition
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.37
F. A. Hoque, Liang Chen
Among the many illumination-robust approaches, scale-space decomposition based methods play an important role in reducing lighting effects in face images. However, most existing scale-space decomposition methods perform recognition based on the illumination-invariant small-scale features only. We propose a scale-space decomposition based face recognition approach that extracts features of different scales through the TV+L1 model and the wavelet transform. The approach represents a subject's face image by a subspace spanned by linear combinations of the features of different scales. To decide the identity of the probe, the nearest neighbor (NN) approach is used to measure the similarities between a probe face image and the subspace representations of the gallery face images. Experiments on various benchmarks demonstrate that the system outperforms many recognition methods in the same category.
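The nearest-linear-combination rule — measuring the distance from a probe feature vector to the subspace spanned by each subject's features, then picking the closest subject — reduces to a least-squares projection. The toy 3-D features below are purely illustrative:

```python
import numpy as np

def subspace_distance(probe, basis):
    # Distance from a probe vector to the span of a subject's feature basis,
    # via least-squares projection (nearest linear combination).
    B = np.asarray(basis, float).T            # columns span the subspace
    coeffs, *_ = np.linalg.lstsq(B, probe, rcond=None)
    return np.linalg.norm(probe - B @ coeffs)

def nearest_subject(probe, galleries):
    # NN decision: the subject whose feature subspace is closest to the probe.
    return min(galleries, key=lambda s: subspace_distance(probe, galleries[s]))

galleries = {
    "A": [[1, 0, 0], [0, 1, 0]],   # subspace spanned by two feature vectors
    "B": [[0, 0, 1], [1, 1, 1]],
}
probe = np.array([2.0, 3.0, 0.1])
best = nearest_subject(probe, galleries)
```

The probe lies almost in subject A's subspace (residual 0.1), so the rule assigns it to A.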
Citations: 1
Using Gradient Orientation to Improve Least Squares Line Fitting
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.38
T. Petković, S. Lončarić
Straight line fitting is an important problem in computer and robot vision. We propose a novel method for least squares line fitting that uses both the point coordinates and the local gradient orientation to fit an optimal line by minimizing the proposed algebraic distance. The proposed inclusion of gradient orientation offers several advantages: (a) one data point is sufficient for the line fit, (b) for the same number of points the fit is more precise due to inclusion of gradient orientation, and (c) outliers can be rejected based on the gradient orientation or the distance to line.
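Advantage (a) can be contrasted with a conventional fit: an orthogonal least-squares line needs several points, while with a gradient orientation the edge normal is known and a single point fixes the line. This is a simplified reading of the idea — the paper minimizes a combined algebraic distance rather than the two separate fits sketched here:

```python
import numpy as np

def fit_line_tls(points):
    # Orthogonal (total) least-squares line fit: n . x = c with |n| = 1.
    # (Baseline multi-point fit, no gradient information used.)
    p = np.asarray(points, float)
    centroid = p.mean(axis=0)
    _, _, vt = np.linalg.svd(p - centroid)
    n = vt[-1]                       # normal = direction of least variance
    return n, n @ centroid

def line_from_point_and_gradient(point, grad):
    # With gradient orientation, one point suffices: the edge normal is the
    # gradient direction, so the line is n . x = n . p.
    n = np.asarray(grad, float)
    n = n / np.linalg.norm(n)
    return n, n @ np.asarray(point, float)

pts = [(0, 1), (1, 2), (2, 3), (3, 4)]        # points on y = x + 1
n, c = fit_line_tls(pts)
n2, c2 = line_from_point_and_gradient((0, 1), (1, -1))
```

Both recover the same line y = x + 1, but the second does so from a single point plus its gradient.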
Citations: 2
Towards Full Omnidirectional Depth Sensing Using Active Vision for Small Unmanned Aerial Vehicles
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.12
A. Harmat, I. Sharf
Collision avoidance for small unmanned aerial vehicles operating in a variety of environments is limited by the types of available depth sensors. Currently, there are no sensors that are lightweight, function outdoors in sunlight, and cover enough of the field of view to be useful in complex environments, although many sensors excel in one or two of these areas. We present a new depth estimation method, based on concepts from multi-view stereo and structured light methods, that uses only lightweight miniature cameras and a small laser dot-matrix projector to produce measurements in the range of 1-12 meters. The field of view of the system is limited only by the number and type of cameras/projectors used, and can be fully omnidirectional if desired. The sensitivity of the system to design and calibration parameters is tested in simulation, and results from a functional prototype are presented.
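Depth from a projected dot observed by a camera ultimately rests on the triangulation relation Z = f·b/d. A minimal sketch with illustrative numbers follows; the paper's multi-camera, multi-projector geometry generalizes this two-view case:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    # Pinhole triangulation: Z = f * b / d, with focal length in pixels,
    # baseline in meters, and disparity in pixels (generic stereo relation;
    # the paper's camera/projector geometry generalises it).
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 10 cm baseline, 14 px disparity
z = depth_from_disparity(700.0, 0.10, 14.0)   # 5.0 m
```

The inverse relation between depth and disparity is what bounds the useful range: at 12 m the same setup would see a disparity of under 6 px.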
Citations: 2
Optimizing Camera Perspective for Stereo Visual Odometry
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.9
Valentin Peretroukhin, Jonathan Kelly, T. Barfoot
Visual Odometry (VO) is an integral part of many navigation techniques in mobile robotics. In this work, we investigate how the orientation of the camera affects the overall position estimates recovered from stereo VO. Through simulations and experimental work, we demonstrate that this error can be significantly reduced by changing the perspective of the stereo camera relative to the moving platform. Specifically, we show that orienting the camera at an oblique angle to the direction of travel can reduce VO error by up to 82% in simulations and up to 59% in experimental data. A variety of parameters are investigated for their effects on this trend, including image capture frequency and camera resolution.
Citations: 11
Trajectory Estimation Using Relative Distances Extracted from Inter-image Homographies
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.39
Mårten Wadenbäck, A. Heyden
The main idea of this paper is to use distances between camera positions to recover the trajectory of a mobile robot. We consider a mobile platform equipped with a single fixed camera using images of the floor and their associated inter-image homographies to find these distances. We show that under the assumptions that the camera is rigidly mounted with a constant tilt and travelling at a constant height above the floor, the distance between two camera positions may be expressed in terms of the condition number of the inter-image homography. Experiments are conducted on synthetic data to verify that the derived distance formula gives distances close to the true ones and is not too sensitive to noise. We also describe how the robot trajectory may be represented as a graph with edge lengths determined by the distances computed using the formula above, and present one possible method to construct this graph given some of these distances. The experiments show promising results.
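The quantity the distance formula builds on — the condition number of an inter-image homography — is the ratio of its extreme singular values. A small sketch follows; the example homographies are illustrative, and the paper's actual distance formula is not reproduced here:

```python
import numpy as np

def homography_condition(H):
    # Ratio of the largest to the smallest singular value of the 3x3
    # inter-image homography; the paper relates this quantity to the
    # distance between the two camera positions.
    s = np.linalg.svd(np.asarray(H, float), compute_uv=False)
    return s[0] / s[-1]

H_identity = np.eye(3)                       # no camera motion
H_translate = np.array([[1.0, 0.0, 0.3],     # illustrative planar motion
                        [0.0, 1.0, 0.1],
                        [0.0, 0.0, 1.0]])
k0 = homography_condition(H_identity)        # exactly 1
k1 = homography_condition(H_translate)       # grows with camera motion
```

An identity homography (zero displacement) has condition number 1, and the number grows as the cameras move apart, which is what makes it usable as a relative-distance cue.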
Citations: 3