
2014 Canadian Conference on Computer and Robot Vision: Latest Publications

Metadata-Weighted Score Fusion for Multimedia Event Detection
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.47
Scott McCloskey, Jingchen Liu
We address the problem of multimedia event detection from videos captured 'in the wild,' in particular the fusion of cues from multiple aspects of the video's content: detected objects, observed motion, audio signatures, etc. We employ score fusion, also known as late fusion, and propose a method that learns local weightings of the various base classifier scores which respect the performance differences arising from the video quality. Classifiers working with visual texture features, for instance, are given reduced weight when applied to subsets of the video corpus with high compression, and the weights associated with the other classifiers are adjusted to reflect this lack of confidence. We present a method to automatically partition the video corpus into relevant subsets, and to learn local weightings which optimally fuse scores on a particular subset. Improvements in event detection performance are demonstrated on the TRECVid Multimedia Event Detection (MED) MED Test dataset, and comparisons are provided to several other score fusion methods.
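Late fusion of the kind described, a weighted sum of base-classifier scores with weights adapted to video quality, can be sketched in a few lines of numpy. The penalty value, the choice of a single penalized classifier, and the renormalization step below are illustrative assumptions, not the paper's learned local weighting:

```python
import numpy as np

def late_fusion(scores, weights):
    """Fuse per-classifier scores with a weighted sum (late fusion).

    scores  : (n_videos, n_classifiers) base classifier scores
    weights : (n_videos, n_classifiers) local weights, rows sum to 1
    """
    return np.sum(scores * weights, axis=1)

def quality_weights(base_weights, compressed, penalized, penalty=0.5):
    """Down-weight one classifier (e.g. a texture-based one) on highly
    compressed videos, then renormalize each row so weights sum to 1."""
    w = np.tile(base_weights, (len(compressed), 1)).astype(float)
    w[compressed, penalized] *= penalty   # reduce confidence on poor-quality subset
    return w / w.sum(axis=1, keepdims=True)

# Toy example: 2 videos, 3 base classifiers, second video highly compressed.
scores = np.array([[0.9, 0.4, 0.6],
                   [0.8, 0.5, 0.7]])
base = np.array([1 / 3, 1 / 3, 1 / 3])
compressed = np.array([False, True])
w = quality_weights(base, compressed, penalized=0)
fused = late_fusion(scores, w)
```

With equal base weights, the first video fuses to the plain mean, while on the compressed video the texture classifier's vote is halved and the others pick up the slack.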
Citations: 1
Segmenting Objects in Weakly Labeled Videos
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.24
Mrigank Rochan, Shafin Rahman, Neil D. B. Bruce, Yang Wang
We consider the problem of segmenting objects in weakly labeled video. A video is weakly labeled if it is associated with a tag (e.g. Youtube videos with tags) describing the main object present in the video. It is weakly labeled because the tag only indicates the presence/absence of the object, but does not give the detailed spatial/temporal location of the object in the video. Given a weakly labeled video, our method can automatically localize the object in each frame and segment it from the background. Our method is fully automatic and does not require any user input. In principle, it can be applied to a video of any object class. We evaluate the proposed method on a dataset with more than 100 video shots. Our experimental results show that our method outperforms other baseline approaches.
Citations: 6
Interactive Teleoperation Interface for Semi-autonomous Control of Robot Arms
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.55
C. P. Quintero, R. T. Fomena, A. Shademan, Oscar A. Ramirez, Martin Jägersand
We propose and develop an interactive, semi-autonomous control scheme for robot arms. Our system supports two interaction modes: (1) a user can naturally control a robot arm through a direct linkage between the tracked human skeleton and the arm motion; (2) an autonomous image-based visual servoing routine can be triggered for precise positioning. Coarse motions are executed by human teleoperation and fine motions by image-based visual servoing. We present a successful application of the proposed interaction on a WAM arm equipped with an eye-in-hand camera.
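The fine-positioning step named above, image-based visual servoing (IBVS), has a classic textbook form, v = -λ L⁺ (s - s*), where L stacks the interaction matrices of the tracked image features. The sketch below is that generic law for point features, not the paper's specific implementation; feature values and depths are made up:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized point
    feature (x, y) at depth Z, for a camera twist [vx,vy,vz,wx,wy,wz]."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x ** 2), y],
        [0, -1 / Z, y / Z, 1 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Classic IBVS law: camera twist v = -lam * pinv(L) @ (s - s*)."""
    e = (features - desired).ravel()
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -lam * np.linalg.pinv(L) @ e

# Toy example: four point features offset from their desired positions.
s_star = np.array([[-0.1, -0.1], [0.1, -0.1], [0.1, 0.1], [-0.1, 0.1]])
s = s_star + 0.05
Z = np.ones(4)
v = ibvs_velocity(s, s_star, Z)
```

When the features reach their desired positions the error, and hence the commanded twist, goes to zero.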
Citations: 8
The Range Beacon Placement Problem for Robot Navigation
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.28
River Allen, Neil MacMillan, D. Marinakis, R. Nishat, Rayhan Rahman, S. Whitesides
Instrumenting an environment with sensors can provide an effective and scalable localization solution for robots. Where GPS is not available, beacons that provide position estimates to a robot must be placed effectively in order to maximize the robot's navigation accuracy and robustness. Sonar range-based beacons are reasonable candidates for low-cost position-estimate sensors. In this paper we explore heuristics derived from computational geometry to estimate the effectiveness of sonar beacon deployments given a predefined mobile robot path. Results from numerical simulations and experiments demonstrate the effectiveness and scalability of our approach.
Citations: 12
Scale-Space Decomposition and Nearest Linear Combination Based Approach for Face Recognition
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.37
F. A. Hoque, Liang Chen
Among the many illumination-robust approaches, scale-space decomposition based methods play an important role in reducing lighting effects in face images. However, most existing scale-space decomposition methods perform recognition based on the illumination-invariant small-scale features only. We propose a scale-space decomposition based face recognition approach that extracts features at different scales through the TV+L1 model and the wavelet transform. The approach represents a subject's face image via a subspace spanned by linear combinations of the features of different scales. To decide the identity of the probe, the nearest neighbor (NN) approach is used to measure the similarities between a probe face image and the subspace representations of the gallery face images. Experiments on various benchmarks demonstrate that the system outperforms many recognition methods in the same category.
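The nearest-linear-combination idea, scoring a probe by its distance to the subspace spanned by a subject's multi-scale gallery features, can be sketched as a least-squares projection. The toy feature vectors and subject ids below are made up for illustration; in the paper the features come from the TV+L1 and wavelet decomposition:

```python
import numpy as np

def subspace_distance(probe, gallery):
    """Distance from `probe` to the subspace spanned by the columns
    of `gallery`, via a least-squares projection: find the linear
    combination of gallery features closest to the probe."""
    coeffs, *_ = np.linalg.lstsq(gallery, probe, rcond=None)
    return np.linalg.norm(probe - gallery @ coeffs)

def identify(probe, subjects):
    """Nearest-neighbor over subject subspaces: return the subject id
    whose gallery subspace is closest to the probe."""
    return min(subjects, key=lambda sid: subspace_distance(probe, subjects[sid]))

# Toy gallery: columns are one subject's feature vectors.
subjects = {
    "A": np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]),
    "B": np.array([[0.0], [0.0], [1.0]]),
}
probe = np.array([2.0, 3.0, 0.0])   # lies exactly in A's span
```

Here the probe is a linear combination of A's features, so its distance to A's subspace is zero and `identify` returns "A".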
Citations: 1
Toward a Unified Framework for EMG Signals Processing and Controlling an Exoskeleton
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.46
G. Durandau, W. Suleiman
In this paper, we present a method for controlling a robotic system using electromyography (EMG) signals collected by surface EMG electrodes. The EMG signals are analyzed using a neuromusculoskeletal (NMS) model that jointly represents the muscles and the skeleton of the body. It has the advantage of allowing external forces to be added to the model without changing the initial parameters, which is particularly useful for controlling exoskeletons. The algorithm has been validated through experiments consisting of moving the elbow joint, either freely or while holding a barbell with various loads. The results of our algorithm are then compared to the motions obtained by a motion capture system during the same session. The comparison demonstrates the efficiency of our algorithm in predicting and estimating arm motion using only EMG signals.
Citations: 5
Using Gradient Orientation to Improve Least Squares Line Fitting
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.38
T. Petković, S. Lončarić
Straight line fitting is an important problem in computer and robot vision. We propose a novel method for least squares line fitting that uses both the point coordinates and the local gradient orientation to fit an optimal line by minimizing the proposed algebraic distance. The proposed inclusion of gradient orientation offers several advantages: (a) one data point is sufficient for the line fit, (b) for the same number of points the fit is more precise due to inclusion of gradient orientation, and (c) outliers can be rejected based on the gradient orientation or the distance to line.
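As a rough illustration of combining point coordinates and gradient orientation in one least-squares problem (the paper's exact algebraic distance is not reproduced here; the λ weighting and the row construction below are assumptions): point rows ask that each point lie on the line n·p + c = 0, while gradient rows ask that the line normal n match the local unit gradient. The gradient rows anchor the scale of n, so a single point with its gradient already fixes the line, matching advantage (a):

```python
import numpy as np

def fit_line_with_gradients(points, gradients, lam=1.0):
    """Least-squares line fit n·p + c = 0 from edge points and their
    unit gradient vectors. Returns (n, c) with ||n|| = 1; n·p + c then
    gives the signed point-to-line distance, usable to reject outliers
    by distance or by gradient misalignment."""
    pts = np.asarray(points, float)
    grads = np.asarray(gradients, float)
    m = len(pts)
    A = np.zeros((3 * m, 3))
    b = np.zeros(3 * m)
    A[:m, :2] = pts          # point rows: n_x*x_i + n_y*y_i + c = 0
    A[:m, 2] = 1.0
    s = np.sqrt(lam)
    A[m:2 * m, 0] = s        # gradient rows: n_x ~ g_x,i
    A[2 * m:, 1] = s         #                n_y ~ g_y,i
    b[m:2 * m] = s * grads[:, 0]
    b[2 * m:] = s * grads[:, 1]
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    n, c = z[:2], z[2]
    norm = np.linalg.norm(n)
    return n / norm, c / norm

# Two points on the horizontal line y = 2, gradients along its normal.
pts = np.array([[0.0, 2.0], [4.0, 2.0]])
grads = np.array([[0.0, 1.0], [0.0, 1.0]])
n, c = fit_line_with_gradients(pts, grads)
```

A single point (1, 1) with gradient (1, 0) likewise recovers the vertical line x = 1, something a coordinates-only fit cannot do.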
Citations: 2
Towards Full Omnidirectional Depth Sensing Using Active Vision for Small Unmanned Aerial Vehicles
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.12
A. Harmat, I. Sharf
Collision avoidance for small unmanned aerial vehicles operating in a variety of environments is limited by the types of available depth sensors. Currently, there are no sensors that are lightweight, function outdoors in sunlight, and cover enough of a field of view to be useful in complex environments, although many sensors excel in one or two of these areas. We present a new depth estimation method, based on concepts from multi-view stereo and structured light methods, that uses only lightweight miniature cameras and a small laser dot matrix projector to produce measurements in the range of 1-12 meters. The field of view of the system is limited only by the number and type of cameras/projectors used, and can be fully omnidirectional if desired. The sensitivity of the system to design and calibration parameters is tested in simulation, and results from a functional prototype are presented.
Citations: 2
Optimizing Camera Perspective for Stereo Visual Odometry
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.9
Valentin Peretroukhin, Jonathan Kelly, T. Barfoot
Visual Odometry (VO) is an integral part of many navigation techniques in mobile robotics. In this work, we investigate how the orientation of the camera affects the overall position estimates recovered from stereo VO. Through simulations and experimental work, we demonstrate that this error can be significantly reduced by changing the perspective of the stereo camera in relation to the moving platform. Specifically, we show that orienting the camera at an oblique angle to the direction of travel can reduce VO error by up to 82% in simulations and up to 59% in experimental data. A variety of parameters are investigated for their effects on this trend including frequency of captured images and camera resolution.
Citations: 11
Trajectory Inference Using a Motion Sensing Network
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.29
Doug Cox, Darren Fairall, Neil MacMillan, D. Marinakis, D. Meger, Saamaan Pourtavakoli, Kyle Weston
This paper addresses the problem of inferring human trajectories through an environment using low frequency, low fidelity data from a sensor network. We present a novel "recombine" proposal for Markov Chain construction and use the new proposal to devise a probabilistic trajectory inference algorithm that generates likely trajectories given raw sensor data. We also propose a novel, low-power, long range, 900 MHz IEEE 802.15.4 compliant sensor network that makes outdoors deployment viable. Finally, we present experimental results from our deployment at a retail environment.
Citations: 1