
Latest publications from the 2014 Canadian Conference on Computer and Robot Vision

3D Scan Registration Using Curvelet Features
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.18
Siddhant Ahuja, Steven L. Waslander
Scan registration methods can often suffer from convergence and accuracy issues when the scan points are sparse or the environment violates the assumptions the methods are founded on. We propose an alternative approach to 3D scan registration using the curvelet transform, which performs multi-resolution geometric analysis to obtain a set of coefficients indexed by scale (coarsest to finest), angle and spatial position. Features are detected in the curvelet domain to take advantage of the directional selectivity of the transform. A descriptor is computed for each feature by calculating the 3D spatial histogram of the image gradients, and nearest-neighbour-based matching is used to calculate the feature correspondences. Correspondence rejection using Random Sample Consensus identifies inliers, and a locally optimal Singular Value Decomposition-based estimation of the rigid-body transformation aligns the laser scans given the re-projected correspondences in the metric space. Experimental results on a publicly available dataset from a planetary analogue facility demonstrate improved performance over existing methods.
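The final alignment stage named in the abstract (a closed-form SVD estimate of the rigid-body transform over RANSAC inliers) is standard enough to sketch. Below is a minimal, hypothetical NumPy implementation of the Kabsch-style SVD solution given already-matched 3D point pairs; it is not the authors' code, and the curvelet feature detection, descriptor and RANSAC stages are omitted.

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Least-squares rigid-body transform (R, t) such that dst ~= R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points. This is the
    standard SVD (Kabsch) solution used once inlier correspondences
    are known.
    """
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# synthetic check: transform a point cloud by a known (R, t) and recover it
rng = np.random.default_rng(0)
pts = rng.standard_normal((20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
moved = pts @ R_true.T + t_true
R, t = rigid_transform_svd(pts, moved)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

In a full pipeline this call would be wrapped in the RANSAC loop, re-estimating (R, t) from each sampled correspondence subset.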
Citations: 6
Towards Estimating Bias in Stereo Visual Odometry
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.10
Sara Farboud-Sheshdeh, T. Barfoot, R. Kwong
Stereo visual odometry (VO) is a common technique for estimating a camera's motion: features are tracked across frames and the pose change is subsequently inferred. This position estimation method can play a particularly important role in environments in which the global positioning system (GPS) is not available (e.g., Mars rovers). Recently, some authors have noticed a bias in VO position estimates that grows with distance travelled; this can cause the resulting position estimate to become highly inaccurate. The goals of this paper are (i) to investigate the nature of this bias in VO, (ii) to propose methods of estimating it, and (iii) to provide a correction that can potentially be used online. We identify two effects at play in stereo VO bias: first, the inherent bias of the maximum-likelihood estimation framework, and second, the disparity threshold used to discard far-away and erroneous stereo observations. To estimate the bias, we investigate three methods: Monte Carlo sampling, the sigma-point method (with modification), and an existing analytical method from the literature. Based on simulations, we show that our new sigma-point method achieves similar accuracy to Monte Carlo sampling, but at a fraction of the computational cost. Finally, we develop a bias correction algorithm by adapting the idea of the bootstrap from statistics, and demonstrate that it is capable of reducing approximately 95% of the bias in VO problems without incorporating other sensors into the setup.
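The Monte Carlo bias-estimation baseline mentioned in the abstract can be illustrated on a toy problem: any nonlinear function of noisy measurements acquires a bias even when the noise itself is zero-mean, which is exactly the maximum-likelihood effect the paper identifies. The sketch below is illustrative only (a range computed from a noisy 2D position, not the authors' stereo VO pipeline); all names and parameters are invented.

```python
import math
import random

def monte_carlo_bias(estimator, truth, noise_sigma, n_samples=200_000, seed=1):
    """Estimate the bias of `estimator` by averaging it over simulated
    zero-mean Gaussian measurement noise around a known ground truth."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        noisy = [v + rng.gauss(0.0, noise_sigma) for v in truth]
        total += estimator(noisy)
    return total / n_samples - estimator(truth)

# range-from-position r = sqrt(x^2 + y^2) is nonlinear, so zero-mean
# position noise still produces a positive range bias.
bias = monte_carlo_bias(lambda p: math.hypot(p[0], p[1]),
                        truth=[3.0, 4.0], noise_sigma=0.5)
print(bias)  # close to the first-order analytic value sigma**2 / (2*r) = 0.025
```

The same sampling idea scales to a full VO chain by simulating feature noise, re-running the estimator, and averaging the resulting pose errors, which is why the paper seeks cheaper alternatives such as the sigma-point method.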
Citations: 7
A Proof-of-Concept Demonstration of Visual Teach and Repeat on a Quadrocopter Using an Altitude Sensor and a Monocular Camera
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.40
Andreas Pfrunder, Angela P. Schoellig, T. Barfoot
This paper applies an existing vision-based navigation algorithm to a micro aerial vehicle (MAV). The algorithm has previously been used for long-range navigation of ground robots based on on-board 3D vision sensors such as stereo or Kinect cameras. A teach-and-repeat operational strategy enables a robot to autonomously repeat a manually taught route without relying on an external positioning system such as GPS. For MAVs, we show that a monocular downward-looking camera combined with an altitude sensor can serve as the 3D vision sensor, replacing other resource-expensive 3D vision solutions. The paper also includes a simple path-tracking controller that uses feedback from the visual and inertial sensors to guide the vehicle along a straight and level path. Preliminary experimental results demonstrate reliable, accurate and fully autonomous flight along an 8-m-long (straight and level) route, which was taught with the quadrocopter fixed to a cart. Finally, we present the successful flight of a more complex, 16-m-long route.
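As a rough illustration of the path-tracking idea (not the paper's controller, which closes the loop on visual and inertial feedback rather than assuming a known pose), the sketch below steers a simulated vehicle back onto a taught straight segment along the x-axis; the gains and time step are invented for the example.

```python
import math

def track_straight_path(pose, speed=0.5, k_heading=1.5, k_cross=0.8):
    """Minimal controller for following a taught straight segment along
    the x-axis. pose = (x, y, yaw); returns a (speed, yaw-rate) command.
    Steers to cancel cross-track error, then heading error."""
    _, y, yaw = pose
    desired_yaw = -math.atan(k_cross * y)          # point back toward the path
    err = (desired_yaw - yaw + math.pi) % (2 * math.pi) - math.pi  # wrap angle
    return speed, k_heading * err

# simulate a vehicle starting 1 m off the taught path
x, y, yaw, dt = 0.0, 1.0, 0.0, 0.05
for _ in range(400):
    v, wz = track_straight_path((x, y, yaw))
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += wz * dt
print(abs(y) < 0.05)  # True: the vehicle converges back onto the path
```

This two-gain structure (a cross-track term shaping a desired heading, a heading term tracking it) is a common minimal form for straight-segment following.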
Citations: 29
Multiple Feature Fusion in the Dempster-Shafer Framework for Multi-object Tracking
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.49
Dorra Riahi, Guillaume-Alexandre Bilodeau
This paper presents a novel multiple object tracking framework based on multiple visual cues. To build tracks by selecting the best matching score between several detections, a set of probability maps is estimated by a function integrating templates using a sparse representation and color information using locality sensitive histograms. All people detected in two consecutive frames are matched with each other based on similarity scores. This last task is performed by comparing two models (a sparse appearance model and a color model). A score matrix is then obtained for each model. The scores are combined using Dempster-Shafer's combination rule. To obtain an optimal selection of the best candidate, a data association step is performed using a greedy search algorithm. We validated our tracking algorithm on challenging publicly available video sequences and show that we outperform recent state-of-the-art methods.
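Dempster's rule of combination itself is easy to state in code. The sketch below fuses two hypothetical mass functions (standing in for the color and sparse-appearance similarity scores) over a two-hypothesis frame; the focal-element names and mass values are invented for illustration and are not taken from the paper.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the same
    frame of discernment. Focal elements are frozensets; mass assigned to
    the empty intersection (the conflict) is renormalised away."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    norm = 1.0 - conflict
    return {s: w / norm for s, w in combined.items()}

# two cues that both lean toward the hypothesis "match"
match, nomatch = frozenset({"match"}), frozenset({"no-match"})
theta = match | nomatch                      # whole frame = total ignorance
m_color  = {match: 0.6, theta: 0.4}
m_sparse = {match: 0.7, nomatch: 0.1, theta: 0.2}
fused = dempster_combine(m_color, m_sparse)
print(fused[match])  # ~0.872: fused belief exceeds either cue alone
```

Agreement between the cues concentrates mass on "match", while the conflicting 0.6 x 0.1 product is discarded by the normalisation.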
Citations: 10
Grid Seams: A Fast Superpixel Algorithm for Real-Time Applications
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.25
P. Siva, A. Wong
Superpixels are a compact and simple representation of images that has been used in many computer vision applications such as object localization, segmentation and depth estimation. While useful as compact representations of images, the time complexity of superpixel algorithms has prevented their use in real-time applications like video processing. Fast superpixel algorithms have been proposed recently, but they lack either regular structure or the accuracy required to represent image structure. We present Grid Seams, a novel seam-carving approach to superpixel generation that preserves image structure information while enforcing a global spatial constraint in the form of a grid structure cost. Using a standard dataset, we show that our approach is faster than existing approaches and can achieve accuracy close to state-of-the-art superpixel generation algorithms.
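Although the paper's grid-constrained formulation is more involved, the underlying seam-carving primitive (a dynamic-programming search for a minimum-energy vertical seam, along which superpixel boundaries can be placed) can be sketched as follows; the toy energy grid is invented for the example.

```python
def min_vertical_seam(energy):
    """Dynamic-programming search for the minimum-energy vertical seam.
    energy: 2D list (rows x cols). Returns one column index per row,
    with consecutive rows differing by at most one column."""
    rows, cols = len(energy), len(energy[0])
    cost = [energy[0][:]]                     # cumulative cost table
    for r in range(1, rows):
        prev = cost[-1]
        row = []
        for c in range(cols):
            best = min(prev[max(c - 1, 0):min(c + 2, cols)])
            row.append(energy[r][c] + best)
        cost.append(row)
    # backtrack from the cheapest bottom cell
    seam = [min(range(cols), key=cost[-1].__getitem__)]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo = max(c - 1, 0)
        seam.append(min(range(lo, min(c + 2, cols)), key=cost[r].__getitem__))
    return seam[::-1]

energy = [[9, 1, 9],
          [9, 9, 1],
          [9, 1, 9]]
print(min_vertical_seam(energy))  # [1, 2, 1]: threads through the low-energy cells
```

A grid structure cost of the kind the paper describes would be added to `energy` to penalise seams that wander far from their nominal grid line.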
Citations: 12
Projected Barzilai-Borwein Method with Infeasible Iterates for Nonnegative Least-Squares Image Deblurring
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.33
Kathleen Fraser, D. Arnold, G. Dellaire
We present a non-monotonic gradient descent algorithm with infeasible iterates for the nonnegatively constrained least-squares deblurring of images. The skewness of the intensity values of the deblurred image is used to establish a criterion for when to enforce the nonnegativity constraints. On several test images, the approach is observed either to perform comparably to or to outperform a non-monotonic gradient descent approach that does not use infeasible iterates, as well as the gradient projected conjugate gradients algorithm. Our approach is distinguished from the latter by lower memory requirements, making it suitable for use with the large, three-dimensional images common in medical imaging.
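A plain projected Barzilai-Borwein iteration for nonnegative least squares can be sketched as follows. Unlike the paper's method, which allows infeasible iterates and uses a skewness criterion to decide when to enforce the constraints, this toy version simply projects at every step; the tiny test problem is invented for the check.

```python
import numpy as np

def projected_bb_nnls(A, b, iters=100):
    """Projected Barzilai-Borwein sketch for min ||Ax - b||^2 s.t. x >= 0.
    Illustrative only: projects onto the nonnegative orthant every step,
    with the BB1 step length computed from successive iterates/gradients."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    alpha = 1e-3                                # small initial step
    for _ in range(iters):
        x_new = np.maximum(x - alpha * g, 0.0)  # gradient step + projection
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        sy = float(s @ y)
        if sy > 1e-12:
            alpha = float(s @ s) / sy           # BB1 step length
        x, g = x_new, g_new
    return x

# tiny problem whose unconstrained solution (1, -1) violates x >= 0;
# the constrained optimum is (0.5, 0)
A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
b = np.array([1.0, 0.0, -1.0])
x = projected_bb_nnls(A, b)
print(x)  # close to the true NNLS solution (0.5, 0)
```

In deblurring, A would be the (typically huge, structured) blur operator, which is why the low per-iteration memory footprint the abstract mentions matters.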
Citations: 1
Outdoor Ice Accretion Estimation of Wind Turbine Blades Using Computer Vision
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.41
M. Akhloufi, Nassim Benmesbah
In this paper, we present a new computer-vision-based methodology to address the problem of remote ice detection and measurement on wind turbines operating in cold climates. Icing has a significant impact: it reduces productivity and causes premature wear, malfunctions and damage that are hard to track, leaving manufacturers and operators facing unpredictable losses that can reach millions of dollars. Algorithms were developed and tested on images of wind turbines acquired with a digital camera in outdoor conditions. Experiments show interesting and promising results.
Citations: 13
Drums: A Middleware-Aware Distributed Robot Monitoring System
Pub Date : 2014-05-06 DOI: 10.1145/2843966.2843974
Valiallah Monajjemi, Jens Wawerla, R. Vaughan
We introduce Drums, a new tool for monitoring and debugging distributed robot systems, and a complement to robot middleware systems. Drums provides online time-series monitoring of the underlying resources that are partially abstracted away by middleware like ROS. Interfacing with the middleware, Drums provides de-abstraction and de-multiplexing of middleware services to reveal the system-level interactions of your controller code, the middleware, the OS and the robots' environment. We show worked examples of Drums' utility for debugging realistic problems, and propose it as a tool for quality-of-service monitoring and introspection in robust autonomous systems.
Citations: 12
Camera Matrix Calibration Using Circular Control Points and Separate Correction of the Geometric Distortion Field
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.34
Victoria Rudakova, P. Monasse
We achieve precise camera calibration with circular control points by, first, separating the lens-distortion parameters from the other camera parameters and computing the distortion field in advance using a calibration harp. Second, to compensate for the perspective bias that tends to occur when using a circular pattern, we incorporate a conic affine transformation into the minimized error when estimating the homography, leaving all other calibration steps as they appear in the literature. Such an error function compensates for the perspective bias. Combined with precise keypoint detection, the approach is shown to be more stable than current state-of-the-art global calibration methods.
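The homography-estimation step the paper modifies can be illustrated with the standard Direct Linear Transform (DLT), which is what the conic-aware error term refines; the synthetic points and `H_true` below are invented purely for the self-check, and no lens distortion or circular-target bias is modelled here.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform estimate of the 3x3 homography H mapping
    planar points src -> dst (both (N, 2), N >= 4). Each correspondence
    contributes two rows of the homogeneous system A h = 0; the solution
    is the right singular vector with the smallest singular value."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                 # fix the arbitrary scale

# synthetic check: project points through a known homography and recover it
H_true = np.array([[1.2, 0.1, 5.0],
                   [-0.2, 0.9, 3.0],
                   [1e-3, 2e-3, 1.0]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.2]], dtype=float)
pts_h = np.c_[src, np.ones(len(src))] @ H_true.T
dst = pts_h[:, :2] / pts_h[:, 2:]      # perspective divide
H = estimate_homography(src, dst)
print(np.allclose(H, H_true, atol=1e-6))  # True
```

With circular control points, the detected ellipse centers do not project exactly to the circle centers, which is the perspective bias the paper's conic affine transformation corrects inside this minimization.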
Citations: 11
Building Better Formlet Codes for Planar Shape
Pub Date : 2014-05-06 DOI: 10.1109/CRV.2014.19
A. Yakubovich, J. Elder
The GRID/formlet representation of planar shape has a number of nice properties [4], [10], [3], but it also has limitations: it is slow to converge for shapes with elongated parts, and it can be sensitive to parameterization and grossly ill-conditioned. Here we describe a number of innovations to the GRID/formlet model that address these problems: 1) by generalizing the formlet basis to include oriented deformations, we achieve faster convergence for elongated parts; 2) by introducing a modest regularizing term that penalizes the total energy of each deformation, we limit redundancy in the formlet parameters and improve the identifiability of the model; 3) by applying a recent contour remapping method [9], we eliminate problems due to drift of the model parameterization during matching pursuit. These innovations are shown both to speed convergence and to improve performance on a shape completion task.
Citations: 2