
Latest publications from the 2013 IEEE Workshop on Robot Vision (WORV)

Spatial structure analysis for autonomous robotic vision systems
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521933
Kai Zhou, K. Varadarajan, M. Zillich, M. Vincze
Analysis of spatial structures in robotic environments, especially structures such as planar surfaces, has become a fundamental component of diverse robot vision systems since the introduction of low-cost RGB-D cameras, which are now widely mounted on indoor robots. These cameras are capable of providing high-quality 3D reconstruction in real time. In order to estimate multiple planar structures without prior knowledge, this paper utilizes the Jensen-Shannon Divergence (JSD), a similarity measure, to represent pairwise relationships between data. This representation encompasses the pairwise geometrical relations between data as well as information about whether a pairwise relationship exists within a model's inlier data set. Tests on datasets composed of noisy inliers and a large percentage of outliers demonstrate that the proposed solution can efficiently estimate multiple models without prior information. Superior performance in synthetic experiments and in practical tests with a robot vision system also demonstrates the validity of the proposed approach.
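The pairwise similarity measure named in the abstract is the Jensen-Shannon divergence. A minimal sketch for two discrete distributions follows; the function name and the base-2 logarithm are our choices for illustration, not details taken from the paper:

```python
import numpy as np

def jensen_shannon_divergence(p, q):
    """JSD between two discrete distributions; symmetric and, with
    base-2 logs, bounded in [0, 1]. Inputs are normalised defensively."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # convention: 0 * log 0 = 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Disjoint distributions give the maximum value 1 and identical ones give 0, which is what makes JSD usable as a bounded pairwise similarity score.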
Citations: 0
Efficient 7D aerial pose estimation
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521919
B. Grelsson, M. Felsberg, Folke Isaksson
A method for online global pose estimation of aerial images by alignment with a georeferenced 3D model is presented. Motion stereo is used to reconstruct a dense local height patch from an image pair. The global pose is inferred from the 3D transform between the local height patch and the model. For efficiency, the sought 3D similarity transform is found by least-squares minimizations of three 2D subproblems. The method does not require any landmarks or reference points in the 3D model, but an approximate initialization of the global pose, in our case provided by onboard navigation sensors, is assumed. Real aerial images from helicopter and aircraft flights are used to evaluate the method. The results show that the accuracy of the position and orientation estimates is significantly improved compared to the initialization and our method is more robust than competing methods on similar datasets. The proposed matching error computed between the transformed patch and the map clearly indicates whether a reliable pose estimate has been obtained.
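The central step described above is a least-squares 3D similarity transform between the local height patch and the model. The paper solves it efficiently via three 2D subproblems; the sketch below instead uses the well-known closed-form (Umeyama-style) solution as a generic illustration, not the authors' algorithm:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale s, rotation R, translation t with
    dst ~= s * R @ src + t, for (N, 3) point arrays (Umeyama-style)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1  # guard against reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / xs.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Given noisy correspondences, the same closed form yields the best-fit transform in the least-squares sense, which is why it is a common reference point for such alignment problems.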
Citations: 11
Autonomous robot exploration and cognitive map building in unknown environments using omnidirectional visual information only
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521937
Romain Marie, O. Labbani-Igbida, Pauline Merveilleux, E. Mouaddib
This paper addresses the issues of autonomous exploration and topological mapping using monocular catadioptric vision in fully unknown environments. We propose an incremental process that allows the robot to extract and combine multiple spatial representations built upon its visual information only: free space detection, local space topology extraction, place signature construction, and topological mapping. The efficiency of the proposed system is evaluated in real-world experiments. It opens new perspectives for vision-based autonomous exploration, which is still an open problem in robotics.
Citations: 6
Fast iterative five point relative pose estimation
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521915
J. Hedborg, M. Felsberg
Robust estimation of the relative pose between two cameras is a fundamental part of Structure and Motion methods. For calibrated cameras, the five point method together with a robust estimator such as RANSAC gives the best result in most cases. The current state-of-the-art method for solving the relative pose problem from five points is due to Nistér [9], because it is faster than other methods, and in the RANSAC scheme one can improve precision by increasing the number of iterations. In this paper, we propose a new iterative method based on Powell's Dog Leg algorithm. The new method has the same precision and is approximately twice as fast as Nistér's algorithm. The proposed method is easily extended to more than five points while retaining an efficient error metric, which also makes it very suitable as a refinement step. The proposed algorithm is systematically evaluated on three types of datasets with known ground truth.
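The RANSAC scheme referred to above is a standard hypothesize-and-verify loop: sample a minimal set, fit a model, count inliers, keep the best. A generic sketch follows, with a toy model standing in for the five-point solver (which is not reproduced here):

```python
import numpy as np

def ransac(points, fit, residuals, sample_size, n_iters=100,
           thresh=0.1, rng=None):
    """Generic RANSAC skeleton. In relative-pose estimation, a minimal
    five-point solver would play the role of `fit`."""
    rng = np.random.default_rng(rng)
    best_model, best_inliers = None, np.zeros(len(points), bool)
    for _ in range(n_iters):
        idx = rng.choice(len(points), sample_size, replace=False)
        model = fit(points[idx])                 # hypothesize from minimal set
        inliers = residuals(model, points) < thresh  # verify against all data
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

The abstract's point about precision versus iteration count falls out directly: more iterations raise the probability of drawing at least one all-inlier minimal sample, so a faster minimal solver buys accuracy within a fixed time budget.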
Citations: 11
Subspace and motion segmentation via local subspace estimation
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521909
A. Sekmen, A. Aldroubi
Subspace segmentation and clustering of high dimensional data drawn from a union of subspaces are important in practical robot vision applications, such as smart airborne video surveillance. This paper presents a clustering algorithm for high dimensional data that comes from a union of lower dimensional subspaces of equal and known dimensions. Rigid motion segmentation is a special case of this more general subspace segmentation problem. The algorithm matches a local subspace to each trajectory vector and estimates the relationships between trajectories. It is reliable in the presence of noise, and it has been experimentally verified on the Hopkins 155 Dataset.
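As a rough illustration of the local-subspace idea (not the authors' exact algorithm), one can fit a subspace through the origin to a point's nearest neighbours via SVD and then score other points by their residual distance to that subspace:

```python
import numpy as np

def local_subspace_basis(X, i, k, d):
    """Orthonormal basis of the d-dimensional subspace (through the
    origin, as in the union-of-subspaces model) best fitting the
    k nearest neighbours of X[i]. Rows of X are data points."""
    dists = np.linalg.norm(X - X[i], axis=1)
    nbrs = X[np.argsort(dists)[:k]]          # includes X[i] itself
    _, _, Vt = np.linalg.svd(nbrs, full_matrices=False)
    return Vt[:d].T                           # shape (ambient_dim, d)

def subspace_distance(x, B):
    """Residual norm of x after projecting onto span(B)."""
    return np.linalg.norm(x - B @ (B.T @ x))
```

Points drawn from the same underlying subspace have near-zero residuals against each other's local bases, while points from a different subspace do not, which is the signal a clustering step can exploit.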
Citations: 2
Quasi-perspective stereo-motion for 3D reconstruction
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521944
Mu Fang, R. Chung
One important motivation for integrating stereo vision and visual motion into the so-called stereo-motion cue is to make the two original vision cues complementary, in the sense that (i) the ease of establishing motion correspondences and (ii) the accuracy of 3D reconstruction under stereo vision can be put together to bypass or overcome (i) the generally difficult stereo correspondence problem and (ii) the limited reconstruction accuracy of the motion cue. The objective is to allow a relatively short stereo pair of videos to be adequate for recovering accurate 3D information. A previous work addressed this issue by using the easily acquirable motion correspondences to infer the stereo correspondences, yet its inference mechanism requires assuming an affine projection model for the cameras. This work extends the affine camera assumption to quasi-perspective projection models of cameras. A novel stereo-motion model under quasi-perspective projection is proposed, and a simple and fast 3D reconstruction algorithm is given. Only a small number of stereo correspondences are required for reconstruction. Experimental results on real image data demonstrate the effectiveness of the mechanism.
Citations: 0
Visual servo control of electromagnetic actuation for a family of microrobot devices
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521940
J. Piepmeier, S. Firebaugh
Microrobots have a number of potential applications in micromanipulation and assembly, but also pose challenges in power and control. This paper describes the control system for magnetically actuated microrobots operating at the interface between two immiscible fluids. The microrobots are 20 μm thick and approximately 100-200 μm in lateral dimension. Several different robot shapes are investigated. The robots and fluid are in a 20 × 20 mm vial placed at the center of four electromagnets. Pulse-width modulation of the electromagnet currents is used to control robot speed and direction, and a linear relationship between robot speed and duty cycle was observed, although the slope of that dependence varied with robot type and magnet. A proportional controller has been implemented and characterized. The steady-state error with this controller ranged from 6.4 to 12.8 pixels, or 90-180 μm.
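The proportional control law described, mapping a pixel error to a PWM duty cycle with saturation, can be sketched as below; the gain value is an assumption for illustration, not a figure from the paper:

```python
def proportional_duty_cycle(error_px, kp=0.005, max_duty=1.0):
    """Map a visual-servo pixel error to a PWM duty cycle, with
    saturation. Sign selects direction; magnitude sets speed.
    kp is a hypothetical gain, chosen only for illustration."""
    duty = kp * error_px
    return max(-max_duty, min(max_duty, duty))
```

Because the paper reports a roughly linear speed-versus-duty-cycle relationship, a proportional law like this gives an approximately proportional speed response until the duty cycle saturates.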
Citations: 2
Automated tuning of the nonlinear complementary filter for an Attitude Heading Reference observer
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521934
O. de Silva, G. Mann, R. Gosine
In this paper we detail a numerical optimization method for automated tuning of a nonlinear filter used in Attitude Heading Reference Systems (AHRS). First, the Levenberg-Marquardt method is used for nonlinear parameter estimation of the observer model. Two approaches are described: an Extended Kalman Filter (EKF) based supervised implementation and an unsupervised, error-minimization based implementation. The quaternion formulation is used in the development in order to have a minimal global parametrization of the rotation group. The two methods are then compared using both simulated and experimental data taken from a commercial Inertial Measurement Unit (IMU) used in the autopilot system of an unmanned aerial vehicle. The results reveal that the proposed EKF based supervised implementation is faster and also more robust against different initial conditions.
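For intuition about what is being tuned: a complementary filter blends integrated gyro rate (high-pass) with an attitude reference derived from the accelerometer (low-pass). The scalar sketch below shows the structure; the blending gain plays the role of the parameters the paper tunes automatically, while the actual AHRS observer is quaternion-based and nonlinear:

```python
def complementary_update(angle, gyro_rate, accel_angle, dt, k=0.98):
    """One step of a scalar complementary filter: trust the integrated
    gyro at short time scales, the accelerometer-derived angle at long
    ones. The gain k is the kind of parameter tuned by optimization."""
    return k * (angle + gyro_rate * dt) + (1.0 - k) * accel_angle
```

Tuning k trades gyro drift rejection against accelerometer noise rejection, which is exactly the kind of trade-off an automated least-squares fit over recorded IMU data can resolve.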
Citations: 1
Rapid explorative direct inverse kinematics learning of relevant locations for active vision
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521932
Kristoffer Öfjäll, M. Felsberg
An online method for rapidly learning the inverse kinematics of a redundant robotic arm is presented, addressing the special requirements of active vision for visual inspection tasks. The system is initialized with a model covering a small area around the starting position, which is then incrementally extended by exploration. The number of motions during this process is minimized by exploring only the configurations required for successful completion of the task at hand. The explored area is automatically extended online and on demand. To achieve this, state-of-the-art methods for learning and numerical optimization are combined in a tight implementation in which parts of the learned model, the Jacobians, are used during optimization, resulting in significant synergy effects. In a series of standard experiments, we show that the integrated method performs better than using both methods sequentially.
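The Jacobians mentioned above are the standard ingredient of iterative inverse kinematics. A hedged sketch of one damped-least-squares update follows (a generic IK step, not the authors' learned-model machinery):

```python
import numpy as np

def ik_step(q, jacobian, x_current, x_target, damping=1e-2):
    """One damped-least-squares IK update toward x_target.
    `jacobian(q)` returns the (task_dim, joint_dim) Jacobian, i.e. the
    quantity a learned kinematic model can supply directly."""
    J = jacobian(q)
    e = x_target - x_current
    dq = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(len(e)), e)
    return q + dq
```

The damping term keeps the step bounded near singular configurations, which matters when the model is only locally valid, as in the incremental-exploration setting described above.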
Citations: 3
A compositional approach for 3D arm-hand action recognition
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521926
I. Gori, S. Fanello, F. Odone, G. Metta
In this paper we propose a fast and reliable vision-based framework for 3D arm-hand action modelling, learning and recognition in human-robot interaction scenarios. The architecture consists of a compositional model that divides the arm-hand action recognition problem into three levels. The bottom level is based on a simple but sufficiently accurate algorithm for the computation of the scene flow. The middle level serves to classify action primitives through descriptors obtained from 3D Histogram of Flow (3D-HOF); we further apply a sparse coding (SC) algorithm to deal with noise. Action Primitives are then modelled and classified by linear Support Vector Machines (SVMs), and we propose an on-line algorithm to cope with the real-time recognition of primitive sequences. The top level system synthesises combinations of primitives by means of a syntactic approach. In summary the main contribution of the paper is an incremental method for 3D arm-hand behaviour modelling and recognition, fully implemented and tested on the iCub robot, allowing it to learn new actions after a single demonstration.
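A simplified 2D analogue of the histogram-of-flow descriptor used in the middle level can be sketched as below; the paper's 3D-HOF operates on scene flow, and the bin count and magnitude weighting here are illustrative assumptions:

```python
import numpy as np

def flow_direction_histogram(flow, n_bins=8):
    """2D analogue of a histogram-of-flow descriptor: bin flow vectors
    by direction, weight by magnitude, L1-normalise. flow: (N, 2)."""
    ang = np.arctan2(flow[:, 1], flow[:, 0])            # in [-pi, pi)
    mag = np.linalg.norm(flow, axis=1)
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins, weights=mag, minlength=n_bins)
    return hist / max(hist.sum(), 1e-12)
```

Normalised direction histograms of this kind are compact, per-frame descriptors that a linear SVM can classify into action primitives, which is the role 3D-HOF plays in the architecture described above.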
Citations: 10