
Latest publications: 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)

Navigation information fusion for an AUV in rivers
Jinjun Rao, Jinbo Chen, Wei Ding, Zhenbang Gong
Autonomous Underwater Vehicles (AUVs) have enormous application potential, and real-time, accurate position and attitude information is essential for them. In order to obtain comprehensive and accurate position and attitude data for AUVs, and focusing on a common low-cost sensor configuration, the data fusion problem of a SINS/USBL/AHRS combination is presented and studied in this paper. First, the error expressions of the MEMS sensors are derived, and the data fusion model for Kalman filter fusion algorithms is presented. The method is validated using a data set gathered during a Huangpu River inspection task. The comparison between the original data and the fused data shows that the SINS/USBL/AHRS data fusion system markedly improves the accuracy of position and attitude.
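A minimal sketch of the loosely coupled fusion idea the abstract describes: a SINS dead-reckoning prediction corrected by USBL position fixes through a linear Kalman filter. The state layout, noise values, and function names below are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of a loosely coupled SINS/USBL position fusion step with a
# linear Kalman filter; state x = [px, py, vx, vy]. All noise values are
# illustrative assumptions, not parameters from the paper.
import numpy as np

def kf_predict(x, P, dt, q=0.05):
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    Q = q * np.eye(4)                      # process noise (SINS drift), assumed
    return F @ x, F @ P @ F.T + Q

def kf_update_usbl(x, P, z, r=1.0):
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # USBL observes position only
    R = r * np.eye(2)                            # USBL measurement noise, assumed
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
x, P = kf_predict(x, P, dt=0.1)
x, P = kf_update_usbl(x, P, z=np.array([1.2, -0.4]))
```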
{"title":"Navigation information fusion for an AUV in rivers","authors":"Jinjun Rao, Jinbo Chen, Wei Ding, Zhenbang Gong","doi":"10.1109/MFI.2012.6343038","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343038","url":null,"abstract":"Autonomous Underwater Vehicles (AUVs) present an enormous application potential, and the real time accurate position and attitude information is important for AUVs. In order to obtain comprehensive and accurate position and attitude data of AUVs, focusing on the common low cost sensors configuration, the data fusion problem of SINS/USBL/AHRS combination is presented and studied in this paper. Firstly, the error expressions of MEMS are researched and derived, and the data fusion model for Kalman Filter fusion algorithms is presented. The method is validated using a data set gathered for a Huangpu river inspection task. The comparison between original data and fusional data shows that SINS/USBL/AHRS data fusion system can promote accuracy of position and attitude markedly.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121181435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Fast and robust detection of runway incursions using localized sensors
J. Schönefeld, D. Möller
For over a decade, avoiding runway incursions (RI), events in which two or more vehicles create a conflicting situation by using the same runway, has been a top-ten priority of the National Transportation Safety Board (NTSB). Only the recent technological response, in the form of area-wide deployment of Runway Incursion Prevention and Alerting Systems (RIPAS), improved the situation in the USA, and safety seems to have increased significantly. In particular, the Runway Status Lights (RWLS) and the Final Approach Runway Occupancy Signal (FAROS) show a statistically measurable impact. However, in some of the most dangerous RI scenarios, the surveillance providing the input for the automatic control of the signals reaches its limitations. The surveillance accuracy needed to deal with such scenarios could be achieved by localized sensors. Therefore, this work provides a comparative analysis of surveillance performance in a very dangerous RI scenario based on the experimental RIPAS design XL-RIAS.
{"title":"Fast and robust detection of runway incursions using localized sensors","authors":"J. Schönefeld, D. Möller","doi":"10.1109/MFI.2012.6343034","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343034","url":null,"abstract":"For over a decade avoiding runway incursions (RI), events where two or more vehicles create a conflicting situation by using the same runway, have been a top ten priority of the National Transportation Safety Board (NTSB). Only the recent technological response in form of area wide deployment of Runway Incursion Prevention and Alerting Systems (RIPAS) improved the situation in the USA and safety seems to have increased significantly. Particularly the Runway Status Lights (RWLS) and the Final Approach Runway Occupancy Signal (FAROS) show a statistically measurable impact. However, in some of the most dangerous RI scenarios the surveillance providing the input for the automatic control of the signals reaches its limitations. The necessary surveillance accuracy needed to deal with such scenarios could be achieved by localized sensors. Therefore this work provides a comparative analysis of surveillance performance in a very dangerous RI scenario based on the experimental RIPAS design XL-RIAS.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122723846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Real-time pose estimation with RGB-D camera
Ivan Dryanovski, William Morris, R. Kaushik, Jizhong Xiao
An RGB-D camera is a sensor which outputs the distances to objects in a scene in addition to their RGB color. Recent technological advances in this area have introduced affordable devices in the robotics community. In this paper, we present a real-time feature extraction and pose estimation technique using the data from a single RGB-D camera. First, a set of edge features are computed from the depth and color images. The down-sampled point clouds consisting of the feature points are aligned using the Iterative Closest Point algorithm in 3D space. New features are aligned against a model consisting of previous features from a limited number of past scans. The system achieves a 10 Hz update rate running on a desktop CPU, using VGA resolution RGB-D scans.
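A minimal point-to-point ICP sketch in Python, showing the alignment step the abstract relies on for the down-sampled feature clouds; it is the textbook algorithm, not the authors' edge-feature pipeline.

```python
# Minimal point-to-point ICP sketch (the general algorithm, not the paper's
# exact edge-feature pipeline). src and dst are (N,3)/(M,3) numpy arrays.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=20):
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)              # nearest-neighbour correspondences
        matched = dst[idx]
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)  # cross-covariance of centred sets
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:          # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step          # move source toward target
        R, t = R_step @ R, R_step @ t + t_step # accumulate total transform
    return R, t
```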
{"title":"Real-time pose estimation with RGB-D camera","authors":"Ivan Dryanovski, William Morris, R. Kaushik, Jizhong Xiao","doi":"10.1109/MFI.2012.6343046","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343046","url":null,"abstract":"An RGB-D camera is a sensor which outputs the distances to objects in a scene in addition to their RGB color. Recent technological advances in this area have introduced affordable devices in the robotics community. In this paper, we present a real-time feature extraction and pose estimation technique using the data from a single RGB-D camera. First, a set of edge features are computed from the depth and color images. The down-sampled point clouds consisting of the feature points are aligned using the Iterative Closest Point algorithm in 3D space. New features are aligned against a model consisting of previous features from a limited number of past scans. The system achieves a 10 Hz update rate running on a desktop CPU, using VGA resolution RGB-D scans.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122977426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Utilizing color information in 3D scan-registration using planar-patches matching
K. Pathak, N. Vaskevicius, Francisc Bungiu, A. Birk
In previous work, the authors presented a 3D scan-registration algorithm based on minimizing the uncertainty-volume of the estimated inter-scan transform, computed by matching planar-patches extracted from a pair of 3D range-images. The method was shown to have a larger region of convergence than points-based methods like ICP. With the advent of newer sensors, color-information is now also available in addition to the depth-information in range-images. In this work, we show how this information can be exploited to make our algorithm computationally more efficient. The results are presented for two commercially available sensors providing color: the high-resolution, large field-of-view (FOV), slow scanning Faro sensor, and the low-resolution, small FOV, faster Kinect sensor.
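A hedged sketch of one way color information can reduce the matching cost: prune candidate planar-patch correspondences whose mean colors disagree before running the geometric registration. The patch representation and threshold are assumptions for illustration only, not the authors' exact criterion.

```python
# Hedged sketch: prune candidate planar-patch correspondences by comparing a
# per-patch mean colour before the geometric (uncertainty-volume) matching.
import numpy as np

def color_compatible(patch_a, patch_b, thresh=0.15):
    # patch_x["rgb"]: mean RGB of the points belonging to the patch, in [0, 1]
    return np.linalg.norm(patch_a["rgb"] - patch_b["rgb"]) < thresh

def candidate_pairs(patches_1, patches_2):
    # keep only colour-compatible pairs for the expensive geometric test
    return [(i, j)
            for i, a in enumerate(patches_1)
            for j, b in enumerate(patches_2)
            if color_compatible(a, b)]
```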
{"title":"Utilizing color information in 3D scan-registration using planar-patches matching","authors":"K. Pathak, N. Vaskevicius, Francisc Bungiu, A. Birk","doi":"10.1109/MFI.2012.6343047","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343047","url":null,"abstract":"In previous work, the authors presented a 3D scan-registration algorithm based on minimizing the uncertainty-volume of the estimated inter-scan transform, computed by matching planar-patches extracted from a pair of 3D range-images. The method was shown to have a larger region of convergence than points-based methods like ICP. With the advent of newer sensors, color-information is now also available in addition to the depth-information in range-images. In this work, we show how this information can be exploited to make our algorithm computationally more efficient. The results are presented for two commercially available sensors providing color: the high-resolution, large field-of-view (FOV), slow scanning Faro sensor, and the low-resolution, small FOV, faster Kinect sensor.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"30 8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123587215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Applied sensor fault detection and validation using transposed input data PCA and ANNs
Yu Zhang, C. Bingham, M. Gallimore, Zhijing Yang, Jun Chen
The paper presents an efficient approach for applied sensor fault detection based on an integration of principal component analysis (PCA) and artificial neural networks (ANNs). Specifically, PCA-based y-indices are introduced to measure the differences between groups of sensor readings in a rolling time window, and the relative merits of three types of ANNs are compared for operation classification. Unlike previously reported PCA techniques (commonly based on the squared prediction error (SPE)), which can readily report a sensor fault incorrectly when the system data are subject to bias or drift as a result of power or loading changes, it is shown here that the proposed methodologies are capable of detecting and identifying the emergence of sensor faults during transient conditions. The efficacy and capability of the proposed approach are demonstrated through its application to measurement data taken from an industrial generator.
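A hedged sketch of the transposed-window PCA idea: each sensor's recent readings form one sample, and a sensor whose principal-component score departs from the group is flagged. The paper's y-index definition may differ; the code below only illustrates the general mechanism.

```python
# Hedged sketch of a PCA-based per-sensor index over a rolling window, with the
# window transposed so each sensor's recent readings form one sample.
import numpy as np

def sensor_indices(window):
    # window: (n_samples, n_sensors) array of readings in the rolling window
    X = window.T                              # transpose: one row per sensor
    X = X - X.mean(axis=0)                    # remove the across-sensor mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[0]                        # first-PC score per sensor
    # a sensor far from the group median in PC space is flagged as suspect
    return np.abs(scores - np.median(scores))

window = np.random.randn(200, 8)
window[:, 3] += 5.0                           # inject a bias fault on sensor 3
print(sensor_indices(window).argmax())        # -> 3
```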
{"title":"Applied sensor fault detection and validation using transposed input data PCA and ANNs","authors":"Yu Zhang, C. Bingham, M. Gallimore, Zhijing Yang, Jun Chen","doi":"10.1109/MFI.2012.6343055","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343055","url":null,"abstract":"The paper presents an efficient approach for applied sensor fault detection based on an integration of principal component analysis (PCA) and artificial neural networks (ANNs). Specifically, PCA-based y-indices are introduced to measure the differences between groups of sensor readings in a time rolling window, and the relative merits of three types of ANNs are compared for operation classification. Unlike previously reported PCA techniques (commonly based on squared prediction error (SPE)) which can readily detect a sensor fault wrongly when the system data is subject bias or drifting as a result of power or loading changes, here, it is shown that the proposed methodologies are capable of detecting and identifying the emergence of sensor faults during transient conditions. The efficacy and capability of the proposed approach is demonstrated through their application on measurement data taken from an industrial generator.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132528966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Extrinsic calibration between a stereoscopic system and a LIDAR with sensor noise models
You Li, Y. Ruichek, C. Cappelle
Visual sensors and depth sensors, such as cameras and LIDAR (Light Detection and Ranging), are more and more often used together in current perception systems of intelligent vehicles. Fusing information obtained separately from these heterogeneous sensors always requires extrinsic calibration of the vision sensors and LIDARs. In this paper, we propose an optimal extrinsic calibration algorithm between a binocular stereo vision system and a 2D LIDAR. The extrinsic calibration problem is solved by 3D reconstruction of a chessboard and geometric constraints between the views from the stereovision system and the LIDAR. The proposed approach takes sensor noise models into account so that it provides optimal results under Mahalanobis distance constraints. Experiments based on both computer simulation and real data sets are presented and analyzed to evaluate the performance of the calibration method. A comparison with a popular camera/LIDAR calibration method is also provided to show the benefits of our method.
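A hedged sketch of the kind of cost such a calibration minimizes: a noise-normalized (Mahalanobis-style) point-to-plane residual between the LIDAR's chessboard hits and the plane reconstructed by the stereo rig, optimized over the 6-DoF transform. The parameterization and names are illustrative, not the paper's exact formulation.

```python
# Hedged sketch of the core cost: find the LIDAR->camera transform (R, t) that
# places the 2D LIDAR hits of the calibration chessboard on the plane
# reconstructed by the stereo rig, with residuals normalised by the noise scale.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, lidar_pts, plane_n, plane_d, sigma):
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    pts_cam = lidar_pts @ R.T + t                 # LIDAR points in camera frame
    dist = pts_cam @ plane_n + plane_d            # signed point-to-plane distance
    return dist / sigma                           # noise-normalised residual

# lidar_pts: (N,3) chessboard hits (z=0 for a 2D scanner); plane_n, plane_d from stereo
lidar_pts = np.random.rand(30, 3) * [1.0, 1.0, 0.0]
plane_n, plane_d, sigma = np.array([0.0, 0.0, 1.0]), -0.5, 0.01
sol = least_squares(residuals, x0=np.zeros(6),
                    args=(lidar_pts, plane_n, plane_d, sigma))
```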
{"title":"Extrinsic calibration between a stereoscopic system and a LIDAR with sensor noise models","authors":"You Li, Y. Ruichek, C. Cappelle","doi":"10.1109/MFI.2012.6343010","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343010","url":null,"abstract":"Visual sensors and depth sensors, such as camera and LIDAR (Light Detection and Ranging) are more and more used together in current perception systems of intelligent vehicles. Fusing information obtained separately from these heterogeneous sensors always requires extrinsic calibration of vision sensors and LIDARs. In this paper, we propose an optimal extrinsic calibration algorithm between a binocular stereo vision system and a 2D LIDAR. The extrinsic calibration problem is solved by 3D reconstruction of a chessboard and geometric constraints between the views from the stereovision system and the LIDAR. The proposed approach takes sensor noise models into account that it provides optimal results under Mahalanobis distance constraints. Experiments based on both computer simulation and real data sets are presented and analyzed to evaluate the performance of the calibration method. A comparison with a popular camera/LIDAR calibration method is also proposed to show the benefits of our method.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121759339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Information fusion in multi-task Gaussian process models
Shrihari Vasudevan, A. Melkumyan, S. Scheding
This paper evaluates heterogeneous information fusion using multi-task Gaussian processes in the context of geological resource modeling. Specifically, it empirically demonstrates that information integration across heterogeneous information sources leads to superior estimates of all the quantities being modeled, compared to modeling them individually. Multi-task Gaussian processes provide a powerful approach for simultaneous modeling of multiple quantities of interest while taking correlations between these quantities into consideration. Experiments are performed on large scale real sensor data.
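A minimal sketch of a two-task Gaussian process with an intrinsic coregionalization kernel, K((x,i),(x',j)) = B[i,j] * k(x,x'), in which correlated tasks borrow strength from each other. The task covariance B and the hyperparameters are assumed values for illustration, not learned as in the paper.

```python
# Two-task GP regression with an intrinsic coregionalisation kernel.
import numpy as np

def rbf(a, b, ls=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# training inputs and task labels (task 0 densely sampled, task 1 sparsely)
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 0.3, 1.7])
task = np.array([0, 0, 0, 0, 0, 1, 1])
y = np.where(task == 0, np.sin(x), 0.8 * np.sin(x) + 0.1)

B = np.array([[1.0, 0.9],       # task covariance: tasks strongly correlated (assumed)
              [0.9, 1.0]])
K = B[task][:, task] * rbf(x, x) + 1e-4 * np.eye(len(x))

# predict task 1 at new inputs, borrowing strength from task 0 observations
xs = np.linspace(0, 2, 5)
Ks = B[np.ones(len(xs), dtype=int)][:, task] * rbf(xs, x)
mean = Ks @ np.linalg.solve(K, y)
```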
{"title":"Information fusion in multi-task Gaussian process models","authors":"Shrihari Vasudevan, A. Melkumyan, S. Scheding","doi":"10.1109/MFI.2012.6343066","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343066","url":null,"abstract":"This paper evaluates heterogeneous information fusion using multi-task Gaussian processes in the context of geological resource modeling. Specifically, it empirically demonstrates that information integration across heterogeneous information sources leads to superior estimates of all the quantities being modeled, compared to modeling them individually. Multi-task Gaussian processes provide a powerful approach for simultaneous modeling of multiple quantities of interest while taking correlations between these quantities into consideration. Experiments are performed on large scale real sensor data.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125534513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Bayesian 3D independent motion segmentation with IMU-aided RGB-D sensor
J. Lobo, J. Ferreira, Pedro Trindade, J. Dias
In this paper we propose a two-tiered hierarchical Bayesian model to estimate the location of objects moving independently from the observer. Biological vision systems are very successful in motion segmentation, since they efficiently resort to flow analysis and accumulated prior knowledge of the 3D structure of the scene. Artificial perception systems may also build 3D structure maps and use optical flow to provide cues for ego- and independent motion segmentation. Using inertial and magnetic sensors and an image and depth sensor (RGB-D), we propose a method to obtain registered 3D maps, which are subsequently used in a probabilistic model (the bottom tier of the hierarchy) that performs background subtraction across several frames to provide a prior on moving objects. The egomotion of the RGB-D sensor is estimated starting with the angular pose obtained from the filtered accelerometer and magnetic data. The translation is derived from matched points across the images and corresponding 3D points in the rotation-compensated depth maps. A gyro-aided Lucas-Kanade tracker is used to obtain matched points across the images. The tracked points can also be used to refine the initial sensor-based rotation estimate. Having determined the camera egomotion, the optical flow estimated under the assumption of a static scene can be compared with the observed optical flow via a probabilistic model (the top tier of the hierarchy), using the results of the background subtraction process as a prior, in order to identify volumes with independent motion in the corresponding 3D point cloud. To deal with the computational load, CUDA-based solutions on GPUs were used. Experimental results are presented showing the validity of the proposed approach.
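A hedged sketch of the flow-comparison step: predict the optical flow a static scene would induce from the estimated egomotion and depth, then flag pixels whose observed flow deviates strongly. The intrinsics, threshold, and dense-depth assumption are illustrative only, not the paper's probabilistic formulation.

```python
# Predict the static-scene optical flow from egomotion and depth, and flag
# pixels whose observed flow disagrees as candidate independent motion.
import numpy as np

def predicted_flow(depth, K, R, t):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    rays = pix @ np.linalg.inv(K).T                     # back-project pixels
    pts = rays * depth.reshape(-1, 1)                   # 3D points (camera frame)
    pts2 = pts @ R.T + t                                # apply egomotion
    proj = pts2 @ K.T
    proj = proj[:, :2] / proj[:, 2:3]                   # re-project to pixels
    return (proj - pix[:, :2]).reshape(h, w, 2)

def moving_mask(flow_obs, flow_pred, thresh=2.0):
    # pixels whose flow residual exceeds the threshold (in pixels)
    return np.linalg.norm(flow_obs - flow_pred, axis=-1) > thresh

K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1.0]])  # assumed intrinsics
depth = np.full((480, 640), 2.0)
pred = predicted_flow(depth, K, np.eye(3), np.array([0.05, 0.0, 0.0]))
```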
{"title":"Bayesian 3D independent motion segmentation with IMU-aided RBG-D sensor","authors":"J. Lobo, J. Ferreira, Pedro Trindade, J. Dias","doi":"10.1109/MFI.2012.6343023","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343023","url":null,"abstract":"In this paper we propose a two-tiered hierarchical Bayesian model to estimate the location of objects moving independently from the observer. Biological vision systems are very successful in motion segmentation, since they efficiently resort to flow analysis and accumulated prior knowledge of the 3D structure of the scene. Artificial perception systems may also build 3D structure maps and use optical flow to provide cues for ego- and independent motion segmentation. Using inertial and magnetic sensors and an image and depth sensor (RGB-D) we propose a method to obtain registered 3D maps, which are subsequently used in a probabilistic model (the bottom tier of the hierarchy) that performs background subtraction across several frames to provide a prior on moving objects. The egomotion of the RGB-D sensor is estimated starting with the angular pose obtained from the filtered accelerometers and magnetic data. The translation is derived from matched points across the images and corresponding 3D points in the rotation-compensated depth maps. A gyro-aided Lucas Kanade tracker is used to obtain matched points across the images. The tracked points can also used to refine the initial sensor based rotation estimation. Having determined the camera egomotion, the estimated optical flow assuming a static scene can be compared with the observed optical flow via a probabilistic model (the top tier of the hierarchy), using the results of the background subtraction process as a prior, in order to identify volumes with independent motion in the corresponding 3D point cloud. To deal with the computational load CUDA-based solutions on GPUs were used. Experimental results are presented showing the validity of the proposed approach.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116154798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Object pose estimation and tracking by fusing visual and tactile information
João Bimbo, Silvia Rodríguez-Jiménez, Hongbin Liu, Xiaojing Song, N. Burrus, L. Seneviratne, M. Abderrahim, K. Althoefer
Robot grasping and manipulation require very accurate knowledge of the object's location within the robotic hand. By itself, a vision system cannot provide very precise and robust pose tracking due to occlusions or hardware limitations. This paper presents a method to estimate a grasped object's 6D pose by fusing sensor data from vision, tactile sensors and joint encoders. Given an initial pose acquired by the vision system and the contact locations on the fingertips, an iterative process optimises the estimation of the object pose by finding a transformation that fits the grasped object to the finger tips. Experiments were carried out in both simulation and a real system consisting of a Shadow arm and hand with ATI Force/Torque sensors instrumented on the fingertips and a Microsoft Kinect camera. In order to make the method suitable for real-time applications, the performance of the algorithm was investigated in terms of speed and accuracy of convergence.
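A hedged sketch of the pose-refinement idea: starting from the vision-based pose, find a correction that brings the measured fingertip contact locations onto the object's surface. A sphere signed-distance function stands in for the real object model, so only a translational correction is observable here; everything below is illustrative.

```python
# Refine a vision-based object pose so that fingertip contacts lie on the
# object surface (sphere SDF used as a stand-in object model).
import numpy as np
from scipy.optimize import minimize

def sphere_sdf(pts, centre, radius=0.04):
    # signed distance of each contact point to a sphere of the given radius
    return np.linalg.norm(pts - centre, axis=1) - radius

def cost(delta, contacts, centre0):
    # only a translational correction is observable with a spherical stand-in
    return np.sum(sphere_sdf(contacts, centre0 + delta) ** 2)

# fingertip contact points measured in the hand/camera frame (illustrative)
contacts = np.array([[0.04, 0.0, 0.0], [0.0, 0.04, 0.0], [0.0, 0.0, 0.04]])
centre0 = np.array([0.005, -0.003, 0.002])        # initial object pose from vision
sol = minimize(cost, x0=np.zeros(3), args=(contacts, centre0))
refined_centre = centre0 + sol.x                  # pose correction from touch
```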
{"title":"Object pose estimation and tracking by fusing visual and tactile information","authors":"João Bimbo, Silvia Rodríguez-Jiménez, Hongbin Liu, Xiaojing Song, N. Burrus, L. Seneviratne, M. Abderrahim, K. Althoefer","doi":"10.1109/MFI.2012.6343019","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343019","url":null,"abstract":"Robot grasping and manipulation require very accurate knowledge of the object's location within the robotic hand. By itself, a vision system cannot provide very precise and robust pose tracking due to occlusions or hardware limitations. This paper presents a method to estimate a grasped object's 6D pose by fusing sensor data from vision, tactile sensors and joint encoders. Given an initial pose acquired by the vision system and the contact locations on the fingertips, an iterative process optimises the estimation of the object pose by finding a transformation that fits the grasped object to the finger tips. Experiments were carried out in both simulation and a real system consisting of a Shadow arm and hand with ATI Force/Torque sensors instrumented on the fingertips and a Microsoft Kinect camera. In order to make the method suitable for real-time applications, the performance of the algorithm was investigated in terms of speed and accuracy of convergence.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116523825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25