Navigation information fusion for an AUV in rivers
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343038
Jinjun Rao, Jinbo Chen, Wei Ding, Zhenbang Gong
Autonomous Underwater Vehicles (AUVs) have enormous application potential, and real-time, accurate position and attitude information is essential for them. To obtain comprehensive and accurate position and attitude data with a common low-cost sensor configuration, this paper studies the data fusion problem for a combined SINS/USBL/AHRS system. First, the error expressions of the MEMS sensors are derived, and a data fusion model for Kalman filter-based fusion algorithms is presented. The method is validated on a data set gathered during a Huangpu River inspection task. A comparison between the original data and the fused data shows that the SINS/USBL/AHRS data fusion system markedly improves position and attitude accuracy.
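The abstract does not spell out the fusion model, so as a rough illustration only, here is a minimal linear Kalman filter in Python that fuses dead-reckoned SINS-style predictions with USBL position fixes. The state layout, sample period, and all noise covariances are assumed values; the paper's error-state formulation and AHRS attitude channel are not reproduced.

```python
# Minimal linear Kalman filter: SINS-style prediction + USBL position update.
# All values below are assumed for illustration.
import numpy as np

dt = 0.1                                    # assumed sample period [s]
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # USBL measures position only
Q = np.diag([0.01, 0.01, 0.1, 0.1])         # assumed process noise (SINS drift)
R = np.diag([0.5, 0.5])                     # assumed USBL measurement noise

x = np.zeros(4)                             # state: [px, py, vx, vy]
P = np.eye(4)

def predict(x, P):
    """Propagate the state with the dead-reckoning model."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a USBL position fix z = [px, py]."""
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

for z in [np.array([1.0, 0.5]), np.array([2.1, 1.1])]:
    x, P = predict(x, P)
    x, P = update(x, P, z)
print(x)                                    # fused position/velocity estimate
```

In a full SINS/USBL/AHRS system the same predict/update structure would be applied to an error state that includes attitude, with AHRS measurements entering as additional updates.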
{"title":"Navigation information fusion for an AUV in rivers","authors":"Jinjun Rao, Jinbo Chen, Wei Ding, Zhenbang Gong","doi":"10.1109/MFI.2012.6343038","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343038","url":null,"abstract":"Autonomous Underwater Vehicles (AUVs) present an enormous application potential, and the real time accurate position and attitude information is important for AUVs. In order to obtain comprehensive and accurate position and attitude data of AUVs, focusing on the common low cost sensors configuration, the data fusion problem of SINS/USBL/AHRS combination is presented and studied in this paper. Firstly, the error expressions of MEMS are researched and derived, and the data fusion model for Kalman Filter fusion algorithms is presented. The method is validated using a data set gathered for a Huangpu river inspection task. The comparison between original data and fusional data shows that SINS/USBL/AHRS data fusion system can promote accuracy of position and attitude markedly.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121181435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast and robust detection of runway incursions using localized sensors
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343034
J. Schönefeld, D. Möller
For over a decade, avoiding runway incursions (RI), events in which two or more vehicles create a conflicting situation by using the same runway, has been among the top ten priorities of the National Transportation Safety Board (NTSB). Only the recent technological response, the area-wide deployment of Runway Incursion Prevention and Alerting Systems (RIPAS), has improved the situation in the USA, and safety appears to have increased significantly. In particular, the Runway Status Lights (RWSL) and the Final Approach Runway Occupancy Signal (FAROS) show a statistically measurable impact. However, in some of the most dangerous RI scenarios, the surveillance that provides the input for the automatic control of these signals reaches its limits. The surveillance accuracy needed to handle such scenarios could be achieved with localized sensors. This work therefore provides a comparative analysis of surveillance performance in a particularly dangerous RI scenario, based on the experimental RIPAS design XL-RIAS.
{"title":"Fast and robust detection of runway incursions using localized sensors","authors":"J. Schönefeld, D. Möller","doi":"10.1109/MFI.2012.6343034","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343034","url":null,"abstract":"For over a decade avoiding runway incursions (RI), events where two or more vehicles create a conflicting situation by using the same runway, have been a top ten priority of the National Transportation Safety Board (NTSB). Only the recent technological response in form of area wide deployment of Runway Incursion Prevention and Alerting Systems (RIPAS) improved the situation in the USA and safety seems to have increased significantly. Particularly the Runway Status Lights (RWLS) and the Final Approach Runway Occupancy Signal (FAROS) show a statistically measurable impact. However, in some of the most dangerous RI scenarios the surveillance providing the input for the automatic control of the signals reaches its limitations. The necessary surveillance accuracy needed to deal with such scenarios could be achieved by localized sensors. Therefore this work provides a comparative analysis of surveillance performance in a very dangerous RI scenario based on the experimental RIPAS design XL-RIAS.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122723846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time pose estimation with RGB-D camera
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343046
Ivan Dryanovski, William Morris, R. Kaushik, Jizhong Xiao
An RGB-D camera is a sensor that outputs the distances to objects in a scene in addition to their RGB color. Recent technological advances in this area have brought affordable devices to the robotics community. In this paper, we present a real-time feature extraction and pose estimation technique using the data from a single RGB-D camera. First, a set of edge features is computed from the depth and color images. The down-sampled point clouds consisting of these feature points are aligned in 3D space using the Iterative Closest Point (ICP) algorithm. New features are aligned against a model consisting of previous features from a limited number of past scans. The system achieves a 10 Hz update rate running on a desktop CPU, using VGA-resolution RGB-D scans.
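As a sketch of the alignment step the abstract describes, the following Python shows one small ICP loop over two sets of 3D feature points: nearest-neighbour association followed by a closed-form (Kabsch/SVD) rigid fit. The edge-feature extraction, down-sampling and multi-scan model of the paper are not reproduced, and the toy data at the end are made up for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form (Kabsch/SVD) R, t minimising sum ||R @ s_i + t - d_i||^2."""
    c_s, c_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_s).T @ (dst - c_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_d - R @ c_s

def icp(src, dst, iters=20):
    """Tiny ICP loop: associate to nearest neighbours, refit, repeat."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)            # data association
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return best_rigid_transform(src, cur)   # net transform src -> aligned src

# Toy usage: dst is a slightly shifted, noisy copy of src.
rng = np.random.default_rng(0)
src = rng.normal(size=(200, 3))
dst = src + np.array([0.10, -0.05, 0.20]) + 0.001 * rng.normal(size=(200, 3))
R_est, t_est = icp(src, dst)
print(t_est)                                # should be close to the applied shift
```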
{"title":"Real-time pose estimation with RGB-D camera","authors":"Ivan Dryanovski, William Morris, R. Kaushik, Jizhong Xiao","doi":"10.1109/MFI.2012.6343046","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343046","url":null,"abstract":"An RGB-D camera is a sensor which outputs the distances to objects in a scene in addition to their RGB color. Recent technological advances in this area have introduced affordable devices in the robotics community. In this paper, we present a real-time feature extraction and pose estimation technique using the data from a single RGB-D camera. First, a set of edge features are computed from the depth and color images. The down-sampled point clouds consisting of the feature points are aligned using the Iterative Closest Point algorithm in 3D space. New features are aligned against a model consisting of previous features from a limited number of past scans. The system achieves a 10 Hz update rate running on a desktop CPU, using VGA resolution RGB-D scans.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122977426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Utilizing color information in 3D scan-registration using planar-patches matching
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343047
K. Pathak, N. Vaskevicius, Francisc Bungiu, A. Birk
In previous work, the authors presented a 3D scan-registration algorithm based on minimizing the uncertainty volume of the estimated inter-scan transform, computed by matching planar patches extracted from a pair of 3D range images. The method was shown to have a larger region of convergence than point-based methods such as ICP. With the advent of newer sensors, color information is now available in addition to the depth information in range images. In this work, we show how this information can be exploited to make our algorithm computationally more efficient. Results are presented for two commercially available sensors providing color: the high-resolution, large field-of-view (FOV), slow-scanning Faro sensor, and the low-resolution, small-FOV, faster Kinect sensor.
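The abstract does not say how color is used, so the sketch below only illustrates the general idea of a cheap color pre-test that prunes patch correspondences before the expensive geometric matching; the patch representation, the mean-RGB descriptor and the threshold are assumptions, not the paper's actual criteria.

```python
# Sketch: prune planar-patch correspondences by color similarity before
# the geometric matching step. All details here are illustrative assumptions.
import numpy as np

def color_compatible(patch_a, patch_b, max_dist=30.0):
    """Cheap pre-test: reject pairs whose mean colors differ too much."""
    return np.linalg.norm(patch_a["mean_rgb"] - patch_b["mean_rgb"]) < max_dist

def candidate_pairs(patches_a, patches_b):
    """Keep only color-compatible pairs for the expensive geometric test."""
    return [(i, j)
            for i, pa in enumerate(patches_a)
            for j, pb in enumerate(patches_b)
            if color_compatible(pa, pb)]

# Toy usage: each patch carries only its mean color in this sketch.
patches_a = [{"mean_rgb": np.array([200.0, 30.0, 30.0])},
             {"mean_rgb": np.array([20.0, 20.0, 220.0])}]
patches_b = [{"mean_rgb": np.array([210.0, 35.0, 25.0])}]
print(candidate_pairs(patches_a, patches_b))   # -> [(0, 0)]
```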
{"title":"Utilizing color information in 3D scan-registration using planar-patches matching","authors":"K. Pathak, N. Vaskevicius, Francisc Bungiu, A. Birk","doi":"10.1109/MFI.2012.6343047","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343047","url":null,"abstract":"In previous work, the authors presented a 3D scan-registration algorithm based on minimizing the uncertainty-volume of the estimated inter-scan transform, computed by matching planar-patches extracted from a pair of 3D range-images. The method was shown to have a larger region of convergence than points-based methods like ICP. With the advent of newer sensors, color-information is now also available in addition to the depth-information in range-images. In this work, we show how this information can be exploited to make our algorithm computationally more efficient. The results are presented for two commercially available sensors providing color: the high-resolution, large field-of-view (FOV), slow scanning Faro sensor, and the low-resolution, small FOV, faster Kinect sensor.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"30 8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123587215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Applied sensor fault detection and validation using transposed input data PCA and ANNs
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343055
Yu Zhang, C. Bingham, M. Gallimore, Zhijing Yang, Jun Chen
The paper presents an efficient approach for applied sensor fault detection based on the integration of principal component analysis (PCA) and artificial neural networks (ANNs). Specifically, PCA-based y-indices are introduced to measure the differences between groups of sensor readings in a rolling time window, and the relative merits of three types of ANNs are compared for operation classification. Unlike previously reported PCA techniques (commonly based on the squared prediction error (SPE)), which can wrongly detect a sensor fault when the system data are subject to bias or drift as a result of power or loading changes, the proposed methodologies are shown to be capable of detecting and identifying emerging sensor faults during transient conditions. The efficacy of the proposed approach is demonstrated through its application to measurement data taken from an industrial generator.
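As a loose illustration of a PCA index computed on transposed input data (sensors as the PCA samples within a rolling window), the sketch below flags the sensor whose PCA score deviates most from the others. The index used here is an assumption for illustration; the paper's y-indices and the ANN classification stage are not reproduced.

```python
# Illustrative cross-sensor PCA index on a rolling window with sensors
# as PCA samples ("transposed input data"). The index definition is assumed.
import numpy as np
from sklearn.decomposition import PCA

def sensor_indices(window):
    """window: array of shape (n_samples, n_sensors)."""
    X = window.T                            # rows = sensors, cols = time samples
    scores = PCA(n_components=2).fit_transform(X)
    med = np.median(scores, axis=0)
    return np.linalg.norm(scores - med, axis=1)   # one index per sensor

# Toy usage: 8 nominally redundant sensors, one with a slow drift.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
readings = np.tile(np.sin(2 * np.pi * t)[:, None], (1, 8))
readings += 0.01 * rng.normal(size=(200, 8))
readings[:, 3] += 0.5 * t                   # drifting sensor
idx = sensor_indices(readings)
print(int(np.argmax(idx)))                  # -> 3
```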
{"title":"Applied sensor fault detection and validation using transposed input data PCA and ANNs","authors":"Yu Zhang, C. Bingham, M. Gallimore, Zhijing Yang, Jun Chen","doi":"10.1109/MFI.2012.6343055","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343055","url":null,"abstract":"The paper presents an efficient approach for applied sensor fault detection based on an integration of principal component analysis (PCA) and artificial neural networks (ANNs). Specifically, PCA-based y-indices are introduced to measure the differences between groups of sensor readings in a time rolling window, and the relative merits of three types of ANNs are compared for operation classification. Unlike previously reported PCA techniques (commonly based on squared prediction error (SPE)) which can readily detect a sensor fault wrongly when the system data is subject bias or drifting as a result of power or loading changes, here, it is shown that the proposed methodologies are capable of detecting and identifying the emergence of sensor faults during transient conditions. The efficacy and capability of the proposed approach is demonstrated through their application on measurement data taken from an industrial generator.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132528966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extrinsic calibration between a stereoscopic system and a LIDAR with sensor noise models
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343010
You Li, Y. Ruichek, C. Cappelle
Visual sensors and depth sensors, such as cameras and LIDAR (Light Detection and Ranging), are increasingly used together in the perception systems of intelligent vehicles. Fusing information obtained separately from these heterogeneous sensors requires extrinsic calibration between the vision sensors and the LIDARs. In this paper, we propose an optimal extrinsic calibration algorithm between a binocular stereo-vision system and a 2D LIDAR. The extrinsic calibration problem is solved through 3D reconstruction of a chessboard and geometric constraints between the views from the stereo-vision system and the LIDAR. The proposed approach takes sensor noise models into account, so that it provides optimal results under Mahalanobis-distance constraints. Experiments based on both computer simulation and real data sets are presented and analyzed to evaluate the performance of the calibration method. A comparison with a popular camera/LIDAR calibration method is also provided to show the benefits of our method.
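A common way to pose this kind of problem, sketched below under assumed noise, is to take the chessboard plane from the stereo reconstruction and require the LIDAR points hitting the board to lie on that plane after the unknown extrinsic transform is applied, minimizing noise-whitened (Mahalanobis-style) residuals. The plane-constraint formulation and the isotropic LIDAR noise are assumptions for illustration, not necessarily the paper's exact model.

```python
# Sketch of a LIDAR-to-stereo extrinsic calibration cost: for each chessboard
# pose, the stereo system gives the board plane (n, d) in its frame, and the
# 2D LIDAR gives points p on the board; a correct (R, t) maps those points
# onto the plane. The isotropic noise sigma is an assumed value.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, planes, lidar_pts, sigma=0.01):
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    t = x[3:]
    res = []
    for (n, d), pts in zip(planes, lidar_pts):
        # signed point-to-plane distances, whitened by the noise std
        res.append((pts @ R.T + t) @ n - d)
    return np.concatenate(res) / sigma

def calibrate(planes, lidar_pts):
    """planes: list of (n, d); lidar_pts: list of (m_k, 3) arrays per pose."""
    sol = least_squares(residuals, x0=np.zeros(6), args=(planes, lidar_pts))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```

Several chessboard poses are needed in practice, since a single plane constrains only part of the six degrees of freedom.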
{"title":"Extrinsic calibration between a stereoscopic system and a LIDAR with sensor noise models","authors":"You Li, Y. Ruichek, C. Cappelle","doi":"10.1109/MFI.2012.6343010","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343010","url":null,"abstract":"Visual sensors and depth sensors, such as camera and LIDAR (Light Detection and Ranging) are more and more used together in current perception systems of intelligent vehicles. Fusing information obtained separately from these heterogeneous sensors always requires extrinsic calibration of vision sensors and LIDARs. In this paper, we propose an optimal extrinsic calibration algorithm between a binocular stereo vision system and a 2D LIDAR. The extrinsic calibration problem is solved by 3D reconstruction of a chessboard and geometric constraints between the views from the stereovision system and the LIDAR. The proposed approach takes sensor noise models into account that it provides optimal results under Mahalanobis distance constraints. Experiments based on both computer simulation and real data sets are presented and analyzed to evaluate the performance of the calibration method. A comparison with a popular camera/LIDAR calibration method is also proposed to show the benefits of our method.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121759339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information fusion in multi-task Gaussian process models
Pub Date: 2012-10-06 | DOI: 10.1109/MFI.2012.6343066
Shrihari Vasudevan, A. Melkumyan, S. Scheding
This paper evaluates heterogeneous information fusion using multi-task Gaussian processes in the context of geological resource modeling. Specifically, it demonstrates empirically that integrating information across heterogeneous sources leads to superior estimates of all the quantities being modeled, compared to modeling them individually. Multi-task Gaussian processes provide a powerful approach for simultaneously modeling multiple quantities of interest while taking the correlations between these quantities into account. Experiments are performed on large-scale real sensor data.
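As a minimal sketch of a multi-task GP, the following numpy code implements prediction under an intrinsic coregionalisation model, where the covariance between tasks i and j at inputs x, x' is B[i, j] * k(x, x'). The task covariance B, the RBF length scale and the noise level are fixed by hand for illustration; the paper's kernels and hyperparameter learning are not reproduced.

```python
# Multi-task GP prediction with an intrinsic coregionalisation model.
# Hyperparameters are assumed values chosen for this toy example.
import numpy as np

def rbf(a, b, ls=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / ls ** 2)

def mtgp_predict(X, y, task, Xs, task_s, B, noise=1e-2):
    """X, Xs: 1-D inputs; task, task_s: integer task ids per point."""
    K = B[np.ix_(task, task)] * rbf(X, X)          # joint train covariance
    Ks = B[np.ix_(task_s, task)] * rbf(Xs, X)      # test-vs-train covariance
    L = np.linalg.cholesky(K + noise * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return Ks @ alpha                              # predictive mean

# Toy usage: two correlated tasks; predict task 1 at an unobserved input.
X = np.array([0.0, 0.5, 1.0, 1.5, 0.2, 1.3])
task = np.array([0, 0, 0, 0, 1, 1])
y = np.sin(X) + 0.3 * task                         # correlated task outputs
B = np.array([[1.0, 0.9], [0.9, 1.0]])             # assumed task covariance
print(mtgp_predict(X, y, task, np.array([0.75]), np.array([1]), B))
```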
{"title":"Information fusion in multi-task Gaussian process models","authors":"Shrihari Vasudevan, A. Melkumyan, S. Scheding","doi":"10.1109/MFI.2012.6343066","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343066","url":null,"abstract":"This paper evaluates heterogeneous information fusion using multi-task Gaussian processes in the context of geological resource modeling. Specifically, it empirically demonstrates that information integration across heterogeneous information sources leads to superior estimates of all the quantities being modeled, compared to modeling them individually. Multi-task Gaussian processes provide a powerful approach for simultaneous modeling of multiple quantities of interest while taking correlations between these quantities into consideration. Experiments are performed on large scale real sensor data.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125534513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian 3D independent motion segmentation with IMU-aided RGB-D sensor
Pub Date: 2012-09-01 | DOI: 10.1109/MFI.2012.6343023
J. Lobo, J. Ferreira, Pedro Trindade, J. Dias
In this paper we propose a two-tiered hierarchical Bayesian model to estimate the location of objects moving independently of the observer. Biological vision systems are very successful at motion segmentation, since they efficiently resort to flow analysis and accumulated prior knowledge of the 3D structure of the scene. Artificial perception systems may also build 3D structure maps and use optical flow to provide cues for ego- and independent-motion segmentation. Using inertial and magnetic sensors together with an image-and-depth (RGB-D) sensor, we propose a method to obtain registered 3D maps, which are subsequently used in a probabilistic model (the bottom tier of the hierarchy) that performs background subtraction across several frames to provide a prior on moving objects. The egomotion of the RGB-D sensor is estimated starting from the angular pose obtained from the filtered accelerometer and magnetometer data. The translation is derived from matched points across the images and the corresponding 3D points in the rotation-compensated depth maps. A gyro-aided Lucas-Kanade tracker is used to obtain matched points across the images; the tracked points can also be used to refine the initial sensor-based rotation estimate. Having determined the camera egomotion, the optical flow predicted under a static-scene assumption can be compared with the observed optical flow via a probabilistic model (the top tier of the hierarchy), using the results of the background-subtraction process as a prior, in order to identify volumes with independent motion in the corresponding 3D point cloud. CUDA-based solutions on GPUs were used to deal with the computational load. Experimental results are presented that show the validity of the proposed approach.
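The comparison of predicted and observed flow can be sketched as follows: back-project each pixel using its depth, apply the estimated egomotion, re-project, and flag pixels whose observed flow deviates strongly from the prediction. The intrinsics and threshold are assumed values, and the hierarchical Bayesian model, background-subtraction prior and CUDA implementation of the paper are not reproduced.

```python
# Static-scene flow test: predict pixel displacement from depth + egomotion
# and compare with observed optical flow. Intrinsics/threshold are assumed.
import numpy as np

K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])            # assumed Kinect-like intrinsics

def predicted_flow(u, v, depth, R, t):
    """u, v, depth: 1-D arrays of pixel coordinates and depths [m]."""
    pix = np.stack([u, v, np.ones_like(u)])
    P = np.linalg.inv(K) @ pix * depth      # back-projected 3D points
    P2 = R @ P + t[:, None]                 # points after camera egomotion
    p2 = K @ P2
    p2 = p2[:2] / p2[2]                     # re-projected pixel positions
    return p2 - pix[:2]                     # predicted displacement (2, n)

def independent_motion_mask(u, v, depth, flow_obs, R, t, thresh=2.0):
    """flow_obs: observed flow, shape (2, n). Returns True where flow disagrees."""
    res = np.linalg.norm(flow_obs - predicted_flow(u, v, depth, R, t), axis=0)
    return res > thresh
```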
{"title":"Bayesian 3D independent motion segmentation with IMU-aided RBG-D sensor","authors":"J. Lobo, J. Ferreira, Pedro Trindade, J. Dias","doi":"10.1109/MFI.2012.6343023","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343023","url":null,"abstract":"In this paper we propose a two-tiered hierarchical Bayesian model to estimate the location of objects moving independently from the observer. Biological vision systems are very successful in motion segmentation, since they efficiently resort to flow analysis and accumulated prior knowledge of the 3D structure of the scene. Artificial perception systems may also build 3D structure maps and use optical flow to provide cues for ego- and independent motion segmentation. Using inertial and magnetic sensors and an image and depth sensor (RGB-D) we propose a method to obtain registered 3D maps, which are subsequently used in a probabilistic model (the bottom tier of the hierarchy) that performs background subtraction across several frames to provide a prior on moving objects. The egomotion of the RGB-D sensor is estimated starting with the angular pose obtained from the filtered accelerometers and magnetic data. The translation is derived from matched points across the images and corresponding 3D points in the rotation-compensated depth maps. A gyro-aided Lucas Kanade tracker is used to obtain matched points across the images. The tracked points can also used to refine the initial sensor based rotation estimation. Having determined the camera egomotion, the estimated optical flow assuming a static scene can be compared with the observed optical flow via a probabilistic model (the top tier of the hierarchy), using the results of the background subtraction process as a prior, in order to identify volumes with independent motion in the corresponding 3D point cloud. To deal with the computational load CUDA-based solutions on GPUs were used. Experimental results are presented showing the validity of the proposed approach.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116154798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object pose estimation and tracking by fusing visual and tactile information
Pub Date: 2012-09-01 | DOI: 10.1109/MFI.2012.6343019
João Bimbo, Silvia Rodríguez-Jiménez, Hongbin Liu, Xiaojing Song, N. Burrus, L. Seneviratne, M. Abderrahim, K. Althoefer
Robot grasping and manipulation require very accurate knowledge of the object's location within the robotic hand. By itself, a vision system cannot provide very precise and robust pose tracking due to occlusions or hardware limitations. This paper presents a method to estimate a grasped object's 6D pose by fusing sensor data from vision, tactile sensors and joint encoders. Given an initial pose acquired by the vision system and the contact locations on the fingertips, an iterative process optimises the estimate of the object pose by finding a transformation that fits the grasped object to the fingertips. Experiments were carried out both in simulation and on a real system consisting of a Shadow arm and hand with ATI force/torque sensors mounted on the fingertips and a Microsoft Kinect camera. To make the method suitable for real-time applications, the performance of the algorithm was investigated in terms of speed and accuracy of convergence.
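As an illustration of the fitting step, the sketch below refines an initial object pose by searching for a small rigid correction that brings the fingertip contact points onto a point-cloud model of the object's surface; the model, contacts and optimiser settings are hypothetical stand-ins, not the paper's implementation.

```python
# Refine an object pose so that fingertip contacts lie on the object surface.
# Illustrative sketch only; all inputs are assumed placeholders.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def refine_pose(model_pts, contacts, R0, t0):
    """model_pts: surface points in the object frame (initial pose R0, t0).
    contacts: fingertip contact points in the hand/world frame."""
    tree = cKDTree(model_pts)

    def residuals(x):
        dR = Rotation.from_rotvec(x[:3]).as_matrix()
        R, t = dR @ R0, dR @ t0 + x[3:]     # corrected object pose
        local = (contacts - t) @ R          # contacts in the object frame
        d, _ = tree.query(local)            # distance to nearest surface point
        return d

    sol = least_squares(residuals, x0=np.zeros(6))
    dR = Rotation.from_rotvec(sol.x[:3]).as_matrix()
    return dR @ R0, dR @ t0 + sol.x[3:]
```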
{"title":"Object pose estimation and tracking by fusing visual and tactile information","authors":"João Bimbo, Silvia Rodríguez-Jiménez, Hongbin Liu, Xiaojing Song, N. Burrus, L. Seneviratne, M. Abderrahim, K. Althoefer","doi":"10.1109/MFI.2012.6343019","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343019","url":null,"abstract":"Robot grasping and manipulation require very accurate knowledge of the object's location within the robotic hand. By itself, a vision system cannot provide very precise and robust pose tracking due to occlusions or hardware limitations. This paper presents a method to estimate a grasped object's 6D pose by fusing sensor data from vision, tactile sensors and joint encoders. Given an initial pose acquired by the vision system and the contact locations on the fingertips, an iterative process optimises the estimation of the object pose by finding a transformation that fits the grasped object to the finger tips. Experiments were carried out in both simulation and a real system consisting of a Shadow arm and hand with ATI Force/Torque sensors instrumented on the fingertips and a Microsoft Kinect camera. In order to make the method suitable for real-time applications, the performance of the algorithm was investigated in terms of speed and accuracy of convergence.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116523825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}