
Latest publications: 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)

Hand gesture recognition for Human-Robot Interaction for service robot
R. Luo, Yen-Chang Wu
With advances in technology, robots play an increasingly important role in our lives, and robots in service roles, such as intelligent rescue and service robots, are becoming a common sight in society. Human-Robot Interaction (HRI) has therefore become an essential research topic. In this paper we introduce a combined method for hand sign recognition, an essential modality for HRI. Sign language is the most intuitive and direct way for impaired or disabled people to communicate: through hand or body gestures, they can easily let a caregiver or robot know what message they want to convey. We propose a hand gesture recognition algorithm that combines two distinct recognizers, which collectively determine the hand sign via a process called the CAR (combinatorial approach recognizer) equation. The two recognizers are designed to complement each other's discriminative ability: one is a hand skeleton recognizer (HSR), and the other is based on support vector machines (SVMs), whose classifiers are trained on different features such as local binary patterns (LBP) and raw data. The training images come from the Bosphorus Hand Database as well as images we captured ourselves. A set of rules, including recognizer switching and the CAR equation, is devised to synthesize the two distinct methods. We have successfully demonstrated gesture recognition experimentally as a proof of concept.
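The LBP texture feature the abstract mentions as one of the SVM inputs can be sketched in a few lines. This is the generic 8-neighbour formulation of LBP with a 256-bin histogram, not the authors' implementation:

```python
def lbp_code(img, r, c):
    """Basic 8-neighbour Local Binary Pattern code for pixel (r, c): each
    neighbour >= centre contributes one bit of an 8-bit code."""
    center = img[r][c]
    # clockwise neighbour offsets starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin LBP histogram over all interior pixels: a simple texture
    feature vector of the kind used to train an SVM classifier."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

The histogram is pose-insensitive at the pixel level, which is one reason LBP pairs well with a raw-data classifier in an ensemble.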
DOI: 10.1109/MFI.2012.6343059 · Published 2012-11-12
Citations: 26
Estimating the posture of pipeline inspection robot with a 2D Laser Range Finder
Yuanyuan Hu, Zhangjun Song, Jun‐Hua Zhu
Pipeline networks are among a city's critical infrastructures: large numbers of gas and water pipes run through public utilities, factories, and other facilities. Regular inspection is required to ensure the static integrity of the pipes and to guard against the problems associated with pipe failure. We have developed a pipeline inspection robot equipped with a camera which can walk inside the pipes and stream live video back to the base station. In this paper we propose a new method for estimating the posture of the robot in round pipes with a 2D Laser Range Finder (LRF) and a dual tilt sensor, using the geometrical characteristics of the round pipes reconstructed from the point cloud data. The transformation matrix from the robot coordinate system to the global system is derived. The positions and sizes of pipe defects can then be calculated easily from the range data and images. Experiments with the inspection robot in dry, smooth HDPE pipes show that the proposed method is useful and valid.
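A robot-to-global transformation of the kind the paper derives can be sketched with Euler angles; the Z-Y-X (yaw-pitch-roll) convention and pure-Python form below are assumptions for illustration, not the paper's exact matrix:

```python
import math

def robot_to_global(point, roll, pitch, yaw, origin):
    """Rotate a point from the robot frame into the global frame using
    Z-Y-X (yaw-pitch-roll) Euler angles, then translate by the robot's
    global position."""
    x, y, z = point
    # roll about the x axis
    y, z = (y * math.cos(roll) - z * math.sin(roll),
            y * math.sin(roll) + z * math.cos(roll))
    # pitch about the y axis
    x, z = (x * math.cos(pitch) + z * math.sin(pitch),
            -x * math.sin(pitch) + z * math.cos(pitch))
    # yaw about the z axis
    x, y = (x * math.cos(yaw) - y * math.sin(yaw),
            x * math.sin(yaw) + y * math.cos(yaw))
    return (x + origin[0], y + origin[1], z + origin[2])
```

With roll and pitch supplied by the dual tilt sensor, each LRF point measured in the robot frame can be mapped into global coordinates this way before defect positions are read off.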
DOI: 10.1109/MFI.2012.6342999 · Published 2012-11-12
Citations: 7
Estimation analysis in VSLAM for UAV application
Xiaodong Li, N. Aouf, A. Nemra
This paper presents an in-depth evaluation of the filter algorithms used to estimate the 3D position and attitude of a UAV with stereo-vision-based Visual SLAM, integrated with feature detection and matching techniques such as SIFT and SURF. The aim of the evaluation is to investigate the accuracy and robustness of the filters' estimates for vision-based navigation problems. The investigation covers how several filter methods and both feature extraction algorithms behave in VSLAM applied to a UAV. Statistical analyses are carried out in terms of error rates, and the robustness and relative merits of the approaches are discussed alongside evidence of the filters' performance.
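The error-rate statistics used to compare such filters typically reduce to a trajectory error metric; a minimal sketch (RMSE over 3D positions, not necessarily the paper's exact metric) is:

```python
import math

def rmse(estimates, ground_truth):
    """Root-mean-square position error between a filter's 3D trajectory
    estimate and ground truth, a standard statistic for comparing
    EKF/UKF-style VSLAM filters."""
    assert len(estimates) == len(ground_truth)
    total = 0.0
    for (ex, ey, ez), (gx, gy, gz) in zip(estimates, ground_truth):
        total += (ex - gx) ** 2 + (ey - gy) ** 2 + (ez - gz) ** 2
    return math.sqrt(total / len(estimates))
```

Computing the same statistic per filter and per feature detector (SIFT vs. SURF) gives the kind of comparison table the evaluation describes.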
DOI: 10.1109/MFI.2012.6343039 · Published 2012-11-12
Citations: 7
Localizability estimation for mobile robots based on probabilistic grid map and its applications to localization
Zhe Liu, Weidong Chen, Yong Wang, Jingchuan Wang
A novel approach to estimating localizability for mobile robots is presented, based on a probabilistic grid map (PGM). First, a static localizability matrix is proposed for off-line estimation over the prior PGM. Then a dynamic localizability matrix is proposed to deal with unexpected dynamic changes. These matrices quantitatively describe both a localizability index and a localizability direction. The validity of the proposed method is demonstrated by experiments in several typical environments. Furthermore, two typical localization-related applications, active global localization and pose tracking, illustrate the effectiveness of the proposed localizability estimation method.
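One common way to capture "how well does the map constrain localization, and in which direction" is a structure-tensor-style 2x2 matrix built from occupancy gradients; the sketch below is an illustrative analogue of a static localizability matrix, not the paper's exact formulation:

```python
def localizability_matrix(grid):
    """2x2 matrix of summed occupancy-gradient outer products over a grid
    map. A large eigenvalue along a direction means the map constrains
    position well along that direction; a near-zero one means the robot
    can slide that way without the map noticing."""
    gxx = gxy = gyy = 0.0
    for r in range(1, len(grid) - 1):
        for c in range(1, len(grid[0]) - 1):
            # central-difference gradients of occupancy probability
            gx = (grid[r][c + 1] - grid[r][c - 1]) / 2.0
            gy = (grid[r + 1][c] - grid[r - 1][c]) / 2.0
            gxx += gx * gx
            gxy += gx * gy
            gyy += gy * gy
    return [[gxx, gxy], [gxy, gyy]]
```

For a map containing only a vertical wall, the matrix is strong in x and zero in y: position across the wall is observable, position along it is not.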
DOI: 10.1109/MFI.2012.6343051 · Published 2012-11-12
Citations: 28
Towards autonomous airborne mapping of urban environments
B. Adler, Junhao Xiao
This work documents our progress on building an unmanned aerial vehicle capable of autonomously mapping urban environments. This includes localization and tracking of the vehicle's pose; fusion of sensor data from onboard GNSS receivers, IMUs, laser scanners, and cameras; and real-time path planning and collision avoidance. Currently, we focus on a physics-based approach to computing waypoints, which are subsequently used to steer the platform in three-dimensional space. Generation of efficient sensor trajectories for maximized information gain operates directly on unorganized point clouds, making it a natural fit for environment mapping with commonly used LIDAR sensors and time-of-flight cameras. We present the algorithm's application to real sensor data and analyze its performance in a virtual outdoor scenario.
DOI: 10.1109/MFI.2012.6343030 · Published 2012-11-12
Citations: 6
Monocular heading estimation in non-stationary urban environment
Christian Herdtweck, Cristóbal Curio
Estimating heading information reliably from visual cues alone is an important goal in human navigation research as well as in application areas ranging from robotics to automotive safety. The focus of expansion (FoE) is deemed important for this task, yet dynamic and unstructured environments like urban areas still pose an algorithmic challenge. We extend a robust learning framework that operates on optical flow and has at its core a continuous Latent Variable Model (LVM) [1]. It accounts for missing measurements, erroneous correspondences, and independent outlier motion in the visual field of view. The approach bypasses classical camera calibration through learning stages that only require monocular video footage and corresponding platform motion information. To estimate the FoE we present both a numerical method acting on inferred optical flow fields and regression mapping, e.g. Gaussian process regression. We also present results for mapping to velocity, yaw, and even pitch and roll. Performance is demonstrated on car data recorded in non-stationary urban environments.
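A standard numerical route to the FoE is least squares over the flow field: under pure translation, every flow vector points away from the FoE, so the FoE is the point minimizing the summed squared perpendicular distance to all flow lines. The sketch below is that textbook formulation, assumed here for illustration; the paper's numerical method may differ:

```python
def estimate_foe(points, flows):
    """Least-squares focus-of-expansion estimate from (point, flow-vector)
    pairs: solves the 2x2 normal equations for the point closest to every
    flow line in the perpendicular-distance sense."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), (u, v) in zip(points, flows):
        # unit normal to the flow direction
        norm = (u * u + v * v) ** 0.5
        nx, ny = -v / norm, u / norm
        a11 += nx * nx; a12 += nx * ny; a22 += ny * ny
        d = nx * px + ny * py
        b1 += nx * d; b2 += ny * d
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

Outlier motion (e.g. other vehicles) violates the pure-translation assumption, which is exactly why the paper wraps such estimates in a robust learned model.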
DOI: 10.1109/MFI.2012.6343057 · Published 2012-11-12
Citations: 3
Tracking ground moving extended objects using RGBD data
M. Baum, F. Faion, U. Hanebeck
This paper describes an experimental set-up for tracking a moving ground object from a bird's-eye view. In this experiment, an RGB-plus-depth (RGBD) camera is used to detect moving points. The detected points serve as input to a probabilistic extended object tracking algorithm that simultaneously estimates the kinematic parameters and the shape parameters of the object. In this way, moving objects are easily discriminated from the background, and the probabilistic tracking algorithm ensures a robust and smooth shape estimate. We provide an experimental evaluation of a recent Bayesian extended object tracking algorithm based on a so-called Random Hypersurface Model and compare it with active contour models.
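To make "simultaneously estimates kinematic and shape parameters" concrete, here is a deliberately naive per-frame shape estimator for a circular extended object: centroid plus mean radius from scattered surface detections. It is a stand-in for intuition only, far simpler than the Random Hypersurface Model the paper evaluates:

```python
import math

def fit_circle_naive(points):
    """Centroid + mean-radius fit of a circular extended object from
    scattered 2D surface measurements: the centroid approximates the
    kinematic state (position), the mean radius the shape parameter."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    r = sum(math.hypot(p[0] - cx, p[1] - cy) for p in points) / n
    return (cx, cy), r
```

A Bayesian extended-object tracker improves on this by propagating uncertainty over both position and shape across frames instead of refitting from scratch.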
DOI: 10.1109/MFI.2012.6343003 · Published 2012-11-12
Citations: 19
A sensor fusion approach for localization with cumulative error elimination
Feihu Zhang, H. Stahle, Guang Chen, Chao-Wei Chen, Carsten Simon, C. Buckl, A. Knoll
This paper describes a robust approach that improves the precision of vehicle localization in complex urban environments by fusing data from GPS, gyroscope, and velocity sensors. In this method, we apply a Kalman filter to estimate the position of the vehicle. In contrast to other fusion-based localization approaches, we process the data in a common coordinate system, Earth-Centred Earth-Fixed (ECEF) coordinates, and eliminate the cumulative error using its statistical characteristics. The contribution is that the approach not only provides a sensor fusion framework for estimating the position of the vehicle, but also gives a mathematical solution for eliminating the cumulative error stemming from the relative pose measurements (provided by the gyroscope and velocity sensors). The experiments show the reliability and feasibility of our approach in a large-scale environment.
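The core fusion idea reduces to a Kalman predict/update cycle; a minimal scalar sketch (one position coordinate, odometry-driven prediction corrected by a GPS-style absolute measurement) illustrates why drift from relative measurements stays bounded. This is a generic textbook filter, not the paper's ECEF formulation:

```python
def kalman_step(x, p, u, q, z, r):
    """One predict/update cycle of a scalar Kalman filter: propagate
    position x by the odometry-derived displacement u (process noise
    variance q), then correct with an absolute measurement z (noise
    variance r). Returns the updated state and variance."""
    # predict: relative measurement moves the state, uncertainty grows
    x_pred = x + u
    p_pred = p + q
    # update: absolute measurement pulls the state back, uncertainty shrinks
    k = p_pred / (p_pred + r)        # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

Without the update step, variance p grows by q every cycle, which is exactly the cumulative error the paper sets out to eliminate.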
DOI: 10.1109/MFI.2012.6343009 · Published 2012-11-12
Citations: 34
On Active Sensing methods for localization scenarios with range-based measurements
J. Trapnauskas, M. Romanovas, L. Klingbeil, A. Al-Jawad, M. Trächtler, Y. Manoli
This work demonstrates how the methods of Active Sensing (AS), based on the theory of optimal experimental design, can be applied to a location estimation scenario. The simulated problem consists of several mobile and fixed nodes, where each mobile unit is equipped with a gyroscope and an incremental path encoder and is capable of making a selective range measurement to one of several fixed anchors as well as to other moving tags. All available measurements are combined within a fusion filter, while the range measurements are selected with one of the AS methods in order to minimize the position uncertainty under the constraint of a maximum available measurement rate. Different AS strategies are incorporated into a recursive Bayesian estimation framework in the form of Extended Kalman and Particle Filters. The performance of the fusion algorithms augmented with the active sensing techniques is discussed for several scenarios with different measurement rates and numbers of fixed or moving tags.
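A typical optimal-design criterion for "which anchor should I range to next" is A-optimality: pick the measurement that minimizes the trace of the post-update position covariance. The sketch below applies the scalar-measurement covariance update P' = P - P h (h^T P h + r)^-1 h^T P for each candidate line-of-sight direction h; it is one generic strategy of the family the paper compares, not its specific method:

```python
def best_anchor(P, directions, r):
    """Greedy Active Sensing choice: for each candidate anchor's unit
    line-of-sight direction (hx, hy), compute the trace of the 2x2
    position covariance P after a scalar range update with noise variance
    r, and return the index of the anchor leaving the least uncertainty."""
    best_i, best_trace = -1, float("inf")
    for i, (hx, hy) in enumerate(directions):
        # ph = P h
        ph = (P[0][0] * hx + P[0][1] * hy,
              P[1][0] * hx + P[1][1] * hy)
        s = hx * ph[0] + hy * ph[1] + r          # innovation variance
        trace = (P[0][0] - ph[0] * ph[0] / s) + (P[1][1] - ph[1] * ph[1] / s)
        if trace < best_trace:
            best_i, best_trace = i, trace
    return best_i
```

As expected, the criterion prefers ranging along the axis where position uncertainty is largest.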
DOI: 10.1109/MFI.2012.6343013 · Published 2012-11-12
Citations: 0
Multi sensors based ultrasonic human face identification: Experiment and analysis
Y. Xu, J. Y. Wang, B. Cao, J. Yang
This paper presents an ultrasonic-sensing-based human face identification approach. As a biometric identification method, ultrasonic sensing can detect the geometric structure of a face without being affected by environmental illumination. Multiple ultrasonic sensors are used for data collection, with a Continuous Transmitted Frequency Modulated (CTFM) signal chosen as the detection signal. A High Resolution Range Profile (HRRP) is extracted from the echo signal as the feature, and a k-nearest-neighbour (KNN) classifier is used for face classification. Data fusion is applied to improve performance when identifying faces across multiple facial expressions. Experimental results show a success rate of more than 96.9% on a test database of 62 persons with 5 facial expressions each. The results show that multi-sensor ultrasonic sensing could be a competent face identification solution for many applications.
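The KNN classification stage is simple enough to sketch directly; this generic Euclidean-distance majority vote stands in for the paper's classifier, with HRRP vectors assumed to be plain numeric sequences:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """k-nearest-neighbour majority vote over Euclidean distance, of the
    kind applied to HRRP feature vectors. `train` is a list of
    (feature_vector, label) pairs; returns the winning label."""
    dists = sorted(
        (math.dist(vec, query), label) for vec, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

In the multi-sensor setting, one natural fusion scheme is to let each sensor's KNN vote and combine the votes, which is in the spirit of the data fusion step described above.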
DOI: 10.1109/MFI.2012.6343000 · Published 2012-11-12
Citations: 5