Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343072
G. Rigatos
The paper proposes a new distributed filtering method for integrity monitoring of navigation sensors in automatic ground vehicles (AGVs). Unlike the Extended Information Filter (EIF), the proposed filter avoids the approximation errors caused by linearization of the AGV kinematic model and does not require the computation of Jacobians. Processing the residuals generated by the proposed filter with a statistical fault detection and isolation algorithm provides an indication of the condition of the navigation sensors and of failures that may have appeared. As an application example, the paper considers failure diagnosis for the wheel encoders or IMU devices of an AGV.
{"title":"Derivative-free distributed filtering for integrity monitoring of AGV navigation sensors","authors":"G. Rigatos","doi":"10.1109/MFI.2012.6343072","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343072","url":null,"abstract":"The paper proposes a new distributed filtering method, for integrity monitoring of navigation sensors in automatic ground vehicles (AGV). Unlike the Extended Information Filter (EIF), the proposed filter avoids approximation errors caused by the linearization of the AGV kinematic model and does not require the computation of Jacobians. The use of a statistical fault detection and isolation algorithm for processing the residuals generated by the proposed filtering method, can provide an indication about the condition of the navigation sensors and about failures that may have appeared. As an an application example the paper considers failure diagnosis for wheel encoders or IMU devices of an AGV.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127352019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343015
Z. Li, T. Herfet, Martin P. Grochulla, Thorsten Thormählen
Localizing multiple active speakers in natural environments with only two microphones is a challenging problem. Reverberation degrades the performance of speaker localization based exclusively on directional cues. The audio modality alone suffers from limited localization accuracy, while the video modality alone suffers from false speaker-activity detections. This paper presents a two-stage audio-visual fusion approach. In the first stage, speaker activity is detected by audio-visual fusion, which can handle false lip movements. In the second stage, a Gaussian fusion method is proposed to integrate the estimates of both modalities. As a consequence, localization accuracy and robustness are significantly increased compared to either modality alone. Experimental results in various scenarios confirm the improved performance of the proposed system.
{"title":"Multiple active speaker localization based on audio-visual fusion in two stages","authors":"Z. Li, T. Herfet, Martin P. Grochulla, Thorsten Thormählen","doi":"10.1109/MFI.2012.6343015","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343015","url":null,"abstract":"Localization of multiple active speakers in natural environments with only two microphones is a challenging problem. Reverberation degrades performance of speaker localization based exclusively on directional cues. The audio modality alone has problems with localization accuracy while the video modality alone has problems with false speaker activity detections. This paper presents an approach based on audiovisual fusion in two stages. In the first stage, speaker activity is detected based on the audio-visual fusion which can handle false lip movements. In the second stage, a Gaussian fusion method is proposed to integrate the estimates of both modalities. As a consequence, the localization accuracy and robustness compared to the audio/video modality alone is significantly increased. Experimental results in various scenarios confirmed the improved performance of the proposed system.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121356731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343077
Uri Levy, Evyatar Hemo
Seismic Unattended Ground Sensor (UGS) systems play a major role in the developing area of seismic signal processing, with applications mainly in security and surveillance. Identifying and localizing a potential threat is a basic requirement of such systems. Array processing based on measured times of arrival or gain-ratio values is widely used to solve the localization problem. However, for real-world seismic data, estimating time differences and gain ratios of arrival is difficult, owing to the nature of both sensor networks and seismic signals. Sensor synchronization is a common difficulty in such networks, and the demand for low power consumption and low transmission rates prevents solving it by cross-correlating the signals. High variability in sound velocity and background noise across different types of ground, which characterizes the underground environment, adds further difficulty. Hence, applying direct localization algorithms to seismic data often proves ineffective. In this paper, a novel approach to seismic source localization using a UGS system is presented. Given an event of a recurring nature, the proposed algorithm rests on two principles that increase its robustness. First, it utilizes both time-difference and gain-ratio measurements in a decision-directed process. In addition, confidence weights are assigned to each recurrence of the event, which yields a further performance improvement. Results of applying the proposed algorithm to real-world seismic data are presented, and its advantages are demonstrated.
{"title":"Robust source localization using decision-directed algorithm and confidence weights in Unattended Ground Sensors system","authors":"Uri Levy, Evyatar Hemo","doi":"10.1109/MFI.2012.6343077","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343077","url":null,"abstract":"Seismic Unattended Ground Sensors (UGS) systems have a major role in the developing area of seismic signal processing, with applications mainly in security and surveillance systems. Identifying and localizing a potential threat is a preliminary requirement in such systems. Array processing based on measured time of arrivals or gain-ratio values is widely used for solving the localization problem. However, for real world seismic data, estimating time differences and gain-ratios of arrival is a difficult task, due to both the nature of sensors networks and of seismic signals. Sensors synchronization is a common difficulty in networks and the demand for low power consumption and transmission rates prevents solving it by cross-correlating the signals. High variations in sound velocity and background noise among different types of ground, which characterize the underground environment, are additional factors for these difficulties. Hence, applying direct localization algorithms on seismic data often proves ineffective. In this paper, a novel approach toward seismic source localization using UGS system is presented. Given an event of recurring nature, the proposed algorithm is based on two principles which increase its robustness. First, it utilizes both time differences and gain-ratios measurements in a decision directed process. In addition, confidence weights are assigned for each recurrence of the event thus further performance improvement is achieved. Results for applying the proposed algorithm on real-world seismic data are presented and the advantages of the proposed algorithm are demonstrated.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"418 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116556330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343014
Tobias Fromm, B. Staehle, W. Ertel
Robust object recognition is a crucial requirement for many robotic applications. We propose a method for increasing the reliability and flexibility of object recognition for robotics. This is achieved by fusing diverse recognition frameworks and algorithms at the score level, using object characteristics such as shape, texture and color. Machine Learning allows the respective recognition methods' outputs to be combined automatically, instead of having to adapt their hypothesis metrics to a common basis. We show the applicability of our approach through several real-world experiments in a service robotics environment. Particular importance is attached to robustness, especially in varying environments.
{"title":"Robust multi-algorithm object recognition using Machine Learning methods","authors":"Tobias Fromm, B. Staehle, W. Ertel","doi":"10.1109/MFI.2012.6343014","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343014","url":null,"abstract":"Robust object recognition is a crucial requirement for many robotic applications. We propose a method towards increasing reliability and flexibility of object recognition for robotics. This is achieved by the fusion of diverse recognition frameworks and algorithms on score level which use characteristics like shape, texture and color of the objects. Machine Learning allows for the automatic combination of the respective recognition methods' outputs instead of having to adapt their hypothesis metrics to a common basis. We show the applicability of our approach through several real-world experiments in a service robotics environment. Great importance is attached to robustness, especially in varying environments.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"164 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127326328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343035
Junhao Xiao, B. Adler, Houxiang Zhang
This paper focuses on fast 3D point cloud registration in cluttered urban environments. The pipeline consists of three main steps. First, a fast region-growing planar segmentation algorithm is employed to extract planar surfaces. Then the area of each planar patch is calculated using the image-like structure of the organized point cloud. In the last step, registration is posed as a correlation problem: a novel search algorithm combining heuristic search with geometry-consistency pruning is used to find the globally optimal solution in a subset of SO(3) × R³, and the transformation is refined by weighted least squares once the solution is found. Since all possible transformations are traversed, no prior pose estimate from other sensors such as odometry or an IMU is needed, making the method robust and able to handle large rotations.
{"title":"3D point cloud registration based on planar surfaces","authors":"Junhao Xiao, B. Adler, Houxiang Zhang","doi":"10.1109/MFI.2012.6343035","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343035","url":null,"abstract":"This paper focuses on fast 3D point cloud registration in cluttered urban environments. There are three main steps in the pipeline: Firstly a fast region growing planar segmentation algorithm is employed to extract the planar surfaces. Then the area of each planar patch is calculated using the image-like structure of organized point cloud. In the last step, the registration is defined as a correlation problem, a novel search algorithm which combines heuristic search with pruning using geometry consistency is utilized to find the global optimal solution in a subset of SO(3) ∪ R3, and the transformation is refined using weighted least squares after finding the solution. Since all possible transformations are traversed, no prior pose estimation from other sensors such as odometry or IMU is needed, makeing it robust and can deal with big rotations.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126428791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343069
Ping Song, Yiping Wang, Xiaoyue Wang, Zhiqiang Pan
This paper proposes a constraint mechanism for mobile sensor networks based on congestion willingness. The mechanism addresses shortcomings of current constraint mechanisms, which adjust formations poorly and lack elasticity and flexibility. The basic principle of the constraint model is to imitate the willingness of higher organisms to maintain, on their own, the spacing between themselves and other mobile nodes or obstacles. Compared with other constraint mechanisms, this one is simple, flexible, efficient and robust, and requires only a small amount of communication. It can therefore be used in mobile sensor networks whose nodes are not highly intelligent. Simulation results show that the proposed constraint mechanism can realize clustering, fragmentation and formation maintenance for multiple mobile nodes.
{"title":"A congestion will based constraints mechanism of mobile sensor network","authors":"Ping Song, Yiping Wang, Xiaoyue Wang, Zhiqiang Pan","doi":"10.1109/MFI.2012.6343069","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343069","url":null,"abstract":"A constraint mechanism of mobile sensor network based on congestion will is proposed in this paper. This mechanism can solve the problems in current constraint mechanisms that are weak to adjust the formation, lack of elasticity and flexibility. The basic principle of the constraint model is to simulate the willingness of higher organisms that maintain the space between their own to other mobile nodes or obstacles by themselves. Compared with other constraint mechanisms, this mechanism is simple, flexible, efficient, robust, and the amount of communication is small. Therefore, it can be used for the mobile sensor network which the nodes are not highly intelligent. Simulation results show that this constraint mechanism can realize cluster, fragmentation and formation maintenance of multiple mobile nodes.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121542310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343076
Wei Xiao, F. Sun, Huaping Liu, Heyu Liu, Chao He
Learning from sensor data is important in many robotic research areas, such as dexterous robotic hand grasping. In this paper, a piecewise linear dynamic model is proposed for analyzing robotic hand grasps. Combining a linear dynamic model with switched systems achieves better results in grasp learning because it captures the multi-phase nature of the grasping process. To the best of the authors' knowledge, this is the first time a piecewise linear dynamic model has been incorporated into the framework of modeling the robotic hand grasping process. The performance of the proposed model is evaluated on our experimental system and shows promising results.
{"title":"Dexterous robotic hand grasp modeling using piecewise linear dynamic model","authors":"Wei Xiao, F. Sun, Huaping Liu, Heyu Liu, Chao He","doi":"10.1109/MFI.2012.6343076","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343076","url":null,"abstract":"Learning from sensor data is important in many robotic research areas, such as dexterous robotic hand grasping. In this paper, a piecewise linear dynamic model is proposed for analyzing robotic hand grasp. The combination of linear dynamic model and the switched systems can achieve better results in grasp learning due to its advantage of modeling multi-phase grasping process. To the best knowledge of the authors, this is the first time for piecewise linear dynamic model to be incorporated into the framework of modeling robotic hand grasp process. The performance of the proposed model is evaluated on our experimental system and shows promising results.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126904698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343016
L. Winkler, Vojtěch Vonásek, H. Wörn, L. Preucil
A heterogeneous, mobile, self-reconfigurable and modular robot platform is being developed in the projects SYMBRION and REPLICATOR. The locomotion of the robots, as well as the forming of robot organisms, will be controlled using evolutionary and bio-inspired techniques. As the robots are not available at the beginning of the projects, and experiments are time-consuming and carry the risk of damaging the robots, the evolutionary algorithms will be run in simulation. The simulation has to provide realistic movements of a swarm of robots, simulate the docking procedure between robots, and simulate organism motion. High requirements are imposed on such a simulator. We developed the Robot3D simulator, which dynamically simulates a swarm of mobile robots as well as robot organisms. In this paper we give an overview of the simulation framework, show first results of performance tests, and present applications for which Robot3D has already been used.
{"title":"Robot3D — A simulator for mobile modular self-reconfigurable robots","authors":"L. Winkler, Vojtěch Vonásek, H. Wörn, L. Preucil","doi":"10.1109/MFI.2012.6343016","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343016","url":null,"abstract":"A heterogeneous, mobile, self-reconfigurable and modular robot platform is being developed in the projects SYMBRION and REPLICATOR. The locomotion of the robots as well as forming of the robot organisms will be controlled using evolutionary and bio-inspired techniques. As the robots are not available at the beginning of the projects and experiments are time consuming and carry risks of damaging the robots, the evolutionary algorithms will be run using a simulation. The simulation has to provide realistic movements of a swarm of robots, simulating the docking procedure between the robots as well as simulating organism motion. High requirements are imposed on such a simulator. We developed the Robot3D simulator, which dynamically simulates a swarm of mobile robots as well as robot organisms. In this paper we will give an overview of the simulation framework, we will show first results of performance tests and we will present applications for which Robot3D has already been used.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129122986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343036
Hongbin Liu, Juan Greco, Xiaojing Song, João Bimbo, L. Seneviratne, K. Althoefer
This paper proposes a novel algorithm for recognizing the shape of an object in contact with a robotic finger through tactile pressure sensing. The developed algorithm is capable of distinguishing contact shapes from a set of low-resolution pressure maps. Within this algorithm, a novel feature extraction technique is developed that transforms a pressure map into a 512-element feature vector. The extracted feature is invariant to scale, position and partial occlusion, and is independent of the sensor's resolution and image size. To recognize different contact shapes from a pressure map, a neural network classifier is developed that takes the feature vector as input. Tests with four different contact shapes show that the trained neural network achieves a high success rate of over 90%. Contact sensory information plays a crucial role in robotic hand gestures. The algorithm introduced in this paper has the potential to provide valuable feedback for automating and improving robotic hand grasping and manipulation.
{"title":"Tactile image based contact shape recognition using neural network","authors":"Hongbin Liu, Juan Greco, Xiaojing Song, João Bimbo, L. Seneviratne, K. Althoefer","doi":"10.1109/MFI.2012.6343036","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343036","url":null,"abstract":"This paper proposes a novel algorithm for recognizing the shape of object which in contact with a robotic finger through the tactile pressure sensing. The developed algorithm is capable of distinguishing the contact shapes between a set of low-resolution pressure map. Within this algorithm, a novel feature extraction technique is developed which transforms a pressure map into a 512-feature vector. The extracted feature of the pressure map is invariant to scale, positioning and partial occlusion, and is independent of the sensor's resolution or image size. To recognize different contact shape from a pressure map, a neural network classifier is developed and uses the feature vector as inputs. It has proven from tests of using four different contact shapes that, the trained neural network can achieve a high success rate of over 90%. Contact sensory information plays a crucial role in robotic hand gestures. The algorithm introduced in this paper has the potential to provide valuable feedback information to automate and improve robotic hand grasping and manipulation.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124497594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343041
Shuanglong Liu, Yuchao Tang, Chun Zhang, Shigang Yue
Node localization has long been recognized as a key problem in sensor networks. Self-mapping in wireless sensor networks, which enables beacon-based systems to build a node map on the fly, extends the range of sensor network applications. A variety of self-mapping algorithms have been developed; some assume no prior information and estimate only the relative locations of the sensor nodes. In this paper, we assume that only a very small percentage of the sensor nodes are aware of their own locations, and the proposed algorithm estimates the absolute locations of the other nodes using distance differences. In particular, time difference of arrival (TDOA) is used to obtain the distance differences. The achieved time-difference accuracy is 10 ns, which corresponds to a distance-difference error of 3 m. We evaluate the self-mapping accuracy with a small number of seed nodes. Overall, the accuracy and coverage are shown to be comparable to results achieved with other technologies and algorithms.
{"title":"Self-map building in wireless sensor network based on TDOA measurements","authors":"Shuanglong Liu, Yuchao Tang, Chun Zhang, Shigang Yue","doi":"10.1109/MFI.2012.6343041","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343041","url":null,"abstract":"Node localization has long been established as a key problem in the sensor networks. Self-mapping in wireless sensor network which enables beacon-based systems to build a node map on-the-fly extends the range of the sensor network's applications. A variety of self-mapping algorithms have been developed for the sensor networks. Some algorithms assume no information and estimate only the relative location of the sensor nodes. In this paper, we assume a very small percentage of the sensor nodes aware of their own locations, so the proposed algorithm estimates other node's absolute location using the distance differences. In particular, time difference of arrival (TDOA) technology is adopted to obtain the distance difference. The obtained time difference accuracy is 10ns which corresponds to a distance difference error of 3m. We evaluate self-mapping's accuracy with a small number of seed nodes. Overall, the accuracy and the coverage are shown to be comparable to those achieved results with other technologies and algorithms.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134020116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}