A method for post-mission velocity and orientation estimation based on data fusion from MEMS-IMU and GNSS
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170383
P. Davidson, R. Piché
INS and GNSS integrated systems have become widespread as a result of low-cost MEMS inertial sensor technology. However, the accuracy of the computed velocity and orientation is not sufficient for some applications, e.g. performance and technique monitoring and evaluation in sports. Significant accuracy improvements can be made by post-mission data processing. The approach is based on a fixed-lag Rauch-Tung-Striebel smoothing algorithm and provides a simple and effective solution to misalignment correction. The potential velocity accuracy is about 0.02 m/s and the pitch/roll accuracy is about 0.02 deg. The algorithm was tested on walking and running data. The proposed approach could also be used for accurate velocity and orientation estimation in other sports, e.g. rowing, paddling, cross-country and downhill skiing, and ski jumping.
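For readers unfamiliar with the smoother named in the abstract, the sketch below runs a Rauch-Tung-Striebel pass over a toy 1-D constant-velocity model with position-only (GNSS-like) measurements. It is a minimal illustration of the technique, not the paper's INS/GNSS error-state implementation; the model, noise values, and function name are assumptions.

```python
# Minimal Rauch-Tung-Striebel smoother on a 1-D constant-velocity model.
# Illustrative only -- the paper's INS/GNSS error-state model is far richer.
import numpy as np

def rts_smoother(zs, dt=0.1, q=0.01, r=0.5):
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state: [position, velocity]
    H = np.array([[1.0, 0.0]])                 # GNSS-like position measurement
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    xs, Ps, Pps = [], [], []
    x, P = np.zeros(2), np.eye(2)
    for z in zs:                               # forward Kalman filter
        xp, Pp = F @ x, F @ P @ F.T + Q
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
        x = xp + K @ (np.atleast_1d(z) - H @ xp)
        P = (np.eye(2) - K @ H) @ Pp
        xs.append(x); Ps.append(P); Pps.append(Pp)
    for k in range(len(zs) - 2, -1, -1):       # backward RTS pass
        C = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])
        xs[k] = xs[k] + C @ (xs[k + 1] - F @ xs[k])
        Ps[k] = Ps[k] + C @ (Ps[k + 1] - Pps[k + 1]) @ C.T
    return np.array(xs), np.array(Ps)

zs = np.cumsum(np.full(50, 0.1)) + 0.5 * np.random.randn(50)
states, covs = rts_smoother(zs)                # smoothed position and velocity
```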
{"title":"A method for post-mission velocity and orientation estimation based on data fusion from MEMS-IMU and GNSS","authors":"P. Davidson, R. Piché","doi":"10.1109/MFI.2017.8170383","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170383","url":null,"abstract":"INS and GNSS integrated systems have become widespread as a result of low-cost MEMS inertial sensor technology. However, the accuracy of computed velocity and orientation is not sufficient for some applications, e.g. performance and technique monitoring and evaluation in sports. Significant accuracy improvements can be made by post-mission data processing. The approach is based on fixed-lag Rauch-Tung-Striebel smoothing algorithm and provides a simple and effective solution to misalignment correction. The potential velocity accuracy is about 0.02 m/s and pitch/roll accuracy is about 0.02 deg. This algorithm was tested for walking and running. The proposed approach could also be used for accurate velocity and orientation estimation in other applications including different sports, e.g. rowing, paddling, cross-country and downhill skiing, ski jump etc.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"722 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133476275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multi-agent coverage algorithm with connectivity maintenance
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170372
Sungjoon Choi, Kyungjae Lee, Songhwai Oh
This paper presents a connectivity control algorithm for a multi-agent system. The connectivity of the multi-agent system can be represented by the second-smallest eigenvalue λ2 of the Laplacian matrix LG, also referred to as the algebraic connectivity. Unlike many existing connectivity control algorithms, which adopt convex optimization techniques to maximize the algebraic connectivity directly, we first show that the algebraic connectivity can be maximized by minimizing the weighted sum of distances between connected agents. We implement a hill-climbing algorithm that minimizes this weighted sum, and use semi-definite programming (SDP) to compute a proper weight vector w∗. The proposed algorithm can effectively be combined with other cooperative applications such as covering an unknown area or following a leader.
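As a minimal illustration of the quantity being maximized, the sketch below computes λ2 of a distance-weighted Laplacian with plain numpy; the proximity-based weighting and communication range are assumptions standing in for the paper's SDP-computed weights w∗.

```python
# Computing the algebraic connectivity (lambda_2) of a weighted proximity
# graph -- a toy stand-in for the paper's SDP-weighted formulation.
import numpy as np

def algebraic_connectivity(pos, comm_range=2.0):
    n = len(pos)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(pos[i] - pos[j])
            if d < comm_range:                       # connected agents only
                W[i, j] = W[j, i] = comm_range - d   # closer => heavier weight
    L = np.diag(W.sum(axis=1)) - W                   # graph Laplacian L_G
    return np.sort(np.linalg.eigvalsh(L))[1]         # second-smallest eigenvalue

pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0], [1.5, 1.0]])
print(algebraic_connectivity(pos))                   # > 0 means the graph is connected
```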
{"title":"A multi-agent coverage algorithm with connectivity maintenance","authors":"Sungjoon Choi, Kyungjae Lee, Songhwai Oh","doi":"10.1109/MFI.2017.8170372","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170372","url":null,"abstract":"This paper presents a connectivity control algorithm of a multi-agent system. The connectivity of the multi-agent system can be represented by the second smallest eigenvalue λ2 of the Laplacian matrix LG and it is also referred to as algebraic connectivity. Unlike many of the existing connectivity control algorithms which adapt convex optimization technique to maximize algebraic connectivity, we first show that the algebraic connectivity can be maximized by minimizing the weighted sum of distances between the connected agents. We implement a hill-climbing algorithm that minimizes the weighted sum of distances. Semi-definite programming (SDP) is used for computing proper weight w∗. Our proposed algorithm can effectively be mixed with other cooperative applications such as covering an unknown area or following a leader.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121978752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Entropy-based abnormal activity detection fusing RGB-D and domotic sensors
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170405
M. Fernández-Carmona, S. Coşar, Claudio Coppola, N. Bellotto
The automatic detection of anomalies in Active and Assisted Living (AAL) environments is important for monitoring the wellbeing and safety of the elderly at home. The integration of smart domotic sensors (e.g. presence detectors) with those equipping modern mobile robots (e.g. RGB-D cameras) provides new opportunities for addressing this challenge. In this paper, we propose a novel solution that combines local activity levels detected by a single RGB-D camera with the global activity perceived by a network of domotic sensors. Our approach relies on a new method for computing this global activity from various presence detectors, based on the concept of entropy from information theory. The entropy effectively indicates how active a particular room or area of the environment is. The solution also includes a new application of Hybrid Markov Logic Networks (HMLNs) to merge the different information sources for local and global anomaly detection. The system has been tested on a comprehensive dataset of RGB-D and domotic data containing entries from 37 different domotic sensors (presence, temperature, light, energy consumption, door contact), which is made publicly available. The experimental results show the effectiveness of our approach and its potential for complex anomaly detection in AAL settings.
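The entropy idea can be sketched in a few lines: treat the activation counts of the presence detectors as a distribution and compute its Shannon entropy, so activity concentrated in one room scores low and activity spread across the home scores high. This is a rough reading of the concept, not the authors' exact formulation.

```python
# Shannon entropy of presence-detector activations as a global activity
# measure -- a sketch of the idea, not the paper's exact formulation.
import numpy as np

def activity_entropy(counts):
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()                  # activation distribution over sensors
    p = p[p > 0]                               # silent sensors contribute 0 log 0 = 0
    return -(p * np.log2(p)).sum()

quiet_night = [120, 2, 1, 1]    # activity concentrated in one room: low entropy
busy_day    = [30, 28, 35, 31]  # activity spread across rooms: high entropy
print(activity_entropy(quiet_night), activity_entropy(busy_day))
```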
{"title":"Entropy-based abnormal activity detection fusing RGB-D and domotic sensors","authors":"M. Fernández-Carmona, S. Coşar, Claudio Coppola, N. Bellotto","doi":"10.1109/MFI.2017.8170405","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170405","url":null,"abstract":"The automatic detection of anomalies in Active and Assisted Living (AAL) environments is important for monitoring the wellbeing and safety of the elderly at home. The integration of smart domotic sensors (e.g. presence detectors) and those ones equipping modern mobile robots (e.g. RGB-D cameras) provides new opportunities for addressing this challenge. In this paper, we propose a novel solution to combine local activity levels detected by a single RGB-D camera with the global activity perceived by a network of domotic sensors. Our approach relies on a new method for computing such a global activity using various presence detectors, based on the concept of entropy from information theory. This entropy effectively shows how active a particular room or environment's area is. The solution includes also a new application of Hybrid Markov Logic Networks (HMLNs) to merge different information sources for local and global anomaly detection. The system has been tested with a comprehensive dataset of RGB-D and domotic data containing data entries from 37 different domotic sensors (presence, temperature, light, energy consumption, door contact), which is made publicly available. The experimental results show the effectiveness of our approach and its potential for complex anomaly detection in AAL settings.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127527697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Target motion analysis with evolutionary search by fusion of two moving acoustic sensors
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170436
Hyunhak Shin, W. Hong, Ria Kim, Hanseok Ko
This paper focuses on target motion analysis by fusing information from two moving acoustic sensors. The two sensors may provide only a small number of measurements when a quick analysis of moving targets is required, and in this situation conventional approaches often fail to estimate the targets' motion accurately. A fusion algorithm for target motion analysis designed to handle this situation is therefore proposed. First, particle swarm optimization (PSO) is applied to find an accurate initial estimate of the target motion. Second, the target trajectory is estimated via a sequential fusion algorithm based on the unscented Kalman filter (UKF). The effectiveness of the proposed method is verified on various simulation results.
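A bare-bones particle swarm optimizer of the kind that could seed the initial target state is sketched below; the cost function, swarm parameters, and state layout are stand-ins, since the paper's acoustic measurement model is not given in the abstract.

```python
# Generic particle swarm optimization for seeding an initial target state.
# The quadratic toy cost stands in for the paper's measurement residual.
import numpy as np

def pso(cost, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-10, 10, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost                     # update personal bests
        pbest[better], pbest_cost[better] = x[better], c[better]
        gbest = pbest[pbest_cost.argmin()].copy()   # update global best
    return gbest

# Toy cost: distance to a 'true' initial state [px, py, vx, vy].
true_state = np.array([3.0, -2.0, 0.5, 1.0])
print(pso(lambda s: np.sum((s - true_state) ** 2), dim=4))
```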
{"title":"Target motion analysis with evolutionary search by fusion of two moving acoustic sensors","authors":"Hyunhak Shin, W. Hong, Ria Kim, Hanseok Ko","doi":"10.1109/MFI.2017.8170436","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170436","url":null,"abstract":"This paper focuses on target motion analysis by fusion of information from two moving acoustic sensors. These two sensors may obtain small measurements in order to make quick analysis of moving targets. In this situation, conventional approaches often fail to find accurate motion of targets. In this paper, a fusion algorithm for target motion analysis designed to handle this situation is proposed. First, optimization based PSO is applied in order to find accurate initial motion of targets. Second, the target trajectory is estimated via a sequential fusion algorithm based on UKF. According to the various simulated results, the effectiveness of the proposed method is then verified.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114988438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time human collision detection for industrial robot cells
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170368
Erind Ujkani, Petter S. Eppeland, Atle Aalerud, G. Hovland
A collision detection system triggered by human motion was developed using the Robot Operating System (ROS) and the Point Cloud Library (PCL). ROS was used as the core of the programs and for communication with an industrial robot. The depth fields from the 3D cameras were combined using PCL, which was also the underlying tool for segmenting the human from the registered point clouds. Several collision algorithms were benchmarked to compare the solution. The registration process gave satisfactory results when testing the repeatability and accuracy of the implementation. The segmentation algorithm successfully segmented a person represented by 4000–6000 points in real time.
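Stripped of the ROS/PCL machinery, the core trigger reduces to a proximity test between the segmented human cluster and the robot body. The sketch below shows such a test in numpy under an assumed spherical robot model and safety radius; the real system operates on registered PCL clouds.

```python
# Distance-threshold collision check between a segmented 'human' point
# cluster and a spherical robot body -- a simplification of the PCL pipeline.
import numpy as np

def human_too_close(human_points, robot_center, safety_radius=1.0):
    d = np.linalg.norm(human_points - robot_center, axis=1)
    return bool((d < safety_radius).any()), float(d.min())

# A synthetic person of ~5000 points standing 1.5 m from the robot.
human = np.random.randn(5000, 3) * 0.2 + np.array([1.5, 0.0, 1.0])
collide, d_min = human_too_close(human, robot_center=np.array([0.0, 0.0, 1.0]))
print(collide, round(d_min, 2))
```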
{"title":"Real-time human collision detection for industrial robot cells","authors":"Erind Ujkani, Petter S. Eppeland, Atle Aalerud, G. Hovland","doi":"10.1109/MFI.2017.8170368","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170368","url":null,"abstract":"A collision detection system triggering on human motion was developed using the Robot Operating System (ROS) and the Point Cloud Library (PCL). ROS was used as the core of the programs and for the communication with an industrial robot. Combining the depths fields from the 3D cameras was accomplished by the use of PCL. The library was also the underlying tool for segmenting the human from the registrated point clouds. Benchmarking of several collision algorithms was done in order to compare the solution. The registration process gave satisfactory results when testing the repetitiveness and the accuracy of the implementation. The segmentation algorithm was able to segment a person represented by 4–6000 points in real-time successfully.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128296840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On weak points of the ellipsoidal intersection fusion
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170403
Jiří Ajgl, O. Straka
The Ellipsoidal Intersection algorithm aims at fusing two state estimates under partial knowledge of the cross-correlation of the estimation errors. However, it has been observed that it does not provide an upper bound on all admissible fused mean square error matrices. This paper provides a mathematical tool for analysing the fusion under the considered partial knowledge of correlations. The tool facilitates visualising the improvement gained by the partial knowledge and exposes weak points of the Ellipsoidal Intersection fusion. Finally, the strictness of the fusion assumption relative to the Covariance Intersection fusion is demonstrated.
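For context, the Covariance Intersection fusion used as the point of comparison can be sketched compactly: it fuses two estimates without any knowledge of their cross-correlation by optimizing a scalar weight ω. The trace criterion and test values below are illustrative choices.

```python
# Covariance Intersection: conservative fusion of two estimates whose
# cross-correlation is unknown, via a convex combination in information form.
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(xa, Pa, xb, Pb):
    def fused(w):
        P = np.linalg.inv(w * np.linalg.inv(Pa) + (1 - w) * np.linalg.inv(Pb))
        x = P @ (w * np.linalg.inv(Pa) @ xa + (1 - w) * np.linalg.inv(Pb) @ xb)
        return x, P
    # choose the weight minimizing the trace of the fused covariance
    res = minimize_scalar(lambda w: np.trace(fused(w)[1]),
                          bounds=(0.0, 1.0), method='bounded')
    return fused(res.x)

xa, Pa = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
xb, Pb = np.array([1.2, 0.3]), np.diag([4.0, 1.0])
x, P = covariance_intersection(xa, Pa, xb, Pb)
print(x, np.diag(P))
```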
{"title":"On weak points of the ellipsoidal intersection fusion","authors":"Jiří Ajgl, O. Straka","doi":"10.1109/MFI.2017.8170403","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170403","url":null,"abstract":"The Ellipsoidal Intersection algorithm aims at fusing two state estimates under a partial knowledge of the cross-correlation of the estimation errors. However, it has been observed that it does not provide an upper bound of all admissible fused mean square error matrices. This paper provides a mathematical tool for an analysis of the fusion under the considered partial knowledge of correlations. The tool facilitates the visualisation of the improvement gained by the partial knowledge and exposes weak points of the Ellipsoidal Intersection fusion. Finally, strictness of the fusion assumption relative to the Covariance Intersection fusion is demonstrated.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128340448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time 3D scene modeling using dynamic billboard for remote robot control systems
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170454
P. Chu, Seoungjae Cho, Hieu Trong Nguyen, Sungdae Sim, K. Kwak, Kyungeun Cho
In this paper, a method for modeling three-dimensional scenes from a Lidar point cloud, together with a billboard calibration approach for remote mobile robot control applications, is presented as a combined two-step approach. First, by projecting the local three-dimensional point cloud onto a two-dimensional coordinate system, we obtain a list of colored points. Based on this list, we apply the proposed ground segmentation algorithm to separate ground and non-ground areas. For the ground part, a dynamic triangular mesh is created by means of a height map and the vehicle position. The non-ground part is divided into small groups, and a local voxel map is applied to model each group; as a result, all inner surfaces are eliminated. Second, billboard calibration is performed in three stages per frame: an average ground point is estimated at the billboard location, the distortion angle is calculated, and finally the billboard is updated so that it corresponds to the terrain gradient.
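A much-simplified version of the ground/non-ground split can be written as a height-map test: project points into 2-D grid cells and call a cell ground when its height spread is small. The cell size and spread threshold below are assumptions; the paper's algorithm also exploits point color and the vehicle position.

```python
# Grid-based ground/non-ground split of a Lidar cloud: project to 2-D cells
# and threshold each cell's height spread -- a simplified take on the step.
import numpy as np

def segment_ground(points, cell=0.5, max_spread=0.15):
    ij = np.floor(points[:, :2] / cell).astype(int)   # 2-D cell index per point
    keys = ij[:, 0] * 100000 + ij[:, 1]               # flatten index to one key
    ground = np.zeros(len(points), dtype=bool)
    for k in np.unique(keys):
        m = keys == k
        z = points[m, 2]
        if z.max() - z.min() < max_spread:            # flat cell => ground
            ground[m] = True
    return ground

pts = np.random.rand(2000, 3) * [20, 20, 0.05]        # mostly flat terrain
pts[:100, 2] += 1.5                                   # a protruding obstacle
print(segment_ground(pts).sum(), "ground points of", len(pts))
```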
{"title":"Real-time 3D scene modeling using dynamic billboard for remote robot control systems","authors":"P. Chu, Seoungjae Cho, Hieu Trong Nguyen, Sungdae Sim, K. Kwak, Kyungeun Cho","doi":"10.1109/MFI.2017.8170454","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170454","url":null,"abstract":"In this paper, a method for modeling three-dimensional scenes from a Lidar point cloud as well as a billboard calibration approach for remote mobile robot control applications are presented as a combined two-step approach. First, by projecting a local three-dimensional point cloud on two-dimensional coordinate system, we obtain a list of colored points. Based on this list, we apply a proposed ground segmentation algorithm to separate ground and non-ground areas. With the ground part, a dynamic triangular mesh is created by means of a height map and the vehicle position. The non-ground part is divided into small groups. Then, a local voxel map is applied for modeling each group. As a result, all the inner surfaces are eliminated. Second, for billboard calibration, we implement three stages in each frame. In the first stage, at the billboard location, an average ground point is estimated. In the second stage, the distortion angle is calculated. The billboard is updated for each frame in the final stage and corresponds to the terrain gradient.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128651732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Covariance Estimation — A parameter free approach to robust Sensor Fusion
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170347
Tim Pfeifer, Sven Lange, P. Protzel
In robotics, non-linear least squares estimation is a common technique for simultaneous localization and mapping. One of the remaining challenges is measurement outliers, which lead to inconsistency or even divergence of the optimization process. Recently, several approaches for robust state estimation that deal with outliers inside the optimization back-end were presented, but all of them include at least one arbitrary tuning parameter that has to be set manually for each new application. Under changing environmental conditions, this can lead to poor convergence properties and erroneous estimates. To overcome this shortcoming, we propose a novel robust algorithm based on a parameter-free probabilistic foundation called Dynamic Covariance Estimation. We derive our algorithm directly from the probabilistic formulation of a Gaussian maximum likelihood estimator. By including the measurement covariance in the optimization problem, we allow the optimizer to adapt it to the sensor's real noise properties. Finally, we demonstrate the robustness of our approach on a real-world wireless localization application where two similar state-of-the-art algorithms fail without extensive parameter tuning.
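The core idea, estimating each measurement's covariance inside the optimization rather than fixing it, can be sketched on a scalar problem: with a lower bound on the standard deviation, the per-residual maximum-likelihood σ has the closed form max(σ_min, |r|), which turns the quadratic cost into a robust one. The scalar state, the bound, and the data below are assumptions for illustration, not the paper's formulation.

```python
# Dynamic-covariance-style robust estimation on a scalar state: each residual
# r_i gets its own ML standard deviation sigma_i = max(sigma_min, |r_i|),
# obtained by minimizing 0.5*(r/sigma)^2 + log(sigma) in closed form.
import numpy as np
from scipy.optimize import minimize_scalar

def dce_cost(x, z, sigma_min=0.3):
    r = np.abs(z - x)
    sigma = np.maximum(sigma_min, r)           # per-measurement ML sigma
    return np.sum(0.5 * (r / sigma) ** 2 + np.log(sigma))

z = np.concatenate([np.random.normal(5.0, 0.3, 80),
                    np.random.normal(20.0, 0.3, 5)])   # 5 gross outliers
res = minimize_scalar(lambda x: dce_cost(x, z), bounds=(-50, 50), method='bounded')
print(res.x)    # stays close to 5.0 despite the outliers
```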
{"title":"Dynamic Covariance Estimation — A parameter free approach to robust Sensor Fusion","authors":"Tim Pfeifer, Sven Lange, P. Protzel","doi":"10.1109/MFI.2017.8170347","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170347","url":null,"abstract":"In robotics, non-linear least squares estimation is a common technique for simultaneous localization and mapping. One of the remaining challenges are measurement outliers leading to inconsistency or even divergence within the optimization process. Recently, several approaches for robust state estimation dealing with outliers inside the optimization back-end were presented, but all of them include at least one arbitrary tuning parameter that has to be set manually for each new application. Under changing environmental conditions, this can lead to poor convergence properties and erroneous estimates. To overcome this insufficiency, we propose a novel robust algorithm based on a parameter free probabilistic foundation called Dynamic Covariance Estimation. We derive our algorithm directly from the probabilistic formulation of a Gaussian maximum likelihood estimator. Through including its covariance in the optimization problem, we empower the optimizer to approximate these to the sensor's real properties. Finally, we prove the robustness of our approach on a real world wireless localization application where two similar state-of-the-art algorithms fail without extensive parameter tuning.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128212616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementation of semantic segmentation for road and lane detection on an autonomous ground vehicle with LIDAR
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170358
Kai Li Lim, T. Drage, T. Bräunl
While current implementations of LIDAR-based autonomous driving systems are capable of road following and obstacle avoidance, they are still unable to detect road lane markings, which is required for lane keeping during autonomous driving. In this paper, we present an implementation of semantic image segmentation to enhance a LIDAR-based autonomous ground vehicle with road and lane marking detection, in addition to object perception and classification. To achieve this, we installed and calibrated a low-cost monocular camera on a LIDAR-fitted Formula-SAE Electric car as our test bench. Tests were performed first on video recordings of local roads to verify the feasibility of semantic segmentation, and then on the Formula-SAE car with LIDAR readings. Results confirmed that the road areas in each video frame were properly segmented and that road edges and lane markers can be classified. By combining this information with LIDAR measurements of road edges and obstacles, distance measurements for each segmented object can be obtained, allowing the vehicle to be programmed to drive autonomously within the road lanes and away from road edges.
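The fusion step described at the end reduces to projecting LIDAR points into the segmented image and reading off a range per semantic class. The pinhole intrinsics, class IDs, and toy mask below are invented for illustration.

```python
# Fusing a semantic mask with LIDAR: project 3-D points (camera frame) into
# the image with a pinhole model and take the nearest range per class.
import numpy as np

def per_class_min_range(points_cam, seg_mask, fx=700, fy=700, cx=640, cy=360):
    X, Y, Z = points_cam.T
    valid = Z > 0.5                            # keep points in front of the camera
    u = (fx * X[valid] / Z[valid] + cx).astype(int)
    v = (fy * Y[valid] / Z[valid] + cy).astype(int)
    h, w = seg_mask.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    classes = seg_mask[v[inside], u[inside]]   # semantic label per projected point
    ranges = np.linalg.norm(points_cam[valid][inside], axis=1)
    return {c: ranges[classes == c].min() for c in np.unique(classes)}

mask = np.zeros((720, 1280), dtype=int)        # 0 = road (toy layout)
mask[:, 900:] = 1                              # 1 = lane marking
pts = np.random.rand(500, 3) * [10, 2, 30] + [0, 1, 2]
print(per_class_min_range(pts, mask))          # nearest range per class
```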
{"title":"Implementation of semantic segmentation for road and lane detection on an autonomous ground vehicle with LIDAR","authors":"Kai Li Lim, T. Drage, T. Bräunl","doi":"10.1109/MFI.2017.8170358","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170358","url":null,"abstract":"While current implementations of LIDAR-based autonomous driving systems are capable of road following and obstacle avoidance, they are still unable to detect road lane markings, which is required for lane keeping during autonomous driving sequences. In this paper, we present an implementation of semantic image segmentation to enhance a LIDAR-based autonomous ground vehicle for road and lane marking detection, in addition to object perception and classification. To achieve this, we installed and calibrated a low-cost monocular camera onto a LIDAR-fitted Formula-SAE Electric car as our test bench. Tests were performed first on video recordings of local roads to verify the feasibility of semantic segmentation, and then on the Formula-SAE car with LIDAR readings. Results from semantic segmentation confirmed that the road areas in each video frame were properly segmented, and that road edges and lane markers can be classified. By combining this information with LIDAR measurements for road edges and obstacles, distance measurements for each segmented object can be obtained, thereby allowing the vehicle to be programmed to drive autonomously within the road lanes and away from road edges.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130100508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast and robust localization using laser rangefinder and wifi data
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170415
Renato Miyagusuku, Yiploon Seow, A. Yamashita, H. Asama
Laser rangefinders are very popular sensors for robot localization due to their accuracy. Typically, localization algorithms based on these sensors compare range measurements with previously obtained maps of the environment. As many indoor environments are highly symmetrical (e.g., most rooms have the same layout and most corridors are very similar), such systems may fail to distinguish one location from another, leading to slow convergence and even severe localization failures. To address these two issues, we propose a novel system that incorporates WiFi-based localization into a typical Monte Carlo localization algorithm that primarily uses laser rangefinders. Besides the Monte Carlo localization algorithm, our system is mainly composed of two modules. The first uses WiFi data in conjunction with the occupancy grid map of the environment to make global localization converge fast and reliably. The second detects possible localization failures using a metric based on the WiFi models. To test the feasibility of our system, we performed experiments in an office environment. Results show that our system enables fast convergence and can detect localization failures with minimal additional computation. We have also made all our datasets and software available online for the community.
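The gist of the fusion is a particle-weight update that multiplies the laser likelihood by a coarse WiFi likelihood, so the WiFi data breaks ties between laser-symmetric locations. The Gaussian WiFi model and the stub "two identical corridors" laser likelihood below are assumptions.

```python
# Particle-weight update combining laser and WiFi likelihoods -- the gist of
# fusing a coarse WiFi position estimate into Monte Carlo localization.
import numpy as np

def update_weights(particles, weights, laser_lik, wifi_mean, wifi_std):
    w_laser = laser_lik(particles)                       # e.g. from scan matching
    d = np.linalg.norm(particles[:, :2] - wifi_mean, axis=1)
    w_wifi = np.exp(-0.5 * (d / wifi_std) ** 2)          # coarse Gaussian WiFi model
    w = weights * w_laser * w_wifi
    return w / w.sum()

rng = np.random.default_rng(1)
particles = rng.uniform(0, 20, (1000, 3))                # x, y, heading
weights = np.full(1000, 1e-3)
# Stub laser likelihood: two symmetric corridors the scan cannot distinguish.
laser = lambda p: (np.exp(-0.5 * ((p[:, 0] - 5) / 1.0) ** 2)
                   + np.exp(-0.5 * ((p[:, 0] - 15) / 1.0) ** 2))
weights = update_weights(particles, weights, laser,
                         wifi_mean=np.array([5.0, 10.0]), wifi_std=3.0)
```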
{"title":"Fast and robust localization using laser rangefinder and wifi data","authors":"Renato Miyagusuku, Yiploon Seow, A. Yamashita, H. Asama","doi":"10.1109/MFI.2017.8170415","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170415","url":null,"abstract":"Laser rangefinders are very popular sensors in robot localization due to their accuracy. Typically, localization algorithms based on these sensors compare range measurements with previously obtained maps of the environment. As many indoor environments are highly symmetrical (e.g., most rooms have the same layout and most corridors are very similar) these systems may fail to recognize one location from another, leading to slow convergence and even severe localization problems. To address these two issues we propose a novel system which incorporates WiFi-based localization into a typical Monte Carlo localization algorithm that primarily uses laser rangefinders. Our system is mainly composed of two modules other than the Monte Carlo localization algorithm. The first uses WiFi data in conjunction with the occupancy grid map of the environment to solve convergence of global localization fast and reliably. The second detects possible localization failures using a metric based on WiFi models. To test the feasibility of our system, we performed experiments in an office environment. Results show that our system allows fast convergence and can detect localization failures with minimum additional computation. We have also made all our datasets and software readily available online for the community.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116990639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}