Towards real-time multi-sensor information retrieval in Cloud Robotic System
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343054 | 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
Lujia Wang, Ming Liu, M. Meng, R. Siegwart
Cloud Robotics is currently attracting interest in both academia and industry. It allows different types of robots to share information and develop new skills even without specific sensors, and to perform intensive tasks by combining multiple robots in a cooperative manner. Multi-sensor data retrieval is one of the fundamental tasks for the resource sharing demanded by a Cloud Robotic system. However, many technical challenges persist; for example, Multi-Sensor Data Retrieval (MSDR) is particularly difficult when Cloud Cluster Hosts must accommodate unpredictable data requests from multiple robots in parallel. Moreover, the synchronization of multi-sensor data usually requires near real-time responses for different message types. In this paper, we describe an MSDR framework comprising a priority scheduling method and a buffer management scheme. It is validated by assessing a quality of service (QoS) model in the sense of facilitating data retrieval management. Experiments show that the proposed framework achieves better performance in typical Cloud Robotics scenarios.
{"title":"Towards real-time multi-sensor information retrieval in Cloud Robotic System","authors":"Lujia Wang, Ming Liu, M. Meng, R. Siegwart","doi":"10.1109/MFI.2012.6343054","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343054","url":null,"abstract":"Cloud Robotics is currently driving interest in both academia and industry. It allows different types of robots to share information and develop new skills even without specific sensors. They can also perform intensive tasks by combining multiple robots with a cooperative manner. Multi-sensor data retrieval is one of the fundamental tasks for resource sharing demanded by Cloud Robotic system. However, many technical challenges persist, for example Multi-Sensor Data Retrieval (MSDR) is particularly difficult when Cloud Cluster Hosts accommodate unpredictable data requested by multi robots in parallel. Moreover, the synchronization of multi-sensor data mostly requires near real-time response of different message types. In this paper, we describe a MSDR framework which is comprised of priority scheduling method and buffer management scheme. It is validated by assessing the quality of service (QoS) model in the sense of facilitating data retrieval management. Experiments show that the proposed framework achieves better performance in typical Cloud Robotics scenarios.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117037699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3-Axis magnetic field mapping and fusion for indoor localization
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343024 | 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
Etienne Le Grand, S. Thrun
As location-based services have grown increasingly popular, they have become limited by the inability to acquire accurate location information in indoor environments, where the Global Positioning System does not function. In this setting, magnetometers have primarily been used as compasses, and as such they are seen as unreliable sensors in the presence of magnetic field disturbances, which are frequent indoors. This work presents a method to account for and extract useful information from those disturbances, leading to improved localization in indoor environments. Local magnetic disturbances carry enough information to localize without the help of other sensors, and we describe an algorithm that does so given a map of those disturbances. We then present a fast mapping technique to produce such maps and apply it to show that the magnetic disturbances are stable over time. Finally, the proposed localization algorithm is tested in a realistic situation and shows high-quality localization capability.
{"title":"3-Axis magnetic field mapping and fusion for indoor localization","authors":"Etienne Le Grand, S. Thrun","doi":"10.1109/MFI.2012.6343024","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343024","url":null,"abstract":"As location-based services have grown increasingly popular, they have become limited by the inability to acquire accurate location information in indoor environments, where the Global Positioning System does not function. In this field, magnetometers have primarily been used as compasses. As such, they are seen as unreliable sensors when in presence of magnetic field disturbances, which are frequent in indoor environment. This work presents a method to account for and extract useful information from those disturbances. This method leads to improved localization in an indoor environment. Local magnetic disturbances carry enough information to localize without the help of other sensors. We describe an algorithm allowing to do so as long as we have access to a map of those disturbances. We then expose a fast mapping technique to produce such maps and we apply this technique to show the stability of the magnetic disturbances in time. Finally, the proposed localization algorithm is tested in a realistic situation, showing high-quality localization capability.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127571583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust egomotion for large-scale trajectories
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343049 | 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
Diego Rodriguez, N. Aouf
This paper presents an effective egomotion solution based on high-curvature image features described using local intensity histograms for stereo matching and tracking. To robustify the visual processing system, we propose feature extraction over a moment image representation to overcome the adverse effects of illumination changes. A bundle adjustment optimisation technique, thoroughly analysed for different reprojection strategies, is developed for motion estimation of an autonomous platform. The quality of the results is shown to be on par with that of high-quality GPS-corrected INS systems, even for long-range trajectories.
{"title":"Robust egomotion for large-scale trajectories","authors":"Diego Rodriguez, N. Aouf","doi":"10.1109/MFI.2012.6343049","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343049","url":null,"abstract":"This paper presents an effective egomotion solution based on high curvature image features described using local intensity histograms for stereo matching and tracking. To robustify the visual processing system, we propose feature extraction over moment image representation to overcome the adverse effects of illumination changes. A bundle adjustment optimisation technique, thoroughly analysed for different reprojection strategies, is developed for motion estimation of an autonomous platform. The quality of results is shown to be on par with high quality GPS-corrected-INS systems, even for long-range trajectories.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133371766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating depth and color cues for dense multi-resolution scene mapping using RGB-D cameras
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343050 | 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
J. Stückler, Sven Behnke
The mapping of environments is a prerequisite for many navigation and manipulation tasks. We propose a novel method for acquiring 3D maps of indoor scenes from a freely moving RGB-D camera. Our approach integrates color and depth cues seamlessly in a multi-resolution map representation. We consider measurement noise characteristics and exploit dense image neighborhoods to rapidly extract maps from RGB-D images. An efficient ICP variant allows maps to be registered in real time at VGA resolution on a CPU. For simultaneous localization and mapping, we extract key views and optimize the trajectory in a probabilistic framework. Finally, we propose an efficient randomized loop-closure technique designed for online operation. We benchmark our method on a publicly available RGB-D dataset and compare it with a state-of-the-art approach that uses sparse image features.
{"title":"Integrating depth and color cues for dense multi-resolution scene mapping using RGB-D cameras","authors":"J. Stückler, Sven Behnke","doi":"10.1109/MFI.2012.6343050","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343050","url":null,"abstract":"The mapping of environments is a prerequisite for many navigation and manipulation tasks. We propose a novel method for acquiring 3D maps of indoor scenes from a freely moving RGB-D camera. Our approach integrates color and depth cues seamlessly in a multi-resolution map representation. We consider measurement noise characteristics and exploit dense image neighborhood to rapidly extract maps from RGB-D images. An efficient ICP variant allows maps to be registered in real-time at VGA resolution on a CPU. For simultaneous localization and mapping, we extract key views and optimize the trajectory in a probabilistic framework. Finally, we propose an efficient randomized loop-closure technique that is designed for on-line operation. We benchmark our method on a publicly available RGB-D dataset and compare it with a state-of-the-art approach that uses sparse image features.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133475920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fusion of dependent information in posegraphs
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343052 | 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
S. Julier
In this paper, we consider the problem of fusing measurements that contain correlated noise within posegraph-based formulations of filtering and estimation problems. We develop a formulation of the Weighted Geometric Density (WGD) fusion algorithm, a generalisation of Covariance Intersection (CI), for posegraphs, and show that this form can generate covariance-consistent estimates. We propose two methods for computing the weighting parameters: maximising the information and maximising the likelihood.
{"title":"Fusion of dependent information in posegraphs","authors":"S. Julier","doi":"10.1109/MFI.2012.6343052","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343052","url":null,"abstract":"In this paper, we consider the problem of fusing measurements which contain correlated noises within posegraph-based formulations of filtering and estimation problems. We develop a formulation of the Weighted Geometric Density (WGD) fusion algorithm, a generalisation of Covariance Intersection (CI), for posegraphs. We show that this form can generate covariance consistent estimates. We propose two methods for computing the weighting parameters by maximising the information or maximising the likelihood.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122745232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian fusion of thermal and visible spectra camera data for region based tracking with rapid background adaptation
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343021 | 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
R. Stolkin, D. Rees, M. Talha, I. Florescu
This paper presents a method for optimally combining pixel information from an infra-red thermal imaging camera and a conventional visible spectrum colour camera for tracking a moving target. The tracking algorithm rapidly re-learns its background models for each camera modality from scratch at every frame. This enables, firstly, automatic adjustment of the relative importance of thermal and visible information in decision making, and, secondly, a degree of “camouflage target” tracking by continuously re-weighting the importance of those parts of the target model that are most distinct from the present background at each frame. Furthermore, this very rapid background adaptation ensures robustness to large, sudden and arbitrary camera motion, and thus makes this method a useful tool for robotics, for example visual servoing of a pan-tilt turret mounted on a moving robot vehicle. The method can be used to track any kind of arbitrarily shaped or deforming object; however, the combination of thermal and visible information proves particularly useful for enabling robots to track people. The method is also important in that it can readily be extended to data fusion of an arbitrary number of statistically independent features from one or arbitrarily many imaging modalities.
{"title":"Bayesian fusion of thermal and visible spectra camera data for region based tracking with rapid background adaptation","authors":"R. Stolkin, D. Rees, M. Talha, I. Florescu","doi":"10.1109/MFI.2012.6343021","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343021","url":null,"abstract":"This paper presents a method for optimally combining pixel information from an infra-red thermal imaging camera, and a conventional visible spectrum colour camera, for tracking a moving target. The tracking algorithm rapidly re-learns its background models for each camera modality from scratch at every frame. This enables, firstly, automatic adjustment of the relative importance of thermal and visible information in decision making, and, secondly, a degree of “camouflage target” tracking by continuously re-weighting the importance of those parts of the target model that are most distinct from the present background at each frame. Furthermore, this very rapid background adaptation ensures robustness to large, sudden and arbitrary camera motion, and thus makes this method a useful tool for robotics, for example visual servoing of a pan-tilt turret mounted on a moving robot vehicle. The method can be used to track any kind of arbitrarily shaped or deforming object, however the combination of thermal and visible information proves particularly useful for enabling robots to track people. The method is also important in that it can be readily extended for data fusion of an arbitrary number of statistically independent features from one or arbitrarily many imaging modalities.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126234860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multisensor methods to estimate thermal diffusivity
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343037 | 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
T. Henderson, G. Knight, E. Grant
Several methods for the estimation of thermal diffusivity are studied in this work. In many application scenarios, the thermal diffusivity is unknown and must be estimated in order to perform other estimation functions (e.g., tracking the physical phenomenon, or solving other inverse problems such as localization or sensor variance). In particular, we describe: 1) the use of minimization methods (the Golden Mean and Lagarias' simplex) to determine the thermal diffusivity coefficient which, when used in a forward heat flow simulation, results in the least (vector) distance between the sampled data and the simulated data; 2) the Maximum Likelihood Estimate of thermal diffusivity; and 3) the Extended Kalman Filter to recover the thermal diffusivity. We apply these methods to the determination of thermal diffusivity in snow.
{"title":"Multisensor methods to estimate thermal diffusivity","authors":"T. Henderson, G. Knight, E. Grant","doi":"10.1109/MFI.2012.6343037","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343037","url":null,"abstract":"Several methods for the estimation of thermal diffusivity are studied in this work. In many application scenarios, the thermal diffusivity is unknown and must be estimated in order to perform other estimation functions (e.g., tracking of the physical phenomenon, or solving other inverse problems like localization or sensor variance, etc.). In particular, we describe: 1) The use of minimization methods (the Golden Mean and Lagarias' simplex) to determine the thermal diffusivity coefficient which when used in a forward heat flow simulation results in the least (vector) distance between the sampled data and the simulated data. 2) The Maximum Likelihood Estimate for thermal diffusivity. 3) The Extended Kalman Filter to recover the thermal diffusivity. We apply these methods to the determination of thermal diffusivity in snow.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125702904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DP-Fusion: A generic framework for online multi sensor recognition
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343031 | 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
Ming Liu, Lujia Wang, R. Siegwart
Multi-sensor fusion has been widely used in recognition problems. Most existing works depend heavily on the calibration between different sensors, and less on modeling and reasoning about the co-incidence of multiple hints. In this paper, we propose a generic framework for recognition and clustering problems using a non-parametric hierarchical Dirichlet model, named DP-Fusion. It enables online labeling, clustering, and recognition of sequential data simultaneously, while considering multiple types of sensor readings. The algorithm is data-driven and does not depend on prior knowledge of the data structure. The results show its feasibility and reliability against noisy data.
{"title":"DP-Fusion: A generic framework for online multi sensor recognition","authors":"Ming Liu, Lujia Wang, R. Siegwart","doi":"10.1109/MFI.2012.6343031","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343031","url":null,"abstract":"Multi sensor fusion has been widely used in recognition problems. Most existing works highly depend on the calibration between different sensors, but less on modeling and reasoning of the co-incidence of multiple hints. In this paper, we propose a generic framework for recognition and clustering problem using a non-parametric Dirichlet hierarchical model, named DP-Fusion. It enables online labeling, clustering and recognition of sequential data simultaneously, while considering multiple types of sensor readings. The algorithm is data-driven, which does not depend on priorknowledge of the data structure. The results show the feasibility and reliability against noise data.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"64 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113953696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smoke and mirrors — Virtual realities for sensor fusion experiments in biomimetic robotics
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343022 | 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
Johannes Bauer, Jorge Dávila-Chacón, Erik Strahl, S. Wermter
Considerable time and effort often go into designing and implementing experimental set-ups (ES) in robotics. These activities are usually not the focus of our research and thus go underreported, which results in duplicated work and a lack of comparability. This paper lays out our view of the theoretical considerations necessary when deciding on the type of experiment to conduct. It describes our efforts in designing a virtual reality (VR) ES for experiments in biomimetic robotics, reports on experiments carried out, and outlines those planned. It thus provides a basis for similar efforts by other researchers and will help make designing ES more rational and economical, and the results more comparable.
{"title":"Smoke and mirrors — Virtual realities for sensor fusion experiments in biomimetic robotics","authors":"Johannes Bauer, Jorge Dávila-Chacón, Erik Strahl, S. Wermter","doi":"10.1109/MFI.2012.6343022","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343022","url":null,"abstract":"Considerable time and effort often go into designing and implementing experimental set-ups (ES) in robotics. These activities are usually not at the focus of our research and thus go underreported. This results in replication of work and lack of comparability. This paper lays out our view of the theoretical considerations necessary when deciding on the type of experiment to conduct. It describes our efforts in designing a virtual reality (VR) ES for experiments in biomimetic robotics. It also reports on experiments carried out and outlines those planned. It thus provides a basis for similar efforts by other researchers and will help make designing ES more rational and economical, and the results more comparable.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132792468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive control parameter investigation of modular robotic simulation environment based on Wiimote-HCI's multi sensor fusion
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343044 | 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
M. Noeske, D. Krupke, N. Hendrich, Jianwei Zhang, Houxiang Zhang
This paper describes a method for integrating a hardware device into modular robot control and simulation software and introduces sensor fusion to investigate the current set of control parameters. The Wiimote and the usage of its sensors are introduced as a high-level remote unit for modulating control algorithms. Fusing the Wiimote's sensors allows the creation of a human-robot interaction module that controls a simulation environment as well as real robots. Finally, its benefit for the evaluation of locomotion control algorithms is pointed out.
{"title":"Interactive control parameter investigation of modular robotic simulation environment based on Wiimote-HCI's multi sensor fusion","authors":"M. Noeske, D. Krupke, N. Hendrich, Jianwei Zhang, Houxiang Zhang","doi":"10.1109/MFI.2012.6343044","DOIUrl":"https://doi.org/10.1109/MFI.2012.6343044","url":null,"abstract":"This paper describes a method to integrate a hardware device into a modular robot control and simulation software and introduces sensor fusion to investigate the current set of control parameters. As a highlevel remote unit for modulation of control algorithms the Wiimote and the usage of its sensors will be introduced. Sensor fusion of the Wiimote's sensors allows to create a human-robot interaction modul to control a simulation environment as well as real robots. Finally its benefit for evaluation of locomotion control algorithms will be pointed out.","PeriodicalId":103145,"journal":{"name":"2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133503171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}