Context aided fusion procedure for road safety application
F. García, A. D. L. Escalera, J. M. Armingol, F. Jiménez
Pub Date : 2012-11-12, DOI: 10.1109/MFI.2012.6343070
Road safety applications require highly reliable and trustworthy sensors. Context information also plays a key role, adding trustworthiness and allowing the interactions among road users, and the danger inherent to them, to be studied. Knowledge such as vehicle dynamics and dimensions can be very useful for avoiding misdetections when performing vehicle detection and tracking (fusion levels 0 and 1). Traffic safety information is mandatory at fusion levels 2 and 3, where the interactions and the danger involved in each detection are evaluated. All of this context information is used in this application to enhance the capability of the sensors, providing a complete, multilevel fusion application. The present application uses three sensors: a laser scanner, computer vision, and an inertial system; the information given by these sensors is complemented with context information, providing reliable vehicle detection and danger evaluation. Test results are provided to verify the usability of the detection algorithm.
Self-map building in wireless sensor network based on TDOA measurements
Shuanglong Liu, Yuchao Tang, Chun Zhang, Shigang Yue
Pub Date : 2012-11-12, DOI: 10.1109/MFI.2012.6343041
Node localization has long been established as a key problem in sensor networks. Self-mapping in a wireless sensor network, which enables beacon-based systems to build a node map on the fly, extends the range of the network's applications. A variety of self-mapping algorithms have been developed for sensor networks. Some algorithms assume no prior information and estimate only the relative locations of the sensor nodes. In this paper, we assume that a very small percentage of the sensor nodes are aware of their own locations, so the proposed algorithm estimates the other nodes' absolute locations using distance differences. In particular, time-difference-of-arrival (TDOA) technology is adopted to obtain the distance differences. The obtained time-difference accuracy is 10 ns, which corresponds to a distance-difference error of 3 m. We evaluate the self-mapping accuracy with a small number of seed nodes. Overall, the accuracy and coverage are shown to be comparable to results achieved with other technologies and algorithms.
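As a back-of-the-envelope check on the figures quoted above, a 10 ns time-difference accuracy maps to roughly a 3 m range-difference error because TDOA range differences scale the timing error by the propagation speed. A minimal sketch (not the authors' code):

```python
# Radio signals propagate at the speed of light; a TDOA range-difference
# error is simply the timing error multiplied by c.
C = 299_792_458.0  # speed of light, m/s

def range_difference_error(timing_accuracy_s: float) -> float:
    """Distance-difference error implied by a given time-difference accuracy."""
    return C * timing_accuracy_s

print(range_difference_error(10e-9))  # ~3.0 m, matching the abstract
```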
On state fusers over long-haul sensor networks
Katharine Brigham, B. Kumar, N. Rao, Qiang Liu, Xin Wang
Pub Date : 2012-11-12, DOI: 10.1109/MFI.2012.6343053
We consider a network of sensors wherein state estimates are sent from the sensors to a fusion center to generate a global state estimate. The underlying fusion algorithm affects the performance measure QCC(τ) (the subscript CC indicating the effects of communication and computing quality) of the global state estimate computed within the allocated time τ. We present a probabilistic performance bound on QCC(τ) as a function of the distributions of the state estimates, the communication parameters, and the fusion algorithm. We present simulations of simplified scenarios to illustrate the qualitative effects of different fusers, and system-level simulations to complement the analytical results.
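For intuition about how a fusion rule shapes the global estimate, a minimal information-form fuser for independent scalar Gaussian estimates can be sketched as follows (a generic textbook rule, not the specific fusers studied in the paper):

```python
def fuse(estimates):
    """Fuse independent (mean, variance) estimates by inverse-variance weighting:
    fused_var = 1 / sum(1/var_i); fused_mean = fused_var * sum(mean_i/var_i)."""
    info = sum(1.0 / var for _, var in estimates)
    mean = sum(m / var for m, var in estimates) / info
    return mean, 1.0 / info

# Two equally confident sensors at 1.0 and 3.0 fuse to 2.0 with half the variance.
print(fuse([(1.0, 1.0), (3.0, 1.0)]))  # (2.0, 0.5)
```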
Robustness, scalability and flexibility: Key-features in modular self-reconfigurable mobile robotics
R. Matthias, A. Bihlmaier, H. Wörn
Pub Date : 2012-11-12, DOI: 10.1109/MFI.2012.6342996
In this paper we address some of the most important aspects of modular self-reconfigurable mobile robotics. Related work indicates that it is not sufficient to have a flexible, scalable, and robust platform; these capabilities must also be preserved at higher levels of organization in order to benefit from them. Hence, we analyze the way in which similar platforms and their accompanying software are implemented. We then describe how our own platform is implemented within the SYMBRION and REPLICATOR projects. Afterwards, we show how we preserve robustness and flexibility for use by other researchers at higher levels of organization. To conclude, we provide measurements that show the general adequacy of our platform architecture for coping with the challenges posed by multi-modular self-reconfigurable robotics.
The Kinect up close: Adaptations for short-range imaging
M. Draelos, N. Deshpande, E. Grant
Pub Date : 2012-11-12, DOI: 10.1109/MFI.2012.6343067
With proper calibration of its color and depth cameras, the Kinect can capture detailed color point clouds at up to 30 frames per second. This capability positions the Kinect for use in robotics as a low-cost navigation sensor. Techniques are therefore presented for efficiently calibrating the Kinect depth camera and for altering its optical system to improve its suitability for imaging short-range obstacles. To perform depth calibration, a calibration rig and software were developed to automatically map raw depth values to object depths. The rig consisted of a traditional chessboard calibration target with easily locatable depth features at its exterior corners, which facilitated software extraction of corresponding object depths and raw depth values. To modify the Kinect's optics for improved short-range imaging, Nyko's Zoom adapter was used because of its simplicity and low cost. Although effective at reducing the Kinect's minimum range, these optics introduced pronounced depth distortion. A method based on capturing depth images of planar objects at various depths produced an empirical depth-distortion model for correcting this distortion in software. Together, the modified optics and the empirical depth-undistortion procedure improved the Kinect's resolution and decreased its minimum range by approximately 30%.
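The raw-to-metric mapping that such a calibration produces is commonly modeled as an inverse-linear function of the raw depth value. A sketch with illustrative placeholder coefficients (the paper's own fitted values are not reproduced here):

```python
# Illustrative inverse-linear model: depth_m = 1 / (A * raw + B).
# A and B are placeholder coefficients for illustration only,
# not the authors' calibration results.
A, B = -0.00307, 3.33

def raw_to_depth_m(raw: int) -> float:
    """Map an 11-bit raw Kinect depth value to metric depth in meters."""
    return 1.0 / (A * raw + B)

print(raw_to_depth_m(600))  # a mid-range raw value -> roughly 0.67 m
```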
Modeling and control architecture for the competitive networked robot system based on POMDP
Li Yan, Liu Jingtai, Li Haifeng, Lu Xiang, Sun Lei
Pub Date : 2012-11-12, DOI: 10.1109/MFI.2012.6343001
Because a competitive networked robot system is characterized by strong interaction and strict real-time requirements, existing control methods, which are mostly designed for collaborative networked robot systems, cannot be applied to it directly. This paper therefore proposes a hierarchical control architecture for the competitive networked robot system that accommodates these two characteristics. To deal simultaneously with the observation uncertainty caused by noise and time delay and the action uncertainty caused by the opponent, the executive layer adopts a control architecture based on Partially Observable Markov Decision Processes (POMDPs) to select the action with the maximum expected reward, thus fulfilling the intention of the strategy layer effectively. In addition, the introduction of the executive layer frees the strategy layer from tasks that would otherwise have to be completed by the lower layer alone, enabling the strategy layer to focus on strategic design. A networked robot system with a high degree of confrontation, named Tele-LightSaber (TLS), is designed and implemented, and experimental results on the TLS platform show the validity and efficiency of the proposed method.
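The POMDP machinery such an executive layer relies on reduces, in the discrete case, to a standard belief filter. A toy two-state sketch with hypothetical numbers (not the paper's model):

```python
def belief_update(b, T, O, z):
    """Discrete POMDP belief filter: b'(s') is proportional to
    O[s'][z] * sum_s T[s][s'] * b[s], then normalized."""
    n = len(b)
    pred = [sum(T[s][sp] * b[s] for s in range(n)) for sp in range(n)]
    post = [O[sp][z] * pred[sp] for sp in range(n)]
    norm = sum(post)
    return [p / norm for p in post]

# Toy model: opponent is "near" (state 0) or "far" (state 1);
# observation 0 means "detected", 1 means "not detected".
T = [[0.8, 0.2], [0.3, 0.7]]   # transition probabilities T[s][s']
O = [[0.9, 0.1], [0.2, 0.8]]   # observation likelihoods O[s'][z]
b = belief_update([0.5, 0.5], T, O, 0)
print(b)  # belief shifts strongly toward "near" after a detection
```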
Towards real-time multi-sensor information retrieval in Cloud Robotic System
Lujia Wang, Ming Liu, M. Meng, R. Siegwart
Pub Date : 2012-11-12, DOI: 10.1109/MFI.2012.6343054
Cloud Robotics is currently drawing interest in both academia and industry. It allows different types of robots to share information and develop new skills, even without specific sensors. Robots can also perform intensive tasks by cooperating in groups. Multi-sensor data retrieval is one of the fundamental tasks for the resource sharing demanded by a Cloud Robotic system. However, many technical challenges persist; for example, multi-sensor data retrieval (MSDR) is particularly difficult when cloud cluster hosts must accommodate unpredictable data requests from multiple robots in parallel. Moreover, the synchronization of multi-sensor data typically requires near real-time responses for different message types. In this paper, we describe an MSDR framework comprising a priority scheduling method and a buffer management scheme. It is validated by assessing a quality-of-service (QoS) model for data retrieval management. Experiments show that the proposed framework achieves better performance in typical Cloud Robotics scenarios.
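A priority scheduler of the general kind the framework describes can be sketched with a simple heap-backed queue; this is an illustration of the scheduling idea, not the authors' implementation:

```python
import heapq

class RetrievalQueue:
    """Serve sensor-data requests by priority (lower number = more urgent),
    breaking ties in FIFO order via an insertion counter."""
    def __init__(self):
        self._heap = []
        self._counter = 0

    def push(self, priority: int, request: str) -> None:
        heapq.heappush(self._heap, (priority, self._counter, request))
        self._counter += 1

    def pop(self) -> str:
        return heapq.heappop(self._heap)[2]

q = RetrievalQueue()
q.push(2, "camera frame")
q.push(0, "laser scan")
q.push(1, "IMU sample")
print(q.pop())  # "laser scan" is served first despite arriving second
```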
3-Axis magnetic field mapping and fusion for indoor localization
Etienne Le Grand, S. Thrun
Pub Date : 2012-11-12, DOI: 10.1109/MFI.2012.6343024
As location-based services have grown increasingly popular, they have become limited by the inability to acquire accurate location information in indoor environments, where the Global Positioning System does not function. In this field, magnetometers have primarily been used as compasses, and as such they are seen as unreliable sensors in the presence of magnetic field disturbances, which are frequent indoors. This work presents a method to account for, and extract useful information from, those disturbances, leading to improved indoor localization. Local magnetic disturbances carry enough information to localize without the help of other sensors, and we describe an algorithm that does so provided a map of those disturbances is available. We then present a fast mapping technique for producing such maps and use it to show the stability of the magnetic disturbances over time. Finally, the proposed localization algorithm is tested in a realistic situation, showing high-quality localization capability.
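Localizing against a map of magnetic disturbances is, at its core, a Bayesian match of measurements to the map. A toy 1-D sketch with a hypothetical corridor map (the paper's maps are 3-axis and far denser):

```python
import math

# Hypothetical 1-D corridor map of magnetic field magnitude (microtesla),
# one entry per cell -- a stand-in for a learned disturbance map.
MAG_MAP = [48.0, 52.0, 61.0, 55.0, 47.0, 49.0, 58.0, 50.0]

def localize(measurement: float, sigma: float = 1.5):
    """Posterior over cells given one magnitude reading, assuming a uniform
    prior and a Gaussian measurement model with std. deviation sigma."""
    w = [math.exp(-0.5 * ((m - measurement) / sigma) ** 2) for m in MAG_MAP]
    z = sum(w)
    return [x / z for x in w]

post = localize(61.0)
print(max(range(len(post)), key=post.__getitem__))  # cell 2 matches best
```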
Symmetry as a basis for perceptual fusion
T. Henderson, E. Cohen, A. Joshi, E. Grant, M. Draelos, N. Deshpande
Pub Date : 2012-11-12, DOI: 10.1109/MFI.2012.6343065
We propose that robot perception is enabled by means of a common sensorimotor semantics arising from a set of symmetry theories (expressed as symmetry detectors and parsers) embedded a priori in each robot. These theories inform the production of structural representations of sensorimotor processes, and these representations, in turn, permit perceptual fusion to broaden categories of activity. Although the specific knowledge required by a robot will depend on the particular application domain, there is a need for fundamental mechanisms that allow each individual robot to obtain the requisite knowledge. Current methods are too brittle and do not scale well, so a new approach to perceptual knowledge representation is necessary. Our approach provides firm semantic grounding in the real world, provides robust dynamic performance in real-time environments with a range of sensors, and allows acquired knowledge to be communicated within a broad community of other robots and agents, including humans. Our work focuses on symmetry-based multisensor knowledge structuring in terms of: (1) symmetry detection in signals, and (2) symmetry parsing for knowledge structure, including structural bootstrapping and knowledge sharing. Operationally, the hypothesis is that group-theoretic representations (G-Reps) inform cognitive activity. Our contributions here are to demonstrate symmetry detection and signal analysis for 1-D and 2-D signals in a simple office environment; symmetry parsing based on these tokens is left for future work.
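A 1-D reflective-symmetry detector of the kind step (1) refers to can be as simple as correlating a signal with its mirror image (a sketch, not the authors' detector):

```python
def reflection_symmetry_score(sig):
    """Normalized correlation of a mean-centered 1-D signal with its mirror
    image: 1.0 for a perfectly symmetric (palindromic) signal, -1.0 for a
    perfectly anti-symmetric one."""
    mu = sum(sig) / len(sig)
    num = sum((a - mu) * (b - mu) for a, b in zip(sig, reversed(sig)))
    den = sum((a - mu) ** 2 for a in sig)
    return num / den

print(reflection_symmetry_score([1.0, 2.0, 3.0, 2.0, 1.0]))  # 1.0
print(reflection_symmetry_score([1.0, 2.0, 3.0, 4.0, 5.0]))  # -1.0
```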
Data association for a hybrid metric map representation
Shugen Ma, Shuai Guo, Minghui Wang, Bin Li
Pub Date : 2012-11-12, DOI: 10.1109/MFI.2012.6343056
This paper presents an approach to solving the data association problem for a hybrid metric map representation. The hybrid metric map uses a Voronoi diagram to partition the global map space into a series of local subregions, and a local dense map is built in each subregion. The global feature map and the local maps together make up the hybrid metric map, which can represent the entire observed environment. An important property of the proposed representation is the clear one-to-one correspondence between the global feature map and the local maps. Benefiting from this property, an identifying rule for data association based on compatibility testing is proposed. The rule can efficiently reject wrong data association hypotheses in dense environments. Two experiments validate the efficiency of the data association approach and also demonstrate the feasibility of the hybrid metric map representation.
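Compatibility testing of the sort the identifying rule builds on is conventionally a chi-square gate on the Mahalanobis distance of the innovation. A minimal 2-D sketch of that generic test (not the paper's exact rule):

```python
def mahalanobis2(innov, S):
    """Squared Mahalanobis distance of a 2-D innovation vector given its
    2x2 covariance S, inverting S by the closed-form 2x2 formula."""
    (a, b), (c, d) = S
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    x, y = innov
    return x * (inv[0][0] * x + inv[0][1] * y) + y * (inv[1][0] * x + inv[1][1] * y)

CHI2_2DOF_95 = 5.991  # 95% chi-square gate for 2 degrees of freedom

def compatible(innov, S):
    """Accept an observation-feature pairing only if it falls inside the gate."""
    return mahalanobis2(innov, S) < CHI2_2DOF_95

print(compatible((0.5, 0.2), [[0.25, 0.0], [0.0, 0.25]]))  # True: inside the gate
```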