Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343017
Marc Reinhardt, B. Noack, U. Hanebeck
This paper deals with distributed information processing in sensor networks. We propose the Hypothesizing Distributed Kalman Filter, which incorporates an assumption about the global measurement model into the distributed estimation process. The procedure is based on the Distributed Kalman Filter and inherits its optimality when the assumption about the global measurement uncertainty is met. Recursive formulas for local processing as well as for fusion are derived. We show that the proposed algorithm yields the same results regardless of whether the measurements are processed locally or globally, even when the process noise is not negligible. For further processing of the estimates, a consistent bound for the error covariance matrix is derived. All derivations and explanations are illustrated by means of a new classification scheme for estimation processes.
Title: "The Hypothesizing Distributed Kalman Filter". Venue: 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI).
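The optimality claim rests on a standard property of Kalman measurement updates in information (inverse-covariance) form: measurement information is additive, so local updates followed by fusion match a single global update. A minimal scalar sketch of that underlying identity (an illustration only, not the authors' Hypothesizing DKF):

```python
def info_fuse(prior_mean, prior_var, measurements):
    """Fuse scalar measurements (value, variance) with a Gaussian prior
    in information (inverse-covariance) form, where measurement
    information is simply additive."""
    Y = 1.0 / prior_var          # information "matrix" (scalar here)
    y = prior_mean / prior_var   # information vector
    for z, r in measurements:
        Y += 1.0 / r
        y += z / r
    return y / Y, 1.0 / Y        # posterior mean and variance

# Local-then-fused processing...
m_a, v_a = info_fuse(0.0, 4.0, [(1.2, 1.0)])    # node A updates locally
m_ab, v_ab = info_fuse(m_a, v_a, [(0.8, 2.0)])  # fusion adds node B's data
# ...equals one global update over both measurements at once
m_g, v_g = info_fuse(0.0, 4.0, [(1.2, 1.0), (0.8, 2.0)])
assert abs(m_ab - m_g) < 1e-12 and abs(v_ab - v_g) < 1e-12
```

The paper's contribution concerns keeping this equivalence (or a consistent bound on it) when the global measurement model is only hypothesized, which the sketch does not capture.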
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343073
Tianxiang Bai, Youfu Li, Yazhe Tang
In this work, we propose a robust and flexible appearance model based on the structured sparse representation framework. In our method, we model the complex nonlinear appearance manifold and occlusions as a sparse linear combination over a structured union of subspaces in a basis library consisting of multiple learned low-dimensional subspaces and a partitioned occlusion template set. To enhance the discriminative power of the model, a number of clustered background subspaces are also added to the basis library and updated during tracking. Using the Block Orthogonal Matching Pursuit (BOMP) algorithm, we show that the new structured sparse representation based appearance model improves tracking performance compared with the prototype model and other state-of-the-art tracking algorithms.
Title: "Flexible structured sparse representation for robust visual tracking". Venue: 2012 IEEE MFI.
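BOMP greedily selects whole blocks of dictionary atoms rather than individual atoms. A minimal sketch of that selection rule, specialized to a dictionary with orthonormal columns so the least-squares refit reduces to projections (the paper's learned subspaces and occlusion templates are not reproduced here):

```python
def bomp_ortho(D, blocks, y, n_blocks):
    """Block Orthogonal Matching Pursuit for a dictionary with orthonormal
    columns. D: list of column vectors; blocks: lists of column indices;
    y: target vector. Returns {column index: coefficient}."""
    dot = lambda a, b: sum(x * z for x, z in zip(a, b))
    residual = list(y)
    coef = {}
    for _ in range(n_blocks):
        # choose the block whose atoms correlate most with the residual
        scores = [sum(dot(D[j], residual) ** 2 for j in b) for b in blocks]
        best = max(range(len(blocks)), key=lambda i: scores[i])
        for j in blocks[best]:
            coef[j] = dot(D[j], y)  # orthonormal columns: LS = projection
        residual = [yi - sum(coef[j] * D[j][k] for j in coef)
                    for k, yi in enumerate(y)]
    return coef

# Toy example: standard basis of R^4 split into two blocks; y lives in block 0
D = [[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
coef = bomp_ortho(D, [[0, 1], [2, 3]], [2.0, -1.0, 0.0, 0.0], n_blocks=1)
assert coef == {0: 2.0, 1: -1.0}   # block 0 selected, block 1 untouched
```

For a general (non-orthonormal) dictionary the coefficient step becomes a least-squares solve over all selected blocks.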
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343027
Rina Tse, N. Ahmed, M. Campbell
This paper proposes a Markov Random Field (MRF) representation for sensor and terrain information fusion in a 2.5D map. Unlike previous works, the proposed MRF formally models the sensor pose and measurement uncertainties, thus allowing the measurements to be appropriately fused with terrain information. Additionally, the MRF's graphical model-based representation allows the probabilistic dependencies among variables to be modified easily, permitting a more flexible and general model that includes terrain spatial correlations. The MRF representation also makes it easier to perform factorization and inference on any variable subset of interest. Results show that the addition of a terrain MRF model not only helps reduce the estimation error, but also serves as a basis for terrain property characterization, which is useful for future terrain analyses such as traversability assessments in ground robot navigation.
Title: "Unified mixture-model based terrain estimation with Markov Random Fields". Venue: 2012 IEEE MFI.
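One simple way to see how an MRF prior fuses noisy height measurements with spatial correlation is a Gaussian grid MRF solved by Iterated Conditional Modes, where each cell update is its exact conditional mean given its 4-neighbors. This is only an illustrative sketch; the paper's mixture-model formulation and pose-uncertainty handling are not reproduced:

```python
def icm_terrain(z, meas_prec, smooth_w, iters=50):
    """MAP terrain height on a grid under a Gaussian MRF: unary terms tie
    cells to noisy measurements z, pairwise terms penalize height
    differences between 4-neighbors. Iterated Conditional Modes sweeps."""
    rows, cols = len(z), len(z[0])
    h = [row[:] for row in z]  # initialize with the raw measurements
    for _ in range(iters):
        for i in range(rows):
            for j in range(cols):
                nbrs = [h[a][b]
                        for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < rows and 0 <= b < cols]
                # conditional mean: precision-weighted blend of data and prior
                h[i][j] = ((meas_prec * z[i][j] + smooth_w * sum(nbrs))
                           / (meas_prec + smooth_w * len(nbrs)))
    return h

# A single spurious spike on flat ground is pulled toward its neighbors
z = [[0.0] * 3 for _ in range(3)]
z[1][1] = 1.0
h = icm_terrain(z, meas_prec=1.0, smooth_w=1.0, iters=100)
assert 0.0 < h[1][1] < 0.5
```

Raising `smooth_w` relative to `meas_prec` trades data fidelity for smoothness, which is the knob a terrain spatial-correlation model effectively tunes.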
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343043
Ramon Sargeant, Hongbin Liu, L. Seneviratne, K. Althoefer
This paper introduces the design of a 6-DOF force and torque sensor that uses fiber-optic guided light and linear polarizer materials to measure the force and torque applied to a grasped object. The sensor is also capable of measuring the contact direction between the sensor and the object. The developed sensor has a diameter of 16 mm, a height of 15.75 mm, and a weight of 1 gram. The sensor's parallel mechanism design and operating principles are explained, and experimental data are given to verify the proposed operating principle. The experimental data show that the proposed force sensor performs well; the ultimate aim is further miniaturization and integration into the fingertip of a dexterous robotic hand.
Title: "An optical multi-axial force/torque sensor for dexterous grasping and manipulation". Venue: 2012 IEEE MFI.
In pedicle screw insertion surgeries, drilling the screw path is critical to the success of the operation, as the hole is drilled into a very narrow region of the vertebral pedicle. In current manual surgeries, surgeons operate while monitoring medical images in a navigation system and sensing the operation force. To reproduce these abilities, this paper proposes a bone-drilling state recognition algorithm and a corresponding system based on image-force fusion. The short-time average magnitude of the thrust force, the average energy of the thrust force, and their gradients are used to recognize the drilling state and to judge whether the drilling position is appropriate. For the image information, preoperatively scanned medical images are combined with the real-time position of the operation tool, and the boundary of the test bone, which limits the drilling motion, is determined from the drilling direction. By fusing the recognition results from the thrust force and the medical images, the final results become more accurate and allow safer control of the drilling process.
Title: "Intraoperative state recognition of a bone-drilling system with image-force fusion". Authors: Haiyang Jin, Ying Hu, Huoling Luo, Tianyi Zheng, Peng Zhang. DOI: 10.1109/MFI.2012.6343079. Pub Date: 2012-11-12. Venue: 2012 IEEE MFI.
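The thrust-force features named in the abstract (short-time average magnitude, average energy, and their gradients) can be sketched directly; the window length and the step-detection reading below are illustrative assumptions, not the paper's parameters:

```python
def short_time_features(force, win):
    """Short-time average magnitude and average energy of a thrust-force
    signal over non-overlapping windows, plus first-difference gradients
    of each feature sequence."""
    mags, energies = [], []
    for s in range(0, len(force) - win + 1, win):
        w = force[s:s + win]
        mags.append(sum(abs(x) for x in w) / win)
        energies.append(sum(x * x for x in w) / win)
    grad = lambda seq: [b - a for a, b in zip(seq, seq[1:])]
    return mags, energies, grad(mags), grad(energies)

# A step in thrust force (e.g. the drill reaching denser bone) shows up
# as a spike in the feature gradients
mags, energies, dmag, denergy = short_time_features([1.0] * 10 + [5.0] * 10, 5)
assert mags == [1.0, 1.0, 5.0, 5.0]
assert dmag == [0.0, 4.0, 0.0]   # gradient spikes at the transition window
```

A state recognizer would threshold such gradients (and fuse them with the image-derived bone boundary) to decide when the drilling state changes.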
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343025
Rui Ma, Qi Hao
This paper presents a novel distributed binary sensing paradigm for walker recognition based on a well-known geometric probability model: Buffon's needle. The research aims to achieve a low-data-throughput gait biometric system suitable for wireless sensor network applications. We present two types of Buffon's needle (BN) models for gait recognition: (1) a classical BN model based on a static distribution of limb motions; and (2) a hidden Markov BN model based on a dynamic distribution of limb motions. These two models are used to estimate static and dynamic gait features, respectively. By utilizing the random projection principle and the information geometry of binary variables, invariant measures of gait features are developed that can be independent of the subject's walking path. We have performed both simulations and experiments to verify the proposed sensing theories. Although the experiments are based on a pyroelectric sensor network, the proposed sensing paradigm can be extended to various sensing modalities.
Title: "Buffon's needle model based walker recognition with distributed binary sensor networks". Venue: 2012 IEEE MFI.
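The classical result underlying the BN models: a needle of length l <= d dropped on parallel lines spaced d apart crosses a line with probability 2l / (pi d). A Monte Carlo check of that probability (illustrative only; the paper builds its gait models on top of this geometry rather than estimating pi):

```python
import math
import random

def buffon_crossing_prob(needle_len, line_gap, trials=200_000, seed=1):
    """Monte Carlo estimate of the probability that a randomly dropped
    short needle (needle_len <= line_gap) crosses one of the parallel
    lines. Theory: P = 2 * needle_len / (pi * line_gap)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y = rng.uniform(0, line_gap / 2)       # center-to-nearest-line distance
        theta = rng.uniform(0, math.pi / 2)    # needle angle vs. the lines
        if y <= (needle_len / 2) * math.sin(theta):
            hits += 1
    return hits / trials

p = buffon_crossing_prob(1.0, 2.0)
assert abs(p - 1.0 / math.pi) < 0.01   # theory gives 1/pi here
```

In the sensing analogy, binary detector crossings play the role of needle-line intersections, so crossing statistics carry geometric information about limb motion.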
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343078
Robert L. Stewart, Michael Mills, Hong Zhang
This paper investigates the problem of robot visual homing: navigation to a goal location by a mobile robot using visual sensory input. The approach taken is to consider the flow vectors between a robot's current view and a desired milestone view. The flow vectors can be used to determine an angular velocity command that attempts to align the two views under a constant forward speed. Experiments with a mobile robot have been conducted following the teach-replay approach. Using a sequence of milestone images taken successively along a path, preliminary results show that a robot can successfully repeat the path and navigate to its goal autonomously. The method should be useful for route following and other applications involving visual navigation.
Title: "Visual homing for a mobile robot using direction votes from flow vectors". Venue: 2012 IEEE MFI.
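The control idea in its simplest form: horizontal displacements between matched features in the current and milestone views vote for a turn direction, and an aggregate of the votes sets the angular velocity under constant forward speed. A toy sketch (the gain, the mean-of-votes rule, and the keypoint format are illustrative assumptions, not the paper's voting scheme):

```python
def homing_turn_rate(current_kps, goal_kps, gain=0.5):
    """Turn-rate command from the mean horizontal flow between matched
    (x, y) keypoints in the current view and the goal (milestone) view:
    positive output means the goal view lies to the right."""
    dx = [gx - cx for (cx, _), (gx, _) in zip(current_kps, goal_kps)]
    return gain * sum(dx) / len(dx)

# All matched features sit 10 px to the right in the goal view -> turn right
rate = homing_turn_rate([(100.0, 50.0), (200.0, 50.0)],
                        [(110.0, 50.0), (210.0, 50.0)])
assert rate == 5.0
```

In a teach-replay loop, the robot switches to the next milestone image once the current one is sufficiently aligned.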
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343008
Tongyue Gao, H. Ge, Jinjun Rao, Zhenbang Gong, Jun Luo
Recently, UAVs have become a research focus worldwide. This paper puts forward a new aircraft type, a double-ducted tilting subminiature UAV (SUAV), and develops the navigation system for this SUAV. The paper proposes applying the gyroscope, accelerometer, and magnetometer, using a Kalman filtering algorithm to establish the optimal attitude matrix, i.e., the best digital platform. The optimal attitude matrix obtained with this method avoids the long-term accumulated errors of the attitude matrix in conventional integrated navigation. In addition, the paper presents a Kalman algorithm for integrated navigation that can be adjusted according to the motion information of the carrier, so that the integrated navigation system obtains the best navigation information under different motion states. Finally, the paper validates the navigation system design based on multi-sensor information fusion.
Title: "Design of double ducted tilting SUAV navigation system based on multi-sensor information fusion". Venue: 2012 IEEE MFI.
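The drift-correction principle (gyro integration stabilized by absolute references from the accelerometer, and the magnetometer for yaw) can be illustrated with a single-axis fixed-gain complementary filter. The paper uses a Kalman filter, so this is a deliberately simplified stand-in for the same idea:

```python
import math

def complementary_tilt(gyro_rates, accels, dt, alpha=0.98, theta0=0.0):
    """1-axis attitude estimate: integrate the gyro for short-term
    accuracy and blend in the accelerometer tilt angle each step to
    cancel long-term gyro drift. accels are (ax, az) pairs."""
    theta = theta0
    for w, (ax, az) in zip(gyro_rates, accels):
        theta_acc = math.atan2(ax, az)  # gravity-referenced tilt angle
        theta = alpha * (theta + w * dt) + (1 - alpha) * theta_acc
    return theta

# A constant 0.1 rad/s gyro bias on a stationary platform: pure
# integration would drift to 1.0 rad over 10 s, but the accelerometer
# blend keeps the estimate bounded near zero
theta = complementary_tilt([0.1] * 1000, [(0.0, 1.0)] * 1000, dt=0.01)
assert theta < 0.1
```

A Kalman formulation replaces the fixed `alpha` with a gain computed from the gyro and accelerometer noise models, which is what lets it adapt to the carrier's motion state.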
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343045
Christopher Baumgärtner, Niels Beuck, W. Menzel
We present an architecture for natural language processing that parses an input sentence incrementally and merges information about its structure with a representation of visual input, thereby changing the results of parsing. At each step of incremental processing, the elements in the context representation are judged on whether they match the content of the sentence fragment up to that step. The information contained in the best matching subset then influences the result of parsing the partial sentence. As processing progresses and the sentence is extended by adding new words, the context is searched for new information that matches the expanded language input. This incremental approach to information fusion is highly adaptable with regard to integrating dynamic knowledge extracted from a constantly changing environment.
Title: "An architecture for incremental information fusion of cross-modal representations". Venue: 2012 IEEE MFI.
Pub Date: 2012-11-12 | DOI: 10.1109/MFI.2012.6343007
Youngmok Yun, Jingfu Jin, N. Kim, Jeongyeon Yoon, Changhwan Kim
Autonomous outdoor navigation algorithms are required in various military and industrial fields. A stable and robust outdoor localization algorithm is critical to successful outdoor navigation. However, unpredictable external effects and interruptions of the GPS signal make outdoor localization difficult. To address this issue, we first devised a new optical navigation sensor that measures a mobile robot's transverse distance without being subject to external influences. Next, using the optical navigation sensor, a novel localization algorithm is established with an inertial measurement unit (IMU) and GPS. The algorithm is verified in an urban environment where the GPS signal is frequently interrupted and rough ground surfaces provide serious disturbances.
Title: "Outdoor localization with optical navigation sensor, IMU and GPS". Venue: 2012 IEEE MFI.
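The GPS-dropout behavior can be pictured with a one-dimensional predict/correct loop: dead-reckoning increments (odometry, or here the optical navigation sensor's distance measurements) drive the prediction, and GPS corrections are applied only when a fix exists, so uncertainty grows during outages and collapses at the next fix. A sketch under those simplifying assumptions (1-D state, scalar noises; not the paper's full filter):

```python
def fuse_position(x, P, odom_delta, q, gps=None, r=None):
    """One cycle of 1-D position fusion: predict with a dead-reckoning
    increment (process noise variance q), then correct with a GPS fix of
    variance r when available. During a dropout, only prediction runs
    and the variance P grows."""
    x, P = x + odom_delta, P + q       # predict
    if gps is not None:                # correct (skipped during dropout)
        K = P / (P + r)                # Kalman gain
        x, P = x + K * (gps - x), (1 - K) * P
    return x, P

# Three cycles of GPS dropout inflate P; the next fix shrinks it again
x, P = 0.0, 1.0
for _ in range(3):
    x, P = fuse_position(x, P, odom_delta=0.5, q=0.1)
x2, P2 = fuse_position(x, P, odom_delta=0.5, q=0.1, gps=3.0, r=0.5)
assert P2 < P
```

Modeling the optical sensor as a low-noise `odom_delta` source is exactly what bounds the error growth between GPS fixes.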