Cooperative exploration strategy for micro-aerial vehicles fleet
Nesrine Mahdoui, V. Fremont, E. Natalizio
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170426
2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
In this paper, the problem of exploring an unknown environment by deploying a fleet of Micro-Aerial Vehicles (MAVs) is considered. As a single robot has already proven its efficiency for this task, the challenge is to extend the approach to a multi-robot system in order to reduce the exploration time. For this purpose, a cooperative navigation strategy is proposed, based on a specific utility function and inter-robot data exchange. The novelty lies in exchanging frontier points instead of maps, which reduces both the computation and the amount of data transmitted within the network. The proposed system has been implemented and tested under ROS using the Gazebo simulator. The results demonstrate that the proposed navigation strategy efficiently spreads the robots over the environment for faster exploration.
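As a rough illustration of the frontier-based strategy described above, the sketch below ranks frontier points by a simple utility: expected gain discounted by travel distance, with a penalty for frontiers already claimed by a teammate. The utility form and the `claimed` set are assumptions for illustration; the paper's actual utility function is not given in the abstract.

```python
import math

def frontier_utility(frontiers, robot_pos, claimed):
    """Pick the best frontier point for one robot. Teammates share
    frontier points (not maps), so 'claimed' marks frontiers another
    robot is already heading to (hypothetical coordination rule)."""
    best, best_u = None, -math.inf
    for f in frontiers:
        dist = math.hypot(f[0] - robot_pos[0], f[1] - robot_pos[1])
        gain = 1.0                      # uniform expected gain (assumption)
        penalty = 0.5 if f in claimed else 0.0
        u = gain / (1.0 + dist) - penalty
        if u > best_u:
            best, best_u = f, u
    return best

# With the near frontier claimed by a teammate, the robot spreads out
# to the farther one instead of duplicating effort.
print(frontier_utility([(1.0, 0.0), (5.0, 0.0)], (0.0, 0.0),
                       claimed={(1.0, 0.0)}))
```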
Coastal ship monitoring based on multiple compact high frequency surface wave radars
Sangwook Park, C. Cho, Younglo Lee, A. D. Costa, Sangho Lee, Hanseok Ko
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170381
Recently, owing to their wide observable range and low power consumption, high frequency radars have increasingly been used for ship detection, both for harbor management and for national security. However, the range and angular resolutions of high frequency radars are typically low due to environmental and physical constraints, so a target location detected by a high frequency radar system can lie far from the target's real position. To reduce this detection error, a location estimation method based on multiple high frequency radars is proposed. Using a Bayesian approach, a more accurate final location is determined by the posterior mean; for this work, both the likelihood and the prior probability are modelled. The effectiveness of the proposed method is shown through simulations conducted over a range of signal-to-clutter-plus-noise ratios. The results verify that the proposed method improves both locating and detecting performance.
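When both the prior and the per-radar likelihoods are modelled as Gaussians (an assumption for illustration; the abstract does not state the exact models), the posterior mean used above has a closed form: the precision-weighted average of the prior and the radar measurements.

```python
def posterior_mean(prior_mu, prior_var, measurements):
    """Fuse independent 1D Gaussian position measurements from several
    HF radars with a Gaussian prior. Under these assumptions the
    posterior is Gaussian, and its mean is the precision-weighted
    average of the prior mean and the measurement means."""
    num = prior_mu / prior_var          # precision-weighted prior
    den = 1.0 / prior_var
    for mu, var in measurements:        # (mean, variance) per radar
        num += mu / var
        den += 1.0 / var
    return num / den

# Two radars with equal variance pull a vague prior toward their
# measurements, landing between 10 and 12.
print(posterior_mean(0.0, 100.0, [(10.0, 4.0), (12.0, 4.0)]))
```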
Estimation of structure and physical relations among multi-modal sensor variables for musculoskeletal robotic arm
Kenta Harada, Yuichi Kobayashi
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170378
Autonomous robots that work in the same environment as humans must operate safely, adapt to handle various tools, and deal with partial malfunctions. We propose an approach for estimating the robot structure and apply it to building a controller for dynamic motions. The robot structure is estimated by evaluating the mutual information (MI) among the sensor variables: variables with high MI are connected by edges, and the controller is automatically constructed based on the estimated structure. The proposed approach can accommodate changes in the robot parameters and dynamic motions. We verify the proposed method using a simulator of a musculoskeletal arm driven by artificial muscles for mechanical safety.
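The MI-based edge test can be illustrated with a generic plug-in estimator on discrete sensor readings (a textbook estimator, not necessarily the paper's exact procedure): dependent variables score near their shared entropy, independent ones near zero.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete
    sensor variables observed jointly; a high value suggests a
    physical relation, hence an edge in the estimated structure."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # ratio p_joint / (p_x * p_y), written with raw counts
        mi += p_joint * math.log(p_joint * n * n / (px[x] * py[y]))
    return mi

a = [0, 0, 1, 1] * 25      # one binary sensor signal
b = [0, 1] * 50            # an independent pattern of the same length
print(mutual_information(a, a), mutual_information(a, b))
```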
Robotic sensory perception on human mentation for offering proper services
R. Luo, Chung-Kai Hsieh
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170374
To interact with humans in Human Social Environments (HSEs), robots are expected to perceive the situational context and behave appropriately. In this paper, we propose two deep learning models, serving as the robot's situational context perception, that learn from observations of human-robot interaction. Based on these models, we endow the robot with the capability of perceiving a human's mentation, so that it can perform appropriate social behaviors with respect to the human's mental state. The experimental results demonstrate that, with the proposed deep learning models, the robot significantly improves the accuracy of predicting a person's mentation compared to conventional classifiers, and shows potential for providing agreeable service.
Multistage fusion and dissimilarity regularization for deep learning
Young-Rae Cho, Seungjun Shin, Sung-Hyuk Yim, Hyun-Woong Cho, Woo‐Jin Song
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170385
We propose a multistage fusion stream (MFS) and dissimilarity regularization (DisReg) for deep learning. DisReg estimates the degree of similarity between the feature maps of the single-sensor streams and is applied to the learning problem of each stream, so that the streams develop distinct types of feature maps. Each stage of the MFS fuses the feature maps extracted from the single-sensor streams. The proposed scheme fuses information from heterogeneous sensors by learning new patterns that cannot be observed using the feature map of a single-sensor stream alone. The proposed method is evaluated on automatic target recognition in synthetic aperture radar and infrared images, and the superiority of the proposed fusion scheme is demonstrated by comparison with a conventional algorithm.
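The abstract does not give the exact form of DisReg; one plausible sketch (an assumption, not the authors' formula) penalizes the squared cosine similarity between two streams' feature maps, driving them toward distinct representations:

```python
import math

def disreg_penalty(f1, f2):
    """Hypothetical dissimilarity regularizer: the squared cosine
    similarity between two flattened feature maps. Adding this term
    to each stream's loss discourages the streams from learning the
    same features."""
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    cos = dot / (n1 * n2)
    return cos * cos

# Identical feature maps are maximally penalized; orthogonal (fully
# distinct) maps cost nothing.
print(disreg_penalty([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(disreg_penalty([1.0, 0.0], [0.0, 1.0]))  # 0.0
```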
Improving multitarget tracking using orientation estimates for sorting bulk materials
F. Pfaff, G. Kurz, C. Pieper, G. Maier, B. Noack, H. Kruggel-Emden, R. Gruna, U. Hanebeck, S. Wirtz, V. Scherer, T. Längle, J. Beyerer
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170379
Optical belt sorters can be used to sort a large variety of bulk materials, and sophisticated algorithms can further improve the performance of this complex machinery. Recently, we proposed an extension to industrial optical belt sorters that tracks the individual particles on the belt using an area scan camera. If the estimated behavior of the particles matches their true behavior, the reliability of the separation process can be improved. The approach relies on multitarget tracking with hard association decisions between tracks and measurements. In this paper, we propose to include the orientation in the assessment of the compatibility of a track and a measurement. This yields more reliable associations and thus a higher accuracy of the tracking results.
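A minimal sketch of an orientation-aware association cost, assuming a weighted sum of positional distance and wrapped angular difference (the paper's actual compatibility measure is not specified in the abstract):

```python
import math

def association_cost(track, meas, w_pos=1.0, w_ang=1.0):
    """Cost of associating a track with a measurement: the usual
    positional distance plus an orientation term. The angle
    difference is wrapped to [-pi, pi] so 0 and 2*pi agree."""
    dx = track["x"] - meas["x"]
    dy = track["y"] - meas["y"]
    dth = (track["theta"] - meas["theta"] + math.pi) % (2 * math.pi) - math.pi
    return w_pos * math.hypot(dx, dy) + w_ang * abs(dth)

track = {"x": 0.0, "y": 0.0, "theta": 0.1}
consistent = {"x": 0.5, "y": 0.0, "theta": 0.2}   # similar orientation
conflicting = {"x": 0.5, "y": 0.0, "theta": 3.0}  # same position, twisted

# Equal positional offsets, but the orientation term breaks the tie.
print(association_cost(track, consistent), association_cost(track, conflicting))
```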
Improving poor GPS area localization for intelligent vehicles
Dinh-Van Nguyen, F. Nashashibi, T. Dao, Eric Castelli
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170356
Precise positioning plays a key role in the successful navigation of autonomous vehicles. A fusion architecture combining the Global Positioning System (GPS) and Laser-SLAM (Simultaneous Localization and Mapping) is widely adopted: while Laser-SLAM is known for its highly accurate localization, GPS is still required to correct accumulated error and to give SLAM a reference coordinate frame. However, there are many situations where the GPS signal quality is too low or the signal is unavailable, such as in multi-story parking garages, tunnels, or urban areas affected by multipath propagation. This paper proposes an alternative approach for these areas, replacing GPS with a WiFi fingerprinting technique. The result obtained from WiFi fingerprinting is then fused with Laser-SLAM to maintain the general architecture, allowing the vehicle to adapt seamlessly to the environment.
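A minimal sketch of the fingerprinting step, assuming a plain nearest-neighbor match between a live RSSI scan and a surveyed radio map (the paper's actual matching scheme is not detailed in the abstract):

```python
def wifi_locate(radio_map, rssi):
    """Nearest-neighbor WiFi fingerprinting: return the surveyed
    position whose stored RSSI vector is closest (squared Euclidean)
    to the live scan. The access-point order is assumed fixed across
    all records (a simplifying assumption)."""
    best, best_d = None, float("inf")
    for pos, ref in radio_map.items():
        d = sum((a - b) ** 2 for a, b in zip(ref, rssi))
        if d < best_d:
            best, best_d = pos, d
    return best

# Hypothetical survey: RSSI (dBm) from three access points at two spots.
radio_map = {
    (0.0, 0.0): [-40, -70, -80],
    (10.0, 0.0): [-75, -45, -60],
}
# The live scan closely matches the first fingerprint.
print(wifi_locate(radio_map, [-42, -68, -79]))
```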
A kinect-based SLAM in an unknown environment using geometric features
Gayan Brahmanage, H. Leung
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170382
This paper proposes a geometric feature-based method to solve the Simultaneous Localization and Mapping (SLAM) problem in an unknown structured environment using a short-range, low Field of View (FoV) measurement unit such as a Kinect sensor. A RANdom SAmple Consensus (RANSAC) based algorithm is used for feature detection, and a grid-based point cloud segmentation method is introduced to improve multiple feature-point detection in a 2D depth frame. A FastSLAM algorithm is used to estimate the robot posterior and the map of the environment. This approach builds an individual map for each particle using geometric features extracted from a 2D slice of a 3D depth image; each map contains an individual Extended Kalman Filter (EKF) for every feature point. The method reduces the uncertainty of the robot pose in the prediction step, and the pose accuracy improves as more geometric feature points become available. The proposed feature-based approach gives better localization and a more compact map representation in structured environments when distinct features are available. The importance weighting and the comparison of features with the online map are performed according to the maximum likelihood criterion. To reduce particle depletion, the map is updated only when a new odometry measurement and new range measurements are available. The experiments are carried out using data recorded with a non-holonomic mobile robot equipped with a Kinect sensor in a small-scale indoor structured environment. For comparison, the grid-based SLAM result is also presented for the same data set.
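The RANSAC feature-detection step can be sketched as follows for a 2D slice of the depth image: repeatedly sample two points, fit the line through them (e.g. a wall segment), and keep the hypothesis with the most inliers. This is generic RANSAC, not the paper's tuned variant.

```python
import random

def ransac_line(points, iters=200, tol=0.05, seed=0):
    """Minimal RANSAC line fit on 2D points: sample two points, form
    the implicit line a*x + b*y + c = 0 through them, and count the
    points within distance 'tol'. Returns the largest inlier set."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y2 - y1, x1 - x2          # normal of the line
        c = -(a * x1 + b * y1)
        norm = (a * a + b * b) ** 0.5
        if norm == 0:                    # coincident sample, skip
            continue
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Ten collinear points (a "wall" at y = 1) plus two outliers: the
# wall dominates the vote.
pts = [(x * 0.1, 1.0) for x in range(10)] + [(0.3, 2.0), (0.7, 0.2)]
print(len(ransac_line(pts)))
```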
Defense strategies for asymmetric networked systems under composite utilities
N. Rao, Chris Y. T. Ma, K. Hausken, Fei He, David K. Y. Yau, J. Zhuang
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170351
We consider an infrastructure of networked systems with discrete components that can be reinforced at certain costs to guard against attacks. The communications network plays a critical, asymmetric role in providing the vital connectivity between the systems. We characterize the correlations within this infrastructure at two levels using (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network, and (b) first-order differential conditions on system survival probabilities that characterize component-level correlations. We formulate an infrastructure survival game between an attacker and a provider, who attack and reinforce individual components, respectively. Both use composite utility functions consisting of a survival probability term and a cost term; the previously studied sum-form and product-form utility functions are special cases. At Nash equilibrium, we derive expressions for the individual system survival probabilities and the expected total number of operational components. We apply and discuss these estimates for a simplified model of a distributed cloud computing infrastructure.
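As a toy illustration only (the paper's model is far more general, with correlation functions and continuous survival probabilities), the sketch below brute-forces pure-strategy Nash equilibria of a discrete reinforce-versus-attack game with composite utilities of the survival-minus-cost form:

```python
from itertools import product

def pure_nash(n=4, p_hit=0.8, cost_prov=0.9, cost_att=0.3):
    """Toy reinforce-vs-attack game: the provider reinforces r of n
    components, the attacker attacks a of them. An attacked,
    unreinforced component is destroyed with probability p_hit;
    reinforced components are assumed to be attacked first (a
    modelling assumption). Composite utility = expected surviving
    components (negated for the attacker) minus a linear effort cost."""
    def survivors(r, a):
        return n - p_hit * max(0, a - r)
    def u_prov(r, a):
        return survivors(r, a) - cost_prov * r
    def u_att(r, a):
        return -survivors(r, a) - cost_att * a
    eqs = []
    for r, a in product(range(n + 1), repeat=2):
        # (r, a) is an equilibrium iff neither side gains by deviating.
        if all(u_prov(r, a) >= u_prov(r2, a) for r2 in range(n + 1)) and \
           all(u_att(r, a) >= u_att(r, a2) for a2 in range(n + 1)):
            eqs.append((r, a))
    return eqs

# With reinforcement this expensive, the provider gives up and the
# attacker hits every component.
print(pure_nash())
```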
Improved formulation of the IMU and MARG orientation gradient descent algorithm for motion tracking in human-machine interfaces
M. Admiraal, Samuel Wilson, R. Vaidyanathan
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170354
Wearable motion tracking systems are becoming increasingly popular in human-machine interfaces. For inertial measurement, it is vital to efficiently fuse accelerometer, gyroscope, and magnetometer data into a spatial orientation. We introduce a new algorithm for this fusion that uses gradient descent to correct for the integration error in calculating the orientation quaternion of a rotating body. The algorithm is an improved formulation of the well-known gradient descent orientation estimation algorithm. The new formulation ensures that the gradient descent follows the direction of steepest descent, resulting in a five-order-of-magnitude increase in the precision of the calculated orientation quaternion. We have also converted the algorithm to use fixed-point integers instead of floating-point numbers, which more than doubles the speed of the calculations on the types of processors used with Inertial Measurement Units (IMUs) and Magnetic, Angular Rate and Gravity (MARG) sensors. The corrections are therefore not only faster than in the original formulation but also remain valid for a larger range of inputs. The improved efficiency and accuracy show significant potential for extending inertial measurement to applications where low power or greater precision is necessary, such as very small wearable or implantable systems.
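The gradient descent step being improved here can be sketched in its well-known accelerometer-only form: move the orientation quaternion down the gradient of the error between the predicted and measured gravity direction. The step size `beta` and the floating-point arithmetic are illustrative; the paper's contribution (true steepest descent and a fixed-point implementation) is not reproduced.

```python
import math

def gd_step(q, acc, beta=0.1):
    """One gradient-descent correction of the orientation quaternion
    q = (w, x, y, z) against an accelerometer reading 'acc'."""
    w, x, y, z = q
    ax, ay, az = acc
    n = math.sqrt(ax * ax + ay * ay + az * az)
    ax, ay, az = ax / n, ay / n, az / n       # normalize gravity measurement
    # Objective: world gravity [0, 0, 1] rotated into the sensor frame,
    # minus the measured direction.
    f1 = 2 * (x * z - w * y) - ax
    f2 = 2 * (w * x + y * z) - ay
    f3 = 2 * (0.5 - x * x - y * y) - az
    # Gradient = Jacobian-transpose times f.
    g = [-2 * y * f1 + 2 * x * f2,
          2 * z * f1 + 2 * w * f2 - 4 * x * f3,
         -2 * w * f1 + 2 * z * f2 - 4 * y * f3,
          2 * x * f1 + 2 * y * f2]
    gn = math.sqrt(sum(v * v for v in g)) or 1.0   # avoid divide-by-zero
    q = [w - beta * g[0] / gn, x - beta * g[1] / gn,
         y - beta * g[2] / gn, z - beta * g[3] / gn]
    qn = math.sqrt(sum(v * v for v in q))
    return [v / qn for v in q]                     # renormalize

# An identity orientation already agrees with gravity straight down,
# so the step leaves it unchanged.
print(gd_step([1.0, 0.0, 0.0, 0.0], (0.0, 0.0, 9.81)))
```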