Title: A patient-to-CT registration method based on spherical unwrapping and H-K curvature descriptors for surgical navigation system
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170399
K. Kwon, Seung Hyun Lee, M. Y. Kim
An image-to-patient registration process is required for a surgical navigation system to make active use of pre-operative images such as CT and MRI during an operation. This paper deals with one method of combining surface-scanning data of the patient with 3D data from MRI or CT images. After a 3D surface-measurement device scans the surface of the patient's surgical site, the resulting 3D data are registered to the CT or MRI data using computer-based optimization algorithms such as the conventional ICP algorithm. However, the general ICP algorithm has the disadvantages that it converges slowly if a proper initial pose is not provided, and that it suffers from the local-minimum problem during the process. In this paper, we propose an automatic image-to-patient registration method that accurately finds a proper initial pose without manual intervention by surgical operators. The proposed method extracts the initial starting pose for ICP by converting the 3D data sets of the MRI or CT images and the surface-scanning data into 2D curvature images and automatically performing H-K curvature image matching between them. It exploits the fact that curvature features are robust to rotation, translation, and even some deformation. Automatic image-to-patient registration is then completed by precise 3D registration of the extracted CT region of interest (ROI) to the patient's surface measurement data using the ICP algorithm.
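The H-K matching step rests on classifying each surface point by the signs of its mean curvature H and Gaussian curvature K. A minimal sketch in Python, assuming the principal curvatures k1 and k2 have already been estimated from the scan or the CT surface (the function names and noise threshold are illustrative, not the authors' code):

```python
import numpy as np

def hk_curvatures(k1, k2):
    """Mean (H) and Gaussian (K) curvature from principal curvatures k1, k2."""
    return 0.5 * (k1 + k2), k1 * k2

def hk_label(H, K, eps=1e-4):
    """Coarse H-K surface-type label, thresholded by eps to absorb noise."""
    if abs(H) < eps and abs(K) < eps:
        return "flat"
    if abs(K) < eps:
        return "ridge" if H < 0 else "valley"
    if K > 0:
        return "peak" if H < 0 else "pit"
    return "saddle"

H, K = hk_curvatures(k1=0.02, k2=-0.03)
print(hk_label(H, K))  # -> "saddle"
```

Labelling every point this way turns each 3D surface into a 2D curvature image, and matching those images yields the initial pose for ICP.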
{"title":"A patient-to-CT registration method based on spherical unwrapping and H-K curvature descriptors for surgical navigation system","authors":"K. Kwon, Seung Hyun Lee, M. Y. Kim","doi":"10.1109/MFI.2017.8170399","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170399","url":null,"abstract":"Image-to-patient registration process is required to use actively pre-operative images such as CT and MRI during operation for surgical navigation system. One method to utilize scanning data of patients and 3D data from MRI or CT images is dealt with in this paper. After 3D surface measurement device measures the surface of patient's surgical site, this 3D data is registered to CT or MRI data using computer-based optimization algorithms like conventional ICP algorithms. However, general ICP algorithm has some disadvantages that it takes a long converging time if a proper initial location is not set up and also suffers from local minimum problem during the process. In this paper, we propose an automatic image-to-patient registration method that can accurately find a proper initial location without manual intervention of surgical operators. The proposed method finds and extracts the initial starting location for ICP by converting 3D data set of MRI or CT images and surface scanning data to 2D curvature images and by performing H-K curvature image matching between them automatically. It is based on the characteristics that curvature features are robust to the rotation, translation and even some deformation. Automatic image-to-patient registration is implemented by precisely 3D registration the extracted CT ROI and the patient's surface measurement data using ICP algorithm.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127263179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Human activity recognition using robust spatiotemporal features and convolutional neural network
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170420
Md. Zia Uddin, W. Khaksar, J. Tørresen
In this work, we propose a novel human activity recognition method for depth videos that uses robust spatiotemporal features with a convolutional neural network. From the depth images of activities, human body parts are segmented using random features with a random forest. From the segmented body parts in each depth image of an activity video, spatial features are extracted, such as the angles of 3-D body-joint pairs and the means and variances of the depth values in each body part. The spatial features are then augmented with motion features, such as the magnitude and direction of each joint's displacement in the next image of the video. Finally, the spatiotemporal features are fed to a convolutional neural network for activity training and recognition. The deep learning-based activity recognition method outperforms traditional methods.
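As a rough illustration of how such a spatiotemporal feature vector could be assembled, assuming segmented 3-D joints and per-part depth samples are already available (the in-plane encoding of motion direction below is an assumption; the abstract does not spell out the exact encoding):

```python
import numpy as np

def joint_angle(a, b):
    """Angle between two 3-D joint position vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def spatiotemporal_features(joints_t, joints_t1, part_depths):
    """Spatial features (pairwise joint angles, per-part depth mean and
    variance) concatenated with motion features (magnitude and in-plane
    direction of each joint's displacement to the next frame)."""
    n = len(joints_t)
    spatial = [joint_angle(joints_t[i], joints_t[j])
               for i in range(n) for j in range(i + 1, n)]
    for depths in part_depths:                  # depth samples per body part
        spatial += [np.mean(depths), np.var(depths)]
    motion = []
    for p, q in zip(joints_t, joints_t1):
        v = q - p
        motion += [np.linalg.norm(v), np.arctan2(v[1], v[0])]
    return np.asarray(spatial + motion)         # input vector for the CNN

joints_t = np.random.rand(15, 3)                # 15 joints at frame t
joints_t1 = np.random.rand(15, 3)               # same joints at frame t+1
parts = [np.random.rand(50) for _ in range(5)]  # depth samples for 5 parts
print(spatiotemporal_features(joints_t, joints_t1, parts).shape)
```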
{"title":"Human activity recognition using robust spatiotemporal features and convolutional neural network","authors":"Md. Zia Uddin, W. Khaksar, J. Tørresen","doi":"10.1109/MFI.2017.8170420","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170420","url":null,"abstract":"In this work, we propose a novel human activity recognition method from depth videos using robust spatiotemporal features with convolutional neural network. From the depth images of activities, human body parts are segmented based on random features on a random forest. From the segmented body parts in a depth image of an activity video, spatial features are extracted such as angles of the 3-D body joint pairs, means and variances of the depth information in each part of the body. The spatial features are then augmented with the motion features such as magnitude and direction of joints in next image of the video. Finally, the spatiotemporal features are applied to a convolutional neural network for activity training and recognition. The deep learning-based activity recognition method outperforms other traditional methods.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122759030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Online reliability assessment and reliability-aware fusion for Ego-Lane detection using influence diagram and Bayes filter
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170400
T. Nguyen, J. Spehr, Jian Xiong, M. Baum, S. Zug, R. Kruse
Within the context of road estimation, this paper addresses the problem of fusing several sources with different reliabilities, where reliability represents a higher-level uncertainty. The problem arises in automated driving and ADAS due to changing environmental conditions, e.g., road type or visibility of lane markings. We therefore present an online sensor-reliability assessment and reliability-aware fusion to cope with this challenge. First, we apply a boosting algorithm to select the most discriminant features from the extracted information. Using these features, we apply different classifiers, such as Bayesian network and random forest classifiers, to learn the reliabilities. To stabilize the estimated reliabilities over time, we deploy approaches such as Dempster-Shafer evidence theory and an influence diagram combined with a Bayes filter. Experimental results on a large collection of real-world recordings support the proposed approach.
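The temporal stabilization can be pictured as a discrete Bayes filter over a per-source reliability state. A minimal two-state sketch (the transition matrix and classifier scores below are made-up illustrations, not values from the paper):

```python
import numpy as np

def bayes_filter_step(belief, transition, likelihood):
    """One predict/update step of a discrete Bayes filter over a source's
    reliability state; transition[i, j] = P(next = j | current = i)."""
    predicted = transition.T @ belief        # predict with the state model
    posterior = likelihood * predicted       # update with classifier output
    return posterior / posterior.sum()

# two states: [reliable, unreliable]; all numbers are illustrative
belief = np.array([0.5, 0.5])
transition = np.array([[0.9, 0.1],
                       [0.2, 0.8]])
for p in [0.8, 0.7, 0.9]:                    # per-frame classifier outputs
    belief = bayes_filter_step(belief, transition, np.array([p, 1.0 - p]))
print(belief)                                # smoothed reliability estimate
```

The smoothed belief can then weight each source's contribution in the fusion step instead of trusting the raw per-frame classifier output.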
{"title":"Online reliability assessment and reliability-aware fusion for Ego-Lane detection using influence diagram and Bayes filter","authors":"T. Nguyen, J. Spehr, Jian Xiong, M. Baum, S. Zug, R. Kruse","doi":"10.1109/MFI.2017.8170400","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170400","url":null,"abstract":"Within the context of road estimation, the present paper addresses the problem of the fusion of several sources with different reliabilities. Thereby, reliability represents a higher-level uncertainty. This problem arises in automated driving and ADAS due to changing environmental conditions, e.g., road type or visibility of lane markings. Thus, we present an online sensor reliability assessment and reliability-aware fusion to cope with this challenge. First, we apply a boosting algorithm to select the highly discriminant features among the extracted information. Using them we apply different classifiers to learn the reliabilities, such as Bayesian Network and Random Forest classifiers. To stabilize the estimated reliabilities over time, we deploy approaches such as Dempster-Shafer evidence theory and Influence Diagram combined with a Bayes Filter. Using a big collection of real data recordings, the experimental results support our proposed approach.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"362 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114769141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A general framework for data fusion and outlier removal in distributed sensor networks
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170412
Muhammad Abu Bakr, Sukhan Lee
A fundamental issue in sensor fusion is detecting and removing outliers, as sensors often produce inconsistent measurements that are difficult to predict and model. The detection and removal of spurious data is paramount to the quality of sensor fusion, since it keeps such data out of the fusion pool. In this paper, a general framework of data fusion is presented for distributed sensor networks of arbitrary redundancy, in which inconsistent data are identified simultaneously within the framework. By a general framework, we mean one that can fuse multiple correlated data sources and incorporate linear constraints directly, while detecting and removing outliers without any prior information. The proposed method, referred to here as the Covariance Projection (CP) method, aggregates all the state vectors into a single vector in an extended space. The method then projects the mean and covariance of the aggregated state vector onto the constraint manifold representing the constraints among the state vectors that must be satisfied, including the equality constraint. Based on the distance from the manifold, the method identifies the relative disparity among data sources and assigns confidence measures. The method provides an unbiased and optimal solution in the sense of minimum mean square error (MMSE) for distributed fusion architectures and can deal with correlations and uncertainties among local estimates and/or sensor observations across time. Simulation results show the effectiveness of the proposed method in identifying and removing inconsistency in distributed sensor systems.
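For a linear constraint A x = b, the projection of the aggregated mean and covariance onto the constraint manifold has a standard closed form, and the residual's Mahalanobis distance from the manifold provides an inconsistency measure. A minimal sketch of this idea (not the authors' implementation):

```python
import numpy as np

def covariance_project(mu, C, A, b):
    """Project mean mu and covariance C onto the linear constraint
    manifold {x : A x = b}, minimizing Mahalanobis distance. Also returns
    the squared distance of mu from the manifold, usable for outlier
    detection."""
    r = A @ mu - b                 # constraint residual
    S = A @ C @ A.T
    G = C @ A.T @ np.linalg.inv(S)
    x = mu - G @ r                 # constrained (fused) mean
    P = C - G @ A @ C              # constrained covariance
    d2 = float(r @ np.linalg.solve(S, r))
    return x, P, d2

# two correlated scalar estimates of the same quantity; equality
# constraint x1 = x2 expressed as A x = b
mu = np.array([1.2, 0.8])
C = np.array([[0.10, 0.02],
              [0.02, 0.20]])
A = np.array([[1.0, -1.0]])
b = np.array([0.0])
x, P, d2 = covariance_project(mu, C, A, b)
print(x, d2)   # fused state and its disparity from the manifold
```

A large d2 flags a disparity among the sources; the paper's framework uses such distances to assign confidence measures before fusing.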
{"title":"A general framework for data fusion and outlier removal in distributed sensor networks","authors":"Muhammad Abu Bakr, Sukhan Lee","doi":"10.1109/MFI.2017.8170412","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170412","url":null,"abstract":"A fundamental issue in sensor fusion is to detect and remove outliers as sensors often produce inconsistent measurements that are difficult to predict and model. The detection and removal of spurious data is paramount to the quality of sensor fusion by avoiding their inclusion in the fusion pool. In this paper, a general framework of data fusion is presented for distributed sensor networks of arbitrary redundancies, where inconsistent data are identified simultaneously within the framework. By the general framework, we mean that it is able to fuse multiple correlated data sources and incorporate linear constraints directly, while detecting and removing outliers without any prior information. The proposed method, referred to here as Covariance Projection (CP) Method, aggregates all the state vectors into a single vector in an extended space. The method then projects the mean and covariance of the aggregated state vectors onto the constraint manifold representing the constraints among state vectors that must be satisfied, including the equality constraint. Based on the distance from the manifold, the proposed method identifies the relative disparity among data sources and assigns confidence measures. The method provides an unbiased and optimal solution in the sense of Minimum Mean Square Error (MMSE) for distributed fusion architectures and is able to deal with correlations and uncertainties among local estimates and/or sensor observations across time. Simulation results are provided to show the effectiveness of the proposed method in identification and removal of inconsistency in distributed sensors system.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130792680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Robotic autonomous exploration SLAM using combination of Kinect and laser scanner
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170393
Xudong Sun, F. Sun, Bin Wang, Jianqin Yin, Xiaolin Sheng, Qinghua Xiao
Frontier-based exploration is the most common approach to exploration, a fundamental problem in robotics. Laser scanners and the Kinect have each been widely used in robotic applications for simultaneous localization and mapping (SLAM). This paper proposes a method that combines data from a Kinect and a laser scanner to perform frontier-based exploration SLAM. The two sensors are mounted facing forward and backward in opposite directions, giving the robot a wider field of view; the robot can thus detect more complex surrounding features, increasing exploration efficiency and producing a more accurate map of the unknown environment.
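Frontier-based exploration drives the robot toward frontier cells, i.e., free cells of the occupancy grid that border unknown space. A minimal sketch of frontier detection on a small grid (the cell encodings are illustrative):

```python
import numpy as np

FREE, OCC, UNKNOWN = 0, 1, -1

def frontier_cells(grid):
    """Frontier cells: free cells with at least one unknown 8-neighbour;
    these are the goals a frontier-based explorer navigates toward."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            neigh = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (neigh == UNKNOWN).any():
                frontiers.append((r, c))
    return frontiers

grid = np.array([[FREE, FREE, UNKNOWN],
                 [FREE, OCC,  UNKNOWN],
                 [FREE, FREE, FREE]])
print(frontier_cells(grid))   # free cells bordering unexplored space
```

With the forward- and backward-facing sensors fused into one grid, more of the surroundings is marked FREE or OCC per pose, so frontiers are found and cleared faster.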
{"title":"Robotic autonomous exploration SLAM using combination of Kinect and laser scanner","authors":"Xudong Sun, F. Sun, Bin Wang, Jianqin Yin, Xiaolin Sheng, Qinghua Xiao","doi":"10.1109/MFI.2017.8170393","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170393","url":null,"abstract":"Frontier-based exploration is the most common approach to exploration, a fundamental problem in robotics. Laser scanner and Kinect have been widely used in robotic application for simultaneous localization and mapping (SLAM) separately. The paper proposes a method to combine the data from Kinect and laser scanner to perform a Frontier-based exploration SLAM. The 2 sensors will be installed facing forward and facing backward in opposite directions which make robot have wider vision, thus the robot can detect more complex surrounding features to increase the exploration efficiency and to construct a more accurate map of the unknown environment.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130886901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Estimating objectness using a compound eye camera
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170418
Hwiyeon Yoo, Donghoon Lee, Geonho Cha, Songhwai Oh
In this paper, we introduce a new hardware platform that mimics the compound eye of an insect and propose an algorithm for detecting objects with it. The compound eye camera has a wide viewing angle and simulates a number of single eyes on its hemisphere. Each single eye is an elementary unit that acquires visual input. Visual information from the single eyes is hierarchically merged to estimate objectness. We achieve an accuracy of 77.14% on a combined dataset of the PASCAL VOC 2012 and COCO-Stuff 10K databases.
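The abstract does not specify the merge operator, so the following sketch simply assumes per-eye objectness scores are averaged within local groups of neighbouring eyes, level by level, until one estimate remains (a placeholder for the hierarchical merging described above, not the authors' algorithm):

```python
import numpy as np

def hierarchical_merge(eye_scores, group_size=4):
    """Merge per-eye scores level by level: average within local groups,
    then recurse on the group means until a single estimate remains."""
    scores = np.asarray(eye_scores, dtype=float)
    while scores.size > 1:
        pad = (-scores.size) % group_size       # pad to a full last group
        scores = np.pad(scores, (0, pad), mode="edge")
        scores = scores.reshape(-1, group_size).mean(axis=1)
    return float(scores[0])

print(hierarchical_merge([0.9, 0.8, 0.2, 0.1, 0.7, 0.6]))
```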
{"title":"Estimating objectness using a compound eye camera","authors":"Hwiyeon Yoo, Donghoon Lee, Geonho Cha, Songhwai Oh","doi":"10.1109/MFI.2017.8170418","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170418","url":null,"abstract":"In this paper, we introduce a new hardware platform that mimics a compound eye of an insect and propose an algorithm to detect objects using it. The compound eye camera has a wide viewing angle and simulates a number of single eyes on its hemisphere. Each single eye is an elementary unit to acquire visual inputs. Visual information from single eyes is hierarchically merged to estimate objectness. We achieve the accuracy of 77.14% on a combined dataset of PASCAL VOC 2012 and COCO-Stuff 10K databases.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123854735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Trajectory planning and fuzzy control for perpendicular parking
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170373
Soumyo Das, Yamini Yarlagadda, Prashantkumar B. Vora, Sabarish R. P. Nair
This paper discusses trajectory planning and controlled maneuvering of a vehicle in parking-assist mode. The objective of the proposed algorithm is to use low-cost hardware, such as ultrasonic sensors, to provide automated parking assistance. The concept of grid occupancy is formulated in the free-space detection algorithm to compute lateral-longitudinal grid-cell vacancy. The vehicle enables automated parking-assist mode for controlled maneuvering once free-space detection and path planning are complete. The reference planned path is an optimized trajectory that lets the vehicle traverse into a free parking space in a single maneuver. The localized trajectory and the control algorithm for perpendicular parking are designed with reference to perception-sensor input. The controller facilitates intelligent navigation of the vehicle based on the measured obstacle-free clearance distance and a reference point. A steering-control algorithm based on fuzzy logic provides optimized maneuvering of the host vehicle into the perpendicular parking space. To ease the parking effort, an approach combining feedback and feed-forward fuzzy control is illustrated. Controller performance is evaluated in a simulation environment with a vehicle-dynamics model in the loop. The test scenario is modeled in CarMaker to substantiate the optimization of route selection and the smooth transition of the vehicle, in parking-assist mode, while maneuvering into an empty parking space.
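The combined feedback/feed-forward idea can be sketched as a kinematic feed-forward term from the planned path's curvature plus a fuzzy feedback term on lateral error. All membership functions, rule outputs, and the wheelbase below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_feedback(err):
    """Fuzzy feedback on lateral error (m): three rules, defuzzified by a
    weighted average of the rule outputs (steering corrections, degrees)."""
    mu = {"neg": tri(err, -2.0, -1.0, 0.0),
          "zero": tri(err, -1.0, 0.0, 1.0),
          "pos": tri(err, 0.0, 1.0, 2.0)}
    out = {"neg": 15.0, "zero": 0.0, "pos": -15.0}
    den = sum(mu.values()) + 1e-9
    return sum(mu[k] * out[k] for k in mu) / den

def steering_command(err, path_curvature, wheelbase=2.7):
    """Feed-forward steering from the planned path's curvature (kinematic
    bicycle model: tan(delta) = L * kappa) plus the fuzzy feedback term."""
    feedforward = np.degrees(np.arctan(wheelbase * path_curvature))
    return feedforward + fuzzy_feedback(err)

print(steering_command(err=0.3, path_curvature=0.1))
```

The feed-forward term tracks the planned single-maneuver trajectory, while the fuzzy term corrects residual deviations measured by the perception sensors.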
{"title":"Trajectory planning and fuzzy control for perpendicular parking","authors":"Soumyo Das, Yamini Yarlagadda, Prashantkumar B. Vora, Sabarish R. P. Nair","doi":"10.1109/MFI.2017.8170373","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170373","url":null,"abstract":"This paper discusses about the trajectory planning and controlled maneuvering of the vehicle in parking assist mode. The objective of the proposed algorithm is to use low-cost hardware like ultrasonic sensor in order to provide automated parking assist to the vehicle. The concept of grid occupancy is formulated in free space detection algorithm to compute lateral-longitudinal grid cell vacancy. The vehicle will enable automated parking assist mode for controlled maneuvering on completion of free space detection followed by path planning. The reference planned path is an optimized trajectory with single maneuver for the vehicle to traverse in a free parking space. The localized trajectory and control algorithm of the perpendicular parking have been designed with reference to perception sensor input. The controller facilitates the intelligent navigation of vehicle based on measured obstacle free clearance distance and reference point. The steering control algorithm based on fuzzy logic is designed to provide an optimized maneuvering of the host vehicle in perpendicular parking space. In order to ease parking effort, an innovative approach of combined feedback and feed-forward based fuzzy controller has been illustrated. The controller performance has been evaluated in simulation environment keeping vehicle dynamics model in loop. The test scenario has been modeled in Carmaker to substantiate the optimization of route selection and a smooth transition of vehicle in parking assist mode during maneuvering into an empty parking space.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122379887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Neural regularization jointly involving neurons and connections for robust image classification
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170451
G. H. Lim, E. Pedrosa, F. Amaral, N. Lau, Artur Pereira, J. L. Azevedo, B. Cunha
This paper presents an integrated neural regularization method for fully-connected neural networks that jointly combines two state-of-the-art regularization techniques, Dropout [1] and DropConnect [2]. When trained on small data sets, feed-forward networks tend to show poor prediction performance on test data never seen during training. To reduce this overfitting, regularization methods commonly use only a sparse subset of their inputs: a fully-connected layer with Dropout retains a randomly selected subset of hidden neurons with some probability, whereas a layer with DropConnect keeps only a randomly selected subset of the connections between neurons. Their relative performance has been reported to be domain-dependent. Image classification results show that the integrated method provides more degrees of freedom for achieving robust image recognition in the test phase. Experimental analyses on CIFAR-10 and a one-hand gesture dataset show that the method offers an opportunity to improve classification performance.
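A sketch of the joint regularizer for one fully-connected layer: DropConnect masks individual weights, then Dropout masks output units, with inverted scaling so expected activations stay unbiased at test time (the masking probabilities and the ReLU choice are assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_dropconnect_layer(x, W, b, p_conn=0.5, p_unit=0.5, train=True):
    """Fully-connected layer that applies DropConnect (random mask on the
    weights) and Dropout (random mask on the output units) jointly;
    inverted scaling means no rescaling is needed at test time."""
    if not train:
        return np.maximum(W @ x + b, 0.0)
    Wm = W * (rng.random(W.shape) >= p_conn) / (1.0 - p_conn)    # DropConnect
    h = np.maximum(Wm @ x + b, 0.0)                              # ReLU
    return h * (rng.random(h.shape) >= p_unit) / (1.0 - p_unit)  # Dropout

x = rng.standard_normal(8)
W = rng.standard_normal((4, 8)) * 0.1
b = np.zeros(4)
print(dropout_dropconnect_layer(x, W, b))
```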
{"title":"Neural regularization jointly involving neurons and connections for robust image classification","authors":"G. H. Lim, E. Pedrosa, F. Amaral, N. Lau, Artur Pereira, J. L. Azevedo, B. Cunha","doi":"10.1109/MFI.2017.8170451","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170451","url":null,"abstract":"This paper presents an integrated neural regularization method in fully-connected neural networks that jointly combines the cutting edge of regularization techniques; Dropout [1] and DropConnect [2]. With a small number of data set, trained feed-forward networks tend to show poor prediction performance on test data which has never been introduced while training. In order to reduce the overfitting, regularization methods commonly use only a sparse subset of their inputs. While a fully-connected layer with Dropout takes account of a randomly selected subset of hidden neurons with some probability, a layer with DropConnect only keeps a randomly selected subset of connections between neurons. It has been reported that their performances are dependent on domains. Image classification results show that the integrated method provides more degrees of freedom to achieve robust image recognition in the test phase. The experimental analyses on CIFAR-10 and one-hand gesture dataset show that the method provides the opportunity to improve classification performance.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127718349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Semantic information fusion to enhance situational awareness in surveillance scenarios
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170353
W. Müller, A. Kuwertz, D. Mühlenberg, J. Sander
In recent years, the use of unmanned aircraft systems (UAS) for security-related purposes has increased, ranging from military applications to different areas of civil protection. The deployment of UAS can support security forces in achieving enhanced situational awareness. However, in order to provide useful input to a situational picture, sensor data provided by UAS have to be integrated with information about the area and objects of interest from other sources. The aim of this study is to design a high-level data fusion component that combines probabilistic information processing with logical and probabilistic reasoning, supporting human operators' situational awareness and improving their ability to make efficient and effective decisions. To this end, a fusion component based on the ISR (Intelligence, Surveillance and Reconnaissance) Analytics Architecture (ISR-AA) [1] is presented, incorporating an object-oriented world model (OOWM) for information integration, an expressive knowledge model, and a reasoning component for the detection of critical events. Approaches for translating the information contained in the OOWM into either an ontology for logical reasoning or a Markov logic network for probabilistic reasoning are presented.
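One way to picture the OOWM-to-ontology translation is as emitting subject-predicate-object triples per world-model object, which a logical reasoner can then consume. The attribute names and world content below are hypothetical, not the ISR-AA schema:

```python
def oowm_to_triples(objects):
    """Translate object-oriented world model (OOWM) entries into
    subject-predicate-object triples for ontology-based reasoning."""
    triples = []
    for obj in objects:
        triples.append((obj["id"], "rdf:type", obj["class"]))
        for attr, value in obj.get("attributes", {}).items():
            triples.append((obj["id"], attr, value))
    return triples

# hypothetical world-model content for illustration only
world = [{"id": "vehicle_01", "class": "Truck",
          "attributes": {"near": "checkpoint_A", "speed": "high"}}]
print(oowm_to_triples(world))
```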
{"title":"Semantic information fusion to enhance situational awareness in surveillance scenarios","authors":"W. Müller, A. Kuwertz, D. Mühlenberg, J. Sander","doi":"10.1109/MFI.2017.8170353","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170353","url":null,"abstract":"In recent years, the usage of unmanned aircraft systems (UAS) for security-related purposes has increased, ranging from military applications to different areas of civil protection. The deployment of UAS can support security forces in achieving an enhanced situational awareness. However, in order to provide useful input to a situational picture, sensor data provided by UAS has to be integrated with information about the area and objects of interest from other sources. The aim of this study is to design a high-level data fusion component combining probabilistic information processing with logical and probabilistic reasoning, to support human operators in their situational awareness and improving their capabilities for making efficient and effective decisions. To this end, a fusion component based on the ISR (Intelligence, Surveillance and Reconnaissance) Analytics Architecture (ISR-AA) [1] is presented, incorporating an object-oriented world model (OOWM) for information integration, an expressive knowledge model and a reasoning component for detection of critical events. Approaches for translating the information contained in the OOWM into either an ontology for logical reasoning or a Markov logic network for probabilistic reasoning are presented.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"65 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127553597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Multirotor UAV state prediction through multi-microphone side-channel fusion
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170401
Hendrik Vincent Koops, Kashish Garg, Munsung Kim, Jonathan Li, A. Volk, F. Franchetti
Improving trust in the state of cyber-physical systems becomes increasingly important as more of their tasks become autonomous. Research into the sound of cyber-physical systems has shown that audio side-channel information from a single microphone can be used to accurately model traditional primary state-sensor measurements such as speed and gear position. Furthermore, data-integration research has shown that information from multiple heterogeneous sources can be combined into improved, more reliable data. In this paper, we present a multi-microphone machine-learning data fusion approach that accurately predicts the ascending/hovering/descending states of a multi-rotor UAV in flight. We show that fusing multiple audio classifiers predicts these states with accuracies over 94%, significantly improves the state predictions of single microphones, and outperforms several other integration methods. These results add to a growing body of work showing that microphone side-channel approaches can be used in cyber-physical systems to accurately model and improve the assurance of primary sensor measurements.
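A simple late-fusion rule consistent with this setup, though the paper compares several integration methods: average the per-microphone class posteriors and take the arg-max state. The posterior values below are made up for illustration:

```python
import numpy as np

STATES = ["ascending", "hovering", "descending"]

def fuse_microphones(posteriors):
    """Late fusion of per-microphone classifier outputs: average the
    class posteriors across microphones, then pick the arg-max state."""
    avg = np.mean(np.asarray(posteriors, dtype=float), axis=0)
    return STATES[int(np.argmax(avg))], avg

mics = [[0.7, 0.2, 0.1],    # hypothetical per-microphone posteriors
        [0.6, 0.3, 0.1],
        [0.5, 0.3, 0.2]]
state, confidence = fuse_microphones(mics)
print(state, confidence)    # fused flight-state prediction
```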
{"title":"Multirotor UAV state prediction through multi-microphone side-channel fusion","authors":"Hendrik Vincent Koops, Kashish Garg, Munsung Kim, Jonathan Li, A. Volk, F. Franchetti","doi":"10.1109/MFI.2017.8170401","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170401","url":null,"abstract":"Improving trust in the state of Cyber-Physical Systems becomes increasingly important as more Cyber-Physical Systems tasks become autonomous. Research into the sound of Cyber-Physical Systems has shown that audio side-channel information from a single microphone can be used to accurately model traditional primary state sensor measurements such as speed and gear position. Furthermore, data integration research has shown that information from multiple heterogeneous sources can be integrated to create improved and more reliable data. In this paper, we present a multi-microphone machine learning data fusion approach to accurately predict ascending/hovering/descending states of a multi-rotor UAV in flight. We show that data fusion of multiple audio classifiers predicts these states with accuracies over 94%. Furthermore, we significantly improve the state predictions of single microphones, and outperform several other integration methods. These results add to a growing body of work showing that microphone side-channel approaches can be used in Cyber-Physical Systems to accurately model and improve the assurance of primary sensors measurements.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123640958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}