Map-based drone homing using shortcuts
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170371
D. Bender, W. Koch, D. Cremers
Up to the present day, GPS signals have been the key component in almost all outdoor navigation tasks of robotic platforms. To obtain the platform pose, comprising position as well as orientation, and to receive information at a higher frequency, GPS signals are commonly used in a GPS-corrected inertial navigation system (INS). This makes GPS a critical single point of failure, especially for autonomous drones. We propose an approach that creates a metric map of the observed area by fusing camera images with inertial and GPS data during normal operation, and uses this map to steer a drone efficiently to its home position in the case of a GPS outage. A naive approach would follow the previously traveled path, obtaining accurate pose estimates by comparing the current camera image with the previously created map. The presented procedure instead allows shortcuts through unexplored areas to minimize the travel distance. We ensure that the starting point is reached by taking into account the maximal positional drift accumulated while performing purely visual navigation in unknown areas. The approach achieved close-to-optimal results in extensive numerical studies, and we demonstrate its usability in a realistic simulation environment.
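To make the drift argument concrete, here is a minimal Python sketch of the kind of shortcut test the abstract describes; the drift rate, relocalization radius, and function names are illustrative assumptions, not values from the paper.

```python
# Illustrative parameters (assumptions, not values from the paper):
DRIFT_RATE = 0.02      # worst-case visual-odometry drift per metre in unknown terrain
RELOC_RADIUS = 5.0     # radius (m) around the mapped path where relocalization succeeds

def shortcut_is_safe(shortcut_length_m: float) -> bool:
    """Accept a shortcut through unexplored terrain only if the maximal
    accumulated drift still lets the drone re-enter the mapped corridor."""
    return DRIFT_RATE * shortcut_length_m <= RELOC_RADIUS

def choose_path(mapped_path_m: float, shortcut_m: float) -> str:
    # Prefer the shorter route; fall back to the mapped path when the
    # drift bound would be violated.
    if shortcut_m < mapped_path_m and shortcut_is_safe(shortcut_m):
        return "shortcut"
    return "mapped path"

print(choose_path(mapped_path_m=400.0, shortcut_m=150.0))  # -> shortcut
```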
{"title":"Map-based drone homing using shortcuts","authors":"D. Bender, W. Koch, D. Cremers","doi":"10.1109/MFI.2017.8170371","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170371","url":null,"abstract":"Up to the present day, GPS signals are the key component in almost all outdoor navigation tasks of robotic platforms. To obtain the platform pose, comprising the position as well as the orientation, and receive information at a higher frequency, the GPS signals are commonly used in a GPS-corrected inertial navigation system (INS). The GPS is a critical single point of failure, especially for autonomous drones. We propose an approach which creates a metric map of the observed area by fusing camera images with inertial and GPS data during its normal operation and use this map to steer a drone efficiently to its home position in the case of an GPS outage. A naive approach would follow the previously traveled path and get accurate pose estimates by comparing the current camera image with the previously created map. The presented procedure allows the usage of shortcuts through unexplored areas to minimize the travel distance. Thereby, we ensure to reach the starting point by taking into consideration the maximal positional drift while performing pure visual navigation in unknown areas. We achieved close to optimal results in intensive numerical studies and we demonstrate the usability of the algorithm in a realistic simulation environment.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"11 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116069103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design & implementation of distributed congestion control scheme for heterogeneous traffic in wireless sensor networks
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170384
A. Khan, S. Ghani, S. Siddiqui
Recently emerging wireless sensor technologies integrate different types of sensor nodes in a network for information collection. Heterogeneous Wireless Sensor Networks (WSNs) impose complex design challenges, as nodes in such networks often have different requirements in terms of latency and bandwidth. Channel access therefore needs to be managed so that a differentiated quality of service is ensured for each priority. This paper develops and evaluates a distributed congestion control scheme for CSMA that makes it feasible for prioritized heterogeneous traffic. For this purpose, a model earlier developed for IEEE 802.15.4 has been enhanced and integrated with duty-cycled CSMA. Heterogeneous traffic of three different priorities was used to evaluate the performance of the proposed scheme, which was implemented in nesC for the mica2 platform. The results reveal that, for heterogeneous traffic, the throughput of CSMA integrated with the proposed scheme has a significant advantage over basic CSMA.
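The priority mechanism can be pictured as a contention window that shrinks with priority. The following sketch assumes a slotted CSMA backoff with per-priority backoff exponents; the class names and exponent values are illustrative, not the paper's parameters.

```python
import random

# Per-priority minimum backoff exponents (assumed values):
MIN_BE = {"high": 3, "medium": 4, "low": 5}

def backoff_slots(priority: str, retries: int = 0, max_be: int = 8) -> int:
    """Draw a CSMA backoff delay; higher-priority traffic draws from a
    smaller contention window and so statistically wins channel access."""
    be = min(MIN_BE[priority] + retries, max_be)
    return random.randint(0, 2 ** be - 1)

# Mean backoff per class over many draws shows the differentiation:
for p in ("high", "medium", "low"):
    mean = sum(backoff_slots(p) for _ in range(10_000)) / 10_000
    print(p, round(mean, 1))
```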
{"title":"Design & implementation of distributed congestion control scheme for heterogeneous traffic in wireless sensor networks","authors":"A. Khan, S. Ghani, S. Siddiqui","doi":"10.1109/MFI.2017.8170384","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170384","url":null,"abstract":"Recently emerging wireless sensor technologies integrate different types of sensor nodes in a network for information collection. The heterogeneous Wireless Sensor Network (WSN) imposes complex design challenges as nodes in such a network often have different requirements in terms of latency and bandwidth. Therefore, the channel access for nodes needs to be managed ensuring differentiated quality of service for each priority. This paper aims at developing and evaluating a distributed congestion control scheme for CSMA to make it feasible for prioritized heterogeneous traffic. For this purpose, a model earlier developed for 802.15.4 has been enhanced and integrated with the duty-cycled CSMA. Heterogeneous Traffic of three different priorities has been used for evaluating the performance of proposed scheme. The scheme has been implemented in nes-C for the mica2 platform. It has been revealed that for heterogeneous traffic, the throughput of CSMA integrated with our proposed scheme has a significant advantage over basic CSMA.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124073579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image brightness adjustment system based on ANFIS by RGB and CIE L∗a∗b∗
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170365
Eunkyeong Kim, Hyunhak Cho, Hansoo Lee, Jongeun Park, Sungshin Kim
This paper proposes a method to adjust brightness information by applying the CIE L∗a∗b∗ color space and an adaptive neuro-fuzzy inference system (ANFIS). The brightness of an image captured by a vision sensor must be adjusted before objects in the image can be recognized reliably. Under proper lighting, the image is clear enough for object recognition; under improper lighting, however, the image contains darkish regions, which reduces recognition success. To make up for this weak point, we adjust such darkish images by controlling their brightness information. Brightness can be represented in the CIE L∗a∗b∗ color space, so an ANFIS is implemented as a control function based on that space. The control function adjusts brightness by operating on the L component of CIE L∗a∗b∗, which describes the brightness of the image. The value calculated by the ANFIS is called the adjustment coefficient, and it is added to the L component to adjust the brightness. To verify the proposed method, we calculated color differences with respect to the RGB and CIE L∗a∗b∗ color spaces. The experimental results show that the proposed method reduces the color difference and makes the target image similar to a reference image captured under proper lighting.
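As a rough illustration of the pipeline, the sketch below converts an RGB image to CIE L∗a∗b∗, adds an adjustment coefficient to the L channel, and converts back; scikit-image is assumed for the color conversions, and the constant `adjustment` stands in for the ANFIS output.

```python
import numpy as np
from skimage import color  # scikit-image assumed for the conversions

def adjust_brightness(rgb: np.ndarray, adjustment: float) -> np.ndarray:
    """Add an adjustment coefficient to the L channel in CIE L*a*b*;
    `adjustment` stands in for the ANFIS output."""
    lab = color.rgb2lab(rgb)                      # RGB in [0, 1] -> L*a*b*
    lab[..., 0] = np.clip(lab[..., 0] + adjustment, 0.0, 100.0)
    return color.lab2rgb(lab)

# Brighten a synthetic dark image by 20 L* units:
dark = np.full((4, 4, 3), 0.1)
print(adjust_brightness(dark, 20.0).mean() > dark.mean())  # True
```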
Object map building on various terrains for a wheeled mobile robot
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170350
J. Oh, Beomhee Lee
This paper presents an object-based topological mapping algorithm for a wheeled mobile robot operating on different floors with various objects. An extended Kalman filter (EKF) whose measurement noise adapts to the terrain type is proposed to estimate the position of the robot. When an infrared distance sensor detects an object, the robot moves around the object to obtain its shape information. Row-wise max-pooling with a convolutional neural network (CNN) is proposed to classify objects regardless of the starting position of the observation. Finally, an object map consisting of nodes and edges is generated from the classified objects and the distances between them. Experimental results show that the proposed algorithm improves the accuracy of the robot's position estimate and efficiently generates the object map on various terrains.
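The terrain-adaptive measurement noise can be sketched as a standard EKF update whose covariance R is inflated by a terrain-dependent factor. The terrain classes and scale factors below are assumptions for illustration, not the paper's values.

```python
import numpy as np

# Terrain-dependent inflation of the measurement noise (assumed factors):
TERRAIN_SCALE = {"tile": 1.0, "carpet": 2.5, "doormat": 4.0}

def ekf_update(x, P, z, H, R_base, terrain):
    """Standard EKF measurement update with R inflated on rough terrain,
    so measurements taken there are trusted less."""
    R = R_base * TERRAIN_SCALE[terrain]
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# 2D position state observed directly:
x, P, H = np.zeros(2), np.eye(2), np.eye(2)
x, P = ekf_update(x, P, z=np.array([0.1, -0.05]), H=H,
                  R_base=0.01 * np.eye(2), terrain="carpet")
print(x)
```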
{"title":"Object map building on various terrains for a Wheeled mobile robot","authors":"J. Oh, Beomhee Lee","doi":"10.1109/MFI.2017.8170350","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170350","url":null,"abstract":"This paper presents an objects-based topological mapping algorithm on different floors with various objects using a wheeled mobile robot. The extended Kalman filter (EKF) with adaptive measurement noise according to the terrain type is proposed to estimate the position of the robot. If an infrared distance sensor detects an object, the robot moves around the object to obtain the shape information. The rowwise max-pooling with a convolutional neural network (CNN) is proposed to classify objects regardless of the starting position of the observation. Finally, the object map consisting of nodes and edges generated from the classified objects and the distance between objects. Experimental results showed that the proposed algorithm could improve an accuracy of position estimation of the robot and efficiently generated the object map on various terrains.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130076650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detection-level fusion for multi-object perception in dense traffic environment
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170355
Bin Huang, Hui Xiong, Jianqiang Wang, Qing Xu, Xiaofei Li, Keqiang Li
Due to the imperfect detection performance of onboard sensors in dense driving scenarios, accurate and explicit perception of surrounding objects for Advanced Driver Assistance Systems and autonomous driving is challenging. This paper proposes a novel detection-level fusion approach for multi-object perception in dense traffic environments based on evidence theory. In order to remove uninteresting targets and keep tracking important ones, we integrate four track-life states into a generic fusion framework to improve the performance of multi-object perception. Object type, position, and velocity information is used to reduce erroneous data associations between tracks and detections. Several experiments in real dense traffic on highways and urban roads were conducted. The results verify that the proposed fusion approach achieves low false-track and missed-track rates.
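A core building block of evidence-theory fusion is Dempster's rule of combination. The sketch below combines two basic belief assignments (e.g., radar and camera evidence about an object's class); the sensor masses are invented for illustration.

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two basic belief assignments (focal elements as frozensets)
    with Dempster's rule, normalizing out the conflict mass."""
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

CAR, PED = frozenset({"car"}), frozenset({"pedestrian"})
ANY = CAR | PED
radar = {CAR: 0.6, ANY: 0.4}
camera = {CAR: 0.5, PED: 0.2, ANY: 0.3}
print(dempster_combine(radar, camera))  # belief concentrates on CAR
```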
{"title":"Detection-level fusion for multi-object perception in dense traffic environment","authors":"Bin Huang, Hui Xiong, Jianqiang Wang, Qing Xu, Xiaofei Li, Keqiang Li","doi":"10.1109/MFI.2017.8170355","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170355","url":null,"abstract":"Due to much imperfect detection performance of onboard sensors in dense driving scenarios, the accurate and explicit perception of surrounding objects for Advanced Driver Assistance Systems and Autonomous Driving is challenging. This paper proposes a novel detection-level fusion approach for multi-object perception in dense traffic environment based on evidence theory. In order to remove uninterested targets and keep tracking important, we integrate four states of track life into a generic fusion framework to improve the performance of multi-object perception. The information of object type, position and velocity is made use of to reduce erroneous data association between tracks and detections. Several experiments in real dense traffic environment on highways and urban roads are conducted. The results verify the proposed fusion approach achieves low false and missing tracking.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125276954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
First approach of an optical localization and tracking method applied to a micro-conveying system
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170424
Saly Malak, Hani Al Hajjar, E. Dupont, F. Lamarque
This paper presents a high-resolution optical localization and tracking method for micro-robots or micro-conveyors moving over a smart surface in a micro-factory context. The first approach of this work is presented here: the localization and tracking principles are described, the algorithm is presented, and finally experimental work on system calibration and open-loop tracking is illustrated. Both the scanning of the surface and the tracking of the mobile micro-conveyor are performed by steering a laser beam via a MEMS mirror. The conveyor is localized and tracked based on the light power received by a photodetector. This technique enables micro-robots to accomplish their various tasks according to their priorities, without collisions between them and while avoiding defective cells.
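The localize-by-scanning principle can be illustrated with a toy model: steer the beam over a grid and report the position where the photodetector response peaks. Everything here (the Gaussian spot model, the grid, and the function names) is a hypothetical stand-in for the real optics.

```python
import numpy as np

def photodetector_power(spot_xy, conveyor_xy, sigma=0.5):
    """Toy response: reflected power peaks when the steered laser spot
    hits the target on the micro-conveyor (Gaussian spot model)."""
    d2 = sum((s - c) ** 2 for s, c in zip(spot_xy, conveyor_xy))
    return np.exp(-d2 / (2 * sigma ** 2))

def locate(conveyor_xy, grid=np.linspace(0.0, 10.0, 101)):
    # Scan the mirror over the surface and keep the strongest response.
    best, best_p = None, -1.0
    for x in grid:
        for y in grid:
            p = photodetector_power((x, y), conveyor_xy)
            if p > best_p:
                best, best_p = (x, y), p
    return best

print(locate((3.4, 7.1)))  # -> (3.4, 7.1) up to the grid resolution
```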
{"title":"First approach of an optical localization and tracking method applied to a micro-conveying system","authors":"Saly Malak, Hani Al Hajjar, E. Dupont, F. Lamarque","doi":"10.1109/MFI.2017.8170424","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170424","url":null,"abstract":"In this paper, a study has been conducted to present a high resolution optical localization and tracking method for micro-robots or micro-conveyors moved over a smart surface in a context of micro-factory. The first approach of this work is presented here, the localization and tracking principles are described, the algorithm is presented and finally, experimentation work on system calibration and open-loop tracking is illustrated. The scanning of the surface as well as the tracking of the mobile micro-conveyor will be ensured by steering a laser beam via a MEMS mirror. Depending on the light power received by a photodetector, the conveyor will be localized and tracked. This technique will ensure the achievement of different micro-robots tasks depending on their priorities without collision between them and avoiding defective cells.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125911833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of an upper limb exoskeleton for rehabilitation training in virtual environment
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170425
Qingcong Wu, Xingsong Wang
In recent years, a great many robot-assisted therapy systems have been developed and applied in neural rehabilitation. In this paper, we develop a wearable upper-limb exoskeleton robot to assist disabled patients in effective rehabilitation. The proposed exoskeleton has 7 degrees of freedom (DOFs) and is capable of providing naturalistic assistance of the shoulder, elbow, forearm, and wrist. The major hardware of the robotic system is introduced. The Denavit-Hartenberg (D-H) approach and the Monte Carlo method are used to establish the kinematic model and analyze the accessible workspace of the exoskeleton. A salient feature of this work is the development of an admittance-based control strategy that provides patient-active rehabilitation training in a virtual environment. Two preliminary comparison experiments were carried out on a healthy subject wearing the exoskeleton. The experimental results verify the effectiveness of the developed robotic rehabilitation system and control strategy.
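The workspace analysis the abstract mentions can be sketched as follows: sample random joint angles within their limits, chain Denavit-Hartenberg transforms, and collect end-effector positions. A 2-DOF planar arm with invented link lengths stands in for the 7-DOF exoskeleton.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint in standard D-H convention."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

DH_ROWS = [(0.0, 0.30, 0.0), (0.0, 0.25, 0.0)]   # (d, a, alpha) per link, assumed
LIMITS = [(-np.pi / 2, np.pi / 2), (0.0, 2.5)]   # joint ranges in rad, assumed

def monte_carlo_workspace(n=5000, rng=np.random.default_rng(0)):
    pts = []
    for _ in range(n):
        T = np.eye(4)
        for (lo, hi), (d, a, al) in zip(LIMITS, DH_ROWS):
            T = T @ dh_transform(rng.uniform(lo, hi), d, a, al)
        pts.append(T[:3, 3])                     # end-effector position
    return np.array(pts)

ws = monte_carlo_workspace()
print(ws.min(axis=0), ws.max(axis=0))            # reachable bounding box
```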
{"title":"Development of an upper limb exoskeleton for rehabilitation training in virtual environment","authors":"Qingcong Wu, Xingsong Wang","doi":"10.1109/MFI.2017.8170425","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170425","url":null,"abstract":"In recent years, a great many robot-assisted therapy systems have been developed and applied in neural rehabilitation. In this paper, we develop a wearable upper limb exoskeleton robot for the purpose of assisting the disable patients to execute effective rehabilitation. The proposed exoskeleton system consists of 7 degrees of freedom (DOFs) and is capable of providing naturalistic assistance of shoulder, elbow, forearm, and wrist. The major hardware of the robotic system is introduced. The Denavit-Hartenburg (D-H) approach and Monte Carlo method are utilized to establish the kinematic model and analyze the accessible workspace of exoskeleton. Besides, a salient feature of this work is the development of an admittance-based control strategy which can provide patient-active rehabilitation training in virtual environment. Two preliminary comparison experiments are implemented on a healthy subject wearing the exoskeleton. The experimental results verify the effectiveness of the developed robotic rehabilitation system and control strategy.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127009410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LIDAR-data accumulation strategy to generate high definition maps for autonomous vehicles
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170357
Mohammad Aldibaja, Naoki Suganuma, Keisuke Yoneda
Mapping is a critical prerequisite for autonomous driving. This paper proposes a robust approach to generating high-definition maps from LIDAR point clouds and post-processed localization measurements. Several problems are addressed, including map quality, storage size, global labeling, and processing time. High quality is guaranteed by accumulating point clouds and eliminating their sparsity in a very efficient way. Storage size is reduced by sampling the entire map into sub-images. Global labeling is achieved by consistently treating the top-left corner of the map images as the reference, regardless of the driven distance and the vehicle orientation. Processing time is discussed with respect to using the generated maps in autonomous driving. Moreover, the paper highlights a method to increase the density of online LIDAR frames so that they are compatible with the intensity level of the generated maps. The proposed method has been used since 2015 to generate maps of different areas and courses in Japan and the USA with very high stability and accuracy.
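The top-left-corner labeling scheme can be illustrated as a pure coordinate computation: each world point maps to a (tile, pixel) label that is independent of the vehicle's pose. The resolution and tile size below are assumptions, not the paper's settings.

```python
RES = 0.125   # metres per pixel (assumed)
TILE = 1024   # pixels per square sub-image (assumed)

def global_label(x_east: float, y_north: float, origin=(0.0, 0.0)):
    """Map a world coordinate to a (tile, pixel) label measured from the
    fixed top-left corner of the map, so the label never depends on the
    driven distance or the vehicle orientation."""
    col = int((x_east - origin[0]) / RES)
    row = int((origin[1] - y_north) / RES)       # image rows grow downward
    return (row // TILE, col // TILE), (row % TILE, col % TILE)

print(global_label(1300.0, -80.0))  # -> ((0, 10), (640, 160))
```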
{"title":"LIDAR-data accumulation strategy to generate high definition maps for autonomous vehicles","authors":"Mohammad Aldibaja, Noaki Suganuma, Keisuke Yoneda","doi":"10.1109/MFI.2017.8170357","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170357","url":null,"abstract":"Mapping is a very critical issue for enabling autonomous driving. This paper proposes a robust approach to generate high definition maps based on LIDAR point clouds and post-processed localization measurements. Many problems are addressed including quality, saving size, global labeling and processing time. High quality is guaranteed by accumulating and killing the sparsity of the point clouds in a very efficient way. The storing size is decreased using sub-image sampling of the entire map. The global labeling is achieved by continuously considering the top-left corner of the map images as a reference regardless to the driven distance and the vehicle orientation. The processing time is discussed in terms of using the generated maps in autonomous driving. Moreover, the paper highlights a method to increase the density of online LIDAR frames to be compatible with the intensity level of the generated maps. The proposed method was used since 2015 to generate maps of different areas and courses in Japan and USA with very high stability and accuracy.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133028453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Side channel attack on digital door lock with vibration signal analysis: Longer password does not guarantee higher security level
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170414
Young-Mok Ha, Soohee Jang, Kwang-Won Kim, J. Yoon
The digital door lock is a widely used physical security system: it restricts unauthorized access and protects assets and private spaces. However, once its password has been exposed to unauthorized people, it becomes useless. In this paper, we propose a novel side-channel attack model that enables cracking of a digital door lock password. We observed that when people press the keypad buttons, no matter how careful they are, the generated vibrations differ with the location of the button pressed. Our model analyzes this natural vibration phenomenon to infer passwords. Under our attack, the ease of password inference depends on the number of distinguishable buttons rather than on password length. The results of our experiments contradict the commonly held security principle that a longer password guarantees a higher level of security.
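The password-length claim is a matter of simple counting: if the vibration classifier resolves each keypress only up to a group of indistinguishable buttons, the remaining search space is the product of the group sizes along the password. The numbers below are illustrative.

```python
from math import prod

def candidates(group_sizes):
    """Search space left to the attacker when each keypress is resolved
    only up to a group of indistinguishable buttons."""
    return prod(group_sizes)

print(candidates([2] * 8))  # 8-digit password, 2-button ambiguity: 256 guesses
print(candidates([1] * 4))  # 4-digit password, fully resolved: 1 guess
print(10 ** 8)              # blind brute force on 8 digits: 100,000,000
```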
{"title":"Side channel attack on digital door lock with vibration signal analysis: Longer password does not guarantee higher security level","authors":"Young-Mok Ha, Soohee Jang, Kwang-Won Kim, J. Yoon","doi":"10.1109/MFI.2017.8170414","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170414","url":null,"abstract":"Digital door lock system is a widely used physical security system. It restricts unauthorized accesses and protects assets or private spaces. However, once its password has been exposed to unauthorized people, it becomes useless. In this paper, we propose a novel side channel attack model, which enables a cracking of a digital door lock password. We noted that when people press the key-lock button, irrespective of how careful they are, the generated vibrations differ with the location of the button pressed. Our model uses and analyzes the natural phenomenon of vibration to infer passwords. Under our attack, the ease of password inference depends on the number of distinguishable buttons rather than password length. The results of our experiments contradict the commonly held security principle that a longer password guarantees a higher level of security.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133309497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A deep neural network approach to fusing vision and heteroscedastic motion estimates for low-SWaP robotic applications
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170407
Jared Shamwell, W. Nothwang, D. Perlis
Due both to the speed and quality of their sensors and to restrictive on-board computational capabilities, current state-of-the-art (SOA) size, weight, and power (SWaP) constrained autonomous robotic systems are limited in their ability to sample, fuse, and analyze sensory data for state estimation. Aimed at improving SWaP-constrained robotic state estimation, we present Multi-Hypothesis DeepEfference (MHDE), an unsupervised, deep convolutional-deconvolutional sensor fusion network that learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. This new multi-hypothesis formulation of our previous architecture, DeepEfference [1], has been augmented to handle dynamic heteroscedastic sensor and motion noise, and computes hypothesis image mappings and predictions at 150-400 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel architectural pathways and n (1, 2, 4, or 8 in this work) multi-hypothesis generation subpathways to generate n pixel-level predictions and correspondences between source and target images. We evaluated MHDE on the KITTI Odometry dataset [2] and benchmarked it against DeepEfference [1] and DeepMatching [3] by mean pixel error and runtime. MHDE with 8 hypotheses outperformed DeepEfference in root-mean-squared (RMSE) pixel error by 103% in the maximum heteroscedastic noise condition and by 18% in the noise-free condition. MHDE with 8 hypotheses was over 5,000% faster than DeepMatching with only a 3% increase in RMSE.
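Multi-hypothesis networks of this kind are commonly trained with a winner-take-all objective, where only the best of the n hypotheses is penalized. The sketch below shows that formulation as an assumed stand-in, since the paper's exact loss is not given in the abstract.

```python
import numpy as np

def winner_take_all_loss(hypotheses: np.ndarray, target: np.ndarray) -> float:
    """Penalize only the best of the n predicted reconstructions, so the
    hypotheses spread over the plausible image mappings."""
    errors = [np.sqrt(np.mean((h - target) ** 2)) for h in hypotheses]  # RMSE each
    return min(errors)

rng = np.random.default_rng(0)
target = rng.random((8, 8))
hyps = np.stack([target + rng.normal(0.0, s, (8, 8)) for s in (0.5, 0.2, 0.05, 0.3)])
print(round(winner_take_all_loss(hyps, target), 3))  # dominated by the best hypothesis
```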
{"title":"A deep neural network approach to fusing vision and heteroscedastic motion estimates for low-SWaP robotic applications","authors":"Jared Shamwell, W. Nothwang, D. Perlis","doi":"10.1109/MFI.2017.8170407","DOIUrl":"https://doi.org/10.1109/MFI.2017.8170407","url":null,"abstract":"Due both to the speed and quality of their sensors and restrictive on-board computational capabilities, current state-of-the-art (SOA) size, weight, and power (SWaP) constrained autonomous robotic systems are limited in their abilities to sample, fuse, and analyze sensory data for state estimation. Aimed at improving SWaP-constrained robotic state estimation, we present Multi-Hypothesis DeepEfference (MHDE) — an unsupervised, deep convolutional-deconvolutional sensor fusion network that learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. This new multi-hypothesis formulation of our previous architecture, DeepEfference [1], has been augmented to handle dynamic heteroscedastic sensor and motion noise and computes hypothesis image mappings and predictions at 150–400 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel architectural pathways and n (1, 2, 4, or 8 in this work) multi-hypothesis generation subpathways to generate n pixel-level predictions and correspondences between source and target images. We evaluated MHDE on the KITTI Odometry dataset [2] and benchmarked it against DeepEfference [1] and DeepMatching [3] by mean pixel error and runtime. MHDE with 8 hypotheses outperformed DeepEfference in root mean squared (RMSE) pixel error by 103% in the maximum heteroscedastic noise condition and by 18% in the noise-free condition. MHDE with 8 hypotheses was over 5, 000% faster than DeepMatching with only a 3% increase in RMSE.","PeriodicalId":402371,"journal":{"name":"2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129357830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}