Towards Automatic Classification of Fragmented Rock Piles via Proprioceptive Sensing and Wavelet Analysis
Pub Date: 2020-09-14 | DOI: 10.1109/MFI49285.2020.9235261
U. Artan, J. Marshall
In this paper, we describe a method for classifying rock piles characterized by different size distributions by using accelerometer data and wavelet analysis. Size distribution (fragmentation) estimates are used in the mining and aggregates industries to ensure that the rock entering the crushing and grinding circuits meets input design specifications. Current technologies use exteroceptive sensing to estimate size distributions from, for example, camera images. Our approach instead proposes the use of signals acquired during the loading of equipment used to transport fragmented rock. The experimental setup used a laboratory-sized mock-up of a haul truck with two inertial measurement units (IMUs) for data collection. Results based on wavelet analysis are provided that show how accelerometers could be used to distinguish between piles with different size distributions.
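For a concrete picture of the kind of processing this abstract describes, here is a minimal, hypothetical sketch (not the authors' implementation): it extracts relative wavelet-energy features from a single accelerometer channel with PyWavelets; the wavelet choice, decomposition level, and the coarse/fine decision rule are illustrative assumptions.

```python
# Illustrative sketch only: wavelet-energy features from an accelerometer trace.
# Wavelet choice, decomposition level, and the decision rule are assumptions,
# not the method from the paper.
import numpy as np
import pywt

def wavelet_energy_features(accel, wavelet="db4", level=4):
    """Relative energy per wavelet sub-band of a 1-D acceleration signal."""
    coeffs = pywt.wavedec(accel, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    accel = rng.standard_normal(2048)      # stand-in for one IMU axis
    features = wavelet_energy_features(accel)
    coarse_score = features[:2].sum()      # energy in the two lowest-frequency sub-bands
    print("coarse-fraction score:", coarse_score)
```

A classifier trained on such per-window feature vectors could then separate piles with different size distributions.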
From Level Four to Five: Getting rid of the Safety Driver with Diagnostics in Autonomous Driving
Pub Date: 2020-09-14 | DOI: 10.1109/MFI49285.2020.9235224
Stefan Orf, M. Zofka, Johann Marius Zöllner
Over the past years, autonomous driving has evolved from being mainly a topic of scientific research to practical and commercial applications such as on-demand public transportation. With this evolution, new use cases have arisen, making reliability and robustness of the complete system more important than ever. The many different stakeholders during development and operation, as well as independent certification and admission authorities, pose additional challenges. By providing and capturing additional information about the running system, independent of the main driving task (e.g., through component self-tests or performance observations), the overall robustness, reliability, and safety of the vehicle are increased. This article captures the issues of autonomous driving in modern-day real-life use cases and defines what a diagnostic system needs to look like to tackle these challenges. Furthermore, the authors provide a concept for diagnostics in the heterogeneous software landscape of component-based autonomous driving architectures, accounting for their particular complexities and difficulties.
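Purely to illustrate the component self-test idea mentioned in the abstract (the component names, health states, and timeout logic below are invented placeholders, not the concept from the paper), a diagnostic aggregator could look roughly like this:

```python
# Minimal sketch of a diagnostic aggregator that collects component self-test
# reports; component names, states, and the timeout policy are hypothetical.
from dataclasses import dataclass
from enum import Enum
import time

class Health(Enum):
    OK = "ok"
    DEGRADED = "degraded"
    ERROR = "error"

@dataclass
class SelfTestReport:
    component: str
    health: Health
    stamp: float

class DiagnosticAggregator:
    def __init__(self, timeout_s: float = 1.0):
        self.timeout_s = timeout_s
        self.reports: dict[str, SelfTestReport] = {}

    def update(self, report: SelfTestReport) -> None:
        self.reports[report.component] = report

    def system_health(self, now: float) -> Health:
        worst = Health.OK
        for report in self.reports.values():
            if now - report.stamp > self.timeout_s:
                return Health.ERROR          # stale report is treated as failure
            if report.health is Health.ERROR:
                return Health.ERROR
            if report.health is Health.DEGRADED:
                worst = Health.DEGRADED
        return worst

aggregator = DiagnosticAggregator()
aggregator.update(SelfTestReport("lidar_driver", Health.OK, time.time()))
aggregator.update(SelfTestReport("planner", Health.DEGRADED, time.time()))
print(aggregator.system_health(time.time()))   # Health.DEGRADED
```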
OAFuser: Online Adaptive Extended Object Tracking and Fusion using automotive Radar Detections
Pub Date: 2020-09-14 | DOI: 10.1109/MFI49285.2020.9235222
Stefan Haag, B. Duraisamy, Constantin Blessing, Reiner Marchthaler, W. Koch, M. Fritzsche, J. Dickmann
This paper presents the Online Adaptive Fuser (OAFuser), a novel method for online adaptive estimation of motion and measurement uncertainties for efficient tracking and fusion, which runs a set of noise estimators alongside the conventional state and state-covariance estimation. In our system, process and measurement noises are estimated with steady-state filters to obtain combined measurement-noise and process-noise estimates for all sensors, so that state estimation can be performed with a linear minimum mean square error (MMSE) estimator, accelerating the system's performance. The proposed adaptive tracking and fusion system was tested on high-fidelity simulation data and several real-world automotive radar scenarios for which ground-truth data are available for evaluation. We demonstrate the proposed method's accuracy and efficiency in a challenging, highly dynamic scenario in which our system is benchmarked against a Multiple Model filter in terms of error statistics and run-time performance.
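To make the general idea of estimating measurement noise online alongside the state more tangible, the following is a generic 1-D Kalman filter whose measurement variance R is adapted from the innovations by exponential smoothing. This is not the OAFuser algorithm; the random-walk motion model and the smoothing factor alpha are arbitrary assumptions for illustration.

```python
# Generic illustration of innovation-based online noise adaptation in a 1-D
# Kalman filter; NOT the OAFuser algorithm, only the underlying idea.
import numpy as np

def adaptive_kf(measurements, q=1e-3, r_init=1.0, alpha=0.05):
    x, p, r = 0.0, 1.0, r_init
    estimates = []
    for z in measurements:
        p = p + q                                             # predict (random walk)
        nu = z - x                                            # innovation
        r = (1 - alpha) * r + alpha * max(nu * nu - p, 1e-6)  # adapt R online
        s = p + r                                             # innovation variance
        k = p / s                                             # Kalman gain
        x = x + k * nu
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates), r

rng = np.random.default_rng(1)
zs = 2.0 + np.sqrt(0.5) * rng.standard_normal(500)   # noisy measurements, true R = 0.5
xs, r_hat = adaptive_kf(zs)
print("final state estimate:", xs[-1], "adapted R:", r_hat)
```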
Motion Estimation for Tethered Airfoils with Tether Sag
Pub Date: 2020-09-14 | DOI: 10.1109/MFI49285.2020.9235235
J. Freter, T. Seel, Christoph Elfert, D. Göhlich
In this contribution, a motion estimation approach for the autonomous flight of tethered airfoils is presented. Accurate motion data are essential for the airborne wind energy sector to optimize the harvested wind energy, and for manufacturers of tethered airfoils to optimize the kite design based on measurement data. We propose an estimation based on tether angle measurements from the ground unit and inertial sensor data from the airfoil. In contrast to existing approaches, we account for the issue of tether sag, which renders tether angle measurements temporarily inaccurate. We formulate a Kalman filter that adaptively shifts the fusion weight to the measurement with the higher certainty. The proposed estimation method is evaluated in simulations, and a proof of concept is given on experimental data, for which the proposed method yields a three times smaller estimation error than a fixed-weight solution.
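As a toy illustration of shifting fusion weight toward the more certain source (unrelated to the authors' actual filter equations; the sag flag, inflation factor, and variances are invented placeholders), one can simply inflate the tether-angle measurement variance whenever sag is suspected:

```python
# Toy variance-weighted fusion of a tether-angle-derived position estimate and
# an IMU-derived one; the sag detector and all numeric values are assumptions.
def fuse(x_tether, var_tether, x_imu, var_imu, sag_suspected, sag_inflation=25.0):
    if sag_suspected:
        var_tether *= sag_inflation          # trust the tether angles less
    w = var_imu / (var_tether + var_imu)     # weight on the tether estimate
    x_fused = w * x_tether + (1.0 - w) * x_imu
    var_fused = 1.0 / (1.0 / var_tether + 1.0 / var_imu)
    return x_fused, var_fused

print(fuse(10.2, 0.2, 10.6, 0.5, sag_suspected=False))
print(fuse(10.2, 0.2, 10.6, 0.5, sag_suspected=True))
```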
Acoustic Echo-Localization for Pipe Inspection Robots
Pub Date: 2020-09-14 | DOI: 10.1109/MFI49285.2020.9235225
R. Worley, Yicheng Yu, S. Anderson
Robot localization in water and wastewater pipes is essential for path planning and for localization of faults, but the environment makes it challenging. Conventional localization suffers in pipes due to the lack of features and due to accumulating uncertainty caused by the limited perspective of typical sensors. This paper presents the implementation of an acoustic echo-based localization method for the pipe environment, using a loudspeaker and microphone positioned on the robot. Echoes are used to detect distant features in the pipe and make direct measurements of the robot's position which do not suffer from accumulated error. Novel estimation of echo class is used to refine the acoustic measurements before they are incorporated into the localization. Finally, the paper presents an investigation into the effectiveness of the method and its robustness to errors in the acoustic measurements.
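For intuition only (the chirp parameters, sample rate, and speed of sound below are assumed values, and this is not the authors' pipeline): an echo delay can be picked from the cross-correlation of the emitted and recorded signals and converted to a range.

```python
# Sketch: estimate the range to a pipe feature from the delay of its echo.
# Chirp parameters, sample rate, and speed of sound are illustrative values.
import numpy as np
from scipy.signal import chirp, correlate

fs = 48_000                      # sample rate [Hz]
c = 343.0                        # speed of sound in air [m/s]
t = np.arange(0, 0.02, 1 / fs)
tx = chirp(t, f0=500, f1=4000, t1=t[-1])      # emitted sweep

# Simulated recording: direct sound plus one echo delayed by 12 ms.
delay_samples = int(0.012 * fs)
rx = np.zeros(len(tx) + delay_samples)
rx[: len(tx)] += tx
rx[delay_samples : delay_samples + len(tx)] += 0.3 * tx

corr = correlate(rx, tx, mode="full")
lags = np.arange(-len(tx) + 1, len(rx))
# Ignore the direct-path peak near zero lag and take the strongest later peak.
valid = lags > int(0.002 * fs)
echo_lag = lags[valid][np.argmax(corr[valid])]
distance = 0.5 * c * echo_lag / fs            # two-way travel time -> one-way range
print(f"estimated feature distance: {distance:.2f} m")
```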
Evaluation of Confidence Sets for Estimation with Piecewise Linear Constraint
Pub Date: 2020-09-14 | DOI: 10.1109/MFI49285.2020.9235227
Jiří Ajgl, O. Straka
Equality-constrained estimation finds application in problems such as positioning cars on roads. This paper compares two constructions of confidence sets. The first is given by the intersection of a standard unconstrained confidence set and the constraint; the second applies the constraint first and designs the confidence set afterwards. Analytical results are presented for a linear constraint. A family of piecewise linear constraints is inspected numerically. It is shown that, for the considered scenarios, the second construction with a properly tuned free parameter provides confidence sets that are smaller in expectation.
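A small numerical illustration of the two constructions for a linear equality constraint follows; it uses a generic conditioning/projection formulation with made-up numbers, not necessarily the parameterization of the paper.

```python
# Numerical illustration (not the paper's exact formulation): compare (a) the
# chord cut from a 2-D 95% confidence ellipse by a constraint line with (b) a
# 95% interval built after projecting the estimate onto the constraint.
import numpy as np
from scipy.stats import chi2

P = np.array([[1.0, 0.6],
              [0.6, 0.8]])              # covariance of the unconstrained estimate
x_hat = np.array([0.3, -0.2])           # unconstrained estimate
x0 = np.array([0.0, 0.0])               # a point on the constraint line
d = np.array([1.0, 1.0]) / np.sqrt(2.0) # direction of the constraint line

P_inv = np.linalg.inv(P)

# (a) Intersect {x : (x - x_hat)^T P^{-1} (x - x_hat) <= chi2_2} with x = x0 + t d.
r = x0 - x_hat
a = d @ P_inv @ d
b = 2.0 * d @ P_inv @ r
c = r @ P_inv @ r - chi2.ppf(0.95, df=2)
disc = b * b - 4.0 * a * c
chord = np.sqrt(disc) / a if disc > 0 else 0.0

# (b) Constrain first (project in the P^{-1} metric), then build a 1-D interval
#     along the line using the conditional variance 1 / (d^T P^{-1} d).
sigma2 = 1.0 / a
interval = 2.0 * np.sqrt(chi2.ppf(0.95, df=1) * sigma2)

print(f"chord from intersected 2-D set: {chord:.3f}")
print(f"interval from constrain-first:  {interval:.3f}")
```

With these numbers the constrain-first interval comes out shorter than the intersected chord, matching the direction of the abstract's conclusion.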
Semantic Evidential Grid Mapping based on Stereo Vision
Pub Date: 2020-09-14 | DOI: 10.1109/MFI49285.2020.9235217
Sven Richter, Johannes Beck, Sascha Wirges, C. Stiller
Accurately estimating the current state of local traffic scenes is a crucial capability of automated vehicles. The desired representation may include static and dynamic traffic participants, details on free space and drivability, but also information on the semantics. Multi-layer grid maps allow all of this information to be included in a common representation. In this work, we present an improved method to estimate a semantic evidential multi-layer grid map using depth from stereo vision paired with pixel-wise semantically annotated images. The error characteristics of the depth from stereo are explicitly modeled when transferring pixel labels from the image to the grid map space. We achieve accurate and dense mapping results by incorporating a disparity-based ground surface estimation into the inverse perspective mapping. The proposed method is validated on our experimental vehicle in challenging urban traffic scenarios.
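As a bare-bones illustration of projecting pixel-wise labels into a ground-plane grid using depth from disparity (the intrinsics, baseline, cell size, and label value are placeholders; the paper's evidential fusion and ground-surface estimation are not reproduced):

```python
# Sketch: scatter per-pixel semantic labels into a bird's-eye-view grid using
# depth from stereo disparity.  All parameters are illustrative placeholders.
import numpy as np

fx, cx = 700.0, 320.0                # assumed focal length / principal point [px]
baseline = 0.5                       # assumed stereo baseline [m]
cell = 0.2                           # grid resolution [m/cell]
grid = np.zeros((100, 100), dtype=np.uint8)   # 20 m x 20 m, one label per cell

def project_to_grid(u, disparity, label):
    """Place one labeled pixel into the grid (camera looking along +Z)."""
    if disparity <= 0:
        return
    z = fx * baseline / disparity            # depth from disparity
    x = (u - cx) * z / fx                    # lateral offset from the camera axis
    col = int(x / cell) + grid.shape[1] // 2
    row = int(z / cell)
    if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
        grid[row, col] = label               # last-write-wins; no evidential fusion here

# Hypothetical labeled pixel: label 2 ("road") at image column 400, disparity 35 px.
project_to_grid(400, 35.0, label=2)
print(np.argwhere(grid == 2))                # -> [[50 55]]
```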
Machine Assisted Video Tagging of Elderly Activities in K-Log Centre
Pub Date: 2020-09-14 | DOI: 10.1109/MFI49285.2020.9235269
Chanwoong Lee, Hyorim Choi, Shapna Muralidharan, H. Ko, Byounghyun Yoo, G. Kim
In a rapidly aging society such as South Korea's, the number of Alzheimer's Disease (AD) patients is a significant public health problem, and specialized healthcare centers are in high demand. Healthcare providers generally rely on caregivers (CGs) to monitor elderly persons with AD and help them in their daily activities. The K-Log Centre is a healthcare provider located in Korea that helps AD patients meet their daily needs with assistance from CGs in the center. The CGs in the K-Log Centre need to attend to the patients' unique demands and everyday essentials for long-term care. Moreover, the CGs also describe and record the day-to-day activities in an Activities of Daily Living (ADL) log, which comprises various events in detail. This logging can overburden the CGs' work, leading to undesirable outcomes such as reduced quality of elderly care, the hiring of additional CGs to maintain the quality of care, and a negative feedback cycle. In this paper, we analyze this pressing issue at the K-Log Centre and propose a method to facilitate machine-assisted human tagging of videos for logging of elderly activities using Human Activity Recognition (HAR). To enable this scenario, we use a You Only Look Once (YOLO-v3)-based deep learning method for object detection and use it for HAR, creating multi-modal machine-assisted human tagging of videos. The proposed algorithm performs HAR with a precision of 98.4%. After designing the HAR model, we evaluated it on a live video feed from the K-Log Centre. The model showed an accuracy of 81.4% on live data, reducing the CGs' logging workload.
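To illustrate the general idea of turning per-frame object detections into suggested activity tags (the detector stand-in, label names, and activity rules below are invented placeholders, not the rules or models used in the K-Log system):

```python
# Sketch: map per-frame object detections to coarse activity tags for logging.
# `detect_objects` is a hypothetical stand-in for a YOLO-v3 detector, and the
# rules are invented examples, not those used in the K-Log Centre system.
from typing import Callable

ACTIVITY_RULES = {
    "eating": {"person", "spoon", "bowl"},
    "watching_tv": {"person", "tv"},
    "reading": {"person", "book"},
}

def tag_frame(detected_labels: set[str]) -> list[str]:
    """Return every activity whose required objects were all detected."""
    return [activity for activity, required in ACTIVITY_RULES.items()
            if required <= detected_labels]

def tag_video(frames, detect_objects: Callable[[object], set[str]]) -> list[list[str]]:
    """Suggest tags per frame; a caregiver then confirms or corrects them."""
    return [tag_frame(detect_objects(frame)) for frame in frames]

print(tag_frame({"person", "tv", "sofa"}))   # ['watching_tv']
```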
A Mobile and Modular Low-Cost Sensor System for Road Surface Recognition Using a Bicycle
Pub Date: 2020-09-14 | DOI: 10.1109/MFI49285.2020.9235233
M. Springer, C. Ament
The quality of pavements is significant for comfort and safety when riding a bicycle on roads and cycleways. As pavements age due to environmental impacts, periodic inspection is required for maintenance planning. Since this involves considerable effort and cost, there is a need to monitor roads using affordable sensors. This paper presents a modular and low-cost measurement system for road surface recognition. It consists of several sensors attached to a bicycle to record, for example, forces or the suspension travel while riding. To ensure high sample rates in data acquisition, the data capturing and storage tasks are distributed over several microcontrollers, and monitoring and control are performed by a single-board computer. In addition, the measuring system is intended to simplify the tedious documentation of ground truth. We present the results obtained by using time series analysis to identify different types of obstacles based on raw sensor signals.
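As a simple, hypothetical example of the kind of time-series feature that can flag obstacles in such signals (the window length, noise levels, and threshold are arbitrary assumptions, not the authors' classifier):

```python
# Sketch: sliding-window RMS of a vertical-acceleration signal as a crude
# rough-surface indicator.  Window length and threshold are arbitrary.
import numpy as np

def window_rms(signal, window=50):
    """RMS over non-overlapping windows of a (roughly zero-mean) signal."""
    n = len(signal) // window * window
    chunks = signal[:n].reshape(-1, window)
    return np.sqrt(np.mean(chunks ** 2, axis=1))

rng = np.random.default_rng(2)
accel = 0.1 * rng.standard_normal(1000)            # smooth asphalt, simulated
accel[400:450] += 1.5 * rng.standard_normal(50)    # simulated cobblestone patch

rms = window_rms(accel)
rough_windows = np.where(rms > 0.5)[0]
print("windows flagged as rough surface:", rough_windows)
```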
Conservative Quantization of Fast Covariance Intersection
Pub Date: 2020-09-14 | DOI: 10.1109/MFI49285.2020.9235249
Christopher Funk, B. Noack, U. Hanebeck
Sensor data fusion in wireless sensor networks poses challenges with respect to both theory and implementation. Unknown cross-correlations between estimates distributed across the network need to be addressed carefully as neglecting them leads to overconfident fusion results. In addition, limited processing power and energy supply of the sensor nodes prohibit the use of complex algorithms and high-bandwidth communication. In this work, fast covariance intersection using both quantized estimates and quantized covariance matrices is considered. The proposed method is computationally efficient and significantly reduces the bandwidth required for data transmission while retaining unbiasedness and conservativeness of fast covariance intersection. The performance of the proposed method is evaluated with respect to that of fast covariance intersection, which proves its effectiveness even in the case of substantial data reduction.
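For readers unfamiliar with covariance intersection, a generic two-sensor fast-CI fusion step looks roughly like the following. The non-iterative trace-based weight is one common heuristic, and the uniform quantization shown is a naive placeholder; the conservative quantization scheme proposed in the paper is not reproduced here.

```python
# Generic fast covariance intersection of two estimates, preceded by a naive
# uniform quantization of the inputs (placeholder for bandwidth reduction).
import numpy as np

def quantize(m, step=0.1):
    """Naive uniform quantization; NOT the paper's conservative scheme."""
    return np.round(np.asarray(m) / step) * step

def fast_ci(x1, P1, x2, P2):
    # Non-iterative weight heuristic: trust the estimate with the smaller trace more.
    w = np.trace(P2) / (np.trace(P1) + np.trace(P2))
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    P_fused = np.linalg.inv(w * P1_inv + (1 - w) * P2_inv)
    x_fused = P_fused @ (w * P1_inv @ x1 + (1 - w) * P2_inv @ x2)
    return x_fused, P_fused

x1, P1 = np.array([1.0, 2.0]), np.array([[0.5, 0.1], [0.1, 0.4]])
x2, P2 = np.array([1.2, 1.8]), np.array([[0.3, 0.0], [0.0, 0.6]])
x_f, P_f = fast_ci(quantize(x1), quantize(P1), quantize(x2), quantize(P2))
print(x_f)
print(P_f)
```

With naive quantization the fused covariance may lose conservativeness, which is exactly the issue the paper's quantization scheme is designed to avoid.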