Fast road boundary detection and tracking in occupancy grids from laser scans
Kolja Thormann, J. Honer, M. Baum
2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170453
This paper presents a novel method to extract and track road boundaries in a temporal sequence of occupancy grids collected from a moving vehicle equipped with a laser scanner. The road boundaries are represented as circular arcs, under the assumption that the boundaries are parallel to the driving direction. To find the optimal parameters of the circular arcs, a one-dimensional optimization problem over the curvature is solved first; then, based on the optimal curvature, the optimal offset, i.e., the radius, is determined. To obtain robust and smooth road boundary estimates, we suggest employing a tracking algorithm, the Integrated Probabilistic Data Association (IPDA) filter. The overall method is evaluated with real-world data from a highway scenario and compared with two state-of-the-art methods.
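The two-step fit in the abstract (a one-dimensional search over curvature, then a closed-form offset) can be sketched as follows. This is a hedged illustration only: the parabolic small-curvature approximation y ≈ (c/2)x² + offset, the candidate-curvature grid, and the mean-residual offset are assumptions, not details taken from the paper.

```python
import numpy as np

def fit_arc(points, curvatures=np.linspace(-0.01, 0.01, 201)):
    """Two-step arc fit: 1-D grid search over curvature, then closed-form offset.

    points: (N, 2) array of boundary cells (x = driving direction, y = lateral),
    using the parabolic approximation y ~ (c/2) x^2 + offset for small curvature.
    """
    x, y = points[:, 0], points[:, 1]
    best = None
    for c in curvatures:
        r = y - 0.5 * c * x**2           # lateral residuals for this curvature
        cost = np.var(r)                 # spread of residuals around a common offset
        if best is None or cost < best[0]:
            best = (cost, c, r.mean())   # best offset is the mean residual
    _, c_opt, offset = best
    return c_opt, offset
```

With noise-free synthetic boundary points the search recovers the generating curvature and offset exactly; on real occupancy grids the tracking stage (IPDA) would then smooth these per-frame estimates over time.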
Fuzzy control implementation in low cost CPPS devices
Carlos A. Garcia, Esteban X. Castellanos, Jorge Buele, J. Espinoza, Carmen Beltran, M. Pilatásig, Eddie D. Galarza, Marcelo V. García
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170423
Industry today is moving closer to the Industry 4.0 paradigm, driven by the need for better industrial communications and improved process control. Following IIoT concepts, Cyber-Physical Production Systems (CPPS) are normally connected to each other and to the virtual world of global digital networks. The IEC 61499 standard allows CPPS to implement flexible, reconfigurable, and distributed controllers. Fuzzy controllers are an advanced control solution that adds a degree of human reasoning to the system, making them a suitable alternative to traditional controllers. Because IEC 61499 targets embedded platforms as controller devices, encapsulating advanced control techniques in a system's algorithms becomes simpler and more efficient. It is therefore necessary to provide industry with low-cost alternatives that can easily integrate more complex and efficient controllers, redirecting CPPS development to a new variety of devices. This paper proposes the development of the Function Blocks (FBs) needed to create a distributed control system based on fuzzy logic to control an analog process.
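As an illustration of the kind of fuzzy control logic such Function Blocks would encapsulate, here is a minimal single-input Mamdani-style controller. The membership ranges, rule base, and singleton output levels are invented for the example; the paper's actual FBs target IEC 61499 runtimes, not Python.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_control(error):
    """Minimal single-input fuzzy controller: fuzzify the error, fire three
    rules, and defuzzify by the weighted average of each rule's output level.
    All ranges and output levels here are illustrative, not from the paper."""
    # fuzzification: degree of membership in each error set
    mu = {
        'negative': tri(error, -2.0, -1.0, 0.0),
        'zero':     tri(error, -1.0,  0.0, 1.0),
        'positive': tri(error,  0.0,  1.0, 2.0),
    }
    # rule base: each error set maps to a crisp output level (singleton)
    out = {'negative': -1.0, 'zero': 0.0, 'positive': 1.0}
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values())
    return num / den if den else 0.0
```

The smooth interpolation between rules is what gives fuzzy controllers their "human reasoning" character compared with a bang-bang or purely linear law.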
Occlusion handling and track management method of high-level sensor fusion for robust pedestrian tracking
Seong-Geun Shin, Dae-Ryong Ahn, Hyuck-Kee Lee
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170434
In object tracking, occlusions between objects are an important factor degrading the performance of tracking algorithms. In this paper, we present a track management method at the tracking level to solve the discontinuous-tracking problem caused by occlusions between detected objects. It works by predicting occlusions between detected objects and managing the state of tracks within a track-to-track, high-level sensor fusion approach using a lidar and a monocular camera sensor. The occlusion prediction takes into account the width, length, position, and azimuth angle of the detected objects. The track management system maintains each track's occlusion state based on the prediction result, in addition to handling track initialization, creation, confirmation, and deletion. The proposed approach was verified in occlusion situations between pedestrians, and our experimental results showed the intended performance.
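The occlusion prediction step (using width, position, and azimuth of detected objects) can be approximated as an angular-interval overlap test from the sensor origin. The sketch below makes simplifying assumptions: each object is reduced to its width at its center position, and the function and field names are hypothetical, not the paper's.

```python
import math

def angular_interval(obj):
    """Angular interval subtended by an object's width at the sensor origin.

    obj: dict with 'x', 'y' (center position, m) and 'width' (m) -- a
    simplified stand-in for the width/length/position/azimuth attributes
    used in the paper.
    """
    az = math.atan2(obj['y'], obj['x'])
    dist = math.hypot(obj['x'], obj['y'])
    half = math.atan2(obj['width'] / 2.0, dist)
    return az - half, az + half, dist

def predicts_occlusion(near, far):
    """True if the nearer object's angular interval overlaps the farther one's."""
    lo_n, hi_n, d_n = angular_interval(near)
    lo_f, hi_f, d_f = angular_interval(far)
    return d_n < d_f and lo_n < hi_f and lo_f < hi_n
```

When this predicate fires, the track manager would move the occluded track into an "occluded" state instead of deleting it, so the track survives until the pedestrian reappears.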
Oil skimmer and controller monitoring system using IoT technology
Won-Seok Choi, Dongweon Yoon, TaeEon Kim, Jangmyung Lee
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170363
In this paper we propose an oil skimmer monitoring system based on a user-controlled smartphone and an IoT (Internet of Things)-specific controller with built-in BLE 4.0. IoT device hardware follows the trend of OSHW (Open Source Hardware), which opens up freely available designs for physical artifacts in the same spirit as FOSS (Free and Open Source Software). As open-source culture forms an ecosystem, such designs are shared and developed by the community. On the software side (firmware, OS, application), a standard reference board is developed and the relevant sources are provided, following the OSHW trend that emphasizes accessibility and ease of implementation for developers.
The effect of folding on motion flapping wing aero vehicle
Do Young Kim, D. Yun
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170387
Interest in MAVs and research on flapping-wing aerial vehicles has been increasing. To fulfill its intended missions, a flapping-wing aerial vehicle requires take-off and landing capability, lifting the vehicle by flapping motion alone. The airfoil must generate lift greater than the weight of the body to take off from the ground without additional force. If the MAV can generate sufficient lift, its capabilities improve in terms of load capacity, reliability, and damage reduction. In this study, we propose a four-bar linkage structure to generate lift, and set up an experiment to measure the lift force with a folding mechanism on the flapping wing. The results will be used for the development of a nature-inspired bird robot.
Implementation of semantic segmentation for road and lane detection on an autonomous ground vehicle with LIDAR
Kai Li Lim, T. Drage, T. Bräunl
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170358
While current implementations of LIDAR-based autonomous driving systems are capable of road following and obstacle avoidance, they are still unable to detect road lane markings, which is required for lane keeping during autonomous driving sequences. In this paper, we present an implementation of semantic image segmentation to enhance a LIDAR-based autonomous ground vehicle for road and lane marking detection, in addition to object perception and classification. To achieve this, we installed and calibrated a low-cost monocular camera on a LIDAR-fitted Formula-SAE Electric car as our test bench. Tests were performed first on video recordings of local roads to verify the feasibility of semantic segmentation, and then on the Formula-SAE car with LIDAR readings. Results from semantic segmentation confirmed that the road areas in each video frame were properly segmented, and that road edges and lane markers can be classified. By combining this information with LIDAR measurements of road edges and obstacles, distance measurements for each segmented object can be obtained, allowing the vehicle to be programmed to drive autonomously within the road lanes and away from road edges.
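The camera/LIDAR association described above, attaching a distance to each segmented region, can be sketched as taking, for every semantic label, the median range of the LIDAR points that project into it. The projection to pixel coordinates (extrinsics plus intrinsics) is assumed to be done upstream, and the function and argument names are illustrative.

```python
import numpy as np

def distances_per_segment(seg, points_uv, points_dist):
    """Median LIDAR distance for each semantic segment.

    seg:         (H, W) integer label image from semantic segmentation
    points_uv:   (N, 2) integer pixel coords (u, v) of projected LIDAR points
    points_dist: (N,) range of each LIDAR point
    """
    out = {}
    labels = seg[points_uv[:, 1], points_uv[:, 0]]   # label under each point
    for lab in np.unique(labels):
        out[int(lab)] = float(np.median(points_dist[labels == lab]))
    return out
```

The median makes the per-segment distance robust to a few LIDAR returns that fall on the wrong side of a segment boundary.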
Statistical noise reduction for robust human activity recognition
Song-Mi Lee, Heeryon Cho, S. Yoon
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170442
Noise and variability in accelerometer data collected using smart devices obscure accurate human activity recognition. To tackle the degradation of triaxial accelerometer data caused by noise and individual user differences, we propose a statistical noise reduction method using total variation minimization to attenuate the noise mixed into the magnitude feature vector generated from the triaxial accelerometer data. Experimental results using a Random Forest classifier show that our noise removal approach significantly improves human activity recognition performance.
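A minimal sketch of the pipeline: compute the magnitude feature from triaxial samples, then attenuate noise by one-dimensional total-variation minimization. The abstract does not name a solver, so plain gradient descent on a smoothed TV term stands in here for whichever minimizer the authors used.

```python
import numpy as np

def magnitude(acc):
    """Orientation-invariant magnitude feature from triaxial samples (N, 3)."""
    return np.linalg.norm(acc, axis=1)

def tv_denoise_1d(f, lam=0.5, step=0.02, iters=2000, eps=1e-2):
    """Approximately minimize 0.5*||u - f||^2 + lam*TV(u) by gradient descent
    on a smoothed total-variation term sum_i sqrt((u[i+1]-u[i])^2 + eps)."""
    u = f.astype(float).copy()
    for _ in range(iters):
        d = np.diff(u)
        g = d / np.sqrt(d**2 + eps)      # derivative of smoothed |u[i+1] - u[i]|
        tv_grad = np.concatenate(([-g[0]], g[:-1] - g[1:], [g[-1]]))
        u -= step * ((u - f) + lam * tv_grad)
    return u
```

TV minimization suppresses isolated spikes while preserving the step-like transitions between activities, which is why it suits activity signals better than a simple low-pass filter.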
A general framework for data fusion and outlier removal in distributed sensor networks
Muhammad Abu Bakr, Sukhan Lee
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170412
A fundamental issue in sensor fusion is to detect and remove outliers, as sensors often produce inconsistent measurements that are difficult to predict and model. Detecting and removing spurious data is paramount to the quality of sensor fusion, since it keeps such data out of the fusion pool. In this paper, a general framework of data fusion is presented for distributed sensor networks of arbitrary redundancy, in which inconsistent data are identified simultaneously within the framework. By a general framework, we mean one able to fuse multiple correlated data sources and incorporate linear constraints directly, while detecting and removing outliers without any prior information. The proposed method, referred to here as the Covariance Projection (CP) method, aggregates all the state vectors into a single vector in an extended space. The method then projects the mean and covariance of the aggregated state vectors onto the constraint manifold representing the constraints among state vectors that must be satisfied, including the equality constraint. Based on the distance from the manifold, the proposed method identifies the relative disparity among data sources and assigns confidence measures. The method provides an unbiased and optimal solution in the sense of Minimum Mean Square Error (MMSE) for distributed fusion architectures and is able to deal with correlations and uncertainties among local estimates and/or sensor observations across time. Simulation results show the effectiveness of the proposed method in identifying and removing inconsistency in distributed sensor systems.
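The projection step has a standard closed form for linear constraints A x = b: the aggregated estimate is moved the minimum Mahalanobis distance onto the constraint manifold. A sketch follows; the function name and the scalar disparity measure it returns are illustrative choices, not the paper's notation.

```python
import numpy as np

def covariance_projection(x, P, A, b):
    """Project estimate (x, P) onto the linear constraint manifold A x = b."""
    S = A @ P @ A.T                      # constraint-space covariance
    K = P @ A.T @ np.linalg.inv(S)       # gain toward the manifold
    r = A @ x - b                        # constraint residual
    x_proj = x - K @ r
    P_proj = P - K @ A @ P
    return x_proj, P_proj, r @ np.linalg.inv(S) @ r   # last term: disparity

# Fusing two scalar estimates of the same quantity (equality constraint x1 = x2):
x = np.array([2.0, 4.0])                 # two sensor estimates
P = np.diag([1.0, 3.0])                  # their (independent) variances
A = np.array([[1.0, -1.0]])              # encodes x1 - x2 = 0
b = np.array([0.0])
x_f, P_f, d2 = covariance_projection(x, P, A, b)
```

For this example the projection reproduces the familiar inverse-variance weighted mean (both components become 2.5 with fused variance 0.75), and the returned disparity grows with the disagreement between the sources, which is what allows inconsistent inputs to be flagged and removed.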
Adaptive frame rate pattern projection for structured light 3D camera system
M. Atif, Sukhan Lee
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170367
Structured-light 3D camera systems are composed of a commercial video projector and a machine vision camera. High-scan-speed structured-light systems suffer from synchronization problems due to a mismatch between the projector's screen refresh rate and the camera's capture speed: commercial video projectors project at a fixed refresh rate, while the frame rate of a machine vision camera increases or decreases with resolution. Synchronization between the projected and captured frames cannot be achieved through the computer's graphics interface because of limited control over the hardware. This paper presents a method to project structured-light patterns and trigger the camera according to the camera frame rate and the projector refresh rate. An adaptive frame-rate pattern projection framework is implemented on a Field Programmable Gate Array (FPGA) to achieve camera-projector synchronization at any camera frame rate and projector refresh rate, which improves the accuracy of the point cloud.
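One simple way to adapt the pattern rate to an arbitrary camera is to hold each pattern for enough projector refresh frames that the camera, exposing at its own rate, captures at least one full, stable frame. This timing sketch is an assumption about what the FPGA logic could look like, not a detail taken from the paper.

```python
import math

def pattern_hold_frames(refresh_hz, camera_fps):
    """Number of projector refresh frames to hold each pattern so the camera
    (capturing at camera_fps) sees at least one complete, stable frame."""
    return max(1, math.ceil(refresh_hz / camera_fps))

def effective_pattern_rate(refresh_hz, camera_fps):
    """Resulting pattern rate after adapting to the camera frame rate."""
    return refresh_hz / pattern_hold_frames(refresh_hz, camera_fps)
```

For a 60 Hz projector and a 25 fps camera, each pattern would be held for three refresh frames, giving a 20 Hz effective pattern rate; the camera trigger then fires once per held pattern.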
Online reliability assessment and reliability-aware fusion for Ego-Lane detection using influence diagram and Bayes filter
T. Nguyen, J. Spehr, Jian Xiong, M. Baum, S. Zug, R. Kruse
Pub Date: 2017-11-01 | DOI: 10.1109/MFI.2017.8170400
Within the context of road estimation, this paper addresses the problem of fusing several sources with different reliabilities, where reliability represents a higher-level uncertainty. The problem arises in automated driving and ADAS due to changing environmental conditions, e.g., road type or visibility of lane markings. We therefore present an online sensor reliability assessment and reliability-aware fusion to cope with this challenge. First, we apply a boosting algorithm to select the most discriminant features among the extracted information. Using these features, we apply different classifiers to learn the reliabilities, such as Bayesian Network and Random Forest classifiers. To stabilize the estimated reliabilities over time, we deploy approaches such as Dempster-Shafer evidence theory and an Influence Diagram combined with a Bayes Filter. Experimental results on a large collection of real data recordings support the proposed approach.
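The temporal stabilization with a Bayes Filter can be sketched as a discrete two-state filter over {reliable, unreliable}, where sticky transition probabilities smooth over single noisy classifier outputs. All the numbers below are illustrative.

```python
import numpy as np

def bayes_filter_step(belief, likelihood, T):
    """One predict/update step of a discrete Bayes filter over sensor
    reliability states (here: [reliable, unreliable]).

    belief:      prior probabilities over the states
    likelihood:  per-state likelihood of the current classifier output
    T:           transition matrix, T[i, j] = P(next = j | current = i)
    """
    predicted = belief @ T               # predict: propagate through dynamics
    posterior = predicted * likelihood   # update: weight by the evidence
    return posterior / posterior.sum()   # normalize

# Sticky dynamics keep the estimate smooth across single noisy frames:
T = np.array([[0.95, 0.05],
              [0.05, 0.95]])
belief = np.array([0.5, 0.5])
for lik in ([0.9, 0.2], [0.8, 0.3], [0.3, 0.7]):   # per-frame classifier outputs
    belief = bayes_filter_step(belief, np.array(lik), T)
```

After two frames favoring "reliable" and one contradicting frame, the filtered belief still favors "reliable", which is exactly the smoothing behavior the fusion stage needs.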