On Feature, Classifier and Detector Fusers for 235U Signatures Using Gamma Spectral Counts
N. Rao, D. Hooper, J. Ladd-Lively
Pub Date: 2022-09-20 | DOI: 10.1109/MFI55806.2022.9913854
Three types of information fusion strategies are studied to assess the performance of classifiers for detecting low-level 235U radiation sources, using features obtained from gamma spectra of NaI detectors. The three strategies are based on using two spectral-region features, fusing eight classifiers of diverse designs, and fusing multiple detectors located at different positions around the source. Inner, middle, and outer groups of detectors, within a formation of two concentric circles and a spiral of 21 detectors, are identified based on their distance to the source, which is located at the center. This study provides two main qualitative insights into this classification task. First, detector fusion improves overall classification performance, with the smallest gains in the inner group, the largest in the outer group, and intermediate gains in the middle group. Second, several classifiers and fusers achieve a lower training error that does not translate into a lower generalization error, indicating over-fitting to the training data.
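The abstract does not reproduce the fusers themselves; as a minimal sketch of decision-level classifier fusion, the snippet below implements an unweighted majority vote over already-trained classifiers. The scikit-learn-style `predict` interface and the tie-breaking rule are assumptions, not taken from the paper.

```python
import numpy as np

def majority_vote_fusion(classifiers, X):
    """Fuse binary source/background decisions from several trained
    classifiers by unweighted majority vote."""
    # votes: shape (n_classifiers, n_samples), labels in {0, 1}
    votes = np.stack([clf.predict(X) for clf in classifiers])
    # Declare 'source present' (1) when at least half of the votes agree.
    return (votes.mean(axis=0) >= 0.5).astype(int)
```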
{"title":"On Feature, Classifier and Detector Fusers for 235U Signatures Using Gamma Spectral Counts","authors":"N. Rao, D. Hooper, J. Ladd-Lively","doi":"10.1109/MFI55806.2022.9913854","DOIUrl":"https://doi.org/10.1109/MFI55806.2022.9913854","url":null,"abstract":"Three types of information fusion strategies are studied to assess the performance of classifiers for detecting low-level 235U radiation sources, using features obtained from gamma spectra of NaI detectors. These three strategies are based on using two spectral region features, fusing eight classifiers of diverse designs, and fusing multiple detectors located at different positions around the source. The inner, middle and outer groups of detectors, within a formation of two concentric circles and a spiral of 21 detectors, are identified based on their distance to the source, which is located at the center. This study provides two main qualitative insights into this classification task. First, the fusion of detectors leads to an overall improved classification performance, least in the inner group, most in the outer group, and in between for the middle group. Second, several classifiers and fusers achieve lower training error which does not translate to lower generalization error, indicating their over-fitting to training data.","PeriodicalId":344737,"journal":{"name":"2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126119709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Monocular Road Damage Size Estimation using Publicly Available Datasets and Dashcam Imagery
Adithya Badidey, Ryan Dalby, Zhongyi Jiang, D. Sacharny, T. Henderson
Pub Date: 2022-09-20 | DOI: 10.1109/MFI55806.2022.9913878
Among the challenges of maintaining a safe and efficient transportation system, Departments of Transportation (DOTs) must assess the quality of hundreds of thousands of miles of roadway every year and prioritize limited resources to address issues that affect safety and reliability. In particular, road damage in the form of cracks and potholes is difficult to catalog in 3D and requires significant human resources to survey. However, a new and growing remote-sensing network comprised of low-cost consumer dashcams presents an opportunity to dramatically lower the cost and effort required to perform road damage assessments. This paper provides methods to approach this problem and details a number of public datasets and models that can be used to tackle it. The central contribution is a set of practical software pipelines designed to accomplish this task in an automated fashion. An emphasis on deep learning methods enables organizations to improve or tailor the results according to their specific requirements and the availability of labeled data. Suggestions for future work and improvements at each stage of the pipeline are also presented.
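The paper's pipelines are not included in the abstract; as an illustration of the monocular size-estimation step implied by the title, the standard pinhole-camera relation converts a detected bounding-box width into a metric width. The function name and the source of the depth estimate are hypothetical.

```python
def damage_width_meters(bbox_width_px, depth_m, focal_px):
    """Pinhole-camera estimate of metric width from an image bounding box.

    bbox_width_px : damage width in pixels reported by the detector
    depth_m       : distance to the damage (e.g., from a monocular depth
                    model or a flat-road assumption)
    focal_px      : camera focal length in pixels
    """
    return bbox_width_px * depth_m / focal_px
```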
{"title":"Monocular Road Damage Size Estimation using Publicly Available Datasets and Dashcam Imagery","authors":"Adithya Badidey, Ryan Dalby, Zhongyi Jiang, D. Sacharny, T. Henderson","doi":"10.1109/MFI55806.2022.9913878","DOIUrl":"https://doi.org/10.1109/MFI55806.2022.9913878","url":null,"abstract":"Among the challenges of maintaining a safe and efficient transportation system, Departments of Transportation (DOT) must assess the quality of hundreds-of-thousands of miles of roadway every year and prioritize limited resources to address issues that affect safety and reliability. In particular, road damage in the form of 3D analysis of cracks and potholes is difficult to catalog and require significant human resources to survey. However, a new and growing remote-sensing network comprised of low-cost consumer dashcams presents an opportunity to dramatically lower the cost and effort required to perform road damage assessments. This paper provides methods to approach this problem and details a number of public datasets and models that can be used to tackle it. The central contribution here is a set of several practical software pipelines designed to accomplish this task in an automated fashion. An emphasis on deep learning methods is presented that enables organizations to improve or tailor the results according to their specific requirements and the availability of labeled data. Suggestions for possible directions for future work and improvements at each stage of the pipeline are also presented.","PeriodicalId":344737,"journal":{"name":"2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125381875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Environment Adaptive Diagnostic Framework For Safe Localization of Autonomous Vehicles
Nesrine Harbaoui, Nourdine Ait Tmazirte, Maan El Badaoui El Najjar
Pub Date: 2022-09-20 | DOI: 10.1109/MFI55806.2022.9913870
For an autonomous terrestrial transportation system, the ability to determine its position is essential so that other functions, such as control or perception, can be carried out safely. The criticality of these functions generates strong requirements in terms of safety/integrity, availability, and accuracy. In this paper, a multilevel positioning framework is proposed to adapt the navigation system to a wide range of environmental contexts. To improve availability and accuracy, a tight coupling of Global Navigation Satellite System (GNSS), Inertial Measurement Unit (IMU), and vehicle odometry measurements based on a nonlinear information filter (NIF) is used. An adaptive diagnostic layer is then investigated to adjust the trade-off between safety and other operational requirements; its principal role is to handle sensor errors. The use of parametric residuals, coupled with a deep neural network (DNN), makes it possible to select, at each instant, the residual that maximizes the detectability of measurement faults in the environment being traversed. This paper focuses on the conceptual approach and the implementation of this framework to adapt to the operating context (open sky, suburban, urban, covered, etc.). Finally, to validate the performance of the proposed approach, tests are performed on a real trajectory, showing encouraging position estimation results.
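For readers unfamiliar with why information filters suit tight multi-sensor coupling, the sketch below shows the additive measurement update of an extended (nonlinear) information filter: each sensor simply adds its information contribution. The variable layout is a textbook form, not the paper's implementation.

```python
import numpy as np

def nif_measurement_update(y, Y, z, h_pred, H, R):
    """One additive measurement update of a nonlinear information filter.

    y, Y   : information vector/matrix (Y = inv(P), y = Y @ x_hat)
    z      : measurement (e.g., a GNSS, IMU, or odometry observation)
    h_pred : predicted measurement h(x_hat)
    H      : measurement Jacobian evaluated at x_hat
    R      : measurement noise covariance
    """
    x_hat = np.linalg.solve(Y, y)               # recover the state estimate
    R_inv = np.linalg.inv(R)
    i = H.T @ R_inv @ (z - h_pred + H @ x_hat)  # information contribution
    I = H.T @ R_inv @ H
    return y + i, Y + I                         # fusion is plain addition
```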
{"title":"Environment Adaptive Diagnostic Framework For Safe Localization of Autonomous Vehicles","authors":"Nesrine Harbaoui, Nourdine Ait Tmazirte, Maan El Badaoui El Najjar","doi":"10.1109/MFI55806.2022.9913870","DOIUrl":"https://doi.org/10.1109/MFI55806.2022.9913870","url":null,"abstract":"For an autonomous terrestrial transportation system, the ability to determine its position is essential in order to allow other functions, such as control or perception, to be carried out without danger. Thus, the criticality of these functions generates strong requirements in terms of safety/integrity, availability and accuracy. In the present paper, a multilevel positioning framework is proposed to adapt the navigation system to a wide range of environmental contexts. In order to improve the availability and accuracy, a tight coupling method of Global Navigation Satellite System (GNSS), Inertial Measurement Unit (IMU) and vehicle’s odometry measurements based on nonlinear information filter (NIF) is used. Then, an adaptive diagnostic layer is investigated to adjust the trade-off between safety and other operational requirements. Its principal role is to deal with sensors errors. The use of parametric residuals, coupled with a deep neural network (DNN), makes it possible to select at each instant, the appropriate residual allowing, in the environment crossed, to maximize the detectability of measurement faults. This paper focuses on the conceptual approach and the implementation of this framework in order to adapt to the operating context (open sky, sub-urban, urban, covered …). Finally, to validate the performance of the proposed approach, tests are done with real trajectory showing encouraging position estimation results.","PeriodicalId":344737,"journal":{"name":"2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116044458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The impact of the camera setup on the visibility rate of traffic lights
Nicola Barthelmes, S. Sicklinger, Markus Zimmermann
Pub Date: 2022-09-20 | DOI: 10.1109/MFI55806.2022.9913851
Continuous and reliable object detection is essential for advanced driver assistance systems and in particular for fully automated vehicles. Most research focuses on developing object detection algorithms and optimizing the rate of successful object identifications on image frames. However, the sensor position as a relevant design variable is not considered, although it significantly influences whether an object is detectable by the camera or whether it is outside the field of view or occluded by another traffic participant. This paper introduces a method to assess different camera setups to optimize traffic light visibility. We show that appropriate positioning of the cameras can improve the visibility of a traffic light by up to 90% as a vehicle approaches a junction. Furthermore, we show that combining a near-field camera with a long-range camera achieves a more robust result than using a single multi-purpose camera.
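A minimal sketch of the geometric test underlying such a visibility assessment: checking whether a traffic light falls inside a camera's horizontal field-of-view cone. Occlusion by other traffic participants would require additional ray checks; the names and the point-target simplification are assumptions.

```python
import numpy as np

def in_field_of_view(cam_pos, cam_dir, fov_deg, target_pos):
    """True if a target point lies inside the camera's FOV cone."""
    to_target = np.asarray(target_pos, float) - np.asarray(cam_pos, float)
    to_target /= np.linalg.norm(to_target)
    heading = np.asarray(cam_dir, float) / np.linalg.norm(cam_dir)
    # Angle between the camera axis and the line of sight to the target.
    angle = np.degrees(np.arccos(np.clip(heading @ to_target, -1.0, 1.0)))
    return angle <= fov_deg / 2.0
```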
{"title":"The impact of the camera setup on the visibility rate of traffic lights","authors":"Nicola Barthelmes, S. Sicklinger, Markus Zimmermann","doi":"10.1109/MFI55806.2022.9913851","DOIUrl":"https://doi.org/10.1109/MFI55806.2022.9913851","url":null,"abstract":"Continuous and reliable object detection is essential for advanced driving assistant systems and in particular for fully automated vehicles. Most research focuses on developing object detection algorithms and optimizing the rate of successful object identifications on image frames. However, the sensor position as a relevant design variable is not considered, although it significantly influences whether an object is detectable by the camera, or if it is outside of the field of view or occluded by another traffic participant. This paper introduces a method to assess different camera setups to optimize traffic light visibility. We show that appropriate positioning of the cameras can improve the visibility of a traffic light by up to 90% as a vehicle approaches a junction. Furthermore, we show that the combination of a near-field camera with a long-range camera achieves a more robust result than using a single multi-purpose camera.","PeriodicalId":344737,"journal":{"name":"2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115879776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perception-aware Receding Horizon Path Planning for UAVs with LiDAR-based SLAM
Reiya Takemura, G. Ishigami
Pub Date: 2022-09-20 | DOI: 10.1109/MFI55806.2022.9913848
This paper presents a perception-aware path planning framework for unmanned aerial vehicles (UAVs) that explicitly considers the perception quality of a light detection and ranging (LiDAR) sensor. The perception quality is quantified by how scattered the feature points are in LiDAR-based simultaneous localization and mapping, which can improve the accuracy of UAV pose estimation. In the planning step, the proposed framework selects the best path, based on the perception quality, from a library of candidate paths generated by the rapidly-exploring random trees algorithm. Consequently, the UAV can autonomously fly to a destination in a receding-horizon manner. Several simulation trials in photorealistic environments confirm that the proposed path planner reduces pose estimation error by approximately 85% on average compared with a purely reactive path planner.
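As an illustrative sketch of perception-aware path selection, the snippet below scores each candidate path by the spatial spread of the LiDAR feature points predicted along it and keeps the best one. The dispersion metric and the `features_along` callback are hypothetical stand-ins for the paper's perception-quality measure.

```python
import numpy as np

def feature_dispersion(feature_points):
    """Hypothetical perception score: spatial spread (trace of the
    covariance) of predicted feature points; well-scattered features
    tend to stabilize LiDAR-based SLAM."""
    pts = np.asarray(feature_points, float)
    return np.trace(np.cov(pts.T))

def select_best_path(paths, features_along):
    """Pick the RRT candidate whose predicted features score highest."""
    return max(paths, key=lambda p: feature_dispersion(features_along(p)))
```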
{"title":"Perception-aware Receding Horizon Path Planning for UAVs with LiDAR-based SLAM","authors":"Reiya Takemura, G. Ishigami","doi":"10.1109/MFI55806.2022.9913848","DOIUrl":"https://doi.org/10.1109/MFI55806.2022.9913848","url":null,"abstract":"This paper presents a perception-aware path planning framework for unmanned aerial vehicles (UAVs) that explicitly considers perception quality of a light detection and ranging (LiDAR) sensor. The perception quality is quantified based on how scattered feature points are in LiDAR-based simultaneous localization and mapping, which can improve the accuracy of pose estimation of UAVs. In the planning step of a UAV, the proposed framework selects the best path based on the perception quality from a library of candidate paths generated by the rapidly-exploring random trees algorithm. Consequently, the UAV can autonomously fly to a destination in a receding horizon manner. Several simulation trials of the photorealistic environments confirm that our proposed path planner reduces pose estimation error by approximately 85 % on average as compared with a purely-reactive path planner.","PeriodicalId":344737,"journal":{"name":"2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"558 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115493618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-Illumination Lane Detection by Fusion of Multi-light Information
Wenbo Zhao, Wei Tian, Yi Han, Xianwang Yu
Pub Date: 2022-09-20 | DOI: 10.1109/MFI55806.2022.9913865
Lane detection based on visual sensors is of great significance for the environmental perception of intelligent vehicles. Current mature lane detection algorithms are trained and deployed under good visual conditions. However, low-light environments such as nighttime are much more complex and easily cause misdetections and even perception failures, which harm downstream tasks such as behavior decision and control of the ego-vehicle. To tackle this problem, we propose a new lane detection algorithm that introduces multi-light information into the lane detection task. The proposed algorithm adopts a multi-exposure image processing module, which generates and fuses multi-exposure information from the source image data. By integrating this module, mainstream lane detection models can jointly learn lane-feature extraction and the enhancement of low-exposure images, improving both the performance and robustness of lane detection at night.
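A minimal sketch of the multi-exposure idea, assuming synthetic exposures generated by gamma correction: the paper's module learns the fusion jointly with lane-feature extraction, so the pixel-wise average below is only a stand-in.

```python
import numpy as np

def synth_exposures(img, gammas=(0.5, 1.0, 2.0)):
    """Synthesize exposure variants of a low-light frame via gamma
    correction; img is a float array scaled to [0, 1]."""
    img = np.clip(img, 0.0, 1.0)
    return [img ** g for g in gammas]   # gamma < 1 brightens the frame

def fuse_exposures(exposures):
    """Stand-in fusion: pixel-wise mean of the exposure stack."""
    return np.mean(np.stack(exposures), axis=0)
```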
{"title":"Low-Illumination Lane Detection by Fusion of Multi-light Information","authors":"Wenbo Zhao, Wei Tian, Yi Han, Xianwang Yu","doi":"10.1109/MFI55806.2022.9913865","DOIUrl":"https://doi.org/10.1109/MFI55806.2022.9913865","url":null,"abstract":"Lane detection based on the visual sensor is of great significance for the environmental perception of the intelligent vehicle. Current mature lane detection algorithms are trained and implemented in good visual conditions. However, the low-light environment such as in the night is much more complex, easily causing misdetections and even perception failures, which are harmful to the downstream tasks such as behavior decision and control of ego-vehicle. To tackle this problem, we propose a new lane detection algorithm that introduces the multi-light information into lane detection task. The proposed algorithm adopts a multi-exposure image processing module, which generates and fuses multi-exposure information from the source image data. By integrating this module, mainstream lane detection models can jointly learn the extraction of lane features as well as the enhancement of low-exposed image, thus improving both the performance and robustness of lane detection in the night.","PeriodicalId":344737,"journal":{"name":"2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127240889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joint Estimation of States and Parameters in Stochastic SIR Model
Peng Liu, Gustaf Hendeby, Fredrik K. Gustafsson
Pub Date: 2022-09-20 | DOI: 10.1109/MFI55806.2022.9913861
The classical SIR model is a fundamental building block of most epidemiological models. Despite its widespread use, its properties in filtering and estimation applications are much less well explored. Independently of how the basic SIR model is integrated into more complex models, the fundamental question is whether the states and parameters can be estimated from a fusion of the available numeric measurements. The problem studied in this paper focuses on parameter and state estimation of a stochastic SIR model from assumed direct measurements of the number of infected people in the population; the generalisation to other measurements is left for future research. For parameter estimation, two components are discussed separately. The first is model parameter estimation assuming that all states are measured directly. The second is state estimation assuming known parameters. These two components are combined into an iterative state and parameter estimator, which is compared to a straightforward approach based on state augmentation of the unknown parameters. Feasibility of the problem is studied from an information-theoretic point of view using the Cramér-Rao Lower Bound (CRLB). Using simulated data resembling the first wave of Covid-19 in Sweden, the iterative method outperforms the state augmentation approach.
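For concreteness, one propagation step of the kind of stochastic SIR model such an estimator must handle is sketched below (Euler-Maruyama discretization; the noise form is an assumption, not the paper's exact diffusion). In the state-augmentation baseline, the parameters beta and gamma are simply appended to the state [S, I, R] and propagated as constants inside the filter.

```python
import numpy as np

def sir_step(S, I, R, beta, gamma, N, dt, rng):
    """One Euler-Maruyama step of a stochastic SIR model."""
    new_inf = beta * S * I / N * dt   # susceptible -> infected flow
    new_rec = gamma * I * dt          # infected -> recovered flow
    # Illustrative process noise scaled to the flow magnitudes (assumed form).
    new_inf += rng.normal(0.0, np.sqrt(max(new_inf, 1e-9)))
    new_rec += rng.normal(0.0, np.sqrt(max(new_rec, 1e-9)))
    return S - new_inf, I + new_inf - new_rec, R + new_rec
```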
{"title":"Joint Estimation of States and Parameters in Stochastic SIR Model","authors":"Peng Liu, Gustaf Hendeby, Fredrik K. Gustafsson","doi":"10.1109/MFI55806.2022.9913861","DOIUrl":"https://doi.org/10.1109/MFI55806.2022.9913861","url":null,"abstract":"The classical SIR model is a fundamental building block in most epidemiological models. Despite its widespread use, its properties in filtering and estimation applications are much less well explored. Independently of how the basic SIR model is integrated into more complex models, the fundamental question is whether the states and parameters can be estimated from a fusion of available numeric measurements. The problem studied in this paper focuses on the parameter and state estimation of a stochastic SIR model from assumed direct measurements of the number of infected people in the population, and the generalisation to other measurements is left for future research. In terms of parameter estimation, two components are discussed separately. The first component is model parameter estimation assuming that the all states are measured directly. The second component is state estimation assuming known parameters. These two components are combined into an iterative state and parameter estimator. This iterative method is compared to a straightforward approach based on state augmentation of the unknown parameters. Feasibility of the problem is studied from an information-theoretic point of view using the Cramér Rao Lower Bound (CRLB). Using simulated data resembling the first wave of Covid-19 in Sweden, the iterative method outperforms the state augmentation approach.","PeriodicalId":344737,"journal":{"name":"2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126698436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Calibration-free IMU-based Kinematic State Estimation for Robotic Manipulators*
Michael Fennel, Lukas Driller, Antonio Zea, U. Hanebeck
Pub Date: 2022-09-20 | DOI: 10.1109/MFI55806.2022.9913875
Precise knowledge of a robot manipulator's kinematic state, including position, velocity, and acceleration, is one of the basic requirements for the application of advanced control algorithms. To obtain this information, encoder data could be differentiated numerically; however, the resulting velocity and acceleration estimates are either noisy or delayed as a result of low-pass filtering. Numerical differentiation can be circumvented by using gyroscopes and accelerometers, but these suffer from a variety of measurement errors and nonlinearities with respect to the desired quantities. Therefore, we present a novel, real-time-capable kinematic state estimator based on the Extended Kalman filter, with states for the effective sensor biases. This makes it possible to handle arbitrary inertial sensor setups without calibration on manipulators composed of revolute and prismatic joints. Simulation experiments show that the proposed estimator is robust to various error sources and outperforms competing approaches. Moreover, its practical relevance is demonstrated on a real manipulator with two joints.
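As a sketch of how bias states can be appended to a kinematic filter, the block-structured transition matrix below propagates one joint's position, velocity, and acceleration under a constant-acceleration model while treating the effective sensor biases as a random walk. The dimensions, sample time, and state layout are assumptions, not the paper's design.

```python
import numpy as np

dt = 0.01  # assumed filter period in seconds
# Constant-acceleration block for one joint: [position, velocity, acceleration].
F_joint = np.array([[1.0, dt, 0.5 * dt**2],
                    [0.0, 1.0, dt],
                    [0.0, 0.0, 1.0]])
n_bias = 3  # e.g., one effective bias per inertial axis (assumed)
# Bias states keep identity dynamics; drift enters through the process noise Q.
F = np.block([
    [F_joint,               np.zeros((3, n_bias))],
    [np.zeros((n_bias, 3)), np.eye(n_bias)],
])
```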
{"title":"Calibration-free IMU-based Kinematic State Estimation for Robotic Manipulators*","authors":"Michael Fennel, Lukas Driller, Antonio Zea, U. Hanebeck","doi":"10.1109/MFI55806.2022.9913875","DOIUrl":"https://doi.org/10.1109/MFI55806.2022.9913875","url":null,"abstract":"The precise knowledge of a robot manipulator’s kinematic state including position, velocity, and acceleration is one of the base requirements for the application of advanced control algorithms. To obtain this information, encoder data could be differentiated numerically. However, the resulting velocity and acceleration estimates are either noisy or delayed as a result of low-pass filtering. Numerical differentiation can be circumvented by the utilization of gyroscopes and accelerometers, but these suffer from a variety of measurement errors and nonlinearity regarding the desired quantities. Therefore, we present a novel, real-time capable kinematic state estimator based on the Extended Kalman filter with states for the effective sensor biases. This way, the handling of arbitrary inertial sensor setups is made possible without calibration on manipulators composed of revolute and prismatic joints. Simulation experiments show that the proposed estimator is robust towards various error sources and that it outperforms competing approaches. Moreover, the practical relevance is demonstrated using a real manipulator with two joints.","PeriodicalId":344737,"journal":{"name":"2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"270 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123366140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joint Localization and Calibration in Partly and Fully Uncalibrated Array Sensor Networks
Jannik Springer, M. Oispuu, W. Koch
Pub Date: 2022-09-20 | DOI: 10.1109/MFI55806.2022.9913866
The performance of high-resolution direction finding methods can degrade significantly if mismatches between the actual and modeled array responses are not compensated. Using sources of opportunity, self-calibration techniques jointly estimate the unknown perturbations and source parameters. In this work, we propose a self-calibration method for sensor networks that fully exploits the source position by combining the well-known bearings-only localization method with existing eigenstructure-based self-calibration techniques. Using numerical experiments, we demonstrate that the proposed method can uniquely estimate the gain and phase perturbations of multiple sensors as well as the positions of a moving source. We outline the Cramér-Rao lower bound and show that the method is efficient. Finally, the self-calibration method is applied to measurement data collected in field trials.
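The bound itself follows the standard Gaussian-measurement form, sketched below; the Jacobian entries for the gain/phase perturbations and the source positions are problem-specific and not reproduced here.

```python
import numpy as np

def crlb(jacobian, noise_cov):
    """Cramér-Rao lower bound for an unbiased estimator under a Gaussian
    measurement model: the inverse Fisher information J^T R^-1 J."""
    J = np.atleast_2d(jacobian)
    fisher = J.T @ np.linalg.inv(noise_cov) @ J
    return np.linalg.inv(fisher)  # lower bound on the parameter covariance
```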
{"title":"Joint Localization and Calibration in Partly and Fully Uncalibrated Array Sensor Networks","authors":"Jannik Springer, M. Oispuu, W. Koch","doi":"10.1109/MFI55806.2022.9913866","DOIUrl":"https://doi.org/10.1109/MFI55806.2022.9913866","url":null,"abstract":"The performance of high-resolution direction finding methods can significantly degrade if mismatches between the actual array response and the modeled array response are not compensated. Using sources of opportunity, self-calibration techniques jointly estimate any unknown perturbations and source parameters. In this work, we propose a self-calibration method for sensor networks that fully exploits the source position by combining the well-known bearings-only localization method and existing eigenstructure based self-calibration techniques. Using numerical experiments we demonstrate that the proposed method can uniquely estimate the gain and phase perturbations of multiple sensors as well as the positions of a moving source. We outline the Cramer-Rao lower bound and´ show that the method is efficient. Finally, the self-calibration method is applied to measurement data collected in field trials.","PeriodicalId":344737,"journal":{"name":"2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121596427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Regression with Ensemble of RANSAC in Camera-LiDAR Fusion for Road Boundary Detection and Modeling
Mukhlas A. Rasyidy, Y. Y. Nazaruddin, A. Widyotriatmo
Pub Date: 2022-09-20 | DOI: 10.1109/MFI55806.2022.9913856
This paper describes a technique for post-detection fusion of camera and LiDAR data in road boundary estimation tasks. Specifically, the technique takes road boundary detections generated separately by the camera and the LiDAR and enhances the accuracy of the estimated road boundaries. The proposed approach achieves more accurate estimation than LiDAR-only detection in the near range and than camera-only detection in the long range. Random sample consensus (RANSAC) over linear regressions is used to create a road boundary model that reduces errors and outliers while remaining simple, explainable, and adaptive to the road curvature. The generated linear models are then combined into a single road boundary that can be interpolated and extrapolated using a Boosting-like algorithm with a non-parametric strategy. This technique is called RANSAC-Ensemble. Experiments show that it achieves better accuracy with comparable processing time relative to other common methods of road boundary model estimation.
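A minimal sketch of the RANSAC building block that the ensemble combines: fitting a single 2D line to candidate boundary points. The iteration count, inlier tolerance, and slope-intercept parameterization are assumptions; the Boosting-like combination of many such fits is not reproduced.

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.1, rng=None):
    """Vanilla RANSAC fit of one line y = a*x + b to (N, 2) points."""
    if rng is None:
        rng = np.random.default_rng()
    best_inliers, best_ab = np.zeros(len(points), bool), (0.0, 0.0)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):
            continue  # skip degenerate (near-vertical) sample pairs
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.abs(points[:, 1] - (a * points[:, 0] + b)) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_ab = inliers, (a, b)
    return best_ab, best_inliers
```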
{"title":"Regression with Ensemble of RANSAC in Camera-LiDAR Fusion for Road Boundary Detection and Modeling","authors":"Mukhlas A. Rasyidy, Y. Y. Nazaruddin, A. Widyotriatmo","doi":"10.1109/MFI55806.2022.9913856","DOIUrl":"https://doi.org/10.1109/MFI55806.2022.9913856","url":null,"abstract":"This paper describes a technique to perform a post-detection fusion of camera and LiDAR data for road boundary estimation tasks. To be specific, the technique takes the road boundary detection results that are generated separately from the camera and LiDAR to enhance the accuracy of the estimated road boundaries. The proposed approach can achieve a more accurate estimation in the near range than just LiDAR-based detection and in the long range than just camera-based detection. Random sample consensus (RANSAC) of linear regressions is used to create the road boundary model that is capable of reducing errors and outliers while keeping it simple, explainable, and adaptive to the road curvature. The generated linear models are then combined into a single road boundary that can be interpolated and extrapolated using a Boosting-like algorithm with a non-parametric strategy. This technique is called as RANSAC-Ensemble. The experiments show that this technique has better accuracy with comparable processing time than certain other common methods of road boundary model estimation.","PeriodicalId":344737,"journal":{"name":"2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"12 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113979383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}