Batch-wise Regularization of Deep Neural Networks for Interpretability
Nadia Burkart, Philipp M. Faller, Elisabeth Peinsipp, Marco F. Huber
Pub Date: 2020-09-14 | DOI: 10.1109/MFI49285.2020.9235209
Rapid progress in machine learning and deep learning strongly influences research in many application domains, such as autonomous driving and health care. In this paper, we propose a batch-wise regularization technique that enhances the interpretability of deep neural networks (NNs) by means of a global surrogate rule list. For this purpose, we introduce a novel regularization approach that yields a differentiable penalty term. In contrast to other regularization approaches, ours avoids repeatedly creating surrogate models during training of the NN. The experiments show that the proposed approach achieves high fidelity to the main model and yields models that are both interpretable and more accurate than some of the baselines.
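The core mechanism — augmenting the task loss with a differentiable interpretability penalty — has this generic shape. A minimal sketch only: the function name, the cross-entropy task loss, and the scalar `penalty` placeholder are illustrative assumptions, not the paper's batch-wise surrogate term.

```python
import numpy as np

def regularized_loss(y_pred, y_true, penalty, lam=0.1):
    """Task loss plus a differentiable penalty, weighted by lam.

    `penalty` stands in for any differentiable interpretability term
    (hypothetical placeholder); because it is differentiable, the sum
    can be minimized end-to-end with ordinary gradient-based training.
    """
    eps = 1e-12  # guard against log(0)
    ce = -np.mean(y_true * np.log(y_pred + eps)
                  + (1.0 - y_true) * np.log(1.0 - y_pred + eps))
    return ce + lam * penalty
```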
An EKF Based Approach to Radar Inertial Odometry
C. Doer, G. Trommer
Pub Date: 2020-09-14 | DOI: 10.1109/MFI49285.2020.9235254
Accurate localization is key for autonomous robotics. Navigation in GNSS-denied and visually degraded environments is still very challenging, and approaches based on visual sensors usually fail in conditions such as darkness, direct sunlight, fog, or smoke. Our approach is based on a millimeter-wave FMCW radar sensor and an Inertial Measurement Unit (IMU), as both sensors can operate in these conditions. Specifically, we propose an Extended Kalman Filter (EKF) based solution to 3D Radar Inertial Odometry (RIO). A standard automotive FMCW radar, which measures the 3D position and Doppler velocity of each detected target, is used. Based on the radar measurements, a RANSAC 3D ego-velocity estimation is carried out. Fusion with inertial data further improves accuracy and robustness and provides a high-rate motion estimate. An extension with barometric height fusion is also presented. The radar-based ego-velocity estimation is tested in simulation, and its accuracy is evaluated on real-world datasets in a motion capture system. Tests in indoor and outdoor environments with trajectories longer than 200 m achieved a final position error below 0.6% of the distance traveled. The proposed odometry approach runs faster than real time even on an embedded computer.
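The RANSAC ego-velocity step rests on a simple relation: for a static target with unit direction d_i as seen from the radar, the measured Doppler speed satisfies v_r,i = -d_i · v_ego, so three or more targets determine v_ego and moving targets stand out as outliers. A minimal sketch of that idea — parameter values, names, and structure are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ransac_ego_velocity(dirs, dopplers, iters=100, thresh=0.1, seed=0):
    """Estimate 3D ego velocity from radar targets via RANSAC.

    dirs     -- (N, 3) unit vectors from sensor to targets
    dopplers -- (N,) measured Doppler speeds; static targets obey
                dopplers[i] = -dirs[i] @ v_ego, movers are outliers
    """
    rng = np.random.default_rng(seed)
    best = np.zeros(len(dopplers), dtype=bool)
    for _ in range(iters):
        # minimal sample: 3 targets fix the 3 velocity components
        idx = rng.choice(len(dopplers), size=3, replace=False)
        v, *_ = np.linalg.lstsq(-dirs[idx], dopplers[idx], rcond=None)
        inliers = np.abs(dirs @ v + dopplers) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    # refit on the full inlier set (static targets only)
    v, *_ = np.linalg.lstsq(-dirs[best], dopplers[best], rcond=None)
    return v, best
```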
Robust Vehicle Tracking with Monocular Vision using Convolutional Neuronal Networks
Jakob Dichgans, Jan Kallwies, H. Wuensche
Pub Date: 2020-09-14 | DOI: 10.1109/MFI49285.2020.9235213
In this paper, we present a robust tracking system that enables an autonomous vehicle to follow a specific convoy leader. Images from a single camera are used as input data, from which predefined keypoints on the lead vehicle are detected by a convolutional neural network. This approach was inspired by the idea of human pose estimation and is shown to be significantly more accurate than standard bounding-box detection approaches such as YOLO. The dynamic state of the leading vehicle is estimated by means of a moving horizon estimator. We show the practical capabilities and usefulness of the system in real-world experiments. The experiments show that the tracking system, although it operates on images alone, is competitive with earlier approaches that also used other sensors such as LiDAR.
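The moving-horizon idea — re-solving a small least-squares problem over a sliding window of measurements instead of doing a purely recursive update — can be sketched with a hypothetical 1-D constant-velocity model. The model, weights, and window below are illustrative assumptions, not the authors' vehicle model:

```python
import numpy as np

def mhe_window(zs, dt=0.1, w_meas=1.0, w_proc=10.0):
    """Estimate positions p_0..p_{N-1} and a constant velocity v from
    noisy position measurements zs over one horizon, by stacking
    measurement residuals (p_k - z_k) and process residuals
    (p_{k+1} - p_k - dt*v) into a single linear least-squares problem.
    """
    N = len(zs)
    rows, rhs = [], []
    for k in range(N):                      # measurement residuals
        r = np.zeros(N + 1)
        r[k] = w_meas
        rows.append(r); rhs.append(w_meas * zs[k])
    for k in range(N - 1):                  # process-model residuals
        r = np.zeros(N + 1)
        r[k + 1], r[k], r[N] = w_proc, -w_proc, -w_proc * dt
        rows.append(r); rhs.append(0.0)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol[:N], sol[N]                  # positions, velocity
```

Unlike a Kalman filter, every measurement in the horizon can re-influence every state in the window, which is what makes the estimate robust to individual keypoint outliers.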
Array-based Emitter Localization Using a VTOL UAV Carried Sensor
Christian Steffes, Clemens Allmann, M. Oispuu
Pub Date: 2020-09-14 | DOI: 10.1109/MFI49285.2020.9235236
In this paper, the localization of a radio frequency (RF) emitter using bearing estimates is investigated. We study position estimation using a single airborne observer platform moving along a preplanned trajectory. We present results from field trials using an emitter location system (ELS) installed on a vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV). Raw array data batches were gathered using a six-channel receiver and a fully polarized array antenna. A standard two-step localization approach based on angle-of-arrival (AOA) measurements and a Direct Position Determination (DPD) approach were applied, and the performance of both methods was investigated in real-flight experiments.
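In a two-step AOA scheme, bearings are first estimated from the array data and then fused into a position fix; with noise-free angles in 2-D, the second step reduces to intersecting the bearing lines in the least-squares sense. A geometric sketch of that second step only — not the ELS processing chain itself:

```python
import numpy as np

def aoa_position_fix(obs_pos, bearings):
    """Least-squares emitter position from 2-D bearing measurements.

    obs_pos  -- (N, 2) observer positions along the trajectory
    bearings -- (N,) bearing angles (radians, from the x-axis)

    Each bearing defines a line through the observer; the line normal
    n_i = [-sin(theta_i), cos(theta_i)] yields the linear constraint
    n_i @ x = n_i @ p_i, and stacking all constraints gives an
    overdetermined system solved for the emitter position x.
    """
    n = np.stack([-np.sin(bearings), np.cos(bearings)], axis=1)
    b = np.sum(n * obs_pos, axis=1)
    x, *_ = np.linalg.lstsq(n, b, rcond=None)
    return x
```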
Detecting Floods Caused by Tropical Cyclone Using CYGNSS Data
Pedram Ghasemigoudarzi, Weimin Huang, Oscar De Silva
Pub Date: 2020-09-14 | DOI: 10.1109/MFI49285.2020.9235243
When a tropical cyclone moves inland, its heavy precipitation can cause severe flash floods. Real-time flood remote sensing can reduce the resulting damage. With its high temporal resolution and large constellation, the Cyclone Global Navigation Satellite System (CYGNSS) has the potential to detect and monitor flash floods. In this study, a flood detection method based on CYGNSS data and the Random Under-Sampling Boosted (RUSBoost) machine learning algorithm is proposed. The proposed technique is applied to the areas affected by Hurricane Harvey and Hurricane Irma. Test results indicate that flooded points are detected with 89.00% and 85.00% accuracy, respectively, and non-flooded land points are classified with 97.20% and 71.00% accuracy, respectively.
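RUSBoost combines boosting with random under-sampling (RUS) of the majority class, which matters here because flooded points are rare compared with non-flooded land. The under-sampling step alone can be sketched as follows (a standalone illustration of the 'RUS' part; the paper uses the full boosted ensemble):

```python
import numpy as np

def random_undersample(X, y, seed=0):
    """Balance a dataset by randomly discarding samples from every
    class down to the size of the smallest class -- the step RUSBoost
    applies to the training set of each boosting round."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    keep.sort()  # preserve original sample order
    return X[keep], y[keep]
```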
Prototyping Autonomous Robotic Networks on Different Layers of RAMI 4.0 with Digital Twins
Alexander Barbie, W. Hasselbring, Niklas Pech, S. Sommer, S. Flögel, F. Wenzhöfer
Pub Date: 2020-09-14 | DOI: 10.1109/MFI49285.2020.9235210
In this decade, the number of (industrial) Internet of Things devices will increase tremendously. Today, no common standards exist for interconnecting, observing, or monitoring these devices. In the context of the German "Industrie 4.0" strategy, the Reference Architectural Model Industry 4.0 (RAMI 4.0) was introduced to connect the different aspects of this rapid development. The idea is to let the different stakeholders of these products speak and understand the same terminology. In this paper, we present an approach that uses Digital Twins to prototype different layers of RAMI 4.0, using the example of an autonomous ocean observation system developed in the project ARCHES.
Weighted Information Filtering, Smoothing, and Out-of-Sequence Measurement Processing
Yaron Shulami, Daniel Sigalov
Pub Date: 2020-09-06 | DOI: 10.1109/MFI49285.2020.9235234
We consider the problem of state estimation in dynamical systems and propose a different mechanism for handling unmodeled system uncertainties. Instead of injecting random process noise, we weight the measurements so that more recent measurements receive more weight. A specific choice of exponentially decaying weight function results in an algorithm with essentially the same recursive structure as the Kalman filter (KF). It differs, however, in the manner in which old and new data are combined: while the classical KF inflates the uncertainty associated with the previous estimate by adding the process noise covariance, here the inflation is done by multiplying the previous covariance matrix by an exponential factor. This difference allows us to solve a larger variety of problems using essentially the same algorithm. We thus propose a unified method, optimal in the least-squares sense, for filtering, prediction, smoothing, and general out-of-sequence updates, all of which otherwise require different Kalman-like algorithms.
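The multiplicative-inflation mechanism can be sketched as a one-step "fading memory" Kalman-like recursion. Illustrative only: the variable names and the value of the inflation factor alpha are assumptions, and the paper's full information-form algorithm with smoothing and out-of-sequence updates is not reproduced here.

```python
import numpy as np

def fading_memory_kf_step(x, P, z, F, H, R, alpha=1.05):
    """One predict/update step of an exponentially weighted
    Kalman-like filter: the prior covariance is inflated by
    multiplying with alpha**2 instead of adding process noise Q."""
    # predict: multiplicative covariance inflation replaces "+ Q"
    x = F @ x
    P = alpha**2 * (F @ P @ F.T)
    # standard KF measurement update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

With alpha = 1 this degenerates to a Kalman filter with zero process noise; alpha > 1 keeps the gain bounded away from zero, which is what discounts old data exponentially.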
Deterministic Gibbs Sampling for Data Association in Multi-Object Tracking
Laura M. Wolf, M. Baum
Pub Date: 2020-06-05 | DOI: 10.1109/MFI49285.2020.9235211
In multi-object tracking, multiple objects generate multiple sensor measurements, which are used to estimate the objects' states simultaneously. Since it is unknown which object a measurement originates from, a data association problem arises. Considering all possible associations is computationally infeasible for large numbers of objects and measurements; hence, approximation methods are applied to compute the most relevant associations. Here, we focus on deterministic methods, since multi-object tracking is often applied in safety-critical areas. In this work, we show that Herded Gibbs sampling, a deterministic version of Gibbs sampling, applied in the labeled multi-Bernoulli filter yields results of the same quality as randomized Gibbs sampling at comparable computational complexity. We conclude that it is a suitable deterministic alternative to randomized Gibbs sampling and could be a promising approach for other data association problems.
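The herding trick that makes Gibbs sampling deterministic is easiest to see in the one-dimensional Bernoulli case: an accumulator replaces the random draw, so the empirical frequency tracks the target probability at a fast O(1/n) rate instead of the O(1/sqrt(n)) of i.i.d. sampling. Herded Gibbs applies this per conditional distribution; the toy sketch below shows only the scalar mechanism, not the labeled multi-Bernoulli filter integration:

```python
import numpy as np

def herded_bernoulli(p, n):
    """Deterministic sequence of n {0,1} 'samples' whose running mean
    tracks p: accumulate weight p each step and emit a 1 whenever the
    accumulator crosses 1 (then subtract 1)."""
    w, out = 0.0, np.empty(n, dtype=int)
    for i in range(n):
        w += p
        if w >= 1.0:
            out[i] = 1
            w -= 1.0
        else:
            out[i] = 0
    return out
```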
Certifiably Optimal Monocular Hand-Eye Calibration
Emmett Wise, Matthew Giamou, Soroush Khoubyarian, Abhinav Grover, Jonathan Kelly
Pub Date: 2020-05-17 | DOI: 10.1109/MFI49285.2020.9235219
Correct fusion of data from two sensors requires an accurate estimate of their relative pose, which can be determined through the process of extrinsic calibration. When the sensors are capable of producing their own egomotion estimates (i.e., measurements of their trajectories through an environment), the 'hand-eye' formulation of extrinsic calibration can be employed. In this paper, we extend our recent work on a convex optimization approach for hand-eye calibration to the case where one of the sensors cannot observe the scale of its translational motion (e.g., a monocular camera observing an unmapped environment). We prove that our technique provides a certifiably globally optimal solution to both the known- and unknown-scale variants of hand-eye calibration, provided that the measurement noise is bounded. Herein, we focus on the theoretical aspects of the problem, show the tightness and stability of our convex relaxation, and demonstrate the optimality and speed of our algorithm through experiments with synthetic data.
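For the rotational part alone, the hand-eye constraint R_A X = X R_B is linear in the entries of X, which permits a simple baseline: stack the constraints, take the null vector, and project back onto SO(3). This is only a naive reference point — not the paper's convex relaxation, which additionally certifies global optimality and handles the unknown-scale translation:

```python
import numpy as np

def handeye_rotation(RAs, RBs):
    """Solve R_A X = X R_B for a rotation X from rotation pairs.

    Using column-major vectorization, each pair contributes
    (I (x) R_A - R_B^T (x) I) vec(X) = 0; the null vector of the
    stacked system is reshaped and projected onto SO(3).
    """
    M = np.vstack([np.kron(np.eye(3), RA) - np.kron(RB.T, np.eye(3))
                   for RA, RB in zip(RAs, RBs)])
    _, _, Vt = np.linalg.svd(M)
    X = Vt[-1].reshape(3, 3, order="F")   # column-major un-vectorize
    U, _, Vt2 = np.linalg.svd(X)          # nearest orthogonal matrix
    X = U @ Vt2
    if np.linalg.det(X) < 0:              # fix the sign ambiguity
        X = -X
    return X
```

With two or more generic rotation pairs the null space is one-dimensional, so the smallest singular vector recovers X up to scale and sign, both of which the SO(3) projection removes.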
A Hybrid Approach To Hierarchical Density-based Cluster Selection
Claudia Malzer, M. Baum
Pub Date: 2019-11-06 | DOI: 10.1109/MFI49285.2020.9235263
HDBSCAN is a density-based clustering algorithm that constructs a cluster hierarchy tree and then uses a specific stability measure to extract flat clusters from it. We show how applying an additional threshold value can yield a combination of DBSCAN* and HDBSCAN clusters, and we demonstrate potential benefits of this hybrid approach when clustering data of variable density. In particular, our approach is useful in scenarios that require a low minimum cluster size but should avoid an abundance of micro-clusters in high-density regions. The method can be applied directly to HDBSCAN's tree of cluster candidates, requires no modifications to the hierarchy itself, and can easily be integrated into existing HDBSCAN implementations.