Pub Date: 2022-07-04 | DOI: 10.23919/fusion49751.2022.9841321
Bianca Forkel, Hans-Joachim Wünsche
Much research has been done on the detection and tracking of paved, preferably marked, roads. Less work is available on the detection of dirt roads. The challenge is to provide a framework to track both paved roads and dirt roads. In this paper, we address the problem of developing measurement approaches that work for both kinds of roads alike. For that, we fuse LiDAR with vision: First, we present indirect measurements from a static environment model populated with LiDAR data, as well as a new approach for LiDAR measurements from a segmented point cloud. Second, we investigate different image color modes to improve the effectiveness of locating dirt road boundaries using local oriented edge detection. We demonstrate the robustness of our measurements on difficult roads by showing qualitative results from our autonomous vehicles.
Title: Combined Road Tracking for Paved Roads and Dirt Roads: LiDAR Measurements and Image Color Modes
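The local oriented edge detection mentioned in the abstract can be illustrated with a minimal sketch: project the image gradient of a chosen color channel onto the normal of an expected boundary orientation (the function name and toy image below are illustrative, not taken from the paper).

```python
import numpy as np

def oriented_edge_response(channel, theta):
    """Project the image gradient onto the normal of an edge expected to
    run at angle theta (radians); large magnitudes flag boundary pixels."""
    gy, gx = np.gradient(channel.astype(float))
    nx, ny = -np.sin(theta), np.cos(theta)  # normal to the edge direction
    return gx * nx + gy * ny

# Toy channel: a vertical brightness step, as at a road boundary
img = np.zeros((5, 6))
img[:, 3:] = 1.0
resp = oriented_edge_response(img, np.pi / 2)  # look for a vertical edge
```

A detector for a horizontal edge (`theta = 0`) gives zero response on this image, which is the point of making the filter orientation-selective.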
Pub Date: 2022-07-04 | DOI: 10.23919/fusion49751.2022.9841315
Xingchi Liu, Chenyi Lyu, Jemin George, T. Pham, L. Mihaylova
Tracking manoeuvring targets often relies on complex models with non-stationary parameters. Gaussian process (GP) based model-free methods can achieve accurate performance in a data-driven manner but face scalability challenges. To address such challenges, this paper proposes a distributed GP-based tracking approach able to learn the kernel hyperparameters in an online manner, improving tracking performance and scalability. It caters to the inherently distributed nature of sensor networks and does not require measurements to be transmitted among sensors for target state prediction. Theoretical upper confidence bounds on the tracking error are derived within the regret bound setting. Through this analysis, the tracking error per time step is upper bounded as a function of the predictive variances from local sensors. The theoretical results are supported by simulation-based ones over a case study of tracking over wireless sensor networks. With evaluation on challenging target trajectories and a comparison against state-of-the-art centralised and distributed GP approaches, numerical results demonstrate that the proposed approach achieves competitively high and robust tracking performance.
Title: A Learning Distributed Gaussian Process Approach for Target Tracking over Sensor Networks
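The upper confidence bounds in the abstract build on GP predictive variances. A generic single-sensor GP regression sketch shows where those variances come from (standard textbook equations with arbitrary kernel parameters, not the paper's distributed scheme):

```python
import numpy as np

def rbf(a, b, length=1.0, amp=1.0):
    """Squared-exponential kernel between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_tr, y_tr, x_te, noise=1e-2):
    """GP regression posterior mean and variance at the test inputs."""
    K = rbf(x_tr, x_tr) + noise * np.eye(x_tr.size)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    Ks = rbf(x_tr, x_te)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf(x_te, x_te).diagonal() - (v * v).sum(axis=0)
    return mean, var

x_tr = np.linspace(0.0, 4.0, 9)
y_tr = np.sin(x_tr)
x_te = np.array([2.0, 10.0])        # one in-sample point, one far away
mean, var = gp_predict(x_tr, y_tr, x_te)
ucb = mean + 2.0 * np.sqrt(var)     # a simple upper confidence bound
```

The variance is small near the training data and reverts to the prior far from it, which is what makes variance-based confidence bounds informative.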
Pub Date: 2022-07-04 | DOI: 10.23919/fusion49751.2022.9841383
Yanjue Song, Stijn Kindt, N. Madhu
We propose a multistage approach for enhancing speech captured by a drone-mounted microphone array. The key challenge is suppressing the drone ego-noise, which is the major source of interference in such captures. Since the location of the target is not known a priori, we first apply a UNet-based deep convolutional autoencoder (AE) individually to each microphone signal. The AE generates a time-frequency mask ∈ [0, 1] per signal, where high values correspond to time-frequency points with relatively good signal-to-noise ratios (SNRs). The masks are pooled across all microphones, and the aggregated mask is used to steer an adaptive, frequency-domain beamformer, yielding a signal with an improved SNR. This beamformer output, after being fed back to the AE, yields an improved mask, which is used for re-focussing the beamformer. This combination of AE and beamformer, which can be applied to the signals in multiple 'passes', is termed multistage beamforming. The approach is developed and evaluated on a self-collected database. For the AE, when used to steer a beamformer, a training target that preserves more speech at the cost of less noise suppression outperforms an aggressive training target that suppresses more noise at the cost of more speech distortion. This, in combination with max-pooling of the multi-channel mask, which also lets through more speech (and noise) compared with median pooling, performs best. The experiments further demonstrate that the multistage approach brings extra benefit to speech quality and intelligibility when the input SNR is at least −10 dB, and yields comprehensible outputs when the input has an SNR above −5 dB.
Title: Drone Ego-Noise Cancellation for Improved Speech Capture using Deep Convolutional Autoencoder Assisted Multistage Beamforming
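The mask pooling step compared in the abstract (max vs. median across microphones) is a one-line array operation; a sketch with random stand-ins for the AE masks (shapes and values are illustrative):

```python
import numpy as np

# Per-microphone time-frequency masks in [0, 1], as produced by the AE
# (random stand-ins: 4 microphones, 3 frequency bins, 5 frames)
masks = np.random.default_rng(0).uniform(size=(4, 3, 5))

max_pooled = masks.max(axis=0)            # permissive: keeps more speech (and noise)
median_pooled = np.median(masks, axis=0)  # stricter aggregate across channels
```

By construction the max-pooled mask dominates the median-pooled one at every time-frequency point, which is why it lets more signal through to the beamformer.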
Pub Date: 2022-07-04 | DOI: 10.23919/fusion49751.2022.9841390
Francesca Incitti, L. Snidaro
In recent years, the idea of fusing diverse types of information has often been employed to solve various deep learning tasks. Whether these concern an NLP problem or a machine vision one, the concept of using multiple inputs of the same type has been the basis of many studies. For NLP problems, combinations of different word embeddings have already been tried, achieving improvements on the most common benchmarks. Here we want to explore the combination not only of different types of input, but also of different data modalities. This is done by fusing two popular word embeddings, namely ELMo and BERT, with other inputs that embed a visual description of the analysed text. In this way, different modalities, textual and visual, are both employed to solve a textual problem, a concreteness estimation task. Multimodal feature fusion is explored here through several techniques: input redundancy, concatenation, averaging, dimensionality reduction and augmentation. By combining these techniques it is possible to generate different vector representations: the goal is to understand which feature fusion techniques yield more accurate embeddings.
Title: Multimodal feature fusion for concreteness estimation
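Several of the fusion techniques listed (concatenation, averaging, input redundancy) are simple vector operations; a sketch with random stand-ins for the embeddings (dimensions and variable names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
text_emb = rng.standard_normal(8)   # stand-in for an ELMo/BERT word vector
vis_emb = rng.standard_normal(8)    # stand-in for a visual embedding

concatenated = np.concatenate([text_emb, vis_emb])         # 16-d joint vector
averaged = (text_emb + vis_emb) / 2.0                      # stays 8-d
redundant = np.concatenate([text_emb, text_emb, vis_emb])  # input redundancy
# Dimensionality reduction (e.g. PCA) and augmentation would further
# transform such vectors before they reach the classifier.
```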
Pub Date: 2022-07-04 | DOI: 10.23919/fusion49751.2022.9841363
Karl-Magnus Dahlen, Christopher Lindberg, Masaki Yoneda, T. Ogawa
A star-convex shape based on Cartesian B-splines provides a good model for detailed extended target tracking, suited for, e.g., high-resolution automotive sensors. Motivated by real-world sensor data from traffic scenarios, we present an extended object tracking filter that (i) solves the problem of bad object initialization for contour tracking of mixed-size vehicles in a range of common traffic scenarios; and (ii) enables accurate tracking of objects, such as motorcycles, that generate detections distributed on the surface rather than on the contour. Our approach is based on star-convex Cartesian B-spline polynomials, the iterative closest point (ICP) method and the convex hull. In particular, we implement the ICP algorithm to find the translation and rotation of the contour that best fit the sensor point cloud. We show that, while the original B-spline filter with a "second-time-step" initialization procedure fails to robustly track the object, our approach performs on par with the original B-spline filter with ground truth initialization. Furthermore, for targets generating detections on the surface, we apply the convex hull algorithm to the point cloud. We show that our algorithm successfully tracks the object, while the original B-spline filter fails to robustly track the contour of a motorcycle.
Title: An Improved B-spline Extended Object Tracking Model using the Iterative Closest Point Method
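The rigid alignment inside each ICP iteration, finding the rotation and translation that best fit paired points, has a closed-form SVD (Kabsch) solution. A generic 2-D sketch (not the paper's full pipeline, which also includes correspondence search and the B-spline contour):

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t for
    paired 2-D points: the SVD (Kabsch) step used inside each ICP iteration."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known rotation and translation of a toy contour
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [2.0, 1.0]])
c, s = np.cos(0.4), np.sin(0.4)
R_true = np.array([[c, -s], [s, c]])
t_true = np.array([0.5, -1.0])
dst = src @ R_true.T + t_true
R, t = best_fit_transform(src, dst)
```

With exact correspondences, as here, one step recovers the transform; ICP alternates this step with nearest-neighbour matching when correspondences are unknown.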
Pub Date: 2022-07-04 | DOI: 10.23919/fusion49751.2022.9841257
Chenyi Lyu, Xingchi Liu, L. Mihaylova
Target tracking often relies on complex models with non-stationary parameters. The Gaussian process (GP) is a model-free method that can achieve accurate performance. However, inverting the covariance matrix poses scalability challenges. Since the covariance matrix is typically dense, direct inversion and determinant evaluation suffer from complexity cubic in the data size. This bottleneck limits the use of GPs for long-term or high-speed tracking. We present an efficient factorisation-based GP approach without any additional hyperparameters. The proposed approach reduces the computational complexity of the Cholesky decomposition by hierarchically factorising the covariance matrix into off-diagonal low-rank parts. The resulting low-rank approximate Cholesky factor also reduces the computational complexity of the inverse and determinant operations. Numerical results on offline and online tracking problems demonstrate the effectiveness of the proposed approach.
Title: Efficient Factorisation-based Gaussian Process Approaches for Online Tracking
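The role of the Cholesky factor in GP inference can be seen in a small dense example: once K = L Lᵀ is available, the solve and the log-determinant that GP likelihoods need come from triangular operations, never from forming K⁻¹ (a plain dense sketch, without the paper's hierarchical low-rank factorisation):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
K = A @ A.T + 5.0 * np.eye(5)        # a dense SPD covariance matrix
y = rng.standard_normal(5)

L = np.linalg.cholesky(K)            # K = L @ L.T
# Two triangular solves replace forming K^{-1} explicitly
x = np.linalg.solve(L.T, np.linalg.solve(L, y))
# log-determinant directly from the factor's diagonal
logdet = 2.0 * np.log(np.diag(L)).sum()
```

The paper's contribution is making the factorisation itself cheaper than cubic; the downstream use of the factor is exactly as above.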
Pub Date: 2022-07-04 | DOI: 10.23919/fusion49751.2022.9841318
Raghavendra Ramachandra, Guoqiang Li
Face biometrics has become an integral part of various security and law enforcement applications, including border control scenarios. However, face recognition systems are vulnerable to morphing attacks, and it is therefore essential to develop reliable and robust face Morphing Attack Detection (MAD) techniques. This paper presents a novel approach based on residual gradients computed from a face image's colour scale-space representation in a reference-based, or differential, set-up. The proposed method takes two facial images (one from the passport and another from a trusted device) to compute the residual gradients, which are then classified using Spectral Regression Kernel Discriminant Analysis (SRKDA) to reliably detect face morphing attacks. Extensive experiments are carried out on two different datasets to benchmark the performance of the proposed method, in particular across different morph generation methods, morphing data mediums (digital, print-scan and print-scan compression) and ageing variations. Experimental results demonstrate the improved performance of the proposed method over state-of-the-art reference-based face MAD in all evaluation protocols.
Title: Residual Colour Scale-Space Gradients for Reference-based Face Morphing Attack Detection
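The general idea of comparing gradients of two aligned images across a Gaussian scale space can be sketched as follows; this is only an illustration of the concept on grey images, not the paper's actual colour feature extraction or classifier:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_residuals(img_a, img_b, scales=(1.0, 2.0, 4.0)):
    """Residual of gradient magnitudes between two aligned grey images
    across a Gaussian scale space (an illustration of the idea only)."""
    out = []
    for s in scales:
        ga = np.hypot(*np.gradient(gaussian_filter(img_a, s)))
        gb = np.hypot(*np.gradient(gaussian_filter(img_b, s)))
        out.append(np.abs(ga - gb))
    return np.stack(out)

rng = np.random.default_rng(0)
img_a = rng.uniform(size=(16, 16))
img_b = img_a.copy()
img_b[4:8, 4:8] += 1.0               # a local manipulation
res = gradient_residuals(img_a, img_b)
```

Identical image pairs give zero residual at every scale, while a local manipulation leaves a non-zero residual the classifier can pick up.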
Pub Date: 2022-07-04 | DOI: 10.23919/fusion49751.2022.9841275
M. Herrmann, Tim Luchterhand, Charlotte Hermann, Thomas Wodtko, Jan Strohbeck, M. Buchholz
We previously presented the product multi-sensor generalized labeled multi-Bernoulli filter, a multi-object filter for centralized and distributed multi-sensor systems with a centralized estimator. It implements the Bayes parallel combination rule for generalized labeled multi-Bernoulli densities, simplifying the NP-hard multidimensional k-best assignment problem of the multi-sensor multi-object update to a polynomial-time k-shortest path problem. This way, the filter allows for efficient, parallelizable, and distributed calculation of the multi-sensor multi-object update while showing excellent performance. However, the derivation of the filter formulas relies on a well-established approximation of the fundamental multi-sensor Gaussian identity, which was inadvertently not labeled as such in our original article. On the one hand, we therefore clarify this mistake, discuss its consequences, and present a mathematically clean derivation of the filter, which, however, does not yet establish the claim of Bayes-optimality. On the other hand, we discuss implementation details and present extensive evaluations that complete the previous publication of the filter.
Title: Notes on the Product Multi-Sensor Generalized Labeled Multi-Bernoulli Filter and its Implementation
Pub Date: 2022-07-04 | DOI: 10.23919/fusion49751.2022.9841339
Jingling Li, G. Battistelli, L. Chisci, P. Wei, Lin Gao
This paper considers multitarget tracking under out-of-sequence measurements (OOSMs), i.e., when the measurements processed by the tracker may arrive out of order. To fully exploit the information provided by the sensor, OOSMs should be re-utilized rather than simply discarded, so as to improve tracking performance. To this end, this paper proposes a message passing (MP) multitarget tracking algorithm under OOSMs, where MP is adopted to perform efficient association between targets and (in-sequence and out-of-sequence) measurements. Simulation experiments show that, compared to simply discarding OOSMs, the accuracy of target number and state estimates is greatly enhanced by incorporating OOSMs, demonstrating the effectiveness of the proposed approach.
Title: Message passing multitarget tracking with out-of-sequence measurements
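The contrast the abstract draws, discarding an OOSM versus incorporating it, can be made concrete with the simplest possible baseline: buffer the measurements and re-run a filter with the delayed measurement slotted into its correct time order. This is a naive 1-D Kalman-filter illustration of the OOSM problem, not the paper's message-passing algorithm; all values are made up.

```python
def kf_run(measurements, x0=0.0, p0=1.0, q=0.1, r=0.5):
    """1-D random-walk Kalman filter over (timestamp, value) pairs,
    processed in timestamp order."""
    x, p = x0, p0
    for _, z in sorted(measurements):
        p += q                 # predict: add random-walk process noise
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # measurement update
        p *= 1.0 - k
    return x, p

# Three in-sequence measurements, plus one delayed (out-of-sequence) one
buffer = [(0, 1.0), (1, 1.2), (3, 1.1)]
oosm = (2, 1.15)

x_discard, _ = kf_run(buffer)          # baseline: drop the OOSM
x_rerun, _ = kf_run(buffer + [oosm])   # re-filter with the OOSM slotted in
```

Re-running from a buffer is exact but expensive, which is why dedicated OOSM algorithms such as the paper's MP approach exist.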
Pub Date: 2022-07-04 | DOI: 10.23919/fusion49751.2022.9841232
Jean-Francois Bariant, Llarina Lobo Palacios, Julia Granitzka, Hanne Groener
This paper aims at addressing specific issues of extended target tracking. First, we propose a method to accurately model the origin of the measurements on the surface of the target. This is achieved by removing the usual hypothesis of independence in the association of measurements to possible measurement sources, allowing us to assume that a certain number of measurements originate from specific sources. Second, a Gaussian distribution is a poor representation of the length of a target. We therefore develop a method that discretizes the length to estimate its distribution without the Gaussian assumption, while avoiding the computational burden of multi-hypothesis tracking for each target. The effectiveness of the implementation is shown on simulated as well as real radar data.
Title: Extended Target Tracking with Constrained PMHT
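Estimating a distribution over a discretized length without a Gaussian assumption amounts to a histogram (grid) filter: keep a probability per length bin and update it by Bayes' rule. A minimal sketch with made-up grid, noise model and measurements (not the paper's constrained-PMHT formulation):

```python
import numpy as np

# Discretised target-length grid (metres) with a uniform prior; the belief
# is a probability per bin, so no Gaussian shape is ever imposed.
lengths = np.linspace(2.0, 20.0, 91)          # 0.2 m resolution
belief = np.full(lengths.size, 1.0 / lengths.size)

def update(belief, measured, sigma=1.0):
    """Bayes update of the binned belief with a noisy extent measurement."""
    likelihood = np.exp(-0.5 * ((lengths - measured) / sigma) ** 2)
    posterior = belief * likelihood
    return posterior / posterior.sum()          # renormalise

for z in (12.0, 11.5, 12.3):                  # noisy extent measurements
    belief = update(belief, z)

map_length = lengths[np.argmax(belief)]       # MAP estimate from the grid
```

The per-bin belief can represent skewed or multimodal length distributions at the cost of one array per target, which is far cheaper than spawning a track hypothesis per length.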