Title: Comparison of Confidence Sets Designs for Various Degrees of Knowledge
Authors: Jiří Ajgl, O. Straka
Pub Date: 2021-11-01 | DOI: 10.23919/fusion49465.2021.9626991
Abstract: Confidence sets are random sets constructed so that the probability that they contain the estimated parameter achieves a chosen level. This paper deals with combining information from two estimates and discusses several designs with respect to various degrees of knowledge of the joint probability density function. Namely, designs by fusion, intersection and union are considered for an unknown joint density, a known Gaussian joint density, and a Gaussian joint density with unknown cross-covariance. Evaluation criteria are proposed and the confidence sets are compared using a simple numerical example.
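The fusion-versus-intersection distinction above can be illustrated with a minimal 1-D Monte Carlo sketch (not the paper's multivariate designs; the scalar Gaussian setup, independence assumption and 95% level are illustrative choices): intersecting two individual 95% intervals covers the parameter only when both do, so its coverage falls below the nominal level, while an interval built around the fused estimate stays at 95%.

```python
import numpy as np

def confidence_interval(est, var, z95=1.96):
    """95% confidence interval for a scalar Gaussian estimate."""
    h = z95 * np.sqrt(var)
    return est - h, est + h

def fuse(est1, var1, est2, var2):
    """LMMSE fusion of two independent scalar estimates."""
    w = var2 / (var1 + var2)
    return w * est1 + (1 - w) * est2, var1 * var2 / (var1 + var2)

rng = np.random.default_rng(0)
theta = 0.0                       # true parameter
n, hits_int, hits_fus = 20000, 0, 0
for _ in range(n):
    x1 = theta + rng.normal(0.0, 1.0)      # estimate 1, variance 1
    x2 = theta + rng.normal(0.0, 1.0)      # estimate 2, variance 1, independent
    lo1, hi1 = confidence_interval(x1, 1.0)
    lo2, hi2 = confidence_interval(x2, 1.0)
    # design by intersection: keep the overlap of the two 95% intervals
    lo, hi = max(lo1, lo2), min(hi1, hi2)
    hits_int += (lo <= theta <= hi)
    # design by fusion: 95% interval around the fused estimate
    ef, vf = fuse(x1, 1.0, x2, 1.0)
    lof, hif = confidence_interval(ef, vf)
    hits_fus += (lof <= theta <= hif)
```

Under independence the intersection covers at roughly 0.95² ≈ 0.90, so its level must be re-chosen if 95% coverage is required — one reason the knowledge of the joint density matters.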
Title: Policy Rollout Action Selection with Knowledge Gradient for Sensor Path Planning
Authors: Thore Gerlach, Folker Hoffmann, A. Charlish
Pub Date: 2021-11-01 | DOI: 10.23919/fusion49465.2021.9626874
Abstract: This paper considers the problem of finding the best action in a policy rollout algorithm. Policy rollout is an online computation method used in approximate dynamic programming. We applied two different versions of the knowledge gradient (KG) policy to a sensor path planning problem. The goal of this problem is to localize an emitter using only bearing measurements. To the authors' knowledge, this was the first time the KG was applied in a policy rollout context. The performance of the KG policy was found to be comparable with methods used in prior work while also having a potentially wider applicability.
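As background for the abstract above, the standard knowledge-gradient score for independent normal beliefs can be sketched as follows (a generic textbook form, not the paper's rollout-specific variants; the means, variances and noise level are made-up inputs): each alternative is scored by the expected improvement of the best posterior mean after one more measurement, and the highest-scoring alternative is sampled.

```python
import math
import numpy as np

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def knowledge_gradient(mu, sigma2, noise2):
    """KG value of one measurement of each alternative (independent normal beliefs)."""
    mu, sigma2 = np.asarray(mu, float), np.asarray(sigma2, float)
    sigma_tilde = sigma2 / np.sqrt(sigma2 + noise2)  # predictive change in the mean
    kg = np.empty_like(mu)
    for i in range(len(mu)):
        best_other = np.max(np.delete(mu, i))
        z = -abs(mu[i] - best_other) / sigma_tilde[i]
        kg[i] = sigma_tilde[i] * (z * norm_cdf(z) + norm_pdf(z))
    return kg

# three candidate actions: a confident leader, an uncertain runner-up, a poor arm
kg = knowledge_gradient(mu=[1.0, 0.9, 0.0], sigma2=[0.01, 1.0, 0.01], noise2=0.1)
best = int(np.argmax(kg))   # 1: measuring the uncertain runner-up is most valuable
```

The design intuition is exactly what makes KG attractive for rollout action selection: it allocates simulation effort to actions whose value estimate is both competitive and uncertain.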
Title: Recursive LMMSE Sequential Fusion with Multi-Radar Measurements for Target Tracking
Authors: Donglin Zhang, Z. Duan
Pub Date: 2021-11-01 | DOI: 10.23919/fusion49465.2021.9626993
Abstract: By simply stacking all converted measurements, recursive LMMSE (linear minimum mean square error) filtering for a single radar has been extended to centralized fusion with multiple radars. To further improve the performance of LMMSE centralized fusion, [1] ranks all scalar measurements from multiple radars dimension by dimension and then recombines them for LMMSE filtering. However, due to the inherent shortcomings of centralized fusion, such schemes have potential limitations in practical applications. In this paper, we first develop an information filtering form of the recursive LMMSE filter by equivalent transformation, which avoids inverting the innovation covariance. Then, a recursive LMMSE sequential fusion with multi-radar measurements is presented based on this information filter. The sequential fusion is theoretically optimal in the sense that it is equivalent to LMMSE centralized fusion. Numerical examples show that recursive LMMSE sequential fusion with recombined multi-radar measurements performs better in terms of estimation accuracy.
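The equivalence claimed above can be checked numerically with the standard linear-Gaussian identities (a toy 2-state, two-radar setup with made-up matrices, not the paper's converted-measurement models): a single stacked Kalman update, which inverts the full innovation covariance, matches a sequential information-form update in which each radar only contributes its information terms.

```python
import numpy as np

x_prior = np.array([1.0, -0.5])
P_prior = np.array([[2.0, 0.3],
                    [0.3, 1.0]])
# two radars with linear(ised) models z_i = H_i x + v_i, v_i ~ (0, R_i)
H = [np.array([[1.0, 0.0]]), np.array([[0.5, 1.0]])]
R = [np.array([[0.4]]), np.array([[0.2]])]
z = [np.array([1.1]), np.array([0.2])]

# centralized fusion: stack measurements, one Kalman update (inverts S = H P H' + R)
Hs = np.vstack(H)
Rs = np.block([[R[0], np.zeros((1, 1))], [np.zeros((1, 1)), R[1]]])
zs = np.hstack(z)
S = Hs @ P_prior @ Hs.T + Rs
K = P_prior @ Hs.T @ np.linalg.inv(S)
x_cent = x_prior + K @ (zs - Hs @ x_prior)

# sequential fusion in information form: no innovation-covariance inverse,
# each radar adds only H' R^{-1} H and H' R^{-1} z
Y = np.linalg.inv(P_prior)      # information matrix
y = Y @ x_prior                 # information vector
for Hi, Ri, zi in zip(H, R, z):
    Ri_inv = np.linalg.inv(Ri)  # small per-sensor inverse
    Y = Y + Hi.T @ Ri_inv @ Hi
    y = y + Hi.T @ Ri_inv @ zi
x_seq = np.linalg.solve(Y, y)   # recover the state estimate
```

Processing sensors one at a time this way is what makes the sequential scheme attractive: per-sensor work involves only small matrices, yet the final estimate equals the centralized one.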
Title: Animal Tracking within a Formation of Drones
Authors: J. T. Marcos, S. Utete
Pub Date: 2021-11-01 | DOI: 10.23919/fusion49465.2021.9626844
Abstract: In this study, we develop a distributed system that can be used by unmanned aerial vehicles (UAVs) or drones for single-animal tracking in terrestrial settings. The system involves a video object tracking (VOT) solution and a drone formation. The proposed VOT solution is based on the particle filter (PF) with two measurement providers: a colour image segmentation (CIS) approach and a machine learning (ML) technique. They are switched based on the structural similarity (SSIM) index between the initial and the current target appearances to mitigate the limitation of computational resources of civilian drones, and to ensure good tracking performance. At first, the deep learning object detector You Only Look Once version three (YOLOv3) is used as the second measurement provider. The proposed VOT solution has been tested on wildlife footage recorded by drones (and obtained from an animal behaviour group). The tests demonstrate amongst other results that the proposed VOT solution is more efficient when YOLOv3 is replaced by other methods such as boosting and channel and spatial reliability tracking (CSRT). The results suggest the utility of the proposed VOT solution in single-animal tracking with cooperative drones for wildlife preservation.
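The SSIM-based provider switch described above can be sketched as follows (an illustrative global SSIM over whole patches rather than the usual windowed SSIM, and a made-up threshold; the actual CIS and ML providers are stubbed out as labels): while the current target patch still resembles the initial appearance, the cheap provider is used, and the expensive detector is invoked only when the appearance drifts.

```python
import numpy as np

def ssim(a, b, L=255.0):
    """Global SSIM between two grayscale patches (no sliding window)."""
    a, b = a.astype(float), b.astype(float)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

def pick_provider(initial_patch, current_patch, thresh=0.6):
    """Cheap CIS while appearance matches the initial target; else the ML detector."""
    return "CIS" if ssim(initial_patch, current_patch) >= thresh else "ML"

rng = np.random.default_rng(2)
init = rng.integers(0, 256, (32, 32))
same = pick_provider(init, init)          # identical appearance -> cheap provider
changed = pick_provider(init, 255 - init) # inverted appearance -> ML detector
```

Gating the heavy detector this way is what keeps the computational load compatible with a civilian drone's onboard resources.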
Title: Deep Learning for Financial Time Series Forecast Fusion and Optimal Portfolio Rebalancing
Authors: Siddeeq Laher, A. Paskaramoorthy, Terence L van Zyl
Pub Date: 2021-11-01 | DOI: 10.23919/fusion49465.2021.9626945
Abstract: Portfolio selection is complicated by the difficulty of forecasting financial time series and the sensitivity of portfolio optimisers to forecasting errors. To address these issues, a portfolio management model is proposed that makes use of deep learning models for weekly financial time series forecasting of returns. Our model uses a late fusion of an ensemble of forecast models and modifies the standard mean-variance optimiser to account for transaction costs, making it suitable for multi-period trading. Our empirical results show that our portfolio management tool outperforms the equally-weighted portfolio benchmark and the buy-and-hold strategy, using both Long Short-Term Memory and Gated Recurrent Unit forecasts. Although the portfolios are profitable, they are also sub-optimal in terms of their risk to reward ratio. Therefore, greater forecasting accuracy is necessary to construct truly optimal portfolios.
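One simple way to make a mean-variance optimiser transaction-cost aware, as the abstract describes, is to add a penalty on deviation from the current weights (a minimal unconstrained sketch with a quadratic penalty and made-up forecasts; the paper's exact cost model and constraints are not specified here). With that penalty the optimum has a closed form, and a higher trading cost pulls the new weights toward the previous allocation.

```python
import numpy as np

def rebalance(mu, Sigma, w_prev, gamma=5.0, c=1.0):
    """Maximise mu'w - (gamma/2) w'Sigma w - (c/2)||w - w_prev||^2.
    Setting the gradient to zero gives (gamma*Sigma + c*I) w = mu + c*w_prev."""
    n = len(mu)
    return np.linalg.solve(gamma * Sigma + c * np.eye(n), mu + c * w_prev)

mu = np.array([0.02, 0.05, 0.01])       # forecast weekly returns (illustrative)
Sigma = np.diag([0.04, 0.09, 0.01])     # forecast covariance (illustrative)
w_prev = np.array([1 / 3] * 3)          # current holdings
w_lo = rebalance(mu, Sigma, w_prev, c=0.1)   # cheap trading: chase the forecast
w_hi = rebalance(mu, Sigma, w_prev, c=10.0)  # costly trading: stay near w_prev
```

The penalty term is what makes the optimiser suitable for multi-period use: turnover, and hence cumulative cost, is explicitly traded off against expected return.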
Title: Making Sense of It All: Measurement Cluster Sequencing for Enhanced Situational Awareness with Ubiquitous Sensing
Authors: Varun K. Garg, Brooks P. Saunders, T. Wickramarathne
Pub Date: 2021-11-01 | DOI: 10.23919/fusion49465.2021.9626851
Abstract: Situational awareness methods aim to identify and map what is happening in an operational environment in terms of operational concepts that define certain decision-making contexts. The underlying assumption here is that an appropriate decision-making context is either known or can be identified a priori for accurately mapping incoming evidence. However, in many complex and unstructured operational environments where situational awareness systems are most useful (e.g., asymmetric battlegrounds, urban reconnaissance), the decision-making context is neither known a priori nor is it easy to determine by, say, subject matter experts. This paper presents a data-driven approach for gaining insights into the decision-making context via judicious processing of ubiquitous soft (i.e., human-based) and hard (e.g., physics-based) data streams generated by voluntarily participating mobile sensors traversing the operational environment. In particular, by using spectral clustering in tandem with variable-length sequence decoding methods, ubiquitous data streams are clustered and then processed for early identification of specific scenarios of interest (that may have generated the sensor measurements). This enables a decision-maker to understand emerging situations in the operational environment, set the correct decision-making context, and proactively identify what information will be most relevant to reducing the associated uncertainty. Our approach is illustrated via a simulated example that provides insights into its behavior, performance and sensitivity to parameters.
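The first stage of the pipeline above, spectral clustering of measurements, can be sketched self-containedly for the two-cluster case (an RBF affinity and a Fiedler-vector sign split on synthetic 2-D measurements; the paper's affinity construction and the downstream sequence-decoding stage are not reproduced here):

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Two-way spectral clustering: sign of the Fiedler vector of the
    normalised graph Laplacian built from an RBF affinity matrix."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-d2 / (2.0 * sigma ** 2))                  # RBF affinities
    np.fill_diagonal(W, 0.0)
    D = W.sum(1)                                          # node degrees
    L = np.eye(len(X)) - W / np.sqrt(D)[:, None] / np.sqrt(D)[None, :]
    vals, vecs = np.linalg.eigh(L)                        # ascending eigenvalues
    return (vecs[:, 1] > 0).astype(int)                   # Fiedler-vector sign split

rng = np.random.default_rng(3)
A = rng.normal(0.0, 0.1, (10, 2))   # measurements from scenario A
B = rng.normal(5.0, 0.1, (10, 2))   # measurements from scenario B
labels = spectral_bipartition(np.vstack([A, B]))
```

The resulting label sequence over time is what the sequence-decoding stage would then match against scenario templates for early identification.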
Title: A Diffusion-Based Distributed Time Difference Of Arrival Source Positioning
Authors: Asaf Gendler, S. Peleg, A. Amar
Pub Date: 2021-11-01 | DOI: 10.23919/fusion49465.2021.9627001
Abstract: We propose a distributed time difference of arrival method for estimating a source position using a multi-agent network. By exchanging information with the agents in its local neighborhood, each agent estimates the source position by minimizing a local cost function obtained by linearizing the local time difference of arrival measurements. The local minimization follows the diffusion approach: in the first step, each agent forms a local estimate by combining the weighted source position estimates received from its neighbors, and then adapts it along the gradient of its local cost function. We propose adaptive weights that are time-varying and depend on the fit errors of the agents in the network. Numerical results and real-data experiments demonstrate that this approach produces position estimates close to those of the centralized method and to the theoretical Cramér-Rao lower bound.
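The combine-then-adapt diffusion step described above can be sketched on a generic linearised least-squares problem standing in for the linearised TDOA equations (ring topology, inverse-fit-error combination weights, and all model matrices are made-up illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(4)
truth = np.array([2.0, -1.0])             # stand-in for the source position
n_agents, mu_step = 5, 0.05
# each agent has a local linearised measurement model A_i x ~= b_i
A = [rng.normal(size=(3, 2)) for _ in range(n_agents)]
b = [Ai @ truth + rng.normal(0.0, 0.01, 3) for Ai in A]
x = [np.zeros(2) for _ in range(n_agents)]
neighbors = [[i, (i + 1) % n_agents, (i - 1) % n_agents] for i in range(n_agents)]

for _ in range(400):
    # combine: weights inversely proportional to each neighbour's current fit error
    errs = [np.linalg.norm(A[j] @ x[j] - b[j]) for j in range(n_agents)]
    x_comb = []
    for i in range(n_agents):
        w = np.array([1.0 / (1e-6 + errs[j]) for j in neighbors[i]])
        w /= w.sum()
        x_comb.append(sum(wj * x[j] for wj, j in zip(w, neighbors[i])))
    # adapt: gradient step on the local least-squares cost
    x = [xc - mu_step * A[i].T @ (A[i] @ xc - b[i]) for i, xc in enumerate(x_comb)]
```

Weighting neighbours by their fit errors is the adaptive element: agents whose local model currently explains their measurements poorly contribute less to their neighbours' combined estimates.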
Title: Supporting Agile User Fusion Analytics through Human-Agent Knowledge Fusion
Authors: Dave Braines, A. Preece, Colin Roberts, E. Blasch
Pub Date: 2021-11-01 | DOI: 10.23919/fusion49465.2021.9627072
Abstract: For many types of data and information fusion, input from human users is essential, both in defining or adjusting the processing steps and in interacting with, understanding, and communicating the results. In many cases, information fusion should increase understanding for the human user(s) working as part of a team of interacting agents, taking into account the needs of each user type and the factors that might affect individual and team performance. This paper focuses on the decision support that could be provided to users by presenting a candidate environment to support comprehensive information fusion and exchange in support of human-agent knowledge fusion (HAKF). The paper outlines two distinct HAKF use cases, using Cogni-sketch: (1) foraging data for open-source intelligence analysis, and (2) sensemaking fusion from sensors and machine agents. In the first use case, a traditional open-source intelligence gathering exercise demonstrates information gathered from multiple sources and maps it to a common model of sensemaking. The second use case shows machine-led activities including fusion of machine vision and object identification, and the utilization of human-led semantic definitions of events and situations in support of sensemaking.
Title: Resilient Collaborative All-source Navigation
Authors: Jonathon S. Gipson, R. Leishman
Pub Date: 2021-11-01 | DOI: 10.23919/fusion49465.2021.9626892
Abstract: The Autonomous and Resilient Management of All-source Sensors with Stable Observability Monitoring (ARMAS-SOM) framework fuses collaborative all-source sensor information in a resilient manner, with fault detection, exclusion, and integrity solutions recognizable to a Global Navigation Satellite System (GNSS) user. The framework uses a multi-filter residual monitoring approach for fault detection and exclusion, augmented with an additional "observability" Extended Kalman Filter (EKF) sub-layer for resilience. We monitor the a posteriori state covariances in this sub-layer to provide intrinsic awareness of when the navigation state observability assumptions required for integrity are in danger. The framework leverages this to selectively augment with offboard information and preserve resilience. By maintaining split parallel collaborative and proprioceptive frameworks and employing a novel "stingy collaboration" technique, we are able to maximize efficient use of network resources, limit the propagation of unknown corruption to a single donor, prioritize high-fidelity donors, and maintain consistent collaborative navigation without fear of double-counting, in a scalable processing footprint. Lastly, we preserve the ability to return to autonomy, and the same intrinsic awareness notifies the user when it is safe to do so.
Title: Deep Likelihood Learning for 2-D Orientation Estimation Using a Fourier Filter
Authors: F. Pfaff, Kailai Li, U. Hanebeck
Pub Date: 2021-11-01 | DOI: 10.23919/fusion49465.2021.9627032
Abstract: Filters for circular manifolds are well suited to estimate the orientation of 2-D objects over time. However, manually deriving measurement models for camera data is generally infeasible. Therefore, we propose loss terms that help train neural networks to output Fourier coefficients for a trigonometric polynomial. The square of the trigonometric polynomial then constitutes the likelihood function used in the filter. Particular focus is put on ensuring that rotational symmetries are properly considered in the likelihood. In an evaluation, we train a network with one of the loss terms on artificial data. The filter shows good estimation quality. While the uncertainty of the filter does not perfectly align with the actual errors, the expected and actual errors are clearly correlated.
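The likelihood construction described above can be sketched directly (illustrative hand-picked Fourier coefficients in place of network outputs): squaring a trigonometric polynomial guarantees a nonnegative likelihood, and restricting the coefficients to even harmonics bakes a 2-fold rotational symmetry into it.

```python
import numpy as np

def trig_poly(theta, a0, a, b):
    """p(theta) = a0 + sum_k a_k cos(k*theta) + b_k sin(k*theta)."""
    k = np.arange(1, len(a) + 1)
    return a0 + np.cos(np.outer(theta, k)) @ a + np.sin(np.outer(theta, k)) @ b

theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
# only even harmonics -> likelihood of an object with 2-fold rotational symmetry
a0, a, b = 0.5, np.array([0.0, 0.3]), np.array([0.0, -0.2])
p = trig_poly(theta, a0, a, b)
lik = p ** 2                                       # squaring ensures nonnegativity
lik = lik / (lik.sum() * (theta[1] - theta[0]))    # normalise on the grid

half = len(theta) // 2
pi_periodic = np.allclose(lik, np.roll(lik, half))  # True: symmetry respected
```

In the paper this density would serve as the measurement likelihood inside a Fourier filter on the circle; here the coefficients are fixed rather than produced by a trained network.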