The proof assistant Lean has support for abstract polynomials, but this is not necessarily the same as support for computations with polynomials. Lean is also a functional programming language, so it should be possible to implement computational polynomials in Lean. It turns out not to be as easy as the naive author thought.
{"title":"First steps towards Computational Polynomials in Lean","authors":"James Harold Davenport","doi":"arxiv-2408.04564","DOIUrl":"https://doi.org/arxiv-2408.04564","url":null,"abstract":"The proof assistant Lean has support for abstract polynomials, but this is\u0000not necessarily the same as support for computations with polynomials. Lean is\u0000also a functional programming language, so it should be possible to implement\u0000computational polynomials in Lean. It turns out not to be as easy as the naive\u0000author thought.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"58 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141938736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Maple implementation of partitioned matrices is described. A recursive block data structure is used, with all operations preserving the block abstraction. These include constructor functions, ring operations such as addition and product, and inversion. The package is demonstrated by calculating the PLU factorization of a block matrix.
{"title":"An Abstraction-Preserving Block Matrix Implementation in Maple","authors":"David J. Jeffrey, Stephen M. Watt","doi":"arxiv-2408.02112","DOIUrl":"https://doi.org/arxiv-2408.02112","url":null,"abstract":"A Maple implementation of partitioned matrices is described. A recursive\u0000block data structure is used, with all operations preserving the block\u0000abstraction. These include constructor functions, ring operations such as\u0000addition and product, and inversion. The package is demonstrated by calculating\u0000the PLU factorization of a block matrix.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"93 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141938853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This extended abstract accompanies an invited talk at CASC 2024, which surveys recent developments in Real Quantifier Elimination (QE) and Cylindrical Algebraic Decomposition (CAD). After introducing these concepts, we first consider adaptations of CAD inspired by computational logic, in particular the algorithms which underpin modern SAT solvers. CAD theory has found use alongside these via the Satisfiability Modulo Theories (SMT) paradigm, while the ideas behind SAT/SMT have led to new algorithms for Real QE. Second, we consider the optimisation of CAD through the use of Machine Learning (ML). The choice of CAD variable ordering has become a key case study for the use of ML to tune algorithms in computer algebra. We also consider how explainable AI techniques might give insight for improved computer algebra software without any reliance on ML in the final code.
{"title":"Recent Developments in Real Quantifier Elimination and Cylindrical Algebraic Decomposition","authors":"Matthew England","doi":"arxiv-2407.19781","DOIUrl":"https://doi.org/arxiv-2407.19781","url":null,"abstract":"This extended abstract accompanies an invited talk at CASC 2024, which\u0000surveys recent developments in Real Quantifier Elimination (QE) and Cylindrical\u0000Algebraic Decomposition (CAD). After introducing these concepts we will first\u0000consider adaptations of CAD inspired by computational logic, in particular the\u0000algorithms which underpin modern SAT solvers. CAD theory has found use in\u0000collaboration with these via the Satisfiability Modulo Theory (SMT) paradigm;\u0000while the ideas behind SAT/SMT have led to new algorithms for Real QE. Second\u0000we will consider the optimisation of CAD through the use of Machine Learning\u0000(ML). The choice of CAD variable ordering has become a key case study for the\u0000use of ML to tune algorithms in computer algebra. We will also consider how\u0000explainable AI techniques might give insight for improved computer algebra\u0000software without any reliance on ML in the final code.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141863953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Morphic sequences form a natural class of infinite sequences, typically defined as the coding of a fixed point of a morphism. Different morphisms and codings may yield the same morphic sequence. This paper investigates how to prove that two such representations of a morphic sequence by morphisms represent the same sequence. In particular, we focus on the smallest representations of the subsequences of the binary Fibonacci sequence obtained by only taking the even or odd elements. The proofs we give are induction proofs of several properties simultaneously, and are typically found fully automatically by a tool that we developed.
{"title":"Equality of morphic sequences","authors":"Hans Zantema","doi":"arxiv-2407.15721","DOIUrl":"https://doi.org/arxiv-2407.15721","url":null,"abstract":"Morphic sequences form a natural class of infinite sequences, typically\u0000defined as the coding of a fixed point of a morphism. Different morphisms and\u0000codings may yield the same morphic sequence. This paper investigates how to\u0000prove that two such representations of a morphic sequence by morphisms\u0000represent the same sequence. In particular, we focus on the smallest\u0000representations of the subsequences of the binary Fibonacci sequence obtained\u0000by only taking the even or odd elements. The proofs we give are induction\u0000proofs of several properties simultaneously, and are typically found fully\u0000automatically by a tool that we developed.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141779710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstraction is key to human and artificial intelligence as it allows one to see common structure in otherwise distinct objects or situations, and as such it is a key element for generality in AI. Anti-unification (or generalization) is *the* part of theoretical computer science and AI studying abstraction. It has been successfully applied to various AI-related problems, most importantly inductive logic programming. To date, anti-unification has been studied only from a syntactic perspective in the literature. The purpose of this paper is to initiate an algebraic (i.e., semantic) theory of anti-unification within general algebras, motivated by recent applications to similarity and analogical proportions.
{"title":"Algebraic anti-unification","authors":"Christian Antić","doi":"arxiv-2407.15510","DOIUrl":"https://doi.org/arxiv-2407.15510","url":null,"abstract":"Abstraction is key to human and artificial intelligence as it allows one to\u0000see common structure in otherwise distinct objects or situations and as such it\u0000is a key element for generality in AI. Anti-unification (or generalization) is\u0000textit{the} part of theoretical computer science and AI studying abstraction.\u0000It has been successfully applied to various AI-related problems, most\u0000importantly inductive logic programming. Up to this date, anti-unification is\u0000studied only from a syntactic perspective in the literature. The purpose of\u0000this paper is to initiate an algebraic (i.e. semantic) theory of\u0000anti-unification within general algebras. This is motivated by recent\u0000applications to similarity and analogical proportions.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"38 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141779711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recent advances in Hierarchical Multi-label Classification (HMC), particularly neurosymbolic approaches, have demonstrated improved consistency and accuracy by enforcing constraints on a neural model during training. However, such work assumes that these constraints exist a priori. In this paper, we relax this strong assumption and present an approach based on Error Detection Rules (EDR), which allow for learning explainable rules about the failure modes of machine learning models. We show that these rules are not only effective in detecting when a machine learning classifier has made an error but can also be leveraged as constraints for HMC, thereby allowing the recovery of explainable constraints even when none are provided. We show that our approach is effective in detecting machine learning errors and recovering constraints, is noise tolerant, and can serve as a source of knowledge for neurosymbolic models on multiple datasets, including a newly introduced military vehicle recognition dataset.
{"title":"Error Detection and Constraint Recovery in Hierarchical Multi-Label Classification without Prior Knowledge","authors":"Joshua Shay Kricheli, Khoa Vo, Aniruddha Datta, Spencer Ozgur, Paulo Shakarian","doi":"arxiv-2407.15192","DOIUrl":"https://doi.org/arxiv-2407.15192","url":null,"abstract":"Recent advances in Hierarchical Multi-label Classification (HMC),\u0000particularly neurosymbolic-based approaches, have demonstrated improved\u0000consistency and accuracy by enforcing constraints on a neural model during\u0000training. However, such work assumes the existence of such constraints\u0000a-priori. In this paper, we relax this strong assumption and present an\u0000approach based on Error Detection Rules (EDR) that allow for learning\u0000explainable rules about the failure modes of machine learning models. We show\u0000that these rules are not only effective in detecting when a machine learning\u0000classifier has made an error but also can be leveraged as constraints for HMC,\u0000thereby allowing the recovery of explainable constraints even if they are not\u0000provided. We show that our approach is effective in detecting machine learning\u0000errors and recovering constraints, is noise tolerant, and can function as a\u0000source of knowledge for neurosymbolic models on multiple datasets, including a\u0000newly introduced military vehicle recognition dataset.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141779712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large language models (LLMs) are very performant connectionist systems, but do they exhibit more compositionality? More importantly, is that part of why they perform so well? We present empirical analyses across four LLM families (12 models) and three task categories, including a novel task introduced below. Our findings reveal a nuanced relationship in learning of compositional strategies by LLMs -- while scaling enhances compositional abilities, instruction tuning often has a reverse effect. Such disparity brings forth some open issues regarding the development and improvement of large language models in alignment with human cognitive capacities.
{"title":"From Words to Worlds: Compositionality for Cognitive Architectures","authors":"Ruchira Dhar, Anders Søgaard","doi":"arxiv-2407.13419","DOIUrl":"https://doi.org/arxiv-2407.13419","url":null,"abstract":"Large language models (LLMs) are very performant connectionist systems, but\u0000do they exhibit more compositionality? More importantly, is that part of why\u0000they perform so well? We present empirical analyses across four LLM families\u0000(12 models) and three task categories, including a novel task introduced below.\u0000Our findings reveal a nuanced relationship in learning of compositional\u0000strategies by LLMs -- while scaling enhances compositional abilities,\u0000instruction tuning often has a reverse effect. Such disparity brings forth some\u0000open issues regarding the development and improvement of large language models\u0000in alignment with human cognitive capacities.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"29 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141740780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task-oriented dialogues must maintain consistency both within the dialogue itself, ensuring logical coherence across turns, and with the conversational domain, accurately reflecting external knowledge. We propose to conceptualize dialogue consistency as a Constraint Satisfaction Problem (CSP), wherein variables represent segments of the dialogue referencing the conversational domain, and constraints among variables reflect dialogue properties, including linguistic, conversational, and domain-based aspects. To demonstrate the feasibility of the approach, we use a CSP solver to detect inconsistencies in dialogues re-lexicalized by an LLM. Our findings indicate that (i) CSP is effective at detecting dialogue inconsistencies, and (ii) consistent dialogue re-lexicalization is challenging for state-of-the-art LLMs, which achieve only a 0.15 accuracy rate when compared to a CSP solver. Furthermore, an ablation study reveals that constraints derived from domain knowledge are the hardest to respect. We argue that CSP captures core properties of dialogue consistency that have been poorly considered by approaches based on component pipelines.
{"title":"Evaluating Task-Oriented Dialogue Consistency through Constraint Satisfaction","authors":"Tiziano Labruna, Bernardo Magnini","doi":"arxiv-2407.11857","DOIUrl":"https://doi.org/arxiv-2407.11857","url":null,"abstract":"Task-oriented dialogues must maintain consistency both within the dialogue\u0000itself, ensuring logical coherence across turns, and with the conversational\u0000domain, accurately reflecting external knowledge. We propose to conceptualize\u0000dialogue consistency as a Constraint Satisfaction Problem (CSP), wherein\u0000variables represent segments of the dialogue referencing the conversational\u0000domain, and constraints among variables reflect dialogue properties, including\u0000linguistic, conversational, and domain-based aspects. To demonstrate the\u0000feasibility of the approach, we utilize a CSP solver to detect inconsistencies\u0000in dialogues re-lexicalized by an LLM. Our findings indicate that: (i) CSP is\u0000effective to detect dialogue inconsistencies; and (ii) consistent dialogue\u0000re-lexicalization is challenging for state-of-the-art LLMs, achieving only a\u00000.15 accuracy rate when compared to a CSP solver. Furthermore, through an\u0000ablation study, we reveal that constraints derived from domain knowledge pose\u0000the greatest difficulty in being respected. 
We argue that CSP captures core\u0000properties of dialogue consistency that have been poorly considered by\u0000approaches based on component pipelines.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141721172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
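A toy rendering of the CSP framing: dialogue segments become variables with values drawn from the conversational domain, and domain knowledge becomes constraints that a candidate re-lexicalization must satisfy. The slot names, values, and the "no Thai food in the north" fact are all invented here for illustration.

```python
# Conversational domain: admissible values for each dialogue slot.
DOMAIN = {"area": {"north", "centre"}, "food": {"thai", "italian"}}

# Domain-based constraint (invented fact): no Thai restaurant in the north.
CONSTRAINTS = [lambda a: not (a["area"] == "north" and a["food"] == "thai")]

def consistent(assignment):
    """Check a re-lexicalized dialogue's slot values against domain and constraints."""
    if any(v not in DOMAIN[k] for k, v in assignment.items()):
        return False   # value not in the conversational domain
    return all(c(assignment) for c in CONSTRAINTS)

assert consistent({"area": "centre", "food": "thai"})
assert not consistent({"area": "north", "food": "thai"})
```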
Continuous-Time Simultaneous Localization And Mapping (CTSLAM) has become a promising approach for fusing asynchronous and multi-modal sensor suites. Unlike discrete-time SLAM, which estimates poses discretely, CTSLAM uses continuous-time motion parametrizations, facilitating the integration of a variety of sensors such as rolling-shutter cameras, event cameras and Inertial Measurement Units (IMUs). However, CTSLAM approaches remain computationally demanding and are conventionally posed as centralized Non-Linear Least Squares (NLLS) optimizations. Targeting these limitations, we not only present the fastest SymForce-based [Martiros et al., RSS 2022] B- and Z-Spline implementations achieving speedups between 2.43x and 110.31x over Sommer et al. [CVPR 2020] but also implement a novel continuous-time Gaussian Belief Propagation (GBP) framework, coined Hyperion, which targets decentralized probabilistic inference across agents. We demonstrate the efficacy of our method in motion tracking and localization settings, complemented by empirical ablation studies.
{"title":"Hyperion - A fast, versatile symbolic Gaussian Belief Propagation framework for Continuous-Time SLAM","authors":"David Hug, Ignacio Alzugaray, Margarita Chli","doi":"arxiv-2407.07074","DOIUrl":"https://doi.org/arxiv-2407.07074","url":null,"abstract":"Continuous-Time Simultaneous Localization And Mapping (CTSLAM) has become a\u0000promising approach for fusing asynchronous and multi-modal sensor suites.\u0000Unlike discrete-time SLAM, which estimates poses discretely, CTSLAM uses\u0000continuous-time motion parametrizations, facilitating the integration of a\u0000variety of sensors such as rolling-shutter cameras, event cameras and Inertial\u0000Measurement Units (IMUs). However, CTSLAM approaches remain computationally\u0000demanding and are conventionally posed as centralized Non-Linear Least Squares\u0000(NLLS) optimizations. Targeting these limitations, we not only present the\u0000fastest SymForce-based [Martiros et al., RSS 2022] B- and Z-Spline\u0000implementations achieving speedups between 2.43x and 110.31x over Sommer et al.\u0000[CVPR 2020] but also implement a novel continuous-time Gaussian Belief\u0000Propagation (GBP) framework, coined Hyperion, which targets decentralized\u0000probabilistic inference across agents. We demonstrate the efficacy of our\u0000method in motion tracking and localization settings, complemented by empirical\u0000ablation studies.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141576694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous robots, autonomous vehicles, and humans wearing mixed-reality headsets require accurate and reliable tracking services for safety-critical applications in dynamically changing real-world environments. However, the existing tracking approaches, such as Simultaneous Localization and Mapping (SLAM), do not adapt well to environmental changes and boundary conditions despite extensive manual tuning. On the other hand, while deep learning-based approaches can better adapt to environmental changes, they typically demand substantial data for training and often lack flexibility in adapting to new domains. To solve this problem, we propose leveraging the neurosymbolic program synthesis approach to construct adaptable SLAM pipelines that integrate the domain knowledge from traditional SLAM approaches while leveraging data to learn complex relationships. While the approach can synthesize end-to-end SLAM pipelines, we focus on synthesizing the feature extraction module. We first devise a domain-specific language (DSL) that can encapsulate domain knowledge on the important attributes for feature extraction and the real-world performance of various feature extractors. Our neurosymbolic architecture then undertakes adaptive feature extraction, optimizing parameters via learning while employing symbolic reasoning to select the most suitable feature extractor. Our evaluations demonstrate that our approach, neurosymbolic Feature EXtraction (nFEX), yields higher-quality features. It also reduces the pose error observed for the state-of-the-art baseline feature extractors ORB and SIFT by up to 90% and up to 66%, respectively, thereby enhancing the system's efficiency and adaptability to novel environments.
{"title":"A Neurosymbolic Approach to Adaptive Feature Extraction in SLAM","authors":"Yasra Chandio, Momin A. Khan, Khotso Selialia, Luis Garcia, Joseph DeGol, Fatima M. Anwar","doi":"arxiv-2407.06889","DOIUrl":"https://doi.org/arxiv-2407.06889","url":null,"abstract":"Autonomous robots, autonomous vehicles, and humans wearing mixed-reality\u0000headsets require accurate and reliable tracking services for safety-critical\u0000applications in dynamically changing real-world environments. However, the\u0000existing tracking approaches, such as Simultaneous Localization and Mapping\u0000(SLAM), do not adapt well to environmental changes and boundary conditions\u0000despite extensive manual tuning. On the other hand, while deep learning-based\u0000approaches can better adapt to environmental changes, they typically demand\u0000substantial data for training and often lack flexibility in adapting to new\u0000domains. To solve this problem, we propose leveraging the neurosymbolic program\u0000synthesis approach to construct adaptable SLAM pipelines that integrate the\u0000domain knowledge from traditional SLAM approaches while leveraging data to\u0000learn complex relationships. While the approach can synthesize end-to-end SLAM\u0000pipelines, we focus on synthesizing the feature extraction module. We first\u0000devise a domain-specific language (DSL) that can encapsulate domain knowledge\u0000on the important attributes for feature extraction and the real-world\u0000performance of various feature extractors. Our neurosymbolic architecture then\u0000undertakes adaptive feature extraction, optimizing parameters via learning\u0000while employing symbolic reasoning to select the most suitable feature\u0000extractor. Our evaluations demonstrate that our approach, neurosymbolic Feature\u0000EXtraction (nFEX), yields higher-quality features. 
It also reduces the pose\u0000error observed for the state-of-the-art baseline feature extractors ORB and\u0000SIFT by up to 90% and up to 66%, respectively, thereby enhancing the system's\u0000efficiency and adaptability to novel environments.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141576696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
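The symbolic side of such an architecture can be sketched as rule-based dispatch over scene attributes: pick a feature extractor when its applicability condition holds. The attribute names, thresholds, and rule order below are invented for illustration and are not the paper's DSL.

```python
# Ordered rules: (condition over symbolic scene attributes, extractor name).
RULES = [
    (lambda s: s["brightness"] < 0.2, "ORB"),   # low light: fast binary features
    (lambda s: s["texture"] > 0.7,   "SIFT"),   # rich texture: gradient features
]

def select_extractor(scene, default="ORB"):
    """Return the extractor named by the first rule whose condition fires."""
    for condition, extractor in RULES:
        if condition(scene):
            return extractor
    return default

assert select_extractor({"brightness": 0.1, "texture": 0.5}) == "ORB"
assert select_extractor({"brightness": 0.6, "texture": 0.9}) == "SIFT"
```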