Class-imbalanced node classification tasks are prevalent in real-world scenarios. Due to the uneven distribution of nodes across different classes, learning high-quality node representations remains a challenging endeavor. The engineering of loss functions has shown promising potential in addressing this issue. It involves the meticulous design of loss functions, utilizing information about the quantities of nodes in different categories and the network's topology to learn unbiased node representations. However, the design of these loss functions heavily relies on human expert knowledge and exhibits limited adaptability to specific target tasks. In this paper, we introduce a high-performance, flexible, and generalizable automated loss function search framework to tackle this challenge. Across 15 combinations of graph neural networks and datasets, our framework achieves a significant improvement in performance compared to state-of-the-art methods. Additionally, we observe that homophily in graph-structured data significantly contributes to the transferability of the proposed framework.
"Automated Loss function Search for Class-imbalanced Node Classification", by Xinyu Guo, Kai Wu, Xiaoyu Zhang, Jing Liu. arXiv:2405.14133 (arXiv - CS - Symbolic Computation), published 2024-05-23.
In the study of Hilbert schemes, the integer partition $\lambda$ helps researchers identify some geometric and combinatorial properties of the scheme in question. To aid researchers in extracting such information from a Hilbert polynomial, we describe an efficient algorithm which can identify if $p(x)\in\mathbb{Q}[x]$ is a Hilbert polynomial and if so, recover the integer partition $\lambda$ associated with it.
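The abstract does not spell out the algorithm, but a standard characterization (assumed here) is that every Hilbert polynomial can be written as $p(x)=\sum_{i=1}^{r}\binom{x+\lambda_i-i}{\lambda_i-1}$ for a unique partition $\lambda_1\geq\dots\geq\lambda_r\geq 1$. Under that assumption, a greedy recovery sketch in Python (function names are ours, not the paper's): the largest part is $\deg p + 1$, and peeling off its binomial term leaves the shifted polynomial of the remaining parts.

```python
from math import comb

def hilbert_poly(partition):
    """p(x) = sum_i C(x + l_i - i, l_i - 1), evaluated pointwise at
    large-enough integers x (assumed closed form for a partition
    l_1 >= ... >= l_r >= 1)."""
    return lambda x: sum(comb(x + l - i, l - 1)
                         for i, l in enumerate(partition, start=1))

def _degree(vals):
    """Degree of the polynomial through consecutive integer samples,
    via finite differences; -1 for the zero polynomial."""
    d = -1
    while any(vals):
        vals = [b - a for a, b in zip(vals, vals[1:])]
        d += 1
    return d

def recover_partition(p, probe=100, samples=40):
    """Greedy recovery: the largest part is deg(p) + 1; subtracting its
    binomial term and shifting by one yields the polynomial of the
    partition with that part removed."""
    parts = []
    while True:
        d = _degree([p(x) for x in range(probe, probe + samples)])
        if d < 0:                      # remainder is identically zero
            return parts
        l = d + 1
        parts.append(l)
        # q(x) = p(x+1) - C(x + l, l - 1) is the polynomial of the rest
        p = (lambda f, l: lambda x: f(x + 1) - comb(x + l, l - 1))(p, l)
```

For example, the polynomial of the partition (3, 2, 2, 1) evaluates to 31 at x = 5, and the greedy loop recovers the partition exactly.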
"The Recovery of $\lambda$ from a Hilbert Polynomial", by Joseph Donato, Monica Lewis. arXiv:2405.12886 (arXiv - CS - Symbolic Computation), published 2024-05-21.
Arjun Pitchanathan, Albert Cohen, Oleksandr Zinenko, Tobias Grosser
A wide range of symbolic analysis and optimization problems can be formalized using polyhedra. Sub-classes of polyhedra, also known as sub-polyhedral domains, are sought for their lower space and time complexity. We introduce the Strided Difference Bound Matrix (SDBM) domain, which represents a sweet spot in the context of optimizing compilers. Its expressiveness and efficient algorithms are particularly well suited to the construction of machine learning compilers. We present decision algorithms, abstract domain operators and computational complexity proofs for SDBM. We also conduct an empirical study with the MLIR compiler framework to validate the domain's practical applicability. We characterize a sub-class of SDBMs that frequently occurs in practice, and demonstrate even faster algorithms on this sub-class.
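For background, a plain (unstrided) difference-bound matrix records constraints of the form $x_j - x_i \leq m_{ij}$, and its closure and feasibility reduce to all-pairs shortest paths. A minimal sketch of that baseline domain (names are ours, not MLIR's); the SDBM domain of the paper additionally attaches stride/congruence information, which this sketch omits:

```python
INF = float("inf")

def dbm_close(m):
    """Floyd-Warshall tightening of a difference-bound matrix, where
    m[i][j] is an upper bound on x_j - x_i (INF = unconstrained).
    Returns the closed matrix, or None if the constraints are
    infeasible (a negative cycle exists)."""
    n = len(m)
    m = [row[:] for row in m]           # work on a copy
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if m[i][k] + m[k][j] < m[i][j]:
                    m[i][j] = m[i][k] + m[k][j]
    if any(m[i][i] < 0 for i in range(n)):
        return None
    return m
```

For instance, {x1 - x0 <= 3, x0 - x1 <= -1} closes without change, while {x1 - x0 <= 1, x0 - x1 <= -2} produces a negative cycle and is reported infeasible.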
"Strided Difference Bound Matrices", by Arjun Pitchanathan, Albert Cohen, Oleksandr Zinenko, Tobias Grosser. arXiv:2405.11244 (arXiv - CS - Symbolic Computation), published 2024-05-18.
Lun Ai, Stephen H. Muggleton, Shi-Shun Liang, Geoff S. Baldwin
Recent attention to relational knowledge bases has sparked a demand for understanding how relations change between entities. Petri nets can represent knowledge structure and dynamically simulate interactions between entities, and thus they are well suited for achieving this goal. However, logic programs struggle to deal with extensive Petri nets due to the limitations of high-level symbol manipulations. To address this challenge, we introduce a novel approach called Boolean Matrix Logic Programming (BMLP), utilising boolean matrices as an alternative computation mechanism for Prolog to evaluate logic programs. Within this framework, we propose two novel BMLP algorithms for simulating a class of Petri nets known as elementary nets. This is done by transforming elementary nets into logically equivalent datalog programs. We demonstrate empirically that BMLP algorithms can evaluate these programs 40 times faster than tabled B-Prolog, SWI-Prolog, XSB-Prolog and Clingo. Our work enables the efficient simulation of elementary nets using Prolog, expanding the scope of analysis, learning and verification of complex systems with logic programming techniques.
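As a simplified illustration of the idea (not the paper's actual BMLP algorithms), a linear-recursive datalog program such as `path(X,Y) :- edge(X,Y). path(X,Y) :- edge(X,Z), path(Z,Y).` can be evaluated as a boolean-matrix fixpoint instead of by symbolic resolution:

```python
def bmatmul(a, b):
    """Boolean matrix product: (a.b)[i][j] = OR_k (a[i][k] AND b[k][j])."""
    n = len(a)
    return [[any(a[i][k] and b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bmor(a, b):
    """Elementwise boolean OR of two matrices."""
    return [[x or y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def path_relation(edge):
    """Least fixpoint of path = edge OR (edge . path), i.e. the
    transitive closure of the edge relation."""
    path = [row[:] for row in edge]
    while True:
        new = bmor(edge, bmatmul(edge, path))
        if new == path:
            return path
        path = new
```

On the chain 0 -> 1 -> 2, the fixpoint adds the derived fact path(0, 2).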
"Simulating Petri nets with Boolean Matrix Logic Programming", by Lun Ai, Stephen H. Muggleton, Shi-Shun Liang, Geoff S. Baldwin. arXiv:2405.11412 (arXiv - CS - Symbolic Computation), published 2024-05-18.
The classical theory of Kosambi-Cartan-Chern (KCC) developed in differential geometry provides a powerful method for analyzing the behaviors of dynamical systems. In the KCC theory, the properties of a dynamical system are described in terms of five geometrical invariants, of which the second corresponds to the so-called Jacobi stability of the system. Unlike Lyapunov stability, which has been studied extensively in the literature, Jacobi stability has been investigated only more recently, using geometrical concepts and tools. The existing work on Jacobi stability analysis remains theoretical, however, and the problem of its algorithmic and symbolic treatment has yet to be addressed. In this paper, we initiate a study of this problem for a class of ODE systems of arbitrary dimension and propose two algorithmic schemes using symbolic computation to check whether a nonlinear dynamical system may exhibit Jacobi stability. The first scheme, based on the construction of the complex root structure of a characteristic polynomial and on the method of quantifier elimination, detects the existence of Jacobi stability for a given dynamical system. The second scheme exploits the method of semi-algebraic system solving and allows one to determine conditions on the parameters under which a given dynamical system has a prescribed number of Jacobi-stable fixed points. Several examples are presented to demonstrate the effectiveness of the proposed algorithmic schemes.
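For intuition about the first scheme: deciding whether all roots of a characteristic polynomial lie in the open left half-plane can be done symbolically from the coefficients alone, with no root extraction. A minimal stand-in for the cubic case via the Routh-Hurwitz conditions (the paper's actual criterion for Jacobi stability, and its quantifier-elimination machinery, are considerably more general):

```python
def hurwitz_stable_cubic(a2, a1, a0):
    """All roots of s^3 + a2*s^2 + a1*s + a0 have negative real parts
    iff a2 > 0, a0 > 0 and a2*a1 > a0 (Routh-Hurwitz criterion)."""
    return a2 > 0 and a0 > 0 and a2 * a1 > a0
```

For example, (s+1)^3 = s^3 + 3s^2 + 3s + 1 passes the test, while s^3 + s^2 + s + 2 fails the condition a2*a1 > a0 and so has a root with nonnegative real part.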
"Jacobi Stability Analysis for Systems of ODEs Using Symbolic Computation", by Bo Huang, Dongming Wang, Jing Yang. arXiv:2405.10578 (arXiv - CS - Symbolic Computation), published 2024-05-17.
Interactive theorem provers, like Isabelle/HOL, Coq and Lean, have expressive languages that allow the formalization of general mathematical objects and proofs. In this context, an important goal is to reduce the time and effort needed to prove theorems. A significant means of achieving this is by improving proof automation. We have implemented an early prototype of proof automation for equational reasoning in Lean by using equality saturation. To achieve this, we need to bridge the gap between Lean's expression semantics and the syntactically driven e-graphs in equality saturation. This involves handling bound variables, implicit typing, as well as Lean's definitional equality, which is more general than syntactic equality and involves notions like $\alpha$-equivalence, $\beta$-reduction, and $\eta$-reduction. In this extended abstract, we highlight how we attempt to bridge this gap, and which challenges remain to be solved. Notably, while our techniques are partially unsound, the resulting proof automation remains sound by virtue of Lean's proof checking.
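One of the bound-variable issues mentioned above is $\alpha$-equivalence: a syntactically driven e-graph sees $\lambda x.\,x$ and $\lambda y.\,y$ as different terms unless binders are normalized first. A common fix is de Bruijn indexing, sketched here on an illustrative mini-syntax (tuples, not Lean's actual expression type):

```python
def to_debruijn(term, env=()):
    """Replace bound-variable names with de Bruijn indices.
    term is ('var', name) | ('lam', name, body) | ('app', f, a);
    env holds binder names, innermost first."""
    tag = term[0]
    if tag == 'var':
        name = term[1]
        # index of the nearest enclosing binder, or a free-variable marker
        return ('var', env.index(name)) if name in env else ('free', name)
    if tag == 'lam':
        return ('lam', to_debruijn(term[2], (term[1],) + env))
    return ('app', to_debruijn(term[1], env), to_debruijn(term[2], env))

def alpha_equal(s, t):
    """Alpha-equivalence = syntactic equality after de Bruijn conversion."""
    return to_debruijn(s) == to_debruijn(t)
```

After conversion, alpha-equivalent terms become syntactically identical and can share an e-class, while terms differing in binding structure stay distinct.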
"Bridging Syntax and Semantics of Lean Expressions in E-Graphs", by Marcus Rossel, Andrés Goens. arXiv:2405.10188 (arXiv - CS - Symbolic Computation), published 2024-05-16.
Deep learning has achieved remarkable success in recent years. Central to its success is its ability to learn representations that preserve task-relevant structure. However, massive energy, compute, and data costs are required to learn general representations. This paper explores Hyperdimensional Computing (HDC), a computationally and data-efficient brain-inspired alternative. HDC acts as a bridge between connectionist and symbolic approaches to artificial intelligence (AI), allowing explicit specification of representational structure as in symbolic approaches while retaining the flexibility of connectionist approaches. However, HDC's simplicity poses challenges for encoding complex compositional structures, especially in its binding operation. To address this, we propose Generalized Holographic Reduced Representations (GHRR), an extension of Fourier Holographic Reduced Representations (FHRR), a specific HDC implementation. GHRR introduces a flexible, non-commutative binding operation, enabling improved encoding of complex data structures while preserving HDC's desirable properties of robustness and transparency. In this work, we introduce the GHRR framework, prove its theoretical properties and its adherence to HDC properties, explore its kernel and binding characteristics, and perform empirical experiments showcasing its flexible non-commutativity, enhanced decoding accuracy for compositional structures, and improved memorization capacity compared to FHRR.
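For reference, the FHRR baseline that GHRR extends represents items as vectors of unit-magnitude complex phasors and binds by elementwise multiplication, which is commutative; that commutativity is exactly the property GHRR relaxes. A stdlib-only sketch (dimension and function names are illustrative, not from the paper):

```python
import math
import random

def rand_phasor_vec(d, rng):
    """FHRR hypervector: d i.i.d. unit-magnitude complex phasors."""
    return [complex(math.cos(t), math.sin(t))
            for t in (rng.uniform(0.0, 2.0 * math.pi) for _ in range(d))]

def bind(a, b):
    """FHRR binding: elementwise complex product (commutative)."""
    return [x * y for x, y in zip(a, b)]

def unbind(c, b):
    """Approximate inverse of bind: multiply by the conjugate."""
    return [x * y.conjugate() for x, y in zip(c, b)]

def sim(a, b):
    """Normalized similarity: ~1 for matching vectors, ~0 for random pairs."""
    return sum((x * y.conjugate()).real for x, y in zip(a, b)) / len(a)
```

Unbinding a bound pair recovers the original almost exactly, while unrelated random vectors are nearly orthogonal; note also that `bind(a, b) == bind(b, a)`, the limitation GHRR's non-commutative binding addresses.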
"Generalized Holographic Reduced Representations", by Calvin Yeung, Zhuowen Zou, Mohsen Imani. arXiv:2405.09689 (arXiv - CS - Symbolic Computation), published 2024-05-15.
Hongzhi You, Yijun Cao, Wei Yuan, Fanjun Wang, Ning Qiao, Yongjie Li
From the perspective of feature matching, optical flow estimation for event cameras involves identifying event correspondences by comparing feature similarity across accompanying event frames. In this work, we introduce an effective and robust high-dimensional (HD) feature descriptor for event frames, utilizing Vector Symbolic Architectures (VSA). The topological similarity among neighboring variables within VSA contributes to the enhanced representation similarity of feature descriptors for flow-matching points, while its structured symbolic representation capacity facilitates feature fusion from both event polarities and multiple spatial scales. Based on this HD feature descriptor, we propose a novel feature matching framework for event-based optical flow, encompassing both model-based (VSA-Flow) and self-supervised learning (VSA-SM) methods. In VSA-Flow, accurate optical flow estimation validates the effectiveness of HD feature descriptors. In VSA-SM, a novel similarity maximization method based on the HD feature descriptor is proposed to learn optical flow in a self-supervised way from events alone, eliminating the need for auxiliary grayscale images. Evaluation results demonstrate that our VSA-based method achieves superior accuracy compared to both model-based and self-supervised learning methods on the DSEC benchmark, while remaining competitive with both on the MVSEC benchmark. This contribution marks a significant advancement in event-based optical flow within the feature matching methodology.
"Vector-Symbolic Architecture for Event-Based Optical Flow", by Hongzhi You, Yijun Cao, Wei Yuan, Fanjun Wang, Ning Qiao, Yongjie Li. arXiv:2405.08300 (arXiv - CS - Symbolic Computation), published 2024-05-14.
We present a novel approach to predicting the pressure and flow rate of flexible electrohydrodynamic (EHD) pumps using the Kolmogorov-Arnold Network (KAN). Inspired by the Kolmogorov-Arnold representation theorem, KAN replaces fixed activation functions with learnable spline-based activation functions, enabling it to approximate complex nonlinear functions more effectively than traditional models like the Multi-Layer Perceptron (MLP) and Random Forest (RF). We evaluated KAN on a dataset of flexible EHD pump parameters and compared its performance against RF and MLP models. KAN achieved superior predictive accuracy, with Mean Squared Errors of 12.186 and 0.001 for pressure and flow rate predictions, respectively. The symbolic formulas extracted from KAN provided insights into the nonlinear relationships between input parameters and pump performance. These findings demonstrate that KAN offers exceptional accuracy and interpretability, making it a promising alternative for predictive modeling in electrohydrodynamic pumping.
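For context, the core KAN ingredient is that each network edge carries a learnable univariate spline rather than a fixed activation. A stdlib-only sketch of a first-order (piecewise-linear) spline edge on a fixed knot grid; real KAN implementations use cubic B-spline bases plus a base activation, and all names here are ours:

```python
def spline_edge(knots, coeffs):
    """Univariate piecewise-linear 'activation' phi with phi(knots[i]) =
    coeffs[i]. knots must be strictly increasing; coeffs play the role
    of the learnable per-edge parameters, clamped outside the grid."""
    def phi(x):
        if x <= knots[0]:
            return coeffs[0]
        if x >= knots[-1]:
            return coeffs[-1]
        for k0, k1, c0, c1 in zip(knots, knots[1:], coeffs, coeffs[1:]):
            if k0 <= x <= k1:
                t = (x - k0) / (k1 - k0)       # linear interpolation
                return (1 - t) * c0 + t * c1
    return phi
```

With coefficients set to samples of x^2, the edge reproduces those values at the knots and interpolates linearly between them; training would adjust `coeffs` by gradient descent.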
"Predictive Modeling of Flexible EHD Pumps using Kolmogorov-Arnold Networks", by Yanhong Peng, Miao He, Fangchao Hu, Zebing Mao, Xia Huang, Jun Ding. arXiv:2405.07488 (arXiv - CS - Symbolic Computation), published 2024-05-13.
Mohammed Aristide Foughali, Marius Mikučionis, Maryline Zhang
Real-time systems (RTSs) are at the heart of numerous safety-critical applications. An RTS typically consists of a set of real-time tasks (the software) that execute on a multicore shared-memory platform (the hardware) following a scheduling policy. In an RTS, computing inter-core bounds, i.e., bounds separating events produced by tasks on different cores, is crucial. While efficient techniques to over-approximate such bounds exist, little has been proposed to compute their exact values. Given an RTS with a set of cores C and a set of tasks T, under partitioned fixed-priority scheduling with limited preemption, a recent work by Foughali, Hladik and Zuepke (FHZ) models tasks with affinity c (i.e., allocated to core c in C) as a Uppaal timed automata (TA) network Nc. For each core c in C, Nc integrates blocking (due to data sharing) using tight analytical formulae. Through compositional model checking, FHZ achieved a substantial gain in scalability for bounds local to a core. However, computing inter-core bounds for some events of interest E, produced by a subset of tasks TE with different affinities CE, requires model checking the parallel composition of all TA networks Nc for each c in CE, which produces a large, often intractable, state space. In this paper, we present a new scalable approach based on exact abstractions to compute exact inter-core bounds in a schedulable RTS, under the assumption that tasks in TE have distinct affinities. We develop a novel algorithm, leveraging a new query that we implement in Uppaal, that computes for each TA network Nc in NE an abstraction A(Nc) preserving the exact intervals within which events occur on c, therefore drastically reducing the state space. The scalability of our approach is demonstrated on the WATERS 2017 industrial challenge, for which we efficiently compute various types of inter-core bounds where FHZ fails to scale.
"Scalable Computation of Inter-Core Bounds Through Exact Abstractions", by Mohammed Aristide Foughali, Marius Mikučionis, Maryline Zhang. arXiv:2405.06387 (arXiv - CS - Symbolic Computation), published 2024-05-10.