Pub Date : 2024-03-24DOI: 10.1016/j.ipl.2024.106494
Feifei Yan , Pinhui Ke , Zuling Chang
Recently, a class of quaternary sequences with period pq, where p and q are two distinct odd primes, introduced by Zhang et al., was proved to possess high linear complexity and 4-adic complexity. In this paper, we determine the autocorrelation distribution of this class of quaternary sequences. Our results indicate that the studied quaternary sequences are weak with respect to the correlation property.
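For concreteness, the periodic autocorrelation of a quaternary sequence s of period N at shift τ is AC(τ) = Σ_t i^(s(t+τ)−s(t)), where i is a primitive fourth root of unity. A minimal sketch of this standard definition follows; the toy sequence of period 15 = 3·5 is purely illustrative and is not the construction of Zhang et al.:

```python
def autocorrelation(s, tau):
    """AC(tau) = sum_t omega^(s[t+tau] - s[t]) with omega = i, a fourth root of unity."""
    n = len(s)
    omega = 1j
    return sum(omega ** ((s[(t + tau) % n] - s[t]) % 4) for t in range(n))

# Toy quaternary sequence of period 15 (= 3 * 5, two distinct odd primes).
seq = [(t * t) % 4 for t in range(15)]

# Distribution of out-of-phase autocorrelation values.
dist = {}
for tau in range(1, 15):
    v = autocorrelation(seq, tau)
    v = complex(round(v.real, 9), round(v.imag, 9))  # clean up float noise
    dist[v] = dist.get(v, 0) + 1
print(dist)
```

The in-phase value AC(0) always equals the period N; the distribution of the out-of-phase values is what the paper determines for the specific sequence family.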
{"title":"The autocorrelation of a class of quaternary sequences of length pq with high complexity","authors":"Feifei Yan , Pinhui Ke , Zuling Chang","doi":"10.1016/j.ipl.2024.106494","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106494","url":null,"abstract":"<div><p>Recently, a class of quaternary sequences with period <em>pq</em>, where <em>p</em> and <em>q</em> are two distinct odd primes introduced by Zhang et al. were proved to possess high linear complexity and 4-adic complexity. In this paper, we determine the autocorrelation distribution of this class of quaternary sequence. Our results indicate that the studied quaternary sequence are weak with respect to the correlation property.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106494"},"PeriodicalIF":0.5,"publicationDate":"2024-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140321256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-03-16DOI: 10.1016/j.ipl.2024.106492
Hao Wu , Qizhe Yang , Huan Long
The qCCS model proposed by Feng et al. provides a powerful framework to describe and reason about quantum communication systems that could be entangled with the environment. However, they only studied weak bisimulation semantics. In this paper we propose a new branching bisimilarity for qCCS and show that it is a congruence. The new bisimilarity is based on the concept of ϵ-tree and preserves the branching structure of concurrent processes where both quantum and classical components are allowed. Furthermore, we present a polynomial time equivalence checking algorithm for the ground version of our branching bisimilarity.
{"title":"Branching bisimulation semantics for quantum processes","authors":"Hao Wu , Qizhe Yang , Huan Long","doi":"10.1016/j.ipl.2024.106492","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106492","url":null,"abstract":"<div><p>The qCCS model proposed by Feng et al. provides a powerful framework to describe and reason about quantum communication systems that could be entangled with the environment. However, they only studied weak bisimulation semantics. In this paper we propose a new branching bisimilarity for qCCS and show that it is a congruence. The new bisimilarity is based on the concept of <em>ϵ</em>-tree and preserves the branching structure of concurrent processes where both quantum and classical components are allowed. Furthermore, we present a polynomial time equivalence checking algorithm for the ground version of our branching bisimilarity.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106492"},"PeriodicalIF":0.5,"publicationDate":"2024-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140163830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-03-15DOI: 10.1016/j.ipl.2024.106493
Dekel Tsur
In this paper we consider two vertex deletion problems. In the Block Vertex Deletion problem, the input is a graph G and an integer k, and the goal is to decide whether there is a set of at most k vertices whose removal from G results in a block graph (a graph in which every biconnected component is a clique). In the Pathwidth One Vertex Deletion problem, the input is a graph G and an integer k, and the goal is to decide whether there is a set of at most k vertices whose removal from G results in a graph with pathwidth at most one. We give a kernel for Block Vertex Deletion with O(k³) vertices and a kernel for Pathwidth One Vertex Deletion with O(k²) vertices. Our results improve the previous O(k⁴)-vertex kernel for Block Vertex Deletion (Agrawal et al., 2016 [1]) and the O(k³)-vertex kernel for Pathwidth One Vertex Deletion (Cygan et al., 2012 [3]).
{"title":"Smaller kernels for two vertex deletion problems","authors":"Dekel Tsur","doi":"10.1016/j.ipl.2024.106493","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106493","url":null,"abstract":"<div><p>In this paper we consider two vertex deletion problems. In the <span>Block Vertex Deletion</span> problem, the input is a graph <em>G</em> and an integer <em>k</em>, and the goal is to decide whether there is a set of at most <em>k</em> vertices whose removal from <em>G</em> result in a block graph (a graph in which every biconnected component is a clique). In the <span>Pathwidth One Vertex Deletion</span> problem, the input is a graph <em>G</em> and an integer <em>k</em>, and the goal is to decide whether there is a set of at most <em>k</em> vertices whose removal from <em>G</em> result in a graph with pathwidth at most one. We give a kernel for <span>Block Vertex Deletion</span> with <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>k</mi></mrow><mrow><mn>3</mn></mrow></msup><mo>)</mo></math></span> vertices and a kernel for <span>Pathwidth One Vertex Deletion</span> with <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>k</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>)</mo></math></span> vertices. 
Our results improve the previous <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>k</mi></mrow><mrow><mn>4</mn></mrow></msup><mo>)</mo></math></span>-vertex kernel for <span>Block Vertex Deletion</span> (Agrawal et al., 2016 <span>[1]</span>) and the <span><math><mi>O</mi><mo>(</mo><msup><mrow><mi>k</mi></mrow><mrow><mn>3</mn></mrow></msup><mo>)</mo></math></span>-vertex kernel for <span>Pathwidth One Vertex Deletion</span> (Cygan et al., 2012 <span>[3]</span>).</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106493"},"PeriodicalIF":0.5,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140160714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-03-13DOI: 10.1016/j.ipl.2024.106491
Ashwin Jacob, Michał Włodarczyk, Meirav Zehavi
In the Longest (s,t)-Detour problem, we look for an (s,t)-path that is at least k vertices longer than a shortest one. We study the parameterized complexity of Longest (s,t)-Detour when parameterized by k: this falls into the research paradigm of ‘parameterization above guarantee’. Whereas the problem is known to be fixed-parameter tractable (FPT) on undirected graphs, the status of Longest (s,t)-Detour on directed graphs remains highly unclear: it is not even known to be solvable in polynomial time for k = 1. Recently, Fomin et al. made progress in this direction by showing that the problem is FPT on every class of directed graphs where the 3-Disjoint Paths problem is solvable in polynomial time. We improve upon their result by weakening this assumption: we show that only a polynomial-time algorithm for 2-Disjoint Paths is required.
{"title":"Long directed detours: Reduction to 2-Disjoint Paths","authors":"Ashwin Jacob, Michał Włodarczyk, Meirav Zehavi","doi":"10.1016/j.ipl.2024.106491","DOIUrl":"10.1016/j.ipl.2024.106491","url":null,"abstract":"<div><p>In the <span>Longest</span> <span><math><mo>(</mo><mi>s</mi><mo>,</mo><mi>t</mi><mo>)</mo></math></span><span>-Detour</span> problem, we look for an <span><math><mo>(</mo><mi>s</mi><mo>,</mo><mi>t</mi><mo>)</mo></math></span>-path that is at least <em>k</em> vertices longer than a shortest one. We study the parameterized complexity of <span>Longest</span> <span><math><mo>(</mo><mi>s</mi><mo>,</mo><mi>t</mi><mo>)</mo></math></span><span>-Detour</span> when parameterized by <em>k</em>: this falls into the research paradigm of ‘parameterization above guarantee’. Whereas the problem is known to be fixed-parameter tractable (FPT) on undirected graphs, the status of <span>Longest</span> <span><math><mo>(</mo><mi>s</mi><mo>,</mo><mi>t</mi><mo>)</mo></math></span><span>-Detour</span> on directed graphs remains highly unclear: it is not even known to be solvable in polynomial time for <span><math><mi>k</mi><mo>=</mo><mn>1</mn></math></span>. Recently, Fomin et al. made progress in this direction by showing that the problem is FPT on every class of directed graphs where the <span>3-Disjoint Paths</span> problem is solvable in polynomial time. 
We improve upon their result by weakening this assumption: we show that only a polynomial-time algorithm for <span>2-Disjoint Paths</span> is required.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106491"},"PeriodicalIF":0.5,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140153962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-02-29DOI: 10.1016/j.ipl.2024.106490
Bhisham Dev Verma , Rameshwar Pratap , Punit Pankaj Dubey
The seminal work of Charikar et al. [1], called Count-Sketch, suggests a sketching algorithm for real-valued vectors that has been used in frequency estimation for data streams, pairwise inner product estimation for real-valued vectors, etc. One of the major advantages of Count-Sketch over other similar sketching algorithms, such as random projection, is that its running time, as well as the sparsity of the sketch, depends on the sparsity of the input. Therefore, sparse datasets enjoy space-efficient (sparse) sketches and faster running time. However, on dense datasets, these advantages of Count-Sketch might be negligible compared to other baselines. In this work, we address this challenge by suggesting a simple and effective approach that outputs (asymptotically) a sparser sketch than that obtained via Count-Sketch, and as a by-product, we also achieve a faster running time. Simultaneously, the quality of our estimate closely approximates that of Count-Sketch. For the frequency estimation and pairwise inner product estimation problems, our proposal Sparse-Count-Sketch provides unbiased estimates. These estimates, however, have slightly higher variances than the respective estimates obtained via Count-Sketch. To address this issue, we present improved estimators for these problems based on maximum likelihood estimation (MLE) that offer smaller variances even w.r.t. Count-Sketch. We provide a rigorous theoretical analysis of our proposal for frequency estimation for data streams and pairwise inner product estimation for real-valued vectors.
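For background, a minimal version of the underlying Count-Sketch of Charikar et al. (not the paper's Sparse-Count-Sketch variant) can be sketched as follows; deriving the bucket and sign from SHA-256 is an illustrative choice, not the hashing scheme used in the paper:

```python
import hashlib
import statistics

class CountSketch:
    """Minimal Count-Sketch: d rows of w counters; each item hashes to one
    bucket per row with a pseudo-random +/-1 sign, and an item's frequency
    is estimated as the median of its signed counters across rows."""

    def __init__(self, d=5, w=256):
        self.d, self.w = d, w
        self.table = [[0] * w for _ in range(d)]

    def _bucket_sign(self, row, x):
        # Illustrative hash: bucket index and sign derived from SHA-256.
        digest = hashlib.sha256(f"{row}:{x}".encode()).digest()
        bucket = int.from_bytes(digest[:4], "big") % self.w
        sign = 1 if digest[4] & 1 else -1
        return bucket, sign

    def update(self, x, count=1):
        for row in range(self.d):
            bucket, sign = self._bucket_sign(row, x)
            self.table[row][bucket] += sign * count

    def estimate(self, x):
        signed = []
        for row in range(self.d):
            bucket, sign = self._bucket_sign(row, x)
            signed.append(sign * self.table[row][bucket])
        return statistics.median(signed)

cs = CountSketch()
cs.update("heavy", 100)
cs.update("light", 1)
print(cs.estimate("heavy"))  # close to 100; exact unless "light" collides in most rows
```

The median over rows makes the estimate robust to the few rows where unrelated items collide; the paper's proposal additionally aims to make the resulting table asymptotically sparser on dense inputs.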
{"title":"Sparsifying Count Sketch","authors":"Bhisham Dev Verma , Rameshwar Pratap , Punit Pankaj Dubey","doi":"10.1016/j.ipl.2024.106490","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106490","url":null,"abstract":"<div><p>The seminal work of Charikar et al. <span>[1]</span> called <span>Count-Sketch</span> suggests a sketching algorithm for real-valued vectors that has been used in frequency estimation for data streams and pairwise inner product estimation for real-valued vectors etc. One of the major advantages of <span>Count-Sketch</span> over other similar sketching algorithms, such as random projection, is that its running time, as well as the sparsity of sketch, depends on the sparsity of the input. Therefore, sparse datasets enjoy space-efficient (sparse sketches) and faster running time. However, on dense datasets, these advantages of <span>Count-Sketch</span> might be negligible over other baselines. In this work, we address this challenge by suggesting a simple and effective approach that outputs (asymptotically) a sparser sketch than that obtained via <span>Count-Sketch</span>, and as a by-product, we also achieve a faster running time. Simultaneously, the quality of our estimate is closely approximate to that of <span>Count-Sketch</span>. For frequency estimation and pairwise inner product estimation problems, our proposal <span>Sparse-Count-Sketch</span> provides unbiased estimates. These estimations, however, have slightly higher variances than their respective estimates obtained via <span>Count-Sketch</span>. To address this issue, we present improved estimators for these problems based on maximum likelihood estimation (MLE) that offer smaller variances even <em>w.r.t.</em> <span>Count-Sketch</span>. 
We suggest a rigorous theoretical analysis of our proposal for frequency estimation for data streams and pairwise inner product estimation for real-valued vectors.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106490"},"PeriodicalIF":0.5,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140042110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-02-29DOI: 10.1016/j.ipl.2024.106485
V.P. Abidha , Pradeesha Ashok
Given a universe U = R ∪ B consisting of a finite set of red elements R and a finite set of blue elements B, and a family F of subsets of U, the Red Blue Set Cover problem is to find a subset F′ of F that covers all blue elements of B and a minimum number of red elements from R.
We prove that the Red Blue Set Cover problem is NP-hard even when R and B respectively are sets of red and blue points in ℝ² and the sets in F are defined by axis-parallel lines, i.e., every set is a maximal set of points with the same x or y coordinate.
We then study the parameterized complexity of a generalization of this problem, where U is a set of points in ℝᵈ and F is a collection of sets of axis-parallel hyperplanes in ℝᵈ, under different parameterizations, where d is a constant. For every parameter, we show that the problem is fixed-parameter tractable and also show the existence of a polynomial kernel. We further consider the Red Blue Set Cover problem for some special types of rectangles in ℝ².
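To make the problem statement concrete, here is a brute-force solver over all subfamilies (exponential time and purely illustrative; the paper's contribution is the hardness and kernelization results, not this algorithm):

```python
from itertools import combinations

def red_blue_set_cover(red, blue, family):
    """Brute-force Red Blue Set Cover: pick a subfamily covering every blue
    element while covering as few red elements as possible; returns that
    minimum number of covered red elements, or None if B cannot be covered."""
    best = None
    for r in range(len(family) + 1):
        for sub in combinations(family, r):
            covered = set().union(*sub) if sub else set()
            if blue <= covered:  # all blue elements covered
                cost = len(covered & red)
                if best is None or cost < best:
                    best = cost
    return best

red = {1, 2, 3}
blue = {"a", "b"}
family = [{"a", 1, 2}, {"b", 3}, {"a", "b", 1}]
print(red_blue_set_cover(red, blue, family))  # 1 (choose the set {"a", "b", 1})
```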
{"title":"Red Blue Set Cover problem on axis-parallel hyperplanes and other objects","authors":"V.P. Abidha , Pradeesha Ashok","doi":"10.1016/j.ipl.2024.106485","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106485","url":null,"abstract":"<div><p>Given a universe <span><math><mi>U</mi><mo>=</mo><mi>R</mi><mo>∪</mo><mi>B</mi></math></span> of a finite set of red elements <em>R</em>, and a finite set of blue elements <em>B</em> and a family <span><math><mi>F</mi></math></span> of subsets of <span><math><mi>U</mi></math></span>, the <span>Red Blue Set Cover</span> problem is to find a subset <span><math><msup><mrow><mi>F</mi></mrow><mrow><mo>′</mo></mrow></msup></math></span> of <span><math><mi>F</mi></math></span> that covers all blue elements of <em>B</em> and minimum number of red elements from <em>R</em>.</p><p>We prove that the <span>Red Blue Set Cover</span> problem is NP-hard even when <em>R</em> and <em>B</em> respectively are sets of red and blue points in <span><math><msup><mrow><mi>IR</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span> and the sets in <span><math><mi>F</mi></math></span> are defined by axis−parallel lines i.e., every set is a maximal set of points with the same <em>x</em> or <em>y</em> coordinate.</p><p>We then study the parameterized complexity of a generalization of this problem, where <span><math><mi>U</mi></math></span> is a set of points in <span><math><msup><mrow><mi>IR</mi></mrow><mrow><mi>d</mi></mrow></msup></math></span> and <span><math><mi>F</mi></math></span> is a collection of set of axis−parallel hyperplanes in <span><math><msup><mrow><mi>IR</mi></mrow><mrow><mi>d</mi></mrow></msup></math></span> under different parameterizations, where <em>d</em> is a constant. For every parameter, we show that the problem is fixed-parameter tractable and also show the existence of a polynomial kernel. 
We further consider the <span>Red Blue Set Cover</span> problem for some special types of rectangles in <span><math><msup><mrow><mi>IR</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span>.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106485"},"PeriodicalIF":0.5,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140042109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-02-28DOI: 10.1016/j.ipl.2024.106484
Haibo Hong, Shi Bai, Fenghao Liu
With the development of Lie theory, Lie groups have acquired profound significance in many branches of mathematics and physics. In Lie theory, the matrix exponential plays a crucial role in connecting Lie groups and Lie algebras. Meanwhile, as finite analogues of Lie groups, finite groups of Lie type also have wide application scenarios in mathematics and physics owing to their unique mathematical structures. In this context, it is meaningful to explore the potential applications of finite groups of Lie type in cryptography. In this paper, we first establish the relationship between the matrix exponential and the discrete logarithm problem (DLP) in finite groups of Lie type. Afterwards, we prove that the complexity of solving the non-abelian factorization (NAF) problem is polynomial in the rank n of the finite group of Lie type. Furthermore, combining this with the Algebraic Span technique, we propose an efficient algorithm for solving the group factorization problem (GFP) in finite groups of Lie type. Therefore, it is still an open problem to devise secure cryptosystems based on Lie theory.
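To illustrate the discrete logarithm problem in a matrix-group setting, here is a toy brute-force DLP in GL(2, F_p); this is neither the paper's finite groups of Lie type nor an efficient algorithm, just the problem shape:

```python
def mat_mul(a, b, p):
    """2x2 matrix product over the field F_p."""
    return [[(a[i][0] * b[0][j] + a[i][1] * b[1][j]) % p for j in range(2)]
            for i in range(2)]

def mat_pow(a, e, p):
    """Fast exponentiation of a 2x2 matrix over F_p."""
    result = [[1, 0], [0, 1]]
    while e:
        if e & 1:
            result = mat_mul(result, a, p)
        a = mat_mul(a, a, p)
        e >>= 1
    return result

def matrix_dlp(g, h, p, bound):
    """Brute-force DLP: smallest k < bound with g^k = h, else None."""
    x = [[1, 0], [0, 1]]
    for k in range(bound):
        if x == h:
            return k
        x = mat_mul(x, g, p)
    return None

p = 7
g = [[1, 1], [1, 0]]             # Fibonacci matrix; its order mod 7 is 16
h = mat_pow(g, 11, p)
print(matrix_dlp(g, h, p, 100))  # 11
```

Exponentiation (the analogue of the matrix exponential in the finite setting) is cheap via square-and-multiply, while recovering the exponent is the hard direction that cryptosystems rely on.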
{"title":"The group factorization problem in finite groups of Lie type","authors":"Haibo Hong, Shi Bai, Fenghao Liu","doi":"10.1016/j.ipl.2024.106484","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106484","url":null,"abstract":"<div><p>With the development of Lie theory, Lie groups have profound significance in many branches of mathematics and physics. In Lie theory, matrix exponential plays a crucial role between Lie groups and Lie algebras. Meanwhile, as finite analogues of Lie groups, finite groups of Lie type also have wide application scenarios in mathematics and physics owning to their unique mathematical structures. In this context, it is meaningful to explore the potential applications of finite groups of Lie type in cryptography. In this paper, we firstly built the relationship between matrix exponential and discrete logarithmic problem (DLP) in finite groups of Lie type. Afterwards, we proved that the complexity of solving non-abelian factorization (NAF) problem is polynomial with the rank <em>n</em> of the finite group of Lie type. Furthermore, combining with the Algebraic Span, we proposed an efficient algorithm for solving group factorization problem (GFP) in finite groups of Lie type. Therefore, it's still an open problem to devise secure cryptosystems based on Lie theory.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106484"},"PeriodicalIF":0.5,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140014410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-02-28DOI: 10.1016/j.ipl.2024.106489
Sam Buss , Emre Yolcu
Regular resolution is a refinement of the resolution proof system requiring that no variable be resolved on more than once along any path in the proof. It is known that there exist sequences of formulas that require exponential-size proofs in regular resolution while admitting polynomial-size proofs in resolution. Thus, with respect to the usual notion of simulation, regular resolution is separated from resolution. An alternative, and weaker, notion for comparing proof systems is that of an “effective simulation,” which allows the translation of the formula along with the proof when moving between proof systems. We prove that regular resolution is equivalent to resolution under effective simulations. As a corollary, we recover in a black-box fashion a recent result on the hardness of automating regular resolution.
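For reference, the resolution rule itself can be sketched as follows (regularity is the extra condition that no variable is used as the pivot more than once along any path of the proof):

```python
def resolve(c1, c2, x):
    """Resolution rule: from clauses c1 containing x and c2 containing -x,
    derive (c1 \\ {x}) | (c2 \\ {-x}). Literals are ints; -x negates x."""
    assert x in c1 and -x in c2
    return (c1 - {x}) | (c2 - {-x})

# Deriving the empty clause from {x}, {-x or y}, {-y}:
c = resolve(frozenset({1}), frozenset({-1, 2}), 1)   # yields {y}
empty = resolve(c, frozenset({-2}), 2)               # yields the empty clause
print(empty == frozenset())  # True
```

This two-step refutation is trivially regular: each variable (x, then y) is resolved on exactly once along the path.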
{"title":"Regular resolution effectively simulates resolution","authors":"Sam Buss , Emre Yolcu","doi":"10.1016/j.ipl.2024.106489","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106489","url":null,"abstract":"<div><p>Regular resolution is a refinement of the resolution proof system requiring that no variable be resolved on more than once along any path in the proof. It is known that there exist sequences of formulas that require exponential-size proofs in regular resolution while admitting polynomial-size proofs in resolution. Thus, with respect to the usual notion of simulation, regular resolution is separated from resolution. An alternative, and weaker, notion for comparing proof systems is that of an “effective simulation,” which allows the translation of the formula along with the proof when moving between proof systems. We prove that regular resolution is equivalent to resolution under effective simulations. As a corollary, we recover in a black-box fashion a recent result on the hardness of automating regular resolution.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106489"},"PeriodicalIF":0.5,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S002001902400019X/pdfft?md5=1f40e48e2aad478df5d57137e39d2869&pid=1-s2.0-S002001902400019X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140030944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-02-28DOI: 10.1016/j.ipl.2024.106487
Adam Polak , Maksym Zub
We propose a framework for speeding up maximum flow computation by using predictions. A prediction is a flow, i.e., an assignment of non-negative flow values to edges, which satisfies the flow conservation property but does not necessarily respect the edge capacities of the actual instance (since these were unknown at the time of learning). We present an algorithm that, given an m-edge flow network and a predicted flow, computes a maximum flow in O(mη) time, where η is the ℓ₁ error of the prediction, i.e., the sum over the edges of the absolute difference between the predicted and optimal flow values. Moreover, we prove that, given oracle access to a distribution over flow networks, it is possible to efficiently PAC-learn a prediction minimizing the expected ℓ₁ error over that distribution. Our results fit into the recent line of research on learning-augmented algorithms, which aims to improve over worst-case bounds of classical algorithms by using predictions, e.g., machine-learned from previous similar instances. So far, the main focus in this area was on improving competitive ratios for online problems. Following Dinitz et al. (2021) [6], our results are among the first to improve the running time of an offline problem.
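A toy sketch of the warm-start idea: start augmenting from the predicted flow instead of the zero flow, so that roughly η units of flow remain to be routed. This sketch assumes, unlike the paper, that the prediction is already feasible for the instance (the repair of capacity violations is omitted), and `warm_start_max_flow` is an illustrative name:

```python
from collections import deque

def warm_start_max_flow(nodes, cap, pred, s, t):
    """Augmenting-path max flow started from a predicted flow that is
    assumed feasible (capacity- and conservation-respecting)."""
    flow = dict(pred)

    def residual(u, v):
        return cap.get((u, v), 0) - flow.get((u, v), 0) + flow.get((v, u), 0)

    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in nodes:
                if v not in parent and residual(u, v) > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        # Collect the path, find its bottleneck, and augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(residual(u, v) for u, v in path)
        for u, v in path:
            back = min(delta, flow.get((v, u), 0))  # first cancel reverse flow
            flow[(v, u)] = flow.get((v, u), 0) - back
            flow[(u, v)] = flow.get((u, v), 0) + delta - back
    return (sum(flow.get((s, v), 0) for v in nodes)
            - sum(flow.get((v, s), 0) for v in nodes))

# Toy instance: s -> a -> t with capacities 2, plus a direct edge s -> t.
nodes = ["s", "a", "t"]
cap = {("s", "a"): 2, ("a", "t"): 2, ("s", "t"): 1}
pred = {("s", "a"): 1, ("a", "t"): 1}  # feasible prediction of value 1
print(warm_start_max_flow(nodes, cap, pred, "s", "t"))  # 3
```

With unit-capacity augmentations, the number of augmenting iterations is bounded by the gap between the predicted and optimal flow values, which is at most the ℓ₁ error η; this is the intuition behind the O(mη) bound, not its proof.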
{"title":"Learning-augmented maximum flow","authors":"Adam Polak , Maksym Zub","doi":"10.1016/j.ipl.2024.106487","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106487","url":null,"abstract":"<div><p>We propose a framework for speeding up maximum flow computation by using predictions. A prediction is a flow, i.e., an assignment of non-negative flow values to edges, which satisfies the flow conservation property, but does not necessarily respect the edge capacities of the actual instance (since these were unknown at the time of learning). We present an algorithm that, given an <em>m</em>-edge flow network and a predicted flow, computes a maximum flow in <span><math><mi>O</mi><mo>(</mo><mi>m</mi><mi>η</mi><mo>)</mo></math></span> time, where <em>η</em> is the <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> error of the prediction, i.e., the sum over the edges of the absolute difference between the predicted and optimal flow values. Moreover, we prove that, given an oracle access to a distribution over flow networks, it is possible to efficiently PAC-learn a prediction minimizing the expected <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> error over that distribution. Our results fit into the recent line of research on learning-augmented algorithms, which aims to improve over worst-case bounds of classical algorithms by using predictions, e.g., machine-learned from previous similar instances. So far, the main focus in this area was on improving competitive ratios for online problems. Following Dinitz et al. 
(2021) <span>[6]</span>, our results are among the firsts to improve the running time of an offline problem.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106487"},"PeriodicalIF":0.5,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140030945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-02-28DOI: 10.1016/j.ipl.2024.106488
Besik Dundua , Ioane Kapanadze , Helmut Seidl
We show that every prenex universal syntactic first-order safety property can be compiled into a universal invariant of a first-order transition system using quantifier-free substitutions only. We apply this insight to prove that every such safety property is decidable for first-order transition systems with stratified guarded updates only.
{"title":"Prenex universal first-order safety properties","authors":"Besik Dundua , Ioane Kapanadze , Helmut Seidl","doi":"10.1016/j.ipl.2024.106488","DOIUrl":"https://doi.org/10.1016/j.ipl.2024.106488","url":null,"abstract":"<div><p>We show that every prenex universal syntactic first-order safety property can be compiled into a universal invariant of a first-order transition system using quantifier-free substitutions only. We apply this insight to prove that every such safety property is decidable for first-order transition systems with stratified guarded updates only.</p></div>","PeriodicalId":56290,"journal":{"name":"Information Processing Letters","volume":"186 ","pages":"Article 106488"},"PeriodicalIF":0.5,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0020019024000188/pdfft?md5=4b718d782f26b6bc7eb47445f9e59272&pid=1-s2.0-S0020019024000188-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140024064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}